Which M4 chip and model?

In the light of recent news, you might now be wondering whether you can afford to wait until next year in the hope that Apple then releases the M4 Mac of your dreams. To help guide you in your decision-making, this article explains what chip options are available in this month’s new M4 models, and how to choose between them.

CPU core types

Intel CPUs in modern Macs have several cores, all of them identical. Whether your Mac is running a background task like indexing for Spotlight, or running code for a time-critical user task, code is run across any of the available cores. In an Apple silicon chip like those in the M4 family, background tasks are normally constrained to efficiency (E) cores, leaving the performance (P) cores for your apps and other pressing user tasks. This brings significant energy economy for background tasks, and keeps your Mac more responsive to your demands.

Some tasks are normally constrained to run only on E cores. These include scheduled background tasks like Spotlight indexing, Time Machine backups, and some encoding of media. Game Mode is perhaps a more surprising E core user, as explained below.

Most user tasks are run preferentially on P cores, when they’re available. When there are more high-priority threads to be run than there are available P cores, macOS will normally send the excess to be run on E cores instead. The same applies to a Virtual Machine (VM) using lightweight virtualisation: its threads are preferentially scheduled on P cores when they’re available, even when the code being run inside the VM would normally be allocated to E cores.

macOS also controls the clock speed or frequency of cores. For background tasks running on E cores, their frequency is normally held relatively low, for best energy efficiency. When high-priority threads overspill onto E cores, they’re normally run at higher frequency, which is less energy-efficient but brings their performance closer to that of a P core. macOS goes to great lengths to schedule threads and control core frequencies to strike the best balance between energy efficiency and performance.

Unfortunately, it’s normally hard to see the effects of frequency in apps like Activity Monitor. Its CPU % figures only show the percentage of cycles used for processing, and make no allowance for core frequency. It therefore shows a background thread running at low frequency as 100%, the same as a thread overspilt from the P cores and running at that E core’s maximum frequency. So when you see Spotlight indexing apparently taking 200% of CPU % on your Mac’s E cores, that might represent only a small fraction of their maximum capacity had they been running at maximum frequency.
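
As a rough illustration of that gap, assume for the sake of example that background work runs the E cores at about 1 GHz while their maximum is around 2.6 GHz (both figures are assumptions for this example, not Apple’s specifications). The arithmetic then looks like this:

# Rough illustration: Activity Monitor's CPU % takes no account of core frequency.
# All frequencies below are assumed values for the example, not measurements.
background_freq_ghz = 1.0   # E cores running low-priority work at reduced frequency
max_freq_ghz = 2.6          # assumed maximum E-core frequency
cpu_percent = 200           # what Activity Monitor reports (two E cores at 100%)
e_core_count = 4            # E cores in the cluster

# Work done, expressed as E cores running flat out at maximum frequency
effective_cores = (cpu_percent / 100) * (background_freq_ghz / max_freq_ghz)
fraction_of_cluster = effective_cores / e_core_count

print(f"about {effective_cores:.1f} core-equivalents, "
      f"or {fraction_of_cluster:.0%} of the E cluster's maximum capacity")
# -> about 0.8 core-equivalents, or 19% of the cluster's capacity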

There are no differences between chips in the M4 family when it comes to each type of CPU core: each P core in a Base variant is the same as each in an M4 Pro or Max, with the same maximum frequency, and the same applies to E cores. macOS also allocates threads to different types of core using the same rules, and their frequencies are controlled the same as well. What differs between them is the number of each type of core, ranging from 4 P and 4 E in the 8-core variant of the Base M4, up to 12 P and 4 E in the 16-core variant of the M4 Max. Thus, their single-core benchmark results should be almost identical, although their multi-core results should vary according to the number of cores.

Game Mode

This mode is an exception to normal CPU and GPU core use, as it:

  • gives preferential access to the E cores,
  • gives highest priority access to the GPU,
  • uses low-latency Bluetooth modes for input controllers and audio output.

However, my previous testing didn’t demonstrate that apps running in Game Mode are given exclusive access to the E cores. Even so, for gamers it now appears that the more E cores, the better.

GPU cores

These are also used for tasks other than graphics, such as some of the more demanding calculations required for Machine Learning and AI. However, experience so far with Writing Tools in Sequoia 15.1 is that macOS currently offloads their heavy lifting to run off-device on Apple’s dedicated servers. Although having plenty of GPU cores may well prove valuable for non-graphics purposes in the future, for now there seems little advantage for many users.

Thunderbolt 5

M4 Pro and Max variants, but not the Base, come equipped with Thunderbolt ports that support not only Thunderbolt 3 and 4 but also Thunderbolt 5, as well as USB4. Thunderbolt 5 should effectively double the speed of connected SSDs, but to see that benefit you’ll need to buy a TB5 SSD. Not only are they more expensive than TB3/4 models, but at present I know of only one range that’s due to ship this year. There will also be other peripherals with TB5 support, including at least one dock and one hub, although neither is available yet. The only TB5 accessories already available are cables, and even they are expensive.

TB5 also brings increased video bandwidth and support for DisplayPort 2.1, although even the M4 Max can’t make full use of that. If you’re looking to drive a combination of high-res displays, consult Apple’s Tech Specs carefully, as they’re complicated.

Although TB5 will become increasingly important over the next few years, TB3/4 and USB4 are far from dead yet and are supported by all M4 models.

Which M4 chip?

The table below summarises key figures for each of the variants in the M4 family that have now been released. It’s likely that next year Apple will release an Ultra, consisting of two M4 Max chips joined in tandem, in case you feel the burning desire for 24 P and 8 E cores.

[Table: key figures for each M4 chip variant, with the models using them marked at the right]

Models available next week featuring each M4 chip are shown with green rectangles at the right.

There are two variants of the Base M4: one with 4P + 4E and 8 GPU cores, the same as the Base variants in the M1 to M3 families. The more capable variant has 4P + 6E, the first time a Base chip has had six E cores, which promises to make it a better all-rounder, particularly in Game Mode. It also has an extra couple of GPU cores.

The M4 Pro also comes in two variants, this time differing in the number of P cores, 8 or 10, and GPU cores, 16 or 20. Those overlap with the M4 Max, with 10 or 12 P cores and 32 or 40 GPU cores. Thus the gap between M4 Pro and Max isn’t as great as in the M3, with the GPUs in the M4 Max being aimed more at those working with high-res video, for instance. For more general use, there’s little difference between the 14-core Pro and Max.

Memory and storage

Chips in the M4 family also determine the maximum memory and internal SSD capacity. Apple has at last eliminated base models with only 8 GB of memory, and all now start with at least 16 GB. Base M4 chips are limited to a maximum of 32 GB, while the M4 Pro can go up to 64 GB, and the 16-core Max up to 128 GB, although in its 14-core variant, the Max is only available with 36 GB (I’m very grateful to Thomas for pointing this out below).

Unfortunately, Apple hasn’t increased the minimum size of internal SSD, which remains at 256 GB for some base models. Smaller SSDs may be cheaper, but they are also likely to have shorter lives, as under heavy use their small number of blocks will be erased for reuse more frequently. That may shorten their life expectancy to much less than the normal period of up to 10 years, as was seen in some of the first M1 models. This is more likely to occur when swap space is regularly used for virtual memory. I for one would have preferred 512 GB as a starting point.

While Base M4 chips come with SSDs up to 2 TB in size, both Pro and Max can be supplied with internal SSDs of up to 8 TB.

I hope this proves useful in guiding your decision.

Disk Images: Performance

Over the last few years, the performance of disk images has been keenly debated. In some cases, writing to a disk image proceeds at a snail’s pace, but this has appeared unpredictable. Over two years ago, I reported sometimes dismal write performance to disk images, summarised in the table below.

[Table: earlier disk image write performance results, from 2022]

This article presents new results from tests performed using macOS 15.0.1 Sequoia, which should give a clearer picture of what performance to expect now.

Methods

Previous work highlighted discrepancies depending on how tests were performed, whether on freshly made disk images, or on those that had already been mounted and written to. The following protocol was used throughout these new tests:

  1. A 100 GB APFS disk image was created using Disk Utility, which automatically mounts the disk image on completion.
  2. A single folder was created on the mounted disk image, then it was unmounted.
  3. After a few seconds, the disk image was mounted again by double-clicking it in the Finder, and was left mounted for at least 10 seconds before performing any tests. That should ensure read-write disk images are converted into sparse file format, and allows time for Trimming.
  4. My utility Stibium then wrote 160 test files ranging in size from 2 MB to 2 GB in randomised order, a total of just over 53 GB, to the test folder.
  5. Stibium then read those files back, in the same randomised order.
  6. The disk image was then unmounted, its size checked, and it was trashed.

All tests were performed on a Mac Studio M1 Max, using its 2 TB internal SSD, and an external Samsung 980 Pro 2 TB SSD in an OWC Express 1M2 enclosure running in USB4 mode.
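
Disk Utility was used for these tests, but as a minimal sketch of steps 1–3 of the protocol, the same sequence can be scripted by calling hdiutil; the image path, volume name and sleep times here are illustrative assumptions:

import subprocess, time
from pathlib import Path

image = Path.home() / "Desktop" / "test.dmg"   # illustrative location
volume = Path("/Volumes/Test")                 # mount point for the volume named below

# 1. Create a 100 GB APFS read-write disk image (UDRW)
subprocess.run(["hdiutil", "create", "-size", "100g", "-fs", "APFS",
                "-volname", "Test", "-format", "UDRW", str(image)], check=True)

# 2. Mount it, create a single folder, then unmount it
subprocess.run(["hdiutil", "attach", str(image)], check=True)
(volume / "TestFolder").mkdir()
subprocess.run(["hdiutil", "detach", str(volume)], check=True)

# 3. After a few seconds, mount it again and leave it for at least 10 seconds,
#    allowing conversion to sparse file format and Trimming before any tests
time.sleep(5)
subprocess.run(["hdiutil", "attach", str(image)], check=True)
time.sleep(10)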

Results

These are summarised in the table below.

[Table: read and write speeds for each disk image type, on internal and external SSDs, encrypted and unencrypted]

Read speeds for sparse bundles and read-write disk images were high, whether the container was encrypted or not. On the internal SSD, encryption resulted in read speeds about 1 GB/s lower than those for unencrypted tests, but differences for the external SSD were small and not consistent.

Write speeds were only high for sparse bundles, where encryption had similar effects. Read-write disk images showed write speeds of consistently about 1 GB/s, whether on the internal or external SSD, and regardless of encryption.

When unencrypted, read and write speeds for sparse (disk) images were also slower: although their write speed to the internal SSD was faster than that of read-write disk images, read speed was around 2.2 GB/s for both. Results for encrypted sparse images were by far the worst of all the tests performed, ranging between 0.08 and 0.5 GB/s.

Surprisingly good results were obtained from a new-style virtual machine with FileVault enabled in its disk image. Although previous tests had found read and write speeds of 4.4 and 0.7 GB/s respectively, the Sequoia VM achieved 5.9 and 4.5 GB/s.

Which disk image?

On grounds of performance only, the fastest and most consistent type of disk image is a sparse bundle (UDSB). Although on fast internal SSDs there is a reduction in read and write speeds of about 1 GB/s when encrypted using 256-bit AES, no such reduction should be seen on fast external SSDs.

On read speed alone, a read-write disk image is slightly faster than a sparse bundle, but its write speed is limited to 1 GB/s. For disk images that are more frequently read from than written to, read-write disk images should be almost as performant as sparse bundles.

However, sparse (disk) images delivered the weakest performance, and were particularly slow when encrypted. Compared with previous results from 2022, unencrypted write performance has improved from 0.9 to 2.0 GB/s, but their use still can’t be recommended.

Performance range

It’s hard to explain how three different types of disk image can differ so widely in their performance. Using the same container encryption, write speeds ranged from 0.08 to 3.2 GB/s, for a sparse image and sparse bundle, on an external SSD with a native write speed of 3.2 GB/s. It’s almost as if sparse images are being deprecated by neglect.

Currently, excellent performance is also delivered by the FileVault images used by Apple’s lightweight virtualisation on Apple silicon. The contrast between their 4.5 GB/s write speed and the 0.1 GB/s of an encrypted sparse image is stark: a factor of 45 when running on identical hardware.

Recommendations

  • For general use, sparse bundles (UDSB) are to be preferred for their consistently good read and write performance, whether encrypted or unencrypted.
  • When good write speed is less important, read-write disk images (UDRW) can be used, although their write performance is comparable to that of USB 3.2 Gen 2 at 10 Gb/s and no faster.
  • Sparse (disk) images (UDSP) are to be avoided, particularly when encrypted, as they’re likely to give disappointing performance.
  • Encrypting UDSB or UDRW disk images adds little if any performance overhead, and should be used whenever needed.

Previous articles

Introduction
Tools
How read-write disk images have gone sparse

What performance should you get from different types of storage?

External storage is invariably sold with ‘up-to’ performance figures. In practice, you’ll seldom realise anything like the claimed write or read speeds. And when it comes to prolonged tasks like that first full Time Machine backup, no matter how fast you thought that drive would be, it always takes longer than expected.

Over the last few years I have tested and reviewed many examples of different types of external storage, from basic USB 3 hard drives, to the latest USB4 SSD enclosures, and NAS packed with fast SSDs. This article draws on all those test results to give you a better idea of what to expect when they’re being used with your Mac.

Results quoted here are typical of those tests, which were performed mostly using a Mac Studio M1 Max, but unless otherwise indicated they should be similar for recent Intel models. They’re summarised in this table.

[Table: typical write and read speeds for each type of storage]

Write speeds are given for:

  • the single 50 MB write test performed by Time Machine before each backup;
  • 500 multiple concurrent writes of 4 KB each, performed in those same Time Machine tests;
  • calculated net write speed over a first full backup to APFS of at least 400 GB;
  • general write speed measurement using my app Stibium, which gives broadly similar results to other leading benchmarking apps.

General read speeds are also obtained using Stibium, and are similar to those from other apps. All speeds are given in MB/s for consistency.
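
As a minimal illustration of what a general write-speed measurement does (this isn’t Stibium’s method; the single 1 GB file and the target path are assumptions for the example):

import os, time

def write_speed(path: str, size_mb: int = 1024) -> float:
    """Write size_mb of data to path and return the speed in MB/s."""
    chunk = os.urandom(1024 * 1024)         # 1 MB of incompressible data
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())                # ensure the data reaches the device
    return size_mb / (time.perf_counter() - start)

# Example: test a volume mounted at /Volumes/External (illustrative path)
# print(f"{write_speed('/Volumes/External/test.bin'):.0f} MB/s")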

Before looking at individual types of storage, one obvious and important result is the effect of throttling by macOS on Time Machine backup performance. Considering Time Machine’s own tests, writing a single 50 MB file is performed consistently at around 200-225 MB/s to local storage of whatever type, and multiple concurrent writes of 4 KB files reach around 20-23 MB/s regardless of local storage type. Those hold good even when you back up to a fast Thunderbolt 3 SSD, and backing up to a NAS is little quicker unless it’s over 2.5GbE to an NVMe SSD. Local transfer speeds only differ more substantially in general tests, when they aren’t throttled as they are in Time Machine.

Hard disks

When writing to or reading from a local hard disk, performance varies substantially according to which sectors on the hard disk are being accessed. This is a well-known phenomenon, and the result of geometry, as sectors are faster at the periphery of the disk’s platter, and slower in the inner part. Ranges given here take that into account: the lower figure is for inner sectors, and the higher for outer ones. Some users compensate for this effect, and only ever use the outer half of a disk’s sectors to obtain better performance, but that reduces their available capacity, and effectively doubles their cost per TB.

SSDs

SATA SSDs may be cheapest, but they’re also slowest, and with Macs they generally don’t enjoy Trim or SMART health indicator support. Of the two, Trim support is usually the more important, as without that, they can accumulate blocks waiting to be erased and returned for further use, and as a result their write (but not read) speed can fall as low as 100 MB/s. Unless used for largely static storage, this is a significant risk.

NVMe SSDs deliver twice the performance of SATA models, and generally enjoy Trim but not SMART indicator support. This makes them far better suited to general use, as their write speeds should be sustained from new throughout their working life.

USB 3.2 Gen 2, Thunderbolt 3, USB4

Translating commonly quoted transfer speeds for these three protocols into real-world speeds turns out to be complex. In practice, these are what you can expect to see:

  • USB 3.2 Gen 2 at 10 Gb/s is slightly less than 1 GB/s
  • Thunderbolt 3 at 32 Gb/s is up to 3 GB/s
  • USB4 at 40 Gb/s is up to 3.4 GB/s.
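
Most of the gap between those nominal line rates and real-world figures is protocol and encoding overhead. A naive conversion shows how much is lost (the efficiency figures are simply implied by the speeds above, not specifications):

# Naive conversion from nominal line rate to GB/s, then the rough efficiency
# implied by the real-world figures quoted above.
links = {
    "USB 3.2 Gen 2": (10, 1.0),   # (nominal Gb/s, observed GB/s)
    "Thunderbolt 3": (32, 3.0),
    "USB4":          (40, 3.4),
}

for name, (gbps, observed) in links.items():
    raw = gbps / 8                # Gb/s -> GB/s, ignoring all overhead
    print(f"{name}: {raw:.2f} GB/s raw, ~{observed} GB/s in practice "
          f"({observed / raw:.0%} of line rate)")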

All recent models of Mac, both Intel and Apple silicon, should realise full performance over USB 3.2 Gen 2 and Thunderbolt 3, but support for USB4 is limited to Apple silicon. Unless a USB4 drive or enclosure specifically includes Thunderbolt 3 as a fallback, when connected to an Intel Mac you should expect it to fall back to USB 3.2 Gen 2 at just under 1 GB/s, less than a third of the speed of USB4.

NAS

Although I haven’t made any systematic comparison between AFP and SMB network protocols, I can see no consistent difference in their performance, when used with the latest versions of macOS and NAS software. The latter, though, can be critical: older versions of NAS software can perform poorly when used over SMB with recent macOS. Keeping your NAS software up to date is important.

Throttling of Time Machine backup writing isn’t supposed to occur when backing up over a network, and there is some evidence here to support that, with significantly better results for 50 MB test files. However, those are only apparent when using NVMe SSDs in the NAS, with a wired Ethernet 2.5GbE connection to provide sufficient bandwidth.

Check TM performance

Provided that your Mac is running a recent version of macOS and backing up to APFS, it’s simple to read the two write performance tests that occur at the start of each Time Machine backup using my free T2M2. Alternatively, you can also read them using the Time Machine custom log extract in Mints. In T2M2 they should look something like:
Destination IO performance measured:
Wrote 1 50 MB file at 238.02 MB/s to "/Volumes/ThunderBay2" in 0.210 seconds
Concurrently wrote 500 4 KB files at 35.58 MB/s to "/Volumes/ThunderBay2" in 0.058 seconds
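
If you’d rather pull those lines straight from the unified log yourself, something along these lines should work; the subsystem predicate and one-hour window are assumptions, and T2M2 and Mints do this filtering for you:

import subprocess

# Show recent Time Machine entries from the unified log, then keep the
# destination IO performance measurements quoted above.
result = subprocess.run(
    ["log", "show", "--last", "1h", "--info",
     "--predicate", 'subsystem == "com.apple.TimeMachine"'],
    capture_output=True, text=True, check=True)

for line in result.stdout.splitlines():
    if "IO performance" in line or "wrote" in line.lower():
        print(line)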

Check general performance

Although there are other apps that will do this, I developed Stibium for this purpose. Follow the ‘gold standard’ procedure detailed in its Help Reference to obtain the most accurate and reproducible results. Stibium can test any storage you can access in the Finder, including all local devices and networked systems such as NAS.

Further reading

Which external drives have Trim and SMART support?
How to evaluate an external SSD
You can read my reviews in MacFormat and MacLife magazines, available in the App Store.

A Personal Guide to Backups

Why back up

I imagine we’ve all had a device freeze, get soaked, get stolen, or even be held to ransom by hackers at some point. Statistically, losing data is only a matter of time. That’s why you back up; otherwise the memories of your youth and all those good times may be gone for good, and that hurts, perhaps even more than a break-up.

‘Only when the same data exists on different devices does it count as a backup; merely moving it onto another device to store it does not.’[1]

I’ve talked with some very capable people online and found that even they don’t do real backups. They dump everything onto a single hard drive, throw data into various cloud drives, and even programmers just keep their code on GitHub. None of these is a backup. Leaving the Chinese cloud drives aside, services such as iCloud and Google Drive are essentially sync services, and both have suffered outages in the past. Don’t put all your eggs in one basket; a wily rabbit keeps three burrows.

The point of a backup is to be able to restore your data intact; if it can’t be restored, no amount of stored data means anything.

Types of backup

  • Full Backup (including the system): backs up a large amount of data and takes a long time.
  • Incremental Backup: backs up only the data changed since the most recent backup of any kind.
  • Differential Backup: backs up the files changed since the last full backup.

The first, a full backup, is what lets you migrate everything identically from one computer to another, for example. Some backups don’t include system settings, so after restoring the data you need to set up app preferences, font sizes, system language and so on all over again. The advantage of preparing a system backup is that it comes with a bootable system, such as WinPE on Windows or a bootable installer on macOS.

The difference between differential and incremental backups is that a differential contains everything changed since the last full backup, while an incremental contains only what has changed since the most recent backup, whether that was a full, differential or incremental one. It’s a little confusing…
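
A small sketch may make the distinction clearer (the files, dates and logic are an illustration of the general definitions, not of any particular backup tool):

from datetime import datetime

# Hypothetical modification times for a handful of files
files = {
    "photo.jpg":  datetime(2024, 1, 1),
    "notes.txt":  datetime(2024, 1, 5),
    "thesis.doc": datetime(2024, 1, 9),
}

last_full_backup = datetime(2024, 1, 3)   # most recent full backup
last_any_backup = datetime(2024, 1, 7)    # most recent backup of any kind

# Differential: everything changed since the last FULL backup
differential = [f for f, mtime in files.items() if mtime > last_full_backup]

# Incremental: only what changed since the most recent backup of ANY kind
incremental = [f for f, mtime in files.items() if mtime > last_any_backup]

print("differential:", differential)   # ['notes.txt', 'thesis.doc']
print("incremental:", incremental)     # ['thesis.doc']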

The 3-2-1-1-0 rule

  • Keep at least three copies of your data (the original plus two copies)
  • Store them on at least two different types of storage media
  • Keep one backup copy off-site

On top of the 3-2-1 rule, add two more points[2]:

  • Keep one copy of the media offline or air-gapped
  • Make sure every recovery solution works with zero errors (as noted above, that’s the whole point of a backup)

In this age of highly developed technology, the safest way to store data is arguably still magnetic tape, buried in a corner somewhere (like the film GitHub buried in the Arctic[3]).

Cloud backup

Of course we don’t need to go to that extreme; a reliable cloud backup will do, and it also counts as an off-site backup. As mentioned above, the usual cloud drives don’t count as backups, so choose a cloud storage service designed specifically for backup. Which backup services are reliable? There aren’t many to choose from.

The widely acknowledged choice is Backblaze Personal Backup: $7 a month, unlimited backup capacity, operating since 2007, and stable. It’s the first choice for the lazy: leave the app running in the background and it backs up unobtrusively, and there’s a mobile app for backing up your phone as well.

For users in mainland China, the network blocks mean it consumes proxy traffic, so watch your data usage on the first backup. Stability of access from within China also has to be considered, and for now there’s no solution to that.

There’s also a piece of software that strictly speaking doesn’t provide cloud storage itself: Arq, operating since 2009, $49.99 for a single-device perpetual licence, with Premium at $59.99/year (which includes 1 TB of cloud space; not recommended).

It’s used together with third-party cloud storage. The official description: ‘Use it for encrypted, versioned backups of your important files.’ The encryption is a welcome feature.

In effect it’s an integrated manager for cloud services.

Hard drive backup

I recommend using hard drives for storage. Drives come as HDDs (mechanical hard disks) and SSDs (solid-state drives); there’s a great deal to know about them that I won’t go into here, but if you’re interested you can search this blog for SSD.

By comparison, the advantages of an SSD are:

  1. SSDs read and write faster, and speed is king
  2. SSDs have a longer lifespan (write and erase cycles)
  3. SSDs can be moved while in use

Disadvantages:

  1. They’re expensive
  2. Data recovery is difficult

Whether HDD or SSD, ‘any storage device is a consumable’ that could die on you any day, so get an extra drive. Data can sometimes be rescued from an SSD, but recovery is difficult (it depends on which part of the drive has failed), and above all it’s expensive (typically upwards of ¥2,000), troublesome and insecure (remember the Edison Chen photo scandal).

Time Machine

Mac users can give this a try. It’s free, but not that good to use: its configuration options are too simplistic and Apple hasn’t put much effort into it, hence the software recommendations below.

Backup software to use with a hard drive

  • SuperDuper (since 2004, $27.95 to buy outright; there’s a free version with fewer features that’s still usable). Its strengths are that it can make an SSD into a bootable drive, and it supports differential backups.

  • Carbon Copy Cloner 6, $39.99. An even longer-established app, whose slogan is ‘A leader in Mac backups since 2002’. Its speed and stability are better than Time Machine’s, and it has a good reputation.

Both of the backup apps above, and Backblaze, offer trials, so you can pick the one that suits you.

NAS backup

I don’t use a NAS, so I’m not familiar with this area; choose according to your needs. Even with a NAS you need a double safety net and a proper off-site backup: although drive failure is less of a worry, there are still forces majeure to consider.

What to back up

Besides the full-disk backup, also pay attention to data that only lives in the cloud, such as the passwords I keep in Bitwarden. Nothing else comes to mind for now.


After all that reading, have you backed up today? Just do it!

References

PrimoCache: using an SSD as a cache to speed up a mechanical hard disk

For a computer’s storage, an SSD is certainly the better choice than a mechanical hard disk in every respect, but in the terms of Marxist dialectics this presents a contradiction between ‘the slow HDD and the expensive SSD’. My laptop currently uses a 128 GB + 1 TB combination, and is, and will long remain, stuck in this ‘basic contradiction of personal computer storage’.

Then I came across PrimoCache, and I recommend it to you.

PrimoCache is software that can turn physical RAM, an SSD or a flash drive into a cache for a hard disk. It automatically stores data read from the hard disk in a faster device such as RAM, so that the next time the system needs that data it can be read quickly from the cache device instead of going back to the slower hard disk, effectively improving the access performance of the physical disk.
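
The underlying idea is an ordinary read cache: keep recently used blocks on the faster device and fall back to the slow disk only on a miss. Here’s a toy sketch of the principle (this is not PrimoCache’s actual algorithm, just the general technique):

from collections import OrderedDict

class ReadCache:
    """Toy LRU read cache: a small fast store in front of a slow backing store."""

    def __init__(self, backing_store, capacity):
        self.backing_store = backing_store   # stands in for the HDD
        self.capacity = capacity             # blocks that fit on the fast device
        self.cache = OrderedDict()           # stands in for the RAM/SSD cache

    def read(self, block):
        if block in self.cache:              # hit: serve from the fast device
            self.cache.move_to_end(block)
            return self.cache[block]
        data = self.backing_store[block]     # miss: read the slow disk
        self.cache[block] = data             # keep a copy for next time
        if len(self.cache) > self.capacity:  # evict the least recently used block
            self.cache.popitem(last=False)
        return data

hdd = {block: f"data-{block}" for block in range(1000)}
cache = ReadCache(hdd, capacity=64)
cache.read(42)   # slow the first time
cache.read(42)   # served from the cache thereafter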

Chinese official site: http://www.romexsoftware.com/zh-cn/primo-cache/index.html
Platform: Windows (similar tools exist on *nix)
Licence: shareware

Update two months later:

After two months of real-world use, this software isn’t as perfect as advertised. A few programs freeze the machine completely as soon as they run (KartRider, and I confirmed this software was the cause), the whole system feels somewhat unstable (error messages of unclear meaning pop up occasionally), and there’s the extra memory it occupies.

In short, I don’t recommend using it to accelerate the system drive, nor in most other situations. It’s only worth using if you have games you play regularly that, at tens of GB, are too large to fit on the SSD.

Caching

I think the idea is excellent, and caching is a technique used very widely in computer hardware and software. It’s similar to the original Intel Rapid Storage Technology (RST) and to Intel Optane: all use a small amount of fast SSD as a cache to accelerate a slow HDD, so the computer has the capacity of the HDD together with speed approaching that of an SSD.

As for which data gets cached onto the SSD: that’s controlled by an algorithm, which automatically selects the most frequently used data on the HDD.

The difference between PrimoCache and RST or Optane is that this software doesn’t require the latest Intel motherboard or an Intel Optane memory module; it’s compatible with any existing SSD.

PrimoCache also supports using RAM as a level-1 cache, with the SSD as a level-2 cache.

Yes, this is another feature specific to PrimoCache. RAM read/write speeds are measured in GB per second, an order of magnitude faster than an SSD, so it can effectively accelerate the SSD as well. (That said, I haven’t noticed the difference directly; presumably by that point the bottleneck is no longer I/O.)

Results

I can now finally keep games of tens of GB on the mechanical hard disk with confidence, and let PrimoCache give them satisfying load speeds.

I used 12 GB of SSD as a level-2 cache and 1 GB of RAM as a level-1 cache, then benchmarked the mechanical hard disk, with the following results:

Without the cache: [benchmark screenshot]

With the cache: [benchmark screenshot]

Note that because the cache works by keeping frequently used data on the SSD and in RAM so it can be fetched quickly when needed, a benchmark’s random reads and writes don’t go through that warm-up, so these figures don’t reflect the real-world benefit. Even so, a clear improvement is visible.

Caveats

Drawbacks I’ve found:

  • When an SSD is used as the level-2 cache, a certain amount of RAM is needed to hold the mapping tables.
  • It’s paid software, although cracked versions exist.
  • On one occasion the graphics card was downclocked and recovered once I closed the software, although re-enabling it later didn’t reproduce the problem.

Also, although I have 16 GB of RAM, I use less than 2 GB of it as a disk cache, because I think most large programs already use RAM to speed themselves up, so there’s no need to duplicate that; and plenty of free RAM is itself a way to keep the computer responsive.
