I’ve long been critical of some of the best-selling utilities for the Mac that set out to deduplicate files, by detecting which appear to be identical and removing ‘spare’ copies. This is because APFS introduced clone files, and in the right circumstances those take up no space in storage, as their data is common and not duplicated at all. As it’s difficult in practice to tell whether two files are clones, any utility or command tool that claims to save space by removing duplicates can’t tell you the truth, and in most cases won’t save as much space as it claims.
Claims made by those utilities are often exaggerated. This is because they calculate how much space they think they’ve saved by adding the sizes of the potential duplicates they have deleted. That’s not correct when a clone file is deleted, as that doesn’t actually free any space at all, even though the clone file has exactly the same nominal size as the original.
Benefitting from clone files
I’m delighted to see the eminent John Siracusa turn this on its head and finally make better use of clone files in his app Hyperspace, available from the App Store. Instead of deleting clones, his app can replace duplicate copies with clones, and so achieve real space savings. This comes down to simple arithmetic:
if you have two copies (not clones) of a file in the same APFS volume, the total size they take on disk is twice the size of one of them;
if you have two clones (not copies) of a file in the same APFS volume, the total size they take on disk is only the size of one of them, as its clone takes no additional space at all.
Hyperspace thus checks all the files in a selected folder, identifies which are identical copies, and (where suitable) will replace those copies (except an original) with clones, so saving real storage space.
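Hyperspace’s implementation isn’t public, but the general approach to finding identical copies can be sketched. In this minimal Python example the name `find_duplicates` is my own, and a real tool would first compare file sizes before hashing anything, then replace each duplicate using APFS cloning (as the Finder’s Duplicate command, or `cp -c`, does):

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(folder):
    """Group files under folder by content hash; groups of 2+ are duplicates."""
    by_hash = defaultdict(list)
    for path in Path(folder).rglob("*"):
        if path.is_file():
            # Reads the whole file; a real tool would hash in chunks,
            # and only hash files whose sizes already match
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            by_hash[digest].append(path)
    # Keep only hashes shared by more than one file
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}
```

Replacing all but one file in each group with a clone of the survivor then reclaims the space their duplicate data occupied.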
I also think it has the most user-friendly payment scheme: download Hyperspace free of charge and check your Mac with it. If it doesn’t find sufficient savings, and you decide not to use it to replace any duplicates with clones, then it costs you nothing. If you want to reclaim that space, then you can opt to pay according to the amount of space it saves, by subscription, or with a one-time payment. On that basis, I unhesitatingly recommend everyone to download it from the App Store, and at least check their Home folder to see if it’s worth paying to reclaim space. You have absolutely nothing to lose.
In my case, perhaps because I tend to clone files using the Finder’s Duplicate command, the savings that it offered were of little benefit, but your Home folder could be different and release 100 GB or more.
Sparse files
The other space-saving special file type in APFS is the sparse file. Although it can bring great savings in storage space, that’s largely up to the app(s) that create and maintain the file, rather than the user. Devising an app that could go round converting plain files to sparse files would be harder, and would risk incompatibility with the apps that access them.
Fitting 285 GB into 16.5 GB
As a demonstration of how effective APFS special files are in saving disk space, I built myself a 100 GB partition (APFS Container) on an SSD and tried to fill it with clone and sparse files until I got bored.
At this stage, the 100 GB partition contains:
One 16.5 GB IPSW image file, with nine clones of it, created using the Duplicate command.
Eleven 10 GB sparse files and one clone, created using my app Sparsity.
Add those file sizes together and they come to 285 GB, yet the 100 GB partition only has 16.5 GB stored on it, and still has over 83 GB free. No compression is involved here, of course.
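The arithmetic is easy to verify. This little sketch, with sizes in GB taken from the list above, totals the nominal sizes the file system reports against what’s actually stored:

```python
ipsw_gb = 16.5       # one real IPSW image file
ipsw_clones = 9      # clones of it, sharing the same extents
sparse_gb = 10.0     # nominal size of each sparse file
sparse_count = 12    # eleven sparse files plus one clone

# Nominal size: every clone and sparse file reports its full size
nominal = (1 + ipsw_clones) * ipsw_gb + sparse_count * sparse_gb
print(nominal)   # 285.0 GB of nominal file size

# On disk: clones share the original's data blocks, and these sparse
# files contain (almost) no allocated blocks at all
on_disk = ipsw_gb
print(on_disk)   # 16.5 GB actually stored
```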
As the saying goes, there ain’t no such thing as a free lunch, and that free space could vanish quickly depending on what happens to those files. The worst case is for an app not to recognise sparse files, and to write one back to disk in plain format, swallowing 10 GB at once. Editing the cloned files would grow them more gradually: only changed data then needs to be saved, so free disk space would fall steadily as more changes were made to the clones.
Clone and sparse files are by no means unique to APFS, but they can be impressive, and above all they’re effective at reducing excess erase-write cycles that age SSDs, whatever you do with the storage they free.
I’m very grateful to Duncan for drawing my attention to Hyperspace, and to John Siracusa for an outstanding app.
In the last couple of weeks I’ve been asked to help recover data lost when files have been accidentally deleted, and an internal SSD has been wiped remotely using Find My Mac. What we perhaps haven’t fully appreciated is how improved security protection in our Macs has made it far harder, if not impossible, to recover such lost data. Allow me to explain in three scenarios.
Lost files on a hard disk
When files are deleted from a hard disk, the file system marks them as no longer being in use, and they’re left in place on the hard disk until they need to be overwritten with fresh data. If the hard disk has ample free space, that could occur days, weeks or even months later. Data recovery software and services can be used to scan each storage block and try to reconstruct the original files. If the file system and its data are encrypted, the encryption key is required to enable the contents to be decrypted.
There’s extensive experience in such data recovery, and provided the disk isn’t physically damaged or malfunctioning, results can be surprisingly good. As services charge according to the amount of data they recover, there are also strong incentives.
This works both ways, of course, in that someone who gets access to that hard disk could also recover files from it if they’re unencrypted. For this reason, when you’re passing on or disposing of a hard disk, you should perform a secure erase to overwrite its entire contents. If it’s going for recycling, once that has been done, you should also render the disk unusable by physically damaging its platters.
Deleted files on an SSD
What happens on an SSD depends on whether there’s already a snapshot of that volume. If there is, and that snapshot includes the deleted files, the file system metadata for them is retained in that snapshot, and the storage containing their data is also retained. The files can then be recovered by mounting that snapshot and either reverting the whole volume to that earlier state, or copying those files to a different volume.
If there’s no prior snapshot containing the files, the file system marks their extents as being free for reuse. At some time after their deletion, that information is sent to the SSD in a Trim command. When the SSD next has a moment to perform its routine housekeeping, the physical storage used will then be erased ready to be written to again.
Although there’s some uncertainty as to when that Trim command is sent to the SSD, one time we know supported SSDs are Trimmed is when they’re mounted, which for an internal SSD is when that Mac starts up. So if your Mac has started up since the files were deleted, those files are most likely to have been completely erased from its internal SSD. With their erasure, any chance of recovering those files has gone.
Wiped Data volume
Macs with T2 or Apple silicon chips have an ingenious method of ‘wiping’ the entire contents of the Data volume when it’s encrypted on the internal SSD. This can be triggered using the Erase All Content and Settings (EACAS) feature in the Transfer or Reset item in General settings, or remotely via Find My Mac. Either way, this destroys the ‘effaceable key’ and the ability to decrypt the contents of the Data volume, even if it’s not additionally protected by FileVault. As Apple states: “Erasing the key in this manner renders all files cryptographically inaccessible.”
This is to ensure that if your Mac is stolen, no one can recover the contents of its internal SSD once it has been wiped in this way. Nearly a year ago there were claims that old data could re-appear afterwards, but those turned out to be false.
I’m afraid that the only way to recover the data from a volume wiped using EACAS or Find My Mac is to restore it from a backup.
Backups are more important
For Intel Macs with T2 chips, and Apple silicon Macs, the chances of being able to recover files from their internal SSDs have become diminishingly small. This makes it all the more important that you make and keep good and comprehensive backups of everything in your Mac’s Data volume.
I’m always sad to hear of those who have suffered data loss, and shocked to learn of how many still don’t keep backups.
If you want a quiet life, just format each external disk in APFS (with or without encryption), and cruise along with plenty of free space on it. For those who need to do different, either getting best performance from a hard disk, or coping with less free space on an SSD, here are some tips that might help.
File system basics
A file system like APFS provides a logical structure for each disk. At the top level it’s divided into one or more partitions that in APFS also serve as containers for its volumes. Partitions are of fixed size, although you can always repartition a disk, a process that macOS will try to perform without losing any of the data in its existing partitions. That isn’t always possible, though: if your 1 TB disk already contains 750 GB, then repartitioning it into two containers of 500 GB each will inevitably lose at least 250 GB of existing data.
All APFS volumes within any given container share the same disk space, and by default each can expand to fill that. However, volumes can also have size limits imposed on them when they’re created. Those can reserve a minimum size for that volume, or limit it to a maximum quota size.
How that logical structure is implemented in terms of physical disk space depends on the storage medium used.
Faster hard disks
Hard disks store data in circular tracks of magnetic material. Storing each file requires multiple sectors of those tracks, each of which can contain 512 or 4096 bytes. The circumference of the tracks increases from the centre of the platter towards its edge, but the disk spins at a constant number of revolutions per minute (constant angular velocity), so sectors at the periphery of the disk pass under the heads in less time than those closer to the centre. The result is that read and write performance also varies according to where files are stored on the disk: they’re faster the further they are from the centre.
This graph shows how read and write speeds change in a typical compact external 2 TB hard disk as data is stored towards the centre of the disk. At the left, the outer third of the disk delivers in excess of 130 MB/s, while the inner third at the right delivers less than 100 MB/s.
You can use this to your advantage. Although you don’t control exactly where file data is stored on a hard disk, you can influence that. Disks normally fill with data from the periphery inwards, so files written first to an otherwise empty disk will normally be written and read faster.
You can help that on a more permanent basis by dividing the disk into two or more partitions (APFS containers), as the first will normally be allocated space on the disk nearest the periphery, so read and write faster than later partitions added nearer the centre. Adding a second container or partition of 20% of the total capacity of the disk won’t cost you much space, but it will ensure that performance doesn’t drop off to little more than half that achieved in the most peripheral 10%.
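To see why partitioning off the innermost capacity helps, here’s a toy model; all the figures in it are illustrative assumptions, not measurements. With constant angular velocity and constant linear bit density, both transfer rate and per-track capacity are proportional to track radius, so the fraction of capacity filled from the edge inward determines the radius in use, and hence the speed:

```python
import math

# Toy model of a platter recorded between inner radius r0 and outer radius R
R, r0 = 1.0, 0.5          # assumed radii (arbitrary units)
speed_outer = 130.0       # MB/s on the outermost track (illustrative)

def radius_at_fill(fraction):
    """Radius reached after filling that fraction of capacity from the edge inward."""
    return math.sqrt(R**2 - fraction * (R**2 - r0**2))

def speed_at_fill(fraction):
    """Transfer rate at that radius, proportional to radius."""
    return speed_outer * radius_at_fill(fraction) / R

print(round(speed_at_fill(0.8), 1))  # 82.2 MB/s when the disk is 80% full
print(round(speed_at_fill(1.0), 1))  # 65.0 MB/s on the innermost track
```

In this model, walling off the last 20% in a second partition means the first partition never uses tracks slower than about 82 MB/s, instead of falling towards half the peripheral speed.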
Reserving free space on SSDs
The firmware in an SSD knows nothing of the disk’s logical structure, and for key features manages the whole storage as a single unit. Wear levelling ensures that individual blocks of memory undergo similar numbers of erase-write cycles, so they age evenly. Consumer SSDs that use dynamic SLC write caches allocate those from across the whole storage, not confined to partitions. You can thus manage free space to keep sufficient dynamic cache available at the level of the whole disk.
One approach is to partition the SSD to reserve a whole container, with its fixed size, to support the needs of the dynamic cache. An alternative is to use volume reserve and quota sizes for the same purpose, within a single container. For example, in a 1 TB SSD with a 100 GB SLC write cache you could either:
with a single volume, set its quota to 900 GB, or
add an empty volume with its reserve size set to 100 GB.
Which of these you choose comes down to personal preference, although on boot volume groups you won’t be able to set a quota for its Data volume, and the most practical solution for a boot disk is to add an empty volume with a specified reserve size.
To do this when creating a new volume, click on the Size Options… button and set the quota or reserve.
Summary
Partition hard disks so that you only use the fastest 80% or so of the disk.
To reserve space in an SSD for dynamic caching, you can add a second APFS container.
A simpler and more flexible way to reserve space on SSDs is setting a quota size for a single volume, or adding an empty volume with a reserve size.
Size options can currently only be set when creating a volume.
Fast SSDs aren’t always fast when writing to them. Even an Apple silicon Mac’s internal SSD can slow alarmingly in the wrong circumstances, as some have recently been keen to demonstrate. This article explains why an expensive SSD normally capable of better than 2.5 GB/s write speed might disappoint, and what you can do to avoid that.
In normal use, there are three potential causes of reduced write speed in an otherwise healthy SSD:
thermal throttling,
SLC write cache depletion,
the need for Trimming and/or housekeeping.
Each of those should only affect write speed, leaving read speed unaffected.
Thermal throttling
Writing data to an SSD generates heat, and writing a lot can cause it to heat up significantly. Internal temperature is monitored by the firmware in the SSD, and when that rises sufficiently, writing to it will be throttled back to a lower speed to stabilise temperature. Some SSDs have proved particularly prone to thermal throttling, among them older versions of the Samsung X5, one of the first full-speed Thunderbolt 3 SSDs.
In testing, thermal throttling can be hard to distinguish from SLC write cache depletion, although thorough tests should reveal its dependence on temperature rather than the mere quantity of data written.
The only solution to thermal throttling is adequate cooling of the SSD. Internal SSDs in Macs with active cooling using fans shouldn’t heat up sufficiently to throttle, provided their air ducts are kept free and they’re used in normal ambient temperatures. Well-designed external enclosures should ensure sufficient cooling using deep fins, although active cooling using small fans remains more controversial.
SLC write cache
To achieve their high storage density, almost all consumer-grade SSDs store multiple bits in each of their memory cells, and most recent products store three, as Triple-Level Cell (TLC). Writing all three bits to a single cell takes longer than writing them to separate cells, so most TLC SSDs compensate by using caches. Almost all feature a smaller static cache of up to 16 GB, used when writing small amounts of data, and a more substantial dynamic cache borrowed from main storage cells by writing single bits to them as if they were SLC (single-level cell) rather than TLC.
This SLC write cache becomes important when writing large amounts of data to the SSD, as the size of the SLC write cache then determines overall performance. In practice, its size ranges from around 2.5% of total SSD capacity to over 10%. This can’t be measured directly, but can be inferred from measuring speed when writing more data than can be contained in the cache. As it can’t be emptied during full-speed write, once the dynamic cache is full, write speed suddenly falls; for example a Thunderbolt 5 SSD with full-speed write of 5.5 GB/s might fall to 1.4 GB/s when its SLC write cache is full. This is seen in both external and internal SSDs.
To understand the importance of SLC write cache in determining performance, take this real-world example:
100 GB is written to a Thunderbolt 5 SSD with an SLC write cache of 50 GB. Although the first half of the 100 GB is written at 5.5 GB/s, the remaining 50 GB is written at 1.4 GB/s because the cache is full. Total time for the whole write is then 44.8 seconds.
Performing the same write to a USB4 SSD with an SLC write cache in excess of 100 GB involves a slower maximum rate of 3.7 GB/s, but that’s sustained for the whole 100 GB, which then takes only 27 seconds, 60% of the time of the ‘faster’ SSD.
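The arithmetic behind those two timings can be captured in a few lines (figures as quoted above; `write_time` is a name of my own):

```python
def write_time(total_gb, cache_gb, fast_gbps, slow_gbps):
    """Seconds to write total_gb: fast until the SLC cache fills, slow after."""
    in_cache = min(total_gb, cache_gb)
    return in_cache / fast_gbps + (total_gb - in_cache) / slow_gbps

# Thunderbolt 5 SSD: 50 GB cache, 5.5 GB/s in cache, 1.4 GB/s once full
print(round(write_time(100, 50, 5.5, 1.4), 1))   # 44.8 seconds

# USB4 SSD: cache larger than the whole write, so 3.7 GB/s throughout
print(round(write_time(100, 120, 3.7, 3.7), 1))  # 27.0 seconds
```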
To predict the effect of SLC write cache size on write performance, you therefore need to know cache size, out-of-cache write speed, and the time required to empty a full cache between writes. I have looked at these on two different SSDs: a recent 2 TB model with a Thunderbolt 5 interface, and a self-assembled USB4 OWC 1M2 enclosure containing a Samsung 990 Pro 2 TB SSD. Other enclosures and SSDs will differ, of course.
The TB5 SSD has a 50 GB SLC write cache, as declared by the vendor and confirmed by testing. With that cache available, write speed is 5.5 GB/s over a TB5 interface, but falls to 1.4 GB/s once the cache is full. It then takes 4 minutes for the cache to be emptied and made available for re-use, allowing write speeds to reach 5.5 GB/s again.
The USB4 SSD has an SLC write cache in excess of 212 GB, as demonstrated by writing a total of 212 GB at its full interface speed of 3.7 GB/s. As the underlying performance of that SSD is claimed to exceed that required to support TB5, putting that SSD in a TB5 enclosure should enable it to comfortably outperform the other SSD.
Two further factors could affect SLC write cache: partitioning and free space.
When you partition a hard disk, that affects the physical layout of data on the disk, a feature sometimes used to ensure that data only uses the outer tracks where reads and writes are fastest. That doesn’t work for SSDs, where the firmware manages storage use, and won’t normally segregate partitions physically. That ensures partitioning into APFS containers doesn’t affect SLC write cache, either in terms of size or performance.
Free space can be extremely important, though. SLC write cache can only use storage that’s not already in use, and if necessary has been erased ready to be re-used. If the SSD only has 100 GB free, then that can’t all be used for cache, so limiting the size that’s available. This is another good reason for the performance of SSDs to suffer when they have little free space available.
Ultimately, to attain high write speeds through SLC write cache, you have to understand the limits of that cache and to work within them. One potential method for effectively doubling the size of that cache might be to use two SSDs in RAID-0, although that opens further questions.
Trim and housekeeping
In principle, Trim appears simple. For example, Wikipedia states: “The TRIM command enables an operating system to notify the SSD of pages which no longer contain valid data. For a file deletion operation, the operating system will mark the file’s sectors as free for new data, then send a TRIM command to the SSD.” A similar explanation is given by vendors like Seagate: “SSD TRIM is a command that optimizes SSDs by informing them which data blocks are no longer in use and can be wiped. When files are deleted, the operating system sends a TRIM command, marking these blocks as free for reuse.”
This rapidly becomes more complicated, though. For a start, the TRIM command for SATA doesn’t exist for NVMe, used by faster SSDs, where its closest substitute is DEALLOCATE. Neither is normally reported in the macOS log, although APFS does report its initial Trim when mounting an SSD. That’s reported for each container, not volume.
What we do know from often bitter experience is that some SSDs progressively slow down with use, a phenomenon most commonly (perhaps only?) seen with SATA drives connected over USB. Those also don’t get an initial Trim by APFS when they’re mounted.
It’s almost impossible to assess whether time required for Trim and housekeeping is likely to have any adverse effect on SSD write speed, provided that sufficient free disk space is maintained to support full-speed writing to the SLC write cache. Neither does there appear to be any need for a container to be remounted to trigger any Trim or housekeeping required to erase deleted storage ready for re-use, provided that macOS considers that SSD supports Trimming.
Getting best write performance from an SSD
Avoid thermal throttling by keeping the SSD’s temperature controlled. For internal SSDs that needs active cooling by fans; for external SSDs that needs good enclosure design with cooling fins or possibly a fan.
Keep ample free space on the SSD so the whole of its SLC write cache can be used.
Limit continuous writes to within the SSD’s SLC write cache size, then allow sufficient time for the cache to empty before writing any more.
It may be faster to use an SSD with a larger SLC write cache over a slower interface, than one with a smaller cache over a faster interface.
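Using the figures quoted earlier (a 50 GB cache, 5.5 GB/s in cache, 1.4 GB/s when full, and 4 minutes to empty the cache, all from one particular TB5 SSD, so treat them as assumptions), a quick sketch shows when pacing writes to the cache actually pays:

```python
import math

def continuous_write(total_gb, cache_gb, fast, slow):
    """One uninterrupted write: fast until the cache fills, slow afterwards."""
    in_cache = min(total_gb, cache_gb)
    return in_cache / fast + (total_gb - in_cache) / slow

def paced_write(total_gb, cache_gb, fast, drain_s):
    """Bursts no larger than the cache, waiting drain_s for it to empty between them."""
    bursts = math.ceil(total_gb / cache_gb)
    return total_gb / fast + (bursts - 1) * drain_s

print(round(continuous_write(200, 50, 5.5, 1.4), 1))  # 116.2 s of solid writing
print(round(paced_write(200, 50, 5.5, 240), 1))       # 756.4 s including the pauses
```

In wall-clock terms the continuous write finishes sooner, so pacing is only worthwhile when those pauses coincide with gaps between writes you’d have anyway, leaving each burst to run at full cache speed.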
With more new M4 Macs in the offing, one question that I’m asked repeatedly is whether you should save money by getting a Mac with the smallest internal SSD and extend that using cheaper external storage. This article considers the pros and cons.
Size and prices
In Apple’s current M4 models, the smallest internal storage on offer is 256 GB. For the great majority, that’s barely adequate even before you install any apps of your own. It might suffice in some circumstances, for example if you work largely from shared storage, but for a standalone Mac it won’t be sufficient in five years’ time. Your starting point should therefore be a minimum of 512 GB internal SSD. Apple’s typical charge for increasing that to 2 TB is around $/€/£ 600.
The alternative to 2 TB internally would be an external 2 TB SSD. Unless you’re prepared to throw it away after three years, you’ll want to choose the most versatile interface that’s also backward compatible. The only choice here is Thunderbolt 5, which currently comes at a small premium over USB4 or Thunderbolt 3. Two TB would currently cost you $/€/£ 380-400, although those prices are likely to reduce in the coming months as TB5 SSDs come into greater supply.
Don’t be tempted to skimp with a USB 3.2 Gen 2 external SSD if that’s going to be your main storage. While it might seem a reasonable economy now, in 3-5 years’ time you’ll regret it. Besides, it may well have severe limitations in not Trimming as standard, and most don’t support SMART health indicators.
Thus, your expected saving by buying a Mac with only 512 GB internal storage, and providing 2 TB main storage on an external SSD, is around $/€/£ 200-220, and that’s really the only advantage in not paying Apple’s high price for an internal 2 TB SSD.
Upgrading internal storage in an Apple silicon model currently isn’t feasible for most users. As Apple doesn’t support such upgrades, they’re almost certain to invalidate its warranty and any AppleCare+ cover. That could change in the future, at least for some models like the Mac mini and Studio, but I think it unlikely that Apple would ever make an upgrade cheaper than initial purchase.
External boot disk
One of the few compelling reasons for choosing a Mac with minimal internal storage is when it’s going to be started up from an external boot disk. Because Apple silicon Macs must always start their boot process from their internal storage, and that Mac still needs Recovery and other features on its internal SSD, you can’t run entirely from an external SSD, but you could probably get away with the smallest internal SSD available for its other specifications, either 256 or 512 GB.
Apple silicon Macs are designed to start up and run from their internal storage. Unlike Intel Macs with T2 chips, they will still boot from an external disk with Full Security, but there are several disadvantages in them doing so. Among those are the fact that, on an external boot disk, FileVault encryption isn’t performed in hardware and is inherently less secure, and AI isn’t currently supported when booted from an external disk. Choosing to do that thus involves compromises that you might not want to be stuck with throughout the lifetime of that Mac.
External media libraries
Regardless of the capacity of a Mac’s internal storage, it’s popular to store large media libraries on external storage, and for many that’s essential. This needs to be planned carefully: some libraries are easier to relocate than others, and provision has to be made for their backups. If you use hourly Time Machine backups for your working folders, you’ll probably want to back up external media libraries less frequently, and to different external storage.
External Home folder
Although it remains possible to relocate a user’s entire Home folder to external storage, this seems to have become more tricky in recent versions of macOS. Home folders also contain some of the most active files, particularly those in ~/Library, so moving them to an external SSD requires good performance from that SSD.
A more flexible alternative is to extend some working folders to external storage, while retaining the Home folder on internal storage. This can fit well with backup schedules, but you will still need to ensure the whole Home folder is backed up sufficiently frequently. It does have an unfortunate side-effect in privacy protection: it may require most of your working apps to be given access to Removable Volumes in the Files & Folders item in Privacy & Security settings. Thankfully, that should only need to be done once, when first using an app with external storage.
How much free space do you need?
When you’re weighing up your options to minimise the size of your new Mac’s internal storage, you also need to allow sufficient free space on each disk. APFS is very different from HFS+ in this respect: on external disks, in particular, HFS+ continues to work happily with just a few MB free, and could be filled almost to capacity. APFS, modern macOS and SSDs don’t work like that.
Measuring how much free space is needed isn’t straightforward either, as macOS trims back on its usage in response to falling free space. Some key features, such as retaining log entries, are sacrificed to allow others to continue. Snapshots can be removed or not made. Perhaps the best measurements come from observing the space requirements of VMs, where total virtual disk space much below 50 GB impairs running of normal functions. That’s the total size of the virtual disk, not the amount of free space, and doesn’t apply when iCloud or AI are enabled.
The other indicator of minimum free space requirements is successful upgrading of macOS, which appears to need somewhere between 30 and 40 GB. This makes it preferable to keep an absolute minimum of around 50 GB free at all times. When possible, 100 GB gives more room for comfort.
SSD wear and performance
When the first M1 Macs were released, base models with just 8 GB of memory and 128 GB internal SSDs were most readily available, with custom builds (BTO) following later. As a result, many of those who set out to assess Apple’s new Macs ended up stress-testing those with inadequate memory and storage for the tasks they ran.
Many noticed rapid changes in their SSD wear indicators, and some were getting worryingly close to the end of their expected working life after just three years. Users also reported that SSD performance was falling. The reasons for those are that SSDs work best, age slowest, and remain fastest when they have ample free space. One common rule of thumb is to keep at least 20-25% of SSD capacity as free space, although evidence is largely empirical, and in places confused.
The simplest factor to understand is the effect of SSD size on wear. As the memory in an SSD is expected to last a fixed number of erase-write cycles, all other things being equal, writing and rewriting the same amount of data to a smaller SSD will reach that number more quickly. Thus, in general terms and under the same write load, a 512 GB SSD will last about half as long as a 1 TB SSD.
All other things aren’t equal, though, and that’s where wear levelling and Trim come into play. Without levelling the number of erase-write cycles across all the memory in an SSD, some cells would reach their limit far sooner than others. To tackle that, SSDs incorporate mechanisms, known as wear levelling, to even out the use of individual memory cells. The less free space available on an SSD, the less effective wear levelling can be, giving larger SSDs a significant advantage if they also have more free space.
Trimming is performed periodically to allow storage that has already been made available for reuse, for example when a file has been deleted, to be erased and made ready. Both APFS and HFS+ will Trim compatible SSDs when mounting a volume, but Trim support for external SSDs is only provided by default for those with NVMe interfaces, not SATA, and isn’t available for other file systems including ExFAT. Some SSDs may still be able to process available storage in their routine housekeeping, but others won’t. Without Trimming, an SSD gradually fills with unused memory waiting to be erased, and will steadily grind to a halt, with write speeds falling to about 10% of new.
Thus, to ensure optimum performance and working life, SSDs should be as large as possible, with much of their storage kept free. Experience suggests that a healthy amount of free space is 20-50% of their capacity.
Striking the best compromise
Apple silicon Macs work best and fastest when largely running from their internal SSDs. By all means reduce the capacity required by moving more static media libraries, and possibly large working folders, to an external SSD. But there’s no escaping the evidence that your Mac will work best and longest when its internal storage has a minimum of 20% free at all times, and you must ensure that never falls below 50 GB of free space. Finally, consider your needs not today, but when you intend replacing that Mac in 3-5 years’ time, or any savings made now will prove a false economy.
There have been many reports of problems with Thunderbolt 5 support in Apple’s latest MacBook Pro and Mac mini models with M4 Pro and Max chips. Among the more concerning have been poor performance when accessing SSDs through TB5 docks and hubs, and the inability to drive more than two 4K displays through those. This article looks at what has changed, and what can currently be achieved when accessing SSDs either directly or via TB5 docks or hubs.
When I last tested these, using a Mac mini M4 Pro and Sequoia 15.2, I found that speeds measured through a TB5 dock were generally at least as good as those through a TB4 hub, with three notable exceptions:
Write speed from a TB5 port to a TB3 SSD through a TB5 dock fell to 0.42 GB/s, little more than 10% of that of a direct connection and similar to that expected from a SATA SSD operating over USB 3.2 Gen 2.
Write speed from a TB5 port to a USB4 SSD through a TB5 dock fell to 2.3 GB/s, about 62% of that expected.
Write speeds to a TB3 SSD through a TB5 dock occur at about half the expected speed, just as those through a TB4 hub.
Methods
Three Macs were used for testing:
iMac Pro (Intel, T2 chip) with macOS 15.1.1, over a Thunderbolt 3 port without USB4 support.
MacBook Pro (M3 Pro) with macOS 15.2, over a Thunderbolt 4/USB4 port.
Mac mini (M4 Pro) with macOS 15.3, over a Thunderbolt 5 port.
The results for the first two are taken from my previous tests, and here used for comparison.
The dock used was the Kensington SD5000T5 Thunderbolt 5 Triple Docking Station, with a total of three downstream TB5 ports. I’m very grateful to Sven who has provided his results from an OWC TB5 hub to support those from the dock.
Other methods are the same as those described previously. The TB5 SSD tested is one of the three currently available or on pre-order from OWC, Sabrent and LaCie (no, I’m not going to tell you which, as I’m still in the process of reviewing it).
Single SSDs
Results obtained from measuring read and write speeds on a single SSD at a time are summarised in the table below. Those that are concerning are set in bold italics.
Performance of the TB5 SSD when connected direct or through the dock was the highest of all, at around 150% of the speeds achieved by the next fastest, the USB4 SSD, and around 180-250% of those of the TB3 SSD, the slowest. Direct connection of USB4 SSDs to the TB5 port in macOS 15.3 resulted in even faster speeds than a TB4/USB4 connection using 15.2. Thus, a TB5 port with 15.3 delivers the best performance over all types of external SSD tested here.
Of the three exceptionally poor results seen previously:
Write speed from a TB5 port to a TB3 SSD through a TB5 dock improved greatly from 0.42 GB/s to 1.6 GB/s, the same as in other Macs.
Write speed from a TB5 port to a USB4 SSD through a TB5 dock improved from 2.3 GB/s to 3.8 GB/s, the same as when connected direct.
Write speeds to a TB3 SSD through a TB5 dock remained at 1.6 GB/s, about half the expected speed, just as those through a TB4 hub.
This anomalous behaviour when writing to a TB3 SSD through a TB5 dock was also found by Sven in his tests on the OWC TB5 hub, and seems common to most if not all TB4 and TB5 docks and hubs. I haven’t seen any explanation as to why it occurs so widely.
Paired SSDs
Encouraged by these substantial improvements with Sequoia 15.3, I measured simultaneous read and write speeds to a pair of USB4 SSDs connected to the Kensington TB5 dock. As Stibium has a GUI, it can't perform these transfers in perfect synchrony. However, it reads or writes a total of 160 files totalling 53 GB during each of these tests, and outlying measurements are discounted using the following robust statistical techniques:
a 20% trimmed mean, using only the values between the 20th and 80th centiles;
Theil-Sen regression;
linear regression through all measured values, returning a rate and latency.
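The three estimators above can be sketched in a few lines of Python. This is an illustrative reconstruction, not Stibium's code; the function names and the (size, time) input format are my own assumptions.

```python
from itertools import combinations
from statistics import mean, median

def trimmed_mean(rates, trim=0.2):
    # 20% trimmed mean: discard values below the 20th and above the
    # 80th centiles, then average what remains.
    s = sorted(rates)
    k = int(len(s) * trim)
    return mean(s[k:len(s) - k])

def theil_sen(sizes, times):
    # Theil-Sen estimator: the slope is the median of all pairwise
    # slopes (seconds per byte), which is robust to outliers.
    slopes = [(t2 - t1) / (s2 - s1)
              for (s1, t1), (s2, t2) in combinations(zip(sizes, times), 2)
              if s2 != s1]
    return median(slopes)

def rate_and_latency(sizes, times):
    # Ordinary least-squares fit of time = latency + size/rate,
    # returning the transfer rate (bytes/s) and fixed latency (s).
    mx, my = mean(sizes), mean(times)
    slope = (sum((x - mx) * (y - my) for x, y in zip(sizes, times))
             / sum((x - mx) ** 2 for x in sizes))
    latency = my - slope * mx
    return 1 / slope, latency
```

For a 5 GB/s SSD with 1 ms of fixed latency per file, `rate_and_latency` recovers both parameters from the raw (size, time) measurements, while the trimmed mean and Theil-Sen slope shrug off the occasional outlying transfer.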
Measured transfer rates in each of the two USB4 SSDs are given in the table below.
The first row of results gives the two write speeds measured simultaneously when both SSDs were writing; the second gives the two read speeds for simultaneous reading; and the bottom row shows speeds when one SSD was writing while the other was reading at the same time.
When both SSDs were transferring their data in the same direction, individual speeds were about 3.1 GB/s, but when the directions of transfer were mixed, with one reading and the other writing, their speeds were similar to those of a single USB4 SSD on its own. Total transfer speed was thus about 6.2 GB/s in the same direction, but 7.2 GB/s in opposite directions.
Multiple displays
Many of those buying into TB5 are doing so early because of its promised support for multiple displays. I haven't yet seen sufficient evidence to decide whether this has improved with Sequoia 15.3. However, OWC has qualified full display support for its TB5 hub as requiring a “native Thunderbolt 5 display or other displays that support USB-C connections and DisplayPort 2.1”. One likely reason for multiple displays not achieving the support expected, such as three 4K displays at 144 Hz, is that they don't support DisplayPort 2.1.
Which macOS?
As the evidence here suggests, macOS 15.3 or later is required for full TB5 performance, and OWC now includes that in the specifications for its TB5 hub. It also states that TB3 support requires macOS 15, although USB4 should still be supported in macOS 14 Sonoma.
Recommendations
TB5 SSDs are faster than USB4, which are faster than TB3, in almost every combination. The only exception to this is a USB4 SSD connected direct to a TB3 port, which is likely to be limited to 1.0 GB/s in both directions.
When pricing allows, prefer purchasing a ready-made TB5 SSD. If it's to be used with an Intel Mac, confirm that it also supports TB3.
Self-assembly TB5 enclosures remain expensive at present, and a USB4 enclosure may then prove better value, provided that it won’t be used with an Intel Mac.
Avoid writing to a TB3 SSD connected to a dock or hub, as its speed is likely to be limited to 1.6 GB/s.
Ensure Macs with TB5 ports are updated to Sequoia 15.3 or later.
Ensure Macs to be used with TB5 docks or hubs are updated to Sequoia 15 or later, or they may not fully support TB3.
As the first batches of Thunderbolt 5 SSDs are starting to ship, this is a good time to take stock of what we have seen so far. Here I include some current results obtained from testing one of these new products.
So far, products that have already been released or are well into pre-order include:
OWC Envoy Ultra,
LaCie Rugged SSD Pro 5,
Sabrent Rocket XTRM5.
Starting prices for these are under $/€/£ 400 for 2 TB, making them a little more expensive than better USB4 or Thunderbolt 3 equivalents. I am very grateful to Jozef for telling me that Acasis are promising to ship a TB5-compatible empty enclosure imminently, although at $300 it doesn’t make self-assembly attractive yet. If you’re interested, he has included a link to the product page in his comment below.
Mac support
The only Macs that currently support TB5 are the latest MacBook Pro and Mac mini models equipped with M4 Pro or Max chips. Although they're claimed to support full-speed TB5 performance when running macOS Sequoia 15.0, problems have been reported in achieving that, at least with TB5 docks and hubs. If you intend using any TB5 peripheral, then you'd best start with 15.3, which has been reported as solving at least some of those problems. This may also explain some of the anomalies in SSD performance that have been claimed by a few early testers.
Other Apple silicon Macs should run TB5 SSDs in USB4 40 Gb/s mode, which should still be significantly faster than TB3. Intel Macs don’t support TB5 or USB4, though, so they’re most likely to fall back to run them as USB 3.2 Gen 2, at 1 GB/s, which would be a deep disappointment for the cost.
Benchmarks
Beware of claimed performance of TB5 SSDs by their manufacturers and in product reviews. Testing them isn’t as straightforward as with slower products.
For a start, quoted results are often taken from apps such as Blackmagic Disk Speed Test and AmorphousDiskMark. Although both have their uses, they also have their limits, notably that they only measure read and write speed for one test size, normally 5 GB in the former and 1 GiB in the latter. As the graph below shows, there are substantial differences in speed between different sizes.
The upper pair of unbroken lines shows read and write speeds when operating in TB5 mode at 80 Gb/s to the Mac mini M4 Pro, and the pair of broken lines below them shows speeds when operating in USB4 mode at 40 Gb/s to a MacBook Pro M3 Pro. Calculated overall read/write speeds were 5.2/5.5 GB/s for TB5, and 3.1/3.1 GB/s for USB4. For comparison, Blackmagic returned 4.8/5.2 GB/s, and Amorphous 6.8/5.3 GB/s. Needless to say, the latter is the result being quoted by the manufacturer, despite its write speed looking highly suspect.
Highest read and write speeds were measured at the 400 MB size, and there were only small differences once sizes exceeded 600 MB. However, speeds for files below 10 MB were considerably lower than for those of 100 MB and larger. Fortunately, Blackmagic Disk Speed Test and AmorphousDiskMark both use sizes in the more linear range above 600 MB, but their results don't take into account smaller file sizes, which are more common in many real-world circumstances. As the 80th centile write speed was 5.62 GB/s, the write speed of 6.8 GB/s reported by Amorphous doesn't appear representative.
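A size sweep of this kind is simple to approximate for yourself. Here is a minimal sketch in Python; the volume path and the list of test sizes are placeholders to be adjusted for your own SSD, and a serious test would repeat each size many times and apply the robust statistics described earlier.

```python
import os
import time

def write_speed(path, size_bytes):
    # Time one complete write of size_bytes, including fsync so the
    # data actually reaches the drive, and return the rate in GB/s.
    data = bytes(size_bytes)
    t0 = time.monotonic()
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.monotonic() - t0
    os.remove(path)
    return size_bytes / elapsed / 1e9

def sweep(path, sizes_mb=(2, 10, 100, 400, 600, 1000)):
    # Measure write speed at each test size; speeds should level off
    # above about 600 MB, as in the graph described in the text.
    return {mb: write_speed(path, mb * 1024**2) for mb in sizes_mb}
```

Calling, say, `sweep('/Volumes/MySSD/test.bin')` (a hypothetical mount point) shows at a glance how far a single-size benchmark figure is from the speeds achieved with the small files that dominate real-world use.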
Caching effects
All those benchmark results are subject to a major caution: they provide no information on caching, which most faster SSDs use to improve performance. This typically uses part of the flash memory in SLC (single-level cell) mode, sacrificing capacity for speed. I've seen a figure of 50 GB of cache quoted for one TB5 SSD that I've tested, and sure enough, once 50 GB has been written to it, its write speed drops from around 5.5 to 1.4 GB/s.
Few are likely to write more than 50 GB to an SSD in a single continuous session, but for those who do, it's important to know when the SLC cache is likely to fill, and write speed to fall to little better than that of USB 3.2 Gen 2. Neither Blackmagic Disk Speed Test nor AmorphousDiskMark can measure that for you.
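You can, however, estimate it yourself with a sustained write. This is a rough sketch under stated assumptions: the target path must be on the SSD under test, and the 60 GB default is chosen only to exceed the 50 GB cache figure quoted above.

```python
import os
import time

def sustained_write_rates(path, chunk_mb=1024, total_gb=60.0):
    # Write equal-sized chunks continuously, fsync after each, and
    # record the per-chunk rate in GB/s. A sharp, sustained drop in
    # the returned rates marks the point where the SLC cache filled.
    chunk = bytes(chunk_mb * 1024**2)
    rates, written = [], 0
    with open(path, "wb") as f:
        while written < total_gb * 1024**3:
            t0 = time.monotonic()
            f.write(chunk)
            f.flush()
            os.fsync(f.fileno())
            rates.append(len(chunk) / (time.monotonic() - t0) / 1e9)
            written += len(chunk)
    return rates
```

On the SSD described above, the per-chunk rates would be expected to hold around 5.5 GB/s and then collapse to roughly 1.4 GB/s somewhere near the 50 GB mark. Remember to delete the test file afterwards.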
Overall impressions
Thunderbolt 5 SSDs are starting to realise their promise of significantly faster read and write performance than even USB4 SSDs. Although TB5 SSDs are supported by a small range of the latest Macs, they should still be faster than TB3 when they fall back to USB4 on other Apple silicon Macs. However, if they’re to be used with Intel Macs, the likelihood is that they will fall further back to USB 3.2 Gen 2.
If you’re likely to stream very large quantities of data to them, more than about 50 GB, then you’ll need to obtain an estimate of the size of their SLC cache, and of their write performance when that is full, or you could be in for a big disappointment. The trouble with TB5 SSDs is that they’re sufficiently fast to fill their cache very quickly, in this case in around 10 seconds when writing at full speed.
Postscript
I’ve gone back to my original article analysing TB5 performance, where the maximum transfer rate achievable over its 80 Gb/s works out at 6 GB/s. I therefore conclude that the 6.8 GB/s reported above can only be bogus, although similar figures are now being claimed by manufacturers and other testing sites. If you see TB5 claimed to exceed 6 GB/s, don’t trust those figures.