How can you tell when an SSD is about to fail? Are there any warning signs?
My current computer has a KIOXIA drive that's been in normal use for over two years with nothing unusual, but I keep second-guessing it, worried that one day it will suddenly drop off the bus.
I used to have a Kingston SSD that booted abnormally a few times (I was inexperienced then; only later did I learn that this was the SSD's way of warning it was ailing). Then it suddenly disappeared from the system and no data could ever be read from it again. I spent ages on every data-recovery method I could find, all in vain.
This KIOXIA drive has now done over two years, still stable with nothing unusual, and I don't know whether I should replace it. How do you judge when an SSD is about to fail? Are there any warning signs before an SSD dies?
I’ve long been critical of some of the best-selling utilities for the Mac that set out to deduplicate files by detecting which appear to be identical, and removing ‘spare’ copies. This is because APFS introduced clone files, and in the right circumstances those take up no space in storage, as their data is common and not duplicated at all. As it’s difficult in practice to tell whether two files are clones, any utility or command tool that claims to save space by removing duplicates can’t tell you the truth, and in most cases won’t save as much space as it claims.
Claims made by those utilities are often exaggerated. This is because they calculate how much space they think they’ve saved by adding the sizes of the potential duplicates they have deleted. That’s not correct when a clone file is deleted, as that doesn’t actually free any space at all, even though the clone file has exactly the same nominal size as the original.
I’m delighted to see the eminent John Siracusa turn this on its head and finally make better use of clone files in his app Hyperspace, available from the App Store. Instead of deleting clones, his app can replace duplicate copies with clones, and so achieve real space savings. The arithmetic is simple: deleting one of a pair of clones frees no space at all, whereas replacing one of a pair of true duplicates with a clone frees almost the full size of that file.
Hyperspace thus checks all the files in a selected folder, identifies which are identical copies, and (where suitable) will replace those copies (except an original) with clones, so saving real storage space.
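If you want to experiment with clones yourself, macOS can create them from the command line: on APFS, cp with its -c flag copies a file using clonefile(2). Here's a minimal sketch in Python, with hypothetical file names:

```python
import os
import subprocess

SRC, DST = "original.bin", "clone.bin"  # hypothetical file names

# On APFS, cp -c calls clonefile(2): the 'copy' shares the original's
# data blocks, and occupies no extra space until either file is edited.
subprocess.run(["cp", "-c", SRC, DST], check=True)

# Both files report the same nominal size, which is why a deduplication
# utility can't tell a clone from a true duplicate by size or content.
print(os.path.getsize(SRC), os.path.getsize(DST))
```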
I also think it has the most user-friendly payment scheme: download Hyperspace free of charge and check your Mac with it. If it doesn’t find sufficient savings, and you decide not to use it to replace any duplicates with clones, then it costs you nothing. If you want to reclaim that space, then you can opt to pay according to the amount of space it saves, by subscription, or with a one-time payment. On that basis, I unhesitatingly recommend everyone to download it from the App Store, and at least check their Home folder to see if it’s worth paying to reclaim space. You have absolutely nothing to lose.
In my case, perhaps because I tend to clone files using the Finder’s Duplicate command, the savings that it offered were of little benefit, but your Home folder could be different and release 100 GB or more.
The other space-saving special file type in APFS is the sparse file. Although it can bring great savings in storage space, that’s largely up to the app(s) that create and maintain the file, rather than the user. Devising an app that could go round converting plain to sparse files is harder, and risks incompatibility with the apps that access those files.
As a demonstration of how effective APFS special files are in saving disk space, I built myself a 100 GB partition (APFS Container) on an SSD and tried to fill it with clone and sparse files until I got bored.
At this stage, the nominal sizes of the clone and sparse files on the 100 GB partition add up to 285 GB, yet only 16.5 GB is actually stored on it, and over 83 GB remains free. No compression is involved here, of course.
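The sparse-file half of that demonstration is easy to reproduce. This sketch, assuming it's run on an APFS volume, creates a file with a nominal size of 10 GiB that occupies almost nothing, then compares nominal size with the blocks actually allocated:

```python
import os

PATH = "sparse_demo.bin"  # created on the current (APFS) volume

# Seek far past the start of an empty file and write a single byte.
# APFS only allocates storage for data actually written, so the
# 10 GiB hole takes up no space on disk.
with open(PATH, "wb") as f:
    f.seek(10 * 1024**3)
    f.write(b"\0")

st = os.stat(PATH)
print(f"nominal size: {st.st_size / 1024**3:.2f} GiB")
print(f"on disk     : {st.st_blocks * 512 / 1024:.0f} KiB")  # st_blocks is in 512-byte units
```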
As the saying goes, there ain’t no such thing as a free lunch, and that free space could vanish quickly depending on what happens to those files. The worst case is for an app that doesn’t recognise sparse files to write one back to disk in plain format, swallowing 10 GB at once. Editing the cloned files would grow them more gradually: only the changed data needs to be saved, so free disk space would fall steadily as more changes were made to each clone.
Clone and sparse files are by no means unique to APFS, but they can be impressive, and above all they’re effective at reducing excess erase-write cycles that age SSDs, whatever you do with the storage they free.
I’m very grateful to Duncan for drawing my attention to Hyperspace, and to John Siracusa for an outstanding app.
In the last couple of weeks I’ve been asked to help recover data lost when files have been accidentally deleted, and an internal SSD has been wiped remotely using Find My Mac. What we perhaps haven’t fully appreciated is how improved security protection in our Macs has made it far harder, if not impossible, to recover such lost data. Allow me to explain in three scenarios.
When files are deleted from a hard disk, the file system marks them as no longer being in use, and they’re left in place on the hard disk until they need to be overwritten with fresh data. If the hard disk has ample free space, that could occur days, weeks or even months later. Data recovery software and services can be used to scan each storage block and try to reconstruct the original files. If the file system and its data are encrypted, the encryption key is required to enable the contents to be decrypted.
There’s extensive experience in such data recovery, and provided the disk isn’t physically damaged or malfunctioning, results can be surprisingly good. As services charge according to the amount of data they recover, there are also strong incentives.
This works both ways, of course, in that someone who gets access to that hard disk could also recover files from it if they’re unencrypted. For this reason, when you’re passing on or disposing of a hard disk, you should perform a secure erase to overwrite its entire contents. If it’s going for recycling, once that has been done, you should also render the disk unusable by physically damaging its platters.
What happens on an SSD depends on whether there’s already a snapshot of that volume. If there is, and that snapshot includes the deleted files, the file system metadata for them is retained in that snapshot, and the storage containing their data is also retained. The files can then be recovered by mounting that snapshot and either reverting the whole volume to that earlier state, or copying those files to a different volume.
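So the first question after an accidental deletion is whether any snapshot predates it. One quick check is to list the volume's local Time Machine snapshots with the standard tmutil tool, sketched here from Python:

```python
import subprocess

# List local (Time Machine) snapshots of the current boot volume.
# If one predates the deletion, the files' data is still retained and
# can be recovered by mounting or restoring that snapshot.
result = subprocess.run(
    ["tmutil", "listlocalsnapshots", "/"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
# Entries take the form: com.apple.TimeMachine.2025-01-31-101500.local
```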
If there’s no prior snapshot containing the files, the file system marks their extents as being free for reuse. At some time after their deletion, that information is sent to the SSD in a Trim command. When the SSD next has a moment to perform its routine housekeeping, the physical storage used will then be erased ready to be written to again.
Although there’s some uncertainty as to when that Trim command will be sent to the SSD, one time we know supported SSDs are Trimmed is during mounting; for an internal SSD, that’s when the Mac starts up. So if your Mac has started up since the files were deleted, they have most likely been completely erased from its internal SSD, and with their erasure, any chance of recovering those files has gone.
Macs with T2 or Apple silicon chips have an ingenious method of ‘wiping’ the entire contents of the Data volume when it’s encrypted on the internal SSD. This can be triggered using the Erase All Content and Settings (EACAS) feature in the Transfer or Reset item in General settings, or remotely via Find My Mac. Either way, this destroys the ‘effaceable key’ and the ability to decrypt the contents of the Data volume, even if it’s not additionally protected by FileVault. As Apple states: “Erasing the key in this manner renders all files cryptographically inaccessible.”
This is to ensure that if your Mac is stolen, no one can recover the contents of its internal SSD once it has been wiped in this way. Nearly a year ago there were claims that old data could re-appear afterwards, but those turned out to be false.
I’m afraid that the only way to recover the data from a volume wiped using EACAS or Find My Mac is to restore it from a backup.
For Intel Macs with T2 chips, and Apple silicon Macs, the chances of being able to recover files from their internal SSDs have become diminishingly small. This makes it all the more important that you make and keep good and comprehensive backups of everything in your Mac’s Data volume.
I’m always sad to hear of those who have suffered data loss, and shocked to learn of how many still don’t keep backups.
If you want a quiet life, just format each external disk in APFS (with or without encryption), and cruise along with plenty of free space on it. For those who need to do something different, either getting the best performance from a hard disk or coping with less free space on an SSD, here are some tips that might help.
A file system like APFS provides a logical structure for each disk. At the top level it’s divided into one or more partitions that in APFS also serve as containers for its volumes. Partitions are of fixed size, although you can always repartition a disk, a process that macOS will try to perform without losing any of the data in its existing partitions. That isn’t always possible, though: if your 1 TB disk already contains 750 GB, then repartitioning it into two containers of 500 GB each will inevitably lose at least 250 GB of existing data.
All APFS volumes within any given container share the same disk space, and by default each can expand to fill that. However, volumes can also have size limits imposed on them when they’re created. Those can reserve a minimum size for that volume, or limit it to a maximum quota size.
How that logical structure is implemented in terms of physical disk space depends on the storage medium used.
Hard disks store data in circular tracks of magnetic material. To store each file requires multiple sectors of those tracks, each of which can contain 512 or 4096 bytes. Tracks grow longer (in circumference) as you move from the centre of the platter towards its edge, while the disk spins at a constant number of revolutions per minute (constant angular velocity), so outer tracks hold more sectors and more data passes under the heads in each revolution. The result is that read and write performance varies according to where files are stored on the disk: the further they are from the centre, the faster they can be read and written.
This graph shows how read and write speeds change in a typical compact external 2 TB hard disk as data is stored towards the centre of the disk. At the left, the outer third of the disk delivers in excess of 130 MB/s, while the inner third at the right delivers less than 100 MB/s.
You can use this to your advantage. Although you don’t control exactly where file data is stored on a hard disk, you can influence that. Disks normally fill with data from the periphery inwards, so files written first to an otherwise empty disk will normally be written and read faster.
You can help that on a more permanent basis by dividing the disk into two or more partitions (APFS containers), as the first will normally be allocated space nearest the periphery, and so read and write faster than later partitions added nearer the centre. Adding a second container or partition of 20% of the total capacity of the disk won’t cost you much space, but it will ensure that performance doesn’t drop off to little more than half that achieved in the most peripheral 10%.
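A simple geometric model shows why that works. Assume constant areal density and rotation rate, so that transfer rate is proportional to track radius, and take illustrative radii with the innermost track at 40% of the outermost; this is a sketch of the principle, not a measurement of any particular disk:

```python
import math

R_OUT, R_IN = 1.0, 0.4  # illustrative radii, outermost track normalised to 1

def relative_speed(filled_fraction: float) -> float:
    """Relative transfer rate at the point reached after filling the given
    fraction of capacity. The disk fills from the periphery inwards;
    capacity is proportional to platter area, transfer rate to radius."""
    r_squared = R_OUT**2 - filled_fraction * (R_OUT**2 - R_IN**2)
    return math.sqrt(r_squared) / R_OUT

for f in (0.0, 0.5, 0.8, 1.0):
    print(f"{f:4.0%} of capacity used: {relative_speed(f):.0%} of peak speed")
# With these radii, the innermost 20% of capacity runs at under 60% of
# peak, which is why fencing it off in a second partition can pay off.
```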
The firmware in an SSD knows nothing of the disk’s logical structure, and for key features manages the whole storage as a single unit. Wear levelling ensures that individual blocks of memory have similar numbers of erase-write cycles, so they age evenly. Consumer SSDs that use dynamic SLC write caches allocate those from across the whole storage, not confined to partitions. You can thus manage free space to keep sufficient dynamic cache available at the level of the whole disk.
One approach is to partition the SSD to reserve a whole container, with its fixed size, to support the needs of the dynamic cache. An alternative is to use volume reserve and quota sizes for the same purpose, within a single container. For example, in a 1 TB SSD with a 100 GB SLC write cache you could either add a second 100 GB container and leave it empty, or add an empty volume with a 100 GB reserve size to the existing container.
Which of these you choose comes down to personal preference, although on boot volume groups you won’t be able to set a quota for the Data volume, and the most practical solution for a boot disk is to add an empty volume with a specified reserve size.
To do this when creating a new volume, click on the Size Options… button and set the quota or reserve.
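The same can be done in Terminal with diskutil. This sketch assumes a container at the hypothetical device identifier disk5 (find yours with `diskutil list`), and adds an empty volume with a 100 GB reserve:

```python
import subprocess

CONTAINER = "disk5"  # hypothetical: check your own with `diskutil list`

# Add an empty APFS volume whose reserve guarantees 100 GB of the
# container's free space, e.g. to keep SLC write cache available.
subprocess.run(
    ["diskutil", "apfs", "addVolume", CONTAINER, "APFS", "Reserve",
     "-reserve", "100g"],
    check=True,
)
```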
Fast SSDs aren’t always fast when writing to them. Even an Apple silicon Mac’s internal SSD can slow alarmingly in the wrong circumstances, as some have recently been keen to demonstrate. This article explains why an expensive SSD normally capable of better than 2.5 GB/s write speed might disappoint, and what you can do to avoid that.
In normal use, there are three potential causes of reduced write speed in an otherwise healthy SSD: thermal throttling, depletion of its SLC write cache, and pending Trim and housekeeping.
Each of those should only affect write speed, leaving read speed unaffected.
Writing data to an SSD generates heat, and writing a lot can cause it to heat up significantly. Internal temperature is monitored by the firmware in the SSD, and when that rises sufficiently, writing to it will be throttled back to a lower speed to stabilise temperature. Some SSDs have proved particularly prone to thermal throttling, among them older versions of the Samsung X5, one of the first full-speed Thunderbolt 3 SSDs.
In testing, thermal throttling can be hard to distinguish from SLC write cache depletion, although thorough tests should reveal its dependence on temperature rather than the mere quantity of data written.
The only solution to thermal throttling is adequate cooling of the SSD. Internal SSDs in Macs with active cooling using fans shouldn’t heat up sufficiently to throttle, provided their air ducts are kept free and they’re used in normal ambient temperatures. Well-designed external enclosures should ensure sufficient cooling using deep fins, although active cooling using small fans remains more controversial.
To achieve their high storage density, almost all consumer-grade SSDs store multiple bits in each of their memory cells, and most recent products store three in Triple-Level Cell or TLC. Writing all three bits to a single cell takes longer than it would to write them to separate cells, so most TLC SSDs compensate by using caches. Almost all feature a smaller static cache of up to 16 GB, used when writing small amounts of data, and a more substantial dynamic cache borrowed from main storage cells by writing single bits to them as if they were SLC (single-level cell) rather than TLC.
This SLC write cache becomes important when writing large amounts of data to the SSD, as the size of the SLC write cache then determines overall performance. In practice, its size ranges from around 2.5% of total SSD capacity to over 10%. This can’t be measured directly, but can be inferred from measuring speed when writing more data than can be contained in the cache. As it can’t be emptied during full-speed write, once the dynamic cache is full, write speed suddenly falls; for example a Thunderbolt 5 SSD with full-speed write of 5.5 GB/s might fall to 1.4 GB/s when its SLC write cache is full. This is seen in both external and internal SSDs.
To understand the importance of the SLC write cache in determining performance, consider the real-world examples of the two SSDs that follow.
To predict the effect of SLC write cache size on write performance, you therefore need to know cache size, out-of-cache write speed, and the time required to empty a full cache between writes. I have looked at these on two different SSDs: a recent 2 TB model with a Thunderbolt 5 interface, and a self-assembled USB4 OWC 1M2 enclosure containing a Samsung 990 Pro 2 TB SSD. Other enclosures and SSDs will differ, of course.
The TB5 SSD has a 50 GB SLC write cache, as declared by the vendor and confirmed by testing. With that cache available, write speed is 5.5 GB/s over a TB5 interface, but falls to 1.4 GB/s once the cache is full. It then takes 4 minutes for the cache to be emptied and made available for re-use, allowing write speeds to reach 5.5 GB/s again.
The USB4 SSD has an SLC write cache in excess of 212 GB, as demonstrated by writing a total of 212 GB at its full interface speed of 3.7 GB/s. As the underlying performance of that SSD is claimed to exceed that required to support TB5, putting that SSD in a TB5 enclosure should enable it to comfortably outperform the other SSD.
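Given cache size and in- and out-of-cache speeds, the effect on a long write is simple arithmetic. This sketch uses the figures quoted above for a single continuous write of 212 GB, ignoring any cache recovery during the write:

```python
def write_seconds(total_gb, cache_gb, fast_gbps, slow_gbps):
    """Time to write total_gb when only the first cache_gb goes at full speed."""
    in_cache = min(total_gb, cache_gb)
    return in_cache / fast_gbps + (total_gb - in_cache) / slow_gbps

for name, cache_gb, fast, slow in [
    ("TB5 SSD ", 50, 5.5, 1.4),   # 50 GB cache, then falls to 1.4 GB/s
    ("USB4 SSD", 212, 3.7, 3.7),  # cache doesn't fill in this test
]:
    t = write_seconds(212, cache_gb, fast, slow)
    print(f"{name}: 212 GB in {t:.0f} s = {212 / t:.1f} GB/s effective")
# The TB5 SSD takes about 125 s (~1.7 GB/s effective), while the USB4
# SSD sustains 3.7 GB/s throughout and finishes in about 57 s.
```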
Two further factors could affect SLC write cache: partitioning and free space.
When you partition a hard disk, that affects the physical layout of data on the disk, a feature sometimes used to ensure that data only uses the outer tracks where reads and writes are fastest. That doesn’t work for SSDs, where the firmware manages storage use, and won’t normally segregate partitions physically. That ensures partitioning into APFS containers doesn’t affect SLC write cache, either in terms of size or performance.
Free space can be extremely important, though. SLC write cache can only use storage that’s not already in use, and if necessary has been erased ready to be re-used. If the SSD only has 100 GB free, then that can’t all be used for cache, so limiting the size that’s available. This is another good reason for the performance of SSDs to suffer when they have little free space available.
Ultimately, to attain high write speeds through SLC write cache, you have to understand the limits of that cache and to work within them. One potential method for effectively doubling the size of that cache might be to use two SSDs in RAID-0, although that opens further questions.
In principle, Trim appears simple. For example, Wikipedia states: “The TRIM command enables an operating system to notify the SSD of pages which no longer contain valid data. For a file deletion operation, the operating system will mark the file’s sectors as free for new data, then send a TRIM command to the SSD.” A similar explanation is given by vendors like Seagate: “SSD TRIM is a command that optimizes SSDs by informing them which data blocks are no longer in use and can be wiped. When files are deleted, the operating system sends a TRIM command, marking these blocks as free for reuse.”
This rapidly becomes more complicated, though. For a start, the TRIM command for SATA doesn’t exist for NVMe, used by faster SSDs, where its closest substitute is DEALLOCATE. Neither is normally reported in the macOS log, although APFS does report its initial Trim when mounting an SSD. That’s reported for each container, not volume.
What we do know from often bitter experience is that some SSDs progressively slow down with use, a phenomenon most commonly (perhaps only?) seen with SATA drives connected over USB. Those also don’t get an initial Trim by APFS when they’re mounted.
It’s almost impossible to assess whether time required for Trim and housekeeping is likely to have any adverse effect on SSD write speed, provided that sufficient free disk space is maintained to support full-speed writing to the SLC write cache. Neither does there appear to be any need for a container to be remounted to trigger any Trim or housekeeping required to erase deleted storage ready for re-use, provided that macOS considers that SSD supports Trimming.
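Whether macOS considers a drive to support Trim at all can be checked with system_profiler, which reports a 'TRIM Support' line for each NVMe and SATA device. A minimal sketch:

```python
import subprocess

# Print model and TRIM support for NVMe and SATA devices. External SATA
# SSDs on USB typically report "TRIM Support: No", and those don't get
# an initial Trim from APFS when mounted.
for data_type in ("SPNVMeDataType", "SPSerialATADataType"):
    report = subprocess.run(
        ["system_profiler", data_type],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in report.splitlines():
        if "Model" in line or "TRIM Support" in line:
            print(line.strip())
```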
I’m grateful to Barry for raising these issues.
With more new M4 Macs in the offing, one question that I’m asked repeatedly is whether you should save money by getting a Mac with the smallest internal SSD and extend that using cheaper external storage. This article considers the pros and cons.
In Apple’s current M4 models, the smallest internal storage on offer is 256 GB. For the great majority, that’s barely adequate if you don’t install any of your own apps. It might suffice in some circumstances, for example if you work largely from shared storage, but for a standalone Mac it won’t be sufficient in five years’ time. Your starting point should therefore be a minimum of 512 GB internal SSD. Apple’s typical charge for increasing that to 2 TB is around $/€/£ 600.
The alternative to 2 TB internally would be an external 2 TB SSD. Unless you’re prepared to throw it away after three years, you’ll want to choose the most versatile interface that’s also backward compatible. The only choice here is Thunderbolt 5, which currently comes at a small premium over USB4 or Thunderbolt 3. Two TB would currently cost you $/€/£ 380-400, although those prices are likely to reduce in the coming months as TB5 SSDs come into greater supply.
Don’t be tempted to skimp with a USB 3.2 Gen 2 external SSD if that’s going to be your main storage. While it might seem a reasonable economy now, in 3-5 years’ time you’ll regret it. Besides, it may well have severe limitations in not Trimming as standard, and most don’t support SMART health indicators.
Thus, your expected saving by buying a Mac with only 512 GB internal storage, and providing 2 TB main storage on an external SSD, is around $/€/£ 200-220, and that’s really the only advantage in not paying Apple’s high price for an internal 2 TB SSD.
Upgrading internal storage in an Apple silicon model currently isn’t feasible for most users. As Apple doesn’t support such upgrades, they’re almost certain to invalidate its warranty and any AppleCare+ cover. That could change in the future, at least for some models like the Mac mini and Studio, but I think it unlikely that Apple would ever make an upgrade cheaper than initial purchase.
One of the few compelling reasons for choosing a Mac with minimal internal storage is when it’s going to be started up from an external boot disk. Because Apple silicon Macs must always start their boot process from their internal storage, and that Mac still needs Recovery and other features on its internal SSD, you can’t run entirely from an external SSD, but you could probably get away with the smallest available for its other specifications, either 256 or 512 GB.
Apple silicon Macs are designed to start up and run from their internal storage. Unlike Intel Macs with T2 chips, they will still boot from an external disk with Full Security, but there are several disadvantages in them doing so. Among those are the fact that, on an external boot disk, FileVault encryption isn’t performed in hardware and is inherently less secure, and AI isn’t currently supported when booted from an external disk. Choosing to do that thus involves compromises that you might not want to be stuck with throughout the lifetime of that Mac.
Regardless of the capacity of a Mac’s internal storage, it’s popular to store large media libraries on external storage, and for many that’s essential. This needs to be planned carefully: some libraries are easier to relocate than others, and provision has to be made for their backups. If you use hourly Time Machine backups for your working folders, you’ll probably want to back up external media libraries less frequently, and to different external storage.
Although it remains possible to relocate a user’s entire Home folder to external storage, this seems to have become more tricky in recent versions of macOS. Home folders also contain some of the most active files, particularly those in ~/Library, so moving them to an external SSD is going to require its good performance.
A more flexible alternative is to extend some working folders to external storage, while retaining the Home folder on internal storage. This can fit well with backup schedules, but you will still need to ensure the whole Home folder is backed up sufficiently frequently. It does have an unfortunate side-effect in privacy protection: it may require most of your working apps to be given access to Removable Volumes in the Files & Folders item in Privacy & Security settings. Thankfully, that should only need to be done once, when first using an app with external storage.
When you’re weighing up your options to minimise the size of your new Mac’s internal storage, you also need to allow sufficient free space on each disk. APFS is very different from HFS+ in this respect: on external disks, in particular, HFS+ continues to work happily with just a few MB free, and could be filled almost to capacity. APFS, modern macOS and SSDs don’t work like that.
Measuring how much free space is needed isn’t straightforward either, as macOS trims back on its usage in response to falling free space. Some key features, such as retaining log entries, are sacrificed to allow others to continue. Snapshots can be removed or not made. Perhaps the best measurements come from observing the space requirements of VMs, where total virtual disk space much below 50 GB impairs running of normal functions. That’s the total size of the virtual disk, not the amount of free space, and doesn’t apply when iCloud or AI are enabled.
The other indicator of minimum free space requirements is for successful upgrading of macOS, which appears to need somewhere between 30 and 40 GB. This makes it preferable to keep an absolute minimum of around 50 GB free at all times. When possible, 100 GB gives more room for comfort.
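Free space is easy to keep an eye on programmatically. A minimal sketch using only Python's standard library, applying the 50 GB floor and 100 GB comfort margin suggested above:

```python
import shutil

FLOOR = 50 * 1000**3     # absolute minimum free space, per the text above
COMFORT = 100 * 1000**3  # more comfortable margin

usage = shutil.disk_usage("/")
print(f"free: {usage.free / 1000**3:.1f} GB of {usage.total / 1000**3:.0f} GB")

if usage.free < FLOOR:
    print("Warning: below the ~50 GB needed for macOS upgrades")
elif usage.free < COMFORT:
    print("Note: below the 100 GB comfort margin")
```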
When the first M1 Macs were released, base models with just 8 GB of memory and 256 GB internal SSDs were most readily available, with custom builds (BTO) following later. As a result, many of those who set out to assess Apple’s new Macs ended up stress-testing those with inadequate memory and storage for the tasks they ran.
Many noticed rapid changes in their SSD wear indicators, and some were getting worryingly close to the end of their expected working life after just three years. Users also reported that SSD performance was falling. The reasons for those are that SSDs work best, age slowest, and remain fastest when they have ample free space. One common rule of thumb is to keep at least 20-25% of SSD capacity as free space, although evidence is largely empirical, and in places confused.
The simplest factor to understand is the effect of SSD size on wear. As the memory in an SSD is expected to last a fixed number of erase-write cycles, all other things being equal, writing and rewriting the same amount of data to a smaller SSD will reach that number more quickly. Thus, in general terms and under the same write load, a 512 GB SSD will last about half as long as a 1 TB SSD.
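Endurance is usually quoted as total bytes written (TBW), which scales with capacity for a given cell type. A rough sketch of that arithmetic, assuming a hypothetical rating of 600 TBW per TB of capacity (typical of recent TLC drives) and a constant daily write load:

```python
def years_to_rated_tbw(capacity_tb: float, daily_writes_gb: float,
                       tbw_per_tb: float = 600) -> float:
    """Years until the rated TBW is reached at a constant write load."""
    rated_tbw = capacity_tb * tbw_per_tb  # rated endurance, in TB written
    return rated_tbw * 1000 / daily_writes_gb / 365

for capacity in (0.512, 1.0, 2.0):
    years = years_to_rated_tbw(capacity, daily_writes_gb=100)
    print(f"{capacity:5.3f} TB drive at 100 GB/day: {years:4.1f} years")
# Under the same load, the 512 GB drive reaches its rating in half the
# time of the 1 TB drive, and a quarter that of the 2 TB drive.
```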
All other things aren’t equal, though, and that’s where wear levelling and Trim come into play. Without levelling the number of erase-write cycles across all the memory in an SSD, some cells would reach their limit far sooner than others. To tackle that, SSDs incorporate mechanisms to even out the use of individual memory cells, known as wear levelling. The less free space available on an SSD, the less effective wear levelling can be, giving larger SSDs a significant advantage if they also have more free space.
Trimming is performed periodically to allow storage that has already been made available for reuse, for example when a file has been deleted, to be erased and made ready. Both APFS and HFS+ will Trim compatible SSDs when mounting a volume, but Trim support for external SSDs is only provided by default for those with NVMe interfaces, not SATA, and isn’t available for other file systems including ExFAT. Some SSDs may still be able to process available storage in their routine housekeeping, but others won’t. Without Trimming, an SSD gradually fills with unused memory waiting to be erased, and will steadily grind to a halt, with write speeds falling to about 10% of new.
Thus, to ensure optimum performance and working life, SSDs should be as large as possible, with much of their storage kept free. Experience suggests that a healthy amount of free space is 20-50% of their capacity.
Apple silicon Macs work best and fastest when largely running from their internal SSDs. By all means reduce the capacity required by moving more static media libraries, and possibly large working folders, to an external SSD. But there’s no escaping the evidence that your Mac will work best and longest when its internal storage has a minimum of 20% free at all times, and you must ensure that never falls below 50 GB free space. Finally, consider your needs not today, but when you intend replacing that Mac in 3-5 years time, or any savings made now will prove a false economy.
There have been many reports of problems with Thunderbolt 5 support in Apple’s latest MacBook Pro and Mac mini models with M4 Pro and Max chips. Among the more concerning have been poor performance when accessing SSDs through TB5 docks and hubs, and the inability to drive more than two 4K displays through those. This article looks at what has changed, and what can currently be achieved when accessing SSDs either directly or via TB5 docks or hubs.
When I last tested these, using a Mac mini M4 Pro and Sequoia 15.2, I found that speeds measured through a TB5 dock were generally at least as good as those through a TB4 hub, with three notable exceptions:
Three Macs were used for testing:
The results for the first two are taken from my previous tests, and here used for comparison.
The dock used was the Kensington SD5000T5 Thunderbolt 5 Triple Docking Station, with a total of three downstream TB5 ports. I’m very grateful to Sven who has provided his results from an OWC TB5 hub to support those from the dock.
Other methods are the same as those described previously. The TB5 SSD tested is one of the three currently available or on pre-order from OWC, Sabrent and LaCie (no, I’m not going to tell you which, as I’m still in the process of reviewing it).
Results obtained from measuring read and write speeds on a single SSD at a time are summarised in the table below. Those that are concerning are set in bold italics.
Performance of the TB5 SSD when connected direct or through the dock was highest of all, at around 150% of the speeds achieved by the next fastest, the USB4 SSD, and around 180-250% of those of the TB3 SSD, the slowest. Direct connection of USB4 SSDs to the TB5 port in macOS 15.3 resulted in even faster speeds than a TB4/USB4 connection using 15.2. Thus, a TB5 port with 15.3 delivers best performance over all types of external SSD tested here.
Of the three exceptionally poor results seen previously:
This anomalous behaviour when writing to a TB3 SSD through a TB5 dock was also found by Sven in his tests on the OWC TB5 hub, and seems common to most if not all TB4 and TB5 docks and hubs. I haven’t seen any explanation as to why it occurs so widely.
Encouraged by these substantial improvements with Sequoia 15.3, I measured simultaneous read and write speeds to a pair of USB4 SSDs connected to the Kensington TB5 dock. Stibium has a GUI so can’t perform this in perfect synchrony. However, it reads or writes a total of 160 files in 53 GB during each of these tests, and outlying measurements are discounted using the following robust statistical techniques:
Measured transfer rates in each of the two USB4 SSDs are given in the table below.
The first row of results gives the two write speeds measured simultaneously when both the SSDs were writing, similarly the second gives the two read speeds for simultaneous reading, and the bottom line shows speeds when one SSD was writing and the other reading at the same time.
When both SSDs were transferring their data in the same direction, individual speeds were about 3.1 GB/s, but when the directions of transfer were mixed, with one reading and the other writing, their speeds were similar to a single USB4 SSD. Total transfer speed was thus about 6.2 GB/s when in the same direction, but 7.2 GB/s when in opposite directions.
Many of those buying into TB5 are doing so early because of its promised support for multiple displays. I haven’t yet seen sufficient evidence to decide whether this has improved with Sequoia 15.3. However, OWC has qualified full display support of its TB5 hub as requiring “native Thunderbolt 5 display or other displays that support USB-C connections and DisplayPort 2.1”. One likely reason for multiple displays not achieving the support expected, such as three 4K at 144 Hz, is that they don’t support DisplayPort 2.1.
As the evidence here suggests, macOS 15.3 or later is required for full TB5 performance, and OWC now includes that in the specifications for its TB5 hub. It also states that TB3 support requires macOS 15, although USB4 should still be supported in macOS 14 Sonoma.
As the first batches of Thunderbolt 5 SSDs are starting to ship, this is a good time to take stock of what we have seen so far. Here I include some current results obtained from testing one of these new products.
So far, products that have already been released or are well into pre-order include:
Starting prices for these are under $/€/£ 400 for 2 TB, making them a little more expensive than better USB4 or Thunderbolt 3 equivalents. I am very grateful to Jozef for telling me that Acasis are promising to ship a TB5-compatible empty enclosure imminently, although at $300 it doesn’t make self-assembly attractive yet. If you’re interested, he has included a link to the product page in his comment below.
The only Macs that currently support TB5 are the latest MacBook Pro and Mac minis equipped with M4 Pro or Max chips. Although they’re claimed to support full-speed TB5 performance when running macOS Sequoia 15.0, problems have been reported in achieving that, at least with TB5 docks and hubs. If you intend using any TB5 peripheral, then you’d best start with 15.3, which has been reported as solving at least some of those problems. This may also explain some of the anomalies in SSD performance that have been claimed by a few early testers.
Other Apple silicon Macs should run TB5 SSDs in USB4 40 Gb/s mode, which should still be significantly faster than TB3. Intel Macs don’t support TB5 or USB4, though, so they’re most likely to fall back to run them as USB 3.2 Gen 2, at 1 GB/s, which would be a deep disappointment for the cost.
Beware of claimed performance of TB5 SSDs by their manufacturers and in product reviews. Testing them isn’t as straightforward as with slower products.
For a start, quoted results are often taken from apps such as Blackmagic Disk Speed Test and AmorphousDiskMark. Although both have their uses, they also have their limits, notably that they only measure read and write speed for one test size, normally 5 GB in the former and 1 GiB in the latter. As the graph below shows, there are substantial differences in speed between different sizes.
The upper pair of unbroken lines show read and write speeds when operating in TB5 mode over 80 Gb/s to the Mac mini M4 Pro, and the pair below them with broken lines shows speeds when operating in USB4 mode over 40 Gb/s to a MacBook Pro M3 Pro. Calculated overall read/write speeds were 5.2/5.5 GB/s for TB5, and 3.1/3.1 GB/s for USB4. For comparison, Blackmagic returned 4.8/5.2 GB/s, and Amorphous 6.8/5.3 GB/s. Needless to say, the latter is the result being quoted by the manufacturer, despite its write speed looking highly suspect.
Highest read and write speeds were measured at 400 MB file size, and there were only small differences once sizes exceeded 600 MB. However, speeds for files below 10 MB were considerably lower than for those of 100 MB and larger. Fortunately, Blackmagic Disk Speed Test and AmorphousDiskMark both use sizes in the more linear range above 600 MB, but their results don’t take into account smaller file sizes, which are more common in many real-world circumstances. As the 80th centile write speed was 5.62 GB/s, the write speed of 6.8 GB/s reported by Amorphous doesn’t appear representative.
All those benchmark results are subject to a major caution: they don’t provide any information on caching, used by most faster SSDs to improve performance. This typically uses part of the memory in SLC mode, sacrificing capacity for speed. I’ve seen a figure of 50 GB of cache quoted for one TB5 SSD that I’ve tested, and sure enough, once 50 GB has been written to it, its write speed drops from around 5.5 to 1.4 GB/s.
Few are likely to write more than 50 GB to an SSD in a single continuous session, but for those that do, it’s important to know when the SLC cache is likely to become full, and for write speed to fall to little better than USB 3.2 Gen 2. Neither Blackmagic Disk Speed Test nor AmorphousDiskMark can measure that for you.
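Measuring it yourself isn't difficult, though: write more data than the cache could plausibly hold and watch for the point where throughput collapses. A rough sketch, with a hypothetical mount point, that writes 100 GiB in 1 GiB chunks (make sure the target has the space, and expect some wear):

```python
import os
import time

PATH = "/Volumes/TB5SSD/cache_probe.bin"  # hypothetical mount point
CHUNK = os.urandom(1024**3)               # 1 GiB of incompressible data

with open(PATH, "wb") as f:
    for i in range(100):                  # 100 GiB in total
        start = time.monotonic()
        f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())              # force the chunk onto the SSD
        rate = len(CHUNK) / (time.monotonic() - start) / 1e9
        print(f"chunk {i + 1:3d}: {rate:.2f} GB/s")
# Full-speed chunks are landing in the SLC cache; the chunk where the
# rate drops sharply (say ~5.5 to ~1.4 GB/s) marks the cache filling.

os.remove(PATH)
```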
Thunderbolt 5 SSDs are starting to realise their promise of significantly faster read and write performance than even USB4 SSDs. Although TB5 SSDs are supported by a small range of the latest Macs, they should still be faster than TB3 when they fall back to USB4 on other Apple silicon Macs. However, if they’re to be used with Intel Macs, the likelihood is that they will fall further back to USB 3.2 Gen 2.
If you’re likely to stream very large quantities of data to them, more than about 50 GB, then you’ll need to obtain an estimate of the size of their SLC cache, and of their write performance when that is full, or you could be in for a big disappointment. The trouble with TB5 SSDs is that they’re sufficiently fast to fill their cache very quickly, in this case in around 10 seconds when writing at full speed.
I’ve gone back to my original article analysing TB5 performance, and the maximum transfer rate it can achieve over its 80 Gb/s works out at 6 GB/s. I therefore conclude that the 6.8 GB/s reported above can only be bogus, although similar figures are now being claimed by manufacturers and other testing sites. If you see TB5 claimed to exceed 6 GB/s, don’t trust those figures.
I suspect we've all run into devices freezing, getting water-damaged or stolen, or even being held to ransom by hackers. Statistically, losing data is only a matter of time. That's why you back up: otherwise the memories of your youth and everything precious in them may be gone for good, and that hurts, perhaps even more than a breakup.
"Data is only backed up when the same data exists on different devices; merely moving it to another device isn't a backup.[1]"
I've talked with some very capable people online and found that even they don't do real backups. Dumping everything onto a single hard drive, throwing data into assorted cloud drives, or, among programmers, just keeping code on GitHub: none of these is a backup. Leaving aside the Chinese cloud-drive services, the likes of iCloud and Google Drive are essentially sync drives, and both iCloud and Google Drive have suffered outages in the past. Don't put all your eggs in one basket; a cunning rabbit keeps three burrows.
The point of a backup is to allow lossless restoration. If you can't restore from it, storing any amount of data is meaningless.
The first type is the full-disk backup: for example, moving from one computer to another with everything identical. Some backups don't include system settings, so after restoring the data you have to redo app settings, font sizes, system language, and so on. The advantage of preparing a system backup is that it comes with a bootable system, such as WinPE for Windows or a bootable installer for macOS.
The difference between differential and incremental backups is that a differential backup contains everything changed since the last full backup, while an incremental backup contains only what has changed since the most recent backup of any kind, whether full, differential or incremental. It's a bit confusing…
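In code, the distinction is just a matter of which timestamp you compare against. A toy sketch selecting files by modification time under a hypothetical data folder (real backup tools track state far more robustly than this):

```python
import os
import time

def changed_since(root: str, since: float) -> list[str]:
    """Paths under root whose modification time is after `since`."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > since:
                hits.append(path)
    return hits

last_full = time.time() - 7 * 86400  # hypothetical: full backup a week ago
last_any = time.time() - 1 * 86400   # hypothetical: latest backup yesterday

differential = changed_since("data", last_full)  # vs the last FULL backup
incremental = changed_since("data", last_any)    # vs the last backup of ANY kind
```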
On top of the 3-2-1 rule, here are two more points[2]:
In this age of highly developed technology, the safest way to store data is still magnetic tape, buried in some corner (like the film GitHub buried in the Arctic[3]).
Of course we don't need to go to such extremes; a reliable cloud backup will do, and it counts as an off-site backup too. As said above, ordinary sync drives don't count as backups, so choose a cloud storage service designed specifically for backup. Which of those are reliable? There aren't many to choose from.
The widely acknowledged choice is Backblaze Personal Backup: $7 per month, unlimited backup capacity, running since 2007, and stable. It's the first choice for the lazy, backing up invisibly as long as the app is running in the background, and there's a mobile app as well.
For users in mainland China, the network blockade means it has to run through a proxy, so watch your proxy traffic on the first backup. Stability of access from China also has to be considered, and for now there's no solution to that.
There's also Arq, which strictly speaking doesn't provide cloud storage itself: running since 2009, $49.99 one-off for a single device, or Premium at $59.99/year (which includes 1 TB of cloud space; not recommended).
It's used together with third-party cloud storage. The official description: "Use it for encrypted, versioned backups of your important files." The encryption is a nice touch.
For local storage I recommend hard drives, which come in two kinds: HDD (mechanical) and SSD (solid-state). There's a great deal to know about drives that I won't expand on here; if you're interested, search this blog for SSD.
By comparison, the advantages of an SSD are:
Its disadvantages:
Whether HDD or SSD, "any storage device is a consumable" that might die on you any day, so get yourself a spare drive. Data can sometimes be rescued from a dead SSD, but recovery is difficult (it depends on which part failed), and crucially it's expensive (upwards of ¥2,000), troublesome, and a security risk (remember the Edison Chen photo scandal).
Mac users can give Time Machine a try: it's free, but not that good, as its scheduling logic is too simplistic and Apple hasn't put much work into it, hence the recommendations below.
SuperDuper (since 2004, $27.95 one-off; there's a free version with fewer features that's still usable). Its strengths are that it can turn an SSD into a bootable drive, and it supports differential backups.
Carbon Copy Cloner 6, $39.99, is an even longer-established app, with the official slogan "A leader in Mac backups since 2002". It's faster and more stable than Time Machine, and well regarded.
The backup apps above and Backblaze all offer trials, so you can pick the one that suits you.
I don't use a NAS, so I can't speak to that; choose according to your needs. Even with a NAS you need double insurance, so keep an off-site backup: a NAS may not fear drive failure, but there are still forces beyond your control.
Besides full-disk backups, also take care of data that lives only in the cloud, such as the passwords I keep in Bitwarden. Nothing else comes to mind for now.
Having read all this: have you backed up today? Just do it!
For a computer's storage, an SSD is unquestionably a better choice than a mechanical drive in every respect, but, viewed through the lens of Marxist contradiction theory, this creates a contradiction between "slow HDDs and expensive SSDs". My laptop runs a 128 GB + 1 TB combination, and remains, and will long remain, caught in this "basic contradiction of personal-computer storage".
Until I came across PrimoCache. Recommended.
PrimoCache is software that can turn physical RAM, an SSD, or a flash drive into a cache for your hard disks. It automatically keeps data read from a disk in the faster device, so that the next time the system needs that data it can be read quickly from the cache instead of the slower disk, effectively improving the disk's access performance.
Chinese official site: http://www.romexsoftware.com/zh-cn/primo-cache/index.html
Platform: Windows (similar tools exist for *nix)
Licence: shareware
Update, two months later:
After two months of real-world use, the software isn't as perfect as advertised. A few programs freeze the machine completely the moment they run (跑跑卡丁车 / KartRider; I confirmed PrimoCache was the cause), the whole system feels somehow less stable (error dialogs of unclear meaning pop up occasionally), and there's the extra memory it occupies.
In short, I don't recommend using it to accelerate the system drive, nor in most other situations. It's only worth using if you have games you play regularly that, at tens of GB each, are too big to fit on the SSD.
The idea itself I find excellent, and caching is one of the most widely used techniques across computer hardware and software. It's similar to the original Intel Rapid Storage Technology (RST) and Intel Optane: all use a small amount of fast SSD as a cache to accelerate a slow HDD, giving a computer the capacity of an HDD with speeds approaching an SSD.
So which data gets cached onto the SSD? That's decided by an algorithm, which automatically selects the most frequently used data on the HDD.
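The principle is the same as any frequency-based cache. Here's a toy sketch of the idea in Python (not PrimoCache's actual algorithm, which isn't public): count reads per block, and keep the hottest blocks in the fast tier:

```python
from collections import Counter

class HotBlockCache:
    """Toy LFU-style cache: the most-read HDD blocks live in the fast tier."""

    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.reads = Counter()  # block id -> read count
        self.cached = {}        # block id -> data held in the fast tier

    def read(self, block_id, read_from_hdd):
        self.reads[block_id] += 1
        if block_id in self.cached:
            return self.cached[block_id]  # hit: served at SSD/RAM speed
        data = read_from_hdd(block_id)    # miss: served at HDD speed
        if len(self.cached) < self.capacity:
            self.cached[block_id] = data
        else:
            # Evict the coldest cached block if this one is now hotter.
            coldest = min(self.cached, key=self.reads.__getitem__)
            if self.reads[block_id] > self.reads[coldest]:
                del self.cached[coldest]
                self.cached[block_id] = data
        return data
```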
PrimoCache differs from RST or Optane in that it doesn't require the latest Intel motherboard or an Optane memory module; it's compatible with any existing SSD.
PrimoCache can also use RAM as a level-1 cache, with the SSD as level-2.
Yes, this is another of PrimoCache's distinctive features. RAM transfer speeds are measured in GB per second, an order of magnitude faster than an SSD, so it can effectively accelerate the SSD as well. (Though I haven't perceived the difference directly; presumably by that point the bottleneck is no longer I/O.)
I can finally keep those tens-of-GB games on the mechanical drive with a clear conscience, and let PrimoCache give them satisfying load speeds.
I used 12 GB of SSD as the level-2 cache and 1 GB of RAM as the level-1 cache; benchmarking the mechanical drive gave the following results:
Without the cache:
With the cache:
Note that because the cache works by keeping frequently used data in the SSD and RAM for quick retrieval, the random reads and writes issued by a benchmark haven't been through that pre-caching, so the figures can't reflect the real-world effect.
Even so, a clear improvement is visible.
Drawbacks found so far:
Also, although I have 16 GB of RAM, I used less than 2 GB of it as disk cache: most large applications already use RAM to accelerate themselves, so there's no need to duplicate that, and plenty of free RAM is itself a way of keeping a computer responsive.