
A brief history of defragging

By: hoakley
5 October 2024 at 15:00

When the first Macs with internal hard disks started shipping in 1987, they were revolutionary. Although they still used 800 KB floppy disks, here at last was up to 80 MB of reliable high-speed storage. It’s worth reminding yourself of just how tiny those capacities are, and the fact that the largest of those hard disks contained one hundred times as much as each floppy disk. By the 1990s, with models like the Mac IIfx, internal hard disks had doubled in capacity, and reached as much as 12 GB at the end of the century.

Over that period we discovered that hard disks also needed routine maintenance if they were to perform optimally, and all the best users started to defrag their hard disks.

Consider a large library containing tens or even hundreds of thousands of books. Major reference works are often published in a series of volumes. When you need to consult several consecutive volumes of such a work, how they’re stored is critical to the task. If someone has tucked each volume away in a different location within the stack, assembling those you need is going to take a long while. If all its volumes are kept in sequence on a single shelf, that’s far quicker. That’s why fragmentation of data has been so important in computer storage.

The story of defragging on the Mac is perhaps best illustrated in the rise and fall of Coriolis Systems and iDefrag. Coriolis was started in 2004, initially to develop iPartition, a tool for non-destructive re-partitioning of HFS+ disks, but its founder Alastair Houghton was soon offering iDefrag, which became a popular defragging tool. This proved profitable until SSDs became more widespread and Apple released APFS in High Sierra, forcing Coriolis to shut down in 2019, when defragging Macs effectively ceased.

All storage media, including memory, SSDs and rotating hard disks, can develop fragmentation, but most serious attention has been paid to the problem on hard disks. This is because of the electro-mechanical mechanism they use for seeking to locations on the spinning platter. To read a fragmented file sequentially, the read-write head has to keep physically moving to new positions, which takes time, and contributes to ageing of the mechanism and eventual failure. Although solid-state media can incur a slight overhead when reading a file from scattered blocks, this isn't thought significant, and attempts to address it invariably bring greater disadvantages.
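
To put rough numbers on that head movement, here's a back-of-envelope sketch in Swift. The fragment count, seek time and rotational latency are all illustrative assumptions for a typical 7,200 rpm disk, not measurements:

import Foundation

// Back-of-envelope estimate of what fragmentation alone costs on a hard disk.
// All figures are illustrative assumptions for a typical 7,200 rpm disk.
let fragments = 1_000            // extents a badly fragmented file is split into
let averageSeek = 0.010          // seconds per head movement (assumed)
let rotationalLatency = 0.004    // about half a revolution at 7,200 rpm (assumed)

// Each fragment after the first costs roughly one seek plus rotational latency.
let extraSeconds = Double(fragments - 1) * (averageSeek + rotationalLatency)
print(String(format: "Extra time spent just moving the head: %.1f seconds", extraSeconds))
// prints about 14 seconds, before a single extra byte has been transferred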

Fragmentation on hard disks comes in three quite distinct forms: file data across most of the storage, file system metadata, and free space. Different strategies and products have been used to tackle each of those, with varying degrees of success. While few doubt the performance benefits achieved immediately after defragging each of those, little attention has been paid to demonstrating lasting benefits, which remain dubious.

Manually defragging HFS+ hard disks was always a questionable activity, as Apple added background defragmentation to Mac OS X 10.2, released two years before Coriolis was even founded. By El Capitan and Sierra, that built-in defragging was highly effective, and the need for manual defragging had almost certainly become a popular myth. Nor did many consider the adverse effects of those intense periods of disk activity on hard disk longevity.

[Screenshot: TechTool Pro, 1999]

The second version of TechTool Pro, in 1999, offered a simplified volume map for what it termed optimisation, with options to defragment files alone, or the whole disk contents including free space.

[Screenshot: TechTool Pro, 2000]

By the following year, TechTool Pro was paying greater attention to defragging file system metadata, shown here in white. This view was animated during defragmentation, showing its progress in gathering all the files and free space into contiguous areas. TechTool Pro is still developed and sold by MicroMat, and is now in version 20.

[Screenshot: Speed Disk, 2001]

A similar approach was adopted by its competitor Speed Disk, here with even more categories of contents.

[Screenshot: Drive Genius 3, 2010]

By 2010, my preferred defragger was Drive Genius 3, shown here working on a 500 GB SATA hard disk, one of four in my desktop Mac; version 6 is still sold by Prosoft. One popular technique for defragmentation with systems like that was to keep one of its four internal disks empty, then periodically clone one of the other disks to that, and clone it back again.

[Screenshot: DiskWarrior, 2000]

Alsoft’s DiskWarrior is another popular maintenance tool, shown here in 2000. This doesn’t perform conventional defragmentation, but restructures the trees used for file system metadata, and remains an essential tool for anyone maintaining HFS+ disks.

Since switching to APFS on SSDs several years ago, I have missed defragging like a hole in the head.

What performance should you get from different types of storage?

By: hoakley
10 September 2024 at 14:30

External storage is invariably sold with ‘up-to’ performance figures. In practice, you’ll seldom realise anything like the write or read speeds claimed. And when it comes to prolonged tasks like that first full Time Machine backup, no matter how fast you thought the drive would be, it always takes longer than expected.

Over the last few years I have tested and reviewed many examples of different types of external storage, from basic USB 3 hard drives, to the latest USB4 SSD enclosures, and NAS packed with fast SSDs. This article draws on all those test results to give you a better idea of what to expect when they’re being used with your Mac.

Results quoted here are typical of tests performed mostly using a Mac Studio M1 Max, but unless otherwise indicated they should be similar for recent Intel models. They’re summarised in the table below.

[Table: summary of measured write and read speeds by storage type]

Write speeds are given for:

  • the single 50 MB write test performed by Time Machine before each backup;
  • 500 concurrent writes of 4 KB each, performed in those same Time Machine tests;
  • calculated net write speed over a first full backup to APFS of at least 400 GB;
  • general write speed measurement using my app Stibium, which gives broadly similar results to other leading benchmarking apps.

General read speeds are also obtained using Stibium, and are similar to those from other apps. All speeds are given in MB/s for consistency.
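
For illustration, here's a minimal Swift sketch of the two write tests described above. This isn't the code Time Machine or Stibium actually uses, the volume path is an assumption, and a serious benchmark also has to defeat write caching, one good reason to use a purpose-built tool instead:

import Foundation

// Minimal sketch of the two write tests: one 50 MB file, then 500 concurrent
// 4 KB files. Replace the path with a folder on the volume under test.
let testDir = URL(fileURLWithPath: "/Volumes/External/speedtest")   // assumed path
try? FileManager.default.createDirectory(at: testDir, withIntermediateDirectories: true)

// Single 50 MB write.
let bigData = Data(count: 50_000_000)
var start = Date()
try? bigData.write(to: testDir.appendingPathComponent("big.dat"))
var elapsed = Date().timeIntervalSince(start)
print(String(format: "50 MB single write: %.1f MB/s", 50.0 / elapsed))

// 500 concurrent writes of 4 KB each.
let smallData = Data(count: 4_096)
start = Date()
DispatchQueue.concurrentPerform(iterations: 500) { i in
    try? smallData.write(to: testDir.appendingPathComponent("small\(i).dat"))
}
elapsed = Date().timeIntervalSince(start)
print(String(format: "500 x 4 KB concurrent writes: %.1f MB/s",
             500.0 * 4_096.0 / 1_000_000.0 / elapsed))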

Before looking at individual types of storage, one obvious and important result is the effect of throttling by macOS on Time Machine backup performance. Considering Time Machine’s own tests, writing a single 50 MB file is performed consistently at around 200-225 MB/s to local storage of whatever type, and multiple concurrent writes of 4 KB files reach around 20-23 MB/s regardless of local storage type. Those hold good even when you back up to a fast Thunderbolt 3 SSD, and backing up to a NAS is little quicker unless it’s over 2.5GbE to an NVMe SSD. Local transfer speeds only differ more substantially in general tests, when they aren’t throttled as they are in Time Machine.

Hard disks

When writing to or reading from a local hard disk, performance varies substantially according to which sectors on the hard disk are being accessed. This is a well-known phenomenon, and a result of geometry: the platter spins at constant speed, and the longer tracks at its periphery pass more data under the read-write head in each revolution than the shorter tracks near its centre. Ranges given here take that into account: the lower figure is for inner sectors, and the higher for outer ones. Some users compensate for this effect by only ever using the outer half of a disk’s sectors to obtain better performance, but that reduces available capacity, and effectively doubles the cost per TB.
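
As a rough illustration of that geometry, this sketch computes sustained transfer rate from rotational speed and sectors per track. The sector counts are assumptions chosen to resemble a typical 7,200 rpm disk, where zoned recording puts about twice as many sectors on the outer tracks as the inner ones:

import Foundation

// Sustained transfer rate ≈ sectors per track × sector size × revolutions per second.
let revsPerSecond = 7_200.0 / 60.0       // 120 revolutions per second
let sectorBytes = 4_096.0                // modern 4 KB sectors
let innerSectors = 200.0                 // assumed sectors per inner track
let outerSectors = 400.0                 // assumed sectors per outer track

let innerMBps = innerSectors * sectorBytes * revsPerSecond / 1_000_000
let outerMBps = outerSectors * sectorBytes * revsPerSecond / 1_000_000
print(String(format: "inner ≈ %.0f MB/s, outer ≈ %.0f MB/s", innerMBps, outerMBps))
// inner ≈ 98 MB/s, outer ≈ 197 MB/s: it's the 2:1 ratio that matters, not the exact figures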

SSDs

SATA SSDs may be cheapest, but they’re also slowest, and with Macs they generally don’t enjoy Trim or SMART health indicator support. Of the two, Trim support is usually the more important: without it, they can accumulate blocks waiting to be erased and returned for further use, and their write (but not read) speed can then fall as low as 100 MB/s. Unless used for largely static storage, this is a significant risk.

NVMe SSDs deliver twice the performance of SATA models, and generally enjoy Trim but not SMART indicator support. This makes them far better suited to general use, as their write speeds should be sustained from new throughout their working life.

USB 3.2 Gen 2, Thunderbolt 3, USB4

Translating the commonly quoted transfer speeds of these three protocols into real-world speeds turns out to be complex. In practice, this is what you can expect to see (the arithmetic behind the conversion is sketched after the list):

  • USB 3.2 Gen 2 at 10 Gb/s is slightly less than 1 GB/s
  • Thunderbolt 3 at 32 Gb/s is up to 3 GB/s
  • USB4 at 40 Gb/s is up to 3.4 GB/s.
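
As a sketch of that conversion, take USB 3.2 Gen 2: the quoted 10 Gb/s is a line rate, trimmed first by its 128b/132b encoding, then by protocol overhead. The protocol efficiency used here is an assumed figure for illustration:

import Foundation

// From quoted line rate to realistic throughput for USB 3.2 Gen 2.
let lineRateGbps = 10.0
let encodingEfficiency = 128.0 / 132.0   // 128b/132b line encoding, about 0.97
let protocolEfficiency = 0.85            // assumed allowance for UAS and bus overhead
let realWorldGBps = lineRateGbps * encodingEfficiency * protocolEfficiency / 8.0
print(String(format: "expect ≈ %.2f GB/s", realWorldGBps))   // ≈ 1.03 GB/s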

All recent models of Mac, both Intel and Apple silicon, should realise full performance over USB 3.2 Gen 2 and Thunderbolt 3, but support for USB4 is limited to Apple silicon. Unless a USB4 drive or enclosure specifically includes Thunderbolt 3 as a fallback, you should expect it to fall back to USB 3.2 Gen 2 when connected to an Intel Mac, at just under 1 GB/s, less than a third of its speed over USB4.

NAS

Although I haven’t made any systematic comparison between AFP and SMB network protocols, I can see no consistent difference in their performance, when used with the latest versions of macOS and NAS software. The latter, though, can be critical: older versions of NAS software can perform poorly when used over SMB with recent macOS. Keeping your NAS software up to date is important.

Throttling of Time Machine backup writing isn’t supposed to occur when backing up over a network, and there is some evidence here to support that, with significantly better results for 50 MB test files. However, those are only apparent when using NVMe SSDs in the NAS, with a wired Ethernet 2.5GbE connection to provide sufficient bandwidth.

Check TM performance

Provided that your Mac is running a recent version of macOS and backing up to APFS, it’s simple to read the two write performance tests run at the start of each Time Machine backup using my free T2M2. Alternatively, you can read them using the Time Machine custom log extract in Mints. In T2M2 they should look something like:
Destination IO performance measured:
Wrote 1 50 MB file at 238.02 MB/s to "/Volumes/ThunderBay2" in 0.210 seconds
Concurrently wrote 500 4 KB files at 35.58 MB/s to "/Volumes/ThunderBay2" in 0.058 seconds
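
If you'd rather pull those entries from the unified log yourself, this Swift sketch shows one way, roughly what T2M2 and Mints do on your behalf. Reading the system log store requires appropriate privileges, and the message filter is only a guess at picking out the performance lines, so treat it as illustration rather than a finished tool:

import Foundation
import OSLog

// Read recent Time Machine entries from the unified log (macOS 12 or later).
let store = try OSLogStore(scope: .system)
let position = store.position(date: Date().addingTimeInterval(-3600))   // last hour
let predicate = NSPredicate(format: "subsystem == %@", "com.apple.TimeMachine")

for entry in try store.getEntries(at: position, matching: predicate) {
    if let log = entry as? OSLogEntryLog,
       log.composedMessage.contains("IO performance") ||
       log.composedMessage.contains("Wrote") {
        print("\(log.date)  \(log.composedMessage)")
    }
}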

Check general performance

Although there are other apps that will do this, I developed Stibium for this purpose. Follow the ‘gold standard’ procedure detailed in its Help Reference to obtain the most accurate and reproducible results. Stibium can test any storage you can access in the Finder, including all local devices and networked systems such as NAS.

Further reading

Which external drives have Trim and SMART support?
How to evaluate an external SSD
You can read my reviews in MacFormat and MacLife magazines, available in the App Store.
