Is it worth storing Time Machine backups on a faster drive?

Folk wisdom is that there’s little point in wasting a faster drive to store your Time Machine backups, because when those backups are made, their write speed is throttled by macOS. This article considers how true that might be, and what benefits there might be to using faster solid-state storage.

I/O Throttling

Documentation of I/O policy is in the man page for the getiopolicy_np() call in the Standard C Library, last revised in 2019, and prevailing settings are in sysctl’s debug.lowpri_throttle_* values. These draw a distinction between I/O to local disks, being “I/O sent to the media without going through a network” such as I/O “to internal and external hard drives, optical media in internal and external drives, flash drives, floppy disks, ram disks, and mounted disk images which reside on these media”, and those to “remote volumes” “that require network activity to complete the operation”. The latter are “currently only supported for remote volumes mounted by SMB or AFP.”

Inclusion of remote volumes is a relatively recent change, as in the previous version of this man page from 2006, they were explicitly excluded as “remote volumes mounted through networks (AFP, SMB, NFS, etc) or disk images residing on remote volumes.”

Five policy levels are supported:

  • IOPOL_IMPORTANT, the default, where I/O is critical to system responsiveness.
  • IOPOL_STANDARD, which may be delayed slightly to allow IOPOL_IMPORTANT to complete quickly, and presumably referred to in sysctl as Tier 1.
  • IOPOL_UTILITY, for brief background threads that may be throttled to prevent impact on higher policy levels, in Tier 2.
  • IOPOL_THROTTLE, for “long-running I/O intensive work, such as backups, search indexing, or file synchronization”, that will be throttled to prevent impact on higher policy levels, in Tier 3.
  • IOPOL_PASSIVE, for mounting files from disk images, and the like, intended more for server situations so that lower policy levels aren’t slowed by them.

However, the idea that throttled I/O is intentionally slowed at all times isn’t supported by the explanation of how throttling works: “If a throttleable request occurs within a small time window of a request of higher priority, the thread that issued the throttleable I/O is forced to a sleep for a short period. This slows down the thread that issues the throttleable I/O so that higher-priority I/Os can complete with low-latency and receive a greater share of the disk bandwidth.”

Settings in sysctl for Tier 3 give the window duration as 500 milliseconds, and the sleep period as 200 ms, except for SSDs, whose sleep period is considerably shorter at just 25 ms. Those also set a maximum size for I/O at 131,072 bytes. You can view those settings in the debug section of Mints’ sysctl viewer.
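If you prefer the command line, you can inspect the prevailing settings with sysctl itself. This is a minimal sketch; the exact variable names can vary between macOS versions, so listing the whole group is safest:

# list all the I/O throttling settings in the debug section
sysctl debug | grep lowpri_throttle
# check whether throttling is currently enabled (1) or disabled (0)
sysctl debug.lowpri_throttle_enabled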

Some years ago, it was discovered that the user can globally disable IOPOL_THROTTLE and presumably all other throttling policy with the command
sudo sysctl debug.lowpri_throttle_enabled=0
although that doesn’t persist across restarts, and isn’t documented in the man page for sysctl. This is provided as an option in St. Clair Software’s App Tamer, to “Accelerate Time Machine backups”, for those who’d rather avoid the command line.
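If you do resort to that, remember to restore the default once your backup has completed, rather than leaving throttling disabled until the next restart:

sudo sysctl debug.lowpri_throttle_enabled=1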

Performance checks

Before each Time Machine backup starts, backupd runs two checks of disk performance, by writing one 50 MB file and 500 4 KB files to the backup volume, reported in the log as
Checking destination IO performance at "/Volumes/ThunderBay2"
Wrote 1 50 MB file at 286.74 MB/s to "/Volumes/ThunderBay2" in 0.174 seconds
Concurrently wrote 500 4 KB files at 23.17 MB/s to "/Volumes/ThunderBay2" in 0.088 seconds
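You can retrieve those entries yourself with the log command. This is a sketch that assumes a backup ran within the last hour, and that these entries are made under Time Machine’s usual subsystem:

# show Time Machine's entries from the last hour, and pick out the performance checks
log show --last 1h --predicate 'subsystem == "com.apple.TimeMachine"' | grep -A 2 "IO performance"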

With normal throttling in force, there’s surprisingly wide variation that appears only weakly related to underlying SSD performance, as shown in the table below.

[image: tmbackupssds2 (table of performance test results)]

Results from the first three full backups are far greater than those for subsequent automatic backups, with single-file tests recording 900-1100 MB/s, and multi-file tests 75-80 MB/s. Routine backups after those ranged from 190-365 MB/s and 3-32 MB/s respectively, less than half the initial rates. Most extraordinary are the results for the OWC Express 1M2 enclosure when used with an M3 Pro: on the first full backup, the 50 MB test achieved 910 MB/s, but an hour later, during the first automatic backup, that fell to 191 MB/s.

[image: tmfirstbackup (CPU History during a first full backup)]

Evidence suggests that in macOS Sequoia, at least, first full backups may now run differently, and make more use of P cores, as shown in the CPU History window above. In comparison, automatic backups are confined to the E cores. However, the backup itself still runs at a tenth of the measured write speed of the SSD.

Time Machine runs the first full backup as a ‘manual backup’ rather than scheduling it through the DAS-CTS dispatching mechanism. It’s therefore plausible that initial backups aren’t run at Background QoS, giving them access to the P cores, and run at a higher I/O throttling policy than IOPOL_THROTTLE, giving them higher-priority access to I/O.

Changing policy

When tested here previously, disabling throttling policy appeared to have little effect on Time Machine’s initial performance checks, but transfer rates during backups were as much as 150% of those achieved when throttling policy was in force.

The big disadvantage of completely disabling throttling policy is that this can only be applied globally, so including I/O for Spotlight indexing, file synchronisation, other background tasks and those needing high-priority access to I/O. Leaving the policy disabled in normal circumstances could readily lead to adverse side-effects, allowing Spotlight indexing threads to swamp important user storage access, for example.
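If you do experiment with disabling it, one way to limit those side-effects is to do so only for the duration of a single backup started on demand. A minimal sketch, assuming you accept the global effect while the backup runs:

#!/bin/sh
# disable I/O throttling globally; this doesn't persist across restarts
sudo sysctl debug.lowpri_throttle_enabled=0
# start a backup now, and block until it has completed
tmutil startbackup --block
# restore the default policy as soon as the backup has finished
sudo sysctl debug.lowpri_throttle_enabled=1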

SSD or hard disk?

Despite wide variation in test results, all those for the 50 MB file were far in excess of the highest write speed measured for any equivalent hard disk, 148 MB/s (on the faster outer tracks). Overall backup speeds for the first full backup to a USB4 SSD were also double that hard disk write speed. When throttling does occur, the period of sleep enforced on writes to a hard disk is eight times longer than that for an SSD, making the impact of I/O throttling greater for a hard drive than an SSD.

Another good reason for preferring SSD rather than hard disk storage is the use of APFS, and its tendency over time to result in severe fragmentation in the file system metadata on hard disks. Provided that an SSD has Trim support it should continue to perform well for many years.

Conclusions

  • macOS applies throttling policies on I/O with both local and networked storage to ensure that I/O-intensive tasks like backing up don’t block higher priority access.
  • Disabling those policies isn’t wise because of its general effects.
  • Those policies use different settings for I/O to SSDs from those for other types of storage to allow for their different performance.
  • First full backups to SSD appear to run at higher priority to allow them to complete more quickly.
  • As a result, effective write speeds to SSDs during Time Machine backups are significantly faster than could be achieved to hard disks.
  • When the consequences of using APFS on hard disks are taken into account, storing Time Machine backups on SSD has considerable advantages in terms of both speed and longevity.

How to browse your Mac’s log from months ago

I hope you like magic tricks, as this article shows how you can do the impossible with your Mac’s log: browse log entries from the distant past, from months or even years ago. My example is taken from my new Mac mini M4 Pro, which I set up on the afternoon of 8 November 2024, 19 days ago. To demonstrate that this isn’t a screenshot taken earlier, I’ve included a calendar widget to confirm that it was taken on 27 November, when all the logs from 8 November had gone.

[image: logarchive (Ulbow browsing the logarchive, with calendar widget)]

That log extract contains entries made by Migration Assistant when it was busy migrating from a backup to my new M4 Mac. As that’s so long ago now, those entries have since been removed from the log files on my Mac. So how did I do that? The answer, of course, is in a Time Machine, with a little help from my free log browser Ulbow.

Accessing old logs

There are three different ways you can access the macOS Unified log:

  • the live log, with entries displayed as they’re written to the log, as you might in Console;
  • the active log, where you browse the entries currently saved in the log, including those made in the recent past, something Console can’t help you with, but Ulbow is designed to do;
  • an archive copy of the log, where you can browse entries from the log at the time that archive was made.

Logarchives are structured bundles containing log and other files normally stored inside folders in /var/db, collected at a given moment. You can create them using the log collect command, but that only works with the active log on your Mac. One simple way to create a logarchive is to run a sysdiagnose, whose output files contain a logarchive of your Mac’s active log files. However, log collect can’t turn an arbitrary collection of log files into a logarchive; that’s something only Ulbow can do. If you can feed it the right log files, say from a backup, Ulbow will turn them into a logarchive ready for browsing.
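For comparison, collecting the active log into a logarchive needs only a single command, run with elevated privileges:

# collect the last 24 hours of the active log into a logarchive
sudo log collect --last 1d --output ~/Desktop/active.logarchive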

To turn backed-up log files into a logarchive, you’ll need a Time Machine backup of the Mac’s Data volume. Backups made using other utilities should also work, provided they contain the required directories in /var/db.

Ulbow’s Logarchive Tool

To create a logarchive from a copy of the log files, open the Logarchive Tool in Ulbow’s Window menu.

[image: logarchive4 (Ulbow’s Logarchive Tool)]

Prepare the log files by creating a new folder and copying two folders from the backup of the Mac’s Data volume to it: /private/var/db/diagnostics and /private/var/db/uuidtext. The former contains the log files and their supporting data such as timesync files, while the latter is a collation of supplementary content arranged by UUID. I like to name that folder and the logarchive itself using the Time Machine backup timestamp to remind me of the last date and time of entries in that logarchive.
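In Terminal, that copying might look like the following; both paths here are hypothetical, so adjust them to match your backup and your own naming scheme:

# create a working folder named after the backup timestamp (paths are examples only)
mkdir ~/Desktop/logs-2024-11-08-143000
cp -R "/Volumes/Backups/2024-11-08-143000/Data/private/var/db/diagnostics" ~/Desktop/logs-2024-11-08-143000/
cp -R "/Volumes/Backups/2024-11-08-143000/Data/private/var/db/uuidtext" ~/Desktop/logs-2024-11-08-143000/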

Click on the Make Logarchive button, select the enclosing folder, and Ulbow will do its best to turn those into a logarchive bundle, on external storage if you wish. You can then Catalogue that, Analyse its contents, and open it for browsing in Ulbow.

To browse those log entries, open a New window in Ulbow, then open the logarchive using that command from the File menu. Once you have made a logarchive, you can also open individual .tracev3 log files within it. The only difficulty with these is discovering an appropriate time to start browsing from. For that, the statistics supplied when you Analyse the logarchive are invaluable.

[image: logarchive5 (logarchive analysis statistics)]

As Apple has never published a specification for logarchive bundles, this may not work in every case, but in my experience it can even create a usable logarchive when some of the logs are missing. It should also work for the equivalent folders obtained from an Apple device, such as an iPhone or iPad.

Summary of what you need

  • A Time Machine or other backup containing log files in /private/var/db/ for the period you want to browse.
  • Ulbow’s Logarchive Tool from its Window menu.

Should you migrate to your new Mac, and when?

Between now and the end of the year, many will be taking on a new Mac, either a brand new M4 model or an older Mac sold or passed on. Before you even think about setting it up, you need to work out how you’re going to move your existing apps, Home folder and media libraries to it. Are you going to trust that to Migration Assistant in macOS? If so, should you do that when first personalising and configuring the Mac, or leave it until later? This article is intended to help you plan that.

What do you want to migrate?

The first important question is what you want to migrate from your old Mac to the new one. If you intend taking across much of what’s on the old one, including all your current user settings, then the best time to do that is when you’re first configuring the Mac, shortly after powering it up. Instead of going through all the steps to create a new user account, migrating at this stage will set that account up for you.

If you’re currently running an older version of macOS, or don’t want to transfer settings from your current user account, then you might do better to delay any migration until later, when you’ve already created yourself a new account and set it up as you want.

If you want to start from scratch, and only move the files and folders you want, then you may prefer to give Migration Assistant a miss, and perform the migration by hand. This is much harder, and even if you think you know what to do, you may well be surprised with the tasks that prove difficult, and those that don’t work at all. Modern macOS is exceedingly complicated and largely undocumented, so it’s easy to omit something important, and waste time trying to move it by hand.

When should you migrate?

Migration Assistant isn’t perfect, and the version of macOS pre-installed on a new Mac inevitably needs updating as soon as you can. As a result, in the past I’ve recommended that you delay performing migration until you’ve set up the primary admin account and brought the Mac fully up to date with macOS.

With new M4 Macs, I now recommend that, if you do want to migrate much or all from your old Mac, you do so during initial configuration, rather than later. The first batches of M4 Macs come with the regular edition of macOS Sequoia 15.1, and need to be updated to a newer build number, taking them from 24B83 to 24B2083, which is only for M4 models and VMs. Unlike some previous Macs, the version of 15.1 they ship with is robust, and its Migration Assistant works well, so migrating early should go smoothly, if you want to move much of what’s on your old Mac.

If you remain undecided, then don’t forget that you can always migrate later.

What to migrate from

If you’re going to migrate much, you want your new Mac to be connected to the source it’s going to use by the quickest means possible. In practice, that means by Thunderbolt 3/4 if you can, rather than over a regular network. Either connect the two Macs back-to-back with a good Thunderbolt 4 cable, or migrate from a backup on a Thunderbolt 3 SSD, if you can.

Migration from a backup should work correctly whether that backup was created by Time Machine, Carbon Copy Cloner, SuperDuper, or any other good utility. All you need to do is ensure that it was made recently.

iCloud

Just as when migrating iOS and iPadOS devices, sharing using iCloud often proves a major advantage; indeed, it may be the only way to fully synchronise the Data Protection keychain that’s shared when you share your Keychain in iCloud. Although that’s backed up by Time Machine, restoring or copying it from a backup may not work.

If you don’t normally share your Keychain in iCloud, or other databases such as Contacts and Calendar, consider turning those on and letting them sync fully before migration, as that could save you trouble. Once migration is complete and your new Mac has synced fully, you can always disable their sharing again. If you’re selling or passing your old Mac on, you’ll be disconnecting it from your iCloud account and using Erase All Content and Settings (EACAS) anyway.

Anticipate problems

Migration Assistant, whether used during initial configuration or later, works best when:

  • both Macs are running the same version of macOS (and Migration Assistant);
  • it’s migrating the primary admin user account from your old Mac to the primary admin account of your new Mac;
  • there are no disk errors, damaged folders or files in either source or destination storage;
  • the old user account is intact and doesn’t use any ‘tricks’ in its Home folder.

You do get a choice as to what you migrate, but if you’re going to have to work through that in detail, picking some items and not others, you might find a painstaking manual migration better. If the versions of macOS on the two Macs (or backup) are far apart, then you may find yourself having to work through the new Mac correcting any misunderstandings that arise between the two versions.

If you migrate later, you’ll be given the option of merging from the old account into the current one on your new Mac. If you include settings in that migration, then your new settings will be overwritten by the old ones, and may need further attention once that migration has completed.

Post-migration checks

Once migration has finished, check through all your key settings to ensure they’re just as you want for your new Mac. As far as security is concerned, a quick check with SilentKnight covers key items like FileVault and security data updates. One important setting that I noticed hadn’t been enabled on my new M4 Mac mini was Find My, which takes a bit of searching in System Settings to get right. You should also set aside some time and patience to attend to all the details in Privacy & Security, to ensure nothing is amiss there.

The current version of Migration Assistant does try to migrate the contents of well-known hidden folders including /usr/local, and should copy across any command tools and other files you have installed there. It may not cope so well with more extensive customisation in those hidden folders, though, and you should check those locations carefully following migration.

Migration mess

Very occasionally, Migration Assistant is like a bull in a china shop, and creates havoc, or fails to complete. Allow time so that you can work out how best to resolve that. If the worst comes to the worst, don’t be afraid to start from scratch with a clean install of macOS and an empty Data volume.

Pulling tricks

Old methods of instant migration using volume clones only work with old versions of macOS, prior to Big Sur or even Catalina. While you might get away with using them now, they often cause fundamental problems. Bite the bullet and take the easy way by migrating properly, so your new Mac gets off to the right start.

I wish you fair winds and success!

See also

Migrating to a new Mac, and claiming Time Machine backups
Apple Support

Migrating to a new Mac, and claiming Time Machine backups

Over the last few years, Migration Assistant in macOS has improved considerably, and when used wisely it can save you a great deal of time getting your new Mac up and running. This article explains how you can use it in Sequoia, and how that works with Time Machine and the backups from your old Mac.

There are two occasions when you can transfer your apps, settings and documents using the process of migration: during the initial personalisation and configuration of macOS, or using Migration Assistant at any time later. I always used to postpone that until macOS was set up and brought up to date, but lately I’ve preferred to get this over with first.

This time I faced a more difficult problem: I was replacing my old Mac Studio M1 Max with my new Mac mini M4 Pro, both of which are intended to use my single Studio display. While I could have run one of them headless (without a display connected), Migration Assistant expects both Macs to be used interactively, which would have required some cunning juggling. Instead of opting to connect them back-to-back, I therefore chose to migrate during initial setup from the Studio’s last Time Machine backup. As the backup storage for my Studio was a Thunderbolt 3 external SSD, this proved surprisingly quick.

Migration during setup

Before unboxing your new or previously owned Mac, prepare the source for migration, and any cable required to connect that Mac with the source. Migrating from a backup is one of the simplest and fastest options, as all you need do is move your old Mac’s external backup storage over to the new Mac.

In the past, migration used the fastest connection available between two Macs that were connected back-to-back, but Apple’s current support note now states that Wi-Fi will be used. In my case, given that both Macs have 10 GbE, that could have been a disappointment if they were already connected by Ethernet cable.

Another important step is ensuring that your new Mac has a keyboard and mouse/trackpad. As I was moving those from my Studio, I connected both to the new Mac mini by their charging leads to ensure they’d work and pair correctly.

Unbox your Mac, connect it up to your migration source, its new keyboard and mouse/trackpad, then start it up. When the setup sequence reaches the Migration Assistant screen, select the migration source.

[image: tmmigrate (Migration Assistant screen during setup)]

If you’re migrating directly from your old Mac, at this point you’ll need to open Migration Assistant on that Mac and set it to migrate to another Mac. Follow the prompts to continue the process.

Migration after setup

Initiate this by running Migration Assistant in /Applications/Utilities, and follow its prompts to select and connect to the source as above.

Time Machine backups

Assuming that your old Mac made Time Machine backups, and you want your new Mac to continue doing so, now’s the time to connect that backup storage, if it wasn’t already used as the migration source. When you do, you’ll be offered two options, to claim the existing backups for the new Mac, or to leave them for the old one.

[image: tmbackupclaim (dialog offering to claim existing backups)]

You should see this dialog if:

  • you have migrated settings from an old Mac to this one,
  • you have migrated settings from a Time Machine backup of another Mac to this one,
  • you cloned the boot volume group used to start up your Mac,
  • your Mac’s logic board has been replaced.

If you claim the existing backups for your new Mac, then they’ll be used as part of its backup history, but you won’t be able to use them with the old Mac. You may prefer to leave those old backups as they are, and gradually delete them to free up space. Provided that you create the new backup volume for your Mac in the same container, old and new backups will share the same free space on your backup storage.

This apparently replaces the old tmutil inheritbackup command, which no longer appears to work with backups to APFS.

Postscript

I am very grateful to Csaba Fitzl for two useful pieces of information:

  • In macOS Sequoia 15.1 at least, tmutil inheritbackup does work again, when entered with the path to the backup volume, e.g. /Volumes/[TMDISK].
  • Claiming old backups is only offered when migration has taken place. If you set up your new Mac without migrating, it doesn’t appear to be offered, although you should then be able to use tmutil inheritbackup instead.
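In Terminal, claiming old backups that way might look like this, where the volume name is a placeholder for your own backup disk:

# claim an old Mac's backups for this Mac (volume path is a placeholder)
sudo tmutil inheritbackup /Volumes/[TMDISK]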

As Nicolai points out in the comments below, Apple’s support note is incorrect about migration only using Wi-Fi connections to another Mac. This should work as it always has, and select the fastest connection between them, which could include back-to-back Thunderbolt.

Why you need to make archives, and how to

We back up to ensure that we can recover files, whole volumes, our complete Mac if needed. When that crucial document you were working on earlier has vanished, or becomes damaged, or disaster strikes a disk, backups are essential. But how do you preserve all those documents that used to come on paper, records, correspondence and certificates? How will you or your successors be able to retrieve them in ten or thirty years’ time? This brief article considers how you should archive them safely, which isn’t the same as backing them up.

By archiving, I mean putting precious files somewhere they can be retrieved in at least ten years’ time. They may include financial, business, employment and personal records, as well as all finished work that you want to record for posterity. For most, they’ll also include a careful selection of still images, movies, and the more important documents you might create, such as books, theses and papers. They’re what you and the law want kept in perpetuity, and what should be retrievable even after you’re gone.

To see how this can be achieved, I consider: the storage medium to be used, file formats that will be retrievable, how to index them for access, physical storage conditions, and the checks of their integrity that are needed.

Storage medium

While backups are most likely to be kept on hard disks or SSDs, neither of those is in the least suitable for archives, as they have relatively short lifetimes and are too sensitive to storage conditions. Instead, you need a removable medium, today probably Blu-ray disks intended for archival use, such as M-DISC.

For those with copious archives of importance beyond their family, Sony used to offer Optical Disk Archive systems, but those products were discontinued last year and don’t appear to have a suitable replacement. This illustrates one of the problems with planning for the more distant future: today’s technology can all too easily become orphaned.

Businesses are increasingly turning to cloud services to store their archives, but for the great majority of us the recurring cost makes this impractical. In any case, best practice should be to use cloud services as a supplement to a physical archive. iCloud is more affordable for the storage of most important documents, but requires a Legacy Contact to be appointed.

File formats

While it’s fine to archive documents in their original format, as you do in your backups, it’s also important to extract their contents into more permanent formats. Among those most likely to prove durable for the next 50-100 years are:

  • UTF-8 (and formerly ASCII) for text files,
  • JPEG and PNG for still images,
  • audio, video and rich media using one of the widely-used compression standards and file formats,
  • XML-based open document standards,
  • CSV for data,
  • PDF provided that it complies with one of the archival standards PDF/A-1 to /A-4.

You may find it worthwhile tarring together large collections of smaller files, but don’t use an unusual compression or ‘archive’ format, which might prove inaccessible in the future.
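Plain uncompressed tar remains one of the safest containers. A brief sketch, assuming a folder of correspondence for 2024:

# bundle a folder into a single uncompressed tar file
tar -cf correspondence-2024.tar correspondence-2024/
# list its contents to verify the archive before burning it to disk
tar -tf correspondence-2024.tar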

Indexing and access

For larger collections, even when structured carefully, a thorough list of contents in UTF-8 text format is essential. While there are index and search tools that could help, in this respect too archives are different from backups. If you’re going to be gathering TB of files, look at some of the commercial solutions. Although some are free to use, like the long-established Greenstone, they aren’t intended for casual users and might prove demanding.

Physical storage conditions

Never print on the disk itself, as that can result in its degradation. Keep paper records alongside disks in the same container, but not inside the cases themselves, where they could damage the disks.

Archive optical disks should be stored in cases with centre hub security, not in sleeves. They must be kept in a cool, dry and dark container, in which there is no mould or fungus. They also need to be protected from physical threats such as flood and fire. Firesafes are popular furniture for this, but you must then ensure that their combination or keys are readily available and not separated from the safe.

There used to be a vogue for commercial data repositories, often underground storage sites that had been repurposed. Not only were those expensive, but many failed to take the care that they promised, and plenty went bankrupt and put their contents at risk. If you can arrange it, store one copy with you, and another at a friend’s or relative’s at least a few miles away.

Integrity checks

If you’re serious about maintaining your archives, some form of integrity checking, such as that provided by my free utilities Dintch, Fintch and cintch, is essential. Check a sample on each disk once a year, to ensure that none has started to deteriorate. If you do detect errors, that’s the time to burn a replacement before the original is lost to decay.
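If you don’t use those utilities, standard tools can provide a cruder equivalent. This sketch writes SHA-256 digests to a manifest stored alongside the archive, for verification at each annual check:

# record a checksum for every file in the archive folder
find archive-2024 -type f -exec shasum -a 256 {} + > archive-2024.sha256
# later, verify all files against that manifest
shasum -a 256 -c archive-2024.sha256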

Conclusion

Backups are for recovery, while archives are for posterity. Start building your archives now, and keep them safe for the future.

Further reading

How to burn a Blu-ray disc in Monterey
Wikipedia point of entry

Postscript

Some of you are reporting widespread claims that some Blu-ray burners no longer work in Sequoia. I have therefore repeated the process that I described in Monterey, using exactly the same Pioneer burner connected to a Mac Studio M1 Max running macOS 15.1. I’m delighted to report that it still works perfectly, and I see no reason that any other recent Pioneer optical drive should prove incompatible. All you need to do is follow the instructions.

Happy archiving!

Planning complex Time Machine backups for efficiency

Time Machine (TM) has evolved to be a good general-purpose backup utility that makes best use of APFS backup storage. However, it does have some quirks, and offers limited controls, that can make it tricky to use with more complex setups. Over the last few weeks I’ve had several questions from those trying to use TM in more demanding circumstances. This article explains how you can design volume layout and backup exclusions for the most efficient backups in such cases.

How TM backs up

To decide how to solve these problems, it’s essential to understand how TM makes an automatic backup. I’ve provided full details in other articles, so here I’ll outline the major steps and how they link to efficiency.

At the start of each automatic backup, TM checks to see if it’s rotating backups across more than one backup store. This is an unusual but potentially invaluable feature that can be used when you make backups in multiple locations, or want added redundancy with two or more backup stores.
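You can check whether more than one destination is configured with tmutil:

# list all configured backup destinations and their IDs
tmutil destinationinfo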

Having selected the backup destination, it removes any local snapshots from the volumes to be backed up that were made more than 24 hours ago. It then creates a fresh snapshot on each of those volumes. I’ll consider these later.

Current versions of TM normally don’t use those local snapshots to work out what needs to be backed up from each volume, but (after the initial full backup) should rely on that volume’s record of changes to its file system, FSEvents. These observe two lists of exclusions: those fixed by TM and macOS, including the hidden version database on each volume and recognised temporary files, and those set by the user in TM settings. Among the latter should be any very large bundles and folders containing huge numbers of small files, such as the Xcode app, as they will back up exceedingly slowly even to fast local backup storage, and can tie up a network backup for many hours. It’s faster to reinstall Xcode rather than restore it from a backup.
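You can also manage user exclusions with the tmutil command; as I understand it, the -p option makes a fixed-path exclusion of the kind added in TM settings:

# exclude Xcode from backups by fixed path
sudo tmutil addexclusion -p /Applications/Xcode.app
# confirm that the exclusion is in force
tmutil isexcluded /Applications/Xcode.app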

Current TM backups are highly efficient, as TM can copy just the blocks that have changed; older versions of TM backing up to HFS+ could only copy whole files. However, that can be impaired by apps that rewrite the whole of each large file when saving. Because the backup is being made to APFS, TM ensures that any sparse files are preserved, and handles clone files as efficiently as possible.

Once the backup has been written, TM then maintains old backups, to retain:

  • hourly backups for the last 24 hours, to accompany hourly local snapshots,
  • daily backups over the previous month,
  • weekly backups stretching back to the start of the current backup series.

These are summarised in the diagram below.

[image: tmseqoutline1 (diagram of backup retention)]

Local snapshots

TM makes two types of snapshot: on each volume it’s set to back up, it makes a local snapshot immediately before each backup, then deletes that after 24 hours; on the backup storage, it turns each backup into a snapshot from which you can restore backed up files, and those are retained as stated above.

APFS snapshots, including TM local snapshots, include the whole of a volume, without any exceptions or exclusions, which can have surprising effects. For example, a TM exclusion list might block backing up of large virtual machine files, so that typical backups require only 1-2 GB of backup storage; but because those VMs change a lot, each local snapshot could require 25 GB or more of space on the volume being backed up. One way to assess this is to check through each volume’s TM exclusion list and consider whether items being excluded are likely to change much. If they are, then they should be moved to a separate volume that isn’t backed up by TM, and so won’t have hourly snapshots.
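To assess how much space local snapshots are taking on a volume, list what’s being retained there; this example uses the startup volume:

# list Time Machine's local snapshots on the startup volume
tmutil listlocalsnapshots /
# show details of the snapshots APFS is retaining there
diskutil apfs listSnapshots /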

Some workflows and apps generate very large working files that you may not want to clutter up either TM backups or local snapshots. Many apps designed to work with such large files provide options to relocate the folders used to store static libraries and working files. If necessary, create a new volume that’s excluded completely from TM backups to ensure those libraries and working files aren’t included in snapshots or backups.

TM can’t run multiple backup configurations with different sets of exclusions, though. If you need to do that, for instance to make a single nightly backup of working files, then do so using a third-party utility in addition to your hourly TM backups.

Moving such files can make a huge difference to free space on the volumes being backed up, as the size of each snapshot is effectively multiplied by 24, since TM tries to retain every hourly snapshot for the last 24 hours.

Macs that aren’t able to make backups every hour can also accrue large snapshots, as they may retain older snapshots, which only grow larger over time as the volume changes from the moment each snapshot was made.

While snapshots are a useful feature of TM, the user has no control over them, and can’t shorten their period of retention or turn them off altogether. Third-party backup utilities like Carbon Copy Cloner can, and may be more suitable when local snapshots can’t be managed more efficiently.

iCloud Drive

Like all backup utilities, TM can only back up files in iCloud Drive when they’re downloaded to local storage. Although some third-party utilities can work through your iCloud Drive files, downloading them automatically as needed, TM can’t do that, and will only back up files that are already downloaded at the time it makes a backup.

There are two ways to ensure files stored in iCloud Drive will be backed up: either turn Optimise Mac Storage off (in Sonoma and later), or download the files you want backed up and ‘pin’ them to ensure they can’t be removed from local storage (in Sequoia). You can pin individual files or whole folders and their entire contents by selecting the item, Control-click for the contextual menu, and selecting the Keep Downloaded menu command.

Key points

  • Rotate through 2 or more backup stores to handle different locations, or for redundancy.
  • Back up APFS volumes to APFS backup storage.
  • Exclude all non-essential files, and bundles containing large numbers of small files, such as Xcode.
  • Watch for apps that make whole-file changes, thus increasing snapshot and backup size.
  • Store large files on volumes not being backed up to minimise local snapshot size.
  • If you need multiple backup settings, use a third-party utility in addition to TM.
  • To ensure iCloud Drive files are backed up, either turn off Optimise Mac Storage (Sonoma and later), or pin essential files (Sequoia).

Further reading

Time Machine in Sonoma: strengths and weaknesses
Time Machine in Sonoma: how to work around its weaknesses
Understand and check Time Machine backups to APFS
Excluding folders and files from Time Machine, Spotlight, and iCloud Drive

APFS incompatibilities and how to live with them

APFS has many of the features of modern file systems that make most efficient use of space, and others that aren’t found in more traditional file systems like HFS+. While those should all work well when used locally with other APFS volumes, they can prove incompatible with other file systems, and may have untoward side effects. This article looks at those features that you could encounter problems with, and how best to work around them.

Sparse files

Most modern file systems store files that contain significant amounts of blank or absent data in a special sparse file format. In APFS, this is implemented by only allocating file extents to those blocks that do contain data. In many cases, this can save significant disk space. Initially, sparse files were unusual in APFS, but over time they have become increasingly common, and can now be found in some types of disk image, virtual machine storage, and in some databases. Note that disk image types whose name includes the word sparse, sparse bundles and sparse disk images, don’t use sparse file format at all, but are the victims of an unfortunate name collision.

Neither macOS nor APFS can simply convert regular files to sparse format; for a file to be written in sparse format, the code writing it must explicitly skip the empty space. As a result, sparse files are prone to explode to full size when they’re copied or moved unless that’s between two local volumes both using APFS. Examples of where you should expect them to explode include:

  • copy or move to HFS+, as that has no sparse file format
  • copy between Macs using AirDrop or file sharing
  • back up to network storage, although local Time Machine backups to APFS should preserve them
  • copy or move to any other local file system other than APFS, even if that file system has its own support for sparse files.

In each of those cases, sparse files explode in size as they’re copied from the source volume. If you have a 100 GB sparse file that only takes 20 GB of local disk space, when it’s copied over the network, or to a local HFS+ volume, the full 100 GB has to be transferred.
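You can demonstrate this with a file containing a hole. A sketch, assuming the working directory is on an APFS volume:

# write 1 MB of data 10 GB into a new file, leaving a sparse hole before it
dd if=/dev/zero of=sparse.dat bs=1m count=1 seek=10240
# logical size should be about 10 GB, while the space actually allocated is tiny
ls -lh sparse.dat
du -h sparse.dat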

One potential workaround is to compress the sparse file before transferring it. All good compression algorithms will work efficiently on the blank space in the file, so when compressed its size could be as small as the original sparse file. However, when it’s decompressed, even on another APFS volume, it will explode to full size. For disk images, that can be corrected by mounting them, as APFS will then trim their contents, and the disk image should be saved back into sparse format.

Clone files

These are two distinct files that share common data, normally the result of duplicating the original within the same APFS volume. Those two cloned files then only require the storage of the whole file, plus those data blocks that differ between them. This only works within the same volume, and the moment that either of the clones is moved or copied to any other volume, it assumes full size, as it can no longer share data with the other clone.

However, most other file systems don’t support file cloning in this way. When you duplicate a file in an HFS+ volume, there’s no shared data between them, and the two require twice the amount of space as one does.
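On APFS, the Finder’s Duplicate command creates clones, and cp can do the same in Terminal, provided source and destination are on the same APFS volume:

# make a clone that shares its data blocks with the original
cp -c original.mov clone.mov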

Snapshots

Snapshots consist of a complete copy of the volume at an instant in time. They require a copy of the file system metadata for that volume, and retain copies of storage blocks for each file as they’re changed after the snapshot is made, so that you can roll the volume back to its state at the moment of the snapshot.

Although Time Machine backups contain snapshots of the volume they back up, snapshots can’t normally be copied to another disk or volume. Some have been able to make a complete copy of a disk including its snapshots using the dd command tool, but that should be considered experimental. In all other circumstances, snapshots stay where they were made, but you can always copy from an existing snapshot to reconstitute a volume.

Directory hard links

These aren’t available in APFS, but are supported in HFS+, where they’re used extensively in Time Machine backups. They work like regular hard links, but act on directories rather than individual files. They can’t be copied in any way to an APFS volume, but can be used to reconstitute a volume.

Extended attributes

These are additional metadata associated with files and folders, and are fully supported in both APFS and HFS+. However, many are treated as being ephemeral, and may not be preserved during copying or other actions. The system of flags used to determine which are preserved is detailed in this article.

Several other file systems also support extended attributes, but copying between them is unlikely to transfer them between file systems. In some cases, extended attributes are preserved using AppleDouble format, in which each file can have a hidden shadow with its name prefixed with ._ (dot-underscore) characters. These are most often seen in FAT and ExFAT volumes, but are prone to confuse users of other computers.
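To see which extended attributes a file carries before copying it to another file system, use ls and xattr:

# the @ flag marks items with extended attributes, listing their names and sizes
ls -l@ document.pdf
# print the contents of each attribute in full
xattr -l document.pdf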

Key points

  • APFS sparse files are only preserved when copying or moving files between local APFS volumes. In other circumstances they explode to full size.
  • Good compression methods can keep a sparse file to a similar size, but decompression explodes them to full size. Disk images may then be restored as sparse files after they have been mounted again from APFS.
  • Clone files aren’t preserved when copied or moved to any other volume.
  • Snapshots can’t be copied at all, although they can be reconstituted as volumes.
  • APFS doesn’t support the directory hard links used in Time Machine backups to HFS+, but a backup can be reconstituted as a volume.
  • Extended attributes are preserved between APFS and HFS+, but not with other file systems, except in the shadow files of AppleDouble format, seen in FAT and ExFAT volumes.

Understand and check Time Machine backups to APFS

Over the last 17 years, Time Machine has backed up many millions of Macs every hour, ranging from the last Power Macs to the latest M3 models. It has changed greatly over that time. This article uses my free utility T2M2 to explain how Time Machine now makes automatic backups to APFS storage, and how to check that. This account is based primarily on what happens in macOS Sonoma and Sequoia, when they’re backing up automatically every hour to local storage.

T2M2 offers three features to see what’s happening with Time Machine and backups:

  • Check Time Machine button, to analyse backups made over the last few hours.
  • Speed button, to view progress reports in the log during a long backup.
  • Browse log button, to show filtered log extracts in fullest detail.

These are each detailed in T2M2’s Help book. Here I’ll concentrate on the first and last, and defer speed checks to that documentation.

Browse log for detail

In practice, you’re most likely to view T2M2’s analysis using the Check Time Machine button first, but here I’ll walk through the greater detail available in log extracts, to build understanding of the sequence of events in each automatic backup.

[image: t2m2rept1 (T2M2 log extract with coloured entries)]

To help you see which subsystems are involved in each stage, T2M2 displays their entries in different colours, and you can show or hide each of those.

Automatic backups are dispatched by the DAS-CTS mechanism, whose entries are shown in red (DAS) and blue (CTS). They schedule backups so they don’t occur when the Mac is heavily loaded with other tasks, and call a chain of services to start the backup itself. Dispatching is now reliable over long time periods, but can be delayed or become irregular in some situations. Inspecting those DAS-CTS entries usually reveals the cause.

From there on, most informative log entries are made by Time Machine itself.

Preparations are to:

  • Find each backup destination, decide whether any rotation scheme applies, and so determine the destination for this backup.
  • Check write performance to the backup destination.
  • Find the machine store on the destination.
  • Determine which local snapshots should be removed, and delete them.
  • Create local snapshot(s) as ‘stable’ snapshot(s), and mount them.
  • Mount previous local snapshot(s) as ‘reference’ snapshot(s).
  • Determine how to compute what needs to be backed up from each source. This should normally use FSEvents to build the EventDatabase.
  • Scan the volumes to be backed up to determine what needs to be backed up.
  • Estimate the total size of the new backup to be created.

Once backing up starts, entries cover:

  • Copying designated items to the destination.
  • Posting periodic progress reports during longer backups.

When that’s complete, closing stages are to:

  • Report details of the backup just completed.
  • Set local snapshots ready for the next backup, with the ‘stable’ snapshot(s) marked as ‘reference’, and unmount local snapshots.
  • Delete any working folder marked ‘incomplete’ that was left by a previous backup.
  • Create the destination backup snapshot.
  • Delete any old backups due for removal.
  • Report backup success or other outcome.

They’re summarised in this diagram. Although derived from Sonoma, Sequoia brings no substantial change.

[image: tmbackup14a (diagram of the backup sequence)]

Or its PDF tear-out version: tmbackup14b

Check Time Machine summaries

[image: t2m2rept2 (T2M2 Check Time Machine summary)]

T2M2 analyses all those log entries to produce a summary of how Time Machine has performed over the last few hours. That is broken down into sections as follows.

Backup destination. This is given with the free space currently available on that volume, followed by the results of write speed measurements made before each backup starts.
Backing up 1 volumes to ThunderBay2 (/dev/disk10s1,TMBackupOptions(rawValue: 257)): /Volumes/ThunderBay2
Current free space on backup volumes:
✅ /Volumes/ThunderBay2 = 1.67 TB
Destination IO performance measured:
Wrote 1 50 MB file at 286.74 MB/s to "/Volumes/ThunderBay2" in 0.174 seconds
Concurrently wrote 500 4 KB files at 23.17 MB/s to "/Volumes/ThunderBay2" in 0.088 seconds

Backup summary. This should be self-evident.
Started 4 auto backup cycles, and 0 manual backups;
completed 4 volume backups successfully,
last backup completed successfully 53.4 minutes ago,
Times taken for each auto backup were 0.3, 0.3, 0.2, 0.2 minutes,
intervals between the start of each auto backup were 61.4, 60.2, 60.3 minutes.

Backups created and deleted (or ‘thinned’).
Created 4 new backups
Thinned:
Thinning 1 backups using age-based thinning, expected free space: 1.67 TB actual free space: 1.67 TB trigger 50 GB thin 83.33 GB dates: (
"2024-10-01-043437")

Local snapshots created and deleted.
Created 4 new snapshots, and deleted 4 old snapshots.

How the items to be backed up were determined. This shows which methods Time Machine used to work out which items needed to be backed up; this should normally be FSEvents, once a first full backup has been made.
Of 4 volume backups:
0 were full first backups,
0 were deep scans,
4 used FSEvents,
0 used snapshot diffs,
0 used consistency scans,
0 used cached events.

Backup results. For each completed backup, Time Machine’s report on how many items were added, and their size.
Finished copying from volume "External1"
1 Total Items Added (l: Zero KB p: Zero KB)
3 Total Items Propagated (shallow) (l: Zero KB p: Zero KB)
403274 Total Items Propagated (recursive) (l: 224.93 GB p: 219.31 GB)
403275 Total Items in Backup (l: 224.93 GB p: 219.31 GB)
1 Directories Copied (l: Zero KB p: Zero KB)
2 Files Move Skipped (l: Zero KB p: Zero KB) | 2 items propagated (l: 6 KB p: 12 KB)
1 Directories Move Skipped (l: Zero KB p: Zero KB) | 403269 items propagated (l: 224.93 GB p: 219.31 GB)

Error messages.
✅ No error messages found.

iCloud Drive and pinning

Backing up the contents of iCloud Drive is a longstanding problem for Time Machine, if Optimise Mac Storage is enabled. Those files in iCloud Drive that are stored locally should be backed up, but any that have been evicted from local storage could only be included if they were to be downloaded prior to the backup starting.

In Sonoma and earlier, the only way to ensure that an evicted file in iCloud Drive is included in a Time Machine backup is to download it manually. Sequoia now lets you ‘pin’ individual files and whole folders so that they aren’t evicted, and will therefore always be included in Time Machine backups. This is explained here.

Discrepancies and glitches

T2M2 tries to make sense of log entries that don’t always behave as expected. As backups can be complex, there are situations when T2M2 may report something that doesn’t quite add up. This most commonly occurs when there have been manual backups, or a third-party app has been used to control the scheduling and dispatch of backups. If you do see figures that don’t appear quite right, don’t assume that there’s something wrong with Time Machine or your backups. Generally, these glitches disappear from later automatic backups, so you might like to leave it for a few hours before checking Time Machine again to see whether the problem has persisted.

Further reading

What performance should you get from different types of storage?
Where should you back up to?
How big a backup store do you need?
How Time Machine backs up iCloud Drive in Sonoma
Time Machine backing up different file systems
Excluding folders and files from Time Machine, Spotlight, and iCloud Drive
Snapshots aren’t backups
Time Machine in Sonoma: strengths and weaknesses
Time Machine in Sonoma: how to work around its weaknesses
Time Machine in Sonoma: Rotating backups and NAS
A brief history of Time Machine
