
Is it worth storing Time Machine backups on a faster drive?

By: hoakley
18 December 2024 at 15:30

Folk wisdom is that there’s little point in wasting a faster drive to store your Time Machine backups, because when those backups are made, their write speed is throttled by macOS. This article considers how true that might be, and what benefits there might be to using faster solid-state storage.

I/O Throttling

Documentation of I/O policy is in the man page for the getiopolicy_np() call in the Standard C Library, last revised in 2019, and prevailing settings are in sysctl’s debug.lowpri_throttle_* values. These draw a distinction between I/O to local disks, being “I/O sent to the media without going through a network” such as I/O “to internal and external hard drives, optical media in internal and external drives, flash drives, floppy disks, ram disks, and mounted disk images which reside on these media”, and those to “remote volumes” “that require network activity to complete the operation”. The latter are “currently only supported for remote volumes mounted by SMB or AFP.”

Inclusion of remote volumes is a relatively recent change, as in the previous version of this man page from 2006, they were explicitly excluded as “remote volumes mounted through networks (AFP, SMB, NFS, etc) or disk images residing on remote volumes.”

Five policy levels are supported:

  • IOPOL_IMPORTANT, the default, where I/O is critical to system responsiveness.
  • IOPOL_STANDARD, which may be delayed slightly to allow IOPOL_IMPORTANT to complete quickly, and presumably referred to in sysctl as Tier 1.
  • IOPOL_UTILITY, for brief background threads that may be throttled to prevent impact on higher policy levels, in Tier 2.
  • IOPOL_THROTTLE, for “long-running I/O intensive work, such as backups, search indexing, or file synchronization”, that will be throttled to prevent impact on higher policy levels, in Tier 3.
  • IOPOL_PASSIVE, for mounting files from disk images, and the like, intended more for server situations so that lower policy levels aren’t slowed by them.

However, the idea that throttled I/O is intentionally slowed at all times isn’t supported by the explanation of how throttling works: “If a throttleable request occurs within a small time window of a request of higher priority, the thread that issued the throttleable I/O is forced to a sleep for a short period. This slows down the thread that issues the throttleable I/O so that higher-priority I/Os can complete with low-latency and receive a greater share of the disk bandwidth.”

Settings in sysctl for Tier 3 give the window duration as 500 milliseconds, and the sleep period as 200 ms, except for SSDs, whose sleep period is considerably shorter at just 25 ms. Those also set a maximum size for I/O at 131,072 bytes. You can view those settings in the debug section of Mints’ sysctl viewer.
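
If you'd rather check those values in Terminal, sysctl can list them directly; a minimal sketch (exact variable names may differ slightly between macOS versions):

# list all the low-priority I/O throttling settings
sysctl debug | grep lowpri_throttle
# read just the master switch (1 means throttling is enabled)
sysctl debug.lowpri_throttle_enabled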

Some years ago, it was discovered that the user can globally disable IOPOL_THROTTLE and presumably all other throttling policy with the command
sudo sysctl debug.lowpri_throttle_enabled=0
although that doesn’t persist across restarts, and isn’t documented in the man page for sysctl. The same setting is provided as an option in St. Clair Software’s App Tamer, to “Accelerate Time Machine backups”, for those who’d rather avoid the command line.
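
If you do experiment with that, it’s worth reading the current value first and restoring it when you’ve finished; a minimal sketch:

# check the current setting (1 = throttling enabled, the default)
sysctl debug.lowpri_throttle_enabled
# disable all I/O throttling until the next restart
sudo sysctl debug.lowpri_throttle_enabled=0
# re-enable throttling without restarting
sudo sysctl debug.lowpri_throttle_enabled=1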

Performance checks

Before each Time Machine backup starts, backupd runs two checks of disk performance, by writing one 50 MB file and 500 4 KB files to the backup volume, reported in the log as
Checking destination IO performance at "/Volumes/ThunderBay2"
Wrote 1 50 MB file at 286.74 MB/s to "/Volumes/ThunderBay2" in 0.174 seconds
Concurrently wrote 500 4 KB files at 23.17 MB/s to "/Volumes/ThunderBay2" in 0.088 seconds
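
You can pull those entries from the log yourself; a sketch using the log command, assuming the wording of the message hasn’t changed in your version of macOS:

# show Time Machine's destination performance checks from the last three hours
log show --last 3h --predicate 'subsystem == "com.apple.TimeMachine" AND eventMessage CONTAINS "IO performance"'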

With normal throttling in force, there’s surprisingly wide variation that appears only weakly related to underlying SSD performance, as shown in the table below.

[Table: tmbackupssds2]

Results from the three first full backups are far higher than those for subsequent automatic backups, with single-file tests recording 900-1100 MB/s, and multi-file tests 75-80 MB/s. Routine backups after those ranged from 190-365 MB/s and 3-32 MB/s respectively, less than half the initial rates. Most extraordinary are the results for the OWC Express 1M2 enclosure when used with an M3 Pro: on the first full backup, the 50 MB test achieved 910 MB/s, but an hour later, during the first automatic backup, that fell to 191 MB/s.

[Image: tmfirstbackup, CPU History during a first full backup]

Evidence suggests that in macOS Sequoia, at least, first full backups may now run differently, and make more use of P cores, as shown in the CPU History window above. In comparison, automatic backups are confined to the E cores. However, the backup itself still runs at a tenth of the measured write speed of the SSD.

Time Machine runs the first full backup as a ‘manual backup’ rather than scheduling it through the DAS-CTS dispatching mechanism. It’s therefore plausible that initial backups aren’t run at Background QoS, giving them access to the P cores, and are run at a higher I/O policy than IOPOL_THROTTLE, giving them higher-priority access to I/O.

Changing policy

When tested here previously, disabling throttling policy appeared to have little effect on Time Machine’s initial performance checks, but transfer rates during backups were as much as 150% of those achieved when throttling policy was in force.

The big disadvantage of completely disabling throttling policy is that this can only be applied globally, so including I/O for Spotlight indexing, file synchronisation, other background tasks and those needing high-priority access to I/O. Leaving the policy disabled in normal circumstances could readily lead to adverse side-effects, allowing Spotlight indexing threads to swamp important user storage access, for example.
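
If you do decide to lift throttling, one way to limit those side-effects is to do so only for the duration of a single backup, then restore it immediately. This is only a sketch, and assumes tmutil’s --block option waits for the backup to complete before returning:

# disable I/O throttling, run one backup to completion, then restore throttling
sudo sysctl debug.lowpri_throttle_enabled=0
tmutil startbackup --block
sudo sysctl debug.lowpri_throttle_enabled=1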

SSD or hard disk?

Despite wide variation in test results, all those for the 50 MB file were far in excess of the highest write speed measured for any equivalent hard disk at 148 MB/s (for the outer disk). Overall backup speeds for the first full backup to a USB4 SSD were also double that hard disk write speed. When throttling does occur, the period of sleep enforced on writes to a hard disk is eight times longer than that for an SSD, making the impact of I/O throttling greater for a hard drive than an SSD.

Another good reason for preferring SSD rather than hard disk storage is the use of APFS, and its tendency over time to result in severe fragmentation in the file system metadata on hard disks. Provided that an SSD has Trim support it should continue to perform well for many years.

Conclusions

  • macOS applies throttling policies on I/O with both local and networked storage to ensure that I/O-intensive tasks like backing up don’t block higher priority access.
  • Disabling those policies isn’t wise because of its general effects.
  • Those policies use different settings for I/O to SSDs from those for other types of storage to allow for their different performance.
  • First full backups to SSD appear to run at higher priority to allow them to complete more quickly.
  • As a result, effective write speeds to SSDs during Time Machine backups are significantly faster than could be achieved to hard disks.
  • When the consequences of using APFS on hard disks are taken into account, storing Time Machine backups on SSD has considerable advantages in terms of both speed and longevity.

How to browse your Mac’s log from months ago

By: hoakley
28 November 2024 at 15:30

I hope you like magic tricks, as this article shows how you can do the impossible with your Mac’s log: browse log entries from the distant past, months or even years ago. My example is taken from my new Mac mini M4 Pro, which I set up on the afternoon of 8 November 2024, 19 days ago. To demonstrate that this isn’t a screenshot taken earlier, I’ve included a calendar widget to confirm that it was taken on 27 November, by which time all the logs from 8 November had gone.

[Image: logarchive, log extract browsed in Ulbow]

That log extract contains entries made by Migration Manager when it was busy migrating from a backup to my new M4 Mac. As that’s so long ago now, those entries have since been removed from the log files on my Mac. So how did I do that? The answer, of course, is in a Time Machine, with a little help from my free log browser Ulbow.

Accessing old logs

There are three different ways you can access the macOS Unified log:

  • the live log, with entries displayed as they’re written to the log, as you might in Console;
  • the active log, where you browse the entries currently saved in the log, including those made in the recent past; Console can’t help you with those, but Ulbow is designed to;
  • an archive copy of the log, where you can browse entries from the log at the time that archive was made.

Logarchives are structured bundles containing log and other files normally stored inside folders in /var/db, collected at a given moment. You can create them using the log collect command, but that only works with the active log on your Mac. One simple way to create a logarchive is to run a sysdiagnose, whose output files contain a logarchive of your Mac’s active log files. However, log collect can’t turn an arbitrary collection of log files into a logarchive; that’s something only Ulbow can do. Feed it the right log files, say from a backup, and it will turn them into a logarchive ready to browse.
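
For the active log, the commands are straightforward; a brief sketch, with output paths chosen as examples:

# make a logarchive of the active log covering the last two days
sudo log collect --last 2d --output ~/Desktop/active.logarchive
# or run a sysdiagnose, whose output includes a logarchive
sudo sysdiagnose -f ~/Desktop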

To do this, you’ll need a Time Machine backup of the Mac’s Data volume. Backups made using other utilities should also work, provided they contain the required directories in /var/db.

Ulbow’s Logarchive Tool

To create a logarchive from a copy of the log files, open the Logarchive Tool in Ulbow’s Window menu.

[Image: logarchive4, Ulbow’s Logarchive Tool]

Prepare the log files by creating a new folder and copying two folders from the backup of the Mac’s Data volume to it: /private/var/db/diagnostics and /private/var/db/uuidtext. The former contains the log files and their supporting data such as timesync files, while the latter is a collation of supplementary content arranged by UUID. I like to name that folder and the logarchive itself using the Time Machine backup timestamp to remind me of the last date and time of entries in that logarchive.
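
In Terminal, that amounts to a couple of recursive copies. The paths below are purely illustrative: substitute the path to the backup of your Mac’s Data volume, and your own folder name:

# example only: copy the two log folders from a backup into a new folder
mkdir ~/Desktop/2024-11-08-1430
cp -R "/Volumes/Backups/2024-11-08-143000.backup/Data/private/var/db/diagnostics" ~/Desktop/2024-11-08-1430/
cp -R "/Volumes/Backups/2024-11-08-143000.backup/Data/private/var/db/uuidtext" ~/Desktop/2024-11-08-1430/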

Click on the Make Logarchive button, select the enclosing folder, and Ulbow will do its best to turn those into a logarchive bundle, on external storage if you wish. You can then Catalogue that, Analyse its contents, and open it for browsing in Ulbow.

To browse those log entries, open a New window in Ulbow, then open the logarchive using that command from the File menu. Once you have made a logarchive, you can also open individual .tracev3 log files within it. The only difficulty with these is discovering an appropriate time to start browsing from. For that, the statistics supplied when you Analyse the logarchive are invaluable.
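
If you prefer the command line, you can also query a logarchive directly with the log command; a sketch, with the path and times given only as examples:

# browse entries from a logarchive between two example times
log show --archive ~/Desktop/2024-11-08-1430.logarchive --start "2024-11-08 14:00:00" --end "2024-11-08 15:00:00"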

[Image: logarchive5]

As Apple has never published a specification for logarchive bundles, this may not work in every case, but in my experience it can even create a usable logarchive when some of the logs are missing. It should also work for the equivalent folders obtained from an Apple device, such as an iPhone or iPad.

Summary of what you need

  • A Time Machine or other backup containing log files in /private/var/db/ for the period you want to browse.
  • Ulbow’s Logarchive Tool from its Window menu.

Migrating to a new Mac, and claiming Time Machine backups

By: hoakley
12 November 2024 at 15:30

Over the last few years, Migration Manager in macOS has improved considerably, and when used wisely it can save you a great deal of time getting your new Mac up and running. This article explains how you can use it in Sequoia, and how that works with Time Machine and the backups from your old Mac.

There are two occasions when you can transfer your apps, settings and documents using the process of migration: during the initial personalisation and configuration of macOS, or using Migration Assistant at any time later. I always used to postpone that until macOS was set up and brought up to date, but lately I’ve preferred to get this over with first.

This time I faced a more difficult problem: I was replacing my old Mac Studio M1 Max with my new Mac mini M4 Pro, both of which are intended to use my single Studio display. While I could have run one of them headless (without a display connected), Migration Assistant expects both Macs to be used interactively, which would have required some cunning juggling. Instead of opting to connect them back-to-back, I therefore chose to migrate during initial setup from the Studio’s last Time Machine backup. As the backup storage for my Studio was a Thunderbolt 3 external SSD, this proved surprisingly quick.

Migration during setup

Before unboxing your new or previously owned Mac, prepare the source for migration, and any cable required to connect that Mac with the source. Migrating from a backup is one of the simplest and fastest options here, as all you need do is move your old Mac’s external backup storage over to the new Mac.

In the past, migration used the fastest connection available between two Macs that were connected back-to-back, but Apple’s current support note now states that Wi-Fi will be used. In my case, given that both Macs have 10 GbE, that could have been a disappointment if they were already connected by Ethernet cable.

Another important step is ensuring that your new Mac has a keyboard and mouse/trackpad. As I was moving those from my Studio, I connected both to the new Mac mini by their charging leads to ensure they’d work and pair correctly.

Unbox your Mac, connect it up to your migration source, its new keyboard and mouse/trackpad, then start it up. When the setup sequence reaches the Migration Assistant screen, select the migration source.

[Image: tmmigrate]

If you’re migrating directly from your old Mac, at this point you’ll need to open Migration Assistant on that Mac and set it to migrate to another Mac. Follow the prompts to continue the process.

Migration after setup

Initiate this by running Migration Assistant in /Applications/Utilities, and follow its prompts to select and connect to the source as above.

Time Machine backups

Assuming that your old Mac made Time Machine backups, and you want your new Mac to continue doing so, now’s the time to connect that backup storage, if it wasn’t already used as the migration source. When you do, you’ll be offered two options, to claim the existing backups for the new Mac, or to leave them for the old one.

[Image: tmbackupclaim, dialog offering to claim existing backups]

You should see this dialog if:

  • you have migrated settings from an old Mac to this one,
  • you have migrated settings from a Time Machine backup of another Mac to this one,
  • you cloned the boot volume group used to start up your Mac,
  • your Mac’s logic board has been replaced.

If you claim the existing backups for your new Mac, then they’ll be used as part of its backup history, but you won’t be able to use them with the old Mac. You may prefer to leave those old backups as they are, and gradually delete them to free up space. Provided that you create the new backup volume for your Mac in the same container, old and new backups will share the same free space on your backup storage.

This apparently replaces the old tmutil inheritbackup command, which no longer appears to work with backups to APFS.

Postscript

I am very grateful to Csaba Fitzl for two useful pieces of information:

  • In macOS Sequoia 15.1 at least, tmutil inheritbackup does work again, when entered with the path to the backup volume, e.g. /Volumes/[TMDISK].
  • Claiming old backups is only offered when migration has taken place. If you set up your new Mac without migrating, it doesn’t appear to be offered, although you should then be able to use tmutil inheritbackup instead.

As Nicolai points out in the comments below, Apple’s support note is incorrect about migration only using Wi-Fi connections to another Mac. This should work as it always has, and select the fastest connection between them, which could include back-to-back Thunderbolt.

Planning complex Time Machine backups for efficiency

By: hoakley
28 October 2024 at 15:30

Time Machine (TM) has evolved to be a good general-purpose backup utility that makes best use of APFS backup storage. However, it does have some quirks and offers only limited controls, which can make it tricky to use with more complex setups. Over the last few weeks I’ve had several questions from those trying to use TM in more demanding circumstances. This article explains how you can design volume layout and backup exclusions for the most efficient backups in such cases.

How TM backs up

To decide how to solve these problems, it’s essential to understand how TM makes an automatic backup. In other articles here I have provided full details, so here I’ll outline the major steps and how they link to efficiency.

At the start of each automatic backup, TM checks to see if it’s rotating backups across more than one backup store. This is an unusual but potentially invaluable feature that can be used when you make backups in multiple locations, or want added redundancy with two or more backup stores.

Having selected the backup destination, it removes any local snapshots from the volumes to be backed up that were made more than 24 hours ago. It then creates a fresh snapshot on each of those volumes. I’ll consider these later.

Current versions of TM normally don’t use those local snapshots to work out what needs to be backed up from each volume, but (after the initial full backup) should rely on that volume’s record of changes to its file system, FSEvents. These observe two lists of exclusions: those fixed by TM and macOS, including the hidden version database on each volume and recognised temporary files, and those set by the user in TM settings. Among the latter should be any very large bundles and folders containing huge numbers of small files, such as the Xcode app, as they will back up exceedingly slowly even to fast local backup storage, and can tie up a network backup for many hours. It’s faster to reinstall Xcode than to restore it from a backup.
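
Exclusions are most easily managed in Time Machine’s settings, but tmutil can add and check them from the command line; a brief sketch, using Xcode as the example:

# add a fixed-path exclusion for Xcode, then confirm it
sudo tmutil addexclusion -p /Applications/Xcode.app
tmutil isexcluded /Applications/Xcode.app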

Current TM backups are highly efficient, as TM can copy just the blocks that have changed; older versions of TM backing up to HFS+ could only copy whole files. However, that can be impaired by apps that rewrite the whole of each large file when saving. Because the backup is being made to APFS, TM ensures that any sparse files are preserved, and handles clone files as efficiently as possible.

Once the backup has been written, TM then maintains old backups, to retain:

  • hourly backups for the last 24 hours, to accompany hourly local snapshots,
  • daily backups over the previous month,
  • weekly backups stretching back to the start of the current backup series.

These are summarised in the diagram below.

[Diagram: tmseqoutline1]

Local snapshots

TM makes two types of snapshot: on each volume it’s set to back up, it makes a local snapshot immediately before each backup, then deletes that after 24 hours; on the backup storage, it turns each backup into a snapshot from which you can restore backed up files, and those are retained as stated above.

APFS snapshots, including TM local snapshots, include the whole of a volume, without any exceptions or exclusions, which can have surprising effects. For example, a TM exclusion list might block backing up of large virtual machine files, so that typical backups only require 1-2 GB of backup storage, but because those VMs change a lot, each local snapshot could require 25 GB or more of space on the volume being backed up. One way to assess this is to check through each volume’s TM exclusion list and consider whether the items being excluded are likely to change much. If they are, they should be moved to a separate volume that isn’t backed up by TM, and thus won’t have hourly snapshots.
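
You can also keep an eye on local snapshots from the command line with tmutil, which will list those currently held on a volume and can thin them if free space becomes tight; a brief sketch:

# list Time Machine's local snapshots on the Data volume
tmutil listlocalsnapshots /
# ask for up to 50 GB to be reclaimed by thinning local snapshots, at moderate urgency
tmutil thinlocalsnapshots / 50000000000 2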

Some workflows and apps generate very large working files that you may not want to clutter up either TM backups or local snapshots. Many apps designed to work with such large files provide options to relocate the folders used to store static libraries and working files. If necessary, create a new volume that’s excluded completely from TM backups to ensure those libraries and working files aren’t included in snapshots or backups.

TM can’t run multiple backup configurations with different sets of exclusions, though. If you need to do that, for instance to make a single nightly backup of working files, then do so using a third-party utility in addition to your hourly TM backups.

Moving such files off the volumes being backed up can make a huge difference to their free space, as the size of each snapshot is effectively multiplied by 24, with TM trying to retain each hourly snapshot for the last 24 hours.

Macs that aren’t able to make backups every hour can also accrue large snapshots, as they may retain older snapshots, which only grow larger over time as the volume changes from the moment each snapshot was made.

While snapshots are a useful feature of TM, the user has no control over them, and can’t shorten their period of retention or turn them off altogether. Third-party backup utilities like Carbon Copy Cloner can, and may be more suitable when local snapshots can’t be managed more efficiently.

iCloud Drive

Like all backup utilities, TM can only back up files in iCloud Drive once they have been downloaded to local storage. Although some third-party utilities can work through your iCloud Drive files, downloading them automatically as needed, TM can’t do that, and will only back up files that are already downloaded at the time it makes a backup.

There are two ways to ensure files stored in iCloud Drive will be backed up: either turn Optimise Mac Storage off (in Sonoma and later), or download the files you want backed up and ‘pin’ them to ensure they can’t be removed from local storage (in Sequoia). You can pin individual files or whole folders and their entire contents by selecting the item, Control-click for the contextual menu, and selecting the Keep Downloaded menu command.
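
If you’d rather script the download step than use the Finder, brctl can request downloads of evicted items in iCloud Drive. Treat this as a sketch: brctl is undocumented, and the path given is only an example:

# request download of an evicted folder in iCloud Drive
brctl download ~/Library/Mobile\ Documents/com~apple~CloudDocs/Projects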

Key points

  • Rotate through 2 or more backup stores to handle different locations, or for redundancy.
  • Back up APFS volumes to APFS backup storage.
  • Exclude all non-essential files, and bundles containing large numbers of small files, such as Xcode.
  • Watch for apps that make whole-file changes, thus increasing snapshot and backup size.
  • Store large files on volumes not being backed up to minimise local snapshot size.
  • If you need multiple backup settings, use a third-party utility in addition to TM.
  • To ensure iCloud Drive files are backed up, either turn off Optimise Mac Storage (Sonoma and later), or pin essential files (Sequoia).

Further reading

Time Machine in Sonoma: strengths and weaknesses
Time Machine in Sonoma: how to work around its weaknesses
Understand and check Time Machine backups to APFS
Excluding folders and files from Time Machine, Spotlight, and iCloud Drive

Understand and check Time Machine backups to APFS

By: hoakley
3 October 2024 at 14:30

Over the last 17 years, Time Machine has backed up many millions of Macs every hour, ranging from the last Power Macs to the latest M3 models. It has changed greatly over that time. This article uses my free utility T2M2 to explain how Time Machine now makes automatic backups to APFS storage, and how to check that. This account is based primarily on what happens in macOS Sonoma and Sequoia, when they’re backing up automatically every hour to local storage.

T2M2 offers three features to see what’s happening with Time Machine and backups:

  • Check Time Machine button, to analyse backups made over the last few hours.
  • Speed button, to view progress reports in the log during a long backup.
  • Browse log button, to show filtered log extracts in fullest detail.

These are each detailed in T2M2’s Help book. Here I’ll concentrate on the first and last, and defer speed checks to that documentation.

Browse log for detail

In practice, you’re most likely to view T2M2’s analysis using the Check Time Machine button first, but here I’ll walk through the greater detail available in log extracts, to build understanding of the sequence of events in each automatic backup.

[Image: t2m2rept1]

To help you see which subsystems are involved in each stage, T2M2 displays their entries in different colours, and you can show or hide each of those.

Automatic backups are set off by the DAS-CTS dispatching mechanism, whose entries are shown in red (DAS) and blue (CTS). They schedule backups so they don’t occur when the Mac is heavily loaded with other tasks, and call a chain of services to start the backup itself. Dispatching is now reliable over long time periods, but can be delayed or become irregular in some situations. Inspecting those DAS-CTS entries usually reveals the cause.

From there on, most informative log entries are made by Time Machine itself.
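
If you want to see those entries without T2M2, the log command’s predicate filtering will pull them out; a minimal sketch:

# show Time Machine's own log entries from the last hour
log show --last 1h --predicate 'subsystem == "com.apple.TimeMachine"'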

Preparations are to:

  • Find the backup storage for each destination, decide whether any rotation scheme applies, and so determine the destination for this backup.
  • Check write performance to the backup destination.
  • Find the machine store on the destination.
  • Determine which local snapshots should be removed, and delete them.
  • Create local snapshot(s) as ‘stable’ snapshot(s), and mount them.
  • Mount previous local snapshot(s) as ‘reference’ snapshot(s).
  • Determine how to compute what needs to be backed up from each source. This should normally use FSEvents to build the EventDatabase.
  • Scan the volumes to be backed up to determine what needs to be backed up.
  • Estimate the total size of the new backup to be created.

Once backing up starts, entries cover:

  • Copying designated items to the destination.
  • Posting periodic progress reports during longer backups.

When that’s complete, closing stages are to:

  • Report details of the backup just completed.
  • Set local snapshots ready for the next backup, with the ‘stable’ snapshot(s) marked as ‘reference’, and unmount local snapshots.
  • Delete working folder used during the previous backup as ‘incomplete’.
  • Create the destination backup snapshot.
  • Delete any old backups due for removal.
  • Report backup success or other outcome.

They’re summarised in this diagram. Although derived from Sonoma, Sequoia brings no substantial change.

[Diagram: tmbackup14a]

Or its PDF tear-out version: tmbackup14b

Check Time Machine summaries

[Image: t2m2rept2]

T2M2 analyses all those log entries to produce a summary of how Time Machine has performed over the last few hours. That is broken down into sections as follows.

Backup destination. This is given with the free space currently available on that volume, followed by the results of write speed measurements made before each backup starts.
Backing up 1 volumes to ThunderBay2 (/dev/disk10s1,TMBackupOptions(rawValue: 257)): /Volumes/ThunderBay2
Current free space on backup volumes:
✅ /Volumes/ThunderBay2 = 1.67 TB
Destination IO performance measured:
Wrote 1 50 MB file at 286.74 MB/s to "/Volumes/ThunderBay2" in 0.174 seconds
Concurrently wrote 500 4 KB files at 23.17 MB/s to "/Volumes/ThunderBay2" in 0.088 seconds

Backup summary. This should be self-evident.
Started 4 auto backup cycles, and 0 manual backups;
completed 4 volume backups successfully,
last backup completed successfully 53.4 minutes ago,
Times taken for each auto backup were 0.3, 0.3, 0.2, 0.2 minutes,
intervals between the start of each auto backup were 61.4, 60.2, 60.3 minutes.

Backups created and deleted (or ‘thinned’).
Created 4 new backups
Thinned:
Thinning 1 backups using age-based thinning, expected free space: 1.67 TB actual free space: 1.67 TB trigger 50 GB thin 83.33 GB dates: (
"2024-10-01-043437")

Local snapshots created and deleted.
Created 4 new snapshots, and deleted 4 old snapshots.

How the items to be backed up were determined. This shows which methods Time Machine used to work out which items needed to be backed up, and should normally be FSEvents once a first full backup has been made.
Of 4 volume backups:
0 were full first backups,
0 were deep scans,
4 used FSEvents,
0 used snapshot diffs,
0 used consistency scans,
0 used cached events.

Backup results. For each completed backup, Time Machine’s report on how many items were added, and their size.
Finished copying from volume "External1"
1 Total Items Added (l: Zero KB p: Zero KB)
3 Total Items Propagated (shallow) (l: Zero KB p: Zero KB)
403274 Total Items Propagated (recursive) (l: 224.93 GB p: 219.31 GB)
403275 Total Items in Backup (l: 224.93 GB p: 219.31 GB)
1 Directories Copied (l: Zero KB p: Zero KB)
2 Files Move Skipped (l: Zero KB p: Zero KB) | 2 items propagated (l: 6 KB p: 12 KB)
1 Directories Move Skipped (l: Zero KB p: Zero KB) | 403269 items propagated (l: 224.93 GB p: 219.31 GB)

Error messages.
✅ No error messages found.

iCloud Drive and pinning

Backing up the contents of iCloud Drive is a longstanding problem for Time Machine, if Optimise Mac Storage is enabled. Those files in iCloud Drive that are stored locally should be backed up, but any that have been evicted from local storage could only be included if they were to be downloaded prior to the backup starting.

In Sonoma and earlier, the only way to ensure that an evicted file in iCloud Drive is included in a Time Machine backup is to manually download it. Sequoia now lets you ‘pin’ individual files and whole folders so that they aren’t evicted, so will always be included in Time Machine backups. This is explained here.

Discrepancies and glitches

T2M2 tries to make sense of log entries that don’t always behave as expected. As backups can be complex, there are situations when T2M2 may report something that doesn’t quite add up. This most commonly occurs when there have been manual backups, or a third-party app has been used to control the scheduling and dispatch of backups. If you do see figures that don’t appear quite right, don’t assume that there’s something wrong with Time Machine or your backups. Generally, these glitches disappear from later automatic backups, so you might like to leave it for a few hours before checking Time Machine again to see if the problem has persisted.

Further reading

What performance should you get from different types of storage?
Where should you back up to?
How big a backup store do you need?
How Time Machine backs up iCloud Drive in Sonoma
Time Machine backing up different file systems
Excluding folders and files from Time Machine, Spotlight, and iCloud Drive
Snapshots aren’t backups
Time Machine in Sonoma: strengths and weaknesses
Time Machine in Sonoma: how to work around its weaknesses
Time Machine in Sonoma: Rotating backups and NAS
A brief history of Time Machine
