
Which local file systems does macOS 26 support?

By: hoakley
18 November 2025 at 15:30

Support in macOS for file systems has continued to change over the last couple of years. This article summarises the support available in macOS 26.1 Tahoe for local file systems on directly attached storage, rather than network or cloud storage.

APFS

This is the default file system for Macs and Apple’s devices, although macOS has standardised on its case-insensitive variant for general use, while iOS and other OSes use case-sensitive APFS. The one common exception is Time Machine backup storage, which requires case-sensitivity. The only situation in which APFS can’t be used, and HFS+ is still expected, is for bootable macOS installers.

The most significant feature of HFS+ that is missing in APFS is directory hard links, a key feature of Time Machine backups made to HFS+ storage.

Multiple APFS volumes can share the same APFS partition (container), in contrast to other file systems supported by macOS, in which each partition is also a volume.

As universal as APFS is on modern Macs, it’s very rarely available on other computer systems, and the only support on other platforms comes from Paragon’s products for Windows and for Linux.

HFS+

The Macintosh Extended file system HFS+ is the predecessor to APFS and is still fully supported in macOS. It comes from an era of hard disks rather than SSDs and may still be preferred for use on hard disks. Early versions were prone to cumulative errors, particularly when crashes or kernel panics occurred. Those risks were mitigated with the introduction of journalling, and HFS+ should only be used with journalling enabled. Currently supported versions of macOS no longer support its HFS predecessor, though, as that was dropped in 2019.

The only remaining situation in which HFS+ is still required is for bootable macOS installers, as detailed here.

HFS+ lacks many of the modern features of APFS, including snapshots, sparse files, clone files, and the firmlinks used to join System and Data volumes. Because support for encryption was implemented late, in Core Storage logical volume management, recent macOS doesn’t support encrypted HFS+. However, like APFS, HFS+ is capable of supporting Trim on SSDs. Each HFS+ volume is a partition, and thus has a fixed size, in contrast to APFS partitions (containers), which can contain multiple volumes.

ExFAT, FAT32

These are two of a family of file systems introduced for MS-DOS and Windows. Although the older FAT formats are now antiquated, ExFAT remains the most commonly encountered format for USB flash drives (thumb drives, memory sticks) and SD cards, where it’s the default format for SDXC and SDUC cards larger than 32 GB. Unlike FAT32, ExFAT supports massive volumes and file sizes, and was optimised for use in flash memory. However, its implementation in macOS doesn’t support Trim.

These formats have relatively basic features, and lack encryption. Used from a Mac, they don’t support document versions, and may encounter indexing problems with Spotlight. They do, though, support extended attributes by using the AppleDouble file format, in which those are saved in shadow files with names starting with ._ (dot underscore). While those shadow files preserve extended attributes for use with macOS, they can confuse Windows users, and if necessary they can be deleted, for example using Ross Tulloch’s BlueHarvest.
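
You can also check for, and if need be clear, those shadow files from the Mac side using Apple’s dot_clean tool in Terminal; a minimal sketch, assuming the drive is mounted as /Volumes/USBSTICK (a hypothetical name):
ls -A /Volumes/USBSTICK          # AppleDouble shadow files appear as ._name alongside items carrying extended attributes
dot_clean -m /Volumes/USBSTICK   # -m deletes the ._ shadow files, discarding those extended attributes on that volume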

In recent versions of macOS, these file systems are implemented in user-space using FSKit.

NTFS

Although you won’t find any mention of it in Disk Utility, macOS includes read-only support for NTFS, enabling the one-way transfer of files from Windows. There are third-party products to extend that with write support, including an implementation from Paragon. NTFS is significant for its support of extended attributes as Alternate Data Streams (ADS).

Available formats

Disk Utility version 22.7 in macOS 26.1 Tahoe can format the following file systems using a GUID Partition Map:

  • APFS, unencrypted case-insensitive
  • APFS, encrypted case-insensitive
  • APFS, unencrypted case-sensitive
  • APFS, encrypted case-sensitive
  • HFS+ journalled case-insensitive (JHFS+)
  • HFS+ journalled case-sensitive
  • ExFAT
  • MS-DOS (FAT32).

The command tool diskutil additionally offers FAT, FAT12, FAT16, and HFS+ without journalling.
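
The same personalities can be listed and applied in Terminal; a minimal sketch, where NewDisk and disk4s2 are hypothetical names for the volume to be erased:
diskutil listFilesystems                      # list every format diskutil can apply
diskutil eraseVolume ExFAT NewDisk disk4s2    # reformat that volume as ExFAT, destroying its current contents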

ZFS

The only other major file system that can be supported by Macs is ZFS, available as OpenZFS on OS X. That isn’t a trivial undertaking, and is dependent on a kernel extension.

Linux file systems

There doesn’t appear to be native support for Btrfs, which is best accessed through Linux virtualisation when needed. While you can use a VM for ext4 and its predecessors, Paragon also offers support for them that is claimed to be compatible with macOS Tahoe.

MacFUSE

Traditionally, native file systems are implemented in kernel-space, requiring a kernel extension for macOS. This remains the case for those used by the operating system and for performance-critical tasks. In other cases, it’s possible to implement a file system in user-space without the need for a kernel extension. This has been the goal of the FUSE project, and with the introduction of FSKit support for user-space file systems in macOS, the MacFUSE implementation now runs without any kext. It’s hoped that will open up access to more file systems in the future.

I’m very grateful to Robert for pointing out Paragon’s support for Linux Ext file systems.

Does that SSD Trim, and why is it important?

By: hoakley
13 November 2025 at 15:30

Trim is one of the Dark Arts of SSDs. It’s important if not essential, but it’s not easy to discover whether an SSD is Trimming properly. Some say that you need to enable Trim for external SSDs, yet others don’t and never seem to encounter a problem.

Why Trim?

Data stored on a hard disk doesn’t need to be erased before the space it takes can be reused, but SSDs work differently. Before a page of SSD memory can be reused, it must be erased, and that’s the part that takes time. If a fast SSD had to erase each page when it needed to write to it, that SSD wouldn’t be much faster than a good hard disk.

To overcome this problem, when the file system has pages that no longer contain data in use, it should tell the SSD that they’re free so they can be erased to prepare them for reuse. In SATA SSDs that’s performed by the TRIM command, and its equivalent for faster NVMe SSDs is DEALLOCATE, although it’s the older command whose name has stuck.

Modern SSDs also perform their own housekeeping, and in many cases may not need to be Trimmed at all. However, when an operating system and SSD both support Trim (or DEALLOCATE), that should ensure optimum performance.

When an SSD doesn’t get Trimmed and can’t compensate for that with its own housekeeping routines, its performance suffers noticeably. This is most commonly seen with SATA SSDs that would normally have write speeds of around 500 MB/s. When they need a good Trim, that can fall to around 100 MB/s, the same speed you’d expect from a hard disk. But this doesn’t affect read speeds at all, so one way of telling whether an SSD needs Trimming is to measure its read and write speeds.

Which SSDs are Trimmed?

You’ll be delighted to know that, for their relatively high cost, all Apple internal SSDs Trim reliably, without any need for tweaking any settings.

As a rule, external SSDs don’t Trim by default if they have a SATA interface, giving them read and write speeds of about 500 MB/s. Those with faster NVMe interfaces, including those connected by Thunderbolt 3-5 or USB4, should Trim by default when they’re formatted in APFS.

System Information normally lists a drive’s Trim support, if you can find the right section. Browse its Hardware section to discover the protocols the drive supports. These can be confusing, as the SSD and its enclosure may well have multiple entries in different headings, and some of the information may appear conflicting.

[Image: trim01]

USB4 drives operating in Thunderbolt 3 mode can also be confusing. When connected to an Intel Mac (which doesn’t support USB4 itself) they may be reported in the Thunderbolt/USB4 device tree as being USB4.0 operating in Thunderbolt 3 mode, with a link speed of up to 40 Gbit/s, then in the NVMExpress device tree with a link width of x4 and speed of 8.0 GT/s.

SATA drives should appear in the Serial-ATA device tree, even though they might be connected via Thunderbolt 3, and you may see a statement of Trim support.

[Image: trim02]

Device trees worth inspecting include: NVMExpress, PCI, SATA, Storage, Thunderbolt/USB4 and USB.
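
The same information can be dumped in Terminal with system_profiler; a minimal sketch, using the data types that most often report Trim support:
system_profiler SPNVMeDataType                         # NVMe drives, including a TRIM Support entry
system_profiler SPSerialATADataType                    # SATA drives and their TRIM Support entry
system_profiler SPUSBDataType SPThunderboltDataType    # enclosures and the buses they connect through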

Which file systems Trim?

Trim is well-demonstrated in SSDs formatted in APFS, and is known to occur in HFS+. However, old PC file systems like ExFAT don’t have any Trim support, and it can’t be enabled on a Mac, at least.

Unfortunately, as HFS+ is now an old Mac file system, its Trimming can’t readily be seen in log entries, while those from APFS contain valuable detail that makes them suitable for use when testing for Trim.

How to verify Trim

Use Mints to verify whether your external drive does get trimmed correctly when it’s mounted, using its Disk Mount feature. In essence, what you do is:

  1. Eject and disconnect the external drive.
  2. Connect the drive at a known time, according to the Mac’s clock.
  3. Leave the Mac alone until all that disk’s volumes have been mounted.
  4. 20 seconds after connecting the drive, or 10 seconds after the last of its volumes has mounted, open the Mints app.
  5. Click on the Disk Mount button, and set the time in its log window to the time at which you connected the drive.
  6. Set the period to a minimum of 20 seconds, long enough to cover the period up to 10 seconds after the last volume mounted.
  7. Uncheck all the category checkboxes except the first, APFS +.
  8. Click the Get log button.
  9. When log entries are displayed, scroll to the end and look back for APFS trim entries.

This only works for APFS, though, as log entries for HFS+ don’t appear to show Trimming in this way.
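
If you prefer to search the log directly from Terminal, something like the following should retrieve the same entries; a sketch, assuming it’s run within a few minutes of connecting the drive:
log show --last 5m --info --predicate 'eventMessage CONTAINS "spaceman_scan_free_blocks"'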

[Image: trim05]

Those entries are characteristically of the form
23-03-25 19:01:06.930 apfs spaceman_scan_free_blocks:3311: disk5 scan took 0.030901 s (no trims)
23-03-25 19:01:10.960 apfs spaceman_scan_free_blocks:3293: disk5 scan took 4.030544 s, trims took 3.944665 s
23-03-25 19:01:10.960 apfs spaceman_scan_free_blocks:3295: disk5 471965989 blocks free in 9131 extents
23-03-25 19:01:10.960 apfs spaceman_scan_free_blocks:3303: disk5 471965989 blocks trimmed in 9131 extents (432 us/trim, 2314 trims/s)
23-03-25 19:01:10.960 apfs spaceman_scan_free_blocks:3306: disk5 trim distribution 1:1461 2+:1267 4+:4121 16+:785 64+:822 256+:675

[Image: trim06]

Check that the named disk, here disk5, is the SSD or APFS container on the SSD that you’re checking. If it has entries reporting that blocks have been trimmed, this confirms that the SSD has been trimmed as expected. Disks that don’t trim normally only show the first of that series, ending in the words no trims.

It’s possible to enable Trim for all external storage using the trimforce command, but before resorting to that you should verify whether your external SSD already Trims correctly when mounted.
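
If you do decide to use it, trimforce needs root privileges, warns about the risks, and restarts the Mac once it has been enabled; a minimal sketch:
sudo trimforce enable     # enable Trim for eligible third-party SSDs; confirm the prompts, then the Mac restarts
sudo trimforce disable    # turn it off again if necessary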

If you have an SSD that hasn’t been Trimming and is suffering poor write performance, you may be able to help it recover by copying its contents to another disk, then erasing its volumes, or the whole container. Those should return their whole contents as free space, and so enable the SSD’s own housekeeping to erase them in readiness for reuse.

Disentangling timestamps with Dropera 2

By: hoakley
29 October 2025 at 15:30

Last week I drew attention to problems interpreting the timestamps of files in macOS, generating lively discussion with Chris and Richard. Although the gist of that article stands, it was clear that there remained several issues, and this sequel tries to address those better, with the aid of a new app, Dropera 2. The TL;DR for this article is that timestamps are even more complicated than I previously described.

The first task was to develop a tool for reading and recording these timestamps reproducibly. Although the four key timestamps are exposed in Precize, opening a file in that app is likely to change at least one of them. For a substitute I turned to an earlier SwiftUI demo, Dropera, with its drag-and-drop support to handle file URLs without opening, reading or otherwise changing the file.

Timestamps

This new version of Dropera had then to read the right timestamps and convert them into accessible dates and times. For some timestamps, we’re spoiled for choice: time of creation of a file, shown in the Finder as Date Created, is saved in an APFS file’s attributes as create_time, exposed in the stat command as st_birthtime, and accessible within an app’s code as the NSFileCreationDate file attribute or creationDate URL resource. To establish which timestamp is which, I have compiled a table giving the sources I have been able to discover.
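
For comparison, all four timestamps can be read in Terminal using the BSD stat command; a minimal sketch, where myfile.txt is a hypothetical file:
stat -f "born %SB | mod %Sm | attr %Sc | acc %Sa" myfile.txt    # birth, modification, attribute change and access times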

For Dropera, following careful comparison with the values delivered in stat, I selected:

  • Creation from NSFileCreationDate
  • Modification from NSFileModificationDate
  • Attribute modification from attributeModificationDate
  • Access from contentAccessDate.

The last two aren’t available in the Finder or elsewhere in the GUI, while the Finder does provide Date Last Opened, Date Added and Content Created:

  • Date Last Opened isn’t related to Access Time, but is recorded in the com.apple.useddate extended attribute. Unfortunately, adding and manipulating that may change the Attribute Modification timestamp, so has to be avoided. As Apple doesn’t appear to document the xattr or its interpretation, I have avoided looking at it any further here.
  • Date Added is the timestamp when a file was added to its current directory. As that isn’t strictly speaking a file attribute, I will ignore it here.
  • Embedded records of Content Created require file data to be read, so accessing them is likely to update at least one of the key timestamps.

Each of those three is available from the metadata indexed by Spotlight, but that’s an indirect and unreliable way to access them.

Using Dropera

To use Dropera to read the four key timestamps just drop the file onto its window, and the app will then display the filename, its path, and the four timestamps in the order of Creation, Modification, Attribute modification and Access times.

Adjust the window width to make these most convenient to read. The layout shown above is compact and automatically splits each timestamp into date and time components. In other uses, you might prefer to widen the window so each entry takes a single line.

To refresh the values shown for the current files, click on the Refresh tool. Although the window continues to display just the single set of entries, the previous timestamps are saved in memory, and the whole history since the last drag-and-drop onto that window can be exported to a text file using the Save as text tool. You can also select the line of results, copy and paste that into another app if you prefer.

Drop multiple files onto Dropera’s window and all their timestamps will be shown. You can then select continuously (Shift-click) or discontinuously (Command-click) to copy those values, and Refresh them.

Demonstration

Open TextEdit, create a new file and set its type to plain text. Add a few words and save it somewhere convenient. Then open Dropera, and drop that file onto its window to inspect its timestamps. They’re likely to show Creation and Modification times the same, Attribute modification slightly later, and Access another second or so afterwards.

Next make a small change to that document and save it. Click Dropera’s Refresh tool and only the Creation time remains unchanged. Close that document in TextEdit, and Refresh the times again to confirm that none have changed. Now select the file in the Finder so its QuickLook thumbnail is previewed. When you Refresh its times, you should see its Access time has been updated. Now double-click the file to open it in TextEdit, and check times again while the file is still open. Attribute modification and Access times are altered, but remain the same after you have closed the document. Export those records as a text file, and you should see:

  • Creation time remains unchanged throughout.
  • Modification time changes once, when the second version was saved.
  • Attribute modification time changes twice, when the modified file was saved, and when the file was reopened.
  • Access time changes three times, the additional occasion being when you viewed the document’s thumbnail in the Finder.

Dropera 2, which requires macOS 14.6 or later, is available from here: dropera20

Enjoy!

Be careful when interpreting APFS timestamps

By: hoakley
24 October 2025 at 14:30

Timestamps on files and folders are important, and can be used for many different purposes, from sorting files to find the most recent, to providing evidence in court. Each file (and directory) in APFS has four separate timestamps you can use:

  • Created, termed in APFS create_time, gives “the time that this record was created”.
  • Modified, mod_time, “the time that this record was last modified”.
  • Last opened, access_time, “the time that this record was last accessed”.
  • Attributes Modified, change_time, “the time that this record’s attributes were last modified”.

Although the Finder only displays those to the nearest second, the macOS API readily provides fractions of a second, commonly resolved to milliseconds, and the raw values are saved as nanoseconds since 00:00 UTC on 1 January 1970.

The first three are those most commonly used, although the last can be relevant in backups in particular. If you want to see which file in a folder is the oldest there, look for the oldest time Created. If you want to know which was most recently changed, look for the latest Modified time.
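
In Terminal, ls can sort a folder by each of those; a minimal sketch:
ls -lt     # newest Modified first
ls -ltU    # sort by Created instead (-U uses the creation time)
ls -ltu    # sort by Last opened (access) time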

Where you must be careful is in interpreting those dates, as it’s easy to make assumptions that may not always work. The first confounding factor is that files can change without updating the Last opened time.

Apple’s APFS reference states that access_time is updated according to the behaviour set by that volume’s APFS_FEATURE_STRICTATIME setting. “If this flag is set, the access_time field of j_inode_val_t is updated every time the file is read. Otherwise, that field is updated when the file is read, but only if its value is prior to the timestamp stored in the mod_time field.”

Does APFS in macOS currently set the APFS_FEATURE_STRICTATIME flag for its volumes?

You can see how that works in a simple demonstration. Create a new text file in TextEdit and add a line or two of text to it, then save it. In the Finder, Created, Modified and Last opened timestamps will now all give the same time and date. Leaving that file open in TextEdit, wait a couple of minutes, then add another line to the file and save it again. In the Finder, the Modified time will be updated, but not the Last opened time, because the file hasn’t been opened and read again. Close the file, wait another couple of minutes, and open it again. You should see its Last opened time update; as its previous value was by then older than the Modified time, that happens whichever way APFS_FEATURE_STRICTATIME is set.
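
One way to probe that setting from Terminal is to read a file twice without modifying it and watch its access time; a sketch, using a hypothetical file demo.txt on the volume in question:
stat -f "acc %Sa | mod %Sm" demo.txt    # note the access and modification times
sleep 65                                # wait long enough for any change to show at one-second resolution
cat demo.txt > /dev/null                # read the file without changing it
stat -f "acc %Sa | mod %Sm" demo.txt    # with strict behaviour the access time advances on every read; otherwise only when it was older than the modification time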

My example is even more extreme. According to these timestamps, this file was Created on 13 July, and hasn’t been opened since. However, it was modified without being read on 23 September. If it’s opened and read now, its Last opened time will be updated whatever the APFS_FEATURE_STRICTATIME setting.

There’s another confounding factor that makes this even less reliable: QuickLook. Although its thumbnails and previews are constructed using file data, that access doesn’t affect either the Modified or Last opened timestamps, and opening and reading a QuickLook Preview isn’t recorded anywhere in a document’s attributes. So it could be extremely misleading to assume that a file hasn’t been viewed just because its Last opened timestamp hasn’t changed.

If you do want to use the timestamps on files or folders, it’s essential to know their limitations and behaviours, otherwise you could draw the wrong conclusions.

Resolve a file’s path from its inode number

By: hoakley
8 October 2025 at 14:30

If you ever encounter an error when checking an APFS volume using First Aid in Disk Utility or fsck_apfs, you won’t be informed of the path and name of the item responsible, but given its inode number, in an entry like
warning: inode (id 402194151): Resource Fork xattr is missing for compressed file

The inode number given can only be resolved to a path and file/folder name if you also have a second number giving the volume for that item. As that will be for the volume being checked at the time, you should be able to identify that immediately. The only time that you might struggle to do that is with items in a snapshot; those should, I think, be the same as the volume they are taken from. However, as snapshots are read-only, there’s probably little point in pursuing errors in them.

To resolve these in my free utility Mints, open its inode Resolver using the Window / Data… / Inode menu command. Drag and drop another file from the same volume onto that window.

[Image: mints1151]

The Resolver will then display that file’s volfs path, such as
/.vol/16777242/1241014

All you need do now is paste the inode number given in the warning or error message in Disk Utility or fsck_apfs, into the Inode Number box at the top of the Resolver window, and click the Resolve button. Mints then looks up information for that inode number on the same volume, using GetFileInfo, and displays it below.

[Image: mints1152]

One drag and drop, a paste, and a click to discover what APFS is complaining about.

Command line

You’ll sometimes see Terminal’s find command with the option -inum recommended as a way to convert from an inode number to a regular path. Although you can do that, it’s easier to use the command GetFileInfo instead. For that you’ll need the full volfs path, including the volume number.
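
For completeness, that find approach looks like this; a sketch using the inode number from the earlier warning, which can take a long time as it has to walk the whole volume:
sudo find -x /System/Volumes/Data -inum 402194151 2>/dev/null    # -x stops find crossing into other file systems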

To find the volume number, use my free utility Precize, and open another file on the same volume. The second line in its window gives the full volfs path for that file. Copy the start of that, leaving out the second number (the inode), such as
/.vol/16777238/

[Image: purgeable1]

Alternatively, you can use the stat command as given below.

In Terminal, type
GetFileInfo
with a space at the end, and paste the text you copied from Precize. Then copy and paste the inode number given in the First Aid warning, to assemble the whole command, such as
GetFileInfo /.vol/16777238/402194151

Press Return, and after a few seconds, you should see something like
file: "/Users/hoakley/Library/Mobile Documents/com~apple~CloudDocs/backup1/0MintsSpotlightTest4syzFiles/SpotTestA.rtf"
type: "\0\0\0\0"
creator: "\0\0\0\0"
attributes: avbstclinmedz
created: 05/17/2023 08:45:00
modified: 05/17/2023 08:45:00

giving the full path and filename that you want.

GetFileInfo is one of the oldest commands in macOS, and has been deprecated as long as anyone can remember. I suspect that Apple is still trying to work out what can substitute for it.

Get a volfs path for a file

Use Precize to run this the other way around: open the file and read the path in that second line. To copy the whole of it, press Command-2.

The simplest ways of obtaining inode numbers and so building volfs paths in Terminal are using the -i option to the ls command, and for individual items using stat:
ls -i lists each item in the current directory, giving its inode number first, e.g.
22084095 00swift
13679656 Microsoft User Data
22075835 Wolfram Mathematica

and so on;
stat myfile.text returns
16777220 36849933 -rw-r--r-- 1 hoakley staff […] myfile.text
where the first number is the volume number, and the second is the inode number of that item, or /.vol/16777220/36849933.
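
If you want stat to assemble a volfs path for you, a format string will do that directly; a minimal sketch:
stat -f "/.vol/%d/%i" myfile.text    # prints the volfs path, e.g. /.vol/16777220/36849933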

What to do when APFS has problems

By: hoakley
7 October 2025 at 14:30

You’ve just run First Aid in Disk Utility, or fsck_apfs, and that reports warnings or errors. What should you do next?

Failure to unmount

By far the most frequent error encountered in Disk Utility’s checks results from its inability to unmount a volume before it can start testing. While this is reported as an error, and it prevents the checks from running, it can sometimes be solved by manually unmounting the volume in question. It normally doesn’t indicate anything sinister, and is simply frustrating.
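
If Disk Utility can’t unmount the volume itself, Terminal may manage it; a sketch, where External is a hypothetical volume name:
diskutil unmount /Volumes/External          # try a normal unmount first
diskutil unmount force /Volumes/External    # force it if something is still holding the volume open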

Repeat in Recovery mode

If the warnings or errors were reported on your current boot Data volume and you ran that check in normal user mode, consider starting your Mac up in Recovery mode to repeat the check there.

Although macOS does an impressive job of performing checks on a live volume, it’s more reliable and more likely to be able to perform any repairs needed in Recovery mode, when the Data volume isn’t mounted and live. When there, if you prefer, you can still use fsck_apfs in Terminal.
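
A minimal sketch of such a check from the command line, where disk3s5 is a hypothetical device identifier for the Data volume:
diskutil list                     # identify the APFS container and its volumes
sudo fsck_apfs -n /dev/disk3s5    # -n checks without repairing; unmount the volume first, or add -l to check a live volume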

[Image: diskutil05]

[Image: diskutil06]

The other case where checks are best made in Recovery is an active Time Machine backup volume. That’s because such volumes can be difficult to unmount before running the checks, although that is possible as long as a backup isn’t being made at the time, and Spotlight indexing isn’t taking place. The sure way to avoid both is to run the checks in Recovery mode.

Warnings or errors?

Any remarks about problems or irregularities encountered during checks should make explicit whether they are warnings or errors, and it’s essential to make a clear distinction between them.

Warnings are observations that could have significance, or might be perfectly normal in the circumstances. Only an APFS engineer is likely to be able to tell the difference. Among the commonest are those reporting missing xattrs for compressed files:
warning: inode (id 113812826): Resource Fork xattr is missing for compressed file
Experience is that those aren’t related to any consequent errors, and you should be able to leave those alone.

Errors are abnormalities that do have more significance, and might have the potential to cause further problems. Where possible, First Aid or fsck_apfs should attempt to repair those, most probably by performing “deferred repairs”. Those are normally minor errors that it already has plans to attend to when that volume is next mounted, the time that APFS normally performs its routine maintenance.

Snapshots

These are read-only copies of the file system metadata at a previous instant in time, and are associated with retained storage blocks. They aren’t part of the active volume, and their metadata are separate. As they’re read-only, any warnings or errors are most unlikely to be fixed, so you have a choice of leaving that snapshot to be deleted routinely by age, or deleting it early yourself.

Snapshots are made by backup utilities including Time Machine, which are required by Apple to have a mechanism that will delete them automatically after a set period. In the case of Time Machine, that’s when the snapshot is over 24 hours old. Snapshots aren’t backups, but augment regular backups, and stand in for them when backup storage isn’t available. In general, there seems little point in deleting a snapshot early just because there’s a reported warning or error for it, as that won’t affect the health of the active volume.
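
If you do want to inspect or remove a snapshot early, Time Machine’s can be handled in Terminal; a sketch, where the date stamp is a hypothetical snapshot name:
tmutil listlocalsnapshots /                           # list snapshots of the boot Data volume
sudo tmutil deletelocalsnapshots 2025-10-06-143000    # delete the snapshot with that date stamp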

Identifying faulty items

Warnings and errors related to specific files or directories are normally given with an item id, which should be their inode number. To go any further, you’ll need to convert that to a path and file/directory name. You should therefore copy and paste all reports into a separate file as reference. Resolving an inode to a path and item name is detailed in this article.

In many cases involving items that can be resolved to an existing path, the faulty item is in one of the hidden folders such as .Spotlight-V100 for Spotlight indexes, or .DocumentRevisions-V100 for the document version database. In the former, rebuilding Spotlight’s indexes may resolve the problem, but you’re unlikely to be able to do anything about the latter.
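
Rebuilding Spotlight’s indexes for a volume can be done from Terminal; a sketch, where External is a hypothetical volume name:
sudo mdutil -E /Volumes/External    # erase the Spotlight indexes on that volume so they are rebuilt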

If the inode resolves to a regular file, deleting that can remove the problem, but when you try restoring that file from a backup you may discover the backup has the same problem. Getting to the bottom of a recurrent file system error might require the knowledge and skills of an Apple engineer. Consider reporting this using Feedback, as it should then help iron out any remaining bugs in APFS.

You should also consider whether your Mac might be running old third-party software that is causing recurrent errors. Normally, products should work at a higher level that isolates them from the file system itself, but there are some surprising exceptions. If you can identify a cause, please inform the developers of that software so that it can be fixed.

Old versions of APFS

One potentially dangerous practice is letting an older version of APFS change a newer file system. APFS back in High Sierra and Mojave knew nothing of boot volume groups, firmlinks, or many of the features of more modern versions of APFS. If you really must run different versions of macOS on the same Mac, or on shared external storage, avoid such version conflicts, and never run an older version of Disk Utility or fsck_apfs on a newer APFS container or volume.

Explainer: inodes and inode numbers

By: hoakley
4 October 2025 at 15:00

Every self-respecting file system identifies files and directories using numbered data structures. In most modern file systems, those data structures are known as inodes, and their numbers are inode numbers, sometimes shortened to inodes. The term is thought to be a contraction of index node, which certainly makes sense, but is lost in the mists of time.

In any file system, for example an individual APFS volume, the inode numbers uniquely identify each inode, and each object within that file system has its own inode. Whatever else the file system might do, the inode number identifies one and only one object within it. Thus one invariant way of identifying any file is by referencing the file system containing it, and its inode number.

HFS+

The Mac’s original native file systems, culminating in Mac OS Extended (HFS+), which grew out of the Hierarchical File System (HFS), don’t use inodes as such, and don’t strictly speaking have inode numbers. Instead, the data structures for their files and folders are kept in Catalogue Nodes, and their numbers are Catalogue Node IDs, CNIDs.

With Mac OS X came Unix APIs, and their requirement to use inodes and inode numbers. As CNIDs are unique to each file and folder within an HFS+ volume, for HFS+ they are used as inode numbers, although in some ways they differ. CNIDs are unsigned 32-bit integers, with the numbers 0-15 reserved for system use. For example, CNID 7 is normally used as the Startup file’s ID.

Although not necessarily related to CNIDs, it’s worth noting that the Mac’s original file system MFS allowed a maximum of 4,094 files, its successor HFS was limited to 65,535, and HFS+ to 4,294,967,295. Those are in effect the maximum number of inode numbers or their equivalents allowed in that file system.

APFS

Unlike HFS+, APFS was designed from the outset to support standard Posix features, so has inodes numbered using unsigned 64-bit integers.

Strictly, the APFS inode number is the object identifier in the header of the file-system key. Not all unsigned 64-bit integers can be used as inode numbers, though, as a bit mask is used, and the number includes an encoded object type. APFS also makes special allowance for volume groups consisting of firmlinked System and Data volumes, to allow for their inode numbers to remain unique across both volumes. Officially, those allow for a maximum number of inode numbers of 9,223,372,036,854,775,808, either in single APFS file systems, or shared across two in a volume group.

[Image: fileobject1]

Every file in APFS is required to have, at an absolute minimum, an inode and its associated attributes, shown above in blue, including a set of timestamps, permissions, and other essential information about that file.

There are also three optional types of component:

  • although some files may be dataless, most have data, for which they need file extents (green), records linking to the storage blocks containing that file’s data (pink);
  • smaller extended attributes (yellow), named metadata objects that are stored in a single record;
  • larger extended attributes (yellow), over 3,804 bytes in size, whose data is stored separately in a data stream.

Interpretation

In volume groups, inode numbers can be used to determine which of the firmlinked file systems contains any given file. Both volumes share the same volume ID, and files in the System volume have very high inode numbers, while those in the Data volume are relatively low. I have given further details here.
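
You can see that difference with any pair of items; a sketch comparing an app on the System volume with a folder on the Data volume (the numbers will differ on your Mac):
ls -di /System/Applications/Calculator.app /Users/Shared    # the System item returns a very large inode number, the Data item a much smaller one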

Inode numbers are more generally useful in distinguishing files whose data appears identical, and the behaviour of various methods of linking files.

Copying a file within the same file system (volume) creates a new file with its own inode number, of course. Duplicating a file within the same file system results in an APFS clone file instead; although that shares common data with the original, it still has its own inode, so their inode numbers also differ.

[Image: fileobject3]

Instead of duplicating everything, only the inode and its attributes (blue) are duplicated, together with their file extent information (green); the storage blocks (pink) are shared between the original and its clone. There’s a flag in each clone file’s attributes to indicate that cloning has taken place.
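
You can create and inspect a clone in Terminal; a minimal sketch, using hypothetical file names:
cp -c original.jpg clone.jpg    # -c clones the file using clonefile(2) rather than copying its data
ls -i original.jpg clone.jpg    # the clone has its own, different inode number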

A symbolic link or symlink is merely a pointer to the linked file’s path, and doesn’t involve inode numbers at all. Because of that, changing the linked file’s name or path breaks its link. Finder aliases and their kindred bookmarks do contain inode numbers, so should be able to cope with changes to the linked file’s name or path, provided it remains in the same file system and doesn’t change inode number.

Hard links are more complex, and depend on the way in which the file system implements them, although the fundamental rule is that each object that’s hardlinked to the same file has the same inode number. According to Apple’s reference to APFS, this is how it handles hard links.

[Image: fileobject6]

When you create a hard link to a file (blue), APFS creates two siblings (purple) with their own IDs and links, including different paths and names as appropriate. Those don’t replace the original inode, and there remains a single file object for the whole of that hardlinked file.

Inode attributes keep a count of the number of links they have to siblings in their link (or reference) count. Normally, when a file has no hard links that’s one, and there are no sibling files. When a file is to be deleted, if its link count is only 1, the file and all its associated components can be removed, subject to the requirements of any clones and applicable snapshots. If the link count is greater than 1, then only the sibling being removed is deleted.
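
A quick Terminal demonstration of that link count; a minimal sketch, using hypothetical file names:
echo "some data" > target.txt
ln target.txt hardlink.txt        # create a hard link to the same file
ls -li target.txt hardlink.txt    # both names show the same inode number and a link count of 2
rm hardlink.txt                   # removing one name leaves the file intact, and the count returns to 1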

Easy access

Inode numbers are readily accessed using command tools in Terminal. For example, ls -i lists items with their inode numbers shown. One free utility that displays full information about inode numbers and much more is Precize.

The volume ID is given as the first number in the volfs path, and the second is the inode number of that file within that. Note that the File Reference URL (FileRefURL) uses a different numbering system, and the Ref count of 1 indicates this file has no hard links.
