
Last Week on My Mac: A strategy for data integrity

By: hoakley
20 July 2025 at 15:00

File data integrity is one of those topics that never goes away. Maybe that’s because we’ve all suffered in the past, and can’t face a repeat of that awful feeling when an important document can’t be opened because it’s damaged, or crucial data have gone missing. Before considering what we could do to prevent that from happening, we must be clear about how it could occur.

We have an important file, and immediately after it was last changed and saved, a SHA256 digest was made of it and saved to that file as an extended attribute, as you can using Dintch, Fintch or cintch. A few days or weeks later we open the file and discover its contents have changed.
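To make this concrete, here’s a minimal sketch in Swift using CryptoKit, tagging a file with its SHA256 digest stored as an extended attribute. The xattr name co.example.sha256 is hypothetical, and this isn’t the format used by Dintch, Fintch or cintch:

```swift
import Foundation
import CryptoKit

// A minimal sketch of tagging a file with its SHA-256 digest stored as
// an extended attribute. The xattr name "co.example.sha256" is
// hypothetical, not the format Dintch, Fintch and cintch actually use.
func tagFile(at url: URL) throws {
    let data = try Data(contentsOf: url)   // reads the whole file into memory
    let digest = SHA256.hash(data: data)
    let hex = digest.map { String(format: "%02x", $0) }.joined()
    let result = hex.withCString { value in
        setxattr(url.path, "co.example.sha256", value, strlen(value), 0, 0)
    }
    guard result == 0 else {
        throw NSError(domain: NSPOSIXErrorDomain, code: Int(errno), userInfo: nil)
    }
}
```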

Reasons

What could account for that?

One obvious reason is that the file was intentionally changed and saved without updating its digest. Provided there are good backups, we should be able to step back through them to identify when that change occurred, and decide whether it’s plausible that it was performed intentionally. Although the file’s Modified datestamp should coincide with the change seen in its backups, there’s no way of confirming that change was intentional, or even which app was used to write the changed file (with some exceptions, such as PDF).

Exactly the same observations would also be consistent with the file being unintentionally changed, perhaps as a result of a bug in another app or process that resulted in it writing to the wrong file or storage location. The changed digest can only detect the change in file content, and can’t indicate what was responsible. This is a problem common to file systems that automatically update their own records of file digests, as they are unable to tell whether the change is intentional, simply that there has been a change. This also applies to changes resulting from malicious activity.

The one circumstance in which a change in contents, hence in digest, wouldn’t be accompanied by a change in the file’s Modified datestamp is when an error occurs in the storage medium. However, this is also the least likely to be encountered in modern storage media without that error being reported.

Errors occurring during transfer to and from storage are detected by CRC or similar checks made as part of the transfer protocol. This is one of the reasons why a transfer bandwidth of 40 Gb/s cannot realise a data transfer rate of 5 GB/s, because part of that bandwidth is used by the error-checking overhead. Once written to a hard disk or SSD, error-correcting codes are used to verify integrity of the data, and are used to detect bad storage blocks.

Out of interest, I’ve been conducting a long-term experiment with 97 image files totalling 60.8 MB stored in my iCloud Drive since 11 April 2020, over five years ago. At least once a year I download them all and check them using Dintch, and so far I haven’t had a single error.

Datestamps

There are dangers inherent in putting trust in file datestamps as markers of change.

In APFS, each file has four different datestamps stored in its attributes:

  • create_time, time of creation of that file,
  • mod_time, time that file was last modified,
  • change_time, time that the file’s attributes including extended attributes were last modified,
  • access_time, time that file was last read.

For example, a file with the following datestamps

  • create_time 2025-04-18 19:58:48.707+0100
  • mod_time 2025-04-18 20:00:56.134+0100
  • change_time 2025-07-19 06:59:10.542+0100
  • access_time 2025-07-19 06:52:17.504+0100

was created on 18 April this year, last modified a couple of minutes later, last had its attributes changed on 19 July, but was last read 7 minutes before that modification to its attributes.

These can be read using Precize, or in Terminal, but there’s a catch with access_time. APFS has an optional feature, set by volume, determining whether access_time is changed strictly. If that option is set, then every time a file is accessed, whether it’s modified or not, its access_time is updated. However, this defaults to only updating access_time if its current value is earlier than mod_time. I’m not aware of any current method to determine whether the strict access_time is enabled for any APFS volume, and it isn’t shown in Disk Utility.
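For those who prefer a programmatic route, here’s a minimal sketch in Swift showing how Foundation’s URLResourceValues exposes all four datestamps; the file path is a placeholder:

```swift
import Foundation

// A sketch of reading the four APFS datestamps through Foundation.
// Each resource key maps onto one of the APFS attributes.
let url = URL(fileURLWithPath: "/path/to/file")
let values = try url.resourceValues(forKeys: [
    .creationDateKey,               // create_time
    .contentModificationDateKey,    // mod_time
    .attributeModificationDateKey,  // change_time
    .contentAccessDateKey           // access_time
])
print(values.creationDate ?? "unknown",
      values.contentModificationDate ?? "unknown",
      values.attributeModificationDate ?? "unknown",
      values.contentAccessDate ?? "unknown")
```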

mod_time can be changed when there has been no change in the file’s data, for example using the Terminal command touch. Any of the times can be altered directly, although that should be very unusual even in malware.

Although attaching a digest to a file as an extended attribute will update its change_time, there are many other reasons for that being changed, including macOS adding or changing quarantine xattrs, the file’s ‘last used date’, and others.

Proposed strategy

  1. Tag folders and files whose data integrity you wish to manage.
  2. Back them up using a method that preserves those tags, such as Time Machine, or by copying them to iCloud Drive.
  3. Periodically Check their tags to verify their integrity.
  4. As soon as possible after any have been intentionally modified and saved, Retag them to ensure their tags are maintained.
  5. In the event that any are found to have changed, and no longer match their tag, trace that change back in their backups.

Unlike automatic integrity-checking built into a file system, this will detect all unexpected changes, regardless of whether they are made through the file system, are intentional or unintentional, are malicious, or result from errors in storage media or transmission. Because only intentionally changed files are retagged, this also minimises the size of backups.
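As a sketch of the Check step, the function below recomputes a file’s digest and compares it against the digest stored as an extended attribute by the earlier tagging sketch. The xattr name remains hypothetical:

```swift
import Foundation
import CryptoKit

// The matching Check step for the earlier tagging sketch: recompute the
// digest and compare it with the stored xattr. The xattr name is
// hypothetical, not the format Dintch actually uses.
func checkFile(at url: URL) throws -> Bool {
    let data = try Data(contentsOf: url)
    let hex = SHA256.hash(data: data)
        .map { String(format: "%02x", $0) }.joined()
    // First call with nil gets the size of the stored attribute.
    let size = getxattr(url.path, "co.example.sha256", nil, 0, 0, 0)
    guard size > 0 else { return false }   // untagged file
    var buffer = [UInt8](repeating: 0, count: size)
    _ = getxattr(url.path, "co.example.sha256", &buffer, size, 0, 0)
    let stored = String(decoding: buffer, as: UTF8.self)
    return stored == hex
}
```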

Checking data integrity

By: hoakley
16 July 2025 at 14:30

How can you check whether important data has acquired errors? It could be in packets transferred over a network, files being stored in the cloud, or those put in an archive for safe-keeping. This article explains how errors can be detected and in some cases corrected.

Checksum

One quick and simple way to keep track of whether errors have crept in is to add all the byte values together using modulo arithmetic, and see if that number matches what we expect. Modulo addition ensures the sum never exceeds 255 (for 8-bit bytes), but it also means there’s a 1 in 256 chance that the checksum remains correct even when the data is wrong. There’s an even more serious problem, though: changing the order of bytes in the data won’t have any effect on their checksum, even though it would scramble the data.
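A minimal sketch in Swift makes that weakness obvious: the two sequences below differ only in byte order, yet their checksums are identical.

```swift
// A minimal modulo-256 checksum, showing why it can't detect reordering.
func checksum(_ bytes: [UInt8]) -> UInt8 {
    bytes.reduce(0, &+)   // wrapping addition is addition modulo 256
}

print(checksum([1, 2, 3]) == checksum([3, 2, 1]))  // true: order is invisible
```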

One solution to those problems is the Fletcher checksum, which uses two values instead of one. Both start at zero; each block of data is added to the first value, and after every addition the first value is added to the second. At the end the two values are combined to give the Fletcher checksum.

As this only uses modulo addition, it’s extremely quick, so is used with 32-bit blocks for the Fletcher-64 checksum in APFS file system metadata. The chances of a ‘collision’, in which an error fails to show up in the checksum, are almost 1 in 4.3 billion. Wikipedia’s account includes worked examples.
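For illustration, here’s a sketch of the simplest member of the family, Fletcher-16, which runs over bytes rather than the 32-bit blocks of APFS’s Fletcher-64; the running second sum is what makes reordering detectable.

```swift
// A sketch of Fletcher-16: two sums over bytes, combined at the end.
// APFS's Fletcher-64 works the same way, but over 32-bit blocks.
func fletcher16(_ bytes: [UInt8]) -> UInt16 {
    var sum1: UInt16 = 0, sum2: UInt16 = 0
    for byte in bytes {
        sum1 = (sum1 + UInt16(byte)) % 255  // first, simple sum
        sum2 = (sum2 + sum1) % 255          // second, sum of running sums
    }
    return (sum2 << 8) | sum1               // combine the two sums
}

print(fletcher16([1, 2, 3]) == fletcher16([3, 2, 1]))  // false: order now matters
```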

Cyclic redundancy check

These use additional bits to form cyclic codes based on the remainder of a polynomial division of the data contents. Although they sound complex, they use simple binary operations, so can be very quick in use. Again, Wikipedia’s account includes worked examples.

These were originally developed for use over communication channels such as radio, where they are effective against short bursts of errors. Since then they have been widely used in hard disks and optical storage. They have two main disadvantages: they are easily reversed, making it simple to engineer altered data that yields a given CRC, and they can’t protect against intentional modification.
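As a sketch of how simple those binary operations are, this is a bit-at-a-time implementation of CRC-32, the polynomial used by zlib and many storage formats; production code uses lookup tables instead for speed.

```swift
// A bit-at-a-time CRC-32 sketch using the reversed polynomial 0xEDB88320.
func crc32(_ bytes: [UInt8]) -> UInt32 {
    var crc: UInt32 = 0xFFFFFFFF
    for byte in bytes {
        crc ^= UInt32(byte)
        for _ in 0..<8 {
            // Polynomial division, one bit at a time.
            crc = (crc & 1) == 1 ? (crc >> 1) ^ 0xEDB88320 : crc >> 1
        }
    }
    return ~crc
}

print(String(format: "%08x", crc32(Array("123456789".utf8))))  // cbf43926
```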

Hash

Secure Hash Algorithms, SHA, are a family of cryptographic functions based on a one-way compression function. The first, SHA-1, produces a 160-bit value referred to as the digest. Used from 1995, it was broken in 2005, and has been replaced by SHA-2 using digest sizes of 224-512 bits, now most widely used as SHA-256.

For SHA-256, data is processed in 512-bit chunks and subjected to 64 rounds of a compression function before being combined into the 256-bit digest, while SHA-512 uses 1024-bit chunks and 80 rounds. Full details are again given in Wikipedia.

Important properties of cryptographic hash functions include:

  • The mapping from input data to hash is deterministic, so the same data always generates the same hash.
  • It’s not feasible to work out the input data for any given hash, making the mapping one-way.
  • Collisions are so rare as to not occur in practice.
  • Small changes in the input data result in large changes in the hash, amplifying any differences, as the sketch below demonstrates.
  • Hash values should be fairly evenly distributed.
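That avalanche property is easy to demonstrate with CryptoKit: flipping a single bit of the input yields a digest with no discernible relationship to the original.

```swift
import CryptoKit
import Foundation

// Demonstrating the avalanche property: one flipped bit in the input
// produces a completely different SHA-256 digest.
var input = Array("The quick brown fox".utf8)
let before = SHA256.hash(data: Data(input))
input[0] ^= 1  // flip one bit of the first byte
let after = SHA256.hash(data: Data(input))
print(before)
print(after)   // shares no obvious structure with the first digest
```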

SHA-256 and related hashes are used in code signatures, as CDHashes of the protected parts of each app, bundle, etc. They’re also used to verify the integrity of the Signed System Volume in modern macOS, where they’re assembled into a tree hierarchy so they can be verified lazily, on-demand. More generally, cryptographic hashes are used in message authentication codes (MAC) to verify data integrity in TLS (formerly SSL).

Error-correcting code

Those methods of detecting errors can’t, in general, be used to correct them. The first effective error-correcting code was devised by Richard Hamming, and is named after him. It can correct all single-bit errors, and will detect two-bit errors as well. Wikipedia’s excellent account, complete with explanatory Venn diagrams, is here.
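For a flavour of how it works, here’s a sketch of the classic Hamming(7,4) code in Swift: four data bits gain three parity bits, and the syndrome computed from the parity checks points directly at any single flipped bit.

```swift
// A sketch of Hamming(7,4): encode 4 data bits into 7, then locate and
// correct any single flipped bit from the syndrome.
func hammingEncode(_ d: [Int]) -> [Int] {    // d has 4 bits
    let p1 = d[0] ^ d[1] ^ d[3]
    let p2 = d[0] ^ d[2] ^ d[3]
    let p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]  // positions 1...7
}

func hammingCorrect(_ c: [Int]) -> [Int] {   // c has 7 bits
    var code = c
    // Each parity check covers the positions whose index has that bit set.
    let s1 = code[0] ^ code[2] ^ code[4] ^ code[6]
    let s2 = code[1] ^ code[2] ^ code[5] ^ code[6]
    let s3 = code[3] ^ code[4] ^ code[5] ^ code[6]
    let errorPosition = s1 + 2 * s2 + 4 * s3  // 0 means no error found
    if errorPosition > 0 { code[errorPosition - 1] ^= 1 }
    return code
}

var word = hammingEncode([1, 0, 1, 1])
word[4] ^= 1                                  // corrupt one bit
print(hammingCorrect(word) == hammingEncode([1, 0, 1, 1]))  // true
```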

Ten years after Hamming code came Reed-Solomon code (R-S), invented by Irving S Reed and Gustave Solomon. Nearly twenty years later, when Philips Labs were developing the format for CDs, their code was adopted to correct errors in their reading. Unlike others, when used in CDs, R-S codes are applied to bytes rather than bits, in two steps.

The first encodes 24 B of input data into 28 B of code. Interleaving is then performed on those codewords in blocks of 28, or 784 B of codewords, following which a second R-S coding is performed to convert each 28 B into 32 B of code. The overall code rate is thus 24/32, so an input file grows by a third following this double encoding. R-S code is explained in detail in this Wikipedia article.

The reason for such a complex scheme of error-correction in CDs is to correct bursts of errors up to about 4,000 bits, or 500 bytes, representing about 2.5 mm of the track on a CD. Similar methods have been used for DVDs and in parchive files, used for content distributed in USENET posts. However, it becomes progressively harder and less efficient to provide error-correction for larger blocks of missing data, which is of course one of the most serious problems in computer storage systems.

macOS

Unlike some file systems including ZFS, APFS and macOS don’t provide any native method for checking the integrity of data stored in files, although macOS does offer SHA-256 and SHA-512 support in CryptoKit. My free suite of two apps (Dintch and Fintch) and a command tool (cintch) offers a robust scheme using CryptoKit’s SHA-256 that I and others have been using for the last five years. Details are on their Product Page.

Updates for file integrity (Dintch/Fintch), compression (Cormorant) and LogUI build 70

By: hoakley
14 July 2025 at 14:30

This is the last batch of ‘simple’ updates to my free apps to bring them up to the expectations of macOS 26 Tahoe. With them comes a minor update to my log browser LogUI, which is recommended for all using Tahoe, as it fixes an annoying if fundamentally cosmetic bug.

Preparing these updates for release was a little troublesome, as I attempted this using developer beta 3 of Tahoe and Xcode 26 beta 3. Little did I realise when I got all four rebuilt, tested and notarized, that this combination had stripped their shiny new Tahoe-compliant app icons. That made these new versions unusable in Sequoia and earlier, as they each displayed there with the generic app icon, despite working fine in Tahoe.

Eventually I discovered that I could build fully functional versions using Xcode 26 beta 2 in Sequoia 15.5, so that’s how they have been produced.

File integrity

Five years ago I built a suite of two apps and a command tool to enable checking the integrity of file data wherever it might be stored. This uses SHA256 hashes stored with each checked file as an extended attribute. At that time, the only alternative saved hashes to a file in the enclosing folder, which I considered suboptimal, as it required additional maintenance whenever files were moved or copied to another location. It made more sense to ensure that the hash travels with the file whose data integrity it verifies.

The three are Fintch, intended for use with single files and small collections, Dintch, for larger directories or whole volumes, and cintch, a command tool ideal for calling from your own scripts. As the latter has no interface beyond its options, it continues to work fine in macOS 26.

Since then other products have recognised the benefits of saving hashes as extended attributes, although some may now use SHA512 rather than SHA256 hashes. What may not be apparent is the disadvantage of that choice.

Checking the integrity of thousands of files and hundreds of GB of data is computationally intensive and takes a lot of time, even on fast M4 chips. It’s therefore essential to make that as efficient as possible. Although checksums would be much quicker than SHA256 hashes, they aren’t reliable enough to detect some changes in data. SHA algorithms have the valuable property of amplifying even small differences in data: changing a single bit in a 10 GB file results in a huge change in its SHA256 hash.

At the same time, the chances of ‘collisions’, in which two different files share the same hash, are extremely low. For SHA256, the probability that two arbitrary byte sequences share the same hash is one in 2^256, roughly one in 1.2 x 10^77. Using SHA512 changes that to one in 2^512, which is even more remote.

However, there ain’t no such thing as a free lunch, as going from SHA256 to SHA512 brings a substantial increase in the computational burden. When run on a Mac mini M4 Pro, using its internal SSD, SHA256 hashes are computed from files on disk at a speed of around 3 GB/s, but that falls to 1.8 GB/s when using SHA512 hashes instead.


Dintch provides two controls to optimise its performance: you can tune the size of its buffer to cope best with the combination of CPU and storage, and you can set it to run at one of three different QoS values. At its highest QoS, it will run preferentially on Apple silicon P cores for a best speed of 3 GB/s, while run at its lowest QoS it will be confined to the E cores for best energy economy, and a speed of around 0.6 GB/s for running long jobs unobtrusively in the background.
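For anyone curious about the mechanism, this is a sketch of how QoS steers work between core types on Apple silicon, not Dintch’s actual code: Grand Central Dispatch confines .background work to the E cores, while higher QoS work is eligible for the P cores.

```swift
import Foundation

// On Apple silicon, .background QoS work is confined to the E cores,
// while .userInitiated work runs preferentially on the P cores.
let fast = DispatchQueue(label: "hash.fast", qos: .userInitiated)
let economical = DispatchQueue(label: "hash.slow", qos: .background)

fast.async {
    // a long hashing job at high QoS: eligible for the P cores
}
economical.async {
    // the same job at .background QoS: runs on the E cores only
}
```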

The two apps and cintch are mutually compatible, and compatible with their earlier versions going back to macOS El Capitan. In more recent versions of macOS they use Apple’s CryptoKit for optimum performance.

Dintch version 1.8 is now available from here: dintch18
Fintch version 1.4 is now available from here: fintch14
and from their Product Page, from where you can also download cintch. Although they do use the auto-update mechanism, I fear that changes in WordPress locations may not allow this to work with earlier versions.

Compression/decompression

Although I recommend Keka as a general compression and decompression utility, I also have a simple little app that I use with folders and files I transfer using FileVault. This only uses AppleArchive LZFSE, and strips any quarantine extended attributes when decompressing. It’s one of my testbeds for examining core allocation in Apple silicon Macs, so has extensive controls over QoS and performance, and offers manual settings as well as three presets.
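As an illustration of the underlying codec, here’s a sketch of one-shot LZFSE compression using the Compression framework; Cormorant itself uses AppleArchive, which builds on the same algorithm. Note that compression_encode_buffer returns 0 when the output won’t fit, as can happen with incompressible data.

```swift
import Compression
import Foundation

// A sketch of one-shot LZFSE compression with the Compression framework.
// Cormorant itself uses AppleArchive, which wraps the same codec.
func lzfseCompress(_ input: Data) -> Data? {
    let destination = UnsafeMutablePointer<UInt8>.allocate(capacity: input.count)
    defer { destination.deallocate() }
    let written = input.withUnsafeBytes { (src: UnsafeRawBufferPointer) -> Int in
        compression_encode_buffer(destination, input.count,
                                  src.bindMemory(to: UInt8.self).baseAddress!,
                                  input.count, nil, COMPRESSION_LZFSE)
    }
    // 0 means encoding failed or the compressed form didn't fit.
    return written == 0 ? nil : Data(bytes: destination, count: written)
}
```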

Cormorant version 1.6 is now available from here: cormorant16
and from its Product Page. Although it does use the auto-update mechanism, I fear that changes in WordPress locations may not allow this to work with version 1.5 and earlier.

LogUI

Those using this new lightweight log browser in Tahoe will have discovered that, despite SwiftUI automatically laying out its controls, their changed sizes in Tahoe make a mess of the seconds setting for times. This new version corrects that, and should be easier to use.

LogUI version 1 build 70 is now available from here: logui170

There will now be a pause in updates for macOS Tahoe until Apple has restored backward compatibility of app icons, hopefully in the next beta releases.
