With more new M4 Macs in the offing, one question that I’m asked repeatedly is whether you should save money by getting a Mac with the smallest internal SSD and extending it with cheaper external storage. This article considers the pros and cons.
Size and prices
In Apple’s current M4 models, the smallest internal storage on offer is 256 GB. For the great majority, that’s barely adequate even if you don’t install any of your own apps. It might suffice in some circumstances, for example if you work largely from shared storage, but for a standalone Mac it won’t be sufficient in five years’ time. Your starting point should therefore be a minimum of 512 GB internal SSD. Apple’s typical charge for increasing that to 2 TB is around $/€/£ 600.
The alternative to 2 TB internally would be an external 2 TB SSD. Unless you’re prepared to throw it away after three years, you’ll want to choose the most versatile interface that’s also backward compatible. The only choice here is Thunderbolt 5, which currently comes at a small premium over USB4 or Thunderbolt 3. Two TB would currently cost you $/€/£ 380-400, although those prices are likely to fall in the coming months as TB5 SSDs come into greater supply.
Don’t be tempted to skimp with a USB 3.2 Gen 2 external SSD if that’s going to be your main storage. While it might seem a reasonable economy now, in 3-5 years’ time you’ll regret it. Besides, it may well come with severe limitations: many such SSDs don’t Trim as standard, and most don’t support SMART health indicators.
Thus, your expected saving by buying a Mac with only 512 GB internal storage, and providing 2 TB main storage on an external SSD, is around $/€/£ 200-220, and that’s really the only advantage in not paying Apple’s high price for an internal 2 TB SSD.
Upgrading internal storage in an Apple silicon model currently isn’t feasible for most users. As Apple doesn’t support such upgrades, they’re almost certain to invalidate its warranty and any AppleCare+ cover. That could change in the future, at least for some models like the Mac mini and Studio, but I think it unlikely that Apple would ever make an upgrade cheaper than initial purchase.
External boot disk
One of the few compelling reasons for choosing a Mac with minimal internal storage is when it’s going to be started up from an external boot disk. Because Apple silicon Macs must always start their boot process from their internal storage, and that Mac still needs Recovery and other features on its internal SSD, you can’t run entirely from an external SSD, but you could probably get away with the smallest internal SSD available for that model’s other specifications, either 256 or 512 GB.
Apple silicon Macs are designed to start up and run from their internal storage. Unlike Intel Macs with T2 chips, they will still boot from an external disk with Full Security, but there are several disadvantages in doing so. Among them are the facts that FileVault encryption on an external boot disk isn’t performed in hardware, so is inherently less secure, and that AI isn’t currently supported when booted from an external disk. Choosing to do that thus involves compromises that you might not want to be stuck with throughout the lifetime of that Mac.
External media libraries
Regardless of the capacity of a Mac’s internal storage, it’s popular to store large media libraries on external storage, and for many that’s essential. This needs to be planned carefully: some libraries are easier to relocate than others, and provision has to be made for their backups. If you use hourly Time Machine backups for your working folders, you’ll probably want to back up external media libraries less frequently, and to different external storage.
External Home folder
Although it remains possible to relocate a user’s entire Home folder to external storage, this seems to have become more tricky in recent versions of macOS. Home folders also contain some of the most active files, particularly those in ~/Library, so moving them to an external SSD requires one with good performance.
A more flexible alternative is to extend some working folders to external storage, while retaining the Home folder on internal storage. This can fit well with backup schedules, but you will still need to ensure the whole Home folder is backed up sufficiently frequently. It does have an unfortunate side-effect in privacy protection, as it may require most of your working apps to be given access to Removable Volumes in the Files & Folders item in Privacy & Security settings. Thankfully, that should only need to be done once, when an app first uses external storage.
How much free space do you need?
When you’re weighing up your options to minimise the size of your new Mac’s internal storage, you also need to allow sufficient free space on each disk. APFS is very different from HFS+ in this respect: on external disks, in particular, HFS+ continues to work happily with just a few MB free, and could be filled almost to capacity. APFS, modern macOS and SSDs don’t work like that.
Measuring how much free space is needed isn’t straightforward either, as macOS trims back on its usage in response to falling free space. Some key features, such as retaining log entries, are sacrificed to allow others to continue. Snapshots can be removed or not made. Perhaps the best measurements come from observing the space requirements of VMs, where total virtual disk space much below 50 GB impairs running of normal functions. That’s the total size of the virtual disk, not the amount of free space, and doesn’t apply when iCloud or AI are enabled.
The other indicator of minimum free space requirements is for successful upgrading of macOS, which appears to be somewhere between 30 and 40 GB. This makes it preferable to keep an absolute minimum of around 50 GB free at all times. When possible, 100 GB gives more room for comfort.
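If you want to keep an eye on that yourself, here’s a minimal Swift sketch using Foundation’s volumeAvailableCapacityForImportantUsage resource key, which reports free space including purgeable storage that macOS can reclaim; the 50 and 100 GB thresholds are simply the figures suggested above.

import Foundation

// Check free space on the boot volume against the thresholds suggested above.
// volumeAvailableCapacityForImportantUsage includes purgeable space macOS can reclaim.
let bootVolume = URL(fileURLWithPath: "/")
do {
    let values = try bootVolume.resourceValues(forKeys: [.volumeAvailableCapacityForImportantUsageKey])
    if let available = values.volumeAvailableCapacityForImportantUsage {
        let freeGB = Double(available) / 1_000_000_000
        switch freeGB {
        case ..<50:
            print(String(format: "Only %.1f GB free: below the 50 GB minimum.", freeGB))
        case ..<100:
            print(String(format: "%.1f GB free: adequate, but below the more comfortable 100 GB.", freeGB))
        default:
            print(String(format: "%.1f GB free: comfortable.", freeGB))
        }
    }
} catch {
    print("Could not read free space: \(error)")
}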
SSD wear and performance
When the first M1 Macs were released, base models with just 8 GB of memory and 256 GB internal SSDs were most readily available, with custom builds (BTO) following later. As a result, many of those who set out to assess Apple’s new Macs ended up stress-testing those with inadequate memory and storage for the tasks they ran.
Many noticed rapid changes in their SSD wear indicators, and some were getting worryingly close to the end of their expected working life after just three years. Users also reported that SSD performance was falling. The reasons for those are that SSDs work best, age slowest, and remain fastest when they have ample free space. One common rule of thumb is to keep at least 20-25% of SSD capacity as free space, although evidence is largely empirical, and in places confused.
The simplest factor to understand is the effect of SSD size on wear. As the memory in an SSD is expected to last a fixed number of erase-write cycles, all other things being equal, writing and rewriting the same amount of data to a smaller SSD will reach that number more quickly. Thus, in general terms and under the same write load, a 512 GB SSD will last about half as long as a 1 TB SSD.
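To put rough numbers on that, here’s a small Swift sketch of the arithmetic. The endurance and write figures are illustrative assumptions only; Apple doesn’t publish endurance ratings for its internal SSDs, and real ratings vary widely.

import Foundation

// Illustrative arithmetic only: the endurance and write figures below are assumptions,
// not Apple's specifications.
let assumedEnduranceTBWPerTB = 600.0   // assumed terabytes written per TB of capacity
let assumedDailyWritesGB = 60.0        // assumed average host writes per day

func expectedYears(capacityTB: Double) -> Double {
    let enduranceTB = assumedEnduranceTBWPerTB * capacityTB
    let dailyWritesTB = assumedDailyWritesGB / 1_000
    return enduranceTB / dailyWritesTB / 365
}

for capacityTB in [0.5, 1.0, 2.0] {
    print(String(format: "%.1f TB SSD: roughly %.0f years at this write load",
                 capacityTB, expectedYears(capacityTB: capacityTB)))
}
// Under the same write load, the 1 TB SSD should last about twice as long as the 512 GB.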
All other things aren’t equal, though, and that’s where wear levelling and Trim come into play. Without levelling the number of erase-write cycles across all the memory in an SSD, some cells would reach their limit far sooner than others. To tackle that, SSDs incorporate mechanisms to even out the use of individual memory cells, known as wear levelling. The less free space available on an SSD, the less effective wear levelling can be, giving larger SSDs a significant advantage if they also have more free space.
Trimming is performed periodically to allow storage that has already been made available for reuse, for example when a file has been deleted, to be erased and made ready. Both APFS and HFS+ will Trim compatible SSDs when mounting a volume, but Trim support for external SSDs is only provided by default for those with NVMe interfaces, not SATA, and isn’t available for other file systems including ExFAT. Some SSDs may still be able to process available storage in their routine housekeeping, but others won’t. Without Trimming, an SSD gradually fills with unused memory waiting to be erased, and will steadily grind to a halt, with write speeds falling to about 10% of those when new.
Thus, to ensure optimum performance and working life, SSDs should be as large as possible, with much of their storage kept free. Experience suggests that a healthy amount of free space is 20-50% of their capacity.
Striking the best compromise
Apple silicon Macs work best and fastest when largely running from their internal SSDs. By all means reduce the capacity required by moving more static media libraries, and possibly large working folders, to an external SSD. But there’s no escaping the evidence that your Mac will work best and longest when its internal storage has a minimum of 20% free at all times, and you must ensure that never falls below 50 GB of free space. Finally, consider your needs not today, but when you intend to replace that Mac in 3-5 years’ time, or any savings made now will prove a false economy.
Making a CPU do more work requires more than increasing its frequency; it also needs the removal of obstacles that can prevent it from making best use of those cycles. Among the most important of those is memory access. High-speed local caches, L1 and L2, can be a great help, but in the worst case fetching data from memory can still take hundreds of CPU core cycles, and that memory latency may then delay a running process. This article explains some techniques used in the CPU cores of Apple silicon chips to improve processing speed by making execution more efficient and less likely to be delayed.
Out-of-order execution
No matter how well a compiler and build system might try to optimise the instructions they assemble into executable code, when it comes to running that code there are ways to improve its efficiency. Modern CPU cores use a pipeline architecture for processing instructions, and can reorder them to maintain optimum instruction throughput. This uses a re-order buffer (ROB), which can be large to allow for greatest optimisation. All Apple silicon CPU cores, from the M1 onwards, use out-of-order execution with ROBs, and more recent families appear to have undergone further improvement.
In addition to executing instructions out of order, many modern processors perform speculative execution. For example, when code is running a loop to perform a repeated series of operations, the core will speculate that it will keep running that loop, so rather than wait to work out whether it should loop again, it presses on. If it then turns out that it had reached the end of the loop, the core rolls back to the point where it speculated, discards the wasted work, and follows the correct branch.
Although this wastes a little time on the last run of each loop, if it’s had to loop a million times before that, the accumulated time savings can be considerable. However, on its own speculative execution can be limited by data that has to be loaded from memory on each pass through the loop, so more recently CPU cores have tried to speculate on the data they require.
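As a concrete illustration, in a loop like the following Swift sketch the branch that closes each iteration is taken a million times and not taken only once, so a core that speculates the loop will continue is right in all but the final pass; nothing here is specific to Apple’s implementation.

// The branch at the end of this loop is taken 999,999 times and not taken once,
// so speculating that the loop will continue is almost always correct.
let values = [Double](repeating: 1.5, count: 1_000_000)
var total = 0.0
for value in values {
    total += value      // work that depends on loads from memory
}                       // loop-closing branch: mispredicted only on the final pass
print(total)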
Load address prediction
One common pattern of data access within code loops lies in the memory addresses used. This occurs when the loop is working through a stored array of data, where each item’s address differs from the last by a constant increment. For this, the core watches the series of addresses being accessed, and once it detects that they follow a regular pattern, it performs Load Address Prediction (LAP) to guess the next address to be used.
The core then performs two functions simultaneously: it proceeds to execute the loop using the guessed address, while continuing to load the actual address. Once it can, it then compares the predicted and actual addresses. If it guessed correctly, it continues execution; if it guessed wrong, then it rolls back in the code, uses the actual address, and resumes execution with that instead.
As with speculative execution, this pays off when there are a million addresses in a strict pattern, but loses out when a pattern breaks.
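The regularity LAP relies on is of the kind shown in this Swift sketch, where successive loads step through an array at a constant stride, so each address is the previous one plus a fixed increment; again, this only illustrates the access pattern, not Apple’s mechanism.

// Successive loads come from addresses a constant stride apart (here 4 elements),
// the regular pattern that load address prediction can detect and extrapolate.
let samples = [Float](repeating: 0.25, count: 1_000_000)
var sum: Float = 0
var index = 0
while index < samples.count {
    sum += samples[index]   // address = base + index * element size
    index += 4              // constant increment makes the next address predictable
}
print(sum)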
Load value prediction
LAP only predicts the addresses used by loads, and the contents at those addresses may differ each time. In other cases, the value fetched from memory is identical each time. To take advantage of that, the core can watch the value being loaded each time the code passes through the loop. This might represent a constant being used in a calculation, for example.
When the core sees that the same value is being used each time, it performs Load Value Prediction (LVP) to guess the next value to be loaded. This works essentially the same as LAP, with comparison between the predicted and actual values used to determine whether to proceed or to roll back and use the correct value.
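The situation LVP exploits is sketched below in Swift: a value stored in memory is fetched on every pass through the loop and turns out to be the same each time. Again, this only illustrates the pattern.

// The scale factor is loaded from memory on each pass and is always the same value,
// which is the case where load value prediction pays off.
final class Settings {
    var scale = 1.2         // stored in memory, not a compile-time constant
}

let settings = Settings()
var results = [Double]()
results.reserveCapacity(1_000_000)
for i in 0..<1_000_000 {
    results.append(Double(i) * settings.scale)  // same value fetched every iteration
}
print(results.count)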
This diagram summarises the three types of speculative execution now used in Apple silicon CPU cores, and identifies which families in the M-series use each.
Vulnerabilities
Speculative execution was first discovered to be vulnerable in 2017, and this was made public seven years ago, in early 2018, in a class of attack techniques known as Spectre. LAP and LVP were demonstrated and exploited in SLAP and FLOP in 2024-25.
Mechanisms for exploiting speculative designs are complex, and rely on a combination of training and misprediction to give an attacker access to the memory of other processes. The only robust protection is to disable speculation altogether, although various types of mitigation have also been developed for Spectre. Disabling speculative execution, LAP or LVP greatly impairs performance in many situations, so isn’t generally considered commercially feasible.
Risks
The existence of vulnerabilities that can be exploited might appear worrying, particularly as their demonstrations use JavaScript running in crafted websites. But translating those into a significant risk is more challenging, and a task for Apple and those who develop browsers to run in macOS. It’s also a challenge to third parties who develop security software, as detecting attempts to exploit vulnerabilities in speculative behaviour is relatively novel.
One reason we haven’t seen many (if any) attacks using the Spectre family of vulnerabilities is that they’re hardware specific. For an attacker to use them successfully on a worthwhile proportion of computers, they would need to detect the CPU and run code developed specifically for that. SLAP and FLOP are similar, in that neither would succeed on Intel or M1 Macs, and FLOP requires the LVP support of an M3 or M4. They’re also reliant on locating valuable secrets in memory. If you never open potentially malicious web pages when your browser already has exploitable pages loaded, then they’re unlikely to be able to take advantage of the opportunity.
Where these vulnerabilities are more likely to be exploited is in more sophisticated, targeted attacks that succeed most when undetected for long periods, those more typical of surveillance by affiliates of nation-states.
In the longer term, as more advanced CPU cores become commonplace, risks inherent in speculative execution can only grow, unless Apple and other core designers address these vulnerabilities effectively. What today is impressive leading-edge security research will help change tomorrow’s processor designs.
Apple has gone to great lengths to make the transition to its new Arm-based Macs as seamless as possible. However, there are some major differences that most need to take into account before making their leap of faith from a cherished but now-ageing Intel Mac to a sleek and glitzy new M-series Mac. This article clarifies what are often points of confusion about what you can’t or shouldn’t do with a new Apple silicon Mac.
You can’t run any macOS before Monterey (or possibly Big Sur)
There are two ways to run macOS on Apple silicon Macs: natively, or in a virtual machine (VM). The oldest version of macOS your Mac can run natively is the one that was current at the time that model was released. Models released before October 2021 can run macOS 11 Big Sur, and are the only Apple silicon Macs that can do so. Those released from October 2021 onwards can only run natively the version of macOS that was current at the time of their release, or later, but can run older versions back to macOS 12 Monterey in a VM. Current models with M4 chips are even more restricted, as the earliest version they can run is macOS 15 Sequoia, although their VMs can still stretch back to Monterey if you need.
Catalina, Mojave and earlier were never released with support for Apple silicon Macs, so can’t be run on them, and never will be without emulating Intel processors in software, which is slow and unreliable.
A VM running on an Apple silicon Mac can’t run Big Sur, because the Virtio driver support required for virtualisation wasn’t complete then, and didn’t work until macOS 12 Monterey, although even there it offers fewer features than in Ventura. Full details are given here.
You can’t virtualise or run Intel macOS or 32-bit apps
Included with macOS is Rosetta 2, installed on demand when it’s first needed, enabling you to run 64-bit Intel code and apps that are compatible with macOS 10.15 Catalina. Rosetta isn’t an emulation engine, but translates code from Intel to Arm instructions. However, it can’t translate 32-bit code, and it can’t translate operating systems like macOS. It does run 64-bit Intel apps amazingly quickly, though.
A VM running macOS on Apple silicon can therefore use Rosetta 2 to translate and run 64-bit Intel code in apps that are compatible with macOS 10.15 Catalina, but is subject to the same limitations as any version of macOS on Apple silicon, in that it can’t handle older or 32-bit apps. Neither can it be used on the host Mac to run a VM of any Intel version of macOS.
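If you want to check whether a process is running natively or under Rosetta 2 translation, Activity Monitor’s Kind column shows Apple or Intel, and code can ask the sysctl.proc_translated value that Apple documents for this purpose, as in this minimal Swift sketch.

import Darwin

// Returns true when the current process is being translated by Rosetta 2,
// and false when it's running natively; the sysctl doesn't exist on Intel Macs.
func isRunningUnderRosetta() -> Bool {
    var translated: Int32 = 0
    var size = MemoryLayout<Int32>.size
    if sysctlbyname("sysctl.proc_translated", &translated, &size, nil, 0) == -1 {
        return false    // sysctl not present, or an error: treat as native
    }
    return translated == 1
}

print(isRunningUnderRosetta() ? "Translated by Rosetta 2" : "Running natively")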
If you need access to older or 32-bit Intel software, then the only practical way of doing that is on an Intel Mac that’s able to run Mojave or earlier.
You can’t install Intel kernel extensions
Rosetta 2 translation can’t support the privileged level of execution required for kernel extensions, so if you need your Mac to be able to load and use kernel extensions that are only available for Intel Macs, you can’t do that on an Apple silicon Mac. The great majority of more recent kernel extensions are now available as Universal versions that can also run native on M-series chips, but if your Mac still relies on an older kernel extension that’s Intel-only, then you can’t use that on a new Mac.
You can’t boot fully from an external drive
Unlike Intel Macs, Apple silicon models can only start their boot process from their internal SSD, as that’s required to support their Secure Boot. Although Apple silicon Macs can boot from external disks, early phases of that process still rely on the internal SSD and security policies (‘LocalPolicy’) saved there. This has several consequences:
An Apple silicon Mac can only boot from an external disk that is ‘owned’ by a user recognised by the primary system on its internal SSD. This is a valuable security measure, as without knowing login details for a suitable user of the internal SSD, it’s not possible to boot an Apple silicon Mac from an external bootable disk.
An Apple silicon Mac can only boot from an external disk if it can at least start that process from its internal SSD, normally requiring a bootable system on the internal SSD as well. If you do intend booting your Mac from an external disk, in practice you still need to install and maintain a bootable system on its internal SSD.
Total failure of the internal SSD results in failure to boot from external disks as well. A bootable external disk can’t ‘get you home’ in that emergency.
Apple silicon Macs don’t really boot fully from ‘bootable’ external installer disks, although those can still be used to install macOS when necessary, and may be required when installing versions of macOS older than the one currently installed.
Instructions for installing macOS on an external disk so that it can boot an Apple silicon Mac are given here.
You can’t use Boot Camp
Boot Camp allows you to start up an Intel Mac as if it’s a regular PC to run Windows. As Apple silicon Macs have completely different processors and other hardware, they can’t support that option. If you want to run Windows on your Apple silicon Mac, you’ll have to do that using a virtualiser like Parallels Desktop, and currently those can only run Arm versions of Windows, although Parallels is working on an emulator that can run some Intel versions. You can already try that out.
Avoid kernel extensions
Unlike Intel Macs, Apple silicon Macs don’t allow the use of third-party kernel extensions when running in Full Security mode. Before they can be enabled, the Mac’s startup security has to be reduced, and their use explicitly allowed in Startup Security Utility in Recovery mode. For most users that’s a significant deterrent. In almost all cases now, traditional kernel extensions should be replaced by new-style system extensions. You can read more about that here.
Avoid ‘cloning’ boot volume groups
Before Catalina and Big Sur divided the boot volume into a group of volumes, including System and Data, it was popular to make identical copies of, or ‘clone’, the volume containing the system. This is even more complex with Apple silicon Macs because of the multiple containers on their internal SSD. Although apps like SuperDuper and Carbon Copy Cloner can still create clones, they can’t include the whole of the internal SSD. That limits their usefulness, and they can readily fail.
The only way you can completely replace the contents of the internal SSD of an Apple silicon Mac is to restore it from an IPSW image file when the Mac is in DFU mode. That erases the SSD so that its Data volume then has to be restored from a backup or copy, not a task to be undertaken lightly or in a hurry. This is explored in detail here.
Don’t try using startup key combinations
Entering Recovery mode and accessing features that are controlled using startup key combinations on Intel Macs is completely different in Apple silicon Macs, and controlled using the Power button. Holding keys during startup does nothing for an Apple silicon Mac. I have an illustrated guide, details on Fallback Recovery, and on troubleshooting.
Neither can you reset the SMC or NVRAM using startup keys. Power management in an Apple silicon Mac is handled completely differently, and shouldn’t need to be reset. If it does, then restarting should suffice. NVRAM is primarily for the use of macOS, not the user, and you should never have to reset it. Further information is given here.
Further reading
My page listing articles specific to Apple silicon Macs contains extensive information and guidance.
There was a stir of excitement last year when some claimed that Apple’s M4 and the A18 chip inside iPhone 16 models had a new security processor, the Secure Exclave Processor, a sister to the Secure Enclave Processor in all modern Macs. This article considers whether there’s evidence to support that.
Secure Enclave
The first Macs with a secure enclave and its processor, the SEP, were released over eight years ago, in the MacBook Pro 2016 models with a T1 chip. They were quickly succeeded by the iMac Pro at the end of 2017, with the T2 chip that became universal across the last models of Intel Macs. Inside each T2 chip is a 32-bit Arm core running sepOS to store and handle encryption keys and support other security features.
When Apple released the first M1 Macs, they too came with their own SEP, this time integrated into the main chip, but still running sepOS. Apple provides extensive details of each SEP used up to May 2024 in its Platform Security Guide.
During kernel boot of an M4 Pro running macOS Sequoia 15.3.1, the SEP writes a useful commentary of its startup to the Unified log. The SEP’s key store is started about 5 seconds after the initial system boot entry, and before the other CPU cores are started up. SEP boot takes place a little later, and it then starts handling messages. Support for biometric authentication follows, as does key unwrapping for apfs. Although the SEP isn’t loquacious, it is far from silent in the log.
Enclaves, exclaves, conclaves
To the best of my knowledge, the first description of exclaves in Apple’s operating systems was that of Dataflow Forensics on a beta-release of iOS 17 on 8 August 2023. They proposed that exclaves are code domains isolated from the kernel itself, so that, should the kernel become compromised in any way, components in exclaves should remain protected. I followed that with a report of new exclave kernel extensions in macOS 14.4, discussed here, and referenced back to Apple’s XNU source code.
It’s useful to distinguish between these three terms in their normal usage:
an enclave is a territory entirely surrounded by the territory of another state;
an exclave is an isolated fragment of a state that exists separately from the main part of that state;
a conclave is a private meeting or close assembly, more specifically a meeting of the College of Cardinals convened to elect a new Pope.
Based on those and XNU sources, I proposed last August that exclaves provide a static set of resources to the XNU kernel. Examples include conclave managers, services like Apple ID for VMs, named buffers and audio buffers. These resources are named and have a corresponding identifier shared between XNU and the exclave. Those are discovered during the boot process, and made available in a table at two levels: a root table assembles resources by their scope, while secondary tables list actual processes.
For example, a root table entry for the domain com.apple.conclave.a might link to a secondary table listing an audio buffer, an audio service as a conclave manager, and additional buffering. In the case of audio exclaves, they might be used to connect a system extension running with user privileges to privileged access in the kernel and its extensions.
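To make that two-level lookup easier to picture, here’s a purely illustrative Swift model; the type and case names are mine, invented for clarity, and aren’t taken from XNU or any Apple interface.

// An illustrative model only: these names and types are invented, and don't
// reflect Apple's implementation in XNU.
enum ExclaveResourceKind {
    case conclaveManager, service, namedBuffer, audioBuffer
}

struct ExclaveResource {
    let name: String    // identifier shared between XNU and the exclave
    let kind: ExclaveResourceKind
}

// Root table keyed by scope (domain); each entry links to a secondary table of resources.
let rootTable: [String: [ExclaveResource]] = [
    "com.apple.conclave.a": [
        ExclaveResource(name: "audio buffer", kind: .audioBuffer),
        ExclaveResource(name: "audio service", kind: .conclaveManager),
        ExclaveResource(name: "additional buffer", kind: .namedBuffer)
    ]
]

if let resources = rootTable["com.apple.conclave.a"] {
    for resource in resources {
        print("\(resource.name): \(resource.kind)")
    }
}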
Support in Sequoia
As of Sequoia 15.3.1, there are three kernel extensions supporting exclaves:
ExclaveKextClient.kext 1.0.0
ExclaveSEPManagerProxy.kext 1.0
ExclavesAudioKext.kext 220.24
There are also two private frameworks:
CoreSpeechExclave.framework 1.0 (3403.7.3)
libmalloc_exclaves_introspector.framework 1.0 (1)
There’s a group of private entitlements governing access to exclave and conclave functions, whose names start with com.apple.private.exclaves.
There are also multiple references to Tightbeam, which doesn’t appear to have been released in open source. The name refers to the tightly focussed lasers used for communications in the sci-fi series The Expanse. In macOS, it first appeared as the name of a Private Framework released in macOS 13.0, and has been supported since then.
Examination of the 15.3.1 kernel boot sequence in the log reveals a single entry relating to exclaves, made far later in the sequence than SEP activity, in which the neural engine (ANE) states as it boots that no exclaves are assigned to it:
ANE0: start: No Exclaves assigned to ANE
XNU source code based on macOS Sequoia 15.0 reveals that code in exclaves is either run in CPU cores, for which there is code to support the management of multiple cores, or in the neural engine (ANE).
Summary
The Secure Enclave is an isolated processing unit within Apple silicon chips that stores and handles encryption keys and supports other security features. It runs its own operating system, sepOS.
Exclaves are managed code resources that are run in isolation from the kernel, and are currently used for I/O to support audio and CoreSpeech, sensors, and possibly for Apple ID for VMs. They run in CPU cores and the neural engine, and although they may have privileged access to hardware, they have no processor of their own, in Macs at least.
Conclaves are close assemblies of exclaves, and should never communicate by fumata.
Most tests and benchmarks avoid putting heavy loads on the CPU and GPU at the same time, so don’t run an Apple silicon chip ‘full on’. This article explores what happens in the CPU and GPU of an M4 Pro when they’re drawing a total of over 50 W, and how that changes in Low Power mode. It concludes my investigations of power modes, for the time being.
Methods
Three test runs were performed on a Mac mini M4 Pro with 10 P and 4 E cores, and a 20-core GPU. In each run, Blender Benchmarks were run using Metal, and shortly after the start of the first of those, monster, 3 billion tight loops of NEON code were run on CPU cores at maximum Quality of Service in 10 threads. From previous separate runs, the monster test runs the GPU at its maximum frequency of 1,578 MHz and 100% active residency, using about 20 W, and that NEON code runs all 10 P cores at a high frequency of about 3,852 MHz and 100% active residency, using about 32 W. This combined testing was performed in each of the three power modes: Low Power, Automatic, and High Power.
In addition to recording test performance, powermetrics was run during the start of each NEON test at its shortest sampling period, with both cpu_power and gpu_power samplers active.
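For anyone wanting to improvise a similar CPU load, the pattern can be approximated with tight arithmetic loops run in several threads at the highest Quality of Service, as in this rough Swift sketch; it isn’t the assembly-based NEON test used for these measurements, just an approximation of the load.

import Dispatch

// A rough stand-in for the CPU load: tight dependent arithmetic in 10 threads
// at the highest QoS. This is not the NEON assembly test used in these measurements.
let threadCount = 10
let loopsPerThread = 500_000_000
let group = DispatchGroup()

for _ in 0..<threadCount {
    DispatchQueue.global(qos: .userInteractive).async(group: group) {
        var x = 1.000000001
        for _ in 0..<loopsPerThread {
            x = x * 1.000000001 + 1e-12     // dependent arithmetic keeps the core busy
        }
        if x.isNaN { print(x) }             // stops the loop being optimised away
    }
}

group.wait()
print("All \(threadCount) threads completed")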
Performance
There was no difference in performance between High Power and Automatic settings, which completed both tasks with the same performance as when they were run separately:
NEON time separate 2.12 s, together High Power 2.12 s, Auto 2.12 s
monster performance separate 1215-1220, together High Power 1221, Auto 1220.
As expected, Low Power performance was greatly reduced. NEON time was 4.33 s (49% performance), even slower than running alone at Low Power (2.87 s), and monster performance 795, slightly lower than running alone at Low Power (837).
High Power mode
This first graph shows CPU core cluster frequencies and active residencies over a period of 0.3 seconds during which the monster test was already running and the NEON test was started.
At time 0, the P0 cluster (black) was shut down, and the P1 cluster (red) running with one core at 100% active residency, a second at about 60%, and at about 3,900 MHz. As the ten test threads were loaded onto the two clusters, cluster frequencies were quickly brought to 3,852 MHz, by reducing that of the P1 cluster and rapidly increasing that of the P0 cluster.
By 0.1 seconds, both clusters were at full active residency and running at 3,852 MHz, where they remained until the NEON test threads completed.
Power used by the CPU followed the same pattern, rising rapidly from about 6,000 mW to about 32,000 mW at 0.1 seconds. GPU power varied between 8,600 and 23,000 mW, resulting in a peak total power of slightly less than 52,000 mW, and a dip to 40,600 mW. Typical sustained power with both CPU and GPU tests running was 50-52 W.
Low Power mode
These results are more complicated, and involve significant use of the E cluster.
This graph shows active residency alone, and this time includes the E cluster, shown in blue, and the GPU, in purple. NEON test threads were initially loaded into the two P clusters, filling them at 0.13 seconds. After that, threads were moved from some of those P cores to run on E cores instead, leaving just two test threads running on each of the P clusters by 0.26 seconds. Over much of that time the GPU had full active residency, but as that fell threads were moved from E cores back to P cores. By the end of this period of 0.5 seconds, 4 of 5 cores in each of the two P clusters were at 100%, and the GPU was also at 100% active residency.
This bar chart shows changing cluster total active residency for the E (red) and two P (blues) clusters by sample. With 10 test threads and significant overhead, the total should have reached at least 1,000%, which was only achieved in sample 4, and from sample 13 onwards.
Those active residencies are shown in the lower section of this graph (with open circles), together with cluster frequencies (filled circles) above them. As the P clusters were being loaded with test threads, both P clusters (black) were brought to a frequency of only 1,800 MHz, compared with 3,852 MHz in the High Power test. The E cluster (blue) was run throughout at its maximum frequency of 2,592 MHz, except for one sample period. GPU frequency (purple) remained below 1,000 MHz throughout, compared with a steady maximum of 1,578 MHz when at High Power.
Power changed throughout this initial period running the NEON test. Initially, CPU power (red) rose to a peak of 6,660 mW, then fell slowly to 3,500 mW before rising again to about 6,000 mW. GPU power rose to a peak of just over 7,000 mW, but at one stage fell to only 26 mW. Total power used by the CPU and GPU ranged between 11 and 13.2 W, apart from a short period when it fell below 5 W. Those are all far lower than the steadier power use in High Power mode.
How macOS limits power
Running these tests in Low Power mode elicited some of the most sophisticated controls I have seen in Apple silicon chips. Compared with running them unfettered in Automatic or High Power mode, macOS used a combination of strategies to keep total CPU and GPU power use below 13.5 W:
P core frequencies were limited to 1,800 MHz, instead of 3,852 MHz.
High QoS threads that would normally have been run on P cores were transferred to E cores, which were then run at their maximum frequency of 2,592 MHz.
Threads continued to be transferred between E and P cores to balance performance against power use.
GPU frequency was limited to below 1,000 MHz.
Despite total power use being reduced to about 25% of that in High Power mode, the effect on performance was far smaller, with about 50% of High Power performance attained.
Residency is the percentage of time a core is in a specific state. Idle residency is thus the percentage of time that core is idle and not processing instructions. Active residency is the percentage of time it isn’t idle, but is actively processing instructions. Down residency is the percentage of time the core is shut down. All these are independent of the core’s frequency or clock speed.
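As a trivial worked example of those definitions, this Swift snippet computes active and idle residency for one hypothetical core over a single 100 ms sampling period.

// One hypothetical core over a 100 ms sampling period.
let samplePeriodMs = 100.0
let busyMs = 35.0                                       // time spent executing instructions
let activeResidency = busyMs / samplePeriodMs * 100     // 35% active residency
let idleResidency = 100 - activeResidency               // 65% idle residency
print("active \(activeResidency)%, idle \(idleResidency)%")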