Last Week on My Mac: Coming soon to your Mac’s neural engine
If you’ve read any of my articles here about the inner workings of CPU cores in Apple silicon chips, you’ll know I’m no stranger to using the command tool powermetrics to discover what they’re up to. Last week I attempted something more adventurous: estimating how much power and energy are used in a single Visual Look Up (VLU).
My previous tests have been far simpler: set powermetrics collecting samples in Terminal, then run a set number of core-intensive threads in my app AsmAttic, knowing those would complete before sampling stopped. Analysing dozens of sets of measurements of core active residency, frequency and power use is pedestrian, but there’s no doubt as to when the tests were running, nor which cores they were using.
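By way of illustration, a typical command of that kind looks something like the following, although the samplers and output file here are just examples rather than my exact settings:

sudo powermetrics -i 100 -n 100 --samplers cpu_power,gpu_power -o samples.txt

That asks for one hundred samples at a nominal interval of 100 ms, and writes them to the file samples.txt.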
VLU was more intricate, in that once powermetrics had started sampling, I had to double-click an image to open it in Preview, wait until its Info tool showed stars to indicate that stage was complete, open the Info window, spot the buttons that appeared on recognised objects, then select one and click on it to open the Look Up window. All steps had to be completed within the nominal 10 seconds of sample collection, leaving me with the task of matching nearly 11,000 log entries for that interval against powermetrics’ hundred sampling periods.
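Pulling the log entries for that interval is simple enough using the log command in Terminal, with something along these lines, although the times given here are only placeholders for those of the actual test:

log show --info --start "2025-09-01 10:00:00" --end "2025-09-01 10:00:12" > vluentries.txt

That writes every entry from the interval to a text file, ready to be matched against the sampling periods.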
The first problem is syncing time between the log, which gives each entry down to the microsecond, and the sampling periods. Although the latter are supposed to last 100 ms, in practice powermetrics is slightly slower, and most ranged between about 116 and 129 ms. As the start time of each period is only given to the nearest second, it’s impossible to know exactly when each sample was obtained.
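To put that into perspective, if each sample actually took about 120 ms rather than the nominal 100 ms, the hundred samples would span roughly 12 seconds rather than 10, and aligning log entries on an assumed 100 ms spacing would be about 2 seconds adrift by the final sample.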
Correlating log entries with events apparent in the time-course of power use is also tricky. Some are obvious, and the start of sampling was perhaps the easiest giveaway, as powermetrics has to be run using sudo to obtain elevated privileges, which leaves unmistakeable evidence in the log. Clicks made on Preview’s tools are readily missed, though, even when you have a good estimate of the time they occurred.
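That sudo entry makes a convenient time marker to work back from, and a predicate along these lines should find it, although I give it here purely as an example:

log show --last 5m --predicate 'process == "sudo"'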
Thus, the sequence of events is known with confidence, and it’s not hard to establish when VLU was occurring. As a result, estimating overall power and energy use for the whole VLU also has good confidence, although establishing finer detail is more challenging.
The final caution applies to all power measurements made using powermetrics: they are approximate and uncalibrated. What is reported as 40 mW could in reality be more like 10 or 100 mW.
In the midst of this abundance of caution, one fact stands clear: VLU hardly stresses any part of an Apple silicon chip. Power used during the peak of CPU core, GPU and neural engine (ANE) activity was a small fraction of the values measured during my previous core-intensive testing. At no time did the ten P cores in my M4 Pro come close to the power used when running more than one thread of intensive floating-point arithmetic, and the GPU and ANE spent much of the time twiddling their thumbs.
Yet when Apple released VLU in macOS Monterey, it hadn’t expected to be able to implement it at all on Intel chips because of its computational demand. What still looks like magic can now be accomplished with ease even in a base M1 model. And when we care to leave our Macs running, mediaanalysisd will plod steadily through recently saved images, performing object recognition and classification to add them to Spotlight’s indexes, enabling us to search images by labels describing their contents. Further digging in Apple’s documentation reveals that VLU and indexing of discovered object types are currently limited by language to English, French, German, Italian, Spanish and Japanese.
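You can see the results of that indexing for yourself using Spotlight from the command line; a search like this should return images whose recognised contents match, with the search term and folder here purely as examples:

mdfind -onlyin ~/Pictures cat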
Some time in the next week or three, when Apple releases macOS Tahoe, we’ll start seeing Apple silicon Macs stretch their wings with the first apps to use its Foundation Models. These are based on the same Large Language Models (LLMs) already used in Writing Tools, and run entirely on-device, unlike ChatGPT. Their arrival has unfortunately been eclipsed by Tahoe’s controversial redesign, but as more developers get to grips with these new AI capabilities, you should start to see increasingly novel features appearing.
What developers will do with them is currently less certain. These LLMs are capable of working with text including dialogue, thus are likely to appear early in games, and should provide specialist variants of more generic Writing Tools. They can also return numbers rather than text, and suggest and execute commands and actions that could be used in predictive automation. Unlike previous support for AI techniques such as neural networks, Foundation Models present a simple, high-level interface that can require just a few lines of code.
If you’ve got an Apple silicon Mac, there’s a lot of potential coming in Tahoe, once you’ve jiggled its settings to accommodate its new style.