Apple's system transition to its own processors seems to be on track, if not ahead of schedule.
Another year predictably equals another spring-scheduled product launch from Apple, this time with a “Peek Performance” moniker.
This one was once again completely pre-recorded; apparently, COVID-related infection, hospitalization, and mortality numbers haven’t yet dropped enough to reassure Apple about a return to in-person events. Oddly, too, there was no mention of the heartbreaking situation in Ukraine; given how comparatively late (versus past years) Apple announced the event, I was beginning to think that world woes might compel the company to skip it entirely this year. Guess not.
Friday Night Major League Baseball on Apple TV+
The iPhone 13…in green…
Happy early Saint Patrick’s Day, everyone!
The Third-generation iPhone SE
The iPhone SE 2022’s enhancements over 2020’s second-generation model include:

- The A15 Bionic SoC, upgraded from the prior A13 Bionic
- Sub-6 GHz 5G cellular support
- Improved battery life
It’s otherwise essentially identical to its precursor, including its oldie-but-goodie Touch ID fingerprint reader support, none of which is a bad thing, mind you (says the guy with two 2016 first-generation iPhone SEs still in storage)!
The 5th-generation iPad Air
Apple’s original iPad tablet has trifurcated (?) into three parallel product lines: the entry-level iPad, the mid-range iPad Air, and the high-end iPad Pro.
Roughly a year ago, Apple migrated the M1 SoC (previously restricted solely to the company’s computers) to the iPad Pro. This time, the iPad Air gets in on the fun, again with the full eight-GPU-core variant of the M1. Also included are optional 5G cellular support (again, sub-6 GHz-only) and an upgraded front camera with the Center Stage virtual follow-me mode, which leverages ultra-wide angle optics in combination with AI-augmented digital zoom capabilities.
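The basic mechanics of a follow-me mode like Center Stage are easy to sketch: capture a large frame from the ultra-wide sensor, then pan a fixed-size crop window to keep the subject centered, clamping the window so it never leaves the sensor’s field of view. The Python fragment below is purely my illustration of that idea; the function name and parameters are invented, not Apple’s implementation or API:

```python
# Conceptual sketch of an ultra-wide "follow the subject" crop (my
# illustration, not Apple's code): center a crop window on the subject,
# then clamp it so the crop never leaves the captured frame.
def follow_crop(subject_x: int, subject_y: int,
                frame_w: int, frame_h: int,
                crop_w: int, crop_h: int) -> tuple[int, int]:
    """Return the top-left corner of a crop window tracking the subject."""
    x = subject_x - crop_w // 2
    y = subject_y - crop_h // 2
    # Clamp so the crop stays fully within the ultra-wide frame.
    x = max(0, min(x, frame_w - crop_w))
    y = max(0, min(y, frame_h - crop_h))
    return x, y

# Subject near the frame's right edge: the crop clamps instead of panning off.
print(follow_crop(3900, 1000, frame_w=4000, frame_h=3000,
                  crop_w=1920, crop_h=1080))  # → (2080, 460)
```

The AI part of the real feature is in detecting where the subject actually is; once you have that coordinate, the digital pan itself is just this sort of clamped window arithmetic.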
The M1 Ultra SoC
Last October, in covering the M1 Pro and Max SoC variants (along with the systems containing them), I wrote:
The die sizes on the new M1 Pro and M1 Max SoCs are already ridiculously big, and next-gen semiconductor processes aren’t arriving (at least in the volumes that Apple needs) any time soon. So, my best guess is that somewhere between now and the end of next year Apple will unveil “M2” SoCs roughly comparable from CPU and GPU core count standpoints with today’s offerings, but with upgraded A15-derived microarchitectures, and with multi-SoC interconnect hooks that multiply the core counts and system memory allotments at the system level…something like the “chiplet” architecture that AMD’s using on its latest CPU generations. Doing so would enable the company to leverage a common sliver of silicon across multiple system product variations.
Well, the M2 isn’t here yet, nor did I expect it yet (note the “end of next year” wording), but part of what I forecasted less than six months ago has already come to pass. Recall that the fundamental difference between the M1 Pro and M1 Max was the effective doubling-up of the GPU subsystem on the die in the latter case, with the result remaining a single-die implementation. At first glance, you might think that the M1 Ultra is an even more impressive clone; it looks like the entire M1 Max die has been mirrored. At this point, however, I’ll repeat part of what I said earlier as a reminder: “The die sizes on the new M1 Pro and M1 Max SoCs are already ridiculously big, and next-gen semiconductor processes aren’t arriving (at least in the volumes that Apple needs) any time soon.”
What you’re in fact looking at are two separate M1 Max die, stitched together with a silicon interconnect: a chiplet architecture, in a word. And what an interconnect it is. Quoting from the press release:
Apple’s innovative UltraFusion uses a silicon interposer that connects the chips across more than 10,000 signals, providing a massive 2.5TB/s of low latency, inter-processor bandwidth—more than 4x the bandwidth of the leading multi-chip interconnect technology.
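Those headline numbers hold together on a napkin: spreading 2.5 TB/s across 10,000-plus signals implies a per-wire data rate that’s quite modest for a short silicon interposer trace, which is exactly what makes the wide-and-slow approach credible. A quick sanity check (my arithmetic, not Apple’s):

```python
# Back-of-envelope check of the UltraFusion figures quoted above:
# 2.5 TB/s of aggregate bandwidth across "more than 10,000 signals"
# works out to roughly 2 Gbit/s per signal.
aggregate_bytes_per_s = 2.5e12  # 2.5 TB/s, per the press release
signal_count = 10_000           # "more than 10,000 signals"

per_signal_bps = aggregate_bytes_per_s * 8 / signal_count
print(f"~{per_signal_bps / 1e9:.1f} Gbit/s per signal")  # → ~2.0 Gbit/s
```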
Ironically, AMD, Arm, Intel and other industry heavyweights (but not Apple, and I guess now we know why…NVIDIA was also curiously absent from the list) just a couple of days ago unveiled an industry-standard chiplet interconnect specification called Universal Chiplet Interconnect Express (UCIe). Doubling up the die leads to a predictable doubling of other SoC specs: up to 128 GBytes of in-package integrated DRAM, 20 CPU cores, and 48 to 64 GPU cores. That last bit is maybe the most impressive twist for me from an implementation standpoint; just a few months ago, I’d written regarding a system I was building:
The intent here was to boost the overall graphics performance of the system to something more modern-ish, by running the two graphics subsystems in an arrangement that AMD at the time called “Hybrid Crossfire” mode (with the “hybrid” qualifier referencing that one of the two GPUs was on the APU, versus being discrete like its peer). The concept is simple: the concurrently-running graphics processors would share the overall “heavy lifting” load, alternately operating on sequential pixel rows, for example. The implementation (as with its SLI equivalent from NVIDIA) is less simple, however; imagine a single polygon spanning multiple pixel rows and you’ll immediately intuit the amount of inter-GPU interaction that’s involved in translating the concept into reality. As such, very few gaming (or other) graphics apps now attempt to harness the concept’s potential.
Well, apparently Apple’s giving the multi-GPU parallel processing concept another “go,” thanks in no small part to its vertical integration of hardware and software (operating system, graphics API, and core applications).
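To make the row-splitting idea above concrete, here’s a minimal Python sketch (my illustration, not Apple’s or AMD’s actual scheduler) of round-robin scanline assignment across two GPUs, and of why a single polygon drags both GPUs into the job:

```python
# Illustrative sketch of split-frame rendering: alternating pixel rows of
# a frame are assigned round-robin to two GPUs. A polygon spanning several
# rows then needs rasterization work from both GPUs, hinting at the
# inter-GPU coordination the scheme demands.
def assign_rows(frame_height: int, gpu_count: int = 2) -> dict[int, list[int]]:
    """Round-robin pixel rows across GPUs."""
    assignments: dict[int, list[int]] = {g: [] for g in range(gpu_count)}
    for row in range(frame_height):
        assignments[row % gpu_count].append(row)
    return assignments

def gpus_touched_by(polygon_rows: range,
                    assignments: dict[int, list[int]]) -> set[int]:
    """Which GPUs must rasterize part of a polygon covering these rows?"""
    return {g for g, rows in assignments.items()
            if any(r in rows for r in polygon_rows)}

work = assign_rows(frame_height=8)
# A polygon covering rows 2..5 straddles both GPUs' row sets:
print(gpus_touched_by(range(2, 6), work))  # → {0, 1}
```

Every polygon taller than one scanline ends up split this way, which is why the synchronization and state-sharing overhead historically ate into the theoretical 2x speedup.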
The Mac Studio
A notable amount of the pre-event prognostication focused on whether Apple would upgrade the Apple Silicon-based Mac mini with the M1 Pro and/or M1 Max SoC(s), both to increase the platform’s processing muscle and to expand its connectivity. While the company may yet do so, a later transition to the M2 seems more likely to me. As I wrote last October:
I’ve tried several times over the past year to talk myself into buying a new M1 Mac mini, but the comparative dearth of external connections coupled with the 16 Gbyte-max system memory (and ridiculous markup over the mainstream 8 Gbyte alternative) have always given me pause. But I’m admittedly atypical; today’s M1-based Mac mini is adequate for the masses, and for high-end users there’s always the Mac Pro. Here’s the thing: the Mac mini was originally intended to be a low-priced “gateway drug” to the Apple ecosystem for PC users that already owned keyboards, mice and displays. Over time, however, the Mac mini got beefier and more expensive, blurring the distinction between it and the Mac Pro…So it wouldn’t surprise me if Apple leaves the Apple Silicon Mac mini alone going forward, and continues to sell the Intel-based high-end variant until the Apple Silicon-based Mac Pro is ready to go.
The Intel-based Mac mini remains in Apple’s product line to this day, as does the Intel-based Mac Pro…although Apple made it clear at the end of this most recent event that an Apple Silicon-based successor for the latter is a matter of when, not if. Interestingly, though, Apple discontinued the Intel-based 27” iMac today, after having previously EOL’d its higher-end iMac Pro sibling, thereby putting the entire future of larger-than-24” all-in-one computers in some doubt.
Instead, we got a new product, the Mac Studio, the sight of which made this G4 Cube owner smile. Consider it, if you will, an Apple Silicon Mac mini on steroids. It’s the first system destination for the M1 Ultra SoC (as well as an added option for the M1 Max), and at 3.7 x 7.7 x 7.7 inches versus the Mac mini’s 1.4 x 7.7 x 7.7 inches, it’s roughly 2.6 times the volume. It runs rings around an Intel-based Mac Pro, at least on paper. But unlike the Mac Pro (the first and latest iterations, to be precise, disregarding the “trash can” intermediary generation), albeit like the G4 Cube, it uses the extra size for efficient thermal dissipation, not for end-user system expansion.
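The size comparison is easy to verify from the published dimensions; since the two machines share the same 7.7-inch-square footprint, the volume ratio reduces to the ratio of their heights:

```python
# Mac mini vs. Mac Studio volume, from Apple's published dimensions (inches).
footprint = 7.7 * 7.7          # both machines share the same square footprint
mini_vol = 1.4 * footprint     # ≈ 83 cubic inches
studio_vol = 3.7 * footprint   # ≈ 219 cubic inches

print(f"{studio_vol / mini_vol:.2f}x")  # → 2.64x
```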
The Studio Display
Neither the Mac mini nor the Mac Studio comes with an integrated display, of course, so Apple has conveniently also introduced a mainstream (in a relative sense, of course…this is Apple we’re talking about, after all) offering. The company has a fairly long history of selling branded CRTs and (later) LCDs, but after discontinuing its 27” Thunderbolt Display in 2016 (an example of which I own), it switched to co-marketing and reselling LG’s 4K and 5K resolution LCDs for mainstream user purposes. The 32” Pro Display XDR remained in the company’s product line, but at $4,999 (plus an extra $1,000 for a stand and the same markup for nano-texture glass), it wasn’t for the masses.
The 27” 5K resolution Studio Display, complete with an integrated six-speaker array, multi-mic array, and a 12 Mpixel webcam, and powered by an A13 Bionic SoC, marks a return to the mainstream segment for the company. That said, it’s still $1,599, and that’s for a variant with an elementary stand and no nano-texture glass (a stand-less option for VESA mounting purposes is also available). And it’s not just for the Mac Studio; any Thunderbolt-interface Mac is a candidate.
The emphasis on “Mac” in the prior sentence is purposeful, by the way. As I learned with my own predecessor LCD, don’t buy this display if you fancy ever using it with a PC in the future. Granted, Thunderbolt-interface non-Apple systems do exist. However, Thunderbolt is this display’s only interface option. Even more importantly, you’ll find no hardware-button-accessible brightness, contrast, or other controls built into this LCD; these functions exclusively leverage software running on the connected Mac. Caveat emptor.
(At Least) One More to Go
Shortly after Apple unveiled its homegrown silicon strategy in mid-2020, it forecasted a full transition away from Intel-based computers by the end of 2022. At this point, we have Apple Silicon-based:

- MacBook Air laptops
- 13”, 14” and 16” MacBook Pro laptops
- Mac mini compact desktops
- 24” iMac all-in-ones
- The Mac Studio
Still remaining in the Intel stable, as I previously mentioned, are:

- The higher-end Mac mini variant
- The Mac Pro
I suspect a full Mac mini transition to Apple Silicon is gated by either the M2 or a higher-end M1 SoC; Apple may just be holding off on the latter for now in the hopes that those folks will buy more profitable Mac Studios instead. And the Apple Silicon Mac Pro (based on the M2? A higher-chiplet-count M1 Ultra implementation?) is a foregone conclusion. Apple’s plans seem on track (if not ahead of schedule), at least to me. Your thoughts? Let me know in the comments.
This article was originally published on EDN.
Brian Dipert is Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.