Apple’s transition to Arm chips

By Brian Dipert

Performance, power consumption, and control are the three primary factors behind Apple’s transition to Arm chips.

I’ve got some serious déjà vu going on right now. As (very) long-time readers may recall, 15 years (and a few days) ago, I wrote a blog series on Apple’s then-rumored transition from IBM’s PowerPC architecture to Intel’s x86 CPUs. Two days later, another blog series, which I began penning at Apple’s WWDC (Worldwide Developers Conference) right after Steve Jobs’ keynote, confirmed the scuttlebutt. And here I am a decade and a half later: at WWDC 2020, Apple’s gone and done it again, fully embracing the Arm architecture that’s powered its smartphones, tablets, and smartwatches since 2007 (as well as acting as a co-processor in its PCs since the summer of 2018).

[Slide: introducing Mac SoCs at Apple WWDC 2020]

It’s a switch that’s been rumored for years, as I noted in my most recent coverage of this particular topic, back in January 2019. And it’s the third time Apple’s undertaken such a CPU transformation; as I wrote a year and a half ago, “Initial Macs were based on Motorola’s 68000 processors; the company transitioned to IBM- and Motorola-developed PowerPC CPUs beginning in 1994.” From there to Intel, of course; and now, to Arm, completely, and within two years, the company claimed in this week’s virtual keynote. But will it really play out that way, across all computer product line segments? And will it be a product line transition, or an expansion, or a little bit of both? The answer to these questions depends on numerous factors, many but not all under Apple’s control. Before predicting the path forward, let’s first retrace Apple’s steps to this point. After all, to quote Confucius, “Study the past if you would define the future.”

Why did Apple switch away from Motorola in the first place? Part of the answer is that it really didn’t, since PowerPC was co-developed by IBM and Motorola. But it was still a notable architecture switch, and performance concerns primarily drove it. Motorola’s 68K architecture was running out of steam, and its 88K successor was late and seemingly underwhelming. Keep in mind, too, that the portable Mac market was a sliver of the total at the time. Yes, Apple had introduced its first “portable” computer back in 1989, followed by successors, but the bulk of Apple’s business consisted of systems that solely plugged into the wall. So power consumption was a secondary (or lower) concern in the grand scheme of things, and heat could always be dealt with by adding more fans.

One little-known (but important, IMHO) additional aspect of the PowerPC era is that while Apple relied on its partners for CPU designs, it handled much of the core logic and other silicon/hardware development work itself. Now fast-forward to 2005. PowerPC was, like 68K before it, running out of performance headroom, both absolutely and relative to its x86 competitor, a reality defined not only by comparative architectural potential but also by comparative market acceptance. x86 CPU shipments (and revenue, and profits) dwarfed those of PowerPC, leading to decreased investment in PowerPC design, associated manufacturing processes, and other key factors by the various partners.

Putting absolute performance aside, mobile (and, more generally, thermally constrained) computing form factors were increasingly coming to the fore. Expand the analysis from absolute performance to performance per watt, and the comparative advantages of the x86 architecture became even more evident. Trust me: my opinions aren’t just hearsay. I’ve owned both a PowerPC G4-based iBook and PowerBook, a G4 Cube, a G4 Mac mini, and both dual G4- and dual G5-based Power Macs, along with Intel- and Windows-based alternatives of the era. The Apple machines were comparatively slow and hot, and the laptops went through battery charges like water. I stuck with them as long as I did only out of fondness for the systems’ software, not their hardware, and I was poised to move fully back to Windows-based machines when Apple announced the x86 switch. And don’t forget: when Apple made its decision, it had in-depth insight into Intel’s future process and product roadmaps, which Intel executed on solidly, at least until recently.

In switching to Intel, however, Apple essentially ceded all silicon development to its partner. Admittedly, the company briefly flirted with NVIDIA as a core-logic supplier (via the 9400M; I owned an SFF PC based on that same chipset design), but those two companies quickly (and dramatically) parted ways. Through it all, Intel remained Apple’s sole PC processor supplier, and as core logic (followed by graphics) was subsumed into the CPU, Intel gained more and more platform control.

Control (aka “vertical integration”) is something that, as long-time Apple watchers already know, the company has always yearned for, in ever-increasing amounts. The step backward exemplified by the Intel partnership has become increasingly problematic as Intel’s process development pace has slowed. Intel’s historical “tick-tock” cadence, alternately incrementing the process node and the architecture implemented on that node in successive years, was officially replaced (out of necessity) by a “process–architecture–optimization” model beginning in 2016, with multiple optimization cycles of late.

Specifically, Intel’s 10 nm process node, originally scheduled to appear in product form beginning in 2015, is only now ramping into volume production, and it falls short of original forecasts on performance, power consumption, transistor density, and other metrics. Intel’s chip designers have done an admirable job squeezing as much performance as they can out of the 14 nm process (and the architectures built on it), but they’re now running up against theoretical limits. And the resultant IC delays and cancellations have had a ripple effect on system partners’ plans as well, Apple’s included.

Contrast this with Apple’s homegrown efforts. The company holds an architecture license from Arm, meaning that it has significant implementation latitude as long as it maintains instruction set compatibility. It leverages foundries as its manufacturing partners; as I wrote in late 2018, Apple has had 7 nm-fabricated, Arm-based SoCs in production for nearly two years already. And unlike any PC supplier, not to mention any Android-based smartphone, tablet, or smartwatch manufacturer, it not only handles system (and underlying silicon) design itself, it also develops its own operating system, along with a robust application suite for it. All of which makes a CPU architecture switch a much more palatable proposition than it would be for, say, Dell. And it’s not as if the company hasn’t been telegraphing its intentions for a while now; several years ago, for example, Apple transitioned to its own iOS graphics architecture, commensurate with a switch to a homegrown graphics API.

As was the case 15 years ago, during the upcoming transition Apple will offer both Arm- and x86-compiled versions of macOS 11.0 “Big Sur” (which it showed running in Arm-native form at the keynote) and subsequent operating systems. As before, Apple will provide a “Universal” application “wrapper” that enables applications to run on both operating system variants (Apple showed off Arm-native versions of both its own branded apps and those from Adobe and Microsoft at the keynote). As before, Apple will also leverage “Rosetta” emulation to allow x86-only apps to run on an Arm-only silicon foundation. And as before, Apple is already seeding the developer community with an Arm-based platform akin to the Mac mini. One new twist: thanks in part to the Catalyst co-development platform, iOS and iPadOS apps will also run natively on the new Arm-based hardware.
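To make the dual-architecture story concrete, here’s a minimal Swift sketch of what the transition looks like from a developer’s chair. The arch() compilation conditions are standard Swift, and the “sysctl.proc_translated” key is the documented way for a process to ask whether Rosetta is translating it; consider this an illustrative sketch of the mechanics, not anything Apple showed at the keynote.

import Darwin

// Compile-time check: which slice of a Universal binary is this?
#if arch(arm64)
let builtFor = "arm64 (Apple silicon)"
#elseif arch(x86_64)
let builtFor = "x86_64 (Intel)"
#else
let builtFor = "unknown architecture"
#endif

// Runtime check: is this (x86-compiled) process being translated by Rosetta?
// The key reports 1 under translation, 0 when native; the call fails
// (returns -1) on systems that don't recognize the key at all.
func isRosettaTranslated() -> Bool {
    var flag: Int32 = 0
    var size = MemoryLayout<Int32>.size
    let status = sysctlbyname("sysctl.proc_translated", &flag, &size, nil, 0)
    return status == 0 && flag == 1
}

print("Built for \(builtFor); running under Rosetta: \(isRosettaTranslated())")

A “Universal” binary, meanwhile, is simply one file packaging both architecture slices; running “lipo -archs” against an application binary lists which architectures it contains.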

[Slide: the macOS Big Sur development platform at Apple WWDC 2020]

[Slide: introducing the Developer Transition Kit at Apple WWDC 2020]


[Continue reading on EDN US: Questions]

Brian Dipert is Editor-in-Chief of the Edge AI and Vision Alliance, a Senior Analyst at BDTI, and Editor-in-Chief of InsideDSP, the company’s online newsletter.

