Apple’s 2022 WWDC: Evolution and obsolescence

By Brian Dipert

Good news…bad news…bewildering, bah-hah-inducing news…there was a little bit of everything at this year’s Worldwide Developers Conference.

Look back at my roughly 20 years’ worth of coverage of various iterations of Apple’s yearly mid-summer conference, and you’ll notice that software and services announcements dominate. This should be no surprise; the “D” in WWDC stands for developers, after all, and the bulk of Apple’s developers create software apps that tap into Apple’s operating system and service ecosystems (sometimes competing with Apple’s own products in the process). Sure, the WWDC news is also of interest from a personal perspective if you’re an Apple product user, but from a professional standpoint, developers are the focus, and software therefore rules the roost.

That said, hardware shifts also affect developers’ plans, although Apple often focuses its WWDC-timed announcements here on big-picture trends, especially after the demise of the Macworld Conference & Expo (where, for example, Steve Jobs unveiled the first-generation iPhone in early January 2007). Apple chose the 2005 WWDC, for example, to announce its computers’ planned transition from PowerPC to Intel x86 CPUs. And more recently, just two years ago in fact, Apple again used WWDC to announce another computer processor architecture transition, this time from x86 to its own Arm-based “Apple Silicon” SoCs. In contrast, however, no hardware whatsoever was announced at last year’s WWDC.

An as-usual abundance of new and enhanced software features was unveiled this week at the 2022 WWDC (the first in-person event in three years, “thanks” to COVID-19) for Apple’s main operating systems:

  • MacOS, for computers (particularly re upcoming MacOS 13, “Ventura”)
  • iOS, for smartphones (specifically on upcoming iOS 16)
  • iPadOS, for tablets (for the similarly-numbered iPadOS 16)
  • watchOS, for smartwatches (watchOS 9, to be precise), and
  • tvOS, for Apple TV hardware (upcoming tvOS 16)

If you’re curious about software news in a general sense, I’ll redirect you elsewhere for in-depth details; plenty of sites published extensive coverage, including play-by-play keynote live blogs.

My focus, at least for this particular piece, will instead be twofold:

  • New hardware, specifically its constituent silicon building blocks, and
  • How evolutions in Apple’s operating systems affect the company’s existing hardware (and those who, like me, own it)

And as an added bonus, I’ll also conclude with a bit of snark. Without further ado…

Generational Evolution

Two years ago, as previously mentioned, Apple publicly announced its planned transition from x86-based to Arm-based “Apple Silicon” computers, which the company estimated at the time would take two years. Depending on when you “start the clock,” as well as how literally you take the “two year” quote, the schedule may or may not now be in overtime, since the highest-end member of Apple’s computing product line, the Mac Pro, is still Intel-only as I write these words. That said, if you’re feeling generous, you might keep in mind that Apple also said at the time that its first “Apple Silicon” chip (and the systems based on it) wouldn’t roll out until the end of that same year; that rollout in fact came five months after the announcement, in November 2020.

In my initial coverage of the M1 SoC, I referred to the “A14 to A14X … err … M1 evolution,” by analogy referencing Apple’s earlier A12-to-A12X-to-A12Z SoC family expansion. Although the tone of my comment was joking (at least in attempt), the underlying point I was trying to make was serious. Clearly, an SoC for a smartphone differs in substantial ways from one intended for a computer or even for an intermediary tablet (no matter that the lines have now blurred somewhat due to binary compatibility and functional overlap between the operating systems and applications they all run), in factors such as:

  • CPU core counts, “mixes” between performance- and power consumption-tuned core variants, and clock speeds
  • GPU core counts, clock speeds and complexities
  • Amounts and types of on-chip cache
  • Technologies, capacities and speeds of accessed system DRAM
  • Peripheral function types and mixes
  • etc.

That said, if Apple were a smart company (and it is), it’d still strive to leverage whatever development-effort commonality and sharing it can between A-series SoCs for smartphones and tablets and M-series SoCs for computers, starting with the microarchitectures used for their CPU and GPU cores. And indeed, as AnandTech’s as-always in-depth coverage makes clear, the M1 SoC was built on the foundation of its A14 SoC precursor, unveiled inside iPhone 12 smartphones one month earlier.

Operating under a reasonable (IMHO, at least) assumption that “the past is a guide to the future,” AnandTech (and I) are leveraging the A14-to-A15 SoC evolution as a blueprint for filling in some of the gaps in the scant info that Apple revealed this week about its new M2 SoC.

It’s the silicon foundation of the modestly redesigned 13” MacBook Air and even more subtly tweaked 13” MacBook Pro (both scheduled to start shipping next month).

Apple first off revealed that the M2 is manufactured on a “second-generation” 5 nm process from its foundry partner (unidentified, but assuredly TSMC), albeit the same fundamental lithography as was used on the M1. The M1-to-M2 total transistor count also grew 25%, to 20 billion. As even a brief glance at the comparative die sizes makes clear, the fundamental focus of that “second-generation” 5 nm process was apparently not on transistor density improvement, because the M2 die is notably larger than its forebear.

Instead, the development attention was likely subdivided between higher inherent performance (i.e., faster clock speeds) and lower power consumption, together translating into reduced energy consumption to complete a given task.

What about the circuitry that all those transistors were used to construct? Let’s begin with the CPU. As before, it’s a big.LITTLE (to use Arm terminology) combo of four “high performance” cores and four “high efficiency” cores. And, following the earlier A14-to-A15 analogy to its logical conclusion, each core type architecture has evolved, from “Firestorm” to “Avalanche” for the “high performance” cores and from “Icestorm” to “Blizzard” for “high efficiency” cores.
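
If you’re curious how that cluster split surfaces in software, macOS exposes per-cluster core counts via sysctl. Here’s a minimal Swift sketch, assuming the hw.perflevel0/hw.perflevel1 keys that current Apple Silicon Macs report (they return nothing on Intel machines, and the key names could of course change on future hardware):

    import Foundation

    // Read an integer-valued sysctl key; returns nil if the key doesn't exist
    // (e.g., on Intel-based Macs, which have no performance/efficiency split).
    func sysctlInt(_ name: String) -> Int? {
        var value: Int32 = 0
        var size = MemoryLayout<Int32>.size
        guard sysctlbyname(name, &value, &size, nil, 0) == 0 else { return nil }
        return Int(value)
    }

    let pCores = sysctlInt("hw.perflevel0.physicalcpu")  // "high performance" cluster
    let eCores = sysctlInt("hw.perflevel1.physicalcpu")  // "high efficiency" cluster
    print("P-cores:", pCores ?? 0, "E-cores:", eCores ?? 0)
    // On an M1 (and, presumably, an M2) MacBook Air, expect 4 and 4.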

Apple’s M2 summary graphic includes the claim that it’s “18% faster” than the M1. The devil’s in the details, though, as the saying goes; this assertion turns out to be specific to scenarios in which the user is running multithreaded applications, presumably those that use all eight cores concurrently. From the A14-to-A15 analysis, we know that the “Blizzard” high-efficiency cores garnered special architecture development attention from Apple, which makes sense since the more they can be leveraged versus the high-performance alternatives, the better the resultant overall energy consumption will be. How will the M2 compare to its predecessor when running single-threaded apps, for example? We won’t know until developers and reviewers get their hands on hardware, because Apple never dives into such specificity.

We also don’t know, of course, how much of the performance improvement comes from a core’s (or the cores’) architecture enhancements versus more straightforward peak clock speed increases. And we don’t know how much of it comes from factors other than the cores themselves. Take cache, for example; has the L1 associated with each core, and/or the L2 shared by the cores, increased in size and/or speed? The same goes for system memory, although we at least know a bit more here: Apple has migrated from LPDDR4X (low-power DDR4) on the M1 to latest-and-greatest LPDDR5 on the M2, with a consequent 50% higher peak bandwidth, and has expanded the maximum DRAM footprint from 16 GBytes on the M1 to 24 GBytes on the M2.
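
For context on that 50% figure, here’s a back-of-the-envelope sketch of the peak-bandwidth math. The 128-bit bus width and the 4266/6400 MT/s transfer rates are assumptions drawn from typical LPDDR4X and LPDDR5 parts, not Apple-published specifications:

    import Foundation

    // Peak bandwidth = transfers per second x bytes moved per transfer.
    func peakBandwidthGBps(transferRateMTps: Double, busWidthBits: Double) -> Double {
        transferRateMTps * (busWidthBits / 8.0) / 1000.0  // MT/s x bytes/transfer -> GB/s
    }

    let m1 = peakBandwidthGBps(transferRateMTps: 4266, busWidthBits: 128)  // ~68 GB/s
    let m2 = peakBandwidthGBps(transferRateMTps: 6400, busWidthBits: 128)  // ~102 GB/s
    print(String(format: "M1 ~%.1f GB/s, M2 ~%.1f GB/s, ratio %.2fx", m1, m2, m2 / m1))
    // The ratio works out to roughly 1.5x, consistent with the "50% higher" claim.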

Next, let’s look at the GPU. The “35% faster” claim here is more believable, if only because the maximum GPU core count is 25% higher than on the M1, at 10 vs 8 (as before, system implementations of the M2 will come both in 10-core and in lower core-count GPU variants, whether 8, 6 or something else, to maximize usable SoC yield). Also, Apple’s been rolling its own GPUs, versus leveraging Imagination Technologies- and Intel-sourced graphics, for a comparatively short time, particularly in the computer space, so the learning curve-derived generational improvement potential is higher here. Potential peak clock speed increases are always an influencing factor, too. And of course, since Apple’s computers now (like its smartphones and tablets) leverage a unified memory architecture, with the graphics frame buffer and system memory sharing a common dollop of DRAM, the LPDDR4-to-LPDDR5 transition helps here too.

Other M2 tidbits of note:

  • It seemingly inherits the A15’s deep learning inference core, running at the same clock speed as it does in the A15, which explains both Apple’s absolute and vs-M1 comparative performance claims about it.
  • When I previously covered the M1 Pro and M1 Max derivatives of the M1, I essentially (intentionally simplistically) positioned them as straightforward CPU and GPU core-count variants built on an M1 foundation. It turns out Apple also made other tweaks; for example, the successors’ video encode/decode blocks added hardware-accelerated support for the company’s ProRes and ProRes RAW codecs, with professional video applications (and the Mac Pro hardware platform to come?) in mind. The M2 carries forward this added support, also handling 8K-resolution video (vs “only” 4K on the M1); a quick way to probe for such codec support from code appears just after this list.
  • I’m admittedly surprised that the M2 still only supports v3 of the Thunderbolt interface specification, and only two ports’ worth at that. The M1 Mac mini’s external expansion shortcomings versus its Intel precursor’s capabilities apparently won’t get rectified, at least for a potential base M2 system variant in this same form factor.
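
Regarding that ProRes bullet: an application can ask VideoToolbox whether hardware-accelerated decode is available for a given codec. Here’s a minimal sketch; which codecs report true obviously depends on the silicon underneath, and I haven’t run it on M2 hardware:

    import VideoToolbox

    // Query hardware decode availability for a few codec types, ProRes 422 included.
    let codecs: [(String, CMVideoCodecType)] = [
        ("H.264", kCMVideoCodecType_H264),
        ("HEVC", kCMVideoCodecType_HEVC),
        ("ProRes 422", kCMVideoCodecType_AppleProRes422),
    ]

    for (name, codec) in codecs {
        let supported = VTIsHardwareDecodeSupported(codec)
        print("\(name): hardware decode \(supported ? "available" : "unavailable")")
    }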

Speaking of Intel-based computer predecessors…

Dramatic Obsolescence

As regular readers may already realize, I frequently cover the topic of “obsolescence by design”…most recently just a couple of days ago, in fact, in that instance related to deep-drained embedded batteries that turned an expensive set of earbuds into unrecoverable paperweights. Here, in contrast, I’ll be covering the angle wherein “perfectly good hardware is prematurely done in by a manufacturer’s refusal to continue supporting it with evolutions in associated operating systems and other software.” And Apple unfortunately provided an abundance of case study examples this week.

Let’s start with MacOS. Upcoming MacOS 13 “Ventura,” simply stated, will only support computers based on “Apple Silicon” (of course) and, for Intel-based systems, only those including the T2 Security Chip. That decision obsoletes the early 2015 13” Retina MacBook Pro I’m typing these words on, for example. It obsoletes the late 2014 Mac mini to my right. And, most frustrating of all, it obsoletes the Mac Pro 6,1 “trash can” computer to my left. That last one, as you’ll soon read about in more detail in a coming-soon separate blog post, I acquired (used) only a few weeks ago, as a birthday present from my wife.

Granted, the “trash can” Mac Pro entered production in 2013, but it didn’t exit production until December 2019, only 2.5 years ago. Nevertheless, it’ll be dead from a software support standpoint less than three years from now, or maybe even sooner. Apple historically “goes gold” on a new operating system release every fall, and from that point until the next year’s generational release, it supports both the “new kid in town” and its previous two generational forebears. The 2013-2019 Mac Pro runs MacOS 12 “Monterey,” which, given historical patterns, will fall out of support in fall 2024. But hey, there’s nothing preventing Apple from breaking precedent and speeding the demise to its inevitable conclusion even sooner, of course…

Lest you think Apple’s just picking on Intel-based computers, let’s look at iOS. To the company’s credit, it’s to date supported systems based on the A9 SoC, which dates from 2015, for a notable length of time. But with upcoming iOS 16, A9- and A10 Fusion-based smartphones get the axe…that means the iPhone 6s, the iPhone 7 and 7 Plus…and my two first-generation iPhone SEs, even though they still run iOS 15 remarkably well given their March 2016-intro age. And unlike with MacOS, Apple’s support for iOS 15 (as well as iPadOS 15 and watchOS 8…keep reading…) effectively ends immediately with iOS 16’s “gold” release later this fall.

Here’s the odd inconsistency, though. iPadOS 16 similarly neuters legacy tablet support, but with curious exceptions. My A8-based iPad mini 4 belatedly gets the axe this fall, for example. But my A9X-based first-generation 9.7” iPad Pro, which like the first-generation iPhone SE dates from March 2016, does not, even though its A9-based smartphone siblings do (the only meaningful difference between the A9 and A9X is their relative graphics robustness).

And watchOS 9? It marks the nexus of corporate hubris. It (also belatedly) drops support for the Apple Watch Series 3, two of which I currently own. But here’s the rub: even after announcing the Series 3’s pending end of support on Monday morning, Apple’s still selling both new and refurbished Series 3 smartwatches through its online store (the screenshots I captured of those listings were, by the way, taken on my pending-demise iPad mini 4).

If that’s not screwing over the naïve customer, I don’t know what is (unless, of course, Apple’s got some atypical long-term support plan for watchOS 8 that it hasn’t shared with us yet).

The Snark

In closing, especially after the prior somber prose, I thought I’d pass along a chuckle (at least I found it funny). For several MacOS (and associated iOS and iPadOS) generations, Apple has iteratively been building out a set of capabilities it bundles under the Continuity moniker:

  • Controlling both an iPad and Mac with a common keyboard, mouse and trackpad set
  • Drag-and-dropping content from one device to another
  • Mirroring or extending a Mac display to a nearby iPad (mimicking capabilities that third-party applications like Duet Display and Luna Display had already delivered)
  • Making and taking calls, and sending and receiving text messages, on multiple devices
  • etc.

Well, as of MacOS 13 and iOS/iPadOS 16, you can add Continuity Camera (specifically for video; it’s previously been available for still images) to the list. Once again copying what third parties already provide, it enables you to use an iPhone as an alternative to your computer’s built-in webcam. What cracks me up isn’t the concept per se, it’s the reasoning: “Since the iPhone has larger and more capable cameras than the small FaceTime cameras placed in a laptop lid.”
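
For the curious, here’s roughly how a Mac app sees such a camera: on macOS, an iPhone acting as a Continuity Camera simply shows up as another video-capture device alongside the built-in FaceTime camera. A minimal AVFoundation sketch follows (the device-type details are my assumption; I haven’t verified this against the macOS 13 betas):

    import AVFoundation

    // Enumerate the video-capture devices the system currently exposes.
    // A Continuity Camera iPhone should appear here as an external camera,
    // right next to the Mac's built-in FaceTime camera.
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInWideAngleCamera, .externalUnknown],
        mediaType: .video,
        position: .unspecified
    )

    for device in discovery.devices {
        print(device.localizedName)
    }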

Admittedly, I’ve long complained about the subpar webcams built into Macs…heck, I even tried using an HDV camcorder as an alternative. But unlike others, I have a basis for comparison: the 5 Mpixel (still) and 1080p (video) resolution webcam built into the thin bezel of my svelte Microsoft Surface Pro X, which delivers beautiful images under a variety of ambient lighting conditions and even supports cool capabilities such as gaze direction-correcting Eye Contact. Conversely, Apple seemingly can’t even implement a decent webcam in a $1,599 (and up) standalone display. So with all due respect, Apple, I don’t think the thin laptop lids are the issue. I think you’re the issue. But hey, good luck trying to sell a bunch of iPhones to “solve” the problem.

Having just crossed through 2,500 words, I’m going to close for now. Comments or questions on anything I’ve written? As always, sound off in the comments!

This article was originally published on EDN.

Brian Dipert is Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 
