2022 tech themes: A look ahead

Article by: Brian Dipert

Having recently reviewed the past year, our intrepid engineer now tackles forecasting the hot topics of the year ahead.

Recently, I took a stab at summarizing what I thought were the key tech themes of the past year (see “A 2021 technology retrospective: Strange days indeed”). This time around, once again after skipping a year in the cadence, I’m going to tackle an even more challenging project: prognosticating what I believe will be the hot topics of the year ahead. In doing so, I’ve strived not to carry over 2021’s themes into 2022 unless I truly believed that they’d clearly repeat at the top of the heap in the year to come. That said, about that pandemic…

The continued COVID-19 question mark

Initial reports of a previously unknown virus causing pneumonia emerged from China in mid-December 2019, and the world quickly and dramatically changed. It hasn’t yet reverted to pre-pandemic characteristics, and it very likely never will. Entire industries have risen and fallen in response to the effects of government-decreed lockdowns, individuals’ and families’ layoffs and income downturns, severe (and often long-term) illness and deaths, and the like. And two years’ (give or take) worth of “Zoom school” has fundamentally affected an entire generation of children, their teachers, and their parents.

Sad but true, the pandemic isn’t even close to being over yet. One of the downsides of writing about fast-moving issues in advance is that you have no solid idea how things will look when your words finally appear in print. COVID-19 is a perfect example of what I’m talking about. Literally the same day I initially sat down to write this section of the piece, the first reports of a new variant of SARS-CoV-2 (now known as Omicron) started coming out of South Africa. Analysis indicates that it’s notably mutated relative to Delta and earlier variants. But what does that mean? Is it significantly more transmissible? More infectious to the unvaccinated, the vaccinated, and/or those who’ve already had COVID-19? Does it affect those infected more severely than prior variants? And to the degree that any of this is true, how do the results vary with age, gender, culture, and other variables?

By the time you read these words, we’ll likely know at least a bit more about Omicron than we do now as I’m writing them. But definitive conclusions will likely still be in process. At least one thing is already highly likely, however… something epidemiologists have told us from the beginning, regardless of whether we chose to listen and believe their words. Until a sufficiently high percentage of the world’s “herd” population (not just that of a particular country, or a region within that country) develops antibody immunity, whether natural (via infection) or through vaccination, the virus will continue to mutate and thrive. As Jeff Goldblum’s Jurassic Park character Dr. Ian Malcolm memorably opined: “Life finds a way.”

Deep learning’s Cambrian moment

Speaking of life… When I originally joined EDN full-time at the beginning of 1997, my initial editorial “beat” covered programmable logic devices and toolsets. Within a few months, I’d added volatile and nonvolatile semiconductor memory. And in short order, I’d also picked up multimedia technologies and products, notably graphics chips and boards. At the time, as my then-analyst colleague Peter Glaskowsky was also fond of pointing out, there were around 40 companies participating in the graphics processor business, not to mention even more suppliers of add-in cards based on those processors. But before long, the list got dramatically shorter, due to mergers, acquisitions, bankruptcies, and the like.

As I look at today’s participant-rich deep learning silicon and software market, spanning both training and inference, I can’t help but be struck by a sense of déjà vu, even though I realize I’m being a bit unfair in doing so. After all, back in the late 1990s, graphics processors were used essentially solely for graphics processing. NVIDIA and others’ visions of general-purpose graphics processing units (GPGPUs) were at the time largely theoretical, aside from a few GPU-accelerated Adobe plugins and niche scientific and supercomputer applications. Ironically, GPUs are now leading players in the AI processing space, along with a diversity of more application-focused coprocessors. So I do expect the deep learning market to have longer evolutionary and diversity “legs.” That said, I also forecast an eventual extinction-style shakeout here…beginning in 2022? Maybe.

The ongoing importance of architecture

Echoing what I wrote last time, as growth in the number of transistors that can be cost-effectively squeezed onto a sliver of silicon continues to slow, what you build out of those transistors becomes increasingly critical. Last time I talked about how pivotal the advancements in AMD’s latest Zen microarchitecture have been to its resurgence in recent years, along with Intel’s response. But the benefits of a superior architecture aren’t restricted to PC-intended products.

For example, architecture choices, along with the degree to which software exploits the benefits of those choices, will be critical to the future success (or failure) of the various deep learning processor suppliers mentioned earlier. Do you gamble on an eventual development platform winner—TensorFlow, PyTorch, etc.—and craft an architecture that supports only (or at least optimally) that platform? Or do you hedge your bets with an approach that, while perhaps not optimal in terms of performance, cost, power consumption, and other metrics, is more versatile?
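To illustrate the software side of that hedge, here’s a minimal sketch of one common framework-agnostic approach: train in PyTorch, then export to the ONNX interchange format so that whichever inference toolchain a given accelerator supports can consume the result. The model, file names, and tensor shapes below are hypothetical placeholders, not tied to any particular vendor’s flow:

```python
# Hypothetical example: export a PyTorch model to the framework-neutral
# ONNX format, rather than committing to a single deployment framework.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, num_classes),
        )

    def forward(self, x):
        return self.net(x)

model = TinyClassifier().eval()
dummy_input = torch.randn(1, 3, 224, 224)  # example NCHW input

# The exported .onnx file can then be compiled by a vendor's own
# toolchain onto its particular silicon.
torch.onnx.export(
    model,
    dummy_input,
    "tiny_classifier.onnx",
    input_names=["input"],
    output_names=["logits"],
)
```

Interchange formats like this are one way both silicon suppliers and their customers can avoid betting the farm on a single framework winner.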

Open source processors’ time in the sun

As Wikipedia notes, open-source hardware—specifically, for the purposes of this particular write-up, open-source microprocessor instruction sets and the architectures built around those ISAs—has been around for decades. It’s likely a little-known fact to some of you, for example, that a public domain instruction set for v2 and earlier versions of the Arm ISA exists. And Sun (with OpenSPARC) and IBM (with OpenPOWER) have also joined the open-source silicon movement.

Many more of you, conversely, are likely at least conceptually familiar with the burgeoning RISC-V movement. RISC, of course, is the Reduced Instruction Set Computer methodology, whose early proponents included Computer Architecture: A Quantitative Approach co-authors David Patterson (UC Berkeley professor and champion of the Berkeley RISC architecture) and John Hennessy (Stanford professor and principal designer of the MIPS architecture). For that seminal textbook, Patterson and Hennessy developed the DLX instruction set, which formed the foundation of the decades-old OpenRISC program.

Beginning in 2010, Patterson and his students also tackled periodic three-month (summer) RISC research projects. RISC-V is, as its name implies, the fifth generation of this project series. So why has this particular ISA generation caught such intense fire in the tech industry? Part of the reason, unsurprisingly, is the RISC-V architecture’s inherent robustness, along with its extensibility. Part of it, I suspect, is the maturation of the open-source development toolset and other software infrastructure supporting the architecture. Part of it is a reaction to growing frustration with the traditional industry approaches to processor development:

  • Closed instruction sets available only in fully fabricated silicon implementation form from a short list of suppliers (x86, etc.), or
  • Closed instruction sets that you could design your own chips based on, but only after paying hefty license and royalty fees to the ISA developer (Arm, MIPS, etc.)

And finally, part of it is something that I first mentioned in my prior write-up: instruction set emulation and broader operating system and application virtualization have gotten so robust, and the CPUs running this emulation and virtualization have gotten so performant, that precompiled-software portability is now a meaningful reality. As recent examples of RISC-V’s momentum, I’ll point to Imagination Technologies’ early-December announcement of its RISC-V IP plans (quick history lesson: Imagination had previously acquired MIPS in 2013, then sold it in 2017), along with an early-September job posting from Apple for a RISC-V programmer (??!!!).

For more on David Patterson’s views on RISC-V and processor architecture evolution more generally, I’ll direct you to his keynote at the September 2020 Embedded Vision Summit, which is available in public preview form.

Also of interest is Patterson’s follow-on interview session with Jeff Bier, founder of the Edge AI and Vision Alliance and general chairman of the Embedded Vision Summit (not to mention my boss), which is likewise available in public preview form.

The normalization of remote work (and the “Great Resignation’s” aftershocks)

As I write these words, and as I’ve also mentioned in a couple of other recent write-ups, I’m only a few weeks away from my quarter-century anniversary of being a work-from-home guy. Many of the rest of you have more recently been exposed to the concept, out of pandemic necessity. And while, like me, you’ve undoubtedly experienced its challenges, such as the temptation to further blur an already fuzzy demarcation between work and personal lives, you’ve also experienced its upsides: no more lengthy and mind-numbing round-trip commutes, increased flexibility within the historical workday, etc.

I suspect that, to at least a notable degree, we won’t ever completely return to the “way it was before.” In fact, I’d wager that having a taste of a work-from-home or “hybrid” employment lifestyle is one of the key factors behind the so-called “Great Resignation” that tech and broader media alike inform me is well underway. Folks don’t want to go back to “the way it was before”; they’re willing to (if necessary) take cuts in salary, benefits, etc. in order to maintain the flexibility they’ve experienced these past two years. And as employers are compelled to become more flexible in order to ensure employee retention, those employees will increasingly discover that compensation package trimming might not be required at all…although there will long be lingering Neanderthals.

The metaverse starts to stir

Back in late 2018 and early 2019, I wrote a two-part series on virtual reality (VR) and augmented reality (AR). I just revisited those posts in preparation for crafting this one, and I’m pleased with how spot-on they still seem to be (at least to me), three years later. I continue to believe that the two technologies will coexist going forward, although the latter may achieve broader adoption: VR, as its name implies, enables the user to replace reality with a virtual alternative, while AR allows for reality’s supplementation (regarding the latter, I always think of the example of your smart glasses’ integrated display flashing up the name of the person you’re chatting with at the cocktail party, for when your memory fails you).

But when will either (or both) begin to transition from early-adopter irrational exuberance to mainstream adoption? Perhaps we’ll look back at 2022 as the year when the crossing of the chasm started in earnest. For one thing, it’s impossible to overstate the significance of Facebook’s recent rebrand as Meta. Sure, some of the underlying motivation may have been an attempt to get out from under the negative image the old name had collected of late, akin to Philip Morris Companies’ relabel as Altria or Blackwater’s sequential rename first to Xe Services and later to Academi. But ever since then-Facebook announced its acquisition of Oculus in March 2014, it’s been clear that the metaverse (a term originally coined by Neal Stephenson in 1992’s seminal science fiction classic, Snow Crash) or, if you prefer, cyberspace (William Gibson in 1982’s short story Burning Chrome and the follow-on 1984 novel Neuromancer) is key to the long-term online community that the company aspires to cultivate.

Speaking of sizeable communities, it will be equally curious to me to see what (if anything) Apple publicly does here in 2022. According to persistent rumors, the company has been internally developing an AR headset for years, and the latest buzz says this is the year that the first-generation offering will see the light of day. Equally if not more interesting to me is the insight recently offered by long-time Apple analyst Ming-Chi Kuo that the company intends for AR gear to completely replace the smartphone within a decade. Thoughts, readers?

Autonomy slowly accelerates

2021 was another year filled with fully autonomous car tests and premature “coming soon” pronouncements; 2022 will likely be the same. Meanwhile, ADAS (advanced driver assistance systems) is getting increasingly robust, and such partial autonomy will continue its upward trend in the coming year. Similarly, while full autonomy everywhere remains a work in progress, I’ll wager that short-duration and other constrained-environment autonomy trials will begin to bear tangible fruit in the coming year; think about urban public transit, for example, or food and other package delivery services, both on the ground and, via drone, in the air.

Operating constraints of a different sort define a scenario I’m equally bullish about. I’m thinking specifically about trucking fleets. While it’s debatable whether today’s strained supply chain is the result of too few human drivers or just insufficiently compensated ones, its impact is being felt by everyone striving to acquire holiday gifts this season (among other things). Plus, the inherent shortcomings of drowsy, distracted, intoxicated, or otherwise imperfect human operators are a longstanding problem. While navigating semi-trailers the “last mile” to and from docks, airports, warehouses, etc. may remain the domain of non-autonomous operators for at least some time to come, long stretches of highway are comparatively uncomplicated scenarios for non-humans. Advanced fully autonomous semi-trailer trials are already underway, and I expect large-scale rollouts to begin, if not in 2022 itself, then soon thereafter.

Batteries get ever denser, ever more plentiful, and ever cheaper

Hear the words “mobile electronics device” and the first things you’ll probably think of are laptop computers, smartphones, and tablets. Broaden the brainstorming to include “anything electronic that can’t exclusively be powered by an AC cord—i.e., that needs to run on batteries some or all of the time”—and the list significantly lengthens. Here are some candidates off the top of my head (and in no particular order save for how they came out of my head):

At least some of this gear can alternatively be powered by conventional carbon-zinc, alkaline, or lithium batteries, but once these batteries’ stored charge is depleted, they need to be discarded and replaced (an expensive, not to mention environment-degrading, exercise). That said, historically, rechargeable batteries have been comparatively costly, not to mention low in capacity for a given cell size, low in output current, and low in cycle count before they, too, needed to be cast aside.

Thankfully, technology innovation, coupled with the magical capitalist combination of increasing demand and supply, is rapidly resolving those lingering shortcomings; specifically, the massive manufacturing plants being built to service large (and forecasted much larger future) EV demand are providing pass-through benefits to other applications, too. Just last week, for example, I picked up a 24-pack of Amazon Basics NiMH rechargeable AA batteries, with claimed 2,000 mAh capacity and support for up to 1,000 recharge cycles (and pre-charged to boot), for $20.99 as part of a Black Friday promotion. Tack on a couple of $13.58 eight-bay chargers, and I aspire to never need to buy disposable AAs ever again.
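Run the numbers and the appeal becomes obvious. Here’s a back-of-the-envelope calculation based on the figures above; the disposable-cell price used for comparison is an assumption for illustration, not a quoted figure:

```python
# Back-of-the-envelope cost comparison for the rechargeable AA setup above.
# The disposable-cell price is an assumed figure for illustration only.
pack_cost = 20.99            # 24 NiMH AA cells (Black Friday price)
charger_cost = 2 * 13.58     # two eight-bay chargers
cells = 24
rated_cycles = 1000          # manufacturer-claimed recharge cycles per cell

total_cost = pack_cost + charger_cost
cost_per_cell_cycle = total_cost / (cells * rated_cycles)
print(f"Total outlay: ${total_cost:.2f}")
print(f"Cost per cell per charge cycle: ${cost_per_cell_cycle:.4f}")

# Assumed disposable alkaline price (hypothetical): $0.50 per cell
disposable_per_cell = 0.50
breakeven_sets = total_cost / (cells * disposable_per_cell)
print(f"Break-even after roughly {breakeven_sets:.1f} full 24-cell replacements")
```

Even if the cells fall well short of their rated cycle count, the per-use cost works out to a small fraction of what an equivalent disposable cell would run.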

In closing (for this section), I’ll note one other important rechargeable battery application, equally beneficial to the environment, which I alluded to in my previous write-up. Renewable energy sources—sun, wind, tides, geothermal, and the like—may yet save us, at least to some extent, from the carbon-based-energy-caused calamity known as climate change. However, they’re inherently inconsistent: the sun doesn’t always shine, the wind doesn’t always blow, etc. Storing excess generated renewable energy in battery-bank intermediaries such as Tesla’s Megapacks, for discharge when renewable output is low, can buffer such cyclical behavior. And battery packs come in handy when legacy carbon-based energy sources fail, too.

Space travel becomes commonplace

Odds are quite good, I suspect, that you’ve already heard about the numerous manned flights to space (or, in some cases, to its edge) this past year by “normal” human beings, versus astronauts.

The number is small, but it’s notably bigger than the “zero” of previous years. And that all of them were successful (more or less) is a harbinger of more to come in 2022 and beyond. A bit of editorializing on this situation, by the way, if you’ll indulge me: while it might be easy (as I admittedly initially did) to view these solely as the ego exercises of the super-rich, I was heartened by the insights of William Shatner (i.e., Captain Kirk in the original Star Trek series and several follow-on movies), versions of which I both saw on video and read, both prior to and after his apparently life-changing flight on New Shepard:

“The idea here is not to go, ‘Yeah, look at me. I’m in space,’” the legendary actor said… “This is a baby step into the idea of getting industry up there, so that all those polluting industries … off of Earth.” The actor said operations could be built “250, 280 miles above the Earth” to then “send that power down here, and they catch it, and they then use it, and it’s there.” All it “needs is … somebody as rich as Jeff Bezos [to say], ‘Let’s go up there,’” Shatner said.

And with that, nearly 3,000 words in, I’m going to wrap up this forecast of the year to come. What did I overemphasize, understate, and completely overlook? Let me know in the comments!

This article was originally published on EDN.

Brian Dipert is Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 
