A technology forecast for 2023

Article By : Brian Dipert

Our intrepid engineer prognosticates the year to come, to be followed shortly thereafter by a look back at the past year.

In the past, EDN has published my retrospectives on the prior year first (here’s 2019, for example, and 2021…we didn’t do one in 2020), followed by my forecasts for the year to come (2020 and 2022). While the cadence admittedly makes conceptual sense—one tends to review and learn from the past before prognosticating what’s to come, after all, in the spirit of “Those who cannot remember the past are condemned to repeat it”—from a practical standpoint it was non-ideal. Take last year, for example: my retrospective appeared on the EDN website on December 9, 2021, but due to publishing lead times, I’d submitted it on November 27, 2021, nearly two weeks earlier and more than a month before the year’s end. A lot can happen in a month-plus!

As such, we’re going to try reversing the order this time around. I’ll be submitting this “2023 look ahead” piece first, in late November 2022. While it’s still true that “a lot can happen in a month-plus” (reference, for example, topic #3 “unpredictable geopolitical tensions” that follows), with resultant influence on prior-made forecasts, I trust that my readers will be forgiving if my crystal ball is less than perfectly…err…crystal clear. In exchange, the aspiration is that my “2022 look back” piece, to be submitted some time in December, will be more encompassing as a result.

Without further ado, and ordered solely in the sequence in which they originally streamed out of my cerebrum…

Inconsistently easing semiconductor constraints

A year ago, looking back over my shoulder at the 2021 that was then wrapping up, I wrote:

The complex global supply chain from semiconductor fab all the way through to finished systems on retailer shelves is being clobbered by manpower and other constraints, coupled with ongoing worldwide measures intended to tamp down COVID-19 flare-ups. Toss in abundant new-market demand, specifically from cryptocurrency “miners,” and suffice it to say that being a product planner is about the last job I’d want right now.

Thankfully, we’re (generally, at least) in a much better place one year later, although we’re not yet fully “out of the woods”. The degree of improvement depends, for example, both on what specific segment of the technology sector your company operates in and on how big or otherwise influential your company is within that segment. I’ll elaborate further on these topics in next month’s retrospective.

So, what will 2023 look like in this regard? Ironically, some semiconductors are seemingly already in an oversupply situation, specifically those that are:

  • Inherently high-volume, and
  • Commodity in nature, with both factors translating into
  • Participation by multiple (large) suppliers

Specifically, I’m talking about semiconductor memory, both volatile (DRAM) and nonvolatile (NAND flash memory). Prices are crashing, along with profits, translating into customer delight and supplier angst. To some degree, this is nothing new: as a long-ago veteran of the NOR flash memory market, I’m intimately familiar with the boom-and-bust cycles that periodically occur in such industries—want to see my scars? If anything, COVID effects just served to stretch out the otherwise normal-length constrained half of the boom-and-bust sine wave.

In contrast, lower-volume and less commodity products will return to supply stability more slowly, so we’ll probably continue to periodically hear about situations such as automobile manufacturing lines shut down due to IC non-availability.

The downturn of the bitcoin mining market has enabled the graphics processor segment (another high-volume consumer of wafers and of fab, test, and packaging capacity) to regain some semblance of normalcy, a situation that I suspect will extend into the new year. And Intel seems to finally be getting its manufacturing house in order, albeit after a multi-year flailing-about delay, which should stabilize (and maximize) yields out of its existing fab network, both for itself and for its fledgling foundry services aspirations.

That said, the recently passed CHIPS and Science Act in the United States likely won’t translate into notably increased supply (from Intel or others) in 2023, since it takes longer than that to build, “kit” and ramp new fabs to production volumes and yields (assuming there’s enough water and electricity to “feed” them…see topic #2 that follows). And that said, the bulk of the world’s semiconductor fab output will continue to come from Asia, specifically the combo of South Korea and Taiwan (assuming economic, military and/or political conflict doesn’t squelch the wafer outflow…see topic #3 that follows).

Mounting environmental concerns

Manufacturing chips is incredibly resource-intensive, specifically as it relates to both electricity and water consumption (as is data center operation, by the way). Look at the fabs in the United States, and you’ll notice that a sizable percentage of them are in states such as Arizona, New Mexico, and Utah, all sharing common characteristics such as an abundance of available flat land, relatively free of seismic concerns. Unfortunately, as anyone who’s followed recent weather trends in the West already knows, water there is becoming increasingly scarce, as the Colorado River turns into a trickle earlier each year and its average daily flow volume weakens even at normally fruitful times. It also bears noting that fresh water shortages aren’t a West- (or even U.S.-) only phenomenon; just last year, for example, Taiwan was grappling with whether to prioritize farmers or TSMC and other foundries as drought gripped the country.

Droughts have happened before, of course, but the overall trend is increasingly difficult for even the most skeptical folks to ignore: the planet is warming, and human activity is its root cause. Which leads us to electricity: increased demand ironically further exacerbates global warming (leading to even more demand driven by increased air conditioning usage…lather, rinse and repeat…), specifically when that power is generated using fossil fuels such as coal and natural gas. Admittedly, the transition to “green” energy sources such as solar, wind, hydro (I’m referring to tides and waves, not dams) and geothermal is underway (bolstered by a re-embrace of more controversial sources such as nuclear fission), but this shift is going much more slowly than will be necessary to keep the peak temperature increase below the 1.5°C target beyond which experts believe the consequences for both our species and our planet will be dramatically more dire. And will notoriously short-term-thinking humans, fixated on instant gratification and personal profit, be willing to make the lifestyle changes necessary to not overshoot (and ideally undershoot) this threshold? Fuhgeddaboudit.

I’m admittedly feeling even more cynical than usual, because as I write these words, this year’s United Nations Climate Change Conference is going into weekend negotiation “overtime” with no meaningful breakthrough in sight. If anything, disagreements between countries and (more generally) between the developed and developing world are seemingly growing increasingly stark as COP27 progresses. Past promises of global warming prevention are transforming into proposals for increasingly radical geo-engineering intervention, along with talk of minimization, mitigation, and adaptation; a recent environment-themed issue of The Economist perhaps communicated it best with a prescient cover.

So, in spite of the hurricanes, floods, droughts, forest fires, and all of the other extreme weather events that are growing more numerous and intense all around us, reminding us of what we’re doing to Gaia and its other inhabitants, do I think we’ll continue on the path toward irrevocable ecological disaster in 2023, regardless of whether or not (for example) the environmentally focused, recently passed Inflation Reduction Act in the United States remains in force (keep reading)? Alas, I do. And having said that, I fervently hope to be proven wrong.

Unpredictable geopolitical tensions

Although the recently concluded mid-term elections in the United States have resulted in a “divided government” situation (sorry, folks, but I admittedly know best the country in which I’m a resident and citizen; please do chime in, in the comments, with observations on your own locales, such as potential positive changes in deforestation trajectory resulting from the recent election results in Brazil), the fact that the Democratic Party still controls the White House and (narrowly) the Senate, with the Republican Party in the (narrow) majority in the House of Representatives, will likely have little to no impact on the already-approved Rural Broadband Expansion Act, the Infrastructure Investment and Jobs Act, and the aforementioned CHIPS and Science Act and Inflation Reduction Act. The beneficial technology impacts of the CHIPS Act have already been discussed, and the benefits of the first two acts are also obvious, both in terms of expanded broadband infrastructure (new “builds” and upgrades alike) and the resulting expanded tech usage by both new and existing broadband service customers.

Look beyond the U.S. border, however, and things get much more complicated, therefore unpredictable. First off, there’s the ongoing conflict between Ukraine and the invading Russians. Will it be resolved, militarily and/or via negotiated settlement, any time in 2023 (or, for that matter, between when I write these words in late November and the end of 2022)? And if so, what form will it take? Who knows. Until (and realistically still even after) resolution occurs, the impacts on the tech sector are numerous:

  • The coding competencies of software engineers in both countries have long been legendary, the result in no small part of archaic Soviet-era computer hardware that compelled programmers to squeeze optimum results out of limited resources. For now, ongoing coding in Ukraine is understandably being put on the back burner as programmers instead take up arms to defend their country, while sanctions from the West have substantially hampered ongoing business for Russian software firms.
  • Those sanctions are both economic and technology access-constraining in nature. Many Western companies have proactively shut down their Russian (as punishment) and Ukrainian (for employee safety) subsidiaries and, “encouraged” by Western governments’ policies, are curtailing shipments of chips and the IP inside them, software and systems. The resulting tangible impacts affect not only the Russian military’s ability to replenish its armament stocks, but also the Russian citizen’s ability to purchase new smartphones, computers, televisions, and the like.
  • What are those Russian coders doing with their newfound “spare time”? At least some of them are turning to hacking, with Russian government “encouragement”. While the bulk of this cyber-attention is focused on Ukraine, the NATO countries that back Ukraine are also garnering hackers’ interest, as more generally is anyone deemed anti-Russian.
  • One particularly big question mark leading into the cold winter in Europe is the degree to which Russia will curtail natural gas shipments to Ukraine’s allies, and the degree to which the impact of this decreased heat source will cause those allies’ support to waver.
  • And more generally, even now, the overall instability in the region, coupled with cutbacks in various fossil fuel energy sources, is ramping up inflation across the globe.

Unfortunately, conflict isn’t restricted to Europe. Take Asia. Arguably encouraged by Russia’s actions against Ukraine, which the aggressor claims as its own territory rather than a separate country, China’s similarly themed saber-rattling against Taiwan is once again ramping up. Earlier this summer, for example, in response to US Speaker of the House Nancy Pelosi’s visit to Taiwan, China conducted a massive showcase of its military potential—air (including missiles), sea and ground—as well as added cyberattacks against its island neighbor.

Partly to signal support for Taiwan, along with responding to a claimed longstanding lack of respect for intellectual property in China, and more generally in reaction to China’s aggressive aspirations to be seen as a leading player on the world stage, the United States has also recently curtailed shipments of leading-edge technology to its Asian competitor. This includes chips manufactured by companies and foundries in other countries using US-made semiconductor manufacturing equipment. While the intention of this policy is defensible to many in the United States, the business impacts on US-based companies also can’t be overlooked.

And then there are North and South Korea, to the north of Taiwan. Kim Jong-un’s North Korean regime conducted another ballistic missile test just yesterday as I write these words, making more than 40 so far in 2022, many of them with ranges capable of reaching U.S. territories in the Pacific and even the U.S. mainland. Any of them, along with the shorter-range armaments already deployed in North Korea’s military arsenal, could of course also hit South Korea, Japan, and other neighbors. South Korea, in partnership with the United States, responds each time with shows of force of its own. But should these feints transform into full-blown warfare, the impacts on both countries, along with their neighbors and the world at large, would be profound, both technologically and otherwise.

Unclearly legal generative AI

This last one might seem a bit obscure at first glance but stick with me. I’ve been a technology journalist for more than a quarter century now. Although EDN owns the words I generate while in their employment, I’m fine with that because they pay me in exchange! What I’m not fine with is when I see that someone’s “lifted” entire sentences-to-paragraphs from one of my pieces and used them in their writeups without attribution. Even more egregious, of course, is when some other site republishes one of my writeups in its entirety, complete with graphics, even if the thief is “generous” enough to list me as the original author.

What’s this got to do with technology, besides the fact that I generally write about technology? Consider deep learning and other AI techniques. The basic approach, as I’ve covered before, involves first training a network model by exposing it to labeled examples of whatever data (text, words/phrases/other sounds, still and video images, etc.) you want it to understand, akin to a child learning to identify various objects by name. Then, when you later expose that same trained model to other, similar data, it’s able to infer (which is why this step is known as inference) what that new data is from its already-assembled knowledge base.
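The train-then-infer sequence just described can be sketched with a deliberately tiny stand-in for a deep network. In the toy below, a nearest-centroid classifier is “trained” on labeled 2-D feature vectors (the class names, cluster positions, and features are all invented for illustration), and then asked to infer labels for data it has never seen:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Training": build the model's knowledge base from labeled examples,
# akin to a child being shown named objects. Each class gets 50 made-up
# 2-D feature vectors scattered around an (arbitrary) cluster center.
labeled = {
    "cat": rng.normal([2.0, 2.0], 0.5, (50, 2)),
    "dog": rng.normal([6.0, 6.0], 0.5, (50, 2)),
}
# The "model" here is simply the mean (centroid) of each labeled class.
centroids = {name: pts.mean(axis=0) for name, pts in labeled.items()}

# "Inference": classify new, unseen data against the trained model by
# asking which class centroid it lands closest to.
def infer(x):
    return min(centroids, key=lambda name: np.linalg.norm(x - centroids[name]))

print(infer(np.array([2.2, 1.8])))  # lands near the "cat" cluster
print(infer(np.array([5.9, 6.1])))  # lands near the "dog" cluster
```

A real deep network replaces the centroid comparison with millions of learned parameters, but the two-phase shape of the workflow is the same: labeled examples in during training, predictions out during inference.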

At this point, taking the analogy of the child a bit further, you might be wondering whether that same model might also be able to use its “imagination” to create completely new content based on that same training foundation. This technique, known as “generative AI”, is becoming increasingly robust, although its implementation isn’t based on synthetic imagination but on competition—specifically, the generative adversarial network. From Wikipedia’s definition:

A generative adversarial network (GAN) is a class of machine learning frameworks…Two neural networks contest with each other in the form of a zero-sum game, where one agent’s gain is another agent’s loss. Given a training set, this technique learns to generate new data with the same statistics as the training set. For example, a GAN trained on photographs can generate new photographs that look at least superficially authentic to human observers, having many realistic characteristics…The core idea of a GAN is based on the “indirect” training through the discriminator, another neural network that can tell how “realistic” the input seems, which itself is also being updated dynamically. This means that the generator is not trained to minimize the distance to a specific image, but rather to fool the discriminator…GANs are similar to mimicry in evolutionary biology, with an evolutionary arms race between both networks.
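Wikipedia’s description maps directly onto an alternating training loop. The toy sketch below, which is my own illustrative construction rather than anything from a production GAN, pits a two-parameter generator against a logistic-regression discriminator on 1-D data; the distributions, learning rate, and step counts are arbitrary choices, and the gradients are derived by hand instead of via a framework. Note that the generator never sees the real data directly, only the discriminator’s reaction to its fakes:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from a normal distribution the generator must mimic.
REAL_MEAN, REAL_STD = 4.0, 1.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Discriminator D(x) = sigmoid(w*x + c): estimates P(x is real).
w, c = 0.1, 0.0
# Generator G(z) = a*z + b: maps noise z ~ N(0,1) to fake samples.
a, b = 1.0, 0.0

lr, batch, steps = 0.03, 64, 3000
for _ in range(steps):
    real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log D(G(z)), i.e., try to fool the
    # (just-updated) discriminator rather than match any specific sample.
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

samples = a * rng.normal(0.0, 1.0, 10000) + b
print(f"generator mean ~ {samples.mean():.2f} (target {REAL_MEAN})")
```

Even this toy exhibits a known GAN quirk: with such a weak discriminator, the generator tends to match the real data’s mean while collapsing its spread, a miniature version of the “mode collapse” instability that makes full-scale GAN training notoriously finicky.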

Why did GANs end up on my short list of 2023 trends to pay attention to? Consider, first, that they are already being used to create all kinds of (semi-)original content (synthetic media, in Wikipedia parlance) based on descriptive input from the toolset user.

Some of the quagmire created by such technology lies with the potential for GAN-generated content to supplant that created by actual human beings. Another aspect, tying into the “unclearly legal” portion of this section’s header, along with the “(semi-)” qualifier in the previous paragraph, lies in how these networks were created in the first place. To understand what I mean, re-read the model training description earlier in this section, if necessary. Now, consider how GANs are trained.

If you guessed “using real-life, human-created text, images, videos, 3D models, voices, music and other sounds, etc.”…congratulations, you’re right. And therein lies the crux of the issue. Who’s financially and otherwise compensating those original content creators, beginning by acknowledging them by name? If you guessed “nobody”…congratulations, you’re right again.

A synthetic symphony that sounds like Mozart, a play reminiscent of Shakespeare, or an art piece that hearkens to Picasso is one thing…those artists are now deceased, and their works are in the public domain (that said, if someone were to try to pass off a synthetic Picasso as the real thing, there’d still be a problem). But what if someone claimed that their GAN-generated horror story, trained on novels from the (still alive) Stephen King, was the real thing? Scary (pun intended)!

Right now, maybe you’re thinking something along the lines of “well, that’s too bad for those artists, but I’m not a writer, musician, sketcher or painter, sculptor, photographer, videographer, etc., so how does this issue personally affect me?” Be careful not to underestimate either your artistry in the broader sense or technology’s equally broad potential. GANs are already being used to generate large amounts of computer game content, for example. And what if a GAN started writing entire pieces of software (come to think of it, this is already happening)? How would the human programmers out there feel then? Other engineering analogies are equally apt.


There are so many more 2023 forecast topics that I’d still like to weigh in on. But at just over 3,000 words at this point, I’m going to restrain myself and instead wrap up this forecast of the year to come. What did I overemphasize, understate, and completely overlook? Let me know in the comments!


This article was originally published on EDN.

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
