Power supply management in quantum computers

Article By : Steve Taranovich

How will we power these ‘beasts’ that need as much power as a small city?

In 2016, the first quantum computers could be programmed from a high-level user interface1 to run arbitrary quantum algorithms. These design architectures were small in scale, with just a handful of qubits each.

Some of the big players, such as IBM, Google, and Microsoft, as well as several start-ups, share the ultimate goal of creating a larger-scale, commercially viable device in the near future.

Quantum computers1 will most likely be an integral part of a heterogeneous, multi-core computer in which a classical processor interacts with several accelerators, such as FPGAs, GPUs, and a quantum co-processor. Figure 1 shows the different layers of the quantum computer system stack.

Figure 1 A quantum computer system stack (Image courtesy of Reference 2)

Delft University of Technology in the Netherlands is working toward programming a silicon quantum chip, which may bring us closer to a viable quantum computer.

How will we power these quantum ‘beasts’?

Let’s begin with the power supply needs of supercomputers, then move on to Exascale computers, which are available now, and finally to quantum computer power needs, since a full-blown quantum computer does not yet exist. You will soon see the challenges power management designers will face in going from Exascale machines to quantum computers with their unique demands; powering Exascale sets the stage for powering quantum computers.


Supercomputers are typically used to model and simulate complex, dynamic systems that would be too costly, impractical, or impossible to demonstrate physically. These computers help scientists explore the evolution of our universe, biological systems, weather forecasting, and even renewable energy. Let’s look at the newest and fastest sub-group of supercomputers: Exascale supercomputers.

Exascale supercomputers

Exascale supercomputers are the next generation of supercomputers. These systems enable improved simulation of complex processes in medicine, biotechnology, advanced manufacturing, energy, material design, and the physics of the universe. This class of supercomputers can perform tasks for these applications faster and at higher resolution.

Right now the most energy-intensive supercomputer in the world is the Tianhe-2 in Guangzhou, China. This computer uses 18 megawatts of power, and the Exascale Tianhe-3 will demand even more. For a frame of reference, a typical hydroelectric dam in the US produces approximately 36 megawatts of power.

Cray and AMD are developing ‘Frontier’ for the US government. It is expected to be the world’s fastest Exascale supercomputer when it goes online in 2021, with 1.5 exaflops of raw processing power, that is, 1.5 quintillion calculations per second (Figure 2).
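As a quick sanity check on those units (a hypothetical back-of-the-envelope script, not anything from Cray or AMD):

```python
# "Exa" is the SI prefix for 10^18, so 1.5 exaflops is
# 1.5 quintillion floating-point calculations per second.
EXA = 10**18

frontier_exaflops = 1.5
ops_per_second = frontier_exaflops * EXA

print(f"{ops_per_second:.1e} calculations per second")  # 1.5e+18
```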

Figure 2 The Frontier Exascale supercomputer (Image courtesy DOE/Oak Ridge National Laboratory)

These kinds of computers will enable significant medical advancements that can improve the lives of our planet’s billions of people. They may be able to create medical treatment specifically tailored for individual needs by quickly accessing and processing an individual’s historical, genetic, and environmental variations. The development of precision medicines to attack a cancerous tumor may be possible due to the incredible processing speed of these computers.

Vicor working with PEZY Computing

I knew that Vicor was working with PEZY Computing, so I contacted my friend, Robert Gendron, P.E. and Corporate Vice President, Product Marketing & Technical Resources at Vicor Corporation to discuss how to best power a super high speed, power-hungry processor.

Gendron commented that what came to his mind when I mentioned quantum computers was the ‘gold chandelier’ design (Figure 3) that IBM used in its 50-qubit quantum computer in 2017. Back then, and still now, these kinds of computers are very challenging to use; at the time, the quantum state could only be preserved for 90 microseconds. Quantum computing at IBM is making progress, but this architecture is still in its early stages.

In order for quantum computers to take on more complex tasks, they’re going to need thousands, maybe millions, of qubits. Designers are moving toward that goal as you read this.

Gendron said that we are closer to reaching Exascale computers, and the Japanese company ExaScaler/PEZY Computing has the ZettaScaler-2.0 configurable liquid immersion cooling system for its own processor boards as well as for Intel boards and other companies’ products (see my EDN article ‘Submerge your power supply, and other options’).

Figure 3 IBM’s quantum computer design (Image courtesy of IBM)

Gendron continued that, from what he has heard and seen in the industry, the US government will probably develop a true Exascale computer before anyone else. The challenging issue Vicor sees in trying to create more computational power is latency: communicating from the processor to memory or to other processors. He said you are limited by the size of the processor, that is, by wafer yield and reticle size (around 30 × 35 mm or so), so an individual processor can only be so big before you need two processors next to each other. There is a density play here, said Gendron, either because of the silicon limitations or the latency of communicating with memory and other devices.

From what I see in the industry, these processors are running at microwave speeds, and 10 GHz processors will come to market in a couple of years. That means more power and more heat, so liquid-nitrogen immersion cooling is not far off. Plus, as processor speeds increase, we may see microwave gold stripline or microstrip being used on these PC boards (Figure 4).

Figure 4 Comparing two architectures for the first quantum computers. (a) Superconducting qubits are shown connected by microwave resonators (Image courtesy of IBM Research), and (b) The linear chain of trapped ions connected by laser-mediated interactions. Insets: Qubit connectivity graphs, star-shaped (a) and fully connected (b). (Images courtesy of Reference 1)

Reference 1 presents results from running a selection of algorithms on the two quantum computers, including the first realization of the Hidden Shift algorithm. The strengths of the superconducting system lie in faster gate speeds and a solid-state platform, while the ion trap system features superior qubits and reconfigurable connections. The performance of these systems is seen to reflect the level of connectivity in the base hardware, indicating that quantum computer applications and hardware should be “co-designed”1.

The US Department of Energy is looking for a 1,000-petaflop computer with about 20 MW of power needs by 2023. Right now Canada has an ExaScaler computer that consumes 1.35 MW.

So I asked Gendron what Vicor considers as the best power architecture to power these voraciously hungry processors (I kind of already knew the answer; see my EDN article Data center power in 2019).

Gendron pointed me to the Green 500 site, where every six months computers are ranked and benchmarked on computational efficiency; they look at flops per watt. For the first time, in June 2019, all 500 systems delivered a petaflop or more on the High Performance Linpack (HPL) benchmark. The June 2019 list has the DGX SaturnV Volta, with an Nvidia DGX-1 Volta36 processor, in the number 1 spot at 15.113 GFlops/watt efficiency. However, that system ranked only 469 in the Top 500, which ranks supercomputers by computational power.
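The Green 500 metric itself is simple division; a minimal sketch (the function and the sample power figure below are illustrative assumptions, not Green 500 data):

```python
def gflops_per_watt(linpack_petaflops: float, power_kw: float) -> float:
    """Green 500-style efficiency: sustained FLOPS divided by power draw."""
    flops = linpack_petaflops * 1e15   # petaflops -> flops
    watts = power_kw * 1e3             # kilowatts -> watts
    return flops / watts / 1e9         # flops/W -> gigaflops/W

# A 1-petaflop system drawing about 66 kW would score roughly 15 GFlops/watt,
# in the neighborhood of the June 2019 number-one entry.
print(round(gflops_per_watt(1.0, 66.2), 3))
```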

The work that Vicor did with PEZY, in the ‘Gyoukou’ system, ranked 4 in the Green 500 and 5 in the Top 500 back in 2017. Vicor reported that PEZY supercomputers leverage 48V Factorized Power, a high-efficiency, high-density power distribution architecture. PEZY’s CPUs are co-packaged with Vicor’s Power-on-Package (“PoP”) Modular Current Multipliers (“MCMs”), which enable efficient, direct 48V to sub-1V current multiplication at the XPU.

PEZY placed the Vicor 48V-input “PoP” voltage regulators on the CPU substrate to decrease the power loss in the ‘last inch,’ eliminating the majority of the board losses and delivering more power to the processor. The Vicor “PoP” system can deliver up to 1,000 A of peak current!

So, for example, if about 1 V at 400 A is delivered to the processor across a 400 µΩ board trace, squaring the 400 A and multiplying by 400 µΩ gives an I²R loss of 64 W against the 400 W delivered to the load; eliminating that trace loss improves efficiency by almost 16% (Figure 5).
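That arithmetic can be checked directly; a minimal sketch using the numbers quoted above:

```python
# I^2 * R loss across the board trace feeding the processor
current_a = 400.0        # load current, amps
trace_ohms = 400e-6      # trace resistance, 400 micro-ohms
rail_v = 1.0             # processor rail voltage

trace_loss_w = current_a ** 2 * trace_ohms   # 400^2 * 400e-6 = 64 W
load_power_w = rail_v * current_a            # 400 W delivered to the load
loss_fraction = trace_loss_w / load_power_w  # 0.16, i.e. ~16%

print(f"trace loss: {trace_loss_w:.0f} W ({loss_fraction:.0%} of load power)")
```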

 Figure 5 The location of the Vicor “PoP” packages close to the processor (Red square) (Image courtesy of Vicor)

Gendron said that future challenges are the sub-1 V power supply levels to the processor, on the order of 0.5 V or 0.6 V. The challenge here is to build a power delivery system that handles a load step of 400 A while staying within the window of a 0.5 V (500 mV) rail: 10% is 50 mV, so you need less than a 20 mV step, or really less than 5 mV if you want to stay within compliance. This is very difficult to achieve.
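In output-impedance terms (a simplified resistive model, not Vicor’s design math), those numbers imply an extraordinarily low target impedance for the power delivery network:

```python
rail_v = 0.5            # sub-1 V processor rail
load_step_a = 400.0     # worst-case load step, amps

window_v = 0.10 * rail_v   # 10% regulation window = 50 mV total

# The rule of thumb quoted above: hold the transient deviation during
# the 400 A step to under 20 mV, ideally under 5 mV.
z_max_ohms = 0.020 / load_step_a     # 50 micro-ohms
z_target_ohms = 0.005 / load_step_a  # 12.5 micro-ohms

print(f"max PDN impedance: {z_max_ohms * 1e6:.1f} uOhm "
      f"(target {z_target_ohms * 1e6:.1f} uOhm)")
```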

Quantum computers7

Quantum computers are in a class by themselves. Their architecture is far different from even Exascale supercomputers. Think of how machine learning can be accelerated with quantum computers6.

Airbus is using computers to help calculate the most fuel-efficient ascent and descent paths for aircraft. Quantum computers will do this faster and better (too bad Boeing’s Max 8 did not have exhaustive simulations run on a quantum computer).

The newest and fastest quantum ICs will have an array of qubits running at extremely low temperatures, colder than the void of outer space, down to tens of millikelvin (10 mK is −459.652°F). Power designers may be able to leverage these temperatures to cool their power architectures. Cooling the ICs down that far will require a great deal of power. The quantum processors will also require quite a bit of power, although not as much as the cooling, and efficiently powering the processors will surely be a daunting task in itself. Think of the huge I²R losses from the power supply to the processors along the circuit-board traces that Exascale computers suffer, but with far greater power demands from the processors.
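The Fahrenheit figure above follows from the standard temperature conversion; a one-liner to verify it:

```python
def mk_to_f(millikelvin: float) -> float:
    """Convert millikelvin to degrees Fahrenheit."""
    kelvin = millikelvin / 1000.0
    return kelvin * 9.0 / 5.0 - 459.67

print(round(mk_to_f(10.0), 3))  # -459.652
```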

Powering the quantum computer

A next-gen Vicor “PoP” architecture will have to be developed for quantum processors when this new class of quantum computers arrives in earnest.

Not only will we need to power the processing section of a quantum computer, but also its challenging cooling system, which must bring the system down to temperatures near absolute zero. The power needed for this level of cooling will dwarf the power needs of the processor5.

Powering the cooling system

Google’s Quantum AI Lab has D-Wave’s latest quantum computer. D-Wave probably has one of the machines closest to a commercial quantum computer right now (Figure 6).

Figure 6 The D-Wave 2000Q system architecture (Image courtesy of D-Wave)

A standard data center can normally accommodate the D-Wave 2000Q quantum computer.

D-Wave’s refrigeration system, with many layers of shielding, sits in an internal high vacuum at a temperature just slightly above absolute zero. This shielding isolates the computer from external magnetic fields, vibration, and RF signals that could cause errors in this highly sensitive system.

The D-Wave system consumes less than 25 kW of power, most of which goes towards operating the cooling and front-end servers.

The D-Wave quantum processing unit (QPU) is built from a lattice of tiny loops of the metal niobium, each of which is one qubit (shown in red in Figure 7). Below 9.2 kelvin, niobium becomes a superconductor and exhibits quantum mechanical effects. When in a quantum state, current flows in both directions simultaneously, which means the qubit is in superposition, that is, in both a 0 and a 1 state at the same time. At the end of the problem-solving process, this superposition collapses into one of the two classical states, 0 or 1.
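That collapse into 0 or 1 can be illustrated with a toy model (a sketch of the measurement statistics only; it does not capture D-Wave’s annealing physics). A qubit in an equal superposition has squared amplitudes of 1/2 for each state, so repeated measurements split roughly evenly:

```python
import random

def measure(p_zero: float = 0.5) -> int:
    """Collapse a superposition: return 0 or 1 per the squared-amplitude odds."""
    return 0 if random.random() < p_zero else 1

random.seed(0)
counts = {0: 0, 1: 0}
for _ in range(10_000):
    counts[measure()] += 1

# Roughly 5,000 of each, matching the equal superposition's probabilities.
print(counts)
```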

Figure 7 Quantum processing unit (Image courtesy of D-Wave)

There will be much more powerful quantum computers coming in the near future that will demand a great deal more power, especially to the processors.

Steve Taranovich is a senior technical editor at EDN with 45 years of experience in the electronics industry.


  1. “Comparing the Architectures of the First Programmable Quantum Computers,” Norbert M. Linke, Dmitri Maslov, Martin Roetteler, Shantanu Debnath, Caroline Figgatt, Kevin A. Landsman, Kenneth Wright, Christopher Monroe, 2017 Conference on Lasers and Electro-Optics Europe & European Quantum Electronics Conference (CLEO/Europe-EQEC)
  2. “Towards a Scalable Quantum Computer,” C. G. Almudever, N. Khammassi, L. Hutin, M. Vinet, M. Babaie, F. Sebastiano, E. Charbon, K. Bertels, 2018 13th International Conference on Design & Technology of Integrated Systems in Nanoscale Era (DTIS)
  3. “Computer Systems Architecture Vs Quantum Computer,” Dr. Kuntam Babu Rao, International Conference on Intelligent Computing and Control Systems, ICICCS 2017
  4. “IBM Research explains how quantum computing works and why it matters,” VentureBeat
  5. “What is quantum computing?,” IBM
  6. “Machine learning, meet quantum computing,” MIT Technology Review, Nov. 16, 2018
  7. “Explainer: What is a quantum computer?,” MIT Technology Review, January 29, 2019

Check out all the stories in this quantum computing Special Project:

The basics of quantum computing—A tutorial

Advances in quantum computing are happening at breakneck speed. For engineers that want to be a part of its rapid evolution, there are free, hands-on ways to learn the basics and actually experience the technology.

What’s next in quantum computing?

Quantum computing is on a steady upward trajectory, but the field is in flux with new technologies starting to come online.

Will we need a ‘supercomputer’ on our desk?

Cloud computing and big data analysis will require the development of ever more performant computing systems. We can imagine what these high-performance computers might look like as early as 2035.
