Quantum computers exploit the laws of physics, in particular those of quantum mechanics, the theory that describes subatomic particles and, in practice, tells us how nature really works. Their fundamental unit is the quantum bit, or qubit, linked to the state of a particle or an atom, whose peculiarities allow calculations to be carried out much faster. In this series of articles, we will analyze the foundations of a quantum computer, which will help us choose one and, why not, understand its design techniques. Stay tuned!
History of a Quantum Computer
In the classical computational model, each bit is represented by a zero or a one (binary system); in quantum computing, a qubit can be 0, 1, or zero and one simultaneously. This is possible thanks to the superposition of quantum states, which enables parallel calculations (instead of one at a time), exponentially multiplying power and speed even for extremely complex problems and reducing processing times from years to minutes.
The spark of quantum computing was the work of Richard Feynman. In 1981, at MIT, he pointed out that classical computers cannot efficiently simulate the evolution of quantum systems, and he proposed a basic model for a quantum computer. With this, he outlined the possibility of exponentially outperforming classical computers. However, it took more than 10 years before an algorithm was created that changed the vision of quantum computing: Shor's algorithm.
In 1994, Peter Shor developed an algorithm to factor large integers exponentially faster than the best classical algorithm running on traditional computers. The latter would take millions of years to factor 300-digit numbers. The prospect of solving such puzzles in hours instead of millions of years using quantum computers steered technology in a new direction.
In 1996, Lov Grover invented a quantum database search algorithm that can search an unsorted database quadratically faster than any classical computer. In 1997, the first small quantum computer was built, followed in 1998 by a 2-qubit machine, but the field really took off only when the Canadian company D-Wave revealed its 28-qubit quantum computer in 2007. About twenty years later, in 2017, IBM presented the first commercially usable quantum computer, taking the race to another level.
There are public quantum computers available to anyone who wants to program them. IBM, Rigetti, Google, and IonQ all provide public access to real quantum computing hardware through open-source tools. Today, engineers are also working on quantum communication systems, an important step toward building a quantum version of the Internet (figure 1).
The concept of the quantum was introduced in the context of studies on black-body radiation, i.e., the radiation of ideal bodies capable of completely absorbing incident radiation. Planck was looking for a physical model that could justify this phenomenon, and he hypothesized that the interaction between radiation and matter occurred through the transfer of finite amounts of energy, the "quanta". Albert Einstein then resorted to the concept Planck had introduced to explain the photoelectric effect, the phenomenon whereby a metal surface hit by electromagnetic radiation of an appropriate frequency emits electrons.
Since photons are discrete, it is not possible to use a classical deterministic theory, only a probabilistic and statistical one. Quantum mechanics provides the probabilities of measuring a given value, which can be interpreted as follows: given infinitely many identical systems, and making the same measurement on all of them, the distribution of the obtained values is given by the squared modulus of the wave function that describes the system.
In a quantum computer, the fundamental unit of information is the quantum bit, or qubit. To explain this new concept, we make use of a mathematical notation known as Dirac notation. The state of a classical bit is described by the values 0 and 1; similarly, for qubits, the basis vectors |0⟩ and |1⟩ are used. The difference between bits and qubits is that a qubit can also be found in other states. While a classical bit corresponds to two precise physical states, in a qubit it is not possible to measure the quantum state precisely; we can only associate a probability with each outcome. Consider the state function given by the following formula:

|ψ⟩ = α|0⟩ + β|1⟩
where α and β are complex coefficients, and |0⟩ and |1⟩ are the basis vectors. We can associate a probability equal to |α|² with measuring 0 and equal to |β|² with measuring 1; α and β are therefore probability amplitudes.
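The behavior just described can be sketched in a few lines of plain Python (an illustrative toy, not a real quantum library): a measurement samples 0 with probability |α|² and 1 with probability |β|².

```python
import random

# Toy qubit: a pair of amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. Here |alpha|^2 = 0.36, |beta|^2 = 0.64.
alpha, beta = 3 / 5, 4 / 5
assert abs(abs(alpha) ** 2 + abs(beta) ** 2 - 1) < 1e-9

def measure(alpha, beta):
    """Sample 0 with probability |alpha|^2, otherwise 1."""
    return 0 if random.random() < abs(alpha) ** 2 else 1

counts = {0: 0, 1: 0}
for _ in range(10_000):
    counts[measure(alpha, beta)] += 1
print(counts)  # roughly 36% zeros and 64% ones
```

Repeating the measurement many times recovers the probabilities, exactly as in the "infinitely many identical systems" interpretation above.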
Consider a register composed of 3 bits. With it, we can represent 8 different values, the binary representations of the numbers from 0 to 7, one at a time. A quantum register composed of 3 qubits, instead, can contain all 8 numbers at the same time thanks to a coherent superposition of states. This property of the qubit allows for high computing power: we can manipulate many qubits at once and then observe the result, but in reality, a measurement yields only one state at a time. Another problem is how to manipulate these qubits without disturbing their state. To address it, we can go back to Shannon's information theory and add error-correction systems.
In a traditional computer, information is processed by logic gates; similarly, we can speak of quantum gates, which manipulate one or more qubits. The computational power of quantum computers is of high interest because it would allow solving complex scientific calculations, but it could also weaken the algorithms of current cryptography, which are often based precisely on the mathematical difficulty of factoring.
Quantum computers rely on three concepts. The first is "quantum superposition", the idea at the base of Schrödinger's famous cat, simultaneously alive and dead. Unlike classic bits, which, as we said, can only have two states (one or zero), qubits can be a combination of both. The Google machine, for example, has 53 qubits, which together can represent 2^53, nearly ten million billion, possible superposition states.
The second is "entanglement", which binds quantum particles together through time and space: the state of one qubit becomes strictly correlated with the state of the next, something with no counterpart in standard computers, where each bit is independent. The third is the probability amplitude that we introduced previously.
Qubits in a quantum processor are strongly entangled. Mathematical operations on superposed and entangled qubits can act simultaneously, to a greater or lesser extent, on all qubits in a single processing step.
The quantum computer places each qubit in a state and then entangles it with nearby ones. Once this is done, the rules of quantum physics let the states and connections of the qubits evolve over time. In the end, the qubits are measured to obtain a response. The main job is to find the right answer among billions of wrong ones.
In classical physics, probabilities must be expressed as positive numbers: we say, for example, that there is a 30% probability of rain. Quantum mechanics uses a related concept called "amplitudes", which can also be negative. We need to make sure that the amplitudes representing the wrong answers cancel each other out, while those representing the right ones reinforce. In this way, developers can approach the correct solution with an acceptable approximation.
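This cancellation of amplitudes can be seen concretely with the Hadamard gate, a standard single-qubit gate used here purely as an illustrative example: applying it twice to |0⟩ makes the |1⟩ amplitude receive contributions +1/2 and −1/2 that cancel exactly.

```python
import math

# The Hadamard gate as a 2x2 matrix; the minus sign is what
# makes amplitudes (unlike probabilities) able to cancel.
s = 1 / math.sqrt(2)
H = [[s, s],
     [s, -s]]

def apply(gate, state):
    """Multiply a 2x2 gate by a (alpha, beta) amplitude pair."""
    a, b = state
    return (gate[0][0] * a + gate[0][1] * b,
            gate[1][0] * a + gate[1][1] * b)

state = (1.0, 0.0)       # start in |0>
state = apply(H, state)  # equal superposition: (0.707..., 0.707...)
state = apply(H, state)  # the |1> amplitude cancels: back to (1, 0)
print(state)
```

The intermediate superposition is real, but interference steers all the amplitude back onto the "right answer" |0⟩.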
Superposition and entanglement are two of the key concepts of quantum mechanics and contribute to the great computational capacity of quantum computers. The principle of superposition states that an electron immersed in a magnetic field can have its spin aligned with the field (in this case, the electron is said to be in a spin-up state) or opposite to the field (a spin-down state). By the laws of quantum mechanics, a particle can also be in a superposed state and behave as if it were both spin-up and spin-down. Applied to quantum computing, the principle of superposition establishes that the qubit can assume the two states of the "classic" bit simultaneously, holding "0" and "1" at the same time (figure 2).
In entanglement, also called quantum correlation, particles that have interacted in the past keep a connection between them (provided that they remain in a completely isolated system). In this way, knowing the spin of one particle, it is possible to automatically know the spin of the second: if the first is spin-up, the second will be spin-down, independently of the distance that separates them. In quantum computing, this allows transferring information from one end of the system to the other (and, in theory, from one end of the world to another) in a practically instantaneous way; not by chance, we speak of quantum teleportation.
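The spin-up/spin-down correlation described above can be imitated with a toy state vector for the Bell state (|01⟩ + |10⟩)/√2, a standard maximally entangled two-qubit state used here as an assumed example:

```python
import random

# Two entangled qubits as 4 amplitudes over |00>, |01>, |10>, |11>.
# Only |01> and |10> carry amplitude, so the two measured bits
# always disagree, like the spin-up/spin-down pair in the text.
s = 1 / 2 ** 0.5
bell = [0.0, s, s, 0.0]

def measure_pair(state):
    """Sample a basis state, then split it into the two qubit values."""
    r, acc = random.random(), 0.0
    for k, a in enumerate(state):
        acc += abs(a) ** 2
        if r < acc:
            return k >> 1, k & 1
    return 1, 0

for _ in range(5):
    q0, q1 = measure_pair(bell)
    print(q0, q1)  # always opposite: (0, 1) or (1, 0)
```

Each individual outcome is random, but the correlation between the two qubits is perfect, which is the resource quantum teleportation exploits.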
The rules that are the basis of quantum computation differ greatly from the classical ones:
- No-cloning: quantum information cannot be copied with absolute fidelity, and therefore cannot be read with absolute fidelity.
- Quantum information can instead be transferred with absolute fidelity, provided that the original is destroyed in the process.
- Every measurement taken on a quantum system destroys most of the information, leaving it in a basis state.
- Even if, in some cases, it is possible to know exactly in which basis state the system will be found after a measurement, most of the time we will have only probabilistic predictions.
- Some observables cannot have simultaneously precisely defined values for the Heisenberg uncertainty principle: the position and the velocity of an object cannot both be measured exactly, at the same time, even in theory.
- Quantum information can be, and usually is, encoded by non-local correlations between different parts of a physical system. In practice, entanglement is used.
A useful visualization of a qubit can be obtained by means of a geometric interpretation that associates the states of a qubit with the points on the surface of a sphere of unit radius. The south pole of the sphere corresponds to |1⟩ and the north pole to |0⟩. This sphere is known as the Bloch sphere and is represented in figure 3. Many operations on a single qubit can be described within this sphere.
There is a one-to-one correspondence between a generic state of a qubit:

|ψ⟩ = α|0⟩ + β|1⟩

and a point on the unit sphere:

(x, y, z) = (sin θ cos ϕ, sin θ sin ϕ, cos θ)

where θ and ϕ are real numbers (the spherical coordinates of the point).
In a qubit, the values α and β are complex numbers such that |α|² + |β|² = 1. Writing α and β in polar coordinates, we can express the state function as:

|ψ⟩ = r_α e^(iφ_α)|0⟩ + r_β e^(iφ_β)|1⟩

Developing in polar coordinates, and discarding the global phase factor e^(iφ_α), which has no observable effect, we obtain:

|ψ⟩ = cos(θ/2)|0⟩ + e^(iϕ) sin(θ/2)|1⟩

where ϕ = φ_β − φ_α.
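As a quick check of the Bloch-sphere parametrization |ψ⟩ = cos(θ/2)|0⟩ + e^(iϕ) sin(θ/2)|1⟩ (the standard form, assumed here), a short Python sketch maps the angles (θ, ϕ) to amplitudes and verifies the poles:

```python
import cmath
import math

def bloch_to_amplitudes(theta, phi):
    """Map Bloch-sphere angles to the qubit amplitudes (alpha, beta)."""
    alpha = math.cos(theta / 2)
    beta = cmath.exp(1j * phi) * math.sin(theta / 2)
    return alpha, beta

# North pole (theta = 0) is |0>; south pole (theta = pi) is |1>.
a, b = bloch_to_amplitudes(0.0, 0.0)
print(abs(a), abs(b))                 # 1 and 0: the |0> state
a, b = bloch_to_amplitudes(math.pi, 0.0)
print(abs(a), abs(b))                 # ~0 and 1: the |1> state

# Any (theta, phi) gives a normalized state: |alpha|^2 + |beta|^2 = 1.
a, b = bloch_to_amplitudes(1.23, 4.56)
assert abs(abs(a) ** 2 + abs(b) ** 2 - 1) < 1e-9
```

Normalization holds automatically because cos²(θ/2) + sin²(θ/2) = 1, which is exactly why two real angles suffice to describe a qubit state up to a global phase.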
Since bit strings of arbitrary length can be encoded in the number θ, the classical information a qubit may contain would seem infinite. However, the only way to extract the information contained in a qubit is through measurement.
The description of a qubit as a vector in a two-dimensional complex space has a correspondent in the real world. In particular, any physical system with at least two discrete and sufficiently separated energy levels is an appropriate candidate to represent a qubit, the basis of the quantum computer. The three most common approaches are those based on the two polarizations of a photon, the alignment of nuclear spin in a uniform magnetic field, and two energy levels of an electron orbiting a single atom.