The emergence of artificial intelligence (AI) raises major ethical questions for societies. Quantum computing is likely to do the same.
When applying algorithms to sensitive topics – say, HR or justice data – one requirement is being able to explain how you arrived at the result. Black boxes that recommend one option or another without further explanation are out of the question.
While we will still have to wait several years before operational quantum computing actually arrives, the prospect of quantum “supremacy” raises the same kind of question.
It would indeed be impossible to simulate algorithms run on quantum hardware worthy of the name on a classical computer. Given the astronomical number of parameters a quantum computer can take in to search for an answer, how could a human dig into them to determine whether or not a computation makes sense?
“If a quantum computer can effectively solve a problem, can it convince an observer that its solution is correct?” asks Marc Carrel-Billiard, head of global technology innovation at Accenture.
Relevant use cases already envisaged for quantum computing
Operational quantum computing does not exist yet, but we know how it will behave, and researchers understand better and better in which areas it can be put to appropriate use.
Heike Riel, IBM’s Head of Science and Technology, and President of IBM Research Quantum Europe, explains, “It’s not just about technological beauty. We strive to generate value. It’s a journey: developing technology, finding the most appropriate next-stage applications, demonstrating that value and then developing hardware and programs.”
The journey has already begun, for example at E.ON, which has joined the IBM Quantum Network. The transition to greener sources, such as solar and wind energy, has multiplied the types of energy that the power grid must manage. Quantum computing could help improve these grids if businesses and many homes become electricity producers in the future through their own PV systems or electric vehicles, via initiatives like E.ON's vehicle-to-grid (V2G) project.
In this project, electric car batteries are connected to the grid as flexible storage media. Thus, it becomes possible to balance fluctuations in the production of renewable sources. Quantum computing will drive these processes more efficiently and effectively.
“All these sources have different characteristics, and forecasts are becoming more and more complex,” recalls Heike Riel. “You have to optimize the system in real time. However, the complexity grows exponentially with the number of parameters, and it eventually becomes a problem that is difficult to solve using traditional computing.”
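The exponential growth Heike Riel describes can be seen in even a toy version of the grid-balancing problem. The sketch below (entirely hypothetical: the generator outputs, the demand figure, and the `best_dispatch` helper are illustrative, not any real E.ON or IBM workload) brute-forces which on/off combination of generators best matches demand – a search space that doubles with every unit added.

```python
from itertools import product

def best_dispatch(outputs, demand):
    """Toy unit commitment: try every on/off combination of generators
    and keep the one whose total output is closest to demand."""
    best_combo, best_err = None, float("inf")
    for combo in product((0, 1), repeat=len(outputs)):
        total = sum(o * c for o, c in zip(outputs, combo))
        err = abs(total - demand)
        if err < best_err:
            best_combo, best_err = combo, err
    return best_combo, best_err

# The search space is 2**n: 10 units -> 1,024 candidates,
# 40 units -> about 1.1e12 - already out of reach for brute force.
combo, err = best_dispatch([5, 3, 8, 2, 7, 4, 6, 1, 9, 2], demand=20)
```

With ten hypothetical generators the exhaustive search is instant; at grid scale, with thousands of flexible producers and consumers, the same brute-force approach becomes exactly the kind of problem Riel says classical computing struggles with.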
Another example of an application comes from science: Bob Coecke, theoretical physicist and chief scientist at Cambridge Quantum Computing, asserts that the behavior of atoms and molecules – governed by the laws of quantum mechanics – must be modelable on a quantum computer governed by those same laws.
“Given the complexity of quantum mechanics, simulating physical matter [at the molecular level] becomes more and more expensive,” he explains. In fact, in terms of storage alone, it would be impossible for a conventional computer to cope with these kinds of problems.
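The storage argument is easy to make concrete. A classical simulator that tracks the full quantum state of n qubits must store 2^n complex amplitudes; the short sketch below (a standard back-of-the-envelope calculation, not taken from the article) shows how quickly that exhausts any conceivable memory.

```python
def statevector_bytes(n_qubits: int) -> int:
    # A full n-qubit state vector holds 2**n complex amplitudes;
    # at 16 bytes per double-precision complex number:
    return (2 ** n_qubits) * 16

# 30 qubits already need roughly 17 GB of RAM...
gb_30 = statevector_bytes(30) / 1e9
# ...and 50 qubits roughly 18 petabytes - beyond any classical machine.
pb_50 = statevector_bytes(50) / 1e15
```

This is why exact classical simulation stops being an option well before a quantum processor reaches even a hundred qubits.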
Simulating new materials and modeling particle behavior are two of the greatest conceivable use cases for quantum computers. In August 2021, Nicholas Rubin and Charles Neill, two research scientists at Google AI Quantum, wrote a blog post about an experiment aimed at running complex chemical simulations using the Hartree-Fock method of computational chemistry.
“Accurate numerical prediction [of] chemical processes, starting from the laws of quantum mechanics that govern them, can open new pathways in chemistry and help improve a wide range of industries,” the researchers write.
Reliability of results and quantum noise
However, these promises come with their share of challenges. The two researchers at Google, for example, found that their algorithms are still hampered by the high error rate of the first quantum computers.
Drawing a parallel with the ability of classical neural networks to tolerate defects in data, the pair explain that in their experiment, the Variational Quantum Eigensolver (VQE) algorithm attempts to optimize the parameters of the quantum circuit so as to reduce the “noise” that interferes with the algorithm.
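The idea behind VQE can be sketched at toy scale: a classical optimizer tunes the parameters of a small quantum circuit to minimize the measured energy. The example below (a deliberately minimal one-qubit illustration of the variational principle, not Google's actual Hartree-Fock experiment) uses an Ry-rotation ansatz and a Pauli-Z Hamiltonian, whose true ground-state energy is -1.

```python
import numpy as np

# Toy VQE: one qubit, ansatz Ry(theta)|0>, Hamiltonian H = Z.
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def ansatz(theta):
    # Ry(theta)|0> = [cos(theta/2), sin(theta/2)]
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def energy(theta):
    psi = ansatz(theta)
    return np.real(psi.conj() @ Z @ psi)   # <psi|H|psi> = cos(theta)

# Classical outer loop: a crude parameter scan stands in for the
# gradient-based optimizers used in real VQE implementations.
thetas = np.linspace(0, 2 * np.pi, 1000)
best_theta = min(thetas, key=energy)
# energy(best_theta) approaches the true ground-state energy of -1.
```

On real hardware the energy evaluation is noisy, and it is precisely this classical optimization loop that lets VQE absorb some of that noise, in the spirit of the neural-network analogy above.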
IBM is working on the same problem. As long as there are only a few qubits, it remains possible simply to check the result of a quantum algorithm run on a quantum computer by comparing it with the result of that same algorithm on a classical machine that simulates the behavior of a quantum computer.
This method is only possible as long as the number of qubits remains low enough, says IBM’s Heike Riel. The key point here is to understand how the “noise” of many qubits affects the system by producing false results.
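At small qubit counts, such a classical cross-check is a short exercise. The sketch below (a standard textbook example, chosen for illustration rather than drawn from IBM's own verification work) simulates a two-qubit Bell-pair circuit exactly with a state vector, so its output probabilities can be compared shot-for-shot against real hardware.

```python
import numpy as np

# Classically simulate a tiny circuit: H on qubit 0, then CNOT,
# which prepares the Bell state (|00> + |11>) / sqrt(2).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4); state[0] = 1.0          # start in |00>
state = np.kron(H, I) @ state                # Hadamard on the first qubit
state = CNOT @ state                         # entangle the pair
probs = np.abs(state) ** 2                   # expected: 1/2 on |00> and |11>
```

A real device's measurement statistics can then be compared against `probs` – but since the state vector doubles with every added qubit, this kind of exact reference disappears exactly when, as Riel notes, it would be most needed.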
Today, IBM is continuing along its roadmap with its 127-qubit system and wants to “provide evidence that error correction can work,” says Heike Riel. “We are working to verify the results.”
Mark Mattingley-Scott, managing director for Europe at manufacturer Quantum Brilliance, raises another challenge: interpretability.
“It is one of the paradoxes of quantum computing. When we get to the point where it is useful — when a quantum algorithm can perform calculations at a speed and accuracy that would be impossible with a classical computer — it becomes impossible to directly verify the accuracy of the results obtained,” he summarizes.
“We could validate the process in shortened versions of the same problem, which we do daily with classical algorithms, but there would be no ‘control’ method as such.”
But quantum computing is, in essence, non-deterministic. Mark Mattingley-Scott therefore insists that the results obtained are based on probabilities. “A quantum algorithm works by using a quantum mechanism that constructively reinforces the ‘correct’ response and destructively suppresses the ‘wrong’ response,” he explains. “But this construction remains a matter of probability. So there is always a certain amount of uncertainty. And using a classical computer to validate a quantum computer is only possible at the methodological level, not at the level of the data itself.”
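In practice, this probabilistic character is handled by repetition: the algorithm is run many times ("shots"), and the amplified answer emerges as the most frequent outcome. The sketch below is purely illustrative – `noisy_quantum_run`, its 80% success probability, and the answer values are all hypothetical stand-ins for one shot of a real quantum algorithm.

```python
import random
from collections import Counter

def noisy_quantum_run(p_correct=0.8, correct=5, wrong=(1, 2, 3)):
    # Hypothetical stand-in for one shot of a quantum algorithm:
    # interference has amplified the 'correct' answer, so it appears
    # with probability p_correct; otherwise a 'wrong' answer leaks out.
    if random.random() < p_correct:
        return correct
    return random.choice(wrong)

random.seed(0)                               # reproducible demo
shots = [noisy_quantum_run() for _ in range(1000)]
answer, count = Counter(shots).most_common(1)[0]
# Repetition turns a modest per-shot probability into high confidence
# in the majority answer - but never into a deterministic guarantee.
```

This is Mattingley-Scott's point in miniature: repetition gives statistical confidence in the answer, but each individual shot remains uncertain, and nothing in the sampling itself certifies that the amplified answer is the right one.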
For his part, Bob Coecke, of the specialist Cambridge Quantum Computing, believes that the principle of compositionality and category theory can help us understand what is happening inside a quantum computer.
The Belgian researcher laid out this idea in a book he co-wrote with Aleks Kissinger (“Picturing Quantum Processes”). Broadly speaking, the book looks at how to break down large quantum problems into smaller components. According to Bob Coecke, these “small blocks” are easier to understand and to verify.
In a similar way, Marc Carrel-Billiard’s team at Accenture is working out how to map specific problems onto subsets of mathematical problems. These “sub-problems” are then coded using the software development kits (SDKs) and libraries of many quantum platforms. By testing the software on different quantum hardware architectures, it becomes theoretically possible to determine whether they produce consistent results.
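The underlying idea – gaining confidence by solving the same sub-problem with independent implementations and checking that they agree – is familiar from classical computing. The sketch below illustrates it on an ordinary linear system (a deliberately classical analogy; `gauss_solve` and the matrices are illustrative, not Accenture's tooling): a hand-rolled solver is cross-checked against NumPy's, just as the same quantum sub-problem would be run on several backends.

```python
import numpy as np

def gauss_solve(A, b):
    """Independent Gaussian-elimination solver, used only to cross-check
    another implementation of the same sub-problem."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for i in range(n):
        p = i + np.argmax(np.abs(A[i:, i]))          # partial pivoting
        A[[i, p]] = A[[p, i]]
        b[[i, p]] = b[[p, i]]
        for j in range(i + 1, n):
            f = A[j, i] / A[i, i]
            A[j] -= f * A[i]
            b[j] -= f * b[i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                   # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
# Two independent 'backends' for the same sub-problem should agree.
consistent = np.allclose(gauss_solve(A, b), np.linalg.solve(A, b))
```

Agreement across independent implementations never proves correctness, but, as with running a quantum sub-problem on several hardware architectures, disagreement reliably exposes a fault.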
In some cases, verification can also be performed “in vivo”. In chemistry, for example, Michael Biercuk, CEO and founder of Q-CTRL, explains: “For a molecular structure or chemical dynamics computed on a quantum computer, it may not be possible to validate the simulation computation itself. On the other hand, we can perform a real chemistry experiment [or a comparative analysis with known molecules] to validate the results.”
To understand or not to understand, that is the question
Quantum computing will also remain one method among others. “If you have a complex optimization problem to solve, it doesn’t matter how or what type of computer you use as long as you get a result in the fastest and most efficient way possible,” confirms IBM’s Heike Riel.
From an IBM perspective, a complex computational problem often has several distinct parts. Some of them will be handled using quantum computing, others using classical computing.
And even in the first case, an understanding of quantum mechanics will not necessarily be required. Once the basics are in place, “you’ll need a model developer who doesn’t need to understand quantum computing in detail, but who knows how to describe the problem and use the best option to solve it,” predicts IBM's Heike Riel. “The developer of the model does not have to bother with advanced quantum knowledge,” she insists.
But interpretability and reliability will remain imperatives that are closely intertwined with these technologies.