Quantum computers of the future may ultimately outperform their classical counterparts at solving intractable problems in computer science, medicine, business, chemistry, physics, and other fields. But the machines are not there yet: They are riddled with inherent errors, which researchers are actively working to reduce. One way to study these errors is to use classical computers to simulate the quantum systems and verify their accuracy. The catch is that, as quantum machines become increasingly complex, simulating them on traditional computers would take years or longer.
Now, Caltech researchers have invented a new method by which classical computers can measure the error rates of quantum machines without having to fully simulate them. The team describes the method in a paper in the journal Nature.
"In a perfect world, we want to reduce these errors. That's the dream of our field," says Adam Shaw, lead author of the study and a graduate student who works in the laboratory of Manuel Endres, professor of physics at Caltech. "But in the meantime, we need to better understand the errors facing our system, so we can work to mitigate them. That motivated us to come up with a new approach for estimating the success of our system."
In the new study, the team performed experiments using a type of simple quantum computer known as a quantum simulator. Quantum simulators are more limited in scope than current rudimentary quantum computers and are tailored for specific tasks. The group's simulator is made up of individually controlled Rydberg atoms—atoms in highly excited states—which they manipulate using lasers.
One key feature of the simulator, and of all quantum computers, is entanglement—a phenomenon in which certain atoms become connected to each other without actually touching. When quantum computers work on a problem, entanglement is naturally built up in the system, invisibly connecting the atoms. Last year, Endres, Shaw, and colleagues revealed that as entanglement grows, those connections spread out in a chaotic or random fashion, meaning that small perturbations lead to big changes in the same way that a butterfly's flapping wings could theoretically affect global weather patterns.
This increasing complexity is believed to be what gives quantum computers the power to solve certain types of problems much faster than classical computers, such as factoring the large numbers that underpin modern cryptography.
But once the machines reach a certain number of connected atoms, or qubits, they can no longer be simulated using classical computers. "When you get past 30 qubits, things get crazy," Shaw says. "The more qubits and entanglement you have, the more complex the calculations are."
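For a rough sense of scale (a back-of-the-envelope illustration, not a figure from the study): simulating n qubits exactly means storing 2^n complex amplitudes, so the memory a classical computer needs doubles with every added qubit.

```python
# Illustrative only: memory needed to store a full n-qubit state vector,
# assuming one double-precision complex number (16 bytes) per amplitude.
def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (30, 40, 60):
    print(f"{n} qubits: {state_vector_bytes(n) / 2**30:,.0f} GiB")

# 30 qubits:            16 GiB  (fits on a workstation)
# 40 qubits:        16,384 GiB  (a large cluster)
# 60 qubits: 17,179,869,184 GiB (~16 exbibytes -- beyond any existing machine)
```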
The quantum simulator in the new study has 60 qubits, which Shaw says puts it in a regime that is impossible to simulate exactly. "It becomes a catch-22. We want to study a regime that is hard for classical computers to work in, but still rely on those classical computers to tell if our quantum simulator is correct." To meet the challenge, Shaw and colleagues took a new approach, running classical computer simulations that allow for different amounts of entanglement. Shaw likens this to painting with brushes of different sizes.
"Let's say our quantum computer is painting the Mona Lisa as an analogy," he says. "The quantum computer can paint very efficiently and, in theory, perfectly, but it makes errors that smear out the paint in parts of the painting. It's like the quantum computer has shaky hands. To quantify these errors, we want our classical computer to simulate what the quantum computer has done, but our Mona Lisa would be too complex for it. It's as if the classical computers only have giant brushes or rollers and can't capture the finer details.
"Instead, we have many classical computers paint the same thing with progressively finer and finer brushes, and then we squint our eyes and estimate what it would have looked like if they were perfect. Then we use that to compare against the quantum computer and estimate its errors. With many cross-checks, we were able to show this ‘squinting’ is mathematically sound and gives the answer quite accurately."
The researchers estimated that their 60-qubit quantum simulator operates with an error rate of 91 percent (or an accuracy rate of 9 percent). An accuracy of 9 percent may sound low, but it is, in fact, relatively high for the current state of the field. For reference, the 2019 Google experiment, in which the team claimed their quantum computer outperformed classical computers, had an accuracy of 0.3 percent (though it was a different type of system than the one in this study).
Shaw says: "We now have a benchmark for analyzing the errors in quantum computing systems. That means that as we make improvements to the hardware, we can measure how well the improvements worked. Plus, with this new benchmark, we can also measure how much entanglement is involved in a quantum simulation, another metric of its success."
The Nature paper, titled "Benchmarking highly entangled states on a 60-atom analog quantum simulator," was funded by the National Science Foundation (partially via Caltech's Institute for Quantum Information and Matter, or IQIM), the Defense Advanced Research Projects Agency (DARPA), the Army Research Office, the U.S. Department of Energy's Quantum Systems Accelerator, the Troesh postdoctoral fellowship, the German National Academy of Sciences Leopoldina, and Caltech's Walter Burke Institute for Theoretical Physics. Other Caltech authors include former postdocs Joonhee Choi and Pascal Scholl; Ran Finkelstein, Troesh Postdoctoral Scholar Research Associate in Physics; and Andreas Elben, Sherman Fairchild Postdoctoral Scholar Research Associate in Theoretical Physics. Zhuo Chen, Daniel Mark, and Soonwon Choi (BS '12) of MIT are also authors.