Quantum computers and other devices hold incredible promise for enabling new types of sensors and communications, improving simulations of physical and chemical systems, and ushering in new levels of secure cryptography. Even as they dangle such possibilities, quantum advances pose something of a conundrum: Once quantum systems grow big enough, standard computers will no longer be powerful enough to check the accuracy of the quantum devices.
For the last three years and with funding from the Agile Science of Test and Evaluation program within the Air Force Office of Scientific Research (AFOSR), three Caltech professors and their colleague from Stanford University have been addressing the question of how to assess the performance of quantum systems. On April 22, Caltech hosted a Quantum Benchmarking Symposium on campus for experts to discuss progress that has been made and to share their findings—not only with students and colleagues from academia but also with representatives of the quantum industry, including from companies such as Google Quantum AI, Anyon Computing, Q-CTRL, and Quantum Machines.
"It's actually a hard and very interesting problem," says Austin Minnich, professor of mechanical engineering and applied physics, deputy chair of the Division of Engineering and Applied Science at Caltech, and one of the symposium's organizers. "How can you trust the output of a quantum computer if you can't verify it with the computers we have?"
This question has been considered since the beginning of the field of quantum information, and many verification techniques have been developed for quantum computers, whose qubits—their fundamental units of information—can be controlled by implementing operations that researchers specify. But with many other quantum devices, scientists do not have the ability to implement whatever operations they would like. Benchmarking schemes that account for the resources actually available on different quantum devices, Minnich says, are less studied.
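To illustrate the kind of benchmarking technique developed for quantum computers, consider randomized benchmarking, a standard approach in which sequences of gates of increasing length are applied and the probability of recovering the initial state is fit to an exponential decay. The sketch below is purely illustrative and is not the symposium team's method: it uses a toy model in which each gate independently succeeds with some fixed probability, and recovers that per-gate fidelity from the decay of the survival probability.

```python
import numpy as np

# Toy model of the decay-curve idea behind randomized benchmarking:
# each gate succeeds with probability f, so after a sequence of m gates
# the "survival" probability falls off as f**m. Fitting that decay
# recovers the average per-gate fidelity f.

rng = np.random.default_rng(0)
true_fidelity = 0.995                     # assumed per-gate success probability
sequence_lengths = [1, 5, 10, 20, 50, 100]
shots = 2000                              # repetitions per sequence length

survival = []
for m in sequence_lengths:
    # A shot "survives" only if all m gates succeed (toy depolarizing model).
    outcomes = rng.random((shots, m)) < true_fidelity
    survival.append(outcomes.all(axis=1).mean())

# Fit log(survival) = m * log(f): the slope gives the per-gate fidelity.
slope = np.polyfit(sequence_lengths, np.log(survival), 1)[0]
estimated_fidelity = np.exp(slope)
print(f"estimated per-gate fidelity: {estimated_fidelity:.4f}")
```

The key point, relevant to the symposium's theme, is that this protocol presumes the experimenter can apply arbitrary specified gates—precisely the resource that is unavailable in many other quantum devices.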
For example, Andrei Faraon (BS '04), the William L. Valentine Professor of Applied Physics and Electrical Engineering and another of the symposium's organizers, uses rare-earth ions, such as ytterbium, embedded in crystals; these ions "talk" through interactions with nearby atoms in the crystal. Such a system can be used to store information, serving as a type of quantum memory. However, "there are many ways for information to get lost in the crystal," says Faraon, who is also the Fletcher Jones Foundation Director of the Kavli Nanoscience Institute at Caltech. "What we need, in general, is a way to measure the effects of this very complicated environment and then to determine how much it corrupts the operation of the system."
This task is complicated by the fact that the atoms in the crystal cannot be controlled directly. Rather, Faraon and his colleagues use lasers to control the rare-earth ions, which then interact with the crystal atoms. Resource-aware benchmarking schemes developed by the team allow the performance of the overall quantum system to be assessed by controlling only the rare-earth ions.
As discussed at the symposium, both academic and industrial researchers face similar situations with other types of quantum computers, such as those based on neutral atoms or superconducting qubits. The symposium highlighted the common need for resource-aware quantum benchmarking techniques across diverse quantum technologies, as well as the translation of academic research into industry practice.
In addition to Minnich and Faraon, Manuel Endres, professor of physics at Caltech, and Joonhee Choi of Stanford University are the other grantees of AFOSR funding and organizers of the symposium.
Six of the speakers at the Resource-Aware Quantum Benchmarking Symposium that took place at Caltech on April 22, 2026.
Credit: EAS Communications Office/Caltech
