Gil Kalai has been waiting twenty years to be proven wrong.
The Hebrew University mathematician entered quantum computing in 2005 and almost immediately started arguing that practical, fault-tolerant quantum computers could not be built. Not because of engineering constraints that might eventually be solved, but because of deep mathematical limits on what noisy, real-world qubits can do. He has published papers, debated opponents at conferences, and maintained the same core position through two decades of quantum hype cycles.
The bet, in hindsight, looks almost reasonable. The field has spent billions of dollars, perhaps tens of billions, testing his theory. The 2019 Google quantum supremacy demonstration was supposed to be the experiment that settled the question. If Google genuinely achieved quantum supremacy over classical computers, as it claimed, then Kalai was wrong.
Kalai does not think Google achieved quantum supremacy. And the case for his position is stronger than most in the industry acknowledge.
Two predictions, one test
Kalai's argument rests on two falsifiable conjectures. The first involves correlated errors. When qubits become entangled, their errors stop being independent. Instead, noise synchronizes across the system in bursts that overwhelm quantum error correction before the computer can scale to useful sizes. This is not a vague concern about "noise" in general. It is a specific mathematical prediction: entangled qubits exhibit substantial positive correlation in their errors, and this correlation scales with entanglement in a way that makes error correction provably insufficient.
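The mechanism can be seen in a deliberately simple toy model (not Kalai's actual formalism, and with an invented noise model chosen only for illustration): a 3-qubit repetition code, which corrects any single bit flip, loses its advantage entirely once the flips arrive together rather than independently.

```python
import random

random.seed(0)

def sample_flips(p, rho, n=3):
    """Toy noise model: with probability rho, one shared event flips
    all n qubits together; otherwise each qubit flips independently
    with probability p. rho=0 means fully independent errors."""
    if random.random() < rho:
        flipped = random.random() < p
        return [flipped] * n
    return [random.random() < p for _ in range(n)]

def logical_failure(flips):
    # A 3-qubit repetition code corrects any single flip but fails
    # when two or more qubits flip at once (majority vote is wrong).
    return sum(flips) >= 2

def logical_rate(p, rho, trials=200_000):
    fails = sum(logical_failure(sample_flips(p, rho)) for _ in range(trials))
    return fails / trials

p = 0.05
print(f"independent noise: {logical_rate(p, 0.0):.4f}")  # around 3*p^2, far below p
print(f"correlated noise:  {logical_rate(p, 1.0):.4f}")  # around p: no benefit
```

Under independent noise the logical error rate drops to roughly 3p², the suppression that makes error correction worthwhile; under fully correlated noise it stays at p, so the code buys nothing. Kalai's conjecture is, loosely, that real entangled systems sit closer to the second regime than error-correction theory assumes.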
The second prediction concerns what noisy intermediate-scale quantum (NISQ) devices can actually do. Kalai argues that NISQ-era machines sit in a computational complexity class called LDP, for low-degree polynomials. Problems in this class can be simulated efficiently by classical computers. If this is correct, then the supposed advantage of NISQ devices over classical computation is illusory. They are not doing anything that cannot be done better with existing hardware.
Both conjectures are testable. The 2019 Google experiment was the first large-scale test.
The Google experiment and its problems
Google's 53-qubit Sycamore processor performed a specific computational task in approximately 200 seconds, which the company claimed would take a classical supercomputer roughly 10,000 years. The paper appeared in the journal Nature under the title "Quantum supremacy using a programmable superconducting processor."
Kalai, working with statisticians Yosef Rinott and Tomer Shoham, published three papers (2020, 2022, 2023) critiquing the statistical methodology behind this claim. Their core argument: the Google team's analysis relies on an independence assumption between components of the quantum computer that is violated by the very physics of noise-sensitive systems. As Kalai wrote, the assumption of independent errors across qubits in a superconducting processor at near-zero temperature is "striking" given that such systems are, by design, exquisitely sensitive to environmental perturbations.
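Their actual analysis concerns the fidelity estimates in Google's paper, but the stakes of the independence assumption can be illustrated with a toy model (all numbers and the noise model here are invented for illustration). A product-of-component-fidelities estimate is exact when component failures are independent, and can badly mis-state the true whole-system fidelity once failures are correlated:

```python
import random

random.seed(1)

def fidelities(p, rho, n=10, trials=100_000):
    """Toy model: n components each fail with marginal probability p.
    With mixing weight rho, a single shared event makes all components
    fail (or succeed) together; otherwise failures are independent."""
    whole_ok = 0
    comp_ok = [0] * n
    for _ in range(trials):
        if random.random() < rho:
            fails = [random.random() < p] * n
        else:
            fails = [random.random() < p for _ in range(n)]
        whole_ok += not any(fails)          # whole run succeeded
        for i, failed in enumerate(fails):
            comp_ok[i] += not failed        # per-component success counts
    product = 1.0
    for ok in comp_ok:
        product *= ok / trials              # independence-style product estimate
    return whole_ok / trials, product

true_f, est_f = fidelities(p=0.1, rho=0.0)
print(f"independent: true {true_f:.3f} vs product estimate {est_f:.3f}")
true_f, est_f = fidelities(p=0.1, rho=0.5)
print(f"correlated:  true {true_f:.3f} vs product estimate {est_f:.3f}")
```

With independent failures the two numbers agree; with correlated failures they diverge sharply, because the product formula silently encodes the independence assumption. The direction and size of the bias depend on the noise model, which is exactly why Rinott, Shoham, and Kalai argue the assumption must be justified rather than presumed.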
Other research groups reached similar conclusions. IBM published papers arguing the 10,000-year classical estimate was significantly overstated. Teams led by Pan and Zhang, and by Gao and collaborators, published analyses suggesting Google's results did not constitute genuine supremacy. The consensus in the peer-reviewed literature is that Google's claim was substantially undermined, though Google has not retracted it.
This matters for the timeline. If Google had genuinely demonstrated quantum supremacy over classical computers in 2019, it would have refuted Kalai's conjectures. Instead, the most rigorous reanalyses of the result have vindicated his statistical skepticism.
The 2012 refutation attempt and its aftermath
Kalai did not escape challenge. In 2012, Aram Harrow and Steven Flammia published a preprint claiming to refute one of his central conjectures. If correct, this would have been a decisive blow to his framework.
Kalai responded in 2022, identifying what he argued were fundamental flaws in their argument. The exchange remains unresolved in the sense that the 2012 preprint was not published in a peer-reviewed journal and the contested points were never fully adjudicated. But Kalai's detailed rebuttal exists and has not been comprehensively answered.
Scott Aaronson, the quantum computing researcher who maintains the blog Shtetl-Optimized, has documented cases where Kalai's predictions about quantum milestones were wrong. He has also acknowledged that Kalai's core arguments about noise and complexity have not been definitively refuted. In a December 2025 post, Aaronson wrote that with every experimental milestone, the little voice asking whether Kalai might be right has grown quieter, until now it can barely be heard. That is not the language of a man who thinks the skeptic is gaining ground.
Kalai debated Matthias Christandl at the Learned Society of the Czech Republic in May 2025 on the question of whether true quantum computing has been achieved. His position has not changed. He does not dispute that qubits can be created. He argues that practical, useful quantum computers at meaningful scale cannot be built due to inherent properties of how noise behaves in entangled systems.
The intellectual honesty here is notable. Kalai has said explicitly that he is glad humanity spent billions testing his theory. He framed the investment as a valuable experiment regardless of outcome. That is an unusual position for someone whose professional reputation is on the line.
The money is still being spent
Despite the equivocal results from the 2019 test, investment in quantum computing continues at a substantial pace. Google, IBM, Microsoft, and IonQ are among the companies building larger devices. Error correction demonstrations have shown improvements. The timeline for practical quantum advantage keeps being revised, usually further into the future.
Kalai's framework does not say quantum computers are impossible in principle. It says the path to fault-tolerant quantum computing faces a specific barrier: as you entangle more qubits to scale up, the correlated error problem gets worse faster than error correction can compensate. The engineering might solve some problems. It cannot, in his view, solve this one.
Whether he is right is a question that will be answered by machines, not arguments. The field is building them at scale. The next few years of error correction results will be the most direct test yet. If systems begin achieving the fidelity improvements needed for fault-tolerant operation at the scale their proponents claim, Kalai's theory will need revision. If they plateau, his predictions will look increasingly prescient.
The billions spent so far are not the experiment. They are the setup. The result will come from whatever comes next.