The radiation knew: how Google solved half its quantum error problem and missed the other half
Google built the best superconducting quantum chip in the world. It still gets knocked off course by cosmic rays.
That's the finding from Google Quantum AI's latest paper (Kurilovich et al., arXiv March 2026, accepted in Physical Review X): a gap-engineered processor like the Willow chip solves one half of the radiation problem it was designed to solve — and misses the other half entirely.
Here's what the paper actually says.
Superconducting quantum computers have always been vulnerable to ionizing radiation. Cosmic rays slam into the chip's silicon substrate, shatter Cooper pairs, and scatter quasiparticles across the device. Those quasiparticles poison qubit coherence. The effect has been documented for years, and it sets a floor on logical error rates that no amount of qubit scaling can push through.
Google's answer was gap engineering: redesign the Josephson junction so that quasiparticles literally cannot tunnel through it. The gap difference across the junction (δΔ/h = 12 GHz, for the record) creates a potential barrier. Quasiparticles too cold to overcome it bounce off. The T1 error bursts that follow radiation impacts become shorter and rarer: on the Willow processor, burst duration fell from the order of a second to roughly 25 milliseconds, then further still.
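To get a feel for why the barrier works, a back-of-envelope Boltzmann estimate is enough. The 12 GHz gap difference is the paper's number; the 100 mK quasiparticle effective temperature below is an assumed illustrative value, not a figure from the paper, and the exponential suppression model is a sketch rather than the paper's actual calculation:

```python
import math

# Physical constants (SI, CODATA exact values)
H = 6.62607015e-34   # Planck constant, J*s
KB = 1.380649e-23    # Boltzmann constant, J/K

def barrier_temperature_K(gap_diff_ghz: float) -> float:
    """Express the junction gap difference (given in GHz) as an
    equivalent temperature scale, delta_Delta / k_B."""
    return H * gap_diff_ghz * 1e9 / KB

def qp_fraction_above_barrier(gap_diff_ghz: float, t_eff_K: float) -> float:
    """Boltzmann estimate of the fraction of quasiparticles energetic
    enough to clear the barrier at an effective temperature t_eff_K."""
    return math.exp(-barrier_temperature_K(gap_diff_ghz) / t_eff_K)

if __name__ == "__main__":
    # delta_Delta/h = 12 GHz corresponds to roughly 0.58 K; at an
    # ASSUMED 100 mK effective temperature, only a fraction of a
    # percent of quasiparticles can cross.
    print(f"barrier scale: {barrier_temperature_K(12):.3f} K")
    print(f"fraction over barrier: {qp_fraction_above_barrier(12, 0.100):.2e}")
```

The point of the sketch is the exponential: a barrier several times the quasiparticle energy scale suppresses tunneling by orders of magnitude, which is why the T1 bursts shrink so dramatically.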
But the logical error rate floor didn't disappear. Something was still wrong.
Kurilovich et al. built a fast diagnostic protocol — repeating Ramsey, spin-echo, and T1 measurements every 5 microseconds across a 60-qubit array — to watch what actually happens during an impact event. What they found: radiation doesn't only inject tunneling quasiparticles. It also shifts qubit frequencies.
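The principle behind using a Ramsey sequence as a fast frequency probe can be sketched in a few lines. The function names and the 100 ns delay below are illustrative choices, not the paper's protocol parameters; the sketch assumes an ideal, noise-free fringe, which maps a detuning to a measurable phase:

```python
import math

def ramsey_excited_prob(detuning_hz: float, delay_s: float) -> float:
    """Ideal Ramsey fringe: excited-state probability after a
    pi/2 -- wait -- pi/2 sequence, given a frequency detuning."""
    return 0.5 * (1.0 + math.cos(2 * math.pi * detuning_hz * delay_s))

def detuning_from_prob(prob: float, delay_s: float) -> float:
    """Invert the fringe to estimate the detuning magnitude.
    Valid only while the accumulated phase stays below pi (no
    aliasing), which is why short delays are needed to track
    MHz-scale shifts."""
    return math.acos(2 * prob - 1) / (2 * math.pi * delay_s)

if __name__ == "__main__":
    # A hypothetical 1 MHz radiation-induced shift probed with a
    # 100 ns Ramsey delay round-trips cleanly through the fringe.
    p = ramsey_excited_prob(1e6, 100e-9)
    print(f"excited-state probability: {p:.4f}")
    print(f"recovered detuning: {detuning_from_prob(p, 100e-9) / 1e6:.2f} MHz")
```

Repeating such a measurement every few microseconds across an array is what lets the protocol watch a frequency shift appear and relax in real time.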
The shifts are systematic and large. During an impact, a qubit's operating frequency drops by as much as 3 MHz and stays depressed for roughly a millisecond. During that millisecond, a detuning of even 1 MHz winds through a full 2π of spurious phase per QEC cycle (~1 microsecond), and the maximum shift triples that. The correction circuit sees this as an error and attempts to correct it. The correction is wrong, because the problem isn't noise; it's that the qubit's actual frequency has drifted relative to the control system's reference.
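The phase arithmetic is worth making explicit: a frequency shift Δf held for time t winds up phase φ = 2π·Δf·t, so a 1 MHz detuning is exactly one full turn per 1 μs cycle, and the 3 MHz maximum is three. A minimal worked version:

```python
import math

def spurious_phase_rad(shift_hz: float, cycle_s: float) -> float:
    """Phase the qubit winds up relative to the control frame when
    its frequency is shifted by shift_hz for one cycle of cycle_s."""
    return 2 * math.pi * shift_hz * cycle_s

if __name__ == "__main__":
    cycle = 1e-6  # ~1 microsecond QEC cycle, per the article
    for shift in (1e6, 3e6):
        turns = spurious_phase_rad(shift, cycle) / (2 * math.pi)
        print(f"{shift / 1e6:.0f} MHz shift -> {turns:.0f} full turn(s) per cycle")
    # Over the ~1 ms the shift persists, ~1000 cycles elapse, each
    # scrambling the qubit's phase by at least a full turn.
```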
Gap engineering does nothing to stop this. The quasiparticles causing the frequency shift aren't tunneling through the junction; they're interacting with the qubit's Josephson energy through a different mechanism entirely. The barrier that was supposed to save the qubit has no effect on this pathway.
The result: on the Willow processor, a fully correlated error burst — hitting many qubits simultaneously with a quasi-static frequency shift — fires roughly once every 71 seconds. That's not frequent enough to dominate error rates under all conditions, but it is frequent enough to set a hard floor on logical error rates that a fault-tolerant computer would need to operate below.
"The shifts originate from QP-qubit interactions in the JJ region," the paper notes, with admirable understatement.
Google does have a partial answer. A modified repetition code circuit — one that accounts for the duration of dynamical decoupling and adds an echo pulse between Hadamard gates — reduces the circuit's sensitivity to frequency shifts. The paper shows this working. But it is a circuit-level fix, not a hardware fix, and the paper itself acknowledges that radiation events still produce detectable errors in the modified code, just with a different signature.
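The reason an echo pulse helps is that it refocuses phase from any detuning that is static over the idle window, and a millisecond-scale radiation shift looks static to a microsecond-scale circuit. A minimal numerical sketch of that refocusing (the function and parameters are illustrative, not Google's actual circuit):

```python
import math

def net_phase_rad(detuning_fn, duration_s, echo=False, steps=1000):
    """Numerically integrate the phase from a time-dependent detuning
    over an idle window.  With echo=True, a pi pulse at the midpoint
    negates the phase accumulated in the first half, so a quasi-static
    detuning cancels almost exactly."""
    dt = duration_s / steps
    phase = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt
        sign = -1.0 if (echo and t < duration_s / 2) else 1.0
        phase += sign * 2 * math.pi * detuning_fn(t) * dt
    return phase

if __name__ == "__main__":
    # Hypothetical 1 MHz radiation-induced shift across a 2 us window.
    static = lambda t: 1e6
    print(f"no echo:   {net_phase_rad(static, 2e-6):.4f} rad")
    print(f"with echo: {net_phase_rad(static, 2e-6, echo=True):.4f} rad")
```

Swap in a slowly decaying detuning and the echo still cancels most of the phase, which is why the fix targets quasi-static shifts specifically; it does nothing for noise that fluctuates faster than the echo spacing.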
The broader implication is uncomfortable. The error correction strategies the superconducting quantum computing field has spent years developing — surface codes, repetition codes, gap-engineered hardware — were designed around a specific error mechanism. That mechanism is real, and gap engineering does suppress it. But the radiation problem turned out to have a second chapter, and the field has been writing it without realizing it.
Independent confirmation comes from a separate study in Nature Communications, which correlated spatiotemporal qubit relaxation events with scintillating detector signals in a different superconducting qubit device at a different latitude. The same underlying physics — ionizing radiation, quasiparticle generation, correlated errors — appears in both.
Phys.org coverage is what put the result back in the news, but the finding has been available since March. What makes it worth writing about is the reframing: this isn't a story about a new problem. It's a story about how an old problem was only half solved, and what that implies for anyone counting on superconducting-qubit scaling to deliver fault-tolerant quantum computing on a specific timeline.
The practical stakes are concrete. If you are building a superconducting quantum system today, your error correction circuit needs to account for quasi-static frequency drift during radiation events, not just random bit-flip noise. If you are funding a quantum computing roadmap that assumes superconducting qubits will reach logical error rates below 10⁻¹⁰ or so, the 71-second burst cadence on an unshielded processor is a data point you need in your model. And if you are an IBM, a Rigetti, or anyone else running superconducting hardware, the open question of whether your systems show the same radiation-induced phase error signature is one you probably want answered before a competitor answers it for you.
The paper will tell you the frequency shifts are negative, reach 3 MHz, last a millisecond, and come from quasiparticle-qubit interactions near the Josephson junction. What it does not tell you is whether the fix is a circuit modification, a shielding requirement, or an architectural rethink. That is the next chapter — and it is not written yet.