GPU Acceleration Compresses QEC Research Cycles From Days to Hours
Alice & Bob announced a 9.25x speedup in quantum error correction decoding last week, achieved by moving the computation from CPUs to NVIDIA GPUs.

image from FLUX 2.0 Pro
The number is real. The implication some coverage drew — that quantum error correction is getting faster — is not quite right. What's accelerating is the classical computing that supports quantum error correction, not the quantum hardware itself.
The benchmark: 100,000 simulated shots of syndrome decoding that took 18 hours and 2 minutes on an AMD Ryzen 9 9950X were cut to 1 hour and 57 minutes on an NVIDIA GH200 Grace Hopper system running the CUDA-Q platform. Decoding accuracy and logical error performance were identical; the computation was simply parallelized across GPU cores. The work was presented at NVIDIA GTC 2026.
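As a sanity check on the arithmetic (a sketch from the reported times, not part of the announcement):

```python
# Reported benchmark times from the Alice & Bob / NVIDIA announcement.
cpu_minutes = 18 * 60 + 2   # 18 h 2 min on the AMD Ryzen 9 9950X
gpu_minutes = 1 * 60 + 57   # 1 h 57 min on the NVIDIA GH200

speedup = cpu_minutes / gpu_minutes
# Average wall time per simulated shot on the GPU, across 100,000 shots.
per_shot_ms = gpu_minutes * 60 * 1000 / 100_000

print(f"speedup: {speedup:.2f}x")              # ~9.25x
print(f"GPU time per shot: {per_shot_ms:.1f} ms")  # ~70.2 ms
```

The headline figure is just the ratio of wall-clock times; note it says nothing about per-shot latency, which matters later for real-time decoding.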
This distinction matters because quantum error correction has two distinct computational demands. The quantum hardware executes gates and maintains coherence. The classical computer attached to it continuously processes syndrome data — measurement outcomes that tell you whether errors occurred without destroying the quantum information — and computes the corrections. That classical decoding step is a bottleneck in current QEC research. Simulating a fault-tolerant architecture at scale, testing code variants, estimating failure rates: all of it requires running classical decoding algorithms that can take hours or days on CPU clusters. GPU acceleration compresses that timeline.
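To make "syndrome decoding" concrete, here is a toy example: a 3-qubit bit-flip repetition code with a lookup-table decoder. This is an illustration of the general technique, not Alice & Bob's decoder, and real QEC decoders handle vastly larger codes and noisy measurements.

```python
# Toy syndrome decoder for the 3-qubit bit-flip repetition code.
# Two parity checks (qubits 0-1 and 1-2) produce a 2-bit syndrome that
# identifies a single flipped qubit without reading out the encoded data.

# Lookup table: syndrome -> index of the qubit to correct (None = no error).
SYNDROME_TABLE = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

def measure_syndrome(qubits):
    """Parities of neighboring data qubits, as stabilizer measurements report them."""
    return (qubits[0] ^ qubits[1], qubits[1] ^ qubits[2])

def decode_and_correct(qubits):
    """Look up the syndrome and flip the qubit it points to."""
    flip = SYNDROME_TABLE[measure_syndrome(qubits)]
    if flip is not None:
        qubits[flip] ^= 1
    return qubits

# Any single bit-flip is corrected back to a codeword; codewords pass through.
assert decode_and_correct([0, 1, 0]) == [0, 0, 0]
assert decode_and_correct([1, 1, 1]) == [1, 1, 1]
```

A lookup table works here because there are only four syndromes. At the scale of fault-tolerant architectures, decoding becomes a serious matching or inference problem over millions of syndrome bits, which is exactly the workload the GPU acceleration targets.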
Alice & Bob's specific focus is Elevator Codes, a concatenation architecture the company described in a January preprint. The pitch: by using a small number of supplementary logical ancilla qubits that move up and down through repetition codes during computation, bit-flip errors can be suppressed to a degree that would otherwise require dramatically more hardware. The claim is a 10,000x reduction in logical error rate at roughly 3x the qubit overhead. That's the target for future Alice & Bob processors. The CUDA-Q work is the classical simulation infrastructure to design and validate those codes.
The connection to Horizon Quantum is worth noting. Horizon's CEO Joe Fitzsimons argued on the SPAC Insider podcast that the field has reached an inflection point where error correction can run fast enough to suppress errors faster than they accumulate. Horizon is betting on cat qubits and hardware control software. Alice & Bob is betting on cat qubits and better error correction codes. They're not competitors — Alice & Bob builds the hardware that Horizon might run software on — but they're solving different parts of the same problem. Fitzsimons cited August 2024 as the first demonstration of error suppression below physical qubit error rates. Alice & Bob's Elevator Codes preprint appeared in January 2026. The gap between those two points is where the actual engineering lives.
The CUDA-Q integration is also notable as a signal of NVIDIA's deepening role in quantum computing infrastructure. CUDA-Q is NVIDIA's quantum-classical simulation platform. The June 2025 integration of CUDA-Q into Dynamiqs, Alice & Bob's QPU simulation library, was the precursor. The GTC 2026 presentation is the follow-through. NVIDIA is building the classical computing layer that quantum companies depend on for research and development — a position that generates revenue regardless of which quantum hardware approach wins.
The open question: real-time decoding. The current speedup applies to offline simulation — researchers iterating on code designs before deploying to actual hardware. Real-time decoding, where the classical processor must keep up with the quantum hardware during actual computation, has different latency constraints that GPU parallelism may not fully address. Alice & Bob says future work will investigate real-time decoding and system-level calibration. That's the harder problem.
Alice & Bob's Elevator Codes announcement is here.
The arXiv preprint on Elevator Codes is here.

