When QuEra Computing published its 2026 roadmap last year, the headline number was 100:1. The company said it would need roughly 10,000 physical qubits to produce 100 reliable logical qubits — a ratio that put practical quantum computing a decade away for most investors. This week, QuEra published a paper with Harvard and MIT showing that the ratio can be driven below 2:1: just under two physical qubits to encode one reliable logical qubit. The paper is careful about what it claims. The press coverage has not been.
The result, posted to arXiv on April 17 by authors at QuEra, Harvard, and MIT, demonstrates a family of quantum error-correcting codes called qLDPC (quantum low-density parity-check) codes, simulated for neutral-atom hardware, according to Quantum Computing Report. The numbers are real: a [[1152,580]] code encodes 580 logical qubits into 1,152 physical ones, and a [[2304,1156]] code encodes 1,156 logical qubits into 2,304 physical ones, the paper shows. Both achieve roughly 2:1 ratios, with projected logical error rates approaching one per ten trillion operations under circuit-level noise.
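The arithmetic behind the 2:1 claim is simple to check. In the standard [[n,k]] notation, n physical qubits encode k logical qubits, so the overhead is n divided by k. A quick sketch (illustrative only, using the two code sizes reported above):

```python
# Illustrative check of the physical-to-logical overhead for the two
# [[n, k]] codes named in the paper: [[1152, 580]] and [[2304, 1156]].
# In [[n, k]] notation, n physical qubits encode k logical qubits,
# so the overhead ratio is simply n / k.
codes = [(1152, 580), (2304, 1156)]

for n, k in codes:
    overhead = n / k
    print(f"[[{n},{k}]]: {overhead:.2f} physical qubits per logical qubit")
```

Both ratios come out just under 2, which is where the "below 2:1" headline figure comes from.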
The technique works because neutral-atom hardware can physically rearrange the atoms that hold qubits, which is exactly what qLDPC codes require to extract error syndromes efficiently, QuEra explains in its blog post. Superconducting qubits, fixed on a chip, cannot do this easily. Their limited connectivity constrains which error-correcting codes they can run. The neutral-atom architecture avoids this bottleneck by design. QuEra's own blog post frames the result honestly: "This paper is a quantum memory result. We show that logical information can be stored with extremely low error rates. Further developments will be needed to establish all ingredients for full fault-tolerant computation."
That last sentence ("full fault-tolerant computation") is doing a lot of work the press coverage has ignored. Storing a logical qubit reliably is not the same as performing a logical gate on it. Error correction can protect information at rest; executing a sequence of gates on that information requires additional, harder engineering. The paper itself says "further developments will be needed." None of the wire coverage leading with the 2:1 ratio mentions this.
The 2:1 result is genuinely impressive. It is also not what QuEra's roadmap sells. The company has publicly committed — in a press release quoted by postquantum.com — to 100 logical qubits built from over 10,000 physical qubits by 2026, a 100:1 ratio. The paper delivers a 2:1 ratio. That is a fiftyfold gap between the roadmap's stated overhead and the simulated, memory-only capability demonstrated here, and it belongs in any story about this result.
For context: the dominant error-correcting code in quantum computing today is the surface code, which most architectures use because it is relatively simple to implement. It also typically requires hundreds to thousands of physical qubits per logical qubit, QuEra's blog notes. A 2:1 ratio would be a step change — but only if it generalizes from the quantum memory regime to full logical computation, only if it holds at larger scale, and only if the neutral-atom platform can operate the thousands of physical qubits required for real algorithms.
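To make the overhead comparison concrete, here is a back-of-envelope calculation (my own illustration, not from the paper): the physical-qubit budget for 100 logical qubits under the roadmap's 100:1 ratio, a high-end surface-code estimate, and the paper's 2:1 memory-regime figure.

```python
# Back-of-envelope physical-qubit budgets for 100 logical qubits.
# The 100:1 figure matches QuEra's 2026 roadmap; the ~1000:1 figure is
# the high end of the "hundreds to thousands" surface-code range cited
# in QuEra's blog; 2:1 is the paper's simulated quantum-memory result.
logical = 100
scenarios = {
    "surface code, roadmap ratio (100:1)": 100,
    "surface code, high end (~1000:1)": 1000,
    "qLDPC memory result (~2:1)": 2,
}
for name, ratio in scenarios.items():
    print(f"{name}: {logical * ratio:,} physical qubits")
```

The spread — 10,000 versus 200 physical qubits for the same 100 logical qubits — is why the result matters if (and only if) it survives the caveats above.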
The authors — Hengyun Zhou and Nishad Maskara of MIT, Chen Zhao and Casey Duckering of QuEra, and Andi Gu of Harvard — are careful in the paper. The reported error rate of 1.3×10^-13 per logical qubit per round comes from a circuit-level noise model with a physical error rate of p = 0.1%, not from actual hardware. The door to practical fault-tolerant quantum computing has cracked open. It has not been walked through.