In 1965, Gordon Moore noticed a pattern. A decade later it still didn't quite have a name, but transistor density was doubling every two years, and nobody in the room knew what they'd eventually build.
That is the analogy Hermann Hauser, the ARM co-founder and ParityQC investor, reached for when Wolfgang Lechner and his Innsbruck team published a result last Tuesday: a quantum algorithm run on 52 qubits of an IBM Heron r3 processor, nearly double the previous mark of 27, set on trapped-ion hardware roughly two years ago. Hauser compared the doubling to the early days of classical computing, when the same trajectory was visible but unnamed. The difference: when transistors were doubling, nobody was measuring success at one attempt in a hundred.
The fidelity of the result is F ≈ 10^(-2), which is quantum-speak for working roughly once in every hundred attempts. That's not a typing error. The paper presents the Quantum Fourier Transform, a foundational quantum algorithm used in factoring and simulation tasks, as a benchmark: a standardized test circuit for measuring what a given architecture can do at scale. The 1% fidelity is the actual data point. The qubit count is what ended up in the headline.
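For the concrete-minded, here is what the benchmark circuit family looks like. This is a minimal sketch in Qiskit with a toy qubit count; both are illustration choices on my part, and the paper's exact construction and benchmarking protocol may differ.

```python
# A textbook Quantum Fourier Transform, built from Qiskit's standard
# library. Illustrative only: not the paper's setup.
from qiskit.circuit.library import QFT

n = 5  # toy size; the reported result ran on 52 qubits
qft = QFT(num_qubits=n)
print(qft.decompose().count_ops())
# Hadamards, controlled-phase rotations, and final reordering swaps.
# Gate count and depth grow with n, which is why fidelity falls at scale.
```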
What the headline skipped over is the architecture that made 52 qubits possible in the first place. Parity Twine, the ParityQC compilation approach, eliminates SWAP gates entirely. Conventional quantum compilers use SWAP gates to work around a basic hardware constraint: when two qubits need to interact but aren't physically adjacent on the chip, the compiler swaps their positions until they are. It's a necessary kludge, and an expensive one: every SWAP means more hardware operations, more accumulated noise, more fidelity lost. Parity Twine redesigns the circuit layout and the problem encoding together, so the required interactions are natively supported by how the qubits are already connected. No swaps. Fewer gates. Shallower circuits.
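You can watch that overhead appear with stock tooling. The sketch below uses Qiskit's ordinary transpiler, not ParityQC's compiler, to route a QFT onto a nearest-neighbour line of qubits and count what the router inserts; the tooling and the toy size are my assumptions.

```python
# Stock Qiskit routing, NOT Parity Twine: this shows the SWAP overhead
# a conventional compiler pays, the cost the paper claims to remove.
from qiskit import transpile
from qiskit.circuit.library import QFT
from qiskit.transpiler import CouplingMap

n = 8  # toy size, chosen for the sketch
# do_swaps=False drops the QFT's own reordering swaps, so every 'swap'
# visible after routing is pure connectivity overhead.
qft = QFT(num_qubits=n, do_swaps=False).decompose()
print("before routing:", qft.count_ops())

line = CouplingMap.from_line(n)  # nearest-neighbour chain of qubits
routed = transpile(qft, coupling_map=line, optimization_level=1)
print("after routing: ", routed.count_ops())
# Each inserted SWAP decomposes into three CNOTs on hardware, so the
# two-qubit gate count, and the accumulated noise, climbs fast.
```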
The scaling claim in the paper is super-exponential: O(exp(N²)) improvement over swap-based methods as qubit count grows. That's a specific mathematical claim, not marketing language. At N=52, the advantage is already the difference between a result and a non-result. At N=100 or N=200, the gap becomes structural.
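To put that form in perspective, here is a toy order-of-magnitude calculation on the O(exp(N²)) expression itself. This is my arithmetic on the paper's stated scaling form, not a model of fidelity or runtime.

```python
# Order-of-magnitude arithmetic on the paper's stated O(exp(N^2)) form.
# Only the exponent is evaluated; constants and real error models are
# ignored, so treat the numbers as shape, not prediction.
import math

for n in (27, 52, 100):
    digits = n * n * math.log10(math.e)  # log10(exp(n^2))
    print(f"N={n:3d}: exp(N^2) is roughly 10^{digits:.0f}")
```

At N=52 the exponent already runs to over a thousand decimal digits; by N=100 it has roughly quadrupled, which is what "the gap becomes structural" cashes out to.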
"We show that the scaling can be improved further by including iSWAP gates in the instruction set," the paper notes, almost as an aside. That's the next chapter — and it's already in the current version.
For ParityQC, the commercial logic is straightforward. The company licenses its architecture and software stack to hardware partners; IBM is one of them. The Heron r3 result demonstrates that Parity Twine works on actual IBM silicon, not just in theory. For IBM, hosting a record QFT on its processor is good optics. For ParityQC, it's evidence that its compilation layer is a platform play rather than a point solution.
The skeptics' case is also straightforward: 1% fidelity is not useful fidelity for any application currently on the table. The authors know this. The press release acknowledged it briefly before pivoting to talk about industrial applicability and Moore's Law — the quantum industry's preferred bridge from "interesting result" to "someone should care." The honest version: a European architecture company took IBM's best available hardware and compiled a benchmark circuit on more qubits than the hardware's previous users thought to try. That's real. It's not a quantum advantage claim, and the paper doesn't make one.
The paper is on arXiv. Parity Twine builds on architecture work at the University of Innsbruck that was spun out as ParityQC in 2019, with Wolfgang Lechner as co-CEO.