In short

Every scientific field has an era before its tools worked and an era after. For quantum computing, the dividing line is the moment a logical qubit — a group of physical qubits running an error-correcting code — first got better as you added more physical hardware. That moment was 9 December 2024, when Google Willow demonstrated a single surface-code logical qubit at distances d = 3, 5, 7 with logical error rate dropping by a factor \Lambda \approx 2.9 at each step. In 2024-2025, Quantinuum with Microsoft demonstrated four logical qubits on a colour code with fault-tolerant Clifford gates. These are the first two real beachheads. The 2026-2030 window is when the field scales: IBM's 200-logical-qubit Starling target for 2029, Quantinuum's scaling of H-series ion traps with logical-algorithm demonstrations, Google's successor to Willow aiming for multi-logical-qubit machines. The 2030-2045 window is when useful applications arrive: small-molecule chemistry first (10-20 logical qubits, 10^4-10^6 gates), lattice-gauge-theory toy simulations, then medium-molecule chemistry by 2035, and credible Shor's algorithm against RSA-2048 around 2040-2045, requiring \sim 2 \times 10^7 physical qubits. Mosca's theorem — if the time you need your data kept secret plus the time to migrate cryptography exceeds the time until quantum computers break your crypto, you have a problem — says this is the right era to migrate to post-quantum cryptography, even though the threat is fifteen years away. You are reading this chapter in the dawn of the logical-qubit era; your working career will span it.

Every scientific field has a before and an after. Chemistry before and after Mendeleev's periodic table. Biology before and after Darwin and Watson-Crick. Computing before and after the transistor. In each case, the moment is not the discovery but the first time the new tool actually worked — the first stable periodic arrangement, the first correct phylogenetic tree, the first running silicon logic gate. Everything the field builds afterward rests on that moment.

Quantum computing crossed its moment on 9 December 2024. On that date, the Google Quantum AI team reported, in Nature, that their Willow processor — 105 superconducting qubits wired as a surface-code error-correcting array — had demonstrated a logical qubit whose error rate fell as the code distance grew. Distance 3, distance 5, distance 7: each step cut the logical error by a factor of about 2.9. For the first time in history, a quantum computer got better at its job when you gave it more hardware. The threshold theorem, proved on paper in 1996, worked on silicon in 2024.

This chapter is about the era that moment opened. Not the protocol (that is the logical qubits in practice chapter). Not the hardware roster (that is the landscape in 2026 chapter). The era — the span of time running roughly 2024 through 2045 during which quantum computing transitions from interesting to useful. You are reading this in its dawn. Your working career, if you are a 15-year-old today in 2026, will span it.

The historical timeline — how we got here

Logical qubits did not arrive out of nowhere in December 2024. They are the culmination of nearly thirty years of theoretical and experimental progress. Here is the arc.

Figure: the quantum-computing timeline, 1994-2026. 1994 Shor's algorithm; 1995-96 QEC codes (Shor, Steane) and the threshold theorem; 1997 Kitaev's surface code; 2012-15 early QEC demos (IBM, Delft, Google); 2018 Preskill coins "NISQ"; 2019 Sycamore random-circuit supremacy; 2023 IBM Condor, 1121 physical qubits; December 2024 Willow below threshold at d = 7 and Quantinuum's four logical qubits on a colour code; 2026 marked as the dawn of the logical-qubit era. Thirty years from Shor's algorithm (the theoretical reason to build a quantum computer) to Willow (the first logical qubit that got better with scale).
The arc. The theoretical prerequisites for fault-tolerant quantum computing were all in place by 1996. It took thirty more years of materials science, cryogenics, control electronics, and relentless engineering to build a machine where the prerequisites actually worked. The marker at late 2024 is where the transition happens — the field crossed from the pre-logical-qubit era into the logical-qubit era.

1982 — Feynman asks the question. In a lecture at MIT, Richard Feynman asked whether a computer made of quantum components could simulate physics that a classical computer could not. The question was philosophical; the machinery to answer it did not exist.

1994 — Shor's algorithm. Peter Shor showed that if a quantum computer existed, it could factor integers in polynomial time — breaking the RSA cryptosystem. This was the first algorithm with an exponential separation from the best known classical approach. The field acquired a reason to build real hardware.

1995-1996 — Quantum error correction and the threshold theorem. Shor, Steane, and others showed that quantum errors, though fundamentally different from classical errors (they can be continuous, not just bit-flips), could be corrected by encoding one logical qubit across many physical qubits. Kitaev, Aharonov-Ben-Or, and Knill-Laflamme-Zurek proved the threshold theorem: if the physical error rate is below a critical threshold, you can make logical error arbitrarily small by using a larger code. The theoretical scaffold for fault-tolerant quantum computing was in place.

1997 — The surface code. Alexei Kitaev introduced the toric code, later refined into the surface code — the family of two-dimensional error-correcting codes that would, decades later, become the practical standard for superconducting-qubit architectures.

2012-2015 — First QEC demonstrations. IBM, TU Delft, and Google demonstrated small quantum error-correcting codes on physical hardware. These were proof-of-principle experiments: the codes ran, they corrected some errors, but they did not yet operate below threshold in the sense that a larger code gave a lower logical error rate. The hardware was not quite good enough.

2018 — Preskill names NISQ. John Preskill coined the term Noisy Intermediate-Scale Quantum to describe the regime of 50 to a few hundred noisy physical qubits without error correction. The word noisy was the key ingredient — it acknowledged that the current generation of hardware was useful for research demonstrations but not for fault-tolerant algorithms.

2019 — Sycamore supremacy. Google's Sycamore processor demonstrated a random-circuit-sampling benchmark that it claimed classical computers could not match. This was not an application and not a logical qubit; it was a supremacy-style benchmark on physical qubits. Classical algorithmic improvements have since reduced the gap, but the experimental result stands.

2023 — IBM Condor: 1121 physical qubits. The first four-digit superconducting-qubit chip. Condor's individual gate fidelities were too low for useful surface-code operation, but the fabrication demonstrated that 1000-qubit chips were manufacturable.

December 2024 — Willow crosses the threshold. The specific experiment: Google ran the surface code on Willow at distances d = 3, d = 5, and d = 7 (requiring 17, 49, and 97 physical qubits respectively). At each distance step, the logical error rate per cycle dropped by a factor \Lambda \approx 2.9. This is the signature of being below threshold — scaling up the code makes the logical qubit better, not worse. Published in Nature on 9 December 2024.

2024-2025 — Quantinuum colour code with gates. In parallel with Willow, Quantinuum (jointly with Microsoft) demonstrated four logical qubits on a 2D colour code with fault-tolerant Clifford gates (transversal H, S, CNOT) and a T gate via magic-state injection. Where Willow demonstrated one logical qubit that passively protects quantum information, Quantinuum demonstrated multiple logical qubits that can compute with each other at the logical level.

These are the two beachheads. Willow said: you can encode one logical qubit, and it gets better as you scale. Quantinuum said: you can make multiple encoded qubits talk to each other using fault-tolerant gates. Together they are the minimum viable product for the logical-qubit era.

The current state — 2024-2026

Figure: logical qubits demonstrated on physical hardware, 2020-2032 (log scale). 2020-2023: effectively zero. Late 2023-early 2024: Quantinuum H1/H2 experiments, 1-2 logical qubits. Late 2024: Quantinuum's colour code at 4 logical qubits; Willow's single logical qubit below threshold. 2025-26 projections: Quantinuum 6-8, Google's successor chip 2-4. Dashed projection line through the IBM Starling target of 200 (2029) and a credible 2030 band of 10-100 logical qubits, with horizontal markers for the small-chemistry and medium-chemistry/toy-lattice-gauge thresholds. From 0 (2020-2023) to 1-4 (2024) to 10-100 (target 2030) — three orders of magnitude in a decade.
The curve that matters most for the era. Physical-qubit growth has gone through two orders of magnitude in a decade — roughly the straight-line extrapolation one would draw from 2015 to 2025. Logical-qubit growth has just started. The 2030 target — 10-100 logical qubits across multiple platforms — would put the field above the threshold for the first genuinely useful fault-tolerant demonstrations (small-molecule chemistry). Whether the curve bends that way depends on how hard the engineering gets as you scale from the current 4-logical-qubit demonstrations to the 50-logical-qubit regime.

The Willow protocol is worth a paragraph on its own. The surface code arranges physical qubits on a 2D grid of data qubits and ancilla (measurement) qubits. Every cycle, each ancilla measures a parity check of neighbouring data qubits; the pattern of these measurements, fed to a classical decoder, identifies which physical errors have occurred and what correction to apply to the logical qubit at the end. The code distance d sets the error-correcting power — a distance-d code reliably corrects up to \lfloor (d-1)/2 \rfloor physical errors — and it grows with the size of the grid. Willow ran at d = 3 (requiring 17 physical qubits), d = 5 (49 qubits), and d = 7 (97 qubits), using the rest of its 105 qubits for readout, calibration, and overhead. The headline number, \Lambda \approx 2.9, means each increase of the code distance by 2 cuts the logical error by a factor of 2.9. If you can sustain \Lambda \ge 2 all the way out to d \sim 25, you are in the regime where fault-tolerant algorithms at the scale of Shor's-against-RSA become feasible.
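A few lines of arithmetic make these numbers concrete. The sketch below encodes the two bookkeeping rules this section uses — physical qubits per rotated-surface-code logical qubit (2d^2 - 1) and the \Lambda-suppression extrapolation from Willow's measured d = 7 point. The function names and the assumption that \Lambda stays constant at large d are ours, not Google's.

```python
# Surface-code bookkeeping used in this section (a sketch, not vendor code).

def physical_qubits(d: int) -> int:
    """Rotated surface code: d^2 data qubits + (d^2 - 1) measure qubits."""
    return 2 * d**2 - 1

def projected_logical_error(d: int, p_l7: float = 3e-3, lam: float = 2.9) -> float:
    """Extrapolate Willow's per-cycle logical error from d = 7,
    assuming one factor of Lambda per distance step of 2."""
    return p_l7 / lam ** ((d - 7) / 2)

for d in (3, 5, 7, 17, 25):
    print(f"d={d:2d}: {physical_qubits(d):4d} physical qubits, "
          f"p_L ~ {projected_logical_error(d):.1e} per cycle")
```

Running it reproduces the 17/49/97 qubit counts quoted above and shows why d \sim 25 is the interesting regime: the projected per-cycle logical error there is \sim 2 \times 10^{-7}.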

The Quantinuum colour-code demonstration used the trapped-ion H2 machine's 56-64 ion register to encode four logical qubits in a 2D colour code. The colour code has a mathematical property the surface code lacks: transversal implementation of the full Clifford group. In plain language — you can do logical H, S, and CNOT by applying the corresponding physical gate to each physical qubit in the block, in parallel, with no extra ancillas. This simplifies the logical-gate budget enormously. The non-Clifford T gate still requires magic-state injection (the standard trick — see fault-tolerant gates and magic states), but the Clifford part is free.

Between them, Willow and Quantinuum cover the two halves of the fault-tolerance story: passive protection (Willow) and active computation (Quantinuum). The rest of the logical-qubit era will be about scaling both.

The near-term — 2027-2030

What does the roadmap look like for the next five years? The combined picture from all the major vendors:

IBM Starling (2029, 200 logical qubits). The most ambitious publicly declared logical-qubit target. IBM's roadmap assumes meeting the intermediate Nighthawk and Kookaburra milestones (multi-chip modular superconducting processors with dynamic-circuit capability) and a factor-of-10 improvement in dynamic-circuit performance over the 2024 Heron baseline. If Starling is hit, it will be the first multi-logical-qubit machine in commercial operation.

Google successor to Willow (~2027). Google has not published a formal roadmap with dates, but conference talks and blog posts suggest a 2026-27 successor chip with ~1000 physical qubits capable of hosting multiple logical qubits simultaneously and demonstrating logical two-qubit gates (logical CNOT, logical CZ). A small fault-tolerant algorithm on 2-4 logical qubits is a credible 2028-29 milestone.

Quantinuum H-series scaling (~2030). Quantinuum's target is to scale from 4 logical qubits (2024) to 10-20 logical qubits (late 2020s) while preserving the fault-tolerant Clifford capability. The physical-layer challenge is scaling the ion register from 64 to 256 without fidelity degradation.

IonQ, Atom Computing, QuEra, Pasqal (2027-2029). Error-correction primitives are on the roadmap for all four platforms. Neutral-atom arrays with 1000-atom capacity make larger code distances geometrically feasible; the bottleneck is mid-circuit measurement and fidelity.

PsiQuantum (early 2030s). PsiQuantum's explicit strategy skips NISQ and small-scale FTQC entirely. The declared target is a ~10^6-photonic-qubit fault-tolerant machine in the early 2030s. No logical-qubit demonstrations at intermediate scales.

Microsoft (2030s, if the physics holds). Majorana 1 was announced in February 2025 and remains scientifically contested. If topological qubits are real and scalable, Microsoft's roadmap to fault tolerance could be much shorter than the surface-code route; if they are not, Microsoft retains Azure Quantum and Quantinuum partnerships as alternative paths.

The net expectation for 2030: somewhere between 10 and 100 logical qubits on at least two platforms, with the first fault-tolerant algorithms running end-to-end. Not a thousand logical qubits, not Shor's against RSA. The first useful fault-tolerant demonstrations — not the endgame.

The beyond — 2030-2045

Past 2030, the projections get noisier because the underlying engineering problems get harder. Here is a calibrated picture, with explicit uncertainty ranges.

2030-2033 — first useful fault-tolerant chemistry. Small-molecule quantum chemistry (H_2, LiH, BeH_2, maybe H_2O and small organic fragments) requires ~10-30 logical qubits and 10^4-10^6 logical gates. This is the first genuinely useful fault-tolerant demonstration — a result that a computational chemist would prefer to buy quantum time for rather than run on a classical cluster. Expected window: 2030-2033, with some optimists pointing at 2029 and some pessimists at 2035.

2033-2035 — medium-molecule chemistry. FeMoco (the active site of nitrogenase — the enzyme that fixes atmospheric nitrogen and whose classical simulation is an unsolved grand challenge), catalyst active sites, small crystal unit cells. 100-500 logical qubits, 10^7-10^9 logical gates. If this milestone is hit, it is the first commercially consequential quantum computation — catalyst design and enzyme engineering have real industrial value.

2035-2040 — materials simulation. High-temperature superconductors, lithium-ion battery electrolytes, photovoltaic materials. 500-2000 logical qubits, 10^9-10^{10} gates. The dream application from the 1990s quantum-simulation literature. Expected window 2035-2040.

2040-2045 — Shor's against RSA-2048. Breaking 2048-bit RSA with Shor's algorithm requires approximately 8000 logical qubits and \sim 10^{10}-10^{11} logical gates. With surface-code overhead at distance d \sim 25, the physical-qubit count comes to roughly 2 \times 10^7. The best machine today is \sim 10^3 physical qubits; getting to 2 \times 10^7 is a four-order-of-magnitude engineering problem with many unsolved sub-problems (cryogenic wiring, control-electronics scaling, high-throughput real-time decoding). Expected window: 2040-2045, with credible uncertainty bands extending to 2050.
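The 2 \times 10^7 figure is easy to check on the back of an envelope. A minimal sketch, using the logical-qubit count and code distance quoted above and treating magic-state factories as a rough doubling (consistent with the factory discussion in Going deeper; the doubling factor is our simplification):

```python
# Back-of-envelope check of the Shor-vs-RSA-2048 physical-qubit estimate.
n_logical = 8000         # logical qubits quoted in the text
d = 25                   # assumed surface-code distance
per_logical = 2 * d**2   # ~1250 physical qubits per logical qubit
compute_block = n_logical * per_logical   # ~1.0e7
total = 2 * compute_block                 # factories roughly double the budget
print(f"compute block ~{compute_block:.1e}, with factories ~{total:.1e}")
```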

Longer-term — full quantum advantage at industrial scale. Arbitrary hard-problem speedups, full utility for combinatorial optimisation, drug discovery at industrial throughput. No credible earlier target than 2045+.

The honest summary: the logical-qubit era runs roughly 2024-2045, with genuine usefulness beginning in the 2030-2033 window and industrial maturity in the 2040s. If you are 15 in 2026, you will see most of this arc.

The Mosca threshold, revisited

Mosca's theorem — named after Michele Mosca of the Institute for Quantum Computing (Waterloo) — gives a clean framework for deciding when to worry about the quantum threat to cryptography. It turns on three quantities:

Mosca's inequality: if X + Y > Z, you have a problem. Data that needs to stay secret for X years, protected by cryptography that will take Y years to replace, is at risk if a sufficiently large quantum computer appears in Z years — because during the interval between when data is encrypted and when it can be decrypted, the attacker can simply store the ciphertext ("harvest now, decrypt later") and wait.

In 2026, with Shor's-against-RSA plausibly 2040-2045:

For Aadhaar biometric data, with a shelf-life measured in decades (X \ge 50), and an NQM-coordinated migration plan that realistically takes 5-10 years to complete (Y \approx 5-10), you have X + Y \approx 55-60 years, while Z \approx 15-20 years. Mosca's inequality is satisfied with room to spare — X + Y exceeds Z by roughly a factor of three. Sensitive Aadhaar data encrypted with RSA-2048 today is potentially vulnerable to harvest-now-decrypt-later even under optimistic quantum timelines.
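The inequality is simple enough to state as executable code. A one-function sketch — the function name is ours, and the illustrative numbers are the ones from the paragraph above:

```python
# Mosca's inequality: data is at risk if X + Y > Z.
def mosca_at_risk(x_secrecy: float, y_migration: float, z_quantum: float) -> bool:
    """X = years data must stay secret, Y = years to migrate crypto,
    Z = years until a cryptographically relevant quantum computer."""
    return x_secrecy + y_migration > z_quantum

# The Aadhaar case from the text: X >= 50, Y ~ 5-10, Z ~ 15-20.
print(mosca_at_risk(x_secrecy=50, y_migration=10, z_quantum=20))  # True: migrate now
```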

This is the reason CERT-In and MeitY have already published PQC migration guidance aligned with NIST standards (CRYSTALS-Kyber, Dilithium, SPHINCS+). The quantum threat is fifteen years away; the migration needs to happen now, because X is large.

Mosca's theorem is the single cleanest argument for why the logical-qubit era matters to policy today, not in 2040. The machines are not here yet. The migration must still begin.

What "useful" actually means

A word about useful, because it carries a lot of weight in this chapter. In the quantum-computing literature, "useful" is a higher bar than "advantage" or "supremacy": supremacy means beating classical computers on some task, however contrived (Sycamore's 2019 random-circuit sampling); advantage means a speedup on a task somebody actually cares about; useful means a computation whose answer a customer would pay the quantum machine for because the classical alternative is worse — the computational chemist's buy-quantum-time test from earlier in the chapter.

The logical-qubit era, by the definition this chapter uses, begins when fault-tolerant primitives are available (Willow and Quantinuum, 2024-2025) and ends when useful fault-tolerant computation has been demonstrated at industrial scale (2030-2045, depending on application). You are reading this roughly 12-18 months into it.

The worked examples

Example 1: Extrapolating Willow's $\Lambda$ to a useful algorithm

Setup. Willow measured \Lambda \approx 2.9 at code distances d = 3, 5, 7. What does this extrapolation imply for the code distance and physical-qubit count needed to run a useful fault-tolerant algorithm — say, a 50-logical-qubit chemistry algorithm at 10^7 logical gates?

Step 1. Logical error per gate, at distance d, scales roughly as p_L(d) \sim (p_0 / p_\text{th})^{(d+1)/2} when p_0 < p_\text{th} (below threshold). At d = 7, Willow's measured p_L \approx 3 \times 10^{-3} per cycle. To run an algorithm with N = 50 \times 10^7 = 5 \times 10^8 logical-qubit-gate-operations, you need p_L \lesssim 1/N = 2 \times 10^{-9} per logical op. Why p_L \lesssim 1/N: the total probability that any error occurs during the algorithm is approximately N \cdot p_L; if this is smaller than 1, the algorithm has a good chance of succeeding; if it is larger, the algorithm is dominated by errors and the output is noise. The 1/N rule is the minimum requirement for the algorithm to produce a meaningful answer.

Step 2. From Willow, p_L drops by a factor \Lambda \approx 2.9 per distance step of 2. Going from d = 7 (where p_L \approx 3 \times 10^{-3}) to a target p_L \approx 2 \times 10^{-9} requires a factor of 1.5 \times 10^6 suppression. In \Lambda-steps, this is \ln(1.5 \times 10^6) / \ln(2.9) \approx 14.2/1.06 \approx 13 distance steps. Why 13 steps: each increase of d by 2 gives one factor of \Lambda, and \Lambda^{13} = 2.9^{13} \approx 1.0 \times 10^6 while \Lambda^{14} \approx 3 \times 10^6 — the required 1.5 \times 10^6 suppression sits between them, so 13 is the rounded estimate.

Step 3. So the required distance is d = 7 + 2 \times 13 = 33. A distance-d surface code requires \sim 2d^2 = 2 \cdot 33^2 \approx 2200 physical qubits per logical qubit.

Step 4. Scaling to 50 logical qubits: 50 \times 2200 = 110{,}000 physical qubits. Add 3x-5x overhead for ancillas, magic-state factories, and decoding — real estimates for useful fault-tolerant chemistry fall in the 3 \times 10^5-10^6 physical-qubit range per logical algorithm of this scale.

Result. A 50-logical-qubit, 10^7-gate chemistry algorithm needs roughly \sim 3 \times 10^5 physical qubits at distance d \sim 33, assuming Willow's \Lambda \approx 2.9 holds out to d = 33.
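The whole chain of Steps 1-4 fits in a dozen lines of Python. A sketch (variable names are ours; the rounding to 13 steps follows the text — rounding up to 14 would be the conservative alternative):

```python
import math

# Example 1 end-to-end: Willow's Lambda extrapolated to a 50-logical-qubit,
# 1e7-gate chemistry algorithm. Order-of-magnitude accounting only.
lam, d0, p_l0 = 2.9, 7, 3e-3          # Lambda, reference distance, error/cycle at d=7
n_ops = 50 * 10**7                    # logical qubits x logical gates = 5e8
p_target = 1 / n_ops                  # the 1/N success criterion (Step 1)
suppression = p_l0 / p_target         # ~1.5e6 (Step 2)
steps = round(math.log(suppression) / math.log(lam))  # ~13 distance steps of 2
d = d0 + 2 * steps                    # 33 (Step 3)
per_logical = 2 * d**2                # ~2200 physical qubits per logical qubit
total = 50 * per_logical              # ~1.1e5 (Step 4, before factory overhead)
print(f"steps={steps}, d={d}, per-logical={per_logical}, "
      f"total ~{total:.1e} physical (x3-5 more with ancillas and factories)")
```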

Figure: logical error per cycle versus code distance d, on a log scale from 1 down to 10^{-12}. Willow's measured points at d = 3, 5, 7 drop from \sim 10^{-1} to 3 \times 10^{-3} at \Lambda \approx 2.9 per distance step; a dashed extrapolation continues through d = 17 to d = 33, where it crosses the horizontal target line — the \sim 2 \times 10^{-9} logical error needed for a 5 \times 10^8-operation algorithm.
Willow's measured data fits a factor-$\Lambda \approx 2.9$ suppression per distance step of 2. Extrapolating to distance $d \sim 33$ reaches the logical error rate needed for a 50-qubit $10^7$-gate algorithm. The extrapolation assumes the $\Lambda$ scaling continues; sustaining $\Lambda \ge 2$ out to distance 25-33 is the key engineering bet for the logical-qubit era.

What this tells you about the roadmap. The \sim 3 \times 10^5 physical-qubit number is in the right ballpark for the IBM Starling 200-logical-qubit target (Starling's publicly stated physical-qubit requirement is roughly 10^5-10^6, depending on code distance assumptions). The numbers are big but not absurd — if physical-qubit counts continue to grow at the historical rate of ~3x every two years, starting from \sim 10^3 in 2026, the 3 \times 10^5 regime is reachable in the mid-2030s. That matches the outer edge of the credible useful-fault-tolerant window.

Why the calculation is a rough estimate rather than a precise prediction: \Lambda may not stay constant at large d (correlated errors, heating, calibration drift could cause it to degrade); the factor of 2 in the 2d^2 physical qubits per logical qubit is the surface-code standard, but other codes (colour code, LDPC codes) can be more efficient; magic-state factory overheads can swing the total by 3-5x. The answer is an order-of-magnitude estimate, not a precise engineering calculation. The job of the extrapolation is to tell you whether the goal is 10x, 100x, or 10000x away from the current state — and the answer in this case is ~100-1000x, which is the right regime for "a decade of engineering" rather than "centuries" or "already solved."

Example 2: The 2028-2030 chemistry milestone prediction

Setup. Given 2024-2026 trends, predict specifically what fault-tolerant chemistry demonstrations will look like in the 2028-2030 window. Which molecules? Which platforms? What precision?

Step 1. Identify the credible platforms for multi-logical-qubit demonstrations by 2028-2030. Three candidates based on 2026 roadmaps:

  • Google (Willow-successor with multi-logical-qubit support, 2027-2028).
  • Quantinuum (H-series scaled to 10-20 logical qubits, colour code with Cliffords, 2028-2030).
  • IBM (Kookaburra/Starling progression; Starling's 200 logical qubits is a 2029 target that may well slip).

Step 2. Identify the first-tier molecules that fit a 10-20 logical qubit machine (a qubit-count sketch follows the list).

  • H_2 (hydrogen molecule): 2 electrons, 4 orbitals → ~4-8 logical qubits. Solved classically exactly; a quantum demonstration would be a benchmark, not new chemistry.
  • LiH (lithium hydride): 4 electrons, 6-10 orbitals → ~12-20 logical qubits. On the frontier of what a 20-logical-qubit machine can handle.
  • BeH_2 (beryllium hydride): 6 electrons, 7-10 orbitals → ~14-20 logical qubits. Similar scale to LiH.
  • H_2O (water): 10 electrons, 7-14 orbitals → ~14-28 logical qubits. Possible on a 2030-class machine. Real chemistry — water's electronic structure is known but computing it at sub-chemical-accuracy is nontrivial.
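The orbital-to-qubit arithmetic behind those ranges is the standard one-qubit-per-spin-orbital mapping: roughly two qubits per spatial orbital under a Jordan-Wigner-style encoding, with symmetry tapering shaving a few off (which is where the lower ends of the quoted ranges come from). A sketch using the numbers from the list:

```python
# Qubit-count bookkeeping for the molecule list above (a sketch).
# Rule of thumb: ~2 qubits per spatial orbital (one per spin-orbital);
# symmetry tapering can reduce the count below the 2x figure.
molecules = {             # name: (electrons, (min orbitals, max orbitals))
    "H2":   (2,  (4, 4)),
    "LiH":  (4,  (6, 10)),
    "BeH2": (6,  (7, 10)),
    "H2O":  (10, (7, 14)),
}
for name, (n_electrons, (lo, hi)) in molecules.items():
    print(f"{name:5s}: {n_electrons:2d} electrons -> "
          f"~{2 * lo}-{2 * hi} qubits before tapering")
```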

Step 3. Precision target. "Chemical accuracy" is conventionally \pm 1 kcal/mol (\approx 1.6 \times 10^{-3} Hartree). Reaching this requires logical error rates of \sim 10^{-6} per logical operation, so that the error accumulated over the ~10^4-10^5 logical operations of a variational or phase-estimation algorithm stays well below one.
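The budget arithmetic made explicit — just the unit conversion and the failure-probability accounting, with the 10^{-6} per-operation figure from the step above taken as an assumption:

```python
# Error-budget sanity check for the chemical-accuracy target (a sketch).
hartree_per_kcal_mol = 1 / 627.5                 # 1 kcal/mol in Hartree
chemical_accuracy = 1.0 * hartree_per_kcal_mol
print(f"chemical accuracy ~ {chemical_accuracy:.1e} Hartree")   # ~1.6e-3

p_logical = 1e-6                                 # assumed error per logical op
for ops in (1e4, 1e5):
    print(f"{ops:.0e} logical ops -> accumulated failure prob "
          f"~ {ops * p_logical:.2f}")
```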

Step 4. Predict the milestone. By 2030, a demonstration of LiH or BeH_2 to chemical accuracy on a 15-20 logical qubit machine, at a circuit depth of 10^4-10^5 logical gates. This would be the first genuinely useful fault-tolerant chemistry result. The demonstration would appear in Nature or Science with joint authorship between a hardware company (Google / Quantinuum / IBM) and a computational-chemistry group (one of several that have been working on quantum algorithms for a decade — Google-Caltech, IBM-Qunasys, Quantinuum-JSR, etc.).

Result. The first useful fault-tolerant chemistry demonstration will plausibly be LiH or BeH_2 to chemical accuracy on 15-20 logical qubits, 2029-2031. The hardware will be whichever platform hits the 20-logical-qubit mark first; the chemistry will not be new (these molecules are solved classically), but the methodology will be new — the first time a quantum-computed electronic structure is within chemical accuracy and the classical simulation of the same quantum circuit is infeasible. That is the bar for "useful." Why LiH and BeH_2 specifically rather than something larger: the fault-tolerant resource estimates for larger molecules (H_2O, NH_3, CH_4) exceed 20-30 logical qubits at chemical accuracy. Those will come in the 2032-2035 window. The 2029-2031 first-useful demonstration is almost certainly on a small diatomic or triatomic. After that, the field will scale quickly — each factor-of-3 in logical qubit count roughly doubles the molecule's electron count.

Common confusions

  • Physical qubits are not logical qubits. IBM's Condor has 1121 physical qubits; the largest logical-qubit demonstration of 2024 had four. At useful code distances the surface code spends on the order of a thousand physical qubits per logical qubit, so divide the physical headline accordingly.
  • Supremacy is not usefulness. Sycamore's 2019 random-circuit result was a benchmark with no application; useful means someone would choose the quantum machine over a classical cluster for an answer they care about.
  • A threat fifteen years away is not a problem fifteen years away. Harvest-now-decrypt-later means long-lived secrets encrypted today are already exposed — that is the whole point of Mosca's inequality.

Going deeper

If you understand that you are living at the dawn of the logical-qubit era — that Willow 2024 and Quantinuum 2024-25 are the first real beachheads, that the 2027-2030 window is about scaling to 10-100 logical qubits, that the 2030-2033 window delivers the first useful fault-tolerant demonstrations (small-molecule chemistry), that the 2035-2045 window delivers industrial applications and plausibly Shor's-against-RSA, and that Mosca's theorem says post-quantum cryptography must migrate now even though the threat is fifteen years away — you have chapter 201. What follows is the additional detail a policy analyst, hardware PhD student, or algorithm designer would need.

The decoder bottleneck

A less-discussed scaling problem: the classical decoder that interprets syndrome measurements and decides what correction to apply. At code distance d, the decoder must process syndrome data at the physical-gate clock rate — for superconducting qubits, roughly 1 microsecond per cycle. Decoding at distance d \ge 25 in real time is a serious classical engineering problem, requiring FPGA or ASIC-class decoders with custom algorithms (minimum-weight perfect matching, union-find decoders, machine-learning decoders). Google, Quantinuum, and IBM all have active decoder-engineering programmes. If the decoder cannot keep up, the logical error rate stops improving with d — the code is no longer below threshold in practice even if the physical errors are.
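A feel for the data rates involved, under simple assumptions: a rotated surface code produces d^2 - 1 stabiliser measurements per cycle, the superconducting cycle is 1 \mus, and this counts one logical qubit only — a real machine multiplies the rate by the logical-qubit count. A sketch:

```python
# Syndrome-data-rate sketch for the real-time decoding problem.
cycle_time_s = 1e-6                  # superconducting surface-code cycle
for d in (7, 25, 33):
    bits_per_cycle = d**2 - 1        # stabiliser measurements, rotated code
    mbit_per_s = bits_per_cycle / cycle_time_s / 1e6
    print(f"d={d:2d}: {bits_per_cycle:5d} bits/cycle -> "
          f"{mbit_per_s:5.0f} Mbit/s per logical qubit, decoded live")
```

At d = 25 that is roughly 600 Mbit/s of syndrome data per logical qubit, every microsecond, forever — which is why FPGA/ASIC decoders are on every vendor's roadmap.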

Magic-state factories

Fault-tolerant computation requires non-Clifford gates (typically T) that do not admit transversal implementation. The standard approach is magic-state distillation — consuming noisy input T-state copies to produce a small number of higher-fidelity T-state outputs, then using those to teleport a T gate into the circuit. Magic-state factories are the single largest overhead in useful fault-tolerant resource estimates; for Shor's-against-RSA, magic-state factories consume roughly half of the physical-qubit budget. Improvements in distillation protocols (Litinski 2019, Gidney-Fowler 2019, Gidney 2024) have steadily reduced the overhead by factors of 2-10 across the last five years; more improvements are expected.
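An idealised model shows what a factory buys and what it costs. The standard 15-to-1 protocol suppresses the T-state error roughly as p \to 35p^3 per round while consuming 15 input states per output, so the raw-state cost multiplies fast — which is why factories dominate the budget. A sketch (the function is ours; real factories pay additional Clifford and routing overhead this model ignores):

```python
# 15-to-1 magic-state distillation, idealised (output error ~ 35 p^3 per round).
def distillation_rounds(p_in: float, p_target: float):
    rounds, p, raw_cost = 0, p_in, 1
    while p > p_target:
        p = 35 * p**3       # error suppression per round (idealised)
        raw_cost *= 15      # each output consumes 15 inputs
        rounds += 1
    return rounds, p, raw_cost

rounds, p_out, cost = distillation_rounds(p_in=1e-3, p_target=1e-10)
print(f"{rounds} rounds: T-state error ~{p_out:.1e}, ~{cost} raw states per output")
```

Starting from a 10^{-3} raw T-state error, two rounds already reach \sim 10^{-21} at a cost of 225 raw states per output — the protocol converges violently fast, but the qubit real estate to run it in parallel is what eats the budget.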

LDPC codes and the qubit-count optimisation

The surface code is the workhorse for 2D-planar architectures, but it has a large overhead: \sim 2d^2 physical qubits per logical qubit, growing quadratically with code distance. Low-density parity-check (LDPC) quantum codes can reduce this overhead significantly — some LDPC constructions encode many logical qubits per code block with overhead scaling more favourably. The engineering cost is that LDPC codes require long-range connectivity, which is hard for 2D-planar superconducting arrays but natural for trapped ions (all-to-all), neutral atoms (reconfigurable tweezers), and photonics. By 2030, some fraction of the logical-qubit count on non-superconducting platforms is likely to come from LDPC codes rather than the surface code.
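A concrete overhead comparison, hedged: the numbers below use the [[144,12,12]] bivariate-bicycle code from IBM's 2024 LDPC work as one published example (288 physical qubits including check qubits for 12 logical qubits at distance 12), set against a rotated surface code of comparable distance. Treat it as illustrative of the trade, not a general law:

```python
# Surface code vs one published LDPC code: physical qubits per logical qubit.
def surface_code_qubits(d: int) -> int:
    return 2 * d**2 - 1      # rotated surface code, one logical qubit

ldpc_physical, ldpc_logical = 288, 12   # [[144,12,12]] block + its check qubits
d = 13                                  # nearest odd surface-code distance >= 12
print(f"surface code (d={d}): {surface_code_qubits(d)} physical per logical")
print(f"LDPC block  (d=12): {ldpc_physical // ldpc_logical} physical per logical")
```

Roughly an order of magnitude per logical qubit — paid for in the long-range connectivity the paragraph above describes.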

The decoder-parallelism question

Error correction on multiple logical qubits in a single machine requires parallel decoding — the decoder for logical qubit A must run simultaneously with the decoder for logical qubit B, and the two decoders may need to communicate to handle logical gates that act on both qubits. This is an open architectural question. Google, IBM, and Quantinuum each have internal approaches, none published in detail as of early 2026. The 2028-2030 demonstrations of multi-logical-qubit algorithms will require these parallel decoder architectures to work end-to-end.

Algorithmic co-design

Fault-tolerant algorithms are not the same as NISQ algorithms with more qubits. Algorithms in the logical-qubit era are co-designed with the error-correcting code: Clifford gates are cheap (transversal on a colour code, lattice surgery on a surface code) while every T gate consumes a distilled magic state, so compilers minimise T-count rather than total gate count; circuit layouts respect the code's 2D geometry; and schedules account for decoder latency.

The result: 2030-era algorithms are not the 1994-era algorithms running on better hardware; they are cleaned-up versions adapted to the cost model of fault-tolerant quantum computing.

India's place in the era

India's National Quantum Mission is timed well. Phase 1 (2023-2027) targets 50-100 physical qubits — matching the timing of the global NISQ-to-FTQC transition. Phase 2 (2027-2031) targets 500-1000 physical qubits and first error-correction demonstrations — matching the global scaling window. The NQM does not currently fund a push for 2000-logical-qubit machines, which is the right decision given the global fab and cryogenics supply chain required. What India can and should do, and is doing: build sovereign 100-qubit-class hardware platforms across multiple technologies, train a thousand-strong quantum workforce at IITs and IISc, and position Indian startups (QpiAI, BosonQ PSI, QNu Labs) for the 2030s commercialisation wave.

For the next generation of Indian quantum scientists — the 15-year-old reading this wiki today — the alignment is: your undergraduate years cover 2027-2031 (Phase 2 scaling); your PhD years cover 2031-2036 (first useful fault-tolerant demonstrations); your early career covers 2036-2045 (industrial applications). You graduate straight into the decade when the field becomes useful. It is, for somebody with the right training, the right moment to enter.

Choosing a platform to learn

If you want to start programming logical qubits today, the most accessible routes in 2026 are classical simulation and small cloud experiments: open-source stabiliser tools let you build and decode error-correcting codes on a laptop (Stim for circuit simulation, paired with a matching decoder such as PyMatching — see the sketch below), and the hardware vendors' cloud programmes expose small error-correction experiments on real devices.
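A minimal end-to-end sketch of that workflow, assuming the open-source stim and pymatching packages (pip-installable; not affiliated with any vendor above): generate a noisy rotated-surface-code memory experiment at two distances, decode with minimum-weight perfect matching, and compare the logical error rates — the same \Lambda-style comparison Willow made, on a laptop.

```python
import numpy as np
import stim, pymatching

def logical_error_rate(distance: int, p: float, shots: int = 100_000) -> float:
    """Simulate a rotated surface-code memory experiment and decode with MWPM."""
    circuit = stim.Circuit.generated(
        "surface_code:rotated_memory_z",
        distance=distance,
        rounds=distance,
        after_clifford_depolarization=p,
        before_round_data_depolarization=p,
        before_measure_flip_probability=p,
        after_reset_flip_probability=p,
    )
    sampler = circuit.compile_detector_sampler()
    events, observed = sampler.sample(shots, separate_observables=True)
    matcher = pymatching.Matching.from_detector_error_model(
        circuit.detector_error_model(decompose_errors=True))
    predicted = matcher.decode_batch(events)
    return float(np.mean(predicted[:, 0] != observed[:, 0]))

p = 0.003   # physical error rate, below the surface-code threshold (~1%)
e3, e5 = logical_error_rate(3, p), logical_error_rate(5, p)
print(f"d=3: {e3:.2e}   d=5: {e5:.2e}   Lambda-like ratio: {e3 / e5:.1f}")
```

Because p sits below threshold, the d = 5 code should show a lower logical error rate than d = 3 — the same qualitative signature Willow measured in hardware.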

For a student who wants to graduate into the logical-qubit era's workforce, the single highest-leverage skill is error correction and fault-tolerance engineering — understanding how surface codes, colour codes, and LDPC codes work; how decoders operate; how magic-state factories fit in; how logical algorithms are compiled. This is the skill stack the 2028-2035 hiring wave will want.

Where this leads next

The mechanics of building and operating a logical qubit are the subject of the logical qubits in practice chapter; the vendor-by-vendor hardware picture is in the landscape in 2026 chapter. This chapter gave you the era — those two give you the machinery and the map.

References

  1. Google Quantum AI, Quantum error correction below the surface code threshold (2024) — Nature 638; arXiv:2408.13687.
  2. Peter Shor, Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer (1994) — arXiv:quant-ph/9508027.
  3. Michele Mosca, Cybersecurity in an era with quantum computers: will we be ready? (2018) — IEEE Security & Privacy.
  4. John Preskill, Quantum Computing in the NISQ era and beyond (2018) — arXiv:1801.00862.
  5. Wikipedia, Quantum error correction.
  6. IBM Quantum, IBM Quantum roadmap (2024 update) — ibm.com/quantum/roadmap.