In short

As of early 2025, fault-tolerant quantum computing has one logical qubit, not yet a computation. Google Willow (Nature, December 2024) demonstrated the surface code on 105 superconducting qubits, running one logical qubit at code distances d = 3, d = 5, and d = 7. The logical error rate fell by a factor of \sim 2.9 at each increment — the first definitive on-hardware demonstration of below-threshold operation, exactly as the threshold theorem predicts. The d = 7 logical qubit runs at \sim 5 \times 10^{-4} error per cycle, which is better than any individual physical qubit on the chip. Quantinuum H2 (64 trapped-ion qubits) demonstrated logical Clifford operations on the [[7, 1, 3]] colour code and [[15, 1, 3]] Reed-Muller code in 2023-2024. IBM Heron (133 qubits) supports mid-circuit measurement and feedback — the "dynamic circuits" that error correction requires — and IBM's roadmap targets fault-tolerant milestones in 2027-2029. QuEra, IonQ, and Microsoft each have their own roadmaps with different architectures (neutral atoms, trapped ions, Majorana-based topological qubits). What nobody has is multiple logical qubits executing a non-trivial algorithm. That is the 2027-2030 horizon. Useful fault-tolerant applications — Hamiltonian simulation for chemistry, error-corrected Grover for search — land in the 2030-2035 window if scaling continues. Cryptographically relevant Shor's algorithm against RSA-2048 requires \sim 2 \times 10^7 physical qubits and sits somewhere in the 2035-2050 range depending on whom you ask. The gap between "one logical qubit today" and "twenty million physical qubits in a cryogenic hall, factoring RSA" is roughly five orders of magnitude in qubit count and many orders of magnitude in logical error rate — a long engineering journey, not a magical threshold crossing. India's National Quantum Mission is scheduled to reach 50-1000 physical qubits by 2031; the IIT Madras-TCS superconducting fabrication line, the TIFR trapped-ion platform, and Raman Research Institute's photonic programme are the main hardware efforts.

You have spent most of this curriculum in the territory of beautiful theorems. Shor's algorithm runs in polynomial time. The threshold theorem says arbitrarily long quantum computation is possible in principle. Magic state distillation gives you universal gates. Every one of these results is proved and correct.

None of them is a quantum computer.

This chapter asks the question a sceptical engineer would ask after the first seventeen theorems in a fault-tolerance textbook: what have people actually built? The answer, as of early 2025, is both more impressive than you might expect (the threshold theorem has been demonstrated on real superconducting hardware, not just on paper) and more modest than the press releases suggest (no lab anywhere has run a useful algorithm on multiple error-corrected qubits). Single logical qubits are solved. Multiple logical qubits are the frontier.

The point of this chapter is to hand you a calibrated, specific, numerical picture of where the field actually stands — what machines exist, what they have demonstrated, what they cannot yet do, and when the next milestones are credibly expected. This is the antidote to the "quantum computers will break all encryption next year" genre of article. It is also the antidote to the "quantum computers are a scam that will never work" genre. Both miss the same honest middle: a slow, steady engineering climb, on specific scaling curves, with landmarks you can point to.

The 2024 Willow result — why it matters

On 9 December 2024, Google Quantum AI published in Nature a paper titled "Quantum error correction below the surface code threshold." The chip: Willow, 105 superconducting qubits arranged on a 2D grid, with two-qubit gate errors around 3 \times 10^{-3}.

The experiment: encode a single logical qubit in the surface code at three different code distances — d = 3 (a 3 \times 3 patch of data qubits plus its measurement qubits, i.e. 9 data + 8 measurement = 17 qubits), d = 5 (25 + 24 = 49 qubits), and d = 7 (49 + 48 = 97 qubits). Run each encoded qubit for many rounds of syndrome measurement. Measure the logical error rate per round.

The result:

The measured logical error per cycle was p_L \approx 4.4 \times 10^{-3} at d = 3, 1.5 \times 10^{-3} at d = 5, and 5 \times 10^{-4} at d = 7. The ratio between successive distances: p_L(d=5) / p_L(d=3) \approx 0.34 and p_L(d=7) / p_L(d=5) \approx 0.34. Constant ratio \Lambda \approx 1/0.34 \approx 2.9 suppression per two extra rows of distance. This is the threshold theorem operating on hardware. Below threshold, each increment of distance suppresses the logical error by a multiplicative constant; above threshold, increasing distance would increase the logical error. Willow demonstrated, for the first time on real hardware, that we are genuinely below the surface-code threshold.

[Figure: Willow 2024, logical error per cycle versus code distance (semi-log plot). Data points: $4.4 \times 10^{-3}$ at $d = 3$, $1.5 \times 10^{-3}$ at $d = 5$, $5 \times 10^{-4}$ at $d = 7$, with a dashed extrapolation to $d = 9$ and $d = 11$ and a horizontal band marking the physical two-qubit gate error of $3 \times 10^{-3}$.]
Willow's core result. Three data points — $d = 3$, $d = 5$, $d = 7$ — show the logical error falling geometrically as the code distance grows. The constant ratio between successive points is the signature of below-threshold operation. Extrapolating naively, $d = 9$ would give $\sim 1.7 \times 10^{-4}$, $d = 11$ would give $\sim 6 \times 10^{-5}$, and so on, until systematic errors (leakage, correlated noise, decoder imperfections) eventually flatten the curve at some very low number.

Why this matters. For more than a decade, the threshold theorem has been a mathematical promise — if you get below threshold, arbitrarily long computation is possible. Running actual hardware below threshold, such that adding more physical qubits helps rather than hurts, was the single most important open experimental question in the field. Willow answered it. The scaling curve is measured, the direction is correct, the rate of suppression is consistent with the theoretical \Lambda for the surface code at the measured physical error rate.

This is not yet a computation. It is one logical qubit, sitting in a cryostat, going through syndrome cycles. But it is the first time any lab has demonstrated on real hardware that the error-correction machinery works as the threshold theorem says it should.

What Willow didn't do

A careful reading of the press coverage is useful. Headlines on 9 December 2024 ran "Google's Willow chip just solved quantum computing" and "Willow announces the quantum era." Calibrate these against what the paper actually reports:

Hype check. Willow demonstrated one logical qubit, going through syndrome cycles without implementing any logical gate. There was no logical Hadamard, no logical CNOT, no logical T. There was no algorithm run on the encoded qubit. The computation the press coverage sometimes mentions — Willow running a circuit-sampling benchmark in 5 minutes that would take a classical supercomputer 10^{25} years — was done on physical (unencoded) qubits, not on the logical qubit. It is a beautiful demonstration of below-threshold operation, not a fault-tolerant computation.

The distinction matters. A single logical qubit without gates is to a quantum computer what a working transistor is to a CPU: a necessary building block, but not yet useful for computation. Even the simplest algorithms (Deutsch-Jozsa, Bernstein-Vazirani) need at least two logical qubits and a logical CNOT. A non-trivial computation needs dozens of logical qubits and millions of logical gates.

That said, the single-logical-qubit milestone is genuinely hard. It requires:

  1. Physical two-qubit gate errors below the surface-code threshold across the whole \sim 100-qubit device, not just on one well-behaved pair.
  2. Fast, high-fidelity mid-circuit measurement and reset of the measurement qubits, round after round.
  3. A real-time decoder that keeps pace with the \sim 1 \mu s syndrome cycle.
  4. Calibration stability and leakage handling over the \sim 10^5 rounds of a logical-qubit experiment.

Every one of these is a years-long engineering programme. That Willow has them all simultaneously working is the real news.

The competing platforms — March 2025 snapshot

Superconducting qubits (Google, IBM) are not the only game. Four hardware platforms are in serious contention for fault-tolerant quantum computing. Each has a different qubit type, a different error model, a different scaling trajectory, and a different current-best logical-qubit demonstration.

Hardware platforms for fault-tolerant quantum computing, early 2025:

|  | Google Willow | IBM Heron | Quantinuum H2 | IonQ Tempo / QuEra |
|---|---|---|---|---|
| qubit type | superconducting (transmon) | superconducting (transmon) | trapped ion (Yb, Ba) | trapped ion / neutral atom |
| physical qubits | 105 | 133 (Heron) | 64 (H2) | 64-256 |
| 2-qubit gate error | $\sim 3 \times 10^{-3}$ | $\sim 5 \times 10^{-3}$ | $\sim 5 \times 10^{-4}$ | $\sim 10^{-3}$-$10^{-2}$ |
| best logical demo | surface code $d = 7$, below threshold | dynamic circuits, code preparation | colour code, logical Cliffords | logical Bell states, QEC |
| declared 2027 roadmap | 1000s of qubits, multiple logical qubits | Starling, 200 logical qubits | H-series scaling, logical algorithms | Tempo: 256 qubits, 99.9% fidelity |
| below threshold? | demonstrated | not yet | gate errors only | not yet |
Four platforms, four strategies. Superconducting systems (Google, IBM) optimise for qubit count; trapped ions (Quantinuum, IonQ) optimise for gate fidelity; neutral atoms (QuEra) optimise for scalability via laser addressing; topological qubits (Microsoft, discussed below) optimise for intrinsic error suppression. The field has not converged on a winning platform, which is itself a sign of a field still finding its shape.

Google Willow (superconducting, 105 qubits)

The lead demonstration, as discussed. Transmon qubits on a 2D grid, tunable couplers, cryogenic control at around 10 mK. The 105 qubits include both data and measurement qubits; the d = 7 patch uses 97 of them, and the chip also carries the tunable couplers that mediate two-qubit gates. Logical-qubit demo at d = 7 is the best on record.

Google's 2025-2027 roadmap: grow from 105 to 1000+ physical qubits on a single chip, demonstrate multiple logical qubits on one wafer, begin logical two-qubit gates. The milestone after that (2027-2030) is running a small algorithm on encoded qubits.

IBM (superconducting, 133-1121 qubits)

IBM's largest chip is Condor at 1121 qubits (2023) — impressive qubit count but error rates high enough that it cannot run deep circuits. The working chip for error-correction work is Heron at 133 qubits with improved couplers and crucially dynamic circuits: mid-circuit measurement and classical feedback, which is what error correction requires.

IBM has not yet reported a below-threshold surface-code demonstration comparable to Willow. What they have is a different roadmap: a modular architecture where many chips are networked via microwave couplers, targeting "system of systems" fault tolerance. The upcoming Starling chip (target 2029) is advertised for 200 logical qubits.

Quantinuum H2 (trapped-ion, 64 qubits)

Trapped-ion platforms have a fundamental advantage over superconducting: physical gate errors are routinely \sim 5 \times 10^{-4} or better — roughly five to ten times better than the best superconducting numbers — and the noise model is cleaner (less correlated noise, far less leakage into non-computational levels). The tradeoff is speed: trapped-ion two-qubit gates take \sim 100 \mu s versus \sim 30 ns for superconducting, roughly 3000x slower per gate. For error correction this is not a problem; for a large algorithm it multiplies runtime, as the sketch below illustrates.
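
A rough sense of what that speed gap means in wall-clock time, in plain Python. The 10^8-gate workload is simply the figure used later in Example 2, borrowed here as a stand-in, and the gate times are the round numbers quoted above.

```python
# Same serial gate count at trapped-ion vs superconducting gate times.
# Illustrative only: real machines parallelise gates, and decoding and
# magic-state production add further latency on top of this.
GATE_TIME_S = {"superconducting": 30e-9, "trapped ion": 100e-6}  # seconds per two-qubit gate
N_GATES = 1e8                                                    # stand-in workload

for platform, t in GATE_TIME_S.items():
    total = N_GATES * t
    print(f"{platform:>16s}: {total:10.0f} s  (~{total / 3600:.2f} hours)")
# superconducting: ~3 s; trapped ion: ~10^4 s, a few hours of wall-clock time
```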

Quantinuum's H2 machine, 64 qubits, demonstrated in 2023-2024:

  1. Logical qubits encoded in the [[7, 1, 3]] colour code, including logical entanglement (logical Bell states) between encoded qubits.
  2. Transversal logical Clifford gates on those encoded qubits, with logical error rates below the corresponding physical rates.
  3. Logical operations on the [[15, 1, 3]] Reed-Muller code, the code whose transversal non-Clifford gate underlies magic-state protocols.

The H2 logical-qubit error rate is not quite as low as Willow's d = 7 surface code, but the trapped-ion platform has more room to grow on distance because each additional qubit is "just another ion" rather than additional fabrication complexity.

IonQ, QuEra, PsiQuantum, others

Beyond the three machines above, several platforms are credible contenders. IonQ (trapped ions) is scaling through its Tempo system, with a roadmap target of 256 physical qubits at 99.9% two-qubit fidelity. QuEra (neutral atoms) traps hundreds of individually laser-addressed atoms and has reported entangling operations across dozens of small-distance logical qubits. PsiQuantum (photonics) is skipping the intermediate scales and engineering directly towards a large fault-tolerant photonic machine. Microsoft's topological-qubit programme is discussed later in this chapter. None of these has yet matched Willow's below-threshold surface-code demonstration, but each offers a scaling path that does not depend on superconducting fabrication.

One logical qubit is not a quantum computer

A logical qubit, without gates, is a memory register. To do a computation, you need:

  1. Multiple logical qubits — at least dozens for small algorithms.
  2. Logical gates — at minimum, transversal Cliffords; for universality, magic-state injection for T.
  3. Multi-logical-qubit algorithms — the circuit must fit in the logical-qubit count, and the algorithm must tolerate the per-gate logical error rate.
  4. Classical control — decoders running in real time, compilation from high-level circuit to physical gates, scheduling.

Willow has (1) in the "could demonstrate 2-3 logical qubits" sense if they dedicated the whole chip to error correction. They have not demonstrated (2). They have not attempted (3). For (4), the decoder runs in real time on a d = 7 single qubit — impressive, but scaling to 100 logical qubits requires much faster decoders.

The industry consensus is that moving from 1 logical qubit to 100 logical qubits is roughly a 5-year programme from 2024, and moving from 100 to 1000 is another 5 years. At 1000 logical qubits with the kinds of logical error rates Willow demonstrates at d = 7, you can run:

  1. Hamiltonian simulation of small molecules and spin systems, the first chemistry instances at the edge of comfortable classical reach.
  2. Error-corrected Grover search on modest instance sizes.

You cannot yet run Shor's on RSA-2048, which needs \sim 8000 logical qubits and 10^{10} T gates — an accumulated logical error budget of order 10^{-15} per gate that distance-7 does not provide.

Worked examples

Example 1: extrapolating Willow to larger distances

Setup. Willow's measured logical error rates at d = 3, 5, 7:

p_L(3) = 4.4 \times 10^{-3}, \quad p_L(5) = 1.5 \times 10^{-3}, \quad p_L(7) = 5 \times 10^{-4}.

The suppression ratio \Lambda between adjacent distances (with \Delta d = 2): \Lambda = p_L(d) / p_L(d+2). Why \Delta d = 2: surface-code patches are taken at odd distances (an odd-by-odd lattice), so the smallest meaningful increment of the distance is 2.

Step 1. Compute \Lambda: \Lambda_1 = 4.4 / 1.5 \approx 2.93; \Lambda_2 = 1.5 / 0.5 = 3.0. Close to each other; call \Lambda \approx 2.9.

Step 2. Extrapolate to d = 9, 11, 13, assuming constant \Lambda:

p_L(9) \approx 5 \times 10^{-4} / 2.9 \approx 1.7 \times 10^{-4}.
p_L(11) \approx 1.7 \times 10^{-4} / 2.9 \approx 6 \times 10^{-5}.
p_L(13) \approx 6 \times 10^{-5} / 2.9 \approx 2 \times 10^{-5}.

Step 3. Qubit cost. Surface-code qubit count is 2d^2 - 1. At d = 9: 161 qubits for one logical. At d = 13: 337 qubits per logical. Compared with d = 7's 97-qubit logical, you pay about 3.5x more qubits for another \sim 25x error suppression.

Step 4. Assess. To reach p_L \lesssim 10^{-15} (the level needed for a trillion-gate algorithm), extrapolate \log p_L = \log(5 \times 10^{-4}) - 0.5 (d - 7) \log(2.9). At constant \Lambda \approx 2.9 that takes roughly 25 more increments of 2, i.e. d \approx 57 and \sim 6500 physical qubits per logical qubit. The published Gidney-Ekerå RSA-2048 estimates assume physical error rates far enough below threshold that \Lambda \approx 10, which brings the distance down to d \approx 27 and \sim 1500 physical qubits per logical qubit.

Result. The \Lambda Willow measured at d \le 7 is real below-threshold scaling, but not yet enough of it: at constant \Lambda \approx 2.9 the distance-27 surface code underpinning published RSA-2048 resource estimates would fall short of the required logical error rate, and those estimates implicitly assume physical error rates a few times lower than Willow's. The extrapolation is still the link between the 2024 experimental demonstration and the 2035+ cryptographic timeline. It also assumes \Lambda stays constant, which it might not — correlated noise, decoder imperfections, calibration drift all could cause \Lambda to degrade at larger d.
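
The arithmetic in Steps 1-4 is easy to reproduce. Below is a small Python sketch using only the numbers quoted in this example; the constant-\Lambda assumption and the 2d^2 - 1 qubit count are the same ones used in the text, and nothing here comes from the Willow paper beyond the three measured points.

```python
# Extrapolate the measured suppression factor Lambda to larger code distances,
# and track the 2*d^2 - 1 physical-qubit cost per logical qubit.
# Naive constant-Lambda extrapolation, exactly as in Steps 1-4 above.
import math

p_measured = {3: 4.4e-3, 5: 1.5e-3, 7: 5e-4}   # logical error per cycle
lam = (p_measured[3] / p_measured[5] + p_measured[5] / p_measured[7]) / 2

def p_logical(d: int) -> float:
    """Constant-Lambda extrapolation anchored at the d = 7 data point."""
    return p_measured[7] / lam ** ((d - 7) / 2)

def physical_qubits(d: int) -> int:
    """Rotated surface code: d^2 data qubits + d^2 - 1 measurement qubits."""
    return 2 * d * d - 1

for d in (7, 9, 11, 13):
    print(f"d = {d:2d}: p_L ~ {p_logical(d):.1e}, physical qubits = {physical_qubits(d)}")

def distance_for(target: float) -> int:
    """Smallest odd distance whose extrapolated logical error meets the target."""
    steps = math.ceil(math.log(p_measured[7] / target) / math.log(lam))
    return 7 + 2 * steps

print("distance for p_L <= 1e-15:", distance_for(1e-15))   # ~57 at Lambda ~ 2.9
```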

Example 2: physical-qubit overhead for a useful algorithm

Setup. Suppose you want to run a fault-tolerant quantum simulation of a 100-spin Hamiltonian. You estimate the algorithm needs 200 logical qubits and 10^8 logical gates — of which 5 \times 10^7 are T gates.

Step 1. Logical error budget. For the whole algorithm to succeed with probability \sim 1/2, the per-gate logical error must be below 1/(2 \times 10^8) = 5 \times 10^{-9}. Why a 50% success probability is a reasonable target: the computation can simply be repeated a few times, and the resource estimate depends only logarithmically on the target success probability, so the exact number barely matters.

Step 2. Surface-code distance needed. Solve p_L(7) / \Lambda^{(d-7)/2} = 5 \times 10^{-9}. This gives (d-7)/2 = \log_{2.9}(5 \times 10^{-4} / 5 \times 10^{-9}) = \log_{2.9}(10^5) \approx 10.8, i.e. d \approx 28.6; round up to the next odd distance, d = 29.

Step 3. Physical qubits per logical. 2d^2 - 1 = 2(29)^2 - 1 = 1681 physical qubits per logical qubit.

Step 4. Data qubits. 200 \times 1681 \approx 3.4 \times 10^5 physical qubits for the logical register alone.

Step 5. Magic-state factories. 5 \times 10^7 T gates must be supplied over the runtime of the algorithm. Budgeting roughly 20 physical-qubit-rounds per magic state and assuming 100 parallel factories, each producing a state every \sim 100 cycles, gives a factory qubit count of \sim 10^5, the same order as the data-qubit count.

Step 6. Total. \sim 5 \times 10^5 physical qubits for this algorithm. About half a million qubits for a modest quantum simulation.

Result. Useful fault-tolerant applications are in the 10^5-10^6-qubit range. Willow has 10^2, a thousand times smaller. Scaling up by 1000x is the frontier work of 2025-2035.
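
The same budget, as a script. This sketch just repeats Steps 1-6 with the stated assumptions (the 50% success target, \Lambda = 2.9, the d = 7 anchor point, and the rough 10^5-qubit factory allowance); changing any input shows how sensitive the half-million total is.

```python
# Back-of-envelope overhead for the 200-logical-qubit simulation in Example 2.
# Every input below is an assumption stated in the worked example, not a
# measured quantity.
import math

n_logical = 200        # logical qubits the algorithm needs
n_gates   = 1e8        # total logical gates
lam       = 2.9        # suppression factor per Delta-d = 2 (from Example 1)
p7        = 5e-4       # d = 7 logical error per cycle (the anchor point)

# Step 1: per-gate logical error budget for ~50% overall success probability.
budget = 1 / (2 * n_gates)

# Step 2: smallest odd code distance whose extrapolated error meets the budget.
steps = math.ceil(math.log(p7 / budget) / math.log(lam))
d = 7 + 2 * steps

# Steps 3-4: physical qubits per logical qubit, and for the whole data block.
per_logical = 2 * d * d - 1
data_block = n_logical * per_logical

# Step 5: crude magic-state factory allowance, same order as the data block.
factory = 1e5

print(f"budget = {budget:.1e}, distance d = {d}, per logical = {per_logical}")
print(f"data block ~ {data_block:.2e}, total ~ {data_block + factory:.1e} physical qubits")
```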

Common confusions

"A logical qubit is a quantum computer"

A logical qubit is a single protected bit of quantum information. A quantum computer is a system that runs circuits on many qubits with quality good enough that the circuit output is useful. A single logical qubit can store a state; it cannot compute anything. You need multiple logical qubits, logical gates between them, and logical error rates low enough that long sequences of those gates compose reliably. Willow demonstrated storage, not computation.

"We have fault-tolerant quantum computing now"

We have below-threshold operation — one demonstration, one platform, one logical qubit. Fault-tolerant quantum computing means running arbitrary algorithms on encoded qubits with the error rate staying bounded. That requires multi-logical-qubit systems with logical gates, which no lab has yet run at interesting scale. The community expects 2027-2030 for small demonstrations and 2035+ for useful applications.

"Willow solved quantum computing"

Willow solved (or at least heavily evidenced) the threshold theorem's experimental hypothesis: that below-threshold operation is achievable. This is one critical milestone. It does not solve:

  1. Logical gates: none was implemented on the encoded qubit.
  2. Multiple logical qubits interacting on one chip.
  3. Magic-state distillation at the scale a real algorithm needs.
  4. Real-time decoding at distances beyond d = 7.
  5. Scaling from \sim 10^2 physical qubits to the 10^5-10^6 that useful applications require.

Each of these is a multi-year research programme. Willow is one (large) step on a long road.

"China's logical-qubit results are larger / smaller than the US results"

Claims about which country is "winning" at quantum computing usually reflect which press office is loudest. As of early 2025: Google Willow is the biggest surface-code below-threshold result. Quantinuum (UK/US) has the best colour-code logical Clifford demo. USTC (Chinese Academy of Sciences) has demonstrated some of the largest photonic boson-sampling experiments and runs a strong superconducting programme (the Zuchongzhi processors). Both the US and China are investing heavily; neither is decisively ahead. The race is real at a national-funding level but the science is global and moves together.

"Error correction means we do not need better qubits"

Wrong. Error correction reduces the logical error rate, not the physical error rate. If the physical error rate is above threshold, correction makes things worse. Every platform's hardware team is working to reduce physical errors in parallel with the error-correction team, and both efforts must succeed together. "Better qubits" and "more qubits" are complementary, not alternatives.

The honest timeline

Every serious fault-tolerance roadmap is a decades-long plan. Here is a calibrated synthesis of the industry consensus, picking median estimates across Google, IBM, Quantinuum, IonQ, and the NQM:

Fault-tolerant quantum computing roadmap, 2024-2050 (industry median consensus):

  2024: 1 logical qubit below threshold (Willow, d = 7); \sim 10^2 physical qubits (done)
  2027: \sim 10 logical qubits, small logical circuits; \sim 10^3 physical qubits (the current frontier)
  2030: \sim 100 logical qubits; fault-tolerant Grover, toy Hamiltonian simulation; \sim 10^5 physical qubits
  2035: \sim 1000 logical qubits; first practical chemistry and materials applications; \sim 10^6 physical qubits
  2040: industrial-scale chemistry, drug discovery; \sim 10^7 physical qubits
  2045-2050: cryptographically relevant Shor against RSA-2048; \sim 2 \times 10^7 physical qubits
A calibrated timeline. Each milestone is the median across independent roadmaps; the uncertainty on later dates is $\pm 5$ years. Useful chemistry / materials applications are 2030-2035; cryptographically relevant Shor is 2045+. These dates assume no transformative hardware breakthrough (Majorana qubits working at scale, photonic networking breakthroughs, etc.) — any one of which would accelerate by a decade.

Policy calibration. The 2045+ date for RSA-2048 is fifteen to twenty years out under current scaling. That is long enough to migrate cryptographic infrastructure, but short enough that harvest-now-decrypt-later attacks are a real concern today: data you encrypt under RSA or ECC now, if stolen and stored, can be decrypted when the attacker has a large enough quantum computer. India's CERT-In and MeitY are pushing for post-quantum cryptography migration by 2030 precisely because of this.

The India angle

The National Quantum Mission (approved 2023, ₹6003 crore over 8 years to 2031) has explicit hardware milestones:

  1. Intermediate-scale quantum computers of 50-1000 physical qubits by 2031.
  2. Superconducting (the IIT Madras-TCS fabrication line), trapped-ion (TIFR), and photonic (Raman Research Institute) hardware platforms developed in parallel.
  3. Staged superconducting targets: 50-100 qubits with dynamic circuits by 2027, and 500-1000 qubits with first error-correction results by 2030.

The phase 1 targets will not match Willow in either qubit count or fidelity — but they will put India on the same scaling curve, and phase 2 will begin closing the gap. In parallel, the post-quantum cryptography programme at IIT Kanpur and IIT Bombay is working on the policy side: which PQC algorithms (CRYSTALS-Kyber, CRYSTALS-Dilithium, SPHINCS+, all standardised by NIST in 2024) are suitable for Aadhaar and UPI cryptography, and when the migration must happen.

India is genuinely part of the global fault-tolerance race — not leading, but closing the gap on most platforms, and with a serious sovereign programme that does not depend on licensed foreign hardware. By the late 2020s, expect papers from IIT Madras, IISc, and TIFR on logical-qubit demonstrations at modest code distances. By the 2030s, expect a flagship Indian fault-tolerant demonstration.

Going deeper

The rest of this chapter concerns the detailed architectures of each platform, the Willow paper's technical protocols, real-time decoding of surface codes, Microsoft's topological qubit claims, and the NQM hardware plan in more detail. This is the research-level view useful for a student considering where to do a PhD in experimental quantum computing, or a policy analyst evaluating NQM milestones. The earlier sections are enough for a calibrated high-level understanding.

Willow in detail — the surface code protocol

The Willow chip is a square grid of transmon qubits with tunable couplers. At code distance d, a d \times d patch of data qubits is interleaved with d^2 - 1 measurement qubits, for 2d^2 - 1 qubits in total. The syndrome circuit for each round is:

  1. Initialise measurement qubits in |0\rangle (for Z-stabilizers) or |+\rangle (for X-stabilizers).
  2. Perform four CNOTs per measurement qubit to its neighbouring data qubits, in a time-ordered choreography to avoid gate conflicts on the grid.
  3. Measure each measurement qubit in Z (or X for X-stabilizers after a Hadamard).
  4. Feed the syndrome bits to a real-time decoder running on an FPGA beside the dilution refrigerator.
  5. Record the decoder's error guess; apply correction virtually (update the logical-operator frame) rather than physically flipping qubits.

The cycle time is about 1 \mu s per round at d = 7. Willow ran \sim 10^5 rounds per logical-qubit experiment, enough to accumulate statistics on the logical error rate.

The decoder is based on minimum-weight perfect matching (the approach implemented in the open-source PyMatching library), extended with Google's own machinery for real-time and correlated-error decoding. The decoder runs in \sim 1 \mu s per round, barely fast enough. Going to d = 11 or d = 13 will require faster decoders; recent work on faster matching implementations and on neural-network decoders is aimed at closing this gap.
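
For readers who want to see the memory-experiment workflow end to end, here is a minimal sketch using the open-source stim and PyMatching packages rather than Google's pipeline. The code task name, noise rates, and round counts are illustrative choices (roughly Willow-like depolarizing noise), not Willow's calibration data, and the quantity printed is a logical error per run rather than per cycle.

```python
# Distance-d surface-code memory experiment: build the circuit with stim,
# sample detection events, and decode them with PyMatching (MWPM).
# Illustrative sketch with uniform depolarizing noise, not the Willow setup.
import numpy as np
import pymatching
import stim

def logical_error_rate(distance: int, rounds: int, p: float, shots: int = 20_000) -> float:
    # Rotated surface-code memory experiment in the Z basis.
    circuit = stim.Circuit.generated(
        "surface_code:rotated_memory_z",
        distance=distance,
        rounds=rounds,
        after_clifford_depolarization=p,
        before_measure_flip_probability=p,
        after_reset_flip_probability=p,
    )
    # Matching graph built from the circuit's detector error model.
    dem = circuit.detector_error_model(decompose_errors=True)
    matcher = pymatching.Matching.from_detector_error_model(dem)
    # Sample syndromes plus the true logical observable, then decode.
    detections, observables = circuit.compile_detector_sampler().sample(
        shots, separate_observables=True
    )
    predictions = matcher.decode_batch(detections)
    failures = np.sum(np.any(predictions != observables, axis=1))
    return failures / shots

for d in (3, 5, 7):
    print(f"d = {d}: logical error per run ~ {logical_error_rate(d, rounds=3 * d, p=3e-3):.2e}")
```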

Quantinuum's colour-code demonstration

The Quantinuum H2 demonstration in 2024 was the first end-to-end logical-qubit computation:

  1. Encode logical qubits in the [[7, 1, 3]] colour code.
  2. Entangle two encoded qubits with a transversal logical CNOT, producing logical Bell states.
  3. Apply further transversal logical Clifford gates and repeated error-corrected logical measurements, with logical error rates below the corresponding physical rates.

They followed this with a magic-state injection experiment: prepare a noisy |T\rangle, inject it into the data register via CNOT + measurement + conditional S, yielding a non-Clifford logical operation on the encoded qubit. This is the first physical demonstration of the full magic-state injection gadget from ch. 127.
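
A quick numerical sanity check of that gadget (plain statevector arithmetic in numpy, nothing to do with Quantinuum's stack): for either measurement outcome, the data qubit ends up in T|\psi\rangle up to a global phase.

```python
# T-gate injection: CNOT from the data qubit onto a |T> ancilla, measure the
# ancilla in Z, and apply S to the data qubit if the outcome is 1.
# Plain statevector arithmetic; qubit ordering is |data, ancilla>.
import numpy as np

t = np.exp(1j * np.pi / 4)
S = np.diag([1, 1j])

def inject_t(psi: np.ndarray, outcome: int) -> np.ndarray:
    """Post-measurement data state for a given ancilla outcome (0 or 1)."""
    a, b = psi
    # Joint state after CNOT (data = control): a|00> + a*t|01> + b*t|10> + b|11>.
    if outcome == 0:
        post = np.array([a, b * t])       # already T|psi>
    else:
        post = S @ np.array([a * t, b])   # conditional S repairs the byproduct
    return post / np.linalg.norm(post)

psi = np.array([0.6, 0.8j])               # arbitrary normalised input state
target = np.array([0.6, 0.8j * t])        # T|psi>, the state we expect

for outcome in (0, 1):
    out = inject_t(psi, outcome)
    # Overlap magnitude 1.0 means equal up to a global phase.
    print(outcome, np.isclose(abs(np.vdot(target, out)), 1.0))
```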

H2's logical error rate is not yet as low as Willow's, but the gate set is larger — and the trapped-ion error model is cleaner. Quantinuum is betting that very high-fidelity physical qubits will reach useful logical qubits at much smaller code distances, trading raw qubit count for gate quality; the superconducting platforms are betting the opposite way.

Microsoft's topological push

Microsoft announced the Majorana 1 chip in February 2025, based on topological qubits built from topological superconductor nanowires. If the underlying physics works (the scientific community remains divided on whether the reported Majorana signatures are clean), the qubit-level error rate could be orders of magnitude lower than any other platform, because errors are suppressed by the topological gap — an intrinsic physical property rather than a software layer.

The promise: topological qubits would not need the huge surface-code overhead. One topological qubit could do the job of a surface-code logical qubit built from \sim 1000 physical superconducting qubits. If Majorana 1's physics is real, Microsoft's roadmap to RSA-2048 could shorten from 20+ years to 10 years.

If not, Microsoft will have a decade of research to write off. Other topological proposals (parafermions, non-Abelian anyons in fractional quantum Hall systems, Fibonacci anyons) exist but are even further from working hardware. The topological-qubit question is one of the most important open questions in the field, with the answer likely resolved by 2027 or so.

Real-time decoding — the unsolved piece

All of this rests on decoders that run fast enough to keep up with the syndrome stream. At d = 7, 1 \mu s per round is fine. At d = 25 (the RSA-2048 target), decoders must run on graphs with thousands of vertices while still meeting per-round deadlines, and naive minimum-weight perfect matching scales poorly. Recent approaches:

  1. Union-find decoders, which trade a little accuracy for almost-linear runtime.
  2. Much faster matching implementations (the "sparse blossom" rewrite behind PyMatching 2).
  3. Windowed, parallel decoding that splits the syndrome stream into overlapping chunks processed concurrently.
  4. Neural-network decoders, which can learn correlated-noise structure but must be made fast enough for real time.
  5. Dedicated FPGA and ASIC implementations co-located with the control electronics.

The decoding bottleneck is one of the reasons the scaling from d = 7 to d = 25 is not trivial.

NQM detailed hardware plan

The NQM's hardware thrust has five mission hubs:

Budget: about ₹1200 crore each for hubs 1-3, less for 4-5. Each hub is targeting platform-specific milestones over 8 years. By 2027 the Indian superconducting platform targets 50-100 qubits with dynamic circuits (equivalent to IBM's 2019 era); by 2030, 500-1000 qubits with first error-correction results.

India also partners with IBM's Quantum Network (multiple IIT members) and Quantinuum (IIT Madras agreement). This gives Indian students and researchers access to frontier hardware while the domestic programme scales.

For the student considering this area

Experimental quantum computing, as of 2025, is a field where most of the important results are likely still ahead; the next ten years should produce many of them. The pace of progress is accelerating. Every platform has exciting open problems — scaling, noise characterisation, decoding algorithms, magic-state factories, logical-qubit architectures, compiling algorithms. The theoretical side (algorithms, complexity, simulation methods) continues to be rich, but the experimental side is where the frontier is most active right now.

Where this leads next

This chapter closes the quantum-error-correction arc of the curriculum. The next part of the curriculum — Hamiltonian simulation — turns to the application that Feynman proposed in 1982 and that is, in 2025, the most credible near-term practical use of a fault-tolerant quantum computer: simulating quantum chemistry, materials, and high-energy physics.

Beyond simulation, the arc of the book will revisit complexity theory with the lens of fault-tolerance (which complexity classes survive error correction?), touch on near-term NISQ algorithms that do not require fault tolerance (VQE, QAOA), and close with the meta-question of what quantum computing ultimately is for.

References

  1. Google Quantum AI, Quantum error correction below the surface code threshold (2024) — Nature 638, 920 / arXiv:2408.13687.
  2. Quantinuum, Logical entanglement and fault-tolerant gates on the [[7,1,3]] colour code on H2 (2024) — arXiv:2404.02280.
  3. Austin Fowler, Matteo Mariantoni, John Martinis, Andrew Cleland, Surface codes: Towards practical large-scale quantum computation (2012) — arXiv:1208.0928.
  4. Wikipedia, Fault-tolerant quantum computation.
  5. John Preskill, Lecture Notes on Quantum Computation, Chapter 7: Quantum Error Correction — theory.caltech.edu/~preskill/ph229.
  6. Government of India, National Quantum Mission (Cabinet Approval, 2023) — dst.gov.in/national-quantum-mission.