In short
Shor's 9-qubit code (1995) is the first full quantum error-correcting code — the construction that proved QEC was possible. It concatenates the phase-flip and bit-flip codes: take the outer 3-qubit phase-flip code, and replace each of its three physical qubits with a 3-qubit bit-flip-encoded block. Total: 9 physical qubits encoding 1 logical qubit. The encoded logical basis is |0\rangle_L = \tfrac{1}{2\sqrt 2}(|000\rangle + |111\rangle)^{\otimes 3} and |1\rangle_L = \tfrac{1}{2\sqrt 2}(|000\rangle - |111\rangle)^{\otimes 3}. Six inner syndromes (Z Z parities within each block) catch X errors; two outer syndromes (XXXXXX parities between blocks) catch Z errors. Y = iXZ triggers both kinds of syndromes, so it is caught as well — and by the discretisation theorem, correcting \{X, Y, Z\} on each qubit means correcting every single-qubit continuous rotation. Overhead is heavy: 9 physical qubits per logical qubit, before anything practical is even attempted. Modern codes (Steane's 7-qubit, surface code) do better. But this 9-qubit paper was the landmark — it transformed the question "can quantum computers ever work?" from an open problem into an engineering challenge.
By 1994, quantum computing looked impressive on paper and hopeless in practice. Peter Shor had just published the factoring algorithm that gave the field its headline application. But the same Shor knew, better than most, that running the factoring algorithm on real hardware would require suppressing decoherence over millions of gate operations — and the three walls of quantum error correction (no-cloning, continuous errors, measurement collapse — see why QEC is hard) seemed to make that impossible. Many senior physicists of the era believed quantum computing was fundamentally doomed, a mathematically elegant but physically unrealisable idea.
Shor's response, published in 1995 as Scheme for reducing decoherence in quantum computer memory, was to construct, by hand, a nine-qubit encoding that corrects every single-qubit error — X, Y, Z, and any continuous rotation in between. The proof was by construction. If such a code exists, then every error analysis that ended with "decoherence kills quantum computation" became instead "decoherence can be corrected, if you are willing to pay the overhead". The question moved from impossibility to engineering.
This chapter is that construction. You will build the 9-qubit code from the two building blocks of the previous two chapters (bit-flip and phase-flip codes). You will see why stacking them catches every single-qubit error, how the syndromes fit together, and why the discretisation theorem (hinted at in why QEC is hard) converts continuous noise into a finite Pauli correction. And you will see why, despite all of this, modern fault-tolerant quantum computing has mostly moved past Shor's code — to codes with better overhead and better geometric structure, but all built on the template this chapter lays down.
The two building blocks, in 30 seconds
Bit-flip code (3 qubits, catches X errors):
Stabilisers Z_1 Z_2, Z_2 Z_3 measure bit-parity inside the block. Blind to Z errors.
Phase-flip code (3 qubits, catches Z errors):
Stabilisers X_1 X_2, X_2 X_3 measure X-parity. Blind to X errors.
Neither alone is useful against real noise, which has both kinds of errors in comparable amounts. Shor's insight: stack them.
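The blindness of each building block is easy to check numerically. Below is a minimal numpy sketch (the variable names are ours, purely illustrative): apply X and Z errors to a bit-flip codeword and read off the Z_1 Z_2 parity as an expectation value.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def op(a, b, c):
    """Three-qubit tensor product, qubit 1 leftmost."""
    return np.kron(np.kron(a, b), c)

# Bit-flip-encoded (|0>_L + |1>_L)/sqrt(2) = (|000> + |111>)/sqrt(2)
psi = np.zeros(8)
psi[0] = psi[7] = 1 / np.sqrt(2)

Z1Z2 = op(Z, Z, I2)
print(psi @ Z1Z2 @ psi)          # +1: codeword, syndrome clean
x_err = op(X, I2, I2) @ psi
print(x_err @ Z1Z2 @ x_err)      # -1: X on qubit 1 is detected
z_err = op(Z, I2, I2) @ psi
print(z_err @ Z1Z2 @ z_err)      # +1: a Z error is invisible to the ZZ parity
```

The Z error silently turns the codeword into (|000⟩ − |111⟩)/√2 while both ZZ parities still read +1 — exactly the blindness the text describes.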
The construction — concatenate phase-flip with bit-flip
Concatenation means: take the outer code, and replace each of its physical qubits with a codeword of the inner code. For Shor's 9-qubit code, the outer code is the phase-flip code (which has three physical qubits), and each of those three qubits is itself bit-flip-encoded into three physical qubits. Total: 3 \times 3 = 9 physical qubits.
The encoded logical states
Start with a logical qubit \alpha|0\rangle + \beta|1\rangle. Apply the phase-flip encoding:
\alpha|0\rangle + \beta|1\rangle \;\to\; \alpha|{+}{+}{+}\rangle + \beta|{-}{-}{-}\rangle.
Now replace each of the three qubits with a bit-flip-encoded block. The bit-flip encoding maps |0\rangle \to |000\rangle, |1\rangle \to |111\rangle, so it extends linearly to
|+\rangle \to \tfrac{1}{\sqrt 2}(|000\rangle + |111\rangle), \qquad |-\rangle \to \tfrac{1}{\sqrt 2}(|000\rangle - |111\rangle).
Why the bit-flip encoding extends linearly: the encoding is a unitary operation. Unitaries are linear. The encoded version of a linear combination of computational basis states is the same linear combination of their encoded versions. This is also why encoding does not violate no-cloning — linearity of the encoding map is incompatible with cloning (which would have to be non-linear).
Substituting each |+\rangle \to (|000\rangle + |111\rangle)/\sqrt 2 and each |-\rangle \to (|000\rangle - |111\rangle)/\sqrt 2 in the phase-flip encoded state gives the 9-qubit Shor code logical states:
|0\rangle_L = \tfrac{1}{2\sqrt 2}(|000\rangle + |111\rangle)^{\otimes 3}, \qquad |1\rangle_L = \tfrac{1}{2\sqrt 2}(|000\rangle - |111\rangle)^{\otimes 3}.
Why the 1/(2\sqrt 2) normalisation: the phase-flip encoding sends |0\rangle to |{+}{+}{+}\rangle with no extra scalar factor, and each |+\rangle (or |-\rangle) then becomes (|000\rangle \pm |111\rangle)/\sqrt 2, contributing one factor of 1/\sqrt 2 per block. The three blocks multiply to (1/\sqrt 2)^3 = 1/(2\sqrt 2).
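The normalisation and orthogonality of the logical states can be verified directly. A small numpy check (illustrative, with our own variable names):

```python
import numpy as np

plus_b = np.zeros(8)                     # (|000> + |111>)/sqrt(2)
plus_b[0] = plus_b[7] = 1 / np.sqrt(2)
minus_b = np.zeros(8)                    # (|000> - |111>)/sqrt(2)
minus_b[0] = 1 / np.sqrt(2)
minus_b[7] = -1 / np.sqrt(2)

zero_L = np.kron(np.kron(plus_b, plus_b), plus_b)      # |0>_L
one_L = np.kron(np.kron(minus_b, minus_b), minus_b)    # |1>_L

print(np.isclose(np.linalg.norm(zero_L), 1.0))         # True: properly normalised
print(np.isclose(zero_L[0], 1 / (2 * np.sqrt(2))))     # True: <000000000|0_L> = 1/(2*sqrt(2))
print(np.isclose(zero_L @ one_L, 0.0))                 # True: logical states are orthogonal
```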
The encoding circuit
Two stages, in the obvious order.
- Phase-flip stage. On qubits 1, 4, 7 (the "leader" of each block), run the phase-flip encoding: CNOT from qubit 1 to qubits 4 and 7, then Hadamard on qubits 1, 4, 7. This produces the phase-flip-encoded state across the three block leaders, with qubits 2, 3, 5, 6, 8, 9 still in |0\rangle.
- Bit-flip stage. Within each block, CNOT from the leader to its two block-mates. Block A: CNOT from 1 to 2, CNOT from 1 to 3. Block B: CNOT from 4 to 5, CNOT from 4 to 6. Block C: CNOT from 7 to 8, CNOT from 7 to 9.
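The two-stage circuit can be simulated on the full 9-qubit statevector. A self-contained numpy sketch (qubits are 0-indexed here, so the text's qubits 1..9 are indices 0..8 with qubit 0 most significant; the gate helpers are our own):

```python
import numpy as np

n = 9
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)

def apply_h(state, q):
    """Apply Hadamard to qubit q of an n-qubit statevector."""
    t = np.tensordot(H, state.reshape((2,) * n), axes=([1], [q]))
    return np.moveaxis(t, 0, q).reshape(-1)

def apply_cnot(state, c, tq):
    """CNOT with control c, target tq: flip the target axis on the control=1 slice."""
    t = state.reshape((2,) * n).copy()
    sl = [slice(None)] * n
    sl[c] = 1
    ax = tq if tq < c else tq - 1         # target axis index inside the sliced array
    t[tuple(sl)] = np.flip(t[tuple(sl)], axis=ax)
    return t.reshape(-1)

# Stage 1: phase-flip encoding on the block leaders (indices 0, 3, 6 = text qubits 1, 4, 7)
psi = np.zeros(2 ** n)
psi[0] = 1.0                              # |000000000>, data qubit = index 0
psi = apply_cnot(psi, 0, 3)
psi = apply_cnot(psi, 0, 6)
for q in (0, 3, 6):
    psi = apply_h(psi, q)

# Stage 2: bit-flip encoding inside each block
for c, tq in [(0, 1), (0, 2), (3, 4), (3, 5), (6, 7), (6, 8)]:
    psi = apply_cnot(psi, c, tq)

# Compare against the direct product of three GHZ blocks
block = np.zeros(8)
block[0] = block[7] = 1 / np.sqrt(2)
expected = np.kron(np.kron(block, block), block)
print(np.allclose(psi, expected))         # True
```

The final comparison confirms that the circuit output is exactly |0⟩_L as built in the previous section.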
Why every single-qubit error is corrected
Three error types to handle: X, Z, and Y = iXZ — plus continuous errors, which the discretisation theorem reduces to these three. We go through each in turn.
X errors: caught by the inner bit-flip code
An X error on any one of qubits 1 through 9 is a bit-flip within its block. Inside block A (qubits 1, 2, 3), the state before the error is (|000\rangle + |111\rangle)/\sqrt 2 (if the logical qubit is |0\rangle_L) or (|000\rangle - |111\rangle)/\sqrt 2 (for |1\rangle_L). An X_1 error flips the first qubit of the block:
X_1 \tfrac{1}{\sqrt 2}(|000\rangle \pm |111\rangle) = \tfrac{1}{\sqrt 2}(|100\rangle \pm |011\rangle).
The two inner stabilisers Z_1 Z_2 and Z_2 Z_3 within this block now read (-1, +1) — the bit-flip-code syndrome for "error on qubit 1". Apply X_1 to recover. Block A is restored, and the other two blocks are untouched.
Six inner syndromes total (two per block). Each block's two bits give four possible patterns, three of them non-trivial — one for each single-qubit bit-flip location in the block. If a single X error occurs on any of the 9 qubits, exactly one block's inner syndrome fires and uniquely identifies which qubit.
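The inner-syndrome claim can be checked numerically. A numpy sketch (the `pauli` helper and the qubit numbering 1..9 follow the text; the code itself is our illustration):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def pauli(ops):
    """Tensor product over qubits 1..9; `ops` maps qubit number -> 2x2 matrix."""
    m = np.array([[1.0]])
    for q in range(1, 10):
        m = np.kron(m, ops.get(q, I2))
    return m

block = np.zeros(8)
block[0] = block[7] = 1 / np.sqrt(2)
zero_L = np.kron(np.kron(block, block), block)     # |0>_L

inner = [pauli({1: Z, 2: Z}), pauli({2: Z, 3: Z}),
         pauli({4: Z, 5: Z}), pauli({5: Z, 6: Z}),
         pauli({7: Z, 8: Z}), pauli({8: Z, 9: Z})]

err = pauli({5: X}) @ zero_L                       # X error on qubit 5 (middle of block B)
print([int(round(err @ s @ err)) for s in inner])
# [1, 1, -1, -1, 1, 1]: only block B fires, pattern (-1, -1) = "middle qubit"
```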
Z errors: caught by the outer phase-flip code
This is the subtler mechanism, and it is worth going slowly.
Within a block, a single Z error on any qubit has the same effect on the block's logical state. To see this, compute on block A. The block's encoded state for logical |+\rangle_{\text{block}} (in the outer picture, where the block's "qubit" is one of the three phase-flip qubits) is (|000\rangle + |111\rangle)/\sqrt 2. Apply Z_1:
Z_1 \tfrac{1}{\sqrt 2}(|000\rangle + |111\rangle) = \tfrac{1}{\sqrt 2}(|000\rangle - |111\rangle).
Why the sign flip: Z|0\rangle = |0\rangle and Z|1\rangle = -|1\rangle. Z_1|000\rangle = |000\rangle (qubit 1 is in |0\rangle, unchanged). Z_1|111\rangle = -|111\rangle (qubit 1 is in |1\rangle, sign flipped). Subtract to get (|000\rangle - |111\rangle)/\sqrt 2.
Now apply Z_2 to the original state instead:
Z_2 \tfrac{1}{\sqrt 2}(|000\rangle + |111\rangle) = \tfrac{1}{\sqrt 2}(|000\rangle - |111\rangle).
Same answer. And Z_3 gives the same answer again. A Z error on any of the three qubits in a block produces the same block-level effect: it flips the sign of (|000\rangle + |111\rangle) to (|000\rangle - |111\rangle). Block-level, this is exactly the action of a single Z on the corresponding phase-flip-code qubit (which would take |+\rangle \to |-\rangle, and |-\rangle \to |+\rangle in the outer picture).
So within each block, "a Z error anywhere" looks like "the block's phase flipped". That is exactly what the outer phase-flip code is designed to detect. Its stabilisers, applied to blocks A, B, C, are:
S^{\text{outer}}_1 = X_1 X_2 X_3 X_4 X_5 X_6, \qquad S^{\text{outer}}_2 = X_4 X_5 X_6 X_7 X_8 X_9.
That is, the six-qubit X-parity between blocks A and B, and between blocks B and C. These are the phase-flip code's X-parity stabilisers lifted through the bit-flip encoding. Each one has eigenvalue +1 on the encoded space and flips to -1 when a Z error occurs in one of the blocks it spans.
Why the outer stabiliser is X^{\otimes 3} on each block: recall from bit-flip-code that the logical X of the bit-flip code is X_L = X_1 X_2 X_3 (Pauli X on all three physical qubits). The outer phase-flip code's stabilisers are X_1^{\text{outer}} X_2^{\text{outer}} etc., acting on the logical qubits of the inner code. Lifting X^{\text{outer}} through the bit-flip encoding gives X_L = X \otimes X \otimes X on the three physical qubits of that block. So the outer stabiliser X_1^{\text{outer}} X_2^{\text{outer}} becomes (X_1 X_2 X_3)(X_4 X_5 X_6) in physical-qubit notation. Six-qubit X-parity.
So: two outer syndromes, each spanning six qubits, catch any single Z error on any of the nine qubits.
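The same machinery verifies the outer syndromes. An illustrative numpy sketch, using the text's 1..9 qubit numbering:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def pauli(ops):
    """Tensor product over qubits 1..9; `ops` maps qubit number -> 2x2 matrix."""
    m = np.array([[1.0]])
    for q in range(1, 10):
        m = np.kron(m, ops.get(q, I2))
    return m

block = np.zeros(8)
block[0] = block[7] = 1 / np.sqrt(2)
zero_L = np.kron(np.kron(block, block), block)     # |0>_L

outer1 = pauli({q: X for q in (1, 2, 3, 4, 5, 6)})
outer2 = pauli({q: X for q in (4, 5, 6, 7, 8, 9)})

err = pauli({8: Z}) @ zero_L                       # Z error on qubit 8 (block C)
s1 = int(round(err @ outer1 @ err))
s2 = int(round(err @ outer2 @ err))
print(s1, s2)    # 1 -1: outer syndrome (+1, -1) flags block C
```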
Y errors: caught by both codes
Y = iXZ, so a Y error on qubit j is, up to a global phase, an X error followed by a Z error on qubit j. The X part triggers the inner syndrome for that block; the Z part triggers the outer syndrome(s) spanning that block. Both fire. Apply X_j Z_j, which equals Y_j up to a global phase; a global phase is physically unobservable, so the correction is exact. The code recovers.
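That both kinds of syndromes fire can be checked numerically. A numpy sketch (our construction):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Y = np.array([[0., -1j], [1j, 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def pauli(ops):
    """Tensor product over qubits 1..9; `ops` maps qubit number -> 2x2 matrix."""
    m = np.array([[1.0]])
    for q in range(1, 10):
        m = np.kron(m, ops.get(q, I2))
    return m

block = np.zeros(8)
block[0] = block[7] = 1 / np.sqrt(2)
zero_L = np.kron(np.kron(block, block), block)

inner = [pauli({1: Z, 2: Z}), pauli({2: Z, 3: Z}),
         pauli({4: Z, 5: Z}), pauli({5: Z, 6: Z}),
         pauli({7: Z, 8: Z}), pauli({8: Z, 9: Z})]
outer1 = pauli({q: X for q in (1, 2, 3, 4, 5, 6)})
outer2 = pauli({q: X for q in (4, 5, 6, 7, 8, 9)})

err = pauli({2: Y}) @ zero_L                 # Y error on qubit 2
inner_bits = [int(round((err.conj() @ s @ err).real)) for s in inner]
outer_bits = [int(round((err.conj() @ s @ err).real)) for s in (outer1, outer2)]
print(inner_bits)    # [-1, -1, 1, 1, 1, 1]: inner flags qubit 2 of block A
print(outer_bits)    # [-1, 1]: outer flags block A
```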
Continuous errors: the discretisation theorem at work
Any single-qubit unitary U close to the identity can be written
U = a_0 I + a_x X + a_y Y + a_z Z
with complex coefficients a_0, a_x, a_y, a_z. (This is because \{I, X, Y, Z\} span the space of 2\times 2 complex matrices — see why QEC is hard.) Applying U to qubit j gives a superposition of four outcomes — identity, X_j error, Y_j error, Z_j error — with amplitudes a_0, a_x, a_y, a_z respectively.
When you measure the syndromes, the state collapses probabilistically onto one of these four possibilities. If it collapses to "no error" (probability |a_0|^2, typically close to 1 for small errors), do nothing. If it collapses to "X on qubit j" (probability |a_x|^2), apply X_j. Similarly for Y_j and Z_j. The continuous unitary has been discretised into one of four Pauli outcomes, each of which Shor's code handles.
The upshot. Shor's code corrects every single-qubit Pauli error (I, X, Y, Z on any of 9 physical qubits). By the discretisation theorem, it automatically corrects every single-qubit continuous rotation, every single-qubit amplitude-damping operator, every single-qubit depolarising channel — and in fact every single-qubit error channel, since any Kraus operator expands in the Pauli basis. The continuous error space reduces to a discrete Pauli correction after syndrome measurement.
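The discretisation argument can be made concrete: apply a small continuous rotation e^{-i\epsilon Z} to qubit 3 of |0\rangle_L and compute the probability that the affected outer syndrome fires. A numpy sketch (our construction; \epsilon = 0.1 is arbitrary):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def pauli(ops):
    """Tensor product over qubits 1..9; `ops` maps qubit number -> 2x2 matrix."""
    m = np.array([[1.0]])
    for q in range(1, 10):
        m = np.kron(m, ops.get(q, I2))
    return m

block = np.zeros(8)
block[0] = block[7] = 1 / np.sqrt(2)
zero_L = np.kron(np.kron(block, block), block)

eps = 0.1
U = np.cos(eps) * I2 - 1j * np.sin(eps) * Z       # exp(-i*eps*Z): a continuous rotation
err = pauli({3: U}) @ zero_L                      # apply it to qubit 3

outer1 = pauli({q: X for q in (1, 2, 3, 4, 5, 6)})
P_fire = (np.eye(2 ** 9) - outer1) / 2            # projector onto "outer syndrome 1 reads -1"
p_fire = (err.conj() @ P_fire @ err).real
print(np.isclose(p_fire, np.sin(eps) ** 2))       # True: discrete Z_3 outcome, prob sin^2(eps)
```

With probability cos²ε the syndrome stays clean and the state collapses back to |0⟩_L untouched; with probability sin²ε it collapses to Z_3|0⟩_L, which a single discrete Z correction undoes.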
The syndrome circuit summary
Eight syndrome bits, measured by eight ancilla qubits:
- 6 inner bits (two per block): Z_1 Z_2, Z_2 Z_3 on block A; Z_4 Z_5, Z_5 Z_6 on block B; Z_7 Z_8, Z_8 Z_9 on block C. These are the bit-flip-code syndromes of each block, catching X errors.
- 2 outer bits: X_1 X_2 X_3 X_4 X_5 X_6 (blocks A + B parity) and X_4 X_5 X_6 X_7 X_8 X_9 (blocks B + C parity). These are the phase-flip-code syndromes lifted through the bit-flip encoding, catching Z errors.
Each stabiliser is measured with a standard ancilla-based parity circuit (Hadamard on the ancilla, controlled Paulis onto the data qubits, Hadamard on the ancilla, measure). The ancilla returns a classical bit. Eight classical bits total: one 8-bit syndrome.
Not all 2^8 = 256 possible syndromes are used. The addressable errors are: no error (1 pattern), single-qubit X (9), single-qubit Y (9), single-qubit Z (9) — 28 correctable errors in all. The code is degenerate for Z errors: the three Z errors within a block produce identical syndromes, so these 28 errors occupy only 22 distinct syndrome patterns (1 trivial + 9 X + 9 Y + 3 Z). The remaining 234 syndromes correspond to two-or-more-qubit errors, which the code detects but cannot correct (or mis-corrects — applying a spurious "single-qubit" correction that increases the logical error). As with any code, this limitation is the source of the code's error threshold.
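The syndrome count, including the Z-error degeneracy, can be verified by brute force over all 27 single-qubit Paulis. An illustrative numpy sketch:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Y = np.array([[0., -1j], [1j, 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def pauli(ops):
    """Tensor product over qubits 1..9; `ops` maps qubit number -> 2x2 matrix."""
    m = np.array([[1.0]])
    for q in range(1, 10):
        m = np.kron(m, ops.get(q, I2))
    return m

block = np.zeros(8)
block[0] = block[7] = 1 / np.sqrt(2)
zero_L = np.kron(np.kron(block, block), block)

stabs = [pauli({1: Z, 2: Z}), pauli({2: Z, 3: Z}),
         pauli({4: Z, 5: Z}), pauli({5: Z, 6: Z}),
         pauli({7: Z, 8: Z}), pauli({8: Z, 9: Z}),
         pauli({q: X for q in (1, 2, 3, 4, 5, 6)}),
         pauli({q: X for q in (4, 5, 6, 7, 8, 9)})]

def syndrome(state):
    """8-bit syndrome as a tuple of +/-1 eigenvalues."""
    return tuple(int(round((state.conj() @ s @ state).real)) for s in stabs)

seen = {}
for name, P in [('X', X), ('Y', Y), ('Z', Z)]:
    for q in range(1, 10):
        seen.setdefault(syndrome(pauli({q: P}) @ zero_L), []).append(name + str(q))

print(len(seen))                                  # 21 distinct non-trivial syndromes, not 27
print(seen[syndrome(pauli({4: Z}) @ zero_L)])     # ['Z4', 'Z5', 'Z6']: degenerate Z errors
```

Adding the trivial "no error" pattern gives the 22 used syndromes of the text.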
Worked examples
Example 1: encode |0⟩ step by step
Trace the encoding of the classical logical state |0\rangle through the two-stage Shor circuit, seeing the 9-qubit output build up.
Start. Input: |\psi\rangle|0\rangle^{\otimes 8} = |0\rangle|0\rangle|0\rangle|0\rangle|0\rangle|0\rangle|0\rangle|0\rangle|0\rangle = |000000000\rangle. (The data qubit is qubit 1; the rest are ancillas in |0\rangle.)
Stage 1, step A: CNOT from qubit 1 to qubit 4. Qubit 1 is |0\rangle, so the CNOT does nothing. State: |000000000\rangle.
Stage 1, step B: CNOT from qubit 1 to qubit 7. Qubit 1 is |0\rangle, CNOT does nothing. State: |000000000\rangle.
Stage 1, step C: Hadamard on qubits 1, 4, 7. Each of these three qubits is |0\rangle, and H|0\rangle = |+\rangle = (|0\rangle + |1\rangle)/\sqrt 2. Apply to qubits 1, 4, 7 in parallel:
\tfrac{1}{2\sqrt 2}\,(|0\rangle + |1\rangle)_1\,|0\rangle_2\,|0\rangle_3\,(|0\rangle + |1\rangle)_4\,|0\rangle_5\,|0\rangle_6\,(|0\rangle + |1\rangle)_7\,|0\rangle_8\,|0\rangle_9.
Why the factor of 1/(2\sqrt 2): each Hadamard contributes a 1/\sqrt 2; three Hadamards contribute (1/\sqrt 2)^3 = 1/(2\sqrt 2).
This expands to 8 terms — one for each choice of |0\rangle or |1\rangle on qubits 1, 4, 7 — with qubits 2, 3, 5, 6, 8, 9 always in |0\rangle.
Stage 2, block A: CNOT from qubit 1 to qubits 2 and 3. Within block A, whenever qubit 1 is |1\rangle, flip qubits 2 and 3. So |100\rangle \to |111\rangle. The (|0\rangle + |1\rangle)_1 |0\rangle_2 |0\rangle_3 becomes (|000\rangle + |111\rangle) on block A.
Stage 2, block B: CNOT from qubit 4 to qubits 5 and 6. Same action, giving (|000\rangle + |111\rangle) on block B.
Stage 2, block C: CNOT from qubit 7 to qubits 8 and 9. Same action, giving (|000\rangle + |111\rangle) on block C.
Final state.
|0\rangle_L = \tfrac{1}{2\sqrt 2}\,(|000\rangle + |111\rangle) \otimes (|000\rangle + |111\rangle) \otimes (|000\rangle + |111\rangle) = \tfrac{1}{2\sqrt 2}(|000\rangle + |111\rangle)^{\otimes 3}.
Result. The logical |0\rangle is encoded into a 9-qubit state that is a symmetric product of three GHZ-like blocks. Each block is an entangled two-term superposition; the three blocks are tensor-multiplied. Expanding all 8 terms gives: |000000000\rangle contributes with amplitude 1/(2\sqrt 2), |000000111\rangle with the same amplitude, and so on for all 8 patterns where each block is either |000\rangle or |111\rangle.
What this shows. Encoding |0\rangle is a clean tensor product of three GHZ states, with no inter-block entanglement. The entanglement is entirely within each block (between qubits 1-2-3, 4-5-6, 7-8-9). Across blocks, the three are independent. Encoding |1\rangle would give (|000\rangle - |111\rangle) for each block — same structure, sign differences inside blocks. A general logical superposition \alpha|0\rangle + \beta|1\rangle lifts to a sum of the two block-patterns and introduces cross-block correlation.
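The 8-term expansion can be confirmed by listing the nonzero amplitudes of |0\rangle_L. A short numpy check (our construction):

```python
import numpy as np

block = np.zeros(8)
block[0] = block[7] = 1 / np.sqrt(2)               # (|000> + |111>)/sqrt(2)
zero_L = np.kron(np.kron(block, block), block)     # |0>_L

nz = np.flatnonzero(np.abs(zero_L) > 1e-12)
print([format(i, '09b') for i in nz])
# ['000000000', '000000111', '000111000', '000111111',
#  '111000000', '111000111', '111111000', '111111111']
print(np.allclose(zero_L[nz], 1 / (2 * np.sqrt(2))))   # True: all 8 amplitudes equal
```

Exactly the 8 patterns where each block is |000⟩ or |111⟩, each with amplitude 1/(2√2).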
Example 2: detect and correct a Z error on qubit 5
Show end-to-end how Shor's code catches a Z error on the middle qubit of block B (the middle block).
Setup. The encoded state is the logical |+\rangle_L = (|0\rangle_L + |1\rangle_L)/\sqrt 2. We do not need its exact expansion; what matters is that, after a Z_5 error, every term of it has had the block-B sign flipped.
Step 1. The error. Apply Z_5 = I^{\otimes 4} \otimes Z \otimes I^{\otimes 4}. Within block B, Z on any single qubit flips the block-level sign: (|000\rangle + |111\rangle) \to (|000\rangle - |111\rangle). Blocks A and C are unchanged. Why any single Z does the same thing to the block: Z_i acts as the identity on the other two qubits, leaves |000\rangle alone, and flips the sign of its own qubit in |111\rangle — a single overall minus on the |111\rangle term. So Z_i(|000\rangle + |111\rangle) = (|000\rangle - |111\rangle) identically for i = 1, 2, 3.
Step 2. Measure the inner syndromes. Within block B, compute Z_4 Z_5 and Z_5 Z_6 on (|000\rangle - |111\rangle). On |000\rangle: both products = +1. On |111\rangle: both products = +1. The block-B inner syndrome is (+1, +1) — "no bit-flip". Blocks A and C similarly read (+1, +1). Inner syndromes clean.
Step 3. Measure the outer syndromes. Compute S^{\text{outer}}_1 = X_1 X_2 X_3 X_4 X_5 X_6 on the full state. The crucial fact is how X_1 X_2 X_3 acts on each block:
- On block A, X_1 X_2 X_3 (|000\rangle + |111\rangle) = (|111\rangle + |000\rangle) = (|000\rangle + |111\rangle) → eigenvalue +1.
- On block B (post-error), X_4 X_5 X_6 (|000\rangle - |111\rangle) = (|111\rangle - |000\rangle) = -(|000\rangle - |111\rangle) → eigenvalue -1.
So S^{\text{outer}}_1 = (+1)(-1) = -1 on the corrupted state.
Compute S^{\text{outer}}_2 = X_4 X_5 X_6 X_7 X_8 X_9:
- On block B (corrupted), eigenvalue -1.
- On block C, eigenvalue +1.
- Product: -1.
So S^{\text{outer}}_2 = -1.
Step 4. Read the outer syndrome. (s^{\text{outer}}_1, s^{\text{outer}}_2) = (-1, -1). Matching the phase-flip-code syndrome table (from phase-flip-code), this syndrome corresponds to "Z error on the middle block (block B)". Why the middle block: the phase-flip code's syndrome (-1, -1) flags the middle qubit of the three phase-flip qubits, and the middle phase-flip qubit is block B in the lifted code.
Step 5. Apply correction. The syndrome tells us "there is a phase flip somewhere in block B". The code cannot distinguish which of qubits 4, 5, 6 was actually hit — but that is okay, because as we saw in step 1, Z on any of the three qubits has the same block-level effect. So applying Z to any of qubits 4, 5, or 6 undoes the error at the block level. The convention is to apply Z to the "leader" qubit of the block, i.e., Z_4:
Z_4 \tfrac{1}{\sqrt 2}(|000\rangle - |111\rangle) = \tfrac{1}{\sqrt 2}(|000\rangle + |111\rangle).
Why this works despite the error being on qubit 5: the block-level effect of Z_4 (flipping the sign of the |111\rangle term) is identical to the block-level effect Z_5 had, so applying either one undoes a block-level sign flip. In fact the net operator applied is Z_4 Z_5 — one of the inner stabilisers — which acts as the identity on the code space, so the recovery is exact. The code operates on block-level phases, not on individual qubit phases.
Result. Block B is restored to (|000\rangle + |111\rangle)/\sqrt 2 and the logical state is exactly recovered. The amplitude pattern of |+\rangle_L is preserved throughout — no measurement ever touched the logical \alpha, \beta, only the syndrome eigenvalues.
What this shows. Shor's code handles Z errors via the outer phase-flip layer: the outer syndrome identifies the affected block, and any single Z on any qubit of that block cancels the error. The code does not need to know which specific qubit was hit — block-level correction suffices. This is the essential feature of concatenation: the outer code sees "logical qubits" (the blocks), each of which internally handles its own errors, while the outer code handles errors that manifest at the block-logical level.
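The whole of Example 2 fits in a few lines of numpy: inject Z_5, read the outer syndromes, correct with Z_4, and confirm exact recovery. An illustrative sketch (variable names ours):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def pauli(ops):
    """Tensor product over qubits 1..9; `ops` maps qubit number -> 2x2 matrix."""
    m = np.array([[1.0]])
    for q in range(1, 10):
        m = np.kron(m, ops.get(q, I2))
    return m

plus_b = np.zeros(8); plus_b[0] = plus_b[7] = 1 / np.sqrt(2)
minus_b = np.zeros(8); minus_b[0] = 1 / np.sqrt(2); minus_b[7] = -1 / np.sqrt(2)
zero_L = np.kron(np.kron(plus_b, plus_b), plus_b)
one_L = np.kron(np.kron(minus_b, minus_b), minus_b)
plus_L = (zero_L + one_L) / np.sqrt(2)             # logical |+>_L

outer1 = pauli({q: X for q in (1, 2, 3, 4, 5, 6)})
outer2 = pauli({q: X for q in (4, 5, 6, 7, 8, 9)})

err = pauli({5: Z}) @ plus_L                       # Z error on qubit 5
s1 = int(round(err @ outer1 @ err))
s2 = int(round(err @ outer2 @ err))
print(s1, s2)                                      # -1 -1: "phase flip in block B"

fixed = pauli({4: Z}) @ err                        # correct at the block leader, not qubit 5
print(np.isclose(abs(fixed @ plus_L), 1.0))        # True: logical state exactly recovered
```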
Historical significance
Shor's code paper changed the outlook for quantum computing overnight. Before 1995, the consensus in much of the physics community was that decoherence rendered quantum computation impossible at scale — a position articulated publicly by, among others, Rolf Landauer and William Unruh. After 1995, the question was no longer whether decoherence could be defeated but what the overhead would be, how small the physical error rate had to get, and whether fault-tolerant protocols could stack codes to arbitrary depth without new errors sneaking in.
Within 18 months of Shor's paper:
- Andrew Steane (1996) constructed the 7-qubit Steane code, a more efficient single-error-correcting code with beautiful algebraic structure.
- Calderbank and Shor, and independently Steane (1996), generalised both constructions into the CSS codes — a whole family built from classical linear codes.
- Gottesman (1997) introduced the stabiliser formalism, a unifying algebraic language for nearly all quantum codes, including Shor's and Steane's.
- Kitaev (1997 preprint) introduced topological codes on lattices, presaging the surface code.
- Multiple groups independently proved the threshold theorem (late 1990s): if the physical error rate is below a constant, arbitrary logical error suppression is possible.
Shor's 9-qubit code is rarely used in modern fault-tolerant proposals. The surface code has better thresholds and better geometric locality; the Steane code is more compact; Gottesman's broader stabiliser formalism subsumes both. But Shor's is the first, and the easiest to present from first principles — which is why nearly every textbook on quantum error correction opens with it.
Why this matters. The 9-qubit code is the proof that quantum error correction is possible. It is not used in practice, but every code that is used — the surface code, the colour code, bosonic codes, LDPC codes — inherits the same template: encode in entanglement not copies, measure parities not qubits, correct discrete Pauli errors. The 1995 paper is the template; everything since is engineering on top.
Common confusions
- "9 physical qubits per logical qubit is what modern codes use." No. Modern fault-tolerant proposals use the surface code, which requires roughly 2d^2 physical qubits (data plus measurement ancillas) per logical qubit at distance d, typically d \geq 15 for useful suppression — so hundreds to thousands of physical qubits per logical qubit, not 9. Shor's 9-qubit code is distance-3 (corrects one error); it is the minimum viable, not the optimal.
- "Concatenation means tensor product." Tensor product is part of it, but concatenation also requires the outer code's logical operations to be implemented across blocks in a consistent way. The outer stabilisers are not simple products; they are lifted through the inner encoding. This is why S^{\text{outer}}_1 = X_1 X_2 X_3 X_4 X_5 X_6 is a six-qubit operator, not a product-structured two-qubit operator.
- "Shor's code is optimal." Far from it. For correcting single-qubit errors, Steane's 7-qubit code is more compact (7 qubits instead of 9). Asymptotically, the surface code achieves distance d with about 2d^2 qubits, whereas Shor's code is fixed at distance 3. Shor's code is historically important, not rate-optimal.
- "Continuous errors require continuous corrections." No — this is the most counterintuitive feature of QEC. A continuous rotation e^{-i\epsilon Z} requires only a discrete Z correction after syndrome measurement. The measurement projects the continuous error onto one of \{I, X, Y, Z\} probabilistically; whichever outcome the projection yields is then correctable by a single Pauli. You never have to apply a continuous rotation to correct a continuous error.
- "Shor's code was a lucky guess." It was not — it is the explicit concatenation of the phase-flip and bit-flip codes, which Shor knew had to be combined. The nine-qubit structure follows directly from "I need both X and Z protection; concatenate". The hard insight was realising that concatenation works — that the inner code's protection survives intact when viewed through the outer code's lens, and that the discretisation theorem lets this scheme handle arbitrary single-qubit errors.
Going deeper
If you came here to see how Shor's code works and why it was historically pivotal, you have it. This section gives the stabiliser-formalism presentation, explains how the Steane code does the same job in 7 qubits, and sketches the path from Shor to the surface code.
The 8 stabiliser generators
In the stabiliser formalism (see the stabilizer formalism intro), Shor's code is defined by 8 commuting Pauli operators:
Inner stabilisers (bit-flip within blocks):
Z_1 Z_2, \; Z_2 Z_3, \; Z_4 Z_5, \; Z_5 Z_6, \; Z_7 Z_8, \; Z_8 Z_9.
Outer stabilisers (phase-flip between blocks):
X_1 X_2 X_3 X_4 X_5 X_6, \; X_4 X_5 X_6 X_7 X_8 X_9.
All 8 commute pairwise (check: each ZZ stabiliser overlaps each six-qubit X stabiliser on an even number of qubits — zero or two — so the anticommutations cancel; the ZZ stabilisers commute among themselves, as do the two X stabilisers). The code space is the simultaneous +1-eigenspace of all 8: a 2-dimensional subspace of the 512-dimensional 9-qubit Hilbert space — one logical qubit.
Logical operators (one weight-3 representative of each; multiplying by stabilisers gives equivalent forms):
Z_L = X_1 X_2 X_3, \qquad X_L = Z_1 Z_4 Z_7.
The logical operators read out (Z_L) and flip (X_L) the encoded qubit while commuting with every stabiliser. Their minimum weight is 3 — no weight-1 or weight-2 Pauli can act on the logical qubit while slipping past all eight syndrome checks — which is why the code has distance 3 (corrects one error).
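These algebraic claims (pairwise commutation of the 8 generators, and the action of the weight-3 logicals) can be checked by brute force. A numpy sketch, using Z_L = X_1 X_2 X_3 and X_L = Z_1 Z_4 Z_7 as the weight-3 representatives:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def pauli(ops):
    """Tensor product over qubits 1..9; `ops` maps qubit number -> 2x2 matrix."""
    m = np.array([[1.0]])
    for q in range(1, 10):
        m = np.kron(m, ops.get(q, I2))
    return m

stabs = [pauli({1: Z, 2: Z}), pauli({2: Z, 3: Z}),
         pauli({4: Z, 5: Z}), pauli({5: Z, 6: Z}),
         pauli({7: Z, 8: Z}), pauli({8: Z, 9: Z}),
         pauli({q: X for q in (1, 2, 3, 4, 5, 6)}),
         pauli({q: X for q in (4, 5, 6, 7, 8, 9)})]

for i in range(8):                                   # all pairs commute
    for j in range(i + 1, 8):
        assert np.allclose(stabs[i] @ stabs[j], stabs[j] @ stabs[i])

plus_b = np.zeros(8); plus_b[0] = plus_b[7] = 1 / np.sqrt(2)
minus_b = np.zeros(8); minus_b[0] = 1 / np.sqrt(2); minus_b[7] = -1 / np.sqrt(2)
zero_L = np.kron(np.kron(plus_b, plus_b), plus_b)
one_L = np.kron(np.kron(minus_b, minus_b), minus_b)

Z_L = pauli({1: X, 2: X, 3: X})                      # weight-3 logical Z
X_L = pauli({1: Z, 4: Z, 7: Z})                      # weight-3 logical X
print(np.allclose(Z_L @ zero_L, zero_L), np.allclose(Z_L @ one_L, -one_L))   # True True
print(np.allclose(X_L @ zero_L, one_L), np.allclose(X_L @ one_L, zero_L))    # True True
```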
From Shor to Steane — doing it in 7 qubits
Steane's 7-qubit code (1996) achieves the same single-error correction with only 7 qubits. The construction uses the classical [7, 4, 3] Hamming code twice — once for the X-stabilisers, once for the Z-stabilisers. The Hamming code corrects any single classical bit error on 7 bits; using it on both "kinds" of error (via the CSS construction) corrects any single Pauli error on 7 qubits.
The Steane code is also the simplest code where transversal Clifford gates (Hadamard, phase, CNOT between two copies) are implementable — a property Shor's code lacks. This makes Steane a much better starting point for fault-tolerant computation proposals than Shor.
From Steane to the surface code
The Steane code is still only distance-3. Building fault-tolerant quantum computers requires codes with arbitrary distance, so that the logical error rate can be suppressed to arbitrarily low levels.
Concatenation (stacking codes like Russian dolls) is one path: encode 1 logical qubit into 7 physical qubits with Steane, then encode each of those 7 into another 7, getting 7^2 = 49 physical qubits per logical. Each level is still distance-3, but the suppression compounds: if one level maps physical error rate p to roughly p^2/p_{\text{th}}, then k levels map it to p_{\text{th}}(p/p_{\text{th}})^{2^k} — exponentially suppressed logical error for polylogarithmic overhead.
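The compounding suppression can be seen with a toy recursion. This assumes the schematic threshold bound p_{k+1} \approx p_k^2/p_{\text{th}} (illustrative numbers, not a real device model):

```python
# Schematic concatenation recursion: p_{k+1} = p_k**2 / p_th,
# which solves to p_k = p_th * (p / p_th)**(2**k).
p_th = 1e-2     # assumed threshold error rate (illustrative)
p = 1e-3        # assumed physical error rate (illustrative)

rates = [p]
for _ in range(3):
    rates.append(rates[-1] ** 2 / p_th)
print(rates)    # roughly [1e-03, 1e-04, 1e-06, 1e-10]
```

Three levels of concatenation take an illustrative 10^{-3} physical rate down to about 10^{-10}, while the qubit count grows only as 7^k.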
Topological codes (Kitaev 1997, surface code) are a better path. Physical qubits are arranged on a 2D lattice, stabilisers act on neighbouring qubits only, and the distance scales with the linear size L of the lattice. Errors cause a logical failure only when they form chains spanning the lattice — a statistically rare event below threshold. The surface code achieves distance L with about 2L^2 qubits, scaling better than concatenated Steane, and requires only local (nearest-neighbour) operations, matching real hardware topologies. This is why most modern fault-tolerant proposals use the surface code.
Shor's 9-qubit code sits at the bottom of this tree: the first proof of concept. Steane is one step up in efficiency. CSS is the general family. Stabiliser codes are the even more general family. Surface codes are the hardware-aware instantiation.
Indian QEC research and the National Quantum Mission
India's National Quantum Mission (2023, ₹6000 crore, 8-year horizon) lists fault-tolerant quantum computing as one of four thematic hubs. Active Indian research touching on Shor-code-style constructions and their modern descendants includes:
- TIFR Mumbai — theoretical and NMR-experimental QEC. The Indian NMR community has demonstrated implementations of the Steane code and the phase-flip code on 5-to-7-qubit NMR processors since the 2000s.
- IIT Madras — quantum information group with work on stabiliser codes, LDPC codes, and concatenation.
- IISc Bangalore — ongoing work on fault-tolerant protocols and resource estimation for quantum advantage.
- IIT Delhi, IIT Bombay — younger groups contributing to the NQM error-correction working group.
- Raman Research Institute, Bangalore — primarily quantum optics but with interest in photonic QEC.
The NQM explicitly targets reaching logical qubits with below-physical-qubit error rates within its 8-year window — a milestone Shor's code theoretically enables but which, in 2026, has been demonstrated in only a handful of labs worldwide (Google's Willow surface-code distance-7 result, Quantinuum's trapped-ion experiments, IBM's Heron series). Indian groups are collaborators and consumers of this platform-level progress.
What Shor's code does not solve
Shor's 9-qubit code corrects single-qubit errors, in memory, assuming the encoding, syndrome-extraction, and recovery circuits are all error-free. Real quantum hardware grants none of those three assumptions. Fault-tolerant quantum computing adds protocols that keep encoded states safe even when the gates operating on them (including syndrome extraction) are themselves noisy. This requires:
- Transversal gates — implementations that do not spread errors between physical qubits within a block.
- Flag qubits — extra ancillas that detect when syndrome extraction itself introduced errors.
- Magic state distillation — a protocol for producing high-fidelity non-Clifford gate ancillas, since transversal implementations do not cover the full universal gate set.
These concepts are developed in later chapters (Part 14 covers them over roughly 10 chapters). Shor's code is the memory-protection primitive; the fault-tolerance machinery is what turns "memory protection" into "arbitrary long computation".
Where this leads next
- Bit-flip code — the inner code of Shor's construction.
- Phase-flip code — the outer code of Shor's construction.
- Why QEC is hard — the three walls and three insights that motivate the whole field.
- Steane 7-qubit code — Shor's more compact successor, with better algebraic structure.
- CSS codes — the general construction that Shor's and Steane's codes instantiate.
- Stabilizer formalism intro — the algebraic language that unifies every quantum code.
- Threshold theorem — below-threshold physical error rates enable arbitrary logical fidelity.
- Surface code — the leading candidate for fault-tolerant quantum computing.
References
- Peter Shor, Scheme for reducing decoherence in quantum computer memory (1995), Phys. Rev. A 52, R2493 — arXiv:quant-ph/9508027. The original 9-qubit code paper. Open access.
- Andrew Steane, Multiple particle interference and quantum error correction (1996) — arXiv:quant-ph/9601029. The 7-qubit code paper, published one year after Shor's.
- Daniel Gottesman, Stabilizer codes and quantum error correction (PhD thesis, 1997) — arXiv:quant-ph/9705052. The stabiliser-formalism unification of Shor, Steane, and CSS codes.
- John Preskill, Lecture Notes on Quantum Computation, Chapter 7 — theory.caltech.edu/~preskill/ph229. Pedagogical derivation of Shor's code and the full error-correction apparatus.
- Nielsen and Chuang, Quantum Computation and Quantum Information (2010), §10.2 (Shor's code) — Cambridge University Press.
- Wikipedia, Quantum error correction and Shor code — accessible overview with the 9-qubit construction in standard textbook form.