In short

Shor's 9-qubit code (1995) is the first full quantum error-correcting code — the construction that proved QEC was possible. It concatenates the phase-flip and bit-flip codes: take the outer 3-qubit phase-flip code, and replace each of its three physical qubits with a 3-qubit bit-flip-encoded block. Total: 9 physical qubits encoding 1 logical qubit. The encoded logical basis is |0\rangle_L = \tfrac{1}{2\sqrt 2}(|000\rangle + |111\rangle)^{\otimes 3} and |1\rangle_L = \tfrac{1}{2\sqrt 2}(|000\rangle - |111\rangle)^{\otimes 3}. Six inner syndromes (Z Z parities within each block) catch X errors; two outer syndromes (XXXXXX parities between blocks) catch Z errors. Y = iXZ triggers both kinds of syndromes, so it is caught as well — and by the discretisation theorem, correcting \{X, Y, Z\} on each qubit means correcting every single-qubit continuous rotation. Overhead is heavy: 9 physical qubits per logical qubit, before anything practical is even attempted. Modern codes (Steane's 7-qubit, surface code) do better. But this 9-qubit paper was the landmark — it transformed the question "can quantum computers ever work?" from an open problem into an engineering challenge.

By 1994, quantum computing looked impressive on paper and hopeless in practice. Peter Shor had just published the factoring algorithm that gave the field its headline application. But the same Shor knew, better than most, that running the factoring algorithm on real hardware would require suppressing decoherence over millions of gate operations — and the three walls of quantum error correction (no-cloning, continuous errors, measurement collapse — see why QEC is hard) seemed to make that impossible. Many senior physicists of the era believed quantum computing was fundamentally doomed, a mathematically elegant but physically unrealisable idea.

Shor's response, published in 1995 as Scheme for reducing decoherence in quantum computer memory, was to construct, by hand, a nine-qubit encoding that corrects every single-qubit error — X, Y, Z, and any continuous rotation in between. The proof was by construction. If such a code exists, then every error analysis that ended with "decoherence kills quantum computation" became instead "decoherence can be corrected, if you are willing to pay the overhead". The question moved from impossibility to engineering.

This chapter is that construction. You will build the 9-qubit code from the two building blocks of the previous two chapters (bit-flip and phase-flip codes). You will see why stacking them catches every single-qubit error, how the syndromes fit together, and why the discretisation theorem (hinted at in why QEC is hard) converts continuous noise into a finite Pauli correction. And you will see why, despite all of this, modern fault-tolerant quantum computing has mostly moved past Shor's code — to codes with better overhead and better geometric structure, but all built on the template this chapter lays down.

The two building blocks, in 30 seconds

Bit-flip code (3 qubits, catches X errors):

|0\rangle \mapsto |000\rangle, \qquad |1\rangle \mapsto |111\rangle.

Stabilisers Z_1 Z_2, Z_2 Z_3 measure bit-parity inside the block. Blind to Z errors.

Phase-flip code (3 qubits, catches Z errors):

|0\rangle \mapsto |{+}{+}{+}\rangle, \qquad |1\rangle \mapsto |{-}{-}{-}\rangle.

Stabilisers X_1 X_2, X_2 X_3 measure X-parity. Blind to X errors.

Neither alone is useful against real noise, which has both kinds of errors in comparable amounts. Shor's insight: stack them.

The construction — concatenate phase-flip with bit-flip

Concatenation means: take the outer code, and replace each of its physical qubits with a codeword of the inner code. For Shor's 9-qubit code, the outer code is the phase-flip code (which has three physical qubits), and each of those three qubits is itself bit-flip-encoded into three physical qubits. Total: 3 \times 3 = 9 physical qubits.

[Figure: Shor 9-qubit code block structure. Three blocks (A: q₁q₂q₃, B: q₄q₅q₆, C: q₇q₈q₉), each in (|000⟩+|111⟩)/√2; the outer phase-flip code protects between blocks, the inner bit-flip code within each block. 3 blocks × 3 qubits = 9 physical qubits for 1 logical qubit; the sign pattern across blocks encodes |0⟩_L vs |1⟩_L.]
Shor's 9-qubit code as a tree. The three outer "qubits" of the phase-flip code become three blocks (A, B, C) of three physical qubits each, with each block internally protected by the bit-flip code. A $Z$ error on any one physical qubit is detected by the outer code (because within a block, $Z$ on any single qubit acts like a collective $Z$ on the whole block). An $X$ error on any one physical qubit is detected by that block's inner bit-flip syndrome. $Y = iXZ$ triggers both.

The encoded logical states

Start with a logical qubit \alpha|0\rangle + \beta|1\rangle. Apply the phase-flip encoding:

\alpha|0\rangle + \beta|1\rangle \;\mapsto\; \alpha|{+}{+}{+}\rangle + \beta|{-}{-}{-}\rangle.

Now replace each of the three qubits with a bit-flip-encoded block. The bit-flip encoding maps |0\rangle \to |000\rangle, |1\rangle \to |111\rangle, so it extends linearly to

|+\rangle = \tfrac{1}{\sqrt 2}(|0\rangle + |1\rangle) \;\mapsto\; \tfrac{1}{\sqrt 2}(|000\rangle + |111\rangle),
|-\rangle = \tfrac{1}{\sqrt 2}(|0\rangle - |1\rangle) \;\mapsto\; \tfrac{1}{\sqrt 2}(|000\rangle - |111\rangle).

Why the bit-flip encoding extends linearly: the encoding is a unitary operation. Unitaries are linear. The encoded version of a linear combination of computational basis states is the same linear combination of their encoded versions. This is also why encoding does not violate no-cloning — linearity of the encoding map is incompatible with cloning (which would have to be non-linear).

Substituting each |+\rangle \to (|000\rangle + |111\rangle)/\sqrt 2 and each |-\rangle \to (|000\rangle - |111\rangle)/\sqrt 2 in the phase-flip encoded state gives the 9-qubit Shor code logical states:

\boxed{|0\rangle_L \;=\; \tfrac{1}{2\sqrt 2}\,(|000\rangle + |111\rangle)(|000\rangle + |111\rangle)(|000\rangle + |111\rangle)}
\boxed{|1\rangle_L \;=\; \tfrac{1}{2\sqrt 2}\,(|000\rangle - |111\rangle)(|000\rangle - |111\rangle)(|000\rangle - |111\rangle)}

Why the 1/(2\sqrt 2) normalisation: the phase-flip encoding of |0\rangle is |{+}{+}{+}\rangle with no prefactor; each of the three |+\rangle's then contributes a factor of 1/\sqrt 2 when it is bit-flip-encoded, and (1/\sqrt 2)^3 = 1/(2\sqrt 2). The same holds for |1\rangle and the three |-\rangle's.
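The boxed states are easy to sanity-check numerically. A minimal sketch with NumPy (the qubit ordering block A ⊗ block B ⊗ block C and all variable names are choices of the sketch, not from the text):

```python
import numpy as np

# Build the two boxed logical states as explicit 512-dimensional vectors.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

def kron_all(*factors):
    out = np.array([1.0])
    for f in factors:
        out = np.kron(out, f)
    return out

triple0 = kron_all(ket0, ket0, ket0)              # |000>
triple1 = kron_all(ket1, ket1, ket1)              # |111>
block_plus  = (triple0 + triple1) / np.sqrt(2)    # (|000>+|111>)/sqrt 2
block_minus = (triple0 - triple1) / np.sqrt(2)    # (|000>-|111>)/sqrt 2

zero_L = kron_all(block_plus,  block_plus,  block_plus)
one_L  = kron_all(block_minus, block_minus, block_minus)

# Each block carries a 1/sqrt 2; three blocks give 1/(2 sqrt 2) overall.
assert np.isclose(zero_L[0], 1 / (2 * np.sqrt(2)))   # amplitude of |000000000>
assert np.isclose(np.linalg.norm(zero_L), 1.0)
assert np.isclose(np.linalg.norm(one_L), 1.0)
assert np.isclose(zero_L @ one_L, 0.0)               # logical states orthogonal
```

The orthogonality check at the end confirms that the sign pattern across blocks really does distinguish the two logical states.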

The encoding circuit

Two stages, in the obvious order.

  1. Phase-flip stage. On qubits 1, 4, 7 (the "leader" of each block), run the phase-flip encoding: CNOT from qubit 1 to qubits 4 and 7, then Hadamard qubits 1, 4, 7. This produces the phase-flip-encoded state across the three block leaders, with qubits 2, 3, 5, 6, 8, 9 still in |0\rangle.

  2. Bit-flip stage. Within each block, CNOT from the leader to its two block-mates. Block A: CNOT from 1 to 2, CNOT from 1 to 3. Block B: CNOT from 4 to 5, CNOT from 4 to 6. Block C: CNOT from 7 to 8, CNOT from 7 to 9.

[Figure: Shor 9-qubit code encoding circuit. Nine wires; the top wire carries the data qubit |ψ⟩ = α|0⟩+β|1⟩, the rest start in |0⟩. Stage 1: CNOTs from qubit 1 to qubits 4 and 7, then Hadamards on qubits 1, 4, 7. Stage 2: within each block, CNOTs from the leader to its block-mates (1→2, 1→3; 4→5, 4→6; 7→8, 7→9). Output: the encoded 9-qubit state.]
Shor's encoding circuit in two stages. Stage 1 (left) is the phase-flip encoding: two CNOTs from qubit 1 to qubits 4 and 7 (the block leaders), then a Hadamard on each of qubits 1, 4, 7. Stage 2 (right) is the inner bit-flip encoding: within each block, CNOT from the leader to its two block-mates. Total: 6 CNOTs and 3 Hadamards.
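The two-stage circuit can be simulated directly on a 9-qubit state vector. A sketch with NumPy, using 0-indexed qubits (the text's qubit 1 is index 0, the block leaders are indices 0, 3, 6); the bit-twiddling gate helpers are conveniences of the sketch:

```python
import numpy as np

N = 9
INV = np.sqrt(0.5)

def apply_h(state, q):
    # Hadamard on qubit q: mix each amplitude pair differing in that bit
    s = state.copy()
    m = 1 << (N - 1 - q)
    for i in range(len(s)):
        if not i & m:
            a, b = s[i], s[i | m]
            s[i], s[i | m] = (a + b) * INV, (a - b) * INV
    return s

def apply_cnot(state, c, t):
    # CNOT: swap the t=0 / t=1 amplitudes wherever the control bit is 1
    s = state.copy()
    cm, tm = 1 << (N - 1 - c), 1 << (N - 1 - t)
    for i in range(len(s)):
        if (i & cm) and not (i & tm):
            s[i], s[i | tm] = s[i | tm], s[i]
    return s

state = np.zeros(2 ** N)
state[0] = 1.0                    # |0>|00000000>: encode logical |0>

# Stage 1: outer phase-flip encoding on the block leaders 0, 3, 6
for t in (3, 6):
    state = apply_cnot(state, 0, t)
for q in (0, 3, 6):
    state = apply_h(state, q)

# Stage 2: inner bit-flip encoding inside each block
for leader in (0, 3, 6):
    state = apply_cnot(state, leader, leader + 1)
    state = apply_cnot(state, leader, leader + 2)

# Compare with the boxed |0>_L: product of three (|000>+|111>)/sqrt 2 blocks
block = np.zeros(8); block[0] = block[7] = INV
zero_L = np.kron(np.kron(block, block), block)
assert np.allclose(state, zero_L)
```

Changing the initial state to `state[2**(N-1)] = 1.0` (data qubit in |1⟩) produces the |1⟩_L pattern with a minus sign in each block.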

Why every single-qubit error is corrected

Three error types to handle: X, Z, and Y = iXZ. Plus continuous errors, which the discretisation theorem reduces to these three. Go through each.

X errors: caught by the inner bit-flip code

An X error on any one of qubits 1 through 9 is a bit-flip within its block. Inside block A (qubits 1, 2, 3), the state before the error is (|000\rangle + |111\rangle)/\sqrt 2 (if the logical qubit is |0\rangle_L) or (|000\rangle - |111\rangle)/\sqrt 2 (for |1\rangle_L). An X_1 error flips the first qubit of the block:

\tfrac{1}{\sqrt 2}(|000\rangle + |111\rangle) \;\xrightarrow{X_1}\; \tfrac{1}{\sqrt 2}(|100\rangle + |011\rangle).

The two inner stabilisers Z_1 Z_2 and Z_2 Z_3 within this block now read (-1, +1) — the bit-flip-code syndrome for "error on qubit 1". Apply X_1 to recover. Block A is restored, and the other two blocks are untouched.

Six inner syndromes total (two per block). Each block's pair of bits gives four possible patterns, of which the three non-trivial ones each flag one of the block's three single-qubit bit-flip locations. If any single X error occurs on any of the 9 qubits, exactly one block's inner syndrome fires and uniquely identifies which qubit.
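This detect-and-recover cycle for an X error can be checked on the state vector. A sketch with NumPy, 0-indexed (the text's qubit 1 is index 0); the `pauli_string` helper is a convenience of the sketch:

```python
import numpy as np

N = 9
PAULI = {'I': np.eye(2),
         'X': np.array([[0., 1.], [1., 0.]]),
         'Z': np.array([[1., 0.], [0., -1.]])}

def pauli_string(ops):
    # ops maps qubit index -> 'X' or 'Z'; returns the 512x512 operator
    out = np.array([[1.0]])
    for q in range(N):
        out = np.kron(out, PAULI[ops.get(q, 'I')])
    return out

block = np.zeros(8); block[0] = block[7] = 1 / np.sqrt(2)   # (|000>+|111>)/sqrt 2
zero_L = np.kron(np.kron(block, block), block)

corrupted = pauli_string({0: 'X'}) @ zero_L       # X error on the text's qubit 1

# Inner stabilisers of block A: Z1 Z2 and Z2 Z3 (indices 0,1 and 1,2);
# the state is an eigenstate, so the expectation value is the syndrome bit
s1 = corrupted @ pauli_string({0: 'Z', 1: 'Z'}) @ corrupted
s2 = corrupted @ pauli_string({1: 'Z', 2: 'Z'}) @ corrupted
assert np.isclose(s1, -1.0) and np.isclose(s2, +1.0)   # (-1,+1): "error on qubit 1"

# Applying X to the flagged qubit restores the encoded state exactly
recovered = pauli_string({0: 'X'}) @ corrupted
assert np.allclose(recovered, zero_L)
```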

Z errors: caught by the outer phase-flip code

This is the subtler mechanism, and it is worth going slowly.

Within a block, a single Z error on any qubit has the same effect on the block's logical state. To see this, compute on block A. The block's encoded state for logical |+\rangle_{\text{block}} (in the outer picture, where the block's "qubit" is one of the three phase-flip qubits) is (|000\rangle + |111\rangle)/\sqrt 2. Apply Z_1:

Z_1 \tfrac{1}{\sqrt 2}(|000\rangle + |111\rangle) \;=\; \tfrac{1}{\sqrt 2}(|000\rangle - |111\rangle).

Why the sign flip: Z|0\rangle = |0\rangle and Z|1\rangle = -|1\rangle. Z_1|000\rangle = |000\rangle (qubit 1 is in |0\rangle, unchanged). Z_1|111\rangle = -|111\rangle (qubit 1 is in |1\rangle, sign flipped). Subtract to get (|000\rangle - |111\rangle)/\sqrt 2.

Now apply Z_2 to the original state instead:

Z_2 \tfrac{1}{\sqrt 2}(|000\rangle + |111\rangle) \;=\; \tfrac{1}{\sqrt 2}(|000\rangle - |111\rangle).

Same answer. And Z_3 gives the same answer again. A Z error on any of the three qubits in a block produces the same block-level effect: it flips the sign of (|000\rangle + |111\rangle) to (|000\rangle - |111\rangle). Block-level, this is exactly the action of a single Z on the corresponding phase-flip-code qubit (which would take |+\rangle \to |-\rangle, and |-\rangle \to |+\rangle in the outer picture).

So within each block, "a Z error anywhere" looks like "the block's phase flipped". That is exactly what the outer phase-flip code is designed to detect. Its stabilisers, applied to blocks A, B, C, are:

S^{\text{outer}}_1 \;=\; X_1 X_2 X_3 \, X_4 X_5 X_6, \qquad S^{\text{outer}}_2 \;=\; X_4 X_5 X_6 \, X_7 X_8 X_9.

That is, the six-qubit X-parity between blocks A and B, and between blocks B and C. These are the phase-flip code's X-parity stabilisers lifted through the bit-flip encoding. Each one has eigenvalue +1 on the encoded space and flips to -1 when a Z error occurs in one of the blocks it spans.

Why the outer stabiliser is X^{\otimes 3} on each block: recall from bit-flip-code that the logical X of the bit-flip code is X_L = X_1 X_2 X_3 (Pauli X on all three physical qubits). The outer phase-flip code's stabilisers are X_1^{\text{outer}} X_2^{\text{outer}} etc., acting on the logical qubits of the inner code. Lifting X^{\text{outer}} through the bit-flip encoding gives X_L = X \otimes X \otimes X on the three physical qubits of that block. So the outer stabiliser X_1^{\text{outer}} X_2^{\text{outer}} becomes (X_1 X_2 X_3)(X_4 X_5 X_6) in physical-qubit notation. Six-qubit X-parity.

So: two outer syndromes, each spanning six qubits, catch any single Z error on any of the nine qubits.
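Both claims of this section, that any Z inside a block corrupts the state identically, and that the outer six-qubit X-parities detect it, can be verified numerically. A sketch with NumPy, 0-indexed (the text's qubits 1, 2, 3 of block A are indices 0, 1, 2):

```python
import numpy as np

N = 9
PAULI = {'I': np.eye(2),
         'X': np.array([[0., 1.], [1., 0.]]),
         'Z': np.array([[1., 0.], [0., -1.]])}

def pauli_string(ops):
    out = np.array([[1.0]])
    for q in range(N):
        out = np.kron(out, PAULI[ops.get(q, 'I')])
    return out

block = np.zeros(8); block[0] = block[7] = 1 / np.sqrt(2)
zero_L = np.kron(np.kron(block, block), block)

# Z on qubit 1, 2 or 3 of block A corrupts the state in exactly the same way
z_states = [pauli_string({q: 'Z'}) @ zero_L for q in (0, 1, 2)]
assert np.allclose(z_states[0], z_states[1])
assert np.allclose(z_states[1], z_states[2])

# Outer stabilisers: X1..X6 spans blocks A and B; X4..X9 spans B and C
outer1 = pauli_string({q: 'X' for q in range(0, 6)})
outer2 = pauli_string({q: 'X' for q in range(3, 9)})
corrupted = z_states[0]
assert np.isclose(corrupted @ outer1 @ corrupted, -1.0)  # fires: error in block A
assert np.isclose(corrupted @ outer2 @ corrupted, +1.0)  # silent: block A not spanned
```

The syndrome pair (−1, +1) flags block A, exactly as the phase-flip code's syndrome table flags its first qubit.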

Y errors: caught by both codes

Y = iXZ (a global phase times XZ). A Y error on qubit j is therefore, up to phase, an X error and a Z error on the same qubit. The X part triggers the inner syndrome for that block; the Z part triggers the outer syndrome(s) spanning that block. Both fire. Apply Y_j (equivalently X_j Z_j, which differs only by a global phase, and a global phase is unobservable). The code recovers.
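A numerical check that a Y error fires both syndrome types at once, and that applying Y again (Y² = I) recovers the state exactly. A sketch with NumPy, 0-indexed (the text's qubit 1 is index 0):

```python
import numpy as np

N = 9
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Y = 1j * X @ Z                               # Y = iXZ = [[0, -i], [i, 0]]

def op_on(q, M):
    # single-qubit operator M on qubit q, identity elsewhere
    out = np.array([[1.0 + 0j]])
    for k in range(N):
        out = np.kron(out, M if k == q else I2)
    return out

block = np.zeros(8, dtype=complex); block[0] = block[7] = 1 / np.sqrt(2)
zero_L = np.kron(np.kron(block, block), block)

corrupted = op_on(0, Y) @ zero_L              # Y error on the text's qubit 1

inner = op_on(0, Z) @ op_on(1, Z)             # Z1 Z2, inner stabiliser of block A
outer = op_on(0, X)
for q in range(1, 6):                          # X1 X2 X3 X4 X5 X6
    outer = outer @ op_on(q, X)

assert np.isclose((corrupted.conj() @ inner @ corrupted).real, -1.0)  # X part fires
assert np.isclose((corrupted.conj() @ outer @ corrupted).real, -1.0)  # Z part fires
assert np.allclose(op_on(0, Y) @ corrupted, zero_L)   # Y^2 = I: exact recovery
```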

Continuous errors: the discretisation theorem at work

Any single-qubit unitary U close to the identity can be written

U \;=\; a_0 I + a_x X + a_y Y + a_z Z

with complex coefficients a_0, a_x, a_y, a_z. (This is because \{I, X, Y, Z\} span the space of 2\times 2 complex matrices — see why QEC is hard.) Applying U to qubit j gives a superposition of four outcomes: identity, X_j error, Y_j error, Z_j error, each with amplitude a_0, a_x, a_y, a_z.

When you measure the syndromes, the state collapses probabilistically onto one of these four possibilities. If it collapses to "no error" (probability |a_0|^2, typically close to 1 for small errors), do nothing. If it collapses to "X on qubit j" (probability |a_x|^2), apply X_j. Similarly for Y_j and Z_j. The continuous unitary has been discretised into one of four Pauli outcomes, each of which Shor's code handles.
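Discretisation can be watched happening. The sketch below (NumPy, 0-indexed qubits, illustrative rotation angle) applies a small coherent X-rotation to one qubit of the encoded state; because the "no error" and "X error" components live in different syndrome sectors, they are orthogonal, and the collapse probabilities come out as cos²θ and sin²θ exactly as claimed:

```python
import numpy as np

N = 9
theta = 0.3                                   # illustrative small rotation

block = np.zeros(8); block[0] = block[7] = 1 / np.sqrt(2)
zero_L = np.kron(np.kron(block, block), block).astype(complex)

X = np.array([[0, 1], [1, 0]], dtype=complex)
def x_on(q):
    out = np.array([[1.0 + 0j]])
    for k in range(N):
        out = np.kron(out, X if k == q else np.eye(2))
    return out

# U = cos(theta) I - i sin(theta) X on qubit 0: a continuous coherent error
corrupted = np.cos(theta) * zero_L - 1j * np.sin(theta) * (x_on(0) @ zero_L)

# The two branches are orthogonal (different syndrome sectors), so the
# probability of collapsing onto "no error" is |a_0|^2 = cos^2(theta)
p_no_error = abs(zero_L.conj() @ corrupted) ** 2
assert np.isclose(p_no_error, np.cos(theta) ** 2)

# If the syndrome instead collapses onto "X on qubit 0", applying X
# recovers the encoded state exactly (up to an irrelevant global phase)
err_branch = corrupted - (zero_L.conj() @ corrupted) * zero_L
err_branch /= np.linalg.norm(err_branch)
fidelity = abs(zero_L.conj() @ (x_on(0) @ err_branch))
assert np.isclose(fidelity, 1.0)
```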

The upshot. Shor's code corrects every single-qubit Pauli error (I, X, Y, Z on any of 9 physical qubits). By the discretisation theorem, it automatically corrects every single-qubit continuous rotation, every single-qubit amplitude-damping operator, every single-qubit depolarising channel, and indeed every single-qubit noise channel, since any Kraus operator on one qubit expands in the Pauli basis. The continuous error space reduces to a discrete Pauli correction after syndrome measurement.

[Table figure: how Shor's code catches each error type. X on qubit j: inner syndrome (the block containing j) fires, correct with X on qubit j. Z on qubit j: outer syndrome(s) spanning that block fire, correct with Z. Y on qubit j: both fire, correct with Y. Continuous rotation: projected by syndrome measurement onto one of {I, X, Y, Z}, then corrected with the matching Pauli. Every case ends in exact recovery.]
Shor's code handles every single-qubit error via the inner–outer syndrome pair. $X$ errors fire an inner syndrome; $Z$ errors fire an outer syndrome; $Y$ fires both. Continuous rotations collapse to one of these three (plus identity) under syndrome measurement and are then corrected by the same Pauli operations.

The syndrome circuit summary

Eight syndrome bits, one per stabiliser, each measured with its own ancilla qubit.

Each stabiliser is measured with a standard ancilla-based parity circuit (Hadamard on the ancilla, controlled Paulis onto the data qubits, Hadamard on the ancilla, measure). The ancilla returns a classical bit. Eight classical bits total: one 8-bit syndrome.

Not all 2^8 = 256 possible syndromes are used, and the code is degenerate: the three Z errors within one block share a single syndrome, which is harmless because they also share a correction. The addressable errors are: no error, single-qubit X (9 distinct syndromes), single-qubit Y (9 distinct syndromes), single-qubit Z (3 distinct syndromes, one per block). That is 28 correctable errors mapping onto 22 distinct syndromes. The remaining syndromes correspond to two-or-more-qubit errors, which the code can detect but cannot reliably correct (a spurious "single-qubit" correction may even compound the damage into a logical error). As with any distance-3 code, these multi-qubit errors are what set the code's error threshold.
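The syndrome bookkeeping can be done symbolically, with no state vectors: a syndrome bit flips exactly when the error anticommutes with that stabiliser. A sketch (stabilisers written as 9-character Pauli strings, 0-indexed qubits) that enumerates all 27 single-qubit Pauli errors and exhibits the degeneracy of the Z errors:

```python
# Stabilisers as 9-character strings over {I, X, Z}
stabs = [
    "ZZIIIIIII", "IZZIIIIII",        # inner, block A
    "IIIZZIIII", "IIIIZZIII",        # inner, block B
    "IIIIIIZZI", "IIIIIIIZZ",        # inner, block C
    "XXXXXXIII", "IIIXXXXXX",        # outer
]

def syndrome(pauli, q):
    # bit = 1 iff the single-qubit error anticommutes with the stabiliser,
    # i.e. the stabiliser acts on qubit q with a *different* non-identity Pauli
    return tuple(1 if s[q] != 'I' and s[q] != pauli else 0 for s in stabs)

table = {}
for pauli in "XYZ":
    for q in range(9):
        table[(pauli, q)] = syndrome(pauli, q)

distinct = set(table.values()) | {(0,) * 8}          # include "no error"
assert len(distinct) == 22          # 28 correctable errors, 22 distinct syndromes

# All three Z errors inside one block share a syndrome (and share a fix)...
assert table[('Z', 3)] == table[('Z', 4)] == table[('Z', 5)]
# ...while X and Y errors are resolved down to the individual qubit
assert len({table[('X', q)] for q in range(9)}) == 9
assert len({table[('Y', q)] for q in range(9)}) == 9
```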

Worked examples

Example 1: encode |0⟩ step by step

Trace the encoding of the logical basis state |0\rangle through the two-stage Shor circuit, watching the 9-qubit output build up.

Start. Input: |\psi\rangle|0\rangle^{\otimes 8} = |0\rangle|0\rangle|0\rangle|0\rangle|0\rangle|0\rangle|0\rangle|0\rangle|0\rangle = |000000000\rangle. (The data qubit is qubit 1; the rest are ancillas in |0\rangle.)

Stage 1, step A: CNOT from qubit 1 to qubit 4. Qubit 1 is |0\rangle, so the CNOT does nothing. State: |000000000\rangle.

Stage 1, step B: CNOT from qubit 1 to qubit 7. Qubit 1 is |0\rangle, CNOT does nothing. State: |000000000\rangle.

Stage 1, step C: Hadamard on qubits 1, 4, 7. Each of these three qubits is |0\rangle, and H|0\rangle = |+\rangle = (|0\rangle + |1\rangle)/\sqrt 2. Apply to qubits 1, 4, 7 in parallel:

\tfrac{1}{2\sqrt 2}(|0\rangle + |1\rangle)_1 |0\rangle_2 |0\rangle_3 (|0\rangle + |1\rangle)_4 |0\rangle_5 |0\rangle_6 (|0\rangle + |1\rangle)_7 |0\rangle_8 |0\rangle_9.

Why the factor of 1/(2\sqrt 2): each Hadamard contributes a 1/\sqrt 2; three Hadamards contribute (1/\sqrt 2)^3 = 1/(2\sqrt 2).

This expands to 8 terms — one for each choice of |0\rangle or |1\rangle on qubits 1, 4, 7 — with qubits 2, 3, 5, 6, 8, 9 always in |0\rangle.

Stage 2, block A: CNOT from qubit 1 to qubits 2 and 3. Within block A, whenever qubit 1 is |1\rangle, flip qubits 2 and 3. So |100\rangle \to |111\rangle. The (|0\rangle + |1\rangle)_1 |0\rangle_2 |0\rangle_3 becomes (|000\rangle + |111\rangle) on block A.

Stage 2, block B: CNOT from qubit 4 to qubits 5 and 6. Same action, giving (|000\rangle + |111\rangle) on block B.

Stage 2, block C: CNOT from qubit 7 to qubits 8 and 9. Same action, giving (|000\rangle + |111\rangle) on block C.

Final state.

|0\rangle_L \;=\; \tfrac{1}{2\sqrt 2}(|000\rangle + |111\rangle)_A (|000\rangle + |111\rangle)_B (|000\rangle + |111\rangle)_C.

Result. The logical |0\rangle is encoded into a 9-qubit state that is a symmetric product of three GHZ-like blocks. Each block is an entangled two-term superposition; the three blocks are tensor-multiplied. Expanding all 8 terms gives: |000000000\rangle contributes with amplitude 1/(2\sqrt 2), |000000111\rangle with the same amplitude, and so on for all 8 patterns where each block is either |000\rangle or |111\rangle.

What this shows. Encoding |0\rangle is a clean tensor product of three GHZ states, with no inter-block entanglement. The entanglement is entirely within each block (between qubits 1-2-3, 4-5-6, 7-8-9). Across blocks, the three are independent. Encoding |1\rangle would give (|000\rangle - |111\rangle) for each block — same structure, sign differences inside blocks. A general logical superposition \alpha|0\rangle + \beta|1\rangle lifts to a sum of the two block-patterns and introduces cross-block correlation.

Example 2: detect and correct a Z error on qubit 5

Show end-to-end how Shor's code catches a Z error on the middle qubit of block B (the middle block).

Setup. The encoded state is the logical |+\rangle_L = (|0\rangle_L + |1\rangle_L)/\sqrt 2. We do not need its full expansion; what matters is that after a Z_5 error, the sign structure of block B has flipped while blocks A and C are untouched.

Step 1. The error. Apply Z_5 = I^{\otimes 4} \otimes Z \otimes I^{\otimes 4}. Within block B, Z on any single qubit flips the block-level sign: (|000\rangle + |111\rangle) \to (|000\rangle - |111\rangle). Blocks A and C are unchanged. Why any single Z does the same thing to the block: Z_i acts as the identity on every qubit except qubit i, and on the |111\rangle term qubit i contributes exactly one factor of -1, while the |000\rangle term is untouched. The computation is identical for each qubit of the block, so Z_i(|000\rangle + |111\rangle) = |000\rangle - |111\rangle holds for all three choices of i.

Step 2. Measure the inner syndromes. Within block B, compute Z_4 Z_5 and Z_5 Z_6 on (|000\rangle - |111\rangle). On |000\rangle: both products = +1. On |111\rangle: both products = +1. The block-B inner syndrome is (+1, +1) — "no bit-flip". Blocks A and C similarly read (+1, +1). Inner syndromes clean.

Step 3. Measure the outer syndromes. Compute S^{\text{outer}}_1 = X_1 X_2 X_3 X_4 X_5 X_6 on the full state. The crucial fact is how X_1 X_2 X_3 acts on each block:

  • On block A, X_1 X_2 X_3 (|000\rangle + |111\rangle) = (|111\rangle + |000\rangle) = (|000\rangle + |111\rangle) → eigenvalue +1.
  • On block B (post-error), X_4 X_5 X_6 (|000\rangle - |111\rangle) = (|111\rangle - |000\rangle) = -(|000\rangle - |111\rangle) → eigenvalue -1.

So S^{\text{outer}}_1 = (+1)(-1) = -1 on the corrupted state.

Compute S^{\text{outer}}_2 = X_4 X_5 X_6 X_7 X_8 X_9:

  • On block B (corrupted), eigenvalue -1.
  • On block C, eigenvalue +1.
  • Product: -1.

So S^{\text{outer}}_2 = -1.

Step 4. Read the outer syndrome. (s^{\text{outer}}_1, s^{\text{outer}}_2) = (-1, -1). Matching the phase-flip-code syndrome table (from phase-flip-code), this syndrome corresponds to "Z error on the middle block (block B)". Why the middle block: the phase-flip code's syndrome (-1, -1) flags the middle qubit of the three phase-flip qubits, and the middle phase-flip qubit is block B in the lifted code.

Step 5. Apply correction. The syndrome tells us "there is a phase flip somewhere in block B". The code cannot distinguish which of qubits 4, 5, 6 was actually hit — but that is okay, because as we saw in step 1, Z on any of the three qubits has the same block-level effect. So applying Z to any of qubits 4, 5, or 6 undoes the error at the block level. The convention is to apply Z to the "leader" qubit of the block, i.e., Z_4:

Z_4 (|000\rangle - |111\rangle) \;=\; |000\rangle - (-1)|111\rangle \;=\; |000\rangle + |111\rangle.

Why this works despite the error being on qubit 5: the block-level effect of Z on qubit 4 (flipping the sign of the |111\rangle term) is identical to the block-level effect the Z on qubit 5 had. Applying either one undoes a block-level sign flip. The code operates on block-level phases, not on individual qubit phases.

Result. Block B is restored to (|000\rangle + |111\rangle)/\sqrt 2 and the logical state is exactly recovered. The amplitude pattern of |+\rangle_L is preserved throughout — no measurement ever touched the logical \alpha, \beta, only the syndrome eigenvalues.

[Figure: correcting a Z₅ error in Shor's code, in four stages. (1) Encoded |+⟩_L, all stabilisers +1. (2) Z₅ flips block B from (|000⟩+|111⟩) to (|000⟩−|111⟩). (3) Outer syndrome reads (−1, −1), flagging block B. (4) Z₄ applied; all blocks restored, |+⟩_L recovered. The outer code identifies the block, not the qubit, and that is all it needs; the same cycle handles X errors via inner syndromes and Y errors via both.]
One correction cycle for a $Z$ error on qubit 5 (middle of block B). The inner syndromes are clean (the error is not a bit-flip). The outer syndromes both read $-1$, flagging block B. Applying $Z$ to any qubit in block B undoes the block-level phase flip, and the logical state is exactly recovered.

What this shows. Shor's code handles Z errors via the outer phase-flip layer: the outer syndrome identifies the affected block, and any single Z on any qubit of that block cancels the error. The code does not need to know which specific qubit was hit — block-level correction suffices. This is the essential feature of concatenation: the outer code sees "logical qubits" (the blocks), each of which internally handles its own errors, while the outer code handles errors that manifest at the block-logical level.
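Example 2 runs end-to-end in a few lines of NumPy. A sketch, 0-indexed (the text's qubits 4, 5, 6 of block B are indices 3, 4, 5):

```python
import numpy as np

N = 9
PAULI = {'I': np.eye(2),
         'X': np.array([[0., 1.], [1., 0.]]),
         'Z': np.array([[1., 0.], [0., -1.]])}

def pauli_string(ops):
    out = np.array([[1.0]])
    for q in range(N):
        out = np.kron(out, PAULI[ops.get(q, 'I')])
    return out

plus_b  = np.zeros(8); plus_b[0],  plus_b[7]  = 1/np.sqrt(2),  1/np.sqrt(2)
minus_b = np.zeros(8); minus_b[0], minus_b[7] = 1/np.sqrt(2), -1/np.sqrt(2)
zero_L = np.kron(np.kron(plus_b,  plus_b),  plus_b)
one_L  = np.kron(np.kron(minus_b, minus_b), minus_b)
plus_L = (zero_L + one_L) / np.sqrt(2)

corrupted = pauli_string({4: 'Z'}) @ plus_L        # Z error on qubit 5 (index 4)

# All six inner syndromes stay clean: the error is not a bit flip
for a in (0, 1, 3, 4, 6, 7):
    s = pauli_string({a: 'Z', a + 1: 'Z'})
    assert np.isclose(corrupted @ s @ corrupted, +1.0)

# Both outer syndromes fire: block B sits inside both six-qubit X-parities
outer1 = pauli_string({q: 'X' for q in range(0, 6)})
outer2 = pauli_string({q: 'X' for q in range(3, 9)})
assert np.isclose(corrupted @ outer1 @ corrupted, -1.0)
assert np.isclose(corrupted @ outer2 @ corrupted, -1.0)

# Correct with Z on the block leader (qubit 4, index 3): exact recovery
recovered = pauli_string({3: 'Z'}) @ corrupted
assert np.allclose(recovered, plus_L)
```

The final assertion confirms the central point of the example: the correction lands on a different qubit than the error, yet the logical state is restored exactly.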

Historical significance

Shor's code paper changed the outlook for quantum computing overnight. Before 1995, the consensus in much of the physics community was that decoherence rendered quantum computation impossible at scale — a position articulated publicly by, among others, Rolf Landauer and William Unruh. After 1995, the question was no longer whether decoherence could be defeated but what the overhead would be, how small the physical error rate had to get, and whether fault-tolerant protocols could stack codes to arbitrary depth without new errors sneaking in.

Within 18 months of Shor's paper, Steane's 7-qubit code, the general CSS construction, and the first fault-tolerance protocols had all appeared; Gottesman's stabiliser formalism followed soon after.

Shor's 9-qubit code is rarely used in modern fault-tolerant proposals. The surface code has better thresholds and better geometric locality; the Steane code is more compact; Gottesman's broader stabiliser formalism subsumes both. But Shor's is the first, and the easiest to present from first principles — which is why nearly every textbook on quantum error correction opens with it.

Why this matters. The 9-qubit code is the proof that quantum error correction is possible. It is not used in practice, but every code that is used — the surface code, the colour code, bosonic codes, LDPC codes — inherits the same template: encode in entanglement not copies, measure parities not qubits, correct discrete Pauli errors. The 1995 paper is the template; everything since is engineering on top.

Common confusions

Going deeper

If you came here to see how Shor's code works and why it was historically pivotal, you have it. This section gives the stabiliser-formalism presentation, explains how the Steane code does the same job in 7 qubits, and sketches the path from Shor to the surface code.

The 8 stabiliser generators

In the stabiliser formalism (ch.119), Shor's code is defined by 8 commuting Pauli operators:

Inner stabilisers (bit-flip within blocks):

S_1 = Z_1 Z_2, \quad S_2 = Z_2 Z_3, \quad S_3 = Z_4 Z_5, \quad S_4 = Z_5 Z_6, \quad S_5 = Z_7 Z_8, \quad S_6 = Z_8 Z_9.

Outer stabilisers (phase-flip between blocks):

S_7 = X_1 X_2 X_3 X_4 X_5 X_6, \qquad S_8 = X_4 X_5 X_6 X_7 X_8 X_9.

All 8 commute pairwise (check: each Z Z pair overlaps each X-type stabiliser on either zero or two qubits, so the anticommutations cancel in pairs; the Z-type operators commute among themselves, as do the two X-type ones). The code space is the simultaneous +1-eigenspace of all 8: a 2-dimensional subspace of the 512-dimensional 9-qubit Hilbert space, i.e. one logical qubit.

Logical operators:

Z_L = X_1 X_2 X_3 \;=\; X_4 X_5 X_6 \;=\; X_7 X_8 X_9 \quad (\text{all equivalent up to stabilisers}),
X_L = Z_1 Z_4 Z_7 \;=\; Z_2 Z_5 Z_8 \;=\; Z_3 Z_6 Z_9 \quad (\text{again all equivalent}).

Check against the codewords: X^{\otimes 3} on a block leaves (|000\rangle + |111\rangle) fixed and flips the sign of (|000\rangle - |111\rangle), so it reads out |0\rangle_L versus |1\rangle_L: a logical Z. A single Z in each block flips every block's sign, swapping |0\rangle_L \leftrightarrow |1\rangle_L: a logical X. These are the operators that read out and flip the logical qubit. They have weight 3, and no weight-1 or weight-2 Pauli can act as a logical operator, which is why the code has distance 3 (it corrects any one single-qubit error).
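The whole algebra can be checked symbolically: two Pauli strings commute iff they disagree on an even number of positions where both act non-trivially. A sketch (operators as 9-character strings over I, X, Z; with the conventions used for |0\rangle_L in this chapter, the weight-3 logical pair is X on one block and Z on one qubit per block):

```python
stabs = [
    "ZZIIIIIII", "IZZIIIIII", "IIIZZIIII", "IIIIZZIII",
    "IIIIIIZZI", "IIIIIIIZZ", "XXXXXXIII", "IIIXXXXXX",
]
Z_L = "XXXIIIIII"     # X on all of block A
X_L = "ZIIZIIZII"     # Z on one qubit per block

def commute(a, b):
    # count positions where both are non-identity and different
    clashes = sum(1 for x, y in zip(a, b) if 'I' not in (x, y) and x != y)
    return clashes % 2 == 0

# all 8 stabilisers commute pairwise
assert all(commute(a, b) for a in stabs for b in stabs)
# both logical operators commute with every stabiliser...
assert all(commute(Z_L, s) and commute(X_L, s) for s in stabs)
# ...but anticommute with each other, as a logical X/Z pair must
assert not commute(Z_L, X_L)
```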

From Shor to Steane — doing it in 7 qubits

Steane's 7-qubit code (1996) achieves the same single-error correction with only 7 qubits. The construction uses the classical [7, 4, 3] Hamming code twice — once for the X-stabilisers, once for the Z-stabilisers. The Hamming code corrects any single classical bit error on 7 bits; using it on both "kinds" of error (via the CSS construction) corrects any single Pauli error on 7 qubits.

The Steane code is also the simplest code where transversal Clifford gates (Hadamard, phase, CNOT between two copies) are implementable — a property Shor's code lacks. This makes Steane a much better starting point for fault-tolerant computation proposals than Shor.

From Steane to the surface code

The Steane code is still only distance-3. Building fault-tolerant quantum computers requires codes with arbitrary distance, so that the logical error rate can be suppressed to arbitrarily low levels.

Concatenation (stacking codes like Russian dolls) is one path: encode 1 logical qubit into 7 physical qubits with Steane, then encode each of those 7 into another 7, getting 7^2 = 49 physical qubits per logical qubit, and so on. Each level is only distance-3, but a level-(k{+}1) block fails only when two of its level-k sub-blocks fail, so the logical error rate obeys p_{k+1} \approx p_k^2 / p_{\text{th}}, i.e. p_k \approx p_{\text{th}}\,(p/p_{\text{th}})^{2^k}. Provided p < p_{\text{th}}, the tower achieves doubly exponentially suppressed logical error for polylogarithmic overhead.
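A few lines make the scaling concrete. The numbers below are illustrative assumptions (threshold 1%, physical error rate 0.1%), not hardware data; the recursion p_{k+1} = p_k²/p_th is the standard threshold-theorem form:

```python
p_th = 1e-2          # assumed threshold (illustrative)
p = 1e-3             # assumed physical error rate, below threshold

rates = []
for k in range(5):
    p_k = p_th * (p / p_th) ** (2 ** k)   # closed form of p_{k+1} = p_k^2 / p_th
    n_phys = 7 ** k                        # qubits per logical in a Steane tower
    rates.append(p_k)
    print(f"level {k}: {n_phys:5d} physical qubits/logical, p_logical ~ {p_k:.1e}")

assert abs(rates[0] - p) < 1e-12                       # level 0: bare physical rate
assert all(a > b for a, b in zip(rates, rates[1:]))    # monotone suppression
```

The logical rate is squared (up to the threshold factor) at every level while the qubit count only multiplies by 7, which is the polylog-overhead claim in miniature.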

Topological codes (Kitaev 1997, surface code) are a better path. Physical qubits are arranged on a 2D lattice, stabilisers act on neighbouring qubits only, and the distance scales with the linear size L of the lattice. Single errors must form chains through the lattice to cause a logical error — a statistically rare event. The surface code achieves distance L with about 2L^2 qubits, scaling better than concatenated Steane, and requires only local (nearest-neighbour) operations, matching real hardware topologies. This is why most modern fault-tolerant proposals use the surface code.

Shor's 9-qubit code sits at the bottom of this tree: the first proof of concept. Steane is one step up in efficiency. CSS is the general family. Stabiliser codes are the even more general family. Surface codes are the hardware-aware instantiation.

Indian QEC research and the National Quantum Mission

India's National Quantum Mission (2023, ₹6000 crore, 8-year horizon) lists fault-tolerant quantum computing as one of four thematic hubs, and active Indian research touches on Shor-code-style constructions and their modern descendants.

The NQM explicitly targets reaching logical qubits with below-physical-qubit error rates within its 8-year window — a milestone Shor's code theoretically enables but which, in 2026, has been demonstrated in only a handful of labs worldwide (Google's Willow surface-code distance-7 result, Quantinuum's trapped-ion experiments, IBM's Heron series). Indian groups are collaborators and consumers of this platform-level progress.

What Shor's code does not solve

Shor's 9-qubit code corrects single-qubit errors, in memory, assuming the encoding, syndrome measurement, and recovery are all error-free. Real quantum hardware grants none of those three assumptions. Fault-tolerant quantum computing adds protocols that keep encoded states safe even when the gates operating on them (including syndrome extraction) are themselves noisy: fault-tolerant syndrome-extraction circuits, verified ancilla states, protected logical gates, and ultimately the threshold theorem.

These concepts are developed in later chapters (Part 14 covers them over roughly 10 chapters). Shor's code is the memory-protection primitive; the fault-tolerance machinery is what turns "memory protection" into "arbitrary long computation".

Where this leads next

References

  1. Peter Shor, Scheme for reducing decoherence in quantum computer memory (1995), Phys. Rev. A 52, R2493 — arXiv:quant-ph/9508027. The original 9-qubit code paper. Open access.
  2. Andrew Steane, Multiple particle interference and quantum error correction (1996) — arXiv:quant-ph/9601029. The 7-qubit code paper, published one year after Shor's.
  3. Daniel Gottesman, Stabilizer codes and quantum error correction (PhD thesis, 1997) — arXiv:quant-ph/9705052. The stabiliser-formalism unification of Shor, Steane, and CSS codes.
  4. John Preskill, Lecture Notes on Quantum Computation, Chapter 7 — theory.caltech.edu/~preskill/ph229. Pedagogical derivation of Shor's code and the full error-correction apparatus.
  5. Nielsen and Chuang, Quantum Computation and Quantum Information (2010), §10.2 (Shor's code) — Cambridge University Press.
  6. Wikipedia, Quantum error correction and Shor code — accessible overview with the 9-qubit construction in standard textbook form.