In short

A stabilizer code is labelled [[n, k, d]] — three numbers that summarise everything about it. n is the number of physical qubits you spend; k is the number of logical qubits you encode inside them; d is the code distance, which is the minimum weight of any Pauli operator that acts non-trivially on the encoded information. A code with distance d detects any error of weight up to d-1 and corrects any error of weight up to \lfloor (d-1)/2 \rfloor. The code is built by choosing n-k independent commuting Pauli operators — the stabilizer generators — and taking the code space to be their simultaneous +1 eigenspace, which has dimension 2^k. Anything that commutes with all stabilizers but is not itself a product of them is a logical operator; the minimum weight of a non-trivial logical operator is the distance d. The quantum Singleton bound says n - k \geq 2(d-1), and codes that saturate it are as compact as physically possible. Shor's 9-qubit code is [[9, 1, 3]]. Steane's is [[7, 1, 3]]. The 5-qubit perfect code is [[5, 1, 3]] — the smallest stabilizer code that still corrects one arbitrary error, and it saturates the Singleton bound exactly. The surface code family covers [[n, 1, d]] for growing d at roughly n \approx 2d^2. Almost every quantum code deployed or discussed in serious fault-tolerant proposals is a stabilizer code — that is why the framework deserves its own chapter.

You have met three specific quantum error-correcting codes so far. The bit-flip code from chapter 116 uses 3 qubits to encode 1 logical qubit, and survives any single X error. The phase-flip code from chapter 117 does the same for Z errors. Shor's 9-qubit code from chapter 118 stacks them and corrects any single-qubit error — X, Y, Z, or any continuous rotation in between.

Three codes, three separate arguments, three separate syndrome circuits, three separate error-pattern tables. If this was how quantum error correction worked — case-by-case reasoning, one code at a time — the field would be unusable. Every new hardware platform would need a new code designed from scratch; every new error model would require a fresh calculation.

That is not how the field works. Instead, a single algebraic framework — the stabilizer formalism — covers all three of these codes as special cases, plus Steane's 7-qubit code, the 5-qubit perfect code, the Bacon-Shor code, the color codes, the surface code, the quantum Reed-Muller codes, and dozens of other constructions published between 1996 and today. Every one of them is an [[n, k, d]] stabilizer code, built by choosing n-k independent commuting Pauli operators and letting them carve out a 2^k-dimensional code subspace inside the full 2^n-dimensional Hilbert space. The parameters n, k, d summarise the entire code in three integers.

This chapter builds that framework. By the end of it, you will be able to read a sentence like "the [[17, 1, 5]] color code has 16 stabilizer generators and logical operators of minimum weight 5" and know exactly what is being said. You will also meet the quantum Singleton bound, the inequality n - k \geq 2(d-1) that constrains how small n can be for a given k and d — and you will see which codes saturate it. The sequel chapter on the 5-qubit perfect code will then give you a specific stabilizer code that hits the Singleton bound exactly.

The triple [[n, k, d]]

Three numbers, one pair of double brackets. The double brackets are not decoration — they mark the code as quantum. A classical code is written with single brackets: the [7, 4, 3] Hamming code uses 7 bits to encode 4, at classical distance 3. The double-bracket notation [[n, k, d]] marks a quantum code, which protects qubits against Pauli errors rather than bits against bit-flips.

The three numbers have specific meanings.

n — the number of physical qubits. This is the raw hardware cost of the code. For Shor's 9-qubit code, n = 9. For Steane's, n = 7. For the 5-qubit code, n = 5. When you read a paper that says "the surface code at distance 9 uses 161 physical qubits per logical qubit", that 161 is an n.

k — the number of logical qubits encoded. The code space has dimension 2^k. If k = 1, the code encodes a single logical qubit — a single qubit's worth of amplitude \alpha|0\rangle_L + \beta|1\rangle_L is what fits inside. If k = 2, the code encodes a 4-dimensional state space, which is two logical qubits. Almost all codes you meet in an introductory treatment have k = 1; higher-k codes exist but are less common in practice.

d — the code distance. This is the subtle one. d is the minimum number of single-qubit Pauli factors in any operator that takes a code state to a different code state. It measures the "distance", in terms of how many physical qubits an error has to hit, for the error to confuse one logical state with another. A larger d means the code has more slack — more errors have to line up simultaneously before the code mis-corrects.

[Figure: the three numbers [[n, k, d]] of a stabilizer code — three labelled boxes (n: physical qubits, raw hardware cost; k: logical qubits, the 2^k-dimensional code subspace; d: code distance, the minimum weight of a non-trivial logical operator), connected by arrows for "encoding" and "minimum error weight to cause confusion".]
The three parameters of a stabilizer code. $n$ is how many physical qubits you have paid for; $k$ is how many logical qubits you have bought; $d$ is how much error resistance that logical space has. The encoding map embeds the $2^k$-dimensional logical Hilbert space into the $2^n$-dimensional physical Hilbert space, and the code distance $d$ measures how "far apart" the code states are in the Pauli-weight sense.

Weight of a Pauli operator

The word weight has a specific meaning here. Given an n-qubit Pauli operator like P = X_1 \otimes I_2 \otimes Z_3 \otimes I_4 \otimes Y_5, its weight is the number of tensor factors that are not the identity. In this example the weight is 3 — three qubits (1, 3, 5) have a non-trivial Pauli acting on them, and the other two (2, 4) have I.

A weight-1 Pauli is a single-qubit error, like X_3 or Z_7. A weight-2 Pauli is a two-qubit error, like X_1 Z_4. And so on.
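In a per-qubit string notation (our own convention, one character from I, X, Y, Z per qubit), the weight is just a character count. A minimal sketch:

```python
def pauli_weight(p: str) -> int:
    """Weight of a Pauli string: the number of non-identity tensor factors."""
    return sum(1 for c in p if c != "I")

print(pauli_weight("XIZIY"))  # X_1 Z_3 Y_5 on 5 qubits -> 3
print(pauli_weight("XIIZ"))   # X_1 Z_4 on 4 qubits -> 2
```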

The weight distribution of the Pauli group is exactly what the distance d is measuring. A code of distance d corrects every Pauli of weight at most \lfloor (d-1)/2 \rfloor; its vulnerabilities start at weight \lceil d/2 \rceil, the smallest weight at which an error can be mistaken for a different correctable error.

The protection rules

Two sentences you will encounter over and over:

A code of distance d detects any error of weight up to d - 1.

A code of distance d corrects any error of weight up to \lfloor (d-1)/2 \rfloor.

These come from the Pauli geometry. If you want the full derivation, it appears later in the chapter; for now, take the rules at face value and work out a few examples.

d = 1: detects no errors, corrects no errors. Useless — this is the "no code" case.

d = 2: detects a single error, corrects no errors. Useful if you only need to know "did something go wrong" without fixing it — e.g. abort and restart. The [[4, 2, 2]] code lives here.

d = 3: detects up to 2 errors, corrects 1 error. This is the minimum for a single-error-correcting code. Shor's 9-qubit, Steane's 7-qubit, the 5-qubit perfect code, the distance-3 surface code — all d = 3.

d = 5: detects up to 4 errors, corrects 2 errors. The distance-5 surface code and the [[11, 1, 5]] code live here.

d = 7, 9, 11, \dots: higher protection, at higher qubit cost. The [[23, 1, 7]] Golay-derived code sits at d = 7; modern fault-tolerant surface-code proposals run from roughly d = 15 to d = 25.

Why d = 2t + 1 for t-error correction: if a code has distance 2t+1, then two distinct error patterns of weight \leq t cannot collide — their "difference" would be a weight-at-most-2t Pauli, but any operator that confuses two code states has weight at least d = 2t+1, so two weight-t errors cannot produce the same corrupted state. This means each weight-t error has a unique syndrome, and the decoder can pick the correct recovery. If the code only has distance 2t, two different weight-t errors might produce the same syndrome, and the decoder cannot distinguish them.
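The two protection rules reduce to a two-line calculation. A sketch tabulating the d values from the list above:

```python
def detectable(d: int) -> int:
    """A distance-d code detects any error of weight up to d - 1."""
    return d - 1

def correctable(d: int) -> int:
    """...and corrects any error of weight up to floor((d - 1) / 2)."""
    return (d - 1) // 2

for d in (1, 2, 3, 5, 7):
    print(f"d={d}: detects <= {detectable(d)}, corrects <= {correctable(d)}")
```

The d = 2t + 1 pattern falls straight out: `correctable(2*t + 1)` returns t.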

Building the code from stabilizer generators

Here is the step-by-step recipe for constructing a stabilizer code, stripped of algebra:

  1. Pick n - k Pauli operators S_1, S_2, \ldots, S_{n-k} on n qubits. These are the stabilizer generators.
  2. Require that they all mutually commute: S_i S_j = S_j S_i for every pair.
  3. Require that they are independent: no S_i can be written as a product of the others (up to sign).
  4. Require that -I is not in the group they generate (otherwise the code space is empty).
  5. The code space is the simultaneous +1-eigenspace of all the generators: the set of states |\psi\rangle such that S_i |\psi\rangle = +|\psi\rangle for every i.

That code space has dimension exactly 2^{n-(n-k)} = 2^k. So n - k constraints cut out a 2^k-dimensional subspace of the full 2^n-dimensional Hilbert space, and each constraint halves the dimension.
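Steps 2 and 3 of the recipe are mechanical to check in the standard binary-symplectic representation, where each Pauli becomes a pair of bit-vectors and two Paulis commute iff their symplectic inner product vanishes mod 2. A minimal sketch (the string encoding is our own convention):

```python
from itertools import combinations

def pauli_to_bits(s):
    """Binary-symplectic form of a Pauli string: (x, z) bitmask pair."""
    x = sum(1 << i for i, c in enumerate(s) if c in "XY")
    z = sum(1 << i for i, c in enumerate(s) if c in "ZY")
    return x, z

def commutes(p, q):
    """Two Paulis commute iff their symplectic inner product is 0 mod 2."""
    (x1, z1), (x2, z2) = pauli_to_bits(p), pauli_to_bits(q)
    return (bin(x1 & z2).count("1") + bin(z1 & x2).count("1")) % 2 == 0

def independent(gens):
    """Step 3: independent iff the (x|z) rows have full rank over GF(2)."""
    n = len(gens[0])
    rows = [x | (z << n) for x, z in (pauli_to_bits(g) for g in gens)]
    rank = 0
    while rows:
        pivot, *rows = rows
        if pivot == 0:
            continue
        rank += 1
        low = pivot & -pivot                      # choose a pivot column
        rows = [r ^ pivot if r & low else r for r in rows]
    return rank == len(gens)

gens = ["ZZI", "IZZ"]                             # bit-flip generators (step 1)
print(all(commutes(p, q) for p, q in combinations(gens, 2)))  # step 2 -> True
print(independent(gens))                          # step 3 -> True
print(independent(["ZZI", "IZZ", "ZIZ"]))         # Z1Z3 = S1*S2 -> False
```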

Why each stabilizer halves the Hilbert space: a single Pauli operator S has exactly two eigenvalues, +1 and -1, each appearing on half of the Hilbert space. Requiring S|\psi\rangle = +|\psi\rangle restricts to the +1-eigenspace, which is half the total space. Each additional independent commuting stabilizer cuts the remaining space in half again, because on the already-restricted space the new stabilizer still has both eigenvalues available. After n-k independent stabilizers, the code space has dimension 2^n / 2^{n-k} = 2^k.

The bit-flip code as a stabilizer code

Take n = 3, k = 1, and choose the two generators

S_1 \;=\; Z_1 Z_2, \qquad S_2 \;=\; Z_2 Z_3.

Check commutation: S_1 and S_2 share qubit 2 (where both have Z), but Z commutes with Z, so the shared qubit contributes commuting factors. Other qubits have I on one of the operators, so they trivially commute. So S_1 S_2 = S_2 S_1. Good.

Check independence: S_1 \neq S_2 and neither is a multiple of the identity, so yes.

The code space is the joint +1-eigenspace. A state |c_1 c_2 c_3\rangle is in this eigenspace iff S_1|c_1 c_2 c_3\rangle = (-1)^{c_1 + c_2}|c_1 c_2 c_3\rangle = +|c_1 c_2 c_3\rangle and similarly for S_2. So c_1 = c_2 and c_2 = c_3 — all three bits agree. The eigenspace is spanned by |000\rangle and |111\rangle. Two-dimensional. That is the bit-flip code, and you recognise it instantly. As a stabilizer code it has n = 3 and k = 1; the conventional label is [[3, 1, 3]], though we will revisit that distance once we can compute it properly.
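You can verify the dimension count directly: on the basis state |c_1 c_2 c_3\rangle, Z_i Z_j has eigenvalue (-1)^{c_i + c_j}, so the joint +1-eigenspace can be found by enumeration. A sketch:

```python
# Enumerate the computational basis states |c1 c2 c3> and keep those in the
# joint +1-eigenspace of S1 = Z1 Z2 and S2 = Z2 Z3.  On |c1 c2 c3>, the
# operator Z_i Z_j has eigenvalue (-1)^(c_i + c_j).
code_states = [
    f"|{c1}{c2}{c3}>"
    for c1 in (0, 1) for c2 in (0, 1) for c3 in (0, 1)
    if (-1) ** (c1 + c2) == 1 and (-1) ** (c2 + c3) == 1
]
print(code_states)        # ['|000>', '|111>']
print(len(code_states))   # 2 = 2^k with k = 1
```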

The phase-flip code as a stabilizer code

Same shape, with Xs instead of Zs:

S_1 \;=\; X_1 X_2, \qquad S_2 \;=\; X_2 X_3.

The joint +1-eigenspace is spanned by |{+}{+}{+}\rangle and |{-}{-}{-}\rangle. Another n = 3, k = 1 code — the Hadamard-rotated dual of bit-flip.

Shor's 9-qubit code as a stabilizer code

Take n = 9, k = 1, and choose n - k = 8 generators:

S_1 = Z_1 Z_2, \; S_2 = Z_2 Z_3, \; S_3 = Z_4 Z_5, \; S_4 = Z_5 Z_6, \; S_5 = Z_7 Z_8, \; S_6 = Z_8 Z_9,
S_7 = X_1 X_2 X_3 X_4 X_5 X_6, \qquad S_8 = X_4 X_5 X_6 X_7 X_8 X_9.

The first six are the bit-flip stabilizers of the three inner blocks; the last two are the phase-flip stabilizers of the outer code lifted through the inner encoding (see Shor's 9-qubit code for the derivation). All eight commute. The joint +1-eigenspace has dimension 2^{9-8} = 2^1 = 2, encoding one logical qubit. A [[9, 1, 3]] stabilizer code.
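The commutation claim is mechanical to verify: two Pauli strings commute iff they disagree on an even number of qubits where both are non-identity. A brute-force sketch over the eight generators (string encoding ours):

```python
from itertools import combinations

def commutes(p, q):
    """Pauli strings commute iff they clash on an even number of qubits."""
    clashes = sum(1 for a, b in zip(p, q) if "I" not in (a, b) and a != b)
    return clashes % 2 == 0

shor_gens = [
    "ZZIIIIIII", "IZZIIIIII",   # S1, S2: bit-flip checks, block 1
    "IIIZZIIII", "IIIIZZIII",   # S3, S4: block 2
    "IIIIIIZZI", "IIIIIIIZZ",   # S5, S6: block 3
    "XXXXXXIII",                # S7: phase-flip check, blocks 1-2
    "IIIXXXXXX",                # S8: phase-flip check, blocks 2-3
]
print(all(commutes(p, q) for p, q in combinations(shor_gens, 2)))  # True
```

Every Z-type/X-type pair overlaps on exactly two qubits (e.g. S_1 and S_7 clash on qubits 1 and 2), so all pairs commute.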

The important point: these three codes look completely different in their original presentations, but from the stabilizer point of view they are the same kind of object — a list of commuting Paulis. Change the list, change the code. The framework subsumes all three.

Logical operators — what lives outside the stabilizer

The stabilizer group \mathcal{S} is generated by S_1, \ldots, S_{n-k} and contains 2^{n-k} elements (products of the generators up to signs). Every element of \mathcal{S} acts as the identity on the code space, by construction. So stabilizers do not move states around within the code space — they leave every code state unchanged.

But the code space is 2^k-dimensional. There must be operators that do move states around inside it — otherwise there would be no logical qubits to act on. These are the logical operators. The question is: where do they live?

Answer: a logical operator is a Pauli operator L that

  1. Commutes with every stabilizer S_i — so applying L to a code state keeps it in the code space.
  2. Is not itself in the stabilizer group — so L is not just a combination of S_is acting as identity.

Condition 1 says L preserves the code space. Condition 2 says L actually does something.

The normalizer

The formal name for the set of Pauli operators satisfying condition 1 is the normalizer of \mathcal{S} in the full Pauli group, written \mathcal{N}(\mathcal{S}). The logical operators are the elements of \mathcal{N}(\mathcal{S}) \setminus \mathcal{S} — the normalizer minus the stabilizer itself.

For a [[n, k, d]] code, \mathcal{N}(\mathcal{S}) has 2^{n+k} elements (up to phase), the stabilizer \mathcal{S} has 2^{n-k} elements, and the quotient \mathcal{N}(\mathcal{S}) / \mathcal{S} has 2^{2k} elements — which is exactly the number of Pauli operators on k qubits (namely \{I, X, Y, Z\}^{\otimes k}, of which there are 4^k = 2^{2k}). These correspond to the k logical X and Z operators.

Concretely, for a [[n, 1, d]] code there is one logical X_L (up to stabilizer multiplication) and one logical Z_L. They anticommute, just like X and Z on an ordinary qubit. Together they generate the logical Pauli group on the one encoded qubit.

Distance as the minimum weight of a logical operator

Here is the crisp definition of the distance:

d \;=\; \min \left\{ \text{weight}(L) \;:\; L \in \mathcal{N}(\mathcal{S}) \setminus \mathcal{S},\; L \neq I \right\}.

The distance is the smallest number of qubits any non-trivial logical operator touches. If the smallest logical operator has weight 3, the distance is 3. If the smallest has weight 5, the distance is 5.

Why this is the right definition of error resistance: a weight-w error is a Pauli operator of weight w. If the error E is not in the normalizer, it anticommutes with at least one stabilizer — so syndrome measurement will flag it, and the decoder can correct. If E is in the stabilizer, it acts as the identity on the code space — so it doesn't do anything harmful. The only dangerous errors are those in the normalizer but outside the stabilizer — i.e. the logical operators themselves. And their minimum weight is exactly d.
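This trichotomy — detectable, harmless, or logical — can be checked mechanically. A sketch for the bit-flip code's stabilizers (string conventions ours):

```python
def commutes(p, q):
    clashes = sum(1 for a, b in zip(p, q) if "I" not in (a, b) and a != b)
    return clashes % 2 == 0

gens = ["ZZI", "IZZ"]
stabilizer_group = {"III", "ZZI", "IZZ", "ZIZ"}  # all products of the generators

def classify(error):
    if any(not commutes(error, g) for g in gens):
        return "detectable"   # anticommutes with a stabilizer: non-trivial syndrome
    if error in stabilizer_group:
        return "harmless"     # acts as the identity on the code space
    return "logical"          # commutes with S but is not in S

for e in ("XII", "ZII", "ZZI", "XXX"):
    print(e, "->", classify(e))
```

Note the output for ZII: a single-qubit Z commutes with every stabilizer yet is not a stabilizer — a point the next section picks up.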

[Figure: stabilizers, logical operators, and errors — a nested diagram. The full n-qubit Pauli group (4^n elements up to phase) contains the normalizer N(S), whose members commute with every stabilizer and preserve the code space; inside that sits the stabilizer group S (2^(n−k) elements), acting as identity on the code. Logical operators live in N(S) \ S, with minimum weight equal to the distance d; errors outside N(S) are detectable by syndrome.]
The structure of operators on $n$ qubits, from a stabilizer code's point of view. The stabilizer group $\mathcal{S}$ acts trivially on the code. The normalizer $\mathcal{N}(\mathcal{S})$ commutes with every stabilizer — its members preserve the code space. The logical operators are in $\mathcal{N}(\mathcal{S})$ but not $\mathcal{S}$ — they move states around *within* the code space. Anything outside the normalizer is detectable: it anticommutes with at least one stabilizer, triggering a non-trivial syndrome.

Logical operators of the bit-flip code

For the bit-flip code with stabilizers Z_1 Z_2 and Z_2 Z_3, the logical operators (worked out in Example 1 below) are Z_L = Z_1, which leaves |000\rangle alone and flips the sign of |111\rangle, and X_L = X_1 X_2 X_3, which swaps the two. Z_L has weight 1; X_L has weight 3.

The minimum weight of any non-trivial logical operator is therefore \min(1, 3) = 1. But that would give distance 1, and the bit-flip code is supposed to have distance 3. What went wrong?

Nothing — the calculation is revealing a real feature of the code. The bit-flip code has distance 3 against X errors, but distance 1 against Z errors: a single Z on any qubit acts as the logical Z_L on the encoded state, which is a logical error. That is why the bit-flip code is useless against Z errors — a single Z looks exactly like a logical operation.

Why the distance is actually 1, not 3, for the bit-flip code: the definition of distance uses the minimum weight over all non-trivial logical operators. For the bit-flip code, Z_L = Z_1 has weight 1. So the formal distance of the bit-flip code, treated as a full stabilizer code against arbitrary Pauli errors, is d = 1. It is [[3, 1, 1]] when you write it honestly. Only against a restricted error model (X-only) does it behave like a d = 3 code. The same is true of the phase-flip code. This is why Shor stacked them: the stacked code has genuine d = 3 against all Pauli errors.

This observation — that bit-flip and phase-flip are really distance-1 codes disguised as distance-3 — is exactly what motivates Shor's 9-qubit construction: only by combining both kinds of protection do you get a code with d = 3 honestly, against every single-qubit error.

Logical operators of Shor's 9-qubit code

The [[9, 1, 3]] Shor code has logical operators (see chapter 118 for the derivation)

Z_L \;=\; X_1 X_2 X_3 \qquad \text{and} \qquad X_L \;=\; Z_1 Z_4 Z_7.

(The types look swapped — an X-type operator implements the logical Z — because the outer layer of Shor's concatenation is the phase-flip code, which stores its logical information in the X basis.)

Both have weight 3. Indeed, any other logical operator (obtained by multiplying by stabilizers) has weight at least 3. So the minimum weight of any non-trivial logical operator is 3, and the distance is d = 3. This is a genuine distance-3 code.

The quantum Singleton bound

Given that building a code with larger d costs more physical qubits, a natural question arises: what is the minimum n for given k and d? Is there some fundamental inequality that bounds n below?

Yes. It is the quantum Singleton bound:

\boxed{n - k \;\geq\; 2(d - 1).}

This is the quantum analog of the classical Singleton bound n - k \geq d - 1 (for classical [n, k, d] codes). The factor of 2 is the quantum penalty: correcting both X and Z errors (not just X or not just Z) requires roughly twice the redundancy that classical codes need.

Rearranging, for k = 1 the bound says

n \;\geq\; 2d - 1.

For d = 3 this gives n \geq 5. So the smallest possible single-error-correcting [[n, 1, 3]] code has n = 5. That code is the 5-qubit perfect code, which you meet in the next chapter. It saturates the Singleton bound.

For d = 5, the bound gives n \geq 9 — but no [[9, 1, 5]] code exists. The smallest known distance-5 code with k = 1 is the [[11, 1, 5]] code, which does not saturate the Singleton bound. Codes that do saturate the bound (at any parameters) are called MDS (maximum-distance-separable) codes, and very few exist in the quantum case.

Shor's [[9, 1, 3]] code has n - k = 8 and 2(d-1) = 4. 8 \geq 4, so it satisfies the Singleton bound comfortably — but with a lot of slack. It uses 9 qubits to do what the 5-qubit code does with 5. That slack comes from the concatenated structure — which is also what makes Shor's code easy to explain, but not optimal.
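The bound itself is a one-line check. A sketch over codes named in this chapter:

```python
def singleton_ok(n, k, d):
    """Quantum Singleton bound: n - k >= 2(d - 1)."""
    return n - k >= 2 * (d - 1)

def slack(n, k, d):
    """How far above the Singleton minimum the code sits."""
    return (n - k) - 2 * (d - 1)

for name, (n, k, d) in {
    "5-qubit perfect": (5, 1, 3),
    "Steane":          (7, 1, 3),
    "Shor":            (9, 1, 3),
    "surface d=5":     (25, 1, 5),
}.items():
    print(f"{name}: ok={singleton_ok(n, k, d)}, slack={slack(n, k, d)}")
```

The 5-qubit code comes out with zero slack — it saturates the bound — while Shor's code carries a slack of 4.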

[Figure: well-known stabilizer codes on the (n, d) plane — a scatter plot of [[3,1,1]] bit-flip, [[5,1,3]] 5-qubit perfect, [[7,1,3]] Steane, [[9,1,3]] Shor, [[25,1,5]] and [[49,1,7]] surface codes against the dashed Singleton line n = 2d − 1. Only the 5-qubit perfect code sits exactly on the line; everyone else pays extra.]
A sampling of well-known stabilizer codes plotted on the $(n, d)$ plane. The dashed line is the quantum Singleton bound $n = 2d - 1$ (for $k = 1$). The 5-qubit perfect code $[[5, 1, 3]]$ sits exactly on the bound — it is **perfect** in the sense that it saturates the inequality. Steane's code $[[7, 1, 3]]$ and Shor's $[[9, 1, 3]]$ both cost extra qubits; their slack buys other features (CSS structure, transversal gates, geometric locality). The surface code family scales as roughly $n \approx 2d^2$, which is far above the Singleton line — but that cost pays for local syndrome extraction on a 2D lattice, a property the 5-qubit code lacks.

The Singleton bound is not the only bound on quantum codes. The quantum Hamming bound gives a tighter constraint for non-degenerate codes; the Knill-Laflamme conditions characterise when a code actually corrects a specific error set; the threshold theorem relates physical error rates to achievable logical error suppression. But the Singleton bound is the easiest to state and the most commonly cited, and codes that saturate it are celebrated.

Why stabilizer codes are the dominant paradigm

Stabilizer codes dominate quantum error correction for several reasons, each of which stands alone.

1. They unify the zoo. Before the stabilizer formalism, every new code came with its own separate analysis. After Gottesman's 1997 thesis, almost every code — Shor, Steane, CSS family, Bacon-Shor, surface, color, LDPC — fit into a single algebraic framework. This is the same kind of unification that group theory did for classical symmetries: many constructions, one language.

2. Syndrome measurement is simple. To detect errors, you just measure the stabilizer generators. Each stabilizer is a Pauli operator; measuring it is a standard ancilla-based parity circuit (Hadamard ancilla, controlled Paulis to the data qubits, Hadamard ancilla, measure). The classical syndrome bits then go into a decoder that picks the correction.

3. Clifford operations stay tractable. The Gottesman-Knill theorem says that quantum circuits composed entirely of Clifford gates (H, S, CNOT) acting on stabilizer states can be classically simulated in polynomial time. This is both good news (fast simulation of error-correcting circuits) and a constraint (stabilizer codes alone cannot give a quantum speedup — universal quantum computing needs non-Clifford resources like T gates or magic states).

4. Fault tolerance builds on stabilizers. The threshold theorem, transversal gates, magic-state distillation, lattice surgery, and most of the fault-tolerance literature assumes stabilizer codes. Non-stabilizer codes exist (bosonic cat codes, GKP codes at the physical level) but even they are typically analysed via their stabilizer structure at a higher level.

5. They scale with hardware layouts. Surface codes, color codes, and LDPC codes all have stabilizers acting on local neighbourhoods of physical qubits, matching 2D or 3D chip layouts. Non-local codes exist but are much harder to implement on real hardware.

The punchline. If you are reading about quantum error correction in a current paper — whether academic, industry blog post, or fault-tolerance proposal — the code under discussion is almost certainly a stabilizer code. The few exceptions (GKP, cat codes, Floquet codes in their most general formulation) interact with the stabilizer formalism at boundaries. Learning stabilizer codes is learning the main highway through QEC.

Worked examples

Example 1: the [[3, 1, 3]] bit-flip code — verify the parameters

Take the bit-flip stabilizer code with generators S_1 = Z_1 Z_2 and S_2 = Z_2 Z_3. Determine n, k, and the minimum weight of a non-trivial logical operator. Verify the [[3, 1, d]] parameters.

Step 1. Identify n. The generators act on 3 qubits, so n = 3.

Step 2. Count stabilizer generators. Two generators: S_1, S_2. So n - k = 2, giving k = 3 - 2 = 1. One logical qubit.

Step 3. Find the logical operators. A logical operator must commute with both S_1 and S_2 and not be in the stabilizer group \{I, S_1, S_2, S_1 S_2\} = \{I, Z_1 Z_2, Z_2 Z_3, Z_1 Z_3\}.

Try Z_1: commutes with Z_1 Z_2 (both have Z on qubit 1, trivial on the rest — commutes) and with Z_2 Z_3 (trivial on qubit 1, they don't share non-identity qubits — commutes). Not in the stabilizer group (check: the four stabilizers above are all even-weight, Z_1 is odd-weight). So Z_L = Z_1 is a logical Z. Why this is the logical Z: on the basis state |000\rangle, Z_1 gives +|000\rangle. On the basis state |111\rangle, Z_1 gives -|111\rangle. So Z_1 acts as the Pauli Z on the \{|0\rangle_L, |1\rangle_L\} = \{|000\rangle, |111\rangle\} basis — exactly what "logical Z" means.

Try X_1 X_2 X_3: commutes with Z_1 Z_2 (two Xs vs two Zs — the two anticommutations cancel) and with Z_2 Z_3 (same reasoning). Not in the stabilizer group (only has Xs; stabilizers are all Zs). So X_L = X_1 X_2 X_3 is a logical X. Why this is the logical X: on |000\rangle, X_1 X_2 X_3 gives |111\rangle. On |111\rangle, it gives |000\rangle. So it swaps |0\rangle_L and |1\rangle_L — the action of a Pauli X on the logical qubit.

Step 4. Minimum weight of logical operators. Z_L = Z_1 has weight 1. X_L = X_1 X_2 X_3 has weight 3. The minimum over non-trivial logical operators is \min(1, 3) = 1.

Step 5. The honest distance. The minimum weight of any non-trivial logical operator is 1 (achieved by Z_L). So the honest distance of the bit-flip code, treated as a stabilizer code against arbitrary Pauli errors, is d = 1. The code is [[3, 1, 1]], not [[3, 1, 3]].

What this shows. The bit-flip code has a distance-1 logical Z operator. That is exactly why a single Z error on any qubit is unrecoverable — it is the logical error itself. The code has genuine distance 3 only against the restricted error set of pure X errors. This mismatch is precisely what motivates combining bit-flip and phase-flip protection in Shor's 9-qubit construction, where the smallest logical operator on either axis has weight 3.
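The honest distance can be confirmed by brute force over all 4^3 = 64 Pauli strings on three qubits — a sketch (string conventions ours):

```python
from itertools import product

def commutes(p, q):
    clashes = sum(1 for a, b in zip(p, q) if "I" not in (a, b) and a != b)
    return clashes % 2 == 0

gens = ["ZZI", "IZZ"]
stabilizer_group = {"III", "ZZI", "IZZ", "ZIZ"}   # all products of the generators

# Non-trivial logical operators: commute with both generators, not in the group.
logical_weights = [
    sum(c != "I" for c in p)
    for p in ("".join(t) for t in product("IXYZ", repeat=3))
    if p not in stabilizer_group and all(commutes(p, g) for g in gens)
]
print(min(logical_weights))  # honest distance of the bit-flip code -> 1
```

The minimum is achieved by the weight-1 strings ZII, IZI, and IIZ — each a valid logical Z.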

[Figure: logical operators of the bit-flip code — three qubit wires with the stabilizers Z₁Z₂ and Z₂Z₃ overlaid, plus the logical operators Z_L = Z₁ (weight 1) and X_L = X₁X₂X₃ (weight 3); distance d = min weight of a non-trivial logical = min(1, 3) = 1.]
The bit-flip code's logical operators have very different weights: $Z_L$ has weight 1, $X_L$ has weight 3. The distance $d$ is the minimum weight over all non-trivial logical operators, which is 1. A single $Z$ error on any qubit acts as the logical $Z$ — the code cannot distinguish it from an intentional logical operation.

Example 2: reading the parameters of Shor's 9-qubit code

Shor's 9-qubit code has 8 stabilizer generators (listed earlier in the chapter) and logical operators Z_L = X_1 X_2 X_3, X_L = Z_1 Z_4 Z_7. Determine n, k, d and confirm the code is [[9, 1, 3]].

Step 1. n. Nine physical qubits in each generator's support. n = 9.

Step 2. k. Eight independent stabilizer generators. n - k = 8, giving k = 9 - 8 = 1. One logical qubit.

Step 3. Weight of the stated logical operators. Z_L = X_1 X_2 X_3 has weight 3 (Pauli X on three qubits, I on the other six). X_L = Z_1 Z_4 Z_7 has weight 3 (Pauli Z on three qubits, I on six). Both exactly weight 3.

Step 4. Check for smaller logical operators. Multiplying a logical operator by a stabilizer gives another logical operator in the same coset — the two act identically on the code space — so the question is whether some product has weight below 3. Try Z_L \cdot S_7 = (X_1 X_2 X_3)(X_1 X_2 X_3 X_4 X_5 X_6) = X_4 X_5 X_6: still weight 3. Likewise Z_L \cdot S_7 S_8 = X_7 X_8 X_9. Multiplication reshuffles which qubits the operator touches but never drops the weight below 3. Nor can any weight-1 or weight-2 Pauli be a logical operator: every qubit is covered by at least one Z-type and one X-type generator, so every weight-1 Pauli anticommutes with some stabilizer and is detectable, while the weight-2 Paulis that do commute with everything (such as Z_1 Z_2 or Z_1 Z_3) are themselves stabilizer elements, acting trivially. The minimum weight over the logical cosets is a genuine invariant of the code — the distance — and here it is 3.
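Because n = 9 is small, the distance claim can be verified by brute force over all 4^9 = 262,144 Pauli strings, using the binary-symplectic representation (encoding conventions ours; takes a few seconds):

```python
from itertools import product

N = 9

def to_bits(p):
    """Binary-symplectic form: one integer holding the x-part and z-part."""
    x = sum(1 << i for i, c in enumerate(p) if c in "XY")
    z = sum(1 << i for i, c in enumerate(p) if c in "ZY")
    return x | (z << N)

def commutes_bits(v, w):
    x1, z1 = v & ((1 << N) - 1), v >> N
    x2, z2 = w & ((1 << N) - 1), w >> N
    return (bin(x1 & z2).count("1") + bin(z1 & x2).count("1")) % 2 == 0

gens = ["ZZIIIIIII", "IZZIIIIII", "IIIZZIIII", "IIIIZZIII",
        "IIIIIIZZI", "IIIIIIIZZ", "XXXXXXIII", "IIIXXXXXX"]
gvecs = [to_bits(g) for g in gens]

# The stabilizer group (up to phase) is the XOR-span of the generator vectors.
span = {0}
for g in gvecs:
    span |= {s ^ g for s in span}

# Distance = minimum weight over the normalizer minus the stabilizer.
best = N
for t in product("IXYZ", repeat=N):
    p = "".join(t)
    w = sum(c != "I" for c in p)
    if w == 0 or w >= best:
        continue
    v = to_bits(p)
    if v not in span and all(commutes_bits(v, g) for g in gvecs):
        best = w
print(best)  # distance of Shor's code -> 3
```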

Step 5. d. The minimum weight over all non-trivial logical operators is 3. So d = 3.

Result. Shor's 9-qubit code is [[9, 1, 3]] — nine physical qubits encoding one logical qubit at distance 3, correcting up to \lfloor 2/2 \rfloor = 1 arbitrary single-qubit Pauli error.

Singleton-bound check. For [[9, 1, 3]]: n - k = 8, 2(d-1) = 4. 8 \geq 4 ✓. The Singleton bound is satisfied with significant slack — Shor's code is not MDS. It spends 4 "extra" qubits beyond the Singleton minimum, buying the concatenated structure that makes the construction easy to explain but not compact.

[Figure: Shor's 9-qubit code parameter summary — [[9, 1, 3]]: nine physical qubits, one logical qubit, distance 3. Singleton bound check: n − k ≥ 2(d − 1) gives 8 ≥ 4, a slack of 4; Shor's code is not MDS, the 5-qubit perfect code is.]
Shor's 9-qubit code at a glance. Eight stabilizer generators cut the 512-dimensional Hilbert space down to a 2-dimensional code space — one logical qubit. The minimum-weight logical operator has weight 3, giving distance $d = 3$, which is enough to correct any single-qubit Pauli error. The Singleton bound is satisfied but not tight; the slack is the cost of the concatenated construction.

Common confusions

Going deeper

You now have the [[n, k, d]] framework, the construction recipe, the Singleton bound, and the reason stabilizer codes dominate the field. This section formalises the normalizer structure, sketches the proof of the Singleton bound, introduces the CSS sub-family (next chapter), and surveys the main code families that fill the (n, k, d) landscape.

Formal normalizer and centralizer structure

The centralizer of \mathcal{S} in the Pauli group \mathcal{P}_n is the set of Paulis commuting with every element of \mathcal{S}:

C(\mathcal{S}) \;=\; \{P \in \mathcal{P}_n : [P, S] = 0 \text{ for all } S \in \mathcal{S}\}.

Because all elements of \mathcal{P}_n either commute or anticommute (up to sign), and because \mathcal{S} is abelian, the centralizer equals the normalizer in this setting: C(\mathcal{S}) = \mathcal{N}(\mathcal{S}). This is a feature of the Pauli group that does not hold for more general groups.

The structure of the quotient \mathcal{N}(\mathcal{S}) / \mathcal{S} is a symplectic vector space of dimension 2k over \mathbb{F}_2, with the symplectic form given by commutation (two Paulis P, Q anticommute ↔ their representatives have nonzero symplectic inner product). This is the algebraic backbone of the stabilizer formalism and is used heavily in proofs of the Knill-Laflamme conditions, the threshold theorem, and the classification of local Clifford equivalence classes.

Proof sketch of the quantum Singleton bound

The Singleton bound n - k \geq 2(d-1) follows from the no-cloning theorem applied to the code. Sketch:

Suppose you have a [[n, k, d]] code. Because the distance is d, the code can correct any d - 1 erasures: remove any d - 1 qubits, and the remaining n - (d-1) qubits still determine the logical state. (If two distinct logical states agreed on the remaining qubits, they would differ by an operator supported entirely on the d - 1 removed qubits — weight at most d - 1 < d — contradicting the definition of the distance.)

Now partition the n qubits into three disjoint sets: A with d - 1 qubits, B with d - 1 qubits, and C with the remaining n - 2(d-1). By the erasure property, the full logical state is recoverable from B \cup C (erase A) and also from A \cup C (erase B). The no-cloning theorem forbids the logical information from residing independently in the disjoint sets A and B — otherwise the two recoveries would yield two copies of the same quantum state. So the information must effectively live in the shared region C; making this precise with entropic inequalities shows C needs at least k qubits of room, |C| = n - 2(d-1) \geq k, which rearranges to n - k \geq 2(d - 1).

Full proofs appear in Nielsen and Chuang and in Preskill's Chapter 7 notes. The no-cloning-based derivation is the cleanest conceptually.

Code families filling the (n, k, d) landscape

Within the stabilizer framework, you can sort codes by their parameters and their structural features. A non-exhaustive tour:

[[5, 1, 3]] — 5-qubit perfect code. Saturates Singleton. Covered in detail in the next chapter.

[[7, 1, 3]] — Steane code. CSS construction from the classical Hamming code. Has transversal Clifford gates, which makes it a popular choice for fault-tolerance proposals that do not care about 2D locality.

[[9, 1, 3]] — Shor's code. First in the literature. Concatenated structure: outer phase-flip + inner bit-flip. Not used in modern fault-tolerance but canonical pedagogically.

[[15, 1, 3]] — Reed-Muller-based code. Has transversal non-Clifford T gates, which is otherwise a rare feature.

[[23, 1, 7]] — Golay code derivative. CSS code built from the classical Golay code. Distance 7 is a sweet spot for some early fault-tolerance studies.

[[L^2, 1, L]] — rotated planar surface code at distance L. Data qubits on an L \times L lattice, all stabilizers geometrically local; the unrotated planar layout needs roughly 2L^2 qubits. Scaling the distance costs O(d^2) qubits, far above the Singleton bound, but buys the 2D locality that hardware demands.

[[n, k, d]] — color codes. Codes on three-colorable 2D lattices; like surface codes, but with a richer set of transversal Clifford gates. Slightly more overhead in practice, but more gate choices.

[[n, k, d]] — quantum LDPC codes. The current research frontier: codes with low-weight stabilizers (sparse, like classical LDPC) and good rate k/n. The 2022 work by Panteleev-Kalachev on "good" LDPC codes achieved constant rate and linear distance, a long-sought goal.

Each family makes a different trade-off on the (n, k, d) plane, against different features: transversal gates, 2D locality, rate, decoder efficiency. The stabilizer formalism organises all of them in one table.
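The trade-offs above can be checked numerically against the Singleton bound. A quick sketch, using the parameters listed in this tour (the surface code shown at d = 5 in its rotated [[d^2, 1, d]] form for concreteness):

```python
# Quantum Singleton bound: n - k >= 2(d - 1). "Slack" measures how far
# each code sits above the bound; zero slack means it saturates it.
codes = {
    "5-qubit perfect": (5, 1, 3),
    "Steane":          (7, 1, 3),
    "Shor":            (9, 1, 3),
    "Reed-Muller":     (15, 1, 3),
    "quantum Golay":   (23, 1, 7),
    "surface (d=5)":   (25, 1, 5),
}

for name, (n, k, d) in codes.items():
    slack = (n - k) - 2 * (d - 1)
    tag = "saturates Singleton" if slack == 0 else f"slack {slack}"
    print(f"[[{n},{k},{d}]] {name}: {tag}")
```

Only the 5-qubit code comes out with zero slack, matching the claim that it is as compact as physically possible; the surface code's large slack is the price of 2D locality.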

Indian QEC research — the National Quantum Mission context

India's National Quantum Mission (approved 2023, ₹6000 crore, 8-year horizon) lists fault-tolerant quantum computing among its thematic hubs, which covers stabilizer codes at every level. Research activity spans several institutions.

The IBM Quantum Network has Indian partner institutions with access to hardware-level stabilizer experiments. As of 2026, a few Indian-authored papers have contributed to the LDPC and color-code literature, and several PhD theses on stabilizer decoding have been completed in India.

Gottesman-Knill simulation and its edge cases

The Gottesman-Knill theorem (1998) states that quantum circuits composed of (a) stabilizer state preparation, (b) Clifford gates (H, S, CNOT), and (c) Pauli measurements can be simulated classically in polynomial time. The state is tracked by its stabilizer group — a list of n Pauli strings — which has size O(n^2) in classical bits, rather than O(2^n) amplitudes.
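The Heisenberg-picture bookkeeping behind the theorem can be sketched in a few lines. This is a deliberately minimal version that drops the ± sign a full simulator would also track per generator: each Clifford gate acts on Pauli letters by conjugation, so n Pauli strings suffice instead of 2^n amplitudes.

```python
H_RULE = {"I": "I", "X": "Z", "Z": "X", "Y": "Y"}  # H X H = Z, H Z H = X

def apply_h(pauli, q):
    """Conjugate a Pauli string by H on qubit q (phases dropped)."""
    s = list(pauli)
    s[q] = H_RULE[s[q]]
    return "".join(s)

def apply_cnot(pauli, c, t):
    """Conjugate by CNOT(c, t): X_c -> X_c X_t, Z_t -> Z_c Z_t."""
    x = [p in "XY" for p in pauli]
    z = [p in "ZY" for p in pauli]
    x[t] ^= x[c]          # X propagates control -> target
    z[c] ^= z[t]          # Z propagates target -> control
    letters = {(0, 0): "I", (1, 0): "X", (0, 1): "Z", (1, 1): "Y"}
    return "".join(letters[(int(a), int(b))] for a, b in zip(x, z))

# Bell-pair preparation in the Heisenberg picture: start from the
# stabilizers of |00>, push them through H on qubit 0, then CNOT(0, 1).
stabilizers = ["ZI", "IZ"]
stabilizers = [apply_cnot(apply_h(s, 0), 0, 1) for s in stabilizers]
print(stabilizers)  # ['XX', 'ZZ'], the Bell-state stabilizer group
```

The entire entangled Bell state is captured by two short Pauli strings, which is the polynomial-size representation the theorem exploits.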

This is both useful (fast simulation of error-correcting circuits) and a constraint: stabilizer circuits alone cannot produce a quantum speedup. For universality you need to escape the Clifford group, which is why non-Clifford gates (T, Toffoli, or magic states) are essential resources for fault-tolerant quantum computing. Magic state distillation (covered in ch.129–131) is the protocol that produces high-fidelity non-Clifford resources inside a stabilizer code, completing the fault-tolerant toolkit.

The Gottesman-Knill theorem does not say quantum computers are classical — only that a specific subset of quantum operations is classical. The full power of quantum computing emerges when that subset is augmented by T gates or equivalent resources, which the stabilizer formalism treats as externally supplied "magic".

References

  1. Daniel Gottesman, Stabilizer codes and quantum error correction (PhD thesis, 1997) — arXiv:quant-ph/9705052. The original and still the canonical reference for the stabilizer formalism.
  2. John Preskill, Lecture Notes on Quantum Computation, Chapter 7 — theory.caltech.edu/~preskill/ph229. Pedagogical derivation of the [[n, k, d]] framework and the Singleton bound.
  3. Nielsen and Chuang, Quantum Computation and Quantum Information (2010), §10.5 (stabilizer codes) — Cambridge University Press.
  4. Wikipedia, Quantum error correction — overview of code families, bounds, and the stabilizer formalism.
  5. Wikipedia, Singleton bound — classical and quantum versions, with the no-cloning-based derivation of the quantum form.
  6. Qiskit Textbook, Introduction to Quantum Error Correction — accessible implementation-oriented treatment including the stabilizer code framework.