In short

The implicit measurement principle says that a qubit left unmeasured at the end of a circuit gives the same statistics on the rest of the qubits as if you had measured it in any basis and discarded the outcome. Mathematically, the reduced state on the kept qubits is the partial trace of the joint state over the ignored qubit — and this partial trace equals the mixture you would get by measuring and averaging over outcomes. Ancilla qubits you forgot about, scratch registers you never cleaned, and most importantly the environment that your qubits inevitably couple to — all of them are implicit measurers, silently collapsing your quantum state into a mixture. This is the mathematical content of decoherence, and it is the reason you can never pretend an unused qubit doesn't exist: from the kept qubits' point of view, the unused one is doing a measurement on you.

Look at almost any quantum circuit diagram. Data qubits at the top, ancilla qubits below, gates firing back and forth, measurement meters at the far right of some wires. Now look at the wires that do not end in a meter. What happens to those qubits?

The honest answer — the one your algorithm's correctness depends on — is not "nothing." It is also not "they just sit there as whatever state they were in." It is stranger than both. A qubit you leave unmeasured at the end of a circuit behaves, as far as every other qubit in the system is concerned, exactly as if you had measured it in any basis you like and then forgotten the outcome. The outcome was not forgotten because you refused to look; it was forgotten because you never looked at all. And yet the collapse happens anyway, from the viewpoint of the rest of the circuit.

This is the implicit measurement principle. It is the partner of the deferred measurement principle from the previous chapter, and together they complete a single sentence: every qubit in a quantum circuit is either explicitly measured at the end, or implicitly measured by being ignored. The difference between the two is only whether a classical bit comes out — the quantum damage to the rest of the circuit is the same either way.

This chapter proves the statement using the partial trace, shows it explicitly on a Bell state, walks through a three-qubit ancilla example where the principle is responsible for a concrete piece of algorithmic damage, and finishes with the most important application of all: the environment, which couples weakly to every real qubit, is an unavoidable implicit measurer. The machinery in this chapter is the starting point for every modern discussion of decoherence.

What the principle says

Fix a joint pure state |\psi\rangle_{AB} of two quantum systems, A and B. System A is what you care about — the data register, the part of the circuit whose output you will read. System B is what you are leaving unmeasured — an ancilla at the end of the algorithm, an environment you cannot control, a qubit you simply forgot about.

The claim comes in two equivalent forms.

Form 1 (partial trace). The description of A alone, after B is ignored, is the reduced density matrix

\rho_A \;=\; \operatorname{tr}_B \bigl(|\psi\rangle_{AB}\langle\psi|\bigr).

Any measurement statistic on A alone — the probability of any outcome of any experiment you run purely on A — is determined by \rho_A and nothing else.

Form 2 (measurement-then-average). Pick any orthonormal basis \{|b_i\rangle\} of system B. Suppose you measured B in that basis, getting outcome i with probability p_i, and then threw the classical outcome i away. The state of A, averaged over the forgotten outcomes, is

\rho_A^{\text{meas}} \;=\; \sum_i p_i \, |\text{rel}_i\rangle_A\langle\text{rel}_i|,

where |\text{rel}_i\rangle_A is the normalised state of A that accompanies outcome i on B — the so-called relative state.

The principle. These two descriptions are equal:

\rho_A \;=\; \rho_A^{\text{meas}}.

Identically. For every joint state |\psi\rangle_{AB}. For every orthonormal basis on B.

[Figure: two panels joined by an equals sign. Left, "ignore B (implicit)": wire A ends in a meter, wire B ends with no meter. Right, "measure B and forget": wire B ends in a meter whose outcome is discarded. Below both: ρ_A = tr_B(|ψ⟩⟨ψ|) = Σᵢ pᵢ |relᵢ⟩⟨relᵢ|.]
Two operations that look different on the quantum circuit diagram are mathematically identical. The left side says "ignore $B$"; the right says "measure $B$ and throw the outcome away." Both produce the same reduced state $\rho_A$, because the partial trace equals the measurement-averaged mixture.

The practical content: there is no meaningful distinction between a qubit you ignored and a qubit that was measured behind your back. From the kept system's side, they are indistinguishable.

Proving it

Write the joint state |\psi\rangle_{AB} in the chosen orthonormal basis \{|b_i\rangle\} of B:

|\psi\rangle_{AB} \;=\; \sum_i |\phi_i\rangle_A \otimes |b_i\rangle_B,

where |\phi_i\rangle_A is the (unnormalised) component of A that multiplies |b_i\rangle_B. Nothing has been assumed yet — this is just expansion in a basis.

Why this form is completely general: any joint state can be written as a sum over the basis states of B with some A-state coefficient for each term. If the state factorises, only one term is non-zero. If it is entangled, multiple terms are non-zero and no single-term rewrite exists.

The measurement-then-average side

Measure B in the basis \{|b_i\rangle\}. The probability of outcome i is the squared norm of the component that accompanies |b_i\rangle:

p_i \;=\; \langle\phi_i | \phi_i\rangle_A.

Conditioned on outcome i, the post-measurement normalised state of A (the relative state from Everett's formalism) is

|\text{rel}_i\rangle_A \;=\; \frac{|\phi_i\rangle_A}{\sqrt{p_i}}.

If the outcome is forgotten and you average over it, the mixed state on A is

\rho_A^{\text{meas}} \;=\; \sum_i p_i \, |\text{rel}_i\rangle_A\langle\text{rel}_i| \;=\; \sum_i p_i \cdot \frac{|\phi_i\rangle_A\langle\phi_i|_A}{p_i} \;=\; \sum_i |\phi_i\rangle_A\langle\phi_i|_A.

Why the p_i factors cancel: the weight p_i and the renormalisation of the relative state (which introduced a 1/p_i in the outer product) multiply to 1. The unnormalised components |\phi_i\rangle reappear naturally.

The partial-trace side

Compute the partial trace directly. From the joint density matrix

|\psi\rangle_{AB}\langle\psi| \;=\; \sum_{i,j} |\phi_i\rangle_A\langle\phi_j|_A \otimes |b_i\rangle_B\langle b_j|_B,

the partial trace over B in the \{|b_k\rangle\} basis is

\rho_A \;=\; \operatorname{tr}_B\bigl(|\psi\rangle_{AB}\langle\psi|\bigr) \;=\; \sum_k \langle b_k|_B\,\bigl(|\psi\rangle_{AB}\langle\psi|\bigr)\,|b_k\rangle_B.

Apply the \langle b_k| on the left and the |b_k\rangle on the right of each term, using \langle b_k | b_i\rangle = \delta_{ki} and \langle b_j | b_k\rangle = \delta_{jk}:

\rho_A \;=\; \sum_k \sum_{i,j} |\phi_i\rangle\langle\phi_j| \cdot \delta_{ki} \cdot \delta_{jk} \;=\; \sum_i |\phi_i\rangle\langle\phi_i|.

Why the double sum collapses: the Kronecker deltas force k = i from the first inner product and k = j from the second, so i = j. The double sum reduces to a single sum, matching the measurement-then-average expression exactly.

The identity

Both computations produce the same expression:

\rho_A \;=\; \sum_i |\phi_i\rangle_A\langle\phi_i|_A \;=\; \rho_A^{\text{meas}}.

This is the implicit measurement principle, proved. The computation took two short passes of algebra. The physics behind it: whatever coherence A had with B in the joint state has been washed out by the partial trace — equivalently, by the measurement-and-forget — and what remains is a classical mixture of the relative states, weighted by how likely each was.

Note the basis-independence. The proof was carried out for a chosen basis \{|b_i\rangle\} of B, but the left-hand side \rho_A does not depend on the basis — it is the partial trace, which is basis-independent. That means: for every basis you might have measured B in, the post-measurement-and-forget mixture on A is the same density matrix. The measurer's basis choice is invisible to the kept system. This is a surprisingly strong statement, and a crucial one for what comes next.
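Both sides of the identity, and the basis-independence, can be checked numerically. A sketch in NumPy (the dimensions, the random state, and the helper name `measure_and_forget` are ours, chosen for illustration):

```python
# Numerical check of the implicit measurement principle on a random joint state.
import numpy as np

rng = np.random.default_rng(0)
dA, dB = 2, 3

# Random pure joint state |psi>_AB, stored so that psi[a, b] is the amplitude
# of |a>_A |b>_B.
psi = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
psi /= np.linalg.norm(psi)

# Form 1 (partial trace): rho_A[a, a'] = sum_b psi[a, b] psi*[a', b].
rho_A = psi @ psi.conj().T

# Form 2 (measure B in a basis, forget the outcome, average).
def measure_and_forget(psi, B_basis):
    rho = np.zeros((dA, dA), dtype=complex)
    for b in B_basis.T:                       # columns are the basis vectors
        phi = psi @ b.conj()                  # unnormalised component |phi_i>_A
        rho += np.outer(phi, phi.conj())      # p_i |rel_i><rel_i| = |phi_i><phi_i|
    return rho

# Computational basis on B, and a random unitary basis on B.
comp = np.eye(dB)
Q, _ = np.linalg.qr(rng.normal(size=(dB, dB)) + 1j * rng.normal(size=(dB, dB)))

assert np.allclose(rho_A, measure_and_forget(psi, comp))
assert np.allclose(rho_A, measure_and_forget(psi, Q))   # basis-independent
```

The second assertion is the basis-independence statement: the measurement-averaged mixture is the same density matrix for any orthonormal basis on B.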

Worked examples

Example 1 — Tracing out half a Bell state

Setup. Take the Bell state |\Phi^+\rangle_{AB} = \tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle). Verify the implicit measurement principle on it: show that \operatorname{tr}_B(|\Phi^+\rangle\langle\Phi^+|) equals the mixture obtained by measuring B in the computational basis and forgetting the outcome.

Step 1 — decompose in the B-basis. Write the Bell state as a sum over the computational basis of B:

|\Phi^+\rangle_{AB} \;=\; |\phi_0\rangle_A \otimes |0\rangle_B + |\phi_1\rangle_A \otimes |1\rangle_B,

with |\phi_0\rangle_A = \tfrac{1}{\sqrt{2}}|0\rangle and |\phi_1\rangle_A = \tfrac{1}{\sqrt{2}}|1\rangle. Why these values: the Bell state is a sum of |00\rangle and |11\rangle with equal amplitude 1/\sqrt{2}. Collecting the B = |0\rangle part: \tfrac{1}{\sqrt{2}}|0\rangle_A \otimes |0\rangle_B. Collecting the B = |1\rangle part: \tfrac{1}{\sqrt{2}}|1\rangle_A \otimes |1\rangle_B. These are the |\phi_i\rangle_A components.

Step 2 — measurement-then-average. Measure B in the computational basis. The probabilities are p_0 = \langle\phi_0|\phi_0\rangle = 1/2 and p_1 = \langle\phi_1|\phi_1\rangle = 1/2. The relative states are |\text{rel}_0\rangle_A = |\phi_0\rangle / \sqrt{p_0} = |0\rangle and |\text{rel}_1\rangle_A = |\phi_1\rangle / \sqrt{p_1} = |1\rangle. The mixture is

\rho_A^{\text{meas}} \;=\; \tfrac{1}{2}|0\rangle\langle 0| + \tfrac{1}{2}|1\rangle\langle 1| \;=\; \frac{I}{2}.

Step 3 — partial trace directly. Form the joint density matrix:

|\Phi^+\rangle\langle\Phi^+| \;=\; \tfrac{1}{2}\bigl(|00\rangle\langle 00| + |00\rangle\langle 11| + |11\rangle\langle 00| + |11\rangle\langle 11|\bigr).

Apply \operatorname{tr}_B term by term using \operatorname{tr}_B(|a c\rangle\langle b d|) = \langle d|c\rangle \cdot |a\rangle\langle b|:

\operatorname{tr}_B\bigl(|\Phi^+\rangle\langle\Phi^+|\bigr) \;=\; \tfrac{1}{2}\bigl(\langle 0|0\rangle\, |0\rangle\langle 0| + \langle 1|0\rangle\, |0\rangle\langle 1| + \langle 0|1\rangle\, |1\rangle\langle 0| + \langle 1|1\rangle\, |1\rangle\langle 1|\bigr).

The cross-terms \langle 1|0\rangle and \langle 0|1\rangle are zero; the diagonals give \langle 0|0\rangle = \langle 1|1\rangle = 1. So

\rho_A \;=\; \tfrac{1}{2}|0\rangle\langle 0| + \tfrac{1}{2}|1\rangle\langle 1| \;=\; \frac{I}{2}.

Step 4 — check another basis. Try B in the X basis \{|+\rangle, |-\rangle\}. Rewrite |\Phi^+\rangle using |0\rangle = (|+\rangle + |-\rangle)/\sqrt{2} and |1\rangle = (|+\rangle - |-\rangle)/\sqrt{2}:

|\Phi^+\rangle \;=\; \tfrac{1}{2}\bigl(|0\rangle_A(|+\rangle_B + |-\rangle_B) + |1\rangle_A(|+\rangle_B - |-\rangle_B)\bigr) \;=\; \tfrac{1}{\sqrt{2}}\bigl(|+\rangle_A|+\rangle_B + |-\rangle_A|-\rangle_B\bigr).

Measuring B in the X basis: p_+ = p_- = 1/2, relative states |+\rangle_A and |-\rangle_A. The mixture is \tfrac{1}{2}|+\rangle\langle +| + \tfrac{1}{2}|-\rangle\langle -| = I/2.

Result. Both bases yield \rho_A = I/2, and the direct partial trace yields the same thing. The implicit measurement principle holds for this state in both bases checked, as the general theorem guarantees. What this shows: the maximally mixed state I/2 is what Alice sees on her qubit of a Bell pair — whether Bob measured it in any basis, or did not measure it at all. The two cases are indistinguishable to Alice, which is the no-communication theorem in one of its many disguises.
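The whole example fits in a few lines of NumPy. A sketch (variable names are ours):

```python
# The Bell-state computation of Example 1, repeated numerically.
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
phi_plus = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)

# Partial trace over B: reshape |psi> to psi[a, b], then form psi psi^dagger.
psi = phi_plus.reshape(2, 2)
rho_A = psi @ psi.conj().T
assert np.allclose(rho_A, np.eye(2) / 2)     # maximally mixed, as computed above

# Measure B in the X basis {|+>, |->}, forget the outcome, average.
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)
rho_meas = sum(np.outer(psi @ b, (psi @ b).conj()) for b in (plus, minus))
assert np.allclose(rho_meas, np.eye(2) / 2)  # same answer, in a different basis
```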

[Figure: flowchart. The Bell state |Φ⁺⟩_AB follows two paths, "tr_B" and "measure B in any basis, forget"; both converge on the same reduced state ρ_A = I/2, maximally mixed.]
The two paths — ignore $B$ via partial trace, or measure $B$ and discard — lead to the same reduced state $\rho_A$. This is the implicit measurement principle on the Bell state.

Example 2 — Three-qubit circuit with a forgotten ancilla

Setup. A data qubit d starts in the superposition |+\rangle = \tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle). An ancilla a starts in |0\rangle. A CNOT fires with d as control and a as target. At the end of the circuit, a is left unmeasured. Compute the reduced state of d and verify that it equals the measurement-averaged state.

Step 1 — evolve the joint state. Before the CNOT the joint state is |+\rangle_d|0\rangle_a = \tfrac{1}{\sqrt{2}}(|00\rangle + |10\rangle) (writing |d \, a\rangle). After the CNOT, the target a is flipped when d is |1\rangle:

|\psi\rangle \;=\; \tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle) \;=\; |\Phi^+\rangle.

Why the state is now a Bell state: one CNOT on |+\rangle|0\rangle is the textbook Bell-state preparation. The data qubit started in a superposition, the ancilla was in |0\rangle, and the CNOT entangled them.

Step 2 — trace out the ancilla. From Example 1, \operatorname{tr}_a(|\Phi^+\rangle\langle\Phi^+|) = I/2. So the reduced state of the data qubit is \rho_d = I/2.

Step 3 — the damage. The data qubit started as the pure superposition |+\rangle, which has purity \operatorname{tr}(\rho_d^2) = 1 and a definite expectation value \langle X\rangle = 1. After the circuit, the reduced state is the maximally mixed I/2 — the density matrix of a fair coin in every basis, with purity \operatorname{tr}(\rho_d^2) = 1/2 and expectation values \langle X\rangle = \langle Y\rangle = \langle Z\rangle = 0. The forgotten ancilla destroyed every ounce of phase information the data qubit originally carried.

Step 4 — measurement-averaged description. Measure a in the computational basis. Outcomes: 0 with probability 1/2 (leaving data in |0\rangle), 1 with probability 1/2 (leaving data in |1\rangle). Forgetting the outcome gives the mixture \tfrac{1}{2}|0\rangle\langle 0| + \tfrac{1}{2}|1\rangle\langle 1| = I/2. Same answer as the partial trace, as the principle demands.

Step 5 — try a different basis. Measure a in the X basis. Rewrite the joint state: |\Phi^+\rangle = \tfrac{1}{\sqrt{2}}(|+\rangle_d|+\rangle_a + |-\rangle_d|-\rangle_a). Outcomes on a: + with probability 1/2 (leaving d in |+\rangle), - with probability 1/2 (leaving d in |-\rangle). Forgetting: \tfrac{1}{2}|+\rangle\langle +| + \tfrac{1}{2}|-\rangle\langle -| = I/2. The mixture is the same density matrix I/2 in either basis.

Result. The unmeasured ancilla a behaved like a silent measurer. The data qubit's reduced state — the object that governs every experimental prediction on d alone — is I/2, identical to what you would have obtained if you had measured a and thrown away the outcome in any basis. What this shows: this is the precise mechanism by which a "dirty" ancilla (an ancilla left entangled with the data at the end of a computation) destroys quantum advantage. Every phase the data was carrying has been traced out, leaving a classical-looking mixture behind. The cure (from the uncomputation chapter) is to run the entangling step backwards before the ancilla is abandoned — thereby disentangling the two qubits so that the partial trace no longer damages the data.
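Example 2 can be run end to end. A NumPy sketch (the gate matrix and the |d a⟩ qubit ordering are our conventions):

```python
# Example 2: data qubit |+>, ancilla |0>, CNOT, ancilla left unmeasured.
import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2)
zero = np.array([1.0, 0.0])
CNOT = np.array([[1, 0, 0, 0],     # control = data (first qubit), target = ancilla
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

psi = CNOT @ np.kron(plus, zero)   # the Bell state |Phi+> on (d, a)

# Trace out the ancilla: reshape so psi2[d, a], then rho_d = psi2 psi2^dagger.
psi2 = psi.reshape(2, 2)
rho_d = psi2 @ psi2.conj().T

X = np.array([[0, 1], [1, 0]], dtype=float)
purity_after = np.trace(rho_d @ rho_d).real
print(np.allclose(rho_d, np.eye(2) / 2))   # True: maximally mixed
print(purity_after)                        # approximately 0.5, down from 1
print(np.trace(rho_d @ X).real)            # 0.0: the <X> = 1 of |+> is gone
```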

[Figure: circuit diagram. Data qubit |+⟩_d and ancilla |0⟩_a; a CNOT from d to a creates entanglement; a is left unmeasured; tracing it out (tr_a) leaves ρ_d = I/2, the pure superposition destroyed by the implicit measurement.]
The ancilla was never explicitly measured, but because it is entangled with the data qubit and ignored at the end, the data qubit's reduced state is the maximally mixed $I/2$. This is the implicit measurement principle doing algorithmic damage.

Decoherence as environmental implicit measurement

Everything up to this point has been about ancilla qubits inside a circuit. The most far-reaching application of the implicit measurement principle is elsewhere: the environment.

Every real qubit is weakly coupled to a vast number of external degrees of freedom — thermal photons in the microwave cavity, vibrations of the mounting substrate, stray magnetic fields, cosmic rays on a bad day. Collectively, these are the environment, and the qubit cannot help interacting with them.

Here is the canonical picture. Over a short time t, a qubit initially in state \alpha|0\rangle + \beta|1\rangle evolves, along with its environment, under the natural coupling Hamiltonian into an entangled joint state of the form

\alpha|0\rangle_S |E_0(t)\rangle + \beta|1\rangle_S |E_1(t)\rangle,

where |E_0(t)\rangle and |E_1(t)\rangle are the environment states that become correlated with the two qubit branches. You have no access to the environment — it has 10^{23} degrees of freedom in the walls of your cryostat. From your perspective, it is an enormous unmeasured register.

Apply the implicit measurement principle. The reduced density matrix of the system, after tracing out the environment, is

\rho_S \;=\; |\alpha|^2 |0\rangle\langle 0| + \alpha\beta^* \langle E_1(t) | E_0(t)\rangle |0\rangle\langle 1| + \alpha^*\beta \langle E_0(t) | E_1(t)\rangle |1\rangle\langle 0| + |\beta|^2 |1\rangle\langle 1|.

The diagonal entries are the measurement probabilities in the computational basis — those are not affected. But the off-diagonal coherences \alpha\beta^* and \alpha^*\beta get multiplied by the environment overlaps \langle E_1(t) | E_0(t)\rangle and \langle E_0(t) | E_1(t)\rangle.

For a well-coupled environment, this overlap decays rapidly — often exponentially with a timescale T_2 (the "dephasing time"). As the environment "learns" which branch the qubit is in by accumulating distinguishing records, the overlap drops to zero, and

\rho_S \;\longrightarrow\; |\alpha|^2 |0\rangle\langle 0| + |\beta|^2 |1\rangle\langle 1|.

A diagonal density matrix in the computational basis. The superposition is gone; all that remains is a classical probability distribution. The qubit has decohered.

This is not a new postulate piled on top of quantum mechanics. It is a direct consequence of the implicit measurement principle applied to a realistic qubit-environment interaction. The environment has effectively measured the qubit in some basis (the "pointer basis" determined by the coupling Hamiltonian), and because the environment's outcome is unreachable, the qubit behaves exactly as if it had been measured and its outcome thrown away. The principle is what makes this identification precise.
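A toy model makes the decay visible. In the sketch below, the exponential overlap \langle E_0(t)|E_1(t)\rangle = e^{-t/T_2} and the value of T_2 are illustrative assumptions, not derived from any specific coupling:

```python
# Dephasing as environmental implicit measurement: a toy model.
import numpy as np

alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)   # qubit starts in |+>
T2 = 10.0                                      # assumed dephasing timescale

def rho_S(t):
    """Reduced qubit state after tracing out the environment at time t."""
    overlap = np.exp(-t / T2)                  # model: <E0(t)|E1(t)> = e^{-t/T2}
    return np.array([[abs(alpha) ** 2,                 alpha * np.conj(beta) * overlap],
                     [np.conj(alpha) * beta * overlap, abs(beta) ** 2]])

for t in (0.0, 5.0, 50.0):
    coherence = abs(rho_S(t)[0, 1])
    print(f"t = {t:5.1f}   |off-diagonal| = {coherence:.4f}")
    # prints 0.5000, then 0.3033, then 0.0034: the coherence dies, the
    # diagonal survives, and rho_S approaches the classical mixture.
```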

[Figure: schematic. A qubit S in α|0⟩ + β|1⟩ couples to a large environment E (∼10²³ degrees of freedom, unmeasured); tracing out E (tr_E) leaves ρ_S mixed, with diagonal entries intact and off-diagonals multiplied by ⟨E₀|E₁⟩ → 0. The environment is an unavoidable implicit measurer.]
The qubit couples to an environment, a giant unmeasured "register." Tracing out the environment — which is what ignoring it amounts to — reduces the qubit state to a mixture with suppressed off-diagonals. This is decoherence.

Quantum error correction is, in one framing, the art of fighting the implicit measurement principle. It cannot stop the environment from being an unmeasured register — physics does not permit that — but it can redistribute the quantum information across many physical qubits so that the environment's implicit measurement hits only unimportant components, leaving the encoded logical information intact. Every error-correction protocol in existence is, at its core, a strategy for surviving environmental implicit measurement.

Going deeper

If you understand the statement, the partial-trace proof, the Bell-state example, and the decoherence interpretation, you have everything you need to use the implicit measurement principle in reading papers and writing quantum algorithms. The rest of this section goes deeper: the formal proof framed as a statement about quantum operations, the purification theorem that is its structural mirror image, the pointer-basis question that ties into the physics of decoherence, and the operational consequences for quantum error correction.

The principle as a statement about quantum operations

A quantum channel — any physically admissible transformation of a density matrix — admits a Stinespring dilation: every channel \mathcal{E} on system S can be written as \mathcal{E}(\rho_S) = \operatorname{tr}_E(U(\rho_S \otimes |0\rangle\langle 0|_E)U^\dagger) for some unitary U on S \otimes E and some ancilla space E. This is the purification of a channel: every noisy or non-unitary operation you can describe is secretly a unitary coupling to a larger system followed by a partial trace over the larger system.

The implicit measurement principle is what guarantees that this construction is physically sensible: when we write the partial trace on the right-hand side, we mean the mathematical operation \operatorname{tr}_E — and that operation corresponds to "the environment is inaccessible, do a mid-circuit measurement on it in any basis you like and forget the outcome, or equivalently leave it alone." The principle says: all of those interpretations give the same \rho_S, so the Stinespring dilation is well-defined without our having to pick a specific measurement strategy for E.

In turn, every non-unitary quantum operation that shows up in quantum information — amplitude damping, phase damping, depolarising noise, arbitrary CPTP maps — can be implemented as "unitary plus implicit measurement on an ancilla." Any channel is a purification-plus-forget.
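Phase damping is the smallest worked instance of this. The sketch below is one possible dilation (the coupling unitary and the value of `lam` are illustrative choices, not a canonical form):

```python
# Stinespring picture of phase damping: a unitary on system + ancilla,
# followed by a partial trace over the ancilla.
import numpy as np

lam = 0.3                                # damping parameter, chosen for the demo
c, s = np.sqrt(1 - lam), np.sqrt(lam)

# Unitary on S (x) E in the basis |s e>: it rotates the environment qubit
# only when the system is |1>, so E "learns" which branch S is in.
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, c, -s],
              [0, 0, s,  c]])

def channel(rho_S):
    """Phase damping as purification-plus-forget: couple to |0>_E, trace E out."""
    env0 = np.array([[1.0, 0.0], [0.0, 0.0]])          # |0><0| on E
    joint = U @ np.kron(rho_S, env0) @ U.conj().T
    # Partial trace over E: reshape to (sA, eA, sB, eB), sum the env indices.
    return np.einsum('aibi->ab', joint.reshape(2, 2, 2, 2))

rho_plus = np.array([[0.5, 0.5], [0.5, 0.5]])          # |+><+|
out = channel(rho_plus)
assert np.allclose(out, [[0.5, 0.5 * c], [0.5 * c, 0.5]])
# Diagonal unchanged; off-diagonals shrunk by sqrt(1 - lam), exactly the
# "unitary plus implicit measurement on an ancilla" picture.
```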

The purification theorem — the converse direction

Every mixed density matrix \rho_A on system A has a purification: there exists a reference system R and a pure state |\psi\rangle_{AR} such that \rho_A = \operatorname{tr}_R(|\psi\rangle_{AR}\langle\psi|). The purification is unique up to a unitary on R, and R can always be chosen with \dim R = \operatorname{rank}(\rho_A).

The implicit measurement principle is the forward direction (partial trace gives a mixed state); purification is the reverse (every mixed state arises from a partial trace of some pure state on a larger system). Together they say: mixedness and "entangled with some external thing you ignored" are the same phenomenon, up to bookkeeping. The Church of the Larger Hilbert Space, as the joke goes — every classical-looking uncertainty is really entanglement with something you chose not to look at.

This has a real consequence: if you are handed a mixed state \rho_A and asked to predict the results of any experiment on A, you can freely imagine it came from tracing out some fictitious reference system R from a pure state. The predictions are the same either way. This is the reason the density-matrix formalism and the pure-state-on-a-bigger-space formalism are interchangeable — and it is the lens through which modern quantum information theory treats every source of noise.
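The construction is short enough to carry out explicitly. A sketch (the random mixed state is illustrative; the reference system here has dimension equal to dim A, which suffices since rank(ρ_A) ≤ dim A):

```python
# Constructing a purification of a mixed state and verifying it.
import numpy as np

rng = np.random.default_rng(1)

# A random mixed state on A (dA = 2), guaranteed positive semidefinite.
dA = 2
M = rng.normal(size=(dA, dA)) + 1j * rng.normal(size=(dA, dA))
rho_A = M @ M.conj().T
rho_A /= np.trace(rho_A).real

# Purify via the eigendecomposition: |psi>_AR = sum_k sqrt(p_k) |k>_A |k>_R.
p, V = np.linalg.eigh(rho_A)              # p_k >= 0, columns of V are |k>_A
psi = sum(np.sqrt(max(pk, 0.0)) * np.kron(V[:, k], np.eye(dA)[k])
          for k, pk in enumerate(p))

# Check: tracing out the reference R recovers rho_A exactly.
psi2 = psi.reshape(dA, dA)                # psi2[a, r]
assert np.allclose(psi2 @ psi2.conj().T, rho_A)
```

Running the check in the other direction is the implicit measurement principle; running it in this direction is purification. The same reshape-and-multiply is doing both.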

The pointer basis — which basis does the environment measure in?

A subtlety the implicit measurement principle glosses over is that the environment's effective measurement basis is not arbitrary in physics. While the principle holds for every orthonormal basis (the reduced state on S is the same regardless of which basis we choose for the bookkeeping), a real environment has a specific coupling Hamiltonian, and that Hamiltonian selects a preferred basis — the pointer basis — in which the environment "copies" information about S into itself most robustly.

For a typical superconducting qubit coupled to electromagnetic noise, the pointer basis is approximately the computational (energy) basis. That is why off-diagonal coherences in the computational basis decay fastest, and why superposition states like |+\rangle decohere to I/2 while computational-basis states like |0\rangle and |1\rangle remain stable for longer. The pointer-basis story (Zurek's einselection programme, 1981 onwards) adds physics content on top of the bare partial-trace bookkeeping.

Crucially, the implicit measurement principle does not depend on the pointer basis — the reduced state \rho_S is the same regardless of which basis the environment's pointer states happen to live in. But the rate at which different components of the state decohere is basis-dependent, and this is what makes the pointer basis a real physical object.

Consequences for quantum error correction

Quantum error correction protocols are built around the implicit measurement principle. A stabiliser code takes k logical qubits and encodes them in n > k physical qubits, with n - k ancilla-like "check" qubits. Periodically, the check qubits are measured (syndrome extraction); the outcomes are classical bits that tell you which error, if any, occurred; and a corresponding correction is applied.

The principle is doing work at two places in this picture. First, between syndrome measurements, the qubits are coupled to the environment, and tracing the environment out (by the implicit measurement principle) causes errors at rates set by the coupling strength. Second, the syndrome extraction is a deliberate explicit measurement of the check qubits — designed so that its outcomes reveal the error while leaving the logical state unaffected. The difference between "environment as implicit measurer" (bad) and "check qubits as explicit measurer" (good) is which basis the measurement is in relative to the logical subspace. The code is engineered so that environment-basis measurements don't damage logical information, while check-qubit-basis measurements extract syndromes without damaging logical information either.
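The smallest concrete instance of syndrome extraction is the three-qubit bit-flip code. A minimal sketch (the error location, the amplitudes, and the correction lookup table are illustrative; a single bit-flip error leaves the state an eigenstate of both stabilisers, so the syndrome outcomes below are deterministic):

```python
# Three-qubit bit-flip code: an error is detected and undone by measuring
# stabilisers, without ever measuring the logical information.
import numpy as np

I2 = np.eye(2); X = np.array([[0., 1.], [1., 0.]]); Z = np.diag([1., -1.])

def op(gates):
    """Tensor product of single-qubit operators over the 3 qubits."""
    m = gates[0]
    for g in gates[1:]:
        m = np.kron(m, g)
    return m

# Encode alpha|0> + beta|1> as alpha|000> + beta|111>.
alpha, beta = 0.6, 0.8
logical = np.zeros(8); logical[0] = alpha; logical[7] = beta

# An environment-induced bit flip on qubit 1 (the error, chosen for the demo).
err = op([I2, X, I2]) @ logical

# Syndrome extraction: the stabilisers Z0Z1 and Z1Z2 (real state, so the
# expectation <psi|S|psi> gives the deterministic +-1 outcome directly).
s1 = err @ op([Z, Z, I2]) @ err
s2 = err @ op([I2, Z, Z]) @ err

# Lookup table: syndrome -> which qubit to flip back.
correction = {(1, 1): [I2, I2, I2], (-1, -1): [I2, X, I2],
              (-1, 1): [X, I2, I2], (1, -1): [I2, I2, X]}
fixed = op(correction[(round(s1), round(s2))]) @ err
assert np.allclose(fixed, logical)   # logical information fully recovered
```

The syndrome revealed which qubit flipped, but nothing about alpha and beta: that is the engineered difference between the check-qubit measurement (good) and the environment's implicit measurement (bad).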

In effect, error correction is a carefully choreographed game of getting the implicit measurement principle to work for you (syndrome extraction) rather than against you (environmental decoherence). Every surface-code demonstration and every logical-qubit experiment in the past five years is a story about this choreography.

References

  1. Nielsen and Chuang, Quantum Computation and Quantum Information (2010), §2.4 and §4.4 — density matrices, the partial trace, and the implicit measurement principle. Cambridge University Press.
  2. John Preskill, Lecture Notes on Quantum Computation, Ch. 3 — open systems, partial trace, and decoherence. theory.caltech.edu/~preskill/ph229.
  3. Wikipedia, Partial trace — the formal definition that underlies the principle.
  4. Wojciech H. Zurek, Decoherence, einselection, and the quantum origins of the classical (2003) — the definitive survey of environmental decoherence, pointer bases, and einselection. arXiv:quant-ph/0105127.
  5. Wikipedia, Deferred measurement principle — paired article covering both deferred and implicit measurement.
  6. Qiskit Textbook, Density matrices and reduced states — practical examples of partial-trace computations on simulator.