In short
Quantum mechanics has two kinds of time evolution. Postulate 2 says closed systems evolve unitarily — smoothly, continuously, reversibly: |\psi(t)\rangle = U(t) |\psi(0)\rangle. Postulate 3 says measurement evolves the state discontinuously — a non-unitary projection onto one of the basis states, with outcome probabilities given by the Born rule. Unitary is reversible; measurement is not. The measurement problem is this awkwardness: where does the line between the two kinds of dynamics sit, and why are there two kinds at all? Different interpretations — Copenhagen, many-worlds, relational quantum mechanics, QBism — answer in different ways, and they all make exactly the same experimental predictions. Decoherence is the modern tool that explains operationally why the quantum-to-classical transition looks the way it does without needing an interpretation. For building quantum computers, you can use Postulate 3 directly and never choose an interpretation.
Quantum mechanics has a structural oddity that most physics textbooks rush past in a single sentence. Here it is, stated plainly.
When a quantum system is left alone, its state evolves continuously and smoothly. Every tiny slice of time, the state |\psi\rangle rotates slightly by a unitary operator determined by the Hamiltonian. You can run the evolution forwards, you can run it backwards — a film of the state vector moving through its Hilbert space is identical played in reverse. Nothing is lost. No information escapes. This is the content of Postulate 2 of quantum mechanics, and every gate in every quantum computer you will ever meet is a unitary operator applying this rule.
But the moment you measure the system, something else happens. The continuous evolution cuts out. The state, in one step, jumps to one of a small number of allowed outcomes. The amplitudes that a moment ago were rotating through Hilbert space have collapsed to a single basis ket with probability 1, and the phase information of the other branches is gone. You cannot run this step backwards — there is no unitary inverse of "projected onto |0\rangle with probability |\alpha|^2." This is Postulate 3, the Born rule applied as an actual event.
The two rules do not obviously belong to the same theory. The quantum algorithm that runs on your chip is pure Postulate 2 from start to finish, except for the measurement at the end, which is pure Postulate 3. Why does the same theory have two kinds of dynamics? Where does the boundary between them sit — at the atomic scale, at the scale of the detector, at the scale of your eye, at the scale of your awareness of the outcome? Is the boundary real or is it an artefact of how physicists drew the picture?
This question is the measurement problem, and it is a hundred years old. It will not be solved in this chapter. Over the past century, physicists have proposed four or five main interpretations of quantum mechanics, each of which dissolves the problem in a different way. None of them is experimentally preferred over the others — they all predict the same laboratory outcomes, precisely because Postulate 3 is the empirically correct rule for what happens when you measure. The chapter that follows sketches what each interpretation says, notes what they agree on, and then tells you the pragmatic engineering stance that builders of quantum computers take in practice.
The short version: interpretations matter enormously for philosophy and almost not at all for computing. You can understand, design, and program every quantum algorithm without ever choosing between Copenhagen and many-worlds. But knowing the debate exists — and knowing which pieces of it are live and which are cultural — is part of being quantum-literate.
The problem, stated formally
Write down the two postulates side by side.
Postulate 2 — Unitary evolution. Between measurements, a closed quantum system evolves via a unitary operator:

|\psi(t)\rangle = U(t)\,|\psi(0)\rangle, \qquad U(t)^\dagger U(t) = I.
This is linear (amplitudes add before squaring), deterministic (given U and the initial state, the final state is fixed), reversible (apply U^\dagger to undo), and continuous (infinitesimal time steps give infinitesimal unitary rotations).
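To make these four properties concrete, here is a minimal numpy sketch — the Hamiltonian (Pauli-X) and the evolution time are arbitrary toy choices, not anything specific to this chapter:

```python
# Minimal sketch of Postulate 2 for one qubit. H and t are illustrative; hbar = 1.
import numpy as np
from scipy.linalg import expm

H = np.array([[0.0, 1.0], [1.0, 0.0]])          # toy Hamiltonian
t = 0.37                                        # arbitrary evolution time
U = expm(-1j * H * t)                           # U(t) = exp(-i H t)

psi0 = np.array([1.0, 0.0], dtype=complex)      # start in |0>
psi_t = U @ psi0                                # deterministic, continuous evolution

print(np.allclose(U.conj().T @ U, np.eye(2)))   # True: U is unitary
print(np.allclose(U.conj().T @ psi_t, psi0))    # True: applying U^dagger undoes the evolution
print(np.isclose(np.linalg.norm(psi_t), 1.0))   # True: no amplitude is lost
```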
Postulate 3 — Projective measurement. Given projectors \{P_m\} with \sum_m P_m = I, a measurement on |\psi\rangle produces outcome m with probability p(m) = \langle \psi | P_m |\psi\rangle, and the post-measurement state is P_m|\psi\rangle / \sqrt{p(m)}. This is non-linear (the renormalisation depends on which outcome occurred), stochastic (one of several outcomes, with known probabilities), irreversible (phase information from the unselected branches is gone), and discontinuous (the state jumps, not rotates).
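The same postulate as a short numpy sketch — the state and the computational-basis projectors below are illustrative choices:

```python
# Minimal sketch of Postulate 3: Born-rule probabilities and the renormalised
# post-measurement state, for an arbitrary example state |psi> = 0.6|0> + 0.8i|1>.
import numpy as np

psi = np.array([0.6, 0.8j], dtype=complex)
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]          # projectors P_0, P_1, summing to I

probs = [np.real(psi.conj() @ Pm @ psi) for Pm in P]    # p(m) = <psi|P_m|psi>
m = np.random.choice([0, 1], p=probs)                   # stochastic outcome
post = P[m] @ psi / np.sqrt(probs[m])                   # P_m|psi> / sqrt(p(m))

print(probs)      # [0.36, 0.64]
print(m, post)    # one definite basis ket; the other branch's amplitude and phase are gone
```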
The two postulates describe incompatible kinds of dynamics. Unitary evolution is Schrödinger's equation; projection is an extra axiom bolted on top. There is nothing inside Postulate 2 that will ever produce a projection — run unitary evolution forever and the state just keeps rotating. Yet every experiment ends in a measurement that obeys Postulate 3. The formalism works; the sensible question of when Postulate 2 stops and Postulate 3 takes over has no unambiguous answer within the standard theory.
The naive answer "Postulate 3 applies whenever a physicist looks" is obviously unsatisfactory — physicists are made of atoms, and atoms obey Postulate 2. The equally naive answer "Postulate 3 applies when the system gets big enough" only raises further questions — how big, why then, and why does a continuous law have a hard cut-off? Every interpretation of quantum mechanics is, in essence, a proposal for how to resolve this tension.
Four interpretations in four paragraphs
The standard interpretations, each compressed to a single idea. Every one of them produces exactly the same predictions as Postulate 3. They differ in the story they tell about what is really going on underneath the calculation.
Copenhagen. The interpretation written into most physics textbooks. Postulate 3 is fundamental; measurement is a basic process that cannot be reduced to unitary evolution. The role of the observer is special — at the moment a measurement is recorded, the wavefunction collapses. The "classical world" of observers and measuring devices sits outside the quantum formalism. Attributed roughly to Niels Bohr and Werner Heisenberg in the late 1920s. The virtue of Copenhagen is operational clarity: apply unitary evolution, apply the Born rule at measurement, read off probabilities. The cost is accepting that the theory has a non-reducible second law whose boundary is drawn by hand.
Many-worlds (Everett). Proposed by Hugh Everett III in his 1957 PhD thesis. There is no collapse and no Postulate 3 at the fundamental level. The entire universe — system plus apparatus plus observer — evolves by a single universal unitary. When a measurement happens, the observer also becomes a quantum system, and the apparent "outcome" is just the observer ending up correlated (entangled) with one branch of the superposition. All branches exist; "you" experience one of them because "you" are a branch of the universal wavefunction. The Born probability is then a statement about which branch you find yourself in. The virtue is a single unified law — just unitary evolution, no seam. The cost is the ontological inflation: every quantum event in the universe continuously multiplies the branches, and no one agrees on how to make "probability of being in a branch" well-defined from within.
Relational quantum mechanics (Rovelli). Proposed by Carlo Rovelli in 1996. Quantum states are not absolute facts about the world — they are relational, referring to the information one system has about another. Two different observers can assign different states to the same system at the same time, and both are correct from their own relational perspective. The collapse is then a perspective-dependent event: it happens relative to the observer who measured, but not in absolute terms. No single "view from nowhere" about the universe is possible. The virtue is that it dissolves paradoxes like Wigner's friend (below). The cost is giving up the idea of a single, observer-independent state of the world.
QBism (Quantum Bayesianism). Developed by Christopher Fuchs and collaborators in the early 2000s. Quantum states are subjective — they encode an agent's personal degrees of belief about what future measurement outcomes she will experience. A quantum state is more like a bookie's odds than like a physical property of the system. Collapse is trivially explained: it is the updating of beliefs upon receiving new information, just as a Bayesian updates a probability distribution after an observation. The virtue is philosophical hygiene — no collapse mystery because there is no physical wavefunction to collapse. The cost is accepting that physics is not describing an objective world "out there" in the usual sense.
These four (plus several others — pilot-wave / de Broglie–Bohm, consistent histories, objective-collapse models like GRW) are the main positions. None of them has been ruled out by experiment. All of them reproduce standard quantum predictions. The choice between them is, as of 2026, philosophical rather than empirical.
What every interpretation agrees on — the Born rule
Despite their differences, all four interpretations agree on what the experimentalist sees. They have to: the experimental record is fixed, and any interpretation that disagreed with the Born rule would have been falsified decades ago.
Specifically, every interpretation agrees that:
- When you measure a state |\psi\rangle in the basis \{|m\rangle\}, the probability of outcome m is |\langle m|\psi\rangle|^2.
- After the measurement, conditional on outcome m, all future predictions about this system use the state |m\rangle (or P_m|\psi\rangle/\sqrt{p(m)} for a degenerate projector).
- Two experiments with the same preparation produce the same probability distribution over outcomes.
- Any unitary gate you apply has the matrix effect U|\psi\rangle you would compute in the standard formalism.
This is why it is safe to build quantum computers without picking an interpretation: the engineering stack — gate calibration, state preparation, readout — depends only on the empirically tested shared core. Copenhagen and many-worlds disagree about whether collapse is "really" happening, but they agree on the probability that a photon hits detector D0.
Why this matters for the wiki. This is the only chapter in the quantum-computing track where different authors might tell the narrative in genuinely different words — one textbook says the wavefunction "collapses," another says it "splits," and a third says it "updates," but all the predictions and all the code are the same. You should recognise the vocabulary when you read a paper, and you should not be alarmed when different physicists sound like they disagree. The disagreement is about what a quantum state is; the computation does not depend on the answer.
Decoherence — the modern practical answer
For all the philosophical weight the measurement problem carries, a single technical idea introduced in the 1970s and refined through the 2000s changed how physicists think about the quantum-classical boundary. That idea is decoherence.
Imagine a qubit in a superposition |\psi\rangle = \alpha|0\rangle + \beta|1\rangle. In perfect isolation, its density matrix is pure:

\rho = |\psi\rangle\langle\psi| = \begin{pmatrix} |\alpha|^2 & \alpha\beta^* \\ \alpha^*\beta & |\beta|^2 \end{pmatrix}.
The off-diagonal entries \alpha\beta^* and \alpha^*\beta carry the phase information — the very thing that distinguishes the pure superposition from a classical mixture (chapter 15 made this distinction the point).
Now let the qubit couple weakly to an environment — the thermal fluctuations of the substrate it sits on, the stray electromagnetic fields of the lab, the phonons of the surrounding crystal. The environment is a huge system, and we do not track its degrees of freedom. After a short time, the qubit and the environment have become entangled: the full state is something like \alpha|0\rangle|E_0\rangle + \beta|1\rangle|E_1\rangle, where |E_0\rangle and |E_1\rangle are two nearly orthogonal environment states.
Trace out the environment — take the partial trace over the inaccessible degrees of freedom (chapter 26) — and what remains for the qubit is:

\rho_{\text{qubit}} = \begin{pmatrix} |\alpha|^2 & \alpha\beta^* \langle E_1|E_0\rangle \\ \alpha^*\beta \langle E_0|E_1\rangle & |\beta|^2 \end{pmatrix} \approx \begin{pmatrix} |\alpha|^2 & 0 \\ 0 & |\beta|^2 \end{pmatrix}.
The off-diagonals have been suppressed — wiped out by the environment's branches becoming orthogonal. From the qubit's local perspective, the pure superposition has been replaced by a classical probabilistic mixture. The Hadamard test (chapter 15) would now fail — a second H on this mixture plus a measurement would give 50-50, not a deterministic outcome.
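A toy numpy sketch of this partial-trace calculation — the single "environment qubit" below stands in for the environment's many degrees of freedom, and the overlap angle theta is an illustrative knob, not a physical parameter from this chapter:

```python
# Toy decoherence: entangle a qubit with one environment mode, trace the
# environment out, and watch the off-diagonal shrink by the factor <E1|E0>.
import numpy as np

alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)
theta = np.pi / 2                              # <E0|E1> = cos(theta); pi/2 -> orthogonal
E0 = np.array([1.0, 0.0])
E1 = np.array([np.cos(theta), np.sin(theta)])

# Full pure state alpha|0>|E0> + beta|1>|E1> on qubit (x) environment
full = alpha * np.kron([1.0, 0.0], E0) + beta * np.kron([0.0, 1.0], E1)
rho_full = np.outer(full, full.conj())

# Partial trace over the environment indices of the (2,2,2,2) tensor
rho_qubit = np.trace(rho_full.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(np.round(rho_qubit, 3))                  # diag(0.5, 0.5): the coherence is gone
# With theta = 0 (identical environment states) the off-diagonal 0.5 survives intact.
```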
This is decoherence. It is not collapse — the global state, system plus environment, is still pure and still evolving unitarily. The environment has simply absorbed the phase information and spread it across trillions of degrees of freedom, where no realistic operation can retrieve it. The effective dynamics on the qubit alone look like collapse.
Crucially, decoherence explains the quantum-to-classical transition without needing an observer or a special postulate. Why do cats never appear in superposition states? Because a cat has \sim 10^{25} atoms, each coupling to environmental modes, and the resulting decoherence timescale is astronomically short — a Schrödinger-cat superposition decoheres in roughly 10^{-40} seconds, far below any measurement resolution. The cat's density matrix is diagonal before a photon headed for your retina has travelled a nanometre. The "classical world" is quantum, but quantum effects die so fast in big objects that you never notice them.
For quantum computers, decoherence is the central engineering enemy. Every real qubit loses coherence over time T_2 — superconducting qubits around 100\,\mu\text{s} as of 2025, trapped ions up to seconds, NV centres milliseconds. The off-diagonals of the qubit's density matrix, which encode the interference that makes quantum algorithms work, decay exponentially. Error correction (Part 17) is the systematic way to counteract this: redundantly encode a logical qubit in many physical qubits and continuously detect and reverse error processes before they accumulate.
Worked examples
Example 1 — different interpretations, same prediction
A qubit is prepared in |+\rangle = \tfrac{1}{\sqrt 2}(|0\rangle + |1\rangle). You measure in the computational basis. Show that Copenhagen, many-worlds, and QBism produce the same experimentally observable probabilities.
Step 1 — compute the amplitudes.
The overlap of |+\rangle with each basis state:

\langle 0|+\rangle = \tfrac{1}{\sqrt 2}, \qquad \langle 1|+\rangle = \tfrac{1}{\sqrt 2}.
Why: \langle 0|+\rangle is the coefficient of |0\rangle in |+\rangle; similarly for |1\rangle.
Step 2 — Born rule.

p(0) = |\langle 0|+\rangle|^2 = \tfrac12, \qquad p(1) = |\langle 1|+\rangle|^2 = \tfrac12.
Step 3 — read each interpretation's narrative.
- Copenhagen: The wavefunction |+\rangle exists before measurement; upon measurement, it collapses onto |0\rangle with probability \tfrac12 or onto |1\rangle with probability \tfrac12. The observer sees one outcome.
- Many-worlds: There is no collapse. The full state of "qubit + measuring apparatus + observer" evolves unitarily into the entangled superposition \tfrac{1}{\sqrt 2}|0\rangle|\text{apparatus shows 0}\rangle|\text{observer sees 0}\rangle + \tfrac{1}{\sqrt 2}|1\rangle|\text{apparatus shows 1}\rangle|\text{observer sees 1}\rangle. Both branches exist; the observer in each branch experiences their outcome with certainty. The "probability" is a statement about which branch the reporter of the experiment finds themselves in.
- QBism: The state |+\rangle encodes the experimenter's prior belief about outcomes — she expects 0 with credence \tfrac12 and 1 with credence \tfrac12. After she reads the result, she updates her credences; the state she now assigns is |0\rangle (or |1\rangle), reflecting what she now knows.
Step 4 — the experimentally recorded outcome.
If you run the experiment 10,000 times, each of the three interpretations predicts that the measurement record will show approximately 5,000 zeros and 5,000 ones. The probabilities of individual outcomes are \tfrac12 and \tfrac12 under every interpretation.
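As a quick sanity check, here is a small numpy sketch (an illustration, not a hardware run) that samples 10,000 computational-basis measurements of |+\rangle:

```python
# Sample 10,000 shots of measuring |+> in the computational basis.
import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2)       # |+>
probs = np.abs(plus) ** 2                      # Born rule: [0.5, 0.5]
shots = np.random.choice([0, 1], size=10_000, p=probs)

print(np.bincount(shots))                      # e.g. [4987 5013] -- the record every interpretation predicts
```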
Result.
| Interpretation | p(0) | p(1) | What "really" happens |
|---|---|---|---|
| Copenhagen | 0.5 | 0.5 | wavefunction collapses |
| Many-worlds | 0.5 | 0.5 | branch splits; you experience one |
| QBism | 0.5 | 0.5 | belief updates to match outcome |
What this shows. The laboratory cannot tell these interpretations apart. Every experiment that anyone has ever done fits all three (and the others not listed). A physicist using the Qiskit circuit simulator to calculate expected results for a hardware run can compute the same |+\rangle \to 50-50 prediction without mentioning any interpretation — the formalism is interpretation-independent.
Example 2 — decoherence timescale for a superconducting qubit
Estimate how long a quantum algorithm can run on a typical superconducting qubit before decoherence wipes out the superposition. Use order-of-magnitude numbers for a 2025-era device, in the spirit of Preskill's lecture notes.
Step 1 — state the qubit parameters.
For a typical transmon-style superconducting qubit:
- Gate duration: \tau_{\text{gate}} \approx 20\,\text{ns} = 2 \times 10^{-8}\,\text{s}.
- Coherence time: T_2 \approx 100\,\mu\text{s} = 10^{-4}\,\text{s}.

Why these numbers: the gate duration is set by the microwave pulse length — pulses much shorter than this are spectrally broad enough to drive unwanted transitions in the qubit. T_2 is the experimentally measured decay constant for the off-diagonals of the density matrix, extracted from Ramsey fringe experiments.
Step 2 — compute the number of operations before decoherence.
The fraction of coherence remaining after n sequential gates is approximately \exp(-n \tau_{\text{gate}} / T_2). Set this to 1/e (the usual "coherence lost" threshold):

n_{\max} \approx \frac{T_2}{\tau_{\text{gate}}} = \frac{10^{-4}\,\text{s}}{2 \times 10^{-8}\,\text{s}} = 5000 \text{ gates}.
Why 1/e: at n gates, the off-diagonal of the density matrix is roughly e^{-n\tau/T_2} times its initial value. n\tau/T_2 = 1 is the first e-fold decay, the natural scale for "how many gates before the algorithm is noise."
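The same arithmetic in a few lines of Python (the numbers are the illustrative values from Step 1, not a device specification):

```python
import math

tau_gate = 20e-9     # gate duration [s]
T2 = 100e-6          # coherence time [s]

n_max = T2 / tau_gate                          # gates until n * tau_gate / T2 = 1
coherence_left = math.exp(-n_max * tau_gate / T2)

print(n_max)             # 5000.0
print(coherence_left)    # ~0.37, i.e. the first e-fold of decay
```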
Step 3 — compare to algorithm sizes.
A typical NISQ-era variational quantum circuit uses 10–100 two-qubit gates on 5–50 qubits — comfortably inside the n_{\max} \approx 5000 window. Shor's algorithm for factoring a 2048-bit RSA key requires roughly 10^{10} logical operations on \sim 10^4 logical qubits; with concatenated error correction and realistic physical-to-logical overhead, this translates to \sim 10^{13} physical gate operations — roughly nine orders of magnitude beyond the coherent window of a single physical qubit. The gap is closed by quantum error correction, which keeps the logical qubit coherent much longer than any of its underlying physical qubits.
Step 4 — connect to the interpretation question.
Decoherence is the physical process that produces the apparent collapse observed in the lab. It is not one of the philosophical interpretations; it is a calculation about the partial trace of a system weakly coupled to a large environment. All four interpretations agree that the reduced density matrix decoheres on timescale T_2. They disagree about what this means for the rest of the universe, but not about what the qubit does or about how long you have to run the computation.
Result. A 2025-era superconducting qubit supports roughly n_{\max} \approx 5000 sequential gate operations before its coherence has decayed by a factor of e — enough for small NISQ circuits, far short of fault-tolerant algorithms.
What this shows. The measurement problem is a philosophical puzzle, but its practical shadow is the decoherence timescale — a concrete, measured number that determines how deep a quantum circuit can run before the superposition turns classical. An engineer building a Willow-chip or a Quantinuum H2 does not need to resolve whether the universe is one-world or many-worlds; she needs to keep the off-diagonals alive until the algorithm's last gate, and she has \sim 5000 operations worth of time in which to do it. Everything beyond that threshold requires error correction, not metaphysics.
Common confusions
- "Which interpretation is correct?" This is the question students ask immediately, and the honest answer is: nobody knows, and for quantum-computing purposes it does not matter. Every interpretation produces the same experimental predictions, so no laboratory experiment can distinguish them. The choice is about which picture you prefer for reasoning about quantum mechanics — Copenhagen is operational, many-worlds is ontological, relational QM is epistemic, QBism is subjective. For a QC engineer, pick whichever lets you reason most clearly about gates and measurements; the math doesn't care.
- "Observer means a human, so measurement happens when a person looks." No. "Observer" in the Copenhagen sense means any macroscopic measuring apparatus — a photodetector, a grain of photographic film, a transistor latched by a comparator. Humans are irrelevant to the physics. The special role Copenhagen gives to "measurement" is a role given to any irreversible, macroscopic, decohering interaction — whether or not anyone ever reads the resulting record. A single photon striking a piece of dust collapses its superposition (or equivalently: decoheres rapidly against the dust's environment). The dust does not need consciousness for this to happen.
- "Schrödinger's cat is a paradox about consciousness and reality." Schrödinger's original 1935 thought experiment was a criticism of the Copenhagen interpretation, not an endorsement of a mystery. He constructed the cat scenario to ridicule the idea that a macroscopic object could remain in a superposition — his point was that the theory, taken at face value, predicts something obviously absurd, and therefore needs interpreting. Decoherence resolves the cat: macroscopic objects decohere so fast that no cat-scale superposition lasts any measurable time. The cat is always definitely alive or definitely dead by the time anyone could check — not because consciousness collapses anything, but because the cat's 10^{25} atoms become entangled with their environment in far less than a femtosecond.
- "Entangled particles collapse each other faster than light." This is a classic misstatement, and no — entanglement correlations do not transmit information faster than light, and no interpretation claims they do. If Alice and Bob share an entangled pair and Alice measures, the joint statistics guarantee that Bob's subsequent measurement will be correlated with hers, but Bob cannot tell that Alice measured until she sends him a classical message via ordinary light-speed communication. The no-communication theorem (chapter 158) is an explicit calculation showing that the marginal statistics on Bob's side are unaffected by Alice's measurement choice (see the short numpy sketch after this list). All four interpretations respect this.
- "We'll eventually do an experiment that picks the right interpretation." Some physicists hope for this; the track record is not encouraging. Every proposed experimental distinguisher has so far turned out to agree with all the standard interpretations. The GRW (Ghirardi–Rimini–Weber) class of objective-collapse models (which add a tiny non-unitary term to Schrödinger's equation) could, in principle, be experimentally distinguished from the interpretations above by looking for spontaneous localisation of mesoscopic superpositions — and several groups are actively testing this. But GRW is itself a modification of quantum mechanics, not an interpretation of it. Within standard QM, all four interpretations remain empirically indistinguishable.
Going deeper
If you have understood that Postulate 2 and Postulate 3 describe different kinds of dynamics, that each interpretation proposes a different resolution, and that decoherence is the practical bridge between quantum and classical — you have the operational grasp. What follows adds colour: the Wigner's-friend puzzle that keeps the debate alive, the timescales of real-world decoherence, the objective-collapse models that do make new predictions, and the implication for quantum error correction.
Wigner's friend and the consistency question
Eugene Wigner's 1961 thought experiment sharpens the Copenhagen ambiguity. Alice (Wigner's friend) measures a qubit inside a sealed lab. She sees outcome 0 or outcome 1 — from her point of view, the wavefunction has collapsed. Meanwhile, Bob (Wigner himself) stands outside the sealed lab and treats "Alice plus qubit plus lab" as one big quantum system evolving unitarily. From Bob's perspective, no collapse has happened — Alice and the qubit are in an entangled superposition of "saw-0-and-qubit-is-0" and "saw-1-and-qubit-is-1."
Who is right? Copenhagen has to pick: either Alice's measurement is the "real" one and Bob is ignorant, or Bob's unitary description is the "real" one and Alice's experience of a definite outcome is… an illusion? Many-worlds sidesteps by saying both descriptions are fine: from Bob's view, Alice has branched, and from Alice's view, she is in one branch. Relational QM explicitly embraces the observer-dependence: the state is relative to whom, and Alice's and Bob's states genuinely can differ.
A 2019 experiment by Massimiliano Proietti et al. realised a photonic version of the Wigner's-friend setup and showed that observers can legitimately disagree about "the fact" of a measurement outcome, in a way that rules out certain classes of observer-independent realism. The result does not pick a winning interpretation, but it demonstrates that the puzzle is not merely philosophical.
Decoherence timescales — a table of real systems
How fast does decoherence act on physical systems of various sizes? A rough compilation (from Zurek's decoherence review and experimental measurements):
| System | Decoherence time |
|---|---|
| Single photon in vacuum | indefinitely long |
| Trapped-ion qubit | 1–100 s |
| Superconducting qubit (Willow, 2025) | ~10^{-4} s |
| NV-centre electron spin at room temperature | ~10^{-3} s |
| Electron in a hot atomic gas | ~10^{-12} s |
| Dust grain (1 μm) in air | ~10^{-30} s |
| Cat (1 kg) in air | ~10^{-40} s |
For macroscopic objects, the decoherence time is unimaginably short — many orders of magnitude below any time you can measure. This is why cats are always definitely alive or definitely dead: their superpositions dissolve instantly. It is also why qubits must be cooled, shielded, and isolated at cryogenic temperatures — every environmental photon, phonon, or charge fluctuation represents another channel into which the coherence can leak.
Objective-collapse models — the one place interpretations matter empirically
GRW (Ghirardi–Rimini–Weber, 1986) and its cousins like CSL (continuous spontaneous localisation) add a small non-unitary, stochastic term to Schrödinger's equation. Each particle in the universe spontaneously localises in space at a rate of roughly 10^{-16}\,\text{s}^{-1}, with a localisation width around 100 nm. For a single atom this is invisible — on average you would wait hundreds of millions of years to see a spontaneous jump. For a macroscopic object with 10^{25} particles, the cumulative rate is 10^{25} \times 10^{-16} = 10^9 spontaneous localisations per second — fast enough to effectively forbid macroscopic superpositions.
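The arithmetic in the paragraph above, spelled out (order-of-magnitude values only):

```python
# GRW back-of-the-envelope: single-particle vs macroscopic localisation rates.
lam = 1e-16          # spontaneous localisation rate per particle [1/s]

mean_wait_one_atom = 1 / lam            # ~1e16 s, i.e. hundreds of millions of years
rate_macroscopic = 1e25 * lam           # ~1e9 localisations per second for ~1e25 particles

print(mean_wait_one_atom, rate_macroscopic)
```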
This is a genuine modification of quantum mechanics, not an interpretation, and in principle it is experimentally testable. Several ongoing precision experiments — bouncing mesoscopic cantilevers, long-baseline molecular interferometers — are pushing the bounds on GRW parameters. As of 2026 no deviation from standard QM has been observed, and the experimentally allowed parameter space is shrinking.
The existence of GRW-type models shows that the measurement problem is not purely philosophical: it is logically possible to have a theory where there is an objective physical process that collapses wavefunctions, distinguishable in principle from the standard unitary-plus-Born-rule theory. So far, experiment prefers the standard theory.
Why this matters for quantum error correction
Quantum error correction (QEC) uses the measurement postulate in an unusual way. In QEC, you repeatedly measure the system — specifically, you measure joint observables called syndromes — and use each measurement outcome to decide what corrective gates to apply. Each syndrome measurement causes a projective collapse, but the collapse is onto an error-type eigenspace that does not reveal the encoded quantum information. So you get to learn about the errors without learning about (and therefore without disturbing) the logical state.
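A toy illustration of "learning the error without learning the state", using the three-qubit bit-flip code — a standard textbook example offered here as a sketch rather than as this wiki's QEC construction; the amplitudes alpha and beta are arbitrary:

```python
# Three-qubit bit-flip code: the stabiliser values Z1Z2 and Z2Z3 identify which
# qubit was flipped, and they come out the same for every logical state.
import numpy as np

I2 = np.eye(2); X = np.array([[0., 1.], [1., 0.]]); Z = np.diag([1., -1.])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

alpha, beta = 0.6, 0.8                                           # arbitrary logical amplitudes
psi_L = np.zeros(8); psi_L[0b000] = alpha; psi_L[0b111] = beta   # alpha|000> + beta|111>

psi_err = kron(I2, X, I2) @ psi_L                                # bit flip on the middle qubit

for name, S in [("Z1Z2", kron(Z, Z, I2)), ("Z2Z3", kron(I2, Z, Z))]:
    print(name, np.round(psi_err @ S @ psi_err, 3))              # syndrome (-1, -1): middle qubit flipped
# The syndrome is the same for any (alpha, beta): it points at the error,
# not at the encoded information.
```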
This is a beautiful use of Postulate 3 as a computational tool. The measurement problem's ambiguity about "what really happens" during collapse is irrelevant: the engineering fact is that the measurement outcome is a classical bit which the decoder uses to pick a correction. Copenhagen-flavored writers say "the collapse tells us the error." Many-worlds-flavored writers say "the decoder is a classical controller correlated with the error-syndrome branch." Both descriptions agree on what Qiskit prints to the terminal. QEC is the strongest practical demonstration that quantum computing is interpretation-agnostic — you can literally run it and produce a working fault-tolerant qubit without committing to a philosophy.
The pragmatic engineering stance — and India's quantum mission
If you talk to physicists and engineers working at IISc Bangalore, TIFR Mumbai, IIT Madras, or in industry at Quantinuum, IBM, Google, or QuEra, you will find that nobody in the lab is arguing about interpretations at the bench. The discussion is about pulse shapes, gate fidelities, leakage errors, crosstalk, and control electronics. The measurement postulate is applied as a rule; the interpretation question is — rightly — left to philosophy seminars and Sunday afternoons.
India's National Quantum Mission (2023, ₹6003 crore over 2023–2031) is a good example of this attitude made institutional. Its four verticals — computing, communication, sensing, materials — are engineering missions. They rely on the standard quantum formalism in its operational form. The mission does not ask whether Everett or Bohr is right. It asks how to get 50–1000 reliable qubits, how to get satellite-to-ground quantum key distribution working at national scale, how to build single-photon detectors with world-class timing resolution. Meghnad Saha, C.V. Raman, and Satyendra Nath Bose — three giants of the early quantum era — built their contributions by following the empirical evidence where it led and not waiting for the philosophical questions to resolve first. The engineering descendants of that tradition do the same.
Where this leads next
- Superposition vs Classical Randomness — chapter 15, the companion piece: what makes a superposition different from a 50-50 coin.
- Decoherence — Introduction — the formal treatment of how environment coupling washes out coherence.
- Density Matrices — Introduction — the formalism that handles pure and mixed states in one language.
- Quantum Error Correction — Overview — the engineering resolution of the decoherence problem, using syndrome measurements as computational tools.
- Bell Inequality — the experiment that rules out classical-hidden-variable interpretations.
References
- John Preskill, Lecture Notes on Quantum Computation — Chapter 3: Measurement and Evolution — theory.caltech.edu/~preskill/ph229.
- Wikipedia, Interpretations of quantum mechanics.
- Wojciech H. Zurek, Decoherence, einselection, and the quantum origins of the classical (2003) — arXiv:quant-ph/0105127.
- Hugh Everett III, "Relative State" Formulation of Quantum Mechanics — Rev. Mod. Phys. 29, 454 (1957).
- Carlo Rovelli, Relational Quantum Mechanics (1996) — arXiv:quant-ph/9609002.
- Nielsen and Chuang, Quantum Computation and Quantum Information — Cambridge University Press.