In short
The outer product of a ket with a bra, |\psi\rangle\langle\phi|, is a matrix — a column times a row. It is rank 1: every row and column is proportional to every other. When the two sides are the same unit vector, P_\psi = |\psi\rangle\langle\psi| is a projector: it picks out the |\psi\rangle-component of any state you feed it, and it is idempotent (P^2 = P) and Hermitian (P = P^\dagger). Sums of orthogonal projectors decompose the identity — this is the completeness relation \sum_i|i\rangle\langle i| = I — and every projective quantum measurement is exactly a set of such projectors summing to I.
In the previous chapter you learned that the inner product \langle\phi|\psi\rangle is a row times a column, and that a row times a column always collapses to one complex number. Flip the order. What happens when you write a column times a row — a ket times a bra — instead?
A column has shape n \times 1. A row has shape 1 \times n. Multiplied in that order, the shapes combine to n \times n — a square matrix. So a ket times a bra is not a number. It is an operator: a machine that eats one ket and spits out another.
This looks like a small technical swap. It is actually the move that turns Dirac notation into the most flexible language in quantum mechanics. Projectors, density matrices, measurement operators, Hamiltonians written in their own eigenbasis — all of them are built as sums of outer products. When a textbook writes H = \sum_n E_n |n\rangle\langle n| for the Hamiltonian of a system, it is using the outer-product machinery of this chapter, no more and no less.
This chapter builds the machinery a piece at a time: what an outer product is as a matrix; why every outer product has rank 1; why the self-outer-product |\psi\rangle\langle\psi| deserves the special name projector; how sums of projectors rebuild the identity; and why every projective measurement in quantum mechanics is, by definition, a set of projectors that sum to the identity.
The mechanics — row times column versus column times row
Start with the two computational-basis states written as columns, and the bra versions as rows:

|0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \quad |1\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \quad \langle 0| = \begin{pmatrix} 1 & 0 \end{pmatrix}, \quad \langle 1| = \begin{pmatrix} 0 & 1 \end{pmatrix}.
The inner product \langle 0|1\rangle is a row times a column — 1 \times 2 times 2 \times 1 — which gives a 1 \times 1 matrix, a single number:

\langle 0|1\rangle = \begin{pmatrix} 1 & 0 \end{pmatrix}\begin{pmatrix} 0 \\ 1 \end{pmatrix} = 1 \cdot 0 + 0 \cdot 1 = 0.
Why zero: |0\rangle and |1\rangle are orthogonal, so any measure of overlap between them must vanish. The inner-product rule confirms what the geometry already says.
Now flip the order. The outer product |0\rangle\langle 1| is a column times a row — 2 \times 1 times 1 \times 2 — which gives a 2 \times 2 matrix:

|0\rangle\langle 1| = \begin{pmatrix} 1 \\ 0 \end{pmatrix}\begin{pmatrix} 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.
Why the matrix has this shape: the (i, j) entry of the outer product is (|0\rangle)_i \cdot (\langle 1|)_j — the i-th component of the ket times the j-th component of the bra. Only when both components are 1 (here: i = 0, j = 1) do you get a non-zero entry.
Do the same for the other three computational-basis outer products:

|0\rangle\langle 0| = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad |1\rangle\langle 0| = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \quad |1\rangle\langle 1| = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.
These four matrices are the "elementary" matrices of a 2-dimensional space. Any 2×2 matrix can be written as a linear combination of them, because they form a basis for the space of 2×2 matrices.
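If you like to check identities like these numerically, the four elementary matrices fall out of a few lines of NumPy. This is an illustrative aside, not part of the chapter's formal development; the column-times-row product is exactly the `@` of a 2 × 1 array with a 1 × 2 array.

```python
import numpy as np

# Computational-basis kets as 2x1 column vectors
ket0 = np.array([[1], [0]], dtype=complex)
ket1 = np.array([[0], [1]], dtype=complex)

# Bras as 1x2 rows: the conjugate transpose of the kets
bra0 = ket0.conj().T
bra1 = ket1.conj().T

# Column times row gives a 2x2 matrix with a single non-zero entry
E00 = ket0 @ bra0   # |0><0|
E01 = ket0 @ bra1   # |0><1|
E10 = ket1 @ bra0   # |1><0|
E11 = ket1 @ bra1   # |1><1|

# Any 2x2 matrix is a linear combination of these four
A = 2 * E00 + 3j * E01 - E10 + 5 * E11
```

Multiplying in the other order, `bra0 @ ket1`, collapses to a 1 × 1 array instead: the inner product \langle 0|1\rangle = 0.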
The general rule for any two kets |\psi\rangle and |\phi\rangle in \mathbb{C}^n: if the ket has components \psi_i and the bra has components \phi_j^*, then

(|\psi\rangle\langle\phi|)_{ij} = \psi_i\,\phi_j^*.
One multiplication per entry, n^2 entries in total. The resulting matrix carries the information of both the ket and the bra, but packed into a very specific form — a form that always has rank 1.
Every outer product has rank 1
Rank is the number of linearly independent columns of a matrix (equivalently, the number of linearly independent rows). For an n \times n matrix, rank ranges from 0 (the zero matrix) to n (an invertible matrix).
Every outer product |\psi\rangle\langle\phi| — so long as neither ket nor bra is zero — has rank exactly 1. You can see this directly. Write the outer product as a matrix:

|\psi\rangle\langle\phi| = \begin{pmatrix} \psi_1\phi_1^* & \psi_1\phi_2^* & \cdots & \psi_1\phi_n^* \\ \psi_2\phi_1^* & \psi_2\phi_2^* & \cdots & \psi_2\phi_n^* \\ \vdots & \vdots & \ddots & \vdots \\ \psi_n\phi_1^* & \psi_n\phi_2^* & \cdots & \psi_n\phi_n^* \end{pmatrix}.
Look at the columns. The j-th column is

\phi_j^* \begin{pmatrix} \psi_1 \\ \psi_2 \\ \vdots \\ \psi_n \end{pmatrix} = \phi_j^*\,|\psi\rangle.
Why every column is a scalar multiple of |\psi\rangle: each column comes from multiplying the column vector |\psi\rangle by one scalar \phi_j^* — the j-th entry of the bra. Different columns use different scalars, but they all point in the same direction.
So every column of the outer product is proportional to |\psi\rangle. The column space of the matrix is one-dimensional. That is exactly what "rank 1" means.
Here is the geometric meaning in one sentence. A rank-1 matrix turns every input it is fed into some scalar multiple of one fixed direction. It cannot produce two independent output directions. In quantum language: the outer product |\psi\rangle\langle\phi| maps everything it acts on into the one-dimensional line spanned by |\psi\rangle.
This is the key geometric fact you will use through the rest of the chapter: outer products are the matrix form of "collapse into a chosen direction."
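A quick numerical sanity check makes the rank-1 claim tangible (an illustrative sketch; `np.linalg.matrix_rank` counts the independent columns for you, and `np.outer` multiplies entries directly, so the bra must be conjugated by hand):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two arbitrary non-zero vectors in C^4
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
phi = rng.normal(size=4) + 1j * rng.normal(size=4)

# The outer product |psi><phi|, with entries psi_i * conj(phi_j)
M = np.outer(psi, phi.conj())

# Every column is a scalar multiple of psi, so the 4x4 matrix has rank 1
rank = np.linalg.matrix_rank(M)
```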
Projectors — when the outer product is self-symmetric
A particularly useful case is when the ket and the bra are the same unit vector. Pick a state |\psi\rangle with \langle\psi|\psi\rangle = 1, and define

P_\psi = |\psi\rangle\langle\psi|.
This matrix has a name. It is the projector onto |\psi\rangle — the operator that takes any state |\phi\rangle and returns the component of |\phi\rangle along the direction of |\psi\rangle.
Applying a projector — the calculation
Let |\phi\rangle be any state. Apply P_\psi to it:

P_\psi|\phi\rangle = \big(|\psi\rangle\langle\psi|\big)|\phi\rangle = |\psi\rangle\big(\langle\psi|\phi\rangle\big).
Why the parentheses group that way: matrix multiplication is associative, so you can choose where to put the brackets. Grouping the bra with the ket on the right converts \langle\psi|\phi\rangle into a single complex number — the inner product you computed in chapter 5. The outer |\psi\rangle remains.
The middle piece \langle\psi|\phi\rangle is a number. Numbers commute with everything, so you can pull it to the front:

P_\psi|\phi\rangle = \langle\psi|\phi\rangle\,|\psi\rangle.
This is the key formula. Applying the projector |\psi\rangle\langle\psi| to any state |\phi\rangle returns a vector pointing in the |\psi\rangle direction, with a coefficient \langle\psi|\phi\rangle — the overlap of |\phi\rangle with |\psi\rangle.
In pictures: imagine |\psi\rangle is a direction, and |\phi\rangle is some other arrow at an angle to it. The projector takes |\phi\rangle, drops a perpendicular onto the |\psi\rangle line, and returns the vector you see on that line. The length of that vector is |\langle\psi|\phi\rangle| (the coefficient \langle\psi|\phi\rangle is in general complex); the direction is |\psi\rangle itself. You are reading off the |\psi\rangle-component of |\phi\rangle.
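The key formula P_\psi|\phi\rangle = \langle\psi|\phi\rangle\,|\psi\rangle is easy to confirm on random vectors (a sketch; note that `np.vdot` conjugates its first argument, so it computes exactly the bra-ket \langle\psi|\phi\rangle):

```python
import numpy as np

rng = np.random.default_rng(1)

# A normalised state psi and an arbitrary state phi in C^3
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)
phi = rng.normal(size=3) + 1j * rng.normal(size=3)

P = np.outer(psi, psi.conj())   # the projector |psi><psi|
overlap = np.vdot(psi, phi)     # the inner product <psi|phi>

# Applying the projector returns the overlap times psi
lhs = P @ phi
rhs = overlap * psi
```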
Two defining properties of projectors
Every projector of this form satisfies two properties, which together characterise what it means to be a projector: any matrix satisfying both is, automatically, a projector onto some subspace.
Property 1: idempotent. A projector squared is itself:

P_\psi^2 = P_\psi.
Proof. Write it out, using \langle\psi|\psi\rangle = 1:

P_\psi^2 = |\psi\rangle\langle\psi|\psi\rangle\langle\psi| = |\psi\rangle \cdot 1 \cdot \langle\psi| = |\psi\rangle\langle\psi| = P_\psi.
Why this is the "projection" identity: once you have projected onto a direction, projecting again does not change anything — you are already on the line, so the drop-a-perpendicular operation drops a zero-length perpendicular. The squared projector is the same as the projector.
Property 2: Hermitian. A projector equals its own conjugate transpose:

P_\psi^\dagger = P_\psi.
Proof. The dagger of an outer product flips its two factors and conjugates each. Since (|\psi\rangle)^\dagger = \langle\psi| and (\langle\psi|)^\dagger = |\psi\rangle:

(|\psi\rangle\langle\psi|)^\dagger = (\langle\psi|)^\dagger\,(|\psi\rangle)^\dagger = |\psi\rangle\langle\psi|.
Why: taking the Hermitian conjugate of a product reverses the order and daggers each factor — the rule (AB)^\dagger = B^\dagger A^\dagger from chapter 7. Here the ket daggers to a bra and the bra daggers back to a ket, and reversing them gives you the same outer product you started with.
Idempotent plus Hermitian is the operational definition of "projector." Any matrix with those two properties projects onto some subspace of the Hilbert space.
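Both defining properties can be checked in code for any unit vector (an illustrative sketch, not part of the formal argument):

```python
import numpy as np

rng = np.random.default_rng(2)

# A random unit vector in C^5 and its projector
psi = rng.normal(size=5) + 1j * rng.normal(size=5)
psi /= np.linalg.norm(psi)
P = np.outer(psi, psi.conj())

idempotent = np.allclose(P @ P, P)      # property 1: P^2 = P
hermitian = np.allclose(P, P.conj().T)  # property 2: P = P^dagger
```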
Projectors onto subspaces — summing rank-1 projectors
So far a projector has been rank 1 — it projects onto the line spanned by a single ket. The real usefulness of projectors is that you can sum them to project onto higher-dimensional subspaces.
Take any orthonormal set \{|i\rangle\}_{i=1}^{k} — that is, \langle i|j\rangle = \delta_{ij} — and define

P = \sum_{i=1}^{k} |i\rangle\langle i|.
This operator is called the projector onto the subspace spanned by \{|i\rangle\}. Its action on any state |\phi\rangle is

P|\phi\rangle = \sum_{i=1}^{k} \langle i|\phi\rangle\,|i\rangle.
Why this is "the component in the subspace": each term \langle i|\phi\rangle|i\rangle is the component of |\phi\rangle along the |i\rangle direction. Summing them gives you the total component of |\phi\rangle that lies inside the subspace spanned by \{|i\rangle\}.
You can check that this P is still idempotent and Hermitian, and therefore a projector. The rank of P is k (the dimension of the subspace) — each orthogonal rank-1 projector contributes one independent direction to the image.
Geometrically: imagine the subspace is a 2D plane sitting inside 3D space. A vector in 3D space has a unique "shadow" on the plane — its perpendicular drop. P is the operator that computes that shadow.
One rank-1 projector projects onto a line. A sum of two orthogonal rank-1 projectors projects onto the plane those two lines span. A sum of k orthogonal rank-1 projectors projects onto a k-dimensional subspace. And a sum of all the rank-1 projectors in an orthonormal basis projects onto the whole space — which is the identity.
The completeness relation — the identity as a sum of projectors
You met this identity briefly in chapter 5. Now you have the outer-product machinery to see why it works.
For any orthonormal basis \{|i\rangle\} of the full n-dimensional Hilbert space,

\sum_{i=1}^{n} |i\rangle\langle i| = I.
This is the completeness relation, or the resolution of the identity.
For the qubit computational basis, the check takes one line:

|0\rangle\langle 0| + |1\rangle\langle 1| = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I.
Why completeness is so useful — expanding an operator
The completeness relation does one thing and does it with devastating efficiency: it lets you "insert the identity" anywhere you like, in a form that is secretly a sum over a basis. Here is the canonical move.
Take any operator A. You can always sandwich it between two copies of the identity — doing so changes nothing. Replace each identity with a sum of projectors:

A = I\,A\,I = \Big(\sum_i |i\rangle\langle i|\Big)\,A\,\Big(\sum_j |j\rangle\langle j|\Big).
Expand the product:

A = \sum_{i,j} \langle i|A|j\rangle\,|i\rangle\langle j| = \sum_{i,j} A_{ij}\,|i\rangle\langle j|,
where the numbers A_{ij} = \langle i|A|j\rangle are the matrix elements of A in the basis \{|i\rangle\}.
Why this matters: you have just decomposed a generic operator A into a linear combination of outer products |i\rangle\langle j|, with coefficients A_{ij} that are plain complex numbers. This is exactly what "writing A as a matrix" means — the completeness relation is the bookkeeping that turns an abstract operator into its matrix representation in a chosen basis.
Chapter 7 builds on this identity. Every time you see a derivation in later chapters that "inserts a resolution of the identity" at a cunning place, the move is always the same: \sum_i |i\rangle\langle i| in the middle of an expression, to translate one representation into another.
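The insert-the-identity expansion is itself a short computation to verify: take a random operator, read off its matrix elements A_{ij} = \langle i|A|j\rangle, and rebuild it from outer products (an illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

basis = np.eye(n, dtype=complex)   # standard basis kets |i> as rows

# A = sum_ij <i|A|j> |i><j|
rebuilt = np.zeros((n, n), dtype=complex)
for i in range(n):
    for j in range(n):
        A_ij = basis[i].conj() @ A @ basis[j]                # matrix element <i|A|j>
        rebuilt += A_ij * np.outer(basis[i], basis[j].conj())  # A_ij |i><j|
```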
Projectors encode measurements
Quantum mechanics tells you how a measurement works, and the language it uses is the language of projectors. Here is the rule, stated carefully; chapter 11 (the four postulates) and chapter 12 (projective measurement) will derive the full Born rule — this chapter just introduces the structural fact.
A projective measurement is specified by a set of projectors \{P_1, P_2, \ldots, P_k\} satisfying
- Each P_i is a projector: P_i^2 = P_i and P_i^\dagger = P_i.
- They are mutually orthogonal: P_i P_j = 0 for i \neq j.
- They sum to the identity: \sum_i P_i = I.
When you perform this measurement on a state |\psi\rangle, exactly one of the outcomes i happens. The probability of outcome i is

p(i) = \langle\psi|P_i|\psi\rangle,
and the state of the system after the measurement, given that outcome i was observed, is

\frac{P_i|\psi\rangle}{\sqrt{\langle\psi|P_i|\psi\rangle}}.
(The division by \sqrt{\langle\psi|P_i|\psi\rangle} renormalises the state — projecting |\psi\rangle onto the i-th subspace generally shrinks its norm; you divide by the new norm to get a unit vector again.)
The computational-basis measurement is the simplest example. Take P_0 = |0\rangle\langle 0| and P_1 = |1\rangle\langle 1| — the two projectors onto the two basis directions. They are Hermitian, idempotent, mutually orthogonal (P_0 P_1 = |0\rangle\langle 0|1\rangle\langle 1| = |0\rangle \cdot 0 \cdot \langle 1| = 0), and they sum to the identity. For a state |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, their associated probabilities are

p(0) = \langle\psi|P_0|\psi\rangle = |\alpha|^2, \qquad p(1) = \langle\psi|P_1|\psi\rangle = |\beta|^2,
which is the Born rule you already know: the probability of getting outcome 0 is |\alpha|^2, of getting outcome 1 is |\beta|^2.
The deep fact. Every quantum measurement, no matter how exotic, is fundamentally a set of projectors summing to the identity. The projectors label the outcomes; the probabilities and post-measurement states follow from the rule above. Generalisations exist (POVMs, continuous-outcome measurements) but they are all built out of this base pattern.
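The whole projective-measurement rule fits in a few lines of NumPy (an illustration for a single qubit; the state \alpha|0\rangle + \beta|1\rangle and the two computational-basis projectors are the ones from the text, with \alpha = 0.6 and \beta = 0.8i chosen as an arbitrary normalised example):

```python
import numpy as np

# State alpha|0> + beta|1> with |alpha|^2 + |beta|^2 = 1
alpha, beta = 0.6, 0.8j
psi = np.array([alpha, beta])

P0 = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
P1 = np.array([[0, 0], [0, 1]], dtype=complex)   # |1><1|

# Born rule: p(i) = <psi|P_i|psi>
p0 = np.vdot(psi, P0 @ psi).real   # |alpha|^2
p1 = np.vdot(psi, P1 @ psi).real   # |beta|^2

# Post-measurement state for outcome 0: project, then renormalise
post0 = (P0 @ psi) / np.sqrt(p0)
```

The renormalisation in the last line is the division by \sqrt{\langle\psi|P_0|\psi\rangle} from the rule above.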
Worked examples
Example 1: Build |+\rangle\langle +| and verify idempotency
The state |+\rangle = \tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle) is one of the two X-basis states you met in chapter 5. Compute the projector |+\rangle\langle +| as an explicit 2 \times 2 matrix, and verify that squaring it gives back the same matrix.
Step 1. Write |+\rangle as a column and \langle +| as a row.

|+\rangle = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad \langle +| = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \end{pmatrix}.
Why no conjugation this time: the components of |+\rangle are 1/\sqrt{2} and 1/\sqrt{2}, both real. Real numbers are their own conjugates, so the bra is just the row with the same numbers.
Step 2. Multiply the column by the row.

|+\rangle\langle +| = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix} \cdot \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}.
Why the common factor: the two \tfrac{1}{\sqrt{2}} factors multiply to give \tfrac{1}{2}, which pulls out to the front. Every entry of the matrix is the product of the two components that correspond to it — here, all four entries are 1 \cdot 1 = 1.
Step 3. Square the matrix.
Do the matrix multiplication entry by entry. The (1,1) entry is 1 \cdot 1 + 1 \cdot 1 = 2. The (1,2) entry is 1 \cdot 1 + 1 \cdot 1 = 2. Same for all four:

\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} 2 & 2 \\ 2 & 2 \end{pmatrix}.
Step 4. Multiply by the front factor.

\big(|+\rangle\langle +|\big)^2 = \Big(\frac{1}{2}\Big)^2\begin{pmatrix} 2 & 2 \\ 2 & 2 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} = |+\rangle\langle +|.
Why this had to happen: idempotency is automatic for any outer product of a unit vector with itself. The \langle +|+\rangle in the middle of |+\rangle\langle +|\cdot|+\rangle\langle +| equals 1, collapsing the squared operator back to the single one. The arithmetic just confirms what the algebra guaranteed.
Result. The projector onto |+\rangle is the matrix

|+\rangle\langle +| = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix},
and it satisfies (|+\rangle\langle +|)^2 = |+\rangle\langle +|.
What this shows. The matrix of a projector has a very specific pattern. Every row is proportional to every other; every column is proportional to every other. That is the signature of rank 1. And the idempotency property P^2 = P is not a coincidence — it follows from \langle +|+\rangle = 1, which collapses the product back to one factor.
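The worked example translates directly into NumPy (a sketch reproducing the arithmetic above):

```python
import numpy as np

# |+> = (|0> + |1>)/sqrt(2), with real components
plus = np.array([1, 1]) / np.sqrt(2)

# The projector |+><+| = (1/2) [[1, 1], [1, 1]]
P_plus = np.outer(plus, plus.conj())

# Squaring gives the same matrix back: idempotency
squared = P_plus @ P_plus
```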
Example 2: Apply |0\rangle\langle 0| to a superposition state
Let |\psi\rangle = \tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle). Apply the projector P_0 = |0\rangle\langle 0| to |\psi\rangle. Interpret the result.
Step 1. Write out the product.

P_0|\psi\rangle = |0\rangle\langle 0| \cdot \frac{1}{\sqrt{2}}\big(|0\rangle + |1\rangle\big).
Step 2. Distribute the projector across the sum (it is a linear operator, so it does).

P_0|\psi\rangle = \frac{1}{\sqrt{2}}\big(|0\rangle\langle 0|0\rangle + |0\rangle\langle 0|1\rangle\big).
Why: a projector is a matrix, and matrix action on a sum is the sum of the matrix action on each piece — this is the linearity of matrix multiplication.
Step 3. Use orthonormality to simplify the inner products. Recall \langle 0|0\rangle = 1 and \langle 0|1\rangle = 0.

P_0|\psi\rangle = \frac{1}{\sqrt{2}}\big(|0\rangle \cdot 1 + |0\rangle \cdot 0\big) = \frac{1}{\sqrt{2}}|0\rangle.
Why the |1\rangle term disappears: the inner product \langle 0|1\rangle is zero — the computational-basis states are orthogonal — and multiplying any ket by zero kills it. Only the |0\rangle-term survives.
Result. Projecting the superposition \tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle) onto the |0\rangle direction gives

P_0|\psi\rangle = \frac{1}{\sqrt{2}}|0\rangle.
The output is not normalised — its norm is \tfrac{1}{\sqrt{2}}, not 1. The squared norm is \tfrac{1}{2}, which is exactly the probability that a computational-basis measurement of |\psi\rangle would return outcome 0.
What this shows. The projector has done two things at once. It has isolated the |0\rangle-component of the input state (the result points along |0\rangle). And the size of that result — the norm of P_0|\psi\rangle — encodes the probability of the corresponding measurement outcome. The next chapter in this track on measurement (ch.12) will formalise this double role: the projector tells you which way the state collapses, and the squared length of the projected vector tells you how likely that collapse is.
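The same computation in NumPy (an illustrative sketch): the projected vector, its norm, and the outcome probability all appear in three lines.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
psi = (ket0 + ket1) / np.sqrt(2)

P0 = np.outer(ket0, ket0.conj())   # |0><0|

projected = P0 @ psi                    # (1/sqrt(2)) |0>, not normalised
prob = np.linalg.norm(projected) ** 2   # squared norm = probability of outcome 0
```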
Common confusions
- "A projector is a gate." No. Quantum gates are unitary operators — they satisfy U^\dagger U = I and are always invertible. Projectors are decidedly not unitary: they are not invertible (rank less than full kills invertibility) and they are not norm-preserving (they shrink states). Projectors are measurement objects, not gate objects. The only "projector" that is unitary is the full-rank identity I itself, which is the trivial projector onto the whole space.
- "|\psi\rangle\langle\phi| equals |\phi\rangle\langle\psi|." No. These are two different matrices, and they are related only by Hermitian conjugation: (|\psi\rangle\langle\phi|)^\dagger = |\phi\rangle\langle\psi|. For the specific case of the computational basis, |0\rangle\langle 1| = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} whereas |1\rangle\langle 0| = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} — same non-zero entry, different positions. They are each other's transposes (and conjugates, though here the entries are real so conjugation is trivial). Don't conflate them.
- "The completeness relation says the projectors sum to 1." No — they sum to I, the identity matrix. For a qubit, I is the 2\times 2 identity \text{diag}(1, 1), not the number 1. The distinction matters: operators are matrices, scalars are numbers, and equations between them have to match types. Writing "\sum_i P_i = 1" is a shorthand you will sometimes see in texts where the context makes the identity matrix unambiguous, but inside a derivation you want to be strict — the right-hand side is I.
- "A rank-1 projector catches the whole state." Only if the state happens to lie entirely in the projector's 1D direction. In general, P_\psi|\phi\rangle is only the component of |\phi\rangle along |\psi\rangle. The perpendicular component (I - P_\psi)|\phi\rangle is what got thrown away. Completeness says P_\psi + (I - P_\psi) = I — the two pieces, together, reconstruct the input.
- "Measurement operators must be rank 1." Not necessarily. A rank-k projector measures "is the state in this k-dimensional subspace?" — a coarse-grained measurement that doesn't distinguish between different states within the subspace. For example, projecting "is the qubit in the \{|00\rangle, |01\rangle\} subspace of a two-qubit system?" uses a rank-2 projector |00\rangle\langle 00| + |01\rangle\langle 01|. Projective measurements in general are sets of projectors of any rank, so long as they are orthogonal and sum to the identity.
Going deeper
If you can compute an outer product by hand, verify idempotency of a projector, use the completeness relation to expand an operator, and see why measurements are built from projectors — you have everything you need for the next three chapters. The following sections are optional context: how the spectral theorem makes every Hermitian operator a weighted sum of projectors; POVMs as the generalisation; the fact that the rank of a projector equals its trace; and an analogy to biometric identity matching as a kind of real-world projection.
Spectral decomposition preview — projectors are the atoms of observables
Every Hermitian operator A (the kind that represents a physical observable) has a spectral decomposition:

A = \sum_i \lambda_i\,|i\rangle\langle i|,
where the \lambda_i are real numbers (the eigenvalues of A) and \{|i\rangle\} is an orthonormal basis of eigenvectors. This is the spectral theorem — a central result of linear algebra, proved in any standard course.
Read the decomposition as: A is built by multiplying each eigenvalue by the projector onto its eigenspace, and summing. Projectors are not one tool among many in quantum mechanics — they are the building blocks of every observable.
When you measure A, the possible outcomes are the eigenvalues \lambda_i. The probability of getting outcome \lambda_i is \langle\psi|P_i|\psi\rangle where P_i = |i\rangle\langle i| is the projector onto the \lambda_i-eigenspace. And the state after the measurement collapses to P_i|\psi\rangle normalised.
This is the answer to the question "what is it that a quantum observable actually is?" It is an operator whose eigenvalues are the possible measurement outcomes and whose eigen-projectors are the operational steps of the measurement. Projectors are the physics; the spectral theorem is the guarantee that the physics has a clean mathematical form.
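As a concrete instance of the spectral theorem (a sketch using the Pauli X matrix; `np.linalg.eigh` returns real eigenvalues in ascending order and orthonormal eigenvectors for a Hermitian input):

```python
import numpy as np

# A Hermitian observable: the Pauli X matrix
X = np.array([[0, 1], [1, 0]], dtype=complex)

evals, evecs = np.linalg.eigh(X)   # eigenvalues -1 and +1

# Spectral decomposition: X = sum_i lambda_i |i><i|
rebuilt = sum(evals[i] * np.outer(evecs[:, i], evecs[:, i].conj())
              for i in range(2))
```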
Positive operator-valued measures (POVMs)
There is a generalisation of projective measurement called a POVM (positive operator-valued measure) that relaxes one of the three conditions above. Specifically, it drops the requirement that the operators be projectors — they are only required to be positive semi-definite and to sum to the identity.
Formally, a POVM is a set \{E_1, E_2, \ldots, E_k\} of Hermitian operators such that E_i \geq 0 (eigenvalues non-negative) and \sum_i E_i = I. The probability of outcome i when measuring state |\psi\rangle is still \langle\psi|E_i|\psi\rangle.
POVMs are strictly more general than projective measurements — they allow measurements with more outcomes than the dimension of the Hilbert space, which is sometimes the natural description of a realistic imperfect measurement or a measurement that throws information into an auxiliary system. Every POVM on an n-dimensional space can be realised as a projective measurement on a larger space (this is Naimark's theorem), so POVMs do not expand the fundamental physics — they are a convenient bookkeeping device.
Part 6 of the track, on the measurement postulate, returns to this generalisation.
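A minimal example of a POVM that is not projective is the three-outcome "trine" POVM on a qubit, built from three unit vectors at 120-degree spacing (a sketch; each element E_k = \tfrac{2}{3}|\psi_k\rangle\langle\psi_k| is positive semi-definite but not idempotent, and there are three outcomes in a two-dimensional space):

```python
import numpy as np

# Three real unit vectors at 120-degree spacing
angles = [0, 2 * np.pi / 3, 4 * np.pi / 3]
states = [np.array([np.cos(a), np.sin(a)]) for a in angles]

# POVM elements: scaled rank-1 operators, positive but NOT projectors
E = [(2 / 3) * np.outer(v, v) for v in states]

total = E[0] + E[1] + E[2]                      # sums to the identity, as required
is_projector = np.allclose(E[0] @ E[0], E[0])   # False: E_0^2 = (2/3) E_0
```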
The rank and trace of a projector
A rank-1 projector has one non-zero eigenvalue (equal to 1) and all others zero. A rank-k projector has k eigenvalues equal to 1 and the rest zero. In either case:

\text{rank}(P) = \text{tr}(P).
The trace of an operator is the sum of its diagonal entries, which is also the sum of its eigenvalues. For a projector, the eigenvalues are just 0s and 1s, so the trace counts the 1s — and that is the dimension of the subspace the projector targets.
Example: P = |0\rangle\langle 0| has diagonal entries (1, 0), so \text{tr}(P) = 1, matching the fact that it projects onto a 1-dimensional subspace. The sum P_0 + P_1 = I has trace 2, matching the fact that the whole Hilbert space is 2-dimensional.
Sketch of why this works. By the spectral theorem, P can be written in its eigenbasis as a diagonal matrix with 1s where the subspace is and 0s elsewhere. The trace is basis-independent — it is the same in any basis — so the trace of P in its own eigenbasis is literally the number of 1s, which is the rank.
This fact is the reason the trace comes up constantly in quantum information: \text{tr}(P\rho) computes "how much of the state \rho lies in the subspace that P projects onto," which is exactly the probability of the corresponding measurement outcome.
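The rank-equals-trace fact in code (a sketch using the two-qubit rank-2 projector from the common-confusions list):

```python
import numpy as np

# Standard basis kets |00> and |01> of a two-qubit (4-dimensional) space
ket00 = np.array([1, 0, 0, 0], dtype=complex)
ket01 = np.array([0, 1, 0, 0], dtype=complex)

# Rank-2 projector onto the {|00>, |01>} subspace
P = np.outer(ket00, ket00.conj()) + np.outer(ket01, ket01.conj())

trace = np.trace(P).real          # counts the eigenvalues equal to 1
rank = np.linalg.matrix_rank(P)   # dimension of the target subspace
```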
An Indian-context aside — Aadhaar biometric matching as projection
The Unique Identification Authority of India runs the Aadhaar system, which stores biometric templates (fingerprints, iris scans) for over a billion people. Verifying an individual involves taking a fresh biometric scan and checking whether it "matches" a stored template.
There is a rough analogy with quantum projection. A biometric template can be thought of as defining a direction in a high-dimensional feature space — the direction that captures everything unique about this individual's fingerprint. A new scan comes in as another vector. The verification procedure, at its heart, projects the new scan onto the direction of the stored template and checks the length of the projection. If the projected vector is long (the scans align), the verification succeeds. If it is short (the scans are orthogonal-ish), verification fails.
The analogy is loose — the actual algorithms use correlation scores, minutiae matching, and classical pattern recognition, not quantum projectors. But the geometric intuition is the same: identity is a direction, verification is a projection, and the length of the projection is the match score. Quantum projection makes this idea mathematically precise — the projector |\psi\rangle\langle\psi| is literally the identity matcher for the state |\psi\rangle. Aadhaar's engineers and quantum physicists, in very different notation, are doing the same basic linear algebra.
Where this leads next
- Operators as Matrices — chapter 7, next in the track. The full matrix form of any operator, matrix elements as inner products A_{ij} = \langle i|A|j\rangle, and the Hermitian-conjugate (dagger) operation.
- Four Postulates of Quantum Mechanics — chapter 11. The measurement postulate in full; projectors play the starring role and every outcome probability is computed from the projectors you met here.
- Projective Measurement — chapter 12. A careful, worked treatment of projective measurement with examples beyond the computational basis; the Born rule derived from the postulates.
- Density Matrices — Introduction — chapter 13. Density matrices generalise pure-state projectors |\psi\rangle\langle\psi| to mixed states \sum_i p_i|\psi_i\rangle\langle\psi_i|, and the whole machinery of this chapter carries over.
References
- Nielsen and Chuang, Quantum Computation and Quantum Information — Cambridge University Press, §2.1.5 and §2.2.3.
- John Preskill, Lecture Notes on Quantum Computation — theory.caltech.edu/~preskill/ph229, Chapter 2 (measurement and projectors).
- Wikipedia, Projection (linear algebra).
- Wikipedia, Outer product.
- Wikipedia, POVM — for the generalised-measurement picture that projectors are a special case of.
- Qiskit Textbook, Linear algebra for quantum computing — outer-product matrix examples with qubit states.