In short
The inner product \langle\phi|\psi\rangle is a complex number that measures how much two quantum states overlap. Mechanically it is a row (the bra) times a column (the ket), with the bra's entries complex-conjugated. Orthogonal states have \langle\phi|\psi\rangle = 0 — no overlap — and an orthonormal basis is a set where every pair is orthogonal and each state has \langle\psi|\psi\rangle = 1. Every physical state must be normalised: \langle\psi|\psi\rangle = 1, because |\langle i|\psi\rangle|^2 is the probability of measuring outcome i, and probabilities have to add up to 1.
Imagine two arrows drawn on the same page. They share a starting point; they point in different directions. You want one number that says how aligned they are: if they point the same way, the number is big; if they point at right angles, it is zero; if they point in opposite directions, it is negative. In two-dimensional real geometry this number is the familiar dot product \mathbf{v} \cdot \mathbf{w} = |\mathbf{v}|\,|\mathbf{w}|\cos\theta. Draw the two arrows and you see the geometry: one number, one angle, one picture.
Quantum states carry exactly this same idea — with one twist. A quantum state is a vector in a complex vector space, not a real one. You cannot literally draw it as an arrow on the page, but the same alignment question still makes sense: given two states |\phi\rangle and |\psi\rangle, how much do they overlap? The answer is a complex number called the inner product, written \langle\phi|\psi\rangle. It plays exactly the role the dot product plays for ordinary vectors — and a little more besides, because its magnitude squared turns out to be the probability of measuring one state and getting the other.
This chapter builds that single number. You will see where the complex conjugation comes from, why orthogonal states have zero overlap, why every physical state must satisfy \langle\psi|\psi\rangle = 1, and why the identity matrix is secretly a sum of projectors — all from one definition.
The geometric picture — overlap as alignment
Start with real vectors on a flat page. Two arrows \mathbf{v} and \mathbf{w} meet at an angle \theta. Their dot product is

\mathbf{v} \cdot \mathbf{w} = |\mathbf{v}|\,|\mathbf{w}|\cos\theta.
Three easy facts fall out of this formula:
- If the arrows point the same direction (\theta = 0), \cos\theta = 1 and the dot product is just |\mathbf{v}|\,|\mathbf{w}|.
- If they are perpendicular (\theta = \pi/2), \cos\theta = 0 and the dot product is 0.
- If they point in opposite directions (\theta = \pi), \cos\theta = -1 and the dot product is -|\mathbf{v}|\,|\mathbf{w}|.
Now assume both are unit vectors, so |\mathbf{v}| = |\mathbf{w}| = 1. Then \mathbf{v} \cdot \mathbf{w} = \cos\theta — a single number between -1 and +1 that is exactly the cosine of the angle between them. The dot product is the alignment.
Why unit vectors: the dot product of general vectors mixes two pieces of information — the angle and the lengths. For quantum states you fix the length to 1 (you will see why in a moment) so that the inner product carries only the angle, leaving the magnitudes out of the way.
The quantum version is the same idea in complex dimensions. Two quantum states |\phi\rangle and |\psi\rangle are vectors in a complex vector space. They have a "length" (more precisely, a norm) and a "direction." The inner product \langle\phi|\psi\rangle gives you one complex number that captures how aligned the two are — generalising \cos\theta into the complex plane. When the states are unit vectors, |\langle\phi|\psi\rangle| is between 0 and 1, and it is exactly 1 when |\phi\rangle and |\psi\rangle are the same state (up to phase), and exactly 0 when they are orthogonal.
The next few sections make "quantum inner product" fully concrete.
Defining the inner product — the mechanical rule
You met the mechanical definition in chapter 4. Write two qubit states as

|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\phi\rangle = \gamma|0\rangle + \delta|1\rangle.
The inner product \langle\phi|\psi\rangle is the bra \langle\phi| — the row (\gamma^*, \delta^*) — multiplied into the ket |\psi\rangle — the column (\alpha, \beta)^T:

\langle\phi|\psi\rangle = (\gamma^*, \delta^*)\begin{pmatrix}\alpha\\ \beta\end{pmatrix} = \gamma^*\alpha + \delta^*\beta.
One complex number. Two coefficients from |\phi\rangle, conjugated; two from |\psi\rangle, kept as they are; paired up and added.
Why the conjugation on |\phi\rangle: the inner product needs to satisfy two geometric requirements at once. First, the norm \langle\psi|\psi\rangle should be a non-negative real number — otherwise it cannot be a length. Second, the inner product should be linear in |\psi\rangle so that it plays nicely with superposition. The only way to get both of these in a complex vector space is to complex-conjugate one side. Putting the conjugation on the bra (the left-hand side) is the physicist's convention; mathematicians sometimes put it on the other side.
The same rule generalises directly to bigger spaces. If |\psi\rangle and |\phi\rangle live in \mathbb{C}^n with components \alpha_i and \gamma_i, then

\langle\phi|\psi\rangle = \sum_{i=1}^{n} \gamma_i^*\,\alpha_i.
Sum of conjugated-coefficient times coefficient, one term per basis state.
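The mechanical rule is easy to check numerically. A minimal sketch in Python with NumPy (the example states are illustrative; note that np.vdot conjugates its first argument, which matches the bra convention):

```python
import numpy as np

# |psi> = alpha|0> + beta|1> and |phi> = gamma|0> + delta|1>, as columns.
psi = np.array([1 / np.sqrt(2), 1j / np.sqrt(2)])  # alpha, beta
phi = np.array([1.0, 0.0])                          # gamma, delta

# <phi|psi>: conjugate the bra's entries, multiply pairwise, add.
overlap = np.sum(np.conj(phi) * psi)

# np.vdot conjugates its first argument, so it computes the same number.
assert np.isclose(overlap, np.vdot(phi, psi))
```

The same line works unchanged for vectors in \mathbb{C}^n.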
An immediate consequence: the norm
Take |\phi\rangle = |\psi\rangle in the formula. The inner product becomes

\langle\psi|\psi\rangle = \sum_i \alpha_i^*\,\alpha_i = \sum_i |\alpha_i|^2.
A sum of squared magnitudes. This is always a non-negative real number, and it is zero only when every component is zero — that is, only when |\psi\rangle = 0.
The quantity \sqrt{\langle\psi|\psi\rangle} is called the norm of |\psi\rangle and is often written \||\psi\rangle\|. It is the quantum analogue of the length of an arrow. For a physical quantum state, you always have \langle\psi|\psi\rangle = 1 — the norm is 1 — because of the probability rule you are about to see.
Another immediate consequence: what happens when you flip the inner product
What is \langle\psi|\phi\rangle, the inner product the other way round? Apply the rule:

\langle\psi|\phi\rangle = \alpha^*\gamma + \beta^*\delta.
Compare with \langle\phi|\psi\rangle = \gamma^*\alpha + \delta^*\beta. These are different numbers, in general — but they are complex conjugates of each other, because taking the conjugate of \gamma^*\alpha + \delta^*\beta gives (\gamma^*\alpha)^* + (\delta^*\beta)^* = \alpha^*\gamma + \beta^*\delta. So:

\langle\phi|\psi\rangle = \langle\psi|\phi\rangle^*.
This is the conjugate-symmetry of the inner product. For real vectors the dot product is symmetric, \mathbf{v} \cdot \mathbf{w} = \mathbf{w} \cdot \mathbf{v}; for complex vectors you get symmetry up to complex conjugation. The magnitude is symmetric, |\langle\phi|\psi\rangle| = |\langle\psi|\phi\rangle|, but the phase flips sign.
Why this matters: when you square the inner product to get a probability (the next section), the conjugation flip does not show up — |z|^2 = |\bar z|^2. So physical predictions do not care which way round you write the inner product, but the intermediate symbols do.
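Conjugate symmetry can be checked the same way. A small NumPy sketch (the states are illustrative, chosen so the overlap is genuinely complex):

```python
import numpy as np

phi = np.array([1.0, 1.0]) / np.sqrt(2)   # |+>
psi = np.array([1.0, 1j]) / np.sqrt(2)    # a state with relative phase i

fwd = np.vdot(phi, psi)   # <phi|psi>
rev = np.vdot(psi, phi)   # <psi|phi>

# Swapping the sides conjugates the result...
assert np.isclose(fwd, np.conj(rev))
# ...but the magnitude, which carries the physics, is unchanged.
assert np.isclose(abs(fwd), abs(rev))
```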
Overlap and the probability link
Here is the physical meaning of the inner product. If you prepare a quantum system in the state |\psi\rangle and then measure it in a basis that contains |\phi\rangle as one of its outcomes, the probability that your measurement returns "\phi" is

P(\phi|\psi) = |\langle\phi|\psi\rangle|^2.
The magnitude squared of the inner product is a probability. This is known as the Born rule, and chapter 11 derives it carefully from the measurement postulate. For now, treat it as a fact about what the inner product means: it measures the likelihood that a state prepared as |\psi\rangle will be observed as |\phi\rangle.
Three consequences, which you will use constantly:
- Identical states give probability 1. If |\phi\rangle = |\psi\rangle, the inner product is \langle\psi|\psi\rangle = 1 (for a normalised state), and |1|^2 = 1. You prepare |\psi\rangle and measure in a basis containing |\psi\rangle; the probability of getting |\psi\rangle is 1.
- Orthogonal states give probability 0. If \langle\phi|\psi\rangle = 0, then |0|^2 = 0. You prepare |\psi\rangle and measure in a basis containing |\phi\rangle; you will never get |\phi\rangle as an outcome. Orthogonal states are perfectly distinguishable.
- Intermediate angles give intermediate probabilities. If |\langle\phi|\psi\rangle| = 1/\sqrt{2}, the probability is 1/2 — fifty-fifty. The states |0\rangle and |+\rangle = (|0\rangle + |1\rangle)/\sqrt{2} have exactly this overlap: if you prepare |+\rangle and measure in the \{|0\rangle, |1\rangle\} basis, you get |0\rangle half the time.
The inner product is the bridge between the geometry of Hilbert space and the experimental statistics of quantum mechanics.
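The three consequences above can be sketched in a few lines of NumPy (the helper name born_probability is ours, not a standard API):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)

def born_probability(phi, psi):
    """P(phi|psi) = |<phi|psi>|^2: the magnitude squared, not the magnitude."""
    return abs(np.vdot(phi, psi)) ** 2

assert np.isclose(born_probability(ket1, ket1), 1.0)  # identical states
assert np.isclose(born_probability(ket0, ket1), 0.0)  # orthogonal states
assert np.isclose(born_probability(ket0, plus), 0.5)  # fifty-fifty
```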
Orthogonality — perfectly distinguishable states
Two states are orthogonal when their inner product is zero:

\langle\phi|\psi\rangle = 0.
For the computational basis, this is immediate: |0\rangle and |1\rangle are orthogonal because

\langle 0|1\rangle = (1, 0)\begin{pmatrix}0\\ 1\end{pmatrix} = 1 \cdot 0 + 0 \cdot 1 = 0.
Orthogonality is not a vague "these states are different" statement. It is an experimental statement: if you prepare |1\rangle and measure in the \{|0\rangle, |1\rangle\} basis, you get |1\rangle with probability 1 and |0\rangle with probability 0. The two states are perfectly distinguishable by a measurement in the basis that contains both of them.
The X-basis gives another pair of orthogonal states. The eigenstates of the Pauli-X operator are

|+\rangle = \frac{|0\rangle + |1\rangle}{\sqrt{2}}, \qquad |{-}\rangle = \frac{|0\rangle - |1\rangle}{\sqrt{2}}.
Compute \langle +|{-}\rangle. Using the rule "conjugate the bra's coefficients and multiply":

\langle +|{-}\rangle = \frac{1}{\sqrt{2}} \cdot \frac{1}{\sqrt{2}} + \frac{1}{\sqrt{2}} \cdot \left(-\frac{1}{\sqrt{2}}\right) = \frac{1}{2} - \frac{1}{2} = 0.
So |+\rangle and |-\rangle are orthogonal. Notice how the two basis states look almost the same on paper — they differ only by a minus sign on the second term — and yet that minus sign makes them as distinguishable as |0\rangle and |1\rangle. This is a recurring quantum phenomenon: tiny-looking phase changes produce fully distinguishable states.
Orthogonality in the complex case is subtler than perpendicularity
For real vectors, orthogonality is "perpendicular" — the two arrows meet at 90°. For complex vectors, "perpendicular" is not quite the right word. Consider the states

|\alpha\rangle = |\psi\rangle, \qquad |\beta\rangle = e^{i\varphi}|\psi\rangle,

for some normalised state |\psi\rangle. These two states differ only by a global phase e^{i\varphi}. Physically they are the same state (global phases are unobservable, a point chapter 14 develops). The inner product is

\langle\alpha|\beta\rangle = e^{i\varphi}\langle\psi|\psi\rangle = e^{i\varphi}.
So \langle\alpha|\beta\rangle = e^{i\varphi}, a complex number of magnitude 1. Its magnitude squared is 1. The two states have probability 1 of being identified with each other — they are, physically, the same state. But the inner product itself is not 1; it is e^{i\varphi}. This is the complex-vs-real subtlety: the magnitude of the inner product carries the physical information; the phase is bookkeeping that gets squared away.
Real vectors have no such phase freedom, and "orthogonal" and "perpendicular" mean the same thing. Complex vectors have more room: orthogonality is the condition that \langle\phi|\psi\rangle = 0 as a complex number, with real and imaginary parts both vanishing (equivalently |\langle\phi|\psi\rangle| = 0, since the only complex number of zero magnitude is zero itself).
Orthonormal bases — a complete and clean coordinate system
A set of states \{|i\rangle\} is an orthonormal basis if:
- Each state is normalised: \langle i|i\rangle = 1.
- Different states are orthogonal: \langle i|j\rangle = 0 for i \neq j.
- Any state in the space can be written as a sum |\psi\rangle = \sum_i c_i|i\rangle for some coefficients c_i.
Conditions 1 and 2 combined — \langle i|j\rangle = \delta_{ij} — make up the orthonormality condition. Condition 3 is the completeness condition.
For a single qubit, the canonical orthonormal basis is \{|0\rangle, |1\rangle\}: each is normalised, the two are orthogonal, and any state |\psi\rangle = \alpha|0\rangle + \beta|1\rangle is a linear combination. The X-basis \{|+\rangle, |-\rangle\} is another orthonormal basis for the same space. Any state can be written in the X-basis too: |\psi\rangle = c_+|+\rangle + c_-|-\rangle with appropriate c_\pm. These are two different coordinate systems for the same 2-dimensional complex space, like Cartesian and polar coordinates for the plane.
Two properties of orthonormal bases that make them useful:
- Coefficients come from inner products. If |\psi\rangle = \sum_i c_i|i\rangle is a state expressed in an orthonormal basis, then c_i = \langle i|\psi\rangle — the i-th coefficient is literally the inner product of the i-th basis state with |\psi\rangle. Derivation: \langle j|\psi\rangle = \langle j|\sum_i c_i|i\rangle = \sum_i c_i\langle j|i\rangle = \sum_i c_i \delta_{ji} = c_j. Why: the sum collapses because \langle j|i\rangle is zero except when i = j, at which point it is 1 and picks out exactly the c_j coefficient. This is the same machinery that lets Fourier series read off coefficients from integrals — and it is essentially the same trick, in a different vector space.
- Probabilities are squared-coefficient magnitudes. If |\psi\rangle = \sum_i c_i|i\rangle, the probability of measuring outcome i in the basis \{|i\rangle\} is |c_i|^2. Combined with c_i = \langle i|\psi\rangle, this is just the Born rule restated: P(i) = |\langle i|\psi\rangle|^2.
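Both properties can be verified numerically. A sketch assuming NumPy, with an illustrative state 0.6|0\rangle + 0.8i|1\rangle expanded in the X-basis:

```python
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus  = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

psi = 0.6 * ket0 + 0.8j * ket1   # normalised: 0.36 + 0.64 = 1

# Coefficients in the X-basis read off as inner products: c_i = <i|psi>.
c_plus, c_minus = np.vdot(plus, psi), np.vdot(minus, psi)

# Rebuilding the state from those coefficients recovers psi exactly...
assert np.allclose(c_plus * plus + c_minus * minus, psi)
# ...and the squared-coefficient magnitudes are the X-basis probabilities.
assert np.isclose(abs(c_plus) ** 2 + abs(c_minus) ** 2, 1.0)
```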
Normalisation — why every state satisfies \langle\psi|\psi\rangle = 1
Every physical quantum state must be normalised:

\langle\psi|\psi\rangle = 1.
The reason is probability. If you measure |\psi\rangle in an orthonormal basis \{|i\rangle\}, the probability of getting outcome i is |\langle i|\psi\rangle|^2. The probabilities across all outcomes must sum to 1 (something has to happen, after all). So:

\sum_i |\langle i|\psi\rangle|^2 = 1.
Now use the two properties from the previous section. The left side is \sum_i |c_i|^2 = \sum_i c_i^* c_i. That sum is exactly \langle\psi|\psi\rangle:

\langle\psi|\psi\rangle = \left(\sum_i c_i^*\langle i|\right)\left(\sum_j c_j|j\rangle\right) = \sum_{i,j} c_i^* c_j\,\langle i|j\rangle = \sum_i |c_i|^2.
Why the double sum collapses: \langle i|j\rangle = \delta_{ij}, so out of all the i-j pairs only those with i = j survive — every cross-term is zero. The sum over all pairs reduces to a sum over just the diagonal.
So \langle\psi|\psi\rangle = \sum_i|c_i|^2, and the probabilities-sum-to-1 requirement says this equals 1. Hence every physically meaningful state has norm 1.
If a state does not have norm 1, it is an unnormalised representative of a physical state — fine as an intermediate step in a calculation, but you must divide by the norm before extracting probabilities. Example 2 below walks through this.
The completeness relation — the identity as a sum of projectors
Here is a subtle-looking identity that turns out to be one of the most useful in quantum mechanics. For any orthonormal basis \{|i\rangle\},

\sum_i |i\rangle\langle i| = I,
where I is the identity operator. In words: the sum of the outer products |i\rangle\langle i| over an orthonormal basis equals the identity matrix.
You will meet outer products properly in chapter 6; here it is enough to know what |i\rangle\langle i| is as a matrix. For |0\rangle = (1, 0)^T:

|0\rangle\langle 0| = \begin{pmatrix}1\\ 0\end{pmatrix}(1, 0) = \begin{pmatrix}1 & 0\\ 0 & 0\end{pmatrix}.
And for |1\rangle:

|1\rangle\langle 1| = \begin{pmatrix}0\\ 1\end{pmatrix}(0, 1) = \begin{pmatrix}0 & 0\\ 0 & 1\end{pmatrix}.
Sum them:

|0\rangle\langle 0| + |1\rangle\langle 1| = \begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix} = I.
Why this works for any orthonormal basis, not just the computational one: any orthonormal basis is a rotated copy of \{|0\rangle, |1\rangle\}, and the identity matrix is rotation-invariant. The sum \sum_i |i\rangle\langle i| is just "project onto the i-th basis direction, sum over all directions," and if you cover all directions orthogonally you are doing nothing at all.
Why call it "completeness"? Because the fact that the sum equals I is what it means for the basis to be complete — there is no direction left uncovered; every vector can be reached. An incomplete set of orthonormal vectors (say, only |0\rangle) gives |0\rangle\langle 0| \neq I — that sum is a projector onto the 1-dimensional subspace spanned by |0\rangle, not the full identity.
The completeness relation is the single most-used identity in intermediate-level quantum mechanics. It lets you "insert the identity" anywhere you want, in the form of a sum of projectors, and expand any operator or state in whatever basis is convenient. You will see it bookend derivations throughout Part 6 and Part 9.
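A direct numerical check of the completeness relation, for both the computational basis and the X-basis (a NumPy sketch; projector is our helper name):

```python
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus  = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

def projector(ket):
    """The outer product |i><i|: column times conjugated row."""
    return np.outer(ket, np.conj(ket))

# Each orthonormal basis resolves the identity: sum_i |i><i| = I.
assert np.allclose(projector(ket0) + projector(ket1), np.eye(2))
assert np.allclose(projector(plus) + projector(minus), np.eye(2))

# An incomplete set gives a projector, not the identity.
assert not np.allclose(projector(ket0), np.eye(2))
```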
Worked examples
Example 1: Compute every overlap in the four-state family
Compute the inner products \langle 0|+\rangle, \langle 1|+\rangle, \langle +|{-}\rangle, \langle 0|0\rangle, and \langle 0|1\rangle. Present them in a table, and verify that \{|+\rangle, |{-}\rangle\} is an orthonormal basis.
Step 1. Write the four states as columns.

|0\rangle = \begin{pmatrix}1\\ 0\end{pmatrix}, \quad |1\rangle = \begin{pmatrix}0\\ 1\end{pmatrix}, \quad |+\rangle = \frac{1}{\sqrt{2}}\begin{pmatrix}1\\ 1\end{pmatrix}, \quad |{-}\rangle = \frac{1}{\sqrt{2}}\begin{pmatrix}1\\ -1\end{pmatrix}.
Why: the definitions of the computational-basis states and the X-basis states, written out explicitly. All four coefficients are real, so the bra is just the column laid flat — no conjugation changes.
Step 2. Compute each inner product using the row-times-column rule.

\langle 0|0\rangle = 1 \cdot 1 + 0 \cdot 0 = 1, \qquad \langle 0|1\rangle = 1 \cdot 0 + 0 \cdot 1 = 0,

\langle 0|+\rangle = 1 \cdot \tfrac{1}{\sqrt{2}} + 0 \cdot \tfrac{1}{\sqrt{2}} = \tfrac{1}{\sqrt{2}}, \qquad \langle 1|+\rangle = 0 \cdot \tfrac{1}{\sqrt{2}} + 1 \cdot \tfrac{1}{\sqrt{2}} = \tfrac{1}{\sqrt{2}},

\langle +|{-}\rangle = \tfrac{1}{\sqrt{2}} \cdot \tfrac{1}{\sqrt{2}} + \tfrac{1}{\sqrt{2}} \cdot \left(-\tfrac{1}{\sqrt{2}}\right) = \tfrac{1}{2} - \tfrac{1}{2} = 0.
Why the pattern: each inner product is two numbers added (for 2-dimensional states). When the two non-zero components align, you get 1/\sqrt{2} or 1. When they cancel (as in \langle +|{-}\rangle), you get 0.
Step 3. Present in a table.
| Overlap | Value |
|---|---|
| \langle 0 \vert 0\rangle | 1 |
| \langle 0 \vert 1\rangle | 0 |
| \langle 0 \vert +\rangle | 1/\sqrt{2} |
| \langle 1 \vert +\rangle | 1/\sqrt{2} |
| \langle + \vert {-}\rangle | 0 |
Step 4. Verify \{|+\rangle, |{-}\rangle\} is an orthonormal basis. You need \langle +|+\rangle = 1, \langle {-}|{-}\rangle = 1, and \langle +|{-}\rangle = 0.

\langle +|+\rangle = \tfrac{1}{2} + \tfrac{1}{2} = 1, \qquad \langle {-}|{-}\rangle = \tfrac{1}{2} + \tfrac{1}{2} = 1, \qquad \langle +|{-}\rangle = 0 \ \text{(Step 2)}.
All three conditions hold, so \{|+\rangle, |{-}\rangle\} is an orthonormal basis. ✓
Result. Five inner products computed, \{|+\rangle, |{-}\rangle\} verified as an orthonormal basis.
What this shows. The inner product gives you one number per pair of states, and that number is all you need to tell apart orthogonal, identical, and partially-overlapping states. Fluency with this table is the main reason quantum-computing students can read algorithms at a glance.
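The whole table can be reproduced mechanically. A NumPy sketch of Example 1 (the dictionary keys are just labels):

```python
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus  = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

# The five overlaps from the table, via the row-times-column rule.
table = {
    "<0|0>": np.vdot(ket0, ket0),
    "<0|1>": np.vdot(ket0, ket1),
    "<0|+>": np.vdot(ket0, plus),
    "<1|+>": np.vdot(ket1, plus),
    "<+|->": np.vdot(plus, minus),
}
assert np.isclose(table["<0|0>"], 1)
assert np.isclose(table["<0|1>"], 0)
assert np.isclose(table["<0|+>"], 1 / np.sqrt(2))
assert np.isclose(table["<1|+>"], 1 / np.sqrt(2))
assert np.isclose(table["<+|->"], 0)

# Orthonormality of the X-basis: <+|+> = <-|-> = 1, <+|-> = 0.
assert np.isclose(np.vdot(plus, plus), 1)
assert np.isclose(np.vdot(minus, minus), 1)
```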
Example 2: Normalising a state
Given the unnormalised state

|\psi\rangle = 2|0\rangle + (1 - i)|1\rangle,
compute its norm and produce a normalised version.
Step 1. Write the state as a column, and write its bra as a conjugated row.

|\psi\rangle = \begin{pmatrix}2\\ 1-i\end{pmatrix}, \qquad \langle\psi| = (2, 1+i).
Why (1 - i)^* = 1 + i: the complex conjugate flips the sign of the imaginary part. The real coefficient 2 is its own conjugate.
Step 2. Compute \langle\psi|\psi\rangle as the row-times-column.

\langle\psi|\psi\rangle = 2 \cdot 2 + (1+i)(1-i).
Why the structure: the rule is "i-th bra coefficient times i-th ket coefficient, summed." Two terms for a two-dimensional space.
Step 3. Expand the product (1+i)(1-i).

(1+i)(1-i) = 1 - i + i - i^2 = 1 + 1 = 2.
Why this always gives a real answer: the product of a complex number with its conjugate is z^* z = |z|^2, which is always real and non-negative. Here |1 - i|^2 = 1^2 + 1^2 = 2, matching the expansion.
Step 4. Add the terms to get the squared norm.

\langle\psi|\psi\rangle = 4 + 2 = 6.
The norm is \||\psi\rangle\| = \sqrt{6}.
Step 5. Divide by the norm to normalise.

|\psi_{\text{norm}}\rangle = \frac{1}{\sqrt{6}}\Big(2|0\rangle + (1-i)|1\rangle\Big) = \frac{2}{\sqrt{6}}|0\rangle + \frac{1-i}{\sqrt{6}}|1\rangle.
Verify: \langle\psi_{\text{norm}}|\psi_{\text{norm}}\rangle = \tfrac{1}{6}\langle\psi|\psi\rangle = \tfrac{1}{6} \cdot 6 = 1. ✓
Result. The unnormalised state 2|0\rangle + (1-i)|1\rangle has norm \sqrt{6}. Dividing by \sqrt{6} gives the normalised state \tfrac{2}{\sqrt{6}}|0\rangle + \tfrac{1-i}{\sqrt{6}}|1\rangle, which now satisfies \langle\psi_{\text{norm}}|\psi_{\text{norm}}\rangle = 1.
What this shows. Normalisation is not a physical change — it is a bookkeeping step. The direction of the vector in Hilbert space is what encodes the physical state; the magnitude is something you can freely rescale. Any derivation that produces an unnormalised state can be normalised at the end by dividing by \sqrt{\langle\psi|\psi\rangle}, and the resulting state is the physically meaningful one.
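Example 2 in code form, assuming NumPy (np.vdot conjugates its first argument, so it implements the bra automatically):

```python
import numpy as np

psi = np.array([2.0, 1 - 1j])        # unnormalised 2|0> + (1-i)|1>

norm_sq = np.vdot(psi, psi).real     # <psi|psi> = 4 + 2 = 6
assert np.isclose(norm_sq, 6.0)

psi_norm = psi / np.sqrt(norm_sq)    # divide by the norm sqrt(6)
assert np.isclose(np.vdot(psi_norm, psi_norm).real, 1.0)
```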
Common confusions
- "\langle\phi|\psi\rangle = \langle\psi|\phi\rangle always." False for complex states. The correct relation is \langle\phi|\psi\rangle = \overline{\langle\psi|\phi\rangle} — complex conjugation flips the sign of the imaginary part when you swap the two sides. The magnitudes are equal, but the phases are opposite. For real-valued states (all coefficients real), the two inner products happen to agree, which is why the real-vector dot product is symmetric.
- "|\langle\phi|\psi\rangle| is the probability of measuring |\phi\rangle." No. The square of the magnitude is the probability: P(\phi|\psi) = |\langle\phi|\psi\rangle|^2. Missing the square gives you the square root of a probability — which for a 50% outcome would be 1/\sqrt{2} \approx 0.707, not 0.5. The Born rule is a squared-amplitude rule, always.
- "Orthogonal means perpendicular like in 2D." For real vectors, yes — orthogonality and perpendicularity are the same. For complex vectors, orthogonality is a richer condition: \langle\phi|\psi\rangle = 0 as a complex number, meaning the real and imaginary parts both vanish. The geometric picture of "perpendicular arrows" still helps, but the full definition includes complex phases.
- "A state |\psi\rangle always has \langle\psi|\psi\rangle = 1." Only after you normalise it. A raw linear combination like 2|0\rangle + 3|1\rangle does not have norm 1 — its norm is \sqrt{4 + 9} = \sqrt{13}. Before extracting probabilities, you divide by \sqrt{13} to get the physical state \tfrac{2}{\sqrt{13}}|0\rangle + \tfrac{3}{\sqrt{13}}|1\rangle. An unnormalised ket is a valid intermediate object in algebra; it is not a physically realisable state until you normalise.
- "Global phase changes the state." No. Multiplying |\psi\rangle by e^{i\gamma} gives a new ket but the same physical state — every probability |\langle\phi|\psi\rangle|^2 is unchanged because the extra phase e^{i\gamma} has magnitude 1 and vanishes when you square. This is why states are sometimes described as "equivalence classes up to global phase" — |\psi\rangle and e^{i\gamma}|\psi\rangle are the same physics.
- "The completeness relation is an advanced topic." It is the most widely used identity in quantum mechanics. Whenever you want to expand an operator or a state in a particular basis, you "insert the identity" as \sum_i|i\rangle\langle i| and let the basis take over. Every derivation in Parts 6-10 uses this move.
Going deeper
If you can compute an inner product by eye, recognise orthogonal pairs, normalise a given state, and apply the completeness relation to expand a state in any basis — you have everything you need for the next several chapters. What follows is optional context: the Schwarz inequality and what it guarantees, a brief tour of Gram-Schmidt orthogonalisation, how completeness lets you write an operator as a matrix of matrix elements, and a peek at the infinite-dimensional case.
The Cauchy-Schwarz inequality
One of the oldest and most useful inequalities in linear algebra, applied to the quantum inner product, reads

|\langle\phi|\psi\rangle|^2 \leq \langle\phi|\phi\rangle\,\langle\psi|\psi\rangle,
with equality if and only if |\phi\rangle and |\psi\rangle are proportional (one is a scalar multiple of the other). For unit vectors, the right-hand side is 1 and the inequality becomes |\langle\phi|\psi\rangle| \leq 1.
Connect this to the geometric picture. For real unit vectors, \langle\phi|\psi\rangle = \cos\theta, and |\cos\theta| \leq 1 with equality only at \theta = 0 or \pi. The Cauchy-Schwarz inequality is the complex-vector restatement of exactly this: the magnitude of the inner product of two unit vectors cannot exceed 1, and it equals 1 only when they point in the same direction (up to a phase).
A quick proof. Consider the state |\chi\rangle = |\phi\rangle - \frac{\langle\psi|\phi\rangle}{\langle\psi|\psi\rangle}|\psi\rangle — the component of |\phi\rangle orthogonal to |\psi\rangle. Since \langle\chi|\chi\rangle \geq 0:

\langle\chi|\chi\rangle = \langle\phi|\phi\rangle - \frac{|\langle\psi|\phi\rangle|^2}{\langle\psi|\psi\rangle} \geq 0.
Why: expanding \langle\chi|\chi\rangle gives four cross-terms; two of them cancel against each other, and the survivors rearrange into \langle\phi|\phi\rangle - |\langle\psi|\phi\rangle|^2 / \langle\psi|\psi\rangle. Requiring this to be non-negative gives the inequality.
Rearranging: |\langle\psi|\phi\rangle|^2 \leq \langle\phi|\phi\rangle\langle\psi|\psi\rangle, which is Schwarz. This inequality underlies the uncertainty principle (where it bounds the product of two observables' variances), the Bell inequalities (where it bounds classical correlations), and many quantum-information quantities.
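The inequality and its equality condition can be probed numerically. A sketch with random complex vectors (the seed, dimension, and sample count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Schwarz holds for every pair of complex vectors.
for _ in range(100):
    phi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    lhs = abs(np.vdot(phi, psi)) ** 2
    rhs = (np.vdot(phi, phi) * np.vdot(psi, psi)).real
    assert lhs <= rhs + 1e-9

# Equality when one vector is a scalar multiple of the other: phi = 3i psi.
psi = np.array([1.0, 2j])
phi = 3j * psi
assert np.isclose(abs(np.vdot(phi, psi)) ** 2,
                  (np.vdot(phi, phi) * np.vdot(psi, psi)).real)
```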
Gram-Schmidt orthogonalisation
Suppose you have a list of linearly independent but not orthogonal vectors, and you want an orthonormal basis for the same space. The Gram-Schmidt procedure builds one step by step.
Step 1. Take the first vector |v_1\rangle and normalise it: |e_1\rangle = |v_1\rangle / \||v_1\rangle\|.
Step 2. Take the second vector |v_2\rangle, subtract off its component along |e_1\rangle, and normalise the remainder:

|u_2\rangle = |v_2\rangle - \langle e_1|v_2\rangle\,|e_1\rangle, \qquad |e_2\rangle = \frac{|u_2\rangle}{\||u_2\rangle\|}.
Why this works: subtracting \langle e_1|v_2\rangle\,|e_1\rangle removes the part of |v_2\rangle that lies along |e_1\rangle. What is left is orthogonal to |e_1\rangle by construction — you just killed all of the overlap.
Step 3. For the third vector, subtract off both components and normalise. In general, for the k-th input vector, subtract the components along each of the previously constructed orthonormal vectors.
Example in 2D: start with |v_1\rangle = |0\rangle + |1\rangle and |v_2\rangle = |0\rangle. First, normalise |v_1\rangle: its norm is \sqrt{2}, so |e_1\rangle = (|0\rangle + |1\rangle)/\sqrt{2} = |+\rangle. Next, \langle e_1|v_2\rangle = \langle +|0\rangle = 1/\sqrt{2}, so

|u_2\rangle = |0\rangle - \frac{1}{\sqrt{2}}|+\rangle = |0\rangle - \frac{1}{2}\big(|0\rangle + |1\rangle\big) = \frac{1}{2}|0\rangle - \frac{1}{2}|1\rangle.
The norm of |u_2\rangle is \sqrt{1/4 + 1/4} = 1/\sqrt{2}, so |e_2\rangle = |u_2\rangle \cdot \sqrt{2} = (|0\rangle - |1\rangle)/\sqrt{2} = |{-}\rangle. The procedure has produced the X-basis \{|+\rangle, |{-}\rangle\} from the non-orthogonal pair \{|+\rangle \cdot \sqrt{2}, |0\rangle\}.
Gram-Schmidt is how you would build an orthonormal basis from scratch given any linearly independent spanning set. In quantum computing it rarely comes up explicitly — the bases you meet are usually already orthonormal — but it is the theoretical guarantee that orthonormal bases exist for every Hilbert space.
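The procedure is only a few lines of code. A sketch of Gram-Schmidt in NumPy, reproducing the 2D example above (gram_schmidt is our helper name):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalise a list of linearly independent complex vectors."""
    basis = []
    for v in vectors:
        u = np.asarray(v, dtype=complex)
        for e in basis:
            u = u - np.vdot(e, u) * e   # subtract the component along e
        basis.append(u / np.linalg.norm(u))
    return basis

# {|0>+|1>, |0>} should orthonormalise to the X-basis {|+>, |->}.
e1, e2 = gram_schmidt([np.array([1.0, 1.0]), np.array([1.0, 0.0])])
assert np.allclose(e1, np.array([1, 1]) / np.sqrt(2))   # |+>
assert np.allclose(e2, np.array([1, -1]) / np.sqrt(2))  # |->
```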
The completeness relation as a resolution of the identity
The trick \sum_i|i\rangle\langle i| = I is called a resolution of the identity because it decomposes I into a sum of rank-1 projectors. Its real power is in expanding operators.
Take any operator A. Insert the identity on both sides:

A = IAI = \sum_{i,j} |i\rangle\langle i|A|j\rangle\langle j| = \sum_{i,j} A_{ij}\,|i\rangle\langle j|,
where A_{ij} = \langle i|A|j\rangle are the matrix elements of A in the basis \{|i\rangle\}.
Why this is useful: any operator can be rebuilt from its matrix elements A_{ij} and the outer products |i\rangle\langle j|. The matrix elements are numbers; the outer products are fixed, basis-dependent objects. Together they reconstruct the operator. This is the formal statement that "operators in quantum mechanics are matrices in a basis" — the completeness relation is the bookkeeping that makes the translation precise.
Every computation in Parts 6-10 of this track uses this move. Whenever you see a sum over basis states with |i\rangle\langle j| or |i\rangle\langle i| buried inside a derivation, the completeness relation is what put it there.
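To see the resolution of the identity at work, rebuild an operator from its matrix elements in a non-computational basis. A sketch using Pauli-X and the X-basis (illustrative choices):

```python
import numpy as np

A = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli-X
plus  = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
basis = [plus, minus]

# A = sum_ij A_ij |i><j| with matrix elements A_ij = <i|A|j>.
rebuilt = sum(np.vdot(basis[i], A @ basis[j]) *
              np.outer(basis[i], np.conj(basis[j]))
              for i in range(2) for j in range(2))
assert np.allclose(rebuilt, A)

# In its own eigenbasis X is diagonal: X|+> = |+>, X|-> = -|->.
assert np.isclose(np.vdot(plus, A @ plus), 1.0)
assert np.isclose(np.vdot(plus, A @ minus), 0.0)
```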
Inner products in infinite-dimensional spaces
The formulas in this chapter generalise to infinite-dimensional Hilbert spaces without any essential change — you only replace sums by integrals. The state of a particle on a line is a wavefunction \psi(x), a complex-valued function of position, and the ket |\psi\rangle is identified with the function itself. The inner product of two wavefunctions is

\langle\phi|\psi\rangle = \int_{-\infty}^{\infty} \phi^*(x)\,\psi(x)\,dx.
Orthonormality becomes

\langle x|x'\rangle = \delta(x - x'),
where \delta(x - x') is the Dirac delta function. Normalisation becomes

\langle\psi|\psi\rangle = \int_{-\infty}^{\infty} |\psi(x)|^2\,dx = 1,
which says the probability of finding the particle somewhere in space is 1. Completeness becomes

\int_{-\infty}^{\infty} |x\rangle\langle x|\,dx = I.
Every identity you have met in this chapter has an infinite-dimensional version. Dirac notation runs over them unchanged — which is why the formalism is so widely used. You will meet these continuous versions in Part 9 when the track covers position and momentum observables.
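Numerically, the continuous inner product is approximated by discretising the line and replacing the integral with a sum over a fine grid. A sketch with a Gaussian wavepacket (the grid bounds and spacing are arbitrary choices):

```python
import numpy as np

x = np.linspace(-10, 10, 20001)   # a fine grid standing in for the real line
dx = x[1] - x[0]

# A normalised Gaussian wavepacket: psi(x) = pi^(-1/4) exp(-x^2 / 2).
psi = np.pi ** -0.25 * np.exp(-x**2 / 2)

# <psi|psi> = integral of |psi(x)|^2 dx, approximated by a Riemann sum.
norm_sq = np.sum(np.abs(psi) ** 2) * dx
assert np.isclose(norm_sq, 1.0)
```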
An Indian-context aside — Bose's counting
The word statistics in quantum mechanics usually means "how do you count states." Photons — identical bosons — have a specific rule for how indistinguishable copies must be counted, and that rule was derived by Satyendra Nath Bose in 1924. The counting relies on the inner product: two photon states that look different in symbol form but are related by a particle swap give the same state (their overlap has magnitude 1), and the counting of distinct physical states must reflect that. Without the inner-product rule and the orthogonality condition on basis states, there is no way to state what "identical bosons" means precisely — and without Bose's counting, quantum mechanics cannot describe a laser, a Bose-Einstein condensate, or the photon statistics from a distant star. The machinery this chapter sets up is what makes Bose's 1924 argument work.
Where this leads next
- Outer Products and Projectors — chapter 6, the next article. The |\psi\rangle\langle\psi| construction as a rank-1 projector, the relation P^2 = P, and the completeness relation seen again through the projector lens.
- Operators as Matrices — chapter 7. How the matrix elements A_{ij} = \langle i|A|j\rangle capture an operator in a basis, adjoints, Hermitian operators, and unitary operators.
- Tensor Products the Quantum Way — chapter 8. How inner products of composite states factor as products of single-qubit inner products, and why this is the algebraic core of entanglement.
- The Bloch Sphere — chapter 14, the geometric picture of single-qubit states. Inner products between Bloch vectors translate directly to the overlap formulas of this chapter.
- Projective Measurement — chapter 12. How the Born rule P(i) = |\langle i|\psi\rangle|^2 combines with the completeness relation to give the full measurement postulate of quantum mechanics.
References
- Nielsen and Chuang, Quantum Computation and Quantum Information — Cambridge University Press, §2.1.4.
- John Preskill, Lecture Notes on Quantum Computation — theory.caltech.edu/~preskill/ph229, Chapter 2.
- Wikipedia, Inner product space.
- Wikipedia, Bra–ket notation — the overlap \langle\phi|\psi\rangle in situ.
- Qiskit Textbook, Linear algebra for quantum computing — inner-product worked examples with qubit states.
- Wikipedia, Satyendra Nath Bose — context for the Bose-counting aside above.