In short
Dirac notation is a compact linear-algebra shorthand used everywhere in quantum mechanics. A ket |\psi\rangle is a column vector of complex amplitudes. A bra \langle\psi| is its conjugate transpose — a row vector with every entry complex-conjugated. Put them together and \langle\phi|\psi\rangle is a row times a column, which is a single number — the inner product. The other way round, |\phi\rangle\langle\psi| is a column times a row, which is a matrix — the outer product. That is the whole idea; every extra rule in this chapter is a consequence.
Open any textbook on quantum mechanics, any paper on quantum computing, any Wikipedia article about a quantum algorithm, and within three pages you will run into a symbol that looks like nothing in your previous mathematics:

\langle\phi|\psi\rangle
Half a bracket, a vertical bar, another half-bracket, a Greek letter on each side. Somebody has clearly decided that ordinary linear-algebra notation was not quite right, and has replaced it with a pair of angle brackets and a pipe. If you grew up writing column vectors as (v_1, v_2)^T and dot products as v \cdot w or v^T w, your first reaction is annoyance. Why this?
The answer is that linear-algebra notation, as a high-school student learns it, is genuinely bad for the kind of manipulation physicists do every day. It is not wrong — it computes the right numbers — but it is cluttered, hard to scan, and hard to typeset. In the late 1930s Paul Dirac, the mathematician's physicist, noticed this and invented a notation that fixed it. Every physicist since has used his notation, and every quantum-computing researcher does too. The notation is sometimes called bra-ket because the two halves of an expression like \langle\phi|\psi\rangle are a "bra" and a "ket": the two halves of the word "bracket". It is sometimes called Dirac notation because he invented it. Either name, same thing.
You have already seen |\psi\rangle briefly — in chapter 1, where it was used without decoding; in chapter 14 on the Bloch sphere, where it was introduced with a one-line translation to a column vector. This chapter is where you become properly fluent. You will leave it able to read any bra-ket expression the way you read ordinary algebra — by eye, without stopping to translate.
The problem Dirac was solving
Before introducing the notation, set aside the physics and recall the linear algebra. A quantum state of a single qubit is a column vector of two complex numbers:

\mathbf{v} = \begin{pmatrix} \alpha \\ \beta \end{pmatrix}
A matrix like the Pauli X gate is the 2-by-2 array

X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.
To multiply a matrix by a column vector you write X\mathbf{v}. Fine. Nothing strange so far.
Now ask the question physicists ask constantly: what is the overlap between two states? In linear algebra this is the inner product. For real vectors it is the familiar dot product \mathbf{v} \cdot \mathbf{w}. For complex vectors it is something slightly different — you have to complex-conjugate one side. The usual way to write it, borrowed from matrix calculus, is

\mathbf{v}^\dagger \mathbf{w},
where the symbol \dagger (pronounced "dagger") means "take the transpose and complex-conjugate every entry." So if \mathbf{v} = (\alpha, \beta)^T then \mathbf{v}^\dagger = (\alpha^*, \beta^*) — a row with conjugated entries. Multiply that row by the column \mathbf{w} = (\gamma, \delta)^T and you get the single number \alpha^*\gamma + \beta^*\delta.
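In code, the dagger is just "conjugate, then transpose." A minimal NumPy sketch, with illustrative vectors that are not from the text:

```python
import numpy as np

# Illustrative complex column vectors (alpha, beta)^T and (gamma, delta)^T.
v = np.array([[1 + 1j], [2 - 1j]])
w = np.array([[3j], [1 + 2j]])

# The dagger: transpose and complex-conjugate every entry.
v_dag = v.conj().T          # a 1x2 row: (alpha*, beta*)

# Row times column is a 1x1 array -- the single number alpha*gamma + beta*delta.
inner = (v_dag @ w).item()
print(inner)                # (3+8j) for these particular entries
```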
Now ask the next question physicists ask constantly: what is the expectation value of an operator A in a state \mathbf{v}? The answer is

\mathbf{v}^\dagger A \mathbf{v}.
And the question after that: what is the projector onto the direction of a unit vector \mathbf{v}? The answer is

\mathbf{v}\mathbf{v}^\dagger,
which is not a number but a matrix (a column times a row).
Already the page is getting ugly. Every line has daggers floating around, the boldface for vectors is easy to miss, and a novice reader cannot tell at a glance whether \mathbf{v}^\dagger A \mathbf{v} is a number or a matrix or a vector. Now imagine a derivation that occupies half a page of this. You are squinting at superscript symbols trying to remember which object is a row and which is a column. Mistakes are easy; reading is slow.
Why the clutter matters: a physicist writes these expressions every day, often dozens of times in a single proof. A notation that is one character shorter per expression saves not seconds but hours across a career. More importantly, a notation that is easier to scan prevents errors. Dirac was not aiming for prettiness; he was aiming for readability under exhaustion.
The solution — kets and bras
Dirac's proposal was this. Keep the mathematics exactly the same. Change the symbols.
For a column vector of complex numbers, write

|\psi\rangle = \begin{pmatrix} \alpha \\ \beta \end{pmatrix}
and call it a ket, pronounced "ket psi." The vertical bar on the left and the right-facing angle bracket together form a symbol that visually opens to the right — the direction in which the column extends into the expression. For the two computational-basis states of a qubit, the convention is

|0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad |1\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}.
A general single-qubit state is written

|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1.
For the conjugate-transposed row vector, write

\langle\psi| = \bigl(|\psi\rangle\bigr)^\dagger
and call it a bra, pronounced "bra psi." The left-facing angle bracket and the vertical bar together open to the left — the direction in which the row extends. If |\psi\rangle = (\alpha, \beta)^T then

\langle\psi| = (\alpha^*, \; \beta^*)
— a row whose entries are the complex conjugates of the ket's entries.
That is the entire grammar. Every other bra-ket construction is built from these two pieces.
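As a quick sanity check of the grammar, here is one way to represent kets and bras in NumPy (a sketch; the amplitudes are illustrative):

```python
import numpy as np

# Computational-basis kets as 2x1 columns.
ket0 = np.array([[1], [0]], dtype=complex)
ket1 = np.array([[0], [1]], dtype=complex)

alpha, beta = 0.6, 0.8j                 # illustrative amplitudes, |alpha|^2 + |beta|^2 = 1
psi = alpha * ket0 + beta * ket1        # a ket: a 2x1 column

bra_psi = psi.conj().T                  # the bra: a 1x2 row with conjugated entries

print(psi.shape, bra_psi.shape)         # (2, 1) (1, 2)
print((bra_psi @ psi).item())           # bra times ket: a number, here 1
print((psi @ bra_psi).shape)            # ket times bra: a 2x2 matrix
```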
Why the brackets point the way they do
The name bra-ket is perhaps the only pun Dirac ever made in print — the two halves of an English "bracket," split along the vertical bar. Silly, but the notation is not silly at all.
Look at the ket |\psi\rangle. The vertical bar and the right-pointing angle \rangle together form a shape that leans to the right. When you put a ket into an expression, things that multiply it from the left act on it: A|\psi\rangle, \langle\phi|\psi\rangle. The opening on the right tells you this is a column, waiting to be operated on from the left or combined with another term.
Look at the bra \langle\psi|. Its left-pointing angle \langle and vertical bar lean the other way. When you put a bra into an expression, things that multiply it from the right act on it: \langle\psi|A, \langle\psi|\phi\rangle. The opening on the left tells you this is a row, waiting to be combined with something on its right.
Every expression you build respects this. A bra sits on the left, a ket on the right, operators sit in between. The notation reads left-to-right like English prose, and the shape of each symbol tells you its role.
Why this matters in practice: when you write a derivation in bra-ket notation, you never have to ask "is this a row or a column?" — the brackets tell you. In matrix notation, you remember by convention that \mathbf{v} is a column and \mathbf{v}^\dagger is a row. The Dirac brackets make the convention visual and automatic.
Putting them together — four shapes, four meanings
The magic of bra-ket notation is that the rules of matrix multiplication, when you apply them to kets and bras, produce exactly four kinds of object. Each kind has a natural Dirac shape.
1. Bra times ket — a number
Put a bra next to a ket:

\langle\phi|\psi\rangle
This is a row (of dimension 1 \times n) multiplied by a column (of dimension n \times 1). The result is a 1 \times 1 matrix — a single number. For a two-dimensional Hilbert space, with |\phi\rangle = (\gamma, \delta)^T and |\psi\rangle = (\alpha, \beta)^T,

\langle\phi|\psi\rangle = \gamma^*\alpha + \delta^*\beta.
This is the inner product of the two states. It is the central quantity in all of quantum mechanics: overlaps, amplitudes, probabilities, expectation values — everything eventually reduces to an inner product. Chapter 5 is entirely devoted to inner products, so this chapter only names the object and moves on.
Read the expression aloud: "bra phi ket psi." The two halves of the word bracket latch together, and the result is one scalar number.
2. Ket times bra — a matrix
Put a ket next to a bra the other way:

|\psi\rangle\langle\phi|
This is a column (n \times 1) multiplied by a row (1 \times n). The result is an n \times n matrix. For a two-dimensional space, with |\psi\rangle = (\alpha, \beta)^T and |\phi\rangle = (\gamma, \delta)^T,

|\psi\rangle\langle\phi| = \begin{pmatrix} \alpha \\ \beta \end{pmatrix} (\gamma^*, \; \delta^*) = \begin{pmatrix} \alpha\gamma^* & \alpha\delta^* \\ \beta\gamma^* & \beta\delta^* \end{pmatrix}.
This is the outer product. Every entry of the matrix is a simple product of one ket coefficient and one (conjugated) bra coefficient. Chapter 6 builds projectors and density matrices out of outer products.
3. Operator between a bra and a ket — a number
Put an operator A between a bra and a ket:

\langle\phi|A|\psi\rangle
Parse this in two steps. A|\psi\rangle is an operator acting on a column — the result is another column. Then the bra \langle\phi| on the left is a row times that column, which is a number. So \langle\phi|A|\psi\rangle is a scalar, called a matrix element of A. When |\phi\rangle = |\psi\rangle, the quantity \langle\psi|A|\psi\rangle is the expectation value of A in the state |\psi\rangle — the average outcome you would get if you measured A on an ensemble of copies of |\psi\rangle.
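The same parse works verbatim in NumPy: apply the operator to the column, then contract with the row. A sketch with the Pauli-X gate and basis states:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli-X

ket0 = np.array([[1], [0]], dtype=complex)
ket1 = np.array([[0], [1]], dtype=complex)

# Matrix element <0|X|1>: X acts on the ket, then the bra contracts.
elem = (ket0.conj().T @ X @ ket1).item()
print(elem)                                      # <0|X|1> = <0|0> = 1

# Expectation value <+|X|+> for |+> = (|0> + |1>)/sqrt(2).
plus = (ket0 + ket1) / np.sqrt(2)
print((plus.conj().T @ X @ plus).item())         # 1
```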
4. Sum of outer products — a matrix
A sum of outer products stays a matrix:

A = \sum_{ij} A_{ij}\, |i\rangle\langle j|.
This is how operators get built out of their matrix elements. You will use this constantly starting in chapter 7.
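For instance, the Pauli-X gate is exactly the two-term sum |0\rangle\langle 1| + |1\rangle\langle 0|, which you can assemble from outer products in NumPy (a sketch):

```python
import numpy as np

ket0 = np.array([[1], [0]], dtype=complex)
ket1 = np.array([[0], [1]], dtype=complex)

# Each outer product (column @ row) is a 2x2 matrix; their sum is the operator.
X_built = ket0 @ ket1.conj().T + ket1 @ ket0.conj().T   # |0><1| + |1><0|

print(X_built)    # the Pauli-X matrix [[0, 1], [1, 0]]
```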
Why the notation works — readability, composability, generality
Three pedagogical arguments for why Dirac notation genuinely is better, not just different.
Readability. In a derivation, the most common bug is a misremembered row-or-column. Is v a column or a row here? In matrix notation you have to track which side of the dagger each symbol is on; in bra-ket notation the bracket direction does it for you. Does this expression evaluate to a number, a vector, or a matrix? In matrix notation you multiply the dimensions in your head; in bra-ket notation you just look at the first and last brackets. A bra on the left and a ket on the right — scalar. A ket on the left and a bra on the right — matrix. Kets at both ends — a ket; bras at both ends — a bra. The grammar is visible.
Composability. Quantum mechanics is full of expressions like

\langle v|A|w\rangle\,\langle w|B|u\rangle.
In matrix notation this is \mathbf{v}^\dagger A \mathbf{w} \cdot \mathbf{w}^\dagger B \mathbf{u} — four boldface vectors, two daggers, and a lot of tracking to do. In bra-ket it is one visual sentence. Better still: in an expression like \langle\phi|\psi\rangle\langle\psi|, the middle piece |\psi\rangle\langle\psi| is a projector (chapter 6), and the whole thing reduces to \langle\psi| scaled by the scalar \langle\phi|\psi\rangle. The notation itself suggests the simplification.
Generality. Nothing in the definition of a ket requires it to have finitely many components. A ket |\psi\rangle can be a column with infinitely many entries, or a continuous function (the wavefunction of a particle on a line). A bra \langle\phi| can be a linear functional from an infinite-dimensional space to the complex numbers. The notation looks identical in all cases: \langle\phi|\psi\rangle is still an inner product. This is why Dirac's notation became the universal language of quantum mechanics — it absorbs the infinite-dimensional generalisations without change. Matrix notation, by contrast, struggles when you leave finite matrices behind.
The economy — a side-by-side comparison
For a quick scorecard of how concise Dirac notation really is, here are ten common expressions in both forms. Read them side by side until the correspondence becomes automatic.
| Quantity | Matrix notation | Dirac notation |
|---|---|---|
| A quantum state | \mathbf{v} = (\alpha, \beta)^T | |\psi\rangle = \alpha|0\rangle + \beta|1\rangle |
| Its conjugate transpose | \mathbf{v}^\dagger = (\alpha^*, \beta^*) | \langle\psi| = \alpha^*\langle 0| + \beta^*\langle 1| |
| Inner product | \mathbf{v}^\dagger \mathbf{w} | \langle v|w\rangle |
| Outer product | \mathbf{v}\mathbf{w}^\dagger | |v\rangle\langle w| |
| Operator acting on state | A\mathbf{v} | A|\psi\rangle |
| Matrix element | \mathbf{v}^\dagger A \mathbf{w} | \langle v|A|w\rangle |
| Expectation value of A | \mathbf{v}^\dagger A \mathbf{v} | \langle\psi|A|\psi\rangle |
| State in a basis | \mathbf{v} = \sum_i v_i \mathbf{e}_i | |\psi\rangle = \sum_i \alpha_i |i\rangle |
| Basis projector | \mathbf{e}_i \mathbf{e}_i^\dagger | |i\rangle\langle i| |
| Completeness relation | \sum_i \mathbf{e}_i \mathbf{e}_i^\dagger = I | \sum_i |i\rangle\langle i| = I |
Count symbols in the last row. Matrix: nine, plus a convention that \mathbf{e}_i means "the i-th standard basis vector." Dirac: seven, and |i\rangle is self-explanatory.
This saving compounds. A quantum-mechanics derivation that runs five pages in matrix notation runs three in Dirac. A Hamiltonian expressed as \sum_{ij} H_{ij} |i\rangle\langle j| is instantly recognisable; the same object as a matrix requires you to define the basis, write out the entries, and remember the ordering convention. The information is the same; the medium is better.
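The completeness relation in the last table row is easy to verify numerically. A minimal NumPy sketch, in an arbitrary dimension:

```python
import numpy as np

dim = 4
basis = np.eye(dim, dtype=complex)   # column i of the identity is the ket |i>

# Sum the outer products |i><i| over the whole basis.
total = sum(np.outer(basis[:, i], basis[:, i].conj()) for i in range(dim))

print(np.allclose(total, np.eye(dim)))   # True: sum_i |i><i| = I
```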
When to use which
The question is never "Dirac or matrices — which is correct?" They are the same objects, in different notation. Every ket is a column vector; every bra is a row vector; every Dirac expression expands to a matrix expression and vice versa. The translation is mechanical.
The question is which notation makes the current task easier. Rules of thumb:
Use Dirac (bra-ket) when:
- You are doing a conceptual derivation — manipulating states, operators, and their overlaps symbolically.
- You want to see the structure of an expression at a glance: "this is a scalar, this is a projector, this is a matrix element."
- You are working with named states like |0\rangle, |1\rangle, |+\rangle, |-\rangle, or the |n\rangle basis of a harmonic oscillator. The named kets carry more meaning than component-indexed vectors do.
- You are working in infinite-dimensional spaces (position, momentum), where matrix notation breaks down.
Use matrices when:
- You need numerical values — say, you are computing an expectation value with a specific state on a specific operator.
- You are implementing something in code — NumPy, QuTiP, Qiskit. The numerical library needs concrete arrays, not symbolic bra-kets.
- You are asked to prove a specific matrix identity (e.g. "verify that H \cdot H = I"). Writing out the 2-by-2 entries is often shorter than manipulating bras and kets.
- You are teaching someone who has never seen Dirac notation before. The matrix version is usually the starting point even if the final form is bra-ket.
Translate fluently between them. The goal of this chapter is not that you memorise one notation and translate when absolutely forced. The goal is that you can read either one without conversion — the way a bilingual speaker reads two languages without mentally translating between them. You should be able to see \langle 0 | H | 0 \rangle and also (1, 0) \cdot H \cdot (1, 0)^T as the same object. By the time you finish Part 2 of this track, that fluency will be automatic.
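That bilingual reading can be checked in a few lines. A sketch with the Hadamard gate, computing \langle 0|H|0\rangle once with column-and-row arrays shaped like bras and kets, and once with plain 1-D vectors:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate

# Dirac reading: bra <0|, operator H, ket |0>.
ket0 = np.array([[1], [0]], dtype=complex)
dirac = (ket0.conj().T @ H @ ket0).item()

# Matrix reading: row (1, 0), matrix H, column (1, 0)^T.
matrix = np.array([1, 0]).conj() @ H @ np.array([1, 0])

print(dirac, matrix)   # both 1/sqrt(2): the same object in two notations
```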
Worked examples
Example 1: Write a state as a column, and check normalisation
Given

|\psi\rangle = \frac{1}{\sqrt{2}}\,|0\rangle + \frac{1}{\sqrt{2}}\,|1\rangle,
write it as a column vector, find the bra \langle\psi| as a row vector, and verify that \langle\psi|\psi\rangle = 1 by explicit matrix multiplication.
Step 1. Write the ket as a column.
Use |0\rangle = (1, 0)^T and |1\rangle = (0, 1)^T and add the two scaled versions:

|\psi\rangle = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \frac{1}{\sqrt{2}} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{pmatrix}.
Why: the definition of a ket as a column vector means you can add component by component. The two basis kets contribute to different components, so the result is the pair of coefficients stacked vertically.
Step 2. Write the bra as the conjugate transpose.
The coefficients 1/\sqrt{2} are real, so their conjugates are themselves. The conjugate transpose turns the column into a row:

\langle\psi| = (1/\sqrt{2}, \; 1/\sqrt{2}).
Why: the bra rule is "conjugate every entry and lay the column on its side." Real numbers are their own conjugates, so here only the laying-on-its-side happens.
Step 3. Compute \langle\psi|\psi\rangle by matrix multiplication.
A row times a column of length two is the sum of pairwise products:

\langle\psi|\psi\rangle = \frac{1}{\sqrt{2}} \cdot \frac{1}{\sqrt{2}} + \frac{1}{\sqrt{2}} \cdot \frac{1}{\sqrt{2}} = \frac{1}{2} + \frac{1}{2} = 1.
Why: this is the definition of the inner product. Each component of the row multiplies the corresponding component of the column, and you add the products. One half plus one half is one, which is exactly what a normalised state must satisfy.
Result. |\psi\rangle is the column (1/\sqrt{2}, 1/\sqrt{2})^T, its bra is the row (1/\sqrt{2}, 1/\sqrt{2}), and \langle\psi|\psi\rangle = 1.
What this shows. The state |+\rangle = (|0\rangle + |1\rangle)/\sqrt{2} from the Bloch-sphere chapter is, in columns and rows, just the pair of coefficients (1/\sqrt{2}, 1/\sqrt{2}). Nothing is hidden. The Dirac notation is the matrix notation, compressed.
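Example 1, replayed in NumPy (a sketch of the same three steps):

```python
import numpy as np

# Step 1: the ket as a column, |psi> = (|0> + |1>)/sqrt(2).
psi = np.array([[1], [1]], dtype=complex) / np.sqrt(2)

# Step 2: the bra as the conjugate transpose (the entries are real, so only the flip happens).
bra = psi.conj().T

# Step 3: <psi|psi> by matrix multiplication -- a row times a column.
norm = (bra @ psi).item()
print(norm)    # 1/2 + 1/2 = 1: the state is normalised
```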
Example 2: Expectation value of the Pauli-X gate
Let |\phi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle be a general normalised single-qubit state. Compute \langle\phi | X | \phi\rangle, where X is the Pauli-X gate

X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},
in two ways — once in matrix notation, and once in bra-ket manipulations.
Approach A: matrix notation.
Step 1. Write the state as a column and its conjugate transpose as a row.

|\phi\rangle = \begin{pmatrix} \alpha \\ \beta \end{pmatrix}, \qquad \langle\phi| = (\alpha^*, \; \beta^*).
Step 2. Compute X|\phi\rangle.

X|\phi\rangle = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \begin{pmatrix} \beta \\ \alpha \end{pmatrix}.
Why: multiplying the 2 \times 2 matrix into the column takes the dot product of each row of X with the column. Row 1 is (0, 1), which kills \alpha and keeps \beta. Row 2 is (1, 0), which keeps \alpha and kills \beta. The net effect is to swap the two components — which is precisely why X is called a bit-flip.
Step 3. Left-multiply by the bra.

\langle\phi|X|\phi\rangle = (\alpha^*, \; \beta^*) \begin{pmatrix} \beta \\ \alpha \end{pmatrix} = \alpha^*\beta + \beta^*\alpha.
Approach B: bra-ket manipulations.
Step 1. Use the definition of X on basis states: X|0\rangle = |1\rangle and X|1\rangle = |0\rangle. Apply X to the ket by linearity:

X|\phi\rangle = \alpha\,X|0\rangle + \beta\,X|1\rangle = \alpha\,|1\rangle + \beta\,|0\rangle.
Why: X is a linear operator, so it distributes over the sum and pulls scalars out. The action on each basis ket flips the bit — this is the whole content of the X gate.
Step 2. Contract with the bra \langle\phi| = \alpha^*\langle 0| + \beta^*\langle 1|:

\langle\phi|X|\phi\rangle = \bigl(\alpha^*\langle 0| + \beta^*\langle 1|\bigr)\bigl(\alpha\,|1\rangle + \beta\,|0\rangle\bigr).
Step 3. Expand the product into four terms and use the orthonormality of the basis (\langle 0|0\rangle = \langle 1|1\rangle = 1, \langle 0|1\rangle = \langle 1|0\rangle = 0):

\langle\phi|X|\phi\rangle = \alpha^*\alpha\,\langle 0|1\rangle + \alpha^*\beta\,\langle 0|0\rangle + \beta^*\alpha\,\langle 1|1\rangle + \beta^*\beta\,\langle 1|0\rangle = \alpha^*\beta + \beta^*\alpha.
Why: the orthonormality of |0\rangle and |1\rangle makes two of the four inner products vanish and sets the other two to one. This is the single rule that makes bra-ket manipulations shorter than matrix manipulations — most cross-terms vanish on their own.
Result. Both approaches give \langle\phi|X|\phi\rangle = \alpha^*\beta + \beta^*\alpha = 2\,\mathrm{Re}(\alpha^*\beta).
What this shows. Dirac notation and matrix notation are not rivals. They are two handwriting styles for the same mathematics. Example 2 did the same job twice — once with numbers, once with symbols — and got the same answer. When you are computing on a concrete state, matrices are direct. When you are reasoning about what structure the answer has ("expectation values of X are real because \alpha^*\beta + \beta^*\alpha is 2\,\mathrm{Re}(\alpha^*\beta)"), bra-ket shows the structure faster.
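The two approaches of Example 2 can be cross-checked numerically. A sketch with one illustrative normalised state (the amplitudes are arbitrary, not from the text):

```python
import numpy as np

alpha, beta = 0.6, 0.48 + 0.64j        # illustrative; |alpha|^2 + |beta|^2 = 1
phi = np.array([[alpha], [beta]])
X = np.array([[0, 1], [1, 0]], dtype=complex)

# Approach A: straight matrix multiplication <phi|X|phi>.
a = (phi.conj().T @ X @ phi).item()

# Approach B: the closed form 2 Re(alpha* beta) from the bra-ket derivation.
b = 2 * (np.conj(alpha) * beta).real

print(a, b)    # both 0.576 for these amplitudes -- and always real
```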
Common confusions
- "Bra-ket notation is just a dressed-up version of dot products." Partly true, mostly wrong. It is a dressed-up version of the conjugate-transposed dot product — the complex-vector inner product, not the real one. Writing v^T w (without the dagger) in a complex space would be wrong; writing \langle v | w \rangle is always right, because the conjugation is built into the bra. The notation isn't syntactic sugar; it enforces a rule.
- "The bra \langle\psi| is the inverse of the ket |\psi\rangle." No. The bra is the conjugate transpose (Hermitian conjugate), not the inverse. A ket does not even have an inverse in general — it is a vector, not a square matrix, and vectors do not have multiplicative inverses. Going from ket to bra is a reflection: columns to rows, and every complex number to its conjugate. Applying the operation twice gives you back the original ket — reflection is its own inverse as an operation.
- "Kets and bras live in the same space, so I can add them." No. Kets live in a Hilbert space \mathcal{H}; bras live in its dual space \mathcal{H}^*. A bra is a linear function that eats a ket and produces a complex number. You can add two kets to get a ket, and two bras to get a bra, but you cannot add a bra and a ket — they are different types of object, like adding a function and its input. The Riesz representation theorem makes the correspondence between \mathcal{H} and \mathcal{H}^* exact, but they are still formally different spaces. You will see this formalised in the going-deeper section.
- "I can compute |\psi\rangle\langle\phi|\psi\rangle by multiplying in any order." Careful. The expression |\psi\rangle\langle\phi|\psi\rangle has two readings: (i) the outer product |\psi\rangle\langle\phi| applied to the ket |\psi\rangle, or (ii) the ket |\psi\rangle scaled by the scalar \langle\phi|\psi\rangle. Both readings give the same answer — bra-ket multiplication is associative — but the second reading is usually easier. Learn to scan for the scalar in the middle and pull it to the front: |\psi\rangle\langle\phi|\psi\rangle = \langle\phi|\psi\rangle\,|\psi\rangle.
- "\langle\psi| and |\psi\rangle are two independent quantum states." No. They are two notational facets of the same quantum state. If you know one, you know the other — you just conjugate-transpose. Writing a state "as a bra" versus "as a ket" is a choice of how to use it in an expression, not a change of physical state.
- "Dirac notation is physics-only; in computer science you should use matrices." The two fields have converged. Every quantum-computing textbook uses bra-ket notation, every major quantum-software library (Qiskit, Cirq, QuTiP) supports it, and every research paper in quantum algorithms writes circuits in bra-ket form. Treat bra-ket fluency as a core CS skill for this track, not as a physics import.
Going deeper
If you have understood that a ket is a column, a bra is a row, and a bra-ket is a number while a ket-bra is a matrix — you have the working knowledge you need for the next thirty chapters. What follows is optional context: where Dirac's notation came from, why mathematicians consider it rigorous, and how it generalises to infinite-dimensional systems that matter for quantum field theory and continuous-variable quantum computing.
Dirac's original motivation and the rest of 1939
Paul Dirac introduced bra-ket notation formally in 1939, in a short paper titled A new notation for quantum mechanics in the Mathematical Proceedings of the Cambridge Philosophical Society. By then the mathematical formulation of quantum mechanics had been settled by John von Neumann's 1932 book Mathematische Grundlagen der Quantenmechanik, which recast the theory in terms of operators on a Hilbert space and became the standard mathematical reference for the next four decades.
Von Neumann's notation was essentially linear-algebraic — states as vectors, observables as self-adjoint operators, probabilities as spectral expansions. It was rigorous and precise. Dirac's objection was that it was unwieldy to compute with. A physicist doing daily calculations did not need von Neumann's full abstract setup; they needed a shorthand that kept track of rows, columns, and conjugates automatically. That shorthand is bra-ket.
The physics community adopted Dirac's notation almost instantly. Schrödinger used it. Pauli used it. Every textbook written after 1945 uses it. When Sakurai wrote Modern Quantum Mechanics in 1985, he made the entire first chapter essentially a tour of Dirac notation, because that, rather than the Schrödinger wave equation or Heisenberg matrix mechanics, is the modern starting point.
Kets as elements of a Hilbert space
Formally, a quantum state is a unit vector in a Hilbert space \mathcal{H} — a complex vector space equipped with an inner product \langle\cdot,\cdot\rangle satisfying
- Linearity in the second argument: \langle u, \alpha v + \beta w\rangle = \alpha\langle u, v\rangle + \beta\langle u, w\rangle.
- Conjugate symmetry: \langle u, v\rangle = \overline{\langle v, u\rangle}.
- Positive-definiteness: \langle v, v\rangle \geq 0, with equality only when v = 0.
plus completeness (every Cauchy sequence converges). A ket |\psi\rangle is an element of this space. For a single qubit the space is \mathbb{C}^2; for n qubits it is \mathbb{C}^{2^n}; for a particle on a line it is L^2(\mathbb{R}), the space of square-integrable complex functions.
Quantum mechanics adds two further structural requirements: the space is separable (it has a countable orthonormal basis — usually true for real systems) and physical states are unit vectors (\langle\psi|\psi\rangle = 1). Both conditions resurface in the four postulates you will see in chapter 10.
Bras as linear functionals — the dual space
A bra is not a vector in \mathcal{H}. A bra is a linear functional — a linear map from \mathcal{H} to \mathbb{C}. Given a ket |\phi\rangle in the Hilbert space, the corresponding bra \langle\phi| is the function that eats any ket |\psi\rangle and returns the complex number \langle\phi|\psi\rangle.
The space of all such linear functionals is called the dual space of \mathcal{H}, written \mathcal{H}^*. For finite-dimensional spaces and for every Hilbert space that shows up in quantum computing, the Riesz representation theorem says that \mathcal{H} and \mathcal{H}^* are in one-to-one correspondence: every continuous linear functional on \mathcal{H} is given by inner product with some vector. That is what makes the notational identification \langle\phi| \leftrightarrow |\phi\rangle precise.
For infinite-dimensional Hilbert spaces, the correspondence still holds, but it is subtler: you need continuity of the functional, and you need to be careful with generalised bras (position eigenbras \langle x|, momentum eigenbras \langle p|) that are not elements of the Hilbert space but of a larger rigged space. This is the technical content of the distinction between "states" and "generalised states" that will matter in Part 9 on continuous observables.
Why the notation generalises to infinite dimensions
The defining virtue of Dirac notation is that it does not care about dimension. A ket |\psi\rangle is a vector in some Hilbert space — whether that space is \mathbb{C}^2, \mathbb{C}^{2^{100}}, or L^2(\mathbb{R}), the symbols look the same. The inner product \langle\phi|\psi\rangle is a complex number whether it is a sum of two terms, a sum of 2^{100} terms, or an integral:

\langle\phi|\psi\rangle = \int_{-\infty}^{\infty} \phi^*(x)\,\psi(x)\,dx.
This generality is not a convenience — it is the reason Dirac notation became universal. Quantum field theory, continuous-variable quantum computing, and the treatment of position and momentum all live in infinite-dimensional spaces, and Dirac notation runs over them without change. Matrix notation, in contrast, requires substantial adaptation — matrix elements become kernels, sums become integrals, and finite-dimensional tricks break down.
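The integral form can be treated numerically exactly like the finite sums: discretise, conjugate one side, multiply, and sum. A sketch with two illustrative Gaussian wavefunctions sampled on a grid:

```python
import numpy as np

# Two square-integrable "wavefunctions" sampled on a grid (illustrative choices).
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
phi = np.exp(-x**2 / 2) * np.exp(1j * x)   # Gaussian with a position-dependent phase
psi = np.exp(-x**2 / 2)                    # plain Gaussian

# <phi|psi> = integral of phi*(x) psi(x) dx, approximated as a Riemann sum.
inner = np.sum(np.conj(phi) * psi) * dx
print(inner)   # close to sqrt(pi) * exp(-1/4), the exact value of this integral
```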
The abuse of notation that physicists live with
One common "abuse" that mathematicians notice: physicists write \langle x|\psi\rangle = \psi(x) as if \langle x| were a bra in the Hilbert-space sense. It is not — position eigenstates |x\rangle are not normalisable elements of L^2(\mathbb{R}). The precise statement lives in the rigged Hilbert space (the Gelfand triple), where |x\rangle is a distribution rather than a vector. For practical computation, the abuse is harmless and is universal. Every physics textbook treats \langle x|\psi\rangle as the wavefunction \psi(x), and everything checks out as long as you remember which operations are legitimate and which are not. You will meet this again in Part 9.
A note on Indian context
When this track introduces Dirac notation, the obvious Indian historical hook is Satyendra Nath Bose, whose 1924 paper on the statistics of light quanta (what we now call Bose–Einstein statistics) came fifteen years before Dirac gave quantum mechanics its definitive notation. Einstein translated Bose's paper into German and submitted it to Zeitschrift für Physik; the ideas in it are why every identical-particle system in nature is classified as either a boson (named after Bose) or a fermion. Dirac's notation became the standard way to write down states of both. Bose's statistics gave quantum theory some of its most important states to describe; Dirac's notation made those states easy to write down and read. Both men are part of why you, today, can read \langle\phi|\psi\rangle and know exactly what it means.
Where this leads next
- Inner Products in Bra-Ket Form — chapter 5, the next article. How \langle\phi|\psi\rangle measures overlap, why orthogonal states have zero overlap, and why every state must satisfy \langle\psi|\psi\rangle = 1.
- Outer Products and Projectors — chapter 6. The |\psi\rangle\langle\psi| construction, projectors onto subspaces, and the completeness relation as a sum of projectors.
- Operators as Matrices — chapter 7. How a linear operator acts on a ket, the dagger and the Hermitian conjugate, and the three operator families (Hermitian, unitary, projectors) that show up everywhere in quantum computing.
- Tensor Products the Quantum Way — chapter 8. How bra-ket notation handles composite systems: |0\rangle \otimes |1\rangle = |01\rangle, the Kronecker product, and why two qubits live in a four-dimensional space.
- Bloch Sphere — chapter 14, if you want to jump ahead to the geometric picture of single-qubit states. Every state on the Bloch sphere is one ket; the chapter you have just read is the algebra behind that geometry.
References
- P. A. M. Dirac, A new notation for quantum mechanics (1939) — DOI:10.1017/S0305004100021162. The original paper that introduced the bra-ket.
- Wikipedia, Bra–ket notation.
- John Preskill, Lecture Notes on Quantum Computation — theory.caltech.edu/~preskill/ph229, Chapter 2.
- Nielsen and Chuang, Quantum Computation and Quantum Information — Cambridge University Press, §2.1.
- Qiskit Textbook, Linear algebra for quantum computing — notation and worked matrix examples.
- Wikipedia, Satyendra Nath Bose — background for the 1924 paper discussed in the Indian-context note.