In short

Dirac notation is a compact linear-algebra shorthand used everywhere in quantum mechanics. A ket |\psi\rangle is a column vector of complex amplitudes. A bra \langle\psi| is its conjugate transpose — a row vector with every entry complex-conjugated. Put them together and \langle\phi|\psi\rangle is a row times a column, which is a single number — the inner product. The other way round, |\phi\rangle\langle\psi| is a column times a row, which is a matrix — the outer product. That is the whole idea; every extra rule in this chapter is a consequence.

Open any textbook on quantum mechanics, any paper on quantum computing, any Wikipedia article about a quantum algorithm, and within three pages you will run into a symbol that looks like nothing in your previous mathematics:

\langle \phi \mid \psi \rangle.

Half a bracket, a vertical bar, another half-bracket, a Greek letter on each side. Somebody has clearly decided that ordinary linear-algebra notation was not quite right, and has replaced it with a pair of angle brackets and a pipe. If you grew up writing column vectors as (v_1, v_2)^T and dot products as v \cdot w or v^T w, your first reaction is annoyance. Why this?

The answer is that linear-algebra notation, as a high-school student learns it, is genuinely bad for the kind of manipulation physicists do every day. It is not wrong — it computes the right numbers — but it is cluttered, hard to scan, and hard to typeset. In the late 1930s Paul Dirac, the mathematician's physicist, noticed this and invented a notation that fixed it. Every physicist since has used his notation, and every quantum-computing researcher does too. The notation is sometimes called bra-ket because of how the angle brackets look. It is sometimes called Dirac notation because he invented it. Either name, same thing.

You have already seen |\psi\rangle briefly — in chapter 1, where it was used without decoding; in chapter 14 on the Bloch sphere, where it was introduced with a one-line translation to a column vector. This chapter is where you become properly fluent. You will leave it able to read any bra-ket expression the way you read ordinary algebra — by eye, without stopping to translate.

The problem Dirac was solving

Before introducing the notation, set aside the physics and recall the linear algebra. A quantum state of a single qubit is a column vector of two complex numbers:

\mathbf{v} = \begin{pmatrix} \alpha \\ \beta \end{pmatrix}, \qquad \alpha, \beta \in \mathbb{C}.

A matrix like the Pauli X gate is the 2-by-2 array

X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.

To multiply a matrix by a column vector you write X\mathbf{v}. Fine. Nothing strange so far.

Now ask the question physicists ask constantly: what is the overlap between two states? In linear algebra this is the inner product. For real vectors it is the familiar dot product \mathbf{v} \cdot \mathbf{w}. For complex vectors it is something slightly different — you have to complex-conjugate one side. The usual way to write it, borrowed from matrix calculus, is

\mathbf{v}^\dagger \mathbf{w}

where the symbol \dagger (pronounced "dagger") means "take the transpose and complex-conjugate every entry." So if \mathbf{v} = (\alpha, \beta)^T then \mathbf{v}^\dagger = (\alpha^*, \beta^*) — a row with conjugated entries. Multiply that row by the column \mathbf{w} = (\gamma, \delta)^T and you get the single number \alpha^*\gamma + \beta^*\delta.
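The dagger-then-multiply recipe is easy to check numerically. A minimal NumPy sketch (the vectors `v` and `w` are made-up illustrations, not taken from the text):

```python
import numpy as np

# Two complex column vectors; any values work, these are arbitrary.
v = np.array([1 + 1j, 2 - 1j])     # plays the role of v = (alpha, beta)^T
w = np.array([0.5j, 1.0])          # plays the role of w = (gamma, delta)^T

# Dagger: transpose and complex-conjugate every entry. For a 1-D array
# the transpose is a no-op, so conjugation is the visible part.
v_dagger = v.conj()

# v^dagger w = alpha^* gamma + beta^* delta -- a single complex number.
inner = v_dagger @ w

# NumPy's vdot conjugates its first argument, so it computes the same thing.
assert np.isclose(inner, np.vdot(v, w))
print(inner)    # (2.5+1.5j)
```

Note that a plain `np.dot(v, w)` without the conjugation would give a different (and, for quantum states, wrong) answer: the conjugate is part of the complex inner product, not an optional decoration.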

Now ask the next question physicists ask constantly: what is the expectation value of an operator A in a state \mathbf{v}? The answer is

\mathbf{v}^\dagger A \mathbf{v}.

And the question after that: what is the projector onto the direction of a unit vector \mathbf{v}? The answer is

\mathbf{v} \mathbf{v}^\dagger,

which is not a number but a matrix (a column times a row).

Already the page is getting ugly. Every line has daggers floating around, the boldface for vectors is easy to miss, and a novice reader cannot tell at a glance whether \mathbf{v}^\dagger A \mathbf{v} is a number or a matrix or a vector. Now imagine a derivation that occupies half a page of this. You are squinting at superscript symbols trying to remember which object is a row and which is a column. Mistakes are easy; reading is slow.

[Figure: Matrix notation versus Dirac notation. Two columns side by side. The left, titled "Matrix notation", shows \mathbf{v}^\dagger\mathbf{w} (inner product), \mathbf{v}\mathbf{w}^\dagger (outer product), and \mathbf{v}^\dagger A \mathbf{v} (expectation value): daggers and superscripts, hard to scan. The right, titled "Dirac notation", shows \langle v|w\rangle, |v\rangle\langle w|, and \langle v|A|v\rangle: direction made visible by the brackets.]
The same three quantities in two notations. On the left, matrix notation leans on superscript daggers and typographic conventions you have to remember. On the right, Dirac's brackets make the direction of every object visible: a ket opens to the right, a bra opens to the left, and the shape of the expression tells you whether you end with a number or a matrix.

Why the clutter matters: a physicist writes these expressions every day, often dozens of times in a single proof. A notation that is one character shorter per expression saves not seconds but hours across a career. More importantly, a notation that is easier to scan prevents errors. Dirac was not aiming for prettiness; he was aiming for readability under exhaustion.

The solution — kets and bras

Dirac's proposal was this. Keep the mathematics exactly the same. Change the symbols.

For a column vector of complex numbers, write

|\psi\rangle

and call it a ket, pronounced "ket psi." The vertical bar on the left and the right-facing angle bracket together form a symbol that visually opens to the right — the direction in which the column extends into the expression. For the two computational-basis states of a qubit, the convention is

|0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad |1\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}.

A general single-qubit state is written

|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle = \begin{pmatrix} \alpha \\ \beta \end{pmatrix}.

For the conjugate-transposed row vector, write

\langle\psi|

and call it a bra, pronounced "bra psi." The left-facing angle bracket and the vertical bar together open to the left — the direction in which the row extends. If |\psi\rangle = (\alpha, \beta)^T then

\langle\psi| = (\alpha^*, \beta^*)

— a row whose entries are the complex conjugates of the ket's entries.

That is the entire grammar. Every other bra-ket construction is built from these two pieces.

[Figure: Dirac notation cheat sheet. Four labeled boxes in a 2-by-2 grid. Ket — column: |\psi\rangle = (\alpha, \beta)^T, a column of complex amplitudes. Bra — row: \langle\psi| = (\alpha^*, \beta^*), a row with conjugated entries. Inner product — a number: \langle\phi|\psi\rangle, a single complex scalar (1×1). Outer product — a matrix: |\phi\rangle\langle\psi|, a 2×2 matrix.]
The four shapes you will see on every page of quantum mechanics from now on. A ket is a column. A bra is a row. Bra times ket is a number. Ket times bra is a matrix. Once you recognise these four by eye, Dirac notation stops being punctuation and starts being grammar.

Why the brackets point the way they do

The name bra-ket is the only pun Dirac ever made in print — the two halves of an English "bracket," split along the vertical bar. Silly, but the notation is not silly at all.

Look at the ket |\psi\rangle. The vertical bar and the right-pointing angle \rangle together form a shape that leans to the right. When you put a ket into an expression, things that multiply it from the left act on it: A|\psi\rangle, \langle\phi|\psi\rangle. The opening on the right tells you this is a column, waiting to be operated on from the left or combined with another term.

Look at the bra \langle\psi|. Its left-pointing angle \langle and vertical bar lean the other way. When you put a bra into an expression, things that multiply it from the right act on it: \langle\psi|A, \langle\psi|\phi\rangle. The opening on the left tells you this is a row, waiting to be combined with something on its right.

Every expression you build respects this. A bra sits on the left, a ket on the right, operators sit in between. The notation reads left-to-right like English prose, and the shape of each symbol tells you its role.

Why this matters in practice: when you write a derivation in bra-ket notation, you never have to ask "is this a row or a column?" — the brackets tell you. In matrix notation, you remember by convention that \mathbf{v} is a column and \mathbf{v}^\dagger is a row. The Dirac brackets make the convention visual and automatic.

Putting them together — four shapes, four meanings

The magic of bra-ket notation is that the rules of matrix multiplication, when you apply them to kets and bras, produce exactly four kinds of object. Each kind has a natural Dirac shape.

1. Bra times ket — a number

Put a bra next to a ket:

\langle\phi|\psi\rangle.

This is a row (of dimension 1 \times n) multiplied by a column (of dimension n \times 1). The result is a 1 \times 1 matrix — a single number. For a two-dimensional Hilbert space,

\langle\phi|\psi\rangle = (\gamma^*, \delta^*) \begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \gamma^* \alpha + \delta^* \beta.

This is the inner product of the two states. It is the central quantity in all of quantum mechanics: overlaps, amplitudes, probabilities, expectation values — everything eventually reduces to an inner product. Chapter 5 is entirely devoted to inner products, so this chapter only names the object and moves on.

Read the expression aloud: "bra phi ket psi." The two halves of the word bracket latch together, and the result is one scalar number.

2. Ket times bra — a matrix

Put a ket next to a bra the other way:

|\phi\rangle\langle\psi|.

This is a column (n \times 1) multiplied by a row (1 \times n). The result is an n \times n matrix. For a two-dimensional space,

|\phi\rangle\langle\psi| = \begin{pmatrix} \gamma \\ \delta \end{pmatrix} (\alpha^*, \beta^*) = \begin{pmatrix} \gamma\alpha^* & \gamma\beta^* \\ \delta\alpha^* & \delta\beta^* \end{pmatrix}.

This is the outer product. Every entry of the matrix is a simple product of one ket coefficient and one (conjugated) bra coefficient. Chapter 6 builds projectors and density matrices out of outer products.
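In code the conjugation sits on the bra side; NumPy's `np.outer` does no conjugation itself, so you supply it. A sketch with made-up amplitudes:

```python
import numpy as np

phi = np.array([1j, 2.0])          # ket coefficients (gamma, delta)
psi = np.array([3.0, 1 - 1j])      # ket whose bra we take: (alpha, beta)

# |phi><psi| = column times conjugated row; conjugate the bra explicitly.
M = np.outer(phi, psi.conj())

# Entry (i, j) is phi[i] * conj(psi[j]) -- one simple product per entry.
assert np.isclose(M[0, 1], phi[0] * np.conj(psi[1]))
assert M.shape == (2, 2)           # ket times bra is a matrix
```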

3. Operator between a bra and a ket — a number

Put an operator A between a bra and a ket:

\langle\phi|A|\psi\rangle.

Parse this left-to-right. A|\psi\rangle is an operator acting on a column — the result is another column. Then the bra \langle\phi| on the left is a row times that column, which is a number. So \langle\phi|A|\psi\rangle is a scalar, called a matrix element of A. When |\phi\rangle = |\psi\rangle, the quantity \langle\psi|A|\psi\rangle is the expectation value of A in the state |\psi\rangle — the average outcome you would get if you measured A on an ensemble of copies of |\psi\rangle.
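The parse order is exactly how you would compute it with arrays. A small sketch using the Pauli X from earlier in the chapter, with |0\rangle and |+\rangle as example states:

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])   # Pauli X
phi = np.array([1.0, 0.0])               # |0>
psi = np.array([1.0, 1.0]) / np.sqrt(2)  # |+> = (|0> + |1>)/sqrt(2)

# <phi|X|psi>: X acts on the ket first, then the bra contracts to a scalar.
matrix_element = phi.conj() @ (X @ psi)  # <0|X|+> = <0|+> = 1/sqrt(2)

# Same state on both sides: the expectation value of X in |+>.
expectation = psi.conj() @ X @ psi       # <+|X|+> = 1

assert np.isclose(matrix_element, 1 / np.sqrt(2))
assert np.isclose(expectation, 1.0)
```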

4. Sum of outer products — a matrix

A sum of outer products stays a matrix:

\sum_i c_i\, |\phi_i\rangle\langle\psi_i|.

This is how operators get built out of their matrix elements. You will use this constantly starting in chapter 7.
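To make this concrete, the Pauli X itself can be assembled as a sum of two outer products, X = |0\rangle\langle 1| + |1\rangle\langle 0|; each term takes the coefficient of one basis state and deposits it on the other. A sketch:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# X = |0><1| + |1><0| : a sum of outer products builds the bit-flip gate.
X = np.outer(ket0, ket1.conj()) + np.outer(ket1, ket0.conj())

assert np.array_equal(X, np.array([[0.0, 1.0], [1.0, 0.0]]))
```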

The four shapes of a bra-ket expression:

Dirac expression | Matrix translation | Output shape
\langle\phi|\psi\rangle | \mathbf{v}^\dagger \mathbf{w} | scalar (1×1)
|\phi\rangle\langle\psi| | \mathbf{v}\mathbf{w}^\dagger | matrix (n×n)
\langle\phi|A|\psi\rangle | \mathbf{v}^\dagger A \mathbf{w} | scalar (1×1)
\sum_i c_i\,|\phi_i\rangle\langle\psi_i| | \sum_i c_i\,\mathbf{v}_i \mathbf{w}_i^\dagger | matrix (n×n)
The type-diagram of Dirac notation. If the expression ends with a ket, it is either a ket or a matrix; if it ends with a bra, it is either a bra or a matrix; if it begins with a bra and ends with a ket, it is a scalar. You never have to compute the matrix to know its shape.

Why the notation works — readability, composability, generality

Three pedagogical arguments for why Dirac notation genuinely is better, not just different.

Readability. In a derivation, the most common bug is a misremembered row-or-column. Is v a column or a row here? In matrix notation you have to track which side of the dagger each symbol is on; in bra-ket notation the bracket direction does it for you. Does this expression evaluate to a number, a vector, or a matrix? In matrix notation you multiply the dimensions in your head; in bra-ket notation you just look at the first and last brackets. A bra on the left and a ket on the right — scalar. A ket on the left — vector or matrix. A bra on the right — dual vector or matrix. The grammar is visible.

Composability. Quantum mechanics is full of expressions like

\langle\phi|A|\psi\rangle \cdot \langle\psi|B|\chi\rangle.

In matrix notation this is \mathbf{v}^\dagger A \mathbf{w} \cdot \mathbf{w}^\dagger B \mathbf{u} — four boldface vectors, two daggers, and a lot of tracking to do. In bra-ket it is one visual sentence. Better still: if you multiply \langle\phi| on the left by |\psi\rangle\langle\psi| in the middle, the middle piece is a projector (chapter 6) and the expression reduces to \langle\phi|\psi\rangle\langle\psi|, then to \langle\psi| scaled by the scalar \langle\phi|\psi\rangle. The notation itself suggests the simplification.
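The reduction the notation suggests is easy to verify numerically. A sketch with random states (the seed and the two-dimensional space are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
phi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)              # unit vector, so |psi><psi| projects

projector = np.outer(psi, psi.conj())   # |psi><psi|

# <phi| (|psi><psi|) collapses to the scalar <phi|psi> times the bra <psi|.
left = phi.conj() @ projector
right = (phi.conj() @ psi) * psi.conj()
assert np.allclose(left, right)
```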

Generality. Nothing in the definition of a ket requires it to have finitely many components. A ket |\psi\rangle can be a column with infinitely many entries, or a continuous function (the wavefunction of a particle on a line). A bra \langle\phi| can be a linear functional from an infinite-dimensional space to the complex numbers. The notation looks identical in all cases: \langle\phi|\psi\rangle is still an inner product. This is why Dirac's notation became the universal language of quantum mechanics — it absorbs the infinite-dimensional generalisations without change. Matrix notation, by contrast, struggles when you leave finite matrices behind.

The economy — a side-by-side comparison

For a scorecard of how concise Dirac notation really is, here are ten common expressions in both forms. Read them side by side until the correspondence becomes automatic.

Quantity | Matrix notation | Dirac notation
A quantum state | \mathbf{v} = (\alpha, \beta)^T | |\psi\rangle = \alpha|0\rangle + \beta|1\rangle
Its conjugate transpose | \mathbf{v}^\dagger = (\alpha^*, \beta^*) | \langle\psi| = \alpha^*\langle 0| + \beta^*\langle 1|
Inner product | \mathbf{v}^\dagger \mathbf{w} | \langle v|w\rangle
Outer product | \mathbf{v}\mathbf{w}^\dagger | |v\rangle\langle w|
Operator acting on state | A\mathbf{v} | A|\psi\rangle
Matrix element | \mathbf{v}^\dagger A \mathbf{w} | \langle v|A|w\rangle
Expectation value of A | \mathbf{v}^\dagger A \mathbf{v} | \langle\psi|A|\psi\rangle
State in a basis | \mathbf{v} = \sum_i v_i \mathbf{e}_i | |\psi\rangle = \sum_i \alpha_i |i\rangle
Basis projector | \mathbf{e}_i \mathbf{e}_i^\dagger | |i\rangle\langle i|
Completeness relation | \sum_i \mathbf{e}_i \mathbf{e}_i^\dagger = I | \sum_i |i\rangle\langle i| = I

Count symbols in the last row. Matrix: nine, plus a convention that \mathbf{e}_i means "the i-th standard basis vector." Dirac: seven, and |i\rangle is self-explanatory.

This saving compounds. A quantum-mechanics derivation that runs five pages in matrix notation runs three in Dirac. A Hamiltonian expressed as \sum_{ij} H_{ij} |i\rangle\langle j| is instantly recognisable; the same object as a matrix requires you to define the basis, write out the entries, and remember the ordering convention. The information is the same; the medium is better.
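The completeness relation in the last row of the table is also a one-liner to check in code; a sketch in dimension 4 (any dimension works the same way):

```python
import numpy as np

d = 4
basis = np.eye(d)                  # columns are the kets |0>, ..., |d-1>

# Sum of the basis projectors |i><i| reassembles the identity matrix.
total = sum(np.outer(basis[:, i], basis[:, i].conj()) for i in range(d))
assert np.allclose(total, np.eye(d))
```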

When to use which

The question is never "Dirac or matrices — which is correct?" They are the same objects, in different notation. Every ket is a column vector; every bra is a row vector; every Dirac expression expands to a matrix expression and vice versa. The translation is mechanical.

The question is which notation makes the current task easier. Rules of thumb:

Use Dirac (bra-ket) when:

- you are reasoning symbolically about a general state, without concrete numbers;
- the derivation chains bras, kets, projectors, or resolutions of the identity together;
- the space is large or infinite-dimensional, where explicit matrices are impractical.

Use matrices when:

- you are computing with a specific state whose amplitudes are known numbers;
- you are implementing the calculation in code, where everything is an array anyway;
- you want to check a bra-ket derivation by brute-force multiplication.

Translate fluently between them. The goal of this chapter is not that you memorise one notation and translate when absolutely forced. The goal is that you can read either one without conversion — the way a bilingual speaker reads two languages without mentally translating between them. You should be able to see \langle 0 | H | 0 \rangle and also (1, 0) \cdot H \cdot (1, 0)^T as the same object. By the time you finish Part 2 of this track, that fluency will be automatic.

Worked examples

Example 1: Write a state as a column, and check normalisation

Given

|\psi\rangle = \tfrac{1}{\sqrt{2}}\,|0\rangle + \tfrac{1}{\sqrt{2}}\,|1\rangle,

write it as a column vector, find the bra \langle\psi| as a row vector, and verify that \langle\psi|\psi\rangle = 1 by explicit matrix multiplication.

Step 1. Write the ket as a column.

Use |0\rangle = (1, 0)^T and |1\rangle = (0, 1)^T and add the two scaled versions:

|\psi\rangle = \tfrac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 0 \end{pmatrix} + \tfrac{1}{\sqrt{2}}\begin{pmatrix} 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{pmatrix}.

Why: the definition of a ket as a column vector means you can add component by component. The two basis kets contribute to different components, so the result is the pair of coefficients stacked vertically.

Step 2. Write the bra as the conjugate transpose.

The coefficients 1/\sqrt{2} are real, so their conjugates are themselves. The conjugate transpose turns the column into a row:

\langle\psi| = \left(\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}\right).

Why: the bra rule is "conjugate every entry and lay the column on its side." Real numbers are their own conjugates, so here only the laying-on-its-side happens.

Step 3. Compute \langle\psi|\psi\rangle by matrix multiplication.

A row times a column of length two is the sum of pairwise products:

\langle\psi|\psi\rangle = \left(\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}\right) \begin{pmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{pmatrix} = \tfrac{1}{\sqrt{2}}\cdot\tfrac{1}{\sqrt{2}} + \tfrac{1}{\sqrt{2}}\cdot\tfrac{1}{\sqrt{2}} = \tfrac{1}{2} + \tfrac{1}{2} = 1.

Why: this is the definition of the inner product. Each component of the row multiplies the corresponding component of the column, and you add the products. One half plus one half is one, which is exactly what a normalised state must satisfy.

Result. |\psi\rangle is the column (1/\sqrt{2}, 1/\sqrt{2})^T, its bra is the row (1/\sqrt{2}, 1/\sqrt{2}), and \langle\psi|\psi\rangle = 1.
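The three steps translate line for line into NumPy; a sketch reproducing the example:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Step 1: |psi> = (|0> + |1>)/sqrt(2) as a column of coefficients.
psi = (ket0 + ket1) / np.sqrt(2)

# Step 2: the bra is the conjugate transpose; the entries are real,
# so only the laying-on-its-side happens.
bra = psi.conj()

# Step 3: <psi|psi> = 1/2 + 1/2 = 1 -- the state is normalised.
assert np.isclose(bra @ psi, 1.0)
```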

[Figure: Ket, bra, and inner product for the plus state. Three stacked panels: the ket |\psi\rangle as the column (1/\sqrt{2}, 1/\sqrt{2})^T, the bra \langle\psi| as the row (1/\sqrt{2}, 1/\sqrt{2}), and the inner product \langle\psi|\psi\rangle = (1/\sqrt{2})(1/\sqrt{2}) + (1/\sqrt{2})(1/\sqrt{2}) = 1/2 + 1/2 = 1.]
Example 1 in pictures. The row times the column gives a single scalar, and for a normalised state that scalar is 1. This is why every physically meaningful |\psi\rangle is chosen so \langle\psi|\psi\rangle = 1.

What this shows. The state |+\rangle = (|0\rangle + |1\rangle)/\sqrt{2} from the Bloch-sphere chapter is, in columns and rows, just the pair of coefficients (1/\sqrt{2}, 1/\sqrt{2}). Nothing is hidden. The Dirac notation is the matrix notation, compressed.

Example 2: Expectation value of the Pauli-X gate

Let |\phi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle be a general normalised single-qubit state. Compute \langle\phi | X | \phi\rangle, where X is the Pauli-X gate

X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},

in two ways — once in matrix notation, and once in bra-ket manipulations.

Approach A: matrix notation.

Step 1. Write the state as a column and its conjugate transpose as a row.

|\phi\rangle = \begin{pmatrix} \alpha \\ \beta \end{pmatrix}, \qquad \langle\phi| = (\alpha^*, \beta^*).

Step 2. Compute X|\phi\rangle.

X|\phi\rangle = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \begin{pmatrix} \beta \\ \alpha \end{pmatrix}.

Why: multiplying the 2 \times 2 matrix into the column takes the dot product of each row of X with the column. Row 1 is (0, 1), which kills \alpha and keeps \beta. Row 2 is (1, 0), which keeps \alpha and kills \beta. The net effect is to swap the two components — which is precisely why X is called a bit-flip.

Step 3. Left-multiply by the bra.

\langle\phi|X|\phi\rangle = (\alpha^*, \beta^*) \begin{pmatrix} \beta \\ \alpha \end{pmatrix} = \alpha^*\beta + \beta^*\alpha.

Approach B: bra-ket manipulations.

Step 1. Use the definition of X on basis states: X|0\rangle = |1\rangle and X|1\rangle = |0\rangle. Apply X to the ket by linearity:

X|\phi\rangle = X(\alpha|0\rangle + \beta|1\rangle) = \alpha X|0\rangle + \beta X|1\rangle = \alpha|1\rangle + \beta|0\rangle.

Why: X is a linear operator, so it distributes over the sum and pulls scalars out. The action on each basis ket flips the bit — this is the whole content of the X gate.

Step 2. Contract with the bra \langle\phi| = \alpha^*\langle 0| + \beta^*\langle 1|:

\langle\phi|X|\phi\rangle = (\alpha^*\langle 0| + \beta^*\langle 1|)(\alpha|1\rangle + \beta|0\rangle).

Step 3. Expand the product into four terms and use the orthonormality of the basis (\langle 0|0\rangle = \langle 1|1\rangle = 1, \langle 0|1\rangle = \langle 1|0\rangle = 0):

\begin{aligned} \langle\phi|X|\phi\rangle &= \alpha^*\alpha\,\langle 0|1\rangle + \alpha^*\beta\,\langle 0|0\rangle + \beta^*\alpha\,\langle 1|1\rangle + \beta^*\beta\,\langle 1|0\rangle \\ &= 0 + \alpha^*\beta + \beta^*\alpha + 0 \\ &= \alpha^*\beta + \beta^*\alpha. \end{aligned}

Why: the orthonormality of |0\rangle and |1\rangle makes two of the four inner products vanish and sets the other two to one. This is the single rule that makes bra-ket manipulations shorter than matrix manipulations — most cross-terms kill themselves.

Result. Both approaches give \langle\phi|X|\phi\rangle = \alpha^*\beta + \beta^*\alpha = 2\,\mathrm{Re}(\alpha^*\beta).
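The two approaches can be checked against each other numerically; a sketch with one arbitrary normalised choice of \alpha and \beta:

```python
import numpy as np

alpha, beta = 0.6, 0.8j            # |alpha|^2 + |beta|^2 = 1
phi = np.array([alpha, beta])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# Approach A: explicit row-matrix-column multiplication.
via_matrix = phi.conj() @ X @ phi

# Approach B: the closed form obtained from orthonormality.
via_braket = np.conj(alpha) * beta + np.conj(beta) * alpha

assert np.isclose(via_matrix, via_braket)
assert np.isclose(via_matrix, 2 * np.real(np.conj(alpha) * beta))
```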

[Figure: Two paths to the same answer. Two parallel arrows from \langle\phi|X|\phi\rangle to a shared result. Left, matrix notation: write the state as the column (\alpha, \beta)^T, let X swap the components to (\beta, \alpha)^T, then row times column gives \alpha^*\beta + \beta^*\alpha. Right, bra-ket manipulation: use X|0\rangle = |1\rangle and X|1\rangle = |0\rangle, expand the four cross-terms, and orthonormality kills two of them. Both arrive at \langle\phi|X|\phi\rangle = \alpha^*\beta + \beta^*\alpha = 2\,\mathrm{Re}(\alpha^*\beta).]
Example 2: the same answer via two different manipulations. Both produce \alpha^*\beta + \beta^*\alpha, and both take about the same number of lines. The matrix approach is more mechanical; the bra-ket approach is more transparent — you see which terms die and why.

What this shows. Dirac notation and matrix notation are not rivals. They are two handwriting styles for the same mathematics. Example 2 did the same job twice — once with numbers, once with symbols — and got the same answer. When you are computing on a concrete state, matrices are direct. When you are reasoning about what structure the answer has ("expectation values of X are real because \alpha^*\beta + \beta^*\alpha is 2\,\mathrm{Re}(\alpha^*\beta)"), bra-ket shows the structure faster.

Common confusions

Going deeper

If you have understood that a ket is a column, a bra is a row, and a bra-ket is a number while a ket-bra is a matrix — you have the working knowledge you need for the next thirty chapters. What follows is optional context: where Dirac's notation came from, why mathematicians consider it rigorous, and how it generalises to infinite-dimensional systems that matter for quantum field theory and continuous-variable quantum computing.

Dirac's original motivation and the 1939 paper

Paul Dirac introduced bra-ket notation formally in 1939, in a short paper titled A new notation for quantum mechanics in the Mathematical Proceedings of the Cambridge Philosophical Society. By then the mathematical formulation of quantum mechanics had been settled by John von Neumann's 1932 book Mathematische Grundlagen der Quantenmechanik, which recast the theory in terms of operators on a Hilbert space and became the standard mathematical reference for the next four decades.

Von Neumann's notation was essentially linear-algebraic — states as vectors, observables as self-adjoint operators, probabilities as spectral expansions. It was rigorous and precise. Dirac's objection was that it was unwieldy to compute with. A physicist doing daily calculations did not need von Neumann's full abstract setup; they needed a shorthand that kept track of rows, columns, and conjugates automatically. That shorthand is bra-ket.

The physics community adopted Dirac's notation almost instantly. Schrödinger used it. Pauli used it. Every textbook written after 1945 uses it. When Sakurai wrote Modern Quantum Mechanics in 1985, he made the entire first chapter essentially a tour of Dirac notation, because that, rather than the Schrödinger wave equation or the Heisenberg matrix formalism, is the modern starting point.

Kets as elements of a Hilbert space

Formally, a quantum state is a unit vector in a Hilbert space \mathcal{H} — a complex vector space equipped with an inner product \langle\cdot,\cdot\rangle satisfying conjugate symmetry (\langle\phi,\psi\rangle = \langle\psi,\phi\rangle^*), linearity in the second argument, and positive-definiteness (\langle\psi,\psi\rangle \geq 0, with equality only for the zero vector), plus completeness (every Cauchy sequence converges). A ket |\psi\rangle is an element of this space. For a single qubit the space is \mathbb{C}^2; for n qubits it is \mathbb{C}^{2^n}; for a particle on a line it is L^2(\mathbb{R}), the space of square-integrable complex functions.

Quantum mechanics adds two further structural requirements: the space is separable (it has a countable orthonormal basis — usually true for real systems) and physical states are unit vectors (\langle\psi|\psi\rangle = 1). Both conditions reappear in the four postulates you will see in chapter 10.

Bras as linear functionals — the dual space

A bra is not a vector in \mathcal{H}. A bra is a linear functional — a linear map from \mathcal{H} to \mathbb{C}. Given a ket |\phi\rangle in the Hilbert space, the corresponding bra \langle\phi| is the function that eats any ket |\psi\rangle and returns the complex number \langle\phi|\psi\rangle.

The space of all such linear functionals is called the dual space of \mathcal{H}, written \mathcal{H}^*. For finite-dimensional spaces and for every Hilbert space that shows up in quantum computing, the Riesz representation theorem says that \mathcal{H} and \mathcal{H}^* are in one-to-one correspondence: every continuous linear functional on \mathcal{H} is given by inner product with some vector. That is what makes the notational identification \langle\phi| \leftrightarrow |\phi\rangle precise.

For infinite-dimensional Hilbert spaces, the correspondence still holds, but it is subtler: you need continuity of the functional, and you need to be careful with generalised bras (position eigenbras \langle x|, momentum eigenbras \langle p|) that are not elements of the Hilbert space but of a larger rigged space. This is the technical content of the distinction between "states" and "generalised states" that will matter in Part 9 on continuous observables.

Why the notation generalises to infinite dimensions

The defining virtue of Dirac notation is that it does not care about dimension. A ket |\psi\rangle is a vector in some Hilbert space — whether that space is \mathbb{C}^2, \mathbb{C}^{2^{100}}, or L^2(\mathbb{R}), the symbols look the same. The inner product \langle\phi|\psi\rangle is a complex number whether it is a sum of two terms, a sum of 2^{100} terms, or an integral:

\langle\phi|\psi\rangle = \int_{-\infty}^{\infty} \phi^*(x)\,\psi(x)\,dx.
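Numerically the integral is a sum over a grid, which makes the "kets can be functions" point tangible. A sketch with two unit Gaussians (the offset and grid are arbitrary choices; the analytic overlap e^{-a^2/4} for two unit Gaussians offset by a is a standard result):

```python
import numpy as np

# A grid standing in for the real line; phi and psi are normalised Gaussians.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
phi = np.pi ** -0.25 * np.exp(-x**2 / 2)
psi = np.pi ** -0.25 * np.exp(-(x - 1.0) ** 2 / 2)   # same shape, shifted by 1

# <phi|psi> = integral of phi*(x) psi(x) dx, approximated as a Riemann sum.
overlap = np.sum(phi.conj() * psi) * dx

# Exact value for unit Gaussians offset by a = 1: exp(-1/4).
assert np.isclose(overlap, np.exp(-0.25), atol=1e-6)
```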

This generality is not a convenience — it is the reason Dirac notation became universal. Quantum field theory, continuous-variable quantum computing, and the treatment of position and momentum all live in infinite-dimensional spaces, and Dirac notation runs over them without change. Matrix notation, in contrast, requires substantial adaptation — matrix elements become kernels, sums become integrals, and finite-dimensional tricks break down.

The abuse of notation that physicists live with

One common "abuse" that mathematicians notice: physicists write \langle x|\psi\rangle = \psi(x) as if \langle x| were a bra in the Hilbert-space sense. It is not — position eigenstates |x\rangle are not normalisable elements of L^2(\mathbb{R}). The precise statement lives in the rigged Hilbert space (the Gelfand triple), where |x\rangle is a distribution rather than a vector. For practical computation, the abuse is harmless and is universal. Every physics textbook treats \langle x|\psi\rangle as the wavefunction \psi(x), and everything checks out as long as you remember which operations are legitimate and which are not. You will meet this again in Part 9.

A note on Indian context

When this track introduces Dirac notation, the obvious Indian historical hook is Satyendra Nath Bose, whose 1924 paper on the statistics of light quanta (what we now call Bose–Einstein statistics) used early versions of the operator-and-state formalism that Dirac would later give its definitive notation. Einstein translated Bose's paper into German and submitted it to Zeitschrift für Physik; the ideas in it are why every identical-particle system in nature is classified as either a boson (named after Bose) or a fermion. Dirac's notation, developed fifteen years later, became the standard way to write down states of both. Without Bose's statistics, Dirac's formalism has nothing identifiable to describe; without Dirac's notation, Bose's statistics is hard to write down in readable form. Both men are part of why you, today, can read \langle\phi|\psi\rangle and know exactly what it means.

Where this leads next

References

  1. P. A. M. Dirac, "A new notation for quantum mechanics", Mathematical Proceedings of the Cambridge Philosophical Society (1939). DOI: 10.1017/S0305004100021162. The original paper that introduced the bra-ket.
  2. Wikipedia, "Bra–ket notation".
  3. John Preskill, Lecture Notes on Quantum Computation, theory.caltech.edu/~preskill/ph229, Chapter 2.
  4. M. Nielsen and I. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, §2.1.
  5. Qiskit Textbook, "Linear algebra for quantum computing" — notation and worked matrix examples.
  6. Wikipedia, "Satyendra Nath Bose" — background on the 1924 paper discussed above.