In short

The Holevo bound (Alexander Holevo, 1973) is the fundamental ceiling on classical information extractable from a quantum source. Suppose Alice picks a classical message x from distribution p(x) and encodes it into a quantum state \rho_x. She sends the qubit(s) to Bob, who performs any quantum measurement — yielding classical outcome y. The classical mutual information between x and y obeys

I(X; Y) \;\leq\; \chi \;=\; S(\rho) - \sum_x p(x)\, S(\rho_x),

where \rho = \sum_x p(x) \rho_x is the average state and \chi is the Holevo quantity. Three corollaries:

  • \chi \leq \log_2 d for a d-dimensional system. n qubits convey at most n classical bits per transmission.
  • For pure-state ensembles, \chi = S(\rho); for non-orthogonal pure states, the gap to \log_2 d can be large.
  • \chi is the capacity of a classical-quantum channel (HSW theorem, 1996-97).

The bound resolves the apparent puzzle of superdense coding: Alice sends 2 classical bits per qubit, seemingly beating the bound. She does not — the protocol consumes a pre-shared Bell pair (one ebit), and the 2 bits come from the combined resource of 1 qubit transmitted plus 1 ebit pre-shared. The Holevo bound on the transmitted qubit alone is still 1 bit; the extra bit is entanglement currency. The bound is the theorem that forbids quantum parallelism from becoming information parallelism: superposition explores, measurement extracts, and extraction is classically bounded.

Quantum mechanics lets you prepare a qubit in a superposition of |0\rangle and |1\rangle with any complex amplitudes you like — an uncountably infinite family of states. So in principle, a single qubit "contains" an infinite amount of information: the two real numbers \theta, \phi parameterising the Bloch sphere. Yet when you try to read it out — measure it — you get exactly one classical bit. The continuum of amplitudes collapses to a single yes-or-no answer.

The Holevo bound, proved by Alexander Holevo in 1973 and published in his paper on "Bounds for the quantity of information transmitted by a quantum communication channel," is the theorem that makes this observation precise. The classical information you can extract from a quantum system is bounded — and the bound is tight. A d-dimensional quantum system cannot convey more than \log_2 d classical bits, no matter how clever the encoding or the measurement. For a single qubit: one classical bit per transmission, period.

The bound is fundamental. Every quantum-to-classical communication protocol — QKD, quantum fingerprinting, quantum random-access codes, classical capacity of noisy quantum channels — inherits the Holevo bound as its upper ceiling. And yet, as you will see, the bound can appear to be beaten with entanglement: superdense coding sends 2 classical bits per physical qubit, a fact that initially seems to contradict Holevo but in fact refines the accounting (the pre-shared Bell pair is a separate quantum resource).

This chapter builds the statement, the proof idea, the consequence for n-qubit transmissions, and the reconciliation with superdense coding.

The setup — a classical-quantum channel

Alice wants to send a classical message to Bob using a quantum channel. The setup:

  1. Alice's side. Alice has a random classical message X taking values in \{1, 2, \ldots, |\mathcal{X}|\} with probability distribution p(x). Shannon entropy: H(X) = -\sum_x p(x) \log_2 p(x).
  2. Encoding. For each possible message value x, Alice prepares a quantum state \rho_x on a d-dimensional Hilbert space \mathcal{H}. These need not be orthogonal; they need not even be pure. The ensemble is \{p(x), \rho_x\}.
  3. Transmission. Alice sends the quantum state to Bob over a noiseless channel. If X = x, Bob receives \rho_x.
  4. Measurement. Bob performs a measurement described by a POVM \{M_y\}_{y \in \mathcal{Y}} (positive operators summing to identity). The measurement outcome is a classical random variable Y with distribution p(y | x) = \text{tr}(M_y \rho_x) when the input is x.
  5. Decoding. Bob uses Y to guess X.

The quantity to maximise is the classical mutual information I(X; Y) between Alice's input and Bob's measurement outcome. The question: how large can I(X; Y) get?
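
The pipeline above is easy to simulate. A minimal sketch (Python with NumPy; the ensemble and the POVM are illustrative choices, not part of the setup itself): encode two messages as qubit states, fix a measurement, form p(y|x) = tr(M_y \rho_x), and compute I(X; Y).

```python
import numpy as np

def mutual_information(px, pygx):
    """I(X;Y) in bits, from the prior p(x) and the channel matrix p(y|x)."""
    py = px @ pygx                              # outcome marginal p(y)
    bits = 0.0
    for x, p in enumerate(px):
        for y, q in enumerate(pygx[x]):
            if p > 0 and q > 0:
                bits += p * q * np.log2(q / py[y])
    return bits

# Illustrative ensemble: |0> and |+> with equal prior
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
states = [np.outer(ket0, ket0), np.outer(ketp, ketp)]

# A deliberately sub-optimal POVM: the computational-basis measurement
povm = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

px = np.array([0.5, 0.5])
pygx = np.array([[np.trace(M @ rho).real for M in povm] for rho in states])
print(mutual_information(px, pygx))   # ≈ 0.311 bits
```

Swapping in other POVMs changes the number, but (as the bound below guarantees) never past the Holevo quantity of the ensemble.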

[Figure: classical-quantum channel with POVM decoder — Alice's classical X is encoded as \rho_x, transmitted, and measured by Bob's POVM \{M_y\} to give outcome Y, with I(X;Y) \leq \chi.]
The classical-quantum channel. Alice's classical message $X$ is encoded into a quantum state $\rho_x$, transmitted to Bob, and decoded by a POVM. The Holevo bound says the classical mutual information $I(X; Y)$ between Alice's $X$ and Bob's $Y$ is at most $\chi$, the Holevo quantity of the ensemble.

The bound — statement

Holevo bound

For any ensemble \{p(x), \rho_x\} of quantum states and any POVM measurement \{M_y\}, the classical mutual information between the input X and the outcome Y satisfies

I(X; Y) \;\leq\; \chi(\{p(x), \rho_x\}) \;=\; S(\rho) - \sum_x p(x)\, S(\rho_x),

where \rho = \sum_x p(x) \rho_x is the average state and \chi is the Holevo quantity (also called the Holevo information or \chi-quantity). The bound is tight in the sense that for every ensemble there exists a measurement approaching equality in the large-block limit (HSW theorem).

As a direct corollary, for any d-dimensional system,

I(X; Y) \;\leq\; \chi \;\leq\; S(\rho) \;\leq\; \log_2 d.

Conclusion: n qubits transmit at most n classical bits of information per channel use.

Reading the statement. The first term S(\rho) is the von Neumann entropy of the average state — the "mixedness" Bob sees if he does not know which x Alice sent. The second term \sum_x p(x) S(\rho_x) is the average entropy of the individual signal states — the mixedness remaining once x is known. The difference is the "reduction in uncertainty due to knowing x" — which is exactly the structural content that a good measurement might extract. Holevo's theorem says: no measurement can extract more classical information than this difference.

For a pure-state ensemble (each \rho_x = |\psi_x\rangle\langle\psi_x|), S(\rho_x) = 0 for all x, so

\chi_{\text{pure}} \;=\; S(\rho) \;=\; S\!\left(\sum_x p(x) |\psi_x\rangle\langle\psi_x|\right).

This special case is often the one you meet first.

Why the bound uses S(\rho_x) for each x and not just S(\rho): if Alice's signal states \rho_x are themselves already mixed (not pure), part of the "uncertainty in \rho" comes from the internal noise of each \rho_x, which Bob cannot hope to resolve. The Holevo quantity subtracts that intrinsic noise, leaving only the distinguishable classical-ensemble component as an upper bound on extractable information.
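
The subtraction is easy to see numerically. A minimal sketch (Python/NumPy; the noisy-bit ensemble is an illustrative assumption, not taken from the text): two mixed signal states average to I/2, so S(\rho) = 1, but each carries intrinsic noise that \chi subtracts off.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -tr(rho log2 rho), computed from the eigenvalues."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]          # drop zeros (0 log 0 = 0)
    return float(-np.sum(lam * np.log2(lam)))

def holevo_chi(probs, states):
    """chi = S(rho_avg) - sum_x p(x) S(rho_x)."""
    rho_avg = sum(p * r for p, r in zip(probs, states))
    return von_neumann_entropy(rho_avg) - sum(
        p * von_neumann_entropy(r) for p, r in zip(probs, states))

# Illustrative mixed ensemble: noisy versions of |0><0| and |1><1|
eps = 0.1
rho0 = np.diag([1 - eps, eps])      # mostly |0>
rho1 = np.diag([eps, 1 - eps])      # mostly |1>
chi = holevo_chi([0.5, 0.5], [rho0, rho1])
# S(rho_avg) = S(I/2) = 1 bit, but each rho_x has H(0.1) of intrinsic
# noise, so chi = 1 - H(0.1) ≈ 0.531 bits — less than the full bit.
print(chi)
```

With pure signal states (eps = 0) the subtracted term vanishes and \chi = S(\rho), recovering the pure-state special case above.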

Proof idea — strong subadditivity in one line

The modern proof of the Holevo bound is a direct application of strong subadditivity (Lieb-Ruskai 1973) of von Neumann entropy. You will not need the whole proof for this chapter, but the three-line idea is worth seeing.

Consider a tripartite system X M Q: a classical register X recording Alice's message, the quantum system Q she sends, and a register M that will hold Bob's measurement outcome.

The joint state before measurement is \rho_{XQ} = \sum_x p(x) |x\rangle\langle x|_X \otimes \rho_{x,Q}. A direct calculation shows that the quantum mutual information of this state is I(X; Q) = \chi. Now model Bob's measurement as an interaction coupling Q to M, followed by tracing out Q. By the data-processing inequality (a consequence of strong subadditivity),

I(X; Y) \;=\; I(X; M) \;\leq\; I(X; Q) \;=\; \chi.

Why this is a one-line proof: the whole Holevo bound reduces to "measurement cannot increase classical mutual information with a reference register," which is what the quantum data-processing inequality says. Strong subadditivity is the deeper theorem underneath; once Lieb and Ruskai proved it in 1973, Holevo's bound follows as an immediate corollary.

Holevo's original 1973 proof used a different route — direct manipulation of entropies and Klein's inequality — but the strong-subadditivity proof has become standard. Nielsen-Chuang §12.1 [3] spells out both arguments.

Consequences

n qubits \leq n classical bits

Take an n-qubit system (d = 2^n). Any ensemble of states on this system has average \rho with S(\rho) \leq \log_2 2^n = n. So by the Holevo bound,

I(X; Y) \;\leq\; \chi \;\leq\; n \text{ bits}.

You cannot extract more than n classical bits from n transmitted qubits. The Bloch sphere's continuum of amplitudes is inaccessible in any single-shot measurement. What looked like infinite information content in the quantum state turns out to be, from a classical-extraction perspective, bounded by the number of qubits.

This is a statement with physical bite. It says that a quantum computer with n qubits, no matter how cleverly programmed, cannot output more than n classical bits of information per "run" — the output is obtained by a quantum measurement, and the mutual information between any measurement outcome and the data encoded in an n-qubit register is at most n bits. Quantum speedup, where it exists, is not about transmitting more information; it is about computing answers whose classical transmission is bounded by the same n-bit ceiling.

Non-orthogonal states lose information

Suppose Alice encodes two equally likely messages into two non-orthogonal pure states: |\psi_0\rangle and |\psi_1\rangle with |\langle\psi_0|\psi_1\rangle| = c \in (0, 1). The average state is \rho = \tfrac{1}{2}|\psi_0\rangle\langle\psi_0| + \tfrac{1}{2}|\psi_1\rangle\langle\psi_1|. Its eigenvalues are \lambda_\pm = (1 \pm c)/2, and

\chi \;=\; S(\rho) \;=\; H\!\left(\frac{1 + c}{2}\right).

At c = 0 (orthogonal states): \chi = H(1/2) = 1 bit — perfect distinguishability, full bit extracted. At c = 1 (identical states): \chi = H(1) = 0 — no information, the states are the same.

Intermediate c: for c = 1/\sqrt 2 (the overlap of |0\rangle and |+\rangle), \chi = H((1 + 1/\sqrt 2)/2) \approx H(0.854) \approx 0.601 bits. Alice's message is worth 1 classical bit (uniformly random between two outcomes), but Bob can recover at most 0.601 bits because the states are not orthogonal. The remaining 0.399 bits are irrecoverably mixed into quantum noise — Holevo is telling you that non-orthogonality costs information, quantitatively.
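
The closed form \chi = H((1+c)/2) is one line of code to tabulate (a Python/NumPy sketch):

```python
import numpy as np

def binary_entropy(p):
    """H(p) = -p log2 p - (1-p) log2 (1-p), with H(0) = H(1) = 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

def chi_two_pure_states(c):
    """Holevo quantity of two equiprobable pure states with overlap c."""
    return binary_entropy((1 + c) / 2)

for c in (0.0, 1 / np.sqrt(2), 1.0):
    print(f"c = {c:.3f}: chi = {chi_two_pure_states(c):.3f} bits")
# c = 0.000: chi = 1.000 bits
# c = 0.707: chi = 0.601 bits
# c = 1.000: chi = 0.000 bits
```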

[Figure: Holevo \chi for two pure states versus overlap |\langle\psi_0|\psi_1\rangle| — \chi = 1 at c = 0 (orthogonal), \approx 0.601 at c = 1/\sqrt 2, and 0 at c = 1 (identical).]
The Holevo quantity for two equally likely pure states, as a function of their inner product. Orthogonal states deliver a full bit; non-orthogonal states deliver strictly less. At the BB84 pair overlap of $1/\sqrt 2$, only about $0.601$ bits are extractable from each signal.

This is why QKD protocols like BB84 are secure: Eve, limited by Holevo, cannot fully read the quantum signals that Alice and Bob exchange, and her ignorance can be converted to cryptographic security.

Superdense coding — the apparent violation

Superdense coding is a protocol in which Alice sends 2 classical bits to Bob by transmitting a single qubit. On the face of it, this violates the Holevo bound: one qubit, two bits extracted. How?

The answer is that superdense coding consumes a pre-shared entangled Bell pair (one ebit) as a resource. The resource budget is 1 qubit transmitted plus 1 ebit consumed — two quantum resources paying for the 2 classical bits received. The Holevo bound on the transmitted qubit alone, taken in isolation, is still 1 bit: viewed without the other half of the Bell pair, Alice's qubit is in the maximally mixed state \rho = I/2, and S(I/2) = 1 bit caps the Holevo quantity of anything she can send with it alone.

The resolution is that the pre-shared Bell pair is itself a quantum resource carrying one ebit of entanglement, and consuming it counts toward Alice's "quantum budget." Superdense coding is the noiseless special case of the entanglement-assisted classical capacity theorem: with free pre-shared entanglement, the classical capacity of a noiseless qudit channel is 2 \log_2 d bits per transmission, exactly twice the Holevo limit. Without pre-shared entanglement, it is \log_2 d.
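
The protocol's core fact — Alice's local Pauli turns one shared Bell pair into four perfectly distinguishable two-qubit states — can be checked directly. A minimal sketch (Python/NumPy, standard computational-basis ordering):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # |Phi+>

# Alice applies I, X, Z, or XZ to her half to encode 2 classical bits
encodings = [np.kron(P, I2) @ bell for P in (I2, X, Z, X @ Z)]

# Gram matrix of the four resulting states: identity means they are
# mutually orthogonal, so a Bell-basis measurement recovers 2 bits.
G = np.array([[abs(a @ b) for b in encodings] for a in encodings])
assert np.allclose(G, np.eye(4))
```

The four states are exactly the Bell basis (up to phase), which is why Bob's joint measurement on both qubits extracts both bits.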

[Figure: superdense coding resource budget — 1 qubit transmitted (\rho = I/2, \chi = 1 bit) plus 1 ebit consumed (pre-shared Bell pair) yield 2 classical bits received; entanglement-assisted classical capacity = 2 \log d bits per qubit.]
Superdense coding's resource accounting. The $2$ classical bits come from two quantum resources: $1$ qubit transmitted (contributing at most $1$ bit by Holevo) plus $1$ ebit of pre-shared entanglement (contributing the second bit). The Holevo bound is not violated; entanglement just counts as a separate currency.

This is the cleanest example of "quantum resources are a zoo, not a single quantity." Qubits, ebits, classical bits, and cbits are distinct resources with distinct exchange rates. The Holevo bound fixes the exchange rate between qubits and classical bits; entanglement-assisted protocols consume a second resource (ebits) to beat that rate in a limited sense.

The HSW theorem — tightness at capacity

The Holevo bound is an upper bound. Is it tight? The answer is yes, asymptotically. The Holevo-Schumacher-Westmoreland (HSW) theorem (1996-97) [2] says:

For any classical-quantum channel x \mapsto \rho_x, the classical capacity is

C \;=\; \max_{\{p(x), \rho_x\}} \chi(\{p(x), \rho_x\}),
where the maximisation is over input distributions (with the signal alphabet fixed). Block codes of length n achieve rate \chi with vanishing error as n \to \infty.

In other words: the Holevo quantity is not just an upper bound on mutual information — it is achievable at large block length, just like Shannon's capacity for classical channels. The two bookends (Shannon for classical, HSW for quantum) have the same structure: capacity = max mutual information across input distributions. See the HSW theorem chapter for the full story.

Worked examples

Example 1: Holevo for a uniform $\{|0\rangle, |+\rangle\}$ ensemble

Setup. Alice picks X \in \{0, 1\} with probability 1/2 each and encodes

  • X = 0 into |\psi_0\rangle = |0\rangle,
  • X = 1 into |\psi_1\rangle = |+\rangle = (|0\rangle + |1\rangle)/\sqrt 2.

These are non-orthogonal pure states with inner product \langle 0|+\rangle = 1/\sqrt 2. Compute \chi and interpret.

Step 1 — average density operator.

\rho \;=\; \frac{1}{2}|0\rangle\langle 0| + \frac{1}{2}|+\rangle\langle +|.

In matrix form,

|+\rangle\langle +| \;=\; \frac{1}{2}\begin{pmatrix}1 & 1\\ 1 & 1\end{pmatrix}, \qquad |0\rangle\langle 0| \;=\; \begin{pmatrix}1 & 0\\ 0 & 0\end{pmatrix},

so

\rho \;=\; \frac{1}{2}\begin{pmatrix}1 & 0\\ 0 & 0\end{pmatrix} + \frac{1}{2}\cdot\frac{1}{2}\begin{pmatrix}1 & 1\\ 1 & 1\end{pmatrix} \;=\; \frac{1}{4}\begin{pmatrix}3 & 1\\ 1 & 1\end{pmatrix}.

Why the off-diagonals appear: the X-basis state |+\rangle contributes off-diagonal "coherences" in the Z basis. Averaging a Z-diagonal state with an X-diagonal state gives a density matrix with both Z and X character, hence the off-diagonal entries.

Step 2 — eigenvalues of \rho. The trace is \text{tr}(\rho) = 1 (good — it is a density operator). The determinant is \det(\rho) = (3 \cdot 1 - 1)/16 = 2/16 = 1/8. Eigenvalues satisfy \lambda^2 - \lambda + 1/8 = 0, so

\lambda_\pm \;=\; \frac{1 \pm \sqrt{1 - 1/2}}{2} \;=\; \frac{1 \pm \sqrt{1/2}}{2} \;=\; \frac{1 \pm 1/\sqrt 2}{2}.

Numerically, \lambda_+ \approx 0.854 and \lambda_- \approx 0.146.

Step 3 — von Neumann entropy. Since \rho_x are pure, \chi = S(\rho) = H(\lambda_+):

\chi \;=\; -0.854 \log_2 0.854 - 0.146 \log_2 0.146.

Compute: \log_2 0.854 \approx -0.228 and \log_2 0.146 \approx -2.776. So

\chi \approx 0.854 \cdot 0.228 + 0.146 \cdot 2.776 \approx 0.195 + 0.405 \approx 0.600 \text{ bits}.
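
The arithmetic of Steps 2 and 3 can be double-checked numerically (a quick Python/NumPy sketch):

```python
import numpy as np

# Average state from Step 1
rho = 0.25 * np.array([[3.0, 1.0],
                       [1.0, 1.0]])

lam = np.linalg.eigvalsh(rho)                 # ascending: lambda_-, lambda_+
chi = float(-np.sum(lam * np.log2(lam)))      # pure signals, so chi = S(rho)
print(lam)    # ≈ [0.1464, 0.8536]
print(chi)    # ≈ 0.6009, matching the hand calculation's ≈ 0.600
```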

Step 4 — interpret. Alice has 1 bit of classical message (uniform over two outcomes, H(X) = 1). Any measurement Bob performs extracts at most \chi \approx 0.600 bits of mutual information. Alice has wasted at least 0.400 bits per transmission by using non-orthogonal encoding.

Step 5 — the optimal measurement. The Holevo bound is achievable asymptotically, but for a single shot, what is the best Bob can do? The optimal single-shot measurement for two equiprobable pure states is the Helstrom measurement (a projective measurement onto the eigenbasis of \rho_0 - \rho_1). For these states it achieves mutual information

I(X; Y)^{\text{opt,single}} \;\approx\; 0.399 \text{ bits},

which is strictly less than \chi = 0.600. The gap \chi - I^{\text{single}} \approx 0.201 bits is recoverable only by collective measurements on long blocks — a single-letter measurement leaves about a third of the available Holevo information on the table.
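
The \approx 0.399-bit figure can be reproduced from the standard Helstrom error probability p_{\text{err}} = (1 - \sqrt{1 - c^2})/2 for two equiprobable pure states with overlap c (Python sketch):

```python
import numpy as np

def binary_entropy(p):
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

c = 1 / np.sqrt(2)                       # overlap <0|+>
p_err = (1 - np.sqrt(1 - c**2)) / 2      # Helstrom error probability
# By symmetry the induced channel X -> Y is binary symmetric, so for a
# uniform input I(X;Y) = H(Y) - H(Y|X) = 1 - H(p_err).
I_single = 1 - binary_entropy(p_err)
print(round(I_single, 3))   # 0.399
```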

[Figure: information accounting for the uniform \{|0\rangle, |+\rangle\} ensemble, in bits per transmission — H(X) = 1 bit (what Alice has), Holevo \chi = 0.600, single-shot Helstrom I \approx 0.399.]
Information accounting for the $\{|0\rangle, |+\rangle\}$ uniform ensemble: Alice has $1$ bit of message, Bob's best single-shot measurement extracts $\approx 0.399$ bits, and the Holevo bound $\chi \approx 0.600$ bits is reachable only by collective measurements on long blocks. The gaps quantify the cost of non-orthogonal encoding and the value of block coding.

What this shows. Non-orthogonal pure states leak information in a precise, quantitative way. The Holevo quantity tells you the asymptotic limit; a single-shot measurement sits strictly below it. Getting from one to the other requires processing many qubits together, which is also the reason HSW's block-coding statement needs n \to \infty.

Example 2: $n$ qubits convey at most $n$ classical bits — a concrete check

Setup. Alice has an n-qubit register. She wants to encode classical messages into the 2^n-dimensional Hilbert space so that Bob, performing the best possible measurement, recovers as much classical information as possible. What is the upper bound, and can it be attained?

Step 1 — apply the Holevo corollary. For any ensemble \{p(x), \rho_x\} on the n-qubit system, \chi \leq S(\rho) \leq \log_2(2^n) = n bits. So Bob's mutual information with Alice's message is at most n bits.

Step 2 — can it be achieved? Yes. Alice picks X uniformly from \{0, 1, \ldots, 2^n - 1\} and encodes X = x as the computational-basis state |x\rangle (a bit string of length n). The states \{|x\rangle\} are mutually orthogonal, so Bob can measure in the computational basis and recover x exactly: Y = X, and

I(X; Y) \;=\; H(X) \;=\; n \text{ bits}.

Compute \chi: \rho = \tfrac{1}{2^n} \sum_x |x\rangle\langle x| = I/2^n, so S(\rho) = n. Each |x\rangle is pure, so \sum_x p(x) S(\rho_x) = 0. Therefore \chi = n, and Bob saturates the Holevo bound.
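
A quick numerical check of this saturation (Python/NumPy; n = 3 is an arbitrary choice):

```python
import numpy as np

n = 3
d = 2 ** n
# Uniform ensemble of computational-basis states |x> on n qubits:
# the average state is the maximally mixed state I / 2^n.
rho_avg = np.eye(d) / d
lam = np.linalg.eigvalsh(rho_avg)
S_avg = float(-np.sum(lam * np.log2(lam)))
# Each |x> is pure, so the subtracted term is 0 and chi = S_avg = n.
print(S_avg)   # 3.0
```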

Step 3 — why you cannot beat it. Suppose Alice tried to encode more than n bits — e.g., n + 1 bits into n qubits. She would need an ensemble \{|\psi_x\rangle\}_{x=0}^{2^{n+1} - 1}, which has 2^{n+1} elements. But in a 2^n-dimensional Hilbert space, at most 2^n states can be mutually orthogonal — so at least two of the |\psi_x\rangle must overlap. Non-orthogonal states cannot be perfectly distinguished, so I(X; Y) < H(X) = n + 1. The exact deficit is quantified by the Holevo bound: I(X; Y) \leq \chi \leq \log_2 2^n = n, so at least 1 bit is unrecoverable.

Step 4 — the UPI-transaction framing. Imagine transmitting UPI transaction codes. A 6-digit UPI PIN is \log_2 (10^6) \approx 19.93 bits. To transmit one PIN in a single quantum transmission, Alice needs at least 20 qubits — there is no way to stuff more than 20 classical bits into 20 qubits. For a 16-digit card number (\log_2 10^{16} \approx 53 bits), she needs at least 53 qubits. The qubit-per-bit floor is enforced by Holevo, not by engineering. Quantum mechanics is not a way to transmit more classical bits per physical carrier than you already can.

[Figure: classical bits recoverable versus number of qubits n transmitted — the Holevo ceiling \chi \leq n, attained exactly by orthogonal computational-basis encoding; the region above the diagonal is forbidden for any encoding and measurement.]
The Holevo bound is a diagonal ceiling in the "qubits in, bits out" plane. For any $n$, the best possible encoding + measurement delivers at most $n$ classical bits per $n$-qubit transmission. The orthogonal computational-basis encoding attains this ceiling exactly.

What this shows. The Holevo bound \chi \leq n is tight and attained. Quantum transmission of classical data is no better than classical transmission, qubit-for-bit. The advantage of quantum communication lies elsewhere — in protocols like QKD that exploit the disturbance caused by eavesdropping, or in entanglement-assisted setups that consume a separate resource. Raw classical throughput per qubit is capped by Holevo.

Common confusions

Going deeper

If you have the Holevo bound statement I(X; Y) \leq \chi = S(\rho) - \sum_x p(x) S(\rho_x), the corollary that n qubits carry at most n classical bits, and the superdense-coding reconciliation, you have the essentials. The rest of this section treats accessible information and its gap from \chi, pretty-good measurements, the Holevo capacity of a noisy channel, and the history and refinements.

Accessible information vs Holevo quantity

For single-shot measurements, the supremum of I(X; Y) over all POVMs is the accessible information I_{\text{acc}}(\mathcal{E}) of the ensemble \mathcal{E} = \{p(x), \rho_x\}:

I_{\text{acc}}(\mathcal{E}) \;=\; \max_{\{M_y\}} I(X; Y).

Holevo's bound says I_{\text{acc}} \leq \chi. In general the inequality is strict, and computing I_{\text{acc}} exactly is hard: the optimisation over POVMs is non-convex and has no general closed form. For the two-pure-state example above with overlap c, a closed form is known (the Helstrom-measurement result), but for three or more states, numerical optimisation is usually required. The HSW theorem shows that block coding closes the gap asymptotically — the accessible information of n copies grows like n\chi as n \to \infty — but finite-n results can be substantially below n\chi.

Pretty-good measurement (square-root measurement)

A canonical near-optimal POVM is the pretty-good measurement (PGM), also called the square-root measurement: M_x = \rho^{-1/2} p(x) \rho_x \rho^{-1/2} (with appropriate care for singular \rho). Hausladen and Wootters (1994) showed PGM achieves mutual information within a factor 2 of the optimum, and in many cases is exactly optimal. PGM is the central technical tool in the achievability proof of HSW — it is the measurement whose performance, amplified by block coding, converges to \chi.
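
A sketch of the PGM construction (Python/NumPy; the uniform \{|0\rangle, |+\rangle\} pair is just an illustrative input), with the completeness check \sum_x M_x = I:

```python
import numpy as np

def mat_neg_sqrt(rho):
    """rho^{-1/2} on the support of rho, via eigendecomposition."""
    w, v = np.linalg.eigh(rho)
    inv_sqrt = np.zeros_like(w)
    mask = w > 1e-12                 # invert only nonzero eigenvalues
    inv_sqrt[mask] = w[mask] ** -0.5
    return (v * inv_sqrt) @ v.conj().T

def pretty_good_measurement(probs, states):
    """PGM elements M_x = rho^{-1/2} p(x) rho_x rho^{-1/2}."""
    rho = sum(p * r for p, r in zip(probs, states))
    r = mat_neg_sqrt(rho)
    return [r @ (p * s) @ r for p, s in zip(probs, states)]

ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
states = [np.outer(ket0, ket0), np.outer(ketp, ketp)]
povm = pretty_good_measurement([0.5, 0.5], states)

# Valid POVM: positive elements summing to the identity (rho full rank here)
assert np.allclose(povm[0] + povm[1], np.eye(2))
```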

The Holevo capacity of a noisy channel

For a general quantum channel \mathcal{N}: A \to B (not classical-quantum), the product-state classical capacity is

\chi(\mathcal{N}) \;=\; \max_{\{p(x), \rho_x\}} S\!\left(\mathcal{N}\bigl(\textstyle\sum_x p(x) \rho_x\bigr)\right) - \sum_x p(x) S(\mathcal{N}(\rho_x)).

The HSW theorem says block codes using product input states achieve \chi(\mathcal{N}). Whether entangled input states can do better — the additivity question for classical capacity — was open for a decade until Hastings (2009) resolved it: surprisingly, entangled inputs can boost capacity for some channels, so the single-letter \chi is not always the capacity. The full capacity is \lim_{n \to \infty} \tfrac{1}{n} \chi(\mathcal{N}^{\otimes n}), the regularized Holevo quantity. The HSW theorem chapter treats this in depth.
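
To make the channel formula concrete, here is a sketch (Python/NumPy) evaluating \chi(\mathcal{N}) for the qubit depolarizing channel \mathcal{N}(\rho) = (1-p)\rho + p\,I/2 on one candidate ensemble — the uniform orthogonal pair \{|0\rangle, |1\rangle\} — for which the value reduces to 1 - H(p/2):

```python
import numpy as np

def von_neumann_entropy(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log2(lam)))

def depolarize(rho, p):
    """N(rho) = (1 - p) rho + p I/2."""
    return (1 - p) * rho + p * np.eye(2) / 2

p = 0.1
outputs = [depolarize(np.diag([1.0, 0.0]), p),
           depolarize(np.diag([0.0, 1.0]), p)]
rho_avg = 0.5 * (outputs[0] + outputs[1])     # = I/2, entropy 1 bit
chi = von_neumann_entropy(rho_avg) - 0.5 * sum(
    von_neumann_entropy(o) for o in outputs)
# each output has eigenvalues (1 - p/2, p/2), so chi = 1 - H(p/2)
print(chi)   # ≈ 0.714 bits for p = 0.1
```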

Holevo's 1973 and the Indian connection

Alexander Holevo's 1973 paper "Some estimates for the amount of information transmissible by a quantum communication channel" [1] was published in Russian in Problems of Information Transmission — a Soviet journal then largely inaccessible to Western information theorists. The paper sat mostly unread for decades. It was only after Hausladen, Jozsa, Schumacher, Westmoreland and Wootters' 1996 rediscovery and proof of the capacity theorem that Holevo's original work was translated and properly credited. The Indian Statistical Institute (Kolkata) played a role in this rediscovery: K. R. Parthasarathy's 1992 book An Introduction to Quantum Stochastic Calculus gave the first English-language treatment of Holevo's bound accessible to a wider audience, and Parthasarathy's school at ISI Delhi has continued to produce influential work on quantum information — including the early rigorous proofs of the HSW theorem's Russian-school antecedents.

Refinements — continuity and strong converse

The modern form of Holevo's bound includes:

Where this leads next

References

  1. Alexander S. Holevo, Some estimates for the amount of information transmissible by a quantum communication channel (1973) — Problems of Information Transmission. The original bound.
  2. Paul Hausladen, Richard Jozsa, Benjamin Schumacher, Michael Westmoreland, William Wootters, Classical information capacity of a quantum channel (1996) — Phys. Rev. A 54, 1869. Rediscovery and achievability.
  3. Nielsen and Chuang, Quantum Computation and Quantum Information (2010), §12.1 (The Holevo bound) — Cambridge University Press.
  4. John Preskill, Lecture Notes on Quantum Computation, Ch. 10 (Quantum Shannon theory) — theory.caltech.edu/~preskill/ph229. Clean proof via strong subadditivity.
  5. Mark M. Wilde, Quantum Information Theory (2nd ed., 2017), Ch. 10–11 — arXiv:1106.1445. Modern treatment with strong converse.
  6. Wikipedia, Holevo's theorem — compact statement, proof sketch, and reconciliation with superdense coding.