In short
Boson sampling is a quantum supremacy proposal that uses photons instead of qubits. You send n indistinguishable single photons into an m-mode linear-optical interferometer — a network of beam splitters and phase shifters, where typically m \gg n (often m \approx n^2) — and measure which output modes the photons exit from. Repeat many times to build up an empirical distribution over photon-count patterns. The classical hardness comes from a clean theorem: each output pattern's probability is proportional to |\mathrm{Perm}(A_S)|^2, where A_S is an n \times n submatrix of the interferometer's unitary and \mathrm{Perm} is the permanent — a matrix function that is provably \#\mathrm{P}-hard to compute (strictly harder than anything in \mathrm{NP}, unless the polynomial hierarchy collapses). Scott Aaronson and Alex Arkhipov proposed boson sampling in 2011, years before random circuit sampling. The Jiuzhang series at USTC (Jian-Wei Pan's group) turned the proposal into experiments: Jiuzhang 1.0 (2020, 76 photons), 2.0 (2021, 113 photons), 3.0 (2023, up to 255 detected photons). Like random circuit sampling, boson sampling produces no useful output — the samples solve no real-world problem. It is a capability benchmark, not a computation. But it is a remarkably clean one: photonic hardware, room-temperature operation (aside from the photon detectors), and a complexity-theoretic hardness proof as elegant as any in quantum computing.
In 2011, Scott Aaronson and his student Alex Arkhipov at MIT asked a question that looked like a detour from the mainstream of quantum computing. At the time, the field was focused on universal quantum computers — machines that could run Shor's algorithm, or Grover's, or any polynomial-size quantum circuit. Building such a machine required high-fidelity single- and two-qubit gates on stable, error-correctable qubits. Progress was slow.
Aaronson and Arkhipov asked: what if you gave up universality? What if you built a restricted quantum device — one that could only do one specific kind of calculation — but you chose that calculation to be provably hard classically, and easy for available photonic hardware? Their answer was a proposal called boson sampling. It required no programmable gates, no error correction, no control over individual qubits. It required only a source of single photons, a passive optical network, and photon-counting detectors — three components that existing laboratories already had.
The payoff was theoretical muscle. Aaronson and Arkhipov proved that exact classical simulation of boson sampling was \#\mathrm{P}-hard, and they gave strong evidence that even approximate simulation was hard under standard complexity conjectures. If a photonic device ever ran boson sampling at scale, it would constitute a supremacy demonstration on a physically natural task.
Nine years later, a team at the University of Science and Technology of China (USTC), led by Jian-Wei Pan, built the device. They called it Jiuzhang — after The Nine Chapters on the Mathematical Art, the 2000-year-old Chinese mathematical treatise. Jiuzhang 1.0 detected 76 photons in a 100-mode interferometer. Jiuzhang 2.0 reached 113 photons. Jiuzhang 3.0 pushed the detected photon count to 255. Each result claimed supremacy on a photonic task: classical simulation would take longer than the age of the universe.
This chapter walks through what boson sampling is, why it is classically hard, how Jiuzhang implements it, and how it fits next to random circuit sampling in the supremacy landscape.
The task, in pictures
Forget formulas for a moment. The setup is physically concrete.
Imagine a wall with m narrow slots (modes). In front of the wall is an optical network — a tangle of beam splitters and phase shifters — that takes any photon entering the wall at one slot and routes it to some superposition of slots on the opposite side. The network is linear: photons do not interact with each other inside it, they simply follow the optical paths the network lays down.
Now send n indistinguishable single photons into n of the m input slots. Each photon travels through the network independently as far as the network is concerned — but because the photons are indistinguishable (bosons, identical quantum particles), their amplitudes interfere on the output side. The probability of finding photons in a specific pattern on the output is not the product of single-photon probabilities; it is an interference pattern determined by all possible ways the n photons could have reached that configuration.
The task is: sample from the output distribution. Repeat with fresh sets of n photons many times. You end up with a list of output patterns — counts in mode 1, counts in mode 2, and so on.
Three things distinguish boson sampling from running a general quantum circuit:
- No programmable gates on individual photons. The network is fixed for one experimental run. You cannot apply a controlled-NOT between two photons, because photons do not interact in linear optics.
- No mid-circuit measurements. Detectors sit only at the output.
- The photons are indistinguishable bosons. This is the crucial quantum feature. If the photons were classical particles (coloured balls), each would follow the network independently and the output distribution would be a simple product — easy to simulate. Indistinguishability forces the amplitudes to interfere, producing a distribution that tracks a deep mathematical object.
That object is the matrix permanent.
The permanent — the hard object at the centre
Every output probability in boson sampling has the same structure: it is the squared absolute value of the permanent of a submatrix of the interferometer's unitary. To see where this comes from, start with the definition of the permanent itself.
The matrix permanent
The permanent of an n \times n complex matrix A is

\mathrm{Perm}(A) = \sum_{\sigma} \prod_{i=1}^{n} A_{i, \sigma(i)}

where the sum runs over all n! permutations \sigma of \{1, 2, \ldots, n\}.
The permanent looks almost identical to the determinant. The difference: the determinant has signs — \mathrm{Det}(A) = \sum_{\sigma} \mathrm{sgn}(\sigma) \prod_i A_{i, \sigma(i)}, weighting each permutation by the sign of \sigma. The permanent drops those signs and just adds everything up.
That one missing sign is the difference between trivial and intractable. The determinant can be computed in O(n^3) time by Gaussian elimination. The permanent has no known polynomial-time algorithm, and Leslie Valiant proved in 1979 that computing the permanent is \#\mathrm{P}-hard — meaning any polynomial-time algorithm for the permanent would collapse much of classical complexity theory.
Why the signs matter so much: Gaussian elimination works for the determinant because adding a multiple of one row to another changes the determinant predictably (it does not change it at all, in fact). The permanent has no such row-reduction rule — changing one row can change the permanent in ways that depend on all the other rows. There is no shortcut; you have to, in effect, examine all n! permutations.
For small n, you can still compute the permanent. For n = 2:

\mathrm{Perm} \begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad + bc
Compare with the determinant: ad - bc. Same two terms, different sign on one. For n = 3, the permanent has 6 terms; for n = 4, 24 terms; for n = 20, 2.4 \times 10^{18} terms. The best known classical algorithm — Ryser's formula (1963) — computes the permanent in O(2^n \cdot n) time, which is exponential in n. For n = 50 this is already \sim 10^{16} operations — hours on a supercomputer, per permanent. Boson sampling requires computing many permanents to simulate.
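To make the contrast concrete, here is a small sketch (assuming Python with numpy; `perm_naive` and `perm_ryser` are illustrative helper names, not library functions) that evaluates the same matrix three ways: the n!-term definition, Ryser's inclusion-exclusion formula, and the determinant.

```python
from itertools import permutations
import numpy as np

def perm_naive(A):
    """Permanent straight from the definition: all n! permutations, no signs."""
    n = A.shape[0]
    return sum(np.prod([A[i, s[i]] for i in range(n)]) for s in permutations(range(n)))

def perm_ryser(A):
    """Ryser's formula: inclusion-exclusion over column subsets, O(2^n * n)."""
    n = A.shape[0]
    total = 0.0
    for mask in range(1, 1 << n):
        cols = [j for j in range(n) if (mask >> j) & 1]
        row_sums = A[:, cols].sum(axis=1)       # per-row sum over the chosen columns
        total += (-1) ** (n - len(cols)) * np.prod(row_sums)
    return total

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(perm_naive(A))      # 1*4 + 2*3 = 10
print(perm_ryser(A))      # same value, fewer operations for large n
print(np.linalg.det(A))   # 1*4 - 2*3 = -2: one sign flip separates easy from hard
```

Even Ryser's formula only moves the cost from n! to 2^n; the wall at large n remains.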
From the apparatus to the permanent
Where does the permanent enter the photonic physics?
When a single photon enters input mode i and the interferometer is described by an m \times m unitary U, the photon exits in output mode j with amplitude U_{ji}. For one photon, this is just linear optics — no quantum magic beyond standard wave mechanics.
Now add a second indistinguishable photon, and prepare a specific input configuration. Say photon 1 enters mode i_1 and photon 2 enters mode i_2, and you want to know the amplitude that one photon exits in mode j_1 and the other in mode j_2. Because the photons are indistinguishable, the amplitude has two contributions — photon 1 could go to j_1 and photon 2 to j_2, or photon 1 to j_2 and photon 2 to j_1:

U_{j_1 i_1} U_{j_2 i_2} + U_{j_1 i_2} U_{j_2 i_1} = \mathrm{Perm} \begin{pmatrix} U_{j_1 i_1} & U_{j_1 i_2} \\ U_{j_2 i_1} & U_{j_2 i_2} \end{pmatrix}
Why the two paths add without signs: photons are bosons — their wavefunctions are symmetric under exchange. Swapping the two photons does not introduce a minus sign. This is precisely what makes the coefficient a permanent rather than a determinant (fermions — electrons, say — would give a determinant and antibunching). The famous Hong–Ou–Mandel dip, in which two photons entering a 50{:}50 beam splitter always exit together, is exactly this two-photon permanent effect.
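The Hong–Ou–Mandel dip can be checked in a few lines (a sketch assuming Python/numpy and one common beam-splitter phase convention; other conventions change signs but not the vanishing coincidence probability):

```python
import numpy as np

# 50:50 beam splitter unitary (one common phase convention)
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def perm2(M):
    """Permanent of a 2x2 matrix: ad + bc."""
    return M[0, 0] * M[1, 1] + M[0, 1] * M[1, 0]

# Indistinguishable photons, one per input mode: amplitudes add via the permanent.
p_quantum = abs(perm2(U)) ** 2   # coincidence probability: 0, the HOM dip

# Distinguishable particles: the two routings add as probabilities instead.
p_classical = abs(U[0, 0] * U[1, 1]) ** 2 + abs(U[0, 1] * U[1, 0]) ** 2   # 1/4 + 1/4
print(p_quantum, p_classical)
```

The permanent's two terms cancel exactly, so indistinguishable photons never exit in separate ports; distinguishable particles would coincide half the time.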
Generalise. For n indistinguishable photons entering modes i_1 < i_2 < \cdots < i_n and exiting in modes j_1 < j_2 < \cdots < j_n (at most one photon per mode), the probability of observing that output pattern is

P(j_1, \ldots, j_n \mid i_1, \ldots, i_n) = \left| \mathrm{Perm}(A_S) \right|^2

where A_S is the n \times n submatrix of U indexed by rows \{j_1, \ldots, j_n\} and columns \{i_1, \ldots, i_n\}.
For most output patterns, the permanent is a small, hard-to-predict complex number. The squared magnitude gives the probability. Every different output pattern corresponds to a different submatrix, and hence a different permanent. The whole output distribution is encoded in the permanents of \binom{m}{n} submatrices of U.
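A small numeric sanity check (a sketch assuming Python/numpy; a 4-mode pseudo-random unitary and 2 photons) enumerates every output pattern, including multiply occupied modes, using the standard normalisation |\mathrm{Perm}(A_S)|^2 / \prod_j s_j! for a pattern with s_j photons in mode j, and confirms the probabilities sum to 1:

```python
from itertools import combinations_with_replacement, permutations
from math import factorial
from collections import Counter
import numpy as np

def permanent(M):
    """Permanent by the n!-term definition (fine for tiny matrices)."""
    n = M.shape[0]
    return sum(np.prod([M[i, s[i]] for i in range(n)]) for s in permutations(range(n)))

rng = np.random.default_rng(7)
m, n = 4, 2                            # 4 modes, 2 photons
G = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
U, _ = np.linalg.qr(G)                 # random unitary via QR decomposition
inputs = (0, 1)                        # one photon each in modes 1 and 2

total = 0.0
for outs in combinations_with_replacement(range(m), n):
    A_S = U[np.ix_(outs, inputs)]      # repeat a row when a mode holds 2 photons
    mult = 1
    for c in Counter(outs).values():
        mult *= factorial(c)           # s_j! normalisation for bunched outputs
    total += abs(permanent(A_S)) ** 2 / mult
print(round(total, 8))                 # 1.0 -- the permanents form a full distribution
```

The fact that these \binom{m+n-1}{n} squared permanents sum to exactly 1 is a consequence of the unitarity of U; the quantum device samples from this distribution for free.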
Why this is hard classically — the Aaronson–Arkhipov argument
The classical hardness argument has two layers.
Exact sampling is \#\mathrm{P}-hard. If a classical algorithm could produce samples whose distribution exactly matched the boson sampling distribution, then — using standard approximate-counting machinery applied to the sampler's code — you could estimate |\mathrm{Perm}(A_S)|^2 for any A_S within the polynomial hierarchy. Estimating permanents to that accuracy is \#\mathrm{P}-hard (building on Valiant, 1979), so an efficient exact sampler would place a \#\mathrm{P}-hard problem far lower in the complexity hierarchy than it can sit — contradicting widely believed complexity conjectures.
Approximate sampling is still hard. Real quantum devices do not sample exactly — noise and imperfection mean the produced distribution is close to, but not identical to, the ideal distribution. Aaronson and Arkhipov proved, under two standard conjectures (the permanent-of-Gaussians conjecture and anticoncentration), that even approximate classical sampling (to small total-variation distance) would require polynomial-time algorithms for the permanent on random Gaussian matrices — again contradicting conjectured hardness.
The killer blow is the polynomial hierarchy collapse argument: if approximate boson sampling can be simulated in classical polynomial time, then the polynomial hierarchy \mathrm{PH} — an infinite tower of complexity classes generalising \mathrm{NP} — collapses to its third level. This collapse would be a catastrophe for classical complexity theory; no one expects it to be true.
Jiuzhang — building the machine
Nine years after Aaronson and Arkhipov's paper, Jian-Wei Pan's group at USTC turned the proposal into hardware. The series of experiments, collectively called Jiuzhang, used Gaussian boson sampling — a variant proposed by Craig Hamilton and collaborators in 2017 that replaces single-photon inputs with squeezed states of light. Squeezed states are easier to generate at scale than single photons, and the resulting hardness argument is structurally similar.
Jiuzhang 1.0 (2020)
- Setup: 50 single-mode squeezed-light sources; 100-mode interferometer; 100 detectors.
- Detected photons: up to 76 in a single run.
- Quantum runtime: 200 seconds for the full experimental campaign.
- Classical estimate: \sim 2.5 billion years on a world-class supercomputer to produce samples of matching statistical fidelity.
Jiuzhang 2.0 (2021)
- Setup: phase-locked pump lasers; a larger, 144-mode interferometer; improved squeezed-light sources.
- Detected photons: up to 113.
- Classical estimate: beyond the age of the universe (\sim 10^{17} seconds) by a factor of \sim 10^{10}.
Jiuzhang 3.0 (2023)
- Setup: 144-mode interferometer; pseudo-photon-number-resolving detectors; ultra-high-fidelity optical components.
- Detected photons: up to 255.
- Result: extends the supremacy claim with better verifiability.
Photonics vs superconducting — different regimes of the same claim
Jiuzhang is fundamentally different hardware from Google Sycamore. Here is how the two platforms compare as supremacy demonstrations.
| Feature | Sycamore (superconducting) | Jiuzhang (photonic) |
|---|---|---|
| Particle | Superconducting transmon qubits | Single / squeezed photons |
| Environment | 20 millikelvin (dilution refrigerator) | Room-temperature optics (detectors cryogenic) |
| Gates | Programmable — any quantum circuit | Fixed interferometer per run |
| Task | Random circuit sampling (RCS) | Boson sampling |
| Classical simulation bottleneck | Tensor-network / state-vector contraction | Permanent computation (\#\mathrm{P}-hard) |
| Scalability | Add qubits, needs connectivity | Add modes + photons, needs detectors |
Photonic platforms have a natural fit with sampling tasks: linear-optical hardware produces exactly the kind of distribution where the permanent appears, and single-photon detection is a mature technology. The downside is that photonic platforms are less programmable — you cannot easily reconfigure the interferometer between runs, and universal fault-tolerant photonic quantum computing requires substantial additional technology (like measurement-based quantum computing with fusion-gate architectures, an approach taken by companies like PsiQuantum and Xanadu).
A historical Indian connection worth noting: Satyendra Nath Bose's 1924 paper on the statistics of light quanta is the foundational document for the concept of a "boson" — identical bosonic particles like photons obey the symmetry rule that makes their amplitudes add without sign changes. Without Bose statistics, there is no permanent; without the permanent, there is no boson sampling. The supremacy claim rides on a mathematical structure that came out of Bose's correspondence with Einstein over a century ago. Bose did the science that gives the particle its name.
Hype check. Boson sampling is not useful. The output is a photon-count distribution with no embedded problem — it does not factor integers, simulate molecules, or optimise anything. Jiuzhang, as impressive as it is, is a benchmark experiment. Reading "Jiuzhang achieved quantum supremacy" as "photonic quantum computers now solve industry problems" is exactly the misreading that pop-science keeps making. The correct reading is: photonic hardware can, on a specifically engineered task, produce samples no classical computer can reproduce at scale. Supremacy, yes. Practical computation, no.
Example 1: The two-photon amplitude — a 2 \times 2 permanent
The cleanest possible boson sampling calculation. Imagine a tiny 3-mode interferometer (not a realistic experiment, just a calculation) with unitary

U = \frac{1}{\sqrt{3}} \begin{pmatrix} 1 & 1 & 1 \\ 1 & \omega & \omega^2 \\ 1 & \omega^2 & \omega \end{pmatrix}, \qquad \omega = e^{2\pi i/3}

(the 3 \times 3 discrete Fourier transform, properly normalised). Two indistinguishable photons enter modes 1 and 2. What is the probability that one photon exits in mode 1 and the other in mode 3?

Step 1 — Identify the submatrix. The output modes are \{1, 3\}, the input modes are \{1, 2\}. Take the 2 \times 2 submatrix formed by rows 1, 3 and columns 1, 2:

A_S = \frac{1}{\sqrt{3}} \begin{pmatrix} 1 & 1 \\ 1 & \omega^2 \end{pmatrix} = \frac{1}{\sqrt{3}} \begin{pmatrix} 1 & 1 \\ 1 & e^{4\pi i/3} \end{pmatrix}
Why this submatrix: the output pattern is a specification of which modes detected photons. Each output mode picks one row of U (the row indexed by that output mode). Each input mode picks one column (the column indexed by that input mode). The probability comes from the permanent of the corresponding submatrix.
Step 2 — Compute the permanent. For a 2 \times 2 matrix \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \mathrm{Perm} = ad + bc:

\mathrm{Perm}(A_S) = \frac{1}{\sqrt{3}} \cdot \frac{e^{4\pi i/3}}{\sqrt{3}} + \frac{1}{\sqrt{3}} \cdot \frac{1}{\sqrt{3}} = \frac{1}{3}\left(1 + e^{4\pi i/3}\right)

Evaluate e^{4\pi i/3} = -\frac{1}{2} - \frac{\sqrt{3}}{2}i. Then:

\mathrm{Perm}(A_S) = \frac{1}{3}\left(\frac{1}{2} - \frac{\sqrt{3}}{2}i\right) = \frac{1}{6} - \frac{\sqrt{3}}{6}i
Why keep the complex parts: the permanent is a complex number, not a probability. Only its squared magnitude gives a probability. If you accidentally drop the imaginary part, you lose quantum interference.
Step 3 — Square the magnitude. The squared magnitude is |z|^2 = \mathrm{Re}(z)^2 + \mathrm{Im}(z)^2:

\left|\mathrm{Perm}(A_S)\right|^2 = \left(\frac{1}{6}\right)^2 + \left(\frac{\sqrt{3}}{6}\right)^2 = \frac{1}{36} + \frac{3}{36} = \frac{1}{9}
Step 4 — Interpret. The probability of finding one photon in mode 1 and one in mode 3, given that input was modes 1 and 2, is 1/9 \approx 11.1\%. You would observe this pattern in roughly 1 of every 9 experimental runs.
Why the probability is not trivial: a classical calculation (treating the photons as distinguishable) adds probabilities instead of amplitudes, giving \frac{1}{9} + \frac{1}{9} = \frac{2}{9} for this pattern — twice the quantum value. The deviation of |\mathrm{Perm}|^2 from the distinguishable-particle prediction (here a suppression) is the signature of Hong–Ou–Mandel-like bosonic interference. At scale, interference patterns across all \binom{m}{n} outputs encode the full complexity of the matrix permanent.
Result. The two-photon output pattern (mode 1 and mode 3) has probability 1/9. The calculation required computing a single 2 \times 2 permanent — a trivial operation. For a 50-photon experiment, you would need permanents of 50 \times 50 submatrices, each costing \sim 2^{50} \cdot 50 \approx 5.6 \times 10^{16} operations by Ryser's formula — hours on a supercomputer per permanent — and the full output distribution involves \sim \binom{m}{50} such submatrices. Scaling is ruthless.
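Example 1 can be verified numerically in a few lines (a sketch assuming Python/numpy):

```python
import numpy as np

w = np.exp(2j * np.pi / 3)             # primitive cube root of unity
U = np.array([[1, 1,    1   ],
              [1, w,    w**2],
              [1, w**2, w   ]]) / np.sqrt(3)   # normalised 3x3 DFT

# Rows: output modes {1, 3} -> indices 0, 2. Columns: input modes {1, 2} -> 0, 1.
A_S = U[np.ix_([0, 2], [0, 1])]
perm = A_S[0, 0] * A_S[1, 1] + A_S[0, 1] * A_S[1, 0]   # 2x2 permanent: ad + bc
p = abs(perm) ** 2
print(p)                               # 0.111... = 1/9
```

Changing the output pattern changes only the row indices; every pattern is one small permanent away.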
Example 2: Jiuzhang's 100-mode, 76-photon regime
A scale-up from the toy calculation to what Jiuzhang 1.0 actually achieved. The numbers here are not computed step-by-step (they involve real permanents of real experimental matrices) — instead, the example tracks the size and cost of the classical simulation that would be required.
Setup. Jiuzhang 1.0 has 100 output modes and detects 76 photons. The interferometer implements a specific 100 \times 100 Gaussian unitary. The experimental task is to produce samples whose distribution matches the ideal Gaussian-boson-sampling distribution.
Step 1 — Count the configurations. A 76-photon pattern across 100 modes is a choice of which modes detected photons (combinatorial complexity: \binom{100}{76} if each mode is occupied by at most one photon, or larger if multiple photons per mode are possible). \binom{100}{76} = \binom{100}{24} \approx 8 \times 10^{22}.
Why this matters: for classical simulation, you need access to permanents of 76 \times 76 submatrices. There are on the order of 10^{23} such submatrices — you do not have to compute all of them for sampling, but the sheer number is a scale indicator. At 113 photons (Jiuzhang 2.0), the number of patterns explodes further.
Step 2 — Cost of one 76 \times 76 permanent. Ryser's formula scales as O(2^n \cdot n). For n = 76:

2^{76} \cdot 76 \approx 7.6 \times 10^{22} \cdot 76 \approx 5.7 \times 10^{24} \text{ operations}
On a supercomputer at 10^{18} operations per second, that is \sim 5.7 \times 10^6 seconds \approx 2.2 months per permanent.
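The arithmetic of Steps 1 and 2 is easy to reproduce (a sketch assuming Python; the 10^{18} ops/s exascale rate is the same assumption as above):

```python
from math import comb

patterns = comb(100, 76)          # distinct 76-photon click patterns on 100 modes
ryser_ops = 2**76 * 76            # Ryser cost for ONE 76 x 76 permanent
seconds = ryser_ops / 1e18        # at an assumed exascale rate of 10^18 ops/s
print(f"{patterns:.1e} patterns")                    # ~8e22
print(f"{seconds / 86400:.0f} days per permanent")   # ~66 days, i.e. ~2.2 months
```

Two more photons doubles the per-permanent cost; this is the exponential wall the experiment hides behind.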
Step 3 — Cost of sampling. Classical sampling requires producing samples from the distribution. The simplest exact method would compute marginal probabilities and condition sample by sample — this requires many permanents per sample. Alternative methods (tensor networks, approximation) can do much better, but they still scale exponentially. Published classical-simulation estimates for Jiuzhang 1.0 put the total cost at roughly 2.5 billion years on a 2020-era supercomputer.
Why "billion years" rather than "centuries": the combined effect of sampling many output patterns, computing many permanents, and matching statistical fidelity multiplies the per-permanent cost by a large factor. A single permanent is already expensive; full distribution sampling amplifies the cost enormously.
Step 4 — What the quantum device does. The photonic apparatus runs the interferometer in \sim 200 seconds of wall-clock time, producing 10^4–10^6 samples at the desired photon count. The quantum device does not "compute the permanent" — it produces samples naturally, because the physics of indistinguishable photons is the permanent structure. The classical simulator must compute what the quantum device simply is.
Result. Jiuzhang 1.0 demonstrates supremacy on the Gaussian boson sampling task: quantum runtime \sim 200 seconds, classical estimate \sim 2.5 billion years, ratio \sim 4 \times 10^{14}. Jiuzhang 2.0 and 3.0 extend this further. The physical bottleneck is photon detection at scale — higher-efficiency detectors yield higher effective photon counts, which drive the classical-simulation cost up exponentially.
Common confusions
- "Boson sampling is useful." No. The output is a photon-count distribution with no embedded problem. Boson sampling does not factor, simulate chemistry, optimise, or compute anything of practical interest. It is a benchmark task — chosen for its classical hardness, not for any utility. Some proposals have attempted to extract useful information from boson sampling (molecular vibronic spectra, graph isomorphism approximations), but these remain speculative and none have demonstrated advantage over classical methods.
- "Photons are classical light, so photon sampling is classical." No. Individual photons are quanta of the electromagnetic field; indistinguishable photons obey Bose statistics, producing interference patterns with no classical analogue. Boson sampling's hardness comes directly from the quantum structure of identical bosons — the classical picture (distinguishable particles through the same network) gives a different, much easier distribution.
- "IBM or Google vs USTC — who 'won' supremacy?" The question is malformed. Google Sycamore (2019) demonstrated supremacy on random circuit sampling, a task defined for programmable superconducting qubits. USTC Jiuzhang (2020+) demonstrated supremacy on boson sampling, a task defined for linear optical photonics. These are different tasks on different hardware; both are legitimate supremacy claims. The community treats them as parallel, independent achievements rather than competing ones.
- "Computing the permanent is NP-hard." Not quite — it is \#\mathrm{P}-hard, which is believed to be strictly stronger. \mathrm{NP} is about deciding yes/no questions; \#\mathrm{P} is about counting solutions. \#\mathrm{P}-hard problems sit above \mathrm{NP} in the complexity hierarchy — by Toda's theorem, the entire polynomial hierarchy is contained in \mathrm{P}^{\#\mathrm{P}}. The permanent is the canonical \#\mathrm{P}-complete problem.
- "Boson sampling has been classically simulated, so the claim is dead." Partially true. Specific classical algorithms have improved — tensor-network methods, noise-exploitation techniques — and narrowed the per-experiment gap. But the asymptotic hardness (larger n, more modes) remains; Jiuzhang has outgrown the classical frontier multiple times.
- "Boson sampling requires no quantum coherence." False. It requires indistinguishable photons — photons with identical frequency, polarisation, and arrival time, which demand maintaining quantum coherence through the interferometer. Any decoherence (the photons becoming partially distinguishable) degrades the permanent interference toward a classical product-distribution outcome. Experimental boson sampling is precisely an engineering exercise in maintaining photonic indistinguishability at scale.
Going deeper
You have the apparatus, the permanent connection, the Aaronson–Arkhipov argument, and the Jiuzhang experiments. The rest of this section fills in the more technical material: why computing the permanent is \#\mathrm{P}-hard (Valiant's theorem); the full proof sketch of the polynomial-hierarchy-collapse argument; the Gaussian boson sampling variant and why it is easier to build; classical spoofing attempts and what they do and do not accomplish; the photonic platforms beyond Jiuzhang (PsiQuantum, Xanadu, QuiX); and the place of the USTC programme in the landscape of national quantum initiatives.
Valiant's theorem — the permanent is \#\mathrm{P}-hard
Leslie Valiant's 1979 paper The complexity of computing the permanent proved that computing the permanent of a 0/1 matrix is \#\mathrm{P}-hard (and in fact \#\mathrm{P}-complete — at least as hard as counting all satisfying assignments of a Boolean formula). The reduction goes, at a sketch level: a Boolean formula \varphi is encoded via variable and clause gadgets into a graph whose weighted perfect matchings track the satisfying assignments of \varphi. The permanent of the corresponding matrix counts those matchings, and therefore determines the number of satisfying assignments — which is \#\mathrm{P}-complete to compute.
The consequence: there is no polynomial-time classical algorithm for the permanent unless \mathrm{P}^{\#\mathrm{P}} = \mathrm{FP}, which would be a catastrophic complexity collapse (it would imply \mathrm{P} = \mathrm{NP} and much more).
The polynomial hierarchy collapse argument
Aaronson and Arkhipov's hardness argument for approximate boson sampling uses a technique called Stockmeyer counting. The argument, at a sketch level:
- Assume, for contradiction, that approximate classical sampling of boson sampling can be done in randomised polynomial time (the class \mathrm{BPP}).
- Use Stockmeyer's algorithm (which is in \Sigma_3^P, the third level of the polynomial hierarchy) to approximately count the probability of any output pattern.
- This gives an algorithm in \Sigma_3^P for approximating permanents.
- Under the permanent-of-Gaussians conjecture (widely believed), exact computation of random Gaussian permanents is \#\mathrm{P}-hard, and approximation is equally hard.
- Therefore \mathrm{P}^{\#\mathrm{P}} \subseteq \Sigma_3^P, which by Toda's theorem collapses the polynomial hierarchy.
This is a complexity-theoretic upper bound on classical tractability. It does not prove quantum supremacy unconditionally; it proves it under the conjecture that the polynomial hierarchy is infinite, which is nearly as unshakeable as \mathrm{P} \neq \mathrm{NP}.
Gaussian boson sampling
The original Aaronson–Arkhipov proposal used single-photon Fock states as input — one photon in each of n selected modes. Single-photon sources with high purity, indistinguishability, and on-demand availability are experimentally demanding to build at scale.
Hamilton, Kruse, Sansoni, Barkhofen, Silberhorn, Jex (2017) proposed Gaussian boson sampling as a practical variant. Instead of single photons, use squeezed vacuum states at each input mode. The photon-number statistics of squeezed states are non-trivial (not Poissonian), and the output distribution now has probabilities proportional to Hafnians of submatrices of a matrix built from the interferometer unitary and the squeezing parameters. The Hafnian is closely related to the permanent — it counts weighted perfect matchings of a general graph, with the permanent as the special bipartite case — and retains \#\mathrm{P}-hardness.
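For intuition, a brute-force Hafnian is a few lines (a sketch assuming Python/numpy; `hafnian` is an illustrative helper — real GBS work uses optimised libraries such as Xanadu's The Walrus). It shows the matching-sum definition and the permanent-as-special-case identity:

```python
import numpy as np

def hafnian(B):
    """Hafnian of a symmetric 2k x 2k matrix: sum over perfect matchings."""
    n = B.shape[0]
    if n == 0:
        return 1
    # Match index 0 with each possible partner j, recurse on the remaining indices.
    total = 0
    for j in range(1, n):
        rest = [k for k in range(n) if k not in (0, j)]
        total += B[0, j] * hafnian(B[np.ix_(rest, rest)])
    return total

# Permanent as a special case: Perm(A) = Haf([[0, A], [A^T, 0]])
A = np.array([[1, 2], [3, 4]])
Z = np.zeros((2, 2), dtype=int)
B = np.block([[Z, A], [A.T, Z]])
print(hafnian(B))   # 10 = Perm(A) = 1*4 + 2*3
```

The recursion visits (2k-1)!! matchings, so the Hafnian inherits the same exponential wall as the permanent.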
Jiuzhang uses Gaussian boson sampling because squeezed vacuum sources are substantially easier to build than true single-photon sources. The complexity-theoretic hardness carries through.
Classical spoofing and noise
Classical simulation algorithms exploit three levers:
- Tensor-network compression: if the effective entanglement of the photon state is bounded, tensor-network methods can efficiently simulate.
- Noise: real photons are not perfectly indistinguishable. If indistinguishability drops below a threshold, the distribution approaches the classical (distinguishable-particle) distribution, which is easy to simulate.
- Marginal concentration: if the output distribution concentrates mass on a small subset of patterns, sampling from the marginal is easier than from the full distribution.
Each experimental improvement (cleaner photons, better detectors, more modes) works against all three levers, pushing the classical simulation cost back up. The arms race between photonic hardware and classical algorithms is ongoing and is covered in detail in the next chapter on classical spoofing.
Photonic platforms beyond Jiuzhang
Several companies and academic groups have built or are building photonic quantum hardware for sampling and for universal computation.
- Xanadu (Canada) — developing Borealis, a photonic quantum computer offering Gaussian boson sampling as a cloud service. Published supremacy claim in 2022.
- PsiQuantum — targeting fault-tolerant universal photonic quantum computing using fusion-based architectures. Not a supremacy player, but an aggressive industrial bet on photonics.
- QuiX (Netherlands) — integrated photonic processors for boson sampling and optical quantum computing.
- ORCA Computing (UK) — photonic quantum networks and sampling experiments.
The photonic platform is diverse. Jiuzhang represents the academic, pure-supremacy tradition; the commercial platforms (Xanadu, PsiQuantum) aim for different milestones (cloud-accessible sampling, fault-tolerant computing).
Indian context — photonic quantum research
India's photonic quantum computing effort is concentrated in a few centres:
- Raman Research Institute (RRI), Bangalore — founded by C.V. Raman; active in quantum optics and photonic experiments, including early work on twin-photon sources and quantum key distribution.
- ISRO (Indian Space Research Organisation) — satellite-based quantum key distribution experiments using photons, including the 2023 demonstration over 300 km.
- Indian Institute of Science (IISc), Bangalore — theoretical and experimental quantum optics.
- IIT Madras, IIT Bombay, IIT Delhi — growing photonic quantum research groups.
The historical Indian connection to boson sampling runs through Bose himself. Satyendra Nath Bose's 1924 paper, Planck's law and the hypothesis of light quanta (Plancks Gesetz und Lichtquantenhypothese), established the statistical rules for identical photons — the mathematical foundation that the permanent captures combinatorially. Einstein translated and championed the paper, and the particles we now call bosons are named after Bose. The National Quantum Mission (2023) explicitly includes photonic supremacy demonstrations as a possible near-term milestone; the theoretical foundation traces back to Bose's original statistical arguments.
Jian-Wei Pan and the USTC programme
The USTC programme under Jian-Wei Pan is the world's most aggressive photonic-supremacy effort. Pan's group has led the Jiuzhang series, the Zuchongzhi superconducting series, satellite-based quantum key distribution (Micius, 2016), and quantum teleportation experiments. The programme is state-funded, centralised, and produces results at a pace that no Western academic group matches.
This is worth understanding for Indian context: the scale of photonic supremacy work at USTC is the result of sustained institutional funding and concentrated expertise over two decades. India's National Quantum Mission, launched in 2023 with a \sim ₹6000 crore (\sim \$720 million USD) eight-year budget, explicitly targets a comparable programme. Whether India can match USTC's output depends less on funding alone than on the development of concentrated experimental groups with the detector technology, squeezed-light sources, and integrated photonics skills that Jiuzhang embodies.
Where this leads next
- Quantum Supremacy Defined — the parent definition; what supremacy claims must satisfy.
- Google Random Circuit Sampling — the superconducting-qubit supremacy demonstration that predated (in announcement) Jiuzhang.
- Classical Spoofing of Supremacy — the ongoing improvements in classical simulation and what they imply.
- Gaussian Boson Sampling — the squeezed-light variant that Jiuzhang actually implements.
- USTC Jiuzhang — a dedicated chapter on the Jiuzhang hardware and experimental history.
- Useful Quantum Advantage — the contrasting milestone of solving problems people care about.
References
- Scott Aaronson and Alex Arkhipov, The computational complexity of linear optics (2011) — the original boson sampling proposal and hardness proof. arXiv:1011.3245.
- Han-Sen Zhong et al. (USTC), Quantum computational advantage using photons (2020) — the Jiuzhang 1.0 paper in Science. arXiv:2012.01625.
- Craig S. Hamilton et al., Gaussian boson sampling (2017) — the variant Jiuzhang implements. arXiv:1612.01199.
- Wikipedia, Boson sampling — curated overview with a running timeline of experiments.
- John Preskill, Quantum computing and the entanglement frontier (2012) — the supremacy-coinage essay with an early discussion of sampling candidates. arXiv:1203.5813.
- Leslie Valiant, The complexity of computing the permanent (Theoretical Computer Science, 1979) — the \#\mathrm{P}-hardness proof for the permanent.