In short

Quantum supremacy is a narrow technical claim: there exists one specific task that a quantum device completes in reasonable time, and no classical computer can match that time using any known algorithm. The term was coined by John Preskill in 2012. Google claimed the first supremacy demonstration in October 2019 using a task called random circuit sampling — running a random 53-qubit circuit and collecting output samples, finishing in 200 seconds what they estimated would take a supercomputer 10,000 years. Supremacy is not the same as useful quantum advantage. The tasks chosen for supremacy experiments are specifically engineered to be hard classically; they have no practical use. Factoring RSA-2048, simulating useful chemistry, and breaking real encryption are all still years away. The community has largely shifted to the term quantum advantage — partly because "supremacy" carries uncomfortable political connotations, partly because it better captures the graded reality: classical simulation keeps improving, the gap keeps closing, and the cleanest claims keep being softened after follow-up work. You should treat supremacy as a benchmark, not a milestone.

In October 2019, Nature published a cover story with a striking claim: Google's 53-qubit Sycamore chip had, in 200 seconds, completed a computation that the best classical supercomputer on Earth would need roughly 10,000 years to reproduce. The headline shouted one word: supremacy. Across the internet, the word landed as "quantum computers have arrived." In physics departments, the reaction was more nuanced — impressed, but careful. Within days, IBM was arguing the same computation could be simulated classically in about 2.5 days, not 10,000 years. Later work shrank it further.

What actually happened? What did Google mean by supremacy, what did they not mean, and how should you read the word the next time you see it in a headline? This chapter unpacks the definition — carefully enough that when you meet it in the next chapter (where you dig into the Sycamore experiment itself) you already have the conceptual frame.

The short preview, before you meet the formal definition: supremacy is a benchmark, not a coronation. It is a claim about one specific task, chosen to be maximally hard classically and comparatively easy quantumly, with no requirement that the task be practically useful. The claim is significant — it is the first experimental evidence that quantum hardware can outrun classical hardware on something — but the gap between "something" and "something you care about" is still very much open territory.

Where the word came from

The term quantum supremacy was coined in 2012 by John Preskill, the Richard P. Feynman Professor of Theoretical Physics at Caltech, in an essay titled Quantum computing and the entanglement frontier. Preskill was writing for a conference, and he needed a phrase for a question the community was converging on: at what point does a quantum computer become provably more powerful than any classical machine, not in theory (Shor had done that in 1994) but in demonstrated practice?

His answer, in effect: call the threshold "quantum supremacy." It is the moment an experimentalist can point at a real machine and say, truthfully, that no classical computer can reproduce what it just did — on some well-defined task, not necessarily a useful one.

The word itself was never meant to glorify. In context, Preskill was clear that supremacy was a narrow technical achievement, not a declaration of quantum dominance. But words carry connotations that physicists cannot fully control. By 2020, many researchers had begun using quantum advantage instead — partly because "supremacy" shares linguistic territory with "white supremacy" and other terms the field did not want to echo, and partly because "advantage" is scientifically more accurate: it implies a spectrum (a small advantage, a large one, a practical one) rather than a binary.

You will see both words in the literature. This chapter uses supremacy when discussing the original 2012 framing and the 2019 Google claim that made the term famous, and advantage when discussing the softer, broader claim that has displaced it.

[Figure: the supremacy–advantage spectrum. A horizontal axis runs from "classical can easily do it" through NISQ demos and narrow-task supremacy (Google 2019, Sycamore) to useful quantum advantage (chemistry, optimisation) and, furthest out, fault-tolerant quantum computing (Shor on RSA-2048, years away). Supremacy ≠ useful advantage ≠ fault-tolerant QC — three different milestones on one road.]
Three milestones on the road from classical to fault-tolerant quantum computing. Supremacy is the first — a narrow demonstration on a specifically hard task. Useful advantage is the second — a speedup on a problem someone actually cares about. Fault-tolerant quantum computing (enough qubits with error correction to run Shor's algorithm on real RSA) is the third, and still years away.

The formal definition

With the history in place, here is the technical statement.

Quantum supremacy

A quantum supremacy demonstration is an experiment showing that a specific, well-defined computational task T satisfies all three of the following conditions:

  • Quantum tractability. There exists a quantum device that completes T in time t_Q, where t_Q is short enough to be wall-clock practical (seconds to hours).
  • Classical hardness. The best known classical algorithm for T requires time t_C that is super-polynomially larger than t_Q. "Best known" means: against every algorithm anyone has yet found, run on the best supercomputer available.
  • Verifiability. The quantum device's output on T is checkable — either directly, or statistically, or under a complexity-theoretic assumption — so that an observer can confirm the device actually solved T rather than producing garbage.

The three conditions are doing distinct work.

Quantum tractability is where the actual hardware lives. Your device must finish the task in a reasonable time — a few minutes, a few hours, not centuries. Otherwise the claim is vacuous. Quantum circuits of a certain depth on a certain number of qubits take a measurable wall-clock time to execute; that time is t_Q.

Classical hardness is the comparison that matters. A factor of 10^6 is impressive, but it is a fixed constant; a factor of 2^{50} — about a quadrillion — is different in kind, because as the problem size grows the exponential gap keeps widening. Supremacy requires a super-polynomial gap: no polynomial in the problem size bounds the ratio of classical to quantum running time. The strongest claims push this gap into the "takes longer than the age of the universe" regime.
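To see the difference between a big constant factor and a super-polynomial gap, here is a toy Python sketch. The cost model — quantum time growing like n^3, classical time like 2^n — is an illustrative assumption for this sketch, not a measured property of any device:

```python
# Toy model of the supremacy gap: suppose the quantum device runs the
# task in time ~ n^3 (polynomial) while the best known classical
# simulator needs ~ 2^n (exponential). The *ratio* then grows
# super-polynomially with the problem size n.
def gap(n: int) -> float:
    t_quantum = n ** 3        # toy polynomial quantum cost
    t_classical = 2.0 ** n    # toy exponential classical cost
    return t_classical / t_quantum

for n in (30, 40, 53, 60, 70):
    print(f"n = {n:2d}: classical/quantum ratio ~ {gap(n):.2e}")
```

Each extra qubit roughly doubles the classical cost while barely moving the quantum cost — which is why the gap at n = 53 is already around ten orders of magnitude in this toy model, and why no fixed classical speedup can close it for good.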

Verifiability is the catch. Most problems quantum hardware is good at have an easy verification step (factoring: multiply the factors back together; check the product). But the tasks that maximise classical hardness tend to be sampling problems — producing samples from a probability distribution — and sampling from a distribution is not directly verifiable by running a test. The verification step for sampling tasks relies on statistical benchmarks like the linear cross-entropy benchmark (XEB, which you will meet in the next chapter), or on complexity-theoretic arguments that the sampler's output is statistically incompatible with random noise.

Why verifiability is tricky: a quantum device that outputs random garbage would trivially produce samples "classical computers cannot reproduce" — because the garbage has no structure anyone can match. Supremacy demands that the device produces the right kind of samples, from the distribution the ideal quantum circuit would produce. Statistical benchmarks like XEB quantify how close the observed samples are to that ideal.

Hype check. Supremacy is not the claim that "quantum is now useful." It is the claim that "quantum hardware is, on this one narrow task, strictly more powerful than the best classical hardware." Useful quantum advantage — solving a problem people care about, faster or better than any classical method — is a separate, harder bar. Factoring 2048-bit RSA, simulating drug molecules at chemical accuracy, breaking industrial encryption: none of these is what supremacy demonstrated. Treating "Google achieved quantum supremacy" as "quantum computers now work" is reading a narrow technical sentence as a sweeping civilisational one.

Why the chosen tasks are useless

This is the hardest part of the conceptual story and the part the hype gets wrong most often. The tasks used for supremacy experiments are chosen precisely because they are hard classically, and the same features that make them hard classically make them useless in practice.

Consider the three main candidate tasks.

Random Circuit Sampling (RCS)

The task. Pick a random quantum circuit C on n qubits, depth d. Run it on the initial state |0\rangle^{\otimes n}. Measure all qubits in the computational basis. Repeat many times, producing samples x_1, x_2, \ldots, x_M \in \{0, 1\}^n. Your output is the set of samples, or equivalently, an empirical estimate of the output distribution.

Why it is classically hard. To simulate the circuit classically, you must either track the full 2^n-dimensional state vector (infeasible memory for n \geq 50), or use tensor-network methods whose cost scales exponentially in circuit depth. On the complexity-theory side, Bouland–Fefferman–Nirkhe–Vazirani (2019), building on the sampling framework of Aaronson and Arkhipov, showed that exactly computing a random circuit's output probabilities is \#\mathrm{P}-hard even on average, and that efficient classical sampling from the output distribution would collapse the polynomial hierarchy.

Why it is useless. The "answer" is just samples from a random distribution. There is no underlying problem being solved, no computation being performed on real data. You cannot factor a number, simulate a molecule, or optimise a supply chain with RCS. It is a benchmark, nothing more.
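To make the task concrete, here is a toy classical RCS simulation for a handful of qubits using numpy. The gate set (random single-qubit unitaries plus CZ layers) is a simplified stand-in for Sycamore's actual gates, assumed for this sketch. The same brute-force approach is exactly what becomes infeasible near n \approx 50: the state vector alone would need 2^{53} \times 16 bytes \approx 144 petabytes.

```python
import numpy as np

rng = np.random.default_rng(0)

def apply_1q(state, u, q, n):
    """Apply a 2x2 unitary u to qubit q of an n-qubit state vector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, q, 0)
    psi = np.tensordot(u, psi, axes=([1], [0]))
    psi = np.moveaxis(psi, 0, q)
    return psi.reshape(2 ** n)

def apply_cz(state, q1, q2, n):
    """Apply CZ: flip the sign of amplitudes where both qubits are 1."""
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1] = 1
    idx[q2] = 1
    psi[tuple(idx)] *= -1.0
    return psi.reshape(2 ** n)

def random_circuit_state(n: int, depth: int) -> np.ndarray:
    """Brute-force statevector simulation of a toy random circuit:
    layers of random single-qubit unitaries followed by CZ gates.
    Memory cost is 2^n amplitudes -- the reason this breaks down
    near n = 50."""
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0  # |0...0>
    for _ in range(depth):
        for q in range(n):
            # Haar-ish random 2x2 unitary via QR decomposition
            m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
            u, _ = np.linalg.qr(m)
            state = apply_1q(state, u, q, n)
        for q in range(0, n - 1, 2):
            state = apply_cz(state, q, q + 1, n)
    return state

# The RCS "task": sample bitstrings from the output distribution.
n, depth = 4, 5
state = random_circuit_state(n, depth)
probs = np.abs(state) ** 2
samples = rng.choice(2 ** n, size=1000, p=probs)
print("first samples:", [format(s, "04b") for s in samples[:5]])
```

Note what the output is: a pile of bitstrings. There is no question being answered — which is precisely the "useless" half of the story above.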

Boson Sampling

The task. Send n indistinguishable photons into a linear optical interferometer with m modes (typically m \approx n^2). Measure the output photon-count pattern in each mode. Repeat for samples. Output: the distribution over photon-count patterns.

Why it is classically hard. Aaronson and Arkhipov (2011) proved that the output probabilities of boson sampling are given by permanents of large complex matrices — and computing the permanent is a \#\mathrm{P}-hard problem, so exact classical simulation is intractable. Their paper established boson sampling as the first supremacy-style candidate, predating RCS.

Why it is useless. Same story. The output is a photon-count distribution with no embedded problem. Boson sampling is, structurally, a lovely demonstration of quantum statistics (the Hong–Ou–Mandel effect generalised). But no one has a use for the samples themselves.
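The hard core of boson sampling is the matrix permanent. Here is a sketch of Ryser's formula — the best known exact algorithm, still O(2^n \cdot n) — checked against the brute-force sum over all n! permutations:

```python
from itertools import permutations, combinations

def permanent_ryser(a):
    """Ryser's formula: perm(A) = (-1)^n * sum over nonempty column
    subsets S of (-1)^|S| * prod_i sum_{j in S} a[i][j].
    O(2^n * n^2) time -- still exponential, which is why boson
    sampling is classically hard."""
    n = len(a)
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1.0
            for i in range(n):
                prod *= sum(a[i][j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

def permanent_bruteforce(a):
    """The definition: like a determinant but with no signs.
    Sum over all n! permutations -- O(n * n!) time."""
    n = len(a)
    total = 0.0
    for sigma in permutations(range(n)):
        p = 1.0
        for i, j in enumerate(sigma):
            p *= a[i][j]
        total += p
    return total

# Sanity check: permanent of the all-ones 3x3 matrix is 3! = 6.
ones = [[1.0] * 3 for _ in range(3)]
print(permanent_ryser(ones), permanent_bruteforce(ones))
```

Unlike the determinant, which Gaussian elimination computes in O(n^3), no polynomial-time algorithm for the permanent is known — Valiant proved it \#\mathrm{P}-complete — and that asymmetry is the entire hardness argument.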

IQP — Instantaneous Quantum Polynomial-time

The task. A restricted class of quantum circuits: a layer of Hadamards, then gates diagonal in the computational (Z) basis, then a final layer of Hadamards — equivalently, circuits whose gates are all diagonal in the X basis. Bremner–Jozsa–Shepherd (2011) showed that sampling from the output of IQP circuits is classically hard under plausible complexity assumptions.

Why it is classically hard. If the output distributions of IQP circuits could be sampled classically — exactly, or approximately under additional conjectures — the polynomial hierarchy would collapse to its third level, a complexity-theoretic catastrophe.

Why it is useless. IQP captures a slice of quantum behaviour that is weaker than universal quantum computation but still hard to simulate. The output distributions do not encode any useful problem.

[Figure: the candidate supremacy tasks — three boxes. RCS (hard: \#\mathrm{P}-hard to simulate exactly; useless: output is just samples from a random distribution), Boson Sampling (hard: matrix permanents are \#\mathrm{P}-hard; useless: photon-count patterns encode no problem), IQP (hard: classical simulation would collapse the polynomial hierarchy; useless: no application). All chosen for provable classical hardness, not practical utility.]
Three candidate supremacy tasks. Each is structurally designed to be classically hard — the hardness is a *feature*, engineered for the demonstration. The flip side is that none of them solves any problem the world cares about. Supremacy is a benchmark for hardware capability, not a useful computation.

The pattern is not an accident. Tasks that maximise the quantum-vs-classical gap tend to live deep inside sampling complexity, where the output is a probability distribution rather than an answer. Answers are verifiable and useful; distributions are hard and useless. Designing a supremacy experiment means picking a task on the "hard and useless" side of that line and showing that the quantum hardware can reach it.

The 2019 Google Sycamore claim, at a glance

The Google supremacy paper, Quantum supremacy using a programmable superconducting processor (Arute et al., Nature 574, 505–510, 2019), claimed the first experimental demonstration. You will meet the full story in the next chapter; here is the high-level frame.

[Figure: the 2019 Sycamore supremacy claim, by the numbers — 53 superconducting qubits, circuit depth 20 (cycles of gates), 200 s quantum wall-clock time, 10,000 years claimed classical time; verification via the linear cross-entropy benchmark (XEB = 0.0024, versus 0 for uniform-random output).]
The headline numbers from the Google $2019$ announcement. The quantum runtime and the verification score (XEB) are measured facts. The classical-time estimate is an extrapolation based on Google's own simulator — and, as the next chapter details, was disputed almost immediately by IBM, who argued a tensor-network approach could complete the simulation in about $2.5$ days.

The "supremacy expires" phenomenon

A feature of supremacy claims that is genuinely surprising: they do not stay proved.

Here is the pattern. A quantum device performs task T in time t_Q. The team estimates classical time t_C using the best simulator they know. The gap t_C / t_Q is enormous and the supremacy claim is announced. Then, in the months and years after, classical algorithm developers read the paper, realise that task T has a specific structure their simulators can exploit, and publish a new classical simulator that finishes T in time t_C' with t_C' \ll t_C. Sometimes t_C' is small enough that the supremacy claim is ambiguous; sometimes not.

This happened to Google's 2019 claim.

Importantly: the classical improvements do not eliminate the supremacy gap at larger qubit counts. The quantum hardware keeps growing (USTC's Zuchongzhi pushed to 66 qubits; Jiuzhang operated in a regime of 113 photons), and the classical simulation cost grows exponentially with qubit count. The per-experiment claim can be softened by clever classical algorithms; the asymptotic story still favours quantum.

[Figure: classical simulation times for the Sycamore task, on a logarithmic axis — Google 2019: 10,000 years (initial estimate); IBM 2019: ~2.5 days (tensor network plus disk swap); Pan et al. 2021: ~15 hours (GPU cluster), with further improvements in 2022; quantum runtime: 200 seconds as the reference line. Supremacy claims get softer over time as classical simulators improve.]
The history of classical simulation of Google's $2019$ Sycamore task. The original $10{,}000$-year estimate has been progressively refined downward as classical tensor-network methods improve and specialised hardware becomes available. The per-experiment gap narrows, but the asymptotic regime (larger $n$, deeper circuits) still favours the quantum device exponentially.

Example 1: The Sycamore experiment at a high level

Pin down the supremacy definition by walking through how it applies to Google's 2019 claim.

The task T. Sample from the output distribution of a fixed random quantum circuit C on 53 qubits with 20 layers of gates. Specifically: run C on |0\rangle^{\otimes 53}, measure all qubits, record the outcome as a 53-bit string. Repeat \sim 10^6 times. Output: the empirical distribution over the 2^{53} \approx 9 \times 10^{15} possible outcomes.

Step 1 — Quantum tractability. The quantum device (Sycamore) ran the circuit and collected \sim 10^6 samples in wall-clock time t_Q \approx 200 seconds. Each individual sample takes microseconds; the bulk of 200 seconds is classical overhead — compiling the next circuit, reading out measurement results, and cycling the device between circuits.

Why 200 seconds is "practical": humans can wait 200 seconds. A supremacy claim that required a thousand-year quantum computation would itself be unverifiable in a reasonable lifetime.

Step 2 — Classical hardness. Google's team estimated, using their own in-house tensor-network simulator running on Summit, that producing the same number of samples to the same statistical fidelity would require \sim 10{,}000 years. This is t_C. The ratio is:

\frac{t_C}{t_Q} = \frac{10{,}000 \text{ years}}{200 \text{ seconds}} = \frac{10{,}000 \times 3.15 \times 10^7 \text{ s}}{200 \text{ s}} \approx 1.6 \times 10^9.

Why this is super-polynomial: the ratio 10^9 is not itself super-polynomial — it is a large constant. The super-polynomial claim is structural: as qubit count n grows, t_C scales at least like 2^n while t_Q scales polynomially. At n=53 the ratio happens to be about 10^9; at n=100 it would be astronomically larger.
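The arithmetic above is easy to check directly — a back-of-envelope script using the round numbers quoted in the text:

```python
# Reproduce the headline ratio from the 2019 claim:
# t_C ~ 10,000 years (Google's estimate), t_Q ~ 200 seconds.
SECONDS_PER_YEAR = 3.15e7                  # ~365.25 * 24 * 3600
t_classical = 10_000 * SECONDS_PER_YEAR    # seconds
t_quantum = 200.0                          # seconds

ratio = t_classical / t_quantum
print(f"classical/quantum ratio ~ {ratio:.1e}")   # ~1.6e9

# IBM's 2019 rebuttal: ~2.5 days classically instead of 10,000 years.
t_ibm = 2.5 * 24 * 3600
print(f"ratio vs IBM's estimate ~ {t_ibm / t_quantum:.0f}x")
```

Even against IBM's far more favourable classical estimate, Sycamore remains about three orders of magnitude faster — the dispute is over the size of the gap, not its existence at this circuit size.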

Step 3 — Verifiability. Here is the tricky condition. Sampling a distribution is not a yes-or-no answer to a question; you cannot directly check a sample is "correct." Instead, Google used the linear cross-entropy benchmark (XEB).

XEB works like this: for each observed sample x_i, compute (on a classical simulator, for small enough sub-circuits) the theoretical probability p_i = |\langle x_i | C | 0^{\otimes n} \rangle|^2 that an ideal quantum circuit would produce x_i. Then:

\mathrm{XEB} = 2^n \cdot \langle p_i \rangle - 1,

where \langle p_i \rangle is the average of p_i over observed samples. For a uniformly random sampler, \mathrm{XEB} \approx 0. For an ideal quantum circuit, \mathrm{XEB} \approx 1. Sycamore's measured XEB: 0.0024 — small in absolute terms (it corresponds to a circuit fidelity of roughly 0.2\%), but many standard deviations above the zero expected from uniform-random output, and statistically consistent with a noisy quantum circuit producing close-to-ideal samples a fraction of the time.

Why this is "verifiable": the XEB score cannot be faked by a classical device that doesn't actually know the circuit's amplitudes. A device producing uniformly random output (or any output not tracking the true distribution) gets \mathrm{XEB} \approx 0. Only a device that is actually implementing circuit C produces \mathrm{XEB} above noise.
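A toy numerical illustration of why XEB separates a faithful sampler from a garbage one. The Porter–Thomas (exponential) shape of the ideal probabilities is an assumption of this sketch — it holds for deep random circuits — and the distribution here is synthetic, not a simulation of Sycamore:

```python
import numpy as np

rng = np.random.default_rng(42)

def linear_xeb(samples, ideal_probs, n):
    """Linear cross-entropy benchmark: XEB = 2^n * <p(x_i)> - 1,
    averaged over the observed samples x_i."""
    return 2 ** n * ideal_probs[samples].mean() - 1.0

# Synthetic stand-in for a deep random circuit's output distribution:
# outcome probabilities with the Porter-Thomas (exponential) shape.
n = 10
dim = 2 ** n
p = rng.exponential(size=dim)
p /= p.sum()

m = 200_000
ideal_sampler = rng.choice(dim, size=m, p=p)     # faithful "device"
uniform_sampler = rng.integers(0, dim, size=m)   # garbage "device"

print(f"XEB, ideal sampler:   {linear_xeb(ideal_sampler, p, n):+.3f}")
print(f"XEB, uniform sampler: {linear_xeb(uniform_sampler, p, n):+.3f}")
```

The faithful sampler lands near +1 because it preferentially hits the high-probability outcomes; the uniform sampler lands near 0 because it has no idea where they are. A real noisy device interpolates: XEB roughly equals the circuit fidelity.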

The supremacy claim. All three conditions are met: quantum tractable (200 s), classically hard (10^4 years estimated), verifiable (XEB statistically significant). Therefore, under Google's 2019 data and methodology, this experiment demonstrates quantum supremacy on the task RCS.

The caveats. The classical estimate was Google's, using Google's simulator. IBM argued a different classical simulator could do it in \sim 2.5 days. The supremacy claim is sensitive to which classical algorithm you compare against. The claim is historical — it reflects what was known in October 2019 — and later work has continuously updated the classical benchmark.

[Figure: the same task, two wildly different wall-clock times — Sycamore (53 qubits, depth 20, ~10^6 samples) finishes in 200 seconds; Summit's tensor-network simulator was estimated at ~10,000 years for the same target distribution. Both verified via XEB (score ≈ 0.0024); ratio ~1.6 × 10^9, the basis of the 2019 supremacy claim.]
The supremacy claim in one picture. Same task, same target distribution, same verification protocol — but Sycamore completes it in $200$ seconds while Summit's classical simulator was estimated to need $10{,}000$ years. The quantum-to-classical ratio is the supremacy signal. The later IBM rebuttal ($2.5$ days) softened the per-experiment claim, but the asymptotic trend (wider gap at more qubits) remained.

Result. Google's 2019 experiment demonstrates quantum supremacy on the RCS task, under Google's comparison methodology. The demonstration does not produce a useful computation — the samples themselves solve no real-world problem — but it establishes that quantum hardware can, on a specifically engineered task, outrun the best available classical hardware by several orders of magnitude. That is precisely what supremacy was defined to mean.

Example 2: Why supremacy on RCS does not factor RSA-$2048$

A contrast that makes the "supremacy ≠ useful" distinction concrete. The question: does Sycamore's 2019 supremacy demonstration help break RSA-2048 encryption, the workhorse of online security?

Short answer: no. Not even slightly. Not in principle and not in practice.

Setup. RSA-2048 is a public-key cryptosystem whose security rests on the hardness of factoring a 2048-bit integer. Shor's algorithm (1994) is the quantum algorithm that factors integers in polynomial time — if you can run it on a large enough quantum computer.

Step 1 — The qubit requirement for Shor's on RSA-2048. Current estimates (Gidney and Ekerå, 2021) place the resource requirement for factoring a 2048-bit integer using the surface code for error correction at roughly 20 million physical qubits, with gate error rates around 10^{-3}, running for about 8 hours of wall-clock time.

Why 20 million, when Shor's "only" needs a few thousand logical qubits: the logical-to-physical overhead of surface-code error correction is roughly 1000-to-1 at current error rates. 4000 logical qubits × 1000 = 4 million; with ancillas, scratch, and margin, 20 million. Lower error rates would reduce this.
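The overhead arithmetic in the paragraph above, as a back-of-envelope script (the round numbers are the ones quoted in the text, not independent estimates):

```python
# Back-of-envelope for the Gidney-Ekera-style estimate quoted above.
logical_qubits = 4_000        # order of magnitude for a 2048-bit Shor run
physical_per_logical = 1_000  # surface-code overhead at ~1e-3 error rates
base = logical_qubits * physical_per_logical
print(f"base requirement: {base:,} physical qubits")   # 4,000,000

# With routing ancillas, magic-state factories, and margin, published
# estimates land near 20 million physical qubits.
total = 20_000_000
sycamore = 53
print(f"gap vs Sycamore: ~{total / sycamore:.1e}x")    # ~3.8e5
```

The 1000-to-1 overhead is itself error-rate dependent: push physical gate errors down an order of magnitude and the surface-code distance (hence the overhead) shrinks substantially.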

Step 2 — What Sycamore has. Sycamore has 53 physical qubits with gate error rates around 10^{-2} to 10^{-3} and no error correction. For comparison with Shor:

  • Sycamore: 53 qubits, no error correction.
  • Required for Shor on RSA-2048: \sim 20{,}000{,}000 physical qubits, with error correction.

Gap: a factor of \sim 4 \times 10^5 in qubit count, plus error correction that Sycamore does not have.

Step 3 — Why the supremacy demonstration does not translate. RCS is a sampling task; its classical hardness comes from the fact that tracking 2^{53} amplitudes is expensive. Shor's on RSA-2048 is a deterministic algorithmic task; it requires reliably implementing modular exponentiation on a 4000-logical-qubit register. These are different regimes. The hardware that demonstrated RCS supremacy at 53 qubits does not scale to 20 million qubits by any simple extension — you need genuine error correction, better gates, better coherence, completely different engineering. Supremacy on RCS at 53 qubits demonstrates that the NISQ (Noisy Intermediate-Scale Quantum) platform can run some task. It says little about the fault-tolerant regime where Shor's actually lives.

Why this matters for policy: the Indian National Quantum Mission (₹6000 crore, 2023) includes post-quantum cryptography migration as a strategic priority. The timeline for when Shor-capable machines arrive is genuinely uncertain — some estimates put it at the 2030s, others later — but treating Sycamore's supremacy as "RSA is now broken" is wrong. The relevant question is not "did Google achieve supremacy?" but "when will fault-tolerant Shor become feasible?" The two have very different answers.

Step 4 — A concrete sanity check. Imagine you asked Sycamore to factor RSA-2048 today. It cannot — it does not have the qubits, the error correction, or the coherence. Even if you somehow encoded a 4000-qubit Shor's algorithm into 53 physical qubits (impossible — the logical-qubit overhead forbids it), the gate errors would corrupt the computation long before the algorithm completed. Supremacy on RCS at 53 qubits implies nothing about Shor's on 2048-bit RSA.

[Figure: Sycamore vs Shor-on-RSA-2048 requirements — Sycamore has 53 physical qubits, no error correction, gate error ~10^{-3}; Shor on RSA-2048 needs ~20,000,000 surface-code-encoded physical qubits running for ~8 hours. Ratio ~4 × 10^5 in qubit count alone, plus error correction Sycamore does not have.]
The scale difference. Sycamore operates in the NISQ regime — small, noisy, no error correction. A fault-tolerant factoring of RSA-$2048$ needs a machine roughly $400{,}000$ times larger *in qubit count alone*, with error correction overhead the NISQ hardware does not provide. Supremacy on one regime does not automatically translate to capability in the other.

Result. Sycamore's supremacy demonstration does not help factor RSA-2048. The supremacy claim establishes that quantum hardware can, on a specific narrow task, outrun classical hardware in the NISQ regime. It says nothing about when or whether fault-tolerant, error-corrected quantum computing will arrive and what it will then be able to do. Reading supremacy as "quantum is now useful" conflates two very different engineering regimes and leads to bad predictions about cryptography, chemistry, and every other application that actually matters. The next chapter digs into the Sycamore experiment in full detail.

Common confusions

Going deeper

You have the definition, the candidate tasks, the 2019 claim at a high level, the "supremacy expires" phenomenon, and the useful-vs-narrow distinction. The rest of this section collects the more technical content: Preskill's original framing, the formal complexity-theoretic hardness proofs for the candidate tasks, the 2020–2024 follow-up experiments at USTC (Jiuzhang, Zuchongzhi), IBM's shift to "useful quantum advantage" as a counter-narrative, the ethics debate around the word "supremacy," and where 2026 experimental supremacy work stands in relation to error correction and the fault-tolerant transition.

Preskill's 2012 essay — the original context

John Preskill's Quantum computing and the entanglement frontier (arXiv:1203.5813) was written as a rapporteur talk for the 25th Solvay Conference on Physics, held in 2011 on the centennial of the first Solvay Conference. The essay uses "supremacy" almost in passing, as a term for a threshold condition. Key passages: Preskill argues that demonstrating supremacy on some task — any task — would be "an impressive milestone in the history of technology," distinct from and prerequisite to demonstrating useful quantum computing.

Preskill was explicit that supremacy tasks need not be useful: "We should recognise that attaining quantum supremacy may be much easier than building a useful quantum computer." He even proposed candidate tasks, including sampling from random circuit output distributions — seven years before Google's announcement.

Formal hardness of RCS

The classical hardness of random circuit sampling has been studied at several levels of rigour:

  • Worst case. Computing the output amplitudes of a general quantum circuit exactly is \#\mathrm{P}-hard.
  • Average case. Bouland–Fefferman–Nirkhe–Vazirani (2019) showed that exactly computing the output probabilities of random circuits remains \#\mathrm{P}-hard on average.
  • Approximate and finite-sample. Aaronson and Chen (2017) connected finite-sample benchmarks (heavy-output generation, cross-entropy) to conjectured hardness.
  • With noise. Aharonov et al. (2023) gave a polynomial-time classical algorithm for sampling from noisy random circuits at constant noise rates — a genuine limitation on noisy supremacy claims.

The implication: RCS supremacy sits on a formal foundation, but the formal hardness results apply to the ideal-noiseless regime. As noise increases, the classical complexity decreases, and the supremacy window narrows.

Boson sampling — Aaronson and Arkhipov 2011

The original quantum-advantage-in-a-sampling-model paper is The computational complexity of linear optics by Scott Aaronson and Alex Arkhipov (STOC 2011, arXiv:1011.3245). They proposed a hardware-friendly restricted model: indistinguishable photons through a linear optical interferometer. The central theorem: exact sampling of the output distribution is \#\mathrm{P}-hard (reducing to computing a matrix permanent), and approximate sampling is hard under two conjectures (the permanent-of-Gaussians conjecture and anticoncentration).

Boson sampling influenced the field enormously. It clarified that supremacy claims would live in the sampling-problem regime rather than the decision-problem regime, and it spurred experimental efforts at the University of Science and Technology of China (USTC), culminating in the Jiuzhang series of experiments.

USTC Jiuzhang — photonic supremacy

In 2020, the USTC group led by Jian-Wei Pan announced Jiuzhang (1.0), a photonic boson sampling experiment with 76 detected photons in a 100-mode interferometer. Classical simulation estimate: 2.5 billion years. In 2021, Jiuzhang 2.0 extended to 113 detected photons; Jiuzhang 3.0 followed in 2023. Each iteration widened the quantum-classical gap; all used Gaussian boson sampling, a variant better matched to available photon sources.

Photonic supremacy has a different flavour from superconducting supremacy: the interferometer operates at room temperature (though the single-photon detectors are cryogenic), the computational model is restricted (linear optics, no universal gate set), and the verification protocols differ.

USTC Zuchongzhi — superconducting supremacy extended

USTC's superconducting quantum computing effort (2021–2024) produced the Zuchongzhi processors (2.0 and 2.1, with up to 66 qubits, later extended in 3.0). These experiments pushed Google's 2019 RCS demonstration to more qubits and deeper circuits, widening the quantum-classical gap.

By 2024, the supremacy claim had been independently demonstrated on at least two hardware platforms (superconducting, photonic) at multiple institutions (Google, USTC). The claim is robust to hardware choice; the candidate tasks (RCS, boson sampling) are robust to experimental variation.

IBM's counter-framing: useful quantum advantage

IBM, after their 2019 rebuttal to Google's supremacy claim, has pushed a different frame: useful quantum advantage. The argument: supremacy is a narrow academic milestone; what matters is advantage on problems industry cares about. IBM's roadmap (published annually since 2020) targets specific useful-advantage demonstrations — simulation of specific chemistry problems, optimisation benchmarks — rather than pure sampling supremacy.

The two framings are compatible: supremacy is a capability threshold; useful advantage is a utility threshold. Both are legitimate milestones. The community has increasingly used "useful quantum advantage" as the more ambitious target, with supremacy understood as the earlier, narrower achievement.

The terminology debate

In 2019–2020, several groups (including the Nature editorial board in certain contexts) reconsidered the word "supremacy." The concerns: the word shares linguistic territory with "white supremacy," carries an exclusionary connotation, and was originally chosen without reflection on these associations. Alternative terms proposed: quantum advantage, quantum primacy, quantum transcendence, quantum ascendancy. None fully replaced "supremacy" — historical papers continue to use it — but "quantum advantage" has become the most common contemporary term.

This is not purely a political debate. The narrower word "supremacy" implied a binary victory; the broader word "advantage" captures the graded reality (small vs large speedup, useful vs useless task, noisy vs fault-tolerant regime) that the 2019–2026 evidence has supported. The terminology shift tracks the scientific shift toward acknowledging nuance.

Indian context — the National Quantum Mission's supremacy ambitions

India's National Quantum Mission (NQM), launched in 2023 with a ₹6000 crore budget over 8 years, explicitly includes quantum supremacy / advantage demonstrations as a strategic objective. The mission's four verticals — quantum computing, quantum communication, quantum sensing, quantum materials — have their own milestones; the computing vertical targets intermediate-scale quantum computers of 50–1000 physical qubits over the mission period.

The supremacy-vs-advantage distinction matters for Indian policy: demonstrating supremacy on a sampling task is significantly easier than demonstrating useful advantage on an industrial problem. The NQM's explicit framing treats these as separate goals. Indian researchers are active at both fronts — TIFR, IISc, IIT Madras, IIT Bombay, and the Raman Research Institute have groups working on either restricted hardware for sampling demonstrations or useful-advantage algorithms for near-term hardware. A few researchers of Indian origin (including at Google Quantum AI and IBM Quantum) have been co-authors on foundational supremacy papers.

Where this leads next

References

  1. John Preskill, Quantum computing and the entanglement frontier (2012) — the essay that coined "quantum supremacy." arXiv:1203.5813.
  2. Frank Arute et al. (Google AI Quantum), Quantum supremacy using a programmable superconducting processor (Nature 574, 505–510, 2019) — the Sycamore paper.
  3. Scott Aaronson and Alex Arkhipov, The computational complexity of linear optics (2011) — the boson sampling paper. arXiv:1011.3245.
  4. Edwin Pednault et al. (IBM), Leveraging secondary storage to simulate deep 54-qubit Sycamore circuits (2019) — the IBM response to Google's supremacy claim. arXiv:1910.09534.
  5. Wikipedia, Quantum supremacy — the curated reference with a running history of experimental claims and classical rebuttals.
  6. Scott Aaronson, Quantum supremacy: the gloves are off (blog post, 2019) — an accessible, technically careful discussion of the Google claim by one of the architects of the sampling-supremacy framework.