In short
Quantum supremacy is a narrow technical claim: there exists one specific task that a quantum device completes in reasonable time, and no classical computer can match that time using any known algorithm. The term was coined by John Preskill in 2012. Google claimed the first supremacy demonstration in October 2019 using a task called random circuit sampling — running a random 53-qubit circuit and collecting output samples, finishing in 200 seconds what they estimated would take a supercomputer 10{,}000 years. Supremacy is not the same as useful quantum advantage. The tasks chosen for supremacy experiments are specifically engineered to be hard classically; they have no practical use. Factoring RSA-2048, simulating useful chemistry, and breaking real encryption are all still years away. The community has largely shifted to the term quantum advantage — partly because "supremacy" carries uncomfortable political connotations, partly because it better captures the graded reality: classical simulation keeps improving, the gap keeps closing, and the cleanest claims keep being softened after follow-up work. You should treat supremacy as a benchmark, not a coronation.
In October 2019, Nature published a cover story with a striking claim: Google's 53-qubit Sycamore chip had, in 200 seconds, completed a computation that the best classical supercomputer on Earth would need roughly 10{,}000 years to reproduce. The headline shouted one word: supremacy. Across the internet, the word landed as "quantum computers have arrived." In physics departments, the reaction was more nuanced — impressed, but careful. Within days, IBM was arguing the same computation could be simulated classically in about 2.5 days, not 10{,}000 years. Later work shrank it further.
What actually happened? What did Google mean by supremacy, what did they not mean, and how should you read the word the next time you see it in a headline? This chapter unpacks the definition — carefully enough that when you meet it in the next chapter (where you dig into the Sycamore experiment itself) you already have the conceptual frame.
The short preview, before you meet the formal definition: supremacy is a benchmark, not a coronation. It is a claim about one specific task, chosen to be maximally hard classically and comparatively easy quantumly, with no requirement that the task be practically useful. The claim is significant — it is the first experimental evidence that quantum hardware can outrun classical hardware on something — but the gap between "something" and "something you care about" is still very much open territory.
Where the word came from
The term quantum supremacy was coined in 2012 by John Preskill, the Richard P. Feynman Professor of Theoretical Physics at Caltech, in an essay titled Quantum computing and the entanglement frontier. Preskill was writing for a workshop, and he needed a phrase for a question the community was converging on: at what point does a quantum computer become provably more powerful than any classical machine, not in theory (Shor had done that in 1994) but in demonstrated practice?
His answer, in effect: call the threshold "quantum supremacy." It is the moment an experimentalist can point at a real machine and say, truthfully, that no classical computer can reproduce what it just did — on some well-defined task, not necessarily a useful one.
The word itself was never meant to glorify. In context, Preskill was clear that supremacy was a narrow technical achievement, not a declaration of quantum dominance. But words carry connotations that physicists cannot fully control. By 2020, many researchers had begun using quantum advantage instead — partly because "supremacy" shares linguistic territory with "white supremacy" and other terms the field did not want to echo, and partly because "advantage" is scientifically more accurate: it implies a spectrum (a small advantage, a large one, a practical one) rather than a binary.
You will see both words in the literature. This article uses supremacy when discussing the original 2012 framing and the 2019 Google claim that made the term famous, and advantage when discussing the softer, broader claim that has displaced it.
The formal definition
With the history in place, here is the technical statement.
Quantum supremacy
A quantum supremacy demonstration is an experiment showing that a specific, well-defined computational task T satisfies all three of the following conditions:
- Quantum tractability. There exists a quantum device that completes T in time t_Q, where t_Q is short enough to be wall-clock practical (seconds to hours).
- Classical hardness. The best known classical algorithm for T requires time t_C that is super-polynomially larger than t_Q. "Best known" means: against every algorithm anyone has yet found, run on the best supercomputer available.
- Verifiability. The quantum device's output on T is checkable — either directly, or statistically, or under a complexity-theoretic assumption — so that an observer can confirm the device actually solved T rather than producing garbage.
The three conditions are doing distinct work.
Quantum tractability is where the actual hardware lives. Your device must finish the task in a reasonable time — a few minutes, a few hours, not centuries. Otherwise the claim is vacuous. Quantum circuits of a certain depth on a certain number of qubits take a measurable wall-clock time to execute; that time is t_Q.
Classical hardness is the comparison that matters. If the best classical algorithm takes 10^6 times longer than the quantum device, that is a factor of 10^6. If the best classical algorithm takes 2^{50} times longer, that is a quadrillion — and as the problem size grows, the gap grows exponentially. Supremacy requires a super-polynomial gap: the classical running time is not bounded by any polynomial in the problem size relative to the quantum running time. The strongest claims push this gap into the "takes longer than the age of the universe" regime.
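To make the super-polynomial claim concrete, here is a toy calculation — illustrative cost functions, not measured data: a classical cost growing as 2^n against a quantum cost growing as n^3. No polynomial in n bounds the ratio.

```python
# Toy scaling comparison (illustrative, not measured): classical cost 2^n
# versus quantum cost n^3. The ratio grows faster than any polynomial in n.

def classical_cost(n: int) -> int:
    return 2 ** n       # exponential in problem size

def quantum_cost(n: int) -> int:
    return n ** 3       # polynomial in problem size

for n in (20, 40, 60, 80):
    print(f"n={n:2d}  classical/quantum ~ {classical_cost(n) / quantum_cost(n):.1e}")
```

Doubling n from 40 to 80 multiplies the ratio by roughly 2^{40}, which is the sense in which the gap is structural rather than a fixed constant.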
Verifiability is the catch. Most problems quantum hardware is good at have an easy verification step (factoring: multiply the factors back together; check the product). But the tasks that maximise classical hardness tend to be sampling problems — producing samples from a probability distribution — and sampling from a distribution is not directly verifiable by running a test. The verification step for sampling tasks relies on statistical benchmarks like the linear cross-entropy benchmark (XEB, which you will meet in the next chapter), or on complexity-theoretic arguments that the sampler's output is statistically incompatible with random noise.
Why verifiability is tricky: a quantum device that outputs random garbage would trivially produce samples "classical computers cannot reproduce" — because the garbage has no structure anyone can match. Supremacy demands that the device produces the right kind of samples, from the distribution the ideal quantum circuit would produce. Statistical benchmarks like XEB quantify how close the observed samples are to that ideal.
Hype check. Supremacy is not the claim that "quantum is now useful." It is the claim that "quantum hardware is, on this one narrow task, strictly more powerful than the best classical hardware." Useful quantum advantage — solving a problem people care about, faster or better than any classical method — is a separate, harder bar. Factoring 2048-bit RSA, simulating drug molecules at chemical accuracy, breaking industrial encryption: none of these is what supremacy demonstrated. Treating "Google achieved quantum supremacy" as "quantum computers now work" is reading a narrow technical sentence as a sweeping civilisational one.
Why the chosen tasks are useless
This is the hardest part of the conceptual story and the part the hype gets wrong most often. The tasks used for supremacy experiments are chosen precisely because they are hard classically, and the same features that make them hard classically make them useless in practice.
Consider the three main candidate tasks.
Random Circuit Sampling (RCS)
The task. Pick a random quantum circuit C on n qubits, depth d. Run it on the initial state |0\rangle^{\otimes n}. Measure all qubits in the computational basis. Repeat many times, producing samples x_1, x_2, \ldots, x_M \in \{0, 1\}^n. Your output is the set of samples, or equivalently, an empirical estimate of the output distribution.
Why it is classically hard. To simulate the circuit classically, you must either track the full 2^n-dimensional state vector (infeasible memory for n \geq 50), or use tensor-network methods whose cost scales exponentially in circuit depth. Bouland, Fefferman, Nirkhe, and Vazirani, building on the Aaronson–Arkhipov framework, have proved (under standard complexity-theoretic conjectures) that computing the output probabilities of random quantum circuits is \#\mathrm{P}-hard, even on average — which rules out efficient exact classical sampling by standard arguments.
Why it is useless. The "answer" is just samples from a random distribution. There is no underlying problem being solved, no computation being performed on real data. You cannot factor a number, simulate a molecule, or optimise a supply chain with RCS. It is a benchmark, nothing more.
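The RCS task above can be sketched numerically for a handful of qubits with a full statevector in NumPy. For simplicity this toy version applies Haar-random unitaries to the whole register rather than Sycamore's actual gate set; the point is the shape of the task and the 2^n memory wall, not fidelity to the hardware.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_circuit_sample(n_qubits: int, depth: int, n_samples: int):
    """Toy RCS: evolve |0...0> through random unitary layers, then sample."""
    dim = 2 ** n_qubits
    state = np.zeros(dim, dtype=complex)
    state[0] = 1.0                                   # |0>^n
    for _ in range(depth):
        # Haar-ish random unitary via QR decomposition (stand-in for a gate layer).
        z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
        q, r = np.linalg.qr(z)
        d = np.diag(r)
        q = q * (d / np.abs(d))                      # fix column phases
        state = q @ state
    probs = np.abs(state) ** 2
    probs /= probs.sum()                             # guard against float drift
    return rng.choice(dim, size=n_samples, p=probs), probs

samples, probs = random_circuit_sample(n_qubits=5, depth=4, n_samples=1000)

# The classical memory wall: the statevector doubles per qubit.
# At n = 50, 2^50 complex128 amplitudes is 2^54 bytes = 16 pebibytes.
print(f"statevector bytes at n=50: {2**50 * 16:.2e}")
```

At five qubits this runs instantly; at fifty, the statevector alone exceeds any machine's memory, which is the first of the two classical obstacles described above.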
Boson Sampling
The task. Send n indistinguishable photons into a linear optical interferometer with m modes (typically m \approx n^2). Measure the output photon-count pattern in each mode. Repeat for samples. Output: the distribution over photon-count patterns.
Why it is classically hard. Aaronson and Arkhipov (2011) proved that classical simulation of boson sampling reduces to computing the permanent of large complex matrices — a \#\mathrm{P}-hard problem. Their paper established boson sampling as the first supremacy-style candidate, predating RCS.
Why it is useless. Same story. The output is a photon-count distribution with no embedded problem. Boson sampling is, structurally, a lovely demonstration of quantum statistics (the Hong–Ou–Mandel effect generalised). But no one has a use for the samples themselves.
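The permanent reduction can be made concrete. Ryser's inclusion-exclusion formula is essentially the best known exact method, and it is itself exponential — O(2^n \cdot n^2) in the plain form below — which is exactly the classical wall the hardness theorem exploits. A small sketch:

```python
import numpy as np

def permanent_ryser(a: np.ndarray) -> complex:
    """Exact matrix permanent via Ryser's inclusion-exclusion formula.
    Cost here is O(2^n * n^2) -- exponential, and essentially the best known."""
    n = a.shape[0]
    total = 0j
    for mask in range(1, 2 ** n):                 # every non-empty column subset S
        cols = [j for j in range(n) if mask >> j & 1]
        row_sums = a[:, cols].sum(axis=1)         # sum_{j in S} a_ij for each row i
        total += (-1) ** len(cols) * np.prod(row_sums)
    return (-1) ** n * total

# Sanity checks: perm of the all-ones 3x3 matrix is 3! = 6;
# perm([[1,2],[3,4]]) = 1*4 + 2*3 = 10.
print(abs(permanent_ryser(np.ones((3, 3)))))                  # ~ 6.0
print(abs(permanent_ryser(np.array([[1., 2.], [3., 4.]]))))   # ~ 10.0
```

Unlike the determinant, which Gaussian elimination handles in O(n^3), no sub-exponential algorithm for the permanent is known — the Aaronson–Arkhipov result leans on precisely this asymmetry.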
IQP — Instantaneous Quantum Polynomial-time
The task. A restricted class of quantum circuits in which every gate is diagonal in the X basis — equivalently, a layer of Hadamards on all qubits, a block of gates diagonal in the Z basis, then a final layer of Hadamards. Bremner–Jozsa–Shepherd (2011) showed that exact classical sampling from IQP circuits is hard under reasonable complexity assumptions.
Why it is classically hard. Efficient exact classical sampling from IQP output distributions would collapse the polynomial hierarchy — a complexity-theoretic catastrophe.
Why it is useless. IQP captures a slice of quantum behaviour that is weaker than universal quantum computation but still hard to simulate. The output distributions do not encode any useful problem.
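A toy numerical rendering of the Hadamard–diagonal–Hadamard structure (the choice of phases as random multiples of \pi/4, standing in for a circuit of diagonal gates like T and CZ, is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(1)

def iqp_sample(n: int, n_samples: int):
    """Toy IQP sampler: H^n, a random diagonal-Z phase layer, H^n again."""
    dim = 2 ** n
    h1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    h = h1
    for _ in range(n - 1):
        h = np.kron(h, h1)                       # H tensor n
    # Random diagonal phases: integer multiples of pi/4 (illustrative choice).
    diag = np.exp(1j * np.pi / 4 * rng.integers(0, 8, size=dim))
    state = np.zeros(dim, dtype=complex)
    state[0] = 1.0
    state = h @ (diag * (h @ state))             # H^n . D . H^n |0...0>
    probs = np.abs(state) ** 2
    probs /= probs.sum()
    return rng.choice(dim, size=n_samples, p=probs)

print(iqp_sample(n=4, n_samples=8))
```

Note there are no gates between the Hadamard layers that fail to commute with each other — that commuting ("instantaneous") structure is what makes the model weaker than universal computation, yet its output distribution is still believed hard to sample classically.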
The symmetry is not an accident. Tasks that maximise the quantum-vs-classical gap tend to live deep inside sampling complexity, where the output is a probability distribution rather than an answer. Answers are verifiable and useful; distributions are hard and useless. The design of a supremacy experiment is to pick a task on the "hard and useless" side of that line and show that the quantum hardware can reach it.
The 2019 Google Sycamore claim, at a glance
The Google supremacy paper, Quantum supremacy using a programmable superconducting processor (Arute et al., Nature 574, 505–510, 2019), claimed the first experimental demonstration. You will meet the full story in the next chapter; here is the high-level frame.
- Hardware: Sycamore, a 53-qubit superconducting transmon chip (one qubit of the original 54 failed, leaving 53 active).
- Task: Random Circuit Sampling (RCS).
- Circuit: depth 20, random single-qubit rotations interleaved with fixed two-qubit iSWAP-like gates.
- Runs: \sim 10^6 samples per circuit configuration, across 10 random circuit instances.
- Verification: the linear cross-entropy benchmark (XEB) — a statistical score measuring how close the samples are to the ideal distribution. Sycamore's XEB: 0.0024 (uniform random gives \approx 0; perfect circuit would give \approx 1).
- Quantum time: 200 seconds to complete.
- Classical estimate: 10{,}000 years on Summit, the world's fastest supercomputer at the time, using Google's best simulator.
The "supremacy expires" phenomenon
A feature of supremacy claims that is genuinely surprising: they do not stay proved.
Here is the pattern. A quantum device performs task T in time t_Q. The team estimates classical time t_C using the best simulator they know. The gap t_C / t_Q is enormous and the supremacy claim is announced. Then, in the months and years after, classical algorithm developers read the paper, realise that task T has a specific structure their simulators can exploit, and publish a new classical simulator that finishes T in time t_C' with t_C' \ll t_C. Sometimes t_C' is small enough that the supremacy claim is ambiguous; sometimes not.
This happened to Google's 2019 claim.
- October 2019: Google claims 10{,}000 years for the classical simulation of Sycamore's task.
- October 2019 (one week later): IBM publishes a preprint arguing that using tensor-network contraction with heavy disk swap, the simulation could finish in \sim 2.5 days on the same Summit supercomputer.
- 2021: Pan, Chen, and Zhang from the Chinese Academy of Sciences announce a classical simulation of Sycamore's original task in about 15 hours on a cluster of 512 GPUs.
- 2022: further improvements; the gap continues to shrink.
Importantly: the classical improvements do not eliminate the supremacy gap at larger qubit counts. The quantum hardware keeps growing (USTC's Zuchongzhi pushed to 66 qubits; Jiuzhang operated in a regime of 113 photons), and the classical simulation cost grows exponentially with qubit count. The per-experiment claim can be softened by clever classical algorithms; the asymptotic story still favours quantum.
Example 1: The Sycamore experiment at a high level
Pin down the supremacy definition by walking through how it applies to Google's 2019 claim.
The task T. Sample from the output distribution of a fixed random quantum circuit C on 53 qubits with 20 layers of gates. Specifically: run C on |0\rangle^{\otimes 53}, measure all qubits, record the outcome as a 53-bit string. Repeat \sim 10^6 times. Output: the empirical distribution over the 2^{53} \approx 9 \times 10^{15} possible outcomes.
Step 1 — Quantum tractability. The quantum device (Sycamore) ran the circuit and collected \sim 10^6 samples in wall-clock time t_Q \approx 200 seconds. Each individual sample takes microseconds; the bulk of 200 seconds is classical overhead — compiling the next circuit, reading out measurement results, and cycling the device between circuits.
Why 200 seconds is "practical": humans can wait 200 seconds. A supremacy claim that required a thousand-year quantum computation would itself be unverifiable in a reasonable lifetime.
Step 2 — Classical hardness. Google's team estimated, using their own in-house tensor-network simulator running on Summit, that producing the same number of samples to the same statistical fidelity would require \sim 10{,}000 years. This is t_C. The ratio is:

\frac{t_C}{t_Q} \approx \frac{10{,}000\ \text{years}}{200\ \text{s}} \approx \frac{3.2 \times 10^{11}\ \text{s}}{200\ \text{s}} \approx 1.6 \times 10^9.
Why this is super-polynomial: the ratio 10^9 is not itself super-polynomial — it is a large constant. The super-polynomial claim is structural: as qubit count n grows, t_C scales at least like 2^n while t_Q scales polynomially. At n=53 the ratio happens to be about 10^9; at n=100 it would be astronomically larger.
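The same numbers as back-of-envelope arithmetic (figures from the text; the 365.25-day year is an assumption of the calculation):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600          # ~3.16e7 s

t_q = 200.0                                     # Sycamore wall-clock, seconds
t_c = 10_000 * SECONDS_PER_YEAR                 # Google's classical estimate

print(f"Google ratio t_C/t_Q ~ {t_c / t_q:.1e}")      # ~1.6e9
# IBM's 2.5-day counter-estimate collapses the same ratio:
t_c_ibm = 2.5 * 24 * 3600
print(f"IBM ratio    t_C/t_Q ~ {t_c_ibm / t_q:.0f}")  # 1080
```

The contrast between the two ratios is the whole "supremacy expires" story in two lines: the quantum time is fixed by the hardware, but the classical denominator depends on whose simulator you believe.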
Step 3 — Verifiability. Here is the tricky condition. Sampling a distribution is not a yes-or-no answer to a question; you cannot directly check a sample is "correct." Instead, Google used the linear cross-entropy benchmark (XEB).
XEB works like this: for each observed sample x_i, compute (on a classical simulator, for circuits small enough to verify) the theoretical probability p_i = |\langle x_i | C | 0^{\otimes n} \rangle|^2 that an ideal quantum circuit would produce x_i. Then:

\mathrm{XEB} = 2^n \langle p_i \rangle - 1,
where \langle p_i \rangle is the average of p_i over observed samples. For a uniformly random sample, \mathrm{XEB} \approx 0. For an ideal quantum circuit, \mathrm{XEB} \approx 1. Sycamore's measured XEB: 0.0024 — small in absolute terms, but many standard deviations above the uniform baseline of zero, and statistically consistent with a noisy quantum circuit producing close-to-ideal samples at a fidelity of roughly 0.2\%.
Why this is "verifiable": the XEB score cannot be faked by a classical device that doesn't actually know the circuit's amplitudes. A device producing uniformly random output (or any output not tracking the true distribution) gets \mathrm{XEB} \approx 0. Only a device that is actually implementing circuit C produces \mathrm{XEB} above noise.
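The XEB computation itself is a one-liner once you have the ideal probabilities. A self-contained sketch, using an exponentially distributed stand-in for a random circuit's output probabilities (real random circuits approach this Porter–Thomas shape; the stand-in is an assumption for illustration, not Sycamore data):

```python
import numpy as np

rng = np.random.default_rng(2)

def linear_xeb(samples: np.ndarray, ideal_probs: np.ndarray, n_qubits: int) -> float:
    """Linear cross-entropy benchmark: XEB = 2^n * <p(x_i)> - 1."""
    return 2 ** n_qubits * ideal_probs[samples].mean() - 1

n = 10
dim = 2 ** n
# Porter-Thomas-like stand-in for an ideal random circuit's output distribution.
p = rng.exponential(size=dim)
p /= p.sum()

ideal_samples = rng.choice(dim, size=200_000, p=p)   # faithful "quantum" sampler
noise_samples = rng.integers(0, dim, size=200_000)   # garbage (uniform) sampler

print(f"XEB, faithful sampler: {linear_xeb(ideal_samples, p, n):.3f}")  # near 1
print(f"XEB, uniform garbage:  {linear_xeb(noise_samples, p, n):.3f}")  # near 0
```

The faithful sampler scores near 1 because it preferentially lands on high-probability bit strings; the uniform sampler averages p over all strings and so scores near 0 — exactly the spoofing-resistance argument in the paragraph above.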
The supremacy claim. All three conditions are met: quantum tractable (200 s), classically hard (10^4 years estimated), verifiable (XEB statistically significant). Therefore, under Google's 2019 data and methodology, this experiment demonstrates quantum supremacy on the task RCS.
The caveats. The classical estimate was Google's, using Google's simulator. IBM argued a different classical simulator could do it in \sim 2.5 days. The supremacy claim is sensitive to which classical algorithm you compare against. The claim is historical — it reflects what was known in October 2019 — and later work has continuously updated the classical benchmark.
Result. Google's 2019 experiment demonstrates quantum supremacy on the RCS task, under Google's comparison methodology. The demonstration does not produce a useful computation — the samples themselves solve no real-world problem — but it establishes that quantum hardware can, on a specifically engineered task, outrun the best available classical hardware by several orders of magnitude. That is precisely what supremacy was defined to mean.
Example 2: Why supremacy on RCS does not factor RSA-2048
A contrast that makes the "supremacy ≠ useful" distinction concrete. The question: does Sycamore's 2019 supremacy demonstration help break RSA-2048 encryption, the workhorse of online security?
Short answer: no. Not even slightly. Not in principle and not in practice.
Setup. RSA-2048 is a public-key cryptosystem whose security rests on the hardness of factoring a 2048-bit integer. Shor's algorithm (1994) is the quantum algorithm that factors integers in polynomial time — if you can run it on a large enough quantum computer.
Step 1 — The qubit requirement for Shor's on RSA-2048. Current estimates (Gidney and Ekerå, 2021) place the resource requirement for factoring a 2048-bit integer using the surface code for error correction at roughly 20 million physical qubits, with gate error rates around 10^{-3}, running for about 8 hours of wall-clock time.
Why 20 million, when Shor's "only" needs a few thousand logical qubits: the logical-to-physical overhead of surface-code error correction is roughly 1000-to-1 at current error rates. 4000 logical qubits × 1000 = 4 million; with ancillas, scratch, and margin, 20 million. Lower error rates would reduce this.
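The overhead arithmetic, spelled out (the 1000-to-1 overhead and the 4000-logical-qubit figure are the approximate values quoted above):

```python
# Back-of-envelope resource estimate for Shor on RSA-2048, using the
# approximate figures quoted in the text (Gidney-Ekera regime).
logical_qubits = 4_000             # logical register for 2048-bit Shor, roughly
physical_per_logical = 1_000       # surface-code overhead at ~1e-3 error rates

bare_physical = logical_qubits * physical_per_logical
print(f"bare estimate: {bare_physical:,} physical qubits")     # 4,000,000
# Ancillas, routing, and margin bring the published figure to ~20 million.
print(f"gap vs Sycamore's 53 qubits: ~{20_000_000 // 53:,}x")  # ~377,358x
```

Both inputs are sensitive to hardware error rates: halving the physical error rate shrinks the code distance needed, and the physical-per-logical overhead falls accordingly.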
Step 2 — What Sycamore has. Sycamore has 53 physical qubits with gate error rates around 10^{-2} to 10^{-3} and no error correction. For comparison with Shor:
- Sycamore: 53 qubits, no error correction.
- Required for Shor on RSA-2048: \sim 20{,}000{,}000 physical qubits, with error correction.
Gap: a factor of \sim 4 \times 10^5 in qubit count, plus error correction that Sycamore does not have.
Step 3 — Why the supremacy demonstration does not translate. RCS is a sampling task; its classical hardness comes from the fact that tracking 2^{53} amplitudes is expensive. Shor's on RSA-2048 is a deterministic algorithmic task; it requires reliably implementing modular exponentiation on a 4000-logical-qubit register. These are different regimes. The hardware that demonstrated RCS supremacy at 53 qubits does not scale to 20 million qubits by any simple extension — you need genuine error correction, better gates, better coherence, completely different engineering. Supremacy on RCS at 53 qubits demonstrates that the NISQ (Noisy Intermediate-Scale Quantum) platform can run some task. It says little about the fault-tolerant regime where Shor's actually lives.
Why this matters for policy: the Indian National Quantum Mission (₹6000 crore, 2023) includes post-quantum cryptography migration as a strategic priority. The timeline for when Shor-capable machines arrive is genuinely uncertain — some estimates put it at the 2030s, others later — but treating Sycamore's supremacy as "RSA is now broken" is wrong. The relevant question is not "did Google achieve supremacy?" but "when will fault-tolerant Shor become feasible?" The two have very different answers.
Step 4 — A concrete sanity check. Imagine you asked Sycamore to factor RSA-2048 today. It cannot — it does not have the qubits, the error correction, or the coherence. Even if you somehow encoded a 4000-qubit Shor's algorithm into 53 physical qubits (impossible — the logical-qubit overhead forbids it), the gate errors would corrupt the computation long before the algorithm completed. Supremacy on RCS at 53 qubits implies nothing about Shor's on 2048-bit RSA.
Result. Sycamore's supremacy demonstration does not help factor RSA-2048. The supremacy claim establishes that quantum hardware can, on a specific narrow task, outrun classical hardware in the NISQ regime. It says nothing about when or whether fault-tolerant, error-corrected quantum computing will arrive and what it will then be able to do. Reading supremacy as "quantum is now useful" conflates two very different engineering regimes and leads to bad predictions about cryptography, chemistry, and every other application that actually matters. The next chapter digs into the Sycamore experiment in full detail.
Common confusions
- "Quantum supremacy means quantum computers are now useful." No. Supremacy is a narrow technical claim about one specifically hard task. Usefulness is a separate, harder milestone — often called useful quantum advantage — requiring that the task be one people care about (factoring, chemistry, optimisation) rather than a sampling benchmark. As of 2026, no useful quantum advantage has been demonstrated at scale.
- "Supremacy proves quantum computers work." Weak claim. Supremacy proves that quantum hardware can run some task the best classical hardware cannot match. It is evidence of capability on that narrow task. General quantum computing — being able to reliably run any quantum circuit of interest — requires error correction and fault tolerance, which NISQ hardware does not provide. Saying "quantum computers work because Google demonstrated supremacy" is like saying "airplanes fly because the Wright Brothers went 120 feet" — true and important, but the implication is not what most people hear.
- "Classical simulators will always catch up and supremacy is meaningless." Partially true. Per-experiment classical simulators do improve, sometimes dramatically (the IBM 2.5-day response to Google's 10{,}000-year estimate is the canonical example). But the asymptotic separation — as quantum hardware grows to more qubits and deeper circuits — remains favourable for quantum. The supremacy framework is inherently sensitive to algorithmic progress; that is a feature, not a bug. A claim that survives aggressive classical attack for a few years is a better claim than one that falls immediately.
- "Google's 2019 experiment broke encryption / was a breakthrough in cryptography." Incorrect. The RCS task has nothing to do with factoring, discrete log, or any cryptographic problem. Sycamore in 2019 had 53 qubits and no error correction. Breaking RSA-2048 via Shor's algorithm requires on the order of millions of physical qubits with error correction. These are different engineering regimes.
- "Supremacy and advantage are the same thing." Not quite. Supremacy (Preskill's original term) emphasises a binary threshold: some task is quantumly tractable and classically intractable, full stop. Advantage is the more modern, broader term, allowing for gradations (small advantage, large advantage, useful advantage). Most practitioners today use "advantage" except when specifically discussing the 2012-era framing or the Sycamore paper.
- "Quantum supremacy has been formally proved." It is an experimental claim relative to conjectured classical hardness. The classical-hardness side of the definition assumes that no polynomial-time classical algorithm exists for the task — this is a complexity-theoretic conjecture, not a theorem. Aaronson–Arkhipov and others have proved formal hardness under assumptions like "the polynomial hierarchy does not collapse," but these assumptions are unproven (though widely believed). Supremacy is strong empirical evidence, not a mathematical certainty.
Going deeper
You have the definition, the candidate tasks, the 2019 claim at a high level, the "supremacy expires" phenomenon, and the useful-vs-narrow distinction. The rest of this section collects the more technical content: Preskill's original framing, the formal complexity-theoretic hardness proofs for the candidate tasks, the 2020–2024 follow-up experiments at USTC (Jiuzhang, Zuchongzhi), IBM's shift to "useful quantum advantage" as a counter-narrative, the ethics debate around the word "supremacy," and where 2026 experimental supremacy work stands in relation to error correction and the fault-tolerant transition.
Preskill's 2012 essay — the original context
John Preskill's Quantum computing and the entanglement frontier (arXiv:1203.5813) was written for the 25th Solvay Conference on Physics (2011), marking the centennial of the first Solvay Council. The essay uses "supremacy" almost in passing, as a term for a threshold condition. The key passage: Preskill argues that demonstrating supremacy on some task — any task — would be "an impressive milestone in the history of technology," distinct from and prerequisite to demonstrating useful quantum computing.
Preskill was explicit that supremacy tasks need not be useful: "We should recognise that attaining quantum supremacy may be much easier than building a useful quantum computer." He even proposed candidate tasks, including sampling from random circuit output distributions — seven years before Google's announcement.
Formal hardness of RCS
The classical hardness of random circuit sampling has been studied in several levels of rigour.
- Approximate sampling in total variation distance. Bouland, Fefferman, Nirkhe, and Vazirani (2018) proved that classical simulation of RCS to additive error in total variation distance is \#\mathrm{P}-hard under standard complexity assumptions (specifically, the non-collapse of the polynomial hierarchy and average-case hardness of permanent computation).
- XEB-style hardness. Aaronson and Gunn (2020) formalised the XEB verification procedure and showed that achieving a positive XEB score is itself classically hard under standard conjectures.
- Noise considerations. Real quantum circuits have noise. Aharonov, Gao, Landau, Liu, and Vazirani (2022) gave a polynomial-time classical algorithm for RCS with any constant rate of noise per gate and no error correction — implying that supremacy demonstrations are sensitive to the fidelity of the underlying gates.
The implication: RCS supremacy sits on a formal foundation, but the formal hardness results apply to the ideal-noiseless regime. As noise increases, the classical complexity decreases, and the supremacy window narrows.
Boson sampling — Aaronson and Arkhipov 2011
The original quantum-advantage-in-a-sampling-model paper is The computational complexity of linear optics by Scott Aaronson and Alex Arkhipov (STOC 2011, arXiv:1011.3245). They proposed a hardware-friendly restricted model: indistinguishable photons through a linear optical interferometer. The central theorem: exact sampling of the output distribution is \#\mathrm{P}-hard (reducing to computing a matrix permanent), and approximate sampling is hard under two conjectures (the permanent-of-Gaussians conjecture and anticoncentration).
Boson sampling influenced the field enormously. It clarified that supremacy claims would live in the sampling-problem regime rather than the decision-problem regime, and it spurred experimental efforts at the University of Science and Technology of China (USTC), culminating in the Jiuzhang series of experiments.
USTC Jiuzhang — photonic supremacy
In 2020, the USTC group led by Jian-Wei Pan announced Jiuzhang (1.0), a photonic boson sampling experiment with 76 detected photons in a 100-mode interferometer. Classical simulation estimate: 2.5 billion years. In 2021, Jiuzhang 2.0 extended to 113 detected photons; Jiuzhang 3.0 followed in 2023. Each iteration widened the quantum-classical gap and used Gaussian boson sampling (a variant better matched to available photon sources).
Photonic supremacy has a different flavour from superconducting supremacy: much of the hardware operates at room temperature (though the single-photon detectors are cryogenic), the computational model is restricted (linear optics, no universal gates), and the verification protocols differ.
USTC Zuchongzhi — superconducting supremacy extended
USTC's superconducting quantum computing effort (2021–2024) produced the Zuchongzhi processors: 2.0 and 2.1 (66-qubit chips) and 3.0 (105 qubits). These experiments extended Google's 2019 RCS demonstration to more qubits and deeper circuits, pushing the quantum-classical gap wider.
By 2024, the supremacy claim had been independently demonstrated on at least two hardware platforms (superconducting, photonic) at multiple institutions (Google, USTC). The claim is robust to hardware choice; the candidate tasks (RCS, boson sampling) are robust to experimental variation.
IBM's counter-framing: useful quantum advantage
IBM, after their 2019 rebuttal to Google's supremacy claim, has pushed a different frame: useful quantum advantage. The argument: supremacy is a narrow academic milestone; what matters is advantage on problems industry cares about. IBM's roadmap (published annually since 2020) targets specific useful-advantage demonstrations — simulation of specific chemistry problems, optimisation benchmarks — rather than pure sampling supremacy.
The two framings are compatible: supremacy is a capability threshold; useful advantage is a utility threshold. Both are legitimate milestones. The community has increasingly used "useful quantum advantage" as the more ambitious target, with supremacy understood as the earlier, narrower achievement.
The terminology debate
In 2019–2020, several groups (including the Nature editorial board in certain contexts) reconsidered the word "supremacy." The concerns: the word shares linguistic territory with "white supremacy," carries an exclusionary connotation, and was originally chosen without reflection on these associations. Alternative terms proposed: quantum advantage, quantum primacy, quantum transcendence, quantum ascendancy. None fully replaced "supremacy" — historical papers continue to use it — but "quantum advantage" has become the most common contemporary term.
This is not purely a political debate. The narrower word "supremacy" implied a binary victory; the broader word "advantage" captures the graded reality (small vs large speedup, useful vs useless task, noisy vs fault-tolerant regime) that the 2019–2026 evidence has supported. The terminology shift tracks the scientific shift toward acknowledging nuance.
Indian context — the National Quantum Mission's supremacy ambitions
India's National Quantum Mission (NQM), launched in 2023 with a ₹6000 crore budget over 8 years, explicitly includes quantum supremacy / advantage demonstrations as a strategic objective. The mission's four verticals — quantum computing, quantum communication, quantum sensing, quantum materials — have their own milestones; the computing vertical targets:
- A 50–1000 qubit superconducting or photonic platform by 2028.
- A useful-advantage demonstration (chemistry, optimisation) by 2031.
- Post-quantum cryptography migration of critical infrastructure (Aadhaar, UPI, banking) by 2032.
The supremacy-vs-advantage distinction matters for Indian policy: demonstrating supremacy on a sampling task is significantly easier than demonstrating useful advantage on an industrial problem. The NQM's explicit framing treats these as separate goals. Indian researchers are active on both fronts — TIFR, IISc, IIT Madras, IIT Bombay, and the Raman Research Institute have groups working on either restricted hardware for sampling demonstrations or useful-advantage algorithms for near-term hardware. A few researchers of Indian origin (including at Google Quantum AI and IBM Quantum) have been co-authors on foundational supremacy papers.
Where this leads next
- Google Random Circuit Sampling — the next chapter, with the full Sycamore 2019 experiment and IBM's rebuttal.
- Boson Sampling — the Aaronson-Arkhipov proposal and the USTC Jiuzhang experiments.
- IQP Circuits — the restricted quantum model that underlies another supremacy candidate.
- Useful Quantum Advantage — the contrasting industrial-utility milestone.
- BQP defined — the complexity class that supremacy tasks live in (or near).
- Lessons about quantum speedups — the broader taxonomy of when and why quantum outperforms classical.
References
- John Preskill, Quantum computing and the entanglement frontier (2012) — the essay that coined "quantum supremacy." arXiv:1203.5813.
- Frank Arute et al. (Google AI Quantum), Quantum supremacy using a programmable superconducting processor (Nature 574, 505–510, 2019) — the Sycamore paper.
- Scott Aaronson and Alex Arkhipov, The computational complexity of linear optics (2011) — the boson sampling paper. arXiv:1011.3245.
- Edwin Pednault et al. (IBM), Leveraging secondary storage to simulate deep 54-qubit Sycamore circuits (2019) — the IBM response to Google's supremacy claim. arXiv:1910.09534.
- Wikipedia, Quantum supremacy — the curated reference with a running history of experimental claims and classical rebuttals.
- Scott Aaronson, Quantum supremacy: the gloves are off (blog post, 2019) — an accessible, technically careful discussion of the Google claim by one of the architects of the sampling-supremacy framework.