In short
In October 2019, Google's Sycamore chip — 53 active superconducting transmon qubits — ran 20-layer random quantum circuits and collected about a million output samples per circuit in 200 seconds of wall-clock time. Verification was done via the linear cross-entropy benchmark (XEB): Sycamore's measured XEB was \approx 0.0024, far above the uniform-random baseline of 0 and statistically consistent with a noisy but correct quantum sampler. Google estimated that reproducing the same sampling fidelity on Summit (the world's fastest supercomputer at the time) would take about 10{,}000 years — hence the supremacy claim. One week later, IBM published a preprint arguing that a full state-vector simulation, spilled to Summit's secondary (disk) storage, could complete the same task classically in about 2.5 days. Pan, Chen, and Zhang pushed this further in 2021–2022 with tensor-network contraction, reaching \sim 15 hours on GPU clusters. The per-experiment gap has narrowed substantially, but the asymptotic trend (wider gap at more qubits and deeper circuits) still favours quantum. Follow-up supremacy experiments — USTC's Zuchongzhi (66 qubits, superconducting) and Jiuzhang (113-photon boson sampling) — have extended the regime. As of 2026, the Sycamore result is considered the first credible experimental supremacy demonstration, with the important caveat that classical simulators have not stopped improving.
On 23 October 2019, Nature published a paper whose author list begins "Frank Arute, Kunal Arya, Ryan Babbush …" — a team of 77 researchers from Google AI Quantum and academic collaborators — titled Quantum supremacy using a programmable superconducting processor. The paper reported that Google's Sycamore chip had, in 200 seconds, done a computation that they estimated would take Summit (the world's fastest supercomputer at the time, housed at Oak Ridge National Laboratory) approximately 10{,}000 years.
Newspapers around the world — The New York Times, The Hindu, BBC News — led with the word "supremacy." Social media took it further: "quantum computers have arrived," "RSA is broken," "we are living through a singularity." None of that was true. What Google had done was narrower, deeper, more careful — and, in the days after publication, almost immediately contested by IBM.
This chapter walks through the experiment in detail. You already know the shape of the quantum supremacy definition — the three conditions (quantum tractability, classical hardness, verifiability) that Google's claim must satisfy. Here you meet the experiment that instantiated those conditions for the first time, the verification protocol (XEB), the classical-simulation rebuttal from IBM, the follow-up classical improvements, and the current state of the argument in 2026.
The chip: Sycamore, 54 qubits, one broken
Sycamore is a superconducting quantum processor — a small chip of aluminium deposited on silicon, cooled in a dilution refrigerator to about 20 millikelvin (colder than outer space). Each qubit is a transmon — a small superconducting circuit in which a Josephson junction is shunted by a large capacitor, and whose lowest two energy levels are used as |0\rangle and |1\rangle.
The chip was designed with 54 qubits in a rectangular-ish grid. On the specific run-up to the supremacy experiment, one qubit failed to calibrate reliably, leaving 53 usable qubits. The team worked around it — the circuits were adapted to skip the bad qubit and use only the connected 53-qubit sub-grid.
- Qubit count: 53 (of 54 total).
- Connectivity: each qubit coupled to its nearest neighbours in a 2D grid (typically 4 neighbours per interior qubit, fewer at the edges).
- Single-qubit gate fidelity: 99.84\% median.
- Two-qubit gate fidelity: 99.38\% median — the weakest link and the dominant source of error.
- Readout fidelity: 96.2\% — measuring a qubit is noticeably less reliable than gating it.
- Circuit timing: single-qubit gates \sim 25 ns; two-qubit gates \sim 12 ns; full 20-layer circuit \sim 1.1 μs of quantum dynamics, then 1 μs of measurement.
The random circuits
The task was not to run some specific algorithm. It was to run a random circuit — a circuit whose gates are picked from a template, with specific single-qubit gate choices randomised — and sample from its output distribution.
The circuit structure, in each layer:
- Single-qubit random gates. Each of the 53 qubits receives one of three gates (\sqrt{X}, \sqrt{Y}, \sqrt{W} where W = (X + Y)/\sqrt{2}), chosen independently at random. These gates are all 90° rotations about different axes in the Bloch-sphere equatorial plane.
- Two-qubit fixed gates. A specific two-qubit gate — a fixed fSim gate close to \sqrt{\mathrm{iSWAP}} with a controlled-phase kick — is applied to a specific subset of nearest-neighbour pairs. The pattern of which pairs gate together in which layer rotates through four configurations (call them A, B, C, D), chosen to maximise the entanglement spread across the chip.
Repeat for m layers. For the supremacy experiment, m = 20. The total number of gates is roughly 53 \times 20 = 1060 single-qubit gates and \approx 430 two-qubit gates.
For each sampled circuit instance (one particular random choice of single-qubit gates in each layer), the task was: initialise all 53 qubits to |0\rangle, apply the circuit, measure all qubits in the computational basis, record the 53-bit outcome. Repeat around 10^6 times to build up an empirical distribution.
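The sampling task above can be sketched at toy scale. The following is a minimal, illustrative simulator — 8 qubits on a 1D chain rather than 53 on a 2D grid, and CZ standing in for Google's fSim gate (both are simplifications, not the real Sycamore layout) — written with NumPy:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
W = (X + Y) / np.sqrt(2)

def rot90(P):
    """90-degree rotation about Pauli axis P: exp(-i * pi/4 * P)."""
    return (I2 - 1j * P) / np.sqrt(2)

SINGLE_QUBIT_GATES = [rot90(X), rot90(Y), rot90(W)]   # sqrt(X), sqrt(Y), sqrt(W)
CZ = np.diag([1, 1, 1, -1]).astype(complex).reshape(2, 2, 2, 2)

def apply_1q(state, U, q):
    """Apply a 2x2 gate U to qubit q of a (2,)*n state tensor."""
    state = np.tensordot(U, state, axes=([1], [q]))
    return np.moveaxis(state, 0, q)

def apply_2q(state, U, q1, q2):
    """Apply a two-qubit gate (shape (2,2,2,2)) to qubits q1 < q2."""
    state = np.tensordot(U, state, axes=([2, 3], [q1, q2]))
    return np.moveaxis(state, [0, 1], [q1, q2])

def random_circuit_state(n=8, layers=10, seed=0):
    """|0...0> evolved under `layers` of random 1q gates + CZ brickwork."""
    rng = np.random.default_rng(seed)
    state = np.zeros((2,) * n, dtype=complex)
    state[(0,) * n] = 1.0
    for layer in range(layers):
        for q in range(n):                       # random single-qubit layer
            state = apply_1q(state, SINGLE_QUBIT_GATES[rng.integers(3)], q)
        for q in range(layer % 2, n - 1, 2):     # alternating two-qubit pattern
            state = apply_2q(state, CZ, q, q + 1)
    return state

def sample_bitstrings(state, shots, seed=1):
    """Measure all qubits `shots` times; returns integers encoding bitstrings."""
    probs = np.abs(state.reshape(-1)) ** 2
    rng = np.random.default_rng(seed)
    return rng.choice(probs.size, size=shots, p=probs)

state = random_circuit_state()
samples = sample_bitstrings(state, shots=100_000)
```

At n = 8 the state tensor has 256 amplitudes and this runs instantly; the same loop at n = 53 would need the 144 petabytes discussed in the next section.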
The state-space problem
Before understanding why this is hard classically, let's see what quantum state the circuit creates.
At the start of the circuit, the qubits are in |0\rangle^{\otimes 53} — a product state with one complex amplitude (equal to 1). After one layer of Hadamard-like single-qubit gates, the state is close to a uniform superposition over all 2^{53} basis states. After two-qubit gates, the state becomes highly entangled, and after 20 layers, the state is a very generic-looking entangled state with random-ish amplitudes spread over all 2^{53} basis states.
The number 2^{53} is worth holding in your head. It is approximately 9.0 \times 10^{15} — nine quadrillion. Storing a state vector with that many complex amplitudes requires:

2^{53} \times 16 \text{ bytes} \approx 1.44 \times 10^{17} \text{ bytes} = 144 \text{ petabytes}
Why 16 bytes per amplitude: a complex number with double-precision real and imaginary parts takes 8 + 8 = 16 bytes. Lower-precision (single-precision, 8 bytes) would cut this in half but introduce rounding errors that compound over 20 layers of gates.
For comparison, Summit — the world's fastest supercomputer at the time — had about 250 petabytes of total storage (counting RAM and disk). Sycamore's state vector barely fits. The Frontier supercomputer (launched 2022) has about 700 petabytes; Fugaku is in a similar range. Storing the state is technically possible on a few top-tier supercomputers; simulating the quantum evolution (applying gates and tracking amplitudes) on that state is much more expensive than just storing it.
Linear cross-entropy benchmark (XEB)
The verification question: given that Sycamore produced samples x_1, x_2, \ldots, x_M \in \{0,1\}^{53}, how do you know these samples actually came from the circuit's output distribution, rather than from a random noise source?
The linear cross-entropy benchmark is Google's answer. The procedure:
- For each sample x_i, compute the ideal amplitude. Let C be the intended circuit. Classically compute — tensor-network contraction makes individual amplitudes tractable even for 53-qubit circuits — the probability p_i = |\langle x_i | C | 0^{53}\rangle|^2 that an ideal quantum circuit would produce sample x_i.
- Average over observed samples. Define:

\mathrm{XEB} = 2^{53} \cdot \frac{1}{M} \sum_{i=1}^{M} p_i - 1
- Interpret. For a noisy quantum device, this score lies between 0 (uniformly random samples — no quantum coherence) and 1 (perfect circuit execution — every sample lands where the ideal amplitudes put probability mass).
Why this formula: the ideal amplitudes p_i average, in expectation over samples from a uniform distribution, to 1/2^{53}. Multiplying by 2^{53} normalises this to 1. Subtracting 1 shifts the baseline to 0 for uniform sampling. A correct quantum circuit — which produces samples with probability proportional to p_i itself (the Porter–Thomas distribution) — gives \langle p_i \rangle \approx 2/2^{53} in expectation, yielding \mathrm{XEB} = 1.
A positive XEB score demonstrates that the observed samples concentrate on outcomes the ideal circuit would favour, which is overwhelmingly unlikely to happen by chance unless the device is actually implementing the circuit.
Sycamore's measured XEB: \approx 0.0024 (with statistical error bars that placed it comfortably above 0). A score of 0.0024 is small, but small in a predictable way: multiplying the survival probability of every operation (per-gate errors of a few \times 10^{-3}, over roughly 1500 gates and readouts) predicts a whole-circuit fidelity of order 0.002, and XEB is designed to estimate exactly that fidelity. The key point: 0.0024 is statistically far above the null hypothesis of uniform random sampling, under which XEB concentrates near 0 with a standard error of order 10^{-3} per million samples.
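A toy numerical illustration of how XEB separates a quantum sampler from noise (a sketch, not Google's analysis pipeline): draw Porter–Thomas-style probabilities from a random Gaussian state, then mix ideal samples with uniform noise at a chosen fidelity f. The measured XEB tracks f:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12
dim = 2 ** n

# Haar-random-like state: i.i.d. complex Gaussian amplitudes, normalised.
# Its outcome probabilities follow the Porter-Thomas distribution, the same
# statistics a deep random circuit produces.
amps = rng.normal(size=dim) + 1j * rng.normal(size=dim)
probs = np.abs(amps) ** 2
probs /= probs.sum()

def xeb(samples):
    """Linear cross-entropy benchmark: 2^n * mean(p(x_i)) - 1."""
    return dim * probs[samples].mean() - 1

def noisy_samples(fidelity, shots):
    """Crude depolarising model: ideal sample w.p. f, uniform noise w.p. 1-f."""
    ideal = rng.choice(dim, size=shots, p=probs)
    noise = rng.integers(dim, size=shots)
    keep = rng.random(shots) < fidelity
    return np.where(keep, ideal, noise)

for f in (1.0, 0.5, 0.0):
    print(f"fidelity {f:.1f}: XEB = {xeb(noisy_samples(f, 400_000)):+.3f}")
```

Each printed XEB lands near its fidelity f, which is the whole logic of reading Sycamore's 0.0024 as a fidelity estimate.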
The classical-simulation argument
With 53 qubits and a 20-layer circuit, what does classical simulation cost?
Approach 1 — State-vector simulation. Store all 2^{53} complex amplitudes; at each gate, update every amplitude. Cost: 144 petabytes of storage, which exceeds any machine's RAM and forces the state onto disk, plus roughly 1000 gate applications touching all 2^{53} amplitudes each — around 10^{19}–10^{20} floating-point operations in total. On Summit at 200 petaFLOPS the arithmetic alone is under a minute per circuit; the real cost is moving petabytes between disk and memory for every gate. Note that once the final state is computed, drawing 10^6 samples from it is cheap, so the bottleneck is memory and I/O, not the number of samples.
Why this is not the 10{,}000-year estimate: Google's number came from a different algorithm — a hybrid Schrödinger–Feynman simulation that splits the chip into patches small enough to fit in Summit's RAM, at the cost of an overhead exponential in the number of gates crossing the cut — and from matching Sycamore's full XEB fidelity rather than bare sample generation. Different methodological choices yield different extrapolations — this is part of why the claim is sensitive to how you compare.
Approach 2 — Tensor-network simulation. Represent the circuit as a network of small tensors and contract it, paying a cost exponential in the width of the contraction ordering rather than in the qubit count. For depth 20 on a 53-qubit grid the naive contraction is still astronomical, but clever contraction orderings, slicing, and secondary-storage tricks reduce the cost dramatically.
Google's published estimate: 10{,}000 years on Summit, based on their hybrid Schrödinger–Feynman simulator. This was the basis of the supremacy claim.
The IBM rebuttal: 2.5 days, not 10{,}000 years
One week after Google's paper, IBM published a preprint (Pednault, Gunnels, Nannicini, Horesh, Wisnieff; arXiv:1910.09534) arguing that Google's 10{,}000-year estimate was off by a factor of \sim 10^6.
The IBM argument, simplified:
- Store the full state vector, using disk rather than RAM. Instead of tensor networks or circuit-splitting, IBM proposed a straight Schrödinger simulation of all 2^{53} amplitudes, made feasible by Summit's enormous secondary storage.
- Swap amplitudes between RAM and disk aggressively. Summit has \sim 250 PB of disk storage. By carefully batching the computation and swapping blocks of amplitudes between memory and disk, the simulation can be done without needing all 144 PB in RAM simultaneously.
- Result: \sim 2.5 days, not 10{,}000 years. IBM did not actually run this simulation — their paper was a projection based on their method's scaling — but the methodology was published and reviewable.
This dropped the classical-time estimate by a factor of \sim 10^{6}. The quantum-to-classical ratio fell from \sim 10^9 to \sim 10^3 — still a supremacy claim in some reading, but a much narrower one. IBM also argued, more pointedly, that "supremacy" was the wrong frame; the right frame was "useful quantum advantage," which neither Google nor anyone else had yet demonstrated.
Further classical improvements (2020–2024)
The IBM 2.5-day estimate did not end the classical simulation story. Subsequent work has continued to tighten the gap.
- Huang et al., 2020. Tensor-network algorithms with bond-dimension compression achieve Sycamore simulation in \sim 200 seconds per output sample on small clusters — but each sample is expensive, so generating 10^6 samples is still months.
- Pan, Chen, Zhang (USTC), 2021. Solving the sampling problem of the Sycamore quantum circuits. Using tensor-network contraction with a specific memory hierarchy, they simulated the full Sycamore task in \approx 15 hours on a cluster of 512 GPUs. The quantum-to-classical ratio at this point dropped to \sim 300. Still quantum-faster, but narrowly.
- Liu et al. (Alibaba, 2022). Further tensor-network improvements; simulation time \approx 5 days on a single high-end workstation. Gap at \sim 2000.
- Kalachev et al. (2022). Continued tensor-network improvements, pushing the boundary further.
What is striking: these classical algorithms exploit the specific structure of Sycamore's circuits (the 2D grid topology, the particular fSim gate, the exact 20-layer depth) to compress the simulation. A slightly deeper circuit, a slightly different gate set, or a slightly different topology can make the classical simulation dramatically harder.
Follow-up experiments: USTC and others
The 2019 Google result prompted a flurry of follow-up supremacy experiments. Two Chinese platforms stand out.
USTC Jiuzhang (photonic, 2020–2022)
Jiuzhang (named after the classical Chinese mathematical text Nine Chapters on the Mathematical Art) is a series of photonic boson sampling experiments at the University of Science and Technology of China, led by Jian-Wei Pan.
- Jiuzhang 1.0 (2020): 76 detected photons in a 100-mode Gaussian boson sampling experiment. Classical simulation estimate: 2.5 billion years.
- Jiuzhang 2.0 (2021): extended to 113 detected photons. The classical simulation estimate grew to beyond the age of the universe.
- Jiuzhang 3.0 (2023): up to 255 detected photons, a new record.
Jiuzhang is fundamentally different hardware from Sycamore — it is a photonic system, not superconducting; it is room-temperature (aside from the photon sources); and it samples from boson-sampling distributions rather than random-circuit-sampling (RCS) output. But the supremacy claim pattern is similar: a well-defined task, a measurable quantum runtime, an estimated super-polynomial classical simulation cost.
USTC Zuchongzhi (superconducting, 2021–2024)
Zuchongzhi (named after a 5th-century Chinese mathematician) is USTC's superconducting supremacy platform.
- Zuchongzhi 2.0 (2021): 56 qubits, 20-layer circuits. Direct extension of Google Sycamore.
- Zuchongzhi 2.1 (2021): 66 qubits, 24-layer circuits. Larger quantum-classical gap than Sycamore 2019.
- Zuchongzhi 3.0 (2024): 105 qubits, pushing further into regimes where classical simulation becomes astronomical.
Each Zuchongzhi iteration extends the Sycamore-style claim and stays ahead of the classical simulation frontier.
IBM Heron and Condor (2023–2024)
IBM's own superconducting hardware has grown: Heron (133 qubits, 2023) and Condor (1121 qubits, 2023). These machines are designed for error correction and useful advantage rather than supremacy — they target industrial problems (chemistry, optimisation) rather than RCS benchmarks.
Example 1: The size of the 53-qubit state vector
A concrete computation to pin down why classical simulation is expensive.
Setup. Sycamore has 53 active qubits. A quantum state on 53 qubits is a complex vector in a 2^{53}-dimensional Hilbert space — i.e., 2^{53} complex amplitudes.
Step 1 — Count the amplitudes. The number of basis states for 53 qubits is:

2^{53} \approx 9.0 \times 10^{15}
Why every basis state matters: in a generic entangled state (the kind produced by a random circuit), essentially all 2^{53} amplitudes are non-zero and cannot be compressed without losing accuracy. Classical simulation must track all of them.
Step 2 — Bytes per amplitude. A complex number in double precision (IEEE 754) is 16 bytes (8 bytes for real part, 8 for imaginary).
Step 3 — Total memory. Multiply amplitudes by bytes:

2^{53} \times 16 \text{ bytes} \approx 1.44 \times 10^{17} \text{ bytes} = 144 \text{ petabytes}
Why petabyte: 1 petabyte = 10^{15} bytes = 1000 terabytes = 1 million gigabytes. A typical laptop has a terabyte of disk; the Sycamore state vector is 144{,}000 laptops' worth of disk — and for fast simulation it would all need to sit in RAM.
Step 4 — Compare to available hardware.
- Laptop: 8–64 GB RAM. Cannot store 144 PB. Cannot simulate.
- Workstation: up to \sim 2 TB RAM. Cannot store. Cannot simulate.
- Summit (2019, the fastest supercomputer): \sim 10 PB of combined RAM and node-local storage, 250 PB disk. Cannot fit 144 PB in RAM; disk-swap approaches can manage, at the cost of per-gate slowdowns.
- Frontier (2022): \sim 10 PB RAM, 700 PB disk. Similar regime.
Step 5 — Add one qubit. Going from 53 to 54 qubits doubles the amplitudes to 2^{54}: 288 petabytes. 55 qubits: 576 PB. 60 qubits: 18 exabytes (1.8 \times 10^{19} bytes). At 60 qubits, no supercomputer on earth can store the state vector.
Result. A full state-vector classical simulation of Sycamore requires 144 PB of storage — on the ragged edge of what the best supercomputers can handle, and completely infeasible for any smaller machine. This is why state-vector simulation costs t_C \gg t_Q, and why tensor-network methods (which avoid storing the full state) are where the classical-simulation action is. The memory wall is the core reason the supremacy claim has force.
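The arithmetic of Steps 1–5 fits in one short script (using the 16-bytes-per-amplitude convention from the text):

```python
def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory to store a full n-qubit state vector in double-precision complex."""
    return 2 ** n_qubits * bytes_per_amplitude

# Print the memory wall: one doubling per added qubit.
for n in (30, 40, 53, 54, 55, 60):
    petabytes = state_vector_bytes(n) / 1e15
    print(f"{n} qubits: {petabytes:12.3f} PB")
```

This reproduces the figures in the text: 144 PB at 53 qubits, doubling with every qubit added, and roughly 18 exabytes at 60 qubits.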
Example 2: Reading Sycamore's XEB score
The measured XEB of 0.0024 is small in absolute terms. What does it tell you?
Step 1 — The uniform baseline. If Sycamore were outputting uniformly random bitstrings (ignoring the circuit entirely), the average ideal amplitude over samples would be:

\langle p_i \rangle = 1/2^{53}
Plugging into the XEB formula:

\mathrm{XEB}_{\text{uniform}} = 2^{53} \cdot (1/2^{53}) - 1 = 0
Why zero: random sampling weights every bitstring equally, so the average amplitude is just 1/2^{53}, giving XEB = 0 by construction. Any positive XEB is evidence the sampler is not uniform.
Step 2 — The ideal-circuit baseline. If Sycamore were running the circuit perfectly (zero noise), the samples would follow the Porter–Thomas distribution — an exponential distribution of outcome probabilities, \Pr(p) \propto e^{-2^{53} p}, predicted by random-matrix theory for random quantum circuits. For this distribution:

\langle p_i \rangle = 2/2^{53}
giving \mathrm{XEB}_{\text{ideal}} = 2^{53} \cdot (2/2^{53}) - 1 = 1.
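Both baselines can be checked numerically, under the simplifying assumption that the outcome probabilities are drawn from an exponential (Porter–Thomas) distribution rather than computed from a real circuit:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
dim = 2 ** n

# Porter-Thomas outcome probabilities: exponential with mean 1/2^n,
# normalised into a valid probability distribution.
p = rng.exponential(scale=1.0 / dim, size=dim)
p /= p.sum()

uniform_mean = p.mean()       # E[p] when bitstrings are sampled uniformly
circuit_mean = (p * p).sum()  # E[p] when bitstrings are sampled from p itself

print(f"XEB, uniform sampler: {dim * uniform_mean - 1:+.4f}")   # ~ 0
print(f"XEB, ideal sampler:   {dim * circuit_mean - 1:+.4f}")   # ~ +1
```

The uniform sampler probes the plain average 1/2^n (XEB = 0); sampling from the circuit's own distribution weights each outcome by its probability, giving 2/2^n (XEB = 1).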
Step 3 — Sycamore's measurement. The observed XEB is 0.0024. The statistical uncertainty (error bar), from \sim 10^6 samples, is much smaller — around 10^{-4}. So 0.0024 \pm 0.0001: unambiguously positive, unambiguously below 1.
Step 4 — Interpret. The noise model for Sycamore predicts that with per-operation errors of order 10^{-3}–10^{-2} over \sim 1500 gates and readouts, the whole-circuit fidelity is:

F \approx (0.9938)^{430} \cdot (0.9984)^{1060} \cdot (0.962)^{53} \approx 0.002

For random circuits under a depolarising-like noise model, XEB is, to good approximation, an estimator of exactly this whole-circuit fidelity:

\mathrm{XEB} \approx \prod_{\text{operations}} (1 - e_g)

with device-specific corrections from correlated errors and imperfect readout. The measured 0.0024 is consistent with the noise model, and far above zero. This is the evidence that Sycamore is really running the circuit, not producing noise.
Step 5 — Many standard errors above the uniform baseline. A useful way to put the number: under the uniform-sampling null hypothesis, the XEB estimator has a standard error of roughly 1/\sqrt{M} \approx 10^{-3} for M = 10^6 samples from a single circuit; aggregating samples across the full set of circuit instances brings the combined uncertainty down to \sim 10^{-4} (the error bar in Step 3). Sycamore's 0.0024 therefore sits tens of standard errors above zero — not a marginal effect, but a decisive statistical signature of quantum-correlated sampling.
Why this matters: XEB is not just "a number that looks small." Its expected value under the null hypothesis (uniform random sampling) is pinned at 0, and with millions of samples the fluctuations around 0 are small enough that even a value of a few \times 10^{-3} is highly significant.
Result. Sycamore's XEB of 0.0024 is what you expect from a noisy quantum device running the intended 20-layer circuit with per-gate errors around 10^{-3}. It sits tens of standard errors above the uniform-random baseline, providing strong statistical evidence that the circuit is being executed correctly. The number is small in absolute terms only because the underlying task has 2^{53} possible outcomes — any clearly positive XEB in this regime is a large signal.
Common confusions
- "Google proved quantum is faster than classical." Only for this specific task (RCS), only under Google's comparison methodology, and only in the NISQ (noisy, no error correction) regime. Useful tasks — factoring, chemistry, optimisation — have not seen analogous demonstrations. Scaling claims beyond the specific task to general quantum computing is unjustified.
- "RSA is broken because of Sycamore." No. Sycamore has 53 qubits, no error correction, per-gate error \sim 10^{-3}. Factoring RSA-2048 with Shor's algorithm requires roughly 20 million physical qubits with surface-code error correction — a completely different engineering regime. Sycamore in 2019 is to Shor-on-RSA-2048 as the Wright Brothers' flight is to a Boeing 777.
- "The IBM rebuttal killed the supremacy claim." No — it softened it. IBM's 2.5-day estimate was never actually executed; it was a projection. Subsequent classical simulations (Pan–Chen–Zhang 2021) have brought the estimate down further, but the per-experiment gap remains substantial and the asymptotic gap (at more qubits, deeper circuits) remains exponential. The claim was "quantum supremacy on this task in 2019"; it is still defensible that quantum hardware did something no classical system could easily match.
- "Classical simulations will always catch up." For any specific experiment, classical simulators improve and the gap shrinks. But each classical-simulation improvement exploits a specific structure (the 2D grid, the particular gate set, the exact depth). Changing the hardware slightly (more qubits, different topology, non-Clifford gates) can make the classical simulation dramatically harder again. The supremacy frontier moves forward, not backward.
- "XEB of 0.0024 sounds terrible — how can that be supremacy?" The number is small because the output distribution lives in a space of 2^{53} outcomes; any non-trivial signal in that space is numerically tiny. XEB of 0.0024 sits tens of standard errors above uniform-random; it is a highly significant experimental signal, just not a large absolute value. Reading it as "tiny signal" is a scale-mismatch error.
- "Sycamore is outdated; we have better machines now." True — Zuchongzhi (105 qubits), IBM Condor (1121 qubits), and continuing photonic boson sampling have all exceeded Sycamore's 53-qubit benchmark. But Sycamore is historically significant as the first credible supremacy demonstration, and its methodology (random circuits + XEB) remains a standard benchmark.
- "Supremacy is just a marketing term." Not quite. Preskill coined it in 2012 as a precise technical threshold, separate from "useful quantum advantage." The word carries connotations (acknowledged by the community from 2020 onward — see quantum supremacy defined), but the underlying technical milestone is well-defined and has been operationalised by multiple experiments.
Going deeper
You have the Sycamore chip design, the random circuit structure, the XEB verification, the classical simulation estimates on both sides of the rebuttal, and the post-Sycamore landscape. The rest of this section collects the technical content: XEB statistics in detail, tensor-network simulation methods, the noise model for Sycamore, the post-supremacy research frontier, IBM's shift to useful advantage, and Indian contributions.
XEB statistics in detail
XEB has a specific statistical structure. For an ideal n-qubit circuit with Porter–Thomas output distribution:
- Expected value of a single p_i (random sample): 1/2^n.
- Expected value of 2^n \cdot p_i: 1, so raw 2^n \cdot p_i has mean 1 and variance 1.
- XEB estimator averages M samples; estimator variance is 1/M.
For M = 10^6: XEB standard error is \sqrt{1/10^6} = 10^{-3}, so a single circuit's million samples place the measured 0.0024 only a few standard errors above 0.

The experiment's statistical weight comes from aggregation: samples were collected across multiple circuit instances, and the combined uncertainty on the fidelity estimate was \sim 10^{-4}, putting the result tens of standard errors above the uniform-random null. Quoted significance figures vary with exactly which null hypothesis and which sample pool is tested — which is why different write-ups report different sigma counts.
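The 1/\sqrt{M} scaling of the estimator can be checked empirically — again with synthetic Porter–Thomas probabilities as a stand-in for real circuit amplitudes:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 2 ** 14
M = 10_000         # samples per XEB estimate
trials = 200       # independent estimates, to measure the estimator's spread

# Porter-Thomas probabilities, normalised.
p = rng.exponential(scale=1.0 / dim, size=dim)
p /= p.sum()

# XEB under the uniform-sampling null, repeated `trials` times.
scores = [dim * p[rng.integers(dim, size=M)].mean() - 1 for _ in range(trials)]

print(f"empirical std of XEB: {np.std(scores):.4f}")
print(f"predicted 1/sqrt(M):  {M ** -0.5:.4f}")
```

The empirical spread matches 1/\sqrt{M} = 0.01, confirming that significance is set by the sample count M, not by the 2^n size of the outcome space.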
Noise model for Sycamore
The dominant noise sources in Sycamore, ranked by impact:
- Two-qubit gate error. \sim 6.2 \times 10^{-3} per gate. Sources: imperfect control pulses on the tunable couplers, residual ZZ coupling, control-pulse distortion.
- Single-qubit gate error. \sim 1.6 \times 10^{-3} per gate.
- Readout error. \sim 3.8 \times 10^{-2} (per qubit, per readout).
- Decoherence (T1, T2). T1 \sim 15 μs, T2 \sim 10 μs; circuit duration \sim 1.1 μs, so decoherence contributes \sim 10^{-2} fidelity loss over the full circuit.
- Correlated errors. Cross-talk between nearby qubits, cosmic-ray-induced correlated losses — less common but significant for long runs.
The total circuit fidelity predicted by this model is \sim 0.002–0.003, consistent with the measured XEB of 0.0024.
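Multiplying out the error budget is a two-line calculation (gate and readout counts as quoted earlier in the chapter; an uncorrelated-error approximation, not Google's full noise analysis):

```python
# Per-operation error rates from the ranked list above.
e_2q, e_1q, e_read = 6.2e-3, 1.6e-3, 3.8e-2
# Operation counts for the 53-qubit, 20-layer supremacy circuits.
n_2q, n_1q, n_read = 430, 1060, 53

# Crude uncorrelated model: multiply every operation's survival probability.
fidelity = (1 - e_2q) ** n_2q * (1 - e_1q) ** n_1q * (1 - e_read) ** n_read
print(f"predicted circuit fidelity ~ {fidelity:.4f}")   # ~ 0.0016
```

The crude product lands near 0.0016 — the same order of magnitude as the measured 0.0024; the quoted 0.002–0.003 range comes from a fuller error model.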
Tensor-network simulation techniques
The classical simulation improvements since 2019 have all been variations on tensor networks. The core ideas:
- Exact contraction via treewidth. The quantum circuit is represented as a tensor network (a graph of tensors). Simulation cost scales exponentially in the treewidth of the contraction tree, which depends on the circuit's geometry: for a 2D grid of n qubits at depth d, the relevant width grows roughly like \min(n, d\sqrt{n}), so shallow circuits are exponentially cheaper to contract than the 2^n worst case.
- Matrix Product States (MPS). A linear tensor-network ansatz efficient for low-entanglement states. Not great for Sycamore (which produces high-entanglement outputs), but used in related experiments.
- Projected Entangled Pair States (PEPS). 2D generalisation of MPS. Better for 2D-grid circuits like Sycamore, but contraction is expensive.
- Tensor contraction with slicing. Break the tensor network into subcomponents that fit in memory, compute each, recombine. The Pan–Chen–Zhang 15-hour simulation exploits this.
Each of these techniques has its regime of applicability. A clever quantum circuit designer can choose circuits that sit in the hard regime for all known classical methods — which is exactly what Sycamore, Zuchongzhi, and Jiuzhang did.
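The slicing idea can be shown in miniature (a toy three-tensor network, not a Sycamore circuit): fix one contracted index, contract each cheaper sliced network independently, and sum the results.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64   # bond dimension of a toy ring network A - B - C

A = rng.normal(size=(D, D))
B = rng.normal(size=(D, D))
C = rng.normal(size=(D, D))

# Direct contraction: trace(A @ B @ C), with D x D matrix intermediates.
full = np.einsum("ij,jk,ki->", A, B, C)

# Sliced over index j: each slice only builds length-D vector intermediates,
# and the slices are independent -- ideal for distributing across GPUs.
sliced = sum(np.einsum("i,k,ki->", A[:, j], B[j, :], C) for j in range(D))

print(np.isclose(full, sliced))   # True
```

Each slice trades peak memory for extra passes over the network, and because the slices do not communicate, they parallelise trivially — the trade that made the 512-GPU Pan–Chen–Zhang run possible.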
Post-supremacy research: from demos to useful advantage
The 2020s research direction has largely shifted from "prove supremacy on a new task" to "demonstrate quantum advantage on a task people actually care about." Specific targets:
- Quantum chemistry. Simulate molecules (caffeine, nitrogenase, FeMoCo) beyond the reach of classical methods. IBM, Google, and startups (PsiQuantum, IonQ, Rigetti) all have active chemistry programmes.
- Optimisation. Quantum approximate optimisation algorithm (QAOA), quantum annealing, variational quantum eigensolvers — applied to combinatorial problems in logistics, finance, and machine learning.
- Quantum error correction milestones. Demonstrating break-even (where a logical qubit lives longer than its physical constituents) is the gateway to fault tolerance. IBM (Heron), Google (surface-code experiments at code distances 3, 5, and 7), and Quantinuum (H-series ion traps) have all reported partial demonstrations in 2023–2025.
Supremacy-style experiments continue (Zuchongzhi 3.0, Jiuzhang 3.0) but the community's attention has moved to utility.
IBM's framing and the "useful quantum advantage" milestone
IBM's public position, articulated in the Quantum Developer Conferences starting 2021, distinguishes:
- Quantum supremacy / advantage: demonstrating a task that is classically intractable. Already achieved.
- Useful quantum advantage: demonstrating a task that is both classically intractable and practically valuable. Not yet achieved; IBM targets 2025–2029 range.
- Quantum utility: quantum computing integrated into industrial workflows with measurable ROI. 2030s and beyond.
This framing treats supremacy as necessary but insufficient. Google, USTC, and the Indian NQM have similar roadmaps.
Indian context — Indian researchers and the NQM's experimental targets
India's contribution to the supremacy landscape is currently at the theoretical and algorithmic level rather than experimental hardware. Key points:
- Indian researchers at Google Quantum AI and IBM Quantum have been co-authors on follow-up papers. These include contributors with academic training from IIT Delhi, IIT Bombay, and IISc Bangalore.
- TIFR Mumbai's theoretical quantum information group works on verification protocols for supremacy experiments and on proving classical-hardness lower bounds.
- RRI Bangalore and IIT Madras run photonic quantum computing experiments in the boson-sampling regime, smaller-scale than USTC Jiuzhang but with active research.
- National Quantum Mission (2023, ₹6000 crore over 8 years): targets a 50-1000 qubit experimental platform by 2028, with supremacy/advantage demonstrations as explicit milestones. The computing vertical is led by IIT Madras with partners at IISc, IITB, TIFR, and C-DAC.
The Indian ambition is not to repeat the Sycamore experiment (that race is already run) but to be competitive on the next wave: useful quantum advantage on chemistry, optimisation, or communication problems, and to contribute to error-corrected quantum computing. Post-quantum cryptography migration of Aadhaar, UPI, and banking infrastructure — anticipating the eventual fault-tolerant era — is a parallel priority under the NQM.
Where this leads next
- Quantum supremacy defined — the formal definition this chapter operationalised.
- Boson sampling — the photonic alternative (Aaronson–Arkhipov, USTC Jiuzhang).
- USTC Jiuzhang — the photonic follow-up experiment at 76–255 photons.
- Quantum advantage claims — the broader landscape of demonstrated and projected advantages.
- Quantum error correction — the technology that separates NISQ supremacy from fault-tolerant utility.
- BQP defined — the complexity class supremacy tasks live in or near.
References
- Frank Arute et al. (Google AI Quantum), Quantum supremacy using a programmable superconducting processor (Nature 574, 505–510, 2019) — the original Sycamore paper. arXiv:1910.11333 (companion).
- Edwin Pednault, John A. Gunnels, Giacomo Nannicini, Lior Horesh, Robert Wisnieff (IBM), Leveraging secondary storage to simulate deep 54-qubit Sycamore circuits (2019) — the IBM rebuttal preprint. arXiv:1910.09534.
- Han-Sen Zhong et al. (USTC), Quantum computational advantage using photons (Science 370, 1460–1463, 2020) — Jiuzhang 1.0. arXiv:2011.02801.
- Feng Pan, Keyang Chen, Pan Zhang, Solving the sampling problem of the Sycamore quantum circuits (2021) — the 15-hour classical simulation. arXiv:2111.03011.
- Wikipedia, Sycamore processor — the curated reference with a running summary of classical rebuttals and follow-up work.
- Scott Aaronson, Quantum supremacy: the gloves are off (blog post, 2019) — a carefully technical, readable discussion of the Google claim and the IBM rebuttal by one of the architects of the sampling-supremacy framework.