In short
Every major quantum supremacy claim has been challenged — and partly eroded — by improving classical algorithms. Google's October 2019 Sycamore paper estimated 10{,}000 years of classical simulation; IBM responded within a week with a \sim 2.5-day projection using tensor-network contraction and secondary-storage tricks. Pan, Chen, and Zhang of CAS reduced this further to \sim 15 hours on a 512-GPU cluster in 2021–2022. A team led by Dorit Aharonov (2022) proved a general classical algorithm that efficiently simulates noisy random quantum circuits when the noise rate exceeds a threshold — implying any supremacy claim must live in a narrow, well-protected regime of very low noise per gate. The classical-spoofing toolkit has three main weapons: tensor-network methods that compress states with limited entanglement, stabilizer-based techniques that exploit structure in near-Clifford circuits, and noise-exploitation that uses the decoherence present in real quantum hardware to simplify the simulation. None of these classical advances mean "quantum is fake." They mean: strict quantum supremacy is fragile, per-experiment gaps shrink, and only error-corrected, fault-tolerant regimes (where Shor's factoring or chemistry simulation lives) admit truly unambiguous quantum advantage. For a reader who sees a headline "quantum has solved X in Y seconds, 10{,}000 years classical," the correct response in 2026 is: wait a year and check whether classical has caught up.
On 23 October 2019, Google's Sycamore paper announced quantum supremacy in Nature. Within days, IBM posted a preprint on arXiv arguing that Google had got the classical number wrong by roughly six orders of magnitude. IBM did not claim Google's quantum device was not a quantum device. They did not claim the 200-second quantum runtime was inflated. They claimed something much more specific: that the classical side of the comparison, the 10{,}000-year estimate, was based on the wrong classical algorithm. With the right algorithm — a tensor-network contraction strategy that used Summit's 250 petabytes of disk storage aggressively — the same simulation could be done in about 2.5 days.
This became the pattern. Every subsequent supremacy claim has been contested, not by disputing the quantum experiment, but by improving the classical side. Pan, Chen, and Zhang of the Chinese Academy of Sciences, working independently of IBM, published classical simulations of Sycamore-class circuits in \sim 15 hours on a 512-GPU cluster in 2022. The USTC Jiuzhang photonic experiments faced similar scrutiny — Alibaba and others published classical algorithms that attacked specific structural features of the experimental setup, narrowing the gap.
Does this mean quantum supremacy is a fraud? No. It means the supremacy framework is inherently sensitive to classical algorithmic progress, and each claim must be understood as a snapshot: "as of date D, using the best classical algorithm known on hardware H, this quantum device wins by factor F." The per-experiment factor F drops as classical methods mature. The asymptotic gap (at larger n, deeper circuits) still favours quantum — but practical supremacy is a moving target.
This chapter walks through the main classical-spoofing techniques, the biggest rebuttals on record, and what survives these attacks. The reader will leave knowing how to read supremacy headlines with healthy skepticism and understanding why Shor's algorithm — on a future fault-tolerant machine — would be a claim of a fundamentally different kind.
The pattern: claim, rebut, extend
Before the techniques, the pattern itself: a quantum team announces a supremacy result; classical algorithm designers rebut by narrowing the gap; the quantum side extends to larger, deeper circuits; repeat.
Three forces drive the cycle:
- Classical algorithms mature. Any new supremacy task immediately becomes a target for classical algorithm designers. They read the paper, identify structural features (circuit depth, topology, gate choice), and design simulators that exploit that structure. A few months to a few years is enough to cut the classical cost dramatically.
- Real quantum hardware has noise. Every supremacy claim is based on a specific noise regime. Classical simulators can exploit the same noise — a decoherent quantum state is easier to simulate than a pure one, and noisy circuits lose entanglement faster than ideal circuits.
- The quantum side keeps scaling. When Google showed 53-qubit Sycamore, USTC responded with 66-qubit Zuchongzhi 2.1 and 105-qubit Zuchongzhi 3.0. Jiuzhang went from 76 detected photons to 255. The asymptotic quantum-classical gap at n = 100 qubits is much wider than at n = 50.
Technique 1: tensor networks
Tensor networks are the single most important tool in classical simulation of quantum circuits. Understanding why they are powerful starts with understanding what they cost to use.
A quantum state on n qubits is a complex vector in a 2^n-dimensional Hilbert space. Naively storing the state takes O(2^n) memory — infeasible for n \geq 50. But not every quantum state is "fully entangled." Many physically relevant states — the ground states of local Hamiltonians, the output of shallow circuits, states with 1D connectivity — can be factorised into a network of small tensors, each with only O(\mathrm{poly}(n)) entries.
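The O(2^n) wall is worth seeing in concrete numbers. A back-of-envelope sketch, assuming 16 bytes per complex amplitude (double-precision real and imaginary parts):

```python
# Naive state-vector memory: 2**n complex amplitudes at 16 bytes each.
def state_vector_bytes(n: int) -> int:
    return (2 ** n) * 16

for n in (30, 40, 53):
    print(f"n={n}: {state_vector_bytes(n):.2e} bytes")
# n=53 gives ~1.44e17 bytes = 144 PB -- more than any machine's RAM,
# which is why IBM's rebuttal had to lean on Summit's disk storage.
```

At n = 30 this is a comfortable ~17 GB; at n = 53 it is 144 PB, the figure that reappears in the IBM rebuttal below.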
The key parameter is the bond dimension \chi — a measure of how much entanglement the state carries across a given cut of the network. Low bond dimension means low entanglement means efficient simulation:
- A product state (no entanglement): \chi = 1, cost O(n).
- A state with mild entanglement: \chi = O(\mathrm{poly}(n)), cost polynomial.
- A generic entangled state: \chi = O(2^{n/2}), cost exponential — back to the hard regime.
Real circuits often sit between these extremes. For shallow circuits (depth \ll n), bond dimensions stay manageable for surprisingly long. This is the lever that classical simulators pull.
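The grow-then-plateau behaviour can be sketched with a toy model in which each layer at most doubles the bond dimension until the Schmidt-rank ceiling 2^{n/2} is hit. This is a simplification (real growth depends on gate pattern and topology), but it captures the lever:

```python
def bond_dim(n: int, depth: int) -> int:
    """Toy model: chi doubles per layer, capped at the Schmidt bound 2**(n//2)."""
    return min(2 ** depth, 2 ** (n // 2))

# Sycamore-like: 53 qubits at depth 20 -> chi ~ 2**20, still below the
# ~2**26 ceiling, so tensor-network compression buys a lot.
print(bond_dim(53, 20))   # 1048576 = 2**20
print(bond_dim(53, 40))   # capped at 2**26 = 67108864
```

Shallow circuits live on the left of the `min`, where classical simulation is cheap; deep circuits hit the cap and the advantage evaporates.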
Three widely used tensor-network formats:
- Matrix Product States (MPS) — the 1D case. Efficient for 1D spin chains and for shallow circuits on a line. IBM's 2019 Sycamore rebuttal used an MPS-like approach with aggressive use of secondary storage.
- Projected Entangled Pair States (PEPS) — the 2D case. Appropriate for 2D topologies like Sycamore. Contracting a PEPS exactly is \#\mathrm{P}-hard in general, but approximate contraction is often tractable.
- Tree Tensor Networks (TTN) — hierarchical structures suited to quantum chemistry and some graph-based problems.
Tensor-network simulators for supremacy circuits typically use a hybrid: compute amplitudes for small subsets of output bitstrings using tensor contraction on sub-circuits, then combine these amplitudes to estimate distribution properties or to generate samples. The Pan–Zhang 15-hour simulation uses exactly this hybrid approach.
Technique 2: stabilizer-based simulation
A different technique, exploiting a different structure. Stabilizer circuits — circuits built from Clifford gates (H, S, \mathrm{CNOT}) and Pauli measurements — can be simulated classically in polynomial time. This is the Gottesman–Knill theorem (1998), one of the most surprising results in the early quantum computing literature.
The reason: a stabilizer state can be described by specifying which Pauli operators stabilise it (a polynomial-size description), rather than by storing its amplitude vector (an exponential-size description). Every Clifford gate maps stabilizer descriptions to stabilizer descriptions in polynomial time.
The implication for supremacy: any supremacy circuit must contain non-Clifford gates (like T = R_z(\pi/4)). If every gate were Clifford, the circuit would be trivially classically simulable, and no supremacy claim could stand. Conversely, if a circuit has only a few non-Clifford gates, classical simulators can use stabilizer-rank methods to handle them efficiently.
- Stabilizer rank: every quantum state can be written as a sum of stabilizer states; the minimum number of terms is the stabilizer rank. A state with stabilizer rank r can be simulated in time O(r \cdot \mathrm{poly}(n)).
- Bravyi–Smith–Smolin (2016): improved methods for simulating low-T-count circuits using stabilizer-rank techniques.
- Pashayan–Wallman–Bartlett (2015): introduced quasiprobability simulation, which can simulate any circuit but pays an exponential cost in the amount of "magic" (non-stabilizerness).
Real supremacy circuits have \sim 1000 non-Clifford gates, which is enough to push stabilizer-rank methods out of reach. But: as quantum error-correction codes are layered on top of supremacy circuits, the effective Clifford-vs-non-Clifford balance matters again. Stabilizer techniques are a constant presence in classical simulation toolkits.
Technique 3: noise exploitation
The most subtle classical-spoofing technique is not a direct simulation improvement but an observation about physics: real quantum circuits have noise, and noise reduces quantum complexity.
A perfect quantum circuit on n qubits produces a pure state in a 2^n-dimensional Hilbert space. Classical simulation must track all of this. A noisy quantum circuit produces a mixed state — a probabilistic mixture of pure states — and if the noise rate per gate is high enough, the mixed state rapidly approaches the maximally mixed state (uniform over all basis states), which is trivial to simulate.
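The convergence toward the maximally mixed state is easiest to see for the depolarising channel \rho \to (1-p)\rho + p\, I/2^n: each application shrinks the distance to I/2^n by a factor (1-p), so the recoverable information decays exponentially in depth. A single-qubit sketch with an illustrative (not experimental) noise rate:

```python
import numpy as np

p = 0.05                                          # toy depolarising rate per layer
rho = np.array([[1, 0], [0, 0]], dtype=complex)   # pure |0><0|
mixed = np.eye(2) / 2

for layer in range(200):
    rho = (1 - p) * rho + p * mixed               # depolarising channel

# Trace distance to the maximally mixed state: (1-p)**200 * 0.5 ~ 1.8e-5
dist = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - mixed)))
print(dist)
```

After 200 layers the state is indistinguishable from uniform noise to five decimal places — and a uniform distribution is trivial to sample classically.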
Aharonov, Gao, Landau, Liu, and Vazirani's 2022 paper, A polynomial-time classical algorithm for noisy random circuit sampling, made this formal:
Theorem (informal). For random quantum circuits with constant noise rate p > 0 per gate, there is a classical algorithm that samples from the circuit's output distribution (up to small total-variation distance) in time polynomial in the number of qubits.
In other words: any supremacy claim based on a noisy quantum circuit is, in principle, classically simulable in polynomial time — provided the noise rate is high enough. The devil is in the threshold: the theorem holds for constant noise rate, and modern experiments have noise rates around 10^{-3} per gate, small but non-zero.
For Sycamore's \sim 1000 gates at 10^{-3} per gate, the cumulative fidelity is 0.999^{1000} \approx 0.37. For each factor of \sim e^{-1} degradation, noise washes out roughly one "bit" of quantum information. When the remaining information is less than O(\log n) bits, classical simulation becomes efficient. This puts a practical limit on the useful depth of any NISQ supremacy experiment.
The big rebuttals, in detail
With the techniques in mind, the rebuttals become legible.
IBM 2019 — tensor networks plus secondary storage
Pednault, Gunnels, Nannicini, Horesh, and Wisnieff's preprint Leveraging secondary storage to simulate deep 54-qubit Sycamore circuits (arXiv:1910.09534) argued that Google's 10{,}000-year estimate overlooked two things:
- Tensor-network contraction is better than state-vector simulation. For Sycamore's depth-20 random circuits, the effective bond dimension in an MPS-like representation is manageable at the end of each time slice.
- Summit has 250 PB of disk storage, not just RAM. By streaming amplitudes between RAM and disk, the simulation can proceed without needing all 144 PB of state vector in main memory.
Their projected classical runtime: \sim 2.5 days. IBM did not actually run this simulation (nobody gave them Summit time for that), but the methodology was public and reviewable.
The effect on the supremacy claim: the per-experiment classical cost dropped by \sim 10^6. The ratio t_C / t_Q fell from 10^9 to about 10^3. Still a supremacy claim in some readings — t_Q = 200 s vs t_C = 2.2 \times 10^5 s — but a much narrower one.
Pan, Chen, Zhang 2021–2022 — GPU-accelerated tensor contraction
Pan, Chen, and Zhang (CAS, Institute of Theoretical Physics) attacked the same Sycamore task with specialised tensor-network algorithms and GPU acceleration. Their paper Solving the sampling problem of the Sycamore quantum circuits used:
- A specific contraction ordering optimised for Sycamore's 2D grid topology.
- 512 NVIDIA GPUs (a large cluster, but not a top-tier supercomputer).
- Full end-to-end simulation (unlike IBM's projection, this was actually executed).
Runtime: \sim 15 hours. Quantum-to-classical ratio: \sim 270, still quantum-faster but now within an order of magnitude where classical hardware improvements (better GPUs, better algorithms) could catch up.
Alibaba and Tencent — continued tensor-network improvements
Research groups at Alibaba Cloud and Tencent Quantum Lab have published further tensor-network refinements for both RCS and Gaussian boson sampling. Each paper pushes the boundary by identifying new structural features of specific experimental circuits to exploit.
Aharonov et al. 2022 — the general theorem
The most theoretically important rebuttal is Aharonov, Gao, Landau, Liu, and Vazirani's A polynomial-time classical algorithm for noisy random circuit sampling (arXiv:2211.03999). Rather than targeting a specific experiment, they proved:
- For any random circuit with constant noise rate p > 0 per gate, a classical algorithm can sample from the output distribution (to any desired total-variation distance) in polynomial time.
- The algorithm is theoretical — not yet engineered into production code — but establishes an asymptotic upper bound on the hardness of noisy supremacy tasks.
The implication: strict supremacy in the presence of noise is a fragile property. Supremacy claims must argue that their noise is low enough that the Aharonov-et-al threshold is not crossed. This is a far more nuanced claim than "this experiment proves quantum supremacy."
The fair-comparison debate
A recurring question: is it fair to compare a 2019 quantum experiment against a 2022 classical algorithm?
The answer depends on what "supremacy" is claiming. Three positions:
- Strict supremacy: the claim must survive any classical attack, including future ones. Under this reading, Sycamore 2019's claim has been erased by Pan–Zhang 2022.
- Contemporaneous supremacy: the claim is about t_C using the best known classical algorithm at the time of the experiment. Under this reading, Sycamore's 2019 claim stands — IBM's rebuttal came after, and IBM did not beat Google's 200-second quantum runtime.
- Asymptotic supremacy: the claim is about scaling — at sufficiently large n, the quantum-classical gap is super-polynomial. This is the complexity-theoretic version, and it survives classical algorithm improvements as long as the quantum hardware can be scaled.
Most researchers today use a blend: a supremacy claim is a snapshot with an asymptotic promise. The snapshot can be softened by better classical algorithms; the asymptotic promise (extrapolated to larger quantum machines) remains intact.
Hype check. Quantum supremacy is slippery. A claim that looks rock-solid at announcement may be partly rolled back months later. This is not a sign of fraud — it is a structural feature of a field where the classical side is a moving target. When a new "X times faster than classical" headline appears, the correct response is: which classical algorithm is being compared against? Has anyone attacked the claim with tensor networks, noise exploitation, or stabilizer techniques yet? Wait a year and check back. The strictest kind of supremacy — fault-tolerant, error-corrected, running Shor's on RSA — is a different claim entirely, and when that arrives (years away), it will survive classical attack by virtue of the structural gap.
What survives classical spoofing
Classical spoofing has eroded many near-term claims, but certain quantum advantages appear structurally safe.
Shor's algorithm (factoring)
Peter Shor's polynomial-time quantum factoring algorithm (1994) produces a specific integer output — a factor of the input. It is not a sampling task; it is a decision-with-certificate task. There is no known classical algorithm that factors in polynomial time (best classical: general number field sieve, sub-exponential e^{O((\log n)^{1/3} (\log \log n)^{2/3})}).
For Shor's advantage to be "spoofed" classically, someone would have to invent a polynomial-time classical factoring algorithm — which would collapse much of cryptography. This is a far stronger complexity assumption than supremacy, and no classical-spoofing-style technique touches it.
Error-corrected quantum computing
Fault-tolerant quantum computing, once achieved, operates in a different regime. Logical qubits protected by error correction (surface code, colour codes) can run arbitrarily deep circuits with arbitrarily low error. Classical simulation of such machines is as hard as classical simulation of the ideal noiseless circuits — which the Aharonov et al. result does not touch, because their theorem crucially uses noise.
A quantum computer that executes 10^{10} logical gates on 1000 logical qubits is beyond any classical-spoofing technique. The cost to build such a machine is enormous, but the computational advantage, once built, is structurally unassailable.
Chemistry simulation on specific problems
For specific chemistry problems — notably, ground-state energies of molecules with strong electron correlation (like \mathrm{FeMoco} in nitrogenase) — quantum algorithms have a clean structural advantage. The problem has an inherent quantum structure that classical simulation cannot compactly represent (the wavefunction's entanglement is genuinely many-body).
This is an area of active research and an explicit goal of quantum roadmaps (IBM, Google, USTC all target this). Once demonstrated, useful quantum advantage on these problems would be robust to classical spoofing in a way that supremacy demos are not.
Example 1: Tensor-network simulation of a depth-20 random circuit on 53 qubits
A walkthrough of why IBM's 2.5-day estimate for Sycamore was plausible — without going into full algorithmic detail.
Setup. Sycamore's supremacy circuits are 20 layers deep on 53 qubits with 2D grid connectivity. Each layer consists of single-qubit gates and a fixed pattern of fSim two-qubit gates on neighbouring pairs.
Step 1 — Bond dimension at layer k. After k layers of generic random gates, the bond dimension of an MPS representing the state grows roughly as:

\chi_k \approx \min(2^k, 2^{n/2}).
Why it grows and then plateaus: entanglement spreads through the circuit with each gate, doubling bond dimension per layer until it hits the maximum possible value 2^{n/2} (Schmidt rank bound across any bipartition). At that point the state is maximally entangled and no further compression is possible.
For Sycamore at n = 53 and k = 20: \chi_{20} \approx 2^{20} \approx 10^6 — below the maximum 2^{26.5} \approx 10^8. The state is not yet fully entangled.
Step 2 — Memory cost. An MPS with bond dimension \chi on n qubits requires O(n \chi^2) memory. At \chi \approx 10^6, with 16 bytes per complex entry:

n \chi^2 \times 16 \text{ bytes} \approx 53 \times (10^6)^2 \times 16 \approx 8.5 \times 10^{14} \text{ bytes} \approx 850 \text{ TB}.
Why this is still feasible: Summit has \sim 250 petabytes of total storage (RAM plus disk). 850 TB fits. The raw state vector (144 PB) did not. Tensor-network compression cuts the memory by \sim 200x.
Step 3 — Time per sample. Computing a single output amplitude from an MPS takes O(n \chi^3) time:

n \chi^3 \approx 53 \times (10^6)^3 \approx 5.3 \times 10^{19} \text{ floating-point operations}.

On Summit at 200 petaFLOP/s = 2 \times 10^{17} floating-point operations per second, each amplitude takes \sim 260 seconds. For 10^6 samples, this is \sim 3 \times 10^8 seconds \approx 8 years.
Step 4 — IBM's batching tricks. IBM's argument was that you do not need to compute amplitudes one at a time. By batching the contraction — computing amplitudes for many bitstrings in a single tensor-network sweep — and using disk storage for intermediate results, the per-sample cost drops substantially. Their projection gave \sim 2.5 days.
Why this illustrative calculation does not exactly match IBM's: real tensor-network contraction uses more sophisticated tricks (specific contraction orderings, block-sparse storage, GPU acceleration) that lower the constants significantly. This example is a simplification; the actual IBM paper has the full engineering.
Result. For Sycamore-scale circuits, tensor-network methods with aggressive batching bring the classical simulation cost from Google's 10{,}000-year figure down to a few days — the core of the IBM rebuttal. The quantum-classical ratio drops from \sim 10^9 to \sim 10^3, softening (but not eliminating) the supremacy claim.
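The arithmetic in Steps 1–3 can be collected into a small cost model. The constants are illustrative only; real contraction engines beat them substantially, as Step 4 notes:

```python
n, depth = 53, 20
chi = min(2 ** depth, 2 ** (n // 2))      # ~1e6: bond dimension after 20 layers

memory_bytes = n * chi ** 2 * 16           # O(n * chi^2), 16 B per complex entry
flops_per_amp = n * chi ** 3               # O(n * chi^3) per output amplitude
summit_flops = 2e17                        # Summit, ~200 petaFLOP/s

print(f"memory  ~ {memory_bytes / 1e12:.0f} TB")           # ~930 TB: fits in 250 PB storage
print(f"per amp ~ {flops_per_amp / summit_flops:.0f} s")   # ~300 s per naive amplitude
```

Using chi = 2^{20} exactly rather than the rounded 10^6 gives ~930 TB and ~300 s per amplitude; either way, the conclusion stands — compressed storage fits on Summit, and only the per-sample batching of Step 4 closes the gap to days.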
Example 2: Noise-assisted classical simulation — how much error is too much
A toy calculation showing why gate-error rates drive supremacy claims.
Setup. Consider a random quantum circuit with n = 53 qubits, depth d = 20, and gate error rate \epsilon per gate. The total number of gates is roughly N = 1000. Each gate has fidelity 1 - \epsilon; the circuit's overall fidelity is approximately:

F \approx (1 - \epsilon)^N.
Step 1 — Fidelity at three noise levels. For \epsilon = 10^{-3} (Sycamore-level): F \approx 0.999^{1000} \approx 0.37. For \epsilon = 5 \times 10^{-3} (lower-quality hardware): F \approx 0.995^{1000} \approx 0.0067. For \epsilon = 10^{-2} (noisy early NISQ): F \approx 0.99^{1000} \approx 4.5 \times 10^{-5}.
Why fidelity matters for classical simulation: fidelity is the overlap of the noisy circuit's output state with the ideal output state. When fidelity drops, the noisy output increasingly resembles a uniform random state — which is trivial to simulate classically.
Step 2 — Effective quantum content. The "amount of quantum information" the circuit carries scales roughly with \log_2(1/F) — the number of qubits' worth of purity remaining:
- \epsilon = 10^{-3}: \log_2(1/0.37) \approx 1.4 — you have about 1.4 qubits of pure quantum content out of 53.
- \epsilon = 5 \times 10^{-3}: \log_2(1/0.0067) \approx 7.2 qubits' worth.
- \epsilon = 10^{-2}: \log_2(1/4.5 \times 10^{-5}) \approx 14.4 qubits' worth.
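The Step 1–2 numbers can be checked directly from F \approx (1 - \epsilon)^N and the effective content \log_2(1/F):

```python
from math import log2

N = 1000                                   # total gate count, Sycamore-scale
results = {}
for eps in (1e-3, 5e-3, 1e-2):
    F = (1 - eps) ** N                     # overall circuit fidelity, F ~ (1-eps)^N
    results[eps] = (F, log2(1 / F))
    print(f"eps={eps:g}: F={F:.2g}, ~{log2(1/F):.1f} qubits' worth of purity lost")
```

(The exact 0.99^{1000} is 4.3 \times 10^{-5}, giving 14.5 rather than 14.4 — the text's values use the e^{-N\epsilon} approximation; the discrepancy is immaterial.)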
Why this framing is misleading at face value: the "1.4 qubits of purity" for Sycamore sounds like too little to matter. But even a tiny amount of quantum content concentrated on specific correlations (detectable by XEB) is enough for a supremacy claim — XEB measures how much the noisy samples concentrate on outcomes the ideal circuit would favour, not how pure the overall state is. So the relationship between noise and supremacy is subtler than "more noise = no supremacy."
Step 3 — The Aharonov et al. regime. Aharonov, Gao, Landau, Liu, Vazirani's theorem kicks in when the noise rate is "constant" in a formal sense: \epsilon does not shrink as n grows. For any such \epsilon > 0, there is a classical polynomial-time algorithm that samples from the circuit's output distribution to any desired accuracy.
Step 4 — Concrete threshold estimate. The exact polynomial of the Aharonov et al. algorithm depends on the constants and the target accuracy; for Sycamore-like experiments, the practical implication is roughly: if your per-gate error rate exceeds \sim 1\%, classical simulation of a depth-d circuit is efficient (polynomial in n and d). Below 1\%, classical simulation remains exponential asymptotically but can still be efficient at small to moderate n.
Result. Supremacy demonstrations live in a narrow noise regime. Even modest improvements in the gate-error rate (10^{-3} to 10^{-4}) dramatically increase the classical-simulation difficulty. Modern NISQ hardware sits precisely at the boundary — the exact location where every new classical algorithm can, in principle, erode the supremacy gap.
Common confusions
- "Classical spoofing means quantum is useless." No. Classical spoofing of supremacy demonstrations does not affect Shor's algorithm, error-corrected quantum computing, or structural chemistry problems. It affects only the narrow class of near-term sampling benchmarks that were chosen precisely for their sensitivity to classical algorithmic progress. The fault-tolerant future remains unaffected.
- "Classical has already won the race." Incorrect. Classical algorithms have narrowed but not eliminated the gap for current experiments. At larger quantum scales — 100 qubits, depth 30+ — classical simulation remains infeasible. The race is ongoing; the asymptotic trend still favours quantum.
- "Spoofing is cheating — the classical algorithms use too much structure." No. Classical-spoofing algorithms are legitimate classical algorithms run on general-purpose classical hardware. They do use structure — that is what classical algorithms do. A quantum supremacy claim is, at its heart, a claim about the best classical algorithm; any advance on the classical side is a fair update to the comparison.
- "Shor's factoring has been classically spoofed." Definitively no. Polynomial-time classical factoring remains an open problem. Sub-exponential algorithms (number field sieve) exist but are far from polynomial. Shor's algorithm's advantage, once realised on a fault-tolerant machine, is robust to any classical-spoofing technique known.
- "Why bother with quantum if classical keeps catching up?" Because classical does not catch up on everything. Sampling supremacy is a benchmark for hardware capability, chosen for its classical hardness in a narrow asymptotic sense. Useful quantum advantage (chemistry, cryptography, optimisation) is a different regime where classical spoofing appears structurally limited.
- "The supremacy debate is just a PR war between Google and IBM." Partially true, but the underlying science is serious. Both groups have published technically rigorous arguments. The fact that the debate happens publicly (in Nature, in arXiv preprints, on blogs) rather than in closed industrial R&D is a feature of academic norms, not PR manipulation.
Going deeper
You have the cycle, the techniques, the big rebuttals, and the survivors. The rest of this section digs into: detailed tensor-network methods (MPS, PEPS, and contraction orderings); stabilizer-rank and magic-state simulation; the formal structure of the Aharonov et al. 2022 theorem; the cost-per-operation race between quantum and classical hardware; the distinction between strict supremacy, contemporaneous supremacy, and asymptotic supremacy; the scientific-ethics question of how supremacy claims should be made and verified; and what a truly unambiguous quantum-only task would require.
Tensor networks — MPS, PEPS, and contraction orderings
A matrix product state (MPS) on n qubits is a state of the form:

|\psi\rangle = \sum_{s_1, \dots, s_n} A^{(1)}_{s_1} A^{(2)}_{s_2} \cdots A^{(n)}_{s_n} \, |s_1 s_2 \cdots s_n\rangle,

where each A^{(i)}_{s_i} is a \chi \times \chi matrix indexed by the physical state s_i \in \{0, 1\} of the i-th qubit (the boundary tensors are row and column vectors, so each product is a scalar amplitude). The parameter \chi is the bond dimension.
- For \chi = 1: product state.
- For \chi polynomial: efficient classical simulation.
- For \chi exponential: no compression.
Projected entangled pair states (PEPS) generalise MPS to 2D — each site has tensors with multiple bond indices, one per neighbour. PEPS can describe states with area-law entanglement in 2D, making them appropriate for Sycamore-like topologies.
The key computational operation is contraction — summing over the bond indices to compute an amplitude. For MPS, contraction is efficient (O(n \chi^3)). For PEPS, exact contraction is \#\mathrm{P}-hard in general, but approximate contraction with controlled bond-dimension growth is tractable.
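Amplitude contraction for an MPS is just a chain of small matrix products absorbed left to right. A self-contained numerical sketch (random tensors, illustrative dimensions), cross-checked against the dense state vector:

```python
import numpy as np

rng = np.random.default_rng(0)
n, chi = 4, 3

# Random MPS: tensors A[i] of shape (left_bond, 2, right_bond),
# with boundary bonds of dimension 1.
dims = [1] + [chi] * (n - 1) + [1]
A = [rng.normal(size=(dims[i], 2, dims[i + 1])) for i in range(n)]

def amplitude(bits):
    """<bits|psi>: absorb each site's matrix for its bit value, left to right."""
    v = np.ones((1,))
    for i, b in enumerate(bits):
        v = v @ A[i][:, b, :]
    return v[0]

# Brute-force the full 2**n state vector from per-bitstring amplitudes ...
dense = np.zeros([2] * n)
for idx in np.ndindex(*[2] * n):
    dense[idx] = amplitude(idx)

# ... and compare with contracting the whole network at once.
psi = A[0]
for i in range(1, n):
    psi = np.tensordot(psi, A[i], axes=([-1], [0]))  # contract the shared bond
psi = psi.reshape([2] * n)
assert np.allclose(psi, dense)
print("MPS chain contraction matches the dense state vector")
```

The chain never materialises the 2^n vector: the working object is a length-\chi vector, which is the whole point of the compression.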
Choosing a good contraction ordering is a critical optimisation. IBM's 2019 rebuttal and Pan–Zhang's 2022 simulation both rely on carefully chosen orderings that minimise peak memory and total operations.
Stabilizer-rank and magic-state methods
Clifford circuits are exactly classically simulable in polynomial time. Non-Clifford gates (like T) introduce "magic" — a measure of non-stabilizerness. The stabilizer rank of a state is the minimum number of stabilizer states needed to write it as a linear combination.
- For a state with stabilizer rank r, any amplitude can be computed in time O(r \cdot \mathrm{poly}(n)).
- Low-T-count circuits produce states with low stabilizer rank.
- Bravyi, Smith, and Smolin (2016) showed that T-count k gives stabilizer rank at most 2^{0.8 k} — a non-trivial improvement over the naive 2^k.
The implication for supremacy: supremacy circuits must have enough non-Clifford structure to defeat stabilizer-rank methods. Sycamore's \sim 430 two-qubit fSim gates each contribute non-Clifford content; the stabilizer rank of the resulting state is extremely large, and stabilizer-rank methods are not competitive with tensor-network methods for RCS.
For error-corrected quantum computing, magic-state distillation is a critical primitive. A detailed treatment is beyond this chapter.
The Aharonov et al. 2022 theorem — formal structure
Aharonov, Gao, Landau, Liu, and Vazirani's A polynomial-time classical algorithm for noisy random circuit sampling proves, roughly:
Theorem. Let C be a random quantum circuit on n qubits with depth d, where each gate is followed by local depolarising noise with rate \epsilon > 0 (a constant, not shrinking with n). There exists a classical algorithm that samples from a distribution \delta-close in total variation to the circuit's noisy output distribution, in time polynomial in n, d, and 1/\delta.
The algorithm's core idea: noisy random circuits produce output distributions that concentrate around the uniform distribution as depth grows, with corrections that can be computed approximately using tensor-network truncation with small bond dimension. The noise provides the truncation: noisy states naturally have bounded effective entanglement.
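The "bounded effective entanglement" idea rests on Schmidt truncation: split the qubits into two halves, SVD the amplitude matrix across the cut, and drop the small singular values. The fidelity lost is exactly the discarded Schmidt weight, so states whose Schmidt spectrum decays fast (as noisy states' effectively do) compress well. A small sketch with a hand-picked decaying spectrum (illustrative, not derived from any circuit):

```python
import numpy as np

rng = np.random.default_rng(1)

# A 6-qubit state seen as an 8x8 amplitude matrix across the 3|3 cut,
# built with a rapidly decaying Schmidt spectrum.
dL, dR = 8, 8
U, _ = np.linalg.qr(rng.normal(size=(dL, dL)))
V, _ = np.linalg.qr(rng.normal(size=(dR, dR)))
s = np.array([0.9, 0.4, 0.1, 0.05, 0.02, 0.01, 0.005, 0.002])
s = s / np.linalg.norm(s)                  # normalise to a valid state
M = U @ np.diag(s) @ V.T

# Truncate to bond dimension chi = 2 across the cut.
u, sv, vt = np.linalg.svd(M)
chi = 2
M_trunc = u[:, :chi] @ np.diag(sv[:chi]) @ vt[:chi, :]

# Overlap fidelity with the (renormalised) truncated state equals
# the kept Schmidt weight: 1 - sum of discarded singular values squared.
fidelity = np.sum(M * M_trunc) ** 2 / np.sum(M_trunc * M_trunc)
discarded = np.sum(sv[chi:] ** 2)
print(f"kept Schmidt weight: {1 - discarded:.4f}")
```

The noise in the Aharonov et al. setting plays the role of the decaying spectrum: it guarantees the discarded weight stays small even at modest bond dimension.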
The theorem is asymptotic — the polynomial may have a large constant, and for specific small-scale experiments, the tensor-network constants may still be large. But it establishes that any supremacy claim based on noisy NISQ hardware is, in principle, classically simulable with enough effort. This is an important boundary condition on the supremacy framework.
The cost-per-operation race
Modern classical hardware trends (2020–2026):
- GPU FLOP/s per dollar double every \sim 2 years.
- Custom AI accelerators (NVIDIA H100, Google TPU v5) push further.
- Exascale supercomputers (Frontier 2022, El Capitan 2024) provide \gtrsim 10^{18} FLOP/s.
Modern quantum hardware trends:
- Qubit counts growing: 53 (Sycamore) → 105 (Zuchongzhi 3) → 1121 (IBM Condor, 2023).
- Gate fidelities improving: 99.4\% → 99.9\% → 99.97\% targets.
- Connectivity and compilation improving.
The race between classical and quantum is happening on both sides. A classical algorithm that is 10^3\times faster matters less than 20 additional qubits, which multiply the state space by 2^{20} \approx 10^6. As quantum hardware approaches the error-correction threshold and the fault-tolerant regime, classical algorithms face an exponential wall they cannot climb.
Strict vs contemporaneous vs asymptotic supremacy
Three related but distinct claims:
- Strict supremacy: at all times, including the future, no classical algorithm matches the quantum device on this task. This is what pop-science headlines suggest. It is also almost impossible to verify, because classical algorithms keep improving.
- Contemporaneous supremacy: at the time of the experiment, the best-known classical algorithm was beaten by the quantum device. This is what most experimental papers actually claim. It is verifiable at the time but subject to later revision.
- Asymptotic supremacy: as the problem size grows, the quantum-classical gap grows super-polynomially. This is the complexity-theoretic claim, and it is supported by hardness proofs (conditional on standard conjectures).
A careful reader can distinguish these three when evaluating any supremacy headline.
What a truly unambiguous quantum-only task would look like
For a supremacy claim to be robust to all classical attack, the task should have:
- Structural hardness beyond sampling: e.g. factoring, discrete log, graph isomorphism — tasks where the quantum advantage comes from a specific non-classical algorithmic structure (QFT, phase estimation).
- Fault tolerance: the quantum device uses error correction, eliminating the noise-exploitation lever.
- Large-scale: at 100+ logical qubits, the classical state-space is intrinsically beyond reach.
- Verifiable output: ideally, a deterministic computational output (like a factor of the input), not just a sample from a distribution.
Shor's algorithm on RSA-2048 meets all four criteria. The hardware to run it (roughly 20 million physical qubits with surface-code error correction) does not exist yet, but is the target of IBM's, Google's, and other roadmaps. When such a machine is built, the supremacy claim will be of a qualitatively different kind — a strict, structural, unambiguous claim, not a sampling benchmark.
Indian context — verification methodology and the National Quantum Mission
India's National Quantum Mission, launched in 2023 with an \sim ₹6000 crore (\sim \$720 million) eight-year budget, explicitly targets both quantum supremacy and useful quantum advantage. The Mission's documents are carefully written in a way that reflects the lessons of the USTC–Google–IBM debate: supremacy is a milestone, not an endpoint; useful quantum advantage is the harder goal; verification methodology matters.
Specific implications for Indian researchers:
- Academic groups at TIFR, IISc, IIT Madras, IIT Bombay are active in both theoretical complexity analyses of supremacy (conditions for classical-hardness proofs) and experimental supremacy hardware (superconducting qubit chips, photonic platforms).
- Industry (TCS, Infosys, HCL, Wipro) is developing classical simulation tooling for verification — simulators that can validate quantum experiments at moderate scale.
- Policy (DRDO, NIC) tracks the supremacy-vs-spoofing cycle as part of post-quantum cryptography planning. The India-specific concern: ensuring that Aadhaar, UPI, and banking systems migrate to post-quantum cryptographic primitives before Shor-capable hardware arrives, regardless of whether supremacy demonstrations continue to be softened by classical advances.
A cultural note: the verification-methodology debate has broad resonance in India's scientific community, where healthy skepticism of "we beat the West" claims is a feature of how the NQM documents are framed. The Indian approach (cross-checking quantum claims against state-of-the-art classical algorithms, before publishing) is explicitly designed to avoid the Google–IBM pattern of public disputes.
Where this leads next
- Quantum Supremacy Defined — the parent definition framing the three conditions supremacy must meet.
- Google Random Circuit Sampling — the specific 2019 Sycamore experiment central to the spoofing story.
- Boson Sampling — the photonic supremacy route and the USTC Jiuzhang experiments.
- Useful Quantum Advantage — the contrasting milestone of solving problems people actually care about.
- Tensor Networks Preview — a dedicated introduction to the tensor-network methods used in classical spoofing.
- Resource Estimates for RSA-2048 — what it would take to reach the true fault-tolerant regime.
References
- Edwin Pednault et al. (IBM), Leveraging secondary storage to simulate deep 54-qubit Sycamore circuits (2019) — the week-after rebuttal that cut Google's 10{,}000 years to 2.5 days. arXiv:1910.09534.
- Feng Pan, Keyang Chen, and Pan Zhang, Solving the sampling problem of the Sycamore quantum circuits (2021) — the 15-hour GPU-cluster simulation. arXiv:2111.03011.
- Dorit Aharonov, Xun Gao, Zeph Landau, Yunchao Liu, Umesh Vazirani, A polynomial-time classical algorithm for noisy random circuit sampling (2022) — the general noisy-simulation theorem. arXiv:2211.03999.
- Frank Arute et al. (Google AI Quantum), Quantum supremacy using a programmable superconducting processor (Nature 574, 2019) — the original Sycamore paper. Nature article.
- Google Quantum AI, Quantum error correction below the surface code threshold (Nature 638, 2024, Willow processor) — updated supremacy/advantage claims on error-corrected hardware. Nature article.
- Scott Aaronson, Quantum supremacy: the gloves are off (blog post, 2019) — a technically careful discussion of the Google claim and the spoofing landscape, written by one of the architects of the sampling-supremacy framework.