In short

Every major quantum supremacy claim has been challenged — and partly eroded — by improving classical algorithms. Google's October 2019 Sycamore paper estimated 10{,}000 years of classical simulation; IBM responded within a week with a \sim 2.5-day projection using tensor-network contraction and secondary-storage tricks. Pan, Chen, and Zhang of CAS reduced this further to \sim 15 hours on a 512-GPU cluster in 2021–2022. A team led by Dorit Aharonov (2022) gave a polynomial-time classical algorithm that simulates noisy random quantum circuits at any constant per-gate noise rate — implying any supremacy claim must live in a narrow, well-protected regime of very low noise per gate. The classical-spoofing toolkit has three main weapons: tensor-network methods that compress states with limited entanglement, stabilizer-based techniques that exploit structure in near-Clifford circuits, and noise exploitation that uses the decoherence present in real quantum hardware to simplify the simulation. None of these classical advances means "quantum is fake." They mean: strict quantum supremacy is fragile, per-experiment gaps shrink, and only error-corrected, fault-tolerant regimes (where Shor's factoring or chemistry simulation lives) admit truly unambiguous quantum advantage. For a reader who sees a headline "quantum has solved X in Y seconds, 10{,}000 years classical," the correct response in 2026 is: wait a year and check whether classical has caught up.

On 23 October 2019, Google's Sycamore paper announced quantum supremacy. One week later — eight days, actually — IBM posted a preprint on arXiv arguing that Google had got the classical number wrong by roughly six orders of magnitude. IBM did not claim Google's quantum device was not a quantum device. They did not claim the 200-second quantum runtime was inflated. They claimed something much more specific: that the classical side of the comparison, the 10{,}000-year estimate, was based on the wrong classical algorithm. With the right algorithm — a tensor-network contraction strategy that used Summit's 250 petabytes of disk storage aggressively — the same simulation could be done in about 2.5 days.

This became the pattern. Every subsequent supremacy claim has been contested, not by disputing the quantum experiment, but by improving the classical side. Pan, Chen, and Zhang of the Chinese Academy of Sciences, working independently of IBM, published classical simulations of Sycamore-class circuits in \sim 15 hours on a 512-GPU cluster in 2021–2022. The USTC Jiuzhang photonic experiments faced similar scrutiny — Alibaba and others published classical algorithms that attacked specific structural features of the experimental setup, narrowing the gap.

Does this mean quantum supremacy is a fraud? No. It means the supremacy framework is inherently sensitive to classical algorithmic progress, and each claim must be understood as a snapshot: "as of date D, using the best classical algorithm known on hardware H, this quantum device wins by factor F." The per-experiment factor F drops as classical methods mature. The asymptotic gap (at larger n, deeper circuits) still favours quantum — but practical supremacy is a moving target.

This chapter walks through the main classical-spoofing techniques, the biggest rebuttals on record, and what survives these attacks. The reader will leave knowing how to read supremacy headlines with healthy skepticism and understanding why Shor's algorithm — on a future fault-tolerant machine — would be a claim of a fundamentally different kind.

The pattern: claim, rebut, extend

Before the techniques, see the pattern.

[Figure: the supremacy–rebuttal cycle, 2019–2024. A timeline of four labelled events: Google Sycamore, October 2019, 10,000-year classical claim; IBM rebuttal, October 2019, 2.5 days via tensor networks plus disk; Pan–Zhang (CAS), 2021–22, 15 hours on a 512-GPU cluster; Aharonov et al., 2022, general noisy classical simulation. Each classical advance narrows the gap; quantum experiments respond by scaling up. Supremacy is a moving target, not a fixed milestone.]
The supremacy–rebuttal cycle for Google's $2019$ Sycamore claim. Four major events mark the decay of the original $10{,}000$-year estimate. Each rebuttal used a different classical technique — tensor-network contraction (IBM), GPU-accelerated tensor networks (Pan–Zhang), and general theoretical results on noisy-circuit simulation (Aharonov et al.). The pattern has repeated for each subsequent supremacy experiment.

Three forces drive the cycle:

  • Better classical algorithms (tensor networks, stabilizer methods, noise exploitation) cut the cost of simulating a fixed circuit.
  • Bigger, cheaper classical hardware (GPUs, secondary storage) cuts the wall-clock time of a fixed algorithm.
  • Quantum experiments respond by scaling up: more qubits, deeper circuits, higher fidelity.

Technique 1: tensor networks

Tensor networks are the single most important tool in classical simulation of quantum circuits. Understanding why they are powerful starts with understanding what they cost to use.

A quantum state on n qubits is a complex vector in a 2^n-dimensional Hilbert space. Naively storing the state takes O(2^n) memory — infeasible for n \geq 50. But not every quantum state is "fully entangled." Many physically relevant states — the ground states of local Hamiltonians, the output of shallow circuits, states with 1D connectivity — can be factorised into a network of small tensors, each with only O(\mathrm{poly}(n)) entries.

The key parameter is the bond dimension \chi — a measure of how much entanglement the state carries across a given cut of the network. Low bond dimension means low entanglement means efficient simulation:

  • \chi = 1: a product state; storage O(n), trivially simulable.
  • \chi = \mathrm{poly}(n): mild entanglement; storage and contraction costs polynomial.
  • \chi = 2^{n/2}: full entanglement; no compression possible, cost exponential.

Real circuits often sit between these extremes. For shallow circuits (depth \ll n), bond dimensions stay manageable for surprisingly long. This is the lever that classical simulators pull.

[Figure: tensor-network (matrix product state) representation of a quantum state. A chain of n tensors, each with O(\chi^2) entries, linked by bonds of dimension \chi; open legs index the n physical qubits. Storage cost O(n \chi^2) versus dense O(2^n). Three regimes: \chi = 1 (product state, trivial), \chi = poly(n) (efficient), \chi = 2^{n/2} (full entanglement, exponential).]
A tensor-network (matrix product state) representation of a quantum state. Each tensor holds one physical qubit's data and connects to its neighbours by a "bond" of dimension $\chi$. For product states $\chi = 1$; for states with bounded entanglement $\chi$ stays polynomial; for fully entangled states $\chi$ grows exponentially. Classical simulators try to keep $\chi$ small by choosing good contraction orderings and exploiting circuit structure.
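To make the compression concrete, here is the storage arithmetic as a short Python sketch (my illustration, not from any of the cited papers; it uses the O(n\chi^2) estimate above with 16-byte complex amplitudes):

```python
def dense_bytes(n: int) -> float:
    """Dense state vector: 2^n complex128 amplitudes at 16 bytes each."""
    return 16.0 * 2**n

def mps_bytes(n: int, chi: int) -> float:
    """MPS at the text's O(n * chi^2) estimate (the physical index adds
    only a small constant factor on top of this)."""
    return 16.0 * n * chi**2

# 53 qubits dense: ~1.4e17 bytes (144 PB), beyond any machine's RAM.
print(f"dense, n=53:          {dense_bytes(53):.2e} bytes")

# Same state at bond dimension 10^6 (Sycamore after ~20 layers): ~850 TB,
# which fits within Summit's 250 PB of combined RAM and disk.
print(f"MPS, n=53, chi=10^6:  {mps_bytes(53, 10**6):.2e} bytes")
```

The ~170x gap between the two numbers is exactly the compression that made IBM's disk-streaming rebuttal possible.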

Three widely used tensor-network formats:

  • Matrix product states (MPS): a 1D chain of tensors, suited to states with limited entanglement across linear cuts.
  • Projected entangled pair states (PEPS): the 2D generalisation, matched to grid-connected devices like Sycamore.
  • General tensor networks: arbitrary contraction graphs mirroring the circuit itself, with carefully optimised contraction orderings.

Tensor-network simulators for supremacy circuits typically use a hybrid: compute amplitudes for small subsets of output bitstrings using tensor contraction on sub-circuits, then combine these amplitudes to estimate distribution properties or to generate samples. The Pan–Zhang 15-hour simulation used exactly this hybrid approach.

Technique 2: stabilizer-based simulation

A different technique, exploiting a different structure. Stabilizer circuits — circuits built from Clifford gates (H, S, \mathrm{CNOT}) and Pauli measurements — can be simulated classically in polynomial time. This is the Gottesman–Knill theorem (1998), one of the most surprising results in the early quantum computing literature.

The reason: a stabilizer state can be described by specifying which Pauli operators stabilise it (a polynomial-size description), rather than by storing its amplitude vector (an exponential-size description). Every Clifford gate maps stabilizer descriptions to stabilizer descriptions in polynomial time.
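The Gottesman–Knill bookkeeping can be sketched in a few dozen lines. The following is a minimal illustrative tableau simulator (a simplified version of the standard construction, not a production tool): each stabilizer generator is stored as x/z bit-vectors plus a sign, and each Clifford gate updates those bits in O(n) time per generator, with no amplitude vector anywhere.

```python
# Minimal stabilizer-tableau sketch of the Gottesman-Knill idea.
# An n-qubit stabilizer state needs O(n^2) bits -- never 2^n amplitudes.

class StabilizerState:
    def __init__(self, n):
        # |0...0> is stabilized by Z_1, ..., Z_n.
        self.n = n
        self.gens = [{"x": [0] * n, "z": [0] * n, "r": 0} for _ in range(n)]
        for i, g in enumerate(self.gens):
            g["z"][i] = 1

    def h(self, a):          # Hadamard: swaps X and Z on qubit a
        for g in self.gens:
            g["r"] ^= g["x"][a] & g["z"][a]
            g["x"][a], g["z"][a] = g["z"][a], g["x"][a]

    def s(self, a):          # Phase gate: X -> Y on qubit a
        for g in self.gens:
            g["r"] ^= g["x"][a] & g["z"][a]
            g["z"][a] ^= g["x"][a]

    def cnot(self, a, b):    # CNOT with control a, target b
        for g in self.gens:
            g["r"] ^= g["x"][a] & g["z"][b] & (g["x"][b] ^ g["z"][a] ^ 1)
            g["x"][b] ^= g["x"][a]
            g["z"][a] ^= g["z"][b]

    def pauli_strings(self):
        label = {(0, 0): "I", (1, 0): "X", (0, 1): "Z", (1, 1): "Y"}
        return ["".join(label[(g["x"][i], g["z"][i])] for i in range(self.n))
                for g in self.gens]

# Prepare a Bell state: H on qubit 0, then CNOT 0 -> 1.
st = StabilizerState(2)
st.h(0)
st.cnot(0, 1)
print(st.pauli_strings())   # ['XX', 'ZZ'] -- the Bell-state stabilizers
```

Every gate touches only a few bits per generator, which is why Clifford-only circuits on hundreds of qubits are trivial for a laptop.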

The implication for supremacy: any supremacy circuit must contain non-Clifford gates (like T = R_z(\pi/4)). If every gate were Clifford, the circuit would be trivially classically simulable, and no supremacy claim could stand. Conversely, if a circuit has only a few non-Clifford gates, classical simulators can use stabilizer-rank methods to handle them efficiently.

Real supremacy circuits have \sim 1000 non-Clifford gates, which is enough to push stabilizer-rank methods out of reach. But: as quantum error-correction codes are layered on top of supremacy circuits, the effective Clifford-vs-non-Clifford balance matters again. Stabilizer techniques are a constant presence in classical simulation toolkits.

Technique 3: noise exploitation

The most subtle classical-spoofing technique is not a direct simulation improvement but an observation about physics: real quantum circuits have noise, and noise reduces quantum complexity.

A perfect quantum circuit on n qubits produces a pure state in a 2^n-dimensional Hilbert space. Classical simulation must track all of this. A noisy quantum circuit produces a mixed state — a probabilistic mixture of pure states — and if the noise rate per gate is high enough, the mixed state rapidly approaches the maximally mixed state (uniform over all basis states), which is trivial to simulate.

Aharonov, Gao, Landau, Liu, and Vazirani's 2022 paper, A polynomial-time classical algorithm for noisy random circuit sampling, made this formal:

Theorem (informal). For random quantum circuits with constant noise rate p > 0 per gate, there is a classical algorithm that samples from the circuit's output distribution (up to small total-variation distance) in time polynomial in the number of qubits.

In other words: any supremacy claim based on a noisy quantum circuit is, in principle, classically simulable in polynomial time — provided the noise rate is high enough. The devil is in the threshold: the theorem holds for constant noise rate, and modern experiments have noise rates around 10^{-3} per gate, small but non-zero.

For Sycamore's \sim 1000 gates at 10^{-3} per gate, the cumulative fidelity is 0.999^{1000} \approx 0.37. Each factor of e degradation in fidelity washes out roughly 1.4 bits (one nat) of quantum information. When the remaining information is less than O(\log n) bits, classical simulation becomes efficient. This puts a practical limit on the useful depth of any NISQ supremacy experiment.
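The fidelity arithmetic above can be checked in a few lines:

```python
import math

# Cumulative circuit fidelity under independent per-gate errors:
# F ~ (1 - eps)^N ~ exp(-eps * N).
eps, N = 1e-3, 1000
F = (1 - eps) ** N
print(f"F = {F:.3f}")                      # F = 0.368, i.e. about e^-1

# Remaining "quantum content" in bits: the log2(1/F) heuristic from the text.
bits = math.log2(1 / F)
print(f"~{bits:.1f} bits of purity left")  # ~1.4 bits of purity left
```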

[Figure: maximum supremacy depth versus gate fidelity, log scale. At 99.0% fidelity the useful depth is about 100 gates; at 99.9%, about 1,000; at 99.99%, about 10,000. A horizontal band marks the 2026 NISQ regime around 99.5–99.9%.]
As gate fidelity improves, the maximum depth at which quantum supremacy survives grows exponentially. At $99.0\%$ fidelity, useful depth is around $100$ gates; at $99.99\%$, around $10^4$. The region around $99.5$–$99.9\%$ is where $2026$ NISQ hardware lives — enough for shallow supremacy demonstrations, not enough for fault-tolerant computation. Classical noise-exploitation algorithms become more effective as depth exceeds this budget.

The big rebuttals, in detail

With the techniques in mind, the rebuttals become legible.

IBM 2019 — tensor networks plus secondary storage

Pednault, Gunnels, Nannicini, Horesh, and Wisnieff's preprint Leveraging secondary storage to simulate deep 54-qubit Sycamore circuits (arXiv:1910.09534) argued that Google's 10{,}000-year estimate overlooked two things:

  1. Tensor-network contraction is better than state-vector simulation. For Sycamore's depth-20 random circuits, the effective bond dimension in an MPS-like representation is manageable at the end of each time slice.
  2. Summit has 250 PB of disk storage, not just RAM. By streaming amplitudes between RAM and disk, the simulation can proceed without needing all 144 PB of state vector in main memory.

Their projected classical runtime: \sim 2.5 days. IBM did not actually run this simulation (nobody gave them Summit time for that), but the methodology was public and reviewable.

The effect on the supremacy claim: the per-experiment classical cost dropped by \sim 10^6. The ratio t_C / t_Q fell from 10^9 to about 10^3. Still a supremacy claim in some readings — t_Q = 200 s vs t_C = 2.2 \times 10^5 s — but a much narrower one.

Pan, Chen, Zhang 2021–2022 — GPU-accelerated tensor contraction

Pan, Chen, and Zhang (CAS, Institute of Theoretical Physics) attacked the same Sycamore task with specialised tensor-network contraction algorithms and GPU acceleration. Their paper Solving the sampling problem of the Sycamore quantum circuits reported a runtime of \sim 15 hours on a 512-GPU cluster, a quantum-to-classical ratio of \sim 270: still quantum-faster, but now within an order of magnitude where classical hardware improvements (better GPUs, better algorithms) could catch up.

Alibaba and Tencent — continued tensor-network improvements

Research groups at Alibaba Cloud and Tencent Quantum Lab have published further tensor-network refinements for both RCS and Gaussian boson sampling. Each paper pushes the boundary by identifying new structural features of specific experimental circuits to exploit.

Aharonov et al. 2022 — the general theorem

The most theoretically important rebuttal is Aharonov, Gao, Landau, Liu, and Vazirani's A polynomial-time classical algorithm for noisy random circuit sampling (arXiv:2211.03999). Rather than targeting a specific experiment, they proved that for random circuits with any constant per-gate noise rate, a classical algorithm can sample from the noisy output distribution in time polynomial in the number of qubits and the depth.

The implication: strict supremacy in the presence of noise is a fragile property. Supremacy claims must argue that their noise is low enough that the Aharonov-et-al threshold is not crossed. This is a far more nuanced claim than "this experiment proves quantum supremacy."

The fair-comparison debate

A recurring question: is it fair to compare a 2019 quantum experiment against a 2022 classical algorithm?

The answer depends on what "supremacy" is claiming. Three positions:

  • Strict: a supremacy claim asserts the device beats every classical algorithm, discovered or not. On this reading, comparing against a 2022 algorithm is entirely fair, and the 2019 claim was simply premature.
  • Contemporaneous: a supremacy claim is dated; it asserts the device beat the best classical algorithm known at announcement. On this reading, later classical advances narrow the gap but do not retroactively falsify the claim.
  • Asymptotic: what matters is scaling, not any single experiment. On this reading, per-experiment comparisons are a distraction from the exponential separation at larger sizes.

Most researchers today use a blend: a supremacy claim is a snapshot with an asymptotic promise. The snapshot can be softened by better classical algorithms; the asymptotic promise (extrapolated to larger quantum machines) remains intact.

Hype check. Quantum supremacy is slippery. A claim that looks rock-solid at announcement may be partly rolled back months later. This is not a sign of fraud — it is a structural feature of a field where the classical side is a moving target. When a new "X times faster than classical" headline appears, the correct response is: which classical algorithm is being compared against? Has anyone attacked the claim with tensor networks, noise exploitation, or stabilizer techniques yet? Wait a year and check back. The strictest kind of supremacy — fault-tolerant, error-corrected, running Shor's on RSA — is a different claim entirely, and when that arrives (years away), it will survive classical attack by virtue of the structural gap.

What survives classical spoofing

Classical spoofing has eroded many near-term claims, but certain quantum advantages appear structurally safe.

Shor's algorithm (factoring)

Peter Shor's polynomial-time quantum factoring algorithm (1994) produces a specific integer output — a factor of the input. It is not a sampling task; it is a decision-with-certificate task. There is no known classical algorithm that factors in polynomial time (best classical: general number field sieve, sub-exponential e^{O((\log n)^{1/3} (\log \log n)^{2/3})}).
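To get a feel for the size of that sub-exponential expression, one can evaluate it for an RSA-2048 modulus. This is a back-of-envelope sketch using the standard heuristic constant (64/9)^{1/3}; real GNFS cost models include lower-order terms this ignores.

```python
import math

def gnfs_log2_ops(bits: int) -> float:
    """Rough log2 of GNFS work for a 'bits'-bit modulus.

    Evaluates exp((64/9)^(1/3) * (ln N)^(1/3) * (ln ln N)^(2/3)),
    the standard heuristic asymptotic; constants and o(1) terms ignored.
    """
    ln_n = bits * math.log(2)
    c = (64 / 9) ** (1 / 3)
    return c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3) / math.log(2)

# RSA-2048 comes out near 2^117 operations by this crude estimate: far
# beyond any classical machine, which is why Shor's advantage is structural.
print(f"RSA-2048: ~2^{gnfs_log2_ops(2048):.0f} operations")
```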

For Shor's advantage to be "spoofed" classically, someone would have to invent a polynomial-time classical factoring algorithm — which would collapse much of cryptography. This is a far stronger complexity assumption than supremacy, and no classical-spoofing-style technique touches it.

Error-corrected quantum computing

Fault-tolerant quantum computing, once achieved, operates in a different regime. Logical qubits protected by error correction (surface code, colour codes) can run arbitrarily deep circuits with arbitrarily low error. Classical simulation of such machines is as hard as classical simulation of the ideal noiseless circuits — which the Aharonov et al. result does not touch, because their theorem crucially uses noise.

A quantum computer that executes 10^{10} logical gates on 1000 logical qubits is beyond any classical-spoofing technique. The cost to build such a machine is enormous, but the computational advantage, once built, is structurally unassailable.

Chemistry simulation on specific problems

For specific chemistry problems — notably, ground-state energies of molecules with strong electron correlation (like \mathrm{FeMoco} in nitrogenase) — quantum algorithms have a clean structural advantage. The problem has an inherent quantum structure that classical simulation cannot compactly represent (the wavefunction's entanglement is genuinely many-body).

This is an area of active research and an explicit goal of quantum roadmaps (IBM, Google, USTC all target this). Once demonstrated, useful quantum advantage on these problems would be robust to classical spoofing in a way that supremacy demos are not.

What spoofing has touched, and what it has not:

Partly spoofed:

  • Sycamore RCS 2019: 10{,}000 years → 2.5 days → 15 hours.
  • Jiuzhang boson sampling 2020: specific-structure spoofing attempts.
  • Shallow noisy circuits: Aharonov et al. 2022 general simulation.
  • Many "near-term" QML claims: dequantization results.

Survives spoofing:

  • Shor's factoring on fault-tolerant hardware: would require a collapse of classical factoring hardness.
  • Fault-tolerant quantum computing: no noise, so the Aharonov theorem does not apply.
  • Chemistry on strongly correlated molecules: structural quantum advantage.
  • Cryptographic primitives, lattice-based post-quantum schemes included.
The spoofing-resistant tasks are structurally different from supremacy benchmarks. Shor's factoring, fault-tolerant computation, and structural chemistry problems do not live on the edge of classical tractability — they are clearly beyond it, and classical improvements do not chip away at them. Supremacy demonstrations, by contrast, live exactly on the boundary where small classical advances matter.

Example 1: Tensor-network simulation of a $20$-depth random circuit on $53$ qubits

A walkthrough of why IBM's 2.5-day estimate for Sycamore was plausible — without going into full algorithmic detail.

Setup. Sycamore's supremacy circuits are 20 layers deep on 53 qubits with 2D grid connectivity. Each layer consists of single-qubit gates and a fixed pattern of fSim two-qubit gates on neighbouring pairs.

Step 1 — Bond dimension at layer k. After k layers of generic random gates, the bond dimension of an MPS representing the state grows roughly as:

\chi_k \approx \min(2^k, 2^{n/2}).

Why it grows and then plateaus: entanglement spreads through the circuit with each gate, doubling bond dimension per layer until it hits the maximum possible value 2^{n/2} (Schmidt rank bound across any bipartition). At that point the state is maximally entangled and no further compression is possible.

For Sycamore at n = 53 and k = 20: \chi_{20} \approx 2^{20} \approx 10^6 — below the maximum 2^{26.5} \approx 10^8. The state is not yet fully entangled.

Step 2 — Memory cost. An MPS with bond dimension \chi on n qubits requires O(n \chi^2) memory:

53 \times (10^6)^2 \times 16 \text{ bytes} \approx 8.5 \times 10^{14} \text{ bytes} = 850 \text{ TB}.

Why this is still feasible: Summit has \sim 250 petabytes of total storage (RAM plus disk). 850 TB fits. The raw state vector (144 PB) did not. Tensor-network compression cuts the memory by \sim 170x.

Step 3 — Time per sample. Computing a single output amplitude from an MPS takes O(n \chi^3) time:

53 \times (10^6)^3 = 5.3 \times 10^{19} \text{ operations per sample}.

On Summit at 200 petaFLOPS = 2 \times 10^{17} FLOP/s, each sample takes \sim 260 seconds. For 10^6 samples, this is \sim 3 \times 10^8 seconds \approx 8 years.
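Steps 1–3 can be reproduced numerically. The script below follows the same simplified model as the text (uniform \chi = 10^6, complex128 entries, Summit at 2 \times 10^{17} FLOP/s):

```python
n = 53
chi = 10**6                       # bond dimension after 20 layers (~2^20)
mem_bytes = n * chi**2 * 16       # O(n * chi^2), complex128 entries
flops_per_sample = n * chi**3     # O(n * chi^3) per output amplitude
summit = 2e17                     # Summit: ~200 petaFLOP/s

sec_per_sample = flops_per_sample / summit
total_years = sec_per_sample * 10**6 / (3600 * 24 * 365)

print(f"memory       ~ {mem_bytes / 1e12:.0f} TB")   # 848 TB
print(f"per sample   ~ {sec_per_sample:.0f} s")      # 265 s
print(f"10^6 samples ~ {total_years:.1f} years (no batching)")  # 8.4 years
```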

Step 4 — IBM's batching tricks. IBM's argument was that you do not need to compute amplitudes one at a time. By batching the contraction — computing amplitudes for many bitstrings in a single tensor-network sweep — and using disk storage for intermediate results, the per-sample cost drops substantially. Their projection gave \sim 2.5 days.

Why this illustrative calculation does not exactly match IBM's: real tensor-network contraction uses more sophisticated tricks (specific contraction orderings, block-sparse storage, GPU acceleration) that lower the constants significantly. This example is a simplification; the actual IBM paper has the full engineering.

Classical cost of simulating Sycamore via MPS (simplified model):

  metric                      value
  bond dimension \chi         \sim 10^6 (after 20 layers)
  memory cost n \chi^2        \sim 850 TB
  time per sample n \chi^3    \sim 260 s on Summit
  total for 10^6 samples      \sim 8 years naive; \sim 2.5 days with batching
A simplified cost breakdown for tensor-network simulation of Sycamore's supremacy circuit. Bond dimension after $20$ layers is $\sim 10^6$, giving $\sim 850$ TB of memory and $\sim 260$ seconds per output amplitude on a supercomputer. IBM's published methodology batched this across many bitstrings and used disk storage, projecting $\sim 2.5$ days total — a million times cheaper than Google's initial estimate.

Result. For Sycamore-scale circuits, tensor-network methods with aggressive batching bring the classical simulation cost from Google's 10{,}000-year figure down to a few days — the core of the IBM rebuttal. The quantum-classical ratio drops from \sim 10^9 to \sim 10^3, softening (but not eliminating) the supremacy claim.

Example 2: Noise-assisted classical simulation — how much error is too much

A toy calculation showing why gate-error rates drive supremacy claims.

Setup. Consider a random quantum circuit with n = 53 qubits, depth d = 20, and gate error rate \epsilon per gate. The total number of gates is roughly N = 1000. Each gate has fidelity 1 - \epsilon; the circuit's overall fidelity is approximately:

F \approx (1 - \epsilon)^N \approx e^{-\epsilon N}.

Step 1 — Fidelity at three noise levels. For \epsilon = 10^{-3} (Sycamore-level):

F \approx e^{-10^{-3} \cdot 1000} = e^{-1} \approx 0.37.

For \epsilon = 5 \times 10^{-3} (lower-quality hardware):

F \approx e^{-5} \approx 0.0067.

For \epsilon = 10^{-2} (noisy early NISQ):

F \approx e^{-10} \approx 4.5 \times 10^{-5}.

Why fidelity matters for classical simulation: fidelity is the overlap of the noisy circuit's output state with the ideal output state. When fidelity drops, the noisy output increasingly resembles a uniform random state — which is trivial to simulate classically.

Step 2 — Effective quantum content. The "amount of quantum information" the circuit carries scales roughly with \log_2(1/F) — the number of qubits' worth of purity remaining:

  • \epsilon = 10^{-3}: \log_2(1/0.37) \approx 1.4 — you have about 1.4 qubits of pure quantum content out of 53.
  • \epsilon = 5 \times 10^{-3}: \log_2(1/0.0067) \approx 7.2 qubits' worth.
  • \epsilon = 10^{-2}: \log_2(1/4.5 \times 10^{-5}) \approx 14.4 qubits' worth.
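The same F \approx e^{-\epsilon N} model, tabulated for the three noise levels:

```python
import math

def purity_bits(eps: float, n_gates: int = 1000):
    """Cumulative fidelity F ~ e^(-eps*N) and the log2(1/F) heuristic
    for the remaining 'pure' quantum content, in qubits' worth."""
    F = math.exp(-eps * n_gates)
    return F, math.log2(1 / F)

for eps in (1e-3, 5e-3, 1e-2):
    F, bits = purity_bits(eps)
    print(f"eps={eps:.0e}  F={F:.2e}  ~{bits:.1f} qubits of purity")
```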

Why this framing is misleading at face value: the "1.4 qubits of purity" for Sycamore sounds like too little to matter. But even a tiny amount of quantum content concentrated on specific correlations (detectable by XEB) is enough for a supremacy claim — XEB measures how much the noisy samples concentrate on outcomes the ideal circuit would favour, not how pure the overall state is. So the relationship between noise and supremacy is subtler than "more noise = no supremacy."

Step 3 — The Aharonov et al. regime. Aharonov, Gao, Landau, Liu, Vazirani's theorem kicks in when the noise rate is "constant" in a formal sense: \epsilon does not shrink as n grows. For any such \epsilon > 0, there is a classical polynomial-time algorithm that samples from the circuit's output distribution to any desired accuracy.

Step 4 — Concrete threshold estimate. The exact polynomial of the Aharonov et al. algorithm depends on the constants and the target accuracy; for Sycamore-like experiments, the practical implication is roughly: if your per-gate error rate exceeds \sim 1\%, classical simulation of a depth-d circuit is efficient (polynomial in n and d). Below 1\%, classical simulation remains exponential asymptotically but can still be efficient at small to moderate n.

Noise budget for a 53-qubit, 1000-gate circuit. Error per gate sets the classical-simulation difficulty:

  \epsilon per gate     circuit fidelity          classical regime
  10^{-3}               \sim 0.37                 hard (Sycamore)
  5 \times 10^{-3}      \sim 0.007                borderline
  10^{-2}               \sim 4.5 \times 10^{-5}   efficient (spoofable)

Aharonov et al. 2022: for any constant \epsilon > 0, a polynomial-time classical algorithm exists.
The effect of gate-error rate on classical simulation difficulty. At Sycamore-level noise ($10^{-3}$), the classical simulation remains hard per-experiment (though improving classical algorithms are narrowing the gap). At noisier levels ($10^{-2}$), classical simulation becomes efficient and supremacy claims evaporate. The Aharonov et al. theorem makes this qualitative picture rigorous: asymptotically, constant noise rates always admit polynomial-time classical sampling.

Result. Supremacy demonstrations live in a narrow noise regime. Even modest improvements in the gate-error rate (10^{-3} to 10^{-4}) dramatically increase the classical-simulation difficulty. Modern NISQ hardware sits precisely at the boundary — the exact location where every new classical algorithm can, in principle, erode the supremacy gap.

Common confusions

Going deeper

You have the cycle, the techniques, the big rebuttals, and the survivors. The rest of this section digs into: detailed tensor-network methods (MPS, PEPS, and contraction orderings); stabilizer-rank and magic-state simulation; the formal structure of the Aharonov et al. 2022 theorem; the cost-per-operation race between quantum and classical hardware; the distinction between strict supremacy, contemporaneous supremacy, and asymptotic supremacy; the scientific-ethics question of how supremacy claims should be made and verified; and what a truly unambiguous quantum-only task would require.

Tensor networks — MPS, PEPS, and contraction orderings

A matrix product state (MPS) on n qubits is a state of the form:

|\psi\rangle = \sum_{s_1, \ldots, s_n} \mathrm{tr}\left[ A^{(1)}_{s_1} \cdot A^{(2)}_{s_2} \cdots A^{(n)}_{s_n}\right] |s_1 s_2 \cdots s_n\rangle,

where each A^{(i)}_{s_i} is a \chi \times \chi matrix indexed by the physical state s_i \in \{0, 1\} of the i-th qubit. The parameter \chi is the bond dimension.
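The definition translates directly into code. A minimal numpy sketch (illustrative, using the periodic-boundary form of the formula above): an amplitude is the trace of a product of per-site matrices, so each amplitude costs O(n \chi^3) rather than O(2^n).

```python
import numpy as np

def mps_amplitude(tensors, bitstring):
    """Amplitude <s|psi> of a periodic-boundary MPS.

    tensors[i][s] is the chi x chi matrix A^{(i)}_s for site i.
    Cost: n matrix products of size chi, i.e. O(n * chi^3) total.
    """
    m = np.eye(tensors[0][0].shape[0])
    for A, s in zip(tensors, bitstring):
        m = m @ A[s]
    return np.trace(m)

# Product state |+>^n as a chi = 1 MPS: every amplitude is 2^(-n/2).
n = 4
plus = [np.array([[2**-0.5]]), np.array([[2**-0.5]])]
tensors = [plus] * n
amp = mps_amplitude(tensors, (0, 1, 1, 0))
print(round(float(amp.real), 4))   # 0.25  (= 2^-2)
```

Real simulators add truncation, gauge fixing, and optimised contraction orderings on top of this core operation.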

Projected entangled pair states (PEPS) generalise MPS to 2D — each site has tensors with multiple bond indices, one per neighbour. PEPS can describe states with area-law entanglement in 2D, making them appropriate for Sycamore-like topologies.

The key computational operation is contraction — summing over the bond indices to compute an amplitude. For MPS, contraction is efficient (O(n \chi^3)). For PEPS, exact contraction is \#\mathrm{P}-hard in general, but approximate contraction with controlled bond-dimension growth is tractable.

Choosing a good contraction ordering is a critical optimisation. IBM's 2019 rebuttal and Pan–Zhang's 2022 simulation both rely on carefully chosen orderings that minimise peak memory and total operations.

Stabilizer-rank and magic-state methods

Clifford circuits are exactly classically simulable in polynomial time. Non-Clifford gates (like T) introduce "magic" — a measure of non-stabilizerness. The stabilizer rank of a state is the minimum number of stabilizer states needed to write it as a linear combination.

The implication for supremacy: supremacy circuits must have enough non-Clifford structure to defeat stabilizer-rank methods. Sycamore's \sim 430 two-qubit fSim gates each contribute non-Clifford content; the stabilizer rank of the resulting state is extremely large, and stabilizer-rank methods are not competitive with tensor-network methods for RCS.

For error-corrected quantum computing, magic-state distillation is a critical primitive. A detailed treatment is beyond this chapter.

The Aharonov et al. 2022 theorem — formal structure

Aharonov, Gao, Landau, Liu, and Vazirani's A polynomial-time classical algorithm for noisy random circuit sampling proves, roughly:

Theorem. Let C be a random quantum circuit on n qubits with depth d, where each gate is followed by local depolarising noise with rate \epsilon > 0 (a constant, not shrinking with n). There exists a classical algorithm that samples from a distribution close in total variation (to any desired inverse-polynomial accuracy) to the circuit's noisy output distribution, in time polynomial in n and d.

The algorithm's core idea: noisy random circuits produce output distributions that concentrate around the uniform distribution as depth grows, with corrections that can be computed approximately using tensor-network truncation with small bond dimension. The noise provides the truncation: noisy states naturally have bounded effective entanglement.

The theorem is asymptotic — the polynomial may have a large constant, and for specific small-scale experiments, the tensor-network constants may still be large. But it establishes that any supremacy claim based on noisy NISQ hardware is, in principle, classically simulable with enough effort. This is an important boundary condition on the supremacy framework.

The cost-per-operation race

Classical hardware kept improving over 2020–2026: faster GPUs, exascale supercomputers, and cheap secondary storage all lower the cost of running a fixed tensor-network contraction. Quantum hardware improved in parallel: more qubits, higher gate fidelities, and the first error-corrected logical qubits.

The race between classical and quantum is happening on both sides. A classical algorithm that is 10^3x faster is not as significant as a quantum hardware scale-up of 2^{20}x in state-space. As quantum hardware approaches the error-correction threshold and the fault-tolerant regime, classical algorithms face an exponential wall they cannot climb.

Strict vs contemporaneous vs asymptotic supremacy

Three related but distinct claims:

  • Strict supremacy: the quantum device outperforms every classical algorithm, known or yet to be discovered. No experiment alone can establish this; it rests on complexity-theoretic assumptions.
  • Contemporaneous supremacy: the device outperforms the best classical algorithm known at the date of the claim. This is what supremacy experiments actually demonstrate, and what later rebuttals erode.
  • Asymptotic supremacy: the quantum approach scales better, so sufficiently large devices win regardless of constant-factor classical progress.

A careful reader can distinguish these three when evaluating any supremacy headline.

What a truly unambiguous quantum-only task would look like

For a supremacy claim to be robust to all classical attack, the task should have:

  1. A verifiable output: a specific answer with an efficiently checkable certificate, not a sampling statistic.
  2. Hardness resting on a well-studied structural assumption (like the classical hardness of factoring), not on "no better algorithm is currently known."
  3. Fault-tolerant, error-corrected execution, so that noise-exploitation results like Aharonov et al.'s do not apply.
  4. A super-polynomial gap, so that constant-factor classical improvements cannot close it.

Shor's algorithm on RSA-2048 meets all four criteria. The hardware to run it (roughly 20 million physical qubits with surface-code error correction) does not exist yet, but is the target of IBM's, Google's, and other roadmaps. When such a machine is built, the supremacy claim will be of a qualitatively different kind — a strict, structural, unambiguous claim, not a sampling benchmark.

Indian context — verification methodology and the National Quantum Mission

India's National Quantum Mission, launched in 2023 with an \sim ₹6000 crore (\sim \$720 million) eight-year budget, explicitly targets both quantum supremacy and useful quantum advantage. The Mission's documents are carefully written in a way that reflects the lessons of the USTC–Google–IBM debate: supremacy is a milestone, not an endpoint; useful quantum advantage is the harder goal; verification methodology matters.

For Indian researchers, the practical implication is direct: supremacy-style claims from NQM-funded labs are expected to be benchmarked against state-of-the-art classical simulation (tensor networks, noise exploitation, stabilizer methods) before publication, not after.

A cultural note: the verification-methodology debate has broad resonance in India's scientific community, where healthy skepticism of "we beat the West" claims is a feature of how the NQM documents are framed. The Indian approach (cross-checking quantum claims against state-of-the-art classical algorithms, before publishing) is explicitly designed to avoid the Google–IBM pattern of public disputes.

Where this leads next

References

  1. Edwin Pednault et al. (IBM), Leveraging secondary storage to simulate deep 54-qubit Sycamore circuits (2019) — the week-after rebuttal that cut Google's 10{,}000 years to 2.5 days. arXiv:1910.09534.
  2. Feng Pan, Keyang Chen, and Pan Zhang, Solving the sampling problem of the Sycamore quantum circuits (2021) — the 15-hour GPU-cluster simulation. arXiv:2111.03011.
  3. Dorit Aharonov, Xun Gao, Zeph Landau, Yunchao Liu, Umesh Vazirani, A polynomial-time classical algorithm for noisy random circuit sampling (2022) — the general noisy-simulation theorem. arXiv:2211.03999.
  4. Frank Arute et al. (Google AI Quantum), Quantum supremacy using a programmable superconducting processor (Nature 574, 2019) — the original Sycamore paper. Nature article.
  5. Google Quantum AI, Quantum error correction below the surface code threshold (Nature 638, 2024, Willow processor) — updated supremacy/advantage claims on error-corrected hardware. Nature article.
  6. Scott Aaronson, Quantum supremacy: the gloves are off (blog post, 2019) — a technically careful discussion of the Google claim and the spoofing landscape, written by one of the architects of the sampling-supremacy framework.