In short

In October 2019, Google's Sycamore chip — 53 active superconducting transmon qubits — ran 20-layer random quantum circuits and collected about a million output samples per circuit in 200 seconds of wall-clock time. Verification was done via the linear cross-entropy benchmark (XEB): Sycamore's measured XEB was \approx 0.0024, far above the uniform-random baseline of 0 and consistent with the \approx 0.002 predicted by the device's noise model, i.e. statistically consistent with a noisy but correct quantum sampler. Google estimated that reproducing the same sampling fidelity on Summit (the world's fastest supercomputer at the time) would take about 10{,}000 years — hence the supremacy claim. One week later, IBM published a preprint arguing that with tensor-network contraction and secondary-storage tricks, the same task could be simulated classically in about 2.5 days. Pan, Chen, and Zhang pushed this further in 2021–2022, reaching \sim 15 hours on GPU clusters. The per-experiment gap has narrowed substantially, but the asymptotic trend (wider gap at more qubits and deeper circuits) still favours quantum. Follow-up supremacy experiments — USTC's Zuchongzhi (66-qubit superconducting) and Jiuzhang (113-photon boson sampling) — have extended the regime. As of 2026, the Sycamore result is considered the first credible experimental supremacy demonstration, with the important caveat that classical simulators have not stopped improving.

On 23 October 2019, Nature published a paper titled Quantum supremacy using a programmable superconducting processor, whose author list — 77 researchers from Google AI Quantum and academic collaborators — begins "Frank Arute, Kunal Arya, Ryan Babbush …". The paper reported that Google's Sycamore chip had, in 200 seconds, done a computation that they estimated would take Summit (the world's fastest supercomputer at the time, housed at Oak Ridge National Laboratory) approximately 10{,}000 years.

Newspapers around the world — The New York Times, The Hindu, BBC News — led with the word "supremacy." Social media took it further: "quantum computers have arrived," "RSA is broken," "we are living through a singularity." None of that was true. What Google had done was narrower, deeper, more careful — and, in the days after publication, almost immediately contested by IBM.

This chapter walks through the experiment in detail. You already know the shape of the quantum supremacy definition — the three conditions (quantum tractability, classical hardness, verifiability) that Google's claim must satisfy. Here you meet the experiment that instantiated those conditions for the first time, the verification protocol (XEB), the classical-simulation rebuttal from IBM, the follow-up classical improvements, and the current state of the argument in 2026.

The chip: Sycamore, 54 qubits, one broken

Sycamore is a superconducting quantum processor — a small chip of aluminium deposited on silicon, cooled in a dilution refrigerator to about 20 millikelvin (colder than outer space). Each qubit is a transmon — a small superconducting circuit built around a Josephson junction, forming an anharmonic oscillator whose lowest two energy levels are used as |0\rangle and |1\rangle.

The chip was designed with 54 qubits in a rectangular-ish grid. On the specific run-up to the supremacy experiment, one qubit failed to calibrate reliably, leaving 53 usable qubits. The team worked around it — the circuits were adapted to skip the bad qubit and use only the connected 53-qubit sub-grid.

[Figure: Sycamore chip layout — 54 qubits in a 2D grid (roughly 9 columns by 6 rows), each coupled to its nearest neighbours; one qubit shaded as inactive, 53 active. Single-qubit gate fidelity ≈ 99.84%, two-qubit gate fidelity ≈ 99.38%, readout ≈ 96.2%.]
Sycamore's physical layout. $54$ transmon qubits on a 2D grid, each coupled to its nearest neighbours via tunable couplers. One qubit failed to calibrate, leaving $53$ active. The limited connectivity means long-range interactions between qubits need to be compiled down to chains of nearest-neighbour gates — a constraint that affects what circuits are practical to run.

The random circuits

The task was not to run some specific algorithm. It was to run a random circuit — a circuit whose gates are picked from a template, with specific single-qubit gate choices randomised — and sample from its output distribution.

The circuit structure, in each layer:

  1. Single-qubit random gates. Each of the 53 qubits receives one of three gates (\sqrt{X}, \sqrt{Y}, \sqrt{W} where W = (X + Y)/\sqrt{2}), chosen independently at random. These gates are all 90° rotations about different axes in the Bloch sphere equatorial plane.

  2. Two-qubit fixed gates. A specific two-qubit gate — a fixed fSim gate close to \sqrt{\mathrm{iSWAP}} with a controlled-phase kick — is applied to a specific subset of nearest-neighbour pairs. The pattern of which pairs gate together in which layer rotates through four configurations (call them A, B, C, D), chosen to maximise the entanglement spread across the chip.

Repeat for m layers. For the supremacy experiment, m = 20. The total number of gates is roughly 53 \times 20 = 1060 single-qubit gates and \approx 430 two-qubit gates.

[Figure: one cycle of the Sycamore RCS circuit, 5 of 53 qubits shown — a layer of random single-qubit gates (one of √X, √Y, √W per qubit), fSim two-qubit gates on pattern-A pairs, another random single-qubit layer, fSim on pattern-B pairs; ×20 cycles total.]
One layer of the Sycamore random circuit, schematically. Each qubit gets a randomly-chosen single-qubit gate (one of $\sqrt{X}$, $\sqrt{Y}$, $\sqrt{W}$), followed by fSim two-qubit gates on a fixed subset of nearest-neighbour pairs. The pattern of which pairs are gated rotates between four configurations A, B, C, D across cycles. Twenty of these layers together constitute the full supremacy circuit.

For each sampled circuit instance (one particular random choice of single-qubit gates in each layer), the task was: initialise all 53 qubits to |0\rangle, apply the circuit, measure all qubits in the computational basis, record the 53-bit outcome. Repeat around 10^6 times to build up an empirical distribution.
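To make the task concrete, here is a minimal state-vector sketch of a Sycamore-style random circuit. It deliberately simplifies: a 1D chain of 5 qubits instead of the 2D grid, fSim(π/2, π/6) as a stand-in for the chip's calibrated two-qubit gate, and 10 cycles instead of 20. All names are illustrative (NumPy, self-contained):

```python
import numpy as np

def sqrt_gate(P):
    """Square root of any 2x2 Hermitian P with P @ P = I (up to global phase)."""
    return (1 + 1j) / 2 * (np.eye(2) - 1j * P)

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
W = (X + Y) / np.sqrt(2)
ONE_QUBIT_GATES = [sqrt_gate(P) for P in (X, Y, W)]   # sqrt(X), sqrt(Y), sqrt(W)

def fsim(theta, phi):
    """fSim(theta, phi); Sycamore's gate is approximately fSim(pi/2, pi/6)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0,       0,       0],
                     [0, c,       -1j * s, 0],
                     [0, -1j * s, c,       0],
                     [0, 0,       0,       np.exp(-1j * phi)]])

def apply_gate(state, gate, qubits, n):
    """Apply a 2^k x 2^k gate to the given qubits of an n-qubit state vector."""
    k = len(qubits)
    psi = state.reshape([2] * n)
    g = gate.reshape([2] * (2 * k))
    psi = np.tensordot(g, psi, axes=(list(range(k, 2 * k)), list(qubits)))
    psi = np.moveaxis(psi, list(range(k)), list(qubits))
    return psi.reshape(-1)

def random_circuit_state(n=5, cycles=10, seed=0):
    """State after a Sycamore-style random circuit on a 1D chain of n qubits."""
    rng = np.random.default_rng(seed)
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                                    # |00...0>
    patterns = [[(0, 1), (2, 3)], [(1, 2), (3, 4)]]   # alternating pair layers
    for c in range(cycles):
        for q in range(n):                            # random single-qubit layer
            state = apply_gate(state, ONE_QUBIT_GATES[rng.integers(3)], [q], n)
        for pair in patterns[c % 2]:                  # fixed two-qubit fSim layer
            state = apply_gate(state, fsim(np.pi / 2, np.pi / 6), list(pair), n)
    return state

state = random_circuit_state()
probs = np.abs(state) ** 2                            # Born-rule distribution
samples = np.random.default_rng(1).choice(2 ** 5, size=1000, p=probs)
print("norm check:", probs.sum())                     # unitarity: should be ~1.0
```

The structure mirrors the experiment: random single-qubit layer, fixed entangling layer, repeat, then sample bitstrings from the Born-rule distribution of the final state.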

The state-space problem

Before understanding why this is hard classically, let's see what quantum state the circuit creates.

At the start of the circuit, the qubits are in |0\rangle^{\otimes 53} — a product state with one complex amplitude (equal to 1). After one layer of Hadamard-like single-qubit gates, the state is close to a uniform superposition over all 2^{53} basis states. After two-qubit gates, the state becomes highly entangled, and after 20 layers, the state is a very generic-looking entangled state with random-ish amplitudes spread over all 2^{53} basis states.

The number 2^{53} is worth holding in your head. It is approximately 9.0 \times 10^{15} — nine quadrillion. Storing a state vector with that many complex amplitudes requires:

2^{53} \times 16 \text{ bytes} = 9.0 \times 10^{15} \times 16 = 1.44 \times 10^{17} \text{ bytes} \approx 144 \text{ petabytes}.

Why 16 bytes per amplitude: a complex number with double-precision real and imaginary parts takes 8 + 8 = 16 bytes. Lower-precision (single-precision, 8 bytes) would cut this in half but introduce rounding errors that compound over 20 layers of gates.

For comparison, Summit — the world's fastest supercomputer at the time — had about 250 petabytes of total storage (counting RAM and disk). Sycamore's state vector barely fits. The Frontier supercomputer (launched 2022) has about 700 petabytes; Fugaku is in the same regime. Storing the state is technically possible on a few top-tier supercomputers; simulating the quantum evolution (applying gates and tracking amplitudes) on that state is much more expensive than just storing it.
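The memory arithmetic is easy to reproduce. A few lines of plain Python (no dependencies) print the exponential blow-up, at 16 bytes per double-precision complex amplitude:

```python
# Memory for a full double-precision state vector, per qubit count.
for n in (40, 50, 53, 54, 55, 60):
    petabytes = (2 ** n) * 16 / 1e15   # 16 bytes per complex128 amplitude
    print(f"{n} qubits: {petabytes:10.2f} PB")
# 53 qubits lands at ~144 PB; each extra qubit doubles the figure.
```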

Linear cross-entropy benchmark (XEB)

The verification question: given that Sycamore produced samples x_1, x_2, \ldots, x_M \in \{0,1\}^{53}, how do you know these samples actually came from the circuit's output distribution, rather than from a random noise source?

The linear cross-entropy benchmark is Google's answer. The procedure:

  1. For each sample x_i, compute the ideal amplitude. Let C be the intended circuit. Classically compute — for small subsections, tensor-network methods allow this even for 53-qubit circuits — the probability p_i = |\langle x_i | C | 0^{53}\rangle|^2 that an ideal quantum circuit would produce sample x_i.

  2. Average over observed samples. Define:

\mathrm{XEB} = 2^{53} \cdot \frac{1}{M} \sum_{i=1}^{M} p_i - 1.
  3. Interpret. For a noisy quantum device, this score lies between 0 (uniformly random samples — no quantum coherence) and 1 (perfect circuit execution — every sample lands where the ideal amplitudes put probability mass).

Why this formula: the ideal amplitudes p_i average, in expectation over samples from a uniform distribution, to 1/2^{53}. Multiplying by 2^{53} normalises this to 1. Subtracting 1 shifts the baseline to 0 for uniform sampling. A correct quantum circuit — which produces samples with probability proportional to p_i itself (the Porter–Thomas distribution) — gives \langle p_i \rangle \approx 2/2^{53} in expectation, yielding \mathrm{XEB} = 1.

A positive XEB score demonstrates that the observed samples concentrate on outcomes the ideal circuit would favour, which cannot happen by accident without the device actually implementing the circuit.
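The whole protocol is easy to simulate at small scale. The sketch below uses a stand-in for the ideal circuit probabilities (an exponentially distributed random vector, which has the Porter–Thomas shape of random-circuit outputs) rather than a real circuit; it scores a faithful sampler near 1 and a uniform sampler near 0:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 12                      # small enough to enumerate all 2^n outcomes
N = 2 ** n

# Stand-in for ideal circuit probabilities: exponentially distributed weights,
# normalised -- the Porter-Thomas shape of random-circuit output distributions.
p_ideal = rng.exponential(size=N)
p_ideal /= p_ideal.sum()

def xeb(samples, p_ideal):
    """Linear XEB: 2^n times the mean ideal probability of the samples, minus 1."""
    return len(p_ideal) * p_ideal[samples].mean() - 1

M = 200_000
good = rng.choice(N, size=M, p=p_ideal)   # faithful sampler (follows the circuit)
noise = rng.integers(N, size=M)           # uniform-random sampler (pure noise)

print(f"faithful sampler XEB ~= {xeb(good, p_ideal):.3f}")   # close to 1
print(f"uniform sampler  XEB ~= {xeb(noise, p_ideal):.3f}")  # close to 0
```

A noisy-but-real quantum sampler interpolates between the two: its XEB equals (approximately) the fraction of runs in which no error occurred — which is exactly how Google used the score.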

Sycamore's measured XEB: \approx 0.0024 (with statistical error bars that placed it comfortably above 0). A score of 0.0024 is small, but small in a specific way: it is roughly the whole-circuit fidelity you expect when per-operation errors of a few parts in a thousand (and readout errors of a few percent) multiply across \sim 1500 operations — for random circuits, XEB is in effect an estimator of that whole-circuit fidelity. The key point: 0.0024 is statistically well above the null hypothesis of uniform random sampling, under which XEB is 0 in expectation with an error bar of \sim 10^{-3} for M = 10^6 samples from one circuit instance, and a few \times 10^{-4} once samples from all instances are combined.

[Figure: XEB verification flow — (1) sample bitstrings x₁, x₂, … (M ≈ 10⁶); (2) ideal amplitudes p_i = |⟨x_i|C|0⟩|² computed classically via tensor-network methods (expensive but feasible); (3) XEB = 2ⁿ·⟨p_i⟩ − 1, with 0 for uniform random, 1 for the ideal circuit, 0.0024 for Sycamore.]
The XEB verification protocol. Observed samples are cross-referenced against the theoretical output amplitudes of the ideal circuit, computed classically on subsections. A positive XEB score means the observed samples are concentrated on outcomes the ideal circuit would favour — evidence of genuine quantum coherence, not noise.

The classical-simulation argument

With 53 qubits and a 20-layer circuit, what does classical simulation cost?

Approach 1 — State-vector simulation. Store all 2^{53} complex amplitudes; at each gate, compute the action of the gate on the state. Cost: 144 petabytes of storage (barely feasible on a top supercomputer), and roughly 1500 gate applications, each touching all 2^{53} amplitudes — \sim 10^{19} floating-point operations. On Summit at 200 petaFLOPS that is under a minute of arithmetic per circuit, but arithmetic is not the bottleneck: the state does not fit in RAM, so every gate must stream the full 144 PB through memory and disk, and this data movement, repeated for every gate and every circuit instance, inflates the runtime by orders of magnitude. (Once a final state is computed, drawing 10^6 samples from it is comparatively cheap.)

Why this is not the 10{,}000-year estimate: the 10{,}000-year number came from a more conservative analysis involving full XEB verification (not just sample generation) and Google's specific simulator implementation with various overheads. Different methodological choices yield different extrapolations — this is part of why the claim is sensitive to how you compare.

Approach 2 — Tensor-network simulation. Represent the circuit as a network of small tensors and contract it, rather than storing the full state. Cost: exponential in the largest "bond dimension" encountered during contraction, which is set by how much entanglement crosses the cuts that the contraction ordering makes through the circuit's 53 \times 20 spacetime volume. For depth 20 a naive ordering is no better than state-vector simulation, but with clever contraction orderings and secondary-storage tricks, the cost is dramatically reduced.
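Why generic random-circuit states defeat tensor-network compression: the bond dimension needed at a cut equals the Schmidt rank of the state across that cut, which an SVD makes concrete. The experiment below is illustrative only (a rank check on three hand-built states, not a simulation method):

```python
import numpy as np

def bond_dimension(state, n, cut, tol=1e-10):
    """Schmidt rank of an n-qubit state across the cut qubits [0, cut)."""
    mat = state.reshape(2 ** cut, 2 ** (n - cut))
    s = np.linalg.svd(mat, compute_uv=False)
    return int((s > tol).sum())

n = 8
product = np.zeros(2 ** n)
product[0] = 1.0                      # |00000000> -- no entanglement
ghz = np.zeros(2 ** n)
ghz[0] = ghz[-1] = 2 ** -0.5          # (|0...0> + |1...1>)/sqrt(2)
rng = np.random.default_rng(0)
generic = rng.normal(size=2 ** n) + 1j * rng.normal(size=2 ** n)
generic /= np.linalg.norm(generic)    # generic random state, like deep RCS output

for name, psi in [("product", product), ("GHZ", ghz), ("generic", generic)]:
    print(name, bond_dimension(psi, n, n // 2))
# product -> 1, GHZ -> 2, generic -> maximal rank 2^(n/2)
```

A product state contracts with bond dimension 1, GHZ with 2, but a generic state (the kind a 20-layer random circuit produces) is maximally entangled across the middle cut, so nothing is compressed — which is why the classical speedups have to come from cleverness about the contraction ordering, not from entanglement structure.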

Google's published estimate: 10{,}000 years on Summit using their own state-vector-based simulator. This was the basis of the supremacy claim.

The IBM rebuttal: 2.5 days, not 10{,}000 years

One week after Google's paper, IBM published a preprint (Pednault, Gunnels, Nannicini, Horesh, Wisnieff; arXiv:1910.09534) arguing that Google's 10{,}000-year estimate was off by a factor of \sim 10^6.

The IBM argument, simplified:

  1. Use tensor-network contraction, not state-vector. The Sycamore circuit, despite its 20 layers, has a specific structure that admits efficient tensor-network representation.

  2. Swap amplitudes between RAM and disk aggressively. Summit has \sim 250 PB of disk storage. By carefully batching the computation and swapping blocks of amplitudes between memory and disk, the simulation can be done without needing all 144 PB in RAM simultaneously.

  3. Result: \sim 2.5 days, not 10{,}000 years. IBM did not actually run this simulation — their paper was a projection based on their method's scaling — but the methodology was published and reviewable.

This dropped the classical-time estimate by a factor of \sim 10^{6}. The quantum-to-classical ratio fell from \sim 10^9 to \sim 10^3 — still a supremacy claim in some reading, but a much narrower one. IBM also argued, more pointedly, that "supremacy" was the wrong frame; the right frame was "useful quantum advantage," which neither Google nor anyone else had yet demonstrated.

[Figure: Google's 2019 estimate (state-vector simulation on Summit, ≈ 10,000 years, the basis of the supremacy claim) vs IBM's 2019 rebuttal (tensor-network + disk swap on the same Summit hardware, ≈ 2.5 days, projected rather than executed) — same hardware, same task, different simulator, a ~10⁶ drop in the estimate.]
Google's claim and IBM's rebuttal, side by side. Same hardware (Summit), same task (RCS on Sycamore's $53$ qubits), completely different classical simulation methods — and a $10^6$ factor difference in estimated runtime. The IBM number was a projection based on methodology, not an executed simulation.

Further classical improvements (2020–2024)

The IBM 2.5-day estimate did not end the classical simulation story. Subsequent work has continued to tighten the gap — most notably Pan, Chen, and Zhang (2021), whose tensor-network contraction on GPU clusters generated samples passing the XEB test in \sim 15 hours.

What is striking: these classical algorithms exploit the specific structure of Sycamore's circuits (the 2D grid topology, the particular fSim gate, the exact 20-layer depth) to compress the simulation. A slightly deeper circuit, a slightly different gate set, or a slightly different topology can make the classical simulation dramatically harder.

Follow-up experiments: USTC and others

The 2019 Google result prompted a flurry of follow-up supremacy experiments. Two Chinese platforms stand out.

USTC Jiuzhang (photonic, 2020–2022)

Jiuzhang (named after the classical Chinese mathematical text Nine Chapters on the Mathematical Art) is a series of photonic boson sampling experiments at the University of Science and Technology of China, led by Jian-Wei Pan.

Jiuzhang is fundamentally different hardware from Sycamore — it is a photonic system, not superconducting; it is room-temperature (aside from the photon sources); and it samples from boson-sampling distributions rather than RCS output. But the supremacy claim pattern is similar: a well-defined task, a measurable quantum runtime, an estimated super-polynomial classical simulation cost.

USTC Zuchongzhi (superconducting, 2021–2024)

Zuchongzhi (named after a 5th-century Chinese mathematician) is USTC's superconducting supremacy platform.

Each Zuchongzhi iteration extends the Sycamore-style claim and stays ahead of the classical simulation frontier.

IBM Heron and Condor (2023–2024)

IBM's own superconducting hardware has grown: Heron (133 qubits, 2023) and Condor (1121 qubits, 2023). These machines are designed for error correction and useful advantage rather than supremacy — they target industrial problems (chemistry, optimisation) rather than RCS benchmarks.

[Figure: timeline 2019–2024 of post-Sycamore supremacy experiments — Google Sycamore (53 qubits, superconducting, 2019); USTC Jiuzhang (76–255 photons, photonic, 2020–22); USTC Zuchongzhi (56–105 qubits, superconducting, 2021–24); IBM Heron/Condor (133–1121 qubits, superconducting, 2023+).]
The supremacy landscape from Sycamore ($2019$) forward. Two independent hardware platforms (superconducting, photonic) and multiple experimental groups (Google, USTC, IBM) have demonstrated supremacy or advanced the frontier. The Sycamore claim is no longer a one-off; it is an established capability regime that multiple organisations have reached.

Example 1: The size of the $53$-qubit state vector

A concrete computation to pin down why classical simulation is expensive.

Setup. Sycamore has 53 active qubits. A quantum state on 53 qubits is a complex vector in a 2^{53}-dimensional Hilbert space — i.e., 2^{53} complex amplitudes.

Step 1 — Count the amplitudes. The number of basis states for 53 qubits is:

2^{53} = 9{,}007{,}199{,}254{,}740{,}992 \approx 9.0 \times 10^{15}.

Why every basis state matters: in a generic entangled state (the kind produced by a random circuit), essentially all 2^{53} amplitudes are non-zero and cannot be compressed without losing accuracy. Classical simulation must track all of them.

Step 2 — Bytes per amplitude. A complex number in double precision (IEEE 754) is 16 bytes (8 bytes for real part, 8 for imaginary).

Step 3 — Total memory. Multiply amplitudes by bytes:

9.0 \times 10^{15} \times 16 \text{ bytes} = 1.44 \times 10^{17} \text{ bytes} = 144 \text{ petabytes}.

Why petabyte: 1 petabyte = 10^{15} bytes = 1000 terabytes = 1 million gigabytes. A typical laptop has a terabyte of disk. Sycamore's state vector is 144{,}000 laptops' worth of disk — and for state-vector simulation it all has to be accessible at RAM-like speeds.

Step 4 — Compare to available hardware.

  • Laptop: 8–64 GB RAM. Cannot store 144 PB. Cannot simulate.
  • Workstation: up to \sim 2 TB RAM. Cannot store. Cannot simulate.
  • Summit (2019, the fastest supercomputer at the time): \sim 10 PB total RAM, 250 PB disk. Cannot fit 144 PB in RAM; disk-swap approaches can manage, at the cost of per-gate slowdowns.
  • Frontier (launched 2022): \sim 10 PB RAM, 700 PB disk. Similar regime.

Step 5 — Add one qubit. Going from 53 to 54 qubits doubles the amplitudes to 2^{54}: 288 petabytes. 55 qubits: 576 PB. 60 qubits: 18 exabytes (1.8 \times 10^{19} bytes). At 60 qubits, no supercomputer on earth can store the state vector.

[Figure: log-scale bar chart of state-vector memory vs qubit count (40 to 70), doubling with each qubit, with reference lines at 1 TB (typical laptop disk), 1 PB (high-end supercomputer RAM), and 1000 PB (largest storage available); Sycamore marked at 53 qubits.]
Memory for full state-vector storage vs qubit count, log scale. Each additional qubit doubles the required memory. Sycamore's $53$ qubits sit at the edge of what supercomputer-grade storage can handle; $60$ qubits is far past any current hardware.

Result. A full state-vector classical simulation of Sycamore requires 144 PB of storage — on the ragged edge of what the best supercomputers can handle, and completely infeasible for any smaller machine. This is why state-vector simulation costs t_C \gg t_Q, and why tensor-network methods (which avoid storing the full state) are where the classical-simulation action is. This memory wall is the core reason the supremacy claim has force.

Example 2: Reading Sycamore's XEB score

The measured XEB of 0.0024 is small in absolute terms. What does it tell you?

Step 1 — The uniform baseline. If Sycamore were outputting uniformly random bitstrings (ignoring the circuit entirely), the average ideal amplitude over samples would be:

\langle p_i \rangle_{\text{uniform}} = \frac{1}{2^{53}}.

Plugging into the XEB formula:

\mathrm{XEB}_{\text{uniform}} = 2^{53} \cdot \frac{1}{2^{53}} - 1 = 0.

Why zero: random sampling weights every bitstring equally, so the average amplitude is just 1/2^{53}, giving XEB = 0 by construction. Any positive XEB is evidence the sampler is not uniform.

Step 2 — The ideal-circuit baseline. If Sycamore were running the circuit perfectly (zero noise), the sampled outcome probabilities would follow the Porter–Thomas distribution — a heavy-tailed (exponential) distribution over outcome probabilities predicted by random-matrix theory for random quantum circuits. For this distribution:

\langle p_i \rangle_{\text{ideal}} = \frac{2}{2^{53}},

giving \mathrm{XEB}_{\text{ideal}} = 2^{53} \cdot (2/2^{53}) - 1 = 1.

Step 3 — Sycamore's measurement. The observed XEB is 0.0024. The statistical uncertainty from a single circuit's \sim 10^6 samples is about 1/\sqrt{M} \approx 10^{-3}; combining samples across circuit instances brings the uncertainty down to roughly 2 \times 10^{-4}. So 0.0024 \pm 0.0002: unambiguously positive, unambiguously below 1.

Step 4 — Interpret. Sycamore's digital error model predicts the whole-circuit fidelity as a product of per-operation fidelities. With single-qubit gate errors of \sim 1.6 \times 10^{-3}, two-qubit gate errors of \sim 6 \times 10^{-3}, and readout errors of \sim 4 \times 10^{-2} per qubit — the chip's headline numbers — over \sim 1060 single-qubit gates, \sim 430 two-qubit gates, and 53 readouts:

F \approx \prod_g (1 - \varepsilon_g) \cdot \prod_q (1 - \varepsilon_{\mathrm{ro}}) \approx 0.9984^{1060} \times 0.9938^{430} \times 0.962^{53} \approx 0.002.

For random circuits in the Porter–Thomas regime, XEB is (to a good approximation) an unbiased estimator of this whole-circuit fidelity F. The measured 0.0024 is consistent with the error-model prediction, and far above zero. This is the evidence that Sycamore is really running the circuit, not producing noise.
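The error-budget arithmetic is worth checking by hand. Multiplying the headline per-operation fidelities quoted earlier for the chip (single-qubit 99.84%, two-qubit 99.38%, readout 96.2%) over this chapter's approximate operation counts lands in the same decade as the measured XEB:

```python
# Error-budget check: product of per-operation fidelities over operation counts.
f1q, f2q, fro = 0.9984, 0.9938, 0.962   # 1q gate, 2q gate, readout fidelities
n1q, n2q, nro = 1060, 430, 53           # gate counts and measured qubits

F = (f1q ** n1q) * (f2q ** n2q) * (fro ** nro)
print(f"predicted whole-circuit fidelity: {F:.4f}")   # ~0.0016, vs measured 0.0024
```

The exact figure depends on which calibration numbers you plug in (simultaneous vs isolated gate benchmarks differ), but the order of magnitude — a few tenths of a percent — is robust.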

Step 5 — Well above the uniform baseline. The standard error of XEB under the uniform-sampling null is roughly 1/\sqrt{M} — about 10^{-3} for M = 10^6 samples — so a single circuit instance places 0.0024 only a couple of standard errors above zero. Aggregating the tens of millions of samples Google collected across circuit instances shrinks the error bar to roughly 2 \times 10^{-4}, putting the measurement more than ten standard errors above zero. This is not a marginal claim; it is a clear statistical signature of quantum-correlated sampling.

Why this matters: XEB is not just "a number that looks small." Under the null hypothesis (uniform random sampling) its expected value is exactly 0, with a tight, well-understood error bar — so even a small positive value, measured with enough samples, is highly significant.

[Figure: XEB axis from 0 to 1, not to scale — 0 marked "uniform random (no quantum signal)", 0.0024 marked "Sycamore measured", 1 marked "ideal circuit (perfect gates)"; Sycamore sits numerically close to 0 but statistically far above the uniform baseline.]
Sycamore's $\mathrm{XEB} = 0.0024$ on the supremacy scale. Numerically close to the uniform-random baseline ($0$) and far from the ideal-circuit value ($1$), but statistically far above the noise floor — a clean experimental signature that the circuit is producing quantum-correlated samples despite NISQ-era noise.

Result. Sycamore's XEB of 0.0024 is about what you expect from a noisy quantum device running the intended 20-layer circuit with per-operation errors at the measured levels. Aggregated across circuit instances, it sits more than ten standard errors above the uniform-random baseline, providing strong statistical evidence that the circuit is being executed correctly. The number is small in absolute terms only because the whole-circuit fidelity is small at this depth — any clearly positive XEB in this regime is a meaningful signal.

Common confusions

Going deeper

You have the Sycamore chip design, the random circuit structure, the XEB verification, the classical simulation estimates on both sides of the rebuttal, and the post-Sycamore landscape. The rest of this section collects the technical content: XEB statistics in detail, tensor-network simulation methods, the noise model for Sycamore, the post-supremacy research frontier, IBM's shift to useful advantage, and Indian contributions.

XEB statistics in detail

XEB has a specific statistical structure. For an ideal n-qubit circuit with Porter–Thomas output statistics, the rescaled probabilities 2^n p_i have mean 1 and variance of order 1 under the uniform-sampling null, so the XEB estimate from M samples has standard error roughly 1/\sqrt{M}.

For M = 10^6, the XEB standard error is therefore about 10^{-3} — a single circuit instance puts Sycamore's 0.0024 only a few standard errors above zero. The headline significance comes from aggregation: combining the tens of millions of samples collected across circuit instances shrinks the combined error bar to a few \times 10^{-4} and lifts the detection to well over ten standard deviations above the uniform-random null.

Noise model for Sycamore

The dominant noise sources in Sycamore, ranked by impact:

The total circuit fidelity predicted by this model is \sim 0.002–0.003, consistent with the measured XEB of 0.0024.

Tensor-network simulation techniques

The classical simulation improvements since 2019 have all been variations on tensor networks. The core ideas:

Each of these techniques has its regime of applicability. A clever quantum circuit designer can choose circuits that sit in the hard regime for all known classical methods — which is exactly what Sycamore, Zuchongzhi, and Jiuzhang did.

Post-supremacy research: from demos to useful advantage

The 2020s research direction has largely shifted from "prove supremacy on a new task" to "demonstrate quantum advantage on a task people actually care about." Specific targets:

Supremacy-style experiments continue (Zuchongzhi 3.0, Jiuzhang 3.0) but the community's attention has moved to utility.

IBM's framing and the "useful quantum advantage" milestone

IBM's public position, articulated in the Quantum Developer Conferences starting 2021, distinguishes:

This framing treats supremacy as necessary but insufficient. Google, USTC, and the Indian NQM have similar roadmaps.

Indian context — Indian researchers and the NQM's experimental targets

India's contribution to the supremacy landscape is currently at the theoretical and algorithmic level rather than experimental hardware. Key points:

The Indian ambition is not to repeat the Sycamore experiment (that race is already run) but to be competitive on the next wave: useful quantum advantage on chemistry, optimisation, or communication problems, and to contribute to error-corrected quantum computing. Post-quantum cryptography migration of Aadhaar, UPI, and banking infrastructure — anticipating the eventual fault-tolerant era — is a parallel priority under the NQM.

Where this leads next

References

  1. Frank Arute et al. (Google AI Quantum), Quantum supremacy using a programmable superconducting processor (Nature 574, 505–510, 2019) — the original Sycamore paper. arXiv:1910.11333 (companion).
  2. Edwin Pednault, John A. Gunnels, Giacomo Nannicini, Lior Horesh, Robert Wisnieff (IBM), Leveraging secondary storage to simulate deep 54-qubit Sycamore circuits (2019) — the IBM rebuttal preprint. arXiv:1910.09534.
  3. Han-Sen Zhong et al. (USTC), Quantum computational advantage using photons (Science 370, 1460–1463, 2020) — Jiuzhang 1.0. arXiv:2011.02801.
  4. Feng Pan, Keyang Chen, Pan Zhang, Solving the sampling problem of the Sycamore quantum circuits (2021) — the 15-hour classical simulation. arXiv:2111.03011.
  5. Wikipedia, Sycamore processor — the curated reference with a running summary of classical rebuttals and follow-up work.
  6. Scott Aaronson, Quantum supremacy: the gloves are off (blog post, 2019) — a carefully technical, readable discussion of the Google claim and the IBM rebuttal by one of the architects of the sampling-supremacy framework.