In short

Quantum advantage is problem-specific. There is no single "quantum is faster"; there is a ledger with ticks and crosses. Factoring and discrete logarithm (Shor's algorithm) — strong, needs fault tolerance, credibly arriving 2035-2045. Quantum simulation — strong, credible advantage on 100-200 orbital chemistry problems 2028-2032; the original motivation for quantum computing. Hidden-subgroup-problem instances over abelian groups — strong, subsumes factoring and discrete log. Generic combinatorial optimisation via QAOA — weak to none, no clear asymptotic advantage demonstrated, most benchmarks favour classical heuristics. Quantum machine learning via kernels or VQC — weak, many proposed speedups de-quantised by Tang and successors (2018-2024). Unstructured search via Grover — quadratic at best, marginal in practice because the oracle is expensive and the \sqrt{N} win is eaten by hardware constants. Quantum key distribution — real but narrow; useful where physical-key exchange is impossible (satellite links, specific nuclear-scale industries), irrelevant for most internet traffic. Amplitude estimation for Monte Carlo — quadratic, unclear ROI against well-tuned classical MC. The honest 2026 verdict: two rock-solid wins (factoring, simulation), several narrow niches (QKD, some graph problems), one overrated technique (QAOA optimisation), one substantially de-hyped field (quantum ML). Most "quantum is coming for X" claims in contemporary press are about the third or fourth category. A calibrated reader disbelieves the default and looks up the specific case.

You have read the theory in lessons about quantum speedups. You have read the near-term empirics in utility-scale quantum computing. This chapter does the dirty work in between: a ledger. Column 1 is the problem. Column 2 is the best classical algorithm and its cost. Column 3 is the best quantum algorithm and its cost. Column 4 is the honest gap. Column 5 is the arrival timeline given current hardware trends. The rows are ordered roughly from strongest quantum advantage to weakest.

Read it like a report card. Some subjects get an A; some get a D; most get a B or C with specific notes. There is no single grade for quantum computing. The only honest way to discuss advantage is case by case.

How to read each case

Every case below follows the same five-line template.

The case-by-case advantage template. Row 1: Problem — one-line specification of what you are solving. Row 2: Best classical — algorithm + asymptotic cost + practical implementation. Row 3: Best quantum — algorithm + asymptotic cost + qubit / gate count. Row 4: Gap — exponential / polynomial / quadratic / marginal / none. Row 5: Timeline — when credible advantage arrives given 2026 hardware.
The case-by-case template used throughout this chapter. Each problem gets five rows and a verdict. The template forces the honest-gap question to be answered explicitly rather than waved over.

Case 1 — Factoring (Shor's algorithm)

Problem. Given a composite integer N with two large prime factors p and q, find p and q. The cryptographic instance: N is a 2048-bit RSA modulus.

Best classical. The general number-field sieve (GNFS) — sub-exponential in \log N, specifically \exp(O((\log N)^{1/3} (\log \log N)^{2/3})). For 2048-bit N, this is roughly 2^{112} operations; practical factoring records stand at RSA-250 (829 bits, factored 2020, about 2700 core-years on a cluster).

Best quantum. Shor's algorithm, which runs in O(\log^3 N) gates. For 2048-bit N, the gate count is roughly 10^{10} logical gates — a genuinely enormous number, but polynomial in \log N rather than sub-exponential. Physical qubit count, assuming surface-code error correction at physical error rates near 10^{-3}: approximately 2 \times 10^7 physical qubits in the Gidney-Ekerå 2021 estimate.

Gap. Exponential. There is no polynomial-time classical factoring algorithm known, and decades of number-theoretic effort suggest one is unlikely. Shor's is polynomial. The gap is genuinely super-polynomial and widely believed to be structural — Shor's algorithm is the central evidence for the conjecture that \mathrm{BQP} \supsetneq \mathrm{BPP}.

Timeline. Credible RSA-2048 attack requires ~2 \times 10^7 physical qubits at current error rates. The 2024 state-of-the-art is \sim 10^3 physical qubits. Projections for 20 million physical qubits with d = 25 surface code: 2040-2045 on aggressive extrapolation, 2045-2055 on conservative. Verdict: the gap is real, the algorithm works, the hardware gap is 15-20 years.
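The projection arithmetic above is simple exponential extrapolation. A minimal sketch, assuming qubit counts double at a fixed cadence; the current-count, start-year, and cadence values are illustrative assumptions, not vendor roadmaps:

```python
import math

def arrival_year(target_qubits, current_qubits=1e3, current_year=2024,
                 years_per_doubling=2.0):
    """Year at which target_qubits is reached under steady exponential scaling.
    All parameters are illustrative assumptions, not roadmap commitments."""
    doublings = math.log2(target_qubits / current_qubits)
    return current_year + doublings * years_per_doubling

# ~2e7 physical qubits for RSA-2048:
#   1 year per doubling (aggressive)    -> late 2030s
#   2 years per doubling (conservative) -> early 2050s
```

Varying the cadence between one and three years per doubling spans roughly 2038 to 2067, which is why published arrival estimates disagree so widely.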

Example 1: When does Shor's actually beat GNFS?

Setup. You want to estimate — numerically — at what key size N a fault-tolerant quantum computer with realistic gate times would factor faster than the best classical GNFS implementation on today's hardware. This tells you the crossover point — the N below which classical is still better, above which quantum wins.

Step 1. Write the classical cost. GNFS runtime is T_C(N) = \exp\!\left(c (\ln N)^{1/3} (\ln \ln N)^{2/3}\right) with c \approx 1.923. For N = 2^{1024} (1024-bit), this evaluates to about 2^{80} operations; for N = 2^{2048}, about 2^{112}. On a cluster doing 10^{14} ops/second, 2^{112} ops takes \sim 10^{20} seconds — roughly 3 trillion years. Why this is the relevant classical baseline: GNFS is the best-known classical factoring algorithm and has been iteratively optimised for decades; the probability that a dramatically better classical algorithm will appear is low, though not zero.

Step 2. Write the quantum cost. Shor's circuit for N bits uses \sim 3N^3 logical Toffoli gates (the dominant cost, from modular exponentiation). Each logical gate needs \sim 100 physical cycles with a realistic surface code. At a physical gate time of 1 \mus, a logical gate takes \sim 100 \mus. For N = 2048, 3 \cdot 2048^3 \approx 2.6 \times 10^{10} logical gates. Runtime: 2.6 \times 10^{10} \cdot 10^{-4}\,\text{s} = 2.6 \times 10^6 seconds, or about 30 days. Why 30 days is a realistic estimate: this uses the Gidney-Ekerå 2021 analysis, which accounts for magic-state distillation overhead, parallel cores, and typical surface-code cycle times. The number can be pushed lower with more parallelism and higher with more conservative error budgets.

Step 3. Find the crossover. Set T_C(N) = T_Q(N) and solve. Classical sub-exponential vs quantum polynomial means the crossover is at small N. Concretely: for N = 2^{600} (600-bit), the GNFS formula above gives roughly 2^{69} operations, about 50 days on the 10^{14}-op/s cluster; quantum Shor's takes 3 \cdot 600^3 logical gates at 100 \mu\text{s} each, about 18 hours. The crossover lands near 600 bits, with the exact value sensitive to the assumed constants. For N below the crossover, classical is competitive; above it, quantum is faster given the quantum machine exists. Why the crossover is small: the exponent in classical runtime grows as (\log N)^{1/3}, slowly but unboundedly; the quantum runtime grows as (\log N)^3, polynomially. Polynomial always beats sub-exponential at sufficient size.
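The crossover arithmetic in Steps 1-3 can be checked numerically. A sketch under stated assumptions: the un-normalised GNFS exponent with c = 1.923 (which overshoots calibrated security estimates such as 2^{80} for 1024-bit keys), a 10^{14}-op/s classical cluster, and 100 microseconds per logical quantum gate:

```python
import math

def gnfs_ops(bits, c=1.923):
    """Heuristic GNFS cost: exp(c (ln N)^(1/3) (ln ln N)^(2/3)), un-normalised."""
    ln_n = bits * math.log(2)
    return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

def classical_seconds(bits, ops_per_s=1e14):
    """GNFS wall-clock on an assumed 1e14-op/s cluster."""
    return gnfs_ops(bits) / ops_per_s

def shor_seconds(bits, logical_gate_s=1e-4):
    """Shor wall-clock: ~3 n^3 logical gates at an assumed 100 us each."""
    return 3 * bits ** 3 * logical_gate_s

def crossover_bits():
    """Smallest key size at which the quantum estimate beats the classical one."""
    return next(b for b in range(100, 4096)
                if shor_seconds(b) < classical_seconds(b))
```

With these constants the crossover lands a little below 600 bits; the exact value moves with every assumed constant, which is why the chapter quotes it only to one significant figure.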

Step 4. Why nobody cares about 600-bit factoring. 600-bit RSA has been factored classically (RSA-250 at 829 bits was factored in 2020). RSA keys used in practice are 2048 bits or larger. The interesting crossover is not "where is quantum faster" but "where is quantum faster on a key size still in use". That is \geq 2048 bits, where classical is infeasible and quantum (when built) would take 30 days. Why the practical question is different: key-length standards respond to classical attacks, not quantum ones. RSA-2048 is safe against classical GNFS for decades; it is not safe against a hypothetical Shor's machine. The cryptanalytic urgency comes from the timing of machine arrival, not the mathematical crossover.

Step 5. Mosca's theorem on migration. Michele Mosca's 2013 framework: if T_{\text{secret}} is the time your data must remain confidential, T_{\text{migrate}} is the time to migrate to post-quantum cryptography, and T_{\text{quantum}} is the years until a cryptographically relevant quantum computer exists, then you must begin migrating whenever T_{\text{secret}} + T_{\text{migrate}} > T_{\text{quantum}}. For sensitive data needing 30 years of secrecy, with 10 years to migrate and T_{\text{quantum}} = 15 years, the inequality is 30 + 10 > 15 — migrate now. Why this is the actually-important framing: the crossover from Step 3 is theoretical; Mosca's inequality is the real-world decision rule. Indian banks, defence systems, and Aadhaar infrastructure all need to do this calculation.
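Mosca's inequality from Step 5 is one line of arithmetic; a minimal sketch with the chapter's example numbers (the function name is ours):

```python
def must_migrate_now(t_secret_years, t_migrate_years, t_quantum_years):
    """Mosca's rule: begin migrating once secrecy lifetime plus migration time
    exceeds the expected arrival of a cryptographically relevant machine."""
    return t_secret_years + t_migrate_years > t_quantum_years

# Chapter example: 30-year secrecy, 10-year migration, machine in 15 years.
# 30 + 10 > 15 -> migrate now.
```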

Result. Quantum factoring beats classical at any key size above \sim 600 bits once the machine exists. The machine does not exist; on the projections above, it arrives between 2040 and 2055. Mosca's inequality is already satisfied for sensitive long-lived data. The advantage gap is real; the urgency is from migration math, not from a machine running today.

Shor vs GNFS — classical sub-exponential vs quantum polynomial. A log-log plot with key size in bits on the x-axis from 100 to 4096 and runtime in seconds on the y-axis. The classical GNFS curve rises super-polynomially, steeply; the quantum Shor curve rises polynomially, more gently. The two curves cross around 600 bits. A vertical line at 2048 bits marks the current RSA key size: classical GNFS there is \sim 10^{20} seconds while Shor is \sim 10^{6} seconds.
Classical GNFS (sub-exponential) vs quantum Shor's (polynomial) as functions of RSA key size. The curves cross around 600 bits — below that, classical is competitive; above, quantum wins once the machine exists. At 2048 bits, the gap is roughly 10^{14}\times in quantum's favour. The practical urgency is not the gap but Mosca's migration timing.

Case 2 — Quantum simulation of chemistry and materials

Problem. Given a Hamiltonian H describing the electrons in a molecule or lattice of atoms, estimate the ground-state energy E_0 (or dynamics under H) to chemical accuracy (1 kcal/mol for chemistry).

Best classical. For small systems (up to ~20 orbitals): CCSD(T) — coupled-cluster with single, double, and perturbative triple excitations — scales as O(n^7) in number of orbitals. Gold standard; reaches chemical accuracy. For ~20-100 orbitals: DMRG (density-matrix renormalisation group) — effective for 1D-like systems, struggles for 2D. For strongly-correlated 2D systems (transition-metal catalysts, high-T_c superconductors) at 50+ orbitals: classical methods fail — no classical algorithm reaches chemical accuracy in practical time.

Best quantum. Quantum Phase Estimation (QPE) for ground-state energy, scaling as O(\text{poly}(n)) in orbitals with error correction. VQE (Variational Quantum Eigensolver) for the NISQ regime. Logical qubit requirement: 2n qubits (two per orbital for spin-up and spin-down). For a 100-orbital FeMoco (iron-molybdenum cofactor of nitrogenase, the enzyme that fixes nitrogen for plant fertilisers): roughly 200 logical qubits, 10^9-10^{10} T-gates.

Gap. Exponential for large strongly-correlated systems. CCSD(T) fails at modest orbital counts for non-weakly-correlated molecules; DMRG struggles in 2D; full configuration interaction is exact but exponential. Quantum phase estimation is polynomial in orbital count. The gap is genuine and large.

Timeline. Chemistry advantage arrives earliest because the required logical-qubit count is small: 50-200 logical qubits suffice for ground-state calculations of interesting catalysts. Credibly 2028-2032 on Quantinuum H-series scaled up, IBM Starling-class, or a Microsoft/topological platform if the physics works. Verdict: the strongest near-term win, and the one most likely to land first with a convincing commercial demonstration.

Materials and strongly-correlated physics

Beyond chemistry, quantum simulation targets include:

Each has a credible quantum-advantage story at a 100-1000 logical qubit scale. Each is 5-10 years further out than small-molecule chemistry because of qubit count.

Case 3 — Hidden-subgroup-problem instances (abelian)

Problem. Given black-box function f: G \to S on a group G with the promise that f(x) = f(y) \iff x y^{-1} \in H for a hidden subgroup H, find H.

Best classical. For a generic abelian group presented as a black box, \Omega(|G|^{1/2}) queries (random sampling plus collision finding). When the group carries explicit structure, as in the factoring and discrete-log instances over \mathbb{Z}_N^*, classical algorithms can exploit it, but the best known remain sub-exponential (GNFS and its index-calculus relatives).

Best quantum. Polynomial in \log|G|. The abelian HSP algorithm is the abstract framework from which Shor's (period finding, G = \mathbb{Z}), Simon's (hidden XOR, G = \mathbb{Z}_2^n), and discrete-log algorithms all derive.

Gap. Exponential, uniformly for abelian G. The HSP over abelian groups is the cleanest large class of problems with a known exponential quantum speedup.

Timeline. Same as Shor's — any abelian HSP instance of practical interest is a cryptographic application, and the hardware timeline is 2035-2045. Verdict: the theoretical foundation for every exponential quantum speedup we know. Practical applications are cryptographic and share Shor's timeline.

Non-abelian HSP — the open frontier

For non-abelian groups (dihedral, symmetric, general matrix groups), no general polynomial-time algorithm is known. Kuperberg's algorithm gives sub-exponential but not polynomial speedups for dihedral HSP; generic symmetric-group HSP ties to graph isomorphism, which went from "hard" to "quasipolynomial" in Babai's 2015 classical algorithm. The non-abelian frontier is a 30-year-old open problem; expecting near-term advances is not realistic.

Case 4 — Quantum machine learning

Problem. Given a classical dataset, produce a model (classifier, regressor, generative) whose performance on held-out data is better or faster-to-train than classical models.

Best classical. Transformer-based deep networks (2023-2026), classical kernel methods, gradient-boosted trees, classical SVM. All enormously developed, running on highly optimised GPU / TPU hardware. State-of-the-art on most practical ML benchmarks is classical.

Best quantum. Several proposals:

Gap. Weak to none in practice. The canonical story:

The de-quantisation chain

In 2009 Harrow, Hassidim, and Lloyd proposed HHL, a quantum algorithm solving linear systems A\mathbf{x} = \mathbf{b} (more precisely, preparing a quantum state proportional to the solution vector) in \tilde O(\log N) for sparse, well-conditioned A — an apparent exponential speedup over classical O(N). HHL underlies many quantum-ML proposals.

In 2018 Ewin Tang, then an undergraduate, published a series of "de-quantised" classical algorithms matching HHL-based quantum ML at similar complexity, provided you have quantum-like access to the classical data (sample access with \ell_2-norm weighting). Tang's original paper de-quantised quantum recommender systems; successor work (Tang 2019, Gilyén-Lloyd-Tang 2018, Chia et al 2020) extended the technique to quantum PCA, quantum least squares, and large chunks of the quantum-ML stack.

The structure of the argument: HHL's assumed quantum access model implicitly requires a classical data structure that is almost as powerful as HHL's quantum access. If you have to build the quantum access, you might as well use a classical sampling structure — and then classical algorithms match the quantum asymptotic.
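The access-model point can be made concrete. Below is a minimal sketch of the length-squared (\ell_2) row sampling that quantum-inspired algorithms assume, used here to build an unbiased estimate of a Gram matrix; this illustrates only the sampling primitive, and real de-quantised algorithms (Tang 2018 and successors) build far more machinery on top of it:

```python
import numpy as np

rng = np.random.default_rng(0)

def length_squared_sample(A, k):
    """Sample k row indices with probability proportional to squared row norms,
    the classical analogue of the 'quantum access' HHL-style proposals assume."""
    p = (A ** 2).sum(axis=1)
    p = p / p.sum()
    return rng.choice(len(A), size=k, p=p), p

def approx_gram(A, k=2000):
    """Unbiased sketch of A.T @ A from k length-squared row samples."""
    idx, p = length_squared_sample(A, k)
    rows = A[idx]
    weights = 1.0 / (k * p[idx])          # importance weights keep it unbiased
    return (rows.T * weights) @ rows
```

The expected Frobenius error falls as \|A\|_F^2 / \sqrt{k}, independent of the number of rows, which is the "logarithmic in N" flavour that de-quantised algorithms exploit.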

What remains of quantum ML

After de-quantisation, the remaining plausible quantum-ML advantages:

  1. Models of quantum data — classifying quantum states produced by a quantum experiment directly. No classical access model matches this because the input is genuinely quantum.
  2. Specific kernel methods — Liu, Arunachalam, Temme (2021) proved a quantum-kernel speedup over classical kernels on a specific constructed problem. The problem is artificial and no natural-dataset advantage follows.
  3. Quantum generative models for quantum states — produce samples from a distribution that is naturally quantum. Useful for quantum simulation pipelines, not for classical ML.

Timeline. No practical quantum-advantage demonstration on classical ML tasks is expected in the 2026-2030 window. The field is an active research area but has largely de-hyped since 2018. Verdict: an aggressively over-promised corner of quantum computing, and the one where the honest 2026 assessment is "there is no clear advantage on classical ML tasks." Quantum-data ML remains open and interesting.

Case 5 — Combinatorial optimisation (QAOA)

Problem. Given a combinatorial objective f: \{0,1\}^n \to \mathbb{R} (MaxCut, TSP, max-SAT, portfolio), find x^* = \arg\max_x f(x).

Best classical. Depends on problem. Mixed-integer linear programming (Gurobi, CPLEX) for structured problems; simulated annealing, parallel tempering, and tabu search for unstructured ones; specialised algorithms (Goemans-Williamson for MaxCut with 0.878-approximation). For most practical instances at scale, classical heuristics find excellent solutions in seconds.

Best quantum. QAOA (Quantum Approximate Optimization Algorithm, Farhi-Goldstone-Gutmann 2014): a variational algorithm alternating problem-Hamiltonian and mixing-Hamiltonian steps. At depth p, the approximation ratio improves with p, but so do the classical baselines it must beat. Quantum annealing (D-Wave) for Ising-formulated problems.

Gap. None-to-marginal as of 2026. Multiple studies:

No convincing demonstration — theoretical or experimental — of QAOA beating the best classical heuristic on a commercially meaningful optimisation problem exists as of 2026.

Timeline. Unclear. Some researchers believe QAOA at very high depth on fault-tolerant hardware could yield modest polynomial speedups on specific structured problems; others believe the advantage is zero. Without a clean theoretical argument for advantage (unlike factoring, which has Shor's), the field has been unable to identify a specific problem where QAOA is demonstrably better.

Hype check. Of all quantum-computing fields, combinatorial optimisation is the one where consulting-firm reports and vendor marketing most aggressively overshoot the scientific consensus. McKinsey and BCG reports routinely project tens of billions of dollars of "quantum optimisation value" by 2030. The scientific literature, by contrast, has not produced a single reproducible benchmark where QAOA beats a well-tuned classical heuristic on a commercially meaningful instance. When you read a quantum-optimisation claim, ask: (a) what is the classical baseline? (simulated annealing at appropriate depth, not a naïve brute force). (b) what is the instance size? (a toy 20-variable MaxCut is not representative). (c) is there a complexity-theoretic argument for the advantage? (usually, no). If any of these fails, discount the claim heavily.
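Point (a) deserves emphasis: the baseline must be a genuine heuristic. Below is a minimal simulated-annealing MaxCut sketch of the sort any QAOA claim should be compared against; it is deliberately simple (a production baseline would use incremental cut updates, restarts, and tuned schedules):

```python
import math
import random

def maxcut_value(edges, assign):
    """Total weight of edges crossing the cut."""
    return sum(w for u, v, w in edges if assign[u] != assign[v])

def simulated_annealing_maxcut(n, edges, steps=20000, t0=2.0, seed=0):
    """Single-flip simulated annealing with a linear cooling schedule."""
    rng = random.Random(seed)
    assign = [rng.randint(0, 1) for _ in range(n)]
    cur = best = maxcut_value(edges, assign)
    best_assign = assign[:]
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9   # linear cooling, avoid t == 0
        v = rng.randrange(n)
        assign[v] ^= 1                        # propose flipping one vertex
        new = maxcut_value(edges, assign)
        if new >= cur or rng.random() < math.exp((new - cur) / t):
            cur = new
            if cur > best:
                best, best_assign = cur, assign[:]
        else:
            assign[v] ^= 1                    # reject: flip back
    return best, best_assign
```

On a toy 4-cycle this recovers the optimal cut of 4; the point is that this thirty-line baseline, not brute force, is what a quantum claim must beat.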

Verdict: the most aggressively over-promised and under-delivering area. Expect deflating benchmark results and fewer legitimate advantage claims over the 2026-2030 window.

Case 6 — Unstructured search (Grover)

Problem. Given a function f: \{0, \ldots, N-1\} \to \{0,1\} with exactly one x^* satisfying f(x^*) = 1, find x^*.

Best classical. O(N) queries — no way around examining each input in the worst case.

Best quantum. Grover's algorithm: O(\sqrt{N}) queries. Optimal by BBBV (Bennett-Bernstein-Brassard-Vazirani 1997).

Gap. Quadratic. Real, provable, and provably optimal.

Why the quadratic speedup is marginal in practice

Several factors erode Grover's \sqrt{N} advantage when you move from oracle complexity to wall-clock time:

  1. The oracle is expensive. If the "database" is a literal list of N items, each quantum query requires a circuit that examines the list — O(N) gates per query. Grover's O(\sqrt{N}) queries cost O(N^{1.5}) gates, worse than classical O(N).

  2. Error correction overhead. \sqrt{N} logical queries on a fault-tolerant machine cost \sqrt{N} \cdot C_{\text{FT}} gates where C_{\text{FT}} is the per-logical-gate cost of error correction. For N = 2^{80}, \sqrt{N} = 2^{40}; with C_{\text{FT}} \sim 10^4, total gate count is 2^{40} \cdot 10^4 \approx 10^{16} gates. A classical brute force of 2^{80} = 10^{24} operations on a 10^{14}-op/second classical cluster takes 10^{10} seconds. 10^{16} quantum gates at 1 \mus per gate take 10^{10} seconds. Tie — the quadratic constant is eaten by fault-tolerance overhead.

  3. Parallelisation is hard on quantum. Classical brute force trivially parallelises: M machines do N/M work each. Grover's is inherently sequential — you cannot simply split a Grover search among M quantum machines to get M\times speedup (Zalka 1999 showed root-M parallelisation at best). Parallelisation erodes the quantum advantage further.

  4. Cryptographic constants. For AES-128, Grover's \sqrt{2^{128}} = 2^{64} iterations cost, with error correction, \sim 2^{64} \cdot 10^4 \approx 2^{78} gates. The classical attack, 2^{128} \approx 3.4 \times 10^{38} operations, would occupy 10^{10} machines running at 10^9 ops/second for roughly a trillion years. Both numbers are astronomical, and the "Grover halves your effective key length" calculation glosses over the fact that both regimes are infeasible. AES-256 under Grover's retains 2^{128}-equivalent security, which is still infeasible.
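The wall-clock arithmetic in items 2 and 4 is easy to reproduce. A sketch with assumed constants (10^4 fault-tolerance overhead per logical gate, 1 microsecond physical gate time, a 10^{14}-op/s classical cluster):

```python
def grover_wallclock_s(space_bits, ft_overhead=1e4, gate_time_s=1e-6):
    """Grover: 2^(n/2) iterations, each paying the fault-tolerance overhead."""
    return 2 ** (space_bits / 2) * ft_overhead * gate_time_s

def classical_wallclock_s(space_bits, ops_per_s=1e14):
    """Classical brute force on an assumed 1e14-op/s cluster."""
    return 2 ** space_bits / ops_per_s
```

For an 80-bit search space both estimates come out near 10^{10} seconds, the "tie" quoted in item 2; for AES-128 both are astronomically infeasible.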

Verdict: Grover's quadratic speedup is theoretically real but practically marginal. For cryptographic "Grover cracks AES" claims, the fix is trivial — double the key size, problem solved. For database "Grover searches faster" claims, they are almost always fake — real databases use indices, not linear scans.

Case 7 — Quantum key distribution (BB84 and descendants)

Problem. Alice and Bob want to establish a shared secret key over an untrusted channel, with security guaranteed by physics rather than by computational hardness.

Best classical. Classical key distribution via post-quantum cryptography (Kyber, Dilithium) — secure under the assumption that lattice problems are hard for quantum computers. Or key distribution by pre-shared physical keys (diplomatic pouch, certificate authority).

Best quantum. BB84 and descendants (E91, B92, continuous-variable QKD). Security is based on the no-cloning theorem plus disturbance-detection by measurement: any eavesdropper necessarily introduces detectable errors. Provably secure against any attack consistent with quantum mechanics.

Gap. Physical vs computational. QKD provides information-theoretic security (secrecy holds even against an attacker with unlimited computational power); post-quantum cryptography provides computational security (secrecy holds conditional on lattice problems being hard).

Timeline. QKD is deployed today, in niches:

The commercial case is narrow because:

QKD's strongest case is long-term archival secrecy for data that must remain confidential for 50+ years — national secrets, sensitive medical records — where Mosca's theorem makes the "quantum threat" inequality binding.

Verdict: QKD is real, deployed, and narrowly useful. It is not replacing internet TLS; it will coexist with post-quantum cryptography in specific high-assurance channels.

Case 8 — Quantum Monte Carlo and amplitude estimation

Problem. Estimate the expectation value \mu = \mathbb{E}_{x \sim p}[f(x)] for a distribution p and a function f. Applications: option pricing, risk estimation, integration in high dimension.

Best classical. Monte Carlo: N samples give standard error O(1/\sqrt{N}). Reaching precision \varepsilon requires N = O(1/\varepsilon^2) samples. Variance reduction (control variates, importance sampling) reduces the constant, not the scaling.

Best quantum. Amplitude estimation (Brassard-Høyer-Mosca-Tapp 2002): reaches precision \varepsilon in O(1/\varepsilon) quantum queries — a quadratic speedup in sample complexity.

Gap. Quadratic. Real asymptotic speedup.

When the quadratic speedup matters

Financial applications are the most-advertised target. A risk calculation at precision \varepsilon = 10^{-4} needs \sim 10^8 classical Monte Carlo samples but, in principle, only \sim 10^4 quantum queries. In practice:

  1. The quantum machine needs amplitude-encoding of the input distribution (typically O(\log N) qubits plus O(\text{poly}(\log N)) gates per sample in the best case).
  2. Fault-tolerance overhead costs ~10^4 per logical gate.
  3. The classical Monte Carlo runs on highly-parallelised GPU clusters.

Net: the quadratic speedup survives asymptotically but is often overwhelmed by constants for realistic finance problems. Detailed studies (Chakrabarti et al 2020, Stamatopoulos et al 2019) estimate break-even at \varepsilon \sim 10^{-6} or smaller — a regime most financial risk calculations do not require.
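The break-even claim can be reproduced from two assumed constants: the wall-clock cost of one classical sample and of one fault-tolerant quantum query. A sketch; the 10^{-8} s and 10^{-2} s figures are illustrative assumptions chosen to land in the regime the cited studies report:

```python
def classical_mc_seconds(eps, sample_s=1e-8):
    """Classical Monte Carlo: eps^-2 samples at sample_s seconds each."""
    return sample_s / eps ** 2

def quantum_ae_seconds(eps, query_s=1e-2):
    """Amplitude estimation: eps^-1 coherent queries at query_s seconds each."""
    return query_s / eps

def break_even_eps(sample_s=1e-8, query_s=1e-2):
    """Quantum wins once t_q / eps < t_c / eps^2, i.e. eps < t_c / t_q."""
    return sample_s / query_s
```

With these constants break-even sits at \varepsilon = 10^{-6}; most financial risk numbers stop at 10^{-3}-10^{-4}, which is the whole ROI problem.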

Timeline. Unclear. Quadratic speedup for specific high-precision scientific MC (quantum chemistry Monte Carlo, stochastic PDE integration) is credible around 2030+ with fault-tolerant machines. Verdict: real quadratic advantage, but commercial ROI against modern GPU MC is unclear.

Case 9 — A handful of specific graph and matrix problems

Best-known problems with proven polynomial quantum advantages:

Each is interesting theoretically. None has yet been the basis of a practical advantage, because the constant factors and oracle implementations make wall-clock comparisons unclear.

Verdict: a fertile research area, an active source of algorithm papers, but not yet a source of commercial products.

Example 2: VQE for FeMoco — where quantum actually beats classical

Setup. FeMoco is the iron-molybdenum cofactor at the active site of nitrogenase, the enzyme that reduces atmospheric N_2 to NH_3 for plant fertilisers. Industrial ammonia production (Haber-Bosch, 1913) consumes about 2% of global energy. Understanding FeMoco's mechanism could enable catalysts that fix nitrogen at room temperature, saving billions of tonnes of CO_2. The relevant quantum chemistry problem: compute the ground-state energy of a 100-orbital active space around the Fe-Mo cluster to chemical accuracy.

Step 1. Classical cost — why CCSD(T) fails. FeMoco has strong multi-reference character: its ground state is not well-approximated by a single Slater determinant. CCSD(T) assumes a single-reference starting point and diverges or gives chemically inaccurate answers on multi-reference systems. DMRG works in 1D but FeMoco is essentially 0D (a cluster, not a lattice). Full configuration interaction on 100 orbitals needs \binom{100}{50} \approx 10^{29} determinants — infeasible. Why multi-reference matters: transition-metal clusters have near-degenerate d-orbital configurations, so the ground-state wavefunction is a superposition of many determinants. Methods that assume a dominant single determinant (CCSD, CCSD(T)) fail for these systems in ways that are not fixable by adding more orbitals.
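The determinant-count blow-up in Step 1 is one line to verify; a sketch using math.comb (the \binom{100}{50} figure treats 50 electrons distributed over 100 spin-orbitals):

```python
import math

def fci_determinant_count(spin_orbitals, electrons):
    """Slater determinants in a full-CI expansion: ways to occupy the orbitals."""
    return math.comb(spin_orbitals, electrons)

# 100 spin-orbitals, 50 electrons: ~1.0e29 determinants, hopeless to enumerate.
```

At one determinant per nanosecond, enumerating 10^{29} of them would take about 3 \times 10^{12} years.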

Step 2. Quantum phase estimation cost. QPE on a 100-orbital active space needs 2n = 200 logical qubits for the orbital representation plus O(\log(1/\varepsilon)) ancillas for the phase register — say, 240 logical qubits total. The dominant cost is Trotterised time evolution of H for time t \sim 1/\varepsilon where \varepsilon is the target energy precision in Hartrees. For \varepsilon = 1 millihartree (chemical accuracy), resource estimates beginning with Reiher et al (2017), since refined by later work, put the circuit at \sim 10^{10} T-gates on a surface-code machine with 10^6-10^7 physical qubits. Why T-gates dominate: surface-code Clifford gates are cheap (no distillation needed); T-gates require magic-state distillation, which accounts for ~99% of the resource cost in chemistry algorithms. Any serious chemistry speedup estimate counts T-gates, not total gates.

Step 3. The advantage calculation. Classical: infeasible; no known polynomial-time algorithm reaches chemical accuracy on 100-orbital FeMoco. Quantum: 10^{10} T-gates at \sim 1 ms per logical T-gate on a projected 2030+ machine = 10^7 seconds = ~4 months. For the first time on a practically important problem, quantum is the only option, not a faster alternative. Why this is the cleanest advantage case: there is no "quantum vs classical" race here; there is "quantum succeeds, classical fails." No amount of classical cleverness is likely to close this gap, because the underlying problem is QMA-hard (ground-state energy of a local Hamiltonian), and CCSD(T)-like approximations break down on strongly-correlated systems for structural reasons.

Step 4. Timeline to execution. 240 logical qubits with 10^4 physical qubits each = 2.4 \times 10^6 physical qubits, more than a thousand times today's qubit count. Credible by 2030-2035 on aggressive Quantinuum or IBM Starling-successor scaling. Why the timeline is plausible despite the qubit gap: the required qubit count for FeMoco is far smaller than for Shor-2048 (2.4 million vs 20 million). Chemistry advantage arrives first on the logical-qubit ladder because chemistry needs fewer qubits and tolerates lower code distance; Shor-2048 needs the largest surface-code distances because its circuits are the longest.

Step 5. The indirect commercial case. Even before a 2030+ quantum machine solves FeMoco outright, the hardware-accelerated exploration of catalyst chemistry — small fragments, classical-quantum hybrid workflows, active-space reduction — has been the target of several industrial collaborations (Mitsubishi Chemical, JSR, BASF partnering with IBM or Quantinuum). The commercial case does not require perfect advantage; it requires credible progress toward it.

Result. FeMoco represents the cleanest demonstration that quantum simulation will eventually beat classical chemistry on a commercially meaningful problem. The hardware gap is closing at a credible pace. The science is unambiguous: no classical method in 2026 can match what a 240-logical-qubit machine in 2032 can do on this problem. This is why chemistry is the field most likely to produce the first non-contested quantum advantage claim.

FeMoco — a clean quantum advantage case. FeMoco ground-state energy: where classical fails, quantum succeeds. CCSD(T): diverges (multi-reference strong correlation). DMRG: struggles (0D cluster, not 1D chain). Full CI: \sim 10^{29} determinants, infeasible. QPE (quantum): 240 logical qubits, \sim 10^{10} T-gates, months of runtime. Verdict: quantum is the only option; 2030-2035 arrival.
FeMoco ground-state energy — the cleanest near-future quantum advantage case. Every classical method in the stack fails on strong correlation. A 240-logical-qubit quantum phase estimation succeeds in months. This is the problem most likely to deliver the first non-contested chemistry advantage.

The case-by-case report card

Collect it all in one table.

Quantum advantage report card 2026. Problem areas and grades, strongest first. Shor factoring: A — exponential; 15-20 yrs to hardware. Chemistry simulation: A− — exponential; first advantage 2028-32. Abelian HSP: A — foundation of exponential speedups. Materials / strongly-correlated: B+ — credible advantage 2030+. Graph / matrix problems: B — polynomial; research active. QKD: B− — real but narrow niche. Amplitude estimation (Monte Carlo): C+ — quadratic; unclear ROI. Grover unstructured search: C — quadratic; practically marginal. Quantum ML: D+ — mostly de-quantised. Combinatorial optimisation (QAOA): D — no clear advantage demonstrated.
The 2026 quantum-advantage report card, sorted from strongest to weakest: three A-range wins (though abelian HSP subsumes factoring), three B-range niches, two C-grade marginals, and two D-grade disappointments. When you next read a quantum-computing press release, locate the claim on this card before accepting it.
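For programmatic use, the card above can be held as a small lookup table. A sketch: the grades and notes are transcribed from the table, but the numeric grade encoding (A = 4.0, ±0.3 per suffix) is my own, used only to sort.

```python
# The 2026 report card as a lookup table: problem -> (grade, assessment).
REPORT_CARD = {
    "Shor factoring":                  ("A",  "exponential; 15-20 yrs to hardware"),
    "Chemistry simulation":            ("A-", "exponential; first advantage 2028-32"),
    "Abelian HSP":                     ("A",  "foundation of exponential speedups"),
    "Materials / strongly-correlated": ("B+", "credible advantage 2030+"),
    "Graph / matrix problems":         ("B",  "polynomial; research active"),
    "QKD":                             ("B-", "real but narrow niche"),
    "Amplitude estimation":            ("C+", "quadratic; unclear ROI"),
    "Grover unstructured search":      ("C",  "quadratic; practically marginal"),
    "Quantum ML":                      ("D+", "mostly de-quantised"),
    "Combinatorial optimisation":      ("D",  "no clear advantage demonstrated"),
}

def grade_points(grade: str) -> float:
    """Illustrative numeric encoding: letter base, +/- worth 0.3."""
    base = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0}[grade[0]]
    return base + (0.3 if grade.endswith("+") else
                   -0.3 if grade.endswith("-") else 0.0)

# Strongest first; stable sort preserves the table's order within a grade.
ranked = sorted(REPORT_CARD, key=lambda k: -grade_points(REPORT_CARD[k][0]))
```

The point of the encoding is only the ordering; the grades themselves carry the judgement.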

Common confusions

Going deeper

You have the report card and you have two cases in depth (Shor/GNFS crossover, FeMoco chemistry advantage). The rest of this section discusses four methodological topics: how to evaluate a new advantage claim you encounter in the wild; what the complexity-class limits on each category look like (BQP, QMA, QCMA); the role of oracle-model versus standard-model advantage; and the evolving landscape of "practical quantum advantage" definitions.

Evaluating a new advantage claim

When a paper or press release claims quantum advantage on some problem, run this checklist:

  1. What is the problem? One sentence, specific, with input and output. If the problem is ill-defined, the claim is meaningless.
  2. What is the best classical algorithm? Not a straw-man brute force — the genuine state of the art. Naïve linear-scan databases are not the baseline for Grover; indexed databases are.
  3. What is the asymptotic gap? Exponential, polynomial, quadratic, or marginal? A claim without an asymptotic statement is usually marketing.
  4. What is the constant? For quadratic-or-smaller gaps, constants matter enormously. The gap might disappear once FT overhead, oracle cost, and hardware parameters are included.
  5. Has classical caught up? Check arXiv for citations from the last 18 months. If three groups have beaten the claim with tensor networks, belief propagation, or de-quantisation, the claim no longer holds.
  6. Is there a complexity-theoretic argument? Shor's has one (HSP over \mathbb{Z}, interference matches structure). QAOA does not. A claim without structure is usually optimistic at best.

Apply this checklist mechanically. It will correctly dismiss roughly 70% of popular-press quantum claims.
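Step 4 of the checklist can be made concrete. A sketch of the constant-accounting for a quadratic speedup: classical cost N·t_c versus quantum cost √N·t_q, where t_q bundles the oracle cost and fault-tolerance overhead. The timing constants below are illustrative assumptions, not measured values.

```python
def quantum_wins_above(t_classical: float, t_quantum: float) -> float:
    """Smallest N at which sqrt(N) * t_quantum beats N * t_classical.

    Setting sqrt(N) * t_quantum = N * t_classical at the crossover
    gives sqrt(N) = t_quantum / t_classical.
    """
    return (t_quantum / t_classical) ** 2

# Illustrative constants (assumptions, not measurements): a classical
# operation at 1 ns versus a fault-tolerant quantum oracle call at
# 1 ms once oracle synthesis and error-correction overhead are included.
n_star = quantum_wins_above(t_classical=1e-9, t_quantum=1e-3)
print(f"quantum wins only for N > {n_star:.0e}")   # prints 1e+12
```

A six-order-of-magnitude gap in per-step cost squares into a twelve-order-of-magnitude crossover, which is exactly why a "quadratic speedup" headline says little on its own.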

Complexity classes and advantage categories

Each advantage category corresponds to a complexity class:

  1. Factoring, discrete log, and the abelian HSP lie in BQP but are believed to lie outside BPP; these are the canonical exponential advantages.
  2. Hamiltonian simulation is BQP-complete: it captures the full power of quantum computation, which is why it anchors the strongest advantage claims.
  3. Estimating the ground-state energy of an arbitrary local Hamiltonian is QMA-complete, hard even for a quantum computer in the worst case; the chemistry advantage rests on physically relevant instances being easier than the worst case. (QCMA, the variant with classical witnesses, sits between NP and QMA.)
  4. Generic combinatorial optimisation is NP-hard, and NP is not believed to be contained in BQP; this is the complexity-theoretic root of QAOA's weakness.

The complexity-theoretic picture reinforces the empirical one: BQP's known advantages are structured problems (HSP and simulation); the open gaps are at the boundaries with QMA and NP, and no unconditional proof of separation exists.
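For orientation, the known containments among these classes fit on one line; these are standard results, and none of the inclusions in the chain is known to be strict.

```latex
\[
  \mathsf{P} \subseteq \mathsf{BPP} \subseteq \mathsf{BQP}
  \subseteq \mathsf{QMA} \subseteq \mathsf{PP} \subseteq \mathsf{PSPACE},
  \qquad
  \mathsf{NP} \not\subseteq \mathsf{BQP} \text{ (conjectured)}
\]
```

Everything in this chapter's A-grade rows lives comfortably inside BQP; the D-grade rows are attempts to reach into NP-hard territory that the conjectured non-containment warns against.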

Oracle-model versus standard-model advantage

Many advantage proofs are relative to an oracle. The oracle is a black-box function the algorithm queries, and the advantage is that the quantum algorithm makes fewer queries. The catch: oracle advantages do not automatically translate to standard-model advantages (no oracle; every circuit made concrete), because the oracle might not have an efficient implementation.

When you read a paper claiming "exponential quantum-classical separation," check whether the claim is oracle-model or standard-model. Standard-model exponential separations are much rarer and harder to prove.
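The query-count arithmetic behind the Grover oracle-model claim, as a sketch: classical search over N items expects N/2 oracle queries, while Grover needs about (π/4)√N. The separation is in queries only, and evaporates if each quantum query costs vastly more than a classical one, as in the constant-accounting above.

```python
import math

def classical_queries(n: int) -> float:
    """Expected oracle queries for classical search (uniform target)."""
    return n / 2

def grover_queries(n: int) -> float:
    """Optimal quantum oracle query count, ~ (pi/4) * sqrt(n)."""
    return (math.pi / 4) * math.sqrt(n)

n = 2 ** 40                      # a ~10^12-item search space
ratio = classical_queries(n) / grover_queries(n)
print(f"Grover uses {ratio:.1e}x fewer queries")   # ~6.7e+05x
```

A factor of ~10^5-10^6 in queries sounds decisive until it is multiplied against the per-query cost ratio, which can be just as large in the other direction.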

Practical quantum advantage — Preskill, Aaronson, and the evolving definitions

The term practical quantum advantage (PQA) has been evolving through the 2020s. Scott Aaronson's 2022 formulation requires:

  1. A scientifically or commercially meaningful problem (not a benchmark).
  2. A specific, well-defined classical baseline using the best known algorithm on modern hardware.
  3. A verifiable quantum result (at least in a regime where classical can check).
  4. Robustness against 18+ months of classical counter-attack.

By this bar, zero demonstrations as of 2026 clear PQA. Random-circuit supremacy (Google 2019, USTC 2020) fails criterion 1: random circuits have no application. Kim et al. 2023 fails criterion 4: classical methods caught up within 6 months. Quantinuum's logical-qubit demos pass criteria 1 and 3 but not yet 2: no commercially meaningful classical baseline has been beaten.
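The verdicts above can be tabulated. A sketch: the failure reasons are transcribed from the text, while the structure is my own.

```python
# Each named 2026 demonstration mapped to the Aaronson criterion it
# fails (a value of None would mean it clears all four criteria).
FAILS = {
    "Google 2019 / USTC 2020 random circuits":
        "criterion 1: random circuits have no application",
    "Kim et al. 2023 utility experiment":
        "criterion 4: classical caught up in 6 months",
    "Quantinuum logical-qubit demos":
        "criterion 2: no commercially meaningful classical beat yet",
}

cleared = [demo for demo, failure in FAILS.items() if failure is None]
print(cleared)   # -> []: nothing clears PQA as of 2026
```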

John Preskill's 2018 NISQ essay anticipated this: he called genuine useful near-term quantum computing "unclear" and said the usefulness would likely arrive only with error correction. The 2020s have validated that prediction.

Indian context — National Quantum Mission's case-specific priorities

India's National Quantum Mission explicitly prioritises chemistry and materials simulation (targeting pharmaceuticals and catalysts for the Indian market), QKD for secure communication (DRDO and ISRO satellite deployments), and post-quantum cryptography migration for the Aadhaar and UPI infrastructure. Combinatorial optimisation and quantum ML receive less emphasis in official roadmaps, reflecting the mission planners' awareness that advantage is case-by-case. This is consistent with a calibrated global view: target the A-grade cases first and treat the rest as research.

Where this leads next

References

  1. Peter Shor, Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer (1994) — arXiv:quant-ph/9508027.
  2. Gidney and Ekerå, How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits (2021) — arXiv:1905.09749. The definitive resource-estimate paper.
  3. Reiher, Wiebe, Svore, Wecker, Troyer, Elucidating reaction mechanisms on quantum computers (2017) — arXiv:1605.03590. The FeMoco resource-estimate benchmark.
  4. Ewin Tang, A quantum-inspired classical algorithm for recommendation systems (2019) — arXiv:1807.04271. The founding paper of the dequantisation programme.
  5. Stilck França and García-Patrón, Limitations of optimization algorithms on noisy quantum devices (2021) — arXiv:2009.05532. The bounds on QAOA advantage under realistic noise.
  6. Wikipedia, Quantum advantage — current encyclopaedic treatment of the evolving definitions.