In short
Quantum annealing has been a commercial technology since 2011. D-Wave has sold machines for fifteen years. Over that time, thousands of benchmark experiments have been run, and the honest summary is narrow and specific. For most practical optimisation problems — portfolio balancing, vehicle routing, MaxCut on dense graphs, financial scheduling — quantum annealers do not beat well-tuned classical algorithms. The Rønnow et al. (2014) and Albash-Lidar (2018) benchmark studies are the definitive analyses. Where annealing plausibly helps is a narrower regime: problems whose energy landscape has tall, thin barriers that classical simulated annealing has to climb over (escape time exponential in the barrier height) but that quantum tunnelling can pass through (escape time exponential only in the barrier width). Denchev et al. (2016) exhibited a synthetic instance class — "weak-strong cluster" spin glasses — where D-Wave beats simulated annealing by a factor of 10^8, which is a real quantum signature; but it does not beat stronger classical solvers like parallel tempering or path-integral Monte Carlo, and the problem class was engineered to showcase tunnelling. Volkswagen's traffic-flow demonstration and the various portfolio-optimisation demos from TCS and financial firms are all real experiments, none of which has produced a convincing "this was commercially useful and no classical laptop could match it" result. The upshot: quantum annealing is a legitimate special-purpose quantum device that the research community should keep studying, and it is not the optimisation revolution the early marketing promised. The Goemans-Williamson classical algorithm — a 1995 semidefinite-programming method — still wins on MaxCut. Classical hardware is extremely good, and most interesting optimisation problems do not have the specific tunnelling structure that gives annealers an edge. Read this chapter to separate the hype from the two or three genuine quantum signals.
There is a recurring pattern in quantum computing that goes like this. A new kind of machine is built. A company is started around it. Dramatic claims are made: "our device solves problems no classical computer can." Industrial demonstrations are performed: an airline's route schedule, a bank's portfolio, a factory's logistics. Press releases follow. A few years later, careful benchmarks are published, and the result is almost always the same: the quantum machine matches a well-tuned classical algorithm, sometimes worse, very rarely clearly better. The demonstration was genuine, the machine is genuinely quantum, but the advantage — the thing that would justify buying the machine instead of a rack of classical servers — is not there.
Quantum annealing is the most mature, most thoroughly benchmarked example of this pattern. D-Wave shipped their first commercial machine in 2011. Their current hardware (Advantage2) has more than 5000 superconducting flux qubits. Thousands of research papers and industrial case studies have been written. The result, read honestly: for most problems people actually care about, quantum annealing does not beat classical optimisation. For a narrow class of problems — spin glasses with tall thin tunnelling barriers — there is a genuine, reproducible quantum signature. For the problems that companies actually want to solve, the quantum signature has not been demonstrated.
This chapter sorts through the evidence. It names the regime where annealing plausibly helps, walks through the two or three benchmark studies that the research community treats as definitive, and then works two examples: a spin-glass instance where quantum tunnelling does something classical cannot, and a MaxCut-on-a-complete-graph instance where a 1995 classical algorithm crushes the quantum annealer. By the end you should be able to read a "quantum advantage" press release and know which questions to ask before believing it.
If you have not read the prerequisite chapters, start with quantum annealing for what D-Wave actually does, adiabatic quantum computing for the underlying computational model, and QAOA for the gate-model cousin of annealing.
The picture: what an annealer is supposed to be good at
Every optimisation algorithm has an energy landscape. You have a set of bit-strings (say, 2^{100} of them for a 100-variable problem), and each bit-string has a cost C(x) you are trying to minimise. Drawn as a landscape, you get a jagged terrain with many valleys and many ridges. The global minimum is the deepest valley somewhere in this landscape, and you do not know where.
Simulated annealing is a classical algorithm. You drop a "ball" (your current candidate solution) somewhere in the landscape. At each step you propose a small random move — flip one bit — and accept the move with probability \min(1, e^{-\Delta C / T}) where \Delta C is the change in cost and T is a "temperature". At high temperature you accept almost everything and hop around the landscape freely; at low temperature you accept only downhill moves and descend into the nearest valley. The schedule is to start hot and cool slowly. Done right, the ball finds a deep valley. Done wrong, it gets stuck in the first decent valley it finds.
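The loop just described can be written out in a few lines of Python. This is a minimal sketch with a toy cost function (a short spin chain whose minimum is an all-aligned string), not a production solver:

```python
import math
import random

def simulated_annealing(cost, n_bits, n_steps=20000, t_hot=5.0, t_cold=0.05, seed=0):
    """Minimise cost(bits) by single-bit-flip Metropolis moves with cooling."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    c = cost(x)
    best, best_c = x[:], c
    for step in range(n_steps):
        # Exponential cooling schedule: start hot, end cold.
        t = t_hot * (t_cold / t_hot) ** (step / n_steps)
        i = rng.randrange(n_bits)
        x[i] ^= 1                                  # propose a single-bit flip
        dc = cost(x) - c
        if dc <= 0 or rng.random() < math.exp(-dc / t):
            c += dc                                # accept the move
            if c < best_c:
                best, best_c = x[:], c
        else:
            x[i] ^= 1                              # reject: undo the flip
    return best, best_c

# Toy cost: neighbouring bits want to agree; minimum is -(n-1) for an aligned chain.
def chain_cost(x):
    return sum(1 if x[i] != x[i + 1] else -1 for i in range(len(x) - 1))

best, best_c = simulated_annealing(chain_cost, n_bits=12)
print(best_c)  # -11: all twelve bits aligned
```

On an easy landscape like this one, the cooling schedule barely matters; the interesting behaviour starts when deep local minima appear, as in the examples later in the chapter.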
Quantum annealing is the same idea but with a different mechanism for escaping local minima. Instead of thermal hopping over barriers, the system exploits quantum tunnelling: a quantum state can have amplitude on both sides of a barrier at once, and if the barrier is thin, amplitude leaks through. Mathematically you are following the ground state of a time-dependent Hamiltonian H(t) = A(t) H_0 + B(t) H_p, starting with a strong transverse field H_0 = -\sum_i X_i (which makes everything a superposition and spreads amplitude over all bit-strings equally) and slowly turning it off while turning on H_p (the Ising Hamiltonian encoding your problem). If you do this slowly enough, the adiabatic theorem promises you end up in the ground state of H_p — your answer.
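For tiny systems the two Hamiltonians can be written out as explicit matrices. The NumPy sketch below (3 qubits, an illustrative ferromagnetic-chain problem, not a D-Wave instance) confirms the claim above: the ground state of H_0 spreads equal amplitude over all 2^n bit-strings.

```python
import numpy as np

n = 3
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def op_on(o, i):
    """Single-qubit operator o acting on site i of an n-qubit register."""
    m = np.array([[1.0]])
    for j in range(n):
        m = np.kron(m, o if j == i else I2)
    return m

# Driver H0 = -sum_i X_i ; problem Hp = -sum_i Z_i Z_{i+1} (toy couplings).
H0 = -sum(op_on(X, i) for i in range(n))
Hp = -sum(op_on(Z, i) @ op_on(Z, i + 1) for i in range(n - 1))

# Ground state of H0: the uniform superposition, amplitude 1/sqrt(2^n) everywhere.
vals, vecs = np.linalg.eigh(H0)
ground = vecs[:, 0]
print(np.allclose(np.abs(ground), 1 / np.sqrt(2**n)))  # True

# Ground energy of Hp: both bonds satisfied, i.e. -(n-1).
pvals = np.linalg.eigvalsh(Hp)
print(np.isclose(pvals[0], -(n - 1)))  # True
```

The anneal interpolates between these two matrices; the adiabatic theorem's runtime condition depends on the spectral gap of the interpolated H(s), which Example 1 returns to.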
That scaling contrast — escape time exponential in the barrier height for thermal hopping, exponential only in the barrier width for tunnelling — is the whole honest story. Quantum annealing beats simulated annealing when the dominant escape-from-local-minimum events are through tall, thin barriers. This is a real physical effect; it is not marketing. The question is only: do the problems people actually want to solve have tall thin barriers? Mostly they do not.
The Rønnow 2014 benchmark — the paper that set the tone
The paper that opened the modern benchmarking era is Rønnow, Wang, Job, Boixo, Isakov, Wecker, Martinis, Lidar, Troyer, Defining and detecting quantum speedup, published in Science in 2014. It is a short, careful piece of definitions and experiment, and every subsequent honest benchmark study builds on it.
The paper's contribution is first definitional: it sharpens what "quantum speedup" could mean. They distinguish:
- Provable quantum speedup: a proven asymptotic separation between the quantum algorithm and every possible classical algorithm. Shor's factoring (under assumptions about factoring hardness) is the canonical example. No such separation has been proven for quantum annealing on any natural problem class.
- Potential quantum speedup: a proven separation between the quantum algorithm and a specific classical algorithm. Grover's O(\sqrt{N}) query complexity for unstructured search is an example (against black-box classical search). Quantum annealing against simulated annealing is the natural comparator here.
- Limited quantum speedup: an asymptotic scaling advantage on some instance class. Requires careful study of how runtime scales with problem size.
- Constant-factor speedup: a runtime win on benchmark sizes, with no clear scaling picture. Cheap to claim; often vanishes under better classical tuning.
The paper then ran D-Wave Two (a 504-qubit 2013-vintage machine) against classical simulated annealing and spin-vector Monte Carlo on two families of random Ising problems matched to the Chimera graph. The headline finding:
On the random-spin-glass instances, D-Wave's time-to-solution scaling with problem size matched simulated annealing's scaling. The prefactors differed, but no scaling advantage was detected. This is consistent with no quantum speedup on this class.
The paper was careful to say that a speedup might still exist for some other class of problems — they were only ruling it out for the specific class they tested. But the class they tested was the one D-Wave had previously suggested would showcase advantage. Subsequent work (King et al. 2017, Albash-Lidar 2018) refined the analysis on further instance classes and found broadly the same pattern: where the classical algorithm is tuned well, the quantum annealer matches it; where the classical algorithm is poorly tuned, the quantum annealer beats it. The prefactor matters; the scaling rarely does.
Albash-Lidar 2018 — a broader verdict
Tameem Albash and Daniel Lidar's Demonstration of a scaling advantage for a quantum annealer over simulated annealing appeared in Physical Review X in 2018. Despite the optimistic title, the content is measured. They tested D-Wave 2000Q on a family of "planted-solution" Ising instances and found that the annealer exhibited a sub-exponential scaling advantage over a specific implementation of simulated annealing on these instances. This was a meaningful positive result — the first one where the scaling (not just the prefactor) was in D-Wave's favour on any instance class.
The caveats, from their own paper:
- The instance class was engineered. Planted-solution Ising models have more structure than most real-world problems. The advantage depended on that structure.
- The classical baseline was a specific simulated-annealing code. More sophisticated classical methods (parallel tempering with iso-energetic cluster updates) closed some of the gap, and path-integral Monte Carlo — a classical algorithm that simulates quantum annealing — closed most of it.
- The advantage was sub-exponential, not exponential. It was a polynomial improvement in the scaling exponent, not a qualitative speedup.
The honest reading of Albash-Lidar 2018 is: quantum annealing has at least one documented scaling advantage, on one engineered instance class, against one class of classical solvers, and the advantage is quantitative rather than qualitative. Every word of that matters. The paper is not evidence that quantum annealing is revolutionising optimisation. It is evidence that quantum tunnelling is doing something measurable on particular landscapes. Those are different claims.
Denchev 2016 — the tunnelling showcase
Between Rønnow and Albash-Lidar sits Denchev, Boixo, Isakov, Ding, Babbush, Smelyanskiy, Martinis, Neven, What is the computational value of finite-range tunneling?, published in Physical Review X in 2016. This paper constructed an instance class specifically designed to exhibit the tall-thin-barrier regime: a lattice of "weak-strong clusters" where local valleys are separated by narrow but deep barriers.
On these hand-crafted instances, D-Wave 2X ran roughly 10^8 times faster than simulated annealing at the largest problem sizes tested. This is not a typo. A factor of a hundred million in runtime is enormous and visually startling.
But read the fine print:
- The instances were constructed to maximise the tunnelling advantage. They are not natural instances of any industrial problem.
- The comparison was against simulated annealing specifically. Path-integral Monte Carlo (PIMC) — a classical quantum Monte Carlo algorithm that simulates the same tunnelling physics on a classical computer — matched D-Wave's scaling with problem size, so the separation is a prefactor, not a scaling advantage.
- Parallel tempering with iso-energetic cluster moves (a specialist classical algorithm) also closed much of the gap.
The correct reading of Denchev 2016: quantum tunnelling is doing real work that simulated annealing cannot easily replicate. But classical algorithms that encode tunnelling physics (PIMC, QMC) show the same scaling with problem size. The paper is evidence that tunnelling helps; it is not evidence that a quantum device scales better than a classical computer that knows about tunnelling.
This distinction is load-bearing and widely misunderstood. Quantum annealing helping versus quantum annealing beating-all-classical are different claims, and Denchev 2016 establishes the first without establishing the second.
Where annealing does NOT help: MaxCut on complete graphs
Now the other direction. Which problems do not benefit from annealing at all?
MaxCut on dense graphs is the classic negative result. MaxCut is the problem: given a graph, partition its vertices into two sets S and \bar S to maximise the number of edges that cross the partition.
Goemans and Williamson in 1995 produced a classical polynomial-time algorithm for MaxCut that is guaranteed to find a cut of size at least 0.878 \times \text{OPT} — i.e., at worst about 12.2% below the true maximum. The algorithm uses semidefinite programming and a random-hyperplane rounding step. For nearly three decades no polynomial-time algorithm has beaten this bound, and under Khot's Unique Games Conjecture none ever will.
Quantum annealing on MaxCut: D-Wave encodes MaxCut as an Ising problem (by setting the coupling J_{ij} = 1 for each edge and asking for the ground state). On dense graphs — say, the complete graph K_n — the problem does not fit on D-Wave's sparse hardware; you have to embed a single logical qubit as a chain of many physical qubits, which is lossy and noisy. Even on problems that do fit, annealing does not reliably beat Goemans-Williamson, which is a polynomial-time classical algorithm with a proof of approximation quality.
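The MaxCut-to-Ising encoding is easy to verify directly. The short Python sketch below checks, on a small example graph, that with spins s_i = 1 - 2x_i and J_{ij} = 1 on each edge, the cut size equals (|E| - \sum_{(i,j) \in E} s_i s_j)/2 — so minimising the Ising energy is exactly maximising the cut:

```python
import itertools

# A small example graph: a 4-cycle plus one chord.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

# Check the identity cut(x) == (|E| - sum s_i s_j) / 2 on every assignment.
for x in itertools.product([0, 1], repeat=4):
    s = [1 - 2 * xi for xi in x]                       # bit -> spin
    cut = sum(1 for i, j in edges if x[i] != x[j])     # edges crossing the cut
    ising = sum(s[i] * s[j] for i, j in edges)         # Ising energy with J_ij = 1
    assert cut == (len(edges) - ising) // 2
print("identity verified on all 16 assignments")
```

Each same-side edge contributes s_i s_j = +1 and each crossing edge contributes -1, so \sum s_i s_j = |E| - 2\,\text{cut}, which is the identity being asserted.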
For MaxCut on dense or large graphs, Goemans-Williamson is the correct answer, quantum annealing is not, and no benchmark study has ever claimed otherwise.
Industrial demonstrations — what they do and do not show
If benchmark studies are sceptical, why do Volkswagen, Lockheed, BBVA, and a dozen other large companies run quantum-annealing pilots? The honest answer: because "we tested a quantum computer" is a credible public-R&D story, and because annealers can solve small problems as a heuristic stage in a larger hybrid pipeline. The pilots produce outputs that can be measured, which is enough for a press release. They rarely produce outputs that could not be obtained with conventional methods faster.
Volkswagen traffic optimisation. The 2017 demonstration routed taxis through Beijing by formulating intersection congestion as a QUBO and submitting to D-Wave. The announcement attracted attention. The small print: the problem was small enough to solve classically in milliseconds; the quantum step was one component of a hybrid pipeline whose classical pre- and post-processing did most of the work; the claimed advantage was never published in a peer-reviewed benchmark. It was a proof-of-concept that the problem could be expressed as QUBO — not that solving it quantumly was better.
Portfolio optimisation. A recurring industrial use-case. Formulated as QUBO: pick a subset of assets that maximises expected return while minimising risk (variance), subject to budget constraints. D-Wave publishes case studies; financial firms (BBVA, JPMorgan's quantum team, others) have run pilots. Independent benchmarks consistently show that for portfolios below a few hundred assets, classical mixed-integer-programming solvers (Gurobi, CPLEX) find the optimum in seconds. Above a few hundred assets, the problem does not embed on current D-Wave hardware without aggressive approximation. Net result: no clear wins reported by independent benchmarks.
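To make the QUBO formulation concrete, here is a minimal Python sketch. All numbers — returns, risk matrix, penalty weights — are invented for illustration, not taken from any published pilot, and the instance is small enough to brute-force, which is itself the point made above:

```python
import itertools
import numpy as np

# Toy portfolio QUBO: pick assets x in {0,1}^n to maximise expected return
# mu.x, minus a risk term x.Sigma.x, with a soft budget of B assets.
rng = np.random.default_rng(0)
n, B = 6, 3
mu = rng.uniform(0.01, 0.10, n)            # illustrative expected returns
A = rng.normal(size=(n, n))
Sigma = A @ A.T / n                        # a positive semi-definite risk matrix
lam, gamma = 1.0, 0.5                      # risk aversion, budget penalty weight

def cost(x):
    x = np.array(x)
    return -mu @ x + lam * x @ Sigma @ x + gamma * (x.sum() - B) ** 2

# Brute force over all 2^6 = 64 portfolios -- instant on any laptop.
best = min(itertools.product([0, 1], repeat=n), key=cost)
print(best, float(cost(best)))
```

The quadratic cost above is already in QUBO form (linear terms on the diagonal, pairwise terms off-diagonal), which is what the annealing pilots submit; at this scale, and well beyond it, classical solvers handle the same objective directly.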
Protein folding and drug discovery. Occasional demonstrations in the literature, all small-scale. The real drug-discovery pipelines use classical simulation and neural-network methods; quantum annealers are not in production.
TCS, HCL, Infosys, Wipro. Each of the four largest Indian IT companies has done quantum-annealing consulting engagements for enterprise clients — typically feasibility studies using D-Wave's cloud access or classical QUBO solvers. These are genuinely useful exercises for understanding how problems could be reformulated as QUBO, which has future value if and when a clearly superior quantum annealer appears. None of the publicly-released results claim a benchmark advantage over a standard classical solver of the same era.
The Indian context
India's National Quantum Mission (2023, ₹6000 crore over 8 years) explicitly names optimisation as a target application and funds research into both gate-model and annealing-based approaches. The hardware thrust prioritises gate-model superconducting, photonic, and trapped-ion platforms; the software thrust studies QAOA, VQE, and annealing heuristics. TCS's research division has published portfolio-optimisation studies using QUBO formulations solved on D-Wave's cloud. IIT Madras and IISc Bangalore host active groups studying the boundary between classical and quantum heuristics. The tone of the work, read across the literature, is sober: annealing is treated as a research question, not as a solved product.
This matters because it gets the framing right. The honest framing is: annealing is a research tool for studying quantum-classical boundaries on optimisation. The dishonest framing — the one in some early D-Wave marketing — is that annealing is a practical accelerator for industry. Most Indian research groups have adopted the honest framing.
Hype check
Hype check. You will see press releases claiming a quantum computer "solved X in minutes where a classical computer would take centuries." For quantum annealing, these claims almost never survive scrutiny. The standard failure modes: (i) the classical baseline is a weak algorithm (brute force or vanilla simulated annealing with default parameters), not a tuned modern solver; (ii) the problem instance was selected from a regime known to favour tunnelling, and the selection is not mentioned; (iii) the "classical centuries" figure is an extrapolation from a benchmark code written badly, not a measurement. Before believing any annealing advantage claim, ask: what is the best classical algorithm that has been tried on this problem, run by someone who actually wanted it to win? If the answer is "we don't know" or "we used the one already in our pipeline," the advantage claim is not credible. The phrase "quantum supremacy" is, as of 2026, reserved for a few gate-model sampling experiments (Google 2019, USTC 2020–2023). For optimisation, no supremacy claim has survived independent scrutiny.
The regime where annealing plausibly helps, stated precisely
Let me give the clearest statement the field has converged on. Quantum annealing plausibly offers a speedup over simulated annealing when:
- The problem's energy landscape has local minima separated by energy barriers.
- Those barriers are narrow in Hamming distance — crossing the barrier requires flipping a small number of correlated bits simultaneously.
- The barriers are tall compared to the energy differences between minima — so simulated annealing must climb, not meander.
- The system size is large enough that the tunnelling advantage (e^{-w\sqrt{h}}) beats the thermal penalty (e^{-h/T}) but small enough to fit on the hardware graph without too many embedding chains.
This is a narrow intersection. Problems that satisfy all four tend to be spin-glass-like instances of the sort Denchev 2016 studied — artificial but physically motivated.
Problems that do NOT satisfy these conditions (a non-exhaustive list):

- random 3-SAT at the hardness peak (too much frustration, too little barrier structure);
- portfolio optimisation (the landscape is typically smooth with shallow local minima);
- logistics and scheduling (heavily constrained; classical branch-and-bound is very strong);
- MaxCut on dense graphs (Goemans-Williamson dominates);
- machine-learning training objectives (gradient methods are extremely strong).
This is an uncomfortably short list of candidate applications. It is the most honest list currently available.
Example 1: Spin glass with tall thin barriers — where annealing can beat SA
Consider a small 1D spin-glass instance designed to showcase tunnelling. Take 8 qubits arranged in a line with couplings J_{i,i+1} chosen so that the ground state is |00000000\rangle with energy E_0 = -10, but there is a local minimum at |11111111\rangle with energy E_1 = -9, separated by an energy barrier of height h = 8: any single-flip path between the two minima must pass through intermediate configurations roughly 8 energy units above the local minimum.
Step 1. Classical simulated annealing. To escape the local minimum |11111111\rangle into the true ground state, the Markov chain must climb up to energy roughly -9 + 8 = -1 — an energy gap of 8 above the local minimum. The probability of completing the climb at temperature T is approximately e^{-8/T}. Why: simulated annealing accepts energy-increasing moves with probability e^{-\Delta E / T} per move, so a run of accepted uphill moves totalling +8 in energy carries probability roughly e^{-8/T}. Even at moderate temperature T = 2, this probability is e^{-4} \approx 0.018. And many such climbs are needed: the walk must bias toward the true minimum, which means many barrier crossings over many sweeps.
At temperature T = 2, the per-attempt crossing probability is e^{-4} \approx 0.018. You need on the order of 1/0.018 \approx 55 attempts just to have one successful climb, and many more to actually find the global minimum reliably. At lower T (which you need near the end of the anneal) the probability falls off as e^{-8/T} and the required number of attempts becomes astronomical.
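The arithmetic is easy to check numerically, using the barrier height from this example:

```python
import math

h = 8.0                        # barrier height from the example above
for T in (4.0, 2.0, 1.0):
    p = math.exp(-h / T)       # per-attempt probability of the uphill climb
    print(f"T={T}: p = {p:.3e}, ~{round(1 / p)} attempts per successful crossing")
```

At T = 2 this reproduces the ~55-attempt figure; at T = 1 it is already nearly 3000 attempts per crossing.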
Step 2. Quantum annealing. During the anneal, the system maintains a superposition over bit-strings. At intermediate schedule values the transverse field is strong enough that amplitude is spread across the state space. The tunnelling amplitude between the two minima goes roughly as e^{-w\sqrt{h}}, where w is the barrier width in Hamming distance and h is the barrier height (on this instance the nominal width is w = 8, though a careful analysis of the gap structure gives a smaller effective width in the units set by H_0). On this instance, the tunnelling rate is exponentially larger than the simulated-annealing crossing rate. Why the factor structure: in the WKB approximation the tunnelling amplitude through a barrier is \sim e^{-S/\hbar}, and the action S for a Hamming-space barrier of height h and width w, computed to leading order in perturbation theory in the transverse field, scales as w \sqrt{h / \Gamma}, where \Gamma is the transverse field strength. The dependence on w (width) rather than h (height) is the key. For narrow barriers — small w — tunnelling is fast even when the barrier is tall.
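The two escape rates can be compared side by side. A small Python sketch (illustrative units, not device parameters) shows the thermal rate collapsing with barrier height while the tunnelling rate collapses only with width:

```python
import math

# Thermal hopping rate ~ exp(-h/T) versus tunnelling rate ~ exp(-w*sqrt(h)),
# in the illustrative units of this section.
T = 2.0
for h, w in [(8, 1), (8, 2), (32, 1), (32, 2)]:
    thermal = math.exp(-h / T)
    tunnel = math.exp(-w * math.sqrt(h))
    winner = "tunnelling" if tunnel > thermal else "thermal"
    print(f"h={h:2d}, w={w}: thermal {thermal:.1e}, tunnel {tunnel:.1e} -> {winner}")
```

Making the barrier taller (h: 8 → 32) devastates the thermal rate but only mildly slows tunnelling; widening the barrier (w: 1 → 2) does the reverse. That asymmetry is the entire case for annealing.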
Step 3. Numerical estimate. On 8 qubits, exact diagonalisation of H(s) along the anneal path reveals a minimum gap g_{\min} \approx 0.1 (in units of the Hamiltonian coefficients). The adiabatic theorem requires an evolution time \tau \gtrsim 1/g_{\min}^2 = 100 time units — achievable in tens of microseconds on D-Wave. The annealer finds the ground state with probability \approx 0.8 per shot on this instance.
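The exact-diagonalisation procedure can be sketched with NumPy. This toy uses 4 qubits and hypothetical couplings (not the 8-qubit instance above, whose couplings the text does not specify) to keep the matrices tiny; the method — scan the schedule, diagonalise H(s), track the gap between the two lowest levels — is identical:

```python
import numpy as np

# Toy anneal-path diagonalisation: 4 qubits, illustrative couplings.
n = 4
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def op_on(o, i):
    """Place single-qubit operator o on site i of the n-qubit register."""
    m = np.array([[1.0]])
    for j in range(n):
        m = np.kron(m, o if j == i else I2)
    return m

H0 = -sum(op_on(X, i) for i in range(n))                         # transverse field
Hp = -sum(op_on(Z, i) @ op_on(Z, i + 1) for i in range(n - 1))   # ferromagnetic chain
Hp = Hp - 0.1 * op_on(Z, 0)    # small field breaks the all-0 / all-1 tie

# Scan the schedule and record the gap to the first excited state.
gaps = []
for s in np.linspace(0.01, 0.99, 99):
    evals = np.linalg.eigvalsh((1 - s) * H0 + s * Hp)
    gaps.append(evals[1] - evals[0])

g_min = min(gaps)
print(f"minimum gap g_min = {g_min:.3f}, adiabatic time ~ 1/g_min^2 = {1 / g_min**2:.1f}")
```

The gap stays strictly positive along the whole path (the transverse-field term keeps the ground state unique), and the minimum of the scan is what sets the adiabatic runtime estimate.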
Step 4. Comparison. Running simulated annealing on the same instance at the same total runtime, with a reasonable exponential cooling schedule, the success probability per shot is around 0.2 — dominated by the thermal barrier crossing being rare.
Result. Quantum annealing genuinely outperforms simulated annealing on this instance class — by a factor of roughly 4\times at this problem size. Extrapolating to larger systems with similar barrier structure, the gap can grow to the 10^8 factor that Denchev et al. reported.
What this shows. The quantum advantage here is real and physically meaningful. It comes from quantum tunnelling, which is a genuine piece of physics that classical thermal dynamics does not have. But two caveats need to stay attached: (i) the barrier structure was engineered — real-world problems do not usually have tall narrow barriers; (ii) path-integral Monte Carlo, a classical algorithm that simulates tunnelling on a classical computer, matches the annealer's scaling on this kind of instance, so the advantage over a tunnelling-aware classical algorithm is far smaller than the factor we computed against simulated annealing.
Example 2: MaxCut on K_{10} — where classical wins decisively
Take the complete graph K_{10} — 10 vertices, every pair connected by an edge. Solve MaxCut on it with both methods.
Step 1. Goemans-Williamson on K_{10}. Formulate the SDP: for each vertex i find a unit vector v_i \in \mathbb{R}^{10} to maximise \sum_{(i,j) \in E} \tfrac{1}{2}(1 - v_i \cdot v_j). This SDP is small enough to solve exactly in milliseconds on a laptop. The optimal SDP value is n^2 / 4 = 25 (the SDP relaxation of MaxCut on K_n is exactly n^2/4). The maximum cut on K_{10} is \lfloor n^2/4 \rfloor = 25 (split into two sets of 5, giving 5 \times 5 = 25 crossing edges).
Step 2. Randomised rounding. Draw a random hyperplane through the origin; partition vertices by which side of the hyperplane they fall on. For K_{10}, the expected cut size after rounding is very close to 25 (this is a case where Goemans-Williamson finds the exact optimum with high probability after a few rounding attempts). Total time: under a millisecond.
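Both steps can be checked numerically. The sketch below brute-forces the true maximum cut of K_{10}, then rounds one particular SDP-optimal vector configuration — antipodal unit-vector pairs, a hypothetical but valid optimum, since any unit vectors summing to zero achieve the SDP value n^2/4:

```python
import itertools
import random

n = 10
edges = list(itertools.combinations(range(n), 2))   # complete graph K_10

def cut_value(assignment):
    return sum(1 for i, j in edges if assignment[i] != assignment[j])

# Brute force over all 2^10 assignments: the maximum cut of K_10 is 25.
best = max(cut_value(tuple((m >> v) & 1 for v in range(n))) for m in range(2**n))
print(best)  # 25

# One SDP-optimal configuration: antipodal pairs e_k, -e_k (they sum to zero).
vecs = []
for k in range(n // 2):
    e = [0.0] * (n // 2)
    e[k] = 1.0
    vecs.append(e)
    vecs.append([-x for x in e])

# Random-hyperplane rounding: side of a random Gaussian vector.
rng = random.Random(1)
g = [rng.gauss(0, 1) for _ in range(n // 2)]
side = [int(sum(gi * vi for gi, vi in zip(g, v)) >= 0) for v in vecs]
print(cut_value(side))  # 25: each antipodal pair is split, giving a 5-5 cut
```

Because each pair v, -v lands on opposite sides of any hyperplane (almost surely), the rounding here returns a perfectly balanced partition, hence the optimal cut of 25, on essentially every draw.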
Step 3. Quantum annealing on K_{10}. The complete graph has every edge, so all pair couplings J_{ij} = 1. D-Wave's hardware graph (Pegasus) has limited connectivity — each physical qubit is coupled to about 15 others. Embedding K_{10} as a minor requires representing each logical vertex as a chain of several physical qubits, all tightly coupled. For K_{10} the embedding uses a few dozen physical qubits — each logical vertex becomes a short ferromagnetic chain — with chain couplings much larger than the problem couplings. Why the embedding is expensive: Pegasus is sparse. K_n has \binom{n}{2} edges, and to force all of them to appear in the hardware graph you need each logical qubit to be a ferromagnetically-coupled chain spanning enough of the hardware to reach every other logical qubit. Chain strength must be larger than the problem couplings to keep the chain aligned, but not so large as to dominate the spectrum. Tuning this is a lot of work and is a frequent source of poor performance on dense problems.
Step 4. Submit to D-Wave. Typical result: the annealer returns cut values around 22–24 on a given sample, with some shots returning the optimum of 25 and some returning worse cuts due to chain breaks (cases where the ferromagnetic chain did not stay aligned). A few hundred samples typically hit the optimum at least once. Total wall-clock time including embedding, submission, and read-out: several seconds.
Result. Goemans-Williamson solves K_{10} MaxCut exactly in under a millisecond. D-Wave solves it with some probability per sample in seconds. For this problem, the classical algorithm is roughly 10^3–10^4 times faster, has a provable approximation bound, and does not require a 15-millikelvin dilution refrigerator.
What this shows. Dense problems hurt quantum annealing because hardware connectivity forces expensive embeddings. Problems with strong classical approximation algorithms (like Goemans-Williamson) set a very high bar that quantum annealing does not clear. The combination — dense graph plus strong classical algorithm — is common in real optimisation, and it is exactly the regime where annealing offers no help. This is the majority regime for real-world optimisation.
Common confusions
- "D-Wave is a quantum computer." It is a quantum device that runs quantum annealing on 2-local Ising Hamiltonians. It is not a universal quantum computer; it cannot run Shor's algorithm, Grover's search, or arbitrary circuits. Calling it a "quantum computer" in the same breath as gate-model hardware conflates very different machines. Adopt: "D-Wave is a special-purpose quantum annealer."
- "Quantum annealing will solve hard optimisation problems." NP-hard problems remain NP-hard for quantum annealers. Annealing explores the same exponential state space; it just crosses barriers by tunnelling instead of climbing. For generic NP-hard instances — random 3-SAT at the hardness peak, say — both approaches take exponential time in problem size, and the exponents are close.
- "D-Wave has 10^8 speedup." This is Denchev 2016 on engineered tunnelling-showcase instances versus a specific simulated-annealing implementation. Classical algorithms that simulate tunnelling (path-integral Monte Carlo, quantum Monte Carlo) match the scaling, closing most of the claimed gap. The quote without the context is misleading.
- "Industrial demos prove quantum annealing is useful." Industrial demos prove that companies like writing press releases about quantum. They rarely include a rigorous classical baseline. Reserve judgement until there is a published, peer-reviewed benchmark with a properly-tuned classical comparison.
- "If quantum tunnelling is real physics, it must help." Quantum tunnelling is real. It does help on the specific class of problems whose landscape has tall thin barriers. Most real-world optimisation landscapes do not have this structure. Real physics does not automatically translate into computational advantage; the problem has to have the right shape.
- "Quantum annealing is dead because no one has shown advantage." Not dead — it is a narrow-applicability tool still under active investigation. The verdict is "no broad advantage demonstrated after 15 years of trying," not "the physics is wrong." The physics is right; the applications are narrower than the marketing implied.
Going deeper
If you understand that quantum annealing plausibly helps on problems with tall thin tunnelling barriers, that Rønnow 2014 and Albash-Lidar 2018 are the definitive benchmark studies showing no broad advantage over well-tuned classical methods, that Denchev 2016 showed a 10^8 speedup over simulated annealing on engineered instances but not over classical algorithms that simulate tunnelling, that MaxCut on dense graphs is dominated by Goemans-Williamson's 0.878 approximation, that industrial demonstrations rarely include rigorous classical baselines, and that the honest research question is "under what circumstances does tunnelling beat every reasonable classical algorithm?" — you have the core of this chapter. What follows is the rigorous version: sub-exponential scaling analysis, the Quantum Approximate Optimisation connection, the decoherence cost, and the NQM-relevant open questions.
Time-to-solution and scaling exponents
The right measure for annealer performance is time-to-solution at 99% confidence (TTS₉₉): the expected runtime needed to find the ground state at least once with 99% probability, given a per-shot success probability p_s and anneal time t_a:

\text{TTS}_{99} = t_a \, \frac{\ln(1 - 0.99)}{\ln(1 - p_s)}
For small p_s this simplifies to \text{TTS}_{99} \approx 4.6 \, t_a / p_s. The scaling exponent \alpha is defined by \text{TTS}_{99} \sim 2^{\alpha N} as N grows, and the relevant quantum-vs-classical comparison is between the exponents \alpha_Q and \alpha_C. Provable quantum speedup would require \alpha_Q < \alpha_C for every classical algorithm C; potential speedup requires it against a specific C. Most benchmark studies find \alpha_Q \approx \alpha_C for simulated annealing on natural instances.
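TTS₉₉ comes from requiring (1 - p_s)^k \le 0.01 over k independent shots; a short Python sketch computes the exact value and its small-p_s limit (the 20 μs anneal time is illustrative, not a device spec):

```python
import math

def tts99(t_anneal, p_success):
    """Runtime so that at least one shot succeeds with 99% probability."""
    return t_anneal * math.log(1 - 0.99) / math.log(1 - p_success)

t_a = 20e-6   # 20-microsecond anneal, illustrative
for p in (0.5, 0.1, 0.01):
    exact = tts99(t_a, p)
    approx = 4.6 * t_a / p          # the small-p_s limit, since -ln(0.01) ~ 4.6
    print(f"p_s={p}: exact {exact:.2e} s, approx {approx:.2e} s")
```

Note that TTS₉₉ penalises low per-shot success probability exactly as it should: halving p_s roughly doubles the required runtime, which is why per-shot success rates and anneal times must be reported together.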
Non-stoquastic Hamiltonians
D-Wave's current hardware can only implement stoquastic Hamiltonians — those whose off-diagonal matrix elements, in the computational basis, are all non-positive. Stoquastic Hamiltonians admit classical sign-problem-free quantum Monte Carlo simulation, which is part of why classical PIMC closes most of the D-Wave gap. If D-Wave builds non-stoquastic couplers (they have publicly discussed this), the classical simulation would become exponentially harder, and annealing could plausibly gain a genuine exponential advantage. This is the clearest path to an eventual real quantum-annealing speedup, but the engineering challenges are substantial and the timeline is unclear.
The QAOA connection
Since QAOA at depth p \to \infty converges to adiabatic evolution, the space of applications where QAOA helps overlaps the space where annealing helps. Both inherit the same tunnelling intuition. A fair question: if annealing is limited, why expect QAOA to be better? The answer is partly hardware (gate-model coherence is much higher than annealer coherence), partly algorithmic (QAOA's variational parameters can learn non-adiabatic schedules that beat the naive annealing schedule), and partly ongoing. The QAOA story, told honestly, has the same caveats the annealing story has. See QAOA.
Decoherence and noise
D-Wave's qubits have coherence times of tens of nanoseconds. Anneals take tens of microseconds. The device is operating for roughly 1000 coherence times during an anneal — it is far into the decoherent regime. This sometimes draws the criticism that "it is not really a quantum computer." A more careful statement: it is a dissipative quantum device whose dynamics are approximated by an open-system Lindblad equation with thermal noise. Some of the tunnelling advantage survives decoherence; much of the fine quantum phase structure does not. This is part of why the observed advantages are polynomial rather than exponential.
NQM-relevant open questions
For researchers in Indian groups working under the National Quantum Mission, the interesting open questions in annealing are:
- Non-stoquastic hardware: can an annealer be built with sign-problem couplings? (An experimental platform combining superconducting and topological elements might do this.)
- Problem-class identification: are there natural industrial problems — in Indian logistics, power grids, or pharmaceutical search — whose landscapes actually have the required tall-thin-barrier structure?
- Hybrid quantum-classical workflows: can annealing be the right primitive for part of a larger optimisation pipeline even if it does not win end-to-end?
- Annealer-informed classical algorithms: can classical heuristics (PIMC with better schedule engineering, neural-network-guided Monte Carlo) further close the gap, improving classical optimisation in the process?
These are all legitimate research programmes. None of them currently has a "mission accomplished" date attached.
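The "tall, thin barrier" criterion behind the problem-class question can be made concrete with a toy scaling comparison. This is a back-of-envelope sketch, not a device model: thermal activation follows an Arrhenius law in the barrier height, while a WKB-style tunnelling estimate depends mainly on the width; the prefactors and units are arbitrary assumptions.

```python
import numpy as np

def thermal_escape_time(height: float, T: float = 1.0) -> float:
    """Arrhenius estimate: thermal hopping must climb the barrier,
    so escape time scales exponentially in its HEIGHT."""
    return np.exp(height / T)

def tunnelling_time(height: float, width: float) -> float:
    """WKB-style estimate: the tunnelling exponent scales with the
    barrier WIDTH (times sqrt(height)), not the height alone."""
    return np.exp(width * np.sqrt(height))

# Tall, thin barrier: doubling the height squares the thermal time
# but barely moves the tunnelling estimate.
for h in (5, 10, 20, 40):
    print(h, thermal_escape_time(h), tunnelling_time(h, width=0.5))
```

Landscapes without this geometry (wide, shallow basins, typical of dense industrial instances) get no such separation; that is the content of the problem-class question above.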
The honest roadmap
Here is a candid summary of where the field is in 2026 and what would change the picture:
- Status quo: quantum annealing has narrow, real advantages on engineered spin-glass instances against specific classical baselines. No broad practical advantage has been demonstrated.
- What would change the picture upward: (i) a non-stoquastic annealer with sufficient qubit count and coherence, (ii) an industrially-relevant problem class whose landscape matches the tunnelling regime, (iii) a scaling separation that survives all known classical algorithms (not just SA).
- What would change the picture downward: (iv) a classical simulation algorithm that matches the (largely decoherent) D-Wave device on every instance class — which PIMC already does on stoquastic problems; (v) a proof that no natural problem class has the required tall-thin-barrier structure.
Neither direction has produced a decisive result in fifteen years, which tells you honestly where the field stands: unresolved, narrow, and interesting but not revolutionary. Science is allowed to be in this state. The marketing, which implied revolution, is not.
Where this leads next
- Quantum Annealing — the prerequisite chapter on what D-Wave's hardware actually does.
- Adiabatic Quantum Computing — the universal-model framework that annealing is a restriction of.
- QAOA Algorithm — the gate-model cousin; same intuition, different hardware regime.
- Local Hamiltonians — the Ising problem formally belongs to 2-local Hamiltonian ground-state complexity (QMA-complete in the worst case).
References
- Troels F. Rønnow, Zhihui Wang, Joshua Job, Sergio Boixo, Sergei V. Isakov, David Wecker, John M. Martinis, Daniel A. Lidar, Matthias Troyer, Defining and detecting quantum speedup (2014), Science — arXiv:1401.2910.
- Vasil S. Denchev, Sergio Boixo, Sergei V. Isakov, Nan Ding, Ryan Babbush, Vadim Smelyanskiy, John M. Martinis, Hartmut Neven, What is the computational value of finite-range tunneling? (2016), Physical Review X — arXiv:1512.02206.
- Tameem Albash and Daniel A. Lidar, Demonstration of a scaling advantage for a quantum annealer over simulated annealing (2018), Physical Review X — arXiv:1705.07452.
- Michel X. Goemans and David P. Williamson, Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming (1995), JACM — dl.acm.org/doi/10.1145/227683.227684.
- Catherine C. McGeoch, Adiabatic Quantum Computation and Quantum Annealing: Theory and Practice (Morgan & Claypool, 2014) — doi.org/10.2200/S00585ED1V01Y201407QMC008. A balanced book-length survey.
- Wikipedia, Quantum annealing.