In short
Adiabatic quantum computing (AQC), proposed by Farhi, Goldstone, Gutmann, and Sipser in 2000, is a computational model that does not use gates. Instead, you choose a Hamiltonian H_0 whose ground state is easy to prepare (like a uniform superposition), and a Hamiltonian H_p whose ground state encodes the answer to your problem. Start the quantum system in the ground state of H_0 and then slowly interpolate the Hamiltonian from H_0 to H_p — for instance, H(t) = (1 - s)H_0 + s H_p with s ramping from 0 to 1 over time T. The adiabatic theorem (a century-old result in quantum mechanics) guarantees that if T is large compared to 1/g_{\min}^2, where g_{\min} is the smallest spectral gap encountered during the evolution, the system stays in the instantaneous ground state throughout. At the end, measure — and you get the answer. For easy problems the gap stays open and T is polynomial; for hard problems the gap can close exponentially and T explodes. Aharonov and van Dam (2007) proved that AQC is polynomially equivalent in power to the circuit model: anything a circuit does in time T, AQC can do in \text{poly}(T), and vice versa. In hardware, D-Wave has built commercial machines since 2011 that implement a restricted form of AQC called quantum annealing — H_p must be an Ising Hamiltonian — on 5000+ superconducting flux qubits. The catch: after 15 years and hundreds of benchmarks, no clear quantum advantage of D-Wave over classical simulated annealing has been demonstrated for any practical problem. AQC is a legitimate, beautiful, universal computational model. Quantum annealing is a real machine you can buy time on. Whether the machine has ever beaten a good classical laptop at anything commercially useful remains contested.
The quantum-computing story you have been told so far goes like this. You have some qubits. You apply gates to them. At the end you measure. Each gate is a unitary operation, carefully designed; each measurement collapses the state. A Shor's-algorithm circuit or a Grover's-algorithm circuit is a recipe: apply this gate, then this one, then this one, then measure.
There is a completely different way to compute with quantum mechanics. Instead of choreographing a sequence of discrete gates, you let the system's Hamiltonian — the operator that describes its energy — do the work. You pick a Hamiltonian whose ground state (the lowest-energy eigenstate) is the answer to your problem. You pick another Hamiltonian whose ground state is easy to prepare. You put the system in the easy ground state, and then you slowly slide the Hamiltonian from the easy one to the hard one. If you do it slowly enough, a deep theorem of quantum mechanics guarantees that the system follows along — it stays in the ground state the whole way. When you have finished sliding, you measure, and you have your answer.
No gates. No circuit diagram. No careful sequencing of operations. Just a Hamiltonian that you are smoothly reshaping over time, and the system tracking along. This is adiabatic quantum computing, and it was proposed by Edward Farhi, Jeffrey Goldstone, Sam Gutmann, and Michael Sipser at MIT in 2000, in a paper titled simply Quantum Computation by Adiabatic Evolution.
It sounds like it might be a different kind of machine from a standard quantum computer — something less powerful, or more restricted, or more specialised. It is not. Dorit Aharonov and Wim van Dam proved in 2007 that adiabatic quantum computing is polynomially equivalent to the gate model. Anything a circuit can do efficiently, adiabatic evolution can do efficiently. Anything adiabatic can do, a circuit can simulate efficiently. They are the same computational model, expressed in two different languages.
This chapter explains the adiabatic recipe, the adiabatic theorem that makes it work, the spectral gap that decides how fast it can go, the polynomial equivalence, and the D-Wave machine — which implements a restricted variant called quantum annealing on actual hardware you can submit jobs to today. It also contains an honest hype check on what D-Wave has, and has not, demonstrated.
The adiabatic theorem — the physics behind the trick
The adiabatic theorem is one of the older results in quantum mechanics, going back to Born and Fock in 1928. Before we use it as a computational tool, here is what it actually says.
Consider a quantum system with a time-dependent Hamiltonian H(t). At each moment t, the Hamiltonian has a set of eigenstates |\psi_0(t)\rangle, |\psi_1(t)\rangle, |\psi_2(t)\rangle, \dots with energies E_0(t) < E_1(t) < E_2(t) < \dots. Call |\psi_0(t)\rangle the instantaneous ground state: the lowest-energy eigenstate of H(t) at the current moment. As t changes, the eigenstates change too — they morph smoothly, as long as the Hamiltonian is changing smoothly.
Now suppose you start the system in the ground state at time t = 0: the state |\Psi(0)\rangle = |\psi_0(0)\rangle. Let the Hamiltonian evolve. What happens to the system's state?
The adiabatic theorem says: if H(t) is changing slowly enough, the system stays in the instantaneous ground state the whole time. That is,

|\Psi(t)\rangle \approx e^{i\phi(t)}\, |\psi_0(t)\rangle \quad \text{for all } t \in [0, T],

up to a physically irrelevant phase \phi(t).
"Slowly enough" has a precise meaning. Let g(t) = E_1(t) - E_0(t) be the spectral gap between the ground state and the first excited state. If the total evolution time is T, then the adiabatic theorem requires roughly
Why the gap squared: the derivation is standard perturbation theory. At each moment, the probability of "leaking" from the ground state into the first excited state per unit time is proportional to the matrix element of \partial_t H between those two states, divided by the energy difference — that is, divided by the gap. Integrating this leakage rate over time, and demanding the total leakage is small, gives a condition with 1/g^2 (one factor from the rate, one from the time it takes for the two states to dephase). The gap squared is the universal adiabatic scaling, and it is the reason adiabatic evolution is either fast or painfully slow.
The condition T \gg 1/g_{\min}^2 is the whole story. If the minimum gap g_{\min} is large (say, \sim 1), then a modest evolution time suffices and the system tracks the ground state effortlessly. If g_{\min} is small (say, \sim 10^{-5}), then the evolution must take at least 10^{10} units of time — which is terrible if you hoped to compute something useful.
The spectral gap determines the runtime. Everything else in adiabatic quantum computing follows from this.
The adiabatic protocol for computation
Now we turn this physics theorem into a computational recipe.
Pick your Hamiltonians. Choose a simple starting Hamiltonian H_0 whose ground state is easy to prepare. A near-universal choice is the transverse-field Hamiltonian

H_0 = -\sum_{i=1}^{n} X_i,
whose ground state is |+\rangle^{\otimes n} — the uniform superposition over all 2^n computational-basis states. (Applying a Hadamard to each of n |0\rangle qubits gets you there in a single time step.)
Why -\sum_i X_i has that ground state: each term -X_i is minimised when qubit i is an eigenstate of X with eigenvalue +1, which is |+\rangle. Since the terms act on different qubits, they can be minimised independently, so the joint ground state is |+\rangle^{\otimes n}. The energy is -n, the minimum possible.
Choose a problem Hamiltonian H_p whose ground state encodes the answer to whatever computational problem you want to solve. For combinatorial optimisation, H_p is typically an Ising Hamiltonian:

H_p = \sum_i h_i Z_i + \sum_{ij} J_{ij} Z_i Z_j,
where h_i and J_{ij} are real coefficients encoding the problem. The ground state of H_p is a specific basis state |z^*\rangle — a bit-string z^* that minimises the classical energy \sum_i h_i z_i + \sum_{ij} J_{ij} z_i z_j. That bit-string is your answer.
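Brute-forcing the classical minimisation makes the encoding concrete. Below is a minimal sketch with an illustrative 3-spin instance (the particular h and J values are assumptions of this example, not taken from the chapter):

```python
import itertools
import numpy as np

def ising_energy(z, h, J):
    """Classical Ising energy sum_i h_i z_i + sum_{i<j} J_ij z_i z_j for z in {-1,+1}^n."""
    n = len(z)
    e = sum(h[i] * z[i] for i in range(n))
    e += sum(J[i, j] * z[i] * z[j] for i in range(n) for j in range(i + 1, n))
    return e

def brute_force_ground_state(h, J):
    """Enumerate all 2^n spin configurations and return a minimiser z*."""
    best = min(itertools.product([+1, -1], repeat=len(h)),
               key=lambda z: ising_energy(z, h, J))
    return best, ising_energy(best, h, J)

# Illustrative instance: antiferromagnetic triangle plus a small field on spin 1.
h = np.array([0.5, 0.0, 0.0])
J = np.zeros((3, 3))
J[0, 1] = J[1, 2] = J[0, 2] = 1.0

z_star, e_star = brute_force_ground_state(h, J)
print(z_star, e_star)   # a lowest-energy spin configuration and its energy
```

The enumeration is of course exponential in n; the whole point of the adiabatic protocol is to replace it with physical evolution.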
Interpolate. Define a schedule s(t) that goes from 0 at t = 0 to 1 at t = T. Linear is the default: s(t) = t/T. Construct the time-dependent Hamiltonian

H(t) = (1 - s(t))\, H_0 + s(t)\, H_p.
At t = 0 this is just H_0; at t = T it is just H_p; in between it is a mixture.
Evolve. Start the system in |\psi(0)\rangle = |+\rangle^{\otimes n}, the ground state of H_0. Let the Hamiltonian H(t) evolve it for time T. The state evolves according to the Schrödinger equation:

i \frac{d}{dt} |\Psi(t)\rangle = H(t)\, |\Psi(t)\rangle

(in units where \hbar = 1).
Measure. At t = T, measure every qubit in the computational basis. If T was chosen large enough (larger than 1/g_{\min}^2), the state at the end is the ground state of H_p up to small corrections, so the measurement outcome is the bit-string z^* that minimises your cost function. That is the answer.
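The whole recipe can be simulated directly for a toy instance. The sketch below integrates the Schrödinger equation for two qubits with an illustrative problem Hamiltonian H_p = Z_0 Z_1 + 0.5\, Z_0 (an assumption of this example, chosen so the ground state is the unique basis state |10\rangle), and shows the success probability growing with T:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

# Toy problem: H_p = Z0 Z1 + 0.5 Z0 has the unique ground state |10>.
H0 = -(np.kron(X, I2) + np.kron(I2, X))
Hp = np.kron(Z, Z) + 0.5 * np.kron(Z, I2)

def step(H, psi, dt):
    """One exact time step exp(-i H dt) |psi> via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ (np.exp(-1j * w * dt) * (V.conj().T @ psi))

def adiabatic_success(T, n_steps=400):
    """Evolve |++> under H(s) = (1-s)H0 + s Hp; return |<10|psi(T)>|^2."""
    psi = np.ones(4, dtype=complex) / 2.0        # |++>, ground state of H0
    dt = T / n_steps
    for k in range(n_steps):
        s = (k + 0.5) / n_steps                  # midpoint of the linear schedule
        psi = step((1 - s) * H0 + s * Hp, psi, dt)
    return abs(psi[2]) ** 2                      # index 2 <-> basis state |10>

for T in (0.5, 5.0, 50.0):
    print(f"T = {T:5.1f}: success probability = {adiabatic_success(T):.3f}")
```

A fast sweep leaves the state close to the initial superposition (success near 1/4); a slow sweep tracks the ground state and the success probability approaches 1.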
Why the gap is everything
The choice of T depends entirely on g_{\min}, the smallest gap encountered during the evolution. A few typical scenarios illustrate what this means in practice.
Easy problems: gap stays open. If the problem is easy — say, finding the ground state of a short-range Ising model with no frustration — the gap typically scales as a constant, g_{\min} = \Theta(1), independent of system size n. The adiabatic condition T \gg 1/g_{\min}^2 becomes T = O(1): the evolution time is constant. You can solve the problem in a fixed amount of time no matter how big it gets. This is the best case.
Moderately hard problems: gap closes polynomially. For many tractable problems, g_{\min} scales as 1/\text{poly}(n). Then T = O(\text{poly}(n)) — polynomial in the problem size. The adiabatic algorithm is efficient, in the same complexity sense as any good polynomial-time algorithm.
Hard problems: gap closes exponentially. For NP-hard problems — like random instances of 3-SAT at the hardness peak, or spin glasses with many local minima — the minimum gap typically scales as g_{\min} \sim 2^{-\alpha n} for some \alpha > 0. The adiabatic condition then demands T \gg 2^{2\alpha n}: the evolution time is exponential in the problem size. This is no better than classical brute force.
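A quick back-of-envelope illustrates the three regimes. The constants below (gap 0.5, exponents 2 and \alpha = 0.1) are illustrative choices, not derived from any specific problem:

```python
# Required evolution time T ~ 1/g_min^2 for the three gap-scaling regimes.
def required_time(g_min):
    return 1.0 / g_min ** 2

for n in (10, 20, 40, 80):
    g_const = 0.5                 # easy: gap independent of n
    g_poly = 1.0 / n ** 2         # moderate: gap ~ 1/poly(n)
    g_exp = 2.0 ** (-0.1 * n)     # hard: gap ~ 2^(-alpha n), alpha = 0.1
    print(f"n={n:3d}: T_const={required_time(g_const):8.0f}  "
          f"T_poly={required_time(g_poly):12.0f}  "
          f"T_exp={required_time(g_exp):16.0f}")
```

At n = 80 the exponential-gap column is already astronomically larger than the others, which is the whole story of hard instances in one table.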
The entire research program of adiabatic quantum computing can be summarised as: under what circumstances does the gap stay open? For some problems the gap behaviour is known exactly: in unstructured (Grover) search the minimum gap is \Theta(1/\sqrt{N}), and with a schedule that slows down near the minimum gap (Roland and Cerf, 2002) AQC matches Grover's quadratic speedup. For random NP-hard instances, the gap closes exponentially and AQC gives no advantage. The question of whether there is a class of practically relevant problems for which AQC systematically outperforms classical computation is exactly the question of whether the gap systematically stays open for that class — and as of 2026, no such systematic advantage has been proven.
Aharonov and van Dam: AQC is universal
In 2007, Dorit Aharonov, Wim van Dam, Julia Kempe, Zeph Landau, Seth Lloyd, and Oded Regev published a landmark paper establishing that AQC and the circuit model are polynomially equivalent as computational models.
The content of the theorem is two simulations:
- Circuit → AQC. Any quantum circuit U of depth D acting on n qubits can be compiled into an adiabatic evolution on a Hamiltonian of \text{poly}(n, D) size, whose ground state encodes the output of the circuit. The construction uses a history-state Hamiltonian that entangles a "clock" register with the state of the circuit at each time step; the ground state is a uniform superposition over all clock positions, each correlated with the circuit state at that position.
- AQC → Circuit. Any adiabatic evolution running for polynomial time T with an at most inverse-polynomially small gap can be simulated by a quantum circuit of \text{poly}(T, n) size, using Trotterisation (the time-evolution operator e^{-iHt} can be approximated by a sequence of short-time-step gates to any desired accuracy).
Why this matters for thinking about AQC: it says AQC is not a weird special-purpose tool. It is the same computational model as the circuit picture, just expressed in a different language. Any problem that has a polynomial-time quantum algorithm in the circuit model has a polynomial-time adiabatic algorithm. Shor's factoring, for instance, can be done adiabatically: with a carefully chosen H_p, the gap stays polynomially large, matching the gate-count complexity. AQC is universal.
The equivalence has one important caveat: the specific H_0 and H_p constructed by Aharonov-van Dam are not the simple transverse-field Ising models that D-Wave uses. They involve multi-qubit interaction terms. So "universal AQC" needs more than what D-Wave's hardware supports.
D-Wave — quantum annealing in practice
D-Wave Systems, a Canadian company founded in 1999, has been building and selling quantum annealers since 2011. A quantum annealer is a physical device that implements a specific, restricted variant of AQC: the problem Hamiltonian H_p is limited to a 2-local Ising form

H_p = \sum_i h_i Z_i + \sum_{(i,j) \in G} J_{ij} Z_i Z_j,
where the edges (i,j) are restricted to a specific hardware connectivity graph G (historically the "Chimera" graph, more recently "Pegasus" and "Zephyr"). The starting Hamiltonian is the transverse field H_0 = -\sum_i X_i, and the device interpolates H(t) = A(t) H_0 + B(t) H_p according to a fixed annealing schedule.
The qubits are superconducting flux qubits cooled to around 15 millikelvin in a dilution refrigerator. As of 2024, the flagship D-Wave Advantage system has more than 5000 physical qubits and more than 35,000 couplers between them, with the next-generation Advantage2 in development. You can submit a job remotely, the system runs the annealing schedule for tens of microseconds, and it returns a bit-string.
D-Wave solves optimisation problems formulated as QUBO (Quadratic Unconstrained Binary Optimisation) — which is the classical equivalent of finding the ground state of the Ising Hamiltonian above. Examples include MaxCut, graph colouring, portfolio optimisation, and certain scheduling problems. The company targets industrial customers: Volkswagen (for traffic-flow optimisation), Lockheed Martin, NASA, and a number of financial-services firms.
Quantum annealing is not universal AQC
The crucial caveat is that D-Wave's machines do not implement universal AQC. Universal AQC, as Aharonov-van Dam showed, requires more general multi-qubit interactions in H_p, including non-stoquastic terms (Hamiltonians whose off-diagonal matrix elements in the computational basis are not all real and non-positive). D-Wave's H_p is restricted to 2-local stoquastic Ising terms. The resulting device is powerful enough to search for the ground state of a specific class of optimisation problems, but it cannot natively run Shor's factoring, or Grover's search on an arbitrary oracle, or most other circuit-model algorithms.
What quantum annealing can in principle do is exploit quantum tunnelling to escape local minima of the energy landscape — tunnelling through energy barriers that a classical annealer (simulated annealing) must climb over. Whether this tunnelling gives a practical speedup for realistic optimisation problems is the central open question in quantum annealing.
Hype check
Hype check. D-Wave has been shipping commercial quantum-annealing machines since 2011 — fifteen years as of 2026. In that time, no quantum advantage for a practical problem has been convincingly demonstrated. Troyer, Rønnow, and collaborators (2014, 2017) ran extensive benchmarks comparing D-Wave systems to well-tuned classical algorithms — simulated annealing, spin-vector Monte Carlo, parallel tempering — and found that for almost every test case, D-Wave matches classical performance up to a constant factor. Where D-Wave appears to win, the "win" often evaporates when the classical baseline is tuned harder. Katzgraber and collaborators' analyses have argued that the problems where D-Wave's hardware is best — small, highly structured instances — are precisely the problems where classical heuristics are also strong. The 2015–2019 debate over "quantum speedup" on D-Wave produced a clear consensus: D-Wave is useful for benchmarking annealing-based heuristics and for some optimisation workflows, but it does not solve any commercially important problem faster than a good classical laptop. D-Wave's own marketing now emphasises hybrid classical-quantum workflows (where the quantum step is one ingredient among many) rather than raw quantum advantage. This may change if annealers achieve larger gaps and lower noise. But as of 2026, the honest answer to "does D-Wave beat classical?" is: for no known practical problem has it done so reproducibly.
This hype check is not a dismissal of adiabatic quantum computing as a field. AQC remains a legitimate, beautiful, and universal model of quantum computation, and its close connection to variational circuit algorithms like QAOA keeps it central to the research programme. The narrow claim is: D-Wave's specific commercial product has not demonstrated the advantage the company's early marketing suggested it would. There is a meaningful gap between "AQC is a universal quantum model" (true) and "quantum annealers are useful accelerators for industry problems today" (unproven).
QAOA — AQC in discrete steps
There is a beautiful connection between adiabatic quantum computing and the Quantum Approximate Optimisation Algorithm (QAOA), introduced by Farhi, Goldstone, and Gutmann in 2014. QAOA is essentially discretised adiabatic evolution.
Here is the connection. The continuous-time adiabatic evolution can be approximated by a Trotter decomposition: break the evolution time T into p short steps of length \Delta t = T/p, and within each step approximate

e^{-i H(s) \Delta t} \approx e^{-i (1 - s) H_0 \Delta t}\, e^{-i s H_p \Delta t}.
Trotterisation is exact in the limit p \to \infty and approximate for finite p. QAOA uses finite p: it applies p layers of alternating H_0-evolution and H_p-evolution on a gate-based quantum computer, with the durations treated as variational parameters (\beta_k, \gamma_k) optimised by a classical outer loop.
So:
- p = 1: one Trotter step, a rough approximation of adiabatic evolution — but sometimes already useful.
- p = O(\log n) to O(\text{poly}(n)): closer to true adiabatic evolution; performance matches AQC on many problems.
- p \to \infty: exact adiabatic evolution.
The beauty of QAOA is that it is runnable on NISQ hardware — you need only a finite-depth circuit of one- and two-qubit gates, not a continuous-time analog evolution. The classical outer loop optimises the variational parameters to compensate for small p. QAOA inherits AQC's universality (in the p \to \infty limit) and AQC's difficulties (for hard problems, the required p can be large).
You can read the full QAOA story in the QAOA algorithm chapter. The short version: QAOA is what you run on NISQ; AQC is what it approximates as depth increases. They are two faces of the same coin.
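The discretisation is easy to play with numerically. Below is a minimal statevector sketch of p = 1 QAOA on the triangle MaxCut instance used later in this chapter; the grid search and angle ranges are illustrative choices, not a prescribed method:

```python
import numpy as np

# Triangle MaxCut: 3 qubits, edges (0,1), (1,2), (0,2).
edges = [(0, 1), (1, 2), (0, 2)]
n = 3

# cut(z) for every computational basis state (bit i of index b is qubit i).
cuts = np.array([sum((b >> i & 1) != (b >> j & 1) for i, j in edges)
                 for b in range(2 ** n)], dtype=float)

def qaoa_expected_cut(gamma, beta):
    """p = 1 QAOA: |psi> = e^{-i beta B} e^{-i gamma C} |+++>, return <psi|C|psi>."""
    psi = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)
    psi = np.exp(-1j * gamma * cuts) * psi                # cost layer (diagonal)
    mix = np.array([[np.cos(beta), -1j * np.sin(beta)],
                    [-1j * np.sin(beta), np.cos(beta)]])  # e^{-i beta X}
    psi = np.kron(np.kron(mix, mix), mix) @ psi           # mixer on all 3 qubits
    return float(np.sum(cuts * np.abs(psi) ** 2))

# Crude grid search over the two angles.
grid = np.linspace(0, np.pi, 61)
best = max(qaoa_expected_cut(g, b) for g in grid for b in grid)
print(f"best p=1 expected cut ~ {best:.3f} (max cut 2, random guessing 1.5)")
```

Even a single layer beats random guessing on this instance; adding layers pushes the expected cut toward the continuous-time adiabatic value.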
Example 1: MaxCut on three vertices via adiabatic evolution
Solve the MaxCut problem on a triangle graph using adiabatic quantum computing. The triangle has vertices \{1, 2, 3\} and edges \{(1,2), (2,3), (1,3)\}. A "cut" is an assignment of vertices to two sets S and \bar S; we want to maximise the number of edges with one endpoint in each set.
Step 1. Encode the problem as an Ising Hamiltonian. For each vertex i introduce a qubit whose Z-eigenvalue is z_i \in \{+1, -1\}, interpreted as the partition label. An edge (i,j) is cut exactly when z_i \neq z_j, i.e. when z_i z_j = -1. The number of cut edges is

\text{cut}(z) = \sum_{(i,j) \in E} \frac{1 - z_i z_j}{2}.
To maximise cuts is to minimise \sum_{(i,j) \in E} z_i z_j. So we set

H_p = Z_1 Z_2 + Z_2 Z_3 + Z_1 Z_3.
The ground state of H_p is the bit-string with the maximum cut. The minimum eigenvalue is -1 (since two edges can be cut but not all three on a triangle — this is a frustrated graph). Why the triangle is frustrated: if you put vertex 1 in S and vertex 2 in \bar S, edge (1,2) is cut. If you then put vertex 3 in S, edge (2,3) is cut but edge (1,3) is not. You can cut at most 2 of the 3 edges. There are six equally-optimal solutions: each of the three pairs of vertices can be split from the third vertex.
Step 2. Choose H_0. The standard transverse-field choice:

H_0 = -(X_1 + X_2 + X_3),

whose ground state is |+\rangle|+\rangle|+\rangle = \tfrac{1}{2\sqrt{2}} \sum_{x,y,z \in \{0,1\}} |xyz\rangle.
Step 3. Write the interpolated Hamiltonian.

H(s) = (1 - s)\, H_0 + s\, H_p, \qquad s \in [0, 1].
At s = 0: ground state is |+++\rangle. At s = 1: ground state is a superposition of the six optimal-cut bit-strings (or any one of them under symmetry breaking). The adiabatic theorem says that if we sweep s from 0 to 1 slowly enough, we end up in the ground state of H_p — one of the maximum-cut bit-strings.
Step 4. Estimate the minimum gap. For a 3-qubit problem, the gap can be computed by exact diagonalisation. At s = 1/2, the minimum gap of H(s) for this problem is around g_{\min} \approx 0.45 (in units where the Hamiltonian coefficients are O(1)). So the required evolution time is T \gg 1/g_{\min}^2 \approx 5, in Hamiltonian-time units. On a D-Wave-like machine with a few gigahertz clock rate, this corresponds to nanoseconds. Why the gap is so generous here: three qubits is a trivially small problem, and although the six optimal cuts become degenerate at s = 1, that degeneracy is harmless: the gap that matters, from the ground level to the first level of strictly higher energy, stays comfortably open along the sweep. For a larger graph — say, 50 vertices of a random regular graph — the gap would scale as 1/\text{poly}(n) and require much longer runtimes.
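The exact diagonalisation in Step 4 takes only a few lines of numpy; the sample points below are illustrative:

```python
import numpy as np

I2, X, Z = np.eye(2), np.array([[0., 1.], [1., 0.]]), np.diag([1., -1.])

def kron3(A, B, C):
    return np.kron(np.kron(A, B), C)

# Triangle MaxCut: H_p = Z1 Z2 + Z2 Z3 + Z1 Z3, H_0 = -(X1 + X2 + X3).
H0 = -(kron3(X, I2, I2) + kron3(I2, X, I2) + kron3(I2, I2, X))
Hp = kron3(Z, Z, I2) + kron3(I2, Z, Z) + kron3(Z, I2, Z)

def spectrum(s):
    """Sorted eigenvalues of H(s) = (1-s) H0 + s Hp."""
    return np.linalg.eigvalsh((1 - s) * H0 + s * Hp)

# E1 - E0 along the path.  Near s = 1 this difference closes only because
# the six optimal cuts become degenerate, which is harmless.
for s in (0.0, 0.25, 0.5, 0.75, 1.0):
    e = spectrum(s)
    print(f"s = {s:.2f}: E0 = {e[0]:+.3f}, E1 - E0 = {e[1] - e[0]:.3f}")

e_final = spectrum(1.0)
n_ground = int(np.sum(np.isclose(e_final, e_final[0])))
print(f"ground energy at s = 1: {e_final[0]:+.1f}, degeneracy: {n_ground}")
```

The final ground energy -1 corresponds to 2 cut edges (via cut = (3 - energy)/2), and the six-fold degeneracy is the six optimal cuts described above.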
Step 5. Measure. After the evolution, measure in the computational basis. You obtain one of the six ground-state bit-strings with probability 1/6 each (under symmetry). Any of them gives you a valid maximum-cut.
Result. The adiabatic algorithm for triangle MaxCut needs only a few time units and produces a solution with 2 of the 3 edges cut — the provable maximum. On this tiny problem, classical brute force is of course instantaneous; the interest is purely illustrative.
What this shows. The adiabatic protocol is a clean, physical recipe: encode your problem in a Hamiltonian, choose an easy starting Hamiltonian, interpolate. No gate sequencing, no circuit design. The whole complexity of "how hard is the problem" gets encoded in one number — the minimum gap during the sweep — and the runtime scales with that number. For small or easy problems, AQC is elegant and efficient. For hard problems, the gap is what decides whether you win.
Example 2: QUBO on a D-Wave machine
Convert a small satisfiability problem to a QUBO and see how D-Wave would encode it.
Step 1. Start with the SAT problem: find x, y, z \in \{0, 1\} satisfying the constraints (x \lor y), (y \lor z), and (\bar x \lor \bar z).
Step 2. Convert each clause to an energy penalty. A clause (a \lor b) is violated when a = 0 and b = 0; we assign energy +1 to the violating assignment and 0 to all others:

f(a, b) = (1 - a)(1 - b).
The clause (\bar x \lor \bar z) is violated when x = 1 and z = 1, so f(x, z) = xz. Why these polynomial forms: each clause is a Boolean function taking values in \{0, 1\}. We want its value to be 0 on satisfying assignments and 1 on violating assignments, so that the sum of clause penalties is zero exactly when the formula is satisfied. Expanding (1-a)(1-b) = 1 - a - b + ab gives a quadratic polynomial in the \{0,1\} variables.
Step 3. Sum the clause energies to get the QUBO cost:

C(x, y, z) = (1 - x)(1 - y) + (1 - y)(1 - z) + xz.
Expand:

C = 2 - x - 2y - z + xy + yz + xz.
A satisfying assignment is one with C = 0 (all clauses simultaneously satisfied). Checking: x = 1, y = 1, z = 0 gives C = 2 - 1 - 2 - 0 + 1 + 0 + 0 = 0, which works.
Step 4. Convert \{0, 1\} variables to \{-1, +1\} spins using x = (1 - Z)/2 (or equivalently Z = 1 - 2x, so x = 0 \to Z = +1 and x = 1 \to Z = -1). Substitute into the cost. After the algebra,

C = c_0 + \tfrac{1}{2} Z_2 + \tfrac{1}{4}\left( Z_1 Z_2 + Z_2 Z_3 + Z_1 Z_3 \right)
for some constant c_0 that doesn't affect the ground state. Why the constant drops out: the D-Wave device finds the bit-string with minimum energy. A constant shift in the Hamiltonian shifts every energy by the same amount, so the minimiser is unchanged. The machine only cares about the coefficients h_i (on the single-Z terms) and J_{ij} (on the pair-ZZ terms).
Step 5. Send to D-Wave. The input to the machine is the set of fields and couplings:

h_1 = 0, \quad h_2 = \tfrac{1}{2}, \quad h_3 = 0, \qquad J_{12} = J_{23} = J_{13} = \tfrac{1}{4}.
D-Wave initialises all qubits in |+\rangle (the ground state of the transverse field), slowly ramps down the transverse field while ramping up these Ising couplings, and after ~20 microseconds measures the bit-string. The output should be one of the three degenerate ground states (Z_1, Z_2, Z_3) = (-1, -1, +1), (+1, -1, +1), or (+1, -1, -1), corresponding to the satisfying assignments (x, y, z) = (1, 1, 0), (0, 1, 0), and (0, 1, 1).
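The conversion can be verified by brute force. A minimal sketch, using the x = (1 - Z)/2 substitution from Step 4 (the Ising coefficients below are the re-derived result of that algebra, and the assertion checks them on all eight assignments):

```python
import itertools

# The three clauses (x or y), (y or z), (not x or not z) as QUBO penalties.
def qubo_cost(x, y, z):
    return (1 - x) * (1 - y) + (1 - y) * (1 - z) + x * z

# All assignments with cost 0 are exactly the satisfying assignments.
sat = [(x, y, z) for x, y, z in itertools.product([0, 1], repeat=3)
       if qubo_cost(x, y, z) == 0]
print("satisfying assignments:", sat)

# The same cost in spin variables, including the constant c_0 = 3/4.
def ising_cost(Z1, Z2, Z3):
    return 0.75 + 0.5 * Z2 + 0.25 * (Z1 * Z2 + Z2 * Z3 + Z1 * Z3)

# Check the two forms agree under x = (1 - Z)/2 on every assignment.
for x, y, z in itertools.product([0, 1], repeat=3):
    assert abs(qubo_cost(x, y, z)
               - ising_cost(1 - 2 * x, 1 - 2 * y, 1 - 2 * z)) < 1e-12
print("QUBO and Ising forms agree on all 8 assignments")
```

The enumeration also shows the ground space is three-fold degenerate, so the annealer may return any of the three satisfying bit-strings.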
Result. Any QUBO can be mapped to a D-Wave Ising problem by the substitution above. Satisfiability, graph colouring, MaxCut, portfolio optimisation — all become instances of finding the ground state of a specific Ising Hamiltonian. This is what D-Wave does and the only thing it does.
What this shows. D-Wave is not running a general-purpose quantum program. It is finding the ground state of whatever Ising Hamiltonian you hand it, by quantum annealing. The "quantum-ness" is the annealing schedule and the possible role of tunnelling in escaping local minima. For problems whose Ising ground state is easy, D-Wave finds it fast; for problems whose ground state is hard (gap closes exponentially), D-Wave takes exponential time — which matches what classical simulated annealing does on the same instances.
Common confusions
- "Adiabatic quantum computing is an analog machine, so it cannot be universal." It is analog (continuous-time evolution), and it is universal: Aharonov-van Dam proved the polynomial equivalence with the circuit model. "Analog" does not mean "restricted" in this context. The key is that the Hamiltonian itself can be arbitrarily complex; the analog is in time, but the Hamiltonian's structure is discretely specifiable.
- "D-Wave is a quantum computer." It is a quantum device — its qubits are genuine flux qubits with measurable quantum coherence — but it is a restricted quantum device, implementing only 2-local Ising quantum annealing. It cannot run Shor's algorithm, Grover's search, or the circuit-model algorithms you have seen. Calling it a "quantum computer" in the same sense as a gate-model machine conflates universal and special-purpose hardware.
- "Adiabatic means slow means useless." Slow relative to what matters: the inverse gap squared. If the gap is large, "slow" still means fast. If the gap is small, "slow" means very slow. The word "adiabatic" is a physics term for "slow enough that the system tracks its instantaneous ground state" — not a judgement about absolute time.
- "QAOA and AQC are different algorithms." They are the same algorithm, expressed in continuous time (AQC) versus discrete Trotter steps (QAOA). QAOA is what you run on NISQ hardware; AQC is its continuous-time limit.
- "D-Wave's billion-dollar quantum advantage is real." As of 2026, no convincing demonstration of quantum advantage for a practical problem on D-Wave exists, despite the company's marketing history. The Troyer, Rønnow, and related benchmark studies remain the definitive analysis.
- "The adiabatic theorem is a guarantee." It is an asymptotic bound. In practice, finite-T evolutions have some probability of non-adiabatic transitions to excited states, and the probability of a correct answer depends on how tightly T \gg 1/g_{\min}^2 is satisfied. Real adiabatic computation is a probabilistic process with success probability approaching 1 as T grows.
Going deeper
If you understand that adiabatic quantum computing evolves a time-dependent Hamiltonian from an easy H_0 to a problem H_p, that the adiabatic theorem guarantees ground-state tracking if the evolution time is much larger than 1/g_{\min}^2, that AQC is polynomially equivalent to the circuit model (Aharonov-van Dam 2007), that D-Wave implements a restricted form called quantum annealing on superconducting flux qubits, that QAOA is the Trotter-discretised finite-p version of AQC, and that no convincing D-Wave quantum advantage has been demonstrated as of 2026 — you have chapter 178. What follows is the rigorous statement of the adiabatic theorem, the Aharonov-van Dam equivalence proof sketch, D-Wave's hardware, the Troyer benchmark studies, and the deeper QAOA-AQC connection.
The rigorous adiabatic theorem
A modern version of the adiabatic theorem (Jansen, Ruskai, Seiler 2007; Ambainis, Regev 2004) states: let H(s) with s = t/T be a smooth family of Hamiltonians on a Hilbert space, with non-degenerate ground state |\psi_0(s)\rangle and gap g(s). Let |\Psi(T)\rangle be the solution of the Schrödinger equation starting from |\Psi(0)\rangle = |\psi_0(0)\rangle. Then, in simplified form,

\left\| |\Psi(T)\rangle - e^{i\phi}\, |\psi_0(1)\rangle \right\| \le \frac{C}{T} \max_{s \in [0, 1]} \frac{\|\partial_s H(s)\|}{g(s)^2}
for some constant C and an irrelevant global phase \phi. The condition T \gg 1/g_{\min}^2 follows when the \partial_s H norms are O(1) and the dominant contribution comes from the smallest gap along the path. Modifying the schedule s(t) — using slower steps near g_{\min} and faster steps elsewhere — can tighten the bound, but cannot beat the 1/g_{\min}^2 scaling in general.
Aharonov-van Dam equivalence — proof sketch
The AQC → Circuit direction is straightforward Trotterisation: e^{-i H(s) dt} \approx e^{-i (1-s) H_0 dt} e^{-i s H_p dt}, each exponential implementable as a short sequence of gates. The polynomial overhead comes from the number of Trotter steps needed to achieve a given accuracy.
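The Trotterisation step is easy to check numerically. A minimal two-qubit sketch (the Hamiltonians and step counts are illustrative assumptions), showing the first-order product-formula error shrink as the step count r grows:

```python
import numpy as np

def expm_h(H, t):
    """exp(-i H t) for Hermitian H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)

# Two non-commuting pieces: a transverse field and an Ising problem term.
H0 = -(np.kron(X, I2) + np.kron(I2, X))
Hp = np.kron(Z, Z)

t = 1.0
exact = expm_h(H0 + Hp, t)

def trotter_error(r):
    """Spectral-norm error of the first-order product formula with r steps."""
    step = expm_h(H0, t / r) @ expm_h(Hp, t / r)
    return np.linalg.norm(np.linalg.matrix_power(step, r) - exact, 2)

for r in (4, 8, 16, 32):
    print(f"r = {r:2d}: error = {trotter_error(r):.4f}")
```

The error falls off roughly as 1/r for the first-order formula, which is the polynomial overhead the equivalence proof pays.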
The Circuit → AQC direction is cleverer. Given a quantum circuit U = U_L U_{L-1} \cdots U_1, construct a history-state Hamiltonian on a joint "clock + computation" Hilbert space, whose ground state is

|\eta\rangle = \frac{1}{\sqrt{L + 1}} \sum_{\ell = 0}^{L} |\ell\rangle_{\text{clock}} \otimes U_\ell \cdots U_1 |0^n\rangle.
The Hamiltonian has three parts: one that penalises illegal clock states, one that penalises incorrect input (anything other than |0^n\rangle) at \ell = 0, and one that penalises inconsistencies between consecutive clock-and-computation states. Its gap can be shown to be \Omega(1/L^2), giving an adiabatic evolution time T = O(L^4) — polynomial in circuit depth. Hence any quantum circuit can be simulated in AQC in polynomial time.
D-Wave hardware in detail
Each D-Wave qubit is a flux qubit: a superconducting loop interrupted by Josephson junctions, in which a persistent current can circulate; the junctions give the loop an anharmonic double-well potential whose two nearly-degenerate minima correspond to clockwise and counter-clockwise current. These are the logical |0\rangle and |1\rangle states. The transverse field is implemented by external flux biases that tilt the double-well potential; the Ising coupling J_{ij} is implemented by inductive coupling between loops. Typical qubit coherence times are tens of nanoseconds, compared to tens of microseconds for circuit-model superconducting qubits — because D-Wave optimises for qubit count and connectivity rather than coherence. The operating temperature is ~15 mK in a dilution refrigerator. The graph connectivity has evolved from Chimera (~6 couplers per qubit, 2011) to Pegasus (15 couplers, 2020) to Zephyr (20 couplers, 2023–2024), each upgrade increasing the number of Ising instances that can be natively embedded.
Troyer benchmarks
Troyer, Rønnow, Wang, Job, Boixo, Isakov, Wecker, Lidar, Martinis (Science, 2014) ran D-Wave Two on carefully-constructed Ising instances and compared to classical simulated annealing, parallel tempering, and spin-vector Monte Carlo. They found that D-Wave's time-to-solution grew with problem size at the same scaling exponent as classical simulated annealing's — meaning no asymptotic advantage — and the constant prefactor was comparable to or worse than tuned classical code. Subsequent work (King et al. 2018, Albash & Lidar 2018) produced more careful instance-class analyses, finding small constant-prefactor advantages in narrow regimes but no exponential separation. The 2024 state of play: D-Wave's Advantage2 is excellent at being a quantum annealer, and classical annealers are excellent at being classical annealers, and on realistic optimisation benchmarks they finish in similar times.
The QAOA-AQC connection and beyond
QAOA's p = 1 layer is a very crude Trotter approximation of adiabatic evolution; it cannot achieve the full power of AQC. But — and this is a subtle point — QAOA's variational optimisation over (\beta, \gamma) can sometimes do better than the naive adiabatic schedule, because the classical outer loop learns an effective schedule that concentrates the Trotter steps near where they are most useful. For specific problem classes (bounded-degree MaxCut, some constraint-satisfaction problems), low-p QAOA beats low-p adiabatic. For other problem classes, large-p QAOA approaches adiabatic performance. The interplay between QAOA layer depth, adiabatic fidelity, and problem-class structure is an active research frontier.
Where this leads next
- QAOA Algorithm — the discretised, variational version of AQC that runs on NISQ gate-model hardware.
- Local Hamiltonians — the Hamiltonian structure AQC uses and the complexity class (QMA-complete) the ground-state problem sits in.
- Unitary Evolution — the Schrödinger equation and time-dependent Hamiltonians that AQC uses.
- Quantum Annealing — the hardware-specific restricted variant D-Wave implements.
- D-Wave — the commercial quantum-annealing company and its machines.
References
- Edward Farhi, Jeffrey Goldstone, Sam Gutmann, Michael Sipser, Quantum Computation by Adiabatic Evolution (2000) — arXiv:quant-ph/0001106.
- Dorit Aharonov, Wim van Dam, Julia Kempe, Zeph Landau, Seth Lloyd, Oded Regev, Adiabatic Quantum Computation is Equivalent to Standard Quantum Computation (2007), SIAM J. Comp. — arXiv:quant-ph/0405098.
- Tameem Albash and Daniel A. Lidar, Adiabatic Quantum Computation (2018), Rev. Mod. Phys. — arXiv:1611.04471.
- Troels F. Rønnow, Zhihui Wang, Joshua Job, Sergio Boixo, Sergei V. Isakov, David Wecker, John M. Martinis, Daniel A. Lidar, Matthias Troyer, Defining and detecting quantum speedup (2014), Science — arXiv:1401.2910.
- Wikipedia, Adiabatic quantum computation.
- John Preskill, Lecture Notes on Quantum Computation, Chapter 7 — theory.caltech.edu/~preskill/ph229.