In short

MaxCut is the canonical combinatorial optimisation benchmark: given a graph G = (V, E), partition the vertex set V into two subsets S and V \setminus S, maximising the number of edges with one endpoint on each side. NP-hard in general. Encode each vertex i as a bit x_i \in \{0, 1\}; an edge (i, j) is "cut" when x_i \ne x_j. The cost Hamiltonian is

H_C = \tfrac{1}{2} \sum_{(i,j) \in E} (1 - Z_i Z_j),

whose maximum eigenvalue encodes the best partition (flip the sign and it becomes a ground-state problem). The mixer is H_M = \sum_j X_j. QAOA at depth p applies p alternating layers e^{-i\beta_k H_M} e^{-i\gamma_k H_C} to |+\rangle^{\otimes n}, optimises (\gamma, \beta) \in \mathbb{R}^{2p} classically, and measures. Each e^{-i\gamma H_C} is one R_{ZZ} rotation per edge; each e^{-i\beta H_M} is one R_X per qubit. Worst-case approximation ratios on 3-regular graphs: p = 1 \to 0.6924 (proved by Farhi et al. 2014); p = 2 \to 0.7559; p = 3 \to 0.7924. Classical baseline: Goemans-Williamson achieves 0.878 in polynomial time via semidefinite-programming relaxation (1995), and under the Unique Games Conjecture no polynomial-time classical algorithm does better. Empirically (Wurtz-Lykov 2021), QAOA starts to match Goemans-Williamson around p \approx 11, but at that depth circuit noise dominates on NISQ hardware. Status: QAOA-for-MaxCut is the most studied and best understood quantum optimisation algorithm, and as of 2025 there is no demonstrated quantum speedup over classical methods on real MaxCut instances of practical size. The problem is a clean testbed for theoretical analysis, not yet a real-world win.

MaxCut is the combinatorial optimisation problem that every optimisation textbook reaches for first. The statement fits on a napkin: given a graph, split the vertices into two groups so that as many edges as possible go between groups rather than within them.

A concrete example. Suppose you are a Bangalore cricket-league organiser with 8 teams and a fixture list of 12 matches already scheduled. You want to split the teams into two pools — Pool A plays on Saturdays, Pool B on Sundays — in a way that maximises the number of inter-pool matches (so neither day is boring). That is MaxCut. The teams are vertices; the matches are edges; the pools are the two sides of the cut; "inter-pool" matches are cut edges.

Is it hard? For small instances you solve it by enumeration — 8 teams means 2^8 = 256 possible splits, check each one. For n = 100 teams you have 2^{100} \approx 1.3 \times 10^{30} splits, far beyond any feasible enumeration. MaxCut is NP-hard: no polynomial-time classical algorithm is known, and under standard complexity-theoretic assumptions none exists.
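The enumeration approach fits in a few lines of Python. The fixture list here is illustrative, not from the text: the 12 edges of the 3-cube graph, with teams labelled by 3-bit strings and a match between teams differing in one bit.

```python
# Brute-force MaxCut: enumerate all 2^8 = 256 splits of 8 teams.
# Illustrative fixture list: the 12 edges of the 3-cube graph
# (teams are 3-bit labels; a match joins teams differing in one bit).
teams = range(8)
matches = [(a, b) for a in teams for b in teams
           if a < b and bin(a ^ b).count("1") == 1]

def cut_size(split, edges):
    """Number of edges with endpoints on opposite sides of the split."""
    return sum(1 for i, j in edges if ((split >> i) & 1) != ((split >> j) & 1))

best_value, best_split = max((cut_size(s, matches), s) for s in range(2 ** 8))
print(best_value)  # the 3-cube is bipartite, so all 12 matches can be cut
```

At n = 100 the same loop would need 2^{100} iterations; that is the exponential wall described above.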

What you can do in polynomial time is compute an approximate answer. The celebrated Goemans-Williamson algorithm (1995) achieves an approximation ratio of 0.878 — meaning the cut it finds has size at least 0.878 times the true maximum. Under the Unique Games Conjecture, this is optimal for classical polynomial-time algorithms.

QAOA — the Quantum Approximate Optimization Algorithm from the previous chapter — is the quantum contender. MaxCut is its canonical benchmark, the problem on which QAOA's claims have been most sharply tested. This chapter is the accounting: what QAOA achieves on MaxCut, how the numbers compare with classical, and where "quantum advantage" stands.

MaxCut as a combinatorial problem

Formally: a graph G = (V, E) with |V| = n vertices and |E| = m edges. Optionally each edge has a weight w_{ij} > 0; the unweighted case has w_{ij} = 1 throughout. A cut is a partition of V into two subsets S and \bar{S} = V \setminus S. The cut value is

\mathrm{Cut}(S) = \sum_{(i,j) \in E : \, i \in S, j \in \bar{S}} w_{ij},

the total weight of edges with one endpoint on each side. MaxCut: find the partition maximising \mathrm{Cut}(S).

Encode the partition via bits: x_i = 0 if i \in S, x_i = 1 if i \in \bar{S}. An edge (i, j) is cut iff x_i \ne x_j. The cut value is

\mathrm{Cut}(x) = \sum_{(i,j) \in E} w_{ij} \, \mathbb{1}[x_i \ne x_j] = \sum_{(i,j) \in E} w_{ij} \cdot \tfrac{1}{2}(1 - (-1)^{x_i + x_j}).
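The two expressions for \mathrm{Cut}(x) can be checked against each other on a small example (the 4-cycle that reappears later in this chapter):

```python
import itertools

# Verify 1[x_i != x_j] == (1 - (-1)^(x_i + x_j)) / 2 on every bitstring of a
# small example graph: the unweighted 4-cycle.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]

indicator_vals = []
parity_vals = []
for x in itertools.product([0, 1], repeat=4):
    indicator_vals.append(sum(1 for i, j in edges if x[i] != x[j]))
    parity_vals.append(sum((1 - (-1) ** (x[i] + x[j])) // 2 for i, j in edges))

assert indicator_vals == parity_vals
print(max(indicator_vals))  # the 4-cycle is bipartite: max cut = 4
```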
[Figure: a MaxCut instance with 6 vertices and 7 edges; the optimal cut has 5 cut edges.]
A 6-vertex, 7-edge graph with the optimal cut drawn. The partition $S = \{v_1, v_2, v_3\}$ vs $\bar{S} = \{v_4, v_5, v_6\}$ cuts 5 edges (shown in accent colour); 2 edges are within-partition and not cut. For small graphs the optimum is easy to find by inspection; for large graphs it is NP-hard.

The cost Hamiltonian for MaxCut

Using the bit-to-Pauli encoding x_i \to (1 - Z_i)/2 from QAOA's algorithm chapter, the edge indicator \mathbb{1}[x_i \ne x_j] becomes

\mathbb{1}[x_i \ne x_j] \;\; \leftrightarrow \;\; \tfrac{1}{2}(1 - Z_i Z_j).

Why this works: Z_i Z_j has eigenvalue +1 when both qubits are in |0\rangle or both in |1\rangle (same side of the cut), and -1 when they differ (cut). So \tfrac{1}{2}(1 - Z_i Z_j) is 0 when not cut, 1 when cut — exactly the indicator.

Summing over edges, the MaxCut cost Hamiltonian is

H_C = \sum_{(i,j) \in E} w_{ij} \cdot \tfrac{1}{2}(1 - Z_i Z_j) = \tfrac{W}{2} - \tfrac{1}{2} \sum_{(i,j) \in E} w_{ij} \, Z_i Z_j,

where W = \sum_{(i,j) \in E} w_{ij} is the total edge weight. Its maximum eigenvalue equals the maximum cut value. QAOA maximises the expectation \langle H_C \rangle (equivalently, minimises \langle -H_C \rangle). The constant W/2 just shifts the spectrum; it drops out of the optimisation.

MaxCut cost Hamiltonian

For graph G = (V, E) with edge weights w_{ij}, the MaxCut cost operator is

H_C = \sum_{(i,j) \in E} w_{ij} \cdot \tfrac{1}{2}(1 - Z_i Z_j).

It is diagonal in the computational basis with eigenvalue equal to the cut value of the corresponding bitstring. For unweighted graphs, w_{ij} = 1 throughout.
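Because H_C is diagonal, it can be stored as a vector of 2^n eigenvalues. A quick numpy check on the unweighted 4-cycle confirms that each eigenvalue equals the cut value of its bitstring:

```python
import numpy as np

# Diagonal of H_C for the unweighted 4-cycle: entry x is sum over edges of
# (1 - z_i z_j)/2, where z_i = (-1)^{x_i} is the Z eigenvalue of qubit i.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4

diag = np.zeros(2 ** n)
for x in range(2 ** n):
    z = [1 - 2 * ((x >> i) & 1) for i in range(n)]          # (-1)^{x_i}
    diag[x] = sum(0.5 * (1 - z[i] * z[j]) for i, j in edges)

# Cross-check against a direct count of cut edges.
for x in range(2 ** n):
    bits = [(x >> i) & 1 for i in range(n)]
    assert diag[x] == sum(bits[i] != bits[j] for i, j in edges)

print(int(diag.max()))  # maximum eigenvalue = maximum cut = 4 (C4 is bipartite)
```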

The QAOA circuit for MaxCut

Assemble the QAOA template with this specific H_C.

Initial state: |+\rangle^{\otimes n} — each qubit gets a Hadamard from |0\rangle. Total: n Hadamards.

Cost unitary U_C(\gamma) = e^{-i \gamma H_C}. Because all Z_i Z_j terms commute (they are diagonal in the same basis), the exponential factorises exactly:

U_C(\gamma) = \prod_{(i,j) \in E} e^{i \gamma w_{ij} Z_i Z_j / 2}.

(The constant W/2 in H_C contributes a global phase e^{-i \gamma W/2}, which is unobservable and can be dropped.)

Each factor e^{i \gamma w_{ij} Z_i Z_j / 2} is a standard ZZ rotation, R_{ZZ}(-\gamma w_{ij}) in the convention R_{ZZ}(\theta) = e^{-i \theta Z_i Z_j / 2}. The gate decomposes into a sequence of three standard gates:

R_{ZZ}(\theta) = \mathrm{CNOT}_{i \to j} \cdot R_Z(\theta)_j \cdot \mathrm{CNOT}_{i \to j},

Why this identity: the CNOT sandwich conjugates Z_j into Z_i Z_j, and R_Z(\theta) = e^{-i \theta Z / 2} evolves under that conjugated operator, which is exactly e^{-i \theta Z_i Z_j / 2}. Pen-and-paper verification is a good linear-algebra exercise. where R_Z(\theta) = \mathrm{diag}(e^{-i\theta/2}, e^{i\theta/2}). Two CNOTs and one single-qubit R_Z per edge — that is the per-edge cost of one QAOA layer.
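The identity is easy to verify numerically; this sketch checks the CNOT sandwich against e^{-i\theta Z \otimes Z / 2}, expanded using (Z \otimes Z)^2 = I:

```python
import numpy as np

# Check R_ZZ(theta) = CNOT · (I ⊗ R_Z(theta)) · CNOT equals exp(-i theta Z⊗Z / 2).
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)  # control = first qubit

def rz(theta):
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

theta = 0.7  # arbitrary angle
lhs = CNOT @ np.kron(I2, rz(theta)) @ CNOT
# Since (Z⊗Z)^2 = I:  exp(-i t ZZ/2) = cos(t/2) I - i sin(t/2) Z⊗Z
rhs = np.cos(theta / 2) * np.eye(4) - 1j * np.sin(theta / 2) * np.kron(Z, Z)
assert np.allclose(lhs, rhs)
```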

Mixer unitary U_M(\beta) = e^{-i \beta H_M} with H_M = \sum_j X_j. All X_j commute (they act on different qubits), so

U_M(\beta) = \prod_{j} e^{-i \beta X_j} = \prod_j R_X(2\beta)_j,

where R_X(\theta) = e^{-i \theta X / 2} is the standard X-rotation. One single-qubit gate per qubit — trivially cheap.

[Figure: QAOA circuit on a 4-cycle graph at p = 1 — Hadamards, then one R_{ZZ}(\gamma) per edge, then R_X(2\beta) on each qubit, then measurement.]
QAOA circuit on the 4-cycle graph, $p = 1$. The cost block $U_C(\gamma)$ decomposes into one $R_{ZZ}(\gamma)$ gate per edge (here, 4 gates), each implemented as two CNOTs sandwiching an $R_Z$. The mixer $U_M(\beta)$ is four single-qubit $R_X(2\beta)$ rotations. Depth per layer: about three gate layers (CNOT, $R_Z$, CNOT) per colour class of edges, where the number of colour classes is the graph's chromatic index, plus one layer of $R_X$.

Depth per layer

For an n-vertex graph with m edges and maximum degree d:

  1. Gate count per layer: m R_{ZZ} gates (2m CNOTs and m R_Z rotations) plus n R_X rotations, i.e. 3m + n gates.
  2. Depth per layer: the edges can be partitioned into at most d + 1 colour classes (Vizing's theorem), and all R_{ZZ} gates within a colour class run in parallel, so depth is O(d), independent of n.

This is what makes MaxCut a NISQ-friendly QAOA target: constant-degree graphs give constant-per-layer depth, and small p gives small total depth.
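The parallelisation claim can be sketched as a greedy gate scheduler: pack the per-edge R_{ZZ} gates into rounds where no two gates share a qubit, so each round is a matching. Greedy needs at most 2d - 1 rounds (each edge conflicts with at most 2(d - 1) others); an optimal edge colouring needs at most d + 1.

```python
# Greedy scheduling of per-edge R_ZZ gates into parallel rounds.
# Each round is a matching: no two gates in a round touch the same qubit.
def schedule_rounds(edges):
    rounds = []
    for e in edges:
        for r in rounds:
            if all(set(e).isdisjoint(f) for f in r):  # no shared qubit
                r.append(e)
                break
        else:
            rounds.append([e])
    return rounds

# Example: K4 (3-regular, 6 edges) schedules into 3 rounds of 2 parallel gates.
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
rounds = schedule_rounds(k4)
print(rounds)
```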

Approximation ratios — what QAOA buys you

The approximation ratio is the ratio of the expected cut value QAOA achieves to the true maximum cut value, taken in the worst case over instances of a given family.

p = 1 on 3-regular graphs: 0.6924

Farhi-Goldstone-Gutmann (2014) analysed QAOA analytically at p = 1 on triangle-free 3-regular graphs. Because the p = 1 state e^{-i\beta H_M} e^{-i\gamma H_C} |+\rangle^{\otimes n} has strictly local structure (the expectation of each edge term depends only on a constant-size neighbourhood of that edge), the expected cut value per edge can be computed exactly on a small subgraph.

For a triangle-free 3-regular graph, the optimal (\gamma^*, \beta^*) yields expected cut per edge of approximately 0.6924. Multiplied by the number of edges and compared to the true maximum, the worst-case approximation ratio is 0.6924.

[Figure: QAOA approximation ratio vs depth p on 3-regular MaxCut — 0.692 at p = 1, 0.756 at p = 2, 0.792 at p = 3, rising to cross the Goemans-Williamson line (0.878) near p = 11; random assignment sits at 0.5.]
Worst-case QAOA approximation ratios on 3-regular MaxCut instances vs depth $p$ (the $p = 1$ value proved in Farhi et al. 2014; higher-$p$ values from the numerical study of Wurtz-Lykov 2021). QAOA beats random ($0.5$) immediately at $p = 1$ but only reaches the classical Goemans-Williamson bound ($0.878$) around $p \approx 11$; at depths that large, circuit noise on real NISQ hardware dominates, and the theoretical approximation ratio is not achieved in practice.

p = 2, 3: 0.7559 and 0.7924

The analysis has since been extended (Wurtz and Love, 2021) to p = 2 and p = 3, giving approximation ratios of 0.7559 and 0.7924 respectively on 3-regular graphs. The trend is clear but slow: each additional layer adds a few percentage points, and the improvement diminishes.

p \geq 11: matches Goemans-Williamson empirically

Wurtz and Lykov (2021) pushed QAOA simulation to p = 11 on 3-regular graphs and reported approximation ratios matching Goemans-Williamson (0.878) at that depth. Beyond p = 11, QAOA appears to climb further towards 1.0 — but exact-simulation costs explode as p grows, and experimental NISQ hardware cannot run circuits that deep without noise washing out the signal.

The classical baselines

To be honest about what QAOA achieves, you have to measure it against the classical state of the art.

Random assignment: 0.5 — flip a fair coin for each vertex. This is the floor.

Greedy: \geq 0.5. On adversarial graphs, greedy achieves only 0.5; on typical instances it does much better.

Goemans-Williamson (1995): 0.878, via semidefinite-programming relaxation. The algorithm:

  1. Relaxation. Replace each bit x_i with a unit vector v_i \in \mathbb{R}^n; the cost becomes maximising \sum_{(i,j)} \tfrac{1}{2}(1 - v_i \cdot v_j). This is a convex SDP, solvable in polynomial time.
  2. Rounding. Pick a random hyperplane through the origin; assign x_i = 0 if v_i is on one side, x_i = 1 on the other.
  3. Analysis. The expected cut value from this rounding is at least 0.878 times the optimum.

Goemans-Williamson runs in polynomial time. Under the Unique Games Conjecture (Khot 2002), the ratio 0.878 is optimal for polynomial-time classical algorithms — no classical algorithm can do better on worst-case instances.
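The rounding step is easy to sketch in isolation. The snippet below skips the SDP solve and uses a known optimal embedding for the 5-cycle, vertex k at angle 4\pi k/5 on the unit circle (the standard integrality-gap example; treat the embedding as an assumption rather than solver output):

```python
import numpy as np

# Goemans-Williamson hyperplane rounding, with the SDP solve replaced by a
# fixed embedding: vertex k of the 5-cycle at angle 4*pi*k/5 on the unit circle.
n = 5
edges = [(k, (k + 1) % n) for k in range(n)]
angles = 4 * np.pi * np.arange(n) / 5
vecs = np.stack([np.cos(angles), np.sin(angles)], axis=1)

# SDP objective at this embedding: sum of (1 - v_i . v_j) / 2 over edges.
sdp_value = sum(0.5 * (1 - vecs[i] @ vecs[j]) for i, j in edges)  # ≈ 4.52

rng = np.random.default_rng(0)
cuts = []
for _ in range(500):
    r = rng.normal(size=2)              # random hyperplane normal
    side = vecs @ r > 0                 # rounding: which side of the hyperplane
    cuts.append(sum(1 for i, j in edges if side[i] != side[j]))

ratio = np.mean(cuts) / 4               # MaxCut(C5) = 4 by brute force
print(round(sdp_value, 3), round(ratio, 3))
```

On this tiny instance the rounding happens to recover an optimal cut; the 0.878 guarantee is the worst case over all instances, not typical behaviour.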

Exponential-time exact algorithms: ratio 1.0 but runtime 2^n or worse.

Simulated annealing, tabu search, tuned heuristics: no worst-case bound, but empirically near-optimal on many benchmarks — often matching or beating Goemans-Williamson in practice.

Challenges for QAOA

Parameter optimisation

The classical outer loop has to find (\gamma^*, \beta^*) \in \mathbb{R}^{2p} that maximise the expected cut. The landscape has local optima; naive gradient descent gets stuck. Strategies that help: warm-starting depth p from the optimum found at depth p - 1; the Fourier reparameterisation of Zhou et al. (fewer, smoother parameters; discussed later in this chapter); and parameter concentration, the empirical fact that good angles transfer across instances of the same graph family (the fixed-angle observation of Wurtz-Lykov).

Noise sensitivity

At higher p, NISQ noise accumulates. A p = 11 QAOA circuit on a 100-vertex 3-regular graph (m = 150 edges) has 11 \times 2m = 3300 CNOTs plus 11 \times (m + n) = 2750 single-qubit rotations, over six thousand gates in total, comfortably exceeding NISQ depth budgets. Actual hardware experiments max out at p = 2 or 3 for problems of this size.

No quantum speedup

The deepest challenge: even at p large enough to match Goemans-Williamson's approximation ratio, the classical algorithm runs in polynomial time, so matching it is quantum parity, not speedup. For a genuine speedup, QAOA would need either (a) a strictly better approximation ratio than GW in polynomial time (under the UGC this would mean solving an NP-hard problem efficiently), or (b) GW's ratio at lower runtime, and the shot counts and classical optimisation overheads of current QAOA make that comparison hostile.

Worked examples

Example 1: Full QAOA circuit for a 4-cycle graph at $p = 1$

Setup. Graph: 4 vertices \{0, 1, 2, 3\}, 4 edges forming a cycle: (0,1), (1,2), (2,3), (3,0). Unweighted.

Step 1. Cost Hamiltonian.

H_C = \tfrac{1}{2}[(1 - Z_0 Z_1) + (1 - Z_1 Z_2) + (1 - Z_2 Z_3) + (1 - Z_3 Z_0)] = 2 - \tfrac{1}{2}(Z_0 Z_1 + Z_1 Z_2 + Z_2 Z_3 + Z_3 Z_0).

Step 2. Initial state. Apply H to each qubit starting from |0000\rangle:

|+\rangle^{\otimes 4} = \tfrac{1}{4} \sum_{x \in \{0,1\}^4} |x\rangle.

Equal superposition over all 16 bitstrings.

Step 3. Apply U_C(\gamma). Product of four commuting ZZ rotations:

U_C(\gamma) = e^{i \gamma Z_0 Z_1 / 2} \cdot e^{i \gamma Z_1 Z_2 / 2} \cdot e^{i \gamma Z_2 Z_3 / 2} \cdot e^{i \gamma Z_3 Z_0 / 2},

up to the global phase from the constant 2. Each rotation multiplies the amplitude of basis state |x\rangle by e^{i \gamma (-1)^{x_i + x_j} / 2} — a phase depending on whether that edge is cut in x.

Step 4. Apply U_M(\beta). Product of four single-qubit R_X(2\beta) rotations. Each qubit gets mixed: R_X(2\beta)|0\rangle = \cos\beta |0\rangle - i \sin\beta |1\rangle, and similarly for |1\rangle.

Step 5. Measure. Measure all qubits in the Z basis. Sample bitstrings. For each sample, compute C(x) = number of cut edges.

Step 6. Estimate F_1(\gamma, \beta). Average C(x) over, say, 10^4 samples. This estimates \langle H_C \rangle.

Step 7. Optimise. Grid-search over (\gamma, \beta) \in [0, 2\pi] \times [0, \pi]. Optimum for the 4-cycle at p = 1 is approximately (\gamma^*, \beta^*) = (\pi / 4, \pi / 8), giving expected cut \approx 3.0 (out of 4).

Result. QAOA at p = 1 on the 4-cycle yields expected cut \approx 3.0, approximation ratio \approx 0.75. The true optimum (cut = 4) is achieved only by the bitstrings |0101\rangle and |1010\rangle, which have elevated (but not unit) probability in the output distribution. To reach cut = 4 reliably, increase p.
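The whole example fits in a short statevector simulation in plain numpy, computing \langle H_C \rangle exactly rather than estimating it from samples. A grid search over (\gamma, \beta) recovers the stated optimum of an expected cut of 3.0:

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]      # the 4-cycle
n = 4

# Cut value of each of the 2^4 bitstrings (bit i of the index = x_i).
cuts = np.array([sum(((x >> i) & 1) != ((x >> j) & 1) for i, j in edges)
                 for x in range(2 ** n)], dtype=float)

def expected_cut(gamma, beta):
    psi = np.full(2 ** n, 2.0 ** (-n / 2), dtype=complex)    # |+>^{⊗4}
    psi = psi * np.exp(-1j * gamma * cuts)                   # U_C: diagonal phases
    idx = np.arange(2 ** n)
    for q in range(n):                                       # U_M: R_X(2β) per qubit
        lo = idx[((idx >> q) & 1) == 0]
        hi = lo | (1 << q)
        a, b = psi[lo], psi[hi]
        psi[lo] = np.cos(beta) * a - 1j * np.sin(beta) * b
        psi[hi] = -1j * np.sin(beta) * a + np.cos(beta) * b
    return float(np.sum(cuts * np.abs(psi) ** 2))            # exact <H_C>

# Grid search over (gamma, beta); the grid contains (pi/4, pi/8) exactly.
best = max(expected_cut(g, b)
           for g in np.linspace(0, np.pi, 65)
           for b in np.linspace(0, np.pi / 2, 65))
print(round(best, 6), round(expected_cut(np.pi / 4, np.pi / 8), 6))
```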

Example 2: Expected cut for $p = 1$ on a random 3-regular graph

Setup. Consider a large triangle-free 3-regular graph with n vertices and m = 3n/2 edges. We apply QAOA at p = 1 with the provably optimal (\gamma^*, \beta^*).

Step 1. Use the Farhi et al. formula. For each edge, the expected contribution to \langle H_C \rangle at p = 1 depends only on the local neighbourhood of that edge (the p = 1 light cone). For 3-regular triangle-free graphs, the per-edge expected cut is (1 + C_3)/2, where C_3 = 2/(3\sqrt{3}) \approx 0.3849 at the optimal (\gamma^*, \beta^*).

Step 2. Compute. Per-edge expected cut = (1 + 0.3849)/2 \approx 0.6924. Multiplied by the number of edges: \langle H_C \rangle \approx 0.6924 \, m.

Step 3. Approximation ratio. Compare \langle H_C \rangle \approx 0.6924 \, m with the true maximum. On bipartite 3-regular graphs of large girth (triangle-free, with true maximum cut equal to m, every edge cut), the ratio is exactly 0.6924; such adversarial instances make this the worst case.

Step 4. Compare. Goemans-Williamson achieves 0.878 in worst case on the same instance family, in polynomial time. p = 1 QAOA achieves 0.6924.

Result. On 3-regular MaxCut, p = 1 QAOA carries a provable approximation ratio of 0.6924, worse than classical Goemans-Williamson's 0.878. To match GW, you need p \approx 11, at which point circuit depth becomes a hardware barrier on NISQ machines. This is the central numerical fact of QAOA-on-MaxCut: theoretical progress is real but gradual; the classical crossover depth is deep.

Common confusions

"QAOA solves MaxCut"

It approximates MaxCut. The output is a bitstring whose cut value is close to, but not necessarily equal to, the optimum. Exact solution via QAOA would require p \to \infty with the right schedule, and then runtime becomes unbounded.

"p = 1 QAOA is enough for MaxCut"

Only for very structured instances. On 3-regular graphs, p = 1 gives 0.6924 — which is less than classical GW's 0.878. Real advantage, if any, needs higher p.

"Goemans-Williamson is slow; QAOA is faster"

It is the other way around. Goemans-Williamson is polynomial-time classical. QAOA at matched approximation ratio (p \approx 11) requires circuits too deep to run reliably on present hardware. Classical wins on runtime even when QAOA matches on approximation ratio.

"QAOA has a proven quantum advantage for MaxCut"

No. At any depth where QAOA has been theoretically analysed, classical GW is at least as good (on worst-case 3-regular instances). At depths where QAOA matches GW (p \geq 11), noise dominates NISQ hardware. And on problem instances where both run, well-tuned classical heuristics (simulated annealing, tabu search) usually match or beat either.

"More edges means more QAOA depth"

Not directly. More edges means more gates per layer, but the number of layers p is independent. Depth per layer scales with the edge chromatic number (colouring of edges), not edge count.

The India angle

Indian optimisation research is large — the Indian Railways, BEST (Mumbai bus network), and the power-grid scheduling problems at Power Grid Corporation are all MaxCut-adjacent in structure. Classical optimisation software for these problems is mature (CPLEX, Gurobi, open-source simulated annealing). QAOA's relevance to Indian optimisation industry is therefore not that it solves problems nobody else can — it is that it is a research vehicle for understanding what near-term quantum hardware could do, on problems Indian operations-research teams already understand.

TCS Research (Pune/Mumbai) has published on QAOA for financial portfolio optimisation and scheduling, using IBM cloud hardware. QpiAI (Bangalore) ships a QAOA-based optimisation toolkit targeting logistics clients. IIT Delhi has a quantum-algorithms group working on QAOA variants with problem-specific mixers for graph-colouring problems.

The honest assessment from the Indian industrial side, expressed by TCS and Wipro researchers at NQM conferences in 2024, is that QAOA is a research programme worth tracking, but that no present-day industrial deployment justifies investment in quantum-specific optimisation software over classical heuristics until fault-tolerant hardware arrives. That is the right calibration for an NQM-funded research environment: take quantum optimisation seriously as physics, modestly as technology, and not yet as a product.

Going deeper

The rest of this chapter covers the Farhi-Goldstone-Gutmann derivation of the p = 1 approximation ratio on 3-regular graphs, the Fourier-based parameter heuristic of Zhou et al., the Wurtz-Lykov extrapolation to high p, near-optimal classical approximation algorithms beyond Goemans-Williamson (Barak-Brandao-Harrow-Kelner-Steurer-Zhou 2015), QAOA on structured graph families (regular, random, Erdős-Rényi), and the relationship between QAOA-for-MaxCut and the adiabatic quantum computing framework. This is the PhD-entry view.

The Farhi-Goldstone-Gutmann 2014 derivation (sketch)

At p = 1, the QAOA state is |\psi(\gamma, \beta)\rangle = e^{-i\beta H_M} e^{-i\gamma H_C} |+\rangle^{\otimes n}. Its light cone is depth 2: to evaluate \langle \psi | Z_i Z_j | \psi \rangle for an edge (i, j), you only need the sub-circuit on qubits within the 2-hop neighbourhood of \{i, j\}. For a bounded-degree graph, this is a constant-size quantum calculation — doable by pen and paper (or symbolic computation).

For triangle-free 3-regular graphs, each edge's 2-hop neighbourhood has the same topology (a small tree), so the expectation value is identical across edges. Farhi et al. compute it explicitly:

\langle C_{ij} \rangle = \tfrac{1}{2} + \tfrac{1}{2} \sin(4\beta) \sin(\gamma) \cos^{d-1}(\gamma),

where d = 3 for 3-regular graphs (equivalently, \langle Z_i Z_j \rangle = -\sin(4\beta)\sin(\gamma)\cos^{d-1}(\gamma)). Optimising over (\gamma, \beta) gives \sin(4\beta^*) = 1 and \sin(\gamma^*) = 1/\sqrt{3}, hence a per-edge expected cut of \tfrac{1}{2} + \tfrac{1}{\sqrt{27}} \approx 0.6924.
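Plugging d = 3 into the triangle-free per-edge formula \tfrac{1}{2} + \tfrac{1}{2}\sin(4\beta)\sin(\gamma)\cos^{d-1}(\gamma) and optimising numerically reproduces the constant (the formula itself is the assumption here):

```python
import numpy as np

# Per-edge expected cut for p = 1 QAOA on triangle-free d-regular graphs,
# maximised over gamma after setting sin(4*beta) = 1 (its maximum).
d = 3
gamma = np.linspace(0, np.pi / 2, 20001)
per_edge = 0.5 + 0.5 * np.sin(gamma) * np.cos(gamma) ** (d - 1)
best = per_edge.max()
best_gamma = gamma[per_edge.argmax()]

# Closed-form optimum: sin(gamma*) = 1/sqrt(3)  ->  1/2 + 1/sqrt(27) ≈ 0.6924
print(round(best, 3), round(np.sin(best_gamma), 3))
```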

Fourier parameterisation (Zhou et al. 2020)

The empirical observation: optimal QAOA parameters (\gamma_1^*, \ldots, \gamma_p^*, \beta_1^*, \ldots, \beta_p^*) are smooth functions of the layer index k. Parameterise them as truncated Fourier series:

\gamma_k = \sum_{l=1}^{q} u_l \sin\left(\tfrac{(2l - 1) \pi k}{2p + 1}\right), \quad \beta_k = \sum_{l=1}^{q} v_l \cos\left(\tfrac{(2l - 1) \pi k}{2p + 1}\right),

with q \ll p Fourier coefficients. The optimisation is over (u_l, v_l) \in \mathbb{R}^{2q} — fewer parameters, smoother landscape.
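A sketch of the reparameterisation as stated above; the coefficient values are illustrative, not taken from the paper:

```python
import numpy as np

# Build depth-p QAOA schedules (gamma_k, beta_k) from q << p Fourier
# coefficients u, v, using the truncated series given in the text.
def fourier_schedule(u, v, p):
    k = np.arange(1, p + 1)
    gamma = sum(u_l * np.sin((2 * l - 1) * np.pi * k / (2 * p + 1))
                for l, u_l in enumerate(u, start=1))
    beta = sum(v_l * np.cos((2 * l - 1) * np.pi * k / (2 * p + 1))
               for l, v_l in enumerate(v, start=1))
    return gamma, beta

# q = 1: a single coefficient pair already gives a smooth, annealing-like
# schedule (gamma ramps up, beta ramps down) at any depth p.
gamma, beta = fourier_schedule([0.8], [0.7], p=8)
print(np.round(gamma, 3))
print(np.round(beta, 3))
```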

Wurtz-Lykov extrapolation

Wurtz and Lykov (2021) simulate QAOA on 3-regular graphs up to p = 11, extrapolating the approximation-ratio trend. They find empirical matching of Goemans-Williamson at p \approx 11 and conjecture that QAOA at larger depth could exceed 0.878 (this would not contradict the UGC, which constrains classical polynomial-time algorithms, but the required depths are inaccessible to NISQ hardware). Whether this constitutes a quantum advantage is contested: the comparison requires careful benchmarking against tuned classical heuristics, and the NISQ-accessible depths do not match GW.

Near-optimal classical beyond Goemans-Williamson

Barak, Brandao, Harrow, Kelner, Steurer and Zhou (2015) gave polynomial-time classical algorithms that match or beat the guarantees of low-depth QAOA on bounded-degree instance families. This raises the bar for QAOA: any fixed-depth quantum guarantee invites a classical algorithm tailored to match it, and on worst-case 3-regular instances Goemans-Williamson's 0.878 remains the benchmark.

QAOA on structured graph families

Relationship to adiabatic quantum computing

At p \to \infty, QAOA converges to the adiabatic interpolation H(s) = (1 - s) H_M + s H_C evolved slowly. For MaxCut instances where the minimum spectral gap along this path is \Delta_{\min}, the adiabatic theorem requires total time T = O(1/\Delta_{\min}^2). On hard MaxCut instances, \Delta_{\min} can be exponentially small — so adiabatic MaxCut takes exponential time, and so does its infinite-p QAOA limit. The "finite p is an approximation to adiabatic" framing is honest but doesn't rescue quantum advantage: the adiabatic algorithm itself isn't faster than classical on MaxCut.

Where this leads next

References

  1. Edward Farhi, Jeffrey Goldstone, Sam Gutmann, A Quantum Approximate Optimization Algorithm (2014) — arXiv:1411.4028. Original QAOA paper with the p = 1 MaxCut analysis.
  2. Michel Goemans, David Williamson, Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming (JACM, 1995) — dl.acm.org/doi/10.1145/227683.227684. The classical 0.878 bound.
  3. Jonathan Wurtz, Danylo Lykov, The Fixed Angle Conjecture for QAOA on Regular MaxCut Graphs (2021) — arXiv:2107.00677. High-p numerical results and parameter-concentration analysis.
  4. Leo Zhou et al., Quantum Approximate Optimization Algorithm: Performance, Mechanism, and Implementation on Near-Term Devices (2020) — arXiv:1812.01041. Fourier parameterisation and parameter-concentration heuristics.
  5. John Preskill, Lecture Notes on Quantum Computation, Chapter 7: NISQ algorithms — theory.caltech.edu/~preskill/ph229.
  6. Wikipedia, Maximum cut.