In short
BQP — Bounded-error Quantum Polynomial time — is the class of decision problems a quantum computer can solve in polynomial time with error probability at most 1/3. Formally: a language L is in \mathrm{BQP} if there exists a uniform family of polynomial-size quantum circuits \{C_n\} such that on input x of length n, C_n outputs the right answer with probability at least 2/3. "Uniform" means the circuit for input length n can be generated in polynomial time by a classical Turing machine. The 2/3 threshold is a convention — by running the circuit k times and taking the majority, the error drops to 2^{-\Omega(k)}, so 2/3 and 1 - 2^{-100} give the same class. Known containments: \mathrm{BPP} \subseteq \mathrm{BQP} \subseteq \mathrm{PSPACE} (and tighter, \mathrm{BQP} \subseteq \mathrm{PP}). Known inhabitants: factoring, discrete logarithm, abelian hidden-subgroup problems, Pell's equation, local-Hamiltonian simulation — all via Shor-style or phase-estimation-style algorithms. What is not known: whether \mathrm{BPP} \neq \mathrm{BQP} unconditionally, whether \mathrm{NP} \subseteq \mathrm{BQP} (almost certainly not), whether \mathrm{BQP} \subseteq \mathrm{PH} (probably not — Raz–Tal 2019 give an oracle relative to which the containment fails).
You have seen four quantum algorithms — Deutsch, Deutsch–Jozsa, Bernstein–Vazirani, Simon — followed by Grover's search and Shor's factoring. Each solves a specific problem faster than any known classical algorithm. Each runs, if you count the gates, in time that is polynomial in the input size. And each produces an answer that is usually correct — the final measurement might fail with some small probability, and you may need to repeat to be sure.
Take a step back. What do these algorithms have in common as a class? What is the shared property that makes them "quantum-efficient"? If you wanted to describe, in one definition, the full set of problems a quantum computer can solve efficiently — including all the ones already discovered, and all the ones still to come — what would you write down?
That definition has a name. It is called BQP — Bounded-error Quantum Polynomial time — and it plays the same role in quantum complexity theory that P plays in classical complexity theory. BQP is the class that Shor's algorithm gives strong evidence is bigger than BPP. BQP is the class for which Grover's algorithm supplies a general-purpose search tool. BQP is what complexity theorists spend their time trying to separate from other classes. This chapter defines it carefully, shows what it contains, and sets up the comparison questions — \mathrm{BQP} vs \mathrm{BPP}, \mathrm{BQP} vs \mathrm{NP} — that the next chapters will explore.
The definition, with each word decoded
Here is the formal statement.
BQP — Bounded-error Quantum Polynomial time
A language L \subseteq \{0, 1\}^* is in BQP if there exists a uniform family of polynomial-size quantum circuits \{C_n\}_{n \geq 1} such that for every input x \in \{0, 1\}^n:
- If x \in L: the circuit C_n, run on input |x\rangle \otimes |0\rangle^{\otimes \mathrm{poly}(n)} and measured in the computational basis, outputs 1 (on the designated output qubit) with probability \geq 2/3.
- If x \notin L: the circuit C_n, run on the same input, outputs 1 with probability \leq 1/3.
The word "uniform" means: there exists a deterministic classical Turing machine that, given n in unary, outputs a description of C_n in time polynomial in n.
Every phrase in that definition is doing work. Take them one at a time.
"Language L" — standard complexity-theory speak for a decision problem. L is a set of strings; "x \in L" means "the yes-instances of the problem include x." The question is: given x, decide whether x \in L. For factoring, L might be "pairs (N, k) where N has a factor less than k."
"Uniform family of circuits \{C_n\}" — for each possible input length n, there is one circuit C_n that handles all inputs of that length. This is the circuit model of computation. Unlike a single Turing machine that handles all lengths, the circuit model gives you one machine per length — but requires them to be describable uniformly, meaning a classical algorithm can efficiently generate C_n given n.
"Polynomial-size" — the number of gates in C_n is bounded by some fixed polynomial in n. So C_n might have 10n, n^2, or even n^{100} gates, but not 2^n.
"Uniform" (the modifier) — this is the crucial word that distinguishes BQP from its non-uniform cousin BQP/poly. Uniformity says: the circuits are not an arbitrary sequence; they are generated by a classical algorithm. Without uniformity, a complexity class could "cheat" by encoding arbitrary information — including undecidable problems — into the circuit descriptions themselves. Uniformity ensures the circuits are genuinely computable.
"|x\rangle \otimes |0\rangle^{\otimes \mathrm{poly}(n)}" — the circuit runs on the input encoded as |x\rangle along with a polynomial number of ancilla (scratch) qubits initialised to |0\rangle. The reason for the ancillas is that quantum computation is reversible — you cannot delete information mid-circuit, so you keep extra qubits around to hold intermediate garbage.
"Measured in the computational basis, outputs 1" — at the end of the computation, designate one qubit as the "output" qubit. Measure it (the other qubits are ignored). The outcome is 0 or 1.
"Probability \geq 2/3" — because measurement is random, the circuit does not always give the same answer. BQP requires it be right at least 2/3 of the time on every input.
"Probability \leq 1/3" — symmetrically, on no-instances, the circuit should only mistakenly output 1 with probability at most 1/3.
The 1/3 gap between "yes with probability \geq 2/3" and "yes with probability \leq 1/3" is what "bounded error" means. If the probability could be anywhere between 1/3 and 2/3, you could not distinguish yes from no reliably.
Why 2/3 — the amplification trick
The number 2/3 looks arbitrary. It is. But it does not matter — any constant strictly above 1/2 gives the same class, and you can amplify the success probability to any level you want with only polynomial overhead.
Here is the trick. Suppose you have a BQP algorithm with success probability \geq 2/3. Run it k independent times on the same input x, get k answers a_1, a_2, \ldots, a_k \in \{0, 1\}, and output the majority.
Claim. The majority is wrong with probability at most e^{-k/18}.
The standard proof uses the Chernoff bound. Let X_i = 1 if the i-th run gives the wrong answer, X_i = 0 otherwise. Each X_i is a Bernoulli random variable with \Pr[X_i = 1] \leq 1/3. The majority is wrong only if more than k/2 of the runs are wrong — i.e., \sum_{i} X_i > k/2. The expectation is \mathbb{E}[\sum_i X_i] \leq k/3, and the Chernoff–Hoeffding bound gives:

\Pr\left[\sum_{i=1}^{k} X_i > \frac{k}{2}\right] \leq e^{-2k(1/2 - 1/3)^2} = e^{-k/18}.
Why this works: the Chernoff bound is the classical fact that sums of independent bounded random variables concentrate very sharply around their mean. The mean of "number of wrong runs" is at most k/3, so the chance of exceeding k/2 decays exponentially in k. With k = 500 runs, the error drops below 2^{-30} — much smaller than the probability of a cosmic ray flipping a bit in your hardware.
Consequence. The class BQP defined with threshold 2/3 is the same class defined with threshold 1 - 2^{-100}. The choice of 2/3 is just the cleanest round number above 1/2. It could have been 0.51 or 0.99 — same class.
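The amplification claim is easy to check numerically. A minimal Monte Carlo sketch (the function name and parameters are mine, purely illustrative): each simulated run errs with probability 1/3, and the k-run majority errs far less often.

```python
import random

def majority_amplify(p_wrong=1/3, k=25, trials=20000, seed=0):
    """Estimate the probability that a k-run majority vote is wrong,
    when each independent run errs with probability p_wrong."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        wrong_runs = sum(rng.random() < p_wrong for _ in range(k))
        if wrong_runs > k / 2:   # majority of the k runs were wrong
            errors += 1
    return errors / trials

# Chernoff predicts error <= e^{-k/18}; the decay is visible already at small k.
for k in (1, 5, 25, 49):
    print(k, majority_amplify(k=k))
```

At k = 1 the estimate sits near the raw error 1/3; by k = 49 the majority is almost never wrong.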
The boundary is 1/2: if you merely require success probability strictly above 1/2 — with no fixed gap, so the advantage may shrink exponentially with the input size and cannot be amplified — you get the class PP (unbounded-error probabilistic polynomial time), which is much larger than BPP and BQP.
Equivalent definitions
BQP has three common presentations that define the same class. A fluent reader recognises all three.
1. Uniform circuit families. The definition above — one polynomial-size quantum circuit per input length, generated uniformly by a classical Turing machine. This is the standard modern formulation.
2. Quantum Turing machines. Before the circuit model was preferred, complexity theorists worked with quantum Turing machines — tapes, a finite-state transition function with complex amplitudes instead of probabilities, and unitary evolution instead of deterministic transitions. Bernstein and Vazirani (1993) formalised this model; Yao (1993) proved that BQP-under-QTMs equals BQP-under-uniform-circuit-families. You rarely meet QTMs in practice — the circuit model is cleaner — but they are the historical foundation.
3. Uniform families of "Shor-style" circuits. A constructive way: a problem is in BQP if it can be solved by polynomial classical preprocessing, a polynomial number of applications of Hadamard + Toffoli + phase gates (a universal quantum gate set), and polynomial classical postprocessing. This is the "Shor template" applied generally.
All three are the same class. The Solovay–Kitaev theorem ensures that any "reasonable" finite set of universal quantum gates — Hadamard + CNOT + T, for example — gives the same BQP. The choice of gate set, the choice of qubit dimension (qubits vs qutrits), and the choice of circuit model vs Turing machine all wash out.
What is in BQP
You have already seen several problems known to be in BQP. A consolidated list:
Problems solvable in polynomial time by Shor-style algorithms:
- Factoring. Given N and k, decide whether N has a factor less than k. Shor (1994). Quantum polynomial time, classically sub-exponential.
- Discrete logarithm. Given a group G, elements g, h \in G with h = g^x for some x, and a bound k, decide whether the discrete logarithm x is less than k. Shor (1994). Polynomial quantum, sub-exponential classical.
- Abelian Hidden Subgroup Problem. Given a function f on an abelian group that is constant on cosets of an unknown subgroup H, find H. Generalises factoring, discrete log, and Simon's problem. Polynomial quantum.
- Pell's equation. Find the fundamental solution to x^2 - D y^2 = 1 for a given D. Hallgren (2002). Classical best: sub-exponential. Quantum: polynomial.
- Order-finding. Given a and N coprime, find the smallest r with a^r \equiv 1 \pmod N. Core subroutine of Shor's.
- Computing the class number of a number field (under certain conditions). Polynomial quantum.
Problems solvable by quantum simulation:
- Local Hamiltonian simulation. Given a local Hamiltonian H on n qubits and a time t, simulate the quantum evolution e^{-iHt} for time polynomial in t and the inverse error. Lloyd (1996). This is a BQP-complete problem — as hard as anything in BQP.
- Estimating ground-state energies of certain local Hamiltonians — for some classes, BQP-solvable.
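The simulation task above can be made concrete on a toy instance. The sketch below (numpy only; the Hamiltonian and all names are mine) compares a first-order Trotter product against the exact evolution e^{-iHt} for a small two-qubit local Hamiltonian — the error shrinks as the number of Trotter steps r grows, which is the essence of Lloyd's approach.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def evolve(H, t):
    """Exact e^{-iHt} for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

# Toy 2-qubit local Hamiltonian, split into two easy-to-exponentiate parts.
A = np.kron(X, I2)                          # X on qubit 1
B = np.kron(I2, Z) + 0.5 * np.kron(Z, Z)    # Z on qubit 2 plus a ZZ coupling
H = A + B

# First-order Trotter: (e^{-iA t/r} e^{-iB t/r})^r -> e^{-iHt} as r grows.
t, r = 1.0, 200
step = evolve(A, t / r) @ evolve(B, t / r)
U_trotter = np.linalg.matrix_power(step, r)
error = np.linalg.norm(U_trotter - evolve(H, t), 2)   # spectral norm
print(error)   # O(t^2 / r): small for r = 200
```

A real quantum computer applies each small-angle factor as a short gate sequence instead of multiplying matrices; the Trotter error analysis is the same.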
Problems with modest-but-real speedups (also in BQP, but with classical BPP algorithms close behind):
- Unstructured search (via Grover's) — O(\sqrt{N}) queries instead of \Theta(N). In BQP; classically solvable deterministically with linearly many queries, so the speedup is only quadratic.
- Collision-finding — finding two inputs that hash to the same output in O(N^{1/3}) queries (Brassard–Høyer–Tapp). In BQP.
- Amplitude estimation — estimating the probability of an event in O(1/\epsilon) quantum queries vs O(1/\epsilon^2) classical.
Problems trivially in BQP (because in BPP):
- Primality testing — in P, hence in BPP, hence in BQP. You would never use a quantum computer for this, but BQP contains it.
- Polynomial identity testing — in BPP, hence in BQP.
- Anything in BPP. Every randomised classical algorithm is a quantum algorithm in disguise — the quantum circuit just does not use superposition meaningfully.
The containments — where BQP fits
The known containment relationships involving BQP.
Every inclusion here is proved.
- \mathrm{P} \subseteq \mathrm{BPP} — a deterministic algorithm is a trivial randomised one that ignores its coins.
- \mathrm{BPP} \subseteq \mathrm{BQP} — a randomised classical algorithm is a trivial quantum algorithm. You build the quantum circuit using only CNOT, Toffoli, and Hadamard gates. Hadamard on |0\rangle gives (|0\rangle + |1\rangle)/\sqrt{2}, measure to get a random bit. Use these bits for coin flips; use Toffoli and CNOT for the deterministic classical logic. The quantum simulation of a T-step classical randomised computation uses O(T) gates.
Why this matters: BPP ⊆ BQP is not a "discovery" — it is structural. Anything a classical randomised algorithm does, a quantum algorithm can too, with no overhead. The interesting question is the strict containment: can BQP do more than BPP?
- \mathrm{BQP} \subseteq \mathrm{PSPACE} — Bernstein and Vazirani (1997) proved that any quantum polynomial-time computation can be simulated by a classical algorithm using polynomial space (though possibly exponential time). The idea: the final amplitude on the accept outcome can be written as a sum over exponentially-many paths through the circuit. Classically, you iterate over the paths one at a time, reusing space. Polynomial memory, exponential time.
- \mathrm{BQP} \subseteq \mathrm{PP} — Adleman, DeMarrais, Huang (1997) showed this tighter upper bound. PP is "probabilistic polynomial time with unbounded error" — stranger than BPP, but still a well-defined class. The fact that BQP sits inside PP means that quantum computation, for all its strangeness, is captured by a specific classical probabilistic model.
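The \mathrm{BPP} \subseteq \mathrm{BQP} embedding rests on the fact that a Hadamard followed by a measurement is a fair coin. A minimal statevector sketch (numpy; the function name is mine):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

def quantum_coin(rng):
    """One fair random bit: apply H to |0>, then measure in the
    computational basis. This is the only 'quantum' step a BPP
    simulation needs."""
    state = H @ np.array([1.0, 0.0])     # (|0> + |1>)/sqrt(2)
    probs = np.abs(state) ** 2           # Born rule: [0.5, 0.5]
    return int(rng.choice([0, 1], p=probs))

rng = np.random.default_rng(0)
flips = [quantum_coin(rng) for _ in range(10000)]
print(sum(flips) / len(flips))   # close to 0.5
```

The rest of a BPP computation is deterministic classical logic, which Toffoli and CNOT gates implement reversibly.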
What is not known
- \mathrm{BPP} \stackrel{?}{=} \mathrm{BQP}. Almost certainly not equal — factoring is in BQP and is believed to be outside BPP. But a proof of strict separation would imply major classical separations: since \mathrm{BPP} \subseteq \mathrm{BQP} \subseteq \mathrm{PSPACE}, it would give \mathrm{BPP} \neq \mathrm{PSPACE} and hence \mathrm{P} \neq \mathrm{PSPACE}. As of 2026, this is open.
- \mathrm{NP} \stackrel{?}{\subseteq} \mathrm{BQP}. Almost certainly no. Grover's algorithm gives the best known quantum attack on SAT — roughly \sqrt{2^n} time, still exponential. A polynomial-time quantum algorithm for an NP-complete problem would be a seismic event; none is known. Oracle-based evidence (Bennett–Bernstein–Brassard–Vazirani, and later Aaronson) suggests NP-complete problems are hard quantumly.
- \mathrm{BQP} \stackrel{?}{\subseteq} \mathrm{NP}. Also not known. Quantum computations might accept with no short classical certificate. The generalisation QMA — "quantum Merlin–Arthur" — allows quantum certificates and is the likely home of BQP, but the comparison \mathrm{BQP} \subseteq \mathrm{NP} has no proof.
- \mathrm{BQP} \stackrel{?}{\subseteq} \mathrm{PH}. Raz and Tal (2019) constructed an oracle O such that \mathrm{BQP}^O \not\subseteq \mathrm{PH}^O. This is strong evidence — in the black-box world, quantum outperforms the entire polynomial hierarchy. In the non-relativised world (real uniform complexity) the question is still open.
A brief history
1985 — Deutsch. David Deutsch defines the quantum Turing machine in the paper Quantum theory, the Church–Turing principle, and the universal quantum computer. He also gives the first algorithm (what we now call the Deutsch algorithm) showing quantum computers can do something slightly faster than classical.
1993 — Bernstein–Vazirani. Ethan Bernstein and Umesh Vazirani formalise BQP as a complexity class, prove \mathrm{BPP} \subseteq \mathrm{BQP}, and prove \mathrm{BQP} \subseteq \mathrm{PSPACE}. They also give the Bernstein–Vazirani algorithm, showing a larger speedup than Deutsch's. Yao's equivalence theorem (1993) soon establishes the circuit model as the standard formalism for quantum computation.
1994 — Shor. Peter Shor publishes the polynomial-time quantum algorithm for factoring and discrete log. This is the first concrete evidence that BQP probably contains problems outside BPP. Overnight, quantum computing becomes a field with cryptographic consequences.
1996 — Grover. Lov Grover publishes the \sqrt{N} unstructured search algorithm. BQP gets another named inhabitant.
1997 — Adleman–DeMarrais–Huang. \mathrm{BQP} \subseteq \mathrm{PP} is proved, strengthening Bernstein–Vazirani's earlier \mathrm{BQP} \subseteq \mathrm{PSPACE}.
1997 — Bennett–Bernstein–Brassard–Vazirani (BBBV). The BBBV theorem proves Grover's \sqrt{N} is optimal for unstructured search — no quantum algorithm can beat it. This rules out exponential speedups for NP-style problems in the oracle model.
2002 — Hallgren. Pell's equation is shown to be in BQP. Another structured-problem speedup in the Shor family.
2019 — Raz–Tal. Ran Raz and Avishay Tal construct the first oracle separating BQP from the polynomial hierarchy: \mathrm{BQP}^O \not\subseteq \mathrm{PH}^O for some oracle O. Strong evidence that quantum advantage can exceed what any level of the classical polynomial hierarchy can match.
Between these landmarks there are dozens of smaller results — specific BQP-complete problems (local Hamiltonian, Jones polynomial), specific BQP-hard results (promise problems, sampling problems like BosonSampling), and extensions (QMA, QIP). The modern view places BQP as a rich class with a few clean upper bounds, strong lower-bound conjectures, and a handful of concrete problems whose exact classification drives much current research.
Example 1: Polynomial identity testing is in BPP and therefore in BQP
Polynomial identity testing (PIT) is the problem: given two polynomials p(x_1, \ldots, x_n) and q(x_1, \ldots, x_n) in multiple variables (given as arithmetic circuits, not as expanded polynomials), decide whether they are identical as polynomials.
Why it is in BPP. By the Schwartz–Zippel lemma: if p and q are not identical, then for any finite set S \subseteq \mathbb{Q}, a uniformly random point (r_1, \ldots, r_n) \in S^n satisfies p(r_1, \ldots, r_n) \neq q(r_1, \ldots, r_n) with probability at least 1 - d/|S|, where d is the maximum degree.
The BPP algorithm: sample random r_i's from a large enough set S, evaluate both circuits on the sample, compare. If equal on a random sample, declare them identical (with high probability, correct). If unequal, declare them different (always correct).
Why this belongs to BPP: the algorithm is randomised, polynomial-time (evaluating arithmetic circuits), and correct with probability at least 2/3 (by choosing |S| large enough relative to degree d).
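The Schwartz–Zippel test fits in a few lines. A sketch (function names and the choice |S| = 3d + 1 are mine; any |S| \geq 3d makes the per-trial error at most 1/3):

```python
import random

def pit_equal(p, q, n_vars, degree, trials=20, seed=0):
    """Randomised polynomial identity test via Schwartz-Zippel.
    Each trial errs (declares equal when not) with prob <= d/|S| <= 1/3."""
    rng = random.Random(seed)
    S = range(3 * degree + 1)            # |S| >= 3 * degree
    for _ in range(trials):
        point = [rng.choice(S) for _ in range(n_vars)]
        if p(*point) != q(*point):
            return False                 # a witness: definitely different
    return True                          # identical, with high probability

# (x + y)^2 and x^2 + 2xy + y^2 are the same polynomial:
p = lambda x, y: (x + y) ** 2
q = lambda x, y: x * x + 2 * x * y + y * y
print(pit_equal(p, q, n_vars=2, degree=2))              # True
print(pit_equal(p, lambda x, y: x * x + y * y, 2, 2))   # almost surely False
```

Note the one-sided error: a "different" verdict is always correct; only "identical" carries the small failure probability.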
Why it is also in BQP. Because \mathrm{BPP} \subseteq \mathrm{BQP}. The quantum version would implement exactly the BPP algorithm: generate random bits by Hadamarding |0\rangle's and measuring, then run the classical comparison. No quantum speedup, but full membership.
What this shows about BQP. PIT is a useful sanity check: BQP contains problems that do not need quantum techniques at all. BQP is defined "quantumly" but it is not a class of "problems that require quantum mechanics." It is a class of problems quantum computers can solve efficiently — and that includes all the ones classical randomised computers can already solve.
Result. Polynomial identity testing sits inside BPP, and because \mathrm{BPP} \subseteq \mathrm{BQP}, it is also in BQP. This is an example of BQP-membership with no quantum speedup: membership in BQP certifies that a problem is efficiently quantum-solvable, not that quantum resources are needed. To show a genuine quantum advantage, you need a BQP algorithm for a problem that is not in BPP — and that is exactly what Shor's algorithm is conjectured to achieve for factoring.
Example 2: Factoring is in BQP via Shor's algorithm
Factoring is the canonical example of a BQP-problem with no known polynomial classical algorithm. The BQP-membership is established via Shor's algorithm.
The problem. Given N (an n-bit integer, promised to be composite) and a threshold k < N, decide whether N has a prime factor less than k.
Why this is a decision problem. A yes-instance exists iff the smallest prime factor of N is less than k. Complexity theory works with decision problems, and binary searching over k recovers a full factorisation from polynomially many decision queries.
Shor's algorithm sketch. The algorithm reduces factoring to order-finding: given a coprime to N, find the smallest r with a^r \equiv 1 \pmod N. Order-finding is solved by the following quantum circuit:
- Prepare two registers: the first with 2n qubits in uniform superposition (via Hadamards on |0\rangle^{2n}), the second with n qubits in |0\rangle.
- Apply the modular-exponentiation unitary: |x\rangle |0\rangle \mapsto |x\rangle |a^x \bmod N\rangle. This requires O(n^3) gates classically (via repeated squaring, reversibilised).
- Measure the second register (or defer the measurement — same outcome). The first register collapses to a superposition periodic with period r.
- Apply the inverse quantum Fourier transform to the first register. The QFT is O(n^2) gates.
- Measure the first register. The outcome is (with high probability) close to a multiple of 2^{2n}/r. Continued-fraction expansion recovers r classically.
Gate count. Modular exponentiation is O(n^3) gates; the QFT is O(n^2) gates; classical post-processing is polynomial. Total: O(n^3) gates per run. The success probability per run is \Omega(1/\log\log N), so O(\log\log N) repetitions drive the overall success probability above any fixed constant.
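The classical scaffolding around the order-finding core can be run as-is on tiny inputs, with brute force standing in for the quantum subroutine. A sketch (function names are mine):

```python
from math import gcd

def order(a, N):
    """Classical brute-force order-finding — the one step Shor's
    circuit performs in quantum polynomial time."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_factor(N, a):
    """Shor's classical post-processing: turn the order r of a mod N
    into a nontrivial factor, when r is even and a^{r/2} != -1 mod N."""
    if gcd(a, N) != 1:
        return gcd(a, N)          # lucky: a already shares a factor with N
    r = order(a, N)
    if r % 2 == 1:
        return None               # odd order: retry with another a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None               # trivial square root: retry with another a
    return gcd(y - 1, N)

print(shor_factor(15, 7))   # order of 7 mod 15 is 4; gcd(7^2 - 1, 15) = 3
```

For N = 15 and a = 7 the order is 4, and gcd(7^2 - 1, 15) = 3 is a nontrivial factor of 15.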
Classical hardness. The best known classical factoring algorithm (the General Number Field Sieve) runs in sub-exponential time \exp(O(n^{1/3} (\log n)^{2/3})). Much faster than 2^n, but much slower than polynomial.
Why this proves factoring is in BQP: Shor's gives a uniform family of polynomial-size quantum circuits — one for each input length n — generated from a polynomial-time classical algorithm (given n, write down the modular exponentiation unitary and the QFT unitary). The success probability is above 2/3 after amplification. By the definition of BQP, factoring is in BQP.
Why factoring is conjecturally not in BPP. Fifty years of cryptanalysis have produced no polynomial-time classical algorithm. RSA's security depends on this hardness. The lack of a proof is why we say "conjecturally" — in principle, a clever classical algorithm might still be found.
Result. Factoring is in BQP. No polynomial-time classical algorithm is known for factoring; the best classical algorithms are sub-exponential. If factoring turned out to be in BPP, this particular piece of evidence for separating the classes would evaporate — but no BPP algorithm has been found, and most researchers believe none exists. Factoring is thus the cleanest candidate for a problem in BQP ∖ BPP — an efficient quantum algorithm for a problem with no known efficient classical algorithm. This is the source of Shor's cryptographic impact: if factoring becomes tractable in practice (a fault-tolerant quantum computer is built), every RSA-based system must migrate to post-quantum cryptography. India's National Quantum Mission includes exactly this transition as one of its strategic priorities.
Common confusions
- "BQP is a faster quantum version of NP." No. BQP and NP are different kinds of classes. BQP is defined by what a machine can efficiently compute with bounded error; NP is defined by what a verifier can efficiently check given a hint. These are orthogonal characterisations. NP includes problems like SAT that may not be in BQP (Grover's gives only quadratic speedup); BQP includes problems like quantum simulation that may not be in NP (no obvious classical certificate).
- "Quantum computers solve NP in polynomial time." No evidence. The best quantum algorithm for NP-complete problems is Grover's, running in roughly \sqrt{2^n} time — still exponential. Complexity theorists believe \mathrm{NP} \not\subseteq \mathrm{BQP}. A polynomial-time quantum SAT solver would be among the most surprising results in theoretical computer science.
- "BQP is 'tractable' for quantum — analogous to P for classical." Partly right. BQP is the formal analogue of P when "efficient algorithm" means "efficient quantum algorithm with bounded error." The analogue of deterministic P for quantum computers is sometimes called EQP (exact quantum polynomial time — zero error); EQP is contained in BQP, and the containment is believed, though not proven, to be strict.
- "BQP circuits have to use only Hadamard and CNOT." No — they use any universal gate set. Hadamard + Toffoli + Phase gate (S), or Hadamard + CNOT + T, or Hadamard + CNOT + some single-qubit rotation by an irrational angle — all are universal. The Solovay–Kitaev theorem says they all give the same class.
- "BQP depends on the choice of gate set." It does not. Any universal gate set gives the same BQP (up to polylogarithmic overhead to approximate one gate set by another). The Solovay–Kitaev theorem is precisely this robustness statement.
- "The 2/3 in the definition is fundamental." No — any constant strictly greater than 1/2 gives the same class via amplification. The number 2/3 is a convention.
- "BQP is the class of problems a real quantum computer can solve today." No. BQP is an asymptotic class — it describes what a hypothetical fault-tolerant quantum computer can do in polynomial time. Today's NISQ machines (Noisy Intermediate-Scale Quantum) cannot reliably factor a 2048-bit integer yet. BQP is theoretical; practical quantum computing is still in its early stages.
Going deeper
You have the formal definition, the amplification trick, the equivalent formulations, the known containments, and the list of BQP-inhabitants. The rest of this section collects the more technical content: the proof of BQP ⊆ PSPACE, the BQP amplification via phase estimation (stronger than majority vote), the distinction between promise problems and decision problems, the QMA extension, the oracle separations, the question of BQP-complete problems, and intermediate classes like BosonSampling that sit near BQP but are not exactly BQP.
The BQP ⊆ PSPACE proof
The proof uses the path integral, or Feynman path, interpretation of quantum circuits. Any quantum circuit can be decomposed into gates G_1, \ldots, G_m from a finite universal set. The amplitude of a specific output bitstring y on input x is a sum over paths:

\langle y | G_m \cdots G_1 | x \rangle = \sum_{s_1, \ldots, s_{m-1}} \langle y | G_m | s_{m-1} \rangle \cdots \langle s_2 | G_2 | s_1 \rangle \langle s_1 | G_1 | x \rangle,

and the output probability is the squared magnitude of this sum.
where each "path" is a sequence of basis-state transitions through the circuit (one transition per gate). The sum has exponentially many paths, but each individual path's amplitude is a product of a polynomial number of complex numbers (gate entries) and can be computed in polynomial space.
So a classical simulator can iterate over all paths, maintaining a running sum of amplitudes and freeing the memory of each path after its contribution is added. Total space: polynomial (to hold one path at a time). Total time: exponential. This puts BQP inside PSPACE.
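The path-sum simulation is short enough to write out for toy circuits. The sketch below (names are mine) holds only one path's running amplitude at a time, mirroring the space-efficiency argument, while the iteration over paths is exponential in the number of gates.

```python
import itertools
import numpy as np

def path_sum_amplitude(gates, x, y, n):
    """Compute <y| G_m ... G_1 |x> by summing over all Feynman paths:
    sequences of intermediate basis states, one per gate boundary.
    Space: one path at a time. Time: exponential in the gate count."""
    dim = 2 ** n
    total = 0j
    for path in itertools.product(range(dim), repeat=len(gates) - 1):
        states = (x,) + path + (y,)
        amp = 1 + 0j
        for t, G in enumerate(gates):
            amp *= G[states[t + 1], states[t]]   # one gate-entry per step
            if amp == 0:
                break                            # dead path, contributes 0
        total += amp
    return total

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
# Two Hadamards on one qubit compose to the identity,
# so |<0|HH|0>| should be 1 (and <1|HH|0> should vanish).
print(abs(path_sum_amplitude([H, H], x=0, y=0, n=1)))  # close to 1.0
```

The cancellation of the two paths leading to |1\rangle is exactly the interference that the classical simulator reproduces term by term.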
BQP amplification via phase estimation
Majority vote gives exponential error reduction; amplitude amplification gives a quadratically better dependence on the gap. If the base circuit succeeds with probability p = 1/2 + \delta, majority vote needs O(\log(1/\epsilon)/\delta^2) repetitions to reach error \epsilon, while Grover-style amplitude amplification applied to the success branch of the BQP circuit (in the style of Marriott–Watrous) needs only O(\log(1/\epsilon)/\delta). The saving matters most when the gap \delta is small.
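The mechanics behind amplitude amplification are pure trigonometry: write the initial success probability as p = \sin^2\theta; each round advances the angle by 2\theta, so m rounds give success probability \sin^2((2m+1)\theta). A numeric sketch (function name is mine):

```python
import math

def amplified_success(p, m):
    """Success probability after m rounds of amplitude amplification,
    starting from a circuit that succeeds with probability p:
    p_m = sin^2((2m+1) * theta), with theta = arcsin(sqrt(p))."""
    theta = math.asin(math.sqrt(p))
    return math.sin((2 * m + 1) * theta) ** 2

# A weak acceptor (p = 0.05) is driven close to certainty in 3 rounds --
# roughly sqrt(1/p) rounds, versus about 1/p classical repetitions.
for m in range(4):
    print(m, round(amplified_success(0.05, m), 4))
```

Too many rounds overshoot (the angle rotates past \pi/2), which is why the round count must be calibrated to p.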
Promise problems and BQP
A promise problem is like a decision problem but with an implicit "promise" that the input satisfies some condition — on non-conforming inputs, the algorithm's behaviour is unspecified. Many quantum algorithms (Deutsch–Jozsa, Simon's) solve promise problems, not decision problems.
The promise-problem version of BQP is called PromiseBQP. Promise problems strictly generalise languages (every language is a promise problem with a trivial promise), so PromiseBQP is the more flexible class — but for practical purposes the difference is small, and most natural statements about BQP apply equally to PromiseBQP.
QMA — quantum verifier
QMA (Quantum Merlin–Arthur) is the quantum analogue of NP. A problem is in QMA if for every yes-instance there exists a short quantum state (the "Merlin proof") such that a polynomial-time quantum verifier (Arthur) accepts with high probability, and for no-instances the verifier rejects all quantum states with high probability. QMA contains BQP (trivially — the prover sends |0\rangle's and Arthur ignores them) and is contained in PSPACE. QMA-complete problems include the Local Hamiltonian problem.
Oracle separations
Several landmark oracle separations involving BQP:
- Simon's problem (1994): \mathrm{BQP}^O \not\subseteq \mathrm{BPP}^O for O the Simon oracle — an exponential oracle separation between quantum and classical bounded-error classes (Bernstein–Vazirani's recursive Fourier sampling had given a superpolynomial separation shortly before).
- Forrelation (introduced by Aaronson, studied further by Aaronson–Ambainis): the candidate problem for separating BQP from PH in the oracle model. Raz–Tal (2019) used a variant of it to prove the separation, with an exponential gap.
- Glued trees (Childs, Cleve, Deotto, Farhi, Gutmann, Spielman 2003): exponential oracle separation for a graph-traversal problem — also known as the welded-tree problem — solvable in polynomially many quantum queries by a quantum walk.
These separations are in the oracle (relativised) model. Whether they translate to the non-oracle (uniform) world is unknown — this is analogous to the fact that oracle separations between P and NP exist but do not resolve the uniform P vs NP question.
BQP-complete problems
A problem is BQP-complete if it is in BQP and every BQP problem reduces to it in polynomial time. BQP-complete problems are believed to be the hardest problems in BQP. Known BQP-complete problems include:
- Local Hamiltonian simulation. Given a local Hamiltonian H, a time t, and a precision \epsilon, simulate e^{-iHt} to precision \epsilon in time polynomial in all three.
- Approximating the Jones polynomial at specific roots of unity. The Jones polynomial is a topological invariant of knots; approximating its value at e^{2\pi i / 5}, for instance, is BQP-complete (Aharonov, Jones, Landau 2006).
- Computing matrix elements of sparse unitaries.
BQP-complete problems show BQP is "natural" — it has problems that are complete for it, analogous to SAT for NP. Without BQP-complete problems, you might worry that BQP is just an arbitrary collection; with them, BQP has structural character.
The gate-count, depth, and width trade-offs
BQP is defined via polynomial-size circuits, where "size" is total gate count. Two other measures matter in practice:
- Depth — the number of time-steps in the circuit. Many quantum algorithms have polynomial size but logarithmic depth when parallelism is exploited. The class QNC (analogous to classical NC) captures logarithmic-depth quantum computation.
- Width — the number of qubits used. For a fixed problem, width and depth can be traded: using more qubits allows shallower circuits, and vice versa.
These finer-grained measures matter for near-term quantum hardware where gate counts are the bottleneck. For theoretical BQP, only asymptotic polynomial size matters.
BosonSampling and other "intermediate" classes
BosonSampling is a restricted model of quantum computing — photons passing through a linear-optical network — that is weaker than full BQP but still does something no efficient classical algorithm can (under standard complexity-theoretic assumptions). BosonSampling is not in BQP in the traditional sense; it samples from a distribution rather than decides a language. But it captures a similar "quantum advantage" signal in the sampling regime.
IQP (Instantaneous Quantum Polynomial time) and DQC1 (Deterministic Quantum Computing with one clean qubit) are other intermediate models — weaker than full BQP but still believed to be hard to simulate classically. Each exists to capture what you can do with restricted quantum resources. Their hardness reductions often imply classical hardness of sampling, which is the foundation of "quantum supremacy" experiments.
Indian context — Indian theoretical quantum complexity
Quantum complexity is a focus area at several Indian institutions:
- IIT Madras and IISc Bangalore run theoretical quantum information groups — work on communication complexity lower bounds, separation results, and quantum algorithms for linear-algebraic problems.
- TIFR Mumbai hosts some of the strongest Indian complexity theorists, including researchers on promise-problem quantum complexity and structural results.
- IIIT Bangalore runs a Centre for Quantum Science and Technology that includes quantum complexity among its agendas.
The National Quantum Mission (2023, ₹6000 crore budget over 8 years) explicitly funds theoretical quantum complexity research, alongside quantum communication, quantum sensing, and quantum hardware. BQP and its neighbours are part of a research tradition India is now scaling up.
Where this leads next
- BQP vs BPP — the comparison with classical randomised polynomial time. Factoring as evidence; oracle separations as the formal strongest-known result.
- BQP vs NP — the question of whether quantum computers solve NP-complete problems. Grover's quadratic bound. Complexity-theoretic barriers.
- Complexity Classes Recap — the classical classes (P, BPP, NP, PSPACE) that BQP is defined relative to.
- Lessons about quantum speedups — how the four oracle algorithms illustrate what makes BQP powerful.
- Shor's algorithm — the canonical BQP algorithm with a full circuit walkthrough.
- Grover's algorithm — the quadratic-speedup BQP algorithm for unstructured search.
References
- Bernstein and Vazirani, Quantum complexity theory (1993, 1997) — the paper formalising BQP and proving \mathrm{BQP} \subseteq \mathrm{PSPACE}. arXiv:quant-ph/9701001.
- Wikipedia, BQP — the standard encyclopaedic treatment.
- John Preskill, Lecture Notes on Quantum Computation, Ch. 6 — theory.caltech.edu/~preskill/ph229.
- Sanjeev Arora and Boaz Barak, Computational Complexity: A Modern Approach, Ch. 10 (quantum complexity) — theory.cs.princeton.edu/complexity.
- Ran Raz and Avishay Tal, Oracle separation of BQP and PH (2019) — the landmark oracle separation. ECCC TR18-107.
- Wikipedia, Complexity class — overview placing BQP among classical and quantum classes.