In short

BQP — Bounded-error Quantum Polynomial time — is the class of decision problems a quantum computer can solve in polynomial time with error probability at most 1/3. Formally: a language L \in \mathrm{BQP} if there exists a uniform family of polynomial-size quantum circuits \{C_n\} such that on input x of length n, C_n outputs the right answer with probability at least 2/3. "Uniform" means the circuit for input length n can be generated in polynomial time by a classical Turing machine. The 2/3 threshold is a convention — by running the circuit k times and taking the majority, the error drops to 2^{-\Omega(k)}, so 2/3 and 1 - 2^{-100} give the same class. Known containments: \mathrm{BPP} \subseteq \mathrm{BQP} \subseteq \mathrm{PSPACE} (and tighter, \mathrm{BQP} \subseteq \mathrm{PP}). Known inhabitants: factoring, discrete logarithm, abelian hidden-subgroup problems, Pell's equation, local-Hamiltonian simulation — all via Shor-style or phase-estimation-style algorithms. What is not known: whether \mathrm{BPP} \neq \mathrm{BQP} unconditionally, whether \mathrm{NP} \subseteq \mathrm{BQP} (almost certainly not), whether \mathrm{BQP} \subseteq \mathrm{PH} (probably not — Raz–Tal 2019 gives a contrary oracle separation).

You have seen four quantum algorithms — Deutsch, Deutsch–Jozsa, Bernstein–Vazirani, Simon — followed by Grover's search and Shor's factoring. Each solves a specific problem faster than any known classical algorithm. Each runs, if you count the gates, in time that is polynomial in the input size. And each produces an answer that is usually correct — the final measurement might fail with some small probability, and you may need to repeat to be sure.

Take a step back. What do these algorithms have in common as a class? What is the shared property that makes them "quantum-efficient"? If you wanted to describe, in one definition, the full set of problems a quantum computer can solve efficiently — including all the ones already discovered, and all the ones still to come — what would you write down?

That definition has a name. It is called BQP — Bounded-error Quantum Polynomial time — and it plays the same role in quantum complexity theory that P plays in classical complexity theory. BQP is the class that Shor's algorithm gives strong evidence is bigger than BPP. BQP is the class whose power Grover's algorithm showcases with a general-purpose tool. BQP is what complexity theorists spend their time trying to separate from other classes. This chapter defines it carefully, shows what it contains, and sets up the comparison questions — \mathrm{BQP} vs \mathrm{BPP}, \mathrm{BQP} vs \mathrm{NP} — that the next chapters will explore.

The definition, with each word decoded

Here is the formal statement.

BQP — Bounded-error Quantum Polynomial time

A language L \subseteq \{0, 1\}^* is in BQP if there exists a uniform family of polynomial-size quantum circuits \{C_n\}_{n \geq 1} such that for every input x \in \{0, 1\}^n:

  • If x \in L: the circuit C_n, run on input |x\rangle \otimes |0\rangle^{\otimes \mathrm{poly}(n)} and measured in the computational basis, outputs 1 (on the designated output qubit) with probability \geq 2/3.
  • If x \notin L: the circuit C_n, run on the same input, outputs 1 with probability \leq 1/3.

The word "uniform" means: there exists a deterministic classical Turing machine that, given n in unary, outputs a description of C_n in time polynomial in n.

Every phrase in that definition is doing work. Take them one at a time.

"Language L" — standard complexity-theory speak for a decision problem. L is a set of strings; "x \in L" means "the yes-instances of the problem include x." The question is: given x, decide whether x \in L. For factoring, L might be "pairs (N, k) where N has a factor less than k."

"Uniform family of circuits \{C_n\}" — for each possible input length n, there is one circuit C_n that handles all inputs of that length. This is the circuit model of computation. Unlike a single Turing machine that handles all lengths, the circuit model gives you one circuit per length — but requires the circuits to be describable uniformly, meaning a classical algorithm can efficiently generate C_n given n.

"Polynomial-size" — the number of gates in C_n is bounded by some fixed polynomial in n. So C_n might have 10n, n^2, or even n^{100} gates, but not 2^n.

"Uniform" (the modifier) — this is the crucial word that distinguishes BQP from its non-uniform cousin BQP/poly. Uniformity says: the circuits are not an arbitrary sequence; they are generated by a classical algorithm. Without uniformity, a complexity class could "cheat" by encoding arbitrary information — including undecidable problems — into the circuit descriptions themselves. Uniformity ensures the circuits are genuinely computable.
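Uniformity is easy to picture in code. Here is a toy sketch of a classical polynomial-time generator that, given n, writes down a description of C_n — the particular gate list (a Hadamard layer, a CNOT ladder) is purely illustrative and not any real algorithm's circuit:

```python
def generate_circuit(n):
    """Classical poly-time generator: n -> description of C_n.

    The circuit content is a toy; what matters for uniformity is that a
    classical algorithm emits the description in time polynomial in n."""
    gates = [("H", i) for i in range(n)]                 # one layer of Hadamards
    gates += [("CNOT", i, i + 1) for i in range(n - 1)]  # entangling ladder
    gates.append(("MEASURE", 0))                         # designated output qubit
    return gates
```

A non-uniform family would be an arbitrary infinite table of circuit descriptions with no generator at all — that is exactly what uniformity forbids.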

"|x\rangle \otimes |0\rangle^{\otimes \mathrm{poly}(n)}" — the circuit runs on the input encoded as |x\rangle along with a polynomial number of ancilla (scratch) qubits initialised to |0\rangle. The reason for the ancillas is that quantum computation is reversible — you cannot delete information mid-circuit, so you keep extra qubits around to hold intermediate garbage.

"Measured in the computational basis, outputs 1" — at the end of the computation, designate one qubit as the "output" qubit. Measure it (the other qubits are ignored). The outcome is 0 or 1.

"Probability \geq 2/3" — because measurement is random, the circuit does not always give the same answer. BQP requires it be right at least 2/3 of the time on every input.

"Probability \leq 1/3" — symmetrically, on no-instances, the circuit should only mistakenly output 1 with probability at most 1/3.

The 1/3 gap between "yes with probability \geq 2/3" and "yes with probability \leq 1/3" is what "bounded error" means. If the probability could be anywhere between 1/3 and 2/3, you could not distinguish yes from no reliably.

[Figure: the BQP definition, schematically. The input x feeds into a uniformly generated, polynomial-size quantum circuit C_n; for x ∈ L the output qubit reads 1 with probability ≥ 2/3, for x ∉ L with probability ≤ 1/3, and the band around the 1/2 threshold is marked "forbidden region — no bounded error possible".]
BQP in one picture. The input $x$ is fed into a uniformly-generated polynomial-size quantum circuit. For yes-instances, the output qubit measures to $1$ with probability at least $2/3$; for no-instances, with probability at most $1/3$. The gap between these thresholds is what makes the class "bounded error" and the classification reliable.

Why 2/3 — the amplification trick

The number 2/3 looks arbitrary. It is. But it does not matter — any constant strictly above 1/2 gives the same class, and you can amplify the success probability to any level you want with only polynomial overhead.

Here is the trick. Suppose you have a BQP algorithm with success probability \geq 2/3. Run it k independent times on the same input x, get k answers a_1, a_2, \ldots, a_k \in \{0, 1\}, and output the majority.

Claim. The majority is wrong with probability at most e^{-k/18}.

The standard proof uses the Chernoff–Hoeffding bound. Let X_i = 1 if the i-th run gives the wrong answer, X_i = 0 otherwise. Each X_i is a Bernoulli random variable with \Pr[X_i = 1] \leq 1/3. The majority is wrong only if at least half of the runs are wrong — i.e., \sum_{i} X_i \geq k/2, which exceeds the expectation \mathbb{E}[\sum X_i] \leq k/3 by at least k/6. Hoeffding's inequality then gives:

\Pr\left[ \sum X_i \geq k/2 \right] \leq \exp\left( -\frac{2(k/6)^2}{k} \right) = \exp\left(-\frac{k}{18}\right).

Why this works: Hoeffding's inequality is the classical fact that sums of independent bounded random variables concentrate sharply around their mean. The mean of "number of wrong runs" is at most k/3, so the chance of exceeding k/2 decays exponentially in k. With k = 500 runs, the error drops below 2^{-40} — much smaller than the probability of a cosmic ray flipping a bit in your hardware.

Consequence. The class BQP defined with threshold 2/3 is the same class defined with threshold 1 - 2^{-100}. The choice of 2/3 is just the cleanest round number above 1/2. It could have been 0.51 or 0.99 — same class.
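The exponential decay is easy to check exactly, without any bound. A short sketch, assuming the worst-case per-run error probability of 1/3:

```python
from math import comb, exp

def majority_error(k, p_wrong=1/3):
    """Exact probability that the majority of k independent runs is wrong,
    when each run errs independently with probability p_wrong.

    Counts outcomes with at least ceil(k/2) wrong runs (for even k this
    conservatively counts ties as errors); use odd k for a clean majority."""
    threshold = -(-k // 2)  # ceil(k/2)
    return sum(comb(k, i) * p_wrong**i * (1 - p_wrong)**(k - i)
               for i in range(threshold, k + 1))

# one run errs with probability 1/3; a few hundred runs essentially never do,
# and the exact value always sits below the Hoeffding bound exp(-k/18)
```

Plugging in k = 101 or k = 201 shows the error already far below anything hardware can match.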

The boundary is 1/2: if you only require the success probability to exceed 1/2 — by a margin that may shrink exponentially with the input size, so it cannot be amplified — you get the class PP (unbounded-error probabilistic polynomial time), which contains both BPP and BQP and is believed to be far larger than either.

[Figure: amplification shrinks the error exponentially in the repeat count. Error probability on a log scale against the number of repetitions k: starting from 1/3 at k = 1, the majority-vote error decays exponentially, reaching astronomically small values within a few hundred repetitions.]
Amplification by majority vote. Running a BQP circuit $k$ times and outputting the majority drops the error probability from $1/3$ to $\exp(-k/18)$. A few hundred repetitions gets you astronomical reliability — tighter than any hardware error rate you will ever see in practice.

Equivalent definitions

BQP has three common presentations that define the same class. A fluent reader recognises all three.

1. Uniform circuit families. The definition above — one polynomial-size quantum circuit per input length, generated uniformly by a classical Turing machine. This is the standard modern formulation.

2. Quantum Turing machines. Before the circuit model was preferred, complexity theorists worked with quantum Turing machines — tapes, a finite-state transition function with complex amplitudes instead of probabilities, and unitary evolution instead of deterministic transitions. Bernstein and Vazirani (1993) formalised this model, and Yao (1993) proved that BQP-under-QTMs equals BQP-under-uniform-circuit-families. You rarely meet QTMs in practice — the circuit model is cleaner — but they are the historical foundation.

3. Uniform families of "Shor-style" circuits. A constructive way: a problem is in BQP if it can be solved by polynomial classical preprocessing, a polynomial number of applications of Hadamard + Toffoli + phase gates (a universal quantum gate set), and polynomial classical postprocessing. This is the "Shor template" applied generally.

All three are the same class. The Solovay–Kitaev theorem ensures that any "reasonable" finite set of universal quantum gates — Hadamard + CNOT + T, for example — gives the same BQP. The choice of gate set, the choice of qubit dimension (qubits vs qutrits), and the choice of circuit model vs Turing machine all wash out.

What is in BQP

You have already seen several problems known to be in BQP. A consolidated list:

Problems solvable in polynomial time by Shor-style algorithms:

  • Factoring and discrete logarithm (Shor, 1994).
  • Abelian hidden-subgroup problems — the common generalisation of both.
  • Pell's equation (Hallgren, 2002).

Problems solvable by quantum simulation:

  • Simulating the dynamics of local Hamiltonians — quantum systems with local interactions.

Problems with modest-but-real speedups (also in BQP, but with classical BPP algorithms close behind):

  • Unstructured search (Grover, 1996) — a quadratic, not exponential, speedup.

Problems trivially in BQP (because in BPP):

  • Everything classical randomised computation already handles efficiently — primality testing, polynomial identity testing, and all of P.

The containments — where BQP fits

The known containment relationships involving BQP are:

\mathrm{P} \subseteq \mathrm{BPP} \subseteq \mathrm{BQP} \subseteq \mathrm{PP} \subseteq \mathrm{PSPACE}.

Every inclusion here is proved.

Why this matters: BPP ⊆ BQP is not a "discovery" — it is structural. Anything a classical randomised algorithm does, a quantum algorithm can too, with no overhead. The interesting question is the strict containment: can BQP do more than BPP?

What is not known

Three open questions dominate. Whether \mathrm{BPP} \neq \mathrm{BQP} unconditionally — factoring is strong evidence, not a proof. Whether \mathrm{NP} \subseteq \mathrm{BQP} — almost certainly not, given the BBBV lower bound. And whether \mathrm{BQP} \subseteq \mathrm{PH} — probably not, given the Raz–Tal oracle separation.

[Figure: BQP in the complexity landscape. A nested Hasse diagram with PSPACE at the top, PP below it, then BQP and NP side by side with no containment between them, BPP below BQP, and P at the bottom; solid arrows mark proved containments, dashed arrows conjectured ones.]
BQP's position in the complexity Hasse diagram. Solid arrows are proved strict-or-equal containments; dashed arrows are conjectured. BQP is above BPP (almost certainly strictly, with factoring as witness) and below PP (proved). It is incomparable with NP in terms of known containment — no proof either way.

A brief history

1985 — Deutsch. David Deutsch defines the quantum Turing machine in the paper Quantum theory, the Church–Turing principle, and the universal quantum computer. He also gives the first algorithm (what we now call the Deutsch algorithm) showing quantum computers can do something slightly faster than classical.

1993 — Bernstein–Vazirani. Ethan Bernstein and Umesh Vazirani formalise BQP as a complexity class, prove \mathrm{BPP} \subseteq \mathrm{BQP}, and prove \mathrm{BQP} \subseteq \mathrm{PSPACE}. They also give the Bernstein–Vazirani algorithm, showing a larger speedup than Deutsch's. Together with Yao's 1993 equivalence of quantum Turing machines and circuits, this establishes the circuit model as the standard formalism.

1994 — Shor. Peter Shor publishes the polynomial-time quantum algorithm for factoring and discrete log. This is the first concrete evidence that BQP probably contains problems outside BPP. Overnight, quantum computing becomes a field with cryptographic consequences.

1996 — Grover. Lov Grover publishes the \sqrt{N} unstructured search algorithm. BQP gets another named inhabitant.

1997 — Adleman–DeMarrais–Huang. \mathrm{BQP} \subseteq \mathrm{PP} is proved, strengthening Bernstein–Vazirani's earlier \mathrm{BQP} \subseteq \mathrm{PSPACE}.

1997 — Bennett–Bernstein–Brassard–Vazirani (BBBV). The BBBV theorem proves Grover's \sqrt{N} is optimal for unstructured search — no quantum algorithm can beat it. This rules out exponential speedups for NP-style problems in the oracle model.

2002 — Hallgren. Pell's equation is shown to be in BQP. Another structured-problem speedup in the Shor family.

2019 — Raz–Tal. Ran Raz and Avishay Tal construct the first oracle separating BQP from the polynomial hierarchy: \mathrm{BQP}^O \not\subseteq \mathrm{PH}^O for some oracle O. Strong evidence that quantum advantage can exceed what any level of the classical polynomial hierarchy can match.

Between these landmarks there are dozens of smaller results — specific BQP-complete problems (approximating the Jones polynomial), QMA-complete problems (the Local Hamiltonian problem), hardness results for promise and sampling problems (like BosonSampling), and extensions (QMA, QIP). The modern view places BQP as a rich class with a few clean upper bounds, strong lower-bound conjectures, and a handful of concrete problems whose exact classification drives much current research.

Example 1: Polynomial identity testing is in BPP and therefore in BQP

Polynomial identity testing (PIT) is the problem: given two polynomials p(x_1, \ldots, x_n) and q(x_1, \ldots, x_n) in multiple variables (given as arithmetic circuits, not as expanded polynomials), decide whether they are identical as polynomials.

Why it is in BPP. By the Schwartz–Zippel lemma: if p and q are not identical, then for any finite set S \subseteq \mathbb{Q}, a uniformly random point (r_1, \ldots, r_n) \in S^n satisfies p(r_1, \ldots, r_n) \neq q(r_1, \ldots, r_n) with probability at least 1 - d/|S|, where d is the maximum degree.

The BPP algorithm: sample random r_i's from a large enough set S, evaluate both circuits on the sample, compare. If equal on a random sample, declare them identical (with high probability, correct). If unequal, declare them different (always correct).

Why this belongs to BPP: the algorithm is randomised, polynomial-time (evaluating arithmetic circuits), and correct with probability at least 2/3 (by choosing |S| large enough relative to degree d).
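A minimal sketch of this algorithm in Python — the "arithmetic circuits" here are stand-in callables, and `trials` repetitions play the role of the amplification step:

```python
import random

def pit_equal(p, q, n_vars, degree, trials=20):
    """Schwartz–Zippel identity test. p, q are callables standing in for
    arithmetic circuits. Per trial, unequal polynomials slip through with
    probability <= degree/|S|; repeating makes that negligible."""
    S = range(10 * degree + 1)          # |S| well above the degree
    for _ in range(trials):
        point = [random.choice(S) for _ in range(n_vars)]
        if p(*point) != q(*point):
            return False                # witness point: definitely different
    return True                         # agreed on every sample: same w.h.p.

# identical pair:  x^2 y + x y^2 == xy(x + y)
# different pair:  x^2 y + x y^2 != xy(x - y)
```

Note the one-sided error structure: "different" is always correct, "identical" is correct with high probability — exactly the shape the BPP definition allows.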

Why it is also in BQP. Because \mathrm{BPP} \subseteq \mathrm{BQP}. The quantum version would implement exactly the BPP algorithm: generate random bits by Hadamarding |0\rangle's and measuring, then run the classical comparison. No quantum speedup, but full membership.

What this shows about BQP. PIT is a useful sanity check: BQP contains problems that do not need quantum techniques at all. BQP is defined "quantumly" but it is not a class of "problems that require quantum mechanics." It is a class of problems quantum computers can solve efficiently — and that includes all the ones classical randomised computers can already solve.

[Figure: polynomial identity testing as a BPP (hence BQP) algorithm. Two arithmetic circuits, p(x, y) = x²y + xy² and q(x, y) = xy(x + y), feed into a random-sampling step producing (r₁, r₂), then an evaluation step, then a comparison with outcomes "equal" or "different". Schwartz–Zippel: if p ≠ q, a random sample catches the difference with probability ≥ 1 − d/|S|.]
Polynomial identity testing. Pick a random point in a large set, evaluate both polynomials there, and compare. By the Schwartz–Zippel lemma, unequal polynomials agree on a random point with tiny probability — and the algorithm is correct with high confidence. PIT is in BPP, hence trivially in BQP.

Result. Polynomial identity testing sits inside BPP, and because \mathrm{BPP} \subseteq \mathrm{BQP}, it is also in BQP. This is an example of BQP-membership with no quantum speedup: membership in BQP is necessary for "efficient on a quantum computer" but says nothing about a problem being exclusively quantum. To show a genuine quantum advantage, you need a BQP algorithm for a problem that is not in BPP — and that is exactly what Shor's algorithm achieves (conjecturally) for factoring.

Example 2: Factoring is in BQP via Shor's algorithm

Factoring is the canonical example of a BQP-problem with no known polynomial classical algorithm. The BQP-membership is established via Shor's algorithm.

The problem. Given N (an n-bit integer, promised to be composite) and a threshold k < N, decide whether N has a prime factor less than k.

Why this is a decision problem. The instance (N, k) is a yes-instance iff the smallest prime factor of N is less than k. Complexity theory works with decision problems, and binary searching over k recovers a full factorisation from polynomially many decision queries.
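The binary search over k is worth seeing concretely. In this sketch, `oracle(N, k)` is a hypothetical stand-in for the decision procedure "does N have a prime factor less than k?" — a brute-force version is used below in place of the quantum circuit:

```python
def smallest_factor(N, oracle):
    """Recover the smallest prime factor of N from polynomially many calls
    to the decision oracle 'does N have a prime factor less than k?'."""
    lo, hi = 2, N
    while lo < hi:
        mid = (lo + hi) // 2
        if oracle(N, mid + 1):   # smallest prime factor <= mid?
            hi = mid
        else:
            lo = mid + 1
    return lo

# brute-force stand-in oracle: the smallest divisor >= 2 is always prime,
# so checking all d < k for divisibility answers the prime-factor question
oracle = lambda N, k: any(N % d == 0 for d in range(2, k))
```

Each loop iteration halves the interval, so O(log N) = O(n) oracle calls suffice — polynomially many, as claimed.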

Shor's algorithm sketch. The algorithm reduces factoring to order-finding: given a coprime to N, find the smallest r with a^r \equiv 1 \pmod N. Order-finding is solved by the following quantum circuit:

  1. Prepare two registers: the first with 2n qubits in uniform superposition (via Hadamards on |0\rangle^{2n}), the second with n qubits in |0\rangle.
  2. Apply the modular-exponentiation unitary: |x\rangle |0\rangle \mapsto |x\rangle |a^x \bmod N\rangle. This requires O(n^3) gates classically (via repeated squaring, reversibilised).
  3. Measure the second register (or defer the measurement — same outcome). The first register collapses to a superposition periodic with period r.
  4. Apply the inverse quantum Fourier transform to the first register. The QFT is O(n^2) gates.
  5. Measure the first register. The outcome is (with high probability) close to a multiple of 2^{2n}/r. Continued-fraction expansion recovers r classically.
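The classical wrapper around that quantum core can be sketched directly. Here the order-finding subroutine is done by brute force as a stand-in for steps 1–5 (that substitution is exactly where the quantum speedup lives):

```python
from math import gcd

def order(a, N):
    """Multiplicative order of a mod N — brute force here, standing in for
    the polynomial-time quantum order-finding circuit (steps 1-5 above)."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_factor(N, a):
    """Shor's classical reduction: order-finding -> nontrivial factor of N.
    Returns None when this choice of a is unlucky (retry with a fresh a)."""
    g = gcd(a, N)
    if g > 1:
        return g                  # lucky: a already shares a factor with N
    r = order(a, N)
    if r % 2 == 1:
        return None               # odd order: no square root to exploit
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None               # a^(r/2) = -1 mod N: only a trivial root
    return gcd(y - 1, N)          # y^2 = 1 with y != ±1 gives a factor
```

For example, with N = 15 and a = 7 the order is 4, so y = 7² mod 15 = 4 and gcd(3, 15) = 3 is a factor.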

Gate count. Modular exponentiation is O(n^3) gates; QFT is O(n^2) gates; classical post-processing is polynomial. Total: O(n^3) gates per run. The success probability per run is \Theta(1/\log n); with O(\log n) repetitions the success probability is high.

Classical hardness. The best known classical factoring algorithm (the General Number Field Sieve) runs in sub-exponential time \exp(O(n^{1/3} (\log n)^{2/3})). Much faster than 2^n, but much slower than polynomial.

Why this proves factoring is in BQP: Shor's gives a uniform family of polynomial-size quantum circuits — one for each input length n — generated from a polynomial-time classical algorithm (given n, write down the modular exponentiation unitary and the QFT unitary). The success probability is above 2/3 after amplification. By the definition of BQP, factoring is in BQP.

Why factoring is conjecturally not in BPP. Decades of cryptanalysis have produced no polynomial-time classical algorithm, and RSA's security depends on this hardness. The lack of a proof is why we say "conjecturally" — in principle, a clever classical algorithm might still be found.

[Figure: Shor's algorithm placing factoring in BQP. A circuit-style diagram with registers |0⟩²ⁿ and |0⟩ⁿ and five stages: (1) Hadamards create a uniform superposition, (2) modular exponentiation U_a: |x⟩|0⟩ ↦ |x⟩|aˣ mod N⟩, (3) measurement of the function register, (4) inverse QFT on the input register, (5) measurement followed by classical continued fractions to recover r and gcd(N, a^(r/2) ± 1) to extract a factor. Total gate count O(n³); success probability Θ(1/log n) per run.]
Shor's algorithm schematically. Hadamards create a uniform superposition, modular exponentiation entangles input with function value, the inverse QFT aligns amplitudes so measurement reveals (near) multiples of $2^{2n}/r$, and classical continued-fraction expansion recovers the order $r$. Total circuit size $O(n^3)$ gates — polynomial in the input length. Factoring ∈ BQP.

Result. Factoring is in BQP. No polynomial-time classical algorithm for factoring is known — the best classical algorithms are sub-exponential — and most researchers believe none exists. Factoring is thus the cleanest candidate for a problem in BQP ∖ BPP: an efficient quantum algorithm for a problem with no known efficient classical algorithm. This is the source of Shor's cryptographic impact: once a fault-tolerant quantum computer makes factoring tractable in practice, every RSA-based system must migrate to post-quantum cryptography. India's National Quantum Mission includes exactly this transition as one of its strategic priorities.

Common confusions

"Problems in BQP need quantum mechanics." No — BQP contains all of BPP, so it includes problems (like polynomial identity testing above) with no quantum speedup at all.

"Quantum computers solve NP-complete problems efficiently." No known result says so. The BBBV theorem rules out a Grover-based exponential speedup for unstructured search, and most researchers believe \mathrm{NP} \not\subseteq \mathrm{BQP}.

"Bounded error means the answer is unreliable." The error is controllable: amplification drives it below any threshold you care about with only polynomial overhead.

Going deeper

You have the formal definition, the amplification trick, the equivalent formulations, the known containments, and the list of BQP-inhabitants. The rest of this section collects the more technical content: the proof of BQP ⊆ PSPACE, the BQP amplification via phase estimation (stronger than majority vote), the distinction between promise problems and decision problems, the QMA extension, the oracle separations, the question of BQP-complete problems, and intermediate classes like BosonSampling that sit near BQP but are not exactly BQP.

The BQP ⊆ PSPACE proof

The proof uses the path integral or Feynman path interpretation of quantum circuits. Any quantum circuit can be decomposed into gates from a finite universal set. The probability that the circuit outputs a specific bitstring on measurement is a sum:

\Pr[\text{output} = y] = \left| \sum_{\text{paths } \pi} \text{amplitude}(\pi) \right|^2,

where each "path" is a sequence of basis-state transitions through the circuit (one transition per gate). The sum has exponentially many paths, but each individual path's amplitude is a product of a polynomial number of complex numbers (gate entries) and can be computed in polynomial space.

So a classical simulator can iterate over all paths, maintaining a running sum of amplitudes and freeing the memory of each path after its contribution is added. Total space: polynomial (to hold one path at a time). Total time: exponential. This puts BQP inside PSPACE.
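For toy circuits the path sum is short enough to write out. This sketch handles circuits given as explicit matrices (single-qubit gates here for brevity); the only memory held at any moment is the current path:

```python
import itertools, math

H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]   # Hadamard gate as a matrix

def amplitude(gates, x, y):
    """<y| G_m ... G_1 |x> as a Feynman path sum: fix a basis state between
    every consecutive pair of gates, multiply the corresponding gate
    entries, and add the contributions. Space O(m) per path; time
    exponential in the number of gates m — the PSPACE trade-off."""
    dim = len(gates[0])
    total = 0.0
    for path in itertools.product(range(dim), repeat=len(gates) - 1):
        states = (x,) + path + (y,)
        amp = 1.0
        for g, s_in, s_out in zip(gates, states, states[1:]):
            amp *= g[s_out][s_in]     # convention: g[row][col] = <row|g|col>
        total += amp
    return total

# H followed by H is the identity: <0|HH|0> = 1 and <1|HH|0> = 0,
# the zero arising from two paths cancelling — interference in miniature
```

The cancellation in ⟨1|HH|0⟩ (two paths with amplitudes +1/2 and −1/2) is exactly why the sum must be carried out fully before squaring.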

BQP amplification via phase estimation

Majority vote gives exponential error reduction, at the cost of O(\log(1/\epsilon)/\delta^2) repetitions when the success probability is p = 1/2 + \delta. Grover-style amplitude amplification, applied to the success branch of the BQP circuit (with phase estimation to track the rotation angle), reduces this to O(\log(1/\epsilon)/\delta) — a quadratic improvement in the gap dependence, which matters most when \delta is small.

Promise problems and BQP

A promise problem is like a decision problem but with an implicit "promise" that the input satisfies some condition — on non-conforming inputs, the algorithm's behaviour is unspecified. Many quantum algorithms (Deutsch–Jozsa, Simon's) solve promise problems, not decision problems.

The promise-problem version of BQP is called PromiseBQP. It is the more flexible object — it classifies promise problems rather than languages, so many natural quantum problems (with their promises) live there directly — but for most purposes, statements about BQP apply equally to PromiseBQP.

QMA — quantum verifier

QMA (Quantum Merlin–Arthur) is the quantum analogue of NP. A problem is in QMA if for every yes-instance there exists a short quantum state (the "Merlin proof") such that a polynomial-time quantum verifier (Arthur) accepts with high probability, and for no-instances the verifier rejects all quantum states with high probability. QMA contains BQP (trivially — the prover sends |0\rangle's and Arthur ignores them) and is contained in PSPACE. QMA-complete problems include the Local Hamiltonian problem.

Oracle separations

Several landmark oracle separations involving BQP:

  • Simon (1994) — an oracle relative to which quantum query algorithms beat all classical randomised ones exponentially, separating BQP from BPP in the relativised world.
  • BBBV (1997) — an oracle relative to which \mathrm{NP} \not\subseteq \mathrm{BQP}: unstructured search needs \Omega(\sqrt{N}) quantum queries.
  • Raz–Tal (2019) — an oracle relative to which \mathrm{BQP} \not\subseteq \mathrm{PH}.

These separations are in the oracle (relativised) model. Whether they translate to the non-oracle (uniform) world is unknown — this is analogous to the fact that oracle separations between P and NP exist but do not resolve the uniform P vs NP question.

BQP-complete problems

A problem is BQP-complete if it is in BQP and every BQP problem reduces to it in polynomial time. BQP-complete problems are believed to be the hardest problems in BQP. Known BQP-complete problems (strictly speaking, PromiseBQP-complete) include:

  • Additively approximating the Jones polynomial of a braid at certain roots of unity.
  • Approximating the acceptance probability of a given quantum circuit — the canonical complete problem, analogous to Circuit-SAT for NP.

BQP-complete problems show BQP is "natural" — it has problems that are complete for it, analogous to SAT for NP. Without BQP-complete problems, you might worry that BQP is just an arbitrary collection; with them, BQP has structural character.

The gate-count, depth, and width trade-offs

BQP is defined via polynomial-size circuits, where "size" is total gate count. Two other measures matter in practice:

  • Depth — the number of sequential layers of gates; the quantum analogue of parallel time.
  • Width — the number of qubits the circuit acts on, including ancillas.

These finer-grained measures matter for near-term quantum hardware where gate counts are the bottleneck. For theoretical BQP, only asymptotic polynomial size matters.

BosonSampling and other "intermediate" classes

BosonSampling is a restricted model of quantum computing — photons passing through a linear-optical network — that is weaker than full BQP but still does something no efficient classical algorithm can (under standard complexity-theoretic assumptions). BosonSampling is not in BQP in the traditional sense; it samples from a distribution rather than decides a language. But it captures a similar "quantum advantage" signal in the sampling regime.

IQP (Instantaneous Quantum Polynomial time) and DQC1 (Deterministic Quantum Computing with one clean qubit) are other intermediate classes — weaker than full BQP but still believed to be hard to simulate classically. Each exists to capture what you can do with restricted quantum resources. Their hardness reductions often imply classical hardness of sampling, which is the foundation of "quantum supremacy" experiments.

Indian context — theoretical quantum complexity in India

Quantum complexity is a focus area at several Indian institutions.

The National Quantum Mission (2023, ₹6000 crore budget over 8 years) explicitly funds theoretical quantum complexity research, alongside quantum communication, quantum sensing, and quantum hardware. BQP and its neighbours are part of a research tradition India is now scaling up.

Where this leads next

The next chapters take up the comparison questions set up here — \mathrm{BQP} vs \mathrm{BPP} and \mathrm{BQP} vs \mathrm{NP} — and the evidence on each side.

References

  1. Bernstein and Vazirani, Quantum complexity theory (1993, 1997) — the paper formalising BQP and proving \mathrm{BQP} \subseteq \mathrm{PSPACE}. arXiv:quant-ph/9701001.
  2. Wikipedia, BQP — the standard encyclopaedic treatment.
  3. John Preskill, Lecture Notes on Quantum Computation, Ch. 6 — theory.caltech.edu/~preskill/ph229.
  4. Sanjeev Arora and Boaz Barak, Computational Complexity: A Modern Approach, Ch. 10 (quantum complexity) — theory.cs.princeton.edu/complexity.
  5. Ran Raz and Avishay Tal, Oracle separation of BQP and PH (2019) — the landmark oracle separation. ECCC TR18-107.
  6. Wikipedia, Complexity class — overview placing BQP among classical and quantum classes.