In short

BPP sits inside BQP for a trivial reason — a quantum computer can simulate every classical coin flip (apply H to |0\rangle and measure) and every classical gate (use Toffoli to emulate AND, OR, NOT). So any bounded-error classical randomised algorithm runs unchanged on a quantum machine. The real question is whether BQP is strictly bigger: whether there exist problems a quantum computer can solve in polynomial time with bounded error that no classical randomised machine can. The answer that almost everyone believes is yes — factoring lives in BQP (Shor 1994) and has resisted classical polynomial-time attack through fifty years of cryptographic research — but no one has proved it. A proof of \mathrm{BQP} \supsetneq \mathrm{BPP} would immediately yield a classical separation: since \mathrm{BPP} \subseteq \mathrm{BQP} \subseteq \mathrm{PSPACE}, it would imply \mathrm{BPP} \neq \mathrm{PSPACE}, an open problem for decades. In the oracle model the separation is proven and exponential — Simon's problem gives \mathrm{BQP}^O \supsetneq \mathrm{BPP}^O with an exponential query gap — but oracle separations do not imply unrelativised ones. Every assumption RSA rests on, every signature Aadhaar uses, every UPI handshake — all quietly assume \mathrm{BQP} \supsetneq \mathrm{BPP}.

You have met BQP — the class of problems a quantum computer solves in polynomial time with bounded error — and BPP, the classical analogue where the polynomial-time machine is allowed to flip coins. Two natural questions follow. Is BPP inside BQP? And if it is, is it strictly inside — are there quantum-solvable problems that classical randomness cannot catch up to?

The first question has a one-paragraph answer. The second has a fifty-year backlog of circumstantial evidence, a stack of oracle separations, and not one unconditional proof. This article is about both — the easy direction and the hard one, and why the hard one has resisted resolution even after the invention of Shor's algorithm, an algorithm so striking that if it doesn't prove strict containment, you have to ask what would.

The easy direction — BPP ⊆ BQP

A classical randomised polynomial-time algorithm does two kinds of things: it flips fair coins, and it applies Boolean gates (AND, OR, NOT) to the results. Both are cheap on a quantum machine.

Coin flips come from the Hadamard. Take a qubit in |0\rangle, apply H, measure in the computational basis. The amplitudes after H are \tfrac{1}{\sqrt{2}}|0\rangle + \tfrac{1}{\sqrt{2}}|1\rangle, so measurement gives 0 with probability |\tfrac{1}{\sqrt{2}}|^2 = \tfrac{1}{2} and 1 with the same probability. That is a perfect fair coin. Need n independent coin flips? Apply H to n qubits in parallel and measure. The cost is n gates and n measurements — a constant per coin.
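As a sanity check, the amplitude arithmetic is two lines of linear algebra. A minimal sketch in Python with numpy, using the standard state vectors and gate matrix:

```python
import numpy as np

# A qubit state is a length-2 complex vector; |0> = (1, 0).
ket0 = np.array([1.0, 0.0])

# The Hadamard gate.
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

state = H @ ket0            # (1/sqrt2, 1/sqrt2)
probs = np.abs(state) ** 2  # Born rule: measurement probabilities

print(probs)  # [0.5 0.5] -- a perfectly fair coin
```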

Classical gates come from the Toffoli. The Toffoli gate (also called CCX) takes three qubits |a\rangle |b\rangle |c\rangle and returns |a\rangle|b\rangle|c \oplus (a \cdot b)\rangle — it flips the third qubit if and only if the first two are both |1\rangle. Set the third qubit to |0\rangle as an ancilla, and the output is |a\rangle|b\rangle|a \cdot b\rangle — you have computed the AND of a and b into a fresh wire. The NOT gate is the Pauli X. OR is built from AND and NOT via De Morgan: a \lor b = \neg(\neg a \land \neg b). Every classical circuit decomposes into AND, OR, NOT, and fan-out — and every one of those has a direct quantum implementation.
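At the level of classical bits, the whole gate translation fits in a few lines. A sketch, with the Toffoli modelled by its classical truth table:

```python
def toffoli(a, b, c):
    """CCX on classical bits: flip c iff a and b are both 1."""
    return a, b, c ^ (a & b)

def AND(a, b):
    # fresh |0> ancilla as the target wire
    return toffoli(a, b, 0)[2]

def NOT(a):
    return a ^ 1  # Pauli X on a classical bit

def OR(a, b):
    # De Morgan: a OR b = NOT(NOT a AND NOT b)
    return NOT(AND(NOT(a), NOT(b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b))
```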

Fan-out — making a copy of a bit — looks like it should run into the no-cloning theorem, but no-cloning forbids copying unknown quantum states. Classical bits, sitting in |0\rangle or |1\rangle, are perfectly copyable: apply a CNOT with the bit as control and a fresh |0\rangle as target, and you get two copies. The theorem only bites when the state is a genuine superposition of basis states — which a classical bit never is.
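The same bit-level sketch covers fan-out — a CNOT with a fresh 0 target duplicates a classical bit:

```python
def cnot(control, target):
    """CNOT on classical bits: the target picks up the control's value."""
    return control, target ^ control

# Fan-out: copy a classical bit into a fresh |0> wire.
for bit in (0, 1):
    original, copy = cnot(bit, 0)
    print(bit, "->", original, copy)
```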

[Figure: a quantum circuit simulating a classical randomised gate. Left panel — the classical (BPP) picture: a fair coin (Pr(coin = 1) = 1/2) and a bit b feed an AND gate, producing b ∧ coin. Right panel — the quantum (BQP) simulation: H on a |0⟩ wire creates the coin, and a Toffoli with |b⟩ and a fresh |0⟩ ancilla computes b ∧ coin.]
The classical picture on the left — a coin feeds into an AND gate — is simulated on the right by a Hadamard (which produces a fair random bit on measurement) and a Toffoli (which computes the AND into a fresh ancilla). Any Boolean circuit plus coin flips converts to a quantum circuit of the same depth, up to constant factors.

So the argument is complete. A BPP machine runs in polynomial time using coin flips and Boolean gates. Each coin flip costs one H plus one measurement. Each Boolean gate costs a constant number of Toffolis, ancillas, and Xs. The whole simulation is polynomial-time. Therefore

\mathrm{BPP} \subseteq \mathrm{BQP}.

The formal picture. If L \in \mathrm{BPP}, there is a randomised classical Turing machine M running in time p(n) that decides L with error at most 1/3. You build a quantum circuit C_n of size polynomial in p(n) as follows: allocate p(n) "coin" qubits in state |0\rangle, apply H^{\otimes p(n)} to turn them into a uniform superposition, allocate a work register, then implement M's computation deterministically given the coin values using Toffolis. Measure the answer qubit. The output distribution is identical to M's, so the acceptance probability is the same, and error stays at 1/3. L \in \mathrm{BQP}.

Nothing about this simulation is deep. It is bookkeeping. And that is the point — the easy direction is easy because classical computation is a special case of quantum computation, almost by definition.

The hard direction — is BPP = BQP?

Now the interesting question. Does BQP contain anything BPP does not?

The honest mathematical answer is no one has proved it either way. The consensus-of-evidence answer is yes, BQP is strictly larger. Let us separate the two carefully, because the distinction matters for what you can trust and what you cannot.

What's believed, and why

Three pieces of evidence point to strict containment.

Evidence 1: Shor's algorithm. Shor (1994) gave a polynomial-time quantum algorithm for factoring n-bit integers. The best classical algorithm known, the general number field sieve (GNFS), runs in time roughly \exp\!\big(c \, n^{1/3} (\log n)^{2/3}\big) — sub-exponential, but nowhere near polynomial. If you accept that factoring is not in BPP — which is the working assumption of the entire RSA-based cryptographic world — then factoring is in BQP \setminus BPP, and BQP is strictly larger.

The qualifier "if you accept" is important. No one has proved factoring is outside BPP. That is itself an open problem. So Shor's algorithm provides conditional evidence: if one widely held conjecture is true, then BQP is strictly larger.

Evidence 2: The discrete logarithm. The same pattern applies. The second half of Shor's 1994 paper solves the discrete-logarithm problem in polynomial time on a quantum computer, and the best classical algorithms are again sub-exponential. Both factoring and discrete log are instances of the abelian Hidden Subgroup Problem — period-finding over \mathbb{Z} for factoring, a hidden subgroup of \mathbb{Z}_N \times \mathbb{Z}_N for discrete log — which is the structural reason the same algorithm handles both. Both are widely believed to be outside BPP.

Evidence 3: Oracle separations. In the oracle model — where the algorithm is allowed to query a black-box function but not to inspect its insides — there is a proven, unconditional, exponential separation between BPP and BQP. Simon's algorithm solves Simon's problem (finding a hidden period s such that f(x) = f(x \oplus s)) in O(n) quantum queries, while any classical randomised algorithm needs \Omega(2^{n/2}) queries. This is not a conjecture. It is a theorem.
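You can watch the classical lower bound bite in simulation. The sketch below builds a toy 2-to-1 function with a hidden period s (the pairing construction and shuffled labels are an illustrative choice, not Simon's original presentation) and counts how many random classical queries a birthday-style collision search needs:

```python
import random

def make_simon_f(n, s):
    """A 2-to-1 function with f(x) = f(x XOR s) for hidden s != 0.
    Each pair {x, x^s} gets a distinct random label (toy construction)."""
    pool = list(range(2 ** n))
    random.shuffle(pool)
    labels, f = {}, {}
    for x in range(2 ** n):
        key = min(x, x ^ s)
        if key not in labels:
            labels[key] = pool.pop()
        f[x] = labels[key]
    return f

def collision_search(f, n):
    """Classical attack: query random inputs until two share an output.
    The birthday bound says this takes on the order of 2^(n/2) queries."""
    seen, queries = {}, 0
    while True:
        x = random.randrange(2 ** n)
        queries += 1
        if f[x] in seen and seen[f[x]] != x:
            return queries, seen[f[x]] ^ x  # the collision reveals s
        seen[f[x]] = x

random.seed(1)
n, s = 10, 0b1011001101
f = make_simon_f(n, s)
queries, recovered = collision_search(f, n)
# recovered always equals s; queries scales like 2^(n/2), here ~32
print(queries, recovered == s)
```

A quantum computer running Simon's algorithm needs only O(n) queries on the same function — the gap this toy search makes concrete.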

The caveat: oracle separations do not imply unrelativised separations. An oracle separation says "for some oracle O, \mathrm{BQP}^O \supsetneq \mathrm{BPP}^O." It does not say \mathrm{BQP} \supsetneq \mathrm{BPP}. The logic runs the other way: an unrelativised separation would in particular separate the classes relative to the trivial oracle, so an oracle separation is necessary-but-not-sufficient evidence of a real one. Its presence is encouraging — every pair of classes ever proved distinct also admits oracle separations — and its absence would have settled the question outright in favour of equality.

[Figure: the containment landscape. Nested sets P ⊆ BPP ⊆ BQP ⊆ PSPACE, with a question mark on the BPP–BQP boundary ("is this boundary strict?"). Sorting and primality sit inside P; factoring and discrete log sit inside BQP, conjectured outside BPP. One of these inclusions is an open problem.]
The containment landscape. P \subseteq BPP \subseteq BQP \subseteq PSPACE — the first inclusion is immediate, the last is a genuine theorem, and the middle one, BPP versus BQP, is conjectured strict but unproven. Factoring and discrete log are the headline candidates for problems in BQP \setminus BPP, but each candidacy rests on a separate unproven classical hardness conjecture.

Why a proof has not arrived

Proving \mathrm{BQP} \supsetneq \mathrm{BPP} unconditionally would require exhibiting a specific language L that a quantum machine solves in polynomial time and no classical randomised machine does. This runs into two walls.

Wall 1: lower bounds on classical algorithms are extraordinarily hard. Proving factoring is outside BPP means ruling out every clever classical randomised polynomial-time algorithm — including ones nobody has discovered yet. The history of classical computing is littered with problems that looked hopelessly hard until someone found a clever algorithm: primality testing was not known to be in P until AKS in 2002, and linear programming not until Khachiyan in 1979. Proving a lower bound forecloses all such surprises in advance.

The specific barrier here is that proving factoring \notin \mathrm{BPP} would imply \mathrm{BPP} \neq \mathrm{PSPACE} (factoring sits inside PSPACE, so a problem in PSPACE but outside BPP separates the two), which is itself a longstanding open problem. Any route to \mathrm{BQP} \supsetneq \mathrm{BPP} via a specific hard problem has to climb at least this high.

Wall 2: "dequantisation" attacks exist. A growing line of research, starting with Ewin Tang's 2018 undergraduate thesis, has shown that some quantum algorithms with apparent exponential advantages can be matched by classical algorithms given the same kind of sampling access. Tang dequantised the quantum recommendation-system algorithm (Kerenidis–Prakash 2016): given the same access model, a classical algorithm also achieves polylogarithmic running time. This did not touch Shor's or Simon's, but it showed that "quantum algorithm \Rightarrow exponential advantage" is a riskier inference than it used to be. Every candidate problem for BQP \setminus BPP has to survive the possibility of dequantisation.

Wall 3: the relativisation barrier. Techniques that work for proving BPP lower bounds in the black-box model don't necessarily work in the unrelativised world. Baker, Gill, Solovay (1975) showed oracles exist relative to which \mathrm{P} = \mathrm{NP} and oracles relative to which \mathrm{P} \neq \mathrm{NP} — so any "relativising" proof technique cannot settle P versus NP. The same barrier applies to BPP versus BQP. You would need a non-relativising technique, and those are rare (Razborov–Rudich's natural proofs barrier, and the algebrisation barrier of Aaronson–Wigderson, further constrain the techniques available).

Why the answer matters for everyone who uses a phone

If BPP = BQP — if every quantum algorithm can be matched classically — then factoring is in BPP, which means a classical polynomial-time factoring algorithm exists, and the consequences cascade.

The entire public-key-cryptography stack is a bet that BQP is strictly bigger than BPP. If the bet is wrong, it is not just that quantum computers become superfluous; classical computers become dangerous. The post-quantum migration currently being planned for Aadhaar, UPI, banking, and government communications is a hedge against \mathrm{BQP} \supsetneq \mathrm{BPP} being true and a quantum computer eventually being built — but also against the weaker possibility that a classical attack appears first.

Example 1: simulating a BPP primality test on a quantum machine

The Miller–Rabin primality test is a classical randomised algorithm: given an integer N, flip coins to pick a random base a with 1 < a < N, then check a modular-exponentiation identity to decide if N is probably prime. It runs in polynomial time in \log N with error at most 1/4 per trial — and thus is in BPP.
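For concreteness, here is a minimal Python version of the classical test being simulated (a standard textbook formulation; the parameter names are ours):

```python
import random

def miller_rabin(N, rounds=20, rng=random):
    """Classical randomised primality test.
    Never errs on primes; errs on composites with prob <= 4**-rounds."""
    if N < 4:
        return N in (2, 3)
    if N % 2 == 0:
        return False
    # Write N - 1 = 2^r * d with d odd.
    d, r = N - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = rng.randrange(2, N - 1)  # the coin flips pick a random base
        x = pow(a, d, N)             # the modular exponentiation
        if x in (1, N - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, N)
            if x == N - 1:
                break
        else:
            return False             # a is a witness: N is composite
    return True                      # probably prime

print(miller_rabin(2**61 - 1))  # True -- a Mersenne prime
print(miller_rabin(91))         # False with overwhelming probability (91 = 7*13)
```

Every line of this maps onto the quantum simulation below: the `rng.randrange` call becomes Hadamards, the `pow` calls become reversible arithmetic.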

Setup. To simulate this on a quantum computer, you need only two ingredients: quantum randomness and reversible arithmetic.

Step 1 — turn the coin flips into Hadamards. The algorithm picks a random base a, which requires \lceil \log_2(N-2) \rceil random bits. Allocate that many "coin" qubits in |0\rangle and apply H^{\otimes k}:

|0\rangle^{\otimes k} \xrightarrow{H^{\otimes k}} \frac{1}{\sqrt{2^k}} \sum_{a=0}^{2^k - 1} |a\rangle.

Why: a fair random choice of one of 2^k bases matches exactly the uniform amplitude pattern Hadamard produces on k qubits. Measuring here immediately would give a uniformly random a — but we can also delay measurement and keep the register quantum.

Step 2 — implement the modular exponentiation reversibly. Miller–Rabin computes a^d \bmod N for some d derived from N. This is the same modular exponentiation used in Shor's algorithm. Implement it as a reversible circuit mapping

|a\rangle \, |0\rangle \mapsto |a\rangle \, |a^d \bmod N\rangle.

Why: unitaries must be reversible, so classical irreversible arithmetic gets embedded into a reversible circuit by keeping an input copy. Standard technique — it adds polynomial overhead in ancillas.

Step 3 — compute the Miller–Rabin predicate. Apply another reversible circuit that checks the Miller–Rabin conditions against a^d \bmod N and writes the answer into a fresh qubit |\text{prime?}\rangle.

Step 4 — measure. Measure the "prime?" qubit. The probability of output 1 matches the classical Miller–Rabin acceptance probability.

Result. The full quantum circuit has the same error profile as the classical Miller–Rabin test (error \leq 1/4 on composite inputs, 0 on primes). The circuit size is polynomial in \log N — each step added at most a polynomial factor. So primality testing, which we already knew was in BPP, is also in BQP. This is not a new result; it is a demonstration that the BPP ⊆ BQP argument is completely mechanical for any specific problem.

[Figure: the Miller–Rabin primality test as a quantum circuit. Top wire group: k coin qubits, each starting in |0⟩ with a Hadamard. Middle: an ancilla register for a^d mod N. Bottom: the answer qubit. A reversible modular-exponentiation block spans the coin and ancilla wires, a reversible Miller–Rabin predicate block spans the ancilla and answer wires, and a final measurement on the answer qubit reads 0/1.]
A quantum realisation of the Miller–Rabin primality test. Hadamards generate the fair coin flips; a reversible modular-exponentiation block computes a^d \bmod N; a reversible predicate block checks the Miller–Rabin conditions; the final measurement returns 1 for "probably prime" and 0 for "composite." Every BPP algorithm maps onto this template.

Interpretation. The picture is the whole BPP ⊆ BQP proof in one circuit. You replace coins with Hadamards, replace irreversible arithmetic with reversible arithmetic plus ancillas, and measure at the end. Nothing genuinely quantum is happening — no interference, no entanglement-based speedup. The quantum machine is just being used as an awkward classical one.

Example 2: the Shor evidence, in numbers

Take 2048-bit RSA — the standard key size for secure web traffic. The modulus N is about 2^{2048} \approx 10^{617}. You want to factor N.

Setup. Compare the best classical algorithm (the general number field sieve) and Shor's algorithm on the same input.

Step 1 — count classical operations. GNFS has running time heuristically \exp\!\big((1.923 + o(1)) \cdot (\ln N)^{1/3} \cdot (\ln \ln N)^{2/3}\big). Plug N = 2^{2048}:

\ln N \approx 1420, \quad (\ln N)^{1/3} \approx 11.2, \quad (\ln \ln N)^{2/3} \approx 3.7.

Why: we need a numerical estimate for the \log-of-operations count, because the asymptotic formula hides the constants that actually determine feasibility at 2048 bits.

Compute the exponent: 1.923 \cdot 11.2 \cdot 3.7 \approx 79.7. So GNFS needs about e^{79.7} \approx 10^{35} operations to factor 2048-bit RSA. Every current estimate (RSA Laboratories, NIST) puts the number between 10^{34} and 10^{36}, so the order of magnitude is right.
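The arithmetic is easy to check mechanically. A sketch (using unrounded intermediates, which lands nearer 81 than 79.7 — the same order of magnitude):

```python
import math

n_bits = 2048
lnN = n_bits * math.log(2)       # ln(2^2048), about 1.42e3
c = (64 / 9) ** (1 / 3)          # the GNFS constant, about 1.923

# GNFS heuristic exponent: c * (ln N)^(1/3) * (ln ln N)^(2/3)
exponent = c * lnN ** (1 / 3) * math.log(lnN) ** (2 / 3)
log10_ops = exponent / math.log(10)  # operations ~ 10^log10_ops

print(f"exponent ~ {exponent:.1f}, operations ~ 10^{log10_ops:.1f}")
```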

Step 2 — count Shor's operations. Shor's algorithm uses O((\log N)^3) gates, with the constant depending on how efficiently you implement modular exponentiation. A careful estimate (Beauregard 2003 and subsequent optimisations) gives roughly 64 \cdot n^3 Toffoli gates for n-bit N, plus logarithmic corrections. For n = 2048:

64 \cdot 2048^3 \approx 5.5 \times 10^{11} \text{ gates}.

That is about 10^{12} operations — some 23 orders of magnitude fewer than GNFS needs.

Step 3 — compare. At a billion operations per second (a modest number for both classical and quantum), GNFS on 2048-bit RSA takes 10^{35}/10^9 = 10^{26} seconds — roughly 10^{18} years, about a hundred million times the age of the universe. Shor's on the same input takes 10^{12}/10^9 = 10^3 seconds — under an hour.
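The comparison in code, using the estimates above:

```python
import math

gnfs_ops = 1e35            # classical GNFS estimate at 2048 bits
shor_ops = 64 * 2048 ** 3  # ~5.5e11 Toffoli gates (Beauregard-style count)
rate = 1e9                 # a billion operations per second

shor_seconds = shor_ops / rate
gnfs_years = gnfs_ops / rate / (3600 * 24 * 365)
gap = math.log10(gnfs_ops / shor_ops)  # orders of magnitude between them

print(f"Shor: {shor_seconds:.0f} s, GNFS: {gnfs_years:.1e} years, "
      f"gap: {gap:.0f} orders of magnitude")
```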

Result. If factoring is genuinely outside BPP, 2048-bit RSA is secure against every classical attacker forever (barring an algorithmic breakthrough nobody has found in 50 years). Against a full-scale quantum computer running Shor's, it falls in an afternoon. This gap — some 23 orders of magnitude in operation count — is the concrete form of the BQP-vs-BPP question for cryptography. Every RSA-protected system in India, from the Aadhaar signing servers at UIDAI to the HTTPS endpoint of any banking app, is trusting that this gap really is a gap and not an artefact of classical algorithms we haven't discovered.

[Figure: a horizontal bar chart on a logarithmic operations axis, comparing factoring 2048-bit RSA by GNFS (classical, ≈ 10^35 operations) and by Shor (quantum, ≈ 10^12 operations) — a gap of about 23 orders of magnitude: an hour versus 10^18 years, if the gap is real.]
The concrete numerical gap between GNFS (best classical factoring algorithm, \sim 10^{35} operations) and Shor's (quantum, \sim 10^{12} operations) at 2048-bit RSA. Twenty-three orders of magnitude. If BQP = BPP, a classical algorithm matching Shor's must exist — and 50 years of intensive search have not found it.

Interpretation. Whenever someone asks "why do complexity theorists believe BQP \supsetneq BPP even without a proof?", this diagram is the answer. The gap is not small, not subtle, not sensitive to model assumptions — it is twenty-three orders of magnitude in operation count at a cryptographically relevant input size. Closing that gap with a classical algorithm would be the largest algorithmic discovery in modern history.

Common confusions

Going deeper

If you wanted the landscape picture and the cryptographic consequences, you have them. The rest of this section gets formal: how the BPP \subseteq BQP simulation is actually constructed gate-for-gate, how oracle separations are formalised, what the proof barriers look like in detail, and where the research frontier currently sits. You can stop here if you only needed the conceptual picture.

The formal BPP ⊆ BQP simulation

A more rigorous statement: let M be a probabilistic Turing machine running in time p(n) on inputs of length n, with acceptance probability \geq 2/3 on yes-instances and \leq 1/3 on no-instances. Build a quantum circuit C_n of size O(p(n)^2) as follows.

Step A. Allocate three registers: a work register W of p(n) qubits (scratch for M's tape), a coin register R of p(n) qubits, and an answer qubit a. Initialise all to |0\rangle.

Step B. Apply H^{\otimes p(n)} to R. This puts R into a uniform superposition over all 2^{p(n)} possible coin-sequences:

|R\rangle = \frac{1}{\sqrt{2^{p(n)}}} \sum_{r \in \{0,1\}^{p(n)}} |r\rangle.

Step C. Implement M's transition function as a reversible classical circuit. For each step of M, the next configuration is a deterministic function of the current configuration and the next coin bit from R. Use Toffolis and CNOTs to embed the transition into a reversible circuit U_M acting on the registers W, R, and a; irreversibility is handled by the Bennett trick — keep a history register of intermediate configurations and uncompute it at the end (Bennett 1973). Simulating a time-T machine this way costs O(T^2) gates naively, or O(T \log T) with the oblivious-simulation construction of Pippenger and Fischer (1979).

Step D. After p(n) applications of U_M, the answer register a holds M's output on coin-sequence r in superposition:

|W, R, a\rangle = \frac{1}{\sqrt{2^{p(n)}}} \sum_{r} |W_r\rangle |r\rangle |M(x, r)\rangle.

Step E. Measure a in the computational basis. The probability of outcome 1 is

\Pr[a = 1] = \frac{1}{2^{p(n)}} \sum_r \mathbb{1}[M(x, r) = 1] = \Pr_{r}[M(x, r) = 1],

exactly the classical acceptance probability. Bounded error \geq 2/3 is preserved.

Therefore L \in \mathrm{BPP} \Rightarrow L \in \mathrm{BQP}. The O(p(n)^2) size of C_n is the simulation cost — polynomial overhead, nothing exponential.
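The identity in Step E can be checked numerically on a toy machine: enumerate all coin strings, sum |amplitude|^2 over the accepting ones, and confirm the result equals the classical acceptance probability. The probe-based predicate M below is an invented toy for illustration, not a BPP algorithm of independent interest:

```python
import itertools

def M(x, r):
    """A toy randomised machine: use coin bits r to pick probe positions
    in the bit-string x; accept iff any probed position holds a 1."""
    n = len(x)
    # Interpret r (a tuple of coin bits) as three 3-bit probe indices.
    probes = [int("".join(map(str, r[i:i + 3])), 2) % n
              for i in range(0, len(r), 3)]
    return int(any(x[p] == 1 for p in probes))

x = (0, 0, 1, 0, 0)  # input with a single 1
k = 9                # 9 coin qubits -> three 3-bit probe indices

# "Quantum" side: uniform superposition over coin strings, then measure
# the answer qubit.  Pr[1] = sum of |amplitude|^2 over accepting r.
amp2 = 1 / 2 ** k
pr_quantum = sum(amp2 for r in itertools.product((0, 1), repeat=k) if M(x, r))

# Classical side: acceptance probability by direct enumeration.
pr_classical = sum(M(x, r)
                   for r in itertools.product((0, 1), repeat=k)) / 2 ** k

print(pr_quantum == pr_classical, round(pr_quantum, 4))  # True 0.5781
```

The two probabilities agree exactly, which is the whole content of Step E: measuring the coin register's uniform superposition reproduces the classical coin distribution.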

Oracle separations — Simon, Watrous, Raz–Tal

The first proven exponential separation was Simon's (1994): a problem that a quantum algorithm solves in O(n) queries while any classical randomised algorithm needs \Omega(2^{n/2}) — an exponential gap in the oracle model.

Watrous (2000) pushed the oracle separations beyond BPP: relative to an oracle built from black-box groups, \mathrm{BQP}^O contains decision problems that \mathrm{MA}^O (Merlin–Arthur, a class combining BPP with an NP-style prover) does not. This is a stronger separation than Simon's, because MA contains BPP.

Raz and Tal (2019) achieved the current strongest oracle separation: they showed there exists an oracle O such that \mathrm{BQP}^O \not\subseteq \mathrm{PH}^O, where PH is the polynomial hierarchy (a whole tower of classes built on top of NP and coNP). This is remarkable because PH contains not just BPP but every class defined by bounded alternation of quantifiers — a huge classical landscape. BQP escapes it entirely relative to an oracle.

The Raz–Tal result uses a problem called Forrelation: given two Boolean functions f, g: \{0,1\}^n \to \{\pm 1\}, decide whether g is strongly correlated with the Fourier transform of f, or essentially uncorrelated (promised one of the two holds). A quantum algorithm solves this with O(1) queries; any classical randomised algorithm needs on the order of \sqrt{N}/\log N queries, where N = 2^n — exponential in n — and Raz–Tal show the hardness extends to the entire polynomial hierarchy in the query model.
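The forrelation quantity itself is easy to compute for small n. A sketch — the fast Walsh–Hadamard transform plays the role of the Fourier transform over \{0,1\}^n, and taking g = \mathrm{sign}(\hat{f}) is an illustrative way to manufacture a forrelated pair:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
N = 2 ** n

def wht(v):
    """Normalised fast Walsh-Hadamard transform (Fourier over Z_2^n)."""
    v = v.astype(float).copy()
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            a = v[i:i + h].copy()
            b = v[i + h:i + 2 * h].copy()
            v[i:i + h] = a + b
            v[i + h:i + 2 * h] = a - b
        h *= 2
    return v / np.sqrt(len(v))

def forrelation(f, g):
    """Phi = 2^{-n} * sum_y fhat(y) * g(y)."""
    return np.dot(wht(f), g) / N

f = rng.choice([-1.0, 1.0], size=N)
g_forrelated = np.sign(wht(f))              # g tracks the Fourier transform of f
g_random = rng.choice([-1.0, 1.0], size=N)  # g independent of f

# The forrelated pair scores near 0.8; a random pair scores near 0.
print(forrelation(f, g_forrelated), forrelation(f, g_random))
```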

Why these separations do not imply unrelativised separations

An oracle separation \mathrm{BQP}^O \supsetneq \mathrm{BPP}^O tells you that the black-box behaviour differs. It says, "for some function f accessed as a black box, a quantum algorithm can exploit f's structure and a classical one cannot." But "accessed as a black box" is a stronger constraint than "given by a circuit." A real problem has a concrete circuit description; you can inspect it, exploit its structure, and potentially find a classical algorithm that the oracle-model argument rules out.

Baker, Gill, Solovay (1975) made this precise: they constructed oracles A, B such that \mathrm{P}^A = \mathrm{NP}^A and \mathrm{P}^B \neq \mathrm{NP}^B. So the P-versus-NP question is not settled by oracle arguments alone — both answers are consistent with some oracle choice. Any technique that relativises (i.e. works the same way with or without an oracle) cannot resolve P versus NP. The same barrier applies to BPP versus BQP.

The implication: separating BPP from BQP requires a non-relativising technique. The main one known — arithmetisation, which powered \mathrm{IP} = \mathrm{PSPACE} — is itself hemmed in by further barriers: algebrisation (Aaronson–Wigderson 2008) shows that arithmetisation-style arguments relativise in a generalised sense, and natural proofs (Razborov–Rudich 1994) constrains circuit-lower-bound approaches. None of this machinery has yet yielded a BPP-versus-BQP separation.

Dequantization — the other direction of evidence

Tang's 2018 undergraduate thesis at UT Austin, supervised by Scott Aaronson, gave a classical algorithm for the "recommendation-system" problem of Kerenidis and Prakash (2016), whose quantum algorithm had been claimed as an exponential speedup. Given the same sampling access, Tang's classical algorithm also runs in time polylogarithmic in the matrix dimensions — only polynomially slower than the quantum one — showing the "speedup" was an artefact of the access model, not a genuine quantum advantage.

Since Tang 2018, a growing list of quantum machine-learning algorithms — quantum PCA, quantum SVM, some HHL-based linear-system solvers — have been similarly dequantised. The pattern: any time a quantum algorithm's "speedup" depends critically on cheap sampling access (the "\ell^2-sample model"), the same speedup often transfers classically.
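The \ell^2-sample access model is concrete: you may read individual entries of a vector, query its norm, and draw a random index with probability proportional to the squared entry. A sketch of the sampling primitive (function name ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_sample(v, rng, size=1):
    """Sample index i with probability |v_i|^2 / ||v||^2 --
    the 'sample-and-query' access that dequantisation results assume."""
    p = np.abs(v) ** 2
    p = p / p.sum()
    return rng.choice(len(v), size=size, p=p)

v = np.array([3.0, 0.0, 4.0])  # ||v||^2 = 25
counts = np.bincount(l2_sample(v, rng, size=10000), minlength=3)
print(counts / 10000)  # index 1 never appears; indices 0 and 2 in ratio 9:16
```

This mirrors the Born rule: measuring the quantum state |v\rangle / \lVert v \rVert in the computational basis produces exactly this distribution, which is why granting it to a classical algorithm levels the playing field.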

This has not touched Shor's or Simon's or any HSP algorithm. Those rely on genuine Fourier structure and entanglement-based phase estimation that has no classical sampling analogue. But dequantisation research has sharpened the community's understanding: a candidate for BQP \setminus BPP must survive classical-sampling-based attacks, and many candidates have not.

The current state of the frontier: the abelian-HSP family (factoring, discrete log) and the lattice-and-number-field family (Regev 2005 and successors — see below) are the main remaining candidates whose quantum advantages have resisted all dequantisation attempts.

Lattice problems and Regev's learning-with-errors

Regev (2005) introduced the Learning with Errors (LWE) problem and proved a striking quantum reduction: an efficient algorithm for LWE would yield a quantum polynomial-time algorithm for worst-case approximate lattice problems (GapSVP, SIVP), whose best known algorithms, classical or quantum, are sub-exponential. This matters for two reasons.

First, lattice problems are believed to be hard — LWE and its variants form the foundation of post-quantum cryptography schemes like Kyber and Dilithium, which NIST has standardised. The post-quantum community bets on lattice problems resisting both classical and quantum attack, and Regev's reduction supports that bet by tying LWE's security to worst-case lattice hardness.

Second, the surrounding algebraic landscape supplies candidates for BQP \setminus BPP that do not rely on the factoring conjecture: quantum polynomial-time algorithms exist for Pell's equation (Hallgren 2002) and for computing unit groups of number fields (Eisenträger–Hallgren–Kitaev–Song 2014), with no classical polynomial-time algorithm known for either. If any of these problems is outside BPP, BQP is strictly bigger — a second pillar for the belief, independent of factoring.

The cryptographic weight of the conjecture

Every public-key cryptosystem in production (as of 2026) bets on \mathrm{BQP} \supsetneq \mathrm{BPP}, explicitly or implicitly.

If \mathrm{BPP} = \mathrm{BQP} is ever proved, the entire RSA/ECC stack collapses instantly — not to a future quantum attacker but to today's classical computers. If \mathrm{BPP} \subsetneq \mathrm{BQP} is proved but Kyber's underlying lattice problems also turn out to lie in BQP, the post-quantum replacements fall to quantum attackers too. India's National Quantum Mission plans its post-quantum migration on the assumption that classical algorithms cannot catch up to quantum ones, while budgeting for the day a large quantum computer is actually built.

The research frontier

Three active directions as of 2026:

  1. Lower bounds for specific problems. Can we prove factoring \notin BPP unconditionally? Or any specific problem separating BQP from BPP? Despite decades of effort, no such result exists — we do not even have unconditional super-linear lower bounds for most of NP on classical randomised machines.

  2. Non-relativising techniques. Can the techniques of \mathrm{IP} = \mathrm{PSPACE} (arithmetisation, error-correcting codes, the sum-check protocol) be adapted to separate BPP from BQP? Attempts have not succeeded, but the direction remains open.

  3. New candidate problems. Is there a natural problem (not specifically designed as a quantum-hardness witness) whose classical hardness is believable and which sits in BQP? Candidates include certain lattice problems, graph isomorphism (although GI is now known to be quasi-polynomial classically — Babai 2016), and various promise-problems in number theory.

Indian context — post-quantum cryptography migration

The Indian government's National Quantum Mission (launched 2023, ₹6000 crore over 8 years) includes a substantive post-quantum cryptography component. India's Unique Identification Authority (UIDAI) — which issues Aadhaar credentials to over a billion citizens — relies on RSA and ECDSA signatures whose security rests on the \mathrm{BQP} \supsetneq \mathrm{BPP} belief being correct in the right direction (that classical cannot match quantum). UPI, IndiaStack, and every Indian banking handshake lives in the same bet.

IISc Bangalore and IIT Delhi both have active quantum-complexity groups contributing to post-quantum standardisation. The research question "is BPP = BQP?" is not an abstract curiosity for India — it is the question whose answer determines whether the next decade of digital infrastructure holds up.

Where this leads next

References

  1. Bernstein and Vazirani, Quantum complexity theory (1997) — the paper introducing BQP and proving its basic closure properties. SIAM J. Comput., preprint at arXiv:quant-ph/9701001.
  2. Simon, On the power of quantum computation (1994) — the first provable exponential oracle separation between BQP and BPP. Proc. IEEE FOCS; full version in SIAM J. Comput. 26(5), 1997.
  3. Shor, Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer (1994) — the factoring algorithm that is the headline evidence for BQP strictly containing BPP. arXiv:quant-ph/9508027.
  4. Tang, A quantum-inspired classical algorithm for recommendation systems (2018) — the dequantisation result that refined how the community thinks about quantum speedups. arXiv:1807.04271.
  5. Wikipedia, BQP — the standard encyclopaedic treatment with the containment diagram.
  6. Scott Aaronson, Shtetl-Optimized blog: BQP, BPP, and the polynomial hierarchy — accessible commentary from a working researcher in quantum complexity.