In short
Quantum speedups come from interference acting on structured problems. The four oracle algorithms you have met — Deutsch, Deutsch-Jozsa, Bernstein-Vazirani, Simon — each require a promise (constant-or-balanced, linear, periodic) because that promise is the structure interference exploits. Strip away the structure and the best you can do is a quadratic speedup (Grover's \sqrt{N}), and the BBBV theorem proves no quantum algorithm can do better for unstructured search. The speedup ladder has four rungs: exponential (Shor, Simon, hidden-subgroup problems), polynomial (some graph problems), quadratic (Grover-style amplitude amplification), none (most problems). The class BQP — problems quantum computers solve in polynomial time with bounded error — contains factoring, discrete log, and period-finding, is believed to strictly contain BPP, and is not known to contain NP-complete problems. Quantum will not speed up arbitrary algorithms, will not solve NP in polynomial time by any known route, and will not break symmetric encryption outright. The short rule: interference plus structure equals exponential advantage; interference without structure equals quadratic; no interference equals no advantage.
You have met four quantum algorithms in this part of the curriculum: Deutsch, Deutsch-Jozsa, Bernstein-Vazirani, and Simon's. Each beat the best classical query count for a specific problem. Deutsch went from 2 queries to 1. Deutsch-Jozsa went from 2^{n-1}+1 deterministic queries to 1. Bernstein-Vazirani went from n queries to 1. Simon's went from \Omega(2^{n/2}) queries to O(n). Four different problems, four different improvements — some constant-factor, some polynomial, some exponential.
The obvious next question: can you see the pattern? Why do these work? And — just as important — when does the quantum advantage not materialise? Are there problems where a quantum computer offers no improvement at all, or does the strange physics of interference and superposition always help?
This article is the pattern-recognition lesson. Seven takeaways drawn from the four oracle algorithms, each illustrated against those algorithms and then generalised. By the end, you will have a checklist for recognising when a problem is a candidate for a quantum speedup — and you will know, honestly, what quantum computing will and will not do when the era of fault-tolerant machines arrives.
Lesson 1 — Interference is the engine
Strip every oracle algorithm you have met down to its skeleton and the same three-beat pattern appears.
- Hadamard the input register to create a superposition over all possible inputs.
- Apply the oracle, which uses phase kickback to plant (-1)^{f(x)} (or a generalised phase) in front of each basis state.
- Hadamard again — a second Fourier-like transform — to collapse the phase pattern into an interference pattern at measurement.
The third step is where the magic happens. The first two set up a state in which the answer you want is encoded as a pattern of amplitudes. The final Hadamard turns that pattern into probabilities that concentrate on the answer. Constructive interference amplifies the answer; destructive interference suppresses everything else.
Without interference, this template collapses. If you dropped the final Hadamard, you would measure a random input and learn nothing about f. If you dropped the first Hadamard, you could not query multiple inputs at once. Interference is not an optional flourish; it is the only reason this pattern beats classical queries.
Classical randomised algorithms do not have interference. They have probability distributions. Two probabilities can add, never cancel; you cannot arrange two independent "maybe-right" computations to destructively suppress wrong answers. Interference of complex amplitudes is the one operation the quantum world provides that the classical world does not.
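To make this concrete, here is a minimal pure-Python statevector sketch of the H-oracle-H template; the helper names are ours, not from any quantum framework. Applied to the parity function, which is both balanced and linear with hidden string s = 111, all the amplitude interferes onto |111\rangle:

```python
import math

def hadamard_all(state, n):
    """Apply H to each of the n qubits of a 2^n-entry statevector."""
    for q in range(n):
        step = 1 << q
        for i in range(len(state)):
            if i & step == 0:
                a, b = state[i], state[i | step]
                state[i] = (a + b) / math.sqrt(2)
                state[i | step] = (a - b) / math.sqrt(2)
    return state

def h_oracle_h(f, n):
    """The three-beat template: H^n, phase oracle (-1)^f(x), H^n.
    Returns the measurement probability of each basis state."""
    state = [0.0] * (1 << n)
    state[0] = 1.0                      # start in |0...0>
    hadamard_all(state, n)              # uniform superposition
    state = [(-1) ** f(x) * a for x, a in enumerate(state)]  # phase kickback
    hadamard_all(state, n)              # interference
    return [a * a for a in state]

parity = lambda x: bin(x).count("1") % 2   # f(x) = s.x with s = 111
probs = h_oracle_h(parity, 3)
print(round(probs[0], 6), round(probs[7], 6))  # 0.0 1.0
```

Destructive interference zeroes the |000\rangle outcome (so Deutsch-Jozsa would report "balanced"), and constructive interference puts all the probability on |111\rangle (so Bernstein-Vazirani reads off s in one query). Drop either `hadamard_all` call and the probabilities become uniform noise.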
Lesson 2 — Structure is required
Every oracle algorithm you have met came with a promise.
- Deutsch-Jozsa: "f is constant or balanced, nothing in between."
- Bernstein-Vazirani: "f(x) = s \cdot x \pmod 2 for a fixed bit string s."
- Simon's: "f(x) = f(y) \iff x \oplus y \in \{0, s\} for a fixed nonzero s."
Drop the promise and each algorithm fails. Feed Deutsch-Jozsa an arbitrary f and it may return any outcome, which you cannot interpret. Feed Bernstein-Vazirani a non-linear f and it returns a "hidden string" that corresponds to nothing. Feed Simon's an f with no period and it reports whatever random "periodicity" happens to appear in the samples.
The promise is not a convenience — it is the structure interference exploits. Interference concentrates amplitude on the right answer only when the right answer has some global mathematical property (a symmetry, a period, a linear structure) that the phases encode. Without that property, the phases are random and interference does not concentrate — it spreads.
Three concrete checks:
- Deutsch-Jozsa: if f is neither constant nor balanced (e.g. 60\% zeros and 40\% ones), the sum \sum_x (-1)^{f(x)} is some nonzero fraction of 2^n, and the |0^n\rangle outcome has an intermediate probability — neither 1 nor 0. The algorithm returns a noisy answer that can be mistaken for either case.
- Bernstein-Vazirani: if f is not linear (not of the form s \cdot x), the algorithm returns a bit string y whose interpretation as "the hidden s" is meaningless — the output corresponds to no underlying structure.
- Simon's: if f has no period (no hidden s), then f is injective, the sampled y vectors are uniformly random, and the linear system you solve at the end picks up a random vector rather than a meaningful one.
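A quick numeric check of the first bullet, computing the |0^n\rangle-outcome probability for a constant, a balanced, and a broken-promise f (a sketch; 10 zeros and 6 ones stands in for the 60/40 split on 16 inputs):

```python
def zero_outcome_prob(values):
    """P(|0^n>) after H-oracle-H: the amplitude is (1/N) * sum_x (-1)^f(x)."""
    amp = sum((-1) ** v for v in values) / len(values)
    return amp ** 2

print(zero_outcome_prob([0] * 16))            # constant:       1.0
print(zero_outcome_prob([0] * 8 + [1] * 8))   # balanced:       0.0
print(zero_outcome_prob([0] * 10 + [1] * 6))  # broken promise: 0.0625
```

The broken-promise case lands strictly between 0 and 1: a single run can report either "constant" or "balanced", and neither answer means anything.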
The rule: quantum speedups are for structured problems. Random problems are — empirically and provably — hard for quantum computers too.
Lesson 3 — Query complexity vs time complexity
The speedups you have measured are in query complexity — the number of oracle calls. In the oracle model, this is all that matters: one query = one application of U_f, regardless of how expensive U_f is internally.
But a real algorithm running on real hardware needs two things:
- Few queries — so the query-complexity speedup is pointing somewhere useful.
- An efficient implementation of the oracle — so each query is cheap in gates and time.
When both hold, you get a time-complexity speedup: the full algorithm runs faster, not just in query count.
Shor's algorithm is the poster case. The oracle is modular exponentiation f(k) = a^k \bmod N. Computing a^k \bmod N with a classical circuit takes O(\log^3 N) gates via repeated squaring: O(\log N) squarings, each an O(\log^2 N) schoolbook multiplication. Shor's query-complexity analysis says O(1) queries suffice, so the total gate complexity is O(\log^3 N). That is polynomial in \log N — a genuine time-complexity speedup over the classical number-field sieve, which is sub-exponential in \log N.
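The repeated-squaring count is easy to verify classically. A sketch with a multiplication counter (Python's built-in `pow(a, k, N)` implements the same idea):

```python
def modexp(a, k, N):
    """Right-to-left repeated squaring: O(log k) modular multiplications,
    each O(log^2 N) bit operations with schoolbook multiplication."""
    result, base, mults = 1, a % N, 0
    while k:
        if k & 1:
            result = result * base % N
            mults += 1
        base = base * base % N
        mults += 1
        k >>= 1
    return result, mults

value, mults = modexp(7, 2 ** 20, 10 ** 9 + 7)
print(mults)  # 22 — logarithmic in the exponent, not 2^20 multiplications
```

A 2^{20} exponent costs 22 modular multiplications, not a million: this logarithmic scaling is exactly what keeps each oracle query cheap.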
Deutsch-Jozsa and Bernstein-Vazirani have the same property: the oracles in practice (for contrived constant-or-balanced or linear functions) are cheap circuits, so the query-complexity wins translate directly to gate-count wins. Simon's algorithm can have an expensive oracle for some instances, but for the canonical Simon-problem formulation the oracle costs only O(n) gates.
When the query speedup does not translate to a time speedup. Grover's algorithm provides a \sqrt{N} query speedup for unstructured search. In the oracle model, this is impressive. But in practice, if the "database" is a list of N items, each query must touch the full database to compute f — and a classical algorithm can read the whole list in O(N) time, giving classical O(N) and quantum O(\sqrt{N}) gate counts. The \sqrt{N} speedup survives for functions f with cheap, structured descriptions (a short Boolean formula, say), but vanishes when the oracle is genuinely a flat list. The cost of the oracle is part of the wall-clock speedup, and ignoring it oversells the gains.
Lesson 4 — The speedup ladder
Arrange what you know about quantum algorithms on a vertical ladder by the size of the speedup. Four rungs, top to bottom: exponential, polynomial, quadratic, none. Each corresponds to a different kind of problem structure. Some detail on each rung.
Exponential. Reserved for problems with deep structure — almost always some form of the Hidden Subgroup Problem over an abelian group. Factoring, discrete log, period-finding, Pell's equation, and a handful of algebraic problems live here. Outside of HSP, quantum walks on some specific graphs (the "glued trees" problem, the Childs et al. welded-tree algorithm) give exponential separations in the oracle model, but these are exceptional.
Polynomial. Specific graph and algebraic problems have quantum algorithms with polynomial but not exponential advantages: e.g., matrix-matrix product verification uses O(n^{5/3}) quantum queries vs O(n^2) classical, for a \Theta(n^{1/3}) polynomial advantage. These gains are modest but real.
Quadratic. Grover's algorithm searches an unstructured domain of N items in O(\sqrt{N}) queries vs classical O(N). Amplitude amplification is the general template — almost any "find a needle in a haystack" problem benefits from a \sqrt{N}-factor improvement. The BBBV theorem (Bennett, Bernstein, Brassard, Vazirani 1997) proves this is optimal: no quantum algorithm can search an unstructured oracle in better than \Omega(\sqrt{N}) queries. Grover is tight.
None. Most problems. Sorting a list, multiplying two arbitrary matrices, running a compiler, serving a web request, computing the Fibonacci sequence — classical computers are at least as good, often better (because they have decades of optimisation and zero error correction overhead). A quantum computer running any of these is strictly slower than a laptop.
Lesson 5 — BQP vs BPP
The language that complexity theorists use to formalise "what quantum computers can do in polynomial time" is the class BQP.
BQP (Bounded-error Quantum Polynomial time) is the class of decision problems solvable by a quantum algorithm in polynomial time with error probability at most 1/3 on every input. The 1/3 is arbitrary: running the algorithm k times and taking a majority vote drives the error down exponentially in k (a Chernoff bound), so any constant strictly below 1/2 defines the same class.
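A quick check of the majority-vote amplification, using exact binomial arithmetic (no quantum content; the specific k values are arbitrary):

```python
from math import comb

def majority_error(p_err, k):
    """Probability that the majority of k independent runs is wrong (k odd)."""
    return sum(comb(k, i) * p_err ** i * (1 - p_err) ** (k - i)
               for i in range(k // 2 + 1, k + 1))

# Per-run error 1/3; the majority-vote error falls off exponentially in k.
for k in (1, 11, 101):
    print(k, majority_error(1 / 3, k))
```

With k = 1 the error is the raw 1/3; by k = 101 it is negligible. This is why the exact constant in the BQP definition does not matter.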
BPP (Bounded-error Probabilistic Polynomial time) is the classical analogue: polynomial-time algorithms allowed to flip coins, with bounded error.
The known containment relationships:
- \mathrm{P} \subseteq \mathrm{BPP}: trivially, because P ignores the random coins.
- \mathrm{BPP} \subseteq \mathrm{BQP}: any classical randomised algorithm can be simulated by a quantum circuit of polynomial size.
- \mathrm{BQP} \subseteq \mathrm{PSPACE}: any quantum circuit can be simulated in polynomial space classically, though the simulation takes exponential time.
Is BPP strictly smaller than BQP? Almost certainly yes — factoring is in BQP (via Shor's) and is believed to be outside BPP (nobody has found a randomised polynomial-time factoring algorithm in 50+ years of trying). But a proof of strict separation is not known; it would imply BPP ≠ PSPACE, which is a longstanding open problem in complexity theory.
Does BQP contain NP-complete problems? Almost certainly no. The best-known quantum algorithm for 3-SAT, Grover's search, reduces the 2^n brute force to roughly 2^{n/2} — still exponential. Complexity-theoretic evidence (relative to random oracles) suggests NP \not\subseteq BQP. Most researchers believe that NP-complete problems are hard for quantum computers just as they are hard for classical ones.
What BQP definitely contains. Factoring, discrete log, order-finding, abelian HSP, Pell's equation, quantum simulation (simulating local-Hamiltonian dynamics), some graph-coloring approximations, some lattice problems in specific regimes. The "interesting" problems for quantum computers are exactly the ones that admit HSP-style or phase-estimation-style algorithms.
Lesson 6 — What quantum will not do
A list of popular claims that collapse under scrutiny, one by one.
Hype check. Before you read another popular article about quantum computing, calibrate against this list. The hype-to-reality ratio in consumer media is enormous. The scientific reality is genuinely exciting; the inflated claims obscure it rather than reveal it.
"Quantum computers solve NP-complete problems in polynomial time." Not established, and complexity-theoretic evidence suggests they do not. Grover's algorithm only reduces the 2^n brute force for SAT to roughly 2^{n/2} — still exponential. A quantum polynomial-time SAT algorithm would have enormous consequences (including likely breaking most cryptography and many optimisation routines at once), and if one existed, the past 30 years of quantum-algorithm research would likely have found it.
"Quantum computers speed up arbitrary algorithms." No. For problems without specific structure (HSP, amplitude amplification, simulation), quantum offers no speedup or at best polynomial speedups that disappear once hardware constants are accounted for. A quantum computer running an inventory-management system is strictly slower than a laptop.
"Quantum database lookups are O(\sqrt{N}) instead of O(N)." Grover's algorithm searches an oracle — a function that, on input x, returns whether x is the target — in O(\sqrt{N}) queries. But a classical database with an index supports lookup in O(\log N) (binary search) or amortised O(1) (hash table). The \sqrt{N} "speedup" is relative to a linear scan, which no real database does. Grover helps when the "function" is a cheap Boolean expression, not a data structure.
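The arithmetic behind that claim, for an illustrative trillion-entry table:

```python
import math

N = 10 ** 12                        # table size (illustrative)
print(math.ceil(math.log2(N)))      # 40      — indexed classical lookup (binary search)
print(round(math.sqrt(N)))          # 1000000 — Grover oracle queries
print(N)                            # 10^12   — the linear scan Grover is compared against
```

Grover beats the linear scan handily, but the indexed classical lookup beats Grover by four orders of magnitude. The comparison that makes Grover look like a "database" win is against the wrong classical baseline.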
"Grover cracks AES in seconds." Grover's algorithm halves the effective key length: 256-bit AES becomes as hard as 128-bit classical AES against Grover-equipped attackers. But 128-bit security is still approximately 2^{128} operations — orders of magnitude beyond feasible on any quantum computer built or projected. Symmetric encryption survives with a 2\times key-length increase (AES-256 replacing AES-128, for instance). Grover does not break symmetric crypto; it softens it.
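A back-of-envelope for the 2^{128} figure. The 10^9 Grover iterations per second is a made-up, wildly optimistic rate used purely for illustration:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600
ops = 2 ** 128            # effective work factor Grover leaves for AES-256
rate = 10 ** 9            # hypothetical 1e9 Grover iterations/second (assumption)
years = ops / rate / SECONDS_PER_YEAR
print(f"{years:.1e} years")  # 1.1e+22 years
```

Roughly 10^{22} years, against a universe about 1.4 \times 10^{10} years old. "Softened, not broken" is the right summary.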
"Quantum computers break all encryption." Only public-key crypto based on factoring or discrete log. RSA, Diffie-Hellman, ECC — all vulnerable to Shor's. Post-quantum replacements (Kyber, Dilithium, McEliece, SPHINCS+) based on lattice, hash, and code problems are actively being standardised by NIST and have no known polynomial-time quantum attack. India's post-quantum transition planning, under the National Quantum Mission, anticipates exactly this shift.
"Quantum machine learning has beaten classical ML." No clean practical demonstration as of 2026. Several theoretically proposed quantum-ML algorithms (quantum SVM, quantum PCA) had apparent exponential speedups that were later "de-quantised" — classical algorithms achieving the same complexity were discovered (Tang 2019 and subsequent work). Quantum ML is an open research area; there is no demonstrated commercial advantage.
"We already have useful quantum computers." As of 2026, the state of the art is NISQ — Noisy Intermediate-Scale Quantum — with 100–1000 physical qubits and gate error rates around 10^{-3}. Useful fault-tolerant quantum computing requires logical qubits built from thousands of physical qubits each. Breaking 2048-bit RSA is estimated to require 20 million physical qubits with current error rates. The gap is still orders of magnitude, and bridging it is an active engineering problem.
Lesson 7 — The Shor-vs-Grover gap
The two most famous quantum algorithms illustrate the interference + structure principle perfectly.
Shor's algorithm gets an exponential speedup. Why? Because factoring, via its reduction to order-finding, is an HSP instance over \mathbb{Z} — it has deep Fourier-like structure. The algorithm's quantum Fourier transform is the exact tool that matches the problem's structure. One application of the QFT aligns the amplitudes so the period r shows up with high probability at the measured outcome.
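The order-to-factors direction of that reduction is purely classical and fits in a few lines. This sketch shows Shor's classical post-processing; the quantum part's only job is to supply the order r:

```python
from math import gcd

def factor_from_order(a, r, N):
    """Shor's classical post-processing: given the multiplicative order r of
    a mod N, try to split N. Returns None if r is odd or a^(r/2) = -1 mod N;
    in that case one retries with a fresh random a."""
    if r % 2:
        return None
    half = pow(a, r // 2, N)
    if half == N - 1:
        return None
    return gcd(half - 1, N), gcd(half + 1, N)

# Order of 7 mod 15 is 4 (7^4 = 2401 = 160*15 + 1)
print(factor_from_order(7, 4, 15))  # (3, 5)
```

The hard part, finding r, is exactly the period-finding step the QFT solves; everything after the measurement is gcd arithmetic.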
Grover's algorithm gets a quadratic speedup. Why only quadratic? Because unstructured search has no structure to exploit. The oracle tells you "this input is good" or "this input is bad," full stop. No periodicity, no linearity, no hidden subgroup — just a flat landscape with a single peak somewhere. The best you can do is amplitude amplification: each iteration rotates the current superposition slightly towards the target basis state. After \Theta(\sqrt{N}) iterations, the amplitude on the target is close to 1 and you measure. You cannot beat \sqrt{N} because without structure, you have no lever larger than the 1/\sqrt{N}-per-iteration angle the amplitude picks up.
The BBBV theorem formalises this: in the unstructured oracle model, any quantum algorithm needs \Omega(\sqrt{N}) queries to find a marked input with constant probability. Grover is optimal. No cleverer quantum algorithm for unstructured search exists, now or ever.
The contrast tells you the whole story. Structure (HSP) gives exponential speedup. No structure gives quadratic at best. The intermediate cases — polynomial speedups — arise when the problem has partial structure that gets partial interference advantage. This taxonomy is the modern view of quantum advantage, and it is stable: every new quantum algorithm discovered in the last two decades has fit somewhere on this ladder.
Example 1: Simon's algorithm fits all four lessons
Walk through Simon's and check each lesson against it.
Lesson 1 — interference is the engine. Simon's algorithm: Hadamard the input register, query the oracle, measure the output register (optional simplification), Hadamard the input register, measure. The structure is H-oracle-H — the oracle plants a phase pattern that encodes the hidden period s, and the second Hadamard turns phase into probability. Without the final Hadamard, measurement would return a random coset element, telling you nothing about s.
Lesson 2 — structure is required. Simon's problem has an explicit promise: f(x) = f(y) \iff x \oplus y \in \{0, s\}. Without this promise, the function is an arbitrary f and the algorithm's output (the sampled y-vectors) does not correspond to a meaningful s. The promise is the structure — specifically, it is the hidden-subgroup structure that interference exploits.
Lesson 3 — query complexity vs time complexity. The query-complexity win: O(n) queries vs classical \Omega(2^{n/2}) — exponential in the randomised model. The oracle is typically cheap to implement (a few gates for a specific test function), so the speedup carries through to time complexity as well. Simon's is an exponential win in full time complexity against randomised classical — the first such result in the oracle model.
Lesson 4 — speedup ladder. Simon's sits on the exponential rung alongside Shor's: both are abelian hidden-subgroup instances, Simon's over \mathbb{Z}_2^n and Shor's over \mathbb{Z}.
Result. Simon's algorithm is an interference-engine running on a structured problem (the hidden-subgroup promise) at the query-complexity level, producing an exponential query speedup that carries through to time complexity. All four lessons apply directly. This is the simplest non-trivial full-stack demonstration of quantum advantage — and it is why Simon's algorithm is often singled out as the algorithm that inspired Shor.
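The post-measurement statistics of Simon's algorithm are easy to mimic classically for small n. This is a sketch with an assumed hidden s; brute-force recovery stands in for the GF(2) Gaussian elimination a real implementation would use:

```python
import random

def dot2(a, b):
    """GF(2) inner product of two bitmask integers."""
    return bin(a & b).count("1") % 2

def simon_sample(s, n):
    """One run of Simon's circuit yields a uniform y with y . s = 0 (mod 2);
    rejection sampling reproduces exactly that distribution."""
    while True:
        y = random.randrange(2 ** n)
        if dot2(y, s) == 0:
            return y

def recover_s(samples, n):
    """Find a nonzero s' orthogonal to every sample (brute force; tiny n only)."""
    for cand in range(1, 2 ** n):
        if all(dot2(cand, y) == 0 for y in samples):
            return cand
    return None

n, s = 4, 0b1011
samples = [simon_sample(s, n) for _ in range(50)]
print(bin(recover_s(samples, n)))  # 0b1011, with overwhelming probability
```

Once the samples span the (n-1)-dimensional space orthogonal to s, the only nonzero solution is s itself. O(n) quantum samples suffice in expectation; the 50 here is overkill to make the demo reliable.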
Example 2: Why Grover's speedup is only quadratic
Now look at Grover's search and see why the same lessons give only a quadratic speedup.
The problem. You have a function f: \{0, 1, \ldots, N-1\} \to \{0, 1\} and you want to find an input x^* with f(x^*) = 1. No promise beyond "there is exactly one such x^*."
Lesson 1 — interference is the engine. Grover's algorithm applies the Hadamard to create an equal superposition, then an "oracle + diffusion" block that slightly rotates the state toward the target |x^*\rangle. Each iteration adds \Theta(1/\sqrt{N}) to the amplitude on |x^*\rangle. After \Theta(\sqrt{N}) iterations, the amplitude is \Theta(1) and measurement gives the answer. Interference is present — destructive interference suppresses the non-target basis states — but with far less structure to exploit.
Lesson 2 — structure is required. Grover's has no structural promise. The only assumption is "there is a marked input." The oracle tells you, for each input, whether it's marked — and nothing more. No hidden period, no linearity, no group structure.
Why this limits the speedup. Each Grover iteration rotates the state by an angle of \Theta(1/\sqrt{N}) toward |x^*\rangle. The full rotation needed is about \pi/2, from the uniform initial state, which is almost orthogonal to the target, to the target itself. So the number of iterations is \Theta(\sqrt{N}) — not because of a clever algorithmic choice, but because that is the smallest rotation angle the absence of structure permits. Without periodicity or linearity, you have no lever bigger than 1/\sqrt{N} per iteration.
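The rotation picture gives a closed form for the success probability that you can check numerically (standard Grover analysis; nothing here is implementation-specific):

```python
import math

def grover_success(N, t):
    """P(measuring the single marked item) after t Grover iterations:
    sin^2((2t + 1) * theta), with theta = arcsin(1/sqrt(N))."""
    theta = math.asin(1 / math.sqrt(N))
    return math.sin((2 * t + 1) * theta) ** 2

N = 2 ** 20
t_opt = math.floor(math.pi / 4 * math.sqrt(N))  # ~ (pi/4) * sqrt(N)
print(t_opt, round(grover_success(N, t_opt), 6))  # 804 1.0
```

About 804 iterations, not the million a linear scan would need — and running past t_opt over-rotates, so the success probability starts falling again. The rotation angle, not cleverness, sets the \sqrt{N} cost.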
The BBBV lower bound. BBBV (Bennett, Bernstein, Brassard, Vazirani 1997) proved that in the unstructured oracle model, every quantum algorithm needs \Omega(\sqrt{N}) queries with constant success probability. Grover's is optimal. No clever trick gives a polynomial-time unstructured search algorithm, and no future algorithm ever will.
Ladder rung. Grover's sits on the quadratic rung — better than nothing, worse than polynomial or exponential. The quadratic rung is the ceiling for every unstructured problem.
Result. Grover's speedup is quadratic because unstructured search has no structure to exploit; without structure, interference can only rotate amplitude in \Theta(1/\sqrt{N}) steps; so \sqrt{N} queries is both achievable (Grover) and optimal (BBBV). The lessons explain not just the algorithm but also the upper bound on the speedup.
Common confusions
- "Quantum computers are faster in general." No — only for certain problems. For most day-to-day computation (sorting, web serving, graphics rendering, running a compiler), classical computers are at least as good and usually better due to zero error-correction overhead and decades of hardware optimisation.
- "Quantum speedups come from parallelism — trying all inputs at once." No — this is the single most damaging misconception. Quantum computers manipulate amplitudes through interference; measurement returns exactly one outcome per run. The \sqrt{N} or exponential speedup is not from N parallel computations; it is from arranging amplitudes so the desired outcome dominates.
- "Quantum computers will solve NP-complete problems." No evidence. Grover's search reduces the 2^n brute force for SAT to roughly 2^{n/2} — still exponential. Complexity theory suggests NP \not\subseteq BQP. A polynomial-time quantum SAT algorithm would be a seismic event; nothing approaches it after 30+ years of research.
- "Grover's lets you crack AES in seconds." No. Grover's halves the effective key length: AES-256 with Grover is as hard as AES-128 classically, which is around 2^{128} operations — infeasible on any conceivable hardware. The fix is to use AES-256 instead of AES-128. Symmetric crypto survives; only public-key crypto based on factoring or discrete log is broken.
- "The exponential speedup is from superposition." Superposition is a precondition for quantum algorithms, but it is not the mechanism. Superposition alone gives you a uniform superposition over all inputs, which yields a uniformly random measurement — no useful information. The mechanism is interference acting on the amplitudes produced by the oracle. Superposition sets the stage; interference does the work.
- "If a problem has a quantum speedup, that speedup scales to all similar problems." No. Factoring has an exponential quantum speedup via Shor's. "Similar" problems — say, arbitrary number-theoretic computations — do not automatically inherit the speedup. Each problem's speedup depends on whether it has exploitable structure (HSP, Grover-compatible search space, etc.), not on family resemblance.
- "Quantum query complexity separation implies time-complexity separation." Not automatic. A query-complexity speedup requires an efficient oracle implementation to translate to time complexity. For Shor's, the oracle (modular exponentiation) is classically polynomial. For some pathological oracle-model separations, the "oracle" is a specifically-constructed function with no efficient implementation — the speedup is a theorem about the oracle model, not a recipe for a fast algorithm.
Going deeper
You have the seven lessons and the examples against two concrete algorithms. The rest of this section collects the formal machinery: how lower bounds like BBBV are actually proved, the relationship between BQP and other complexity classes (including the PH hierarchy), a handful of concrete oracle separations that have resisted "de-quantisation," and the quantum-speedup checklist you can apply to new problems.
Query-complexity lower bounds — adversary and polynomial methods
Two main techniques prove quantum query-complexity lower bounds.
The adversary method (Ambainis 2000) — you are given two families of inputs, X (for which the answer is 0) and Y (for which the answer is 1). You design a "relation" between pairs (x, y) \in X \times Y and show that every oracle query can change the "overlap" between the quantum algorithm's state on x and on y by at most a certain amount. To distinguish x from y, the algorithm must flip the overlap from close to 1 (indistinguishable) to close to 0 (distinguishable), requiring many queries. The adversary bound is tight for many problems; it is currently the strongest general lower-bound technique.
The polynomial method (Beals, Buhrman, Cleve, Mosca, de Wolf 1998) — show that the acceptance probability of any quantum query algorithm is a polynomial in the oracle's input bits whose degree is at most twice the number of queries. If the problem requires a high-degree polynomial to approximate, you need many queries. This gives tight bounds for problems like majority, parity, and element distinctness, and it yields a clean alternative proof of the \Omega(\sqrt{N}) lower bound for search.
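Applied to search, the method can be stated in one line (a sketch; \widetilde{\deg} denotes approximate degree):

```latex
\deg p_{\text{acc}} \le 2T, \qquad
\widetilde{\deg}(\mathrm{OR}_N) = \Theta(\sqrt{N})
\;\Longrightarrow\;
2T \ge \widetilde{\deg}(\mathrm{OR}_N) = \Omega(\sqrt{N})
\;\Longrightarrow\;
T = \Omega(\sqrt{N}).
```

Here p_{\text{acc}} is the acceptance probability of a T-query algorithm as a polynomial in the oracle bits, and search amounts to approximating \mathrm{OR}_N.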
BBBV — the optimality of Grover's
The Bennett-Bernstein-Brassard-Vazirani theorem (published 1997) was proved before Grover published his algorithm in 1996. The theorem states: in the unstructured oracle model for search, any quantum algorithm that finds a marked input with probability at least 2/3 must make \Omega(\sqrt{N}) queries. The original proof uses a hybrid argument: change the oracle's answer at a single input and bound how little the algorithm's state can shift per query, so few queries cannot distinguish "no marked item" from "one marked item". The polynomial method later gave an alternative proof via the \Omega(\sqrt{N}) approximate degree of OR. Either way, T = \Omega(\sqrt{N}).
BBBV is not just a lower bound; it is a structural statement about quantum computing. It says: the universe does not permit any quantum algorithm to do better than \sqrt{N} on unstructured problems. This is a physical constraint, not a gap in our cleverness. Combined with Grover's matching upper bound, it fully resolves the unstructured-search problem.
BQP containment and hardness
\mathrm{BQP} \subseteq \mathrm{PP} \subseteq \mathrm{PSPACE}, where PP is "probabilistic polynomial time with unbounded error" — the error guarantee is weak, which makes PP a very large class (it contains NP). The inclusion \mathrm{BQP} \subseteq \mathrm{PP} (Adleman, DeMarrais, Huang 1997) is the best known upper bound on BQP in classical terms.
\mathrm{BPP} \subsetneq \mathrm{BQP} is widely conjectured (Shor's algorithm is the evidence) but not proved. \mathrm{NP} \not\subseteq \mathrm{BQP} is believed; the best evidence is that no polynomial-time quantum algorithm is known for any NP-hard problem, and complexity-theoretic barriers (oracle separations, specifically Aaronson's work on the polynomial hierarchy) suggest fundamental obstacles.
Surprisingly, there are also oracle separations going the other way: relative to certain oracles, \mathrm{BQP} \not\subseteq \mathrm{PH} (the polynomial hierarchy), so in the black-box world quantum advantage can exceed what any level of the polynomial hierarchy can match. Raz and Tal (2019) proved this for a specific oracle (Forrelation), giving the strongest-known separation between BQP and classical complexity.
Rigorous oracle separations — specific problems
Several problems have proven exponential quantum-classical separations in the oracle model:
- Simon's problem — exponential in the randomised oracle model.
- Welded tree / glued trees (Childs, Cleve, Deotto, Farhi, Gutmann, Spielman 2003) — exponential separation for a quantum walk on a specific graph.
- Forrelation (Aaronson, Ambainis) — exponential separation that also places BQP outside the polynomial hierarchy relative to an oracle.
Each of these relies on a different form of structure: Simon's on hidden-subgroup periodicity, glued trees on graph structure, Forrelation on a Fourier-analytic test. All non-trivial quantum speedups trace back to some form of structure — the ladder is not a coincidence, it is a consequence of how interference works.
The quantum speedup checklist
When a new problem arrives and someone asks "does a quantum algorithm help?" — run this list:
- Is it a hidden-subgroup problem over an abelian group? If yes, a polynomial-time quantum algorithm exists: Shor's applies or Simon's generalises. This is the best case.
- Is it a hidden-subgroup problem over a non-abelian group? If yes, probably no polynomial-time algorithm is known. Dihedral and symmetric-group HSP are the frontier. Kuperberg's sub-exponential algorithm might apply.
- Is it a search problem without structure? If yes, Grover's quadratic speedup applies. No better is possible (BBBV).
- Is it amenable to quantum walks or amplitude estimation? Specific graph and matrix problems have polynomial speedups via these techniques.
- Is it simulating a quantum system? Quantum computers have natural polynomial-time algorithms for local-Hamiltonian simulation — the original motivation for quantum computing (Feynman 1982).
- Is it something else? Probably classical is as good. Exception: some specific structured-linear-algebra tasks (HHL algorithm for linear systems with specific sparse matrices) have polynomial speedups — but check the fine print, as many such "speedups" have been de-quantised.
Indian context — TIFR, IISc, and complexity theory research
Indian research groups have contributed to the theoretical foundations underlying these lessons. Ashwin Nayak, Rahul Jain, Pranab Sen, and others have worked on communication-complexity lower bounds and the quantum adversary method. The IISc Bangalore theoretical-CS group and TIFR Mumbai's School of Technology and Computer Science are the main academic hubs for quantum complexity theory in India. The lessons in this article — especially the line between what quantum can and cannot do — reflect a research tradition India has contributed to, and whose continuation is part of the academic layer of the National Quantum Mission.
Where this leads next
- Grover's algorithm — the unstructured-search algorithm, the quadratic speedup, and the BBBV optimality argument.
- Shor's algorithm — the exponential-speedup factoring algorithm, the crown jewel of HSP over \mathbb{Z}.
- BQP vs BPP — the formal complexity class distinguishing quantum from classical polynomial-time computation.
- The Hidden Subgroup Problem — the framework where almost every exponential quantum speedup lives.
- Amplitude amplification — the general template that Grover's search is a special case of.
- Quantum simulation — the "natural" quantum-advantage application that Feynman originally envisioned.
References
- Bennett, Bernstein, Brassard, Vazirani, Strengths and weaknesses of quantum computing (1997) — the BBBV theorem proving Grover's optimality. arXiv:quant-ph/9701001.
- Wikipedia, BQP — the standard encyclopaedic treatment of the complexity class.
- John Preskill, Lecture Notes on Quantum Computation, Ch. 6 — theory.caltech.edu/~preskill/ph229.
- Nielsen and Chuang, Quantum Computation and Quantum Information, §6.6 (quantum search and speedups) — Cambridge University Press.
- Stephen Jordan, Quantum Algorithm Zoo — a curated list of known quantum algorithms by category.
- Andrew Childs and Wim van Dam, Quantum algorithms for algebraic problems (2010) — a broad survey of structured-problem speedups. arXiv:0812.0380.