In short
Classical complexity gives you a vocabulary for "what classical computers can and cannot easily do." P is the class of problems solvable in polynomial time by a deterministic algorithm — sorting, primality testing, shortest paths. BPP is the same thing but the algorithm is allowed to toss coins and make errors at most 1/3 of the time — likely equal to P, but not proved. NP is the class of problems whose correct answers you can verify in polynomial time — Sudoku, SAT, the travelling salesman decision problem. NP-complete problems are the hardest in NP: solve one in polynomial time and you have solved them all. PSPACE is everything you can do with polynomial memory and any amount of time — it contains NP. The containments everyone accepts are \mathrm{P} \subseteq \mathrm{BPP}, \mathrm{P} \subseteq \mathrm{NP}, \mathrm{NP} \subseteq \mathrm{PSPACE}. Whether P equals NP — a \$1 million prize, fifty years of work, nobody has a proof either way. You need this ladder because BQP — the quantum analogue of P — is defined on top of it.
Suppose you are given a 300-digit number and asked whether it is prime. Is this hard?
Suppose you are given the same 300-digit number and asked to factor it into its two prime factors. Is this hard?
Suppose you are given a 9 \times 9 Sudoku grid and asked whether it has a solution. Now a 1000 \times 1000 Sudoku grid. Now a 10^6 \times 10^6 grid. At what size does "Sudoku" become unreasonable, and at what size does "primality testing" become unreasonable?
These questions sound similar. They are not. Primality testing for a 300-digit number runs in about a second on a laptop. Factoring the same number takes every classical supercomputer on Earth longer than the age of the universe. Sudoku on a 9 \times 9 grid is a ten-minute puzzle; generalised to an n \times n board, it is NP-complete, and at 10^6 \times 10^6 no known algorithm finishes within the lifetime of the universe. These gaps between "easy" and "hard" are not cosmetic — they are the spine of theoretical computer science.
Complexity theory is the field that organises problems by difficulty. The objects it studies are called complexity classes — buckets of problems, each bucket defined by how much time, space, or randomness a solving algorithm is allowed to use. Before you can understand what a quantum computer does — whether it gives you a genuine speedup over these classes, and what the class BQP even means — you need the classical vocabulary. This article is that vocabulary.
The setup — what is a "problem"?
Throughout this article, "problem" means a decision problem: a question with a yes/no answer, parameterised by an input. Given an integer N, is N prime? Given a graph G, is G 3-colourable? Given a Boolean formula \phi, does \phi have a satisfying assignment? The answer is always one bit. This might feel restrictive — "what about finding the factors, not just asking if they exist?" — but decision problems are enough for almost all of complexity theory. If you can decide "does N have a factor less than k?" in polynomial time, you can find the factor by binary search.
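The binary-search reduction in the last sentence can be sketched directly. A toy version — trial division stands in for the hypothetical polynomial-time decision oracle, purely to make the sketch runnable; the function names are mine:

```python
def has_factor_below(N, k):
    """Decision oracle: does N have a nontrivial factor d, 1 < d < k?
    (Trial division here is a slow stand-in, only to run the sketch.)"""
    return any(N % d == 0 for d in range(2, min(k, N)))

def smallest_factor(N):
    """Recover N's smallest nontrivial factor using only O(log N)
    yes/no oracle calls - binary search over the threshold k."""
    if N < 4 or not has_factor_below(N, N):
        return N                           # prime (or too small): no factor
    lo, hi = 2, N - 1                      # invariant: smallest factor in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if has_factor_below(N, mid + 1):   # is there a factor <= mid?
            hi = mid
        else:
            lo = mid + 1
    return lo
```

Only logarithmically many oracle calls are made, so if the decision version were polynomial-time, the search version would be too.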
A problem also comes with an input size n — the number of bits needed to write down the input. A 300-digit number has input size about 1000 bits (\log_2(10^{300}) \approx 997). A graph on V vertices has input size about V^2 bits (one bit per possible edge). Complexity classes measure how long an algorithm takes as a function of n.
The two things that matter most:
- Polynomial time (like n, n^2, n^{100}) is called efficient. Polynomials are the class of growth rates where doubling the input makes the runtime at most a constant factor bigger.
- Exponential time (like 2^n, 3^n, n^n) is not efficient. Exponentials double with every extra bit of input. At n = 60, a 2^n algorithm takes longer than the age of the universe.
This split — polynomial versus exponential — is the central dichotomy. The whole ladder of complexity classes is about which problems land on the polynomial side of it.
P — deterministic polynomial time
P stands for Polynomial time. A problem is in P if there is a deterministic algorithm — no coin flips, no guessing, no nondeterminism — that solves every instance of the problem in time polynomial in the input size.
Example problems in P:
- Sorting n numbers. Any decent sort (merge sort, quicksort average case, heap sort) runs in O(n \log n).
- Shortest paths in a graph. Dijkstra's algorithm finds the shortest path from a source to every vertex in O(V^2) or O(E \log V).
- Primality testing. Given an integer N with n digits, decide whether N is prime. This was famously not known to be in P until 2002, when three researchers at IIT Kanpur — Manindra Agrawal, Neeraj Kayal, and Nitin Saxena — published what is now called the AKS primality test. It runs in polynomial time (in n, the number of digits) for every input. Before AKS, the best known deterministic algorithms were either conditional on unproved number-theoretic assumptions or gave probabilistic answers. AKS is one of the cleanest results in modern theoretical computer science, and it is entirely Indian.
- Matrix multiplication. Multiplying two n \times n matrices is in P — school algorithm takes O(n^3), Strassen's is O(n^{2.807}), and Coppersmith–Winograd-style algorithms push this to around O(n^{2.37}).
- Linear programming. Deciding whether a system of linear inequalities has a solution is in P (via the ellipsoid method or interior-point methods).
The intuition: P is the class of problems with "honest fast algorithms." If you can write a program that always gives the right answer, with no randomness and no guessing, and it runs fast as the input grows — your problem is in P.
BPP — randomised polynomial time
Now suppose your algorithm is allowed to flip coins. You write a procedure that, on a given input, computes an answer but also uses some random bits along the way. You require two things:
- The algorithm runs in polynomial time on every input and every choice of random bits.
- The algorithm gets the right answer with probability at least 2/3 on every input.
The "on every input" is strict — it is not enough for the algorithm to succeed on 2/3 of inputs; it has to succeed with probability at least 2/3 for each and every possible input, where the probability is over the coin flips.
This defines BPP — Bounded-error Probabilistic Polynomial time.
Why 2/3 and not 99\%? Because the choice does not matter. If you have any constant success probability above 1/2 (say 51\%), you can boost it. Run the algorithm k independent times and take the majority vote. By the Chernoff bound, the majority is wrong with probability that shrinks as \exp(-\Omega(k)). A few dozen repeats pushes the error to 2^{-100} — astronomically small. So 2/3 and 1 - 2^{-100} describe the same class; the number is chosen for concreteness.
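The boosting argument is easy to check numerically. A minimal sketch (the function name is mine) computing the exact probability that a majority vote over k independent runs is correct:

```python
from math import comb

def majority_success(p, k):
    """Exact probability that a majority of k independent runs is
    correct, when each run is correct with probability p (k odd)."""
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(k // 2 + 1, k + 1))

print(majority_success(2/3, 1))    # 0.666...: a single run
print(majority_success(2/3, 101))  # very nearly 1
```

The success probability climbs towards 1 exponentially fast in k, exactly as the Chernoff bound promises.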
Examples in BPP:
- Polynomial identity testing. Given two polynomial expressions p(x_1, \ldots, x_n) and q(x_1, \ldots, x_n) in many variables, decide whether they are identical as polynomials. The trick is to plug in random values for the variables: by the Schwartz-Zippel lemma, if the polynomials are different, they disagree at a random point with high probability. This is in BPP. Whether it is in P is a famous open problem — equivalent to derandomising BPP.
- Miller-Rabin primality test. Classically, before AKS, the fastest way to test primality for a 1000-digit number was a probabilistic algorithm: flip coins, possibly make an error with probability \leq 2^{-100}. Miller-Rabin showed that primality is in BPP (which, by AKS, we now know is a weaker statement than "primality is in P").
- Monte Carlo integration. Estimating the value of a high-dimensional integral by sampling random points is a randomised polynomial-time algorithm for a huge class of problems.
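The Schwartz-Zippel idea from the first bullet fits in a few lines. A toy sketch — the names and the choice of modulus are mine:

```python
import random

def polynomials_identical(p, q, nvars, trials=20, modulus=10**9 + 7):
    """Schwartz-Zippel test: evaluate both expressions at random points.
    Equal polynomials always agree; different ones disagree at a random
    point with high probability (error <= degree/modulus per trial)."""
    for _ in range(trials):
        point = [random.randrange(modulus) for _ in range(nvars)]
        if p(*point) % modulus != q(*point) % modulus:
            return False          # definitely different polynomials
    return True                   # identical with overwhelming probability

p = lambda x, y: (x + y) ** 2
q = lambda x, y: x * x + 2 * x * y + y * y    # same polynomial, expanded
r = lambda x, y: x * x + y * y                # differs by the 2xy term
```

Note the asymmetry typical of BPP-style tests: a "different" verdict is certain, an "identical" verdict is only overwhelmingly probable.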
The containment \mathrm{P} \subseteq \mathrm{BPP} is trivial: a deterministic algorithm is just a randomised algorithm that ignores its random bits.
Is BPP strictly bigger than P? This is one of the great open questions of theoretical CS, but the current belief — based on derandomisation results by Impagliazzo, Wigderson, and others — is that \mathrm{P} = \mathrm{BPP}. Every BPP algorithm, if this conjecture is right, can be derandomised with only polynomial overhead. The conjecture is not proved, but the evidence for it is strong. You can think of BPP as "probably the same as P, but we have not finished the proof."
NP — verifiable in polynomial time
NP is not what most people think it is. The letters stand for Nondeterministic Polynomial time, but the cleanest way to think about it is in terms of verification.
A problem is in NP if, for every "yes" instance, there is a short certificate (a hint, a proof, a witness) that an efficient deterministic verifier can check in polynomial time.
- SAT (Boolean satisfiability). Given a Boolean formula \phi in variables x_1, \ldots, x_n, is there an assignment making \phi true? The certificate is the assignment itself — n bits. The verifier plugs in the assignment and evaluates \phi, which takes polynomial time in the formula's size. SAT is in NP.
- Sudoku. A valid Sudoku solution is a certificate; the verifier checks each row, column, and 3 \times 3 block. Polynomial time. Sudoku is in NP.
- Hamiltonian path. Given a graph, is there a path visiting every vertex exactly once? The path is the certificate. The verifier checks each consecutive pair of vertices is connected by an edge. In NP.
- Traveling salesman decision problem. Given a weighted graph and a threshold k, is there a tour of total weight \leq k? The tour is the certificate.
- Factoring. Given N and k, does N have a factor d with 1 < d < k? The factor is the certificate; the verifier checks d < k and that d divides N — a single division. Factoring is in NP (and, as discussed later, in \mathrm{NP} \cap \mathrm{coNP}: there are short certificates for "no factor below k" too).
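Every bullet above follows the same pattern: a short certificate plus a polynomial-time verifier. A minimal SAT verifier as a sketch (the encoding conventions are mine):

```python
def verify_sat(clauses, assignment):
    """Polynomial-time NP verifier for CNF-SAT.
    clauses: list of clauses, each a list of literals; literal v > 0
    means x_v, literal -v means NOT x_v.
    assignment: the certificate, a dict variable -> bool."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# phi = (x1 OR NOT x2) AND (x2 OR x3)
phi = [[1, -2], [2, 3]]
print(verify_sat(phi, {1: True, 2: True, 3: False}))   # True
print(verify_sat(phi, {1: False, 2: True, 3: False}))  # False
```

The verifier does one pass over the formula — linear time in its size. Finding the certificate is the hard part; checking it is not.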
Every problem in P is also in NP. If you can solve a problem in polynomial time, you can also verify it — the verifier just runs the polynomial-time algorithm and compares against the certificate (or ignores the certificate entirely). So \mathrm{P} \subseteq \mathrm{NP}.
NP-complete — the hardest problems in NP
Some problems in NP have the property that every other NP problem can be reduced to them in polynomial time. A polynomial-time reduction from problem A to problem B is an algorithm that takes an instance of A and produces, in polynomial time, an equivalent instance of B — so that solving B solves A. If every NP problem reduces to your specific problem, that problem is as hard as every NP problem — it is NP-hard. If it is both NP-hard and in NP, it is NP-complete.
NP-complete problems you have heard of:
- SAT — the original, proved NP-complete by Cook (1971) and independently by Levin. Every NP problem reduces to SAT.
- 3-SAT — SAT restricted to clauses with exactly 3 literals. Still NP-complete.
- 3-Colouring. Given a graph, can it be coloured with 3 colours so no edge connects two vertices of the same colour?
- Knapsack. Given items with weights and values and a capacity, is there a subset of value \geq k fitting in the capacity?
- Travelling Salesman (decision version).
- Clique. Given a graph G and a number k, does G contain a complete subgraph on k vertices?
- Integer Programming. Deciding whether a system of linear inequalities has an integer solution.
The reductions linking them form a web: if one NP-complete problem turns out to be in P, they all do — and every other NP problem with them. This is the single cleanest reason P vs NP is so important — one polynomial-time algorithm for any NP-complete problem is one polynomial-time algorithm for all of them.
Not every NP problem is NP-complete. Factoring is in NP but is not believed to be NP-complete. Graph isomorphism is in NP but is widely believed to be strictly between P and NP-complete. These NP-intermediate problems — if they exist — live in a gap that Ladner's theorem (1975) proved has to be non-empty if P ≠ NP.
P vs NP — the million-dollar question
The P vs NP question asks: does every NP problem have a polynomial-time algorithm? Formally: is \mathrm{P} = \mathrm{NP}?
Every NP-complete problem fails to have a known polynomial-time algorithm. The fastest known SAT solvers are exponential in the worst case. Every attempted polynomial-time algorithm for an NP-complete problem in 50 years of effort has failed. The belief is that \mathrm{P} \neq \mathrm{NP} — that there are problems you can verify fast but cannot solve fast.
But it is not proved. Not even close.
The Clay Mathematics Institute has posted a \$1 million prize for a proof either way. A proof that \mathrm{P} = \mathrm{NP} would be earth-shattering: every NP-complete problem would get a fast algorithm, cryptography would be in crisis (most modern crypto is not NP-hard but the implications would still be enormous), mathematical theorem-proving would be automatable, machine learning would be revolutionised. A proof that \mathrm{P} \neq \mathrm{NP} would vindicate the universal belief and would require a new technique — the known proof-barrier results (relativisation, natural proofs, algebrisation) rule out most existing techniques.
As of 2026, no human knows the answer. This matters for quantum computing because: if \mathrm{P} = \mathrm{NP}, quantum computers become less special. If \mathrm{P} \neq \mathrm{NP}, the question of whether BQP can solve NP-complete problems becomes the central question of quantum complexity theory — and the answer, as far as anyone can prove, is no.
PSPACE — polynomial space, any time
PSPACE is the class of problems solvable using a polynomial amount of memory, with no bound on runtime. An algorithm gets polynomial work space (plus the input), can take as many steps as it likes, and must eventually halt with a correct answer.
PSPACE contains NP — and, as far as anyone can tell, much more:
- \mathrm{NP} \subseteq \mathrm{PSPACE}. If a problem is in NP, you can check every possible certificate one at a time in polynomial space — if one certificate is accepted by the verifier, output yes.
- \mathrm{coNP} \subseteq \mathrm{PSPACE} — same argument.
- \mathrm{BPP} \subseteq \mathrm{PSPACE} — simulate all coin flip outcomes.
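The first bullet's argument — enumerate certificates, reuse the space — can be sketched directly. A toy version (names mine; on real inputs the running time is exponential, but the memory footprint is one certificate):

```python
from itertools import product

def np_problem_in_pspace(verify, cert_len):
    """Decide an NP problem in polynomial space: enumerate all
    2^cert_len candidate certificates, holding only one in memory
    at a time. Exponential time, polynomial space."""
    return any(verify(cert) for cert in product((0, 1), repeat=cert_len))

# Toy verifier accepting exactly one certificate:
print(np_problem_in_pspace(lambda c: c == (1, 0, 1), 3))  # True
```

The generator never materialises the full list of certificates, which is exactly the time-for-space trade the containment proof makes.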
Examples in PSPACE:
- Generalised chess (or Go, or checkers). Given a position on an n \times n board, does the player to move have a winning strategy? Checking this requires exploring a game tree with exponentially many positions, but each branch can be checked in polynomial space by reusing memory. Chess and Go scaled up to arbitrary board sizes are PSPACE-complete.
- Quantified Boolean formulas (QBF). A Boolean formula preceded by alternating universal and existential quantifiers: \forall x_1 \exists x_2 \forall x_3 \ldots : \phi(x_1, \ldots, x_n). Deciding whether it is true is PSPACE-complete. The alternating quantifiers read like a two-player game — "for every attack, is there a response, such that for every counter, there is a reply, \ldots" — which is exactly why game problems cluster at this level.
- Regular expression equivalence. Deciding whether two regular expressions describe the same language is PSPACE-complete.
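The memory-reuse idea behind the game examples can be sketched with Nim: the game tree has exponentially many leaves, but a depth-first search only ever holds one root-to-leaf path in memory. A toy sketch (no memoisation, so it is slow on large positions; the function name is mine):

```python
def game_tree_first_player_wins(piles):
    """Exhaustive game-tree search for plain Nim (piles is a tuple).
    The tree is exponentially large, but depth-first evaluation keeps
    only one root-to-leaf path in memory - polynomial space."""
    moves = [
        piles[:i] + (piles[i] - take,) + piles[i + 1:]
        for i in range(len(piles))
        for take in range(1, piles[i] + 1)
    ]
    # With no moves left (all piles empty) the player to move has lost.
    return any(not game_tree_first_player_wins(m) for m in moves)
```

The same shape — minimax recursion, memory proportional to depth — is what puts generalised chess and Go in PSPACE.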
PSPACE is a very large class — it contains both NP and coNP, and much more. Nobody has proved \mathrm{PSPACE} \neq \mathrm{NP} (indeed, even \mathrm{P} \neq \mathrm{PSPACE} is open), but everyone believes both separations.
The hierarchy — how the classes fit together
The proved containments are:

\mathrm{P} \subseteq \mathrm{BPP} \subseteq \mathrm{PSPACE} \qquad \mathrm{P} \subseteq \mathrm{NP} \subseteq \mathrm{PSPACE}

Every one of these \subseteq signs is proved. What is not proved is whether any of them is strict:
- \mathrm{P} \stackrel{?}{=} \mathrm{BPP} — probably yes, no proof.
- \mathrm{P} \stackrel{?}{=} \mathrm{NP} — probably no, no proof.
- \mathrm{BPP} \stackrel{?}{\subseteq} \mathrm{NP} — probably yes, no proof.
- \mathrm{NP} \stackrel{?}{=} \mathrm{PSPACE} — probably no, no proof.
If any single one of these gets resolved, it would be the most important theoretical CS result of the century.
Why this matters for quantum computing
The reason you are reading this article in a quantum computing curriculum is that BQP — Bounded-error Quantum Polynomial time, the class you meet in the next chapter — sits in a specific position on this ladder.
Three facts known about BQP and these classes:
- \mathrm{BPP} \subseteq \mathrm{BQP}. A quantum computer can simulate any classical randomised algorithm. This is trivial — the quantum computer just prepares |0\rangle states, applies classical (Toffoli) gates, and measures. No quantumness needed.
- \mathrm{BQP} \subseteq \mathrm{PSPACE}. Bernstein and Vazirani proved in 1997 that any quantum polynomial-time algorithm can be simulated in polynomial space classically (though possibly in exponential time). Quantum computers are not infinitely powerful — they cannot solve problems outside PSPACE.
- BQP contains problems not known to be in BPP. Factoring and discrete log — via Shor's algorithm — are in BQP. Nobody has a polynomial-time classical algorithm for them, randomised or otherwise. This is concrete evidence that \mathrm{BPP} \neq \mathrm{BQP}, though not a proof: \mathrm{BPP} = \mathrm{BQP} would put factoring in BPP, and nobody has ruled that out either.
Is BQP contained in NP? No known proof either way. Intuitively, if a quantum computer outputs "yes" on some input, there does not need to be a short classical certificate — the certificate would have to describe the quantum computation, which may require super-polynomial amplitude information.
Is NP contained in BQP? Probably not. Grover's algorithm cuts brute-force SAT search from 2^n to roughly \sqrt{2^n} steps — still exponential. No polynomial-time quantum algorithm for any NP-complete problem is known, and there are oracle-based barriers suggesting none exists.
So BQP sits:
- Above BPP (with factoring as witness).
- Below PSPACE.
- In an unknown relationship to NP.
This is the reason complexity classes matter for quantum computing — not as an abstract exercise, but because the precise question what can a quantum computer efficiently do? is exactly a question about where BQP lives on this diagram. The rest of this part of the curriculum makes that question concrete.
Example 1: Classifying four familiar problems
For each problem, name the smallest complexity class it provably sits in — based on what is known today.
Problem A — "Is N prime?" Input: a number N with n digits. Question: is N prime?
This is in P. The AKS algorithm (Agrawal–Kayal–Saxena, IIT Kanpur, 2002) runs in deterministic polynomial time in n. Why this belongs in P: the algorithm is deterministic, always correct, and polynomial in the input length. It is not in P "by being in NP" or "by being verifiable" — AKS is a genuinely fast deterministic algorithm.
Before 2002, primality was only known to be in BPP (via Miller-Rabin) and NP (via Pratt certificates). AKS tightened the classification to the best possible.
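For contrast with AKS, the Miller-Rabin test mentioned above is short enough to sketch. A standard textbook version (the round count and small-prime shortcut are my choices):

```python
import random

def miller_rabin(N, rounds=40):
    """Probabilistic primality test (BPP-style): a prime always passes;
    a composite passes with probability at most 4**(-rounds)."""
    if N < 2:
        return False
    for p in (2, 3, 5, 7):
        if N % p == 0:
            return N == p
    d, s = N - 1, 0
    while d % 2 == 0:                 # write N - 1 = d * 2**s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, N - 1)
        x = pow(a, d, N)
        if x in (1, N - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, N)
            if x == N - 1:
                break
        else:
            return False              # a is a witness: definitely composite
    return True
```

Note the one-sided error, the signature of a BPP algorithm: "composite" is certain, "prime" is merely overwhelmingly likely.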
Problem B — "Is N the product of two primes?" Input: N with n digits. Question: does N = pq for primes p and q?
This is in NP (and in BQP). The certificate for NP is the pair (p, q) — the verifier checks that p and q are prime (using AKS) and that p \cdot q = N, all in polynomial time. For BQP, Shor's algorithm finds the factors in time polynomial in n. Why this is not known to be in P or BPP: despite 50+ years of classical cryptanalysis, the best known deterministic and randomised algorithms both run in sub-exponential time (\exp(O(n^{1/3} \log^{2/3} n)) for the general number field sieve).
This is the gap Shor's algorithm exploits. If factoring were in P, RSA would already be broken. The strong belief is that factoring is NP-intermediate — in NP, not in P, but probably not NP-complete either.
Problem C — "Is graph G 3-colourable?" Input: a graph G on n vertices. Question: can G's vertices be coloured with 3 colours so that no edge connects vertices of the same colour?
This is NP-complete. The certificate is the colouring; the verifier checks each edge. NP-completeness comes from a polynomial-time reduction from SAT. Why this is believed to be hard: NP-completeness means the problem is as hard as every other NP problem. A polynomial-time algorithm for 3-colouring would immediately give polynomial-time algorithms for SAT, knapsack, travelling salesman, and every other NP problem — a resolution of P vs NP.
No known polynomial-time quantum algorithm helps here either. Grover's algorithm cuts the brute-force 3^n search to roughly \sqrt{3^n} steps — still exponential.
Problem D — "Does White have a winning strategy from this position?" Input: an n \times n chess position (generalised chess — standard chess is 8 \times 8, but we let n vary).
This is PSPACE-complete. The game tree has depth polynomial in n and branching factor polynomial in n, so the number of leaves is exponential — but the tree can be evaluated in polynomial space by a depth-first minimax search that reuses memory between branches. Why this is not in NP: a winning strategy is not a single short certificate — it has to respond to every possible opponent move, and those responses have to respond to further responses, a tree of exponential total size. You cannot hand a verifier a polynomial-size proof that "White wins."
Result: a problem's surface description does not tell you its complexity. Primality has a polynomial algorithm; factoring almost certainly does not (classically). 3-colourability is in the same NP-complete equivalence class as SAT; chess sits one level higher. Classification is the first step to knowing what kind of algorithm — classical, randomised, or quantum — has any hope of helping.
Example 2: A toy problem from each class
Sometimes the named benchmarks obscure the concepts. Here is a toy problem for each class that a class-11 student can verify by hand.
A toy problem in P — "multiples of three." Given a number N in decimal, decide whether N is a multiple of 3. Why this is in P: the classical rule is to sum the digits of N and check if the sum is divisible by 3. If the sum is still not obviously divisible, recurse. The digit-sum is O(n) work where n is the number of digits, and iterating is O(\log n) levels deep — polynomial overall.
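A sketch of the digit-sum rule (the function name is mine):

```python
def multiple_of_three(N):
    """Divisibility by 3 via the digit-sum rule: repeatedly replace N
    by the sum of its decimal digits. Each round costs O(n) where n is
    the digit count, and O(log n) rounds reach a single digit."""
    n = abs(N)
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n in (0, 3, 6, 9)
```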
A toy problem in BPP (or conjecturally P) — "random sum test." You are given two functions f, g : \{1, \ldots, n\} \to \mathbb{Z} given as black boxes, and you are promised that either f and g agree everywhere or \sum_{i=1}^n f(i) and \sum_{i=1}^n g(i) differ by at least 1. Distinguish which. Why the randomised version is natural: pick a random subset S \subseteq \{1, \ldots, n\}, compute \sum_{i \in S} f(i) and \sum_{i \in S} g(i), compare. If the functions agree, the subsums always agree. If the totals differ, the difference vector f - g is nonzero, and a random subset's subsums then disagree with probability at least 1/2 per trial.
This toy illustrates why randomness can simplify: the straightforward deterministic approach sums all n values, while the randomised approach can probabilistically shortcut.
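The random subset test can be simulated directly — a toy sketch mirroring the procedure above (names and trial count are mine):

```python
import random

def random_sum_test(f, g, n, trials=40):
    """Toy BPP-style test under the promise: either f and g agree
    everywhere, or their totals over {1..n} differ. Compare sums over
    random subsets; a genuine difference is caught with probability
    at least 1/2 per trial, so missed with probability <= 2**(-trials)."""
    for _ in range(trials):
        S = [i for i in range(1, n + 1) if random.random() < 0.5]
        if sum(f(i) for i in S) != sum(g(i) for i in S):
            return False          # a discrepancy: definitely not identical
    return True                   # identical, up to tiny error probability
```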
A toy NP problem — "subset sum." Given integers a_1, a_2, \ldots, a_n and a target T, is there a subset of them summing to exactly T? The certificate is the subset; verification is O(n). Subset sum is NP-complete (a classical example used in pedagogy).
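Subset sum makes the verifier/search gap concrete: checking a certificate is O(n), while the only obvious way to find one is to try all 2^n subsets. A sketch (names mine):

```python
from itertools import combinations

def verify_subset_sum(a, T, indices):
    """NP verifier: check a claimed certificate in O(n)."""
    return sum(a[i] for i in indices) == T

def solve_subset_sum(a, T):
    """Brute-force search: try all 2^n candidate certificates -
    exponential time, in stark contrast to the O(n) verifier."""
    for r in range(len(a) + 1):
        for indices in combinations(range(len(a)), r):
            if verify_subset_sum(a, T, indices):
                return indices
    return None
```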
A toy PSPACE problem — "does the first player win Nim on n piles?" Given a Nim position with n piles and pile sizes up to 2^n, does the player to move have a winning strategy? The classical answer uses XOR of pile sizes, so for plain Nim this is actually in P. But for generalised Nim-like games with various restrictions, the problem becomes PSPACE-complete — a small tweak to the rules moves a game out of P.
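The XOR rule for plain Nim (Bouton's theorem) fits in one line, which is exactly what it means for the plain game to be in P (function name mine):

```python
from functools import reduce
from operator import xor

def nim_winner_via_xor(piles):
    """Bouton's theorem: the player to move wins plain Nim
    iff the XOR (nim-sum) of the pile sizes is nonzero."""
    return reduce(xor, piles, 0) != 0
```

Compare this with the exponential game-tree search needed for the PSPACE-complete variants: the rule change, not the game's surface, is what moves the problem between classes.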
Result: the classes differ not in the problems they study but in the style of computation allowed. P — fixed deterministic steps. BPP — deterministic steps plus random coins. NP — deterministic verifier plus polynomial-size hint. PSPACE — polynomial memory, unlimited time. Quantum computing adds one more style: superposition and interference, polynomial time, bounded error. That style is BQP, and it does not fit cleanly into any of the above.
Common confusions
- "NP means 'not polynomial.'" No. NP means "Nondeterministic Polynomial" — problems verifiable in polynomial time by a deterministic verifier (equivalently, solvable in polynomial time by a nondeterministic machine). Every problem in P is also in NP. The word "not" has nothing to do with it.
- "P vs NP is solved." No. It remains open. Occasional press releases claim proofs but none have survived peer review. The Clay Prize is still unclaimed.
- "BPP is strictly larger than P." Probably not. The widely-held belief, based on derandomisation arguments by Impagliazzo, Wigderson, Nisan and others, is that \mathrm{P} = \mathrm{BPP}. The proof has not been completed, but the direction of evidence is strong.
- "NP-complete problems are impossible." They are not impossible — they are believed to be hard in the worst case. Many NP-complete problems have excellent practical algorithms for real-world inputs (modern SAT solvers handle millions of variables, commercial travelling-salesman solvers route packages globally). What is believed impossible is a single algorithm that solves every instance of an NP-complete problem in polynomial time.
- "NP-complete problems are unsolvable quantumly." No known polynomial-time quantum algorithm exists for NP-complete problems, but there is no proof of impossibility. The belief is that \mathrm{NP} \not\subseteq \mathrm{BQP}, but this is open. Grover's algorithm cuts brute-force search from 2^n to roughly \sqrt{2^n} steps — still exponential, but not nothing.
- "BQP is inside NP." Not known. Quantum computations may produce "yes" answers that have no short classical certificate. The current best upper bound is \mathrm{BQP} \subseteq \mathrm{PP} \subseteq \mathrm{PSPACE}, not \mathrm{BQP} \subseteq \mathrm{NP}.
- "Factoring is NP-complete." No. Factoring is in NP, but it is not known to be NP-complete — and most researchers believe it is not. If factoring were NP-complete, Shor's algorithm would imply \mathrm{NP} \subseteq \mathrm{BQP}, which would collapse a lot of the complexity landscape. Factoring is widely believed to be NP-intermediate — strictly between P and NP-complete.
Going deeper
You now have the four main classes — P, BPP, NP, PSPACE — and how they line up. The rest of this section adds the refinements you will meet when you push further: the polynomial hierarchy, advice classes, coNP, the precise Turing-machine formalism, and the distinction between decision and counting problems. None of these are prerequisites for BQP, but they are the vocabulary that lives around it in serious complexity-theory writing.
The polynomial hierarchy (PH)
PH generalises NP to a tower. Level \Sigma_1^P = \mathrm{NP}. Level \Pi_1^P = \mathrm{coNP} (complements of NP problems — "is this formula unsatisfiable?"). Level \Sigma_2^P = \mathrm{NP}^{\mathrm{NP}}: a nondeterministic polynomial-time machine with an NP oracle. Higher levels stack further alternations.
PH is defined by problems of the form "\exists x_1 \forall x_2 \exists x_3 \ldots \phi(x_1, \ldots, x_k)" where each x_i is a polynomial-length string and \phi is a polynomial-time-checkable predicate. The number of quantifier alternations is the level.
PH is contained in PSPACE. It is widely conjectured that PH is an infinite strict hierarchy — if any two consecutive levels collapse, the entire hierarchy collapses to that level. This is not known to be true, but its failure would imply a very surprising symmetry in the landscape.
For quantum computing: a celebrated result by Raz and Tal (2019) showed an oracle relative to which BQP is not contained in PH. This is strong evidence that BQP — the quantum class — can outperform even highly-alternating polynomial-time classical computation.
Advice classes — P/poly and non-uniformity
P/poly is the class of problems solvable by a polynomial-size family of circuits — one circuit per input length n, where each circuit may be different. This is "non-uniform" computation: there is no requirement that the n'th circuit is efficiently describable.
Every problem in P is in P/poly (the uniform Turing machine gives the circuit). But P/poly also contains undecidable languages (the unary encoding of the halting problem, for example). It is the "hardware-limited but infinitely creative" version of polynomial time.
For quantum: BQP/poly is the advice version of BQP. It is studied because many statements about BQP are easier to prove in the non-uniform setting — proving a separation between BPP/poly and BQP/poly is still open but gives insight into the uniform case.
coNP and the complementary perspective
coNP is the class of decision problems whose no-instances have short certificates. "Is this number not prime?" — the certificate is a non-trivial factor, so this is in NP. "Is this formula a tautology?" (true on every assignment) — the certificate of a non-tautology is a falsifying assignment, so the tautology question is in coNP.
NP and coNP are conjectured to be distinct — \mathrm{NP} \neq \mathrm{coNP} is unproven. Their intersection \mathrm{NP} \cap \mathrm{coNP} contains problems with both short yes-certificates and short no-certificates. Factoring is in \mathrm{NP} \cap \mathrm{coNP} (coNP because, if no factor below k exists, the full prime factorisation is a certificate of that). This is evidence that factoring is not NP-complete: an NP-complete problem inside \mathrm{NP} \cap \mathrm{coNP} would force \mathrm{NP} = \mathrm{coNP}.
The Turing-machine formalism
Formally, a complexity class is defined relative to a model of computation — usually the multi-tape deterministic Turing machine for P, the probabilistic Turing machine for BPP, the nondeterministic Turing machine for NP, and the quantum Turing machine (or uniform quantum circuit family) for BQP.
The Cobham–Edmonds thesis says: all reasonable models of computation simulate each other with at most polynomial overhead. So P is the same class whether you define it using Turing machines, random-access machines (RAM), Boolean circuits (of polynomial size), or pointer machines. This is why "polynomial" is robust and "linear" is not — linear-time algorithms are model-sensitive.
The quantum analogue of this robustness holds too: BQP is the same class whether you define it via quantum Turing machines, uniform quantum circuit families, or measurement-based quantum computation. Whether you use two-level qudits (qubits) or d-level qudits, BQP is the same.
Complete problems and reductions
A problem A reduces to B (written A \leq_p B) if there is a polynomial-time algorithm that transforms an instance of A into an instance of B preserving yes-and-no answers. If every problem in a class \mathcal{C} reduces to A, and A \in \mathcal{C}, then A is \mathcal{C}-complete.
Complete problems are "the hardest" in their class. SAT is NP-complete. QBF (quantified Boolean formulas) is PSPACE-complete. Computing the permanent of a 0-1 matrix is complete for \mathbf{\#P} (the counting class), by Valiant's theorem. Complete problems are the backbone of complexity-class definitions — you can often study a class by studying its complete problems.
For BQP, complete problems are known but somewhat technical — the approximating Jones polynomial and quantum simulation of local Hamiltonians are both BQP-complete.
Decision vs search vs counting
Every decision problem has a search counterpart (find the certificate, not just decide if it exists) and a counting counterpart (count the certificates). For NP, the counting class is called \mathbf{\#P}. For instance, \mathrm{\#SAT} asks "how many satisfying assignments does this formula have?" — much harder than SAT itself.
Toda's theorem (1991) is astonishing: \mathrm{PH} \subseteq \mathbf{P}^{\mathbf{\#P}}. Counting problems are at least as hard as the whole polynomial hierarchy. And \mathrm{BQP} \subseteq \mathbf{P}^{\mathbf{\#P}} (via the \mathrm{BQP} \subseteq \mathbf{PP} result), so quantum computation is bounded by counting complexity.
Intermediate classes — AM, IP, and interactive proofs
Beyond NP sit the interactive proof classes: AM (Arthur-Merlin, a two-move verifier-prover game with public coins), IP (polynomial-round interactive proofs), and MA (a one-round randomised verifier). The beautiful theorem \mathrm{IP} = \mathrm{PSPACE} (Shamir 1990) shows interactive proofs can verify anything a PSPACE machine can decide.
For quantum: the quantum analogue QMA (quantum Merlin-Arthur) plays a role analogous to NP but with quantum proofs and quantum verifier. QMA sits between BQP and PP. You will meet QMA later in the curriculum in the chapter on quantum verifiers.
Indian context — TIFR, IISc, and theoretical computer science
Theoretical computer science in India has been concentrated at a few institutions:
- TIFR Mumbai — the School of Technology and Computer Science has long been the strongest theoretical CS group in the country, with major work in algebraic complexity, coding theory, and approximation algorithms by Prahladh Harsha, Ramprasad Saptharishi, and others.
- IISc Bangalore — active complexity and algorithms research, especially under Arnab Bhattacharyya and collaborators.
- IIT Kanpur — home of AKS primality. Manindra Agrawal has been one of the most influential theoretical computer scientists of the 21st century.
- CMI Chennai and ISI Kolkata — smaller but serious TCS groups.
- IIIT Bangalore and IIT Bombay — growing programmes in quantum complexity.
The line from these institutions to the theorems in this article is direct. AKS is an Indian result, proved at IIT Kanpur. When you study complexity theory at a serious Indian institution, you meet its history as a local tradition, not an imported one.
Where this leads next
- BQP Defined — the formal definition of Bounded-error Quantum Polynomial time, and where it sits on the hierarchy you just learned.
- BQP vs BPP — the comparison between classical and quantum randomised polynomial time, and the evidence that they differ.
- BQP vs NP — the question of whether quantum computers can solve NP-complete problems.
- Lessons about quantum speedups — how the four oracle algorithms (Deutsch, Bernstein-Vazirani, Simon, Grover) illustrate the structure-interference-speedup triangle.
- The hidden subgroup problem — the algebraic framework inside BQP where almost all exponential speedups live.
References
- Agrawal, Kayal, Saxena, PRIMES is in P (2002) — the AKS primality test. Annals of Mathematics (authors' preprint).
- Sanjeev Arora and Boaz Barak, Computational Complexity: A Modern Approach — textbook, with free draft online. theory.cs.princeton.edu/complexity.
- Wikipedia, P versus NP problem — the standard encyclopaedic treatment.
- Wikipedia, Complexity class — overview of classical complexity classes.
- Wikipedia, Turing machine — the formal model underlying P, BPP, NP, and PSPACE.
- Michael Sipser, Introduction to the Theory of Computation — Cengage. publisher page.