In short

Classical complexity gives you a vocabulary for "what classical computers can and cannot easily do." P is the class of problems solvable in polynomial time by a deterministic algorithm — sorting, primality testing, shortest paths. BPP is the same thing but the algorithm is allowed to toss coins and make errors at most 1/3 of the time — likely equal to P, but not proved. NP is the class of problems whose correct answers you can verify in polynomial time — Sudoku, SAT, the travelling salesman decision problem. NP-complete problems are the hardest in NP: solve one in polynomial time and you have solved them all. PSPACE is everything you can do with polynomial memory and any amount of time — it contains NP. The containments everyone accepts are \mathrm{P} \subseteq \mathrm{BPP}, \mathrm{P} \subseteq \mathrm{NP}, \mathrm{NP} \subseteq \mathrm{PSPACE}. Whether P equals NP — a \$1 million prize, fifty years of work, nobody has a proof either way. You need this ladder because BQP — the quantum analogue of P — is defined on top of it.

Suppose you are given a 300-digit number and asked whether it is prime. Is this hard?

Suppose you are given the same 300-digit number and asked to factor it into its two prime factors. Is this hard?

Suppose you are given a 9 \times 9 Sudoku grid and asked whether it has a solution. Now a 1000 \times 1000 Sudoku grid. Now a 10^6 \times 10^6 grid. At what size does "Sudoku" become unreasonable, and at what size does "primality testing" become unreasonable?

These questions sound similar. They are not. Primality testing for a 300-digit number runs in about a second on a laptop. Factoring the same number, with the best known classical algorithms, would take every supercomputer on Earth longer than the age of the universe. Sudoku on a 9 \times 9 grid is a ten-minute puzzle; generalised Sudoku is NP-complete, so at 10^6 \times 10^6 no known algorithm — on any classical machine — has a realistic hope of solving it. These gaps between "easy" and "hard" are not cosmetic — they are the spine of theoretical computer science.

Complexity theory is the field that organises problems by difficulty. The objects it studies are called complexity classes — buckets of problems, each bucket defined by how much time, space, or randomness a solving algorithm is allowed to use. Before you can understand what a quantum computer does — whether it gives you a genuine speedup over these classes, and what the class BQP even means — you need the classical vocabulary. This article is that vocabulary.

The setup — what is a "problem"?

Throughout this article, "problem" means a decision problem: a question with a yes/no answer, parameterised by an input. Given an integer N, is N prime? Given a graph G, is G 3-colourable? Given a Boolean formula \phi, does \phi have a satisfying assignment? The answer is always one bit. This might feel restrictive — "what about finding the factors, not just asking if they exist?" — but decision problems are enough for almost all of complexity theory. If you can decide "does N have a factor less than k?" in polynomial time, you can find the factor by binary search.
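The decision-to-search trick in the last sentence can be sketched directly. The decision oracle below is a stand-in implemented by trial division — in the complexity-theoretic argument it would be whatever polynomial-time decider you are handed:

```python
def has_factor_at_most(N: int, k: int) -> bool:
    """Decision problem: does N have a non-trivial factor d with 2 <= d <= k?"""
    return any(N % d == 0 for d in range(2, k + 1))

def smallest_factor(N: int) -> int:
    """Turn the yes/no oracle into a search for composite N: binary-search
    the threshold k. O(log N) oracle calls, so a polynomial-time decider
    yields a polynomial-time finder."""
    lo, hi = 2, N - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if has_factor_at_most(N, mid):
            hi = mid        # some factor lies at or below mid
        else:
            lo = mid + 1    # every factor lies above mid
    return lo

print(smallest_factor(91))  # 7, since 91 = 7 * 13
```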

A problem also comes with an input size n — the number of bits needed to write down the input. A 300-digit number has input size about 1000 bits (\log_2(10^{300}) \approx 997). A graph on V vertices has input size about V^2 bits (one bit per possible edge). Complexity classes measure how long an algorithm takes as a function of n.

The two things that matter most:

  1. Runtime is measured as a function of the input size n, in the worst case over all inputs of that size.
  2. The decisive distinction is between polynomial growth (n, n^2, even n^{100}) and exponential growth (2^n).

This split — polynomial versus exponential — is the central dichotomy. The whole ladder of complexity classes is about which problems land on the polynomial side of it.

[Figure: polynomial versus exponential growth — three curves, n, n^2, and 2^n, plotted against input size n from 0 to 30. The 2^n curve shoots up sharply after n \approx 15 while the polynomial curves remain near the bottom; at n = 60, 2^n nanoseconds exceeds the age of the universe.]
Polynomial growth ($n$, $n^2$, $n^{100}$) stays manageable — doubling the input makes the runtime a constant factor bigger. Exponential growth ($2^n$) doubles with every extra bit of input and is hopeless beyond $n \approx 60$ even on supercomputers.
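A quick numeric check of the dichotomy — plain arithmetic, nothing assumed:

```python
# Tabulate polynomial vs exponential step counts for growing input sizes.
for n in (10, 20, 30, 40, 50, 60):
    print(f"n = {n:2d}   n^2 = {n**2:5d}   2^n = {2**n}")

# At 10^9 steps per second, 2^60 steps is about 37 years of computation,
# while n^2 at n = 60 is a few microseconds.
years = 2**60 / 1e9 / (3600 * 24 * 365)
print(f"2^60 steps at 1 GHz ≈ {years:.0f} years")
```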

P — deterministic polynomial time

P stands for Polynomial time. A problem is in P if there is a deterministic algorithm — no coin flips, no guessing, no nondeterminism — that solves every instance of the problem in time polynomial in the input size.

Example problems in P:

  1. Sorting a list — O(n \log n) comparisons.
  2. Primality testing — in P since the AKS algorithm (2002).
  3. Shortest paths in a graph — Dijkstra's algorithm.
  4. Multiplying two n-digit numbers.

The intuition: P is the class of problems with "honest fast algorithms." If you can write a program that always gives the right answer, with no randomness and no luck required, and it runs fast as the input grows — your problem is in P.

BPP — randomised polynomial time

Now suppose your algorithm is allowed to flip coins. You write a procedure that, on a given input, computes an answer but also uses some random bits along the way. You require two things:

  1. The algorithm runs in polynomial time on every input and every choice of random bits.
  2. The algorithm gets the right answer with probability at least 2/3 on every input.

The "on every input" is strict — it is not enough for the algorithm to succeed on 2/3 of inputs; it has to succeed with probability at least 2/3 for each and every possible input, where the probability is over the coin flips.

This defines BPP — Bounded-error Probabilistic Polynomial time.

Why 2/3 and not 99\%? Because the choice does not matter. If you have any constant success probability above 1/2 (say 51\%), you can boost it. Run the algorithm k independent times and take the majority vote. By the Chernoff bound, the majority is wrong with probability that shrinks as \exp(-\Omega(k)). A few dozen repeats pushes the error to 2^{-100} — astronomically small. So 2/3 and 1 - 2^{-100} describe the same class; the number is chosen for concreteness.
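The amplification argument is easy to watch numerically. A minimal sketch — noisy_decider is a hypothetical BPP-style procedure with success probability 2/3, nothing more:

```python
import random

def noisy_decider(truth: bool, p_correct: float = 2/3) -> bool:
    """A stand-in BPP procedure: right answer with probability p_correct."""
    return truth if random.random() < p_correct else not truth

def amplified(truth: bool, k: int = 101) -> bool:
    """Run k independent trials and return the majority vote."""
    votes = sum(noisy_decider(truth) for _ in range(k))
    return votes > k // 2

random.seed(0)
trials = 1000
errors = sum(not amplified(True) for _ in range(trials))
print(f"single-run error rate: ~1/3; majority-of-101 error rate: {errors / trials}")
```

With k = 101 the Chernoff bound already puts the error below one percent; pushing k higher drives it down exponentially.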

Examples in BPP:

  1. Polynomial identity testing — given an arithmetic circuit, is the polynomial it computes identically zero? Evaluate at a random point and check; the error probability is bounded by the Schwartz–Zippel lemma. No polynomial-time deterministic algorithm is known.
  2. Primality testing via Miller–Rabin — historically the flagship BPP example, until AKS (2002) placed primality in P.

The containment \mathrm{P} \subseteq \mathrm{BPP} is trivial: a deterministic algorithm is just a randomised algorithm that ignores its random bits.

Is BPP strictly bigger than P? This is one of the great open questions of theoretical CS, but the current belief — based on derandomisation results by Impagliazzo, Wigderson, and others — is that \mathrm{P} = \mathrm{BPP}. Every BPP algorithm, if this conjecture is right, can be derandomised with only polynomial overhead. The conjecture is not proved, but the evidence for it is strong. You can think of BPP as "probably the same as P, but we have not finished the proof."

[Figure: deterministic versus randomised algorithms — two side-by-side flowcharts. Left (P): input flows into a deterministic polynomial-time algorithm and a single, always-correct answer comes out. Right (BPP): input plus a stream of random coin flips flow into a polynomial-time algorithm whose answer is correct with probability at least 2/3.]
P asks for a deterministic polynomial-time algorithm. BPP adds coin flips and requires a correct answer with probability at least $2/3$ on every input. Boosting by majority vote lets you amplify $2/3$ to any constant you want, so the threshold is not the interesting part — polynomial runtime is.

NP — verifiable in polynomial time

NP is not what most people think it is. The letters stand for Nondeterministic Polynomial time, but the cleanest way to think about it is in terms of verification.

A problem is in NP if, for every "yes" instance, there is a short certificate (a hint, a proof, a witness) that an efficient deterministic verifier can check in polynomial time.

Every problem in P is also in NP. If you can solve a problem in polynomial time, you can also verify it — the verifier just runs the polynomial-time algorithm and compares against the certificate (or ignores the certificate entirely). So \mathrm{P} \subseteq \mathrm{NP}.
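Here is what "verify in polynomial time" looks like concretely for SAT — a minimal sketch using the standard encoding of CNF clauses as lists of signed variable indices:

```python
def verify_sat(clauses, assignment) -> bool:
    """NP verifier for SAT: check a certificate (a truth assignment) in time
    linear in the formula. Positive literal k means x_k, negative means NOT x_k."""
    def literal_true(lit: int) -> bool:
        value = assignment[abs(lit)]
        return value if lit > 0 else not value
    # The formula is satisfied iff every clause has at least one true literal.
    return all(any(literal_true(lit) for lit in clause) for clause in clauses)

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
phi = [[1, 2], [-1, 3], [-2, -3]]
print(verify_sat(phi, {1: True, 2: False, 3: True}))   # True
print(verify_sat(phi, {1: True, 2: True, 3: True}))    # False
```

Checking a certificate is trivial; finding one is the hard part — that asymmetry is the whole content of NP.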

NP-complete — the hardest problems in NP

Some problems in NP have the property that every other NP problem can be reduced to them in polynomial time. A polynomial-time reduction from problem A to problem B is an algorithm that takes an instance of A and produces, in polynomial time, an equivalent instance of B — so that solving B solves A. If every NP problem reduces to your specific problem, that problem is as hard as every NP problem — it is NP-hard. If it is both NP-hard and in NP, it is NP-complete.

NP-complete problems you have heard of:

  1. SAT — Boolean satisfiability, the original NP-complete problem (Cook–Levin theorem, 1971).
  2. 3-colouring of graphs.
  3. The travelling salesman decision problem — is there a tour of total length at most k?
  4. Knapsack and subset sum.
  5. Generalised Sudoku on n^2 \times n^2 grids.

The reductions linking them form a web: if one NP-complete problem turns out to be in P, they all do. And every other NP problem with it. This is the single cleanest reason P vs NP is so important — one polynomial-time algorithm for any NP-complete problem is one polynomial-time algorithm for all of them.

Not every NP problem is NP-complete. Factoring is in NP but is not believed to be NP-complete. Graph isomorphism is in NP but is widely believed to be strictly between P and NP-complete. These NP-intermediate problems — if they exist — live in a gap that Ladner's theorem (1975) proved has to be non-empty if P ≠ NP.

P vs NP — the million-dollar question

The P vs NP question asks: does every NP problem have a polynomial-time algorithm? Formally: is \mathrm{P} = \mathrm{NP}?

Every NP-complete problem fails to have a known polynomial-time algorithm. The fastest known SAT solvers are exponential in the worst case. Every attempted polynomial-time algorithm for an NP-complete problem in 50 years of effort has failed. The belief is that \mathrm{P} \neq \mathrm{NP} — that there are problems you can verify fast but cannot solve fast.

But it is not proved. Not even close.

The Clay Mathematics Institute has posted a \$1 million prize for a proof either way. A proof that \mathrm{P} = \mathrm{NP} would be earth-shattering: every NP-complete problem would get a fast algorithm, cryptography would be in crisis (most modern crypto rests on problems like factoring that are not believed to be NP-complete, but they are in NP, so \mathrm{P} = \mathrm{NP} would break them too), mathematical theorem-proving would be automatable, machine learning would be revolutionised. A proof that \mathrm{P} \neq \mathrm{NP} would vindicate the universal belief and would require a new technique — the known proof-barrier results (relativisation, natural proofs, algebrisation) rule out most existing techniques.

As of 2026, no human knows the answer. This matters for quantum computing because: if \mathrm{P} = \mathrm{NP}, quantum computers become less special. If \mathrm{P} \neq \mathrm{NP}, the question of whether BQP can solve NP-complete problems becomes the central question of quantum complexity theory — and the answer, as far as anyone can prove, is no.

PSPACE — polynomial space, any time

PSPACE is the class of problems solvable using a polynomial amount of memory, with no bound on runtime. An algorithm gets polynomial work space (plus the input), can take as many steps as it likes, and must eventually halt with a correct answer.

PSPACE is larger than NP in the following sense: any NP problem can be solved in polynomial space by enumerating every possible certificate — exponentially many of them — and running the polynomial-time verifier on each, reusing the same work space for every attempt. Exponential time, polynomial memory. Hence \mathrm{NP} \subseteq \mathrm{PSPACE}.

Examples in PSPACE:

  1. QBF — deciding the truth of a fully quantified Boolean formula \exists x_1 \forall x_2 \ldots \phi. This is PSPACE-complete.
  2. Generalised n \times n strategy games such as chess variants with polynomially bounded game length — who wins can be decided by a depth-first game-tree search in polynomial space.

PSPACE is a very large class — it contains both NP and coNP, and much more. Nobody has proved \mathrm{PSPACE} \neq \mathrm{NP}, but everyone believes it; even the weaker-sounding separation \mathrm{P} \neq \mathrm{PSPACE} remains open.

The hierarchy — how the classes fit together

[Figure: the classical complexity landscape — a nested set diagram. The innermost region P contains sorting and AKS primality. BPP contains P and adds polynomial identity testing. NP contains P plus factoring, SAT, 3-colouring, and knapsack. BPP and NP overlap inside a larger region labelled PSPACE, which also contains QBF and generalised chess.]
The classical hierarchy. P sits inside both BPP and NP. BPP is probably equal to P. NP contains problems believed to lie outside P — and the NP-complete ones (SAT, 3-colouring, knapsack, travelling salesman) are the hardest in it. All of these sit inside PSPACE, which also contains strategic-game problems like generalised chess that need exponential time but only polynomial memory.

The widely-accepted containments are:

\mathrm{P} \subseteq \mathrm{BPP} \subseteq \mathrm{PSPACE}.
\mathrm{P} \subseteq \mathrm{NP} \subseteq \mathrm{PSPACE}.

Every one of these \subseteq signs is proved. What is not proved is whether any of them are strict:

  1. \mathrm{P} vs \mathrm{BPP} — believed equal (derandomisation).
  2. \mathrm{P} vs \mathrm{NP} — believed strict; the million-dollar question.
  3. \mathrm{NP} vs \mathrm{PSPACE} — believed strict.
  4. \mathrm{P} vs \mathrm{PSPACE} — believed strict, yet even this remains open.

If any single one of these gets resolved, it would be the most important theoretical CS result of the century.

Why this matters for quantum computing

The reason you are reading this article in a quantum computing curriculum is that BQP — Bounded-error Quantum Polynomial time, the class you meet in the next chapter — sits in a specific position on this ladder.

Three facts known about BQP and these classes:

  1. \mathrm{BPP} \subseteq \mathrm{BQP}. A quantum computer can simulate any classical randomised algorithm. This is trivial — the quantum computer just prepares |0\rangle states, applies classical (Toffoli) gates, and measures. No quantumness needed.
  2. \mathrm{BQP} \subseteq \mathrm{PSPACE}. Bernstein and Vazirani proved in 1997 that any quantum polynomial-time algorithm can be simulated in polynomial space classically (though possibly in exponential time). Quantum computers are not infinitely powerful — they cannot solve problems outside PSPACE.
  3. BQP contains problems not known to be in BPP. Factoring and discrete log — via Shor's algorithm — are in BQP. Nobody has a polynomial-time classical algorithm for them, randomised or otherwise. This gives concrete evidence that \mathrm{BPP} \neq \mathrm{BQP}, though it is not a proof — for all anyone can prove, factoring might still admit a fast classical algorithm, which would erase this particular gap.

Is BQP contained in NP? No known proof either way. Intuitively, if a quantum computer outputs "yes" on some input, there does not need to be a short classical certificate — the certificate would have to describe the quantum computation, which may require super-polynomial amplitude information.

Is NP contained in BQP? Probably not. Grover's algorithm searches the 2^n assignments of a SAT instance in roughly \sqrt{2^n} = 2^{n/2} steps — a quadratic speedup, still exponential. No polynomial-time quantum algorithm for any NP-complete problem is known, and there are oracle-based barriers suggesting none exists.

So BQP sits:

  1. Above BPP — it contains every classical randomised polynomial-time algorithm.
  2. Inside PSPACE — it cannot solve anything beyond polynomial space.
  3. In an unresolved relationship with NP — neither class is known to contain the other, and the working belief is that neither does.

This is the reason complexity classes matter for quantum computing — not as an abstract exercise, but because the precise question what can a quantum computer efficiently do? is exactly a question about where BQP lives on this diagram. The rest of this part of the curriculum makes that question concrete.

Example 1: Classifying four familiar problems

For each problem, name the smallest complexity class it provably sits in — based on what is known today.

Problem A — "Is N prime?" Input: a number N with n digits. Question: is N prime?

This is in P. The AKS algorithm (Agrawal–Kayal–Saxena, IIT Kanpur, 2002) runs in deterministic polynomial time in n. Why this belongs in P: the algorithm is deterministic, always correct, and polynomial in the input length. It is not in P "by being in NP" or "by being verifiable" — AKS is a genuinely fast deterministic algorithm.

Before 2002, primality was only known to be in BPP (via Miller-Rabin) and NP (via Pratt certificates). AKS tightened the classification to the best possible.

Problem B — "Is N the product of two primes?" Input: N with n digits. Question: does N = pq for primes p and q?

This is in NP (and in BQP). The certificate for NP is the pair (p, q) — the verifier checks that p and q are prime (using AKS) and that p \cdot q = N, all in polynomial time. For BQP, Shor's algorithm finds the factors in time polynomial in n. Why this is not known to be in P or BPP: despite 50+ years of classical cryptanalysis, the best known deterministic and randomised algorithms both run in sub-exponential time (\exp(O(n^{1/3} \log^{2/3} n)) for the general number field sieve).
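The verifier for Problem B fits in a few lines. Trial division stands in for AKS here — fine for demonstration-sized inputs, though the real argument needs the genuinely polynomial-time test:

```python
def is_prime(m: int) -> bool:
    """Trial division -- a stand-in for the polynomial-time AKS test."""
    if m < 2:
        return False
    return all(m % d != 0 for d in range(2, int(m**0.5) + 1))

def verify_semiprime(N: int, certificate: tuple) -> bool:
    """NP verifier: accept iff the certificate (p, q) proves N = p * q
    with p and q both prime."""
    p, q = certificate
    return p * q == N and is_prime(p) and is_prime(q)

print(verify_semiprime(91, (7, 13)))   # True
print(verify_semiprime(91, (7, 14)))   # False: 14 is not prime (and 7*14 != 91)
```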

This is the gap Shor's algorithm exploits. If factoring were in P, RSA would already be broken. The strong belief is that factoring is NP-intermediate — in NP, not in P, but probably not NP-complete either.

Problem C — "Is graph G 3-colourable?" Input: a graph G on n vertices. Question: can G's vertices be coloured with 3 colours so that no edge connects vertices of the same colour?

This is NP-complete. The certificate is the colouring; the verifier checks each edge. NP-completeness comes from a polynomial-time reduction from SAT. Why this is believed to be hard: NP-completeness means the problem is as hard as every other NP problem. A polynomial-time algorithm for 3-colouring would immediately give polynomial-time algorithms for SAT, knapsack, travelling salesman, and every other NP problem — a resolution of P vs NP.

No known polynomial-time quantum algorithm. Grover search cuts the brute-force enumeration of 3^n candidate colourings down to roughly \sqrt{3^n} steps — still exponential.

Problem D — "Does White have a winning strategy from this position?" Input: an n \times n chess position (generalised chess — standard chess is 8 \times 8, but we let n vary).

This is PSPACE-complete — for generalisations in which the game length is polynomially bounded; with no bound on game length, generalised chess is in fact EXPTIME-complete. The game tree has depth polynomial in n and branching factor polynomial in n, so the number of leaves is exponential — but a depth-first minimax search evaluates it while reusing the same polynomial work space. Why this is not in NP: a winning strategy is not a single short certificate — it has to respond to every possible opponent move, and those responses have to respond to further responses, a tree of exponential total size. You cannot hand a verifier a polynomial-size proof that "White wins."

[Figure: four problems placed on the hierarchy — four labelled boxes on an axis from easy to hard: "is N prime?" in P (AKS, 2002); "factor N?" in NP \cap BQP (Shor, 1994); "3-colourable?" in NP-complete (the SAT family); "chess on an n \times n board" in PSPACE-complete (game trees). Four surface-similar questions, four very different complexity classes.]
Four problems that all "ask something about a number or a game" and land in radically different complexity classes. The surface similarity hides the deep structural differences: whether deterministic algorithms exist, whether short certificates exist, whether memory-bounded exhaustive search works.

Result: a problem's surface description does not tell you its complexity. Primality has a polynomial algorithm; factoring almost certainly does not (classically). 3-colourability is in the same NP-complete equivalence class as SAT; chess sits one level higher. Classification is the first step to knowing what kind of algorithm — classical, randomised, or quantum — has any hope of helping.

Example 2: A toy problem from each class

Sometimes the named benchmarks obscure the concepts. Here is a toy problem for each class that a class-11 student can verify by hand.

A toy problem in P — "multiples of three." Given a number N in decimal, decide whether N is a multiple of 3. Why this is in P: the classical rule is to sum the digits of N and check if the sum is divisible by 3. If the sum is still not obviously divisible, recurse. The digit-sum is O(n) work where n is the number of digits, and iterating is O(\log n) levels deep — polynomial overall.
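As a sketch (plain Python, nothing assumed beyond the digit-sum rule):

```python
def multiple_of_three(N: int) -> bool:
    """Decide 3 | N by repeatedly summing decimal digits.
    Each pass is linear in the number of digits, and the number shrinks
    drastically per pass -- polynomial time overall."""
    while N >= 10:
        N = sum(int(d) for d in str(N))
    return N in (0, 3, 6, 9)

print(multiple_of_three(123456))       # True: 1+2+3+4+5+6 = 21 -> 2+1 = 3
print(multiple_of_three(10**100 + 1))  # False: digit sum is 2
```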

A toy problem in BPP (or conjecturally P) — "random sum test." You are given two functions f, g : \{1, \ldots, n\} \to \mathbb{Z} as black boxes, with the promise that either f and g agree at every point or they differ somewhere. Distinguish which. Why the randomised version is natural: pick a random subset S \subseteq \{1, \ldots, n\}, compute \sum_{i \in S} f(i) and \sum_{i \in S} g(i), and compare. If the functions agree everywhere, the subsums always agree. If they differ at some index i^*, then for any fixed choice on the other indices, at most one of the two options — include i^* or leave it out — makes the subsums equal, so a random subset exposes the difference with probability at least 1/2. Repetition drives the error down exponentially.

This toy illustrates why randomness helps: a single random probe either produces conclusive evidence of a difference or, after enough agreeing probes, makes equality overwhelmingly likely. This one-sided "fingerprinting" pattern is the same idea behind polynomial identity testing.
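A simulation of the subset test — the two example functions are hypothetical, chosen to differ at a single point:

```python
import random

def subset_test(f, g, n: int, trials: int = 40) -> bool:
    """Return True if f and g appear to agree on {1..n}.
    One-sided error: a subsum mismatch conclusively proves a difference;
    if the functions differ somewhere, each trial catches it with
    probability >= 1/2."""
    for _ in range(trials):
        S = [i for i in range(1, n + 1) if random.random() < 0.5]
        if sum(f(i) for i in S) != sum(g(i) for i in S):
            return False
    return True

random.seed(1)
f = lambda i: i
g = lambda i: i + (1 if i == 5 else 0)   # differs from f only at i = 5
print(subset_test(f, f, 100))   # True: identical functions never mismatch
print(subset_test(f, g, 100))   # False, except with probability 2^-40
```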

A toy NP problem — "subset sum." Given integers a_1, a_2, \ldots, a_n and a target T, is there a subset of them summing to exactly T? The certificate is the subset; verification is O(n). Subset sum is NP-complete (a classical example used in pedagogy).
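Both halves of the NP picture — fast verification, slow search — fit in one sketch:

```python
from itertools import combinations

def verify_subset_sum(nums, T, certificate) -> bool:
    """NP verifier: the certificate is a tuple of indices; checking is O(n)."""
    return sum(nums[i] for i in certificate) == T

def solve_subset_sum(nums, T):
    """Exhaustive search over all 2^n subsets -- exponential time, as expected
    in the absence of a polynomial algorithm for an NP-complete problem."""
    for r in range(len(nums) + 1):
        for combo in combinations(range(len(nums)), r):
            if verify_subset_sum(nums, T, combo):
                return combo
    return None

nums = [3, 34, 4, 12, 5, 2]
witness = solve_subset_sum(nums, 9)
print(witness, sum(nums[i] for i in witness))   # a witness summing to 9
```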

A toy PSPACE problem — "does the first player win Nim on n piles?" Given a Nim position with n piles and pile sizes up to 2^n, does the player to move have a winning strategy? The classical answer uses XOR of pile sizes, so for plain Nim this is actually in P. But for generalised Nim-like games with various restrictions, the problem becomes PSPACE-complete — a small tweak to the rules moves a game out of P.
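Bouton's XOR rule for plain Nim really is a one-liner, which is exactly why plain Nim sits in P:

```python
from functools import reduce
from operator import xor

def first_player_wins_nim(piles) -> bool:
    """Plain Nim: the player to move has a winning strategy iff the XOR
    (the 'Nim-sum') of the pile sizes is nonzero -- Bouton's theorem."""
    return reduce(xor, piles, 0) != 0

print(first_player_wins_nim([1, 2, 3]))   # False: 1 ^ 2 ^ 3 == 0, a lost position
print(first_player_wins_nim([1, 2, 4]))   # True: Nim-sum 7, the mover can win
```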

[Figure: one toy problem per class — four columns: P, multiples of 3 (sum the digits; deterministic polynomial time); BPP, the random sum test (sample a subset; correct with high probability); NP, subset sum (does any subset total exactly T?; NP-complete); PSPACE, Nim variants (winning strategy in a game; polynomial space).]
A one-line toy problem per class. The point is the *shape* of the computation: a digit-sum is deterministic, a subset-sample is randomised, a certificate-search is non-deterministic, a game-tree-explore is polynomial-space.

Result: the classes differ not in the problems they study but in the style of computation allowed. P — fixed deterministic steps. BPP — deterministic steps plus random coins. NP — deterministic verifier plus polynomial-size hint. PSPACE — polynomial memory, unlimited time. Quantum computing adds one more style: superposition and interference, polynomial time, bounded error. That style is BQP, and it does not fit cleanly into any of the above.

Common confusions

  1. NP does not stand for "not polynomial." It stands for nondeterministic polynomial time — and \mathrm{P} \subseteq \mathrm{NP}, so every easy problem is also in NP.
  2. NP-hard is not the same as NP-complete. NP-hard means "at least as hard as everything in NP"; an NP-hard problem need not itself belong to NP.
  3. "Verifiable fast" does not mean "solvable fast." NP guarantees a quick check of a proposed answer, not a quick way to find one — that gap is exactly the P vs NP question.

Going deeper

You now have the four main classes — P, BPP, NP, PSPACE — and how they line up. The rest of this section adds the refinements you will meet when you push further: the polynomial hierarchy, advice classes, coNP, the precise Turing-machine formalism, and the distinction between decision and counting problems. None of these are prerequisites for BQP, but they are the vocabulary that lives around it in serious complexity-theory writing.

The polynomial hierarchy (PH)

PH generalises NP to a tower. Level \Sigma_1^P = \mathrm{NP}. Level \Pi_1^P = \mathrm{coNP} (complements of NP problems — "is this formula unsatisfiable?"). Level \Sigma_2^P = \mathrm{NP}^{\mathrm{NP}}: a nondeterministic polynomial-time machine with an NP oracle. Higher levels stack further alternations.

PH is defined by problems of the form "\exists x_1 \forall x_2 \exists x_3 \ldots \phi(x_1, \ldots, x_k)" where each x_i is a polynomial-length string and \phi is a polynomial-time-checkable predicate. The number of quantifier alternations is the level.
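Alternating quantifiers have a direct computational reading: recurse on each variable, taking "or" at an \exists and "and" at a \forall. A minimal sketch — recursion depth equals the number of variables, so the space used is polynomial while the time is 2^n:

```python
def eval_qbf(quantifiers, predicate, assignment=()):
    """Evaluate a quantified Boolean formula. quantifiers is a list of
    'E' (exists) / 'A' (forall), one per variable; predicate takes the
    completed tuple of Boolean values."""
    if not quantifiers:
        return predicate(assignment)
    q, rest = quantifiers[0], quantifiers[1:]
    branches = (eval_qbf(rest, predicate, assignment + (b,)) for b in (False, True))
    return any(branches) if q == "E" else all(branches)

# EXISTS x1 FORALL x2 : (x1 OR x2) -- true: take x1 = True
print(eval_qbf(["E", "A"], lambda a: a[0] or a[1]))    # True
# FORALL x1 EXISTS x2 : (x1 XOR x2) -- true: pick x2 = NOT x1
print(eval_qbf(["A", "E"], lambda a: a[0] != a[1]))    # True
```

With the number of alternations fixed, this is a PH-level computation; with unboundedly many variables it is the PSPACE-complete QBF problem.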

PH is contained in PSPACE. It is widely conjectured that PH is an infinite strict hierarchy — if any two consecutive levels coincide, the entire hierarchy collapses to that level. This is not known to be true, and a collapse would imply a very surprising symmetry in the complexity landscape.

For quantum computing: a celebrated result by Raz and Tal (2019) showed an oracle relative to which BQP is not contained in PH. This is strong evidence that BQP — the quantum class — can outperform even highly-alternating polynomial-time classical computation.

Advice classes — P/poly and non-uniformity

P/poly is the class of problems solvable by a polynomial-size family of circuits — one circuit per input length n, where each circuit may be different. This is "non-uniform" computation: there is no requirement that the n'th circuit is efficiently describable.

Every problem in P is in P/poly (the uniform Turing machine gives the circuit). But P/poly also contains undecidable languages (the unary encoding of the halting problem, for example). It is the "hardware-limited but infinitely creative" version of polynomial time.

For quantum: BQP/poly is the advice version of BQP. It is studied because many statements about BQP are easier to prove in the non-uniform setting — proving a separation between BPP/poly and BQP/poly is still open but gives insight into the uniform case.

coNP and the complementary perspective

coNP is the class of decision problems whose no-instances have short certificates. "Is this number not prime?" — the certificate is a non-trivial factor, so this is in NP. "Is this formula a tautology?" (true on every assignment) — the certificate of a non-tautology is a falsifying assignment, so the tautology question is in coNP.

NP and coNP are conjectured — but not proved — to be distinct. Their intersection \mathrm{NP} \cap \mathrm{coNP} contains problems with both short yes-certificates and short no-certificates. Factoring, as the decision problem "does N have a factor below k?", is in \mathrm{NP} \cap \mathrm{coNP} — the coNP certificate is the full prime factorisation of N, from which the absence of any factor below k can be checked. This is evidence that factoring is not NP-complete: an NP-complete problem inside coNP would force \mathrm{NP} = \mathrm{coNP}.

The Turing-machine formalism

Formally, a complexity class is defined relative to a model of computation — usually the multi-tape deterministic Turing machine for P, the probabilistic Turing machine for BPP, the nondeterministic Turing machine for NP, and the quantum Turing machine (or uniform quantum circuit family) for BQP.

The Cobham–Edmonds thesis says: all reasonable models of computation simulate each other with at most polynomial overhead. So P is the same class whether you define it using Turing machines, random-access machines (RAM), Boolean circuits (of polynomial size), or pointer machines. This is why "polynomial" is robust and "linear" is not — linear-time algorithms are model-sensitive.

The quantum analogue of this robustness holds too: BQP is the same class whether you define it via quantum Turing machines, uniform quantum circuit families, or measurement-based quantum computation. Whether you use two-level qudits (qubits) or d-level qudits, BQP is the same.

Complete problems and reductions

A problem A reduces to B (written A \leq_p B) if there is a polynomial-time algorithm that transforms an instance of A into an instance of B preserving yes-and-no answers. If every problem in a class \mathcal{C} reduces to A, and A \in \mathcal{C}, then A is \mathcal{C}-complete.

Complete problems are "the hardest" in their class. SAT is NP-complete. QBF (quantified Boolean formulas) is PSPACE-complete. The permanent of a 0-1 matrix is complete for \mathbf{\#P}, the counting class (Valiant, 1979). Complete problems are the backbone of complexity-class definitions — you can often study a class by studying its complete problems.

For BQP, complete problems are known but somewhat technical — the approximating Jones polynomial and quantum simulation of local Hamiltonians are both BQP-complete.

Decision vs search vs counting

Every decision problem has a search counterpart (find the certificate, not just decide if it exists) and a counting counterpart (count the certificates). For NP, the counting class is called \mathbf{\#P}. For instance, \mathrm{\#SAT} asks "how many satisfying assignments does this formula have?" — much harder than SAT itself.
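The jump from deciding to counting is visible even in a brute-force sketch:

```python
from itertools import product

def count_sat(clauses, num_vars: int) -> int:
    """#SAT by exhaustive enumeration: count satisfying assignments among
    all 2^n. Deciding SAT could stop at the first hit; counting cannot."""
    def satisfies(assignment):
        return all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses)
    return sum(
        satisfies(dict(zip(range(1, num_vars + 1), bits)))
        for bits in product([False, True], repeat=num_vars)
    )

# (x1 OR x2): three of the four assignments satisfy it
print(count_sat([[1, 2]], 2))   # 3
```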

Toda's theorem (1991) is astonishing: \mathrm{PH} \subseteq \mathbf{P}^{\mathbf{\#P}}. Counting problems are at least as hard as the whole polynomial hierarchy. And \mathrm{BQP} \subseteq \mathbf{P}^{\mathbf{\#P}} (via the \mathrm{BQP} \subseteq \mathbf{PP} result), so quantum computation is bounded by counting complexity.

Intermediate classes — AM, IP, and interactive proofs

Beyond NP sit the interactive proof classes: AM (Arthur-Merlin, a two-move verifier-prover game with public coins), IP (polynomial-round interactive proofs), and MA (a one-round randomised verifier). The beautiful theorem \mathrm{IP} = \mathrm{PSPACE} (Shamir 1990) shows interactive proofs can verify anything a PSPACE machine can decide.

For quantum: the quantum analogue QMA (quantum Merlin-Arthur) plays a role analogous to NP but with quantum proofs and quantum verifier. QMA sits between BQP and PP. You will meet QMA later in the curriculum in the chapter on quantum verifiers.

Indian context — TIFR, IISc, and theoretical computer science

Theoretical computer science in India has been concentrated at a few institutions:

  1. TIFR, Mumbai — home to a long-running school of complexity theory and algorithms.
  2. IISc, Bangalore — theory groups spanning algorithms, complexity, and quantum computing.
  3. IIT Kanpur — where Agrawal, Kayal, and Saxena proved PRIMES is in P in 2002.
  4. CMI and IMSc, Chennai — dedicated centres for mathematics and theoretical computer science.

The line from these institutions to the theorems in this article is direct. AKS — the proof that primality is in P — came out of IIT Kanpur in 2002. When you study complexity theory at a serious Indian institution, you meet its history as a local tradition, not an imported one.

Where this leads next

The next chapter defines BQP — Bounded-error Quantum Polynomial time — formally, and places it on the ladder you now have: above BPP, inside PSPACE, and in a still-unresolved relationship with NP. Everything in this article is the scaffolding for that placement.

References

  1. Agrawal, Kayal, Saxena, PRIMES is in P (2002) — the AKS primality test. Annals of Mathematics (authors' preprint).
  2. Sanjeev Arora and Boaz Barak, Computational Complexity: A Modern Approach — textbook, with free draft online. theory.cs.princeton.edu/complexity.
  3. Wikipedia, P versus NP problem — the standard encyclopaedic treatment.
  4. Wikipedia, Complexity class — overview of classical complexity classes.
  5. Wikipedia, Turing machine — the formal model underlying P, BPP, NP, and PSPACE.
  6. Michael Sipser, Introduction to the Theory of Computation — Cengage. publisher page.