In short
The BBBV theorem (Bennett, Bernstein, Brassard, Vazirani, 1997) proves that any quantum algorithm which solves unstructured search with constant success probability must make \Omega(\sqrt N) oracle queries. Since Grover's algorithm achieves O(\sqrt N) queries, Grover is optimal — no cleverer quantum algorithm for unstructured search can ever be invented.
The classical proof uses a hybrid argument. Consider a quantum algorithm that makes T queries to a "silent" oracle (one that does nothing to every input). Switch the oracle, one query at a time, to the true oracle that marks the hidden input w. Each switch changes the algorithm's final state by a small amount — at most 2/\sqrt N in Euclidean norm, on average over w. To distinguish "there is a marked input" from "there isn't" the total change must reach \Omega(1). Summing the tiny per-query bounds gives T \cdot \frac{2}{\sqrt N} \gtrsim 1, so T = \Omega(\sqrt N).
A second route — the polynomial method (Beals, Buhrman, Cleve, Mosca, de Wolf, 1998) — represents the output probability of any T-query quantum algorithm as a polynomial of degree at most 2T in the oracle bits, and shows that "search" requires a polynomial of degree \Omega(\sqrt N).
Both techniques now anchor the theory of quantum-query lower bounds. BBBV matters beyond Grover: it tells you where quantum speedups come from. Exponential speedups — Shor, Simon — require structure. Without structure, the best possible speedup is quadratic. This is the ceiling for brute-force quantum search, and, via Grover's matching upper bound, it is the ceiling we have already reached.
Grover's algorithm finds a marked input in an unstructured list of N items using about (\pi/4)\sqrt N oracle queries. Classically you need \Omega(N) queries. The quantum speedup is a full quadratic factor — \sqrt N instead of N, which for N = 10^{12} means a million queries instead of a trillion.
A natural next question: could you do better? Is there some undiscovered quantum algorithm hiding in the corner of Hilbert space that searches an unstructured oracle in, say, N^{1/3} queries — or even \log N queries? The pop-science framing of quantum computing — "tries every answer at once" — certainly suggests that sub-\sqrt N might be possible. A billion needles in a haystack, instant answer, right?
In 1997 Charles Bennett, Ethan Bernstein, Gilles Brassard, and Umesh Vazirani settled the question with a single theorem. The answer is no. The \sqrt N scaling is not a limitation of Grover's specific construction; it is a ceiling imposed by the physics. No quantum algorithm, now or ever, can search an unstructured oracle in fewer than \Omega(\sqrt N) queries. Grover is exactly optimal — the bound on the upper side meets the bound on the lower side, and the gap is closed forever.
This chapter is about that lower bound. It is one of the cleanest results in the entire theory of quantum computation: a proof that the universe imposes a specific, quantitative limit on how fast a quantum computer can search through an unstructured space. By the end, you will understand the hybrid argument, see why each oracle query can only make so much "progress" toward the marked input, and know why BBBV is the template for almost every lower bound in the field.
The theorem, informally
Fix an n-qubit quantum circuit \mathcal A that makes T applications of an oracle U_f, and write N = 2^n. Suppose that on input "the oracle has a marked input w," \mathcal A outputs w with probability at least 2/3. Suppose also that on input "the oracle has no marked input at all," \mathcal A detects that absence with probability at least 2/3. Then

T \ge c \sqrt N

for some constant c > 0 independent of n.
That is the BBBV theorem. Strip away the formal clothing and it is saying: any quantum circuit that reliably picks out a needle from a haystack of size N, using only oracle queries, must make at least a constant fraction of \sqrt N queries.
The theorem is tight — Grover's algorithm achieves T = (\pi/4)\sqrt N with success probability close to 1, comfortably above the 2/3 threshold. The constant in the lower bound is smaller than Grover's \pi/4, but both bounds are \Theta(\sqrt N): matching upper and lower bounds, differing only by small constants. In complexity theory this is rare — lower bounds are almost always loose. For unstructured search, the two bounds shake hands.
The idea — how can you even prove a lower bound?
Upper bounds are easy in spirit: exhibit an algorithm that works in the claimed time. Grover's (\pi/4)\sqrt N upper bound is proved by writing down the algorithm and checking the geometry.
Lower bounds are harder. You have to rule out every possible algorithm, including ones no one has ever invented. How do you reason about an adversary who could be using any quantum circuit whatsoever?
The BBBV idea is clever and, in retrospect, almost obvious. Track what an oracle query can contribute to the algorithm's state. If each query can only push the final output probability up by a tiny amount — a quantity that decreases with N — then many queries are needed to reach a probability of 2/3. Specifically, if each query can push progress by at most c/\sqrt N, and you need total progress of \Omega(1), then T = \Omega(\sqrt N).
Making this precise is the substance of the proof. The ingredient that makes it work is the hybrid argument.
The hybrid argument
Fix a target success probability p \ge 2/3. Suppose an algorithm \mathcal A makes T queries and, for every possible marked input w \in \{0,1\}^n, outputs w with probability at least p.
Now introduce a reference oracle U_0 that does nothing: U_0 |x\rangle = |x\rangle for every x. There is no marked input; the oracle is "silent." If you run \mathcal A with U_0, its final state is some fixed state |\psi_0\rangle regardless of any marked input (because there isn't one). The measurement outcome on |\psi_0\rangle is some distribution over \{0,1\}^n.
For each possible marked w, let U_w be the oracle that marks w. Running \mathcal A with U_w produces a final state |\psi_w\rangle. By assumption, measuring |\psi_w\rangle returns w with probability at least p.
The key observation. If |\psi_w\rangle is close to |\psi_0\rangle in Euclidean distance — specifically, if \||\psi_w\rangle - |\psi_0\rangle\| is small — then their measurement statistics are almost the same, and |\psi_w\rangle cannot be returning w with probability much higher than |\psi_0\rangle does. But |\psi_0\rangle has no idea what w is; averaged over a random w, the probability it returns that specific w is exactly 1/N.
So |\psi_w\rangle must be far from |\psi_0\rangle — far enough that the measurement distribution shifts significantly toward w. Concretely, measuring |\psi_w\rangle returns w with probability at least p, so the amplitude satisfies |\langle w | \psi_w\rangle| \ge \sqrt p, and

\| |\psi_w\rangle - |\psi_0\rangle \| \ge |\langle w | \psi_w\rangle - \langle w | \psi_0\rangle| \ge \sqrt p - |\langle w | \psi_0\rangle|

for each w that is measured with probability at least p in |\psi_w\rangle. Why: the first inequality is Cauchy-Schwarz against the unit vector |w\rangle; the second is the triangle inequality. The subtracted term |\langle w | \psi_0\rangle| is small for typical w, because the squared amplitudes of |\psi_0\rangle sum to 1 across all N outcomes.
Sum over all N possible marked inputs, using \sum_w |\langle w | \psi_0\rangle|^2 = 1 and (by Cauchy-Schwarz) \sum_w |\langle w | \psi_0\rangle| \le \sqrt N:

\sum_w \| |\psi_w\rangle - |\psi_0\rangle \|^2 \ge Np - 2\sqrt{pN} + 1 = \left(\sqrt{Np} - 1\right)^2.

This is the target inequality. The left side measures how much the silent and marked oracles can pull the algorithm's state apart, summed over all possible choices of w. For p \ge 2/3, the right side is about 2N/3 — a lot.
Now count how much each oracle query can contribute to this sum. Here comes the hybrid step.
The per-query bound
Define hybrid k (for k = 0, 1, \ldots, T) as the algorithm you get by using U_w for the first k queries and U_0 for the remaining T - k queries. Hybrid 0 uses only U_0 throughout, so its output is |\psi_0\rangle. Hybrid T uses U_w throughout, so its output is |\psi_w\rangle. Intermediate hybrids interpolate.
By the triangle inequality,

\| |\psi_w\rangle - |\psi_0\rangle \| \le \sum_{k=0}^{T-1} \big\| (\text{hybrid } k{+}1 \text{ final state}) - (\text{hybrid } k \text{ final state}) \big\|.
Each term is the change from switching the (k+1)-th query from U_0 to U_w. Let |\phi_{w,k}\rangle be the state just before the (k+1)-th query in the hybrid using U_w for queries 1, \ldots, k and U_0 thereafter. After that query, the state switches from U_0 |\phi_{w,k}\rangle = |\phi_{w,k}\rangle (silent case) to U_w |\phi_{w,k}\rangle (marked case).
What does U_w do? It flips the sign of the amplitude on |w\rangle and leaves everything else alone. So

U_w |\phi_{w,k}\rangle - |\phi_{w,k}\rangle = -2\,\alpha_{w,k} |w\rangle,

where \alpha_{w,k} = \langle w | \phi_{w,k}\rangle is the amplitude on |w\rangle at that moment. The Euclidean length of this difference is 2 |\alpha_{w,k}|.
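This one-line identity is easy to sanity-check numerically. A minimal numpy sketch, with a random unit vector standing in for the pre-query state and the phase oracle built as the matrix I - 2|w\rangle\langle w|:

```python
import numpy as np

rng = np.random.default_rng(0)
N, w = 8, 3                          # search-space size N, marked input w

# Phase oracle U_w = I - 2|w><w|: flips the sign of the amplitude on |w>.
U_w = np.eye(N)
U_w[w, w] = -1.0

# Random unit vector |phi>, standing in for the state just before a query.
phi = rng.normal(size=N) + 1j * rng.normal(size=N)
phi /= np.linalg.norm(phi)

diff = U_w @ phi - phi               # U_w|phi> - U_0|phi>  (U_0 = identity)
alpha = phi[w]                       # amplitude <w|phi> at this moment

print(np.linalg.norm(diff), 2 * abs(alpha))   # the two values agree
```

The difference vector is supported on |w\rangle alone, so its length is 2|\alpha| no matter what the rest of the state looks like.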
After the query, the two hybrids apply exactly the same remaining operations (the later queries are U_0 in both). Unitaries preserve Euclidean distance, so the final-state difference is also 2 |\alpha_{w,k}|:

\big\| (\text{hybrid } k{+}1 \text{ final state}) - (\text{hybrid } k \text{ final state}) \big\| = 2 |\alpha_{w,k}|.
Why this is the central step: the only way U_w differs from U_0 is in what it does to the amplitude on |w\rangle. If the algorithm has no amplitude on |w\rangle at a given moment (i.e., \alpha_{w,k} = 0), then U_w and U_0 produce identical results — that query does nothing useful toward distinguishing w from "no mark." Progress toward finding w is exactly the accumulation of amplitude on |w\rangle across queries.
Summing over k via the triangle inequality and then applying Cauchy-Schwarz:

\| |\psi_w\rangle - |\psi_0\rangle \|^2 \le \left( \sum_{k=0}^{T-1} 2 |\alpha_{w,k}| \right)^2 \le 4T \sum_{k=0}^{T-1} |\alpha_{w,k}|^2.
Averaging over w
Now sum the inequality \||\psi_w\rangle - |\psi_0\rangle\|^2 \le 4T \sum_k |\alpha_{w,k}|^2 over all w \in \{0,1\}^n:

\sum_w \| |\psi_w\rangle - |\psi_0\rangle \|^2 \le 4T \sum_{k=0}^{T-1} \sum_w |\alpha_{w,k}|^2.
If the intermediate state did not depend on w, the inner sum \sum_w |\alpha_{w,k}|^2 would be the total squared amplitude of a single unit vector over all basis states — exactly 1, and we would be done. But \alpha_{w,k} does depend on w, because the hybrid itself depends on w: the state |\phi_{w,k}\rangle is produced by running the algorithm with U_w for the first k queries, so the amplitude on |w\rangle depends on which w we are considering.
Here the argument uses a substitution trick. Replace |\phi_{w,k}\rangle with the state produced by the all-silent hybrid — call it |\phi_k\rangle — which does not depend on w. A short additional argument (see Going deeper) bounds the substitution error; the upshot is

\sum_w |\alpha_{w,k}|^2 \le \sum_w |\langle w | \phi_k\rangle|^2 = 1,

up to a constant factor, because |\phi_k\rangle is a unit vector.
So \sum_w \||\psi_w\rangle - |\psi_0\rangle\|^2 \le 4T \cdot T = 4T^2.
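The whole chain can be checked numerically, using Grover itself as the T-query algorithm (the extremal case). A minimal numpy sketch — the helper name `grover_final_state` is ours, and the algorithm is the phase-oracle/diffusion form of Grover:

```python
import numpy as np

def grover_final_state(N, T, w=None):
    """Run T Grover iterations on N items; the oracle marks w
    (w=None means the silent oracle U_0, which does nothing)."""
    s = np.full(N, 1 / np.sqrt(N))        # uniform starting state
    psi = s.copy()
    for _ in range(T):
        if w is not None:
            psi[w] = -psi[w]              # phase oracle U_w
        psi = 2 * s * (s @ psi) - psi     # diffusion 2|s><s| - I
    return psi

N, totals = 64, {}
for T in [1, 2, 4, 6]:
    psi0 = grover_final_state(N, T)       # all-silent run: |psi_0>
    totals[T] = sum(np.linalg.norm(grover_final_state(N, T, w) - psi0) ** 2
                    for w in range(N))
    print(T, round(totals[T], 2), "<=", 4 * T ** 2)
```

For T = 1 the sum saturates the 4T^2 ceiling exactly (a single Grover query wastes nothing); for larger T it stays strictly below it, as the hybrid bound requires.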
Putting it together
Combining the target inequality and the hybrid bound:

\left(\sqrt{Np} - 1\right)^2 \le \sum_w \| |\psi_w\rangle - |\psi_0\rangle \|^2 \le 4T^2.
Rearranging:

T \ge \frac{\sqrt{Np} - 1}{2} = \Omega(\sqrt N).
Why this finishes the proof: the left inequality said the sum of state-distances is at least linear in N (for p bounded away from zero). The right inequality said each query contributes at most O(1) to the distance sum, so T queries contribute at most O(T^2). Equating linear-in-N with T^2 gives T = \Omega(\sqrt N). The argument is tight because Grover achieves it — there's no slack to chase.
That is the BBBV theorem.
Why the argument works
Strip the proof down to its slogan: each query can grow the amplitude on |w\rangle by at most \Theta(1/\sqrt N), on average over w, so \sqrt N queries are needed to reach constant amplitude.
The 1/\sqrt N comes from a simple counting fact. The algorithm's state is a unit vector in an N-dimensional space. Spread the unit norm over all N basis states and on average each one gets |\alpha|^2 = 1/N of the squared amplitude — so |\alpha| = 1/\sqrt N. That average amplitude is the per-query "lever" for pushing probability toward the marked input. To move the success probability from the baseline 1/N to a constant, you need \sqrt N applications of that lever.
Grover's algorithm is, in this language, the algorithm that uses the lever optimally — at every step, its amplitude on |w\rangle is as large as the constraints allow. No cleverer algorithm can do better, because the constraint is physics, not engineering.
The polynomial method — a second proof
There is an entirely different proof, discovered in 1998 by Beals, Buhrman, Cleve, Mosca, and de Wolf. It uses polynomials in an elegant way that deserves brief exposition.
Here is the key fact. The output probability of any T-query quantum algorithm, viewed as a function of the oracle's truth table (the N-bit vector of values of f), is a real polynomial of degree at most 2T in the oracle bits.
Why degree 2T: each oracle query is linear in the oracle bits — the phase (-1)^{f_x} = 1 - 2f_x is a degree-1 expression in f_x. The algorithm alternates fixed unitary operations (linear maps with constant coefficients, which do not raise the degree) with queries (each raising the degree by at most 1). After T queries, the final amplitudes are polynomials of degree at most T in the oracle bits. The output probability is a squared absolute value of an amplitude, which doubles the degree — so degree at most 2T in the oracle bits.
Call this polynomial P(f) = \Pr[\mathcal A \text{ succeeds} \mid \text{oracle } f]. If \mathcal A solves unstructured search reliably, then P(f) \ge 2/3 for every f that has exactly one marked input, and P(f) \le 1/3 for the all-zeros f. But a polynomial in N Boolean variables that is above 2/3 on every weight-one input and below 1/3 on the all-zeros input must have degree \Omega(\sqrt N).
That degree lower bound — proved by Nisan and Szegedy in 1994, building on A. A. Markov's 19th-century inequality on polynomial derivatives — is a purely classical fact about polynomials. Combining it with the "degree \le 2T" bound gives 2T = \Omega(\sqrt N), i.e. T = \Omega(\sqrt N). Same conclusion, different route.
Which proof to use. The hybrid argument is more intuitive; the polynomial method is more general. For some problems (like element distinctness, collision finding), the polynomial method gives tight lower bounds that the hybrid argument cannot. For others (non-Boolean outputs, general relations), the hybrid argument is cleaner. Both are now standard techniques in the "quantum query complexity" toolkit.
Example 1: one query gives you probability $O(1/N)$
Take the simplest case: T = 1. The algorithm starts in some fixed state, applies the oracle once, applies some unitary, and measures. How large can the success probability be?
Step 1. Set up the state before the query. The algorithm's starting state is a unit vector |\phi\rangle = \sum_x \alpha_x |x\rangle with \sum_x |\alpha_x|^2 = 1.
Step 2. Apply the oracle. For the marked-at-w oracle, the state becomes U_w |\phi\rangle = \sum_{x \ne w} \alpha_x |x\rangle - \alpha_w |w\rangle. The amplitude on |w\rangle has flipped sign; all others unchanged.
Step 3. Apply any post-query unitary V. The measurement probability on outcome w is |\langle w | V U_w | \phi\rangle|^2.
Step 4. Bound the average over w. Averaging |\langle w | V U_w | \phi\rangle|^2 over all w and applying the hybrid argument with T = 1 gives an average success probability of at most (2T+1)^2/N = 9/N. This is tight: a single Grover iteration starting from the uniform state succeeds with probability \sin^2(3\theta), where \sin\theta = 1/\sqrt N — approximately 9/N for large N, and exactly 1 for N = 4.
Result. With a single query, the best average success probability is \Theta(1/N). One query is enough to learn essentially nothing. To push the success probability to a constant, you need many more queries — and the hybrid argument tells you exactly how many.
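A quick simulation confirms both ends of this example: one Grover iteration (one query) reaches \sin^2(3\theta), which stays below 9/N and approaches it for large N, and for N = 4 it succeeds with certainty. The helper name `one_query_success` is illustrative:

```python
import numpy as np

def one_query_success(N):
    """Uniform start, one phase-oracle query, one diffusion step;
    returns the probability of measuring the marked input."""
    w = 0                              # marked input (its position is irrelevant)
    s = np.full(N, 1 / np.sqrt(N))
    psi = s.copy()
    psi[w] = -psi[w]                   # the single oracle query
    psi = 2 * s * (s @ psi) - psi      # diffusion 2|s><s| - I
    return psi[w] ** 2                 # equals sin^2(3*theta), sin(theta) = 1/sqrt(N)

for N in [4, 16, 64, 256]:
    print(N, round(one_query_success(N), 4), "<=", round(9 / N, 4))
```

For N = 4 the output probability is exactly 1 — the famous case where Grover finds the needle with certainty in a single query.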
Example 2: the polynomial method for small $N$
Work through the polynomial method for a tiny case: N = 4, so n = 2 and the oracle is specified by four bits f_0, f_1, f_2, f_3 \in \{0,1\} with exactly one f_i = 1.
Step 1. The algorithm's success probability is a polynomial. For any T-query algorithm, \Pr[\text{output correct}](f_0, f_1, f_2, f_3) is a polynomial in the four oracle bits of degree at most 2T.
Step 2. What does "solving search" require? The algorithm must output 0 on input (1,0,0,0), output 1 on (0,1,0,0), output 2 on (0,0,1,0), output 3 on (0,0,0,1). The probability of correct output is a polynomial that equals 2/3 or more at each of the four "standard unit" inputs and equals 1/3 or less at the all-zeros input.
Step 3. What is the minimum degree? For N = 4, the minimum degree turns out to be 2 (you can check by hand; it is consistent with the Nisan-Szegedy \Omega(\sqrt N) scaling, since \sqrt 4 = 2). So the algorithm needs 2T \ge 2, i.e., at least T = 1 query.
Step 4. Scale up. For N = 10^6, \sqrt N = 1000, so the polynomial must have degree on the order of 1000, giving T \gtrsim 500. For N = 10^{12}, T \gtrsim 500{,}000. Up to the constants hidden in the \Omega, these are the numbers BBBV's theorem delivers in the asymptotic regime.
Result. The polynomial method is an algorithm-free lower bound: you never need to think about how a specific algorithm works, only about the fact that its output probability is a bounded-degree polynomial. This decouples the lower bound from the details of the algorithm, which is why the method generalises to problems far beyond Grover.
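The degree-\le-2T fact can be verified directly for the N = 4, T = 1 case above. The sketch below tracks the probability of one fixed outcome (index 0) — each outcome's probability obeys the same degree bound — for a one-query algorithm (uniform start, phase oracle, one diffusion step), and interpolates it as a multilinear polynomial over all 16 truth tables; the helper name `outcome_prob` is ours:

```python
import numpy as np
from itertools import combinations, product

def outcome_prob(f):
    """Probability that a 1-query algorithm (uniform start, phase oracle,
    diffusion) outputs index 0, for an arbitrary oracle truth table f."""
    N = len(f)
    s = np.full(N, 1 / np.sqrt(N))
    psi = s * (-1.0) ** np.array(f)    # phase oracle: degree 1 in each f_x
    psi = 2 * s * (s @ psi) - psi      # diffusion: no oracle dependence
    return psi[0] ** 2                 # squaring doubles the degree

N = 4
inputs = list(product([0, 1], repeat=N))
subsets = [S for r in range(N + 1) for S in combinations(range(N), r)]

# Interpolate P(f) in the multilinear monomial basis prod_{i in S} f_i.
A = np.array([[np.prod([f[i] for i in S]) for S in subsets] for f in inputs])
coeffs = np.linalg.solve(A.astype(float),
                         np.array([outcome_prob(f) for f in inputs]))

for S, c in zip(subsets, coeffs):
    if abs(c) > 1e-9:
        print(S, round(c, 4))          # only subsets of size <= 2 appear
```

Every monomial of degree 3 or 4 comes out with coefficient zero: the output probability has degree \le 2T = 2, exactly as Step 1 claims.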
What BBBV implies
BBBV is more than a note about Grover. It is a structural result with wide consequences.
Grover is optimal — no cleverer unstructured search. Anyone who proposes a sub-\sqrt N quantum search algorithm for an unstructured oracle is wrong; BBBV rules it out. This has saved a lot of wasted effort.
Exponential speedups require structure. Shor, Simon, Bernstein-Vazirani, Deutsch-Jozsa — every exponential quantum speedup known rides on a mathematical property (periodicity, linearity, hidden subgroup structure) that interference can exploit. BBBV proves that without such structure, no exponential speedup is possible; the best you get is quadratic. This is the sharp boundary between the "algorithmic niches where quantum helps exponentially" and "everywhere else, quadratic at best."
NP-hard problems unlikely to fall to brute-force quantum. SAT, travelling salesman, clique — these are defined as search over 2^n candidate solutions. Applying Grover gives a brute-force speedup to 2^{n/2}, but that is still exponential. A polynomial-time quantum algorithm for NP-hard problems would require exploiting structure, not brute force, and no such structure has been found. BBBV does not prove NP \not\subseteq BQP, but it does close off the "just apply Grover" route.
A template for other lower bounds. The hybrid argument has been extended to prove \Omega(\sqrt N) for many variants (multiple marked inputs, with promise of at-most-one mark, and so on), and the polynomial method extends to collision finding (\Omega(N^{1/3})), element distinctness (\Omega(N^{2/3})), graph connectivity, and more. Every lower bound in quantum query complexity traces lineage back to these two techniques.
Common confusions
- "\sqrt N is the quantum limit for all problems." No — BBBV is specifically about unstructured search. Structured problems like factoring admit superpolynomial speedups via Shor — far more than a quadratic improvement. The \sqrt N limit applies only when the oracle is a black box with no exploitable property.
- "BBBV applies to classical algorithms too." No — BBBV is a quantum lower bound. The classical lower bound is the much stronger \Omega(N) (you really do have to look at every input). BBBV says "even with quantum, you can only do \sqrt N — not \log N, and not constant." Classical is \Omega(N); quantum is \Omega(\sqrt N).
- "If I allow randomness, the lower bound goes away." No. The BBBV proof holds for any quantum algorithm — including randomised ones, since classical randomness is a special case of quantum evolution. More formally, bounded-error quantum algorithms (algorithms allowed to err with probability up to 1/3) still need \Omega(\sqrt N) queries. The "1/3 error" is already baked into the theorem statement.
- "The \Omega(\sqrt N) is only asymptotic; small N is different." The asymptotic form is the strongest statement, but the inequality T \ge (\sqrt{Np} - 1)/2 holds for every N \ge 2. For very small N the bound is loose (the constant matters), but the scaling is correct from the smallest values of N upward.
- "Grover is tight, so BBBV closes the search problem forever." Tight up to constants. The exact constant in Grover is \pi/4 \approx 0.785; BBBV's constant is somewhat smaller. The scaling is closed forever; the precise constant is not (and nobody thinks about the constant much — it's the scaling that matters for algorithm design).
- "The polynomial method is the modern technique; the hybrid method is old." Both are modern. The hybrid method is more intuitive and came first (BBBV 1997); the polynomial method (1998) is stronger for some problems but harder to apply. Both are taught side by side in graduate quantum-computation courses.
Going deeper
The hybrid argument and the polynomial method are covered above at the "you can follow the steps" level. The rest of this section fills in the technical pieces the main proof glossed over: the all-silent-hybrid substitution, the Nisan-Szegedy polynomial degree bound, the adversary method (a third technique, often tightest), and the placement of BBBV in the broader theory of quantum complexity.
The all-silent substitution
The main proof used the inequality \sum_w |\alpha_{w,k}|^2 \le \sum_w |\langle w | \phi_k\rangle|^2 = 1, substituting the w-dependent intermediate state |\phi_{w,k}\rangle with the w-independent all-silent state |\phi_k\rangle. The two states differ only through the first k queries, and those queries differ only where the algorithm has amplitude on |w\rangle — which is exactly the amplitude the bound is trying to sum. A careful accounting (see BBBV's 1997 paper, §3) shows the substitution loses only a constant factor, which affects the constant in front of \sqrt N but not the scaling. (An alternative, common in modern presentations, is to define hybrid k as using U_w for the last k queries rather than the first k; then the state just before the switched query comes from an all-silent run, is independent of w, and no substitution is needed.)
The Nisan-Szegedy degree bound
A real polynomial P(x_1, \ldots, x_N) that approximates a Boolean function f: \{0,1\}^N \to \{0,1\} to within error 1/3 — i.e., |P(x) - f(x)| \le 1/3 for every Boolean input x — has degree at least \Omega(\sqrt{\deg(f)}), where \deg(f) is the exact polynomial degree of f. For the search function "is any f_i = 1?" the exact degree is N, so the approximating-degree is \Omega(\sqrt N). This is the Nisan-Szegedy theorem (1994), proved using A. A. Markov's 1890 inequality on polynomial derivatives.
Chained into the polynomial-method framework: quantum-algorithm output probability P(f) is degree \le 2T; any polynomial solving search has degree \ge \Omega(\sqrt N); therefore T \ge \Omega(\sqrt N).
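The "exact degree of OR is N" ingredient can be verified for a small case by multilinear interpolation (N = 3 shown; a numerical sketch, not part of the original proof):

```python
import numpy as np
from itertools import combinations, product

N = 3
inputs = list(product([0, 1], repeat=N))
subsets = [S for r in range(N + 1) for S in combinations(range(N), r)]

# Interpolate OR(f) = [any f_i = 1] in the multilinear monomial basis.
A = np.array([[np.prod([f[i] for i in S]) for S in subsets] for f in inputs])
target = np.array([1.0 if any(f) else 0.0 for f in inputs])
coeffs = np.linalg.solve(A.astype(float), target)

# OR = f1 + f2 + f3 - f1f2 - f1f3 - f2f3 + f1f2f3: the top monomial survives,
# so the exact multilinear degree equals N.
print(dict(zip(subsets, np.round(coeffs, 6))))
```

The coefficient of the full monomial f_1 f_2 f_3 is 1, so nothing of lower degree represents OR exactly — the starting point the Nisan-Szegedy bound then converts into an \Omega(\sqrt N) approximate-degree bound.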
The adversary method
Introduced by Andris Ambainis in 2000, the adversary method is a third technique for proving quantum query lower bounds, and it is often tight when the hybrid and polynomial methods are not. The idea: construct a bipartite "relation" R \subseteq X \times Y between positive-instance inputs X and negative-instance inputs Y. Each query can change the "overlap" between the algorithm's state on x \in X and y \in Y by at most a certain amount depending on the structure of R. Summing, you get a lower bound on the number of queries.
For unstructured search, the adversary method reproduces \Omega(\sqrt N). For many problems — triangle finding, formula evaluation — it beats the polynomial method, though for others, such as element distinctness, the tight bound came from the polynomial method (the basic adversary method hits a certificate-complexity barrier there). The semidefinite-programming generalisation (Reichardt 2009) turned out to characterise bounded-error quantum query complexity exactly for all function-evaluation problems — a remarkable result.
Generalised BBBV — multi-target search
If the oracle has M marked inputs instead of one, Grover's algorithm finds one of them in O(\sqrt{N/M}) queries. The BBBV-style lower bound says this is optimal: \Omega(\sqrt{N/M}) is required. When M = 1, this is \Omega(\sqrt N); when M = N/2, it is \Omega(1) — basically instant, which makes sense because half the inputs are marked and random sampling works.
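The closed form for Grover with M marked inputs — success probability \sin^2((2k+1)\theta) after k iterations, with \sin\theta = \sqrt{M/N} — makes the \sqrt{N/M} scaling easy to tabulate. A small sketch (the helper name `grover_iterations` is ours):

```python
import numpy as np

def grover_iterations(N, M):
    """Iteration count for near-certain success with M marked inputs out of N.
    Success after k iterations is sin^2((2k+1)*theta), sin(theta) = sqrt(M/N);
    the optimum is the integer nearest pi/(4*theta) - 1/2."""
    theta = np.arcsin(np.sqrt(M / N))
    return int(round(np.pi / (4 * theta) - 0.5))

N = 1024
for M in [1, 4, 16, 64]:
    print(M, grover_iterations(N, M), round(np.pi / 4 * np.sqrt(N / M), 1))
```

Quadrupling M halves the iteration count (25, 12, 6, 3 for this N) — exactly the \sqrt{N/M} scaling that the BBBV-style lower bound shows is unbeatable.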
Lower bounds for other problems
The techniques BBBV pioneered have generated a rich theory. A sampling of notable lower bounds:
- Collision finding: \Omega(N^{1/3}) quantum queries for finding x \ne y with f(x) = f(y) (Aaronson-Shi 2004). Classical is \Omega(\sqrt N) (birthday bound).
- Element distinctness: \Omega(N^{2/3}) quantum queries for deciding whether a list has duplicates (Ambainis 2004). Classical is \Omega(N).
- Graph connectivity: \Omega(N^{3/2}) quantum queries for an N-vertex graph in the adjacency-matrix oracle model.
- Boolean-formula evaluation: tight bounds via span programs and the adversary method (Reichardt 2011).
Each of these sits on the quantum-speedup ladder — the polynomial speedups (often in the \sqrt N to N^{2/3} range) that BBBV-style analysis precisely quantifies.
BBBV's place in complexity theory
BBBV is the foundational result for quantum query complexity — the sub-field of complexity theory concerned with oracle-query lower bounds in the quantum setting. The results have implications for BQP (the class of problems quantum computers solve in polynomial time). Oracle separations between BQP and NP, BQP and PH, and between BQP at different query counts, all use BBBV-descended techniques. The most celebrated recent result — Raz-Tal 2019, showing a relativised separation BQP \not\subseteq PH — builds a problem (forrelation) that a quantum algorithm solves with a single query but that, relative to the oracle, no level of the polynomial hierarchy can solve efficiently.
The Indian theoretical-computer-science community has contributed directly to this line: researchers such as Rahul Jain, Pranab Sen, and Ashwin Nayak have proved strong communication and query lower bounds using adversary-method and polynomial-method variants. TIFR's School of Technology and Computer Science and IISc Bangalore's theoretical CS group are the main Indian hubs for quantum complexity theory, continuing the lower-bound research program BBBV launched.
The pedagogical summary
Every lower bound in quantum query complexity traces back to three ideas:
- The hybrid argument (BBBV 1997) — swap oracles one query at a time, bound the per-query change.
- The polynomial method (BBCMW 1998) — represent output probability as a low-degree polynomial, lower-bound the degree.
- The adversary method (Ambainis 2000) — design a relation between positive and negative instances, bound the per-query overlap change.
The three are complementary. For unstructured search, all three give \Omega(\sqrt N). For more subtle problems, one may be strictly stronger than the others. BBBV is the historical progenitor and the simplest instance of the first technique.
Where this leads next
- Grover's algorithm circuit — the algorithm that achieves the \sqrt N upper bound BBBV's lower bound matches.
- Amplitude amplification — Grover generalised: the \sqrt N-style speedup that BBBV shows is the ceiling for search-like problems.
- Unstructured search — the problem definition and classical lower bound.
- Lessons about quantum speedups — the taxonomy of quantum speedups that BBBV's ceiling helps define.
- Grover applications and limits — where \sqrt N actually helps in practice and where it doesn't.
References
- Bennett, Bernstein, Brassard, Vazirani, Strengths and weaknesses of quantum computing (1997) — the original BBBV paper, containing the hybrid-argument proof. arXiv:quant-ph/9701001.
- Beals, Buhrman, Cleve, Mosca, de Wolf, Quantum lower bounds by polynomials (1998) — the polynomial method for quantum query complexity. arXiv:quant-ph/9802049.
- Andris Ambainis, Quantum lower bounds by quantum arguments (2000) — the adversary method. arXiv:quant-ph/0002066.
- John Preskill, Lecture Notes on Quantum Computation, Ch. 6 — theory.caltech.edu/~preskill/ph229.
- Nielsen and Chuang, Quantum Computation and Quantum Information, §6.6 (optimality of quantum search) — Cambridge University Press.
- Wikipedia, Quantum query complexity — encyclopaedic survey of the lower-bound techniques.