In short

Quantum phase estimation (QPE) takes a unitary U and one of its eigenstates |u\rangle with U|u\rangle = e^{2\pi i\varphi}|u\rangle, and returns an n-bit approximation of the phase \varphi. The trick to using it for a real algorithm is to find a unitary whose eigenvalue is the thing you want to know. Five headline applications exploit this pattern: Shor's order-finding (eigenvalue encodes the period of modular multiplication), HHL (eigenvalues of a matrix A drive a linear-system solver), quantum chemistry (eigenvalues of a molecular Hamiltonian are energy levels), amplitude estimation (eigenvalue encodes a probability), and quantum counting (amplitude estimation specialised to count solutions). QPE plus a clever unitary is, in a real sense, the entire toolkit of exact-eigenvalue quantum algorithms.

Phase estimation, on its own, looks narrow. You hand it a unitary U and an eigenstate |u\rangle, it hands you back the eigenphase \varphi written in binary. One question in, one answer out. It is hard to see, on first exposure, why this would be the beating heart of whole families of quantum algorithms.

The point becomes obvious once you ask the inverse question: what problems can you convert into eigenvalue-finding? Astonishingly many. Factoring a 2048-bit RSA modulus reduces to finding the period of modular exponentiation — which is the eigenvalue of a modular-multiplication unitary. Solving a linear system A\mathbf{x} = \mathbf{b} reduces to inverting the eigenvalues of A — which QPE can read off. Computing the ground-state energy of a molecule is literally asking for the smallest eigenvalue of its Hamiltonian. Counting the number of solutions to a search problem reduces to estimating the amplitude of a Grover operator — whose eigenvalue encodes that count.

Once you see it, you see it everywhere. This chapter is the tour — five applications of the same idea, told in the order of least to most abstract.

The template — every PE application looks the same

Every application of phase estimation follows the same three-step template:

  1. Find a unitary U whose eigenvalue e^{2\pi i\varphi} encodes the quantity you actually want.
  2. Prepare an eigenstate (or a superposition of eigenstates) of U on the data register.
  3. Run QPE with enough ancilla qubits to read \varphi to the precision you need.

The art is all in step 1. Given a problem — factor this number, solve this system, find this ground-state energy — the first question a quantum-algorithm designer asks is "which unitary's eigenvalue is my answer?"
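The three-step template can be made concrete with a small statevector simulation. The sketch below is my own illustration in plain numpy (not from any quantum SDK): it computes the ideal QPE outcome distribution for an arbitrary unitary U and input state by expanding the input in U's eigenbasis and applying the textbook n-bit readout kernel. Step 1 is the choice of U, step 2 the choice of psi, step 3 the choice of n.

```python
import numpy as np

def qpe_distribution(U, psi, n):
    """Ideal n-ancilla QPE outcome probabilities for input state psi.

    Expands psi in the eigenbasis of U (assumed non-degenerate here, so
    numpy's eigenvectors are orthonormal); each eigenphase phi_j contributes
    the textbook QPE kernel, weighted by |<u_j|psi>|^2.
    """
    N = 2 ** n                                         # ancilla outcomes 0..N-1
    evals, evecs = np.linalg.eig(U)
    phases = np.angle(evals) / (2 * np.pi) % 1.0       # phi_j in [0, 1)
    weights = np.abs(evecs.conj().T @ psi) ** 2        # |<u_j|psi>|^2
    ks = np.arange(N)
    m = np.arange(N)
    probs = np.zeros(N)
    for w, phi in zip(weights, phases):
        # amplitude at outcome k: (1/N) * sum_m exp(2*pi*i*m*(phi - k/N))
        amp = np.exp(2j * np.pi * np.outer(phi - ks / N, m)).sum(axis=1) / N
        probs += w * np.abs(amp) ** 2
    return probs
```

For an exact eigenstate whose phase is an n-bit fraction, the distribution is a single spike; for non-representable phases it spreads over neighbouring outcomes in the familiar sinc-like pattern.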

[Figure: Phase estimation as a subroutine, five branches. A central box labelled Quantum Phase Estimation (U|u⟩ = e^{2πiφ}|u⟩) with five arrows radiating outward: Shor's order-finding (U: y → ay mod N), HHL linear systems (U = e^{iAt}), quantum chemistry ground state (U = e^{-iHt}), amplitude estimation (U = Grover operator Q), and quantum counting (amplitude estimation specialised).]
Five of the most important quantum algorithms are all phase estimation wearing different hats. What changes is the unitary whose eigenvalue you want.

The rest of this article walks through the five applications in that tree, one at a time, in rough order of rising abstraction. The goal is not to make you an expert on each — each deserves its own full chapter — but to show you the shape of how each one uses QPE.

Application 1 — Shor's order-finding

Factoring an integer N classically is hard: the best known algorithm (the general number field sieve) runs in sub-exponential but super-polynomial time, and a 2048-bit modulus is out of reach for every computer ever built. Shor discovered in 1994 that factoring reduces to a simpler-sounding task called order-finding.

The order-finding problem. Given integers a and N with \gcd(a, N) = 1, find the smallest positive integer r such that a^r \equiv 1 \pmod{N}. This r is called the order of a modulo N.

Why this helps factor N: once you know r (and r is even, which happens with non-trivial probability for random a), a^{r/2} - 1 and a^{r/2} + 1 are likely to share non-trivial factors with N, and a classical GCD step extracts them.
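The classical reduction from order to factors can be checked in a few lines. A hedged sketch with brute-force order computation, only sensible for toy N (the quantum algorithm replaces the `order` loop, not the gcd step):

```python
from math import gcd

def order(a, N):
    """Smallest r > 0 with a^r = 1 (mod N); brute force, fine for tiny N."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_from_order(a, N):
    """Shor's classical post-step: even order r gives gcd(a^(r/2) +- 1, N)."""
    r = order(a, N)
    if r % 2:
        return None                      # odd order: retry with a different a
    y = pow(a, r // 2, N)
    f1, f2 = gcd(y - 1, N), gcd(y + 1, N)
    return (f1, f2) if 1 < f1 < N else None
```

For example, `factor_from_order(7, 15)` finds order 4 and returns the factor pair (3, 5).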

The unitary. Define a quantum operation U on \lceil \log_2 N \rceil qubits that multiplies by a modulo N:

U|y\rangle \;=\; |ay \bmod N\rangle.

Apply U repeatedly to |1\rangle: you get |a\rangle, |a^2 \bmod N\rangle, |a^3 \bmod N\rangle, \ldots, cycling back to |1\rangle after exactly r steps — because a^r \equiv 1. So U is a permutation of the integers \{1, 2, \ldots, N-1\} that are coprime to N, and each orbit has length r.

The eigenvalues. Because U^r = I on the orbit of |1\rangle, the eigenvalues of U are the r-th roots of unity: e^{2\pi i \cdot s/r} for s = 0, 1, \ldots, r-1. The corresponding eigenstates are

|u_s\rangle \;=\; \frac{1}{\sqrt{r}}\sum_{k=0}^{r-1} e^{-2\pi i\, sk/r}\,|a^k \bmod N\rangle.

Why this is the eigenstate: applying U shifts k \to k+1 in the sum, and the exponential e^{-2\pi i\, sk/r} absorbs the shift as a global phase of e^{2\pi i\, s/r}. One line of algebra confirms U|u_s\rangle = e^{2\pi i\, s/r}|u_s\rangle.
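Both claims, that the eigenvalues are r-th roots of unity and that the |u_s⟩ above are eigenstates, can be verified numerically for a small case. A numpy check for a = 7, N = 15 (where r = 4, anticipating Example 1 below):

```python
import numpy as np

a, N, r = 7, 15, 4                       # the order of 7 mod 15 is 4

# Modular-multiplication unitary: U|y> = |a*y mod N>, a permutation matrix
U = np.zeros((N, N))
for y in range(N):
    U[(a * y) % N, y] = 1.0

# Eigenstates |u_s> supported on the orbit {a^k mod N} of |1>
orbit = [pow(a, k, N) for k in range(r)]     # [1, 7, 4, 13]
for s in range(r):
    u = np.zeros(N, dtype=complex)
    for k, x in enumerate(orbit):
        u[x] = np.exp(-2j * np.pi * s * k / r) / np.sqrt(r)
    # check U|u_s> = e^{2*pi*i*s/r} |u_s>
    assert np.allclose(U @ u, np.exp(2j * np.pi * s / r) * u)
```

Applying U shifts each orbit index k to k + 1, and the assertion confirms that the exponential prefactor absorbs the shift as the advertised global phase.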

The PE step. Run quantum phase estimation on U with an initial state |1\rangle on the data register. The key observation: |1\rangle is not itself an eigenstate, but it is a uniform superposition of the r eigenstates,

|1\rangle \;=\; \frac{1}{\sqrt{r}}\sum_{s=0}^{r-1} |u_s\rangle.

By linearity, PE run on this superposition produces a superposition of phase-register states, one for each eigenvalue. Measurement collapses the phase register to a single s/r (to n bits of precision), from which classical continued-fraction expansion recovers r with high probability.

Example 1: the PE step for order-finding

Take N = 15, a = 7. You want the order r such that 7^r \equiv 1 \pmod{15}.

Step 1. Build the unitary. U|y\rangle = |7y \bmod 15\rangle acts on a 4-qubit register (enough for y \in \{0, 1, \ldots, 15\}). Iterating from y = 1: 1 \to 7 \to 4 \to 13 \to 1 \to \ldots — the orbit has length 4, so the order is r = 4. Why this is a good test case: you can verify by hand that 7^2 = 49 \equiv 4, 7^3 \equiv 7 \cdot 4 = 28 \equiv 13, and 7^4 \equiv 7 \cdot 13 = 91 \equiv 1 \pmod{15}. The quantum algorithm must find r = 4 without ever computing this list classically.

Step 2. The eigenvalues of U are the 4th roots of unity: \{1, i, -1, -i\} with phases \varphi \in \{0, 1/4, 2/4, 3/4\}.

Step 3. Prepare |1\rangle on the data register. This is a superposition of the four eigenstates |u_0\rangle + |u_1\rangle + |u_2\rangle + |u_3\rangle (each with amplitude 1/\sqrt{4} = 1/2).

Step 4. Run PE with, say, 6 ancilla qubits (giving 2^{-6} precision — enough to distinguish fractions with denominator \le 4). The measurement on the ancilla register returns, with probability 1/4 each, one of \{0, 16, 32, 48\}/64 = \{0, 1/4, 1/2, 3/4\} — the four values of s/r. Why 6 bits of precision: for the continued-fraction step to pin down a denominator r, the phase must be known to within 1/(2r^2). For r = 4 this is 1/32 = 2^{-5}, so n = 6 suffices with room to spare.

Step 5. Suppose the measurement returns s/r = 3/4. The classical post-processor does the continued-fraction expansion of 0.75 = 3/4 and reads off the denominator r = 4.

Result. The order is r = 4. From a^{r/2} = 7^2 = 49 \equiv 4 \pmod{15}, the factors of 15 are recovered as \gcd(4 - 1, 15) = 3 and \gcd(4 + 1, 15) = 5. Done.
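The whole worked example can be replayed in numpy: build U, compute the ideal (noiseless) PE readout distribution for 6 ancilla qubits, and hand a measured fraction to Python's `Fraction` for the continued-fraction step. A simulation sketch, not a hardware implementation:

```python
import numpy as np
from fractions import Fraction

a, N, n = 7, 15, 6
M = 2 ** n

# U|y> = |7y mod 15> as a permutation matrix; data register starts in |1>
U = np.zeros((N, N), dtype=complex)
for y in range(N):
    U[(a * y) % N, y] = 1.0
psi = np.zeros(N, dtype=complex)
psi[1] = 1.0

# Ideal PE: joint amplitude of ancilla outcome k and data state x is
# (1/M) * sum_m exp(-2*pi*i*k*m/M) * (U^m psi)[x]
states = np.empty((M, N), dtype=complex)
v = psi.copy()
for m in range(M):
    states[m] = v
    v = U @ v
kernel = np.exp(-2j * np.pi * np.outer(np.arange(M), np.arange(M)) / M) / M
probs = (np.abs(kernel @ states) ** 2).sum(axis=1)

peaks = np.flatnonzero(probs > 1e-6)        # the outcomes 0, 16, 32, 48
# Suppose the measurement returned k = 48: continued fractions give s/r
r = Fraction(48, M).limit_denominator(N).denominator
assert r == 4 and pow(a, r, N) == 1
```

`Fraction.limit_denominator` performs exactly the continued-fraction search the classical post-processor needs: the best rational approximation to 48/64 with denominator at most N.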

[Figure: Order-finding circuit — PE on modular multiplication. Two registers: n ancilla qubits start in |0⟩, are Hadamarded, and control successive powers of U (up to U^{2^{n-1}}) acting on the data register, which starts in |1⟩. An inverse QFT on the ancilla register precedes measurement.]
The order-finding circuit. The ancilla register of $n$ qubits (top) is Hadamarded into uniform superposition, then used to control successively higher powers of $U$ on the data register. The inverse QFT at the end converts the phase pattern into a measurable bit string proportional to $s/r$.

What this shows. QPE does not "directly compute the order" — it computes an eigenphase, which happens to encode the order. The classical post-processor (continued fractions) turns the phase into the integer answer. This is the typical pattern for all PE applications: PE extracts a phase, and a classical subroutine converts the phase into what you really wanted.

Shor's algorithm as a whole is covered in the dedicated Shor's algorithm chapter; the full picture adds the modular-exponentiation circuit (how U^{2^k} is built efficiently) and the full analysis of success probability. What you take away here: Shor's algorithm is phase estimation applied to a very specific unitary, and the eigenvalue is the answer.

Application 2 — HHL, the linear-system solver

Solving a linear system A\mathbf{x} = \mathbf{b} is the workhorse of every computational field — engineering, economics, machine learning, fluid dynamics. Classically, Gaussian elimination takes O(N^3) operations on a dense N \times N matrix; conjugate-gradient methods on sparse well-conditioned matrices take O(N\sqrt{\kappa}) where \kappa is the condition number.

In 2009, Harrow, Hassidim, and Lloyd published a quantum algorithm — HHL — that produces the quantum state |\mathbf{x}\rangle proportional to A^{-1}|\mathbf{b}\rangle in O(\text{poly}(\log N, \kappa, 1/\varepsilon)) time. When \kappa and 1/\varepsilon are polylogarithmic in N, this is an exponential speedup over classical methods.

The trick. If you know the eigen-decomposition of A,

A \;=\; \sum_j \lambda_j |u_j\rangle\langle u_j|,

then solving A\mathbf{x} = \mathbf{b} is easy: expand |\mathbf{b}\rangle = \sum_j \beta_j |u_j\rangle in the eigenbasis, divide each component by \lambda_j, and reassemble:

|\mathbf{x}\rangle \;=\; \sum_j \frac{\beta_j}{\lambda_j}|u_j\rangle.

So the whole problem reduces to knowing the eigenvalues \lambda_j of A and being able to invert them. Which is what PE gives you — on the unitary U = e^{iAt} for a well-chosen time t. The eigenvalues of U are e^{i\lambda_j t}; QPE extracts \lambda_j into an ancilla register.
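The eigenbasis arithmetic is exactly what a classical solver would do if it already knew the spectrum. A numpy sketch of the "divide by \lambda_j in the eigenbasis" step (the quantum version performs this coherently, without ever writing the vectors down):

```python
import numpy as np

# Solve A x = b by inverting eigenvalues in the eigenbasis of a Hermitian A:
# the classical shadow of what HHL does with |b> and QPE on e^{iAt}
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
A = (A + A.T) / 2                 # random symmetric (Hermitian) test matrix
b = rng.normal(size=4)

lam, Uv = np.linalg.eigh(A)       # A = sum_j lam_j |u_j><u_j|
beta = Uv.T @ b                   # components of b in the eigenbasis
x = Uv @ (beta / lam)             # divide each component by its eigenvalue

assert np.allclose(A @ x, b)      # same answer as Gaussian elimination
```

Classically this costs a full diagonalisation; the quantum advantage comes from extracting each \lambda_j into an ancilla register with QPE instead.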

The pipeline, end to end.

  1. Prepare |\mathbf{b}\rangle. The vector \mathbf{b} must be loaded into the quantum state (either by a cheap circuit or, in principle, from a quantum random-access memory).
  2. Run PE on U = e^{iAt} with |\mathbf{b}\rangle on the data register. The output is

\sum_j \beta_j |\lambda_j\rangle|u_j\rangle,

where |\lambda_j\rangle is the binary expansion of \lambda_j on the ancilla register.
  3. Perform eigenvalue inversion. Use a controlled rotation to produce, on a new flag qubit, an amplitude proportional to 1/\lambda_j. The state now reads

\sum_j \beta_j |\lambda_j\rangle|u_j\rangle\Bigl(\sqrt{1 - C^2/\lambda_j^2}\,|0\rangle + \frac{C}{\lambda_j}|1\rangle\Bigr)

for some constant C.
  4. Uncompute the PE. Apply PE in reverse to clear the |\lambda_j\rangle ancilla, leaving

\sum_j \beta_j |u_j\rangle\Bigl(\sqrt{1 - C^2/\lambda_j^2}\,|0\rangle + \frac{C}{\lambda_j}|1\rangle\Bigr).

  5. Measure the flag qubit and post-select on |1\rangle. Conditional on that outcome, the data register is in the desired state |\mathbf{x}\rangle \propto \sum_j (\beta_j/\lambda_j)|u_j\rangle, up to normalisation.
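The inversion rotation and post-selection can also be mimicked on a classical statevector. The sketch below (my own illustration) tracks the flag-qubit amplitudes per eigencomponent and checks that post-selecting on |1⟩ leaves the normalised solution state; C is chosen as min_j |\lambda_j| so the \sqrt{1 - C^2/\lambda^2} branch stays real:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
A = (A + A.T) / 2                          # toy Hermitian A
b = rng.normal(size=4)
b /= np.linalg.norm(b)                     # |b> must be a unit vector

lam, Uv = np.linalg.eigh(A)
beta = Uv.T @ b                            # beta_j = <u_j|b>
C = np.abs(lam).min()                      # keeps sqrt(1 - C^2/lam^2) real

amp1 = beta * (C / lam)                    # flag |1> branch, per component
p_success = np.sum(amp1 ** 2)              # probability the flag reads 1
x_state = Uv @ amp1 / np.sqrt(p_success)   # data register after post-selection

x_exact = np.linalg.solve(A, b)
assert np.allclose(x_state, x_exact / np.linalg.norm(x_exact))
```

The success probability falls with the condition number (small eigenvalues force a small C), which is one place the \kappa dependence of HHL's runtime comes from.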
[Figure: The HHL pipeline. Five boxes, left to right: load |b⟩ (input) → PE on e^{iAt} (extract λ_j) → controlled rotation by C/λ (eigenvalue inversion) → inverse PE (uncompute) → post-select, output |x⟩. The output x ∝ A^{-1}b is a quantum state, not a classical vector; total gates poly(log N, κ, 1/ε), exponentially faster than classical when κ is small.]
The HHL pipeline. Phase estimation sits at the heart, twice: once to extract the eigenvalues of $A$ into an ancilla, and once (run in reverse) to clear that ancilla after the inversion rotation is done.

The caveats — there are many. HHL's exponential speedup is real but narrowly conditional: the condition number \kappa must stay small, the input \mathbf{b} must be cheap to load, the output |\mathbf{x}\rangle is a quantum state you cannot simply read out entry by entry, and "dequantised" classical rivals exist under matching input assumptions. Each of these is unpacked in the Going deeper section.

In practice, HHL is useful when you want a global property of \mathbf{x} (a summary statistic, a mean, an expectation value with some operator), not when you want to list every component. It is the engine behind several proposed quantum machine-learning algorithms — though the community has become more cautious after the "dequantisation" results of Tang and others, which showed that some HHL-based ML algorithms have classical counterparts that run nearly as fast under the same input assumptions.

Application 3 — quantum chemistry, the ground-state energy

The fundamental problem of quantum chemistry is this: given a molecular Hamiltonian H (a Hermitian operator encoding the kinetic energy of the electrons and nuclei and the Coulomb interactions between them), compute the ground-state energy — the lowest eigenvalue E_0.

Classically, this is hard. The Hilbert space of n electrons scales as 2^{O(n)}, and brute-force diagonalisation quickly becomes intractable. Approximate methods (density functional theory, coupled cluster, configuration interaction) are the workhorses of computational chemistry, but they have well-known limits — especially for systems with strong electron correlation such as transition-metal catalysts and biological enzymes.

The idea. Energies are eigenvalues of H. The unitary U = e^{-iHt} has eigenvalues e^{-iE_j t}. So QPE on U extracts E_j in binary, provided you can prepare an eigenstate of H.

The catch — the eigenstate. Preparing the exact ground state is, in general, as hard as computing the ground-state energy. But here is a useful loophole: if you can prepare a state |\phi\rangle that has non-trivial overlap with the true ground state |\psi_0\rangle, then |\phi\rangle expands as

|\phi\rangle \;=\; c_0|\psi_0\rangle + c_1|\psi_1\rangle + c_2|\psi_2\rangle + \ldots

and running QPE on |\phi\rangle produces a superposition of energy readouts on the ancilla, weighted by |c_j|^2. With probability |c_0|^2 the measurement lands on E_0.
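A toy numeric illustration of that weighting (a made-up four-level spectrum and trial-state overlaps, not any real molecule): with |c_0|^2 around 0.64, roughly one QPE run in 1.6 reads out E_0.

```python
import numpy as np

# Hypothetical spectrum and trial-state overlaps -- illustrative only
E = np.array([-1.17, -0.52, 0.08, 0.61])   # energy levels E_0 < E_1 < ...
c = np.array([0.80, 0.52, 0.28, 0.11])     # overlaps of |phi> with |psi_j>
c = c / np.linalg.norm(c)                  # normalise the trial state

p = np.abs(c) ** 2                # ideal QPE reads E_j with probability |c_j|^2
runs_to_ground = 1 / p[0]         # expected repetitions until E_0 appears
```

Repetition is cheap as long as the overlap does not vanish; the failure mode is an exponentially small |c_0|^2, which no amount of QPE precision can rescue.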

Hartree–Fock and coupled-cluster states (the classical approximations) usually have good overlap with the true ground state of small molecules — enough to make QPE-based energy extraction the gold-standard approach for a fault-tolerant quantum computer.

Why this is the killer app. Simulating quantum chemistry may be the single most practically valuable thing a fault-tolerant quantum computer will do. Catalyst design, nitrogen fixation (half the world's food supply passes through the Haber–Bosch process, currently responsible for about 2% of global energy use), drug discovery, and battery chemistry all hinge on accurate ground-state energies. The National Quantum Mission in India has explicitly named quantum chemistry as a target area, with research groups at TIFR Mumbai, IISc Bangalore, and the IITs working on both near-term variational methods and long-term QPE-based approaches.

Near-term (NISQ) vs long-term (fault-tolerant). Today's noisy machines cannot run the deep QPE circuit needed for chemistry-relevant accuracy — a single run typically requires a long coherent chain of controlled time-evolution unitaries, and the noise compounds. Instead, the NISQ-era champion is the variational quantum eigensolver (VQE), which uses a classical optimiser to tune parameters of a short quantum circuit so that the prepared state approximates the ground state. VQE trades circuit depth for the cost of many function evaluations and does not give rigorous phase-estimation precision — but it can run on 50-qubit NISQ machines.

The long-term vision is QPE. Once fault-tolerant machines with millions of qubits exist, QPE on molecular Hamiltonians is what turns quantum computing into a working chemistry tool. Every estimate of "how many qubits for useful quantum chemistry" you see in a news article is ultimately a calculation of how big a QPE circuit can be run before the error correction overhead eats the benefit.

Application 4 — amplitude estimation

Grover's search algorithm finds a marked element in an unstructured database of N items using O(\sqrt{N}) queries, a quadratic speedup over classical. Brassard, Høyer, Mosca, and Tapp (2000) discovered a more general result: if you have any quantum circuit A that prepares a state of the form

A|0\rangle \;=\; \sin\theta\,|{\rm good}\rangle + \cos\theta\,|{\rm bad}\rangle,

you can estimate the probability p = \sin^2\theta of landing on a "good" outcome with quadratic speedup over classical Monte Carlo. This is amplitude estimation.

The unitary. Build the Grover operator

Q \;=\; -A\,S_0\,A^{-1}\,S_{\rm good}

where S_0 flips the sign of the |0\rangle state and S_{\rm good} flips the sign of good outcomes. This is the standard Grover-iteration operator generalised beyond search.

The eigenvalues of Q are e^{\pm 2i\theta} — their phase is exactly twice the amplitude angle. So QPE on Q, with the initial state A|0\rangle (which is a superposition of Q's two eigenstates), extracts 2\theta to n bits of precision.

From \theta, the probability p = \sin^2\theta follows.

The speedup. Classical Monte Carlo for estimating p requires O(1/\varepsilon^2) samples to reach precision \varepsilon. Amplitude estimation achieves the same precision with O(1/\varepsilon) queries to A — a quadratic speedup.

Example 2: amplitude estimation for a biased coin

Suppose you have a quantum circuit A that prepares

A|0\rangle \;=\; \sqrt{p}\,|1\rangle + \sqrt{1-p}\,|0\rangle,

with p unknown — a "quantum biased coin". You want to estimate p.

Step 1. Compare the classical approach. Sampling the circuit R times and counting "1" outcomes gives an estimate \hat{p} with standard error \sqrt{p(1-p)/R}. Reaching precision \varepsilon = 10^{-3} requires R \sim 10^6 samples. Why standard Monte Carlo scales as 1/\sqrt{R}: the central limit theorem. Averaging R independent Bernoulli samples produces a normal distribution whose width shrinks as 1/\sqrt{R}. Halving the error takes four times as many samples.

Step 2. Define the Grover operator Q = -A\,S_0\,A^{-1}\,S_1, where S_1 flips the sign of the |1\rangle state and S_0 flips the sign of the |0\rangle state. The eigenvalues of Q are e^{\pm 2i\theta} where \sin\theta = \sqrt{p}.

Step 3. Run QPE on Q, with the initial state A|0\rangle, using n ancilla qubits. The measurement outcome gives 2\theta to precision 2^{-n}, which translates to precision \varepsilon \sim 2^{-n} for p. Why the precision on p is linear in \theta's precision: p = \sin^2\theta, so dp/d\theta = \sin(2\theta), bounded by 1. A small error in \theta produces a proportionally small error in p.

Step 4. Count queries to A. QPE makes O(2^n) applications of Q, and each application of Q makes O(1) applications of A. Total: O(2^n) = O(1/\varepsilon).

Result. To reach \varepsilon = 10^{-3}, amplitude estimation uses about 10^3 queries to A, compared with 10^6 for classical Monte Carlo. A thousand-fold reduction in circuit invocations.
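The example checks out on a 2×2 statevector: build A as a rotation, form Q, confirm the eigenphases are \pm 2\theta, and round the phase to n bits to recover p. A sketch under the conventions above (good state = |1\rangle):

```python
import numpy as np

p = 0.3                                     # the coin bias to be estimated
theta = np.arcsin(np.sqrt(p))

A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # A|0> = sqrt(1-p)|0> + sqrt(p)|1>
S0 = np.diag([-1.0, 1.0])                   # sign flip on |0>
S1 = np.diag([1.0, -1.0])                   # sign flip on the good state |1>
Q = -A @ S0 @ A.T @ S1                      # Grover operator

# Q is a rotation by 2*theta, so its eigenphases are exactly +-2*theta
phases = np.sort(np.angle(np.linalg.eigvals(Q)))
assert np.allclose(phases, [-2 * theta, 2 * theta])

# Ideal n-bit PE rounds 2*theta/(2*pi) to the nearest multiple of 2^-n
n = 7
phi_hat = round(2 * theta / (2 * np.pi) * 2**n) / 2**n
p_hat = np.sin(np.pi * phi_hat) ** 2
assert abs(p_hat - p) < np.pi * 2**-n       # error ~ 2^-n per query budget 2^n
```

The final assertion is the whole point: the error in p shrinks linearly with the phase grid spacing 2^{-n}, while the query count grows as 2^n, giving the O(1/\varepsilon) scaling.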

[Figure: Amplitude estimation circuit. The top register of n ancilla qubits is Hadamarded and controls powers of the Grover operator Q (up to Q^{2^{n-1}}) on a data register prepared in A|0⟩; an inverse QFT on the ancilla and measurement return 2θ to precision 2^{-n}, with probability p = sin²θ.]
Amplitude estimation is QPE with $U = Q$, the Grover operator built from $A$. The structure is identical to Shor's order-finding circuit — only the unitary has changed.

What this shows. Amplitude estimation is the general version of Grover search. Where Grover finds a marked element in O(\sqrt{N}) queries, amplitude estimation counts (or measures the fraction of) marked elements with the same quadratic speedup. Both are phase estimation on the Grover operator; only the initial state and the classical post-processing differ.

Where amplitude estimation shows up in the wild: Monte Carlo speedup for financial option pricing, portfolio risk estimation, and insurance payoffs. Classical Monte Carlo dominates quantitative finance — every derivative pricing engine at every investment bank runs millions of Monte Carlo paths per evaluation. A quadratic speedup on a process that takes hours translates to minutes, and early proposals (for instance, several papers from IBM Quantum in 2019–2021) have benchmarked amplitude-estimation-based pricers against the classical baseline. The gap is not yet a fault-tolerant reality, but the algorithmic path is clear.

Application 5 — quantum counting

Quantum counting is the specialisation of amplitude estimation to the case where A is the uniform superposition and "good" is "marked element in a Grover oracle." Concretely: given a search oracle that marks M out of N items, count M without having to find every solution.

The amplitude \sin\theta = \sqrt{M/N} in this case. Amplitude estimation returns \theta, from which M = N\sin^2\theta follows. The number of queries is O(\sqrt{N/\varepsilon}) — faster than either classical counting (\Theta(N)) or naive sampling (O(1/\varepsilon^2)).
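A quick numerical check of the counting relation \sin\theta = \sqrt{M/N}, using a hypothetical oracle marking 5 of 16 items (the eigenphase is read off exactly here, where real QPE would deliver it to n bits):

```python
import numpy as np

N, marked = 16, [2, 5, 7, 11, 13]           # toy oracle marking M = 5 items
M = len(marked)

psi = np.ones(N) / np.sqrt(N)               # uniform superposition A|0>
Sg = np.eye(N)
Sg[marked, marked] = -1.0                   # oracle sign flip on marked items
Q = (2 * np.outer(psi, psi) - np.eye(N)) @ Sg   # Grover operator

# The smallest positive eigenphase of Q is 2*theta, sin(theta) = sqrt(M/N)
# (the other eigenphases are 0 and pi, from the subspaces Q leaves alone)
ph = np.angle(np.linalg.eigvals(Q))
two_theta = min(x for x in ph if x > 1e-6)
M_est = N * np.sin(two_theta / 2) ** 2
assert abs(M_est - M) < 1e-6                # the count, recovered from a phase
```

Counting an oracle's solutions has become reading one eigenphase of the Grover operator built from it.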

Quantum counting matters because it lets you answer "how many solutions does this Boolean formula have?" or "how many strings in this set satisfy this property?" — #P questions — with a quadratic speedup. This is a different kind of speedup than Shor's (which is believed super-polynomial) and a different flavour than Grover's pure search; it shows that phase estimation's utility spans every regime between classical and quantum-exponential.

Iterative phase estimation — PE for the NISQ era

Standard QPE uses n ancilla qubits to read the phase \varphi as an n-bit string. On a NISQ device with 50–100 qubits and high error rates, reserving n ancillas for precision is expensive — and the controlled-U^{2^k} chain is long and noisy.

Iterative phase estimation (Kitaev's trick, later refined by many authors) reads the phase one bit at a time, using only one ancilla qubit. The protocol measures \varphi's least-significant bit first, feeds that classical bit into a phase correction on the next iteration, and extracts the next bit. After n iterations you have \varphi to n bits, using only one ancilla and exchanging space for time and classical feedback.

This is the workhorse version of PE on today's hardware, and it is the form most often reported in experimental papers implementing QPE-based chemistry or HHL-style linear-system demonstrations. The total quantum work is the same O(2^n) applications of U, but the qubit overhead collapses to 1.
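The bit-by-bit protocol is short enough to state in code. Below is a noiseless classical simulation (my own sketch, assuming the phase has an exact n-bit binary expansion so every measurement outcome is deterministic; on hardware each round measures one real ancilla):

```python
def iterative_pe(phi, n):
    """Iterative (Kitaev-style) PE: one ancilla, bits least-significant first.

    Noiseless simulation for a phase phi = 0.b1 b2 ... bn in binary. Each
    measured bit feeds the phase correction of the next round -- the
    classical-feedback step that trades ancilla count for wall-clock time.
    """
    tail = 0.0                        # 0.b_{k+1}...b_n, the bits found so far
    bits = [0] * (n + 1)              # 1-indexed: bits[k] holds b_k
    for k in range(n, 0, -1):
        # controlled-U^(2^(k-1)) kicks phase 2^(k-1)*phi onto the ancilla;
        # subtracting the known tail leaves 0.b_k, i.e. exactly 0 or 1/2
        rel = (2 ** (k - 1) * phi - tail / 2) % 1.0
        bits[k] = int(round(2 * rel)) % 2     # deterministic X-basis outcome
        tail = bits[k] / 2 + tail / 2         # prepend: 0.b_k b_{k+1}...b_n
    return sum(bits[k] * 2.0 ** -k for k in range(1, n + 1))

print(iterative_pe(13 / 16, 4))       # recovers 0.8125 bit by bit
```

Each round uses the same total power of U as the corresponding ancilla in standard QPE, so the quantum work is unchanged; only the qubit count collapses to one.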

Going deeper

If you are here for the big picture — that PE is a subroutine, and Shor's, HHL, chemistry, amplitude estimation, and counting are all applications of it — you have it. The rest of this section digs into caveats, trade-offs, and the ecosystem around each application.

The HHL caveats in detail

The HHL exponential speedup is one of the most-quoted quantum-algorithm results, and also one of the most-misrepresented. Each of the four caveats deserves a careful look.

Condition number. The runtime is O(\text{poly}(\log N, \kappa, 1/\varepsilon)) where \kappa is the ratio of largest to smallest eigenvalue of A. When \kappa is very small (e.g., \kappa = O(\text{polylog}\,N)), the speedup is exponential. When \kappa is \Theta(N), HHL becomes no better than classical. Many real-world matrices — especially those from discretised PDEs — have \kappa that scales polynomially with N, eating into the speedup. Improved variants (Childs–Kothari–Somma 2017, Ambainis 2012) reduce the \kappa dependence.

Input preparation. Loading a classical N-dimensional vector \mathbf{b} into a quantum state takes \Theta(N) operations in the worst case. If \mathbf{b} has structure — for instance, it is the output of another quantum subroutine, or it admits a QRAM (quantum random-access memory) with O(\log N) access — then preparation can be fast. QRAM is a hardware assumption that has not been realised at scale; most HHL implementations on real hardware restrict to toy problems where \mathbf{b} is hand-crafted.

Output readout. The output of HHL is |\mathbf{x}\rangle, a quantum state. To read all N entries of \mathbf{x} you would need \Omega(N) measurements, which kills the speedup. Useful applications read a summary statistic \langle\mathbf{x}|M|\mathbf{x}\rangle for some observable M (mean, variance, overlap with a reference vector, support size). Quantum ML applications often use HHL as an inner step that produces a state, then further processes the state quantum-mechanically before any measurement.

Dequantisation. Ewin Tang and collaborators (2018 onwards) showed that several HHL-based ML algorithms have classical counterparts — under the same strong input assumptions (sampling access to rows of A, norm queries on \mathbf{b}) — that match HHL's complexity up to polynomial factors. The upshot: HHL's exponential speedup over unrestricted classical algorithms is real, but the correct comparison for a fair benchmark is a classical algorithm with the same input model, and in many ML settings that classical algorithm is only polynomially slower.

Quantum chemistry — the encoding zoo

The chemistry Hamiltonian is a fermionic operator, and qubits are not naturally fermionic. A qubit encoding is needed. Three standards: Jordan–Wigner (orbital occupations map directly onto qubits, at the cost of long Pauli strings), parity (stores running parity sums instead of occupations), and Bravyi–Kitaev (a tree-structured hybrid whose operators touch only O(\log n) qubits).

For a molecule with n spin-orbitals, the Hamiltonian has O(n^4) terms (two-body Coulomb interactions). Simulating one time step e^{-iH \delta t} via Trotter decomposition requires O(n^4) gates per step, and QPE requires many steps for precision. End-to-end cost for chemistry-relevant accuracy (chemical precision, about 1 kcal/mol) is dominated by the simulation depth — typically 10^9–10^{12} gates for a medium-sized system such as the FeMo cofactor.

Amplitude estimation in finance

Monte Carlo pricing of a European call option values the expected payoff \mathbb{E}[\max(S_T - K, 0)] over the risk-neutral distribution of the underlying asset S_T. Classical implementations draw a billion sample paths and average. Amplitude estimation replaces the averaging with a QPE on the Grover operator built from the payoff circuit, achieving O(1/\varepsilon) queries to the payoff circuit instead of O(1/\varepsilon^2).
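For concreteness, here is the classical baseline being sped up: Monte Carlo pricing of a European call under geometric Brownian motion. The contract parameters are illustrative, not taken from any cited benchmark.

```python
import numpy as np

S0, K, r, sigma, T = 100.0, 105.0, 0.01, 0.2, 1.0   # illustrative contract
rng = np.random.default_rng(42)

def mc_call_price(n_paths):
    """Average discounted payoff max(S_T - K, 0) over GBM sample paths."""
    z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

# Standard error shrinks like 1/sqrt(n_paths): the O(1/eps^2) sampling cost
# that amplitude estimation replaces with O(1/eps) queries to a payoff circuit
price = mc_call_price(1_000_000)
```

Amplitude estimation keeps the payoff model but replaces the averaging loop: the payoff computation becomes the circuit A, and QPE on the Grover operator built from A reads out the expectation.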

The catch: the payoff circuit must be implemented on a quantum computer. Building a quantum circuit for the Black–Scholes price path requires arithmetic circuits that are themselves deep. For a contract valued at several basis points of precision, the total gate count is on the order of 10^6–10^7 — beyond NISQ but within reach of early fault-tolerant machines.

Iterative PE — the fast-feedback assumption

Iterative PE needs fast classical feedback: the measurement result at iteration k must be available before iteration k+1 runs, to adjust the phase correction. This is easy on a simulator but harder on real hardware — the classical control loop must complete within the qubit coherence time. Most NISQ platforms (superconducting, trapped-ion) now support this, with round-trip times of order 1 microsecond, comfortably inside coherence windows of hundreds of microseconds to seconds.

India's National Quantum Mission — the chemistry play

Of the eight years and ₹6000 crore of India's National Quantum Mission, a significant share is earmarked for quantum chemistry and materials applications. The reasoning is pragmatic: India's pharmaceutical industry (the largest generic drug manufacturer in the world) and its emerging catalysis research at IISc Bangalore and the National Chemical Laboratory, Pune, stand to gain directly from quantum-chemistry capability. The long-term target is fault-tolerant QPE-based chemistry; the near-term target is hybrid classical-quantum methods (VQE and variants) running on IBM Quantum, IonQ, or Quantinuum hardware accessed over the cloud.

References

  1. Aram W. Harrow, Avinatan Hassidim, Seth Lloyd, Quantum Algorithm for Linear Systems of Equations (2009) — arXiv:0811.3171. The original HHL paper.
  2. Gilles Brassard, Peter Høyer, Michele Mosca, Alain Tapp, Quantum Amplitude Amplification and Estimation (2000) — arXiv:quant-ph/0005055. Amplitude estimation and counting.
  3. Wikipedia, Quantum phase estimation algorithm — overview and applications.
  4. John Preskill, Lecture Notes on Quantum Computation, Chapter 6–7 — phase estimation and its applications — theory.caltech.edu/~preskill/ph229.
  5. Andrew M. Childs, Robin Kothari, Rolando Somma, Quantum algorithm for systems of linear equations with exponentially improved dependence on precision (2017) — arXiv:1511.02306. Improved HHL.
  6. Sam McArdle, Suguru Endo, Alán Aspuru-Guzik, Simon Benjamin, Xiao Yuan, Quantum computational chemistry (2020) — arXiv:1808.10402. Review of QPE, VQE, and encodings for chemistry.