In short
Feynman's 1982 paper [1] argued: classical computers need 2^n complex numbers to simulate n qubits; a quantum computer holds 2^n amplitudes in n physical qubits natively; therefore build a quantum simulator to simulate quantum systems. Return to that argument now, armed with the full quantum-computing toolkit, and the modern version has three concrete tasks: (1) Apply e^{-iHt} to a prepared state — Hamiltonian simulation, covered in chapters 131–136. (2) Extract the ground-state energy E_0 of a Hamiltonian H — via phase estimation or VQE. This is the "molecular energy" number chemists want. (3) Sample from thermal / Gibbs states e^{-\beta H}/Z — the quantum generalisation of Monte Carlo. These are what "useful quantum advantage" looks like in the near term. Not breaking RSA — that needs fault-tolerant machines with millions of physical qubits, probably a decade away. Quantum chemistry — the binding energy of a drug candidate, the catalytic pathway of the FeMoco cluster, the phase diagram of a high-T_c superconductor — is what 100–1000-logical-qubit machines get right first. The hype-check is simple: useful quantum simulation means a specific molecule or material whose properties classical methods cannot compute and a quantum algorithm can. There are few confirmed examples today. There will be more by the time the National Quantum Mission reaches its 2031 targets.
In chapter 2 you met Feynman's argument in its raw form: n qubits need 2^n complex amplitudes to describe, classical computers run out of memory around n=50, and the simulator should be built of the same stuff it is simulating. That was the opening of the track. You had no idea, then, what a qubit could do — no Hadamard, no entanglement, no phase estimation, no error correction, no complexity class named QMA. The argument was beautiful but abstract.
Now you have all of that. You know how to encode an orbital as a qubit. You know how to apply a controlled rotation. You know phase estimation extracts eigenvalues of a unitary, and you know e^{-iHt} is a unitary for any Hermitian H. So go back to Feynman's paper with fresh eyes. What does "simulate a quantum system" concretely mean, now that you know what a quantum computer concretely is? What are the problems chemists actually want solved, and which of them does the quantum toolkit actually touch?
This chapter is that return visit. It is short on new machinery — you have the machinery — and long on framing. By the end, you will be able to name the three tasks of quantum simulation, point at the specific applications where a 1000-logical-qubit machine would beat every supercomputer on Earth, and distinguish the honest promise from the press release.
What Feynman actually proposed — the 1982 checklist
Feynman's paper [1] is fourteen pages, has no theorems, and contains three concrete claims. They are worth restating in modern language, because everything that followed — Lloyd 1996, Abrams–Lloyd 1999, Aspuru-Guzik et al. 2005, every VQE paper since 2014 — is an elaboration of one of these three.
Claim 1 — classical simulation is exponentially expensive. The state of n two-level quantum systems is a vector of 2^n complex amplitudes. To track time evolution exactly, you store the whole vector and apply a 2^n \times 2^n unitary to it on every step. At n = 50 this state vector weighs around 18 petabytes. At n = 60, a thousand times that, well beyond what any single machine will ever hold. The cost is structural, not engineering.
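Claim 1 is easy to check by arithmetic. A minimal Python sketch (assuming 16 bytes per complex128 amplitude):

```python
# Memory needed to store the full state vector of n qubits,
# at 16 bytes (one complex128) per amplitude.
def statevector_bytes(n: int) -> int:
    return 16 * 2**n

for n in (30, 40, 50, 60):
    b = statevector_bytes(n)
    print(f"n={n}: {b:.3e} bytes  (~{b / 2**50:.1f} PiB)")
```

Each extra qubit doubles the cost: n = 30 fits in a laptop's RAM (16 GiB), n = 50 is already petabyte scale.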
Claim 2 — a quantum machine holds 2^n amplitudes for free. Put n physical qubits on a chip. Their joint Hilbert space is 2^n-dimensional by construction. No classical storage of the vector is needed; the vector is the physical state. Gates applied to the hardware evolve the amplitudes at the rate the hardware runs.
Claim 3 — therefore, build quantum machines to simulate quantum physics. This is the proposal. Not "to break ciphers," not "to solve optimisation," not "to train neural networks." To simulate the thing whose mathematical structure is already sitting in the hardware.
Notice what is not on this list. Feynman did not propose a way to factor integers. He did not propose a way to search an unsorted list. He did not propose Grover, or Shor, or teleportation. Those applications came later, and they came precisely because the 1982 paper had opened a door: once you accept that quantum hardware can be a computational resource, you can ask what else such hardware might be good at. The "else" turned out to include number theory and search. But the original target was physics, and the original target is what this chapter is about.
The three tasks of quantum simulation
Modern quantum-simulation research decomposes Feynman's Claim 3 into three specific computational tasks. Every paper you will read on the subject — every VQE benchmark, every Hamiltonian-simulation lower bound, every chemistry startup's marketing deck — is doing one of these three. Learn to spot them and the literature becomes navigable.
Task 1: time evolution — apply e^{-iHt}
The Schrödinger equation says a quantum state |\psi(t)\rangle evolves in time under a Hamiltonian H according to

i\hbar \frac{d}{dt}|\psi(t)\rangle = H|\psi(t)\rangle, \qquad \text{so} \qquad |\psi(t)\rangle = e^{-iHt/\hbar}|\psi(0)\rangle.

Here, \hbar is just a units constant; quantum-computing papers absorb it into H and write e^{-iHt}. The operator e^{-iHt} is unitary (because H is Hermitian), so in principle you can implement it as a quantum circuit.

Why this is the centre of physics: H encodes all the interactions of the system — the kinetic energy, the Coulomb forces between electrons, the magnetic couplings between spins. Knowing |\psi(t)\rangle at all later t is the "everything else is commentary" version of simulation.

The practical question: how many gates does the circuit cost, as a function of n, t, and the desired precision \epsilon?
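A quick numerical sanity check of the unitarity claim, as a numpy sketch on a random 2-qubit Hermitian (not a real molecular Hamiltonian):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random Hermitian H on 2 qubits (4x4): A + A^dagger is always Hermitian.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = A + A.conj().T

def evolve(H, t):
    """U = exp(-i H t) for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

U = evolve(H, t=0.7)
# Unitarity: U^dagger U = I up to floating-point error.
print(np.allclose(U.conj().T @ U, np.eye(4)))  # True
```

The eigendecomposition trick works because H Hermitian means H = V diag(w) V^\dagger with real w, so e^{-iHt} = V diag(e^{-iwt}) V^\dagger.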
- For general (arbitrary) H, no efficient quantum algorithm is believed to exist; fast simulation of arbitrary Hamiltonians would violate basic complexity assumptions.
- For local Hamiltonians (chapter 130) — a sum H = \sum_i h_i where each h_i acts on a few qubits — Trotter-Suzuki gives cost O(\text{poly}(n) \cdot t / \epsilon^{1/k}) for various k, which is polynomial in everything.
- Newer methods (linear combinations of unitaries, quantum signal processing, qubitisation) improve the t-dependence to near-optimal.
Chapters 131 and 132 do the detailed cost analysis. For now, the headline: time evolution of any physically reasonable Hamiltonian is a polynomial-cost quantum task, and an exponential-cost classical task.
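The Trotter idea behind that headline can be seen numerically. A toy sketch (numpy assumed) splits the one-qubit Hamiltonian H = X + Z and compares the first-order product formula to the exact evolution:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def expm_i(H, t):
    """exp(-i H t) for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

t = 1.0
exact = expm_i(X + Z, t)

errors = []
for r in (1, 10, 100):
    step = expm_i(X, t / r) @ expm_i(Z, t / r)   # one first-order Trotter step
    trotter = np.linalg.matrix_power(step, r)
    errors.append(np.linalg.norm(trotter - exact, 2))
    print(f"r={r:>3}: error {errors[-1]:.2e}")
```

The error shrinks roughly as 1/r, the first-order Trotter scaling; X and Z do not commute, which is exactly why r = 1 is not exact.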
Task 2: ground-state energy — find E_0
Every real physics question starts by asking: what is the ground state of this Hamiltonian — the lowest-energy quantum state, the one the system sits in at zero temperature? And what is its energy E_0?
For a molecule, the ground-state energy is the binding energy — how strongly the atoms hold together, whether a proposed drug binds to a receptor, how much energy is released in a reaction. For a material, the ground state tells you whether it is a metal, an insulator, a magnet, or a superconductor. The whole edifice of chemistry is built on computing E_0 for various H.
The quantum toolkit offers two routes:
Route A: Phase estimation. If you can prepare any state |\phi\rangle with non-negligible overlap with the true ground state |\psi_0\rangle, and apply e^{-iHt}, phase estimation (chapters 73–76) extracts the eigenvalue E_0 with precision \epsilon in time O(1/\epsilon). This is the exact route — fault-tolerant, high-precision, expensive in gate count.
Route B: Variational Quantum Eigensolver (VQE). Parameterise a trial state |\psi(\vec\theta)\rangle as a short quantum circuit with classical parameters \vec\theta. Measure \langle\psi(\vec\theta)|H|\psi(\vec\theta)\rangle. Use a classical optimiser to minimise the energy over \vec\theta. The output is an approximation to E_0 — the quality depends on how expressive the ansatz is. This is the NISQ-friendly route — short circuits, classical outer loop, approximate.
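A minimal classical emulation of the VQE loop, on a hypothetical one-qubit Hamiltonian H = Z + 0.5X with a single-parameter Ry ansatz (a sketch of the structure, not a chemistry-grade ansatz):

```python
import numpy as np

# Toy problem: H = Z + 0.5 X on one qubit (a stand-in for a molecular H).
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)
H = Z + 0.5 * X

def ansatz(theta):
    """|psi(theta)> = Ry(theta)|0> = (cos(theta/2), sin(theta/2))."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    psi = ansatz(theta)
    return psi @ H @ psi          # <psi|H|psi>, real since H is real symmetric

# "Classical optimiser": a coarse grid scan standing in for gradient descent.
thetas = np.linspace(0, 2 * np.pi, 2001)
vqe_energy = min(energy(th) for th in thetas)

exact = np.linalg.eigvalsh(H)[0]  # exact ground-state energy, -sqrt(1.25)
print(f"VQE estimate {vqe_energy:.5f} vs exact {exact:.5f}")
```

On hardware, energy(theta) would be estimated from repeated measurements of each Pauli term rather than computed exactly; the outer minimisation loop is the same.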
Why there are two routes and not one: phase estimation needs a fault-tolerant quantum computer (long circuits, low error rate), which doesn't exist yet. VQE runs on today's noisy hardware but gives only approximate answers. Both routes target the same number E_0; which you use depends on what machine you have.
Chapter 98 already showed you the complexity ceiling: deciding whether E_0 \leq a or E_0 \geq b for a k-local Hamiltonian is QMA-complete. That means neither route will be efficient for arbitrary H — the hardness is baked into the problem. What the routes do give you is efficient algorithms for the physically interesting cases: Hamiltonians with structure (locality, positivity, low-lying spectral gap), starting states with decent overlap, small molecules where the ansatz can be expressive.
Task 3: thermal states — sample from e^{-\beta H}/Z
At non-zero temperature T, a quantum system is not in its ground state; it is in a thermal mixture. The density matrix is the Gibbs state

\rho_\beta = \frac{e^{-\beta H}}{Z}, \qquad Z = \mathrm{Tr}\, e^{-\beta H}, \qquad \beta = \frac{1}{k_B T}.
All thermodynamic quantities — specific heat, magnetic susceptibility, equilibrium binding probabilities — are computed from \rho_\beta. Classical quantum Monte Carlo samples from Gibbs states for many systems but hits the famous sign problem for fermions: the sampling weights become complex or negative, and statistical error blows up exponentially. Nitrogen fixation, cuprate superconductors, and most interesting electronic systems are exactly where the sign problem bites hardest.
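A small numpy sketch of the object itself, on a hypothetical 3-level Hamiltonian: build \rho_\beta, check it is a valid state, and watch \langle H\rangle approach the ground-state energy as \beta grows:

```python
import numpy as np

# A small Hermitian "Hamiltonian" with known spectrum {0, 1, 3}.
H = np.diag([0.0, 1.0, 3.0])

def gibbs(H, beta):
    """rho_beta = exp(-beta H) / Z, computed in H's eigenbasis."""
    w, V = np.linalg.eigh(H)
    boltz = np.exp(-beta * (w - w.min()))   # shift spectrum for stability
    probs = boltz / boltz.sum()             # Boltzmann weights, sum to 1
    return (V * probs) @ V.conj().T         # sum_j p_j |v_j><v_j|

for beta in (0.1, 1.0, 10.0):
    rho = gibbs(H, beta)
    print(f"beta={beta:>4}: Tr rho = {np.trace(rho).real:.3f}, "
          f"<H> = {np.trace(rho @ H).real:.3f}")
```

At small \beta (high temperature) the state is nearly maximally mixed; at large \beta it collapses onto the ground state, recovering Task 2 as a limit of Task 3.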
Quantum algorithms for Gibbs sampling exist (chapters 135–136). They are less mature than Tasks 1 and 2 but are the third leg of the simulation programme, because finite-temperature chemistry — reactions in solution at 300 K, proteins at body temperature — is what the real world is made of.
Why classical simulation hits a wall
The argument from chapter 2 was "2^n amplitudes." In practice, classical chemistry and condensed-matter simulation never stores the full 2^n-vector — they use cleverly compressed representations that exploit structure. Understanding those representations and their failure modes is what makes Feynman's argument operationally real.
Three dominant classical methods:
Density Functional Theory (DFT). Replace the 2^n-dimensional wavefunction with the 3D electron density n(\vec r). Run a self-consistent field calculation. DFT is the workhorse of computational chemistry — most catalyst screening, most material property prediction, most drug-binding estimates use DFT. It is fast (O(n^3)), it is routine, and it covers most of chemistry. Where it fails: strongly correlated electrons, where the true ground state cannot be captured by an effective single-electron theory. DFT error on FeMoco's binding energy is large enough to get the chemistry wrong.
Quantum Monte Carlo (QMC). Sample the wavefunction using stochastic methods. Exact in principle; in practice limited by the fermion sign problem — the probability weights become negative, and the variance grows exponentially. Most interesting materials (high-T_c, transition-metal oxides, Hubbard-like models away from half-filling) are unreachable by QMC.
Tensor networks (DMRG, PEPS, MERA). Represent the wavefunction as a network of small tensors. DMRG (Density Matrix Renormalisation Group) is the state of the art for one-dimensional systems — spin chains, quantum wires — and routinely solves systems with hundreds of sites. For 2D and 3D, tensor networks become exponentially expensive as the system size grows in the "transverse" direction. This is the frontier. Anything genuinely two-dimensional with strong correlation is out of reach.
The rule of thumb: if a classical method exists that is efficient for your system, use it. Quantum computers will not out-compete DFT on caffeine. But for the systems classical methods fail on — and those systems happen to include exactly the chemistry industry cares about — the quantum route is the only known option.
Concrete application areas
Feynman's Claim 3 is a proposal; it is not automatically useful. For the proposal to pay off, there have to be specific problems where the quantum route wins. Four application areas dominate the current literature.
Chemistry — molecular electronic structure. The Hamiltonian of electrons in a molecule, in second-quantised form, is

H = \sum_{pq} t_{pq}\, c_p^\dagger c_q + \frac{1}{2} \sum_{pqrs} V_{pqrs}\, c_p^\dagger c_q^\dagger c_r c_s,
where c_p, c_p^\dagger are fermionic annihilation/creation operators for orbital p, t_{pq} are one-electron integrals (kinetic + nuclear attraction), and V_{pqrs} are two-electron integrals (Coulomb repulsion). Under Jordan-Wigner or Bravyi-Kitaev encoding, this maps to a k-local qubit Hamiltonian with k \approx 4. Small molecules (\text{H}_2, \text{LiH}, \text{BeH}_2) have been run on NISQ hardware and reproduce classical results. The target is \text{FeMoco}, the catalytic cluster of the enzyme nitrogenase — seven iron atoms, one molybdenum, and a mechanism biology has mastered but industry hasn't. Reiher et al. 2017 [6] estimated ~200 logical qubits and 10^{14} T-gates to resolve the FeMoco ground state; those numbers have come down substantially since, but remain a fault-tolerant target.
Condensed matter — Hubbard-like models. The 2D Hubbard model is the minimal model thought to capture high-temperature superconductivity in cuprates. It has two parameters (hopping t, on-site repulsion U), and its phase diagram contains a superconducting region near half-filling that classical methods cannot map. A quantum simulator of a few hundred sites would settle a thirty-year-old open question in condensed-matter physics.
High-energy physics — lattice gauge theories. Lattice QCD is classical physics' best tool for the strong force, but non-zero chemical potentials hit the sign problem. Quantum simulation of lattice gauge theories is an active programme (it is how protons are glued out of quarks at finite baryon density).
Quantum chemistry in solution — thermal averages. Drug binding is a finite-temperature phenomenon in water. The binding free energy \Delta G = -k_B T \ln(K) is a thermal average, reachable (in principle) via Task 3 quantum Gibbs sampling.
Example 1 — classical RAM versus quantum hardware for a 50-electron molecule
Setup. A medium drug molecule — say, a thyroid-hormone analogue — has about 50 electrons you would want to include in an "active space" for accurate simulation. Each electron lives in a basis of ~50 spatial orbitals (a reasonable correlation-consistent basis). The many-electron Hilbert space is exponential in the orbital count.
Step 1. Count classical amplitudes. For 50 spin-orbitals filled with 50 electrons, the Slater-determinant count is \binom{100}{50} \approx 10^{29}. Storing that many complex amplitudes at 16 bytes each would cost about 1.6 \times 10^{30} bytes — absurdly more than any machine Earth could build.
Why the binomial coefficient and not 2^{100}: electrons are fermions and conserve particle number. The physically relevant Hilbert space is the 50-electron sector of the 100-spin-orbital space, whose dimension is \binom{100}{50} rather than 2^{100}.
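The count in Steps 1 and 2 is one line of Python each (math.comb is exact integer arithmetic):

```python
from math import comb

dim = comb(100, 50)            # 50-electron sector of 100 spin-orbitals
bytes_needed = 16 * dim        # one complex128 amplitude per determinant
print(f"sector dimension = {dim:.3e}")
print(f"classical storage = {bytes_needed:.3e} bytes")
print(f"full 2^100 space  = {2**100:.3e}")
```

The fermionic sector is roughly a tenth of the full 2^{100}-dimensional space, which changes nothing: both are some thirty orders of magnitude past feasible.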
Step 2. Count qubits for the quantum encoding. Under Jordan-Wigner, one qubit per spin-orbital: 100 qubits. Under more efficient encodings (Bravyi-Kitaev, symmetry-adapted): 95–98 qubits. Either way, order 100.
Step 3. Gate-count estimate. For a Trotter-based ground-state simulation to chemical accuracy (1 kcal/mol), published estimates for molecules of this size give 10^{10} to 10^{12} T-gates for phase estimation, and 10^5 to 10^8 parameters for VQE-style ansätze on today's hardware.
Step 4. Compare. The classical problem is 10^{29} amplitudes, each of which in principle interacts with 10^{29} others per gate step — completely out of reach. The quantum problem is 100 qubits and 10^{10}–10^{12} gates — out of reach today, within reach of fault-tolerant machines estimated for the 2030s.
Result. Classical simulation of this molecule at high accuracy: impossible. Quantum simulation: will be possible within a decade of fault-tolerant machines shipping.
What this shows. The 2^n wall from chapter 2 is not a pedagogical toy. For the molecules pharmaceutical companies actually care about, no classical machine that will ever be built can store the exact wavefunction. A fault-tolerant quantum computer can.
Where the near-term utility is
"Quantum advantage" has two layers. The honest version:
Verified advantage on a benchmark task — already here, on random-circuit sampling and a handful of boson-sampling tasks (Google 2019, USTC 2020, and successors). These have no scientific or commercial application.
Useful advantage on a scientific problem — not confirmed yet, targeted for the late 2020s / early 2030s. The candidates, in decreasing order of how likely they are to arrive first:
- Small-molecule energies with hard correlation. Transition-metal dimers, open-shell organics, bond-breaking transition states. Fault-tolerant needed; a few hundred logical qubits; 10^9–10^{11} T-gates.
- 2D Hubbard model phase diagram. A benchmark condensed-matter problem; hundreds of sites; requires both ground-state and time-evolution algorithms.
- Drug-binding free energies with quantum corrections. Task 3 (thermal sampling) plus Task 2 (ground-state energies) plus classical force-field averaging over water. Thousands of logical qubits; the most commercially important and most speculative on timescales.
Hype check. The press around "quantum drug discovery" and "quantum AI for chemistry" is often ahead of the science. Useful quantum simulation is concrete: it names a specific molecule or material, a specific property, and shows the classical method that fails and the quantum method that succeeds. Most pop-science claims don't pass that test. Watch for the ones that do — they will come from groups like Google Quantum AI, IBM, IonQ, and academic partners publishing specific molecules with specific error bars, not from generic "we used a quantum computer" announcements. The first confirmed useful-advantage chemistry result will be a specific paper, not a vibe.
Example 2 — a drug-discovery pipeline with and without quantum
Setup. You are at a pharma company designing a catalyst for selective C-H activation — a high-value transformation used across the drug pipeline. The active site has iron, so electronic structure is strongly correlated. Classical DFT gives binding energies with errors around 5 kcal/mol on this chemistry; you want errors below 1 kcal/mol to rank candidates.
Step 1. The classical workflow. Run DFT on each candidate (hours per molecule). Rank by DFT energy. Synthesise the top ten. Test. Problem: the DFT error bar is larger than the differences between candidates — many "top ten" by DFT are not top ten in the lab.
Step 2. The quantum-enhanced workflow (late 2020s target). Same DFT pre-filter to narrow the candidate list (cheap). For the top 100 candidates, run a fault-tolerant quantum phase-estimation calculation of the active-site electronic ground state in an ~50-orbital active space. Gate count per candidate: \sim 10^{10} T-gates. At a 1 MHz logical gate rate, that is 10^4 seconds = 3 hours per molecule.
Why this is tractable: the 100-candidate filter is set by how many the classical pre-screen can narrow to. The per-candidate quantum run is the fault-tolerant budget — 3 hours each is survivable.
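The Step 2 arithmetic, spelled out with the text's target numbers (both inputs are targets, not measurements):

```python
t_gates = 1e10      # T-gate budget per candidate (target from the text)
rate = 1e6          # assumed logical gate rate: 1 MHz
seconds = t_gates / rate

print(f"per candidate: {seconds:.0f} s = {seconds / 3600:.1f} hours")
print(f"100 candidates: {100 * seconds / 86400:.1f} days of machine time")
```

So the whole 100-candidate screen is on the order of ten days of machine time, which is why the classical pre-filter matters: the quantum budget only covers a short list.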
Step 3. The payoff. Sub-chemical-accuracy ranking of 100 candidates; synthesise the top five; hit rate in the lab goes from 10% to 50%+. Measured in R&D dollars, this is a multi-billion-rupee uplift per successful drug program.
Step 4. The caveat. This workflow doesn't exist today. Every number in Step 2 is a target, not a measurement. Fault-tolerant hardware with sufficient logical qubit counts is not yet deployed — the National Quantum Mission's 2031 milestone of 50–1000 physical qubits with useful fidelity is the runway; useful chemistry results probably come a few years after.
Result. The pipeline is the vision Feynman implicitly pointed at in 1982: compute the thing classical methods cannot, to do chemistry the lab cannot otherwise rank.
What this shows. Feynman's argument is not an abstract physics point — it is an eventual change in how pharmaceutical and materials R&D is done. The path from 1982's fourteen-page paper to a real pipeline runs through exactly the algorithms covered in Part 15.
India and the Feynman agenda
The National Quantum Mission (₹6000 crore, approved 2023) targets 50–1000 physical qubits by 2031, specifically to support chemistry, materials, and drug-discovery applications — Feynman's agenda, rebranded. Several Indian institutions are active: the Tata Institute of Fundamental Research (TIFR, Mumbai) has a long-standing quantum chemistry group; the Indian Institute of Science (IISc, Bangalore) runs both NMR-based and superconducting-qubit experiments, and Anil Kumar's NMR group there was among the earliest worldwide to demonstrate small-molecule quantum algorithms; IIT Madras hosts algorithmic research on VQE and chemistry benchmarks.
Indian pharma — Dr. Reddy's, Cipla, Biocon — is watching these developments closely. The economic argument is crisp: if quantum-enhanced catalyst design reduces the cost-per-drug-program by even 10%, the payoff to a country running one of the world's largest generics industries is measured in lakhs of crores per decade. The National Quantum Mission's chemistry track is aimed exactly at that return.
Common confusions
- "Quantum computers speed up everything." No — the speedup is specific to problems where the mathematical structure matches what quantum amplitudes do well (superposition, interference, phase, entanglement). For simulating quantum systems, the speedup is exponential. For sorting a list, a quantum computer is strictly slower than a laptop.
- "Feynman was thinking of Shor's algorithm." No. Shor's paper came in 1994, twelve years later. Feynman's 1982 proposal was narrowly about physics simulation. It was Peter Shor, David Deutsch, and others who later realised that the same machine could also attack number-theoretic problems.
- "Chemistry on NISQ is already useful." Misleading. Small molecules (\text{H}_2, \text{LiH}) have been simulated on NISQ, but those simulations reproduce results that classical methods already give exactly. They are proof-of-principle, not new science. Useful quantum chemistry — simulating a molecule a classical computer cannot — is a future milestone, likely on fault-tolerant hardware.
- "DFT is going to be replaced by quantum computers." No. For the 95% of chemistry where DFT works, it will keep working — it is fast, cheap, and accurate enough. Quantum computers will handle the 5% where strong correlation breaks DFT: transition-metal catalysts, open-shell transition states, strongly correlated materials. The future computational-chemistry stack will have both classical and quantum components, picking the right tool per problem.
- "A quantum computer solves the Schrödinger equation exactly." A quantum computer simulates a chosen Hamiltonian on a chosen basis with chosen precision. The "exactness" is relative to the model; modelling errors (chosen basis set, chosen Hamiltonian approximations, finite Trotter error) still exist. Chemists talk about "chemical accuracy" (~1 kcal/mol) as the target, because that is the threshold at which predicted and measured reaction rates agree. Quantum computers will hit chemical accuracy on harder systems than classical ones can — that is the claim, not "solves nature exactly."
- "Cryptography is the main quantum application." The popular press leans on it because it is dramatic, but the gate counts for RSA-breaking are much larger than for useful quantum chemistry. Practical Shor's-algorithm attacks on 2048-bit RSA need millions of physical qubits; useful FeMoco-class chemistry might need tens of thousands. Chemistry is likely to arrive first.
Going deeper
You now have the landscape: three simulation tasks, four application areas, and the honest calibration that useful quantum advantage is probably a chemistry result in the late 2020s. What follows adds the rigorous complexity and algorithmic underpinnings — Lloyd's 1996 theorem that made Feynman's proposal a theorem, the fermion-encoding machinery that turns a molecule into a qubit Hamiltonian, and the recent history of classical-simulation records that keeps the goalposts honest.
Lloyd 1996 — Feynman's proposal becomes a theorem
Feynman's 1982 argument was informal. The question "can any local Hamiltonian be efficiently simulated on a quantum computer?" remained technically open for fourteen years. Seth Lloyd's 1996 paper [2] settled it. Lloyd proved: for any Hamiltonian H = \sum_{i=1}^M h_i where each h_i acts on a bounded number of qubits and has bounded norm, the unitary e^{-iHt} can be simulated to precision \epsilon on a quantum computer using O(M^2 t^2 / \epsilon) elementary gates, via a first-order Trotter decomposition. Higher-order Trotter-Suzuki schemes drop the t-scaling closer to linear.
This is the foundational theorem of quantum simulation. It says: every physically reasonable Hamiltonian is efficiently simulatable on a quantum computer, with cost polynomial in the system size, the simulation time, and inverse precision. Lloyd's theorem is the reason Feynman's casual proposal became an actual research programme.
Subsequent improvements — Berry et al. 2015 on "linear combinations of unitaries," Low–Chuang 2017 on quantum signal processing, Babbush et al. 2018 on optimised Trotter methods — have narrowed the gate-count constants and improved the scaling with precision to near-optimal (O(\log(1/\epsilon)) instead of polynomial). These are the chapters 131–132 of this track.
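The gain from going beyond first order can be checked numerically on a toy one-qubit splitting (numpy assumed; symmetric Strang splitting stands in here for second-order Trotter-Suzuki):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def expm_i(H, t):
    """exp(-i H t) for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

t = 1.0
exact = expm_i(X + Z, t)

def error(step):
    """Spectral-norm error of r repetitions of one Trotter step step(tau)."""
    return {r: np.linalg.norm(np.linalg.matrix_power(step(t / r), r) - exact, 2)
            for r in (10, 100)}

first  = error(lambda tau: expm_i(X, tau) @ expm_i(Z, tau))
second = error(lambda tau: expm_i(X, tau / 2) @ expm_i(Z, tau) @ expm_i(X, tau / 2))

for r in (10, 100):
    print(f"r={r:>3}: first-order {first[r]:.1e}, second-order {second[r]:.1e}")
```

Going from r = 10 to r = 100, the first-order error drops by about 10x (scaling 1/r) while the symmetric splitting drops by about 100x (scaling 1/r^2), the pattern the higher-order schemes generalise.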
Fermion encodings — turning a molecule into qubits
The molecular electronic Hamiltonian is written in terms of fermionic creation and annihilation operators c_p^\dagger, c_p. These obey anticommutation relations — \{c_p, c_q^\dagger\} = \delta_{pq} — which qubits do not natively obey: Pauli operators on different qubits commute, whereas fermionic operators on different modes anticommute. To run the Hamiltonian on a quantum computer, you map fermion operators to qubit operators. Two standard encodings:
Jordan-Wigner. One qubit per spin-orbital. c_p = \prod_{q<p} Z_q \cdot \frac{X_p + iY_p}{2}. Pros: simple, correct. Cons: the \prod Z string makes every two-body term non-local — a single fermionic hop becomes a product of O(n) qubit operators.
Bravyi-Kitaev. A different tree-based mapping. Each fermionic operator becomes a product of O(\log n) qubit operators. Pros: more local, better for circuit depth. Cons: more intricate to code.
Both encodings are exact (they preserve the anticommutation structure). Which to use depends on the algorithm and the hardware: Jordan-Wigner is simpler and maps naturally onto linear qubit arrays, while Bravyi-Kitaev's O(\log n) operator weight pays off when circuit depth is the bottleneck. Chapter 134, on fermion encodings, works through both in detail.
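The Jordan-Wigner map above is short enough to verify directly. A numpy sketch builds c_p for n = 3 spin-orbitals and checks the anticommutation relations:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def jw_annihilation(p, n):
    """Jordan-Wigner c_p on n qubits: Z ... Z, then (X + iY)/2, then I ... I."""
    ops = [Z] * p + [(X + 1j * Y) / 2] + [I2] * (n - p - 1)
    return kron_all(ops)

n = 3
c = [jw_annihilation(p, n) for p in range(n)]

# Check the fermionic relations {c_p, c_q^dagger} = delta_pq * I.
for p in range(n):
    for q in range(n):
        anti = c[p] @ c[q].conj().T + c[q].conj().T @ c[p]
        expected = np.eye(2**n) if p == q else np.zeros((2**n, 2**n))
        assert np.allclose(anti, expected)
print("JW operators satisfy {c_p, c_q^dagger} = delta_pq")
```

The Z string is what makes distant pairs anticommute rather than commute; drop it from the code and the asserts fail for p != q.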
Classical simulation records — a moving target
One reason to stay honest about quantum advantage is that classical simulation has also improved dramatically. Post-2019, groups responded to Google's 53-qubit supremacy claim with increasingly clever classical algorithms:
- Pan et al. 2021 — a tensor-network simulation of the Google Sycamore sampling task on 512 GPUs completed in 15 hours what Google estimated classical machines would take 10,000 years to do. (Google later disputed the comparison's apples-to-apples-ness.)
- 2024 — classical simulation of 60-qubit random circuits with reasonable depth is routine for tensor-network methods on 1D-like structures.
The quantum-advantage target moves as classical methods improve. The honest metric is not "can a specific quantum task beat supercomputers" — it is "can a specific scientifically useful task be done by quantum means that no classical method has caught up with." Chemistry holds up best here because the sign problem is a mathematical limit on classical methods, not an engineering one.
Why ground-state estimation is hard even for quantum computers
Chapter 98 proved that the local Hamiltonian problem is QMA-complete. That means: even quantum computers cannot efficiently solve the general ground-state problem. Phase estimation works when you can prepare a state with good overlap with the true ground state; that preparation is the hard part, and there is no efficient algorithm that works for arbitrary H.
In practice, chemists exploit structure. Hartree-Fock provides a good starting point for weakly correlated molecules (overlap with true ground state is high). Unitary coupled cluster ansatz provides a good approximation for molecules with moderate correlation. For strongly correlated systems — exactly the interesting ones — preparing a good starting state is itself a research question, and VQE is a pragmatic bet rather than a guarantee.
The HHL algorithm and quantum linear algebra
A sibling to Feynman simulation: the Harrow-Hassidim-Lloyd (HHL) algorithm solves sparse, well-conditioned linear systems Ax = b, in the sense of preparing the quantum state |x\rangle \propto A^{-1}|b\rangle in time polylogarithmic in the dimension. Many physics and engineering problems reduce to linear algebra, and HHL-style methods extend Feynman's agenda from "simulate Hamiltonian dynamics" to "solve the equations that come out of Hamiltonian-derived models." The caveats are substantial — the output is the state |x\rangle, not the full solution vector, so readout requires structured queries — but for specific problems (PDE solvers, optimal control) the extension is real.
Where this leads next
- Local Hamiltonians — chapter 130, the structural property that makes simulation efficient and ground-state finding QMA-hard.
- Hamiltonian Simulation: Trotter-Suzuki — chapter 131, the first algorithm for implementing e^{-iHt}.
- Variational Quantum Eigensolver — chapter 133, the NISQ-friendly route to ground-state energies.
- Quantum Chemistry Introduction — Part 15.3, the specific workflow for molecular problems.
- Feynman's Argument for Quantum — chapter 2, the original statement of the 2^n wall.
- QMA and the Quantum Cook-Levin Theorem — chapter 98, the complexity ceiling for ground-state problems.
References
- [1] Richard P. Feynman, "Simulating Physics with Computers," Int. J. Theor. Phys. 21, 467–488 (1982). DOI:10.1007/BF02650179.
- [2] Seth Lloyd, "Universal Quantum Simulators," Science 273, 1073–1078 (1996). DOI:10.1126/science.273.5278.1073.
- [3] John Preskill, Lecture Notes on Quantum Computation, Chapter 5. theory.caltech.edu/~preskill/ph229.
- [4] Nielsen and Chuang, Quantum Computation and Quantum Information, §4.7. Cambridge University Press.
- [5] Wikipedia, "Quantum simulator."
- [6] Reiher, Wiebe, Svore, Wecker, Troyer, "Elucidating reaction mechanisms on quantum computers" (2017). arXiv:1605.03590.