In short
In May 1981, Richard Feynman gave a talk at a small conference outside Boston and asked an awkward question: can a classical computer efficiently simulate quantum mechanics? His answer was no, and the reason is sharp. The state of n qubits — or more generally, n interacting two-level quantum systems — takes 2^n complex numbers to write down. At n=40 that is a trillion numbers (tens of terabytes). At n=50 it is around 18 petabytes (a data centre). At n=60 it is more memory than the largest supercomputer on Earth. The wall is not engineering, it is the mathematics. Feynman's proposal, almost casual in the paper he wrote up the following year: if nature is quantum-mechanical, build the simulator out of quantum parts. The simulator is a 2^n-dimensional quantum system by virtue of being a quantum system itself. That single observation is the foundation of the whole field.
A hundred physicists, a warm May weekend, a big wooden house outside Boston called Endicott House. The conference was called the Physics of Computation Conference, jointly organised by MIT and IBM in 1981. Among the attendees was Richard Feynman, then in his early sixties and still curious about everything. He gave a talk — titled simply Simulating Physics with Computers — and afterwards, when asked to write it up, produced one of the most consequential papers in the history of computer science. The paper is fourteen pages long. It has no theorems. It poses a question and offers an answer.
The question is this: suppose you want to run a simulation — a proper, from-first-principles simulation — of a chunk of the real world. A small molecule. A tiny chip of iron. A hundred electrons bouncing around an atom. Can a classical computer do it?
By 1981, "simulation" was already routine. Weather models, crash tests, chemistry codes — all of them running on classical computers, all of them approximating the real world to some tolerable accuracy. What Feynman wanted to know was not whether you could approximate. He wanted to know whether you could be exact. If the underlying physics is quantum mechanical — which, for atoms and electrons, it undeniably is — can a classical machine keep up with it as the number of particles grows?
The answer he gave, and which the next two paragraphs will pin down in concrete numbers, is no. The growth is exponential. It is not exponential in the way a badly-written sorting algorithm is exponential — a thing you could fix with a cleverer data structure. It is exponential in the way the universe itself is structured. And Feynman's proposal for what to do about it — "let the computer itself be built of quantum-mechanical elements which obey quantum mechanical laws" — is the one-line origin of quantum computing as a subject.
The 2^n wall
Start with the sharpest version of the argument, the version that every quantum-computing student should be able to recite.
A classical bit has two states: 0 and 1. A register of n classical bits has 2^n possible states — 00…0 through 11…1 — but at any moment it is in exactly one of them. To tell you which one, your laptop stores n bits of memory. At n=50, that is 50 bits. At n=300, 300 bits. At n=10{,}000, about 1.25 kB. Linear in n.
Now take a single qubit. It has two basis states, written |0\rangle and |1\rangle — the symbol |0\rangle is read "ket zero" and is just the 2-dimensional column vector (1, 0)^\top; |1\rangle is (0, 1)^\top. Any state of the qubit is a linear combination of these two basis states:

|\psi\rangle = \alpha|0\rangle + \beta|1\rangle,

where \alpha and \beta are complex numbers called amplitudes. So a single qubit requires two complex numbers to describe fully (four real numbers), with one normalisation constraint, |\alpha|^2 + |\beta|^2 = 1, relating them.
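As a concrete sketch (plain numpy, no quantum library assumed), the two amplitudes are just a complex 2-vector:

```python
import numpy as np

# The two basis states as column vectors.
ket0 = np.array([1, 0], dtype=complex)   # |0> = (1, 0)^T
ket1 = np.array([0, 1], dtype=complex)   # |1> = (0, 1)^T

# An equal superposition: alpha = beta = 1/sqrt(2).
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)
psi = alpha * ket0 + beta * ket1

# The normalisation constraint |alpha|^2 + |beta|^2 = 1.
print(abs(alpha) ** 2 + abs(beta) ** 2)
```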
What about n qubits? The basis states are now all 2^n strings — |00\cdots 0\rangle, |00\cdots 1\rangle, …, |11\cdots 1\rangle. A general state is a linear combination of all of them:

|\psi\rangle = \sum_{x \in \{0,1\}^n} \alpha_x |x\rangle.
Why the sum runs over all 2^n strings: quantum mechanics is linear. If |x\rangle and |y\rangle are allowed states, so is any linear combination. With n qubits there are 2^n basis strings, so a fully general superposition has one amplitude for each of them. There is no way to "skip" a string — the mathematics forces you to carry all 2^n of them.
That is 2^n complex amplitudes. One per basis string. All of them, in general, non-zero. The n classical bits held n pieces of information; the n qubits hold 2^n. Turn that around: to simulate n qubits on a classical computer, you must store 2^n complex numbers in memory, and on every gate update all of them.
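A minimal state-vector sketch in numpy makes both costs visible: the 2^n-entry array, and the fact that even a one-qubit gate rewrites the whole thing. The reshape convention below is an illustrative choice, not any particular library's API:

```python
import numpy as np

n = 12                         # small enough to run instantly
dim = 2 ** n                   # 4096 amplitudes
psi = np.zeros(dim, dtype=complex)
psi[0] = 1.0                   # start in |00...0>

# One single-qubit gate (Hadamard) on the first qubit. Even for a
# one-qubit gate, the simulator reshapes and rewrites the entire
# 2^n-entry vector: O(2^n) work per gate.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi = (H @ psi.reshape(2, dim // 2)).reshape(dim)

print(psi.nbytes)              # 65536 bytes: 2^12 amplitudes x 16 B each
```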
Linear in n versus exponential in n. When you do the arithmetic, the gap gets ugly fast.
The numbers that make the wall real
An argument that only says "exponential" is not yet persuasive — every algorithms student has seen exponential blow-ups they could fix with a good data structure. The number to sit with is what "2^n complex amplitudes" actually weighs when you store it on a real computer. A single complex number at double precision is two 64-bit floats: 16 bytes. So n qubits cost 2^n \times 16 bytes.
Put that in a table.
| Qubit count n | Amplitudes 2^n | Memory at 16 B each | What it fits in |
|---|---|---|---|
| 10 | 1,024 | 16 kB | text file |
| 20 | ~1 million | 16 MB | phone |
| 30 | ~1 billion | 16 GB | laptop |
| 35 | ~34 billion | ~549 GB | large workstation |
| 40 | ~1 trillion | ~17.6 TB | small cluster |
| 45 | ~35 trillion | ~563 TB | large cluster / HPC facility |
| 50 | ~1 quadrillion | ~18 PB | the world's top supercomputers |
| 55 | ~36 quadrillion | ~576 PB | beyond any single machine |
| 60 | ~1.15 × 10¹⁸ | ~18.4 EB | far beyond any machine ever built |
| 80 | ~1.2 × 10²⁴ | ~1.9 × 10⁷ EB | absurd |
| 300 | ~2 × 10⁹⁰ | — | exceeds the atom count of the visible universe |
Why each row counts: memory = 2^n \times 16 bytes. At n = 30, that is 2^{30} \approx 10^9 complex numbers, each 16 bytes, giving 16 GB — roughly the RAM of a decent laptop. Every time you add a qubit, memory doubles.
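The whole table is one line of arithmetic; a short script reproduces it (decimal units here, so values differ slightly from binary-prefix readings):

```python
# Recompute the table: memory = 2^n * 16 bytes per state vector.
for n in [10, 20, 30, 35, 40, 45, 50, 55, 60]:
    mem_bytes = (2 ** n) * 16
    # Express each row in its most natural decimal unit.
    for unit, scale in [("EB", 1e18), ("PB", 1e15), ("TB", 1e12),
                        ("GB", 1e9), ("MB", 1e6), ("kB", 1e3)]:
        if mem_bytes >= scale:
            print(f"n = {n:>2}: {mem_bytes / scale:8.1f} {unit}")
            break
```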
Two things are worth noticing in this table. First, somewhere between n = 45 and n = 55, the memory requirement leaves the realm of any single machine that exists on Earth. That is the "quantum supremacy" regime — the regime where classical simulation physically runs out of silicon. Second, the table overstates how far classical simulation can reach. The 2^n-amplitude count is only the storage cost. Every gate update also has to touch a non-trivial fraction of those amplitudes — typically O(2^n) floating-point operations per gate. Even if you had the memory, time to run a circuit of depth d grows as d \times 2^n. A million-gate circuit at n = 50 is \sim 10^{21} operations, which at exaflop scale (10^{18} operations per second) is roughly twenty minutes of dedicated compute per circuit run — and most useful circuits are much deeper.
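The time estimate can be checked the same way, assuming (generously) one floating-point touch per amplitude per gate:

```python
# Back-of-envelope gate-time estimate for classical simulation.
n, depth, flops = 50, 1_000_000, 1e18   # qubits, gates, exaflop machine
ops = depth * 2 ** n                    # one touch per amplitude per gate
seconds = ops / flops
print(f"{ops:.2e} ops -> {seconds / 60:.0f} minutes")
```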
A concrete example sharpens it.
Example: how much RAM does a 35-qubit state take?
Setup. You want to simulate 35 qubits on a classical machine. The state is a vector of 2^{35} complex amplitudes, stored as complex doubles (two 64-bit floats, 16 bytes each).
Step 1. Count the amplitudes. With n = 35, that is 2^{35} = 34{,}359{,}738{,}368 amplitudes — about 34 billion.
Why 35 is the threshold worth naming: 35 is where a single serious workstation with half a terabyte of RAM starts to struggle. One extra qubit doubles the requirement, and the wall gets close fast.
Step 2. Multiply by the bytes per amplitude: 2^{35} \times 16 = 549{,}755{,}813{,}888 bytes \approx 550 GB.
Step 3. Check against reality. A workstation with 512 GB of RAM (the kind a well-funded research group owns) will not fit a 35-qubit state. Moving to 1 TB is possible but unusual. Moving to 2 TB — enough for 36 qubits — is rare.
Step 4. Go up by 15 qubits. Each extra qubit doubles the memory, so n = 50 costs 2^{15} = 32{,}768 times as much: 2^{50} \times 16 bytes \approx 18 PB.
Result. At n = 35, you need a half-terabyte workstation per state. At n = 50, 18 petabytes — more than the entire memory of Frontier, the supercomputer that led the TOP500 list into 2024. At n = 51, double that. At n = 60, 18 exabytes — roughly a thousand times beyond any machine ever built.
What this shows. Every single qubit you add to your simulation doubles the memory and roughly doubles the compute. This is the Feynman wall. It is not slow software; it is not an inefficient algorithm. It is the structure of quantum mechanics telling you that the "state space" of n quantum particles is genuinely 2^n-dimensional, and any classical device that wants to hold it must spend exponential memory to do so.
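Both headline numbers in the example are two lines of Python:

```python
# Verify the worked example: n = 35 and the jump to n = 50.
gb_35 = 2 ** 35 * 16 / 1e9    # amplitudes x 16 bytes, in decimal gigabytes
pb_50 = 2 ** 50 * 16 / 1e15   # the same arithmetic at n = 50, in petabytes
print(round(gb_35), "GB,", round(pb_50), "PB")   # 550 GB, 18 PB
```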
The numbers above assume no compression, no approximation, no exploiting of structure. Clever classical simulation methods — matrix product states and tensor networks — can cheat the wall for weakly entangled states, which are common in one-dimensional physics but rare in the quantum circuits people actually want to run. The going-deeper section returns to this. For generic quantum states, the wall is real and the numbers above are tight.
Why this matters for chemistry
The most dramatic consequence of the wall is in quantum chemistry. Every molecule is a quantum-mechanical system. Every chemical bond is the exchange or sharing of electrons, and electrons do not obey classical physics — they obey the Schrödinger equation, in which their state is a vector in an exponentially large Hilbert space.
Take caffeine. You have met it in your last chai or Red Bull. It has 24 atoms, of which only a handful are "active" in any given chemistry question, but the molecule carries around a hundred electrons in total. To simulate those electrons exactly — each in a superposition of hundreds of spatial orbitals, interacting via the Coulomb force — you need to track the amplitudes of the full multi-electron quantum state. The count scales roughly as 2^{\text{orbitals}}, and for caffeine that is an integer with scores of digits.
Classical chemistry manages this with approximations. Density functional theory (DFT), for example, replaces the full many-electron wavefunction with a cleverly parameterised density, and works well for many molecules. But for systems with strong electron correlation — where the electrons influence each other so much that no mean-field-like approximation captures them — DFT and its relatives fail. The most economically important chemistries sit in this hard regime:
- Nitrogen fixation. The conversion of atmospheric \text{N}_2 into ammonia is currently done industrially by the Haber–Bosch process, which eats roughly 2% of the world's energy budget. Biological nitrogen fixation, performed by the enzyme nitrogenase, does the same reaction at room temperature and normal pressure. The active site of nitrogenase is the FeMoco cluster — seven iron atoms, one molybdenum, a sulphur cage, a carbon in the centre — and no classical simulation has been able to compute its ground-state energy accurately enough to predict its mechanism. A quantum computer with a few hundred logical qubits could, by published resource estimates.
- Room-temperature superconductors. These are materials whose electrons form a strongly correlated condensate. Cuprates, iron pnictides, hydrides under pressure — none of these are well-predicted by classical simulation. Designing one computationally would transform grids and transport.
- Drug molecules with transition metals. Many catalysts and drug binding sites involve 3d or 4d transition metals, where the electrons strongly correlate. Accurate ground-state energies, accurate binding affinities — classical methods struggle.
This is the "Feynman family" of quantum computing applications. The argument is simple and almost the whole argument Feynman himself made: nature is quantum, and simulating nature efficiently requires a tool that is also quantum.
Hype check. It is tempting to leap from "quantum simulation is exponentially hard classically" to "quantum computers will solve everything" — that is where most hyperbole starts. The measured version: there exists a set of problems, not general ones, for which the 2^n scaling is the thing stopping classical computers. Quantum simulation is the clearest example. Most problems, most of the time, do not have this structure, and for those a classical computer is still the right tool. You will see this theme over and over: quantum advantage is specific, not general. Do not claim more than the exponential scaling actually gives you.
Feynman's proposal — the one-line idea that made the field
Feynman's 1982 paper [1] builds up this wall in more-or-less the form above, then makes an almost offhand proposal:
"Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy."
This is the most-quoted line in the history of quantum computing, and it deserves its fame — but the real substance of the paper is in the previous paragraph. The proposal is this: build a computer whose components are themselves quantum-mechanical, and use it to simulate other quantum-mechanical systems. The simulator does not need to "store" the 2^n amplitudes in explicit form. Instead, the simulator is a 2^n-dimensional quantum state by virtue of being built out of n qubits. Gates applied to the simulator evolve its 2^n amplitudes just as the same gates applied to the simulated system would.
This turns the curse into a feature. Classically, 2^n is a storage disaster. Quantum-mechanically, 2^n is free — it is what you already have when you put n qubits on a chip. The "memory" is the Hilbert space the qubits already live in.
Why this is a deep, not shallow, observation: it says that the physical world admits computing machines that natively exploit the exponential state space, while classical machines have to pay for it in explicit memory. If Feynman is right — and four decades of algorithm design suggest he is — the notion of "efficient computation" depends on the laws of physics the computer obeys. This is the reason the field is studied by both physicists and computer scientists.
The subtle second insight — quantum structure, not just quantum problems
Feynman's original paper was narrowly focused: he was after quantum simulation. But his framing contained a second, subtler claim that other researchers pulled out over the next decade. Any problem whose mathematical structure already looks quantum-mechanical — periodicity, interference, group-theoretic symmetry — might also get a quantum speedup, even when the problem itself is not about simulating matter.
Factoring is the canonical example. Integers do not move under the Schrödinger equation; they are static number-theoretic objects. But the problem of finding the factors of N can be recast as finding the period of a modular exponentiation a^k \bmod N, and periods are exactly what the quantum Fourier transform extracts with an exponential speedup. Peter Shor's 1994 discovery that this works was the moment Feynman's "quantum to simulate quantum" idea transformed into the broader principle that quantum computers can exploit any problem structure that quantum mechanics is natively good at.
The three quantum advantages you will meet throughout this track — Shor (periodicity), Grover (search/amplitude amplification), quantum simulation (Feynman) — are all versions of the same idea: find a problem whose natural mathematical description involves an exponentially large vector space with interference, and quantum mechanics will give you a speedup.
Common confusions
- "The 2^n amplitudes mean a qubit register stores exponentially more information." No. The amplitudes exist during the computation, but measurement of n qubits yields exactly n classical bits, and no clever protocol can extract more than that from a single copy of the state. The Holevo bound (chapter 181) makes this rigorous. What the 2^n amplitudes buy is not more information out of a measurement — it is the ability for interference to concentrate probability on the right answer before you measure.
- "Feynman proved quantum computers are faster than classical." He did not. He proposed that they might be more efficient for simulating quantum systems. The first rigorous proofs of quantum advantage came later: David Deutsch's 1985 universal quantum computer paper; Deutsch–Jozsa (1992) and Bernstein–Vazirani (1993) showing exponential separations on structured oracle problems; Shor (1994) on factoring and discrete log. Feynman started the conversation; he did not finish it.
- "Classical computers can simulate any quantum system, just slowly." Technically yes, with unlimited time and memory. Practically, for n beyond about 45–55, the required memory exceeds anything that will ever exist on Earth. Calling that "just slow" is like calling the heat death of the universe "just a bit of a wait." The exponential wall is a physical limit for all practical purposes.
- "The quantum computer 'tries all 2^n possibilities at once'." Classic misconception, and worth naming explicitly. A quantum computer does put the state into a superposition over 2^n basis strings, but measurement projects to exactly one of them. The speedup comes from interference — amplitudes for wrong answers cancelling, amplitudes for right answers adding — not from parallel brute-force evaluation. Feynman's argument is that the 2^n-dimensional state space is useful, but how it is useful depends on the algorithm.
- "Once we have quantum computers, chemistry is solved." Quantum computers will make certain previously-intractable electronic-structure problems tractable. But chemistry is not just electronic structure — it also involves statistical mechanics, reaction dynamics, solvent effects, and many approximations that have little to do with the 2^n-wall. Quantum computing is a powerful new instrument for part of the problem; the rest will still require the classical quantum-chemistry infrastructure.
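The smallest possible demonstration of "interference, not parallel search" is two Hadamard gates in a row — a numpy sketch:

```python
import numpy as np

# H takes |0> to an equal superposition; a second H makes the |1>
# amplitudes cancel exactly, returning |0> with certainty.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi = H @ np.array([1.0, 0.0])     # equal superposition of |0> and |1>
psi = H @ psi                      # amplitudes interfere
print(psi)                         # back to |0>: interference, not search
```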
Going deeper
If you have understood that the state of n qubits takes 2^n complex amplitudes, that classical machines hit a physical wall around n = 45–55, and that Feynman's proposal is "build the simulator out of quantum parts to turn the exponential cost into a linear-in-n hardware count" — you have the core of chapter 2. What follows is extra context for readers who want the sharper version of Feynman's paper, the modern classical-simulation frontier, and the 2019 "quantum supremacy" landmark.
What Feynman's 1982 paper actually argued
Feynman's paper is not just "counting amplitudes." It draws a careful distinction between two different notions of classical simulation.
Type 1 — compute expectation values of a local Hamiltonian efficiently. This is the practical chemist's goal. If you can get energies and correlation functions to within some target accuracy in polynomial time, you have "simulated" the system in the sense that matters for predicting experiments. Feynman points out that no known classical algorithm does this for arbitrary local Hamiltonians.
Type 2 — sample from the output distribution of a quantum experiment. This is the stricter notion. If someone runs a quantum experiment and produces a probability distribution over measurement outcomes, can a classical computer reproduce that distribution? Feynman argued no — the probability distributions that quantum mechanics produces cannot in general be written as a classical probabilistic algorithm of polynomial size, because the sign (or phase) of the amplitudes can create patterns with no classical analogue. This is the argument that, decades later, underpins the notion of quantum sampling advantage.
Feynman's paper also introduced the phrase "computer built of quantum-mechanical elements." The words "qubit" and "quantum gate" did not yet exist (David Deutsch coined the formal framework in 1985), but the physical picture was already there.
Matrix product states and the 2D frontier
Not every quantum state is as hard to simulate as the worst case. Matrix product states (MPS), and their higher-dimensional cousins tensor networks, are a clever parametrisation that captures weakly entangled quantum states using only polynomial memory — not 2^n. For one-dimensional systems with only short-range entanglement, MPS methods can simulate n qubits in time polynomial in n, defeating the naive 2^n argument.
The trick: most physical 1D systems satisfy the area law for entanglement — the entanglement across any cut grows like the boundary of the region, not the volume — and MPS encodes exactly these states efficiently. For 1D physics (spin chains, quasi-1D conductors), classical MPS simulation is the state of the art, and the DMRG algorithm (White, 1992) built on MPS is effective enough that quantum hardware offers little advantage there.
For 2D and 3D systems — real materials, real molecules — the area law still holds, but the boundary size grows, the MPS bond dimension must grow, and efficient simulation breaks down. This is where the classical frontier runs out and quantum simulation has its cleanest opportunity. You will meet tensor networks formally in Part 13; for now, the statement is: classical simulation of quantum systems is not uniformly hard, but for generic 2D-and-up systems it remains exponential.
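A toy numpy experiment shows why MPS compression works for some states and not others: the Schmidt rank — the number of non-negligible singular values across a cut — is 1 for a product state and maximal for a generic random state. The 12-qubit size, seed, and cut position here are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12                                   # 6 qubits on each side of the cut

def schmidt_values(psi, n_left):
    """Singular values of the state reshaped across a left/right cut."""
    return np.linalg.svd(psi.reshape(2 ** n_left, -1), compute_uv=False)

# Product state |0...0>: Schmidt rank 1. MPS stores it with tiny bond dimension.
product = np.zeros(2 ** n)
product[0] = 1.0
print(np.sum(schmidt_values(product, n // 2) > 1e-10))   # 1

# Generic random state: full Schmidt rank 2^(n/2). No compression possible.
rand = rng.normal(size=2 ** n)
rand /= np.linalg.norm(rand)
print(np.sum(schmidt_values(rand, n // 2) > 1e-10))      # 64
```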
Quantum supremacy — Google, IBM, China
In October 2019, Google announced that its 53-qubit Sycamore processor had performed a computational task — sampling from the output of a random quantum circuit — in 200 seconds that they estimated would take the world's top supercomputers 10{,}000 years. They called the milestone "quantum supremacy." IBM immediately pushed back with a classical algorithm they estimated could do it in 2.5 days. Chinese groups (USTC, Pan Jianwei's team) followed with photonic supremacy experiments in 2020 and a 66-qubit superconducting version in 2021.
The lesson is threefold.
First, Feynman's wall is real. The classical estimates for these specific tasks, even after clever speedups, run into the exponential cost of tracking high-entanglement 2D circuit states.
Second, "supremacy" is a narrow claim. It means "this quantum machine can do something classical simulation cannot do in any reasonable time" — not "this quantum machine is useful." Random-circuit sampling has no known application. The demonstrations were valuable because they proved that the hardware could execute deep, entangled circuits on many qubits at once — a necessary engineering milestone on the path to useful quantum computing — but the tasks themselves are benchmark-only.
Third, India's quantum community reads these results carefully because they shape the timeline for the National Quantum Mission. The NQM's 2031 target of 50–1000 physical qubits (with useful fidelity) is calibrated against the Willow/Sycamore era and the move toward logical qubits.
The link to Shor's algorithm — periodicity as quantum structure
Feynman's argument is about simulating quantum systems. Shor's algorithm is about factoring integers. The connection is that both exploit problems whose mathematical description has natural quantum structure.
For simulation, the structure is that the system is quantum, so a quantum simulator tracks its amplitudes for free. For factoring, the structure is that a^k \bmod N is a periodic function of k, and the quantum Fourier transform extracts periods in one step while classical Fourier transforms on enormous integers would take sub-exponential time. The QFT is where Feynman's interference idea meets number theory — a pattern in the integers that a classical algorithm cannot see becomes a peak in the amplitude distribution that a quantum measurement can.
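The number-theoretic half of that story fits in a few lines. Here the period is found by classical brute force — which is exactly the step the quantum Fourier transform replaces for large N. N = 15, a = 7 is the standard toy instance:

```python
from math import gcd

# Finding the period r of a^k mod N yields factors via
# gcd(a^(r/2) +/- 1, N). Brute force here; Shor's QFT finds r efficiently.
N, a = 15, 7
r = 1
while pow(a, r, N) != 1:
    r += 1
print(r)                                                       # period: 4
print(gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N))  # factors: 3 5
```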
Part 9 of this track builds the QFT carefully, and Part 10 is all of Shor's algorithm. When you arrive there, you will see the line "now the amplitudes concentrate on integer multiples of 1/r" — and it will be the direct descendant of the paragraph you just read. Feynman's 1982 paper plus the quantum Fourier transform is, in a real sense, the entire mathematical content of factoring-via-quantum-computing.
Where this leads next
- A Map of this Track — chapter 3, a one-page tour of the 21 parts and the three reader paths through them.
- What is Quantum Computing? — chapter 1, the overview and hype-calibration this chapter builds on.
- The Bloch Sphere — chapter 14, the picture that makes single-qubit states and gates visual.
- Simulating Quantum Systems — Part 15, where Feynman's argument returns in algorithmic form (Trotter decompositions, Hamiltonian simulation, variational quantum eigensolvers).
- Shor's Algorithm — Introduction — Part 10, the explicit quantum-structure-exploiting algorithm sketched in the going-deeper section above.
References
- Richard P. Feynman, Simulating Physics with Computers, International Journal of Theoretical Physics 21, 467–488 (1982) — DOI:10.1007/BF02650179.
- Seth Lloyd, Universal Quantum Simulators, Science (1996) — DOI:10.1126/science.273.5278.1073.
- Arute et al. (Google Quantum AI), Quantum supremacy using a programmable superconducting processor, Nature 574, 505–510 (2019).
- John Preskill, Lecture Notes on Quantum Computation — theory.caltech.edu/~preskill/ph229.
- Wikipedia, Quantum simulator.
- Nielsen and Chuang, Quantum Computation and Quantum Information — Cambridge University Press.