In short

In May 1981, Richard Feynman gave a talk at a small conference outside Boston and asked an awkward question: can a classical computer efficiently simulate quantum mechanics? His answer was no, and the reason is sharp. The state of n qubits — or more generally, n interacting two-level quantum systems — takes 2^n complex numbers to write down. At n=40 that is a trillion numbers (terabytes, a small cluster). At n=50 it is tens of petabytes (the world's top supercomputers). At n=60 it is more memory than the largest supercomputer on Earth. The wall is not engineering, it is the mathematics. Feynman's proposal, almost casual in the paper he published the following year: if nature is quantum-mechanical, build the simulator out of quantum parts. The simulator is a 2^n-dimensional quantum system by virtue of being a quantum system itself. That single observation is the foundation of the whole field.

A hundred physicists, a warm May weekend, a big wooden house outside Boston called Endicott House. The occasion was the Physics of Computation Conference, jointly organised by MIT and IBM in May 1981. Among the attendees was Richard Feynman, then in his early sixties, still curious about everything. He gave a talk — titled simply Simulating Physics with Computers — and afterwards, when asked to write it up, produced one of the most consequential papers in the history of computer science. The paper is about twenty pages long. It has no theorems. It poses a question and offers an answer.

The question is this: suppose you want to run a simulation — a proper, from-first-principles simulation — of a chunk of the real world. A small molecule. A tiny chip of iron. A hundred electrons bouncing around an atom. Can a classical computer do it?

By 1981, "simulation" was already routine. Weather models, crash tests, chemistry codes — all of them running on classical computers, all of them approximating the real world to some tolerable accuracy. What Feynman wanted to know was not whether you could approximate. He wanted to know whether you could be exact. If the underlying physics is quantum mechanical — which, for atoms and electrons, it undeniably is — can a classical machine keep up with it as the number of particles grows?

The answer he gave, and which the next two paragraphs will pin down in concrete numbers, is no. The growth is exponential. It is not exponential in the way a badly-written sorting algorithm is exponential — a thing you could fix with a cleverer data structure. It is exponential in the way the universe itself is structured. And Feynman's proposal for what to do about it — "let the computer itself be built of quantum-mechanical elements which obey quantum mechanical laws" — is the one-line origin of quantum computing as a subject.

The 2^n wall

Start with the sharpest version of the argument, the version that every quantum-computing student should be able to recite.

A classical bit has two states: 0 and 1. A register of n classical bits has 2^n possible states — 00…0 through 11…1 — but at any moment it is in exactly one of them. To tell you which one, your laptop stores n bits of memory. At n=50, that is 50 bits. At n=300, 300 bits. At n=10{,}000, about 1.25 kB. Linear in n.

Now take a single qubit. It has two basis states, written |0\rangle and |1\rangle — the symbol |0\rangle is read "ket zero" and is just the 2-dimensional column vector (1, 0)^\top; |1\rangle is (0, 1)^\top. Any state of the qubit is a linear combination of these two basis states:

|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle

where \alpha and \beta are complex numbers called amplitudes, constrained by |\alpha|^2 + |\beta|^2 = 1. So a single qubit requires two complex numbers to describe fully (four real numbers), with one normalisation constraint relating them.
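The single-qubit picture is small enough to check by hand. A minimal numpy sketch (the amplitudes \alpha = 0.6 and \beta = 0.8i are arbitrary illustrative values chosen to satisfy the normalisation constraint):

```python
import numpy as np

# Basis kets as column vectors: |0> = (1, 0)^T, |1> = (0, 1)^T.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# An arbitrary superposition |psi> = alpha|0> + beta|1>.
# (alpha = 0.6, beta = 0.8i: illustrative values that normalise.)
alpha, beta = 0.6, 0.8j
psi = alpha * ket0 + beta * ket1

print(np.abs(psi) ** 2)        # measurement probabilities: [0.36, 0.64]
print(np.vdot(psi, psi).real)  # |alpha|^2 + |beta|^2 = 1.0
```

Two complex numbers, one constraint — exactly the count the text gives.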

What about n qubits? The basis states are now all 2^n strings — |00\cdots 0\rangle, |00\cdots 1\rangle, …, |11\cdots 1\rangle. A general state is a linear combination of all of them:

|\psi\rangle = \sum_{x \in \{0,1\}^n} \alpha_x\,|x\rangle

Why the sum runs over all 2^n strings: quantum mechanics is linear. If |x\rangle and |y\rangle are allowed states, so is any linear combination. With n qubits there are 2^n basis strings, so a fully general superposition has one amplitude for each of them. There is no way to "skip" a string — the mathematics forces you to carry all 2^n of them.

That is 2^n complex amplitudes. One per basis string. All of them, in general, non-zero. The n classical bits held n pieces of information; the n qubits hold 2^n. Put the other way round: to simulate n qubits on a classical computer, you must store 2^n complex numbers in memory and, on every gate, update all of them.
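The blow-up is easy to see in code. The sketch below (illustrative; `n_qubit_state` is a helper name introduced here) tensors together n copies of a single-qubit state and shows that even this unentangled product state already occupies a 2^n-entry classical array:

```python
import numpy as np

def n_qubit_state(qubits):
    """Tensor together single-qubit states into one 2^n-amplitude vector."""
    state = np.array([1.0 + 0j])
    for q in qubits:
        state = np.kron(state, q)  # Kronecker product doubles the length
    return state

# |+> = (|0> + |1>)/sqrt(2): one qubit in equal superposition.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

for n in (2, 5, 10, 20):
    psi = n_qubit_state([plus] * n)
    print(f"n = {n:2d} qubits -> {psi.size:>9,} amplitudes")
```

At n = 20 the array already holds 1,048,576 amplitudes — and this is a state with no entanglement at all; a generic entangled state admits no shortcut.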

[Figure: Classical simulation versus quantum hardware. Left: a classical computer simulating n qubits must store all 2^n amplitudes in RAM (α_000, α_001, …) and update all of them on every gate — memory 2^n × 16 bytes, exponential wall at n ≈ 50. Right: quantum hardware of n qubits is natively a 2^n-dimensional quantum state; gates act on the physical hardware directly, and memory is n physical qubits, linear in n.]
The same $2^n$-dimensional state lives in two places. On the left, a classical computer has to store every amplitude as an explicit number and update all of them on every gate. On the right, a quantum computer is already a $2^n$-dimensional quantum system by being built out of $n$ physical qubits.

Linear in n versus exponential in n. When you do the arithmetic, the gap gets ugly fast.

The numbers that make the wall real

An argument that only says "exponential" is not yet persuasive — every algorithms student has seen exponential blow-ups they could fix with a good data structure. The number to sit with is what "2^n complex amplitudes" actually weighs when you store it on a real computer. A single complex number at double precision is two 64-bit floats: 16 bytes. So the state of n qubits costs 2^n \times 16 bytes.

Put that in a table.

| Qubit count n | Amplitudes 2^n | Memory at 16 B each | What it fits in |
|---|---|---|---|
| 10 | 1,024 | 16 kB | a text file |
| 20 | ~1 million | 16 MB | a phone |
| 30 | ~1 billion | 16 GB | a laptop |
| 35 | ~34 billion | ~549 GB | a large workstation |
| 40 | ~1 trillion | ~17.6 TB | a small cluster |
| 45 | ~35 trillion | ~563 TB | a large cluster / HPC facility |
| 50 | ~1 quadrillion | ~18 PB | the world's top supercomputers |
| 55 | ~36 quadrillion | ~576 PB | beyond any single machine |
| 60 | ~1.15 × 10¹⁸ | ~18.4 EB | more than all data stored on Earth |
| 80 | ~1.2 × 10²⁴ | ~1.9 × 10⁷ EB | absurd |
| 300 | ~2 × 10⁹⁰ | — | amplitude count exceeds the atoms in the visible universe |

Why each row counts: memory = 2^n \times 16 bytes. At n = 30, that is 2^{30} \approx 10^9 complex numbers, each 16 bytes, giving 16 GB — roughly the RAM of a decent laptop. Every time you add a qubit, memory doubles.
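The whole table is one formula applied repeatedly. A short script reproduces the memory column (using decimal units throughout, so the smallest rows come out slightly above the rounder binary-unit figures in the table):

```python
# Memory to store a full n-qubit state: 2^n amplitudes x 16 bytes each
# (complex128 = two 64-bit floats).
BYTES_PER_AMPLITUDE = 16

def memory_bytes(n):
    return (2 ** n) * BYTES_PER_AMPLITUDE

def human(nbytes):
    """Render a byte count in decimal units (kB = 1000 B, etc.)."""
    x = float(nbytes)
    for unit in ("B", "kB", "MB", "GB", "TB", "PB", "EB"):
        if x < 1000:
            return f"{x:.1f} {unit}"
        x /= 1000
    return f"{x:.1e} ZB"  # beyond exabytes

for n in (10, 20, 30, 35, 40, 45, 50, 55, 60):
    print(f"n = {n:2d}: {human(memory_bytes(n))}")
```

Each extra qubit doubles `memory_bytes(n)` — the doubling is the whole story.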

Two things are worth noticing in this table. First, somewhere between n = 45 and n = 55, the memory requirement leaves the realm of any single machine that exists on Earth. That is the "quantum supremacy" regime — the regime where classical simulation physically runs out of silicon. Second, the table overstates how far classical simulation can reach. The 2^n-amplitude count is only the storage cost. Every gate update also has to touch a non-trivial fraction of those amplitudes — typically O(2^n) floating-point operations per gate. Even if you had the memory, time to run a circuit of depth d grows as d \times 2^n. A million-gate circuit at n = 50 is \sim 10^{21} operations, which at exaflop scale (10^{18} operations per second) is about twenty minutes of dedicated compute per circuit run — and most useful circuits are much deeper.
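The per-gate cost is worth seeing concretely: even a gate acting on a single qubit rewrites the entire 2^n-entry array. A numpy sketch (illustrative; `apply_1q_gate` is a helper name introduced here, using the standard reshape-and-contract trick):

```python
import numpy as np

def apply_1q_gate(state, gate, k, n):
    """Apply a 2x2 gate to qubit k of an n-qubit state vector.
    Even this 'local' gate rewrites the full 2^n-entry array."""
    psi = state.reshape([2] * n)
    psi = np.tensordot(gate, psi, axes=([1], [k]))  # contract qubit k's axis
    psi = np.moveaxis(psi, 0, k)                    # restore axis ordering
    return psi.reshape(-1)

n = 12
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard

state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0  # start in |00...0>

for k in range(n):  # one Hadamard per qubit
    state = apply_1q_gate(state, H, k, n)

# n Hadamards produce the uniform superposition: all 2^n amplitudes equal.
print(np.allclose(state, np.full(2 ** n, 2 ** (-n / 2))))  # True
```

At n = 12 this is instant; at n = 40 the same loop would shuffle terabytes per gate.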

[Figure: Memory required versus qubit count, log scale. A bar chart for 10–70 qubits climbs exponentially (axis in bytes, 10^0 to 10^20); a horizontal dashed line marks the capacity of the world's largest supercomputers (~1 EB), and a highlighted band marks "the cliff" at 45–55 qubits where classical simulation becomes physically impossible.]
Memory needed to classically simulate $n$ qubits (log scale). Around $n = 50$, the required storage exceeds the world's largest supercomputers. This is not an engineering limit — it is a consequence of the mathematics of quantum states.

A concrete example sharpens it.

Example: how much RAM does a 35-qubit state take?

Setup. You want to simulate 35 qubits on a classical machine. The state is a vector of 2^{35} complex amplitudes, stored as complex doubles (two 64-bit floats, 16 bytes each).

Step 1. Count the amplitudes.

2^{35} = 34{,}359{,}738{,}368 \approx 3.44 \times 10^{10}

Why 35 is the threshold worth naming: 35 is where a single serious workstation with half a terabyte of RAM starts to struggle. One extra qubit doubles the requirement, and the wall gets close fast.

Step 2. Multiply by the bytes per amplitude.

3.44 \times 10^{10} \text{ amplitudes} \times 16 \text{ bytes/amplitude} = 5.50 \times 10^{11} \text{ bytes} \approx 549 \text{ GB}

Step 3. Check against reality. A workstation with 512 GB of RAM (the kind a well-funded research group owns) will not fit a 35-qubit state. Moving to 1 TB is possible but unusual. Moving to 2 TB — enough for 36 qubits — is rare.

Step 4. Go up by 15 qubits.

2^{50} \times 16 \text{ bytes} = 1.13 \times 10^{15} \times 16 \approx 1.80 \times 10^{16} \text{ bytes} \approx 18 \text{ PB}

Result. At n = 35, you need a half-terabyte workstation per state. At n = 50, 18 petabytes — comparable to the memory and storage of the world's largest supercomputing facilities. At n = 51, twice that. At n = 60, more than every large data centre on Earth combined.

What this shows. Every single qubit you add to your simulation doubles the memory and roughly doubles the compute. This is the Feynman wall. It is not slow software; it is not an inefficient algorithm. It is the structure of quantum mechanics telling you that the "state space" of n quantum particles is genuinely 2^n-dimensional, and any classical device that wants to hold it must spend exponential memory to do so.
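The same arithmetic can be run in reverse: given a memory budget, how many qubits fit? A short sketch (the helper `max_qubits` is introduced here for illustration; decimal units):

```python
import math

def max_qubits(ram_bytes):
    """Largest n whose full state vector (2^n amplitudes x 16 B) fits."""
    return math.floor(math.log2(ram_bytes / 16))

budgets = [(16e9,  "16 GB laptop"),
           (512e9, "512 GB workstation"),
           (18e15, "18 PB supercomputer")]

for ram, label in budgets:
    print(f"{label:>22}: up to {max_qubits(ram)} qubits")
```

The logarithm is the point: a thousandfold more memory buys only about ten more qubits.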

The numbers above assume no compression, no approximation, no exploiting of structure. Clever classical simulation methods — matrix product states and tensor networks — can cheat the wall for weakly entangled states, which are common in one-dimensional physics but rare in the quantum circuits people actually want to run. The going-deeper section returns to this. For generic quantum states, the wall is real and the numbers above are tight.

Why this matters for chemistry

The most dramatic consequence of the wall is in quantum chemistry. Every molecule is a quantum-mechanical system. Every chemical bond is the exchange or sharing of electrons, and electrons do not obey classical physics — they obey the Schrödinger equation, in which their state is a vector in an exponentially large Hilbert space.

Take caffeine. You have met it in your last chai or Red Bull. It has 24 atoms, of which only a handful are "active" in any given chemistry question, but the electrons responsible for its biological action number around 100. To simulate those electrons exactly — each in a superposition of hundreds of spatial orbitals, interacting via the Coulomb force — you need to track the amplitudes of the full multi-electron quantum state. The count scales roughly as 2^{\text{orbitals}}, and for the electrons of caffeine that is an integer with scores of digits.

[Figure: A small molecule — electrons to simulate. A cartoon fragment of a molecule like caffeine (C, N, O, H atoms connected by bonds), each atom surrounded by electron dots (red). Labels: "electrons live in a 2^(orbitals)-dimensional space"; "every chemical bond is a quantum correlation problem"; "the correlated many-electron state lives in an exponentially large Hilbert space".]
A cartoon of part of a molecule. The electrons (red) are not independent — their quantum state is entangled across the whole molecule, and the dimension of that state grows exponentially with the number of electrons you include.

Classical chemistry manages this with approximations. Density functional theory (DFT), for example, replaces the full many-electron wavefunction with a cleverly parameterised density, and works well for many molecules. But for systems with strong electron correlation — where the electrons influence each other so much that no mean-field-like approximation captures them — DFT and its relatives fail. The most economically important chemistries sit in this hard regime: nitrogen-fixation catalysts (the FeMoco cofactor of nitrogenase), transition-metal catalysis, battery and electrolyte materials, high-temperature superconductors.

This is the "Feynman family" of quantum computing applications. The argument is simple, and it is essentially the one Feynman himself made: nature is quantum, and simulating nature efficiently requires a tool that is also quantum.

Hype check. It is tempting to leap from "quantum simulation is exponentially hard classically" to "quantum computers will solve everything" — that is where most hyperbole starts. The measured version: there exists a set of problems, not general ones, for which the 2^n scaling is the thing stopping classical computers. Quantum simulation is the clearest example. Most problems, most of the time, do not have this structure, and for those a classical computer is still the right tool. You will see this theme over and over: quantum advantage is specific, not general. Do not claim more than the exponential scaling actually gives you.

Feynman's proposal — the one-line idea that made the field

Feynman's 1982 paper [1] builds up this wall in more-or-less the form above, then makes an almost offhand proposal:

"Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy."

This is the most-quoted line in the history of quantum computing, and it deserves its fame — but the real substance of the paper sits in the surrounding argument. The proposal is this: build a computer whose components are themselves quantum-mechanical, and use it to simulate other quantum-mechanical systems. The simulator does not need to "store" the 2^n amplitudes in explicit form. Instead, the simulator is a 2^n-dimensional quantum state by virtue of being built out of n qubits. Gates applied to the simulator evolve its 2^n amplitudes just as the same gates applied to the simulated system would.

This turns the curse into a feature. Classically, 2^n is a storage disaster. Quantum-mechanically, 2^n is free — it is what you already have when you put n qubits on a chip. The "memory" is the Hilbert space the qubits already live in.

Why this is a deep, not shallow, observation: it says that the physical world admits computing machines that natively exploit the exponential state space, while classical machines have to pay for it in explicit memory. If Feynman is right — and four decades of algorithm design suggest he is — the notion of "efficient computation" depends on the laws of physics the computer obeys. This is the reason the field is studied by both physicists and computer scientists.

The subtle second insight — quantum structure, not just quantum problems

Feynman's original paper was narrowly focused: he was after quantum simulation. But his framing contained a second, subtler claim that other researchers pulled out over the next decade. Any problem whose mathematical structure already looks quantum-mechanical — periodicity, interference, group-theoretic symmetry — might also get a quantum speedup, even when the problem itself is not about simulating matter.

Factoring is the canonical example. Integers do not move under the Schrödinger equation; they are static number-theoretic objects. But the problem of finding the factors of N can be recast as finding the period of a modular exponentiation a^k \bmod N, and periods are exactly what the quantum Fourier transform extracts with an exponential speedup. Peter Shor's 1994 discovery that this works was the moment Feynman's "quantum to simulate quantum" idea transformed into the broader principle that quantum computers can exploit any problem structure that quantum mechanics is natively good at.
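The periodicity structure can be seen entirely classically — at the cost of evaluating the function at every point, which is exactly the cost the quantum Fourier transform avoids. A toy numpy sketch for the textbook example N = 15, a = 7 (true period r = 4; the peak-finding here is a classical FFT, not Shor's algorithm):

```python
import numpy as np
from math import gcd

# f(k) = a^k mod N is periodic; a Fourier transform turns the period
# into sharp spectral peaks. Textbook example: N = 15, a = 7, r = 4.
N, a = 15, 7
M = 256  # number of samples of f (a power of two, like Shor's register)
f = np.array([pow(a, k, N) for k in range(M)], dtype=float)

# The period directly (cheap only because N is tiny):
r = next(k for k in range(1, N) if pow(a, k, N) == 1)
print("period r =", r)  # 4

# The FFT of f is supported only on integer multiples of M/r.
spectrum = np.abs(np.fft.fft(f))
peaks = sorted(int(i) for i in np.argsort(spectrum)[-r:])
print("peaks at", peaks)  # [0, 64, 128, 192] = multiples of 256/4

# From the period to the factors, exactly as in Shor's classical post-step:
print(gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N))  # 3 5
```

Classically this needed all M evaluations of f; Shor's algorithm prepares a superposition over all k with polynomially many gates and lets the QFT extract the same peaks.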

The three quantum advantages you will meet throughout this track — Shor (periodicity), Grover (search/amplitude amplification), quantum simulation (Feynman) — are all versions of the same idea: find a problem whose natural mathematical description involves an exponentially large vector space with interference, and quantum mechanics will give you a speedup.

Common confusions

Going deeper

If you have understood that the state of n qubits takes 2^n complex amplitudes, that classical machines hit a physical wall around n = 45–55, and that Feynman's proposal is "build the simulator out of quantum parts to turn the exponential cost into a linear-in-n hardware count" — you have the core of chapter 2. What follows is extra context for readers who want the sharper version of Feynman's paper, the modern classical-simulation frontier, and the 2019 "quantum supremacy" landmark.

What Feynman's 1982 paper actually argued

Feynman's paper is not just "counting amplitudes." It draws a careful distinction between two different notions of classical simulation.

Type 1 — compute expectation values of a local Hamiltonian efficiently. This is the practical chemist's goal. If you can get energies and correlation functions to within some target accuracy in polynomial time, you have "simulated" the system in the sense that matters for predicting experiments. Feynman points out that no known classical algorithm does this for arbitrary local Hamiltonians.

Type 2 — sample from the output distribution of a quantum experiment. This is the stricter notion. If someone runs a quantum experiment and produces a probability distribution over measurement outcomes, can a classical computer reproduce that distribution? Feynman argued no — the probability distributions that quantum mechanics produces cannot in general be written as a classical probabilistic algorithm of polynomial size, because the sign (or phase) of the amplitudes can create patterns with no classical analogue. This is the argument that, decades later, underpins the notion of quantum sampling advantage.

Feynman's paper also introduced the phrase "computer built of quantum-mechanical elements." The words "qubit" and "quantum gate" did not yet exist (David Deutsch formalised the framework in 1985), but the physical picture was already there.

Matrix product states and the 2D frontier

Not every quantum state is as hard to simulate as the worst case. Matrix product states (MPS), and their higher-dimensional cousins tensor networks, are a clever parametrisation that captures weakly entangled quantum states using only polynomial memory — not 2^n. For one-dimensional systems with only short-range entanglement, MPS methods can simulate n qubits in time polynomial in n, defeating the naive 2^n argument.

The trick: most physical 1D systems satisfy the area law for entanglement — the entanglement across any cut grows like the boundary of the region, not the volume — and MPS encodes exactly these states efficiently. For 1D physics (spin chains, quasi-1D conductors), classical MPS simulation is the state of the art, and the DMRG algorithm (White, 1992), built on MPS, remains the method to beat.
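The memory claim is simple parameter counting. A sketch comparing the two representations (the n · 2 · χ² count is the standard open-boundary MPS estimate; χ = 64 is an arbitrary illustrative bond dimension):

```python
# Parameter counting: full state vector versus a matrix product state.
# An open-boundary MPS with bond dimension chi stores about
# n * 2 * chi^2 numbers instead of 2^n.

def full_state_params(n):
    return 2 ** n

def mps_params(n, chi):
    return n * 2 * chi ** 2

n, chi = 100, 64
print(f"full vector  : {full_state_params(n):.3e} amplitudes")
print(f"MPS (chi={chi}): {mps_params(n, chi):,} numbers")
```

The catch, as the next paragraph explains, is that χ must grow with the entanglement across cuts — modest for gapped 1D systems, exponentially large for generic 2D circuits, at which point the wall returns.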

For 2D and 3D systems — real materials, real molecules — the area law still holds, but the boundary size grows, the MPS bond dimension must grow, and efficient simulation breaks down. This is where the classical frontier runs out and quantum simulation has its cleanest opportunity. You will meet tensor networks formally in Part 13; for now, the statement is: classical simulation of quantum systems is not uniformly hard, but for generic 2D-and-up systems it remains exponential.

Quantum supremacy — Google, IBM, China

In October 2019, Google announced that its 53-qubit Sycamore processor had performed a computational task — sampling from the output of a random quantum circuit — in 200 seconds that they estimated would take the world's top supercomputers 10{,}000 years. They called the milestone "quantum supremacy." IBM immediately pushed back with a classical algorithm they estimated could do it in 2.5 days. Chinese groups (USTC, Pan Jianwei's team) followed with photonic supremacy experiments in 2020 and a 66-qubit superconducting version in 2021.

The lesson is threefold.

First, Feynman's wall is real. The classical estimates for these specific tasks, even after clever speedups, run into the exponential cost of tracking high-entanglement 2D circuit states.

Second, "supremacy" is a narrow claim. It means "this quantum machine can do something classical simulation cannot do in any reasonable time" — not "this quantum machine is useful." Random-circuit sampling has no known application. The demonstrations were valuable because they proved that the hardware could execute deep, entangled circuits on many qubits at once — a necessary engineering milestone on the path to useful quantum computing — but the tasks themselves are benchmark-only.

Third, India's quantum community reads these results carefully because they shape the timeline for the National Quantum Mission. The NQM's 2031 target of 50–1000 physical qubits (with useful fidelity) is calibrated against the Willow/Sycamore era and the move toward logical qubits.

The link to Shor's algorithm — periodicity as quantum structure

Feynman's argument is about simulating quantum systems. Shor's algorithm is about factoring integers. The connection is that both exploit problems whose mathematical description has natural quantum structure.

For simulation, the structure is that the system is quantum, so a quantum simulator tracks its amplitudes for free. For factoring, the structure is that a^k \bmod N is a periodic function of k, and the quantum Fourier transform extracts that period in polynomial time, while a classical Fourier transform over the same exponentially long sequence would take exponential time. The QFT is where Feynman's interference idea meets number theory — a pattern in the integers that a classical algorithm cannot afford to compute becomes a peak in the amplitude distribution that a quantum measurement can find.

Part 9 of this track builds the QFT carefully, and Part 10 is all of Shor's algorithm. When you arrive there, you will see the line "now the amplitudes concentrate on integer multiples of 1/r" — and it will be the direct descendant of the paragraph you just read. Feynman's 1982 paper plus the quantum Fourier transform is, in a real sense, the entire mathematical content of factoring-via-quantum-computing.

Where this leads next

References

  1. Richard P. Feynman, "Simulating Physics with Computers", International Journal of Theoretical Physics 21, 467–488 (1982) — DOI:10.1007/BF02650179.
  2. Seth Lloyd, "Universal Quantum Simulators", Science 273, 1073–1078 (1996) — DOI:10.1126/science.273.5278.1073.
  3. Arute et al. (Google Quantum AI), "Quantum supremacy using a programmable superconducting processor", Nature 574, 505–510 (2019) — DOI:10.1038/s41586-019-1666-5.
  4. John Preskill, Lecture Notes on Quantum Computation — theory.caltech.edu/~preskill/ph229.
  5. Wikipedia, "Quantum simulator".
  6. Michael A. Nielsen and Isaac L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press.