In short
From 1925 to 1980, quantum mechanics was a branch of physics. Its job was to predict the behaviour of atoms, molecules, and fields. Between 1980 and 2000, five short papers turned quantum mechanics into a computational resource: Feynman 1982 asked whether quantum systems could be efficiently simulated; Deutsch 1985 wrote down the first quantum Turing machine and the first quantum algorithm; Bennett and Brassard 1984 showed quantum states could be used to distribute cryptographic keys; Shor 1994 showed a quantum computer could factor integers superpolynomially faster than the best known classical algorithm; Grover 1996 showed a quantum computer could search an unstructured database in O(\sqrt{N}) queries. After Grover, the field was born — a new research community, new journals, new conferences, and within ten years a new billion-dollar industry. That reframing is what this chapter calls the quantum information turn. Indian contributions — Umesh Vazirani on BQP, Ashwin Nayak on quantum walks, Rajagopal Nannapaneni on QEC — shaped the theory throughout. This chapter is a tour of the five papers that did it, plus the broader shift in thinking they started.
Until about 1980, quantum mechanics was the physicist's tool. You used it to compute the binding energy of a hydrogen atom, the spectrum of a complex molecule, the conductivity of a semiconductor. If you wanted to "do something" with a quantum system, the answer was always measure it — make a physical device whose output tells you about the quantum state inside.
No one thought of a quantum system as a computer. Computation was binary: bits, gates, machines of the kind Turing and von Neumann had defined in the 1930s and 1940s. Quantum mechanics was the strange theory that explained why transistors worked — you needed it to design silicon — but the transistor itself was a classical device. The quantum layer was a means to a classical end.
Between 1980 and 2000, that picture changed completely. By 2000 a new subject existed — quantum information science — with its own arXiv section (quant-ph), its own community, and its own conferences (QIP, TQC); by 2020 it had its own billion-dollar industry. What happened in those twenty years is the subject of this chapter. The thread through it is a single phrase that a physicist in 1979 would not have understood and a physicist in 1999 would take for granted: quantum mechanics is a resource for computation and communication. The mathematical content did not change. The perspective did.
Before 1980: quantum as physics, computation as classical
A 1975 physics graduate would have learned quantum mechanics as follows. A system is described by a state vector |\psi\rangle in a Hilbert space. Observables are Hermitian operators. Time evolution is unitary. Measurement is projection. Applications are atoms, molecules, solids, nuclei, scattering experiments. The word "computer" did not appear anywhere in the curriculum.
A 1975 computer scientist would have learned computation as follows. A computer is an abstract machine — a Turing machine, a random-access machine, a circuit of NAND gates. Its state is a string of bits. Its transitions are classical, deterministic (or probabilistic with classical random numbers). Questions like "what can be computed efficiently?" were the subject of complexity theory, freshly axiomatised by Stephen Cook in 1971 with the notion of NP-completeness.
Between these two worlds there was a wall. Quantum mechanics was "physics" and computation was "classical." A rare exception was Benioff's 1980 paper describing a classical Turing machine implemented by a Hamiltonian — more a philosophical observation than a new technology. The wall was still standing.
Then, between 1982 and 1996, the wall came down.
1982: Feynman's seed
You have met this paper already (chapter 2: Feynman's Argument). Richard Feynman at a 1981 conference at Endicott House asked whether classical computers could efficiently simulate quantum systems. His answer — no, because the state space is 2^n-dimensional and classical machines must store all 2^n amplitudes — was the first rigorous-sounding argument that quantum mechanics could be computationally stronger than classical mechanics.
Feynman's proposal was tentative. He did not describe a general-purpose quantum computer. He did not define a quantum algorithm. He did not even name the thing — the word "qubit" had not yet been coined (it arrived in 1995 with Ben Schumacher). What he did was ask the right question, and suggest the right answer: build a simulator whose components are quantum-mechanical, so that the 2^n-dimensional state is free.
This is the seed of the field. A seed, not a field — you cannot yet do anything with Feynman's paper. You cannot run an algorithm. You cannot prove an advantage. You can only suspect that if you had such a machine it would be powerful. The next three years were spent on the question: what is such a machine, formally?
1985: Deutsch's quantum Turing machine
David Deutsch, at Oxford, published Quantum theory, the Church–Turing principle and the universal quantum computer in 1985 [1]. The paper does three things.
First, it defines a quantum Turing machine. Deutsch took Turing's 1936 machine model — head, tape, states, transitions — and replaced the classical state with a quantum state, and the classical transition function with a unitary operator. A quantum Turing machine is to a classical Turing machine what a quantum computer is to a classical computer: the same abstract model, with classical bits replaced by qubits and classical transitions by unitaries.
Second, it proves a universality theorem. Just as a classical Turing machine can simulate any other classical Turing machine, Deutsch's universal quantum Turing machine can simulate any other quantum Turing machine — and, importantly, any finitely realisable quantum physical process. This is the quantum Church–Turing principle: any physically realisable quantum computer can be simulated by Deutsch's universal quantum computer. (That the simulation can be done with only polynomial overhead was pinned down later, by Bernstein and Vazirani in 1993.)
Third, it produces the first quantum algorithm. The Deutsch algorithm — you will have seen it in Part 8 of this track — is a toy problem with an exact quantum speedup. You are given a function f: \{0,1\} \to \{0,1\}, unknown but guaranteed to be either constant or balanced. Classically, you need two queries to decide which. Deutsch's algorithm does it in one query. A factor of 2 speedup, not impressive in absolute terms, but the first proof that a quantum computer can outperform a classical one on a well-defined problem.
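To see the one-query trick concretely, here is a minimal statevector sketch of Deutsch's algorithm — a modern illustration, not code from the 1985 paper; the 4×4 oracle matrix and the qubit ordering are choices made for this example. The query qubit ends up in |1\rangle exactly when f is balanced.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)

def deutsch(f):
    """Decide whether f: {0,1} -> {0,1} is constant or balanced with ONE oracle use."""
    # Oracle U_f |x>|y> = |x>|y XOR f(x)>, written as a 4x4 permutation matrix.
    # Basis ordering: index = 2*x + y, with x the query qubit.
    U_f = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U_f[2 * x + (y ^ f(x)), 2 * x + y] = 1.0

    state = np.kron([1.0, 0.0], [0.0, 1.0])      # start in |0>|1>
    state = np.kron(H, H) @ state                # Hadamards on both qubits
    state = U_f @ state                          # the single oracle call
    state = np.kron(H, I2) @ state               # Hadamard on the query qubit
    prob_one = state[2] ** 2 + state[3] ** 2     # P(query qubit reads 1)
    return "balanced" if prob_one > 0.5 else "constant"

for name, f in [("f=0", lambda x: 0), ("f=1", lambda x: 1),
                ("f=x", lambda x: x), ("f=NOT x", lambda x: 1 - x)]:
    print(name, "->", deutsch(f))   # constant, constant, balanced, balanced
```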
Example 1: why Deutsch's 1985 paper mattered more than the 2x speedup suggests
Setup. The Deutsch algorithm gives a 2x speedup — one quantum query instead of two classical ones. In any practical sense, this is trivial: if you have a function you can query, two queries is not a hardship. So why is the paper considered the founding of the field?
Step 1. Read the abstract carefully. The paper's central claim is not about the algorithm. It is about the existence of a computational model that is stronger than the classical Turing machine for at least one well-posed task. Up to 1985, it was conceivable that quantum mechanics added nothing to computation — that a quantum machine was, at best, a classical computer in disguise. Deutsch's algorithm removed that possibility in the cleanest setting available: in the exact query model, the quantum machine provably needs fewer queries than any classical one. Quantum computation is provably different.
Step 2. The paper defines BQP implicitly. Although Deutsch does not use the term (Bernstein and Vazirani will coin it in 1993), the paper defines a quantum Turing machine running in polynomial time as a legitimate model of efficient computation. This is the formal home for everything quantum algorithms will later do.
Step 3. The paper frames computation in terms of interference. The Deutsch algorithm works because of interference: two possible intermediate states overlap constructively or destructively depending on whether f is constant or balanced. This is the move — computation as interference — that every subsequent quantum algorithm will extend.
Why the 2x speedup is misleading as a headline: the paper is not about being twice as fast at a specific problem. It is about establishing the model and the framework that the next thirty years of algorithms will use. Every future quantum algorithm is a more elaborate version of the interference trick Deutsch introduced.
Step 4. The exponential speedups come later. Deutsch–Jozsa (1992, Deutsch's own extension with Richard Jozsa) gives the first exponential quantum speedup on an oracle problem. Bernstein–Vazirani (1993) and Simon's algorithm (1994) push further. Shor (1994) delivers the first exponential speedup on a problem people cared about classically. All of these are direct descendants of Deutsch 1985.
Result. Deutsch 1985 is to quantum computing what Turing 1936 is to classical computing: the paper that defined the model and asked the right questions. Everything else is the harvest.
What this shows. The quantum information turn is not a sequence of performance milestones. It is a sequence of conceptual clarifications that each removed a barrier to thinking about quantum mechanics computationally.
1984: Bennett and Brassard and quantum cryptography
In parallel with Deutsch's abstract work, Charles Bennett (IBM Yorktown Heights) and Gilles Brassard (Université de Montréal) were thinking about something completely different: quantum cryptography. In 1984 they published the BB84 protocol — a way for two parties, Alice and Bob, to share a secret cryptographic key using the transmission of single photons [2]. You will have met BB84 in depth in Part 18 of this track.
The key insight: measuring a quantum state disturbs it. Alice sends photons polarised in one of four directions, drawn from two mutually non-orthogonal bases (vertical/horizontal and ±45°). If an eavesdropper tries to intercept and measure them without knowing which basis each photon was prepared in, she will unavoidably introduce errors that Alice and Bob can detect by publicly comparing a small subset of their bits. Security rests on a physical law, not on a computational assumption.
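A toy simulation shows how that detection works in practice. The sketch below is an illustration, not the original protocol specification — the photon count, the seed, and the intercept-and-resend attack model are assumptions made for this example. Without an eavesdropper the sifted bits agree; with one, roughly a quarter of them disagree, and Alice and Bob notice.

```python
import random

def bb84(n_photons=4000, eavesdrop=False, seed=7):
    """Toy BB84: returns (sifted key length, error rate Alice and Bob would observe)."""
    rng = random.Random(seed)
    sifted, errors = 0, 0
    for _ in range(n_photons):
        a_bit, a_basis = rng.randint(0, 1), rng.randint(0, 1)   # Alice prepares a photon
        bit, basis = a_bit, a_basis                             # state of the photon in flight
        if eavesdrop:                                           # Eve intercepts and resends
            e_basis = rng.randint(0, 1)
            if e_basis != basis:                                # wrong basis: her outcome is random
                bit = rng.randint(0, 1)
            basis = e_basis                                     # photon is re-prepared in Eve's basis
        b_basis = rng.randint(0, 1)                             # Bob measures in a random basis
        b_bit = bit if b_basis == basis else rng.randint(0, 1)
        if b_basis == a_basis:                                  # sifting: keep matching-basis rounds
            sifted += 1
            errors += (b_bit != a_bit)
    return sifted, errors / sifted

print(bb84(eavesdrop=False))   # ~half the photons kept, error rate 0
print(bb84(eavesdrop=True))    # ~half kept, error rate near 25% -> Eve is detected
```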
BB84 was radical because it framed a cryptographic primitive — key distribution — in terms of quantum mechanics. It was not an algorithm for solving a computational problem. It was a protocol for accomplishing a task (secure key distribution) that classical cryptography could only approximate (via computational hardness assumptions like discrete log). Quantum mechanics offered information-theoretic security.
This matters for the turn because it established, early, that quantum mechanics was a resource for more than computation. It was a resource for communication, for cryptography, for randomness generation — for information processing of all kinds. The field that was forming was not just "quantum computing"; it was quantum information.
1994: Shor and the breakthrough
Deutsch had given an algorithm with a 2x speedup. Deutsch–Jozsa gave an exponential speedup on a structured oracle problem (useful for proofs, not for anything anyone cared about classically). Simon's algorithm (1994) gave an exponential speedup on another oracle problem. All of this mattered internally but had no external audience.
Then in April 1994, Peter Shor — at Bell Labs — posted a preprint titled Algorithms for quantum computation: discrete logarithms and factoring [3]. The paper did exactly what the title said. It described an algorithm that, given an n-bit integer N, would produce its prime factors on a quantum computer in time O(n^3). The best classical algorithm (general number field sieve) runs in time roughly e^{(\ln N)^{1/3} (\ln \ln N)^{2/3}} — sub-exponential, but super-polynomial. Shor's algorithm is a superpolynomial speedup on a problem everyone cared about.
The implication was immediate and dramatic. RSA, the public-key cryptosystem that was already the workhorse of public-key cryptography in 1994 and still secures a large fraction of internet traffic today, is based on the hardness of factoring. If a large quantum computer existed, Shor's algorithm would break RSA. Every HTTPS connection, every banking transaction, every encrypted email that relies on it depends on factoring being hard.
Why this changed the field overnight: until Shor, quantum computing was an academic curiosity studied by a handful of theorists. After Shor, quantum computing was a national-security issue. Within three years, the NSA had a quantum-computing research programme. Within ten years, every major tech company had a quantum group. Within thirty years, governments were spending hundreds of millions on dedicated quantum initiatives — including India's ₹6000-crore National Quantum Mission.
Shor's algorithm also gave the field its first clean connection to a deep problem. Factoring, order-finding, the discrete logarithm problem — these are all instances of the hidden subgroup problem, a classical object of algebraic algorithms. Shor showed that for abelian hidden subgroups, the quantum Fourier transform gives an exponential speedup. This opened a systematic search for more problems of the same shape.
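The classical wrapper around the quantum core is short enough to write out. In the sketch below — a toy illustration; the brute-force while-loop stands in for quantum order-finding, which is the only step that needs a quantum computer — knowing the order r of a modulo N hands you the factors of N.

```python
from math import gcd

def factor_via_order(N, a):
    """Shor's classical reduction: factoring N -> finding the order of a modulo N.
    The while-loop is the exponentially hard step that the quantum Fourier
    transform replaces; everything else is cheap classical arithmetic."""
    if gcd(a, N) != 1:
        return gcd(a, N), N // gcd(a, N)         # lucky: a already shares a factor with N
    r = 1
    while pow(a, r, N) != 1:                     # order-finding (the quantum part in Shor)
        r += 1
    if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
        return gcd(pow(a, r // 2, N) - 1, N), gcd(pow(a, r // 2, N) + 1, N)
    return None                                  # unlucky choice of a: try another

print(factor_via_order(15, 7))    # (3, 5): the order of 7 mod 15 is 4
print(factor_via_order(21, 2))    # (7, 3): the order of 2 mod 21 is 6
```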
1996: Grover and the quadratic speedup
Two years after Shor, Lov Grover at Bell Labs asked a different question. Factoring is a structured problem. What about problems without structure? Suppose you have a database of N items and you are looking for one specific item, but you have no way to sort or index the database — you can only test items one at a time. Classically, you need O(N) queries in the worst case.
Grover's 1996 paper [4] gave a quantum algorithm requiring only O(\sqrt{N}) queries. A quadratic speedup — not as dramatic as Shor's exponential, but applicable to any unstructured search problem. Since a huge number of problems reduce to unstructured search (including brute-force breaking of symmetric ciphers), Grover's algorithm is the second pillar of quantum algorithmic advantage.
Grover's algorithm is also geometrically elegant (see Grover's Geometric Picture). It works by rotating the quantum state inside a 2-dimensional subspace of the full Hilbert space, with each rotation step bringing the state closer to the "solution" direction. After O(\sqrt{N}) steps, the state is aligned with the solution, and measurement finds it with high probability.
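A statevector sketch makes the rotation visible. The code below is illustrative — N and the marked index are arbitrary choices, and the explicit amplitude array is exactly the thing a real quantum device never stores — but it shows the ~(\pi/4)\sqrt{N} iteration count and the near-certain success probability.

```python
import numpy as np

N, marked = 64, 37                                 # toy search space and hidden index
state = np.full(N, 1 / np.sqrt(N))                 # uniform superposition over N items

iterations = int(round(np.pi / 4 * np.sqrt(N)))    # ~ (pi/4) * sqrt(N) Grover steps
for _ in range(iterations):
    state[marked] *= -1                            # oracle: flip the sign of the marked amplitude
    state = 2 * state.mean() - state               # diffusion: reflect about the mean amplitude

print(iterations, round(state[marked] ** 2, 4))    # 6 iterations, success probability ~0.9966
```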
The five-paper summary
- Feynman 1982 — the seed: quantum systems resist efficient classical simulation, so build a simulator out of quantum components.
- Bennett–Brassard 1984 — BB84: key distribution whose security rests on physics rather than on computational assumptions.
- Deutsch 1985 — the quantum Turing machine, the universality theorem, and the first quantum algorithm.
- Shor 1994 — superpolynomial speedup for factoring and discrete logarithms; the result that made quantum computing a national-security issue.
- Grover 1996 — quadratic speedup for unstructured search; the second pillar of quantum algorithmic advantage.
Example 2: what Shor's algorithm did to cryptography — concrete numbers
Setup. A 2048-bit RSA key — the standard today for HTTPS, banking, Aadhaar authentication, UPI — protects billions of devices and connections worldwide. The security of RSA-2048 rests on the assumption that factoring a 2048-bit integer is infeasible.
Step 1. Compute classical cost. The general number field sieve (the best classical factoring algorithm) on a 2048-bit integer requires roughly 2^{112} elementary operations. At an exaflop rate (10^{18} ops/second), this is 5 \times 10^{15} seconds — about 150 million years. So for classical machines, RSA-2048 is safe.
Step 2. Compute quantum cost. Shor's algorithm, on the same 2048-bit integer, runs in O(n^3) basic quantum operations, where n = 2048. That is about 8.6 \times 10^9 gates. At a microsecond per gate, the quantum running time is hours.
Why this is the right headline number: the running time is not the full story. To execute 10^{10} gates reliably, you need fault-tolerant quantum error correction, which current best estimates say requires about 20 million physical qubits for surface-code-based correction. Current hardware has about 1000 physical qubits (IBM Condor, Atom Computing). So "hours of running time" is the algorithmic cost — but the prerequisite is a machine 20{,}000 times larger than today's.
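The arithmetic in Steps 1 and 2 is easy to re-derive. Here is a back-of-envelope check; every input is an assumption stated above (the 2^{112} GNFS estimate, one exaflop classically, one microsecond per quantum gate, the 20-million-qubit fault-tolerance estimate), not a measured figure.

```python
# Classical: GNFS at ~2^112 operations, run on a 10^18 ops/s (exaflop) machine.
classical_ops = 2 ** 112
years = classical_ops / 1e18 / (3600 * 24 * 365)
print(f"classical: ~{years:.1e} years")                  # ~1.6e+08 years

# Quantum: Shor at ~n^3 gates for n = 2048, at an assumed 1 microsecond per gate.
n = 2048
gates = n ** 3
print(f"quantum:   ~{gates:.1e} gates, ~{gates * 1e-6 / 3600:.1f} hours")  # ~8.6e+09 gates, ~2.4 hours

# Scale gap: fault-tolerant estimate (~20 million physical qubits) vs ~1000 today.
print(f"scale gap: {20_000_000 // 1_000}x")              # 20000x
```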
Step 3. Compute the response. NIST in the US runs a post-quantum cryptography standardisation programme; in 2024 it finalised three post-quantum algorithms (ML-KEM, ML-DSA, SLH-DSA) based on lattice and hash problems that neither Shor's algorithm nor any other known quantum algorithm breaks. In India, CERT-In has issued advisories on the timeline for migration, with financial-sector and Aadhaar-sector cryptography targeted for upgrade by 2030.
Step 4. Compute the industrial response. Every major cloud provider, telecom, and bank in India is now running post-quantum cryptography pilots. RBI's payment systems department has an explicit quantum-safe migration plan. The field that did not exist in 1993 is now a line item in the 2025 budget of every security-sensitive institution on Earth.
Result. Shor's algorithm, a paper of a few pages posted to an arXiv section that did not then exist, triggered a cryptographic migration that will cost the world's banks and governments an estimated 50 billion dollars over the next decade.
What this shows. The quantum information turn was not abstract. Shor 1994 is the single paper that moved quantum computing from "interesting theoretical curiosity" to "national-security-relevant technology." The industrial response — NIST, ISO, CERT-In, NQM — is all downstream of that one paper.
Indian contributions through the turn
The quantum information turn had Indian contributors throughout, and the National Quantum Mission traces part of its intellectual pedigree to them.
Umesh Vazirani (UC Berkeley, born in India) co-authored the 1993 Bernstein–Vazirani paper, which defined the complexity class BQP — bounded-error quantum polynomial time, the formal class of problems efficiently solvable by quantum computers. You have met BQP in Part 12 of this track. The journal version, Quantum complexity theory, appeared in SIAM Journal on Computing in 1997. Vazirani has continued to lead theoretical quantum computing at Berkeley, mentoring a generation of researchers.
Ashwin Nayak (Waterloo, born in India, IIT Kanpur undergrad) contributed foundational work on quantum walks — the quantum generalisation of random walks, a major algorithmic primitive. His papers on quantum walks on graphs are standard references in the field (Part 11 covers walk-based algorithms).
Rajagopal (Raju) Nannapaneni and others at IIT Madras contributed to early work on quantum error correction and codes.
Preskill (Caltech, not Indian but central to the field) coined the term "quantum supremacy" in 2012, naming the milestone that Google and USTC would claim in 2019–2021. His lecture notes remain the standard graduate text.
The Indian diaspora in quantum information theory is dense. Every major research group — Berkeley, MIT, Caltech, Waterloo, Oxford — has Indian-origin faculty or senior researchers. The theoretical computer science side of the field has particularly strong Indian representation, partly because of the strong CS tradition at IITs and ISI Kolkata.
Beyond the five papers: the field explodes
By 2000, the foundations were in place: a model of computation (Deutsch), a collection of algorithms (Shor, Grover), a cryptographic protocol (BB84), a formal complexity class (BQP). Around those founding papers, and in the decade and a half that followed, the field developed rapidly:
- Quantum teleportation (Bennett et al., 1993) demonstrated that quantum information could be transmitted using entanglement and classical communication — a protocol that now runs on optical networks in labs and across satellites.
- Quantum error correction (Shor 1995, Steane 1996, Knill–Laflamme 1997) showed that noisy quantum computers could in principle be made fault-tolerant. Part 13 of this track is this whole story.
- The stabilizer formalism (Gottesman 1997) gave a compact description of a huge class of quantum error-correcting codes — now the standard language of quantum hardware design.
- Adiabatic quantum computation (Farhi et al., 2000) offered an alternative computational model that turned out to be equivalent to the circuit model but suggested new hardware ideas.
- Topological quantum computation (Kitaev 1997) proposed fault-tolerance via topologically-protected states — an idea Microsoft is still trying to realise.
- Quantum walks (Aharonov–Ambainis–Kempe–Vazirani 2001) established walks as an algorithmic primitive.
- The first experimental demonstrations — NMR factoring of 15 (2001), ion-trap factoring of 15 (2016), the first commercial quantum annealer from D-Wave (2011) — converted theory into hardware.
- Quantum supremacy claims — Google 2019, USTC 2020–2021 — provided the first demonstrations that a quantum machine could outperform classical simulation on a specific artificial task. Part 12 and Part 19 cover these.
- The National Quantum Mission (India, 2023), the US National Quantum Initiative (2018), the EU Quantum Flagship (2018), China's investments at national scale — governments around the world recognising the field and funding it.
By 2025, the field has its own textbooks (Nielsen and Chuang, Preskill, Watrous), its own journals (Quantum, PRX Quantum), its own multi-billion-dollar industry (IBM, Google, IonQ, Rigetti, Quantinuum, Atom Computing, PsiQuantum, many more), and its own community of tens of thousands of researchers.
Common confusions
- "The quantum information turn is just the history of quantum computing." It is more specific than that. The "turn" is the conceptual shift from viewing quantum mechanics as a physics theory to viewing it as an information-processing resource. You can trace the shift in the vocabulary: "wavefunction" became "quantum state," "measurement" became "read-out," "Hamiltonian evolution" became "unitary gate," "decoherence" became "quantum error." The physics did not change; the questions you ask of it did.
- "Feynman invented quantum computing." Feynman posed the seed question. Deutsch formalised the model. Shor and Grover produced the first compelling algorithms. Bennett and Brassard established the cryptographic side. If you want one origin, Deutsch 1985 is the closest single paper. But the field is genuinely a collective product of the period 1982–1996.
- "The turn happened because of a technological breakthrough." No technological breakthrough was required. All five founding papers are theory. They were written with paper and pencil. The experimental realisations followed decades later, and are still catching up. This is unusual for a computer science subfield and marks quantum computing as an unusually theory-driven discipline.
- "Post-turn quantum mechanics is different from pre-turn quantum mechanics." Mathematically, no. The postulates are unchanged. What changed is the community studying them, the language used, and the problems asked. A 1995 quantum mechanics textbook and a 2025 one on the same topic would have essentially the same equations; the second would have an extra set of chapters on qubits, gates, codes, and complexity.
- "The word 'qubit' was used from the start." The term was coined by Ben Schumacher in 1995, ten years after Deutsch defined the quantum Turing machine. Before 1995, people wrote "a two-level quantum system" or "a spin-half particle." The word is a surprisingly recent convenience.
Going deeper
If you have internalised the five-paper chronology — Feynman 1982 (seed), Deutsch 1985 (model and first algorithm), Bennett–Brassard 1984 (crypto protocol), Shor 1994 (superpolynomial speedup on factoring), Grover 1996 (quadratic search) — and the conceptual claim that the quantum information turn is about treating quantum mechanics as an information resource rather than as a physics theory, you have the heart of this chapter. What follows is the deeper structure of why the turn had the specific shape it had, and a reading list into the frontier.
Why the turn happened when it did, not earlier
Feynman 1982 could in principle have been written in 1960. The mathematics needed — Hilbert spaces, unitary operators, tensor products — was all there. Why did the turn take another twenty years?
Three reasons, roughly. First, classical complexity theory was still being worked out through the 1960s and 1970s; the framework of complexity classes (P, NP, BPP) that quantum complexity needs as its baseline was only formalised in 1971 (Cook, NP-completeness) and 1977 (Gill, BPP). You cannot ask "is BQP strictly more powerful than BPP?" until you have BPP. Second, the physics of controlling individual quantum systems — one atom, one photon — matured in the 1970s and 1980s (Paul traps, laser cooling, optical pumping), making the physical implementation of a qubit plausible rather than science fiction. Third, computer science itself took the idea seriously only after Shor gave it a reason: breaking RSA is a concrete, enormous practical consequence, and computer scientists follow practical consequences.
The hidden subgroup problem and post-Shor algorithmic ambition
After Shor, the natural next question was: what other classical problems would have exponential quantum speedups? Factoring, order-finding, and the discrete logarithm all reduce to the abelian hidden subgroup problem (HSP) — the question of, given a function on a group that is constant on cosets of an unknown subgroup, identifying the subgroup. The quantum Fourier transform solves the abelian HSP in polynomial time.
The non-abelian HSP is the big open question. If a quantum algorithm could solve the general HSP, it would give an exponential speedup on graph isomorphism (a major open problem in classical complexity), shortest-vector problems in lattices (which would break post-quantum lattice cryptography), and many other problems. Thirty years after Shor, the non-abelian HSP remains unsolved — the biggest known target for "the next Shor moment." Part 9 of this track discusses this in detail.
Quantum supremacy — the 2012 term and the 2019 demonstration
John Preskill coined the term quantum supremacy in 2012, in his essay Quantum Computing and the Entanglement Frontier, to name the regime where quantum computers do things that classical computers cannot, regardless of whether those things are useful. He proposed it as a conceptual milestone: the first demonstration that the 2^n wall was real in a runnable experiment, not just in a theorist's argument.
Seven years later, Google's 53-qubit Sycamore processor claimed the milestone on a random-circuit-sampling task. IBM pushed back with classical algorithms that narrowed the gap. USTC in China followed with a photonic (Jiuzhang, 2020) and a superconducting (Zuchongzhi, 2021) demonstration. As of 2025, several supremacy claims exist, each subsequently narrowed by classical-simulation advances — exactly the back-and-forth Preskill predicted.
The term "supremacy" has been criticised, and "quantum advantage" is now preferred in many venues. But the conceptual milestone — a quantum machine doing something a classical machine cannot efficiently reproduce — has clearly been crossed, even if the specific benchmark tasks (random circuit sampling, boson sampling) have no economic use.
The Indian research ecosystem after the turn
By 2025, India has a recognisable quantum information research ecosystem:
- TIFR Mumbai runs an active ion-trap quantum computing group, partnered with TIFR-H and Tata Trusts.
- IISc Bangalore has theoretical and experimental quantum groups, partnered with IBM Quantum.
- IIT Madras, IIT Bombay, IIT Delhi, IIT Kharagpur, IIT Roorkee all have quantum computing / quantum information groups.
- RRI Bangalore (Raman Research Institute) is a world-class quantum optics lab.
- PRL Ahmedabad runs atomic and optical experiments relevant to clocks and magnetometry.
- C-DAC (Centre for Development of Advanced Computing) builds quantum simulation software.
- Industry: Tata Consultancy Services (TCS), Infosys, Wipro, HCL, and several startups (QNu Labs, BosonQ Psi, Qkrishi) work on quantum cryptography, simulation, and algorithms.
The National Quantum Mission coordinates across these institutions. The intellectual pedigree — Bose, Saha, Raman, Bhabha → Chandrasekhar → diaspora researchers (Vazirani, Nayak) → current NQM investigators — is a continuous 100-year thread. Part 21 of this track (landscape 2026) surveys the ecosystem in detail.
Where this leads next
- The Founding Physicists — chapter 209, the 1920s pioneers whose physics the turn turned into information.
- A Final Challenge — chapter 211, the closing chapter with projects to try.
- Feynman's Argument — chapter 2, Feynman's 1982 paper in depth.
- What is Quantum Computing? — chapter 1, the overview this chapter assumes.
- BQP Defined — Part 12, the complexity class Deutsch's model and Bernstein–Vazirani's paper formalised.
References
- David Deutsch, Quantum theory, the Church–Turing principle and the universal quantum computer, Proceedings of the Royal Society of London A 400, 97–117 (1985).
- Charles H. Bennett and Gilles Brassard, Quantum cryptography: Public key distribution and coin tossing, Proceedings of the IEEE International Conference on Computers, Systems and Signal Processing, Bangalore (1984); reprinted in Theoretical Computer Science 560 (2014).
- Peter W. Shor, Algorithms for quantum computation: discrete logarithms and factoring, Proceedings of the 35th Annual Symposium on Foundations of Computer Science (1994); expanded as Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer, SIAM Journal on Computing 26(5) (1997) — arXiv:quant-ph/9508027.
- Lov K. Grover, A fast quantum mechanical algorithm for database search, Proceedings of the 28th Annual ACM Symposium on Theory of Computing (1996) — arXiv:quant-ph/9605043.
- Ethan Bernstein and Umesh Vazirani, Quantum complexity theory, Proceedings of the 25th Annual ACM Symposium on Theory of Computing (1993); journal version in SIAM Journal on Computing 26(5) (1997).
- John Preskill, Lecture Notes on Quantum Computation — theory.caltech.edu/~preskill/ph229.