In short
By 1900, classical physics looked finished. Newton's mechanics, Maxwell's electromagnetism, and Boltzmann's statistical mechanics together explained almost everything. Three nagging problems refused to fit. Blackbody radiation — the light emitted by a hot oven — had a spectrum classical physics could not reproduce; the Rayleigh–Jeans formula diverged at short wavelengths (the ultraviolet catastrophe). The photoelectric effect — electrons ejected by light from a metal surface — behaved as if light came in discrete packets, which classical waves could not explain. Atomic spectra — the sharp, discrete colours emitted by heated gases — made no sense if atoms were tiny classical oscillators. Planck (1900) patched the first mystery with an "act of desperation": energy must come in quanta E = h\nu. Einstein (1905) took the patch seriously and invented the photon. Bohr (1913) quantised the hydrogen atom. Indian contemporaries shaped the new theory: Saha (1920) applied quantum atomic theory to the stars, Raman (1928) discovered the scattering effect that bears his name, and Bose (1924) derived the statistics of photons that, combined with Einstein's extension, founded quantum statistical mechanics. Quantum mechanics was forced on physicists by experiments; qubits are the direct descendants of the puzzles of 1900.
Lord Kelvin stood up at the Royal Institution on 27 April 1900 and gave a lecture that has since become famous — or infamous, depending on whom you ask. The title was Nineteenth-Century Clouds over the Dynamical Theory of Heat and Light. Kelvin was, at 75, the grand old man of British physics. His lecture reviewed the triumphant century: Newton's mechanics perfected by Lagrange and Hamilton, Maxwell's unification of electricity and magnetism, Boltzmann's statistical mechanics. Physics, Kelvin declared, was essentially a finished subject, the theoretical architecture complete. There were just two small clouds on the horizon: the failure of experiments to detect the luminiferous aether, and some curious anomalies in the thermodynamics of radiation.
Within five years, both clouds exploded into full revolutions. The aether problem became special relativity. The radiation problem became quantum mechanics. Kelvin did not live to see the storm; he died in 1907, just two years after Einstein's relativity papers appeared and while Planck's quantum hypothesis was still a curiosity. The 20th century in physics — all of it — lived inside the two clouds that Kelvin thought were nearly gone.
This chapter is about the second cloud, the one that became your subject. You are going to trace the three experimental mysteries that forced quantum mechanics into being — blackbody radiation, the photoelectric effect, atomic spectra — and see how the people who solved them were driven by data, not by elegance. Planck called his own solution "an act of desperation." Einstein called his light-quantum hypothesis "heuristic." Bohr called his atomic model a mystery wrapped in a contradiction. And India's first-generation quantum physicists — Saha, Raman, Bose — arrived in time to shape the new theory from its first decade. By the end of this chapter you will see why the mathematical machinery you are now learning is not an aesthetic choice. It is the only thing the experiments would accept.
What classical physics looked like in 1900
To feel the shock of what went wrong, you need a picture of the classical edifice. Around 1900, physicists had three complete theories:
- Newtonian mechanics. Particles with positions and velocities, forces that cause accelerations, the future deterministic given the present. Perfected by Euler, Lagrange, Hamilton, and Jacobi into the Hamiltonian formalism that, by 1900, could handle any finite system of particles.
- Maxwell's electromagnetism. Electric and magnetic fields filling space, governed by four elegant equations, unifying light, heat radiation, radio waves, and X-rays into a single electromagnetic theory. Maxwell had published his theory in 1865 (the compact four-equation form came later, from Heaviside); Hertz had confirmed it experimentally with radio waves in 1887.
- Statistical mechanics and thermodynamics. Boltzmann's picture of heat as molecular motion, giving a deep statistical foundation to entropy and the second law. By 1900 this was contentious (atoms were not yet fully accepted by every chemist), but the mathematical framework was solid.
Together, these three looked universal. A hot iron bar glowed because its molecules vibrated (statistical mechanics) and the moving charges radiated (electromagnetism). An orbiting planet obeyed Newton. A beam of light was a wave governed by Maxwell. The ambition of the era was to fit every phenomenon into this framework. When Kelvin spoke of two clouds, he meant only that a handful of stubborn details remained.
Why this history matters to a quantum-computing student: the qubit is the child of these three cracks. Energy comes in discrete levels because of Planck. Light interacts with matter in packets because of Einstein. Atoms have sharp spectral lines because of Bohr. The Hilbert-space description you now use exists because experiments made every alternative untenable. Quantum mechanics was not an aesthetic choice; it was the only theory the data would accept.
The first crisis — blackbody radiation and the UV catastrophe
Heat up a piece of iron in a forge. It glows red, then orange, then yellow, then white as it gets hotter. Every hot body radiates electromagnetic energy across a range of wavelengths, and the spectrum — how much energy at each wavelength — depends only on the temperature, not on what the object is made of. This universality is what made the problem interesting.
Physicists idealised a hot object as a blackbody — an object that absorbs all incoming radiation and reaches thermal equilibrium with its own emission. An ideal blackbody at temperature T has a spectrum u(\nu, T) giving the energy density per frequency. By the late 19th century this spectrum had been measured with great precision by Lummer and Pringsheim in Germany, and it had a characteristic shape: a curve that rises with frequency, peaks, and then falls. The peak frequency shifts upward as T increases — Wien's displacement law, \nu_{\text{peak}} \propto T.
The trouble was that classical physics could not reproduce the measured curve.
The classical prediction — and its catastrophe
Treat the radiation inside a blackbody cavity as a gas of electromagnetic waves standing between the walls. Each mode of oscillation (each standing-wave pattern) is a little harmonic oscillator. Statistical mechanics says that in thermal equilibrium, each oscillator gets an average energy of k_B T, where k_B is Boltzmann's constant — the equipartition theorem. The number of modes in a frequency range [\nu, \nu + d\nu] is \frac{8 \pi \nu^2}{c^3} \, d\nu per unit volume. Multiply together and you get the Rayleigh–Jeans formula:

u(\nu, T) = \frac{8 \pi \nu^2}{c^3} \, k_B T
Why this derivation feels inevitable to a classical physicist: equipartition is one of the safest results in statistical mechanics. Counting modes in a cavity is one of the cleanest exercises in electromagnetism. Multiplying the two together is elementary. There is nothing obviously wrong with any step — and yet the answer is a disaster.
The Rayleigh–Jeans formula matches the measured curve at low frequencies (long wavelengths), but at high frequencies it diverges: u(\nu, T) \to \infty as \nu \to \infty. The total energy integrated over all frequencies is infinite. In practice, this means every hot object would radiate infinite energy in the ultraviolet — an obvious absurdity. Paul Ehrenfest later coined the phrase ultraviolet catastrophe.
The experimental curve does something completely different. It rises, peaks, and falls — going to zero at both low and high frequencies. Rayleigh–Jeans matches only the rising part. Wilhelm Wien had proposed a competing formula, based on a thermodynamic argument, that matched the falling part but failed at low frequencies. Neither classical formula worked across the whole range.
Planck's desperate fix
Max Planck, a conservative German theorist, spent the 1890s trying to derive the correct spectrum from first principles. On 14 December 1900, he presented his answer to the German Physical Society in Berlin. The answer was correct; the justification, by his own admission, was a patch.
Planck's hypothesis: the walls of the cavity exchange energy with the radiation not continuously, but in discrete packets. Each oscillator of frequency \nu can only have energies that are integer multiples of a fundamental unit:

E_n = n h \nu, \qquad n = 0, 1, 2, \ldots
where h is a new constant of nature — now called Planck's constant, numerically h \approx 6.626 \times 10^{-34} joule-seconds. An oscillator cannot have energy 0.5 h\nu or 1.3 h\nu; only 0, h\nu, 2h\nu, and so on.
With this quantisation in place, the statistical mechanics changes. Instead of every oscillator having average energy k_B T (equipartition), the average becomes:

\langle E \rangle = \frac{h\nu}{e^{h\nu / k_B T} - 1}
Why the energy per oscillator gets cut off at high frequency: for h\nu \gg k_B T, even exciting the first quantum costs more than the typical thermal energy k_B T, so high-frequency modes almost never get excited. Their contribution to the radiation is exponentially suppressed. The UV catastrophe disappears because the oscillators at high frequency are "frozen out" — thermally inactive — by the discreteness.
Multiplying by the mode count gives Planck's formula:

u(\nu, T) = \frac{8 \pi h \nu^3}{c^3} \, \frac{1}{e^{h\nu / k_B T} - 1}
This matches the experimental blackbody spectrum across the whole frequency range, for every temperature at which it was ever measured. Not to a few percent — to the measurement precision of the era, which by 1900 was already exceptional.
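The contrast between the classical and quantum predictions is easy to check numerically. A minimal Python sketch, with constants rounded to four significant figures and the temperatures chosen for illustration only:

```python
import math

h = 6.626e-34    # Planck's constant, J*s
c = 2.998e8      # speed of light, m/s
kB = 1.381e-23   # Boltzmann's constant, J/K

def planck(nu, T):
    """Planck energy density per unit frequency, u(nu, T)."""
    return (8 * math.pi * h * nu**3 / c**3) / math.expm1(h * nu / (kB * T))

def rayleigh_jeans(nu, T):
    """Classical prediction -- grows without bound as nu**2."""
    return (8 * math.pi * nu**2 / c**3) * kB * T

T = 5000.0  # kelvin; roughly the surface of a sun-like star
for nu in (1e12, 1e14, 1e15):
    # Ratio is near 1 at low frequency, tiny at high frequency:
    print(f"nu = {nu:.0e} Hz  Planck/RJ = {planck(nu, T) / rayleigh_jeans(nu, T):.3g}")

# Wien displacement: the peak frequency of the Planck curve scales linearly with T.
for T in (3000.0, 6000.0):
    peak = max((i * 1e12 for i in range(1, 2001)), key=lambda nu: planck(nu, T))
    print(f"T = {T:.0f} K  nu_peak/T = {peak / T:.3g} Hz/K")
```

The ratio nu_peak/T comes out the same (about 5.9 × 10^10 Hz/K) at both temperatures, which is exactly Wien's displacement law falling out of Planck's formula.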
Why Planck called it an "act of desperation"
Planck did not believe that energy was really discrete. He thought he had found a formal trick that happened to work, and that someone would eventually re-derive the correct spectrum without invoking quantisation. He spent the next 15 years looking for the classical foundation and never found one. In a 1931 letter he wrote: "I can characterise the whole procedure as an act of desperation, since, by nature I am peaceable and opposed to doubtful adventures."
What Planck had actually discovered — and resisted for years — was the first inkling of a new physics in which discreteness, not continuity, was fundamental. He won the 1918 Nobel Prize in Physics for the discovery.
The second crisis — the photoelectric effect
Heinrich Hertz, the same man who had confirmed Maxwell's radio waves in 1887, noticed that same year a small effect he could not explain: ultraviolet light shining on a metal surface made it easier for sparks to jump across a nearby gap. The ultraviolet was somehow knocking charges out of the metal. Philipp Lenard, Hertz's former student, investigated systematically in the 1890s, and by 1902 he had established four strange experimental facts about this photoelectric effect.
- Below a threshold frequency, no electrons come off — regardless of intensity. If red light cannot eject electrons from a given metal, making the red light 100 times brighter still ejects zero electrons. Only light above a specific cutoff frequency \nu_0 (different for different metals) ejects electrons.
- Above threshold, the number of electrons ejected is proportional to the intensity of the light. More photons in means more electrons out — linearly.
- The maximum kinetic energy of the ejected electrons depends on the frequency, not the intensity. Raising the frequency (above threshold) raises the maximum kinetic energy linearly. Doubling the intensity just doubles the number of electrons but keeps their individual energies unchanged.
- Ejection is effectively instantaneous. Even at very low intensities, an electron comes out within nanoseconds of the light starting — no detectable "accumulation" time.
Why classical physics cannot explain any of this
In classical physics, light is a continuous wave. A more intense wave carries more energy; a brighter light should push electrons out more vigorously, regardless of frequency. At low intensity, a classical calculation says the wave needs to pour energy into a single electron for seconds or minutes before ejection happens — not nanoseconds.
The threshold frequency is the most damning anomaly. Nothing in Maxwell's equations says "no ejection below a certain frequency." If you have a wave of intensity I, you have energy flux I; given enough time, a classical electron should absorb any amount of energy. The experimental threshold is a direct fingerprint of something non-classical.
Einstein's 1905 resolution — and the word "quantum" becomes real
In 1905, his annus mirabilis, Albert Einstein published four papers; the one that won him the 1921 Nobel Prize was On a Heuristic Point of View Concerning the Production and Transformation of Light. It proposed something that even Planck had not quite committed to: light itself comes in discrete packets, each of energy E = h\nu. Einstein called these packets Lichtquanta — light quanta; today we call them photons (a term Gilbert Lewis coined in 1926).
Einstein's equation for the photoelectric effect:

K_{\max} = h\nu - W
where K_{\max} is the maximum kinetic energy of an ejected electron, \nu is the frequency of the incoming light, h is Planck's constant, and W is the work function of the metal — the minimum energy needed to pull an electron loose from the metal surface.
Why this equation explains everything Lenard observed: a photon either ejects an electron or it does not — a single photon carries energy h\nu, and if h\nu < W the electron stays put no matter how many more photons arrive. Above threshold (\nu > W/h = \nu_0), each photon delivers the same excess energy h\nu - W to an electron, which explains why individual ejection energies depend on frequency but not intensity. Intensity means "number of photons per second," so more intensity means more electrons, but each one has the same maximum kinetic energy. The instantaneous ejection follows because a single photon-electron interaction is a one-shot event.
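Einstein's relation turns Lenard's qualitative facts into arithmetic. A short Python sketch; the 2.3 eV work function is an illustrative value (roughly that of sodium), not a number from the text:

```python
h = 6.626e-34   # Planck's constant, J*s
eV = 1.602e-19  # joules per electronvolt

def k_max(nu_hz, work_fn_eV):
    """Einstein's K_max = h*nu - W, in eV; None below threshold."""
    k = h * nu_hz / eV - work_fn_eV
    return k if k > 0 else None

W = 2.3                 # eV -- illustrative, roughly sodium
nu0 = W * eV / h        # threshold frequency, ~5.6e14 Hz (green light)

print(k_max(4.5e14, W))   # red light, below threshold: None, at ANY intensity
print(k_max(1.0e15, W))   # ultraviolet: every electron gets the same ~1.8 eV
```

Note what the code cannot express: intensity does not appear anywhere in k_max. That absence is Lenard's third fact.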
Einstein's light-quantum was radical even by the standards of the day. Physicists had accepted Planck's quanta as a property of oscillators in cavity walls, but most assumed light in free space was still a Maxwell wave. Einstein said: no, light itself is discrete. This was rejected by essentially everyone, including Planck, for more than a decade. Robert Millikan spent 1908–1916 trying to disprove Einstein's equation experimentally; he ended up confirming it precisely, and said publicly that he still did not believe Einstein's interpretation — even as his own data supported it to within 1 percent. Millikan received the 1923 Nobel Prize in part for the measurement.
By 1923, Arthur Compton's scattering experiments on X-rays showed photons carrying momentum as well as energy, demonstrating that light-as-particle was not a metaphor — photons really do behave like particles in collisions. Einstein's 1905 hypothesis was finally settled.
The third crisis — atomic spectra and the Bohr atom
A third experimental anomaly had been building for decades. If you heat hydrogen gas (or any gas of atoms) in a discharge tube, it glows — and the light it emits is not continuous like a hot solid. It consists of sharp, discrete spectral lines at very specific wavelengths. Different elements have different line patterns; the pattern is a fingerprint unique to the element.
For hydrogen, the spectral lines in the visible follow a simple numerical pattern that Johann Balmer wrote down in 1885:

\lambda = B \, \frac{n^2}{n^2 - 4}, \qquad n = 3, 4, 5, \ldots, \quad B \approx 364.6 \text{ nm}
Rydberg generalised this for all hydrogen lines:

\frac{1}{\lambda} = R_H \left( \frac{1}{n_1^2} - \frac{1}{n_2^2} \right)
where R_H \approx 1.0974 \times 10^7 \text{ m}^{-1} is the Rydberg constant and n_1 < n_2 are positive integers. Balmer's visible series is n_1 = 2; Lyman's ultraviolet series is n_1 = 1; Paschen's infrared is n_1 = 3.
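The Rydberg formula is a one-liner to evaluate. A quick Python check of the first line of each series named above:

```python
R_H = 1.0974e7  # Rydberg constant, m^-1

def wavelength_nm(n1, n2):
    """Rydberg: 1/lambda = R_H * (1/n1^2 - 1/n2^2); returns lambda in nm."""
    return 1e9 / (R_H * (1.0 / n1**2 - 1.0 / n2**2))

print(wavelength_nm(1, 2))  # Lyman-alpha:   ~121.5 nm (ultraviolet)
print(wavelength_nm(2, 3))  # Balmer-alpha:  ~656 nm   (visible red)
print(wavelength_nm(3, 4))  # Paschen-alpha: ~1875 nm  (infrared)
```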
The Rydberg formula matched thousands of hydrogen spectral measurements across decades of experimental work. It had no classical explanation. Why integers? Why these specific integers? Why squared? A classical atom — Thomson's "plum pudding" model, or Rutherford's 1911 nuclear model of a positive nucleus with electrons orbiting — should either radiate continuously (if the electrons orbit) or not radiate at all (if they are stationary). Neither matched the discrete lines.
Worse, Rutherford's orbital atom had a fatal defect: an accelerating electron (and a circularly orbiting one is accelerating) radiates electromagnetic energy. A classical orbiting electron would spiral into the nucleus in about 10^{-11} seconds, radiating its orbital energy as a continuous spectrum of electromagnetic waves. Atoms manifestly do not do this — they are stable and they radiate only at discrete frequencies.
Niels Bohr's 1913 postulates
Niels Bohr, a young Danish theorist working in Rutherford's Manchester lab, proposed in 1913 three postulates that saved the atom at the cost of admitting a completely new physics:
- Electrons in atoms occupy only certain discrete "stationary" orbits. In these orbits, contrary to classical electromagnetism, they do not radiate.
- The orbits are those for which the angular momentum is a multiple of \hbar = h / 2\pi: L = n\hbar for n = 1, 2, 3, \ldots
- Radiation is emitted or absorbed only when the electron jumps between orbits, with the emitted photon carrying energy E_{\text{photon}} = E_{n_2} - E_{n_1} = h \nu.
These postulates were not derived from anything. They were chosen because they worked. Combined with Newtonian mechanics for the allowed orbits and Coulomb's law for the attraction, they produced an exact prediction for the hydrogen spectrum.
The Bohr radius — Example 2 preview
A quick derivation, to be developed fully in Example 2 below. For the n-th stationary orbit, balance centripetal and Coulomb forces:

\frac{m_e v^2}{r} = \frac{e^2}{4 \pi \epsilon_0 r^2}
Combine with the quantisation of angular momentum, m_e v r = n\hbar, and solve for r. The result is:

r_n = n^2 a_0, \qquad a_0 = \frac{4 \pi \epsilon_0 \hbar^2}{m_e e^2}
where a_0 \approx 0.529 \times 10^{-10} m is the Bohr radius — the radius of the smallest allowed hydrogen orbit. The energy of the n-th orbit works out to:

E_n = -\frac{13.6 \text{ eV}}{n^2}
And the emitted photon frequency for a jump from n_2 to n_1 is exactly the Rydberg formula — the Rydberg constant R_H emerges from the calculation in terms of m_e, e, \epsilon_0, and h, agreeing closely with the value fitted to Balmer's 1885 data.
Why this was a triumph and a crisis at once: the numbers worked, perfectly, for hydrogen. But Bohr's postulates combined classical orbital mechanics with non-classical stability and radiation rules in a way nobody could justify. It was a patch on a patch. The success of the numbers drove everyone to look for the deeper theory; the model itself was known, from 1913 onward, to be provisional. It also failed completely for helium — a two-electron atom — which pointed squarely at something missing.
The Indian contemporaries — Saha, Raman, Bose
While Planck, Einstein, and Bohr were working in Berlin, Bern, and Copenhagen, three Indian physicists were preparing to make contributions that shaped the new theory from its first decade. Their biographies overlap with the quantum revolution so tightly that each story is worth telling briefly.
Meghnad Saha — the Saha equation (1920)
Meghnad Saha (1893–1956) grew up in a village in Dhaka district (now Bangladesh), the son of a small grocer, and worked his way through school on scholarships. At Calcutta University in the 1910s he studied quantum mechanics as it was being invented — reading Planck's and Bohr's papers as they appeared in German and English journals that arrived by ship months late.
In 1920, Saha published the Saha ionisation equation, which uses Boltzmann statistics (classical) combined with Bohr's discrete energy levels (quantum) to calculate the degree of ionisation of an element in a hot gas as a function of temperature and pressure. The equation applied quantum-mechanical atomic models to astrophysics: it explained why hot stars show the spectral lines of ionised atoms while cooler stars show neutral-atom lines. Stellar classification (O, B, A, F, G, K, M — the Harvard sequence of spectral classes) was known empirically before Saha; his equation turned it into a temperature scale.
The Saha equation was one of the first large-scale applications of quantum atomic theory to a physical system beyond the laboratory. It was cited repeatedly in Bohr's own later writing and became foundational for modern astrophysics. Saha founded what became the Saha Institute of Nuclear Physics in Calcutta (now Kolkata) and served in the Indian parliament; the National Physical Laboratory in Delhi and several institutions bear his influence. He was nominated for the Nobel Prize four times, never winning.
C. V. Raman — the Raman effect (1928)
Chandrasekhara Venkata Raman (1888–1970) began as an officer in the Indian Finance Department who did physics in his spare evenings at the Indian Association for the Cultivation of Science in Calcutta. By 1917 he had become professor of physics at Calcutta University; by the 1920s he was doing original work on light scattering.
On 28 February 1928, Raman and his student K. S. Krishnan announced the Raman effect: when monochromatic light scatters off molecules, a small fraction of the scattered light emerges at shifted frequencies. The shift is the difference between the input photon's energy and the vibrational or rotational energy levels of the scattering molecules. Each molecule has its own Raman fingerprint — a set of characteristic shifts that identify the molecule unambiguously.
The Raman effect is a direct experimental demonstration of quantised molecular energy levels. A classical wave would not produce sharp frequency shifts; only if light interacts with molecules in discrete energy steps — absorb a photon, re-emit at a shifted frequency determined by a discrete molecular transition — does the Raman pattern arise. Raman thus confirmed, via a clean experiment, the quantum-mechanical picture of molecular vibrations.
Raman received the 1930 Nobel Prize in Physics — the first Asian, first Indian, first non-white laureate in the sciences. The Raman effect became the foundation of Raman spectroscopy, a workhorse technique in chemistry, materials science, and (in the 21st century) medicine and remote sensing. Every time a pharmaceutical lab in India identifies a drug formulation, a Raman spectrometer is involved.
Satyendra Nath Bose — Bose statistics (1924)
Satyendra Nath Bose (1894–1974), a lecturer at Dhaka University, was trying in 1924 to teach Planck's blackbody formula to his students and finding that the standard derivations, which leaned on classical statistical assumptions, did not satisfy him. He tried a different approach: treat the photons themselves as indistinguishable quantum particles and count the allowed quantum states directly, without assuming any classical distribution of particles over states.
The resulting statistical mechanics — now called Bose–Einstein statistics — produced Planck's blackbody formula cleanly and without any of the classical baggage. Bose sent his paper to Einstein directly (Einstein had the reputation as the open-minded referee), who translated it from English into German, arranged for its publication in Zeitschrift für Physik, and immediately extended Bose's reasoning from photons to massive particles. Einstein's extension predicted Bose–Einstein condensation — a phase of matter in which many bosons occupy the same quantum state — which was experimentally confirmed in 1995, winning a Nobel Prize for Cornell, Wieman, and Ketterle.
Bosons — integer-spin particles that obey Bose–Einstein statistics — are literally named after Bose. The Higgs boson, discovered at CERN in 2012, is part of this family. Every photon, every helium-4 nucleus, every pair of electrons that forms a Cooper pair in a superconductor — all are bosons, and their statistics trace back to the 1924 paper that Bose wrote because a textbook derivation made him unhappy.
Bose did not win a Nobel Prize despite several nominations. His paper is one of the most consequential in the history of statistical physics.
From the old quantum theory to the full formalism (1925–1926)
Between Bohr's 1913 atom and the modern Hilbert-space formalism you are now learning, the field spent a dozen years patching the theory. This was the old quantum theory: Bohr's postulates plus Sommerfeld's elliptic orbits plus ad hoc selection rules plus quantum numbers invented one at a time to match spectra. It worked for hydrogen and sort of worked for alkali metals; it failed for essentially every other atom, molecule, or more complex system. The old quantum theory was a patchwork, and every working physicist knew it.
The breakthrough came from two directions in 1925–1926.
Matrix mechanics (Heisenberg, Born, Jordan, 1925). Werner Heisenberg, recovering from hay fever on the North Sea island of Helgoland in June 1925, replaced Bohr's orbits with algebraic tables of numbers that represented the possible transitions between states. Max Born recognised these tables as matrices. Within months, Heisenberg, Born, and Pascual Jordan had a full matrix-based theory of atomic transitions.
Wave mechanics (Schrödinger, 1926). Erwin Schrödinger, starting from Louis de Broglie's 1924 proposal that every particle has an associated wave with wavelength \lambda = h/p, wrote down a wave equation for the electron's wavefunction. Solving it for hydrogen reproduced Bohr's energy levels exactly, without any ad hoc postulates — the quantisation emerged from the boundary conditions on the wavefunction.
The two formalisms initially looked completely different — Heisenberg's matrices and Schrödinger's waves. In 1926 Schrödinger, and separately Paul Dirac and Pascual Jordan, proved they were mathematically equivalent. By 1927, John von Neumann had unified both in the abstract Hilbert-space formalism that modern quantum mechanics uses.
Everything after 1927 is elaboration of the framework you now know: states as unit vectors in Hilbert space, observables as Hermitian operators, unitary evolution, Born's probability rule, measurement as projection. The three crises that started it — blackbody radiation, the photoelectric effect, atomic spectra — all become routine textbook exercises in the new framework.
Example: Planck's formula from his blackbody derivation
The first worked example — a compact derivation of Planck's formula, working from the quantisation hypothesis to the full spectrum.
Example 1: deriving Planck's blackbody formula from $E_n = n h \nu$
Step 1. The average energy per mode. Consider a single oscillator at frequency \nu in thermal equilibrium at temperature T. Classical statistical mechanics gives it any energy with Boltzmann probability \propto e^{-E / k_B T}. If the allowed energies are discrete — E_n = n h \nu — the probability of finding it in state n is proportional to e^{-n h \nu / k_B T}.
The partition function is a geometric series:

Z = \sum_{n=0}^{\infty} e^{-n h \nu / k_B T} = \frac{1}{1 - e^{-h\nu / k_B T}}
Why this series converges: e^{-h\nu/k_BT} < 1 always (it's the exponential of a negative number), so the geometric series converges to 1 / (1 - r) with r = e^{-h\nu/k_BT}.
Step 2. Compute the average energy. The average energy is \langle E \rangle = \sum_n E_n p_n, where p_n = e^{-n h \nu / k_B T} / Z. Using the trick \langle E \rangle = -\partial_\beta \ln Z with \beta = 1 / k_B T:

\langle E \rangle = \frac{h\nu}{e^{h\nu / k_B T} - 1}
Why this is the dramatic step: for h\nu \ll k_BT (low frequency), e^{h\nu/k_BT} \approx 1 + h\nu/k_BT, so \langle E \rangle \approx k_BT — exactly the equipartition answer. For h\nu \gg k_BT (high frequency), e^{h\nu/k_BT} is huge and \langle E \rangle is exponentially suppressed. The discreteness of energy changes nothing at low frequency and everything at high frequency.
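Both limits can be checked in a few lines of Python, writing x = h\nu / k_B T:

```python
import math

def avg_energy_over_kT(x):
    """Planck's <E> / (k_B T) as a function of x = h*nu / (k_B T)."""
    # math.expm1(x) computes e**x - 1 accurately for small x
    return x / math.expm1(x)

print(avg_energy_over_kT(0.01))  # low frequency: ~1, the equipartition value
print(avg_energy_over_kT(20.0))  # high frequency: ~4e-8, the mode is frozen out
```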
Step 3. Count the modes. The number of electromagnetic modes per unit volume with frequency in [\nu, \nu + d\nu] is (a standard exercise in cavity electrodynamics):

\frac{8 \pi \nu^2}{c^3} \, d\nu
Here c is the speed of light, and the factor of 8π includes both polarisation states of light in 3D.
Step 4. Multiply to get the spectrum. The energy density per unit frequency is mode count times average energy per mode:

u(\nu, T) = \frac{8 \pi \nu^2}{c^3} \cdot \frac{h\nu}{e^{h\nu / k_B T} - 1} = \frac{8 \pi h \nu^3}{c^3} \, \frac{1}{e^{h\nu / k_B T} - 1}
Step 5. Check the limits. At low frequency (h\nu \ll k_BT), e^{h\nu/k_BT} - 1 \approx h\nu/k_BT, so u(\nu, T) \to (8\pi\nu^2 / c^3) \cdot k_B T — the Rayleigh–Jeans limit. At high frequency (h\nu \gg k_BT), e^{h\nu/k_BT} - 1 \approx e^{h\nu/k_BT}, so u(\nu, T) \to (8\pi h \nu^3 / c^3) \cdot e^{-h\nu/k_BT} — Wien's approximation. Planck's formula interpolates between the two correctly.
Result. One quantisation hypothesis — energy comes in units of h\nu — plus standard statistical mechanics recovers the entire blackbody spectrum. Measuring the peak of the spectrum at several temperatures gives Planck's constant h numerically. The 1900 fit to Lummer and Pringsheim's data gave h \approx 6.55 \times 10^{-34} J·s; the modern value is 6.626 \times 10^{-34} — agreeing to 1%.
Interpretation. The calculation shows why quantisation kills the UV catastrophe: at high frequency, the first quantum h\nu is larger than the available thermal energy k_BT, so high-frequency oscillators are frozen out. The discreteness of energy is the mechanism; Planck's constant h sets the scale at which it kicks in.
Example: the Bohr radius of hydrogen
The second worked example. Derive the smallest allowed orbit of the hydrogen atom from Bohr's quantisation postulate.
Example 2: the Bohr radius and the hydrogen ground-state energy
Step 1. Classical mechanics of the circular orbit. An electron of mass m_e, charge -e, orbiting a proton of charge +e at radius r with speed v. Centripetal force equals Coulomb attraction:

\frac{m_e v^2}{r} = \frac{e^2}{4 \pi \epsilon_0 r^2}
Rearranging: m_e v^2 = \frac{e^2}{4 \pi \epsilon_0 r} — the kinetic energy is \frac{1}{2} m_e v^2 = \frac{e^2}{8 \pi \epsilon_0 r}, and by the virial theorem the total energy is half the potential: E = -\frac{e^2}{8 \pi \epsilon_0 r}.
Step 2. Bohr's quantisation postulate. Angular momentum is an integer multiple of \hbar:

L = m_e v r = n \hbar, \qquad n = 1, 2, 3, \ldots
Why this postulate at all: there is no classical reason. Bohr picked it because it made the numbers work. A decade later de Broglie gave it a wave-mechanical justification — the electron's de Broglie wave has to fit an integer number of wavelengths around the orbit — and Schrödinger turned that into a full equation. In 1913 Bohr just wrote it down and checked the consequences.
Step 3. Solve for r. From Step 2: v = n \hbar / (m_e r). Substitute into the force balance:

r_n = \frac{4 \pi \epsilon_0 \hbar^2}{m_e e^2} \, n^2 = n^2 a_0
Step 4. Numerical value for n=1. The Bohr radius is:

a_0 = \frac{4 \pi \epsilon_0 \hbar^2}{m_e e^2}
Plugging in \epsilon_0 = 8.854 \times 10^{-12} F/m, \hbar = 1.054 \times 10^{-34} J·s, m_e = 9.109 \times 10^{-31} kg, e = 1.602 \times 10^{-19} C:

a_0 \approx 5.29 \times 10^{-11} \text{ m} = 0.529 \times 10^{-10} \text{ m}
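The arithmetic of Step 4 can be checked directly in Python with the constants quoted above:

```python
import math

eps0 = 8.854e-12   # vacuum permittivity, F/m
hbar = 1.054e-34   # reduced Planck constant, J*s
m_e = 9.109e-31    # electron mass, kg
e = 1.602e-19      # elementary charge, C

# Bohr radius: a_0 = 4*pi*eps0*hbar^2 / (m_e * e^2)
a0 = 4 * math.pi * eps0 * hbar**2 / (m_e * e**2)
print(a0)  # ~5.29e-11 m
```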
Why this number is important: it is the characteristic length scale of atoms. Every atom is roughly one Bohr radius across (multiplied by some n^2 depending on the most-occupied orbit). Chemistry happens on the Bohr-radius scale. X-ray diffraction patterns of crystals have spacings comparable to a_0. The Bohr radius is the ruler of the molecular world.
Step 5. The ground-state energy. Using E_n = -\frac{e^2}{8\pi\epsilon_0 r_n} and r_n = n^2 a_0:

E_n = -\frac{m_e e^4}{32 \pi^2 \epsilon_0^2 \hbar^2} \, \frac{1}{n^2} = -\frac{13.6 \text{ eV}}{n^2}
The ground-state energy of hydrogen is -13.6 electronvolts — the ionisation energy (the energy required to strip the electron from the atom) is +13.6 eV. This is one of the most famous numbers in physics. It is measurable to 12 decimal places today and agrees with the Bohr calculation at the first level of approximation; the discrepancies at higher precision come from relativistic effects, fine structure, hyperfine structure, and quantum electrodynamics — corrections built on top of Bohr's foundation, not replacements of it.
Step 6. Check against Balmer. A photon emitted when an electron drops from n_2 to n_1 has frequency \nu = (E_{n_2} - E_{n_1}) / h, which gives wavelengths matching the Rydberg formula exactly. The first Balmer line (from n_2 = 3 to n_1 = 2) has wavelength:

\frac{1}{\lambda} = R_H \left( \frac{1}{4} - \frac{1}{9} \right) = \frac{5 R_H}{36} \quad \Longrightarrow \quad \lambda \approx 656 \text{ nm}
This is the famous red Balmer-α line at 656 nm, visible in every hydrogen discharge tube and in the spectra of many stars.
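Steps 5 and 6 can be verified numerically with the same constants; a minimal Python sketch:

```python
import math

eps0 = 8.854e-12   # F/m
hbar = 1.054e-34   # J*s
m_e = 9.109e-31    # kg
e = 1.602e-19      # C
h = 6.626e-34      # J*s
c = 2.998e8        # m/s

def E_n(n):
    """Bohr energy of the n-th level, in joules."""
    return -m_e * e**4 / (32 * math.pi**2 * eps0**2 * hbar**2 * n**2)

print(E_n(1) / e)  # ground state: ~-13.6 eV

# Balmer-alpha: photon emitted in the n=3 -> n=2 jump
lam = h * c / (E_n(3) - E_n(2))
print(lam * 1e9)   # ~656 nm, the red hydrogen line
```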
Interpretation. Bohr's atom produces, from one postulate grafted onto classical mechanics, every feature of the hydrogen spectrum: discrete energy levels, the Balmer series, the ionisation energy, the atomic size. The fact that it works is what made it impossible to ignore; the fact that it only works for hydrogen (and fails for helium) is what drove the next decade's search for the deeper theory.
Common confusions about the old crises
- "Planck believed in photons." Not really — not for many years. Planck introduced quantisation as a mathematical trick for the walls of the cavity, not as a statement about the nature of light. He thought Einstein's 1905 light-quantum was too radical and spent years urging him to back down from it. Planck accepted photons only after Compton scattering in 1923.
- "Einstein won the Nobel for relativity." No — he won it for the photoelectric effect. The 1921 Nobel Prize citation explicitly avoids relativity (still controversial in some circles) and cites "services to theoretical physics, and especially for his discovery of the law of the photoelectric effect." Relativity never won him a Nobel.
- "Bohr's atom was derived from quantum mechanics." It was the other way around. Bohr's atom was a patchwork built before quantum mechanics existed, and quantum mechanics was invented in part to explain why Bohr's postulates worked. The Bohr model is an approximation to the full Schrödinger treatment, not a derivation.
- "The ultraviolet catastrophe was called that at the time." The name was coined by Paul Ehrenfest around 1911; Rayleigh and Jeans themselves did not call it a catastrophe. In 1900 the problem was seen as a puzzling discrepancy, not a crisis.
- "Bose sent his paper to Einstein as a beginner." Bose was already a university lecturer with publications when he sent the paper, and Einstein's translating it for publication was an act of recognition between established researchers, not a master-student interaction. Bose's paper had been rejected by Philosophical Magazine in England first, which is why he sent it to Einstein.
- "Raman's effect was accidentally discovered." False — Raman was specifically looking for molecular scattering effects predicted by classical-quantum hybrid theories, and he built instruments for years to detect them. The effect was a deliberate, sought-for discovery.
- "The old quantum theory was wrong, so it was a waste." It was a patchwork that failed for most systems, but it was the bridge that made the 1925–1926 revolution possible. Every formulation of modern quantum mechanics — matrix mechanics, wave mechanics, Dirac's transformation theory, von Neumann's Hilbert-space framework — solved problems the old theory had made visible but couldn't handle. The old theory was scaffolding, not error.
Going deeper
You have seen the three crises, the people who cracked them, and the outline of how the old quantum theory of 1913–1924 gave way to the full formalism of 1925–1926. The going-deeper section below assumes you want to sharpen the historical picture with a few threads the main text only touched on: de Broglie's matter waves as the missing link, the other experiments that supported the quantum picture, the specific sociology of how physicists argued about it, and how this history connects directly to qubits.
De Broglie's matter waves — 1924
Louis de Broglie, a French aristocrat and graduate student, proposed in his 1924 doctoral thesis that just as light (a wave) exhibits particle properties, particles (electrons, atoms) exhibit wave properties. The wavelength associated with a particle of momentum p is:
\lambda = \frac{h}{p}
De Broglie's thesis advisor, Paul Langevin, sent a copy to Einstein, who wrote back that the proposal was "certainly correct" and should be taken seriously. De Broglie's waves gave an immediate physical interpretation of Bohr's quantisation postulate: the allowed orbits are those in which the electron's de Broglie wave fits an integer number of wavelengths around the circumference (2\pi r_n = n \lambda = n h / (m_e v), which rearranges to Bohr's m_e v r_n = n \hbar).
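The closure condition can be checked numerically: at the ground-state speed v_1 = \hbar / (m_e a_0) that follows from Bohr's quantisation rule, the electron's de Broglie wavelength fits around the n=1 circumference exactly once. A minimal sketch:

```python
# De Broglie wavelength of the n=1 electron vs. the orbit circumference.
import math

hbar = 1.054e-34   # reduced Planck constant, J*s
h = 2 * math.pi * hbar
m_e = 9.109e-31    # electron mass, kg
a0 = 5.29e-11      # Bohr radius, m

v1 = hbar / (m_e * a0)           # ground-state speed, from m_e * v * r = hbar
lam = h / (m_e * v1)             # de Broglie wavelength, lambda = h / p
circumference = 2 * math.pi * a0

print(f"lambda   = {lam:.4e} m")
print(f"2*pi*a_0 = {circumference:.4e} m")
print(f"ratio    = {circumference / lam:.4f}")  # 1.0000: one wavelength per orbit
```

The ratio is exactly 1 by construction: substituting v_1 into \lambda = h/p reproduces 2\pi a_0, which is the algebraic content of the standing-wave picture.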
De Broglie's hypothesis was experimentally confirmed in 1927 by Davisson and Germer (electron diffraction from a nickel crystal) and independently by Thomson (electron diffraction through thin films). Wave-particle duality became a physical reality.
This is where the modern picture comes from: a qubit is a wavefunction, a superposition is an interference pattern of amplitudes, the Born rule is a probability interpretation of wave amplitudes. The line from 1905 photons through 1924 matter waves to 1926 Schrödinger to 2026 qubits is direct.
The other supporting experiments
The three crises were the most famous, but the quantum revolution had a supporting cast.
- Specific heats of solids (Einstein 1907, Debye 1912). Classical equipartition predicts a specific heat of 3R (Dulong–Petit) independent of temperature; experimentally, specific heats drop to zero as T \to 0. Einstein's 1907 paper applied quantisation to atomic vibrations and produced a temperature-dependent formula that Debye refined in 1912. Another victory of discreteness over continuity.
- The Compton effect (1923). X-ray photons scatter off electrons and lose energy in a way that matches photon-electron billiard-ball collisions exactly. This was the experiment that converted most remaining skeptics of Einstein's 1905 light-quantum.
- Stern–Gerlach experiment (1922). Silver atoms passed through an inhomogeneous magnetic field split into two discrete beams rather than forming a continuous distribution. This was the first direct demonstration of spatial quantisation — angular momentum takes discrete values — and provided the first evidence for what would later be called spin. See the Stern-Gerlach chapter for more.
- Raman scattering (1928, already discussed). Direct confirmation of molecular quantum-level structure.
Taken together, the evidence from 1900 to 1928 made classical physics untenable for microscopic phenomena. Quantum mechanics was not a choice; it was a demand.
The sociological story — how physicists argued
The adoption of quantum mechanics was messy. Planck never fully accepted the implications of his own 1900 result. Einstein invented the photon and then spent the last 30 years of his life fighting the probabilistic interpretation of quantum mechanics that other physicists built on top of it ("God does not play dice"). Bohr became the high priest of the Copenhagen interpretation but admitted that no one really understood what it meant. Schrödinger's equation was adopted immediately; his own preferred interpretation (classical waves in configuration space) was not.
A generation of young physicists — Heisenberg, Pauli, Jordan, Dirac, born around 1900 — built the modern formalism almost without the older generation's active participation. By the early 1930s they had the mathematical structure complete; the interpretation remained contentious and, to some extent, still is.
The lesson for you as a quantum-computing student: the field's founders did not understand everything they had found. They knew the formalism worked, and they accepted the experimental constraints, and they disagreed forever about what any of it meant. Ninety years later, the formalism still works, the experiments still constrain us, and interpretations still disagree — but the computational structure (qubits, gates, measurements, entanglement) is cleanly defined and experimentally well-tested. You are standing on their mathematical work, not their philosophy.
The direct line to quantum computing
Every concept in quantum computing has roots in the old crises.
- Quantised energy levels → qubits, the two-level system, |0\rangle and |1\rangle as ground and excited states.
- Photons as discrete carriers → photonic qubits, quantum communication, quantum key distribution.
- Wave-particle duality → superposition, interference, the whole amplitude-based formalism.
- Quantum statistics (Bose) → bosonic systems, photonic quantum computing, boson sampling.
- Planck's constant h → the coupling between energy and frequency that underlies every gate, since a rotation by angle \theta on a qubit takes time t = \theta \hbar / E for some driving energy E.
When you implement a Hadamard gate on a superconducting transmon qubit, you are sending a microwave pulse at the qubit's transition frequency \nu = \Delta E / h — using Einstein's E = h\nu directly. When you measure a qubit's state via dispersive readout, you are using the coupled-oscillator physics that Planck invented to describe cavity radiation. The old crises are not a historical prologue to quantum computing; they are the working physics of quantum computing, still active every nanosecond on every quantum device in the world.
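A quick numerical illustration of E = h\nu on hardware, with a 5 GHz transition frequency and a 25 MHz Rabi frequency chosen as typical-but-assumed transmon values (not taken from the text):

```python
# E = h*nu applied to a superconducting qubit. The 5 GHz transition and
# 25 MHz Rabi frequency are illustrative assumptions, typical of
# transmon devices.
import math

h = 6.626e-34      # Planck constant, J*s
e = 1.602e-19      # elementary charge, C (for J -> eV conversion)

nu = 5e9           # qubit transition frequency, Hz (assumed)
dE = h * nu        # energy splitting between |0> and |1>
print(f"Delta E = {dE:.3e} J = {dE / e * 1e6:.1f} micro-eV")  # ~20.7 micro-eV

# A resonant drive rotates the qubit at the Rabi angular frequency Omega;
# a rotation by theta takes t = theta / Omega (the text's t = theta*hbar/E
# with drive energy E = hbar*Omega).
Omega = 2 * math.pi * 25e6   # Rabi angular frequency, rad/s (assumed)
t_pi = math.pi / Omega       # duration of a pi rotation (an X gate)
print(f"pi-pulse duration = {t_pi * 1e9:.0f} ns")             # 20 ns
```

The microelectronvolt-scale splitting is why these qubits are driven by microwaves rather than light, and the tens-of-nanoseconds gate time sets the clock speed of the processor.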
What the Indian pioneers' work means for today
Saha's ionisation equation is still the foundational calculation in stellar astrophysics; Indian astronomers at IIA Bangalore, IUCAA Pune, and IIST Thiruvananthapuram use it every day. Raman spectroscopy is the standard tool in Indian pharmaceutical chemistry, forensic science, and materials research; the Raman Research Institute in Bangalore continues world-class work in quantum optics. Bose statistics underlies every photonic and bosonic quantum-computing experiment; Xanadu's photonic chips, Chinese boson-sampling experiments at USTC, and neutral-atom platforms worldwide all rely on Bose's 1924 counting.
The National Quantum Mission, launched in 2023 with ₹6000 crore of funding over eight years, builds on research centres that already carry this legacy in their names — the Saha Institute, the Raman Research Institute, the Bose Institute (Kolkata, named for Jagadish Chandra Bose, who was S. N. Bose's teacher and an early radio-wave experimentalist). The physical systems you now build quantum computers with — the Bose-Einstein condensates of neutral-atom platforms, the coherent photons of photonic platforms, the Saha-equation-derived stellar targets of quantum sensing — carry the names of Indian physicists forward into the technology.
Where this leads next
- What is Quantum Computing? — chapter 1, the overview that this history justifies.
- The Founding Physicists — the biographical chapter that expands on Planck, Einstein, Bohr, and their Indian contemporaries.
- The Quantum Information Turn — how the 1980s reframed quantum mechanics from "theory of matter" to "theory of information."
- Feynman's Argument for Quantum Computers — the 1982 paper that turned quantum physics into quantum computing.
- Stern-Gerlach — the 1922 experiment that first demonstrated spatial quantisation and gave the first direct evidence for spin.
References
- Max Planck, On the Law of Distribution of Energy in the Normal Spectrum (1901) — Annalen der Physik translation (Wikipedia summary).
- Albert Einstein, On a Heuristic Point of View Concerning the Production and Transformation of Light (1905) — Annalen der Physik translation (AIP).
- Niels Bohr, On the Constitution of Atoms and Molecules (1913) — Philosophical Magazine original (Wikipedia summary).
- S. N. Bose, Planck's Law and the Hypothesis of Light Quanta (1924) — Zeitschrift für Physik (Wikipedia summary of Bose-Einstein statistics).
- C. V. Raman and K. S. Krishnan, A New Type of Secondary Radiation (1928) — Nature 121, 501 (Wikipedia Raman effect).
- Wikipedia, Old quantum theory — concise summary of the 1900–1925 patchwork period.