In short

Condensed matter physics is the study of what happens when 10^{23} electrons share a crystal. They do not move independently. They push each other through Coulomb forces, obey the Pauli exclusion principle, and organise themselves into collective states: superconductors that carry current without resistance, ferromagnets that remember their orientation, topological insulators with protected edge modes, quantum spin liquids with no long-range order but long-range entanglement. The minimal models that capture this physics have existed for decades. The 2D Hubbard model,

H_{\text{Hub}} = -t \sum_{\langle ij\rangle, \sigma} \left(c^\dagger_{i\sigma} c_{j\sigma} + \text{h.c.}\right) + U \sum_i n_{i\uparrow} n_{i\downarrow},

is believed by most to capture the essence of high-temperature superconductivity in copper-oxide ceramics. Six decades of classical effort have not resolved its phase diagram at intermediate U/t and hole doping. The Heisenberg and Ising models describe magnets; Kitaev and toric code models describe topological phases. Quantum computers target these Hamiltonians through Trotter and qubitisation for dynamics, VQE and phase estimation for ground states, and specialised Gibbs-sampling algorithms for finite temperature. Near-term NISQ work has demonstrated Heisenberg chains of \sim 50 sites and 2D models up to \sim 20 sites. Useful quantum advantage on condensed matter — like chemistry — is a fault-tolerant target for the 2030s, though analog quantum simulators (cold atoms, trapped ions, Rydberg arrays) already answer questions classical methods cannot, and are a different animal from gate-based quantum computers.

In the Haber-Bosch plant that makes India's fertiliser, electrons are breaking the triple bond of \text{N}_2. Six chapters ago that was molecular chemistry. In the high-voltage transmission line that carries Gujarat's solar electricity to Maharashtra, electrons are flowing through a copper conductor with some resistance. In a levitating magnet above a cooled ceramic disc in a physics lab, electrons are flowing with no resistance at all. What is the difference?

The difference is that in a solid — a lot of atoms arranged in a regular crystal — electrons are not independent. They share the same lattice, they push each other through long-range Coulomb forces, and quantum-mechanically they are indistinguishable fermions. The collective state that emerges is not the sum of individual behaviours; it is a new thing. That new thing might be a metal, an insulator, a magnet, a superconductor, a charge-density wave, a Mott insulator, a quantum spin liquid, or a topological insulator. The whole of condensed matter physics is the study of which collective state forms for which combination of lattice, interaction strength, and filling — and why.

The previous chapter on simulating chemistry handled one molecule at a time. Condensed matter widens the scope: a million or a billion interacting electrons, arranged on a periodic lattice. The exponential wall is even more brutal — no amount of symmetry reduction rescues you — and the physics is richer, because the possible collective states are more varied and less understood. Quantum computers, Feynman's proposal goes, should be especially good here: the hardware is natively a lattice of interacting quantum systems, and the Hamiltonians that matter are naturally local on that lattice.

This chapter is that case. You will meet the handful of model Hamiltonians (Hubbard, Heisenberg, Ising, Kitaev) that capture most of condensed-matter physics, see why classical methods genuinely cannot solve them beyond one dimension, and learn what quantum algorithms and quantum simulators have demonstrated so far. Like chemistry, useful condensed-matter quantum simulation is a fault-tolerant target — but unlike chemistry, analog quantum simulators (cold atoms, Rydberg arrays) are already doing some of this work today, so the story has more ongoing news.

The models that capture (most of) condensed matter

A real solid has 10^{23} atoms, each with many electrons, interacting via Coulomb forces, moving in complex lattice geometries. You cannot write down the true Hamiltonian and solve it. What condensed-matter physicists do instead is extract minimal models that capture the essential physics of a specific phenomenon. Four models dominate.

The Hubbard model — the standard model of strongly correlated electrons

The Hubbard model (Hubbard 1963, Gutzwiller 1963, Kanamori 1963) keeps only two terms:

H_{\text{Hub}} = -t \sum_{\langle ij\rangle, \sigma} \left(c^\dagger_{i\sigma} c_{j\sigma} + \text{h.c.}\right) + U \sum_i n_{i\uparrow} n_{i\downarrow}.

Here c^\dagger_{i\sigma} creates an electron with spin \sigma \in \{\uparrow, \downarrow\} on lattice site i, n_{i\sigma} = c^\dagger_{i\sigma} c_{i\sigma} is the number operator, \langle ij\rangle denotes nearest-neighbour pairs on the lattice, and "h.c." means Hermitian conjugate (the c^\dagger_i c_j term has a partner c^\dagger_j c_i). Two parameters: t, the hopping amplitude that lets an electron tunnel to a neighbouring site, and U, the Coulomb energy cost of placing two electrons (necessarily of opposite spin, by Pauli exclusion) on the same site.

That is it. Two parameters, a lattice, a filling ("how many electrons per site"). From this, and only this, Philip Anderson argued in 1987 that you should get the phase diagram of the cuprate high-temperature superconductors. The 2D Hubbard model at intermediate U/t near half-filling with some hole doping is believed to contain superconductivity, antiferromagnetism, pseudogap physics, and a strange-metal phase — and classical methods, after 40 years of trying, cannot map that phase diagram reliably.

Why the Hubbard model is special: it strips the problem to the bare minimum needed to generate strong correlations (the U term, which couples opposite-spin electrons on the same site) and lattice kinetic energy (the t term), with nothing else distracting. When U = 0, electrons are free and the physics is trivial band theory. When t = 0, electrons are frozen and the physics is localised spin physics. The interesting regime is U \sim t, where the two effects compete, and analytical methods break down. Numerics become the only hope — and classical numerics hit walls in 2D.

The Heisenberg model — interacting spins

Take the Hubbard model in the large-U limit at half-filling (exactly one electron per site). Double occupancy is suppressed by the huge U; the charge degrees of freedom are frozen; only the spin degree of freedom remains. Second-order perturbation theory gives an effective Hamiltonian — the antiferromagnetic Heisenberg model:

H_{\text{Heis}} = J \sum_{\langle ij\rangle} \vec S_i \cdot \vec S_j,

where \vec S_i = (S_i^x, S_i^y, S_i^z) are spin-1/2 operators and J = 4t^2/U > 0 for antiferromagnetic coupling. Each qubit is a spin; the Hamiltonian is a sum of two-body local terms — three weight-2 Pauli strings (XX, YY, ZZ) per bond — so the total Pauli-term count is O(number of bonds).
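The large-U reduction can be checked numerically on the smallest instance. A minimal sketch, not from the text: a two-site Hubbard dimer at half filling, restricted to the S_z = 0 sector (the basis is |\uparrow_1\downarrow_2\rangle, |\downarrow_1\uparrow_2\rangle, |\uparrow\downarrow, 0\rangle, |0, \uparrow\downarrow\rangle, with one standard fermionic sign convention). The splitting between the singlet ground state and the triplet at E = 0 should approach J = 4t^2/U.

```python
import numpy as np

def singlet_triplet_gap(t, U):
    """Two-site Hubbard dimer at half filling, S_z = 0 sector.

    Basis: |up1,dn2>, |dn1,up2>, |updn,0>, |0,updn> (one standard
    sign convention; the spectrum does not depend on the choice)."""
    H = np.array([[0., 0., -t, -t],
                  [0., 0., +t, +t],
                  [-t, +t,  U, 0.],
                  [-t, +t, 0.,  U]])
    w = np.linalg.eigvalsh(H)          # sorted: singlet, triplet (0), ...
    return w[1] - w[0]                 # singlet-triplet splitting
```

The exact gap is (\sqrt{U^2 + 16t^2} - U)/2; at U/t = 20 it agrees with the perturbative J = 4t^2/U to about one percent, and the agreement improves as U/t grows.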

In 1D, the Heisenberg antiferromagnet is solved exactly (Bethe ansatz, 1931, Hans Bethe). In 2D on a square lattice, the ground state is an antiferromagnet with long-range Néel order. In 2D on triangular or kagome lattices — where the antiferromagnet cannot satisfy all bonds simultaneously (frustration) — the ground state may be a quantum spin liquid: no long-range magnetic order, but long-range entanglement. Whether any material realises a gapped spin liquid is an open experimental question; whether specific Hamiltonians theoretically produce one is also an open question, actively under attack by classical tensor-network methods and quantum simulators.

The transverse-field Ising model — the simplest phase transition

Drop the vector structure and keep only z-z couplings plus a transverse x field:

H_{\text{TFIM}} = -J \sum_{\langle ij\rangle} Z_i Z_j - h \sum_i X_i.

The parameter ratio h/J drives a quantum phase transition at h/J = 1 in 1D: for h \ll J, the ground state is ferromagnetic (all spins aligned); for h \gg J, it is a paramagnet (all spins aligned with X). The transition is continuous and exhibits universal critical exponents. The TFIM is the "hydrogen atom" of quantum phase transitions — the first model every condensed-matter student meets, and a pedagogically clean target for quantum simulation.
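The crossover is easy to see by brute force on a small chain. A minimal sketch (dense exact diagonalisation of an open chain; tfim_correlation is our own helper, not a library function) that measures the end-to-end correlation \langle Z_1 Z_L\rangle in the ground state:

```python
import numpy as np
from functools import reduce

def tfim_correlation(L, J, h):
    """Ground-state end-to-end correlation <Z_1 Z_L> of an open TFIM chain."""
    X = np.array([[0., 1.], [1., 0.]]); Z = np.diag([1., -1.]); I = np.eye(2)

    def op(assign):  # tensor product with Paulis at the sites in `assign`
        return reduce(np.kron, [assign.get(k, I) for k in range(L)])

    H = -J * sum(op({i: Z, i + 1: Z}) for i in range(L - 1))
    H = H - h * sum(op({i: X}) for i in range(L))
    w, v = np.linalg.eigh(H)
    g = v[:, 0]                                  # ground state
    return float(np.real(g @ op({0: Z, L - 1: Z}) @ g))
```

On 8 sites the correlation is close to 1 deep in the ferromagnet (h = 0.2J) and near 0 in the paramagnet (h = 2J); the finite-size crossover sharpens into the h/J = 1 transition as L grows.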

Topological models — Kitaev honeycomb, toric code

Alexei Kitaev's 2006 honeycomb model:

H_{\text{Kitaev}} = -J_x \sum_{x\text{-bonds}} X_i X_j - J_y \sum_{y\text{-bonds}} Y_i Y_j - J_z \sum_{z\text{-bonds}} Z_i Z_j,

on the honeycomb lattice with three kinds of bonds. It is exactly solvable (a remarkable accident) and supports non-Abelian topological phases. The toric code (chapter 115) is another topological model with a ground state that encodes protected logical qubits. Topological phases are interesting because their ground-state degeneracy is robust against arbitrary local perturbations — a structural robustness that is the basis of topological quantum computation.

[Figure: four panels — Hubbard (square lattice, −t hop + U on-site), Heisenberg (J S_i · S_j), TFIM (−J ZZ − h X), Kitaev honeycomb (XX, YY, ZZ bonds).]
The four model Hamiltonians that organise condensed matter. Hubbard on a square lattice — the high-temperature superconductivity target. Heisenberg chain — magnets and spin liquids. Transverse-field Ising — the simplest quantum phase transition. Kitaev honeycomb — topological phases with non-Abelian anyons. Each is a sum of local terms, mapped to qubit Paulis directly (for spin models) or via Jordan-Wigner (for fermionic Hubbard).

Why classical simulation hits a wall — in 2D

Classical condensed-matter simulation has an enormous toolkit. Unlike chemistry, where exponential scaling in electron count kills everything beyond \sim 50 electrons, condensed matter has good methods in 1D and a partial toolkit in 2D. The wall is not at n \sim 50; it is at 2D with strong correlation.

Exact diagonalisation. For a 2D Hubbard model on a 4 \times 4 square (16 sites, 32 spin-orbitals), the Hilbert space is 2^{32} \approx 4 \times 10^9 states. Exact diagonalisation at half-filling (with symmetry reduction) handles this on a supercomputer. 5 \times 5 is the practical ceiling; beyond that, the matrix dimension exceeds available memory.
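The memory wall is quick to quantify. Particle-number conservation restricts the problem to the sector with fixed (N_\uparrow, N_\downarrow), of dimension \binom{N}{N_\uparrow}\binom{N}{N_\downarrow}. A back-of-envelope sketch (real codes exploit further symmetries and sparse storage, so treat the byte counts as orders of magnitude only):

```python
from math import comb

# Sector dimension at half filling for an L x L Hubbard cluster, and the
# memory for a single complex128 state vector (16 bytes per amplitude).
for L in (4, 5, 6):
    n = L * L
    n_up, n_dn = n // 2, n - n // 2        # half filling (split odd n)
    dim = comb(n, n_up) * comb(n, n_dn)
    print(f"{L}x{L}: sector dim = {dim:.3e}, one vector = {dim * 16 / 1e9:.1f} GB")
```

For 4 \times 4 one Lanczos vector is a few gigabytes — supercomputer territory but feasible; for 5 \times 5 a single vector runs to hundreds of terabytes, which is why that size sits at the edge even with aggressive additional symmetry reduction.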

Density Matrix Renormalisation Group (DMRG). Invented by Steven White in 1992, DMRG is the undisputed champion of 1D quantum simulation. It keeps the wavefunction as a matrix product state — a chain of small tensors whose bond dimension \chi controls the accuracy. For 1D gapped systems, DMRG achieves essentially exact results with polynomial cost. For spin chains of 1000 sites, DMRG gets the ground state to 10+ digit accuracy in a few hours.

In 2D, DMRG fails more softly. You can still run it on thin strips or cylinders (L_y \sim 6–10 sites wide), but the required bond dimension grows exponentially with L_y — a direct consequence of the area law for entanglement: the entanglement entropy across a bipartition grows with the boundary length. In 1D the boundary is two points (constant); in 2D it scales with L_y. Cylinders of modest width are therefore tractable; genuinely 2D extrapolations are at the frontier.

Projected Entangled Pair States (PEPS). 2D generalisation of matrix product states. Much more powerful in principle than DMRG for 2D, much harder to optimise in practice. Active research; not yet a routine tool.

Quantum Monte Carlo. Sample the partition function stochastically. Exact for bosons and unfrustrated systems; fails on frustrated or fermionic models due to the sign problem: the sampling weights can go negative, and statistical noise grows exponentially with system size and inverse temperature. The 2D Hubbard model away from half-filling hits the sign problem hard, which is precisely the doping regime where superconductivity is believed to live. Frustrated Heisenberg models on triangular or kagome lattices hit the sign problem too.

[Figure: classical simulation in 1D versus 2D. 1D: DMRG essentially exact, Bethe ansatz for integrable cases, QMC for unfrustrated bosons — classical methods win. 2D with correlation: DMRG only on narrow strips, QMC killed by the sign problem, PEPS immature — no classical method is decisive; the strongly correlated 2D regime is the quantum-simulation frontier.]
The landscape. In 1D, DMRG is so good that quantum computers have no foreseeable advantage — the area law keeps classical bond dimensions small. In 2D with strong correlation, every classical method breaks down in some regime; that is where a quantum computer would matter. The 2D Hubbard phase diagram, quantum spin liquids, and frustrated magnets live in this frontier.

The operational rule. If a system is 1D, DMRG probably solves it — do not build a quantum computer to beat DMRG on a spin chain. If a system is 2D and gapped without frustration, QMC probably solves it. If a system is 2D with strong correlation, frustration, or fermionic sign problems, you are in quantum-simulation territory.

The quantum approach — four algorithmic tools

Everything you learned in chapters 129–134 now specialises to condensed matter. Four primitives carry the programme:

Hamiltonian simulation (Trotter, qubitisation). Apply e^{-iHt} to an input state to follow the real-time dynamics — quench experiments, transport measurements, out-of-equilibrium propagation. Trotter and qubitisation from chapters 131–134 translate directly; the Hamiltonians are local and the gate counts are polynomial in system size and time.

Adiabatic state preparation. Start with a simple Hamiltonian H_0 whose ground state you can prepare, and slowly interpolate H(s) = (1-s)H_0 + s H_{\text{target}}. If the interpolation is slow enough — runtime scaling like the inverse square of the minimum spectral gap along the path — the adiabatic theorem guarantees you end up in the ground state of H_{\text{target}}. That minimum gap can be exponentially small near quantum phase transitions, making this a subtle business.
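A toy numerical experiment makes the sweep-rate condition concrete. The sketch below is our own small example, not from the text: four spins, starting Hamiltonian H_0 = -\sum_i X_i (ground state |{+}\cdots{+}\rangle), target an ordered-phase TFIM; the Schrödinger equation is integrated with a piecewise-constant H(s) at the midpoint of each step. A slow sweep lands on the target ground state; a fast one does not.

```python
import numpy as np
from functools import reduce

L = 4
X = np.array([[0., 1.], [1., 0.]]); Z = np.diag([1., -1.]); I = np.eye(2)
op = lambda a: reduce(np.kron, [a.get(k, I) for k in range(L)])

H0 = -sum(op({i: X}) for i in range(L))          # ground state |+...+>
H1 = (-sum(op({i: Z, i + 1: Z}) for i in range(L - 1))
      - 0.5 * sum(op({i: X}) for i in range(L)))  # ordered-phase TFIM target

def sweep(T, steps=400):
    """Evolve |+...+> under H(s) = (1-s)H0 + s H1 over total time T;
    return the overlap-squared with the true ground state of H1."""
    psi = np.ones(2**L, complex) / 2**(L / 2)     # |+...+>
    dt = T / steps
    for k in range(steps):
        s = (k + 0.5) / steps                     # midpoint schedule
        w, v = np.linalg.eigh((1 - s) * H0 + s * H1)
        psi = v @ (np.exp(-1j * w * dt) * (v.conj().T @ psi))
    w1, v1 = np.linalg.eigh(H1)
    return abs(v1[:, 0].conj() @ psi) ** 2
```

With the finite-size gap of this 4-spin instance, T = 50 is comfortably adiabatic while T = 0.5 is nearly a sudden quench; comparing the two fidelities shows the gap condition at work.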

Variational approaches (VQE, QAOA). Parameterise a trial state; measure \langle H\rangle; classically optimise. Specialised ansatze exist for Hubbard-like models ("Hamiltonian variational ansatz") that exploit the structure of the target. QAOA-style ansatze target combinatorial embeddings of spin problems.

Gibbs sampling. At finite temperature T, the system is in the Gibbs state \rho_\beta = e^{-\beta H}/Z. Quantum algorithms for preparing Gibbs states exist (minimum-memory Poulin-Wocjan, quantum Metropolis, dissipative preparation); they are less mature than zero-temperature methods but are the natural tool for finite-temperature condensed matter (specific heat, magnetic susceptibility, critical scaling).

Applications — the open questions

What would a useful quantum condensed-matter simulation actually answer?

The 2D Hubbard phase diagram. The big one. At U/t \sim 8 and hole doping \sim 15\%, the model is believed to superconduct. Classical DMRG on cylinders sees hints of d-wave pairing but cannot commit. A gate-based quantum simulator of \sim 100 sites with fault-tolerant precision would settle thirty-plus years of debate. Resource estimates are comparable to chemistry — tens of thousands of T-gates per Trotter step, thousands of logical qubits, hours of quantum compute per phase-diagram point.

Quantum spin liquids. The kagome Heisenberg antiferromagnet and similar frustrated models may have ground states with topological order and no classical analogue. Identifying which Hamiltonians realise which spin liquids is open; experiments on herbertsmithite (a Cu-based kagome mineral) hint at a spin-liquid state but are inconclusive. Quantum simulators could map the phase diagram of frustrated Heisenberg models directly and tell condensed-matter theorists which microscopic Hamiltonians realise which phases.

Room-temperature superconductivity. A materials-design problem, not purely a model-Hamiltonian problem. If quantum simulation can reliably predict whether a proposed high-T_c material design will superconduct — and at what temperature — the payoff is civilisation-altering: lossless electricity transmission, cheaper MRI magnets, maglev trains, compact fusion reactors. No one claims this is close, but it is the long-term banner.

Topological quantum memories. Non-Abelian anyons in Kitaev-like models could store logical qubits with built-in error correction. Classically we do not know the full phase diagram of candidate topological materials; quantum simulators could characterise them. This loops back into the error-correction chapters.

Materials for batteries, solar cells, hydrogen storage. Practical materials design, where the electronic structure is 2D and strongly correlated at the relevant surfaces. The overlap with chemistry is substantial.

[Figure: the believed 2D Hubbard phase diagram — hole doping versus U/t, with an antiferromagnet at half-filling, pseudogap and strange-metal regions away from it, a contested d-wave superconducting region at intermediate doping, and a "???" marking the disputed territory.]
The believed phase diagram of the 2D Hubbard model. Antiferromagnet at half-filling; pseudogap at small doping; a contested $d$-wave superconducting region at intermediate doping; strange-metal behaviour near optimal doping. Thirty years of classical computation have not nailed down the boundaries or the precise nature of the superconducting state. A useful quantum simulator would.

Hype check. You will see press releases saying a quantum computer "solved" a condensed-matter model after demonstrating a short Trotter simulation of a 20-site chain. Be sceptical. Small 1D systems are classically trivial — DMRG does them better. The claim "solved" is only meaningful in the regime where classical methods genuinely fail, which is 2D with strong correlation, at scales of at least \sim 10 \times 10 sites, with fault-tolerant precision. No such result exists as of 2026. When it does, it will be a genuine scientific milestone. In the meantime, analog quantum simulators (cold atoms, Rydberg arrays) already contribute real physics at scales classical methods cannot match — but those are different machines from gate-based quantum computers, and they answer different questions. The press often conflates them.

Worked examples

Example 1 — A $2 \times 2$ Hubbard model on 8 qubits

Setup. Take the simplest non-trivial 2D Hubbard instance: a 2 \times 2 square lattice with 4 sites, half-filled (4 electrons, 2 up-spin and 2 down-spin). Each site has spin up and spin down, so there are 8 spin-orbitals and we need 8 qubits under Jordan-Wigner.

Step 1. Label the sites 1, 2, 3, 4 going around the square. The hopping term connects sites (1,2), (2,3), (3,4), (4,1) — the four edges of the square. For each edge, for each spin, there is a fermionic hop c^\dagger_{i\sigma} c_{j\sigma} + \text{h.c.} Total: 4 \times 2 = 8 hopping terms.

Step 2. The on-site interaction contributes U n_{i\uparrow} n_{i\downarrow} per site: 4 terms. Each is a product of two number operators, which under Jordan-Wigner becomes \tfrac{1}{4}(I - Z_{i\uparrow})(I - Z_{i\downarrow}) — four Pauli terms per site, 16 total (some simplify).
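That identity is worth verifying once by machine: under Jordan-Wigner the Z strings attached to c^\dagger and c cancel inside c^\dagger c, so a number operator maps to (I - Z)/2 with no string at all. A two-line numpy check on the two spin-orbitals of one site:

```python
import numpy as np

I = np.eye(2); Z = np.diag([1., -1.])
n = np.diag([0., 1.])                 # number operator: |1> = occupied

# (1/4)(I - Z_up)(I - Z_dn) acting on the two spin-orbitals of one site
lhs = 0.25 * np.kron(I - Z, I - Z)
rhs = np.kron(n, n)                   # n_up n_dn
assert np.allclose(lhs, rhs)
```

Expanding the left-hand side gives the four Pauli terms per site quoted above: \tfrac14(II - Z_\uparrow - Z_\downarrow + Z_\uparrow Z_\downarrow).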

Step 3. Total qubit Hamiltonian. After expanding, the Pauli-term count is order a few dozen. Each hopping term becomes a Pauli string of weight O(|i-j|) under Jordan-Wigner with the standard site ordering; on a 2D grid the effective weight is the "Jordan-Wigner path" along the chosen snake ordering. For 2 \times 2, weights are at most 4.

Step 4. Prepare the Hartree-Fock (or non-interacting) ground state as a starting point. Run VQE with a Hamiltonian-variational ansatz |\psi(\vec\theta)\rangle = \prod_k e^{-i\theta_k H_k} |\text{HF}\rangle, where H_k runs over individual hopping and interaction terms. 8 qubits, maybe 30 VQE parameters, a few hundred measurement circuits per iteration, 100 iterations.

\text{cost per VQE run} \approx 10^4 \text{ circuit executions.}

Why this fits on NISQ hardware: 8 qubits is well within current superconducting and ion-trap devices (which routinely operate 50+ qubits). Circuit depth per VQE measurement is modest (tens of CNOT gates). The total budget — a few minutes of wall-clock time on a working device — is cheap.

Step 5. Compare to exact diagonalisation. For 8 qubits and 4 electrons, the constrained Hilbert space has dimension \binom{4}{2}^2 = 36, trivially solvable on a laptop. Your VQE result should match the exact ground-state energy to within the accuracy of the ansatz and optimiser. The point of this exercise is not new science but to exercise the full pipeline: fermion encoding, ansatz, optimiser, readout.
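The exact-diagonalisation cross-check itself is a short script. A minimal sketch, assuming spin-orbital ordering q = 2·site + spin and the four square edges from Step 1 (any consistent Jordan-Wigner ordering gives the same spectrum): build the 8-qubit Hamiltonian from explicit JW matrices, restrict to the (2\uparrow, 2\downarrow) sector, and diagonalise.

```python
import numpy as np
from functools import reduce

# Exact diagonalisation of the 2x2 Hubbard model via Jordan-Wigner matrices.
M = 8                                     # spin-orbitals: q = 2*site + spin
I2 = np.eye(2); Zm = np.diag([1., -1.])
low = np.array([[0., 1.], [0., 0.]])      # |0><1|: annihilates an occupied mode

def c(q):
    """JW annihilation operator on spin-orbital q (Z string on modes before q)."""
    return reduce(np.kron, [Zm] * q + [low] + [I2] * (M - q - 1))

def hubbard(t, U):
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # the four edges of the square
    H = np.zeros((2**M, 2**M))
    for i, j in edges:
        for s in (0, 1):                       # spin up, spin down
            hop = c(2 * i + s).conj().T @ c(2 * j + s)
            H += -t * (hop + hop.conj().T)
    for i in range(4):
        n_up = c(2 * i).conj().T @ c(2 * i)
        n_dn = c(2 * i + 1).conj().T @ c(2 * i + 1)
        H += U * n_up @ n_dn
    return H

def ground_energy(t, U):
    H = hubbard(t, U)
    # keep basis states with exactly 2 up and 2 down electrons
    occ = [b for b in range(2**M)
           if format(b, '08b')[0::2].count('1') == 2      # up orbitals (even q)
           and format(b, '08b')[1::2].count('1') == 2]    # down orbitals (odd q)
    Hs = H[np.ix_(occ, occ)]                  # exact: H conserves (N_up, N_dn)
    return np.linalg.eigvalsh(Hs)[0], len(occ)
```

At U = 0 the half-filled ground energy is exactly -4t (for each spin, fill the -2t single-particle level and one zero mode of the 4-site ring), and the sector dimension is the 36 counted above; switching on U raises the ground energy, as Hellmann-Feynman requires.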

Result. The 2 \times 2 Hubbard model is to 2D Hubbard what \text{H}_2 is to chemistry: a training-wheels exercise that checks the pipeline. 4 \times 4 Hubbard (16 sites, 32 qubits) is the next rung, already at the edge of classical exact diagonalisation. 10 \times 10 Hubbard (100 sites, 200 logical qubits) is the fault-tolerant scientific target.

[Figure: the 2 × 2 Hubbard model — four sites on a square with hopping edges and on-site U n↑n↓; each site carries two spin-orbitals, giving 8 qubits under Jordan-Wigner with a snake ordering.]
The $2 \times 2$ Hubbard model. Four sites on a square, each site carrying spin-up and spin-down orbitals, connected by hopping $t$ and on-site interaction $U$. Eight qubits under Jordan-Wigner; a few dozen Pauli terms in the qubit Hamiltonian; a few minutes of NISQ compute to run VQE.

What this shows. The basic workflow — fermion encoding, ansatz selection, VQE loop, comparison to exact diagonalisation — is already deployable. Scaling it to 32 qubits (4 \times 4) is underway. Scaling to 200 qubits (10 \times 10) with fault-tolerance is the frontier.

Example 2 — A 50-site Heisenberg chain on NISQ hardware

Setup. A 1D antiferromagnetic Heisenberg chain of 50 spin-1/2 sites is the target. This is directly representable — each site is one qubit, so 50 qubits total. The Hamiltonian has 49 nearest-neighbour bonds.

H_{\text{Heis, 50}} = J \sum_{i=1}^{49} (X_i X_{i+1} + Y_i Y_{i+1} + Z_i Z_{i+1}).

Step 1. Every term is a two-qubit Pauli product with weight 2 — Jordan-Wigner is not needed (the spins are not fermions). The Hamiltonian is already a qubit operator.

Step 2. Goal: simulate the real-time dynamics of a domain wall. Prepare an initial state |\uparrow\uparrow\cdots\uparrow\downarrow\downarrow\cdots\downarrow\rangle (left half up, right half down); evolve for time t under H_{\text{Heis}}; measure the local magnetisation \langle Z_i\rangle at each site.

Step 3. Trotter. First-order Trotter decomposition of e^{-iHt/n} interleaves three types of terms: even-index XX+YY bonds, odd-index XX+YY bonds, and all ZZ bonds. One Trotter step costs \sim 100 two-qubit gates. For time t = 10 with 50 Trotter steps, total gate count is \sim 5000 two-qubit gates.
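This decomposition can be sketched as a statevector simulation — our own toy sizes (6 sites rather than 50, so the exact evolution fits in memory), grouping bond-by-bond rather than by the three term types, which changes nothing essential: build the Heisenberg bond terms, exponentiate each, interleave, and compare against exact e^{-iHt} on the domain-wall state.

```python
import numpy as np
from functools import reduce

L = 6
X = np.array([[0., 1.], [1., 0.]]); Y = np.array([[0., -1j], [1j, 0.]])
Z = np.diag([1., -1.]); I = np.eye(2)
op = lambda a: reduce(np.kron, [a.get(k, I) for k in range(L)])

# Heisenberg bond terms h_b = XX + YY + ZZ on each nearest-neighbour pair
bonds = [(i, i + 1) for i in range(L - 1)]
hb = {b: sum(op({b[0]: P, b[1]: P}) for P in (X, Y, Z)) for b in bonds}
H = sum(hb.values())

psi0 = np.zeros(2**L, complex)
psi0[int('000111', 2)] = 1.0        # domain wall: left half up, right half down

def evolve_exact(t):
    w, v = np.linalg.eigh(H)
    return v @ (np.exp(-1j * w * t) * (v.conj().T @ psi0))

def evolve_trotter(t, n):
    """First-order Trotter: one pass over the bond unitaries per step."""
    dt = t / n
    us = []
    for b in bonds:
        w, v = np.linalg.eigh(hb[b])
        us.append(v @ np.diag(np.exp(-1j * w * dt)) @ v.conj().T)
    psi = psi0.copy()
    for _ in range(n):
        for u in us:
            psi = u @ psi
    return psi
```

The error shrinks roughly as 1/n, as first-order Trotter should; note also that every bond unitary commutes with Z_i + Z_j, so the total magnetisation of the domain wall is conserved exactly, Trotter error or not.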

Step 4. Classical cross-check. This same simulation is within reach of time-dependent DMRG (for 1D, low entanglement) and matrix product state methods. So what does the quantum simulator give you? At late times, the entanglement entropy after a quench grows linearly, and at some critical time the bond dimension required classically exceeds what machines can handle. The quantum simulator runs past that wall at constant cost — a regime where the classical methods genuinely fail.

Step 5. Demonstrated results. IBM's 2023 utility experiment ran a 127-qubit kicked-Ising dynamics simulation — Heisenberg-adjacent physics — on superconducting hardware, using error mitigation to extract physical signals from noisy gates. Google has demonstrated similar at comparable scales. These are real demonstrations of quantum utility — modest in scientific impact but genuinely at the edge of classical methods.

\text{gate count} \approx 5000, \qquad \text{error-mitigated fidelity} \sim 0.5.

Why 50 sites is interesting on NISQ: the entanglement after a quench grows with time; for short times, DMRG is fine; for late times, DMRG's bond dimension blows up and quantum wins. The crossover time is material-dependent. At 50 sites, you can push to times where the classical cost is starting to bite.

Result. 50-site Heisenberg dynamics is a live NISQ target, demonstrated by multiple groups. It is not yet conclusive "quantum advantage" — classical methods can sometimes catch up — but it is a realistic near-term science case, and the 2D generalisations are the next natural step.

What this shows. Dynamics simulations (Task 1 from the Feynman-revisited framework) are more NISQ-friendly than ground-state problems (Task 2) because they do not require error-free long-time stability. Late-time quench dynamics in 2D is an especially natural fault-tolerant target — the regime where classical tensor networks clearly fail.

Common confusions

Going deeper

You now have the lay of the land: four model Hamiltonians, classical methods that shine in 1D and break in 2D, quantum algorithms transplanted from the chemistry chapter, and the long-term targets of high-T_c and spin liquids. What follows adds three more layers — the Anderson RVB story that started the Hubbard programme, the Kitaev honeycomb as an exactly solvable topological model, and the analog-simulator landscape where cold atoms and Rydberg arrays already do condensed-matter science.

Anderson's 1987 resonating valence bond proposal

Philip Anderson's 1987 Science paper, shortly after the discovery of high-temperature superconductivity in \text{La}_2\text{CuO}_4 by Bednorz and Müller, proposed that the 2D Hubbard model at half-filling has a ground state described by a resonating valence bond (RVB) state — a superposition of singlet pairings of neighbouring spins. Doping this RVB state away from half-filling, he argued, liberates the singlets to move coherently and superconduct. The specific mechanism (Anderson's RVB) has not been definitively confirmed; but the framework — that cuprate superconductivity is a consequence of the 2D Hubbard Hamiltonian — has guided the field for four decades. The Hubbard phase diagram is the scientific prize that quantum simulation could nail down.

The Kitaev honeycomb as exactly solvable topological matter

Kitaev's 2006 model on a honeycomb lattice assigns different Pauli couplings (XX, YY, ZZ) to the three bond directions. The model is exactly solvable via a mapping to free Majorana fermions plus a static \mathbb{Z}_2 gauge field. For appropriate coupling ratios, the ground state is a gapped \mathbb{Z}_2 spin liquid; for others, it is gapless and hosts non-Abelian anyons (Ising anyons, the simplest non-Abelian type). The model is a pedagogical goldmine: it demonstrates topological phases concretely, admits exact diagonalisation, and is the blueprint for topological quantum computation schemes that use Majorana zero modes in semiconductor nanowires.

Fractional quantum Hall effect

The fractional quantum Hall (FQH) effect — the appearance of plateaus in the Hall conductance at fractional values like 1/3 — is the original experimental demonstration of topological order. The 1/3 Laughlin state is a highly correlated many-electron wavefunction with non-trivial topological invariants (Chern number). Simulating FQH states on a quantum computer is a natural target: the relevant Hamiltonians are clearly 2D and strongly correlated, classical methods are marginal, and there are discrete topological invariants that make the output easily verifiable.

Analog quantum simulators

Parallel to the gate-based programme, the analog quantum-simulation programme uses specialised hardware to implement specific Hamiltonians:

Analog simulators answer specific condensed-matter questions today. They are less flexible than gate-based machines (each platform implements a fixed family of Hamiltonians), but for the Hamiltonians they do implement, they reach scales gate-based simulators will take a decade to match.

Indian condensed matter and quantum simulation

The Indian tradition in condensed matter is deep. C.V. Raman's 1928 Raman effect is a direct spectroscopic signature of quantised molecular vibrations — the first experimental demonstration that matter is quantum-mechanical on everyday scales. The Homi Bhabha-founded Tata Institute of Fundamental Research (TIFR, Mumbai) hosts one of the world's strongest condensed matter groups, including experimental and theoretical work on superconductors, magnets, and topological materials. The Indian Institute of Science (IISc, Bangalore) has strong groups on both condensed-matter theory and cold-atom experiments. The Raman Research Institute (RRI) runs quantum optics and ultracold atom experiments. IIT Bombay, IIT Kanpur, IIT Madras, IIT Kharagpur, IIT Roorkee — all host condensed-matter physics groups, with several moving toward quantum simulation collaborations. The Saha Institute of Nuclear Physics in Kolkata and the Harish-Chandra Research Institute in Allahabad round out the theoretical landscape. The National Quantum Mission's materials and sensing pillar funds exactly these groups. Any condensed-matter result coming out of Indian quantum simulation will have deep institutional roots.

Where this leads next

References

  1. Wikipedia, Hubbard model.
  2. Georgescu, Ashhab, Nori, Quantum simulation (2014), Rev. Mod. Phys. — arXiv:1308.6253.
  3. John Preskill, Lecture Notes on Quantum Computation, Chapter 7 — theory.caltech.edu/~preskill/ph229.
  4. Wikipedia, Condensed matter physics.
  5. Bernien et al., Probing many-body dynamics on a 51-atom quantum simulator (2017) — arXiv:1707.04344.
  6. Wikipedia, Resonating valence bond theory — Anderson's 1987 proposal for cuprate superconductivity.