In short

NISQ stands for Noisy Intermediate-Scale Quantum. John Preskill coined the term in 2018 to name — and calibrate expectations around — the generation of quantum hardware we actually have right now. Three features define a NISQ machine: noisy (per-gate error rates of roughly 10^{-3} to 10^{-2}, below the surface-code threshold but not by a comfortable margin); intermediate-scale (50 to a few thousand physical qubits — too many to simulate classically for the hardest tasks, too few to host full quantum error correction); and no error correction (maybe a single logical qubit demonstration, as in Google Willow, but no multi-logical-qubit computation). Every quantum computer the press has written about — Google Sycamore (53 qubits, 2019), IBM Osprey (433, 2022), IBM Condor (1121, 2023), IBM Heron (133, 2024), Google Willow (105, 2024), Quantinuum H2, IonQ Tempo, QuEra Aquila — is a NISQ machine. Algorithms that run on NISQ hardware must tolerate the noise: shallow circuits, variational approaches (VQE, QAOA), small quantum simulation, error-mitigation tricks instead of full error correction. Algorithms that NISQ machines cannot run: Shor's on a real RSA key (needs ~20 million physical qubits), useful industrial optimisation (needs fault tolerance), cryptographically relevant factoring, or any computation whose circuit depth exceeds the noise horizon. NISQ is a bridge era, not a destination. The fault-tolerant machines the field is aiming for — hundreds of logical qubits, arbitrary algorithms — are a 2030-2035 horizon. Until then, NISQ is where everyone lives.

Every few months, a headline lands somewhere: "Scientists build the most powerful quantum computer yet", "IBM passes 1000-qubit barrier", "China claims quantum supremacy". The 15-year-old reading the headline in Lucknow reasonably concludes that quantum computing has arrived — that machines powerful enough to break encryption and solve the world's hardest problems are now operating. Then they read the next article, in Nature, in which a careful physicist writes that "no NISQ machine has yet demonstrated a clear quantum advantage for any problem of practical interest." Something is off between the headlines and the science.

The thing that is off has a name, and the name is NISQ.

NISQ stands for Noisy Intermediate-Scale Quantum. The term was coined in 2018 by John Preskill, a theoretical physicist at Caltech who has shaped quantum-information thinking since the 1990s. He did not invent it in celebration — he invented it to set expectations, to give the field a vocabulary for talking honestly about what the machines being built in 2018-2025 can and cannot do. The word intermediate is doing work: these machines are intermediate between the two-qubit experiments of the 2000s and the error-corrected quantum computers the field ultimately wants to build. They are also intermediate in a less comfortable sense — intermediate between not useful and useful, with the line still uncrossed as of this writing.

This chapter is about what NISQ means, what machines qualify as NISQ, what NISQ machines can genuinely do, what they absolutely cannot do, and where the boundary runs between today's hardware and tomorrow's. The point is to hand you a calibrated picture that lets you read every future headline with the right dose of scepticism.

Decoding the three letters

N — Noisy

Every operation on a NISQ qubit has a nonzero probability of going wrong. A single-qubit gate (Hadamard, Pauli, rotation) has an error rate of around 10^{-4} to 10^{-3} on the best platforms, and a two-qubit gate (CNOT, CZ) has an error rate of around 10^{-3} to 10^{-2}. Measurement has its own error, typically 1%-3%. Coherence times (T_1, T_2) are finite — the qubit's quantum state decays back towards a classical mixture after tens to hundreds of microseconds on superconducting platforms, seconds on trapped-ion platforms.

What does that mean for a circuit? If you run a 100-gate circuit on a platform with 10^{-2} two-qubit error, the output is dominated by noise — you expect roughly 100 \times 10^{-2} = 1 gate error per shot on average, enough to scramble the answer. A 1000-gate circuit is hopeless.

So NISQ algorithms must live inside a noise horizon: a circuit depth beyond which the signal is gone. At a 10^{-3} error rate on a 50-qubit device, something like O(10^2) gates fit inside this horizon; at 10^{-2}, far fewer.
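The horizon arithmetic can be sketched numerically — a toy calculation assuming each gate fails independently, which is a simplification of real, often correlated, noise:

```python
# Fraction of shots with zero gate errors in a depth-d circuit at per-gate
# error p, assuming independent gate failures (a simplification).

def clean_fraction(n_gates: int, p: float) -> float:
    """Probability that no gate in the circuit errs."""
    return (1.0 - p) ** n_gates

for p in (1e-3, 1e-2):
    d = 1
    while clean_fraction(d, p) > 0.5:    # horizon: half the shots corrupted
        d += 1
    print(f"p = {p:.0e}: ~{d} gates until half the shots carry an error")
```

With this 50% criterion, the horizon sits at a few hundred gates for 10^{-3} and a few dozen for 10^{-2}; a stricter signal requirement shrinks it further.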

[Figure: The NISQ region on a qubit-count versus gate-error plane. Both axes are logarithmic: physical qubits from 10 to 10^6 on the x-axis, two-qubit gate error from 10^{-1} down to 10^{-5} on the y-axis. A dashed band at ~1% marks the surface-code threshold. The shaded NISQ region spans roughly 50-5000 qubits at errors of 10^{-3} to 10^{-2}; the fault-tolerant target region (2030s+) sits at 10^5-10^7 qubits with errors below 10^{-3}. Named machines are plotted as points: Sycamore (53), Osprey (433), Condor (1121), Heron (133), Willow (105), Quantinuum H2 (64), QuEra Aquila (256), IonQ Tempo.]
Every NISQ machine sits in a shaded rectangle: intermediate qubit counts, error rates near but below the surface-code threshold. The fault-tolerant region is separated by two orders of magnitude in qubit count and one order of magnitude in required error rate — the gap the field is trying to cross in the 2025-2035 window.

I — Intermediate-scale

The intermediate in NISQ is both technical and philosophical. Technically, it refers to machines with roughly 50 to 5000 physical qubits. The floor is 50 because below that, a classical computer can simulate the quantum state exactly (a 50-qubit state has 2^{50} \approx 10^{15} amplitudes, pushing the limits of exact simulation). The ceiling is fuzzier, but 5000 is where we run out of hardware as of 2025; at higher counts, you probably have enough qubits for modest error correction and the regime shifts.
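The 50-qubit floor can be made concrete with a memory estimate: storing an n-qubit state vector exactly takes 2^n complex amplitudes at 16 bytes each (complex128).

```python
# Memory needed to store an n-qubit state vector exactly: 2**n complex
# amplitudes, 16 bytes each (complex128).

def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (30, 40, 50):
    print(f"{n} qubits: {statevector_bytes(n) / 2**30:,.0f} GiB")
```

At 30 qubits the state fits in a laptop's RAM (16 GiB); at 50 qubits it is ~16 PiB — beyond any single machine, which is why exact simulation tops out around this scale.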

Philosophically, intermediate means in between. In between the 2-qubit demonstrations of the 1990s and the million-qubit error-corrected machines of the 2030s. In between being so small that a classical laptop can simulate everything and being so large that error correction tames the noise. In between being a physics lab experiment and being a useful computer. The word is an admission: this is a transitional period, and the transition is slow.

SQ — Scale, Quantum (and Significantly Quantum-advantaged? No.)

The last two letters — Scale and Quantum — are almost banal. These are quantum machines: they manipulate qubits, run circuits, produce quantum states. The question "are they significantly quantum-advantaged over classical machines?" is precisely what the NISQ label does not promise.

Hype check. The frequent misreading is that NISQ quantum computers must have quantum advantage because they are bigger than anything a classical computer can simulate. That is not what intermediate-scale means. A NISQ machine with 1000 qubits is too big to simulate in full generality — but for any specific problem you want to solve, the question of whether the NISQ machine beats a well-tuned classical algorithm is empirical, problem-specific, and as of 2025 the answer for every practically useful problem is "no, or no clear evidence." The only domains where NISQ machines have provably outperformed classical computers (Google's 2019 random-circuit sampling; USTC's boson sampling in 2020-2023) are narrow, artificial tasks with no known industrial application.

A brief census of NISQ machines

To make this concrete, here are the flagship NISQ machines as of early 2025. Each is at a different point in the qubit-count-vs-fidelity trade-off.

Named NISQ machines — early 2025 snapshot

| Machine (year) | Technology | Qubits | 2-qubit error |
|---|---|---|---|
| Google Sycamore (2019) | superconducting transmon | 53 | \sim 10^{-2} |
| IBM Osprey (2022) | superconducting transmon | 433 | \sim 1-2 \times 10^{-2} |
| IBM Condor (2023) | superconducting transmon | 1121 | \sim 1-2 \times 10^{-2} |
| IBM Heron (2024) | superconducting + tunable couplers | 133 | \sim 5 \times 10^{-3} |
| Google Willow (2024) | superconducting + tunable couplers | 105 | \sim 3 \times 10^{-3} |
| Quantinuum H2 (2024) | trapped ion (Yb/Ba) | 64 | \sim 5 \times 10^{-4} |
| IonQ Tempo (2025) | trapped ion | 64 | \sim 10^{-3} |
| QuEra Aquila (2023) | neutral atoms | 256 | \sim 5 \times 10^{-3} |
A snapshot of the NISQ fleet. Superconducting platforms dominate on qubit count; trapped-ion platforms win on gate fidelity; neutral atoms are scaling fastest. Every one of these is NISQ. None is a fault-tolerant quantum computer.

Notice how modest the qubit counts are. IBM Condor leads at 1121 qubits, but its two-qubit error rate means you cannot run a circuit of more than a few dozen gates without the output being dominated by noise. Google Willow has a quarter of Condor's qubit count but an error rate good enough to demonstrate a single logical qubit at surface-code distance d = 7 (see logical qubits in practice).

What NISQ machines can actually do

The NISQ constraint — shallow circuits, noise-tolerant algorithms — dictates the algorithms that fit the hardware.

Variational algorithms

The dominant class of NISQ-native algorithms is variational: parameterise a shallow quantum circuit U(\theta) with a few hundred parameters, prepare the state U(\theta)|0\rangle, measure some observable (typically the energy of a Hamiltonian), and feed the result back to a classical optimiser that updates \theta. The quantum computer is a subroutine — an expensive probabilistic function evaluator — inside a classical optimisation loop.

The two headline examples:

  1. VQE (Variational Quantum Eigensolver) — estimate the ground-state energy of a molecule or material by minimising \langle\psi(\theta)|H|\psi(\theta)\rangle over the circuit parameters.
  2. QAOA (Quantum Approximate Optimization Algorithm) — encode a combinatorial optimisation problem in a cost Hamiltonian and variationally search for low-cost bit strings.

The next chapter — variational algorithms generally — is an overview of this family. The key insight for NISQ: because the circuit is shallow (O(10) layers) and the noise need only leave the optimiser's signal intact, variational algorithms tolerate errors in a way that Shor's algorithm (with its \sim 10^{10} gates) cannot.
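The loop described above can be sketched on a toy one-qubit problem — Hamiltonian H = Z, ansatz R_y(\theta)|0\rangle, whose energy \cos\theta has its minimum -1 at \theta = \pi. The problem, the gradient rule, and the learning rate below are illustrative choices; on real hardware the energy call would be a noisy estimate from repeated measurement.

```python
# A toy sketch of the variational loop: quantum evaluation inside a
# classical update loop. H = Z, |psi(theta)> = Ry(theta)|0>, <Z> = cos(theta).
import math

def energy(theta: float) -> float:
    # Expectation value <psi(theta)|Z|psi(theta)> = cos(theta).
    # On hardware this number would come from many measured shots.
    return math.cos(theta)

def grad(theta: float) -> float:
    # Parameter-shift rule, exact for Ry rotations; here it equals -sin(theta).
    return 0.5 * (energy(theta + math.pi / 2) - energy(theta - math.pi / 2))

theta, lr = 0.3, 0.4
for _ in range(100):            # the classical optimisation loop
    theta -= lr * grad(theta)   # quantum evaluates, classical updates

print(f"theta = {theta:.4f} (pi = {math.pi:.4f}), energy = {energy(theta):.4f}")
```

The shape — an expensive quantum function evaluation inside a classical optimiser — is the same in real VQE; only the circuit, the Hamiltonian, and the noise are bigger.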

Small quantum simulation

Simulating the time evolution of a quantum system — a spin chain, a small molecule, a lattice gauge theory at toy size — is something classical computers struggle with. NISQ machines can simulate systems of 30-100 qubits for moderate evolution times. These are research experiments, not industrial applications, but they do produce real physics results — which matches Feynman's 1982 proposal (see Feynman revisited) that simulating quantum systems is the natural application of quantum computers.

Benchmarking and noise characterisation

A great deal of NISQ work is about the machines themselves: measuring T_1 (energy-relaxation time) and T_2 (dephasing time); characterising gate fidelities via randomised benchmarking; running cross-entropy benchmarking to certify quantum-supremacy sampling; building error models that the compiler can use. This is not glamorous, but it is the foundation every fault-tolerant experiment rests on.
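The randomised-benchmarking fit mentioned above can be sketched with synthetic data: average survival probability after m random Cliffords decays as A p^m + B, and the error per gate is r = (1 - p)(d - 1)/d with d = 2^{n}. The constants A, B, p below are assumptions for illustration, and real experiments fit noisy averages over many random sequences.

```python
# A sketch of the randomised-benchmarking decay fit on synthetic,
# noiseless single-qubit data (A, B, p_true are assumed).
import numpy as np

A, B, p_true = 0.5, 0.5, 0.998             # assumed decay parameters
m = np.array([1, 10, 50, 100, 200, 400])   # sequence lengths
survival = A * p_true**m + B               # synthetic RB data

# Log-linear fit of (survival - B) against m recovers the decay constant:
slope, _ = np.polyfit(m, np.log(survival - B), 1)
p_fit = float(np.exp(slope))
r = (1 - p_fit) / 2                        # error per gate, d = 2 (1 qubit)
print(f"fitted p = {p_fit:.4f}, error per gate r = {r:.2e}")
```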

Quantum supremacy demonstrations

Narrow, specific tasks chosen to be easy for a quantum computer and hard for a classical one:

  1. Random-circuit sampling — Google Sycamore, 2019: sample bit strings from the output distribution of a random 53-qubit circuit.
  2. Boson sampling — USTC's Jiuzhang experiments, 2020-2023: sample from the output distribution of photons interfering in a large linear-optical network.

Both count as NISQ demonstrations — narrow tasks, no practical application, yet real quantum advantage for the task chosen.

What NISQ machines cannot do

The sharper half of the picture is what NISQ cannot do. Pop-science coverage routinely blurs this, which makes it worth enumerating.

Shor's algorithm on real RSA keys

Factoring a 2048-bit RSA modulus with Shor's algorithm requires roughly 2 \times 10^7 physical qubits (Gidney-Ekerå 2019 estimate, surface-code architecture). NISQ machines have 10^2 to 10^3 qubits — four to five orders of magnitude short. More importantly, Shor's algorithm requires \sim 10^{10} logical gates, each of which must succeed with probability near 1. At NISQ gate errors of 10^{-3} you expect a fatal error roughly every thousand gates — some ten million errors over the course of the algorithm. Shor's on NISQ is not a question of waiting; it is a question of replacing the hardware.

Hype check. Claims you will occasionally see that "NISQ machines can factor small numbers using Shor's" are usually referring to variants run on 5-15 qubits that factor hand-picked numbers like 15 = 3 \times 5 or 21 = 3 \times 7. These are instructive demonstrations of the algorithm mechanics, not a cryptographic threat. The smallest factoring a NISQ machine has ever done is of numbers a secondary-school student can factor in their head.

Useful industrial optimisation

A lot of marketing claims QAOA will revolutionise logistics or portfolio optimisation. As of 2025, every head-to-head comparison of QAOA on NISQ hardware versus a well-tuned classical heuristic (simulated annealing, Gurobi, CPLEX) has come out tied at best, and usually in favour of the classical algorithm. The theoretical guarantees of QAOA at constant depth on random instances are modest, and the NISQ noise erodes what little advantage theory promises. QAOA on NISQ for a real-world logistics problem is, as of writing, an active research question with no clear yes.

Industrially relevant quantum chemistry

VQE on \text{H}_2 (4 electrons, 4-8 qubits) works and matches classical methods. Scaling up — to \text{Fe}-containing enzymes, to nitrogen fixation, to the FeMo-cofactor of nitrogenase (the poster-child "quantum simulation will solve fertiliser" example) — requires \sim 100 logical qubits and deep circuits. Far outside NISQ. A fault-tolerant machine is what that calculation needs.

Useful quantum machine learning

Quantum neural networks, quantum support vector machines, quantum kernel methods. All proposed, none demonstrated to beat classical ML on a real task, all limited by NISQ noise and by the classical-ML ecosystem's enormous head-start. The honest position is that quantum ML is a research direction with no clear advantage yet.

Error mitigation versus error correction

A concept often confused. Error correction is the full machinery from part 13-14 of this curriculum: encode a logical qubit in many physical qubits, measure syndromes, apply corrections, drive the logical error rate down exponentially with code distance. Requires fault-tolerant hardware and a lot of overhead.

Error mitigation is a cheaper, weaker cousin designed specifically for NISQ. You do not encode anything. Instead, you run the (noisy) circuit in a clever way and use classical post-processing to estimate what the noiseless output would have been.

Error correction vs error mitigation — two different strategies

Error correction (fault-tolerant era):
  - encode 1 logical qubit in O(d^2) physical qubits
  - syndrome measurement every cycle, active correction during runtime
  - logical error \propto (p/p_{th})^{d/2} — exponential suppression with d
  - requires p below threshold; high qubit overhead (hundreds to thousands)
  - arbitrary-depth circuits possible
  - status: single-logical-qubit demos (Willow)

Error mitigation (NISQ era):
  - no encoding — bare physical qubits
  - run the circuit many times with varied noise levels
  - classical post-processing only: extrapolate to the zero-noise limit, or probabilistically cancel errors
  - no threshold requirement; low qubit overhead
  - limited to shallow circuits
  - status: used daily on NISQ hardware
Error correction and error mitigation are complementary, not alternatives. Correction is a structural cure — encode, measure, correct — that requires fault-tolerant hardware. Mitigation is a statistical trick — run many times, post-process — that works today but does not scale to arbitrary circuit depths.

Three of the main mitigation techniques:

  1. Zero-noise extrapolation (ZNE) — run the circuit at several deliberately amplified noise levels and extrapolate the measured observable back to zero noise.
  2. Probabilistic error cancellation (PEC) — sample from a quasi-probability decomposition of the inverse noise channel; unbiased, but with a sampling overhead that grows with circuit depth.
  3. Measurement (readout) error mitigation — characterise the readout-error matrix and classically invert it on the measured bit-string distribution.

Mitigation is not magic. It cannot extend the noise horizon indefinitely — the sample-count overhead grows exponentially in circuit depth for techniques like PEC. But for shallow NISQ circuits, it routinely recovers signal that would otherwise be buried in noise, and it is what makes VQE and QAOA feasible on real hardware.
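Zero-noise extrapolation, the simplest of these techniques, can be sketched end to end. The noise model below is a synthetic exponential damping; the true value and the damping rate are assumptions for illustration, and real ZNE must choose an extrapolation form without knowing the noise law.

```python
# A sketch of zero-noise extrapolation (ZNE): measure an observable at
# deliberately amplified noise levels, fit the trend, extrapolate to zero.
import numpy as np

TRUE_VALUE = 0.82                          # assumed noiseless expectation

def noisy_expectation(scale: float) -> float:
    # Depolarising-style noise damps the signal; "scale" stretches the
    # noise (in practice via pulse stretching or gate folding).
    return TRUE_VALUE * np.exp(-0.12 * scale)

scales = np.array([1.0, 1.5, 2.0, 3.0])    # noise-amplification factors
values = np.array([noisy_expectation(s) for s in scales])

# Fit log(value) linearly in scale; the intercept is the scale-0 estimate:
slope, intercept = np.polyfit(scales, np.log(values), 1)
mitigated = float(np.exp(intercept))
print(f"raw (scale 1): {values[0]:.3f}  ->  mitigated: {mitigated:.3f}")
```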

Worked examples

Example 1: when does a VQE calculation hit the NISQ noise wall?

Setup. You want to run VQE on the hydrogen molecule \text{H}_2 using the UCCSD ansatz, which for a minimal basis set (2 spatial orbitals \times 2 spins = 4 spin-orbitals) requires 4 qubits and a circuit with roughly 20 two-qubit gates per variational step. You plan to run on a platform with two-qubit gate error p = 5 \times 10^{-3} (Quantinuum-ish).

Step 1. Probability of a circuit being error-free: (1-p)^{20} = (0.995)^{20} \approx 0.905. Why multiplicative: each gate is an independent event, so no-error across all gates compounds as a product.

Step 2. So about 90% of shots give the clean circuit output; 10% of shots are corrupted by at least one error. This is manageable — you average over enough shots and the signal is visible.

Step 3. Now scale up to \text{H}_4 (4 hydrogen atoms, 8 qubits, ansatz with ~120 two-qubit gates). Probability of error-free: (0.995)^{120} \approx 0.549. Only half the shots are clean; the other half are contaminated. VQE still works but the required shot count has grown.

Step 4. Scale to \text{BeH}_2 (16 qubits, ~500 two-qubit gates at the UCCSD level): (0.995)^{500} \approx 0.082. Now over 90% of shots are corrupted. The signal is buried in noise.

Step 5. Scale to \text{FeMoco} (~100 qubits, tens of thousands of two-qubit gates): the error-free probability is essentially 0.

Result. At p = 5 \times 10^{-3}, VQE on a molecule like \text{H}_2 is comfortable; by \text{BeH}_2, mitigation is essential; by \text{FeMoco}, NISQ hardware is hopeless. This is the NISQ wall in one example. To go beyond it, either the gate error must drop by several orders of magnitude, or error correction must take over.
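The whole example collapses into a few lines of arithmetic, assuming independent two-qubit gate errors. The gate counts follow the steps above; the FeMoco figure (40,000) is an illustrative stand-in for "tens of thousands".

```python
# Shot-survival arithmetic of Example 1: fraction of error-free shots
# at p = 5e-3 for the gate counts quoted in the text.
p = 5e-3
gate_counts = {"H2": 20, "H4": 120, "BeH2": 500, "FeMoco": 40_000}
clean = {name: (1 - p) ** g for name, g in gate_counts.items()}

for name, g in gate_counts.items():
    print(f"{name:>7}: {g:>6} two-qubit gates -> {clean[name]:.1%} clean shots")
```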

Example 2: estimating when NISQ becomes "useful"

Setup. Call a quantum computation useful when it produces a molecular-ground-state energy for a chemistry problem that classical computers cannot solve — say, 10 logical qubits of fault-tolerant computation with logical error rate below 10^{-6}.

Step 1. Surface-code overhead: from the logical qubits in practice chapter, a logical qubit at distance d uses 2d^2 - 1 physical qubits and has logical error \sim p_L(7) \times \Lambda^{-(d-7)/2}, with p_L(7) \approx 5 \times 10^{-4} and \Lambda \approx 2.9 on Willow's scaling curve. Why these specific numbers: Willow is the one machine that has demonstrated below-threshold operation and reported the scaling constants directly.

Step 2. To reach logical error 10^{-6}, solve 5 \times 10^{-4} / 2.9^{(d-7)/2} = 10^{-6}, giving (d-7)/2 \approx \log_{2.9}(500) \approx 5.8, so d \approx 19.

Step 3. Physical qubits per logical: 2 \times 19^2 - 1 = 721 physical qubits.

Step 4. Ten logical qubits: \sim 7200 physical qubits just for the data. Magic-state factories add roughly a factor of 2, so call it \sim 15000 physical qubits total.

Step 5. The largest NISQ machines today: Condor at 1121, Osprey at 433, Willow at 105. None are close to 15000.

Result. A useful fault-tolerant chemistry computation needs ~10^4 physical qubits — one order of magnitude larger than the largest machine in 2025. At current scaling rates (superconducting chips roughly doubling per year), that is 3-4 years of hardware growth plus the harder problem of maintaining the fidelity. The NISQ-to-useful transition is likely in the 2028-2032 window for small chemistry, with industrially relevant problems (\text{FeMoco}-scale) arriving 2035+.
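Steps 1-4 of this estimate can be reproduced directly, using the scaling constants quoted above (Willow's p_L(7) \approx 5 \times 10^{-4}, \Lambda \approx 2.9) and the surface-code qubit count 2d^2 - 1; the factor of 2 for magic-state factories is the same rough allowance as in the text.

```python
# Surface-code overhead estimate of Example 2.
p_L7, Lam, target = 5e-4, 2.9, 1e-6

d = 7                                    # smallest odd distance meeting target
while p_L7 * Lam ** (-(d - 7) / 2) > target:
    d += 2
physical_per_logical = 2 * d**2 - 1
data_qubits = 10 * physical_per_logical  # ten logical qubits of data
total = 2 * data_qubits                  # rough 2x for magic-state factories

print(f"d = {d}, {physical_per_logical} physical per logical, "
      f"~{total:,} physical qubits total")
```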

Common confusions

"NISQ machines already outperform classical computers"

For two narrow, artificial tasks (random-circuit sampling and boson sampling), at moderate scales, yes. For every useful problem, no. The confusion arises because supremacy demonstrations are real quantum advantages — but the tasks involved are not ones anyone wants to solve except for the sake of the demonstration itself.

"NISQ means quantum computers are one step away from breaking RSA"

RSA-2048 needs roughly 2 \times 10^7 physical qubits with error rates below 10^{-3}. The largest NISQ machine today is about 10^3 qubits. Four orders of magnitude in size and a significant step-change in the error regime (you need full fault tolerance, not mitigation). NISQ is not a stepping stone to cryptography-breaking Shor's; the architecture required is different.

"Error mitigation means full error correction is unnecessary"

Mitigation is valuable on NISQ and will remain valuable on fault-tolerant machines as a complementary technique. But mitigation cannot extend an algorithm's circuit depth indefinitely — the statistical overhead of PEC grows exponentially in depth, and ZNE is limited by how accurately you can amplify noise. For algorithms that need 10^{10} gates (Shor's on RSA), only error correction works.

"NISQ is just an old name for pre-fault-tolerant"

NISQ is specifically intermediate-scale with no error correction. A hypothetical 10-qubit machine with perfect gates would not be NISQ (too small). A hypothetical billion-physical-qubit machine with perfect error correction would not be NISQ (fault-tolerant). The term describes a specific regime, not a generic temporal placeholder.

"Quantum supremacy means quantum computers are now useful"

Quantum supremacy (sometimes rebranded "quantum advantage" or "quantum computational advantage") means the quantum machine outperformed the best classical simulation on a specific task. It does not mean the task was useful. The Sycamore 2019 task (sampling from random quantum circuits) is useful precisely as a benchmark — a certification that the hardware is doing something classical resources cannot easily replicate — not as an application in the economic sense.

The transition to the fault-tolerant era

NISQ is explicitly a bridge. Every serious hardware roadmap — Google, IBM, Quantinuum, IonQ, QuEra — targets a transition from NISQ to fault-tolerant quantum computing (FTQC) sometime in the late 2020s. The milestones:

[Timeline figure: NISQ era to fault-tolerant era, 2018-2040. Milestones: 2018, Preskill coins NISQ; 2019, Sycamore supremacy; 2024, Willow demonstrates one logical qubit below threshold (d = 7); 2027, ~10 logical qubits across platforms; 2030, ~100 logical qubits, small algorithms; 2035, thousands of logical qubits, industrial chemistry. The NISQ-era band (2018-2028) and the fault-tolerant-era band (2027+) overlap: the transition is gradual.]
NISQ is not a sharp boundary but a gradient. Willow (2024) is still a NISQ machine by qubit count and error rate, but it demonstrated the first fault-tolerant primitive on real hardware. By 2027-2030 the industry expects multi-logical-qubit demonstrations; industrial applications are still 2035+.

During this transition, NISQ algorithms (variational approaches, small simulations) will remain relevant — they are the only thing the hardware can run — while the field gradually acquires fault-tolerant capability. Expect the 2020s to be described in textbooks as "the NISQ decade" and the 2030s as "the early fault-tolerant era."

The India angle

India's National Quantum Mission (approved 2023, ₹6003 crore over 8 years to 2031) is explicitly a NISQ-era programme. Phase 1 targets (by 2027) are 50-100 physical qubits across four platforms — superconducting at IIT Madras and TCS, trapped-ion at TIFR Mumbai and IISc Bangalore, photonic at Raman Research Institute Bangalore, neutral atom at IIT Bombay. The qubit counts are modest compared with the Western flagships, but the timeline is realistic and the platform diversity reduces sovereign risk.

On the algorithm side, QpiAI (Bangalore), TCS Research, and Infosys have variational-algorithm groups working on VQE and QAOA. IIT Madras's quantum software and algorithms programme runs problems through IBM Quantum Network access. The NQM has allocated part of its budget explicitly for NISQ-era algorithm development, recognising that software that runs on pre-fault-tolerant machines is the only software that matters until the FTQC transition.

Going deeper

The rest of this chapter concerns the precise technical content of Preskill's 2018 paper, the formal theory of error mitigation (sampling overheads, bias-variance trade-offs), recent arguments about classical simulability of supposedly-advantaged NISQ tasks, and the detailed technical bottlenecks blocking the NISQ-to-FTQC transition. This is the research-literature view, useful for a student considering a PhD in near-term quantum algorithms or hardware. The earlier sections are enough for a calibrated understanding.

Preskill's 2018 essay — what it actually argued

The 2018 paper is titled Quantum Computing in the NISQ era and beyond, arXiv:1801.00862. It was the keynote at Q2B 2017. The core argument:

  1. Quantum computers with 50-100 qubits and moderately low error rates will exist within 5 years (which came true with Sycamore in 2019 and Osprey in 2022).
  2. These machines will be too small for full fault tolerance but large enough to exceed brute-force classical simulation on certain tasks.
  3. The right theoretical framing is not "just wait for fault tolerance" but "what are NISQ machines actually good for, honestly?"
  4. Candidates for NISQ utility: variational algorithms, quantum simulation of small systems, sampling-based supremacy, and learning noise models to feed into future fault-tolerant designs.
  5. Honest expectation: NISQ might or might not produce a useful quantum advantage; it is a research regime, not a product launch.

The essay is still worth reading in 2025 because its predictions mostly held up and because its sobriety is a corrective to most press coverage.

The sampling overhead theorem for mitigation

Probabilistic error cancellation has a hard theoretical limit. If the inverse of a noise channel has a quasi-probability decomposition with one-norm \gamma, mitigating that noise at k locations in a circuit multiplies the required number of shots by \gamma^{2k}. For a depolarising channel of strength p, \gamma \approx (1 + p)/(1 - p) \approx 1 + 2p to lowest order.

For a 100-gate circuit at p = 10^{-2}: \gamma^{200} \approx (1.02)^{200} \approx 52. Manageable: shot count 50x more than the noiseless case. For a 10000-gate circuit at the same p: \gamma^{20000} \approx 10^{172}. Impossible. The sampling overhead grows exponentially in circuit depth, which is why error mitigation is a NISQ-only technique.
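The overhead arithmetic in two lines — here using the exact \gamma = (1 + p)/(1 - p) rather than the 1 + 2p approximation, so the numbers land slightly above the text's 52 and 10^{172}:

```python
# PEC sampling overhead: shot-count multiplier gamma**(2k) for k
# mitigated gate locations at depolarising strength p.
p = 1e-2
gamma = (1 + p) / (1 - p)

for k in (100, 10_000):                  # number of mitigated locations
    overhead = gamma ** (2 * k)          # shot-count multiplier
    print(f"{k:>6} gates: shot overhead ~ {overhead:.1e}")
```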

Classical-simulability challenges to supremacy claims

Sycamore 2019 claimed a 10^{10}-fold speedup over the best classical simulation then known. Within two years, better classical algorithms (tensor-network simulation with improved contraction schedules, GPU parallelism, exploiting the specific random-circuit structure) closed the gap to less than 10^3. The supremacy line has kept moving as classical algorithms improve. The current state (as of 2025) is that Sycamore's 2019 result would take hours, not 10^4 years, on a modern GPU cluster.

This does not invalidate the demonstration — it took the classical-simulation community years of concerted algorithmic work to catch up — but it is a reminder that "supremacy" is not forever. Boson sampling and IQP circuits have similar histories.

The NISQ-to-FTQC transition technical bottlenecks

Five hard engineering problems stand between NISQ and useful FTQC:

  1. Scaling qubit counts by 1-2 orders of magnitude without degrading fidelity. Cryogenic cables, control electronics, fabrication yield, classical wiring, all scale poorly.
  2. Real-time decoding of surface-code syndromes at large distance (see logical qubits in practice).
  3. Magic-state factories that produce high-fidelity |T\rangle states at the rate a computation needs them.
  4. Compilation from high-level algorithms to fault-tolerant gate sets — a research-level problem that the classical quantum-compiler community is still in the early stages of.
  5. Classical-control bandwidth — a million-qubit fault-tolerant machine needs classical control and syndrome readout at gigabit-per-second rates.

Each is its own research programme. NISQ is the regime where all five are being worked on in parallel, on smaller-scale systems.

NISQ in the Indian ecosystem

India's NISQ participation is both domestic (NQM-funded hardware at IITs and TIFR) and external (IBM Quantum Network, Quantinuum partnerships, cloud access to US platforms). A realistic Indian trajectory by 2030: a domestic 1000-qubit superconducting machine at IIT Madras / TCS; a domestic 100-ion trapped-ion machine at TIFR; an Indian VQE/QAOA software stack; Indian student theses on NISQ algorithms, error mitigation, and hardware characterisation. No Indian logical-qubit demonstration at NISQ scale by 2030 — that is a 2032-2034 milestone under current scaling.

Where this leads next

This chapter sets up the NISQ frame; the next chapter, variational algorithms generally, explains the dominant algorithmic response to the NISQ constraint. After that, individual variational algorithms — VQE for chemistry, QAOA for optimisation, quantum machine learning — each get their own chapters.

The counterpart to NISQ is the fault-tolerant regime, covered in logical qubits in practice and the earlier quantum-error-correction arc of this curriculum. The two chapters together give you a fully calibrated picture of where quantum computing actually stands in 2025, without either the press-release hype or the reflexive cynicism.

References

  1. John Preskill, Quantum Computing in the NISQ era and beyond (2018) — arXiv:1801.00862.
  2. Google Quantum AI (Arute et al.), Quantum supremacy using a programmable superconducting processor (Nature, 2019) — Nature 574, 505 / arXiv:1911.00012.
  3. Kristan Temme, Sergey Bravyi, Jay Gambetta, Error mitigation for short-depth quantum circuits (2017) — arXiv:1612.02058.
  4. John Preskill, Lecture Notes on Quantum Computation, Chapter 7 — theory.caltech.edu/~preskill/ph229.
  5. Wikipedia, Noisy intermediate-scale quantum era.
  6. IBM Research, IBM Quantum roadmap.