In short

As of early 2026, quantum computing has one logical qubit running below threshold (Google Willow, 105 superconducting qubits, distance d = 7 surface code, December 2024 Nature), fault-tolerant Clifford primitives on a small colour code (Quantinuum H2, 64 trapped ions, 2024 demonstrations with Microsoft collaboration on four logical qubits), and four-digit physical-qubit platforms (IBM Condor at 1121 superconducting qubits; Atom Computing at 1180 neutral atoms in a single array, November 2024). Jiuzhang (USTC) continues to produce photonic boson-sampling results at \sim100-photon scale; Microsoft announced a Majorana 1 topological-qubit prototype in February 2025 that the physics community still disputes. What nobody has done is demonstrate useful quantum advantage for any industrially interesting problem — every "advantage" claim sits inside narrow sampling tasks whose classical-simulation gap has been repeatedly re-closed. The target for the first useful fault-tolerant applications (small-molecule chemistry on \sim 20 electron orbitals, lattice-gauge-theory simulations at toy scale, error-corrected Grover on small databases) is 2028-2030; cryptographically relevant Shor's on RSA-2048 remains a 2040s horizon requiring \sim 2 \times 10^7 physical qubits. India's National Quantum Mission (₹6003 crore, 2023-2031) has set phase-1 targets of 50-100 physical qubits across superconducting, trapped-ion, photonic, and neutral-atom platforms at IIT Madras, TIFR Mumbai, IISc Bangalore, and Raman Research Institute Bangalore, with domestic startups QpiAI (Bangalore), BosonQ PSI (Bhilai), and QNu Labs (Bangalore) commercialising algorithms, optimisation, and quantum-safe cryptography in parallel. This chapter is the platform-by-platform, number-by-number, caveat-by-caveat snapshot of where the machine actually is.

Every few years the quantum-computing community has to look itself in the mirror and answer, honestly, the question the headlines keep avoiding: where are we really? Not where does the latest press release say we are. Not where does the skeptic-in-chief say we are. Where does the actual machine — the cryostat, the ion trap, the photonic chip, the laser array — stand?

You have come through a long curriculum. You know how qubits work. You know what quantum error correction is. You know why Preskill gave us the NISQ label in 2018 and why the logical-qubit demonstration on Willow in 2024 was genuinely a milestone. What you deserve, before leaving this part of the curriculum, is a calibrated tour: platform by platform, number by number, as of early 2026. A reference you can come back to when the next headline hits.

The answer is not "quantum computers have arrived." Nor is it "quantum computing is a hoax." The honest answer is in the middle, and it is specific: one logical qubit that works the way the threshold theorem says it should; small demonstrations of fault-tolerant Clifford gates; thousand-physical-qubit platforms on multiple architectures; zero useful industrial applications; a clear target of 2028-2030 for the first useful fault-tolerant demos, and 2035-2045 for the big things (chemistry at industrial scale, Shor's on real keys). That is the machine in 2026.

The platforms, side by side

Before drilling into each, here is the landscape at a glance. No single architecture has won; every serious player has a different trade-off.

Quantum-computing platforms — early 2026

| Platform | Technology | Biggest machine / qubits | Best 2Q error | Headline 2024-25 |
|---|---|---|---|---|
| IBM | superconducting transmon | Condor 1121 / Heron 133 | $\sim 5 \times 10^{-3}$ | dynamic circuits, Heron r2 |
| Google | superconducting + tunable couplers | Willow 105 | $\sim 3 \times 10^{-3}$ | $d=7$ below threshold |
| Quantinuum | trapped ion (Yb/Ba) | H2, up to 64 (H2-56) | $\sim 5 \times 10^{-4}$ | 4 logical qubits on colour code |
| IonQ | trapped ion (Yb, Ba) | Forte / Tempo 64 | $\sim 10^{-3}$ | cloud access, AQ benchmarks |
| Atom Computing | neutral atoms (caesium) | 1180-atom array (2024) | $\sim 5 \times 10^{-3}$ | biggest single array |
| QuEra / Pasqal | neutral atoms (Rb) | Aquila 256 / Pasqal 324 | $\sim 5 \times 10^{-3}$ | analog + digital modes |
| PsiQuantum | silicon-photonic, MBQC | Omega tape-out 2024 | — | fusion-based, GlobalFoundries Fab 10 |
| Xanadu | photonic (CV, squeezed light) | Borealis 216 modes | — | GKP roadmap, cloud access |
Eight flagship platforms, four architectures, zero convergence. Accent highlights mark the platforms with demonstrated logical-qubit primitives as of early 2026 (Google Willow's below-threshold surface code; Quantinuum H2's fault-tolerant colour-code gates), alongside the platform-scale standout (Atom Computing's 1180-atom array). No platform has yet demonstrated a multi-logical-qubit useful computation.

IBM — Heron the workhorse, Condor the count

IBM's roadmap split in 2023 into two lines. Condor (1121 qubits, announced late 2023) was the qubit-count flagship, designed to show that fabricating four-digit superconducting-qubit chips is feasible. Condor's error rates (\sim 1-2 \times 10^{-2} on two-qubit gates) are too high for deep circuits or for surface-code error correction at useful distance; Condor is a scaling demonstration, not a computational machine.

Heron (133 qubits, launched 2024 with a refreshed Heron r2 variant in 2025) is the chip IBM actually runs customer workloads on. Heron has improved tunable couplers, lower crosstalk, and, critically, dynamic circuits — mid-circuit measurement with classical feedback — which is the feature quantum error correction requires. IBM has not yet published a Willow-style below-threshold surface-code result on Heron, but the hardware capability to do so is in place.

The forward roadmap: Flamingo (2025, modular interconnects between chips), Crossbill (2025-2026, multi-chip modules), Nighthawk (2025-2026, 156-qubit chip with improved error rates), Kookaburra (2026, three Nighthawk-class modules wired together), and the flagship Starling target of 200 logical qubits by 2029. If IBM hits Starling, it will be the first declared multi-logical-qubit machine in commercial operation.

Google — Willow, and what comes after

Willow is the chip that crossed the threshold line on 9 December 2024: 105 superconducting transmon qubits, surface codes at d = 3, 5, 7, with the logical error rate falling by a factor \Lambda \approx 2.14 at each distance step. This was the first on-hardware demonstration that the threshold theorem works in practice. The logical qubits in practice chapter covers this in detail.
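The suppression that \Lambda quantifies is easy to sketch numerically. A minimal illustration (the starting error rate and the two suppression factors are round illustrative numbers, not Willow's published calibration data):

```python
def logical_error_rate(d, eps_d3, lam):
    """Logical error per cycle at surface-code distance d, given the rate
    eps_d3 at d = 3 and a suppression factor lam per distance step (d -> d + 2)."""
    steps = (d - 3) / 2
    return eps_d3 / lam ** steps

# Illustrative: start at 3e-3 at d = 3, compare two suppression factors.
for lam in (2.0, 3.0):
    print(lam, [logical_error_rate(d, 3e-3, lam) for d in (3, 5, 7, 9)])
```

Below threshold, each distance step divides the logical error by \Lambda; above threshold the same formula with \Lambda < 1 would make larger codes worse, which is why crossing the threshold line matters.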

What Willow did not do: demonstrate any logical gate (no logical Hadamard, no logical CNOT), run any computation on encoded qubits, or execute any algorithm on more than a single logical qubit. The press coverage sometimes conflated Willow's below-threshold demonstration with the separate 2024 random-circuit-sampling benchmark on physical (unencoded) qubits. Two different experiments, two different claims; both ran on the same chip, but one is a fault-tolerant primitive and the other is a supremacy-style benchmark.

Google's internal roadmap targets a successor chip (often called Willow generation 2 in the public blog posts, though the final name is likely to differ) with \sim 1000 qubits and the ability to host multiple logical qubits simultaneously. The key next experiments: logical two-qubit gates between two encoded qubits, and running a small logical algorithm end-to-end. Expected 2026-2028.

Quantinuum — the colour-code leader

Quantinuum, a UK-US company formed from Honeywell's trapped-ion program and Cambridge Quantum's software arm, runs the H-series of trapped-ion machines. H1 has 20 qubits; H2 has 56 or 64 qubits depending on configuration (the physical trap holds up to 64 ions; H2-56 is a commonly cited configuration).

Trapped ions have a fundamental advantage over superconducting qubits: per-gate physical errors are about ten times lower (\sim 5 \times 10^{-4} on two-qubit gates), and the noise model is cleaner (no leakage into non-computational levels, less correlated noise). The cost is speed: a trapped-ion two-qubit gate takes \sim 100 microseconds versus \sim 30 nanoseconds for superconducting — more than three orders of magnitude slower. For error correction, speed is not the bottleneck; fidelity is. For a billion-gate algorithm, the slower clock does matter.
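The clock-speed gap can be made concrete with one line of arithmetic. A sketch using the gate times quoted above (the 10^9-gate budget is a representative figure for a deep fault-tolerant circuit, and fully serial execution is assumed, which overstates both platforms):

```python
# Approximate two-qubit gate times from the text above (seconds per gate).
GATE_TIME = {"superconducting": 30e-9, "trapped_ion": 100e-6}

def wall_clock_hours(platform, n_gates=1e9):
    """Idealised serial runtime for n_gates two-qubit gates; ignores
    parallelism, measurement, reset, and decoding overheads."""
    return GATE_TIME[platform] * n_gates / 3600

print(round(wall_clock_hours("superconducting") * 3600), "s (superconducting)")  # 30 s
print(round(wall_clock_hours("trapped_ion"), 1), "h (trapped ion)")              # 27.8 h
```

Thirty seconds versus roughly a day: for deep circuits the slower ion clock is a real cost, even though fidelity, not speed, dominates the error-correction accounting.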

Quantinuum's 2024 breakthrough, announced jointly with Microsoft, was the demonstration of four logical qubits on a colour code with fault-tolerant Clifford gates (transversal H, S, CNOT) and a magic-state injection for T gates. The colour code is an alternative to the surface code with a key property: transversal implementation of the full Clifford group (not just a subset), which simplifies the logical-gate budget. This was the first end-to-end logical-qubit demonstration with a non-trivial gate count on multiple encoded qubits.

Quantinuum is the leading platform for small-scale fault-tolerance experiments in 2024-2026. Their roadmap targets scaling the ion trap to hundreds of ions while preserving fidelity, and demonstrating logical algorithms on 10-20 encoded qubits by the late 2020s.

IonQ — Forte, Tempo, and cloud access

IonQ is the other major trapped-ion company, with machines including Forte (32 qubits, available since 2022) and Tempo (64-qubit target, 2024-2025 rollout). IonQ's architectures use a shuttling-based approach in which ions are physically moved between different trap regions to implement multi-qubit operations — a different trade-off from Quantinuum's static-register geometry.

IonQ's commercial footprint is broader than its research footprint: Forte is accessible via AWS Braket and Azure Quantum, and IonQ has shipped on-premises systems to research customers. The Algorithmic Qubits metric (a variant of quantum volume) is IonQ's preferred benchmark; Forte reports 35 AQ, meaning it can reliably run circuits on 35 qubits (a Hilbert space of dimension 2^{35}) at moderate depth.
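One way to calibrate a 2^{35}-dimensional Hilbert space: merely storing the dense statevector classically already costs half a terabyte, which is roughly where brute-force classical simulation stops being routine. A quick check (complex128 amplitudes assumed; this is a memory bound only, not a full simulability argument, since tensor-network methods can do better on shallow circuits):

```python
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory to hold a dense n-qubit statevector (complex128 = 16 bytes)."""
    return (2 ** n_qubits) * bytes_per_amplitude

print(statevector_bytes(35) / 2 ** 30, "GiB")  # 512.0 GiB
```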

IonQ has not yet demonstrated a below-threshold fault-tolerant primitive comparable to Quantinuum's 2024 colour-code results, but the physical gate fidelities are in the same general range.

Atom Computing — the thousand-atom array

Atom Computing, a Berkeley-area startup, took neutral-atom quantum computing into four-digit territory in November 2024 by trapping 1180 caesium atoms in a single optical-tweezer array with individual addressability. This is, as of early 2026, the single largest array of programmable qubits on any platform by atom count.

Two caveats: the 1180 number is the atom count; the usable qubit count for a programmable computation is somewhat lower because atoms at array edges have degraded fidelity and because some atoms must be used for ancillary tasks (measurement, cooling, reloading). The two-qubit gate error for the largest-array configuration is in the 5 \times 10^{-3} range — competitive but not yet in the Quantinuum regime.

Atom Computing has also demonstrated mid-circuit measurement on neutral atoms — a non-trivial achievement, since neutral atoms have traditionally been hard to measure without destroying them. Mid-circuit measurement is a prerequisite for dynamic circuits and therefore for error correction; Atom Computing has declared a late-2020s target for neutral-atom fault-tolerance demonstrations.

QuEra and Pasqal — the analog-plus-digital neutral-atom line

QuEra (Boston) and Pasqal (Paris) both run rubidium-based neutral-atom arrays with a distinctive feature: their machines can run in either analog mode (continuous-time evolution under a programmable Rydberg-atom Hamiltonian, ideal for combinatorial-optimisation ansätze and spin-model simulation) or digital mode (circuit-model with discrete gates). QuEra Aquila has 256 atoms; Pasqal's latest arrays push past 300.

The analog mode has produced some of the cleanest demonstrations of quantum simulation on \sim 256-atom spin systems. The digital mode is more recent and is catching up with the fidelities of other platforms. Both companies target the late 2020s for error-correction primitives on their platforms, with the large atom counts making high-distance codes at least geometrically feasible.

PsiQuantum — the silicon-photonic bet

PsiQuantum (Palo Alto, founded 2015) is not building NISQ machines. Their explicit bet is to skip NISQ entirely and build a million-qubit fault-tolerant photonic machine in the early 2030s, using measurement-based quantum computing in a variant called fusion-based quantum computing. In 2024, jointly with GlobalFoundries, they taped out the Omega chip — wafer-scale silicon-photonic integration of single-photon sources, waveguides, beamsplitters, phase shifters, and superconducting-nanowire single-photon detectors, fabricated in Fab 10 (upstate New York).

Omega is not a quantum computer. It is the building block — the first integrated wafer that can produce, route, and detect single photons at scale. Building a million-qubit photonic fault-tolerant machine requires linking thousands of Omega-class chips with phase-stable interconnects, integrating with classical control, and demonstrating sustained logical-qubit operation. PsiQuantum's public target remains early 2030s. No interim NISQ demonstrations are planned — they have explicitly staked the company on the endgame.

Xanadu — continuous-variable photonics and GKP

Xanadu (Toronto) pursues photonic quantum computing on a different axis: continuous-variable (CV) encoding, where quantum information lives in the electric-field quadratures of squeezed light. Their 2022 Borealis machine demonstrated Gaussian boson sampling with 216 photonic modes — a supremacy-style result on a photonic platform. Their roadmap to fault tolerance uses GKP (Gottesman-Kitaev-Preskill) states as the non-Gaussian resource that upgrades Gaussian CV-MBQC to universal computation.

Xanadu has a mature cloud platform (X-series) and a software stack (Strawberry Fields, PennyLane) that has become one of the most-used toolchains for variational quantum machine learning, regardless of which hardware the algorithm eventually runs on.

The 2024-2025 milestones, in one place

Pulling the individual platform stories together, here are the specific achievements that 2024-2025 delivered — the moments when the field moved, not just the press releases.

Key 2024-2025 quantum-computing milestones — the moments the field moved:

Jan 2024: Quantinuum H2 colour code, 4 logical qubits
Apr 2024: IBM Heron launch, dynamic circuits, 133 qubits
Nov 2024: Atom Computing 1180-atom array (biggest single array)
Dec 2024: Google Willow $d=7$ below threshold (Nature)
Dec 2024: PsiQuantum Omega tape-out at GlobalFoundries
Feb 2025: Microsoft Majorana 1 prototype (scientifically contested)
Mid 2025: Pasqal and QuEra scale past 300 atoms

Two below-threshold or logical-qubit results; one thousand-qubit-class platform; two fabrication milestones. No useful quantum advantage.
A concentrated burst of milestones. Willow (Dec 2024) is the single most important — the first hardware demonstration of below-threshold operation. Quantinuum's colour-code work (early 2024) is the first end-to-end logical-qubit computation with gates. Atom Computing's 1180-atom array is the platform-capability story. Microsoft's Majorana 1 is marked in grey because the underlying physics claims remain scientifically contested as of early 2026.

A quick note on Majorana 1. Microsoft's February 2025 announcement claimed the first demonstration of topological qubits — qubits whose intrinsic physical encoding suppresses errors without the software overhead of a surface code. If the underlying physics is real, Microsoft's roadmap to useful fault tolerance could be dramatically shorter than the surface-code route. The trouble is that the physics — detecting Majorana zero modes in topological-superconductor nanowires — has a long history of claimed-and-retracted results over the past decade. Microsoft's February 2025 Nature paper has received mixed reception: parts of the community accept it, parts remain sceptical, and an independent replication by a second laboratory has not yet appeared. The honest position in early 2026 is unresolved.

The progress metrics — where the numbers actually are

Four numbers summarise where each platform stands: qubit count, two-qubit gate fidelity, coherence time, and logical-qubit count. Pulled together:

Physical-qubit growth, 2018-2026 (semi-log chart: year on the x-axis, physical qubit count from 10 to 10^4 on the y-axis, one curve per architecture). Superconducting rises from ~50 in 2018 (Sycamore 53) through Osprey 433 to Condor 1121 by late 2023. Trapped ion grows from ~10 in 2018 to 64 by 2024 (Quantinuum H2). Neutral atoms jump from 50 in 2020 to 256 in 2023 and 1180 in 2024 (Atom Computing). Qubit counts grew by two orders of magnitude in six years on the superconducting and neutral-atom platforms.
Qubit count alone is the easiest metric to chart and the most misleading. Superconducting and neutral-atom platforms have gone from ~50 qubits in 2018 to ~1000 in 2024. Trapped ions have grown slower but at much higher fidelity per qubit. The fidelity and logical-qubit counts are the harder, more important metrics; see the earlier table.

The honest assessment of "useful quantum advantage"

Here is the question nobody wants to answer directly in press releases: has any quantum computer, anywhere, solved a problem of real practical interest faster than a classical computer?

The answer in early 2026 is no.

Hype check. The phrase "quantum advantage" has been used in three distinct and non-overlapping senses: narrow sampling tasks (Sycamore 2019, Jiuzhang 2020-2023, Willow's 2024 circuit-sampling benchmark), cost-model arguments for specific optimisation instances (heavily contested), and outright marketing claims. Only the first sense — narrow sampling tasks — has produced a reproducible experimental result that the physics community broadly accepts, and even there improvements in classical algorithms have repeatedly dequantised the supposed advantage. Sycamore's 2019 estimate of 10,000 years of classical simulation has been reduced by later classical work to hours on a modern GPU cluster; Jiuzhang's boson-sampling advantage has similarly shrunk to the scale of thousands of classical core-hours. The supremacy frontier keeps moving because the classical algorithms keep improving.

What the field genuinely has demonstrated:

  1. Below-threshold operation of a single surface-code logical qubit (Willow, d = 7).
  2. Fault-tolerant Clifford gates, plus magic-state injection, on four colour-code logical qubits (Quantinuum H2).
  3. Thousand-qubit-class physical platforms on two architectures (IBM Condor; Atom Computing's 1180-atom array).
  4. Supremacy-style sampling results whose classical-simulation gap keeps being re-closed.

What the field has not demonstrated:

  1. Useful quantum advantage on any industrially interesting problem.
  2. A useful computation spanning multiple logical qubits.
  3. Logical algorithms at the 10-20 encoded-qubit scale.
  4. Anything close to the qubit counts and gate budgets that Shor's-scale algorithms require.

This is not a pessimistic assessment. It is a calibrated one. The field has in the past six years crossed two enormous thresholds (qubit count scaling to 1000+; gate fidelity below the surface-code threshold) that many experts thought would take longer. The next two thresholds — multiple logical qubits in a single machine; useful fault-tolerant algorithms — are the ones that would turn quantum computing from a research field into an industry. The target window is 2028-2030 for the first useful fault-tolerant demos, and 2035+ for anything approaching industrial deployment.

The worked examples

Example 1: The platform-comparison matrix, 2026

Setup. Fill in one table row per platform with the specific numbers you would quote in a talk. The format: [qubit type] / [largest machine] / [best 2Q error] / [logical qubits demonstrated] / [2027 roadmap target].

Step 1. IBM: superconducting transmon / Condor 1121 (count), Heron 133 (usable) / 5 \times 10^{-3} / none demonstrated (dynamic circuits capable) / Starling 200 logical. Why two-row entries for IBM: Condor is the qubit-count flagship but not the workhorse. Heron is the machine you would actually pick for a state-of-the-art demonstration today. Any serious state-of-the-art table has to carry both.

Step 2. Google: superconducting + tunable couplers / Willow 105 / 3 \times 10^{-3} / 1 (surface code d=7) / Willow-generation-2 with multi-logical.

Step 3. Quantinuum: trapped ion (Yb/Ba) / H2-56 / 5 \times 10^{-4} / 4 (colour code with Cliffords) / logical algorithm on 10-20 encoded qubits.

Step 4. Atom Computing: neutral atoms (Cs) / 1180-atom array / 5 \times 10^{-3} / none demonstrated (mid-circuit measurement demonstrated) / neutral-atom logical primitives.

Step 5. Atom Computing, QuEra, Pasqal, IonQ, PsiQuantum, Xanadu complete the table along the same format.

Result. A five-column matrix per platform is what a state-of-the-art talk actually needs. The matrix changes every 12-18 months; the columns do not. Cache the columns; refresh the numbers. Why just five columns: these are the five numbers that discriminate between platforms on the dimensions that matter for a useful fault-tolerant machine. Qubit count without fidelity is marketing; fidelity without qubit count is a physics experiment. Logical-qubit count is the fault-tolerant metric. The 2027 target captures roadmap credibility.
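"Cache the columns; refresh the numbers" can be taken literally. A sketch of the matrix as data, transcribed from the steps above (the field layout and the ranking heuristic at the end are my own illustrative choices, not an established benchmark):

```python
# (technology, largest machine, best 2Q error, logical qubits shown, roadmap target)
PLATFORMS = {
    "IBM":            ("superconducting transmon", "Condor 1121 / Heron 133",
                       5e-3, 0, "Starling: 200 logical by 2029"),
    "Google":         ("superconducting + couplers", "Willow 105",
                       3e-3, 1, "multi-logical-qubit successor"),
    "Quantinuum":     ("trapped ion (Yb/Ba)", "H2-56",
                       5e-4, 4, "logical algorithm on 10-20 encoded qubits"),
    "Atom Computing": ("neutral atoms (Cs)", "1180-atom array",
                       5e-3, 0, "neutral-atom logical primitives"),
}

# Rank by demonstrated logical qubits (descending), then best 2Q error (ascending).
ranked = sorted(PLATFORMS, key=lambda p: (-PLATFORMS[p][3], PLATFORMS[p][2]))
print(ranked)  # ['Quantinuum', 'Google', 'IBM', 'Atom Computing']
```

Refreshing the snapshot for a new talk then means editing the dictionary values, not the structure.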

Example 2: Extrapolating 2026 to 2030

Setup. Given the 2024-2026 scaling trends, what are credible 2030 milestones for each of the four big numbers (physical qubit count, 2Q gate error, logical qubit count, "useful advantage")?

Step 1. Physical qubits: superconducting qubit counts grew roughly 2x per year from 2019 to 2023. Assume half that rate going forward (the field is entering harder engineering territory): 1000 in 2024 → ~3000-5000 in 2028 → ~10000 by 2030 on the best chips. Neutral atoms on a similar trajectory. Why slow the growth rate: scaling from 1000 to 10000 qubits on superconducting runs into wiring, cryogenics, and control-electronics bottlenecks that did not constrain the 100-1000 regime. The engineering problems get harder, not easier, as you scale.

Step 2. 2Q gate error: the best platforms (Quantinuum, Willow) sit at 5 \times 10^{-4}-3 \times 10^{-3}. By 2030, expect 10^{-4}-10^{-3} on the best platforms — a factor-of-3-to-5 improvement, consistent with the historical pace.

Step 3. Logical qubits: 1-4 today → 10-100 by 2028-2030 is the industry target across multiple roadmaps. 1000+ logical qubits is a 2035 target for the fastest-advancing platform.

Step 4. Useful advantage: the specific target is a small-molecule quantum simulation of something like FeMoco, the nitrogenase active-site cofactor — requiring 50-100 logical qubits running 10^7-10^9 logical gates. Credible window 2030-2032 on the optimistic end, 2035+ on the realistic end. Cryptographically relevant Shor's is 2040+.

Result. The 2030 picture, extrapolating credibly: ~10000 physical qubits, ~10-100 logical qubits, useful fault-tolerant quantum chemistry demonstrations on small molecules. Not RSA factoring. Not drug discovery. Not "solve the climate crisis." A first generation of genuinely useful fault-tolerant quantum computation on modest problem sizes. Why the demonstrations are "chemistry on small molecules" specifically: quantum chemistry is the application with the clearest quantum advantage (exponential classical scaling in electron count; polynomial quantum scaling); the instances fit the projected logical-qubit counts of 2028-2030 hardware; and molecules have natural benchmark hierarchies (H₂, then LiH, then BeH₂, then larger). Shor's on RSA-2048 needs orders-of-magnitude more resources and is therefore a 2040+ project even under optimistic extrapolations.
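Step 1's extrapolation is one line of compound growth. A sketch, assuming the 2x-per-year pace of 2019-2023 halves to roughly \sqrt{2}x per year (i.e. 2x every two years) from a 1000-qubit 2024 baseline; both figures are the assumptions stated above, not measured data:

```python
def projected_qubits(year, base_year=2024, base_count=1000, annual_factor=2 ** 0.5):
    """Compound-growth projection: base_count qubits in base_year, growing
    by annual_factor each year (half the ~2x/year pace of 2019-2023)."""
    return base_count * annual_factor ** (year - base_year)

for year in (2026, 2028, 2030):
    print(year, round(projected_qubits(year)))  # 2000, 4000, 8000
```

The 2028 value lands inside the ~3000-5000 band quoted above, and 2030 lands at order 10^4; halving or doubling the assumed growth rate shifts these figures by roughly a factor of two either way.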

The India angle — National Quantum Mission status

The National Quantum Mission (NQM) was approved by the Indian Union Cabinet in April 2023 with a budget of ₹6003 crore (~$720 million at 2023 exchange rates) over the period 2023-2031. The mission is organised around four verticals — quantum computing, quantum communication, quantum sensing and metrology, and quantum materials and devices — and implemented as a hub-and-spoke architecture spread across the Indian academic and research ecosystem.

The quantum-computing hub targets, as of mid-2026:

  1. Phase-1 machines of 50-100 physical qubits, pursued in parallel on superconducting, trapped-ion, photonic, and neutral-atom platforms.
  2. Host institutions: IIT Madras, TIFR Mumbai, IISc Bangalore, and Raman Research Institute Bangalore.
  3. A longer-term mission target (by 2031) of domestically built 1000-qubit-class superconducting hardware.

The three Indian quantum-computing startups visible in 2026:

  1. QpiAI (Bangalore): quantum algorithms and optimisation.
  2. BosonQ PSI (Bhilai): quantum-powered engineering simulation and optimisation.
  3. QNu Labs (Bangalore): quantum-safe cryptography, including QKD products.

On the cryptographic side, CERT-In and MeitY (the Ministry of Electronics and IT) have issued post-quantum-cryptography migration guidance aligned with the NIST PQC standards (CRYSTALS-Kyber, Dilithium, SPHINCS+ as of 2024), with a target of PQC migration for government systems — including parts of the Aadhaar and UPI ecosystems — by around 2030. This is well ahead of the credible Shor's-algorithm-threat timeline but appropriately cautious given harvest-now-decrypt-later concerns: data encrypted today can be stored and decrypted when future hardware permits.

India is not leading on hardware — the flagship demonstrations remain in the US, UK, and China — but the NQM is realistic about this. The policy design is to build domestic capability in parallel with accessing global capability, to train the next generation of Indian quantum physicists and engineers at IITs and IISc, and to position Indian startups for the post-2030 commercialisation wave when fault-tolerant hardware finally becomes useful.

Common confusions

  1. Physical qubits vs logical qubits: Condor's 1121 physical qubits and Willow's single logical qubit measure different things; logical qubits are the currency of fault tolerance.
  2. Willow's two results: the below-threshold surface-code demonstration and the random-circuit-sampling benchmark were separate experiments that happened to run on the same chip.
  3. "Quantum advantage" vs useful advantage: every advantage claim to date lives in narrow sampling tasks, not in industrially interesting problems.
  4. Qubit count vs fidelity: count without fidelity is marketing; fidelity without count is a physics experiment.

Going deeper

If you understand that as of early 2026 the field has one logical qubit on Willow below threshold, four logical qubits on the Quantinuum colour code with Clifford gates, thousand-qubit-class physical arrays on superconducting and neutral-atom platforms, zero demonstrated useful quantum advantage, and a 2028-2030 target for the first useful fault-tolerant applications — you have chapter 175. What follows is the detailed roadmap for each major platform, the industrial-application timelines, and the NQM implementation architecture at the level a policy analyst or hardware PhD student would need.

Detailed platform roadmaps

IBM. Published roadmap (2023 update): Condor (1121, 2023) → Flamingo (modular, 2025) → Crossbill (multi-chip, 2025) → Nighthawk (156 qubits, 2025-26) → Kookaburra (3-way linked Nighthawk, 2026) → Starling (200 logical qubits, 2029) → Blue Jay (2033+, scaling Starling). The 2029 Starling target is the most ambitious near-term logical-qubit milestone publicly declared by any vendor. Meeting it requires dynamic-circuit performance improvements by about 10x from the 2024 Heron baseline, combined with reliable multi-chip coupling and real-time decoder throughput at distance d \sim 20.

Google. Google's roadmap is less publicly staged than IBM's, but research papers and conference talks suggest a phased push: Willow (2024) → successor chip with ~1000 qubits (2026-27) → multi-logical-qubit demonstrations (2027-28) → small fault-tolerant algorithm (2029-30). The metric Google's team emphasises internally is \Lambda — the ratio of logical error suppression per distance step — which Willow measured at \sim 2.14 at d \le 7. Maintaining \Lambda \ge 2 up to d \sim 25 is the technical bet that Shor's-scale algorithms rest on.

Quantinuum. Public target: systematic scaling of the H-series ion count from 64 toward 256 by late decade, combined with improved coherence and gate fidelity. The colour-code logical-qubit count has grown from 1 (2023) to 4 (2024) to a target of \sim 10-20 (late 2020s), with logical algorithms as the 2028-2030 horizon.

IonQ. Algorithmic-qubit-oriented roadmap (2024 guidance): Forte (35 AQ, 2024) → Tempo (64 physical qubits, 64+ AQ target) → error-correction-capable machine by 2029. Cloud-first commercial model.

Atom Computing, QuEra, Pasqal. Neutral-atom roadmaps target 10000-atom arrays by the late 2020s, with the architectural claim that neutral atoms will scale the most cleanly among all platforms (trapping more atoms is "just more laser beams and more trap sites"). Error-correction primitives are targeted for 2027-2029.

PsiQuantum. Explicit bet: skip NISQ, deliver a fault-tolerant machine with \sim 10^6 photonic qubits by early 2030s. The Omega chip (2024 tape-out) is the first building block; achieving fault-tolerance requires linking thousands of such chips with phase-stable interconnects.

Xanadu. CV-MBQC with GKP states, cloud-first (Borealis, X-series). 2024-2025 roadmap added integrated GKP state generation; fault-tolerant demonstrations targeted for late 2020s.

Microsoft. Topological-qubit path (Majorana 1, 2025) — if the physics holds up, aggressive roadmap to \sim 100 topological qubits and useful applications in the 2030s; otherwise, Microsoft has invested in Quantinuum partnerships and software (Azure Quantum) as alternative paths.

Industrial applications timeline

The applications most likely to see genuine quantum advantage, in roughly increasing scale of required resources:

  1. Small-molecule quantum chemistry (\text{H}_2 through \text{LiH}, \text{BeH}_2). 10-20 logical qubits, 10^4-10^6 logical gates. Expected window: 2028-2030.
  2. Lattice-gauge-theory simulation at toy scales. 50-100 logical qubits, 10^6-10^8 logical gates. Expected window: 2030-2033.
  3. Fault-tolerant Grover's on small but non-trivial databases. 50-200 logical qubits. Expected window: 2030-2035.
  4. Medium-molecule chemistry (FeMoco, active sites of industrial enzymes). 100-500 logical qubits. Expected window: 2032-2035.
  5. Materials simulation (high-temperature superconductors, battery electrolytes, catalysts). 500-2000 logical qubits. Expected window: 2035-2040.
  6. Full combinatorial optimisation at industrial scale. Still unclear that quantum advantage exists; if it does, 2035+.
  7. Shor's on RSA-2048. 8000 logical qubits, 10^{10}-10^{11} logical gates, \sim 2 \times 10^7 physical qubits at surface-code distance \sim 25. Expected window: 2040-2050.

The first genuinely useful fault-tolerant application is most likely small-molecule quantum chemistry, because the problem structure fits the strengths of quantum hardware (exponential classical scaling, unitary time evolution) and because the benchmark problem hierarchy is natural (start at \text{H}_2, scale up).
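The physical-qubit figure for Shor's in item 7 above follows from a standard surface-code overhead estimate. A rough sketch (the 2d^2 physical qubits per logical qubit and the factor-of-2 allowance for routing space and magic-state factories are common rules of thumb, not any specific paper's accounting):

```python
def physical_qubit_estimate(n_logical, distance, overhead=2.0):
    """Surface-code overhead: ~2 * d^2 physical qubits per logical qubit
    (data plus measurement qubits), times an overhead factor for routing
    and magic-state distillation factories."""
    per_logical = 2 * distance ** 2
    return int(n_logical * per_logical * overhead)

# Item 7's resource point: 8000 logical qubits at distance ~25.
print(physical_qubit_estimate(8000, 25))  # 20000000, i.e. ~2e7
```

which reproduces the \sim 2 \times 10^7 physical qubits quoted above for RSA-2048.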

Finance, chemistry, optimisation — where the money actually goes

Pharma (Merck, Roche, Boehringer Ingelheim): quantum-chemistry partnerships with Quantinuum, IBM, and Google, typically focused on electronic-structure methods for lead-compound discovery. Timeline to commercial payoff: 2030+ under optimistic scenarios.

Finance (JPMorgan Chase, Goldman Sachs, HSBC): quantum-optimisation research for portfolio construction and derivatives pricing. No credible claim of industrial advantage yet; most experiments are scoping work on \le 50-qubit NISQ hardware.

Chemicals and materials (BASF, ExxonMobil, TotalEnergies): catalyst discovery, molecular simulation. Very similar timeline to pharma.

Logistics and scheduling (DHL, Volkswagen, Airbus): QAOA-style optimisation problems on NISQ hardware. Interesting NISQ demonstrations; unclear path to demonstrated industrial advantage.

Cybersecurity (governments, banking, telecom): quantum-safe-cryptography migration is the single most mature commercial thread, because it is driven by harvest-now-decrypt-later rather than by current quantum-computing capability. NIST's 2024 post-quantum-cryptography standards (CRYSTALS-Kyber, Dilithium, SPHINCS+) are being deployed now.

NQM implementation architecture

The four Technology Hubs (T-Hubs) of the NQM and their institutional leads:

  1. Quantum computing: led from IISc Bangalore.
  2. Quantum communication: led from IIT Madras.
  3. Quantum sensing and metrology: led from IIT Bombay.
  4. Quantum materials and devices: led from IIT Delhi.

Cross-hub coordination sits with the NQM Governing Body and a Mission Director under the Department of Science and Technology. A parallel Quantum Skills Mission trains PhD and MTech students across the IIT and IISc systems specifically in quantum hardware, software, and applications.

The sovereign-capability thesis: by 2031 India should have domestically-built 1000-qubit-class superconducting hardware, domestic trapped-ion and photonic platforms at 100-qubit scale, working QKD deployment in government and strategic sectors, and a trained workforce of a few thousand quantum-specialist scientists and engineers. The commercial payoff scenarios are 2031+ when global fault-tolerance crosses the usefulness threshold; the sovereign-capability payoffs (defence, strategic, cryptographic) are 2028+ on the QKD and sensing sides.

For the reader considering this area

If you are a 15-year-old reading this in 2026, the simplest summary: quantum computing is genuinely real, genuinely promising, and genuinely still a decade away from industrial utility. The decade between now and 2035 will likely produce more of the important results in the history of the field than any previous decade. Learning quantum information, quantum algorithms, and quantum hardware now positions you for the commercialisation wave when it arrives. The NQM hubs in India — IIT Madras, TIFR, IISc, RRI — are serious research institutions, competitive with international counterparts, and they will be running larger and more interesting hardware every year through the late 2020s. The commercial startups (QpiAI, BosonQ PSI, QNu Labs) are hiring. The global companies (IBM, Google, Quantinuum, IonQ) all have Indian collaborations.

The single best thing you can do if you want to enter this field: build a solid foundation in linear algebra, quantum mechanics, and classical algorithms. Take the freely available online courses (Qiskit Textbook, Xanadu's PennyLane tutorials, Preskill's lecture notes). Pick one platform's cloud access and run actual quantum programs on it. The hardware gets bigger every year; the intellectual framework is what you have to build once.

Where this leads next

References

  1. Google Quantum AI, "Quantum error correction below the surface code threshold", Nature 638, 920 (2024); arXiv:2408.13687.
  2. Atom Computing, 1180-atom neutral-atom quantum computer announcement (2024), atom-computing.com.
  3. IBM Quantum, IBM Quantum roadmap (2024 update), ibm.com/quantum/roadmap.
  4. Wikipedia, "Quantum computing".
  5. John Preskill, "Quantum Computing in the NISQ era and beyond" (2018; 2023 retrospective), arXiv:1801.00862.
  6. Government of India, National Quantum Mission, Department of Science and Technology, dst.gov.in/national-quantum-mission.