In short
DiVincenzo's criteria are a seven-item checklist published by IBM theorist David DiVincenzo in 2000 (in a paper titled The Physical Implementation of Quantum Computation). They are the standard scorecard for evaluating any candidate quantum-computing platform. The first five are the core requirements for computation: (1) scalable, well-characterised qubits; (2) the ability to initialise qubits in a known reference state such as |0\rangle; (3) decoherence times long enough to run the desired circuit; (4) a universal gate set (single-qubit gates plus at least one entangling two-qubit gate — Clifford+T or an equivalent); (5) qubit-specific measurement. The last two — (6) the ability to interconvert stationary qubits with flying (photonic) qubits, and (7) the ability to faithfully transmit flying qubits between locations — are the networking criteria, needed if you want a distributed or networked quantum computer rather than a single monolithic machine. Today's platforms — superconducting transmons (IBM Condor 1121 qubits, IBM Heron 133 with high fidelity; Google Willow 105), trapped ions (Quantinuum H2 56 qubits, IonQ Tempo), photonics (PsiQuantum, Xanadu), neutral atoms (QuEra 256, Atom Computing 1180+), diamond NV centres (TU Delft, QBLOX) — each have a distinctive profile on the seven criteria. No platform currently aces every row. DiVincenzo's list was written for the pre-error-correction era; in the logical-qubit era a corresponding "logical-DiVincenzo" version applies to encoded qubits, and the scorecard changes (logical error rate replaces physical, logical gate fidelity replaces physical, entangling connectivity between logical qubits becomes a first-class requirement). India's National Quantum Mission targets four of the five major hardware platforms at TIFR, IIT Madras, IIT Delhi, and RRI; the DiVincenzo list is the yardstick used to track progress.
In the year 2000, a theorist at IBM named David DiVincenzo wrote a short paper titled The Physical Implementation of Quantum Computation. Quantum computing was already a hot field — Shor's algorithm existed, teleportation had been demonstrated, NMR machines had run two-qubit circuits — but the engineering discussion was a zoo. Every experimental group had its own favourite hardware: ion traps, superconducting circuits, photons, quantum dots, nuclear spins in liquid NMR, optical lattices of neutral atoms. Each group told a different story about why theirs was the right platform. There was no way to compare them.
DiVincenzo's paper did something modest and powerful. It wrote down a checklist. Seven concrete physical requirements a system had to meet to count as a quantum computer. If your platform achieved them, you had a quantum computer. If it did not, it did not matter how elegant the underlying physics was — you were not there yet.
Twenty-six years later that checklist is still how the field talks about hardware. Every quantum-computing company benchmarks itself against DiVincenzo. Every conference review talk opens with "let us evaluate platform X against the five criteria." The list has outlived the specific hardware generations it was written for because it identifies what is actually necessary about a quantum computer, regardless of how it is built.
This chapter is about the seven criteria — what each one means, why it matters, and how the major platforms of 2026 score on each.
The seven criteria
DiVincenzo presented his list as "five plus two": five core computation requirements, plus two additional criteria for quantum communication. We will follow that structure, numbering them 1 through 7 for convenience.
Criterion 1 — Scalable, well-characterised qubits
The requirement. You need qubits. A lot of them. And each qubit has to be genuinely a qubit — a two-level quantum system whose two states (call them |0\rangle and |1\rangle) are well isolated from the rest of the Hilbert space.
Two parts, both important. Well-characterised means you know what your qubit is: its energy levels, its Hamiltonian, its couplings to neighbours and to the environment. A system where the qubit's parameters drift day to day or differ chip to chip by factors of two is not well-characterised and is not usable.
Scalable means you can make more of them. If making one qubit takes a person-year of laboratory effort, and making two takes five person-years (because they interact), and making ten takes a hundred person-years because the cryostat runs out of room, then the platform does not scale. Scaling requires a manufacturable process: a fabrication technique that produces qubits at a rate that matches the quantum-volume growth the field needs.
What it rules out. A system where each qubit is a unique laboratory artefact — hand-tuned by a postdoc with a diamond microscope over six weeks — can run beautiful small demonstrations but cannot scale to thousands of qubits. The history of the field is strewn with lovely hand-built systems that never scaled: photonic cluster-state demonstrations of the 2000s, early nitrogen-vacancy-centre qubits, some liquid-NMR approaches.
Platform scores in 2026.
- Superconducting (IBM, Google, Rigetti): excellent scalability — IBM Condor has 1121 qubits on a single chip (announced December 2023), IBM's 2025 roadmap targets 10,000+ by 2029 via multi-chip modules. Characterisation is good: transmon parameters are measured per-qubit in calibration runs.
- Trapped ions (Quantinuum H2, IonQ, AQT): very well-characterised (natural atomic levels, identical across all ions of the same species) but scaling is harder — H2 runs 56 qubits in a QCCD (quantum charge-coupled device) architecture that shuttles ions between trap zones. Scaling to thousands is a serious engineering challenge.
- Neutral atoms (QuEra, Atom Computing, Pasqal): strong on scaling — Atom Computing announced a 1180-qubit system in late 2023, QuEra's Aquila supports 256. Because all atoms of one species are identical, the qubits are naturally well-characterised. The challenge is individual-atom addressing at scale.
- Photonic (PsiQuantum, Xanadu): photon-number qubits are identical by physics. PsiQuantum targets million-qubit machines via silicon-photonic fabrication at GlobalFoundries. Probabilistic gates are the main obstacle, not scaling.
- Diamond NV centres (TU Delft, Qutech, QBLOX): excellent characterisation but scaling to large arrays is the outstanding challenge. Best systems are 10-20 qubits per node.
Criterion 2 — Reliable initialisation
The requirement. Every qubit must be initialisable into a known reference state — conventionally |0\rangle — with high fidelity before a computation begins. This matters because quantum computation is a specific transformation on a specific input state; if you do not know your input, you cannot interpret your output.
The fidelity target is determined by whatever initialisation-error budget your application can tolerate. For NISQ demonstrations, 99% initialisation fidelity suffices. For fault-tolerant quantum computing, the requirement is much stricter — initialisation errors propagate through the first cycle of syndrome measurements and must be below the surface-code threshold.
What it rules out. Any platform where the qubit has no natural way to relax to a known state, or where the relaxation is much slower than the computation time scale, fails this criterion. This was an historical problem for liquid-state NMR quantum computing — nuclear spins at room temperature are in a nearly maximally mixed thermal state, and "pseudo-pure" state preparation was the hacky workaround.
Platform scores.
- Superconducting: reset via dispersive measurement + conditional flip, or via heat-exchange with a cold reservoir. Fidelity typically 99.5-99.9%. Initialisation takes ~100 ns to a few microseconds.
- Trapped ions: laser-cool to the motional ground state, then optical pumping to the hyperfine state |0\rangle. Fidelity > 99.9%. Slow (milliseconds) but reliable.
- Neutral atoms: laser cooling + optical pumping. Initialisation fidelity > 99%.
- Photonic: single-photon sources produce |0\rangle (vacuum) or |1\rangle (one photon) — intrinsic to the detection scheme. Main challenge is source efficiency (are you sure you got exactly one photon, not two or zero?).
- Diamond NV: initialisation by optical pumping — laser illumination preferentially relaxes the nitrogen-vacancy centre's electron spin into the m_s = 0 state via a spin-selective intersystem crossing. Fidelity 95-99% depending on the setup.
All major platforms do well on criterion 2. This is the least controversial of the five core requirements in 2026.
Criterion 3 — Long decoherence times
The requirement. A qubit's state must persist long enough to complete the computation you intend. Decoherence — the loss of quantum information to the environment — is parameterised by two time constants:
- T_1 (energy relaxation, amplitude damping): the timescale over which an excited state |1\rangle decays back to |0\rangle.
- T_2 (dephasing): the timescale over which the relative phase between |0\rangle and |1\rangle in a superposition gets scrambled. Always T_2 \le 2T_1.
The effective rule is: coherence time must exceed gate time by the circuit depth you want to run. If a two-qubit gate takes 50 ns and you want a 1000-gate circuit, you need coherence times of at least 50 μs — ideally much more to leave fidelity margin.
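The budget arithmetic above fits in a two-line helper (a sketch; the 10x default safety margin is an illustrative assumption, not a hard rule):

```python
def min_coherence_us(gate_time_ns: float, depth: int, margin: float = 10.0) -> float:
    """Minimum coherence time (in microseconds) needed to run `depth`
    sequential gates of duration `gate_time_ns`, with a fidelity safety `margin`."""
    return gate_time_ns * depth * margin / 1000.0  # ns -> us

# 1000-gate circuit of 50 ns gates: 50 us bare, 500 us with a 10x margin
print(min_coherence_us(50, 1000, margin=1.0))  # 50.0
print(min_coherence_us(50, 1000))              # 500.0
```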
Why T_2 \le 2T_1 — an important bound: amplitude damping (T_1) necessarily also causes dephasing, because a state that has decayed to the ground state has lost its phase information too. The exact relation is T_2^{-1} = (2T_1)^{-1} + T_\varphi^{-1}, where T_\varphi is "pure dephasing" — dephasing from sources other than relaxation. If pure dephasing is negligible (T_\varphi very long), T_2 approaches 2T_1. Most real systems are limited by pure dephasing to T_2 substantially below 2T_1.
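The relation combines as reciprocals, so it is easy to check numerically (a sketch; the 300 μs inputs are illustrative, chosen to match the Heron-era values quoted elsewhere in this chapter):

```python
def t2_from(t1_us: float, t_phi_us: float = float("inf")) -> float:
    """Combine relaxation and pure dephasing: 1/T2 = 1/(2*T1) + 1/T_phi.
    With no pure dephasing (T_phi -> infinity), T2 -> 2*T1."""
    inv_phi = 0.0 if t_phi_us == float("inf") else 1.0 / t_phi_us
    return 1.0 / (1.0 / (2.0 * t1_us) + inv_phi)

print(round(t2_from(300.0), 6))         # 600.0 -- the 2*T1 limit
print(round(t2_from(300.0, 300.0), 6))  # 200.0 -- pure dephasing pulls T2 below 2*T1
```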
Platform scores.
| Platform | T_1 | T_2 | Two-qubit gate time | Gates per T_1 |
|---|---|---|---|---|
| Superconducting (IBM Heron, 2024) | 300 μs | 200 μs | 60 ns | ~5,000 |
| Trapped ions (Quantinuum H2) | seconds | > 1 s | 100-500 μs | ~2,000-10,000 |
| Neutral atoms (QuEra Aquila) | ~10 s | ~1 s | ~1 μs (Rydberg) | ~10,000,000 |
| Photonic (PsiQuantum) | ∞ in-flight | ∞ in-flight | sub-ns | (fundamentally different: photons don't decohere, but fabrication loss plays the analogue role) |
| Diamond NV (electron spin) | ~5 ms | ~2 ms | ~500 ns | ~10,000 |
Trapped ions and neutral atoms win on raw coherence time (seconds vs microseconds). Superconducting qubits have fast gates and improving T_1, so the gates-per-lifetime metric is comparable. Photonics redefines the question — photons themselves don't decohere, but losses in waveguides and detectors play an equivalent role.
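The gates-per-lifetime column is just T_1 divided by the two-qubit gate time; reproducing it makes the "different speeds, similar depth" point concrete (a sketch using mid-range numbers from the table above — order-of-magnitude only):

```python
# (T_1, two-qubit gate time), both in seconds -- mid-range values from the table
platforms = {
    "superconducting": (300e-6, 60e-9),
    "trapped ion":     (1.0,    300e-6),
    "neutral atom":    (10.0,   1e-6),
}
for name, (t1, t_gate) in platforms.items():
    print(f"{name:16s} ~{t1 / t_gate:,.0f} gates per T_1")
```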
Criterion 4 — A universal gate set
The requirement. You need a set of physically implementable gates that can approximate any unitary operation on your qubits to arbitrary accuracy. Classical analogy: NAND is universal for digital logic; with NAND gates you can build every Boolean function. Quantum analogue: a universal gate set like Clifford + T — the Hadamard H, phase S, CNOT, and the non-Clifford T = \text{diag}(1, e^{i\pi/4}) — can approximate any unitary. The Solovay-Kitaev theorem guarantees this is efficient: the required gate count grows only polylogarithmically in the inverse accuracy, O(\log^c(1/\epsilon)) with c \approx 4 for the original construction.
The practical question is: can your hardware implement at least one single-qubit non-Clifford rotation and at least one entangling two-qubit gate, both with high fidelity?
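To get a feel for the Solovay-Kitaev scaling O(\log^c(1/\epsilon)), the gate-count growth can be sketched (the exponent c = 4 and unit prefactor are illustrative choices, not tight constants):

```python
import math

def sk_gate_count(epsilon: float, c: float = 4.0, prefactor: float = 1.0) -> float:
    """Gates needed to approximate one single-qubit unitary to accuracy
    epsilon, under the Solovay-Kitaev scaling O(log^c(1/epsilon))."""
    return prefactor * math.log2(1.0 / epsilon) ** c

# Tightening accuracy 100x grows the count only polylogarithmically:
print(round(sk_gate_count(1e-2)))  # log2(100)^4  ~ 1948
print(round(sk_gate_count(1e-4)))  # only ~16x more, not 100x more
```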
Platform scores — fidelity is the axis that matters.
| Platform | Best single-qubit fidelity | Best two-qubit fidelity |
|---|---|---|
| Superconducting (IBM Heron, Google Willow) | 99.95% | 99.7% |
| Trapped ions (Quantinuum H2, IonQ) | 99.999% | 99.9% |
| Neutral atoms (QuEra, Atom) | 99.9% | 99.5% |
| Photonic (PsiQuantum, Xanadu) | > 99.99% | ~99% (via measurement-based gate teleportation) |
| Diamond NV | 99.9% | 98-99% |
Trapped ions have the highest two-qubit gate fidelities, which is why they lead on per-gate error metrics despite their slower gate speeds. Superconducting is closing the gap. The surface-code error-correction threshold is roughly 1% two-qubit error; all major platforms are now below that threshold, but with varying margin.
Universality itself is a solved problem on every major platform — each has a native gate set that generates a universal group. The open question is fidelity at scale: can you maintain 99.7% two-qubit fidelity across a 1000-qubit chip, or does cross-talk drag it down?
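Under a crude independent-error model, overall circuit fidelity falls off as F^N, which is why small per-gate differences dominate at depth (a sketch; the 50%-fidelity cutoff is an arbitrary illustration):

```python
import math

# Depth at which overall circuit fidelity drops to 50%, assuming
# independent two-qubit gate errors: solve F**N = 0.5 for N.
for f in (0.995, 0.997, 0.999):
    depth_50 = math.log(0.5) / math.log(f)
    print(f"per-gate F = {f}: ~{depth_50:.0f} gates before overall fidelity halves")
```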
Criterion 5 — Qubit-specific measurement
The requirement. You must be able to measure individual qubits, in a specific basis, with high fidelity — and the measurement must not disturb the other qubits you are not measuring.
"In a specific basis" usually means the computational (Z) basis; a single-qubit rotation applied just before readout converts a measurement in any other basis into a Z measurement.
Platform scores.
- Superconducting: dispersive readout — a microwave resonator coupled to the qubit picks up a state-dependent frequency shift. Readout time ~50-500 ns, fidelity typically 98-99.5%. Non-destructive (the qubit survives the measurement in the measured state).
- Trapped ions: state-dependent fluorescence — shining resonant light on the ion causes |0\rangle to scatter photons and |1\rangle to stay dark (or vice versa). Fidelity > 99.9%. Measurement time ~100 μs. Non-destructive (ion survives).
- Neutral atoms: similar fluorescence readout. Fidelity 95-99.5%, improving rapidly. Measurement typically destructive (atom is lost), though non-destructive schemes are emerging.
- Photonic: photon-number-resolving detectors. Fidelity depends on detector efficiency (up to 95%+ with superconducting nanowire detectors). Destructive — photon is absorbed.
- Diamond NV: optical readout of the electron spin; fidelity ~95%.
Trapped ions are the fidelity leader here, superconducting a close second, photonics depends entirely on detector technology.
Worked example — IBM's superconducting platform scored against all seven
Example 1: How does IBM Heron (2024) score on each criterion?
Setup. IBM Heron is IBM's most recent (2024) superconducting transmon processor: 133 qubits in a heavy-hex lattice, running on the IBM Quantum cloud platform. Take it through DiVincenzo's list.
Criterion 1 — Scalable, well-characterised qubits.
- 133 transmons on a single chip, fabricated in IBM's superconducting process. Roadmap to 10,000+ via multi-chip modules by 2029.
- Each qubit is characterised by its frequency, anharmonicity, T₁, T₂, and readout fidelity in daily calibration runs. Characterisation data is published per-qubit on the IBM Quantum dashboard.
- Score: strong. The characterisation is public, the fabrication is industrial, and the scaling roadmap is credible.
Criterion 2 — Reliable initialisation.
- Reset via active measurement-and-conditional-flip. Initialisation time ~1 μs, fidelity ~99.5%.
- Score: passes, with headroom.
Criterion 3 — Long decoherence times.
- Heron's T₁ ~ 300 μs, T₂ ~ 200 μs.
- Two-qubit gate time ~60 ns → ~5,000 gates per T₁.
- Score: adequate for NISQ depth, marginal for fault-tolerant scaling. Each syndrome cycle in surface code uses tens of gates, so fault-tolerant circuits on Heron-era hardware are limited to hundreds of logical cycles before coherence runs out.
Criterion 4 — Universal gate set.
- Native single-qubit gates: R_z (implemented virtually) plus \sqrt{X} and X pulses; Hadamard by composition. Two-qubit native: tunable-coupler CZ.
- Best two-qubit fidelity: 99.7% (Heron's improvement over Eagle and Osprey).
- Score: passes and improving. Above threshold, working toward 99.9%.
Criterion 5 — Qubit-specific measurement.
- Dispersive readout via per-qubit resonators. Readout time ~1 μs, fidelity ~99%.
- Score: passes, with room for fidelity improvement.
Criterion 6 — Interconverting stationary and flying qubits.
- Microwave-to-optical transduction for stationary-to-flying conversion is an active research area at IBM, MIT, NIST. No production-ready microwave-photon interface yet.
- Score: open research problem. IBM's current multi-chip module architecture uses cryogenic interconnects (microwave signals in superconducting coax) rather than flying qubits — a classical shortcut that works within a single dilution refrigerator but does not generalise to room-temperature networks.
Criterion 7 — Faithful transmission of flying qubits.
- Not yet applicable at Heron scale. Would require microwave-to-optical conversion (criterion 6) first.
- Score: not yet.
Result. IBM Heron is a credible NISQ machine — strong on criteria 1-5, with the gaps being the networking criteria 6-7 which are not strictly required for a monolithic quantum computer. For fault-tolerant scaling, the open questions are pushing two-qubit fidelity to 99.9%+ and reducing readout time, both of which IBM's 2025-2029 roadmap addresses.
Worked example — trapped ions vs superconducting on criteria 3 and 6
Example 2: Quantinuum H2 vs IBM Heron — coherence and interconnect
Setup. Two leading NISQ platforms. Compare them on criterion 3 (coherence) and criterion 6 (stationary-to-flying interconversion) — the two criteria where they differ most.
Criterion 3 — Coherence.
| Metric | IBM Heron (superconducting) | Quantinuum H2 (trapped ion) |
|---|---|---|
| T_1 | 300 μs | > 1 second (typically 5-50 s on the hyperfine qubit) |
| T_2 | 200 μs | > 1 second |
| Single-qubit gate time | ~30 ns | ~5 μs |
| Two-qubit gate time | ~60 ns | ~100-500 μs |
| Gates per T_1 | ~5,000 | ~2,000-10,000 |
Trapped ions have dramatically longer absolute coherence times (seconds versus microseconds) but much slower gates (microseconds versus nanoseconds). The ratio — gates per coherence time — is broadly comparable, with trapped ions slightly ahead.
Why this matters: the ratio is what governs circuit depth before errors dominate. A 10,000-gate circuit is feasible in principle on both platforms — Heron runs it in ~600 μs of wall-clock time, H2 in ~5 seconds. Wall-clock time matters for throughput (how many circuits per hour you execute), but the gate-per-coherence ratio determines computational reach. For the foreseeable future, both platforms live in broadly the same regime on criterion 3, just at different absolute speeds.
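The wall-clock numbers quoted above follow directly from depth × gate time (a sketch with the gate times from the comparison table):

```python
DEPTH = 10_000
heron_gate_s = 60e-9  # ~60 ns two-qubit gate (superconducting)
h2_gate_s = 500e-6    # ~500 us two-qubit gate (trapped ion, upper end)

print(f"Heron: {DEPTH * heron_gate_s * 1e6:.0f} us")  # 600 us
print(f"H2:    {DEPTH * h2_gate_s:.1f} s")            # 5.0 s
```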
Criterion 6 — Interconverting stationary and flying qubits.
Trapped ions have a natural path here: their atomic transitions are already at optical wavelengths (visible/near-infrared). A single ion can be entangled with a single emitted photon via spontaneous emission after a state-selective excitation — ion-photon entanglement has been demonstrated since 2004 (Blatt's group in Innsbruck; Monroe's in Michigan) with fidelity > 90%. Trapped-ion quantum networks are an active area: Monroe's group at Duke has demonstrated ion-ion entanglement across different traps via photon-mediated Bell measurements.
Superconducting qubits, by contrast, emit microwave photons at ~5 GHz — wavelengths of ~6 cm, transported only by superconducting coax cable. Transduction to optical frequencies (~200 THz) requires a coherent frequency conversion spanning nearly five orders of magnitude. Piezo-optomechanical and electro-optic transducers have demonstrated quantum transduction with efficiency of a few percent as of 2025 — the technology is real but not production-ready.
Net comparison on networking.
| Platform | Native photon frequency | Transduction needed? | Entanglement-at-a-distance demonstrated? |
|---|---|---|---|
| Trapped ion | optical (~300-400 THz) | no | yes, km-scale |
| Superconducting | microwave (~5 GHz) | yes, ~5 orders of magnitude | no (transduction demonstrated at low efficiency) |
| Neutral atom | optical | no | in progress (Rydberg-mediated photon entanglement) |
| Diamond NV | optical (~637 nm) | no | yes, 1.3 km demonstrated at TU Delft |
Result. For networked quantum computing — where multiple QPUs communicate via flying qubits — trapped ions, neutral atoms, and diamond NV centres have a fundamental architectural advantage over superconducting. Superconducting will either need to solve microwave-to-optical transduction or stay within single-fridge boundaries. This is why every major quantum-networking effort worldwide (including ISRO's satellite-QKD experiments and TU Delft's metropolitan quantum network) is built on optically-active platforms.
Comparative platform scorecard
A compact summary of how each platform scores on all seven criteria, as of 2026, condensing the section-by-section numbers above:

| Criterion | Superconducting | Trapped ion | Neutral atom | Photonic | Diamond NV |
|---|---|---|---|---|---|
| 1. Scalable qubits | strong (1000+) | moderate (~56) | strong (1000+) | strong roadmap | weak (10-20 per node) |
| 2. Initialisation | 99.5-99.9% | > 99.9% | > 99% | source-limited | 95-99% |
| 3. Coherence | μs-scale, fast gates | seconds | seconds | loss-limited, not time-limited | ms-scale |
| 4. Two-qubit fidelity | 99.7% | 99.9% | 99.5% | ~99% | 98-99% |
| 5. Measurement | 98-99.5% | > 99.9% | 95-99.5% | detector-limited (95%+) | ~95% |
| 6. Flying-qubit interconvert | needs transduction | natural (optical) | natural (optical) | native | natural (optical) |
| 7. Transmission demonstrated | no | yes, km-scale | in progress | native | yes, 1.3+ km |
The logical-qubit version — DiVincenzo in the fault-tolerant era
DiVincenzo's 2000 paper was written before fault-tolerant quantum computing had left the blackboard. Today, with surface-code demonstrations, magic-state distillation on near-term hardware, and Google Willow's 2024 below-threshold logical-qubit experiment, the field has evolved. The same seven criteria apply to logical qubits, but with sharper numerical targets:
- Scalable logical qubits — a surface-code logical qubit encoded in a d \times d patch of physical qubits needs O(d^2) physical qubits per logical. Scaling means reaching 100-1000 logical qubits, i.e. 10^5-10^7 physical.
- Initialisation of a logical qubit — prepare the code in its logical |0\rangle state, typically via measurement-based initialisation.
- Logical coherence — logical error rate below some application-dictated target (e.g. 10^{-15} per logical operation for Shor-RSA-2048).
- Universal logical gates — transversal Clifford (automatic in many codes) plus non-Clifford logical gates via magic-state distillation or code switching.
- Logical measurement — destructive or non-destructive measurement of an encoded logical qubit.
- Logical flying qubits (criteria 6 and 7) — transmission of an encoded logical qubit between QPUs. Still a research question.
The relevant benchmark in 2026 is Google's Willow chip, which demonstrated a distance-7 surface code with a logical error rate per cycle below that of its distance-5 counterpart — the first experimental demonstration that growing the code actually reduces the logical error, a necessary condition for fault tolerance. IBM's 2024 Quantum Developer Conference roadmap targets 200 logical qubits by 2029 via modular superconducting architecture.
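The overhead arithmetic in the logical-qubit list can be sketched with the standard surface-code heuristic p_L \approx A (p/p_{th})^{(d+1)/2} (the constants A = 0.1, p_th = 1%, and the 2d^2 - 1 qubit count are conventional illustrative choices, not platform-specific numbers):

```python
import math

def surface_code_distance(p_phys: float, p_target: float,
                          p_th: float = 1e-2, A: float = 0.1) -> int:
    """Smallest odd distance d with A * (p_phys/p_th)**((d+1)/2) <= p_target."""
    k = math.log(p_target / A) / math.log(p_phys / p_th)  # need (d+1)/2 >= k
    n = max(2, math.ceil(round(k, 9)))  # round() guards against float noise
    return 2 * n - 1

d = surface_code_distance(1e-3, 1e-15)  # Shor-RSA-2048-class error target
print(d, 2 * d * d - 1)                 # distance 27, ~1457 physical per logical
```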
India's National Quantum Mission — hardware diversity as policy
The NQM (2023, ₹6003 crore over 8 years) explicitly targets four hardware platforms in parallel, using DiVincenzo's criteria as the scorecard:
- Superconducting — TIFR Mumbai and IISc Bangalore. Goal: 50-100 qubit superconducting processors by 2030, in partnership with international fabrication facilities and (eventually) the upcoming SCL-Mohali fab.
- Trapped ion — IIT Delhi (group of Prof. Kumar Rajesh). Focus on high-fidelity, networkable trapped-ion QPUs at the 10-50 qubit scale.
- Photonic — IIT Madras (SeCCI and related groups). Focus on photonic cluster-state generation and photonic quantum simulation at modest qubit counts.
- Neutral atom — RRI Bangalore and IISER Mohali. Focus on cold-atom / Rydberg-atom platforms for analog quantum simulation and digital quantum computation.
The explicit policy choice is hardware diversity. Just as NIST's PQC portfolio spreads risk across lattice, hash, and code-based families, India's NQM spreads hardware risk across four DiVincenzo-compliant platforms. If one platform hits a scaling ceiling, others compensate. The yardstick used to track each — in NQM annual reviews, MeitY progress reports, and the Principal Scientific Advisor's public briefings — is unmistakably DiVincenzo's list.
Common confusions
- "DiVincenzo's criteria are fundamental physical laws." No — they are requirements for a practical quantum computer, derived from the structure of the quantum circuit model of computation. They are not laws of nature; they are engineering targets informed by what quantum circuits need to be executable. An alternative model of computation (measurement-based, adiabatic, topological) would have a slightly different but equivalent list.
- "Criteria 6 and 7 are required for every quantum computer." They are required for a networked quantum computer — multiple QPUs communicating via flying qubits. A monolithic single-fridge quantum computer can achieve useful computation with only criteria 1-5. IBM's current machines, for example, are strong on 1-5 and have not yet demonstrated 6 at production quality — they are still quantum computers.
- "NISQ satisfies all seven criteria." Only partially. NISQ machines satisfy criteria 1-5 with degraded quality — their qubits exist, can be initialised, have some coherence, support some gates, can be measured — but with error rates too high, and fidelities too low, for fault-tolerant computation. "NISQ satisfies DiVincenzo" is true in the loose sense of "has all the pieces" but false in the strict sense of "meets the thresholds required for useful fault-tolerant computation."
- "Photonic quantum computing fails criterion 3 because photons cannot be stored." This is a common but wrong claim. Photons in flight do not decohere (their quantum state is stable), but they do get lost via absorption and scattering. The effective "coherence" of a photonic platform is measured as channel loss per unit distance rather than a time T_1. PsiQuantum's platform has loss budgets in the 0.1-1 dB range per component, which is the photonic analogue of coherence — a different quantity measured differently, not an absent one.
- "Better fidelity means better platform." Fidelity is necessary but not sufficient. A platform with 99.999% gates but only 20 qubits is not more useful than a platform with 99.5% gates and 1000 qubits, because what matters is qubits × circuit depth achievable within fidelity budget. DiVincenzo's list deliberately treats scalability (criterion 1) as independent of fidelity (embedded in criteria 3-5) — both are needed and they interact.
- "The last two criteria are optional." For a single monolithic quantum computer, yes. For the long-term vision of a quantum internet — multiple QPUs connected via quantum channels, enabling distributed fault-tolerant quantum computing — criteria 6 and 7 become essential. Every serious roadmap (IBM Modular, Quantinuum Helios, PsiQuantum photonic network) plans for criteria 6 and 7 in the 2030s.
Going deeper
If you understand that DiVincenzo's 2000 paper gave the field a seven-item checklist for a practical quantum computer — five core-computation criteria (scalable qubits, initialisation, coherence, universal gates, measurement) and two networking criteria (flying-qubit interconvert, transmission) — that different hardware platforms score differently on each criterion and no platform currently aces every row, and that India's NQM has chosen hardware diversity across four platforms using DiVincenzo's list as yardstick — you have chapter 162. The material below is for readers who want the extended fault-tolerant version of the criteria, the hybrid-platform (ion+photon) architectures, the latest experimental benchmarks, and the specific NQM platform roadmaps.
The fault-tolerant extension — Preskill's "logical DiVincenzo"
John Preskill and others have argued for a restated version of DiVincenzo's list for the logical-qubit era. The essence is:
- Physical DiVincenzo: your platform must satisfy DV1-DV5 at the physical qubit layer with enough headroom to run quantum error correction.
- Logical DiVincenzo: the error-corrected code layer must present to the application a logical qubit that itself satisfies DV1-DV5 at the logical layer, with logical error rates small enough for the target application.
The critical inequality is the error-correction threshold: physical error rate < threshold (about 1% for surface codes). As long as you are below threshold and can afford enough physical qubits per logical, growing d exponentially suppresses logical error. All modern hardware platforms are now below threshold for their best two-qubit gates — but barely, with margins of 3-10× rather than the 100-1000× needed for truly efficient encoding.
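The margin point can be made quantitative: under the scaling p_L \propto (p_{phys}/p_{th})^{(d+1)/2}, each distance step d \to d+2 suppresses the logical error by roughly \Lambda = p_{th}/p_{phys}, so the distance (and qubit count) needed for a given target depends strongly on margin (illustrative constants: A = 0.1, logical-error target 10^{-12}):

```python
import math

def distance_needed(margin: float, p_target: float = 1e-12, A: float = 0.1) -> int:
    """Smallest odd d with A * margin**(-(d+1)/2) <= p_target, where
    margin = p_th / p_phys is how far below threshold the hardware sits."""
    k = math.log(A / p_target) / math.log(margin)  # need (d+1)/2 >= k
    n = math.ceil(round(k, 9))                     # round() guards float noise
    return 2 * n - 1

for m in (3, 10, 100):
    d = distance_needed(m)
    print(f"{m:3d}x below threshold: d = {d}, ~{2*d*d - 1} physical qubits per logical")
```

A 3x margin needs distance 47 (~4400 physical qubits per logical); a 100x margin needs only distance 11 (~240) — the reason "barely below threshold" is so much more expensive than "comfortably below".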
Hybrid platforms — the hardware's next chapter
No platform is best at everything on DiVincenzo's list. This invites hybrid architectures: combine platforms so each handles the criterion it is good at. Examples in current research:
- Trapped ion + photon: use trapped ions for stationary logical qubits (excellent fidelity, long coherence) and photons for networking between QPUs. Demonstrated at Duke, Oxford, and Innsbruck.
- Superconducting + microwave-to-optical transducer + photon: keep the computational advantage of superconducting, but add a transducer for flying qubits. Active research at MIT, NIST, Chicago, and ETH Zürich.
- Neutral atom + photon: same idea with Rydberg-dressed atoms as the stationary qubit. QuEra and Atom Computing both pursuing this.
- Silicon spin qubit + photon: silicon's fabrication compatibility plus photonic networking. Intel, Quantum Motion.
Hybrid designs are the consensus path forward for DV6-DV7 in the late 2020s.
Recent benchmark results (2024-2026)
A few highlights showing the field's progress against the criteria:
- Google Willow (Dec 2024): distance-7 surface code with logical error below distance-5 baseline — the first below-threshold demonstration at two successive code sizes. Implies DV3 headroom sufficient for scalable fault tolerance on superconducting.
- IBM Heron r2 (2024): 99.7% two-qubit fidelity on a 133-qubit chip; average circuit depth reaches 5000 gates.
- Quantinuum H2 (2024): 99.914% two-qubit fidelity, logical qubit with 10^{-4} logical error rate per gate using a distance-3 Steane code — a clear lead on criterion 4 (fidelity).
- Atom Computing (late 2023): 1180 neutral atom qubits with coherence times > 10 seconds.
- PsiQuantum (2024): 1M-qubit silicon photonic roadmap published; intermediate demonstrations at 10-100 qubit scale with cluster-state generation.
- TU Delft / QBLOX (2024): entanglement between nitrogen-vacancy centres 25 km apart on a fibre link — a flagship criterion-6/7 demonstration.
The NQM platform-wise roadmaps
- Superconducting (TIFR + IISc): 2026 — 16 qubit processor; 2028 — 50 qubits; 2030 — 100 qubits with below-threshold two-qubit fidelity.
- Trapped ion (IIT Delhi): 2026 — 10 qubit linear-chain trap with 99.5% two-qubit fidelity; 2028 — 30 qubit QCCD architecture; 2030 — networked two-node trapped-ion QPU.
- Photonic (IIT Madras): 2026 — 8-qubit photonic cluster-state demonstrator; 2028 — 32-qubit photonic quantum simulator; 2030 — full-scale photonic processor via CMOS-compatible fabrication.
- Neutral atom (RRI + IISER Mohali): 2026 — 50-atom Rydberg array for quantum simulation; 2028 — 200-atom universal quantum computer; 2030 — 500 atoms with site-specific addressing.
These targets are conservative by international standards (IBM plans 10,000+ superconducting qubits by 2029, for example) but match India's manufacturing capacity and the NQM's incremental funding model. The NQM review committee explicitly scores each project against DiVincenzo's criteria in its annual assessments.
The criterion DiVincenzo did not include — classical control
Modern quantum computers are controlled by massive classical hardware stacks: arbitrary waveform generators producing nanosecond-accurate microwave pulses (superconducting), FPGA-based pulse sequencers (all platforms), cryogenic electronics (some), and large room-temperature servers translating quantum circuits into control signals. DiVincenzo's 2000 list does not explicitly include "classical control infrastructure capable of scaling with the qubit count," but in 2026 this has become a distinct scaling bottleneck on its own. IBM's 133-qubit Heron requires hundreds of control lines; a hypothetical million-qubit machine would require a redesign of the entire classical-control stack, not just the qubits themselves. Some authors call this the "implicit eighth criterion" of DiVincenzo. When you read about cryogenic CMOS controllers, rapid single-flux-quantum logic, or photonic control buses, these are all engineering responses to the missing-from-DV criterion of scalable classical control.
Where this leads next
- What NISQ Means — the category of today's hardware and how well it satisfies DiVincenzo's criteria.
- Logical Qubits in Practice — how error correction takes a platform that barely satisfies DiVincenzo at the physical layer into one that satisfies it at the logical layer.
- Superconducting Transmons — the platform specifically.
- Trapped Ions — the platform specifically.
- Photonic Quantum Computing — the measurement-based photonic approach.
References
- David P. DiVincenzo, The Physical Implementation of Quantum Computation (2000) — arXiv:quant-ph/0002077.
- Wikipedia, DiVincenzo's criteria.
- John Preskill, Lecture Notes on Quantum Computation, Chapter 7 — theory.caltech.edu/~preskill/ph229.
- IBM Quantum, IBM Quantum development and roadmap — ibm.com/quantum/technology.
- Google Quantum AI, Quantum error correction below the surface-code threshold (Willow, 2024) — nature.com/articles/s41586-024-08449-y.
- Department of Science and Technology (India), National Quantum Mission — dst.gov.in/national-quantum-mission.