In short

DiVincenzo's criteria are a seven-item checklist published by IBM theorist David DiVincenzo in 2000 (in a paper titled The Physical Implementation of Quantum Computation). They are the standard scorecard for evaluating any candidate quantum-computing platform. The first five are the core requirements for computation: (1) scalable, well-characterised qubits; (2) the ability to initialise qubits in a known reference state such as |0\rangle; (3) decoherence times long enough to run the desired circuit; (4) a universal gate set (single-qubit gates plus at least one entangling two-qubit gate — Clifford+T or an equivalent); (5) qubit-specific measurement. The last two — (6) the ability to interconvert stationary qubits with flying (photonic) qubits, and (7) the ability to faithfully transmit flying qubits between locations — are the networking criteria, needed if you want a distributed or networked quantum computer rather than a single monolithic machine. Today's platforms — superconducting transmons (IBM Condor 1121 qubits, IBM Heron 133 with high fidelity; Google Willow 105), trapped ions (Quantinuum H2 56 qubits, IonQ Tempo), photonics (PsiQuantum, Xanadu), neutral atoms (QuEra 256, Atom Computing 1180+), diamond NV centres (TU Delft, QBLOX) — each have a distinctive profile on the seven criteria. No platform currently aces every row. DiVincenzo's list was written for the pre-error-correction era; in the logical-qubit era a corresponding "logical-DiVincenzo" version applies to encoded qubits, and the scorecard changes: logical error rate and logical gate fidelity replace their physical counterparts, and entangling connectivity between logical qubits becomes a first-class requirement. India's National Quantum Mission targets four of the five major hardware platforms at TIFR, IIT Madras, IIT Delhi, and RRI; the DiVincenzo list is the yardstick used to track progress.

In the year 2000, a theorist at IBM named David DiVincenzo wrote a five-page paper titled The Physical Implementation of Quantum Computation. Quantum computing was already a hot field — Shor's algorithm existed, teleportation had been demonstrated, NMR machines had run two-qubit circuits — but the engineering discussion was a zoo. Every experimental group had its own favourite hardware: ion traps, superconducting circuits, photons, quantum dots, nuclear spins in liquid NMR, optical lattices of neutral atoms. Each group told a different story about why theirs was the right platform. There was no way to compare them.

DiVincenzo's paper did something modest and powerful. It wrote down a checklist. Seven concrete physical requirements a system had to meet to count as a quantum computer. If your platform achieved them, you had a quantum computer. If it did not, it did not matter how elegant the underlying physics was — you were not there yet.

Twenty-six years later that checklist is still how the field talks about hardware. Every quantum-computing company benchmarks itself against DiVincenzo. Every conference review talk opens with "let us evaluate platform X against the five criteria." The list has outlived the specific hardware generations it was written for because it identifies what is actually necessary about a quantum computer, regardless of how it is built.

This chapter is about the seven criteria — what each one means, why it matters, and how the major platforms of 2026 score on each.

The seven criteria

DiVincenzo numbered his criteria 1 through 7, but the first five are the core computation requirements and the last two are about networking. We will follow that structure.

[Figure: DiVincenzo's seven criteria (2000) — the scorecard still in use in 2026. Core computation: (1) scalable, well-characterised qubits (can you make many of them?); (2) reliable initialisation to |0⟩ (every qubit starts from a known state); (3) long decoherence times, T₁, T₂ ≫ gate time (state lives long enough to compute); (4) universal gate set, Clifford + T or equivalent (any circuit can be compiled); (5) qubit-specific measurement (read each qubit individually). Networking, optional for single machines: (6) interconvert stationary and flying qubits (qubit → photon and back); (7) faithfully transmit flying qubits over distance (quantum channels for QPU-to-QPU).]
The full list. Five criteria to be a quantum computer at all (rows 1-5); two more to be a networkable one (rows 6-7). Every hardware review, paper, and NQM progress report you will ever read on quantum computing is structured — implicitly or explicitly — against these seven requirements.

Criterion 1 — Scalable, well-characterised qubits

The requirement. You need qubits. A lot of them. And each qubit has to be genuinely a qubit — a two-level quantum system whose two states (call them |0\rangle and |1\rangle) are well isolated from the rest of the Hilbert space.

Two parts, both important. Well-characterised means you know what your qubit is: its energy levels, its Hamiltonian, its couplings to neighbours and to the environment. A system where the qubit's parameters drift day to day or differ chip to chip by factors of two is not well-characterised and is not usable.

Scalable means you can make more of them. If making one qubit takes a person-year of laboratory effort, and making two takes five person-years (because they interact), and making ten takes a hundred person-years because the cryostat runs out of room, then the platform does not scale. Scaling requires a manufacturable process: a fabrication technique that produces qubits at a rate that matches the quantum-volume growth the field needs.

What it rules out. A system where each qubit is a unique laboratory artefact — hand-tuned by a postdoc with a diamond microscope over six weeks — can run beautiful small demonstrations but cannot scale to thousands of qubits. The history of the field is strewn with lovely hand-built systems that never scaled: photonic cluster-state demonstrations of the 2000s, early nitrogen-vacancy-centre qubits, some liquid-NMR approaches.

Platform scores in 2026.

Criterion 2 — Reliable initialisation

The requirement. Every qubit must be initialisable into a known reference state — conventionally |0\rangle — with high fidelity before a computation begins. This matters because quantum computation is a specific transformation on a specific input state; if you do not know your input, you cannot interpret your output.

The fidelity target is determined by whatever initialisation-error budget your application can tolerate. For NISQ demonstrations, 99% initialisation fidelity suffices. For fault-tolerant quantum computing, the requirement is much stricter — initialisation errors propagate through the first cycle of syndrome measurements and must be below the surface-code threshold.

What it rules out. Any platform where the qubit has no natural way to relax to a known state, or where the relaxation is much slower than the computation time scale, fails this criterion. This was an historical problem for liquid-state NMR quantum computing — nuclear spins at room temperature are in a nearly maximally mixed thermal state, and "pseudo-pure" state preparation was the hacky workaround.

Platform scores.

All major platforms do well on criterion 2. This is the least controversial of the five core requirements in 2026.

Criterion 3 — Long decoherence times

The requirement. A qubit's state must persist long enough to complete the computation you intend. Decoherence — the loss of quantum information to the environment — is parameterised by two time constants:

  • T₁, the relaxation (amplitude-damping) time: how long an excited |1⟩ survives before decaying toward |0⟩.
  • T₂, the dephasing time: how long the phase relationship within a superposition survives before being scrambled.

The effective rule is: coherence time must exceed the gate time multiplied by the circuit depth you want to run. If a two-qubit gate takes 50 ns and you want a 1000-gate circuit, you need coherence times of at least 50 μs — ideally much more, to leave fidelity margin.
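
A minimal arithmetic sketch of this budget (plain Python; the 50 ns gate and 1000-gate circuit are the illustrative numbers from the paragraph above, not the specs of any real device):

```python
# Coherence budget: circuit depth is limited by coherence time / gate time.

def max_depth(coherence_s: float, gate_s: float) -> float:
    """Rough upper bound on circuit depth before decoherence dominates."""
    return coherence_s / gate_s

def required_coherence(depth: int, gate_s: float, margin: float = 10.0) -> float:
    """Coherence time needed for a target depth, with a fidelity margin."""
    return depth * gate_s * margin

print(max_depth(50e-6, 50e-9))           # 1000.0 gates in 50 us at 50 ns/gate
print(required_coherence(1000, 50e-9))   # 0.0005 s = 500 us with a 10x margin
```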

Why T₂ ≤ 2T₁ — an important bound: amplitude damping (T₁) necessarily also causes dephasing, because a state that has decayed to the ground state has lost all of its phase information too. The exact relation is T₂⁻¹ = (2T₁)⁻¹ + T_φ⁻¹ where T_φ is "pure dephasing" — dephasing from sources other than relaxation. If T_φ is very long (pure dephasing is negligible), T₂ approaches 2T₁. Most real systems are limited by pure dephasing to T₂ substantially below 2T₁.
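
The relation is easy to check numerically. In the sketch below, T₁ is fixed at 300 μs (the Heron-like figure used throughout this chapter) and the T_φ values are arbitrary illustrations, chosen to show T₂ approaching 2T₁ as pure dephasing vanishes:

```python
import math

# 1/T2 = 1/(2*T1) + 1/T_phi; as T_phi -> infinity, T2 -> 2*T1.
T1 = 300e-6  # seconds; the T_phi values below are arbitrary illustrations

for T_phi in (100e-6, 600e-6, 10e-3, math.inf):
    T2 = 1 / (1 / (2 * T1) + 1 / T_phi)
    print(f"T_phi = {T_phi:10.6f} s -> T2 = {T2 * 1e6:6.1f} us (2*T1 = {2 * T1 * 1e6:.0f} us)")
```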

Platform scores.

| Platform | T₁ | T₂ | Two-qubit gate time | Gates per T₁ |
|---|---|---|---|---|
| Superconducting (IBM Heron, 2024) | 300 μs | 200 μs | 60 ns | ~5,000 |
| Trapped ions (Quantinuum H2) | seconds | > 1 s | 100-500 μs | ~2,000-10,000 |
| Neutral atoms (QuEra Aquila) | ~10 s | ~1 s | ~1 μs (Rydberg) | ~10,000,000 |
| Photonic (PsiQuantum) | ∞ in flight | ∞ in flight | sub-ns | n/a (fundamentally different: photons don't decohere, but fabrication loss plays the analogue role) |
| Diamond NV (electron spin) | ~5 ms | ~2 ms | ~500 ns | ~10,000 |

Trapped ions and neutral atoms win on raw coherence time (seconds vs microseconds). Superconducting qubits have fast gates and improving T_1, so the gates-per-lifetime metric is comparable. Photonics redefines the question — photons themselves don't decohere, but losses in waveguides and detectors play an equivalent role.

Criterion 4 — A universal gate set

The requirement. You need a set of physically implementable gates that can approximate any unitary operation on your qubits to arbitrary accuracy. Classical analogy: NAND is universal for digital logic; with NAND gates you can build every Boolean function. Quantum analogue: a universal gate set like Clifford + T — the Hadamard H, phase S, CNOT, and the non-Clifford T = \text{diag}(1, e^{i\pi/4}) — can approximate any unitary. The Solovay-Kitaev theorem bounds the required gate count as polylogarithmic in the inverse accuracy 1/\epsilon (that is, polynomial in \log(1/\epsilon)).

The practical question is: can your hardware implement at least one single-qubit non-Clifford rotation and at least one entangling two-qubit gate, both with high fidelity?
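
The standard generators are concrete enough to write down and check. A minimal numpy sketch (nothing platform-specific; it just verifies the Clifford+T algebra and that CNOT really entangles):

```python
import numpy as np

# The Clifford+T generators as explicit matrices.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard
S = np.diag([1, 1j])                             # phase gate
T = np.diag([1, np.exp(1j * np.pi / 4)])         # non-Clifford T
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                  # entangling two-qubit gate

# Sanity checks on the algebra: T^2 = S, S^2 = Z, H^2 = I, CNOT^2 = I.
Z = np.diag([1, -1])
assert np.allclose(T @ T, S)
assert np.allclose(S @ S, Z)
assert np.allclose(H @ H, np.eye(2))
assert np.allclose(CNOT @ CNOT, np.eye(4))

# CNOT is entangling: on |+>|0> it produces the Bell state (|00> + |11>)/sqrt(2).
plus = np.array([1, 1]) / np.sqrt(2)
zero = np.array([1, 0])
print(CNOT @ np.kron(plus, zero))   # [0.707 0 0 0.707]
```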

Platform scores — fidelity is the axis that matters.

| Platform | Best single-qubit fidelity | Best two-qubit fidelity |
|---|---|---|
| Superconducting (IBM Heron, Google Willow) | 99.95% | 99.7% |
| Trapped ions (Quantinuum H2, IonQ) | 99.999% | 99.9% |
| Neutral atoms (QuEra, Atom) | 99.9% | 99.5% |
| Photonic (PsiQuantum, Xanadu) | > 99.99% | ~99% (via measurement-based gate teleportation) |
| Diamond NV | 99.9% | 98-99% |

Trapped ions have the highest two-qubit gate fidelities, which is why they lead on per-gate error metrics despite their slower gate speeds. Superconducting is closing the gap. The surface-code error-correction threshold is roughly 1% two-qubit error; all major platforms are now below that threshold, but with varying margin.

Universality itself is a solved problem on every major platform — each has a native gate set that generates a universal group. The open question is fidelity at scale: can you maintain 99.7% two-qubit fidelity across a 1000-qubit chip, or does cross-talk drag it down?

Criterion 5 — Qubit-specific measurement

The requirement. You must be able to measure individual qubits, in a specific basis, with high fidelity — and the measurement must not disturb the other qubits you are not measuring.

"In a specific basis" usually means the computational (Z) basis, though any universal gate set plus single-qubit rotations can transform any basis measurement into a Z measurement.

Platform scores.

Trapped ions are the fidelity leader here, superconducting a close second, photonics depends entirely on detector technology.

Worked example — IBM's superconducting platform scored against all seven

Example 1: How does IBM Heron (2024) score on each criterion?

Setup. IBM Heron is IBM's most recent (2024) superconducting transmon processor: 133 qubits in a heavy-hex lattice, running on the IBM Quantum cloud platform. Take it through DiVincenzo's list.

Criterion 1 — Scalable, well-characterised qubits.

  • 133 transmons on a single chip, fabricated in IBM's superconducting process. Roadmap to 10,000+ via multi-chip modules by 2029.
  • Each qubit is characterised by its frequency, anharmonicity, T₁, T₂, and readout fidelity in daily calibration runs. Characterisation data is published per-qubit on the IBM Quantum dashboard.
  • Score: strong. The characterisation is public, the fabrication is industrial, and the scaling roadmap is credible.

Criterion 2 — Reliable initialisation.

  • Reset via active measurement-and-conditional-flip. Initialisation time ~1 μs, fidelity ~99.5%.
  • Score: passes, with headroom.

Criterion 3 — Long decoherence times.

  • Heron's T₁ ~ 300 μs, T₂ ~ 200 μs.
  • Two-qubit gate time ~60 ns → ~5,000 gates per T₁.
  • Score: adequate for NISQ depth, marginal for fault-tolerant scaling. Each syndrome cycle in surface code uses tens of gates, so fault-tolerant circuits on Heron-era hardware are limited to hundreds of logical cycles before coherence runs out.

Criterion 4 — Universal gate set.

  • Native single-qubit gates: R_x, R_z by hardware, Hadamard by composition. Two-qubit native: tunable-coupler CNOT or CZ.
  • Best two-qubit fidelity: 99.7% (Heron's improvement over Eagle and Osprey).
  • Score: passes and improving. Above threshold, working toward 99.9%.

Criterion 5 — Qubit-specific measurement.

  • Dispersive readout via per-qubit resonators. Readout time ~1 μs, fidelity ~99%.
  • Score: passes, with room for fidelity improvement.

Criterion 6 — Stationary-flying interconvert.

  • Microwave-to-optical transduction for stationary-to-flying conversion is an active research area at IBM, MIT, NIST. No production-ready microwave-photon interface yet.
  • Score: open research problem. IBM's current multi-chip module architecture uses cryogenic interconnects (microwave signals in superconducting coax) rather than flying qubits — a pragmatic shortcut that works within a single dilution refrigerator but does not generalise to room-temperature networks.

Criterion 7 — Faithfully transmit flying qubits.

  • Not yet applicable at Heron scale. Would require microwave-to-optical conversion (criterion 6) first.
  • Score: not yet.
[Figure: IBM Heron (2024) — score against each criterion. 1. Scalable qubits: 133 qubits → 10k target. 2. Initialisation: 99.5% fidelity. 3. Coherence: T₁ ≈ 300 μs, ~5k gates. 4. Universal gates: two-qubit fidelity 99.7%. 5. Measurement: 99% dispersive readout. 6. Flying-qubit interconvert: microwave→optical transduction still research. 7. Flying-qubit transmit: not yet at Heron scale. Summary: strong on criteria 1–5 (NISQ-ready); networking criteria 6–7 are open research.]
Heron's scorecard. Strong on the core computation requirements (1-5), open on the networking requirements (6-7). This is typical of superconducting platforms in 2026: they are excellent single-fridge quantum computers, but connecting multiple fridges into a networked QPU remains a research problem.

Result. IBM Heron is a credible NISQ machine — strong on criteria 1-5, with the gaps being the networking criteria 6-7 which are not strictly required for a monolithic quantum computer. For fault-tolerant scaling, the open questions are pushing two-qubit fidelity to 99.9%+ and reducing readout time, both of which IBM's 2025-2029 roadmap addresses.

Worked example — trapped ions vs superconducting on criteria 3 and 6

Example 2: Quantinuum H2 vs IBM Heron — coherence and interconnect

Setup. Two leading NISQ platforms. Compare them on criterion 3 (coherence) and criterion 6 (flying-qubit interconvert) — the two criteria where they differ most.

Criterion 3 — Coherence.

| Metric | IBM Heron (superconducting) | Quantinuum H2 (trapped ion) |
|---|---|---|
| T₁ | 300 μs | > 1 second (typically 5-50 s on the hyperfine qubit) |
| T₂ | 200 μs | > 1 second |
| Single-qubit gate time | ~30 ns | ~5 μs |
| Two-qubit gate time | ~60 ns | ~100-500 μs |
| Gates per T₁ | ~5,000 | ~2,000-10,000 |

Trapped ions have dramatically longer absolute coherence times (seconds versus microseconds) but much slower gates (microseconds versus nanoseconds). The ratio — gates per coherence time — is broadly comparable, with trapped ions slightly ahead.

Why this matters: the ratio is what governs circuit depth before errors dominate. A 10,000-gate circuit is feasible in principle on both platforms — Heron runs it in ~600 μs of wall-clock time, H2 in ~5 seconds. Wall-clock time matters for throughput (how many circuits per hour you execute), but the gate-per-coherence ratio determines computational reach. For the foreseeable future, both platforms live in broadly the same regime on criterion 3, just at different absolute speeds.
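
As a sketch (round figures from the comparison table above, not device specs), the same 10,000-gate circuit on both platforms:

```python
# Wall-clock time vs computational reach for a 10,000-gate circuit.
# Round figures from the comparison table above, not device specs.

platforms = {
    "IBM Heron (superconducting)": {"gate_s": 60e-9,  "T1_s": 300e-6},
    "Quantinuum H2 (trapped ion)": {"gate_s": 500e-6, "T1_s": 1.0},  # conservative > 1 s
}

depth = 10_000
for name, p in platforms.items():
    wall_clock = depth * p["gate_s"]     # throughput axis
    reach = p["T1_s"] / p["gate_s"]      # gates per coherence time: reach axis
    print(f"{name}: {wall_clock:.4g} s wall-clock, ~{reach:,.0f} gates per T1")
```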

Criterion 6 — Flying-qubit interconvert.

Trapped ions have a natural path here: atomic transitions in the visible/infrared band are already at optical wavelengths. A single ion can be entangled with a single emitted photon via spontaneous emission after a state-selective excitation; ion-photon entanglement has been demonstrated since 2004 (Blatt, Innsbruck; Monroe, Michigan) with fidelity > 90%. Trapped-ion quantum networks are an active area: Monroe's group at Duke has demonstrated ion-ion entanglement across different traps via photon-mediated Bell measurements.

Superconducting qubits, by contrast, emit microwave photons at ~5 GHz — wavelengths of ~6 cm, transported only by superconducting coax cable. Transduction to optical frequencies (~200 THz) requires a coherent frequency conversion spanning nearly five orders of magnitude. Piezo-optomechanical and electro-optic transducers have demonstrated quantum transduction with efficiency of a few percent as of 2025 — the technology is real but not production-ready.

Net comparison on networking.

| Platform | Native photon frequency | Transduction needed? | Entanglement at a distance demonstrated? |
|---|---|---|---|
| Trapped ion | optical (~300-400 THz) | no | yes, km-scale |
| Superconducting | microwave (~5 GHz) | yes, ~5 orders of magnitude | no (transduction demonstrated at low efficiency) |
| Neutral atom | optical | no | in progress (Rydberg-mediated photon entanglement) |
| Diamond NV | optical (~637 nm) | no | yes, 1.3 km demonstrated at TU Delft |

Result. For networked quantum computing — where multiple QPUs communicate via flying qubits — trapped ions, neutral atoms, and diamond NV centres have a fundamental architectural advantage over superconducting. Superconducting will either need to solve microwave-to-optical transduction or stay within single-fridge boundaries. This is why every major quantum-networking effort worldwide (including ISRO's satellite-QKD experiments and TU Delft's metropolitan quantum network) is built on optically-active platforms.

Comparative platform scorecard

A compact summary of how each platform scores on all seven criteria, as of 2026:

[Figure: Platform vs. DiVincenzo's seven criteria — 2026 snapshot.]

| Criterion | Superconducting | Trapped ions | Neutral atoms | Photonic | Diamond NV |
|---|---|---|---|---|---|
| 1. Scalable qubits | strong | moderate | strong | strong | weak |
| 2. Initialisation | strong | strong | strong | moderate | strong |
| 3. Coherence | moderate | strong | strong | strong | moderate |
| 4. Universal gates | strong | strong | moderate | moderate | moderate |
| 5. Measurement | strong | strong | moderate | moderate | moderate |
| 6. Flying interconvert | weak | strong | moderate | native | strong |
| 7. Flying transmit | weak | moderate | moderate | native | strong |

No platform aces every row. Every real quantum-computing roadmap is a bet on closing one of the weak cells.
The matrix nobody disagrees with. Superconducting wins on scale and engineering maturity but struggles with networking (criteria 6-7). Trapped ions excel at fidelity and networking but scale more slowly. Neutral atoms are the dark horse — improving fast on all fronts. Photonics is architecturally different — its weaknesses and strengths both come from the fact that its entangling gates are probabilistic. Diamond NV is niche but shines on networking. Every major hardware company's five-year plan is built around closing the cells where its platform is weak.

The logical-qubit version — DiVincenzo in the fault-tolerant era

DiVincenzo's 2000 paper was written before fault-tolerant quantum computing had left the blackboard. Today, with surface-code demonstrations, magic-state distillation on near-term hardware, and Google Willow's 2024 below-threshold logical-qubit experiment, the field has evolved. The same seven criteria apply to logical qubits, but with sharper numerical targets:

  1. Scalable logical qubits — a surface-code logical qubit encoded in a d \times d patch of physical qubits needs O(d^2) physical qubits per logical. Scaling means reaching 100-1000 logical qubits, i.e. 10^5-10^7 physical (see the sketch after this list).
  2. Initialisation of a logical qubit — prepare the code in its logical |0\rangle state, typically via measurement-based initialisation.
  3. Logical coherence — logical error rate below some application-dictated target (e.g. 10^{-15} per logical operation for Shor-RSA-2048).
  4. Universal logical gates — transversal Clifford (automatic in many codes) plus non-Clifford logical gates via magic-state distillation or code switching.
  5. Logical measurement — destructive or non-destructive measurement of an encoded logical qubit.
  6, 7. Logical flying qubits — transmission of an encoded logical qubit between QPUs. Still a research question.
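
These targets are easy to turn into numbers. A back-of-envelope sketch assuming the textbook surface-code scalings: roughly 2d² physical qubits per distance-d logical qubit, and a below-threshold logical error rate of order A·(p/p_th)^((d+1)/2). The prefactor A and the physical error rate p used here are illustrative assumptions, not measured values:

```python
# Surface-code overhead and logical error suppression (illustrative scaling laws).

def physical_qubits(n_logical: int, d: int) -> int:
    """~2*d^2 physical qubits (data + syndrome ancillas) per distance-d patch."""
    return n_logical * 2 * d**2

def logical_error_per_cycle(p: float, d: int, p_th: float = 1e-2, A: float = 0.1) -> float:
    """Rough below-threshold scaling: eps_L ~ A * (p/p_th)^((d+1)/2)."""
    return A * (p / p_th) ** ((d + 1) / 2)

for d in (7, 15, 25):
    n_phys = physical_qubits(1000, d)            # 1000 logical qubits
    eps = logical_error_per_cycle(p=2e-3, d=d)   # assumed p five-fold below threshold
    print(f"d={d:2d}: {n_phys:>9,} physical qubits, eps_L ~ {eps:.1e} per cycle")
```

The output lands in the 10^5-10^7 physical-qubit range quoted in item 1, which is where that figure comes from.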

The relevant benchmark in 2026 is Google's Willow chip, which demonstrated a distance-7 surface code with a logical error rate per cycle below that of its distance-5 code — the first experimental demonstration that growing the code actually reduces the logical error, a necessary condition for fault tolerance. IBM's 2024 Quantum Developer Conference roadmap targets 200 logical qubits by 2029 via modular superconducting architecture.

India's National Quantum Mission — hardware diversity as policy

The NQM (2023, ₹6003 crore over 8 years) explicitly targets four hardware platforms in parallel, using DiVincenzo's criteria as the scorecard:

The explicit policy choice is hardware diversity. Just as NIST's PQC portfolio spreads risk across lattice, hash, and code-based families, India's NQM spreads hardware risk across four DiVincenzo-compliant platforms. If one platform hits a scaling ceiling, others compensate. The yardstick used to track each — in NQM annual reviews, MeitY progress reports, and the Principal Scientific Advisor's public briefings — is unmistakably DiVincenzo's list.

Common confusions

Going deeper

If you understand that DiVincenzo's 2000 paper gave the field a seven-item checklist for a practical quantum computer — five core-computation criteria (scalable qubits, initialisation, coherence, universal gates, measurement) and two networking criteria (flying-qubit interconvert, transmission) — that different hardware platforms score differently on each criterion and no platform currently aces every row, and that India's NQM has chosen hardware diversity across four platforms using DiVincenzo's list as yardstick — you have chapter 162. The material below is for readers who want the extended fault-tolerant version of the criteria, the hybrid-platform (ion+photon) architectures, the latest experimental benchmarks, and the specific NQM platform roadmaps.

The fault-tolerant extension — Preskill's "logical DiVincenzo"

John Preskill and others have argued for a restated version of DiVincenzo's list for the logical-qubit era. The essence is:

The critical inequality is the error-correction threshold: physical error rate < threshold (about 1% for surface codes). As long as you are below threshold and can afford enough physical qubits per logical, growing d exponentially suppresses logical error. All modern hardware platforms are now below threshold for their best two-qubit gates — but barely, with margins of 3-10× rather than the 100-1000× needed for truly efficient encoding.
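
To see why the margin matters, invert the same illustrative scaling used earlier: given a margin p_th/p, find the smallest code distance whose logical error rate meets a target (10^{-15} per operation, the figure quoted above for Shor-RSA-2048), and hence the physical overhead per logical qubit. A sketch, with the same assumed prefactor as before:

```python
# Required code distance as a function of below-threshold margin (p_th / p),
# assuming eps_L ~ A * margin**(-(d+1)/2) with an illustrative A = 0.1.

def required_distance(margin: float, target: float = 1e-15, A: float = 0.1) -> int:
    d = 3
    while A * margin ** (-(d + 1) / 2) > target:
        d += 2           # surface-code distances are odd
    return d

for margin in (3, 10, 100):
    d = required_distance(margin)
    print(f"margin {margin:>3}x -> distance {d:2d}, ~{2 * d * d:,} physical per logical")
```

A 3x margin demands distance 59 (roughly 7,000 physical qubits per logical); a 100x margin gets the same logical error at distance 13 (a few hundred), which is the quantitative content of the "barely below threshold" complaint.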

Hybrid platforms — the hardware's next chapter

No platform is best at everything on DiVincenzo's list. This invites hybrid architectures: combine platforms so each handles the criterion it is good at. Examples in current research:

Hybrid designs are the consensus path forward for the networking criteria (DV6-DV7) in the late 2020s.

Recent benchmark results (2024-2026)

A few highlights showing the field's progress against the criteria:

The NQM platform-wise roadmaps

These targets are conservative by international standards (IBM plans 10,000+ superconducting qubits by 2029, for example) but match India's manufacturing capacity and the NQM's incremental funding model. The NQM review committee explicitly scores each project against DiVincenzo's criteria in its annual assessments.

The criterion DiVincenzo did not include — classical control

Modern quantum computers are controlled by massive classical hardware stacks: arbitrary waveform generators producing nanosecond-accurate microwave pulses (superconducting), FPGA-based pulse sequencers (all platforms), cryogenic electronics (some), and large room-temperature servers translating quantum circuits into control signals. DiVincenzo's 2000 list does not explicitly include "classical control infrastructure capable of scaling with the qubit count", but in 2026 this has become a distinct scaling bottleneck in its own right. IBM's 133-qubit Heron requires hundreds of control lines; a hypothetical million-qubit machine would require a redesign of the entire classical-control stack, not just the qubits themselves. Some authors call this the "implicit eighth criterion" of DiVincenzo. When you read about cryogenic CMOS controllers, rapid single-flux-quantum logic, or photonic control buses, these are all engineering responses to the criterion missing from DiVincenzo's list: scalable classical control.

Where this leads next

References

  1. David P. DiVincenzo, The Physical Implementation of Quantum Computation (2000) — arXiv:quant-ph/0002077.
  2. Wikipedia, DiVincenzo's criteria.
  3. John Preskill, Lecture Notes on Quantum Computation, Chapter 7 — theory.caltech.edu/~preskill/ph229.
  4. IBM Quantum, IBM Quantum development and roadmap — ibm.com/quantum/technology.
  5. Google Quantum AI, Quantum error correction below the surface-code threshold (Willow, 2024) — nature.com/articles/s41586-024-08449-y.
  6. Department of Science and Technology (India), National Quantum Mission — dst.gov.in/national-quantum-mission.