In short

Measurement-based quantum computing (MBQC) — also called the one-way quantum computer — is a model, introduced by Raussendorf and Briegel in 2001, in which you first prepare a large entangled cluster state on a 2D lattice of qubits, and then compute by performing single-qubit measurements on those qubits in a chosen order, adaptively choosing each measurement basis from the outcomes so far. The output of the computation sits on the last un-measured qubits. It is provably equivalent in power to the circuit model — any quantum algorithm can be compiled into a measurement pattern on a cluster — and it is the natural fit for photonic hardware, where two-photon gates are nearly impossible during a computation but single-photon measurements are easy and fast. PsiQuantum's billion-dollar fault-tolerant photonic bet is built on a variant called fusion-based quantum computing, which generates small entangled photon clusters in silicon-photonics chips and "fuses" them into one giant resource state through probabilistic Bell-state measurements. Xanadu uses the continuous-variable analogue, where cluster states of squeezed light modes are measured with photon-number-resolving detectors. The key trade-off: state preparation is hard (you need a macroscopic entangled state of maybe a million photons) but the computation itself is then just photodetectors clicking — and photodetectors are the single most mature piece of quantum hardware humans know how to build.

The circuit model of quantum computing, which you have spent dozens of chapters building up, goes like this. Line up some qubits in |00...0\rangle. Apply Hadamards, apply CNOTs, apply T gates, apply more CNOTs — in a carefully choreographed sequence — and at the end, measure everything. The circuit is a temporally ordered recipe. The gates happen in time order. The measurement is the final step.

Here is a completely different recipe.

Line up a much larger grid of qubits, this time arranged in a 2D lattice. Before you do anything else, put every single one of them in the superposition |+\rangle = (|0\rangle + |1\rangle)/\sqrt 2. Now apply a controlled-Z gate between every pair of nearest neighbours on the lattice. This takes a while — every neighbour pair must be entangled. Once that is done, you are holding one enormous entangled state called a cluster state. You have not computed anything yet. You have just set up a resource.

Now compute by measuring. Pick the leftmost column of qubits and measure each one in a carefully chosen basis. The outcomes are random. Look at the outcomes. Based on those outcomes, pick the measurement bases for the next column. Measure them. Based on their outcomes, pick the bases for the next column. And so on. When you have measured all but the rightmost column, the remaining qubits are in a quantum state that is exactly what the corresponding circuit would have produced.

This is measurement-based quantum computing. There are no gates during the computation. The entanglement is laid down in advance, and the computation unfolds as a cascade of measurements that consume that entanglement one column at a time. The direction of the computation is baked into the lattice — which is why it is called one-way. You cannot go backwards, because a measurement is not a reversible thing.

It sounds too clever by half. Measurements are supposed to destroy quantum information, not compute with it. And yet Raussendorf and Briegel proved — in 2001, in a paper that has since been cited more than 8000 times — that this model is exactly as powerful as the circuit model. Anything Shor's algorithm can do on a circuit, MBQC can do on a cluster. The measurement pattern is a kind of compiled program; the cluster state is a kind of compiled hardware.

This chapter is a picture of that model. Why does it work? Why is it specifically good for photons? And why did PsiQuantum spend a billion dollars building a fab to make exactly this kind of machine?

The flip — what MBQC trades for what

In the circuit model, the bulk of the engineering difficulty is in the two-qubit gates. A Hadamard on a single qubit is easy; a CNOT between two qubits is the hard part on every hardware platform. Superconducting CNOTs require microwave pulse engineering; trapped-ion CNOTs require shared motional modes; photonic CNOTs require photon-photon interactions, which basically do not exist in empty space.

In MBQC, the two-qubit difficulty moves from the computation into the state preparation. You need to build that cluster state. Once you have it, the computation itself requires only single-qubit measurements, and single-qubit measurements are, on every platform, the easiest operation — especially on photons, where a single-photon measurement is just a photodetector clicking. The order in which you did things has been flipped: entangle first, then compute. Gates up front, measurements during execution.

For matter qubits — superconducting, trapped ions, neutral atoms — this flip is neutral. You can do gates during a computation anyway, so why pay the cost of a huge entangled resource state up front? Indeed, nobody runs trapped-ion computers in MBQC mode. They run them in the circuit model.

For photons it is the opposite. Photons are a delight to measure and a nightmare to gate. Two photons passing through empty space do not interact at all. The only way to make them interact is to send them through a beamsplitter and hope they come out the right ports — and even then, the interaction is probabilistic. During a computation, you cannot afford probabilistic gates. During state preparation, however, you can: if a preparation step fails, you can just try again. So you offload all the two-photon difficulty into the preparation stage, generate the cluster state probabilistically, and once you have it, the computation runs in one smooth sweep of detector clicks.

This is why MBQC is the native model for photonic quantum computing.

[Figure: Circuit model vs MBQC — when does the entanglement happen? Left panel: circuit model, gates during the computation, measurement at the end. Right panel: MBQC, entangle the whole lattice first, then measure left to right.]
Left: circuit model. Qubits start in $|0\rangle$; gates (Hadamard boxes, CNOT dots-and-targets) happen during the computation; meter boxes measure at the end. Right: MBQC. A 2D grid of qubits is pre-entangled into a cluster state (lines indicate CZ bonds). Computation proceeds by measuring left-to-right, adaptively choosing each measurement basis. The rightmost column (black) holds the final quantum output; everything else (red) has been measured.

The cluster state — what you are preparing

The specific resource state MBQC needs is called a cluster state, and its construction is charmingly simple. Pick any graph G — a bunch of vertices with some edges between them. Put one qubit at each vertex. Initialise every qubit in |+\rangle = (|0\rangle + |1\rangle)/\sqrt 2. Now, for every edge of the graph, apply a controlled-Z (CZ) gate between the two qubits at that edge's endpoints. The resulting state is the graph state for G; when G is a 2D square lattice, the result is specifically called a cluster state. Let N be the number of qubits.

|\text{cluster}\rangle_G = \left(\prod_{(i,j) \in E(G)} CZ_{ij}\right) |+\rangle^{\otimes N}.

Why this is a useful state: every CZ between neighbours introduces a kind of local correlation — the two qubits become entangled in a way that, when one is later measured in a certain basis, it teleports quantum information to its neighbour. By chaining neighbours, you get an entanglement network that can route information anywhere on the lattice, just by choosing which qubits to measure and in what order.

The CZ gate is symmetric — it does not care which qubit is "control" and which is "target" — and in the basis \{|0\rangle, |1\rangle\} it multiplies the coefficient of |11\rangle by -1 and leaves everything else alone. Two important facts about CZ:

  1. Two CZ gates on disjoint edges commute with each other. The order in which you apply them during preparation does not matter.
  2. CZ^2 = I. Applying the same CZ twice cancels out.

So the cluster state is a well-defined object that depends only on the graph, not on the order in which you built it. If you made the lattice edges in any order, even all at once in parallel, you would end up with the same state.
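Both facts, and the order-independence they imply, can be checked with a small dense-statevector sketch (numpy; the qubit labels and the 2x2 example lattice are arbitrary illustrative choices, not a standard API):

```python
import numpy as np

def cluster_state(n_qubits, edges):
    """Graph state: every qubit in |+>, then CZ across every edge."""
    # |+>^n has 2^n equal amplitudes 1/sqrt(2^n).
    state = np.ones(2 ** n_qubits) / np.sqrt(2 ** n_qubits)
    for (i, j) in edges:
        for idx in range(2 ** n_qubits):
            # CZ flips the sign of amplitudes with a 1 at both qubit i and qubit j.
            if (idx >> i) & 1 and (idx >> j) & 1:
                state[idx] *= -1
    return state

# A 2x2 square lattice: 4 qubits, 4 edges.
edges = [(0, 1), (1, 3), (3, 2), (2, 0)]
a = cluster_state(4, edges)
b = cluster_state(4, list(reversed(edges)))     # same edges, opposite order
c = cluster_state(4, edges + [(0, 1), (0, 1)])  # one edge CZ'd twice more

assert np.allclose(a, b)   # fact 1: order of CZs does not matter
assert np.allclose(a, c)   # fact 2: CZ^2 = I, the extra pair cancels
```

The same function builds the graph state for any edge list, which is exactly the "depends only on the graph" statement above.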

What a small cluster state looks like

Take the simplest cluster: two qubits joined by an edge. Starting from |+\rangle |+\rangle and applying one CZ:

CZ\, |+\rangle|+\rangle = \tfrac{1}{2}(|00\rangle + |01\rangle + |10\rangle - |11\rangle).

Why the minus sign: CZ flips the sign of |11\rangle only. The other three amplitudes are untouched. The state is now entangled — it cannot be written as a product of two single-qubit states, which is exactly the definition of entanglement from your earlier chapters.

Measure the first qubit of this two-qubit cluster in the X basis (i.e., in the basis \{|+\rangle, |-\rangle\}). Two outcomes are equally likely.

You started with a state that was entangled across two qubits. After the measurement, the first qubit has delivered its information ("which of |+\rangle, |-\rangle did I become?") as a classical bit, and the second qubit carries the continuation of whatever computation you were doing. The second qubit's final state depends on the first qubit's outcome only through a known Pauli correction — a classical bit flip or sign flip that you can track and correct at the end.

That last line — "track and correct at the end" — is the entire principle of MBQC. Every measurement has a random outcome, but the randomness is a known classical correction that you bookkeep in software. By the time you reach the output qubits, you know exactly which Pauli corrections to apply.

[Figure: Cluster state — a qubit at every vertex, a CZ on every edge. A 5 by 4 grid of dots (qubits in $|+\rangle$) joined by lines (CZ bonds); horizontal = computation time.]
The 2D square-lattice cluster state. Each vertex is a qubit initialised in $|+\rangle$; each edge represents a CZ gate applied between the two endpoints. Once all CZs have been applied, the whole grid is one globally entangled state. By convention, the horizontal direction encodes computation time, and the vertical direction encodes a qubit register — a $K$-wide lattice corresponds to a circuit on $K$ logical qubits.

Why the 2D lattice is universal

A deep theorem of MBQC (proved by Raussendorf, Briegel, and Browne in the early 2000s) says that the square-lattice cluster state, together with adaptive single-qubit measurements in a two-parameter family of bases, is universal: any quantum circuit on K logical qubits and depth D can be simulated by a measurement pattern on a cluster state of size roughly K \times D. The horizontal direction in the lattice encodes the sequence of gates; the vertical direction encodes the qubit register. A logical "qubit wire" in the circuit becomes a horizontal row in the cluster. A logical CNOT between two rows becomes a specific pattern of measurement bases on the qubits in the connecting vertical bond.

A 1D cluster — a chain — is not enough for universal computation. You can implement single-qubit gates on a chain (each chain implements a sequence of rotations on one logical qubit), but you cannot produce entanglement between two distant logical qubits from a chain alone. You need the two-dimensional connectivity of the lattice so that a logical CNOT has somewhere to live.

How a single column of measurements steers the computation

The mechanics of MBQC are cleanest on the 1D chain, which implements single-qubit logic. Then 2D extends that to full circuits. Here is the 1D story.

Take a horizontal chain of n qubits, pre-entangled into a 1D cluster state. Call them q_1, q_2, \dots, q_n. Suppose you want to implement the rotation R_z(\theta) on one logical qubit. MBQC recipe: measure q_1 in the computational basis (this fixes the logical input |\psi\rangle fed into the chain), measure q_2 in the basis \{|0\rangle + e^{i\theta}|1\rangle,\, |0\rangle - e^{i\theta}|1\rangle\} (this basis depends on the rotation angle \theta), then measure q_3 in the X basis, then q_4 in X, and so on up to q_{n-1}. The unmeasured qubit q_n at the right-hand end now holds the state H^{n-2} R_z(\theta)\, |\psi\rangle (each teleportation step contributes one Hadamard), up to Pauli corrections tracked from the outcomes.

Why this works: each measurement of a qubit in an angle-dependent basis teleports the information to the next qubit in the chain, and on the way through, the teleported state picks up a rotation by the chosen measurement angle. The randomness of the outcome just shifts the rotation by a classical Pauli X or Z that you know from the measurement result and can correct for in software later. What looks like a string of random coin flips is actually an exactly determined unitary gate plus a known classical byproduct.

This is the key move in MBQC: an adaptively chosen single-qubit measurement basis implements a rotation. By composing measurements down a chain of many qubits, you apply a sequence of rotations. By using vertical bonds between chains, you couple two logical qubits. The whole circuit model is reconstructed from these simple measurement-steering primitives.
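Here is a minimal numpy sketch of a single teleportation step, assuming the convention R_z(t) = diag(1, e^{it}) (the sign of the angle that appears in the delivered gate varies between references). It checks that measuring qubit 1 at angle \theta delivers a Hadamard-plus-rotation to qubit 2, with a Pauli-X byproduct on the other outcome:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])

def Rz(t):
    return np.diag([1.0, np.exp(1j * t)])

def teleport_step(psi, theta):
    """One MBQC chain step: CZ with a fresh |+>, then measure qubit 1 at
    angle theta. Returns the normalised state on qubit 2 for both outcomes."""
    plus = np.array([1, 1]) / np.sqrt(2)
    joint = np.kron(psi, plus)
    joint[3] *= -1                        # CZ: sign flip on |11>
    out = []
    for sign in (+1, -1):                 # basis (|0> +- e^{i theta}|1>)/sqrt 2
        bra = np.array([1, sign * np.exp(-1j * theta)]) / np.sqrt(2)
        q2 = bra[0] * joint[0:2] + bra[1] * joint[2:4]   # contract qubit 1
        out.append(q2 / np.linalg.norm(q2))
    return out

rng = np.random.default_rng(1)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
theta = 0.7

m0, m1 = teleport_step(psi, theta)
target = H @ Rz(-theta) @ psi   # outcome 0: rotation plus the "free" Hadamard
assert abs(abs(np.vdot(target, m0)) - 1) < 1e-10
assert abs(abs(np.vdot(X @ target, m1)) - 1) < 1e-10   # outcome 1: extra Pauli X
```

Chaining calls to `teleport_step` down a longer chain composes the per-step unitaries, which is the 1D story above in executable form.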

Example 1: Two-qubit cluster and an X-basis measurement

Work out what happens when you measure the first qubit of a two-qubit cluster state in the X basis, and verify that the output is an exact one-qubit state that depends deterministically (up to a tracked correction) on the outcome.

Step 1. Write down the state. Starting from |+\rangle|+\rangle and applying one CZ across the single edge:

|\phi_{12}\rangle = \tfrac{1}{2}(|00\rangle + |01\rangle + |10\rangle - |11\rangle).

Why: CZ flips the sign of |11\rangle and leaves the other basis states alone. The |+\rangle|+\rangle state had four equal amplitudes \tfrac{1}{2}; afterwards, the fourth one has a minus sign. This is the smallest possible cluster state and it is entangled.

Step 2. Rewrite in the X basis of qubit 1. Use |0\rangle = \tfrac{1}{\sqrt 2}(|+\rangle + |-\rangle) and |1\rangle = \tfrac{1}{\sqrt 2}(|+\rangle - |-\rangle). Substitute into the state:

|\phi_{12}\rangle = \tfrac{1}{2\sqrt 2}\big[(|+\rangle+|-\rangle)|0\rangle + (|+\rangle+|-\rangle)|1\rangle + (|+\rangle-|-\rangle)|0\rangle - (|+\rangle-|-\rangle)|1\rangle\big].

Why: the measurement of qubit 1 in the X basis will project onto |+\rangle or |-\rangle. Writing qubit 1 in that basis lets you read off what qubit 2 is left with in each case.

Step 3. Simplify. Group the |+\rangle_1 and |-\rangle_1 parts:

|\phi_{12}\rangle = \tfrac{1}{\sqrt 2}\Big[|+\rangle_1 \otimes |0\rangle_2 \;+\; |-\rangle_1 \otimes |1\rangle_2\Big].

Why: the |+\rangle_1 coefficient collects two copies of |0\rangle_2 (the |1\rangle_2 terms cancel), which is 2|0\rangle_2; the |-\rangle_1 coefficient collects two copies of |1\rangle_2 (the |0\rangle_2 terms cancel). The cluster state, viewed in the X basis of qubit 1 and the computational basis of qubit 2, is a Bell-like correlated state.

Step 4. Measure qubit 1 in the X basis. Outcome + with probability 1/2 leaves qubit 2 in |0\rangle; outcome - with probability 1/2 leaves qubit 2 in |1\rangle. The classical outcome m_1 \in \{0, 1\} is the measurement record. Why: the measurement projects the joint state onto the appropriate X-eigenstate of qubit 1. Whatever is left on qubit 2 is determined by the coefficient in front of that X-eigenstate — in this case, exactly the corresponding computational-basis state of qubit 2.

Step 5. Interpret the outcome as a gate on qubit 2. The two possible residual states for qubit 2, |0\rangle and |1\rangle, differ by a Pauli X: |1\rangle = X|0\rangle. The logical input was |+\rangle, and the state delivered to qubit 2 is H|+\rangle = |0\rangle, up to the byproduct. So measuring qubit 1 in the X basis implements a Hadamard gate on the logical qubit, with an extra Pauli X on qubit 2 when the outcome is -. The X byproduct is tracked classically. Why: this is the simplest case of the MBQC teleportation-and-rotation principle. An X-basis measurement on one qubit of a cluster teleports the information to the neighbour through a Hadamard (the X basis corresponds to measurement angle \theta = 0, so the rotation part is trivial) and leaves a possible Pauli-X correction that depends on the outcome. For a measurement at angle \theta, the implemented gate would have been the Hadamard composed with the rotation R_z(\theta) (up to a sign convention in \theta).

Result. Measuring one qubit of a two-qubit cluster state in the X basis teleports the logical state to the other qubit through a Hadamard (here |+\rangle \to |0\rangle), with a known Pauli-X byproduct determined by the outcome. No unitary gate was applied during the computation — only a measurement.
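The projection can be checked numerically with the four-amplitude statevector (qubit 1 is the first tensor factor; this is a sketch of the arithmetic, not a simulator):

```python
import numpy as np

plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)

# Two-qubit cluster: |+>|+> then CZ (sign flip on the |11> amplitude).
state = np.kron(plus, plus)
state[3] *= -1

# Project qubit 1 onto |+> or |-> and read off what is left on qubit 2.
results = {}
for name, bra in [("+", plus), ("-", minus)]:
    amp = bra[0] * state[0:2] + bra[1] * state[2:4]   # contract qubit 1
    prob = np.linalg.norm(amp) ** 2
    results[name] = (prob, amp / np.linalg.norm(amp))

assert np.isclose(results["+"][0], 0.5)      # both outcomes: probability 1/2
assert np.allclose(results["+"][1], [1, 0])  # outcome +: qubit 2 in |0>
assert np.allclose(results["-"][1], [0, 1])  # outcome -: qubit 2 in |1> = X|0>
```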

[Figure: Two-qubit cluster → X-basis measurement → one-qubit output.]
The minimal MBQC primitive. A two-qubit cluster (left) is a pair of dots joined by a CZ edge. Measuring qubit 1 in the X basis transfers the quantum information to qubit 2 with a known Pauli-$X$ byproduct depending on the outcome. Real MBQC circuits chain hundreds or thousands of these transfers together.

What this shows: measurements in MBQC are not just data readouts — they are gates. An X-basis measurement on one end of a cluster edge teleports the logical state to the neighbouring qubit through a Hadamard, with a known classical Pauli correction. Varying the measurement basis varies the implemented gate. This is how the entire circuit model is reconstructed out of nothing but single-qubit measurements on a large entangled resource.

Photons are the natural fit

With the mechanics in hand, the case for photons becomes obvious.

A photonic qubit — encoded in polarisation, in a dual-rail pair of modes, or in a time-bin — propagates through optical fibre or waveguide with negligible decoherence over kilometres. Photons do not interact with air, with each other, with almost anything else; they stream forward at the speed of light without any environment coupling to them. This is the single biggest reason photonics is attractive for quantum computing: the coherence time, in the sense of "how long before my quantum state is scrambled," is essentially the transit time of the photon through the optics.

But the same absence of interaction that makes photons decoherence-free also makes two-photon gates nearly impossible. A CNOT gate between two photons requires their amplitudes to interact nonlinearly, and the nonlinear optical effects that could do this (like the Kerr effect) are so weak at the single-photon level that a deterministic photon-photon gate remains beyond what anyone has built. The workarounds — Knill-Laflamme-Milburn linear optics, Hong-Ou-Mandel bunching, type-II SPDC entanglement — are all probabilistic. They succeed some fraction of the time; the rest of the time, the photons simply do not entangle and you have to retry.

Probabilistic gates are lethal during a computation. If each gate has a 25% success rate and a circuit has a thousand gates, the circuit never finishes — you would need (1/0.25)^{1000} attempts. Offline state preparation is a different story: you can retry a small probabilistic gate as many times as you like, on a small piece of the resource state, until it succeeds, and only then stitch that piece into the final cluster. The probability of ever finishing is one.
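The arithmetic, with the illustrative numbers from the paragraph above:

```python
p, n_gates = 0.25, 1000   # per-gate success probability, circuit size

# In-line probabilistic gates: all thousand must succeed in a single pass.
p_circuit = p ** n_gates  # true value ~1e-602; underflows to exactly 0.0 in doubles

# Offline preparation: retry each small piece until it succeeds.
tries_per_piece = 1 / p                     # 4 attempts on average
total_attempts = n_gates * tries_per_piece  # 4000 expected attempts, success certain

print(p_circuit, total_attempts)   # 0.0 4000.0
```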

This is the fundamental reason MBQC is photonic quantum computing's natural home. Probabilistic gates are fine for preparation; they are fatal for computation. Move all the probabilistic gate work into the preparation, and let the computation be just measurements — for which photons are outstanding.

PsiQuantum — the billion-dollar bet

PsiQuantum, founded in 2015 in Palo Alto by Jeremy O'Brien and a small team from Bristol, is a silicon-photonics quantum-computing startup that has, as of 2026, raised more than $1 billion. Their bet: build a fault-tolerant quantum computer with a million photonic qubits, all on standard silicon-photonic chips fabricated in GlobalFoundries' commercial CMOS fab.

Their architecture is a variant of MBQC called fusion-based quantum computing (FBQC). Instead of preparing one monolithic cluster state all at once, they generate small three- or four-photon entangled states called resource states, and then fuse them together into the larger cluster through Bell-state measurements called fusions. Each fusion is a probabilistic operation: with standard linear optics it succeeds about 50% of the time, boostable to 75% and beyond with ancilla photons. A fusion failure doesn't corrupt the computation; it just removes the failed edge from the cluster, and the error-correction code is designed to tolerate missing edges up to a threshold.
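A toy Monte Carlo (not PsiQuantum's actual architecture; the success probability p is a free parameter here): if each fusion succeeds independently with probability p, the fraction of surviving cluster edges concentrates near p, and 1 - p is the erasure rate the code has to tolerate.

```python
import numpy as np

def surviving_edge_fraction(n_fusions, p_success, rng):
    """Each fusion independently succeeds with probability p_success;
    a failure just deletes that edge from the cluster."""
    return np.mean(rng.random(n_fusions) < p_success)

rng = np.random.default_rng(0)
fracs = {p: surviving_edge_fraction(100_000, p, rng) for p in (0.5, 0.75, 0.9)}
for p, frac in fracs.items():
    print(p, round(frac, 3))   # surviving fraction sits within a few tenths of a percent of p
```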

[Figure: Fusion-based photonic MBQC (PsiQuantum). Small on-chip resource states are joined by fusion measurements; a failed fusion leaves its edge absent from the cluster.]
Fusion-based quantum computing. Silicon-photonic chips generate small three-photon entangled resource states continuously. Fusion measurements — linear-optical Bell measurements between photons from different resource states — probabilistically join them into a larger cluster. Failed fusions (dashed box) simply remove that edge; the error-correction code tolerates up to a threshold fraction of missing edges.

The PsiQuantum bet rests on two technological pieces. The first is silicon photonics at CMOS scale — integrating single-photon sources, waveguides, beamsplitters, phase shifters, and single-photon detectors all on one wafer, so that thousands of resource-state generators and fusion modules can run in parallel on a single chip. The second is fault-tolerant architecture — using the fusion failures and photon losses as natural erasure errors in a topological code (like the surface code), so that the overall machine tolerates fusion failure rates up to about 25% while still performing a logical computation reliably. A 2024 PsiQuantum arXiv preprint detailed their Omega chip — the first wafer-scale integration of all these components — with commercial fabrication at GlobalFoundries' Fab 8 in Malta, New York.

Has PsiQuantum built a useful quantum computer yet? No. Their public roadmap targets a million-qubit fault-tolerant machine in the early 2030s. Between now and then, they must demonstrate single-resource-state fidelity, then multi-resource-state fusion, then scale to thousands of resource states, then integrate the error correction. This is an engineering project the size of a semiconductor-fab buildout. The bet is that, if it works, the payoff is the first actually-useful fault-tolerant quantum computer — and the photonic path avoids the cryogenics of superconducting platforms.

Xanadu and continuous-variable MBQC

A second, very different photonic MBQC approach is pursued by Xanadu Quantum Technologies in Toronto. Instead of encoding qubits in the presence or absence of single photons, Xanadu encodes quantum information in continuous variables — the electric-field amplitudes of squeezed light. The cluster state is prepared in optical modes of squeezed light interfered on beamsplitters, and measurements are carried out with either homodyne detection (measuring a continuous quadrature) or photon-number-resolving detection (counting photons).

Continuous-variable MBQC generalises the discrete-qubit model: a single mode of light is a harmonic oscillator with an infinite-dimensional Hilbert space, and you can entangle many such modes into a continuous-variable cluster. Gaussian operations (displacements, squeezings, beamsplitters) are easy to perform but are not alone enough for universal quantum computation. The key result of Gu, Weedbrook, Menicucci, and Ralph (2009) is that a Gaussian cluster state combined with a non-Gaussian measurement — specifically photon-number-resolving detection — is universal for quantum computation.

Xanadu's Borealis machine (2022) demonstrated a 216-mode Gaussian boson-sampling experiment on a photonic CV platform — not yet MBQC, but the same hardware stack. Their longer-term roadmap is toward continuous-variable MBQC with fault tolerance encoded using GKP (Gottesman-Kitaev-Preskill) states — non-Gaussian states of a single mode that serve as logical qubits robust to small displacement errors. Xanadu's 2024 preprint proposed a fault-tolerant CV-MBQC architecture using GKP-encoded cluster states.

Indian context

Photonic quantum computing in India is at an earlier stage than trapped-ion or superconducting work, but it is growing. The National Quantum Mission identifies photonic platforms as one of four pillars; research groups at IIT Madras, IISc Bangalore, and the Raman Research Institute, Bangalore, run single-photon-source and photon-entanglement experiments on integrated-photonic chips. RRI in particular has a long history in quantum optics — they were part of the ground-station network for China's Micius satellite QKD experiments and have collaborated on Indian satellite-QKD proofs of concept through ISRO. A small PsiQuantum-style fusion-based effort is being scoped under NQM funding for 2026–2028, focused initially on silicon-photonic resource-state generation in a domestic fab partnership (with SCL, Chandigarh, and private foundries).

Common confusions

Going deeper

If you understand that MBQC prepares a cluster state first and then computes by single-qubit measurements in an adaptively chosen basis, that it is equivalent to the circuit model, that it is the natural fit for photonic hardware because two-photon gates are probabilistic but single-photon measurements are easy, and that PsiQuantum and Xanadu are the two leading photonic MBQC companies — you have chapter 169. What follows is the formal measurement-pattern notation, the fault-tolerant fusion architecture in more detail, the continuous-variable formulation, and the scaling numbers for practical MBQC machines.

Formal measurement patterns

A measurement-based quantum computation can be specified formally as a measurement pattern — a tuple (V, I, O, E, \text{bases}) consisting of a set of qubits V, inputs I \subseteq V, outputs O \subseteq V, a graph E specifying the cluster-state edges, and an assignment of measurement bases to each non-output qubit. Each non-output qubit is measured in the basis

\left\{\tfrac{1}{\sqrt 2}\left(|0\rangle + e^{i\alpha}|1\rangle\right),\; \tfrac{1}{\sqrt 2}\left(|0\rangle - e^{i\alpha}|1\rangle\right)\right\}

where \alpha may depend on earlier outcomes — this is the adaptive character. The computation is valid (deterministically simulates a unitary) if and only if the pattern has a flow — a function that identifies, for each measurement, which later qubit inherits the Pauli corrections from that outcome. Danos, Kashefi, and Panangaden (2007) formalised the measurement calculus, giving rewrite rules for measurement patterns analogous to circuit identities.
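As a data structure, the pattern tuple looks like this (an illustrative sketch only: the names are invented here, and the check validates the shape of the tuple, not the flow condition):

```python
from dataclasses import dataclass

@dataclass
class MeasurementPattern:
    """(V, I, O, E, bases): qubits, inputs, outputs, cluster edges, and an
    X-Y-plane measurement angle alpha for every non-output qubit."""
    qubits: set
    inputs: set
    outputs: set
    edges: set    # CZ edges, each a frozenset {i, j}
    angles: dict  # qubit -> alpha; adaptivity makes these depend on outcomes

    def check(self):
        assert self.inputs <= self.qubits and self.outputs <= self.qubits
        assert set(self.angles) == self.qubits - self.outputs, \
            "every non-output qubit is measured; outputs are left unmeasured"
        assert all(len(e) == 2 and e <= self.qubits for e in self.edges)

# A 5-qubit chain: input on q1, output on q5, q2 carries the rotation angle.
chain = MeasurementPattern(
    qubits={1, 2, 3, 4, 5},
    inputs={1},
    outputs={5},
    edges={frozenset({i, i + 1}) for i in range(1, 5)},
    angles={1: 0.0, 2: 0.7, 3: 0.0, 4: 0.0},
)
chain.check()
```

A real implementation would replace the fixed angles with functions of earlier outcomes and add the flow check that makes the pattern deterministic.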

Fault-tolerant MBQC

Raussendorf, Harrington, and Goyal (2007) showed that a 3D cluster state encodes a fault-tolerant surface code naturally: the third dimension plays the role of time, and each slice along the third axis is a code cycle. A measurement pattern on this 3D cluster implements a fault-tolerant logical operation with a threshold around 0.75%. The fusion-based architecture of PsiQuantum is a clever variant: the 3D cluster is built incrementally by fusing small few-photon GHZ-type resource states into a Raussendorf lattice, with each fusion being a probabilistic Bell measurement. Photon loss and fusion failure both manifest as erasure errors at the code level, and the surface code tolerates erasure rates up to 50%. The PsiQuantum claim is that this tolerance, combined with 99.5%+ detector efficiency and modest loss budgets, is sufficient for sub-threshold fault tolerance at scale.

Continuous-variable MBQC

Menicucci, Flammia, and Pfister (2008) showed that a continuous-variable cluster state — where each "qubit" is a mode of squeezed light and each "edge" is a two-mode squeezing interaction — supports a natural generalisation of MBQC using homodyne measurements. A pure Gaussian cluster state measured with homodyne detection alone only implements Gaussian operations, which are efficiently classically simulable (Bartlett-Sanders theorem). Adding a non-Gaussian resource — typically photon-number-resolving detection, or injection of non-Gaussian states like GKP states — upgrades the computation to universal. Xanadu's roadmap uses GKP-encoded clusters: each mode holds a GKP qubit (a comb of delta functions in position space, a comb in momentum space), which is error-corrected against small displacement noise. The fault-tolerance threshold for GKP-based CV-MBQC is around 10 dB of squeezing plus a modest GKP ancilla error rate — numbers that have been individually demonstrated but not yet combined.

Resource-state scaling

How big a cluster state do you need to run a useful quantum algorithm? A Shor-algorithm instance to factor a 2048-bit RSA key requires roughly 10^7–10^8 logical gate operations; each logical gate uses \sim 10^3 physical qubits worth of cluster; the total physical cluster size is around 10^{10}–10^{11} entangled photons. This is why the PsiQuantum goal is a million-physical-qubit machine — and why the entangling rate (the number of new cluster edges produced per second) is the dominant engineering metric. On 2024 silicon-photonic hardware, the entangling rate per chip is in the gigahertz range, so a few thousand chips in parallel could in principle produce the required cluster volume in real time.
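The same estimate as back-of-envelope Python (every number below is a rough figure quoted above, not a measured value):

```python
logical_gates = 1e8       # upper end of the 10^7-10^8 range for 2048-bit Shor
cluster_per_gate = 1e3    # ~10^3 physical cluster qubits per logical gate
cluster_volume = logical_gates * cluster_per_gate   # ~1e11 entangled photons

edge_rate_per_chip = 1e9  # GHz-range entangling rate per chip
n_chips = 4000
# Time to generate the full cluster volume at this aggregate rate.
runtime_s = cluster_volume / (edge_rate_per_chip * n_chips)
print(f"{cluster_volume:.0e} photons, {runtime_s:.3f} s at {n_chips} chips")
```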

PsiQuantum Omega chip

The 2024 PsiQuantum Omega chip (announced jointly with GlobalFoundries) is a wafer-scale silicon-photonics integration of the full component set: single-photon sources, waveguides, beamsplitters, phase shifters, and single-photon detectors on one wafer.

The Omega chip does not by itself realise a million-qubit quantum computer. It is the first-generation building block; achieving fault tolerance requires linking thousands of Omega chips with phase-stable interconnects, integrating them with classical control electronics, and demonstrating sustained logical-qubit operation. PsiQuantum's 2024 announcement placed this target in the early 2030s.

Xanadu Borealis and beyond

Xanadu's Borealis machine (2022) demonstrated Gaussian boson sampling with 216 photonic modes and claimed quantum advantage — running a specific sampling task in microseconds that would take classical supercomputers 9000 years. This was not MBQC; it was the simpler task of sampling from a Gaussian output distribution. Borealis was a step toward CV-MBQC in that it proved that large-scale Gaussian cluster states can be engineered and measured on a photonic platform. Xanadu's 2024–2025 roadmap is to integrate GKP state generation (the non-Gaussian ingredient) and demonstrate a single logical qubit, then scale to fault-tolerant MBQC.

Where this leads next

References

  1. Robert Raussendorf and Hans J. Briegel, A One-Way Quantum Computer (2001), Physical Review Letters — arXiv:quant-ph/0010033.
  2. Hans J. Briegel et al., Measurement-based quantum computation (2009), Nature Physics — arXiv:0910.1116.
  3. Sara Bartolucci et al. (PsiQuantum), Fusion-based quantum computation (2021) — arXiv:2101.09310.
  4. Nicolas C. Menicucci et al., Universal Quantum Computation with Continuous-Variable Cluster States (2006) — arXiv:quant-ph/0605198.
  5. Wikipedia, One-way quantum computer.
  6. John Preskill, Lecture Notes on Quantum Computation, Chapter 7 — theory.caltech.edu/~preskill/ph229.