In short
A qubit is an imperfect quantum system. Even with no gates applied, two clocks start ticking the moment you prepare a state. T_1 — the energy-relaxation time — is how long the excited state |1\rangle survives before it decays to |0\rangle, releasing its energy to the environment. The excited-state population obeys P(|1\rangle \mid t) = e^{-t/T_1}. T_2 — the phase-coherence time — is how long a superposition \alpha|0\rangle + \beta|1\rangle keeps its relative phase. The density-matrix off-diagonal decays as \rho_{01}(t) = \rho_{01}(0)\,e^{-t/T_2}. A theorem binds them: T_2 \leq 2\,T_1. A third time, T_2^*, is the "free Ramsey" coherence time including slow frequency drifts, and typically T_2^* < T_2. Three experiments measure these numbers on any hardware: a \pi-pulse-and-wait protocol for T_1, a Ramsey fringe protocol for T_2^*, and a Hahn echo protocol for T_2. Typical 2026 values: T_1 \approx T_2 \approx 100–300 μs on IBM Heron transmons, T_2 \sim seconds on Quantinuum trapped ions, effectively infinite on photonic qubits, milliseconds on neutral atoms. The ratio T_{\text{coherence}} / T_{\text{gate}} — roughly 1000 to 5000 on today's superconducting machines — is the single most important figure for deciding whether a quantum algorithm can finish before its qubits forget what they were doing.
A quantum computer does not sit still. The moment you prepare a qubit in a clean state and step back, the environment starts undoing your work.
Drop a qubit into |1\rangle, walk away for a hundred microseconds on a current IBM chip, and there is a decent chance you come back to find it in |0\rangle — because |1\rangle has more energy than |0\rangle, and the universe, given half a chance, prefers lower energy. Prepare a qubit in |+\rangle — the equal superposition on the equator of the Bloch sphere — and after a hundred microseconds you will likely find that it has drifted to some mixed, phase-scrambled state that is no longer |+\rangle at all.
Two numbers describe this: T_1, the clock for energy decay, and T_2, the clock for phase coherence. They are the first two rows of every hardware datasheet published by IBM, Google, Quantinuum, QuEra, or any other quantum-computing company. Read any paper announcing a new chip and the first table you will see is the per-qubit T_1 and T_2.
This chapter is about what those two numbers mean, why they obey the inequality T_2 \leq 2T_1, how you actually measure them on a real machine, and what they tell you about whether your hardware is any good.
T_1 — the energy-relaxation time
Imagine flipping a qubit from |0\rangle to |1\rangle with a \pi-pulse, then leaving it alone. On a superconducting transmon, the |1\rangle state is about 5 GHz higher in energy than |0\rangle — the qubit is sitting on a ledge. The environment is full of electromagnetic modes at roughly the right frequency, and given enough time the qubit will spontaneously emit a microwave photon into one of them and fall back to |0\rangle.
The probability of still being in |1\rangle after a time t follows a simple exponential:

P(|1\rangle \mid t) = e^{-t/T_1}.
The characteristic time T_1 is the 1/e time: after t = T_1, the excited-state population has dropped to 1/e \approx 0.37. After t = 2T_1, it is down to 1/e^2 \approx 0.14. After t = 5T_1, essentially every qubit has relaxed.
Why exponential decay: each moment in time, a small probability dt/T_1 is assigned to the event "the qubit emits its photon and decays now." The probability of surviving many tiny intervals in a row multiplies: P(t + dt) = P(t)(1 - dt/T_1), which in the limit dt \to 0 is the differential equation dP/dt = -P/T_1, whose solution is P(t) = e^{-t/T_1}. Exponential decay is the signature of a memoryless process — the qubit does not remember how long it has been sitting there.
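The memoryless argument above can be checked directly with a small Monte Carlo sketch (all numbers illustrative): give each qubit an independent decay probability dt/T_1 in every small time step, and the surviving fraction after one T_1 comes out near 1/e.

```python
import math
import random

def survival_fraction(T1, t, n_qubits=5000, n_steps=400, seed=0):
    """Monte Carlo check that memoryless decay gives P(t) = exp(-t/T1):
    in each small interval dt the qubit decays with probability dt/T1,
    independent of how long it has already survived."""
    rng = random.Random(seed)
    dt = t / n_steps
    alive = 0
    for _ in range(n_qubits):
        if all(rng.random() > dt / T1 for _ in range(n_steps)):
            alive += 1
    return alive / n_qubits

T1 = 100.0                             # illustrative, e.g. 100 us
p_sim = survival_fraction(T1, t=T1)    # wait exactly one T1
print(p_sim, math.exp(-1.0))           # both near 0.37
```

The only input is the per-step decay probability; the exponential is not put in by hand, it emerges from multiplying survival probabilities.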
In the language of the standard-channels chapter, T_1 decay is amplitude damping with time-dependent parameter \gamma(t) = 1 - e^{-t/T_1}. The density-matrix population of the excited state obeys \rho_{11}(t) = \rho_{11}(0)\,e^{-t/T_1} and energy leaks out to the environment at a rate \hbar\omega_{01}/T_1.
The physical origin
T_1 is set by the rate at which the qubit's environment can absorb its energy. Three ingredients control it:
- Density of environmental modes at the qubit's frequency. If the surrounding material has lots of electromagnetic modes available at 5 GHz, the qubit can dump its energy easily. Josephson-junction defects and two-level systems (TLSs) in the oxide layer are the usual culprits on transmons.
- Coupling strength of the qubit to those modes. A better-isolated qubit has a longer T_1; a more strongly coupled one has a shorter T_1. Engineers spend most of their time here.
- Temperature. At finite temperature, some environmental modes are already occupied, and the qubit can also absorb energy from them. At 20 mK on a transmon, k_B T \ll \hbar\omega_{01} so thermal excitations are rare — but not zero.
T_2 — the phase-coherence time
Now a different experiment. Prepare the qubit not in |1\rangle but in |+\rangle = (|0\rangle + |1\rangle)/\sqrt 2 — the equal superposition sitting on the +x equator of the Bloch sphere. Wait. What goes wrong?
Two things. First, the |1\rangle part of the superposition can still decay to |0\rangle at the T_1 rate. Second — and this is new — the relative phase between |0\rangle and |1\rangle can get scrambled without any energy being lost. If the qubit's frequency fluctuates slightly from moment to moment (because of magnetic-field jitter, charge-trap rearrangements, mechanical vibration, or cosmic rays), the state acquires a random phase e^{i\phi(t)} between |0\rangle and |1\rangle. Average over many runs of the experiment, and the off-diagonal element of the density matrix gets smeared to zero.
The off-diagonal coherence decays as

\rho_{01}(t) = \rho_{01}(0)\,e^{-t/T_2}.
The Bloch vector's x- and y-components — the projection of the state onto the equatorial plane — shrink by the factor e^{-t/T_2}, while the z-component shrinks more slowly (because only amplitude damping, not pure dephasing, moves the state along z).
Why the in-plane component is the phase information: on the Bloch sphere, the azimuthal angle \varphi corresponds to the relative phase between the |0\rangle and |1\rangle amplitudes. A state with the same populations but a random \varphi averages over the equator to zero transverse Bloch vector. So the length of the in-plane Bloch vector is the coherence. When T_2 clocks out, the state has no preferred direction in the equatorial plane — it is a classical mixture of populations.
The two things that kill T_2
T_2 has two sources, and they add in the decay rate:

\frac{1}{T_2} = \frac{1}{2T_1} + \frac{1}{T_\varphi}.
The first term, 1/(2T_1), is the coherence decay forced by amplitude damping itself. Why this factor of 2: amplitude damping multiplies the off-diagonal element of the density matrix by \sqrt{1-\gamma} = e^{-t/(2T_1)} — the square root of the population-survival factor. A population that decays as e^{-t/T_1} forces the coherence to decay as e^{-t/(2T_1)}, half as fast in the exponent.
The second term, 1/T_\varphi, is pure dephasing — the part that scrambles the phase without any energy loss. It comes from slow frequency noise: anything that makes the qubit's 0-to-1 transition frequency wiggle around its nominal value, so that after a time t the phase has drifted by a random amount. For transmons the main sources are 1/f charge noise, flux noise, and fluctuations in the critical currents of the Josephson junctions.
The inequality T_2 \leq 2\,T_1
Since T_\varphi^{-1} \geq 0, the formula above gives

\frac{1}{T_2} = \frac{1}{2T_1} + \frac{1}{T_\varphi} \geq \frac{1}{2T_1}, \quad \text{i.e.} \quad T_2 \leq 2\,T_1.
Equality holds when there is no pure dephasing at all — when the only source of coherence decay is the amplitude damping itself. In that "amplitude-damping-limited" regime, T_2 = 2 T_1 exactly. In practice every hardware platform has some pure dephasing, so T_2 is strictly smaller than 2 T_1. If a datasheet ever claims T_2 > 2 T_1, it is either a fit error or the reported number is T_{2,\text{echo}} (the refocused Hahn-echo time, which can exceed the raw Ramsey T_2 but still obeys the underlying bound on the coherence at a fixed time).
The inequality has a clean physical reading: any mechanism that flips the qubit already scrambles its phase by the same amount. You can have pure phase noise without any energy loss, but you cannot have energy loss without the associated phase decoherence coming along for the ride. That asymmetry is what gives T_2 its upper bound.
T_2 versus T_2^* — the inhomogeneous-broadening subtlety
Real experimenters talk about two phase-coherence times: T_2 and T_2^*. The starred version is the one you measure directly by a simple Ramsey-fringe experiment (described below). The un-starred version is the one you measure by inserting a refocusing pulse (the Hahn echo). They differ because of a concept called inhomogeneous broadening.
Any real experiment runs the same circuit many times — perhaps ten thousand times — and averages. If the qubit's frequency is slightly different on each shot (because of slow drifts in temperature, magnetic field, or local two-level systems), each shot accumulates a slightly different phase during the wait. Averaging over those different phases gives a faster apparent coherence decay than what each individual shot experiences.
- T_2^* (the inhomogeneously broadened coherence time) includes slow shot-to-shot frequency drifts. It is what you measure in a raw Ramsey experiment.
- T_2 (the echo-refocused coherence time) corrects for those drifts by inserting a \pi-pulse halfway through the wait, which swaps |0\rangle and |1\rangle and causes the accumulated phase drift to cancel out over the second half of the wait. It measures the true decoherence.
The relation is T_2^* \leq T_2. On transmons with significant 1/f noise, T_2^* can be several times shorter than T_2. On trapped ions with very stable frequencies, the two are nearly equal.
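The shot-to-shot mechanism is easy to demonstrate numerically. In this sketch (the spread value is made up), the detuning is frozen within each shot, so no single shot loses any contrast at all; only the average over shots decays, which is exactly the sense in which T_2^* undershoots T_2.

```python
import math
import random

def averaged_ramsey_contrast(tau, sigma, n_shots=5000, seed=1):
    """Average cos(delta * tau) over shots whose detuning delta is
    redrawn each shot (slow drift). A Gaussian spread of std sigma
    gives the Gaussian envelope exp(-(sigma * tau)**2 / 2)."""
    rng = random.Random(seed)
    return sum(math.cos(rng.gauss(0.0, sigma) * tau)
               for _ in range(n_shots)) / n_shots

sigma = 0.05       # shot-to-shot frequency spread, rad/us (illustrative)
tau = 40.0         # wait time, us
measured = averaged_ramsey_contrast(tau, sigma)
predicted = math.exp(-(sigma * tau) ** 2 / 2)   # ~0.135
print(measured, predicted)
```

Each individual shot has |cos(\delta\tau)| contrast 1 in magnitude; the decay appears only in the ensemble average, which is why a refocusing pulse can undo it.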
Measuring the three times
Every hardware paper reports T_1, T_2^*, and T_2. Each comes from a specific pulse sequence.
T_1 measurement — "prepare |1⟩, wait, measure"
Apply a \pi-pulse to |0\rangle to reach |1\rangle. Wait a variable time \tau. Measure in the computational basis. Count the fraction of |1\rangle outcomes. Repeat for many \tau values. Fit the data to P(\tau) = e^{-\tau/T_1} (plus a small constant offset for the residual thermal population at finite temperature).
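A minimal sketch of the fitting step, with simulated rather than real data (the T_1 value, wait times, and shot count are invented): generate binomial shot noise around the exponential, then recover T_1 from the least-squares slope of \ln P versus \tau.

```python
import math
import random

T1_true = 180.0                        # us, illustrative
taus = [10, 40, 80, 120, 200, 300]     # wait times in us
shots = 4000
rng = random.Random(7)

# Simulate: at each tau, count |1> outcomes over `shots` repetitions.
P = []
for tau in taus:
    p = math.exp(-tau / T1_true)
    P.append(sum(rng.random() < p for _ in range(shots)) / shots)

# Fit ln P = -tau/T1 by least squares; the slope is -1/T1.
ys = [math.log(p) for p in P]
xbar = sum(taus) / len(taus)
ybar = sum(ys) / len(ys)
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(taus, ys))
         / sum((x - xbar) ** 2 for x in taus))
T1_fit = -1.0 / slope
print(T1_fit)   # close to 180
```

A production calibration would fit the exponential directly with uncertainty weighting (and the thermal offset), but the log-linear version shows the essential step.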
T_2^* measurement — Ramsey interferometry
Prepare |0\rangle. Apply a \pi/2 pulse to reach |+\rangle. Wait a variable time \tau (during this wait, the state precesses around the z-axis and accumulates a phase). Apply a second \pi/2 pulse. Measure in the computational basis.
The probability of getting |0\rangle at the end oscillates with \tau at the detuning (if you deliberately offset the rotating-frame frequency by a known amount \Delta), and the oscillation envelope decays as e^{-\tau/T_2^*}:

P(|0\rangle \mid \tau) = \tfrac{1}{2}\left(1 + e^{-\tau/T_2^*}\cos(\Delta\tau)\right).

The decay envelope is the coherence-loss curve; the oscillation gives you the detuning. Fit both from the data.
T_2 measurement — Hahn echo (spin echo)
Same as Ramsey but with a \pi-pulse inserted in the middle: \pi/2 \to \text{wait } \tau/2 \to \pi \to \text{wait }\tau/2 \to \pi/2 \to \text{measure}. The middle \pi-pulse swaps |0\rangle and |1\rangle, which inverts the sign of any slow frequency drift. Whatever phase error the state picks up in the first half gets cancelled by an equal-and-opposite phase error in the second half. The fast (high-frequency) noise is not refocused and survives; the slow inhomogeneous broadening is removed. What remains is T_2.
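Here is a minimal numerical sketch of the refocusing (noise model and numbers invented for illustration): the detuning is frozen within each shot, i.e. purely slow noise, and the mid-sequence \pi-pulse flips the sign of the phase accumulated in the first half.

```python
import math
import random

def mean_coherence(tau, echo, sigma_slow=0.2, n_shots=3000, seed=3):
    """Average cos(accumulated phase) over shots. delta is constant
    within a shot but random shot-to-shot (slow drift). The echo
    pi-pulse negates the first half's phase, so slow drift cancels."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_shots):
        delta = rng.gauss(0.0, sigma_slow)
        if echo:
            phase = -delta * tau / 2 + delta * tau / 2   # halves cancel
        else:
            phase = delta * tau
        total += math.cos(phase)
    return total / n_shots

tau = 30.0
print(mean_coherence(tau, echo=False))  # ~0: Ramsey contrast gone
print(mean_coherence(tau, echo=True))   # 1.0: fully refocused
```

Fast noise would redraw delta within a shot, so the two halves would no longer cancel; that residual is what the finite echo T_2 measures.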
The echo trick was invented for nuclear-magnetic-resonance (NMR) spectroscopy in the 1950s and is one of the oldest tools in experimental quantum mechanics. It works like runners who all turn around at a halfway signal: the fast and slow runners that had spread apart re-converge at the start, because flipping the sign of the drift makes what was diverging come back together. For quantum computing, the Hahn echo is the simplest member of a much larger family of dynamical decoupling sequences (CPMG, XY8, and friends), covered in the going-deeper section.
Typical values across platforms (2026)
The numbers below reflect representative published figures as of early 2026. They change month to month as hardware improves — treat these as order-of-magnitude, not exact.
| Platform | T_1 | T_2 (echo) | T_2 / T_1 | Notes |
|---|---|---|---|---|
| IBM Heron transmon | \sim 200 μs | \sim 150 μs | \sim 0.75 | Best SC at scale; 133+ qubits |
| Google Willow transmon | \sim 70 μs | \sim 80 μs | \sim 1.1 | 105 qubits, distance-7 surface code demo |
| Rigetti Ankaa-3 transmon | \sim 30 μs | \sim 30 μs | \sim 1 | Tunable-coupler architecture |
| Quantinuum H2 trapped ion | \sim hours | \sim 10 s | \sim 0 | Hyperfine qubit; echo is barely needed |
| IonQ Tempo trapped ion | \sim 30 s | \sim 1 s | \sim 0.03 | Qubit is a long-lived atomic state |
| QuEra Aquila neutral atom | \sim 4 s | \sim 1 ms | — | Rydberg qubit — decoherence is mostly Doppler |
| Photonic (PsiQuantum, Xanadu) | effectively \infty | effectively \infty | — | Photons do not decohere in flight; loss is the real enemy |
| Silicon spin (UNSW, TU Delft) | \sim 100 ms | \sim 1 ms | \sim 0.01 | T_2 limited by nuclear-spin bath |
Two observations stand out. First, T_1 varies by nine orders of magnitude across platforms: microseconds on transmons, seconds on neutral atoms, hours on trapped ions, infinite on photons. Second, the ratio T_2 / T_1 is not uniform either — on superconducting qubits T_2 is close to T_1, meaning pure dephasing is comparable to amplitude damping; on trapped ions T_2 is much less than T_1 because the excited state lives for so long that pure dephasing dominates.
Indian context. At TIFR Mumbai, Anil Kumar's pulsed-NMR quantum-computing group in the 2000s measured T_1 and T_2 on nuclear-spin qubits in liquid solutions and recorded coherence times of several seconds — among the longest qubit coherence times ever measured. The catch is that liquid-NMR qubits do not scale: you cannot build a thousand-qubit liquid-NMR quantum computer. But the measurement techniques pioneered there — Ramsey and Hahn-echo protocols, decoupling sequences — are the direct ancestors of the protocols run every day on IBM's transmons today. India's National Quantum Mission funds superconducting-qubit fabrication at IISc Bangalore and TIFR, where per-qubit T_1 and T_2 characterisation is the first step in every chip launch.
Why coherence times matter — the gate budget
A quantum algorithm has two relevant time scales: the gate time T_g (how long a single operation takes) and the coherence time T_c \approx \min(T_1, T_2). The ratio is the gate budget:

N_{\text{gates}} \approx \frac{T_c}{T_g}.
After roughly this many gates, the qubit has decohered by an order-one amount and any further computation is noise. More precisely, if each gate has its own error probability \epsilon_g \sim T_g / T_c induced by decoherence alone, the total error after N gates is roughly N \cdot \epsilon_g, and the computation breaks down when that reaches \approx 1.
On a 2026 transmon with T_c \approx 100 μs and T_g \approx 30–100 ns, the gate budget is 1,000 to 5,000 operations before decoherence alone kills the computation. On Quantinuum trapped ions with T_c \approx 10 s and T_g \approx 10 μs, the budget is 10^6 operations. On PsiQuantum photonics with T_c = \infty, the budget is infinite from coherence — the practical limit comes from photon loss, which is its own problem.
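The budget arithmetic above, spelled out (the inputs are the representative numbers from the text, so the outputs are order-of-magnitude only):

```python
# Gate budget N ~ T_c / T_g, all times in seconds.
def gate_budget(t_coherence_s, t_gate_s):
    return t_coherence_s / t_gate_s

budget_transmon_fast = gate_budget(100e-6, 30e-9)    # ~3,300 gates
budget_transmon_slow = gate_budget(100e-6, 100e-9)   # 1,000 gates
budget_ion = gate_budget(10.0, 10e-6)                # 1,000,000 gates
print(budget_transmon_fast, budget_transmon_slow, budget_ion)
```

Note that the comparison is dimensionless: a platform with a thousand-fold longer coherence time but a thousand-fold slower gate has exactly the same budget.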
Fault-tolerant quantum computing changes this calculus dramatically. With surface-code error correction, you convert many noisy physical qubits into a single logical qubit whose logical error rate is exponentially suppressed in the code distance. The effective logical-qubit coherence time can be made much longer than the physical T_1 — but this requires the physical error rates to be below a threshold (~10^{-3} per gate on current codes), and the physical T_1/T_g ratio is the dominant input to that error rate. Long physical coherence is still the foundation.
Examples
Example 1 — Reading a Ramsey fringe
Suppose you run a Ramsey experiment on a transmon qubit, deliberately detuning the rotating-frame frequency by \Delta = 2\pi \times 1 MHz (so 1 oscillation every microsecond). At \tau = 0 you measure P(|0\rangle) = 1. As \tau increases, the probability oscillates and the envelope shrinks. You record the following data points:
| \tau (μs) | P(|0\rangle) (measured) |
|---|---|
| 0 | 1.00 |
| 20 | 0.64 |
| 40 | 0.59 |
| 80 | 0.52 |
| 140 | 0.50 |
Find the qubit's T_2^*.
Step 1. The Ramsey model is P(|0\rangle \mid \tau) = \tfrac{1}{2}(1 + e^{-\tau/T_2^*}\cos(\Delta\tau)). Strip the cosine by looking at the envelope — the extremes of the oscillation. At \tau = 140 μs the oscillation has completed 140 full cycles (since the period is 1 μs), and the value 0.50 is exactly at the envelope centre, meaning the coherence has essentially died and only the long-time average survives. The envelope heights at early \tau encode T_2^*. Why focus on the envelope: the cosine oscillation carries the frequency information, which is not what we want right now. Taking the peak-to-trough amplitude at each \tau isolates the decay of the coherence factor e^{-\tau/T_2^*}.
Step 2. At \tau = 20 μs, we are near a peak of the cosine, and the envelope height above the 0.5 baseline is 0.64 - 0.50 = 0.14. The envelope equation says this should equal \tfrac{1}{2}e^{-\tau/T_2^*}. So \tfrac{1}{2}e^{-20/T_2^*} = 0.14, hence e^{-20/T_2^*} = 0.28, so T_2^* = -20/\ln(0.28) \approx 20/1.27 \approx 15.7 μs. This is a rough first estimate; noise in the data will give some spread.
Step 3. Check with a different point. At \tau = 40 μs, envelope should be \tfrac{1}{2}e^{-40/15.7} \approx \tfrac{1}{2}\cdot 0.08 = 0.04. The data shows 0.59 - 0.50 = 0.09, which is higher than the exponential fit predicts. That means T_2^* is actually longer than 15.7 μs. A two-point fit gives T_2^* \approx (40-20) / \ln(0.14/0.09) = 20 / 0.44 \approx 45 μs. A proper fit across all the data, weighted by uncertainties, would give something in the range 30–50 μs.
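The two envelope estimates in Steps 2 and 3 can be reproduced in a few lines (envelope data from the table above; the 0.5 baseline and 0.5 prefactor come from the Ramsey model):

```python
import math

# Envelope heights above the 0.5 baseline at two near-peak taus.
a20 = 0.64 - 0.50    # at tau = 20 us
a40 = 0.59 - 0.50    # at tau = 40 us

# Single-point estimate (Step 2): 0.5 * exp(-20/T2star) = a20.
T2star_single = -20.0 / math.log(a20 / 0.5)

# Two-point estimate (Step 3): the ratio cancels the 0.5 prefactor.
T2star_pair = (40.0 - 20.0) / math.log(a20 / a40)

print(T2star_single, T2star_pair)   # ~15.7 us and ~45 us
```

The spread between the two estimates is itself informative: with noisy data, single-point estimates scatter widely, which is why a full weighted fit over all \tau values is the standard practice.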
Step 4. Interpretation in hardware terms. A transmon with T_2^* \approx 40 μs and single-qubit gate time \approx 30 ns has a gate budget of \sim 1300 single-qubit gates before phase coherence is lost — comfortable for a shallow circuit, tight for a full Shor run.
Result. T_2^* \approx 30–50 μs from this data; the decay envelope, not the oscillation itself, is what encodes it.
What this shows. A Ramsey experiment gives you two numbers for free: the qubit detuning (from the fringe oscillation) and the coherence time (from the envelope). On real hardware, you run this every calibration cycle to check that nothing has drifted overnight. If T_2^* has mysteriously halved since yesterday, something on the chip has changed.
Example 2 — What the T_2/T_1 ratio tells you
A trapped-ion experiment reports T_1 = 30 s and T_2 = 1 s. What does the ratio tell you about the dominant decoherence mechanism?
Step 1. Compute the ratio. T_2 / T_1 = 1/30 \approx 0.033. Far less than the amplitude-damping limit T_2 / T_1 = 2.
Step 2. Extract pure-dephasing time. From 1/T_2 = 1/(2T_1) + 1/T_\varphi:

\frac{1}{T_\varphi} = \frac{1}{T_2} - \frac{1}{2T_1} = 1 - \frac{1}{60} \approx 0.983~\text{s}^{-1}.
So T_\varphi \approx 1.017 s \approx T_2. Pure dephasing is essentially the entire story. Why this makes sense: T_1 is 30 times longer than T_2, so amplitude damping contributes a rate 1/(2T_1) = 1/60 \approx 0.017 s^{-1} to the phase-decoherence rate, while pure dephasing contributes roughly 1 s^{-1}. The pure dephasing rate is \sim 60\times larger than the amplitude-damping contribution — so T_\varphi and T_2 are essentially the same number.
Step 3. Physical interpretation. On a trapped ion, the qubit is typically a hyperfine transition in the ground state — energies differ by a few GHz, but both states are electronically the ground state, so spontaneous emission is extremely rare (T_1 measured in minutes to hours). What kills coherence is magnetic-field drift: the hyperfine splitting is magnetic-field-sensitive, and a few mG of drift over a second shifts the qubit phase enough to destroy coherence. Shielding, feedback-stabilised current sources, and magnetic-field-insensitive "clock-state" qubits (which use the m_F = 0 level) can push T_2 into the tens of seconds.
Step 4. Compare with a transmon. For IBM Heron with T_1 \approx 200 μs, T_2 \approx 150 μs:

\frac{1}{T_\varphi} = \frac{1}{150} - \frac{1}{400} = \frac{1}{240}~\text{μs}^{-1}.
So T_\varphi \approx 240 μs — comparable to both T_1 and T_2, meaning amplitude damping and pure dephasing contribute in roughly equal measure. Improving either one would buy you real coherence gains.
Result. Trapped ion: pure dephasing dominates (T_\varphi \ll T_1, so effort should go into magnetic shielding and clock states). Transmon: balanced (T_\varphi \approx T_1, so improvements to junction materials and electromagnetic isolation both help). The ratio T_2 / T_1 is a diagnostic for where to focus engineering.
What this shows. The T_2 / T_1 ratio is not just a number — it is the experimentalist's shorthand for "which part of the physics do I have to fix next?" A ratio near 2 means amplitude damping is the only problem; better isolation helps. A ratio near 0 means pure dephasing dominates; better frequency stability helps. This is the first page of any noise-characterisation report.
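The diagnostic from Example 2 is easy to script. Here is a small helper (the function name is ours; inputs are the numbers used in the example):

```python
import math

def pure_dephasing_time(T1, T2):
    """Invert 1/T2 = 1/(2*T1) + 1/T_phi for the pure-dephasing time.
    Returns inf if the inputs sit exactly at the T2 = 2*T1 limit."""
    rate_phi = 1.0 / T2 - 1.0 / (2.0 * T1)
    return math.inf if rate_phi <= 0.0 else 1.0 / rate_phi

T_phi_ion = pure_dephasing_time(30.0, 1.0)          # seconds: ~1.017 s
T_phi_transmon = pure_dephasing_time(200.0, 150.0)  # us: ~240 us
print(T_phi_ion, T_phi_transmon)
```

Comparing the returned T_\varphi against T_1 answers the "what do I fix next?" question: T_\varphi \ll T_1 points at frequency stability, T_\varphi \gg 2T_1 points at energy relaxation.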
Common confusions
- "T_2 is always equal to T_1." No. The inequality is T_2 \leq 2 T_1, and neither extreme is universal. On transmons they are often close (both limited by similar materials physics). On trapped ions T_2 \ll T_1 (amplitude damping is negligible but phase noise is not). On photonic qubits both are effectively infinite.
- "T_2^* is just a bad measurement of T_2." No — they are different physical quantities. T_2^* is the real coherence time you see in a single unmitigated Ramsey experiment; T_2 is what remains after echo refocusing. The difference is a measure of how much of the decoherence comes from slow (refocusable) drift vs fast (unfocusable) noise. Both are reported on hardware datasheets for a reason.
- "Decoherence and error are the same thing." Decoherence is a cause of error, but not the only one. Gates also have control errors (the pulse was a little too long), calibration errors (the qubit frequency drifted between calibration and running), readout errors (the measurement misread the state), and crosstalk (applying a gate to qubit A disturbed qubit B). T_1 and T_2 set a lower bound on the gate error rate — you cannot do better than the ratio T_g / T_c — but real hardware usually has gate errors several times worse than this lower bound.
- "Longer T_2 is always better." For most purposes, yes. But the useful figure for a quantum algorithm is not T_2 alone — it is T_2 / T_g. A trapped ion with T_2 = 1 s and T_g = 100 μs has a gate budget of 10^4; a transmon with T_2 = 100 μs and T_g = 30 ns has a gate budget of \sim 3000. The ion has the longer coherence time but only \sim 3\times the gate budget — not the 10^4\times you might naively guess from just comparing T_2.
- "Echo eliminates T_2 decay." No. The Hahn echo refocuses slow (low-frequency) drifts; it does nothing to fast (high-frequency) noise. The T_2 measured with echo is still finite; it is the coherence time under the fast noise only.
- "T_1 of the excited state is the same as T_1 of the ground state." At finite temperature, T_1 has two rates: one for |1\rangle \to |0\rangle (spontaneous emission) and one for |0\rangle \to |1\rangle (thermal excitation). In the cold-bath limit (k_B T \ll \hbar \omega), the first dominates and T_1 is well-approximated by the emission time. On dilution-refrigerator transmons at 20 mK, the thermal occupation of the excited state is a few percent, giving a small but measurable |0\rangle \to |1\rangle rate.
Going deeper
If you have T_1, T_2, T_2^*, the inequality T_2 \leq 2T_1, and the three measurement protocols — you have the working vocabulary of decoherence. The sections below formalise the T_2 \leq 2T_1 proof, explain dynamical decoupling sequences beyond the Hahn echo, and connect to randomised benchmarking and quantum error correction.
Formal proof of T_2 \leq 2T_1
Consider the composite noise channel \mathcal E = \mathcal E_{\text{PD}} \circ \mathcal E_{\text{AD}} — amplitude damping followed by pure dephasing (the order does not matter for the Bloch action). Let the amplitude-damping parameter be \gamma_1(t) = 1 - e^{-t/T_1}, and define pure dephasing by its action on the density-matrix off-diagonal, \rho_{01}(t) = \rho_{01}(0)\,e^{-t/T_\varphi} — the convention used throughout this chapter.

The Bloch-vector action of amplitude damping multiplies the x- and y-components by \sqrt{1 - \gamma_1} = e^{-t/(2T_1)}. Pure dephasing multiplies the same components by e^{-t/T_\varphi} (since r_x \propto \rho_{01} + \rho_{10} and both off-diagonals share the same decay rate). Composing the two multiplies the factors:

e^{-t/(2T_1)}\, e^{-t/T_\varphi} = e^{-t\left(\frac{1}{2T_1} + \frac{1}{T_\varphi}\right)} \equiv e^{-t/T_2}.

Since T_\varphi^{-1} \geq 0, we get T_2^{-1} \geq (2T_1)^{-1}, i.e. T_2 \leq 2T_1. The proof is just bookkeeping on the two contributions, but the physical content — that amplitude damping alone sets the maximum phase coherence — is a theorem, not a convention.
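The bookkeeping can be checked numerically (times are illustrative): apply the two Bloch-plane factors to the off-diagonal of |+\rangle\langle+| in sequence and compare against a single decay at the combined rate 1/T_2 = 1/(2T_1) + 1/T_\varphi.

```python
import math

T1, T_phi, t = 200.0, 300.0, 50.0    # illustrative, same time units

rho01 = 0.5                                  # off-diagonal of |+><+|
gamma1 = 1.0 - math.exp(-t / T1)             # amplitude-damping parameter
rho01 *= math.sqrt(1.0 - gamma1)             # = exp(-t / (2*T1))
rho01 *= math.exp(-t / T_phi)                # pure dephasing

T2 = 1.0 / (1.0 / (2.0 * T1) + 1.0 / T_phi)  # combined rate
print(rho01, 0.5 * math.exp(-t / T2), T2 <= 2.0 * T1)
```

The two printed coherences agree to machine precision, and T_2 lands below 2T_1 for any non-negative dephasing rate.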
Dynamical decoupling beyond the Hahn echo
The Hahn echo is the one-pulse version of a general family of dynamical decoupling sequences. The idea: insert refocusing \pi-pulses at carefully chosen times during a long wait, so that the qubit accumulates its noise in a pattern that mostly cancels.
Carr-Purcell (CP) sequence. A uniform train of \pi-pulses: \pi/2 \to (\text{wait}, \pi, \text{wait})^N \to \pi/2. Refocuses low-frequency noise up to order N in the Taylor expansion of the noise spectrum. T_2 typically grows as N^{2/3} for 1/f noise.
Carr-Purcell-Meiboom-Gill (CPMG). Same structure as CP but with the \pi-pulses along the axis perpendicular to the initial state, which cancels systematic pulse errors. The standard workhorse for long-T_2 experiments.
XY-4, XY-8, XY-16. Sequences where successive \pi-pulses alternate between the X and Y axes (and further pattern rotations for XY8). These are robust to both pulse errors and finite-pulse-duration corrections.
UDD (Uhrig Dynamical Decoupling). The pulses are not uniformly spaced — they are placed at times t_k = T \sin^2(\pi k / (2N+2)). For a noise spectrum with a hard cutoff, UDD gives the best possible suppression at a given N.
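For concreteness, here are the pulse placements for CPMG and UDD over a total wait T — a direct transcription of the timing formulas above, nothing more:

```python
import math

def cpmg_times(T, N):
    """CPMG: N pi-pulses at the centres of N equal slices of [0, T]."""
    return [T * (2 * k - 1) / (2 * N) for k in range(1, N + 1)]

def udd_times(T, N):
    """Uhrig: t_k = T * sin^2(pi * k / (2N + 2)), k = 1..N."""
    return [T * math.sin(math.pi * k / (2 * N + 2)) ** 2
            for k in range(1, N + 1)]

print(cpmg_times(1.0, 4))   # [0.125, 0.375, 0.625, 0.875]
print(udd_times(1.0, 4))    # pulses bunch toward the ends of the window
# N = 1 reduces both sequences to the Hahn echo: one pulse at T/2.
print(cpmg_times(1.0, 1), udd_times(1.0, 1))
```

The uneven UDD spacing is the whole point: crowding pulses near the start and end reshapes the sequence's filter function to suppress a hard-cutoff noise spectrum to higher order than uniform spacing can.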
In practice, all modern transmon experiments use some form of dynamical decoupling when running deep circuits. IBM's Qiskit has a DynamicalDecoupling transpiler pass that automatically inserts XY8 or CPMG sequences on idle qubits during the wait times of a circuit. Typical improvement: effective T_2 is multiplied by a factor of 2–5.
The spin-echo connection to NMR
The Hahn echo was invented in 1950 by Erwin Hahn to measure nuclear-spin coherence times in solids. For the first forty years of its history, the echo was a pure-NMR technique — and many of the results now used in quantum computing originated in NMR spectroscopy. Anil Kumar's group at TIFR Mumbai in the 1990s and 2000s used echo and its cousins to measure coherence times on liquid-state nuclear-spin qubits of several seconds, running some of the earliest multi-qubit quantum-computing experiments in history. When superconducting-qubit experiments borrowed the Ramsey and echo protocols in the late 1990s, the NMR community had already refined them for half a century.
Platform-specific decoherence mechanisms
Superconducting transmons. T_1 is limited by dielectric loss in the chip substrate and the two Josephson junctions — microwave photons leak into two-level-system (TLS) defects at the oxide interface and into the substrate's bulk loss tangent. T_2 is limited by 1/f charge noise (from trapped charges on surfaces) and critical-current noise (from fluctuations in the junction barrier). Materials research — tantalum surfaces, airbridge fabrication, isotopically pure silicon substrates — routinely pushes T_1 from 100 μs to 300 μs and is expected to reach ms scale in the 2020s.
Trapped ions. T_1 on ground-state hyperfine qubits is astronomically long (spontaneous emission is forbidden). The limiting process is usually off-resonant photon scattering from the cooling laser. T_2 is limited by magnetic-field noise (hyperfine transitions shift with field), laser-phase noise, and motional heating during gates.
Neutral atoms. T_1 for optical qubits is limited by spontaneous emission of the Rydberg state (tens of μs for Rydberg) or by trap loss for ground-state qubits. T_2 is limited by Doppler dephasing (atoms move thermally during the experiment), laser-intensity noise, and magnetic-field gradients across the atom array.
Diamond NV centres. T_1 at room temperature is set by phonon emission from the electronic excited state — ms to seconds depending on temperature. T_2 is limited by the surrounding nuclear-spin bath (carbon-13 nuclei and any other magnetic defects), and dynamical decoupling routinely extends it by orders of magnitude.
Silicon spin qubits. T_1 is set by phonon-mediated relaxation and is typically seconds to hours. T_2 is limited by the nuclear-spin bath (silicon-29 and any phosphorus donors). Isotopically purified silicon-28 (removing silicon-29) has pushed T_2 into the tens of milliseconds.
Fault tolerance and the threshold theorem
All of the above is about physical coherence times. Quantum error correction translates physical coherence into logical coherence: many noisy physical qubits implement one logical qubit whose effective T_1^{\text{log}}, T_2^{\text{log}} scale exponentially in the code distance, provided the physical error rate is below a threshold. For the surface code on a transmon, the threshold is about p_{\text{phys}} < 0.01 per gate. Since p_{\text{phys}} \sim T_g / T_c, this translates to requiring T_c / T_g > 100 — which current transmons with T_c \approx 100 μs and T_g \approx 30 ns comfortably satisfy.
The real target is T_c / T_g > 10^4 or so, which would allow logical error rates low enough for the first useful fault-tolerant algorithms. That is the long-term engineering programme of every transmon group in the world — the T_1, T_2 numbers are the scoreboard.
Where this leads next
- Superconducting transmon — the hardware that most closely tracks the T_1, T_2 numbers. How to build an anharmonic oscillator out of Josephson junctions.
- Standard channels — the full catalogue of noise processes (amplitude damping, phase damping, depolarizing, bit-flip, phase-flip) of which T_1, T_2 are two specific named examples.
- Trapped ions — Paul traps — the hardware where T_2 hits seconds. Why hyperfine qubits last so long.
- Noise mitigation — the NISQ-era techniques (zero-noise extrapolation, probabilistic error cancellation) built on top of characterised T_1, T_2.
- Kraus representation — the general mathematical framework for describing any noise channel, including the combined T_1 + T_2 evolution.
References
- Wikipedia, Quantum decoherence — the physical picture of system-environment coupling that produces T_1 and T_2 decay.
- Nielsen and Chuang, Quantum Computation and Quantum Information, §8.3 (standard channels, amplitude damping, phase damping) — Cambridge University Press.
- John Preskill, Lecture Notes on Quantum Computation, Ch. 3 — master-equation derivation of T_1 and T_2, and the inequality T_2 \leq 2 T_1. theory.caltech.edu/~preskill/ph229.
- Krantz, Kjaergaard, Yan, Orlando, Gustavsson, Oliver, A quantum engineer's guide to superconducting qubits (2019) — arXiv:1904.06560. The definitive review of T_1, T_2 on transmons.
- Bruzewicz, Chiaverini, McConnell, Sage, Trapped-ion quantum computing: progress and challenges (2019) — arXiv:1904.04178. Trapped-ion coherence times and the mechanisms that limit them.
- IBM Quantum, Qubit Coherence Times — live per-qubit T_1, T_2, T_2^{\text{echo}} data on public-cloud transmons.