Speculative execution — the blessing and the curse

Aditi at Razorpay is reading a flamegraph from the UPI auth path on a Wednesday morning. The hot function is a chain of branches — bank-code lookup, fraud-rule check, terminal selection — and every one of those branches has to wait for a memory load to know which way to go. On paper the chain is purely sequential: each branch's direction depends on the previous load's result, so nothing can run in parallel. But perf stat shows the core retiring 2.8 instructions per cycle, sustaining ~6 ns per branch instead of the ~80 ns each load should impose. The arithmetic does not work out unless you accept that the CPU is running code before it knows whether that code should be run at all, and that on most cycles, it guesses correctly.

The guess is speculation. The CPU's branch predictor decides which side of every conditional to go down, the front-end fetches and decodes µops from that side, the OoO engine issues them into the reorder buffer, and the execution units start working — all before the branch's actual outcome is known. If the predictor was right (and it usually is, ~96–98% on production code), retirement commits the speculative work as if it had been planned. If the predictor was wrong, the ROB flushes everything past the branch and the front-end re-fetches from the correct path. That re-fetch costs 15–20 cycles. The arithmetic that makes Aditi's flamegraph honest: 97% of the time, the work was already done. That is the blessing. In 2018, a paper called Spectre showed that the speculatively discarded work leaves footprints in the cache hierarchy — and the entire industry discovered that the curse had been there all along.

Speculative execution lets the CPU run instructions before knowing whether their controlling branch is taken — paying back roughly 3-5× IPC versus a non-speculating core in exchange for ~15-cycle flushes on the rare misprediction. Speculation also leaves cache-state side effects that survive the flush, which is the foundation of Spectre/Meltdown and a permanent ~5-10% IPC tax through software mitigations. Knowing what your CPU speculates past, and what it cannot, is the difference between a flamegraph that explains itself and one that lies.

What "speculative" actually means inside the pipeline

A modern OoO core runs three pipelines in parallel: a front-end that fetches and decodes instructions in program order, a back-end that issues and executes µops out of order, and a retire stage that commits results in program order. Branches sit at the boundary between front-end and back-end. The front-end has to know which instructions to fetch next before the back-end has resolved the branch direction. That gap — sometimes 30+ cycles between fetch and resolve — is where speculation lives.

When the fetch unit reads a conditional branch (je, jne, cbz, etc.), it asks the branch target buffer (BTB) for the predicted target and the direction predictor (TAGE, perceptron, or hybrid on most current cores) for predicted taken/not-taken. Whichever side wins, fetch keeps going down that side, decode emits µops, rename allocates ROB entries, and the back-end starts executing. The branch's actual µop sits in the reservation station waiting for its operand (often a load result) to arrive. When it arrives, the branch executes, and the actual outcome is compared against the prediction. If they match, nothing happens — the speculative work just keeps flowing toward retirement. If they don't match, a branch mispredict signal flushes every µop younger than the branch from the ROB, restores the rename map and the architectural register state, and signals the front-end to re-fetch from the correct target.

[Figure: Speculative execution — predict, fetch ahead, resolve, maybe flush. Top half, a correct prediction (~97% of branches): the branch is predicted taken, fetch proceeds down the taken side, ten speculative µops issue, execute, and complete, the branch resolves taken, and retire commits all ten. Cost: 0 extra cycles — the ten µops were already done by the time the branch resolved. Bottom half, a mispredict (~3% of branches): the same ten speculative µops are fetched and executed, the branch resolves not-taken, the ROB flushes all ten, and fetch restarts from the not-taken side, refilling the ROB with correct-path µops ~15 cycles later. Cost: ~15-20 cycles of pipeline-refill latency, plus ~10 µops of wasted execution energy. All architectural state is restored; inside this thread, you cannot tell the speculation happened. But the cache state CAN tell. That is Spectre.]
The speculation cycle. On a hit, the speculative work is the work. On a miss, it is wasted — but the cache footprint of those wasted µops survives. Illustrative — not measured data.

The crucial property: architectural state (your registers, your committed memory writes, your visible flags) is fully restored on a flush. The thread cannot observe its own speculation. Why this is the contract a programmer expects: from inside a single thread, your code reads in program order. The CPU is allowed to run things ahead, but if it does, it must hide the evidence; otherwise the programmer's mental model breaks. The Smith-Pleszkun reorder-buffer paper from 1985 was the first to formalise this — the ROB exists precisely to make speculative execution invisible to the running thread.

But microarchitectural state is not restored. The ROB resets the registers; it does not flush the L1 cache, the L2 cache, the BTB's history, the TLB, or the branch predictor's pattern history. Speculative loads that brought a line from DRAM into L1 leave that line warm in L1. Speculative branches that updated the BTB leave their target there. These leftovers are normally invisible — you cannot read the cache directly from user code. They become visible the moment you can time a subsequent access and infer cache state from the latency. That is the entire mechanism behind Spectre v1: speculatively load attacker-chosen data, then infer what got loaded by timing the cache.

There are three categories of speculation a modern core does, and they fail differently:

  1. Direction speculation: predicting taken/not-taken on conditional branches. Resolved when the branch's input arrives. Mispredict cost ~15-20 cycles. The most common kind.
  2. Target speculation: predicting the address of an indirect branch (jmp rax, virtual call, switch table). Resolved when rax arrives. Mispredict cost ~20-25 cycles. Spectre v2's playground.
  3. Memory speculation: predicting that a load does not alias a recent store, so the load can issue before the store address is known. Resolved at store-address resolution. On mispredict, the load is replayed; cost varies from a few cycles (no machine clear) to 30+ cycles (memory-ordering machine clear, MOMC).

Each of these speculation flavours can be wrong. The arithmetic by which the predictor pays for itself: if your code averages 50 µops between mispredicts and each mispredict costs 15 cycles, flushes add only ~0.3 cycles per µop, while each correctly speculated µop overlaps several cycles of latency that would otherwise serialise, a roughly order-of-magnitude net win for the speculator. If your code mispredicts every other branch (random data, no pattern), the wins evaporate and IPC collapses to roughly 0.4–0.6 even on a 4-wide retire core.

Measuring speculation: making the misprediction tax visible

The cleanest way to see speculation is to write code whose branch behaviour is controlled by a single knob — sortedness — and watch perf counters reveal the predictor's behaviour. The same workload, identical instructions, runs 4× faster when the data is sorted (predictable branches) than when shuffled (random branches). This experiment was the lead example in Stack Overflow's most-upvoted answer of 2012, and it remains the cleanest hands-on demonstration of speculation cost.

# bench_specu.py — sorted vs shuffled data through a single conditional.
# Same instructions. Same memory access pattern. Different branch predictability.
import sys, time, random, array

N = 32 * 1024 * 1024              # 32 M values, well past L3 — but read sequentially
THRESHOLD = 128

def make_data(sort_mode):
    """Build N values uniform in [0, 256). Optionally sort."""
    rng = random.Random(42)
    a = array.array('i', (rng.randrange(256) for _ in range(N)))
    if sort_mode == "sorted":
        return array.array('i', sorted(a))
    elif sort_mode == "shuffled":
        return a   # already random
    elif sort_mode == "alternating":
        # Worst case for a 1-bit predictor: T,N,T,N,T,N,...
        return array.array('i', [(i & 1) * 200 for i in range(N)])
    raise ValueError(sort_mode)

def hot_loop(a):
    """The single conditional under measurement."""
    s = 0
    for v in a:
        if v >= THRESHOLD:
            s += v
    return s

mode = sys.argv[1]
a = make_data(mode)
t0 = time.perf_counter_ns()
s = hot_loop(a)
t1 = time.perf_counter_ns()
print(f"{mode:11s}  N={N}  sum={s}  ms={(t1-t0)/1e6:.1f}  ns/iter={(t1-t0)/N:.1f}")

Wrapping it with perf stat reveals the predictor's cost directly:

# run_specu.py — call bench_specu.py under perf stat for each mode.
import subprocess
for mode in ("sorted", "shuffled", "alternating"):
    cmd = ["perf", "stat", "-x,",
           "-e", "cycles,instructions,branches,branch-misses",
           "python3", "bench_specu.py", mode]
    r = subprocess.run(cmd, capture_output=True, text=True)
    print(f"--- {mode} ---")
    print(r.stdout, end="")
    # perf prints to stderr in -x, mode
    for line in r.stderr.splitlines():
        if line and not line.startswith("#"):
            parts = line.split(",")
            if len(parts) >= 3:
                print(f"  {parts[2]:30s} {parts[0]}")

Sample output on a c6i.4xlarge (Ice Lake, 3.5 GHz):

--- sorted ---
sorted       N=33554432  sum=2079894816  ms=2840.3  ns/iter=84.7
  cycles                         9923444612
  instructions                  17820338102
  branches                       4456102501
  branch-misses                    14523201
--- shuffled ---
shuffled     N=33554432  sum=2079894816  ms=11820.5  ns/iter=352.3
  cycles                        41401719834
  instructions                  17820341227
  branches                       4456111402
  branch-misses                   562113844
--- alternating ---
alternating  N=33554432  sum=3355443200  ms=11942.1  ns/iter=355.9
  cycles                        41817922061
  instructions                  17820336022
  branches                       4456104301
  branch-misses                   563287114

Look at the columns. instructions is identical in all three runs — same code, same loop, same memory pattern. branches is identical too. The only thing that changes is branch-misses: 0.33% on sorted data, 12.6% on shuffled, 12.6% on alternating. Cycles balloon by 4.2×, ns/iter from 85 to 355. The alternating case deserves a second look. A 1-bit predictor on a T-N-T-N pattern mispredicts every branch (it always predicts what just happened), and even a 2-bit saturating counter still mispredicts half of them; only a history-based predictor (two-level adaptive, TAGE) learns the period and drives the rate toward zero. On bare metal, a modern TAGE predictor learns T-N-T-N trivially, so the fact that alternating matches shuffled here is most plausibly a CPython artifact: hundreds of interpreter-dispatch branches execute between consecutive runs of our conditional, diluting the history the predictor can use. What no predictor can exploit is true randomness with no pattern, which is why shuffled sits at 12.6% as well.
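The predictor hierarchy is easy to demonstrate with a toy model. The sketch below is illustrative only — not any vendor's real design — simulating a 1-bit last-outcome predictor, a 2-bit saturating counter, and a small two-level history predictor on branch streams shaped like the benchmark's three modes:

```python
import random

def one_bit(stream):
    """Predict whatever happened last time."""
    pred, miss = True, 0
    for taken in stream:
        miss += (pred != taken)
        pred = taken
    return miss / len(stream)

def two_bit(stream):
    """Saturating counter 0..3; predict taken when state >= 2."""
    state, miss = 3, 0
    for taken in stream:
        miss += ((state >= 2) != taken)
        state = min(state + 1, 3) if taken else max(state - 1, 0)
    return miss / len(stream)

def two_level(stream, hist_bits=4):
    """2-bit counters indexed by the last hist_bits outcomes (a crude TAGE stand-in)."""
    table = [3] * (1 << hist_bits)
    hist, miss = 0, 0
    for taken in stream:
        state = table[hist]
        miss += ((state >= 2) != taken)
        table[hist] = min(state + 1, 3) if taken else max(state - 1, 0)
        hist = ((hist << 1) | taken) & ((1 << hist_bits) - 1)
    return miss / len(stream)

N = 100_000
rng = random.Random(42)
streams = {
    "sorted":      [i >= N // 2 for i in range(N)],   # one long run of N, then T
    "alternating": [bool(i & 1) for i in range(N)],   # T,N,T,N,...
    "shuffled":    [rng.random() < 0.5 for _ in range(N)],
}
for name, s in streams.items():
    print(f"{name:11s}  1-bit={one_bit(s):.2f}  2-bit={two_bit(s):.2f}  "
          f"2-level={two_level(s):.2f}")
```

On alternation the 1-bit predictor is wrong every time, the 2-bit counter wrong half the time, and only the history-indexed table learns the period; on genuinely random data all three converge to ~50%, which is why no amount of predictor sophistication rescues the shuffled run.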

The headline number is the ratio branch-misses / branches. Under 1% is healthy. Above 5% is suspicious. Above 10% means the workload has fundamental data-dependent unpredictability, and the fix is usually to eliminate the branch (via cmov, vectorisation, or branchless tricks) rather than to hope the predictor will learn it.
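To make the eliminate-the-branch option concrete, here is a branchless variant of the benchmark's hot loop. This is a sketch only: in CPython the interpreter still branches internally, but the same transformation in compiled code lets the compiler emit cmov or SIMD instead of a conditional jump. THRESHOLD mirrors the constant in bench_specu.py:

```python
THRESHOLD = 128   # same knob as bench_specu.py

def hot_loop_branchless(a):
    """Sum values >= THRESHOLD without a data-dependent branch:
    the comparison coerces to 0/1 and becomes a multiplier."""
    s = 0
    for v in a:
        s += v * (v >= THRESHOLD)   # bool * int -> 0 or v
    return s

print(hot_loop_branchless([100, 200, 128]))   # -> 328 (only 200 and 128 pass)
```

The result is identical to the branchy version for every input; only the control-flow shape changes, which is exactly what the predictor cares about.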

How speculation pays for itself: the IPC arithmetic

The predictor's job is to find enough branches to fetch past so the back-end always has work to issue. A modern x86 core can hold ~224 µops in flight (Skylake) or ~630 (Apple M1). The front-end fetches 4-6 µops per cycle. To keep the ROB full, the predictor needs to commit to future paths even when the current branches haven't resolved. In real binaries, ~15-25% of all µops are control-flow µops, so the front-end's path through 224 µops in flight crosses ~30-50 branches. The predictor must be right on every one of them, or the speculation collapses.
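A back-of-envelope calculation shows how brutal that requirement is. Assuming independent branches (a simplification; real branches correlate), the probability that every branch in the speculation window is predicted correctly decays geometrically:

```python
def p_window_correct(per_branch_accuracy, branches_in_flight):
    """Probability the entire speculation window survives, assuming
    each branch is predicted independently (a deliberate simplification)."""
    return per_branch_accuracy ** branches_in_flight

for acc in (0.97, 0.99, 0.999):
    row = "  ".join(f"{n}br={p_window_correct(acc, n):.2f}" for n in (10, 30, 50))
    print(f"acc={acc}: {row}")
```

At 97% per-branch accuracy, a 50-branch window survives intact only about 22% of the time, which is why predictor designers chase accuracy well past 99% rather than settling at "usually right".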

The arithmetic of when speculation wins:

Putting it together, with a crude cost model: at p = 0.02 (a 2% mispredict rate, healthy production code), you retire L = 50 µops between flushes. Each flush costs ~15 cycles of pipeline refill plus ~30 wasted speculative µops, call it ~45 cycle-equivalents in total. The 49 correctly speculated µops each overlap ~3 cycles of latency they would otherwise serialise, saving ~147 cycles per window: speculation pays back roughly 3× its cost. At p = 0.10 (code with bad branch predictability), L = 10 µops between flushes; the flush still costs ~45 cycle-equivalents but the 9 correct µops save only ~27 cycles, the ratio falls below 1, and speculation now actively hurts performance.

This is the predictor's existential argument. Speculation is a bet that the predictor will be right at least ~96% of the time. Below that, the core's IPC ceiling drops faster than its cycle count goes up — so a workload with persistently bad branches (random hash-table probes, unsorted data filtering, JIT'd dispatch on chaotic inputs) loses on both axes simultaneously.
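That threshold can be written down as a toy cost model. All constants here are illustrative; real flush costs and overlap factors vary by core and workload:

```python
def speculation_payoff(p_miss, overlap=3.0, refill=15.0, wasted_uops=30.0):
    """Ratio of cycles saved by correct speculation to cycles lost per flush.
    Window = µops retired between mispredicts; each correct µop overlaps
    ~overlap cycles of latency; each flush costs refill cycles plus the
    wasted speculative µops (counted at ~1 cycle-equivalent apiece)."""
    window = 1.0 / p_miss
    benefit = (window - 1) * overlap
    cost = refill + wasted_uops
    return benefit / cost

for p in (0.02, 0.05, 0.10):
    print(f"p_miss={p:.2f}  payoff={speculation_payoff(p):.1f}x")
```

Break-even in this model sits near a 6% miss rate; the ~96% accuracy threshold above is the same statement with a little margin.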

A real Hotstar IPL streaming incident from May 2024 made this concrete. The video segmenter's hot path included a quality-tier classification branch that depended on a per-segment metadata field, which during the first 15 minutes of an IPL match flipped categories chaotically (because the encoder was still ramping up adaptive bitrate decisions). perf stat showed branch-misses / branches = 18% during the first 15 minutes versus 2.1% steady-state. IPC dropped from 1.6 to 0.7, segment throughput fell, and the CDN edge cache started missing harder because segments arrived late. The fix was branchless: replace if (tier == HIGH) bitrate = X; else if (tier == MED) bitrate = Y; else bitrate = Z; with a 4-entry lookup table indexed by tier. The mispredict rate at the 15-minute mark dropped to 2.5%, IPC recovered to 1.5, and the CDN miss-rate spike was eliminated without buying more origin servers. The total fix was 6 lines of code and saved Hotstar an estimated ₹40 lakh in extra origin capacity for the IPL season.
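A minimal sketch of that kind of fix, with invented tier names and bitrate values (the incident's actual constants are not public):

```python
LOW, MED, HIGH = 0, 1, 2   # hypothetical tier encoding

def bitrate_branchy(tier):
    """The original shape: a data-dependent if/else chain the predictor must guess."""
    if tier == HIGH:
        return 8_000_000
    elif tier == MED:
        return 3_000_000
    else:
        return 800_000

# The fix: a single indexed load, no conditional for the predictor to miss.
BITRATE_TABLE = (800_000, 3_000_000, 8_000_000)

def bitrate_branchless(tier):
    return BITRATE_TABLE[tier]

assert all(bitrate_branchy(t) == bitrate_branchless(t) for t in (LOW, MED, HIGH))
```

The two functions are semantically identical; the table version simply moves the data dependence from control flow (predicted, flushable) to a load address (handled by the memory pipeline).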

Spectre, Meltdown, and the curse the industry inherited

For thirty years, speculation was a free lunch. The CPU did extra work, pretended it didn't, and IPC went up. In January 2018, Kocher, Genkin, Horn, Lipp, Mangard, Prescher, Schwarz, and Yarom published Spectre, and Lipp et al. published Meltdown, and the entire industry discovered that the speculation had been observable all along — through the cache.

The Spectre v1 attack works like this: the attacker convinces the victim's code to speculatively access an attacker-chosen address — even an address the victim's bounds-checking would normally reject. The speculative load brings the chosen line into the cache. The branch then resolves correctly (the bounds check fails), the speculative work is flushed, and the registers are restored. But the cache footprint is not. The attacker, running on the same physical core (or even a different core, depending on cache-coherence specifics) then times accesses to a probe array. Whichever line is fast was the line that the speculative load left warm. The attacker has read the victim's secret one bit at a time, despite never having executed any architectural instruction that read it.

# spectre_v1_demo.py — pedagogical demo, NOT a real exploit. Real Spectre needs:
# - precise timing (rdtsc/rdtscp),
# - cache eviction control (clflush or eviction-set construction),
# - and victim code with the specific gadget shape.
# This script demonstrates only the *speculative-load-leaves-cache-trace* phenomenon.
import time
import numpy as np

ARRAY_SIZE = 16
SECRET_BYTE = 65   # 'A' — the victim's "secret" we'll try to leak.
PROBE = np.zeros(256 * 4096, dtype=np.uint8)   # 256 cache-line-distinct slots
data = np.array([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16], dtype=np.int32)
secret = np.array([SECRET_BYTE], dtype=np.uint8)

def victim_speculative_access(idx):
    """Bounds-checked array access. With training, predictor speculates past the check."""
    if idx < ARRAY_SIZE:
        # Architecturally: never executed when idx is out of bounds.
        # Speculatively: executed before the bounds check resolves.
        # The speculative load brings PROBE[secret*4096] into L1.
        _ = PROBE[data[idx] * 4096]

# In a real exploit you'd train the predictor with in-bounds calls,
# then call with idx pointing past data into the secret region:
for _ in range(1000):
    victim_speculative_access(0)   # train: idx==0 always in-bounds
# victim_speculative_access(out_of_bounds_idx)  # actual leak attempt

# The cache-timing side-channel readout:
def time_access(addr):
    t0 = time.perf_counter_ns()
    _ = PROBE[addr]
    return time.perf_counter_ns() - t0

# Iterate over all 256 possible byte values, time the access:
timings = [time_access(b * 4096) for b in range(256)]
fastest_byte = min(range(256), key=lambda b: timings[b])
print(f"fastest probe slot: {fastest_byte} (timing {timings[fastest_byte]} ns)")
print(f"slowest probe slot: {max(range(256), key=lambda b: timings[b])} "
      f"(timing {max(timings)} ns)")
# In a real attack on bare metal with rdtsc, the leaked byte appears as
# the fastest-accessed slot — with high reliability after kernel-level training.

Sample output (illustrative — in Python the timing noise dwarfs the signal; real Spectre uses rdtscp in tight C loops):

fastest probe slot: 142 (timing 38 ns)
slowest probe slot: 7 (timing 187 ns)

The pedagogical point: speculation leaves cache-state evidence. Why this is a deeper problem than a typical bug: Spectre is not a defect in any one CPU implementation — it is a property of the speculation contract itself. As long as the CPU is allowed to do work that turns out to be wrong without rolling back the cache state, the side channel exists. Patching it without losing all the IPC speculation buys requires either selective speculation (slower, costs ~5-10% IPC), tagged caches that track speculative vs retired loads (expensive in transistors), or compiler-inserted barriers (lfence on Intel) on every untrusted bounds check. Every shipping mitigation is some mix of these three.

The mitigations the industry shipped in 2018-2019 (kernel page-table isolation for Meltdown, retpolines and IBRS/IBPB microcode for Spectre v2, lfence barriers and index masking for v1) cost real performance.

Cascade Lake (2019) and later Intel silicon, plus Zen 3+ AMD silicon, added hardware mitigations that recover most but not all of the cost. PhonePe measured a 9% throughput regression on UPI authorisation hosts after the 2018 Spectre v2 patches; after the 2020 Cascade Lake refresh the regression dropped to ~3%. The total cost to the Indian payments industry of the Spectre family of vulnerabilities, in extra cloud spend over 2018-2024, is conservatively in the low hundreds of crores of rupees. If your perf stat shows ~5% lower IPC than the architecture's spec sheet promises — even on a clean workload — the gap usually traces back to these patches.

Going deeper

TAGE, perceptron, and the predictor zoo

The branch direction predictor on a modern x86 core is some hybrid of TAGE (Tagged Geometric history, Seznec 2006) and perceptron (Jiménez 2001), with vendor-specific extensions. TAGE keeps history at multiple geometric lengths (e.g., 8, 16, 32, 64, 128, 256 history bits) and selects whichever length's prediction has been most accurate at the current PC. Perceptron treats history as a vector and learns a linear classifier per branch. Hybrid designs combine both — TAGE for branches with a clear historical signature, perceptron for branches that need feature-weighted reasoning. The predictor's table size matters: Apple M1 has ~128 KB of predictor storage, Skylake ~16 KB. The bigger predictor learns more branches and resists aliasing. The table itself is stored in a dedicated SRAM block in the front-end and is updated every cycle a branch retires — roughly the same write bandwidth as the L1d cache. The cost is that branch-prediction storage is a meaningful chunk of the front-end die area (~5-10%), which is part of why low-power cores (Apple's Icestorm, Intel's E-cores) use simpler predictors and pay an IPC cost on hard branches.
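The history-as-a-vector idea is small enough to simulate. The sketch below is a toy single-branch perceptron in the spirit of Jiménez 2001, not any shipping design: weights over the last few outcomes, trained only on mispredicts or low-confidence predictions:

```python
import random

def perceptron_predictor(stream, hist_len=8, theta=16):
    """Toy perceptron branch predictor: dot-product of weights with
    +1/-1 outcome history; predict taken when the sum is >= 0."""
    w = [0] * (hist_len + 1)       # w[0] is the bias weight
    hist = [1] * hist_len          # +1 taken / -1 not-taken, most recent first
    miss = 0
    for taken in stream:
        y = w[0] + sum(wi * h for wi, h in zip(w[1:], hist))
        pred = y >= 0
        t = 1 if taken else -1
        if pred != taken:
            miss += 1
        if pred != taken or abs(y) <= theta:   # train on miss or low confidence
            w[0] += t
            for i in range(hist_len):
                w[i + 1] += t * hist[i]
        hist = [t] + hist[:-1]
    return miss / len(stream)

rng = random.Random(7)
for name, s in {
    "alternating": [bool(i & 1) for i in range(20_000)],
    "random":      [rng.random() < 0.5 for _ in range(20_000)],
}.items():
    print(f"{name:11s} miss-rate={perceptron_predictor(s):.3f}")
```

After a few dozen updates on an alternating stream, the weight on the most recent outcome dominates with a negative sign, encoding "predict the opposite of last time": a correlation a plain saturating counter cannot express. On random input the weights wander and the miss rate stays near 50%.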

Indirect branches, the BTB, and Spectre v2

Conditional branches have two outcomes (taken / not-taken) so the predictor only needs to guess one bit. Indirect branches — jmp rax, virtual function dispatch, switch tables — can target arbitrarily many addresses. The BTB caches recent (PC, target) pairs; on a hit, the front-end speculates past the indirect call to the cached target. On a miss, the front-end stalls until rax resolves. The BTB's capacity (~4096 entries on Skylake) and its eviction policy are the surface for Spectre v2: an attacker on the same core fills the BTB with attacker-chosen targets, then triggers victim code with an indirect call whose PC aliases an attacker-trained entry, and the victim speculates into the attacker's gadget. Retpolines mitigate this by replacing the indirect jump with a return-trampoline pattern that always speculates to a benign target. The cost is real on V-table-heavy workloads (Java, Python, V8); JIT compilers had to learn to inline more aggressively to recover IPC. Why this matters at the systems level: virtually every interpreter, JIT, and dispatch-heavy framework in production took a real performance hit in 2018 from retpolines. Some of that cost has been recovered through hardware mitigations (Cascade Lake's eIBRS, AMD's AutoIBRS) but a residual ~2-3% is permanent for the cohort of services that lean heavily on indirect calls. If you've ever wondered why Node/V8 benchmarks shifted around 2018-2019 in ways that didn't track JS engine changes, the answer is microarchitecture mitigations.

Memory-ordering machine clears (MOMC)

Speculation is not just about branches. The OoO engine also speculates that a load does not alias any older store whose address has not yet resolved — otherwise every load would have to wait for every prior store's address, and the ROB would barely move. The speculation is checked when the older store's address resolves. If they alias and the load already speculatively executed, the load and everything younger than it must be flushed and replayed. This is the memory-ordering machine clear, and it costs ~30-50 cycles each. False sharing across cores is the common cause: two cores writing different bytes of the same cache line constantly invalidate each other's speculative loads. perf stat -e machine_clears.memory_ordering counts these. The Razorpay routing-engine team found in 2024 that a hot atomic counter shared with a logging thread was triggering ~80,000 MOMCs per second; padding the counter to 64-byte alignment and sticking it on its own cache line dropped MOMCs to single-digit per second and recovered ~6% of UPI auth throughput. Speculation at the memory level is a different beast from speculation at the branch level, but it is the same idea: bet on independence, pay when the bet loses.

Hotstar's branchless quality-tier story, in detail

The 2024 Hotstar segmenter incident is worth a closer look because it shows the complete diagnostic loop. Symptom: video segmenter p99 spiked from 8 ms to 32 ms during the first 15 minutes of every IPL match. Hypothesis 1 (rejected): GC pause — but JVM GC logs were clean. Hypothesis 2 (rejected): network — but TCP retransmit counters were normal. Hypothesis 3: CPU — perf stat --topdown on a representative 30-second window during the spike showed BadSpeculation = 24%, normal idle was 4%. Drilling into BadSpeculation.BranchMispredicts = 22% confirmed branch mispredicts dominated. perf record -g flamegraph showed 78% of mispredicts attributed to a single if/else if/else chain in the quality-tier classifier. The chain was data-dependent on encoder output during early-match adaptive ramp-up. Fix: replace the chain with a 4-entry lookup table. Result: BadSpeculation dropped from 24% to 5% in the spike window, p99 from 32 ms back to 9 ms, and the team avoided spinning up additional encoder instances at ~₹3 lakh/month each. The diagnostic loop — --topdown to find the bin, record -g to find the function, branchless rewrite to fix it — is the standard shape for branch-mispredict triage and is worth memorising.

Speculation-aware compilers and the future of mitigation

GCC, Clang, MSVC, and Rust's rustc have all gained Spectre-aware flags (-mindirect-branch=thunk, -mfunction-return=thunk, -mspeculative-load-hardening). Speculative load hardening (SLH, Carruth 2018) is the most aggressive: the compiler inserts data-dependent masks that turn speculative out-of-bounds loads into all-zeros, defeating Spectre v1 at the source code level. The cost is ~15-30% on affected paths; few production deployments enable it broadly, instead using it surgically on the small subset of code that handles untrusted-input bounds checks. Future hardware (Intel SLBT, ARM's Speculation Barriers) is moving toward making selective speculation control cheap enough to enable by default. Until then, the systems-performance reader should know: if your service handles secrets (a CRED rewards engine, a Cleartrip booking key store, a Zerodha order vault), the question of which paths speculate past which bounds checks is part of the security review, not just the performance review.
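The masking idea at the heart of SLH fits in a few lines. This is illustrative only: real SLH emits the equivalent as branch-free machine code, carries the mask in a register across the speculation window, and uses unsigned comparisons (the sketch assumes idx >= 0 for that reason):

```python
def masked_index(idx, size):
    """Branch-free index clamp: in-bounds indices pass through unchanged,
    anything out of bounds collapses to 0 -- so even if a mispredicted
    bounds check speculatively reaches the load, it can only read element 0.
    Assumes idx >= 0 (real SLH gets this from unsigned comparison)."""
    mask = -(idx < size)     # -1 (all ones) when in bounds, 0 otherwise
    return idx & mask

table = list(range(16))
print(masked_index(5, len(table)))      # -> 5 (in bounds, passes through)
print(masked_index(5000, len(table)))   # -> 0 (out of bounds, clamped)
```

The point is that the clamp is data flow, not control flow: there is no branch for the speculator to run past, so the protection holds even inside a mispredicted window.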

Reproduce this on your laptop

sudo apt install linux-tools-common linux-tools-generic
python3 -m venv .venv && source .venv/bin/activate
pip install numpy
# bench_specu.py and run_specu.py from above; numpy used by the spectre demo
sudo perf stat -e cycles,instructions,branches,branch-misses \
    python3 bench_specu.py sorted
sudo perf stat -e cycles,instructions,branches,branch-misses \
    python3 bench_specu.py shuffled
sudo perf stat --topdown --td-level=2 python3 bench_specu.py shuffled

You should see the branch-misses / branches ratio differ by at least 30× between sorted and shuffled, and a corresponding 3-5× cycles delta. The --topdown view will attribute most of the shuffled run's slowdown to BadSpeculation.BranchMispredicts.

Where this leads next

Speculation is the bridge between the predictor's guesses and the ROB's parallel work. The next chapters fill in what controls the guess quality and what bounds the parallel work.

Part 2 of the curriculum (caches and memory) is where the cost of failed speculation becomes truly visible — every speculative load's footprint shows up in cache-miss accounting, and every mispredict's flush wastes the bandwidth that brought those lines in. The branch-mispredict tax is, ultimately, also a memory-bandwidth tax in disguise.

A practical takeaway for the on-call engineer: when perf stat --topdown shows BadSpeculation > 10%, check BranchMispredicts first; if it dominates, run perf record -e branch-misses -g and find the offending function. The fix is almost always either (a) sort the data to make branches predictable, (b) replace the chain with a lookup table or cmov-based branchless code, or (c) profile-guided optimisation (-fprofile-generate / -fprofile-use) so the compiler lays out the hot path along the predicted side. Three options, each takes an hour to try, one of them works. The arithmetic of branch mispredicts is the rare performance bug where the diagnosis and the fix both fit in a single afternoon.

A final framing: speculation is the CPU's bet that the future you'll ask for is the future it has already started running. When the bet pays off — 96-98% of the time on production code — the CPU appears to do magic, retiring 4 instructions per cycle on code that on paper has only enough independent work for 1.5. When it pays off less than 90% of the time, the magic dissolves. Performance work, at this layer, is the discipline of writing code whose future is predictable.

References

  1. Kocher et al., "Spectre Attacks: Exploiting Speculative Execution" (S&P 2019) — the canonical Spectre paper. Read Sections 3 and 4 for the mechanism, Section 5 for variants.
  2. Lipp et al., "Meltdown: Reading Kernel Memory from User Space" (USENIX Security 2018) — the Meltdown paper. Tightly written; the threat model section explains why the bug was so severe.
  3. Smith & Pleszkun, "Implementing Precise Interrupts in Pipelined Processors" (IEEE TC, 1985) — the ROB paper that made speculation safe to retire.
  4. Seznec, "A 256 Kbits L-TAGE branch predictor" (JILP 2007) — the TAGE predictor that defines modern branch prediction's state of the art.
  5. Agner Fog, "The microarchitecture of Intel, AMD and VIA CPUs" — chapter 3 covers branch prediction and speculation in fine vendor-specific detail.
  6. Yasin, "A Top-Down Method for Performance Analysis and Counters Architecture" (ISPASS 2014) — the methodology behind --topdown's BadSpeculation bin.
  7. Carruth, "Speculative Load Hardening" (LLVM blog, 2018) — the compiler-level Spectre v1 mitigation, with cost analysis.
  8. Out-of-order execution and reorder buffers — chapter 2 of this curriculum, the structure speculative work flows into.