In short
The previous chapter (one global lock, correct but useless) serialised transactions by letting exactly one run at a time. Correct — and hopeless on a multi-core machine. The obvious fix is per-row locks: two transactions touching different rows should not block each other. But a naive "lock-what-you-need, release-when-done" discipline breaks serialisability almost immediately — you can construct a schedule where two transactions see each other's intermediate state. Two-phase locking (2PL) is the protocol that makes per-object locking safe. The rule, due to Eswaran, Gray, Lorie, and Traiger in 1976: every transaction has a growing phase during which it may acquire locks but may not release any, followed by a shrinking phase during which it may release locks but may not acquire any new ones. Once you have released your first lock, you are in the shrinking phase forever — one way door. That "acquire-acquire-acquire-release-release-release" discipline is enough to guarantee conflict-serialisability. Three flavours form a ladder:
- Basic 2PL — release locks as soon as you no longer need them, even before COMMIT. Serialisable, but not recoverable: one transaction can read another's uncommitted data and commit first.
- Strict 2PL (S2PL) — hold all write locks until commit or abort. Eliminates cascading aborts.
- Rigorous 2PL — hold both read and write locks until commit. Simplest to reason about; the commit point is the serialisation point. What Postgres, MySQL InnoDB, and Oracle actually use.
Each step up the ladder holds locks longer — more blocking, less throughput — in exchange for a stronger safety property. 2PL's unavoidable cost is deadlocks: two transactions each wait on a lock the other holds, forever. Deadlock handling is chapter 55.
Per-row locks are not enough on their own
You have just built per-row locking. Instead of one global mutex, the lock manager hands out a separate lock per (table, row_id) pair. Transaction T1 touching alice and bob no longer blocks T2 touching carol and dave. Throughput should scale with the number of cores. This is the whole reason you abandoned the global lock from chapter 52.
Now try to use it. Here are two transactions on a bank ledger:
T1: READ(A) WRITE(A) UNLOCK(A) WRITE(B) UNLOCK(B)
T2: READ(A) UNLOCK(A) READ(B) UNLOCK(B)
T1 debits A, then later credits B. T2 is an audit: it reads A, then reads B. Each transaction "locks when it touches a row, unlocks when it is done with that row". Reasonable, right? Watch what happens when they interleave.
T1 writes A and releases. T2 reads the new value of A and releases, then reads B — the old value, because T1 has not written it yet. T1 then writes B. The observable order of events is: T1 wrote A, T2 read A, T2 read B, T1 wrote B. There is no serial schedule — neither "all of T1, then all of T2" nor "all of T2, then all of T1" — that produces this behaviour. In the first ordering, T2 would have read T1's new B; in the second, T2 would not have seen T1's new A. The schedule is not conflict-serialisable.
The bug is the mixing of acquiring and releasing: after releasing A, each transaction went back and grabbed another lock (T1 on B to write, T2 on B to read). You need a rule that forbids exactly that.
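You can check conflict-serialisability mechanically. Here is a sketch (the helper names are mine, not part of the build): it constructs the precedence graph — an edge Ti → Tj whenever an operation of Ti conflicts with a later operation of Tj — for an interleaving in which T1 writes A, T2 reads A and then B, and T1 finally writes B:

```python
# Each operation: (transaction, action, object), in interleaved order.
schedule = [
    ("T1", "W", "A"),
    ("T2", "R", "A"),
    ("T2", "R", "B"),
    ("T1", "W", "B"),
]

def precedence_edges(schedule):
    """Edge Ti -> Tj whenever an op of Ti conflicts with a LATER op of Tj.
    Two ops conflict if they touch the same object and at least one writes."""
    edges = set()
    for i, (ti, ai, xi) in enumerate(schedule):
        for tj, aj, xj in schedule[i + 1:]:
            if xi == xj and ti != tj and "W" in (ai, aj):
                edges.add((ti, tj))
    return edges

def has_cycle(edges):
    """DFS check: can any node reach itself through the edge set?"""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)

    def reaches(src, target, seen):
        for nxt in graph.get(src, ()):
            if nxt == target:
                return True
            if nxt not in seen:
                seen.add(nxt)
                if reaches(nxt, target, seen):
                    return True
        return False

    return any(reaches(node, node, set()) for node in graph)

edges = precedence_edges(schedule)   # {("T1","T2"), ("T2","T1")} — both directions
```

The graph has an edge each way — T1 → T2 on A, T2 → T1 on B — which is the cycle that makes the schedule non-serialisable.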
The two-phase rule, precisely
The rule is a single invariant per transaction:
Once a transaction has released its first lock, it may never acquire another lock.
Phrased differently: every transaction's lock timeline has exactly two phases.
- Growing phase. The transaction acquires locks, possibly many, possibly over time. It never releases during this phase.
- Shrinking phase. The transaction releases locks, possibly one at a time, possibly all at commit. It never acquires during this phase.
The boundary between the phases is the lock point — the instant the transaction holds its maximum set of locks. Before the lock point, locks monotonically accumulate. After the lock point, locks monotonically disappear.
Plot locks-held against time and the timeline has exactly one peak. A protocol violation would produce two peaks — a rise, a fall, another rise — which is exactly what broke the bank-transfer example above. The rule says: one peak, one valley.
Why this specific shape: the rule is about the act of acquiring and releasing, not about which locks the transaction happens to hold at a given moment. Upgrades (read lock to write lock) count as acquiring a new lock, so they must happen in the growing phase too. Downgrades (write lock to read lock) are considered a partial release and therefore belong in the shrinking phase. Every production lock manager enforces the rule by tracking a single per-transaction boolean: "have I released anything yet?". The first release flips it. Every subsequent acquire consults it.
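The one-peak shape can be verified with exactly the boolean just described. A sketch (the event format is mine): given one transaction's lock events in order, confirm nothing is acquired after the first release.

```python
def is_two_phase(events):
    """True iff no acquire follows a release — the single-peak shape.
    events: sequence of ("acquire" | "release", key) pairs for ONE transaction."""
    shrinking = False                # the one-way flag, flipped by the first release
    for action, _key in events:
        if action == "release":
            shrinking = True
        elif shrinking:              # acquire after release: protocol violation
            return False
    return True

# The bank-transfer bug from earlier: release A, then acquire B. Two peaks.
bad  = [("acquire", "A"), ("release", "A"), ("acquire", "B"), ("release", "B")]
# The 2PL-compliant version: acquire everything, then release everything.
good = [("acquire", "A"), ("acquire", "B"), ("release", "A"), ("release", "B")]
```

Note the checker never looks at which keys are involved — the rule is about the order of acquire and release events, nothing else.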
Why 2PL produces serialisable schedules
The rule looks almost arbitrary. Why does "no acquire after release" guarantee serialisability? Work through it carefully for two transactions; the general result follows by induction.
Take transactions T1 and T2 that conflict — they both touch some object x, and at least one writes. By the compatibility matrix, only one can hold its lock on x at a time. Say T1 gets there first: T1 acquires the lock, does its work, releases at time t_r1, and by the 2PL rule is now in its shrinking phase forever. T2 unblocks and acquires the lock at time t_a2 >= t_r1.
Suppose T1 and T2 conflict on a second object y as well. If T1 got y first, every conflict edge runs T1 → T2 and T1 is entirely "before" T2. What if T2 got y first? Then T2 held y through T2's lock point, so T2 acquired y before releasing anything. But T2 needed x as well, and could not acquire x until after T1 released it, i.e. after T1's lock point. So T2's lock point is after T1's. Yet T1 needed y in its growing phase (before its own lock point), which would have blocked until T2 released y — and T2 is still in its growing phase. That is a cycle in the acquire order, impossible since the lock manager serialises each lock.
Why the cycle-in-acquire-order argument is the real content: for every pair of conflicting transactions, one of them must be "entirely before" the other in the sense that all conflict edges point the same way. When you generalise to N transactions, the conflict graph is acyclic — because each transaction has a single lock point and conflicts ordered around that point give a consistent topological position. An acyclic conflict graph is exactly the definition of conflict-serialisability. So 2PL → acyclic conflict graph → serialisable schedule.
The proof is structural, not combinatorial. You are not checking cases; you are noting that the single lock point is the transaction's position in the serial order that the concurrent schedule is equivalent to. If T1's lock point precedes T2's, the schedule is equivalent to "T1 then T2". Full stop.
Basic 2PL in Python
Build the simplest version of the protocol — basic 2PL — and run it end to end. A LockManager holds a table keyed by object identity; a Transaction wrapper enforces the phase discipline.
# concurrency/lockmgr.py
import threading

class LockDisciplineError(Exception):
    """Raised when a transaction tries to acquire after releasing."""

class LockManager:
    """Coarse lock manager. One condvar per object, plus holder bookkeeping."""

    def __init__(self):
        self._guard = threading.Lock()               # protects the table
        self._cv = threading.Condition(self._guard)
        self._holders: dict = {}                     # key -> set of (tx_id, mode)

    def _compatible(self, key, tx_id, mode) -> bool:
        held = self._holders.get(key, set())
        for (h_tx, h_mode) in held:
            if h_tx == tx_id:
                continue                             # self-held is always ok
            if mode == "W" or h_mode == "W":
                return False                         # any W excludes everyone else
        return True                                  # only shared readers

    def acquire(self, tx_id, key, mode):
        with self._cv:
            while not self._compatible(key, tx_id, mode):
                self._cv.wait()                      # block until lock is free
            self._holders.setdefault(key, set()).add((tx_id, mode))

    def release(self, tx_id, key):
        with self._cv:
            held = self._holders.get(key, set())
            held = {(t, m) for (t, m) in held if t != tx_id}
            if held:
                self._holders[key] = held
            else:
                self._holders.pop(key, None)
            self._cv.notify_all()                    # wake anyone who was waiting
Thirty-five lines of real threading primitives. _compatible implements the read/write compatibility matrix — any writer excludes everyone, readers share. Waiters park on a single condition variable; a release wakes all of them and the incompatible ones go back to sleep. Production lock managers use a per-key condvar or an intrusive wait queue to avoid the thundering herd, but the semantics are identical.
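To see the compatibility semantics in action without spinning up threads, here is a short self-contained exercise of the same lock manager (restated compactly, unchanged semantics, so the snippet runs on its own; the _compatible probe stands in for a would-this-block check):

```python
import threading

class LockManager:
    """Condensed restatement of the lock manager above; same semantics."""
    def __init__(self):
        self._cv = threading.Condition()
        self._holders = {}                           # key -> set of (tx_id, mode)

    def _compatible(self, key, tx_id, mode):
        for h_tx, h_mode in self._holders.get(key, set()):
            if h_tx != tx_id and "W" in (mode, h_mode):
                return False                         # any writer excludes everyone else
        return True

    def acquire(self, tx_id, key, mode):
        with self._cv:
            while not self._compatible(key, tx_id, mode):
                self._cv.wait()
            self._holders.setdefault(key, set()).add((tx_id, mode))

    def release(self, tx_id, key):
        with self._cv:
            held = {h for h in self._holders.get(key, set()) if h[0] != tx_id}
            if held:
                self._holders[key] = held
            else:
                self._holders.pop(key, None)
            self._cv.notify_all()

lm = LockManager()
alice = ("accounts", "alice")
lm.acquire(1, alice, "R")
lm.acquire(2, alice, "R")                 # two readers share the row
blocked = not lm._compatible(alice, 3, "W")   # a writer would have to wait
lm.release(1, alice)
lm.release(2, alice)
free = lm._compatible(alice, 3, "W")          # last reader gone: writer may proceed
```

Two readers coexist on the same key; the writer is incompatible until both are gone, exactly as the matrix dictates.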
Now the transaction wrapper that enforces phase discipline:
# concurrency/transaction.py
from .lockmgr import LockManager, LockDisciplineError

class Transaction:
    """Enforces the 2PL rule: no acquire after first release."""

    def __init__(self, tx_id: int, lm: LockManager):
        self.tx_id = tx_id
        self.lm = lm
        self._held: set = set()          # (key, mode) pairs
        self._shrinking = False          # flipped on first release

    def read(self, key):
        if self._shrinking:
            raise LockDisciplineError(f"tx {self.tx_id} acquire after release")
        self.lm.acquire(self.tx_id, key, "R")
        self._held.add((key, "R"))
        return storage.read(key)         # external row-read (the engine's row store)

    def write(self, key, value):
        if self._shrinking:
            raise LockDisciplineError(f"tx {self.tx_id} acquire after release")
        self.lm.acquire(self.tx_id, key, "W")
        self._held.add((key, "W"))
        storage.write(key, value)

    def release(self, key):
        self._shrinking = True           # one-way flip
        self.lm.release(self.tx_id, key)
        self._held = {(k, m) for (k, m) in self._held if k != key}

    def commit(self):
        for (k, _) in list(self._held):
            self.lm.release(self.tx_id, k)
        self._held.clear()
The single state variable _shrinking is the entire phase-discipline check. Every read and write inspects it before calling acquire; every release sets it. Try to call write after release and you get LockDisciplineError — the protocol rejects your transaction before it can corrupt the schedule.
Why one boolean is sufficient: the 2PL rule is a single-transition property — there exists a moment after which you cannot acquire. You do not need to know when that moment happened or which release triggered it, only that it has happened. A boolean captures exactly that. No per-lock metadata, no timestamps, just one flag per transaction. This is why the protocol survives the scale of production systems: enforcement is O(1) per operation.
This is basic 2PL: locks released as soon as the transaction no longer needs them, possibly well before commit. The protocol is satisfied. The schedule is serialisable. You might think you are done. You are not.
The problem with basic 2PL — cascading aborts
Serialisability is necessary but not sufficient for a correct database. You also need recoverability: if a transaction aborts, its effects must be undoable, and no committed transaction must have depended on those effects.
Basic 2PL breaks recoverability. Here is the scenario.
T1: write(x=100) release(x) ... ABORT (undo x back to old value)
T2: read(x=100) commit
T1 wrote x = 100 while holding the write lock, then released the lock in its shrinking phase. It has not committed yet — just released the lock because it was done with x from its own point of view. T2, now able to acquire a read lock on x, reads the new value 100, uses it, and commits.
Then T1 aborts — maybe a constraint check failed, maybe a system crash during commit. The undo rolls x back to its old value. But T2 has already committed based on reading 100. T2 saw a value that no longer exists. Worse, T2's write-set may have encoded the 100 somewhere — an audit log that says "total was 100", a notification sent to a user. T2's committed effects depend on uncommitted-and-now-rolled-back state.
The only fix is to abort T2 as well. And if T3 read from T2, abort T3. And so on, down the chain. This is a cascading abort. In a busy system with thousands of transactions per second, a single rollback can cascade across hundreds of dependent transactions, each of which has to be identified, undone, and its own effects unwound. The performance is abysmal and the cascade depth is unpredictable.
A schedule is cascadeless (also called avoiding cascading aborts, ACA) if every transaction only reads values written by already-committed transactions. Basic 2PL does not give you that. You need something stricter.
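The cascade itself is just reachability in the reads-from graph. A sketch (the trace format is mine): record "Tj read a value Ti wrote before Ti committed" as an edge, and when Ti aborts, every transitive reader is doomed with it.

```python
def cascade(aborted, reads_from):
    """reads_from: set of (reader, writer) pairs where reader saw writer's
    uncommitted data. Returns every transaction that must abort with `aborted`."""
    doomed = {aborted}
    changed = True
    while changed:                       # transitive closure over dirty reads
        changed = False
        for reader, writer in reads_from:
            if writer in doomed and reader not in doomed:
                doomed.add(reader)
                changed = True
    return doomed

# T2 read T1's uncommitted x, T3 read T2's uncommitted result; T4/T5 unrelated.
edges = {("T2", "T1"), ("T3", "T2"), ("T4", "T5")}
```

Aborting T1 drags in T2 and T3 but leaves T4 alone — and in a real system each of those aborts triggers its own undo and its own search for further readers.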
Strict 2PL — hold write locks until commit
Strict 2PL patches basic 2PL with one extra rule:
Release all write locks only at commit or abort.
Read locks may still be released early, in the shrinking phase, exactly as before. But the write locks stay held until the transaction has committed (or been aborted and rolled back). The effect: no other transaction can read a dirty write, because the writer's lock blocks every reader until the writer has made its effects durable.
The Python change is one line. Replace the early-release path in Transaction.release so that it only releases read locks:
def release(self, key):
    self._shrinking = True
    # Find the mode we hold. If it's a write lock, defer to commit.
    modes = {m for (k, m) in self._held if k == key}
    if "W" in modes:
        return                           # ignore early write-release
    self.lm.release(self.tx_id, key)
    self._held = {(k, m) for (k, m) in self._held if k != key}
That is strict 2PL. No cascading aborts are possible, because when T2 goes to read x, T1 either has not released its write lock (so T2 blocks and eventually reads T1's committed value) or has released it (which means T1 committed, and the value T2 reads is durable). There is no window in which T2 can see T1's dirty write.
The protocol is still serialisable — strict 2PL is a restriction of basic 2PL, so anything forbidden by basic 2PL is also forbidden by strict 2PL. And it is also cascadeless. This is what MySQL InnoDB documents as its "two-phase locking protocol" for row-level locks, though InnoDB's full lock model adds gap locks and intention locks on top (chapter 54). SQL Server's default isolation mode does the same.
The cost is slightly more blocking. A transaction that is done with x but still running — maybe it is holding x's write lock while it waits for a disk flush of y — continues to block readers of x who could, in principle, have been served from the committed value. In practice this delay is small because most transactions are short. In adversarial workloads (long-running writer, many short readers), the delay can dominate and you need MVCC to sidestep it.
Rigorous 2PL — hold everything until commit
Rigorous 2PL goes one step further:
Release both read and write locks only at commit or abort.
The shrinking phase is trivial — it consists of the single instant of commit, at which point every lock the transaction holds is released atomically. Every transaction's lock timeline is a plateau: locks rise, locks stay at the peak, locks all drop to zero at commit.
The Python change is to delete the early-release path entirely:
def release(self, key):
    # In rigorous 2PL there is no early release. Only commit/abort releases.
    raise LockDisciplineError("rigorous 2PL forbids early release")

def commit(self):
    for (k, _) in list(self._held):
        self.lm.release(self.tx_id, k)
    self._held.clear()
    self._shrinking = True
Why choose rigorous over strict? Simplicity of reasoning. Under rigorous 2PL, the commit point is the serialisation point. If T1 commits before T2, then in the equivalent serial schedule T1 comes before T2. That is true for reads and writes alike, and you never need to worry about which phase a read lock got released in.
Under strict 2PL, read locks can be released during the transaction — which is correct (serialisability is preserved) but the serialisation point for a read is not the commit, it is the moment the read lock was released. If you are trying to prove some more sophisticated property about the system — like strict serialisability in a distributed setting, where the order observed by external clients must match the serial order — rigorous 2PL makes the argument trivial because there is exactly one lock-release event per transaction.
PostgreSQL uses rigorous 2PL for its explicit table-level locks (SELECT FOR UPDATE, etc.), and MVCC for ordinary reads. Oracle is similar. The extra blocking over strict 2PL is less painful than it might seem because most OLTP transactions are short: a few milliseconds of held read locks is rarely the contention bottleneck. Long-running analytical transactions bypass 2PL entirely via snapshot isolation.
Read/write lock compatibility
Throughout the discussion above you have been relying on the read/write lock compatibility matrix. Spell it out explicitly — this is the single most referenced table in concurrency control.
The rule is symmetric and tiny:
|  | Read (held) | Write (held) |
|---|---|---|
| Read (want) | ✓ | ✗ |
| Write (want) | ✗ | ✗ |
Two transactions can both hold read locks on the same row simultaneously — that is the whole point of distinguishing read from write. A writer, however, excludes everyone: other writers (obviously, to prevent lost updates) and also readers (to prevent dirty reads). The minimal efficient RWLock implementation tracks two pieces of state:
# concurrency/rwlock.py
import threading

class RWLock:
    def __init__(self):
        self._cv = threading.Condition()
        self._readers = 0                # number of shared readers
        self._writer = False             # exclusive writer flag

    def acquire_read(self):
        with self._cv:
            while self._writer:
                self._cv.wait()          # block while a writer holds
            self._readers += 1

    def acquire_write(self):
        with self._cv:
            while self._writer or self._readers > 0:
                self._cv.wait()          # need exclusive access
            self._writer = True

    def release_read(self):
        with self._cv:
            self._readers -= 1
            if self._readers == 0:
                self._cv.notify_all()    # last reader wakes a writer

    def release_write(self):
        with self._cv:
            self._writer = False
            self._cv.notify_all()        # wake everyone
A writer waits until the reader count reaches zero; the last departing reader wakes it up. This naive version can starve writers under heavy read load — a production RWLock adds a waiting_writers counter and blocks new readers when a writer is queued, to ensure fairness. But the semantics are exactly those of the compatibility matrix.
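The fairness fix described above is a small delta. A sketch of the writer-preference variant (assuming the same condvar structure; the waiting_writers counter is the only addition): new readers queue behind any waiting writer instead of sliding past it.

```python
import threading

class FairRWLock:
    """Writer-preference RWLock: a steady stream of readers cannot starve
    a writer, because new readers block while any writer is queued."""
    def __init__(self):
        self._cv = threading.Condition()
        self._readers = 0
        self._writer = False
        self._waiting_writers = 0        # the fairness state

    def acquire_read(self):
        with self._cv:
            # Block while a writer holds OR is waiting (the fairness tweak).
            while self._writer or self._waiting_writers > 0:
                self._cv.wait()
            self._readers += 1

    def release_read(self):
        with self._cv:
            self._readers -= 1
            if self._readers == 0:
                self._cv.notify_all()    # last reader out wakes a queued writer

    def acquire_write(self):
        with self._cv:
            self._waiting_writers += 1   # announce intent: stop new readers
            try:
                while self._writer or self._readers > 0:
                    self._cv.wait()
            finally:
                self._waiting_writers -= 1
            self._writer = True

    def release_write(self):
        with self._cv:
            self._writer = False
            self._cv.notify_all()        # wake readers and writers alike
```

The trade is deliberate: reads now pay a small latency cost whenever a writer is queued, in exchange for a bound on writer wait time.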
Deadlocks — the unavoidable cost of 2PL
2PL makes per-row locking safe. It does not make it deadlock-free. In fact, 2PL causes deadlocks — they are the tax you pay for using it.
The canonical deadlock needs only two transactions and two rows:
T1: lock(x) acquired
T2: lock(y) acquired
T1: lock(y) BLOCKED (T2 holds y)
T2: lock(x) BLOCKED (T1 holds x)
T1 holds x and waits for y. T2 holds y and waits for x. Neither can proceed. Neither can release — both are still in their growing phase, and 2PL forbids release-then-acquire cycles. This wait will never resolve on its own. It is a genuine deadlock, and it can happen whenever two transactions acquire locks in opposite orders.
You cannot prevent deadlocks inside 2PL. What you can do is detect them and break them, or avoid them by ordering. There are three classical strategies:
- Timeouts. Each lock acquisition has a deadline. If it is not granted by then, abort the transaction. Crude but cheap — this is what MySQL does by default (innodb_lock_wait_timeout, usually 50 seconds).
- Wait-for graph detection. The lock manager maintains a directed graph of "T1 is waiting for T2". Run cycle detection periodically; when a cycle is found, pick a victim and abort it. Postgres does this every deadlock_timeout (default 1 s).
- Wound-wait / Wait-die. Use transaction timestamps to decide, at wait time, whether the younger or older transaction should be aborted. Classical algorithms from the 1980s, still used in some distributed systems.
The full treatment — with a real wait-for-graph implementation in Python — is chapter 55, deadlock detection with wait-for graphs. For now, know that 2PL and deadlocks are a package deal. Every real lock-based database ships with a deadlock detector baked in.
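As a preview of that chapter, the core of wait-for detection fits in a dozen lines. A sketch under the simplifying assumption that each transaction waits on at most one other (so the graph is a dict of chains):

```python
def deadlocked(waits_for, start):
    """Follow the wait chain from `start`; revisiting any node means the
    chain runs into a cycle, so `start` will never be granted its lock.
    waits_for: dict mapping each blocked tx to the tx it is waiting on."""
    seen = {start}
    node = start
    while node in waits_for:
        node = waits_for[node]
        if node in seen:
            return True                  # cycle reached: genuine deadlock
        seen.add(node)
    return False                         # chain ends at a runnable tx

# The canonical two-transaction deadlock: each waits on the other.
cycle = {"T1": "T2", "T2": "T1"}
chain = {"T1": "T2"}                     # T2 is runnable and will release: fine
```

The real detector generalises this to a full directed graph (a transaction can wait on several lock holders at once) and adds victim selection, which is exactly what chapter 55 builds.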
A concrete scenario
Bank transfer and audit, under strict 2PL
A live system with two transactions running concurrently against a bank ledger.
- T1 — transfer 100 from alice to bob. Needs write locks on alice and bob.
- T2 — audit: sum over {alice, bob, carol}. Needs read locks on all three.
Timeline under strict 2PL (times in milliseconds):
t=0: T1 begins.
t=0.1: T1 acquires write(alice). [alice locked W by T1]
t=0.2: T1 acquires write(bob). [bob locked W by T1]
t=0.3: T1 debits alice: 500 -> 400.
t=0.4: T1 credits bob: 300 -> 400.
t=3: T2 begins.
t=3.1: T2 tries to acquire read(alice). BLOCKS (T1 holds write lock).
t=10: T1 commits. Write locks on alice, bob released atomically.
t=10.1: T2 unblocks. Acquires read(alice). [alice locked R by T2]
t=10.2: T2 acquires read(bob). [bob locked R by T2]
t=10.3: T2 acquires read(carol). [carol locked R by T2]
t=10.4: T2 reads and sums: 400 + 400 + 700 = 1500.
t=11: T2 commits. Read locks released.
T2 saw the post-transfer state of alice and bob. The audit sum is 1500, not 1400 (which it would be if T2 had seen post-transfer alice but pre-transfer bob — a non-serialisable read). Strict 2PL made this impossible: T2 could not begin reading until T1's write locks were gone, and those only went away at T1's commit.
The cost was 7 ms of blocking on T2 — the time T1 held its write locks while T2 waited. If many readers like T2 pile up behind T1, they all wait in line. For short transactions like this one, that queue is a few milliseconds. For long transactions, it is a reason to use MVCC (build 8) instead.
Compared with a single global lock (chapter 52), 2PL scales very differently with contention ratio — the fraction of transactions that conflict with any given transaction. At low contention, 2PL throughput scales almost linearly with core count; at 100% contention (every transaction conflicts with every other), 2PL degrades to global-lock throughput because every pair serialises.
The takeaway: 2PL is not magic. It moves the serialisation point from a coarse global mutex to a fine per-object lock. If your workload actually has fine-grained access patterns, you get most of the cores to yourself. If every transaction needs to touch the same hot row, you are back to global-lock territory — and no protocol can fix that, because no protocol can make conflicting operations run in parallel.
Common confusions
- "2PL means hold locks throughout the transaction." Only in strict and rigorous 2PL. Basic 2PL releases locks as soon as the transaction no longer needs them, well before commit — which is why basic 2PL is almost never used in practice (cascading aborts). When someone in industry says "2PL" they almost always mean strict or rigorous.
- "2PL prevents deadlocks." The opposite — 2PL creates deadlocks. The growing-phase rule is exactly what makes deadlocks possible: a transaction cannot release a lock to break a wait cycle because that would end its growing phase and force it to never acquire another lock. Deadlock detection is a separate mechanism layered on top (chapter 55).
- "Serialisable isolation requires 2PL." No. MVCC with serialisable snapshot isolation (SSI, Cahill 2008, what Postgres's SERIALIZABLE mode uses) provides serialisability without holding read locks — it detects conflicts at commit time and aborts. Optimistic concurrency control does the same in a different style. 2PL is one protocol; it is not the only one.
- "Read locks and write locks cost the same." In some implementations read locks are very cheap — a shared atomic reference counter — while write locks involve exclusion, WAL bookkeeping, and index maintenance. In other implementations (lock manager with a hash table of lock queues), both cost a hash-table lookup plus a condvar acquire, so they are similar. The cost depends on the engine. Do not assume.
- "The growing-shrinking rule is per lock." No, it is per transaction. Once any lock has been released, no new lock (on any object) may be acquired. It is a global property of the transaction's timeline, not a property of individual locks.
Going deeper
You understand 2PL and its variants. A few directions extend the picture.
Optimistic concurrency control as the antithesis
2PL is pessimistic — it assumes transactions will conflict and locks defensively. Optimistic concurrency control (OCC), introduced by Kung and Robinson in 1981, takes the opposite bet: run the transaction without locks while recording read-set and write-set, then at commit validate that no concurrent transaction invalidated your read-set, and finally apply your writes atomically. When contention is low, OCC wins — no lock manager overhead, no blocking, no deadlocks. When contention is high, OCC thrashes as transactions keep aborting and retrying. Modern in-memory engines like Hekaton (SQL Server's in-memory OLTP) use OCC variants.
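The commit-time check can be sketched in a few lines (the names and trace format are illustrative, not Kung and Robinson's notation): validate a transaction's read-set against the write-sets of transactions that committed while it was running.

```python
def occ_validate(tx_read_set, committed_write_sets):
    """Backward validation sketch: commit only if nothing this transaction
    read was overwritten by a transaction that committed after it began.
    committed_write_sets: write-sets of those concurrent committers."""
    for write_set in committed_write_sets:
        if tx_read_set & write_set:      # overlap = stale read: abort and retry
            return False
    return True

ok    = occ_validate({"alice", "bob"}, [{"carol"}])          # disjoint: commit
clash = occ_validate({"alice", "bob"}, [{"bob", "dave"}])    # overlap: abort
```

Notice the asymmetry with 2PL: all the cost is paid at commit (a set intersection per concurrent committer), and the failure mode is an abort-and-retry rather than blocking — which is why the technique thrashes exactly when retries become likely.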
Lock escalation
A transaction updating 10 million rows would allocate 10 million lock-table entries, burning memory. Lock escalation is the engine's response: start with row-level locks, and when a single transaction holds more than some threshold (SQL Server uses 5000 on one object), upgrade them all to a table-level or page-level lock. Less concurrency, much less overhead. Escalation is a heuristic and it can surprise you — a workload fine at 4999 row locks suddenly blocks readers at 5001 because the engine escalated. The more modern alternative is intention locks (chapter 54), which give you multi-granularity locking without the surprise transition.
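The escalation heuristic itself is a counter and a threshold. A toy sketch (class name and threshold are illustrative; real engines also re-check lock compatibility before escalating):

```python
class EscalatingLockTable:
    """Past `threshold` row locks on one table, drop them and take a single
    table lock instead — trading concurrency for lock-table memory."""
    def __init__(self, threshold=5000):
        self.threshold = threshold
        self.row_locks = {}              # table -> set of locked row ids
        self.table_locks = set()         # tables locked wholesale

    def lock_row(self, table, row_id):
        if table in self.table_locks:
            return "table"               # already escalated: table lock covers it
        rows = self.row_locks.setdefault(table, set())
        rows.add(row_id)
        if len(rows) > self.threshold:
            del self.row_locks[table]    # release the fine-grained entries
            self.table_locks.add(table)  # one coarse lock replaces them all
            return "escalated"
        return "row"

t = EscalatingLockTable(threshold=3)
results = [t.lock_row("accounts", i) for i in range(5)]
```

The fourth row lock trips the threshold and every subsequent acquisition is absorbed by the table lock — including acquisitions that would previously not have conflicted with anyone, which is the surprise the text warns about.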
Predicate locking and the phantom problem
Row locks protect individual rows. But what about rows that don't exist yet? T1 reads every row where salary > 100000 and locks them — say, 42 rows. T2 inserts a new row with salary = 150000. When T1 re-reads the predicate it now sees 43 rows. That new row is a phantom, and row-level 2PL does nothing to stop it. Predicate locking generalises row locks to sets of rows defined by a predicate. In practice, predicate locks are hard to implement efficiently (checking whether two predicates overlap is undecidable in general), so real systems use index locks or gap locks — locking ranges in the index. InnoDB's gap-lock system is the reason SELECT ... FOR UPDATE on a range blocks inserts into that range from other transactions.
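The shape of a gap lock can be sketched as interval bookkeeping (a toy model — InnoDB's real structure is next-key locks anchored to index records, not bare intervals): a locked range on an indexed column blocks inserts whose key falls inside it.

```python
class GapLockTable:
    """Toy range-lock table: a locked [lo, hi] interval on an indexed column
    blocks inserts whose key lands inside it — the phantom defence."""
    def __init__(self):
        self.ranges = []                 # list of (lo, hi), inclusive

    def lock_range(self, lo, hi):
        self.ranges.append((lo, hi))

    def insert_allowed(self, key):
        return all(not (lo <= key <= hi) for lo, hi in self.ranges)

gaps = GapLockTable()
# T1's predicate read: every row with salary > 100000, locked as a range.
gaps.lock_range(100_000, float("inf"))
```

T2's insert of salary = 150000 now lands inside a locked range and must wait, so T1's re-read cannot see a phantom; an insert at salary = 90000 is untouched.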
Where this leads next
You now have a working per-object lock manager, the 2PL protocol in all three variants, and an understanding of why each variant exists. The next chapters flesh out the rest of the lock-based concurrency stack.
- Lock granularity and intention locks — how a lock manager supports table-level, page-level, and row-level locks simultaneously, without forcing every transaction to check every granularity explicitly. Introduces the IS/IX/SIX lock modes.
- Deadlock detection with wait-for graphs — the detector that every lock-based system ships with. Builds the wait-for graph, runs cycle detection, picks a victim. Also covers wound-wait and wait-die.
- MVCC and snapshot isolation (Build 8) — the non-locking alternative to 2PL for reads, which is why Postgres's SELECT never blocks UPDATE and vice versa. Where reads and writes finally stop fighting.
2PL is the foundation on which the rest of this build sits. Every subsequent chapter in concurrency control either extends it (intention locks, predicate locks, deadlock detection) or argues with it (MVCC, OCC). The protocol's core insight — that a single per-transaction lock-point pins down the serialisation order — is almost sixty years old and still the default answer in every production database you will touch.
References
- Eswaran, Gray, Lorie, Traiger, The Notions of Consistency and Predicate Locks in a Database System, CACM 19(11), 1976 — the founding paper. Introduces 2PL, predicate locks, and the idea of serialisability as the correctness criterion. Almost every concurrency-control paper since cites it.
- Bernstein, Hadzilacos, Goodman, Concurrency Control and Recovery in Database Systems, Addison-Wesley 1987 — the textbook. Free PDF from Microsoft Research. Chapter 3 is the definitive treatment of 2PL, strict 2PL, rigorous 2PL, and the proof that each produces conflict-serialisable schedules.
- Gray, Reuter, Transaction Processing: Concepts and Techniques, Morgan Kaufmann 1992 — the industry bible. Chapter 7 on locking is where most of the practical wisdom about real lock manager implementations (hash tables, latch hierarchies, thundering-herd avoidance) was first written down.
- PostgreSQL source tree, lmgr.c — the actual lock manager of a production relational database. About 5000 lines of C, readable, commented. Pair with the explicit locking chapter of the docs.
- MySQL Reference Manual, InnoDB Locking — how a real strict-2PL-plus-gap-locks system documents its behaviour. Includes the compatibility matrix, lock modes (S, X, IS, IX), and the subtle interaction with the MVCC read path.
- Kleppmann, Designing Data-Intensive Applications, O'Reilly 2017, chapter 7 — the accessible modern treatment. Covers 2PL, MVCC, and SSI in one chapter with production examples, and explains why serialisable snapshot isolation has largely replaced 2PL for read-heavy workloads.