The Euclidean algorithm computes \gcd(a, b) by replacing (a, b) with (b, a \bmod b) and repeating until the second number is zero. The first time you meet it, a natural worry appears: What if it never terminates? What if the remainders keep shrinking but never quite hit zero — like 1, 1/2, 1/4, 1/8, \ldots — and the algorithm runs forever?
The worry is misplaced, and the reason it is misplaced reveals something beautiful about the integers. The Euclidean algorithm cannot run forever because the sequence of remainders is a strictly decreasing sequence of non-negative integers, and no such sequence can be infinite. Every run terminates in at most b steps (and usually far fewer).
This is an instance of the well-ordering principle — one of the deepest properties of \mathbb{N} — dressed up as an algorithm.
The algorithm, restated
Given positive integers a, b with a \geq b, the Euclidean algorithm produces a sequence of remainders: set r_0 = a and r_1 = b,
and for each k \geq 1, define r_{k+1} = r_{k-1} \bmod r_k. Stop when r_{k+1} = 0. The last non-zero remainder r_k is \gcd(a, b).
Each line of the algorithm is a Euclidean division:

r_{k-1} = q_k \cdot r_k + r_{k+1}, \qquad 0 \leq r_{k+1} < r_k.

The crucial inequality is the one on the right: the remainder r_{k+1} is strictly less than r_k.
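The recurrence translates directly into a short loop. Here is a minimal Python sketch (the function name is my own) that records each division step alongside the answer:

```python
def euclid_steps(a, b):
    """Run the Euclidean algorithm on a >= b >= 1.

    Returns (gcd, steps), where each step is a tuple
    (r_prev, q, r, r_next) satisfying r_prev = q * r + r_next
    with 0 <= r_next < r.
    """
    steps = []
    while b != 0:
        q, r = divmod(a, b)        # Euclidean division: a = q*b + r, 0 <= r < b
        steps.append((a, q, b, r))
        a, b = b, r                # (a, b) <- (b, a mod b)
    return a, steps

g, steps = euclid_steps(1071, 462)
for r_prev, q, r, r_next in steps:
    print(f"{r_prev} = {q} x {r} + {r_next}")
print("gcd =", g)                  # prints "gcd = 21"
```

The loop body is exactly one Euclidean division per iteration, so the printed lines are the algorithm's run written out in full.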
The termination proof, in one sentence
Here is the whole proof:
The sequence r_1, r_2, r_3, \ldots is a sequence of non-negative integers with r_{k+1} < r_k; hence it must reach 0 in finitely many steps.
Read it twice. It is short because the heavy lifting is done by the integers, not by the algorithm. Non-negative integers cannot decrease forever — the smallest one is 0, and a strictly decreasing sequence must eventually arrive there.
Why this does not work for real numbers
Here is what breaks if you try to run a Euclidean-style algorithm on arbitrary real numbers. Suppose you start with a = 1, b = 1/\sqrt{2}, an incommensurable pair. Taking a \bmod b gives a remainder smaller than b, but still positive. You can keep going, and keep going, and keep going — the remainders get smaller but never reach 0. (For rationals, by contrast, the algorithm still terminates: clear denominators and it reduces to the integer case. The trouble starts with incommensurable reals.)
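The non-termination can be watched directly. The sketch below iterates the same update rule on 1 and 1/\sqrt{2}, using floats. A caveat: a float is itself rational, so rounding error would eventually stop the loop; the point is only that the first twenty remainders shrink without reaching zero, exactly as the argument predicts.

```python
import math

a, b = 1.0, 1.0 / math.sqrt(2)    # incommensurable pair (approximated by floats)
remainders = []
for _ in range(20):
    a, b = b, math.fmod(a, b)     # same update rule as the integer algorithm
    remainders.append(b)

# Every remainder is positive: the sequence shrinks but never hits 0.
print(all(r > 0 for r in remainders))    # True
print(remainders[0], remainders[-1])     # ~0.2929 down to something tiny but nonzero
```

Each remainder is roughly 0.414 times the previous one (the continued fraction of \sqrt{2} has all quotients 2 past the first), so the sequence converges to 0 without ever arriving.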
The integers have a property that the reals lack: between any two distinct non-negative integers, there are only finitely many other non-negative integers. So a strictly decreasing sequence of integers has nowhere to go except down to 0. The reals are dense — between any two of them lie infinitely many others — so a strictly decreasing sequence of reals can limp along forever, converging to a limit without ever reaching it.
The Euclidean algorithm's termination is not algorithmic magic. It is the well-ordering of \mathbb{N} made visible.
The well-ordering principle, stated
Well-ordering principle. Every non-empty set of non-negative integers has a smallest element.
This is equivalent to saying that there is no infinite strictly decreasing sequence of non-negative integers. (Because if there were, the set of all elements in such a sequence would be a non-empty set with no smallest element — contradicting well-ordering.)
So the Euclidean algorithm terminates because:
- Each step produces a remainder r_{k+1} with 0 \leq r_{k+1} < r_k.
- The remainders form a strictly decreasing sequence of non-negative integers.
- By well-ordering, this sequence cannot be infinite.
- It terminates when a remainder hits 0.
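Each bullet point can be checked mechanically. A small sketch (helper name of my own choosing) that collects the remainder sequence and asserts every property the argument uses:

```python
def remainder_sequence(a, b):
    """All remainders r_2, r_3, ... produced by the Euclidean algorithm,
    ending with the final remainder 0."""
    seq = []
    while b != 0:
        a, b = b, a % b
        seq.append(b)
    return seq

seq = remainder_sequence(1071, 462)
assert all(r >= 0 for r in seq)                    # non-negative integers
assert all(x > y for x, y in zip(seq, seq[1:]))    # strictly decreasing
assert seq[-1] == 0                                # terminates at 0
print(seq)                                         # [147, 21, 0]
```

The assertions are the termination proof restated as executable checks: non-negative, strictly decreasing, hence finite.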
An upper bound on the number of steps
How many steps, at most? The quickest bound is b: each step decreases the remainder by at least 1, so after at most b steps you must have reached 0. But this is terrible — it says \gcd(1000, 999) takes up to 999 steps, when in fact it takes two.
The sharp bound is given by Lamé's theorem (1844): the number of steps in the Euclidean algorithm on a, b (with a > b \geq 1) is at most 5 \log_{10} b. So \gcd(a, 1000) takes at most 15 steps. \gcd(a, 10^{18}) takes at most 90 steps. The algorithm is exponentially faster than its crude upper bound because each step roughly halves the smaller number.
Why the sharp bound involves \log: in the worst case, the quotient at each step is 1, which forces the sequence of remainders to follow the Fibonacci sequence backwards — F_n, F_{n-1}, F_{n-2}, \ldots. Since Fibonacci numbers grow like \varphi^n (where \varphi is the golden ratio), an input of size b \approx F_n forces about n \approx \log_\varphi b steps. The constant 5 in Lamé's theorem comes from \log_\varphi 10 \approx 4.78 < 5.
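The bound is easy to check numerically. The sketch below counts division steps and compares the count with five times the number of decimal digits of b, a commonly stated form of Lamé's bound (the function name is my own):

```python
def step_count(a, b):
    """Number of division steps the Euclidean algorithm performs on (a, b)."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

for a, b in [(1000, 999), (10**18 + 9, 10**18 - 11), (13, 8)]:
    bound = 5 * len(str(b))            # 5 x (decimal digits of b)
    assert step_count(a, b) <= bound
    print(a, b, "->", step_count(a, b), "steps, bound", bound)
```

For \gcd(1000, 999) the count is 2, a world away from the crude bound of 999; even the 18-digit pair stays within the Lamé bound of 90.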
The Fibonacci worst case
Consecutive Fibonacci numbers — (F_{n+1}, F_n) — achieve the slowest Euclidean algorithm. Take a = 13, b = 8:

13 = 1 \cdot 8 + 5
8 = 1 \cdot 5 + 3
5 = 1 \cdot 3 + 2
3 = 1 \cdot 2 + 1
2 = 2 \cdot 1 + 0
Five steps. Every quotient is 1 (except the last). The remainders are themselves Fibonacci numbers running in reverse. If you want to make the Euclidean algorithm work hard, feed it Fibonacci pairs — and even then, it takes only about \log_\varphi b steps, which grows only logarithmically in b.
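One can generate consecutive Fibonacci pairs and confirm they realise the worst case. In this sketch (names my own), gcd(F_{n+1}, F_n) takes exactly n - 1 division steps for every n \geq 2:

```python
def step_count(a, b):
    """Number of division steps the Euclidean algorithm performs on (a, b)."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

# fib[n] holds F_n, with F_1 = F_2 = 1.
fib = [0, 1, 1]
while len(fib) < 32:
    fib.append(fib[-1] + fib[-2])

# Consecutive Fibonacci pairs hit the worst case:
# gcd(F_{n+1}, F_n) takes exactly n - 1 division steps.
for n in range(2, 31):
    assert step_count(fib[n + 1], fib[n]) == n - 1

print(step_count(13, 8))   # the pair (F_7, F_6) from the text: prints 5
```

The pair (13, 8) is (F_7, F_6), so the formula predicts 7 - 2 = 5 steps, matching the worked example above.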
What the termination proof is not
The termination proof does not rely on the value of the GCD, the quotients, the specific input numbers, or anything clever about factorisation. It relies only on the fact that remainders are non-negative integers strictly smaller than their predecessors. This is why the Euclidean algorithm works uniformly: for any two positive integers, regardless of size, regardless of prime structure, regardless of whether they are co-prime, the termination is guaranteed by the same argument.
A philosophical note
It is worth pausing on this. Most termination proofs in computing are subtle. Does a loop always exit? Does a recursion always return? These questions can be arbitrarily hard — in general, undecidable (the Halting Problem). But the Euclidean algorithm is certifiably terminating, and the certificate is short: a decreasing sequence of non-negative integers.
Programmers have borrowed this trick. A common way to prove a loop terminates is to produce a variant — a non-negative integer expression that strictly decreases with each iteration. If you can exhibit such a variant, the loop cannot loop forever. The Euclidean algorithm's variant is the current remainder. That is all you need.
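The variant pattern can be made explicit in code: instrument the loop with an assertion that the variant is a non-negative integer that strictly decreases each iteration. This is a sketch of the pattern, not a library feature; the function name is my own:

```python
def gcd_with_variant(a, b):
    """Euclidean algorithm with an explicit termination variant:
    the second argument b is a non-negative integer that strictly
    decreases on every iteration, so the loop cannot run forever."""
    while b != 0:
        variant_before = b
        a, b = b, a % b
        # The variant check: still non-negative, strictly smaller.
        assert 0 <= b < variant_before
    return a

print(gcd_with_variant(1071, 462))   # prints 21
```

If the assertion can never fire, the loop provably terminates; the assertion is the well-ordering argument embedded in the code.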
The one-line answer
The Euclidean algorithm terminates because the remainders form a strictly decreasing sequence of non-negative integers, and no such sequence is infinite — the well-ordering of \mathbb{N} forbids it. It cannot run forever, no matter what inputs you feed it.
Related: Number Theory Basics · The Euclidean Algorithm as Rectangle-Tiling · Does gcd × lcm = product Work for Three Numbers? · Why One is Not a Prime Number · Mathematical Induction