When you learn arithmetic, the operands are single numbers. Add 3 and 5 and you get 8. But what if the inputs are not numbers but ranges — "somewhere between 1 and 5" and "somewhere between 2 and 3"? You can still add, subtract, and multiply them, and the output is again a range. This is interval arithmetic, and it is how measurement error, tolerance analysis, and a large chunk of numerical computing actually get done.
The rules look innocent. Addition is [a, b] + [c, d] = [a + c,\ b + d] — add the two lefts, add the two rights. Done. Subtraction looks like it should be analogous, but it is not: [a, b] - [c, d] = [a - d,\ b - c], with the opposite corners of the second interval. Multiplication is even messier — the answer depends on all four products ac, ad, bc, bd, because any of them can be the minimum or the maximum depending on signs. This playground lets you drag two intervals around, flip the operator, and watch the output set snap into place.
The widget
Set intervals A = [a, b] and B = [c, d] with the four sliders, then pick +, -, or \times from the buttons. The top two bands show A and B; the bottom band shows the result A \star B. The readouts print both intervals and the result in bracket notation so you can compare with a hand calculation.
Addition is the easy one
Start with A = [1, 5] and B = [2, 3] and hit +. The widget shows A + B = [3, 8].
The rule [a, b] + [c, d] = [a + c,\ b + d] falls out of a single observation: if x \in [a, b] and y \in [c, d], then the smallest the sum x + y can be is a + c (picking the smallest of each), and the largest is b + d (picking the largest of each). Because the sum is a monotonically increasing function of each variable, shrinking both inputs shrinks the sum and growing both grows it. There is no catch.
This is why engineers reach for interval arithmetic when they stack tolerances. If a beam is 1.00 \pm 0.02 metres long and a bracket adds 0.30 \pm 0.01 metres, the total is 1.30 \pm 0.03 metres — and \pm 0.03 is exactly 0.02 + 0.01. The errors add because addition of intervals adds the widths.
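The addition rule and the tolerance-stacking example can be sketched in a few lines of Python (the helper names `add_intervals` and `width` are mine, not the widget's; intervals are just `(lo, hi)` pairs):

```python
# Minimal sketch: an interval is a (lo, hi) pair.

def add_intervals(A, B):
    """[a, b] + [c, d] = [a + c, b + d]."""
    a, b = A
    c, d = B
    return (a + c, b + d)

def width(A):
    return A[1] - A[0]

# The worked example from the text: [1, 5] + [2, 3] = [3, 8].
print(add_intervals((1, 5), (2, 3)))  # (3, 8)

# Tolerance stacking: 1.00 ± 0.02 m plus 0.30 ± 0.01 m.
beam, bracket = (0.98, 1.02), (0.29, 0.31)
total = add_intervals(beam, bracket)
print(total, width(total))  # widths add: 0.04 + 0.02 = 0.06
```

Note that the output width is the sum of the input widths, as claimed.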
Subtraction is not what you expect
Set A = [1, 5] and B = [2, 3] and hit −. Naively you might write "the smallest is 1 - 2 = -1 and the largest is 5 - 3 = 2, so the answer is [-1, 2]." The widget disagrees — it shows [-2, 3].
The rule is [a, b] - [c, d] = [a - d,\ b - c]. Notice the swap: the left endpoint uses d (the right endpoint of B), and the right endpoint uses c (the left endpoint of B). To see why, rewrite subtraction as addition of the negative: A - B = A + (-B).
Why: negating an interval flips its endpoints, because if y \in [c, d] then -y \in [-d, -c]. The left endpoint becomes -d, not -c, since -d \le -c whenever c \le d.
Now apply the addition rule to [a, b] + [-d, -c] and you get [a + (-d),\ b + (-c)] = [a - d,\ b - c]. The swap is forced on you by the minus sign.
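The negate-then-add derivation translates directly into code (a sketch; the helper names `neg_interval` and `sub_intervals` are mine):

```python
def neg_interval(B):
    """-[c, d] = [-d, -c]: negation flips the endpoints."""
    c, d = B
    return (-d, -c)

def sub_intervals(A, B):
    """[a, b] - [c, d] = [a, b] + [-d, -c] = [a - d, b - c]."""
    a, b = A
    neg_lo, neg_hi = neg_interval(B)
    return (a + neg_lo, b + neg_hi)

print(sub_intervals((1, 5), (2, 3)))  # (-2, 3), matching the widget
```

The swap of c and d never has to be written explicitly; it falls out of the endpoint flip in `neg_interval`.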
The consequence is counterintuitive: subtraction grows the width. The width of A - B is (b - c) - (a - d) = (b - a) + (d - c), which is the sum of the two widths — exactly the same growth as addition. Errors compound whether you add or subtract, even though you might hope subtraction "cancels" them.
For our example, A - B = [1 - 3,\ 5 - 2] = [-2, 3]. The width is 5, which equals the width of A (4) plus the width of B (1). Not -1 to 2, which would have width 3 — that answer is too tight, and using it would understate the true uncertainty.
Multiplication has to check all four corners
Set A = [-2, 3] and B = [-1, 4] and hit ×. The four corner products are ac = 2, ad = -8, bc = -3, bd = 12. The minimum is -8 and the maximum is 12, so A \cdot B = [-8, 12].
Why all four? Because multiplication is not monotone once negatives enter the picture. For positive-only intervals the answer is simpler: if both A and B sit entirely in [0, \infty), then A \cdot B = [ac, bd] — smallest times smallest for the floor, largest times largest for the ceiling, just like addition. But as soon as one interval crosses zero, a small negative times a small positive might beat a large positive times a large positive for the minimum, or a negative times a negative might dominate the maximum.
Interval Multiplication
For any two intervals [a, b] and [c, d] (with a \le b and c \le d):

[a, b] \cdot [c, d] = [\min(ac, ad, bc, bd),\ \max(ac, ad, bc, bd)]

You must evaluate all four products — there is no shortcut that works for every sign pattern.
You can case-split by signs to skip the min and max in many cases. If A \subseteq [0, \infty) and B \subseteq [0, \infty) the answer is [ac, bd]. If A \subseteq (-\infty, 0] and B \subseteq (-\infty, 0] the answer is [bd, ac] (two negatives give a positive product, and the larger magnitudes give the larger product). Each interval can be nonnegative, nonpositive, or straddle zero, so there are seven more sign patterns beyond those two clean ones, and the uniform "check all four" rule is the one that works with no case analysis at all — it is also exactly what the widget computes.
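The "check all four corners" rule is a one-liner in Python (a sketch; the function name `mul_intervals` is mine):

```python
def mul_intervals(A, B):
    """[a, b] * [c, d]: min and max over the four corner products."""
    a, b = A
    c, d = B
    corners = (a * c, a * d, b * c, b * d)
    return (min(corners), max(corners))

# The example from the text: [-2, 3] * [-1, 4].
print(mul_intervals((-2, 3), (-1, 4)))  # (-8, 12)

# On positive-only intervals the general rule agrees with the shortcut [ac, bd]:
print(mul_intervals((1, 2), (3, 4)))  # (3, 8)
```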
One more surprise: squaring is not the same as A \cdot A when A crosses zero. For A = [-1, 2], the product A \cdot A computes \min(1, -2, -2, 4) = -2 and \max = 4, giving [-2, 4]. But the set of actual squares \{x^2 : x \in [-1, 2]\} is [0, 4] — you can never get a negative square. The interval [-2, 4] is an overestimate that comes from treating the two copies of A as independent, as if the left factor could be -1 while the right factor were 2. Interval arithmetic is conservative by design: it gives a set that contains the true answer, but sometimes the true answer is smaller.
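The squaring surprise is easy to reproduce. Below, `mul_intervals` is the corner rule from above and `square_interval` is an illustrative helper (mine, not the widget's) that computes the true range of x^2:

```python
def mul_intervals(A, B):
    a, b = A
    c, d = B
    corners = (a * c, a * d, b * c, b * d)
    return (min(corners), max(corners))

def square_interval(A):
    """Exact range of x**2 for x in [a, b]."""
    a, b = A
    if a <= 0 <= b:                      # interval straddles zero: min is 0
        return (0, max(a * a, b * b))
    lo, hi = sorted((a * a, b * b))      # otherwise both endpoint squares
    return (lo, hi)

A = (-1, 2)
print(mul_intervals(A, A))   # (-2, 4): treats the two copies as independent
print(square_interval(A))    # (0, 4): the true set of squares
```

The gap between the two outputs is exactly the dependency problem: the corner rule cannot see that both factors are the same x.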
Where this matters: error propagation in numerical computing
When a physicist writes g = 9.8 \pm 0.1\,\mathrm{m/s^2}, they are describing an interval: g \in [9.7, 9.9]. Plug that into the period of a pendulum, T = 2\pi\sqrt{L/g}, and you want to know the interval of possible T values. Interval arithmetic gives you a guaranteed enclosure — a range that is certain to contain the true period, with no statistical assumptions about how the error is distributed.
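Because T = 2\pi\sqrt{L/g} is monotonically decreasing in g, the enclosure for the pendulum example needs only the two endpoints — a sketch, assuming an exactly known length L = 1.0 m:

```python
import math

def period(L, g):
    """Pendulum period T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(L / g)

# g in [9.7, 9.9]; T decreases as g grows, so the endpoints swap:
# the largest g gives the smallest T, and vice versa.
g_lo, g_hi = 9.7, 9.9
L = 1.0
T_interval = (period(L, g_hi), period(L, g_lo))
print(T_interval)  # every possible period lies in this range
```

The endpoint swap here is the same phenomenon as in subtraction: any operation that is decreasing in an input flips that input's endpoints.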
Implementations of the IEEE 1788 interval-arithmetic standard, and Python libraries such as mpmath (via its mpmath.iv interval type), use exactly these rules with one extra precaution: instead of computing a + c in floating-point and hoping for the best, they round the lower endpoint down and the upper endpoint up. The interval you get back is a little wider than the mathematical answer, but it is rigorously guaranteed to contain every value the true computation could produce. When a Mars lander's trajectory has to hit a window 50 km wide from 200 million km away, this kind of rigorous error tracking is not optional.
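Real libraries flip the hardware rounding mode per endpoint; a simple conservative stand-in you can write in plain Python (3.9+) is to nudge each computed endpoint outward by one ulp with math.nextafter — a sketch, not how production interval libraries do it:

```python
import math

def add_outward(A, B):
    """Interval addition with one-ulp outward widening of each endpoint."""
    a, b = A
    c, d = B
    lo = math.nextafter(a + c, -math.inf)  # nudge lower endpoint down
    hi = math.nextafter(b + d, math.inf)   # nudge upper endpoint up
    return (lo, hi)

# 0.1 and 0.2 are not exactly representable, yet the enclosure still holds:
lo, hi = add_outward((0.1, 0.1), (0.2, 0.2))
print(lo < 0.1 + 0.2 < hi)  # True
```

The widening costs at most one ulp per endpoint per operation, which is the price of a guarantee that no rounding error can push the true value outside the interval.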
The playground above is the whole story in miniature: pick two intervals, pick an operator, read off the enclosure. Every computer algebra system that does certified arithmetic is running the same three rules on enormous chains of operations — the errors compound the way subtraction and multiplication say they will, and the output set tells you exactly what you can and cannot conclude.