The first time you see the notation [a, b) — square bracket on the left, round bracket on the right — your brain is right to flinch. It looks like someone started a closed interval, got distracted by a phone call, and finished it as an open one. A typo promoted to mathematics.
It is not a typo. The asymmetry is the whole point. Half-open intervals exist because some of the most common things you do on the real line — bucket numbers into bins, slice a list, build a probability distribution — need to include the start and exclude the end. Not sometimes. Constantly. If mathematics did not have [a, b), you would have to invent it within the first hour of writing real code or doing real statistics.
Let us go through why.
The partition problem
Here is the cleanest motivation. Take the real line and cut it into unit-length bins:

\ldots,\ [-1, 0),\ [0, 1),\ [1, 2),\ [2, 3),\ \ldots
Every real number belongs to exactly one bin. The number 1.7 sits in [1, 2). The number 2.0 sits in [2, 3) — not in [1, 2), because the right endpoint is excluded.
Try the same thing with closed intervals. If you use [0, 1] and [1, 2] and [2, 3], then the number 1 is in two bins at once. That breaks counting. If you want a histogram of exam marks and a student scored exactly 60, do they go in the 50–60 bucket or the 60–70 bucket? Both? Neither? With open intervals (0, 1) and (1, 2), the integers 1, 2, 3, \ldots are in no bin at all — they fall through the cracks.
Half-open intervals are the only way to tile \mathbb{R} into finite-width pieces where every point has exactly one home. That is the deep reason the convention exists.
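The unit-bin assignment above is just the floor function; a minimal sketch (the helper name bin_of is mine, not a standard API):

```python
import math

def bin_of(x: float) -> int:
    # The unit-width bin [n, n+1) containing x has n = floor(x).
    # A boundary point goes to the bin that starts at it: 2.0 -> bin 2.
    return math.floor(x)

print(bin_of(1.7))   # 1, since 1.7 is in [1, 2)
print(bin_of(2.0))   # 2, since 2.0 is in [2, 3), not [1, 2)
print(bin_of(-0.5))  # -1, since -0.5 is in [-1, 0)
```

Every real input lands in exactly one bin: the partition property, in one line of arithmetic.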
Python slicing
Open a Python REPL and try this:
```python
a = [10, 20, 30, 40, 50]
a[0:3]  # -> [10, 20, 30]
```
The slice a[0:3] returns elements at indices 0, 1, 2. Not index 3. The convention is: start inclusive, end exclusive. Exactly the half-open interval [0, 3) applied to array indices.
This is not an arbitrary choice by Guido van Rossum. It makes arithmetic clean:
- len(a[i:j]) == j - i. Simple subtraction gives the length. If the end were inclusive, you would need j - i + 1, and every slicing expression would carry a hidden "off by one" risk.
- a[0:n] + a[n:] == a. The list splits into two pieces with no overlap and no gap. The element at index n belongs to exactly one of them (the second one). That is the partition property again.
- You can iterate for i in range(n): and know it runs n times, over 0, 1, \ldots, n-1. The number of steps is the difference of endpoints.
Every language with half-open slicing — Python, Rust, Go, Kotlin, modern C++ iterators — gets these three clean properties for free. Every language with fully-closed indexing has to write awkward +1s everywhere.
Time buckets
Imagine you run a log service. You want to know how many requests arrived in each minute of the day. Minute zero is from 00{:}00{:}00 to 00{:}01{:}00. Where does a request at exactly 00{:}01{:}00{.}000 go — the first minute or the second?
The answer everyone has converged on: the second. Time buckets are half-open: [00{:}00,\ 00{:}01), [00{:}01,\ 00{:}02), and so on. A timestamp at the boundary belongs to the bucket that starts there. This is why SQL date_trunc, Pandas resample, and every monitoring tool (Prometheus, Grafana, CloudWatch) count a request at 00{:}01{:}00{.}000 as belonging to minute one, not minute zero.
Imagine the alternative. If midnight on Tuesday belonged to "Monday's bucket", then a day that "starts at midnight" would include a point that also belongs to the previous day. Every billing system, every calendar, every log aggregator would have to decide one-off which side of every midnight each second belongs to. The half-open convention settles it once and for all: a day is [\text{midnight}_d,\ \text{midnight}_{d+1}). The start is in, the next start is out.
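In epoch seconds, the half-open minute bucket is again floor division; a minimal sketch (minute_bucket is an illustrative name, not a library function):

```python
def minute_bucket(epoch_seconds: float) -> int:
    # A timestamp belongs to the half-open minute [60*m, 60*(m+1)).
    # Floor division sends a boundary timestamp to the bucket that
    # starts there: second 60.000 exactly lands in minute one.
    return int(epoch_seconds // 60)

print(minute_bucket(59.999))  # 0 -> still inside minute zero
print(minute_bucket(60.0))    # 1 -> boundary belongs to the bucket it starts
```

The same floor-division trick truncates to hours, days, or any fixed-width window: the start is in, the next start is out.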
Histogram bins in statistics
When you draw a histogram of student heights or cricket strike rates or Diwali sales receipts, you bin the data. Statistical tools (R, NumPy, Excel's FREQUENCY, every textbook) use half-open bins by default. A value that lands exactly on a bin boundary goes into the bin whose lower edge it matches.
This matters because real data has ties. If you use closed intervals [150, 160] and [160, 170] for heights in centimetres, every student who is exactly 160 cm tall is counted twice. Your bars sum to more than the number of students. Your percentages go above 100\%. Everyone downstream of you has a bad day.
Half-open intervals give you a partition, and a partition is exactly what "counting each thing once" requires.
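A hand-rolled counter makes the "each value counted once" property visible (histogram here is my sketch, not a library call):

```python
def histogram(values, edges):
    # Half-open bins [edges[k], edges[k+1]). A value on a boundary goes
    # into the bin whose lower edge it matches, so it is counted exactly once.
    counts = [0] * (len(edges) - 1)
    for v in values:
        for k in range(len(counts)):
            if edges[k] <= v < edges[k + 1]:
                counts[k] += 1
                break
    return counts

heights = [155, 160, 160, 162, 170]
print(histogram(heights, [150, 160, 170, 180]))  # [1, 3, 1] -- sums to 5
```

Both students at exactly 160 cm land in the [160, 170) bin and nowhere else, so the bars sum to the number of students. (NumPy's np.histogram uses the same half-open convention, except that its last bin is closed on the right so the maximum value is not dropped.)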
Probability densities
For a continuous random variable with density f, the probability P(X \in A) is the integral of f over A. Since a single point has zero width, P(X = a) = 0, so P(X \in [a, b]) = P(X \in [a, b)) = P(X \in (a, b]) — they are all equal.
But for a cumulative distribution function (CDF), the convention F(x) = P(X \le x) means P(X \in (a, b]) = F(b) - F(a). The right endpoint is in, the left endpoint is out. This is the opposite half-open direction from slicing — and it's chosen because it makes the CDF right-continuous, which is what you want for numerical work.
The point is: both directions of half-open show up naturally. The notations [a, b) and (a, b] are not a pair of typos; they are a pair of tools, and which one you reach for depends on which endpoint you need to include.
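The CDF identity P(X \in (a, b]) = F(b) - F(a) is easy to exercise with a distribution whose CDF has a closed form. A sketch using the exponential distribution with rate 1, where F(x) = 1 - e^{-x}:

```python
import math

def F(x: float) -> float:
    # CDF of the exponential distribution with rate 1: F(x) = 1 - e^(-x).
    return 1.0 - math.exp(-x) if x > 0 else 0.0

# P(X in (a, b]) = F(b) - F(a): right endpoint in, left endpoint out.
a, b = 1.0, 2.0
p = F(b) - F(a)
print(p)  # e^(-1) - e^(-2), about 0.2325
```

Because the distribution is continuous, the same number is also P(X \in [a, b]) and P(X \in [a, b)); the (a, b] convention only starts to matter when the distribution has point masses.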
Borel sets and measurability
One last place. In real analysis, the Borel \sigma-algebra on \mathbb{R} — the collection of "reasonable" sets you can measure — is the smallest \sigma-algebra containing all half-open intervals [a, b). You could have started with closed intervals or open intervals and gotten the same \sigma-algebra, but half-open is the version that plays nicest with Stieltjes integrals and with the CDF definition above.
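One way to see that the three starting families generate the same \sigma-algebra: each kind of interval is a countable union or intersection of half-open ones, for example

```latex
(a, b) = \bigcup_{n \ge 1} \left[a + \tfrac{1}{n},\; b\right),
\qquad
[a, b] = \bigcap_{n \ge 1} \left[a,\; b + \tfrac{1}{n}\right).
```

Since a \sigma-algebra is closed under countable unions and intersections, any \sigma-algebra containing all the half-open intervals also contains all the open and closed ones, and vice versa.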
When mathematicians choose the building blocks of measure theory, they pick half-open intervals. That is about as official a stamp of approval as a notation can get.
So what was going on when you first saw [a, b)?
Your flinch was reasonable. Notation that breaks symmetry should raise an eyebrow. The response is: yes, it breaks symmetry, and that broken symmetry is the feature. Closed on one side, open on the other, is the shape of "start included, next start excluded" — and that shape is what you need to tile, to slice, to bin, to bucket, and to measure without a point ever being double-counted or dropped.
The real mistake would be to avoid [a, b). Then you would face a choice every time you partitioned something: which closed interval wins the tie? Which open interval loses the integer? Half-open intervals exist so you never have to answer that question.