When you first meet primes, the definition seems to welcome 1 right in. A prime is "a number whose only positive divisors are 1 and itself." Well, the only positive divisors of 1 are 1 and 1. By that reading, 1 is prime. So why do textbooks, exam boards, and every working mathematician exclude it? Why does the definition always come with the extra clause "greater than 1"?
The short answer: because keeping 1 out makes every other theorem about primes vastly cleaner. The long answer is a story about what primes are for.
The definition, read carefully
The standard definition of a prime number is:
A positive integer p is prime if p > 1 and the only positive divisors of p are 1 and p.
The "p > 1" is not a historical accident — it is a deliberate exclusion. Without it, the Fundamental Theorem of Arithmetic fails.
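In code, the exclusion is a single guard clause. A minimal trial-division sketch (the function name `is_prime` is mine, not a library call):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test; the n <= 1 guard is the 'p > 1' clause."""
    if n <= 1:                 # excludes 1 (a unit) and everything below it
        return False
    d = 2
    while d * d <= n:          # a composite n has a divisor at most sqrt(n)
        if n % d == 0:
            return False
        d += 1
    return True

print([n for n in range(1, 20) if is_prime(n)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```

Delete the guard and `is_prime(1)` returns `True`, and everything downstream of the Fundamental Theorem starts to wobble.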
The Fundamental Theorem demands unique factorisation
The Fundamental Theorem of Arithmetic says:
Every integer n \geq 2 can be written as a product of primes in exactly one way, up to the order of the factors.
The word exactly is doing almost all the work. Consider 12. Its prime factorisation is 2 \times 2 \times 3. That is the factorisation — there is only one.
Now pretend 1 is prime. Suddenly there are infinitely many factorisations of 12:
- 2 \times 2 \times 3
- 1 \times 2 \times 2 \times 3
- 1 \times 1 \times 2 \times 2 \times 3
- 1 \times 1 \times 1 \times 2 \times 2 \times 3
- \ldots
Every insertion of 1 gives a "new" prime factorisation. Uniqueness is gone. The Fundamental Theorem becomes false as stated, and you would have to rewrite it as: "Every integer \geq 2 can be written as a product of primes, unique up to the order of factors and up to how many 1s you throw in." That is uglier, and more importantly, less useful. The 1s do not carry information — they are noise.
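The uniqueness claim, and how 1s destroy it, is easy to check mechanically. A trial-division sketch (`factorise` is my name for the helper):

```python
from math import prod

def factorise(n: int) -> list[int]:
    """Return the canonical prime factorisation of n >= 2, smallest factor first."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:      # peel off each prime factor as often as it divides
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                  # whatever remains is itself prime
        factors.append(n)
    return factors

print(factorise(12))           # [2, 2, 3]: the one and only factorisation

# If 1 counted as prime, padding with 1s would yield endless "new" factorisations,
# all multiplying back to 12 and all carrying zero extra information:
for k in range(1, 5):
    assert prod([1] * k + factorise(12)) == 12
```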
The deeper reason: primes and units
Number theorists classify positive integers into three groups:
- Units. Numbers with a multiplicative inverse in the integers. In \mathbb{Z}_{> 0}, the only unit is 1. (In \mathbb{Z} with negatives allowed, the units are \pm 1.)
- Primes. Numbers that are not units and have no non-trivial factorisations.
- Composites. Numbers that factor non-trivially.
Units and primes play completely different roles. A unit multiplies other numbers without changing their "essential structure." A prime is an atom of multiplication — you cannot break it into smaller factors. Mixing the two roles into one category is like calling the number 0 both an additive identity and a "special number whose only divisor-in-addition is 0 itself." Yes, the second clause is true, but lumping 0 with ordinary numbers breaks things.
The same holds for 1. It is a unit. Declaring it prime would create a category with two fundamentally different kinds of member.
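The three-way split can be written down directly. A sketch following the text's classification of positive integers (the function name `classify` is mine):

```python
def classify(n: int) -> str:
    """Sort a positive integer into the three groups from the text."""
    if n == 1:
        return "unit"          # the only unit among the positive integers
    if all(n % d != 0 for d in range(2, int(n ** 0.5) + 1)):
        return "prime"         # no non-trivial factorisation exists
    return "composite"         # factors non-trivially

print({n: classify(n) for n in range(1, 11)})
```

The point of the exercise: `classify` never has to hedge, because each positive integer lands in exactly one group.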
Other theorems that also demand 1 \neq \text{prime}
Once you start looking, a long list of theorems quietly assume 1 is not prime:
- Euclid's infinitude of primes. The proof forms N = p_1 p_2 \cdots p_r + 1 and argues that any prime factor of N is distinct from every p_i, because each p_i leaves remainder 1 when dividing N. If 1 were prime, it would already sit in every list of primes and yet also divide N, so the conclusion "N has a prime factor not on the list" would need a permanent exception carved out for 1.
- The Prime Number Theorem. \pi(N) \sim N / \ln N, where \pi(N) counts the primes up to N. Counting 1 as prime shifts \pi(N) up by one. That is harmless asymptotically, but it puts an off-by-one into every exact formula and table built on the prime-counting function, and it feeds an illegal factor into the analytic machinery (the Euler product, next) from which the theorem is derived.
- The Riemann zeta function's Euler product. \zeta(s) = \prod_p (1 - p^{-s})^{-1}. If p = 1 is allowed, the factor (1 - 1^{-s})^{-1} = 1/0 makes the product undefined.
- Wilson's theorem. (p - 1)! \equiv -1 \pmod{p} for prime p. For p = 1, (1 - 1)! = 0! = 1, and modulo 1 there is only one residue class, so every integer is congruent to every other. The congruence 1 \equiv -1 \pmod 1 therefore holds, but vacuously: at p = 1 the theorem says nothing at all.
Every one of these is a major result, and every one is stated cleanly only because 1 is excluded from the primes.
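Euclid's construction from the first bullet can be run directly. A sketch (the helper name `new_prime_factor` is mine); the smallest divisor d \geq 2 of any integer \geq 2 is automatically prime:

```python
from math import prod

def new_prime_factor(primes: list[int]) -> int:
    """Euclid's step: N = p1*...*pr + 1 has a prime factor outside the list."""
    N = prod(primes) + 1
    d = 2
    while N % d != 0:          # find the smallest divisor >= 2 of N; it is prime
        d += 1
    return d

print(new_prime_factor([2, 3, 5]))             # 2*3*5 + 1 = 31, itself prime -> 31
print(new_prime_factor([2, 3, 5, 7, 11, 13]))  # 30031 = 59 * 509 -> 59
```

The returned factor is never one of the inputs, because dividing N by any p_i leaves remainder 1.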
Historical wobble
It is worth knowing that 1's status has not always been settled. Well into the early 20th century, serious mathematicians, Derrick Norman Lehmer among them, did list 1 among the primes; Lehmer's 1914 table of primes starts at 1. But by the mid-20th century the consensus had hardened: excluding 1 makes the theorems clean, including 1 makes them messy, and the "is 1 prime?" question is not a deep ontological question but a definitional convenience. The convention won because it pays off in every corollary.
So when you read an older text that calls 1 prime, you are not witnessing mathematical truth changing — you are watching a convention get refined.
The test you can apply
A reliable mental test: a prime is a number that acts as an atom under multiplication. An atom passes two tests: it cannot be decomposed into smaller factors, and it contributes something, meaning that removing it from a product changes the product. The number 1 fails the second test. Multiplying by 1 changes nothing, so 1 does not belong in any list of multiplicative atoms. The atoms of multiplication are 2, 3, 5, 7, 11, 13, \ldots; 1 is something else, the neutral element of multiplication, not one of its atoms.
What about 0?
For completeness: 0 is also not prime. 0 has infinitely many divisors (every integer divides 0), and 0 \cdot n = 0 for every n, so 0 can never appear in a factorisation of a non-zero integer. 0 is its own special object in number theory, with its own rules (it is the additive identity, the absorbing element under multiplication, and has no multiplicative inverse).
Primes live strictly in \mathbb{Z}_{\geq 2}.
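Both claims about 0 are one-line checks:

```python
# Every positive integer divides 0, so 0 has infinitely many divisors...
assert all(0 % n == 0 for n in range(1, 1000))

# ...and 0 absorbs under multiplication, so it can never appear in a
# factorisation of a non-zero integer.
assert all(0 * n == 0 for n in range(-1000, 1000))
```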
One-line takeaway
1 is excluded from the primes so that every integer \geq 2 has exactly one prime factorisation. Include 1, and every number has infinitely many "factorisations" differing only by how many 1s you prepend — and almost every theorem about primes breaks. The exclusion is not a technicality. It is what makes number theory possible.
Related: Number Theory Basics · Sieve of Eratosthenes — Composites Vanish in Waves · Why √2 is Irrational · Why the Euclidean Algorithm Terminates · Number Systems