Write the sequence \tfrac{1}{1}, \tfrac{1}{2}, \tfrac{1}{3}, \tfrac{1}{4}, \tfrac{1}{5}, \ldots and watch the values drop: 1, 0.5, 0.33, 0.25, 0.2, \ldots Keep going and the numbers keep shrinking — \tfrac{1}{100} = 0.01, \tfrac{1}{1000} = 0.001, \tfrac{1}{10^6} = 0.000001. The bigger n gets, the smaller \tfrac{1}{n} gets, and there is no floor — \tfrac{1}{n} can be made as close to zero as you like by picking n big enough.
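A three-line check makes the shrinkage concrete (Python is an illustrative choice here; the text names no language):

```python
# 1/n for a few rapidly growing n: the reciprocals collapse toward zero.
values = {n: 1 / n for n in (1, 2, 10, 100, 1000, 10**6)}
for n, v in values.items():
    print(f"1/{n} = {v}")
```

Every value is positive no matter how large n gets; the sequence shrinks without ever reaching zero.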

This is the most important reflex in late-school and college mathematics. Not a theorem to memorise — a gut feel you should have whenever a denominator runs off to infinity. Every limit, every series, every rate argument, every \epsilon-\delta proof runs on this single idea: as n grows, \tfrac{1}{n} shrinks toward zero.

The behaviour in one picture

Plot y = \tfrac{1}{n} for integer n from 1 onward and you get a sequence of dots that starts at 1 and falls, at first steeply and then gently, toward the x-axis. The curve never actually touches zero — because \tfrac{1}{n} > 0 for every positive n — but it gets arbitrarily close.

[Interactive plot: the curve y = \tfrac{1}{n} for n from 1 to 20, with a draggable point whose readouts show the current n and \tfrac{1}{n}; the curve approaches the horizontal axis without ever touching it.]
The curve $y = \tfrac{1}{n}$ for $n \geq 1$. Drag the red point rightward and $\tfrac{1}{n}$ shrinks toward zero. At $n = 10$, it is $0.1$. At $n = 100$ (off the right edge) it would be $0.01$. At $n = 10^6$ it is $10^{-6}$. The curve approaches the $x$-axis asymptotically — as close as you like, never quite there.

The formal statement (one line)

For every positive number \epsilon, there exists an integer N such that \tfrac{1}{n} < \epsilon whenever n > N.

Unpacking: pick any small target (\epsilon = 0.001, say). Then \tfrac{1}{n} < 0.001 whenever n > 1000. If you pick a smaller target \epsilon = 10^{-9}, you need n > 10^9 — but the N exists, and once you pass it, \tfrac{1}{n} stays below \epsilon forever. This is the rigorous version of "\tfrac{1}{n} \to 0 as n \to \infty."

The proof is a one-liner: \tfrac{1}{n} < \epsilon rearranges to n > \tfrac{1}{\epsilon}, so take N to be any integer bigger than \tfrac{1}{\epsilon}. The rearrangement is the proof — no calculus required.
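The rearrangement can be checked mechanically. A minimal sketch (an illustration, not part of the text) takes N = \lceil 1/\epsilon \rceil and confirms that \tfrac{1}{n} sits below \epsilon just past it:

```python
import math

def threshold(eps):
    """A convenient N with 1/n < eps for every n > N: any integer >= 1/eps works."""
    return math.ceil(1 / eps)

eps = 0.001
N = threshold(eps)      # 1000 for eps = 0.001
print(N, 1 / (N + 1))   # 1/(N+1) is already below eps
```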

Why: this is the statement that \tfrac{1}{n} has no positive lower bound. For any candidate floor \epsilon > 0, the sequence goes below it eventually. The only non-negative number that every positive \epsilon excludes is zero, so zero is the limit.

Where this shows up in later mathematics

The gut feel "big denominator → small fraction" is not arithmetic trivia. It powers most of calculus and analysis.

Limits. Many limits reduce to "\tfrac{1}{n} \to 0" applied to a transformed expression. For example, \displaystyle \lim_{n \to \infty} \frac{3n^2 + 5n}{2n^2 - 7} = \lim_{n \to \infty} \frac{3 + \tfrac{5}{n}}{2 - \tfrac{7}{n^2}} = \frac{3}{2}, because the \tfrac{5}{n} and \tfrac{7}{n^2} terms vanish.

The technique — divide top and bottom by the highest power of n, then let all the \tfrac{1}{n}-style terms vanish — is the standard JEE attack on "limit of a rational function."
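A numeric sanity check (the specific rational function is an illustrative choice) shows the \tfrac{1}{n}-style terms dying off:

```python
def f(n):
    # (3n^2 + 5n) / (2n^2 - 7)  ==  (3 + 5/n) / (2 - 7/n^2)  ->  3/2
    return (3 * n**2 + 5 * n) / (2 * n**2 - 7)

for n in (10, 1000, 10**6):
    print(n, f(n))   # closes in on 1.5 as n grows
```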

Series convergence. The geometric series \displaystyle \sum_{n=0}^{\infty} r^n = \frac{1}{1-r} converges when |r| < 1. Why? Because r^n behaves like "a number less than 1 raised to a big power" — which, by the same intuition, shrinks toward zero. See the entry on multiplying by less than 1 shrinks for the underlying reflex.
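Partial sums make the convergence visible. A short sketch (r = 0.5 is an arbitrary pick inside |r| < 1):

```python
def geometric_partial(r, terms):
    """Sum of r^n for n = 0, 1, ..., terms - 1."""
    return sum(r**n for n in range(terms))

r = 0.5
for terms in (5, 10, 50):
    print(terms, geometric_partial(r, terms))   # creeping up on 1/(1 - r) = 2
```

The leftover tail is governed by r^terms, which shrinks toward zero exactly as the intuition predicts.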

Calculus definitions. The derivative is a limit:

f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}

Here h \to 0 is dual to \tfrac{1}{n} \to 0 — same idea in different notation. The derivative exists when the quotient approaches a finite value as the denominator approaches zero. The "infinitesimal" intuition students develop about calculus is really just the \tfrac{1}{n}-style intuition, written with Greek letters.
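A forward-difference quotient shows the limit numerically (sin at 0, whose derivative is \cos 0 = 1, is an illustrative pick):

```python
import math

def forward_difference(f, x, h):
    """(f(x + h) - f(x)) / h: the difference quotient from the derivative definition."""
    return (f(x + h) - f(x)) / h

for h in (0.1, 0.01, 0.0001):
    print(h, forward_difference(math.sin, 0.0, h))   # settles toward cos(0) = 1
```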

Infinite-decimal expansions. The statement "0.9999\ldots = 1" has a proof that leans on exactly this fact: the "error" 1 - 0.999 \ldots 9 (with n nines) is \tfrac{1}{10^n}, and \tfrac{1}{10^n} \to 0, so the error vanishes in the limit. The recurring decimal is not "almost 1" — it is exactly 1, because the only non-negative number below every \tfrac{1}{10^n} is zero. See why 0.999\ldots = 1 for the full version.
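Exact rational arithmetic confirms the error is precisely \tfrac{1}{10^n}. A sketch using Python's fractions module:

```python
from fractions import Fraction

def error_after(n):
    """1 minus 0.999...9 with n nines, computed exactly as a fraction."""
    nines = Fraction(10**n - 1, 10**n)   # n = 3 gives 999/1000 = 0.999
    return 1 - nines

for n in (1, 3, 6):
    print(n, error_after(n))   # 1/10, 1/1000, 1/1000000
```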

How fast is the shrinkage?

The rate at which \tfrac{1}{n} shrinks is actually quite modest. It takes n = 10 to get to 0.1, n = 100 to get to 0.01, n = 10^6 to get to 10^{-6}. To halve the value, you need to double n. The sequence \tfrac{1}{n} is the textbook case of slow decay — compared to, say, \tfrac{1}{2^n}, which halves the value every time you increment n by 1.

n      \tfrac{1}{n}    \tfrac{1}{2^n}
1      1               0.5
5      0.2             0.0313
10     0.1             0.000977
20     0.05            \approx 10^{-6}
100    0.01            \approx 10^{-30}

Both sequences approach zero, but \tfrac{1}{2^n} sprints there while \tfrac{1}{n} crawls. This distinction matters in series: \displaystyle \sum \tfrac{1}{n} (the harmonic series) diverges — the sum grows without bound, even as each term shrinks — while \displaystyle \sum \tfrac{1}{2^n} converges to 2. The harmonic series is the classic caveat that "each term \to 0" is not enough for a series to converge; the terms need to shrink fast enough. \tfrac{1}{n} doesn't.
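Partial sums of the two series, side by side (a sketch; the cutoffs are arbitrary):

```python
def harmonic(terms):
    """1/1 + 1/2 + ... + 1/terms."""
    return sum(1 / n for n in range(1, terms + 1))

def geometric(terms):
    """1/2^0 + 1/2^1 + ... for the given number of terms (limit 2)."""
    return sum(1 / 2**n for n in range(terms))

print(harmonic(10**6))   # about 14.4, and still climbing (grows like log n)
print(geometric(60))     # pinned at 2 to machine precision
```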

Quick examples using the gut rule

1. Probability that a uniform random number on [0, 1] is exactly \tfrac{1}{2}. Zero. More generally, the probability of hitting any single point is zero — because you could partition [0, 1] into n equal intervals, each of length \tfrac{1}{n}, and the probability of landing in any one of them is \tfrac{1}{n}, which can be made smaller than any \epsilon. Point probabilities vanish.
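A quick simulation of the shrinking-interval argument (the interval centre 0.5, the seed, and the sample count are arbitrary choices):

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def hit_fraction(width, samples=100_000):
    """Fraction of uniform(0, 1) draws landing in an interval of the given width."""
    lo, hi = 0.5 - width / 2, 0.5 + width / 2
    return sum(lo <= random.random() <= hi for _ in range(samples)) / samples

for n in (10, 100, 1000):
    print(n, hit_fraction(1 / n))   # tracks 1/n, heading to zero
```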

2. Numerical approximation of \pi. A classic method is to inscribe a regular n-gon in a unit circle and compute its perimeter. The error — the gap between the polygon perimeter and 2\pi — shrinks like \tfrac{1}{n^2}. At n = 100 the error is about 0.001; at n = 10^6 it is about 10^{-11}. The method converges because the denominator grows.
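The perimeter of an inscribed regular n-gon in the unit circle is 2n \sin(\pi/n), so the error is easy to tabulate (a sketch; the values match the figures quoted above):

```python
import math

def inscribed_perimeter(n):
    """Perimeter of a regular n-gon inscribed in the unit circle: 2n sin(pi/n)."""
    return 2 * n * math.sin(math.pi / n)

for n in (6, 100, 10**6):
    err = 2 * math.pi - inscribed_perimeter(n)
    print(n, err)   # shrinks roughly like 1/n^2
```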

3. Arithmetic mean vs. sample size. If you take the average of n random numbers from a distribution with mean \mu, the error of your sample average shrinks like \tfrac{1}{\sqrt{n}}. Not \tfrac{1}{n} — slower — but still to zero as n \to \infty. This is why experimental measurements get more accurate with more trials: the \tfrac{1}{\sqrt{n}} error term shrinks, even if slowly.
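A simulation of the \tfrac{1}{\sqrt{n}} shrinkage (uniform(0, 1) draws, so \mu = 0.5; the seed and trial count are arbitrary):

```python
import random
import statistics

random.seed(0)  # reproducible runs

def mean_abs_error(n, trials=200):
    """Average |sample mean - 0.5| over many experiments of n uniform(0, 1) draws."""
    errors = [abs(statistics.fmean(random.random() for _ in range(n)) - 0.5)
              for _ in range(trials)]
    return statistics.fmean(errors)

for n in (10, 100, 1000):
    print(n, mean_abs_error(n))   # drops by about a factor of 3 per 10x in n
```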

In every case, the fact that some expression involving n goes to zero as n \to \infty is the pivot of the argument. Your gut should flag this the moment you see a big n in a denominator: "this will shrink." When the conclusion of a problem hinges on that shrinkage, recognising it is most of the work.

The reflex to build

When you see \tfrac{1}{n} with n large, your mental state should be: "this term is vanishing; in the limit, treat it as zero."

This should happen in under a second, without computation. Apply it to anything of the form "coefficient times \tfrac{1}{n}" or "\tfrac{\text{something}}{\text{big } n}" — the shrinkage dominates, and whatever coefficient is out front is being multiplied by a vanishingly small number.

What to remember

Every later chapter in mathematics — limits, series, calculus, analysis, probability, even elementary numerical methods — uses this fact. Internalise it now, and half of the later material will feel like a continuation of what you already know.

Related: Fractions and Decimals · Multiply by Less Than 1 Shrinks; Divide by Less Than 1 Grows — Make This a Reflex · Why Is 0.333… Exactly 1/3 and Not Just Very Close to It? · Why Dividing by a Fraction Less Than 1 Grows the Result — It Feels Backwards