In short

Real analysis is the rigorous version of the calculus you already know. The epsilon-delta definition of a limit replaces "f(x) gets close to L as x gets close to a" with a precise challenge-and-response: for every tolerance \varepsilon > 0 you demand, there must exist a distance \delta > 0 such that every x within \delta of a produces an f(x) within \varepsilon of L. Continuity at a point is just this definition applied with L = f(a). Uniform continuity strengthens the condition: the same \delta must work everywhere, not just point-by-point.

Pick up the function f(x) = 3x + 1 and the point x = 2. By eye, \lim_{x \to 2} f(x) = 7. The reasoning: plug in x = 2, get f(2) = 7. Of course the limit is 7.

Now pick up f(x) = \frac{\sin x}{x} at x = 0. Plug in x = 0 and you get 0/0 — no answer at all. But the limit does exist and equals 1. You computed it in the article on limits by squeezing.

Now pick up a nastier function:

f(x) = \begin{cases} x \sin(1/x) & \text{if } x \neq 0 \\ 0 & \text{if } x = 0. \end{cases}

The function wiggles violently as x approaches 0 — infinitely often, in fact. At every scale you zoom in, you find more oscillation. Does this function have a limit at x = 0? Is it continuous there? The intuitive pictures — "f(x) settles down to one number as x closes in" or "you can draw the graph without lifting your pen" — break under this kind of pressure. A function can wiggle infinitely and still have a limit. A function can be defined everywhere and still fail to be continuous in ways no picture reveals.
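Both halves of that claim can be seen numerically. The sketch below is plain Python (the sampling grid and interval are my own arbitrary choices): it counts sign changes of f on a small interval near 0 and checks that the values nevertheless stay small.

```python
import math

# The piecewise function above: f(x) = x*sin(1/x) for x != 0, f(0) = 0.
def f(x):
    return x * math.sin(1 / x) if x != 0 else 0.0

# Sample the interval (0.001, 0.01) finely. The function changes sign
# hundreds of times there, yet its magnitude obeys |f(x)| <= |x| <= 0.01.
xs = [0.001 + i * 0.009 / 100000 for i in range(100001)]
sign_changes = sum(1 for u, v in zip(xs, xs[1:]) if f(u) * f(v) < 0)
print("sign changes on (0.001, 0.01):", sign_changes)
print("largest |f(x)| seen:", max(abs(f(x)) for x in xs))
```

Zooming in further (say, to (0.0001, 0.001)) would reveal roughly ten times as many sign changes, with amplitudes ten times smaller.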

The response of mathematics, around the 1850s, was to rebuild the foundation. Stop relying on pictures. Turn "close to" into a statement with quantifiers and inequalities — something a computer, or a sceptic, could check. The resulting framework is called real analysis, and this article is your first look at it.

You will still use the pictures — they are indispensable for intuition. But every picture will now have a precise companion: a statement that says what "close" means, in language that does not depend on eyes or hands.

The problem with "close"

The informal definition of a limit says: \lim_{x \to a} f(x) = L means "f(x) gets close to L as x gets close to a."

Listen carefully to what this sentence actually commits to. How close is close? Does "gets close" mean it eventually reaches L? That it comes within 0.1 of L? Within 0.0001? Nothing in the sentence pins it down. The phrase "gets close to" is an action verb without a destination.

The breakthrough — usually credited to Cauchy in the 1820s and made precise by Weierstrass in the 1850s — was to recast the definition as a game between two players.

Player 1 (the sceptic) picks a tolerance. They say: "I want f(x) to be within 0.01 of L. Can you guarantee that?"

Player 2 (the one asserting the limit) picks a distance. They say: "Yes. As long as x is within 0.003 of a (but not equal to a), I promise f(x) is within 0.01 of L."

Then Player 1 tries a tighter tolerance. "What about 0.0001?" Player 2 has to find a new — probably smaller — \delta that works for this tighter \varepsilon. And so on.

The limit \lim_{x \to a} f(x) = L holds exactly when Player 2 can always win — no matter what tolerance Player 1 names, Player 2 can produce a matching distance. There is no cheating, no eventual victory: for every \varepsilon, some \delta must exist.
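The game can be phrased as a numerical spot check. The helper `check_limit` below is my own illustrative sketch, not standard library code; keep in mind that testing finitely many points can expose a losing strategy for Player 2, but can never prove the limit. Only an algebraic argument does that.

```python
def check_limit(f, a, L, delta_for, epsilons=(0.1, 0.01, 0.001)):
    """Play the epsilon-delta game numerically.

    delta_for is Player 2's strategy: a function mapping epsilon to delta.
    Returns False if some sampled x with 0 < |x - a| < delta lands outside
    the tolerance band; True means the strategy survived this finite test.
    """
    for eps in epsilons:
        delta = delta_for(eps)
        if delta <= 0:
            return False
        for i in range(1, 1000):  # sample both sides of a, excluding a itself
            for x in (a + delta * i / 1000, a - delta * i / 1000):
                if abs(f(x) - L) >= eps:
                    return False
    return True

# Player 2's strategy for f(x) = 3x + 1 at a = 2: delta = epsilon / 3.
print(check_limit(lambda x: 3 * x + 1, a=2, L=7, delta_for=lambda e: e / 3))
```

Trying the same strategy with a wrong limit value, say L = 8, makes `check_limit` return False immediately: points near a = 2 map near 7, outside any tight band around 8.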

This is the epsilon-delta definition. In symbols:

Epsilon-delta definition of limit

Let f be a function defined on an open interval around a (except possibly at a itself). We say

\lim_{x \to a} f(x) = L

if for every \varepsilon > 0 there exists a \delta > 0 such that

0 < |x - a| < \delta \;\;\Longrightarrow\;\; |f(x) - L| < \varepsilon.

Read it slowly.

This definition does not contain the word "close" anywhere. It has been replaced by explicit inequalities. Everything that follows in real analysis is built on this one statement.

The geometry of the definition

There is a picture for the epsilon-delta definition, and it is worth drawing.

On the y-axis, mark the interval (L - \varepsilon, L + \varepsilon). This is the "tolerance band" — a horizontal strip of width 2\varepsilon around the horizontal line y = L. The sceptic says: "stay inside this strip."

On the x-axis, you want to find an interval (a - \delta, a + \delta) — the "safe zone" — such that every x in this interval (except possibly x = a itself) has its f(x) inside the strip.

Epsilon-delta picture for a limit

The epsilon-delta picture. The red band is the output tolerance $(L - \varepsilon, L + \varepsilon)$. The grey band is the input window $(a - \delta, a + \delta)$. The definition demands that every $x$ inside the grey band (except possibly $a$) maps to a point inside the red band. For a well-chosen $\delta$, the curve passes through the intersection cleanly.

For the limit to exist, you must be able to do this for every horizontal strip the sceptic draws. If the strip gets thinner, you shrink your vertical band to match. If no \delta works for some \varepsilon, the limit fails to exist at a.

A first proof from the definition

Time to use the definition on a specific function. Take f(x) = 3x + 1 and prove \lim_{x \to 2} f(x) = 7.

Example 1: linear function, straight from the definition

Claim: For f(x) = 3x + 1, \lim_{x \to 2} f(x) = 7.

Step 1. Write down what needs to be shown.

You must show: for every \varepsilon > 0 there exists a \delta > 0 such that 0 < |x - 2| < \delta implies |f(x) - 7| < \varepsilon.

Why: the proof strategy is always to produce a \delta as a function of the given \varepsilon. Start by writing down what you have to deliver.

Step 2. Simplify |f(x) - 7|.

|f(x) - 7| = |(3x + 1) - 7| = |3x - 6| = 3|x - 2|.

Why: you want to relate the output gap |f(x) - 7| to the input gap |x - 2| so you can figure out how much the input has to be controlled.

Step 3. Make 3|x - 2| < \varepsilon.

This happens exactly when |x - 2| < \varepsilon/3.

Why: divide both sides of the desired inequality by 3. This gives an explicit bound on how close x has to be to 2.

Step 4. Pick \delta = \varepsilon/3.

If 0 < |x - 2| < \varepsilon/3, then |f(x) - 7| = 3|x - 2| < 3 \cdot (\varepsilon/3) = \varepsilon. The implication holds.

Why: the choice of \delta comes from Step 3 — you pick whatever input tolerance is needed to guarantee the output tolerance. Here, that is exactly \varepsilon/3.

Step 5. Confirm that \delta > 0 for every \varepsilon > 0.

Since \varepsilon > 0, also \varepsilon/3 > 0. So for every \varepsilon > 0 you have produced a valid positive \delta. The definition is satisfied.

Result: \lim_{x \to 2} (3x + 1) = 7.

The line $y = 3x + 1$ passes through $(2, 7)$. The dashed horizontal lines at $y = 6.5$ and $y = 7.5$ represent the tolerance band for $\varepsilon = 0.5$. The corresponding input window on the $x$-axis is $(2 - \delta, 2 + \delta)$ with $\delta = 1/6 \approx 0.167$. For a smaller $\varepsilon$, you shrink $\delta$ proportionally.

A few things are worth noticing. First, the proof is constructive — it does not just argue that some \delta exists, it tells you exactly what \delta to pick: \delta = \varepsilon/3. Second, the slope 3 is what shows up in the denominator of \delta. If the slope had been 100, you would need \delta = \varepsilon/100. The steeper the function, the more tightly you have to control the input. Third, you never actually evaluated f at x = 2; the limit is about the values near 2, not at 2 itself.

From limits to continuity

Once you have the epsilon-delta definition of a limit, continuity is just two words.

Continuity at a point (rigorous)

A function f is continuous at the point a if f(a) is defined and

\lim_{x \to a} f(x) = f(a).

Spelled out via the epsilon-delta definition: for every \varepsilon > 0 there exists a \delta > 0 such that

|x - a| < \delta \;\;\Longrightarrow\;\; |f(x) - f(a)| < \varepsilon.

Notice the differences from the limit definition: the hypothesis 0 < |x - a| < \delta has become simply |x - a| < \delta, because x = a is now allowed (it causes no trouble: if x = a, then |f(x) - f(a)| = 0 < \varepsilon automatically), and the target L has been replaced by the actual value f(a).

A function is continuous on an interval I if it is continuous at every point of I. This is the rigorous version of "you can draw the graph without lifting your pen." But unlike the pen picture, the rigorous version handles weird cases. The function x \sin(1/x) extended by f(0) = 0 is continuous at 0 — even though no pen on Earth could literally draw its infinite wiggles — because for every \varepsilon > 0, you can pick \delta = \varepsilon and the definition works:

|x \sin(1/x) - 0| = |x| \cdot |\sin(1/x)| \leq |x| < \delta = \varepsilon.

The picture lies (or at least, cannot be drawn). The definition does not.
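The choice \delta = \varepsilon can be spot-checked on sampled points. This is a numerical illustration only; the inequality chain above is the actual proof.

```python
import math

# f(x) = x*sin(1/x) for x != 0, extended by f(0) = 0.
def f(x):
    return x * math.sin(1 / x) if x != 0 else 0.0

# Continuity at 0 with delta = eps: for sampled |x| < delta, |f(x) - 0| < eps.
for eps in (0.1, 0.001, 1e-6):
    delta = eps
    ok = all(abs(f(delta * k / 1000)) < eps for k in range(-999, 1000))
    print(f"eps = {eps:g}: delta = eps holds on all sampled x -> {ok}")
```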

A harder example

Take f(x) = x^2 and prove \lim_{x \to 3} f(x) = 9.

Example 2: $x^2$ at $x = 3$

Claim: For f(x) = x^2, \lim_{x \to 3} x^2 = 9.

Step 1. Simplify |f(x) - 9|.

|x^2 - 9| = |x - 3| \cdot |x + 3|.

Why: factoring is the first move because the quantity |x - 3| is what you control with \delta. Isolating that factor tells you what the remaining factor |x + 3| needs to be bounded by.

Step 2. Control the second factor |x + 3|.

The factor |x + 3| depends on x, which is awkward. But if you restrict x to be close to 3 — say, within distance 1 — then x lies in the interval (2, 4), and so x + 3 lies in the interval (5, 7). In particular, |x + 3| < 7 whenever |x - 3| < 1.

Why: to bound the product |x-3| \cdot |x+3|, you need a ceiling on the second factor. Restricting to |x - 3| < 1 gives you one.

Step 3. Produce the \delta.

Assume |x - 3| < 1 (so |x + 3| < 7). Then

|x^2 - 9| = |x - 3| \cdot |x + 3| < |x - 3| \cdot 7.

For this to be less than \varepsilon, you need |x - 3| < \varepsilon/7. So pick

\delta = \min\left(1, \frac{\varepsilon}{7}\right).

Why: the \min combines both constraints — |x - 3| < 1 to keep the bound on |x+3| valid, and |x-3| < \varepsilon/7 to keep the product below \varepsilon.

Step 4. Verify the implication.

Suppose 0 < |x - 3| < \delta. Then in particular |x - 3| < 1, so |x + 3| < 7. And |x - 3| < \varepsilon/7. Multiplying:

|x^2 - 9| = |x - 3| \cdot |x + 3| < \frac{\varepsilon}{7} \cdot 7 = \varepsilon.

The definition is satisfied.

Result: \lim_{x \to 3} x^2 = 9.

The parabola $y = x^2$ passes through $(3, 9)$. For $\varepsilon = 0.5$, the output band is $(8.5, 9.5)$. The $\delta$ you need is $\min(1, 0.5/7) = 1/14 \approx 0.071$. Notice that for $x^2$ the required $\delta$ depends on both $\varepsilon$ *and* the point $a$ — the parabola is steeper away from the origin, so proving a limit near $x = 100$ would need a much smaller $\delta$ for the same $\varepsilon$.

The last comment in the figure caption is the seed of the next big idea. For linear functions, the \delta that works at one point works everywhere. For x^2, the \delta you need depends on where you are — the function is steeper far from the origin, so the same output tolerance requires a tighter input tolerance. That dependence on location is what motivates the next definition.
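The recipe \delta = \min(1, \varepsilon/7) from Example 2 can be exercised numerically. For a large \varepsilon the \min is capped at 1; for a small \varepsilon the \varepsilon/7 branch takes over. Again an illustrative sketch, with my own function name:

```python
def delta_for(eps):
    """The delta produced in Example 2 for lim_{x -> 3} x^2 = 9."""
    return min(1.0, eps / 7)

for eps in (10.0, 0.5, 0.001):
    d = delta_for(eps)
    ok = all(abs((3 + d * k / 1000) ** 2 - 9) < eps
             for k in range(-999, 1000) if k != 0)
    print(f"eps = {eps}: delta = {d:.4g}, implication holds on samples -> {ok}")
```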

Uniform continuity

Continuity at a point is a local condition: for each point a, you produce a \delta that may depend on both \varepsilon and a. Usually it genuinely does depend on a, not just on \varepsilon.

Sometimes, though, you can do better. Sometimes one \delta works for every point a in the domain at once. That stronger condition has its own name.

Uniform continuity

A function f is uniformly continuous on an interval I if for every \varepsilon > 0 there exists a \delta > 0 such that for all pairs of points x, y \in I,

|x - y| < \delta \;\;\Longrightarrow\;\; |f(x) - f(y)| < \varepsilon.

The crucial change from ordinary continuity: the \delta depends only on \varepsilon, not on the location. A single \delta must work everywhere on the interval simultaneously.

Compare the two definitions carefully:

The quantifier order has changed. In ordinary continuity, "for each a" comes before "there exists \delta" — so \delta can depend on a. In uniform continuity, "for each \varepsilon" comes first and then "there exists \delta" — and only after that do you quantify over points.

An example where the two definitions part ways. The function f(x) = x^2 is continuous on all of \mathbb{R}, but not uniformly continuous on \mathbb{R}.

Why? Consider the pair of points x_n = n and y_n = n + \frac{1}{n}. Their distance is |x_n - y_n| = 1/n, which goes to zero as n \to \infty. But

|f(x_n) - f(y_n)| = |n^2 - (n + 1/n)^2| = |{-2} - 1/n^2| = 2 + 1/n^2 > 2.

So even when you shrink the input distance to arbitrarily small values by going far out on the real line, the output distance stays above 2. No single \delta can make the output tolerance smaller than, say, \varepsilon = 1 for all pairs. So x^2 is not uniformly continuous on \mathbb{R}.

The reason is that x^2 gets steeper and steeper as x grows. Far from the origin, even tiny input changes produce large output changes. Uniform continuity fails when a function can get arbitrarily steep.
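The witness pair from the argument above is easy to tabulate. As n grows, the input gap 1/n shrinks toward 0 while the output gap stays pinned near 2 (exactly 2 + 1/n^2):

```python
# x_n = n and y_n = n + 1/n: input distance 1/n, output distance 2 + 1/n^2.
for n in (10, 100, 1000):
    x, y = float(n), n + 1 / n
    print(f"n = {n:>4}: |x - y| = {1 / n:.1e}, |x^2 - y^2| = {abs(x * x - y * y):.6f}")
```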

When does uniform continuity hold? A famous theorem (the Heine–Cantor theorem), which you will prove in a real analysis course, says: if f is continuous on a closed, bounded interval [a, b], then f is automatically uniformly continuous on [a, b]. Closed-and-bounded is sometimes called compact. Compact intervals are special: continuity on them is always uniform. That is why x^2 fails on the whole real line but works fine on [0, 10] — on a bounded interval, the slope cannot run away to infinity.


Going deeper

You now have the core definitions that power real analysis. The rest of this section collects a few deeper points that connect the definitions to the larger theory.

Sequential continuity

There is an equivalent way to state continuity that uses sequences instead of epsilon-delta. A function f is continuous at a if and only if: for every sequence x_n \to a, the sequence f(x_n) \to f(a).

This is sometimes more convenient for proofs. Instead of chasing down an explicit \delta, you argue: let x_n be any sequence approaching a; show f(x_n) approaches f(a). The two formulations are logically equivalent — a theorem of real analysis. The harder direction is proved by contradiction: if the epsilon-delta condition fails for some \varepsilon, then for each n you can pick a point x_n within 1/n of a whose image stays at least \varepsilon away from f(a), giving a sequence x_n \to a for which f(x_n) does not approach f(a).

Sequential continuity is often how you detect failure of continuity. If you can find a sequence x_n \to a for which f(x_n) does not approach f(a), the function is not continuous at a.
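For instance, drop the damping factor x: the function g(x) = \sin(1/x) with g(0) = 0 is not continuous at 0, and a single well-chosen sequence exposes it. A minimal sketch, with the sequence chosen so that 1/x_n always lands at a peak of the sine:

```python
import math

# g(x) = sin(1/x) with g(0) = 0: the undamped oscillation.
def g(x):
    return math.sin(1 / x) if x != 0 else 0.0

# x_n = 1/(2*pi*n + pi/2) converges to 0, but g(x_n) = sin(2*pi*n + pi/2) = 1
# for every n -- far from g(0) = 0. So g is not continuous at 0.
for n in (1, 10, 100):
    xn = 1 / (2 * math.pi * n + math.pi / 2)
    print(f"x_{n} = {xn:.3e}, g(x_{n}) = {g(xn):.6f}")
```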

Why "real" analysis?

The word "real" in real analysis refers to the real numbers — as opposed to complex numbers, which give rise to complex analysis. But it also hints at something deeper: the definitions above are really about the real number line, with its ordering, its absolute value, and its completeness property.

The completeness property says: every non-empty set of real numbers that has an upper bound has a least upper bound. This single property is what makes the real numbers work — it is why limits converge, why continuous functions on [a, b] attain their maximum, why the intermediate value theorem is true. Before the nineteenth century, mathematicians had been using these facts on faith; completeness made them provable.
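Completeness is also what makes simple numerical procedures meaningful. A bisection search for the least upper bound of \{x : x^2 < 2\} converges precisely because that sup exists in \mathbb{R} (it is \sqrt{2}, which is not rational). A minimal sketch:

```python
# Approximate sup S for S = {x : x^2 < 2} by bisection.
# Invariant: lo always lies in S; hi is always an upper bound for S.
lo, hi = 1.0, 2.0
for _ in range(50):
    mid = (lo + hi) / 2
    if mid * mid < 2:
        lo = mid   # mid is in S, so the sup is at least mid
    else:
        hi = mid   # mid is an upper bound, so the sup is at most mid
print(f"sup S is approximately {hi:.12f}")
```

After 50 halvings the bracketing interval has width 2^{-50}, far below the printed precision.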

You will meet completeness in a first course on real analysis. It is the soil everything else grows in.

Historical context, briefly

The shift from informal calculus to rigorous analysis occupied most of the nineteenth century, from Cauchy in the 1820s to Weierstrass and the constructions of the real numbers in the 1870s. Two centuries of spectacularly successful but unrigorous calculus had accumulated a list of paradoxes and counterexamples — functions that were continuous everywhere but differentiable nowhere, infinite series that could be rearranged to converge to any desired sum, "theorems" that were actually false for exotic functions.

Each counterexample forced the definitions to be sharpened. The epsilon-delta definition and its descendants were the end of that process — a foundation rigorous enough that the exotic functions were handled cleanly and no new counterexamples broke things.

Two centuries on, real analysis underlies not only calculus but also probability theory, functional analysis, the mathematical side of physics, and the theoretical foundations of machine learning. Every single theorem in those fields traces its rigour back to the definition in the box above.

Where this leads next

You now know how real analysis approaches the definitions you have been using intuitively. The next articles build on this foundation to tackle more sophisticated objects — functions that are themselves limits of other functions, and power series that express complicated functions as polynomials.