In short

The limit of f(x) as x approaches a is the single value that f(x) gets arbitrarily close to as x gets close to a, written \lim_{x \to a} f(x) = L. The limit exists if and only if the left-hand limit (approaching from below) and the right-hand limit (approaching from above) both exist and are equal. A function can have a limit at a point even if the function is not defined there — the limit cares about the approach, not the arrival.

Take the function

f(x) = \frac{x^2 - 1}{x - 1}

and try to evaluate it at x = 1. The numerator is 1 - 1 = 0 and the denominator is 1 - 1 = 0. You get \frac{0}{0}, which is not a number. The function is not defined at x = 1.

But watch what happens when you evaluate f at points near 1:

| $x$ | $f(x) = \dfrac{x^2 - 1}{x - 1}$ |
| --- | --- |
| $0.9$ | $1.9$ |
| $0.99$ | $1.99$ |
| $0.999$ | $1.999$ |
| $1.001$ | $2.001$ |
| $1.01$ | $2.01$ |
| $1.1$ | $2.1$ |

From the left, f(x) is heading toward 2. From the right, f(x) is also heading toward 2. The function never reaches 2 at x = 1 — it is not defined there — but it gets as close as you want. If you pick x within 0.001 of 1, then f(x) is within 0.001 of 2. If you pick x within 0.0001 of 1, then f(x) is within 0.0001 of 2.
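The table of values can be reproduced with a few lines of Python (a quick numerical sketch; the name `f` is just a label for the function above):

```python
# f(x) = (x^2 - 1)/(x - 1), undefined at x = 1 but computable nearby.
def f(x):
    return (x**2 - 1) / (x - 1)

# Approach x = 1 from below and from above.
for x in [0.9, 0.99, 0.999, 1.001, 1.01, 1.1]:
    print(f"f({x}) = {f(x):.6f}")
```

Each step closer to 1 pulls the output a matching step closer to 2, from both sides.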

The number 2 is the limit of f(x) as x approaches 1.

The algebra explains why. Factor the numerator: x^2 - 1 = (x - 1)(x + 1). So

f(x) = \frac{(x-1)(x+1)}{x-1} = x + 1 \quad \text{for } x \neq 1

The function f is identical to the line y = x + 1 everywhere except at the single point x = 1, where it has a hole. The limit fills the hole.

The graph of $f(x) = \frac{x^2 - 1}{x - 1}$ is the line $y = x + 1$ with a hole at $(1, 2)$. The function is not defined at $x = 1$, but the closer $x$ gets to $1$, the closer $f(x)$ gets to $2$. That is the limit.

This is the core idea. A limit does not ask "what is f(a)?" — it asks "what is f(x) heading toward as x gets close to a?" The two questions can have different answers, or one might be defined when the other is not.

The intuitive definition

Limit (intuitive)

The limit of f(x) as x approaches a is L, written

\lim_{x \to a} f(x) = L

if f(x) can be made as close to L as you like by choosing x sufficiently close to a (but not equal to a).

The phrase "as close as you like" is doing real work. It does not mean "pretty close" or "close enough for engineering." It means: for every degree of closeness you demand, there is a way to achieve it. Want f(x) within 0.01 of L? Choose x within some distance of a, and it works. Want within 0.0001? Tighten the distance. Want within 10^{-100}? Still possible. No matter how tight your demand, the function can meet it — that is what a limit means.

Three critical points:

The value at a does not matter. The limit only cares about the behaviour of f near a, not at a itself. The function might not even be defined at a (as in the example above), or it might be defined but have a different value from the limit. What happens at a is a separate question from what happens near a.

x never equals a. The definition explicitly excludes x = a. You are always looking at points approaching a but never arriving. This is why the \frac{0}{0} problem at x = 1 did not stop you — you never actually evaluated f at 1.

The limit must be a single number. If f(x) bounces between two values as x approaches a, there is no limit. If f(x) heads toward different values from the left and the right, there is no limit. The limit exists only when there is one definite value being approached.

Left-hand and right-hand limits

The last point above leads to a refinement. Sometimes a function behaves differently on the two sides of a point. To capture this, you define one-sided limits.

One-sided limits

The left-hand limit of f(x) as x approaches a (from below) is written

\lim_{x \to a^-} f(x) = L^-

and means: f(x) \to L^- as x approaches a through values less than a.

The right-hand limit of f(x) as x approaches a (from above) is written

\lim_{x \to a^+} f(x) = L^+

and means: f(x) \to L^+ as x approaches a through values greater than a.

Take the function

g(x) = \begin{cases} x + 1 & \text{if } x < 2 \\ x + 3 & \text{if } x \geq 2 \end{cases}

As x approaches 2 from the left, g(x) approaches 2 + 1 = 3.

As x approaches 2 from the right, g(x) approaches 2 + 3 = 5.

The left-hand limit is 3 and the right-hand limit is 5. They disagree. The function jumps at x = 2, and the two-sided limit does not exist.
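The two one-sided limits can be probed numerically (a sketch; `g` mirrors the piecewise definition above):

```python
# g jumps at x = 2: x + 1 below 2, x + 3 at and above 2.
def g(x):
    return x + 1 if x < 2 else x + 3

# Shrink the distance h from both sides of x = 2.
for h in [0.1, 0.01, 0.001]:
    print(f"g(2 - {h}) = {g(2 - h):.3f}   g(2 + {h}) = {g(2 + h):.3f}")
```

The left-hand values settle toward 3 while the right-hand values settle toward 5, so no single number is approached.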

The piecewise function $g(x)$ jumps from $3$ to $5$ at $x = 2$. The open circle at $(2, 3)$ means the function does not take that value. The filled circle at $(2, 5)$ means $g(2) = 5$. The left-hand limit is $3$ and the right-hand limit is $5$. Because they differ, the two-sided limit $\lim_{x \to 2} g(x)$ does not exist.

When does the limit exist?

Existence of a two-sided limit

The limit \lim_{x \to a} f(x) = L exists if and only if both one-sided limits exist and are equal:

\lim_{x \to a^-} f(x) = \lim_{x \to a^+} f(x) = L

If the one-sided limits exist but are not equal, or if either one-sided limit fails to exist, then the two-sided limit does not exist.

This gives you a practical test: to check whether a limit exists at a point, check the left-hand limit and the right-hand limit separately, then compare them.

The three standard scenarios:

Both sides agree. \lim_{x \to a^-} f(x) = \lim_{x \to a^+} f(x) = L. The limit exists and equals L. This is the "normal" case — most functions you meet at this stage are continuous, meaning the two sides always agree.

The sides disagree. \lim_{x \to a^-} f(x) = L_1 and \lim_{x \to a^+} f(x) = L_2 with L_1 \neq L_2. The limit does not exist. The function has a jump discontinuity at a, like the piecewise function above.

One or both sides blow up or oscillate. If f(x) \to \pm\infty as x \to a from either side, the limit does not exist (as a finite number). If f(x) oscillates without settling down — like \sin(1/x) as x \to 0 — the limit does not exist either.

The case of \sin(1/x) near x = 0

Take f(x) = \sin(1/x). As x approaches 0 from the right, 1/x grows without bound. So \sin(1/x) oscillates between -1 and +1 faster and faster, never settling on any single value.

The graph of $f(x) = \sin(1/x)$ near $x = 0$. As $x \to 0$, the oscillations become infinitely rapid. The function bounces between $-1$ and $+1$ without ever approaching a single value. The limit does not exist — not because it goes to infinity, but because it cannot make up its mind.

No matter how close to 0 you look, you can always find points where \sin(1/x) = 1 and other points where \sin(1/x) = -1. The function is perpetually indecisive. Neither the left-hand limit nor the right-hand limit exists, so the two-sided limit certainly does not.
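The indecision can be exhibited directly. The sample points 1/(\pi/2 + 2\pi n) and 1/(3\pi/2 + 2\pi n) are chosen to land exactly on peaks and troughs of the sine wave; as n grows they crowd toward 0 (a sketch in Python):

```python
import math

# Arbitrarily close to 0, sin(1/x) still hits both +1 and -1.
for n in [1, 10, 1000]:
    x_peak = 1 / (math.pi / 2 + 2 * math.pi * n)       # sin(1/x) = +1 here
    x_trough = 1 / (3 * math.pi / 2 + 2 * math.pi * n)  # sin(1/x) = -1 here
    print(f"x = {x_peak:.2e}: sin(1/x) = {math.sin(1 / x_peak):+.6f}   "
          f"x = {x_trough:.2e}: sin(1/x) = {math.sin(1 / x_trough):+.6f}")
```

However tight a window around 0 you choose, both +1 and -1 are attained inside it, so no limit can exist.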

Worked examples

Example 1: A limit that exists at a point where the function is undefined

Evaluate \displaystyle\lim_{x \to 3} \frac{x^2 - 9}{x - 3}.

Step 1. Try direct substitution: \frac{9 - 9}{3 - 3} = \frac{0}{0}. The function is not defined at x = 3.

Why: direct substitution is always worth trying first. If it gives a finite number, that number is the limit (for elementary functions). If it gives \frac{0}{0}, the limit might still exist — you just need to work harder.

Step 2. Factor the numerator: x^2 - 9 = (x - 3)(x + 3).

\frac{x^2 - 9}{x - 3} = \frac{(x-3)(x+3)}{x-3} = x + 3 \quad \text{for } x \neq 3

Why: cancelling the (x - 3) factor is valid because you are taking a limit as x \to 3, not evaluating at x = 3. In the limit, x is always different from 3, so x - 3 \neq 0.

Step 3. Take the limit of the simplified expression.

\lim_{x \to 3} (x + 3) = 6

Why: x + 3 is a continuous function — its limit as x \to 3 is just 3 + 3 = 6.

Step 4. Verify numerically by checking a value close to 3: f(3.01) = \frac{9.0601 - 9}{0.01} = \frac{0.0601}{0.01} = 6.01. Heading toward 6.

Result: \displaystyle\lim_{x \to 3} \frac{x^2 - 9}{x - 3} = 6.

The graph of $\frac{x^2 - 9}{x - 3}$ is the line $y = x + 3$ with a hole at $x = 3$. The limit at $x = 3$ is $6$ — the $y$-value where the hole is. The line passes through the point $(3, 6)$ in every sense except actually being defined there.

This is the most common pattern in limit problems: a \frac{0}{0} form that can be resolved by cancelling a common factor. The cancellation reveals what the function is "trying to be" at the problem point.
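The numerical check from Step 4 can be extended to both sides (a sketch; the approach values are arbitrary):

```python
# (x^2 - 9)/(x - 3) is undefined at x = 3, but nearby values head toward 6.
def f(x):
    return (x**2 - 9) / (x - 3)

for x in [2.9, 2.99, 2.999, 3.001, 3.01, 3.1]:
    print(f"f({x}) = {f(x):.6f}")
```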

Example 2: A limit evaluated using one-sided limits

Evaluate \displaystyle\lim_{x \to 0} \frac{|x|}{x}.

Step 1. Write |x| using its piecewise definition: |x| = x if x > 0, and |x| = -x if x < 0.

Why: the absolute value sign hides a piecewise function. To check whether the limit exists, you need to examine the left and right sides separately — and the piecewise form makes each side explicit.

Step 2. Compute the right-hand limit. For x > 0: \frac{|x|}{x} = \frac{x}{x} = 1.

\lim_{x \to 0^+} \frac{|x|}{x} = 1

Why: when x is positive, |x| = x, and the fraction simplifies to 1 everywhere. A constant function has a constant limit.

Step 3. Compute the left-hand limit. For x < 0: \frac{|x|}{x} = \frac{-x}{x} = -1.

\lim_{x \to 0^-} \frac{|x|}{x} = -1

Why: when x is negative, |x| = -x, and the fraction simplifies to -1 everywhere.

Step 4. Compare: \lim_{x \to 0^+} \frac{|x|}{x} = 1 \neq -1 = \lim_{x \to 0^-} \frac{|x|}{x}.

Result: The limit \displaystyle\lim_{x \to 0} \frac{|x|}{x} does not exist.

The graph of $\frac{|x|}{x}$ — also known as the **signum function**. It equals $+1$ for all positive $x$ and $-1$ for all negative $x$, with a jump at the origin. The left-hand and right-hand limits exist but disagree, so the two-sided limit does not exist.

This function is the sign detector: it outputs +1 for positive inputs and -1 for negative inputs. At the boundary x = 0, there is no single value being approached — the function makes a clean jump from -1 to +1. The one-sided limits are well-defined; the two-sided limit is not.
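The jump shows up immediately in a numerical probe (a sketch; `sgn` is just our label for |x|/x):

```python
# |x|/x: +1 for positive x, -1 for negative x, undefined at 0.
def sgn(x):
    return abs(x) / x

# The sign never wavers, no matter how close to 0 we get.
for h in [0.1, 0.001, 1e-9]:
    print(f"sgn(-{h}) = {sgn(-h):+.0f}   sgn({h}) = {sgn(h):+.0f}")
```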

When the limit equals f(a)

In many "nice" functions — polynomials, rational functions (away from zeros of the denominator), trigonometric functions, exponentials — direct substitution works: \lim_{x \to a} f(x) = f(a).

This is not a coincidence. It is the definition of continuity: a function is continuous at a if the limit exists, the function is defined at a, and the two are equal. In other words, continuous functions are exactly the functions where the limit question and the evaluation question give the same answer.
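For a continuous function such as a polynomial, the approach and the evaluation visibly agree (a sketch; the cubic `p` is an arbitrary example):

```python
# For the polynomial p(x) = x^3 - 2x + 5, the limit at a = 2 is just p(2).
def p(x):
    return x**3 - 2 * x + 5

a = 2
print(f"p({a}) = {p(a)}")  # direct substitution
for h in [0.1, 0.001, 1e-6]:
    print(f"p({a - h}) = {p(a - h):.8f}   p({a + h}) = {p(a + h):.8f}")
```

The nearby values converge on exactly the number that substitution gives, which is what continuity at a means.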

The interesting limit problems — the ones that require algebra, cancellation, or careful analysis — are precisely the problems where f is not continuous at a, or not defined there. The \frac{0}{0} form is the signature of such a problem.


Going deeper

If you came here to learn what a limit is, how to check it from both sides, and when it exists, you have it — you can stop here. The rest of this section introduces the formal epsilon-delta definition and explains why it matters.

The epsilon-delta definition: making "as close as you like" precise

The intuitive definition uses phrases like "as close as you like" and "sufficiently close." These are vivid, but mathematicians need a definition that can be used in proofs — one where every word has a precise meaning and nothing is left to interpretation. The epsilon-delta definition achieves this.

The idea: turn the phrase "f(x) can be made as close to L as you like by choosing x sufficiently close to a" into a game between two players.

Player 1 (the challenger) picks a tolerance \varepsilon > 0 — how close to L they demand f(x) to be. This tolerance can be as small as they like: 0.01, 0.0001, 10^{-100}.

Player 2 (the responder) must then find a radius \delta > 0 such that whenever x is within \delta of a (but not equal to a), the value f(x) is within \varepsilon of L.

If Player 2 can always win — no matter how small the \varepsilon that Player 1 chooses — then the limit is L.

Limit (epsilon-delta)

\lim_{x \to a} f(x) = L means: for every \varepsilon > 0, there exists a \delta > 0 such that

0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon

Read this carefully. The |x - a| < \delta part says "x is within \delta of a." The 0 < |x - a| part excludes x = a itself — the limit only cares about nearby points, not the point itself. The |f(x) - L| < \varepsilon part says "f(x) is within \varepsilon of L."

The epsilon-delta picture. The horizontal $\varepsilon$-band (shaded) is the tolerance around $L$ that Player 1 demands. The vertical $\delta$-band is the response: as long as $x$ stays inside the $\delta$-band (but not at $a$ itself), the curve stays inside the $\varepsilon$-band. For the limit to exist, a valid $\delta$ must exist for *every* $\varepsilon$, no matter how small.

An epsilon-delta verification

Verify that \lim_{x \to 4} (2x + 1) = 9 using the epsilon-delta definition.

You need: given any \varepsilon > 0, find a \delta > 0 such that 0 < |x - 4| < \delta implies |(2x + 1) - 9| < \varepsilon.

Simplify what you need: |(2x + 1) - 9| = |2x - 8| = 2|x - 4|.

So you need 2|x - 4| < \varepsilon, which means |x - 4| < \varepsilon/2.

Choose \delta = \varepsilon/2. Then whenever 0 < |x - 4| < \delta:

|(2x + 1) - 9| = 2|x - 4| < 2\delta = 2 \cdot \frac{\varepsilon}{2} = \varepsilon

For any \varepsilon > 0, the choice \delta = \varepsilon/2 works. The limit is confirmed: \lim_{x \to 4}(2x + 1) = 9.

This is the simplest possible epsilon-delta proof, but it shows the pattern: start with what you need (|f(x) - L| < \varepsilon), simplify it until it looks like |x - a| < \text{something}, and then choose \delta to be that something. For a linear function, the algebra is one line. For more complicated functions, the chain of inequalities is longer, but the structure is identical.
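The choice \delta = \varepsilon/2 can also be spot-checked numerically (a sketch; the sampled offsets inside the \delta-band are arbitrary):

```python
# Spot-check: for f(x) = 2x + 1 near a = 4, delta = eps/2 keeps f within eps of 9.
def f(x):
    return 2 * x + 1

for eps in [0.1, 0.01, 1e-6]:
    delta = eps / 2
    # Sample points with 0 < |x - 4| < delta, including near the band's edge.
    for frac in [-0.999, -0.5, 0.5, 0.999]:
        x = 4 + frac * delta
        assert abs(f(x) - 9) < eps, (eps, x)
print("delta = eps/2 met every sampled epsilon challenge")
```

A passing spot-check is not a proof, of course; the algebra above is what guarantees every point of the band, not just the sampled ones.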

Why the formal definition matters

You might wonder why anyone would bother with \varepsilon and \delta when the intuitive definition seems perfectly clear. The reason is that intuition can mislead you on edge cases. Is \lim_{x \to 0} x\sin(1/x) = 0? Your intuition might hesitate — \sin(1/x) oscillates wildly, and multiplying by x makes the oscillations smaller but does not eliminate them.

The epsilon-delta definition settles it cleanly: |x \sin(1/x) - 0| = |x||\sin(1/x)| \leq |x|. So for any \varepsilon > 0, choose \delta = \varepsilon. Then 0 < |x| < \delta implies |x\sin(1/x)| \leq |x| < \varepsilon. Done — the limit is 0.
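The bound |x\sin(1/x)| \leq |x| is easy to watch numerically (a sketch):

```python
import math

# |x*sin(1/x)| stays trapped below |x|, so it is forced toward 0 with x.
for x in [0.1, 0.001, 1e-6]:
    value = x * math.sin(1 / x)
    print(f"x = {x:.0e}:  |x*sin(1/x)| = {abs(value):.3e}  <=  |x| = {x:.0e}")
```

The oscillations never stop, but their amplitude is squeezed to nothing.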

Without the formal definition, you would be arguing by hand-waving. With it, you have a proof that no one can dispute. That is the power of making "as close as you like" precise.

Where this leads next

The limit is the foundation on which the rest of calculus is built. Every major concept in calculus — continuity, the derivative, the integral — is defined using limits. The most direct next steps: