In short
If you know \lim_{x \to a} f(x) = L and \lim_{x \to a} g(x) = M, then the limit of a sum is L + M, the limit of a product is LM, and the limit of a quotient is L/M (provided M \neq 0). These five rules — sum, difference, product, scalar multiple, and quotient — let you evaluate the limit of any polynomial or rational function by direct substitution.
Take the function f(x) = x^2 + 3x. What is \lim_{x \to 2} f(x)?
You could build a table of values — f(1.9) = 9.31, f(1.99) = 9.9301, f(1.999) = 9.993001 — and watch the outputs creep toward 10. But that is slow, and it tells you the answer without explaining why the answer is what it is. The table shows the limit converging; it does not explain the machinery.
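The table-of-values approach is easy to mechanise. A minimal Python sketch (the probe points mirror the ones in the text; the function name `f` is just a label):

```python
# Numerically probe lim_{x -> 2} (x^2 + 3x) by evaluating at points
# approaching 2 from below -- the table-of-values approach.

def f(x):
    return x**2 + 3*x

for x in (1.9, 1.99, 1.999, 1.9999):
    print(f"f({x}) = {f(x)}")
```

The outputs creep toward 10, which suggests the limit but, as the text says, explains nothing about why.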
Here is a better idea. You already know two simple limits:

\lim_{x \to 2} x^2 = 4 \qquad \text{and} \qquad \lim_{x \to 2} 3x = 6
If limits "respect addition" — if the limit of a sum is the sum of the limits — then you can just add:

\lim_{x \to 2} (x^2 + 3x) = \lim_{x \to 2} x^2 + \lim_{x \to 2} 3x = 4 + 6 = 10
And that is indeed the answer. But why does this work? Why should the limit of a sum be the sum of the limits? It sounds obvious, and it is true, but "sounds obvious" is not a proof.
Think of it this way. As x inches toward 2, x^2 inches toward 4 and 3x inches toward 6. You are adding two quantities that are each heading toward a target. If both of them are getting close to their targets, then their sum should be getting close to the sum of the targets — the errors just pile up. That intuition is right, and the proof below makes it airtight.
The whole point of this article is to turn that obvious-sounding claim into a theorem — and then to use it to evaluate limits of polynomials and rational functions without tables, without guessing, and without going back to the definition every time.
The five limit laws
Suppose you know two limits:

\lim_{x \to a} f(x) = L \qquad \text{and} \qquad \lim_{x \to a} g(x) = M
Then the following five rules hold.
The algebra of limits
1. Sum rule. \displaystyle\lim_{x \to a} [f(x) + g(x)] = L + M
2. Difference rule. \displaystyle\lim_{x \to a} [f(x) - g(x)] = L - M
3. Scalar multiple rule. For any constant k, \displaystyle\lim_{x \to a} [k \cdot f(x)] = kL
4. Product rule. \displaystyle\lim_{x \to a} [f(x) \cdot g(x)] = L \cdot M
5. Quotient rule. \displaystyle\lim_{x \to a} \frac{f(x)}{g(x)} = \frac{L}{M}, provided M \neq 0
Reading the rules. Each rule says the same thing in a different arithmetic context: you can move the limit inside the operation. The limit of a sum is the sum of the limits. The limit of a product is the product of the limits. The limit of a quotient is the quotient of the limits — as long as you are not dividing by zero.
The condition M \neq 0 in the quotient rule is not a technicality. When M = 0, the quotient L/M is undefined, and the limit may or may not exist — it depends on the specific functions. That situation has its own name (an indeterminate form) and its own article.
Notice that the difference rule follows immediately from combining the sum rule with the scalar multiple rule (take k = -1). So there are really four independent rules, not five. But writing the difference rule separately is convenient because you use it constantly.
Why these rules are true — the sum rule proof
The proofs all follow the same pattern. You start from the epsilon-delta definition of a limit and use a triangle-inequality argument to control the error. Here is the sum rule in full, because once you see this one, the others are natural variations on the same theme.
Claim. If \lim_{x \to a} f(x) = L and \lim_{x \to a} g(x) = M, then \lim_{x \to a} [f(x) + g(x)] = L + M.
Proof. You need to show: for every \varepsilon > 0, there is a \delta > 0 such that whenever 0 < |x - a| < \delta, the sum f(x) + g(x) is within \varepsilon of L + M. In symbols:

|(f(x) + g(x)) - (L + M)| < \varepsilon \qquad \text{whenever } 0 < |x - a| < \delta
Rewrite the left side by grouping:

|(f(x) + g(x)) - (L + M)| = |(f(x) - L) + (g(x) - M)|
The triangle inequality says |A + B| \leq |A| + |B|. Apply it:

|(f(x) - L) + (g(x) - M)| \leq |f(x) - L| + |g(x) - M|
Now use the two given limits. Since \lim_{x \to a} f(x) = L, for the tolerance \varepsilon/2 there exists a \delta_1 > 0 such that |f(x) - L| < \varepsilon/2 whenever 0 < |x - a| < \delta_1. Similarly, since \lim_{x \to a} g(x) = M, for the same tolerance \varepsilon/2 there exists a \delta_2 > 0 such that |g(x) - M| < \varepsilon/2 whenever 0 < |x - a| < \delta_2.
Take \delta = \min(\delta_1, \delta_2). Then whenever 0 < |x - a| < \delta, both conditions hold simultaneously, and:

|(f(x) + g(x)) - (L + M)| \leq |f(x) - L| + |g(x) - M| < \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon
That completes the proof. \square
The key trick is splitting \varepsilon into two halves — one for each function — and then choosing \delta small enough to satisfy both. This is the standard epsilon-delta move: you budget your total error among the pieces.
Here is a picture of what just happened. The two functions f and g are each converging to their own target. The sum f + g is converging to L + M because the two individual errors add up — and if each error is under \varepsilon/2, the total error is under \varepsilon. The choice of \delta as the minimum of \delta_1 and \delta_2 ensures that both functions are within their error budget simultaneously.
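The error-budgeting argument can be checked numerically. The sketch below uses the article's running example (f(x) = x^2, g(x) = 3x at a = 2); the specific formulas for \delta_1 and \delta_2 are one valid choice, not the only one:

```python
import random

# Sum-rule error budget for f(x) = x^2 (L = 4) and g(x) = 3x (M = 6) at a = 2:
# if each function is within eps/2 of its target, the sum is within eps of 10.

def delta_for(eps):
    # For |x - 2| < min(1, eps/10) we have |x + 2| < 5, so
    # |x^2 - 4| = |x - 2| * |x + 2| < eps/2.
    delta1 = min(1.0, eps / 10)
    # |3x - 6| = 3|x - 2| < eps/2 as soon as |x - 2| < eps/6.
    delta2 = eps / 6
    return min(delta1, delta2)  # the "min" trick from the proof

def check(eps, trials=10_000):
    d = delta_for(eps)
    for _ in range(trials):
        x = 2 + random.uniform(-d, d)
        if x == 2:
            continue  # the limit definition excludes x = a itself
        assert abs(x**2 - 4) < eps / 2      # f within its half-budget
        assert abs(3*x - 6) < eps / 2       # g within its half-budget
        assert abs((x**2 + 3*x) - 10) < eps # sum within the full budget
    return True

check(0.5)
check(0.01)
```

Taking \delta as the minimum guarantees both half-budgets hold at once, exactly as in the proof.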
The product rule proof
The product rule is slightly more involved because the product f(x) \cdot g(x) involves both functions at once, and you need to control how they interact. The trick is to add and subtract a bridge term.
Claim. If \lim_{x \to a} f(x) = L and \lim_{x \to a} g(x) = M, then \lim_{x \to a} [f(x) \cdot g(x)] = LM.
Proof. Start with what you want to make small:

|f(x) \cdot g(x) - LM|
Add and subtract the bridge term f(x) \cdot M:

|f(x) \cdot g(x) - f(x) \cdot M + f(x) \cdot M - LM|
Factor each pair:

|f(x)(g(x) - M) + M(f(x) - L)|
Apply the triangle inequality:

|f(x) \cdot g(x) - LM| \leq |f(x)| \cdot |g(x) - M| + |M| \cdot |f(x) - L|
You need |f(x)| to be bounded near x = a. Since \lim_{x \to a} f(x) = L, there is a \delta_0 such that |f(x) - L| < 1 whenever 0 < |x - a| < \delta_0, which gives |f(x)| < |L| + 1. Call this bound B = |L| + 1.
Now choose \delta_1 so that |g(x) - M| < \varepsilon/(2B) whenever 0 < |x - a| < \delta_1, and choose \delta_2 so that |f(x) - L| < \varepsilon/(2(|M| + 1)) whenever 0 < |x - a| < \delta_2. (The +1 in the denominator handles the case M = 0.)
Take \delta = \min(\delta_0, \delta_1, \delta_2). Then:

|f(x) \cdot g(x) - LM| \leq |f(x)| \cdot |g(x) - M| + |M| \cdot |f(x) - L| < B \cdot \frac{\varepsilon}{2B} + (|M| + 1) \cdot \frac{\varepsilon}{2(|M| + 1)} = \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon
That completes the proof. \square
The bridge-term trick — adding and subtracting f(x)M in the middle — is the same technique you will meet again when proving the product rule for derivatives. It converts one hard problem (controlling a product) into two easier problems (controlling each factor separately).
The need to bound |f(x)| is worth pausing on. You cannot just say "|f(x)| \to |L|" and use |L| as the bound — that is circular (you would need to know the product rule to justify it). Instead, you use the weaker fact that f is bounded near a: since f(x) \to L, the values of f(x) cannot wander far from L when x is close to a. The bound B = |L| + 1 is a blunt instrument — any finite bound would work — but it is the cleanest choice.
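The local-boundedness fact is easy to see numerically. A sketch using the article's opening example f(x) = x^2 + 3x near a = 2, where L = 10 (the value \delta_0 = 0.1 is a choice that happens to work here, not a general formula):

```python
# Once |x - 2| < 0.1, f(x) = x^2 + 3x stays within 1 of L = 10,
# so |f(x)| < |L| + 1 = 11 on that punctured neighbourhood.

def f(x):
    return x**2 + 3*x

L = 10
delta0 = 0.1  # chosen so |f(x) - L| < 1 whenever 0 < |x - 2| < delta0

xs = [2 + t/1000 for t in range(-99, 100) if t != 0]
assert all(abs(f(x) - L) < 1 for x in xs)          # f stays close to L
assert all(abs(f(x)) < abs(L) + 1 for x in xs)     # hence bounded by B = |L| + 1
```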
The quotient rule proof
Claim. If \lim_{x \to a} f(x) = L and \lim_{x \to a} g(x) = M with M \neq 0, then \lim_{x \to a} \frac{f(x)}{g(x)} = \frac{L}{M}.
Proof. Write the quotient as a product: \frac{f(x)}{g(x)} = f(x) \cdot \frac{1}{g(x)}. If you can show \lim_{x \to a} \frac{1}{g(x)} = \frac{1}{M}, the product rule finishes the job.
So the real work is proving the reciprocal limit. You need:

\left| \frac{1}{g(x)} - \frac{1}{M} \right| = \frac{|g(x) - M|}{|g(x)| \cdot |M|} < \varepsilon \qquad \text{whenever } 0 < |x - a| < \delta
The numerator |g(x) - M| goes to zero — that is the given limit. But you also need the denominator |g(x)| \cdot |M| to stay away from zero. Since M \neq 0 and g(x) \to M, the values g(x) eventually stay close to M, and in particular |g(x)| > |M|/2 for x close enough to a. Choose \delta_0 so that |g(x) - M| < |M|/2 whenever 0 < |x - a| < \delta_0; then |g(x)| > |M|/2.
With this bound:

\frac{|g(x) - M|}{|g(x)| \cdot |M|} < \frac{|g(x) - M|}{(|M|/2) \cdot |M|} = \frac{2|g(x) - M|}{M^2}
Now choose \delta_1 so that |g(x) - M| < \varepsilon M^2 / 2 whenever 0 < |x - a| < \delta_1. Take \delta = \min(\delta_0, \delta_1). Then:

\left| \frac{1}{g(x)} - \frac{1}{M} \right| < \frac{2|g(x) - M|}{M^2} < \frac{2}{M^2} \cdot \frac{\varepsilon M^2}{2} = \varepsilon
So \lim_{x \to a} \frac{1}{g(x)} = \frac{1}{M}. The product rule then gives \lim_{x \to a} \frac{f(x)}{g(x)} = L \cdot \frac{1}{M} = \frac{L}{M}. \square
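The "stay away from zero" step can be probed numerically. A sketch for g(x) = x + 4 near a = 3, so M = 7 (the choice of g and of \delta_0 is illustrative):

```python
# For g(x) = x + 4 at a = 3 (M = 7): once |g(x) - 7| < 7/2, the values g(x)
# stay above 7/2, and the reciprocal error is controlled by
# |1/g(x) - 1/7| = |g(x) - 7| / (7 |g(x)|) < |x - 3| / (7 * 3.5).

def g(x):
    return x + 4

M = 7.0
delta0 = 3.0  # any delta0 <= |M|/2 = 3.5 works here, since |g(x) - M| = |x - 3|

xs = [3 + t/100 for t in range(-299, 300) if t != 0]
for x in xs:
    assert abs(g(x) - M) < M / 2   # g stays close to M ...
    assert abs(g(x)) > M / 2       # ... hence bounded away from zero
    assert abs(1/g(x) - 1/M) < abs(x - 3) / (M * (M / 2))
```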
Limits of polynomials: just substitute
These five rules, combined with two basic facts, let you evaluate the limit of any polynomial by direct substitution. The two basic facts are:

\lim_{x \to a} c = c \quad \text{(the constant rule)} \qquad \text{and} \qquad \lim_{x \to a} x = a \quad \text{(the identity rule)}
Both are immediate from the epsilon-delta definition — for the constant, take any \delta; for the identity, take \delta = \varepsilon.
From these two, the product rule gives \lim_{x \to a} x^2 = a^2, then \lim_{x \to a} x^3 = a^3, and by induction \lim_{x \to a} x^n = a^n for every positive integer n. The scalar multiple rule gives \lim_{x \to a} c \cdot x^n = c \cdot a^n. And the sum rule chains across any number of terms (apply it repeatedly, two terms at a time).
So for any polynomial p(x) = c_n x^n + c_{n-1} x^{n-1} + \cdots + c_1 x + c_0:

\lim_{x \to a} p(x) = c_n a^n + c_{n-1} a^{n-1} + \cdots + c_1 a + c_0 = p(a)
The limit of a polynomial is the polynomial evaluated at the point. Just plug in the number. No tables, no epsilon-delta, no work. The five laws did all the heavy lifting once; now you never have to redo it.
This is a powerful result. Take p(x) = 2x^4 - 7x^2 + 3x + 5. To find \lim_{x \to 1} p(x), you compute 2(1)^4 - 7(1)^2 + 3(1) + 5 = 2 - 7 + 3 + 5 = 3. That single substitution replaces the entire epsilon-delta argument. Behind the scenes, the sum and difference rules were applied three times (once between each pair of adjacent terms), the scalar multiple rule handled the coefficients 2, -7, and 3, the product rule built up x^4 from repeated multiplication, and the constant rule handled the 5. All of that happens invisibly when you "just plug in."
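In code, "just substitute" means the limit of a polynomial is an ordinary function evaluation. A quick cross-check in plain Python (the tolerance 10h in the sanity check is ad hoc, chosen generously for this polynomial):

```python
# The limit of a polynomial at a point equals the polynomial's value there.
# Example from the text: p(x) = 2x^4 - 7x^2 + 3x + 5 at a = 1.

def p(x):
    return 2*x**4 - 7*x**2 + 3*x + 5

a = 1
substitution_value = p(a)  # the limit, by the limit laws: 3

# Numeric sanity check: nearby values approach the substitution value.
for h in (0.1, 0.01, 0.001):
    assert abs(p(a + h) - substitution_value) < 10 * h
    assert abs(p(a - h) - substitution_value) < 10 * h
```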
Limits of rational functions
A rational function is a ratio of two polynomials: r(x) = \frac{p(x)}{q(x)}. The quotient rule says:

\lim_{x \to a} r(x) = \frac{\lim_{x \to a} p(x)}{\lim_{x \to a} q(x)} = \frac{p(a)}{q(a)}, \qquad \text{provided } q(a) \neq 0
Again — just substitute. As long as the denominator is not zero at a, the limit of a rational function is the rational function evaluated at a.
When q(a) = 0, you cannot use the quotient rule directly. Two things can happen.
Case 1: p(a) = 0 and q(a) = 0. This is a 0/0 indeterminate form. The numerator and denominator both vanish at a, which means both share the factor (x - a). Factoring it out (once or more) and cancelling often resolves the limit. For example:

\lim_{x \to 1} \frac{x^2 - 1}{x - 1} = \lim_{x \to 1} \frac{(x - 1)(x + 1)}{x - 1} = \lim_{x \to 1} (x + 1) = 2
Case 2: p(a) \neq 0 and q(a) = 0. The numerator stays bounded away from zero while the denominator shrinks to zero. The ratio blows up: \lim_{x \to a} \frac{p(x)}{q(x)} = \pm\infty, or the limit does not exist if the sign differs from the left and right. For example, \lim_{x \to 0} \frac{1}{x^2} = +\infty (both sides blow up in the same direction), while \lim_{x \to 0} \frac{1}{x} does not exist as a two-sided limit (the left side gives -\infty, the right gives +\infty).
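Both cases can be probed numerically. A sketch (tolerances are ad hoc) contrasting the 0/0 case, which settles to a finite value, with the nonzero-over-zero case, which blows up:

```python
# Case 1: (x^2 - 1)/(x - 1) is 0/0 at x = 1, but the limit exists (it is 2).
# Case 2: 1/x^2 is nonzero/0 at x = 0, and the values grow without bound.

def case1(x):
    return (x**2 - 1) / (x - 1)

def case2(x):
    return 1 / x**2

# 0/0: values settle near 2 as x -> 1 (case1(1 + h) simplifies to 2 + h)
for h in (0.1, 0.01, 0.001):
    assert abs(case1(1 + h) - 2) <= h + 1e-9
    assert abs(case1(1 - h) - 2) <= h + 1e-9

# nonzero/0: values are strictly increasing and unbounded as x -> 0+
vals = [case2(10**-k) for k in range(1, 5)]
assert all(b > a for a, b in zip(vals, vals[1:]))
assert vals[-1] > 1e6
```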
The limit of a composite function
There is one more rule that does not fit neatly into the "arithmetic" category but is essential for evaluating limits in practice.
Limit of a composite function
If \lim_{x \to a} g(x) = M and f is continuous at M, then

\lim_{x \to a} f(g(x)) = f\left( \lim_{x \to a} g(x) \right) = f(M)
Reading the rule. The outer function f can be pulled outside the limit — provided f is continuous at the value the inner function is approaching. Continuity at M means that f does not have a jump or a hole at M, so what happens at M matches what happens near M.
This rule is why you can write things like:

\lim_{x \to 1} \sqrt{x^2 + 3} = \sqrt{\lim_{x \to 1} (x^2 + 3)} = \sqrt{4} = 2
The square root function is continuous at 4, so the limit passes through it. Similarly:

\lim_{x \to 2} \sin(\pi x) = \sin\left( \lim_{x \to 2} \pi x \right) = \sin(2\pi) = 0
The sine function is continuous everywhere, so the limit passes through it at any value.
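The pass-through behaviour is easy to check numerically with Python's math module. The specific inner functions below are illustrative choices of the kind the text describes (an inner limit heading to 4 through a continuous square root, and a sine evaluated near a multiple of 2π):

```python
import math

# Composite rule: if the inner function's limit is M and the outer function
# is continuous at M, the limit passes through the outer function.

def inner(x):
    return x**2 + 3  # heads to 4 as x -> 1

for h in (0.1, 0.01, 0.001):
    for x in (1 + h, 1 - h):
        # sqrt is continuous at 4, so sqrt(inner(x)) heads to sqrt(4) = 2
        assert abs(math.sqrt(inner(x)) - 2) < 2 * h

# sin is continuous everywhere; near x = 2, sin(pi * x) heads to sin(2*pi) = 0
assert abs(math.sin(math.pi * (2 + 1e-6))) < 1e-5
```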
Putting it all together
Example 1: A rational function limit
Evaluate \displaystyle\lim_{x \to 3} \frac{x^2 + 2x - 1}{x + 4}.
Step 1. Check whether direct substitution works. Evaluate the denominator at x = 3: 3 + 4 = 7 \neq 0. The denominator is nonzero, so the quotient rule applies.
Why: the quotient rule requires the limit of the denominator to be nonzero. Checking this first saves you from doing unnecessary algebra.
Step 2. Evaluate the numerator polynomial at x = 3:

\lim_{x \to 3} (x^2 + 2x - 1) = 3^2 + 2(3) - 1 = 9 + 6 - 1 = 14
Why: the limit of a polynomial is the polynomial evaluated at the point — the sum and product rules guarantee this.
Step 3. Evaluate the denominator polynomial at x = 3:

\lim_{x \to 3} (x + 4) = 3 + 4 = 7
Step 4. Apply the quotient rule:

\lim_{x \to 3} \frac{x^2 + 2x - 1}{x + 4} = \frac{14}{7} = 2
Why: the quotient rule says the limit of a ratio equals the ratio of the limits, and both limits are just the polynomials evaluated at x = 3.
Result: \displaystyle\lim_{x \to 3} \frac{x^2 + 2x - 1}{x + 4} = 2.
The picture confirms what the algebra said: the function does nothing dramatic near x = 3, so the limit is just the function value.
Example 2: When the denominator is zero — factor first, then apply the laws
Evaluate \displaystyle\lim_{x \to 2} \frac{x^2 - 4}{x - 2}.
Step 1. Try direct substitution. Numerator: 4 - 4 = 0. Denominator: 2 - 2 = 0. You get 0/0, so the quotient rule does not apply directly.
Why: the quotient rule requires the denominator's limit to be nonzero. Here it is zero, so you need to simplify first.
Step 2. Factor the numerator. Recognise x^2 - 4 = (x - 2)(x + 2).
Why: both numerator and denominator have the factor (x - 2), which is the factor causing the 0/0. Factoring exposes it.
Step 3. Cancel the common factor. For x \neq 2:

\frac{x^2 - 4}{x - 2} = \frac{(x - 2)(x + 2)}{x - 2} = x + 2
Why: limits only care about what happens near x = 2, not at x = 2. For every x \neq 2, the cancellation is valid. The two functions \frac{x^2 - 4}{x - 2} and x + 2 are identical everywhere except at x = 2 itself, so they have the same limit.
Step 4. Now apply the limit laws to the simplified function:

\lim_{x \to 2} (x + 2) = 2 + 2 = 4
Result: \displaystyle\lim_{x \to 2} \frac{x^2 - 4}{x - 2} = 4.
The graph shows a straight line with a puncture at x = 2. The limit is the y-value the line is heading toward — 4 — even though the function never actually arrives there.
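A numeric probe confirms the hole-at-a-point behaviour (plain Python; the probe points and tolerance are ad hoc):

```python
# (x^2 - 4)/(x - 2) equals x + 2 for every x != 2, and is undefined at x = 2.

def original(x):
    return (x**2 - 4) / (x - 2)

def simplified(x):
    return x + 2

for x in (1.9, 1.99, 2.01, 2.1):
    # the two functions agree away from the puncture ...
    assert abs(original(x) - simplified(x)) < 1e-9
    # ... and both approach 4 near x = 2
    assert abs(original(x) - 4) < 0.2

# at x = 2 itself, the original function is undefined (division by zero)
try:
    original(2)
    assert False, "expected ZeroDivisionError"
except ZeroDivisionError:
    pass
```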
Common confusions
- "The limit of a product is always the product of the limits." True — when both limits exist as finite numbers. If one of the limits is \pm\infty, the product rule as stated does not apply. The extended rules for infinite limits are separate results, and they have their own exceptions (like 0 \cdot \infty, which is indeterminate).
- "If substitution gives 0/0, the limit does not exist." Wrong. 0/0 is an indeterminate form, not an answer. It means the quotient rule does not apply directly, but the limit itself might very well exist — you just need to do more work (factor, rationalise, use a standard limit) to find it. Example 2 above is a case where 0/0 leads to a perfectly finite limit of 4.
- "You can always cancel common factors." You can cancel when computing limits, because limits only depend on values near the point, not at the point. But if you are evaluating the function itself (not its limit), cancelling a factor that is zero at your point changes the function — it fills in a hole that was there before. The original function \frac{x^2 - 4}{x - 2} is undefined at x = 2; the simplified x + 2 is defined there. They are different functions that happen to agree everywhere except at one point.
- "The composite function rule works for any outer function." Not quite — the outer function must be continuous at the value the inner limit approaches. If f has a jump or a removable discontinuity at M, you cannot pull f outside the limit. This is a genuine condition, not a formality.
Going deeper
If you came here to learn how to combine limits and evaluate polynomial and rational limits, you have it — you can stop here. The rest of this section is for readers who want to see the scalar multiple proof, the induction argument for polynomials, and a subtlety about one-sided limits.
Proof of the scalar multiple rule
Claim. If \lim_{x \to a} f(x) = L and k is a constant, then \lim_{x \to a} kf(x) = kL.
Proof. If k = 0, then kf(x) = 0 for all x, and the limit is 0 = kL. If k \neq 0, given \varepsilon > 0, choose \delta so that |f(x) - L| < \varepsilon / |k| whenever 0 < |x - a| < \delta. Then:

|kf(x) - kL| = |k| \cdot |f(x) - L| < |k| \cdot \frac{\varepsilon}{|k|} = \varepsilon \qquad \square
The induction argument for \lim x^n = a^n
The claim is: for every positive integer n, \lim_{x \to a} x^n = a^n.
Base case. n = 1: \lim_{x \to a} x = a, which holds by the identity limit.
Inductive step. Suppose \lim_{x \to a} x^{n-1} = a^{n-1}. Then x^n = x \cdot x^{n-1}. By the product rule:

\lim_{x \to a} x^n = \left( \lim_{x \to a} x \right) \cdot \left( \lim_{x \to a} x^{n-1} \right) = a \cdot a^{n-1} = a^n
By mathematical induction, the result holds for all n \geq 1. \square
This is the formal justification for the "just substitute" rule for polynomials: each term c_k x^k has limit c_k a^k, and the sum rule chains across all terms.
One-sided limits and the algebra
All five laws hold for one-sided limits as well. If \lim_{x \to a^+} f(x) = L and \lim_{x \to a^+} g(x) = M, then \lim_{x \to a^+} [f(x) + g(x)] = L + M, and similarly for the other rules. The proofs are identical — the epsilon-delta arguments restrict x to one side of a instead of both, but nothing else changes.
This is useful when the two-sided limit does not exist but a one-sided limit does. For instance, \lim_{x \to 0^+} \frac{1}{x} = +\infty while \lim_{x \to 0^-} \frac{1}{x} = -\infty, so the two-sided limit fails but each side has a definite (infinite) behaviour. Note, though, that an infinite one-sided limit cannot be fed into the product rule as stated: \lim_{x \to 0^+} x \cdot \frac{1}{x} is a 0 \cdot \infty form, and you must simplify first (here to \lim_{x \to 0^+} 1 = 1) before the laws apply.
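A quick numeric illustration of the one-sided behaviour of 1/x (the sample points are arbitrary):

```python
# 1/x blows up to +infinity from the right and to -infinity from the left,
# so the two-sided limit at 0 does not exist.

right = [1 / h for h in (0.1, 0.01, 0.001)]      # approaching 0 from the right
left = [1 / -h for h in (0.1, 0.01, 0.001)]      # approaching 0 from the left

assert right == sorted(right)            # growing without bound, positive
assert all(v > 0 for v in right)
assert all(v < 0 for v in left)          # growing without bound, negative

# x * (1/x) simplifies to 1 for every x != 0, so its one-sided limit is 1
assert all(abs(h * (1 / h) - 1) < 1e-12 for h in (0.1, 0.01, 0.001))
```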
The squeeze theorem — a sixth "law"
There is one more limit-evaluation technique that does not fit into the arithmetic framework but is indispensable. If g(x) \leq f(x) \leq h(x) for all x near a, and \lim_{x \to a} g(x) = \lim_{x \to a} h(x) = L, then \lim_{x \to a} f(x) = L.
This is how you prove the most important standard limit, \lim_{x \to 0} \frac{\sin x}{x} = 1 — by squeezing \frac{\sin x}{x} between \cos x and 1, both of which tend to 1 as x \to 0. The squeeze theorem complements the algebra of limits: the five laws handle combinations of known limits, while the squeeze theorem handles limits that cannot be decomposed into simpler pieces.
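The squeeze on \sin(x)/x can be verified numerically. This sketch checks the inequality \cos x \leq \frac{\sin x}{x} \leq 1 on a grid of points with 0 < |x| \leq 1 (the inequality holds for 0 < |x| < \pi/2; the grid spacing is arbitrary):

```python
import math

# Squeeze: cos(x) <= sin(x)/x <= 1 for 0 < |x| < pi/2, and both bounds
# tend to 1 as x -> 0, so sin(x)/x -> 1 as well.

def ratio(x):
    return math.sin(x) / x

xs = [t / 1000 for t in range(-1000, 1001) if t != 0]
for x in xs:
    assert math.cos(x) <= ratio(x) <= 1.0  # trapped between the bounds

# and the trapped function does converge to 1
assert abs(ratio(1e-6) - 1) < 1e-9
```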
A word about infinite limits
The five laws as stated require L and M to be finite real numbers. When limits are infinite, some arithmetic combinations are well-defined and others are not. The well-defined ones follow from the same kind of epsilon-delta reasoning:
| Combination | Result |
|---|---|
| L + \infty | \infty (finite plus infinite is infinite) |
| \infty + \infty | \infty |
| c \cdot \infty (with c > 0) | \infty |
| c \cdot \infty (with c < 0) | -\infty |
| \infty \cdot \infty | \infty |
But \infty - \infty, 0 \cdot \infty, and \frac{\infty}{\infty} are indeterminate — the result depends on the specific functions. The full catalogue is the subject of Indeterminate Forms.
Where this leads next
You now know how to combine limits and evaluate limits of polynomials and rational functions by substitution. The next set of articles handles the cases where substitution doesn't work — the interesting cases.
- Indeterminate Forms — what to do when substitution gives 0/0, \infty/\infty, or one of the other five forms that the limit laws cannot handle directly.
- Standard Limits — the handful of limits (like \lim_{x \to 0} \frac{\sin x}{x} = 1) that you memorise because they come up everywhere and cannot be evaluated by the algebra of limits alone.
- Continuity — the formal connection between limits and function values: a function is continuous at a precisely when \lim_{x \to a} f(x) = f(a), which is exactly the "just substitute" rule.
- Derivative — the limit that launched calculus: \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}. The algebra of limits is the machinery under the hood of every derivative computation.
- Squeeze Theorem — a different technique for evaluating limits, useful when the function is trapped between two simpler functions whose limits you know.