In short
When a limit has the form f(x)^{g(x)} where both the base and the exponent are changing, you cannot evaluate the base and exponent separately. The standard technique is to take the logarithm, evaluate the resulting product-type limit, and exponentiate back. This article covers the 1^\infty, 0^0, and \infty^0 forms, limits of sums that can be interpreted as Riemann sums, and Stirling's approximation for factorials.
Here is a limit that looks like it should be obvious:

\lim_{n \to \infty} \left(1 + \frac{1}{n}\right)^n
The base, 1 + 1/n, is heading toward 1. The exponent, n, is heading toward \infty. And 1 raised to any power is 1. So the answer should be 1.
Except it is not. Plug in a few values:
| n | \left(1 + \frac{1}{n}\right)^n |
|---|---|
| 1 | 2.000 |
| 10 | 2.594 |
| 100 | 2.705 |
| 1000 | 2.717 |
| 10000 | 2.718 |
The numbers are converging to something near 2.718 -- to a number that has a name. It is e, the base of the natural logarithm. The limit is not 1. The answer is e \approx 2.71828.
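The table above is easy to reproduce. This is a minimal numerical sketch (not part of the original argument), tabulating (1 + 1/n)^n and comparing it with e:

```python
import math

# Watch (1 + 1/n)^n approach e ~ 2.71828 rather than 1 as n grows.
for n in [1, 10, 100, 1000, 10000]:
    value = (1 + 1 / n) ** n
    print(f"n = {n:>5}: {value:.5f}")

print(f"e       : {math.e:.5f}")
```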
What went wrong with the "1 raised to any power is 1" argument? The base is not exactly 1. It is slightly bigger than 1, and you are raising it to an enormous power. The tiny excess above 1 is being amplified by the enormous exponent, and those two effects -- the base shrinking toward 1 and the exponent growing toward \infty -- are perfectly balanced against each other, producing a finite number that is neither 1 nor \infty.
This is what makes the form 1^\infty indeterminate: the answer depends on the specific race between the base and the exponent. Sometimes the base wins and the limit is 1. Sometimes the exponent wins and the limit is \infty. Sometimes they balance, and the limit is some finite number in between. You cannot know without doing the calculation.
To see this concretely, compare three limits that all have the 1^\infty shape:
- \lim_{n \to \infty} \left(1 + \frac{1}{n}\right)^n = e \approx 2.718 -- the base and exponent balance.
- \lim_{n \to \infty} \left(1 + \frac{1}{n^2}\right)^n = 1 -- the base shrinks too fast, so the exponent wins and the result is 1.
- \lim_{n \to \infty} \left(1 + \frac{1}{n}\right)^{n^2} = \infty -- the exponent grows too fast, so the excess above 1 is amplified without bound.
Three limits, all "1^\infty", three different answers. The form alone tells you nothing. The specific rates matter.
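You can see the three races play out numerically. The helper below, `one_to_infinity`, is a hypothetical name for this sketch; it evaluates all three expressions at a large finite n:

```python
import math

def one_to_infinity(n):
    """Evaluate the three 1^infinity races at a large finite n (numerical sketch)."""
    return ((1 + 1 / n) ** n,        # balanced:       -> e
            (1 + 1 / n**2) ** n,     # base wins:      -> 1
            (1 + 1 / n) ** (n**2))   # exponent wins:  grows without bound

for n in [10, 100]:
    print(n, one_to_infinity(n))
```

Keep n modest for the third expression: at n = 100 it is already about 10^43, and much larger n overflows a float, which is itself a vivid illustration of the exponent winning.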
The three exponential indeterminate forms
When you have \lim f(x)^{g(x)} and the base and exponent are both changing, three situations produce indeterminate forms:
| Form | Base approaches | Exponent approaches | Why it is indeterminate |
|---|---|---|---|
| 1^\infty | 1 | \pm\infty | Tiny excess amplified by huge power |
| 0^0 | 0^+ | 0 | Zero base fights zero exponent |
| \infty^0 | +\infty | 0 | Huge base fights vanishing exponent |
All three look like they should have definite answers (1^{\text{anything}} = 1, 0^{\text{anything}} = 0, \text{anything}^0 = 1). But those rules only apply when the base or the exponent is constant. When both vary simultaneously, the rules break.
The logarithm technique
All three forms are handled by the same trick: take the logarithm, evaluate the limit of the log, then exponentiate.
If L = \lim f(x)^{g(x)}, then

\ln L = \lim \ln\left(f(x)^{g(x)}\right) = \lim g(x) \ln f(x)
This converts the exponential indeterminate form into a product of two quantities, which is a problem you already know how to handle (convert it to a quotient, then use L'Hopital or expansion).
The logarithm method for $f(x)^{g(x)}$
To evaluate L = \displaystyle\lim_{x \to a} f(x)^{g(x)} when the form is 1^\infty, 0^0, or \infty^0:
- Set y = f(x)^{g(x)}, so \ln y = g(x) \cdot \ln f(x).
- Evaluate \displaystyle\lim_{x \to a} g(x) \cdot \ln f(x). Call this M.
- Then L = e^M.
If M is finite, the original limit is the finite number e^M. If M = +\infty, the limit is +\infty. If M = -\infty, the limit is 0.
Reading the definition. The exponential function e^x and the logarithm \ln x are inverses. Taking the log converts a power into a product (that is the fundamental property of logarithms: \ln(a^b) = b \ln a). Products are easier to handle than powers because you can rewrite 0 \cdot \infty as \frac{0}{1/\infty} = \frac{0}{0}, which is a form you know how to resolve.
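The three-step recipe can be sketched numerically. The function name `log_method_estimate` is mine, not from the article; it evaluates e^{g(x) ln f(x)} at a point close to the limit point:

```python
import math

def log_method_estimate(f, g, a, eps=1e-6):
    """Numerical sketch of the logarithm method: approximate
    lim_{x -> a+} f(x)^g(x) by exp(g(x) * ln(f(x))) at x = a + eps.
    Not a proof, just a sanity check of the algebra."""
    x = a + eps
    return math.exp(g(x) * math.log(f(x)))

# 1^infinity example: (1 + x)^(1/x) -> e as x -> 0+
print(log_method_estimate(lambda x: 1 + x, lambda x: 1 / x, 0.0))
```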
The 1^\infty form in detail
The most common of the three forms. The template:
Write f(x) = 1 + u(x) where u(x) \to 0. Then

\ln y = g(x) \ln f(x) = g(x) \ln(1 + u(x))
For small u, \ln(1 + u) \approx u (the first term of the Maclaurin series). So

\ln y \approx g(x) \cdot u(x)
This means: for the 1^\infty form, the limit of the logarithm is essentially \lim g(x) \cdot u(x), where u(x) is the amount by which the base exceeds 1.
This gives a fast formula. If \lim g(x) \cdot u(x) = M, then \lim f(x)^{g(x)} = e^M.
Check it against the original example: f(n) = 1 + 1/n, g(n) = n, u(n) = 1/n. So g \cdot u = n \cdot (1/n) = 1, and the limit is e^1 = e. Correct.
A first worked example
Example 1: A $1^\infty$ limit
Evaluate \displaystyle\lim_{x \to 0} (\cos x)^{1/x^2}.
Step 1. Identify the form. As x \to 0: \cos x \to 1 and 1/x^2 \to \infty. This is 1^\infty.
Why: confirming the form tells you which technique to use. Here, the logarithm method applies.
Step 2. Take the logarithm. Let y = (\cos x)^{1/x^2}, so

\ln y = \frac{1}{x^2} \ln(\cos x) = \frac{\ln(\cos x)}{x^2}
Why: writing \frac{1}{x^2} \cdot \ln(\cos x) as a fraction puts it in 0/0 form (both \ln(\cos x) \to 0 and x^2 \to 0), ready for expansion or L'Hopital.
Step 3. Expand \ln(\cos x) for small x.
\cos x = 1 - \frac{x^2}{2} + \frac{x^4}{24} - \cdots, so \cos x = 1 + u where u = -\frac{x^2}{2} + \cdots
\ln(1 + u) = u - \frac{u^2}{2} + \cdots = -\frac{x^2}{2} + \cdots (the u^2 term is order x^4, which you do not need yet)
Why: expanding \ln(\cos x) to order x^2 is enough because the denominator is x^2. The leading term survives; the rest vanish.
Step 4. Take the limit and exponentiate.

\lim_{x \to 0} \ln y = \lim_{x \to 0} \frac{-x^2/2 + \cdots}{x^2} = -\frac{1}{2}, \quad \text{so} \quad \lim_{x \to 0} y = e^{-1/2}
Why: the limit of the log is -1/2, so the original limit is e raised to that power.
Result: \displaystyle\lim_{x \to 0} (\cos x)^{1/x^2} = \frac{1}{\sqrt{e}} \approx 0.6065
The base \cos x approaches 1 from below (since \cos x < 1 for x \neq 0), and the exponent 1/x^2 grows without bound. The race between these two effects is decided by the rate: \cos x drops below 1 by about x^2/2, and the exponent grows as 1/x^2, so their product is -1/2. That balance point e^{-1/2} is the answer.
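A quick numerical check of Example 1 (a sketch I am adding, not part of the worked solution): evaluate (cos x)^{1/x^2} at shrinking x and compare with e^{-1/2}:

```python
import math

# (cos x)^(1/x^2) should settle near e^(-1/2) ~ 0.6065 as x -> 0.
for x in [0.1, 0.01, 0.001]:
    print(f"x = {x}: {math.cos(x) ** (1 / x**2):.6f}")

print(f"1/sqrt(e) = {math.exp(-0.5):.6f}")
```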
A second example: limits involving sums
A different kind of special limit arises when you are asked to evaluate the limit of a sum whose number of terms grows without bound.
Example 2: A sum that becomes an integral
Evaluate \displaystyle\lim_{n \to \infty} \frac{1}{n}\left(\sin\frac{\pi}{n} + \sin\frac{2\pi}{n} + \sin\frac{3\pi}{n} + \cdots + \sin\frac{n\pi}{n}\right).
Step 1. Recognise the structure. The expression is

\frac{1}{n}\sum_{k=1}^{n} \sin\frac{k\pi}{n}
This is a sum of n terms, each of the form \sin(x_k) \cdot \frac{1}{n}, where x_k = k\pi/n. If you instead take \Delta x = \pi/n and x_k = k \cdot \Delta x, the expression is \frac{1}{\pi}\sum \sin(x_k)\,\Delta x, which is \frac{1}{\pi} times a Riemann sum for \int_0^\pi \sin x\,dx -- almost what you want, but you need to account for the factor of \pi correctly.
Why: any sum of the form \frac{1}{n}\sum f(k/n) can be recognised as a Riemann sum for \int_0^1 f(x)\,dx.
Step 2. Rewrite to match the Riemann sum template. Set t_k = k/n, so t_k runs from 1/n to 1 in steps of \Delta t = 1/n. The sum becomes

\frac{1}{n}\sum_{k=1}^{n} \sin(\pi t_k)
This is a right-endpoint Riemann sum for \displaystyle\int_0^1 \sin(\pi t)\,dt.
Why: each term \sin(\pi t_k) \cdot (1/n) is the area of a thin rectangle of height \sin(\pi t_k) and width 1/n. Adding them all up gives a staircase approximation to the area under \sin(\pi t) from 0 to 1.
Step 3. Evaluate the integral.

\int_0^1 \sin(\pi t)\,dt = \left[-\frac{\cos(\pi t)}{\pi}\right]_0^1 = \frac{-\cos\pi + \cos 0}{\pi} = \frac{2}{\pi}
Why: the antiderivative of \sin(\pi t) is -\cos(\pi t)/\pi. Evaluating at the endpoints gives (-(-1) + 1)/\pi = 2/\pi.
Step 4. State the limit.

\lim_{n \to \infty} \frac{1}{n}\sum_{k=1}^{n} \sin\frac{k\pi}{n} = \int_0^1 \sin(\pi t)\,dt = \frac{2}{\pi}
Why: the Riemann sum converges to the integral as n \to \infty, because \sin(\pi t) is continuous on [0,1].
Result: \displaystyle\lim_{n \to \infty} \frac{1}{n}\sum_{k=1}^{n} \sin\frac{k\pi}{n} = \frac{2}{\pi}
The key insight is pattern recognition: any time you see \frac{1}{n}\sum_{k=1}^{n} f(k/n), the limit as n \to \infty is \int_0^1 f(x)\,dx. The sum is the integral, in disguise.
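Example 2 is easy to verify numerically. This sketch (the helper name `riemann_sin` is mine) computes the finite sum and compares it with 2/\pi:

```python
import math

def riemann_sin(n):
    """(1/n) * sum_{k=1}^{n} sin(k*pi/n) -- the sum from Example 2."""
    return sum(math.sin(k * math.pi / n) for k in range(1, n + 1)) / n

for n in [10, 100, 1000]:
    print(n, riemann_sin(n))

print("2/pi =", 2 / math.pi)
```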
Common confusions
- "1^\infty = 1 because 1 raised to anything is 1." Only when the base is exactly 1. If the base is 1 + \varepsilon where \varepsilon \to 0, the result depends on how fast \varepsilon shrinks relative to how fast the exponent grows. The answer can be any positive number, or 0, or \infty.
- "For 0^0, the base is zero so the answer is zero." The base is not exactly zero; it is approaching zero. Meanwhile, the exponent is approaching zero from the positive side. The two effects fight: the shrinking base wants to push the result toward 0, but the shrinking exponent wants to push it toward 1 (since anything to the 0 power is 1). The winner depends on the specific functions.
- "I cannot take the log of a limit." You can, as long as the expression inside is positive near the limit point. Since f(x)^{g(x)} requires f(x) > 0 (you cannot raise a negative number to a non-integer power in the reals), taking the log is always valid for these forms.
- "A Riemann sum and an integral are different things." An integral is a limit of Riemann sums. That is the definition. When you write \int_a^b f(x)\,dx, you are writing the limit of \sum f(x_k)\,\Delta x as the partition gets finer.
- "I need to simplify the sum before taking the limit." Sometimes you can evaluate the finite sum in closed form (using formulas for \sum k, \sum k^2, etc.) and then take n \to \infty. But often the closed form is hard to find, and recognising the sum as a Riemann sum is much faster.
Going deeper
If you came here to learn the 1^\infty technique and the Riemann sum trick, you have them. The rest of this section covers the 0^0 and \infty^0 forms, Stirling's approximation, and some subtleties.
The 0^0 form
The form 0^0 arises when f(x) \to 0^+ and g(x) \to 0. The logarithm method applies directly: \ln y = g(x) \ln f(x), which is of the form 0 \cdot (-\infty). Convert this to a quotient and evaluate.
For example, \lim_{x \to 0^+} x^x. Here f = g = x, and \ln y = x \ln x. Rewrite the product as a quotient:

\ln y = \frac{\ln x}{1/x}

This is -\infty/\infty, which you can handle with L'Hopital's rule:

\lim_{x \to 0^+} \frac{\ln x}{1/x} = \lim_{x \to 0^+} \frac{1/x}{-1/x^2} = \lim_{x \to 0^+} (-x) = 0

So \ln y \to 0, and y \to e^0 = 1. The limit \lim_{x \to 0^+} x^x = 1.
The \infty^0 form
The form \infty^0 arises when f(x) \to +\infty and g(x) \to 0. Again, take the log: \ln y = g(x) \ln f(x), which is 0 \cdot \infty.
For example, \lim_{x \to \infty} x^{1/x}. Here \ln y = \frac{\ln x}{x}, which is \infty/\infty. By L'Hopital:

\lim_{x \to \infty} \frac{\ln x}{x} = \lim_{x \to \infty} \frac{1/x}{1} = 0

So \ln y \to 0 and y \to e^0 = 1. The limit is 1.
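Both limits can be checked directly in floating point. A small sketch, evaluating x^x near 0 and x^{1/x} at large x:

```python
# x^x -> 1 as x -> 0+, and x^(1/x) -> 1 as x -> infinity.
# Both converge slowly (the log tends to 0 like x*ln(x) and (ln x)/x).
for x in [0.1, 0.01, 0.001]:
    print(f"x^x at x = {x}: {x ** x:.6f}")

for x in [10, 1000, 100000]:
    print(f"x^(1/x) at x = {x}: {x ** (1 / x):.6f}")
```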
The general Riemann sum recognition
The pattern from Example 2 generalises. Any limit of the form

\lim_{n \to \infty} \frac{1}{n}\sum_{k=1}^{n} f\!\left(\frac{k}{n}\right) = \int_0^1 f(x)\,dx

and more generally, if the sum runs from k = 0 to n-1 or includes an offset,

\lim_{n \to \infty} \frac{b-a}{n}\sum_{k=0}^{n-1} f\!\left(a + k \cdot \frac{b-a}{n}\right) = \int_a^b f(x)\,dx
The recognition step is the hard part. Here are the signals that a sum is a Riemann sum in disguise:
- The summand has the form f(k/n) or f(a + k \cdot h) where h = (b-a)/n.
- There is a factor of 1/n outside the sum.
- The number of terms grows as n \to \infty.
Once you spot the pattern, the computation reduces to evaluating a definite integral.
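The recognition step can be turned into a reusable check. The helper `riemann_limit` below is a hypothetical name for this sketch; it approximates \lim \frac{1}{n}\sum_{k=1}^n f(k/n) at a large n, and the choice of f = \sqrt{x} (with \int_0^1 \sqrt{x}\,dx = 2/3) is my illustrative example, not one from the article:

```python
import math

def riemann_limit(f, n=100_000):
    """Approximate lim_{n->inf} (1/n) * sum_{k=1}^{n} f(k/n),
    i.e. the right-endpoint Riemann sum for integral_0^1 f(x) dx."""
    return sum(f(k / n) for k in range(1, n + 1)) / n

# (1/n) * sum sqrt(k/n) -> integral_0^1 sqrt(x) dx = 2/3
print(riemann_limit(math.sqrt))
```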
Products that become sums: the logarithm trick for products
Sometimes the limit involves a product of n terms rather than a sum. The technique: take the logarithm, which turns the product into a sum, recognise that sum as a Riemann sum, and exponentiate.
For example:

y = \left(\frac{n!}{n^n}\right)^{1/n}
Take the log: \ln y = \frac{1}{n}\ln\frac{n!}{n^n} = \frac{1}{n}\sum_{k=1}^{n}\ln\frac{k}{n}.
This is a Riemann sum for \int_0^1 \ln x\,dx = \left[x\ln x - x\right]_0^1 = -1 (improper at 0, but convergent).
So \ln y \to -1 and y \to e^{-1} = 1/e.
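A numerical check of this product limit. The helper name `nth_root_ratio` is mine; it computes (n!/n^n)^{1/n} in log space (via math.lgamma, which returns \ln\Gamma(n+1) = \ln n!) to avoid overflowing n! for large n:

```python
import math

def nth_root_ratio(n):
    """(n! / n^n)^(1/n), computed in log space to avoid overflow.
    lgamma(n + 1) = ln(n!)."""
    log_value = (math.lgamma(n + 1) - n * math.log(n)) / n
    return math.exp(log_value)

for n in [10, 100, 10000]:
    print(n, nth_root_ratio(n))

print("1/e =", 1 / math.e)
```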
Stirling's approximation
The computation above is closely related to one of the most useful approximations in mathematics: Stirling's formula for n!:

n! \sim \sqrt{2\pi n}\left(\frac{n}{e}\right)^n
More precisely, \frac{n!}{\sqrt{2\pi n}(n/e)^n} \to 1 as n \to \infty. This means that for large n, the factorial grows like \sqrt{2\pi n} \cdot (n/e)^n, which is an explicit and computable expression.
Why is e involved? Because \ln(n!) \approx n\ln n - n (which is n\ln(n/e)), and this comes from approximating \sum_{k=1}^{n} \ln k by the integral \int_1^n \ln x\,dx = n\ln n - n + 1 \approx n\ln n - n. This is another Riemann-sum-to-integral conversion, the same technique as before.
Stirling's approximation is essential in combinatorics and probability, where n! appears in binomial coefficients and counting formulas. For instance, \binom{2n}{n} \approx \frac{4^n}{\sqrt{\pi n}} follows directly from applying Stirling to the factorials in \frac{(2n)!}{(n!)^2}.
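Stirling's accuracy is easy to see in code. This sketch (the function name `stirling` is mine) compares the approximation with the exact factorial; the ratio approaches 1, with relative error roughly 1/(12n):

```python
import math

def stirling(n):
    """Stirling's approximation sqrt(2*pi*n) * (n/e)^n, in log space."""
    return math.exp(0.5 * math.log(2 * math.pi * n) + n * (math.log(n) - 1))

for n in [5, 10, 20]:
    exact = math.factorial(n)
    approx = stirling(n)
    print(f"n = {n}: exact = {exact}, Stirling = {approx:.4e}, ratio = {approx / exact:.5f}")
```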
The 1^\infty shortcut formula
For quick computation, many textbooks state the following: if \lim_{x \to a} f(x) = 1 and \lim_{x \to a} g(x) = \infty, then

\lim_{x \to a} f(x)^{g(x)} = e^{\lim_{x \to a} g(x)\,(f(x) - 1)}
This is a direct consequence of \ln(1 + u) \approx u for small u, where u = f(x) - 1. It saves the step of explicitly computing the logarithm.
For example: \lim_{x \to 0} (1 + \sin x)^{1/x}. Here f(x) - 1 = \sin x and g(x) = 1/x, so g(x)(f(x) - 1) = \sin x / x \to 1. The limit is e^1 = e.
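The shortcut can be sketched as a one-liner. The helper name `shortcut` is mine; it evaluates e^{g(x)(f(x)-1)} at a point near the limit point:

```python
import math

def shortcut(f, g, a, eps=1e-6):
    """1^infinity shortcut: estimate e^{lim g(x)*(f(x) - 1)} numerically
    near x = a. A sanity check, not a proof."""
    x = a + eps
    return math.exp(g(x) * (f(x) - 1))

# (1 + sin x)^(1/x) -> e as x -> 0, since (sin x)/x -> 1
print(shortcut(lambda x: 1 + math.sin(x), lambda x: 1 / x, 0.0))
```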
Where this leads next
These special forms are part of a larger toolkit for difficult limits:
- L'Hopital's Rule -- the differentiation-based technique for 0/0 and \infty/\infty, which often appears inside the logarithm step of the exponential forms.
- Limits Using Expansion -- the series-expansion technique, which is the other main tool for resolving indeterminate quotients.
- Indeterminate Forms -- the full catalogue of all seven indeterminate forms and the relationships between them.
- Definite Integral -- the Riemann sum connection formalised: the integral as a limit of sums, with all the machinery of integration theory.
- Exponential and Logarithmic Limits -- the prerequisite for this article, covering the basic limits \lim (1+1/n)^n = e and \lim (\ln(1+x))/x = 1.