In short

When a limit has the form f(x)^{g(x)} where both the base and the exponent are changing, you cannot evaluate the base and exponent separately. The standard technique is to take the logarithm, evaluate the resulting product-type limit, and exponentiate back. This article covers the 1^\infty, 0^0, and \infty^0 forms, limits of sums recognised as Riemann sums, and Stirling's approximation for factorials.

Here is a limit that looks like it should be obvious:

\lim_{n \to \infty} \left(1 + \frac{1}{n}\right)^n

The base, 1 + 1/n, is heading toward 1. The exponent, n, is heading toward \infty. And 1 raised to any power is 1. So the answer should be 1.

Except it is not. Plug in a few values:

n          (1 + 1/n)^n
1          2.000
10         2.594
100        2.705
1000       2.717
10000      2.718

The numbers are converging to something near 2.718 -- to a number that has a name. It is e, the base of the natural logarithm. The limit is not 1. The answer is e \approx 2.71828.

What went wrong with the "1 raised to any power is 1" argument? The base is not exactly 1. It is slightly bigger than 1, and you are raising it to an enormous power. The tiny excess above 1 is being amplified by the enormous exponent, and those two effects -- the base shrinking toward 1 and the exponent growing toward \infty -- are perfectly balanced against each other, producing a finite number that is neither 1 nor \infty.

This is what makes the form 1^\infty indeterminate: the answer depends on the specific race between the base and the exponent. Sometimes the base wins and the limit is 1. Sometimes the exponent wins and the limit is \infty. Sometimes they balance, and the limit is some finite number in between. You cannot know without doing the calculation.

To see this concretely, compare three limits that all have the 1^\infty shape:

\lim_{n \to \infty} \left(1 + \frac{1}{n^2}\right)^n = 1, \qquad \lim_{n \to \infty} \left(1 + \frac{1}{n}\right)^n = e, \qquad \lim_{n \to \infty} \left(1 + \frac{1}{n}\right)^{n^2} = \infty

In the first, the log is n \cdot \ln(1 + 1/n^2) \approx 1/n \to 0; in the second it tends to 1; in the third it is n^2 \cdot \ln(1 + 1/n) \approx n \to \infty. Three limits, all "1^\infty", three different answers. The form alone tells you nothing. The specific rates matter.
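The three races are easy to watch numerically. A sketch, taking $(1 + 1/n^2)^n$, $(1 + 1/n)^n$, and $(1 + 1/n)^{n^2}$ as representatives and working through the logarithm of each (the variable names are mine; for the third race only the log is printed, because the value itself overflows a float):

```python
import math

# For f(n)^g(n), the log is g(n) * ln(f(n)); comparing logs shows who wins.
n = 10_000
log_base_wins     = n * math.log(1 + 1 / n**2)    # -> 0:  limit is e^0 = 1
log_balanced      = n * math.log(1 + 1 / n)       # -> 1:  limit is e
log_exponent_wins = n**2 * math.log(1 + 1 / n)    # -> infinity: limit blows up

print(math.exp(log_base_wins))   # close to 1
print(math.exp(log_balanced))    # close to e = 2.71828...
print(log_exponent_wins)         # about 9999.5 -- e to this power overflows
```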

The three exponential indeterminate forms

When you have \lim f(x)^{g(x)} and the base and exponent are both changing, three situations produce indeterminate forms:

Form        Base approaches    Exponent approaches    Why it is indeterminate
1^\infty    1                  \pm\infty              Tiny excess amplified by huge power
0^0         0^+                0                      Zero base fights zero exponent
\infty^0    +\infty            0                      Huge base fights vanishing exponent

All three look like they should have definite answers (1^{\text{anything}} = 1, 0^{\text{anything}} = 0, \text{anything}^0 = 1). But those rules only apply when the base or the exponent is constant. When both vary simultaneously, the rules break.

[Figure: the curve $(1 + 1/x)^x$ for positive $x$, rising from $2$ at $x = 1$ toward the dashed horizontal asymptote $y = e$.]
The function $\left(1 + \frac{1}{x}\right)^x$ for positive $x$. At $x = 1$, the value is $2$. As $x$ grows, the curve climbs toward the dashed line at $y = e \approx 2.718$ but never quite reaches it. The $1^\infty$ form produces a finite, irrational limit.

The logarithm technique

All three forms are handled by the same trick: take the logarithm, evaluate the limit of the log, then exponentiate.

If L = \lim f(x)^{g(x)}, then

\ln L = \lim\, g(x) \cdot \ln f(x)

This converts the exponential indeterminate form into a product of two quantities, which is a problem you already know how to handle (convert it to a quotient, then use L'Hopital or expansion).

The logarithm method for $f(x)^{g(x)}$

To evaluate L = \displaystyle\lim_{x \to a} f(x)^{g(x)} when the form is 1^\infty, 0^0, or \infty^0:

  1. Set y = f(x)^{g(x)}, so \ln y = g(x) \cdot \ln f(x).
  2. Evaluate \displaystyle\lim_{x \to a} g(x) \cdot \ln f(x). Call this M.
  3. Then L = e^M.

If M is finite, the original limit is the finite number e^M. If M = +\infty, the limit is +\infty. If M = -\infty, the limit is 0.

Reading the definition. The exponential function e^x and the logarithm \ln x are inverses. Taking the log converts a power into a product (that is the fundamental property of logarithms: \ln(a^b) = b \ln a). Products are easier to handle than powers because you can rewrite 0 \cdot \infty as \frac{0}{1/\infty} = \frac{0}{0}, which is a form you know how to resolve.

The 1^\infty form in detail

The most common of the three forms. The template:

\lim f(x)^{g(x)} \quad \text{where } f(x) \to 1 \text{ and } g(x) \to \infty

Write f(x) = 1 + u(x) where u(x) \to 0. Then

\ln\left(f(x)^{g(x)}\right) = g(x) \cdot \ln(1 + u(x))

For small u, \ln(1 + u) \approx u (the first term of the Maclaurin series). So

g(x) \cdot \ln(1 + u(x)) \approx g(x) \cdot u(x)

This means: for the 1^\infty form, the limit of the logarithm is essentially \lim g(x) \cdot u(x), where u(x) is the amount by which the base exceeds 1.

This gives a fast formula. If \lim g(x) \cdot u(x) = M, then \lim f(x)^{g(x)} = e^M.

Check it against the original example: f = 1 + \frac{1}{n}, g = n, u = \frac{1}{n}. So g \cdot u = n \cdot \frac{1}{n} = 1, and the limit is e^1 = e. Correct.

[Figure: flowchart of the logarithm method: $f(x)^{g(x)}$ → take $\ln$ → $g(x) \cdot \ln f(x)$ ($0 \cdot \infty$) → rewrite as $0/0$ or $\infty/\infty$ → solve to get $M$ → answer $e^M$.]
The workflow for any $1^\infty$, $0^0$, or $\infty^0$ limit. Take the logarithm to convert the power into a product, rewrite the product as a quotient (either $0/0$ or $\infty/\infty$), solve using L'Hopital or expansion, then exponentiate back.

A first worked example

Example 1: A $1^\infty$ limit

Evaluate \displaystyle\lim_{x \to 0} (\cos x)^{1/x^2}.

Step 1. Identify the form. As x \to 0: \cos x \to 1 and 1/x^2 \to \infty. This is 1^\infty.

Why: confirming the form tells you which technique to use. Here, the logarithm method applies.

Step 2. Take the logarithm. Let y = (\cos x)^{1/x^2}, so

\ln y = \frac{\ln(\cos x)}{x^2}

Why: writing \frac{1}{x^2} \cdot \ln(\cos x) as a fraction puts it in 0/0 form (both \ln(\cos x) \to 0 and x^2 \to 0), ready for expansion or L'Hopital.

Step 3. Expand \ln(\cos x) for small x.

\cos x = 1 - \frac{x^2}{2} + \frac{x^4}{24} - \cdots, so \cos x = 1 + u where u = -\frac{x^2}{2} + \cdots

\ln(1 + u) = u - \frac{u^2}{2} + \cdots = -\frac{x^2}{2} + \cdots (the u^2 term is order x^4, which you do not need yet)

\frac{\ln(\cos x)}{x^2} = \frac{-x^2/2 + \cdots}{x^2} = -\frac{1}{2} + \cdots

Why: expanding \ln(\cos x) to order x^2 is enough because the denominator is x^2. The leading term survives; the rest vanish.

Step 4. Take the limit and exponentiate.

\ln y \to -\frac{1}{2}, \quad \text{so } y \to e^{-1/2} = \frac{1}{\sqrt{e}}

Why: the limit of the log is -1/2, so the original limit is e raised to that power.

Result: \displaystyle\lim_{x \to 0} (\cos x)^{1/x^2} = \frac{1}{\sqrt{e}} \approx 0.6065

The graph of $(\cos x)^{1/x^2}$. The function dips toward zero as $|x|$ grows (the cosine becomes small and the exponent is large), but at the centre, as $x \to 0$, it settles at $1/\sqrt{e} \approx 0.6065$ -- not $1$, despite the base approaching $1$.

The base \cos x approaches 1 from below (since \cos x < 1 for x \neq 0), and the exponent 1/x^2 grows without bound. The race between these two effects is decided by the rate: \cos x drops below 1 by about x^2/2, and the exponent grows as 1/x^2, so their product is -1/2. That balance point e^{-1/2} is the answer.
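This balance is easy to verify numerically. A sketch, just plugging in shrinking values of $x$:

```python
import math

# (cos x)^(1/x^2) for shrinking x: the values settle at e^(-1/2), not 1.
for x in (0.1, 0.01, 0.001):
    print(x, math.cos(x) ** (1 / x**2))

print(1 / math.sqrt(math.e))  # 0.6065306..., the predicted limit
```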

A second example: limits involving sums

A different kind of special limit arises when you are asked to evaluate the limit of a sum with an increasing number of terms.

Example 2: A sum that becomes an integral

Evaluate \displaystyle\lim_{n \to \infty} \frac{1}{n}\left(\sin\frac{\pi}{n} + \sin\frac{2\pi}{n} + \sin\frac{3\pi}{n} + \cdots + \sin\frac{n\pi}{n}\right).

Step 1. Recognise the structure. The expression is

\frac{1}{n}\sum_{k=1}^{n} \sin\frac{k\pi}{n}

This is a sum of n terms, each of the form \sin(k\pi/n) \cdot \frac{1}{n}. The argument k\pi/n suggests a Riemann sum for \sin x on [0, \pi], but there the slice width would be \Delta x = \pi/n, while the prefactor here is \frac{1}{n} -- off by a factor of \pi. The cleanest fix is to work on [0, 1] instead.

Why: any sum of the form \frac{1}{n}\sum f(k/n) can be recognised as a Riemann sum for \int_0^1 f(x)\,dx.

Step 2. Rewrite to match the Riemann sum template. Set t_k = k/n, so t_k runs from 1/n to 1 in steps of \Delta t = 1/n.

\frac{1}{n}\sum_{k=1}^{n} \sin(k\pi/n) = \sum_{k=1}^{n} \sin(\pi t_k) \cdot \Delta t

This is a right-endpoint Riemann sum for \displaystyle\int_0^1 \sin(\pi t)\,dt.

Why: each term \sin(\pi t_k) \cdot (1/n) is the area of a thin rectangle of height \sin(\pi t_k) and width 1/n. Adding them all up gives a staircase approximation to the area under \sin(\pi t) from 0 to 1.

Step 3. Evaluate the integral.

\int_0^1 \sin(\pi t)\,dt = \left[-\frac{\cos(\pi t)}{\pi}\right]_0^1 = -\frac{\cos\pi}{\pi} + \frac{\cos 0}{\pi} = -\frac{(-1)}{\pi} + \frac{1}{\pi} = \frac{2}{\pi}

Why: the antiderivative of \sin(\pi t) is -\cos(\pi t)/\pi. Evaluating at the endpoints gives (-(-1) + 1)/\pi = 2/\pi.

Step 4. State the limit.

\lim_{n \to \infty} \frac{1}{n}\sum_{k=1}^{n} \sin\frac{k\pi}{n} = \frac{2}{\pi} \approx 0.6366

Why: the Riemann sum converges to the integral as n \to \infty, because \sin(\pi t) is continuous on [0,1].

Result: \displaystyle\lim_{n \to \infty} \frac{1}{n}\sum_{k=1}^{n} \sin\frac{k\pi}{n} = \frac{2}{\pi}

[Figure: $y = \sin(\pi t)$ from $t = 0$ to $t = 1$, with $8$ right-endpoint rectangles underneath; their total area approaches $2/\pi$.]
The curve $y = \sin(\pi t)$ from $t = 0$ to $t = 1$, with a Riemann sum of $8$ rectangles drawn underneath. As the number of rectangles grows to infinity, the total area of the rectangles converges to the integral $\int_0^1 \sin(\pi t)\,dt = 2/\pi$.
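The convergence is easy to watch numerically. A sketch (`riemann_sin` is a hypothetical helper name, not a library function):

```python
import math

# The Riemann sum from Example 2, for a few values of n.
def riemann_sin(n):
    return sum(math.sin(k * math.pi / n) for k in range(1, n + 1)) / n

for n in (10, 100, 1000):
    print(n, riemann_sin(n))

print(2 / math.pi)  # 0.6366197..., the value of the integral
```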

The key insight is pattern recognition: any time you see \frac{1}{n}\sum_{k=1}^{n} f(k/n), the limit as n \to \infty is \int_0^1 f(x)\,dx. The sum is the integral, in disguise.

Common confusions

Going deeper

If you came here to learn the 1^\infty technique and the Riemann sum trick, you have them. The rest of this section covers the 0^0 and \infty^0 forms, Stirling's approximation, and some subtleties.

The 0^0 form

The form 0^0 arises when f(x) \to 0^+ and g(x) \to 0. The logarithm method applies directly: \ln y = g(x) \ln f(x), which is of the form 0 \cdot (-\infty). Convert this to a quotient and evaluate.

For example, \lim_{x \to 0^+} x^x. Here f = g = x, and \ln y = x \ln x.

\lim_{x \to 0^+} x \ln x = \lim_{x \to 0^+} \frac{\ln x}{1/x}

This is -\infty/\infty, which you can handle with L'Hopital's rule:

= \lim_{x \to 0^+} \frac{1/x}{-1/x^2} = \lim_{x \to 0^+} \frac{-x^2}{x} = \lim_{x \to 0^+} (-x) = 0

So \ln y \to 0, and y \to e^0 = 1. The limit \lim_{x \to 0^+} x^x = 1.

The graph of $x^x$ for $x > 0$. Despite the base heading to zero, the function approaches $1$ as $x \to 0^+$. The minimum occurs near $x = 1/e \approx 0.368$.
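Both the approach to $1$ and the minimum at $x = 1/e$ can be checked directly. A sketch, just evaluating the function:

```python
import math

# x^x as x -> 0+: the values head to 1, not 0.
for x in (0.1, 0.01, 0.001, 1e-6):
    print(x, x**x)

# The minimum on (0, 1): d/dx x^x = x^x (ln x + 1) vanishes at x = 1/e.
xmin = 1 / math.e
print(xmin, xmin**xmin)  # about 0.6922
```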

The \infty^0 form

The form \infty^0 arises when f(x) \to +\infty and g(x) \to 0. Again, take the log: \ln y = g(x) \ln f(x), which is 0 \cdot \infty.

For example, \lim_{x \to \infty} x^{1/x}. Here \ln y = \frac{\ln x}{x}, which is \infty/\infty. By L'Hopital:

\lim_{x \to \infty} \frac{\ln x}{x} = \lim_{x \to \infty} \frac{1/x}{1} = 0

So \ln y \to 0 and y \to e^0 = 1. The limit is 1.
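The same direct check works for this form (a sketch, plain evaluation):

```python
import math

# x^(1/x) as x -> infinity: the vanishing exponent tames the huge base.
for x in (10.0, 1e3, 1e6, 1e9):
    print(x, x ** (1 / x))
```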

The general Riemann sum recognition

The pattern from Example 2 generalises. Any limit of the form

\lim_{n \to \infty} \frac{1}{n}\sum_{k=1}^{n} f\!\left(\frac{k}{n}\right) = \int_0^1 f(x)\,dx

and more generally, if the sum runs from k = 0 to n-1 or includes an offset,

\lim_{n \to \infty} \frac{b - a}{n}\sum_{k=0}^{n-1} f\!\left(a + k \cdot \frac{b-a}{n}\right) = \int_a^b f(x)\,dx

The recognition step is the hard part. Here are the signals that a sum is a Riemann sum in disguise:

  1. A prefactor of \frac{1}{n} (or more generally \frac{b-a}{n}) multiplying the entire sum.
  2. A summand that depends on k only through the combination \frac{k}{n} (or a + k \cdot \frac{b-a}{n}).
  3. A number of terms that grows with n, so each term shrinks while the count grows.

Once you spot the pattern, the computation reduces to evaluating a definite integral.
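The general template lends itself to a tiny reusable sketch (`riemann_sum` is a hypothetical helper, not a library function):

```python
# Left-endpoint Riemann sum matching the general formula:
# (b-a)/n * sum of f(a + k*(b-a)/n) for k = 0 .. n-1.
def riemann_sum(f, a, b, n):
    dx = (b - a) / n
    return dx * sum(f(a + k * dx) for k in range(n))

# Sanity check on a known integral: x^2 on [0, 1] integrates to 1/3.
print(riemann_sum(lambda x: x * x, 0.0, 1.0, 100_000))  # ~0.33333
```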

Products that become sums: the logarithm trick for products

Sometimes the limit involves a product of n terms rather than a sum. The technique: take the logarithm, which turns the product into a sum, recognise that sum as a Riemann sum, and exponentiate.

For example:

\lim_{n \to \infty} \left(\frac{1 \cdot 2 \cdot 3 \cdots n}{n^n}\right)^{1/n} = \lim_{n \to \infty} \left(\frac{n!}{n^n}\right)^{1/n}

Take the log: \ln y = \frac{1}{n}\ln\frac{n!}{n^n} = \frac{1}{n}\sum_{k=1}^{n}\ln\frac{k}{n} = \frac{1}{n}\sum_{k=1}^{n}\ln(k/n).

This is a Riemann sum for \int_0^1 \ln x\,dx (an improper integral, since \ln x \to -\infty at 0, but a convergent one).

\int_0^1 \ln x\,dx = [x\ln x - x]_0^1 = (0 - 1) - \lim_{x \to 0^+}(x\ln x - x) = -1 - (0 - 0) = -1

So \ln y \to -1 and y \to e^{-1} = 1/e.
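Because n! and n^n overflow floats quickly, a numerical check works through the logarithm. A sketch (`log_ratio` is a hypothetical helper; `math.lgamma(n + 1)` computes $\ln n!$):

```python
import math

# ln of (n!/n^n)^(1/n) = (ln n! - n ln n) / n, with lgamma(n+1) = ln(n!).
def log_ratio(n):
    return (math.lgamma(n + 1) - n * math.log(n)) / n

for n in (10, 100, 10_000):
    print(n, math.exp(log_ratio(n)))

print(1 / math.e)  # 0.3678794..., the predicted limit
```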

Stirling's approximation

The computation above is closely related to one of the most useful approximations in mathematics: Stirling's formula for n!.

n! \approx \sqrt{2\pi n}\left(\frac{n}{e}\right)^n

More precisely, \frac{n!}{\sqrt{2\pi n}(n/e)^n} \to 1 as n \to \infty. This means that for large n, the factorial grows like \sqrt{2\pi n} \cdot (n/e)^n, which is an explicit and computable expression.

Why is e involved? Because \ln(n!) \approx n\ln n - n (which is n\ln(n/e)), and this comes from approximating \sum_{k=1}^{n} \ln k by the integral \int_1^n \ln x\,dx = n\ln n - n + 1 \approx n\ln n - n. This is another Riemann-sum-to-integral conversion, the same technique as before.

Stirling's approximation is essential in combinatorics and probability, where n! appears in binomial coefficients and counting formulas. For instance, \binom{2n}{n} \approx \frac{4^n}{\sqrt{\pi n}} follows directly from applying Stirling to the factorials in \frac{(2n)!}{(n!)^2}.

[Figure: the ratio of $n!$ to Stirling's approximation for $n = 1$ to $12$, decreasing toward $1$ as $n$ grows.]
The ratio $\frac{n!}{\sqrt{2\pi n}(n/e)^n}$ for $n = 1, 2, \ldots, 12$. At $n = 1$ the approximation is off by about $8\%$. By $n = 10$, the error is under $1\%$. The ratio approaches $1$ from above -- Stirling's formula slightly *underestimates* $n!$.
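Both the ratio above and the central binomial estimate are easy to check. A sketch (`stirling` is a hypothetical helper name):

```python
import math

# Stirling's approximation to n!.
def stirling(n):
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (1, 5, 10, 20):
    print(n, math.factorial(n) / stirling(n))  # ratios decrease toward 1

# Central binomial coefficient: C(2n, n) is approximately 4^n / sqrt(pi n).
n = 20
print(math.comb(2 * n, n) / (4**n / math.sqrt(math.pi * n)))  # close to 1
```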

The 1^\infty shortcut formula

For quick computation, many textbooks state the following: if \lim_{x \to a} f(x) = 1 and \lim_{x \to a} g(x) = \infty, then

\lim_{x \to a} f(x)^{g(x)} = e^{\lim_{x \to a}\, g(x)(f(x) - 1)}

This is a direct consequence of \ln(1 + u) \approx u for small u, where u = f(x) - 1. It saves the step of explicitly computing the logarithm.

For example: \lim_{x \to 0} (1 + \sin x)^{1/x}. Here f(x) - 1 = \sin x and g(x) = 1/x, so g(x)(f(x) - 1) = \sin x / x \to 1. The limit is e^1 = e.
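The shortcut and the direct evaluation agree numerically. A sketch comparing both for shrinking $x$:

```python
import math

# Left: direct (1 + sin x)^(1/x).  Right: shortcut e^(g*(f-1)) = e^(sin(x)/x).
for x in (0.1, 0.001, 1e-6):
    print(x, (1 + math.sin(x)) ** (1 / x), math.exp(math.sin(x) / x))

print(math.e)  # 2.7182818..., both columns converge here
```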

Where this leads next

These special forms are part of a larger toolkit for difficult limits: