In short

An improper integral is a definite integral where either the interval is infinite or the integrand blows up somewhere on the interval. You handle both cases by the same move: replace the problem point with a parameter t, compute the ordinary integral from a to t, and then take the limit as t approaches the problem point. The integral converges if the limit is finite and diverges if it is infinite or undefined.

How do you compute \int_1^\infty \frac{dx}{x^2}? The integral sign looks ordinary, but the upper limit is infinity — something you cannot plug into an antiderivative. The fundamental theorem of calculus says \int_a^b f(x)\,dx = F(b) - F(a), but F(\infty) is not a number.

You can do something almost as good. Pick a large finite number t and compute \int_1^t \frac{dx}{x^2}. That is an ordinary definite integral:

\int_1^t \frac{dx}{x^2} = -\frac{1}{x}\bigg|_1^t = -\frac{1}{t} + 1 = 1 - \frac{1}{t}

Now let t grow. As t \to \infty, the term \frac{1}{t} shrinks to zero, and the integral approaches 1. So the sensible definition of \int_1^\infty \frac{dx}{x^2} is this limit: 1.

Notice how strong this claim is. The region under the curve y = \frac{1}{x^2} from x = 1 all the way out to infinity has a finite area. The curve extends infinitely far to the right, but the area it encloses is exactly 1. That seems almost paradoxical — an infinite strip with a finite area — and yet the calculation is clean: you have a number.

This is an improper integral. It generalises the definite integral to cases where an endpoint is infinite, or where the integrand is unbounded somewhere on the interval. Both cases are handled by the same trick: take a limit of ordinary definite integrals.
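The limiting process above is easy to watch numerically. The sketch below (a midpoint Riemann sum, with the step count chosen ad hoc) approximates \int_1^t \frac{dx}{x^2} for growing t and shows the values settling toward 1:

```python
# Numerical sketch: the truncated integral of 1/x^2 on [1, t]
# approaches 1 as t grows, matching the limit computed in the text.

def midpoint_integral(f, a, b, n=100_000):
    """Approximate the integral of f on [a, b] with n midpoint rectangles."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

for t in [10, 100, 1000]:
    approx = midpoint_integral(lambda x: 1 / x**2, 1, t)
    print(t, round(approx, 6))  # creeps up toward 1
```

The printed values track the exact truncated answer 1 - 1/t, so the "let t grow" step of the definition is visible directly.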

The two kinds of improper integral

There are two situations that force an integral to be improper.

Type 1: infinite interval. An interval that extends to +\infty, -\infty, or both. Examples:

\int_1^\infty \frac{dx}{x^2}, \qquad \int_{-\infty}^0 e^x\,dx, \qquad \int_{-\infty}^\infty \frac{dx}{1 + x^2}

Type 2: singular integrand. A finite interval where the integrand becomes infinite at one or more points inside or on the boundary. Examples:

\int_0^1 \frac{dx}{\sqrt{x}}, \qquad \int_0^1 \ln x\,dx, \qquad \int_0^2 \frac{dx}{(x - 1)^2}

The first has a singularity at x = 0 (the integrand blows up). The second has a similar issue — \ln x \to -\infty as x \to 0^+. The third has a singularity at the interior point x = 1, which is the trickiest case because the problem is not at an endpoint.

Both types get the same treatment: you replace the problem point with a variable limit t and take a limit.

The formal definition

Improper integrals

Type 1a. If f is continuous on [a, \infty), then

\int_a^\infty f(x)\,dx \;=\; \lim_{t\to\infty}\int_a^t f(x)\,dx

The integral converges if this limit is finite; diverges otherwise.

Type 1b. If f is continuous on (-\infty, a], then

\int_{-\infty}^a f(x)\,dx \;=\; \lim_{t\to-\infty}\int_t^a f(x)\,dx

Type 1c. If f is continuous on (-\infty, \infty), split at any point c:

\int_{-\infty}^\infty f(x)\,dx \;=\; \int_{-\infty}^c f(x)\,dx + \int_c^\infty f(x)\,dx

and both pieces must converge independently.

Type 2a. If f is continuous on (a, b] but |f(x)|\to\infty as x\to a^+, then

\int_a^b f(x)\,dx \;=\; \lim_{t\to a^+}\int_t^b f(x)\,dx

Type 2b. If f is continuous on [a, b) but |f(x)|\to\infty as x\to b^-, then

\int_a^b f(x)\,dx \;=\; \lim_{t\to b^-}\int_a^t f(x)\,dx

Type 2c. If f has a singularity at an interior point c \in (a, b), split the integral as \int_a^c + \int_c^b and treat each piece by Type 2a or 2b. Both pieces must converge.

The thing to watch for in Type 1c and Type 2c is that both pieces must converge on their own. You cannot rely on "nice cancellations" between the two halves.
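The Type 2c example from earlier, \int_0^2 \frac{dx}{(x - 1)^2}, makes the point concrete. A quick sketch, using the antiderivative -\frac{1}{x - 1}, shows the left half \int_0^{1-\epsilon} blowing up as the gap \epsilon shrinks, so the whole integral diverges no matter what the right half does:

```python
# Sketch: int_0^2 dx/(x-1)^2 diverges. Split at the interior
# singularity x = 1 and watch the left piece grow without bound.

def left_half(eps):
    """Exact value of int_0^{1-eps} dx/(x-1)^2, from the antiderivative -1/(x-1)."""
    return (1 / eps) - 1

for eps in [0.1, 0.01, 0.001]:
    print(eps, left_half(eps))  # grows without bound: this piece diverges
```

Ignoring the interior singularity and naively applying the fundamental theorem on [0, 2] would give the absurd answer -2 for a positive integrand, which is exactly the mistake the splitting rule prevents.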

The canonical test: \int_1^\infty \frac{dx}{x^p}

One family of integrals deserves memorising, because almost every convergence question comes down to comparing your integrand with a member of this family.

The p-test. For the improper integral

\int_1^\infty \frac{dx}{x^p}

the behaviour depends on p:

\int_1^\infty \frac{dx}{x^p} \;=\; \begin{cases} \dfrac{1}{p-1} & \text{if } p > 1 \\[4pt] \text{diverges} & \text{if } p \leq 1 \end{cases}

The borderline is at p = 1. For p strictly greater than 1, the integrand decays fast enough that the integral is finite. For p strictly less, it decays too slowly. Exactly at p = 1, you get the logarithm, which diverges — but barely.

Let me verify the p > 1 case directly. For p \neq 1:

\int_1^t \frac{dx}{x^p} = \frac{x^{1-p}}{1-p}\bigg|_1^t = \frac{t^{1-p} - 1}{1 - p}

If p > 1, then 1 - p < 0, so t^{1-p} = \frac{1}{t^{p-1}} \to 0 as t \to \infty. So the limit is \frac{0 - 1}{1 - p} = \frac{1}{p - 1}.

If p < 1, then 1 - p > 0, so t^{1-p} \to \infty as t \to \infty. The limit is infinite — divergent.

If p = 1, the antiderivative is \ln x, and \ln t \to \infty.

The twin p-test for integrals with a singularity at 0:

\int_0^1 \frac{dx}{x^p}

behaves oppositely:

\int_0^1 \frac{dx}{x^p} \;=\; \begin{cases} \dfrac{1}{1-p} & \text{if } p < 1 \\[4pt] \text{diverges} & \text{if } p \geq 1 \end{cases}

Notice the reversal. An integrand that is too flat at infinity (small p) makes \int_1^\infty diverge; an integrand that is too steep at zero (large p) makes \int_0^1 diverge. The borderline is again p = 1, and p = 1 is the one case that fails on both sides.

The three borderline curves: $\frac{1}{\sqrt{x}}$ (flatter, soft red), $\frac{1}{x}$ (borderline, dashed red), $\frac{1}{x^2}$ (steeper, black). From $1$ to $\infty$: the $\frac{1}{x^2}$ integral converges, $\frac{1}{x}$ diverges, and $\frac{1}{\sqrt{x}}$ diverges. From $0$ to $1$: the story reverses — $\frac{1}{\sqrt{x}}$ converges, $\frac{1}{x}$ diverges, and $\frac{1}{x^2}$ diverges. The common dividing line is $p = 1$.
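The p-test can be checked numerically with the antiderivatives derived above. The truncated integrals on [1, t] settle to \frac{1}{p-1} for p > 1 and keep growing for p \leq 1:

```python
# Sketch of the p-test: truncated integrals of 1/x^p on [1, t],
# evaluated exactly from the antiderivatives in the text.

import math

def truncated(p, t):
    """Integral of 1/x^p from 1 to t, via the antiderivative."""
    if p == 1:
        return math.log(t)
    return (t ** (1 - p) - 1) / (1 - p)

for p in [2, 1, 0.5]:
    values = [truncated(p, t) for t in (10, 1000, 100000)]
    print(p, [round(v, 4) for v in values])
# p = 2 stabilises near 1; p = 1 and p = 0.5 keep climbing
```

The p = 1 row grows like \ln t, slowly but unboundedly, which is the "diverges, but barely" behaviour described above.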

The first worked example

Example 1: $\int_0^1 \frac{dx}{\sqrt{x}}$

Step 1. Identify the improperness. The integrand \frac{1}{\sqrt{x}} blows up at x = 0: as x \to 0^+, \frac{1}{\sqrt{x}} \to \infty. So this is a Type 2a integral, with the singularity at the lower endpoint.

Why: before you can apply any definition, you need to know where the trouble is. The trouble is at x = 0, and the upper endpoint x = 1 is perfectly ordinary.

Step 2. Rewrite as a limit. Replace the lower endpoint 0 with a parameter t, and let t \to 0^+:

\int_0^1 \frac{dx}{\sqrt{x}} = \lim_{t\to 0^+}\int_t^1 x^{-1/2}\,dx

Why: this is the definition of a Type 2a improper integral. You have to avoid the singularity first, compute an ordinary integral, and only then send your avoidance to zero.

Step 3. Compute the ordinary integral.

\int_t^1 x^{-1/2}\,dx = \frac{x^{1/2}}{1/2}\bigg|_t^1 = 2\sqrt{x}\bigg|_t^1 = 2 - 2\sqrt{t}

Why: the antiderivative of x^{-1/2} is 2x^{1/2}. This is well-defined as long as t > 0, which it is.

Step 4. Take the limit as t \to 0^+.

\lim_{t\to 0^+}(2 - 2\sqrt{t}) = 2 - 0 = 2

Why: \sqrt{t} \to 0 as t \to 0, so the correction term -2\sqrt{t} vanishes and you are left with the clean number 2.

Result: \int_0^1 \frac{dx}{\sqrt{x}} = 2. The integral converges.

The curve $y = \frac{1}{\sqrt{x}}$ on $(0, 1]$, with a vertical asymptote at $x = 0$. The curve extends upward without bound, yet the area under it from $0$ to $1$ is exactly $2$. The vertical extent is infinite but the horizontal contribution near the asymptote is small enough that everything sums to a finite number.

This is a genuinely surprising result. The integrand is not bounded on [0, 1] — it goes to infinity at the left endpoint — and yet the area under it is a finite, specific number. The integrand goes to infinity slowly enough that the integral survives.

Compare with \int_0^1 \frac{dx}{x}, which does diverge (the antiderivative is \ln x, and \ln t \to -\infty). The difference between \frac{1}{\sqrt{x}} and \frac{1}{x} is that \frac{1}{\sqrt{x}} blows up less aggressively near zero, and the less aggressive blow-up is tame enough to integrate.
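The contrast between the two blow-up rates is easy to see numerically, again using the exact antiderivatives from the text:

```python
# Contrast near the singularity at 0: int_t^1 x^(-1/2) dx settles at 2
# as t -> 0+, while int_t^1 dx/x = -ln(t) grows without bound.

import math

def sqrt_piece(t):
    """Integral of x^(-1/2) from t to 1, from the antiderivative 2*sqrt(x)."""
    return 2 - 2 * math.sqrt(t)

def recip_piece(t):
    """Integral of 1/x from t to 1, from the antiderivative ln(x)."""
    return -math.log(t)

for t in [1e-2, 1e-4, 1e-8]:
    print(t, round(sqrt_piece(t), 6), round(recip_piece(t), 3))
# first column converges to 2, second keeps growing
```

Shrinking t by a factor of 100 adds a fixed amount (about 4.6) to the \frac{1}{x} column every time, which is exactly the logarithmic divergence.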

A second example, this one infinite

Example 2: $\int_0^\infty e^{-x}\,dx$

Step 1. Identify the improperness. The interval is infinite — it stretches from 0 to \infty. This is Type 1a with a = 0. The integrand e^{-x} is perfectly well-behaved on any finite piece of the interval; the only trouble is at \infty.

Why: even though e^{-x} is continuous everywhere, the integral is improper because the upper limit is not a real number.

Step 2. Rewrite as a limit. Replace the upper limit with t and let t \to \infty:

\int_0^\infty e^{-x}\,dx = \lim_{t\to\infty}\int_0^t e^{-x}\,dx

Step 3. Compute the ordinary integral.

\int_0^t e^{-x}\,dx = -e^{-x}\bigg|_0^t = -e^{-t} + e^0 = 1 - e^{-t}

Why: the antiderivative of e^{-x} is -e^{-x}. Plugging in the limits gives a clean expression.

Step 4. Take the limit as t \to \infty.

\lim_{t\to\infty}(1 - e^{-t}) = 1 - 0 = 1

Why: e^{-t} decays to zero extremely fast — faster than any polynomial. So the correction term vanishes.

Result: \int_0^\infty e^{-x}\,dx = 1. The integral converges.

The curve $y = e^{-x}$ starting at $(0, 1)$ and decaying rapidly to zero. The area under the entire right half of the curve — from $0$ to $\infty$ — is exactly $1$. The decay is so fast that most of the area is already captured by the time $x$ reaches $4$ or $5$; everything past that is essentially zero.

Exponential decay is aggressive — by x = 5, the function is already down to about 0.0067, and by x = 10 it is down to about 0.0000454. The "infinite tail" of the integral contributes a negligible amount, and the total area is a crisp 1.
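A short sketch using the truncated value 1 - e^{-t} from Step 3 makes the speed of this convergence explicit:

```python
# Sketch: the truncated integral int_0^t e^(-x) dx = 1 - e^(-t) is
# within 1% of the limit by t = 5; the infinite tail is negligible.

import math

def truncated_exp(t):
    """Integral of e^(-x) from 0 to t, from the antiderivative -e^(-x)."""
    return 1 - math.exp(-t)

for t in [1, 5, 10, 20]:
    print(t, truncated_exp(t))  # rushes toward 1
```

Compare this with the p-test family: \frac{1}{x^2} also converges on [1, \infty), but its truncated integrals approach the limit like \frac{1}{t}, far more slowly than e^{-t}.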

The comparison test: convergence without computation

Sometimes you cannot compute the integral, but you still want to know whether it converges. The comparison test is the workhorse.

Comparison test. Suppose 0 \leq f(x) \leq g(x) on [a, \infty). If \int_a^\infty g(x)\,dx converges, then \int_a^\infty f(x)\,dx converges. If \int_a^\infty f(x)\,dx diverges, then \int_a^\infty g(x)\,dx diverges.

Picture: the area under f is trapped below the area under g. If the upper area is finite, so is the lower one. If the lower area is infinite, so is the upper one.

The test is usually applied by comparing your integrand to a member of the p-family \frac{1}{x^p}.

Example. Does \int_1^\infty \frac{dx}{x^2 + x + 1} converge?

For x \geq 1, the integrand satisfies \frac{1}{x^2 + x + 1} \leq \frac{1}{x^2} (because the denominator is bigger). And \int_1^\infty \frac{dx}{x^2} converges (it is 1, by the p-test with p = 2). So by the comparison test, \int_1^\infty \frac{dx}{x^2 + x + 1} converges as well.

You did not have to compute the actual value. You just bounded the integrand by something simpler whose behaviour you already knew.
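The comparison is visible numerically. A midpoint-rule sketch (step count chosen ad hoc) shows the partial integrals of \frac{1}{x^2 + x + 1} staying below those of \frac{1}{x^2} and stabilising:

```python
# Sketch of the comparison: partial integrals of 1/(x^2 + x + 1) on
# [1, t] stay below those of 1/x^2 and level off as t grows.

def midpoint(f, a, b, n=200_000):
    """Midpoint-rule approximation to the integral of f on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: 1 / (x**2 + x + 1)
g = lambda x: 1 / x**2

for t in [10, 100, 1000]:
    lower, upper = midpoint(f, 1, t), midpoint(g, 1, t)
    print(t, round(lower, 6), round(upper, 6))
    assert lower <= upper  # the pointwise bound carries over to areas
```

The comparison test guarantees only that the lower sequence converges to something; it does not say what. (This particular integral can in fact be evaluated by completing the square, but the test spares you that work.)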

Example. Does \int_1^\infty \frac{x}{x^2 - 0.5}\,dx converge?

For large x, this is roughly \frac{x}{x^2} = \frac{1}{x}, which diverges. More formally, for x \geq 1 the integrand satisfies \frac{x}{x^2 - 0.5} \geq \frac{x}{x^2} = \frac{1}{x}, because shrinking the denominator enlarges the fraction, and \int_1^\infty \frac{dx}{x} diverges. So by the comparison test, the original integral diverges too.

The pattern: for convergence, bound your integrand above by something convergent. For divergence, bound it below by something divergent.

The limit comparison test

Sometimes a direct comparison is awkward, and a softer form works better: the limit comparison test.

Limit comparison. If f and g are positive on [a, \infty) and

\lim_{x\to\infty}\frac{f(x)}{g(x)} = L\quad\text{where } 0 < L < \infty

then \int_a^\infty f and \int_a^\infty g both converge or both diverge.

Example. Does \int_1^\infty \frac{\sqrt{x^3 + 1}}{x^4 + 2}\,dx converge? The integrand behaves like \frac{x^{3/2}}{x^4} = \frac{1}{x^{5/2}} for large x. Formally, set g(x) = x^{-5/2}, and compute

\lim_{x\to\infty}\frac{\sqrt{x^3 + 1}/(x^4 + 2)}{x^{-5/2}} = \lim_{x\to\infty}\frac{x^{5/2}\sqrt{x^3 + 1}}{x^4 + 2} = \lim_{x\to\infty}\frac{x^{5/2}\cdot x^{3/2}\sqrt{1 + 1/x^3}}{x^4(1 + 2/x^4)} = 1

Since the limit is a positive finite number, both integrals behave the same. And \int_1^\infty x^{-5/2}\,dx converges by the p-test (with p = 5/2 > 1). So the original integral converges.

The limit comparison test is cleaner for rational functions than the direct one, because you do not need to rig up an explicit inequality — you just check the asymptotic behaviour.
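The limit in the example can be sanity-checked numerically; the ratio of the two integrands drifts toward 1 exactly as the algebra predicts:

```python
# Sketch: the ratio f(x)/g(x) from the limit comparison example,
# with f(x) = sqrt(x^3 + 1)/(x^4 + 2) and g(x) = x^(-5/2),
# numerically approaches 1 as x grows.

import math

def ratio(x):
    f = math.sqrt(x**3 + 1) / (x**4 + 2)
    g = x ** (-2.5)
    return f / g

for x in [10, 100, 10000]:
    print(x, ratio(x))  # tends to 1
```

Note that checking a few values of the ratio is a plausibility check, not a proof; the proof is the algebraic limit computed above.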


Going deeper

If you can spot the improperness, rewrite the integral as a limit, and apply the comparison tests, you have the full toolkit. The rest of this section looks at the Cauchy principal value, the gamma function, absolute versus conditional convergence, and why the limit definition is the right way to extend the ordinary integral.

The Cauchy principal value

The integral \int_{-1}^1 \frac{dx}{x} is divergent in the usual sense — both halves blow up — but it has a symmetric cancellation that might still be meaningful. Define

\text{PV}\int_{-1}^1 \frac{dx}{x} = \lim_{\epsilon\to 0^+}\left[\int_{-1}^{-\epsilon}\frac{dx}{x} + \int_\epsilon^1 \frac{dx}{x}\right]

where the two small gaps around the singularity are taken to shrink at the same rate. This is called the Cauchy principal value. The computation gives:

\int_{-1}^{-\epsilon}\frac{dx}{x} = \ln|{-\epsilon}| - \ln|{-1}| = \ln\epsilon - 0 = \ln\epsilon
\int_\epsilon^1\frac{dx}{x} = \ln 1 - \ln\epsilon = -\ln\epsilon

Adding: \ln\epsilon + (-\ln\epsilon) = 0. So the PV is 0.

The principal value exists even when the ordinary improper integral does not — but only because you have specified a particular way of taking the limit (namely, symmetric shrinking). If you shrink asymmetrically, you get different answers. So PV is weaker than ordinary convergence, and it is a separate concept. For JEE you will almost never need it; for physics (Fourier transforms, contour integrals) it is unavoidable.
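The dependence on how the gaps shrink can be shown with the exact antiderivative \ln|x| from the computation above. Below, the factor 2 in the asymmetric gap is an arbitrary illustrative choice:

```python
# Sketch: symmetric vs asymmetric shrinking for int_{-1}^{1} dx/x.
# Symmetric gaps give 0 (the principal value); a right gap twice as
# wide as the left gap gives -ln(2) instead.

import math

def gapped(eps_left, eps_right):
    """int_{-1}^{-eps_left} dx/x + int_{eps_right}^{1} dx/x, exactly."""
    return math.log(eps_left) + (-math.log(eps_right))

for eps in [1e-3, 1e-6, 1e-9]:
    sym = gapped(eps, eps)        # symmetric: always 0
    asym = gapped(eps, 2 * eps)   # asymmetric: settles at -ln(2)
    print(eps, sym, asym)
```

Shrinking the right gap k times faster than the left would give -\ln k, so every real number is achievable by choosing the rates; that arbitrariness is precisely why the ordinary improper integral is declared divergent.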

The gamma function

The most important improper integral in all of mathematics is the gamma function:

\Gamma(s) = \int_0^\infty x^{s-1} e^{-x}\,dx

For s > 0, this converges, and it generalises the factorial: \Gamma(n) = (n - 1)! for positive integers n. The integral is improper because of the infinite upper limit (the exponential tames it) and, when s < 1, also because of the singularity at x = 0 (the x^{s-1} factor blows up, but the singularity is mild enough that the integral converges).

Evaluating \Gamma(1/2) = \sqrt{\pi} takes more care than an elementary computation. One way: use a substitution x = u^2 to turn it into

\Gamma(1/2) = 2\int_0^\infty e^{-u^2}\,du

and then square and convert to polar coordinates. The result is \sqrt{\pi}, a number that shows up all over probability and statistics.
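Both gamma facts can be checked numerically. The sketch below uses a midpoint rule with ad-hoc truncation at T = 50 (the exponential makes the discarded tail negligible); the first routine handles s \geq 1, where there is no singularity at 0, and the substituted form 2\int_0^\infty e^{-u^2}\,du from the text handles s = \tfrac{1}{2}:

```python
# Sketch: truncated numerical versions of the gamma integral recover
# (n-1)! and Gamma(1/2) = sqrt(pi). Truncation points are ad hoc.

import math

def gamma_approx(s, T=50.0, n=500_000):
    """Midpoint approximation to int_0^T x^(s-1) e^(-x) dx (for s >= 1)."""
    h = T / n
    return sum(((i + 0.5) * h) ** (s - 1) * math.exp(-(i + 0.5) * h)
               for i in range(n)) * h

def gamma_half(T=10.0, n=200_000):
    """Gamma(1/2) via the substituted form 2 * int_0^T e^(-u^2) du."""
    h = T / n
    return 2 * sum(math.exp(-(((i + 0.5) * h) ** 2)) for i in range(n)) * h

print(gamma_approx(5), math.factorial(4))  # both close to 24
print(gamma_half(), math.sqrt(math.pi))    # both close to 1.7724...
```

The substitution sidesteps the x^{-1/2} singularity entirely, which is why the second routine is accurate while a naive midpoint rule on the original integrand at s = \tfrac{1}{2} would struggle near zero. (Python's math.gamma gives the same values directly.)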

Absolute versus conditional convergence

An improper integral \int_a^\infty f(x)\,dx is called absolutely convergent if \int_a^\infty |f(x)|\,dx converges, and conditionally convergent if it converges but \int_a^\infty |f(x)|\,dx does not.

Example of conditional convergence: \int_0^\infty \frac{\sin x}{x}\,dx = \frac{\pi}{2}. The integrand oscillates, and the positive and negative pieces cancel each other enough to give a finite limit. But \int_0^\infty \frac{|\sin x|}{x}\,dx diverges.

For a comparison test to apply, you need absolute convergence (the comparison test requires f \geq 0, so it can only test the integral of |f|). Conditionally convergent integrals require subtler tools — typically integration by parts, which transfers the oscillation to a less problematic factor.
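The contrast between \frac{\sin x}{x} and \frac{|\sin x|}{x} shows up clearly in a midpoint-rule sketch (step count ad hoc; note \frac{\sin x}{x} extends continuously to 1 at x = 0, so there is no singularity to dodge):

```python
# Sketch of conditional convergence: partial integrals of sin(x)/x
# oscillate toward pi/2, while those of |sin(x)|/x keep growing.

import math

def midpoint(f, a, b, n=200_000):
    """Midpoint-rule approximation to the integral of f on [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

sinc = lambda x: math.sin(x) / x
abs_sinc = lambda x: abs(math.sin(x)) / x

for t in [50, 200, 800]:
    print(t, round(midpoint(sinc, 0, t), 4), round(midpoint(abs_sinc, 0, t), 4))
# first column hovers near pi/2 ~ 1.5708; second climbs like ln(t)
```

The second column grows roughly like \frac{2}{\pi}\ln t, which is the harmonic-style divergence that absolute convergence would have ruled out.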

Why the definition is "right"

You might wonder why the definition of the improper integral is a limit of ordinary integrals, rather than some other kind of extension. The answer is that this definition agrees with the other sensible constructions wherever they overlap: with the ordinary Riemann integral whenever the integrand happens to be bounded and integrable on the whole interval, and with the Lebesgue integral whenever the integral is absolutely convergent. That agreement is what makes the improper integral a robust mathematical object rather than a notational convenience.

Where this leads next