In short
An improper integral is a definite integral where either the interval is infinite or the integrand blows up somewhere on the interval. You handle both cases by the same move: replace the problem point with a parameter t, compute the ordinary integral from a to t, and then take the limit as t approaches the problem point. The integral converges if the limit is finite and diverges if it is infinite or undefined.
How do you compute \int_1^\infty \frac{dx}{x^2}? The integral sign looks ordinary, but the upper limit is infinity — something you cannot plug into an antiderivative. The fundamental theorem of calculus says \int_a^b f(x)\,dx = F(b) - F(a), but F(\infty) is not a number.
You can do something almost as good. Pick a large finite number t and compute \int_1^t \frac{dx}{x^2}. That is an ordinary definite integral:

\int_1^t \frac{dx}{x^2} = \left[-\frac{1}{x}\right]_1^t = 1 - \frac{1}{t}.
Now let t grow. As t \to \infty, the term \frac{1}{t} shrinks to zero, and the integral approaches 1. So the sensible definition of \int_1^\infty \frac{dx}{x^2} is this limit: 1.
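If you want to watch the limit happen, a few lines of Python can chase it numerically. This is an illustrative sketch: `midpoint` is a throwaway Riemann-sum helper written here for the occasion, not a library routine.

```python
def midpoint(f, a, b, n=100_000):
    # crude composite midpoint Riemann sum for the ordinary integral
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

for t in (10, 100, 1000):
    # numeric value vs. the antiderivative value 1 - 1/t
    print(t, midpoint(lambda x: 1 / x**2, 1, t), 1 - 1 / t)
```

Both columns creep toward 1 as t grows, exactly as the limit definition predicts.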
Notice how strong this claim is. The region under the curve y = \frac{1}{x^2} from x = 1 all the way out to infinity has a finite area. The curve extends infinitely far to the right, but the area it encloses is exactly 1. That seems almost paradoxical — an infinite strip with a finite area — and yet the calculation is clean: you have a number.
This is an improper integral. It generalises the definite integral to cases where an endpoint is infinite, or where the integrand is unbounded somewhere on the interval. Both cases are handled by the same trick: take a limit of ordinary definite integrals.
The two kinds of improper integral
There are two situations that force an integral to be improper.
Type 1: infinite interval. An interval that extends to +\infty, -\infty, or both, as in \int_1^\infty \frac{dx}{x^2} and \int_0^\infty e^{-x}\,dx.
Type 2: singular integrand. A finite interval where the integrand becomes infinite at one or more points inside or on the boundary. Examples:

\int_0^1 \frac{dx}{\sqrt{x}}, \qquad \int_0^1 \ln x\,dx, \qquad \int_0^2 \frac{dx}{x - 1}.
The first has a singularity at x = 0 (the integrand blows up). The second has a similar issue — \ln x \to -\infty as x \to 0^+. The third has a singularity at the interior point x = 1, which is the trickiest case because the problem is not at an endpoint.
Both types get the same treatment: you replace the problem point with a variable limit t and take a limit.
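The common trick can be sketched in code. Everything here (`midpoint`, `trend`) is a hypothetical helper invented for illustration; a crude midpoint sum stands in for the ordinary definite integral inside the limit.

```python
import math

def midpoint(f, a, b, n=100_000):
    # crude composite midpoint Riemann sum; stands in for the ordinary integral
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def trend(f, a, cutoffs):
    # the "replace the problem point with t" move: one ordinary integral per cutoff
    return [midpoint(f, a, t) for t in cutoffs]

# Type 1: push the upper cutoff toward infinity; the values settle near 1
print(trend(lambda x: math.exp(-x), 0, (10, 20, 40)))

# Type 2: pull the lower cutoff toward the singularity at 0; the values settle near 2
print([midpoint(lambda x: x ** -0.5, t, 1) for t in (0.1, 0.01, 0.001)])
```

Convergence shows up as the printed values settling down; divergence would show up as them running away.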
The formal definition
Improper integrals
Type 1a. If f is continuous on [a, \infty), then

\int_a^\infty f(x)\,dx = \lim_{t \to \infty} \int_a^t f(x)\,dx.
The integral converges if this limit is finite; diverges otherwise.
Type 1b. If f is continuous on (-\infty, a], then

\int_{-\infty}^a f(x)\,dx = \lim_{t \to -\infty} \int_t^a f(x)\,dx.
Type 1c. If f is continuous on (-\infty, \infty), split at any point c:

\int_{-\infty}^\infty f(x)\,dx = \int_{-\infty}^c f(x)\,dx + \int_c^\infty f(x)\,dx,
and both pieces must converge independently.
Type 2a. If f is continuous on (a, b] but f(x)\to\infty as x\to a^+, then

\int_a^b f(x)\,dx = \lim_{t \to a^+} \int_t^b f(x)\,dx.
Type 2b. If f is continuous on [a, b) but f(x)\to\infty as x\to b^-, then

\int_a^b f(x)\,dx = \lim_{t \to b^-} \int_a^t f(x)\,dx.
Type 2c. If f has a singularity at an interior point c \in (a, b), split the integral as \int_a^c + \int_c^b and treat each piece by Type 2a or 2b. Both pieces must converge.
The thing to watch for in Type 1c and Type 2c is that both pieces must converge on their own. You cannot rely on "nice cancellations" between the two halves.
The canonical test: \int_1^\infty \frac{dx}{x^p}
One family of integrals deserves memorising, because almost every convergence question comes down to comparing your integrand with a member of this family.
The p-test. For the improper integral

\int_1^\infty \frac{dx}{x^p},
the behaviour depends on p:
- If p > 1: converges, to \frac{1}{p - 1}.
- If p = 1: diverges (\int dx/x = \ln x, which grows without bound).
- If p < 1: diverges (x^{1-p} grows without bound).
The borderline is at p = 1. For p strictly greater than 1, the integrand decays fast enough that the integral is finite. For p strictly less, it decays too slowly. Exactly at p = 1, you get the logarithm, which diverges — but barely.
Let me verify all three cases directly. For p \neq 1:

\int_1^t \frac{dx}{x^p} = \left[\frac{x^{1-p}}{1 - p}\right]_1^t = \frac{t^{1-p} - 1}{1 - p}.
If p > 1, then 1 - p < 0, so t^{1-p} = \frac{1}{t^{p-1}} \to 0 as t \to \infty. So the limit is \frac{0 - 1}{1 - p} = \frac{1}{p - 1}.
If p < 1, then 1 - p > 0, so t^{1-p} \to \infty as t \to \infty. The limit is infinite — divergent.
If p = 1, the antiderivative is \ln x, and \ln t \to \infty.
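Since the antiderivative is explicit, the three cases can be checked by evaluating the partial integral \int_1^t \frac{dx}{x^p} at growing cutoffs. A small Python sketch (the helper `partial` is invented here):

```python
import math

def partial(p, t):
    # the ordinary integral from 1 to t, via the antiderivative
    return math.log(t) if p == 1 else (t ** (1 - p) - 1) / (1 - p)

for p in (2.0, 1.0, 0.5):
    print(p, [partial(p, t) for t in (1e2, 1e4, 1e6)])
# p = 2.0 settles near 1/(p - 1) = 1; p = 1.0 and p = 0.5 keep growing
```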
The twin p-test covers integrals with a singularity at 0. The integral

\int_0^1 \frac{dx}{x^p}

behaves oppositely:
- If p < 1: converges, to \frac{1}{1 - p}.
- If p = 1: diverges.
- If p > 1: diverges.
Notice the sign flip. An integrand that is too flat at infinity (small p) makes \int_1^\infty diverge; an integrand that is too steep at zero (large p) makes \int_0^1 diverge. The borderline is again p = 1, and p = 1 is the one case that fails on both sides.
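The sign flip is easy to see numerically by pulling the lower cutoff toward the singularity. A short Python sketch (the helper `partial_from` is invented here):

```python
import math

def partial_from(p, t):
    # the ordinary integral from t to 1, via the antiderivative
    return -math.log(t) if p == 1 else (1 - t ** (1 - p)) / (1 - p)

for p in (0.5, 1.0, 2.0):
    print(p, [partial_from(p, t) for t in (1e-2, 1e-4, 1e-6)])
# p = 0.5 settles near 1/(1 - p) = 2; p = 1.0 and p = 2.0 keep growing
```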
The first worked example
Example 1: $\int_0^1 \frac{dx}{\sqrt{x}}$
Step 1. Identify the improperness. The integrand \frac{1}{\sqrt{x}} blows up at x = 0: as x \to 0^+, \frac{1}{\sqrt{x}} \to \infty. So this is a Type 2a integral, with the singularity at the lower endpoint.
Why: before you can apply any definition, you need to know where the trouble is. The trouble is at x = 0, and the upper endpoint x = 1 is perfectly ordinary.
Step 2. Rewrite as a limit. Replace the lower endpoint 0 with a parameter t, and let t \to 0^+:

\int_0^1 \frac{dx}{\sqrt{x}} = \lim_{t \to 0^+} \int_t^1 \frac{dx}{\sqrt{x}}.
Why: this is the definition of a Type 2a improper integral. You have to avoid the singularity first, compute an ordinary integral, and only then send your avoidance to zero.
Step 3. Compute the ordinary integral.

\int_t^1 \frac{dx}{\sqrt{x}} = \left[2\sqrt{x}\right]_t^1 = 2 - 2\sqrt{t}.
Why: the antiderivative of x^{-1/2} is 2x^{1/2}. This is well-defined as long as t > 0, which it is.
Step 4. Take the limit as t \to 0^+.

\lim_{t \to 0^+} \left(2 - 2\sqrt{t}\right) = 2.
Why: \sqrt{t} \to 0 as t \to 0, so the correction term -2\sqrt{t} vanishes and you are left with the clean number 2.
Result: \int_0^1 \frac{dx}{\sqrt{x}} = 2. The integral converges.
This is a genuinely surprising result. The integrand is not bounded on [0, 1] — it goes to infinity at the left endpoint — and yet the area under it is a finite, specific number. The integrand goes to infinity slowly enough that the integral survives.
Compare with \int_0^1 \frac{dx}{x}, which does diverge (the antiderivative is \ln x, and \ln t \to -\infty). The difference between \frac{1}{\sqrt{x}} and \frac{1}{x} is that \frac{1}{\sqrt{x}} blows up less aggressively near zero, and the less aggressive blow-up is tame enough to integrate.
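The four steps of Example 1 can be replayed numerically with the closed form from Step 3. A tiny Python check (the name `partial_sqrt` is just for this sketch):

```python
import math

def partial_sqrt(t):
    # Step 3 in closed form: the antiderivative of x^(-1/2) is 2*sqrt(x)
    return 2 - 2 * math.sqrt(t)

for t in (0.1, 0.001, 1e-6):
    print(t, partial_sqrt(t))  # Step 4: the values approach 2 as t -> 0+
```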
A second example, this one infinite
Example 2: $\int_0^\infty e^{-x}\,dx$
Step 1. Identify the improperness. The interval is infinite — it stretches from 0 to \infty. This is Type 1a with a = 0. The integrand e^{-x} is perfectly well-behaved on any finite piece of the interval; the only trouble is at \infty.
Why: even though e^{-x} is continuous everywhere, the integral is improper because the upper limit is not a real number.
Step 2. Rewrite as a limit. Replace the upper limit with t and let t \to \infty:

\int_0^\infty e^{-x}\,dx = \lim_{t \to \infty} \int_0^t e^{-x}\,dx.
Step 3. Compute the ordinary integral.

\int_0^t e^{-x}\,dx = \left[-e^{-x}\right]_0^t = 1 - e^{-t}.
Why: the antiderivative of e^{-x} is -e^{-x}. Plugging in the limits gives a clean expression.
Step 4. Take the limit as t \to \infty.

\lim_{t \to \infty} \left(1 - e^{-t}\right) = 1.
Why: e^{-t} decays to zero extremely fast — faster than any polynomial. So the correction term vanishes.
Result: \int_0^\infty e^{-x}\,dx = 1. The integral converges.
Exponential decay is aggressive — by x = 5, the function is already down to about 0.0067, and by x = 10 it is down to about 0.0000454. The "infinite tail" of the integral contributes a negligible amount, and the total area is a crisp 1.
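Those decay figures are easy to confirm: the partial integral is 1 - e^{-t}, so the entire tail beyond the cutoff contributes exactly e^{-t}. A quick Python check:

```python
import math

# partial integral is 1 - e^(-t); the tail past the cutoff is exactly e^(-t)
for t in (5, 10, 20):
    print(t, 1 - math.exp(-t), "tail:", math.exp(-t))
```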
The comparison test: convergence without computation
Sometimes you cannot compute the integral, but you still want to know whether it converges. The comparison test is the workhorse.
Comparison test. Suppose 0 \leq f(x) \leq g(x) on [a, \infty).
- If \int_a^\infty g(x)\,dx converges, then \int_a^\infty f(x)\,dx converges.
- If \int_a^\infty f(x)\,dx diverges, then \int_a^\infty g(x)\,dx diverges.
Picture: the area under f is trapped below the area under g. If the upper area is finite, so is the lower one. If the lower area is infinite, so is the upper one.
The test is usually applied by comparing your integrand to a member of the p-family \frac{1}{x^p}.
Example. Does \int_1^\infty \frac{dx}{x^2 + x + 1} converge?
For x \geq 1, the integrand satisfies \frac{1}{x^2 + x + 1} \leq \frac{1}{x^2} (because the denominator is bigger). And \int_1^\infty \frac{dx}{x^2} converges (it is 1, by the p-test with p = 2). So by the comparison test, \int_1^\infty \frac{dx}{x^2 + x + 1} converges as well.
You did not have to compute the actual value. You just bounded the integrand by something simpler whose behaviour you already knew.
Example. Does \int_1^\infty \frac{x}{x^2 - 0.5}\,dx converge?
For large x, this is roughly \frac{x}{x^2} = \frac{1}{x}, which diverges. More formally, for x \geq 1 the integrand is at least \frac{x}{x^2} = \frac{1}{x} (you can check this with a little algebra), and \int_1^\infty \frac{dx}{x} diverges. So by the comparison test, the original integral diverges too.
The pattern: for convergence, bound your integrand above by something convergent. For divergence, bound it below by something divergent.
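The sandwich can also be watched numerically: on every finite range the partial integral of 1/(x^2 + x + 1) sits below that of 1/x^2, which never exceeds 1. A hedged sketch with a throwaway midpoint-sum helper (not a library quadrature):

```python
def midpoint(f, a, b, n=200_000):
    # crude composite midpoint Riemann sum
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

for t in (10, 100, 1000):
    small = midpoint(lambda x: 1 / (x**2 + x + 1), 1, t)
    big = midpoint(lambda x: 1 / x**2, 1, t)
    print(t, small, "<=", big, "< 1")  # the partial integrals never escape the bound
```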
The limit comparison test
Sometimes a direct comparison is awkward, and a softer form works better: the limit comparison test.
Limit comparison. If f and g are positive on [a, \infty) and

\lim_{x \to \infty} \frac{f(x)}{g(x)} = L, \qquad 0 < L < \infty,
then \int_a^\infty f and \int_a^\infty g both converge or both diverge.
Example. Does \int_1^\infty \frac{\sqrt{x^3 + 1}}{x^4 + 2}\,dx converge? The integrand behaves like \frac{x^{3/2}}{x^4} = \frac{1}{x^{5/2}} for large x. Formally, set g(x) = x^{-5/2}, and compute

\lim_{x \to \infty} \frac{\sqrt{x^3 + 1}/(x^4 + 2)}{x^{-5/2}} = \lim_{x \to \infty} \frac{\sqrt{x^8 + x^5}}{x^4 + 2} = 1.
Since the limit is a positive finite number, both integrals behave the same. And \int_1^\infty x^{-5/2}\,dx converges by the p-test (with p = 5/2 > 1). So the original integral converges.
The limit comparison test is cleaner for rational functions than the direct one, because you do not need to rig up an explicit inequality — you just check the asymptotic behaviour.
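The ratio in the limit comparison is easy to probe numerically. A short Python sketch (`ratio` is a name invented here):

```python
import math

def ratio(x):
    f = math.sqrt(x**3 + 1) / (x**4 + 2)  # the original integrand
    g = x ** -2.5                          # the comparison integrand x^(-5/2)
    return f / g

for x in (10.0, 1e3, 1e6):
    print(x, ratio(x))  # tends to 1, a positive finite limit
```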
Common confusions
- "\int_{-\infty}^\infty x\,dx = 0 by symmetry." It is actually divergent. The correct procedure for a (-\infty, \infty) integral is to split at any c and require both pieces to converge independently. \int_0^\infty x\,dx diverges, so the whole integral diverges — regardless of what happens with the left half.

- "If the integrand goes to zero, the integral converges." False. \frac{1}{x} goes to zero, but \int_1^\infty \frac{dx}{x} diverges. The integrand has to decay fast enough. For the p-test, fast enough means faster than \frac{1}{x}.

- "An integral over an infinite interval is infinite." Also false — the whole point of this article. \int_0^\infty e^{-x}\,dx = 1. A finite area can be spread over an infinite interval as long as the function decays fast enough.

- "The singularity is at an endpoint, so I can still use the fundamental theorem directly." You cannot plug an endpoint into an antiderivative if the antiderivative is undefined at that endpoint. You must go through the limit. In practice, for a convergent integral the limit often gives a clean value — but you still must write the limit step.

- "I can ignore the singularity because it is just one point." An interior singularity is the most dangerous case. \int_{-1}^1 \frac{dx}{x} looks like it might be zero by symmetry, but it is actually divergent — each half of the split has \ln 0 problems, and a divergent half sinks the whole integral.
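The first confusion is worth seeing in numbers: the truncated integral of x depends entirely on how the two cutoffs march to infinity. A minimal sketch (the helper `partial` is invented here; it just evaluates (b^2 - a^2)/2):

```python
def partial(a, b):
    # the ordinary integral of x from a to b: (b^2 - a^2) / 2
    return (b * b - a * a) / 2

for t in (10, 100, 1000):
    print(partial(-t, t), partial(-t, 2 * t))
# symmetric cutoffs give 0 every time; cutoffs (-t, 2t) grow like 3t^2/2
```

Because different cutoff schemes give different answers, no single value can honestly be assigned, which is exactly why the definition demands that each half converge on its own.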
Going deeper
If you can spot the improperness, rewrite the integral as a limit, and apply the comparison tests, you have the full toolkit. The rest of this section looks at the Cauchy principal value, the gamma function, absolute versus conditional convergence, and why the limit definition deserves to be called the improper integral rather than a notational convenience.
The Cauchy principal value
The integral \int_{-1}^1 \frac{dx}{x} is divergent in the usual sense — both halves blow up — but it has a symmetric cancellation that might still be meaningful. Define

\mathrm{PV}\int_{-1}^1 \frac{dx}{x} = \lim_{\epsilon \to 0^+} \left( \int_{-1}^{-\epsilon} \frac{dx}{x} + \int_\epsilon^1 \frac{dx}{x} \right),

where the two small gaps around the singularity are taken to shrink at the same rate. This is called the Cauchy principal value. The computation gives

\int_{-1}^{-\epsilon} \frac{dx}{x} = \ln\epsilon, \qquad \int_\epsilon^1 \frac{dx}{x} = -\ln\epsilon.

Adding: \ln\epsilon + (-\ln\epsilon) = 0, for every \epsilon. So the PV is 0.
The principal value exists even when the ordinary improper integral does not — but only because you have specified a particular way of taking the limit (namely, symmetric shrinking). If you shrink asymmetrically, you get different answers. So PV is weaker than ordinary convergence, and it is a separate concept. For JEE you will almost never need it; for physics (Fourier transforms, contour integrals) it is unavoidable.
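The sensitivity to the shrink rate is easy to exhibit. Using the closed forms \int_{-1}^{-\epsilon} \frac{dx}{x} = \ln\epsilon and \int_\epsilon^1 \frac{dx}{x} = -\ln\epsilon, a small Python sketch (the helper `gapped` is invented here):

```python
import math

def gapped(eps_left, eps_right):
    # ln(eps_left) + (-ln(eps_right)): the two half-integrals with gaps removed
    return math.log(eps_left) - math.log(eps_right)

for eps in (1e-2, 1e-4, 1e-8):
    print(gapped(eps, eps), gapped(eps, 2 * eps))
# symmetric gaps: exactly 0; gaps shrinking as (eps, 2*eps): always -ln 2
```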
The gamma function
The most important improper integral in all of mathematics is the gamma function:

\Gamma(s) = \int_0^\infty x^{s-1} e^{-x}\,dx.
For s > 0, this converges, and it generalises the factorial: \Gamma(n) = (n - 1)! for positive integers n. The integral is improper because of the infinite upper limit (the exponential tames it) and, when s < 1, also because of the singularity at x = 0 (the x^{s-1} factor blows up, but the singularity is mild enough that the integral converges).
Evaluating \Gamma(1/2) = \sqrt{\pi} takes more care than an elementary computation. One way: use the substitution x = u^2 to turn it into

\Gamma(1/2) = \int_0^\infty x^{-1/2} e^{-x}\,dx = 2\int_0^\infty e^{-u^2}\,du,
and then square and convert to polar coordinates. The result is \sqrt{\pi}, a number that shows up all over probability and statistics.
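The polar-coordinate argument is the honest proof, but the value is easy to sanity-check numerically: the tail of e^{-u^2} beyond u = 10 is astronomically small, so a finite truncation suffices. A sketch with a throwaway midpoint-sum helper (not a library quadrature):

```python
import math

def midpoint(f, a, b, n=100_000):
    # crude composite midpoint Riemann sum
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# truncate at u = 10: the neglected tail is smaller than e^(-100)
val = 2 * midpoint(lambda u: math.exp(-u * u), 0, 10)
print(val, math.sqrt(math.pi))  # the two agree to many decimal places
```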
Absolute versus conditional convergence
An improper integral \int_a^\infty f(x)\,dx is called absolutely convergent if \int_a^\infty |f(x)|\,dx converges, and conditionally convergent if it converges but \int_a^\infty |f(x)|\,dx does not.
Example of conditional convergence: \int_0^\infty \frac{\sin x}{x}\,dx = \frac{\pi}{2}. The integrand oscillates, and the positive and negative pieces cancel each other enough to give a finite limit. But \int_0^\infty \frac{|\sin x|}{x}\,dx diverges.
For a comparison test to apply, you need absolute convergence (the comparison test requires f \geq 0, so it can only test the integral of |f|). Conditionally convergent integrals require subtler tools — typically integration by parts, which transfers the oscillation to a less problematic factor.
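Both behaviours show up clearly in a numerical experiment: the partial integrals of \sin x / x hover around \pi/2, while those of |\sin x|/x keep climbing. A hedged Python sketch (crude midpoint sums, adequate here because both integrands are bounded and continuous, and the midpoint rule never evaluates at x = 0):

```python
import math

def midpoint(f, a, b, n=100_000):
    # crude composite midpoint Riemann sum (never hits the endpoint x = 0)
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def f(x):
    return math.sin(x) / x

def g(x):
    return abs(math.sin(x)) / x

for T in (50, 100, 200):
    print(T, midpoint(f, 0, T), midpoint(g, 0, T))
# the f column hovers near pi/2 = 1.5708...; the g column keeps climbing
```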
Why the definition is "right"
You might wonder why the definition of the improper integral is a limit of ordinary integrals, rather than some other kind of extension. The answer is that this definition agrees with the other sensible constructions wherever they overlap: for absolutely convergent integrands it matches the Lebesgue integral, and on every finite truncation it is exactly the ordinary Riemann integral. That agreement is what makes the improper integral a robust mathematical object rather than a notational convenience.
Where this leads next
- Definite Integration - Introduction — the ordinary definite integral that improper integrals generalise.
- Fundamental Theorem of Calculus — the theorem behind the evaluation of the ordinary integral inside each limit.
- Numerical Integration — when the integral cannot be evaluated in closed form, numerical methods take over. They work for proper integrals; improper ones need truncation first.
- Sum of Series Using Integration — many improper integrals arise as limits of discrete sums.
- Gamma Function — the most important example of an improper integral as a function of a parameter.