In short

When you cannot (or do not need to) compute a definite integral exactly, you can still trap it between bounds. The comparison theorem says that if f(x) \leq g(x) on an interval, then \int f \leq \int g. The estimation lemma bounds the integral by the integrand's extreme values times the interval length. The mean value theorem for integrals guarantees that somewhere on the interval, the integrand equals its average value exactly.

Here is a challenge. Without evaluating the integral, decide: is \int_0^1 e^{-x^2}\,dx greater than 1/2 or less than 1/2?

The function e^{-x^2} has no elementary antiderivative — you cannot express \int e^{-x^2}\,dx in terms of standard functions. So computing the integral exactly is not an option. But you can still answer the question.

On [0, 1], the function e^{-x^2} starts at e^0 = 1 and decreases to e^{-1} \approx 0.368. It is always positive and always less than or equal to 1. A crude lower bound: the function is at least e^{-1} \approx 0.368 everywhere on [0, 1], so the integral is at least 0.368 \times 1 = 0.368. A crude upper bound: the function is at most 1 everywhere, so the integral is at most 1 \times 1 = 1.

That traps the integral between 0.368 and 1. To get closer, notice that e^{-x^2} \geq 1 - x^2 on [0, 1] (because e^u \geq 1 + u for all u, applied with u = -x^2). So

\int_0^1 e^{-x^2}\,dx \geq \int_0^1 (1 - x^2)\,dx = 1 - \frac{1}{3} = \frac{2}{3}

And 2/3 > 1/2. The integral is greater than 1/2. You answered the question without ever finding an antiderivative.
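The trap can be checked numerically. Python's standard library exposes the error function, and \int_0^1 e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2}\,\mathrm{erf}(1); a minimal check:

```python
import math

# Exact value: ∫₀¹ e^{-x²} dx = (√π/2)·erf(1), via the standard error function
exact = (math.sqrt(math.pi) / 2) * math.erf(1.0)

lower = 2 / 3   # from e^{-x²} ≥ 1 - x²  (since e^u ≥ 1 + u)
upper = 1.0     # from e^{-x²} ≤ 1

assert lower <= exact <= upper
print(round(exact, 4))  # 0.7468
```

The true value, about 0.747, sits comfortably inside the interval [2/3, 1] derived above.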

This is the power of integral inequalities: they let you reason about integrals you cannot compute. In competition mathematics, in real analysis, and in applied problems, the ability to bound an integral is often more useful than the ability to evaluate it.

The comparison theorem

The simplest and most useful inequality for integrals is this: if one function is always bigger than another on an interval, then its integral is bigger too.

Comparison theorem for definite integrals

If f(x) \leq g(x) for all x in [a, b], then

\int_a^b f(x)\,dx \leq \int_a^b g(x)\,dx

If additionally f(x) < g(x) at at least one point where both functions are continuous, then the inequality is strict.

This is geometrically obvious: the integral is the signed area under the curve. If one curve is always above another, it encloses more area.

On $[0,1]$: the dotted line $y = 1$ sits above $e^{-x^2}$ (solid), which sits above $1 - x^2$ (dashed). The comparison theorem turns these pointwise inequalities into integral inequalities: $2/3 \leq \int_0^1 e^{-x^2}\,dx \leq 1$.

Using the comparison theorem

The typical strategy is:

  1. Find a function g(x) that is simpler than f(x) and satisfies f(x) \leq g(x) on [a,b].
  2. Evaluate \int_a^b g(x)\,dx (which you can do because g is simpler).
  3. Conclude \int_a^b f(x)\,dx \leq \int_a^b g(x)\,dx.

For a lower bound, find h(x) \leq f(x) and integrate h instead.

Common comparison functions: constants (the maximum and minimum of f), polynomials (from Taylor series truncation), and standard functions whose integrals are known.
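The three steps can be sketched in code. This is a numerical illustration, not a proof: the grid check only samples the pointwise inequality, and `midpoint_integral` is a helper name of our choosing:

```python
import math

def midpoint_integral(f, a, b, n=10_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

f = lambda x: math.exp(-x * x)   # the hard integrand
g = lambda x: 1.0                # simpler upper comparison: f ≤ g
h = lambda x: 1 - x * x          # simpler lower comparison: h ≤ f

# Step 1: sanity-check h ≤ f ≤ g on a grid of sample points
assert all(h(x) <= f(x) <= g(x) for x in (i / 100 for i in range(101)))

# Steps 2-3: the integrals inherit the pointwise ordering
assert midpoint_integral(h, 0, 1) <= midpoint_integral(f, 0, 1) <= midpoint_integral(g, 0, 1)
```

The three numerical integrals come out near 2/3, 0.747, and 1, matching the bounds derived earlier.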

Estimation of integrals

The comparison theorem becomes especially powerful when the comparison function is a constant — the maximum or minimum of the integrand on the interval.

Estimation lemma

If m \leq f(x) \leq M for all x \in [a, b], then

m(b - a) \leq \int_a^b f(x)\,dx \leq M(b - a)

This says: the integral is trapped between the smallest and largest rectangles that enclose the region.

In practice, you find m and M by the usual calculus technique: check the values of f at the endpoints and at any critical points inside [a, b].

A sharper version using monotonicity

If f is increasing on [a, b], then its minimum is f(a) and its maximum is f(b), so

f(a)(b-a) \leq \int_a^b f(x)\,dx \leq f(b)(b-a)

If f is decreasing, the inequalities reverse. This is useful because you do not need to hunt for critical points — monotonicity gives you the extreme values for free.
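For a monotone integrand the whole estimate is a two-line computation. A sketch (the helper `monotone_bounds` is our own name), applied to the decreasing function 1/\ln x on [1.5, 2]:

```python
import math

def monotone_bounds(f, a, b, increasing):
    """Estimation-lemma bounds min·(b-a) ≤ ∫ ≤ max·(b-a) for a monotone f."""
    lo, hi = (f(a), f(b)) if increasing else (f(b), f(a))
    return lo * (b - a), hi * (b - a)

# 1/ln x is decreasing on [1.5, 2], so the endpoints give the extremes for free
lo, hi = monotone_bounds(lambda x: 1 / math.log(x), 1.5, 2.0, increasing=False)
print(round(lo, 2), round(hi, 2))  # 0.72 1.23
```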

Triangle inequality for integrals

There is an integral analogue of the triangle inequality |a + b| \leq |a| + |b|:

\left|\int_a^b f(x)\,dx\right| \leq \int_a^b |f(x)|\,dx

The absolute value of the integral is at most the integral of the absolute value. Cancellation between positive and negative parts of f can make the left side smaller, but never larger.

Combining with the estimation lemma: if |f(x)| \leq M on [a,b], then |\int_a^b f(x)\,dx| \leq M(b-a).
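A numerical illustration with f = \sin on [0, 2\pi], where cancellation is total: the left side is essentially 0, while the right side is 4.

```python
import math

def midpoint_integral(f, a, b, n=10_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

a, b = 0.0, 2 * math.pi
lhs = abs(midpoint_integral(math.sin, a, b))               # ≈ 0: the two halves cancel
rhs = midpoint_integral(lambda x: abs(math.sin(x)), a, b)  # ≈ 4: no cancellation
assert lhs <= rhs
```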

The mean value theorem for integrals

The estimation lemma says the integral is between m(b-a) and M(b-a). The mean value theorem for integrals says something stronger: there is a specific point where the integrand equals the average.

Mean value theorem for integrals

If f is continuous on [a, b], then there exists a point c \in (a, b) such that

\int_a^b f(x)\,dx = f(c) \cdot (b - a)

The value f(c) is the average value of f on [a,b]:

f_{\text{avg}} = \frac{1}{b-a}\int_a^b f(x)\,dx

This is the integral version of the intermediate value theorem. Since f is continuous and takes the values m and M somewhere on [a,b], and \frac{\int f}{b-a} lies between m and M, the intermediate value theorem guarantees there is a c where f(c) equals that average.

The curve $f(x) = 0.3x^2 - 1.2x + 2.5$ on $[0.5, 4]$. The dashed horizontal line is at $y = f_{\text{avg}} \approx 1.6$, the average value. The rectangle with this height and the same base has the same area as the region under the curve. The point $c$ where the curve crosses the average-value line is guaranteed to exist by the mean value theorem for integrals.

The geometric picture is vivid: the "bumpy" area under the curve equals the area of a flat rectangle whose height is the average value of f. The function must cross that average-value line at least once — and the crossing point is c.
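For the quadratic in the figure, the crossing point c can be located by bisection on f(x) - f_{\text{avg}} (a sketch; `bisect` here is a hand-rolled helper, not the stdlib module):

```python
def f(x):
    return 0.3 * x**2 - 1.2 * x + 2.5

a, b = 0.5, 4.0
F = lambda x: 0.1 * x**3 - 0.6 * x**2 + 2.5 * x  # an antiderivative of f
f_avg = (F(b) - F(a)) / (b - a)                  # = 1.625

def bisect(g, lo, hi, tol=1e-10):
    """Find a root of g in [lo, hi] by bisection (g must change sign on the bracket)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (g(lo) < 0) == (g(mid) < 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# f starts above the average, dips below it before the vertex at x = 2
c = bisect(lambda x: f(x) - f_avg, a, 2.0)
assert abs(f(c) - f_avg) < 1e-6
```

Here the parabola crosses its average twice (near 0.96 and 3.04); bisection on [0.5, 2] picks out the first crossing.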

Why the mean value theorem matters

The mean value theorem for integrals is not just an existence result. It is the key step in the standard proof of the fundamental theorem of calculus (differentiating F(t) = \int_a^t f(x)\,dx comes down to applying the theorem on a shrinking interval [t, t+h]), and it underlies the error formulas for the midpoint and trapezoid rules in numerical integration.

Worked examples

Example 1: Bounding an integral — estimate $\int_{1.5}^{2} \frac{1}{\ln x}\,dx$

The function f(x) = \frac{1}{\ln x} is positive for x > 1 (where \ln x > 0) and has no elementary antiderivative. Bound its integral over [1.5, 2].

Step 1. Find the extreme values on [1.5, 2] (we avoid x = 1 where \ln x = 0). Since \ln x is increasing, 1/\ln x is decreasing.

f(1.5) = \frac{1}{\ln 1.5} \approx \frac{1}{0.405} \approx 2.47, \quad f(2) = \frac{1}{\ln 2} \approx \frac{1}{0.693} \approx 1.44

Why: on [1.5, 2], the maximum of f is at the left endpoint (where \ln x is smallest), and the minimum is at the right endpoint (where \ln x is largest), because f is decreasing.

Step 2. Apply the estimation lemma with m = f(2) and M = f(1.5), on the interval [1.5, 2] of length 0.5.

1.44 \times 0.5 \leq \int_{1.5}^{2} \frac{dx}{\ln x} \leq 2.47 \times 0.5
0.72 \leq \int_{1.5}^{2} \frac{dx}{\ln x} \leq 1.24

Why: this is the estimation lemma applied directly — the integral is between the minimum height times the width and the maximum height times the width.

Step 3. For a tighter bound, split [1.5, 2] into two subintervals [1.5, 1.75] and [1.75, 2] and bound each separately.

On [1.5, 1.75]: f(1.5) \approx 2.47, f(1.75) = \frac{1}{\ln 1.75} \approx 1.79. Bounds: 1.79 \times 0.25 \leq \int_{1.5}^{1.75} \leq 2.47 \times 0.25, i.e., 0.448 \leq \int \leq 0.618.

On [1.75, 2]: f(1.75) \approx 1.79, f(2) \approx 1.44. Bounds: 1.44 \times 0.25 \leq \int_{1.75}^{2} \leq 1.79 \times 0.25, i.e., 0.360 \leq \int \leq 0.448.

Adding: 0.808 \leq \int_{1.5}^{2} \frac{dx}{\ln x} \leq 1.065.

Why: splitting into subintervals gives tighter bounds because the variation of f on each piece is smaller. This is the basic idea behind numerical integration.

Step 4. The exact value (computed numerically) is \int_{1.5}^2 \frac{dx}{\ln x} \approx 0.916, which indeed lies in the interval [0.808, 1.065].

Result: 0.808 \leq \int_{1.5}^{2} \frac{1}{\ln x}\,dx \leq 1.065.
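Step 3 generalizes to any number of subintervals: for a decreasing integrand, right endpoints give a lower sum, left endpoints give an upper sum, and the gap shrinks as n grows. A sketch:

```python
import math

def decreasing_bounds(f, a, b, n):
    """Estimation-lemma bounds for a decreasing f, refined over n equal subintervals."""
    h = (b - a) / n
    lower = h * sum(f(a + (i + 1) * h) for i in range(n))  # right endpoints: minima
    upper = h * sum(f(a + i * h) for i in range(n))        # left endpoints: maxima
    return lower, upper

f = lambda x: 1 / math.log(x)
for n in (1, 2, 8, 64):
    lo, hi = decreasing_bounds(f, 1.5, 2.0, n)
    print(n, round(lo, 4), round(hi, 4))
```

With n = 1 and n = 2 this reproduces Steps 2 and 3 (up to rounding); by n = 64 the bounds pin the integral down to within about 0.01 of the true value 0.916.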

The curve $y = 1/\ln x$ on $[1.5, 2]$ (solid), with horizontal dashed lines at $y = 1.44$ (minimum) and $y = 2.47$ (maximum). The area under the curve is trapped between the areas of the two rectangles formed by these heights and the base $[1.5, 2]$.

The two dashed horizontal lines form the ceiling and floor of the estimate. The actual area under the curve (solid) is somewhere between the two rectangles — and by splitting into subintervals, you can squeeze the bounds arbitrarily tight.

Example 2: Mean value theorem — find the average value of $f(x) = \sin x$ on $[0, \pi]$

Step 1. Compute the integral.

\int_0^\pi \sin x \, dx = [-\cos x]_0^\pi = -\cos\pi - (-\cos 0) = 1 + 1 = 2

Why: the antiderivative of \sin x is -\cos x. At x = \pi, -\cos\pi = 1. At x = 0, -\cos 0 = -1. The difference is 1 - (-1) = 2.

Step 2. Compute the average value.

f_{\text{avg}} = \frac{1}{\pi - 0}\int_0^\pi \sin x\,dx = \frac{2}{\pi} \approx 0.637

Why: the average value formula divides the integral by the length of the interval.

Step 3. Find the point c where f(c) = f_{\text{avg}}. Solve \sin c = 2/\pi.

c = \arcsin\!\left(\frac{2}{\pi}\right) \approx \arcsin(0.637) \approx 0.690 \text{ radians}

There is also a second solution c = \pi - 0.690 \approx 2.451 (since \sin(\pi - c) = \sin c).

Why: on [0, \pi], \sin x is non-negative and symmetric about \pi/2, so there are two points where it crosses the horizontal line y = 2/\pi.

Step 4. Verify the theorem: the rectangle with height 2/\pi and base \pi has area (2/\pi) \times \pi = 2, matching the integral.

Result: The average value of \sin x on [0, \pi] is \dfrac{2}{\pi} \approx 0.637, attained at c \approx 0.69 and c \approx 2.45.
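The arithmetic can be double-checked with the standard library:

```python
import math

avg = 2 / math.pi        # average value of sin on [0, π]
c1 = math.asin(avg)      # first crossing, ≈ 0.690
c2 = math.pi - c1        # second crossing, ≈ 2.451

assert abs(math.sin(c1) - avg) < 1e-12
assert abs(math.sin(c2) - avg) < 1e-12
assert abs(avg * math.pi - 2) < 1e-12  # rectangle area matches the integral
```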

The sine curve on $[0, \pi]$ with the horizontal line $y = 2/\pi \approx 0.637$ (dashed). The area under the sine curve equals the area of the rectangle with this height. The two marked points $c_1$ and $c_2$ are where the sine curve crosses the average-value line.

The picture shows the "equal area" property clearly: the sine curve bulges above the average-value line near the peak and dips below it near the ends. The area above the line equals the area below the line, so the two regions balance out — which is exactly what it means for 2/\pi to be the average.

Common confusions

Checking the endpoints is not enough. The comparison theorem needs f(x) \leq g(x) at every point of [a, b]; two functions can agree at the endpoints and still cross in between.

The point c is not unique. The mean value theorem for integrals promises at least one point where f(c) equals the average; Example 2 produced two. Continuity of f is also essential: a function with a jump can skip over its average value entirely.

|\int f| is not \int |f|. Cancellation between positive and negative parts can make the magnitude of the integral much smaller than the integral of the absolute value, so the triangle inequality can be far from tight.

Going deeper

The core tools — comparison, estimation, and the mean value theorem — are above. What follows is a collection of stronger inequalities that show up in analysis and competition mathematics.

The Cauchy–Schwarz inequality for integrals

The discrete Cauchy–Schwarz inequality (a_1b_1 + \cdots + a_nb_n)^2 \leq (a_1^2 + \cdots)(b_1^2 + \cdots) has a continuous analogue:

\left(\int_a^b f(x)g(x)\,dx\right)^2 \leq \int_a^b f(x)^2\,dx \cdot \int_a^b g(x)^2\,dx

This is remarkably useful for bounding integrals of products when you know the individual integrals. Equality holds when f and g are proportional: f(x) = \lambda g(x) for some constant \lambda.
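A quick check with f(x) = x and g(x) = x^2 on [0, 1], where every integral is elementary:

```python
# Cauchy-Schwarz on [0, 1] with f(x) = x, g(x) = x²
lhs = (1 / 4) ** 2        # (∫₀¹ x³ dx)² = (1/4)² = 1/16
rhs = (1 / 3) * (1 / 5)   #  ∫₀¹ x² dx · ∫₀¹ x⁴ dx = 1/15
assert lhs <= rhs
```

Here 1/16 < 1/15, and the inequality is strict because f and g are not proportional.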

Jensen's inequality

If \varphi is a convex function (curving upward) and f is integrable on [a,b], then

\varphi\!\left(\frac{1}{b-a}\int_a^b f(x)\,dx\right) \leq \frac{1}{b-a}\int_a^b \varphi(f(x))\,dx

In words: applying a convex function to the average is less than or equal to averaging the convex function pointwise. Taking \varphi(x) = x^2 recovers the continuous AM-QM inequality; applying the reversed inequality for the concave function \varphi = \ln and exponentiating recovers AM-GM.
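A check with the convex function \varphi = \exp and f(x) = x on [0, 1]:

```python
import math

# Jensen with φ = exp, f(x) = x on [0, 1]
average_of_f = 1 / 2          # (1/(1-0)) ∫₀¹ x dx
lhs = math.exp(average_of_f)  # φ(average) = e^{1/2} ≈ 1.6487
rhs = math.e - 1              # average of φ∘f: ∫₀¹ eˣ dx = e - 1 ≈ 1.7183
assert lhs <= rhs
```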

Monotonicity of integrals in the limits

If f(x) \geq 0 on [a, b], then the function F(t) = \int_a^t f(x)\,dx is increasing on [a, b]. This obvious-sounding fact is surprisingly useful: it means that for non-negative integrands, extending the interval can only increase the integral. Combined with the fundamental theorem of calculus, it gives F'(t) = f(t) \geq 0, a direct proof of monotonicity.
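A numerical sanity check with the non-negative integrand e^{-x^2}: the cumulative integral F(t) only grows with t.

```python
import math

def F(t, a=0.0, n=2000):
    """F(t) = ∫_a^t e^{-x²} dx, approximated by the midpoint rule."""
    h = (t - a) / n
    return h * sum(math.exp(-((a + (i + 0.5) * h) ** 2)) for i in range(n))

values = [F(t) for t in (0.5, 1.0, 1.5, 2.0)]
assert all(u <= v for u, v in zip(values, values[1:]))  # non-negative integrand ⇒ F increasing
```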

Bounding integrals via Taylor remainders

If f has a known Taylor expansion, you can bound the integral using the remainder term. For example, \sin x = x - x^3/6 + R_5(x) where |R_5(x)| \leq x^5/120. So

\int_0^1 \sin x\,dx = \int_0^1 \left(x - \frac{x^3}{6}\right)dx + \int_0^1 R_5(x)\,dx

The first integral is 1/2 - 1/24 = 11/24. The error term satisfies |\int_0^1 R_5| \leq 1/720 < 0.0014. So \int_0^1 \sin x\,dx \approx 11/24 with error less than 0.0014. (The exact value is 1 - \cos 1 \approx 0.4597, and 11/24 \approx 0.4583.)
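Checking numerically: the true error is about 0.00136, just under the bound 1/720 \approx 0.00139.

```python
import math

approx = 11 / 24          # ∫₀¹ (x - x³/6) dx = 1/2 - 1/24
exact = 1 - math.cos(1)   # ∫₀¹ sin x dx
error_bound = 1 / 720     # ∫₀¹ x⁵/120 dx bounds the remainder's integral

assert abs(exact - approx) <= error_bound
```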

Where this leads next