In short
A function that is strictly increasing on an interval never takes the same value twice, and always preserves the order of its inputs. That tiny fact is enough to prove inequalities like \sin x < x for x > 0, to read the range of a function off its endpoints, and to decide whether a composition f(g(x)) increases or decreases.
Here is a puzzle. Which is bigger, \sin 1 or 1? You cannot just reach for a calculator — the question is about why, not which. And \sin 1 is unfriendly enough that expanding it as 1 - \frac{1}{6} + \frac{1}{120} - \dots and adding up terms is ugly.
There is a cleaner way. Define g(x) = x - \sin x. Then g(0) = 0 - 0 = 0. If you can show that g is increasing for x > 0, then g(1) > g(0) = 0, which is exactly the inequality 1 > \sin 1. So the whole question reduces to one thing: is g increasing?
Differentiate. g'(x) = 1 - \cos x. Since \cos x \leq 1 always, g'(x) \geq 0. The derivative is never negative, so g is non-decreasing, and in fact strictly increasing except at isolated points where \cos x = 1. That's the whole proof. You have turned an inequality about a transcendental function into a one-line computation about a derivative.
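If you want to see this numerically before trusting the calculus, a quick grid check (a spot-check, not a proof) confirms both claims: g never dips below zero and never moves down.

```python
import math

# Sanity check of g(x) = x - sin x on a grid: every sampled value is
# positive, and the samples increase, so sin 1 < 1 in particular.
def g(x):
    return x - math.sin(x)

xs = [i * 0.01 for i in range(1, 500)]  # samples in (0, 5)
values = [g(x) for x in xs]

assert all(v > 0 for v in values)                       # g(x) > 0
assert all(a < b for a, b in zip(values, values[1:]))   # g increasing
print(math.sin(1) < 1)  # → True
```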
This is not a lucky trick. It is the entire reason monotonicity matters: once you know a function is increasing, order gets preserved, and order-preservation is exactly the engine that turns calculus into a tool for proving inequalities.
The order-preservation principle
Before the applications, the single fact you will lean on all through this article:
The ordering principle
If f is strictly increasing on an interval I, then for any two points a, b in I with a < b:

f(a) < f(b)

If f is strictly decreasing, the inequality flips:

f(a) > f(b)
That is all. You push inputs through the function and the inequality either stays the same or flips, depending on which way the function is going.
And from the previous article you already have the tool to check monotonicity from a derivative: if f'(x) > 0 on an interval, then f is strictly increasing there; if f'(x) < 0, strictly decreasing. So the full workflow is:
- Rewrite the thing you want to prove as "g(x) > 0" or "g(x) < 0".
- Check what g is at a convenient starting point (usually an endpoint where things vanish).
- Look at g'(x) and decide whether g is going up or down from there.
- Combine.
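The workflow can even be sketched as a rough numeric checker. This is an illustration of the idea, not a proof device, and the helper name check_inequality is ours:

```python
import math

# A rough numeric version of the workflow (illustration, not a proof):
# rewrite the target as g(x) > 0, check g at the base point a, then
# sample g' to see which way g moves.
def check_inequality(g, dg, a, b, n=1000):
    """True if g(a) >= 0 and dg >= 0 at n sample points of (a, b)."""
    if g(a) < 0:
        return False
    h = (b - a) / n
    return all(dg(a + i * h) >= 0 for i in range(1, n))

# Instance: ln(1+x) < x on (0, 10), i.e. g(x) = x - ln(1+x) > 0.
ok = check_inequality(lambda x: x - math.log1p(x),
                      lambda x: 1 - 1 / (1 + x),
                      0.0, 10.0)
print(ok)  # → True
```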
Proving inequalities
Take a harder one. For x > 0, show that

\ln(1+x) < x
Define g(x) = x - \ln(1+x). At x = 0, g(0) = 0 - \ln 1 = 0. Now differentiate:

g'(x) = 1 - \frac{1}{1+x} = \frac{x}{1+x}
For x > 0, both numerator and denominator are positive, so g'(x) > 0. The function g is strictly increasing on (0, \infty). Since g(0) = 0 and g only goes up from there, g(x) > 0 for every x > 0. That is exactly x - \ln(1+x) > 0, or \ln(1+x) < x. Done.
Now a slightly different shape. Show that e^x \geq 1 + x for every real x, with equality only at x = 0.
Let g(x) = e^x - 1 - x. Then g(0) = 1 - 1 - 0 = 0 and g'(x) = e^x - 1. The derivative is zero at x = 0, positive for x > 0, negative for x < 0. So g is decreasing on the left of zero and increasing on the right — meaning x = 0 is a minimum. The minimum value is g(0) = 0. Therefore g(x) \geq 0 everywhere, with equality only at x = 0. That is the inequality.
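The two-interval argument is easy to spot-check on a grid; the sketch below samples g and g' on both sides of zero (numerics, not a proof):

```python
import math

# Spot-check that g(x) = e^x - 1 - x has its minimum 0 at x = 0:
# slope negative to the left of zero, positive to the right.
g = lambda x: math.exp(x) - 1 - x
dg = lambda x: math.exp(x) - 1

xs = [i * 0.05 for i in range(-100, 101)]  # samples in [-5, 5]
assert all(g(x) >= 0 for x in xs)          # g never goes below 0
assert all(dg(x) < 0 for x in xs if x < 0) # decreasing on the left
assert all(dg(x) > 0 for x in xs if x > 0) # increasing on the right
print(min(g(x) for x in xs))  # → 0.0 (attained at x = 0)
```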
Notice what changed. In the first problem the derivative was positive everywhere on the interval, so one stroke of monotonicity was enough. In this second one the derivative changed sign, so you had to split the line into two intervals and show that each piece pushes g toward the minimum. Same idea, same tool, slightly more bookkeeping.
The template is always the same. To prove A(x) > B(x) on some interval:
- set g(x) = A(x) - B(x)
- check g at a starting point where you know its value
- use g' to decide the direction
- chain the order
Example 1: Proving $\tan x > x$ for $0 < x < \pi/2$
Step 1. Let g(x) = \tan x - x, which is defined on (-\pi/2, \pi/2). At x = 0, g(0) = \tan 0 - 0 = 0. Why: you want a "comparison function" that starts at zero, so any positive drift after that is proof of the inequality.
Step 2. Compute g'(x).

g'(x) = \sec^2 x - 1

Why: the derivative of \tan x is \sec^2 x, and the derivative of x is 1. Subtract.
Step 3. Simplify and sign-check.

g'(x) = \sec^2 x - 1 = \tan^2 x

Why: the Pythagorean identity \sec^2 x = 1 + \tan^2 x. This is the cleanest form — a square — so its sign is easy to read.
Step 4. Since \tan^2 x \geq 0 always, and is strictly positive for 0 < x < \pi/2 (because \tan x is nonzero there), g'(x) > 0 on the whole open interval (0, \pi/2). So g is strictly increasing on [0, \pi/2). Combined with g(0) = 0:

g(x) > g(0) = 0 \quad \text{for } 0 < x < \pi/2

That is \tan x - x > 0, or \tan x > x.
Result: \tan x > x for all x \in (0, \pi/2).
The figure shows what the algebra proved. Both curves leave the origin at the same slope, but \tan x bends upward immediately while y = x stays straight. The gap is exactly g(x), and that gap is positive for every x to the right of zero.
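A grid check of the result (samples only, not part of the proof) confirms both the positivity and the strict increase of g on (0, \pi/2):

```python
import math

# Spot-check of Example 1: g(x) = tan x - x should be positive
# and strictly increasing on (0, pi/2).
g = lambda x: math.tan(x) - x

xs = [i * (math.pi / 2) / 1000 for i in range(1, 1000)]
vals = [g(x) for x in xs]
assert all(v > 0 for v in vals)                    # tan x > x
assert all(a < b for a, b in zip(vals, vals[1:]))  # g increasing
print("tan x > x holds on all samples")
```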
Finding the range of a function
Here is another use that looks very different but is really the same idea.
Suppose you want the range of f(x) = \frac{x}{1 + x^2} — every value f ever takes as x ranges over the reals. You could try to solve y = \frac{x}{1+x^2} for x as a function of y, which gives yx^2 - x + y = 0, a quadratic in x. The quadratic has a real solution when its discriminant is non-negative: 1 - 4y^2 \geq 0, meaning -\frac{1}{2} \leq y \leq \frac{1}{2}. That works — but it leaned on a trick that only applies when the rearranged equation happens to be quadratic. Monotonicity gives a cleaner path that works more generally.
Differentiate:

f'(x) = \frac{(1+x^2) \cdot 1 - x \cdot 2x}{(1+x^2)^2} = \frac{1 - x^2}{(1+x^2)^2}
The denominator is positive. The numerator is positive when |x| < 1, zero at x = \pm 1, and negative when |x| > 1. So f is:
- decreasing on (-\infty, -1)
- increasing on (-1, 1)
- decreasing on (1, \infty)
At x = -1, f(-1) = -\frac{1}{2}. At x = 1, f(1) = \frac{1}{2}. As x \to \pm\infty, f(x) \to 0. Put those pieces together: coming in from the left, f slides down from 0 to its lowest point -\frac{1}{2} at x = -1, then rises monotonically to its peak +\frac{1}{2} at x = 1, then falls back toward 0 on the right. Because each piece is monotone, the function takes every value between its extremes and nothing outside them.
The range is \left[-\frac{1}{2}, \frac{1}{2}\right].
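A brute-force sample of f over a wide interval agrees with the computed range; this is a sanity check, not part of the argument:

```python
# Sample f(x) = x / (1 + x^2) on a dense grid: values stay inside
# [-1/2, 1/2], with the endpoints attained at x = -1 and x = 1.
f = lambda x: x / (1 + x * x)

xs = [i * 0.001 for i in range(-100000, 100001)]  # grid on [-100, 100]
vals = [f(x) for x in xs]
assert all(-0.5 <= v <= 0.5 for v in vals)
assert f(-1) == -0.5 and f(1) == 0.5
print(max(vals), min(vals))  # ≈ 0.5, -0.5
```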
Why monotonicity gives you the range. A continuous function that is monotone on an interval [a, b] takes every value between f(a) and f(b) (by the intermediate value theorem) and nothing else (by monotonicity — any other value would require the function to come back and cross itself). So on each monotone piece, the image is just the interval between the endpoint values. The full range is the union of those pieces.
The general recipe:
- Find where f is monotone — split the domain at the points where f' changes sign.
- Compute f at each boundary (critical points and limits at infinity).
- Read off the image of each monotone piece.
- Union them.
Compare this to the discriminant trick at the top of the section. The discriminant approach worked because the rearranged equation happened to be quadratic. If the function had been f(x) = \frac{x}{1 + x^4} instead — a very similar-looking creature — the rearrangement y(1 + x^4) = x gives a quartic in x, and the real-solution condition is no longer a clean inequality in y. Monotonicity, by contrast, still works: differentiate, find where f' changes sign, evaluate at the critical points, read the range. The procedure is indifferent to the algebraic complexity of the function, because it only ever uses local information (the sign of the derivative).
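To make that claim concrete, here is the same recipe run numerically on the quartic cousin. The derivative f'(x) = \frac{1 - 3x^4}{(1 + x^4)^2} is our own quotient-rule computation, not from the text above; its positive root gives the maximum:

```python
# The recipe on f(x) = x / (1 + x^4): locate the critical points from
# f'(x) = (1 - 3x^4) / (1 + x^4)^2 and read the range off them.
f = lambda x: x / (1 + x ** 4)

x_star = (1 / 3) ** 0.25   # positive root of 1 - 3x^4 = 0
top = f(x_star)            # maximum value; the minimum is -top since f is odd
xs = [i * 0.001 for i in range(-50000, 50001)]  # grid on [-50, 50]
assert all(-top - 1e-12 <= f(x) <= top + 1e-12 for x in xs)
print(top)  # ≈ 0.5699
```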
Monotonicity of composite functions
The third big application has a surprisingly elegant answer, and once you see it, you will recognise it in every problem.
Suppose h(x) = f(g(x)) — a function built by applying g first, then f. When is h increasing? The chain rule says h'(x) = f'(g(x)) \cdot g'(x). For h to increase, you need h'(x) > 0, which means f'(g(x)) and g'(x) must have the same sign.
That translates to a rule you can state without a single derivative:
Composition rule for monotonicity
For the composition h(x) = f(g(x)) on an interval where both pieces are defined:
- increasing ∘ increasing = increasing
- decreasing ∘ decreasing = increasing
- increasing ∘ decreasing = decreasing
- decreasing ∘ increasing = decreasing
In one sentence: the composition is increasing iff f and g have the same monotonicity type, and decreasing iff they have opposite types.
The "same sign wins, different sign loses" rule is just the multiplication of signs in the chain rule, turned into words. You can also see it as a natural extension of how negatives behave in ordinary multiplication: (+) \cdot (+) = +, (-) \cdot (-) = +, (+) \cdot (-) = -. Increase counts as + and decrease counts as -; the composition is the product.
Take h(x) = \ln(1 + x^2) on (-\infty, 0). The inner function g(x) = 1 + x^2 is decreasing on (-\infty, 0) (its derivative is 2x, which is negative there). The outer function f(u) = \ln u is increasing on (0, \infty). Increasing \circ decreasing = decreasing. So h is decreasing on (-\infty, 0) — no chain-rule computation needed.
Check it anyway: h'(x) = \frac{2x}{1+x^2}, which is indeed negative on (-\infty, 0). The rule worked.
A second test. Take h(x) = e^{-x^2} on (0, \infty). The inner function g(x) = -x^2 is decreasing on (0, \infty). The outer function f(u) = e^u is increasing everywhere. Increasing \circ decreasing = decreasing. So h decreases on (0, \infty), which matches what you know about the bell curve — it is the classic example of a function that peaks at zero and drops off on both sides.
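Both examples can be checked by sampling. The helper below encodes increasing as +1 and decreasing as -1 (our own convention, matching the sign arithmetic above):

```python
import math

# Confirm the composition rule on the article's two examples by
# sampling: +1 means the values rise along the grid, -1 means they fall.
def direction(h, xs):
    """+1 if h strictly increases along xs, -1 if it strictly decreases."""
    vals = [h(x) for x in xs]
    if all(a < b for a, b in zip(vals, vals[1:])):
        return +1
    if all(a > b for a, b in zip(vals, vals[1:])):
        return -1
    return 0  # not monotone on these samples

neg = [-5 + i * 0.01 for i in range(1, 490)]  # samples in (-5, -0.1)
pos = [0.1 + i * 0.01 for i in range(490)]    # samples in (0.1, 5)

# increasing ln ∘ decreasing 1 + x^2 (on x < 0) = decreasing
assert direction(lambda x: math.log(1 + x * x), neg) == -1
# increasing exp ∘ decreasing -x^2 (on x > 0) = decreasing
assert direction(lambda x: math.exp(-x * x), pos) == -1
print("both compositions decrease, as the rule predicts")
```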
Example 2: The monotonicity of $f(x) = \sqrt{4 - x^2}$
This is a good case because the function is a composition and you should be able to read its behaviour off the rule.
Step 1. Identify the pieces. Write f(x) = u(v(x)) where v(x) = 4 - x^2 and u(t) = \sqrt{t}. The domain is -2 \leq x \leq 2, where v(x) \geq 0. Why: a composition only makes sense where the inner function lands in the domain of the outer one — here, where 4 - x^2 is non-negative.
Step 2. Figure out the monotonicity of each piece.
- v(x) = 4 - x^2 has v'(x) = -2x. Positive for x < 0, negative for x > 0. So v is increasing on [-2, 0] and decreasing on [0, 2].
- u(t) = \sqrt{t} has u'(t) = \frac{1}{2\sqrt{t}} > 0 on (0, \infty). So u is increasing on its whole domain.
Why: you have pinned down the direction of each piece. Now you can apply the composition rule on each subinterval.
Step 3. Apply the rule on each subinterval.
- On [-2, 0]: u increasing \circ v increasing = increasing.
- On [0, 2]: u increasing \circ v decreasing = decreasing.
Step 4. Confirm with the chain rule. f'(x) = \frac{-2x}{2\sqrt{4-x^2}} = \frac{-x}{\sqrt{4-x^2}}. This is positive for x < 0 and negative for x > 0, matching the rule exactly.
Result: f is increasing on [-2, 0] and decreasing on [0, 2], with a maximum at x = 0 where f(0) = 2.
The picture matches the rule exactly. The rule essentially is the shape of the semicircle — up, then down.
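A grid check of Example 2 (samples, not a proof) confirms the up-then-down shape:

```python
import math

# Spot-check: f(x) = sqrt(4 - x^2) rises on [-2, 0], falls on [0, 2],
# and peaks at f(0) = 2.
f = lambda x: math.sqrt(4 - x * x)

left = [-2 + i * 0.01 for i in range(201)]   # grid on [-2, 0]
right = [i * 0.01 for i in range(201)]       # grid on [0, 2]
lv = [f(x) for x in left]
rv = [f(x) for x in right]
assert all(a < b for a, b in zip(lv, lv[1:]))  # increasing on [-2, 0]
assert all(a > b for a, b in zip(rv, rv[1:]))  # decreasing on [0, 2]
print(f(0))  # → 2.0
```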
Common confusions
A few places where students slip when applying monotonicity.
- "f'(x) \geq 0 on I" means "strictly increasing." Not quite. f'(x) \geq 0 only gives you non-decreasing. Strictly increasing requires f'(x) > 0 except possibly at isolated points. A function like f(x) = x + \sin x is strictly increasing even though its derivative 1 + \cos x hits 0 at isolated points — but a function like f(x) = \lfloor x \rfloor has derivative 0 on whole intervals and is certainly not strictly increasing.
- Forgetting to check the starting value when proving an inequality. The monotonicity of g = A - B only tells you whether the gap is growing or shrinking. You still need to know where it starts. If g(0) = 5 and g is decreasing, you do not get g > 0 — you get g < 5, which might still go negative.
- Treating the range like it equals [f(\text{smallest }x), f(\text{largest }x)]. That is only true if f is monotone on the whole interval. If the function has interior maxima and minima, the range is determined by the extrema, not the endpoints. Split into monotone pieces first.
- "increasing \circ decreasing" — which wins? Neither "wins" individually; the rule is that two mismatched monotonicities compose to decreasing. A useful mnemonic: \text{same} \to \text{up}, \text{mixed} \to \text{down}.
- Ignoring the domain when composing. \ln(x^2 - 4) is a composition, but it is only defined where x^2 - 4 > 0. Applying the composition rule on a part of the real line where the inner function lands outside the domain of the outer one gives you nonsense.
Going deeper
You have the three main applications — inequalities, range, composition. If you are here for JEE, that is genuinely enough, and you can stop. The rest of this section is about why the inequality method works, how it connects to Taylor series, and one surprising case where strict monotonicity still fails even though f' > 0 almost everywhere.
Why the g(a) = 0 trick is not a trick
The inequality workflow — "set g = A - B, check g at a starting point, show g goes the right way" — is really just the mean value theorem in disguise.
If g is differentiable on an interval and g(a) = 0, then for any x > a the mean value theorem gives you some c \in (a, x) with

g(x) - g(a) = g'(c)(x - a)
Since g(a) = 0, this simplifies to g(x) = g'(c)(x - a). If g' is positive on the whole interval, the right-hand side is positive, so g(x) > 0. That is exactly the inequality, proved without even mentioning monotonicity.
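For g(x) = x - \sin x on [0, 1] the mean value point can even be found in closed form: 1 - \cos c = g(1) solves to c = \arccos(\sin 1), which is \pi/2 - 1. A quick numeric check:

```python
import math

# The MVT point for g(x) = x - sin x on [0, 1]: some c in (0, 1)
# satisfies g(1) - g(0) = g'(c) * (1 - 0), with g'(x) = 1 - cos x.
g = lambda x: x - math.sin(x)
dg = lambda x: 1 - math.cos(x)

# Solve 1 - cos c = g(1) directly: c = arccos(1 - g(1)) = arccos(sin 1)
c = math.acos(1 - g(1))
assert 0 < c < 1
assert abs(dg(c) * (1 - 0) - (g(1) - g(0))) < 1e-12
print(c)  # ≈ 0.5708, which is pi/2 - 1
```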
So the "monotonicity proof" of an inequality and the "mean value theorem proof" are the same proof, dressed differently. Both come down to: the sign of the derivative tells you the direction of the change, and integrating the change from a known value gives you the new value.
The chain from Taylor series
A lot of the classical inequalities — \sin x < x, 1 - \cos x < \frac{x^2}{2}, e^x > 1 + x + \frac{x^2}{2} for x > 0, \ln(1+x) < x - \frac{x^2}{2} + \frac{x^3}{3} — are actually partial sums of a Taylor series compared against the function itself. The Taylor remainder controls the error, and its sign (determined by the sign of the next derivative) is exactly what monotonicity detects.
Concretely: define g_n(x) = f(x) - T_n(x), where T_n is the n-th degree Taylor polynomial of f at 0. Then g_n(0) = 0, g_n'(0) = 0, ..., and the first nonzero derivative at 0 is g_n^{(n+1)}(0) = f^{(n+1)}(0). The sign of this high derivative propagates down through repeated monotonicity arguments — integrate the sign once and you control g_n^{(n)}, integrate again to control g_n^{(n-1)}, and so on back to g_n itself.
This is why every standard inequality about \sin, \cos, \ln, \exp can be proved by monotonicity: they are all really statements about Taylor remainders.
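A spot-check of the alternating-bracket picture for \sin (samples only, not a proof): successive partial sums bound the function from opposite sides.

```python
import math

# For x > 0 the Taylor partial sums of sin bracket it from alternating
# sides; the first two brackets give x - x^3/6 < sin x < x.
for i in range(1, 300):
    x = i * 0.01  # samples in (0, 3)
    assert x - x ** 3 / 6 < math.sin(x) < x
print("brackets hold on all samples")
```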
When f' > 0 is not enough
Here is a pathology worth knowing about. There exist continuous functions f: \mathbb{R} \to \mathbb{R} with f'(x) > 0 for every x except on a set of measure zero — and yet f is not strictly increasing on any neighbourhood of certain points. These are constructed using fat Cantor-like sets and are not things you will meet in JEE problems. For any reasonable function — continuous with piecewise continuous derivative, which covers everything in your syllabus — the rule "if f' > 0 on an interval then f is strictly increasing there" is bulletproof. But when you later study real analysis you will learn that the conversion from "derivative positive" to "function increasing" actually uses the mean value theorem in an essential way, and the mean value theorem needs continuity of f — not of f'.
A harder inequality worked in full
Here is a classical inequality that will give you a sense of how far monotonicity alone can push you. Show that for x > 0,

\frac{x}{1+x} < \ln(1+x) < x
You already proved the right half. The left half is new.
Define h(x) = \ln(1 + x) - \frac{x}{1 + x}. At x = 0, h(0) = 0 - 0 = 0. Differentiate:

h'(x) = \frac{1}{1+x} - \frac{1}{(1+x)^2} = \frac{x}{(1+x)^2}
For x > 0, both numerator and denominator are positive, so h'(x) > 0 and h is strictly increasing on (0, \infty). Combined with h(0) = 0: h(x) > 0 for x > 0, which gives \ln(1+x) > \frac{x}{1+x}. That is the left inequality.
Put the two together and the full chain \frac{x}{1+x} < \ln(1+x) < x is proved — both halves using exactly the same template, with two different comparison functions. This is a microcosm of how most inequality proofs in analysis work: pick g, evaluate at a base point, differentiate, sign-check.
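And the finished chain, sampled on a grid as a final sanity check (not a proof):

```python
import math

# Spot-check of the full chain x/(1+x) < ln(1+x) < x for x > 0.
for i in range(1, 1000):
    x = i * 0.01  # samples in (0, 10)
    assert x / (1 + x) < math.log1p(x) < x
print("chain verified on all samples")
```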
Connection to what comes next
Monotonicity is the first-derivative view of a function: where it goes up, where it goes down. The next piece is concavity, which is the second-derivative view: how the slope itself is changing. Together, they give you enough information to sketch any reasonable curve by hand, which is the subject of the curve-sketching article.
There is also a deeper connection worth flagging. Many of the sharpest inequalities in analysis — Jensen's inequality, the AM-GM inequality, Cauchy-Schwarz — can be thought of as statements about monotonicity or concavity of carefully chosen auxiliary functions. Once you are comfortable with the g(a) = 0, g' > 0 pattern, you have the seed of an entire method for proving inequalities, and it extends well past one-variable calculus into multivariable analysis, probability, and optimisation theory.
Where this leads next
The three applications in this article — inequality proofs, range-finding, composition — all feed into the bigger machinery of using derivatives to describe functions.
- Maxima and Minima - First Derivative Test — the formal rule that turns "where f' changes sign" into "where f has a local peak or valley."
- Concavity and Points of Inflection — the second-derivative analogue of monotonicity, describing how the slope itself bends.
- Mean Value Theorems — the formal engine behind "derivative positive ⇒ function increasing," which you have been using all article.
- Curve Sketching — combining monotonicity, concavity, and asymptotes to draw the graph of a function by hand.
- Optimization Problems — using monotonicity and extrema to solve "what is the biggest / smallest possible value" problems from geometry, physics, and economics.