In short
A function is increasing on an interval if larger inputs give larger outputs, and decreasing if larger inputs give smaller outputs. For differentiable functions, there is a clean test: f'(x) > 0 throughout an interval means f is strictly increasing on it, and f'(x) < 0 throughout an interval means f is strictly decreasing on it. The proof of this test uses the Mean Value Theorem.
Take the function f(x) = x^2. Plot it in your head: a parabola opening upward, with its lowest point at the origin. To the left of zero, the curve falls as you move right. To the right of zero, the curve rises as you move right. At x = 0 itself, the curve is momentarily flat — neither rising nor falling.
Now look at the derivative: f'(x) = 2x. When x < 0, f'(x) < 0 — the derivative is negative, and the function is falling. When x > 0, f'(x) > 0 — the derivative is positive, and the function is rising. When x = 0, f'(0) = 0 — the derivative is zero, and the function is flat.
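A minimal numeric check of this correspondence: at a few sample points, the function rises just to the right of x exactly when the derivative is positive there.

```python
# Numeric check: the sign of f'(x) = 2x matches the direction of f(x) = x^2.
def f(x):
    return x * x

def fprime(x):
    return 2 * x

h = 1e-6  # small step for comparing nearby outputs
for x in [-2.0, -0.5, 0.5, 2.0]:
    rising = f(x + h) > f(x)          # does f go up just to the right of x?
    positive_slope = fprime(x) > 0    # is the derivative positive at x?
    print(x, rising, positive_slope)  # the last two columns always agree
```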
This is not a coincidence. The sign of the derivative and the direction of the function are locked together. This connection has a name — monotonicity — and it is one of the most useful ideas in calculus.
Increasing and decreasing functions
Before connecting this to derivatives, the idea of "increasing" and "decreasing" needs a precise definition that does not rely on pictures.
Increasing and decreasing functions
A function f is increasing on an interval I if, for every pair of points x_1, x_2 \in I with x_1 < x_2, we have f(x_1) \le f(x_2).
A function f is decreasing on an interval I if, for every pair of points x_1, x_2 \in I with x_1 < x_2, we have f(x_1) \ge f(x_2).
In words: increasing means "move right, go up (or stay level)." Decreasing means "move right, go down (or stay level)."
Notice the \le and \ge — these allow the function to stay flat. The function f(x) = 3 (a constant) is technically both increasing and decreasing, because 3 \le 3 and 3 \ge 3 are both true. That is a quirk of the definition, not a deep issue.
Strictly monotonic functions
If you want to exclude the flat case, you use the strict versions.
Strictly increasing and strictly decreasing
A function f is strictly increasing on an interval I if, for every pair x_1 < x_2 in I, we have f(x_1) < f(x_2).
A function f is strictly decreasing on an interval I if, for every pair x_1 < x_2 in I, we have f(x_1) > f(x_2).
A function that is either strictly increasing or strictly decreasing on I is called strictly monotonic on I.
Strictly increasing means the function genuinely rises — no flat stretches allowed. Strictly decreasing means the function genuinely falls.
A strictly monotonic function has a crucial property: it is one-to-one. If f is strictly increasing and f(x_1) = f(x_2), then you cannot have x_1 \neq x_2 (because that would give f(x_1) \neq f(x_2)). This means strictly monotonic functions have inverses — and that is exactly why \ln x exists as the inverse of e^x, and \arcsin x exists as the inverse of \sin x on [-\pi/2, \pi/2].
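The one-to-one property is what makes numerical inversion possible. Here is a sketch (the `inverse` helper is a hypothetical illustration, not a library function): because a strictly increasing function hits each value in its range exactly once, bisection can locate the unique preimage.

```python
# Sketch: inverting a strictly increasing function by bisection.
import math

def inverse(f, y, lo, hi, tol=1e-12):
    """Find x in [lo, hi] with f(x) = y, assuming f is strictly increasing."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid      # target lies to the right of mid
        else:
            hi = mid      # target lies at or to the left of mid
    return (lo + hi) / 2

# exp is strictly increasing, so its inverse (ln) can be recovered numerically:
x = inverse(math.exp, 2.0, 0.0, 2.0)
print(x, math.log(2.0))  # the two values agree to high precision
```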
The derivative test for monotonicity
Here is the central result. The sign of the derivative controls the direction of the function.
Monotonicity test
Let f be continuous on [a, b] and differentiable on (a, b).
- If f'(x) > 0 for all x \in (a, b), then f is strictly increasing on [a, b].
- If f'(x) < 0 for all x \in (a, b), then f is strictly decreasing on [a, b].
- If f'(x) = 0 for all x \in (a, b), then f is constant on [a, b].
- If f'(x) \ge 0 for all x \in (a, b), then f is increasing (non-decreasing) on [a, b].
- If f'(x) \le 0 for all x \in (a, b), then f is decreasing (non-increasing) on [a, b].
Proof of the strictly increasing case
Take any two points x_1, x_2 \in [a, b] with x_1 < x_2. The function f is continuous on [x_1, x_2] and differentiable on (x_1, x_2). By the Mean Value Theorem, there exists a point c \in (x_1, x_2) such that

f(x_2) - f(x_1) = f'(c)(x_2 - x_1).

Now, f'(c) > 0 (by hypothesis, since c \in (a, b)) and x_2 - x_1 > 0 (since x_1 < x_2). The product of two positive numbers is positive, so

f(x_2) - f(x_1) > 0, that is, f(x_1) < f(x_2).

Since this holds for every pair x_1 < x_2 in [a, b], the function is strictly increasing. \blacksquare
The proof for strictly decreasing is identical, with the inequalities reversed. The proof for the constant case follows from f'(c) = 0, which gives f(x_2) - f(x_1) = 0.
Notice what is happening: the MVT converts the local information "f' is positive at each point" into the global conclusion "f is increasing across the whole interval." Without the MVT, there is no way to make this jump — it is the engine of the proof.
A subtle point: f'(x) \ge 0 does not guarantee strictly increasing
If f'(x) \ge 0 on (a, b), the function is increasing but not necessarily strictly increasing. Consider f(x) = c (a constant) — here f'(x) = 0 \ge 0, but f is not strictly increasing.
However, if f'(x) \ge 0 on (a, b) and f' is not identically zero on any sub-interval of (a, b), then f is strictly increasing. The function f(x) = x^3 on [-1, 1] is an example: f'(x) = 3x^2 \ge 0, and f'(0) = 0, but f' is zero only at the single point x = 0, not on an interval. So x^3 is strictly increasing on [-1, 1].
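A quick numeric check of this claim, sampling x^3 across [-1, 1]: despite the zero derivative at the single point x = 0, every step to the right strictly raises the output.

```python
# Check: f(x) = x^3 is strictly increasing on [-1, 1] even though f'(0) = 0.
xs = [i / 10 for i in range(-10, 11)]   # sample points -1.0, -0.9, ..., 1.0
values = [x ** 3 for x in xs]

# every consecutive pair of outputs strictly increases
strictly_increasing = all(a < b for a, b in zip(values, values[1:]))
print(strictly_increasing)  # True
```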
Applying the test: a complete example
The standard procedure for finding where a function increases and decreases has three steps:
- Find f'(x).
- Find the critical points — where f'(x) = 0 or f'(x) does not exist.
- Test the sign of f' in each interval between consecutive critical points.
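The three steps above can be sketched in code. Two assumptions in this sketch: the critical points are supplied by hand (found in Step 2), and the sign of f' in each interval is sampled numerically at the midpoint using a central difference.

```python
def fprime_sign(f, x, h=1e-6):
    """Sign of the derivative at x, estimated via a central difference."""
    d = (f(x + h) - f(x - h)) / (2 * h)
    return "+" if d > 0 else "-"

def sign_chart(f, critical_points, lo, hi):
    """One row per interval between consecutive critical points."""
    pts = [lo] + sorted(critical_points) + [hi]
    rows = []
    for a, b in zip(pts, pts[1:]):
        sign = fprime_sign(f, (a + b) / 2)
        rows.append((a, b, sign, "increasing" if sign == "+" else "decreasing"))
    return rows

# f(x) = x^3 - 3x^2 + 1 with critical points 0 and 2 (Example 1)
chart = sign_chart(lambda x: x**3 - 3*x**2 + 1, [0, 2], -5, 5)
for row in chart:
    print(row)
```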
Example 1: Find the intervals of increase and decrease for $f(x) = x^3 - 3x^2 + 1$
Step 1. Compute the derivative: f'(x) = 3x^2 - 6x = 3x(x - 2).
Why: the derivative is a product of two factors, which makes finding its sign easy — the sign changes where the factors are zero.
Step 2. Find where f'(x) = 0: solving 3x(x - 2) = 0 gives x = 0 or x = 2.
Why: these are the critical points that divide the real line into intervals where f' has constant sign.
Step 3. Make a sign chart for f'(x) = 3x(x-2):
| Interval | Sign of 3x | Sign of x - 2 | Sign of f'(x) | f is |
|---|---|---|---|---|
| (-\infty, 0) | - | - | + | increasing |
| (0, 2) | + | - | - | decreasing |
| (2, \infty) | + | + | + | increasing |
Step 4. Evaluate f at the critical points: f(0) = 1, f(2) = 8 - 12 + 1 = -3.
Why: the function values at the turning points tell you the local maximum and minimum — f has a local max of 1 at x = 0 and a local min of -3 at x = 2.
Result: f is strictly increasing on (-\infty, 0) and (2, \infty), and strictly decreasing on (0, 2).
Example 2: Find the intervals of increase and decrease for $f(x) = x\,e^{-x}$
Step 1. Compute the derivative using the product rule: f'(x) = 1 \cdot e^{-x} + x \cdot (-e^{-x}) = e^{-x}(1 - x).
Why: the product rule gives (uv)' = u'v + uv', with u = x and v = e^{-x}. The result factors neatly.
Step 2. Find where f'(x) = 0, i.e. solve e^{-x}(1 - x) = 0.
Since e^{-x} > 0 for all x, this reduces to 1 - x = 0, so x = 1.
Why: the exponential factor is always positive, so the sign of f' depends entirely on the factor (1 - x).
Step 3. Sign of f'(x) = e^{-x}(1 - x):
| Interval | Sign of e^{-x} | Sign of 1 - x | Sign of f'(x) | f is |
|---|---|---|---|---|
| (-\infty, 1) | + | + | + | increasing |
| (1, \infty) | + | - | - | decreasing |
Step 4. f(1) = 1 \cdot e^{-1} = 1/e \approx 0.368.
Why: this is the global maximum of f on all of \mathbb{R}. The function rises toward 1/e from the left and decays toward 0 from the right.
Result: f is strictly increasing on (-\infty, 1) and strictly decreasing on (1, \infty).
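A numeric check of this result: sampled values rise all the way up to x = 1 and fall after it, with the peak value equal to 1/e.

```python
# Check for Example 2: f(x) = x e^{-x} rises up to x = 1, then falls.
import math

def f(x):
    return x * math.exp(-x)

left  = [f(x) for x in [-2, -1, 0, 0.5, 0.9]]   # samples left of x = 1
right = [f(x) for x in [1.1, 2, 3, 5]]          # samples right of x = 1
print(all(a < b for a, b in zip(left, left[1:])))    # rising on the left
print(all(a > b for a, b in zip(right, right[1:])))  # falling on the right
print(f(1), 1 / math.e)                              # the maximum value is 1/e
```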
Common confusions
- "If f'(c) = 0, the function is neither increasing nor decreasing at c." The derivative being zero at a single point does not interrupt monotonicity. The function f(x) = x^3 has f'(0) = 0, but f is strictly increasing on all of \mathbb{R}. What matters is the sign of f' on an interval, not at a single point.
- "Increasing means the graph goes upward." More precisely: increasing means that as x increases (moves right), f(x) does not decrease. The graph could be flat for a stretch and still qualify as increasing (non-strictly). For strictly increasing, the graph must genuinely rise — no flat stretches.
- "A function that increases and then decreases is not monotonic." Correct — but the function is monotonic on each piece. The term "monotonic on \mathbb{R}" means it does not change direction at all. When people say "find the monotonic intervals," they mean find the intervals on which the function is individually increasing or decreasing.
- "f'(x) > 0 at a single point means f is increasing near that point." This is true only in a local, limiting sense. It does not mean there is a whole interval around that point where f is increasing. (Pathological counterexamples exist, though they do not appear in standard JEE problems.) The safe statement is: f'(x) > 0 on an open interval \implies f is increasing on that interval.
- "If f is increasing and g is increasing, then f + g is increasing." True — the sum of two increasing functions is increasing. But be careful with products and compositions: f \cdot g is not necessarily increasing even if both f and g are (consider f(x) = g(x) = x on (-\infty, 0) — both are increasing but x^2 is decreasing there). Compositions are handled in the next article.
Going deeper
If you came here to learn how to find where a function increases or decreases, you have the tools — you can stop here. The rest is for readers who want to see why the MVT is essential and what happens at the boundary cases.
Why the MVT cannot be removed from the proof
It is tempting to think that "f'(x) > 0 means f is increasing" should be provable from the definition of the derivative alone, without the MVT. After all, if the derivative is positive, the function is locally rising — and if it is locally rising everywhere, shouldn't it be rising globally?
The answer is: not without the MVT. The derivative at a point x tells you what happens in an infinitesimal neighbourhood of x — it is a limit statement. The claim "f is increasing on [a, b]" is about every pair of points in the interval, which is a global statement. The MVT is the only tool that bridges the gap between local derivative information and global function behaviour. There is no shortcut.
Functions with f' = 0 at isolated points
The function f(x) = x^3 is strictly increasing on \mathbb{R}, even though f'(0) = 0. This is not a contradiction. The derivative is zero at a single point, but positive everywhere else. The MVT applied to any pair x_1 < x_2 gives f(x_2) - f(x_1) = f'(c)(x_2 - x_1) for some c between them. If c \neq 0 (which it might or might not be), f'(c) = 3c^2 > 0. If the interval [x_1, x_2] contains 0, you can split it at 0 and argue separately on [x_1, 0] and [0, x_2], or note that f(x_2) - f(x_1) = x_2^3 - x_1^3 = (x_2 - x_1)(x_2^2 + x_1 x_2 + x_1^2). The second factor is always positive (it is \frac{1}{2}[(x_1 + x_2)^2 + x_1^2 + x_2^2] after rearranging, which is a sum of squares).
The general rule: f'(x) \ge 0 on an interval, with f' = 0 only at isolated points (not on a whole sub-interval), implies f is strictly increasing.
Monotonicity and injectivity
A strictly monotonic function is always one-to-one (injective): distinct inputs give distinct outputs. The converse is almost true — a continuous injective function on an interval must be strictly monotonic (this is a consequence of the Intermediate Value Theorem). So for continuous functions on intervals, "strictly monotonic" and "one-to-one" are the same thing.
This is why inverse functions exist precisely for strictly monotonic functions. The function \sin x is not one-to-one on all of \mathbb{R} (it oscillates), but it is strictly increasing on [-\pi/2, \pi/2], and that is the domain on which \arcsin is defined.
Where this leads next
Monotonicity is the bridge between derivatives and the shape of a graph. The immediate applications are:
- Monotonicity — Applications — using monotonicity to prove inequalities, find ranges, and analyse composite functions.
- Maxima and Minima — First Derivative Test — the derivative sign change at a critical point tells you whether it is a maximum or a minimum.
- Maxima and Minima — Second Derivative Test — using the second derivative to classify critical points when the first derivative test is inconclusive.
- Mean Value Theorems — the theorem that makes monotonicity rigorous.