In short
The Leibniz rule tells you how to differentiate a definite integral with respect to a parameter. In its simplest form (fixed limits), you just move the derivative inside the integral. When the limits themselves depend on the parameter, two extra boundary terms appear — each one is the integrand evaluated at a moving limit, multiplied by that limit's derivative.
Here is a function defined in an unusual way:

F(x) = \int_0^x t^2 \, dt
What is F'(x)? You could evaluate the integral first: F(x) = x^3/3. Then F'(x) = x^2. Simple enough.
But look at what just happened. The derivative of \int_0^x t^2 \, dt turned out to be x^2 — the integrand itself, with t replaced by x. You integrated, then differentiated, and the two operations cancelled, leaving you right back where you started.
That is not a coincidence. It is the Fundamental Theorem of Calculus at work: differentiation undoes integration. But what if the setup is more complicated? What if the upper limit is not just x but x^2? What if both limits move? What if the integrand itself contains the parameter?
The Leibniz rule answers all of these questions in one formula.
The simplest case: one moving limit
Start with the case you just saw. Define

F(x) = \int_a^x f(t)\,dt,

where a is a constant and f is continuous. The Fundamental Theorem of Calculus says:

F'(x) = f(x).
Differentiation undoes integration. The derivative of the "area so far" function is the height of the curve at the current point.
Now suppose the upper limit is not x but some function g(x). For example:

F(x) = \int_0^{x^2} t^2 \, dt.

The integral runs from 0 to x^2. As x changes, the upper limit x^2 changes, and the integral changes accordingly. To find F'(x), use the chain rule: think of the integral as a composition. Let G(u) = \int_0^u t^2 \, dt, so F(x) = G(x^2). By the chain rule:

F'(x) = G'(x^2) \cdot 2x = (x^2)^2 \cdot 2x = 2x^5.
The pattern: differentiate the integral as if the upper limit were a plain variable (giving the integrand evaluated at the upper limit), then multiply by the derivative of the upper limit.
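This pattern can be sanity-checked numerically. Below is a minimal Python sketch (the Simpson-rule helper and the test point x = 1.3 are illustrative choices, not part of the discussion above): it builds F by quadrature, differentiates it by central differences, and compares against the chain-rule answer.

```python
def integrate(f, a, b, n=1000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def F(x):
    # F(x) = integral of t^2 dt from 0 to x^2
    return integrate(lambda t: t**2, 0.0, x**2)

x = 1.3
numeric = (F(x + 1e-5) - F(x - 1e-5)) / 2e-5  # central-difference F'(x)
chain_rule = (x**2) ** 2 * 2 * x              # f(g(x)) * g'(x) = 2x^5
print(abs(numeric - chain_rule) < 1e-4)
```

Simpson's rule is exact for polynomials up to degree three, so the only error here comes from the finite-difference step.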
The full Leibniz rule
Now for the general case. Suppose both limits and the integrand itself depend on a parameter x:

F(x) = \int_{a(x)}^{b(x)} f(x, t)\,dt.

Leibniz integral rule

If f(x, t) and \frac{\partial f}{\partial x}(x, t) are continuous, and a(x) and b(x) are differentiable, then

F'(x) = f(x, b(x)) \cdot b'(x) - f(x, a(x)) \cdot a'(x) + \int_{a(x)}^{b(x)} \frac{\partial f}{\partial x}(x, t)\,dt.
Three terms, each with a clear geometric meaning:
- f(x, b(x)) \cdot b'(x) — the integrand at the upper limit, scaled by how fast the upper limit is moving. If the upper limit moves to the right, you are adding a thin strip of area; its height is the integrand's value at the boundary, and its width is b'(x)\,dx.
- -f(x, a(x)) \cdot a'(x) — the integrand at the lower limit, with a minus sign. If the lower limit moves to the right, you are removing area from the left end of the region.
- \int_{a(x)}^{b(x)} \frac{\partial f}{\partial x}\,dt — the change due to the integrand itself shifting. Even if the limits are fixed, the integrand depends on x, so as x changes, every point of the integrand moves up or down. This term captures the cumulative effect of that shift.
Special cases
- Fixed limits, parameter inside: a and b are constants, so a'(x) = b'(x) = 0. The boundary terms vanish, and you can move the derivative inside: \frac{d}{dx}\int_a^b f(x,t)\,dt = \int_a^b \frac{\partial f}{\partial x}(x,t)\,dt.
- Variable upper limit, no parameter inside: f depends only on t, and a is constant. The formula reduces to F'(x) = f(b(x)) \cdot b'(x), which is the Fundamental Theorem plus the chain rule.
- Both limits variable, no parameter inside: F'(x) = f(b(x)) \cdot b'(x) - f(a(x)) \cdot a'(x).
Why the formula is true
Break the problem into pieces. Write

F(x) = G(x, a(x), b(x)),

and define an auxiliary function G(x, u, v) = \int_u^v f(x, t)\,dt, where u = a(x) and v = b(x). By the multivariable chain rule:

\frac{dF}{dx} = \frac{\partial G}{\partial x} + \frac{\partial G}{\partial u}\,\frac{du}{dx} + \frac{\partial G}{\partial v}\,\frac{dv}{dx}.
Now compute each partial derivative. \frac{\partial G}{\partial x} means: hold u and v fixed and differentiate under the integral sign. This gives \int_u^v \frac{\partial f}{\partial x}\,dt.
\frac{\partial G}{\partial v} means: hold x and u fixed, and differentiate with respect to the upper limit. By the Fundamental Theorem, this is f(x, v). Similarly, \frac{\partial G}{\partial u} = -f(x, u) (the minus sign because changing the lower limit subtracts area).
Substituting back u = a(x) and v = b(x):

F'(x) = \int_{a(x)}^{b(x)} \frac{\partial f}{\partial x}(x, t)\,dt - f(x, a(x))\,a'(x) + f(x, b(x))\,b'(x),

which is the Leibniz rule.
Worked examples
Example 1: Variable upper limit — find $F'(x)$ when $F(x) = \int_1^{x^3} \frac{\sin t}{t}\,dt$
Step 1. Identify the pieces. The integrand f(t) = \frac{\sin t}{t} does not depend on x. The lower limit is the constant 1. The upper limit is b(x) = x^3.
Why: since f does not contain x and the lower limit is fixed, only the "upper boundary" term survives in the Leibniz rule.
Step 2. Apply the formula: F'(x) = f(b(x)) \cdot b'(x).
Why: evaluate the integrand at t = x^3 to get \frac{\sin(x^3)}{x^3}, then multiply by the derivative of the upper limit, \frac{d}{dx}(x^3) = 3x^2.
Step 3. Simplify:

F'(x) = \frac{\sin(x^3)}{x^3} \cdot 3x^2 = \frac{3\sin(x^3)}{x}.

Why: x^2/x^3 = 1/x, so the powers of x collapse to a single factor of 1/x.

Step 4. Check at a specific point. At x = 1: F'(1) = \frac{3\sin(1)}{1} = 3\sin 1 \approx 2.52. This says the area under \frac{\sin t}{t} from 1 to x^3 is growing at rate \approx 2.52 as x passes through 1 — which is plausible, since the integrand at t = 1 is \sin 1 \approx 0.84 and the upper limit is moving at rate 3x^2 = 3, and 0.84 \times 3 \approx 2.52.

Result: F'(x) = \dfrac{3\sin(x^3)}{x}.
The graph shows F'(x) oscillating rapidly for larger x, which makes sense: as the upper limit x^3 races through the oscillations of \sin t / t, the accumulated area alternately grows and shrinks.
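The result can also be verified numerically. Here is a small Python sketch (the quadrature helper and the test point x = 1.4 are illustrative assumptions): it compares a central-difference derivative of F against the Leibniz-rule answer.

```python
import math

def integrate(f, a, b, n=2000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def F(x):
    # F(x) = integral of sin(t)/t dt from 1 to x^3
    return integrate(lambda t: math.sin(t) / t, 1.0, x**3)

x = 1.4
numeric = (F(x + 1e-6) - F(x - 1e-6)) / 2e-6  # central-difference F'(x)
formula = 3 * math.sin(x**3) / x              # Leibniz-rule answer
print(abs(numeric - formula) < 1e-4)
```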
Example 2: Both limits variable — find $F'(x)$ when $F(x) = \int_{x}^{x^2} e^{-t^2}\,dt$
Step 1. Identify the pieces. The integrand f(t) = e^{-t^2} does not depend on x. The lower limit is a(x) = x. The upper limit is b(x) = x^2.
Why: both limits are functions of x, so both boundary terms contribute. The "parameter inside" term vanishes because f does not involve x.
Step 2. Apply the Leibniz rule: F'(x) = f(b(x)) \cdot b'(x) - f(a(x)) \cdot a'(x).
Why: at the upper limit t = x^2, the integrand is e^{-x^4}, multiplied by b'(x) = 2x. At the lower limit t = x, the integrand is e^{-x^2}, multiplied by a'(x) = 1.
Step 3. Check the sign at x = 1. F'(1) = 2e^{-1} - e^{-1} = e^{-1} \approx 0.368. Positive, which makes sense: at x = 1, the upper limit x^2 = 1 equals the lower limit x = 1, so F(1) = 0. But the upper limit is racing away faster (b'(1) = 2) than the lower limit (a'(1) = 1), so the interval is widening and the integral is growing.
Step 4. Check at x = 2. F'(2) = 4e^{-16} - e^{-4} \approx 4(1.1 \times 10^{-7}) - 0.0183 \approx -0.0183. Negative — the integrand e^{-t^2} is extremely small for t near 4 (the upper limit at x = 2), so the upper boundary contributes almost nothing, and the lower boundary is eating area away.
Result: F'(x) = 2x\,e^{-x^4} - e^{-x^2}.
The graph reveals a natural story: for small x, the lower boundary term dominates and F' is negative. Once the upper boundary term catches up, F' turns positive (it is positive at x = 1, as Step 3 showed). For larger x the factor e^{-x^4} dies out much faster than e^{-x^2}, so F' turns negative again (as the check at x = 2 showed) before both exponentials fade and F' approaches zero.
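The two-boundary-term formula can be checked at several points at once. A Python sketch (the quadrature helper and the sample points are illustrative choices):

```python
import math

def integrate(f, a, b, n=2000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def F(x):
    # F(x) = integral of exp(-t^2) dt from a(x) = x to b(x) = x^2
    return integrate(lambda t: math.exp(-t * t), x, x * x)

def dF(x):
    # Leibniz-rule answer: two boundary terms, no parameter term
    return 2 * x * math.exp(-x**4) - math.exp(-x**2)

for x in (0.5, 1.0, 2.0):
    numeric = (F(x + 1e-6) - F(x - 1e-6)) / 2e-6
    print(abs(numeric - dF(x)) < 1e-5)
```

Note that at x = 0.5 the "upper" limit x^2 = 0.25 actually lies below the lower limit x = 0.5; the formula handles this automatically, since a reversed integral just flips sign.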
Applications
Evaluating integrals that depend on a parameter
One powerful application of the Leibniz rule is to compute a family of integrals at once by treating the answer as a function of a parameter and setting up a differential equation.
Consider I(\alpha) = \int_0^\infty e^{-\alpha t} \frac{\sin t}{t}\,dt for \alpha > 0. This integral is hard to evaluate directly. But differentiate with respect to \alpha:

I'(\alpha) = \int_0^\infty \frac{\partial}{\partial \alpha}\left(e^{-\alpha t}\,\frac{\sin t}{t}\right)dt = -\int_0^\infty e^{-\alpha t} \sin t\,dt.

The t in the denominator was cancelled by the t produced by differentiating e^{-\alpha t}. The remaining integral is a standard Laplace transform:

I'(\alpha) = -\frac{1}{1 + \alpha^2}.

Integrate: I(\alpha) = -\arctan \alpha + C. To find C, note that as \alpha \to \infty, I(\alpha) \to 0 (the exponential kills the integrand). So 0 = -\pi/2 + C, giving C = \pi/2. Therefore:

I(\alpha) = \frac{\pi}{2} - \arctan \alpha.
Setting \alpha = 0 (taking a limit) recovers the famous Dirichlet integral: \int_0^\infty \frac{\sin t}{t}\,dt = \frac{\pi}{2}.
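The closed form \pi/2 - \arctan\alpha can be spot-checked numerically for a few values of \alpha. A rough Python sketch (the truncation point T and the midpoint rule are ad-hoc choices; the truncation is only justified for \alpha > 0, where the exponential makes the tail negligible):

```python
import math

def I(alpha, T=100.0, n=100000):
    # approximate the integral of exp(-alpha*t) * sin(t)/t for t from 0 to
    # infinity, truncated at t = T; the midpoint rule sidesteps the
    # removable singularity at t = 0
    h = T / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += math.exp(-alpha * t) * math.sin(t) / t * h
    return total

for alpha in (0.5, 1.0, 2.0):
    exact = math.pi / 2 - math.atan(alpha)
    print(abs(I(alpha) - exact) < 1e-3)
```

At \alpha = 0 itself this truncation is invalid (the tail no longer decays), which mirrors the uniform-convergence caveat discussed in the "Going deeper" section.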
Continuity of integrals with variable limits
If F(x) = \int_a^{g(x)} f(t)\,dt and f is continuous, then F is continuous — and in fact differentiable — wherever g is differentiable. This is useful for proving that solutions to certain differential equations are smooth.
Common confusions
- "Differentiating under the integral just means taking the derivative of the integrand." Only when the limits are constants. If either limit depends on x, there are extra boundary terms. Forgetting them is the most common mistake.
- "The Leibniz rule and the Fundamental Theorem are different things." They are the same thing in different clothes. The Fundamental Theorem is the special case of the Leibniz rule where the integrand does not depend on the parameter and only the upper limit moves. The Leibniz rule is the Fundamental Theorem made fully general.
- "I can always move a derivative inside an integral." You can when the limits are constants and the integrand is well-behaved. If the integral is improper or the integrand has discontinuities that depend on the parameter, extra care is needed. Uniform convergence of the integral \int \frac{\partial f}{\partial x}\,dt is the standard condition that guarantees the swap is valid.
- "The partial derivative \partial f / \partial x is the same as f'." Not quite. When f(x, t) depends on two variables, \frac{\partial f}{\partial x} means: differentiate with respect to x while treating t as a constant. The ordinary derivative f' is ambiguous — you must specify which variable you are differentiating with respect to.
Going deeper
The main content above covers the mechanics you need for JEE-level problems. What follows is a proof sketch for the general case, and a discussion of the conditions under which differentiation under the integral sign is valid.
When can you swap d/dx and \int?
The key condition is uniform convergence. If \frac{\partial f}{\partial x}(x, t) is continuous on a rectangle [x_0 - \delta, x_0 + \delta] \times [a, b], then \int_a^b \frac{\partial f}{\partial x}\,dt is continuous in x, and the swap is valid. For improper integrals (where b = \infty or where f has a singularity), you need the integral \int_a^\infty \frac{\partial f}{\partial x}\,dt to converge uniformly in x — meaning the tail of the integral can be made small independently of x.
This is why the Dirichlet integral computation above requires care: \int_0^\infty e^{-\alpha t} \sin t\,dt converges uniformly for \alpha \geq \epsilon > 0, but not at \alpha = 0. To recover the value at \alpha = 0, you take a limit — which is valid because I(\alpha) turns out to be continuous at \alpha = 0.
Leibniz rule in higher dimensions
The one-dimensional Leibniz rule generalises. If F(\mathbf{x}) = \int_{D(\mathbf{x})} f(\mathbf{x}, t)\,dt where D is a region that depends on a vector of parameters \mathbf{x}, the derivative picks up boundary contributions proportional to the speed of the boundary and the value of f on it. This generalisation — called the Reynolds transport theorem in fluid mechanics — is the foundation of conservation laws in physics.
Connection to differential equations
Many classical differential equations are solved by defining F(x) = \int g(x, t)\,dt and using the Leibniz rule to show that F satisfies the equation. The Laplace transform, the Fourier transform, and the method of Green's functions all rely on differentiating under the integral sign. Ramanujan, in particular, was famous for producing startling integral identities by this method — he would differentiate a known integral with respect to a parameter, evaluate the result, and obtain a new identity that seemed to come from nowhere.
Where this leads next
- Definite Integration Techniques — the substitution and by-parts techniques that you combine with the Leibniz rule.
- Definite Integrals — Inequalities — bounding integrals without evaluating them, using comparison and the mean value theorem.
- Improper Integrals — what happens when the limits are infinite or the integrand is unbounded. The Leibniz rule applies to these too, with extra convergence conditions.
- Properties of Definite Integrals — the foundational properties (linearity, interval splitting, King's property) that the Leibniz rule builds on.
- Sum of Series Using Integration — turning sums into integrals, where the Leibniz rule helps differentiate the resulting parameter-dependent integrals.