In short
The sum rule says the derivative of f + g is f' + g'. The product rule says the derivative of f \cdot g is f' g + f g'. The quotient rule says the derivative of f/g is (f'g - fg')/g^2. Together with the scalar multiple rule, these let you differentiate any algebraic combination of basic functions without going back to the limit definition.
You know the derivatives of the basic functions: x^n gives nx^{n-1}, e^x gives e^x, \ln x gives 1/x. But real problems rarely hand you a single basic function. They hand you things like x^3 + 5x, or x^2 e^x, or \frac{x}{x^2 + 1}. These are combinations — sums, products, quotients — of basic functions.
To differentiate a combination, you need rules that tell you how the derivative of the combination relates to the derivatives of the pieces. There are exactly four such rules, and together they handle every algebraic combination you will ever meet.
The scalar multiple rule
The simplest rule: pulling a constant factor out of a derivative.
Claim: If c is a constant and f is differentiable, then
\frac{d}{dx}[c \cdot f(x)] = c \cdot f'(x).
Proof. Go to the limit definition:
\frac{d}{dx}[c \cdot f(x)] = \lim_{h \to 0} \frac{c \cdot f(x+h) - c \cdot f(x)}{h} = c \cdot \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} = c \cdot f'(x).
The constant c factors out of the difference quotient, then out of the limit (since the limit of a constant times a function is the constant times the limit). That is the entire proof.
Scalar multiple rule
If c is a constant and f is differentiable at x, then
\frac{d}{dx}[c \cdot f(x)] = c \cdot f'(x).
This is the rule you have already been using without naming it: the derivative of 5x^3 is 5 \cdot 3x^2 = 15x^2. The constant factor stays; the differentiation operates only on f.
What if c = 0? Then c \cdot f(x) = 0 for all x, and c \cdot f'(x) = 0 \cdot f'(x) = 0. Both sides agree: the derivative of the zero function is zero. What if c = -1? Then \frac{d}{dx}[-f(x)] = -f'(x). The derivative of the negative of a function is the negative of its derivative. These are not separate rules — they are special cases of the scalar multiple rule.
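If you want to check the rule numerically, a central-difference approximation of the derivative is enough. The helper `deriv`, the step size h = 10^{-6}, and the sample point x = 2 below are illustrative choices, not part of the rule itself:

```python
def deriv(f, x, h=1e-6):
    # Central-difference approximation of f'(x); h is an illustrative step size
    return (f(x + h) - f(x - h)) / (2 * h)

# Scalar multiple rule: the derivative of 5x^3 equals 5 times the derivative of x^3
lhs = deriv(lambda x: 5 * x**3, 2.0)   # derivative of 5x^3 at x = 2
rhs = 5 * deriv(lambda x: x**3, 2.0)   # 5 times the derivative of x^3 at x = 2
print(lhs, rhs)  # both close to 15 * 2^2 = 60
```

Both values agree with 15x^2 evaluated at x = 2, as the rule predicts.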
The sum and difference rule
Claim: If f and g are both differentiable at x, then
\frac{d}{dx}[f(x) + g(x)] = f'(x) + g'(x).
Proof. Write the difference quotient for f + g:
\frac{[f(x+h) + g(x+h)] - [f(x) + g(x)]}{h}
Rearrange by grouping the f-terms and the g-terms:
\frac{f(x+h) - f(x)}{h} + \frac{g(x+h) - g(x)}{h}
This split is just algebra — the numerator separates cleanly into two pieces, each divided by the same h. Now take the limit as h \to 0:
\lim_{h \to 0} \left[ \frac{f(x+h) - f(x)}{h} + \frac{g(x+h) - g(x)}{h} \right] = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} + \lim_{h \to 0} \frac{g(x+h) - g(x)}{h}
The limit of a sum is the sum of the limits (this is the algebra of limits, and it applies because both limits exist — that is the differentiability hypothesis). The first limit is f'(x), the second is g'(x). So
\frac{d}{dx}[f(x) + g(x)] = f'(x) + g'(x).
The difference rule follows immediately: f(x) - g(x) = f(x) + (-1) \cdot g(x), so by the scalar multiple rule (with c = -1) and the sum rule,
\frac{d}{dx}[f(x) - g(x)] = f'(x) + (-1) \cdot g'(x) = f'(x) - g'(x).
Sum and difference rule
If f and g are both differentiable at x, then
\frac{d}{dx}[f(x) \pm g(x)] = f'(x) \pm g'(x).
The derivative of a sum is the sum of the derivatives. This is the rule that lets you differentiate polynomials term by term — and it extends to sums of any number of terms, not just two, by applying the rule repeatedly.
A quick example. Take h(x) = x^3 + e^x - \ln x. Apply the sum/difference rule term by term:
h'(x) = \frac{d}{dx}[x^3] + \frac{d}{dx}[e^x] - \frac{d}{dx}[\ln x] = 3x^2 + e^x - \frac{1}{x}.
Each term is differentiated independently. The sum rule turns a multi-term problem into several single-term problems, each of which the basic derivative formulas handle. Combined with the scalar multiple rule, it gives you the ability to differentiate any linear combination of basic functions:
\frac{d}{dx}[a \cdot f(x) + b \cdot g(x)] = a \cdot f'(x) + b \cdot g'(x).
Mathematicians say that differentiation is a linear operator — it respects addition and scalar multiplication. This linearity is what makes polynomials so easy to differentiate.
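The term-by-term example above can be checked numerically. The helper `deriv` is a central-difference approximation; the step size and the sample point x = 1 are illustrative choices:

```python
import math

def deriv(f, x, h=1e-6):
    # Central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

h_func = lambda x: x**3 + math.exp(x) - math.log(x)    # h(x) = x^3 + e^x - ln x
h_prime = lambda x: 3 * x**2 + math.exp(x) - 1 / x     # term-by-term derivative

x0 = 1.0
print(deriv(h_func, x0), h_prime(x0))  # both close to 3 + e - 1 = 2 + e
```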
The product rule
The sum rule was easy because the pieces in f + g don't interact: each function contributes independently. In a product f \cdot g, the two functions are entangled. Changing x by a tiny amount h changes both f and g simultaneously, and these changes multiply with each other.
Claim: If f and g are both differentiable at x, then
\frac{d}{dx}[f(x) \cdot g(x)] = f'(x) g(x) + f(x) g'(x).
Proof. Start with the difference quotient:
\frac{f(x+h) g(x+h) - f(x) g(x)}{h}
Here is the trick. The numerator contains four quantities — f(x+h), g(x+h), f(x), g(x) — and you need to separate the changes in f from the changes in g. Add and subtract the same term f(x+h) \cdot g(x) in the numerator:
f(x+h) g(x+h) - f(x+h) g(x) + f(x+h) g(x) - f(x) g(x)
Now factor:
\frac{f(x+h)[g(x+h) - g(x)] + [f(x+h) - f(x)] g(x)}{h} = f(x+h) \cdot \frac{g(x+h) - g(x)}{h} + \frac{f(x+h) - f(x)}{h} \cdot g(x)
Take the limit as h \to 0. You need three facts:
- \lim_{h \to 0} f(x+h) = f(x), because f is differentiable at x, hence continuous.
- \lim_{h \to 0} \frac{g(x+h) - g(x)}{h} = g'(x).
- \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} = f'(x).
The term g(x) does not involve h, so it passes through the limit unchanged. Putting it all together:
\frac{d}{dx}[f(x) \cdot g(x)] = f(x) g'(x) + f'(x) g(x),
which is traditionally written as
\frac{d}{dx}[f(x) \cdot g(x)] = f'(x) g(x) + f(x) g'(x).
Product rule
If f and g are both differentiable at x, then
\frac{d}{dx}[f(x) \cdot g(x)] = f'(x) g(x) + f(x) g'(x).
The product rule says: to differentiate a product, differentiate one factor at a time while holding the other fixed, then add the two results. This pattern — one piece changes while the other holds still, then you switch — captures exactly how the product f \cdot g responds to a small change in x.
In Leibniz notation, the product rule reads
\frac{d(fg)}{dx} = f \frac{dg}{dx} + g \frac{df}{dx},
which makes the symmetry visible: the infinitesimal change in the product is the sum of two contributions — one from g changing while f is held still, and one from f changing while g is held still.
Why the "obvious" guess is wrong
A common first instinct is to guess that the derivative of f \cdot g is f' \cdot g'. A quick test kills this. Take f(x) = x and g(x) = x. Then f \cdot g = x^2, whose derivative is 2x. But f' \cdot g' = 1 \cdot 1 = 1. That is not 2x. The product rule gives the correct answer: f'g + fg' = 1 \cdot x + x \cdot 1 = 2x.
The reason the naive guess fails is geometric. Think of f \cdot g as the area of a rectangle with sides f and g. When you stretch both sides by a tiny amount, the area changes by two thin rectangles (one where f grew, one where g grew) plus a tiny corner square. The two thin rectangles are f' \cdot g and f \cdot g'. The corner square is proportional to h^2 and vanishes in the limit. The product rule captures the two rectangles; the naive guess f' \cdot g' corresponds to nothing in this picture.
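The failure of the naive guess is easy to see numerically. Below, `deriv` is an illustrative central-difference approximation and x = 3 is an arbitrary test point:

```python
def deriv(f, x, h=1e-6):
    # Central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x
g = lambda x: x
x0 = 3.0

numeric = deriv(lambda x: f(x) * g(x), x0)               # true derivative of x^2 at 3
naive = deriv(f, x0) * deriv(g, x0)                      # the wrong guess f' * g'
by_rule = deriv(f, x0) * g(x0) + f(x0) * deriv(g, x0)    # product rule f'g + fg'
print(numeric, by_rule, naive)  # close to 6.0, 6.0, 1.0
```

The product rule reproduces the true derivative 2x = 6; the naive guess gives 1.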
The quotient rule
Claim: If f and g are both differentiable at x, and g(x) \neq 0, then
\frac{d}{dx}\left[\frac{f(x)}{g(x)}\right] = \frac{f'(x) g(x) - f(x) g'(x)}{g(x)^2}.
Proof. Start with the difference quotient:
\frac{1}{h}\left[\frac{f(x+h)}{g(x+h)} - \frac{f(x)}{g(x)}\right]
Combine the two fractions over a common denominator:
\frac{1}{h} \cdot \frac{f(x+h) g(x) - f(x) g(x+h)}{g(x+h) g(x)}
Now apply the same add-and-subtract trick as in the product rule. Add and subtract f(x) \cdot g(x) in the numerator:
f(x+h) g(x) - f(x) g(x) + f(x) g(x) - f(x) g(x+h) = [f(x+h) - f(x)] g(x) - f(x) [g(x+h) - g(x)]
Substitute back and divide by h:
\frac{\dfrac{f(x+h) - f(x)}{h} \cdot g(x) - f(x) \cdot \dfrac{g(x+h) - g(x)}{h}}{g(x+h) g(x)}
Take the limit as h \to 0. In the numerator, the two difference quotients become f'(x) and g'(x). In the denominator, g(x+h) \to g(x) by continuity (since g is differentiable). So
\frac{d}{dx}\left[\frac{f(x)}{g(x)}\right] = \frac{f'(x) g(x) - f(x) g'(x)}{g(x) \cdot g(x)},
which is the same as
\frac{d}{dx}\left[\frac{f(x)}{g(x)}\right] = \frac{f'(x) g(x) - f(x) g'(x)}{g(x)^2}.
Quotient rule
If f and g are both differentiable at x and g(x) \neq 0, then
\frac{d}{dx}\left[\frac{f(x)}{g(x)}\right] = \frac{f'(x) g(x) - f(x) g'(x)}{g(x)^2}.
The quotient rule looks harder to remember than the product rule because of the minus sign and the square in the denominator. A mnemonic that Indian students often use: "low d-high minus high d-low, over the square of what's below" — where "high" is the numerator f, "low" is the denominator g, and "d-" means "derivative of." The minus sign is the key: in the product rule it is a plus, in the quotient rule it is a minus, and the order matters.
Where does the minus sign come from, intuitively? In a product f \cdot g, when f grows, the product grows — both contributions add. In a quotient f/g, when the denominator g grows, the quotient shrinks. The numerator contributes positively (f'g) but the denominator contributes negatively (-fg'). The minus sign encodes the fact that a bigger denominator pushes the fraction down.
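A numerical check of the quotient rule on a simple example. The helper `deriv`, the functions f(x) = x and g(x) = x^2 + 1, and the point x = 2 are illustrative choices:

```python
def deriv(f, x, h=1e-6):
    # Central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x            # numerator ("high")
g = lambda x: x**2 + 1     # denominator ("low")
x0 = 2.0

numeric = deriv(lambda x: f(x) / g(x), x0)
by_rule = (deriv(f, x0) * g(x0) - f(x0) * deriv(g, x0)) / g(x0) ** 2
print(numeric, by_rule)  # both close to (1*5 - 2*4) / 25 = -0.12
```

The derivative is negative at x = 2 even though f is increasing there: the growing denominator dominates, exactly as the minus sign predicts.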
Deriving the quotient rule from the product rule
There is an alternative route. Write f/g = f \cdot g^{-1} and apply the product rule:
\frac{d}{dx}[f \cdot g^{-1}] = f' \cdot g^{-1} + f \cdot \frac{d}{dx}[g^{-1}].
For \frac{d}{dx}[g^{-1}], use the power rule with the chain rule: \frac{d}{dx}[g^{-1}] = -g^{-2} \cdot g'. So
\frac{d}{dx}\left[\frac{f}{g}\right] = \frac{f'}{g} - \frac{f g'}{g^2} = \frac{f' g - f g'}{g^2}.
Same result. This shows that the quotient rule is not an independent fact — it is the product rule in disguise. Some prefer to always use the product rule and never memorise the quotient rule separately.
Quick check: \frac{d}{dx}\left(\frac{1}{x}\right)
You already know from the derivative article that \frac{d}{dx}(1/x) = -1/x^2. Verify this with the quotient rule. Take f(x) = 1 and g(x) = x:
\frac{d}{dx}\left(\frac{1}{x}\right) = \frac{0 \cdot x - 1 \cdot 1}{x^2} = -\frac{1}{x^2}.
It matches. The quotient rule produces the same answer as the limit definition, as it must — the rules are shortcuts for the limit, not replacements.
Worked examples
Example 1: Differentiate $f(x) = x^2 \cdot e^x$
This is a product of u(x) = x^2 and v(x) = e^x.
Step 1. Identify u and v.
Why: the function is a product, so you need the product rule. Name the two factors.
Step 2. Compute u' and v'.
Why: the power rule gives u' = 2x, and the exponential's derivative is itself.
Step 3. Apply the product rule: f' = u'v + uv' = 2x \cdot e^x + x^2 \cdot e^x.
Why: differentiate u while holding v fixed, then hold u fixed and differentiate v, then add.
Step 4. Factor: f'(x) = e^x(2x + x^2) = e^x \cdot x(x + 2).
Why: both terms share a factor of e^x. Factoring makes the result cleaner and reveals the zeros of f' at x = 0 and x = -2.
Result: f'(x) = e^x \cdot x(x + 2).
The derivative is zero at x = 0 and x = -2. At x = -2, the original function has value 4e^{-2} \approx 0.54, and the function has a local maximum there. At x = 0, the function touches zero — a local minimum. The derivative changes sign at each of these points, confirming that they are genuine turning points.
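As a sanity check, the factored result can be compared against a numerical derivative. The helper `deriv` and the test points below (including both turning points) are illustrative:

```python
import math

def deriv(f, x, h=1e-6):
    # Central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x**2 * math.exp(x)
f_prime = lambda x: math.exp(x) * x * (x + 2)   # the factored result from Step 4

# The numerical derivative should match the formula, including at the turning points
for x0 in (-2.0, -0.5, 0.0, 1.0):
    assert abs(deriv(f, x0) - f_prime(x0)) < 1e-5
print("f'(x) = e^x * x * (x + 2) confirmed numerically")
```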
Example 2: Differentiate $g(x) = \dfrac{x^2 - 1}{x^2 + 1}$
This is a quotient with f(x) = x^2 - 1 (numerator) and q(x) = x^2 + 1 (denominator).
Step 1. Compute f' and q': f'(x) = 2x and q'(x) = 2x.
Why: both the numerator and denominator are polynomials. Differentiate each using the power rule.
Step 2. Apply the quotient rule: g' = \dfrac{f'q - fq'}{q^2} = \dfrac{2x(x^2 + 1) - (x^2 - 1)(2x)}{(x^2 + 1)^2}.
Why: the quotient rule is "low d-high minus high d-low, over low squared."
Step 3. Expand the numerator: 2x^3 + 2x - 2x^3 + 2x = 4x.
Why: the 2x^3 terms cancel. When terms cancel in a quotient rule computation, that is normal — it means the function is simpler than it initially looks.
Step 4. Write the final answer.
Why: the denominator (x^2+1)^2 is always positive, so the sign of g' depends only on 4x — positive when x > 0, negative when x < 0, zero when x = 0.
Result: g'(x) = \dfrac{4x}{(x^2 + 1)^2}.
The derivative is zero only at x = 0, where g(0) = -1. For x > 0, the derivative is positive (the function rises). For x < 0, the derivative is negative (the function falls). So x = 0 is a global minimum. As x \to \pm\infty, g(x) \to 1 and g'(x) \to 0: the function flattens out toward its horizontal asymptote.
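The simplified derivative can be verified numerically at a few points, including the minimum at x = 0. The helper `deriv` and the chosen test points are illustrative:

```python
def deriv(f, x, h=1e-6):
    # Central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

g = lambda x: (x**2 - 1) / (x**2 + 1)
g_prime = lambda x: 4 * x / (x**2 + 1) ** 2   # the simplified result

for x0 in (-3.0, -1.0, 0.0, 2.0):
    assert abs(deriv(g, x0) - g_prime(x0)) < 1e-6
print("g'(x) = 4x / (x^2 + 1)^2 confirmed numerically")
```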
Common confusions
- "The derivative of a product is the product of the derivatives." It is not. \frac{d}{dx}[fg] = f'g + fg', not f'g'. The counterexample is simple: f = g = x gives fg = x^2, whose derivative is 2x, but f' \cdot g' = 1 \cdot 1 = 1. The product rule adds, not multiplies.
- "The derivative of a quotient is the quotient of the derivatives." Also not. \frac{d}{dx}[f/g] = \frac{f'g - fg'}{g^2}, not f'/g'. Take f(x) = x^2, g(x) = x. Then f/g = x, whose derivative is 1. But f'/g' = 2x/1 = 2x. Not the same.
- "In the quotient rule, it doesn't matter which is f and which is g." It does — swapping them changes the sign of the numerator: g'f - gf' = -(f'g - fg'). If you swap the roles, you get the negative of the correct answer.
- "The quotient rule is more fundamental than the product rule." The opposite is true. The quotient rule is derived from the product rule (applied to f \cdot g^{-1}). If you know the product rule and the chain rule, you never need to memorise the quotient rule separately.
- "These rules only work for polynomials." They work for any differentiable functions. The product rule applies to x^2 \sin x or e^x \ln x just as well as to x^2 \cdot x^3. The only requirement is that both f and g are differentiable at the point in question.
Going deeper
If you came here to learn the four rules and their proofs, you have them — you can stop here. What follows is for readers who want to see the product rule extended to three factors and a proof of the power rule for negative integers using the quotient rule.
The product rule for three factors
The product rule extends to three factors. If f, g, and k are all differentiable, then
(f \cdot g \cdot k)' = f' g k + f g' k + f g k'.
The proof is a two-step application of the standard product rule. Group f \cdot g \cdot k as (f \cdot g) \cdot k and apply the product rule once:
[(fg) \cdot k]' = (fg)' k + (fg) k'.
Now expand (fg)' using the product rule again:
(fg)' = f'g + fg'.
Substitute back:
(fgk)' = (f'g + fg') k + fg k' = f' g k + f g' k + f g k'.
The pattern is clear: differentiate one factor at a time, holding the others fixed, then sum all the terms. For n factors, there are n terms.
As a concrete example: \frac{d}{dx}[x \cdot e^x \cdot \ln x]. Using the three-factor formula with f = x, g = e^x, k = \ln x:
\frac{d}{dx}[x \cdot e^x \cdot \ln x] = 1 \cdot e^x \ln x + x \cdot e^x \ln x + x \cdot e^x \cdot \frac{1}{x} = e^x(\ln x + x \ln x + 1).
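The three-term answer can be checked against a numerical derivative. The helper `deriv` and the point x = 2 below are illustrative choices:

```python
import math

def deriv(f, x, h=1e-6):
    # Central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

triple = lambda x: x * math.exp(x) * math.log(x)
# Three-factor rule: f'gk + fg'k + fgk'  with f = x, g = e^x, k = ln x
formula = lambda x: (math.exp(x) * math.log(x)          # f'gk
                     + x * math.exp(x) * math.log(x)    # fg'k
                     + math.exp(x))                     # fgk' = x * e^x * (1/x)

x0 = 2.0
print(deriv(triple, x0), formula(x0))  # the two values agree
```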
The power rule for negative integers from the quotient rule
You proved the power rule for positive integers using the binomial theorem. Here is a clean proof for negative integers using the quotient rule.
Let n be a positive integer and consider f(x) = x^{-n} = \frac{1}{x^n}. Apply the quotient rule with numerator 1 and denominator x^n:
\frac{d}{dx}\left[\frac{1}{x^n}\right] = \frac{0 \cdot x^n - 1 \cdot n x^{n-1}}{(x^n)^2} = \frac{-n x^{n-1}}{x^{2n}} = -n x^{-n-1}.
Write m = -n (so m is a negative integer):
\frac{d}{dx}[x^m] = -n x^{-n-1} = m x^{m-1}.
The formula \frac{d}{dx}[x^m] = mx^{m-1} holds for all negative integers m, exactly the same formula as for positive integers. The power rule is universal.
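A quick numerical spot-check of the negative-integer case, here with m = -3. The helper `deriv` and the sample points are illustrative:

```python
def deriv(f, x, h=1e-6):
    # Central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# Power rule with a negative exponent: d/dx x^(-3) = -3 x^(-4)
for x0 in (0.5, 1.0, 2.0):
    assert abs(deriv(lambda x: x**-3, x0) - (-3) * x0**-4) < 1e-4
print("power rule confirmed for m = -3")
```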
Leibniz's rule for repeated products
Leibniz noticed that the product rule for n differentiations of f \cdot g follows a pattern that mirrors the binomial theorem:
(f \cdot g)^{(n)} = \sum_{k=0}^{n} \binom{n}{k} f^{(k)} g^{(n-k)},
where f^{(k)} means the k-th derivative of f. For n = 1, this gives f'g + fg' — the product rule. For n = 2, it gives f''g + 2f'g' + fg''. The binomial coefficients \binom{n}{k} appear because, at each stage, the product rule splits one differentiation between f and g, and the number of ways for f to receive exactly k of the n differentiations is \binom{n}{k}.
This is rarely needed in school-level calculus, but it is a beautiful fact — and it connects differentiation to combinatorics in a way that is not at all obvious the first time you see it.
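The n = 2 case is easy to test numerically with a second-difference approximation of f''. The helper `second_deriv`, the choice f = x^2, g = e^x, and the point x = 1 are illustrative:

```python
import math

def second_deriv(f, x, h=1e-4):
    # Central-difference approximation of f''(x)
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

# Leibniz rule at n = 2: (fg)'' = f''g + 2f'g' + fg''  with f = x^2, g = e^x
x0 = 1.0
numeric = second_deriv(lambda x: x**2 * math.exp(x), x0)
formula = (2 * math.exp(x0)                  # f''g
           + 2 * (2 * x0) * math.exp(x0)     # 2 f'g'
           + x0**2 * math.exp(x0))           # fg''
print(numeric, formula)  # both close to 7e
```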
A common exam pattern: the reciprocal rule
A special case of the quotient rule that is worth internalising: the derivative of 1/g(x).
Set f(x) = 1 in the quotient rule. Then f'(x) = 0, and
\frac{d}{dx}\left[\frac{1}{g(x)}\right] = \frac{0 \cdot g(x) - 1 \cdot g'(x)}{g(x)^2} = -\frac{g'(x)}{g(x)^2}.
This is sometimes called the reciprocal rule. It says: the derivative of 1/g is -g'/g^2. The negative sign is doing the same work as in the quotient rule — increasing g in the denominator decreases 1/g, and the derivative reflects that.
For instance, \frac{d}{dx}\left[\frac{1}{x^2 + 1}\right] = \frac{-2x}{(x^2 + 1)^2}. You do not need the full quotient rule for this — the reciprocal rule is faster when the numerator is just 1.
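The same example, checked numerically. The helper `deriv` and the point x = 1 below are illustrative choices:

```python
def deriv(f, x, h=1e-6):
    # Central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

recip = lambda x: 1 / (x**2 + 1)
recip_prime = lambda x: -2 * x / (x**2 + 1) ** 2   # reciprocal rule: -g'/g^2

x0 = 1.0
print(deriv(recip, x0), recip_prime(x0))  # both close to -2/4 = -0.5
```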
The product and quotient rules recover the power rule
You can recover the power rule for positive integers from the product rule alone. Consider f(x) = x \cdot x = x^2. By the product rule with u = v = x:
f'(x) = 1 \cdot x + x \cdot 1 = 2x.
Now try f(x) = x^3 = x \cdot x^2. By the product rule with u = x and v = x^2 (whose derivative you just found is 2x):
f'(x) = 1 \cdot x^2 + x \cdot 2x = x^2 + 2x^2 = 3x^2.
Continuing: f(x) = x^4 = x \cdot x^3 gives f'(x) = x^3 + x \cdot 3x^2 = x^3 + 3x^3 = 4x^3. The pattern is clear — the product rule bootstraps its way up, adding one to the count each time. By induction, you get \frac{d}{dx}x^n = nx^{n-1} for every positive integer n.
This is not the most efficient proof of the power rule, but it shows that the product rule is powerful enough to derive it from scratch. The four rules in this article are not just tools for combining known derivatives — they are a complete system for building the entire calculus of algebraic functions.
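The bootstrapped pattern can be spot-checked numerically for the first few exponents. The helper `deriv` and the sample point x = 1.7 are illustrative:

```python
def deriv(f, x, h=1e-6):
    # Central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# d/dx x^n = n x^(n-1) for the first few positive integers n
x0 = 1.7
for n in range(1, 6):
    assert abs(deriv(lambda x: x**n, x0) - n * x0 ** (n - 1)) < 1e-5
print("power rule confirmed for n = 1 to 5")
```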
Where this leads next
You now have the complete toolkit for differentiating combinations of functions. The last missing piece is the chain rule — what happens when functions are nested inside each other, like e^{x^2} or \sqrt{x^3 + 1}.
- Chain Rule — the rule for differentiating a function inside another function. Together with the product and quotient rules, it completes the differentiation toolkit.
- Derivatives of Basic Functions — the foundational derivatives (power, exponential, logarithm) that these rules operate on.
- Tangent and Normal — using derivatives to find the equation of the tangent line at any point on a curve. This is the first major application.
- Maxima and Minima — using derivatives to find turning points. This is why calculus is useful.
- Derivative — the foundational article that defines what a derivative is.