In short

When every entry of a determinant is a function of some variable t, the derivative of the determinant is the sum of determinants you get by differentiating one row at a time, leaving the other rows untouched. For a 3 \times 3 determinant this gives three terms. The rule follows directly from the product-and-sum structure of the determinant expansion.

Here is a 2 \times 2 determinant whose entries happen to be functions of t:

\Delta(t) = \begin{vmatrix} t^2 & \sin t \\ e^t & \cos t \end{vmatrix}

The value of this determinant is a single number for each t — it is t^2 \cos t - e^t \sin t. So \Delta(t) is itself a function of t, and you can ask a perfectly natural question: what is \Delta'(t)?

You could, of course, expand the determinant first and then differentiate the resulting expression. For a 2 \times 2 determinant with simple entries, that is fine. But for a 3 \times 3 or larger determinant, expanding first can produce a mess of six or more terms, each one a product of functions, each requiring the product rule. There is a much cleaner path — one that differentiates the determinant as a determinant, without ever expanding it.

That path is the row-by-row differentiation rule, and it is one of the most elegant small results in matrix calculus.

The wrong instinct, and why it fails

Your first instinct might be: differentiate every entry of the determinant and take the determinant of the resulting matrix. That would be neat and symmetric. It is also wrong.

Try it on the simple 2 \times 2 determinant \Delta(t) = \begin{vmatrix} t & 0 \\ 0 & t \end{vmatrix} = t^2.

The correct derivative is \Delta'(t) = 2t.

But if you differentiate every entry, you get \begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix} = 1, which is not 2t. Not even close. The "differentiate everything" approach fails because the determinant is not a linear function of its entries: it is a sum of signed products of entries, and the product rule does not let you differentiate every factor of a product at once.

The correct rule differentiates one row at a time and adds the results. For the example above:

\Delta'(t) = \begin{vmatrix} 1 & 0 \\ 0 & t \end{vmatrix} + \begin{vmatrix} t & 0 \\ 0 & 1 \end{vmatrix} = t + t = 2t

That matches. The rule is a direct consequence of the product rule applied to the structure of a determinant, and the derivation below makes this precise.
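If you want to check this mechanically, a few lines of SymPy (a Python symbolic-algebra library; the snippet is an illustration added here, not part of the derivation) contrast the wrong instinct with the correct rule:

```python
import sympy as sp

t = sp.symbols('t')

# The determinant | t 0 ; 0 t | = t^2, whose derivative should be 2t.
A = sp.Matrix([[t, 0], [0, t]])
correct = sp.diff(A.det(), t)

# Wrong instinct: differentiate every entry, then take the determinant.
wrong = A.diff(t).det()

# Row-by-row rule: differentiate one row at a time and add the determinants.
row0 = A.copy(); row0[0, :] = A[0, :].diff(t)
row1 = A.copy(); row1[1, :] = A[1, :].diff(t)
row_by_row = row0.det() + row1.det()

print(correct, wrong, row_by_row)  # prints: 2*t 1 2*t
```

The same pattern of building row-differentiated copies scales to any size.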

Why row-by-row works: the 2 \times 2 case

Start with the simplest case and see the pattern.

\Delta(t) = \begin{vmatrix} f_1(t) & f_2(t) \\ g_1(t) & g_2(t) \end{vmatrix} = f_1 g_2 - f_2 g_1

Differentiate using the product rule on each term:

\Delta'(t) = (f_1' g_2 + f_1 g_2') - (f_2' g_1 + f_2 g_1')

Rearrange by grouping:

\Delta'(t) = (f_1' g_2 - f_2' g_1) + (f_1 g_2' - f_2 g_1')

Now look at each group. The first group is

\begin{vmatrix} f_1' & f_2' \\ g_1 & g_2 \end{vmatrix}

— the determinant with the first row differentiated and the second row left alone. The second group is

\begin{vmatrix} f_1 & f_2 \\ g_1' & g_2' \end{vmatrix}

— the determinant with the first row left alone and the second row differentiated.

So the derivative of the whole determinant is the sum of two determinants, one for each row:

\Delta'(t) = \begin{vmatrix} f_1' & f_2' \\ g_1 & g_2 \end{vmatrix} + \begin{vmatrix} f_1 & f_2 \\ g_1' & g_2' \end{vmatrix}

This is the row-by-row rule for a 2 \times 2 determinant. Differentiate one row at a time; add the results.

Notice the structural parallel with the ordinary product rule. If h(t) = f(t) \cdot g(t), the product rule says h' = f' g + f g' — differentiate one factor at a time, sum the results. The determinant is built from products, so the product rule propagates through it in exactly this way. The row-by-row rule is the product rule, lifted to the language of determinants.
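The 2 \times 2 identity can also be verified symbolically for arbitrary differentiable entries. The sketch below uses SymPy's undefined functions; the names f1, f2, g1, g2 match the notation above:

```python
import sympy as sp

t = sp.symbols('t')
f1, f2, g1, g2 = [sp.Function(n)(t) for n in ('f1', 'f2', 'g1', 'g2')]

A = sp.Matrix([[f1, f2], [g1, g2]])
direct = sp.diff(A.det(), t)          # differentiate f1*g2 - f2*g1 directly

# Row-by-row: first row differentiated, then second row differentiated.
D1 = sp.Matrix([[f1.diff(t), f2.diff(t)], [g1, g2]]).det()
D2 = sp.Matrix([[f1, f2], [g1.diff(t), g2.diff(t)]]).det()

assert sp.simplify(direct - (D1 + D2)) == 0
```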

The general rule

The same logic extends to any size. Here it is for a 3 \times 3 determinant, which is the case you will use most often.

Row-by-row differentiation of a determinant

Let

\Delta(t) = \begin{vmatrix} R_1(t) \\ R_2(t) \\ R_3(t) \end{vmatrix}

where R_1, R_2, R_3 are the three rows, and every entry is a differentiable function of t. Then

\frac{d\Delta}{dt} = \begin{vmatrix} R_1'(t) \\ R_2(t) \\ R_3(t) \end{vmatrix} + \begin{vmatrix} R_1(t) \\ R_2'(t) \\ R_3(t) \end{vmatrix} + \begin{vmatrix} R_1(t) \\ R_2(t) \\ R_3'(t) \end{vmatrix}

where R_i'(t) means differentiate every entry of row i with respect to t.

The same rule works column by column: differentiate one column at a time and add.

The rule says: write down n copies of the determinant (one for each row), and in the k-th copy differentiate only the k-th row. Sum all n copies. That is the derivative.

Why it works in general. The determinant of an n \times n matrix is a sum of n! signed products, each product containing exactly one entry from each row. When you differentiate one such product with respect to t, the product rule spreads the derivative across the n factors — you get n terms, each one differentiating a different factor (i.e., a different row's entry) and leaving the rest alone. Summing over all n! products, the terms where row k's entry is differentiated collect into the determinant with row k differentiated. Since k runs from 1 to n, you get n such determinants, added together. That is the entire proof.
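A quick sanity check of the 3 \times 3 rule in SymPy, using a matrix of assorted functions chosen arbitrarily for illustration:

```python
import sympy as sp

t = sp.symbols('t')

# Assorted entries, chosen arbitrarily; any differentiable functions work.
A = sp.Matrix([
    [t**2,      sp.sin(t), 1],
    [sp.exp(t), t,         t**3],
    [2,         sp.cos(t), t + 1],
])

# Sum n determinants; the k-th copy has only its k-th row differentiated.
row_rule = sp.S.Zero
for k in range(3):
    Ak = A.copy()
    Ak[k, :] = A[k, :].diff(t)
    row_rule += Ak.det()

assert sp.simplify(row_rule - sp.diff(A.det(), t)) == 0
```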

The column version

Since the determinant of a matrix equals the determinant of its transpose, you can apply the same rule column by column instead of row by row. For a 3 \times 3 determinant with columns C_1, C_2, C_3:

\frac{d\Delta}{dt} = \det[C_1' \; C_2 \; C_3] + \det[C_1 \; C_2' \; C_3] + \det[C_1 \; C_2 \; C_3']

Choose whichever orientation — rows or columns — makes the computation easier. If one column is constant, the column version immediately drops that term. If one row is constant, the row version drops it. The answer is the same either way.

Constant rows and columns

If one row of the determinant is constant — none of its entries depend on t — then differentiating that row produces a row of zeros. A determinant with an entire row of zeros is zero. So that term drops out of the sum.

This means: only the rows that actually change with t contribute to \Delta'(t). If only one row varies, the derivative is just one determinant (with only that row differentiated). This happens often in applications and can reduce a 3 \times 3 problem to a single determinant evaluation.

Similarly, if two rows are constant, only the single varying row contributes. In the extreme case where all rows are constant, the determinant is a constant and its derivative is zero — as it should be.

Here is a quick example. Let

\Delta(t) = \begin{vmatrix} 1 & 3 & 5 \\ 2 & 4 & 6 \\ t & t^2 & t^3 \end{vmatrix}

Rows 1 and 2 are constant. Only row 3 varies. So

\Delta'(t) = \begin{vmatrix} 1 & 3 & 5 \\ 2 & 4 & 6 \\ 1 & 2t & 3t^2 \end{vmatrix}

— a single determinant, not a sum of three. The other two terms are zero because differentiating a constant row gives a row of zeros. This shortcut saves real time on exam problems.
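A sketch of that shortcut in SymPy, checking the single-determinant answer against direct differentiation:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, 3, 5], [2, 4, 6], [t, t**2, t**3]])

# Rows 1 and 2 are constant, so only the row-3 term survives.
shortcut = sp.Matrix([[1, 3, 5], [2, 4, 6], [1, 2*t, 3*t**2]]).det()

assert sp.expand(shortcut - sp.diff(A.det(), t)) == 0
assert sp.expand(shortcut) == -6*t**2 + 8*t - 2
```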

Worked examples

Example 1: A 2 \times 2 determinant with polynomial entries

Let

\Delta(t) = \begin{vmatrix} t^2 & 3t \\ 2t & t^3 \end{vmatrix}

Find \Delta'(t).

Step 1. Identify the rows.

Row 1: [t^2, \; 3t]. Row 2: [2t, \; t^3].

Why: the rule differentiates one row at a time, so you need to see the rows clearly before you start.

Step 2. Differentiate row 1, keep row 2.

D_1 = \begin{vmatrix} 2t & 3 \\ 2t & t^3 \end{vmatrix} = 2t \cdot t^3 - 3 \cdot 2t = 2t^4 - 6t

Why: every entry of row 1 is differentiated with respect to t. Row 2 stays as it is.

Step 3. Keep row 1, differentiate row 2.

D_2 = \begin{vmatrix} t^2 & 3t \\ 2 & 3t^2 \end{vmatrix} = t^2 \cdot 3t^2 - 3t \cdot 2 = 3t^4 - 6t

Why: this is the second term in the row-by-row sum. Row 1 is untouched; only row 2 is differentiated.

Step 4. Add the two results.

\Delta'(t) = D_1 + D_2 = (2t^4 - 6t) + (3t^4 - 6t) = 5t^4 - 12t

Why: the row-by-row rule says \Delta' = D_1 + D_2. Adding the two expressions gives the derivative.

Step 5. Verify by direct expansion. \Delta(t) = t^2 \cdot t^3 - 3t \cdot 2t = t^5 - 6t^2. So \Delta'(t) = 5t^4 - 12t. It matches.

Why: for a 2 \times 2 case you can always check by expanding first and differentiating. The two methods must agree — and they do.

Result: \Delta'(t) = 5t^4 - 12t.

The black curve is \Delta(t) = t^5 - 6t^2, the determinant as a function of t. The red curve is \Delta'(t) = 5t^4 - 12t, its derivative found by row-by-row differentiation. At t = 1, the determinant equals -5 and the derivative equals -7, meaning the determinant is decreasing at that point — which matches the downward slope of the black curve there.

The graph confirms: wherever the red curve (derivative) is negative, the black curve (determinant) is decreasing. The row-by-row rule gave us the derivative without ever expanding the determinant — though for this 2 \times 2 example both paths were short.
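As a machine check on Example 1, the two row-differentiated determinants can be computed symbolically; this snippet mirrors Steps 2 through 5:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[t**2, 3*t], [2*t, t**3]])

D1 = sp.Matrix([[2*t, 3], [2*t, t**3]]).det()     # row 1 differentiated
D2 = sp.Matrix([[t**2, 3*t], [2, 3*t**2]]).det()  # row 2 differentiated

assert sp.expand(D1) == 2*t**4 - 6*t
assert sp.expand(D2) == 3*t**4 - 6*t
assert sp.expand(D1 + D2) == 5*t**4 - 12*t        # matches Step 4
assert sp.diff(A.det(), t) == 5*t**4 - 12*t       # matches Step 5
```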

Example 2: A 3 \times 3 determinant with a constant row

Let

\Delta(t) = \begin{vmatrix} 1 & 0 & 2 \\ t & t^2 & t^3 \\ 0 & 1 & t \end{vmatrix}

Find \Delta'(t).

Step 1. Identify which rows vary with t.

Row 1 is [1, 0, 2] — all constants. Row 2 is [t, t^2, t^3] — all functions of t. Row 3 is [0, 1, t] — one entry depends on t.

Why: row 1 is constant, so differentiating it produces [0, 0, 0]. That determinant will be zero and can be skipped. Only rows 2 and 3 contribute.

Step 2. Differentiate row 2, keep rows 1 and 3.

D_2 = \begin{vmatrix} 1 & 0 & 2 \\ 1 & 2t & 3t^2 \\ 0 & 1 & t \end{vmatrix}

Expand along row 1: = 1(2t \cdot t - 3t^2 \cdot 1) - 0 + 2(1 \cdot 1 - 2t \cdot 0)

= (2t^2 - 3t^2) + 2(1) = -t^2 + 2

Why: expanding along row 1 is efficient because it has a zero entry that kills one term. The cofactor arithmetic is straightforward.

Step 3. Differentiate row 3, keep rows 1 and 2.

D_3 = \begin{vmatrix} 1 & 0 & 2 \\ t & t^2 & t^3 \\ 0 & 0 & 1 \end{vmatrix}

Expand along row 3: = 0 - 0 + 1(1 \cdot t^2 - 0 \cdot t) = t^2

Why: row 3 after differentiation is [0, 0, 1], which has two zeros — expansion along this row is almost free.

Step 4. Add the contributions.

\Delta'(t) = 0 + D_2 + D_3 = (-t^2 + 2) + t^2 = 2

Why: the t^2 terms cancel, leaving a constant. The derivative of this determinant is 2 for all t, meaning the determinant is a linear function of t.

Result: \Delta'(t) = 2.

The black line is \Delta(t) = 2t (confirmed by direct expansion). The dashed red line is its derivative, \Delta'(t) = 2 — a constant. The determinant rises at a steady rate of 2 per unit of t, which the row-by-row method found cleanly without expanding the 3 \times 3 determinant first.

A constant derivative means the determinant is linear in t. You can verify by direct expansion: \Delta(t) = 1(t^2 \cdot t - t^3 \cdot 1) - 0 + 2(t \cdot 1 - t^2 \cdot 0) = (t^3 - t^3) + 2t = 2t. So \Delta'(t) = 2, exactly what the row-by-row method gave. The graph confirms it: \Delta(0) = 0, \Delta(1) = 2, \Delta(2) = 4 — a straight line with slope 2.
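The same verification in SymPy, looping over the three rows (the constant first row contributes zero automatically):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, 0, 2], [t, t**2, t**3], [0, 1, t]])

# Row-by-row sum; differentiating the constant row gives a zero determinant.
row_rule = sp.S.Zero
for k in range(3):
    Ak = A.copy()
    Ak[k, :] = A[k, :].diff(t)
    row_rule += Ak.det()

assert sp.simplify(row_rule) == 2       # the derivative is the constant 2
assert sp.expand(A.det()) == 2*t        # direct expansion gives 2t
```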

Applications

Showing a determinant is constant

One of the most common exam problems gives you a determinant with a parameter and asks you to show it does not depend on that parameter. The strategy: compute \Delta'(t) using the row-by-row rule. If \Delta'(t) = 0 for all t, the determinant is constant.

For a concrete illustration, take

\Delta(t) = \begin{vmatrix} 1 & t & t^2 \\ 1 & t & t^2 \\ 0 & 1 & 2t \end{vmatrix}

This determinant is zero for all t — rows 1 and 2 are identical — and the row-by-row derivative must also be zero. You can verify: each of the three terms in \Delta' has two identical rows, so each is zero. This is a trivial example, but the principle scales. In harder problems, the individual terms in the row-by-row sum may not obviously be zero, but they cancel when added.

A more interesting class of problems asks you to compute \Delta'(\theta) where the entries involve \sin\theta and \cos\theta. The row-by-row rule turns each sine into a cosine and each cosine into a negative sine — one row at a time — and the resulting determinants often simplify using trigonometric identities. This is far cleaner than expanding the original determinant into a sum of products of trigonometric functions and then differentiating that sum.
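Here is one such trigonometric determinant, a rotation matrix chosen here purely as an illustration (not a specific exam problem): its determinant is identically 1, so the row-by-row derivative must vanish, and each term does so either by a trig identity or by repeated rows.

```python
import sympy as sp

th = sp.symbols('theta')

# Illustrative choice: a rotation matrix, with determinant identically 1.
A = sp.Matrix([[sp.cos(th), -sp.sin(th)], [sp.sin(th), sp.cos(th)]])

# Row-by-row: sines become cosines and cosines become negative sines.
D1 = sp.Matrix([[-sp.sin(th), -sp.cos(th)], [sp.sin(th), sp.cos(th)]]).det()
D2 = sp.Matrix([[sp.cos(th), -sp.sin(th)], [sp.cos(th), -sp.sin(th)]]).det()

assert sp.simplify(D1) == 0        # -sin*cos + sin*cos cancels
assert D2 == 0                     # two identical rows
assert sp.simplify(A.det()) == 1   # so Delta'(theta) = 0 is consistent
```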

The Wronskian and linear independence

In the theory of differential equations, there is a determinant called the Wronskian that measures whether a set of functions is linearly independent. For two functions y_1(x) and y_2(x):

W(x) = \begin{vmatrix} y_1 & y_2 \\ y_1' & y_2' \end{vmatrix}

Differentiate using the row-by-row rule. The first term has row 1 differentiated:

\begin{vmatrix} y_1' & y_2' \\ y_1' & y_2' \end{vmatrix} = 0

because two identical rows give a zero determinant. The second term has row 2 differentiated:

\begin{vmatrix} y_1 & y_2 \\ y_1'' & y_2'' \end{vmatrix}

So W'(x) = \begin{vmatrix} y_1 & y_2 \\ y_1'' & y_2'' \end{vmatrix}. If y_1 and y_2 satisfy a second-order ODE y'' + p(x)y' + q(x)y = 0, you can substitute y_i'' = -p y_i' - q y_i and simplify to get W' = -p(x) W. This is Abel's identity, and it tells you that the Wronskian either never vanishes or is identically zero — there is no in-between. The row-by-row differentiation rule is doing real work here; without it, the derivation would require expanding and re-collecting terms by hand.
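A concrete check of Abel's identity, using the Euler equation y'' + (1/x)y' - (1/x^2)y = 0 with solutions x and 1/x (an illustrative choice of ODE, not the only one that works):

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# Euler equation y'' + (1/x) y' - (1/x**2) y = 0 has solutions x and 1/x.
y1, y2 = x, 1/x
p = 1/x

W = sp.Matrix([[y1, y2], [y1.diff(x), y2.diff(x)]]).det()

# Row-by-row derivative: the first-row term vanishes (identical rows),
# leaving only the term with the second row differentiated.
Wprime = sp.Matrix([[y1, y2], [y1.diff(x, 2), y2.diff(x, 2)]]).det()

assert sp.simplify(Wprime - W.diff(x)) == 0   # the row rule gives W'
assert sp.simplify(Wprime + p * W) == 0       # Abel's identity: W' = -p W
```

Here W = -2/x, which never vanishes for x > 0, consistent with the all-or-nothing behaviour of the Wronskian.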

Determinants of variable matrices in physics

When a matrix A(t) represents a physical system evolving in time — say, the inertia tensor of a rotating body, or the strain tensor of a deforming material — the rate of change of \det A(t) measures how the "volume" of the system changes. The row-by-row rule gives you this rate directly, without needing to expand the determinant at each instant.

The derivative of a matrix inverse

If A(t) is an invertible matrix depending on t, start from A(t) A^{-1}(t) = I and differentiate both sides:

A'(t) A^{-1}(t) + A(t) \frac{d}{dt}[A^{-1}(t)] = O

Solving for \frac{d}{dt}[A^{-1}] gives \frac{d}{dt}[A^{-1}] = -A^{-1} A' A^{-1}. This formula, combined with the row-by-row rule for \frac{d}{dt}[\det A], is the foundation of matrix calculus.
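A sketch verifying the inverse-derivative formula on an invertible 2 \times 2 matrix (the matrix is chosen arbitrarily for illustration):

```python
import sympy as sp

t = sp.symbols('t')

# An invertible matrix function of t, chosen arbitrarily for illustration.
A = sp.Matrix([[t, 1], [1, t**2 + 2]])

lhs = A.inv().diff(t)                 # differentiate the entries of A^{-1}
rhs = -A.inv() * A.diff(t) * A.inv()  # the formula -A^{-1} A' A^{-1}

assert sp.simplify(lhs - rhs) == sp.zeros(2, 2)
```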

Going deeper

If you understand the row-by-row rule and can apply it to 2 \times 2 and 3 \times 3 determinants, you have what you need for most exam problems. The material below gives the proof for general n and a useful corollary.

Proof for n \times n determinants

Write the determinant using the Leibniz formula:

\det A(t) = \sum_{\sigma \in S_n} \text{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)}(t)

Each term in the sum is a product of n functions of t (one from each row). Differentiate one such term using the product rule:

\frac{d}{dt} \prod_{i=1}^{n} a_{i,\sigma(i)} = \sum_{k=1}^{n} a_{1,\sigma(1)} \cdots a_{k,\sigma(k)}' \cdots a_{n,\sigma(n)}

Now sum over all permutations \sigma. For a fixed k, the terms where only a_{k,\sigma(k)} is differentiated collect into

\sum_{\sigma} \text{sgn}(\sigma) \, a_{1,\sigma(1)} \cdots a_{k,\sigma(k)}' \cdots a_{n,\sigma(n)}

This is the determinant of the matrix with row k replaced by its derivative. Summing over k from 1 to n gives

\frac{d}{dt} \det A = \sum_{k=1}^{n} \det A_k'

where A_k' is the matrix with row k differentiated and all other rows unchanged. This is exactly the row-by-row rule.

Jacobi's formula

For an invertible matrix A(t), the row-by-row rule can be repackaged as

\frac{d}{dt} \det A = \det A \cdot \text{tr}\!\left(A^{-1} \frac{dA}{dt}\right)

where \text{tr} denotes the trace (sum of diagonal entries). This is Jacobi's formula. It is extremely useful in advanced applications — for instance, in deriving how the determinant changes under a small perturbation of the matrix. The formula says that the logarithmic derivative of the determinant (that is, \frac{d}{dt} \ln |\det A|) equals the trace of A^{-1} \frac{dA}{dt}.

To see why this holds: expand \text{tr}(A^{-1} A') using A^{-1} = \frac{\text{adj}(A)}{\det A}. The trace of \frac{\text{adj}(A) \cdot A'}{\det A} picks out exactly the cofactor expansion terms that appear in the row-by-row differentiation, each divided by \det A. Multiplying through by \det A recovers the row-by-row sum.

A concrete verification of Jacobi's formula

Take A(t) = \begin{bmatrix} t & 1 \\ 0 & t \end{bmatrix}. Then \det A = t^2, so \frac{d}{dt} \det A = 2t.

Now compute the right side of Jacobi's formula. First, A' = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I, and A^{-1} = \frac{1}{t^2}\begin{bmatrix} t & -1 \\ 0 & t \end{bmatrix}. So A^{-1} A' = A^{-1}, and

\text{tr}(A^{-1} A') = \text{tr}\!\left(\frac{1}{t^2}\begin{bmatrix} t & -1 \\ 0 & t \end{bmatrix}\right) = \frac{t + t}{t^2} = \frac{2}{t}

Multiply by \det A = t^2: t^2 \cdot \frac{2}{t} = 2t. This matches \frac{d}{dt} \det A = 2t. Jacobi's formula and the row-by-row rule are two views of the same fact.
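The same verification scales to 3 \times 3 with SymPy (again an arbitrary invertible matrix, chosen only for illustration):

```python
import sympy as sp

t = sp.symbols('t')

# An invertible 3x3 matrix function of t, chosen only for illustration.
A = sp.Matrix([[t, 1, 0], [0, t**2 + 1, 2], [1, 0, sp.exp(t)]])

lhs = sp.diff(A.det(), t)
rhs = A.det() * (A.inv() * A.diff(t)).trace()   # Jacobi's formula

assert sp.simplify(lhs - rhs) == 0
```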

Higher derivatives

Since \Delta'(t) is itself a sum of determinants, you can differentiate again to get \Delta''(t). Apply the row-by-row rule to each of the n determinants in the sum. For a 2 \times 2 determinant \Delta = \begin{vmatrix} f_1 & f_2 \\ g_1 & g_2 \end{vmatrix}, you already know

\Delta' = \begin{vmatrix} f_1' & f_2' \\ g_1 & g_2 \end{vmatrix} + \begin{vmatrix} f_1 & f_2 \\ g_1' & g_2' \end{vmatrix}

Differentiate each of those two determinants using the row-by-row rule again:

\Delta'' = \begin{vmatrix} f_1'' & f_2'' \\ g_1 & g_2 \end{vmatrix} + \begin{vmatrix} f_1' & f_2' \\ g_1' & g_2' \end{vmatrix} + \begin{vmatrix} f_1' & f_2' \\ g_1' & g_2' \end{vmatrix} + \begin{vmatrix} f_1 & f_2 \\ g_1'' & g_2'' \end{vmatrix}

The two middle terms are identical, so

\Delta'' = \begin{vmatrix} f_1'' & f_2'' \\ g_1 & g_2 \end{vmatrix} + 2\begin{vmatrix} f_1' & f_2' \\ g_1' & g_2' \end{vmatrix} + \begin{vmatrix} f_1 & f_2 \\ g_1'' & g_2'' \end{vmatrix}

The coefficients 1, 2, 1 are the binomial coefficients \binom{2}{0}, \binom{2}{1}, \binom{2}{2}. This pattern generalises: the k-th derivative of a determinant involves terms weighted by multinomial coefficients, much like the Leibniz rule for the k-th derivative of a product, though the bookkeeping grows quickly with k.
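The 1, 2, 1 pattern can be confirmed symbolically; the entries below are arbitrary smooth functions chosen for illustration:

```python
import sympy as sp

t = sp.symbols('t')
f1, f2 = t**3, sp.sin(t)      # arbitrary smooth entries for illustration
g1, g2 = sp.exp(t), t**2

A = sp.Matrix([[f1, f2], [g1, g2]])

# The 1, 2, 1 pattern for the second derivative of a 2x2 determinant.
second = (
    sp.Matrix([[f1.diff(t, 2), f2.diff(t, 2)], [g1, g2]]).det()
    + 2 * sp.Matrix([[f1.diff(t), f2.diff(t)], [g1.diff(t), g2.diff(t)]]).det()
    + sp.Matrix([[f1, f2], [g1.diff(t, 2), g2.diff(t, 2)]]).det()
)

assert sp.simplify(second - sp.diff(A.det(), t, 2)) == 0
```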

Where this leads next