In short

The most powerful single fact about determinants is that \det(AB) = \det(A) \cdot \det(B). This means you can compute the determinant of a product without ever computing the product matrix. The same rule, run in reverse, lets you "multiply" two determinants by building a suitable matrix product — a technique that turns difficult determinant evaluations into mechanical row-by-column arithmetic.

Here is a puzzle. Take two 2 \times 2 matrices:

A = \begin{pmatrix} 3 & 1 \\ 2 & 4 \end{pmatrix}, \quad B = \begin{pmatrix} 5 & 0 \\ 1 & 2 \end{pmatrix}

Compute \det(A) = 12 - 2 = 10. Compute \det(B) = 10 - 0 = 10. Now multiply them:

AB = \begin{pmatrix} 16 & 2 \\ 14 & 8 \end{pmatrix}

Compute \det(AB) = 128 - 28 = 100.

Look at the three numbers: \det(A) = 10, \det(B) = 10, \det(AB) = 100. The determinant of the product is the product of the determinants: 100 = 10 \times 10.

Coincidence? Try any other pair of matrices. It always works. This is the product rule for determinants, and it is one of the most important theorems in linear algebra.
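
The arithmetic above is easy to reproduce in code. Here is a minimal Python check (a sketch; the helpers det2 and matmul are written out by hand rather than taken from any library):

```python
# Verify the opening puzzle: det(A), det(B), and det(AB) for the
# matrices from the text.

def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul(a, b):
    """Product of two 2x2 matrices by row-times-column dot products."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[3, 1], [2, 4]]
B = [[5, 0], [1, 2]]

print(det2(A), det2(B), det2(matmul(A, B)))  # 10 10 100
```

Swapping in any other integer matrices gives the same pattern: the third number is always the product of the first two.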

The theorem

Product rule for determinants

For any two n \times n matrices A and B:

\det(AB) = \det(A) \cdot \det(B)

The rule says: the determinant is multiplicative. It converts a matrix product (a complicated operation) into a number product (the simplest operation in arithmetic). It applies to matrices of every size — 2 \times 2, 3 \times 3, 100 \times 100.

The proof for the 2 \times 2 case

The cleanest way to see why the rule works is to verify it directly for 2 \times 2 matrices, then understand the structure well enough to believe the general case.

Let A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} and B = \begin{pmatrix} p & q \\ r & s \end{pmatrix}.

Step 1. Compute AB:

AB = \begin{pmatrix} ap + br & aq + bs \\ cp + dr & cq + ds \end{pmatrix}

Why: standard row-by-column multiplication. Entry (i, j) of AB is the dot product of row i of A with column j of B.

Step 2. Compute \det(AB):

\det(AB) = (ap + br)(cq + ds) - (aq + bs)(cp + dr)

Step 3. Expand both products:

(ap + br)(cq + ds) = apcq + apds + brcq + brds
(aq + bs)(cp + dr) = aqcp + aqdr + bscp + bsdr

Step 4. Subtract:

\det(AB) = apcq + apds + brcq + brds - aqcp - aqdr - bscp - bsdr

Why: just expanding the brackets and subtracting term by term. Some terms will cancel.

Step 5. Cancel and regroup. The terms apcq and aqcp are the same, so they cancel. Similarly brds and bsdr cancel. What remains:

\det(AB) = apds - aqdr + brcq - bscp

Factor this as:

= (ad - bc)(ps - qr)

Why: pull ad from the first and second terms, and -bc from the third and fourth: ad(ps - qr) - bc(ps - qr) = (ad - bc)(ps - qr). Alternatively, you can verify by re-expanding.

Step 6. Recognise the factors:

ad - bc = \det(A), \quad ps - qr = \det(B)

So \det(AB) = \det(A) \cdot \det(B).

Result: The product rule holds for all 2 \times 2 matrices.

The algebra above is a direct verification — not the most elegant proof, but the most transparent. Every step is arithmetic you can check.
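
If you want to double-check the expansion without redoing the algebra, a brute-force loop can test the Step 5 identity on a grid of small integers (a sketch; the sampled values are arbitrary):

```python
# Check (ap+br)(cq+ds) - (aq+bs)(cp+dr) == (ad-bc)(ps-qr) on many
# integer samples; the variable names match the symbols in the proof.

from itertools import product

for a, b, c, d in product(range(-2, 3), repeat=4):
    for p, q, r, s in [(1, 2, 3, 4), (-1, 0, 2, 5)]:  # a few fixed B's
        lhs = (a*p + b*r) * (c*q + d*s) - (a*q + b*s) * (c*p + d*r)
        rhs = (a*d - b*c) * (p*s - q*r)
        assert lhs == rhs
print("identity verified on all sampled values")
```
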

The proof for the general n \times n case

The 2 \times 2 proof was a brute-force expansion. For the general case, a more structural argument is needed. Here is the standard approach.

The row-reduction argument

Every invertible matrix can be written as a product of elementary matrices — matrices that encode a single row operation (swapping two rows, multiplying a row by a scalar, or adding a multiple of one row to another).

Step 1. Check the product rule for each type of elementary matrix E paired with any matrix B:

If E swaps two rows, then \det(EB) = -\det(B) and \det(E) = -1.
If E multiplies a row by a non-zero scalar k, then \det(EB) = k \cdot \det(B) and \det(E) = k.
If E adds a multiple of one row to another, then \det(EB) = \det(B) and \det(E) = 1.

In all three cases, \det(EB) = \det(E) \cdot \det(B).

Why: each of these follows from the basic properties of determinants (sign change on row swap, scalar factor pulled out, determinant unchanged by row addition). These are the properties you met in the article on properties of determinants.
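
The three cases can be checked concretely on a 2 \times 2 example (a sketch; the matrices are illustrative):

```python
# det(EB) = det(E) * det(B) for one elementary matrix of each type.

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B = [[5, 0], [1, 2]]                 # det(B) = 10
E_swap  = [[0, 1], [1, 0]]           # swap rows:        det = -1
E_scale = [[3, 0], [0, 1]]           # scale row 1 by 3: det = 3
E_add   = [[1, 0], [4, 1]]           # row2 += 4*row1:   det = 1

for E in (E_swap, E_scale, E_add):
    assert det2(matmul(E, B)) == det2(E) * det2(B)
print("det(EB) = det(E) det(B) for all three elementary types")
```
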

Step 2. If A is invertible, write A = E_1 E_2 \cdots E_k as a product of elementary matrices. Then:

\det(AB) = \det(E_1 E_2 \cdots E_k B) = \det(E_1) \cdot \det(E_2 \cdots E_k B)

Applying the elementary matrix rule repeatedly:

= \det(E_1) \cdot \det(E_2) \cdots \det(E_k) \cdot \det(B) = \det(A) \cdot \det(B)

Why: the first equality uses the elementary matrix product rule proved in Step 1. Iterating this peels off one elementary matrix at a time until only \det(B) remains.

Step 3. If A is not invertible, then \det(A) = 0. But AB is also not invertible: if AB had an inverse C, then A(BC) = I, so A would have a right inverse, and for square matrices a right inverse is a two-sided inverse — contradicting the assumption that A is singular. So \det(AB) = 0 = 0 \cdot \det(B) = \det(A) \cdot \det(B).

Why: a singular matrix has a zero determinant, and the product of a singular matrix with anything is still singular. Both sides are zero, so the equality holds trivially.

This completes the proof for all n \times n matrices: the product rule \det(AB) = \det(A) \cdot \det(B) holds whether or not A is invertible.
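
A quick numerical check of the general statement, using a self-contained Laplace-expansion determinant (a sketch; det, matmul, and the matrix B are our own illustrative choices):

```python
# Check det(AB) = det(A) * det(B) for a 3x3 example.

def det(m):
    """Determinant by Laplace expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]   # det(A) = 8
B = [[1, 0, 2], [0, 1, 1], [3, 0, 1]]   # det(B) = -5

print(det(A), det(B), det(matmul(A, B)))  # 8 -5 -40
```
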

Multiplication of two determinants

The product rule, read backwards, gives you a technique for multiplying two determinants by constructing a matrix product.

Given

\begin{vmatrix} a_1 & b_1 \\ a_2 & b_2 \end{vmatrix} \times \begin{vmatrix} p_1 & q_1 \\ p_2 & q_2 \end{vmatrix}

you can write the product as a single determinant:

= \begin{vmatrix} a_1 p_1 + b_1 p_2 & a_1 q_1 + b_1 q_2 \\ a_2 p_1 + b_2 p_2 & a_2 q_1 + b_2 q_2 \end{vmatrix}

The rule is: row of first \times column of second, exactly as in matrix multiplication. Entry (i, j) of the resulting determinant is the dot product of row i of the first matrix with column j of the second matrix.

The 3 \times 3 version

For 3 \times 3 determinants, the same rule applies. If \Delta_1 = \det(A) and \Delta_2 = \det(B) where A and B are 3 \times 3, then:

\Delta_1 \cdot \Delta_2 = \det(AB)

and AB is computed by the standard row-by-column rule. The result is a single 3 \times 3 determinant whose entries are dot products.

Flexibility in the multiplication

There is a subtlety. You can also multiply rows of the first with rows of the second (which corresponds to computing \det(AB^T)), or columns of the first with columns of the second (\det(A^T B)), or columns of the first with rows of the second (\det(A^T B^T)). Since \det(A^T) = \det(A), all these give the same numerical answer.

Why: \det(AB^T) = \det(A) \cdot \det(B^T) = \det(A) \cdot \det(B) = \Delta_1 \cdot \Delta_2. Transposing does not change the determinant, so you are free to transpose either matrix before multiplying.

This flexibility is important in practice. Sometimes the dot products come out cleaner when you multiply rows with rows instead of rows with columns. You should choose whichever combination gives the simplest arithmetic.
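
A short check that all four pairing conventions give the same value (a sketch; the matrices are illustrative):

```python
# The four row/column pairing conventions all produce the same number.

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def T(m):
    """Transpose of a 2x2 matrix."""
    return [[m[j][i] for j in range(2)] for i in range(2)]

A = [[1, 3], [2, 5]]
B = [[4, 1], [3, 2]]

values = {det2(matmul(A, B)),        # rows of A with columns of B
          det2(matmul(A, T(B))),     # rows of A with rows of B
          det2(matmul(T(A), B)),     # columns of A with columns of B
          det2(matmul(T(A), T(B)))}  # columns of A with rows of B
print(values)  # all four coincide: {-5}
```
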

Immediate consequences

The product rule has several powerful consequences.

1. Determinant of a power

If A is n \times n, then:

\det(A^k) = [\det(A)]^k

Apply the product rule k - 1 times: \det(A^k) = \det(A \cdot A^{k-1}) = \det(A) \cdot \det(A^{k-1}) = \cdots = [\det(A)]^k.
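
A sketch of this consequence in code, checking \det(A^k) = [\det(A)]^k for several powers (the matrix is the one from the opening puzzle):

```python
# det(A^k) should equal det(A)**k for every k; integer arithmetic
# makes the comparison exact.

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[3, 1], [2, 4]]          # det(A) = 10
Ak = [[1, 0], [0, 1]]         # identity; Ak accumulates A^k
for k in range(1, 6):
    Ak = matmul(Ak, A)
    assert det2(Ak) == det2(A) ** k
print("det(A^k) == det(A)**k for k = 1..5")
```
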

2. Determinant of an inverse

If A is invertible (\det(A) \neq 0), then:

\det(A^{-1}) = \frac{1}{\det(A)}

Proof: A \cdot A^{-1} = I, so \det(A) \cdot \det(A^{-1}) = \det(I) = 1. Dividing both sides by \det(A) gives the result.

Why: the identity matrix I has determinant 1 (its diagonal entries are all 1, and the off-diagonal entries are all 0). The product rule forces \det(A^{-1}) to be the reciprocal of \det(A).
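
Exact rational arithmetic makes this easy to verify (a sketch; the standard 2 \times 2 inverse formula is used directly):

```python
# det(A^{-1}) = 1/det(A), checked exactly with Fractions.

from fractions import Fraction

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[Fraction(3), Fraction(1)], [Fraction(2), Fraction(4)]]
d = det2(A)                                   # 10
# Inverse of [[a, b], [c, d]] is (1/det) * [[d, -b], [-c, a]].
Ainv = [[ A[1][1] / d, -A[0][1] / d],
        [-A[1][0] / d,  A[0][0] / d]]
print(det2(Ainv), 1 / d)  # both print 1/10
```
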

3. Similar matrices have equal determinants

Two matrices A and B are similar if B = P^{-1}AP for some invertible matrix P. Then:

\det(B) = \det(P^{-1}AP) = \det(P^{-1}) \cdot \det(A) \cdot \det(P) = \frac{1}{\det(P)} \cdot \det(A) \cdot \det(P) = \det(A)

The determinant is an invariant under similarity — it does not change when you change the basis. This is why the determinant of a linear transformation is well-defined, independent of which basis you use to represent it as a matrix.
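
A sketch of the similarity invariance in code (the matrices A and P are illustrative; P has determinant 1, so it is invertible):

```python
# det(P^{-1} A P) = det(A): a change of basis leaves the determinant alone.

from fractions import Fraction

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [0, 3]]                       # det(A) = 6
P = [[1, 1], [1, 2]]                       # det(P) = 1
d = Fraction(det2(P))
Pinv = [[ Fraction(P[1][1]) / d, -Fraction(P[0][1]) / d],
        [-Fraction(P[1][0]) / d,  Fraction(P[0][0]) / d]]
B = matmul(matmul(Pinv, A), P)             # similar to A
print(det2(B), det2(A))  # equal: 6 and 6
```
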

4. A matrix is singular if and only if its determinant is zero

The product rule gives one direction directly: if \det(A) = 0, then \det(A) \cdot \det(B) = 0 \neq 1 = \det(I) for every B, so AB = I has no solution and A is singular. The other direction (a singular matrix has determinant zero) follows from row reduction: a singular matrix row-reduces to a matrix with a zero row, and row operations change the determinant only by non-zero factors.

Worked examples

Example 1: Multiplying two $2 \times 2$ determinants

Compute \begin{vmatrix} 1 & 3 \\ 2 & 5 \end{vmatrix} \times \begin{vmatrix} 4 & 1 \\ 3 & 2 \end{vmatrix} by the multiplication rule.

Step 1. Compute each determinant separately first (for verification later).

\Delta_1 = 1 \times 5 - 3 \times 2 = 5 - 6 = -1
\Delta_2 = 4 \times 2 - 1 \times 3 = 8 - 3 = 5

Why: knowing the individual determinants gives the expected product (-1) \times 5 = -5. You can check your multiplication result against this.

Step 2. Form the product determinant using row-by-column multiplication.

Entry (1,1): row 1 of first \cdot column 1 of second = 1 \times 4 + 3 \times 3 = 4 + 9 = 13.

Entry (1,2): row 1 of first \cdot column 2 of second = 1 \times 1 + 3 \times 2 = 1 + 6 = 7.

Entry (2,1): row 2 of first \cdot column 1 of second = 2 \times 4 + 5 \times 3 = 8 + 15 = 23.

Entry (2,2): row 2 of first \cdot column 2 of second = 2 \times 1 + 5 \times 2 = 2 + 10 = 12.

Why: each entry is a dot product, exactly as in matrix multiplication. The first determinant supplies the rows; the second supplies the columns.

Step 3. Write the product as a single determinant and evaluate.

\begin{vmatrix} 13 & 7 \\ 23 & 12 \end{vmatrix} = 156 - 161 = -5

Step 4. Verify: \Delta_1 \times \Delta_2 = (-1) \times 5 = -5. The product rule checks out.

Result: The product of the two determinants is -5.

[Figure: schematic of determinant multiplication]
Multiplying two $2 \times 2$ determinants. The entries of the product determinant are dot products: row of first with column of second. The determinant of the product ($-5$) equals the product of the determinants ($-1 \times 5$).

Example 2: Using the product rule to prove $\det(A^2) = [\det(A)]^2$

Let A = \begin{pmatrix} 2 & 1 & 0 \\ 1 & 3 & 1 \\ 0 & 1 & 2 \end{pmatrix}. Verify that \det(A^2) = [\det(A)]^2.

Step 1. Compute \det(A) by expanding along the first row.

\det(A) = 2 \begin{vmatrix} 3 & 1 \\ 1 & 2 \end{vmatrix} - 1 \begin{vmatrix} 1 & 1 \\ 0 & 2 \end{vmatrix} + 0
= 2(6 - 1) - 1(2 - 0) = 10 - 2 = 8

Why: the zero in position (1, 3) eliminates one cofactor, and the remaining two 2 \times 2 determinants are quick to compute.

Step 2. By the product rule, \det(A^2) = \det(A \cdot A) = \det(A) \cdot \det(A) = 8 \times 8 = 64.

Why: A^2 is just A multiplied by itself. The product rule applies directly — no need to compute A^2 at all.

Step 3. Verify by computing A^2 explicitly.

A^2 = \begin{pmatrix} 2 & 1 & 0 \\ 1 & 3 & 1 \\ 0 & 1 & 2 \end{pmatrix} \begin{pmatrix} 2 & 1 & 0 \\ 1 & 3 & 1 \\ 0 & 1 & 2 \end{pmatrix} = \begin{pmatrix} 5 & 5 & 1 \\ 5 & 11 & 5 \\ 1 & 5 & 5 \end{pmatrix}

Why: each entry is a dot product. For instance, entry (1,1) = 2 \cdot 2 + 1 \cdot 1 + 0 \cdot 0 = 5. Entry (2,2) = 1 \cdot 1 + 3 \cdot 3 + 1 \cdot 1 = 11.

Step 4. Compute \det(A^2) by expanding along the first row:

\det(A^2) = 5 \begin{vmatrix} 11 & 5 \\ 5 & 5 \end{vmatrix} - 5 \begin{vmatrix} 5 & 5 \\ 1 & 5 \end{vmatrix} + 1 \begin{vmatrix} 5 & 11 \\ 1 & 5 \end{vmatrix}
= 5(55 - 25) - 5(25 - 5) + 1(25 - 11) = 5 \times 30 - 5 \times 20 + 14 = 150 - 100 + 14 = 64

Result: \det(A^2) = 64 = 8^2 = [\det(A)]^2. The product rule saved a page of matrix multiplication — you could have skipped Steps 3 and 4 entirely.

[Figure: the product rule applied to powers. \det(A^2) = \det(A \cdot A) = \det(A) \cdot \det(A) = 8 \times 8 = 64; in general \det(A^n) = [\det(A)]^n, so for instance \det(A^3) = 8^3 = 512, all without computing A^n.]
The product rule lets you compute the determinant of any power of $A$ from $\det(A)$ alone. For $\det(A) = 8$, the determinant of $A^{10}$ is $8^{10}$ — about a billion — and you never need to multiply ten matrices together.

The power of the product rule is visible here: computing A^{10} would require nine matrix multiplications, but \det(A^{10}) = 8^{10} requires knowing only a single number.
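
Example 2 condensed into code (a sketch; det3 hard-codes the first-row cofactor expansion used above):

```python
# det(A) once, then det(A^2) both by the product rule and directly.

def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    a, b, c = m[0]
    return (a * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - b * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + c * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
A2 = matmul(A, A)
print(det3(A), det3(A2))  # 8 and 64: det(A^2) = det(A)^2
```
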

Applications

Application 1: Testing invertibility of a product

A matrix M is invertible if and only if \det(M) \neq 0. For a product AB:

\det(AB) = \det(A) \cdot \det(B)

This is zero if and only if at least one of \det(A), \det(B) is zero. So:

AB is invertible if and only if both A and B are invertible.

You don't need to compute AB to know whether it is invertible — just check the individual determinants.
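
In code, the test needs only the two individual determinants (a sketch; the matrices are illustrative):

```python
# Decide whether AB is invertible without forming AB.

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[1, 2], [2, 4]]   # rows proportional, det(A) = 0: singular
B = [[5, 0], [1, 2]]   # det(B) = 10: invertible
print(det2(A) * det2(B))  # 0, so AB is singular without computing AB
```
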

Application 2: Proving determinant identities

Many determinant problems in competitive exams ask you to show that a determinant equals some expression. The product rule lets you factor a complicated determinant into simpler pieces.

For example, suppose you need to show that

\begin{vmatrix} 1 + a & 1 & 1 \\ 1 & 1 + b & 1 \\ 1 & 1 & 1 + c \end{vmatrix} = abc + ab + bc + ca

One approach: write the matrix as J + D where J is the 3 \times 3 all-ones matrix and D = \text{diag}(a, b, c). While \det(J + D) \neq \det(J) + \det(D), you can expand the determinant using properties of the diagonal perturbation — or simply expand directly and factor the result.

But the product rule helps in harder cases. If a determinant factors as a product of two structured matrices (say a Vandermonde times a diagonal), the product rule tells you its value is the product of two known determinants.

Application 3: Geometric transformations

When a linear transformation T maps a region of the plane, the area of the image is |\det(T)| times the area of the original region. If you compose two transformations T_1 followed by T_2, the combined transformation is T_2 T_1, and the area scaling factor is:

|\det(T_2 T_1)| = |\det(T_2)| \cdot |\det(T_1)|

The area scaling factors multiply. A rotation (determinant 1) preserves areas. A scaling by factor k in both directions (determinant k^2) scales areas by k^2. Composing a rotation with such a scaling gives a combined area factor of 1 \times k^2 = k^2 — exactly right.
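
A sketch of the composition in code (the angle and scale factor are arbitrary; the result is checked only up to floating-point error):

```python
# A rotation (area factor 1) composed with a uniform scaling by k
# (area factor k^2): the combined area factor is their product.

import math

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

theta, k = 0.7, 3.0
R = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]    # rotation: det = 1
S = [[k, 0.0], [0.0, k]]                     # scaling:  det = k^2
combined = abs(det2(matmul(S, R)))
print(combined)  # ~9.0 = 1 * k^2
```
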

Going deeper

If you came here to learn the product rule and how to multiply determinants, you have it — you can stop here. The rest of this section is for readers who want the abstract perspective, the connection to the Cauchy-Binet formula, and the relationship between determinants and volumes.

The determinant as a group homomorphism

The set of all invertible n \times n matrices forms a group under multiplication, called the general linear group GL(n). The non-zero real numbers form a group under multiplication, called \mathbb{R}^*.

The product rule says precisely that the map \det : GL(n) \to \mathbb{R}^* is a group homomorphism — it preserves the group operation. This is one of the most important examples of a homomorphism in abstract algebra.

The kernel of this homomorphism (the set of matrices with determinant 1) is the special linear group SL(n). These are the linear transformations that preserve volume, and they form a normal subgroup of GL(n).

The Cauchy-Binet formula

What if A is m \times n and B is n \times m with m \leq n? Then AB is m \times m, and \det(AB) makes sense even though A and B are not square. The Cauchy-Binet formula generalises the product rule:

\det(AB) = \sum_{S} \det(A_S) \cdot \det(B_S)

where the sum runs over all \binom{n}{m} subsets S of m columns of A (equivalently, m rows of B), and A_S, B_S are the corresponding m \times m submatrices.

When m = n, there is only one subset S (all columns), and the formula reduces to \det(AB) = \det(A) \cdot \det(B).
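
A small check of Cauchy-Binet for m = 2, n = 3 (a sketch; the matrices are illustrative):

```python
# Cauchy-Binet: det(AB) equals the sum over all 2-column subsets S of A
# (equivalently 2-row subsets of B) of det(A_S) * det(B_S).

from itertools import combinations

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[1, 2, 3],
     [4, 5, 6]]            # 2 x 3
B = [[1, 0],
     [2, 1],
     [0, 3]]               # 3 x 2

# Left side: det(AB) directly.
AB = [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(2)]
      for i in range(2)]
lhs = det2(AB)

# Right side: sum over the binom(3, 2) = 3 subsets S.
rhs = 0
for S in combinations(range(3), 2):
    A_S = [[A[i][j] for j in S] for i in range(2)]
    B_S = [B[j] for j in S]
    rhs += det2(A_S) * det2(B_S)

print(lhs, rhs)  # equal, as the formula predicts
```
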

The Cauchy-Binet formula appears in differential geometry (computing the area of a parametric surface) and in combinatorics (counting spanning trees of a graph via the matrix tree theorem).

Determinants and volume

In \mathbb{R}^n, the absolute value of the determinant of an n \times n matrix A equals the n-dimensional volume of the parallelepiped (the higher-dimensional analogue of a parallelogram) spanned by the rows (or columns) of A.

The product rule then has a beautiful geometric interpretation: if you first deform space by B and then by A, the combined volume scaling factor is the product of the individual scaling factors. This is the same reasoning behind the area-of-transformed-region application above, extended to all dimensions.

For n = 2: |\det(A)| is the area of the parallelogram spanned by the rows. For n = 3: |\det(A)| is the volume of the parallelepiped. The sign tells you the orientation — whether the transformation preserves or reverses the "handedness" of the coordinate system.

A historical note

The product rule for determinants was proved in full generality by Jacques Binet and Augustin-Louis Cauchy independently in 1812, though special cases were known earlier. The result was a milestone: it showed that the determinant is not just a computational tool but a fundamental algebraic quantity with deep structural properties. In India, the study of determinants entered the curriculum through the influence of 19th-century British textbooks, and they became a standard part of the mathematical training that produced Ramanujan's generation of mathematicians.

Where this leads next

The product rule connects to nearly every part of matrix theory. The most direct continuations: