In short
The most powerful single fact about determinants is that \det(AB) = \det(A) \cdot \det(B). This means you can compute the determinant of a product without ever computing the product matrix. The same rule, run in reverse, lets you "multiply" two determinants by building a suitable matrix product — a technique that turns difficult determinant evaluations into mechanical row-by-column arithmetic.
Here is a puzzle. Take two 2 \times 2 matrices:
Compute \det(A) = 12 - 2 = 10. Compute \det(B) = 10 - 0 = 10. Now multiply them:
Compute \det(AB) = 128 - 28 = 100.
Look at the three numbers: \det(A) = 10, \det(B) = 10, \det(AB) = 100. The determinant of the product is the product of the determinants: 100 = 10 \times 10.
Coincidence? Try any other pair of matrices. It always works. This is the product rule for determinants, and it is one of the most important theorems in linear algebra.
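You can run this experiment yourself in a few lines of Python. This is a minimal sketch with two arbitrarily chosen matrices (not the ones from the puzzle above); the helper names det2 and matmul2 are illustrative:

```python
def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    return a * d - b * c

def matmul2(x, y):
    """Row-by-column product of two 2x2 matrices."""
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[3, 1], [5, 4]]   # det(A) = 12 - 5 = 7
B = [[2, 7], [1, 6]]   # det(B) = 12 - 7 = 5

print(det2(A), det2(B), det2(matmul2(A, B)))  # prints: 7 5 35
```

Swap in any integer entries you like: the third number is always the product of the first two.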
The theorem
Product rule for determinants
For any two n \times n matrices A and B:

\det(AB) = \det(A) \cdot \det(B)
The rule says: the determinant is multiplicative. It converts a matrix product (a complicated operation) into a number product (the simplest operation in arithmetic). It applies to matrices of every size — 2 \times 2, 3 \times 3, 100 \times 100.
The proof for the 2 \times 2 case
The cleanest way to see why the rule works is to verify it directly for 2 \times 2 matrices, then understand the structure well enough to believe the general case.
Let A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} and B = \begin{pmatrix} p & q \\ r & s \end{pmatrix}.
Step 1. Compute AB:

AB = \begin{pmatrix} ap + br & aq + bs \\ cp + dr & cq + ds \end{pmatrix}
Why: standard row-by-column multiplication. Entry (i, j) of AB is the dot product of row i of A with column j of B.
Step 2. Compute \det(AB):

\det(AB) = (ap + br)(cq + ds) - (aq + bs)(cp + dr)
Step 3. Expand both products:

(ap + br)(cq + ds) = apcq + apds + brcq + brds

(aq + bs)(cp + dr) = aqcp + aqdr + bscp + bsdr
Step 4. Subtract:

\det(AB) = apcq + apds + brcq + brds - aqcp - aqdr - bscp - bsdr
Why: just expanding the brackets and subtracting term by term. Some terms will cancel.
Step 5. Cancel and regroup. The terms apcq and aqcp are the same, so they cancel. Similarly brds and bsdr cancel. What remains:

\det(AB) = apds - aqdr - bscp + brcq
Factor this as:

\det(AB) = (ad - bc)(ps - qr)
Why: pull ad from the first and second terms, and -bc from the third and fourth: ad(ps - qr) - bc(ps - qr) = (ad - bc)(ps - qr). Alternatively, you can verify by re-expanding.
Step 6. Recognise the factors:

ad - bc = \det(A), \qquad ps - qr = \det(B)
So \det(AB) = \det(A) \cdot \det(B).
Result: The product rule holds for all 2 \times 2 matrices.
The algebra above is a direct verification — not the most elegant proof, but the most transparent. Every step is arithmetic you can check.
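If you would rather not push the symbols around by hand, the identity behind Steps 1 to 6 can be checked exhaustively over a range of integer entries. This is a brute-force confirmation, not a proof, and the function name is illustrative:

```python
from itertools import product

def identity_holds(a, b, c, d, p, q, r, s):
    # det(AB), computed directly from the entries of
    # AB = [[ap + br, aq + bs], [cp + dr, cq + ds]]
    lhs = (a*p + b*r) * (c*q + d*s) - (a*q + b*s) * (c*p + d*r)
    # det(A) * det(B)
    rhs = (a*d - b*c) * (p*s - q*r)
    return lhs == rhs

# Exhaustive check over all 5**8 = 390625 choices of entries in {-2, ..., 2}.
assert all(identity_holds(*entries) for entries in product(range(-2, 3), repeat=8))
print("det(AB) == det(A) * det(B) in every case tested")
```

Since both sides are polynomials in the eight entries, agreement on enough integer points already forces the polynomials to be identical, so this check is more than anecdotal.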
The proof for the general n \times n case
The 2 \times 2 proof was a brute-force expansion. For the general case, a more structural argument is needed. Here is the standard approach.
The row-reduction argument
Every invertible matrix can be written as a product of elementary matrices — matrices that encode a single row operation (swapping two rows, multiplying a row by a scalar, or adding a multiple of one row to another).
Step 1. Check the product rule for each type of elementary matrix E paired with any matrix B:
- Row swap: \det(EB) = -\det(B) and \det(E) = -1, so \det(EB) = \det(E) \cdot \det(B).
- Scalar multiplication of a row by k: \det(EB) = k \cdot \det(B) and \det(E) = k, so \det(EB) = \det(E) \cdot \det(B).
- Adding k times one row to another: \det(EB) = \det(B) and \det(E) = 1, so \det(EB) = \det(E) \cdot \det(B).
Why: each of these follows from the basic properties of determinants (sign change on row swap, scalar factor pulled out, determinant unchanged by row addition). These are the properties you met in the article on properties of determinants.
Step 2. If A is invertible, write A = E_1 E_2 \cdots E_k as a product of elementary matrices. Then:

\det(AB) = \det(E_1 E_2 \cdots E_k B)
Applying the elementary matrix rule repeatedly:

\det(E_1 E_2 \cdots E_k B) = \det(E_1) \cdot \det(E_2 \cdots E_k B) = \cdots = \det(E_1) \cdot \det(E_2) \cdots \det(E_k) \cdot \det(B) = \det(A) \cdot \det(B)
Why: the first equality uses the elementary matrix product rule proved in Step 1. Iterating this peels off one elementary matrix at a time until only \det(B) remains.
Step 3. If A is not invertible, then \det(A) = 0. But also, AB is not invertible (because if AB were invertible, then A would have a right inverse, contradicting A being singular). So \det(AB) = 0 = 0 \cdot \det(B) = \det(A) \cdot \det(B).
Why: a singular matrix has a zero determinant, and the product of a singular matrix with anything is still singular. Both sides are zero, so the equality holds trivially.
This completes the proof for all n \times n matrices: the product rule \det(AB) = \det(A) \cdot \det(B) holds whether or not A is invertible.
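The general statement is easy to spot-check numerically. Here is a sketch using the Leibniz (permutation) expansion of the determinant; the helper names are illustrative, and this expansion is only practical for small n:

```python
from itertools import permutations
from math import prod

def det(m):
    """Determinant via the Leibniz (permutation) expansion; fine for small n."""
    n = len(m)
    def parity(p):
        s, q = 1, list(p)
        for i in range(n):
            while q[i] != i:
                j = q[i]
                q[i], q[j] = q[j], q[i]
                s = -s
        return s
    return sum(parity(p) * prod(m[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def matmul(x, y):
    n = len(x)
    return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]   # invertible: det(A) = 8
S = [[1, 2, 3], [2, 4, 6], [0, 1, 5]]   # singular: row 2 is twice row 1
B = [[1, 0, 2], [3, 1, 0], [0, 2, 1]]

assert det(matmul(A, B)) == det(A) * det(B)       # invertible case (Step 2)
assert det(matmul(S, B)) == 0 == det(S) * det(B)  # singular case (Step 3)
```

The second assertion exercises exactly the Step 3 situation: S is singular, so both sides collapse to zero.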
Multiplication of two determinants
The product rule, read backwards, gives you a technique for multiplying two determinants by constructing a matrix product.
Given two 2 \times 2 determinants

\Delta_1 = \begin{vmatrix} a_1 & b_1 \\ a_2 & b_2 \end{vmatrix}, \qquad \Delta_2 = \begin{vmatrix} p_1 & q_1 \\ p_2 & q_2 \end{vmatrix},

you can write the product as a single determinant:

\Delta_1 \cdot \Delta_2 = \begin{vmatrix} a_1 p_1 + b_1 p_2 & a_1 q_1 + b_1 q_2 \\ a_2 p_1 + b_2 p_2 & a_2 q_1 + b_2 q_2 \end{vmatrix}
The rule is: row of first \times column of second, exactly as in matrix multiplication. Entry (i, j) of the resulting determinant is the dot product of row i of the first matrix with column j of the second matrix.
The 3 \times 3 version
For 3 \times 3 determinants, the same rule applies. If \Delta_1 = \det(A) and \Delta_2 = \det(B) where A and B are 3 \times 3, then:

\Delta_1 \cdot \Delta_2 = \det(A) \cdot \det(B) = \det(AB)
and AB is computed by the standard row-by-column rule. The result is a single 3 \times 3 determinant whose entries are dot products.
Flexibility in the multiplication
There is a subtlety. You can also multiply rows of the first with rows of the second (which corresponds to computing \det(AB^T)), or columns of the first with columns of the second (\det(A^T B)), or columns of the first with rows of the second (\det(A^T B^T)). Since \det(A^T) = \det(A), all these give the same numerical answer.
Why: \det(AB^T) = \det(A) \cdot \det(B^T) = \det(A) \cdot \det(B) = \Delta_1 \cdot \Delta_2. Transposing does not change the determinant, so you are free to transpose either matrix before multiplying.
This flexibility is important in practice. Sometimes the dot products come out cleaner when you multiply rows with rows instead of rows with columns. You should choose whichever combination gives the simplest arithmetic.
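The four pairings can be compared directly in code. A minimal sketch (helper names are illustrative):

```python
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def matmul2(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(m):
    return [list(row) for row in zip(*m)]

A = [[1, 4], [2, 3]]
B = [[5, 1], [2, 2]]

# All four row/column pairings give the same number, since det(M^T) == det(M).
results = {det2(matmul2(A, B)),                          # rows x columns
           det2(matmul2(A, transpose(B))),               # rows x rows
           det2(matmul2(transpose(A), B)),               # columns x columns
           det2(matmul2(transpose(A), transpose(B)))}    # columns x rows
assert results == {det2(A) * det2(B)}
```

The set `results` collapses to a single element, which is the point: the product matrices differ, but every pairing lands on \Delta_1 \cdot \Delta_2.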
Immediate consequences
The product rule has several powerful consequences.
1. Determinant of a power
If A is n \times n, then for every positive integer k:

\det(A^k) = [\det(A)]^k
Apply the product rule k - 1 times: \det(A^k) = \det(A \cdot A^{k-1}) = \det(A) \cdot \det(A^{k-1}) = \cdots = [\det(A)]^k.
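A quick numeric check of this consequence (illustrative helpers; det2 is the 2 \times 2 determinant):

```python
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def matmul2(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [1, 3]]      # det(A) = 6 - 1 = 5
P = A
for _ in range(9):        # after the loop, P == A to the 10th power
    P = matmul2(P, A)

# No need to inspect the (large) entries of A**10:
assert det2(P) == det2(A) ** 10 == 9765625
```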
2. Determinant of an inverse
If A is invertible (\det(A) \neq 0), then:

\det(A^{-1}) = \frac{1}{\det(A)}
Proof: A \cdot A^{-1} = I, so \det(A) \cdot \det(A^{-1}) = \det(I) = 1. Dividing both sides by \det(A) gives the result.
Why: the identity matrix I has determinant 1 (its diagonal entries are all 1, and the off-diagonal entries are all 0). The product rule forces \det(A^{-1}) to be the reciprocal of \det(A).
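Exact rational arithmetic makes this easy to verify. A sketch using Python's fractions module and the 2 \times 2 adjugate formula (helper names are illustrative):

```python
from fractions import Fraction

def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def inv2(m):
    """Inverse of a 2x2 matrix via the adjugate: (1/det) * [[d, -b], [-c, a]]."""
    (a, b), (c, d) = m
    k = Fraction(1, det2(m))
    return [[k * d, -k * b], [-k * c, k * a]]

A = [[4, 7], [2, 6]]                      # det(A) = 24 - 14 = 10
assert det2(inv2(A)) == Fraction(1, 10)   # det of the inverse is 1 / det(A)
```

Using Fraction rather than floats keeps the reciprocal exact, so the equality is an honest `==`, not an approximation.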
3. Similar matrices have equal determinants
Two matrices A and B are similar if B = P^{-1}AP for some invertible matrix P. Then:

\det(B) = \det(P^{-1}AP) = \det(P^{-1}) \cdot \det(A) \cdot \det(P) = \frac{1}{\det(P)} \cdot \det(A) \cdot \det(P) = \det(A)
The determinant is an invariant under similarity — it does not change when you change the basis. This is why the determinant of a linear transformation is well-defined, independent of which basis you use to represent it as a matrix.
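Here is a concrete similarity check, again with exact rationals (illustrative names; P is an arbitrary invertible change-of-basis matrix):

```python
from fractions import Fraction

def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def matmul2(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(m):
    (a, b), (c, d) = m
    k = Fraction(1, det2(m))
    return [[k * d, -k * b], [-k * c, k * a]]

A = [[1, 2], [3, 4]]                  # det(A) = -2
P = [[2, 1], [1, 1]]                  # invertible change-of-basis matrix
B = matmul2(matmul2(inv2(P), A), P)   # B = P^{-1} A P, similar to A

assert det2(B) == det2(A) == -2       # similarity preserves the determinant
```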
4. A matrix is singular if and only if its determinant is zero
One direction follows directly from the product rule: if A had an inverse B, then \det(A) \cdot \det(B) = \det(AB) = \det(I) = 1, which is impossible when \det(A) = 0. So \det(A) = 0 forces A to be singular. The converse (\det(A) \neq 0 implies A is invertible) does not follow from the product rule alone; it comes from the adjugate construction A^{-1} = \frac{1}{\det(A)} \, \text{adj}(A).
Worked examples
Example 1: Multiplying two $2 \times 2$ determinants
Compute \begin{vmatrix} 1 & 3 \\ 2 & 5 \end{vmatrix} \times \begin{vmatrix} 4 & 1 \\ 3 & 2 \end{vmatrix} by the multiplication rule.
Step 1. Compute each determinant separately first (for verification later).

\Delta_1 = 1 \times 5 - 3 \times 2 = -1, \qquad \Delta_2 = 4 \times 2 - 1 \times 3 = 5
Why: knowing the individual determinants gives the expected product (-1) \times 5 = -5. You can check your multiplication result against this.
Step 2. Form the product determinant using row-by-column multiplication.
Entry (1,1): row 1 of first \cdot column 1 of second = 1 \times 4 + 3 \times 3 = 4 + 9 = 13.
Entry (1,2): row 1 of first \cdot column 2 of second = 1 \times 1 + 3 \times 2 = 1 + 6 = 7.
Entry (2,1): row 2 of first \cdot column 1 of second = 2 \times 4 + 5 \times 3 = 8 + 15 = 23.
Entry (2,2): row 2 of first \cdot column 2 of second = 2 \times 1 + 5 \times 2 = 2 + 10 = 12.
Why: each entry is a dot product, exactly as in matrix multiplication. The first determinant supplies the rows; the second supplies the columns.
Step 3. Write the product as a single determinant and evaluate.

\begin{vmatrix} 13 & 7 \\ 23 & 12 \end{vmatrix} = 13 \times 12 - 7 \times 23 = 156 - 161 = -5
Step 4. Verify: \Delta_1 \times \Delta_2 = (-1) \times 5 = -5. The product rule checks out.
Result: The product of the two determinants is -5.
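The four dot products and the final evaluation can be replayed mechanically. A sketch (helper names are illustrative):

```python
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def matmul2(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 3], [2, 5]]
B = [[4, 1], [3, 2]]

AB = matmul2(A, B)
assert AB == [[13, 7], [23, 12]]            # the four dot products from Step 2
assert det2(AB) == -5 == det2(A) * det2(B)  # matches the Step 4 verification
```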
Example 2: Using the product rule to prove $\det(A^2) = [\det(A)]^2$
Let A = \begin{pmatrix} 2 & 1 & 0 \\ 1 & 3 & 1 \\ 0 & 1 & 2 \end{pmatrix}. Verify that \det(A^2) = [\det(A)]^2.
Step 1. Compute \det(A) by expanding along the first row.

\det(A) = 2(3 \times 2 - 1 \times 1) - 1(1 \times 2 - 1 \times 0) + 0 = 2 \times 5 - 1 \times 2 = 8
Why: the zero in position (1, 3) eliminates one cofactor, and the remaining two 2 \times 2 determinants are quick to compute.
Step 2. By the product rule, \det(A^2) = \det(A \cdot A) = \det(A) \cdot \det(A) = 8 \times 8 = 64.
Why: A^2 is just A multiplied by itself. The product rule applies directly — no need to compute A^2 at all.
Step 3. Verify by computing A^2 explicitly.

A^2 = \begin{pmatrix} 2 & 1 & 0 \\ 1 & 3 & 1 \\ 0 & 1 & 2 \end{pmatrix} \begin{pmatrix} 2 & 1 & 0 \\ 1 & 3 & 1 \\ 0 & 1 & 2 \end{pmatrix} = \begin{pmatrix} 5 & 5 & 1 \\ 5 & 11 & 5 \\ 1 & 5 & 5 \end{pmatrix}
Why: each entry is a dot product. For instance, entry (1,1) = 2 \cdot 2 + 1 \cdot 1 + 0 \cdot 0 = 5. Entry (2,2) = 1 \cdot 1 + 3 \cdot 3 + 1 \cdot 1 = 11.
Step 4. Compute \det(A^2) by expanding along the first row:

\det(A^2) = 5(11 \times 5 - 5 \times 5) - 5(5 \times 5 - 5 \times 1) + 1(5 \times 5 - 11 \times 1) = 150 - 100 + 14 = 64
Result: \det(A^2) = 64 = 8^2 = [\det(A)]^2. The product rule saved a page of matrix multiplication — you could have skipped Steps 3 and 4 entirely.
The power of the product rule is visible here: computing A^{10} would require nine matrix multiplications, but \det(A^{10}) = 8^{10} requires knowing only a single number.
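Both computations in this example can be replayed in a few lines. A sketch with illustrative helper names det3 and matmul3:

```python
def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)

def matmul3(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
assert det3(A) == 8                      # Step 1

A2 = matmul3(A, A)
assert A2[0][0] == 5 and A2[1][1] == 11  # the entries checked in Step 3
assert det3(A2) == 64 == det3(A) ** 2    # Steps 2 and 4 agree
```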
Common confusions
- "\det(A + B) = \det(A) + \det(B)." This is the most common mistake. The determinant is multiplicative, not additive. In general, \det(A + B) \neq \det(A) + \det(B). As a quick counterexample: take A = B = I_2 (the 2 \times 2 identity). Then \det(A) = \det(B) = 1, but \det(A + B) = \det(2I) = 4 \neq 2.
- "The product rule requires A and B to be invertible." No. The rule \det(AB) = \det(A) \cdot \det(B) holds for all square matrices, including singular ones. When either A or B is singular, both sides are zero.
- "\det(kA) = k \cdot \det(A)." Wrong. Multiplying every entry of an n \times n matrix by k multiplies the determinant by k^n (one factor of k per row). So \det(kA) = k^n \det(A). This is the scalar multiplication rule, not the product rule — but the two are sometimes confused.
- "Determinant multiplication means multiplying element by element." No. The entries of the product determinant come from dot products — row of the first matrix with column of the second. This is matrix multiplication, not elementwise multiplication.
- "The order does not matter: \det(AB) = \det(BA)." This is actually true, even though AB \neq BA in general. Both \det(AB) and \det(BA) equal \det(A) \cdot \det(B), since multiplication of real numbers is commutative. So while the product matrices AB and BA are usually different, their determinants are always equal.
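Three of these confusions can be dispelled with one short script. A sketch (helper names are illustrative):

```python
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def matmul2(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I2 = [[1, 0], [0, 1]]

# Not additive: det(I + I) = det(2I) = 4, while det(I) + det(I) = 2.
assert det2([[2, 0], [0, 2]]) == 4 != det2(I2) + det2(I2)

# det(kA) = k**n * det(A), one factor of k per row (here n = 2, k = 3).
A = [[1, 2], [3, 4]]
assert det2([[3 * e for e in row] for row in A]) == 3**2 * det2(A)

# AB != BA in general, but their determinants always agree.
B = [[0, 1], [1, 1]]
assert matmul2(A, B) != matmul2(B, A)
assert det2(matmul2(A, B)) == det2(matmul2(B, A)) == det2(A) * det2(B)
```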
Applications
Application 1: Testing invertibility of a product
A matrix M is invertible if and only if \det(M) \neq 0. For a product AB:

\det(AB) = \det(A) \cdot \det(B)
This is zero if and only if at least one of \det(A), \det(B) is zero. So:
AB is invertible if and only if both A and B are invertible.
You don't need to compute AB to know whether it is invertible — just check the individual determinants.
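In code the shortcut looks like this (a sketch; names are illustrative):

```python
def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def matmul2(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [2, 4]]   # det(A) = 0: singular
B = [[3, 1], [1, 2]]   # det(B) = 5: invertible

# The factor determinants already decide the question: 0 * 5 = 0,
# so AB must be singular. Forming AB only confirms it.
assert det2(A) * det2(B) == 0
assert det2(matmul2(A, B)) == 0
```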
Application 2: Proving determinant identities
Many determinant problems in competitive exams ask you to show that a determinant equals some expression. The product rule lets you factor a complicated determinant into simpler pieces.
For example, suppose you need to show that

\begin{vmatrix} 1 + a & 1 & 1 \\ 1 & 1 + b & 1 \\ 1 & 1 & 1 + c \end{vmatrix} = abc + ab + bc + ca
One approach: write the matrix as J + D where J is the 3 \times 3 all-ones matrix and D = \text{diag}(a, b, c). While \det(J + D) \neq \det(J) + \det(D), you can expand the determinant using properties of the diagonal perturbation — or simply expand directly and factor the result.
But the product rule helps in harder cases. If a determinant factors as a product of two structured matrices (say a Vandermonde times a diagonal), the product rule tells you its value is the product of two known determinants.
Application 3: Geometric transformations
When a linear transformation T maps a region of the plane, the area of the image is |\det(T)| times the area of the original region. If you compose two transformations T_1 followed by T_2, the combined transformation is T_2 T_1, and the area scaling factor is:

|\det(T_2 T_1)| = |\det(T_2)| \cdot |\det(T_1)|
Area scaling factors multiply. A rotation (determinant 1) preserves areas. A scaling by factor k in both directions (determinant k^2) scales areas by k^2. Composing a rotation with a scaling gives a combined area factor of 1 \times k^2 = k^2 — exactly right.
Going deeper
If you came here to learn the product rule and how to multiply determinants, you have it — you can stop here. The rest of this section is for readers who want the abstract perspective, the connection to the Cauchy-Binet formula, and the relationship between determinants and volumes.
The determinant as a group homomorphism
The set of all invertible n \times n matrices forms a group under multiplication, called the general linear group GL(n). The non-zero real numbers form a group under multiplication, called \mathbb{R}^*.
The product rule says precisely that the map \det : GL(n) \to \mathbb{R}^* is a group homomorphism — it preserves the group operation. This is one of the most important examples of a homomorphism in abstract algebra.
The kernel of this homomorphism (the set of matrices with determinant 1) is the special linear group SL(n). These are the linear transformations that preserve volume, and they form a normal subgroup of GL(n).
The Cauchy-Binet formula
What if A is m \times n and B is n \times m with m \leq n? Then AB is m \times m, and \det(AB) makes sense even though A and B are not square. The Cauchy-Binet formula generalises the product rule:

\det(AB) = \sum_{S} \det(A_S) \cdot \det(B_S)
where the sum runs over all \binom{n}{m} subsets S of m columns of A (equivalently, m rows of B), and A_S, B_S are the corresponding m \times m submatrices.
When m = n, there is only one subset S (all columns), and the formula reduces to \det(AB) = \det(A) \cdot \det(B).
The Cauchy-Binet formula appears in differential geometry (computing the area of a parametric surface) and in combinatorics (counting spanning trees of a graph via the matrix tree theorem).
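A direct implementation of the formula, checked against \det(AB) for a small rectangular example (helper names are illustrative; the determinant uses the Leibniz expansion, fine for small sizes):

```python
from itertools import combinations, permutations
from math import prod

def det(m):
    """Determinant via the Leibniz (permutation) expansion."""
    n = len(m)
    def parity(p):
        s, q = 1, list(p)
        for i in range(n):
            while q[i] != i:
                j = q[i]
                q[i], q[j] = q[j], q[i]
                s = -s
        return s
    return sum(parity(p) * prod(m[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def cauchy_binet(A, B):
    """Sum of det(A_S) * det(B_S) over all m-subsets S of the n inner indices."""
    m, n = len(A), len(A[0])
    total = 0
    for S in combinations(range(n), m):
        A_S = [[A[i][j] for j in S] for i in range(m)]   # m chosen columns of A
        B_S = [[B[j][k] for k in range(m)] for j in S]   # matching m rows of B
        total += det(A_S) * det(B_S)
    return total

A = [[1, 2, 0], [3, 1, 4]]        # 2 x 3
B = [[2, 1], [0, 3], [1, 1]]      # 3 x 2
AB = [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(2)]
      for i in range(2)]

assert det(AB) == cauchy_binet(A, B)
```

With m = n the loop visits a single subset (all columns), recovering the plain product rule.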
Determinants and volume
In \mathbb{R}^n, the absolute value of the determinant of an n \times n matrix A equals the n-dimensional volume of the parallelepiped (the higher-dimensional analogue of a parallelogram) spanned by the rows (or columns) of A.
The product rule then has a beautiful geometric interpretation: if you first deform space by B and then by A, the combined volume scaling factor is the product of the individual scaling factors. This is the same reasoning behind the area-of-transformed-region application above, extended to all dimensions.
For n = 2: |\det(A)| is the area of the parallelogram spanned by the rows. For n = 3: |\det(A)| is the volume of the parallelepiped. The sign tells you the orientation — whether the transformation preserves or reverses the "handedness" of the coordinate system.
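For n = 2 the claim can be checked with elementary geometry: map the unit square by a matrix and measure the resulting parallelogram with the shoelace formula. A sketch with an arbitrary matrix:

```python
def shoelace(pts):
    """Polygon area from vertices listed in order (shoelace formula)."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

# Columns of A are the images of the unit vectors e1, e2 under x -> Ax.
col1, col2 = (3, 1), (1, 2)                     # det = 3*2 - 1*1 = 5
square_image = [(0, 0), col1,
                (col1[0] + col2[0], col1[1] + col2[1]), col2]

assert shoelace(square_image) == 5.0            # parallelogram area == |det|
```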
A historical note
The product rule for determinants was proved in full generality by Jacques Binet and Augustin-Louis Cauchy independently in 1812, though special cases were known earlier. The result was a milestone: it showed that the determinant is not just a computational tool but a fundamental algebraic quantity with deep structural properties. In India, the study of determinants entered the curriculum through the influence of 19th-century British textbooks, and they became a standard part of the mathematical training that produced Ramanujan's generation of mathematicians.
Where this leads next
The product rule connects to nearly every part of matrix theory. The most direct continuations:
- Properties of Determinants — the row and column operations that underpin the proof of the product rule.
- Special Determinants — Vandermonde, circulant, and skew-symmetric determinants with closed-form evaluations.
- Determinants in Geometry — areas, collinearity, and concurrency via determinants.
- Inverse of Matrix — the adjoint method, where \det(A^{-1}) = 1/\det(A) is an immediate corollary of the product rule.
- System of Linear Equations (Cramer's Rule) — solving systems of equations using determinants, where the product rule guarantees uniqueness.