In short
Determinants satisfy a small collection of properties — transpose invariance, sign flip on row swap, row-column operations, factoring — that let you dramatically simplify a matrix before computing its determinant. These properties also power the factor theorem for determinants, which identifies when an algebraic expression must divide the determinant.
Compute this determinant directly:

\begin{vmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 5 & 7 & 9 \end{vmatrix}
You could expand along a row. That's nine multiplications, six additions, and three 2 \times 2 sub-determinants. Or you could notice that the third row is the sum of the first two rows: (5, 7, 9) = (1, 2, 3) + (4, 5, 6). A property you are about to learn says: if one row is a linear combination of the others, the determinant is zero. Done. No arithmetic at all.
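The shortcut can be checked by brute force. Here is a minimal Python sketch (the `det3` helper is our own, not from any library) that expands the determinant by cofactors and confirms the answer:

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along row 1."""
    a, b, c = m[0]
    return (a * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - b * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + c * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Third row is the sum of the first two, so the determinant should be zero.
A = [[1, 2, 3], [4, 5, 6], [5, 7, 9]]
print(det3(A))  # 0
```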
That is the point of determinant properties. They are not just theorems to memorise for an exam — they are computational shortcuts that turn a painful calculation into a one-line observation. Master these properties and you will rarely need to expand a 3 \times 3 determinant the hard way.
Property 1: Transpose invariance
Property 1
The determinant of a matrix equals the determinant of its transpose.
This means every property that holds for rows also holds for columns. If you can swap two rows and the sign flips, then swapping two columns also flips the sign. If adding a multiple of one row to another preserves the determinant, then the same holds for columns.
Proof for 2×2. Let A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}. Then A^T = \begin{pmatrix} a & c \\ b & d \end{pmatrix}.
\det(A) = ad - bc.
\det(A^T) = ad - cb = ad - bc.
They are equal.
Proof for 3×3. Write out the cofactor expansion of \det(A) along row 1 and the cofactor expansion of \det(A^T) along column 1 of A^T. Since the rows of A become the columns of A^T, the two expansions produce exactly the same terms in the same order. The argument generalises to any n \times n matrix by induction on n.
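A quick numerical check of the claim, using a hand-rolled `det3` helper and an arbitrary sample matrix (both are ours, chosen for illustration):

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along row 1."""
    a, b, c = m[0]
    return (a * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - b * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + c * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def transpose(m):
    """Swap rows and columns."""
    return [list(col) for col in zip(*m)]

A = [[2, -1, 3], [0, 4, 1], [5, 2, -2]]
print(det3(A), det3(transpose(A)))  # -85 -85
```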
Consequence. From now on, every property stated for rows automatically holds for columns, and vice versa. You will never need to prove the column version separately.
Property 2: Swapping two rows flips the sign
Property 2
If two rows (or columns) of a determinant are interchanged, the determinant changes sign.
Proof for 2×2. Start with \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc. Swap the two rows:

\begin{vmatrix} c & d \\ a & b \end{vmatrix} = cb - da = -(ad - bc)

The sign flipped.
Proof for 3×3. Take A = \begin{pmatrix} R_1 \\ R_2 \\ R_3 \end{pmatrix} and let A' be obtained by swapping R_1 and R_2. Expand \det(A) along row 3. Each cofactor C_{3j} involves a 2 \times 2 sub-determinant formed from the remaining two rows. In A', those remaining rows (for the row-3 expansion) are R_2 and R_1 instead of R_1 and R_2. By the 2 \times 2 case, each such 2 \times 2 determinant changes sign. Since every term in the expansion picks up a factor of -1, the whole determinant changes sign.
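A numerical illustration of the sign flip, again with our own `det3` helper and a sample matrix:

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along row 1."""
    a, b, c = m[0]
    return (a * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - b * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + c * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

A = [[1, 2, 0], [3, 1, 4], [2, 0, 1]]
A_swapped = [A[1], A[0], A[2]]  # interchange rows 1 and 2
print(det3(A), det3(A_swapped))  # 11 -11
```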
Example. Take the determinant from the opening example and swap rows 1 and 2:

\begin{vmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 5 & 7 & 9 \end{vmatrix} = -\begin{vmatrix} 4 & 5 & 6 \\ 1 & 2 & 3 \\ 5 & 7 & 9 \end{vmatrix}
You can verify both sides equal 0 (the third row is the sum of the first two), so the sign flip is consistent: -0 = 0.
Property 3: Identical rows means determinant zero
Property 3
If two rows (or columns) of a determinant are identical, the determinant is zero.
Proof. Suppose rows i and j are identical. Swap them — by Property 2, the determinant changes sign. But swapping two identical rows leaves the matrix unchanged, so the determinant also stays the same. The only number equal to its own negative is zero.
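The property is easy to confirm numerically; in this sketch the matrix is an arbitrary choice with rows 1 and 2 identical, and `det3` is our own cofactor-expansion helper:

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along row 1."""
    a, b, c = m[0]
    return (a * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - b * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + c * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Rows 1 and 2 are identical, so the determinant must vanish.
A = [[7, -2, 5], [7, -2, 5], [1, 3, 4]]
print(det3(A))  # 0
```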
Example.

\begin{vmatrix} 1 & 2 & 3 \\ 1 & 2 & 3 \\ 4 & 5 & 6 \end{vmatrix} = 0

No computation needed — rows 1 and 2 are the same.
Property 4: Scalar multiple of a row
Property 4
If every element of one row (or column) is multiplied by a scalar k, the determinant is multiplied by k.
Proof. Expand along the row that was multiplied. Each term in the expansion has exactly one factor from that row — and each such factor now carries an extra k. So the entire sum picks up a factor of k.
Consequence for the full matrix. If you multiply the entire n \times n matrix by k (every entry, not just one row), each of the n rows picks up a factor of k, so

\det(kA) = k^n \det(A)
This catches many students. Multiplying a 3 \times 3 matrix by 2 multiplies its determinant by 2^3 = 8, not by 2.
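A concrete check of the k^n rule, using our own `det3` helper and a sample matrix with k = 2:

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along row 1."""
    a, b, c = m[0]
    return (a * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - b * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + c * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

A = [[1, 2, 0], [3, 1, 4], [2, 0, 1]]
kA = [[2 * x for x in row] for row in A]  # multiply every entry by 2
print(det3(A), det3(kA))  # 11 88  (factor of 2**3 = 8, not 2)
```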
Example. Factor out common factors to simplify before expanding. For instance,

\begin{vmatrix} 3 & 6 & 9 \\ 2 & 5 & 1 \\ 4 & 0 & 7 \end{vmatrix} = 3 \begin{vmatrix} 1 & 2 & 3 \\ 2 & 5 & 1 \\ 4 & 0 & 7 \end{vmatrix}

The factor of 3 comes out of row 1 (each element of row 1 was divisible by 3). The remaining determinant has smaller numbers and is easier to compute.
Property 5: Row addition
Property 5
If a multiple of one row is added to another row, the determinant does not change.
Proof. A determinant is linear in each row separately. Apply R_1 \to R_1 + kR_2 and split the first row:

\begin{vmatrix} a_1 + kb_1 & a_2 + kb_2 & a_3 + kb_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} = \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} + k \begin{vmatrix} b_1 & b_2 & b_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix}
The second determinant has two identical rows (both are (b_1, b_2, b_3)), so by Property 3 it is zero. The result follows.
This is the single most useful property for computation. It lets you create zeros in a row or column by subtracting multiples of other rows — exactly what you do in Gaussian elimination — without changing the determinant.
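Here is a small numerical check that R_1 \to R_1 + kR_2 leaves the determinant unchanged (the `det3` helper, sample matrix, and k = 7 are all our own choices):

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along row 1."""
    a, b, c = m[0]
    return (a * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - b * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + c * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

A = [[2, 1, 3], [1, 4, 0], [5, 2, 6]]
B = [row[:] for row in A]
B[0] = [a + 7 * b for a, b in zip(A[0], A[1])]  # R1 -> R1 + 7*R2
print(det3(A), det3(B))  # -12 -12
```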
Property 6: Sum decomposition of rows
Property 6
If each element of a row is a sum of two terms, the determinant can be split into two determinants:

\begin{vmatrix} a_1 + p_1 & a_2 + p_2 & a_3 + p_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} = \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} + \begin{vmatrix} p_1 & p_2 & p_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix}
Proof. Expand along the first row. Each term in the expansion has a factor from the first row. The factor a_j + p_j splits into a_j and p_j, and the entire sum of products splits into two sums — one with a_j's and one with p_j's. Each sum is a cofactor expansion of the corresponding determinant.
This property is the multi-row linearity of determinants. Together with Property 4 (scalar pull-out), it says the determinant is a linear function of each row separately, while the other rows are held fixed.
Property 7: Proportional rows
Property 7
If one row is a scalar multiple of another row, the determinant is zero.
Proof. Suppose row i equals k times row j. By Property 4, factor out k from row i. Now rows i and j are identical. By Property 3, the determinant of the remaining matrix is zero. So the original determinant is k \times 0 = 0.
Example.

\begin{vmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 7 & 8 & 9 \end{vmatrix} = 0

Row 2 is 2 times row 1. The determinant is zero without any computation.
This generalises: if any row is a linear combination of the other rows, the determinant is zero. This is what happened in the opening example — the third row was the sum of the first two.
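The generalised statement can be checked numerically: build a third row as an arbitrary linear combination of the first two and the determinant vanishes. The `det3` helper and the particular combination R_3 = 3R_1 - 2R_2 are our own choices:

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along row 1."""
    a, b, c = m[0]
    return (a * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - b * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + c * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

R1 = [1, 4, -2]
R2 = [3, 0, 5]
R3 = [3 * a - 2 * b for a, b in zip(R1, R2)]  # R3 = 3*R1 - 2*R2
print(det3([R1, R2, R3]))  # 0
```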
Simplification using properties
Here is a full demonstration of how properties turn a messy determinant into a clean one.
Compute:

D = \begin{vmatrix} 1 & 1 & 1 \\ a & b & c \\ a^2 & b^2 & c^2 \end{vmatrix}
This is a famous determinant — the Vandermonde determinant for three variables. Instead of expanding directly (which would give a mess of six terms), use row operations.
Step 1. Apply C_2 \to C_2 - C_1 and C_3 \to C_3 - C_1 (subtract column 1 from columns 2 and 3). By Property 5, the determinant doesn't change:

D = \begin{vmatrix} 1 & 0 & 0 \\ a & b - a & c - a \\ a^2 & b^2 - a^2 & c^2 - a^2 \end{vmatrix}
Step 2. Factor the differences of squares in row 3: b^2 - a^2 = (b-a)(b+a) and c^2 - a^2 = (c-a)(c+a):

D = \begin{vmatrix} 1 & 0 & 0 \\ a & b - a & c - a \\ a^2 & (b-a)(b+a) & (c-a)(c+a) \end{vmatrix}
Step 3. Factor (b - a) from column 2 and (c - a) from column 3, using Property 4:

D = (b-a)(c-a) \begin{vmatrix} 1 & 0 & 0 \\ a & 1 & 1 \\ a^2 & b+a & c+a \end{vmatrix}
Step 4. Expand along row 1. Only the (1,1) entry is nonzero:

D = (b-a)(c-a) \begin{vmatrix} 1 & 1 \\ b+a & c+a \end{vmatrix} = (b-a)(c-a)\big((c+a) - (b+a)\big) = (b-a)(c-a)(c-b)

To write this in the standard cyclic form, note that (b-a) = -(a-b) and (c-b) = -(b-c). The two sign flips cancel, so

D = (a-b)(b-c)(c-a)
The Vandermonde determinant factors into a product of pairwise differences: (a - b)(b - c)(c - a). This beautiful factorisation is no accident — it is a consequence of the factor theorem for determinants, which is the next topic.
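Before moving on, the factorisation can be sanity-checked numerically for a sample choice of a, b, c (the `det3` helper and the values 2, 5, -3 are our own):

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along row 1."""
    a, b, c = m[0]
    return (a * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - b * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + c * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def vandermonde(a, b, c):
    """The 3x3 Vandermonde determinant with columns (1, x, x^2)."""
    return det3([[1, 1, 1], [a, b, c], [a * a, b * b, c * c]])

a, b, c = 2, 5, -3
print(vandermonde(a, b, c), (a - b) * (b - c) * (c - a))  # 120 120
```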
The factor theorem for determinants
The ordinary factor theorem from polynomials says: if a polynomial p(x) satisfies p(r) = 0, then (x - r) is a factor of p(x).
There is an analogue for determinants.
Factor theorem for determinants
If the elements of a determinant are polynomials in a variable x, and the determinant vanishes when x = a, then (x - a) is a factor of the determinant.
Why it works. When you expand a determinant whose entries are polynomials in x, the result is itself a polynomial in x. If that polynomial is zero at x = a, the ordinary factor theorem tells you (x - a) divides it.
Application to the Vandermonde determinant. Consider

D = \begin{vmatrix} 1 & 1 & 1 \\ a & b & c \\ a^2 & b^2 & c^2 \end{vmatrix}
Set a = b. Then columns 1 and 2 become identical, so D = 0 by Property 3. By the factor theorem, (a - b) divides D.
Set b = c. Then columns 2 and 3 become identical, so D = 0. Therefore (b - c) divides D.
Set c = a. Then columns 3 and 1 become identical, so D = 0. Therefore (c - a) divides D.
So (a - b)(b - c)(c - a) divides D. Since D is a polynomial of degree 0 + 1 + 2 = 3 in the variables, and (a - b)(b - c)(c - a) also has degree 3, D must be a constant multiple of (a - b)(b - c)(c - a). Compare the coefficient of a^2 c (or any convenient term) on both sides to find that the constant is 1.
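The three vanishing conditions are easy to confirm numerically: whenever two of the variables agree, two columns coincide and D is zero. The helper and the sample triples below are our own choices:

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along row 1."""
    a, b, c = m[0]
    return (a * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - b * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + c * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def D(a, b, c):
    """The 3x3 Vandermonde determinant as a function of a, b, c."""
    return det3([[1, 1, 1], [a, b, c], [a * a, b * b, c * c]])

# a = b, then b = c, then c = a: each makes two columns identical.
print(D(4, 4, 9), D(1, 6, 6), D(7, 2, 7))  # 0 0 0
```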
This is the same result as the row-operations derivation, obtained by a completely different method.
Worked examples
Example 1: Simplifying a structured determinant
Evaluate \begin{vmatrix} 1 & a & a^2 \\ 1 & b & b^2 \\ 1 & c & c^2 \end{vmatrix} where a = 2, b = 3, c = 5.
Step 1. Recognise the structure. This is the Vandermonde determinant. By the result derived above, its value is (a - b)(b - c)(c - a).
Why: rather than expanding a 3 \times 3 determinant with nine entries, use the factorisation already proved. Pattern recognition saves work.
Step 2. Substitute a = 2, b = 3, c = 5:

(a - b)(b - c)(c - a) = (2 - 3)(3 - 5)(5 - 2) = (-1)(-2)(3)
Why: each factor is a pairwise difference. Three factors for three pairs.
Step 3. Multiply: (-1)(-2)(3) = 6.
Why: the product of two negatives is positive, then times a positive stays positive.
Step 4. Verify by direct expansion. With a = 2, b = 3, c = 5 the determinant is \begin{vmatrix} 1 & 2 & 4 \\ 1 & 3 & 9 \\ 1 & 5 & 25 \end{vmatrix}. Expand along the first column (all entries are 1):

1(3 \cdot 25 - 9 \cdot 5) - 1(2 \cdot 25 - 4 \cdot 5) + 1(2 \cdot 9 - 4 \cdot 3) = 30 - 30 + 6 = 6
Why: the direct expansion confirms the factorisation. Both methods give 6.
Result: The determinant equals 6.
The determinant being nonzero tells you that the three points (a, a^2), (b, b^2), (c, c^2) form a proper triangle — they are not collinear. Since all three lie on the parabola y = x^2, and three distinct points on a parabola are never collinear (a line meets a parabola in at most two points), the determinant must always be nonzero for distinct a, b, c.
Example 2: Using row operations to evaluate a determinant
Evaluate D = \begin{vmatrix} 2 & 3 & 4 \\ 5 & 6 & 7 \\ 8 & 9 & 10 \end{vmatrix}.
Step 1. Look for patterns. The entries in each row form an arithmetic progression with common difference 1: (2, 3, 4), (5, 6, 7), (8, 9, 10). Apply C_2 \to C_2 - C_1 and C_3 \to C_3 - C_1:

D = \begin{vmatrix} 2 & 1 & 2 \\ 5 & 1 & 2 \\ 8 & 1 & 2 \end{vmatrix}
Why: subtracting column 1 from columns 2 and 3 does not change the determinant (Property 5). The goal is to create structure — and look, columns 2 and 3 are now proportional.
Step 2. Observe that column 3 is exactly 2 times column 2: (2, 2, 2) = 2 \times (1, 1, 1).
Why: proportional columns mean the determinant is zero (Property 7). Two columns pointing in the same direction collapse the parallelepiped to zero volume.
Step 3. Apply Property 7: D = 0.
Why: no further computation is needed. The proportional-columns property immediately gives the answer.
Step 4. Verify by a different method. Apply R_2 \to R_2 - R_1 and R_3 \to R_3 - R_1 to the original matrix:

D = \begin{vmatrix} 2 & 3 & 4 \\ 3 & 3 & 3 \\ 6 & 6 & 6 \end{vmatrix}
Row 3 is 2 times row 2. By Property 7, D = 0. Same answer.
Why: the same conclusion via rows instead of columns, confirming the result.
Result: D = 0.
Summary of all properties
Here is the complete reference table.
| # | Property | Effect on det |
|---|---|---|
| 1 | \det(A^T) = \det(A) | Row properties \leftrightarrow column properties |
| 2 | Swap two rows | Sign flips: \det \to -\det |
| 3 | Two identical rows | \det = 0 |
| 4 | Multiply one row by k | \det \to k \cdot \det |
| 5 | Add k \times (\text{row } j) to row i | \det unchanged |
| 6 | Split a row that is a sum | \det splits into a sum of two dets |
| 7 | One row = k \times another row | \det = 0 |
Properties 3 and 7 are consequences of Properties 2 and 4. Properties 5 and 6 express the multilinearity of the determinant. Property 1 doubles the toolkit by converting every row statement to a column statement.
Common confusions
- "Adding rows changes the determinant." Only if you replace a row with a different one. The operation R_i \to R_i + kR_j (adding a multiple of row j to row i) leaves the determinant unchanged. But R_i \to kR_i (scaling a row by k) multiplies the determinant by k. Students often confuse these two operations.
- "Factoring k from a row is the same as dividing the determinant by k." The opposite — factoring k out of one row means the determinant equals k times the remaining determinant. You are extracting a factor, not dividing.
- "\det(kA) = k \cdot \det(A)." This is wrong for n > 1. The correct formula is \det(kA) = k^n \det(A), where n is the size of the matrix. Each of the n rows picks up a factor of k.
- "If the determinant is zero, all entries are zero." The matrix \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix} has all nonzero entries but \det = 0. Zero determinant means the rows are linearly dependent, not that the entries are zero.
- "The factor theorem for determinants is a separate theorem." It is really just the factor theorem for polynomials applied to the polynomial you get after expanding the determinant. The insight is recognising when the substitution makes the determinant vanish — which is usually when it creates two identical rows or columns.
Going deeper
If you came here to learn the properties and how to use them for simplification, you have it — you can stop here. The rest explores the product formula for determinants and the connection to matrix inverses.
The product formula
One of the most important results in the theory:

\det(AB) = \det(A)\det(B)

The determinant of a product is the product of the determinants. This has immediate consequences.
Consequence 1: If A is invertible, then AA^{-1} = I, so \det(A)\det(A^{-1}) = \det(I) = 1. This means \det(A^{-1}) = \frac{1}{\det(A)} — the determinant of the inverse is the reciprocal of the determinant.
Consequence 2: If \det(A) = 0, then for any matrix B, \det(AB) = 0 \cdot \det(B) = 0. A singular matrix "infects" any product it appears in — the product is also singular.
Consequence 3: For orthogonal matrices (A^T A = I), \det(A^T)\det(A) = 1, so (\det A)^2 = 1, giving \det A = \pm 1.
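The product formula and Consequence 1 can both be checked with 2 \times 2 matrices. In this sketch, `det2`, `matmul2`, and the sample matrices are all our own; exact `Fraction` arithmetic keeps the inverse free of rounding error:

```python
from fractions import Fraction

def det2(m):
    """Determinant of a 2x2 matrix."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, -2]]
print(det2(matmul2(A, B)) == det2(A) * det2(B))  # True

# Consequence 1: det(A^{-1}) = 1/det(A), via the adjugate formula.
d = det2(A)  # -2
A_inv = [[Fraction(A[1][1], d), Fraction(-A[0][1], d)],
         [Fraction(-A[1][0], d), Fraction(A[0][0], d)]]
print(det2(A_inv) == Fraction(1, d))  # True
```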
Determinants and area of transformed shapes
If a linear transformation T is represented by a matrix A, and you apply T to a region with area S, the area of the transformed region is |\det(A)| \cdot S. This is why the determinant is sometimes called the "area scaling factor" of the transformation.
For a rotation matrix, |\det| = 1, so areas are preserved. For a stretch matrix like \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}, \det = 6, and every shape's area is multiplied by 6. If \det = 0, the transformation collapses everything to a lower dimension.
Determinants in the Indian exam tradition
In JEE Advanced and other competitive examinations, determinant problems often involve structured matrices — entries that are polynomials, trigonometric functions, or follow a pattern. The strategy is almost always the same: use row/column operations to simplify before expanding, use the factor theorem to identify factors, and use degree arguments to pin down the constant. Direct expansion of a 3 \times 3 determinant with algebraic entries is almost never the intended approach. The properties in this article are not just theory — they are the primary computational tools.
Where this leads next
- Determinants — Introduction — if you need to revisit the definition of determinants, minors, and cofactors.
- Special Matrices — the determinant conditions for orthogonal, nilpotent, and other special matrix types.
- Systems of Linear Equations — Cramer's rule uses determinants to solve systems directly.
- Matrix Operations — the product formula \det(AB) = \det(A)\det(B) and its consequences.