In short

Determinants satisfy a small collection of properties — transpose invariance, sign flip on row swap, invariance under row and column operations, factoring out scalars — that let you dramatically simplify a matrix before computing its determinant. These properties also power the factor theorem for determinants, which identifies when an algebraic expression must divide the determinant.

Compute this determinant directly:

\begin{vmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 5 & 7 & 9 \end{vmatrix}

You could expand along a row. That's nine multiplications, five additions and subtractions, and three 2 \times 2 sub-determinants. Or you could notice that the third row is the sum of the first two rows: (5, 7, 9) = (1, 2, 3) + (4, 5, 6). A property you are about to learn says: if one row is a linear combination of the others, the determinant is zero. Done. No arithmetic at all.

That is the point of determinant properties. They are not just theorems to memorise for an exam — they are computational shortcuts that turn a painful calculation into a one-line observation. Master these properties and you will rarely need to expand a 3 \times 3 determinant the hard way.
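As a quick numerical sanity check of the opening example (NumPy is assumed here; any linear-algebra library would do):

```python
import numpy as np

# Row 3 is the sum of rows 1 and 2, so the rows are linearly
# dependent and the determinant must be zero.
A = np.array([[1, 2, 3],
              [4, 5, 6],
              [5, 7, 9]], dtype=float)

print(np.linalg.det(A))  # zero, up to floating-point noise
```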

Property 1: Transpose invariance

Property 1

The determinant of a matrix equals the determinant of its transpose.

\det(A^T) = \det(A)

This means every property that holds for rows also holds for columns. If you can swap two rows and the sign flips, then swapping two columns also flips the sign. If adding a multiple of one row to another preserves the determinant, then the same holds for columns.

Proof for 2×2. Let A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}. Then A^T = \begin{pmatrix} a & c \\ b & d \end{pmatrix}.

\det(A) = ad - bc.

\det(A^T) = ad - cb = ad - bc.

They are equal.

Proof for 3×3. Write out the cofactor expansion of \det(A) along row 1 and the cofactor expansion of \det(A^T) along column 1 of A^T. Since the rows of A become the columns of A^T, the two expansions produce exactly the same terms in the same order. The argument generalises to any n \times n matrix by induction on n.

Consequence. From now on, every property stated for rows automatically holds for columns, and vice versa. You will never need to prove the column version separately.
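A numerical spot check of transpose invariance on a random integer matrix (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(4, 4)).astype(float)

# Property 1: the determinant is unchanged by transposition.
assert np.isclose(np.linalg.det(A), np.linalg.det(A.T))
print("det(A) == det(A^T) verified")
```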

[Figure: Transpose leaves the determinant unchanged — a matrix A with rows labelled R₁, R₂, R₃ shown beside its transpose Aᵀ, whose columns carry the same labels, with an equals sign indicating the two determinants are equal.]
The rows of $A$ become the columns of $A^T$, but the determinant stays the same. This means every row property has an automatic column counterpart.

Property 2: Swapping two rows flips the sign

Property 2

If two rows (or columns) of a determinant are interchanged, the determinant changes sign.

\det(\text{swap of } A) = -\det(A)

Proof for 2×2. Start with \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc. Swap the two rows:

\begin{vmatrix} c & d \\ a & b \end{vmatrix} = cb - da = -(ad - bc)

The sign flipped.

Proof for 3×3. Take A = \begin{pmatrix} R_1 \\ R_2 \\ R_3 \end{pmatrix} and let A' be obtained by swapping R_1 and R_2. Expand \det(A) along row 3. Each cofactor C_{3j} involves a 2 \times 2 sub-determinant formed from the remaining two rows. In A', those remaining rows (for the row-3 expansion) are R_2 and R_1 instead of R_1 and R_2. By the 2 \times 2 case, each such 2 \times 2 determinant changes sign. Since every term in the expansion picks up a factor of -1, the whole determinant changes sign.

Example.

\begin{vmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{vmatrix}

Swap rows 1 and 2:

\begin{vmatrix} 4 & 5 & 6 \\ 1 & 2 & 3 \\ 7 & 8 & 9 \end{vmatrix} = -\begin{vmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{vmatrix}

You can verify both sides equal 0 (the third row is a linear combination of the first two: (7, 8, 9) = 2(4, 5, 6) - (1, 2, 3)), so the sign flip is consistent: -0 = 0.
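A check of the sign flip on a matrix with nonzero determinant (the matrix here is chosen for illustration, not from the source; NumPy assumed):

```python
import numpy as np

A = np.array([[2, 1, 0],
              [1, 3, 1],
              [0, 1, 2]], dtype=float)

B = A[[1, 0, 2], :]  # swap rows 1 and 2

# Property 2: a row swap flips the sign of the determinant.
assert np.isclose(np.linalg.det(B), -np.linalg.det(A))
print(np.linalg.det(A), np.linalg.det(B))
```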

[Figure: Swapping rows reverses the orientation of the parallelogram — before the swap, u = (3, 0) then v = (1, 2) go counterclockwise with det = +6; after the swap, clockwise with det = −6. Same area, opposite sign.]
Swapping the two row vectors reverses the orientation of the parallelogram — counterclockwise becomes clockwise — while the area stays the same. The determinant flips sign to record this orientation change.

Property 3: Identical rows means determinant zero

Property 3

If two rows (or columns) of a determinant are identical, the determinant is zero.

Proof. Suppose rows i and j are identical. Swap them — by Property 2, the determinant changes sign. But swapping two identical rows leaves the matrix unchanged, so the determinant also stays the same. The only number equal to its own negative is zero.

\det(A) = -\det(A) \implies 2\det(A) = 0 \implies \det(A) = 0

Example.

\begin{vmatrix} 3 & 1 & 2 \\ 3 & 1 & 2 \\ 5 & 4 & 7 \end{vmatrix} = 0

No computation needed — rows 1 and 2 are the same.
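The same example, checked numerically (NumPy assumed):

```python
import numpy as np

# Rows 1 and 2 are identical, so Property 3 forces det = 0.
A = np.array([[3, 1, 2],
              [3, 1, 2],
              [5, 4, 7]], dtype=float)

assert np.isclose(np.linalg.det(A), 0)
print("determinant is zero, as Property 3 predicts")
```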

Property 4: Scalar multiple of a row

Property 4

If every element of one row (or column) is multiplied by a scalar k, the determinant is multiplied by k.

\begin{vmatrix} ka_1 & ka_2 & ka_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} = k \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix}

Proof. Expand along the row that was multiplied. Each term in the expansion has exactly one factor from that row — and each such factor now carries an extra k. So the entire sum picks up a factor of k.

Consequence for the full matrix. If you multiply the entire n \times n matrix by k (every entry, not just one row), each of the n rows picks up a factor of k, so

\det(kA) = k^n \det(A)

This catches many students. Multiplying a 3 \times 3 matrix by 2 multiplies its determinant by 2^3 = 8, not by 2.
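A check that separates the two scalings — one row versus the whole matrix (the matrix here is an arbitrary illustration; NumPy assumed):

```python
import numpy as np

A = np.array([[2, 0, 1],
              [1, 3, 0],
              [0, 1, 4]], dtype=float)
d = np.linalg.det(A)

# Scaling a single row by 2 scales det by 2 (Property 4).
B = A.copy()
B[0] *= 2
assert np.isclose(np.linalg.det(B), 2 * d)

# Scaling the whole 3x3 matrix by 2 scales det by 2^3 = 8, not 2.
assert np.isclose(np.linalg.det(2 * A), 8 * d)
```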

Example. Factor out common factors to simplify before expanding:

\begin{vmatrix} 6 & 3 & 9 \\ 2 & 5 & 1 \\ 4 & 7 & 3 \end{vmatrix} = 3 \begin{vmatrix} 2 & 1 & 3 \\ 2 & 5 & 1 \\ 4 & 7 & 3 \end{vmatrix}

The factor of 3 comes out of row 1 (each element of row 1 was divisible by 3). The remaining determinant has smaller numbers and is easier to compute.

Property 5: Row addition

Property 5

If a multiple of one row is added to another row, the determinant does not change.

\begin{vmatrix} a_1 + kb_1 & a_2 + kb_2 & a_3 + kb_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} = \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix}

Proof. A determinant is linear in each row separately. Split the first row:

\begin{vmatrix} a_1 + kb_1 & a_2 + kb_2 & a_3 + kb_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} = \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} + k\begin{vmatrix} b_1 & b_2 & b_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix}

The second determinant has two identical rows (both are (b_1, b_2, b_3)), so by Property 3 it is zero. The result follows.

[Figure: Row addition preserves the parallelogram area — one parallelogram spanned by u and v, another spanned by u + kv and v. The operation R₁ → R₁ + kR₂ is a shear: same area, same determinant.]
Adding a multiple of one row-vector to another is a shear — it tilts the parallelogram but does not change its area. The base and height relative to the unchanged vector stay the same, so the determinant is preserved.

This is the single most useful property for computation. It lets you create zeros in a row or column by subtracting multiples of other rows — exactly what you do in Gaussian elimination — without changing the determinant.
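To make the Gaussian-elimination connection concrete, here is a minimal sketch (the helper name and pivoting strategy are illustrative assumptions, not from the source) of computing a determinant using only these properties — row swaps flip the sign, row additions change nothing, and a triangular matrix's determinant is the product of its diagonal:

```python
import numpy as np

def det_by_elimination(M):
    """Determinant via row operations: swaps flip the sign (Property 2),
    adding a multiple of one row to another changes nothing (Property 5)."""
    A = np.array(M, dtype=float)
    n = len(A)
    sign = 1.0
    for i in range(n):
        # Pick the largest-magnitude pivot in column i, swapping if needed.
        p = i + np.argmax(np.abs(A[i:, i]))
        if np.isclose(A[p, i], 0):
            return 0.0  # column is (numerically) zero below: singular
        if p != i:
            A[[i, p]] = A[[p, i]]
            sign = -sign
        # Clear entries below the pivot (Property 5: det unchanged).
        for j in range(i + 1, n):
            A[j] -= (A[j, i] / A[i, i]) * A[i]
    # Triangular matrix: det is the product of the diagonal entries.
    return sign * np.prod(np.diag(A))

print(det_by_elimination([[1, 2, 3], [4, 5, 6], [5, 7, 9]]))  # 0.0
```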

Property 6: Sum decomposition of rows

Property 6

If each element of a row is a sum of two terms, the determinant can be split into two determinants.

\begin{vmatrix} a_1 + p_1 & a_2 + p_2 & a_3 + p_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} = \begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} + \begin{vmatrix} p_1 & p_2 & p_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix}

Proof. Expand along the first row. Each term in the expansion has a factor from the first row. The factor a_j + p_j splits into a_j and p_j, and the entire sum of products splits into two sums — one with a_j's and one with p_j's. Each sum is a cofactor expansion of the corresponding determinant.

This property is the additivity half of the determinant's multilinearity. Together with Property 4 (scalar pull-out), it says the determinant is a linear function of each row separately, while the other rows are held fixed.
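A numerical check of the split, with arbitrary illustrative rows (NumPy assumed):

```python
import numpy as np

row_a = np.array([1., 4., 2.])
row_p = np.array([3., 0., 5.])
rest = np.array([[2., 1., 0.],
                 [1., 0., 3.]])

# Property 6: det with summed first row = sum of the two dets.
lhs = np.linalg.det(np.vstack([row_a + row_p, rest]))
rhs = (np.linalg.det(np.vstack([row_a, rest]))
       + np.linalg.det(np.vstack([row_p, rest])))
assert np.isclose(lhs, rhs)
```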

Property 7: Proportional rows

Property 7

If one row is a scalar multiple of another row, the determinant is zero.

Proof. Suppose row i equals k times row j. By Property 4, factor out k from row i. Now rows i and j are identical. By Property 3, the determinant of the remaining matrix is zero. So the original determinant is k \times 0 = 0.

Example.

\begin{vmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 5 & 1 & 7 \end{vmatrix} = 0

Row 2 is 2 times row 1. The determinant is zero without any computation.

This generalises: if any row is a linear combination of the other rows, the determinant is zero. This is what happened in the opening example — the third row was the sum of the first two.

Simplification using properties

Here is a full demonstration of how properties turn a messy determinant into a clean one.

Compute:

D = \begin{vmatrix} 1 & 1 & 1 \\ a & b & c \\ a^2 & b^2 & c^2 \end{vmatrix}

This is a famous determinant — the Vandermonde determinant for three variables. Instead of expanding directly (which would give a mess of six terms), use row operations.

Step 1. Apply C_2 \to C_2 - C_1 and C_3 \to C_3 - C_1 (subtract column 1 from columns 2 and 3). By Property 5 (in its column form, via Property 1), the determinant doesn't change.

D = \begin{vmatrix} 1 & 0 & 0 \\ a & b - a & c - a \\ a^2 & b^2 - a^2 & c^2 - a^2 \end{vmatrix}

Step 2. Factor the differences of squares in row 3: b^2 - a^2 = (b-a)(b+a) and c^2 - a^2 = (c-a)(c+a).

D = \begin{vmatrix} 1 & 0 & 0 \\ a & b - a & c - a \\ a^2 & (b-a)(b+a) & (c-a)(c+a) \end{vmatrix}

Step 3. Factor (b - a) from column 2 and (c - a) from column 3, using Property 4.

D = (b - a)(c - a) \begin{vmatrix} 1 & 0 & 0 \\ a & 1 & 1 \\ a^2 & b + a & c + a \end{vmatrix}

Step 4. Expand along row 1. Only the (1,1) entry is nonzero:

D = (b - a)(c - a) \cdot 1 \cdot \begin{vmatrix} 1 & 1 \\ b + a & c + a \end{vmatrix}
= (b - a)(c - a) \cdot [(c + a) - (b + a)]
= (b - a)(c - a)(c - b)

To write this in the standard cyclic form, note that (b - a) = -(a - b) and (c - b) = -(b - c), while (c - a) is already one of the cyclic factors. The two sign flips contribute a factor of (-1)^2 = +1, so:

\begin{vmatrix} 1 & 1 & 1 \\ a & b & c \\ a^2 & b^2 & c^2 \end{vmatrix} = (a - b)(b - c)(c - a)

The Vandermonde determinant factors into a product of pairwise differences: (a - b)(b - c)(c - a). This beautiful factorisation is no accident — it is a consequence of the factor theorem for determinants, which is the next topic.

The factor theorem for determinants

The ordinary factor theorem from polynomials says: if a polynomial p(x) satisfies p(r) = 0, then (x - r) is a factor of p(x).

There is an analogue for determinants.

Factor theorem for determinants

If the elements of a determinant are polynomials in a variable x, and the determinant vanishes when x = a, then (x - a) is a factor of the determinant.

Why it works. When you expand a determinant whose entries are polynomials in x, the result is itself a polynomial in x. If that polynomial is zero at x = a, the ordinary factor theorem tells you (x - a) divides it.

Application to the Vandermonde determinant. Consider

D(a, b, c) = \begin{vmatrix} 1 & 1 & 1 \\ a & b & c \\ a^2 & b^2 & c^2 \end{vmatrix}

Set a = b. Then columns 1 and 2 become identical, so D = 0 by Property 3. By the factor theorem, (a - b) divides D.

Set b = c. Then columns 2 and 3 become identical, so D = 0. Therefore (b - c) divides D.

Set c = a. Then columns 3 and 1 become identical, so D = 0. Therefore (c - a) divides D.

So (a - b)(b - c)(c - a) divides D. Since D is a polynomial of degree 0 + 1 + 2 = 3 in the variables, and (a - b)(b - c)(c - a) also has degree 3, D must be a constant multiple of (a - b)(b - c)(c - a). Compare the coefficient of a^2 c (or any convenient term) on both sides to find that the constant is 1.

D = (a - b)(b - c)(c - a)

This is the same result as the row-operations derivation, obtained by a completely different method.
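Both derivations can be confirmed symbolically (SymPy is assumed here for the symbolic determinant):

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
D = sp.Matrix([[1, 1, 1],
               [a, b, c],
               [a**2, b**2, c**2]]).det()

# The expanded determinant equals the cyclic product of differences.
assert sp.simplify(D - (a - b)*(b - c)*(c - a)) == 0
print(sp.factor(D))
```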

[Figure: Factor theorem strategy for determinants — setting a = b makes columns 1 and 2 identical, so (a−b) divides D; setting b = c gives the factor (b−c); setting c = a gives (c−a). Hence D = k(a−b)(b−c)(c−a), with k fixed by a degree argument.]
The factor theorem strategy: each substitution that makes two columns identical produces a linear factor of the determinant. Three substitutions yield three factors. A degree argument then pins down the remaining constant $k$.

Worked examples

Example 1: Simplifying a structured determinant

Evaluate \begin{vmatrix} 1 & a & a^2 \\ 1 & b & b^2 \\ 1 & c & c^2 \end{vmatrix} where a = 2, b = 3, c = 5.

Step 1. Recognise the structure. This is the transpose of the Vandermonde determinant derived above, so by Property 1 it has the same value: (a - b)(b - c)(c - a).

Why: rather than expanding a 3 \times 3 determinant with nine entries, use the factorisation already proved. Pattern recognition saves work.

Step 2. Substitute a = 2, b = 3, c = 5.

(a - b) = 2 - 3 = -1
(b - c) = 3 - 5 = -2
(c - a) = 5 - 2 = 3

Why: each factor is a pairwise difference. Three factors for three pairs.

Step 3. Multiply.

(a - b)(b - c)(c - a) = (-1)(-2)(3) = 6

Why: the product of two negatives is positive, then times a positive stays positive.

Step 4. Verify by direct expansion. Expand along the first column (all entries are 1):

D = 1 \cdot \begin{vmatrix} 3 & 9 \\ 5 & 25 \end{vmatrix} - 1 \cdot \begin{vmatrix} 2 & 4 \\ 5 & 25 \end{vmatrix} + 1 \cdot \begin{vmatrix} 2 & 4 \\ 3 & 9 \end{vmatrix}
= (75 - 45) - (50 - 20) + (18 - 12) = 30 - 30 + 6 = 6 \quad \checkmark

Why: the direct expansion confirms the factorisation. Both methods give 6.

Result: The determinant equals 6.

The three points $(2, 4)$, $(3, 9)$, and $(5, 25)$ lie on the parabola $y = x^2$ but form a nondegenerate triangle — the determinant is $6 \neq 0$, confirming that the points are not collinear. If they were collinear, the determinant would be zero.

The determinant being nonzero tells you that the three points (a, a^2), (b, b^2), (c, c^2) form a proper triangle — they are not collinear. Since all three lie on the parabola y = x^2, and three distinct points on a parabola are never collinear (a parabola has no straight portions), the determinant must always be nonzero for distinct a, b, c.

Example 2: Using row operations to evaluate a determinant

Evaluate D = \begin{vmatrix} 2 & 3 & 4 \\ 5 & 6 & 7 \\ 8 & 9 & 10 \end{vmatrix}.

Step 1. Look for patterns. The entries in each row form an arithmetic progression with common difference 1: (2, 3, 4), (5, 6, 7), (8, 9, 10). Apply C_2 \to C_2 - C_1 and C_3 \to C_3 - C_1.

D = \begin{vmatrix} 2 & 1 & 2 \\ 5 & 1 & 2 \\ 8 & 1 & 2 \end{vmatrix}

Why: subtracting column 1 from columns 2 and 3 does not change the determinant (Property 5). The goal is to create structure — and look, columns 2 and 3 are now proportional.

Step 2. Observe that column 3 is exactly 2 times column 2: (2, 2, 2) = 2 \times (1, 1, 1).

Why: proportional columns mean the determinant is zero (Property 7). Two columns pointing in the same direction collapse the parallelepiped to zero volume.

Step 3. Apply Property 7.

D = 0

Why: no further computation is needed. The proportional-columns property immediately gives the answer.

Step 4. Verify by a different method. Apply R_2 \to R_2 - R_1 and R_3 \to R_3 - R_1 to the original matrix:

D = \begin{vmatrix} 2 & 3 & 4 \\ 3 & 3 & 3 \\ 6 & 6 & 6 \end{vmatrix}

Row 3 is 2 times row 2. By Property 7, D = 0. Same answer.

Why: the same conclusion via rows instead of columns, confirming the result.

Result: D = 0.

[Figure: The original matrix shown beside the matrix after C₂ → C₂ − C₁ and C₃ → C₃ − C₁. In the result, C₃ = 2 × C₂, so the columns are linearly dependent and det = 0.]
After subtracting column 1 from columns 2 and 3, the resulting column 3 is exactly twice column 2. Proportional columns force the determinant to zero — the three row vectors are linearly dependent.

Summary of all properties

Here is the complete reference table.

| # | Property | Effect on det |
|---|----------|---------------|
| 1 | \det(A^T) = \det(A) | Row properties \leftrightarrow column properties |
| 2 | Swap two rows | Sign flips: \det \to -\det |
| 3 | Two identical rows | \det = 0 |
| 4 | Multiply one row by k | \det \to k \cdot \det |
| 5 | Add k \times (\text{row } j) to row i | \det unchanged |
| 6 | Split a row that is a sum | \det splits into a sum of two dets |
| 7 | One row = k \times another row | \det = 0 |

Properties 3 and 7 are consequences of Properties 2 and 4. Properties 5 and 6 express the multilinearity of the determinant. Property 1 doubles the toolkit by converting every row statement to a column statement.

Going deeper

If you came here to learn the properties and how to use them for simplification, you have it — you can stop here. The rest explores the product formula for determinants and the connection to matrix inverses.

The product formula

One of the most important results in the theory:

\det(AB) = \det(A) \cdot \det(B)

The determinant of a product is the product of the determinants. This has immediate consequences.

Consequence 1: If A is invertible, then AA^{-1} = I, so \det(A)\det(A^{-1}) = \det(I) = 1. This means \det(A^{-1}) = \frac{1}{\det(A)} — the determinant of the inverse is the reciprocal of the determinant.

Consequence 2: If \det(A) = 0, then for any matrix B, \det(AB) = 0 \cdot \det(B) = 0. A singular matrix "infects" any product it appears in — the product is also singular.

Consequence 3: For orthogonal matrices (A^T A = I), \det(A^T)\det(A) = 1, so (\det A)^2 = 1, giving \det A = \pm 1.
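The product formula and its first consequence can be spot-checked numerically (the random matrices are illustrative; NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(-3, 4, size=(3, 3)).astype(float)
B = rng.integers(-3, 4, size=(3, 3)).astype(float)

# det(AB) = det(A) det(B)
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))

# Consequence 1: det(A^{-1}) = 1/det(A), when A is invertible.
if not np.isclose(np.linalg.det(A), 0):
    assert np.isclose(np.linalg.det(np.linalg.inv(A)),
                      1 / np.linalg.det(A))
```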

Determinants and area of transformed shapes

If a linear transformation T is represented by a matrix A, and you apply T to a region with area S, the area of the transformed region is |\det(A)| \cdot S. This is why the determinant is sometimes called the "area scaling factor" of the transformation.

For a rotation matrix, |\det| = 1, so areas are preserved. For a stretch matrix like \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}, \det = 6, and every shape's area is multiplied by 6. If \det = 0, the transformation collapses everything to a lower dimension.
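A check of the area-scaling interpretation for the two matrices just mentioned (NumPy assumed):

```python
import numpy as np

# The stretch matrix scales every area by |det| = 6.
A = np.array([[2., 0.],
              [0., 3.]])
assert np.isclose(abs(np.linalg.det(A)), 6.0)

# A rotation preserves area: |det| = 1 for any angle.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.isclose(abs(np.linalg.det(R)), 1.0)
```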

Determinants in the Indian exam tradition

In JEE Advanced and other competitive examinations, determinant problems often involve structured matrices — entries that are polynomials, trigonometric functions, or follow a pattern. The strategy is almost always the same: use row/column operations to simplify before expanding, use the factor theorem to identify factors, and use degree arguments to pin down the constant. Direct expansion of a 3 \times 3 determinant with algebraic entries is almost never the intended approach. The properties in this article are not just theory — they are the primary computational tools.

Where this leads next