In short

Certain determinants — Vandermonde, circulant, skew-symmetric, and block — carry so much internal structure that you can evaluate them without expanding row by row. Each has a closed-form formula, and recognising the pattern is half the work.

Suppose someone hands you this 3 \times 3 determinant and asks you to evaluate it:

\begin{vmatrix} 1 & 1 & 1 \\ a & b & c \\ a^2 & b^2 & c^2 \end{vmatrix}

You could expand along the first row, collect terms, factor — a page of algebra. But if you stare at the matrix for a moment, you notice something: the first row is all ones, the second row is just a, b, c, and the third row is their squares. Each column is a geometric progression whose common ratio is that column's variable. There is a pattern here, and patterns in determinants usually mean there is a shortcut.

The answer turns out to be (b - a)(c - a)(c - b). No expansion needed — just the pairwise differences of a, b, c, multiplied together. This determinant has a name: it is a Vandermonde determinant, and it is the first of several standard forms where structure beats brute force.
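If you want to check the formula numerically before trusting it, a few lines of Python will do (det3 and vandermonde_det are hypothetical helper names written for this check, not standard functions):

```python
# Check det = (b - a)(c - a)(c - b) for the 3x3 Vandermonde determinant.

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along row 0."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def vandermonde_det(a, b, c):
    return det3([[1, 1, 1], [a, b, c], [a*a, b*b, c*c]])

a, b, c = 2, 3, 5
assert vandermonde_det(a, b, c) == (b - a) * (c - a) * (c - b)  # 1 * 3 * 2 = 6
```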

The Vandermonde determinant

The pattern in the opening example generalises. A Vandermonde matrix is one where each row is a sequence of powers of a single variable: 1, x, x^2, x^3, \ldots

For three variables, it looks like this:

V = \begin{vmatrix} 1 & 1 & 1 \\ a & b & c \\ a^2 & b^2 & c^2 \end{vmatrix}

For four, the same idea extends to a 4 \times 4 matrix with cubes in the last row. The general n \times n Vandermonde matrix has entry (i, j) = x_j^{i-1}.

Vandermonde determinant

For variables x_1, x_2, \ldots, x_n, the Vandermonde determinant is

\begin{vmatrix} 1 & 1 & \cdots & 1 \\ x_1 & x_2 & \cdots & x_n \\ x_1^2 & x_2^2 & \cdots & x_n^2 \\ \vdots & \vdots & \ddots & \vdots \\ x_1^{n-1} & x_2^{n-1} & \cdots & x_n^{n-1} \end{vmatrix} = \prod_{1 \le i < j \le n} (x_j - x_i)

The product runs over all pairs (i, j) with i < j, so there are \binom{n}{2} factors in total.
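A quick sanity check of the general formula, using a small recursive determinant routine (a sketch for small n only — cofactor expansion is O(n!), so this is for verification, not computation):

```python
from math import prod

def det(m):
    """Determinant by cofactor expansion along the first row (small n only)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(n))

def vandermonde(xs):
    """Row i holds the i-th powers of the variables, matching the matrix above."""
    return [[x ** i for x in xs] for i in range(len(xs))]

xs = [1, 2, 4, 7]
lhs = det(vandermonde(xs))
rhs = prod(xs[j] - xs[i] for i in range(len(xs)) for j in range(i + 1, len(xs)))
assert lhs == rhs  # both equal 1*3*6*2*5*3 = 540
```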

Why the formula works (the 3 \times 3 proof)

Take the 3 \times 3 case and prove it directly. Start with

D = \begin{vmatrix} 1 & 1 & 1 \\ a & b & c \\ a^2 & b^2 & c^2 \end{vmatrix}

Step 1. Apply C_2 \to C_2 - C_1 and C_3 \to C_3 - C_1:

D = \begin{vmatrix} 1 & 0 & 0 \\ a & b - a & c - a \\ a^2 & b^2 - a^2 & c^2 - a^2 \end{vmatrix}

Why: subtracting a column from another does not change the determinant. This operation zeroes out two entries in the first row, making expansion easy.

Step 2. Factor the differences of squares: b^2 - a^2 = (b - a)(b + a) and c^2 - a^2 = (c - a)(c + a). Pull out the common factors:

D = (b - a)(c - a) \begin{vmatrix} 1 & 0 & 0 \\ a & 1 & 1 \\ a^2 & b + a & c + a \end{vmatrix}

Why: each column's entries share a common factor, and a scalar multiple of a column can be pulled outside the determinant.

Step 3. Expand along the first row. Only the (1,1) entry survives:

D = (b - a)(c - a) \begin{vmatrix} 1 & 1 \\ b + a & c + a \end{vmatrix}
= (b - a)(c - a)\bigl[(c + a) - (b + a)\bigr] = (b - a)(c - a)(c - b)

Why: the 2 \times 2 determinant is (c + a) - (b + a) = c - b. Three clean steps, and the formula drops out.

Result: D = (b - a)(c - a)(c - b).

Notice what the formula is saying. If any two of a, b, c are equal, one of the factors is zero, and the determinant vanishes. That makes sense: if two columns of a matrix are identical, its determinant is zero — and in a Vandermonde matrix, equal variables mean identical columns.
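You can watch the vanishing happen numerically (vandermonde_3x3 is a hypothetical helper that expands the determinant directly, so the zeros are not coming from the product formula):

```python
def vandermonde_3x3(a, b, c):
    """det of [[1,1,1],[a,b,c],[a^2,b^2,c^2]] by direct expansion along row 0."""
    return (b * c*c - c * b*b) - (a * c*c - c * a*a) + (a * b*b - b * a*a)

assert vandermonde_3x3(2, 2, 7) == 0   # b = a: columns 1 and 2 identical
assert vandermonde_3x3(4, 9, 9) == 0   # c = b: columns 2 and 3 identical
assert vandermonde_3x3(2, 3, 5) == 6   # all distinct: (3-2)(5-2)(5-3) = 6
```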

Where Vandermonde determinants appear

Vandermonde determinants show up naturally in polynomial interpolation. If you want a polynomial of degree n - 1 that passes through n given points, the system of equations you set up has a Vandermonde matrix as its coefficient matrix. The determinant being non-zero (which happens exactly when all the x-values are distinct) guarantees that the interpolating polynomial exists and is unique.

They also appear in the theory of symmetric polynomials and in coding theory (Reed-Solomon codes use Vandermonde matrices). In competitive exams, they appear as "evaluate this determinant" problems where the key insight is recognising the Vandermonde pattern.
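To see the interpolation connection concretely, here is a minimal sketch that sets up the Vandermonde system for three points and solves it with exact rational arithmetic (the solve helper is ours, written for illustration):

```python
from fractions import Fraction

def solve(A, y):
    """Solve A c = y by Gauss-Jordan elimination with exact rationals."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(yi)] for row, yi in zip(A, y)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    return [row[-1] for row in M]

# Interpolate the points (1, 1), (2, 4), (3, 9): expect p(x) = x^2.
xs, ys = [1, 2, 3], [1, 4, 9]
A = [[x ** j for j in range(len(xs))] for x in xs]  # Vandermonde coefficient matrix
coeffs = solve(A, ys)
assert coeffs == [0, 0, 1]  # constant, linear, quadratic coefficients
```

Distinct x-values make the Vandermonde determinant non-zero, which is exactly why the pivot search in the elimination never fails here.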

The circulant determinant

A different kind of structure. A circulant matrix is one where each row is a cyclic shift of the row above it:

C = \begin{vmatrix} a & b & c \\ c & a & b \\ b & c & a \end{vmatrix}

The second row is the first row shifted right by one position (with wraparound), and the third row is the first row shifted right by two. Every circulant is completely determined by its first row.

Circulant determinant (3 \times 3)

\begin{vmatrix} a & b & c \\ c & a & b \\ b & c & a \end{vmatrix} = a^3 + b^3 + c^3 - 3abc

Deriving the formula

Step 1. Apply C_1 \to C_1 + C_2 + C_3:

C = \begin{vmatrix} a + b + c & b & c \\ a + b + c & a & b \\ a + b + c & c & a \end{vmatrix}

Why: adding one column to another does not change the determinant. Now the first column has the same entry everywhere — you can factor it out.

Step 2. Pull out (a + b + c) from the first column:

C = (a + b + c) \begin{vmatrix} 1 & b & c \\ 1 & a & b \\ 1 & c & a \end{vmatrix}

Step 3. Apply R_2 \to R_2 - R_1 and R_3 \to R_3 - R_1:

C = (a + b + c) \begin{vmatrix} 1 & b & c \\ 0 & a - b & b - c \\ 0 & c - b & a - c \end{vmatrix}

Step 4. Expand along the first column:

C = (a + b + c)\bigl[(a - b)(a - c) - (b - c)(c - b)\bigr]
= (a + b + c)\bigl[(a - b)(a - c) + (b - c)^2\bigr]

Why: (b - c)(c - b) = -(b - c)^2, so the minus sign flips to a plus.

Step 5. Expand the bracket. (a - b)(a - c) = a^2 - ac - ab + bc and (b - c)^2 = b^2 - 2bc + c^2. Adding:

a^2 + b^2 + c^2 - ab - bc - ca

So the full determinant is:

C = (a + b + c)(a^2 + b^2 + c^2 - ab - bc - ca)

This is a well-known algebraic identity. The right-hand side equals a^3 + b^3 + c^3 - 3abc.

Why: you can verify this by expanding (a + b + c)(a^2 + b^2 + c^2 - ab - bc - ca). The eighteen terms collapse to four: a^3 + b^3 + c^3 - 3abc. This is the same identity used in factoring sums of cubes.

Result: \det(C) = a^3 + b^3 + c^3 - 3abc = (a + b + c)(a^2 + b^2 + c^2 - ab - bc - ca).

The factored form is the useful one. It tells you, for instance, that the determinant is zero if and only if a + b + c = 0 or a = b = c. The second condition (a^2 + b^2 + c^2 - ab - bc - ca = 0) can be rewritten as \frac{1}{2}[(a - b)^2 + (b - c)^2 + (c - a)^2] = 0, which forces all three to be equal.
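Both forms of the formula are easy to check numerically; here is a short Python sketch (circulant_det is a hypothetical helper computing the determinant by direct cofactor expansion, independent of the formulas being tested):

```python
def circulant_det(a, b, c):
    """det of [[a,b,c],[c,a,b],[b,c,a]] by cofactor expansion along row 0."""
    return (a * (a * a - b * c)
          - b * (c * a - b * b)
          + c * (c * c - a * b))

for a, b, c in [(3, 1, 2), (1, 1, 1), (2, -1, 5)]:
    direct = circulant_det(a, b, c)
    formula = a**3 + b**3 + c**3 - 3*a*b*c
    factored = (a + b + c) * (a*a + b*b + c*c - a*b - b*c - c*a)
    assert direct == formula == factored
```

The (1, 1, 1) case exercises the a = b = c vanishing condition: both factors of the factored form detect it.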

The skew-symmetric determinant

A square matrix A is skew-symmetric (or antisymmetric) if A^T = -A — that is, the entry in position (i, j) is the negative of the entry in position (j, i). The diagonal entries must all be zero (since a_{ii} = -a_{ii} implies a_{ii} = 0).

A general 3 \times 3 skew-symmetric matrix looks like:

A = \begin{pmatrix} 0 & a & b \\ -a & 0 & c \\ -b & -c & 0 \end{pmatrix}

Skew-symmetric determinant

If A is an n \times n skew-symmetric matrix, then:

  • When n is odd, \det(A) = 0.
  • When n is even, \det(A) is a perfect square (specifically, the square of a polynomial called the Pfaffian).

Why odd-order skew-symmetric determinants vanish

The proof is three lines.

Since A is skew-symmetric, A^T = -A. Taking determinants of both sides:

\det(A^T) = \det(-A)

The left side: \det(A^T) = \det(A) (a determinant does not change on transposing).

The right side: \det(-A) = (-1)^n \det(A) (multiplying every row by -1 pulls out one factor of -1 per row).

So \det(A) = (-1)^n \det(A).

When n is odd, (-1)^n = -1, so \det(A) = -\det(A), which gives 2\det(A) = 0, hence \det(A) = 0.

Why: the transpose does not change the determinant, but negating the matrix introduces a sign of (-1)^n. For odd n, this forces the determinant to equal its own negative — the only number that does that is zero.
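The vanishing is easy to confirm on random examples; this sketch builds odd-order skew-symmetric matrices with integer entries, so the determinant comes out exactly zero:

```python
import random

def det(m):
    """Determinant by cofactor expansion along the first row (small n only)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(n))

def random_skew(n, rng):
    """Random integer skew-symmetric matrix: a[j][i] = -a[i][j], zero diagonal."""
    a = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            a[i][j] = rng.randint(-9, 9)
            a[j][i] = -a[i][j]
    return a

rng = random.Random(0)
for n in (3, 5):
    assert det(random_skew(n, rng)) == 0  # odd order: always zero
```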

The even case: a perfect square

For a 2 \times 2 skew-symmetric matrix:

\begin{vmatrix} 0 & a \\ -a & 0 \end{vmatrix} = 0 \cdot 0 - a \cdot (-a) = a^2

The determinant is a^2, a perfect square. For a 4 \times 4 skew-symmetric matrix with entries a, b, c, d, e, f, the determinant turns out to be (af - be + cd)^2 — again a perfect square. The polynomial whose square gives the determinant is called the Pfaffian. You will not need to compute Pfaffians for JEE, but you should know that even-order skew-symmetric determinants are never negative.

Verify with a concrete 3 \times 3

Take a = 2, b = 3, c = 5:

\begin{vmatrix} 0 & 2 & 3 \\ -2 & 0 & 5 \\ -3 & -5 & 0 \end{vmatrix}

Expand along the first row: 0 \cdot M_{11} - 2 \cdot M_{12} + 3 \cdot M_{13}.

M_{12} = \begin{vmatrix} -2 & 5 \\ -3 & 0 \end{vmatrix} = 0 - (-15) = 15.

M_{13} = \begin{vmatrix} -2 & 0 \\ -3 & -5 \end{vmatrix} = 10 - 0 = 10.

Determinant = 0 - 2(15) + 3(10) = -30 + 30 = 0. Exactly as predicted: odd order, determinant zero.

Block determinants

When a large matrix has a block of zeros in one corner, it splits into smaller pieces. A block matrix has the form:

M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}

where A, B, C, D are sub-matrices of appropriate sizes.

Block triangular determinant

If M is block upper triangular or block lower triangular — that is, either C = O (the zero matrix) or B = O — then:

\det\begin{pmatrix} A & B \\ O & D \end{pmatrix} = \det(A) \cdot \det(D)
\det\begin{pmatrix} A & O \\ C & D \end{pmatrix} = \det(A) \cdot \det(D)

The determinant of a block triangular matrix equals the product of the determinants of the diagonal blocks.

Why block triangular determinants factor

Consider the block upper triangular case. Write A as p \times p and D as q \times q, so M is (p + q) \times (p + q).

Expand the determinant along the last q rows (a Laplace expansion). Because C = O, the only q \times q minor of those rows that can be non-zero is the one taken from the columns of D — namely \det(D) — and its complementary minor is \det(A). The expansion therefore collapses to \det(A) \cdot \det(D).

A more elegant argument uses the product rule: \det(M) = \det\begin{pmatrix} A & O \\ O & I_q \end{pmatrix} \cdot \det\begin{pmatrix} I_p & A^{-1}B \\ O & D \end{pmatrix} (when A is invertible). The first factor is \det(A), the second is \det(D), and the product is \det(A) \cdot \det(D).

Why: block triangular form is the matrix analogue of a triangular determinant, where diagonal entries multiply. The diagonal "entries" are now blocks, and their "product" is the product of their determinants.
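A concrete 4 \times 4 check of the factorisation, with 2 \times 2 blocks (the det helper is a small cofactor-expansion routine written for this check):

```python
def det(m):
    """Determinant by cofactor expansion along the first row (small n only)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(n))

A = [[2, 1], [1, 3]]   # det(A) = 5
B = [[7, -4], [0, 6]]  # B can be anything: it does not affect the result
D = [[1, 4], [2, 9]]   # det(D) = 1

# Block upper triangular: bottom-left 2x2 block is zero.
M = [[2, 1, 7, -4],
     [1, 3, 0, 6],
     [0, 0, 1, 4],
     [0, 0, 2, 9]]
assert det(M) == det(A) * det(D)  # 5 * 1 = 5
```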

A useful special case: block diagonal

When both off-diagonal blocks are zero:

\det\begin{pmatrix} A & O \\ O & D \end{pmatrix} = \det(A) \cdot \det(D)

This is the cleanest case. A block diagonal matrix is two independent systems that don't interact, and the determinant factors accordingly.

Worked examples

Example 1: Evaluate a Vandermonde determinant

Evaluate

\begin{vmatrix} 1 & 1 & 1 \\ 2 & 3 & 5 \\ 4 & 9 & 25 \end{vmatrix}

Step 1. Recognise the structure. The first row is all ones. The second row is 2, 3, 5. The third row is 4, 9, 25 — exactly 2^2, 3^2, 5^2. This is a Vandermonde determinant with a = 2, b = 3, c = 5.

Why: the key recognition is that the third row entries are the squares of the second row entries. That is the Vandermonde signature: 1, x, x^2 in each column.

Step 2. Apply the Vandermonde formula: (b - a)(c - a)(c - b).

(3 - 2)(5 - 2)(5 - 3) = 1 \times 3 \times 2 = 6

Why: the formula gives the product of all pairwise differences, taken in order. Three variables give three factors.

Step 3. Verify by direct expansion. Expand along the first row:

1 \cdot \begin{vmatrix} 3 & 5 \\ 9 & 25 \end{vmatrix} - 1 \cdot \begin{vmatrix} 2 & 5 \\ 4 & 25 \end{vmatrix} + 1 \cdot \begin{vmatrix} 2 & 3 \\ 4 & 9 \end{vmatrix}
= 1(75 - 45) - 1(50 - 20) + 1(18 - 12) = 30 - 30 + 6 = 6

Why: the direct expansion confirms the formula. In an exam, you would use the formula directly — but verifying on a small case builds confidence that you are applying it correctly.

Step 4. Interpret the result. The determinant is 6, a positive integer. If any two of 2, 3, 5 had been equal, the determinant would have been zero. The Vandermonde formula turned a 3 \times 3 expansion into a single line of arithmetic.

Result: The determinant equals 6.

Vandermonde matrix structure: columns as powers. A visual representation of the 3 \times 3 Vandermonde matrix: three columns, each labelled with one of the variables a = 2, b = 3, c = 5, and within each column the entries 1, x, x^2, showing the geometric-progression structure.
The structure of the Vandermonde matrix: each column is a geometric progression $1, x, x^2$ built from a single variable. The determinant depends only on the pairwise differences between the column variables.

Example 2: Evaluate a circulant determinant

Evaluate

\begin{vmatrix} 3 & 1 & 2 \\ 2 & 3 & 1 \\ 1 & 2 & 3 \end{vmatrix}

Step 1. Recognise the circulant structure. Row 1 is (3, 1, 2). Row 2 is (2, 3, 1) — a cyclic shift of row 1. Row 3 is (1, 2, 3) — another cyclic shift. So a = 3, b = 1, c = 2.

Why: in a circulant, each row is the previous row shifted cyclically by one position. Once you see this, you can apply the formula directly.

Step 2. Apply the circulant formula: a^3 + b^3 + c^3 - 3abc.

3^3 + 1^3 + 2^3 - 3 \cdot 3 \cdot 1 \cdot 2 = 27 + 1 + 8 - 18 = 18

Why: the formula reduces a full expansion to simple arithmetic with the three entries of the first row.

Step 3. Alternatively, use the factored form: (a + b + c)(a^2 + b^2 + c^2 - ab - bc - ca).

a + b + c = 6
a^2 + b^2 + c^2 - ab - bc - ca = 9 + 1 + 4 - 3 - 2 - 6 = 3
\det = 6 \times 3 = 18

Why: the factored form is useful when you need to check whether the determinant is zero. Here a + b + c = 6 \neq 0 and the second factor is 3 \neq 0, so the determinant is non-zero.

Step 4. Verify by expanding along the first row:

3(9 - 2) - 1(6 - 1) + 2(4 - 3) = 3 \cdot 7 - 1 \cdot 5 + 2 \cdot 1 = 21 - 5 + 2 = 18

Result: The determinant equals 18.

Circulant matrix: cyclic shift pattern. A diagram showing the three rows of the circulant matrix, with arrows indicating how each row is a cyclic shift of the previous one: row 1 is (3, 1, 2), row 2 is (2, 3, 1), row 3 is (1, 2, 3).
The cyclic shift pattern of a circulant matrix. The entry $3$ travels along the diagonal — row 1 column 1, row 2 column 2, row 3 column 3. Every circulant is completely determined by its first row.

Going deeper

If you came here to learn the four standard forms and their formulas, you have them — you can stop here. The rest of this section is for readers who want to see the general circulant formula, the connection to roots of unity, and the Schur complement.

The general circulant formula and roots of unity

The 3 \times 3 circulant formula a^3 + b^3 + c^3 - 3abc is a special case of a beautiful general result. For an n \times n circulant with first row (c_0, c_1, \ldots, c_{n-1}), the determinant is:

\det(C) = \prod_{k=0}^{n-1} \left( c_0 + c_1 \omega^k + c_2 \omega^{2k} + \cdots + c_{n-1} \omega^{(n-1)k} \right)

where \omega = e^{2\pi i / n} is a primitive n-th root of unity.

Each factor in the product is the polynomial p(x) = c_0 + c_1 x + c_2 x^2 + \cdots + c_{n-1} x^{n-1} evaluated at x = \omega^k. So the determinant of a circulant is the product of its associated polynomial evaluated at all n-th roots of unity.

For the 3 \times 3 case, the roots of unity are 1, \omega, \omega^2 where \omega = e^{2\pi i/3}. The polynomial is p(x) = a + bx + cx^2. Then:

\det = p(1) \cdot p(\omega) \cdot p(\omega^2) = (a + b + c) \cdot (a + b\omega + c\omega^2) \cdot (a + b\omega^2 + c\omega^4)

Since \omega^3 = 1, this simplifies to (a + b + c)(a + b\omega + c\omega^2)(a + b\omega^2 + c\omega). The last two factors are complex conjugates, and their product is a real number. Multiplying everything out recovers a^3 + b^3 + c^3 - 3abc.

This connection between circulants and roots of unity is the reason circulant matrices are diagonalised by the discrete Fourier transform matrix — a fact that underpins the Fast Fourier Transform (FFT) algorithm.
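The product formula is easy to evaluate numerically with complex arithmetic; this sketch recomputes the Example 2 determinant from the roots of unity (the helper name is ours, and the result matches 18 up to floating-point error):

```python
import cmath

def circulant_det_via_roots(first_row):
    """Product of p(w^k) over the n-th roots of unity, p built from the first row."""
    n = len(first_row)
    w = cmath.exp(2j * cmath.pi / n)
    result = 1
    for k in range(n):
        result *= sum(c * w ** (j * k) for j, c in enumerate(first_row))
    return result

# First row (3, 1, 2) from Example 2: the determinant should be 18.
d = circulant_det_via_roots([3, 1, 2])
assert abs(d - 18) < 1e-9
```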

The Schur complement

For a general 2 \times 2 block matrix where A is invertible:

\det\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(A) \cdot \det(D - CA^{-1}B)

The matrix S = D - CA^{-1}B is called the Schur complement of A in M. When C = O, the Schur complement reduces to D, recovering the block triangular formula.

Similarly, if D is invertible:

\det\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(D) \cdot \det(A - BD^{-1}C)

The Schur complement is a powerful tool in advanced linear algebra. It appears in the derivation of formulas for block matrix inverses, in control theory, and in statistics (where the conditional covariance of a multivariate Gaussian is a Schur complement).
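Here is a sketch verifying the Schur complement identity on one 4 \times 4 example, using exact rational arithmetic so the equality is exact (all helper functions are ours, written for illustration):

```python
from fractions import Fraction as F

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def det4(m):
    """4x4 determinant by recursive cofactor expansion along the first row."""
    def det(mm):
        n = len(mm)
        if n == 1:
            return mm[0][0]
        return sum((-1) ** j * mm[0][j]
                   * det([r[:j] + r[j+1:] for r in mm[1:]])
                   for j in range(n))
    return det(m)

def mat2(rows):
    return [[F(v) for v in row] for row in rows]

def mul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(m):
    d = det2(m)
    return [[m[1][1] / d, -m[0][1] / d], [-m[1][0] / d, m[0][0] / d]]

A = mat2([[2, 1], [1, 1]])   # invertible: det(A) = 1
B = mat2([[3, 0], [1, 2]])
C = mat2([[1, 1], [0, 1]])
D = mat2([[5, 3], [2, 4]])

# Schur complement S = D - C A^{-1} B
CAinvB = mul2(mul2(C, inv2(A)), B)
S = [[D[i][j] - CAinvB[i][j] for j in range(2)] for i in range(2)]

M = [A[0] + B[0], A[1] + B[1], C[0] + D[0], C[1] + D[1]]
assert det4(M) == det2(A) * det2(S)  # both sides equal -3
```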

Pfaffians and the even skew-symmetric case

For a 2n \times 2n skew-symmetric matrix A, the determinant is the square of an integer polynomial called the Pfaffian, written \text{Pf}(A):

\det(A) = [\text{Pf}(A)]^2

For the 2 \times 2 case, \text{Pf}\begin{pmatrix} 0 & a \\ -a & 0 \end{pmatrix} = a, and \det = a^2.

For the 4 \times 4 case with entries \begin{pmatrix} 0 & a & b & c \\ -a & 0 & d & e \\ -b & -d & 0 & f \\ -c & -e & -f & 0 \end{pmatrix}, the Pfaffian is af - be + cd, and the determinant is (af - be + cd)^2.
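A numeric check that the 4 \times 4 determinant is the square of af - be + cd (skew4 and det are hypothetical helpers matching the entry layout above):

```python
def det(m):
    """Determinant by cofactor expansion along the first row (small n only)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(n))

def skew4(a, b, c, d, e, f):
    """4x4 skew-symmetric matrix with the entry layout from the text."""
    return [[ 0,  a,  b,  c],
            [-a,  0,  d,  e],
            [-b, -d,  0,  f],
            [-c, -e, -f,  0]]

for a, b, c, d, e, f in [(1, 2, 3, 4, 5, 6), (2, 0, -1, 3, 7, 1)]:
    pf = a * f - b * e + c * d  # the Pfaffian
    assert det(skew4(a, b, c, d, e, f)) == pf * pf
```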

Pfaffians play a role in combinatorics (counting perfect matchings in planar graphs) and in mathematical physics (partition functions). While they are beyond JEE scope, it is useful to know that even-order skew-symmetric determinants with real entries are always non-negative.

Where this leads next

You now have a toolkit for recognising and evaluating structured determinants. The next steps connect determinants to other areas: