In short
The determinant of a square matrix is a single number that tells you two things at once: whether the matrix is invertible (determinant \neq 0) and how much the matrix scales areas or volumes. For a 2 \times 2 matrix it is ad - bc. For a 3 \times 3 matrix you expand along a row or column using minors and cofactors. The Sarrus rule gives a quick visual shortcut for 3 \times 3.
Take two vectors in the plane: \mathbf{u} = (3, 0) pointing along the x-axis, and \mathbf{v} = (1, 2) pointing diagonally. Together they span a parallelogram.
What is the area of that parallelogram? You can work it out from geometry: the base is |\mathbf{u}| = 3, the height is the perpendicular distance from \mathbf{v} to the x-axis, which is 2. So the area is 3 \times 2 = 6.
Now arrange the coordinates into a matrix, one vector per row:

\begin{pmatrix} 3 & 0 \\ 1 & 2 \end{pmatrix}
Compute 3 \times 2 - 0 \times 1 = 6. The same number. Not a coincidence — the number you just computed is called the determinant of the matrix, and it always gives the signed area of the parallelogram spanned by the matrix's rows (or columns).
The sign matters: if the two vectors are arranged counterclockwise, the determinant is positive. If clockwise, negative. Zero means the two vectors are parallel — they collapse onto a line, and the "parallelogram" has no area at all.
This single number — positive, negative, or zero — turns out to encode exactly the information you need to know whether a system of equations has a unique solution, whether a matrix can be inverted, and how much a linear transformation stretches or compresses space.
The 2×2 determinant
Determinant of a 2×2 matrix
For a 2 \times 2 matrix A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, the determinant is

\det(A) = ad - bc
Two notations are standard: \det(A) and the vertical-bar notation \begin{vmatrix} a & b \\ c & d \end{vmatrix}. Both mean the same thing. The vertical bars look like absolute value bars, but they are not — the determinant can be negative.
Reading the formula. The determinant is the product of the main diagonal (a \times d) minus the product of the off-diagonal (b \times c). Think of it as a cross: the \searrow product minus the \nearrow product.
A few quick computations to build muscle memory:

\begin{vmatrix} 2 & 3 \\ 1 & 4 \end{vmatrix} = 8 - 3 = 5, \qquad \begin{vmatrix} 1 & 2 \\ 3 & 6 \end{vmatrix} = 6 - 6 = 0
The second one is zero — and notice that the second row (3, 6) is exactly 3 times the first row (1, 2). The two rows point in the same direction. The "parallelogram" is flat, with zero area. When the determinant is zero, the matrix is singular — it has no inverse, and any system of equations using this matrix as coefficients either has no solution or infinitely many.
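These 2 \times 2 computations are easy to script. A minimal Python sketch (the helper name det2 is ours, not from any library):

```python
def det2(a, b, c, d):
    """Determinant of the 2x2 matrix [[a, b], [c, d]]: ad - bc."""
    return a * d - b * c

# Parallelogram spanned by u = (3, 0) and v = (1, 2): area 6, positive
# because the pair (u, v) is arranged counterclockwise.
print(det2(3, 0, 1, 2))   # 6

# Rows (1, 2) and (3, 6) are parallel, so the matrix is singular.
print(det2(1, 2, 3, 6))   # 0
```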
Minors and cofactors
To extend the idea beyond 2 \times 2, you need two new terms.
Minor
The minor M_{ij} of an element a_{ij} in a square matrix is the determinant of the matrix obtained by deleting row i and column j.
Cofactor
The cofactor C_{ij} of an element a_{ij} is the minor with a sign attached:

C_{ij} = (-1)^{i+j} M_{ij}
The sign alternates in a checkerboard pattern: + when i + j is even, - when i + j is odd.
The checkerboard pattern for a 3 \times 3 matrix looks like this:

\begin{pmatrix} + & - & + \\ - & + & - \\ + & - & + \end{pmatrix}
The sign of C_{ij} depends only on the position (i, j), not on the value of the entry.
Take a concrete 3 \times 3 matrix:

A = \begin{pmatrix} 2 & 3 & 1 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}
The minor M_{11} (delete row 1, column 1) is:

M_{11} = \begin{vmatrix} 5 & 6 \\ 8 & 9 \end{vmatrix} = 45 - 48 = -3
The cofactor C_{11} = (-1)^{1+1} M_{11} = (+1)(-3) = -3.
The minor M_{12} (delete row 1, column 2) is:

M_{12} = \begin{vmatrix} 4 & 6 \\ 7 & 9 \end{vmatrix} = 36 - 42 = -6
The cofactor C_{12} = (-1)^{1+2} M_{12} = (-1)(-6) = 6.
The minor M_{13} (delete row 1, column 3) is:

M_{13} = \begin{vmatrix} 4 & 5 \\ 7 & 8 \end{vmatrix} = 32 - 35 = -3
The cofactor C_{13} = (-1)^{1+3} M_{13} = (+1)(-3) = -3.
These three cofactors are the building blocks of the 3 \times 3 determinant.
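Minors and cofactors can be checked mechanically. A Python sketch (the function names minor and cofactor are ours; note the 0-based indices, unlike the 1-based subscripts in the text):

```python
def minor(A, i, j):
    """Minor M_ij of a 3x3 matrix A: the 2x2 determinant left after
    deleting row i and column j (0-based indices here)."""
    rows = [r for k, r in enumerate(A) if k != i]
    sub = [[x for m, x in enumerate(r) if m != j] for r in rows]
    return sub[0][0] * sub[1][1] - sub[0][1] * sub[1][0]

def cofactor(A, i, j):
    """Cofactor C_ij: the minor with the checkerboard sign attached."""
    return (-1) ** (i + j) * minor(A, i, j)

A = [[2, 3, 1],
     [4, 5, 6],
     [7, 8, 9]]
print(cofactor(A, 0, 0), cofactor(A, 0, 1), cofactor(A, 0, 2))  # -3 6 -3
```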
The 3×3 determinant
Determinant of a 3×3 matrix (expansion along the first row)
For a 3 \times 3 matrix A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}, the determinant is

\det(A) = a_{11} C_{11} + a_{12} C_{12} + a_{13} C_{13}
where each C_{ij} is the cofactor defined above. Written out fully:

\det(A) = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31})
Apply this to the matrix above:

\det(A) = 2 \cdot C_{11} + 3 \cdot C_{12} + 1 \cdot C_{13} = 2(-3) + 3(6) + 1(-3) = 9
Let's verify this step by step. The first term is a_{11} \cdot C_{11} = 2 \times (-3) = -6. The second term is a_{12} \cdot C_{12} = 3 \times 6 = 18. The third term is a_{13} \cdot C_{13} = 1 \times (-3) = -3. Summing: -6 + 18 - 3 = 9.
The determinant \det(A) = 9 \neq 0, so this matrix is invertible.
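The first-row expansion can be sketched in Python (det3 is our name; with 0-based indices the sign factor (-1)^{1+j} becomes (-1)^j):

```python
def det3(A):
    """3x3 determinant by cofactor expansion along the first row."""
    def minor(j):
        # Delete row 0 and column j, then take the 2x2 determinant.
        sub = [[x for m, x in enumerate(r) if m != j] for r in A[1:]]
        return sub[0][0] * sub[1][1] - sub[0][1] * sub[1][0]
    return sum((-1) ** j * A[0][j] * minor(j) for j in range(3))

A = [[2, 3, 1],
     [4, 5, 6],
     [7, 8, 9]]
print(det3(A))  # 9
```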
Sarrus' rule — a visual shortcut
For 3 \times 3 matrices only, there is a mnemonic that avoids computing cofactors explicitly.
Write out the 3 \times 3 matrix and copy the first two columns again to the right:

\begin{array}{ccc|cc} a_{11} & a_{12} & a_{13} & a_{11} & a_{12} \\ a_{21} & a_{22} & a_{23} & a_{21} & a_{22} \\ a_{31} & a_{32} & a_{33} & a_{31} & a_{32} \end{array}
Now draw three diagonals going down-right (\searrow) and three diagonals going down-left (\swarrow). The determinant is the sum of the three down-right products minus the sum of the three down-left products.
Apply Sarrus to the same matrix A = \begin{pmatrix} 2 & 3 & 1 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}:
Positive diagonals (\searrow):
- 2 \times 5 \times 9 = 90
- 3 \times 6 \times 7 = 126
- 1 \times 4 \times 8 = 32
Negative diagonals (\swarrow):
- 1 \times 5 \times 7 = 35
- 2 \times 6 \times 8 = 96
- 3 \times 4 \times 9 = 108
Total: (90 + 126 + 32) - (35 + 96 + 108) = 248 - 239 = 9.

Same answer as the cofactor expansion. Sarrus' rule is faster when you're working by hand, but it is purely a 3 \times 3 trick. For 4 \times 4 and above, you must use cofactor expansion or row reduction.
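Sarrus' rule translates directly into code. A sketch (sarrus is our name; indexing mod 3 wraps around, which plays the same role as copying the first two columns):

```python
def sarrus(A):
    """3x3 determinant by Sarrus' rule: the three down-right diagonal
    products minus the three down-left diagonal products."""
    pos = sum(A[0][j] * A[1][(j + 1) % 3] * A[2][(j + 2) % 3] for j in range(3))
    neg = sum(A[0][j] * A[1][(j - 1) % 3] * A[2][(j - 2) % 3] for j in range(3))
    return pos - neg

A = [[2, 3, 1],
     [4, 5, 6],
     [7, 8, 9]]
print(sarrus(A))  # 9, i.e. (90 + 126 + 32) - (35 + 96 + 108)
```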
Expansion along any row or column
The formula above expanded along the first row. But the determinant can be computed by expanding along any row or any column — you always get the same answer.
Expansion along row i:

\det(A) = \sum_{j=1}^{n} a_{ij} C_{ij}
Expansion along column j:

\det(A) = \sum_{i=1}^{n} a_{ij} C_{ij}
This flexibility is powerful. If a row or column has zeros in it, expand along that row or column — every zero entry kills the corresponding cofactor computation, saving work.
Let's verify by expanding the same matrix A = \begin{pmatrix} 2 & 3 & 1 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix} along the second column. The entries of column 2 are 3, 5, and 8; their cofactors are:
C_{12} = (-1)^{1+2}\begin{vmatrix} 4 & 6 \\ 7 & 9 \end{vmatrix} = (-1)(36 - 42) = (-1)(-6) = 6
C_{22} = (-1)^{2+2}\begin{vmatrix} 2 & 1 \\ 7 & 9 \end{vmatrix} = (+1)(18 - 7) = 11
C_{32} = (-1)^{3+2}\begin{vmatrix} 2 & 1 \\ 4 & 6 \end{vmatrix} = (-1)(12 - 4) = -8
\det(A) = a_{12} C_{12} + a_{22} C_{22} + a_{32} C_{32} = 3(6) + 5(11) + 8(-8) = 18 + 55 - 64 = 9

Same answer — 9 — as before. The row or column you choose does not affect the result. It affects only the amount of arithmetic you have to do.
Strategy. When computing a 3 \times 3 determinant by hand, scan the matrix for the row or column with the most zeros. Expand along that one to minimise the number of 2 \times 2 determinants you have to compute.
Why does expansion along any row or column give the same answer? Think of it this way: the determinant is a property of the matrix as a whole, not of any particular row. The cofactor expansion is one recipe for computing it. The fact that you can start the recipe from any row or column — and always arrive at the same number — is a consequence of the multilinear, alternating nature of the determinant function. You will see this formalised in Properties of Determinants. For now, just use the freedom: pick the row or column that makes your life easiest.
Here is a matrix where the right choice of row saves real effort:

B = \begin{pmatrix} 4 & 0 & 0 \\ 2 & 3 & 1 \\ 5 & 7 & 6 \end{pmatrix}

Row 1 has two zeros. Expanding along row 1:

\det(B) = 4 \begin{vmatrix} 3 & 1 \\ 7 & 6 \end{vmatrix} - 0 + 0 = 4(18 - 7) = 44

One 2 \times 2 determinant instead of three. The zeros in row 1 killed two of the three terms before any computation happened.
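The freedom to expand along any row can be checked numerically. A Python sketch (det3_along is our name; every row must produce the same value):

```python
def det3_along(A, i):
    """3x3 determinant by cofactor expansion along row i (0-based).
    Any row gives the same answer; rows with zeros mean less work."""
    def minor(i, j):
        rows = [r for k, r in enumerate(A) if k != i]
        sub = [[x for m, x in enumerate(r) if m != j] for r in rows]
        return sub[0][0] * sub[1][1] - sub[0][1] * sub[1][0]
    return sum((-1) ** (i + j) * A[i][j] * minor(i, j) for j in range(3))

A = [[2, 3, 1],
     [4, 5, 6],
     [7, 8, 9]]
print([det3_along(A, i) for i in range(3)])  # [9, 9, 9]
```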
Worked examples
Example 1: Area of a triangle using determinants
Three vertices of a triangle are P = (1, 2), Q = (4, 6), and R = (7, 3). Find the area using a determinant.
Step 1. Set up the matrix. The area formula for a triangle with vertices (x_1, y_1), (x_2, y_2), (x_3, y_3) is

\text{Area} = \frac{1}{2} \left| \begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix} \right|
Why: each row represents a vertex, with a 1 appended. The determinant computes twice the signed area of the triangle. The absolute value and the \frac{1}{2} convert it to the actual area.
Step 2. Substitute the coordinates.

\begin{vmatrix} 1 & 2 & 1 \\ 4 & 6 & 1 \\ 7 & 3 & 1 \end{vmatrix}
Step 3. Expand along the third column (it has all 1's, which simplifies the arithmetic to just computing the cofactors).
C_{13} = (+1)\begin{vmatrix} 4 & 6 \\ 7 & 3 \end{vmatrix} = 12 - 42 = -30
C_{23} = (-1)\begin{vmatrix} 1 & 2 \\ 7 & 3 \end{vmatrix} = -(3 - 14) = 11
C_{33} = (+1)\begin{vmatrix} 1 & 2 \\ 4 & 6 \end{vmatrix} = 6 - 8 = -2
Why: we expand along the third column because all its entries are 1, so each cofactor contributes its value directly, with no extra factors to track.
Step 4. Sum and take the absolute value.

\begin{vmatrix} 1 & 2 & 1 \\ 4 & 6 & 1 \\ 7 & 3 & 1 \end{vmatrix} = 1 \cdot C_{13} + 1 \cdot C_{23} + 1 \cdot C_{33} = -30 + 11 - 2 = -21, \qquad \text{Area} = \frac{1}{2} \lvert -21 \rvert = \frac{21}{2}
Why: the negative sign of the determinant tells you the vertices were listed in clockwise order. The absolute value discards the orientation and gives the geometric area.
Result: The area of the triangle is \dfrac{21}{2} square units.
The determinant formula for area is not just a trick. It works because the determinant measures the signed area of the parallelogram formed by two edge vectors of the triangle, and a triangle is exactly half a parallelogram.
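The area formula packages neatly as a function. A Python sketch (triangle_area is our name; the determinant is written out as its first-row expansion):

```python
def triangle_area(p, q, r):
    """Area of the triangle with vertices p, q, r: half the absolute
    value of |x1 y1 1; x2 y2 1; x3 y3 1|, expanded along the first row."""
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
    return abs(det) / 2

print(triangle_area((1, 2), (4, 6), (7, 3)))  # 10.5
```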
Example 2: Solving a system using Cramer's rule (preview)
Solve the system \begin{cases} 2x + y = 5 \\ 3x - 2y = 4 \end{cases} using determinants.
Step 1. Write the coefficient matrix and compute its determinant.

D = \begin{vmatrix} 2 & 1 \\ 3 & -2 \end{vmatrix} = 2(-2) - 1(3) = -4 - 3 = -7
Why: D \neq 0, so the system has a unique solution. If D were zero, the two lines would be parallel (no solution) or identical (infinitely many).
Step 2. Compute D_x by replacing the first column with the constants.

D_x = \begin{vmatrix} 5 & 1 \\ 4 & -2 \end{vmatrix} = 5(-2) - 1(4) = -10 - 4 = -14
Why: Cramer's rule says x = D_x / D. To form D_x, you replace the column of x-coefficients with the right-hand side.
Step 3. Compute D_y by replacing the second column with the constants.

D_y = \begin{vmatrix} 2 & 5 \\ 3 & 4 \end{vmatrix} = 2(4) - 5(3) = 8 - 15 = -7
Why: similarly, y = D_y / D. Replace the y-coefficient column with the right-hand side.
Step 4. Divide to get the solution.

x = \frac{D_x}{D} = \frac{-14}{-7} = 2, \qquad y = \frac{D_y}{D} = \frac{-7}{-7} = 1
Why: each variable equals the ratio of a modified determinant to the original determinant. This is Cramer's rule — a direct application of determinants to solving systems of linear equations.
Result: x = 2, y = 1.
Verify: 2(2) + 1 = 5 and 3(2) - 2(1) = 4. Both equations check out.
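Cramer's rule for a 2 \times 2 system fits in a few lines. A Python sketch (cramer_2x2 is our name):

```python
def cramer_2x2(a, b, c, d, e, f):
    """Solve a*x + b*y = e, c*x + d*y = f by Cramer's rule.
    Returns None when D = 0 (no unique solution)."""
    D = a * d - b * c
    if D == 0:
        return None
    Dx = e * d - b * f   # constants replace the x-coefficient column
    Dy = a * f - e * c   # constants replace the y-coefficient column
    return Dx / D, Dy / D

print(cramer_2x2(2, 1, 3, -2, 5, 4))  # (2.0, 1.0)
```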
Common confusions
- "The determinant is a matrix." No — the determinant is a number, computed from a matrix. The matrix \begin{pmatrix} 2 & 3 \\ 1 & 4 \end{pmatrix} is a 2 \times 2 array; its determinant 2(4) - 3(1) = 5 is a single scalar.
- "Determinant zero means the matrix is zero." Not at all. The matrix \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix} has determinant 4 - 4 = 0, but it is not the zero matrix. Determinant zero means the rows are linearly dependent — the second row is twice the first — so the matrix is singular (non-invertible).
- "The order of rows doesn't matter." Swapping two rows flips the sign of the determinant. So \begin{vmatrix} 1 & 2 \\ 3 & 4 \end{vmatrix} = -2 but \begin{vmatrix} 3 & 4 \\ 1 & 2 \end{vmatrix} = 2. The magnitude stays the same; the sign changes. You will explore this in Properties of Determinants.
- "Sarrus' rule works for any size." It works only for 3 \times 3. There is no diagonal-copying trick for 4 \times 4 or 5 \times 5. For larger matrices, use cofactor expansion or row reduction.
- "Minor and cofactor are the same thing." Almost — they differ by a sign. The minor M_{ij} is the determinant of a sub-matrix, and a determinant can itself be positive, negative, or zero. The cofactor C_{ij} = (-1)^{i+j} M_{ij} attaches an additional sign based on position. Forgetting the position sign is the single most common error in determinant computations. Use the checkerboard pattern (+ when i + j is even, - when odd) to get it right every time.
Going deeper
If you came here to learn how to compute 2 \times 2 and 3 \times 3 determinants, you have it — you can stop here. The rest explores the geometric meaning more carefully and connects determinants to volumes in higher dimensions.
The geometric meaning: signed area and volume
For a 2 \times 2 matrix, the determinant is the signed area of the parallelogram spanned by the row vectors. For a 3 \times 3 matrix, the determinant is the signed volume of the parallelepiped (a 3D parallelogram) spanned by the three row vectors.
Take A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}. This matrix stretches space by a factor of 1 along x, 2 along y, and 3 along z. The parallelepiped formed by the rows is a rectangular box with dimensions 1 \times 2 \times 3, so its volume is 6. And \det(A) = 1 \cdot 2 \cdot 3 = 6. The determinant is the volume.
If a transformation scales areas by a factor of k, its determinant is k. If it reflects (flips orientation), the determinant is negative. If it collapses a dimension — squashing 3D space into a plane, or a plane into a line — the determinant is zero.
Recursive definition for n×n
The cofactor expansion works for any n \times n matrix. Expanding along the first row:

\det(A) = \sum_{j=1}^{n} (-1)^{1+j} a_{1j} \det(A_{1j})
where A_{1j} is the (n-1) \times (n-1) matrix obtained by deleting row 1 and column j. Each step reduces the problem by one dimension: a 4 \times 4 determinant becomes four 3 \times 3 determinants, each of which becomes three 2 \times 2 determinants.
This recursive structure is mathematically elegant but computationally expensive — an n \times n determinant via cofactor expansion requires roughly n! multiplications. For large n, row reduction methods (Gaussian elimination) compute determinants in O(n^3) operations instead.
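The recursive definition translates almost word for word into code. A sketch (det is our name; this is the n! method, for illustration rather than production use):

```python
def det(A):
    """n x n determinant by recursive cofactor expansion along row 0.
    About n! multiplications, so only practical for small n."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Delete row 0 and column j to get the (n-1) x (n-1) submatrix.
        sub = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(sub)
    return total

print(det([[2, 3, 1], [4, 5, 6], [7, 8, 9]]))   # 9
print(det([[1, 0, 0], [0, 2, 0], [0, 0, 3]]))   # 6
```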
Historical note
The concept of a determinant appeared in Japanese mathematician Seki Takakazu's work in 1683, independently of Leibniz, who used them around the same time in Europe. In India, systems of linear equations appear in Brahmagupta's Brahmasphutasiddhanta (628 CE), and while Brahmagupta did not use determinant notation, the underlying ideas — solving simultaneous equations by systematic elimination — are recognisably the same computational structure. The modern notation with vertical bars was introduced in the 19th century.
Where this leads next
- Properties of Determinants — the rules that let you simplify determinants before computing them: row/column operations, factoring, and the effect of transposes.
- Special Matrices — orthogonal, idempotent, and other matrices whose determinants have specific values.
- Systems of Linear Equations — using determinants (via Cramer's rule) and row reduction to solve systems.
- Matrix Operations — the addition, multiplication, and transpose operations that determinants interact with.