In short

A square matrix A has an inverse A^{-1} if A A^{-1} = A^{-1} A = I (the identity matrix). The inverse exists exactly when \det A \neq 0. You can find it using the adjoint formula A^{-1} = \frac{1}{\det A} \text{adj}(A), or by applying elementary row operations to the augmented matrix [A \mid I].

You know how to divide numbers. Given 5x = 20, you divide both sides by 5 and get x = 4. Behind that division is a simple idea: 5 has a multiplicative inverse, \frac{1}{5}, and multiplying by it undoes multiplication by 5.

Now suppose 5 is not a number but a matrix — say, A = \begin{bmatrix} 2 & 1 \\ 5 & 3 \end{bmatrix} — and instead of 5x = 20, you have

A \mathbf{x} = \mathbf{b}

where \mathbf{x} and \mathbf{b} are column vectors. How do you "divide" both sides by a matrix?

You cannot divide by a matrix — division is not defined for matrices. But you can do something equivalent: find a matrix A^{-1} such that A^{-1} A = I, the identity matrix. Then multiplying both sides of A\mathbf{x} = \mathbf{b} on the left by A^{-1} gives

A^{-1} A \mathbf{x} = A^{-1} \mathbf{b} \implies I \mathbf{x} = A^{-1} \mathbf{b} \implies \mathbf{x} = A^{-1} \mathbf{b}

The matrix A^{-1} is called the inverse of A. Finding it is the matrix version of division. The entire theory of solving systems of linear equations with matrices rests on whether this inverse exists and how to compute it.
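The "matrix division" analogy can be checked directly in code. A minimal numpy sketch, using the 2 × 2 matrix A from above (the right-hand side b is an arbitrary illustrative choice):

```python
import numpy as np

# The matrix A from the discussion above, and an arbitrary b
A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
b = np.array([20.0, 49.0])

A_inv = np.linalg.inv(A)   # the matrix analogue of 1/5
x = A_inv @ b              # x = A^{-1} b

# Multiplying back by A recovers b, just as 5 * 4 recovers 20
assert np.allclose(A @ x, b)
```

In numerical practice `np.linalg.solve(A, b)` is preferred over forming the inverse explicitly, but the inverse makes the analogy with dividing by 5 explicit.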

What an inverse is

Inverse of a square matrix

A square matrix A of order n is called invertible (or non-singular) if there exists a square matrix B of the same order such that

AB = BA = I_n

where I_n is the n \times n identity matrix. The matrix B is called the inverse of A, written A^{-1}.

Three things to notice immediately.

The inverse must work from both sides. It is not enough for AB = I; you also need BA = I. For square matrices, it turns out that one implies the other — but this is a theorem, not a definition. The proof uses determinants and is in the going-deeper section.

Only square matrices can have inverses. A 2 \times 3 matrix A cannot have one: a candidate B would have to be 3 \times 2, making AB a 2 \times 2 matrix and BA a 3 \times 3 matrix — so AB and BA can never equal the same identity matrix, as the definition requires.

Not every square matrix has an inverse. The matrix \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix} does not have one — no matter what 2 \times 2 matrix you multiply it by, you cannot get I. The condition that separates invertible matrices from non-invertible ones is the determinant.

The determinant test

A square matrix A is invertible if and only if \det A \neq 0.

Why? The short version: the inverse formula (coming next) has \det A in the denominator. If the determinant is zero, you are dividing by zero, and no inverse exists.

The deeper version: \det A = 0 means the rows of A are linearly dependent — one row is a combination of the others. Multiplication by A then collapses some dimension of space: different inputs produce the same output. An inverse would have to undo that collapse, but it cannot, because the lost information is gone. So no inverse exists.

A matrix with \det A = 0 is called singular. A matrix with \det A \neq 0 is called non-singular or invertible. This is the single most important dichotomy in the theory of matrices.
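A quick numerical check of the determinant test, using the singular matrix mentioned above — a sketch with numpy:

```python
import numpy as np

singular = np.array([[1.0, 2.0],
                     [2.0, 4.0]])   # second row = 2 * first row
regular  = np.array([[2.0, 1.0],
                     [5.0, 3.0]])

# det = 0 flags the singular matrix
assert np.isclose(np.linalg.det(singular), 0.0)
assert not np.isclose(np.linalg.det(regular), 0.0)

# Trying to invert the singular matrix raises LinAlgError
try:
    np.linalg.inv(singular)
    inverted = True
except np.linalg.LinAlgError:
    inverted = False
assert not inverted
```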

The adjoint

Before writing down the inverse formula, you need one building block: the adjoint of a matrix (also called the classical adjoint or adjugate — not to be confused with the conjugate transpose, which some advanced texts also call "adjoint").

Recall that the cofactor C_{ij} of the entry a_{ij} is (-1)^{i+j} times the minor M_{ij} (the determinant of the submatrix you get by deleting row i and column j).

Adjoint of a matrix

The adjoint of a square matrix A, written \text{adj}(A), is the transpose of the cofactor matrix:

[\text{adj}(A)]_{ij} = C_{ji}

That is: compute the cofactor of every entry, arrange them into a matrix, and then transpose the result.

The key property of the adjoint is:

A \cdot \text{adj}(A) = \text{adj}(A) \cdot A = (\det A) \, I

This identity holds for every square matrix, invertible or not. Here is why. The (i,j) entry of A \cdot \text{adj}(A) is

\sum_{k=1}^{n} a_{ik} \, [\text{adj}(A)]_{kj} = \sum_{k=1}^{n} a_{ik} \, C_{jk}

When i = j, this sum is \sum_k a_{ik} C_{ik} — the expansion of \det A along row i. So the diagonal entries are all \det A.

When i \neq j, the sum \sum_k a_{ik} C_{jk} is the expansion of a different determinant — one where row j has been replaced by a copy of row i. A determinant with two identical rows is zero (a basic property from the determinants article). So the off-diagonal entries are all zero.

Together: A \cdot \text{adj}(A) = (\det A) \, I. The same argument with columns gives \text{adj}(A) \cdot A = (\det A) \, I.
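The cofactor construction and the identity A \cdot \text{adj}(A) = (\det A)\, I can be verified numerically. A sketch — the helper names `cofactor` and `adjugate` are my own, not library routines:

```python
import numpy as np

def cofactor(A, i, j):
    """C_ij = (-1)^(i+j) times the minor: delete row i and column j."""
    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

def adjugate(A):
    """adj(A): transpose of the cofactor matrix, so [adj A]_ij = C_ji."""
    n = A.shape[0]
    C = np.array([[cofactor(A, i, j) for j in range(n)] for i in range(n)])
    return C.T

A = np.array([[1.0, 2.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# The identity holds entrywise: diagonal entries det A, off-diagonal zero
assert np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(3))
assert np.allclose(adjugate(A) @ A, np.linalg.det(A) * np.eye(3))
```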

The inverse formula

Now divide both sides of A \cdot \text{adj}(A) = (\det A) \, I by \det A — which you can do as long as \det A \neq 0:

A \cdot \frac{\text{adj}(A)}{\det A} = I

So \frac{\text{adj}(A)}{\det A} is the inverse of A. This is the adjoint formula for the inverse:

Inverse using the adjoint

For a non-singular matrix A (i.e., \det A \neq 0):

A^{-1} = \frac{1}{\det A} \, \text{adj}(A)

For a 2 \times 2 matrix A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, this simplifies to a formula worth memorising:

A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}

The recipe: swap the diagonal entries, negate the off-diagonal entries, divide by the determinant. This is fast enough to do in your head for 2 \times 2 matrices.
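The recipe translates into a few lines of code. A sketch (`inv2` is a name of my choosing):

```python
import numpy as np

def inv2(M):
    """2x2 inverse: swap diagonal, negate off-diagonal, divide by det."""
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("singular matrix: no inverse")
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
assert np.allclose(inv2(A) @ A, np.eye(2))
```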

Worked examples

Example 1: Inverse of a 2 x 2 matrix

Find the inverse of A = \begin{bmatrix} 2 & 1 \\ 5 & 3 \end{bmatrix}.

Step 1. Compute the determinant.

\det A = 2 \cdot 3 - 1 \cdot 5 = 6 - 5 = 1

Why: the inverse exists only if \det A \neq 0. Here \det A = 1, so the inverse exists — and the formula will be especially clean because dividing by 1 changes nothing.

Step 2. Apply the 2 \times 2 formula: swap diagonal, negate off-diagonal.

A^{-1} = \frac{1}{1} \begin{bmatrix} 3 & -1 \\ -5 & 2 \end{bmatrix} = \begin{bmatrix} 3 & -1 \\ -5 & 2 \end{bmatrix}

Why: in the 2 \times 2 case, \text{adj}(A) = \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}. So you swap 2 and 3 on the diagonal, and negate 1 and 5 off the diagonal.

Step 3. Verify: compute A A^{-1}.

\begin{bmatrix} 2 & 1 \\ 5 & 3 \end{bmatrix} \begin{bmatrix} 3 & -1 \\ -5 & 2 \end{bmatrix} = \begin{bmatrix} 6-5 & -2+2 \\ 15-15 & -5+6 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I

Why: verification is the ultimate check. If AA^{-1} = I, the inverse is correct. This step catches sign errors and arithmetic mistakes.

Step 4. Interpret geometrically. The matrix A transforms the unit square into a parallelogram. Since \det A = 1, this parallelogram has the same area as the original square — the transformation preserves area. The inverse A^{-1} maps the parallelogram back to the square.

Why: the determinant measures how a matrix scales area. When |\det A| = 1, no area is gained or lost, so the transformation is area-preserving.

Result: A^{-1} = \begin{bmatrix} 3 & -1 \\ -5 & 2 \end{bmatrix}.

Figure: the unit square and its image under A, with arrows marking A (forward) and A^{-1} (back).
The unit square (black) is mapped by $A$ to the red parallelogram with vertices at $(0,0)$, $(2,5)$, $(1,3)$, and $(3,8)$. Both shapes have the same area (because $\det A = 1$). The inverse $A^{-1}$ maps the parallelogram back to the unit square.

The picture shows why the inverse "undoes" the transformation. The matrix A stretches and shears the unit square into a parallelogram; A^{-1} reverses that stretching and shearing exactly.
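The geometric claims — the parallelogram's vertices, the preserved area, and the round trip back to the square — can all be checked numerically. A sketch:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])

# Columns are the corners of the unit square, as column vectors
square = np.array([[0.0, 1.0, 0.0, 1.0],
                   [0.0, 0.0, 1.0, 1.0]])

parallelogram = A @ square   # corners (0,0), (2,5), (1,3), (3,8)

# |det A| is the area-scaling factor; here it is 1, so area is preserved
assert np.isclose(abs(np.linalg.det(A)), 1.0)

# Applying A^{-1} maps the parallelogram back to the square exactly
assert np.allclose(np.linalg.inv(A) @ parallelogram, square)
```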

Example 2: Inverse of a 3 x 3 matrix using the adjoint

Find the inverse of A = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{bmatrix}.

Step 1. Compute \det A.

Expand along row 1:

\det A = 1(1 \cdot 1 - 1 \cdot 0) - 2(0 \cdot 1 - 1 \cdot 1) + 1(0 \cdot 0 - 1 \cdot 1)

= 1(1) - 2(-1) + 1(-1) = 1 + 2 - 1 = 2

Why: \det A = 2 \neq 0, so the inverse exists. The \frac{1}{\det A} factor will be \frac{1}{2}.

Step 2. Compute all nine cofactors.

C_{11} = +\begin{vmatrix} 1 & 1 \\ 0 & 1 \end{vmatrix} = 1, \quad C_{12} = -\begin{vmatrix} 0 & 1 \\ 1 & 1 \end{vmatrix} = -(0-1) = 1, \quad C_{13} = +\begin{vmatrix} 0 & 1 \\ 1 & 0 \end{vmatrix} = -1

C_{21} = -\begin{vmatrix} 2 & 1 \\ 0 & 1 \end{vmatrix} = -(2-0) = -2, \quad C_{22} = +\begin{vmatrix} 1 & 1 \\ 1 & 1 \end{vmatrix} = 0, \quad C_{23} = -\begin{vmatrix} 1 & 2 \\ 1 & 0 \end{vmatrix} = -(0-2) = 2

C_{31} = +\begin{vmatrix} 2 & 1 \\ 1 & 1 \end{vmatrix} = 1, \quad C_{32} = -\begin{vmatrix} 1 & 1 \\ 0 & 1 \end{vmatrix} = -(1-0) = -1, \quad C_{33} = +\begin{vmatrix} 1 & 2 \\ 0 & 1 \end{vmatrix} = 1

Why: each cofactor C_{ij} comes from the 2 \times 2 determinant left after deleting row i and column j, with the sign (-1)^{i+j}. All nine are needed for the adjoint.

Step 3. Form the adjoint (transpose of the cofactor matrix).

The cofactor matrix is \begin{bmatrix} 1 & 1 & -1 \\ -2 & 0 & 2 \\ 1 & -1 & 1 \end{bmatrix}. Its transpose is

\text{adj}(A) = \begin{bmatrix} 1 & -2 & 1 \\ 1 & 0 & -1 \\ -1 & 2 & 1 \end{bmatrix}

Why: the adjoint is the transpose of the cofactor matrix — the (i,j) entry of \text{adj}(A) is C_{ji}, not C_{ij}. Forgetting the transpose is one of the most common errors.

Step 4. Divide by the determinant.

A^{-1} = \frac{1}{2} \begin{bmatrix} 1 & -2 & 1 \\ 1 & 0 & -1 \\ -1 & 2 & 1 \end{bmatrix} = \begin{bmatrix} 1/2 & -1 & 1/2 \\ 1/2 & 0 & -1/2 \\ -1/2 & 1 & 1/2 \end{bmatrix}

Why: the formula A^{-1} = \frac{1}{\det A} \text{adj}(A) puts everything together. Every entry of the adjoint gets divided by 2.

Result: A^{-1} = \frac{1}{2}\begin{bmatrix} 1 & -2 & 1 \\ 1 & 0 & -1 \\ -1 & 2 & 1 \end{bmatrix}.
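The final assembly can be replayed numerically: take the adjoint from Step 3, divide by \det A = 2, and compare against the library inverse. A sketch:

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

adj = np.array([[ 1.0, -2.0,  1.0],   # the adjoint from Step 3
                [ 1.0,  0.0, -1.0],
                [-1.0,  2.0,  1.0]])

A_inv = adj / np.linalg.det(A)        # divide by det A = 2

# Matches the library inverse and satisfies A A^{-1} = I
assert np.allclose(A_inv, np.linalg.inv(A))
assert np.allclose(A @ A_inv, np.eye(3))
```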

Figure: the product A \cdot A^{-1} computed entry by entry — each diagonal entry equals 1 and each off-diagonal entry equals 0.
The product $A \cdot A^{-1}$ equals the identity matrix $I$, confirming the inverse is correct. For instance, the $(1,1)$ entry is $1 \cdot \frac{1}{2} + 2 \cdot \frac{1}{2} + 1 \cdot (-\frac{1}{2}) = \frac{1+2-1}{2} = 1$.

The verification is the moment of truth. Every row of A dotted with every column of A^{-1} must produce 1 on the diagonal and 0 off the diagonal. If even one entry is wrong, the entire inverse is wrong — so always check at least one row.

Properties of the inverse

These properties are used constantly and are worth knowing with their proofs.

1. The inverse is unique. If both B and C are inverses of A, then B = BI = B(AC) = (BA)C = IC = C. So B = C.

2. (A^{-1})^{-1} = A. Since A^{-1} A = I, the matrix A is an inverse of A^{-1}. By uniqueness, it is the inverse.

3. (AB)^{-1} = B^{-1} A^{-1}. Check: (B^{-1} A^{-1})(AB) = B^{-1}(A^{-1} A)B = B^{-1} I B = B^{-1} B = I. The order reverses — just as taking off your shoes and socks reverses the order of putting them on.

4. (A^T)^{-1} = (A^{-1})^T. Start from AA^{-1} = I. Transpose both sides: (A^{-1})^T A^T = I^T = I. So (A^{-1})^T is the inverse of A^T.

5. \det(A^{-1}) = \frac{1}{\det A}. From AA^{-1} = I, take determinants: \det A \cdot \det(A^{-1}) = \det I = 1. Divide by \det A.

6. (kA)^{-1} = \frac{1}{k} A^{-1} for any nonzero scalar k. Check: \frac{1}{k} A^{-1} \cdot kA = A^{-1} A = I.
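Each of these properties can be spot-checked on random non-singular matrices. A sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
# Random Gaussian matrices are non-singular with probability 1,
# but check anyway before inverting
assert abs(np.linalg.det(A)) > 1e-12 and abs(np.linalg.det(B)) > 1e-12

inv = np.linalg.inv

assert np.allclose(inv(inv(A)), A)                              # property 2
assert np.allclose(inv(A @ B), inv(B) @ inv(A))                 # property 3: order reverses
assert np.allclose(inv(A.T), inv(A).T)                          # property 4
assert np.isclose(np.linalg.det(inv(A)), 1 / np.linalg.det(A))  # property 5
assert np.allclose(inv(5 * A), inv(A) / 5)                      # property 6
```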

The elementary row operations method

The adjoint method works well for 2 \times 2 matrices and is practical for 3 \times 3. For larger matrices, or when you want a method that feels more mechanical, there is a second approach: Gauss-Jordan elimination.

The idea is simple. Write A and I side by side as an augmented matrix [A \mid I]. Now apply elementary row operations to transform the left half into I. Whatever those same operations do to the right half produces A^{-1}.

Why does this work? Each elementary row operation is equivalent to left-multiplying by a specific elementary matrix E_k. If a sequence of operations transforms A into I, then

E_m \cdots E_2 E_1 A = I

so E_m \cdots E_2 E_1 = A^{-1}. The same sequence applied to I gives

E_m \cdots E_2 E_1 I = A^{-1}

The right half of the augmented matrix records exactly these operations applied to I, so it ends up as A^{-1}.
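The claim that a row operation is the same as left-multiplication by an elementary matrix can be checked directly. A sketch, using the matrix from Example 2 and the operation R_3 \to R_3 - R_1:

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# Elementary matrix for R3 -> R3 - R1: start from I, put -1 in entry (3,1)
E = np.eye(3)
E[2, 0] = -1.0

# Performing the row operation by hand gives the same result as E @ A
expected = A.copy()
expected[2] = expected[2] - expected[0]
assert np.allclose(E @ A, expected)
```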

The three elementary row operations are:

  1. Swap two rows: R_i \leftrightarrow R_j
  2. Multiply a row by a nonzero scalar: R_i \to k R_i, k \neq 0
  3. Add a multiple of one row to another: R_i \to R_i + k R_j

Here is the method applied to the matrix from Example 2.

\left[\begin{array}{ccc|ccc} 1 & 2 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 & 0 & 1 \end{array}\right]

Operation 1: R_3 \to R_3 - R_1 (eliminate the 1 in position (3,1)):

\left[\begin{array}{ccc|ccc} 1 & 2 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 \\ 0 & -2 & 0 & -1 & 0 & 1 \end{array}\right]

Operation 2: R_3 \to R_3 + 2R_2 (eliminate the -2 in position (3,2)):

\left[\begin{array}{ccc|ccc} 1 & 2 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 \\ 0 & 0 & 2 & -1 & 2 & 1 \end{array}\right]

Operation 3: R_3 \to \frac{1}{2} R_3 (make the pivot 1):

\left[\begin{array}{ccc|ccc} 1 & 2 & 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & -1/2 & 1 & 1/2 \end{array}\right]

Operation 4: R_2 \to R_2 - R_3 (eliminate the 1 above the (3,3) pivot):

\left[\begin{array}{ccc|ccc} 1 & 2 & 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 1/2 & 0 & -1/2 \\ 0 & 0 & 1 & -1/2 & 1 & 1/2 \end{array}\right]

Operation 5: R_1 \to R_1 - R_3 (eliminate the 1 in position (1,3)):

\left[\begin{array}{ccc|ccc} 1 & 2 & 0 & 3/2 & -1 & -1/2 \\ 0 & 1 & 0 & 1/2 & 0 & -1/2 \\ 0 & 0 & 1 & -1/2 & 1 & 1/2 \end{array}\right]

Operation 6: R_1 \to R_1 - 2R_2 (eliminate the 2 in position (1,2)):

\left[\begin{array}{ccc|ccc} 1 & 0 & 0 & 1/2 & -1 & 1/2 \\ 0 & 1 & 0 & 1/2 & 0 & -1/2 \\ 0 & 0 & 1 & -1/2 & 1 & 1/2 \end{array}\right]

The left half is now I, so the right half is A^{-1}:

A^{-1} = \begin{bmatrix} 1/2 & -1 & 1/2 \\ 1/2 & 0 & -1/2 \\ -1/2 & 1 & 1/2 \end{bmatrix}

This matches the answer from the adjoint method, as it must.
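The whole procedure mechanizes into a short function: augment with I, then use the three row operations to reduce the left half to I, one pivot column at a time. A sketch (`gauss_jordan_inverse` is a hypothetical helper of my own, not a library routine; it adds partial pivoting for numerical robustness):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by row-reducing the augmented matrix [A | I]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])  # [A | I]
    for col in range(n):
        # Partial pivoting: swap in the largest available pivot (operation 1)
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("matrix is singular")
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]                    # scale pivot row to 1 (operation 2)
        for row in range(n):                     # clear the rest of the column (operation 3)
            if row != col:
                M[row] -= M[row, col] * M[col]
    return M[:, n:]                              # right half is now A^{-1}

A = np.array([[1.0, 2.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
assert np.allclose(gauss_jordan_inverse(A), np.linalg.inv(A))
```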

When to use which method. The adjoint method is good for 2 \times 2 (fast, almost instant) and 3 \times 3 (systematic, and the cofactors are useful for other things too). The row-reduction method is better for 4 \times 4 and above, because computing cofactors of large matrices is tedious. For 3 \times 3 matrices in an exam setting, either method works — pick whichever you are more comfortable with.

Going deeper

If you can compute the inverse of a 2 \times 2 or 3 \times 3 matrix using either method, you have what you need for most problems. The material below covers the proof that one-sided inverses imply two-sided inverses, and a useful determinant identity.

Why AB = I implies BA = I for square matrices

For non-square matrices this is false: you can have AB = I without BA even being the same size. But for square matrices, if AB = I, then \det A \cdot \det B = \det I = 1, so \det A \neq 0. This means A is non-singular and has an inverse A^{-1} by the adjoint formula. Now:

AB = I \implies A^{-1}(AB) = A^{-1} \implies B = A^{-1}

So BA = A^{-1} A = I. The one-sided condition implies the two-sided condition, but only because the determinant argument guarantees non-singularity.

The Cayley-Hamilton connection

Every n \times n matrix satisfies its own characteristic equation. For a 2 \times 2 matrix A with trace \text{tr}(A) and determinant \det A, the Cayley-Hamilton theorem gives:

A^2 - \text{tr}(A) \, A + \det(A) \, I = O

If \det A \neq 0, you can rearrange:

A^{-1} = \frac{1}{\det A}\left(\text{tr}(A) \, I - A\right)

This gives a third route to the inverse of a 2 \times 2 matrix — one that requires neither cofactors nor row reduction, just the trace and determinant. For 3 \times 3 matrices, the Cayley-Hamilton theorem gives a similar (longer) formula involving A^2, A, and I.
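Both the Cayley-Hamilton identity and the rearranged inverse formula check out numerically on the matrix from Example 1. A sketch:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
tr, det = np.trace(A), np.linalg.det(A)

# Cayley-Hamilton for 2x2: A^2 - tr(A) A + det(A) I = O
assert np.allclose(A @ A - tr * A + det * np.eye(2), np.zeros((2, 2)))

# Rearranged: A^{-1} = (tr(A) I - A) / det(A)
A_inv = (tr * np.eye(2) - A) / det
assert np.allclose(A_inv, np.linalg.inv(A))
```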

Inverses and linear transformations

A matrix represents a linear transformation — a function from \mathbb{R}^n to \mathbb{R}^n that preserves addition and scalar multiplication. The inverse matrix represents the inverse function. The transformation is invertible exactly when it is both one-to-one (no two inputs produce the same output) and onto (every output is achieved by some input). The determinant test \det A \neq 0 is really testing both of these conditions at once.

Where this leads next