In short
A system of linear equations is consistent if it has at least one solution (exactly one, or infinitely many) and inconsistent if it has none. The rank of a matrix — the number of nonzero rows after reducing to echelon form — is the tool that decides which case you are in. For the system AX = B: if \text{rank}(A) = \text{rank}([A \mid B]) = n (the number of unknowns), there is a unique solution. If \text{rank}(A) = \text{rank}([A \mid B]) < n, there are infinitely many. If \text{rank}(A) < \text{rank}([A \mid B]), there are none.
Three shopkeepers in a small town each sell the same three items — pens, notebooks, and erasers — in different bundles. A customer records the total cost of each shopkeeper's bundle:
Here x, y, z are the prices of a pen, notebook, and eraser. Three equations, three unknowns. From your earlier work with matrices, you know to write this as AX = B and check whether \det A \neq 0. If the determinant is nonzero, A^{-1} exists and you can compute X = A^{-1}B. Problem solved.
But what happens when \det A = 0? The previous article on systems of linear equations left this question partly open: the system might have infinitely many solutions, or it might have no solutions at all. The determinant alone cannot tell you which.
That is the gap this article fills. The tool you need is called the rank of a matrix, and it settles every case cleanly.
Echelon form: organising the matrix
Before defining rank, you need a way to simplify a matrix into a standard shape — the same way you simplify an equation before solving it. That standard shape is called row echelon form.
Row echelon form
A matrix is in row echelon form if:
- All rows consisting entirely of zeros are at the bottom.
- In each nonzero row, the first nonzero entry (called the pivot or leading entry) is to the right of the pivot in the row above.
- Every entry below a pivot is zero.
Here is what this looks like concretely. The matrix
is in echelon form — the pivots are 2, 4, 5, and each one sits strictly to the right and below the previous one. It looks like a staircase descending from left to right.
The matrix
is also in echelon form — the pivots are 1 and 7, the all-zero row is at the bottom, and each pivot is to the right of the one above it.
You reach echelon form by applying elementary row operations — the same operations you already know from computing inverses and solving systems:
- Swap two rows (R_i \leftrightarrow R_j).
- Scale a row by a nonzero constant (R_i \to kR_i, k \neq 0).
- Add a multiple of one row to another (R_i \to R_i + kR_j).
These operations do not change the solution set of the system — they are rearrangements that preserve all the information. The process of applying them systematically to reach echelon form is called Gaussian elimination.
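These operations are enough to mechanise the whole process. Below is a minimal sketch of Gaussian elimination in Python, using exact fractions to avoid floating-point rounding; the function names are my own, not from any library:

```python
from fractions import Fraction

def row_echelon(rows):
    """Reduce a matrix (a list of rows) to a row echelon form using
    elementary row operations: swaps and adding multiples of rows."""
    m = [[Fraction(x) for x in row] for row in rows]
    n_rows, n_cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(n_cols):
        # Find a row at or below pivot_row with a nonzero entry in this column.
        pivot = next((r for r in range(pivot_row, n_rows) if m[r][col] != 0), None)
        if pivot is None:
            continue  # no pivot in this column; move right
        m[pivot_row], m[pivot] = m[pivot], m[pivot_row]  # swap it into place
        for r in range(pivot_row + 1, n_rows):
            # Clear every entry below the pivot.
            factor = m[r][col] / m[pivot_row][col]
            m[r] = [a - factor * b for a, b in zip(m[r], m[pivot_row])]
        pivot_row += 1
    return m

def rank(rows):
    """Rank = number of nonzero rows in the echelon form."""
    return sum(1 for row in row_echelon(rows) if any(x != 0 for x in row))

print(rank([[1, 2, 3], [2, 4, 6], [1, 1, 1]]))  # -> 2: one row vanishes
```

Each pass of the outer loop builds one step of the staircase: pick a pivot, swap it up, and zero out everything beneath it.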
Rank of a matrix
Rank
The rank of a matrix A, written \text{rank}(A) or \rho(A), is the number of nonzero rows in any row echelon form of A.
The rank does not depend on which echelon form you arrive at — different sequences of row operations may produce different echelon forms, but they all have the same number of nonzero rows.
Take a 3 \times 3 matrix. If its echelon form has three nonzero rows, the rank is 3. If one row vanishes (becomes all zeros), the rank is 2. If two rows vanish, the rank is 1.
The rank captures how many "independent" equations you really have. If you start with three equations but one of them turns out to be a combination of the other two, the echelon form will show this: that dependent equation becomes a row of zeros. The rank drops from 3 to 2, telling you that you really only have two constraints on three unknowns — which means you cannot pin down a unique answer.
Computing rank: a worked reduction
Take the matrix
Step 1. Eliminate below the first pivot. Apply R_2 \to R_2 - 2R_1 and R_3 \to R_3 - R_1:
Row 2 has vanished entirely — the second equation was just twice the first.
Step 2. Swap R_2 and R_3 to put the zero row at the bottom:
This is in echelon form. There are 2 nonzero rows, so \text{rank}(A) = 2.
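A reduction like this can be cross-checked mechanically. The matrix below is a hypothetical stand-in with the same dependency pattern as the example (its second row is twice its first), reduced here with sympy:

```python
from sympy import Matrix

# Hypothetical 3x3 matrix whose second row is twice its first,
# mirroring the dependency uncovered in the reduction above.
A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 1, 1]])

echelon, pivots = A.rref()  # reduced row echelon form and pivot columns
print(A.rank())             # -> 2: one row vanished during reduction
print(pivots)               # -> (0, 1): pivots in columns 1 and 2
```

`rref()` goes one step further than echelon form (every pivot scaled to 1, entries above pivots cleared), but the count of nonzero rows, and hence the rank, is the same.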
The consistency theorem
Now you have the language to state the complete rule. Given a system AX = B with n unknowns, form the augmented matrix [A \mid B] by attaching the column B to the right of A.
Conditions for consistency (Rouché–Capelli theorem)
For the system AX = B with n unknowns:
- Unique solution: \text{rank}(A) = \text{rank}([A \mid B]) = n.
- Infinitely many solutions: \text{rank}(A) = \text{rank}([A \mid B]) < n.
- No solution (inconsistent): \text{rank}(A) < \text{rank}([A \mid B]).
The logic behind these three cases is direct.
Case 1 means every equation is independent and every unknown is pinned down. The echelon form has a pivot in every column — you can back-substitute and get exactly one value for each unknown.
Case 2 means some equations are redundant (they are combinations of others), so you have fewer constraints than unknowns. The "extra" unknowns become free variables — they can take any value, and the other unknowns adjust accordingly. This gives infinitely many solutions, parametrised by the free variables.
Case 3 is the strange one. The rank of [A \mid B] being larger than the rank of A means that the augmented matrix has a nonzero row of the form [0 \; 0 \; \cdots \; 0 \mid k] with k \neq 0. That row says 0 = k — a contradiction. No values of the unknowns can satisfy it. The system is inconsistent.
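The three cases translate directly into a short classifier. This sketch uses sympy, with three hypothetical 2-unknown systems, one per case:

```python
from sympy import Matrix

def classify(A, B):
    """Classify AX = B by comparing rank(A), rank([A|B]), and n."""
    aug = A.row_join(B)                      # augmented matrix [A | B]
    r_A, r_aug, n = A.rank(), aug.rank(), A.cols
    if r_A < r_aug:
        return "no solution"                 # Case 3: contradictory row
    if r_A == n:
        return "unique solution"             # Case 1: pivot in every column
    return "infinitely many solutions"       # Case 2: free variables remain

# One hypothetical system per case:
print(classify(Matrix([[1, 0], [0, 1]]), Matrix([1, 2])))  # unique
print(classify(Matrix([[1, 1], [2, 2]]), Matrix([3, 6])))  # infinitely many
print(classify(Matrix([[1, 1], [2, 2]]), Matrix([3, 7])))  # no solution
```

The middle two calls use the same coefficient matrix; only the right-hand side decides between infinitely many solutions and none, which is exactly why the rank of the augmented matrix matters.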
Worked examples of all three cases
Example 1: A system with a unique solution
Solve:
Step 1. Write the augmented matrix.
Why: putting the system into augmented-matrix form lets you apply row operations to all four columns simultaneously, keeping the equations in sync.
Step 2. Apply R_2 \to R_2 - 2R_1 and R_3 \to R_3 - R_1:
Why: eliminating the x-terms from rows 2 and 3 starts the staircase. The first pivot is now sitting alone in column 1.
Step 3. Apply R_3 \to R_3 - R_2:
Why: this completes the echelon form. Three nonzero rows, three unknowns.
Step 4. Read off the ranks: \text{rank}(A) = 3, \text{rank}([A \mid B]) = 3, and n = 3. The condition \text{rank}(A) = \text{rank}([A \mid B]) = n is met. Unique solution.
Back-substitute: from row 3, 3z = 6, so z = 2. From row 2, y - 2 = 2, so y = 4. From row 1, x + 4 + 2 = 6, so x = 0.
Result: (x, y, z) = (0, 4, 2).
The three pivots — one in each column — locked every unknown into a single value. No freedom, no contradiction. That is what \text{rank}(A) = n means geometrically: n independent constraints pinning down n unknowns.
Example 2: An inconsistent system (no solution)
Determine whether the following system is consistent:
Step 1. Write the augmented matrix.
Why: you need both A and [A \mid B] in echelon form to compare ranks.
Step 2. Apply R_2 \to R_2 - 2R_1 and R_3 \to R_3 - R_1:
Why: the second row of A was exactly twice the first, so the A-entries vanish. But the right-hand side did not: 12 - 2 \times 4 = 4 \neq 0. That surviving 4 is a warning sign.
Step 3. Swap R_2 and R_3:
Why: putting the nonzero row above the zero row puts the matrix in echelon form.
Step 4. Read off the ranks. In the coefficient part (the first three columns), there are 2 nonzero rows, so \text{rank}(A) = 2. In the full augmented matrix, row 3 reads [0 \;\; 0 \;\; 0 \mid 4] — the row is nonzero because of the 4 on the right, so \text{rank}([A \mid B]) = 3.
Since \text{rank}(A) = 2 < 3 = \text{rank}([A \mid B]), the system is inconsistent.
Result: No solution exists.
The second equation (2x + 4y + 6z = 12) looked like it should be compatible with the first (x + 2y + 3z = 4) because the coefficients are proportional — the left side of equation 2 is just twice the left side of equation 1. But the right side is 12, not 2 \times 4 = 8. The equations demand that the same expression simultaneously equal 4 and 6. That is why no solution exists.
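Restricting attention to the two proportional equations quoted above, the rank gap is easy to exhibit with sympy:

```python
from sympy import Matrix

# The two proportional equations from the example:
# x + 2y + 3z = 4  and  2x + 4y + 6z = 12.
A = Matrix([[1, 2, 3],
            [2, 4, 6]])
B = Matrix([4, 12])
aug = A.row_join(B)

print(A.rank())    # -> 1: the rows of A are proportional
print(aug.rank())  # -> 2: the contradictory row [0 0 0 | 4] survives
```

Since \text{rank}(A) = 1 < 2 = \text{rank}([A \mid B]), the two equations alone are already inconsistent.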
The special case: homogeneous systems
A system AX = 0 — where the right-hand side is all zeros — is called a homogeneous system. These systems have a useful property that non-homogeneous systems lack:
A homogeneous system is always consistent. The trivial solution x_1 = x_2 = \cdots = x_n = 0 always works. The only question is whether there are also non-trivial solutions — solutions where at least one variable is nonzero.
The consistency theorem simplifies. Since B = 0, the augmented matrix [A \mid 0] can never produce a row of the form [0 \; 0 \; \cdots \; 0 \mid k] with k \neq 0 — the last column is all zeros. So \text{rank}(A) always equals \text{rank}([A \mid 0]), and Case 3 (inconsistency) is impossible. What remains:
- If \text{rank}(A) = n: only the trivial solution X = 0.
- If \text{rank}(A) < n: infinitely many solutions, including non-trivial ones. There are n - \text{rank}(A) free variables.
A particularly important consequence: if the number of equations is fewer than the number of unknowns, then \text{rank}(A) is at most the number of equations, so \text{rank}(A) < n and the homogeneous system automatically has non-trivial solutions. The same conclusion holds for a square system whose matrix is singular, since then \text{rank}(A) < n as well.
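This can be seen concretely with a hypothetical underdetermined homogeneous system of 2 equations in 3 unknowns, using sympy:

```python
from sympy import Matrix

# Hypothetical homogeneous system: 2 equations, 3 unknowns.
# x + y + z = 0  and  x + 2y + 3z = 0.
A = Matrix([[1, 1, 1],
            [1, 2, 3]])

print(A.rank())              # -> 2, necessarily < n = 3
null_basis = A.nullspace()   # basis of the solution set of AX = 0
print(len(null_basis))       # -> 1: one free variable
print(A * null_basis[0])     # the zero vector: a genuine non-trivial solution
```

With only two constraints on three unknowns, a one-parameter family of non-trivial solutions is guaranteed before doing any arithmetic.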
Example: a homogeneous system with non-trivial solutions
Take the system
Every equation here is a multiple of the first: equation 2 is 2 \times equation 1, equation 3 is 3 \times equation 1. The augmented matrix reduces in one step:
The rank is 1, with n = 3 unknowns. So there are 3 - 1 = 2 free variables. Set y = s and z = t (where s and t are any real numbers), and from row 1: x = -2s - 3t.
The solution set is (x, y, z) = s(-2, 1, 0) + t(-3, 0, 1), with s and t ranging over all real numbers.
This is a plane through the origin in three-dimensional space. The two vectors (-2, 1, 0) and (-3, 0, 1) span this plane, and every point on it satisfies all three equations.
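The rank and the spanning vectors of this example can be verified with sympy:

```python
from sympy import Matrix

# Every equation is a multiple of x + 2y + 3z = 0.
A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [3, 6, 9]])

print(A.rank())          # -> 1
basis = A.nullspace()    # two vectors spanning the solution plane
print([v.T for v in basis])

# The spanning vectors (-2, 1, 0) and (-3, 0, 1) solve all three equations:
for v in (Matrix([-2, 1, 0]), Matrix([-3, 0, 1])):
    assert A * v == Matrix([0, 0, 0])
```

With rank 1 and n = 3, the nullspace has dimension 3 - 1 = 2: exactly the two free variables found by hand.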
Non-homogeneous systems with infinitely many solutions
When the system AX = B (with B \neq 0) has infinitely many solutions, the solution set has a specific structure: it is a particular solution plus the general solution of the associated homogeneous system AX = 0.
Take the system
The second equation is twice the first — redundant. The augmented matrix reduces to
Rank of A is 1, rank of [A \mid B] is 1, n = 3. So \text{rank}(A) = \text{rank}([A \mid B]) = 1 < 3 = n. Infinitely many solutions, with 3 - 1 = 2 free variables.
Set y = s, z = t. Then x = 3 - s - t, so (x, y, z) = (3, 0, 0) + s(-1, 1, 0) + t(-1, 0, 1).
The first vector (3, 0, 0) is a particular solution (any one specific solution works). The remaining part is the general solution of x + y + z = 0 — the homogeneous version. Geometrically, the solution set is a plane in 3D space, but shifted away from the origin by the particular solution.
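The "particular plus homogeneous" structure can be checked directly with sympy, using the system x + y + z = 3 together with its redundant doubled copy:

```python
from sympy import Matrix

# x + y + z = 3, plus the redundant equation 2x + 2y + 2z = 6.
A = Matrix([[1, 1, 1],
            [2, 2, 2]])
B = Matrix([3, 6])

particular = Matrix([3, 0, 0])   # the particular solution (3, 0, 0)
assert A * particular == B

# Adding any solution of AX = 0 to the particular solution
# yields another solution of AX = B.
for v in A.nullspace():
    assert A * (particular + v) == B

print(A.rank(), len(A.nullspace()))  # -> 1 2: two free variables
```

The asserts pass because A(X_p + X_h) = AX_p + AX_h = B + 0 = B, which is the whole content of the structure theorem.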
The rank–determinant connection
For a square matrix A of order n, the rank gives you an alternative way to state a fact you already know from determinants:
If \text{rank}(A) < n, then \det A = 0 and A is singular. The echelon form has at least one zero row, which means at least one row of the original matrix was a linear combination of the others: the rows are not independent. Conversely, \text{rank}(A) = n exactly when \det A \neq 0, so for square matrices full rank and invertibility are the same condition.
For non-square systems (more equations than unknowns, or fewer), the determinant is not defined, but the rank still works. The rank is the more general tool; the determinant is a special case that applies only to square matrices.
Common confusions
- "If the determinant is zero, there is no solution." Not necessarily. \det A = 0 means the system does not have a unique solution — but it might still have infinitely many solutions. The determinant alone cannot distinguish between the "infinite" and "no solution" cases; you need the rank of the augmented matrix for that.
- "A system with more equations than unknowns is always inconsistent." Not true. Extra equations that happen to be compatible (linear combinations of the others) do not create contradictions. A system of 5 equations in 3 unknowns can have a unique solution if 2 of the equations are redundant and the remaining 3 are independent and consistent.
- "A homogeneous system always has only the trivial solution." Only when \text{rank}(A) = n. When the rank is less than n, there are infinitely many non-trivial solutions. For example, x + y = 0 has infinitely many solutions: (1, -1), (2, -2), etc.
- "Echelon form is unique." Row echelon form is not unique — different sequences of row operations may produce different echelon forms. What is unique is the reduced row echelon form (where every pivot is 1 and every entry above and below each pivot is 0), but the rank is the same for all echelon forms of the same matrix.
- "Rank is a property of the augmented matrix only." The rank of A and the rank of [A \mid B] are two different numbers, and both matter. The consistency test compares them. Always compute both.
Going deeper
If you came here to learn how to check whether a system has a solution and how many, you have it — you can stop here. The rest of this section covers the formal relationship between rank and solutions in more detail, the connection to the column space and null space, and a complete worked example with three equations and three unknowns that has infinitely many solutions.
Rank and the structure of solutions
The number of free variables in the solution set of AX = B (when it is consistent) is exactly n - \text{rank}(A). This number is sometimes called the nullity of A, and the equation
\text{rank}(A) + \text{nullity}(A) = n
is called the rank–nullity theorem. It says that the unknowns divide into two groups: those pinned down by the equations (as many as the rank) and those free to roam (the nullity). Every extra independent equation removes one degree of freedom.
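The rank–nullity relation holds for matrices of any shape, which a quick sympy loop over a few hypothetical matrices confirms:

```python
from sympy import Matrix

# Check rank(A) + nullity(A) = n on matrices of different shapes.
for A in (Matrix([[1, 2], [3, 4]]),           # full rank, nullity 0
          Matrix([[1, 2, 3], [2, 4, 6]]),     # rank 1, nullity 2
          Matrix([[1, 0], [0, 1], [1, 1]])):  # tall matrix, n = 2
    nullity = len(A.nullspace())              # dimension of solutions of AX = 0
    assert A.rank() + nullity == A.cols
    print(A.rank(), nullity, A.cols)
```

Here n is the number of columns of A (the number of unknowns), regardless of how many equations there are.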
A complete 3 \times 3 case with infinitely many solutions
Solve the system
Form the augmented matrix and reduce:
R_2 \to R_2 - 2R_1, R_3 \to R_3 - 3R_1:
R_3 \to R_3 - R_2:
\text{rank}(A) = 2, \text{rank}([A \mid B]) = 2, n = 3. So the system is consistent with 3 - 2 = 1 free variable.
Column 2 has no pivot, so y is the free variable. Set y = t.
From row 2: z = 1.
From row 1: x + 2t + 3 = 5, so x = 2 - 2t.
The solution is (x, y, z) = (2 - 2t, t, 1) = (2, 0, 1) + t(-2, 1, 0), for any real t.
Geometrically, this is a line in three-dimensional space — a one-parameter family of solutions, each one valid.
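The parametric line can be checked symbolically against the echelon rows read off in the example, x + 2y + 3z = 5 and z = 1 (row 2 is written here as [0 \; 0 \; 1 \mid 1], one possible scaling):

```python
from sympy import Matrix, symbols

t = symbols('t')

# Echelon rows from the example: x + 2y + 3z = 5 and z = 1.
aug = Matrix([[1, 2, 3, 5],
              [0, 0, 1, 1]])
A, B = aug[:, :3], aug[:, 3]

# The one-parameter family (x, y, z) = (2 - 2t, t, 1):
X = Matrix([2 - 2*t, t, 1])
residual = (A * X - B).expand()
print(residual)  # zero residual for every value of t
```

The residual vanishes identically in t, confirming that every point on the line satisfies both pivot equations.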
Using the determinant as a quick first test
For a square system AX = B with n equations in n unknowns:
- Compute \det A.
- If \det A \neq 0: unique solution (no further work needed for the consistency question; rank is n).
- If \det A = 0: rank is less than n. Now you must form [A \mid B] and reduce to echelon form to decide between infinite and no solution.
The determinant is a shortcut for Case 1. For Cases 2 and 3, you need the full rank comparison.
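The two-step procedure above fits in a few lines of sympy; the example systems are hypothetical:

```python
from sympy import Matrix

def quick_test(A, B):
    """Use det as a first filter, then fall back to the rank comparison."""
    if A.det() != 0:
        return "unique solution"           # Case 1: no reduction needed
    aug = A.row_join(B)                    # det A = 0: rank(A) < n
    if A.rank() < aug.rank():
        return "no solution"               # Case 3
    return "infinitely many solutions"     # Case 2

print(quick_test(Matrix([[2, 1], [1, 1]]), Matrix([3, 2])))  # det = 1
print(quick_test(Matrix([[1, 2], [2, 4]]), Matrix([3, 6])))  # det = 0, consistent
print(quick_test(Matrix([[1, 2], [2, 4]]), Matrix([3, 7])))  # det = 0, inconsistent
```

Note that `det` only applies because A is square; for non-square systems the function would have to start at the rank comparison.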
Where this leads next
You now have a complete toolkit for deciding the consistency of any linear system and describing its solution set. The next set of ideas extends this in several directions.
- Eigenvalues and Cayley-Hamilton — the matrix equation AX = \lambda X is a homogeneous system (A - \lambda I)X = 0, and asking for non-trivial solutions leads to the characteristic equation.
- Properties of Determinants — the determinant shortcuts that let you check invertibility without row-reducing.
- Systems of Linear Equations — the substitution and elimination methods from before matrices.
- Inverse of Matrix — how to compute A^{-1} when it exists, using the adjoint or row reduction.
- Special Matrices — symmetric, skew-symmetric, orthogonal, and idempotent matrices, and what their ranks and eigenvalues look like.