In short
For a 2×2 system, substitution and elimination are perfectly clean — you express one variable, plug in, done in a few lines. Matrix methods would be overkill. But the moment you hit a 3×3 system, substitution forces you to express one variable in terms of two others, then plug into two remaining equations, then repeat the chain. The algebra balloons. Matrix methods (Gaussian elimination, Cramer's rule, the matrix inverse) are uniform algorithms — same recipe regardless of size. For 3 variables they are competitive with substitution; for 4 or more they are the only sane choice. This is exactly the transition CBSE Class 12 trains you for in the Matrices and Determinants chapters.
You have already met substitution and elimination in Class 10 for a pair of equations in two unknowns. They feel almost playful — pick the easiest variable, isolate it, plug it in, solve. Two minutes of work.
Now imagine your physics teacher hands you a circuit problem with three loop currents, or your economics teacher gives you three commodities with three pricing constraints. Suddenly you have three equations in three unknowns. You instinctively reach for substitution — and within five minutes your page is a swamp of fractions.
This is not a personal failing. It is a property of the method. Substitution scales badly. Matrix methods scale gracefully. Knowing exactly when to switch is one of the most useful pieces of mathematical taste you can develop.
The 2×2 case: substitution wins
For a system like

\begin{aligned}
2x + 3y &= 13,\\
x - y &= 1,
\end{aligned}
substitution is almost embarrassingly easy. From the second equation, x = y + 1. Plug into the first: 2(y+1) + 3y = 13 \implies 5y = 11 \implies y = 11/5, then x = 16/5.
Three lines. No bookkeeping. No matrix needed.
If you tried Cramer's rule here, you would write a 2 \times 2 determinant, compute it, then two more determinants for x and y. You would arrive at the same answer with strictly more writing. Why: matrix methods carry overhead — setting up the augmented matrix, doing row operations, tracking signs. That overhead is fixed; substitution's cost is small for tiny systems. Below the crossover point, simpler wins.
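To make that overhead concrete, here is a minimal Python sketch of Cramer's rule on this 2×2 system, using the standard-library fractions module to keep the arithmetic exact. It reaches the same x = 16/5, y = 11/5 with strictly more setup than the substitution above.

```python
from fractions import Fraction

# Cramer's rule on  2x + 3y = 13,  x - y = 1.
# Each 2x2 determinant is ad - bc; the bookkeeping below is the
# fixed overhead the text mentions.
a, b, e = 2, 3, 13   # first equation:  ax + by = e
c, d, f = 1, -1, 1   # second equation: cx + dy = f

D  = a * d - b * c   # coefficient determinant: 2*(-1) - 3*1 = -5
Dx = e * d - b * f   # x-column replaced by (e, f): 13*(-1) - 3*1 = -16
Dy = a * f - e * c   # y-column replaced by (e, f): 2*1 - 13*1 = -11

x, y = Fraction(Dx, D), Fraction(Dy, D)
print(x, y)          # 16/5 11/5 -- same answer, more writing
```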
The 3×3 case: matrix methods catch up — and pull ahead
Now consider three equations in three unknowns. The classic example:

\begin{aligned}
x + y + z &= 6,\\
x + 2y + 3z &= 14,\\
x + 4y + 9z &= 36.
\end{aligned}
Try substitution first, in your head. From the first equation, x = 6 - y - z. You now have to plug this into two other equations, getting two new equations in y and z. Then you need to repeat the trick — express y in terms of z, plug into the remaining equation. The algebra is doable but every step risks a sign error or a dropped coefficient.
Matrix methods, by contrast, are mechanical. You write the augmented matrix once, then push rows around following a fixed recipe. Let's see both approaches and compare.
Cramer's rule on the 3×3 system
Define the coefficient matrix A and its determinant D:

A = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 1 & 4 & 9 \end{pmatrix}, \qquad D = \det A.
Expand along the first row:

D = 1(2 \cdot 9 - 3 \cdot 4) - 1(1 \cdot 9 - 3 \cdot 1) + 1(1 \cdot 4 - 2 \cdot 1) = 6 - 6 + 2 = 2.
For D_x, replace the first column with the right-hand side (6, 14, 36):

D_x = \begin{vmatrix} 6 & 1 & 1 \\ 14 & 2 & 3 \\ 36 & 4 & 9 \end{vmatrix} = 6(18 - 12) - 1(126 - 108) + 1(56 - 72) = 36 - 18 - 16 = 2.
For D_y, replace the second column:

D_y = \begin{vmatrix} 1 & 6 & 1 \\ 1 & 14 & 3 \\ 1 & 36 & 9 \end{vmatrix} = 1(126 - 108) - 6(9 - 3) + 1(36 - 14) = 18 - 36 + 22 = 4.
For D_z:

D_z = \begin{vmatrix} 1 & 1 & 6 \\ 1 & 2 & 14 \\ 1 & 4 & 36 \end{vmatrix} = 1(72 - 56) - 1(36 - 14) + 6(4 - 2) = 16 - 22 + 12 = 6.
Cramer's rule gives x = D_x/D = 2/2 = 1, y = D_y/D = 4/2 = 2, z = D_z/D = 6/2 = 3. Solution: (1, 2, 3). Check in the first equation: 1 + 2 + 3 = 6. ✓
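If you want to check the determinant arithmetic by machine, here is a short sketch using NumPy's det routine (assuming NumPy is available). It builds each replaced determinant by copying A and overwriting one column with the right-hand side, exactly as Cramer's rule prescribes.

```python
import numpy as np

A = np.array([[1.0, 1, 1],
              [1, 2, 3],
              [1, 4, 9]])
b = np.array([6.0, 14, 36])

D = np.linalg.det(A)            # expect 2
solution = []
for col in range(3):
    Ai = A.copy()
    Ai[:, col] = b              # replace column `col` with the RHS
    solution.append(np.linalg.det(Ai) / D)

print(np.round(solution, 10))   # [1. 2. 3.]
```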
The same system by Gaussian elimination
Write the augmented matrix:

\left[\begin{array}{ccc|c} 1 & 1 & 1 & 6 \\ 1 & 2 & 3 & 14 \\ 1 & 4 & 9 & 36 \end{array}\right]
Step 1: R_2 \to R_2 - R_1, R_3 \to R_3 - R_1 (kill the x in rows 2 and 3):

\left[\begin{array}{ccc|c} 1 & 1 & 1 & 6 \\ 0 & 1 & 2 & 8 \\ 0 & 3 & 8 & 30 \end{array}\right]
Step 2: R_3 \to R_3 - 3R_2 (kill the y in row 3):

\left[\begin{array}{ccc|c} 1 & 1 & 1 & 6 \\ 0 & 1 & 2 & 8 \\ 0 & 0 & 2 & 6 \end{array}\right]
Back-substitute from the bottom: 2z = 6 \implies z = 3. Then y + 2(3) = 8 \implies y = 2. Then x + 2 + 3 = 6 \implies x = 1. Same answer, fewer total operations, and at no point did you have to invent a substitution — the recipe told you what to do at every step. Why uniform algorithms scale: at every row, the operation is "subtract a multiple of the pivot row". You never have to think about which variable to isolate. A computer can run this with no creativity at all — and so can you, on autopilot, after twenty minutes of practice.
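The recipe is short enough to write down directly. Below is a minimal Python sketch of forward elimination plus back-substitution on this exact augmented matrix, with exact fractions so no rounding intrudes; it is an illustration of the idea, not production code.

```python
from fractions import Fraction

# Augmented matrix [A | b] for the system, in exact arithmetic.
M = [[Fraction(v) for v in row] for row in
     [[1, 1, 1,  6],
      [1, 2, 3, 14],
      [1, 4, 9, 36]]]

n = 3
# Forward elimination: the same move at every step --
# subtract a multiple of the pivot row to zero out the column below it.
for k in range(n):
    for i in range(k + 1, n):
        m = M[i][k] / M[k][k]
        M[i] = [M[i][j] - m * M[k][j] for j in range(n + 1)]

# Back-substitution, bottom row upward.
x = [Fraction(0)] * n
for i in reversed(range(n)):
    s = sum(M[i][j] * x[j] for j in range(i + 1, n))
    x[i] = (M[i][n] - s) / M[i][i]

print(x)   # [Fraction(1, 1), Fraction(2, 1), Fraction(3, 1)]
```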
Why substitution is slow — counting the pain
Let's actually attempt substitution on the same 3×3 system to see how the lines pile up.
From equation 1: x = 6 - y - z.
Plug into equation 2: (6 - y - z) + 2y + 3z = 14 \implies y + 2z = 8. (Line 1 of new system.)
Plug into equation 3: (6 - y - z) + 4y + 9z = 36 \implies 3y + 8z = 30. (Line 2 of new system.)
Now you have a 2×2 system in y, z. From the first, y = 8 - 2z. Plug into the second: 3(8 - 2z) + 8z = 30 \implies 24 - 6z + 8z = 30 \implies 2z = 6 \implies z = 3.
Back-substitute: y = 8 - 6 = 2, then x = 6 - 2 - 3 = 1.
That worked, but count the writing. Eight algebraic lines, each with multiple terms, each a chance for a sign slip. Gaussian elimination did the same job in three matrix snapshots and three row operations in total. And every single substitution step required judgement (which variable do I isolate? which equation do I plug into?). Gaussian elimination required none. Why this matters at scale: judgement does not parallelise; mechanical recipes do. A computer can solve a 1000×1000 system in milliseconds via Gaussian elimination, while nobody automates substitution at that scale, because substitution has no fixed recipe to automate.
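You can watch this on your own machine. The sketch below (assuming NumPy is installed) times numpy.linalg.solve, which calls a LAPACK elimination routine, on a random 1000×1000 system; on typical laptop hardware this lands in the tens of milliseconds, though the exact figure depends on your CPU and BLAS.

```python
import time
import numpy as np

rng = np.random.default_rng(0)
n = 1000
A = rng.standard_normal((n, n))   # random, almost surely non-singular
b = rng.standard_normal(n)

t0 = time.perf_counter()
x = np.linalg.solve(A, b)         # Gaussian elimination with partial pivoting (LAPACK)
elapsed = time.perf_counter() - t0

print(f"{n}x{n} solved in {elapsed * 1000:.1f} ms,"
      f" residual {np.max(np.abs(A @ x - b)):.2e}")
```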
The 4×4 case and beyond: substitution becomes infeasible
For four equations in four unknowns, substitution forces you to express one variable in terms of three others, plug into three remaining equations, then repeat. At every level of the recursion the expressions get longer and the coefficient algebra uglier. By the time you reach a 5×5 system by hand, substitution is essentially impossible without errors.
Matrix methods, however, just keep going. Gaussian elimination on an n \times n system requires roughly \tfrac{2}{3}n^3 arithmetic operations — and crucially, every operation is the same kind of operation. Real-world problems — finite-element simulations of bridges, weather forecasting at the IMD, neural network training — solve systems with thousands or millions of variables. They use Gaussian elimination with a refinement called partial pivoting (before eliminating each column, swap rows so the largest-magnitude entry in that column becomes the pivot, which controls round-off error). No human ever does this by hand. But the algorithm is the one you learn in CBSE Class 12.
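For the curious, here is what that refinement looks like as a short Python/NumPy sketch. The function name solve_with_partial_pivoting is invented for this illustration, and real libraries use tuned LAPACK code rather than Python loops, but the row-swap idea is exactly the one described above.

```python
import numpy as np

def solve_with_partial_pivoting(A, b):
    """Gaussian elimination; before eliminating each column, swap up the
    row holding the largest-magnitude entry in that column (the pivot)."""
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)
    for k in range(n):
        p = k + np.argmax(np.abs(A[k:, k]))            # best pivot at or below row k
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]    # row swap
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.empty(n)
    for i in reversed(range(n)):                       # back-substitution
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[1., 1, 1], [1, 2, 3], [1, 4, 9]])
b = np.array([6., 14, 36])
print(solve_with_partial_pivoting(A, b))   # [1. 2. 3.]
```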
The Class 12 transition
CBSE Class 12 introduces matrices and determinants as separate chapters precisely so that, in the chapter on Applications of Determinants and Matrices, you can revisit the systems-of-equations problem from Class 10 with grown-up tools. The official syllabus makes you solve 3×3 systems by Cramer's rule, by the matrix-inverse method (\mathbf{X} = A^{-1}\mathbf{b}), and by Gaussian elimination. You may be tempted to ask why — substitution would still work. The honest answer is what this article is about: substitution stops scaling, and the methods you need at university (linear algebra, numerical analysis, machine learning) are the matrix methods. Class 12 is preparing your toolkit for the work ahead.
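The matrix-inverse method translates to a single line in NumPy. A small sketch on the same 3×3 system as above; note that numerical practice prefers solve() (elimination) over explicitly forming A^{-1}, which is slower and less accurate.

```python
import numpy as np

A = np.array([[1., 1, 1], [1, 2, 3], [1, 4, 9]])
b = np.array([6., 14, 36])

# Textbook form: X = A^{-1} b, valid whenever det(A) != 0.
print(np.linalg.inv(A) @ b)   # [1. 2. 3.]

# What you would do in practice: elimination, not inversion.
print(np.linalg.solve(A, b))  # [1. 2. 3.]
```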
So the rule of thumb to carry with you: two unknowns, substitute. Three or more, set up the matrix. Your future self — and your exam time — will thank you.
References
- NCERT, Mathematics Textbook for Class XII, Part I, Chapters 3 and 4 (Matrices and Determinants).
- Gilbert Strang, Introduction to Linear Algebra, 5th ed., Chapter 2 (Solving Linear Equations).
- MIT OpenCourseWare, 18.06 Linear Algebra — Lecture 2: Elimination with Matrices.
- Trefethen and Bau, Numerical Linear Algebra, Lecture 20 (Gaussian Elimination).
- Wikipedia, Cramer's rule.