In short
Matrices can be added (entry by entry; both matrices must have the same order), scaled (multiply every entry by a constant), and multiplied (each entry of the product is a row of the first dotted with a column of the second; the inner dimensions must match). Addition is commutative; multiplication is not. These operations, together with their algebraic properties, turn matrices into a rich arithmetic system that generalises — and occasionally breaks — the rules you know from ordinary numbers.
A shopkeeper in Jaipur runs two stores. Each store sells three products: notebooks, pens, and erasers. The sales on Monday and Tuesday are:
Monday sales:
| | Notebooks | Pens | Erasers |
|---|---|---|---|
| Store 1 | 30 | 45 | 20 |
| Store 2 | 25 | 50 | 15 |
Tuesday sales:
| | Notebooks | Pens | Erasers |
|---|---|---|---|
| Store 1 | 35 | 40 | 25 |
| Store 2 | 20 | 55 | 30 |
To find the total sales over both days, you would add the corresponding entries: Store 1 sold 30 + 35 = 65 notebooks, 45 + 40 = 85 pens, and so on. That is matrix addition — entry by entry, position by position.
Now suppose the shopkeeper wants to know the revenue, not just the count. Notebooks cost ₹40, pens cost ₹10, erasers cost ₹5. To get each store's total revenue on Monday, you would multiply quantities by prices and add: Store 1's Monday revenue is 30 \times 40 + 45 \times 10 + 20 \times 5 = 1200 + 450 + 100 = 1750. That computation — multiplying a row by a column and summing — is exactly what matrix multiplication does.
Both operations are natural. Both arise from real problems. But they follow different rules, and those rules are worth understanding precisely.
Addition and subtraction
Matrix addition is the simplest operation. You add two matrices by adding their corresponding entries.
Matrix addition
If A = [a_{ij}] and B = [b_{ij}] are both m \times n matrices, their sum is the m \times n matrix

A + B = [a_{ij} + b_{ij}]
Matrix subtraction is defined the same way: A - B = [a_{ij} - b_{ij}].
The condition is strict: the two matrices must have the same order. You cannot add a 2 \times 3 matrix to a 3 \times 2 matrix. The operation is undefined — not "zero," not "error," but simply not a thing that exists. There is no way to match up entries from a 2 \times 3 grid with entries from a 3 \times 2 grid, because the grids have different shapes.
Using the shopkeeper's data:

\begin{bmatrix} 30 & 45 & 20 \\ 25 & 50 & 15 \end{bmatrix} + \begin{bmatrix} 35 & 40 & 25 \\ 20 & 55 & 30 \end{bmatrix} = \begin{bmatrix} 65 & 85 & 45 \\ 45 & 105 & 45 \end{bmatrix}
Each entry in the result is the sum of the entries in the same position. No entry interacts with any other position. This is why addition is simple — every entry minds its own business.
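The definition translates directly into code. Here is a minimal sketch in plain Python (the helper name `add_matrices` is ours, not from any library), including the shape check that makes mismatched orders an error rather than a silent wrong answer:

```python
def add_matrices(A, B):
    """Entry-by-entry sum of two matrices given as lists of rows.

    Raises ValueError when the orders differ, mirroring the rule
    that addition is undefined for matrices of different shapes.
    """
    if len(A) != len(B) or any(len(ra) != len(rb) for ra, rb in zip(A, B)):
        raise ValueError("matrices must have the same order")
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

monday = [[30, 45, 20], [25, 50, 15]]
tuesday = [[35, 40, 25], [20, 55, 30]]
print(add_matrices(monday, tuesday))  # [[65, 85, 45], [45, 105, 45]]
```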
Scalar multiplication
Multiplying a matrix by a single number (a scalar) means multiplying every entry by that number.
Scalar multiplication
If A = [a_{ij}] is an m \times n matrix and k is a scalar, then

kA = [k\,a_{ij}]

the m \times n matrix obtained by multiplying every entry of A by k.
If the shopkeeper decides to double his inventory projections (taking Monday's figures as the baseline):

2 \begin{bmatrix} 30 & 45 & 20 \\ 25 & 50 & 15 \end{bmatrix} = \begin{bmatrix} 60 & 90 & 40 \\ 50 & 100 & 30 \end{bmatrix}
Every entry is doubled. The matrix has the same shape as before; only the magnitudes change. This is why the operation is called scalar multiplication — the scalar acts uniformly on every entry, scaling the entire matrix.
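In code, the scalar rule is a single uniform pass over the entries (the helper name `scale` is ours for illustration):

```python
def scale(k, A):
    """Multiply every entry of A by the scalar k; the shape is unchanged."""
    return [[k * entry for entry in row] for row in A]

monday = [[30, 45, 20], [25, 50, 15]]
print(scale(2, monday))  # [[60, 90, 40], [50, 100, 30]]
```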
Matrix multiplication
This is where things get interesting. Matrix multiplication is not entry-by-entry. It is a fundamentally different operation, and it is the one that makes matrices powerful.
The rule: to get the entry in row i, column j of the product AB, take the i-th row of A and the j-th column of B, multiply corresponding entries, and add.
Matrix multiplication
If A = [a_{ij}] is an m \times p matrix and B = [b_{ij}] is a p \times n matrix, then their product AB is the m \times n matrix C = [c_{ij}] where

c_{ij} = \sum_{k=1}^{p} a_{ik} b_{kj} = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{ip}b_{pj}
The critical constraint: the number of columns of A must equal the number of rows of B. If A is m \times p and B is p \times n, the product AB exists and has order m \times n. If the inner dimensions don't match — say A is 2 \times 3 and B is 4 \times 2 — the product AB is undefined.
A memory aid: write the orders next to each other.

A_{m \times \mathbf{p}} \; B_{\mathbf{p} \times n} \longrightarrow (AB)_{m \times n}
The two bold p's must match. They "cancel," leaving the outer dimensions m \times n as the order of the product.
Here is the revenue computation from the shopkeeper's problem, set up as a matrix multiplication. Monday's quantities form a 2 \times 3 matrix; the prices form a 3 \times 1 column matrix:

\begin{bmatrix} 30 & 45 & 20 \\ 25 & 50 & 15 \end{bmatrix} \begin{bmatrix} 40 \\ 10 \\ 5 \end{bmatrix} = \begin{bmatrix} 30 \times 40 + 45 \times 10 + 20 \times 5 \\ 25 \times 40 + 50 \times 10 + 15 \times 5 \end{bmatrix} = \begin{bmatrix} 1750 \\ 1575 \end{bmatrix}
Store 1's Monday revenue is ₹1,750; Store 2's is ₹1,575. The multiplication did exactly what you would have done by hand — multiply quantities by prices and sum — but it did it for both stores at once, in a single operation.
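The row-dot-column rule can be sketched in plain Python. The helper name `matmul` is ours; each output entry is one row of A dotted with one column of B:

```python
def matmul(A, B):
    """Product of A (m x p) and B (p x n), given as lists of rows.

    Entry (i, j) of the result is row i of A dotted with column j of B.
    """
    p = len(B)  # rows of B must equal columns of A
    if any(len(row) != p for row in A):
        raise ValueError("columns of A must equal rows of B")
    n = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(n)]
            for i in range(len(A))]

quantities = [[30, 45, 20], [25, 50, 15]]  # Monday sales, 2x3
prices = [[40], [10], [5]]                 # price column matrix, 3x1
print(matmul(quantities, prices))  # [[1750], [1575]]
```

Both stores' revenues come out of a single call, which is the point of the matrix formulation.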
Why multiplication is not commutative
With ordinary numbers, 3 \times 5 = 5 \times 3. With matrices, AB \neq BA in general. There are three reasons, in increasing order of depth.
Reason 1: existence. If A is 2 \times 3 and B is 3 \times 4, then AB is 2 \times 4 — it exists. But BA would require the inner dimensions of B (4) and A (2) to match, and 4 \neq 2. So BA doesn't even exist. One product is defined; the other is not.
Reason 2: size. Even when both products exist, they can have different orders. If A is 2 \times 3 and B is 3 \times 2, then AB is 2 \times 2 and BA is 3 \times 3. They are not even the same shape, let alone equal.
Reason 3: values. Even when both products exist and have the same order (which requires both A and B to be square matrices of the same size), the entries are typically different. Take

A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}. \quad Then \quad AB = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}, \quad BA = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}.
AB \neq BA, even though both are 2 \times 2. Non-commutativity is not a defect — it reflects the fact that matrix multiplication encodes sequential operations, and the order in which you perform operations matters. Rotating a shape and then reflecting it gives a different result from reflecting and then rotating.
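A quick numerical check with one non-commuting pair (a sample pair of our choosing), using NumPy's `@` operator for matrix products:

```python
import numpy as np

A = np.array([[1, 1], [0, 1]])
B = np.array([[1, 0], [1, 1]])

print(A @ B)  # [[2 1]
              #  [1 1]]
print(B @ A)  # [[1 1]
              #  [1 2]]
print(np.array_equal(A @ B, B @ A))  # False
```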
Properties (with proofs)
The algebraic properties of matrix operations are the rules of the game. Each one is worth proving, because the proofs are short and they show you why the rule holds — not just that it does.
In the following, A, B, C are matrices of compatible orders, and k, l are scalars.
Properties of addition
1. Commutativity: A + B = B + A.
Proof. The (i,j)-entry of A + B is a_{ij} + b_{ij}. The (i,j)-entry of B + A is b_{ij} + a_{ij}. Since addition of real numbers is commutative, a_{ij} + b_{ij} = b_{ij} + a_{ij}. This holds for every (i,j), so A + B = B + A. \square
2. Associativity: (A + B) + C = A + (B + C).
Proof. The (i,j)-entry of (A + B) + C is (a_{ij} + b_{ij}) + c_{ij}. The (i,j)-entry of A + (B + C) is a_{ij} + (b_{ij} + c_{ij}). These are equal because real-number addition is associative. \square
3. Additive identity: A + O = A.
Proof. The (i,j)-entry of A + O is a_{ij} + 0 = a_{ij}. So A + O = A. \square
4. Additive inverse: A + (-A) = O, where -A = [-a_{ij}].
Proof. The (i,j)-entry of A + (-A) is a_{ij} + (-a_{ij}) = 0. So A + (-A) = O. \square
Properties of scalar multiplication
5. k(A + B) = kA + kB (scalar distributes over matrix addition).
Proof. The (i,j)-entry of k(A + B) is k(a_{ij} + b_{ij}) = ka_{ij} + kb_{ij}. The (i,j)-entry of kA + kB is ka_{ij} + kb_{ij}. Equal. \square
6. (k + l)A = kA + lA (scalar addition distributes over a matrix).
Proof. The (i,j)-entry of (k + l)A is (k + l)a_{ij} = ka_{ij} + la_{ij}. The (i,j)-entry of kA + lA is ka_{ij} + la_{ij}. Equal. \square
7. (kl)A = k(lA) (scalar associativity).
Proof. The (i,j)-entry of (kl)A is (kl)a_{ij}. The (i,j)-entry of k(lA) is k(la_{ij}). These are equal because real-number multiplication is associative. \square
8. 1 \cdot A = A (multiplicative identity for scalars).
Proof. The (i,j)-entry of 1 \cdot A is 1 \cdot a_{ij} = a_{ij}. \square
Properties of matrix multiplication
9. Associativity: (AB)C = A(BC).
Proof. Let A be m \times p, B be p \times q, C be q \times n. The (i,j)-entry of (AB)C is

\sum_{l=1}^{q} \left( \sum_{k=1}^{p} a_{ik} b_{kl} \right) c_{lj} = \sum_{l=1}^{q} \sum_{k=1}^{p} a_{ik} b_{kl} c_{lj}

The (i,j)-entry of A(BC) is

\sum_{k=1}^{p} a_{ik} \left( \sum_{l=1}^{q} b_{kl} c_{lj} \right) = \sum_{k=1}^{p} \sum_{l=1}^{q} a_{ik} b_{kl} c_{lj}
Both double sums range over the same set of index pairs (k, l) and sum the same terms a_{ik} b_{kl} c_{lj}, so they are equal. \square
10. Distributivity: A(B + C) = AB + AC (left) and (A + B)C = AC + BC (right).
Proof of left distributivity. The (i,j)-entry of A(B + C) is

\sum_{k} a_{ik} (b_{kj} + c_{kj}) = \sum_{k} a_{ik} b_{kj} + \sum_{k} a_{ik} c_{kj}
The right side is the (i,j)-entry of AB plus the (i,j)-entry of AC, which is the (i,j)-entry of AB + AC. \square
The proof of right distributivity is analogous — distribute from the right and use the same splitting-of-sums argument.
11. Multiplicative identity: AI = A and IA = A.
Proof of AI = A. Let A be m \times n and I = I_n, the n \times n identity matrix. The (i,j)-entry of AI is

\sum_{k=1}^{n} a_{ik} \, \delta_{kj}
where \delta_{kj} is the (k,j)-entry of I: it equals 1 when k = j and 0 otherwise. Every term in the sum vanishes except the one where k = j, leaving a_{ij} \cdot 1 = a_{ij}. So AI = A. The proof that IA = A is the same argument applied to rows. \square
12. Scalar pull-through: k(AB) = (kA)B = A(kB).
Proof. The (i,j)-entry of k(AB) is k \sum_r a_{ir} b_{rj} = \sum_r (k a_{ir}) b_{rj}, which is the (i,j)-entry of (kA)B. And k \sum_r a_{ir} b_{rj} = \sum_r a_{ir} (k b_{rj}), which is the (i,j)-entry of A(kB). \square
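The proofs above are entry-by-entry arguments, but the identities are also easy to spot-check numerically. A sketch with NumPy on small random integer matrices (exact integer arithmetic, so equality checks are reliable):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 6, size=(2, 3))
B = rng.integers(-5, 6, size=(3, 4))
C = rng.integers(-5, 6, size=(4, 2))
D = rng.integers(-5, 6, size=(3, 4))  # same order as B, for distributivity
k = 3

assert np.array_equal((A @ B) @ C, A @ (B @ C))    # associativity (property 9)
assert np.array_equal(A @ (B + D), A @ B + A @ D)  # left distributivity (property 10)
assert np.array_equal(k * (A @ B), (k * A) @ B)    # scalar pull-through (property 12)
assert np.array_equal(k * (A @ B), A @ (k * B))
print("all identities hold")
```

A passing check on one random instance is not a proof, of course; the proofs above are what establish the identities for every compatible choice of matrices.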
A property that fails: cancellation
With ordinary numbers, if ab = ac and a \neq 0, then b = c. With matrices, this is false. You can have AB = AC with A \neq O and yet B \neq C. Here is a concrete case:

A = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \quad C = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}, \quad AB = AC = \begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix}
AB = AC, but B \neq C, and A is certainly not the zero matrix. Cancellation fails because A is not invertible — it does not have a matrix inverse. When an inverse exists, you can multiply both sides by A^{-1} and cancellation works. When it doesn't, the "zero-product-means-a-factor-is-zero" intuition breaks down: AB = AC is the same statement as A(B - C) = O, a product of two nonzero matrices equal to the zero matrix.
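The failure of cancellation is easy to exhibit numerically. A sketch with one sample triple of matrices (chosen by us so that AB = AC while B and C differ):

```python
import numpy as np

A = np.array([[1, 1], [1, 1]])
B = np.array([[1, 0], [0, 0]])
C = np.array([[0, 0], [1, 0]])

print(np.array_equal(A @ B, A @ C))  # True: AB = AC
print(np.array_equal(B, C))          # False: yet B and C differ
print(A @ (B - C))                   # the zero matrix, though neither factor is zero
```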
Worked examples
Example 1: Adding and scaling matrices
A cricket academy tracks batting averages across two months:
Rows are three players; columns are two formats (ODI and T20). Find M + N and \frac{1}{2}(M + N).
Step 1. Check orders. Both are 3 \times 2. Addition is valid.
Why: matrices of different orders cannot be added — the operation is undefined.
Step 2. Add entry by entry.

M + N = \begin{bmatrix} 80 & 82 \\ 60 & 94 \\ 78 & 83 \end{bmatrix}
Why: each entry in the result is the sum of the corresponding entries. Position (1,1): 42 + 38 = 80. Position (2,2): 51 + 43 = 94. And so on for all six entries.
Step 3. Multiply by the scalar \frac{1}{2}.

\frac{1}{2}(M + N) = \begin{bmatrix} 40 & 41 \\ 30 & 47 \\ 39 & 41.5 \end{bmatrix}
Why: every entry is halved. This gives the average of each player's performance across the two months.
Step 4. Interpret. Player 2 averaged 30 in ODI and 47 in T20 over the two months. Player 3 averaged 39 in ODI and 41.5 in T20.
Result: The two-month total is \begin{bmatrix} 80 & 82 \\ 60 & 94 \\ 78 & 83 \end{bmatrix} and the average is \begin{bmatrix} 40 & 41 \\ 30 & 47 \\ 39 & 41.5 \end{bmatrix}.
The picture confirms what the algebra says: addition is entry-by-entry, and the result has the same shape as the inputs.
Example 2: Multiplying two matrices
Compute AB where

A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}, \quad B = \begin{bmatrix} 7 & 10 \\ 8 & 11 \\ 9 & 12 \end{bmatrix}
Step 1. Check dimensions. A is 2 \times 3, B is 3 \times 2. Inner dimensions: 3 = 3. The product exists and has order 2 \times 2.
Why: A_{2 \times \mathbf{3}} \cdot B_{\mathbf{3} \times 2} = C_{2 \times 2}. The bold 3's match.
Step 2. Compute entry c_{11} (row 1 of A dotted with column 1 of B).

c_{11} = 1 \times 7 + 2 \times 8 + 3 \times 9 = 7 + 16 + 27 = 50
Why: pair the entries of row 1 of A with the entries of column 1 of B: (1,7), (2,8), (3,9). Multiply each pair, then add.
Step 3. Compute the remaining entries.

c_{12} = 1 \times 10 + 2 \times 11 + 3 \times 12 = 10 + 22 + 36 = 68
c_{21} = 4 \times 7 + 5 \times 8 + 6 \times 9 = 28 + 40 + 54 = 122
c_{22} = 4 \times 10 + 5 \times 11 + 6 \times 12 = 40 + 55 + 72 = 167
Why: same rule for each entry — row i of A dotted with column j of B.
Step 4. Assemble the product.
Result: AB = \begin{bmatrix} 50 & 68 \\ 122 & 167 \end{bmatrix}, a 2 \times 2 matrix.
Each entry in the 2 \times 2 result is a single number — the dot product of one row from A and one column from B. The diagram shows the computation for c_{11}; the other three entries follow the same pattern with different row-column pairs.
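The whole computation can be checked in one line with NumPy; the matrices below are the ones consistent with the row-column pairings and entry values in the steps above:

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])       # 2x3
B = np.array([[7, 10], [8, 11], [9, 12]])  # 3x2

print(A @ B)  # [[ 50  68]
              #  [122 167]]
```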
Common confusions
- "To multiply matrices, multiply corresponding entries." That is how addition works. Multiplication is different: each entry of the product is a sum of products drawn from an entire row and an entire column. Entry-by-entry multiplication is a different operation, the Hadamard product, which does not appear in the NCERT curriculum.
- "If AB = O, then A = O or B = O." False. The cancellation law of ordinary numbers does not extend to matrices. As shown above, you can have two nonzero matrices whose product is the zero matrix. Such matrices are called zero divisors.
- "AB and BA are always both defined." Only when A is m \times n and B is n \times m. In that case both products exist, but AB is m \times m and BA is n \times n — typically different sizes.
- "(AB)^2 = A^2 B^2." No. (AB)^2 = (AB)(AB) = ABAB. For this to equal A^2 B^2 = AABB, you would need BA = AB — commutativity. Since matrices generally don't commute, this identity generally fails. The correct expansion is ABAB, not AABB.
Going deeper
If you came here to learn how matrix operations work and what properties they satisfy, you have it — you can stop here. The rest of this section is for readers who want to see matrices in a broader algebraic context.
Matrices as a ring
The algebraic properties you proved above — associativity of addition and multiplication, distributivity, existence of additive identity and inverse, non-commutativity of multiplication — show that the set of n \times n matrices forms a mathematical structure called a ring. Specifically, it is a non-commutative ring with unity (the identity matrix I being the unity). This is the same kind of structure as the integers under addition and multiplication, except that multiplication is not commutative.
The ring structure explains which familiar laws carry over from ordinary arithmetic and which do not. Distributivity carries over. Commutativity of multiplication does not. Cancellation does not. The existence of multiplicative inverses is not guaranteed — but when they exist, the matrix is called invertible (or non-singular), and that is the subject of a separate article.
Block multiplication
Large matrices are sometimes partitioned into smaller blocks, and the multiplication rule extends naturally:

\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} = \begin{bmatrix} A_{11}B_{11} + A_{12}B_{21} & A_{11}B_{12} + A_{12}B_{22} \\ A_{21}B_{11} + A_{22}B_{21} & A_{21}B_{12} + A_{22}B_{22} \end{bmatrix}
provided the sub-matrix dimensions are compatible. This is called block multiplication, and it works because the multiplication formula depends only on the "row-dot-column" rule — which doesn't care whether the entries are numbers or matrices. This technique is used in numerical linear algebra to speed up computations on modern hardware, where moving data between memory levels is often more expensive than the arithmetic itself.
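A sketch of the idea with NumPy: partition two 4 \times 4 matrices into 2 \times 2 blocks, multiply block-by-block with the row-dot-column rule, and confirm the result matches the ordinary product.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(0, 10, size=(4, 4))
B = rng.integers(0, 10, size=(4, 4))

# Partition each matrix into four 2x2 blocks.
A11, A12, A21, A22 = A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:]
B11, B12, B21, B22 = B[:2, :2], B[:2, 2:], B[2:, :2], B[2:, 2:]

# Apply the row-dot-column rule with blocks in place of entries.
top = np.hstack([A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22])
bottom = np.hstack([A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22])
blockwise = np.vstack([top, bottom])

assert np.array_equal(blockwise, A @ B)
print("block product matches the full product")
```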
Computational cost
Multiplying two n \times n matrices using the definition takes n^3 multiplications (and nearly as many additions). For n = 1000, that is a billion multiplications. Reducing this cost is an active area of research. The naive algorithm is O(n^3). Strassen's algorithm, discovered in 1969, runs in O(n^{2.807}). The current best theoretical bound is below O(n^{2.372}). Finding the true optimal exponent — or proving a lower bound — remains one of the important open problems in theoretical computer science.
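The n^3 count comes straight from the definition: one multiplication per term of each of the n^2 sums, each sum having n terms. A minimal sketch of the textbook triple-loop algorithm (helper name ours):

```python
def matmul_naive(A, B):
    """Textbook matrix product: three nested loops, n^3 scalar
    multiplications for n x n inputs."""
    m, p, n = len(A), len(B), len(B[0])
    C = [[0] * n for _ in range(m)]
    for i in range(m):          # for each row of A ...
        for j in range(n):      # ... and each column of B ...
            for k in range(p):  # ... accumulate the dot product
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matmul_naive([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Strassen-style algorithms beat this count by trading multiplications for extra additions on sub-blocks, which is why the exponent can drop below 3.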
Where this leads next
You now know how to add, scale, and multiply matrices, and you know the algebraic properties (with proofs) that govern these operations. The next steps:
- Transpose of Matrix — flipping a matrix along its diagonal, and the symmetric and skew-symmetric matrices that result.
- Determinants — Introduction — the single number attached to a square matrix that controls whether it has an inverse.
- Inverse of Matrix — the matrix version of division, and when it exists.
- Systems of Linear Equations — using matrix operations to solve multiple equations at once.
- Special Matrices — orthogonal, idempotent, nilpotent, and involutory matrices, defined by algebraic conditions on the operations you just learned.