In short

Matrices can be added (entry by entry, same order required), scaled (multiply every entry by a constant), and multiplied (each row of the first is dotted with each column of the second; inner dimensions must match). Addition is commutative; multiplication is not. These operations, together with their algebraic properties, turn matrices into a rich arithmetic system that generalises — and occasionally breaks — the rules you know from ordinary numbers.

A shopkeeper in Jaipur runs two stores. Each store sells three products: notebooks, pens, and erasers. The sales on Monday and Tuesday are:

Monday sales:

Notebooks Pens Erasers
Store 1 30 45 20
Store 2 25 50 15

Tuesday sales:

Notebooks Pens Erasers
Store 1 35 40 25
Store 2 20 55 30

To find the total sales over both days, you would add the corresponding entries: Store 1 sold 30 + 35 = 65 notebooks, 45 + 40 = 85 pens, and so on. That is matrix addition — entry by entry, position by position.

Now suppose the shopkeeper wants to know the revenue, not just the count. Notebooks cost ₹40, pens cost ₹10, erasers cost ₹5. To get each store's total revenue on Monday, you would multiply quantities by prices and add: Store 1's Monday revenue is 30 \times 40 + 45 \times 10 + 20 \times 5 = 1200 + 450 + 100 = 1750. That computation — multiplying a row by a column and summing — is exactly what matrix multiplication does.

Both operations are natural. Both arise from real problems. But they follow different rules, and those rules are worth understanding precisely.

Addition and subtraction

Matrix addition is the simplest operation. You add two matrices by adding their corresponding entries.

Matrix addition

If A = [a_{ij}] and B = [b_{ij}] are both m \times n matrices, their sum is the m \times n matrix

A + B = [a_{ij} + b_{ij}]

Matrix subtraction is defined the same way: A - B = [a_{ij} - b_{ij}].

The condition is strict: the two matrices must have the same order. You cannot add a 2 \times 3 matrix to a 3 \times 2 matrix. The operation is undefined — not "zero," not "error," but simply not a thing that exists. There is no way to match up entries from a 2 \times 3 grid with entries from a 3 \times 2 grid, because the grids have different shapes.

Using the shopkeeper's data:

\begin{bmatrix} 30 & 45 & 20 \\ 25 & 50 & 15 \end{bmatrix} + \begin{bmatrix} 35 & 40 & 25 \\ 20 & 55 & 30 \end{bmatrix} = \begin{bmatrix} 65 & 85 & 45 \\ 45 & 105 & 45 \end{bmatrix}

Each entry in the result is the sum of the entries in the same position. No entry interacts with any other position. This is why addition is simple — every entry minds its own business.
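As a cross-check, the same addition can be reproduced in NumPy (an illustrative sketch; the array values come from the Monday and Tuesday tables above):

```python
import numpy as np

monday = np.array([[30, 45, 20],
                   [25, 50, 15]])   # rows: stores; columns: notebooks, pens, erasers
tuesday = np.array([[35, 40, 25],
                    [20, 55, 30]])

total = monday + tuesday            # entry-by-entry; shapes must match exactly
print(total)
# [[ 65  85  45]
#  [ 45 105  45]]
```

Adding arrays of incompatible shapes (say 2 × 3 and 3 × 2) raises an error, mirroring the fact that the operation is simply undefined.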

Scalar multiplication

Multiplying a matrix by a single number (a scalar) means multiplying every entry by that number.

Scalar multiplication

If A = [a_{ij}] is an m \times n matrix and k is a scalar, then

kA = [k \cdot a_{ij}]

If the shopkeeper decides to double his inventory projections:

2 \begin{bmatrix} 30 & 45 & 20 \\ 25 & 50 & 15 \end{bmatrix} = \begin{bmatrix} 60 & 90 & 40 \\ 50 & 100 & 30 \end{bmatrix}

Every entry is doubled. The matrix has the same shape as before; only the magnitudes change. This is why the operation is called scalar multiplication — the scalar acts uniformly on every entry, scaling the entire matrix.
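The doubling, sketched in NumPy for illustration:

```python
import numpy as np

A = np.array([[30, 45, 20],
              [25, 50, 15]])

doubled = 2 * A   # every entry multiplied by 2; the shape stays 2x3
print(doubled)
# [[ 60  90  40]
#  [ 50 100  30]]
```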

Matrix multiplication

This is where things get interesting. Matrix multiplication is not entry-by-entry. It is a fundamentally different operation, and it is the one that makes matrices powerful.

The rule: to get the entry in row i, column j of the product AB, take the i-th row of A and the j-th column of B, multiply corresponding entries, and add.

Matrix multiplication

If A = [a_{ij}] is an m \times p matrix and B = [b_{ij}] is a p \times n matrix, then their product AB is the m \times n matrix C = [c_{ij}] where

c_{ij} = \sum_{k=1}^{p} a_{ik} \, b_{kj} = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{ip}b_{pj}

The critical constraint: the number of columns of A must equal the number of rows of B. If A is m \times p and B is p \times n, the product AB exists and has order m \times n. If the inner dimensions don't match — say A is 2 \times 3 and B is 4 \times 2 — the product AB is undefined.

A memory aid: write the orders next to each other.

A_{m \times \mathbf{p}} \cdot B_{\mathbf{p} \times n} = C_{m \times n}

The two bold p's must match. They "cancel," leaving the outer dimensions m \times n as the order of the product.

Here is the revenue computation from the shopkeeper's problem, set up as a matrix multiplication. Monday's quantities form a 2 \times 3 matrix; the prices form a 3 \times 1 column matrix:

\begin{bmatrix} 30 & 45 & 20 \\ 25 & 50 & 15 \end{bmatrix} \begin{bmatrix} 40 \\ 10 \\ 5 \end{bmatrix} = \begin{bmatrix} 30(40) + 45(10) + 20(5) \\ 25(40) + 50(10) + 15(5) \end{bmatrix} = \begin{bmatrix} 1750 \\ 1575 \end{bmatrix}

Store 1's Monday revenue is ₹1,750; Store 2's is ₹1,575. The multiplication did exactly what you would have done by hand — multiply quantities by prices and sum — but it did it for both stores at once, in a single operation.
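The same revenue computation as a NumPy sketch, with quantities in a 2 × 3 array and prices in a 3 × 1 column:

```python
import numpy as np

monday = np.array([[30, 45, 20],
                   [25, 50, 15]])      # 2x3: stores x products
prices = np.array([[40], [10], [5]])   # 3x1: price per product in rupees

revenue = monday @ prices              # inner dimensions 3 = 3; result is 2x1
print(revenue)
# [[1750]
#  [1575]]
```

The `@` operator applies exactly the row-dot-column rule, so both stores' revenues come out in one operation.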

How matrix multiplication combines a row and a column
Each entry $c_{ij}$ of the product $C = AB$ is computed from a single row of $A$ and a single column of $B$. You pair up the entries, multiply, and add. The row supplies one factor from each term; the column supplies the other.

Why multiplication is not commutative

With ordinary numbers, 3 \times 5 = 5 \times 3. With matrices, AB \neq BA in general. There are three reasons, in increasing order of depth.

Reason 1: existence. If A is 2 \times 3 and B is 3 \times 4, then AB is 2 \times 4 — it exists. But BA would require the inner dimensions of B (4) and A (2) to match, and 4 \neq 2. So BA doesn't even exist. One product is defined; the other is not.

Reason 2: size. Even when both products exist, they can have different orders. If A is 2 \times 3 and B is 3 \times 2, then AB is 2 \times 2 and BA is 3 \times 3. They are not even the same shape, let alone equal.

Reason 3: values. Even when both products exist and have the same order (which requires both A and B to be square matrices of the same size), the entries are typically different. Take

A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}, \quad B = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}
AB = \begin{bmatrix} 2 & 1 \\ 4 & 3 \end{bmatrix}, \quad BA = \begin{bmatrix} 3 & 4 \\ 1 & 2 \end{bmatrix}

AB \neq BA, even though both are 2 \times 2. Non-commutativity is not a defect — it reflects the fact that matrix multiplication encodes sequential operations, and the order in which you perform operations matters. Rotating a shape and then reflecting it gives a different result from reflecting and then rotating.
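A quick numeric check of this example (a sketch; the matrices are the A and B above):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])   # B swaps columns when applied from the right

AB = A @ B
BA = B @ A
print(AB)                        # [[2 1], [4 3]]
print(BA)                        # [[3 4], [1 2]]
print(np.array_equal(AB, BA))    # False
```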

Properties (with proofs)

The algebraic properties of matrix operations are the rules of the game. Each one is worth proving, because the proofs are short and they show you why the rule holds — not just that it does.

In the following, A, B, C are matrices of compatible orders, and k, l are scalars.

Properties of addition

1. Commutativity: A + B = B + A.

Proof. The (i,j)-entry of A + B is a_{ij} + b_{ij}. The (i,j)-entry of B + A is b_{ij} + a_{ij}. Since addition of real numbers is commutative, a_{ij} + b_{ij} = b_{ij} + a_{ij}. This holds for every (i,j), so A + B = B + A. \square

2. Associativity: (A + B) + C = A + (B + C).

Proof. The (i,j)-entry of (A + B) + C is (a_{ij} + b_{ij}) + c_{ij}. The (i,j)-entry of A + (B + C) is a_{ij} + (b_{ij} + c_{ij}). These are equal because real-number addition is associative. \square

3. Additive identity: A + O = A.

Proof. The (i,j)-entry of A + O is a_{ij} + 0 = a_{ij}. So A + O = A. \square

4. Additive inverse: A + (-A) = O, where -A = [-a_{ij}].

Proof. The (i,j)-entry of A + (-A) is a_{ij} + (-a_{ij}) = 0. So A + (-A) = O. \square
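These four laws can be spot-checked numerically. This is a sketch on random matrices, not a proof; the entry-by-entry proofs above are what establish the laws for all matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-9, 10, size=(2, 3))
B = rng.integers(-9, 10, size=(2, 3))
C = rng.integers(-9, 10, size=(2, 3))
O = np.zeros((2, 3), dtype=int)

assert np.array_equal(A + B, B + A)              # 1. commutativity
assert np.array_equal((A + B) + C, A + (B + C))  # 2. associativity
assert np.array_equal(A + O, A)                  # 3. additive identity
assert np.array_equal(A + (-A), O)               # 4. additive inverse
```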

Properties of scalar multiplication

5. k(A + B) = kA + kB (scalar distributes over matrix addition).

Proof. The (i,j)-entry of k(A + B) is k(a_{ij} + b_{ij}) = ka_{ij} + kb_{ij}. The (i,j)-entry of kA + kB is ka_{ij} + kb_{ij}. Equal. \square

6. (k + l)A = kA + lA (scalar addition distributes over a matrix).

Proof. The (i,j)-entry of (k + l)A is (k + l)a_{ij} = ka_{ij} + la_{ij}. The (i,j)-entry of kA + lA is ka_{ij} + la_{ij}. Equal. \square

7. (kl)A = k(lA) (scalar associativity).

Proof. The (i,j)-entry of (kl)A is (kl)a_{ij}. The (i,j)-entry of k(lA) is k(la_{ij}). These are equal because real-number multiplication is associative. \square

8. 1 \cdot A = A (multiplicative identity for scalars).

Proof. The (i,j)-entry of 1 \cdot A is 1 \cdot a_{ij} = a_{ij}. \square

Properties of matrix multiplication

9. Associativity: (AB)C = A(BC).

Proof. Let A be m \times p, B be p \times q, C be q \times n. The (i,j)-entry of (AB)C is

\sum_{l=1}^{q} \left(\sum_{k=1}^{p} a_{ik} b_{kl}\right) c_{lj} = \sum_{l=1}^{q} \sum_{k=1}^{p} a_{ik} b_{kl} c_{lj}

The (i,j)-entry of A(BC) is

\sum_{k=1}^{p} a_{ik} \left(\sum_{l=1}^{q} b_{kl} c_{lj}\right) = \sum_{k=1}^{p} \sum_{l=1}^{q} a_{ik} b_{kl} c_{lj}

Both double sums range over the same set of index pairs (k, l) and sum the same terms a_{ik} b_{kl} c_{lj}, so they are equal. \square

10. Distributivity: A(B + C) = AB + AC (left) and (A + B)C = AC + BC (right).

Proof of left distributivity. The (i,j)-entry of A(B + C) is

\sum_{k=1}^{p} a_{ik}(b_{kj} + c_{kj}) = \sum_{k=1}^{p} a_{ik} b_{kj} + \sum_{k=1}^{p} a_{ik} c_{kj}

The right side is the (i,j)-entry of AB plus the (i,j)-entry of AC, which is the (i,j)-entry of AB + AC. \square

The proof of right distributivity is analogous — distribute from the right and use the same splitting-of-sums argument.
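A numeric spot-check of associativity and left distributivity, using rectangular matrices so the dimension bookkeeping is visible (a sketch on random values, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(-5, 6, size=(2, 3))   # m x p
B = rng.integers(-5, 6, size=(3, 4))   # p x q
C = rng.integers(-5, 6, size=(4, 2))   # q x n
D = rng.integers(-5, 6, size=(3, 4))   # same shape as B

assert np.array_equal((A @ B) @ C, A @ (B @ C))    # property 9: same 2x2 either way
assert np.array_equal(A @ (B + D), A @ B + A @ D)  # property 10: left distributivity
```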

11. Multiplicative identity: AI = A and IA = A.

Proof of AI = A. Let A be m \times n and I = I_n, the n \times n identity matrix. The (i,j)-entry of AI is

\sum_{k=1}^{n} a_{ik} \cdot \delta_{kj}

where \delta_{kj} is the (k,j)-entry of I: it equals 1 when k = j and 0 otherwise. Every term in the sum vanishes except the one where k = j, leaving a_{ij} \cdot 1 = a_{ij}. So AI = A. The proof that IA = A is the same argument applied to rows. \square

12. Scalar pull-through: k(AB) = (kA)B = A(kB).

Proof. The (i,j)-entry of k(AB) is k \sum_r a_{ir} b_{rj} = \sum_r (k a_{ir}) b_{rj}, which is the (i,j)-entry of (kA)B. And k \sum_r a_{ir} b_{rj} = \sum_r a_{ir} (k b_{rj}), which is the (i,j)-entry of A(kB). \square

A property that fails: cancellation

With ordinary numbers, if ab = ac and a \neq 0, then b = c. With matrices, this is false. You can have AB = AC with A \neq O and yet B \neq C. Here is a concrete case:

A = \begin{bmatrix} 1 & 1 \\ 2 & 2 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad C = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}
AB = \begin{bmatrix} 1 & 1 \\ 2 & 2 \end{bmatrix}, \quad AC = \begin{bmatrix} 1 & 1 \\ 2 & 2 \end{bmatrix}

AB = AC, but B \neq C, and A is certainly not the zero matrix. Cancellation fails because A is not invertible — it does not have a matrix inverse. When an inverse exists, you can multiply both sides by A^{-1} and cancellation works. When it doesn't, the zero-product intuition breaks down too: here A(B - C) = O even though A \neq O and B - C \neq O.
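The failure is easy to witness in code (a sketch using the matrices above):

```python
import numpy as np

A = np.array([[1, 1], [2, 2]])   # singular: the second row is twice the first
B = np.eye(2, dtype=int)         # the 2x2 identity matrix
C = np.array([[0, 1], [1, 0]])

print(np.array_equal(A @ B, A @ C))  # True: AB equals AC
print(np.array_equal(B, C))          # False: yet B differs from C
```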

Worked examples

Example 1: Adding and scaling matrices

A cricket academy tracks batting averages across two months:

\text{March} = M = \begin{bmatrix} 42 & 35 \\ 28 & 51 \\ 37 & 44 \end{bmatrix}, \quad \text{April} = N = \begin{bmatrix} 38 & 47 \\ 32 & 43 \\ 41 & 39 \end{bmatrix}

Rows are three players; columns are two formats (ODI and T20). Find M + N and \frac{1}{2}(M + N).

Step 1. Check orders. Both are 3 \times 2. Addition is valid.

Why: matrices of different orders cannot be added — the operation is undefined.

Step 2. Add entry by entry.

M + N = \begin{bmatrix} 42+38 & 35+47 \\ 28+32 & 51+43 \\ 37+41 & 44+39 \end{bmatrix} = \begin{bmatrix} 80 & 82 \\ 60 & 94 \\ 78 & 83 \end{bmatrix}

Why: each entry in the result is the sum of the corresponding entries. Position (1,1): 42 + 38 = 80. Position (2,2): 51 + 43 = 94. And so on for all six entries.

Step 3. Multiply by the scalar \frac{1}{2}.

\frac{1}{2}(M + N) = \begin{bmatrix} 40 & 41 \\ 30 & 47 \\ 39 & 41.5 \end{bmatrix}

Why: every entry is halved. This gives the average of each player's performance across the two months.

Step 4. Interpret. Player 2 averaged 30 in ODI and 47 in T20 over the two months. Player 3 averaged 39 in ODI and 41.5 in T20.

Result: The two-month total is \begin{bmatrix} 80 & 82 \\ 60 & 94 \\ 78 & 83 \end{bmatrix} and the average is \begin{bmatrix} 40 & 41 \\ 30 & 47 \\ 39 & 41.5 \end{bmatrix}.

Matrix addition shown entry by entry
Matrix addition is position-by-position. Each entry in the result matrix comes from adding the two entries that sit in the same row and column in the original matrices. The red-bordered result shows the two-month totals.

The picture confirms what the algebra says: addition is entry-by-entry, and the result has the same shape as the inputs.
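Example 1 translated into a NumPy sketch (values as in the tables above):

```python
import numpy as np

M = np.array([[42, 35], [28, 51], [37, 44]])   # March: rows = players, cols = ODI, T20
N = np.array([[38, 47], [32, 43], [41, 39]])   # April

total = M + N           # two-month totals, entry by entry
average = total / 2     # scalar multiplication by 1/2
print(total)
print(average)
```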

Example 2: Multiplying two matrices

Compute AB where

A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}, \quad B = \begin{bmatrix} 7 & 10 \\ 8 & 11 \\ 9 & 12 \end{bmatrix}

Step 1. Check dimensions. A is 2 \times 3, B is 3 \times 2. Inner dimensions: 3 = 3. The product exists and has order 2 \times 2.

Why: A_{2 \times \mathbf{3}} \cdot B_{\mathbf{3} \times 2} = C_{2 \times 2}. The bold 3's match.

Step 2. Compute entry c_{11} (row 1 of A dotted with column 1 of B).

c_{11} = 1(7) + 2(8) + 3(9) = 7 + 16 + 27 = 50

Why: pair the entries of row 1 of A with the entries of column 1 of B: (1,7), (2,8), (3,9). Multiply each pair, then add.

Step 3. Compute the remaining entries.

c_{12} = 1(10) + 2(11) + 3(12) = 10 + 22 + 36 = 68
c_{21} = 4(7) + 5(8) + 6(9) = 28 + 40 + 54 = 122
c_{22} = 4(10) + 5(11) + 6(12) = 40 + 55 + 72 = 167

Why: same rule for each entry — row i of A dotted with column j of B.

Step 4. Assemble the product.

AB = \begin{bmatrix} 50 & 68 \\ 122 & 167 \end{bmatrix}

Result: AB = \begin{bmatrix} 50 & 68 \\ 122 & 167 \end{bmatrix}, a 2 \times 2 matrix.

Matrix multiplication step by step for entry $c_{11}$
Computing $c_{11}$: take row 1 of $A$ (highlighted) and column 1 of $B$ (highlighted). Multiply corresponding entries — $1 \times 7$, $2 \times 8$, $3 \times 9$ — and add the products to get $50$. Repeat for every $(i,j)$ pair to fill the $2 \times 2$ result.

Each entry in the 2 \times 2 result is a single number — the dot product of one row from A and one column from B. The diagram shows the computation for c_{11}; the other three entries follow the same pattern with different row-column pairs.
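Example 2 as a NumPy sketch, including the dot product that produces c_{11} on its own:

```python
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])
B = np.array([[7, 10], [8, 11], [9, 12]])

c11 = A[0] @ B[:, 0]   # row 1 of A dotted with column 1 of B
print(c11)             # 50

AB = A @ B             # all four entries at once
print(AB)
# [[ 50  68]
#  [122 167]]
```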


Going deeper

If you came here to learn how matrix operations work and what properties they satisfy, you have it — you can stop here. The rest of this section is for readers who want to see matrices in a broader algebraic context.

Matrices as a ring

The algebraic properties you proved above — associativity of addition and multiplication, distributivity, existence of additive identity and inverse, non-commutativity of multiplication — show that the set of n \times n matrices forms a mathematical structure called a ring. Specifically, it is a non-commutative ring with unity (the identity matrix I being the unity). This is the same kind of structure as the integers under addition and multiplication, except that multiplication is not commutative.

The ring structure explains which familiar laws carry over from ordinary arithmetic and which do not. Distributivity carries over. Commutativity of multiplication does not. Cancellation does not. The existence of multiplicative inverses is not guaranteed — but when they exist, the matrix is called invertible (or non-singular), and that is the subject of a separate article.

Block multiplication

Large matrices are sometimes partitioned into smaller blocks, and the multiplication rule extends naturally:

\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} = \begin{bmatrix} A_{11}B_{11} + A_{12}B_{21} & A_{11}B_{12} + A_{12}B_{22} \\ A_{21}B_{11} + A_{22}B_{21} & A_{21}B_{12} + A_{22}B_{22} \end{bmatrix}

provided the sub-matrix dimensions are compatible. This is called block multiplication, and it works because the multiplication formula depends only on the "row-dot-column" rule — which doesn't care whether the entries are numbers or matrices. This technique is used in numerical linear algebra to speed up computations on modern hardware, where moving data between memory levels is often more expensive than the arithmetic itself.
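A quick check that the block formula agrees with the full product. The 2 × 2 partition below is one arbitrary choice made for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.integers(-5, 6, size=(4, 4))
B = rng.integers(-5, 6, size=(4, 4))

# Partition each matrix into four 2x2 blocks.
A11, A12, A21, A22 = A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:]
B11, B12, B21, B22 = B[:2, :2], B[:2, 2:], B[2:, :2], B[2:, 2:]

# Apply the row-dot-column rule with blocks in place of entries.
blockwise = np.block([
    [A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
    [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22],
])
assert np.array_equal(blockwise, A @ B)   # same result as the full product
```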

Computational cost

Multiplying two n \times n matrices using the definition takes n^3 multiplications (and nearly as many additions). For n = 1000, that is a billion multiplications. Reducing this cost is an active area of research. The naive algorithm is O(n^3). Strassen's algorithm, discovered in 1969, runs in O(n^{2.807}). The current best theoretical bound is below O(n^{2.372}). Finding the true optimal exponent — or proving a lower bound — remains one of the important open problems in theoretical computer science.
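The n^3 count comes straight from the definition: three nested loops, one multiply-add in the innermost body. A minimal sketch of the naive algorithm (the function name is my own):

```python
import numpy as np

def matmul_naive(A, B):
    """Textbook matrix product: c_ij = sum over k of a_ik * b_kj."""
    m, p = A.shape
    p2, n = B.shape
    assert p == p2, "inner dimensions must match"
    C = np.zeros((m, n))
    for i in range(m):          # m choices of row
        for j in range(n):      # n choices of column
            for k in range(p):  # p multiply-adds per entry
                C[i, j] += A[i, k] * B[k, j]
    return C                    # m * n * p multiplications in total

A = np.array([[1., 2., 3.], [4., 5., 6.]])
B = np.array([[7., 10.], [8., 11.], [9., 12.]])
print(matmul_naive(A, B))      # matches A @ B
```

For square n × n inputs this performs exactly n^3 scalar multiplications, which is why the naive algorithm is O(n^3).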

Where this leads next

You now know how to add, scale, and multiply matrices, and you know the algebraic properties (with proofs) that govern these operations. The next steps: