In short
A matrix is a rectangular array of numbers arranged in rows and columns, enclosed in square brackets. Each matrix has an order m \times n (read "m by n"), where m is the number of rows and n is the number of columns. Matrices come in many types — row, column, square, diagonal, identity, zero — and two matrices are equal only when they have the same order and every corresponding entry matches.
Suppose a school has three sections — A, B, C — and two subjects — Maths and Science. The number of students who scored above 90% in the latest exam is:
| | Maths | Science |
|---|---|---|
| Section A | 12 | 8 |
| Section B | 15 | 10 |
| Section C | 9 | 14 |
Six numbers, arranged in a grid. You could list them as "12, 8, 15, 10, 9, 14" — but that loses the structure. You no longer know which number belongs to which section and which subject. The grid — the arrangement into rows and columns — is what makes the data readable.
Mathematics needs a way to capture exactly this kind of structured arrangement. Not just a list of numbers, but a rectangular block of numbers where position matters. That object is called a matrix.
Matrices show up everywhere. In physics, rotations in space are described by 3 \times 3 matrices. In computer graphics, every frame of every video game is rendered using matrix multiplications. In economics, input-output models of entire national economies are written as matrix equations. The Indian satellite navigation system NavIC uses matrices to solve the positioning equations that pinpoint your location from satellite signals. Even Google's original PageRank algorithm — the one that decides which search result you see first — is, at its core, a single enormous matrix raised to a high power.
But before you can do any of that, you need the basics: what a matrix is, what kinds there are, and what it means for two matrices to be equal.
Definition and notation
A matrix is a rectangular arrangement of numbers in rows and columns, enclosed in brackets.
Matrix
A matrix is an ordered rectangular array of numbers (or expressions) arranged in m rows and n columns, written as

A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}
Each number in the array is called an element (or entry) of the matrix. The element in the i-th row and j-th column is denoted a_{ij}.
A few things to notice about the notation.
The double subscript a_{ij}. The first index i always names the row; the second index j always names the column. So a_{23} is the element in row 2, column 3 — never in row 3, column 2. Row first, column second. Always.
The shorthand. Instead of writing out the full grid every time, you can write A = [a_{ij}]_{m \times n}, or simply A = [a_{ij}] when the size is clear from context. Matrices are usually named with capital letters: A, B, C, P, Q.
The brackets. Square brackets [\,] and round parentheses (\,) are both standard. Indian textbooks — NCERT in particular — use square brackets. We will follow that convention throughout.
The school data from the opening, written as a matrix, is:

A = \begin{bmatrix} 12 & 8 \\ 15 & 10 \\ 9 & 14 \end{bmatrix}

Here a_{11} = 12 (Section A, Maths) and a_{32} = 14 (Section C, Science). But a_{23} does not exist: that would be the element in row 2, column 3, and there are only 2 columns. Trying to access a_{23} is like trying to sit in a seat that isn't there. The matrix has exactly 3 \times 2 = 6 elements, no more.
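The row-first, column-second convention maps directly onto nested lists in code. A minimal Python sketch (the variable names are ours; note that Python indexes from 0, while matrix notation a_{ij} indexes from 1):

```python
# The school data as a 3x2 matrix, stored as a list of rows.
# Matrix notation is 1-based, Python is 0-based: a_11 is school[0][0].
school = [
    [12, 8],   # Section A: Maths, Science
    [15, 10],  # Section B
    [9, 14],   # Section C
]

a_11 = school[0][0]   # row 1, column 1 -> 12
a_32 = school[2][1]   # row 3, column 2 -> 14

# a_23 (row 2, column 3) does not exist: each row has only 2 entries.
try:
    school[1][2]
except IndexError:
    print("a_23 is out of range")
```

The order 3 \times 2 shows up as three rows of length two: the data structure itself enforces the shape.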
Order of a matrix
The order (or size or dimension) of a matrix is simply its shape: the number of rows followed by the number of columns.
Order of a matrix
A matrix with m rows and n columns is said to have order m \times n (read "m by n"). The matrix has m \cdot n elements in total.
The school matrix above has 3 rows and 2 columns, so its order is 3 \times 2. A matrix with 1 row and 4 columns has order 1 \times 4. A matrix with 5 rows and 5 columns has order 5 \times 5.
The order matters because it determines what you can and cannot do with the matrix. You can only add two matrices if they have the same order. You can only multiply two matrices if the number of columns in the first equals the number of rows in the second. These rules will appear in the article on matrix operations, but the seed is planted here: the order is not a cosmetic label. It is a structural constraint.
Types of matrices
Matrices come in several standard types. Each type is defined by its order or by a pattern in its entries. Here is the complete list you need.
Row matrix. A matrix with exactly one row. Order: 1 \times n.

\begin{bmatrix} 2 & 7 & -1 & 4 \end{bmatrix}
This is a 1 \times 4 row matrix. It has one row and four columns. Some books call it a row vector.
Column matrix. A matrix with exactly one column. Order: m \times 1.

\begin{bmatrix} 5 \\ -2 \\ 0 \end{bmatrix}
This is a 3 \times 1 column matrix. It has three rows and one column. Some books call it a column vector.
Square matrix. A matrix where the number of rows equals the number of columns. Order: n \times n.

\begin{bmatrix} 1 & 4 \\ 2 & 6 \end{bmatrix}
This is a 2 \times 2 square matrix. Square matrices are the most important kind — almost all the deep theory of matrices (determinants, eigenvalues, inverses) applies only to square matrices. The elements a_{11}, a_{22}, \ldots, a_{nn} — the elements where the row index equals the column index — form the principal diagonal (also called the leading diagonal or simply the diagonal).
Diagonal matrix. A square matrix where every element off the principal diagonal is zero. For example:

\begin{bmatrix} 4 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -7 \end{bmatrix}
The diagonal entries themselves can be anything — including zero. The condition is that a_{ij} = 0 whenever i \neq j.
Scalar matrix. A diagonal matrix where all the diagonal entries are the same number. For example:

\begin{bmatrix} 6 & 0 \\ 0 & 6 \end{bmatrix}
This is like multiplying by a constant (a "scalar"), and the name reflects that. In formal terms: a_{ij} = 0 when i \neq j, and a_{ii} = k for some fixed number k.
Identity matrix. A scalar matrix where that common diagonal value is 1. Written I or I_n (where n is the order). For example:

I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
The identity matrix is to matrix multiplication what the number 1 is to ordinary multiplication: multiplying any matrix by I leaves it unchanged. You will see the proof in matrix operations.
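Although multiplication itself belongs to the next article, the identity property is easy to check numerically. A small Python sketch (the helper functions `matmul` and `identity` are ours, not from the source):

```python
def matmul(A, B):
    """Multiply matrices given as lists of rows (assumes compatible orders)."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def identity(n):
    """Build the n x n identity matrix I_n."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

A = [[2, 5], [7, 3]]
I = identity(2)

# Multiplying by I on either side leaves A unchanged -- I behaves like the number 1.
assert matmul(A, I) == A
assert matmul(I, A) == A
```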
Zero matrix. A matrix where every single element is zero. Written O (capital letter O, not the number 0). For example:

O = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}
The zero matrix is to matrix addition what the number 0 is to ordinary addition: adding O to any matrix leaves it unchanged.
Upper triangular matrix. A square matrix where every element below the principal diagonal is zero. For example:

\begin{bmatrix} 1 & 5 & 2 \\ 0 & 3 & 8 \\ 0 & 0 & 4 \end{bmatrix}
Lower triangular matrix. A square matrix where every element above the principal diagonal is zero. For example:

\begin{bmatrix} 1 & 0 & 0 \\ 5 & 3 & 0 \\ 2 & 8 & 4 \end{bmatrix}
Notice the containment: every identity matrix is a scalar matrix. Every scalar matrix is a diagonal matrix. Every diagonal matrix is both upper and lower triangular. These types nest inside each other like concentric rings.
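The nesting can be expressed as a chain of predicate functions, each adding one condition on top of the previous one. A Python sketch (the function names are ours):

```python
def is_square(M):
    """Same number of rows and columns."""
    return all(len(row) == len(M) for row in M)

def is_diagonal(M):
    """Square, with every off-diagonal entry zero."""
    return is_square(M) and all(
        M[i][j] == 0 for i in range(len(M)) for j in range(len(M)) if i != j)

def is_scalar(M):
    """Diagonal, with all diagonal entries equal."""
    return is_diagonal(M) and len({M[i][i] for i in range(len(M))}) == 1

def is_identity(M):
    """Scalar, with the common diagonal value equal to 1."""
    return is_scalar(M) and M[0][0] == 1

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
# The identity matrix passes every test in the chain -- the types nest.
assert is_identity(I3) and is_scalar(I3) and is_diagonal(I3) and is_square(I3)
```

Note that the zero matrix [[0, 0], [0, 0]] passes `is_diagonal` and `is_scalar` (with scalar 0) but not `is_identity`, matching the common-confusions section below.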
Equality of matrices
When are two matrices "the same"? The condition is strict.
Equality of matrices
Two matrices A = [a_{ij}] and B = [b_{ij}] are equal, written A = B, if and only if:
- They have the same order (m \times n).
- Every corresponding entry matches: a_{ij} = b_{ij} for all i and j.
Both conditions are necessary. Two matrices with different orders are never equal, even if their entries overlap. And two matrices with the same order are equal only if every single entry agrees — not just some of them.
This might sound trivial, but it is surprisingly powerful. Suppose someone tells you that

\begin{bmatrix} x + 1 & 2 \\ 0 & y - 2 \end{bmatrix} = \begin{bmatrix} 4 & 2 \\ 0 & 5 \end{bmatrix}
Because the two matrices are equal, every corresponding entry must match. So:
- Top-left: x + 1 = 4, giving x = 3.
- Bottom-right: y - 2 = 5, giving y = 7.
Matrix equality lets you read off equations — one equation per entry — from a single matrix statement. This is exactly how systems of linear equations are encoded in matrix form, and it is the reason matrices are useful for solving them.
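The two-part definition translates directly into a membership test: check the order first, then every entry. A Python sketch (the helper name is ours):

```python
def matrices_equal(A, B):
    """Matrix equality: same order AND every corresponding entry matches."""
    # Condition 1: same order (same number of rows, same row lengths).
    if len(A) != len(B) or any(len(ra) != len(rb) for ra, rb in zip(A, B)):
        return False
    # Condition 2: a_ij == b_ij for all i, j.
    return all(a == b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

# Same order, same entries: equal.
assert matrices_equal([[4, 3], [0, 5]], [[4, 3], [0, 5]])

# Same six numbers, different orders (1x3 vs 3x1): never equal.
assert not matrices_equal([[1, 2, 3]], [[1], [2], [3]])
```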
Worked examples
Example 1: Identifying matrix type and reading entries
Consider the matrix

P = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 3 \end{bmatrix}
Step 1. Determine the order. There are 3 rows and 3 columns, so P is a 3 \times 3 matrix.
Why: count the rows (horizontal lines of entries) and the columns (vertical lines of entries).
Step 2. Check if it is square. Since the number of rows equals the number of columns (3 = 3), yes — P is a square matrix.
Why: square means m = n. That is the gateway to checking for the more specific types.
Step 3. Check if it is diagonal. Are all off-diagonal entries zero? The entries a_{12}, a_{13}, a_{21}, a_{23}, a_{31}, a_{32} are all 0. Yes — P is diagonal.
Why: diagonal requires a_{ij} = 0 for every i \neq j. All six off-diagonal entries pass this test.
Step 4. Check if it is scalar. Are all diagonal entries equal? a_{11} = a_{22} = a_{33} = 3. Yes — P is a scalar matrix.
Why: scalar adds one more condition on top of diagonal — all diagonal entries must be the same constant.
Step 5. Check if it is the identity matrix. The common diagonal value is 3, not 1. So P is not the identity matrix. It is a scalar matrix (equivalently, P = 3I_3).
Why: the identity matrix is the special scalar matrix where the diagonal constant is exactly 1.
Result: P is a 3 \times 3 scalar matrix equal to 3I_3. Its entries are p_{ij} = 0 for i \neq j, and p_{ii} = 3.
The classification is a chain: start from the most general type (square) and check each additional condition. The matrix settles into the most specific type whose conditions it satisfies.
Example 2: Using matrix equality to solve for unknowns
Given that

\begin{bmatrix} 2x - 1 & 5 \\ 6 & y^2 - 9 \end{bmatrix} = \begin{bmatrix} 3 & 5 \\ 6 & 0 \end{bmatrix},
find x and y.
Step 1. Verify the orders match. Both matrices are 2 \times 2. The equality is valid.
Why: if the orders were different, the equation would have no solution — matrices of different orders are never equal.
Step 2. Equate the (1,1) entries: 2x - 1 = 3.
Why: the definition of matrix equality says corresponding entries must be equal. Position (1,1) on the left has 2x - 1; position (1,1) on the right has 3.
Step 3. Equate the (2,2) entries: y^2 - 9 = 0.
Why: the same principle, applied to position (2,2). Notice that this gives two values of y. Both are valid — there are two matrices that satisfy the equation, one with y = 3 and one with y = -3.
Step 4. Check the remaining entries. Position (1,2): 5 = 5. Position (2,1): 6 = 6. Both match automatically — no new information.
Why: these entries have no unknowns, so equality is already satisfied. But you should always verify this; if they didn't match, the equation would have no solution at all.
Result: x = 2 and y = \pm 3.
This is the key technique. A single matrix equation A = B is really m \times n separate scalar equations, one for each position. When the entries contain unknowns, matrix equality lets you extract and solve those equations individually.
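The result can be double-checked by substituting both candidate solutions back into the entries stated in the steps above. A Python sketch (the helper name is ours):

```python
def entries_match(A, B):
    """Entrywise equality of two matrices of the same order."""
    return len(A) == len(B) and all(ra == rb for ra, rb in zip(A, B))

# Right-hand side entries, as read off in Steps 2-5.
right = [[3, 5], [6, 0]]

# Both solutions -- (x, y) = (2, 3) and (2, -3) -- satisfy the equality.
for x, y in [(2, 3), (2, -3)]:
    left = [[2 * x - 1, 5], [6, y ** 2 - 9]]
    assert entries_match(left, right)
```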
Common confusions
- "A 3 \times 2 matrix and a 2 \times 3 matrix are the same shape." They are not. 3 \times 2 has 3 rows and 2 columns; 2 \times 3 has 2 rows and 3 columns. The order is always rows first, columns second. Swapping the two gives a different matrix structure entirely — and the two matrices cannot be added or compared for equality.
- "A 1 \times 1 matrix is just a number." Almost, but not quite. The matrix [5] and the number 5 behave the same way under arithmetic, but formally they are different objects. A 1 \times 1 matrix is a matrix that happens to have one row and one column. In most contexts you can treat it as a scalar, and NCERT permits this identification, but know that you are quietly crossing a boundary.
- "The zero matrix and the number zero are the same thing." The zero matrix O is a matrix full of zeros; the number 0 is a scalar. A 2 \times 3 zero matrix and a 4 \times 1 zero matrix are not even equal to each other — they have different orders.
- "A diagonal matrix must have all nonzero entries on the diagonal." No. The condition is that off-diagonal entries are zero. The diagonal entries can be anything, including zero. The zero matrix is a diagonal matrix (and a scalar matrix with scalar 0, and both upper and lower triangular).
Going deeper
If you came here to learn what a matrix is and what types exist, you have it — you can stop here. The rest of this section is for readers who want more context on how matrices connect to the broader landscape of mathematics.
Why rectangular grids?
A matrix is, at bottom, a function of two variables: give it a row index i and a column index j, and it returns the number a_{ij}. You could define it as a : \{1, 2, \ldots, m\} \times \{1, 2, \ldots, n\} \to \mathbb{R} — a function from pairs of indices to real numbers. The rectangular grid is just a way to display this function so that humans can see patterns in it.
This view explains why order matters. The domain of the function is a set of pairs, and \{1,2,3\} \times \{1,2\} (six pairs) is not the same set as \{1,2\} \times \{1,2,3\} (also six pairs, but different ones). The grid is not just a visual convenience — it encodes which pairs exist.
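This function-of-two-indices view can be made literal in code: store the entries in a dictionary keyed by index pairs, and the keys of the dictionary are the domain. A Python sketch using the school data from the opening:

```python
# A 3x2 matrix as a function from index pairs to numbers.
# The domain {1,2,3} x {1,2} fixes the shape.
m, n = 3, 2
entries = {(1, 1): 12, (1, 2): 8,
           (2, 1): 15, (2, 2): 10,
           (3, 1): 9,  (3, 2): 14}

def a(i, j):
    """The function a : {1,...,m} x {1,...,n} -> R."""
    return entries[(i, j)]

assert a(2, 1) == 15
# (2, 3) is not in the domain -- the "seat that isn't there".
assert (2, 3) not in entries
# The domain has exactly m * n pairs.
assert len(entries) == m * n
```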
Matrices and systems of equations
The original motivation for matrices, historically, was solving systems of linear equations. Consider a system such as

\begin{aligned} x + y + z &= 6 \\ x - y + z &= 2 \\ 2x + y - z &= 1 \end{aligned}

You can separate this into three pieces: the coefficients of the unknowns, the unknowns themselves, and the constants on the right. Written as matrices:

A = \begin{bmatrix} 1 & 1 & 1 \\ 1 & -1 & 1 \\ 2 & 1 & -1 \end{bmatrix}, \quad X = \begin{bmatrix} x \\ y \\ z \end{bmatrix}, \quad B = \begin{bmatrix} 6 \\ 2 \\ 1 \end{bmatrix}
The entire system is captured by the single matrix equation AX = B. Solving the system then reduces to finding the matrix X — and the tools for doing that (inverses, row reduction, Cramer's rule) all operate on the coefficient matrix A. This is why matrices were invented: not as abstract objects, but as a way to organise and solve systems of equations efficiently.
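The reduction to AX = B is what makes the system solvable by mechanical row operations. A minimal Python sketch of Gaussian elimination over exact rationals, applied to an illustrative three-variable system (the function name and the particular system are our own choices):

```python
from fractions import Fraction

def solve(A, B):
    """Solve AX = B by Gaussian elimination (assumes a unique solution)."""
    n = len(A)
    # Build the augmented matrix [A | B] with exact rational arithmetic.
    M = [[Fraction(v) for v in row] + [Fraction(b)] for row, b in zip(A, B)]
    for col in range(n):
        # Find a row with a nonzero pivot and swap it into place.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate the column below the pivot.
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [a - factor * b for a, b in zip(M[r], M[col])]
    # Back-substitute from the last row upward.
    X = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        X[r] = (M[r][n] - sum(M[r][c] * X[c] for c in range(r + 1, n))) / M[r][r]
    return X

# Illustrative system: x + y + z = 6, x - y + z = 2, 2x + y - z = 1.
A = [[1, 1, 1], [1, -1, 1], [2, 1, -1]]
B = [6, 2, 1]
assert solve(A, B) == [1, 2, 3]  # x = 1, y = 2, z = 3
```

Notice that `solve` only ever manipulates the coefficient matrix and the constants; the unknowns x, y, z never appear. That separation is the whole point of the AX = B form.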
Indian mathematicians were solving systems of linear equations centuries before matrices were formalised. The Sulbasutras (800–500 BCE) contain geometric constructions that implicitly involve linear systems, and Brahmagupta (7th century) solved indeterminate equations using techniques that amount to row operations on what we would now call an augmented matrix.
The number of possible matrices
If each entry of an m \times n matrix can take any real value, there are uncountably many possible matrices of that order — as many as there are points in \mathbb{R}^{mn}. But if you restrict entries to, say, \{0, 1\}, the number becomes finite: exactly 2^{mn}. For a 3 \times 3 binary matrix, that is 2^9 = 512 possible matrices. This kind of counting becomes important in combinatorics and coding theory.
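The 2^{mn} count can be confirmed by brute enumeration. A Python sketch:

```python
from itertools import product

m, n = 3, 3
# Every 0/1 assignment of the 9 entries gives a distinct 3x3 binary matrix.
binary_matrices = list(product([0, 1], repeat=m * n))
assert len(binary_matrices) == 2 ** (m * n) == 512
```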
Where this leads next
You now know what a matrix is, how to classify it, and when two matrices are equal. The natural next step is learning what you can do with matrices.
- Matrix Operations — addition, subtraction, scalar multiplication, and the all-important matrix multiplication. This is where matrices start to do things.
- Transpose of Matrix — flipping a matrix along its diagonal, and the symmetric and skew-symmetric matrices that arise from it.
- Determinants — Introduction — the single number that captures whether a square matrix is invertible, and what it says about the system of equations the matrix represents.
- Special Matrices — orthogonal, idempotent, nilpotent, and involutory matrices: the matrices with special algebraic properties.