In short

A matrix is a rectangular array of numbers arranged in rows and columns, enclosed in square brackets. Each matrix has an order m \times n (read "m by n"), where m is the number of rows and n is the number of columns. Matrices come in many types — row, column, square, diagonal, identity, zero — and two matrices are equal only when they have the same order and every corresponding entry matches.

Suppose a school has three sections — A, B, C — and two subjects — Maths and Science. The number of students who scored above 90% in the latest exam is:

Maths Science
Section A 12 8
Section B 15 10
Section C 9 14

Six numbers, arranged in a grid. You could list them as "12, 8, 15, 10, 9, 14" — but that loses the structure. You no longer know which number belongs to which section and which subject. The grid — the arrangement into rows and columns — is what makes the data readable.

Mathematics needs a way to capture exactly this kind of structured arrangement. Not just a list of numbers, but a rectangular block of numbers where position matters. That object is called a matrix.

Matrices show up everywhere. In physics, rotations in space are described by 3 \times 3 matrices. In computer graphics, every frame of every video game is rendered using matrix multiplications. In economics, input-output models of entire national economies are written as matrix equations. The Indian satellite navigation system NavIC uses matrices to solve the positioning equations that pinpoint your location from satellite signals. Even Google's original PageRank algorithm — the one that decides which search result you see first — is, at its core, a single enormous matrix raised to a high power.

But before you can do any of that, you need the basics: what a matrix is, what kinds there are, and what it means for two matrices to be equal.

Definition and notation

A matrix is a rectangular arrangement of numbers in rows and columns, enclosed in brackets.

Matrix

A matrix is an ordered rectangular array of numbers (or expressions) arranged in m rows and n columns, written as

A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}

Each number in the array is called an element (or entry) of the matrix. The element in the i-th row and j-th column is denoted a_{ij}.

A few things to notice about the notation.

The double subscript a_{ij}. The first index i always names the row; the second index j always names the column. So a_{23} is the element in row 2, column 3 — never in row 3, column 2. Row first, column second. Always.

The shorthand. Instead of writing out the full grid every time, you can write A = [a_{ij}]_{m \times n}, or simply A = [a_{ij}] when the size is clear from context. Matrices are usually named with capital letters: A, B, C, P, Q.

The brackets. Square brackets [\,] and round parentheses (\,) are both standard. Indian textbooks — NCERT in particular — use square brackets. We will follow that convention throughout.

The school data from the opening, written as a matrix, is:

A = \begin{bmatrix} 12 & 8 \\ 15 & 10 \\ 9 & 14 \end{bmatrix}

Here a_{11} = 12 (Section A, Maths) and a_{32} = 14 (Section C, Science). But a_{23} does not exist: it would name row 2, column 3, and the matrix has only 2 columns. Asking for a_{23} is like trying to sit in a seat that isn't there. The matrix has exactly 3 \times 2 = 6 elements, no more.
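The row-first convention is easy to try out in code. A minimal Python sketch, with the matrix stored as a list of rows (the `entry` helper is an illustrative name, not standard library code); note that Python indexes from 0, so the textbook entry a_{ij} lives at position [i-1][j-1]:

```python
# The school matrix as a list of rows. Python indexes from 0,
# so the textbook entry a_ij lives at A[i-1][j-1].
A = [[12, 8],
     [15, 10],
     [9, 14]]

def entry(A, i, j):
    """Return a_ij using the textbook's 1-based, row-first convention."""
    return A[i - 1][j - 1]

print(entry(A, 1, 1))  # 12  (Section A, Maths)
print(entry(A, 3, 2))  # 14  (Section C, Science)
# entry(A, 2, 3) would raise IndexError: there is no column 3
```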

Order of a matrix

The order (or size or dimension) of a matrix is simply its shape: the number of rows followed by the number of columns.

Order of a matrix

A matrix with m rows and n columns is said to have order m \times n (read "m by n"). The matrix has m \cdot n elements in total.

The school matrix above has 3 rows and 2 columns, so its order is 3 \times 2. A matrix with 1 row and 4 columns has order 1 \times 4. A matrix with 5 rows and 5 columns has order 5 \times 5.

The order matters because it determines what you can and cannot do with the matrix. You can only add two matrices if they have the same order. You can only multiply two matrices if the number of columns in the first equals the number of rows in the second. These rules will appear in the article on matrix operations, but the seed is planted here: the order is not a cosmetic label. It is a structural constraint.
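The "order as structural constraint" idea can be sketched directly, assuming the same list-of-rows representation (the `order` and `add` helpers are illustrative, and `add` previews the addition rule from the matrix operations article):

```python
def order(A):
    """Order (m, n) of a matrix stored as a list of rows."""
    return (len(A), len(A[0]))

def add(A, B):
    """Entrywise sum; defined only when the orders match."""
    if order(A) != order(B):
        raise ValueError("cannot add matrices of different orders")
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[12, 8], [15, 10], [9, 14]]
print(order(A))                  # (3, 2)
print(add([[1, 2]], [[3, 4]]))   # [[4, 6]]
# add(A, [[1, 2, 3]]) would raise ValueError: orders (3, 2) and (1, 3) differ
```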

Types of matrices

Matrices come in several standard types. Each type is defined by its order or by a pattern in its entries. Here is the complete list you need.

Row matrix. A matrix with exactly one row. Order: 1 \times n.

R = \begin{bmatrix} 3 & -1 & 7 & 0 \end{bmatrix}

This is a 1 \times 4 row matrix. It has one row and four columns. Some books call it a row vector.

Column matrix. A matrix with exactly one column. Order: m \times 1.

C = \begin{bmatrix} 5 \\ -2 \\ 8 \end{bmatrix}

This is a 3 \times 1 column matrix. It has three rows and one column. Some books call it a column vector.

Square matrix. A matrix where the number of rows equals the number of columns. Order: n \times n.

S = \begin{bmatrix} 1 & 4 \\ 7 & 2 \end{bmatrix}

This is a 2 \times 2 square matrix. Square matrices are the most important kind — almost all the deep theory of matrices (determinants, eigenvalues, inverses) applies only to square matrices. The elements a_{11}, a_{22}, \ldots, a_{nn} — the elements where the row index equals the column index — form the principal diagonal (also called the leading diagonal or simply the diagonal).

[Figure: The principal diagonal of a 3 by 3 matrix]
The principal diagonal of a $3 \times 3$ matrix runs from the top-left to the bottom-right. The diagonal entries $a_{11}$, $a_{22}$, $a_{33}$ are highlighted. Every matrix type that follows is defined by what happens on and off this diagonal.

Diagonal matrix. A square matrix where every element off the principal diagonal is zero.

D = \begin{bmatrix} 4 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 7 \end{bmatrix}

The diagonal entries themselves can be anything — including zero. The condition is that a_{ij} = 0 whenever i \neq j.

Scalar matrix. A diagonal matrix where all the diagonal entries are the same number.

S = \begin{bmatrix} 5 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 5 \end{bmatrix}

This is like multiplying by a constant (a "scalar"), and the name reflects that. In formal terms: a_{ij} = 0 when i \neq j, and a_{ii} = k for some fixed number k.

Identity matrix. A scalar matrix where that common diagonal value is 1. Written I or I_n (where n is the order).

I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}

The identity matrix is to matrix multiplication what the number 1 is to ordinary multiplication: multiplying any matrix by I leaves it unchanged. You will see the proof in matrix operations.

Zero matrix. A matrix where every single element is zero. Written O (capital letter O, not the number 0).

O_{2 \times 3} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}

The zero matrix is to matrix addition what the number 0 is to ordinary addition: adding O to any matrix leaves it unchanged.

Upper triangular matrix. A square matrix where every element below the principal diagonal is zero.

U = \begin{bmatrix} 2 & 5 & -1 \\ 0 & 3 & 4 \\ 0 & 0 & 6 \end{bmatrix}

Lower triangular matrix. A square matrix where every element above the principal diagonal is zero.

L = \begin{bmatrix} 2 & 0 & 0 \\ 5 & 3 & 0 \\ -1 & 4 & 6 \end{bmatrix}

Notice the containment: every identity matrix is a scalar matrix. Every scalar matrix is a diagonal matrix. Every diagonal matrix is both upper and lower triangular. These types nest inside each other like concentric rings.
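The containment chain can be written as a series of checks, each building on the previous one. A minimal sketch, assuming the list-of-rows representation (the predicate names are illustrative):

```python
def is_square(A):
    """Square: number of rows equals number of columns."""
    return all(len(row) == len(A) for row in A)

def is_diagonal(A):
    """Diagonal: square, with a_ij = 0 whenever i != j."""
    return is_square(A) and all(
        A[i][j] == 0 for i in range(len(A)) for j in range(len(A)) if i != j)

def is_scalar(A):
    """Scalar: diagonal, with all diagonal entries equal."""
    return is_diagonal(A) and all(A[i][i] == A[0][0] for i in range(len(A)))

def is_identity(A):
    """Identity: scalar, with common diagonal value 1."""
    return is_scalar(A) and A[0][0] == 1

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
D  = [[4, 0, 0], [0, -1, 0], [0, 0, 7]]
print(is_identity(I3))              # True: passes every test in the chain
print(is_diagonal(D), is_scalar(D)) # True False: diagonal but not scalar
```

Each predicate calls the one before it, which is exactly the nesting in the diagram: passing an inner test implies passing every outer one.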

[Figure: Nested classification of square matrix types]
Matrix types nest like concentric rings. Every identity matrix is scalar, every scalar matrix is diagonal, every diagonal matrix is triangular, and every triangular matrix is square. The reverse is not true — there are square matrices that are not triangular, and diagonal matrices that are not scalar.

Equality of matrices

When are two matrices "the same"? The condition is strict.

Equality of matrices

Two matrices A = [a_{ij}] and B = [b_{ij}] are equal, written A = B, if and only if:

  1. They have the same order (m \times n).
  2. Every corresponding entry matches: a_{ij} = b_{ij} for all i and j.

Both conditions are necessary. Two matrices with different orders are never equal, even if their entries overlap. And two matrices with the same order are equal only if every single entry agrees — not just some of them.

This might sound trivial, but it is surprisingly powerful. Suppose someone tells you that

\begin{bmatrix} x + 1 & 3 \\ 7 & y - 2 \end{bmatrix} = \begin{bmatrix} 4 & 3 \\ 7 & 5 \end{bmatrix}

Because the two matrices are equal, every corresponding entry must match. The (1,1) entries give x + 1 = 4, so x = 3. The (2,2) entries give y - 2 = 5, so y = 7. The remaining entries (3 and 7) already match and contribute nothing new.

Matrix equality lets you read off equations — one equation per entry — from a single matrix statement. This is exactly how systems of linear equations are encoded in matrix form, and it is the reason matrices are useful for solving them.
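The strict two-part test translates directly into code. A minimal sketch over the list-of-rows representation (the `equal` helper is illustrative; the second call checks the example above with x = 3, y = 7 substituted in):

```python
def equal(A, B):
    """Matrix equality: same order AND every corresponding entry matches."""
    if len(A) != len(B) or any(len(ra) != len(rb) for ra, rb in zip(A, B)):
        return False  # different orders: never equal
    return all(a == b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

print(equal([[1, 2]], [[1], [2]]))  # False: orders 1 x 2 and 2 x 1 differ
x, y = 3, 7
print(equal([[x + 1, 3], [7, y - 2]], [[4, 3], [7, 5]]))  # True
```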

Worked examples

Example 1: Identifying matrix type and reading entries

Consider the matrix

P = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 3 \end{bmatrix}

Step 1. Determine the order. There are 3 rows and 3 columns, so P is a 3 \times 3 matrix.

Why: count the rows (horizontal lines of entries) and the columns (vertical lines of entries).

Step 2. Check if it is square. Since the number of rows equals the number of columns (3 = 3), yes — P is a square matrix.

Why: square means m = n. That is the gateway to checking for the more specific types.

Step 3. Check if it is diagonal. Are all off-diagonal entries zero? The entries a_{12}, a_{13}, a_{21}, a_{23}, a_{31}, a_{32} are all 0. Yes — P is diagonal.

Why: diagonal requires a_{ij} = 0 for every i \neq j. All six off-diagonal entries pass this test.

Step 4. Check if it is scalar. Are all diagonal entries equal? a_{11} = a_{22} = a_{33} = 3. Yes — P is a scalar matrix.

Why: scalar adds one more condition on top of diagonal — all diagonal entries must be the same constant.

Step 5. Check if it is the identity matrix. The common diagonal value is 3, not 1. So P is not the identity matrix. It is a scalar matrix (equivalently, P = 3I_3).

Why: the identity matrix is the special scalar matrix where the diagonal constant is exactly 1.

Result: P is a 3 \times 3 scalar matrix equal to 3I_3. Its entries are p_{ij} = 0 for i \neq j, and p_{ii} = 3.

[Figure: Classification path of the matrix P]
The classification path for $P$. Start from "square" and keep testing. $P$ passes the diagonal test (all off-diagonal entries are zero), passes the scalar test (all diagonal entries are the same: $3$), but fails the identity test (the common value is $3$, not $1$). So $P$ is a scalar matrix.

The classification is a chain: start from the most general type (square) and check each additional condition. The matrix settles into the most specific type whose conditions it satisfies.

Example 2: Using matrix equality to solve for unknowns

Given that

\begin{bmatrix} 2x - 1 & 5 \\ 6 & y^2 - 9 \end{bmatrix} = \begin{bmatrix} 3 & 5 \\ 6 & 0 \end{bmatrix}

find x and y.

Step 1. Verify the orders match. Both matrices are 2 \times 2. The equality is valid.

Why: if the orders were different, the equation would have no solution — matrices of different orders are never equal.

Step 2. Equate the (1,1) entries: 2x - 1 = 3.

2x = 4 \implies x = 2

Why: the definition of matrix equality says corresponding entries must be equal. Position (1,1) on the left has 2x - 1; position (1,1) on the right has 3.

Step 3. Equate the (2,2) entries: y^2 - 9 = 0.

y^2 = 9 \implies y = \pm 3

Why: the same principle, applied to position (2,2). Notice that this gives two values of y. Both are valid — there are two matrices that satisfy the equation, one with y = 3 and one with y = -3.

Step 4. Check the remaining entries. Position (1,2): 5 = 5. Position (2,1): 6 = 6. Both match automatically — no new information.

Why: these entries have no unknowns, so equality is already satisfied. But you should always verify this; if they didn't match, the equation would have no solution at all.

Result: x = 2 and y = \pm 3.
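The result is easy to verify by substituting back, which is exactly the consistency check from Step 4. A short sketch (the `lhs` helper is an illustrative name):

```python
# Substitute the solved values back and confirm every entry matches.
def lhs(x, y):
    """Left-hand matrix of Example 2 with values plugged in."""
    return [[2 * x - 1, 5], [6, y ** 2 - 9]]

rhs = [[3, 5], [6, 0]]
for y in (3, -3):            # both roots of y^2 - 9 = 0
    print(lhs(2, y) == rhs)  # True for each
```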

[Figure: Matrix equality produces one equation per entry]
Matrix equality decomposes into one equation per entry. The entries with unknowns (highlighted in red) produce the equations you solve; the entries that already match provide a consistency check.

This is the key technique. A single matrix equation A = B is really m \times n separate scalar equations, one for each position. When the entries contain unknowns, matrix equality lets you extract and solve those equations individually.

Going deeper

If you came here to learn what a matrix is and what types exist, you have it — you can stop here. The rest of this section is for readers who want more context on how matrices connect to the broader landscape of mathematics.

Why rectangular grids?

A matrix is, at bottom, a function of two variables: give it a row index i and a column index j, and it returns the number a_{ij}. You could define it as a : \{1, 2, \ldots, m\} \times \{1, 2, \ldots, n\} \to \mathbb{R} — a function from pairs of indices to real numbers. The rectangular grid is just a way to display this function so that humans can see patterns in it.

This view explains why order matters. The domain of the function is a set of pairs, and \{1,2,3\} \times \{1,2\} (six pairs) is not the same set as \{1,2\} \times \{1,2,3\} (also six pairs, but different ones). The grid is not just a visual convenience — it encodes which pairs exist.
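The function-of-two-indices view can be made literal in code: store the matrix as a mapping from (row, column) pairs to values. A sketch using a Python dict keyed by 1-based index pairs (an illustrative representation, not a standard library type):

```python
# The school matrix as a function from (row, column) pairs to numbers.
A = {(i, j): v
     for i, row in enumerate([[12, 8], [15, 10], [9, 14]], start=1)
     for j, v in enumerate(row, start=1)}

print(A[(2, 1)])      # 15: the value the "function" returns at (2, 1)
print(sorted(A)[:3])  # the start of the domain: [(1, 1), (1, 2), (2, 1)]
print((2, 3) in A)    # False: that pair is not in the domain {1,2,3} x {1,2}
```

Here "order matters" shows up as a plain fact about the keys: the domain is a specific set of pairs, and (2, 3) is simply not in it.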

Matrices and systems of equations

The original motivation for matrices, historically, was solving systems of linear equations. Consider the system

2x + 3y = 7
5x - y = 4

You can separate this into three pieces: the coefficients of the unknowns, the unknowns themselves, and the constants on the right. Written as matrices:

\underbrace{\begin{bmatrix} 2 & 3 \\ 5 & -1 \end{bmatrix}}_{\text{coefficient matrix}} \underbrace{\begin{bmatrix} x \\ y \end{bmatrix}}_{\text{unknown vector}} = \underbrace{\begin{bmatrix} 7 \\ 4 \end{bmatrix}}_{\text{constant vector}}

The entire system is captured by the single matrix equation AX = B. Solving the system then reduces to finding the matrix X — and the tools for doing that (inverses, row reduction, Cramer's rule) all operate on the coefficient matrix A. This is why matrices were invented: not as abstract objects, but as a way to organise and solve systems of equations efficiently.
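For a 2 \times 2 system, one of those tools (Cramer's rule) fits in a few lines. A minimal sketch, assuming a nonzero determinant (the `solve2` helper is illustrative; a full treatment belongs to the later articles):

```python
def solve2(A, b):
    """Solve the 2x2 system AX = b by Cramer's rule (det must be nonzero)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    if det == 0:
        raise ValueError("coefficient matrix is singular")
    x = (b[0] * A[1][1] - A[0][1] * b[1]) / det
    y = (A[0][0] * b[1] - b[0] * A[1][0]) / det
    return x, y

A = [[2, 3], [5, -1]]   # coefficient matrix of the system above
b = [7, 4]              # constant vector
x, y = solve2(A, b)
print(x, y)                        # x = 19/17, y = 27/17
print(2 * x + 3 * y, 5 * x - y)    # substituting back gives about 7 and 4
```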

Indian mathematicians were solving systems of linear equations centuries before matrices were formalised. The Sulbasutras (800–500 BCE) contain geometric constructions that implicitly involve linear systems, and Brahmagupta (7th century) solved indeterminate equations using techniques that amount to row operations on what we would now call an augmented matrix.

The number of possible matrices

If each entry of an m \times n matrix can take any real value, there are uncountably many possible matrices of that order — as many as there are points in \mathbb{R}^{mn}. But if you restrict entries to, say, \{0, 1\}, the number becomes finite: exactly 2^{mn}. For a 3 \times 3 binary matrix, that is 2^9 = 512 possible matrices. This kind of counting becomes important in combinatorics and coding theory.
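The 2^{mn} count can be confirmed by brute-force enumeration for the 3 \times 3 binary case:

```python
from itertools import product

# Each of the 9 cells independently takes a value in {0, 1},
# so there are 2**9 = 512 possible 3x3 binary matrices.
count = sum(1 for _ in product([0, 1], repeat=9))
print(count)  # 512
```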

Where this leads next

You now know what a matrix is, how to classify it, and when two matrices are equal. The natural next step is learning what you can do with matrices.