In short

The transpose of a matrix A, written A' or A^T, is the matrix obtained by turning every row of A into a column (equivalently, reflecting A across its principal diagonal). Transpose interacts cleanly with addition, scalar multiplication, and matrix multiplication — though with a twist: (AB)^T = B^T A^T, not A^T B^T. A square matrix that equals its own transpose is called symmetric; one that equals the negative of its transpose is called skew-symmetric. Every square matrix can be written, uniquely, as the sum of a symmetric matrix and a skew-symmetric matrix.

Take a railway timetable. Suppose three trains — Rajdhani, Shatabdi, and Duronto — run between two cities, and you record their departure times in a table:

Delhi Mumbai
Rajdhani 16:25 08:35
Shatabdi 06:00 14:15
Duronto 22:50 11:40

This is a 3 \times 2 matrix — three rows (trains), two columns (cities). Now suppose someone asks you to reorganise the same data so that the cities are the rows and the trains are the columns:

Rajdhani Shatabdi Duronto
Delhi 16:25 06:00 22:50
Mumbai 08:35 14:15 11:40

No data was created or destroyed. Every entry is exactly where it was — but the 3 \times 2 table has become a 2 \times 3 table. Rows became columns. Columns became rows. The first row of the new table is the first column of the old one.

This operation — swapping rows and columns — is the transpose. It is one of the most basic operations in matrix algebra, and it has surprisingly deep consequences.

Definition

Transpose of a matrix

If A = [a_{ij}] is an m \times n matrix, the transpose of A, written A^T or A', is the n \times m matrix whose (i,j)-entry is a_{ji}:

(A^T)_{ij} = a_{ji}

In words: the element that sat in row i, column j of A now sits in row j, column i of A^T.

The transpose does two things simultaneously: it swaps the roles of rows and columns, and it reflects the matrix across its principal diagonal (the line from the top-left to the bottom-right corner).

A concrete example. Take

A = \begin{bmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \end{bmatrix}

This is a 2 \times 3 matrix. Its transpose is

A^T = \begin{bmatrix} 1 & 2 \\ 4 & 5 \\ 7 & 8 \end{bmatrix}

The first row of A — (1, 4, 7) — became the first column of A^T. The second row — (2, 5, 8) — became the second column. The 2 \times 3 matrix has become a 3 \times 2 matrix.
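If you want to check this on a computer, NumPy exposes the transpose as the `.T` attribute of an array. A quick sketch with the matrix above:

```python
import numpy as np

# The 2 x 3 matrix from the example above.
A = np.array([[1, 4, 7],
              [2, 5, 8]])

print(A.T)        # rows become columns
print(A.shape)    # (2, 3)
print(A.T.shape)  # (3, 2)
```

The printed transpose is exactly the 3 \times 2 matrix computed by hand: its first column is (1, 4, 7), the first row of A.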

Figure: a 2 \times 3 matrix A on the left and its 3 \times 2 transpose A^T on the right, with dashed arrows showing each entry a_{ij} moving to position (j, i); the first row of A becomes the first column of A^T.
The transpose turns each row into a column. The first row of $A$ — the entries $1, 4, 7$ highlighted in red — becomes the first column of $A^T$. The dimensions flip from $2 \times 3$ to $3 \times 2$.

For a square matrix, the transpose is a reflection across the principal diagonal. The diagonal entries a_{11}, a_{22}, \ldots stay where they are (since swapping i and j does nothing when i = j). The entries above the diagonal swap with their mirrors below.

\begin{bmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 9 \end{bmatrix}^T = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}

The diagonal entries 1, 5, 9 did not move. The entry 4 (position (1,2)) swapped with 2 (position (2,1)). The entry 7 (position (1,3)) swapped with 3 (position (3,1)). And so on.
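The diagonal invariance is easy to confirm numerically. A short NumPy check of the square example above:

```python
import numpy as np

M = np.array([[1, 4, 7],
              [2, 5, 8],
              [3, 6, 9]])

print(M.T)                                   # the matrix reflected across its diagonal
assert (np.diag(M) == np.diag(M.T)).all()    # diagonal entries 1, 5, 9 stay put
assert M[0, 1] == M.T[1, 0]                  # the 4 at (1,2) lands at (2,1)
assert M[0, 2] == M.T[2, 0]                  # the 7 at (1,3) lands at (3,1)
```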

Properties (with proofs)

The transpose interacts with the other matrix operations in clean, predictable ways. Each property is worth proving — the proofs are all short, and they show you exactly where the rule comes from.

Property 1. (A^T)^T = A (double transpose is the original).

Proof. The (i,j)-entry of (A^T)^T is the (j,i)-entry of A^T, which is the (i,j)-entry of A. In symbols: ((A^T)^T)_{ij} = (A^T)_{ji} = a_{ij}. Since every entry matches, (A^T)^T = A. \square

This says that transposing is its own inverse: do it twice and you are back where you started.

Property 2. (A + B)^T = A^T + B^T (transpose of a sum is the sum of transposes).

Proof. Both A and B are m \times n, so A + B is m \times n and (A + B)^T is n \times m. The (i,j)-entry of (A + B)^T is the (j,i)-entry of A + B, which is a_{ji} + b_{ji}. The (i,j)-entry of A^T + B^T is (A^T)_{ij} + (B^T)_{ij} = a_{ji} + b_{ji}. Equal. \square

Property 3. (kA)^T = k A^T (scalars pass through the transpose).

Proof. The (i,j)-entry of (kA)^T is the (j,i)-entry of kA, which is k \cdot a_{ji}. The (i,j)-entry of kA^T is k \cdot (A^T)_{ij} = k \cdot a_{ji}. Equal. \square

Property 4. (AB)^T = B^T A^T (transpose of a product reverses the order).

This is the most important property and the one most likely to trip you up. The transpose of a product is the product of the transposes in reverse order. Not A^T B^T — that is wrong. B^T A^T.

Proof. Let A be m \times p and B be p \times n, so AB is m \times n and (AB)^T is n \times m. On the other side, B^T is n \times p and A^T is p \times m, so B^T A^T is n \times m. The orders match.

Now compare entries. The (i,j)-entry of (AB)^T is the (j,i)-entry of AB:

(AB)_{ji} = \sum_{k=1}^{p} a_{jk} \, b_{ki}

The (i,j)-entry of B^T A^T is:

\sum_{k=1}^{p} (B^T)_{ik} \, (A^T)_{kj} = \sum_{k=1}^{p} b_{ki} \, a_{jk} = \sum_{k=1}^{p} a_{jk} \, b_{ki}

The two expressions are identical (the last step uses commutativity of real-number multiplication). So (AB)^T = B^T A^T. \square

The reversal of order has a natural interpretation. If A and B represent two sequential operations (first B, then A — remember, AB applies B first), then transposing reverses the sequence. This is the same pattern as reversing a sequence of steps: to undo "put on socks, then put on shoes," you "take off shoes, then take off socks." The transpose of a product reverses the factors for the same structural reason.

Extension to three matrices. Applying the rule twice:

(ABC)^T = ((AB)C)^T = C^T (AB)^T = C^T B^T A^T

The pattern extends to any number of factors: the transpose of a product is the product of the transposes in reversed order.
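All four properties can be spot-checked numerically on random matrices. This is not a proof — the proofs above are — but it is a useful sanity check, sketched here with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
A  = rng.standard_normal((3, 4))
A2 = rng.standard_normal((3, 4))
B  = rng.standard_normal((4, 2))
C  = rng.standard_normal((2, 5))
k  = 2.5

# Property 1: double transpose returns the original.
assert np.allclose(A.T.T, A)

# Property 2: transpose of a sum is the sum of transposes.
assert np.allclose((A + A2).T, A.T + A2.T)

# Property 3: scalars pass through the transpose.
assert np.allclose((k * A).T, k * A.T)

# Property 4: transpose of a product reverses the order.
assert np.allclose((A @ B).T, B.T @ A.T)

# Extension to three factors: (ABC)^T = C^T B^T A^T.
assert np.allclose((A @ B @ C).T, C.T @ B.T @ A.T)
```

Note the shapes in Property 4: A @ B is 3 \times 2, so its transpose is 2 \times 3 — and B^T @ A^T is (2 \times 4)(4 \times 3), also 2 \times 3. The order A^T @ B^T would not even be a legal product here.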

Symmetric and skew-symmetric matrices

The transpose gives rise to two special types of square matrices, each defined by how the matrix relates to its own transpose.

Symmetric matrix

A square matrix A is symmetric if A^T = A, that is, a_{ij} = a_{ji} for all i, j.

In a symmetric matrix, the entry in row i, column j is the same as the entry in row j, column i. The matrix is a mirror image of itself across the principal diagonal.

S = \begin{bmatrix} 1 & 4 & 7 \\ 4 & 2 & 5 \\ 7 & 5 & 3 \end{bmatrix}

Check: s_{12} = 4 = s_{21}. s_{13} = 7 = s_{31}. s_{23} = 5 = s_{32}. The matrix is symmetric.

Symmetric matrices appear everywhere in applications. In physics, the moment of inertia tensor of a rigid body is symmetric. In statistics, every covariance matrix is symmetric. In graph theory, the adjacency matrix of an undirected graph is symmetric (because if there is an edge from vertex i to vertex j, there is also one from j to i). Distance tables — like the road distance chart printed on the back of Indian road atlases — are symmetric: the distance from Jaipur to Udaipur is the same as from Udaipur to Jaipur.

Skew-symmetric matrix

A square matrix A is skew-symmetric (also called antisymmetric) if A^T = -A, that is, a_{ij} = -a_{ji} for all i, j.

Setting i = j in the condition: a_{ii} = -a_{ii}, which forces 2a_{ii} = 0, so a_{ii} = 0. Every diagonal entry of a skew-symmetric matrix is zero.

K = \begin{bmatrix} 0 & 3 & -5 \\ -3 & 0 & 7 \\ 5 & -7 & 0 \end{bmatrix}

Check: k_{12} = 3 and k_{21} = -3. k_{13} = -5 and k_{31} = 5. k_{23} = 7 and k_{32} = -7. Each off-diagonal pair sums to zero. The diagonal is all zeros. The matrix is skew-symmetric.
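Both definitions translate directly into one-line tests: a matrix is symmetric when it equals its transpose, skew-symmetric when it equals the negative of its transpose. A sketch using the two example matrices above:

```python
import numpy as np

S = np.array([[1, 4, 7],
              [4, 2, 5],
              [7, 5, 3]])

K = np.array([[ 0,  3, -5],
              [-3,  0,  7],
              [ 5, -7,  0]])

def is_symmetric(M):
    return np.array_equal(M, M.T)      # A^T = A

def is_skew_symmetric(M):
    return np.array_equal(M, -M.T)     # A^T = -A

print(is_symmetric(S), is_skew_symmetric(K))  # True True
```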

Figure: two 3 \times 3 matrices side by side — a symmetric matrix (A = A^T), whose mirror pairs across the diagonal are equal, and a skew-symmetric matrix (A = -A^T), whose mirror pairs have opposite signs and whose diagonal entries are all zero.
Symmetric vs skew-symmetric. In the symmetric matrix (left), mirror pairs across the diagonal are equal: $a_{12} = a_{21} = 4$. In the skew-symmetric matrix (right), mirror pairs are negatives of each other: $a_{12} = 3$, $a_{21} = -3$. The diagonal of a skew-symmetric matrix is always zero.

The decomposition theorem

Here is the deepest result in this article: every square matrix can be split into a symmetric part and a skew-symmetric part. Not approximately, not sometimes — always, uniquely.

Decomposition theorem

Every square matrix A can be written as

A = \underbrace{\frac{1}{2}(A + A^T)}_{\text{symmetric}} + \underbrace{\frac{1}{2}(A - A^T)}_{\text{skew-symmetric}}

This decomposition is unique.

Before proving it, see it in action. Take

A = \begin{bmatrix} 2 & 3 \\ 5 & 4 \end{bmatrix}

Compute A^T = \begin{bmatrix} 2 & 5 \\ 3 & 4 \end{bmatrix}.

Symmetric part: P = \frac{1}{2}(A + A^T) = \frac{1}{2}\begin{bmatrix} 4 & 8 \\ 8 & 8 \end{bmatrix} = \begin{bmatrix} 2 & 4 \\ 4 & 4 \end{bmatrix}

Skew-symmetric part: Q = \frac{1}{2}(A - A^T) = \frac{1}{2}\begin{bmatrix} 0 & -2 \\ 2 & 0 \end{bmatrix} = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}

Check: P + Q = \begin{bmatrix} 2 & 4 \\ 4 & 4 \end{bmatrix} + \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 2 & 3 \\ 5 & 4 \end{bmatrix} = A. It works.

Now the proof.

Proof that the decomposition works.

Let P = \frac{1}{2}(A + A^T) and Q = \frac{1}{2}(A - A^T).

Step 1: P is symmetric.

P^T = \left(\frac{1}{2}(A + A^T)\right)^T = \frac{1}{2}(A + A^T)^T = \frac{1}{2}(A^T + (A^T)^T) = \frac{1}{2}(A^T + A) = P

Why: use Property 3 (scalar passes through transpose), Property 2 (transpose of sum), and Property 1 (double transpose is the original). The last step uses commutativity of addition.

Step 2: Q is skew-symmetric.

Q^T = \left(\frac{1}{2}(A - A^T)\right)^T = \frac{1}{2}(A - A^T)^T = \frac{1}{2}(A^T - (A^T)^T) = \frac{1}{2}(A^T - A) = -\frac{1}{2}(A - A^T) = -Q

Why: the same properties as before. The key moment is A^T - A = -(A - A^T), which flips the sign and gives Q^T = -Q.

Step 3: P + Q = A.

P + Q = \frac{1}{2}(A + A^T) + \frac{1}{2}(A - A^T) = \frac{1}{2}(A + A^T + A - A^T) = \frac{1}{2}(2A) = A

Why: the A^T terms cancel, leaving \frac{1}{2} \cdot 2A = A.

Step 4: The decomposition is unique.

Suppose A = P_1 + Q_1 where P_1 is symmetric and Q_1 is skew-symmetric. Then A^T = P_1^T + Q_1^T = P_1 - Q_1. Adding the two equations: A + A^T = 2P_1, so P_1 = \frac{1}{2}(A + A^T). Subtracting: A - A^T = 2Q_1, so Q_1 = \frac{1}{2}(A - A^T). Both are uniquely determined by A. \square

The decomposition theorem is not just a formal exercise. In physics, when you decompose the velocity gradient tensor of a fluid into symmetric and skew-symmetric parts, the symmetric part gives the strain rate (how fast the fluid is being stretched) and the skew-symmetric part gives the rotation rate (how fast it is spinning). The same matrix, split into two pieces, tells you two fundamentally different things about the fluid's motion. The decomposition separates the "stretching" from the "twisting."

Worked examples

Example 1: Computing a transpose and verifying the product rule

Let A = \begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix} and B = \begin{bmatrix} 5 & 7 \\ 6 & 8 \end{bmatrix}. Verify that (AB)^T = B^T A^T.

Step 1. Compute AB.

AB = \begin{bmatrix} 1(5)+3(6) & 1(7)+3(8) \\ 2(5)+4(6) & 2(7)+4(8) \end{bmatrix} = \begin{bmatrix} 23 & 31 \\ 34 & 46 \end{bmatrix}

Why: row-dot-column. Entry (1,1): row 1 of A dotted with column 1 of B gives 5 + 18 = 23.

Step 2. Transpose the product.

(AB)^T = \begin{bmatrix} 23 & 34 \\ 31 & 46 \end{bmatrix}

Why: swap rows and columns. The (1,2)-entry 31 moves to position (2,1); the (2,1)-entry 34 moves to position (1,2).

Step 3. Compute B^T A^T separately.

B^T = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}, \quad A^T = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}
B^T A^T = \begin{bmatrix} 5(1)+6(3) & 5(2)+6(4) \\ 7(1)+8(3) & 7(2)+8(4) \end{bmatrix} = \begin{bmatrix} 23 & 34 \\ 31 & 46 \end{bmatrix}

Why: the order is B^T first, then A^T. This is the reversal that Property 4 predicts.

Step 4. Compare. (AB)^T = \begin{bmatrix} 23 & 34 \\ 31 & 46 \end{bmatrix} = B^T A^T. Verified.

Result: (AB)^T = B^T A^T = \begin{bmatrix} 23 & 34 \\ 31 & 46 \end{bmatrix}.
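The whole worked example fits in a few lines of NumPy, reproducing the same numbers:

```python
import numpy as np

A = np.array([[1, 3],
              [2, 4]])
B = np.array([[5, 7],
              [6, 8]])

lhs = (A @ B).T     # transpose the product directly
rhs = B.T @ A.T     # reverse the factors, then multiply

print(lhs)          # [[23 34]
                    #  [31 46]]
assert (lhs == rhs).all()
```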

Figure: two paths from AB to the same matrix — transpose the product directly to get (AB)^T, or reverse the factors and multiply B^T A^T; both paths arrive at the same result.
Two paths, same destination. You can transpose the product $(AB)^T$ directly (top path), or reverse the factors and multiply $B^T A^T$ (bottom path). Property 4 guarantees they always agree.

The two paths — transpose-first and reverse-then-multiply — arrive at the same matrix. This is the content of Property 4, verified with specific numbers.

Example 2: Decomposing a matrix into symmetric and skew-symmetric parts

Decompose A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix} as the sum of a symmetric and a skew-symmetric matrix.

Step 1. Compute A^T.

A^T = \begin{bmatrix} 1 & 4 & 7 \\ 2 & 5 & 8 \\ 3 & 6 & 9 \end{bmatrix}

Why: swap rows and columns. Row 1 of A becomes column 1 of A^T.

Step 2. Compute the symmetric part P = \frac{1}{2}(A + A^T).

A + A^T = \begin{bmatrix} 2 & 6 & 10 \\ 6 & 10 & 14 \\ 10 & 14 & 18 \end{bmatrix}, \quad P = \begin{bmatrix} 1 & 3 & 5 \\ 3 & 5 & 7 \\ 5 & 7 & 9 \end{bmatrix}

Why: A + A^T is always symmetric (prove it: (A + A^T)^T = A^T + A = A + A^T). Dividing by 2 preserves symmetry.

Step 3. Compute the skew-symmetric part Q = \frac{1}{2}(A - A^T).

A - A^T = \begin{bmatrix} 0 & -2 & -4 \\ 2 & 0 & -2 \\ 4 & 2 & 0 \end{bmatrix}, \quad Q = \begin{bmatrix} 0 & -1 & -2 \\ 1 & 0 & -1 \\ 2 & 1 & 0 \end{bmatrix}

Why: A - A^T is always skew-symmetric. The diagonal is all zeros (as it must be), and mirror entries are negatives: q_{12} = -1 and q_{21} = 1.

Step 4. Verify P + Q = A.

\begin{bmatrix} 1 & 3 & 5 \\ 3 & 5 & 7 \\ 5 & 7 & 9 \end{bmatrix} + \begin{bmatrix} 0 & -1 & -2 \\ 1 & 0 & -1 \\ 2 & 1 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix} = A \quad \checkmark

Why: the symmetric and skew-symmetric parts are complementary halves. Adding them reconstructs the original exactly.

Result: A = \begin{bmatrix} 1 & 3 & 5 \\ 3 & 5 & 7 \\ 5 & 7 & 9 \end{bmatrix} + \begin{bmatrix} 0 & -1 & -2 \\ 1 & 0 & -1 \\ 2 & 1 & 0 \end{bmatrix}, the unique decomposition into symmetric + skew-symmetric.
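The decomposition formula is mechanical enough to code directly. A NumPy sketch that reproduces Example 2 and checks all three properties at once:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]], dtype=float)

P = (A + A.T) / 2    # symmetric part
Q = (A - A.T) / 2    # skew-symmetric part

assert np.array_equal(P, P.T)      # P is symmetric
assert np.array_equal(Q, -Q.T)     # Q is skew-symmetric
assert np.array_equal(P + Q, A)    # together they reconstruct A exactly

print(P)
print(Q)
```

The printed P and Q match the matrices computed by hand in Steps 2 and 3.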

Figure: the matrix A on the left equals the symmetric matrix P plus the skew-symmetric matrix Q; P has mirror-equal entries across its diagonal, Q has a zero diagonal and mirror-opposite entries.
The matrix $A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}$ decomposes uniquely as $P + Q$, where $P$ is symmetric (mirror pairs across the diagonal are equal) and $Q$ is skew-symmetric (mirror pairs are opposite, diagonal is zero). Adding $P$ and $Q$ entry by entry recovers $A$ exactly.

The decomposition extracts the "symmetric part" and the "anti-symmetric part" of any matrix. The symmetric part captures what is shared between a_{ij} and a_{ji}; the skew-symmetric part captures the difference.


Going deeper

If you came here to learn the transpose, its properties, and the symmetric/skew-symmetric decomposition, you have everything you need — you can stop here. The rest is for readers who want to see where these ideas lead in more advanced mathematics.

Transpose and inner products

If you think of column vectors as n \times 1 matrices, then the dot product of two vectors \mathbf{u} and \mathbf{v} can be written as a matrix product:

\mathbf{u} \cdot \mathbf{v} = \mathbf{u}^T \mathbf{v}

The transpose turns the column \mathbf{u} into a row, and then the matrix product of a 1 \times n row and an n \times 1 column gives a 1 \times 1 matrix — a scalar — which is exactly the dot product. This is why the transpose is fundamental to linear algebra: it connects matrix multiplication to geometry through the inner product.
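In code, this is a literal matrix product between a 1 \times n row and an n \times 1 column. A quick NumPy sketch:

```python
import numpy as np

u = np.array([[1], [2], [3]])   # 3 x 1 column vectors
v = np.array([[4], [5], [6]])

inner = u.T @ v                 # 1 x 1 matrix whose single entry is u . v
print(inner)                    # [[32]]

# The same scalar via the ordinary dot product:
print(np.dot(u.ravel(), v.ravel()))  # 32
```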

Symmetric matrices in spectral theory

Symmetric matrices have a remarkable property that non-symmetric matrices generally lack: they can always be diagonalised, and their eigenvalues are always real. This is the spectral theorem, one of the central results of linear algebra. It says that every symmetric matrix A can be written as A = Q \Lambda Q^T, where \Lambda is a diagonal matrix of eigenvalues and Q is an orthogonal matrix (one whose transpose is its inverse: Q^T Q = I). The spectral theorem is the mathematical foundation of principal component analysis (PCA), a technique used in data science, image processing, and the analysis of large datasets — including the statistical methods used in India's census surveys.
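The spectral theorem can be seen in action with NumPy's `np.linalg.eigh`, the eigensolver for symmetric (Hermitian) matrices. A sketch using an arbitrary symmetric matrix chosen here for illustration:

```python
import numpy as np

S = np.array([[2., 1., 0.],
              [1., 2., 1.],
              [0., 1., 2.]])     # a symmetric matrix

# eigh returns real eigenvalues and an orthogonal matrix of eigenvectors.
eigvals, Q = np.linalg.eigh(S)
Lam = np.diag(eigvals)

assert np.allclose(Q.T @ Q, np.eye(3))   # Q is orthogonal: Q^T Q = I
assert np.allclose(Q @ Lam @ Q.T, S)     # S = Q Lambda Q^T
assert np.all(np.isreal(eigvals))        # eigenvalues are real
```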

Skew-symmetric matrices and rotations

Every 3 \times 3 skew-symmetric matrix corresponds to a rotation axis in three-dimensional space. Given a vector \mathbf{w} = (w_1, w_2, w_3), the matrix

W = \begin{bmatrix} 0 & -w_3 & w_2 \\ w_3 & 0 & -w_1 \\ -w_2 & w_1 & 0 \end{bmatrix}

satisfies W\mathbf{v} = \mathbf{w} \times \mathbf{v} for any vector \mathbf{v} — the matrix-vector product is the cross product. This connection between skew-symmetric matrices and the cross product is used throughout mechanics and robotics. The ISRO Mars Orbiter Mission (Mangalyaan) used rotation matrices built from skew-symmetric components to compute the spacecraft's orientation in space.
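The correspondence between the matrix W and the cross product is easy to verify numerically. A sketch, with the helper function `skew` introduced here for illustration:

```python
import numpy as np

def skew(w):
    """Build the skew-symmetric matrix W with W @ v = w x v."""
    w1, w2, w3 = w
    return np.array([[  0, -w3,  w2],
                     [ w3,   0, -w1],
                     [-w2,  w1,   0]])

w = np.array([1., 2., 3.])
v = np.array([4., 5., 6.])

W = skew(w)
assert np.array_equal(W, -W.T)              # W is skew-symmetric
assert np.allclose(W @ v, np.cross(w, v))   # matrix product equals cross product
```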

Where this leads next

You now know the transpose, its algebraic properties, symmetric and skew-symmetric matrices, and the decomposition theorem that connects them. Here is where these ideas are used next: