In short

Certain square matrices satisfy a single defining algebraic condition: multiplying them by their own transpose gives the identity, squaring them returns the same matrix or the identity, some power of them is the zero matrix, or taking the conjugate transpose returns the matrix itself or its negative. These are called orthogonal, idempotent, involutory, nilpotent, Hermitian, and skew-Hermitian matrices, and each type shows up naturally in geometry, projections, reflections, and quantum mechanics.

Take the matrix

R = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}

and multiply it by its own transpose:

R^T R = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} = \begin{pmatrix} \cos^2\theta + \sin^2\theta & 0 \\ 0 & \cos^2\theta + \sin^2\theta \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I

The product collapsed to the identity matrix. Not because of some trick with the numbers — because \cos^2\theta + \sin^2\theta = 1, an identity baked into the geometry of circles.

This matrix rotates every point in the plane by an angle \theta around the origin. When you multiply a vector by R, the vector rotates but its length stays the same. The condition R^T R = I is what makes that preservation of length happen. A matrix with this property is called orthogonal, and it is the first of the six special types you will meet in this article.
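The collapse to the identity is easy to confirm numerically. A minimal sketch in NumPy (the article assumes no particular software; the angle 0.7 is an arbitrary sample value):

```python
import numpy as np

# Rotation matrix for a sample angle theta (any value works).
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# R^T R collapses to the identity because cos^2(theta) + sin^2(theta) = 1.
print(np.allclose(R.T @ R, np.eye(2)))  # True
```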

Each of these types is defined by a single algebraic condition — a rule that the matrix satisfies when you do a particular operation on it. The conditions are simple. The consequences are deep.

Orthogonal matrices

Orthogonal matrix

A square matrix A is orthogonal if

A^T A = I

Equivalently, A^{-1} = A^T — the inverse of an orthogonal matrix is just its transpose.

The rotation matrix above is one example. Here is another: the matrix that reflects every point across the x-axis.

F = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}

Check: F^T F = F \cdot F = I, since flipping and then flipping again returns to the start.

Why the name "orthogonal"? Look at the columns of R. The first column is (\cos\theta, \sin\theta) and the second is (-\sin\theta, \cos\theta). Each column has length 1, and the dot product of the two columns is -\cos\theta\sin\theta + \sin\theta\cos\theta = 0. The columns are perpendicular unit vectors — orthogonal to each other. This is always the case: a matrix is orthogonal if and only if its columns form an orthonormal set.

Key properties of orthogonal matrices:

- The columns (and rows) form an orthonormal set.
- \det A = \pm 1, and the inverse is simply A^T.
- Multiplying by A preserves lengths, angles, and dot products.
- Products and inverses of orthogonal matrices are again orthogonal.

[Figure: rotation of the vector (3, 1) by 45° as an orthogonal transformation]
The vector $(3, 1)$ is rotated by $45°$ using an orthogonal matrix. The image lands at $(3\cos 45° - \sin 45°,\; 3\sin 45° + \cos 45°) \approx (1.41, 2.83)$. Both the original and the rotated vector have length $\sqrt{10}$ — the rotation preserved the length exactly.
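The same computation can be sketched in NumPy; the norm check confirms that the rotation preserves length:

```python
import numpy as np

theta = np.pi / 4  # 45 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([3.0, 1.0])
w = R @ v  # approximately (1.41, 2.83)

# Both vectors have length sqrt(10): the rotation preserves norms.
print(np.linalg.norm(v), np.linalg.norm(w))
```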

Idempotent matrices

Idempotent matrix

A square matrix A is idempotent if

A^2 = A

That is, applying A twice is the same as applying it once.

The word comes from Latin: idem (same) + potens (power). Once is enough; doing it again changes nothing.

Think of it as a projection. Stand outside at noon, when the sun is directly overhead. Your shadow on the ground is a "projection" of your 3D body onto the 2D ground. Now project the shadow again — project the flat shadow onto the same ground. The shadow of a shadow is the same shadow. That is idempotence.

The simplest nontrivial example:

P = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}

This matrix projects every vector onto the x-axis — it kills the y-component and keeps the x-component. Check: P^2 = P \cdot P = P. Once the y-component is gone, projecting again changes nothing.

A more interesting example:

Q = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix}

Check: Q^2 = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix}\begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix} = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix} = Q. This matrix projects every vector onto the line y = x.
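The check Q^2 = Q is a one-liner; a quick NumPy sketch:

```python
import numpy as np

Q = np.array([[0.5, 0.5],
              [0.5, 0.5]])

# Q^2 == Q: projecting onto the line y = x twice is the same as once.
print(np.allclose(Q @ Q, Q))  # True
```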

Key properties of idempotent matrices:

- Every eigenvalue is 0 or 1.
- If A is idempotent, so is I - A, which projects onto the complementary subspace.
- The only invertible idempotent matrix is I.
- The rank of an idempotent matrix equals its trace.

[Figure: projection of (3, 1) onto the line y = x, landing at (2, 2)]
The vector $(3, 1)$ is projected onto the line $y = x$ by the matrix $Q$. The result is $(2, 2)$, which already lies on the line. Projecting $(2, 2)$ again gives $(2, 2)$ — this is exactly what $Q^2 = Q$ means geometrically.

Involutory matrices

Involutory matrix

A square matrix A is involutory if

A^2 = I

That is, A is its own inverse: A^{-1} = A.

If applying A once transforms something, applying it again undoes the transformation and brings you back to the start. Reflections are the prototype. Reflecting across a mirror twice returns everything to where it was.

The simplest example is

A = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}

which reflects every point across the x-axis. Reflect twice: A^2 = I.

Another example:

B = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}

This matrix swaps the two coordinates of every vector: (x, y) \mapsto (y, x). Swapping twice returns to the original, so B^2 = I.

Check: B^2 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I.
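The same verification in a NumPy sketch:

```python
import numpy as np

B = np.array([[0, 1],
              [1, 0]])

# Swapping the coordinates twice is the identity: B^2 = I.
print(np.array_equal(B @ B, np.eye(2, dtype=int)))  # True
```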

Key properties of involutory matrices:

- A^{-1} = A, so \det A = \pm 1.
- Every eigenvalue is +1 or -1.
- A is involutory exactly when \frac{1}{2}(I + A) is idempotent.

[Figure: reflection of (3, 1) across the line y = x and back, an involutory transformation]
The swap matrix $B = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ reflects every point across the line $y = x$. The point $(3, 1)$ goes to $(1, 3)$. Reflecting again returns to $(3, 1)$. Two applications give the identity — this is exactly what $B^2 = I$ means.

Nilpotent matrices

Nilpotent matrix

A square matrix A is nilpotent if there exists a positive integer k such that

A^k = O

where O is the zero matrix. The smallest such k is called the index of nilpotency.

This is the opposite of involutory. Instead of returning to the identity, repeated application eventually annihilates everything.

Take

N = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}

Compute: N^2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} = O.

Squaring the matrix killed it. The index of nilpotency is 2.

A 3 \times 3 example:

M = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}

Here M^2 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} and M^3 = O. The cube is the first power that vanishes, so the index of nilpotency is 3.
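Both powers can be checked with np.linalg.matrix_power; a short NumPy sketch:

```python
import numpy as np

M = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])

# Each power pushes the nonzero entries toward the upper-right corner.
print(np.linalg.matrix_power(M, 2))  # single 1 in the top-right corner
print(np.linalg.matrix_power(M, 3))  # the zero matrix: index of nilpotency 3
```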

Key properties of nilpotent matrices:

- Every eigenvalue is 0, so \det A = 0 and \operatorname{tr} A = 0.
- An n \times n nilpotent matrix satisfies A^n = O; the index of nilpotency never exceeds n.
- I + A and I - A are always invertible.

[Figure: powers of the nilpotent matrix M decaying to the zero matrix]
The $3 \times 3$ nilpotent matrix $M$ with index $3$: each successive power shifts the nonzero entries one position further toward the upper-right corner, until $M^3$ has no nonzero entries left. The matrix has been "annihilated" by its own repeated action.

Hermitian and skew-Hermitian matrices

Up to this point, all entries have been real numbers. But matrices with complex entries appear throughout physics and engineering — especially in quantum mechanics and signal processing. Two types of complex matrices are fundamental.

Write \bar{A} for the matrix obtained by taking the complex conjugate of every entry, and A^* = \bar{A}^T for the conjugate transpose (also written A^\dagger).

Hermitian matrix

A square matrix A with complex entries is Hermitian if

A^* = A

That is, the conjugate transpose equals the matrix itself.

Skew-Hermitian matrix

A square matrix A is skew-Hermitian if

A^* = -A

For real matrices, the conjugate transpose is just the transpose. So a real Hermitian matrix is the same as a symmetric matrix (A^T = A), and a real skew-Hermitian matrix is the same as a skew-symmetric matrix (A^T = -A). Hermitian matrices are the complex-number generalisation of symmetric matrices.

Here is a Hermitian matrix:

H = \begin{pmatrix} 2 & 3 + i \\ 3 - i & 5 \end{pmatrix}

Check: the (1,2) entry is 3 + i, and the (2,1) entry is 3 - i — its conjugate. The diagonal entries 2 and 5 are real, since a_{jj} = \overline{a_{jj}} forces each diagonal entry to equal its own conjugate. And H^* = H.

Here is a skew-Hermitian matrix:

S = \begin{pmatrix} 2i & 1 + i \\ -1 + i & -3i \end{pmatrix}

Check: the diagonal entries are purely imaginary (2i and -3i), since \overline{a_{jj}} = -a_{jj} forces each diagonal entry's real part to be zero. The (1,2) entry is 1 + i and the (2,1) entry is -1 + i = -(1 - i) = -\overline{1 + i}. So S^* = -S.
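In NumPy the conjugate transpose is spelled .conj().T, which turns both checks into one-liners (a sketch, using the two example matrices above):

```python
import numpy as np

H = np.array([[2, 3 + 1j],
              [3 - 1j, 5]])
S = np.array([[2j, 1 + 1j],
              [-1 + 1j, -3j]])

# .conj().T is the conjugate transpose A* (also written A^dagger).
print(np.allclose(H.conj().T, H))   # Hermitian:      H* = H
print(np.allclose(S.conj().T, -S))  # skew-Hermitian: S* = -S
```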

Key properties of Hermitian matrices:

- All diagonal entries are real.
- All eigenvalues are real.
- Eigenvectors belonging to distinct eigenvalues are orthogonal.
- Real symmetric matrices are the special case with real entries.

Key properties of skew-Hermitian matrices:

- All diagonal entries are purely imaginary or zero.
- All eigenvalues are purely imaginary or zero.
- Multiplying by i turns a skew-Hermitian matrix into a Hermitian one.

Decomposition. Any square matrix A with complex entries can be written uniquely as

A = \underbrace{\frac{A + A^*}{2}}_{\text{Hermitian}} + \underbrace{\frac{A - A^*}{2}}_{\text{skew-Hermitian}}

This is the matrix analogue of writing any complex number as (real part) + (imaginary part).
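The decomposition is two lines of code. A NumPy sketch on a random complex matrix (the seed value is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

herm = (A + A.conj().T) / 2  # Hermitian part
skew = (A - A.conj().T) / 2  # skew-Hermitian part

print(np.allclose(herm.conj().T, herm))   # True
print(np.allclose(skew.conj().T, -skew))  # True
print(np.allclose(herm + skew, A))        # the parts reassemble A exactly
```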

Worked examples

Example 1: Verifying and classifying a given matrix

Given A = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}. Determine whether A is orthogonal, idempotent, involutory, or nilpotent.

Step 1. Check idempotent (A^2 = A?). Compute A^2.

A^2 = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}\begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix} = \begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix}

Why: the (1,1) entry is 0 \cdot 0 + 1 \cdot 1 + 1 \cdot 1 = 2. The (1,2) entry is 0 \cdot 1 + 1 \cdot 0 + 1 \cdot 1 = 1. And so on for all nine entries.

A^2 \neq A (for example, the (1,1) entry is 2 vs 0). Not idempotent.

Step 2. Check involutory (A^2 = I?). From Step 1, A^2 = \begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix} \neq I. Not involutory.

Why: A^2 has 2's on the diagonal and 1's off-diagonal. The identity matrix has 1's on the diagonal and 0's off-diagonal. They differ.

Step 3. Check nilpotent (A^k = O for some k?). Since A^2 has all positive entries, multiplying further will only produce matrices with positive entries — no power of A can be the zero matrix. Not nilpotent.

Why: every entry of A is non-negative, and A^2 has all entries \geq 1. Products of matrices with positive entries remain positive. The zero matrix cannot appear.

Step 4. Check orthogonal (A^T A = I?). Since A is symmetric (A = A^T), this reduces to A^2 = I, which already failed in Step 2.

Why: for a symmetric matrix, A^T = A, so A^T A = A \cdot A = A^2.

Result: A is none of the four special types. It is a symmetric matrix, but it does not satisfy the defining equation for any of the categories. It does obey its own algebraic identity, A^2 = A + 2I (check: both sides have 2's on the diagonal and 1's off the diagonal), but that is not one of the power conditions tested here. Not every matrix is special, and the classification test confirms that cleanly.

[Figure: classification flowchart for special matrices]
A quick classification strategy: compute $A^2$ first and compare it against $A$, $I$, and $O$. Then check the orthogonality condition $A^T A = I$ separately. For the $3 \times 3$ matrix in this example, none of the four conditions held.
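The flowchart translates directly into a short function. A NumPy sketch: the helper name classify is hypothetical, and the nilpotency test covers only index 2, exactly as in the flowchart.

```python
import numpy as np

def classify(A, tol=1e-10):
    """Test a real square matrix against the four flowchart conditions."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    A2 = A @ A
    labels = []
    if np.allclose(A2, A, atol=tol):
        labels.append("idempotent")
    if np.allclose(A2, np.eye(n), atol=tol):
        labels.append("involutory")
    if np.allclose(A2, np.zeros((n, n)), atol=tol):
        labels.append("nilpotent (index 2)")
    if np.allclose(A.T @ A, np.eye(n), atol=tol):
        labels.append("orthogonal")
    return labels or ["none of the four"]

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])
print(classify(A))  # ['none of the four']
```

Note that one matrix can carry several labels at once: the identity matrix, for instance, is simultaneously idempotent, involutory, and orthogonal.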

Example 2: Constructing an idempotent matrix from a vector

Given the column vector \mathbf{u} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, construct an idempotent matrix that projects onto the direction of \mathbf{u}.

Step 1. Compute \mathbf{u}^T \mathbf{u}.

\mathbf{u}^T \mathbf{u} = 1^2 + 1^2 = 2

Why: this is the squared length of \mathbf{u}. You need it to normalise the projection.

Step 2. Compute \mathbf{u}\mathbf{u}^T.

\mathbf{u}\mathbf{u}^T = \begin{pmatrix} 1 \\ 1 \end{pmatrix}\begin{pmatrix} 1 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}

Why: the outer product \mathbf{u}\mathbf{u}^T is a 2 \times 2 matrix. It captures the "shape" of the projection direction.

Step 3. Form P = \dfrac{\mathbf{u}\mathbf{u}^T}{\mathbf{u}^T\mathbf{u}}.

P = \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix}

Why: dividing by \mathbf{u}^T\mathbf{u} normalises the projection so that applying it twice does nothing extra. This is the formula for orthogonal projection onto a one-dimensional subspace.

Step 4. Verify P^2 = P.

P^2 = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix}\begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix} = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix} = P \quad \checkmark

Why: each entry confirms it. For instance, the (1,1) entry of P^2 is \frac{1}{2} \cdot \frac{1}{2} + \frac{1}{2} \cdot \frac{1}{2} = \frac{1}{4} + \frac{1}{4} = \frac{1}{2}, which matches P.

Result: P = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix} is idempotent, and it projects any vector onto the line y = x.
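Steps 1 through 4, condensed into a NumPy sketch:

```python
import numpy as np

u = np.array([[1.0], [1.0]])  # column vector

# P = u u^T / (u^T u): orthogonal projection onto the line spanned by u.
P = (u @ u.T) / (u.T @ u)

print(P)                         # [[0.5 0.5], [0.5 0.5]]
print(np.allclose(P @ P, P))     # idempotent: True
print(P @ np.array([4.0, 2.0]))  # projects (4, 2) to (3, 3)
```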

[Figure: projection of (4, 2) onto the line y = x, landing at (3, 3)]
Applying $P$ to the vector $(4, 2)$: $P \begin{pmatrix} 4 \\ 2 \end{pmatrix} = \begin{pmatrix} 3 \\ 3 \end{pmatrix}$. The result lies on the line $y = x$, and the dashed segment from $(4, 2)$ to $(3, 3)$ is perpendicular to that line. This is orthogonal projection — and applying $P$ to $(3, 3)$ returns $(3, 3)$ again, confirming idempotence.

Common confusions

Going deeper

If you came here to learn what each type of special matrix is, you have it — you can stop here. The rest of this section explores the relationships between these types and their role in decomposition theorems.

The relationship web

These six types are not isolated categories. They are connected.

Involutory \leftrightarrow Idempotent. If A is involutory (A^2 = I), then P = \frac{1}{2}(I + A) is idempotent. Conversely, if P is idempotent, then A = 2P - I is involutory. The map A \mapsto \frac{1}{2}(I + A) is a bijection: every idempotent matrix corresponds to exactly one involutory matrix, and vice versa.
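The correspondence is easy to verify numerically; a NumPy sketch using the swap matrix from earlier:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])      # involutory: A^2 = I

P = (np.eye(2) + A) / 2         # the corresponding idempotent matrix
print(np.allclose(P @ P, P))    # True

A_back = 2 * P - np.eye(2)      # recover the involutory matrix
print(np.allclose(A_back, A))   # True
```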

Orthogonal + Symmetric \Rightarrow Involutory. If A is both orthogonal (A^T A = I) and symmetric (A^T = A), then A \cdot A = A^T A = I, so A is involutory. Reflection matrices — like \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} — are of this kind.

Nilpotent \Rightarrow Singular. Every nilpotent matrix has determinant 0, so it is not invertible. You cannot have a matrix that is both nilpotent and orthogonal — orthogonal matrices have determinant \pm 1 and are always invertible.

Hermitian \leftrightarrow Skew-Hermitian. Multiplying a Hermitian matrix by i gives a skew-Hermitian matrix, and vice versa. They are the same concept rotated by 90° in the complex plane.

Spectral decomposition

A Hermitian matrix H of size n \times n can always be written as

H = \lambda_1 P_1 + \lambda_2 P_2 + \cdots + \lambda_n P_n

where each \lambda_j is a real eigenvalue and each P_j is an idempotent, Hermitian projection matrix. The P_j's are mutually orthogonal (P_j P_k = O when j \neq k) and they sum to I. This is the spectral theorem — arguably the single most important result in linear algebra. It says that every Hermitian matrix, no matter how complicated, is just a weighted sum of simple projection operators. The special matrices in this article are the building blocks of that decomposition.
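np.linalg.eigh returns exactly the ingredients of this decomposition: real eigenvalues and an orthonormal eigenbasis. A sketch that rebuilds the Hermitian example matrix H from a weighted sum of rank-1 projectors:

```python
import numpy as np

H = np.array([[2, 3 + 1j],
              [3 - 1j, 5]])

# eigh is specialized for Hermitian matrices: real eigenvalues,
# orthonormal eigenvectors (returned as the columns of eigvecs).
eigvals, eigvecs = np.linalg.eigh(H)

# Each P_j = v_j v_j^* is a Hermitian, idempotent rank-1 projector.
H_rebuilt = sum(lam * np.outer(v, v.conj())
                for lam, v in zip(eigvals, eigvecs.T))
print(np.allclose(H_rebuilt, H))  # True
```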

Where this leads next