In short

Three points are collinear (lie on the same line) exactly when the vector from one of them to a second is a scalar multiple of the vector from that same point to the third. Four points are coplanar (lie on the same plane) exactly when one of the vectors connecting them can be written as a linear combination of the other two. The language that makes all of this precise is linear dependence: a set of vectors is linearly dependent when at least one of them is redundant — expressible in terms of the others.

Take three cities on a map — say Delhi, Jaipur, and Agra. Do they lie on a single straight line? Look at a map and you will see that they roughly do: the three cities form a very flat triangle, close to a single straight line. But "roughly" is not mathematics. How do you check precisely whether three points are collinear?

Now take four cities: Delhi, Jaipur, Agra, and Lucknow. Do all four lie on the same flat plane? In a real map, everything is on the same flat sheet of paper, so yes. But imagine these are not cities on a map — they are satellites in orbit, at different altitudes. Now the question becomes: do the four satellites lie on a single flat plane in 3D space, or are they spread across three dimensions?

These two questions — collinearity and coplanarity — are the geometric heart of this article. The algebraic tool that answers them both is the same: linear dependence.

Collinear vectors

Two nonzero vectors \vec{a} and \vec{b} are called collinear (or parallel) if they lie along the same line — they point in the same direction or in exactly opposite directions.

Collinear vectors

Two nonzero vectors \vec{a} and \vec{b} are collinear if and only if there exists a scalar \lambda such that:

\vec{b} = \lambda\vec{a}

Equivalently, \vec{a} and \vec{b} are collinear if and only if \vec{a} \times \vec{b} = \vec{0} (their cross product is the zero vector).

If \lambda > 0, the vectors point in the same direction; if \lambda < 0, they point in opposite directions. The magnitude |\lambda| is the ratio of their lengths: |\vec{b}| = |\lambda| \cdot |\vec{a}|.

[Figure: collinear and non-collinear vectors]
Left: $\vec{b} = 2\vec{a}$ — the two vectors are collinear (one is a scalar multiple of the other). Right: $\vec{c}$ and $\vec{d}$ point in different directions — no scalar $\lambda$ can turn one into the other.

In component form, if \vec{a} = a_1\hat{i} + a_2\hat{j} + a_3\hat{k} and \vec{b} = b_1\hat{i} + b_2\hat{j} + b_3\hat{k}, then \vec{b} = \lambda\vec{a} means:

\frac{b_1}{a_1} = \frac{b_2}{a_2} = \frac{b_3}{a_3} = \lambda

(provided none of the a_i is zero; when some are zero, the corresponding b_i must also be zero).

Collinear points

Three points A, B, C are collinear — they lie on a single straight line — if and only if the vectors \overrightarrow{AB} and \overrightarrow{AC} are collinear (parallel). That is, there exists a scalar \lambda such that:

\overrightarrow{AC} = \lambda \cdot \overrightarrow{AB}

The geometric reason is clear: if \overrightarrow{AC} is just a scaled version of \overrightarrow{AB}, then C lies on the same line as A and B — it is reached by walking from A in the same direction as B, just a different distance.

Checking collinearity of three points. Given A(1, 2, 3), B(4, 5, 6), and C(7, 8, 9):

\overrightarrow{AB} = 3\hat{i} + 3\hat{j} + 3\hat{k}, \qquad \overrightarrow{AC} = 6\hat{i} + 6\hat{j} + 6\hat{k}

Is \overrightarrow{AC} = \lambda \cdot \overrightarrow{AB}? Check the component ratios: 6/3 = 6/3 = 6/3 = 2. Yes, \lambda = 2. The three points are collinear. Geometrically, C is twice as far from A as B is, in the same direction.
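The same test is easy to automate. A minimal Python sketch (the helper names `cross` and `are_collinear` are my own); using the cross-product form of the test avoids special-casing zero components:

```python
def cross(u, v):
    """Cross product of two 3D vectors given as (x, y, z) tuples."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def are_collinear(a, b, c, tol=1e-9):
    """True when points a, b, c lie on one straight line.

    The points are collinear exactly when AB x AC is the zero vector."""
    ab = tuple(q - p for p, q in zip(a, b))
    ac = tuple(q - p for p, q in zip(a, c))
    return all(abs(x) < tol for x in cross(ab, ac))

# The worked example: A(1, 2, 3), B(4, 5, 6), C(7, 8, 9)
print(are_collinear((1, 2, 3), (4, 5, 6), (7, 8, 9)))  # True
```

The tolerance parameter matters once coordinates are floating-point: an exact comparison with zero would reject points that are collinear up to rounding error.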

Linear combination

Before tackling coplanarity, you need a precise definition of what it means to "build" one vector from others.

Linear combination

A linear combination of vectors \vec{a}_1, \vec{a}_2, \ldots, \vec{a}_n is any expression of the form:

x_1\vec{a}_1 + x_2\vec{a}_2 + \cdots + x_n\vec{a}_n

where x_1, x_2, \ldots, x_n are scalars (real numbers).

Every vector operation you have seen — addition (\vec{a} + \vec{b} = 1 \cdot \vec{a} + 1 \cdot \vec{b}), scalar multiplication (3\vec{a} = 3 \cdot \vec{a} + 0 \cdot \vec{b}), the section formula (\frac{m\vec{b} + n\vec{a}}{m+n}) — is a linear combination. The term "linear" here means that you are only allowed to multiply vectors by scalars and add them. No squaring, no square-rooting, no multiplying two vectors together — just scaling and summing.

The set of all linear combinations of a given set of vectors is called their span. If \vec{a} and \vec{b} are not collinear, then \{x\vec{a} + y\vec{b} : x, y \in \mathbb{R}\} — the span of \vec{a} and \vec{b} — is an entire plane through the origin. Every vector in that plane can be built from \vec{a} and \vec{b}, and no vector outside the plane can.

[Figure: building a vector as a linear combination]
The linear combination $\vec{c} = 2\vec{a} + \vec{b}$: scale $\vec{a}$ by 2, then place $\vec{b}$ at the head. The vector from the origin to the endpoint is the combination. Every vector in the plane of $\vec{a}$ and $\vec{b}$ can be reached by adjusting the two scalars.
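Finding the scalars in a combination like c = 2a + b is just solving a small linear system. A sketch with NumPy (the vectors are illustrative values of my own; np.linalg.solve needs a unique solution, which is exactly the case where a and b are not collinear):

```python
import numpy as np

# Two non-collinear vectors in the plane (illustrative values).
a = np.array([1.0, 0.0])
b = np.array([1.0, 1.0])
c = 2 * a + b          # target: should decompose as 2a + 1b

# Columns of M are a and b; solve M @ [x, y] = c for the scalars.
M = np.column_stack([a, b])
x, y = np.linalg.solve(M, c)
print(x, y)  # 2.0 1.0
```

If a and b were collinear, M would be singular and the solve would fail — the algebraic echo of the fact that collinear vectors span only a line, not the whole plane.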

Linear independence and dependence

The idea that separates collinear from non-collinear, coplanar from non-coplanar, is whether any vector in a set is redundant — expressible in terms of the others.

Linear independence and dependence

Vectors \vec{a}_1, \vec{a}_2, \ldots, \vec{a}_n are linearly dependent if there exist scalars x_1, x_2, \ldots, x_n, not all zero, such that:

x_1\vec{a}_1 + x_2\vec{a}_2 + \cdots + x_n\vec{a}_n = \vec{0}

If the only solution is x_1 = x_2 = \cdots = x_n = 0, the vectors are linearly independent.

What does linear dependence mean geometrically?

[Figure: linear independence vs dependence]
Left: $\vec{c}$ lies in the plane of $\vec{a}$ and $\vec{b}$ — it can be written as $2\vec{a} + \vec{b}$, so $\{\vec{a}, \vec{b}, \vec{c}\}$ are linearly dependent. Right: $\vec{d}$ sticks out of the plane — no combination of $\vec{a}$ and $\vec{b}$ can reach it, so $\{\vec{a}, \vec{b}, \vec{d}\}$ are linearly independent.

The picture makes the key idea vivid. Linear dependence means one vector is trapped in the space already spanned by the others — it adds no new direction. Linear independence means each vector contributes a genuinely new direction that the others cannot replicate.

Coplanar vectors

Coplanar vectors

Three vectors \vec{a}, \vec{b}, \vec{c} are coplanar if they all lie in the same plane (or equivalently, in planes parallel to the same plane). This happens if and only if one of them can be expressed as a linear combination of the other two:

\vec{c} = x\vec{a} + y\vec{b}

for some scalars x and y.

Coplanar points. Four points A, B, C, D are coplanar if and only if the three vectors \overrightarrow{AB}, \overrightarrow{AC}, \overrightarrow{AD} are coplanar — i.e., linearly dependent.

In component form, if \overrightarrow{AB} = a_1\hat{i} + a_2\hat{j} + a_3\hat{k}, \overrightarrow{AC} = b_1\hat{i} + b_2\hat{j} + b_3\hat{k}, and \overrightarrow{AD} = c_1\hat{i} + c_2\hat{j} + c_3\hat{k}, then the three vectors are coplanar if and only if:

\begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} = 0

This determinant condition is the scalar triple product [\overrightarrow{AB}, \overrightarrow{AC}, \overrightarrow{AD}] equalling zero. Geometrically, the scalar triple product measures the volume of the parallelepiped (a 3D parallelogram) formed by the three vectors. If the volume is zero, the "parallelepiped" is flat — all three vectors lie in a plane.

[Figure: coplanar vectors form a flat parallelepiped]
Left: when three vectors are not coplanar, they form a parallelepiped with nonzero volume. Right: when three vectors are coplanar, the parallelepiped collapses flat — its volume is zero. The determinant measures this volume.

A concrete check

Are the four points A(1, 2, 3), B(4, 0, 1), C(3, 4, 5), D(2, 6, 7) coplanar?

Compute the three vectors from A:

\overrightarrow{AB} = 3\hat{i} - 2\hat{j} - 2\hat{k}
\overrightarrow{AC} = 2\hat{i} + 2\hat{j} + 2\hat{k}
\overrightarrow{AD} = 1\hat{i} + 4\hat{j} + 4\hat{k}

Compute the determinant:

\begin{vmatrix} 3 & -2 & -2 \\ 2 & 2 & 2 \\ 1 & 4 & 4 \end{vmatrix}

Expand along the first row:

= 3(2 \cdot 4 - 2 \cdot 4) - (-2)(2 \cdot 4 - 2 \cdot 1) + (-2)(2 \cdot 4 - 2 \cdot 1)
= 3(8 - 8) + 2(8 - 2) - 2(8 - 2)
= 3(0) + 2(6) - 2(6)
= 0 + 12 - 12 = 0

The determinant is zero, so the four points are coplanar. All four lie on a single plane in 3D space.
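The same computation takes a few lines of NumPy (a sketch; np.linalg.det returns a float, so the comparison is against a small tolerance rather than exact zero):

```python
import numpy as np

A = np.array([1, 2, 3]); B = np.array([4, 0, 1])
C = np.array([3, 4, 5]); D = np.array([2, 6, 7])

# Rows are the three vectors from A; the points are coplanar iff det = 0.
M = np.array([B - A, C - A, D - A], dtype=float)
det = np.linalg.det(M)
print(abs(det) < 1e-9)  # True: the four points are coplanar
```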

Testing linear dependence: the systematic method

Given vectors \vec{a}, \vec{b}, \vec{c} in component form, how do you determine whether they are linearly dependent? Set up the equation:

x\vec{a} + y\vec{b} + z\vec{c} = \vec{0}

Write this in components. If \vec{a} = a_1\hat{i} + a_2\hat{j} + a_3\hat{k} and similarly for \vec{b} and \vec{c}, equating each component to zero gives the system:

a_1 x + b_1 y + c_1 z = 0
a_2 x + b_2 y + c_2 z = 0
a_3 x + b_3 y + c_3 z = 0

This is a homogeneous system of three equations in three unknowns. It always has the trivial solution x = y = z = 0. The vectors are linearly dependent if and only if there is a non-trivial solution — which happens exactly when the determinant of the coefficient matrix is zero.

This is why the determinant test works: a zero determinant means the system has infinitely many solutions, which means you can find scalars (not all zero) that combine the vectors to give zero, which means one vector is a combination of the others, which means they are dependent.
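The non-trivial solution can be produced explicitly, not just detected. A sketch (the function name is my own) that reads a null-space vector off the SVD of the coefficient matrix:

```python
import numpy as np

def dependence_coefficients(a, b, c, tol=1e-9):
    """Return (x, y, z), not all zero, with x*a + y*b + z*c = 0,
    or None when only the trivial solution exists (independent vectors).

    The columns of M are the vectors; a non-trivial solution of the
    homogeneous system is a null-space vector of M, which the SVD
    exposes as the right singular vector for a zero singular value."""
    M = np.column_stack([a, b, c]).astype(float)
    _, s, vt = np.linalg.svd(M)
    if s[-1] > tol:      # smallest singular value nonzero -> full rank
        return None
    return vt[-1]

# a, b, and a + b are obviously dependent: 1*a + 1*b - 1*(a + b) = 0.
coeffs = dependence_coefficients([1, 0, 0], [0, 1, 0], [1, 1, 0])
print(coeffs is not None)  # True
```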

Worked examples

Example 1: Testing collinearity of three points

Are the points A(2, -1, 3), B(4, 1, 7), and C(7, 4, 13) collinear?

Step 1. Compute \overrightarrow{AB} and \overrightarrow{AC}.

\overrightarrow{AB} = (4-2)\hat{i} + (1-(-1))\hat{j} + (7-3)\hat{k} = 2\hat{i} + 2\hat{j} + 4\hat{k}
\overrightarrow{AC} = (7-2)\hat{i} + (4-(-1))\hat{j} + (13-3)\hat{k} = 5\hat{i} + 5\hat{j} + 10\hat{k}

Why: collinearity of three points reduces to collinearity of two vectors. Pick any one point as base and compute vectors to the other two.

Step 2. Check whether \overrightarrow{AC} = \lambda \cdot \overrightarrow{AB} for some scalar \lambda.

\frac{5}{2} = \frac{5}{2} = \frac{10}{4} = \frac{5}{2}

All three component ratios are equal: \lambda = 5/2.

Why: if all component ratios are equal, one vector is a scalar multiple of the other, so both vectors point in the same direction (or opposite). The points lie on a single line.

Step 3. Verify by writing the relationship explicitly.

\overrightarrow{AC} = \frac{5}{2} \cdot \overrightarrow{AB}

This confirms C lies on the ray from A through B, at 5/2 times the distance from A to B.

Why: \lambda = 5/2 > 1, so C is beyond B on the line from A. If \lambda were between 0 and 1, C would be between A and B. If \lambda were negative, C would be on the opposite side of A.

Step 4. State the conclusion.

Result: The points A, B, C are collinear. The point C lies on the line through A and B, at 5/2 times the distance AB from A, beyond B.

The three points plotted in the $xz$-plane (ignoring the $y$-coordinate for visibility). They lie on a straight line — confirming collinearity. The ratio $AB : BC$ equals $2 : 3$, consistent with $\overrightarrow{AC} = (5/2)\overrightarrow{AB}$ meaning $AB : AC = 2 : 5$, hence $AB : BC = 2 : 3$.

The component ratios tell you not just whether the points are collinear, but where each point sits along the line. Here, B divides AC in the ratio 2:3 from A.

Example 2: Testing coplanarity of four points

Are P(0, 1, 2), Q(1, 0, 3), R(3, 2, 1), S(1, 3, -1) coplanar?

Step 1. Compute three vectors from P.

\overrightarrow{PQ} = 1\hat{i} - 1\hat{j} + 1\hat{k}
\overrightarrow{PR} = 3\hat{i} + 1\hat{j} - 1\hat{k}
\overrightarrow{PS} = 1\hat{i} + 2\hat{j} - 3\hat{k}

Why: coplanarity of four points reduces to coplanarity of three vectors from a common point.

Step 2. Compute the 3 \times 3 determinant.

\begin{vmatrix} 1 & -1 & 1 \\ 3 & 1 & -1 \\ 1 & 2 & -3 \end{vmatrix}

Why: the determinant of the matrix formed by the three vectors equals their scalar triple product. It is zero exactly when the vectors are coplanar.

Step 3. Expand along the first row.

= 1\begin{vmatrix} 1 & -1 \\ 2 & -3 \end{vmatrix} - (-1)\begin{vmatrix} 3 & -1 \\ 1 & -3 \end{vmatrix} + 1\begin{vmatrix} 3 & 1 \\ 1 & 2 \end{vmatrix}
= 1(-3 - (-2)) + 1(-9 - (-1)) + 1(6 - 1)
= 1(-3 + 2) + 1(-9 + 1) + 1(5)
= (-1) + (-8) + 5 = -4

Why: each 2 \times 2 minor is computed as ad - bc. The cofactor expansion alternates signs +, -, + along the first row; here the second entry is itself -1, so the expansion's minus sign and the entry's minus sign cancel, giving +1 times that minor.

Step 4. Interpret.

The determinant is -4 \neq 0.

Result: The four points are not coplanar. They span the full three-dimensional space. The scalar triple product -4 means the parallelepiped formed by the three vectors has volume |-4| = 4 cubic units.

The four points projected onto the $xy$-plane. In this projection they form a proper quadrilateral — they do not collapse to a line, which is consistent with the non-zero determinant. In full 3D, these four points do not even share a common plane.

The sign of the determinant (-4) depends on the ordering of the vectors, but its absolute value (4) is the volume of the parallelepiped. When this volume is zero, the parallelepiped is flat and the points are coplanar. When it is nonzero, the fourth point sticks out of the plane defined by the first three.
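The scalar triple product interpretation can be checked directly with one cross product and one dot product (a NumPy sketch):

```python
import numpy as np

P = np.array([0, 1, 2]); Q = np.array([1, 0, 3])
R = np.array([3, 2, 1]); S = np.array([1, 3, -1])

pq, pr, ps = Q - P, R - P, S - P

# Scalar triple product [PQ, PR, PS] = PQ . (PR x PS),
# equal to the 3x3 determinant computed above.
triple = np.dot(pq, np.cross(pr, ps))
print(triple)       # -4: nonzero, so the points are not coplanar
print(abs(triple))  # 4: volume of the parallelepiped
```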

Going deeper

If you came here to learn the tests for collinearity and coplanarity, you have them — you can stop here. What follows explores the algebraic structure that makes these tests work.

The rank interpretation

The determinant test for coplanarity is really a question about the rank of a matrix. Arrange the three vectors as rows of a 3 \times 3 matrix:

M = \begin{pmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{pmatrix}

The determinant being zero tells you the rank is less than 3, but it does not distinguish rank 2 (the vectors span a plane) from rank 1 (they all lie along a single line). If you need to know whether three coplanar vectors are also all collinear, check whether some pair of them is non-parallel: if such a pair exists, the rank is 2; if every pair is parallel, the rank is 1.
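NumPy computes the rank directly, making the rank-1 versus rank-2 distinction a one-liner (a sketch with illustrative vectors of my own):

```python
import numpy as np

# Three coplanar vectors that are not all parallel: rank 2.
coplanar = np.array([[1, 0, 0], [0, 1, 0], [1, 1, 0]])

# Three mutually parallel vectors: rank 1.
parallel = np.array([[1, 2, 3], [2, 4, 6], [-1, -2, -3]])

print(np.linalg.matrix_rank(coplanar))  # 2
print(np.linalg.matrix_rank(parallel))  # 1
```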

Linear dependence in higher dimensions

In n-dimensional space, at most n vectors can be linearly independent. Any set of n + 1 or more vectors is automatically dependent. In 3D (n = 3), this means four or more vectors are always dependent. In 2D (n = 2), three or more vectors are always dependent — any three vectors in a plane satisfy a non-trivial relation x\vec{a} + y\vec{b} + z\vec{c} = \vec{0}.
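This bound is visible numerically: the rank of a matrix can never exceed the number of its columns, so any four vectors in 3D have rank at most 3. A quick sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
vectors = rng.standard_normal((4, 3))   # four random vectors in 3D

# Rank is capped by the dimension of the space (3 here), so
# four 3D vectors are always linearly dependent.
print(np.linalg.matrix_rank(vectors) <= 3)  # True
```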

This connects to the earlier article on components and direction: the dimension of the space equals the number of components needed, equals the maximum number of independent vectors, equals the number of basis vectors. These are all the same number, seen from different angles.

The bridge to the scalar triple product

The determinant you computed for coplanarity is exactly the scalar triple product \vec{a} \cdot (\vec{b} \times \vec{c}). This product has a beautiful geometric meaning: it is the signed volume of the parallelepiped formed by the three vectors. The sign depends on orientation (right-hand rule), and the magnitude is the volume. The coplanarity condition [\vec{a}, \vec{b}, \vec{c}] = 0 is simply saying that the parallelepiped is flat — it has zero volume — which means all three vectors lie in a plane.

The scalar triple product is covered in detail in the article on scalar triple product. What matters here is the connection: coplanarity, linear dependence, zero determinant, and zero volume are four ways of saying the same thing.

A condition using position vectors

There is an elegant alternative form. Three points A(\vec{a}), B(\vec{b}), C(\vec{c}) are collinear if and only if there exist scalars x, y, z (not all zero) such that:

x\vec{a} + y\vec{b} + z\vec{c} = \vec{0} \quad \text{and} \quad x + y + z = 0

The extra condition x + y + z = 0 distinguishes collinearity of points from linear dependence of vectors. Without it, the equation x\vec{a} + y\vec{b} + z\vec{c} = \vec{0} just says the position vectors are dependent — which could happen even if the points are not collinear (for instance, if the origin is one of the points). The constraint x + y + z = 0 ensures that the relation is genuinely about the points, not just the vectors from the origin.

Similarly, four points are coplanar if and only if x\vec{a} + y\vec{b} + z\vec{c} + w\vec{d} = \vec{0} with x + y + z + w = 0 for some scalars not all zero. This is the affine dependence condition — the generalisation of linear dependence from vectors to points.
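The affine dependence condition has a neat computational form: prepend a 1 to each position vector and test ordinary linear dependence of the augmented 4D vectors, since x(1, \vec{a}) + y(1, \vec{b}) + z(1, \vec{c}) + w(1, \vec{d}) = 0 packages both x + y + z + w = 0 and the vector relation at once. A sketch (the function name is my own):

```python
import numpy as np

def coplanar_points(points, tol=1e-9):
    """True when four 3D points are affinely dependent, i.e. coplanar.

    Prepending a 1 to each point turns the affine condition
    x + y + z + w = 0 into plain linear dependence in 4D, which the
    4x4 determinant detects."""
    aug = np.hstack([np.ones((4, 1)), np.asarray(points, dtype=float)])
    return abs(np.linalg.det(aug)) < tol

# The four coplanar points from the concrete check earlier.
print(coplanar_points([(1, 2, 3), (4, 0, 1), (3, 4, 5), (2, 6, 7)]))  # True
```

Row-reducing the augmented matrix (subtracting the first row from the others) recovers exactly the determinant of the three difference vectors used before, so the two tests agree.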

Where this leads next