In short
Three points are collinear (lie on the same line) exactly when the vector from the first point to the second is a scalar multiple of the vector from the first to the third. Four points are coplanar (lie on the same plane) exactly when one of the three vectors joining them from a common point can be written as a combination of the other two. The language that makes all of this precise is linear dependence: a set of vectors is linearly dependent when at least one of them is redundant — expressible in terms of the others.
Take three cities on a map — say Delhi, Jaipur, and Agra. Do they lie on a single straight line? Look at a map and you will see that they roughly do: the three cities form a fairly flat triangle, with Jaipur to the south-west and Agra to the south-east of Delhi. But "roughly" is not mathematics. How do you check precisely whether three points are collinear?
Now take four cities: Delhi, Jaipur, Agra, and Lucknow. Do all four lie on the same flat plane? In a real map, everything is on the same flat sheet of paper, so yes. But imagine these are not cities on a map — they are satellites in orbit, at different altitudes. Now the question becomes: do the four satellites lie on a single flat plane in 3D space, or are they spread across three dimensions?
These two questions — collinearity and coplanarity — are the geometric heart of this article. The algebraic tool that answers them both is the same: linear dependence.
Collinear vectors
Two nonzero vectors \vec{a} and \vec{b} are called collinear (or parallel) if they lie along the same line — they point in the same direction or in exactly opposite directions.
Collinear vectors
Two nonzero vectors \vec{a} and \vec{b} are collinear if and only if there exists a scalar \lambda such that:
\vec{b} = \lambda \vec{a}
Equivalently, \vec{a} and \vec{b} are collinear if and only if \vec{a} \times \vec{b} = \vec{0} (their cross product is the zero vector).
If \lambda > 0, the vectors point in the same direction; if \lambda < 0, they point in opposite directions. The magnitude |\lambda| is the ratio of their lengths: |\vec{b}| = |\lambda| \cdot |\vec{a}|.
In component form, if \vec{a} = a_1\hat{i} + a_2\hat{j} + a_3\hat{k} and \vec{b} = b_1\hat{i} + b_2\hat{j} + b_3\hat{k}, then \vec{b} = \lambda\vec{a} means:
b_1 = \lambda a_1, \quad b_2 = \lambda a_2, \quad b_3 = \lambda a_3, \qquad \text{i.e.} \quad \frac{b_1}{a_1} = \frac{b_2}{a_2} = \frac{b_3}{a_3} = \lambda
(provided none of the a_i is zero; when some are zero, the corresponding b_i must also be zero).
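In code, the ratio test is awkward precisely because of the zero-component caveat; the cross-product form of the test mentioned above avoids division entirely. A minimal Python sketch — the function names are my own, not from this article:

```python
def cross(a, b):
    """Cross product of two 3D vectors given as (x, y, z) tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def are_collinear(a, b):
    """Two vectors are collinear iff their cross product is the zero vector.
    Exact comparison assumes integer components; for floats, compare
    against a small tolerance instead of exact zero."""
    return cross(a, b) == (0, 0, 0)

print(are_collinear((1, 2, 3), (2, 4, 6)))   # True: b = 2a
print(are_collinear((1, 2, 3), (2, 4, 7)))   # False: ratios disagree
```

Note the design choice: no ratios are formed, so vectors with zero components need no special casing.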
Collinear points
Three points A, B, C are collinear — they lie on a single straight line — if and only if the vectors \overrightarrow{AB} and \overrightarrow{AC} are collinear (parallel). That is, there exists a scalar \lambda such that:
\overrightarrow{AC} = \lambda \, \overrightarrow{AB}
The geometric reason is clear: if \overrightarrow{AC} is just a scaled version of \overrightarrow{AB}, then C lies on the same line as A and B — it is reached by walking from A in the same direction as B, just a different distance.
Checking collinearity of three points. Given A(1, 2, 3), B(4, 5, 6), and C(7, 8, 9):
\overrightarrow{AB} = 3\hat{i} + 3\hat{j} + 3\hat{k}, \qquad \overrightarrow{AC} = 6\hat{i} + 6\hat{j} + 6\hat{k}
Is \overrightarrow{AC} = \lambda \cdot \overrightarrow{AB}? Check the component ratios: 6/3 = 6/3 = 6/3 = 2. Yes, \lambda = 2. The three points are collinear. Geometrically, C is twice as far from A as B is, in the same direction.
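The ratio check can also be automated with exact arithmetic. A sketch using Python's fractions module (the function name is my own): it returns the scalar \lambda when one vector is a multiple of the other, and None otherwise.

```python
from fractions import Fraction

def scalar_multiple(u, v):
    """Return lambda with v = lambda * u, or None if no such scalar exists.
    Handles zero components: a zero in u forces a zero in v."""
    lam = None
    for ui, vi in zip(u, v):
        if ui == 0:
            if vi != 0:
                return None          # zero component must match
            continue
        r = Fraction(vi, ui)         # exact ratio, no float error
        if lam is None:
            lam = r
        elif r != lam:
            return None              # ratios disagree
    return lam

# AB = (3, 3, 3), AC = (6, 6, 6) for A(1,2,3), B(4,5,6), C(7,8,9)
print(scalar_multiple((3, 3, 3), (6, 6, 6)))   # 2
```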
Linear combination
Before tackling coplanarity, you need a precise definition of what it means to "build" one vector from others.
Linear combination
A linear combination of vectors \vec{a}_1, \vec{a}_2, \ldots, \vec{a}_n is any expression of the form:
x_1\vec{a}_1 + x_2\vec{a}_2 + \cdots + x_n\vec{a}_n
where x_1, x_2, \ldots, x_n are scalars (real numbers).
Every vector operation you have seen — addition (\vec{a} + \vec{b} = 1 \cdot \vec{a} + 1 \cdot \vec{b}), scalar multiplication (3\vec{a} = 3 \cdot \vec{a} + 0 \cdot \vec{b}), the section formula (\frac{m\vec{b} + n\vec{a}}{m+n}) — is a linear combination. The term "linear" here means that you are only allowed to multiply vectors by scalars and add them. No squaring, no square-rooting, no multiplying two vectors together — just scaling and summing.
The set of all linear combinations of a given set of vectors is called their span. If \vec{a} and \vec{b} are not collinear, then \{x\vec{a} + y\vec{b} : x, y \in \mathbb{R}\} — the span of \vec{a} and \vec{b} — is an entire plane through the origin. Every vector in that plane can be built from \vec{a} and \vec{b}, and no vector outside the plane can.
Linear independence and dependence
The idea that separates collinear from non-collinear, coplanar from non-coplanar, is whether any vector in a set is redundant — expressible in terms of the others.
Linear independence and dependence
Vectors \vec{a}_1, \vec{a}_2, \ldots, \vec{a}_n are linearly dependent if there exist scalars x_1, x_2, \ldots, x_n, not all zero, such that:
x_1\vec{a}_1 + x_2\vec{a}_2 + \cdots + x_n\vec{a}_n = \vec{0}
If the only solution is x_1 = x_2 = \cdots = x_n = 0, the vectors are linearly independent.
What does linear dependence mean geometrically?
- Two vectors are linearly dependent \Leftrightarrow they are collinear. One can be written as a scalar multiple of the other. They lie on the same line through the origin.
- Three vectors are linearly dependent \Leftrightarrow they are coplanar. One can be written as a linear combination of the other two. They all lie in the same plane through the origin.
- Four or more vectors in 3D space are always linearly dependent. Three-dimensional space has room for at most three independent directions.
The geometric picture makes the key idea vivid. Linear dependence means one vector is trapped in the space already spanned by the others — it adds no new direction. Linear independence means each vector contributes a genuinely new direction that the others cannot replicate.
Coplanar vectors
Coplanar vectors
Three vectors \vec{a}, \vec{b}, \vec{c} are coplanar if they all lie in the same plane (or equivalently, in planes parallel to the same plane). This happens if and only if one of them can be expressed as a linear combination of the other two:
\vec{c} = x\vec{a} + y\vec{b}
for some scalars x and y.
Coplanar points. Four points A, B, C, D are coplanar if and only if the three vectors \overrightarrow{AB}, \overrightarrow{AC}, \overrightarrow{AD} are coplanar — i.e., linearly dependent.
In component form, if \overrightarrow{AB} = a_1\hat{i} + a_2\hat{j} + a_3\hat{k}, \overrightarrow{AC} = b_1\hat{i} + b_2\hat{j} + b_3\hat{k}, and \overrightarrow{AD} = c_1\hat{i} + c_2\hat{j} + c_3\hat{k}, then the three vectors are coplanar if and only if:
\begin{vmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{vmatrix} = 0
This determinant condition is the scalar triple product [\overrightarrow{AB}, \overrightarrow{AC}, \overrightarrow{AD}] equalling zero. Geometrically, the scalar triple product measures the volume of the parallelepiped (a 3D parallelogram) formed by the three vectors. If the volume is zero, the "parallelepiped" is flat — all three vectors lie in a plane.
A concrete check
Are the four points A(1, 2, 3), B(4, 0, 1), C(3, 4, 5), D(2, 6, 7) coplanar?
Compute the three vectors from A:
\overrightarrow{AB} = 3\hat{i} - 2\hat{j} - 2\hat{k}, \qquad \overrightarrow{AC} = 2\hat{i} + 2\hat{j} + 2\hat{k}, \qquad \overrightarrow{AD} = \hat{i} + 4\hat{j} + 4\hat{k}
Compute the determinant:
\begin{vmatrix} 3 & -2 & -2 \\ 2 & 2 & 2 \\ 1 & 4 & 4 \end{vmatrix}
Expand along the first row:
3(2 \cdot 4 - 2 \cdot 4) - (-2)(2 \cdot 4 - 2 \cdot 1) + (-2)(2 \cdot 4 - 2 \cdot 1) = 0 + 12 - 12 = 0
The determinant is zero, so the four points are coplanar. All four lie on a single plane in 3D space.
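The same check can be scripted. A minimal Python sketch — function names are my own — that reproduces this computation:

```python
def det3(m):
    """Determinant of a 3x3 matrix given as three row tuples."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def are_points_coplanar(p, q, r, s):
    """Four points are coplanar iff det of [PQ; PR; PS] is zero."""
    rows = [tuple(x - y for x, y in zip(t, p)) for t in (q, r, s)]
    return det3(rows) == 0

A, B, C, D = (1, 2, 3), (4, 0, 1), (3, 4, 5), (2, 6, 7)
print(are_points_coplanar(A, B, C, D))   # True
```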
Testing linear dependence: the systematic method
Given vectors \vec{a}, \vec{b}, \vec{c} in component form, how do you determine whether they are linearly dependent? Set up the equation:
x\vec{a} + y\vec{b} + z\vec{c} = \vec{0}
Write this in components. If \vec{a} = a_1\hat{i} + a_2\hat{j} + a_3\hat{k} and similarly for \vec{b} and \vec{c}, equating each component to zero gives the system:
a_1 x + b_1 y + c_1 z = 0, \qquad a_2 x + b_2 y + c_2 z = 0, \qquad a_3 x + b_3 y + c_3 z = 0
This is a homogeneous system of three equations in three unknowns. It always has the trivial solution x = y = z = 0. The vectors are linearly dependent if and only if there is a non-trivial solution — which happens exactly when the determinant of the coefficient matrix is zero.
This is why the determinant test works: a zero determinant means the system has infinitely many solutions, which means you can find scalars (not all zero) that combine the vectors to give zero, which means one vector is a combination of the others, which means they are dependent.
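To make the criterion concrete, here is a small Python sketch with a hypothetical trio of vectors chosen so that \vec{c} = \vec{a} + 2\vec{b}; the built-in dependence shows up as a zero determinant.

```python
def det3(rows):
    """Determinant of a 3x3 matrix given as three row tuples."""
    (a, b, c), (d, e, f), (g, h, i) = rows
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# c = a + 2b, i.e. 1*a + 2*b + (-1)*c = 0 is a non-trivial relation
a, b, c = (1, 0, 2), (0, 1, 1), (1, 2, 4)
print(det3([a, b, c]))   # 0 -> linearly dependent
```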
Worked examples
Example 1: Testing collinearity of three points
Are the points A(2, -1, 3), B(4, 1, 7), and C(7, 4, 13) collinear?
Step 1. Compute \overrightarrow{AB} and \overrightarrow{AC}.
\overrightarrow{AB} = 2\hat{i} + 2\hat{j} + 4\hat{k}, \qquad \overrightarrow{AC} = 5\hat{i} + 5\hat{j} + 10\hat{k}
Why: collinearity of three points reduces to collinearity of two vectors. Pick any one point as base and compute vectors to the other two.
Step 2. Check whether \overrightarrow{AC} = \lambda \cdot \overrightarrow{AB} for some scalar \lambda.
\frac{5}{2} = \frac{5}{2} = \frac{10}{4} = \frac{5}{2}
All three component ratios are equal: \lambda = 5/2.
Why: if all component ratios are equal, one vector is a scalar multiple of the other, so both vectors point in the same direction (or opposite). The points lie on a single line.
Step 3. Verify by writing the relationship explicitly.
\overrightarrow{AC} = \frac{5}{2} \, \overrightarrow{AB}
This confirms C lies on the ray from A through B, at 5/2 times the distance from A to B.
Why: \lambda = 5/2 > 1, so C is beyond B on the line from A. If \lambda were between 0 and 1, C would be between A and B. If \lambda were negative, C would be on the opposite side of A.
Step 4. State the conclusion.
Result: The points A, B, C are collinear. The point C lies on the line through A and B, at 5/2 times the distance AB from A, beyond B.
The component ratios tell you not just whether the points are collinear, but where each point sits along the line. Here, B divides AC in the ratio 2:3 from A.
Example 2: Testing coplanarity of four points
Are P(0, 1, 2), Q(1, 0, 3), R(3, 2, 1), S(1, 3, -1) coplanar?
Step 1. Compute three vectors from P.
\overrightarrow{PQ} = \hat{i} - \hat{j} + \hat{k}, \qquad \overrightarrow{PR} = 3\hat{i} + \hat{j} - \hat{k}, \qquad \overrightarrow{PS} = \hat{i} + 2\hat{j} - 3\hat{k}
Why: coplanarity of four points reduces to coplanarity of three vectors from a common point.
Step 2. Compute the 3 \times 3 determinant.
\begin{vmatrix} 1 & -1 & 1 \\ 3 & 1 & -1 \\ 1 & 2 & -3 \end{vmatrix}
Why: the determinant of the matrix formed by the three vectors equals their scalar triple product. It is zero exactly when the vectors are coplanar.
Step 3. Expand along the first row.
1(1 \cdot (-3) - (-1) \cdot 2) - (-1)(3 \cdot (-3) - (-1) \cdot 1) + 1(3 \cdot 2 - 1 \cdot 1) = -1 - 8 + 5 = -4
Why: each 2 \times 2 minor is computed as ad - bc. The cofactor expansion alternates signs +, -, + along the first row; here the middle entry is itself -1, so the minus sign from the expansion and the entry's own minus combine into a plus.
Step 4. Interpret.
The determinant is -4 \neq 0.
Result: The four points are not coplanar. They span the full three-dimensional space. The scalar triple product -4 means the parallelepiped formed by the three vectors has volume |-4| = 4 cubic units.
The sign of the determinant (-4) depends on the ordering of the vectors, but its absolute value (4) is the volume of the parallelepiped. When this volume is zero, the parallelepiped is flat and the points are coplanar. When it is nonzero, the fourth point sticks out of the plane defined by the first three.
Common confusions
- "If three points form a triangle, they are not collinear." Correct — and this is the contrapositive. Three collinear points form a degenerate triangle with zero area. Any three points that form a triangle with positive area are non-collinear.
- "Collinear vectors must start from the same point." Vectors are free objects — they can be placed anywhere. Two vectors are collinear if they are parallel (same or opposite direction), regardless of where they are positioned. Points are collinear if they lie on a single line. The distinction between parallel vectors and collinear points is important.
- "Three vectors in 3D are always linearly independent." Only if they point in genuinely different directions. If one of them is a linear combination of the other two, they are dependent — they all lie in a single plane and do not "fill up" the three-dimensional space.
- "The determinant test for coplanarity requires the position vectors." It requires the vectors from a common point, not the position vectors themselves. If you have four points, compute three vectors from one of them (any one), then form the determinant. Using the position vectors directly gives a different (and incorrect) matrix.
- "Linear independence is a property of individual vectors." It is a property of a set of vectors — you cannot say a single vector is "linearly independent" in isolation (except that a single nonzero vector is linearly independent as a one-element set). The term describes a relationship among the vectors in the set.
Going deeper
If you came here to learn the tests for collinearity and coplanarity, you have them — you can stop here. What follows explores the algebraic structure that makes these tests work.
The rank interpretation
The determinant test for coplanarity is really a question about the rank of a matrix. Arrange the three vectors as rows of a 3 \times 3 matrix:
M = \begin{pmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{pmatrix}
- If \text{rank}(M) = 3: the vectors span all of 3D space. They are linearly independent and not coplanar.
- If \text{rank}(M) = 2: the vectors span a plane. They are coplanar (linearly dependent), but not all collinear.
- If \text{rank}(M) = 1: the vectors span a line. All three are collinear (parallel).
- If \text{rank}(M) = 0: all vectors are zero.
The determinant being zero tells you the rank is less than 3, but it does not distinguish rank 2 from rank 1. If you need to know whether three coplanar vectors are also all collinear, check whether any two of them are parallel.
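This classification can be sketched in code: test the scalar triple product first, then fall back to pairwise cross products. A hypothetical helper (names are my own), assuming integer components:

```python
def rank_class(a, b, c):
    """Number of independent directions spanned by three 3D vectors.
    An illustrative sketch with exact integer arithmetic."""
    def cross(u, v):
        return (u[1]*v[2] - u[2]*v[1],
                u[2]*v[0] - u[0]*v[2],
                u[0]*v[1] - u[1]*v[0])

    bc = cross(b, c)
    triple = a[0]*bc[0] + a[1]*bc[1] + a[2]*bc[2]   # scalar triple product
    if triple != 0:
        return 3                                    # span all of 3D space
    vecs = [v for v in (a, b, c) if v != (0, 0, 0)]
    if not vecs:
        return 0                                    # all three are zero vectors
    if all(cross(u, v) == (0, 0, 0) for u in vecs for v in vecs):
        return 1                                    # all parallel: span a line
    return 2                                        # coplanar, not all collinear

print(rank_class((1, 0, 0), (0, 1, 0), (1, 1, 0)))   # 2: coplanar plane
```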
Linear dependence in higher dimensions
In n-dimensional space, at most n vectors can be linearly independent. Any set of n + 1 or more vectors is automatically dependent. In 3D (n = 3), this means four or more vectors are always dependent. In 2D (n = 2), three or more vectors are always dependent: any vector in the plane can be written as a combination of two given non-collinear ones.
This connects to the earlier article on components and direction: the dimension of the space equals the number of components needed, equals the maximum number of independent vectors, equals the number of basis vectors. These are all the same number, seen from different angles.
The bridge to the scalar triple product
The determinant you computed for coplanarity is exactly the scalar triple product \vec{a} \cdot (\vec{b} \times \vec{c}). This product has a beautiful geometric meaning: it is the signed volume of the parallelepiped formed by the three vectors. The sign depends on orientation (right-hand rule), and the magnitude is the volume. The coplanarity condition [\vec{a}, \vec{b}, \vec{c}] = 0 is simply saying that the parallelepiped is flat — it has zero volume — which means all three vectors lie in a plane.
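The identity can be checked numerically: computing \vec{a} \cdot (\vec{b} \times \vec{c}) directly for the three vectors of Example 2 reproduces the determinant value -4. A small Python sketch, with helper names of my own choosing:

```python
def cross(u, v):
    """Cross product of two 3D vectors given as tuples."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    """Dot product of two vectors."""
    return sum(ui * vi for ui, vi in zip(u, v))

# PQ, PR, PS from Example 2
a, b, c = (1, -1, 1), (3, 1, -1), (1, 2, -3)
print(dot(a, cross(b, c)))   # -4, matching the determinant
```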
The scalar triple product is covered in detail in the article on scalar triple product. What matters here is the connection: coplanarity, linear dependence, zero determinant, and zero volume are four ways of saying the same thing.
A condition using position vectors
There is an elegant alternative form. Three points A(\vec{a}), B(\vec{b}), C(\vec{c}) are collinear if and only if there exist scalars x, y, z (not all zero) such that:
x\vec{a} + y\vec{b} + z\vec{c} = \vec{0} \quad \text{with} \quad x + y + z = 0
The extra condition x + y + z = 0 distinguishes collinearity of points from linear dependence of vectors. Without it, the equation x\vec{a} + y\vec{b} + z\vec{c} = \vec{0} just says the position vectors are dependent — which could happen even if the points are not collinear (for instance, if the origin is one of the points). The constraint x + y + z = 0 ensures that the relation is genuinely about the points, not just the vectors from the origin.
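To see where the constraint comes from, rewrite the collinearity condition \overrightarrow{AC} = \lambda \overrightarrow{AB} in terms of position vectors:

```latex
\vec{c} - \vec{a} = \lambda(\vec{b} - \vec{a})
\;\Longrightarrow\;
(\lambda - 1)\,\vec{a} - \lambda\,\vec{b} + \vec{c} = \vec{0},
\qquad (\lambda - 1) + (-\lambda) + 1 = 0.
```

The coefficients automatically sum to zero: the affine condition is built into any relation that genuinely comes from the points rather than from the choice of origin.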
Similarly, four points are coplanar if and only if x\vec{a} + y\vec{b} + z\vec{c} + w\vec{d} = \vec{0} with x + y + z + w = 0 for some scalars not all zero. This is the affine dependence condition — the generalisation of linear dependence from vectors to points.
Where this leads next
- Vector Operations — the addition and scalar multiplication that underlie every linear combination in this article.
- Components and Direction — the component form that makes the determinant test computational.
- Dot Product — a multiplication that measures the angle between two vectors. If two vectors are perpendicular, their dot product is zero — an orthogonality condition complementary to the parallelism condition here.
- Cross Product — the cross product \vec{a} \times \vec{b} is zero exactly when \vec{a} and \vec{b} are collinear. Its magnitude gives the area of the parallelogram they form.
- Scalar Triple Product — the determinant condition for coplanarity, studied in full with volume interpretation and properties.