In short
Every measurement carries uncertainty. The absolute error is the magnitude of the difference between the measured value and the true (or mean) value. The relative error is the absolute error divided by the measured value, and the percentage error is the relative error times 100. When you combine measurements — add, multiply, divide, or raise to a power — the errors propagate: absolute errors add for sums and differences, while relative errors add for products, quotients, and powers (\Delta Z/Z = n\,\Delta A/A for Z = A^n). A measurement reported without its uncertainty is incomplete.
Picture this: you are in the physics lab, timing a simple pendulum. The teacher says measure 20 complete oscillations with a stopwatch. You press start, count twenty swings, press stop. The display reads 28.6 seconds. Your lab partner, timing the same pendulum, gets 28.3 seconds. The student behind you gets 29.1 seconds. Three people, three different numbers, same pendulum. Whose answer is "correct"?
None of them — and all of them. Each reading is an honest measurement of the same physical quantity, and each is slightly different because of tiny variations in reaction time, in where exactly the eye judges "one complete swing," in how the thumb hits the button. The real question is not which number is right but how far off could any of them be, and does that uncertainty change the conclusion you are drawing?
That is what this article is about. Not the annoying fact that measurements are imperfect — you already know that. The interesting part is that physics gives you a systematic way to quantify the imperfection, track it through calculations, and decide whether your final answer is reliable enough to trust.
Types of errors
Not all errors are created equal. Some are sneaky and consistent, pushing every reading in the same direction. Others are random, scattering readings above and below the true value. The distinction matters because the strategy for dealing with each type is completely different.
Systematic errors
A systematic error shifts every measurement in the same direction by roughly the same amount. It does not average out if you repeat the experiment — in fact, repeating the experiment a hundred times gives you the same wrong answer a hundred times, just with more confidence in it.
Systematic errors fall into three categories:
Instrumental errors. Your metre ruler has expanded slightly in the summer heat. Every length you measure with it reads a fraction of a millimetre too short. Or your spring balance has a bent pointer that rests at 0.2 N instead of zero — every weight reading is 0.2 N too high. The instrument itself is biased.
Personal errors (parallax and reaction time). When you read a thermometer, your eye is slightly above the mercury level, so you read a temperature that is consistently a little too high. When you start a stopwatch at the moment a pendulum crosses the mean position, your reaction time adds roughly 0.2 seconds to every reading — always too much, never too little. These are habits baked into the way you take the measurement.
Environmental errors. The room temperature drifts during a calorimetry experiment. A breeze pushes the pendulum slightly off its plane of oscillation. Humidity changes the resistance of the connecting wires in your metre bridge setup. These are conditions that affect the measurement but are not part of the physics you are trying to study.
The fix for systematic errors is not more repetitions — it is identifying the source and correcting for it. Zero the spring balance before reading. Apply a temperature correction to the ruler. Time 20 oscillations instead of one, so that your reaction-time error (roughly constant at 0.2 s) gets divided by 20 instead of dominating a single-oscillation measurement.
Random errors
A random error is the scatter you see when you repeat a measurement under identical conditions. Your five stopwatch readings of the same pendulum might be 28.3, 28.6, 28.4, 28.7, 28.5 seconds. Each reading is slightly different, and there is no pattern — sometimes you are a bit fast, sometimes a bit slow. No single reading is privileged over the others.
Random errors do average out. If you take enough measurements, the values above the true answer and the values below it roughly cancel, and the mean of all your readings is closer to the true value than any individual reading. This is the whole point of repeating measurements: repetition does not fix systematic errors, but it does reduce random errors.
Quantifying the error — absolute, relative, and percentage
Saying "the measurement has some error" is not useful. Physics demands a number. How much error?
Absolute error
Suppose you measure the length of a cricket pitch and get 20.15 m. The actual regulation length is 20.12 m. The absolute error is the magnitude of the difference:

\Delta a = |a_{measured} - a_{true}| = |20.15 - 20.12| = 0.03 m
Why the absolute value: the error could be positive (you measured too much) or negative (you measured too little). The absolute error captures the size of the discrepancy regardless of direction.
In practice, you often do not know the true value — that is the whole reason you are measuring. Instead, you take multiple measurements and use the mean as your best estimate. The absolute error of each individual reading is then:

\Delta a_i = |a_i - \bar{a}|

where \bar{a} is the arithmetic mean of all your readings. And the mean absolute error — the number you report as the uncertainty — is the average of these individual errors:

\Delta \bar{a} = \frac{1}{n} \sum_{i=1}^{n} |a_i - \bar{a}|
Why the mean of the absolute deviations: each reading deviates from the mean by some amount. Averaging these deviations gives you a single number that represents how far a typical reading sits from the best estimate.
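As a quick check on the arithmetic, the mean and mean absolute error of the five stopwatch readings from earlier can be computed in a few lines (a sketch in Python; variable names are my own):

```python
# Mean and mean absolute error for the five repeated stopwatch readings
# of the pendulum used earlier in the text.
readings = [28.3, 28.6, 28.4, 28.7, 28.5]  # seconds

n = len(readings)
mean = sum(readings) / n                        # best estimate, a_bar
deviations = [abs(a - mean) for a in readings]  # |a_i - a_bar|
mean_abs_error = sum(deviations) / n            # the reported uncertainty

print(f"t = {mean:.2f} ± {mean_abs_error:.2f} s")  # -> t = 28.50 ± 0.12 s
```

No single reading equals 28.50 s, yet the mean plus its uncertainty summarises all five honestly.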
Relative error
An error of 0.03 m sounds small for a cricket pitch, but it would be enormous for the thickness of a coin. The absolute error alone does not tell you whether a measurement is good — you need to compare the error to the quantity itself:

\delta a = \frac{\Delta a}{a}

where \Delta a is the absolute error and a is the measured (or mean) value.
Why divide by the measured value: this converts the error into a fraction of the measurement. A relative error of 0.001 means your measurement could be off by one part in a thousand — regardless of whether the quantity is 20 metres or 0.002 metres.
For the cricket pitch:

\delta a = \frac{0.03}{20.15} \approx 0.0015
Percentage error
The percentage error is the relative error expressed as a percentage — nothing more:

\text{percentage error} = \delta a \times 100\%
A 0.15% error in the length of a cricket pitch is excellent. You can be confident the pitch is regulation length. Now compare: if you measured a 5 mm bolt with a ruler and got 5.5 mm, the percentage error would be (0.5/5.0) \times 100\% = 10\%. Same ruler, very different level of reliability.
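The three quantities for the cricket-pitch example can be verified in a couple of lines (a Python sketch):

```python
# Absolute, relative, and percentage error for the cricket-pitch example.
measured = 20.15     # metres, your reading
true_value = 20.12   # metres, the regulation length

absolute_error = abs(measured - true_value)   # size of the discrepancy
relative_error = absolute_error / measured    # fraction of the measurement
percentage_error = relative_error * 100       # same thing, as a percentage

print(f"absolute = {absolute_error:.2f} m, "
      f"relative = {relative_error:.4f}, "
      f"percentage = {percentage_error:.2f} %")
```

Running this reproduces the 0.03 m and 0.15% figures quoted in the text.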
Propagation of errors — the heart of the matter
Here is where things get genuinely interesting. You measure a length. You measure a time. Then you calculate a speed. What is the error in the speed? You did not measure the speed directly — you computed it from two measurements, each carrying its own uncertainty. How do the individual errors combine into the error of the result?
This is error propagation, and the rules depend on the mathematical operation you perform.
Rule 1: Sums and differences — absolute errors add
Suppose you measure two lengths, A = 25.3 cm and B = 14.7 cm, each with its own uncertainty (\pm\Delta A and \pm\Delta B respectively),
and compute their sum Z = A + B.
The maximum possible value of Z occurs when both A and B are at their maximum:

Z_{max} = (A + \Delta A) + (B + \Delta B) = (A + B) + (\Delta A + \Delta B)

The minimum possible value occurs when both are at their minimum:

Z_{min} = (A - \Delta A) + (B - \Delta B) = (A + B) - (\Delta A + \Delta B)

The best estimate is Z = 25.3 + 14.7 = 40.0 cm. The maximum deviation from this is:

\Delta Z = \Delta A + \Delta B
Why absolute errors add: in the worst case, both errors push the result in the same direction. The maximum possible error in the sum is the sum of the individual errors.
The same rule applies to differences. If Z = A - B:

\Delta Z = \Delta A + \Delta B
Why errors still add for subtraction: when you subtract, the worst case is when A is too large and B is too small — the errors conspire in opposite directions on the operands, but the same direction on the result.
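Rule 1 can be sketched numerically. The uncertainties of \pm 0.1 cm below are assumed example values, not given in the text:

```python
# Worst-case propagation for Z = A + B and Z = A - B:
# absolute errors add in both cases.
A, dA = 25.3, 0.1   # cm (the 0.1 cm uncertainties are assumed examples)
B, dB = 14.7, 0.1   # cm

Z_sum = A + B       # best estimate of the sum
Z_diff = A - B      # best estimate of the difference
dZ = dA + dB        # same worst-case error for sum and difference

print(f"A + B = {Z_sum:.1f} ± {dZ:.1f} cm")
print(f"A - B = {Z_diff:.1f} ± {dZ:.1f} cm")
```

Note that the difference 10.6 cm carries the same absolute error as the sum 40.0 cm, so its relative error is much worse, which is why subtracting two nearly equal measurements is risky.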
Rule 2: Products and quotients — relative errors add
Now suppose you measure a voltage V \pm \Delta V and a current I \pm \Delta I, and compute the power P = VI.
To find how the error propagates through multiplication, start with the relative errors. Write V = V_0(1 + \epsilon_V) where \epsilon_V = \Delta V/V_0 is the fractional error, and similarly I = I_0(1 + \epsilon_I). The product is then:

P = V_0 I_0 (1 + \epsilon_V)(1 + \epsilon_I) = V_0 I_0 (1 + \epsilon_V + \epsilon_I + \epsilon_V \epsilon_I)

Why expand the product: multiplying the two bracketed terms gives four terms. The last term \epsilon_V \epsilon_I is the product of two small fractions — it is negligibly small compared to the others.
Since \epsilon_V and \epsilon_I are both small (much less than 1), the product \epsilon_V \epsilon_I is tiny and can be dropped:

\frac{\Delta P}{P} = \frac{\Delta V}{V} + \frac{\Delta I}{I}
Why relative errors add for multiplication: a product is sensitive to the fractional change in each factor. If one factor is off by 2% and another by 3%, the product is off by at most 5%.
For the power calculation, the relative errors in the measured voltage and current add to give the relative error in P; multiplying by the best estimate of P converts this back to an absolute error.
So P = 6.0 \pm 0.2 W (rounded to one significant figure in the error).
The same rule applies to division. If Z = A/B, the derivation is nearly identical — write A = A_0(1 + \epsilon_A) and B = B_0(1 + \epsilon_B), expand, and use the approximation (1 + \epsilon_B)^{-1} \approx 1 - \epsilon_B for small \epsilon_B:

\frac{\Delta Z}{Z} = \frac{\Delta A}{A} + \frac{\Delta B}{B}
Why the rule is the same for division as for multiplication: the maximum error in a quotient occurs when the numerator is too large and the denominator is too small (or vice versa), so the relative errors still add.
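Rule 2 can be sketched with illustrative numbers (the 2% and 3% errors below are my own example, not from the text):

```python
# Relative errors add for products and quotients (worst case).
# All values here are illustrative.
A, dA = 6.0, 0.12    # 2 % relative error
B, dB = 3.0, 0.09    # 3 % relative error

rel = dA / A + dB / B        # 0.02 + 0.03 = 0.05, i.e. 5 %

product = A * B              # Z = A * B
d_product = rel * product    # absolute error of the product

quotient = A / B             # Z = A / B
d_quotient = rel * quotient  # absolute error of the quotient

print(f"A*B = {product:.1f} ± {d_product:.2f}  ({rel*100:.0f} %)")
print(f"A/B = {quotient:.2f} ± {d_quotient:.3f}  ({rel*100:.0f} %)")
```

Both results carry the same 5% relative error even though their absolute errors differ by a factor of nine.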
Rule 3: Powers — multiply the relative error by the exponent
If Z = A^n, then:

\frac{\Delta Z}{Z} = n\,\frac{\Delta A}{A}

Here is why. Write Z = A^n and take the differential:

\Delta Z \approx n A^{n-1}\,\Delta A
Why: this is the standard calculus result — the derivative of A^n is n A^{n-1}, and the change in Z for a small change \Delta A is approximately \frac{dZ}{dA} \cdot \Delta A.
Divide both sides by Z = A^n:

\frac{\Delta Z}{Z} = \frac{n A^{n-1}\,\Delta A}{A^n} = n\,\frac{\Delta A}{A}
This is the most powerful rule of the three. It tells you that the exponent amplifies the relative error. If you measure a radius with 1% uncertainty and compute the volume of a sphere (V = \frac{4}{3}\pi r^3), the volume has 3 \times 1\% = 3\% uncertainty. The cube magnifies the error threefold.
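The sphere example can be checked directly (a Python sketch; the specific radius value is illustrative):

```python
# Power rule: a 1 % error in radius becomes a 3 % error in volume,
# because V = (4/3)*pi*r^3 carries the exponent 3.
import math

r, dr = 1.00, 0.01           # radius with 1 % uncertainty (illustrative)
V = (4 / 3) * math.pi * r**3
rel_V = 3 * (dr / r)         # exponent multiplies the relative error
dV = rel_V * V

print(f"V = {V:.3f} ± {dV:.3f}  (relative error {rel_V*100:.0f} %)")
```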
Combining the rules
Real calculations involve multiple operations. The strategy is always the same: break the formula into steps, apply the appropriate rule at each step, and build up the total error.
For a formula like Z = \frac{A^2 B}{C}, treat it as a product and quotient of powers:

\frac{\Delta Z}{Z} = 2\,\frac{\Delta A}{A} + \frac{\Delta B}{B} + \frac{\Delta C}{C}
Why: A appears with exponent 2 (so its relative error is multiplied by 2), B appears with exponent 1, and C appears in the denominator with exponent 1. Since it is a product of powers, the relative errors add.
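A numerical sketch of the combined rule for Z = A^2 B / C, with illustrative values and uncertainties of my own choosing:

```python
# Combined rule for Z = A^2 * B / C: relative errors add, each weighted
# by its exponent. Values are illustrative.
A, dA = 2.0, 0.02   # 1 % relative error
B, dB = 5.0, 0.10   # 2 % relative error
C, dC = 4.0, 0.12   # 3 % relative error

Z = A**2 * B / C
rel_Z = 2 * dA / A + dB / B + dC / C   # 2(1 %) + 2 % + 3 % = 7 %
dZ = rel_Z * Z

print(f"Z = {Z:.2f} ± {dZ:.2f}  ({rel_Z*100:.0f} %)")
```

The squared factor A contributes twice its own relative error, exactly as the rule prescribes.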
Worked examples
Example 1: Computing g from a pendulum experiment
In a lab practical, you measure the time period of a simple pendulum to compute the acceleration due to gravity g. The formula is T = 2\pi\sqrt{L/g}, which rearranges to:

g = \frac{4\pi^2 L}{T^2}
Your measurements:
- Length of string: L = 100.0 \pm 0.1 cm = 1.000 \pm 0.001 m
- Time for 20 oscillations: t_{20} = 40.2 \pm 0.2 s
- Time period: T = t_{20}/20 = 2.010 \pm 0.010 s
Why divide by 20: timing 20 oscillations and dividing gives you the period of one oscillation, but the absolute error also gets divided by 20. A reaction-time error of 0.2 s becomes 0.01 s per oscillation — this is the standard technique for reducing timing errors in pendulum experiments.
Step 1. Compute the best estimate of g.

g = \frac{4\pi^2 \times 1.000}{(2.010)^2} = \frac{39.48}{4.040} \approx 9.77 \text{ m/s}^2
Why: substitute the measured values directly into the rearranged formula. The result is close to the accepted value of 9.81 m/s^2, which is encouraging.
Step 2. Identify the relative errors of each measured quantity.

\frac{\Delta L}{L} = \frac{0.001}{1.000} = 0.001 \quad (0.1\%) \qquad \frac{\Delta T}{T} = \frac{0.010}{2.010} \approx 0.005 \quad (0.5\%)
Why compute relative errors: the formula for g is a product and quotient of L and T, so the propagation rules work with relative errors.
Step 3. Apply the propagation rule. In g = 4\pi^2 L T^{-2}, the constant 4\pi^2 has no error. L appears with exponent 1, and T appears with exponent -2 (equivalently, T^2 appears in the denominator with exponent 2).

\frac{\Delta g}{g} = \frac{\Delta L}{L} + 2\,\frac{\Delta T}{T} = 0.001 + 2(0.005) = 0.011 \quad (1.1\%)
Why multiply the time error by 2: T is squared in the formula. By the power rule, the exponent 2 multiplies the relative error. This is exactly why timing errors matter so much in pendulum experiments — the period is squared, so any error in T is doubled in g.
Step 4. Convert back to absolute error.

\Delta g = 0.011 \times 9.77 \approx 0.11 \text{ m/s}^2
Result: g = 9.77 \pm 0.11 m/s^2, or equivalently, g = 9.8 \pm 0.1 m/s^2 (rounded to match the precision of the error).
What this shows: The error in g is dominated by the time measurement, not the length measurement — and the factor of 2 from the squaring is the reason. If you want a more precise value of g, improving the stopwatch precision (or timing more oscillations) helps far more than using a more precise ruler.
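The whole calculation of Example 1 fits in a short script (a sketch reproducing the numbers above):

```python
# Example 1: g = 4*pi^2*L/T^2 with propagated uncertainty.
import math

L, dL = 1.000, 0.001    # metres
T, dT = 2.010, 0.010    # seconds (period of one oscillation)

g = 4 * math.pi**2 * L / T**2
rel_g = dL / L + 2 * dT / T     # power rule: T is squared, so factor 2
dg = rel_g * g

print(f"g = {g:.2f} ± {dg:.2f} m/s^2")  # -> g = 9.77 ± 0.11 m/s^2
```

Changing dT to 0.005 s halves the dominant term, while halving dL barely moves the result, confirming that the stopwatch is the bottleneck.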
Example 2: Resistance from voltage and current measurements
You connect a resistor in a circuit and measure the voltage across it and the current through it. Using Ohm's law, R = V/I, compute the resistance and its uncertainty.
Your readings:
- Voltage: V = 4.8 \pm 0.1 V
- Current: I = 0.24 \pm 0.01 A
Step 1. Compute the best estimate of R.

R = \frac{V}{I} = \frac{4.8}{0.24} = 20.0\;\Omega
Step 2. Compute the relative errors.

\frac{\Delta V}{V} = \frac{0.1}{4.8} \approx 0.021 \quad (2.1\%) \qquad \frac{\Delta I}{I} = \frac{0.01}{0.24} \approx 0.042 \quad (4.2\%)
Step 3. Apply the quotient rule — relative errors add.

\frac{\Delta R}{R} = \frac{\Delta V}{V} + \frac{\Delta I}{I} \approx 0.021 + 0.042 = 0.063 \quad (6.3\%)
Why: R = V/I is a quotient. The relative error of a quotient is the sum of the relative errors of the numerator and denominator.
Step 4. Convert to absolute error.

\Delta R = 0.063 \times 20.0 \approx 1.3\;\Omega
Result: R = 20.0 \pm 1.3\;\Omega, or about 20 \pm 1\;\Omega (a 6.3% uncertainty).
What this shows: In a quotient, both the numerator and denominator errors contribute. The measurement with the larger relative error dominates the final uncertainty. Here, the ammeter is the weaker instrument — improving its precision would reduce the error in R more effectively than improving the voltmeter.
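Example 2 can likewise be checked in a few lines (note that the unrounded propagated error is 1.25 Ω, which the text rounds up to 1.3 Ω):

```python
# Example 2: R = V/I with the quotient rule.
V, dV = 4.8, 0.1     # volts
I, dI = 0.24, 0.01   # amperes

R = V / I
rel_R = dV / V + dI / I   # relative errors add for a quotient
dR = rel_R * R            # 1.25 ohm before rounding (text reports 1.3)

print(f"R = {R:.1f} ohm, dR = {dR:.2f} ohm (relative {rel_R:.4f})")
```

The ammeter's 4.2% contribution is twice the voltmeter's 2.1%, which is the comparison the "what this shows" note relies on.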
Reporting a measurement correctly
A number without its uncertainty is not a measurement — it is a guess. The standard way to report a measurement is:

(\text{best estimate}) \pm (\text{uncertainty})\ \text{units}
For example: g = 9.77 \pm 0.11 m/s^2.
Three rules for correct reporting:
- The uncertainty should have one or two significant figures. Write \pm 0.11, not \pm 0.107438. The uncertainty is itself uncertain (it is estimated from a small number of readings), so more than two significant figures is false precision.
- The measured value should be rounded to match the uncertainty. If the uncertainty is \pm 0.1, the value should be rounded to one decimal place. Writing g = 9.7743 \pm 0.1 m/s^2 is nonsense — the digits "743" are meaningless because the uncertainty already tells you the measurement is unreliable beyond the first decimal place.
- Always include the units. Writing g = 9.8 \pm 0.1 is incomplete. Is that m/s^2? cm/s^2? Units are not optional.
Consider the difference between writing "L = 1 m" and "L = 1.000 \pm 0.001 m." The first tells the reader almost nothing about the quality of the measurement. The second tells them exactly how much to trust it — the length is known to the nearest millimetre. When ISRO computes the trajectory of Chandrayaan, every input parameter carries a carefully computed uncertainty, and the engineers propagate those uncertainties through every step to know whether the spacecraft will enter lunar orbit or miss the Moon entirely. Uncertainty is not a footnote in professional physics — it is the difference between a successful mission and a lost spacecraft.
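The rounding convention can be automated. The helper below is a minimal sketch (the function name and rounding choices are my own, and edge cases such as very large uncertainties are not handled):

```python
# Report a value with its uncertainty: the uncertainty keeps a fixed
# number of significant figures, and the value is rounded to the same
# decimal place. A sketch of the convention, not a mandated algorithm.
import math

def report(value, uncertainty, unit="", sig_figs=2):
    # Decimal place of the uncertainty's leading significant figure.
    exponent = math.floor(math.log10(abs(uncertainty)))
    decimals = max(sig_figs - 1 - exponent, 0)  # digits after the point
    u = round(uncertainty, decimals)
    v = round(value, decimals)
    return f"{v:.{decimals}f} ± {u:.{decimals}f} {unit}".strip()

print(report(9.7716, 0.107, "m/s^2"))   # -> 9.77 ± 0.11 m/s^2
```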
Common confusions
- "Error means mistake." In everyday language, an error is something you did wrong. In physics, an error is an inherent limitation of the measurement process. You can do everything perfectly and still have errors — because your ruler has finite markings, your stopwatch has a finite reaction time, and the environment is never perfectly controlled. The word "uncertainty" is often used instead of "error" to avoid this confusion, and it is the better term.
- "More decimal places means more accurate." Writing 9.8143 m/s^2 looks more precise than 9.8 m/s^2, but if your measurement uncertainty is \pm 0.1 m/s^2, those extra decimal places are fiction. They do not represent real information. More digits without a corresponding reduction in uncertainty is false precision.
- "Systematic and random errors are treated the same way." They are not. Random errors can be reduced by taking more readings and averaging. Systematic errors cannot — they persist no matter how many readings you take. A miscalibrated instrument gives you the same wrong answer every time. The fix for systematic errors is to identify and eliminate the source.
- "Percentage error and relative error are different things." They carry the same information — percentage error is simply relative error multiplied by 100. A relative error of 0.03 is the same as a percentage error of 3%. Use whichever form is clearer in context.
- "Errors always make the result worse." Not necessarily. When you average many measurements, the random errors in individual readings tend to cancel out (some too high, some too low), and the mean is more reliable than any single reading. Taking more data reduces the effect of random errors — that is the whole point of repetition.
If you came here to understand what errors are, how to compute them, and how to propagate them through formulas, you have everything you need. What follows is the statistical treatment — standard deviation, standard error of the mean, and why the 1/\sqrt{n} rule works. This is the version used in JEE Advanced problems and in real research.
Standard deviation — a sharper measure of spread
The mean absolute error \Delta \bar{a} = \frac{1}{n}\sum|a_i - \bar{a}| is simple and intuitive, but statisticians and physicists prefer the standard deviation because of its mathematical properties.
The standard deviation of n measurements is:

\sigma = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} (a_i - \bar{a})^2}
Why square the deviations instead of taking absolute values: squaring has two advantages. First, it penalises large deviations more heavily than small ones — a single wildly wrong reading gets amplified, which is appropriate because outliers are a bigger problem than small scatter. Second, the square function is differentiable, which makes the mathematics of probability and statistics much cleaner. The absolute-value function has a kink at zero that causes problems in calculus.
Why divide by n - 1 instead of n: this is called Bessel's correction. When you compute the mean \bar{a} from the same data, you have used up one piece of information — the mean is constrained to be consistent with the data. This leaves only n - 1 truly independent deviations. Dividing by n - 1 gives an unbiased estimate of the true population spread. For large n, the difference between n and n - 1 is negligible.
Standard error of the mean — why more data helps
The standard deviation \sigma tells you how spread out individual readings are. But you care about the uncertainty in the mean, not in any single reading. The standard error of the mean is:

\sigma_{\bar{a}} = \frac{\sigma}{\sqrt{n}}
Why divide by \sqrt{n}: when you average n independent measurements, the random fluctuations partly cancel. The mean is more stable than any individual reading. The cancellation improves as \sqrt{n} — so 4 readings give a mean that is twice as precise as a single reading, 100 readings give a mean 10 times as precise, and so on. This is diminishing returns: to halve the uncertainty, you need four times as many readings.
This is a profound result. It means you can always make your measurement more precise by taking more data — but the improvement is slow. Going from \pm 1\% to \pm 0.5\% requires quadrupling the number of readings. Going from \pm 0.5\% to \pm 0.25\% requires quadrupling again (now 16 times the original). At some point, the systematic errors (which do not decrease with more readings) dominate, and taking more data stops helping.
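Both quantities are available in Python's standard library; statistics.stdev applies Bessel's correction (the n - 1 divisor) automatically. A sketch using the five stopwatch readings from earlier:

```python
# Standard deviation (Bessel-corrected) and standard error of the mean
# for the five stopwatch readings.
import math
import statistics

readings = [28.3, 28.6, 28.4, 28.7, 28.5]   # seconds

sigma = statistics.stdev(readings)           # divides by n - 1
sem = sigma / math.sqrt(len(readings))       # uncertainty of the mean

print(f"sigma = {sigma:.3f} s, standard error of mean = {sem:.3f} s")
```

The mean is known to about 0.07 s even though individual readings scatter by about 0.16 s, which is the sqrt(n) improvement in action.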
Least count and its role
The least count of an instrument is the smallest division it can read. A standard metre ruler has a least count of 1 mm. A Vernier caliper with 50 divisions has a least count of 0.02 mm. A screw gauge with 100 divisions has a least count of 0.01 mm.
The least count sets a floor on the precision of a single reading — you cannot read the instrument more precisely than its smallest marking. When you report a measurement from a single reading (no repetitions), the absolute error is typically taken as half the least count, or sometimes the full least count, depending on the instrument and the convention your exam board follows.
However, the least count is not the only source of error. Parallax, environmental effects, and other systematic errors may be larger than the least count. The least count is the minimum possible uncertainty, not the actual uncertainty.
The general propagation formula
The rules for sums, products, and powers are special cases of a more general formula. If Z = f(A, B, C, \ldots) is any function of measured quantities A, B, C, \ldots, then the uncertainty in Z is:

\Delta Z = \sqrt{\left(\frac{\partial f}{\partial A}\,\Delta A\right)^2 + \left(\frac{\partial f}{\partial B}\,\Delta B\right)^2 + \left(\frac{\partial f}{\partial C}\,\Delta C\right)^2 + \cdots}
Why partial derivatives: each partial derivative tells you how sensitive the result is to changes in one particular input. Multiplying by \Delta A converts the sensitivity into an actual contribution to the error. The square root of the sum of squares (rather than a simple sum) reflects the statistical assumption that the errors in A, B, C are independent and randomly distributed — they are unlikely to all push in the same direction simultaneously.
Notice that this formula uses the square root of the sum of squares, not a straight sum. This means the combined error is usually smaller than the sum of individual errors. The simple propagation rules (Rules 1, 2, 3 above) use a straight sum because they represent the worst case — the maximum possible error. The general formula gives the probable error, which is more realistic.
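The general formula can be evaluated numerically without doing any calculus by approximating each partial derivative with a finite difference. This sketch (the function name and step size are my own choices) applies it to the R = V/I data of Example 2:

```python
# General propagation formula, evaluated numerically for R = V/I.
# Partial derivatives are approximated by central differences; the
# quadrature sum gives a smaller (probable) error than the straight sum.
import math

def propagate(f, values, errors, h=1e-6):
    """Quadrature error of f(*values) given per-argument uncertainties."""
    total = 0.0
    for i, (x, dx) in enumerate(zip(values, errors)):
        up = list(values); up[i] = x + h
        dn = list(values); dn[i] = x - h
        partial = (f(*up) - f(*dn)) / (2 * h)   # numerical df/dx_i
        total += (partial * dx) ** 2
    return math.sqrt(total)

R = lambda V, I: V / I
dR_quad = propagate(R, [4.8, 0.24], [0.1, 0.01])
print(f"quadrature error: ±{dR_quad:.2f} ohm")
```

The quadrature estimate comes out near 0.93 Ω, noticeably smaller than the 1.25 Ω worst-case straight sum, illustrating the "probable versus maximum error" distinction in the text.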
For JEE problems, the simpler rules (absolute errors add for sums, relative errors add for products) are what you need. The general formula with partial derivatives appears in advanced laboratory courses and research.
Where this leads next
- Significant Figures and Rounding — the rules for deciding how many digits to keep, and why trailing zeros matter.
- Measurement Instruments — Vernier calipers, screw gauges, and how least count determines the precision floor.
- Units and the SI System — the foundation that gives every measurement its meaning.
- Dimensional Analysis — using units to check formulas, estimate answers, and build physical intuition.
- Estimation and Order of Magnitude — when a rough answer is more useful than a precise one.