There is one habit that separates students who compute 37 \cdot 48 \bmod 7 on paper in ten seconds from students who fill half a page with long multiplication before giving up: reduce each factor mod n before you multiply, not after. It sounds obvious once stated, but the reflex has to be trained. Most school students, asked to compute 37 \cdot 48 \bmod 7, will instinctively compute 37 \cdot 48 = 1776 first and only then divide by 7. That is the slow, error-prone path. The fast path is to notice that 37 \equiv 2 and 48 \equiv 6 mod 7, so the product is 2 \cdot 6 = 12 \equiv 5. No number in the entire computation ever exceeds 48.
This article is about building that reflex so deeply that you never compute a large intermediate product again.
The rule that makes it legal
The whole habit rests on one identity:

(a \cdot b) \bmod n = \big( (a \bmod n) \cdot (b \bmod n) \big) \bmod n
In words: you can replace any factor by its remainder mod n without changing the final remainder. The rule extends to any number of factors and to any combination of additions and multiplications. So

(a \cdot b \cdot c) \bmod n = \big( (a \bmod n) \cdot (b \bmod n) \cdot (c \bmod n) \big) \bmod n
and you can reduce mod n at every step — after each multiplication, or after each factor is introduced, or both. The final answer never changes.
Why: write a = qn + r_a and b = q'n + r_b where r_a = a \bmod n and r_b = b \bmod n. Then ab = (qn + r_a)(q'n + r_b) = qq'n^2 + q n r_b + q' n r_a + r_a r_b. The first three terms are all multiples of n, so they vanish mod n. What survives is r_a r_b \bmod n — which is the reduced product.
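The identity is also easy to spot-check by brute force. Here is a quick Python sketch; the loop bounds are arbitrary, chosen only to cover many cases:

```python
# Brute-force check of (a * b) % n == ((a % n) * (b % n)) % n
# over a grid of small values; the bounds are arbitrary.
for n in range(2, 20):
    for a in range(100):
        for b in range(100):
            assert (a * b) % n == ((a % n) * (b % n)) % n
print("identity verified on all tested (a, b, n)")
```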
The two pipelines, side by side
Here is the contrast that should burn into your memory.

Multiply first: 37 \cdot 48 = 1776, then 1776 = 253 \cdot 7 + 5, so the remainder is 5.

Reduce first: 37 \equiv 2 and 48 \equiv 6 \pmod 7, so 37 \cdot 48 \equiv 2 \cdot 6 = 12 \equiv 5.

The two outputs are identical because the rule guarantees it. The work is not identical: one path needs a two-digit long multiplication and a long division and is far more prone to arithmetic slips, while the other never leaves single and double digits.
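In code, each pipeline is one line; a minimal Python sketch of the same contrast:

```python
slow = (37 * 48) % 7               # multiply first: intermediate value is 1776
fast = ((37 % 7) * (48 % 7)) % 7   # reduce first: intermediates never exceed 12
assert slow == fast == 5           # the rule guarantees the outputs agree
```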
A three-factor example: 23 \cdot 47 \cdot 89 \bmod 11
Watch what reduce-early does to a product that would otherwise balloon into six digits.
Without the habit: 23 \cdot 47 = 1081, then 1081 \cdot 89 = 96{,}209, then 96{,}209 \div 11 = 8746 remainder 3. You just did two multi-digit multiplications and a long division. One slip anywhere gives the wrong answer, and you have no way to spot-check.
With the habit:

23 \equiv 1, \qquad 47 \equiv 3, \qquad 89 \equiv 1 \pmod{11}
Why each reduction: 23 = 2 \cdot 11 + 1, so 23 \equiv 1. Similarly 47 = 4 \cdot 11 + 3, and 89 = 8 \cdot 11 + 1.
Multiply the reduced residues:

1 \cdot 3 \cdot 1 = 3, \qquad \text{so} \qquad 23 \cdot 47 \cdot 89 \equiv 3 \pmod{11}
Done. No number in the entire derivation exceeded 89. You can check it mentally in under fifteen seconds.
Reduce after every multiplication, not just at the start
When the product has many factors, it pays to reduce between multiplications too. Suppose you want 6 \cdot 8 \cdot 9 \cdot 7 \bmod 5. You could reduce each factor first — 6 \equiv 1, 8 \equiv 3, 9 \equiv 4, 7 \equiv 2 — but then 1 \cdot 3 \cdot 4 \cdot 2 = 24 is still two digits. Smarter: combine and reduce as you go:

1 \cdot 3 = 3, \qquad 3 \cdot 4 = 12 \equiv 2, \qquad 2 \cdot 2 = 4 \pmod 5
Every intermediate remainder stays in \{0, 1, 2, 3, 4\}. This is the full discipline: reduce early, reduce often. After every factor, after every multiplication, whenever a number starts growing, drop it back into the range [0, n).
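The full discipline fits in a few lines of Python. This is a sketch, and mod_product is an illustrative name, not a standard library function:

```python
def mod_product(factors, n):
    """Product of the factors mod n, reducing after every multiplication."""
    result = 1
    for f in factors:
        # Reduce the incoming factor AND the running product at each step,
        # so result always stays in the range [0, n).
        result = (result * (f % n)) % n
    return result

print(mod_product([6, 8, 9, 7], 5))   # 4  (3024 mod 5)
print(mod_product([23, 47, 89], 11))  # 3  (96209 mod 11)
```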
Why this matters for computers too
This habit is not only about pencil speed. Inside every cryptographic library on the planet, exactly this principle keeps the numbers tractable. When your browser negotiates a TLS handshake, it computes products mod a 2048-bit prime. The factors are themselves 2048-bit numbers; if you multiplied all of them first and then reduced at the end, the intermediate would grow to millions of bits and the computation would grind to a halt. Instead, every multiplication is followed immediately by a reduction mod n, keeping the working value bounded at 2048 bits. The mathematics you are learning to do by hand — reduce-then-multiply — is exactly the algorithm the hardware runs.
In every modular-exponentiation routine you will ever write or see, the inner loop looks like result = (result * base) mod n. The mod n is there every iteration, not at the end. That is the same rule as this article — applied inside a loop.
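That inner loop can be sketched in Python as follows. power_mod_naive is an illustrative name; Python's built-in pow(base, exp, n) is what you would use in practice:

```python
def power_mod_naive(base, exp, n):
    """Exponentiation by repeated multiplication, reducing every iteration."""
    result = 1
    for _ in range(exp):
        result = (result * base) % n   # the mod is here every iteration, not at the end
    return result

assert power_mod_naive(2, 100, 7) == pow(2, 100, 7) == 2
```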
How this connects to repeated squaring
Once you have internalised "reduce every factor, reduce every intermediate," the next tool — computing huge powers like 2^{100} \bmod 7 by repeated squaring — is a direct consequence. Repeated squaring computes a^2, a^4, a^8, a^{16}, \dots one step at a time, reducing mod n after every squaring. Without the reduce-early habit, each squared number doubles in size, and by a^{32} you are drowning in digits. With it, the working value never exceeds n^2 — no matter how large the exponent. The technique you learned here is the foundation for the technique in that article.
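A minimal repeated-squaring sketch in Python, mirroring the built-in pow(base, exp, n) (power_mod is an illustrative name):

```python
def power_mod(base, exp, n):
    """Square-and-multiply: about log2(exp) squarings, each reduced
    immediately, so the working value never exceeds n * n."""
    result = 1
    base %= n                       # reduce the base before anything else
    while exp > 0:
        if exp & 1:                 # low bit of the exponent is set
            result = (result * base) % n
        base = (base * base) % n    # square, then reduce right away
        exp >>= 1
    return result

print(power_mod(2, 100, 7))   # 2, agreeing with pow(2, 100, 7)
```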
One-line takeaway
In modular arithmetic, never let a number escape the range [0, n) if you can help it. Reduce each factor mod n before multiplying, and reduce every intermediate product mod n as soon as it appears. The final answer is the same, but the work stays small, your chance of an arithmetic slip plummets, and the method scales from three-digit exam problems all the way up to RSA.
Related: Modular Arithmetic · How to Compute 2^{100} \bmod 7 by Hand · Last-Digit Problems: Switch to Mod 10 · mod Operator vs Congruence Relation