The six laws of exponents are first introduced for positive integer exponents, where they are almost self-evident. a^m \cdot a^n = a^{m+n} is just "m copies of a next to n copies of a is m+n copies of a." Nothing to argue with.
But then the textbook drops a bombshell. The same laws, it says, apply for all real exponents — including negative ones, fractional ones, and irrational ones. A student quite reasonably asks: why? The original argument used physical copies of a. What does half a copy of a even mean? Where did the counting argument go?
The answer is clean, and it is the opposite of a leap. The laws were not extended by invention. They were extended because the definitions of a^{-n} and a^{1/n} were chosen to make the laws keep working. You start by demanding consistency, and the definitions are forced out of that demand. Nothing is assumed — everything is derived.
The principle: keep the laws, invent nothing new
Here is the logic that unlocks the whole extension.
- The laws were derived for positive integer exponents from counting.
- For negative or fractional exponents, we have not yet defined the symbols a^{-3} or a^{1/2}.
- We are free to choose any definition we like for them — but the most useful definition is the one that keeps the laws from breaking.
- That choice turns out to be unique. There is only one definition of a^{-3} that makes the product law still hold, and only one definition of a^{1/2} that makes the power-of-a-power law still hold.
So the negative-exponent and fractional-exponent rules are not new axioms. They are the forced consequences of the laws already in place. Once you accept the laws for integer exponents, you have no choice in how the other exponents must be defined.
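The uniqueness claim can be sketched numerically. Below, `forced_power` is a hypothetical helper (not from any library) that never *assumes* a value for a^n: it solves the product-law equation a^m \cdot x = a^{m+n} for x, using only positive-integer powers on the right-hand side. Whatever the law pins down is what a^n must be.

```python
from fractions import Fraction

def forced_power(a, n, m=10):
    """Solve a^m * x = a^(m+n) for x, using only positive-integer powers.
    The product law has exactly one solution, so a^n is pinned down.
    (Assumes m is large enough that m + n >= 0.)"""
    return Fraction(a ** (m + n), a ** m)

assert forced_power(2, 3) == 8                 # agrees with the counting definition
assert forced_power(2, 0) == 1                 # a^0 is forced to 1
assert forced_power(2, -3) == Fraction(1, 8)   # a^-3 is forced to 1/8
```

The helper never "defines" negative powers; it only enforces the law, and the familiar values fall out.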
Forced definition of a^{-n}
Start with the product law, which we know is true for positive integers:

a^{m} \cdot a^{n} = a^{m+n}.

Suppose we now want to allow n to be a negative integer, say n = -m. What must a^{-m} be for the law to keep working? Plug in:

a^{m} \cdot a^{-m} = a^{m + (-m)} = a^{0} = 1.
Why a^0 = 1: that is itself a forced consequence of the same law applied to a^m \cdot a^0 = a^{m + 0} = a^m. The only thing you can multiply a^m by to get a^m is 1. So a^0 = 1 for every a \neq 0, not by decree but by the law.
Now read the equation a^{m} \cdot a^{-m} = 1. It says a^{-m} is the multiplicative inverse of a^{m}. There is only one such number: \dfrac{1}{a^m}. So we are forced to define

a^{-m} = \dfrac{1}{a^{m}}.
If you had picked any other definition, the product law would have broken. You had no real choice.
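A minimal check of that forced definition, with a hypothetical helper `forced_negative_power` standing in for the rule a^{-m} = 1/a^m:

```python
from fractions import Fraction

def forced_negative_power(a, m):
    """The forced definition: a^(-m) is the multiplicative inverse of a^m."""
    return Fraction(1, a ** m)

# The product law a^m * a^n = a^(m+n) must survive with n = -3:
a = 2
lhs = a ** 5 * forced_negative_power(a, 3)   # 2^5 * 2^(-3)
rhs = a ** (5 - 3)                           # 2^(5 + (-3)) = 2^2
assert lhs == rhs == 4
```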
Forced definition of a^{1/n}
For fractional exponents, the same trick works with the power-of-a-power law. We know, for positive integers,

\left(a^{m}\right)^{n} = a^{mn}.

Suppose we now want to allow m = \tfrac{1}{n}. What must a^{1/n} be for the law to keep working? Plug in:

\left(a^{1/n}\right)^{n} = a^{(1/n) \cdot n} = a^{1} = a.

So a^{1/n} must be the number which, when raised to the n-th power, gives a. That is the n-th root of a. We are forced to define

a^{1/n} = \sqrt[n]{a}.
Again, no real choice. Any other definition would break the power-of-a-power law. The identity between fractional exponents and roots is not something someone decided one afternoon — it is what the consistency demand squeezed out of you.
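The same check in code, with a hypothetical helper `forced_root` standing in for the rule a^{1/n} = n-th root of a (computed here in floating point, so the check is up to rounding error):

```python
def forced_root(a, n):
    """The forced definition: a^(1/n) is the n-th root of a."""
    return a ** (1.0 / n)

# The power-of-a-power law demands (a^(1/n))^n = a:
cube_root_2 = forced_root(2, 3)
assert abs(cube_root_2 ** 3 - 2) < 1e-12
```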
From here, arbitrary rationals follow by combining: a^{p/q} = (a^{1/q})^{p}, the q-th root of a raised to the p-th power. And irrational exponents are handled by continuity: a^{x} for irrational x is defined as the limit of a^{r} along rational r approaching x. The whole exponential function is the unique continuous extension of the positive-integer definition that keeps the laws intact.
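The continuity step can be made concrete: take rational approximations r \to \sqrt{2} and watch 2^{r} settle toward 2^{\sqrt{2}}. A sketch, using the library value of 2^{\sqrt{2}} as the reference point:

```python
import math
from fractions import Fraction

# 2^x for irrational x is the limit of 2^r over rationals r -> x.
target = 2 ** math.sqrt(2)   # what the math library computes directly

for r in (Fraction(14, 10), Fraction(141, 100), Fraction(14142, 10000)):
    print(f"2^{float(r)} = {2 ** float(r):.6f}")

# The last rational approximation already agrees to about 4 decimal places:
assert abs(2 ** (14142 / 10000) - target) < 1e-3
```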
Seeing the continuity
There are no gaps or jumps. The graph of y = 2^{x} is one smooth continuous curve through the integers, through the halves and thirds, through irrationals like \sqrt{2} — and it would have been impossible to make it smooth if the laws had not survived the extension. The smoothness is the evidence that the forced extension works.
A numerical check that makes it tangible
Let's verify the product law for a pair of non-integer exponents, using the forced definitions.
Check: is 2^{1/2} \cdot 2^{1/2} = 2^{(1/2) + (1/2)} = 2^{1} = 2?
Using the forced definition, 2^{1/2} = \sqrt{2}. And \sqrt{2} \cdot \sqrt{2} = 2. Yes. The law survives.
Check: is 2^{-3} \cdot 2^{5} = 2^{(-3) + 5} = 2^{2} = 4?
Using the forced definition, 2^{-3} = \tfrac{1}{8}. And \tfrac{1}{8} \cdot 32 = \tfrac{32}{8} = 4. Yes. The law survives.
Check: is \left(2^{1/3}\right)^{6} = 2^{(1/3) \cdot 6} = 2^{2} = 4?
Using the forced definition, 2^{1/3} = \sqrt[3]{2}. Cubing gives back 2, so \left(\sqrt[3]{2}\right)^{6} = \left(\left(\sqrt[3]{2}\right)^{3}\right)^{2} = 2^{2} = 4. Yes, the law survives.
Every check works because every negative and fractional exponent was defined to make the check work.
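The three checks above, run as code (floating-point roots, so the fractional-exponent checks are up to rounding error):

```python
import math

# Check 1: 2^(1/2) * 2^(1/2) = 2^1 = 2
assert abs(math.sqrt(2) * math.sqrt(2) - 2) < 1e-12

# Check 2: 2^(-3) * 2^5 = 2^2 = 4
assert (1 / 8) * 32 == 4.0

# Check 3: (2^(1/3))^6 = 2^2 = 4
assert abs((2 ** (1 / 3)) ** 6 - 4) < 1e-9
```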
Why the demand for consistency matters
Mathematicians care about this because consistent extension is the single most powerful move in algebra. You define something on a small domain where it is easy, you state the rules that make it easy, and then you extend the definition to a wider domain by demanding the rules continue to hold. Almost every piece of "higher" mathematics you will meet — complex numbers, derivatives, matrices, infinite sums — is built this way.
Exponents are the simplest and cleanest example of the pattern. The definition "a^n is n copies of a multiplied" only works for positive integer n. The extension to all real numbers is forced by the product and power-of-a-power laws. And the payoff is that you can now write 2^{\sqrt{2}} or e^{i\pi} and the same laws still hold — because the laws were the blueprint for the extension all along.
When a future textbook says "by analogy, the law extends to...", what they really mean is "by forced consistency, the law extends to..." The laws are the architecture. Everything else is scaffolding that fits around them.
Related: Exponents and Powers · Why a⁰ = 1: The Halving Staircase That Forces the Answer · Is 2⁻³ Really Negative Eight? The Sign-on-the-Answer Trap · What Does a Fractional Exponent Like 2^(1/2) Actually Mean?