In short

Functions like |x|, \lfloor x \rfloor, and piecewise-defined functions cannot be differentiated by blindly applying the standard rules. To differentiate them, split them into smooth pieces, check differentiability at the join points using left and right derivatives, and handle each piece separately. Determinants whose entries are functions of x are differentiated one row (or column) at a time.

Take the function f(x) = |x| \cdot x. At first glance, it involves the absolute value — which has a corner at the origin and is not differentiable there. So is f also not differentiable at x = 0?

Try the limit definition directly:

f'(0) = \lim_{h \to 0} \frac{|h| \cdot h - 0}{h} = \lim_{h \to 0} |h| = 0

The derivative exists and equals 0. The multiplication by x smoothed out the corner: |x| \cdot x equals x^2 for x \ge 0 and -x^2 for x < 0, and the two parabolic arcs meet at the origin with matching slope 0. The absolute value alone has a sharp V at the origin, but |x| \cdot x passes through the origin smoothly, differentiable with slope zero.
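As a quick numerical sanity check (a sketch; the function name f is ours), the difference quotient of f(x) = |x| \cdot x at 0 is exactly |h|, which shrinks to 0 with h:

```python
# Difference quotient of f(x) = |x|*x at x = 0: it equals |h|, so it
# tends to 0 as h -> 0, matching the limit computation above.
def f(x):
    return abs(x) * x

for h in [0.1, 0.01, 0.001]:
    q = (f(h) - f(0)) / h      # equals |h|
    print(f"h = {h}: difference quotient = {q}")
```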

This is the kind of surprise that makes special functions interesting. The standard rules (power rule, product rule, chain rule) assume the function is built from smooth pieces. When your function has corners, jumps, or sudden switches in formula, you need a different approach: split, check, and differentiate each piece.

Piecewise functions: the general strategy

A piecewise function is defined by different formulas on different intervals. The derivative on each interval is straightforward — just differentiate the formula that applies there. The only question is what happens at the boundary points where the formula changes.

The strategy is always the same:

  1. Differentiate each piece separately on its open interval.
  2. At each boundary point, compute the left-hand derivative and the right-hand derivative.
  3. If they match (and the function is continuous there), the function is differentiable at that point and the derivative equals the common value.
  4. If they don't match, the function is not differentiable at that point.
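Step 2 of the strategy can be sketched numerically (the helper name one_sided_derivatives and the step h = 1e-6 are our own choices, not part of the theory):

```python
# Estimate the left- and right-hand derivatives of f at a point a using
# one-sided difference quotients. If the two estimates differ, a is a
# candidate point of non-differentiability.
def one_sided_derivatives(f, a, h=1e-6):
    left = (f(a) - f(a - h)) / h
    right = (f(a + h) - f(a)) / h
    return left, right

# For |x| at 0, the estimates are -1 and +1: the derivative does not exist.
left, right = one_sided_derivatives(abs, 0.0)
print(left, right)
```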

Derivative of a piecewise function

Suppose f is defined by

f(x) = \begin{cases} g(x) & \text{if } x < a \\ h(x) & \text{if } x \ge a \end{cases}

Then f'(x) = g'(x) for x < a and f'(x) = h'(x) for x > a. At x = a, the derivative exists if and only if:

  1. f is continuous at a: \lim_{x \to a^-} g(x) = h(a).
  2. The one-sided derivatives agree: g'(a^-) = h'(a^+).

When both conditions hold, f'(a) = g'(a^-) = h'(a^+).

Continuity is a prerequisite — if the function has a jump at the boundary, it cannot be differentiable there. But continuity alone is not enough. The function |x| is continuous at x = 0, but its left derivative (-1) and right derivative (+1) disagree, so the derivative does not exist.

The absolute value function

The absolute value function is the most important piecewise function in calculus. Write it out:

|x| = \begin{cases} x & \text{if } x \ge 0 \\ -x & \text{if } x < 0 \end{cases}

Away from zero: for x > 0, \frac{d}{dx}|x| = 1. For x < 0, \frac{d}{dx}|x| = -1.

At zero: the left derivative is -1 and the right derivative is +1. They disagree. So |x| is not differentiable at x = 0.

You can write the derivative compactly using the signum function: for x \neq 0,

\frac{d}{dx}|x| = \text{sgn}(x) = \begin{cases} 1 & \text{if } x > 0 \\ -1 & \text{if } x < 0 \end{cases}

and the derivative is undefined at x = 0.

For a composite |u(x)| where u is differentiable:

\frac{d}{dx}|u(x)| = \text{sgn}(u(x)) \cdot u'(x) \quad \text{wherever } u(x) \neq 0

At points where u(x) = 0, you need to check the left and right derivatives directly.
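A numeric sketch of the chain-rule formula, with a made-up inner function u(x) = x^2 - 1 (so u'(x) = 2x and the zeros of u are x = \pm 1):

```python
# d/dx |u(x)| = sgn(u(x)) * u'(x), valid wherever u(x) != 0.
# Here u(x) = x^2 - 1 is an arbitrary example.
def sgn(t):
    return (t > 0) - (t < 0)

def d_abs_u(x):
    u, du = x * x - 1, 2 * x       # u and u' for this example
    return sgn(u) * du

# Compare with a central difference quotient at x = 2, where u(2) = 3 != 0.
h = 1e-6
numeric = (abs((2 + h) ** 2 - 1) - abs((2 - h) ** 2 - 1)) / (2 * h)
print(numeric, d_abs_u(2))   # both ≈ 4
```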

A general absolute-value derivative

Here is a pattern that covers many textbook problems. For f(x) = |x - a|:

f'(x) = \begin{cases} 1 & \text{if } x > a \\ -1 & \text{if } x < a \\ \text{DNE} & \text{if } x = a \end{cases}

The corner shifts to x = a, but the story is the same. The slope is +1 on one side, -1 on the other, and undefined at the corner itself.

The graph of $f(x) = |x - 1|$. The corner sits at $x = 1$. To the left, the slope is $-1$; to the right, the slope is $+1$. At the corner itself, no single tangent line exists.

The floor and ceiling functions

The greatest integer function (floor function) \lfloor x \rfloor is the largest integer \le x. Its graph is a staircase: flat segments at integer heights, with jumps at every integer.

Between integers, the function is constant: \lfloor x \rfloor = n when n \le x < n + 1. The derivative of a constant is 0. So for non-integer x:

\frac{d}{dx}\lfloor x \rfloor = 0

At integers, the function has a jump discontinuity — it leaps from one step to the next. A function with a jump discontinuity is not continuous, and a function that is not continuous is certainly not differentiable. So \lfloor x \rfloor is not differentiable at any integer.

The ceiling function \lceil x \rceil (the smallest integer \ge x) has the same story with the jumps flipped. Its derivative is 0 between integers and does not exist at integers.

The fractional part function \{x\} = x - \lfloor x \rfloor has a different character. Between integers, \{x\} = x - n for the appropriate integer n, so its derivative is 1. At integers, the fractional part jumps from just below 1 back to 0, so the function is discontinuous and not differentiable there.

\frac{d}{dx}\{x\} = \begin{cases} 1 & \text{if } x \notin \mathbb{Z} \\ \text{DNE} & \text{if } x \in \mathbb{Z} \end{cases}
Left: the staircase graph of $\lfloor x \rfloor$. Right: the derivative is $0$ on every open interval between integers (the flat segments have zero slope) and does not exist at the integers (open circles show the jumps).
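The flat-segment claims can be sketched numerically at a sample point strictly between integers (x = 2.5 and h = 1e-6 are arbitrary choices):

```python
import math

# Strictly between integers, the floor is locally constant (slope 0)
# and the fractional part has slope 1.
h = 1e-6
x = 2.5                                         # strictly between 2 and 3
frac = lambda t: t - math.floor(t)
q_floor = (math.floor(x + h) - math.floor(x - h)) / (2 * h)
q_frac = (frac(x + h) - frac(x - h)) / (2 * h)
print(q_floor, q_frac)   # ≈ 0.0 and ≈ 1.0
```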

Composing the floor with smooth functions

A common exam pattern: differentiate f(x) = \lfloor x^2 \rfloor or f(x) = x \lfloor x \rfloor.

The rule is always the same: identify where the inner expression crosses an integer, because those are the points of potential non-differentiability. Between those points, the floor is constant, and you differentiate normally.

For f(x) = x \cdot \lfloor x \rfloor on the interval 1 \le x < 2: here \lfloor x \rfloor = 1, so f(x) = x, and f'(x) = 1. On the interval 2 \le x < 3: \lfloor x \rfloor = 2, so f(x) = 2x, and f'(x) = 2. At x = 2, continuity fails first: f(2^-) = 2 but f(2) = 2 \cdot 2 = 4, so f jumps at x = 2 and cannot be differentiable there. (Once continuity fails, comparing the slopes 1 and 2 on either side is moot.)
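A quick numeric look at the boundary x = 2 (a sketch):

```python
import math

# f(x) = x * floor(x): the value from the left approaches 2 while
# f(2) = 4, so the function jumps at x = 2 and has no derivative there.
def f(x):
    return x * math.floor(x)

h = 1e-6
print(f(2 - h), f(2))   # ≈ 2 versus 4: a jump discontinuity
```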

Max and min functions

Functions defined using \max or \min are piecewise functions in disguise. The function f(x) = \max(x, x^2) switches between x and x^2 depending on which is larger.

To differentiate such a function:

  1. Find where the two expressions are equal — that is where the switch happens.
  2. On each interval, differentiate whichever expression is the max (or min).
  3. At the switch point, check the left and right derivatives.

For f(x) = \max(x, x^2): set x = x^2, which gives x(x - 1) = 0, so the switch points are x = 0 and x = 1.

At x = 0: f'(0^-) = 2(0) = 0 and f'(0^+) = 1. They disagree — not differentiable.

At x = 1: f'(1^-) = 1 and f'(1^+) = 2(1) = 2. They disagree — not differentiable.
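Both corners can be sketched numerically (the helper one_sided and the step h = 1e-6 are our own choices):

```python
# One-sided difference quotients of f(x) = max(x, x*x) at the two
# switch points reproduce the slopes computed above.
def f(x):
    return max(x, x * x)

def one_sided(g, a, h=1e-6):
    return (g(a) - g(a - h)) / h, (g(a + h) - g(a)) / h

print(one_sided(f, 0.0))   # ≈ (0, 1): corner at x = 0
print(one_sided(f, 1.0))   # ≈ (1, 2): corner at x = 1
```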

The function $f(x) = \max(x, x^2)$. The dashed lines show $y = x$ and $y = x^2$ separately. The solid red curve is the maximum of the two — it follows $x^2$ outside $[0, 1]$ and follows $x$ inside. The two switch points at $x = 0$ and $x = 1$ are corners where the derivative does not exist.

Implicit trig substitutions and differentiability

Some functions look complicated but simplify dramatically under a trigonometric substitution. The classic example is

f(x) = \sin^{-1}\!\left(\frac{2x}{1 + x^2}\right)

Substitute x = \tan\theta with \theta \in (-\pi/2, \pi/2). Then

\frac{2x}{1 + x^2} = \frac{2\tan\theta}{1 + \tan^2\theta} = \frac{2\tan\theta}{\sec^2\theta} = 2\sin\theta\cos\theta = \sin 2\theta

So f(x) = \sin^{-1}(\sin 2\theta) = 2\theta — but only when 2\theta lies in [-\pi/2, \pi/2], i.e., when |\theta| \le \pi/4, which means |x| \le 1.

The full picture:

f(x) = \sin^{-1}\!\left(\frac{2x}{1 + x^2}\right) = \begin{cases} \pi - 2\tan^{-1}x & \text{if } x > 1 \\ 2\tan^{-1}x & \text{if } -1 \le x \le 1 \\ -\pi - 2\tan^{-1}x & \text{if } x < -1 \end{cases}

On each piece, the derivative is \frac{-2}{1 + x^2} (for x > 1 and x < -1) or \frac{2}{1 + x^2} (for |x| < 1). At x = 1, the left derivative is \frac{2}{1 + 1} = 1 and the right derivative is \frac{-2}{1 + 1} = -1. They disagree — the function is not differentiable at x = \pm 1.

The lesson: when a function is defined through an inverse trigonometric expression containing a rational function of x, always check if the substitution changes branches. The apparent formula \frac{d}{dx}\sin^{-1}(u) = \frac{1}{\sqrt{1 - u^2}} \cdot u' is valid only when u stays strictly between -1 and 1. At the boundary, the function often has a corner.
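The branch formulas and the corner at x = 1 can be sketched numerically (the sample points 0.5 and 3.0 and the step h = 1e-4 are arbitrary):

```python
import math

# f(x) = asin(2x/(1+x^2)) should match 2*atan(x) inside |x| <= 1 and
# pi - 2*atan(x) for x > 1; its one-sided slopes at x = 1 are +1 and -1.
def f(x):
    return math.asin(2 * x / (1 + x * x))

print(f(0.5) - 2 * math.atan(0.5))              # ≈ 0: the |x| <= 1 branch
print(f(3.0) - (math.pi - 2 * math.atan(3.0)))  # ≈ 0: the x > 1 branch

h = 1e-4
print((f(1) - f(1 - h)) / h, (f(1 + h) - f(1)) / h)   # ≈ +1 and ≈ -1
```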

Example 1: Derivative of $f(x) = |x^2 - 4|$

Step 1. Identify where the expression inside the absolute value is zero.

x^2 - 4 = 0 \implies x = \pm 2

Why: the absolute value creates a corner at exactly the points where the inside is zero. These are the potential trouble spots.

Step 2. Write the piecewise form.

f(x) = \begin{cases} x^2 - 4 & \text{if } x \le -2 \text{ or } x \ge 2 \\ -(x^2 - 4) = 4 - x^2 & \text{if } -2 < x < 2 \end{cases}

Why: when x^2 - 4 \ge 0 (outside [-2, 2]), the absolute value does nothing. When x^2 - 4 < 0 (between -2 and 2), the absolute value flips the sign.

Step 3. Differentiate each piece.

For x > 2 or x < -2: f'(x) = 2x.

For -2 < x < 2: f'(x) = -2x.

Why: standard power rule on each piece.

Step 4. Check the boundary points x = 2 and x = -2.

At x = 2: f'(2^-) = -2(2) = -4 and f'(2^+) = 2(2) = 4. They disagree — not differentiable.

At x = -2: f'(-2^-) = 2(-2) = -4 and f'(-2^+) = -2(-2) = 4. They disagree — not differentiable.

Why: at each corner the one-sided derivatives jump from -4 to +4. The parabola x^2 - 4 passes through zero with nonzero slope, so the absolute value creates a genuine corner.

Result: f'(x) = 2x for |x| > 2, f'(x) = -2x for |x| < 2, and f'(\pm 2) does not exist. More compactly: f'(x) = 2x \cdot \text{sgn}(x^2 - 4) for x \neq \pm 2.

The solid curve is $y = |x^2 - 4|$: the dashed parabola $y = x^2 - 4$ with the part below the axis reflected upward. The corners at $x = \pm 2$ are clearly visible, and at each one the slope jumps from $-4$ to $+4$.

The graph confirms what the algebra says. Between -2 and 2, the reflected parabola has a negative-then-positive slope (an inverted arch), while outside that interval, the original parabola's slope continues as usual. The corners at x = \pm 2 are the points where the reflected part meets the original part — and the slopes on either side point in opposite directions.
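The compact sgn-form of the result can be sketched numerically at a few sample points away from \pm 2 (the points 1, 3, -1.5 are arbitrary):

```python
# Check f'(x) = 2x * sgn(x^2 - 4) for x != ±2 against central
# difference quotients of f(x) = |x^2 - 4|.
def f(x):
    return abs(x * x - 4)

def sgn(t):
    return (t > 0) - (t < 0)

h = 1e-6
for x in (1.0, 3.0, -1.5):
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    print(x, numeric, 2 * x * sgn(x * x - 4))   # the two values agree
```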

Example 2: Derivative of a piecewise-defined function

Define f(x) = \begin{cases} x^2 \sin(1/x) & \text{if } x \neq 0 \\ 0 & \text{if } x = 0 \end{cases}

This function is famous. It is continuous everywhere, differentiable everywhere (including at x = 0), but its derivative is not continuous at x = 0. Find f'(x).

Step 1. For x \neq 0, use the product rule and chain rule.

f'(x) = 2x \sin\!\left(\frac{1}{x}\right) + x^2 \cdot \cos\!\left(\frac{1}{x}\right) \cdot \left(-\frac{1}{x^2}\right) = 2x \sin\!\left(\frac{1}{x}\right) - \cos\!\left(\frac{1}{x}\right)

Why: f(x) = x^2 \cdot \sin(1/x) is a product of two differentiable functions (for x \neq 0). The derivative of \sin(1/x) uses the chain rule: \cos(1/x) \cdot (-1/x^2).

Step 2. At x = 0, use the limit definition directly.

f'(0) = \lim_{h \to 0} \frac{f(h) - f(0)}{h} = \lim_{h \to 0} \frac{h^2 \sin(1/h)}{h} = \lim_{h \to 0} h \sin\!\left(\frac{1}{h}\right)

Why: the product rule does not apply at x = 0 because \sin(1/x) is not even defined at x = 0. The only option is the definition.

Step 3. Evaluate the limit.

Since |\sin(1/h)| \le 1 for all h \neq 0, we have |h \sin(1/h)| \le |h|. As h \to 0, |h| \to 0, so by the squeeze theorem:

f'(0) = 0

Why: the factor h crushes the oscillation of \sin(1/h) to zero. The squeeze theorem makes this rigorous.

Step 4. Check whether f' is continuous at x = 0.

As x \to 0, the term 2x \sin(1/x) \to 0 (same squeeze argument). But the term -\cos(1/x) oscillates between -1 and +1 without settling. So \lim_{x \to 0} f'(x) does not exist, even though f'(0) = 0.

Why: the derivative exists at every point (including 0), but it oscillates so wildly near 0 that it is not continuous there. This is the standard example of a function that is differentiable everywhere but whose derivative is discontinuous.

Result: f'(x) = 2x \sin(1/x) - \cos(1/x) for x \neq 0, and f'(0) = 0.

The graph of $f(x) = x^2 \sin(1/x)$ near the origin. The oscillations of $\sin(1/x)$ are squeezed by the factor $x^2$, producing a curve that wiggles faster and faster as it approaches $x = 0$ but is pinched to zero amplitude. The tangent at the origin is the horizontal line $y = 0$ — the derivative exists and equals $0$.

The graph shows the squeeze in action. The curve oscillates infinitely many times near the origin, but the oscillations are trapped inside the envelope y = \pm x^2. The function passes smoothly through the origin with slope 0, even though the derivative's formula involves \cos(1/x), which oscillates without limit.
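Both halves of the story can be sketched numerically (the sequences 1/(1000\pi) and 1/(1001\pi) are our own choice of sample points where \cos(1/x) is +1 and -1):

```python
import math

# The difference quotient of f at 0 is h*sin(1/h), trapped inside ±|h|
# (the squeeze), while f'(x) = 2x*sin(1/x) - cos(1/x) keeps oscillating
# between about -1 and +1 as x -> 0.
def fprime(x):
    return 2 * x * math.sin(1 / x) - math.cos(1 / x)

for h in (1e-2, 1e-4, 1e-6):
    print(abs(h * math.sin(1 / h)) <= abs(h))   # True: the squeeze bound

# f' near 0 along points where cos(1/x) is +1 and -1:
print(fprime(1 / (1000 * math.pi)), fprime(1 / (1001 * math.pi)))  # ≈ -1, ≈ +1
```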

Differentiation of determinants

When the entries of a determinant are functions of x, you can differentiate the determinant with respect to x. The rule is elegant: differentiate one row at a time, keeping the other rows unchanged, and add the results.

Derivative of a determinant

If \Delta(x) = \begin{vmatrix} f_1(x) & f_2(x) \\ g_1(x) & g_2(x) \end{vmatrix}, then

\Delta'(x) = \begin{vmatrix} f_1'(x) & f_2'(x) \\ g_1(x) & g_2(x) \end{vmatrix} + \begin{vmatrix} f_1(x) & f_2(x) \\ g_1'(x) & g_2'(x) \end{vmatrix}

For a 3 \times 3 determinant, the same principle gives three terms — one for each row differentiated.

Why does this work? Expand the 2 \times 2 determinant: \Delta(x) = f_1(x) g_2(x) - f_2(x) g_1(x). Differentiate using the product rule:

\Delta'(x) = f_1' g_2 + f_1 g_2' - f_2' g_1 - f_2 g_1'

Regroup: (f_1' g_2 - f_2' g_1) + (f_1 g_2' - f_2 g_1'). The first group is the determinant with row 1 differentiated. The second is the determinant with row 2 differentiated. This is exactly the formula.
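The 2 \times 2 rule can be sketched numerically with made-up entries f_1 = \sin x, f_2 = \cos x, g_1 = x^2, g_2 = e^x:

```python
import math

# Row-at-a-time rule for a 2x2 determinant: Delta' is the sum of the
# determinant with row 1 differentiated and the one with row 2 differentiated.
def det2(a, b, c, d):
    return a * d - b * c

def delta(x):
    return det2(math.sin(x), math.cos(x), x * x, math.exp(x))

def delta_prime(x):
    return (det2(math.cos(x), -math.sin(x), x * x, math.exp(x))
            + det2(math.sin(x), math.cos(x), 2 * x, math.exp(x)))

x, h = 0.7, 1e-6
numeric = (delta(x + h) - delta(x - h)) / (2 * h)
print(numeric, delta_prime(x))   # agree up to discretization error
```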

The 3 \times 3 case follows the same logic — expand the determinant using cofactors along the first row, apply the product rule to each term, and regroup. Each product gives a pair: one part where the row-1 entry is differentiated and the cofactor is kept, and one where the entry is kept and the cofactor is differentiated. The cofactors themselves are 2 \times 2 determinants, and differentiating them produces the terms for rows 2 and 3. The whole thing telescopes into three determinants.

A 3 \times 3 determinant example

Take \Delta(x) = \begin{vmatrix} x & x^2 & x^3 \\ 1 & 2x & 3x^2 \\ 0 & 2 & 6x \end{vmatrix}.

\Delta'(x) = \begin{vmatrix} 1 & 2x & 3x^2 \\ 1 & 2x & 3x^2 \\ 0 & 2 & 6x \end{vmatrix} + \begin{vmatrix} x & x^2 & x^3 \\ 0 & 2 & 6x \\ 0 & 2 & 6x \end{vmatrix} + \begin{vmatrix} x & x^2 & x^3 \\ 1 & 2x & 3x^2 \\ 0 & 0 & 6 \end{vmatrix}

The first determinant has rows 1 and 2 identical — its value is 0. The second determinant has rows 2 and 3 identical — its value is 0. Only the third survives.

Expand the third determinant along row 3. The only nonzero entry in that row is 6 in position (3, 3). Its cofactor is \begin{vmatrix} x & x^2 \\ 1 & 2x \end{vmatrix} = 2x^2 - x^2 = x^2. So \Delta'(x) = 6 \cdot x^2 = 6x^2.
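As a cross-check (a sketch), expanding \Delta(x) directly gives 2x^3, whose derivative 6x^2 matches the row-at-a-time computation:

```python
# Cofactor expansion of the 3x3 determinant from the example: it equals
# 2x^3 at every sample point, consistent with Delta'(x) = 6x^2.
def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def delta(x):
    return det3([[x, x**2, x**3], [1, 2*x, 3*x**2], [0, 2, 6*x]])

for x in (-1.0, 0.5, 2.0):
    print(delta(x), 2 * x**3)   # equal
```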

The rule for differentiating a $3 \times 3$ determinant. Differentiate one row at a time, keep the other two rows unchanged, and add the three resulting determinants. If any two rows become identical, that determinant is zero — which often simplifies the computation dramatically.


Going deeper

If you came here to learn how to handle absolute values, floor functions, and piecewise definitions in differentiation, you have the tools now — you can stop here. The rest is for readers who want to see the deeper structure behind these results.

Why piecewise differentiability is not enough for "smooth"

A function can be differentiable everywhere and still have a derivative that is discontinuous. The function f(x) = x^2 \sin(1/x) from Example 2 demonstrates this. Its derivative exists at every real number, but the \cos(1/x) term prevents f' from having a limit at 0, so f' is not continuous there.

This means that even after you have checked differentiability everywhere, you cannot assume the derivative is well-behaved. The derivative of a differentiable function is guaranteed to satisfy the intermediate value property (by Darboux's theorem — the derivative hits every value between any two of its values), but it is not guaranteed to be continuous. This is one of the subtler facts in real analysis.

The n-th derivative of a determinant

The row-at-a-time rule extends to higher derivatives. For a 2 \times 2 determinant with entries that depend on x:

\Delta''(x) = \begin{vmatrix} f_1'' & f_2'' \\ g_1 & g_2 \end{vmatrix} + 2\begin{vmatrix} f_1' & f_2' \\ g_1' & g_2' \end{vmatrix} + \begin{vmatrix} f_1 & f_2 \\ g_1'' & g_2'' \end{vmatrix}

The pattern mirrors the binomial expansion of (D_1 + D_2)^n, where D_1 differentiates row 1 and D_2 differentiates row 2. For the n-th derivative:

\Delta^{(n)} = \sum_{k=0}^{n} \binom{n}{k} \begin{vmatrix} f_1^{(k)} & f_2^{(k)} \\ g_1^{(n-k)} & g_2^{(n-k)} \end{vmatrix}

This is Leibniz's rule applied to determinants — elegantly combining two ideas from different parts of calculus.
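The n = 2 case can be sketched numerically with made-up entries f_1 = \sin x, f_2 = \cos x, g_1 = x^3, g_2 = e^x, comparing the binomial sum against a second difference quotient:

```python
import math

# Leibniz-style rule for the second derivative of a 2x2 determinant:
# rows differentiated (2,0), (1,1) with coefficient 2, and (0,2).
def det2(a, b, c, d):
    return a * d - b * c

def delta(x):
    return det2(math.sin(x), math.cos(x), x**3, math.exp(x))

def delta2(x):
    return (det2(-math.sin(x), -math.cos(x), x**3, math.exp(x))
            + 2 * det2(math.cos(x), -math.sin(x), 3 * x**2, math.exp(x))
            + det2(math.sin(x), math.cos(x), 6 * x, math.exp(x)))

x, h = 0.9, 1e-4
numeric = (delta(x + h) - 2 * delta(x) + delta(x - h)) / h**2
print(numeric, delta2(x))   # agree up to discretization error
```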

Weierstrass's monster

You know that |x| is continuous but not differentiable at x = 0 — one bad point. Can a function be continuous everywhere but differentiable nowhere?

In 1872, Weierstrass constructed exactly such a function:

W(x) = \sum_{n=0}^{\infty} a^n \cos(b^n \pi x)

where 0 < a < 1 and b is a positive odd integer with ab > 1 + 3\pi/2. This function is continuous (the series converges uniformly) but has a corner at every point — the graph is so jagged that no tangent line exists anywhere. The piecewise strategy above relies on having finitely many bad points with smooth stretches between them. Weierstrass showed that nature is not always so kind.

Where this leads next