## 10.5 Accuracy of Approximations, and the Mean Value Theorem

We now ask how accurate each of these approximations is, from the trivial constant approximation $f(x) \approx f(x_0)$, to the linear approximation, and so on.

Suppose $x > x_0$, that $m$ is the minimum value of the $k$-th derivative $f^{(k)}$ of $f$ between these two arguments, and that $M$ is the maximum value of that derivative there.

We will invoke a principle which in its simplest form is the statement: the faster you move, the further you go, other things being equal. Here we claim that if we invent a new function $f_M$ by replacing the actual value of the $k$-th derivative of $f$ throughout the interval $(x_0, x)$ by its maximum value $M$ over that interval (leaving the values of $f$ and its first $k - 1$ derivatives at $x_0$ unchanged), then $f_M$ and all its first $k - 1$ derivatives will obey $f_M^{(j)}(x') \ge f^{(j)}(x')$ for all $x'$ in that interval.

Think of it this way: if you increase your speed $f'$ to the value $M$, you increase the distance traveled. If instead you increase your acceleration $f''$ to $M$, by the same argument that will increase speed, and hence will increase distance traveled. And so on: if you increase a higher derivative, that increase will trickle down to increase all the lower derivatives, and ultimately $f$ itself.
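The trickle-down argument can be made precise by integrating once per derivative; a sketch, assuming $f_M$ is constructed so that $f_M^{(j)}(x_0) = f^{(j)}(x_0)$ for $j < k$:

```latex
% Since f_M^{(k)} = M \ge f^{(k)} on (x_0, x) and the derivatives of
% order k-1 agree at x_0, integrating once gives, for any x' in (x_0, x),
f_M^{(k-1)}(x') - f^{(k-1)}(x')
  \;=\; \int_{x_0}^{x'} \bigl( M - f^{(k)}(t) \bigr)\, dt \;\ge\; 0 .
% Repeating the same integration pushes the inequality down through
% f_M^{(k-2)}, \dots, f_M', and finally to f_M itself.
```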

The nice thing about doing this is that the degree $k$ approximation to $f_M$ at $x_0$ is exact at argument $x$, because $f_M$'s $k$-th derivative is constant in the interval between $x_0$ and $x$. Moreover, the degree $k$ approximation to $f_M$ is the degree $k - 1$ approximation to $f$ plus $\frac{M(x - x_0)^k}{k!}$.
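Concretely, since $f_M^{(k)}$ is the constant $M$ and the lower derivatives of $f_M$ at $x_0$ match those of $f$, the function $f_M$ is itself a polynomial of degree $k$, equal to its own degree $k$ approximation; a sketch:

```latex
f_M(x) \;=\;
\underbrace{\sum_{j=0}^{k-1} \frac{f^{(j)}(x_0)}{j!}\,(x - x_0)^j}_{\text{degree } k-1 \text{ approximation to } f}
\;+\; \frac{M}{k!}\,(x - x_0)^k .
```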

Our inequality above applied with $j = 0$ therefore tells us that the degree $k - 1$ approximation to $f$, plus $\frac{M(x - x_0)^k}{k!}$, is at least $f(x)$, while by the same argument applied in the opposite direction with $M$ replaced by $m$, we can deduce that the same approximation plus $\frac{m(x - x_0)^k}{k!}$ is at most $f(x)$.

The upshot of all this is that we have bounds on how far off the degree $k - 1$ approximation to $f$ at $x_0$ is from $f$ at argument $x$: their difference lies between $\frac{m(x - x_0)^k}{k!}$ and $\frac{M(x - x_0)^k}{k!}$.
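These bounds are easy to check numerically. A minimal sketch, using the illustrative choices $f = \exp$, $k = 2$, $x_0 = 0$, $x = 1$ (none of which come from the text):

```python
import math

f = math.exp
x0, x, k = 0.0, 1.0, 2

# Degree k-1 = 1 (linear) approximation of exp at x0 = 0, evaluated at x:
# exp(0) + exp'(0) * (x - 0) = 1 + x.
approx = f(x0) + f(x0) * (x - x0)
error = f(x) - approx                     # actual error, e - 2

# f'' = exp is increasing, so on [0, 1] its min is exp(0), max is exp(1).
m, M = f(x0), f(x)
lower = m * (x - x0) ** k / math.factorial(k)
upper = M * (x - x0) ** k / math.factorial(k)

print(lower <= error <= upper)            # the error lies between the bounds
```

Here the error is $e - 2 \approx 0.718$, squeezed between $m/2! = 0.5$ and $M/2! = e/2 \approx 1.359$.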

We can go one step further and notice that this tells us that the error in the degree $k - 1$ approximation can be written as $\frac{q(x - x_0)^k}{k!}$, where $q$ lies between $m$ and $M$.

Since $m$ and $M$ are the minimum and maximum values of $f^{(k)}$ between $x_0$ and $x$, if $f^{(k)}$ takes on all values between its minimum and maximum (which it must if it is continuous in that interval, by the intermediate value theorem), it will take on the value $q$. We can therefore write $q = f^{(k)}(x')$ for some $x'$ in that interval.
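For a concrete function we can even recover such an $x'$ explicitly. A sketch, again with the illustrative choices $f = \exp$, $k = 2$, $x_0 = 0$, $x = 1$:

```python
import math

x0, x, k = 0.0, 1.0, 2
error = math.exp(x) - (1.0 + x)                  # error of 1 + x at x

# Solve error = q * (x - x0)^k / k! for q, which must lie between m and M.
q = error * math.factorial(k) / (x - x0) ** k

# Here f'' = exp, so q = exp(x') gives x' = ln(q).
x_prime = math.log(q)
print(x0 < x_prime < x)                          # True: x' is inside (x0, x)
```

Numerically $q = 2(e - 2) \approx 1.437$ and $x' = \ln q \approx 0.362$, which indeed lies strictly between $0$ and $1$.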

This allows us to translate our conclusion here into the following statement.

Theorem:

The error in the degree $k − 1$ approximation to $f$ at $x 0$ evaluated at argument $x$ is

$\frac{f^{(k)}(x')\,(x - x_0)^k}{k!}$

for some $x'$ in the interval $(x_0, x)$, provided $f^{(k)}$ is continuous in that interval.
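As a quick sanity check of the theorem, take the illustrative choices $f(t) = t^3$, $k = 2$, $x_0 = 0$, $x = 1$, where everything can be solved exactly:

```python
def f(t):
    return t ** 3

x0, x = 0.0, 1.0

# Degree-1 approximation at 0 is the zero function, since f(0) = f'(0) = 0,
# so the error at x = 1 is f(1) = 1.
error = f(x) - (f(x0) + 0.0 * (x - x0))

# The theorem says error = f''(x') * (x - x0)^2 / 2! = 3 * x'
# (using f''(t) = 6t), so x' = error / 3.
x_prime = error / 3.0
print(x_prime)  # 0.3333333333333333, strictly inside (0, 1)
```

The theorem guarantees only that some such $x'$ exists in $(x_0, x)$; here it happens to be unique, $x' = 1/3$.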

Exercises:

10.5 State this theorem for $k = 1$ . This result is called "the mean value theorem".

10.6 Repeat the argument above for the situation that occurs when $x < x 0$ . How does the conclusion change? What is different in the argument?