18.013A  Chapter 10


We now ask how accurate each of the approximations here is, from the trivial one, the constant approximation $f(x) = f({x}_{0})$, to the linear approximation, and so on.
Suppose $x > {x}_{0}$, let $m$ be the minimum value of the $k$th derivative of $f$ between these two arguments, and let $M$ be the maximum value of that derivative there.
We will invoke a principle which in its simplest form is the statement: the faster you move, the further you go, other things being equal. Here we claim that if we invent a new function ${f}_{M}$ by replacing the actual value of the $k$th derivative of $f$ throughout the interval $({x}_{0}, x)$ by its maximum value $M$ over that interval, then ${f}_{M}$ and all of its first $k-1$ derivatives will obey ${f}_{M}^{(j)}(x') \ge {f}^{(j)}(x')$ for $j = 0, 1, \ldots, k-1$ and all $x'$ in that interval.
Think of it this way: if you increase your speed $f'$ to the value $M$, you increase the distance traveled. If instead you increase the acceleration $f''$ to $M$, by the same argument that will increase speed, and hence increase the distance traveled. And so on: if you increase a higher derivative, that increase will trickle down to increase all the lower derivatives, and ultimately $f$ itself.
The nice thing about doing this is that the degree $k$ approximation to ${f}_{M}$ at ${x}_{0}$ is exact at argument $x$, because ${f}_{M}$'s $k$th derivative is constant in the interval between ${x}_{0}$ and $x$. Now the degree $k$ approximation to ${f}_{M}$ is the degree $k-1$ approximation to $f$ plus $\frac{M(x - {x}_{0})^k}{k!}$.
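As a concrete check of this construction (our own example, not from the text), take $f = \exp$, ${x}_{0} = 0$, $k = 2$ on $[0, 1]$, so $M = e$ is the maximum of $f''$ there. A short numeric sketch confirms that ${f}_{M}$ and its first derivative dominate those of $f$ throughout the interval:

```python
import math

# Illustrative example (not from the text): f = exp, x0 = 0, k = 2, on [0, 1].
# Replace f'' by its maximum M = e over [0, 1] to build f_M; then f_M and its
# first k-1 = 1 derivatives dominate those of f throughout the interval.
x0, x_end = 0.0, 1.0
M = math.e                       # max of f'' = exp on [0, 1]

def f(t):  return math.exp(t)
def f1(t): return math.exp(t)    # f'

# f_M: same value and first derivative as f at x0, constant 2nd derivative M
def fM(t):  return f(x0) + f1(x0) * (t - x0) + M * (t - x0) ** 2 / 2
def fM1(t): return f1(x0) + M * (t - x0)     # f_M'

for i in range(11):
    t = x0 + i * (x_end - x0) / 10
    assert fM(t) >= f(t) - 1e-12     # f_M(t)  >= f(t)
    assert fM1(t) >= f1(t) - 1e-12   # f_M'(t) >= f'(t)

# The degree-2 approximation to f_M at x0 is f_M itself (its 2nd derivative
# is constant), and it equals the degree-1 approximation to f plus M(x-x0)^2/2!.
print(fM(x_end))
```

Here the degree 2 approximation to ${f}_{M}$ at $0$ reproduces ${f}_{M}$ exactly, which is the point of the construction.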
Our inequality above applied with $j = 0$ therefore tells us that the degree $k-1$ approximation to $f$, plus $\frac{M(x - {x}_{0})^k}{k!}$, is at least $f(x)$, while by the same argument applied in the opposite direction with $M$ replaced by $m$, we can deduce that the same approximation plus $\frac{m(x - {x}_{0})^k}{k!}$ is at most $f(x)$.
The upshot of all this is that we have bounds on how far off the degree $k-1$ approximation to $f$ at ${x}_{0}$ is from $f$ at argument $x$: their difference lies between $\frac{m(x - {x}_{0})^k}{k!}$ and $\frac{M(x - {x}_{0})^k}{k!}$.
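These bounds can be checked numerically. The following sketch (our own example, not from the text) takes $f = \sin$, ${x}_{0} = 0.3$, $x = 1$, estimates $m$ and $M$ by dense sampling of $f^{(k)}$, and verifies that the error of each degree $k-1$ approximation lies between $\frac{m(x - {x}_{0})^k}{k!}$ and $\frac{M(x - {x}_{0})^k}{k!}$:

```python
import math

# Numerical check of the error bounds (illustrative example, not from the
# text): f = sin, x0 = 0.3, x = 1.0.  The derivatives of sin cycle through
# sin, cos, -sin, -cos.
x0, x = 0.3, 1.0
derivs = [math.sin, math.cos,
          lambda t: -math.sin(t), lambda t: -math.cos(t)]

def taylor(k, t):
    """Degree k-1 Taylor approximation to sin at x0, evaluated at t."""
    return sum(derivs[j % 4](x0) * (t - x0) ** j / math.factorial(j)
               for j in range(k))

for k in range(1, 6):
    fk = derivs[k % 4]                                  # kth derivative of sin
    samples = [fk(x0 + i * (x - x0) / 1000) for i in range(1001)]
    m, M = min(samples), max(samples)                   # min/max on [x0, x]
    error = math.sin(x) - taylor(k, x)
    lo = m * (x - x0) ** k / math.factorial(k)
    hi = M * (x - x0) ** k / math.factorial(k)
    assert lo - 1e-9 <= error <= hi + 1e-9
    print(f"k={k}: {lo:+.6f} <= {error:+.6f} <= {hi:+.6f}")
```

Note that for even $k$ the derivative $-\sin$ is negative on this interval, so both bounds are negative, and the actual error indeed falls between them.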
We can go one step further and notice that this tells us that the error in the degree $k-1$ approximation can be written as $\frac{q(x - {x}_{0})^k}{k!}$, where $q$ lies between $m$ and $M$.
Since $m$ and $M$ are the minimum and maximum values of ${f}^{(k)}$ between ${x}_{0}$ and $x$, if ${f}^{(k)}$ takes on all values between its maximum and minimum (which it must, by the intermediate value theorem, if it is continuous in that interval), it will take on the value $q$. We can therefore write $q = {f}^{(k)}(x')$ for some $x'$ in that interval.
This allows us to translate our conclusion here into the following statement.
Theorem:
The error in the degree $k-1$ approximation to $f$ at ${x}_{0}$ evaluated at argument $x$ is

$$\frac{{f}^{(k)}(x')(x - {x}_{0})^k}{k!}$$

for some $x'$ in the interval $({x}_{0}, x)$, if ${f}^{(k)}$ is continuous in that interval.
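For $f = \exp$ the theorem's $x'$ can even be computed explicitly, since every derivative of $\exp$ is $\exp$. The sketch below (our own example, not from the text) solves $\text{error} = {f}^{(k)}(x')\,(x - {x}_{0})^k / k!$ for $x'$ and checks that it lands strictly inside $({x}_{0}, x)$:

```python
import math

# Illustrative example (not from the text): for f = exp, f^(k) = exp for
# every k, so we can solve the theorem's equation
#     error = f^(k)(x') (x - x0)^k / k!
# for x' in closed form and check that x0 < x' < x.
x0, x = 0.0, 1.0

for k in range(1, 6):
    # degree k-1 Taylor approximation to exp at x0 = 0, evaluated at x
    approx = sum((x - x0) ** j / math.factorial(j) for j in range(k))
    error = math.exp(x) - approx
    # exp(x') = error * k! / (x - x0)^k  =>  x' = log(...)
    x_prime = math.log(error * math.factorial(k) / (x - x0) ** k)
    assert x0 < x_prime < x
    print(f"k={k}: error={error:.6f}, x'={x_prime:.6f}")
```

As $k$ grows, the error shrinks by roughly a factor of $k$ at each step, as the $k!$ in the denominator predicts.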
Exercises:
10.5 State this theorem for $k=1$ . This result is called "the mean value theorem".
10.6 Repeat the argument above for the situation that occurs when $x < {x}_{0}$. How does the conclusion change? What is different in the argument?
