7.3 The Symmetric Approximation: f '(x) ~ (f(x+d) - f(x-d)) / (2d)

Using this formula for the "d-approximation" to the derivative is much more efficient than using the naive formula (f(x+d) - f(x)) / d.

Why is it better?

The answer is that the "symmetric formula" is exactly right if f is a quadratic function, which means that the error it makes is proportional to d^2 or smaller as d decreases. The naive formula is wrong even for quadratics, and makes an error that is proportional to d.
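Before seeing why, it may help to watch the difference in action. The following is a minimal sketch (not from the text); the test function sin, the point x = 1, and the step d = 0.01 are arbitrary illustrative choices:

```python
import math

def symmetric(f, x, d):
    # the "d-approximation" of this section: (f(x+d) - f(x-d)) / (2d)
    return (f(x + d) - f(x - d)) / (2 * d)

def asymmetric(f, x, d):
    # the naive formula: (f(x+d) - f(x)) / d
    return (f(x + d) - f(x)) / d

x, d = 1.0, 0.01
exact = math.cos(x)  # the true derivative of sin at x

print(abs(symmetric(math.sin, x, d) - exact))   # roughly 1e-5, of order d^2
print(abs(asymmetric(math.sin, x, d) - exact))  # roughly 4e-3, of order d
```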

How come?

Suppose f is a quadratic: f(x) = ax^2 + bx + c.

Then we get

f(x+d) = a(x+d)^2 + b(x+d) + c

and

(f(x+d) - f(x-d)) / (2d) = (4axd + 2bd) / (2d) = 2ax + b

On the other hand, we get

(f(x+d) - f(x)) / d = 2ax + b + ad

This means that, for any quadratic, the symmetric approximation is exact for every value of d; there is no need to make d small. The asymmetric formula, on the other hand, is off by the term ad.
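Here is a quick numerical check of this claim, as a sketch with an arbitrarily chosen quadratic and a deliberately large d (none of these numbers come from the text):

```python
a, b, c = 3.0, -5.0, 7.0
f = lambda x: a * x**2 + b * x + c   # the quadratic f(x) = ax^2 + bx + c
x, d = 2.0, 0.5                      # d is deliberately not small

exact = 2 * a * x + b                # f'(x) = 2ax + b = 7.0
symmetric  = (f(x + d) - f(x - d)) / (2 * d)
asymmetric = (f(x + d) - f(x)) / d

print(symmetric)    # 7.0  -- exact for any d
print(asymmetric)   # 8.5  -- off by exactly a*d = 1.5
```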

In general, if the function being differentiated, f(x+d), can be expanded in a power series in d, the first error in our symmetric formula comes from the cubic term, and is proportional to d^2.

The reason this happens is that the d^2 term in f(x+d) - f(x-d) cancels out, being the same in both terms. The same thing happens for all even-power terms, by the way; the errors in this approximation to the derivative all come from odd-power terms in the power series expansion of f about x.
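To spell this out, write the power series of f about x (assuming f is smooth enough to expand):

f(x+d) = f(x) + f'(x) d + f''(x) d^2 / 2 + f'''(x) d^3 / 6 + ...

f(x-d) = f(x) - f'(x) d + f''(x) d^2 / 2 - f'''(x) d^3 / 6 + ...

Subtracting, the even-power terms cancel, and dividing by 2d gives

(f(x+d) - f(x-d)) / (2d) = f'(x) + f'''(x) d^2 / 6 + ...

so the leading error term is indeed proportional to d^2.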

Thus, if we replace d by d/2, the error in the symmetric approximation declines by a factor of 4, while the error in the asymmetric formula declines only by a factor of 2.

And so, the symmetric formula approaches the true answer for the derivative much faster than the naive asymmetric one does, as we decrease d .
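As an illustration of these rates, here is a sketch that halves d a few times (again using sin at x = 1 as an arbitrary test case, not from the text):

```python
import math

f, x, exact = math.sin, 1.0, math.cos(1.0)

d = 0.1
for _ in range(4):
    sym_err  = abs((f(x + d) - f(x - d)) / (2 * d) - exact)
    asym_err = abs((f(x + d) - f(x)) / d - exact)
    print(f"d = {d:6.4f}   symmetric error: {sym_err:.2e}   asymmetric error: {asym_err:.2e}")
    d /= 2

# Each halving of d cuts the symmetric error by about a factor of 4
# and the asymmetric error by about a factor of 2.
```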

Now we ask: can we get even faster convergence?