Suppose we have a given function, f, and we seek its derivative at argument x0.
One way to estimate it is to evaluate f at two points, x1 and x2, and examine the slope of the line from (x1, f(x1)) to (x2, f(x2)). But what should we use for x1 and x2, and what will that slope tell us about f'(x0)?
The choice that first occurs to people is to set x1 = x0, and x2 = x0 + d for some very small d. So one can compute the difference quotient (f(x0 + d) - f(x0)) / d as an estimate of f'(x0).
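This difference quotient can be sketched in a few lines. Here is a minimal version, where `f`, `x0`, and `d` stand for the function, argument, and small increment described above (the function and values in the example are illustrative choices, not taken from the text):

```python
def forward_difference(f, x0, d):
    """Estimate f'(x0) by the slope of the line from
    (x0, f(x0)) to (x0 + d, f(x0 + d))."""
    return (f(x0 + d) - f(x0)) / d

# Example: f(x) = x**2 has derivative 2x, so f'(1) should be near 2.
estimate = forward_difference(lambda x: x * x, 1.0, 1e-6)
print(estimate)
```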
This is not a horrible thing to do, but it is not very good, as we shall see.
What's wrong with it?
Well, if d is too big, the linear approximation won't be accurate, and if d is too small, roundoff in your calculation tools may screw up your answer. And the window between too big and too small may be hard to find.
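A quick experiment makes this trade-off visible. The sketch below (using sin at x0 = 1 as an illustrative function, since its exact derivative cos(1) is known) shows the error of the difference quotient first shrinking and then growing again as d decreases:

```python
import math

def forward_difference(f, x0, d):
    # Slope from (x0, f(x0)) to (x0 + d, f(x0 + d)).
    return (f(x0 + d) - f(x0)) / d

# True derivative of sin at 1 is cos(1); watch the error as d shrinks.
# For large d the linear approximation is poor; for tiny d, roundoff
# in the subtraction f(x0 + d) - f(x0) dominates.
for d in (1e-1, 1e-4, 1e-8, 1e-12):
    err = abs(forward_difference(math.sin, 1.0, d) - math.cos(1.0))
    print(f"d = {d:.0e}   error = {err:.2e}")
```

In IEEE double precision the error typically bottoms out somewhere around d = 1e-8 and then worsens, which is exactly the hard-to-find transition the text warns about.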