
13.4 Divide and Conquer


The previous method uses a straight-line approximation to find a new argument between two old ones, once the values of f at the old arguments have opposite signs. This is usually a smart thing to do, unless the endpoints of the interval in which your solution is trapped (columns s and t above) converge very slowly.

To avoid that possibility we can, once the function f has opposite signs at two points, say a and b, evaluate it at the midpoint and replace one endpoint by that midpoint. This cuts in half the size of the interval in which the solution must lie. This is slow convergence compared to the best behavior of Newton's method or the variants discussed above, but it is steady and effective and always gives a definite improvement in accuracy in a fixed number of steps. Since 2 to the tenth power is a bit more than one thousand (it is 1024), the size of the interval goes down by a factor of at least 1000 for every ten iterations, so if it starts at something like 1, after 35 or so steps you will have the answer to ten decimal places.
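As a quick check on that arithmetic, here is a short Python sketch (not part of the original text; the target width of 10^-10 is just an example) showing how many halvings shrink an interval of length 1 to ten decimal places.

    import math

    # Each step halves the interval, so an interval of length 1 shrinks
    # below 10**-10 after ceil(log2(10**10)) halvings.
    steps = math.ceil(math.log2(1.0 / 1e-10))
    print(steps)          # 34
    print(1.0 / 2**35)    # about 2.9e-11, comfortably below 10**-10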

How does the algorithm go? We start with two arguments, a and b, and assume a < b. We evaluate f(a), f(b), and also f((a + b)/2). If the last of these and the first have the same sign, you replace a by (a + b)/2 and keep b; otherwise you keep a and replace b by (a + b)/2.
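A minimal Python sketch of this step, repeated in a loop, might look like the following. The function name bisect, the step count, and the example equation x^2 - 2 = 0 are illustrative choices, not part of the text.

    def bisect(f, a, b, steps=35):
        # Assumes f(a) and f(b) have opposite signs, so a solution is
        # trapped between a and b.  Each pass halves the interval.
        fa = f(a)
        for _ in range(steps):
            c = (a + b) / 2
            fc = f(c)
            if fa * fc > 0:      # f(c) has the same sign as f(a): replace a
                a, fa = c, fc
            else:                # otherwise keep a and replace b
                b = c
        return (a + b) / 2

    # Example: the positive solution of x**2 - 2 = 0, trapped between 1 and 2.
    print(bisect(lambda x: x * x - 2, 1.0, 2.0))   # 1.4142135623...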

In a spreadsheet you can put your initial guesses in aa2 and ab2, put "=aa2/2+ab2/2" in ac2, put "=f(aa2)" in ad2 (with f replaced by your actual function), and copy that to ae2 and af2, so that ad2, ae2, and af2 hold f at a, at b, and at the midpoint. You can then put "=if(ae2*af2>0, aa2, ac2)" in aa3 and "=if(ae2*af2>0, ac2, ab2)" in ab3, copy these, together with ac2 through af2, down as far as you like, and you are done.
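If you would rather see the same columns without a spreadsheet, the loop below prints rows corresponding to columns aa through af; the sample function x^2 - 2 and the starting guesses 1 and 2 are only illustrations.

    def bisection_table(f, a, b, rows=10):
        # Each printed row mirrors one spreadsheet row: a, b, the midpoint,
        # and f at each of the three points.
        for _ in range(rows):
            c = a / 2 + b / 2                      # column ac
            fa, fb, fc = f(a), f(b), f(c)          # columns ad, ae, af
            print(f"{a:.10f}  {b:.10f}  {c:.10f}  {fa:+.3e}  {fb:+.3e}  {fc:+.3e}")
            if fb * fc > 0:                        # the spreadsheet test ae*af > 0
                b = c                              # keep a, replace b by the midpoint
            else:
                a = c                              # replace a by the midpoint, keep b

    bisection_table(lambda x: x * x - 2, 1.0, 2.0)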

Unless there is an error lurking somewhere, or you started with ad2 and ae2 having the same sign (so that no solution is trapped between your guesses), this will, after 35 or so steps, give about ten more decimal places of accuracy than the width of the initial interval implies.

Exercise 13.9 Compare the performance of this algorithm with that of the previous ones. Any comments?