MIT OpenCourseWare
 

25.3 Extrapolations and Better Approximations


We saw when discussing numerical differentiation that, given a sequence of approximations to a number A whose errors decrease by a given factor r from term to term, we can extrapolate the sequence to get a more rapidly converging one.

The general extrapolation rule is: to get rid of errors that decrease by a factor of z, take z times the current result, subtract the previous result, and divide the difference by z - 1.
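This rule can be sketched as a one-line function (the helper name `extrapolate` is hypothetical, not from the text):

```python
# A sketch of the general extrapolation rule described above.
# If successive errors shrink by a factor of z, the extrapolated value
# (z * current - previous) / (z - 1) cancels the leading error term.

def extrapolate(current, previous, z):
    """Cancel the leading error term that decreases by a factor of z."""
    return (z * current - previous) / (z - 1)

# Toy check: two approximations to A = 1 with errors 0.5 and 0.125,
# so the error ratio is z = 4.
print(extrapolate(1.125, 1.5, 4))  # prints 1.0 -- the leading error cancels
```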

Here for example, if we look at the answers provided by the trapezoid rule for interval sizes 16d, 8d, 4d, 2d, d, say, we get a sequence of approximate answers whose errors can be expected to decrease by a factor of roughly 4 each time. If we denote these answers by A_j (for j = 1 to 5) and look at the sequence B_j = (4A_j - A_{j-1}) / 3 (for j = 2 to 5), the error terms in the A's that go down by a factor of 4 will cancel out in each B_j and we will be left with the higher order terms only.
If we apply this procedure here, the B's are the Simpson's rule approximations previously defined.

Exercise 25.3

Verify this statement.
But we know that the leading error terms in Simpson's rule decrease by a factor of 16 each time, and we can play the same game with them. We can form C_j defined by C_j = (16B_j - B_{j-1}) / 15, for j = 3 to 5, and get a "Super Simpson's rule approximation", the leading terms in which (in the power series expansion) decrease by a factor of 64 from term to term.
Well, we can keep this up twice more: we get D_j = (64C_j - C_{j-1}) / 63 for j = 4 and 5, and then E_5 = (256D_5 - D_4) / 255, and this is about the best we can do as an approximation with 16 intervals.
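The whole chain of extrapolations can be sketched as follows, starting from trapezoid-rule answers and using the integral of sin(x) from 0 to 1 as a check (the factors 4, 16, 64, 256 follow the text; the function names are hypothetical):

```python
import math

def trapezoid(f, a, b, n):
    """Trapezoid rule with n intervals of width (b - a) / n."""
    h = (b - a) / n
    total = (f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n))
    return h * total

f, a, b = math.sin, 0.0, 1.0

# Trapezoid answers for spacings 16d, 8d, 4d, 2d, d (here d = 1/16).
A = [trapezoid(f, a, b, n) for n in (1, 2, 4, 8, 16)]

# Each extrapolation cancels the current leading error term.
B = [(4 * A[j] - A[j - 1]) / 3 for j in range(1, 5)]      # Simpson's rule
C = [(16 * B[j] - B[j - 1]) / 15 for j in range(1, 4)]    # "Super Simpson"
D = [(64 * C[j] - C[j - 1]) / 63 for j in range(1, 3)]
E = (256 * D[1] - D[0]) / 255

exact = 1 - math.cos(1)
print(abs(A[-1] - exact))  # error of the plain trapezoid rule with 16 intervals
print(abs(E - exact))      # error after the full chain: many digits better
```

Comparing the two printed errors shows how dramatically each extrapolation step improves the answer for a smooth integrand like sin(x).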

Exercises:

25.4 Use the results of Exercise 2, which give the A's, to compute the B's through E's. Compare their accuracy on an integral whose value you know (for example, the integral of sin(x) from x = 0 to 1).
Be aware that these methods, while very good in general, are particularly effective on sin(x).

25.5 Can you find a function for which this approach to integrating is lousy? What might you do to improve it?