26.2 The Left Hand Rule with Extrapolation

We seek the solution of a first order differential equation of the form

dy/dx = f(x, y)

and we will assume that we know y(a) and wish to find y(x) for arguments between a and b; in particular we want to find y(b).

We consider the example

dy/dx = x + y

below, with a = 0, b = 1 or 2, and y(0) = 1, to illustrate the methods discussed.

We want to break up the interval [a, b] into small subintervals, each of length d, and for each subinterval approximate the change in y by an estimate of dy/dx in it, multiplied by d.

With ordinary integration we had many different ways of estimating f: a left rule, right rule, trapezoid rule, Simpson's rule, and others as well, based upon using the values of f at various arguments within the subinterval.

The complication here is that we do not know the value of y anywhere but at the point a , or more generally, we can only expect to have an approximate value for y at the left hand side of our subinterval based upon our computations on previous subintervals. In fact the purpose of our treatment of that subinterval is to extend our estimate of y on its left end to an estimate of y on its right end.

In consequence of this fact, we have to use some procedure for estimating y in the subinterval in order to apply any of the previous techniques other than the left hand rule.
This complication does not affect the left hand rule; so we first ask, how can we apply that rule? And then: is it possible to get an accurate numerical solution by use of the left hand rule?
The left hand rule, which goes back to Euler, consists of making the following estimate for the change in y over the interval from x to x + d:

y(x + d) − y(x) = f(x, y(x)) * d

and successively applying it to each subinterval between a and b, computing y(a + j * d) for each j in turn and ultimately y(b). The change in y over each subinterval is computed by using in f(x, y) the values of x and y obtained previously at the left end of that subinterval.

This method has the virtue that it is extremely easy to implement on a spreadsheet. It has the defect that it is not very accurate. It is asymmetric between the endpoints of each subinterval, and as a result, if you decrease d by a factor of two, the leading error term goes down by a factor of two. As a consequence, to improve accuracy by a factor of 1000 you have to reduce d by a factor of 1000, and that is not an efficient way to solve such differential equations.
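Before turning to the spreadsheet, the rule itself fits in a few lines of code. The following Python sketch (the function name left_hand_rule is our own choice) implements the procedure just described, applied to the running example:

```python
import math

def left_hand_rule(f, a, b, y_a, N):
    """Estimate y(b) for dy/dx = f(x, y), y(a) = y_a, by Euler's left
    hand rule: on each of N subintervals of length d, increase y by
    f evaluated at the subinterval's left endpoint, times d."""
    d = (b - a) / N
    x, y = a, y_a
    for _ in range(N):
        y += f(x, y) * d
        x += d
    return y

# Running example: dy/dx = x + y with y(0) = 1, estimating y(1).
approx = left_hand_rule(lambda x, y: x + y, 0.0, 1.0, 1.0, 64)
exact = 2 * math.e - 2  # the exact solution is y(x) = 2e^x - x - 1
```

Doubling N roughly halves the error, which is exactly the slow first-order convergence described above.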

The following instructions implement this rule for f(x, y) = x + y when the last row is copied or filled down N − 1 times. The instructions occupy columns A through D and rows 1 through 10 of the spreadsheet.

To switch to a different differential equation you need only change the D9 entry appropriately and copy the results down.

      A                B              C                    D
 1    Left Hand Rule                                       f(x,y) = x + y
 2
 3    enter a          0
 4    enter b          1
 5    enter N          64
 6    enter y(a)       1
 7    d                =(B4-B3)/B5
 8    interval index   x              y                    f(x,y)
 9    0                =B3            =B6                  =B9+C9
10    =A9+1            =B9+$B$7       =C9+(B10-B9)*D9      =B10+C10

(The leftmost column gives the spreadsheet row number.)

The interval index plays no role here and is included only to allow convenient checking that you have enough rows for your computation. (You could omit it.)

We can use extrapolation to improve performance here, just as we used it for numerical differentiation in Chapter 7 and for numerical integration in the previous chapter.

Suppose we characterize our subdivision of the interval [a, b] by the number of subintervals N. Let us refer to the result of applying the left hand rule as just described as L(N). To compute L(N/2) given your computation of L(N), you need only copy the whole thing over elsewhere, change the entry in B10 by multiplying its second term by 2 (making it =B9+2*$B$7), and copy that entry down. The answer will appear after half as many steps.

We can then define an extrapolated left hand rule, L2(N), whose accuracy should improve by a factor of 4 when N is replaced by 2N, by

L2(2N) = 2 L(2N) − L(N)

This rule should then have behavior not far from that of the trapezoid rule. And we can do better by extrapolating this rule, to form L3:

L3(2N) = (4 L2(2N) − L2(N)) / 3

This should cause errors to decrease by a factor of 8 on doubling the number of intervals, and we can extrapolate again, to form L4, and so on, according to the rule

L_j(2N) = (2^(j−1) L_(j−1)(2N) − L_(j−1)(N)) / (2^(j−1) − 1)

Surprisingly enough, you can achieve considerable accuracy this way. L(32) is not very accurate, and L(1), L(2), L(4), ..., L(16) are worse, but they permit computation of L6(32), which is much better.
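This repeated extrapolation is easy to automate. Here is a Python sketch (the names euler and extrapolate are our own) that builds the triangular table of L_j values from the recurrence above, shown for the running example with b = 1:

```python
import math

def euler(f, a, b, y_a, N):
    # plain left hand rule with N subintervals
    d = (b - a) / N
    x, y = a, y_a
    for _ in range(N):
        y += f(x, y) * d
        x += d
    return y

def extrapolate(f, a, b, y_a, doublings):
    """Row k of the returned table holds L(2^k), L2(2^k), ..., L_{k+1}(2^k),
    built with L_j(2N) = (2^(j-1) L_{j-1}(2N) - L_{j-1}(N)) / (2^(j-1) - 1)."""
    table = [[euler(f, a, b, y_a, 2 ** k)] for k in range(doublings + 1)]
    for k in range(1, doublings + 1):
        for c in range(1, k + 1):
            # column c holds L_{c+1}, so the factor here is 2^c
            num = 2 ** c * table[k][c - 1] - table[k - 1][c - 1]
            table[k].append(num / (2 ** c - 1))
    return table

# dy/dx = x + y, y(0) = 1; t[6][5] is L6(64), an estimate of y(1)
t = extrapolate(lambda x, y: x + y, 0.0, 1.0, 1.0, 6)
exact = 2 * math.e - 2
```

Each crude answer L(1), L(2), ... feeds one more diagonal of the table, and the last entry is dramatically more accurate than any of the raw Euler results that went into it.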

Exercise 26.1 Set up a spreadsheet to compute these quantities for f(x, y) = x + y, a = 0, b = 1 with y(a) = 1, and compute L6(64) as your estimate of the value of y at x = 1.

The results obtained for y(2) for this problem (that is, with b = 2) are as follows; the exact solution is y(x) = 2e^x − x − 1, so y(2) = 2e^2 − 3 ≈ 11.7781122. The proportional error is described in the second table.

  N      L        L2         L3         L4         L5          L6           L7           L8
  1      3
  2      5        7
  4      7.125    9.25       10
  8      8.921    10.71686   11.20581   11.37807
 16     10.17     11.41207   11.64381   11.70638   11.728268
 32     10.92     11.66817   11.75353   11.76921   11.773395   11.77485027
 64     11.33     11.74777   11.77431   11.77727   11.777811   11.77795396  11.77800322
128     11.55     11.77013   11.77758   11.77805   11.778098   11.7781071   11.77810953  11.77811037

exact answer = 11.7781122

It is quite remarkable, but even going back to the one-interval approximation L(1) improves the final answer, by allowing one more extrapolation:

N \ L index    1         2          3          4          5          6          7          8
  1         -0.7453
  2         -0.5755  -0.405677
  4         -0.3951  -0.214645  -0.15097
  8         -0.2426  -0.090104  -0.04859
                                -0.03396
 16         -0.1368  -0.031078  -0.0114    -0.00609   -0.00423
 32         -0.0731  -0.009335  -0.00209   -0.00076   -0.0004    -0.000277
 64         -0.0378  -0.002576  -0.00032   -7.1E-05   -2.6E-05   -1.34E-05  -9.25E-06
128         -0.0193  -0.000678  -4.5E-05   -5.6E-06   -1.2E-06   -4.33E-07  -2.27E-07  -1.56E-07

Proportional Error for the Left Hand Rule, y' = x + y, y(0) = 1, finding y(2)

It can be seen from these results that using the simplest possible numerical method and extrapolating the hell out of it can actually give accurate results.

Note here that all extrapolation was done with the final answers at x = 2 obtained using the left hand rule. You could obtain slightly better results by applying the extrapolations at intermediate values of x, as soon as they become available, to improve all subsequent computations.

Thus the first extrapolation could be applied to update y after every two intervals, the second after every set of four intervals, and so on. This reduces the compounding of errors in y, which is only slightly helpful here.
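As a sketch of that idea (our own illustration, not the text's spreadsheet), the following Python function applies the first extrapolation locally, after every pair of Euler steps, by combining two steps of size d with one step of size 2d:

```python
import math

def euler_pairwise_extrapolated(f, a, b, y_a, N):
    """Advance over N subintervals (N even); after every pair of Euler
    steps of size d, combine them with a single step of size 2d via
    y = 2*y_fine - y_coarse, the local analogue of the L2 rule."""
    d = (b - a) / N
    x, y = a, y_a
    for _ in range(N // 2):
        coarse = y + f(x, y) * 2 * d       # one coarse step of size 2d
        y1 = y + f(x, y) * d               # first fine step
        fine = y1 + f(x + d, y1) * d       # second fine step
        y = 2 * fine - coarse              # local extrapolation
        x += 2 * d
    return y

g = lambda x, y: x + y                     # the running example
exact = 2 * math.e - 2                     # y(1) for y(0) = 1
e64 = abs(euler_pairwise_extrapolated(g, 0.0, 1.0, 1.0, 64) - exact)
e128 = abs(euler_pairwise_extrapolated(g, 0.0, 1.0, 1.0, 128) - exact)
```

Eliminating the leading error term after each pair of steps makes the error fall by roughly a factor of 4 when N doubles, rather than the factor of 2 of the plain left hand rule.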

Obviously you do better the smaller you make d, that is, the larger you make N.
But here extrapolating using smaller values of N is much more effective than doubling N in improving the answer.