WEBVTT
00:00:00.000 --> 00:00:01.950
The following
content is provided
00:00:01.950 --> 00:00:06.060
by MIT OpenCourseWare under
a Creative Commons license.
00:00:06.060 --> 00:00:08.200
Additional information
about our license
00:00:08.200 --> 00:00:11.930
and OpenCourseWare in general
is available at ocw.mit.edu.
00:00:15.570 --> 00:00:18.510
PROFESSOR: OK.
00:00:18.510 --> 00:00:21.500
So I'll start a little formally.
00:00:21.500 --> 00:00:28.160
This is 18.086 in the
Spring semester of 2006.
00:00:28.160 --> 00:00:33.110
And we're lucky to have it
videotaped for OpenCourseWare.
00:00:33.110 --> 00:00:38.360
So, let me begin by
remembering the main topics
00:00:38.360 --> 00:00:44.680
from what you might call the
first half, 18.085, the Fall
00:00:44.680 --> 00:00:46.590
course.
00:00:46.590 --> 00:00:50.450
And then, most important
to us, so it gets a star,
00:00:50.450 --> 00:00:54.070
are the main topics
for this course.
00:00:54.070 --> 00:00:59.810
So 18.085 began with
applied linear algebra.
00:00:59.810 --> 00:01:04.400
The discrete equations
of mechanics,
00:01:04.400 --> 00:01:07.470
and physics and engineering.
00:01:07.470 --> 00:01:10.410
And the type of matrices
that it involved,
00:01:10.410 --> 00:01:14.400
so we learned what positive
definite matrices are.
00:01:14.400 --> 00:01:20.580
Then the center of the course
was differential equations.
00:01:20.580 --> 00:01:22.440
Ordinary differential
equations, so that
00:01:22.440 --> 00:01:25.620
was 1D, partial differential
equations like Laplace,
00:01:25.620 --> 00:01:28.650
that was 2D, with
boundary values.
00:01:28.650 --> 00:01:32.940
So that led to
systems of equations.
00:01:32.940 --> 00:01:36.650
And then the third big
topic was Fourier methods.
00:01:36.650 --> 00:01:41.760
All the Fourier ideas: series,
integrals, the discrete transform.
00:01:41.760 --> 00:01:47.860
So that's like the first
half of the course.
00:01:47.860 --> 00:01:49.790
Well, that's 18.085.
00:01:49.790 --> 00:01:50.290
OK.
00:01:50.290 --> 00:01:52.010
And here we are.
00:01:52.010 --> 00:01:58.150
So this course has,
also, three major topics.
00:01:58.150 --> 00:02:01.010
The first, the one
we start on today,
00:02:01.010 --> 00:02:06.580
is differential equations that
start from initial values.
00:02:06.580 --> 00:02:10.010
So I'm thinking of the wave
equation, where we're given
00:02:10.010 --> 00:02:11.650
the displacement and velocity.
00:02:11.650 --> 00:02:14.320
The heat equation.
00:02:14.320 --> 00:02:17.850
Black-Scholes equation,
which comes from finance,
00:02:17.850 --> 00:02:20.820
is a version of
the heat equation.
00:02:20.820 --> 00:02:23.620
Ordinary differential equations,
that's going to be today.
00:02:26.570 --> 00:02:30.500
And non-linear equations,
Navier-Stokes ultimately.
00:02:30.500 --> 00:02:33.390
But not heavily Navier-Stokes.
00:02:33.390 --> 00:02:38.010
So this is where we begin.
00:02:38.010 --> 00:02:44.660
And then, a second big
topic is how to solve.
00:02:44.660 --> 00:02:46.670
Because this course
is really applied
00:02:46.670 --> 00:02:48.820
math and scientific computing.
00:02:48.820 --> 00:02:54.200
The second big topic is, how do
you solve a large linear system
00:02:54.200 --> 00:02:56.210
A*x equals b?
00:02:56.210 --> 00:03:01.460
Do you do it by direct
methods, elimination?
00:03:01.460 --> 00:03:03.970
With renumbering
of nodes, and there
00:03:03.970 --> 00:03:08.060
are lots of tricks that make
elimination very successful.
00:03:08.060 --> 00:03:12.220
Or do you do it by
iterative methods?
00:03:12.220 --> 00:03:14.440
Multi-grid is one.
00:03:14.440 --> 00:03:18.670
So this is really modern
scientific computing
00:03:18.670 --> 00:03:20.420
we're talking about.
00:03:20.420 --> 00:03:25.830
We're really at the point
where people are computing.
00:03:25.830 --> 00:03:30.690
And then the third topic,
which is a little different,
00:03:30.690 --> 00:03:32.840
is optimization, minimization.
00:03:36.630 --> 00:03:41.460
Linear programming would be one
example, but among many, many.
00:03:41.460 --> 00:03:44.600
So that's a major
area in applied math.
00:03:44.600 --> 00:03:45.100
OK.
00:03:45.100 --> 00:03:48.010
So this is our topic,
differential equations.
00:03:48.010 --> 00:03:51.210
And I was asked before
the tape started
00:03:51.210 --> 00:03:52.780
about my own background.
00:03:52.780 --> 00:03:59.200
And my thesis was in this topic.
00:03:59.200 --> 00:04:06.010
Because a key question
will be stability.
00:04:06.010 --> 00:04:08.470
Is the difference method stable?
00:04:08.470 --> 00:04:14.160
You'll see that that's a
requirement to be any good.
00:04:14.160 --> 00:04:16.910
You really have
to sort out what--
00:04:16.910 --> 00:04:20.480
and that usually puts a
limitation on the time step.
00:04:20.480 --> 00:04:23.300
So we're going to
have a time step.
00:04:23.300 --> 00:04:29.220
And it may be limited by the
requirement of stability.
00:04:29.220 --> 00:04:33.990
And the fact that stability
and convergence-- success
00:04:33.990 --> 00:04:40.570
of the method-- are
so intimately linked.
00:04:40.570 --> 00:04:47.210
There was a key paper by Lax and
Richtmyer that we'll look at.
00:04:47.210 --> 00:04:51.300
It's known now as the Lax
Equivalence Theorem, stability
00:04:51.300 --> 00:04:52.740
and convergence.
00:04:52.740 --> 00:04:53.890
We'll see it in detail.
00:04:56.630 --> 00:05:01.880
Actually when I was a grad
student, by good chance,
00:05:01.880 --> 00:05:06.050
there was a seminar, and I was
asked to report on some paper
00:05:06.050 --> 00:05:08.050
and it happened to be that one.
00:05:08.050 --> 00:05:12.480
And I just, looking back,
think, gosh, I was lucky.
00:05:12.480 --> 00:05:15.430
Because that paper,
the Lax-Richtmyer paper
00:05:15.430 --> 00:05:17.760
with the Lax
Equivalence Theorem,
00:05:17.760 --> 00:05:20.710
settled the question of stability.
00:05:20.710 --> 00:05:26.720
And then years of effort
went into finding tests
00:05:26.720 --> 00:05:31.070
for stability-- how do you
decide, is the method stable?
00:05:31.070 --> 00:05:34.440
So we'll see some of that.
00:05:34.440 --> 00:05:35.630
OK.
00:05:35.630 --> 00:05:39.340
Since then I've worked
on other things,
00:05:39.340 --> 00:05:43.380
well, wavelets would be one
that relates to Fourier.
00:05:43.380 --> 00:05:47.680
And other topics too.
00:05:47.680 --> 00:05:49.710
And lots of linear algebra.
00:05:49.710 --> 00:05:55.322
But it's a pleasure to look
back to the beginning of
00:05:55.322 --> 00:05:56.030
[UNINTELLIGIBLE].
00:05:56.030 --> 00:05:58.210
OK.
00:05:58.210 --> 00:06:00.000
So that's the
OpenCourseWare site
00:06:00.000 --> 00:06:02.740
which does have
the course outline,
00:06:02.740 --> 00:06:05.740
and other information
about the course.
00:06:05.740 --> 00:06:11.920
Some of which I have mentioned
informally before the class.
00:06:11.920 --> 00:06:17.590
This is our central website,
with no decimal in 18086,
00:06:17.590 --> 00:06:25.110
which will have notes coming
up, MATLAB things, problems.
00:06:25.110 --> 00:06:27.100
That's our content.
00:06:27.100 --> 00:06:31.070
It will be the 086 page.
00:06:31.070 --> 00:06:31.570
OK.
00:06:31.570 --> 00:06:38.130
So that's the course
outline in three lines.
00:06:38.130 --> 00:06:40.510
And then the web
page and a handout
00:06:40.510 --> 00:06:48.190
will give you half a dozen
sub-headings under those.
00:06:48.190 --> 00:06:48.740
OK.
00:06:48.740 --> 00:06:50.990
So that's the course.
00:06:50.990 --> 00:06:56.320
And actually it's the center
of scientific computing.
00:06:56.320 --> 00:07:00.410
There would be other courses
at MIT covering material
00:07:00.410 --> 00:07:06.150
like that, because it's just so
basic you can't go without it.
00:07:06.150 --> 00:07:09.280
All right, so now
I'm ready to start
00:07:09.280 --> 00:07:14.860
on the first lecture, which
will be ordinary differential
00:07:14.860 --> 00:07:16.230
equations.
00:07:16.230 --> 00:07:21.150
And I won't always have things
so clearly and beautifully
00:07:21.150 --> 00:07:25.980
written on the board, but
today I'm better organized.
00:07:25.980 --> 00:07:27.640
So here's our problem.
00:07:27.640 --> 00:07:30.160
Ordinary differential equations.
00:07:30.160 --> 00:07:41.870
So we're given initial
values u at t equals 0.
00:07:41.870 --> 00:07:42.370
OK.
00:07:42.370 --> 00:07:44.630
So there's always an
initial value here.
00:07:44.630 --> 00:07:50.290
The question is, what's the
equation that makes it evolve?
00:07:50.290 --> 00:07:51.020
OK.
00:07:51.020 --> 00:07:56.900
So, often we'll think in
terms of one equation.
00:07:56.900 --> 00:07:59.860
The unknown will be u, u of t.
00:07:59.860 --> 00:08:06.910
Or, in reality there are
typically N equations.
00:08:06.910 --> 00:08:12.850
N might be 6 or 12 or
something in a small problem.
00:08:12.850 --> 00:08:17.250
But also, N could be very large.
00:08:17.250 --> 00:08:20.610
We'll see how
easily that happens.
00:08:20.610 --> 00:08:21.320
OK.
00:08:21.320 --> 00:08:30.690
And often, to discuss stability
or understand a method
00:08:30.690 --> 00:08:35.370
and try it, the linear
case is the natural one.
00:08:35.370 --> 00:08:43.600
So I'll use a*u with a small a
to make us think that we've got
00:08:43.600 --> 00:08:49.710
a scalar, or a capital A to
indicate we're talking about
00:08:49.710 --> 00:08:51.120
a matrix.
00:08:51.120 --> 00:08:56.500
So, the correspondence between
these and the more general--
00:08:56.500 --> 00:08:58.220
so these are linear.
00:08:58.220 --> 00:09:01.810
These are likely
to be not linear.
00:09:01.810 --> 00:09:05.390
But there's a natural
match between-- the number
00:09:05.390 --> 00:09:10.150
a there corresponds to the
partial of f with respect to u.
00:09:10.150 --> 00:09:11.710
And of course we see it.
00:09:11.710 --> 00:09:18.230
If that was f, then its
derivative is indeed a.
00:09:18.230 --> 00:09:23.380
And in this matrix case, there
would be a Jacobian matrix.
00:09:23.380 --> 00:09:26.970
If we have N right-hand
sides, because I
00:09:26.970 --> 00:09:31.410
have N u's, and the
matrix that comes
00:09:31.410 --> 00:09:34.780
in is the derivative of
right-hand side in equation
00:09:34.780 --> 00:09:38.590
i with respect to unknown j.
00:09:38.590 --> 00:09:43.630
So it's an N by N matrix,
and of course, this
00:09:43.630 --> 00:09:45.880
is the case where
it's a constant.
00:09:45.880 --> 00:09:50.020
Yeah, I should have said,
these are not only linear,
00:09:50.020 --> 00:09:52.450
but constant coefficients.
00:09:52.450 --> 00:09:56.170
So we know of course the
solution, it's an exponential,
00:09:56.170 --> 00:10:01.500
e to the a*t times
u_0, the initial value.
00:10:01.500 --> 00:10:04.350
So we know everything about it.
00:10:04.350 --> 00:10:10.840
Nevertheless, it's the best test
case for difference methods.
00:10:10.840 --> 00:10:14.460
And let me just
comment that we do
00:10:14.460 --> 00:10:16.780
have to pay attention
to the possibility
00:10:16.780 --> 00:10:19.770
that a could be complex.
00:10:19.770 --> 00:10:25.440
Mostly I'll think of a as
real, and often negative.
00:10:25.440 --> 00:10:30.040
Often the equation
will be stable.
00:10:30.040 --> 00:10:35.990
A negative a means that
e to the a*t decreases.
00:10:35.990 --> 00:10:39.730
A negative definite matrix,
negative eigenvalues,
00:10:39.730 --> 00:10:44.080
mean that the matrix
exponential decays.
00:10:46.670 --> 00:10:49.560
And if that matrix is
symmetric, those eigenvalues
00:10:49.560 --> 00:10:54.050
are all real, that's a key case.
00:10:54.050 --> 00:10:56.700
Symmetric matrices
have real eigenvalues,
00:10:56.700 --> 00:11:07.840
and they're the most important,
most attractive class.
00:11:07.840 --> 00:11:11.920
And a negative
definite matrix might
00:11:11.920 --> 00:11:18.480
come from a diffusion term, like
second derivative, d second u,
00:11:18.480 --> 00:11:21.220
dx squared.
00:11:21.220 --> 00:11:26.930
But a convection term, du/dx,
from a first derivative,
00:11:26.930 --> 00:11:29.250
will not be symmetric.
00:11:31.800 --> 00:11:34.620
In fact, maybe it'll
be anti-symmetric.
00:11:34.620 --> 00:11:39.400
It will push the eigenvalues
into the complex plane.
00:11:39.400 --> 00:11:41.730
And therefore, we
have to pay attention
00:11:41.730 --> 00:11:45.980
to the possibility of complex a.
00:11:48.520 --> 00:11:51.100
As you know, the whole
point of eigenvalues
00:11:51.100 --> 00:11:56.490
is that we can understand
matrices to a very large extent
00:11:56.490 --> 00:12:02.660
by diagonalizing, by reducing
them to N scalar equations
00:12:02.660 --> 00:12:07.040
with the eigenvalues
as the little a's
00:12:07.040 --> 00:12:08.310
in those scalar equations.
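That diagonalization can be sketched for a 2 by 2 symmetric example (my own choice of matrix, not one from the lecture): expand u_0 in the eigenvectors, let each coefficient decay at its own eigenvalue rate, and recombine to get e^{At} u_0.

```python
import math

# A symmetric 2x2 matrix: eigenvalues -1 and -3, eigenvectors (1,1) and (1,-1).
A = [[-2.0, 1.0], [1.0, -2.0]]
u0 = [1.0, 0.0]
t = 0.5

# Expand u0 in the orthonormal eigenvectors, decay each coefficient
# by exp(lambda*t), and recombine: that is u(t) = e^{At} u0.
s = math.sqrt(2) / 2
v1, v2 = [s, s], [s, -s]          # eigenvectors for lambda1 = -1, lambda2 = -3
c1 = v1[0] * u0[0] + v1[1] * u0[1]
c2 = v2[0] * u0[0] + v2[1] * u0[1]
u_t = [c1 * math.exp(-1 * t) * v1[k] + c2 * math.exp(-3 * t) * v2[k]
       for k in range(2)]

# Cross-check with a very fine forward Euler time-stepping of u' = A u.
n = 100000
dt = t / n
u = list(u0)
for _ in range(n):
    u = [u[0] + dt * (A[0][0] * u[0] + A[0][1] * u[1]),
         u[1] + dt * (A[1][0] * u[0] + A[1][1] * u[1])]
```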
00:12:08.310 --> 00:12:12.110
So anyway, that's the set up.
00:12:12.110 --> 00:12:15.780
Now I want to say
something about methods.
00:12:15.780 --> 00:12:19.110
I want to introduce
some of the names.
00:12:19.110 --> 00:12:22.970
What am I thinking here?
00:12:22.970 --> 00:12:27.050
This topic, today's topic
of ordinary differential
00:12:27.050 --> 00:12:34.620
equations, is not going
to last long in 18.086.
00:12:34.620 --> 00:12:40.850
I might still be
discussing it tomorrow.
00:12:40.850 --> 00:12:42.880
Friday, I mean.
00:12:42.880 --> 00:12:45.350
But not much after that.
00:12:45.350 --> 00:12:49.650
We'll soon be on to partial
differential equations.
00:12:49.650 --> 00:12:55.250
So I want to sort of tell you
about ordinary differential
00:12:55.250 --> 00:13:00.770
equations, well, in kind
of an organized way,
00:13:00.770 --> 00:13:06.850
so that you know the
competing methods.
00:13:06.850 --> 00:13:13.170
And also so that you
know this key distinction
00:13:13.170 --> 00:13:21.660
between-- non-stiff
is a typical equation,
00:13:21.660 --> 00:13:25.920
u prime equal minus 4u
would be a non-stiff,
00:13:25.920 --> 00:13:29.740
completely normal equation.
00:13:32.320 --> 00:13:38.630
And for that, those are the
relatively easy ones to solve.
00:13:38.630 --> 00:13:44.950
We can use explicit differences.
00:13:44.950 --> 00:13:48.270
And I'll say what that
word explicit means.
00:13:48.270 --> 00:13:54.300
What I want to do is just, like,
help you organize in your mind.
00:13:54.300 --> 00:13:59.970
So non-stiff are the average,
everyday differential
00:13:59.970 --> 00:14:01.730
equations.
00:14:01.730 --> 00:14:06.960
For those we can use explicit
methods that are fast.
00:14:06.960 --> 00:14:10.460
And we can make them accurate.
00:14:10.460 --> 00:14:14.300
And explicit means-- so
this is the first meaning--
00:14:14.300 --> 00:14:19.090
that I compute the new U--
and I'll use capital U--
00:14:19.090 --> 00:14:22.230
at time step n plus 1.
00:14:22.230 --> 00:14:26.240
So that's capital
U at the new time.
00:14:26.240 --> 00:14:30.110
Natural to call
that the new time.
00:14:30.110 --> 00:14:34.480
Can be computed directly from
U at the old time, maybe U
00:14:34.480 --> 00:14:38.450
at the time before that, maybe
U at the times before that.
00:14:41.260 --> 00:14:44.060
In EE terms it's causal.
00:14:44.060 --> 00:14:48.730
The new U comes
explicitly by some formula
00:14:48.730 --> 00:14:55.150
that we'll construct from
the previous computations.
00:14:55.150 --> 00:14:58.380
It's fast.
00:14:58.380 --> 00:15:04.060
Of course, I not only use
U_n but I use f of U_n.
00:15:04.060 --> 00:15:11.190
So I should say from U_n itself
and from f of U_n at time t_n,
00:15:11.190 --> 00:15:15.870
and similarly for earlier times.
00:15:15.870 --> 00:15:17.120
You see how fast that is?
00:15:17.120 --> 00:15:23.120
Because the only new
calculation of f is this one.
00:15:23.120 --> 00:15:29.730
The previous step produced U_n,
and at the new step, this step,
00:15:29.730 --> 00:15:34.670
we plug it in to find
out what the slope is.
00:15:34.670 --> 00:15:38.660
And that may be the
expensive calculation,
00:15:38.660 --> 00:15:44.050
because f could involve all
sorts of functions, all sorts
00:15:44.050 --> 00:15:46.080
of physical constants.
00:15:46.080 --> 00:15:49.170
It can be complicated or not.
00:15:49.170 --> 00:15:54.300
but if it is, we only
have to do it once here.
00:15:54.300 --> 00:15:55.800
That's explicit.
00:15:55.800 --> 00:15:57.700
Now, what's the opposite?
00:15:57.700 --> 00:16:00.590
Implicit.
00:16:00.590 --> 00:16:07.770
So the point about implicit
methods is the difference
00:16:07.770 --> 00:16:12.870
equation, the formula,
involves not just U_(n+1),
00:16:12.870 --> 00:16:17.520
the new value, but also the
slope at that new value,
00:16:17.520 --> 00:16:20.560
at that new unknown value.
00:16:20.560 --> 00:16:25.080
So it means that an implicit
equation could be non-linear.
00:16:25.080 --> 00:16:31.270
Because if f is not linear,
then this term could involve
00:16:31.270 --> 00:16:34.010
cosines, or exponentials
or whatever,
00:16:34.010 --> 00:16:36.010
of the unknown U_(n+1).
00:16:36.010 --> 00:16:40.310
So that, you can't
do it as fast.
00:16:40.310 --> 00:16:44.960
So this is definitely
slower per step.
00:16:44.960 --> 00:16:48.340
But, the advantage,
and the reason
00:16:48.340 --> 00:16:54.710
it gets used for stiff problems,
is that on stiff problems
00:16:54.710 --> 00:16:58.500
it will allow a
much larger step.
00:16:58.500 --> 00:17:06.000
So it's this constant trade-off
of speed versus stability.
00:17:06.000 --> 00:17:08.590
These are less
stable, as we'll see,
00:17:08.590 --> 00:17:11.320
and we'll get to see
what stability means.
00:17:11.320 --> 00:17:16.790
These are slower but more
stable, implicit methods.
00:17:16.790 --> 00:17:18.310
OK.
00:17:18.310 --> 00:17:21.080
I didn't yet say what stiff is.
00:17:21.080 --> 00:17:22.400
I'll say that in a moment.
00:17:25.090 --> 00:17:26.350
OK.
00:17:26.350 --> 00:17:29.220
So explicit versus implicit.
00:17:29.220 --> 00:17:35.290
U_(n+1) immediately or
U_(n+1) only by maybe using
00:17:35.290 --> 00:17:39.600
Newton's method to
solve an equation,
00:17:39.600 --> 00:17:46.180
because U_(n+1) appears
in a complicated formula.
00:17:46.180 --> 00:17:49.460
And we have an
equation to solve.
00:17:49.460 --> 00:17:50.530
OK.
00:17:50.530 --> 00:17:58.690
So now, this part
of the board begins
00:17:58.690 --> 00:18:01.300
to identify different methods.
00:18:01.300 --> 00:18:05.020
And I'll follow the
same two-column system
00:18:05.020 --> 00:18:10.090
that non-stiff versus stiff.
00:18:10.090 --> 00:18:15.370
So this will be explicit
and those will be implicit.
00:18:15.370 --> 00:18:16.930
OK.
00:18:16.930 --> 00:18:20.070
So the one method that occurs
to everybody right away
00:18:20.070 --> 00:18:21.670
is Euler's method.
00:18:21.670 --> 00:18:25.720
And that will be the first
method we'll construct,
00:18:25.720 --> 00:18:27.670
of course.
00:18:27.670 --> 00:18:32.290
So that's the first
idea anybody would have.
00:18:32.290 --> 00:18:35.450
It has the minimum accuracy.
00:18:35.450 --> 00:18:36.610
It's first order.
00:18:36.610 --> 00:18:42.030
The order of accuracy will
be an issue for all methods.
00:18:42.030 --> 00:18:48.500
And Euler has p equal 1,
order p equal 1, first order.
00:18:48.500 --> 00:18:50.870
So it's too crude.
00:18:50.870 --> 00:18:54.100
I mean, if you wanted
to track a space
00:18:54.100 --> 00:19:00.180
station with Euler's method,
you would lose it real fast,
00:19:00.180 --> 00:19:03.750
or you would have to
take delta t so small
00:19:03.750 --> 00:19:05.360
that it would be impossible.
00:19:05.360 --> 00:19:08.930
So Euler's method is the
first one you think of,
00:19:08.930 --> 00:19:11.110
but not the one you finish with.
00:19:11.110 --> 00:19:14.040
And similarly on
the implicit side,
00:19:14.040 --> 00:19:19.230
backward Euler, we'll see, is
again the first implicit method
00:19:19.230 --> 00:19:21.210
you think of.
00:19:21.210 --> 00:19:25.050
But in the end, it's probably
not the one you finish with.
00:19:25.050 --> 00:19:26.310
OK.
00:19:26.310 --> 00:19:31.780
So those are two specific
methods that are easy.
00:19:31.780 --> 00:19:35.660
The first ones we'll write down.
00:19:35.660 --> 00:19:39.930
Now then comes families
of methods, especially
00:19:39.930 --> 00:19:42.560
these Adams families.
00:19:42.560 --> 00:19:50.850
So Adams-Bashforth is explicit,
Adams-Moulton are implicit,
00:19:50.850 --> 00:20:01.110
and the coefficients are in
books on numerical analysis
00:20:01.110 --> 00:20:02.400
and will be on the web.
00:20:05.140 --> 00:20:08.480
And I'll say more
about them of course.
00:20:08.480 --> 00:20:10.770
And see the first ones.
00:20:10.770 --> 00:20:13.030
So what about Adams-Bashforth--
how does it
00:20:13.030 --> 00:20:14.930
differ from Adams-Moulton?
00:20:14.930 --> 00:20:15.970
OK.
00:20:20.520 --> 00:20:22.260
So those are multi-step methods.
00:20:22.260 --> 00:20:26.380
Number two was
multi-step methods.
00:20:26.380 --> 00:20:31.840
To get more accurate
we use more old values.
00:20:31.840 --> 00:20:34.450
So Adams-Moulton
and Adams-Bashforth,
00:20:34.450 --> 00:20:40.430
or especially Adams-Bashforth,
would use several old values
00:20:40.430 --> 00:20:43.290
to get the order of
accuracy up high.
00:20:43.290 --> 00:20:44.940
OK.
00:20:44.940 --> 00:20:49.370
Now this third category.
00:20:52.290 --> 00:20:53.930
Of course, actually
Euler would be
00:20:53.930 --> 00:20:56.290
the first of the
Adams-Bashforth,
00:20:56.290 --> 00:21:01.240
and backward Euler might
be early in Adams-Moulton.
00:21:01.240 --> 00:21:06.240
Or maybe backward Euler is
early in backward differences.
00:21:06.240 --> 00:21:08.290
OK.
00:21:08.290 --> 00:21:11.960
So what I want to
say is, Runge-Kutta
00:21:11.960 --> 00:21:19.170
is like a different approach
to constructing methods.
00:21:19.170 --> 00:21:22.270
You might know
some of these already.
00:21:22.270 --> 00:21:24.290
And it's a one-step method.
00:21:27.510 --> 00:21:34.350
That's a method
that just produces,
00:21:34.350 --> 00:21:44.620
by sort of half-steps, you
could say, it gets to U_(n+1).
00:21:44.620 --> 00:21:47.210
But it's explicit.
00:21:47.210 --> 00:21:52.760
And the code in MATLAB,
the code ODE forty-five,
00:21:52.760 --> 00:21:56.000
or maybe I should
say ODE four five,
00:21:56.000 --> 00:22:03.300
is like the workhorse of ODEs.
00:22:03.300 --> 00:22:05.770
I guess I'm hoping
you'll try that.
00:22:05.770 --> 00:22:08.210
You'll find ODE45.
00:22:08.210 --> 00:22:10.720
And I'll try to get
it onto the web,
00:22:10.720 --> 00:22:13.980
but you could just easily
discover the syntax
00:22:13.980 --> 00:22:17.450
to call it, and apply
it to an equation.
00:22:17.450 --> 00:22:20.560
Yeah I'll get
examples on the web,
00:22:20.560 --> 00:22:22.210
and please do the examples.
00:22:22.210 --> 00:22:29.440
So this is-- the 4,
5 typically means
00:22:29.440 --> 00:22:32.690
that it uses a
fourth-order Runge-Kutta,
00:22:32.690 --> 00:22:34.460
so that's pretty accurate.
00:22:34.460 --> 00:22:41.570
To get up to a fourth
order means the error
00:22:41.570 --> 00:22:47.200
at time t equal 1, say,
is delta t to the fourth.
00:22:47.200 --> 00:22:52.360
So by cutting delta t in half,
you divide the error by 16.
00:22:52.360 --> 00:22:55.530
And then it also uses
a fifth-order one.
00:22:59.660 --> 00:23:07.010
And maybe I can
point to these words.
00:23:07.010 --> 00:23:14.430
What makes ODE45-- four
five-- good, successful?
00:23:14.430 --> 00:23:17.080
I mean, how does it work?
00:23:17.080 --> 00:23:19.560
It slows down or speeds up.
00:23:19.560 --> 00:23:23.800
It speeds up when it can, it
slows down when it has to.
00:23:23.800 --> 00:23:28.350
And speed up means
a bigger delta t,
00:23:28.350 --> 00:23:30.670
slow down means a
smaller delta t,
00:23:30.670 --> 00:23:34.500
for the sake of
stability or accuracy.
00:23:34.500 --> 00:23:35.670
Yeah.
00:23:35.670 --> 00:23:39.540
So if the equation is
suddenly doing something,
00:23:39.540 --> 00:23:42.480
if the solution is
suddenly doing something,
00:23:42.480 --> 00:23:47.080
you know, important and
quick, then probably
00:23:47.080 --> 00:23:54.050
at that period delta t will get
reduced automatically by ODE45.
00:23:54.050 --> 00:23:57.380
It'll cut it in half, cut
in half again, half again.
00:24:02.280 --> 00:24:06.340
Every good code is
constantly estimating,
00:24:06.340 --> 00:24:09.200
using internal
checks to estimate
00:24:09.200 --> 00:24:10.730
what error it's making.
00:24:13.820 --> 00:24:14.890
About the error.
00:24:14.890 --> 00:24:16.560
Just a quick word
about the error.
00:24:16.560 --> 00:24:21.310
So with ODE45, unless
you tell it otherwise,
00:24:21.310 --> 00:24:28.270
the default accuracy that it
will adjust for is relative.
00:24:28.270 --> 00:24:33.040
So ODE45 will have a
relative accuracy--
00:24:33.040 --> 00:24:38.810
so can I just squeeze in a few
words-- of 10 to the minus 3.
00:24:38.810 --> 00:24:40.390
It'll shoot for that.
00:24:40.390 --> 00:24:45.410
And an absolute accuracy
of 10 to the minus 6.
00:24:45.410 --> 00:24:50.060
So it will try to
keep the error--
00:24:50.060 --> 00:24:51.870
it will plan to keep
the error, and it'll
00:24:51.870 --> 00:24:55.980
tell you if it has a problem,
below 10 to the minus 6.
00:24:55.980 --> 00:24:58.250
And you could set
that differently.
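The tolerance test behind that can be sketched as a toy adaptive Euler/Heun pair (a Python sketch with the same 10^-3 and 10^-6 defaults; ODE45's actual controller is more refined): a step is accepted when the error estimate is below atol + rtol*|u|, and delta t shrinks or grows accordingly.

```python
def adaptive_solve(f, u0, T, rtol=1e-3, atol=1e-6):
    """Toy step-size control: compare Euler (order 1) with Heun (order 2)."""
    t, u, dt = 0.0, u0, 0.1
    steps = 0
    while t < T:
        dt = min(dt, T - t)
        euler = u + dt * f(t, u)
        heun = u + dt / 2 * (f(t, u) + f(t + dt, euler))
        err = abs(heun - euler)             # error estimate for the Euler step
        tol = atol + rtol * abs(u)          # mixed relative/absolute test
        if err <= tol:
            t, u = t + dt, heun             # accept, keep the better value
            dt *= 1.5                       # speed up when we can
        else:
            dt *= 0.5                       # slow down when we have to
        steps += 1
    return u, steps

u_end, steps = adaptive_solve(lambda t, u: -u, 1.0, 1.0)
# u_end should be close to e^{-1}, reached with a varying step size.
```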
00:24:58.250 --> 00:24:59.910
OK.
00:24:59.910 --> 00:25:04.900
I hope you experiment
a little with ODE45.
00:25:04.900 --> 00:25:10.740
And you can either plot the
solutions, some exponential.
00:25:10.740 --> 00:25:12.220
I mean, push it.
00:25:12.220 --> 00:25:20.120
Like, let delta t be pretty big,
and see where it breaks down.
00:25:20.120 --> 00:25:23.780
Well, I guess ODE45 is
engineered not to break down.
00:25:23.780 --> 00:25:27.260
If you try a delta
t-- it'll decide
00:25:27.260 --> 00:25:30.470
on what delta t it can do.
00:25:30.470 --> 00:25:35.570
So we'll also
code, just quickly,
00:25:35.570 --> 00:25:42.150
some methods by ourselves,
and see for a large delta
00:25:42.150 --> 00:25:45.140
t, what problems could happen.
00:25:45.140 --> 00:25:45.660
OK.
00:25:45.660 --> 00:25:48.240
And then backwards.
00:25:48.240 --> 00:25:50.500
This says backward differences.
00:25:50.500 --> 00:25:57.370
That's the category of implicit
methods for stiff equation,
00:25:57.370 --> 00:26:01.260
and backward Euler
would be the very first.
00:26:01.260 --> 00:26:10.200
And ODE15s is perhaps
the most used code
00:26:10.200 --> 00:26:13.850
for stiff equations.
00:26:13.850 --> 00:26:17.430
And it will do the same
thing, it will vary delta t.
00:26:17.430 --> 00:26:19.690
It will vary the order.
00:26:19.690 --> 00:26:24.450
So, if it has to slow
down, it may slow.
00:26:24.450 --> 00:26:26.630
And if things are
happening too quickly,
00:26:26.630 --> 00:26:30.540
it may change to a low-order,
safe, secure method.
00:26:30.540 --> 00:26:33.770
When things are, you know, when
the satellite is buzzing out
00:26:33.770 --> 00:26:38.070
in space, and it
can take giant steps
00:26:38.070 --> 00:26:41.920
or it can have
high-order accuracy,
00:26:41.920 --> 00:26:45.120
say in astronomy, of
course, they would go up
00:26:45.120 --> 00:26:50.700
to eighth order and higher,
tracking, estimating
00:26:50.700 --> 00:26:59.470
where stars would
go or planets, these
00:26:59.470 --> 00:27:03.600
will do that automatically,
up to a certain point.
00:27:03.600 --> 00:27:08.610
And we could write codes,
and of course people
00:27:08.610 --> 00:27:12.070
have, for Adams-Bashforth
and Adams-Moulton,
00:27:12.070 --> 00:27:16.720
varying delta t, varying p.
00:27:16.720 --> 00:27:20.190
I guess here's a comment.
00:27:20.190 --> 00:27:23.710
If I had given this
lecture yesterday,
00:27:23.710 --> 00:27:28.470
I would have emphasized
number two, Adams-Bashforth
00:27:28.470 --> 00:27:30.960
and Adams-Moulton,
because books do it
00:27:30.960 --> 00:27:33.760
and I basically
learned this subject
00:27:33.760 --> 00:27:37.180
from the major
textbooks on ODEs.
00:27:39.830 --> 00:27:44.360
But I had a
conversation with one
00:27:44.360 --> 00:27:47.240
of the computational scientists
in the math department,
00:27:47.240 --> 00:27:49.610
and that changed the lecture.
00:27:49.610 --> 00:27:52.570
Because I learned
that although books
00:27:52.570 --> 00:28:03.620
tend to discuss those, the
Adams methods, in practice
00:28:03.620 --> 00:28:07.630
these methods, the methods
three-- Runge-Kutta
00:28:07.630 --> 00:28:10.760
and backward differences--
are the most used.
00:28:13.950 --> 00:28:22.680
And that's reflected in the fact
that these two major codes that
00:28:22.680 --> 00:28:27.970
are available in MATLAB are
in the number three category.
00:28:27.970 --> 00:28:29.610
OK.
00:28:29.610 --> 00:28:37.660
So now, that's a picture with
names but not details, right?
00:28:37.660 --> 00:28:40.550
And I haven't even
said what stiff means.
00:28:40.550 --> 00:28:43.030
So somewhere I've written
something about stiff.
00:28:43.030 --> 00:28:45.250
Yeah.
00:28:45.250 --> 00:28:45.750
OK.
00:28:51.600 --> 00:28:59.150
So stiff means--
so, if I had only
00:28:59.150 --> 00:29:01.940
e to the minus t
in the solution,
00:29:01.940 --> 00:29:06.230
then the solution would decay
at that rate, e to the minus t.
00:29:06.230 --> 00:29:11.200
The time step would
adjust to keep it accurate.
00:29:11.200 --> 00:29:15.490
And if I have only
e to the minus 99t,
00:29:15.490 --> 00:29:19.140
then again-- that of
course is a faster decay,
00:29:19.140 --> 00:29:22.140
much faster decay--
so the time step would
00:29:22.140 --> 00:29:25.450
adjust to be much
smaller, probably
00:29:25.450 --> 00:29:27.350
about 1/99 or something.
00:29:27.350 --> 00:29:34.690
Anyway, much smaller, to
capture within each step
00:29:34.690 --> 00:29:39.670
the correct behavior of e to
the minus 99t, and no problem.
00:29:39.670 --> 00:29:44.020
So in other words, it's not the
minus 99 that makes it stiff.
00:29:44.020 --> 00:29:46.520
If it was only the minus
99-- in other words,
00:29:46.520 --> 00:29:49.600
if I had a equal
minus 99 up there,
00:29:49.600 --> 00:29:52.670
I wouldn't call it
a stiff equation.
00:29:52.670 --> 00:29:57.740
What makes it stiff is
the combination here.
00:30:00.300 --> 00:30:05.490
You see, what's going to happen
as time gets anywhere, this
00:30:05.490 --> 00:30:07.540
is going to control u of t.
00:30:10.770 --> 00:30:18.810
The decay of u of t is going
to have that time constant 1.
00:30:18.810 --> 00:30:21.460
But this is going
to control-- so I'll
00:30:21.460 --> 00:30:31.550
say but-- this would control
delta t for explicit methods.
00:30:37.600 --> 00:30:40.930
And I just picked minus
99, but it could have
00:30:40.930 --> 00:30:44.720
been minus 999 or far worse.
00:30:44.720 --> 00:30:46.970
So it could be very stiff.
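That stability limit shows up in a few lines (my own step sizes, sketched in Python): forward Euler on u' = -99u blows up once |1 - 99*dt| > 1, while backward Euler decays for any dt.

```python
def forward_euler(a, u0, dt, n):
    """Each step multiplies by (1 + a*dt): stable only if |1 + a*dt| <= 1."""
    u = u0
    for _ in range(n):
        u = u + dt * a * u
    return u

def backward_euler(a, u0, dt, n):
    """Each step divides by (1 - a*dt): decays for every dt when a < 0."""
    u = u0
    for _ in range(n):
        u = u / (1 - dt * a)
    return u

a = -99.0
blown_up = forward_euler(a, 1.0, 0.05, 50)    # factor |1 - 4.95| = 3.95 per step
fine = forward_euler(a, 1.0, 0.01, 250)       # factor |1 - 0.99| = 0.01 per step
safe = backward_euler(a, 1.0, 0.05, 50)       # implicit: large dt, still decays
```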
00:30:54.050 --> 00:30:58.030
This word stiff and identifying
this class of problems, which
00:30:58.030 --> 00:31:04.640
now is familiar to everybody
who has to compute, didn't come,
00:31:04.640 --> 00:31:07.990
you know, with Gauss and so on.
00:31:07.990 --> 00:31:09.840
It came much more recently.
00:31:09.840 --> 00:31:12.820
But it's become
very important now,
00:31:12.820 --> 00:31:15.630
to distinguish stiff
from non-stiff.
00:31:15.630 --> 00:31:18.160
So where do stiff
problems arise?
00:31:18.160 --> 00:31:25.590
Well you can see how they
arise in chemical processes
00:31:25.590 --> 00:31:32.140
with very different decay
rates, biological processes,
00:31:32.140 --> 00:31:36.140
control theory, all sorts
of cases where there's
00:31:36.140 --> 00:31:39.930
a big dynamic range of rates.
00:31:39.930 --> 00:31:47.210
And they also arise in systems,
because the eigenvalues
00:31:47.210 --> 00:31:51.420
could be minus 1 and minus 99.
00:31:51.420 --> 00:31:56.470
I can easily create a matrix
that has those two eigenvalues.
00:31:56.470 --> 00:32:00.080
So it would be a system--
let me create such a matrix.
00:32:00.080 --> 00:32:02.970
I think probably,
if I put minus 50's
00:32:02.970 --> 00:32:08.530
on the diagonal, that
gives trace-- sum down
00:32:08.530 --> 00:32:10.730
the diagonal is minus 100.
00:32:10.730 --> 00:32:13.020
And that should equal the
sum of the eigenvalues.
00:32:13.020 --> 00:32:14.660
So, so far so good.
00:32:14.660 --> 00:32:17.600
And now, if I want these
two particular numbers,
00:32:17.600 --> 00:32:21.250
I could put 49 here.
00:32:21.250 --> 00:32:24.180
So that matrix has
those two eigenvalues.
00:32:32.080 --> 00:32:35.070
I mean, this is
the kind of matrix,
00:32:35.070 --> 00:32:37.400
well you might say
ill conditioned would
00:32:37.400 --> 00:32:40.730
be a word that you hear
for a matrix like this.
00:32:40.730 --> 00:32:43.000
The condition number
is the ratio--
00:32:43.000 --> 00:32:46.830
so the condition
number of this matrix
00:32:46.830 --> 00:32:51.250
would be the ratio 99 over 1.
00:32:51.250 --> 00:32:53.170
And that condition
number is a number
00:32:53.170 --> 00:32:55.570
that MATLAB computes
all the time.
00:32:55.570 --> 00:32:58.260
Every time you give it a
matrix problem like this,
00:32:58.260 --> 00:33:01.350
it estimates it, not computes it.
00:33:01.350 --> 00:33:03.870
Because to compute
the condition number
00:33:03.870 --> 00:33:07.240
requires solving an
eigenvalue problem,
00:33:07.240 --> 00:33:11.160
and it does not want to take
forever to do that exactly.
00:33:11.160 --> 00:33:14.240
But it gets an estimate.
00:33:14.240 --> 00:33:19.080
So that's only a moderate
condition number of 99.
00:33:19.080 --> 00:33:24.700
That's not a disaster,
but it makes the point.
00:33:24.700 --> 00:33:25.410
OK.
00:33:25.410 --> 00:33:27.540
So that's what
stiff equations are.
00:33:27.540 --> 00:33:28.220
OK.
00:33:28.220 --> 00:33:30.780
Now I've got one
more board prepared.
00:33:30.780 --> 00:33:35.390
I won't have this every day, but
let me use it while I've got it.
00:33:35.390 --> 00:33:37.810
OK.
00:33:37.810 --> 00:33:41.310
It's only prepared in
the sense of telling us
00:33:41.310 --> 00:33:43.260
what we're going to do.
00:33:43.260 --> 00:33:46.660
So this is what's
coming: a construction
00:33:46.660 --> 00:33:50.370
of methods, what stability is
about, and what's convergence.
00:33:50.370 --> 00:33:54.750
So let me begin, let me get
Euler constructed today.
00:33:54.750 --> 00:33:56.300
And backward Euler.
00:33:56.300 --> 00:34:03.250
So Euler's method will be
the most obvious method,
00:34:03.250 --> 00:34:07.420
U_(n+1) minus U_n over delta t, equals f at U_n.
00:34:07.420 --> 00:34:15.050
So, in the model
case, it's a*U_n.
00:34:15.050 --> 00:34:16.720
So that's Euler.
00:34:16.720 --> 00:34:19.560
So that tells us then
that we could rewrite it--
00:34:19.560 --> 00:34:27.510
U_(n+1) is explicitly
1 plus a delta t U_n.
00:34:27.510 --> 00:34:28.010
Right?
00:34:28.010 --> 00:34:32.890
I've moved up the delta t
and I moved over the U_n,
00:34:32.890 --> 00:34:34.580
and it's simple like that.
00:34:34.580 --> 00:34:35.880
OK.
00:34:35.880 --> 00:34:40.460
So what's the solution
after n time steps?
00:34:40.460 --> 00:34:42.580
Let me stay with
Euler for a moment.
00:34:42.580 --> 00:34:45.920
After n steps, every
step just multiplied
00:34:45.920 --> 00:34:50.190
by this growth factor.
00:34:50.190 --> 00:34:52.180
Let me refer to that
as the growth factor
00:34:52.180 --> 00:34:56.750
even if it's, as I
hope, smaller than 1.
00:34:56.750 --> 00:35:01.450
So U_n, after n
steps, is this thing
00:35:01.450 --> 00:35:03.990
has multiplied n times U_0.
00:35:07.010 --> 00:35:09.620
And what is the stability now?
00:35:09.620 --> 00:35:17.770
Stability is-- suppose a
is negative, typically.
00:35:17.770 --> 00:35:22.900
That means-- often we'll think
of that model, a negative,
00:35:22.900 --> 00:35:27.460
so that the differential
equation is completely stable.
00:35:27.460 --> 00:35:28.210
Right?
00:35:28.210 --> 00:35:34.550
e to the a*t, with a
negative, is perfect.
00:35:34.550 --> 00:35:37.670
The question is, is
this one perfect?
00:35:37.670 --> 00:35:51.710
So stability will be,
is this number below 1?
00:35:51.710 --> 00:35:54.240
That's the test.
00:35:54.240 --> 00:35:59.110
Is that-- we could argue about,
are we going to let it equal 1?
00:35:59.110 --> 00:36:04.220
Maybe I'll let it equal 1.
00:36:04.220 --> 00:36:09.000
So 1 plus a delta
t smaller than 1.
00:36:09.000 --> 00:36:14.160
That'll be the stability
requirement for forward Euler.
00:36:14.160 --> 00:36:18.190
And what's the a delta t?
00:36:18.190 --> 00:36:19.390
What's the boundary?
00:36:19.390 --> 00:36:22.600
What's the critical
value of a delta t?
00:36:22.600 --> 00:36:25.050
Remember a negative.
00:36:25.050 --> 00:36:27.850
How negative can it
be, or how negative
00:36:27.850 --> 00:36:28.850
can this combination be?
00:36:28.850 --> 00:36:31.690
You see, it's a
combination a delta t.
00:36:31.690 --> 00:36:40.850
So let me put above here what
the stability condition is.
00:36:40.850 --> 00:36:44.640
So the stability
condition on Euler
00:36:44.640 --> 00:36:47.700
is going to be-- so
do you see what it is?
00:36:47.700 --> 00:36:49.980
How negative can a delta t be?
00:36:52.730 --> 00:36:57.290
Go ahead, you don't even have
to push the-- well, let's see.
00:36:57.290 --> 00:37:03.120
So if a delta t, this
is a negative number,
00:37:03.120 --> 00:37:06.840
so this is dragging
us down from 1 always.
00:37:06.840 --> 00:37:10.790
Right? a delta t is like
subtracting away from 1.
00:37:10.790 --> 00:37:14.590
So we're here with the 1.
00:37:14.590 --> 00:37:17.060
And a delta t is
pulling us this way.
00:37:17.060 --> 00:37:23.980
And how far, at what point are
we going to meet instability?
00:37:23.980 --> 00:37:25.480
Yeah.
00:37:25.480 --> 00:37:27.480
When we hit minus 1.
00:37:27.480 --> 00:37:31.160
This is the 1 plus a
delta t that I graphed.
00:37:31.160 --> 00:37:33.630
So it starts at 1
with delta t equals 0.
00:37:33.630 --> 00:37:35.830
And as I increase delta
t, it moves this way.
00:37:35.830 --> 00:37:37.820
And eventually, it hits there.
00:37:37.820 --> 00:37:44.560
And the limit will be at
a delta t equal minus 2.
00:37:47.250 --> 00:37:54.400
Because if it carries me 2 from
1, that carries me to minus 1.
00:37:54.400 --> 00:37:57.960
And if I go further,
I'm unstable.
00:37:57.960 --> 00:38:01.590
And you could easily
check that Euler
00:38:01.590 --> 00:38:05.150
will blow up like
crazy if you just
00:38:05.150 --> 00:38:07.240
go a little beyond that limit.
00:38:07.240 --> 00:38:09.730
Because you're taking
powers of a number
00:38:09.730 --> 00:38:13.180
bigger than 1 in size-- below minus 1.
00:38:13.180 --> 00:38:13.690
OK.
00:38:13.690 --> 00:38:16.610
So there's the stability
limit for Euler.
00:38:16.610 --> 00:38:20.480
And that's a pretty
unpleasant restriction.
00:38:23.480 --> 00:38:28.420
We will have other
stability limits.
00:38:28.420 --> 00:38:30.660
We'll have some stability
limits over here,
00:38:30.660 --> 00:38:35.420
but we're looking for
numbers better than 2.
00:38:35.420 --> 00:38:41.970
This Adams-Bashforth will
give us numbers worse than 2.
00:38:41.970 --> 00:38:44.780
We'll see those methods
first thing next time.
00:38:44.780 --> 00:38:45.610
OK.
00:38:45.610 --> 00:38:48.800
Now, let me just get back-- OK.
00:38:48.800 --> 00:38:51.140
If I drew a picture
then-- let's see,
00:38:51.140 --> 00:38:53.140
have I got another board here?
00:38:53.140 --> 00:38:54.560
Nope.
00:38:54.560 --> 00:39:01.200
If I drew a picture of good
Euler, so Euler doing its job,
00:39:01.200 --> 00:39:04.380
the true solution is
decaying like that,
00:39:04.380 --> 00:39:08.830
and what does Euler do?
00:39:08.830 --> 00:39:11.500
Forward Euler says,
it looks here,
00:39:11.500 --> 00:39:14.680
it looks at the slope,
that's all it knows,
00:39:14.680 --> 00:39:16.820
it goes maybe that far.
00:39:16.820 --> 00:39:22.450
That's a small delta t.
00:39:22.450 --> 00:39:24.840
That's an OK method.
00:39:24.840 --> 00:39:26.600
You see, it's not very
accurate of course.
00:39:26.600 --> 00:39:30.540
It's lost all the
curvature in the solution.
00:39:30.540 --> 00:39:34.120
So this was the true
solution e to the a*t.
00:39:34.120 --> 00:39:40.810
And this is the 1 plus
a delta t to the power,
00:39:40.810 --> 00:39:45.130
well, t over delta t times.
00:39:45.130 --> 00:39:46.990
Well, no, this
was only one step,
00:39:46.990 --> 00:39:50.230
so that would just
be 1 plus a delta t.
00:39:50.230 --> 00:39:51.330
OK.
00:39:51.330 --> 00:39:56.660
Where the bad, the unstable
one, if I just try to draw that,
00:39:56.660 --> 00:40:02.830
the true solution is e
to the a*t, but I take--
00:40:02.830 --> 00:40:06.810
I start with that value
and the right slope.
00:40:06.810 --> 00:40:12.750
But I take too big a time
step and I'm below minus 1.
00:40:15.440 --> 00:40:22.500
So that was a case of
a too large delta t.
00:40:22.500 --> 00:40:24.610
OK.
00:40:24.610 --> 00:40:29.470
Now, I've got one minute left
to mention backward Euler.
00:40:29.470 --> 00:40:31.850
Well, that won't take me long.
00:40:31.850 --> 00:40:36.220
I'll just change
this to U_(n+1).
00:40:36.220 --> 00:40:41.220
So I'm evaluating the
slope at the new time.
00:40:41.220 --> 00:40:49.360
So this is now backward
Euler, or implicit Euler.
00:40:54.140 --> 00:40:55.230
OK.
00:40:55.230 --> 00:40:59.220
So that changes this equation.
00:40:59.220 --> 00:41:05.460
Oh, let's find out the
equation for implicit Euler.
00:41:05.460 --> 00:41:05.960
OK.
00:41:08.600 --> 00:41:11.070
I only have one more
moment, I'll just
00:41:11.070 --> 00:41:14.830
pop it in this little corner.
00:41:14.830 --> 00:41:16.450
So now, here's backward Euler.
00:41:16.450 --> 00:41:18.410
All right.
00:41:18.410 --> 00:41:20.970
You can tell me what backward
Euler is going to be.
00:41:20.970 --> 00:41:27.550
So U_(n+1) minus U_n over
delta t is a*U_(n+1).
00:41:27.550 --> 00:41:30.160
So let me collect
the n plus 1's.
00:41:30.160 --> 00:41:32.260
So I have one of them.
00:41:32.260 --> 00:41:34.951
And then minus a delta t.
00:41:34.951 --> 00:41:35.450
Right?
00:41:35.450 --> 00:41:40.250
And well, I bring it
over to the left side.
00:41:40.250 --> 00:41:44.420
And here I just have U_n.
00:41:44.420 --> 00:41:46.280
OK.
00:41:46.280 --> 00:41:50.570
So what happens
at every step now?
00:41:50.570 --> 00:41:53.850
I divide-- every step
is a division by this.
00:41:53.850 --> 00:41:56.930
So I could bring this
into the denominator.
00:41:56.930 --> 00:42:04.380
So U_n is that
division times U_0.
00:42:09.250 --> 00:42:11.800
And you see the stability?
00:42:11.800 --> 00:42:16.780
What's the size of
this growth factor?
00:42:16.780 --> 00:42:19.160
Remember, I'm thinking
of a as a negative.
00:42:19.160 --> 00:42:23.440
I'm dealing with a stable
differential equation.
00:42:23.440 --> 00:42:28.400
And so what's the main point
about this growth factor,
00:42:28.400 --> 00:42:30.950
this ratio in parentheses?
00:42:30.950 --> 00:42:33.820
It is?
00:42:33.820 --> 00:42:40.670
If a is negative, what do
I know about that number?
00:42:40.670 --> 00:42:45.340
Think of a as
minus 2, let's say.
00:42:45.340 --> 00:42:51.080
So then I have 1 over 1 plus 2
delta t, because it's minus a
00:42:51.080 --> 00:42:51.870
there.
00:42:51.870 --> 00:42:55.680
And it's going to
be smaller than 1.
00:42:55.680 --> 00:42:56.440
Right.
00:42:56.440 --> 00:42:58.400
It's going to be stable.
00:42:58.400 --> 00:43:04.090
So backward Euler is
actually absolutely stable.
00:43:04.090 --> 00:43:07.970
Backward Euler has this
property of A-stability.
00:43:07.970 --> 00:43:14.900
There is no stability limit
on negative a, no problem.
00:43:14.900 --> 00:43:18.230
Or even pure imaginary
a, no problem.
00:43:18.230 --> 00:43:19.580
Always stable.
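[Editor's note: backward Euler's A-stability, checked in a Python sketch. The growth factor 1/(1 - a·Δt) and the stiff rate -99 are from the lecture; the step size Δt = 1 is the editor's deliberately large choice.]

```python
# Backward Euler for u' = a*u: (1 - a*dt) * U_(n+1) = U_n,
# so each step divides by (1 - a*dt).  With a negative, the
# growth factor 1/(1 - a*dt) is always below 1: A-stable.
def backward_euler(a, dt, steps, u0=1.0):
    g = 1.0 / (1.0 - a * dt)
    return u0 * g ** steps

a = -99.0   # the stiff rate from the lecture
dt = 1.0    # far beyond forward Euler's limit of 2/99
print(backward_euler(a, dt, 10))  # decays quietly toward zero
```

With a = -99 and Δt = 1, the growth factor is 1/100: the step size that would make forward Euler explode gives backward Euler rapid, stable decay.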
00:43:19.580 --> 00:43:21.670
So that shows how
implicit methods
00:43:21.670 --> 00:43:27.320
can be much more
stable than explicit,
00:43:27.320 --> 00:43:31.670
just by comparing forward
to backward Euler.
00:43:31.670 --> 00:43:35.400
But, just to remind
you of the price.
00:43:35.400 --> 00:43:39.760
The price is that this
equation has to be solved.
00:43:39.760 --> 00:43:43.170
Well, it wasn't any
trouble in the linear case.
00:43:43.170 --> 00:43:46.400
In the linear case
we're just inverting
00:43:46.400 --> 00:43:48.500
a matrix for a linear system.
00:43:48.500 --> 00:43:53.050
So backward Euler for linear
system, big linear system,
00:43:53.050 --> 00:43:58.850
is a matrix inversion-- well,
a linear system to solve.
00:43:58.850 --> 00:44:02.960
But for a non-linear
problem, it's serious work.
00:44:02.960 --> 00:44:12.740
And yet, you have to do the work
if the explicit method gives
00:44:12.740 --> 00:44:16.680
you such a small delta t that
it's impossible to go with.
00:44:16.680 --> 00:44:17.410
OK.
00:44:17.410 --> 00:44:23.240
Thanks for your patience
on this first lecture.
00:44:23.240 --> 00:44:29.610
So, obviously I'm going to have
one more discussion of ODEs,
00:44:29.610 --> 00:44:34.190
in which we construct
these methods,
00:44:34.190 --> 00:44:36.630
see their stability
and their convergence.
00:44:36.630 --> 00:44:40.414
And then we move on to the PDEs.