1
00:00:00 --> 00:00:01
2
00:00:01 --> 00:00:02
The following content is
provided under a Creative
3
00:00:02 --> 00:00:03
Commons license.
4
00:00:03 --> 00:00:06
Your support will help MIT
OpenCourseWare continue to
5
00:00:06 --> 00:00:09
offer high-quality educational
resources for free.
6
00:00:09 --> 00:00:12
To make a donation, or to view
additional materials from
7
00:00:12 --> 00:00:16
hundreds of MIT courses, visit
MIT OpenCourseWare
8
00:00:16 --> 00:00:19
at ocw.mit.edu.
9
00:00:19 --> 00:00:25
PROFESSOR STRANG: I'm open for
questions as always on every
10
00:00:25 --> 00:00:33
aspect, or comments on every
aspect of the course. Yeah?
11
00:00:33 --> 00:00:52
The MATLAB problem, yes.
12
00:00:52 --> 00:00:55
I better remember what
that MATLAB problem is.
13
00:00:55 --> 00:01:04
So it's solving the equation
-u'' + Vu' equal to some delta
14
00:01:04 --> 00:01:09
function halfway along?
15
00:01:09 --> 00:01:17
And I chose, boundary
conditions like those so that
16
00:01:17 --> 00:01:20
this gets replaced by a
matrix K/(delta x squared).
17
00:01:20 --> 00:01:24
18
00:01:24 --> 00:01:30
And this term is V times
some center difference.
19
00:01:30 --> 00:01:32
I don't know whether
I should use C.
20
00:01:32 --> 00:01:36
We called C earlier in the
book, C was a circulant.
21
00:01:36 --> 00:01:40
This is a centered first
difference over delta x
22
00:01:40 --> 00:01:46
equal right-hand side.
23
00:01:46 --> 00:01:59
All that multiplies the
discrete solution u.
24
00:01:59 --> 00:02:01
I didn't get that very
well for the camera.
25
00:02:01 --> 00:02:03
Maybe I'll say it again.
26
00:02:03 --> 00:02:08
We have the second difference
that corresponds to -u''.
27
00:02:09 --> 00:02:11
And V times the first
difference that
28
00:02:11 --> 00:02:12
corresponds to Vu'.
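As a concrete sketch of the system being described, here is one way to set it up in Python/NumPy. The choices delta x = 1/(n+1) and a discrete delta of height 1/(delta x) at the middle point are assumptions consistent with this discussion, not a copy of the actual assignment code:

```python
import numpy as np

def convection_diffusion_system(n, V):
    """Assemble (K/dx^2 + V*C/(2dx)) u = f with a discrete delta load.

    K is the second-difference matrix (for -u''), C is the centered
    first-difference matrix (for u'), and f puts height 1/dx at the
    midpoint to model the delta function halfway along."""
    dx = 1.0 / (n + 1)
    K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    C = np.eye(n, k=1) - np.eye(n, k=-1)
    A = K / dx**2 + V * C / (2 * dx)
    f = np.zeros(n)
    f[n // 2] = 1.0 / dx
    return A, f

A, f = convection_diffusion_system(5, 3.0)
u = np.linalg.solve(A, f)
```

Each interior row of A carries the pattern [-(1/dx^2) - V/(2dx), 2/dx^2, -(1/dx^2) + V/(2dx)], the second difference plus V times the centered first difference.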
29
00:02:12 --> 00:02:18
30
00:02:18 --> 00:02:23
So the physical problem is,
what happens as V gets larger.
31
00:02:23 --> 00:02:25
V grows.
32
00:02:25 --> 00:02:31
How does the solution
change as V increases?
33
00:02:31 --> 00:02:34
So it's a very
practical problem.
34
00:02:34 --> 00:02:39
The V measures the importance
of this convection, the
35
00:02:39 --> 00:02:43
velocity of the fluid.
36
00:02:43 --> 00:02:56
Then n, which is, well maybe
delta x is 1/(n+1), maybe.
37
00:02:56 --> 00:03:02
So that we would have n
interior points and we
38
00:03:02 --> 00:03:06
know the boundary values.
39
00:03:06 --> 00:03:14
So the question is how do we
interpret this increasing n?
40
00:03:14 --> 00:03:17
I guess we expect more and
more accuracy, right?
41
00:03:17 --> 00:03:21
As n increases, delta
x getting smaller.
42
00:03:21 --> 00:03:27
Our idea is that this should
be a better and better
43
00:03:27 --> 00:03:33
approximation to the
differential equation.
44
00:03:33 --> 00:03:38
The question about the
eigenvalues of this matrix, I
45
00:03:38 --> 00:03:44
don't know exactly the answer.
46
00:03:44 --> 00:03:49
So the eigenvalues, did you
discover that the eigenvalues
47
00:03:49 --> 00:03:54
change suddenly? They're
real for small
48
00:03:54 --> 00:03:56
values of V and n.
49
00:03:56 --> 00:04:02
But then there's some point at
which they become complex.
50
00:04:02 --> 00:04:06
And a point that involves
both V and delta x
51
00:04:06 --> 00:04:09
actually, V and n.
52
00:04:09 --> 00:04:12
It's kind of interesting.
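He leaves the eigenvalue question open, but it is easy to experiment numerically. The threshold claimed in the comment below is a back-of-envelope deduction from the standard eigenvalue formula for constant tridiagonal matrices, not something stated in the lecture:

```python
import numpy as np

def eig_imag_span(n, V):
    """Largest |imaginary part| among eigenvalues of K/dx^2 + V*C/(2dx)."""
    dx = 1.0 / (n + 1)
    K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    C = np.eye(n, k=1) - np.eye(n, k=-1)
    A = K / dx**2 + V * C / (2 * dx)
    return np.max(np.abs(np.linalg.eigvals(A).imag))

# For a constant tridiagonal matrix with off-diagonal entries b and c,
# the eigenvalues are diag + 2*sqrt(b*c)*cos(k*pi/(n+1)).  Here
# b*c = (1 - (V*dx/2)**2) / dx**4, so the spectrum stays real while
# V*dx/2 <= 1 and turns complex once V*dx/2 > 1 -- a point that
# involves both V and n, as observed.
```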
53
00:04:12 --> 00:04:22
I guess at this moment,
interesting is the key word.
54
00:04:22 --> 00:04:24
So what is the physics
going on here?
55
00:04:24 --> 00:04:36
So I have a 1-D flow and
it's blocked by these
56
00:04:36 --> 00:04:38
boundary conditions.
57
00:04:38 --> 00:04:42
They're sort of blocking
it at both ends.
58
00:04:42 --> 00:04:43
And I'm looking for
a steady state.
59
00:04:43 --> 00:04:47
And I'm feeding in
this source here.
60
00:04:47 --> 00:04:54
So I'm feeding in a
source halfway along.
61
00:04:54 --> 00:04:55
So I think that would be right.
62
00:04:55 --> 00:05:01
Now what's going to
happen roughly?
63
00:05:01 --> 00:05:05
This term is, I think,
flowing to the right.
64
00:05:05 --> 00:05:10
So it's going to carry,
suppose this was smoke or
65
00:05:10 --> 00:05:13
particles or whatever.
66
00:05:13 --> 00:05:17
This is like an environmental
differential equation or a
67
00:05:17 --> 00:05:19
chemical engineering equation.
68
00:05:19 --> 00:05:22
Just oh, and more and
more applications.
69
00:05:22 --> 00:05:30
So I think of it as this term
is carrying the flow with it.
70
00:05:30 --> 00:05:31
Like, that way.
71
00:05:31 --> 00:05:38
So I'm expecting your solutions
to be larger on this side.
72
00:05:38 --> 00:05:43
And how do we get any
action at all on the left?
73
00:05:43 --> 00:05:46
That comes from the diffusion.
74
00:05:46 --> 00:05:52
The particles, when they enter,
get carried away, and faster
75
00:05:52 --> 00:05:54
and faster as V increases.
76
00:05:54 --> 00:05:58
But at the same time they
diffuse, they bounce
77
00:05:58 --> 00:05:59
back and forth.
78
00:05:59 --> 00:06:04
And some bit of it
goes this way.
79
00:06:04 --> 00:06:06
But not a lot.
80
00:06:06 --> 00:06:11
I'm guessing that the solution
would have a profile that
81
00:06:11 --> 00:06:13
would be kind of small.
82
00:06:13 --> 00:06:15
And then there's whatever
jump there has to be here,
83
00:06:15 --> 00:06:18
and then larger there.
84
00:06:18 --> 00:06:25
Oh, but then I have
to get it to zero.
85
00:06:25 --> 00:06:27
It's a strange situation.
86
00:06:27 --> 00:06:31
And maybe I'll take this
chance to mention it.
87
00:06:31 --> 00:06:38
What happens as V gets
really, really large?
88
00:06:38 --> 00:06:43
You could say ok, as V
gets extremely large,
89
00:06:43 --> 00:06:45
forget that term.
90
00:06:45 --> 00:06:47
This is the important term.
91
00:06:47 --> 00:06:49
Vu' equals that.
92
00:06:49 --> 00:06:52
Which we could easily solve.
93
00:06:52 --> 00:06:58
But this equation with
just this term is only
94
00:06:58 --> 00:07:00
first order, right?
95
00:07:00 --> 00:07:01
It only has a first derivative.
96
00:07:01 --> 00:07:04
And how many boundary
conditions would we expect to
97
00:07:04 --> 00:07:08
have if I gave you, if this
wasn't here and I gave you
98
00:07:08 --> 00:07:11
just a first order equation?
99
00:07:11 --> 00:07:12
One.
100
00:07:12 --> 00:07:14
And we've got two.
101
00:07:14 --> 00:07:16
Which we're imposing.
102
00:07:16 --> 00:07:20
So this, sort of, limit as
V increases, it produces
103
00:07:20 --> 00:07:24
something called a
boundary layer.
104
00:07:24 --> 00:07:31
The solution is forced
to satisfy these
105
00:07:31 --> 00:07:31
boundary conditions.
106
00:07:31 --> 00:07:34
One of them, it's
quite happy about.
107
00:07:34 --> 00:07:36
The other one, it didn't
really want to satisfy.
108
00:07:36 --> 00:07:41
It only satisfied them
because, I had two
109
00:07:41 --> 00:07:42
originally with this part.
110
00:07:42 --> 00:07:48
But as this takes over, the
struggle to satisfy that second
111
00:07:48 --> 00:07:51
boundary condition that it
really doesn't want to satisfy
112
00:07:51 --> 00:07:57
is resolved by something, a
layer that just makes a, sort
113
00:07:57 --> 00:08:01
of, exponential correction
at the boundary to get to
114
00:08:01 --> 00:08:03
where it's supposed to go.
115
00:08:03 --> 00:08:03
So, anyway.
116
00:08:03 --> 00:08:08
This is a model problem
that has boundary layers.
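The boundary-layer picture can be checked numerically with the same assumed discretization (a sketch, with the grid kept fine enough that V*dx/2 stays below 1): the solution is nearly zero upstream, sits on a plateau of height about 1/V to the right of the source, and dives to zero in a thin layer at x = 1.

```python
import numpy as np

def solve_convection_diffusion(n, V):
    """Solve (K/dx^2 + V*C/(2dx)) u = discrete delta at the midpoint."""
    dx = 1.0 / (n + 1)
    K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    C = np.eye(n, k=1) - np.eye(n, k=-1)
    f = np.zeros(n)
    f[n // 2] = 1.0 / dx
    return np.linalg.solve(K / dx**2 + V * C / (2 * dx), f)

n, V = 199, 50.0
u = solve_convection_diffusion(n, V)
x = np.arange(1, n + 1) / (n + 1)

plateau = u[x > 0.6].max()   # roughly 1/V to the right of the source
upstream = u[x < 0.4].max()  # only a little diffuses back upstream
edge = u[-1]                 # already inside the layer at x = 1
```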
117
00:08:08 --> 00:08:13
Those are terrifically
important in aerodynamics,
118
00:08:13 --> 00:08:17
you have layers around
the actual aircraft.
119
00:08:17 --> 00:08:24
So lots of physics and
computational science is hiding
120
00:08:24 --> 00:08:28
in this type of example.
121
00:08:28 --> 00:08:29
So those are a few words.
122
00:08:29 --> 00:08:34
But not an answer
to your question.
123
00:08:34 --> 00:08:40
I guess I'm happy if you, for
example, let me know where
124
00:08:40 --> 00:08:43
the eigenvalues change
from real to complex.
125
00:08:43 --> 00:08:46
I mean, it happens, but you can
probably, if you look at the
126
00:08:46 --> 00:08:52
matrix, you can see what
happens at the point where
127
00:08:52 --> 00:08:58
that change takes place.
128
00:08:58 --> 00:08:59
That's some comments.
129
00:08:59 --> 00:09:03
And with the MATLAB, pure
MATLAB part, I guess I'm
130
00:09:03 --> 00:09:08
hoping, and the graders are
hoping, that by providing a
131
00:09:08 --> 00:09:15
code we'll get a pretty
systematic set of answers for
132
00:09:15 --> 00:09:22
the requested V equal, what
was it, three and 12
133
00:09:22 --> 00:09:24
maybe, or something.
134
00:09:24 --> 00:09:25
And the requested n's (delta x's).
135
00:09:25 --> 00:09:29
136
00:09:29 --> 00:09:32
So I'm hoping that everybody's
graph is going to look pretty
137
00:09:32 --> 00:09:36
similar for those requests.
138
00:09:36 --> 00:09:42
And then if you could do a
little exploration yourself and
139
00:09:42 --> 00:09:48
make that a fifth page, or
really less than a page,
140
00:09:48 --> 00:09:51
about take V larger.
141
00:09:51 --> 00:09:53
Take n larger.
142
00:09:53 --> 00:09:54
What happens?
143
00:09:54 --> 00:09:57
I'm very happy.
144
00:09:57 --> 00:09:59
Or any direction.
145
00:09:59 --> 00:10:05
I mean, just think of it as
a mini-project to do the
146
00:10:05 --> 00:10:15
requested ones and then to do
a little experimentation.
147
00:10:15 --> 00:10:22
I'll just be interested
about anything you observe.
148
00:10:22 --> 00:10:28
So there's no right
answer to that part.
149
00:10:28 --> 00:10:33
Is that any help
with the MATLAB?
150
00:10:33 --> 00:10:38
Let me just say, since you
showed up today, if you have
151
00:10:38 --> 00:10:45
trouble with the MATLAB and you
need Peter's help at noon
152
00:10:45 --> 00:10:50
Friday it would be ok to turn
in the MATLAB later on Friday.
153
00:10:50 --> 00:10:52
I shouldn't say that.
154
00:10:52 --> 00:10:55
But I just did.
155
00:10:55 --> 00:10:57
So there you are.
156
00:10:57 --> 00:11:00
Right.
157
00:11:00 --> 00:11:07
I mean, this course, as you've
begun to see, I know that you
158
00:11:07 --> 00:11:10
have lots of demands
on your time.
159
00:11:10 --> 00:11:13
And sometimes you
have job interviews.
160
00:11:13 --> 00:11:14
All sorts of stuff.
161
00:11:14 --> 00:11:17
Conferences that
you have to go to.
162
00:11:17 --> 00:11:19
We deal with that.
163
00:11:19 --> 00:11:25
So just give me the homeworks
as soon as you can.
164
00:11:25 --> 00:11:29
If you're ready Friday at
class time, that's perfect.
165
00:11:29 --> 00:11:39
If are stuck on MATLAB and you
want to go to Peter's
166
00:11:39 --> 00:11:42
discussion, which follows the
Friday class in here,
167
00:11:42 --> 00:11:45
that's fine too.
168
00:11:45 --> 00:11:48
So that's some thoughts
about the MATLAB and
169
00:11:48 --> 00:11:53
it's partly open-ended.
170
00:11:53 --> 00:11:57
What about lots of other
things in this course?
171
00:11:57 --> 00:11:59
Homework came in.
172
00:11:59 --> 00:12:00
How was the homework?
173
00:12:00 --> 00:12:02
Maybe I just take
this chance to ask.
174
00:12:02 --> 00:12:05
Was it a reasonable length?
175
00:12:05 --> 00:12:07
The homework three, the pset?
176
00:12:07 --> 00:12:09
177
00:12:09 --> 00:12:15
Not too bad?
178
00:12:15 --> 00:12:19
I will be able to post
solutions because several
179
00:12:19 --> 00:12:29
people are contributing typed
solutions that could go on on
180
00:12:29 --> 00:12:36
the math.mit.edu/cse website.
181
00:12:36 --> 00:12:40
Or they could also go
on our 18.085 website
182
00:12:40 --> 00:12:41
for this semester.
183
00:12:41 --> 00:12:42
Good.
184
00:12:42 --> 00:12:44
Ok, help me out with
some questions.
185
00:12:44 --> 00:12:58
Thanks.
186
00:12:58 --> 00:13:01
So the question is about
element matrices.
187
00:13:01 --> 00:13:12
In lecture 8, I guess it was,
where we were assembling this,
188
00:13:12 --> 00:13:16
I used the word that's used by
finite element people,
189
00:13:16 --> 00:13:19
we were assembling K.
190
00:13:19 --> 00:13:26
And one way to do it was to
multiply the three matrices.
191
00:13:26 --> 00:13:30
But that's not how
it's actually done.
192
00:13:30 --> 00:13:35
It's actually assembled
out of small matrices.
193
00:13:35 --> 00:13:39
Maybe small k would
be the right letter.
194
00:13:39 --> 00:13:44
So this would be assembling K
out of these smaller
195
00:13:44 --> 00:13:47
element matrices.
196
00:13:47 --> 00:13:51
So for example, what's the
element matrix for, one for
197
00:13:51 --> 00:13:57
each spring in this particular
problem of springs and masses.
198
00:13:57 --> 00:14:03
So let me draw some springs,
some masses, some springs,
199
00:14:03 --> 00:14:08
another mass, more springs,
fixed, not fixed, whatever,
200
00:14:08 --> 00:14:12
maybe fixed-free the way
I've drawn it there.
201
00:14:12 --> 00:14:15
So here's a spring with
spring constant c_2.
202
00:14:16 --> 00:14:23
And we could look at the
contribution to the whole
203
00:14:23 --> 00:14:29
matrix coming from this
piece of the problem.
204
00:14:29 --> 00:14:33
This was actually, the finite
element method has a wonderful
205
00:14:33 --> 00:14:39
history of people, and it had
different names way back, of
206
00:14:39 --> 00:14:45
people seeing the structure as
broken in pieces and then
207
00:14:45 --> 00:14:47
connected together.
208
00:14:47 --> 00:14:50
And then what did a
typical piece look like?
209
00:14:50 --> 00:14:57
So a typical piece there, well,
let me just write down what
210
00:14:57 --> 00:14:59
this matrix is going to be.
211
00:14:59 --> 00:15:03
The little element matrix
is coming from spring two.
212
00:15:03 --> 00:15:07
So this would be, like
element two will be,
213
00:15:07 --> 00:15:10
it'll have a c_2 outside.
214
00:15:10 --> 00:15:12
I'll put the c_2 outside.
215
00:15:12 --> 00:15:18
And then inside will
be this little guy.
216
00:15:18 --> 00:15:23
And we can talk more
about why it's that one.
217
00:15:23 --> 00:15:30
But just to have the element
matrix there on the board.
218
00:15:30 --> 00:15:38
So my claim is that this is
a small piece of the big K.
219
00:15:38 --> 00:15:45
So the big K matrix, what's the
size of the big K, of K itself?
220
00:15:45 --> 00:15:48
Three by three in this
case, yeah, three masses.
221
00:15:48 --> 00:15:50
So it'll be three by three.
222
00:15:50 --> 00:15:57
So I'm thinking that this
spring which connects mass one
223
00:15:57 --> 00:16:05
to mass two, so it's only going
to be like, a two by two piece,
224
00:16:05 --> 00:16:11
a little local piece, you could
say, that that little k fits in
225
00:16:11 --> 00:16:20
this, is assembled into the, I
better call it k_2, right?
226
00:16:20 --> 00:16:26
So it's a little k that sits up
in this two by two block and
227
00:16:26 --> 00:16:29
doesn't contribute to the rest.
228
00:16:29 --> 00:16:32
Then let's just draw the
rest of the picture.
229
00:16:32 --> 00:16:35
So this would produce an
element matrix that looks
230
00:16:35 --> 00:16:39
just the same, that's
the beauty of this.
231
00:16:39 --> 00:16:43
That all the elements, apart
from change in the spring
232
00:16:43 --> 00:16:44
constant, look the same.
233
00:16:44 --> 00:16:48
So there'd be a little k_3.
234
00:16:49 --> 00:16:51
And where will it go?
235
00:16:51 --> 00:16:55
This is the whole
core of the point.
236
00:16:55 --> 00:16:57
It'll go to the lower right.
237
00:16:57 --> 00:17:00
Down here?
238
00:17:00 --> 00:17:02
Overlapping, overlapping.
239
00:17:02 --> 00:17:09
Because this mass is attached
to that spring and to that one.
240
00:17:09 --> 00:17:12
So these little element
matrices, they overlap.
241
00:17:12 --> 00:17:18
And you just need, if you can
imagine the code that's going
242
00:17:18 --> 00:17:23
to do this, you need a list
of springs and a list of
243
00:17:23 --> 00:17:26
masses and a list of the
connections between them.
244
00:17:26 --> 00:17:29
And it'll sit in here
because it's two by two.
245
00:17:29 --> 00:17:33
So that's two by two, that's
two by two and in this
246
00:17:33 --> 00:17:38
overlap entry will be a
c_2 from the upper box.
247
00:17:38 --> 00:17:43
It'll be a c_2+c_3 as we
discovered by direct
248
00:17:43 --> 00:17:45
multiplication.
249
00:17:45 --> 00:17:47
So that's that spring.
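The assembly being described can be sketched as a short loop. The node numbering and the rule of applying boundary conditions later follow the discussion; the rest is an assumed minimal implementation:

```python
import numpy as np

def assemble_free_free(c):
    """Assemble the free-free stiffness matrix from element matrices.

    Each spring with constant c_e contributes c_e * [[1, -1], [-1, 1]]
    into the 2x2 block for the two nodes it connects; the blocks
    overlap where a node touches two springs."""
    n_nodes = len(c) + 1          # a line of springs: nodes 0 .. len(c)
    K = np.zeros((n_nodes, n_nodes))
    k_elem = np.array([[1.0, -1.0], [-1.0, 1.0]])
    for e, c_e in enumerate(c):   # spring e connects node e to node e+1
        K[e:e + 2, e:e + 2] += c_e * k_elem
    return K

# Fixed-free: fixing node 0 knocks out the first row and column later.
K_free = assemble_free_free([1.0, 2.0, 3.0])
K = K_free[1:, 1:]
```

The free-free matrix annihilates the all-ones vector (it is singular until the boundary conditions remove the rigid motion), and the reduced K shows the c_2 + c_3 overlap entry found by direct multiplication.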
250
00:17:47 --> 00:17:51
Now what about this
first spring?
251
00:17:51 --> 00:17:52
So there's a little k_1.
252
00:17:54 --> 00:17:57
Now k_1 should look
the same as this.
253
00:17:57 --> 00:17:59
Except what?
254
00:17:59 --> 00:18:01
So there's going to be a
difference with this
255
00:18:01 --> 00:18:04
spring because?
256
00:18:04 --> 00:18:06
Because of this fixed
257
00:18:06 --> 00:18:09
end.
258
00:18:09 --> 00:18:10
There's no mass zero.
259
00:18:12 --> 00:18:18
k_1 would normally sit up
here, but actually it's only
260
00:18:18 --> 00:18:21
going to be one by one.
261
00:18:21 --> 00:18:27
So the k_1 little element
matrix would look like
262
00:18:27 --> 00:18:31
c_1[1, -1; -1, 1].
263
00:18:31 --> 00:18:34
And then the boundary
conditions, knock those out.
264
00:18:34 --> 00:18:39
And an interesting point if
you're writing big code
265
00:18:39 --> 00:18:40
you have to decide.
266
00:18:40 --> 00:18:46
Do I, as I create k_1, this
little element matrix, do I
267
00:18:46 --> 00:18:49
pay attention to the
boundary conditions?
268
00:18:49 --> 00:18:54
And then k_1 would just be
that single number c_1
269
00:18:54 --> 00:18:56
which would sit there.
270
00:18:56 --> 00:19:01
So up there will be c_1 from
the k_1 matrix and c_2
271
00:19:01 --> 00:19:05
from the k_2 matrix.
272
00:19:05 --> 00:19:08
That's the entry there.
273
00:19:08 --> 00:19:12
And then, as I say, in coding
finite elements, which you
274
00:19:12 --> 00:19:17
guys may do at some point,
there's a question.
275
00:19:17 --> 00:19:18
What's sufficient?
276
00:19:18 --> 00:19:24
Shall I pay attention to
the boundary in creating
277
00:19:24 --> 00:19:27
these element matrices
or shall I do it later?
278
00:19:27 --> 00:19:31
And the rule seems
to be, do it later.
279
00:19:31 --> 00:19:40
So what gets created in finite
elements is a, in our language,
280
00:19:40 --> 00:19:43
would be a free-free matrix.
281
00:19:43 --> 00:19:55
It's a matrix where even this
spring has a two by two piece.
282
00:19:55 --> 00:20:01
So what's the problem with
the free-free matrix?
283
00:20:01 --> 00:20:05
The free-free matrices, those
with no supports, those
284
00:20:05 --> 00:20:08
matrices will be singular.
285
00:20:08 --> 00:20:10
Right, singular.
286
00:20:10 --> 00:20:16
Because the vector of all ones,
if you multiply the K, the
287
00:20:16 --> 00:20:18
free-free matrix times the
vector of all ones,
288
00:20:18 --> 00:20:20
you get zero.
289
00:20:20 --> 00:20:23
The whole thing could shift.
290
00:20:23 --> 00:20:28
And will have other rigid
motions in two dimensions.
291
00:20:28 --> 00:20:31
Let's just think ahead here.
292
00:20:31 --> 00:20:38
What are the rigid motions, the
null space of K, you could say,
293
00:20:38 --> 00:20:41
for a two-dimensional truss.
294
00:20:41 --> 00:20:44
So I've got a bunch
of bars and springs.
295
00:20:44 --> 00:20:46
Think of a mattress.
296
00:20:46 --> 00:20:49
I'm in two dimensions, a
mattress is a bunch of springs
297
00:20:49 --> 00:20:54
connected in a 2-D grid.
298
00:20:54 --> 00:20:58
And say it's free at both ends.
299
00:20:58 --> 00:21:00
So what can I do to a mattress?
300
00:21:00 --> 00:21:01
Oh, my gosh!
301
00:21:01 --> 00:21:05
I didn't expect that
to be on videotape.
302
00:21:05 --> 00:21:06
So it's in the plane.
303
00:21:06 --> 00:21:09
We're in 2-D here.
304
00:21:09 --> 00:21:13
What can I do if there are
no boundary conditions?
305
00:21:13 --> 00:21:15
Well I can shift
it to the right.
306
00:21:15 --> 00:21:16
Right?
307
00:21:16 --> 00:21:18
That's our .
308
00:21:18 --> 00:21:20
What else can I do?
309
00:21:20 --> 00:21:21
I can rotate.
310
00:21:21 --> 00:21:23
And I can shift it
the other way.
311
00:21:23 --> 00:21:29
So there would be three rigid
motions for the 2-D problem.
312
00:21:29 --> 00:21:33
Two translations
and a rotation.
313
00:21:33 --> 00:21:35
And when you get up to three
dimensions, do you want to
314
00:21:35 --> 00:21:40
guess what's the number in 3-D?
315
00:21:40 --> 00:21:41
Six.
316
00:21:41 --> 00:21:42
Number six.
317
00:21:42 --> 00:21:46
Three translations and
rotations around three axes.
318
00:21:46 --> 00:21:54
So those, either one rigid
motion or three rigid motions
319
00:21:54 --> 00:21:59
or six rigid motions have to
get, the boundary conditions
320
00:21:59 --> 00:22:04
eventually have to remove
those, have to knock out rows
321
00:22:04 --> 00:22:05
and columns and remove them.
322
00:22:05 --> 00:22:13
But my point was just that
it's quite efficient to do it later.
323
00:22:13 --> 00:22:15
The picture is so clear here.
324
00:22:15 --> 00:22:19
So the actual matrix would
be four by four, the
325
00:22:19 --> 00:22:20
unreduced matrix.
326
00:22:20 --> 00:22:28
And then when we fix node zero
there, that would knock out
327
00:22:28 --> 00:22:31
that part and leave the three
by three that we want.
328
00:22:31 --> 00:22:38
So this is just discussion of
how to think of this matrix K.
329
00:22:38 --> 00:22:45
So the direct way to think
of it was as a product
330
00:22:45 --> 00:22:46
of big matrices.
331
00:22:46 --> 00:22:53
But in reality it's assembled
from these element pieces.
332
00:22:53 --> 00:22:56
And of course, our goal in
talking about finite elements
333
00:22:56 --> 00:23:00
will be to see that.
334
00:23:00 --> 00:23:04
It'll come up again, of course.
335
00:23:04 --> 00:23:06
Your good question led
me there, but did I
336
00:23:06 --> 00:23:11
answer the question?
337
00:23:11 --> 00:23:16
And maybe the way I mentioned
it in class was to notice
338
00:23:16 --> 00:23:20
that these guys, these
element matrices can be
339
00:23:20 --> 00:23:22
thought of this way.
340
00:23:22 --> 00:23:26
It is matrix multiplication,
but done differently.
341
00:23:26 --> 00:23:31
It's a column of this matrix
times the number C here
342
00:23:31 --> 00:23:33
times a row of this.
343
00:23:33 --> 00:23:38
So it's matrix multiplication,
columns times rows.
344
00:23:38 --> 00:23:49
So you can multiply AB, columns
of A times rows of B and
345
00:23:49 --> 00:23:55
then add them up, from one to n.
346
00:23:55 --> 00:23:59
So column of A times row of B.
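That columns-times-rows rule is easy to verify numerically (a generic sketch on random matrices, not any particular matrices from the board):

```python
import numpy as np

# A*B as a sum of rank-one pieces: column k of A times row k of B.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 5))

outer_sum = sum(np.outer(A[:, k], B[k, :]) for k in range(A.shape[1]))
```

Each outer product is a full-size matrix, but if column k and row k are concentrated at a couple of nodes, that rank-one piece is zero everywhere else, which is exactly how an element matrix sits inside big K.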
347
00:23:59 --> 00:24:03
So a column, then, is
a vector like this.
348
00:24:03 --> 00:24:10
A row is a vector like that.
349
00:24:10 --> 00:24:17
And the result is a full size
matrix, but if the column is
350
00:24:17 --> 00:24:23
concentrated at a couple of
nodes and the row is too, then it
351
00:24:23 --> 00:24:28
will have zeroes everywhere
except at that.
352
00:24:28 --> 00:24:34
This is the element matrix
with the C included.
353
00:24:34 --> 00:24:39
That would be the element
matrix already blown up to
354
00:24:39 --> 00:24:43
full size by adding
zeroes elsewhere.
355
00:24:43 --> 00:24:49
So, I mean the heart of the
element matrix is where the
356
00:24:49 --> 00:24:51
action is on that spring.
357
00:24:51 --> 00:24:55
And then, when we assemble
it, of course, it doesn't
358
00:24:55 --> 00:24:59
contribute down there.
359
00:24:59 --> 00:25:03
So the technology of
finite elements is quite
360
00:25:03 --> 00:25:05
interesting and it fits.
361
00:25:05 --> 00:25:11
It's a beautiful way to
see the discrete problem.
362
00:25:11 --> 00:25:15
Ready for another
question of any sort.
363
00:25:15 --> 00:25:22
Thank you, good.
364
00:25:22 --> 00:25:25
Yeah, sorry, it got
onto the problem set.
365
00:25:25 --> 00:25:30
And then I thought-- let me
write those words down first,
366
00:25:30 --> 00:25:34
singular value decomposition.
367
00:25:34 --> 00:25:40
Well I won't write
all those words.
368
00:25:40 --> 00:25:42
That's a wonderful thing.
369
00:25:42 --> 00:25:45
It's a highlight of matrix
theory except for the
370
00:25:45 --> 00:25:47
length of its name.
371
00:25:47 --> 00:25:53
So SVD is what
everybody calls it.
372
00:25:53 --> 00:25:57
It's like, every year now
people appreciate more and more
373
00:25:57 --> 00:26:01
the importance of this SVD,
this singular value
374
00:26:01 --> 00:26:02
decomposition.
375
00:26:02 --> 00:26:09
So shall I say a few
words about it now?
376
00:26:09 --> 00:26:14
Just a few.
377
00:26:14 --> 00:26:18
So it's Section
1.7 of the book.
378
00:26:18 --> 00:26:26
And my thought was, hey we've
done so much matrix theory
379
00:26:26 --> 00:26:29
including eigenvalues and
positive definiteness,
380
00:26:29 --> 00:26:32
let's get on and use it.
381
00:26:32 --> 00:26:37
And then I can come back
to the SVD in a sort
382
00:26:37 --> 00:26:40
of lighter moment.
383
00:26:40 --> 00:26:43
Because I'm not thinking
you will be responsible
384
00:26:43 --> 00:26:43
for the SVD.
385
00:26:44 --> 00:26:46
Eigenvalues I hope
you're understanding.
386
00:26:46 --> 00:26:49
Positive definite I hope
you're understanding.
387
00:26:49 --> 00:26:52
That's heart of
the course stuff.
388
00:26:52 --> 00:26:59
But SVD is a key idea in
linear algebra, but we
389
00:26:59 --> 00:27:01
can't do everything.
390
00:27:01 --> 00:27:06
But we can say what it is.
391
00:27:06 --> 00:27:08
What's the first point?
392
00:27:08 --> 00:27:12
The first point is that every
matrix, even a rectangular
393
00:27:12 --> 00:27:15
matrix, has got a singular
value decomposition.
394
00:27:15 --> 00:27:23
So the matrix A can be m by n.
395
00:27:23 --> 00:27:27
I wouldn't speak about
the eigenvalues of a
396
00:27:27 --> 00:27:29
rectangular matrix.
397
00:27:29 --> 00:27:32
Because if I multiply, you
remember, the eigenvalue
398
00:27:32 --> 00:27:34
equation wouldn't make sense.
399
00:27:34 --> 00:27:39
Ax=lambda*x is no good
if A is rectangular.
400
00:27:39 --> 00:27:40
Right?
401
00:27:40 --> 00:27:45
The input would be of length n
and the output would be of
402
00:27:45 --> 00:27:50
length m and this
wouldn't work.
403
00:27:50 --> 00:27:54
So eigenvectors are
not what I'm after.
404
00:27:54 --> 00:27:58
But somehow the goal
of eigenvectors was
405
00:27:58 --> 00:28:00
to diagonalize.
406
00:28:00 --> 00:28:03
The goal of eigenvectors
was to find these special
407
00:28:03 --> 00:28:09
directions in which matrix
A acted like a number.
408
00:28:09 --> 00:28:15
And then, as we saw today in
the part that partly still up
409
00:28:15 --> 00:28:20
here, we could solve equations
by eigenvectors by looking
410
00:28:20 --> 00:28:26
for these, following
these special guys.
411
00:28:26 --> 00:28:29
What do we do for a
rectangular matrix?
412
00:28:29 --> 00:28:31
What replaces this?
413
00:28:31 --> 00:28:36
So this is now not good.
414
00:28:36 --> 00:28:40
The idea is simply we need
two sets of vectors.
415
00:28:40 --> 00:28:48
We need some v's and some u's.
416
00:28:48 --> 00:28:51
So that's the central
equation of the SVD.
417
00:28:52 --> 00:28:56
Now what can we get?
418
00:28:56 --> 00:29:01
So now we have more freedom
because we're getting a bunch
419
00:29:01 --> 00:29:05
of v's. These
guys are in R^n.
420
00:29:05 --> 00:29:09
They have length n, right
to multiply A times v.
421
00:29:09 --> 00:29:17
And the output is, so the
n of these, n v's, we're
422
00:29:17 --> 00:29:19
in n dimensional space.
423
00:29:19 --> 00:29:22
So those are called
singular vectors.
424
00:29:22 --> 00:29:24
They're called right singular
vectors because they're
425
00:29:24 --> 00:29:26
sitting to the right of A.
426
00:29:26 --> 00:29:32
And these guys, these outputs
will be in-- these are v's
427
00:29:32 --> 00:29:35
in R^n, n dimensions.
428
00:29:35 --> 00:29:41
I have m of these, m u's
in m dimensional space.
429
00:29:41 --> 00:29:45
And these things are
numbers, of course.
430
00:29:45 --> 00:29:50
And actually, they're all
greater or equal to zero.
431
00:29:50 --> 00:29:59
So we can get, by allowing
ourselves the freedom of two sets,
432
00:29:59 --> 00:30:02
right singular vectors and left
singular vectors, we can get a
433
00:30:02 --> 00:30:06
lot more, and we can get,
here's the punch line.
434
00:30:06 --> 00:30:13
We can get the v's to be
orthogonal, orthonormal.
435
00:30:13 --> 00:30:19
Just like the eigenvectors of a
symmetric matrix, we can get
436
00:30:19 --> 00:30:23
these v's, these singular
vectors to be perpendicular to
437
00:30:23 --> 00:30:27
each other and we can get the
u's to be perpendicular
438
00:30:27 --> 00:30:28
to each other.
439
00:30:28 --> 00:30:34
What we can't, what we're not
shooting for is the v's to
440
00:30:34 --> 00:30:36
be the same as the u's.
441
00:30:36 --> 00:30:40
They're not even in the same
space now, if the matrix
442
00:30:40 --> 00:30:41
A is rectangular.
443
00:30:41 --> 00:30:47
And even if the matrix A is
square but not symmetric, we
444
00:30:47 --> 00:30:49
wouldn't get this
perpendicularity.
445
00:30:49 --> 00:30:57
But we get it in the SVD by
having different v's and u's.
446
00:30:57 --> 00:31:02
Here's my picture
of linear algebra.
447
00:31:02 --> 00:31:10
This is the big picture
of linear algebra.
448
00:31:10 --> 00:31:12
This over here is n
dimensional space.
449
00:31:12 --> 00:31:17
We have v's.
450
00:31:17 --> 00:31:19
Think of that as n
dimensional space.
451
00:31:19 --> 00:31:21
That's kind of a puny picture.
452
00:31:21 --> 00:31:29
But when I multiply by A,
I take a vector here,
453
00:31:29 --> 00:31:30
I multiply by A.
454
00:31:30 --> 00:31:33
So let me just do that.
455
00:31:33 --> 00:31:37
I multiply by A, I take a
vector v, well, already put
456
00:31:37 --> 00:31:43
v, I take Av and I get
something over here.
457
00:31:43 --> 00:31:51
And this will be
the space of u's.
458
00:31:51 --> 00:31:54
Now I have to ask you one thing
about linear algebra that
459
00:31:54 --> 00:31:56
I keep hammering away.
460
00:31:56 --> 00:32:04
If I take any vector and
multiply by A, what do I get?
461
00:32:04 --> 00:32:11
If I take any vector
v, like .
462
00:32:11 --> 00:32:13
Here's A say,
[3, 6; 4, 7; 5, 8].
463
00:32:13 --> 00:32:16
464
00:32:16 --> 00:32:19
So that's A times v.
465
00:32:19 --> 00:32:23
What can you tell me about A
times v that goes a little
466
00:32:23 --> 00:32:26
deeper than just telling
me the numbers?
467
00:32:26 --> 00:32:33
It's a linear combination
of the columns.
468
00:32:33 --> 00:32:36
These are the
outputs, the Av's.
469
00:32:36 --> 00:32:40
So this space is called
the column space.
470
00:32:40 --> 00:32:43
It's all combinations
of the columns.
471
00:32:43 --> 00:32:50
These are all combinations
of the columns.
472
00:32:50 --> 00:32:52
That's the column space.
473
00:32:52 --> 00:32:56
It's a bunch of vectors.
474
00:32:56 --> 00:33:02
So the point is that these
u's, that I have like, a
475
00:33:02 --> 00:33:06
fantastic choice of axes.
476
00:33:06 --> 00:33:10
A linear algebra person
would use the word, bases.
477
00:33:10 --> 00:33:14
But geometrically I'm
saying they're fantastic
478
00:33:14 --> 00:33:17
axes in these spaces.
479
00:33:17 --> 00:33:25
So that if I choose the right
axes in the two spaces, then
480
00:33:25 --> 00:33:28
one axis will go to that
one when I multiply by A.
481
00:33:28 --> 00:33:30
The next one will
go to that one.
482
00:33:30 --> 00:33:32
The third will go to that one.
483
00:33:32 --> 00:33:37
It just could not be better.
484
00:33:37 --> 00:33:43
And that's the statement.
485
00:33:43 --> 00:33:46
Now maybe I'll just say
a word about where it's
486
00:33:46 --> 00:33:51
used or why it's useful.
487
00:33:51 --> 00:33:58
Oh, in so many applications.
488
00:33:58 --> 00:34:02
Well most of you are
not biologists.
489
00:34:02 --> 00:34:04
And I'm not certainly.
490
00:34:04 --> 00:34:09
But we know that there's a lot
of interesting math these days
491
00:34:09 --> 00:34:15
in a lot of interesting
experiments with genes.
492
00:34:15 --> 00:34:20
So what happens?
493
00:34:20 --> 00:34:24
So we've got about
40 people here.
494
00:34:24 --> 00:34:34
And we measure the,
well, we get data.
495
00:34:34 --> 00:34:36
That's what I'm
really going to say.
496
00:34:36 --> 00:34:40
Somehow we got a giant
amount of data, right?
497
00:34:40 --> 00:34:41
Probably 40 columns.
498
00:34:41 --> 00:34:46
Everybody here is entitled to
be a column of the matrix.
499
00:34:46 --> 00:34:51
And the entries will be
like, have you got TB, what
500
00:34:51 --> 00:34:55
height, all this stuff.
501
00:34:55 --> 00:34:57
So you got enormous
amount of data.
502
00:34:57 --> 00:35:02
And the question is, what's
important in that data?
503
00:35:02 --> 00:35:06
You've got a million entries in
a giant matrix and you want to
504
00:35:06 --> 00:35:08
extract the important thing.
505
00:35:08 --> 00:35:11
Well, it's the SVD
that does it.
506
00:35:11 --> 00:35:17
You take that giant matrix of
data, you find the v's and
507
00:35:17 --> 00:35:19
the sigmas and the u's.
508
00:35:19 --> 00:35:28
And then the largest sigmas are
the most important information
509
00:35:28 --> 00:35:32
if things are scaled, and
statistics is coming
510
00:35:32 --> 00:35:34
into this also.
511
00:35:34 --> 00:35:43
I could give a sensible, much
more mathematical discussion
512
00:35:43 --> 00:35:49
of it; one name it
goes under is principal
513
00:35:49 --> 00:35:52
component analysis.
514
00:35:52 --> 00:35:56
That's a standard tool for
statisticians looking at
515
00:35:56 --> 00:35:58
giant amounts of data.
516
00:35:58 --> 00:36:02
The idea is to find the principal
components, and those will come
517
00:36:02 --> 00:36:04
from these eigenvectors.
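To make the idea concrete, here is a minimal sketch in Python with NumPy (the course itself uses MATLAB; the data matrix here is invented for illustration, not real measurements):

```python
import numpy as np

# Hypothetical data matrix: 6 measurements (rows) for 5 people (columns).
rng = np.random.default_rng(0)
data = rng.standard_normal((6, 5))
# Center each measurement across the people, as PCA assumes.
data = data - data.mean(axis=1, keepdims=True)

# SVD: data = U @ diag(sigma) @ V^T, singular values sorted largest first.
U, sigma, Vt = np.linalg.svd(data, full_matrices=False)

# The largest sigma marks the most important direction in the data:
# the first principal component.
first_component = U[:, 0]
```

The columns of U paired with the largest sigmas are the principal components a statistician would keep.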
518
00:36:04 --> 00:36:12
Well your question about the
SVD set off that discussion.
519
00:36:12 --> 00:36:15
I'll only add a
couple of words.
520
00:36:15 --> 00:36:22
These v's and these u's are
actually eigenvectors.
521
00:36:22 --> 00:36:28
But they're not eigenvectors
of A. The v's happen to be the
522
00:36:28 --> 00:36:31
eigenvectors of A transpose A.
523
00:36:31 --> 00:36:35
And the u's happen to be
eigenvectors of A, A transpose.
524
00:36:35 --> 00:36:41
And the linear algebra
comes together to give
525
00:36:41 --> 00:36:45
you this key equation.
526
00:36:45 --> 00:36:48
And of course, you and I know
that if I'm looking at the
527
00:36:48 --> 00:36:53
eigenvectors v of A
transpose*A, A transpose A is
528
00:36:53 --> 00:36:56
a symmetric matrix.
529
00:36:56 --> 00:36:58
In fact, positive definite.
530
00:36:58 --> 00:37:02
So that the eigenvectors will
be orthogonal, the eigenvalues
531
00:37:02 --> 00:37:03
will be positive.
532
00:37:03 --> 00:37:07
And then this one is coming
from the eigenvectors of A A
533
00:37:07 --> 00:37:12
transpose, which is different.
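A quick numerical check of that statement, sketched in Python with NumPy (any small rectangular A would do; this one is made up):

```python
import numpy as np

# A small rectangular matrix; nothing special about the entries.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])

U, sigma, Vt = np.linalg.svd(A, full_matrices=False)

# Each row of Vt (column v of V) is an eigenvector of A^T A,
# with eigenvalue sigma^2 ...
v1 = Vt[0]
# ... and each column u of U is an eigenvector of A A^T, same eigenvalue.
u1 = U[:, 0]
```

And the key equation A v = sigma u ties the two sets of eigenvectors together.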
534
00:37:12 --> 00:37:16
So if the matrix A happened to
be square, happened to be
535
00:37:16 --> 00:37:19
symmetric, happened to be
positive definite, this
536
00:37:19 --> 00:37:20
would just be Ax=lambda*x.
537
00:37:21 --> 00:37:24
I didn't have to cross it out.
538
00:37:24 --> 00:37:26
I'll use K for that.
539
00:37:26 --> 00:37:32
So if the matrix A was one of
our favorite matrices, was a K,
540
00:37:32 --> 00:37:35
that would be the case in which
the SVD is no different
541
00:37:35 --> 00:37:37
from eigenvalues.
542
00:37:37 --> 00:37:41
The v's are the same as the
u's, the sigmas are the same
543
00:37:41 --> 00:37:46
as the lambdas, all fine.
544
00:37:46 --> 00:37:50
This is sort of the way to get
the beauty of positive definite
545
00:37:50 --> 00:37:54
symmetric matrices when the
matrix itself that you start
546
00:37:54 --> 00:37:56
with isn't even square.
547
00:37:56 --> 00:38:00
Like this one.
548
00:38:00 --> 00:38:02
More than I wanted to say,
more than you wanted
549
00:38:02 --> 00:38:04
to hear about the SVD.
550
00:38:04 --> 00:38:06
551
00:38:06 --> 00:38:08
What else?
552
00:38:08 --> 00:38:17
Yes, thank you.
553
00:38:17 --> 00:38:19
More about, sure.
554
00:38:19 --> 00:38:20
Let me repeat the question.
555
00:38:20 --> 00:38:24
So this refers to this
morning's lecture, lecture 9
556
00:38:24 --> 00:38:32
about time-dependent problems.
557
00:38:32 --> 00:38:35
And the point was that when I
have a differential equation
558
00:38:35 --> 00:38:39
in time there're lots of
ways to replace it by
559
00:38:39 --> 00:38:45
difference equations.
560
00:38:45 --> 00:38:50
So let's take the equation
du/dt, let's make it first
561
00:38:50 --> 00:38:57
order: it's some function
f of u, and often of t.
562
00:38:57 --> 00:39:02
That would be a first order.
563
00:39:02 --> 00:39:07
Yeah, let's look at it now
that I've written that much down.
564
00:39:07 --> 00:39:09
What have I got?
565
00:39:09 --> 00:39:14
I've got a first order
differential equation.
566
00:39:14 --> 00:39:17
Is it linear?
567
00:39:17 --> 00:39:18
No.
568
00:39:18 --> 00:39:21
I'm allowing some, this
function of u could be sin(u).
569
00:39:23 --> 00:39:25
It could be u to
the tenth power.
570
00:39:25 --> 00:39:28
It could be e to the u.
571
00:39:28 --> 00:39:32
So this is how
equations really come.
572
00:39:32 --> 00:39:35
The linear ones are the model
problems that we can solve.
573
00:39:35 --> 00:39:40
This is how non-linear
equations really come.
574
00:39:40 --> 00:39:44
Euler thought of, let's give
Euler credit here, so forward
575
00:39:44 --> 00:39:51
Euler and backward Euler.
576
00:39:51 --> 00:39:58
And the point will be that
this guy is explicit.
577
00:39:58 --> 00:40:01
So can I write that word so
I don't forget to write it?
578
00:40:01 --> 00:40:02
Explicit.
579
00:40:02 --> 00:40:04
And that this guy
will be implicit.
580
00:40:04 --> 00:40:10
And this is a big distinction.
581
00:40:10 --> 00:40:12
And we'll see it.
582
00:40:12 --> 00:40:20
So this says u_(n+1)-u_n over
delta t is the value of the
583
00:40:20 --> 00:40:33
slope at the start of the step.
584
00:40:33 --> 00:40:37
So that's the most important,
most basic, first thing
585
00:40:37 --> 00:40:38
you would think of.
586
00:40:38 --> 00:40:40
It replaces this by this.
587
00:40:40 --> 00:40:44
You start with u_0, and
from this you get u_1.
588
00:40:45 --> 00:40:48
And then you know u_1 and
from this you get u_2.
589
00:40:49 --> 00:40:55
And the point is at every step
this equation is telling
590
00:40:55 --> 00:40:57
you what u_1 is from u_0.
591
00:40:58 --> 00:41:00
You just move u_0
over to that side.
592
00:41:00 --> 00:41:03
You've only got stuff you
know and then you know u_1.
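A minimal forward Euler sketch in Python (the function names and the test equation du/dt = -u are my own choices for illustration, not from the lecture):

```python
# Forward (explicit) Euler for du/dt = f(u, t): each new value comes
# directly from quantities already known at the start of the step.
def forward_euler(f, u0, dt, nsteps):
    u, t = u0, 0.0
    for _ in range(nsteps):
        u = u + dt * f(u, t)   # slope evaluated at the start of the step
        t += dt
    return u

# Example: du/dt = -u, u(0) = 1; exact solution is e^(-t).
approx = forward_euler(lambda u, t: -u, 1.0, 0.001, 1000)
```

With a first-order method the error at t = 1 is on the order of delta t, which is why so many small steps are needed.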
593
00:41:04 --> 00:41:07
Contrast that with
backward Euler.
594
00:41:07 --> 00:41:14
So that's u_(n+1)-u_n over
delta t is f at-- Now what
595
00:41:14 --> 00:41:18
am I going to put there?
596
00:41:18 --> 00:41:24
I'm going to put the end, the
point we don't know yet.
597
00:41:24 --> 00:41:31
So it's the slope at
u_(n+1) and the time that
598
00:41:31 --> 00:41:32
goes with it, t_(n+1).
599
00:41:34 --> 00:41:41
t_n is just a shorthand for
n*delta t. n steps forward
600
00:41:41 --> 00:41:42
in time got us to t_n.
601
00:41:43 --> 00:41:45
This is one more step.
602
00:41:45 --> 00:41:52
Now why is this
called implicit?
603
00:41:52 --> 00:41:57
How do I find u_(n+1)
out of this?
604
00:41:57 --> 00:41:59
I've got to solve for it.
605
00:41:59 --> 00:42:01
It's much more expensive.
606
00:42:01 --> 00:42:04
Because it'll be probably
a non-linear system of
607
00:42:04 --> 00:42:07
equations to solve.
608
00:42:07 --> 00:42:09
We'll take a little
time on that.
609
00:42:09 --> 00:42:13
But of course, there's one
outstanding method to solve a
610
00:42:13 --> 00:42:16
system of non-linear equations
and that's called
611
00:42:16 --> 00:42:18
Newton's method.
612
00:42:18 --> 00:42:20
So Newton is coming in.
613
00:42:20 --> 00:42:26
Newton's method is sort of
the first, the good way, if
614
00:42:26 --> 00:42:31
you can do it, to solve.
615
00:42:31 --> 00:42:37
Well when I say solve, I mean
set up an iteration which,
616
00:42:37 --> 00:42:43
after maybe three loops or five
loops will be accurate to
617
00:42:43 --> 00:42:46
enough digits that you can
say, ok that's u_(n+1).
618
00:42:47 --> 00:42:49
On to the next step.
619
00:42:49 --> 00:42:53
Then the next step will
have the same equation
620
00:42:53 --> 00:42:57
but with n up one.
621
00:42:57 --> 00:42:57
So it'd be u_(n+2)-u_(n+1).
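Backward Euler can be sketched the same way; here an inner Newton loop solves the scalar implicit equation at each step (function names and the test equation are illustrative, not from the lecture):

```python
# Backward (implicit) Euler: the slope is taken at the unknown point
# u_(n+1), so each step requires solving g(x) = x - u_n - dt*f(x) = 0.
# A few Newton loops on g do the job in this scalar sketch.
def backward_euler(f, dfdu, u0, dt, nsteps):
    u, t = u0, 0.0
    for _ in range(nsteps):
        t_new = t + dt
        x = u                        # start Newton from the old value
        for _ in range(5):           # a few Newton loops usually suffice
            g = x - u - dt * f(x, t_new)
            gprime = 1.0 - dt * dfdu(x, t_new)
            x = x - g / gprime
        u, t = x, t_new
    return u

# Same test equation du/dt = -u; its derivative in u is -1.
approx = backward_euler(lambda u, t: -u, lambda u, t: -1.0, 1.0, 0.001, 1000)
```

The extra work per step is the price of the extra stability.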
622
00:43:00 --> 00:43:05
So you see the extra work
here, but it's more stable.
623
00:43:05 --> 00:43:08
The way this one spiraled
out, this one spiraled
624
00:43:08 --> 00:43:12
in in the model problem.
625
00:43:12 --> 00:43:17
I hope you look at that
Section 2.2 which was about
626
00:43:17 --> 00:43:21
the model problem that
we discussed today.
627
00:43:21 --> 00:43:22
There's more to say.
628
00:43:22 --> 00:43:29
I mean, this is the central
problem of time-dependent,
629
00:43:29 --> 00:43:34
evolving initial
value problems.
630
00:43:34 --> 00:43:37
You're sort of
marching forward.
631
00:43:37 --> 00:43:42
But here, each step of the
march takes an inner loop which
632
00:43:42 --> 00:43:49
has to work hard to solve,
to find where you march to.
633
00:43:49 --> 00:43:52
Is that a help, to indicate?
634
00:43:52 --> 00:43:59
Because this is a very
fundamental difference.
635
00:43:59 --> 00:44:05
And Chapter 6 of the book
develops higher order methods.
636
00:44:05 --> 00:44:08
These are both first order,
first order accurate.
637
00:44:08 --> 00:44:14
The error that you're making
is of the order of delta t.
638
00:44:14 --> 00:44:16
That's not great, right?
639
00:44:16 --> 00:44:19
Because you would have to take
many, many small steps to
640
00:44:19 --> 00:44:24
have a reasonable error.
641
00:44:24 --> 00:44:28
So these higher order methods
allow you to get to a great
642
00:44:28 --> 00:44:32
answer with bigger steps.
643
00:44:32 --> 00:44:37
A lot of thinking has gone
into that and somehow it's
644
00:44:37 --> 00:44:43
a pretty basic problem.
645
00:44:43 --> 00:44:51
Just to mention for MATLAB,
ode45 is maybe the workhorse
646
00:44:51 --> 00:44:56
code to solve this
type of a problem.
647
00:44:56 --> 00:45:01
And the method is not Euler.
648
00:45:01 --> 00:45:04
That would not be good enough.
649
00:45:04 --> 00:45:07
It's called Runge-Kutta.
650
00:45:07 --> 00:45:11
Two guys, Runge and Kutta,
figured out a formula that got
651
00:45:11 --> 00:45:14
up to fourth order accurate.
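ode45 itself uses a Runge-Kutta pair with adaptive step size; as a sketch of the underlying idea, here is the classical fourth-order Runge-Kutta step in Python (the test equation is my own choice for illustration):

```python
# Classical fourth-order Runge-Kutta: four slope evaluations per step,
# combined so the global error is O(dt^4) instead of Euler's O(dt).
def rk4(f, u0, dt, nsteps):
    u, t = u0, 0.0
    for _ in range(nsteps):
        k1 = f(u, t)
        k2 = f(u + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = f(u + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = f(u + dt * k3, t + dt)
        u = u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += dt
    return u

# Same test equation du/dt = -u, but with far bigger steps than Euler needs.
approx = rk4(lambda u, t: -u, 1.0, 0.1, 10)
```

Ten steps of size 0.1 already beat a thousand Euler steps of size 0.001 on this problem; that is the payoff of higher order.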
652
00:45:14 --> 00:45:17
So that's the thing.
653
00:45:17 --> 00:45:23
And then if we looked further
about this subject we would
654
00:45:23 --> 00:45:27
distinguish some equations
that are called stiff.
655
00:45:27 --> 00:45:29
So I'll just write
that word down.
656
00:45:29 --> 00:45:38
Some equations, the iteration
is particularly difficult.
657
00:45:38 --> 00:45:40
You have two time scales.
658
00:45:40 --> 00:45:42
Maybe you've got things
happening on a slow
659
00:45:42 --> 00:45:44
scale and a fast scale.
660
00:45:44 --> 00:45:47
Multi-scale computations,
that's where the
661
00:45:47 --> 00:45:49
subject is now.
662
00:45:49 --> 00:45:55
And so there would be separate
codes with an S in their names
663
00:45:55 --> 00:46:02
suitable for these tougher
problems, stiff equation.
664
00:46:02 --> 00:46:09
I guess one thing you sometimes
get in the review session is a
665
00:46:09 --> 00:46:17
look outside the scope of
what I can cover and ask
666
00:46:17 --> 00:46:20
homework problems about.
667
00:46:20 --> 00:46:29
I'm sure hoping that you're
assembling the elements of
668
00:46:29 --> 00:46:32
computational science here.
669
00:46:32 --> 00:46:34
First, what are the questions?
670
00:46:34 --> 00:46:35
What are some answers?
671
00:46:35 --> 00:46:38
What are the issues?
672
00:46:38 --> 00:46:44
What do you have to balance
to make a good decision?
673
00:46:44 --> 00:46:47
Time for another
question if we like.
674
00:46:47 --> 00:46:50
Yeah, thank you.
675
00:46:50 --> 00:47:05
Stability, yeah, right.
676
00:47:05 --> 00:47:10
This here?
677
00:47:10 --> 00:47:13
First, if we want a matrix
then this has to be a
678
00:47:13 --> 00:47:15
vector of unknowns.
679
00:47:15 --> 00:47:19
I'm thinking now of a system
of, this is n equations.
680
00:47:19 --> 00:47:23
I've got u is a vector
with n components.
681
00:47:23 --> 00:47:25
I've got n equations here.
682
00:47:25 --> 00:47:29
The notation, I can put
a little arrow over it
683
00:47:29 --> 00:47:32
just to remind myself.
684
00:47:32 --> 00:47:36
And if I want to get
to a matrix I better
685
00:47:36 --> 00:47:38
do the linear case.
686
00:47:38 --> 00:47:48
So I'll do the linear case.
687
00:47:48 --> 00:47:51
The big picture is that
the new values come
688
00:47:51 --> 00:47:54
from the old values.
689
00:47:54 --> 00:47:56
There has to be a
matrix multiplier.
690
00:47:56 --> 00:48:01
Anytime we have a
linear process.
691
00:48:01 --> 00:48:06
So the new values come from old
values by some linear map.
692
00:48:06 --> 00:48:12
A matrix is doing it.
693
00:48:12 --> 00:48:14
So there's a sort
of growth matrix.
694
00:48:14 --> 00:48:17
Can I just put down
some letters here?
695
00:48:17 --> 00:48:22
I won't be able to give
you a complete answer.
696
00:48:22 --> 00:48:25
But this'll do it.
697
00:48:25 --> 00:48:34
So u_(n+1) is some
matrix times u_n.
698
00:48:34 --> 00:48:38
That's what I wrote down this
morning for a special case.
699
00:48:38 --> 00:48:44
It's gotta look like that
for a linear problem.
700
00:48:44 --> 00:48:48
And let's suppose that we have
this nice situation as today
701
00:48:48 --> 00:48:51
where it's the same
G at every step.
702
00:48:51 --> 00:48:58
So the problem is not changing
in time, it's linear,
703
00:48:58 --> 00:48:59
it's all good.
704
00:48:59 --> 00:49:03
What is the solution after
n times steps compared
705
00:49:03 --> 00:49:06
to the start?
706
00:49:06 --> 00:49:12
So give me a simple formula for
the solution to this step,
707
00:49:12 --> 00:49:15
step, step equation.
708
00:49:15 --> 00:49:20
If I started at initial value
u_0 and I looked to see, well
709
00:49:20 --> 00:49:25
what matrix gets me over n
steps, what do I write there?
710
00:49:25 --> 00:49:30
G to the n, right, G to the n.
711
00:49:30 --> 00:49:36
So now comes the question.
712
00:49:36 --> 00:49:38
The stability question is
whether the powers
713
00:49:38 --> 00:49:41
of G to the nth grow?
714
00:49:41 --> 00:49:43
And here's the point.
715
00:49:43 --> 00:49:48
That the continuous problem
that this came from, it's
716
00:49:48 --> 00:49:52
got its own growth or
decay or oscillation.
717
00:49:52 --> 00:49:56
This guy, the discrete
one, has got its own.
718
00:49:56 --> 00:50:01
They're going to be close
for a step or two.
719
00:50:01 --> 00:50:06
For one or two steps I'm
expecting that this is a
720
00:50:06 --> 00:50:10
reasonable consistent
approximation to my problem.
721
00:50:10 --> 00:50:16
But after I take a thousand
steps, this one could still
722
00:50:16 --> 00:50:22
be close or it could
have exploded.
723
00:50:22 --> 00:50:26
And that's the stability thing.
724
00:50:26 --> 00:50:32
The choice of the difference
method can be stable or not.
725
00:50:32 --> 00:50:36
And it's going to be the
eigenvalues of G that
726
00:50:36 --> 00:50:38
are the best guide.
727
00:50:38 --> 00:50:45
So in the end people compute
the eigenvalues of the growth
728
00:50:45 --> 00:51:10
matrix and look to see are
they bigger than one or not.
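A small Python/NumPy sketch of that test (the two G matrices are invented examples, one stable and one not):

```python
import numpy as np

# Stability: powers of the growth matrix G stay bounded when every
# eigenvalue has magnitude at most one, and explode when one is bigger.
G_stable = np.array([[0.5, 1.0],
                     [0.0, 0.8]])      # eigenvalues 0.5 and 0.8
G_unstable = np.array([[1.1, 0.0],
                       [0.0, 0.5]])    # eigenvalue 1.1 > 1

def spectral_radius(G):
    return max(abs(np.linalg.eigvals(G)))

# After n steps u_n = G^n u_0, so look at a high power of G.
stable_norm = np.linalg.norm(np.linalg.matrix_power(G_stable, 50))
unstable_norm = np.linalg.norm(np.linalg.matrix_power(G_unstable, 50))
```

Fifty powers of the stable G have nearly vanished; fifty powers of the unstable one have blown up by two orders of magnitude.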
729
00:51:10 --> 00:51:13
Let's close with that example
because that's a good example.
730
00:51:13 --> 00:51:18
So it fits this, but not quite,
because it wasn't completely
731
00:51:18 --> 00:51:20
forward or completely backward.
732
00:51:20 --> 00:51:23
And let's just write down what
it was in that model problem
733
00:51:23 --> 00:51:26
and find the matrix.
734
00:51:26 --> 00:51:29
So can you remind me
what I wrote today?
735
00:51:29 --> 00:51:33
So u, was it u first?
736
00:51:33 --> 00:51:35
U_(n+1) was U_n+delta t*V_n.
737
00:51:35 --> 00:51:41
738
00:51:41 --> 00:51:47
And then the new velocity was
the old velocity and it should
739
00:51:47 --> 00:51:50
be minus delta t times the u.
740
00:51:50 --> 00:51:55
Because our equation is, these
are representing the equation.
741
00:51:55 --> 00:52:01
u'=v is the first one. v'=-u
is my second equation.
742
00:52:01 --> 00:52:04
So and then the point was
I could take that because
743
00:52:04 --> 00:52:10
I know it already, I
could take it there.
744
00:52:10 --> 00:52:14
Where is the matrix here?
745
00:52:14 --> 00:52:20
I want to find the growth
matrix that tells me now-- My
746
00:52:20 --> 00:52:28
growth matrix, of course, this
is [U_(n+1); V_(n+1)] and this is [U_n; V_n].
747
00:52:28 --> 00:52:31
748
00:52:31 --> 00:52:34
And this is a two
by two matrix.
749
00:52:34 --> 00:52:38
And I hope we can read it off.
750
00:52:38 --> 00:52:40
Or find it anyway.
751
00:52:40 --> 00:52:45
Because that's the key.
752
00:52:45 --> 00:52:49
How could we get a matrix
out of these two equations?
753
00:52:49 --> 00:52:52
Let me just ask you to look
at them and think what
754
00:52:52 --> 00:52:58
are we going to do.
755
00:52:58 --> 00:53:00
That's a good question.
756
00:53:00 --> 00:53:02
What shall I do?
757
00:53:02 --> 00:53:07
I would like to know the new
U's, U, V from the old.
758
00:53:07 --> 00:53:10
What'll I do?
759
00:53:10 --> 00:53:22
Bring stuff over: the n+1 terms on
this side and the n terms on this side.
760
00:53:22 --> 00:53:25
This guy I want to
get over here.
761
00:53:25 --> 00:53:29
Can I do that with erasing?
762
00:53:29 --> 00:53:40
So it's going to come
over with a plus sign.
763
00:53:40 --> 00:53:45
So I guess I see here an
implicit matrix acting on the
764
00:53:45 --> 00:53:48
left and an explicit matrix
acting on the right.
765
00:53:48 --> 00:53:50
So I see a matrix on each side.
766
00:53:52 --> 00:53:54
And I see my explicit matrix.
767
00:53:54 --> 00:53:56
Shall I call it E?
768
00:53:56 --> 00:53:58
That's the right sides.
769
00:53:58 --> 00:54:02
I see a one and a
one and a delta t.
770
00:54:02 --> 00:54:09
And now my left side of my
equation, the n+1 stuff.
771
00:54:09 --> 00:54:14
What's my implicit matrix on
the left sides of the equation?
772
00:54:14 --> 00:54:23
Well, a one and a one and a
plus delta t is here, right?
773
00:54:23 --> 00:54:31
And let's call that
I for implicit.
774
00:54:31 --> 00:54:34
Are we close?
775
00:54:34 --> 00:54:36
I think that looks pretty good.
776
00:54:36 --> 00:54:38
What's G?
777
00:54:38 --> 00:54:40
What's the matrix G now?
778
00:54:40 --> 00:54:44
You just have to begin to
develop a little faith that
779
00:54:44 --> 00:54:48
yeah, I can deal with matrices,
I can move them around,
780
00:54:48 --> 00:54:51
they're under my control.
781
00:54:51 --> 00:54:55
I wanted this picture.
782
00:54:55 --> 00:54:58
I want to move everything
to the right.
783
00:54:58 --> 00:55:03
What do I do to move everything
to the right-hand side?
784
00:55:03 --> 00:55:06
I want that to be over there.
785
00:55:06 --> 00:55:07
It's an inverse.
786
00:55:07 --> 00:55:08
It's the inverse.
787
00:55:08 --> 00:55:14
So G, the matrix G there
is, I bring I over here.
788
00:55:14 --> 00:55:22
It's the inverse
of I times the E.
789
00:55:22 --> 00:55:25
And I can figure out
what that matrix is.
790
00:55:25 --> 00:55:28
And I can find its eigenvalues.
791
00:55:28 --> 00:55:31
And it'll be interesting.
792
00:55:31 --> 00:55:33
So actually that's
the perfect problem.
793
00:55:33 --> 00:55:37
I mean, if you want to see
what's going on, well the book
794
00:55:37 --> 00:55:43
will be a good help I think,
but that's the growth
795
00:55:43 --> 00:55:45
matrix for leapfrog.
796
00:55:45 --> 00:55:49
And I'll tell you the result.
797
00:55:49 --> 00:55:56
The eigenvalues are of
magnitude one, as you hope, up
798
00:55:56 --> 00:55:58
to a certain value of delta t.
799
00:55:58 --> 00:56:01
And then when delta t passes
that stability limit the
800
00:56:01 --> 00:56:04
eigenvalues take off.
801
00:56:04 --> 00:56:09
So that this method is
great provided you're not
802
00:56:09 --> 00:56:15
too ambitious and take
too large a delta t.
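Assembling the matrices from the board, here is a Python/NumPy sketch of that computation: build G as the inverse of the implicit matrix times the explicit matrix and check the eigenvalue magnitudes (for this 2 by 2 G, det(G) = 1 and the eigenvalues have magnitude one exactly while |trace(G)| = |2 - dt^2| stays at most 2, so the limit works out to delta t = 2):

```python
import numpy as np

# U_(n+1) = U_n + dt*V_n  and  V_(n+1) + dt*U_(n+1) = V_n,
# written as I_imp @ [U_(n+1); V_(n+1)] = E @ [U_n; V_n].
def growth_matrix(dt):
    I_imp = np.array([[1.0, 0.0],
                      [dt,  1.0]])     # left side: the n+1 terms
    E = np.array([[1.0, dt],
                  [0.0, 1.0]])         # right side: the n terms
    return np.linalg.inv(I_imp) @ E    # G = inv(I_imp) @ E

def max_eig_magnitude(dt):
    return max(abs(np.linalg.eigvals(growth_matrix(dt))))
```

Below the stability limit the eigenvalues sit exactly on the unit circle; past it, one of them takes off, exactly as described.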
803
00:56:15 --> 00:56:18
Thanks for that last
question, thanks for coming.