1
00:00:00 --> 00:00:01
2
00:00:01 --> 00:00:02
The following content is
provided under a Creative
3
00:00:02 --> 00:00:03
Commons license.
4
00:00:03 --> 00:00:06
Your support will help MIT
OpenCourseWare continue to
5
00:00:06 --> 00:00:09
offer high-quality educational
resources for free.
6
00:00:09 --> 00:00:11
To make a donation or to view
additional materials from
7
00:00:11 --> 00:00:15
hundreds of MIT courses, visit
MIT OpenCourseWare
8
00:00:15 --> 00:00:21
at ocw.mit.edu.
9
00:00:21 --> 00:00:23
PROFESSOR STRANG: OK.
10
00:00:23 --> 00:00:27
Well, we had an election
since I saw you last.
11
00:00:27 --> 00:00:32
I hope you're happy
about the results.
12
00:00:32 --> 00:00:35
I'm very happy.
13
00:00:35 --> 00:00:36
Except one thing.
14
00:00:36 --> 00:00:40
There was a senator
who was in 18.085.
15
00:00:40 --> 00:00:42
And he lost his seat.
16
00:00:42 --> 00:00:48
So we don't have a senator
from 18.085 any more.
17
00:00:48 --> 00:00:51
It was Senator Sununu,
from New Hampshire.
18
00:00:51 --> 00:00:53
So he took 18.085.
19
00:00:53 --> 00:00:59
Actually, it was a
funny experience.
20
00:00:59 --> 00:01:05
So, some years ago I had to go
to Washington to testify before
21
00:01:05 --> 00:01:10
the committee about increasing
the funding for mathematics.
22
00:01:10 --> 00:01:13
So I was nervous, of
course, trying to remember
23
00:01:13 --> 00:01:15
what I've got to say.
24
00:01:15 --> 00:01:19
More money is the main point,
but you have to say it
25
00:01:19 --> 00:01:22
more delicately than that.
26
00:01:22 --> 00:01:28
And so I got in, a lot of
people were taking turns, you
27
00:01:28 --> 00:01:30
only get five minutes to do it.
28
00:01:30 --> 00:01:34
And so I went and sat
down, waited for my turn.
29
00:01:34 --> 00:01:38
And here on the committee was
my student, Senator Sununu.
30
00:01:38 --> 00:01:42
Well at that time, I think,
Representative Sununu.
31
00:01:42 --> 00:01:45
So that was nice.
32
00:01:45 --> 00:01:48
He's a nice guy.
33
00:01:48 --> 00:01:53
Actually, the lobbyist who kind
of guided me there, and found
34
00:01:53 --> 00:01:57
the room for me and sort of
told me what to do, said I
35
00:01:57 --> 00:02:01
hope you gave him an A.
36
00:02:01 --> 00:02:06
And the truth is I didn't.
37
00:02:06 --> 00:02:09
So I guess you guys should
all get As, just to
38
00:02:09 --> 00:02:11
be on the safe side.
39
00:02:11 --> 00:02:18
Or let me know if you're going
to be in the Senate anyway.
40
00:02:18 --> 00:02:20
So there you go.
41
00:02:20 --> 00:02:20
Anyway.
42
00:02:20 --> 00:02:24
So alright.
43
00:02:24 --> 00:02:28
But he wasn't great in 18.085.
44
00:02:28 --> 00:02:33
OK, so you know what, this
evening I just called the
45
00:02:33 --> 00:02:38
schedules office to confirm
that it is tonight, in 54-100.
46
00:02:38 --> 00:02:44
It's not a long, difficult
exam, so see you at 7:30, or see
47
00:02:44 --> 00:02:50
you earlier at 4:00 for a
review of everything that
48
00:02:50 --> 00:02:52
you want to ask about.
49
00:02:52 --> 00:02:56
So I won't try to do an exam
review this morning; we've
50
00:02:56 --> 00:03:00
got lots to do and we're
into November already.
51
00:03:00 --> 00:03:04
So I want to finish
the discussion of the
52
00:03:04 --> 00:03:05
fast Poisson solver.
53
00:03:05 --> 00:03:10
Because it's just a neat
idea which is so simple.
54
00:03:10 --> 00:03:13
Anytime you have a
nice square grid.
55
00:03:13 --> 00:03:17
I guess the message is, anytime
you have a nice square or
56
00:03:17 --> 00:03:21
rectangular grid, you should
be thinking, use the fast
57
00:03:21 --> 00:03:23
Fourier transform somehow.
58
00:03:23 --> 00:03:29
And here it turns out that the
eigenvectors are for this
59
00:03:29 --> 00:03:36
problem, this K2D, so we're
getting this matrix K2D.
60
00:03:36 --> 00:03:40
Which is of size n squared by
n squared, so a large matrix.
61
00:03:40 --> 00:03:48
But its eigenvectors will be,
well, actually they'll be
62
00:03:48 --> 00:03:52
sine functions for the
K2D, the fixed-fixed.
63
00:03:52 --> 00:03:56
And they would be cosine
functions for the
64
00:03:56 --> 00:03:58
B2D, free-free.
65
00:03:58 --> 00:04:02
And the point is the
fast Fourier transform
66
00:04:02 --> 00:04:05
does all those fast.
67
00:04:05 --> 00:04:09
I mean, the fast Fourier
transform is set up for the
68
00:04:09 --> 00:04:15
complex exponentials as we'll
see in a couple of weeks.
69
00:04:15 --> 00:04:18
That's the most important
algorithm I could
70
00:04:18 --> 00:04:21
tell you about.
71
00:04:21 --> 00:04:26
After maybe Gaussian
elimination, I don't know.
72
00:04:26 --> 00:04:31
But my point here is that if
you can do complex exponentials
73
00:04:31 --> 00:04:38
fast, you can do sines fast,
you can do cosines fast, and
74
00:04:38 --> 00:04:44
the result will be better.
I did an operation count
75
00:04:44 --> 00:04:48
when I did elimination in the
order one, two, three, four,
76
00:04:48 --> 00:04:53
five, six, ordered
along each row.
77
00:04:53 --> 00:04:56
That gave us a matrix
that I want to look at
78
00:04:56 --> 00:04:59
again, this K2D matrix.
79
00:04:59 --> 00:05:02
If I did elimination
as it stands, I think
80
00:05:02 --> 00:05:03
the count was n^4.
81
00:05:04 --> 00:05:11
I think the count was size n
squared, I had n squared
82
00:05:11 --> 00:05:16
columns to work on and I think
each column took n squared
83
00:05:16 --> 00:05:17
steps, so I was up to n^4.
84
00:05:19 --> 00:05:25
And that's manageable.
85
00:05:25 --> 00:05:28
2-D problems, you really
can use backslash.
86
00:05:28 --> 00:05:35
But for this particular
problem, using the FFT way, it
87
00:05:35 --> 00:05:38
comes down to n squared log n.
88
00:05:38 --> 00:05:40
So that's way better.
89
00:05:40 --> 00:05:41
Way better.
90
00:05:41 --> 00:05:42
Big saving.
91
00:05:42 --> 00:05:46
In 1-D, you don't see a saving.
92
00:05:46 --> 00:05:51
But in 2-D it's a big saving,
and in 3-D an enormous saving.
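For a sense of scale, here is a tiny Python comparison of the two counts from the lecture, n^4 for elimination versus n^2 log n for the FFT solver (constants dropped from both; the grid sizes are just sample values, not from the lecture):

```python
from math import log2

# Rough operation counts (constants dropped) for solving the 2-D Poisson
# problem on an n-by-n grid of unknowns
for n in (32, 128, 512):
    elimination = n**4             # elimination in the natural row ordering
    fft_solver = n**2 * log2(n)    # FFT-based fast Poisson solver
    print(n, round(elimination / fft_solver))  # speedup grows quickly with n
```

The ratio is already in the thousands for modest grids, which is the "way better" being claimed.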
93
00:05:51 --> 00:05:57
So a lot of people would like
to try to use the FFT also when
94
00:05:57 --> 00:05:59
the region isn't a square.
95
00:05:59 --> 00:06:02
You can imagine all the
thinking that goes into this.
96
00:06:02 --> 00:06:05
But we'll focus on the
one where it's a square.
97
00:06:05 --> 00:06:07
Oh, one thing more to add.
98
00:06:07 --> 00:06:11
I spoke about reordering
the unknowns.
99
00:06:11 --> 00:06:13
But you probably were
wondering, OK, what's that
100
00:06:13 --> 00:06:16
about because I only
just referred to it.
101
00:06:16 --> 00:06:23
Let me suggest an ordering and
you can tell me what the matrix
102
00:06:23 --> 00:06:26
K2D will look like
in this ordering.
103
00:06:26 --> 00:06:30
I'll call it the red
black ordering.
104
00:06:30 --> 00:06:33
I guess with the election just
behind us I could call it
105
00:06:33 --> 00:06:34
the red blue ordering.
106
00:06:34 --> 00:06:40
OK, red blue ordering, OK, so
the ordering, instead of
107
00:06:40 --> 00:06:46
ordering it by, let me draw
this again, I'll put in a
108
00:06:46 --> 00:06:52
couple more, make n=4 here, so
now I've got 16, n
109
00:06:52 --> 00:06:54
squared is now 16.
110
00:06:54 --> 00:06:57
So here's the idea.
111
00:06:57 --> 00:07:03
The red blue ordering is
like a checkerboard.
112
00:07:03 --> 00:07:05
The checkerboard ordering,
you could think of.
113
00:07:05 --> 00:07:09
This will be one, this'll
be two, this'll be three,
114
00:07:09 --> 00:07:13
this'll be four, five,
six, seven, eight.
115
00:07:13 --> 00:07:17
So I've numbered
from one to eight.
116
00:07:17 --> 00:07:21
And now nine to 16 will
be the other guys.
117
00:07:21 --> 00:07:26
These will be nine, ten,
11, 12, 13, 14, 15, 16.
118
00:07:26 --> 00:07:33
So suppose I do that,
what will K2D look like?
119
00:07:33 --> 00:07:36
Maybe you could, it's
a neat example.
120
00:07:36 --> 00:07:40
It gives us one more chance
to look at this matrix.
121
00:07:40 --> 00:07:44
So K2D is always going to
have fours on the diagonal.
122
00:07:44 --> 00:07:46
So I have 16 fours.
123
00:07:46 --> 00:07:51
Whatever order I take, the
equation at a typical point
124
00:07:51 --> 00:07:55
like say this one, that's
point number one, two.
125
00:07:55 --> 00:07:59
That's point number three, at a
point number three I have my
126
00:07:59 --> 00:08:02
five point molecule with
a four in the middle.
127
00:08:02 --> 00:08:06
So it'd be a four in
this, on that position.
128
00:08:06 --> 00:08:10
And then I have the four
guys around it, the minus
129
00:08:10 --> 00:08:13
ones, and where will they
appear in the matrix?
130
00:08:13 --> 00:08:14
With this ordering.
131
00:08:14 --> 00:08:18
So you get to see the
point of an ordering.
132
00:08:18 --> 00:08:21
So this was point one, point
two, point three, I'm
133
00:08:21 --> 00:08:24
focusing on point three.
134
00:08:24 --> 00:08:28
It's got a four on the diagonal
and then it's got a minus one,
135
00:08:28 --> 00:08:30
a minus one, a minus one, a
minus one, where
136
00:08:30 --> 00:08:32
do they appear?
137
00:08:32 --> 00:08:34
Well, all those guys
are what color?
138
00:08:34 --> 00:08:36
They're all color blue.
139
00:08:36 --> 00:08:37
Right?
140
00:08:37 --> 00:08:38
All the neighbors
are colored blue.
141
00:08:38 --> 00:08:42
So I won't get to them until
I've gone through the red,
142
00:08:42 --> 00:08:45
the eight reds, and then
I come back to nine.
143
00:08:45 --> 00:08:49
Whatever, 11, 12, 13.
144
00:08:49 --> 00:08:53
The point is they'll
all be over here.
145
00:08:53 --> 00:09:03
So this is like, you see that
the only nodes that are next
146
00:09:03 --> 00:09:05
to a red node are all blue.
147
00:09:05 --> 00:09:08
And the only nodes next
to a blue node are red.
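That checkerboard claim is easy to verify in NumPy. This sketch builds K2D with the Kronecker-product construction that comes up later in the lecture, uses n = 4 as on the board, and checks that after the red-blue permutation every red row has its minus ones only in blue columns:

```python
import numpy as np

n = 4
K = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D second differences
K2D = np.kron(np.eye(n), K) + np.kron(K, np.eye(n))  # 16x16, row-by-row order

# Red-blue (checkerboard) permutation: the eight reds first, then the blues
idx = [(i, j) for i in range(n) for j in range(n)]
red = [i*n + j for i, j in idx if (i + j) % 2 == 0]
blue = [i*n + j for i, j in idx if (i + j) % 2 == 1]
P = red + blue
A = K2D[np.ix_(P, P)]

# Every neighbor of a red node is blue, so the red-red and blue-blue
# blocks are just 4I; all the minus ones land in the off-diagonal blocks
assert np.allclose(A[:8, :8], 4*np.eye(8))
assert np.allclose(A[8:, 8:], 4*np.eye(8))
```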
148
00:09:08 --> 00:09:14
So this is, sorry, filled in a
little bit; that's where I've
149
00:09:14 --> 00:09:17
got sort of four diagonals
of minus ones or something.
150
00:09:17 --> 00:09:19
But there you go.
151
00:09:19 --> 00:09:22
That gives you an idea of what
a different ordering could be.
152
00:09:22 --> 00:09:26
And you see what will happen
now with elimination?
153
00:09:26 --> 00:09:29
Elimination is completed
already on this stuff.
154
00:09:29 --> 00:09:33
So all the, with elimination
you want to push the
155
00:09:33 --> 00:09:36
non-zeroes way to the end.
156
00:09:36 --> 00:09:42
That's sort of like the
central idea is, it's kind
157
00:09:42 --> 00:09:43
of a greedy approach.
158
00:09:43 --> 00:09:45
Greedy algorithm.
159
00:09:45 --> 00:09:49
Use up zeroes while
the sun's shining.
160
00:09:49 --> 00:09:54
And then near the end, where
the problem is elimination's
161
00:09:54 --> 00:09:56
reduced it to a much smaller
problem, then you've
162
00:09:56 --> 00:09:58
got some work to do.
163
00:09:58 --> 00:10:01
So that's what
would happen here.
164
00:10:01 --> 00:10:06
By the way there's a movie,
on the 18.086 page.
165
00:10:06 --> 00:10:09
I hope some of you guys will
consider taking 18.086.
166
00:10:09 --> 00:10:13
It's a smaller class, people
do projects like creating
167
00:10:13 --> 00:10:16
this elimination movie.
168
00:10:16 --> 00:10:18
So that would be
math.mit.edu/18.086.
169
00:10:23 --> 00:10:30
And that page has a movie
of different orderings.
170
00:10:30 --> 00:10:32
To try to figure out
what's the absolute best
171
00:10:32 --> 00:10:36
ordering with the absolute
least fill-in is, that's
172
00:10:36 --> 00:10:39
one of these NP-Hard
combinatorial problems.
173
00:10:39 --> 00:10:48
But to get an ordering that's
much better than by rows is
174
00:10:48 --> 00:10:53
not only doable but
should be done.
175
00:10:53 --> 00:10:56
OK, so that's the ordering
for elimination.
176
00:10:56 --> 00:10:58
As this is for elimination.
177
00:10:58 --> 00:11:01
OK.
178
00:11:01 --> 00:11:03
So that's a whole
world of its own.
179
00:11:03 --> 00:11:07
Ordering the thing for
Gaussian elimination.
180
00:11:07 --> 00:11:15
My focus in this section 3.5,
so this is Section 3.5 of the
181
00:11:15 --> 00:11:20
book and then this is Section
3.6, which is our major one.
182
00:11:20 --> 00:11:21
That's our big deal.
183
00:11:21 --> 00:11:26
But my focus in 3.5 is
first to try to see
184
00:11:26 --> 00:11:28
what the 2-D matrix is.
185
00:11:28 --> 00:11:31
I mean, a big part of this
course, I think, a big part of
186
00:11:31 --> 00:11:35
what I can do for you, is to,
like, get comfortable
187
00:11:35 --> 00:11:36
with matrices.
188
00:11:36 --> 00:11:39
Sort of see what do you look
at when you see a matrix?
189
00:11:39 --> 00:11:40
What's its shape?
190
00:11:40 --> 00:11:43
What are its properties?
191
00:11:43 --> 00:11:48
And so this K2D is our first
example of a 2-D matrix, and
192
00:11:48 --> 00:11:50
it's highly structured.
193
00:11:50 --> 00:11:54
The point is we'll have
K2D matrices out of
194
00:11:54 --> 00:11:55
finite elements.
195
00:11:55 --> 00:11:59
But the finite elements might
be, well, they could be those.
196
00:11:59 --> 00:12:01
This could be a finite
element picture.
197
00:12:01 --> 00:12:08
And then the finite element
matrix on such a regular mesh
198
00:12:08 --> 00:12:11
would be quite structured too.
199
00:12:11 --> 00:12:16
But if I cut them all up into
triangles and I have a curved
200
00:12:16 --> 00:12:21
region and everything, an
unstructured mesh, then I'll
201
00:12:21 --> 00:12:25
still have the good properties,
this will still be symmetric
202
00:12:25 --> 00:12:27
positive definite,
all the good stuff.
203
00:12:27 --> 00:12:31
But the FFT won't come in.
204
00:12:31 --> 00:12:34
OK, so now I want to take
a look again at this
205
00:12:34 --> 00:12:38
K2D matrix, OK?
206
00:12:38 --> 00:12:42
So one way to describe the K2D
matrix is the way I did last
207
00:12:42 --> 00:12:45
time, to kind of write down the
typical row, typical
208
00:12:45 --> 00:12:46
row in the middle.
209
00:12:46 --> 00:12:50
It's got a four on the
diagonal and four minus ones.
210
00:12:50 --> 00:12:55
And here we've re-ordered it so
the four minus ones came off.
211
00:12:55 --> 00:12:56
Came much later.
212
00:12:56 --> 00:13:00
But either way, that's what
the matrix looks like.
213
00:13:00 --> 00:13:03
Let me go back to
this ordering.
214
00:13:03 --> 00:13:08
And let's get 2-D, another
better, clearer look at K2D.
215
00:13:09 --> 00:13:19
I want to construct K2D from K.
216
00:13:19 --> 00:13:20
Which is this K1D.
217
00:13:22 --> 00:13:26
I don't use that name,
but here it is.
218
00:13:26 --> 00:13:29
Let me just show it to you.
219
00:13:29 --> 00:13:34
OK, so let me take the matrix
that does the second
220
00:13:34 --> 00:13:37
differences in the x direction.
221
00:13:37 --> 00:13:42
This'll be an n squared
by n squared matrix.
222
00:13:42 --> 00:13:45
And it'll take second
differences on every row.
223
00:13:45 --> 00:13:47
Let me just write
in what it'll be.
224
00:13:47 --> 00:13:50
It'll be K, it'll
be a block matrix.
225
00:13:50 --> 00:13:53
K, K, n blocks.
226
00:13:53 --> 00:13:55
Each n by n.
227
00:13:55 --> 00:13:59
You see that that will be the
second difference matrix?
228
00:13:59 --> 00:14:03
K takes the second
differences along a row.
229
00:14:03 --> 00:14:06
This K will take the second x
differences along the next row.
230
00:14:06 --> 00:14:09
Row by row, simple.
231
00:14:09 --> 00:14:11
So this is the u_xx part.
232
00:14:11 --> 00:14:18
Now what will the u_yy part
be, the second y derivative?
233
00:14:18 --> 00:14:23
That gives second differences
up the columns, right?
234
00:14:23 --> 00:14:27
So can I see what that matrix
will look like, second
235
00:14:27 --> 00:14:29
differences up the columns?
236
00:14:29 --> 00:14:32
Well, I think it will
look like this.
237
00:14:32 --> 00:14:35
It will have twos
on the diagonal.
238
00:14:35 --> 00:14:39
2I, 2I, 2I, this is second
differences, right?
239
00:14:39 --> 00:14:41
Down to 2I.
240
00:14:43 --> 00:14:46
And next to it will be a minus
the identity - let me write
241
00:14:46 --> 00:14:48
it, and you see if you
think it looks good.
242
00:14:48 --> 00:14:50
So minus the identity.
243
00:14:50 --> 00:14:54
It's like a blown up K.
244
00:14:54 --> 00:14:55
Somehow.
245
00:14:55 --> 00:14:56
Right?
246
00:14:56 --> 00:15:01
I have, do you see, it's
every row has got a two
247
00:15:01 --> 00:15:03
and two minus ones.
248
00:15:03 --> 00:15:06
That's from the minus one, the
two and the minus one in the
249
00:15:06 --> 00:15:08
vertical second difference.
250
00:15:08 --> 00:15:12
But you see how the count,
you see how the numbering
251
00:15:12 --> 00:15:14
changed it from here?
252
00:15:14 --> 00:15:17
Here the neighbors were right
next to each other, because
253
00:15:17 --> 00:15:19
we're ordering by rows.
254
00:15:19 --> 00:15:23
Here I have to wait a whole
n to get the guy above.
255
00:15:23 --> 00:15:28
And I'm also n away
from the guy below.
256
00:15:28 --> 00:15:33
So this is the two minus
one minus one centered
257
00:15:33 --> 00:15:35
there, above and below.
258
00:15:35 --> 00:15:39
Do you see that?
259
00:15:39 --> 00:15:41
If you think about a
little it's not sort
260
00:15:41 --> 00:15:43
of difficult to see.
261
00:15:43 --> 00:15:48
And I guess the thing I want
also to do, is to tell you that
262
00:15:48 --> 00:15:52
there's a neat little MATLAB
command, or neat math idea
263
00:15:52 --> 00:15:55
really, and they just made a
MATLAB command out of it,
264
00:15:55 --> 00:16:02
that produces this matrix
and this one out of K.
265
00:16:02 --> 00:16:03
Can I tell you what this is?
266
00:16:03 --> 00:16:12
It's something you don't see in
typical linear algebra courses.
267
00:16:12 --> 00:16:19
So I'm constructing K2D from
K1D by, this is called
268
00:16:19 --> 00:16:20
a Kronecker product.
269
00:16:20 --> 00:16:27
It's named after a guy, some
dead German, or sometimes
270
00:16:27 --> 00:16:33
called tensor product.
271
00:16:33 --> 00:16:39
The point is, this is always
the simplest thing to do in
272
00:16:39 --> 00:16:41
two dimensions or three
dimensions, or so on.
273
00:16:41 --> 00:16:46
Is like product of 1-D
things, like copy the
274
00:16:46 --> 00:16:48
1-D idea both ways.
275
00:16:48 --> 00:16:50
That's all this thing is doing.
276
00:16:50 --> 00:16:53
Copying the 1-D idea in
the x direction and
277
00:16:53 --> 00:16:55
in the y direction.
278
00:16:55 --> 00:17:00
So the MATLAB command,
I've got two pieces here.
279
00:17:00 --> 00:17:05
For this first piece it's
kron, named after Kronecker,
280
00:17:05 --> 00:17:08
of - now, let's see.
281
00:17:08 --> 00:17:13
I'm going to put two n by n
matrices, and Kronecker product
282
00:17:13 --> 00:17:15
is going to be a matrix, a
giant matrix, of
283
00:17:15 --> 00:17:16
size n squared.
284
00:17:16 --> 00:17:17
Like these.
285
00:17:17 --> 00:17:20
OK, so I want to write the
right thing in there.
286
00:17:20 --> 00:17:28
I think the right thing
is the identity and K.
287
00:17:28 --> 00:17:31
OK, and so I have to explain
what this tensor product,
288
00:17:31 --> 00:17:32
Kronecker product, is.
289
00:17:32 --> 00:17:38
And this guy happens to be
also a Kronecker product.
290
00:17:38 --> 00:17:43
But it's K and I.
291
00:17:43 --> 00:17:46
So I'm just like
mentioning here.
292
00:17:46 --> 00:17:52
That because if you have 2-D
problems and 3-D problems and
293
00:17:52 --> 00:17:55
they're on a nice square grid
so you can like just take
294
00:17:55 --> 00:18:00
products of things, this is
what you want to know about.
295
00:18:00 --> 00:18:00
OK.
296
00:18:00 --> 00:18:03
So what is this
Kronecker product?
297
00:18:03 --> 00:18:07
Kronecker product says
take the first matrix, I.
298
00:18:07 --> 00:18:10
It's n by n.
299
00:18:10 --> 00:18:13
Just n by n, that's
the identity.
300
00:18:13 --> 00:18:19
And now take this K and
multiply every one of those
301
00:18:19 --> 00:18:24
numbers, these are all numbers,
let me put more numbers in,
302
00:18:24 --> 00:18:26
so I've got 16 numbers.
303
00:18:26 --> 00:18:31
This is going to work for
this mesh that has four
304
00:18:31 --> 00:18:33
guys in each direction.
305
00:18:33 --> 00:18:35
And four directions.
306
00:18:35 --> 00:18:36
Four levels.
307
00:18:36 --> 00:18:42
So I is four by four, and K is
four by four and now, each
308
00:18:42 --> 00:18:45
number here gets
multiplied by K.
309
00:18:45 --> 00:18:50
So because of all those zeroes
I get just K, K, K, K.
310
00:18:50 --> 00:18:53
Which is exactly what I wanted.
311
00:18:53 --> 00:18:57
So that Kronecker product just
in one step you've created an n
312
00:18:57 --> 00:19:00
squared by n squared matrix
that you really would not want
313
00:19:00 --> 00:19:05
to type in entry by entry.
314
00:19:05 --> 00:19:07
That would be a horrible idea.
315
00:19:07 --> 00:19:11
OK, and if I follow the same
principle here, I'll take the
316
00:19:11 --> 00:19:16
Kronecker product here, as I
start with a matrix K, so I
317
00:19:16 --> 00:19:20
start with two, minus one,
minus one, two, minus one,
318
00:19:20 --> 00:19:25
minus one, two, minus
one, minus one, two.
319
00:19:25 --> 00:19:30
That's my K, and now what's
the Kronecker idea?
320
00:19:30 --> 00:19:36
Each of those numbers gets
multiplied by this matrix.
321
00:19:36 --> 00:19:41
So that two becomes 2I, minus
one becomes minus I, minus one
322
00:19:41 --> 00:19:44
becomes minus I, it all clicks.
323
00:19:44 --> 00:19:53
And it's producing exactly the
u_yy part, the second part.
324
00:19:53 --> 00:19:57
So that's the good
construction thing.
325
00:19:57 --> 00:19:58
Just to tell you about.
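In NumPy, whose `kron` mirrors MATLAB's, the whole construction is two lines. A small sketch with n = 4, matching the board example:

```python
import numpy as np

n = 4
K = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1-D fixed-fixed K
I = np.eye(n)

Kx = np.kron(I, K)   # block diagonal: K, K, ..., K  (x differences, u_xx)
Ky = np.kron(K, I)   # 2I on the diagonal, -I blocks (y differences, u_yy)
K2D = Kx + Ky        # n^2 by n^2, fours on the diagonal

# The structure claimed in the lecture: symmetric, fours on the diagonal
assert np.allclose(np.diag(K2D), 4)
assert np.allclose(K2D, K2D.T)
```

This is exactly the matrix you would not want to type in entry by entry.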
326
00:19:58 --> 00:20:04
And the beauty is, this is just
like cooked up, set up for
327
00:20:04 --> 00:20:05
separation of variables.
328
00:20:05 --> 00:20:08
If I want to know the
eigenvalues and the
329
00:20:08 --> 00:20:16
eigenvectors of this
thing, I start by knowing
330
00:20:16 --> 00:20:20
the eigenvalues and
eigenvectors of K.
331
00:20:20 --> 00:20:22
Can you remind me
what those are?
332
00:20:22 --> 00:20:26
We should remember them.
333
00:20:26 --> 00:20:30
So, anybody remember
eigenvectors of K?
334
00:20:30 --> 00:20:34
Well, this is going back
to Section 1.5, way
335
00:20:34 --> 00:20:35
early in the semester.
336
00:20:35 --> 00:20:41
And there's a lot of writing
involved and I probably
337
00:20:41 --> 00:20:43
didn't do it all.
338
00:20:43 --> 00:20:47
But let me remind
you of the idea.
339
00:20:47 --> 00:20:50
We guessed those eigenvectors.
340
00:20:50 --> 00:20:52
And how did we guess them?
341
00:20:52 --> 00:20:57
We guessed them by comparing
this difference equation to
342
00:20:57 --> 00:20:59
the differential equation.
343
00:20:59 --> 00:21:02
So we're in 1-D now, I'm
just reminding you of what
344
00:21:02 --> 00:21:03
we did a long time ago.
345
00:21:03 --> 00:21:06
OK, so what did we do?
346
00:21:06 --> 00:21:10
I want to know the
eigenvectors of these guys.
347
00:21:10 --> 00:21:12
So I better know the
eigenvectors of
348
00:21:12 --> 00:21:14
the 1-D one first.
349
00:21:14 --> 00:21:17
OK, so what did we do in 1-D?
350
00:21:17 --> 00:21:23
We found the eigenvectors of
K by looking first at the
351
00:21:23 --> 00:21:28
differential equation
-u'' = lambda*u with, and
352
00:21:28 --> 00:21:34
remember we're talking about
the fixed-fixed case here.
353
00:21:34 --> 00:21:37
And anybody remember the
eigenvectors of this guy,
354
00:21:37 --> 00:21:40
eigenfunctions I guess
I should say for those?
355
00:21:40 --> 00:21:44
So sines and cosines
look good here, right?
356
00:21:44 --> 00:21:46
Because you take their second
derivative, it brings
357
00:21:46 --> 00:21:48
down the constant.
358
00:21:48 --> 00:21:52
And then do I want
sines or cosines?
359
00:21:52 --> 00:21:55
Looking at the boundary
conditions, that's going
360
00:21:55 --> 00:21:56
to tell me everything.
361
00:21:56 --> 00:22:00
I want sines, cosines are wiped
out by this first condition
362
00:22:00 --> 00:22:02
that u(0) should be zero.
363
00:22:02 --> 00:22:08
And then the sine has to come
back to zero at one, so the
364
00:22:08 --> 00:22:17
eigenfunctions were u equals
the sine of something, I think.
365
00:22:17 --> 00:22:21
I want a multiple of
pi, right? pi*k*x.
366
00:22:21 --> 00:22:24
367
00:22:24 --> 00:22:26
I think that would be right.
368
00:22:26 --> 00:22:30
Because at x=1, that's
come back to zero.
369
00:22:30 --> 00:22:35
And the eigenvalue of lambda,
if I just plug that in, two
370
00:22:35 --> 00:22:38
derivatives bring
down pi*k twice.
371
00:22:38 --> 00:22:41
And with a minus, and
that cancels that minus.
372
00:22:41 --> 00:22:45
So the eigenvalues were
pi squared, k squared.
373
00:22:45 --> 00:22:50
And that's the beautiful
construction for
374
00:22:50 --> 00:22:53
differential equations.
375
00:22:53 --> 00:22:56
And these eigenfunctions
are a complete set,
376
00:22:56 --> 00:22:57
it's all wonderful.
377
00:22:57 --> 00:23:03
So that's a model problem of
what classical applied math
378
00:23:03 --> 00:23:05
does for Bessel's equation,
Legendre's equation, a
379
00:23:05 --> 00:23:09
whole long list of things.
380
00:23:09 --> 00:23:13
Now, note all those equations
that I just mentioned would
381
00:23:13 --> 00:23:15
have finite difference analogs.
382
00:23:15 --> 00:23:20
But to my knowledge, it's only
this one, this simplest, best
383
00:23:20 --> 00:23:24
one of all, we could call
it the Fourier equation if
384
00:23:24 --> 00:23:26
we needed a name for it.
385
00:23:26 --> 00:23:27
I just thought of that.
386
00:23:27 --> 00:23:28
That sounds good to me.
387
00:23:28 --> 00:23:29
Fourier equation.
388
00:23:29 --> 00:23:30
OK.
389
00:23:30 --> 00:23:35
So what's great about this one
is that the eigenvectors in the
390
00:23:35 --> 00:23:41
matrix case are just found
by sampling these functions.
391
00:23:41 --> 00:23:44
You're right on the dot if you
just sample these functions.
392
00:23:44 --> 00:23:49
So, for K, the eigenvectors,
what do I call eigenvectors?
393
00:23:49 --> 00:23:53
Maybe y's, I think I
sometimes call them y.
394
00:23:53 --> 00:24:00
So a typical eigenvector y
would be a sine vector.
395
00:24:00 --> 00:24:08
I'm sampling it at, let's see,
can I use h for this step size?
396
00:24:08 --> 00:24:10
So h is 1/(n+1).
397
00:24:10 --> 00:24:14
398
00:24:14 --> 00:24:17
As we saw in the
past. h is 1/(n+1).
399
00:24:18 --> 00:24:25
So a typical eigenvector would
sample this thing at h, 2h,
400
00:24:25 --> 00:24:32
sin(pi*k*h), sin(2*pi*k*h), and so
on, dot dot dot dot.
401
00:24:32 --> 00:24:34
And that would be
an eigenvector.
402
00:24:34 --> 00:24:37
That would be the eigenvector
number k, actually.
403
00:24:37 --> 00:24:42
And do you see that that sort
of eigenvector is well set up.
404
00:24:42 --> 00:24:46
It's going to work because
it ends, where does it end?
405
00:24:46 --> 00:24:50
Who's the last person, the last
component of this eigenvector?
406
00:24:50 --> 00:24:52
It's sin(N*pi*k*h).
407
00:24:52 --> 00:24:57
408
00:24:57 --> 00:25:00
Sorry about all the symbols,
but compared to most
409
00:25:00 --> 00:25:03
eigenvectors this is,
like, the greatest.
410
00:25:03 --> 00:25:06
OK, now why do I like that?
411
00:25:06 --> 00:25:11
Because the pattern, what would
be the next one if there
412
00:25:11 --> 00:25:13
was an n plus first?
413
00:25:13 --> 00:25:18
If there was an n plus first
component, what would it be?
414
00:25:18 --> 00:25:19
What would be the
sin((N+1)*pi*k*h)?
415
00:25:19 --> 00:25:23
416
00:25:23 --> 00:25:25
That's the golden question.
417
00:25:25 --> 00:25:29
What would be, I'm trying to
say that these have a nice
418
00:25:29 --> 00:25:34
pattern because the guy that's
the next person here would be
419
00:25:34 --> 00:25:40
sin((N+1)*pi*k*h),
which is what?
420
00:25:40 --> 00:25:42
So N+1 is in there.
421
00:25:42 --> 00:25:45
And h is in there, so what
do I get from those guys?
422
00:25:45 --> 00:25:53
The (N+1)h is just one,
right? (N+1)h carries me
423
00:25:53 --> 00:25:55
all the way to the end.
424
00:25:55 --> 00:25:58
That's a guy that's out, it's
not in our matrix because
425
00:25:58 --> 00:26:01
that's the part, that's a
known one, that's a zero in a
426
00:26:01 --> 00:26:03
fixed boundary condition.
427
00:26:03 --> 00:26:09
So like the next guy would
be sin((N+1)*pi*k*h); (N+1)h is
428
00:26:09 --> 00:26:10
one. sin(pi*k).
429
00:26:11 --> 00:26:12
And what is sin(pi*k)?
430
00:26:12 --> 00:26:14
431
00:26:14 --> 00:26:17
The sin(pi*k) is always?
432
00:26:17 --> 00:26:18
Zero.
433
00:26:18 --> 00:26:23
So we're getting it right, like
the guy that got knocked off
434
00:26:23 --> 00:26:25
following this pattern
will be zero.
435
00:26:25 --> 00:26:29
And what about the guy that
was knocked off of that end?
436
00:26:29 --> 00:26:32
If this was sin(2*pi*k*h), and
sin(pi*k*h), this would be
437
00:26:32 --> 00:26:35
sin(0*pi*k*h), which would be?
438
00:26:35 --> 00:26:37
Also zero, sin(0).
439
00:26:38 --> 00:26:44
So, anyway you could check just
by, it takes a little patience
440
00:26:44 --> 00:26:50
with trig identities, if I
multiply K by that vector, the
441
00:26:50 --> 00:26:54
pattern keeps going, the two
minus one, minus one, you know
442
00:26:54 --> 00:26:57
at a typical point I'm going to
have two of these, minus one of
443
00:26:57 --> 00:26:59
these, minus one of these.
444
00:26:59 --> 00:27:01
And it'll look good.
445
00:27:01 --> 00:27:05
And when I get to the end I'll
have two minus one of these.
446
00:27:05 --> 00:27:08
The pattern will still hold,
because the minus one of
447
00:27:08 --> 00:27:10
these I can say is there.
448
00:27:10 --> 00:27:13
But it is a zero, so it's OK.
449
00:27:13 --> 00:27:14
And similarly here.
450
00:27:14 --> 00:27:19
Anyway, the pattern's good
and the eigenvalue is?
451
00:27:19 --> 00:27:20
Something.
452
00:27:20 --> 00:27:22
OK, it has some formula.
453
00:27:22 --> 00:27:25
It's not going to be
exactly this guy.
454
00:27:25 --> 00:27:27
Because we're taking
differences of sines and
455
00:27:27 --> 00:27:29
not derivatives of sines.
456
00:27:29 --> 00:27:31
So it has some formula.
457
00:27:31 --> 00:27:35
Tell me what you would
know about lambda.
458
00:27:35 --> 00:27:37
I'm not going to ask you
for the formula, I'm
459
00:27:37 --> 00:27:38
going to write it down.
460
00:27:38 --> 00:27:42
Tell me, before I write it
down, or I'll write it down
461
00:27:42 --> 00:27:43
and then you can tell me.
462
00:27:43 --> 00:27:45
I have two on the diagonal,
so that's just going
463
00:27:45 --> 00:27:47
to give me a two.
464
00:27:47 --> 00:27:49
And then the guy on the left
and the guy on the right,
465
00:27:49 --> 00:27:54
two, two with minus ones
I think gives us two.
466
00:27:54 --> 00:27:57
I think it turns out
to be 2 - 2*cos(k*pi*h).
467
00:27:57 --> 00:28:00
468
00:28:00 --> 00:28:03
A k has to be in there, a pi
has to be in there, and h.
469
00:28:03 --> 00:28:07
I think that'll be it.
470
00:28:07 --> 00:28:09
I think that's the eigenvalue.
471
00:28:09 --> 00:28:13
That's the k'th eigenvalue to
go with this eigenvector.
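That eigenvalue formula is easy to check numerically without the trig identities. A NumPy sketch, with n = 4 and h = 1/(n+1) as above:

```python
import numpy as np

n = 4
h = 1.0/(n + 1)
K = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

for k in range(1, n + 1):
    # eigenvector k: the sampled sine, sin(pi*k*h), sin(2*pi*k*h), ...
    y = np.sin(np.arange(1, n + 1) * np.pi * k * h)
    lam = 2 - 2*np.cos(np.pi * k * h)   # the claimed k'th eigenvalue
    assert np.allclose(K @ y, lam * y)  # K y = lambda y checks out
    assert lam > 0                      # all positive: K is positive definite
```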
472
00:28:13 --> 00:28:16
What do you notice about
two minus two times the
473
00:28:16 --> 00:28:19
cosine of something?
474
00:28:19 --> 00:28:20
What is it?
475
00:28:20 --> 00:28:24
Positive, negative,
zero, what's up?
476
00:28:24 --> 00:28:28
Two minus two times
a cosine is always?
477
00:28:28 --> 00:28:29
Positive.
478
00:28:29 --> 00:28:34
What does that tell us about
K that we already knew?
479
00:28:34 --> 00:28:35
It's positive definite, right?
480
00:28:35 --> 00:28:38
All eigenvalues, here we
actually know what they all
481
00:28:38 --> 00:28:42
are, they're all positive.
482
00:28:42 --> 00:28:43
I'm never going to
get zero here.
483
00:28:43 --> 00:28:53
These k*pi*h's are not hitting
the ones where the cosine is
484
00:28:53 --> 00:28:57
one and, of course,
you can imagine.
485
00:28:57 --> 00:28:59
So I started this sentence
and realized I should
486
00:28:59 --> 00:29:02
add another sentence.
487
00:29:02 --> 00:29:04
These are all positive.
488
00:29:04 --> 00:29:05
We expected that.
489
00:29:05 --> 00:29:08
We knew that K was a
positive definite matrix.
490
00:29:08 --> 00:29:11
All its eigenvalues are
positive, and now we actually
491
00:29:11 --> 00:29:13
know what they are.
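[A quick numerical check of that formula, added here as an editor's sketch in NumPy rather than the course's MATLAB; n = 6 is an arbitrary size:]

```python
import numpy as np

# Check (not lecture code): the fixed-fixed second-difference matrix K
# of size n has eigenvalues 2 - 2*cos(k*pi*h), h = 1/(n+1), all positive.
n = 6
h = 1.0 / (n + 1)
K = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

computed = np.sort(np.linalg.eigvalsh(K))
formula = np.sort(2 - 2*np.cos(np.arange(1, n + 1) * np.pi * h))

assert np.allclose(computed, formula)
assert computed.min() > 0          # positive definite, as expected
```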
492
00:29:13 --> 00:29:19
And if I had the matrix B
instead for a free-free one?
493
00:29:19 --> 00:29:22
Then this formula would
change a little bit.
494
00:29:22 --> 00:29:28
And what would be different
about B, the free-free matrix?
495
00:29:28 --> 00:29:31
Its eigenvectors would be
maybe at half angles or
496
00:29:31 --> 00:29:33
some darned thing happened.
497
00:29:33 --> 00:29:36
And so we get something
slightly different here.
498
00:29:36 --> 00:29:40
Maybe h is 1/n for
that, I've forgotten.
499
00:29:40 --> 00:29:44
And what's the deal
with the free-free K?
500
00:29:44 --> 00:29:48
One of the eigenvalues is zero.
501
00:29:48 --> 00:29:53
The free-free is the positive
semi-definite example.
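[To see that zero eigenvalue concretely, here is a small NumPy check, again an editor's addition and not lecture code, on the free-free matrix B:]

```python
import numpy as np

# Free-free second-difference matrix B: like K but with 1's in the corners.
n = 6
B = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
B[0, 0] = B[-1, -1] = 1

lam = np.linalg.eigvalsh(B)        # ascending order
# Smallest eigenvalue is 0: the all-ones vector is in the null space,
# so B is only positive semi-definite.
assert abs(lam[0]) < 1e-10
assert np.allclose(B @ np.ones(n), 0)
```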
502
00:29:53 --> 00:30:00
OK, that was like a quick
review of stuff we did earlier.
503
00:30:00 --> 00:30:04
And so I'm coming back to that
early point in the book,
504
00:30:04 --> 00:30:11
because of this great fact that
my eigenvectors are sines.
505
00:30:11 --> 00:30:16
So the point is, my
eigenvectors being sines, that
506
00:30:16 --> 00:30:21
just lights up a light saying
use the fast Fourier transform.
507
00:30:21 --> 00:30:25
You've got a matrix
full of sine vectors.
508
00:30:25 --> 00:30:29
Your eigenvector matrix
is a sine transform.
509
00:30:29 --> 00:30:34
It's golden, so use it.
510
00:30:34 --> 00:30:37
So I'll just remember
then how, recall that.
511
00:30:37 --> 00:30:43
So what are the
eigenvectors in 2-D?
512
00:30:43 --> 00:30:47
First of all, so
let's go to 2-D.
513
00:30:47 --> 00:30:51
Let me do the
continuous one first.
514
00:30:51 --> 00:30:52
Yeah, -u_xx-u_yy=lambda*u.
515
00:30:52 --> 00:30:57
516
00:30:57 --> 00:31:00
The eigenvalue problem for
Laplace's equation now in
517
00:31:00 --> 00:31:05
2-D, and again I'm going
to make it on the square.
518
00:31:05 --> 00:31:07
This unit square.
519
00:31:07 --> 00:31:10
And I'm going to have zero
boundary conditions.
520
00:31:10 --> 00:31:13
Fixed-fixed.
521
00:31:13 --> 00:31:18
Anybody want to propose
an eigenfunction for
522
00:31:18 --> 00:31:20
the 2-D problem?
523
00:31:20 --> 00:31:25
So the 2-D one, the whole idea
is hey, this square is like a
524
00:31:25 --> 00:31:27
product of intervals somehow.
525
00:31:27 --> 00:31:32
We know the answer
in each direction.
526
00:31:32 --> 00:31:35
What do you figure,
what would be a good
527
00:31:35 --> 00:31:36
eigenfunction, u(x,y)?
528
00:31:36 --> 00:31:38
529
00:31:38 --> 00:31:41
Or eigenfunction, yeah, u(x,y).
530
00:31:43 --> 00:31:46
For this problem?
531
00:31:46 --> 00:31:49
What do you think?
532
00:31:49 --> 00:31:51
This is like this.
533
00:31:51 --> 00:31:57
The older courses on applied
math did this until you
534
00:31:57 --> 00:31:58
were blue in the face.
535
00:31:58 --> 00:32:02
Because there wasn't
finite elements and good
536
00:32:02 --> 00:32:03
stuff at that time.
537
00:32:03 --> 00:32:07
It was exact formulas.
538
00:32:07 --> 00:32:10
You wondered about exact
eigenfunctions and for this
539
00:32:10 --> 00:32:13
problem variables separate.
540
00:32:13 --> 00:32:19
And you get u(x,y) as
the product of sines.
541
00:32:19 --> 00:32:25
Of this guy, sin(pi*k*x),
times the sin(pi*l*y).
542
00:32:25 --> 00:32:30
543
00:32:30 --> 00:32:34
Well, once you have the idea
that it might look like that,
544
00:32:34 --> 00:32:37
you just plug it in to
see, does it really work?
545
00:32:37 --> 00:32:40
And what's the eigenvalue?
546
00:32:40 --> 00:32:42
So I claim that that's a good
eigenfunction and I
547
00:32:42 --> 00:32:47
need, it's got two indices,
k and l, two frequencies.
548
00:32:47 --> 00:32:50
It's got an x frequency
and a y frequency.
549
00:32:50 --> 00:32:55
So I need a double index.
k and l will go, yeah.
550
00:32:55 --> 00:32:57
All the way out to infinity
in the continuous problem.
551
00:32:57 --> 00:32:59
And what's the eigenvalue?
552
00:32:59 --> 00:33:02
Can you plug this guy in?
553
00:33:02 --> 00:33:05
What happens when
you plug that in?
554
00:33:05 --> 00:33:11
Take the second x
derivative, what happens?
555
00:33:11 --> 00:33:14
A constant comes out and
what's the constant?
556
00:33:14 --> 00:33:17
pi squared k squared.
557
00:33:17 --> 00:33:19
Then plug it into that term.
558
00:33:19 --> 00:33:21
A constant comes out again.
559
00:33:21 --> 00:33:25
What's that constant?
pi squared l squared.
560
00:33:25 --> 00:33:28
They come out with a minus and
then you already built in the
561
00:33:28 --> 00:33:31
minus, so that they've
made it a plus.
562
00:33:31 --> 00:33:37
So the lambda_kl is just pi
squared, k squared, plus
563
00:33:37 --> 00:33:41
pi squared l squared.
564
00:33:41 --> 00:33:45
You see how it's going?
565
00:33:45 --> 00:33:48
If we knew it in 1-D
now we get it in 2-D,
566
00:33:48 --> 00:33:50
practically for free.
567
00:33:50 --> 00:33:53
Just the idea of doing this.
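[That eigenvalue can be spot-checked numerically by applying a centered second difference to the product of sines; this check, with k = 3, l = 2 and a sample point picked arbitrarily, is an editor's addition:]

```python
import numpy as np

# -u_xx - u_yy applied to sin(pi*k*x)*sin(pi*l*y) should give
# pi^2*(k^2 + l^2) times the function.
k, l = 3, 2
lam = np.pi**2 * (k**2 + l**2)
u = lambda x, y: np.sin(np.pi*k*x) * np.sin(np.pi*l*y)

# Centered second differences approximate the Laplacian at one point.
x0, y0, d = 0.37, 0.61, 1e-4
lap = (u(x0 + d, y0) - 2*u(x0, y0) + u(x0 - d, y0)) / d**2 \
    + (u(x0, y0 + d) - 2*u(x0, y0) + u(x0, y0 - d)) / d**2

assert abs(-lap - lam * u(x0, y0)) < 1e-3
```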
568
00:33:53 --> 00:33:56
OK, and now I'm going to do
it for finite differences.
569
00:33:56 --> 00:34:04
So now I have K2D and I want to
ask you about its eigenvectors,
570
00:34:04 --> 00:34:05
and what do you think they are?
571
00:34:05 --> 00:34:08
Well, of course, they're
just like those.
572
00:34:08 --> 00:34:17
They're just the eigenvectors,
shall I call them z_kl.
573
00:34:17 --> 00:34:19
So these will be
the eigenvectors.
574
00:34:19 --> 00:34:29
The eigenvectors z_kl, will be,
their components, I don't even
575
00:34:29 --> 00:34:31
know the best way
to write them.
576
00:34:31 --> 00:34:35
The trouble is this matrix
is of size n squared.
577
00:34:35 --> 00:34:38
Its eigenvectors have got
n squared components,
578
00:34:38 --> 00:34:39
and what are they?
579
00:34:39 --> 00:34:41
Just you could
tell me in words.
580
00:34:41 --> 00:34:43
What do you figure are going
to be the components of the
581
00:34:43 --> 00:34:51
eigenvectors in K2D when
you know them in 1-D?
582
00:34:51 --> 00:34:54
Products, of course.
583
00:34:54 --> 00:35:00
This construction, however I
write it, is just like this.
584
00:35:00 --> 00:35:07
z_kl, a typical component:
the
585
00:35:07 --> 00:35:14
components of the eigenvectors
are products of these
586
00:35:14 --> 00:35:25
guys, like sin(k*pi*h),
something like that.
587
00:35:25 --> 00:35:28
I've got indices that I
don't want to get into.
588
00:35:28 --> 00:35:30
Just, damn it, I'll
just put that.
589
00:35:30 --> 00:35:34
Sine here or something, right.
590
00:35:34 --> 00:35:41
We could try to sort out the
indices, the truth is a kron
591
00:35:41 --> 00:35:45
operation does it for us.
592
00:35:45 --> 00:35:50
So all I'm saying is we've
got an eigenvector with n
593
00:35:50 --> 00:35:53
components in the x direction.
594
00:35:53 --> 00:35:57
We've got another one with
index l and components
595
00:35:57 --> 00:35:58
in the y direction.
596
00:35:58 --> 00:36:01
Multiply these n guys
by those n guys, you
597
00:36:01 --> 00:36:02
get n squared guys.
598
00:36:02 --> 00:36:04
Whatever order you
write them in.
599
00:36:04 --> 00:36:07
And that's the eigenvector.
600
00:36:07 --> 00:36:14
And then the eigenvalue is,
so the eigenvalue will
601
00:36:14 --> 00:36:16
be, well what do we got?
602
00:36:16 --> 00:36:22
We have a second x difference,
so it will be a 2-2cos(k*pi*h).
603
00:36:22 --> 00:36:26
604
00:36:26 --> 00:36:29
From the xx term.
605
00:36:29 --> 00:36:33
And it'll plus the other term
will be a 2-2cos(l*pi*h).
606
00:36:33 --> 00:36:38
607
00:36:38 --> 00:36:40
That's cool, yeah.
608
00:36:40 --> 00:36:47
I didn't get into writing the
details of the eigenvector,
609
00:36:47 --> 00:36:48
that's a mess.
610
00:36:48 --> 00:36:49
This is not a mess.
611
00:36:49 --> 00:36:51
This is nice.
612
00:36:51 --> 00:36:52
What do I see again?
613
00:36:52 --> 00:36:54
I see four coming
from the diagonal.
614
00:36:54 --> 00:36:59
I see two minus ones coming
from left and right.
615
00:36:59 --> 00:37:03
I see two minus ones
coming from up and below.
616
00:37:03 --> 00:37:07
And do I see a positive number?
617
00:37:07 --> 00:37:08
You bet.
618
00:37:08 --> 00:37:11
Right, this is positive,
this is positive.
619
00:37:11 --> 00:37:12
Sum is positive.
620
00:37:12 --> 00:37:15
The sum of two positive
definite matrices is
621
00:37:15 --> 00:37:17
positive definite.
622
00:37:17 --> 00:37:21
I know everything about K.
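[Here is that kron construction spelled out, an editor's NumPy sketch standing in for the course's MATLAB kron, with n = 4 arbitrary, checking that the 2-D eigenvalues really are sums of pairs of 1-D ones:]

```python
import numpy as np

# K2D = kron(I, K) + kron(K, I); its n^2 eigenvalues are
# (2 - 2cos(k*pi*h)) + (2 - 2cos(l*pi*h)) over all pairs k, l.
n = 4
h = 1.0 / (n + 1)
K = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
I = np.eye(n)
K2D = np.kron(I, K) + np.kron(K, I)

lam1d = 2 - 2*np.cos(np.arange(1, n + 1) * np.pi * h)
lam2d = np.sort((lam1d[:, None] + lam1d[None, :]).ravel())

assert np.allclose(np.sort(np.linalg.eigvalsh(K2D)), lam2d)
assert lam2d[0] > 0                 # positive definite again
```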
623
00:37:21 --> 00:37:24
And the fast Fourier
transform makes all those
624
00:37:24 --> 00:37:25
calculations possible.
625
00:37:25 --> 00:37:31
So maybe I, just to conclude
this subject, would be to
626
00:37:31 --> 00:37:39
remind you, how do you use the
eigenvalues and eigenvectors
627
00:37:39 --> 00:37:43
in solving the equation?
628
00:37:43 --> 00:37:46
I guess I'd better
put that on a board.
629
00:37:46 --> 00:37:48
But this is what
we did last time.
630
00:37:48 --> 00:37:53
So the final step would be
how do I solve (K2D)U=F?
631
00:37:53 --> 00:37:58
632
00:37:58 --> 00:38:01
You remember, what
were the three steps?
633
00:38:01 --> 00:38:05
I do really want you to
learn those three steps.
634
00:38:05 --> 00:38:08
It's the way eigenvectors
and eigenvalues are used.
635
00:38:08 --> 00:38:10
It's the whole point
of finding them.
636
00:38:10 --> 00:38:16
The whole point of finding them
is to split this problem into n
637
00:38:16 --> 00:38:21
squared little 1-D problems,
where each eigenvector is
638
00:38:21 --> 00:38:23
just doing its own thing.
639
00:38:23 --> 00:38:24
So what do you do?
640
00:38:24 --> 00:38:30
You write F as a combination
with some coefficients, c_kl
641
00:38:30 --> 00:38:32
of the eigenvectors y_kl.
642
00:38:33 --> 00:38:35
So these are vectors.
643
00:38:35 --> 00:38:38
Put an arrow over them
just to emphasize.
644
00:38:38 --> 00:38:39
Those are vectors.
645
00:38:39 --> 00:38:42
Then what's the answer,
let's skip the middle
646
00:38:42 --> 00:38:44
step which was so easy.
647
00:38:44 --> 00:38:46
Tell me the answer.
648
00:38:46 --> 00:38:49
Suppose the right-hand side is
a combination of eigenvectors.
649
00:38:49 --> 00:38:55
Then the left hand side, the
answer, is also a combination
650
00:38:55 --> 00:38:56
of eigenvectors.
651
00:38:56 --> 00:39:00
And just tell me, what
are the coefficients
652
00:39:00 --> 00:39:03
in that combination?
653
00:39:03 --> 00:39:10
This is like the whole idea
is on two simple lines here.
654
00:39:10 --> 00:39:14
The coefficient there is?
655
00:39:14 --> 00:39:18
What do I have? c_kl,
does that come in?
656
00:39:18 --> 00:39:22
Yes, because c_kl tells
me how much of this
657
00:39:22 --> 00:39:24
eigenvector is here.
658
00:39:24 --> 00:39:28
But now, what else comes
in? lambda_kl, and what
659
00:39:28 --> 00:39:30
do I do with that?
660
00:39:30 --> 00:39:34
Divide by it, because when
I multiply it'll multiply
661
00:39:34 --> 00:39:36
by it and bring back F.
662
00:39:36 --> 00:39:40
So there you go.
663
00:39:40 --> 00:39:43
That's the, this u is
a vector, of course.
664
00:39:43 --> 00:39:45
These are all vectors.
665
00:39:45 --> 00:39:52
2-D, we'll give them an arrow,
just as a reminder that
666
00:39:52 --> 00:39:55
these are vectors with
n squared components.
667
00:39:55 --> 00:39:59
They're giant vectors, but the
point is this y_kl, its n
668
00:39:59 --> 00:40:03
squared components are just
products, as we saw here,
669
00:40:03 --> 00:40:04
of the 1-D problem.
670
00:40:04 --> 00:40:09
So the fast Fourier transform,
the 2-D fast Fourier
671
00:40:09 --> 00:40:11
transform just takes off.
672
00:40:11 --> 00:40:17
Takes off and gives you
this fantastic speed.
673
00:40:17 --> 00:40:20
So that's absolutely the right
way to solve the problem.
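[The three steps can be sketched in NumPy; this editor's sketch writes the sine transform as an explicit matrix multiply so each step is visible, whereas in practice an FFT-based discrete sine transform does the same multiplications fast. The normalization and n = 8 are the editor's choices:]

```python
import numpy as np

# Solve K2D U = F by the three eigenvector steps:
# 1) expand F in the sine eigenvectors, 2) divide each coefficient
# by lambda_kl, 3) recombine.
n = 8
h = 1.0 / (n + 1)
j = np.arange(1, n + 1)
S = np.sin(np.outer(j, j) * np.pi * h)     # sine eigenvector matrix
S /= np.sqrt((S**2).sum(axis=0))           # normalize: S is now orthogonal
lam1d = 2 - 2*np.cos(j * np.pi * h)

F = np.random.default_rng(0).standard_normal((n, n))  # grid right-hand side

C = S.T @ F @ S                            # step 1: coefficients c_kl
C /= lam1d[:, None] + lam1d[None, :]       # step 2: divide by lambda_kl
U = S @ C @ S.T                            # step 3: recombine

# Check against the assembled K2D acting on the unraveled grid.
K = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
K2D = np.kron(np.eye(n), K) + np.kron(K, np.eye(n))
assert np.allclose(K2D @ U.ravel(), F.ravel())
```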
674
00:40:20 --> 00:40:23
And everybody sees
that picture?
675
00:40:23 --> 00:40:25
Don't forget that picture.
676
00:40:25 --> 00:40:30
That's like a nice part of
this subject, is to get
677
00:40:30 --> 00:40:34
formulas and not code.
678
00:40:34 --> 00:40:37
But it's easily
turned into code.
679
00:40:37 --> 00:40:41
Because the FFT and the sine
transform are all coded.
680
00:40:41 --> 00:40:45
Actually, Professor Johnson in
the Math Department, he created
681
00:40:45 --> 00:40:48
the best FFT code there is.
682
00:40:48 --> 00:40:49
Do you know that?
683
00:40:49 --> 00:40:51
I'll just mention.
684
00:40:51 --> 00:40:56
His code is called FFTW,
Fastest Fourier Transform in
685
00:40:56 --> 00:41:02
the West, and the point is
it's set up to be fast on
686
00:41:02 --> 00:41:06
whatever computer, whatever
architecture you're using.
687
00:41:06 --> 00:41:07
It figures out what that is.
688
00:41:07 --> 00:41:11
And optimizes the
code for that.
689
00:41:11 --> 00:41:13
And gives you the answer.
690
00:41:13 --> 00:41:14
OK.
691
00:41:14 --> 00:41:19
That's Part 2 complete
of Section 3.5.
692
00:41:19 --> 00:41:23
I guess I'm hoping after we get
through the quiz, the next
693
00:41:23 --> 00:41:29
natural step would be
some MATLAB, right?
694
00:41:29 --> 00:41:36
We really should have some
MATLAB case of doing this.
695
00:41:36 --> 00:41:39
I haven't thought of
it, but I'll try.
696
00:41:39 --> 00:41:41
And some MATLAB for
finite elements.
697
00:41:41 --> 00:41:49
And there's a lot of finite
element code on the course
698
00:41:49 --> 00:41:52
page, ready to download.
699
00:41:52 --> 00:41:55
But now we have to
understand it.
700
00:41:55 --> 00:41:59
Are you ready to tackle, so in
ten minutes we can get the
701
00:41:59 --> 00:42:04
central idea of finite elements
in 2-D, and then after the
702
00:42:04 --> 00:42:09
quiz, when our minds are clear
again, we'll do it
703
00:42:09 --> 00:42:13
properly Friday.
704
00:42:13 --> 00:42:17
This is asking you a
lot, but can I do it?
705
00:42:17 --> 00:42:22
Can I go to finite
elements in 2-D?
706
00:42:22 --> 00:42:25
OK.
707
00:42:25 --> 00:42:26
Oh, I have a choice.
708
00:42:26 --> 00:42:27
Big, big choice.
709
00:42:27 --> 00:42:31
I could use
triangles, or quads.
710
00:42:31 --> 00:42:33
That picture, this
picture would naturally
711
00:42:33 --> 00:42:35
set up for quads.
712
00:42:35 --> 00:42:37
Let me start with triangles.
713
00:42:37 --> 00:42:46
So I have some region with
lots of triangles here.
714
00:42:46 --> 00:42:48
Got to have some interior
points, or I'm not going
715
00:42:48 --> 00:42:51
to have any unknowns.
716
00:42:51 --> 00:42:56
Gosh, that's a big mesh
with very few unknowns.
717
00:42:56 --> 00:43:01
Let me put in another few here.
718
00:43:01 --> 00:43:02
Have I got enough?
719
00:43:02 --> 00:43:06
Oops, that's not a triangle.
720
00:43:06 --> 00:43:07
How's that?
721
00:43:07 --> 00:43:10
Is that now a triangulation?
722
00:43:10 --> 00:43:14
So that's an unstructured
triangular mesh.
723
00:43:14 --> 00:43:16
Its quality is not too bad.
724
00:43:16 --> 00:43:25
The angles, some angles are
small but not very small.
725
00:43:25 --> 00:43:29
And they're not near, that
angle's getting a little big
726
00:43:29 --> 00:43:33
too, it's getting toward
180 degrees, which you
727
00:43:33 --> 00:43:35
have to stay away from.
728
00:43:35 --> 00:43:40
If it were 180, the triangle
would squash in.
729
00:43:40 --> 00:43:42
So it's a bit
squashed, that one.
730
00:43:42 --> 00:43:44
This one's a little
long and narrow.
731
00:43:44 --> 00:43:45
But not bad.
732
00:43:45 --> 00:43:50
And you need a mesh generator
to generate a mesh like this.
733
00:43:50 --> 00:43:56
And I had a thesis student just
a few years ago who wrote a
734
00:43:56 --> 00:43:59
nice mesh generator in MATLAB.
735
00:43:59 --> 00:44:01
So we get a mesh.
736
00:44:01 --> 00:44:04
OK.
737
00:44:04 --> 00:44:09
Now, do you remember
the finite element idea?
738
00:44:09 --> 00:44:13
Well, first you have
to use the weak form.
739
00:44:13 --> 00:44:17
Let's save the weak form
for first thing Friday.
740
00:44:17 --> 00:44:20
The weak form of Laplace.
741
00:44:20 --> 00:44:22
I need the weak form of
Laplace's equation.
742
00:44:22 --> 00:44:27
I'll just tell you what it is.
743
00:44:27 --> 00:44:32
We'll have double
integrals, oh, Poisson.
744
00:44:32 --> 00:44:33
I'm making it Poisson.
745
00:44:33 --> 00:44:36
I want a right-hand side.
746
00:44:36 --> 00:44:39
Double integral,
this will be du/dx.
747
00:44:39 --> 00:44:45
748
00:44:45 --> 00:44:46
times dv/dx, the test function.
749
00:44:47 --> 00:44:51
And now what's changed in 2-D,
I'll also have a du/dy*dv/dy.
750
00:44:51 --> 00:44:56
751
00:44:56 --> 00:44:56
dx dy.
752
00:44:57 --> 00:44:58
That's what I'll get.
753
00:44:58 --> 00:45:04
And on the right-hand side
I'll just have F, the load
754
00:45:04 --> 00:45:07
times the test function, dx dy.
755
00:45:07 --> 00:45:10
756
00:45:10 --> 00:45:13
Yeah I guess, I have to write
that down because that's
757
00:45:13 --> 00:45:15
our starting point.
758
00:45:15 --> 00:45:18
Do you remember the situation,
and of course that's on the
759
00:45:18 --> 00:45:25
exam this evening, is
remembering about the weak form
760
00:45:25 --> 00:45:32
in 1-D, so in 2-D I expect to
see x's and y's, my functions
761
00:45:32 --> 00:45:34
are functions of x and y.
762
00:45:34 --> 00:45:37
I have my, this is my solution.
763
00:45:37 --> 00:45:42
The V's are all possible test
functions, and this holds for
764
00:45:42 --> 00:45:47
every, this is like
the virtual work.
765
00:45:47 --> 00:45:50
You could call it the equation
of virtual work, if you
766
00:45:50 --> 00:45:53
were a little old-fashioned.
767
00:45:53 --> 00:45:56
Those V's are virtual
displacements, they're
768
00:45:56 --> 00:45:57
test functions.
769
00:45:57 --> 00:45:59
That's what it looks like.
770
00:45:59 --> 00:46:04
I'll come back to that
at the start of Friday.
771
00:46:04 --> 00:46:07
What's the finite element idea?
772
00:46:07 --> 00:46:11
Just as in 1-D, the finite
element idea is choose some
773
00:46:11 --> 00:46:13
trial functions, right?
774
00:46:13 --> 00:46:15
Choose some trial functions.
775
00:46:15 --> 00:46:20
And our approximate guy is
going to be some combination
776
00:46:20 --> 00:46:22
of these trial functions.
777
00:46:22 --> 00:46:26
Some coefficient U_1 times the
first trial function plus so
778
00:46:26 --> 00:46:35
on, plus U_n times the
nth trial function.
779
00:46:35 --> 00:46:38
So this will be our, and
these will also be our
780
00:46:38 --> 00:46:39
test functions again.
781
00:46:39 --> 00:46:47
These will also be, the
phis are also the V's.
782
00:46:47 --> 00:46:49
I'll get an equation.
783
00:46:49 --> 00:46:53
By using n tests, by using
the weak form n times
784
00:46:53 --> 00:46:55
for these n functions.
785
00:46:55 --> 00:46:59
And so each equation comes.
786
00:46:59 --> 00:47:03
V is one of the phis, U is
this combination of phis.
787
00:47:03 --> 00:47:04
Plug it in.
788
00:47:04 --> 00:47:07
You get an equation.
789
00:47:07 --> 00:47:11
What I've got to do in three
minutes is speak about
790
00:47:11 --> 00:47:13
the most important point.
791
00:47:13 --> 00:47:14
The phis.
792
00:47:14 --> 00:47:17
Choosing the phis
is what matters.
793
00:47:17 --> 00:47:20
That's what made
finite elements win.
794
00:47:20 --> 00:47:27
The fact that I choose nice
simple functions, which are
795
00:47:27 --> 00:47:30
polynomials, little easy
polynomials on each
796
00:47:30 --> 00:47:33
element, on each triangle.
797
00:47:33 --> 00:47:35
And the ones I'm going
to speak about today
798
00:47:35 --> 00:47:37
are the linear ones.
799
00:47:37 --> 00:47:39
They're like hat
functions, right?
800
00:47:39 --> 00:47:41
But now we're in 2-D.
801
00:47:41 --> 00:47:43
They're going to be pyramids.
802
00:47:43 --> 00:47:48
So if I take, this is
unknown number one.
803
00:47:48 --> 00:47:50
Unknown number one, it's
got its hat function.
804
00:47:50 --> 00:47:52
Pyramid function, sorry.
805
00:47:52 --> 00:47:53
Pyramid function.
806
00:47:53 --> 00:47:56
So its pyramid function
is a one there, right?
807
00:47:56 --> 00:48:01
The pyramid, top of the
pyramid is over that point.
808
00:48:01 --> 00:48:06
And it goes down to zero so
it's zero at all these points.
809
00:48:06 --> 00:48:08
Well, at all the other points.
810
00:48:08 --> 00:48:16
That's the one beauty of finite
elements, is that then this U,
811
00:48:16 --> 00:48:19
this coefficient U_1 that we
compute, will actually be our
812
00:48:19 --> 00:48:21
approximation at this point.
813
00:48:21 --> 00:48:25
Because all the others
are zero at that point.
814
00:48:25 --> 00:48:28
So can you just, this is all
you have to do in the last 60
815
00:48:28 --> 00:48:32
seconds is visualize
this function.
816
00:48:32 --> 00:48:35
You see it, maybe tent would
be better than pyramid?
817
00:48:35 --> 00:48:36
Either one.
818
00:48:36 --> 00:48:39
What's the base of the pyramid?
819
00:48:39 --> 00:48:45
The base of the pyramid, I'm
thinking of the surface as a
820
00:48:45 --> 00:48:48
way to visualize this function.
821
00:48:48 --> 00:48:50
It's a function that's
one here, it goes down
822
00:48:50 --> 00:48:53
to zero linearly.
823
00:48:53 --> 00:48:57
So these are linear
triangular elements.
824
00:48:57 --> 00:48:59
What's the base of the pyramid?
825
00:48:59 --> 00:49:03
Just this, right?
826
00:49:03 --> 00:49:05
That's the base.
827
00:49:05 --> 00:49:10
Because over in these
other triangles, nothing
828
00:49:10 --> 00:49:11
ever got off zero.
829
00:49:11 --> 00:49:13
Nothing got off the ground.
830
00:49:13 --> 00:49:15
It's zero, zero,
zero at all three.
831
00:49:15 --> 00:49:18
If I know it at all three
corners, I know it everywhere.
832
00:49:18 --> 00:49:22
For linear functions, it's
just a piece of flat roof.
833
00:49:22 --> 00:49:23
Piece of flat roof.
834
00:49:23 --> 00:49:25
So I have how many pieces have
I got around this point?
835
00:49:25 --> 00:49:27
Oh, where is the point?
836
00:49:27 --> 00:49:32
I guess I've got one, two,
three, four, five, six, is it?
837
00:49:32 --> 00:49:37
It's a six, it's a pyramid with
six sloping faces sloping down
838
00:49:37 --> 00:49:41
to zero along the sides, right?
839
00:49:41 --> 00:49:47
So it's like a six-sided
teepee, six-sided tent.
840
00:49:47 --> 00:49:53
And the rods that
hold the tent up would
841
00:49:53 --> 00:49:55
be these six things.
842
00:49:55 --> 00:50:00
And then the six sides
would be six flat pieces.
843
00:50:00 --> 00:50:05
If you would see what I've
tried to draw there,
844
00:50:05 --> 00:50:06
that's phi_1.
845
00:50:07 --> 00:50:12
And you can imagine that when
I've got that, I can take
846
00:50:12 --> 00:50:14
the x derivative and
the y derivative.
847
00:50:14 --> 00:50:17
What will I know about the x
derivative and the y derivative
848
00:50:17 --> 00:50:20
in a typical triangle?
849
00:50:20 --> 00:50:26
In a typical
triangle, say in this triangle,
850
00:50:26 --> 00:50:27
my function looks like a+bx+cy.
851
00:50:27 --> 00:50:31
852
00:50:31 --> 00:50:34
a+bx, it's flat.
853
00:50:34 --> 00:50:35
It's linear.
854
00:50:35 --> 00:50:39
And these three numbers, a, b,
and c, are decided by the fact that
855
00:50:39 --> 00:50:41
the function should be one
there, zero there,
856
00:50:41 --> 00:50:43
and zero there.
857
00:50:43 --> 00:50:47
Three facts about the function,
three coefficients to
858
00:50:47 --> 00:50:49
match those facts.
859
00:50:49 --> 00:50:54
And what's the deal
on the x derivative?
860
00:50:54 --> 00:50:54
It's just b.
861
00:50:54 --> 00:50:56
It's a constant.
862
00:50:56 --> 00:50:58
The y derivative is
just c, a constant.
863
00:50:58 --> 00:51:04
So the integrals are easy and
the whole finite element
864
00:51:04 --> 00:51:08
system just goes smoothly.
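[For one triangle, those constant gradients make the element stiffness matrix a few lines of code. This editor's NumPy sketch uses hypothetical vertex coordinates and is an illustration, not the course's finite element code:]

```python
import numpy as np

# Linear triangle: each trial function a + b*x + c*y is 1 at its own
# vertex and 0 at the other two, so its gradient (b, c) is constant.
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # hypothetical

A = np.column_stack([np.ones(3), verts])   # rows: [1, x_i, y_i]
coeffs = np.linalg.inv(A)                  # column i -> (a, b, c) of phi_i
grads = coeffs[1:, :]                      # constant gradients (b_i, c_i)

area = 0.5 * abs(np.linalg.det(A))
Ke = area * grads.T @ grads                # element stiffness matrix

assert np.allclose(Ke.sum(axis=0), 0)      # constants cost no energy
assert np.allclose(Ke, Ke.T)               # symmetric, as it must be
```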
865
00:51:08 --> 00:51:12
So then what the finite element
system has to do, and we'll
866
00:51:12 --> 00:51:16
talk about it Friday, is gotta
keep track of which triangles
867
00:51:16 --> 00:51:17
go to which nodes.
868
00:51:17 --> 00:51:20
How do you assemble, where
do these pieces go in?
869
00:51:20 --> 00:51:26
But every finite, every element
matrix is going to be simple.
870
00:51:26 --> 00:51:27
OK.