1
00:00:00 --> 00:00:01
2
00:00:01 --> 00:00:03
The following content is
provided under a Creative
3
00:00:03 --> 00:00:03
Commons license.
4
00:00:03 --> 00:00:06
Your support will help MIT
OpenCourseWare continue to
5
00:00:06 --> 00:00:10
offer high-quality educational
resources for free.
6
00:00:10 --> 00:00:13
To make a donation, or to view
additional materials from
7
00:00:13 --> 00:00:16
hundreds of MIT courses, visit
MIT OpenCourseWare
8
00:00:16 --> 00:00:20
at ocw.mit.edu.
9
00:00:20 --> 00:00:21
PROFESSOR STRANG: Well, OK.
10
00:00:21 --> 00:00:27
So I thought I would, as last
time, before Quiz 1 and now
11
00:00:27 --> 00:00:30
before Quiz 2, I thought I
would just tell you what the
12
00:00:30 --> 00:00:33
topics of the four
problems are.
13
00:00:33 --> 00:00:40
Remember that trusses are still
to be reviewed, and of course
14
00:00:40 --> 00:00:48
the fun was to find mechanisms,
to find solutions to Au=0
15
00:00:48 --> 00:00:50
in the case when the
truss was unstable.
16
00:00:50 --> 00:00:53
So you can guess that
I'll probably pick
17
00:00:53 --> 00:00:55
an unstable truss.
18
00:00:55 --> 00:00:59
And ask for mechanisms.
19
00:00:59 --> 00:01:05
Then maybe the key problem
for this third of the
20
00:01:05 --> 00:01:09
course is finite elements.
21
00:01:09 --> 00:01:12
That idea, and finding elements
we've really only done
22
00:01:12 --> 00:01:14
in one dimension.
23
00:01:14 --> 00:01:20
And particularly linear
hat function elements.
24
00:01:20 --> 00:01:25
So you see what's not included
there is like cubics.
25
00:01:25 --> 00:01:29
I mean, those are really great
finite elements, but maybe
26
00:01:29 --> 00:01:32
not so great for an exam.
27
00:01:32 --> 00:01:36
So 1-D is enough; linear
elements are enough
28
00:01:36 --> 00:01:37
to do on an exam.
29
00:01:37 --> 00:01:43
And then questions three and
four, so this one will be
30
00:01:43 --> 00:01:44
weighted more heavily.
31
00:01:44 --> 00:01:47
Then questions three and four
are the topics that we've been
32
00:01:47 --> 00:01:51
doing the last week, two weeks.
33
00:01:51 --> 00:01:56
And I realize we haven't had
full time to digest them, to
34
00:01:56 --> 00:01:58
work a whole lot of exercises.
35
00:01:58 --> 00:02:04
So those will be, right now
they're too simple, my
36
00:02:04 --> 00:02:06
questions on those topics.
37
00:02:06 --> 00:02:07
Three and four, I'll try
to make them a little
38
00:02:07 --> 00:02:08
harder for you.
39
00:02:08 --> 00:02:11
But I might not succeed.
40
00:02:11 --> 00:02:13
So, anyway.
41
00:02:13 --> 00:02:14
There we go.
42
00:02:14 --> 00:02:21
So I just felt we can't talk
about a course in applied math
43
00:02:21 --> 00:02:25
without dealing with these
topics in vector calculus that
44
00:02:25 --> 00:02:28
lead to Laplace's equation.
45
00:02:28 --> 00:02:32
But my heart is really in
solving Laplace's equation
46
00:02:32 --> 00:02:35
or Poisson's equation
by finite differences.
47
00:02:35 --> 00:02:41
That'll be today, or by finite
elements, that'll be Wednesday
48
00:02:41 --> 00:02:43
and probably Friday.
49
00:02:43 --> 00:02:49
So this is like the core of
the computational side but we
50
00:02:49 --> 00:02:52
had to do this preparation.
51
00:02:52 --> 00:02:58
And then you know that Peter
kindly changed his weekly
52
00:02:58 --> 00:03:03
review session to Tuesday, I
think it's Tuesday at noon; you
53
00:03:03 --> 00:03:04
could look on the website.
54
00:03:04 --> 00:03:12
So Peter will be a Tuesday
review and then regular,
55
00:03:12 --> 00:03:16
that's me, on Wednesday.
56
00:03:16 --> 00:03:21
And then the exam, I think,
is Wednesday evening.
57
00:03:21 --> 00:03:22
I think, yeah.
58
00:03:22 --> 00:03:24
Let me just do a review.
59
00:03:24 --> 00:03:29
Oh, Peter sent me a note about
a question on an old exam.
60
00:03:29 --> 00:03:34
I think it was 2005,
question three.
61
00:03:34 --> 00:03:42
I think it was, 2005
question three.
62
00:03:42 --> 00:03:45
Peter said he didn't
understand, at all,
63
00:03:45 --> 00:03:47
the solution to 3c.
64
00:03:47 --> 00:03:49
Nor do I.
65
00:03:49 --> 00:03:55
I think somehow, I think there
was a little confusion in that,
66
00:03:55 --> 00:03:58
and the answer in there is the
answer to a different question.
67
00:03:58 --> 00:04:01
Forget it.
68
00:04:01 --> 00:04:08
Having looked at this one, a
and b are worth looking at in
69
00:04:08 --> 00:04:18
this div grad potential stream
function world that we're in.
70
00:04:18 --> 00:04:23
Those are typical
exercises in that area.
71
00:04:23 --> 00:04:24
Yes, question.
72
00:04:24 --> 00:04:26
AUDIENCE: [INAUDIBLE]
73
00:04:26 --> 00:04:27
PROFESSOR STRANG:
Oh, good question.
74
00:04:27 --> 00:04:27
Yes.
75
00:04:27 --> 00:04:29
Thanks for reminding me.
76
00:04:29 --> 00:04:35
So, solutions to the homework
that you're practicing with.
77
00:04:35 --> 00:04:41
If anybody has typed up
solutions or could, we'll
78
00:04:41 --> 00:04:43
post them right away.
79
00:04:43 --> 00:04:50
So I'll just hope for an email
from somebody who solved some
80
00:04:50 --> 00:04:53
or all of those problems.
81
00:04:53 --> 00:04:56
And of course you'll see
me again Wednesday.
82
00:04:56 --> 00:05:00
But that's an invitation for
anybody who worked through
83
00:05:00 --> 00:05:05
those homeworks that
I'll never collect.
84
00:05:05 --> 00:05:07
The current homework.
85
00:05:07 --> 00:05:09
OK, thanks for that.
86
00:05:09 --> 00:05:12
Other questions about the exam,
and then you'll have another
87
00:05:12 --> 00:05:18
chance Wednesday to ask about
that, so those are the topics.
88
00:05:18 --> 00:05:19
OK.
89
00:05:19 --> 00:05:25
And before I get to the fast
Poisson solvers there's so much
90
00:05:25 --> 00:05:31
about Laplace's equation to say
and I think it still remains to
91
00:05:31 --> 00:05:34
write Green's formula
on the board.
92
00:05:34 --> 00:05:39
And that's part of the
big picture of Laplace's
93
00:05:39 --> 00:05:41
equation is to see this.
94
00:05:41 --> 00:05:45
So we have double integrals
now, that's the difference.
95
00:05:45 --> 00:05:51
But we're still
integrating Au against w.
96
00:05:51 --> 00:05:57
So Au is the gradient of u,
w is w_1, w_2, and this is now
97
00:05:57 --> 00:06:04
for any u and any w, so I
want the inner product.
98
00:06:04 --> 00:06:10
Of the gradient of u with any
w, so this is like the Auw
99
00:06:10 --> 00:06:14
part, so the gradient of u,
that would be a du/dx
100
00:06:14 --> 00:06:16
times the w_1.
101
00:06:16 --> 00:06:24
And a du/dy times w_2.
dx dy, that would be.
102
00:06:24 --> 00:06:28
So inner products, if I'm in
two dimensions, inner products
103
00:06:28 --> 00:06:31
are integrals, of course, if
I'm dealing with functions.
104
00:06:31 --> 00:06:37
So this is like
the Au against w.
105
00:06:37 --> 00:06:39
Inner product.
106
00:06:39 --> 00:06:44
Au, is those two pieces, w is
those, there's the dot product.
107
00:06:44 --> 00:06:49
And now the key idea behind
what Green's formula is about,
108
00:06:49 --> 00:06:55
what we use it for in 1-D
is integration by parts.
109
00:06:55 --> 00:06:59
So an integration by parts is
going to produce a double
110
00:06:59 --> 00:07:03
integral in which, so I'm just
sort of thinking through
111
00:07:03 --> 00:07:05
Green's formula.
112
00:07:05 --> 00:07:08
Of course, we could just say
OK, remember that formula.
113
00:07:08 --> 00:07:12
But you want to just see it
as an integration by parts.
114
00:07:12 --> 00:07:14
See how it evolved.
115
00:07:14 --> 00:07:15
And then we can think.
116
00:07:15 --> 00:07:19
So I plan to take two
integration by parts, that x
117
00:07:19 --> 00:07:23
derivative will get taken off
of u and put onto w_1.
118
00:07:23 --> 00:07:24
So I'll have u*dw_1.
119
00:07:25 --> 00:07:29
Oh, there'll be a minus sign, right,
from the integration by parts. u
120
00:07:29 --> 00:07:38
will multiply minus dw_1/dx,
so that was this guy
121
00:07:38 --> 00:07:39
integrated by parts.
122
00:07:39 --> 00:07:43
Now this one integrated by
parts, a y derivative is
123
00:07:43 --> 00:07:46
coming off of u and onto w_2.
124
00:07:46 --> 00:07:50
So that'll leave me with
u times dw_2/dy with
125
00:07:50 --> 00:07:53
the minus sign.
126
00:07:53 --> 00:07:57
That comes also from that
integration by parts.
127
00:07:57 --> 00:08:00
And then there's
the boundary term.
128
00:08:00 --> 00:08:04
So there's the term which, now
the boundary of the region is
129
00:08:04 --> 00:08:07
the curve around it instead
of just the two points.
130
00:08:07 --> 00:08:11
And what do we see in the
integration by parts?
131
00:08:11 --> 00:08:17
Well, from the boundary term we
expect a u and then the key
132
00:08:17 --> 00:08:20
point is it's (w dot n)ds.
133
00:08:21 --> 00:08:26
It's the normal component
of this vector w.
134
00:08:26 --> 00:08:31
Because we're looking, well,
that's just what it is.
135
00:08:31 --> 00:08:35
So that's Green's formula
written out in detail.
136
00:08:35 --> 00:08:40
And this is the formula that
tells us that here, if this was
137
00:08:40 --> 00:08:48
A, A being the gradient, on the
left side I'm seeing grad u in
138
00:08:48 --> 00:08:53
an inner product with w, and over here
I'm seeing the inner product
139
00:08:53 --> 00:08:57
of u with, what's that?
140
00:08:57 --> 00:09:00
If I look at minus, if I have
this quantity there that's in
141
00:09:00 --> 00:09:04
parentheses, what's its name?
142
00:09:04 --> 00:09:05
It's the divergence.
143
00:09:05 --> 00:09:11
It's minus the divergence of w.
144
00:09:11 --> 00:09:15
Plus the boundary term.
145
00:09:15 --> 00:09:25
So this is why I allowed myself
this gradient transpose equal
146
00:09:25 --> 00:09:29
minus divergence to see that if
A is the gradient then
147
00:09:29 --> 00:09:31
A transpose will be
minus the divergence.
148
00:09:31 --> 00:09:36
And this shows how that,
just sort of shows you
149
00:09:36 --> 00:09:39
the integration by parts
that makes that true.
150
00:09:39 --> 00:09:51
So, you know there's so many
integral formulas in this
151
00:09:51 --> 00:09:52
world of vector calculus.
152
00:09:52 --> 00:09:55
But this is like, one to know.
153
00:09:55 --> 00:09:59
And actually, if you want to
know, OK what about the
154
00:09:59 --> 00:10:01
details, where did
that come from?
155
00:10:01 --> 00:10:05
This actually turns out
to be, and the text will
156
00:10:05 --> 00:10:09
make it clear, the
divergence theorem.
157
00:10:09 --> 00:10:14
The divergence theorem gives us
this, this, exactly this, so
158
00:10:14 --> 00:10:26
the divergence theorem applied
to the vector field u times w.
159
00:10:26 --> 00:10:29
It turns out if I apply the
divergence theorem to that,
160
00:10:29 --> 00:10:32
then I get a whole lot of
terms from the divergence of
161
00:10:32 --> 00:10:34
that, and they are these.
162
00:10:34 --> 00:10:38
And then I get this
for the boundary.
163
00:10:38 --> 00:10:40
The flux out.
164
00:10:40 --> 00:10:42
OK, so.
165
00:10:42 --> 00:10:45
I just didn't want to leave
the subject without writing
166
00:10:45 --> 00:10:49
down the central thing.
167
00:10:49 --> 00:10:52
What does this tell us, then?
168
00:10:52 --> 00:10:55
I have a region.
169
00:10:55 --> 00:11:03
In that region I have Laplace's
equation or Poisson's equation.
170
00:11:03 --> 00:11:06
Say Poisson.
171
00:11:06 --> 00:11:07
On the boundary.
172
00:11:07 --> 00:11:12
Now, this is the final
thing to ask you about.
173
00:11:12 --> 00:11:17
What are suitable boundary
conditions for this
174
00:11:17 --> 00:11:19
problem in 2-D?
175
00:11:19 --> 00:11:20
Right?
176
00:11:20 --> 00:11:24
In 1-D, by now we know
boundary conditions.
177
00:11:24 --> 00:11:26
That we know the main ones.
178
00:11:26 --> 00:11:29
There's fixed,
where u is given.
179
00:11:29 --> 00:11:33
Or there's free where
the slope is given.
180
00:11:33 --> 00:11:37
Now, what do we do here?
181
00:11:37 --> 00:11:44
So this is where we
used to be in 1-D.
182
00:11:44 --> 00:11:46
That's a straight line.
183
00:11:46 --> 00:11:48
Not perfectly straight but
anyway, straight line.
184
00:11:48 --> 00:11:52
So in 1-D there are only
two boundary points.
185
00:11:52 --> 00:11:55
The normal vector
goes that way.
186
00:11:55 --> 00:12:01
And that way, this is the
n in this formula.
187
00:12:01 --> 00:12:05
I think if I just wrote
ordinary integration by parts
188
00:12:05 --> 00:12:10
it would look just the
same and my boundary
189
00:12:10 --> 00:12:12
reduces to two points.
190
00:12:12 --> 00:12:15
Here my boundary's the
whole curve around.
191
00:12:15 --> 00:12:21
So what corresponds to fixed
boundary conditions, and
192
00:12:21 --> 00:12:23
what corresponds to free
boundary conditions?
193
00:12:23 --> 00:12:28
So the ones that correspond
to fixed, those are the ones
194
00:12:28 --> 00:12:32
that are named after Dirichlet.
195
00:12:32 --> 00:12:34
Other names?
196
00:12:34 --> 00:12:38
And the rest will be free,
those are the ones named after
197
00:12:38 --> 00:12:43
Neumann, and those are
what we call natural
198
00:12:43 --> 00:12:49
boundary conditions.
199
00:12:49 --> 00:12:53
So I'm just about ready to
finish this story here.
200
00:12:53 --> 00:12:57
What are the possibilities?
201
00:12:57 --> 00:13:00
Well, in each case part
of the
202
00:13:00 --> 00:13:02
boundary could be fixed.
203
00:13:02 --> 00:13:05
Part could be free.
204
00:13:05 --> 00:13:09
So fixed would mean say some
other boundary, let me call
205
00:13:09 --> 00:13:11
this part of the
boundary fixed.
206
00:13:11 --> 00:13:15
So the temperature, for
example is fixed on this
207
00:13:15 --> 00:13:17
part of the boundary.
208
00:13:17 --> 00:13:20
So this would be like, and this
might be the whole boundary.
209
00:13:20 --> 00:13:24
It might be fixed on the whole
one, but I'm now allowing the
210
00:13:24 --> 00:13:29
possibility that we have here
a fixed part and here
211
00:13:29 --> 00:13:33
we have a free part.
212
00:13:33 --> 00:13:39
In fluids, here fixed
corresponds to, like, there's
213
00:13:39 --> 00:13:40
some obstacle there.
214
00:13:40 --> 00:13:44
Some wall.
215
00:13:44 --> 00:13:50
Free is where the fluid is
coming in or flowing out.
216
00:13:50 --> 00:13:54
So there it's a
velocity condition.
217
00:13:54 --> 00:13:59
OK, so fixed conditions, these
Dirichlet ones will be like,
218
00:13:59 --> 00:14:05
tell me what u is, u is given.
219
00:14:05 --> 00:14:08
On this part.
220
00:14:08 --> 00:14:11
So on part of the boundary,
or possibly all, I
221
00:14:11 --> 00:14:13
could be given u=u_0.
222
00:14:15 --> 00:14:20
It could be zero, as
it often was in 1-D.
223
00:14:20 --> 00:14:22
Often just took u to be zero.
224
00:14:22 --> 00:14:27
Now, what I'm asking maybe for
the first time, is what's the
225
00:14:27 --> 00:14:29
free boundary conditions?
226
00:14:29 --> 00:14:35
Do we prescribe w=0 on the
free part of the boundary?
227
00:14:35 --> 00:14:38
Is w=0 the correct
free condition?
228
00:14:38 --> 00:14:39
Or w=w_0?
229
00:14:41 --> 00:14:45
If we wanted to allow
it to be non-zero.
230
00:14:45 --> 00:14:49
That would sound right, right?
u on one side, w on the other.
231
00:14:49 --> 00:14:54
But now we're in 2-D.
u is still a scalar.
232
00:14:54 --> 00:14:55
That's fine.
233
00:14:55 --> 00:14:57
But w is now a vector.
234
00:14:57 --> 00:15:02
So if I was to prescribe w on
this part of the boundary, I'm
235
00:15:02 --> 00:15:06
giving you two conditions,
not one. And that's not good.
236
00:15:06 --> 00:15:09
I can't give you more than
one boundary condition.
237
00:15:09 --> 00:15:11
And the clue is there.
238
00:15:11 --> 00:15:16
It's w dot n, it's
the outward flow.
239
00:15:16 --> 00:15:18
That you could prescribe.
240
00:15:18 --> 00:15:24
You can't say in advance what
the tangential flow should be.
241
00:15:24 --> 00:15:30
So the free conditions would be
like w dot n, which is of
242
00:15:30 --> 00:15:37
course, since w is v, we have
Laplace's equation here with
243
00:15:37 --> 00:15:47
c=1 in between w and v, so this
is the gradient of u dot n,
244
00:15:47 --> 00:15:51
grad u dot n, and it's
sometimes written, the
245
00:15:51 --> 00:15:54
derivative of u in the
normal direction.
246
00:15:54 --> 00:15:59
And that we can
prescribe as some w_0.
247
00:15:59 --> 00:16:05
Oh, I don't want to say w_0 because
w_0 makes it sound like a
248
00:16:05 --> 00:16:10
vector would be allowed, and
that's the whole point here.
249
00:16:10 --> 00:16:15
We give one boundary condition;
we either give u or w dot n.
250
00:16:15 --> 00:16:26
So let me give it some
different name, like F.
251
00:16:26 --> 00:16:27
So.
252
00:16:27 --> 00:16:30
I'll just ask one more
question to bring this home.
253
00:16:30 --> 00:16:36
And then I'll get onto the fun
today of fast Poisson solvers.
254
00:16:36 --> 00:16:44
So, what was the deal
in 1-D when we had free-free
255
00:16:44 --> 00:16:46
boundary conditions?
256
00:16:46 --> 00:16:50
What was the deal in one
dimension when we had free-free
257
00:16:50 --> 00:16:51
boundary conditions?
258
00:16:51 --> 00:16:55
Both conditions were
slope conditions. u'=0.
259
00:16:57 --> 00:17:00
What matrix did we get?
260
00:17:00 --> 00:17:05
What was different about that
matrix, the free-free case?
261
00:17:05 --> 00:17:07
I'm always looking back to
1-D because that's the
262
00:17:07 --> 00:17:10
case we really understood.
263
00:17:10 --> 00:17:15
What was the problem
with the free-free matrix?
264
00:17:15 --> 00:17:16
It was singular.
265
00:17:16 --> 00:17:19
What was its name?
266
00:17:19 --> 00:17:21
Was it K?
267
00:17:21 --> 00:17:27
You remember the first day
of class, the unforgettable
268
00:17:27 --> 00:17:30
four matrices?
269
00:17:30 --> 00:17:34
So this was fixed-fixed,
fixed-fixed.
270
00:17:34 --> 00:17:41
This one was, you could tell me
better than I can remember.
271
00:17:41 --> 00:17:43
T was free-fixed.
272
00:17:43 --> 00:17:45
And that was still OK.
273
00:17:45 --> 00:17:47
By OK I mean it was
still invertible.
274
00:17:47 --> 00:17:49
What was B?
275
00:17:49 --> 00:17:51
Free-free, that's
what I'm looking at.
276
00:17:51 --> 00:17:55
And the problem with free-free
and then this was periodic,
277
00:17:55 --> 00:18:01
that's Fourier stuff that's
the next part of the course.
278
00:18:01 --> 00:18:08
But the point is that this
free-free was singular.
279
00:18:08 --> 00:18:09
Right?
280
00:18:09 --> 00:18:11
And it'll be the same here.
281
00:18:11 --> 00:18:16
If the whole boundary
is free, then I've got
282
00:18:16 --> 00:18:18
a singular problem.
283
00:18:18 --> 00:18:22
If I've got a whole boundary
that's free, yeah actually
284
00:18:22 --> 00:18:23
I'll just put it here.
285
00:18:23 --> 00:18:34
Suppose I have Laplace's
equation, zero, and suppose
286
00:18:34 --> 00:18:41
I make the whole boundary free.
287
00:18:41 --> 00:18:47
So grad u dot n is zero
on the whole boundary.
288
00:18:47 --> 00:18:53
So it's like solving Bu=0.
289
00:18:55 --> 00:18:58
And the point is that
that has solutions.
290
00:18:58 --> 00:19:01
You remember, what was
in the null space of B,
291
00:19:01 --> 00:19:04
this free-free case?
292
00:19:04 --> 00:19:08
Bu=0, for what vector?
293
00:19:08 --> 00:19:12
Our favorite guilty vector.
294
00:19:12 --> 00:19:15
It was constants.
295
00:19:15 --> 00:19:16
Right, all ones.
296
00:19:16 --> 00:19:20
All constants. u=1,
all constants.
297
00:19:20 --> 00:19:23
Right, so that was the
free-free case in 1-D.
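The 1-D free-free point can be checked in a few lines: build the free-free second-difference matrix B from the course's K, T, B, C family and see that the all-ones vector is in its null space. A small sketch:

```python
import numpy as np

# The free-free matrix B: like fixed-fixed K, but with both boundary
# rows changed for the zero-slope (Neumann) conditions.
n = 6
B = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
B[0, 0] = 1        # free at the left end
B[-1, -1] = 1      # free at the right end

ones = np.ones(n)
residual = B @ ones              # the all-ones vector: B u = 0 exactly
rank = np.linalg.matrix_rank(B)  # n - 1: a one-dimensional null space
```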
298
00:19:23 --> 00:19:28
Now, I just want you to tell
me the same thing over here.
299
00:19:28 --> 00:19:32
That equation does not
have a unique solution.
300
00:19:32 --> 00:19:34
So it's a singular problem.
301
00:19:34 --> 00:19:39
Here's the equation, something
u equals zero, but you can
302
00:19:39 --> 00:19:44
tell me solutions to that
pure Neumann problem.
303
00:19:44 --> 00:19:46
That are not zero.
304
00:19:46 --> 00:19:52
So this is a problem where
there's a null space.
305
00:19:52 --> 00:19:54
And what is the solution?
306
00:19:54 --> 00:19:59
What's a solution to this?
307
00:19:59 --> 00:20:07
To the Laplace's equation in
the inside and slope zero
308
00:20:07 --> 00:20:09
all around the boundary.
309
00:20:09 --> 00:20:12
There's a simple solution
to that one, which is
310
00:20:12 --> 00:20:16
just like this guy.
311
00:20:16 --> 00:20:20
And do you see what it is?
u equal constant, right.
312
00:20:20 --> 00:20:25
So u equal constant.
313
00:20:25 --> 00:20:30
So I'd like you to see the
whole picture connecting with
314
00:20:30 --> 00:20:37
what we fully understood in the
1-D case for those matrices.
315
00:20:37 --> 00:20:43
We don't expect to be able to
have a pure Neumann problem.
316
00:20:43 --> 00:20:45
A totally free problem.
317
00:20:45 --> 00:20:48
Because these, it'll be
singular and we'll have
318
00:20:48 --> 00:20:51
to do something special.
319
00:20:51 --> 00:20:56
So the point is that it
would be enough for me just
320
00:20:56 --> 00:20:59
to fix that little bit.
321
00:20:59 --> 00:21:02
If I just fix that little bit
and prescribe u=u_0 on a
322
00:21:02 --> 00:21:07
little bit, I could make
the rest of it free.
323
00:21:07 --> 00:21:11
But I can't make it all free.
324
00:21:11 --> 00:21:12
OK.
325
00:21:12 --> 00:21:21
So those are things which, just
to complete the big picture
326
00:21:21 --> 00:21:26
of Laplace's equation.
327
00:21:26 --> 00:21:29
OK, now I'm going to
turn to solving.
328
00:21:29 --> 00:21:32
So now I want to solve
Laplace's equation and then
329
00:21:32 --> 00:21:37
we'll have, because we had the
review sessions coming on
330
00:21:37 --> 00:21:42
Tuesday and Wednesday of the
past, now we're looking to the future.
331
00:21:42 --> 00:21:43
Forward.
332
00:21:43 --> 00:21:46
Ready to look forward.
333
00:21:46 --> 00:21:48
Laplace's equation.
334
00:21:48 --> 00:21:50
On a square.
335
00:21:50 --> 00:21:52
On a square.
336
00:21:52 --> 00:21:53
OK.
337
00:21:53 --> 00:21:57
So let me draw this square.
338
00:21:57 --> 00:22:02
Laplace's equation or
Poisson's equation.
339
00:22:02 --> 00:22:03
In a square.
340
00:22:03 --> 00:22:07
OK, I'm going to set up the
boundary conditions, so
341
00:22:07 --> 00:22:08
I'm going to have a mesh.
342
00:22:08 --> 00:22:11
This is going to be
finite differences.
343
00:22:11 --> 00:22:15
Finite differences will
do fine on a square.
344
00:22:15 --> 00:22:21
As soon as I draw this curved
region, I better be thinking
345
00:22:21 --> 00:22:22
about finite elements.
346
00:22:22 --> 00:22:27
So that's what, today is
the, like, square day.
347
00:22:27 --> 00:22:32
OK, so square day I could
think of finite differences.
348
00:22:32 --> 00:22:40
I just draw a mesh,
let's make it small.
349
00:22:40 --> 00:22:41
So this is all known.
350
00:22:41 --> 00:22:47
These are all known values,
let me just say u is given.
351
00:22:47 --> 00:22:48
On the boundary.
352
00:22:48 --> 00:22:54
I'll do the Dirichlet problem.
353
00:22:54 --> 00:22:59
So at all these mesh
points, I know u.
354
00:22:59 --> 00:23:04
And I want to find u inside.
355
00:23:04 --> 00:23:06
So I've got nine
unknowns, right?
356
00:23:06 --> 00:23:08
Nine interior mesh points.
357
00:23:08 --> 00:23:11
Three times three, nine.
358
00:23:11 --> 00:23:14
So I've got nine unknowns,
I want nine equations and
359
00:23:14 --> 00:23:17
I'm thinking here about
difference equations.
360
00:23:17 --> 00:23:19
So let me write my
equation -u_xx-u_yy=f.
361
00:23:19 --> 00:23:30
362
00:23:30 --> 00:23:35
Poisson.
363
00:23:35 --> 00:23:38
What finite difference, I
want to turn this into
364
00:23:38 --> 00:23:40
finite differences.
365
00:23:40 --> 00:23:47
That'll be my system
of nine equations.
366
00:23:47 --> 00:23:50
And the unknowns will be
the nine values of u,
367
00:23:50 --> 00:23:52
so these are unknown.
368
00:23:52 --> 00:23:53
Right?
369
00:23:53 --> 00:23:56
Those are the unknowns.
370
00:23:56 --> 00:23:58
OK, and I need equations.
371
00:23:58 --> 00:24:04
So let me take a typical
mesh point, like this one.
372
00:24:04 --> 00:24:07
I'm looking for the finite
difference approximation
373
00:24:07 --> 00:24:09
to the Laplacian.
374
00:24:09 --> 00:24:14
If you give me Laplace's
equation and I say OK, what's
375
00:24:14 --> 00:24:17
a good finite difference
approximation.
376
00:24:17 --> 00:24:20
That I can then put in
the computer and solve,
377
00:24:20 --> 00:24:22
I'm ready to do it.
378
00:24:22 --> 00:24:25
So what should we
do for this one?
379
00:24:25 --> 00:24:27
For that term?
380
00:24:27 --> 00:24:32
What's the natural, totally
natural thing to do for
381
00:24:32 --> 00:24:36
the finite difference
replacement of minus u_xx?
382
00:24:36 --> 00:24:37
Second difference.
383
00:24:37 --> 00:24:38
Second difference.
384
00:24:38 --> 00:24:43
And with the minus sign there,
second differences around this
385
00:24:43 --> 00:24:47
point will be a minus one
there, two of those, a
386
00:24:47 --> 00:24:49
minus one of those.
387
00:24:49 --> 00:24:56
That second difference in the x
direction is my replacement.
388
00:24:56 --> 00:24:57
My approximation.
389
00:24:57 --> 00:25:01
And what about in
the y direction?
390
00:25:01 --> 00:25:02
Same thing.
391
00:25:02 --> 00:25:05
Second differences,
but now vertically.
392
00:25:05 --> 00:25:08
So now I have minus one,
two more, that makes
393
00:25:08 --> 00:25:10
this guy a four.
394
00:25:10 --> 00:25:12
And this is minus one.
395
00:25:12 --> 00:25:15
And on the right hand side,
I'll take the value of
396
00:25:15 --> 00:25:18
f at the center point.
397
00:25:18 --> 00:25:22
Do you see that molecule?
398
00:25:22 --> 00:25:26
Let me highlight the
molecule there.
399
00:25:26 --> 00:25:30
Four in the center
minus ones beside it.
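The molecule can be checked directly: on u = x squared plus y squared, whose Laplacian is exactly 4, second differences are exact, so the five-point stencil divided by h squared returns minus u_xx minus u_yy = -4 to rounding error. A sketch with an arbitrary interior point and mesh width (those choices are assumptions for the check):

```python
# Five-point molecule for -u_xx - u_yy at an interior mesh point:
#   4*u(center) - u(left) - u(right) - u(below) - u(above), over h^2.
def u(x, y):
    return x**2 + y**2   # test function with -u_xx - u_yy = -4 exactly

h = 0.1                  # arbitrary mesh width
x0, y0 = 0.3, 0.4        # arbitrary interior point
stencil = (4 * u(x0, y0)
           - u(x0 - h, y0) - u(x0 + h, y0)
           - u(x0, y0 - h) - u(x0, y0 + h))
approx = stencil / h**2  # approximates -u_xx - u_yy; exact on quadratics
```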
400
00:25:30 --> 00:25:37
Our matrix, we're going to
produce a KU=f matrix, and
401
00:25:37 --> 00:25:39
what's the K going to be?
402
00:25:39 --> 00:25:40
It's K2D.
403
00:25:40 --> 00:25:41
Let's give a name.
404
00:25:41 --> 00:25:43
A new name.
405
00:25:43 --> 00:25:44
We've had K forever.
406
00:25:44 --> 00:25:45
Let's have K2D.
407
00:25:45 --> 00:25:48
408
00:25:48 --> 00:25:52
And u is, so tell me now
again, just so we've got it,
409
00:25:52 --> 00:25:56
what size is K2D for
this particular problem?
410
00:25:56 --> 00:25:59
What's the shape of it?
411
00:25:59 --> 00:26:02
How many unknowns
have I got in u?
412
00:26:02 --> 00:26:04
Nine.
413
00:26:04 --> 00:26:06
Nine unknowns.
414
00:26:06 --> 00:26:14
And notice, of course, near the
boundary, if this was the, well
415
00:26:14 --> 00:26:16
let me take the first one.
416
00:26:16 --> 00:26:19
What does the very first
row of K look like?
417
00:26:19 --> 00:26:20
If I number nodes like this.
418
00:26:20 --> 00:26:24
So this will be node number
one, two, three, four, five,
419
00:26:24 --> 00:26:27
six, seven, eight, nine.
420
00:26:27 --> 00:26:31
So number one is sort of
close to a boundary.
421
00:26:31 --> 00:26:36
So I have here a minus one,
a two, and this is gone.
422
00:26:36 --> 00:26:40
That's like a column of
our matrix, removed.
423
00:26:40 --> 00:26:43
Like a grounded
node, or whatever.
424
00:26:43 --> 00:26:46
But then I have the vertical
one minus one, two,
425
00:26:46 --> 00:26:49
making this into a four.
426
00:26:49 --> 00:26:52
And this guy will
also be removed.
427
00:26:52 --> 00:26:57
So I think if I look at
K2D, it's nine by nine.
428
00:26:57 --> 00:27:01
I don't plan to write
all 81 entries.
429
00:27:01 --> 00:27:04
But I could write
the first row.
430
00:27:04 --> 00:27:09
The first row comes from
the first equation.
431
00:27:09 --> 00:27:12
So the first row has an
f_1 on the right hand
432
00:27:12 --> 00:27:14
side, f at this point.
433
00:27:14 --> 00:27:17
And what do you see,
what's on the diagonal?
434
00:27:17 --> 00:27:19
Of K2D?
435
00:27:19 --> 00:27:20
Four.
436
00:27:20 --> 00:27:25
And what's the rest of the row?
437
00:27:25 --> 00:27:30
Well there's the next point
and it has a minus one.
438
00:27:30 --> 00:27:32
And then this point
is not connected to
439
00:27:32 --> 00:27:33
that, so it's a zero.
440
00:27:33 --> 00:27:37
I'd better leave room
for nine by nine here.
441
00:27:37 --> 00:27:40
Four, minus one, zero.
442
00:27:40 --> 00:27:45
And now what about number,
the rest of the row?
443
00:27:45 --> 00:27:49
So there's four as the
center, minus one, zero.
444
00:27:49 --> 00:27:52
And now I come to here,
so what's the next
445
00:27:52 --> 00:27:54
entry in my matrix?
446
00:27:54 --> 00:27:55
Minus one.
447
00:27:55 --> 00:27:59
We're multiplying the
unknown, the fourth unknown.
448
00:27:59 --> 00:28:02
And what about the
rest of the row?
449
00:28:02 --> 00:28:05
This guy is not connected
to five, six, seven,
450
00:28:05 --> 00:28:06
eight, or nine.
451
00:28:06 --> 00:28:11
So now I have zero,
zero, zero, zero, zero.
452
00:28:11 --> 00:28:13
OK.
453
00:28:13 --> 00:28:17
So that would be a row
that's sort
454
00:28:17 --> 00:28:21
of different because it
corresponds to a mesh
455
00:28:21 --> 00:28:23
point that's way over
near the corner.
456
00:28:23 --> 00:28:25
How about the fifth row?
457
00:28:25 --> 00:28:30
What would be the fifth row of
K2D corresponding to this guy
458
00:28:30 --> 00:28:33
that's right in the middle?
459
00:28:33 --> 00:28:36
What's on the diagonal, what's
the 5, 5 entry of K2D?
460
00:28:39 --> 00:28:39
Four.
461
00:28:39 --> 00:28:40
Good.
462
00:28:40 --> 00:28:42
I'm going to have a
diagonal of fours.
463
00:28:42 --> 00:28:45
I'm going to have fours all
the way down the diagonal.
464
00:28:45 --> 00:28:49
Because every point the
molecule is centered at gives
465
00:28:49 --> 00:28:53
me the four, and I just have a
minus one for its neighbor.
466
00:28:53 --> 00:28:56
And this guy has all
four neighbors.
467
00:28:56 --> 00:29:02
They're all alive and well,
so there's a guy, this
468
00:29:02 --> 00:29:03
is the fifth row.
469
00:29:03 --> 00:29:06
There's somebody right
next door on the left.
470
00:29:06 --> 00:29:08
Somebody right next
door on the right.
471
00:29:08 --> 00:29:15
And then where do the other
minus ones go after that?
472
00:29:15 --> 00:29:17
This is the whole point.
473
00:29:17 --> 00:29:21
I want you to see how
this K2D matrix looks.
474
00:29:21 --> 00:29:24
Because it's the one, it's the
equation we have to solve.
475
00:29:24 --> 00:29:27
So we've got to know what
this matrix looks like.
476
00:29:27 --> 00:29:29
So what's the point here?
477
00:29:29 --> 00:29:30
Four, it's got a
neighbor there.
478
00:29:30 --> 00:29:34
This was unknown number five.
479
00:29:34 --> 00:29:37
It has the fourth unknown and the
sixth unknown, what are the
480
00:29:37 --> 00:29:39
numbers of these unknowns?
481
00:29:39 --> 00:29:43
Two and eight.
482
00:29:43 --> 00:29:48
So this guy is going to have a
minus one in the two position.
483
00:29:48 --> 00:29:49
And then zero.
484
00:29:49 --> 00:29:54
And this guy, it'll also have
a minus one, is that nine?
485
00:29:54 --> 00:29:56
Yeah, do you see
how it's going?
486
00:29:56 --> 00:30:03
I have a diagonal, I have
five diagonals somehow.
487
00:30:03 --> 00:30:07
I have diagonal, the neighbor
on the left, the neighbor on
488
00:30:07 --> 00:30:10
the right, the neighbor
underneath, and the
489
00:30:10 --> 00:30:12
neighbor above.
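For the 3 by 3 interior grid, the five-diagonal matrix just described can be assembled from the 1-D fixed-fixed matrix K with Kronecker products, K2D = kron(I, K) + kron(K, I). A sketch showing that this reproduces the rows worked out above, fours down the diagonal and a minus one for each live neighbor:

```python
import numpy as np

# Assemble K2D for an n-by-n interior grid from the 1-D fixed-fixed K:
#   K2D = kron(I, K) + kron(K, I)
# With row-by-row numbering of the nine unknowns, this matches the
# molecule: 4 on the diagonal, -1 for each of the (up to four) neighbors.
n = 3
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
I = np.eye(n)
K2D = np.kron(I, K) + np.kron(K, I)   # 9 x 9

first_row = K2D[0]   # corner node 1: [4, -1, 0, -1, 0, 0, 0, 0, 0]
fifth_row = K2D[4]   # center node 5: minus ones at nodes 2, 4, 6, 8
```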
490
00:30:12 --> 00:30:17
Across the street
you put something.
491
00:30:17 --> 00:30:21
So our matrix, this very
important matrix, more
492
00:30:21 --> 00:30:26
attention has gone into how to
solve this equation for that
493
00:30:26 --> 00:30:32
matrix than any other single
specific example in
494
00:30:32 --> 00:30:33
numerical analysis.
495
00:30:33 --> 00:30:36
It's just a model problem.
496
00:30:36 --> 00:30:40
And what do I notice
about this matrix?
497
00:30:40 --> 00:30:44
So now you see it's
got five diagonals.
498
00:30:44 --> 00:30:48
But, and what's the big but?
499
00:30:48 --> 00:30:55
The big but is that those five
diagonals don't run together; it's not
500
00:30:55 --> 00:31:00
just a band of five, because you
don't get to the one up above
501
00:31:00 --> 00:31:09
until you've gone all the way across.
You see, the bandwidth is big.
502
00:31:09 --> 00:31:16
If my matrix, if this is one,
two up to n, and this is one,
503
00:31:16 --> 00:31:22
two up to n, then how many
unknowns have I got?
504
00:31:22 --> 00:31:23
Just think big now.
505
00:31:23 --> 00:31:25
We're in 2-D, think 2-D.
506
00:31:25 --> 00:31:30
How many unknowns have I got? n
was three in the picture, but
507
00:31:30 --> 00:31:34
now you're kind of visualizing
a much finer grid.
508
00:31:34 --> 00:31:38
How many unknowns? n
squared unknowns, right?
509
00:31:38 --> 00:31:40
One to n in each direction.
510
00:31:40 --> 00:31:43
So we've got the matrix
of size n squared.
511
00:31:43 --> 00:31:46
Way bigger, way bigger.
512
00:31:46 --> 00:31:50
I mean if n was ten, our
matrix is of size 100.
513
00:31:50 --> 00:31:54
And of course, that's
completely simple.
514
00:31:54 --> 00:31:58
The real question is when n
becomes 1,000 and our matrix
515
00:31:58 --> 00:32:00
is of size a million.
516
00:32:00 --> 00:32:02
And those get solved every day.
517
00:32:02 --> 00:32:05
So, question is how.
518
00:32:05 --> 00:32:11
Actually this, for a million
by million matrix, this fast
519
00:32:11 --> 00:32:15
Poisson solver that I want
to describe does it great.
520
00:32:15 --> 00:32:21
Actually, so you can deal with
a matrix of order a million
521
00:32:21 --> 00:32:23
without turning a hair.
522
00:32:23 --> 00:32:25
OK now, but can we
understand that matrix?
523
00:32:25 --> 00:32:29
So suppose n is 100.
524
00:32:29 --> 00:32:31
Just to have us.
525
00:32:31 --> 00:32:33
What's the story?
526
00:32:33 --> 00:32:35
This matrix, then.
527
00:32:35 --> 00:32:37
What's the size of that matrix?
528
00:32:37 --> 00:32:39
Did I say 100?
529
00:32:39 --> 00:32:43
Or 1,000, do you want
to think really big?
530
00:32:43 --> 00:32:44
I guess I said 1,000.
531
00:32:44 --> 00:32:47
And I can get up to a million.
532
00:32:47 --> 00:32:51
OK, so n is 1,000, so we
have a 1,000 by 1,000,
533
00:32:51 --> 00:32:52
a million unknowns.
534
00:32:52 --> 00:32:56
My matrix is a million
by a million.
535
00:32:56 --> 00:32:57
Let me ask you this.
536
00:32:57 --> 00:32:59
Here's the question that
I want to ask you.
537
00:32:59 --> 00:33:01
What's the bandwidth?
538
00:33:01 --> 00:33:12
How far from the main diagonal
out to that minus one?
539
00:33:12 --> 00:33:18
In other words, this is really
the full band of the matrix.
540
00:33:18 --> 00:33:22
It's got a lot of zeroes
inside the band.
541
00:33:22 --> 00:33:23
A whole bunch.
542
00:33:23 --> 00:33:28
Maybe 990-something
zeroes in here.
543
00:33:28 --> 00:33:34
And in here, and how far
is it from here to here?
544
00:33:34 --> 00:33:36
1,000, right?
545
00:33:36 --> 00:33:38
It's 1,000.
546
00:33:38 --> 00:33:40
So the bandwidth is 1,000.
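[Editor's note: that bandwidth claim is easy to check numerically. A sketch (my own helper `bandwidth`, not from the lecture): the matrix is n squared by n squared, but the neighbor above or below sits exactly n columns away.]

```python
import numpy as np
from scipy.sparse import diags, identity, kron

def bandwidth(A):
    # Farthest distance from the main diagonal to any nonzero entry.
    i, j = A.nonzero()
    return int(np.abs(i - j).max())

n = 50
K = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
I = identity(n)
K2D = kron(I, K) + kron(K, I)

assert K2D.shape == (n * n, n * n)  # n^2 unknowns
assert bandwidth(K2D) == n          # the up/down neighbor is n away
```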
547
00:33:40 --> 00:33:47
And what if I do ordinary, so
now I want to begin to think
548
00:33:47 --> 00:33:50
how do I solve this equation?
549
00:33:50 --> 00:33:52
How do I work with that matrix?
550
00:33:52 --> 00:33:54
And you have an idea
of that matrix?
551
00:33:54 --> 00:33:58
It's a beautiful
matrix to work with.
552
00:33:58 --> 00:34:04
It's very like our K matrix;
it's just up to 2-D.
553
00:34:04 --> 00:34:08
In fact, it's closely connected;
our K2D will be closely
554
00:34:08 --> 00:34:13
connected to K, and that's what
makes life possible here.
555
00:34:13 --> 00:34:21
But suppose I had to
solve this equation.
556
00:34:21 --> 00:34:25
Alright, so I give it to MATLAB
just as a system of equations.
557
00:34:25 --> 00:34:27
A million equations,
a million unknowns.
558
00:34:27 --> 00:34:32
So MATLAB takes a gulp
and then it tries, OK?
559
00:34:32 --> 00:34:37
So ordinary elimination,
that's what backslash does.
560
00:34:37 --> 00:34:40
By the way this matrix
is symmetric, and you
561
00:34:40 --> 00:34:43
won't be surprised it's
positive definite.
562
00:34:43 --> 00:34:46
Everything is nice
about that matrix.
563
00:34:46 --> 00:34:49
In fact more than I've said.
564
00:34:49 --> 00:34:56
OK, what happens if I do
elimination on that matrix?
565
00:34:56 --> 00:34:59
How long does it take,
how fast is it?
566
00:34:59 --> 00:35:03
Well, MATLAB will take
advantage, or any code,
567
00:35:03 --> 00:35:10
will take advantage of
all these zeroes, right?
568
00:35:10 --> 00:35:14
In other words, what do I
mean by take advantage?
569
00:35:14 --> 00:35:15
And what do I mean
by elimination?
570
00:35:15 --> 00:35:19
You remember I'm going to
subtract multiples of this row.
571
00:35:19 --> 00:35:24
There'll be a minus one
below it on this guy.
572
00:35:24 --> 00:35:29
This row that has some fours
and some minus ones, whatever.
573
00:35:29 --> 00:35:33
Elimination, I'm subtracting a
multiple of this row from this
574
00:35:33 --> 00:35:37
row to get that minus one
to be a zero, right?
575
00:35:37 --> 00:35:40
I'm eliminating this
entry in the matrix.
576
00:35:40 --> 00:35:47
This is ordinary, LU, this
is ordinary LU of K2D.
577
00:35:47 --> 00:35:51
I'm sort of thinking
through LU of K2D.
578
00:35:51 --> 00:35:54
And what I'm going to
conclude is that there
579
00:35:54 --> 00:35:56
ought to be a better way.
580
00:35:56 --> 00:35:58
It's such a special
matrix, and it's going
581
00:35:58 --> 00:36:00
to be messed up by LU.
582
00:36:00 --> 00:36:02
And what do I mean
by messed up?
583
00:36:02 --> 00:36:08
I mean that all these great
zeroes, 990-something zeroes
584
00:36:08 --> 00:36:16
there and there, inside the
band, are going to fill in.
585
00:36:16 --> 00:36:21
That's the message, if you want
to talk about solving large
586
00:36:21 --> 00:36:27
linear systems, the big issue
is fill-in; it's
587
00:36:27 --> 00:36:29
a sparse matrix.
588
00:36:29 --> 00:36:30
That's typical.
589
00:36:30 --> 00:36:33
We have a local operator
here, so we expect a very
590
00:36:33 --> 00:36:36
local, sparse matrix.
591
00:36:36 --> 00:36:39
Large sparse matrices is
what we're doing now.
592
00:36:39 --> 00:36:45
And the problem is that zeroes
here will never get touched
593
00:36:45 --> 00:36:48
because we don't need any
elimination, they're
594
00:36:48 --> 00:36:49
already zero.
595
00:36:49 --> 00:36:56
But along this diagonal of minus
ones, when we do eliminations to
596
00:36:56 --> 00:36:59
make those zero, then some
non-zeroes are going to come
597
00:36:59 --> 00:37:03
in and fill in the zeroes.
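[Editor's note: the fill-in can be seen directly by factoring K2D in the natural ordering. A sketch using SciPy's SuperLU interface (an assumption: the lecture uses MATLAB, and `permc_spec='NATURAL'` is how SciPy says "don't reorder the unknowns"); the factors end up with far more nonzeros than the sparse matrix itself.]

```python
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import splu

n = 30
K = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csc')
I = identity(n, format='csc')
A = (kron(I, K) + kron(K, I)).tocsc()

# Factor in the natural ordering: no reordering of the unknowns.
lu = splu(A, permc_spec='NATURAL')

# The band between the five diagonals fills in during elimination.
sparse_nnz = A.nnz
factor_nnz = lu.L.nnz + lu.U.nnz
print(sparse_nnz, factor_nnz)
```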
598
00:37:03 --> 00:37:07
So the work, how much work
would elimination be, actually?
599
00:37:07 --> 00:37:15
We could even think about that.
600
00:37:15 --> 00:37:19
Can I just give the result
for how much time would
601
00:37:19 --> 00:37:20
elimination take?
602
00:37:20 --> 00:37:22
Ordinary elimination.
603
00:37:22 --> 00:37:29
So I have n squared rows to
work on. n squared rows to work
604
00:37:29 --> 00:37:35
on, and then the distance, that
distance was what in terms of
605
00:37:35 --> 00:37:39
n? The width there
is n, right?
606
00:37:39 --> 00:37:41
It was about 1,000.
607
00:37:41 --> 00:37:45
So I get n numbers here
and I have to work
608
00:37:45 --> 00:37:52
on n numbers below.
609
00:37:52 --> 00:37:56
To clean up a column I'm
working with a vector of
610
00:37:56 --> 00:38:01
length n and I'm subtracting
it from n rows below.
611
00:38:01 --> 00:38:05
So cleaning up a column takes
me n squared operations.
612
00:38:05 --> 00:38:10
The total then is n^4.
613
00:38:10 --> 00:38:14
n^4, for elimination.
614
00:38:14 --> 00:38:16
And that's not acceptable.
615
00:38:16 --> 00:38:16
Right?
616
00:38:16 --> 00:38:19
If n is 1,000, I'm up to 10^12.
617
00:38:20 --> 00:38:25
So elimination is not
a good way to solve.
618
00:38:25 --> 00:38:30
Elimination, in
this ordering, ha.
619
00:38:30 --> 00:38:31
Yeah.
620
00:38:31 --> 00:38:36
So can I tell you two ways,
two better ways to deal
621
00:38:36 --> 00:38:39
with this problem?
622
00:38:39 --> 00:38:42
So two better ways to
deal with this problem.
623
00:38:42 --> 00:38:51
One is, and MATLAB has a code
that would do it, one is to
624
00:38:51 --> 00:38:55
change from this ordering one,
two, three, four, five, six,
625
00:38:55 --> 00:39:00
seven, eight, nine, change
to a different ordering
626
00:39:00 --> 00:39:03
that had less fill-in.
627
00:39:03 --> 00:39:09
So that's a substantial part
of scientific computing:
628
00:39:09 --> 00:39:15
to re-order the unknowns.
629
00:39:15 --> 00:39:21
To reduce the number of wasted
fill-in entries that you have to
630
00:39:21 --> 00:39:25
fill in, and then you have
to eliminate out again.
631
00:39:25 --> 00:39:29
And I can get that n^4 down
quite a bit that way.
632
00:39:29 --> 00:39:36
OK, so that's a topic
of importance in
633
00:39:36 --> 00:39:37
scientific computing.
634
00:39:37 --> 00:39:42
Given a large linear system,
put the rows and columns
635
00:39:42 --> 00:39:44
in a good order.
636
00:39:44 --> 00:39:47
OK, and that's something you
hope the machine can do.
637
00:39:47 --> 00:39:53
The machine will have a record
of where are the non-zeroes.
638
00:39:53 --> 00:39:55
And then you really run
639
00:39:55 --> 00:39:57
the program twice.
640
00:39:57 --> 00:40:00
You run it once without
using the numbers.
641
00:40:00 --> 00:40:04
Just the positions of
non-zeroes to get a good
642
00:40:04 --> 00:40:08
ordering so that the number
of elimination steps
643
00:40:08 --> 00:40:09
will be way down.
644
00:40:09 --> 00:40:13
And then you run it with the
numbers in that good order
645
00:40:13 --> 00:40:15
to do the elimination.
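[Editor's note: the two-pass idea — choose an ordering from the positions of the nonzeros, then eliminate in that order — can be sketched in SciPy by comparing the natural ordering against a fill-reducing ordering. COLAMD here stands in for whatever good ordering the machine finds; this is an illustration, not the lecture's own code.]

```python
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import splu

n = 30
K = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csc')
I = identity(n, format='csc')
A = (kron(I, K) + kron(K, I)).tocsc()

# Pass 1 in spirit: COLAMD looks only at the positions of the nonzeros
# to choose a good column ordering; pass 2 eliminates in that order.
lu_natural = splu(A, permc_spec='NATURAL')
lu_reordered = splu(A, permc_spec='COLAMD')

nnz_natural = lu_natural.L.nnz + lu_natural.U.nnz
nnz_reordered = lu_reordered.L.nnz + lu_reordered.U.nnz
print(nnz_natural, nnz_reordered)
```

The reordered factorization should show noticeably less fill-in than the natural row-by-row ordering.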
646
00:40:15 --> 00:40:21
So that's one
topic in this area.
647
00:40:21 --> 00:40:23
But.
648
00:40:23 --> 00:40:31
Today's topic is using the
fact that this is a square.
649
00:40:31 --> 00:40:34
That this matrix has
a special form.
650
00:40:34 --> 00:40:38
That we can actually find its
eigenvalues and eigenvectors.
651
00:40:38 --> 00:40:41
And that's what makes a
fast Poisson solver.
652
00:40:41 --> 00:40:45
So that's the topic of the
lecture and the topic of the
653
00:40:45 --> 00:40:46
next section in the book.
654
00:40:46 --> 00:40:50
And I will get started
on it now, and then
655
00:40:50 --> 00:40:52
complete it Wednesday.
656
00:40:52 --> 00:40:58
So I want to take this
particular K2D for a square and
657
00:40:58 --> 00:41:05
show a different way to solve
the equations that uses the
658
00:41:05 --> 00:41:06
fast Fourier transform.
659
00:41:06 --> 00:41:14
When I see, when you see, a
regular mesh, regular numbers.
660
00:41:14 --> 00:41:20
Nothing varying, constant
diagonals, think fast
661
00:41:20 --> 00:41:21
Fourier transform.
662
00:41:21 --> 00:41:24
The FFT has got a chance
when you have matrices
663
00:41:24 --> 00:41:25
like this one.
664
00:41:25 --> 00:41:32
And I want to show
you how it works.
665
00:41:32 --> 00:41:35
I want to show you the
underlying idea, and then we'll
666
00:41:35 --> 00:41:38
see the details of the fast
Fourier transform in
667
00:41:38 --> 00:41:40
the Fourier section.
668
00:41:40 --> 00:41:43
So, of course, everybody knows
the third part of this course,
669
00:41:43 --> 00:41:48
which is coming up next week,
we'll be well into it,
670
00:41:48 --> 00:41:51
is the Fourier part.
671
00:41:51 --> 00:41:52
OK.
672
00:41:52 --> 00:41:55
But I can give you, because
we've seen the eigenvectors
673
00:41:55 --> 00:42:00
and eigenvalues of K,
I can do this now.
674
00:42:00 --> 00:42:04
Ready for the smart idea?
675
00:42:04 --> 00:42:08
The good way, and it only works
because this is a square,
676
00:42:08 --> 00:42:11
because the coefficients are
constant, because everything's
677
00:42:11 --> 00:42:14
beautiful, here's the idea.
678
00:42:14 --> 00:42:16
OK.
679
00:42:16 --> 00:42:26
Central idea.
680
00:42:26 --> 00:42:32
The idea is, so I have this
matrix K2D and I'm trying
681
00:42:32 --> 00:42:35
to solve a system with
that particular matrix.
682
00:42:35 --> 00:42:41
And the fantastic thing about
this matrix is that I know
683
00:42:41 --> 00:42:44
what its eigenvalues
and eigenvectors are.
684
00:42:44 --> 00:42:47
So let's suppose we know them.
685
00:42:47 --> 00:42:48
So I know (K2D)y=lambda*y.
686
00:42:48 --> 00:42:53
687
00:42:53 --> 00:42:57
That's when y is an
eigenvector, and remember it's
688
00:42:57 --> 00:42:59
got n squared components.
689
00:42:59 --> 00:43:01
Lambda's a number.
690
00:43:01 --> 00:43:07
And I've got a whole lot of
these, y_k, lambda_k, y_k,
691
00:43:07 --> 00:43:16
and I've got k going from
one up to n squared.
692
00:43:16 --> 00:43:19
Now I want to use them.
693
00:43:19 --> 00:43:22
I want to use them
to solve (K2D)u=f.
694
00:43:22 --> 00:43:27
695
00:43:27 --> 00:43:34
Let me say, it's totally
amazing that I would, in fact
696
00:43:34 --> 00:43:37
this is the only important
example I know in scientific
697
00:43:37 --> 00:43:42
computing, in which I would
use the eigenvalues and
698
00:43:42 --> 00:43:46
eigenvectors of the matrix
to solve a linear system.
699
00:43:46 --> 00:43:48
You see, normally you think
of eigenvalue problems as
700
00:43:48 --> 00:43:51
like, those are harder.
701
00:43:51 --> 00:43:55
It's more difficult to solve
this problem than this one.
702
00:43:55 --> 00:43:56
Normally, right?
703
00:43:56 --> 00:43:59
That's the way to
think about it.
704
00:43:59 --> 00:44:07
But for this special matrix, we
can figure out the eigenvalues
705
00:44:07 --> 00:44:10
and eigenvectors by
pencil and paper.
706
00:44:10 --> 00:44:14
So we know what these guys are.
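[Editor's note: the pencil-and-paper formulas meant here are the ones from earlier in the course: the eigenvalues of K are 2 - 2 cos(k*pi/(n+1)) with sine eigenvectors, and the eigenvalues of K2D are all the sums lambda_i + lambda_j. A quick numerical check of both claims:]

```python
import numpy as np

n = 6
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
K2D = np.kron(np.eye(n), K) + np.kron(K, np.eye(n))

k = np.arange(1, n + 1)
lam = 2 - 2 * np.cos(k * np.pi / (n + 1))  # eigenvalues of the 1-D K

def sine_vec(m):
    # m-th sine eigenvector of K: components sin(m*j*pi/(n+1)), j = 1..n.
    return np.sin(m * k * np.pi / (n + 1))

# A 2-D eigenvector is a Kronecker product of two sine vectors.
i, j = 2, 5
y = np.kron(sine_vec(i), sine_vec(j))
assert np.allclose(K2D @ y, (lam[i - 1] + lam[j - 1]) * y)

# All n^2 eigenvalues of K2D are the sums lambda_i + lambda_j.
assert np.allclose(np.sort(np.add.outer(lam, lam).ravel()),
                   np.linalg.eigvalsh(K2D))
```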
707
00:44:14 --> 00:44:20
And that will give us a way
to solve a linear system.
708
00:44:20 --> 00:44:24
So can I remember, this
is now central message.
709
00:44:24 --> 00:44:33
What are the three steps
to use eigenvectors and
710
00:44:33 --> 00:44:40
eigenvalues in solving linear
equations, in solving
711
00:44:40 --> 00:44:43
differential equations, in
solving difference equations,
712
00:44:43 --> 00:44:46
they're always the
same three steps.
713
00:44:46 --> 00:44:49
Can I remember what
those three steps are?
714
00:44:49 --> 00:44:56
So step one, so I'm supposing
that I know these guys.
715
00:44:56 --> 00:44:57
First of all, that's amazing.
716
00:44:57 --> 00:44:59
To know them in advance.
717
00:44:59 --> 00:45:02
And the second amazing
thing is that they have
718
00:45:02 --> 00:45:04
beautiful structure.
719
00:45:04 --> 00:45:08
And those facts make it
possible to do something you
720
00:45:08 --> 00:45:10
would never expect to do.
721
00:45:10 --> 00:45:12
Use eigenvalues for
a linear system.
722
00:45:12 --> 00:45:14
So here's the way to
do it, three steps.
723
00:45:14 --> 00:45:23
One, expand f as a combination
of the eigenvectors. c_1*y_1,
724
00:45:23 --> 00:45:27
up to however many we've got.
725
00:45:27 --> 00:45:30
We've got n squared of them.
726
00:45:30 --> 00:45:33
Gosh, we're loaded
with eigenvectors.
727
00:45:33 --> 00:45:41
Do you remember, this is a
column of size n squared.
728
00:45:41 --> 00:45:43
Our problem has size n squared.
729
00:45:43 --> 00:45:45
So it's big, a million.
730
00:45:45 --> 00:45:48
So I'm going to expand
f in terms of the
731
00:45:48 --> 00:45:52
million eigenvectors.
732
00:45:52 --> 00:45:54
OK.
733
00:45:54 --> 00:45:56
So that tells me the
right hand side.
734
00:45:56 --> 00:46:01
Now, you remember step two in
this one, two, three process?
735
00:46:01 --> 00:46:05
Step two is the easy one,
because I want to get,
736
00:46:05 --> 00:46:06
what am I looking for?
737
00:46:06 --> 00:46:10
I'm aiming for, of course,
the answer I'm aiming
738
00:46:10 --> 00:46:16
for is K2D inverse f.
739
00:46:16 --> 00:46:16
Right?
740
00:46:16 --> 00:46:17
That's what I'm shooting for.
741
00:46:17 --> 00:46:22
That's my goal,
that's the solution.
742
00:46:22 --> 00:46:27
So here I've got f in terms
of the eigenvectors.
743
00:46:27 --> 00:46:32
Now I want to get, how do I get
u in terms of the eigenvectors?
744
00:46:32 --> 00:46:39
Well, what's the eigenvalue for
the inverse, K to the inverse?
745
00:46:39 --> 00:46:42
Do you remember?
746
00:46:42 --> 00:46:43
It's one over.
747
00:46:43 --> 00:46:45
It's one over the
eigenvalue, right?
748
00:46:45 --> 00:46:53
If this is true, then this will
be true with lambda_k inverse.
749
00:46:53 --> 00:46:56
It's simple and I won't stop
to do it, but we could
750
00:46:56 --> 00:46:57
come back to it.
751
00:46:57 --> 00:47:01
So I know how to
get K2D inverse.
752
00:47:01 --> 00:47:03
So that's what I'm going to do.
753
00:47:03 --> 00:47:06
I'm going to divide, so
here's the fast step.
754
00:47:06 --> 00:47:14
Divide c_k, each of those
c_k's, by lambda_k.
755
00:47:14 --> 00:47:17
756
00:47:17 --> 00:47:18
What do I have now?
757
00:47:18 --> 00:47:27
I've got c_1/lambda_1*y_1, up
to c_(n squared), the last guy
758
00:47:27 --> 00:47:34
divided by its eigenvalue
times its eigenvector.
759
00:47:34 --> 00:47:37
And that's the answer.
760
00:47:37 --> 00:47:40
Right, that's the
correct answer.
761
00:47:40 --> 00:47:45
If this is the right hand side,
then this is the solution.
762
00:47:45 --> 00:47:46
Because why?
763
00:47:46 --> 00:47:51
Because when I multiply this
solution by K2D it multiplies
764
00:47:51 --> 00:47:54
every eigenvector by its
eigenvalue, it cancels all
765
00:47:54 --> 00:47:57
the lambdas, and I get f.
766
00:47:57 --> 00:48:01
Do you see, I can't raise
that board, I'm sorry.
767
00:48:01 --> 00:48:04
Can you see it at the back?
768
00:48:04 --> 00:48:08
Should I write it up here
again, because everything
769
00:48:08 --> 00:48:09
hinges on that.
770
00:48:09 --> 00:48:18
The final answer u, which is
K2D inverse f, is the same
771
00:48:18 --> 00:48:20
combination that gave f.
772
00:48:20 --> 00:48:26
But I divide by lambda_1; c_2
y_2, the second eigenvector,
773
00:48:26 --> 00:48:28
over lambda_2, and so on.
774
00:48:28 --> 00:48:34
Because when I multiply by K2D,
multiplying each y, each
775
00:48:34 --> 00:48:38
eigenvector will bring out an
eigenvalue, it will cancel,
776
00:48:38 --> 00:48:41
and I'll have the original f.
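[Editor's note: the three steps — expand f in the eigenvectors, divide each coefficient by lambda_i + lambda_j, recombine — can be sketched directly. This sketch uses the explicit sine eigenvector matrix S; the fast solver replaces the products by S with an FFT-based discrete sine transform, as the lecture goes on to say. The function name `poisson_solve` is my own.]

```python
import numpy as np

def poisson_solve(F):
    # Solve (K2D) u = f on an n-by-n grid; F holds f at the grid points,
    # which in matrix form reads K U + U K = F.
    n = F.shape[0]
    k = np.arange(1, n + 1)
    lam = 2 - 2 * np.cos(k * np.pi / (n + 1))     # eigenvalues of the 1-D K
    S = np.sin(np.outer(k, k) * np.pi / (n + 1))  # sine eigenvectors as columns
    # Step 1: expand F in the eigenvector basis (a 2-D sine transform);
    # S is its own inverse up to the factor 2/(n+1).
    C = (2 / (n + 1)) ** 2 * (S @ F @ S)
    # Step 2: divide each coefficient c_ij by its eigenvalue lambda_i + lambda_j.
    C /= lam[:, None] + lam[None, :]
    # Step 3: transform back to physical space.
    return S @ C @ S

# Check against the matrix form K U + U K = F of (K2D) u = f.
n = 8
rng = np.random.default_rng(1)
F = rng.standard_normal((n, n))
U = poisson_solve(F)
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
assert np.allclose(K @ U + U @ K, F)
```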
777
00:48:41 --> 00:48:45
So do you see that
those are the steps?
778
00:48:45 --> 00:48:49
The steps are, you do, you
could say, a
779
00:48:49 --> 00:48:52
transform into eigenspace.
780
00:48:52 --> 00:48:55
I take my vector in physical
space, it's got its
781
00:48:55 --> 00:48:57
n squared components.
782
00:48:57 --> 00:49:01
At physical points.
783
00:49:01 --> 00:49:03
I express it in eigenspace.
784
00:49:03 --> 00:49:06
By that I mean it's a
combination of the eigenvectors
785
00:49:06 --> 00:49:11
and now the n squared physical
values are n squared
786
00:49:11 --> 00:49:18
amplitudes, or amounts
of each eigenvector.
787
00:49:18 --> 00:49:22
So that's the
Fourier series idea, that's the
788
00:49:22 --> 00:49:26
Fourier transform idea, it
just appears everywhere.
789
00:49:26 --> 00:49:30
Express the function
in the good basis.
790
00:49:30 --> 00:49:32
We're using these
good functions.
791
00:49:32 --> 00:49:35
In these good functions,
the answer is simple.
792
00:49:35 --> 00:49:38
I'm just dividing by lambda.
793
00:49:38 --> 00:49:40
Do you see that I'm just
dividing by lambda, where if it
794
00:49:40 --> 00:49:43
was a differential equation,
this is just what we did for
795
00:49:43 --> 00:49:45
differential equations.
796
00:49:45 --> 00:49:49
What did I do for
differential equations?
797
00:49:49 --> 00:49:51
Do you mind if I just ask you?
798
00:49:51 --> 00:49:55
If I was solving dU/dt=(K2D)U.
799
00:49:55 --> 00:50:00
800
00:50:00 --> 00:50:06
Starting from f, start
with U(0) as f.
801
00:50:06 --> 00:50:10
I just want to
draw the analogy.
802
00:50:10 --> 00:50:12
How did we solve
these equations?
803
00:50:12 --> 00:50:16
I expanded f in terms of
the eigenvectors, and
804
00:50:16 --> 00:50:19
then what was step two?
805
00:50:19 --> 00:50:22
For the differential
equation, I didn't divide
806
00:50:22 --> 00:50:26
by lambda_k, what did I do?
807
00:50:26 --> 00:50:29
I multiplied by e^(lambda_k*t).
808
00:50:29 --> 00:50:32
809
00:50:32 --> 00:50:34
You see, it's the same trick.
810
00:50:34 --> 00:50:37
Just get it in eigenspace.
811
00:50:37 --> 00:50:41
In eigenspace it's like one
tiny problem at a time.
812
00:50:41 --> 00:50:45
You're just following that
normal mode, and so here
813
00:50:45 --> 00:50:48
instead of dividing by
lambda, I multiplied by
814
00:50:48 --> 00:50:49
e^(lambda*t).
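[Editor's note: the analogy can be sketched for a general symmetric matrix A — the same three steps, with the division by lambda_k replaced by multiplication by e^(lambda_k t). Checked against the matrix exponential; `evolve` is my own name, and the minus sign in the test gives the decaying heat-equation case.]

```python
import numpy as np
from scipy.linalg import expm

def evolve(A, f, t):
    # Solve dU/dt = A U, U(0) = f, for symmetric A by eigenvector expansion.
    lam, Y = np.linalg.eigh(A)   # columns of Y are orthonormal eigenvectors
    c = Y.T @ f                  # step 1: expand f in the eigenvector basis
    c = c * np.exp(lam * t)      # step 2: each mode carries e^(lambda_k t)
    return Y @ c                 # step 3: recombine

n = 5
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = np.ones(n)
t = 0.3
assert np.allclose(evolve(-K, f, t), expm(-K * t) @ f)
```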
815
00:50:49 --> 00:50:54
And for powers, for K2D to
the 100th power, I would
816
00:50:54 --> 00:50:56
multiply by lambda
to the hundredth.
817
00:50:56 --> 00:50:59
Here, I have K2D to the
minus first power.
818
00:50:59 --> 00:51:03
So I'm multiplying by
lambdas to the minus one.
819
00:51:03 --> 00:51:10
OK, and the key point is that
these c's can be found fast.
820
00:51:10 --> 00:51:14
And this summing up
can be done fast.
821
00:51:14 --> 00:51:17
Now that's also
very exceptional.
822
00:51:17 --> 00:51:19
And that's where
the FFT comes in.
823
00:51:19 --> 00:51:25
The fast Fourier transform
is a fast way to get into
824
00:51:25 --> 00:51:28
eigenspace and to get back.
825
00:51:28 --> 00:51:31
When eigenspace happens
to be frequency space.
826
00:51:31 --> 00:51:35
You see that's the key.
827
00:51:35 --> 00:51:40
In other words these y's and
lambdas that I said we knew,
828
00:51:40 --> 00:51:48
these y's, the eigenvectors,
are pure sine vectors.
829
00:51:48 --> 00:51:53
So this expansion
is a sine transform.
830
00:51:53 --> 00:51:54
S-I-N-E.
831
00:51:54 --> 00:51:56
A pure sine transform.
832
00:51:56 --> 00:52:00
And that can be done fast,
and it can be inverted fast.
833
00:52:00 --> 00:52:06
So you see what makes this fast
Poisson solver work is that I have
834
00:52:06 --> 00:52:09
to know eigenvectors and
eigenvalues, and they have to
835
00:52:09 --> 00:52:13
be fantastic vectors
like sine vectors.
836
00:52:13 --> 00:52:18
OK, so I'll give you more, the
final picture, the details,
837
00:52:18 --> 00:52:21
Wednesday and then move
on to finite elements.
838
00:52:21 --> 00:52:21