1
00:00:00 --> 00:00:01
2
00:00:01 --> 00:00:02
The following content is
provided under a Creative
3
00:00:02 --> 00:00:03
Commons license.
4
00:00:03 --> 00:00:06
Your support will help MIT
OpenCourseWare continue to
5
00:00:06 --> 00:00:09
offer high-quality educational
resources for free.
6
00:00:09 --> 00:00:13
To make a donation or to view
additional materials from
7
00:00:13 --> 00:00:16
hundreds of MIT courses, visit
MIT OpenCourseWare
8
00:00:16 --> 00:00:19
at ocw.mit.edu.
9
00:00:19 --> 00:00:26
PROFESSOR STRANG: Shall we just
start on this review session?
10
00:00:26 --> 00:00:31
So, any questions on anything
from Chapter one, anything from
11
00:00:31 --> 00:00:36
those first seven lectures
is very, very welcome.
12
00:00:36 --> 00:00:41
So this morning we finished the
serious part of what we'll
13
00:00:41 --> 00:00:46
do in the chapter with
positive definite matrices.
14
00:00:46 --> 00:00:49
And we'll see a lot of
those fortunately.
15
00:00:49 --> 00:00:52
They're the best.
16
00:00:52 --> 00:00:58
So questions about, I hope you
look in the book at other
17
00:00:58 --> 00:01:04
problems in the problem sets as
well as the ones I suggest.
18
00:01:04 --> 00:01:09
And then I can, anyway.
19
00:01:09 --> 00:01:13
Ready for any questions.
20
00:01:13 --> 00:01:14
OK.
21
00:01:14 --> 00:01:19
Which problem is it?
22
00:01:19 --> 00:01:21
In section?
23
00:01:21 --> 00:01:25
Section 1.6, problem 27,
what have I done there?
24
00:01:25 --> 00:01:28
Oh, OK, that's good.
25
00:01:28 --> 00:01:30
So it's about positive
definite matrices.
26
00:01:30 --> 00:01:36
May I just put on the board
what the central question is?
27
00:01:36 --> 00:01:40
Just put these matrices up.
28
00:01:40 --> 00:01:43
We're given that H and K
are positive definite.
29
00:01:43 --> 00:01:47
And then the question is, what
about these block matrices.
30
00:01:47 --> 00:01:50
Do I call them M and N?
31
00:01:50 --> 00:01:54
One is the block matrix
that looks like that.
32
00:01:54 --> 00:01:59
And another one is the block
matrix that looks like this.
33
00:01:59 --> 00:02:04
So those are both symmetric.
34
00:02:04 --> 00:02:07
We're allowed to ask, are they
positive definite or negative
35
00:02:07 --> 00:02:10
definite because they passed
the first requirement.
36
00:02:10 --> 00:02:10
They're symmetric.
37
00:02:10 --> 00:02:12
We can discuss them.
38
00:02:12 --> 00:02:15
Because of course H and
K each were symmetric.
39
00:02:15 --> 00:02:19
The transpose of this would
bring K transpose down here,
40
00:02:19 --> 00:02:22
but that's K, so all good.
41
00:02:22 --> 00:02:32
So the question now.
42
00:02:32 --> 00:02:43
Of these guys to those
guys I guess, yes.
43
00:02:43 --> 00:02:44
Good question.
44
00:02:44 --> 00:02:47
So this guy has, let's
take eigenvalues first.
45
00:02:47 --> 00:02:50
So this guy has some
eigenvalues, say
46
00:02:50 --> 00:02:51
lambda_1 to lambda_n.
47
00:02:53 --> 00:02:55
And this guy, we'll suppose
they're the same size, though
48
00:02:55 --> 00:02:57
they don't have to be.
49
00:02:57 --> 00:02:59
Maybe I shouldn't, but I will.
50
00:02:59 --> 00:03:06
This has some other
eigenvalues, maybe e_1
51
00:03:06 --> 00:03:09
to e_n for eigenvalues.
52
00:03:09 --> 00:03:12
And then the question is, okay,
what about the eigenvalues
53
00:03:12 --> 00:03:15
of that combination?
54
00:03:15 --> 00:03:16
And what about this?
55
00:03:16 --> 00:03:20
So it's a good question,
I think for all of us to
56
00:03:20 --> 00:03:23
practice what just came
up in the lecture.
57
00:03:23 --> 00:03:29
The idea of block matrices.
58
00:03:29 --> 00:03:36
So looking here at eigenvalues
I could also look at pivots.
59
00:03:36 --> 00:03:39
Pivots would be interesting
to look at, too.
60
00:03:39 --> 00:03:40
Maybe I'll start with pivots.
61
00:03:40 --> 00:03:42
Can I?
62
00:03:42 --> 00:03:43
Did you think?
63
00:03:43 --> 00:03:44
What would be the pivots of M?
64
00:03:44 --> 00:03:51
If I start elimination on M
what will I see for pivots?
65
00:03:51 --> 00:03:57
Well, I start up in the usual
left-hand corner and work down.
66
00:03:57 --> 00:04:00
So what am I going
to see first?
67
00:04:00 --> 00:04:02
I'm going to see
the pivots of H.
68
00:04:02 --> 00:04:05
It won't even know; by the
time I'm halfway there,
69
00:04:05 --> 00:04:07
it won't even have seen K.
70
00:04:07 --> 00:04:10
And then, that'll be fine.
71
00:04:10 --> 00:04:16
And then this will be,
what's going to happen?
72
00:04:16 --> 00:04:18
This is all zeroes.
73
00:04:18 --> 00:04:20
So K never gets touched, right?
74
00:04:20 --> 00:04:24
So when I get down to
the second half I
75
00:04:24 --> 00:04:26
see all zeroes here.
76
00:04:26 --> 00:04:28
K is still going to be
sitting right there.
77
00:04:28 --> 00:04:29
Nothing happened.
78
00:04:29 --> 00:04:32
Because when I did these
eliminations nothing
79
00:04:32 --> 00:04:34
changed with K.
80
00:04:34 --> 00:04:38
So the rest of the pivots
will be the pivots of K.
81
00:04:38 --> 00:04:40
Good.
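[The pivot argument above can be checked numerically. This is a minimal sketch with small hypothetical positive definite blocks H and K; `pivots` is a helper written here, not a library routine.]

```python
import numpy as np

def pivots(A):
    """Pivots from plain Gaussian elimination (no row exchanges)."""
    U = A.astype(float).copy()
    n = len(U)
    for j in range(n):
        for i in range(j + 1, n):
            U[i] -= (U[i, j] / U[j, j]) * U[j]
    return np.diag(U)

# Hypothetical positive definite blocks.
H = np.array([[2.0, -1.0], [-1.0, 2.0]])
K = np.array([[3.0, 1.0], [1.0, 3.0]])
Z = np.zeros((2, 2))
M = np.block([[H, Z], [Z, K]])   # block diagonal M = [H 0; 0 K]

# The pivots of M are the pivots of H followed by the pivots of K,
# because elimination never touches the zero blocks.
print(pivots(M))
```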
82
00:04:40 --> 00:04:42
Now, we might hope for the
same thing with eigenvalues
83
00:04:42 --> 00:04:46
and probably that's
going to happen.
84
00:04:46 --> 00:04:49
This is like a diagonal matrix.
85
00:04:49 --> 00:04:51
And actually, what
words would I use?
86
00:04:51 --> 00:04:53
Block diagonal.
87
00:04:53 --> 00:04:55
I'd call that matrix
block diagonal.
88
00:04:55 --> 00:04:58
And those are very
nice matrices.
89
00:04:58 --> 00:05:02
That tells us that the big
matrix, for all practical
90
00:05:02 --> 00:05:06
purposes, is breaking up
into these smaller blocks.
91
00:05:06 --> 00:05:12
Actually MATLAB will search for
a way to reorder the rows and
92
00:05:12 --> 00:05:15
columns to get that in
case it's possible.
93
00:05:15 --> 00:05:21
So here it's in front of us.
94
00:05:21 --> 00:05:24
Let's see if we can figure out.
95
00:05:24 --> 00:05:31
That lambda_1 I believe is
also an eigenvalue of M.
96
00:05:31 --> 00:05:33
So it was an eigenvalue of H.
97
00:05:33 --> 00:05:36
So that this, the fact that
it has that eigenvalue
98
00:05:36 --> 00:05:38
lambda_1 means what?
99
00:05:38 --> 00:05:48
That H times some
vector y is lambda_1*y, right?
100
00:05:48 --> 00:05:51
If that's an eigenvalue
it's got an eigenvector
101
00:05:51 --> 00:05:53
and let's call it y.
102
00:05:53 --> 00:05:56
Now this is a good question.
103
00:05:56 --> 00:06:01
I believe this block matrix
also has eigenvalue lambda_1
104
00:06:01 --> 00:06:03
and what's its eigenvector?
105
00:06:03 --> 00:06:09
What could I multiply
M by to get lambda_1
106
00:06:09 --> 00:06:13
times the same thing?
107
00:06:13 --> 00:06:14
Can you see what?
108
00:06:14 --> 00:06:17
Of course I'm thinking
that y is going to help
109
00:06:17 --> 00:06:20
but it's grown now.
110
00:06:20 --> 00:06:23
So what would be the
eigenvector here?
111
00:06:23 --> 00:06:27
When I multiply by M it'll just
come out right with the same
112
00:06:27 --> 00:06:33
eigenvalue? y_1, or
y rather, and then?
113
00:06:33 --> 00:06:36
And then zero, good. [y; 0].
114
00:06:36 --> 00:06:41
Because if I multiply, can
I put in what M really is?
115
00:06:41 --> 00:06:43
The H and K.
116
00:06:43 --> 00:06:45
H there, K there.
117
00:06:45 --> 00:06:48
When I do that multiplication
I get lambda_1*y.
118
00:06:49 --> 00:06:52
When I do this multiplication,
see I've just, that's a zero
119
00:06:52 --> 00:06:55
block, zero, so I got a zero.
120
00:06:55 --> 00:06:56
Perfect.
121
00:06:56 --> 00:07:05
So the eigenvectors of H just
sit with a zero in the K part
122
00:07:05 --> 00:07:09
and produce an eigenvector of
the block matrix with
123
00:07:09 --> 00:07:10
the same lambda_1.
124
00:07:11 --> 00:07:14
So you can see then, we
get the whole picture.
125
00:07:14 --> 00:07:18
The eigenvalues are just
sitting there and the
126
00:07:18 --> 00:07:20
eigenvectors are there.
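[The eigenvalue picture just described can be verified directly: the eigenvalues of the block diagonal M are those of H together with those of K, and an eigenvector y of H padded with zeros is an eigenvector of M. A sketch, with hypothetical small blocks H and K.]

```python
import numpy as np

# Hypothetical symmetric positive definite blocks H and K.
H = np.array([[2.0, -1.0], [-1.0, 2.0]])
K = np.array([[3.0, 1.0], [1.0, 3.0]])
Z = np.zeros((2, 2))
M = np.block([[H, Z], [Z, K]])   # block diagonal M = [H 0; 0 K]

# Eigenvalues of M are those of H together with those of K.
lam_H = np.linalg.eigvalsh(H)
lam_K = np.linalg.eigvalsh(K)
lam_M = np.linalg.eigvalsh(M)
assert np.allclose(np.sort(np.concatenate([lam_H, lam_K])), lam_M)

# An eigenvector y of H, padded with zeros, is an eigenvector of M
# with the same eigenvalue: M [y; 0] = [H y; 0] = lambda_1 [y; 0].
lam1 = lam_H[0]
y = np.linalg.eigh(H)[1][:, 0]
v = np.concatenate([y, np.zeros(2)])
assert np.allclose(M @ v, lam1 * v)
```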
127
00:07:20 --> 00:07:24
Now maybe you got all that and
wanted-- well I haven't said
128
00:07:24 --> 00:07:27
anything about N. Sorry.
129
00:07:27 --> 00:07:29
Everybody thinks more about N.
130
00:07:29 --> 00:07:31
So what's the thing with N?
131
00:07:31 --> 00:07:34
What would you say about N?
132
00:07:34 --> 00:07:37
If you look at that matrix,
suppose I don't even tell you
133
00:07:37 --> 00:07:40
it's positive definite at
first, would you say that
134
00:07:40 --> 00:07:45
looks like an invertible
or singular matrix?
135
00:07:45 --> 00:07:48
Everybody's going
to say singular.
136
00:07:48 --> 00:07:55
And why would you say
that's singular?
137
00:07:55 --> 00:08:04
Well, the determinant of a
block matrix, this morning I
138
00:08:04 --> 00:08:07
said do whatever you like
with block matrices.
139
00:08:07 --> 00:08:13
But I have to admit that if I
had a bunch of general blocks,
140
00:08:13 --> 00:08:16
if I had to take the
determinant of that, and of
141
00:08:16 --> 00:08:19
course everybody's remembering
Professor Strang doesn't like
142
00:08:19 --> 00:08:24
determinants, if I had to take
the determinant, I'd have
143
00:08:24 --> 00:08:27
to do the whole thing.
144
00:08:27 --> 00:08:31
The separate determinants would
not tell me the story, usually.
145
00:08:31 --> 00:08:33
So determinants
are a bit tricky.
146
00:08:33 --> 00:08:37
But up here the determinant
will come out zero.
147
00:08:37 --> 00:08:44
I guess what I would hope
your internal test for a
148
00:08:44 --> 00:08:50
singular matrix is, are
the columns independent?
149
00:08:50 --> 00:08:52
And then the matrix
is invertible.
150
00:08:52 --> 00:08:53
Or are they dependent?
151
00:08:53 --> 00:09:00
Do you have some columns that
are in the same direction as
152
00:09:00 --> 00:09:02
other columns, same direction
as combinations of
153
00:09:02 --> 00:09:03
other columns?
154
00:09:03 --> 00:09:08
If you look at the columns
of that, say column one,
155
00:09:08 --> 00:09:14
so column one is the first
column of K repeated.
156
00:09:14 --> 00:09:17
What do you think about the
columns of that matrix,
157
00:09:17 --> 00:09:18
that block matrix N?
158
00:09:18 --> 00:09:24
Do you see that same
column showing up again?
159
00:09:24 --> 00:09:25
Yeah.
160
00:09:25 --> 00:09:29
That very same column, which is
the first column of K, again
161
00:09:29 --> 00:09:32
twice, is going to show up
right there, first
162
00:09:32 --> 00:09:33
column of K again.
163
00:09:33 --> 00:09:39
So this matrix has two
identical columns.
164
00:09:39 --> 00:09:41
No way it could be invertible.
165
00:09:41 --> 00:09:46
And in fact, you can tell me
what vector, I'm always saying
166
00:09:46 --> 00:09:48
are the columns independent?
167
00:09:48 --> 00:09:50
Here, no, they're dependent.
168
00:09:50 --> 00:09:56
And then you can tell me an x.
169
00:09:56 --> 00:09:59
So this is my block matrix N.
170
00:09:59 --> 00:10:06
I want to know an x so
that the result is zero.
171
00:10:06 --> 00:10:13
That's really my
same indication.
172
00:10:13 --> 00:10:15
We found two identical columns.
173
00:10:15 --> 00:10:19
What would be the x?
174
00:10:19 --> 00:10:23
Well, you have to tell me more
than one, minus one because
175
00:10:23 --> 00:10:30
I've got a big x there.
176
00:10:30 --> 00:10:33
Yeah I've gotta make it big
enough, but essentially it's
177
00:10:33 --> 00:10:34
the one, minus one, thanks.
178
00:10:34 --> 00:10:38
And enough zeroes in there
and enough zeroes in there.
179
00:10:38 --> 00:10:46
So the fact that that vector
gets taken to zero is the same
180
00:10:46 --> 00:10:49
thing as saying that one of
this column minus one of
181
00:10:49 --> 00:10:51
this column gives zero.
182
00:10:51 --> 00:10:53
In other words, the
columns are the same.
183
00:10:53 --> 00:10:58
And of course, by doing this
we're seeing the one and minus
184
00:10:58 --> 00:11:01
one could have gone into
position two there,
185
00:11:01 --> 00:11:02
position three.
186
00:11:02 --> 00:11:06
So we've got a whole
bunch of vectors.
187
00:11:06 --> 00:11:13
This matrix N, this [K, K; K,
K] has got a whole lot of
188
00:11:13 --> 00:11:15
vectors that it takes to zero.
189
00:11:15 --> 00:11:18
What I would say is, it has
a large null space.
190
00:11:18 --> 00:11:22
A large space of vectors
that it takes to zero.
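[A quick numerical check of the repeated-columns argument, using a hypothetical 2x2 positive definite K: the vector with 1 and -1 in matching positions is taken to zero, and N is singular with half its rank gone.]

```python
import numpy as np

K = np.array([[3.0, 1.0], [1.0, 3.0]])   # hypothetical positive definite K
N = np.block([[K, K], [K, K]])           # N = [K K; K K]

# Columns 0 and 2 of N are identical (both are the first column of K
# stacked twice), so x = (1, 0, -1, 0) is in the null space.
x = np.array([1.0, 0.0, -1.0, 0.0])
print(N @ x)                             # the zero vector

# The null space is n-dimensional: rank(N) = rank(K) = 2, not 4.
rank = np.linalg.matrix_rank(N)
print(rank)
```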
191
00:11:22 --> 00:11:25
So that's a really
useful exercise.
192
00:11:25 --> 00:11:26
I'm delighted you asked it.
193
00:11:26 --> 00:11:34
Now I'm ready for more.
194
00:11:34 --> 00:11:34
Could do.
195
00:11:34 --> 00:11:37
Exactly, row reduction.
196
00:11:37 --> 00:11:39
I should look to see what
would happen in elimination.
197
00:11:39 --> 00:11:43
Well, elimination would go
swimmingly along for the
198
00:11:43 --> 00:11:46
first part because it's
only looking here.
199
00:11:46 --> 00:11:52
But then what would I have
after the first half
200
00:11:52 --> 00:11:56
of elimination?
201
00:11:56 --> 00:12:02
Well I'd have I suppose
whatever that K changed
202
00:12:02 --> 00:12:04
to under elimination.
203
00:12:04 --> 00:12:04
What should we call it?
204
00:12:04 --> 00:12:09
U or something?
205
00:12:09 --> 00:12:13
When I did these row steps
that matrix turned into this
206
00:12:13 --> 00:12:15
upper triangular matrix.
207
00:12:15 --> 00:12:18
And maybe you can tell me what
will have happened at the
208
00:12:18 --> 00:12:20
same time to the rest?
209
00:12:20 --> 00:12:25
What will I see sitting here if
I just do ordinary elimination
210
00:12:25 --> 00:12:29
and I'm just looking there and
using the pivots and
211
00:12:29 --> 00:12:31
so on, I'll see?
212
00:12:31 --> 00:12:35
It'll be U because whatever
I do on the left side I'm
213
00:12:35 --> 00:12:37
doing to the whole row.
214
00:12:37 --> 00:12:40
And now, the main point
is, what will I see?
215
00:12:40 --> 00:12:42
Now elimination, keep
going, keep going.
216
00:12:42 --> 00:12:47
Do elimination to clear
out this column, this
217
00:12:47 --> 00:12:49
whole bunch, right?
218
00:12:49 --> 00:12:52
Elimination.
219
00:12:52 --> 00:12:54
And now what am I going
to see in that corner?
220
00:12:54 --> 00:12:57
All zeroes, right.
221
00:12:57 --> 00:13:09
So that's telling me that the
matrix has just got half of the
222
00:13:09 --> 00:13:12
eigenvalues positive, half
of the pivots are positive.
223
00:13:12 --> 00:13:16
The second half all zeroes.
224
00:13:16 --> 00:13:19
So I guess, here I've
found an eigenvector
225
00:13:19 --> 00:13:22
with what eigenvalue?
226
00:13:22 --> 00:13:24
That's looking like an
eigenvector to me if we're
227
00:13:24 --> 00:13:26
thinking eigenvectors.
228
00:13:26 --> 00:13:30
And what's the eigenvalue
that goes with it?
229
00:13:30 --> 00:13:30
Zero.
230
00:13:30 --> 00:13:32
Because Nx is 0x.
231
00:13:34 --> 00:13:37
You can either think of it
as Nx=0 if you're thinking
232
00:13:37 --> 00:13:40
about systems of equations.
233
00:13:40 --> 00:13:46
Or Nx=0x if you're thinking
that that guy is an eigenvector
234
00:13:46 --> 00:13:49
with eigenvalue zero.
235
00:13:49 --> 00:13:51
So I'm pretty happy.
236
00:13:51 --> 00:13:55
I mean many of you will
have spotted this.
237
00:13:55 --> 00:13:56
Probably perhaps all.
238
00:13:56 --> 00:14:03
But I'm happy that's an example
that just shows how you have to
239
00:14:03 --> 00:14:06
think big with block
matrices I guess.
240
00:14:06 --> 00:14:08
Good.
241
00:14:08 --> 00:14:11
OK on that?
242
00:14:11 --> 00:14:27
What else, thanks.
243
00:14:27 --> 00:14:27
That's true.
244
00:14:27 --> 00:14:31
And really, all I've done so
far is those four examples.
245
00:14:31 --> 00:14:35
I think that language of
fixed-fixed and fixed-free
246
00:14:35 --> 00:14:40
really comes, I mean I used it
early about those four
247
00:14:40 --> 00:14:44
matrices, but it's really going
to show up at the next lecture,
248
00:14:44 --> 00:14:52
Friday, when I have a line of
springs and the matrices
249
00:14:52 --> 00:14:53
that come out of that.
250
00:14:53 --> 00:15:00
So Friday we'll finally move
beyond those first four.
251
00:15:00 --> 00:15:05
A fifth matrix will appear
in this course finally.
252
00:15:05 --> 00:15:08
Of course, it's going to be
related to the first ones,
253
00:15:08 --> 00:15:14
naturally but we'll move to,
we'll see something new and
254
00:15:14 --> 00:15:18
then we'll see the fixed-free
idea again for those.
255
00:15:18 --> 00:15:21
So if that can wait until
Friday, you'll see
256
00:15:21 --> 00:15:24
some different ones.
257
00:15:24 --> 00:15:26
Good.
258
00:15:26 --> 00:15:28
Questions, thoughts.
259
00:15:28 --> 00:15:31
You can ask about anything.
260
00:15:31 --> 00:15:35
Maybe I can ask.
261
00:15:35 --> 00:15:40
Any thoughts about the
pace of the course?
262
00:15:40 --> 00:15:49
This is sort of a heavy dose
of linear algebra, right?
263
00:15:49 --> 00:15:53
Of course, the answer
maybe depends on how much
264
00:15:53 --> 00:15:56
you had seen before.
265
00:15:56 --> 00:15:59
So those who haven't seen very
much linear algebra at all
266
00:15:59 --> 00:16:04
really got quite a
bit quickly here.
267
00:16:04 --> 00:16:10
Because many courses on linear
algebra never reach this key
268
00:16:10 --> 00:16:16
idea of positive definiteness
that ties it all together.
269
00:16:16 --> 00:16:19
So you've seen quite
a bit, really.
270
00:16:19 --> 00:16:23
Of course, we've concentrated
on symmetric matrices and
271
00:16:23 --> 00:16:29
there's a whole garden or
forest or zoo of matrices
272
00:16:29 --> 00:16:32
of different types.
273
00:16:32 --> 00:16:34
So what matrices have we seen?
274
00:16:34 --> 00:16:40
Symmetric matrices and then
their eigenvectors were
275
00:16:40 --> 00:16:44
orthogonal and we could
say orthonormal.
276
00:16:44 --> 00:16:51
So that gave us, I don't know
if you remember this part,
277
00:16:51 --> 00:16:54
which when we wrote it
down I said, big deal.
278
00:16:54 --> 00:16:56
That's very important.
279
00:16:56 --> 00:16:59
That's this principal
axis theorem.
280
00:16:59 --> 00:17:04
These Q's, what kind
of a matrix is Q?
281
00:17:04 --> 00:17:06
It's the eigenvector matrix.
282
00:17:06 --> 00:17:12
And for symmetric matrix, so
this is the eigenvector matrix.
283
00:17:12 --> 00:17:14
And what do we know about it?
284
00:17:14 --> 00:17:22
In the special case
of symmetric K?
285
00:17:22 --> 00:17:27
What do we know especially
about the eigenvectors then?
286
00:17:27 --> 00:17:28
They're orthogonal.
287
00:17:28 --> 00:17:29
We can make them orthonormal.
288
00:17:29 --> 00:17:34
So this will be an
orthogonal matrix.
289
00:17:34 --> 00:17:39
And that was a matrix
with Q transpose was
290
00:17:39 --> 00:17:41
the same as Q inverse.
291
00:17:41 --> 00:17:44
Normally we would see the
inverse there, but for these
292
00:17:44 --> 00:17:47
we can put the transpose.
293
00:17:47 --> 00:17:52
Here's one type of matrix,
symmetric, very important.
294
00:17:52 --> 00:17:56
Here's another type of
matrix, orthogonal matrices.
295
00:17:56 --> 00:17:58
And of course, many,
many other varieties.
296
00:17:58 --> 00:18:01
Well here we have a very
nice matrix, so that
297
00:18:01 --> 00:18:03
matrix is diagonal.
298
00:18:03 --> 00:18:06
Right, that's just the
eigenvalues, so that's
299
00:18:06 --> 00:18:08
a diagonal matrix.
300
00:18:08 --> 00:18:12
And what do we know, if K is
positive definite, let's just,
301
00:18:12 --> 00:18:14
this was for any symmetric one.
302
00:18:14 --> 00:18:19
So what's special if K
is positive definite?
303
00:18:19 --> 00:18:21
Somehow the positive
definiteness should
304
00:18:21 --> 00:18:22
show up here.
305
00:18:22 --> 00:18:26
And where does it show?
306
00:18:26 --> 00:18:29
Positive eigenvalues, exactly.
307
00:18:29 --> 00:18:33
The Q could be any,
any Q would be fine.
308
00:18:33 --> 00:18:37
But we would see
positive eigenvalues.
309
00:18:37 --> 00:18:41
Oh, here's a little point
about eigenvalues.
310
00:18:41 --> 00:18:44
Suppose I have my matrix K.
311
00:18:44 --> 00:18:47
And it's got some eigenvalues.
312
00:18:47 --> 00:18:57
Now let me add four times
the identity to it.
313
00:18:57 --> 00:18:59
What are the eigenvalues now?
314
00:18:59 --> 00:19:02
What are the eigenvectors now?
315
00:19:02 --> 00:19:08
What's changed and how
and what hasn't changed?
316
00:19:08 --> 00:19:12
Because that's a pretty easy,
the identity matrix is always
317
00:19:12 --> 00:19:15
the easy one for us to
know what's happening.
318
00:19:15 --> 00:19:20
So what is happening to
the eigenvalues now?
319
00:19:20 --> 00:19:23
If K had these eigenvalues
lambda, what are the
320
00:19:23 --> 00:19:25
eigenvalues of K+4I?
321
00:19:25 --> 00:19:31
322
00:19:31 --> 00:19:32
You add?
323
00:19:32 --> 00:19:32
You add four, yeah.
324
00:19:32 --> 00:19:36
The eigenvalues of this are
the eigenvalues of K+4.
325
00:19:38 --> 00:19:42
That is just like shifting the
matrix, you could think of it
326
00:19:42 --> 00:19:48
as adding four along the
diagonal, which adds four.
327
00:19:48 --> 00:19:54
And the eigenvectors would
be exactly the same ones.
328
00:19:54 --> 00:19:56
I would have Kx would
agree with lambda*x.
329
00:19:57 --> 00:19:59
And 4Ix would agree with 4x.
330
00:20:00 --> 00:20:05
So that proves it.
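[A sketch of the shift argument, with a hypothetical symmetric K: adding 4I adds 4 to every eigenvalue and leaves the eigenvectors alone.]

```python
import numpy as np

K = np.array([[2.0, -1.0], [-1.0, 2.0]])   # hypothetical symmetric K
lam, Q = np.linalg.eigh(K)

# Shifting by 4I adds 4 to every eigenvalue.
lam_shift = np.linalg.eigvalsh(K + 4 * np.eye(2))
assert np.allclose(lam_shift, lam + 4)

# The eigenvectors are unchanged: (K + 4I) x = Kx + 4x = (lambda + 4) x.
x = Q[:, 0]
assert np.allclose((K + 4 * np.eye(2)) @ x, (lam[0] + 4) * x)
```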
331
00:20:05 --> 00:20:11
Good to see what you can do,
the limited number of things
332
00:20:11 --> 00:20:15
that you're allowed to do
without changing the
333
00:20:15 --> 00:20:17
eigenvectors, and therefore
you can spot the
334
00:20:17 --> 00:20:19
eigenvalues right away.
335
00:20:19 --> 00:20:22
The limited things: you can
invert, you can shift like
336
00:20:22 --> 00:20:27
this, you could square it,
cube it, take powers,
337
00:20:27 --> 00:20:34
things like that.
338
00:20:34 --> 00:20:37
I'm going to look to you
now for giving me a lead
339
00:20:37 --> 00:20:42
on something that is
interesting or not.
340
00:20:42 --> 00:20:48
Yes, thanks.
341
00:20:48 --> 00:20:52
Go ahead.
342
00:20:52 --> 00:21:02
Oh, I see okay, yes.
343
00:21:02 --> 00:21:03
I see.
344
00:21:03 --> 00:21:05
Alright.
345
00:21:05 --> 00:21:08
So that's page 64 of the book.
346
00:21:08 --> 00:21:18
Well, so that's a problem
that physicists love.
347
00:21:18 --> 00:21:21
I don't know how much I
can say about it here,
348
00:21:21 --> 00:21:23
to tell the truth.
349
00:21:23 --> 00:21:26
Just to mention.
350
00:21:26 --> 00:21:28
Do they use a minus sign?
351
00:21:28 --> 00:21:30
Probably they do.
352
00:21:30 --> 00:21:39
So their equation is minus the
second derivative of u plus
353
00:21:39 --> 00:21:45
(x squared)*u and they
are interested in the
354
00:21:45 --> 00:21:54
eigenvalues equal lambda*u.
355
00:21:54 --> 00:21:58
The case that we've done in
class was without this (x
356
00:21:58 --> 00:22:01
squared)*u term, right?
357
00:22:01 --> 00:22:07
The absolutely most important
case is the second derivative
358
00:22:07 --> 00:22:08
of u equal lambda*u.
359
00:22:09 --> 00:22:13
The eigenvalues were, or
what were the eigenvectors
360
00:22:13 --> 00:22:15
in that case?
361
00:22:15 --> 00:22:21
What were the eigenvectors of
the second derivative before
362
00:22:21 --> 00:22:27
there was any (x squared)*u
any potential, showing up?
363
00:22:27 --> 00:22:30
They were just sines
and cosines, right?
364
00:22:30 --> 00:22:33
Sines and cosines have the
property that if you take two
365
00:22:33 --> 00:22:40
derivatives you get them back
with some factor lambda.
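[The discrete analogue of that fact can be checked directly: the fixed-fixed second-difference matrix, one of the course's special matrices, has sampled sine vectors as eigenvectors. A minimal sketch, with the standard eigenvalue formula 2 - 2 cos(k pi/(n+1)) assumed.]

```python
import numpy as np

# Fixed-fixed second-difference matrix K_n (tridiagonal -1, 2, -1).
n = 5
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# The k-th eigenvector is the sampled sine sin(j k pi/(n+1)):
# taking second differences gives the same sine back, times lambda.
k = 1
j = np.arange(1, n + 1)
u = np.sin(j * k * np.pi / (n + 1))
lam = 2 - 2 * np.cos(k * np.pi / (n + 1))
assert np.allclose(K @ u, lam * u)
```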
366
00:22:40 --> 00:22:45
Now let me just look at
that problem without
367
00:22:45 --> 00:22:49
saying much about it.
368
00:22:49 --> 00:22:53
First of all, the first thing
I want to know is have I
369
00:22:53 --> 00:22:55
got a linear problem here?
370
00:22:55 --> 00:22:57
Have I got a linear equation?
371
00:22:57 --> 00:23:00
Because that's where I
talk about eigenvalues.
372
00:23:00 --> 00:23:05
So in the matrix case,
I'd say I have a matrix.
373
00:23:05 --> 00:23:09
K times an eigenvector.
374
00:23:09 --> 00:23:14
That matrix represents
something linear.
375
00:23:14 --> 00:23:18
It's just, all the rules
of addition work here.
376
00:23:18 --> 00:23:20
Here it is linear.
377
00:23:20 --> 00:23:27
It is linear.
378
00:23:27 --> 00:23:34
What I'm trying to say is, I
just call that a variable
379
00:23:34 --> 00:23:36
coefficient and that's what
we're going to see
380
00:23:36 --> 00:23:38
in Chapter two.
381
00:23:38 --> 00:23:44
The material or something could
lead to some dependence on x.
382
00:23:44 --> 00:23:49
But u is still there,
just linearly.
383
00:23:49 --> 00:23:55
In other words, this is a
perfectly ok linear operator
384
00:23:55 --> 00:24:00
and am I imagining that
it's positive definite?
385
00:24:00 --> 00:24:00
Let's see.
386
00:24:00 --> 00:24:08
This part with the minus sign
was positive definite, right?
387
00:24:08 --> 00:24:12
Well, at least semi-definite.
388
00:24:12 --> 00:24:15
So let me just remember
the most important case.
389
00:24:15 --> 00:24:20
If I look at this equation,
d second u/dx squared
390
00:24:20 --> 00:24:21
equals lambda*u.
391
00:24:22 --> 00:24:28
So that's the eigenvalue,
eigenfunction problem
392
00:24:28 --> 00:24:30
for our good friend.
393
00:24:30 --> 00:24:35
What do I say about
the eigenvalues now?
394
00:24:35 --> 00:24:40
What can you tell me about
the eigenvalues of that?
395
00:24:40 --> 00:24:41
Mostly positive.
396
00:24:41 --> 00:24:44
Because they were sort
of omega squares.
397
00:24:44 --> 00:24:47
But I mean zero could be
an eigenvalue, right?
398
00:24:47 --> 00:24:55
What would the eigenfunction
be for lambda equal zero?
399
00:24:55 --> 00:24:59
If I wanted to get zero here,
if I wanted a zero on the
400
00:24:59 --> 00:25:06
right side, what functions
u could give me zero?
401
00:25:06 --> 00:25:08
Constant function.
402
00:25:08 --> 00:25:12
Yeah, the constant function
is certainly there
403
00:25:12 --> 00:25:14
as a possibility.
404
00:25:14 --> 00:25:18
But anyway, I would
say this is positive
405
00:25:18 --> 00:25:20
semi-definite at least.
406
00:25:20 --> 00:25:23
And this part?
407
00:25:23 --> 00:25:26
How do I think about
that as a big matrix?
408
00:25:26 --> 00:25:29
I think of it sort of like a
big matrix with x squared
409
00:25:29 --> 00:25:36
running down the diagonal.
410
00:25:36 --> 00:25:38
With a matrix, you could say
walking down the diagonal
411
00:25:38 --> 00:25:41
because it's n steps.
412
00:25:41 --> 00:25:45
For differential equations,
maybe running is
413
00:25:45 --> 00:25:46
the right word.
414
00:25:46 --> 00:25:54
Because it doesn't jump, it's
just bzzz all the way from
415
00:25:54 --> 00:25:56
zero squared to whatever.
416
00:25:56 --> 00:26:04
Anyway, that would correspond
to a diagonal matrix, but
417
00:26:04 --> 00:26:07
not constant diagonal.
418
00:26:07 --> 00:26:09
Diagonal, but not
constant diagonal.
419
00:26:09 --> 00:26:13
Because this x squared
number is changing.
420
00:26:13 --> 00:26:20
It's like a spring, it's like a
bunch of springs in which the
421
00:26:20 --> 00:26:24
first spring maybe has a
spring constant of one.
422
00:26:24 --> 00:26:27
And then we have a tighter
spring and then a very tight
423
00:26:27 --> 00:26:31
spring and so on, more and
more, higher and higher
424
00:26:31 --> 00:26:32
constants there.
425
00:26:32 --> 00:26:39
Well, I'm just speaking
very roughly here.
426
00:26:39 --> 00:26:44
Because variable coefficient,
variable material properties,
427
00:26:44 --> 00:26:48
springs of different
elasticities, we're
428
00:26:48 --> 00:26:51
ready to move to that.
429
00:26:51 --> 00:26:57
Our problems up to now, the
springs were all the same.
430
00:26:57 --> 00:27:01
The bar, if it was a
bar, was uniform.
431
00:27:01 --> 00:27:05
And now this would
be a step forward.
432
00:27:05 --> 00:27:10
But now, of course, this
specific problem just
433
00:27:10 --> 00:27:15
happens to have a solution
that physicists love.
434
00:27:15 --> 00:27:19
It has a meaning to
physicists, not to me.
435
00:27:19 --> 00:27:23
And the eigenfunctions have
a meaning and they're
436
00:27:23 --> 00:27:26
famous functions.
437
00:27:26 --> 00:27:28
It's just glorious.
438
00:27:28 --> 00:27:31
So you could say that's the
special problem, the way we had
439
00:27:31 --> 00:27:36
four special matrices in
18.085, that would be a similar
440
00:27:36 --> 00:27:44
special problem in
quantum mechanics.
441
00:27:44 --> 00:27:48
Let's turn to something
entirely different.
442
00:27:48 --> 00:27:51
Questions about any topic.
443
00:27:51 --> 00:27:54
Or I can ask some and you
can take this, maybe
444
00:27:54 --> 00:27:56
that's one way to review.
445
00:27:56 --> 00:27:57
Go ahead.
446
00:27:57 --> 00:27:59
Thanks.
447
00:27:59 --> 00:28:03
Number 20 of 1.6.
448
00:28:03 --> 00:28:06
1.6 is a section, oh, no.
449
00:28:06 --> 00:28:11
That's positive definite
notes so I'm okay with that.
450
00:28:11 --> 00:28:15
I see that I did ask you a
question on the homework from
451
00:28:15 --> 00:28:23
1.7 which I may not get
to cover in lecture, but
452
00:28:23 --> 00:28:24
give it a shot anyway.
453
00:28:24 --> 00:28:28
So what's 20?
454
00:28:28 --> 00:28:32
Oh, OK, that's good.
455
00:28:32 --> 00:28:36
Without multiplying
out the matrix.
456
00:28:36 --> 00:28:39
So it's this
Q*lambda*Q transpose.
457
00:28:41 --> 00:28:44
So I'm telling you in that
question what Q, lambda,
458
00:28:44 --> 00:28:48
and Q transpose are.
459
00:28:48 --> 00:28:53
The Q is this [cosine,
minus sine; sine, cosine].
460
00:28:53 --> 00:28:57
The lambda is two and five,
I think, in that question.
461
00:28:57 --> 00:29:01
And the Q transpose of
course is [cosine, sine;
462
00:29:01 --> 00:29:05
minus sine, cosine].
463
00:29:05 --> 00:29:12
And if I've told you that those
are the numbers then you
464
00:29:12 --> 00:29:15
could multiply those
together to get K.
465
00:29:15 --> 00:29:21
But you can tell me,
this is like K exposed.
466
00:29:21 --> 00:29:27
The matrix is like, we're told
more than we would know.
467
00:29:27 --> 00:29:30
If I multiply it all together,
I wouldn't see that the
468
00:29:30 --> 00:29:33
eigenvectors are these guys,
that the eigenvalues
469
00:29:33 --> 00:29:34
are these guys.
470
00:29:34 --> 00:29:42
So what, without looking to
see, what are the eigenvalues
471
00:29:42 --> 00:29:46
of this matrix K if we
multiplied it all together?
472
00:29:46 --> 00:29:49
What would the
eigenvalues actually be?
473
00:29:49 --> 00:29:53
Two and five, right, because
we built it up that way.
474
00:29:53 --> 00:29:56
What would the determinant be?
475
00:29:56 --> 00:29:59
Now what do we know
about determinants?
476
00:29:59 --> 00:30:02
It would be ten is
the right answer.
477
00:30:02 --> 00:30:06
What's the right
way to see that?
478
00:30:06 --> 00:30:09
Well, the determinant is
always the product of the
479
00:30:09 --> 00:30:11
eigenvalues, isn't it?
480
00:30:11 --> 00:30:15
These guys have
determinant ten anyway.
481
00:30:15 --> 00:30:18
And if I hadn't normalized,
so this had some bigger
482
00:30:18 --> 00:30:23
determinant, this would have
some smaller determinant.
483
00:30:23 --> 00:30:25
They're inverses, so their
determinants will give me
484
00:30:25 --> 00:30:28
the one, and there's the ten.
485
00:30:28 --> 00:30:33
What else could I ask about
or did I ask about for that?
486
00:30:33 --> 00:30:38
The eigenvectors, ok.
487
00:30:38 --> 00:30:41
The eigenvectors of the
matrix, what are they?
488
00:30:41 --> 00:30:44
They're these columns that are
sitting here for us, they're
489
00:30:44 --> 00:30:46
those two columns, right.
490
00:30:46 --> 00:30:49
And would you like to just
check that if the, I believe
491
00:30:49 --> 00:30:51
that column is an eigenvector.
492
00:30:51 --> 00:30:56
And which, do you think two
or five is its eigenvalue?
493
00:30:56 --> 00:30:59
That goes with this
first column.
494
00:30:59 --> 00:31:02
Everybody's going to say
two and that's right.
495
00:31:02 --> 00:31:08
And do you want me to just take
that matrix times this proposed
496
00:31:08 --> 00:31:12
eigenvector and just see
if it's going to work?
497
00:31:12 --> 00:31:19
Suppose I just do it all and
just see, sure enough this
498
00:31:19 --> 00:31:20
will be an eigenvector.
499
00:31:20 --> 00:31:23
So what do I have
at this point?
500
00:31:23 --> 00:31:25
Can you do this
times this first?
501
00:31:25 --> 00:31:31
What do I get? c squared
plus s squared is one.
502
00:31:31 --> 00:31:36
And -cs plus cs is zero.
503
00:31:36 --> 00:31:38
So at that point I have (1, 0).
504
00:31:38 --> 00:31:40
Now comes this matrix.
505
00:31:40 --> 00:31:46
So what do I have after that
matrix speaks up? (2, 0).
506
00:31:46 --> 00:31:51
And now I take two times
this and what do I get?
507
00:31:51 --> 00:31:55
Or that matrix
times the (2, 0).
508
00:31:55 --> 00:31:59
How do you multiply a matrix
times that vector?
509
00:31:59 --> 00:32:00
Here's the good way
to think of it.
510
00:32:00 --> 00:32:04
It's two times the
first column.
511
00:32:04 --> 00:32:06
And zero times the second.
512
00:32:06 --> 00:32:10
So the net result of the
whole deal was two times
513
00:32:10 --> 00:32:12
that first column.
514
00:32:12 --> 00:32:16
Which is exactly saying that
this is an eigenvector.
515
00:32:16 --> 00:32:20
When I did all that
it came back again.
516
00:32:20 --> 00:32:24
Scaled by two.
517
00:32:24 --> 00:32:27
So that's a good example.
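The check can be sketched in Python; the angle is my stand-in for whatever was on the board, and the proposed eigenvector is the first column (c, s) of the rotation.

```python
import math

# The angle is an arbitrary stand-in; any c, s with c^2 + s^2 = 1 works.
theta = 0.6
c, s = math.cos(theta), math.sin(theta)

def matvec(A, v):
    return [A[0][0]*v[0] + A[0][1]*v[1],
            A[1][0]*v[0] + A[1][1]*v[1]]

Q   = [[c, -s], [s, c]]
Qt  = [[c, s], [-s, c]]
Lam = [[2, 0], [0, 5]]

q1 = [c, s]                 # proposed eigenvector, the first column of Q
step1 = matvec(Qt, q1)      # (c^2 + s^2, -cs + cs) = (1, 0)
step2 = matvec(Lam, step1)  # (2, 0): two times the first entry
step3 = matvec(Q, step2)    # 2*(c, s): the eigenvector comes back, scaled by 2
```

The three steps are exactly the three matrices speaking up in turn: Qᵀ sends q1 to (1, 0), the diagonal stretches it to (2, 0), and Q carries it back to 2·q1.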
518
00:32:27 --> 00:32:29
And then, is the matrix
positive definite?
519
00:32:29 --> 00:32:33
That connects to
today's lecture.
520
00:32:33 --> 00:32:36
What test would you use
to show that the matrix
521
00:32:36 --> 00:32:39
is positive definite?
522
00:32:39 --> 00:32:40
The eigenvalues, yeah.
523
00:32:40 --> 00:32:42
The eigenvalues are
sitting there.
524
00:32:42 --> 00:32:44
Two and five, both positive.
525
00:32:44 --> 00:32:47
If I changed one of those
signs, then it would no
526
00:32:47 --> 00:32:50
longer be positive definite.
527
00:32:50 --> 00:32:53
It would still be symmetric,
I'd still have the
528
00:32:53 --> 00:32:56
eigenvectors, but then one
eigenvalue would have
529
00:32:56 --> 00:33:00
jumped to minus five.
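A sketch of that sign flip, with my own example numbers: flip the 5 to −5 in the middle matrix and the energy xᵀKx goes negative along the second eigenvector, so the matrix is no longer positive definite even though it stays symmetric.

```python
import math

# Same construction as before, but with eigenvalues 2 and -5 (my flipped example).
theta = 0.6
c, s = math.cos(theta), math.sin(theta)

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Q   = [[c, -s], [s, c]]
Qt  = [[c, s], [-s, c]]
Lam = [[2, 0], [0, -5]]          # one sign flipped

K = matmul(matmul(Q, Lam), Qt)   # still symmetric, same eigenvectors

x = [-s, c]                      # second eigenvector, unit length
Kx = [K[0][0]*x[0] + K[0][1]*x[1], K[1][0]*x[0] + K[1][1]*x[1]]
energy = x[0]*Kx[0] + x[1]*Kx[1] # x^T K x = -5: a negative energy exists
```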
530
00:33:00 --> 00:33:02
I think this sort of helps out.
531
00:33:02 --> 00:33:06
I guess I hope that as I'm
doing these things, you're
532
00:33:06 --> 00:33:12
ahead of me or with me in the
calculation and you just have
533
00:33:12 --> 00:33:16
to do a bunch of these to get
confidence that you've
534
00:33:16 --> 00:33:17
got the right thing.
535
00:33:17 --> 00:33:22
Ok, yes?
536
00:33:22 --> 00:33:24
1.6, 24.
537
00:33:24 --> 00:33:26
Is that also a
homework problem?
538
00:33:26 --> 00:33:28
Alright, but you guys
are reading the rest
539
00:33:28 --> 00:33:30
of the book, right?
540
00:33:30 --> 00:33:32
Not only the
homework questions.
541
00:33:32 --> 00:33:34
Ah.
542
00:33:34 --> 00:33:35
Oh, dear.
543
00:33:35 --> 00:33:41
24, that's a very
good question.
544
00:33:41 --> 00:33:43
About this, yeah.
545
00:33:43 --> 00:33:54
Right.
546
00:33:54 --> 00:33:55
It's a good question.
547
00:33:55 --> 00:33:58
And if today's lecture
had been, well it
548
00:33:58 --> 00:33:59
ran a little late.
549
00:33:59 --> 00:34:03
But if we ran another 20
minutes late, I could
550
00:34:03 --> 00:34:05
have done this.
551
00:34:05 --> 00:34:09
I'll just say what's
in that problem.
552
00:34:09 --> 00:34:16
And then we'll see it again.
553
00:34:16 --> 00:34:21
So what's in that question?
554
00:34:21 --> 00:34:23
Let me write down what it is.
555
00:34:23 --> 00:34:28
So I have a positive
definite matrix K, right?
556
00:34:28 --> 00:34:33
And then I've got its energy.
557
00:34:33 --> 00:34:39
I'm using u rather than
x, so let's use u.
558
00:34:39 --> 00:34:46
So my u transpose Ku, or
like x transpose Kx today.
559
00:34:46 --> 00:34:52
That is this bowl-shaped
figure, right?
560
00:34:52 --> 00:35:01
If I graph this on the
u_1, u_2 maybe up to
561
00:35:01 --> 00:35:02
u_n, all in the base.
562
00:35:02 --> 00:35:04
And now I have the picture.
563
00:35:04 --> 00:35:07
So I'm in n+1 dimensions.
564
00:35:07 --> 00:35:09
The other dimension
is this one.
565
00:35:09 --> 00:35:15
Then that's the one where I
might get this bowl-shaped guy.
566
00:35:15 --> 00:35:17
And I've called that energy.
567
00:35:17 --> 00:35:22
In many, many physical problems
there is a factor of 1/2.
568
00:35:22 --> 00:35:27
And it's going to be nice to
have that factor of 1/2.
569
00:35:27 --> 00:35:35
So that won't change
anything, just half as big.
570
00:35:35 --> 00:35:41
So what is the minimum
value of that energy?
571
00:35:41 --> 00:35:46
And what is the minimum value
of this, if I said minimize
572
00:35:46 --> 00:35:48
that, you could do
it right away.
573
00:35:48 --> 00:35:51
It'd be a zero.
574
00:35:51 --> 00:35:57
Now I'm going to
introduce a linear term.
575
00:35:57 --> 00:36:01
This was a quadratic term and
it had u squareds in it.
576
00:36:01 --> 00:36:05
So the linear term is going
to be u transpose f is
577
00:36:05 --> 00:36:06
the shorthand for it.
578
00:36:06 --> 00:36:12
And of course, we all know that
that stands for minus u_1*f_1
579
00:36:12 --> 00:36:16
minus u_2*f_2 and so on.
580
00:36:16 --> 00:36:19
However many dimensions I'm in.
581
00:36:19 --> 00:36:21
You can imagine I'm
in two dimensions.
582
00:36:21 --> 00:36:23
So it's -u_1*f_1 - u_2*f_2.
583
00:36:23 --> 00:36:30
584
00:36:30 --> 00:36:34
So what I'm saying is that
minimizing just this was
585
00:36:34 --> 00:36:35
like, too easy, right?
586
00:36:35 --> 00:36:36
The answer was zero.
587
00:36:36 --> 00:36:39
Nobody's interested in
that for very long.
588
00:36:39 --> 00:36:43
But now it is much more
interesting when I get a
589
00:36:43 --> 00:36:48
linear term in there.
590
00:36:48 --> 00:36:51
So what happens now?
591
00:36:51 --> 00:36:59
Well, the effect of that linear
term is to shift that bowl
592
00:36:59 --> 00:37:02
sorta over and down a little.
593
00:37:02 --> 00:37:04
So that instead of
sitting where I drew
594
00:37:04 --> 00:37:09
it, let me erase it.
595
00:37:09 --> 00:37:15
If I now graph this function,
this is my function of u, this
596
00:37:15 --> 00:37:22
is still the most important
part, but now I have
597
00:37:22 --> 00:37:24
a first-order term.
598
00:37:24 --> 00:37:27
And the result is, it
still goes through here.
599
00:37:27 --> 00:37:27
Right?
600
00:37:27 --> 00:37:31
Why does it still go
through that same point?
601
00:37:31 --> 00:37:37
Because if I take u_1 and
u_2 to be zero, I get zero.
602
00:37:37 --> 00:37:38
So I still get zero there.
603
00:37:38 --> 00:37:39
But the bowl
604
00:37:39 --> 00:37:40
has shifted.
605
00:37:40 --> 00:37:44
It's more like something here.
606
00:37:44 --> 00:37:48
And it still has a minimum
because this is still
607
00:37:48 --> 00:37:50
the all-important term.
608
00:37:50 --> 00:37:52
But it's just moved
over and down.
609
00:37:52 --> 00:37:55
So it has the minimum value.
610
00:37:55 --> 00:38:01
It actually goes below zero,
but if I'm
611
00:38:01 --> 00:38:04
sitting at the minimum
and looking up, I'm seeing a
612
00:38:04 --> 00:38:06
bowl going up, right.
613
00:38:06 --> 00:38:11
So I hope that picture shows.
614
00:38:11 --> 00:38:16
And now, of course,
that's the geometry.
615
00:38:16 --> 00:38:18
In other words, the same
geometry just moved the
616
00:38:18 --> 00:38:20
thing over and down.
617
00:38:20 --> 00:38:24
But the algebra is,
where is the minimum?
618
00:38:24 --> 00:38:26
What is the value
of that minimum?
619
00:38:26 --> 00:38:35
And this problem, 24, is
one way to do the minimum.
620
00:38:35 --> 00:38:37
One way to do it.
621
00:38:37 --> 00:38:42
But actually, if you don't
like linear, well I won't say
622
00:38:42 --> 00:38:46
didn't like linear algebra,
that's against my religion.
623
00:38:46 --> 00:38:52
So if you like calculus and you
said, wait a minute, if you
624
00:38:52 --> 00:38:55
give me something you want me
to minimize, what will I do?
625
00:38:55 --> 00:38:59
I'll set derivatives to zero.
626
00:38:59 --> 00:39:03
And can I just jump
to the answer?
627
00:39:03 --> 00:39:07
Oh, what derivatives
do I set to zero now,
628
00:39:07 --> 00:39:12
for the minimum here?
629
00:39:12 --> 00:39:14
It's the first derivatives.
630
00:39:14 --> 00:39:21
And they're first derivatives
with respect to?
631
00:39:21 --> 00:39:24
I look at df/d what?
632
00:39:24 --> 00:39:27
You see I've already
given it away.
633
00:39:27 --> 00:39:29
These are going to be
partial derivatives.
634
00:39:29 --> 00:39:30
Why's that?
635
00:39:30 --> 00:39:32
Because I've got
two directions.
636
00:39:32 --> 00:39:36
So I have a df/du_1=0
and a df/du_2=0.
637
00:39:36 --> 00:39:39
638
00:39:39 --> 00:39:42
In other words, when I sit
here at the bottom I'm seeing
639
00:39:42 --> 00:39:46
this whole bowl above me.
640
00:39:46 --> 00:39:51
If I go along the u_2 direction
it should go up and if
641
00:39:51 --> 00:39:54
I come along the u_1
direction, it goes up.
642
00:39:54 --> 00:40:00
But it's flat at the
bottom both ways.
643
00:40:00 --> 00:40:04
So what's my point here?
644
00:40:04 --> 00:40:09
If you like calculus, you'll
get to two equations.
645
00:40:09 --> 00:40:12
And I just want to say what
those equations are, because
646
00:40:12 --> 00:40:19
they're all important.
647
00:40:19 --> 00:40:22
Suppose we only had
u_1 and nothing else.
648
00:40:22 --> 00:40:26
Then this would just be a
parabola, and I'd take the derivative
649
00:40:26 --> 00:40:29
of this parabola,
1/2 K*u squared.
650
00:40:29 --> 00:40:31
Suppose n is one.
651
00:40:31 --> 00:40:34
I'm only in one.
652
00:40:34 --> 00:40:39
So what's the derivative
of 1/2 K*u squared?
653
00:40:39 --> 00:40:40
The derivative.
654
00:40:40 --> 00:40:44
So I'm looking for if this was
1/2 K*u squared and I took the
655
00:40:44 --> 00:40:47
derivative with respect
to u, it would be?
656
00:40:47 --> 00:40:47
It'd be Ku.
657
00:40:48 --> 00:40:52
And it works here in
the matrix case.
658
00:40:52 --> 00:40:56
And what would be the
derivative of u transpose
659
00:40:56 --> 00:41:03
times f, if u was just a number
and if u was just one thing and
660
00:41:03 --> 00:41:08
f was a single number, the
derivative would be? f, yeah.
661
00:41:08 --> 00:41:12
It'd be f.
662
00:41:12 --> 00:41:18
That's the system.
663
00:41:18 --> 00:41:20
I've jumped to the answer.
664
00:41:20 --> 00:41:27
That this set of two or n
equations in matrix language
665
00:41:27 --> 00:41:30
would just be, and I'll even
write it better as Ku=f.
666
00:41:33 --> 00:41:35
That tells me where
the minimum is.
667
00:41:35 --> 00:41:42
The minimizing guy is, so this
is in the base and then the
668
00:41:42 --> 00:41:43
thing is dropping down.
669
00:41:43 --> 00:41:49
I still have to figure out
what's the bottom value.
670
00:41:49 --> 00:41:54
But I've now identified
where the minimum occurs.
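To make that concrete, here is a small Python check with numbers of my own choosing (not from the problem set): solving Ku = f lands at the bottom of the shifted bowl, and nearby points sit higher. K is diagonal so the hand arithmetic stays short.

```python
# K and f are my example numbers; K is diagonal to keep the hand check easy.
K = [[2, 0], [0, 5]]
f = [4, 10]

# Solve Ku = f (trivial since K is diagonal): the minimizer u = K^{-1} f.
u = [f[0] / K[0][0], f[1] / K[1][1]]      # [2.0, 2.0]

def P(w):
    # P(w) = 1/2 w^T K w - w^T f, the energy with the linear term
    Kw = [K[0][0]*w[0] + K[0][1]*w[1], K[1][0]*w[0] + K[1][1]*w[1]]
    return 0.5 * (w[0]*Kw[0] + w[1]*Kw[1]) - (w[0]*f[0] + w[1]*f[1])

print(P(u))            # -14.0: the bowl's bottom is below zero
print(P([2.1, 2.0]))   # a bit above -14: stepping away in u_1 goes up
print(P([2.0, 1.9]))   # a bit above -14: stepping away in u_2 goes up
```

Note the bottom value is negative, as the picture suggested: at u = 0 the energy is zero, and the bowl dips below zero on its way to the minimizer.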
671
00:41:54 --> 00:41:57
So you get two questions
about a minimum.
672
00:41:57 --> 00:41:59
Where is it?
673
00:41:59 --> 00:42:01
What value of u
gives the minimum?
674
00:42:01 --> 00:42:09
And at that point, at that
lowest point, how low is it?
675
00:42:09 --> 00:42:13
The one thing you've gotta
remember is that when you
676
00:42:13 --> 00:42:19
minimize that quadratic, you
get that system of equations.
677
00:42:19 --> 00:42:21
And then, of course, the
answer, you have to
678
00:42:21 --> 00:42:22
solve that system.
679
00:42:22 --> 00:42:25
But this goes back to
what I said at the
680
00:42:25 --> 00:42:29
first minute of today.
681
00:42:29 --> 00:42:34
That we have two ways of
looking at a problem.
682
00:42:34 --> 00:42:38
Usually we go directly
to the equations.
683
00:42:38 --> 00:42:44
Sometimes the problem
comes naturally to us
684
00:42:44 --> 00:42:46
as a minimum problem.
685
00:42:46 --> 00:42:49
Like we have to minimize the
cost, we want to build a
686
00:42:49 --> 00:42:53
new school or something.
687
00:42:53 --> 00:42:57
So we've got some cost function
that we minimize that will
688
00:42:57 --> 00:43:04
lead, through calculus or
linear algebra, to this.
689
00:43:04 --> 00:43:14
So I've done everything but
answer the question 24.
690
00:43:14 --> 00:43:19
We only checked the one by
one case to see that that's
691
00:43:19 --> 00:43:23
the right equations,
derivative equal zero.
692
00:43:23 --> 00:43:27
And now you could use
calculus as I said.
693
00:43:27 --> 00:43:36
But if I answer that question,
well let me just do a little.
694
00:43:36 --> 00:43:41
The idea of that question
24, so that was what?
695
00:43:41 --> 00:43:45
1.6, 24, or something.
696
00:43:45 --> 00:43:46
Is that right?
697
00:43:46 --> 00:43:47
Yeah.
698
00:43:47 --> 00:43:55
Is that I could rewrite
this to make it clear.
699
00:43:55 --> 00:43:56
I think it's 1/2 of u
minus K inverse f,
700
00:43:56 --> 00:44:00
701
00:44:00 --> 00:44:07
transpose, K times u
minus K inverse f.
702
00:44:07 --> 00:44:17
And then a minus 1/2, f
transpose, K inverse f.
703
00:44:17 --> 00:44:21
Actually, my best friend in
China told me this trick.
704
00:44:21 --> 00:44:25
And I didn't give him
credit for it in the book.
705
00:44:25 --> 00:44:27
But I should have done.
706
00:44:27 --> 00:44:30
I just think that if
you multiply all this
707
00:44:30 --> 00:44:33
out, you'll get this.
708
00:44:33 --> 00:44:35
It's what I would
call an identity.
709
00:44:35 --> 00:44:40
That just simply means that
it's just true for every u.
710
00:44:40 --> 00:44:42
It's true for everything.
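A quick numeric sanity check of that identity, in Python with an example K and f of my own (not Professor Lin's): the two sides agree at every trial u. K is diagonal here only to keep the arithmetic short; the identity holds for any symmetric positive definite K.

```python
# K, f, and the trial points u are my own example numbers.
K = [[2, 0], [0, 5]]
f = [4, 10]
Kinv_f = [f[0] / K[0][0], f[1] / K[1][1]]   # K^{-1} f = [2.0, 2.0]

for u in ([0, 0], [1, -3], [2.5, 7]):
    # left side: 1/2 u^T K u - u^T f
    lhs = 0.5 * (K[0][0]*u[0]**2 + K[1][1]*u[1]**2) - (u[0]*f[0] + u[1]*f[1])
    # right side: 1/2 (u - K^{-1}f)^T K (u - K^{-1}f) - 1/2 f^T K^{-1} f
    w = [u[0] - Kinv_f[0], u[1] - Kinv_f[1]]
    rhs = (0.5 * (K[0][0]*w[0]**2 + K[1][1]*w[1]**2)
           - 0.5 * (f[0]*Kinv_f[0] + f[1]*Kinv_f[1]))
    assert abs(lhs - rhs) < 1e-9   # the identity holds at every trial u
```

The constant piece −½ fᵀK⁻¹f never moves; only the squared piece depends on u, which is the whole point of writing it this way.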
711
00:44:42 --> 00:44:45
Can I try to multiply
some of that out?
712
00:44:45 --> 00:44:53
Just so you kind of see it.
713
00:44:53 --> 00:44:57
Yeah, that's what I
mean, multiply it out.
714
00:44:57 --> 00:44:59
You've got it.
715
00:44:59 --> 00:45:01
This thing would
give me four terms.
716
00:45:01 --> 00:45:05
It'd be this transpose
times that times that.
717
00:45:05 --> 00:45:07
Which is my guy here.
718
00:45:07 --> 00:45:09
And then I'll have something.
719
00:45:09 --> 00:45:10
It's just like numbers.
720
00:45:10 --> 00:45:13
Then this thing times
that times this.
721
00:45:13 --> 00:45:15
And this thing times
that times that.
722
00:45:15 --> 00:45:18
And this thing times
that times that.
723
00:45:18 --> 00:45:19
Let me do that last one.
724
00:45:19 --> 00:45:26
What happens when I do the
1/2 and this transpose
725
00:45:26 --> 00:45:28
times the K times this.
726
00:45:28 --> 00:45:32
So I'm using the distributive,
whatever, laws.
727
00:45:32 --> 00:45:36
Let's just do that particular
term and see what
728
00:45:36 --> 00:45:37
we're getting.
729
00:45:37 --> 00:45:42
So I have 1/2 of the minus
K inverse f transpose.
730
00:45:42 --> 00:45:45
So how do I write that?
731
00:45:45 --> 00:45:47
Shoot.
732
00:45:47 --> 00:45:50
Well, it's something times
something transpose.
733
00:45:50 --> 00:45:53
So what do I have to do?
734
00:45:53 --> 00:45:54
Opposite order.
735
00:45:54 --> 00:45:57
So I have a minus, an
f transpose and the
736
00:45:57 --> 00:45:58
K inverse transpose.
737
00:45:58 --> 00:46:02
You're seeing all this stuff.
738
00:46:02 --> 00:46:06
And then comes the K and
then comes the minus,
739
00:46:06 --> 00:46:07
oh again the minus.
740
00:46:07 --> 00:46:08
So that'd be a plus, right?
741
00:46:08 --> 00:46:13
Times K inverse times f.
742
00:46:13 --> 00:46:15
So that's one of the terms.
743
00:46:15 --> 00:46:18
That's one of the
terms that shows up.
744
00:46:18 --> 00:46:22
And what good is that one?
745
00:46:22 --> 00:46:25
So that's one.
746
00:46:25 --> 00:46:27
You could say that's
the longest term.
747
00:46:27 --> 00:46:30
That's the one with
the messiest term.
748
00:46:30 --> 00:46:31
But you can fix it.
749
00:46:31 --> 00:46:34
What would you do with that?
750
00:46:34 --> 00:46:37
K times K inverse is?
751
00:46:37 --> 00:46:38
Identity.
752
00:46:38 --> 00:46:40
So we can forget that.
753
00:46:40 --> 00:46:42
And now we're there.
754
00:46:42 --> 00:46:45
That's 1/2, f transpose,
K inverse transpose, f on this side.
755
00:46:45 --> 00:46:49
Oh, what's K inverse transpose?
756
00:46:49 --> 00:46:53
It's the same as K inverse
because K is symmetric, so
757
00:46:53 --> 00:46:54
its inverse is symmetric.
758
00:46:54 --> 00:46:57
So that transpose doesn't
change the matrix.
759
00:46:57 --> 00:47:08
In other words, this term will
show up and this term is oh!
760
00:47:08 --> 00:47:11
Nope, sorry.
761
00:47:11 --> 00:47:13
I was going to goof here.
762
00:47:13 --> 00:47:15
I was going to say this
is the same as this,
763
00:47:15 --> 00:47:16
but it's not, right?
764
00:47:16 --> 00:47:18
Why not?
765
00:47:18 --> 00:47:20
Because it's positive.
766
00:47:20 --> 00:47:24
And this guy is negative.
767
00:47:24 --> 00:47:27
Has my good friend
Professor Lin messed up?
768
00:47:27 --> 00:47:32
Nope.
769
00:47:32 --> 00:47:35
What's going to happen now?
770
00:47:35 --> 00:47:39
The two that I didn't
do, you see, the 1/2 u
771
00:47:39 --> 00:47:43
transpose K u is here.
772
00:47:43 --> 00:47:48
Then comes this one, which I
didn't do, and then another
773
00:47:48 --> 00:47:51
one that I didn't do, and
then this one that I did.
774
00:47:51 --> 00:47:53
They'll all be the same.
775
00:47:53 --> 00:47:57
So they'll all contribute with
their plus sign or minus sign
776
00:47:57 --> 00:48:02
and the net result will be
a perfect match, yeah.
777
00:48:02 --> 00:48:08
So I won't wear out your
patience by doing that.
778
00:48:08 --> 00:48:11
But I do want to
make the point.
779
00:48:11 --> 00:48:17
What was Professor Lin's point
in suggesting to write it in
780
00:48:17 --> 00:48:21
this more complicated way?
781
00:48:21 --> 00:48:27
His point was we could see
this is just a constant.
782
00:48:27 --> 00:48:29
Doesn't depend on u.
783
00:48:29 --> 00:48:34
And now I can see what
value of u would make this
784
00:48:34 --> 00:48:35
as small as possible.
785
00:48:35 --> 00:48:38
Remember, I'm still
trying to minimize.
786
00:48:38 --> 00:48:43
This part, I can't make it
bigger or smaller, it's fixed.
787
00:48:43 --> 00:48:44
It's u that I can play with.
788
00:48:44 --> 00:48:51
So what u should I choose
to make this part smaller?
789
00:48:51 --> 00:48:53
Bear with me.
790
00:48:53 --> 00:48:58
What u will make this big mess
as small as I can get it
791
00:48:58 --> 00:49:01
and how small can I get it?
792
00:49:01 --> 00:49:07
If I take u to be K inverse
f, then this is zero,
793
00:49:07 --> 00:49:10
this is zero, I get zero.
794
00:49:10 --> 00:49:15
And that's my claim, that u
equal K inverse f is the best
795
00:49:15 --> 00:49:17
possible, is the minimizer.
796
00:49:17 --> 00:49:24
And how do I know that I can't
make this more negative
797
00:49:24 --> 00:49:25
than the zero?
798
00:49:25 --> 00:49:30
I can get it down to
zero by making that to
799
00:49:30 --> 00:49:31
be the zero vector.
800
00:49:31 --> 00:49:41
But how do I know I can't
make it below zero?
801
00:49:41 --> 00:49:44
The K is positive definite and
I'm sitting here with some
802
00:49:44 --> 00:49:48
x transpose and some x.
803
00:49:48 --> 00:49:52
The x has this sort of messy
form but it's an x and
804
00:49:52 --> 00:49:53
here's its transpose.
805
00:49:53 --> 00:50:00
So this is an x transpose, Kx
and can't be brought below zero
806
00:50:00 --> 00:50:02
when K is positive definite.
807
00:50:02 --> 00:50:05
Good.
808
00:50:05 --> 00:50:09
So we've said a good bit about
positive definite here, but
809
00:50:09 --> 00:50:19
happy to think-- Yeah, thanks.
810
00:50:19 --> 00:50:27
In fact, finally a fifth.
811
00:50:27 --> 00:50:29
Exactly.
812
00:50:29 --> 00:50:30
Thanks, perfect question.
813
00:50:30 --> 00:50:33
And let me answer it clearly.
814
00:50:33 --> 00:50:37
Each of those five tests
completely decides
815
00:50:37 --> 00:50:38
positive definite.
816
00:50:38 --> 00:50:42
So the five tests
are all equivalent.
817
00:50:42 --> 00:50:46
If a matrix passes one
test, it passes all five.
818
00:50:46 --> 00:50:48
So that's great, right?
819
00:50:48 --> 00:50:52
So we just do whichever
test we want.
820
00:50:52 --> 00:50:55
Or whichever way we want
to understand the matrix.
821
00:50:55 --> 00:51:02
I was going to add, I didn't
say a lot about this one.
822
00:51:02 --> 00:51:09
Can I just add a note
about a MATLAB command?
823
00:51:09 --> 00:51:18
The command chol(K).
824
00:51:18 --> 00:51:23
That's the first letters
in the name Cholesky.
825
00:51:23 --> 00:51:28
So chol is the first four
letters of this name.
826
00:51:28 --> 00:51:34
And that's a MATLAB command.
827
00:51:34 --> 00:51:38
If I've defined a matrix that's
positive definite and I use
828
00:51:38 --> 00:51:45
that command, out will pop an
A, one particular A that works.
829
00:51:45 --> 00:51:50
Out will pop an A that
makes this work.
830
00:51:50 --> 00:51:55
It'll be a square A and
it'll be upper triangular.
831
00:51:55 --> 00:52:00
So out will pop, so this
command is very, very close to
832
00:52:00 --> 00:52:06
the LU but it's just sort of
the appropriate version,
833
00:52:06 --> 00:52:13
symmetrized version of
elimination when you have a
834
00:52:13 --> 00:52:16
positive definite
symmetric matrix.
835
00:52:16 --> 00:52:20
If your matrix is not positive
definite, the MATLAB
836
00:52:20 --> 00:52:22
will tell you so.
837
00:52:22 --> 00:52:24
So it produces one
particular A.
838
00:52:24 --> 00:52:28
There are many A's that would
work, but there's one
839
00:52:28 --> 00:52:30
particular upper
triangular one.
840
00:52:30 --> 00:52:40
It's just related to the
usual U, but yes, thanks.
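For readers without MATLAB, here is a hand-rolled 2x2 version of what chol produces, with a sample matrix of my own: an upper triangular A whose transpose times itself gives back K (I believe MATLAB's chol returns this upper factor).

```python
import math

# K is my sample symmetric positive definite matrix, not from the session.
K = [[4, 2], [2, 3]]

# Hand-rolled Cholesky for the 2x2 case: upper triangular A with A^T A = K.
a11 = math.sqrt(K[0][0])              # 2.0
a12 = K[0][1] / a11                   # 1.0
a22 = math.sqrt(K[1][1] - a12**2)     # sqrt(2); a negative value under the
                                      # square root would mean K failed the test
A = [[a11, a12], [0.0, a22]]

# Multiply A^T A back out and compare with K.
AtA = [[a11*a11,  a11*a12],
       [a12*a11,  a12*a12 + a22*a22]]
```

The square root failing on a negative number is the hand-computed analogue of MATLAB reporting that the matrix is not positive definite.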
841
00:52:40 --> 00:52:43
No, I only even get into
that ballpark if the
842
00:52:43 --> 00:52:45
matrix is symmetric.
843
00:52:45 --> 00:52:47
I don't touch it otherwise.
844
00:52:47 --> 00:52:51
So my matrix is symmetric
before I begin.
845
00:52:51 --> 00:52:54
So I know good things about it.
846
00:52:54 --> 00:52:57
And here I'm asking for more.
847
00:52:57 --> 00:53:01
Here I'm asking are the
pivots all positive?
848
00:53:01 --> 00:53:03
Are the eigenvalues
all positive?
849
00:53:03 --> 00:53:06
So that's more.
850
00:53:06 --> 00:53:11
But I could think of some
interpretation that would, for
851
00:53:11 --> 00:53:15
non-symmetric matrices, but
it has problems, so I'd
852
00:53:15 --> 00:53:17
rather just leave it.
853
00:53:17 --> 00:53:22
Stay with symmetric.
854
00:53:22 --> 00:53:27
Well that's two hours of
lots of linear algebra.
855
00:53:27 --> 00:53:31
I'm hoping you're going to
like the MATLAB problem.
856
00:53:31 --> 00:53:41
Would you like to
see what it'll be?
857
00:53:41 --> 00:53:44
I'll just tell you what
the equation will be.
858
00:53:44 --> 00:53:49
So it'll be a
differential equation.
859
00:53:49 --> 00:53:53
Oh, dear, what is it?
860
00:53:53 --> 00:53:57
So it's a differential
equation with a -u''
861
00:53:57 --> 00:54:00
that we know and love.
862
00:54:00 --> 00:54:03
And what else has it got?
863
00:54:03 --> 00:54:05
Oh yes, right.
864
00:54:05 --> 00:54:09
So here's the problem.
865
00:54:09 --> 00:54:13
Here's the equation.
866
00:54:13 --> 00:54:21
So it has the -u'', the second
derivative and it has a first
867
00:54:21 --> 00:54:24
derivative equal whatever.
868
00:54:24 --> 00:54:28
In fact, the example will
choose a delta function there.
869
00:54:28 --> 00:54:32
So what am I talking
about here?
870
00:54:32 --> 00:54:37
This would be a diffusion
and this would be, anybody
871
00:54:37 --> 00:54:39
met these things before?
872
00:54:39 --> 00:54:41
That would be a convection.
873
00:54:41 --> 00:54:46
So that's a first derivative,
that's anti-symmetric.
874
00:54:46 --> 00:54:50
The MATLAB problem is now going
to create the difference
875
00:54:50 --> 00:54:51
matrix for that.
876
00:54:51 --> 00:54:55
So the symmetric part will
be our old friend K.
877
00:54:55 --> 00:55:02
But now we've got the
convection term appearing.
878
00:55:02 --> 00:55:04
And it's going to
be anti-symmetric.
879
00:55:04 --> 00:55:09
And if v is big, it gets
more and more important.
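A sketch, not the actual MATLAB homework, of how that difference matrix splits: a symmetric second-difference part (our old friend K, scaled by 1/h²) plus v times an antisymmetric centered first difference. The sizes n, h, v here are illustrative choices of mine.

```python
# n, h, v are illustrative choices, not values from the assignment.
n, h, v = 4, 0.2, 3.0

def second_diff(n):
    # symmetric diffusion part: (1/h^2) * [2 on diagonal, -1 off diagonal]
    return [[(2 if i == j else -1 if abs(i - j) == 1 else 0) / h**2
             for j in range(n)] for i in range(n)]

def first_diff(n):
    # centered first difference: antisymmetric convection part
    return [[(1 if j == i + 1 else -1 if j == i - 1 else 0) / (2 * h)
             for j in range(n)] for i in range(n)]

D2 = second_diff(n)
D1 = first_diff(n)
M = [[D2[i][j] + v * D1[i][j] for j in range(n)] for i in range(n)]
# M is no longer symmetric once v is nonzero: the off-diagonal
# entries -1/h^2 + v/(2h) and -1/h^2 - v/(2h) differ.
```

As v grows, the antisymmetric part dominates, which is exactly why these convection-diffusion matrices behave so differently from the pure K.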
880
00:55:09 --> 00:55:10
So what happens?
881
00:55:10 --> 00:55:13
What happens with
equations like this?
882
00:55:13 --> 00:55:17
Really this is like the first
time in the course that
883
00:55:17 --> 00:55:22
we've allowed this first
derivative term to pop up.
884
00:55:22 --> 00:55:29
But nevertheless we can see
a lot of what's happening.
885
00:55:29 --> 00:55:32
And how to deal with
those equations?
886
00:55:32 --> 00:55:37
I mean, if you ask a chemical
engineer or anybody, they're
887
00:55:37 --> 00:55:41
always dealing with a flow,
like the Charles River is
888
00:55:41 --> 00:55:45
flowing along, that's coming
from the velocity there, but
889
00:55:45 --> 00:55:48
at the same time stuff
is diffusing in it.
890
00:55:48 --> 00:55:53
It's just a constant problem
in true, true applications.
891
00:55:53 --> 00:55:56
And this is the best
model, I think.
892
00:55:56 --> 00:56:03
So you'll see that and
I'm pleased about that.
893
00:56:03 --> 00:56:06
As you'd see.
894
00:56:06 --> 00:56:07
Any last question?
895
00:56:07 --> 00:56:09
I'm always happy.
896
00:56:09 --> 00:56:10
Well I'll see you Friday then.
897
00:56:10 --> 00:56:12
Thanks for coming.