1
00:00:00 --> 00:00:01
2
00:00:01 --> 00:00:02
The following content is
provided under a Creative
3
00:00:02 --> 00:00:03
Commons license.
4
00:00:03 --> 00:00:06
Your support will help MIT
OpenCourseWare continue to
5
00:00:06 --> 00:00:10
offer high-quality educational
resources for free.
6
00:00:10 --> 00:00:13
To make a donation, or to view
additional materials from
7
00:00:13 --> 00:00:16
hundreds of MIT courses, visit
MIT OpenCourseWare
8
00:00:16 --> 00:00:19
at ocw.mit.edu.
9
00:00:19 --> 00:00:26
PROFESSOR STRANG:
Shall we start?
10
00:00:26 --> 00:00:31
The main job of today is
eigenvalues and eigenvectors.
11
00:00:31 --> 00:00:34
The next section in the book,
a very big topic, with plenty of
12
00:00:34 --> 00:00:37
things to say about it.
13
00:00:37 --> 00:00:40
I do want to begin with a
recap of what I didn't
14
00:00:40 --> 00:00:45
quite finish last time.
15
00:00:45 --> 00:00:50
So what we did was solve this
very straightforward equation.
16
00:00:50 --> 00:00:53
Straightforward except that
it has a point source,
17
00:00:53 --> 00:00:54
a delta function.
18
00:00:54 --> 00:00:59
And we solved it, both the
fixed-fixed case when a
19
00:00:59 --> 00:01:05
straight line went up and back
down and in the free-fixed case
20
00:01:05 --> 00:01:08
when it was a horizontal line
and then down with
21
00:01:08 --> 00:01:11
slope minus one.
22
00:01:11 --> 00:01:15
And there are different ways
to get to this answer.
23
00:01:15 --> 00:01:18
But once you have it,
you can look at it and
24
00:01:18 --> 00:01:20
say, well is it right?
25
00:01:20 --> 00:01:22
Certainly the boundary
conditions are correct.
26
00:01:22 --> 00:01:26
Zero slope went through
zero, that's good.
27
00:01:26 --> 00:01:30
And then the only thing you
really have to check is does
28
00:01:30 --> 00:01:36
the slope drop by one at the
point of the impulse? Because
29
00:01:36 --> 00:01:40
that's what this is
forcing us to do.
30
00:01:40 --> 00:01:43
It's saying the slope
should drop by one.
31
00:01:43 --> 00:01:46
And here the slope
is 1-a going up.
32
00:01:46 --> 00:01:51
And if I take the derivative,
it's -a going down.
33
00:01:51 --> 00:01:54
1-a dropped to -a, good.
34
00:01:54 --> 00:01:56
Here the slope was zero.
35
00:01:56 --> 00:02:00
Here the slope was
minus one, good.
36
00:02:00 --> 00:02:01
So those are the right answers.
37
00:02:01 --> 00:02:11
And this is simple, but
really a great example.
38
00:02:11 --> 00:02:15
And then, what I wanted
to do was catch the same
39
00:02:15 --> 00:02:17
thing for the matrices.
40
00:02:17 --> 00:02:24
So those matrices, we all know
what K is and what T is.
41
00:02:24 --> 00:02:29
So I'm solving, I'm
really solving K
42
00:02:29 --> 00:02:31
inverse equal identity.
43
00:02:31 --> 00:02:33
That's the equation
I'm solving.
44
00:02:33 --> 00:02:37
So I'm looking for K inverse
and trying to get the
45
00:02:37 --> 00:02:40
columns of the identity.
46
00:02:40 --> 00:02:44
And you realize the columns
of the identity are just
47
00:02:44 --> 00:02:45
like delta vectors.
48
00:02:45 --> 00:02:50
They've got a one in one
spot, they're a point load
49
00:02:50 --> 00:02:52
just like this thing.
50
00:02:52 --> 00:02:56
So can I just say how
I remember K inverse?
51
00:02:56 --> 00:02:59
I finally, you know, again
there are different
52
00:02:59 --> 00:03:01
ways to get to it.
53
00:03:01 --> 00:03:03
One way is MATLAB, just do it.
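[A quick numerical sketch of "just do it" — in Python with NumPy rather than MATLAB, assuming the 4-by-4 fixed-fixed matrix K from the lectures:]

```python
import numpy as np

# The 4x4 fixed-fixed second-difference matrix K from the lectures.
K = np.array([[ 2, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  2]], dtype=float)

Kinv = np.linalg.inv(K)

# Each column of K inverse solves K u = (a column of the identity):
# the response to a point load, a delta vector.
e2 = np.zeros(4)
e2[1] = 1.0                            # delta vector: unit load in position 2
u = np.linalg.solve(K, e2)
print(np.allclose(u, Kinv[:, 1]))      # same as the second column of K inverse
```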
54
00:03:03 --> 00:03:10
But I guess maybe the whole
point is, the whole point of
55
00:03:10 --> 00:03:16
these and the eigenvalues that
are coming too, is this.
56
00:03:16 --> 00:03:24
That we have here the chance
to see important, special
57
00:03:24 --> 00:03:26
cases that work out.
58
00:03:26 --> 00:03:29
Normally we don't find the
inverse, print out the
59
00:03:29 --> 00:03:30
inverse of a matrix.
60
00:03:30 --> 00:03:32
It's not nice.
61
00:03:32 --> 00:03:36
Normally we just let eig
find the eigenvalues.
62
00:03:36 --> 00:03:40
Because that's an even worse
calculation, to find
63
00:03:40 --> 00:03:42
eigenvalues, in general.
64
00:03:42 --> 00:03:46
I'm talking here about our
matrices of all sizes n by n.
65
00:03:47 --> 00:03:50
Nobody finds the eigenvalues
by hand of n by n matrices.
66
00:03:51 --> 00:03:56
But these have terrific
eigenvalues and
67
00:03:56 --> 00:03:58
especially eigenvectors.
68
00:03:58 --> 00:04:04
So in a way this is a little
bit like, typical of math.
69
00:04:04 --> 00:04:08
That you ask about general
stuff or you write the
70
00:04:08 --> 00:04:14
equation with a matrix A.
71
00:04:14 --> 00:04:17
So that's the general
information.
72
00:04:17 --> 00:04:21
And then there's the
specific, special guys
73
00:04:21 --> 00:04:22
with special functions.
74
00:04:22 --> 00:04:27
And here there'll be sines and
cosines and exponentials.
75
00:04:27 --> 00:04:30
Other places in applied math,
there are Bessel functions
76
00:04:30 --> 00:04:31
and Legendre functions.
77
00:04:31 --> 00:04:33
Special guys.
78
00:04:33 --> 00:04:37
So here, these are special.
79
00:04:37 --> 00:04:40
And how do I
complete K inverse?
80
00:04:40 --> 00:04:44
So this four, three, two, one.
81
00:04:44 --> 00:04:46
Let me complete T inverse.
82
00:04:46 --> 00:04:48
You probably know T
inverse already.
83
00:04:48 --> 00:04:54
So T, this is, four, three,
two, one, is when the load is
84
00:04:54 --> 00:04:59
way over at the far left end
and it's just descending.
85
00:04:59 --> 00:05:05
And now I'm going to, let me
show you how I write it in.
86
00:05:05 --> 00:05:07
Pay attention here
to the diagonal.
87
00:05:07 --> 00:05:15
So this will be three,
three, two, one.
88
00:05:15 --> 00:05:24
Do you see that's the solution
that's sort of like this one?
89
00:05:24 --> 00:05:28
That's the second column of
the inverse so it's solving,
90
00:05:28 --> 00:05:31
I'm solving T times T
inverse equals I here.
91
00:05:31 --> 00:05:35
It's the second column
is the guy with a one
92
00:05:35 --> 00:05:39
in the second place.
93
00:05:39 --> 00:05:42
So that's where the load is,
in position number two.
94
00:05:42 --> 00:05:46
So I'm level, three,
three up to that load.
95
00:05:46 --> 00:05:51
And then I'm dropping
after the load.
96
00:05:51 --> 00:05:56
What's the third
column of T inverse?
97
00:05:56 --> 00:05:59
I started with that first
column and I knew that the
98
00:05:59 --> 00:06:02
answer would be symmetric
because T is symmetric,
99
00:06:02 --> 00:06:05
so that allowed me to
write the first row.
100
00:06:05 --> 00:06:08
And now we can
fill in the rest.
101
00:06:08 --> 00:06:12
So what do you think, if the
point load is now, I'm looking
102
00:06:12 --> 00:06:16
at the third column, third
column of the identity, the
103
00:06:16 --> 00:06:19
load has moved down to
position number three.
104
00:06:19 --> 00:06:22
So what do I have
there and there?
105
00:06:22 --> 00:06:23
Two and two.
106
00:06:23 --> 00:06:26
And what do I have last?
107
00:06:26 --> 00:06:26
One.
108
00:06:26 --> 00:06:28
It's dropping to zero.
109
00:06:28 --> 00:06:31
You could put zero in
green here if you wanted.
110
00:06:31 --> 00:06:39
Zero is the unseen last
boundary, you know,
111
00:06:39 --> 00:06:42
row at this end.
112
00:06:42 --> 00:06:45
And finally, what's
happening here?
113
00:06:45 --> 00:06:50
What do I get from that?
114
00:06:50 --> 00:06:54
All one, one, one
to the diagonal.
115
00:06:54 --> 00:06:57
And then sure enough
it drops to zero.
116
00:06:57 --> 00:07:01
So this would be a case
where the load is there.
117
00:07:01 --> 00:07:06
It would be one, one,
one, one and then boom.
118
00:07:06 --> 00:07:07
No, it wouldn't be.
119
00:07:07 --> 00:07:08
It'd be more like this.
120
00:07:08 --> 00:07:15
One, one, one, one
and then down to.
121
00:07:15 --> 00:07:18
That's a pretty clean inverse.
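[The pattern just filled in can be checked in a few lines — a sketch assuming the free-fixed T has the 1 in its top-left corner, with entry (i, j) of T inverse equal to n+1 minus the larger of i and j:]

```python
import numpy as np

n = 4
# Free-fixed matrix T: same as K except for the 1 in the corner.
T = np.array([[ 1, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  2]], dtype=float)

# Pattern from the board: columns 4,3,2,1 / 3,3,2,1 / 2,2,2,1 / 1,1,1,1,
# i.e. entry (i,j) is n+1 - max(i,j) in 1-based indexing.
i, j = np.indices((n, n)) + 1
Tinv = (n + 1) - np.maximum(i, j)

print(np.allclose(T @ Tinv, np.eye(n)))
```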
122
00:07:18 --> 00:07:22
That's a very beautiful matrix.
123
00:07:22 --> 00:07:24
Don't you admire that matrix?
124
00:07:24 --> 00:07:27
I mean, if they were all
like that, gee, this
125
00:07:27 --> 00:07:29
would be a great world.
126
00:07:29 --> 00:07:40
But of course it's not sparse.
127
00:07:40 --> 00:07:43
That's why we don't
often use the inverse.
128
00:07:43 --> 00:07:46
Because we had a sparse
matrix T that was really
129
00:07:46 --> 00:07:48
fast to compute with.
130
00:07:48 --> 00:07:51
And here, if you tell me the
inverse, you've actually
131
00:07:51 --> 00:07:52
slowed me down.
132
00:07:52 --> 00:07:58
Because you've given me now a
dense matrix, no zeroes even
133
00:07:58 --> 00:08:05
and multiplying T inverse times
the right side would be slower
134
00:08:05 --> 00:08:08
than just doing elimination.
135
00:08:08 --> 00:08:11
Now this is the kind of
more interesting one.
136
00:08:11 --> 00:08:15
Because this is the one
that has to go up to the
137
00:08:15 --> 00:08:18
diagonal and then down.
138
00:08:18 --> 00:08:22
So let me, can I fill in
the way this one goes?
139
00:08:22 --> 00:08:26
I'm going upwards to the
diagonal and then I'm
140
00:08:26 --> 00:08:28
coming down to zero.
141
00:08:28 --> 00:08:31
Remember that I'm coming
down to zero on this K.
142
00:08:31 --> 00:08:37
So zero, zero, zero, zero
is kind of the row number.
143
00:08:37 --> 00:08:41
If that's row number zero,
here's one, two, three,
144
00:08:41 --> 00:08:42
four, the real thing.
145
00:08:42 --> 00:08:48
And then row five is getting
back to zero again.
146
00:08:48 --> 00:08:52
So what do you think, finish
the rest of that column.
147
00:08:52 --> 00:08:55
So you're telling me now
the response to the
148
00:08:55 --> 00:08:57
load in position two.
149
00:08:57 --> 00:08:59
So it's going to
look like this.
150
00:08:59 --> 00:09:03
In fact, it's going to
look very like this.
151
00:09:03 --> 00:09:06
There's the three and then
this is in position two.
152
00:09:06 --> 00:09:08
And then I'm going to have
something here and something
153
00:09:08 --> 00:09:12
here and it'll drop to zero.
154
00:09:12 --> 00:09:14
What do I get?
155
00:09:14 --> 00:09:15
Four, two.
156
00:09:15 --> 00:09:17
Six, four, two, zero.
157
00:09:17 --> 00:09:19
It's dropping to zero.
158
00:09:19 --> 00:09:22
I'm going to finish this, but
then I'm going to look back and
159
00:19:22 --> 00:19:25
see if I've really got it right.
160
00:09:25 --> 00:09:28
How does this go now?
161
00:09:28 --> 00:09:31
Two, let's see.
162
00:09:31 --> 00:09:36
Now it's going up from zero
to two to four to six.
163
00:09:36 --> 00:09:38
That's on the diagonal.
164
00:09:38 --> 00:09:39
Now it starts down.
165
00:09:39 --> 00:09:43
It's got to get to zero,
so that'll be a three.
166
00:09:43 --> 00:09:48
Here is a one going up to
two to three to four.
167
00:09:48 --> 00:09:49
Is that right?
168
00:09:49 --> 00:09:52
And then dropped fast to zero.
169
00:09:52 --> 00:09:55
Is that correct?
170
00:09:55 --> 00:09:57
Think so, yep.
171
00:09:57 --> 00:10:01
Except, wait a minute now.
172
00:10:01 --> 00:10:03
We've got the right
overall picture.
173
00:10:03 --> 00:10:06
Climbing up, dropping down.
174
00:10:06 --> 00:10:07
Climbing up, dropping down.
175
00:10:07 --> 00:10:09
Climbing up, dropping down.
176
00:10:09 --> 00:10:10
All good.
177
00:10:10 --> 00:10:17
But we haven't yet got, we
haven't checked yet that
178
00:10:17 --> 00:10:23
the change in the slope
is supposed to be one.
179
00:10:23 --> 00:10:24
And it's not.
180
00:10:24 --> 00:10:29
Here the slope is like, three,
It's going up by threes and
181
00:10:29 --> 00:10:33
then it's going down by twos.
182
00:10:33 --> 00:10:38
So we've gone from going
up at a slope of three to
183
00:10:38 --> 00:10:41
down to a slope of two.
184
00:10:41 --> 00:10:44
Up three, down just like this.
185
00:10:44 --> 00:10:47
But that would be a
change in slope of five.
186
00:10:47 --> 00:10:51
Therefore there's a 1/5.
187
00:10:51 --> 00:10:54
So this is going up with a
slope of four and down with a
188
00:10:54 --> 00:10:58
slope of one. Four dropping to
one: when I divide by the
189
00:10:58 --> 00:11:01
five, that's what I like.
190
00:11:01 --> 00:11:04
Here is up by twos, down by
threes, again it's a change
191
00:11:04 --> 00:11:07
of five so I need the five.
192
00:11:07 --> 00:11:09
Up by ones, down by four.
193
00:11:09 --> 00:11:12
Sudden, that's a
fast drop of four.
194
00:11:12 --> 00:11:15
Again, the slope changed
by five, dividing by
195
00:11:15 --> 00:11:17
five, that's got it.
196
00:11:17 --> 00:11:18
So that's my picture.
197
00:11:18 --> 00:11:22
You could now create K
inverse for any size.
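[The "see into K inverse" pattern — up to the diagonal, back down, divided by n+1 — can be written as one formula and verified; a sketch, using the same 4-by-4 K:]

```python
import numpy as np

n = 4
K = np.array([[ 2, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  2]], dtype=float)

# The picture on the board, as a formula (1-based i, j):
# K inverse entry (i,j) = min(i,j) * (n+1 - max(i,j)) / (n+1),
# climbing up to the diagonal and back down, with the factor 1/(n+1) = 1/5.
i, j = np.indices((n, n)) + 1
Kinv = np.minimum(i, j) * (n + 1 - np.maximum(i, j)) / (n + 1)

print(np.allclose(K @ Kinv, np.eye(n)))
```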
198
00:11:22 --> 00:11:30
And more than that, sort of
see into K inverse what
199
00:11:30 --> 00:11:32
those numbers are.
200
00:11:32 --> 00:11:38
Because if I wrote the five by
five or six by six doing it
201
00:11:38 --> 00:11:42
a column at a time, it would
look like a bunch of numbers.
202
00:11:42 --> 00:11:44
But you see it now.
203
00:11:44 --> 00:11:46
Do you see the pattern?
204
00:11:46 --> 00:11:51
Right.
205
00:11:51 --> 00:11:55
This is one way to get to those
inverses, and homework problems
206
00:11:55 --> 00:11:57
are offering other ways.
207
00:11:57 --> 00:12:04
T, in particular, is
quite easy to invert.
208
00:12:04 --> 00:12:10
Do I have any other comment on
inverses before the lecture on
209
00:12:10 --> 00:12:12
eigenvalues really starts?
210
00:12:12 --> 00:12:18
Maybe I do have one comment,
one important comment.
211
00:12:18 --> 00:12:20
It's this, and I won't
develop it in full,
212
00:12:20 --> 00:12:24
but let's just say it.
213
00:12:24 --> 00:12:28
What if the load is
not a delta function?
214
00:12:28 --> 00:12:31
What if I have other loads?
215
00:12:31 --> 00:12:35
Like the uniform load of all
ones or any other load?
216
00:12:35 --> 00:12:45
What if the discrete load
here is not a delta vector?
217
00:12:45 --> 00:12:48
I now know the responses
to each column of
218
00:12:48 --> 00:12:50
the identity, right?
219
00:12:50 --> 00:12:54
If I put a load in position
one, there's the response.
220
00:12:54 --> 00:12:58
If I put a load in position
two, there is the response.
221
00:12:58 --> 00:13:03
Now, what if I
have other loads?
222
00:13:03 --> 00:13:05
Let me take a typical load.
223
00:13:05 --> 00:13:10
What if the load was, well,
the one we looked at before.
224
00:13:10 --> 00:13:13
If the load was all ones.
225
00:13:13 --> 00:13:20
So that I had, the bar
was hanging by its own
226
00:13:20 --> 00:13:24
weight, let's say.
227
00:13:24 --> 00:13:29
In other words, could I
solve all problems by
228
00:13:29 --> 00:13:31
knowing these answers?
229
00:13:31 --> 00:13:33
That's what I'm
trying to get to.
230
00:13:33 --> 00:13:38
If I know these special delta
loads, then can I get the
231
00:13:38 --> 00:13:41
solution for every load?
232
00:13:41 --> 00:13:42
Yes, no?
233
00:13:42 --> 00:13:43
What do you think?
234
00:13:43 --> 00:13:45
Yes, right.
235
00:13:45 --> 00:13:49
Now with this matrix it's kind
of easy to see because if you
236
00:13:49 --> 00:13:53
know the inverse matrix, well
you're obviously in business.
237
00:13:53 --> 00:13:59
If I had another load, say
another load f for load, I
238
00:13:59 --> 00:14:03
would just multiply by
K inverse, no problem.
239
00:14:03 --> 00:14:05
But I want to look
a little deeper.
240
00:14:05 --> 00:14:11
Because if I had other loads
here than a delta function,
241
00:14:11 --> 00:14:15
obviously if I had two delta
functions I could just
242
00:14:15 --> 00:14:17
combine the two solutions.
243
00:14:17 --> 00:14:20
That's linearity that
we're using all the time.
244
00:14:20 --> 00:14:23
If I had ten delta functions
I could combine them.
245
00:14:23 --> 00:14:29
But then suppose I had instead
of a bunch of spikes, instead
246
00:14:29 --> 00:14:33
of a bunch of point loads,
I had a distributed load.
247
00:14:33 --> 00:14:38
Like all ones, how
could I do it?
248
00:14:38 --> 00:14:39
Main point is I could.
249
00:14:39 --> 00:14:40
Right?
250
00:14:40 --> 00:14:44
If I know these answers,
I know all answers.
251
00:14:44 --> 00:14:49
If I know the response to a
load at each point, then-- come
252
00:14:49 --> 00:14:50
back to the discrete one.
253
00:14:50 --> 00:14:57
What would be the answer if
the load was all ones?
254
00:14:57 --> 00:15:06
Suppose I now try to solve
the equation Ku=ones(4,1),
255
00:15:06 --> 00:15:08
so all ones.
256
00:15:08 --> 00:15:09
What would be the answer?
257
00:15:09 --> 00:15:12
How would I get it?
258
00:15:12 --> 00:15:15
I would just add the columns.
259
00:15:15 --> 00:15:20
Now why would I do that?
260
00:15:20 --> 00:15:21
Right.
261
00:15:21 --> 00:15:27
Because this, the right-hand
side, the input is the sum of
262
00:15:27 --> 00:15:31
the four columns, the
four special inputs.
263
00:15:31 --> 00:15:36
So the output is the sum of
the four outputs, right.
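[That superposition argument in code form — a sketch: the all-ones input is the sum of the four columns of the identity, so the output is the sum of the columns of K inverse:]

```python
import numpy as np

K = np.array([[ 2, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  2]], dtype=float)
Kinv = np.linalg.inv(K)

# The all-ones load is the sum of the four delta loads (columns of I),
# so by linearity the solution of K u = ones is the sum of the
# columns of K inverse.
u = np.linalg.solve(K, np.ones(4))
print(np.allclose(u, Kinv.sum(axis=1)))
```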
264
00:15:36 --> 00:15:39
In other words, as you saw,
we must know everything.
265
00:15:39 --> 00:15:41
And that's the way
we really know it.
266
00:15:41 --> 00:15:42
By linearity.
267
00:15:42 --> 00:15:47
If the input is a combination
of these, the output is the
268
00:15:47 --> 00:15:49
same combination of those.
269
00:15:49 --> 00:15:50
Right.
270
00:15:50 --> 00:15:55
So, for example, in this T
case, if input was, if I
271
00:15:55 --> 00:16:02
did Tu=ones, I would just
add those and the output
272
00:16:02 --> 00:16:06
would be (10, 9, 7, 4), the sum of those columns.
273
00:16:06 --> 00:16:10
That would be the output
from the all-ones load.
274
00:16:10 --> 00:16:20
And now, oh boy.
275
00:16:20 --> 00:16:26
Actually, let me just introduce
a guy's name for these
276
00:16:26 --> 00:16:31
solutions and not
today show you.
277
00:16:31 --> 00:16:33
You have the idea, of course.
278
00:16:33 --> 00:16:37
Here we added because
everything was discrete.
279
00:16:37 --> 00:16:40
So you know what we're
going to do over here.
280
00:16:40 --> 00:16:44
We'll take integrals, right?
281
00:16:44 --> 00:16:51
A general load will be an
integral over point loads.
282
00:16:51 --> 00:16:53
That's the idea.
283
00:16:53 --> 00:16:54
A fundamental idea.
284
00:16:54 --> 00:17:00
That some other load, f(x) is
an integral of these guys.
285
00:17:00 --> 00:17:05
So the solution will be the
same integral of these guys.
286
00:17:05 --> 00:17:08
Let me not go there except to
tell you the name, because
287
00:17:08 --> 00:17:11
it's a very famous name.
288
00:17:11 --> 00:17:16
This solution u with the
delta function is called
289
00:17:16 --> 00:17:17
the Green's function.
290
00:17:17 --> 00:17:20
So I've now introduced
the idea, this is the
291
00:17:20 --> 00:17:21
Green's function.
292
00:17:21 --> 00:17:25
This guy is the Green's
function for the
293
00:17:25 --> 00:17:30
fixed-fixed problem.
294
00:17:30 --> 00:17:33
And this guy is the
Green's function for
295
00:17:33 --> 00:17:36
the free-fixed problem.
296
00:17:36 --> 00:17:40
And the whole point is, maybe
this is the one point I want
297
00:17:40 --> 00:17:44
you to sort of see
always by analogy.
298
00:17:44 --> 00:17:50
The Green's function is
just like the inverse.
299
00:17:50 --> 00:17:52
What is the Green's function?
300
00:17:52 --> 00:17:59
The Green's function is the
response u(x) at the point x
301
00:17:59 --> 00:18:03
when the input, when
the impulse, is at a.
302
00:18:03 --> 00:18:04
So it sort of depends
on two things.
303
00:18:04 --> 00:18:09
It depends on the position a
of the input and it tells you
304
00:18:09 --> 00:18:14
the response at position x.
305
00:18:14 --> 00:18:19
And often we would use the
letter G for Green's.
306
00:18:19 --> 00:18:22
So it depends on x and a.
307
00:18:23 --> 00:18:30
And maybe I'm happy if you just
sort of see in some way what we
308
00:18:30 --> 00:18:33
did there is just like
what we did here.
309
00:18:33 --> 00:18:37
And therefore the Green's
function must be just a
310
00:18:37 --> 00:18:46
differential, continuous
version of an inverse matrix.
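[A way to see that analogy numerically — a sketch, assuming the standard scaling in which K/h² approximates -d²/dx² and the discrete delta carries weight 1/h, so that K inverse samples the Green's function divided by h:]

```python
import numpy as np

n = 4
h = 1.0 / (n + 1)                      # mesh width, nodes x_i = i*h
K = np.array([[ 2, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  2]], dtype=float)

# Fixed-fixed Green's function for -u'' = delta(x-a) on [0,1]:
# up with slope 1-a, then down with slope -a (the hat from the lecture).
def G(x, a):
    return x * (1 - a) if x <= a else a * (1 - x)

x = h * np.arange(1, n + 1)
G_grid = np.array([[G(xi, aj) for aj in x] for xi in x])

# K inverse is the discrete Green's function: same values, scaled by 1/h.
print(np.allclose(np.linalg.inv(K), G_grid / h))
```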
311
00:18:46 --> 00:18:54
Let's move on to eigenvalues
with that point sort of made,
312
00:18:54 --> 00:18:59
but not driven home by
many, many examples.
313
00:18:59 --> 00:19:15
Question, I'll take
a question, shoot.
314
00:19:15 --> 00:19:21
Why did I increase zero, three,
six and then decrease from six?
315
00:19:21 --> 00:19:29
Well intuitively it's because
this is copying this.
316
00:19:29 --> 00:19:32
What's wonderful is that
it's a perfect copy.
317
00:19:32 --> 00:19:36
I mean, intuitively the
solution to our difference
318
00:19:36 --> 00:19:39
equation should be like
the solution to our
319
00:19:39 --> 00:19:40
differential equation.
320
00:19:40 --> 00:19:44
That's why if we have some
computational, some
321
00:19:44 --> 00:19:47
differential equation that we
can't solve, which would be
322
00:19:47 --> 00:19:51
much more typical than this
one, that we couldn't solve it
323
00:19:51 --> 00:19:57
exactly by pencil and paper, we
would replace derivatives by
324
00:19:57 --> 00:20:00
differences and go over here
and we would hope that they
325
00:20:00 --> 00:20:02
were like pretty close.
326
00:20:02 --> 00:20:07
Here they're right,
they're the same.
327
00:20:07 --> 00:20:08
Oh the other columns?
328
00:20:08 --> 00:20:09
Absolutely.
329
00:20:09 --> 00:20:11
These guys?
330
00:20:11 --> 00:20:14
Zero, two, four, six going up.
331
00:20:14 --> 00:20:18
Six, three, zero coming back.
332
00:20:18 --> 00:20:25
So that's a discrete
version of one like that.
333
00:20:25 --> 00:20:29
And then the next guy and the
last guy would be going up
334
00:20:29 --> 00:20:34
one, two, three, four
and then sudden drop.
335
00:20:34 --> 00:20:35
Thanks for all questions.
336
00:20:35 --> 00:20:40
I mean, this sort of, by adding
these guys in, the first one
337
00:20:40 --> 00:20:41
actually went up that way.
338
00:20:41 --> 00:20:45
You see the Green's functions.
339
00:20:45 --> 00:20:48
But of course this has a
Green's function for every
340
00:20:48 --> 00:20:53
a. x and a are running all
the way from zero to one.
341
00:20:53 --> 00:20:58
Here they're just
discrete positions.
342
00:20:58 --> 00:21:02
Thanks.
343
00:21:02 --> 00:21:07
So playing with these delta
functions and coming up with
344
00:21:07 --> 00:21:12
this solution, well, as I say,
different ways to do it.
345
00:21:12 --> 00:21:16
I worked through one way
in class last time.
346
00:21:16 --> 00:21:18
It takes practice.
347
00:21:18 --> 00:21:21
So that's what the
homework's really for.
348
00:21:21 --> 00:21:26
You can see me come up with
this thing, then you can, with
349
00:21:26 --> 00:21:29
leisure, you can follow the
steps, but you've gotta
350
00:21:29 --> 00:21:32
do it yourself to see.
351
00:21:32 --> 00:21:36
Eigenvalues and, of
course, eigenvectors.
352
00:21:36 --> 00:21:46
We have to give
them a fair shot.
353
00:21:46 --> 00:21:49
Square matrix.
354
00:21:49 --> 00:21:54
So I'm talking about general,
what eigenvectors and
355
00:21:54 --> 00:21:57
eigenvalues are and
why do we want them.
356
00:21:57 --> 00:22:01
I'm always trying to say
what's the purpose, you know,
357
00:22:01 --> 00:22:07
not doing this just for
abstract linear algebra.
358
00:22:07 --> 00:22:11
We do this, we look for
these things because they
359
00:22:11 --> 00:22:16
tremendously simplify a
problem if we can find them.
360
00:22:16 --> 00:22:19
So what's an eigenvector?
361
00:22:19 --> 00:22:24
The eigenvalue is this number,
lambda, and the eigenvector
362
00:22:24 --> 00:22:26
is this vector y.
363
00:22:26 --> 00:22:33
And now, how do I
think about those?
364
00:22:33 --> 00:22:37
Suppose I take a vector
and I multiply by A.
365
00:22:37 --> 00:22:42
So the vector is headed
off in some direction.
366
00:22:42 --> 00:22:44
Here's a vector v.
367
00:22:44 --> 00:22:47
If I multiply, and I'm given
this matrix, so I'm given the
368
00:22:47 --> 00:22:51
matrix, whatever my matrix is.
369
00:22:51 --> 00:22:54
Could be one of those
matrices, any other matrix.
370
00:22:54 --> 00:23:00
If I multiply that by v,
I get some result, Av.
371
00:23:00 --> 00:23:01
What do I do?
372
00:23:01 --> 00:23:06
I look at that and I say that
v was not an eigenvector.
373
00:23:06 --> 00:23:11
Eigenvectors are the special
vectors which come out
374
00:23:11 --> 00:23:12
in the same direction.
375
00:23:12 --> 00:23:15
Av comes out parallel to v.
376
00:23:15 --> 00:23:18
So this was not an eigenvector.
377
00:23:18 --> 00:23:21
Very few vectors are
eigenvectors, they're
378
00:23:21 --> 00:23:22
very special.
379
00:23:22 --> 00:23:25
Most vectors, that'll
be a typical picture.
380
00:23:25 --> 00:23:33
But there's a few of them
where I've a vector y
381
00:23:33 --> 00:23:35
and I multiply by A.
382
00:23:35 --> 00:23:36
And then what's the point?
383
00:23:36 --> 00:23:42
Ay is in the same direction.
384
00:23:42 --> 00:23:45
It's on that same line as y.
385
00:23:45 --> 00:23:48
It could be, it might
be twice as far out.
386
00:23:48 --> 00:23:49
That would be Ay=2y.
387
00:23:51 --> 00:23:53
It might go backwards.
388
00:23:53 --> 00:23:56
This would be a
possibility, Ay=-y.
389
00:23:56 --> 00:23:58
390
00:23:58 --> 00:24:02
It could be just halfway.
391
00:24:02 --> 00:24:05
It could be, not move at all.
392
00:24:05 --> 00:24:06
That's even a possibility.
393
00:24:06 --> 00:24:06
Ay=0y.
394
00:24:07 --> 00:24:10
Count that.
395
00:24:10 --> 00:24:17
Those y's are eigenvectors and
the eigenvalue is just, from
396
00:24:17 --> 00:24:19
this point of view, the
eigenvalue has come in second
397
00:24:19 --> 00:24:24
because it's, so y was a
special vector that
398
00:24:24 --> 00:24:26
kept its direction.
399
00:24:26 --> 00:24:32
And then lambda is just the
number, the two, the zero, the
400
00:24:32 --> 00:24:39
minus one, the 1/2 that tells
you stretching, shrinking,
401
00:24:39 --> 00:24:41
reversing, whatever.
402
00:24:41 --> 00:24:42
That's the number.
403
00:24:42 --> 00:24:45
But y is the vector.
404
00:24:45 --> 00:24:54
And notice that if I knew y and
I knew it was an eigenvector,
405
00:24:54 --> 00:24:59
then of course if I multiply by
A, I'll learn the eigenvalue.
406
00:24:59 --> 00:25:02
And if I knew an eigenvalue,
you'll see how I could
407
00:25:02 --> 00:25:03
find the eigenvector.
408
00:25:03 --> 00:25:06
Problem is you have
to find them both.
409
00:25:06 --> 00:25:07
And they multiply each other.
410
00:25:07 --> 00:25:11
So we're not talking about
linear equations anymore.
411
00:25:11 --> 00:25:13
Because one unknown is
multiplying another.
412
00:25:13 --> 00:25:19
But we'll find a way to look
to discover eigenvectors
413
00:25:19 --> 00:25:23
and eigenvalues.
414
00:25:23 --> 00:25:27
I said I would try to make
clear what's the purpose.
415
00:25:27 --> 00:25:36
The purpose is that in this
direction on this y line, line
416
00:25:36 --> 00:25:43
of multiples of y, A is just
acting like a number.
417
00:25:43 --> 00:25:48
A is some big n by n,
1,000 by 1,000 matrix.
418
00:25:48 --> 00:25:50
So a million numbers.
419
00:25:50 --> 00:25:59
But on this line if we find an
eigenline you could say, an
420
00:25:59 --> 00:26:03
eigendirection in that
direction, all the
421
00:26:03 --> 00:26:06
complications of A are gone.
422
00:26:06 --> 00:26:08
It's just acting like a number.
423
00:26:08 --> 00:26:14
So in particular we could solve
1,000 differential equations
424
00:26:14 --> 00:26:23
with 1,000 unknown u's with
this 1,000 by 1,000 matrix.
425
00:26:23 --> 00:26:27
We can find a solution
and this is where the
426
00:26:27 --> 00:26:34
eigenvector eigenvalue
are going to pay off.
427
00:26:34 --> 00:26:35
You recognize this.
428
00:26:35 --> 00:26:38
Matrix A is of size 1,000.
429
00:26:38 --> 00:26:41
And u is a vector
of 1,000 unknowns.
430
00:26:41 --> 00:26:44
So that's a system
of 1,000 equations.
431
00:26:44 --> 00:26:50
But if we have found an
eigenvector and its eigenvalue
432
00:26:50 --> 00:26:56
then the equation will, if it
starts in that direction it'll
433
00:26:56 --> 00:26:59
stay in that direction and
the matrix will just be
434
00:26:59 --> 00:27:01
acting like a number.
435
00:27:01 --> 00:27:03
And we know how to
solve u'=lambda*u.
436
00:27:03 --> 00:27:06
437
00:27:06 --> 00:27:10
That one by one scalar problem
we know how to solve.
438
00:27:10 --> 00:27:13
The solution to that
is e to the lambda*t.
439
00:27:13 --> 00:27:17
440
00:27:17 --> 00:27:21
And of course it could
have a constant in front.
441
00:27:21 --> 00:27:25
Don't forget that these
equations are linear.
442
00:27:25 --> 00:27:30
If I multiply it, if I take
2e^(lambda*t), I have a two
443
00:27:30 --> 00:27:32
here and a two here and
it's just as good.
444
00:27:32 --> 00:27:37
So I better allow that as well.
445
00:27:37 --> 00:27:39
A constant times e^(lambda*t)y.
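[That solution can be checked along an eigenvector — a sketch with the same 2-by-2 example matrix: the derivative of u(t) = c e^(lambda t) y is lambda·u, and A u equals lambda·u too, so u' = A u holds:]

```python
import numpy as np

A = np.array([[2., -1.],
              [-1., 2.]])
lam, Y = np.linalg.eig(A)              # eigenvalues and eigenvectors of A
l, y = lam[0], Y[:, 0]

# u(t) = c e^(lambda t) y: its derivative is lambda*u, and A u = lambda*u,
# so u' = A u holds along the eigenvector for any constant c.
c, t = 2.0, 0.7
u = c * np.exp(l * t) * y
print(np.allclose(A @ u, l * u))
```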
446
00:27:41 --> 00:27:43
Notice this is a vector.
447
00:27:43 --> 00:27:47
It's a number times a
number, the growth.
448
00:27:47 --> 00:27:50
So the lambda is now, for the
differential equation, the
449
00:27:50 --> 00:27:54
lambda, this number
lambda is crucial.
450
00:27:54 --> 00:27:58
It's telling us whether the
solution grows, whether it
451
00:27:58 --> 00:28:01
decays, whether it oscillates.
452
00:28:01 --> 00:28:05
And we're just looking at this
one normal mode, you could say
453
00:28:05 --> 00:28:09
normal mode for eigenvector y.
454
00:28:09 --> 00:28:18
We certainly have not found
all possible solutions.
455
00:28:18 --> 00:28:25
If we have an eigenvector,
we found that one.
456
00:28:25 --> 00:28:30
And there's other uses
and then, let me think.
457
00:28:30 --> 00:28:31
Other uses, what?
458
00:28:31 --> 00:28:34
So let me write again the
fundamental equation,
459
00:28:34 --> 00:28:34
Ay=lambda*y.
460
00:28:34 --> 00:28:37
461
00:28:37 --> 00:28:41
So that was a
differential equation.
462
00:28:41 --> 00:28:43
Going forward in time.
463
00:28:43 --> 00:28:48
Now if we go forward in
steps we might multiply
464
00:28:48 --> 00:28:55
by A at every step.
465
00:28:55 --> 00:28:59
Tell me an eigenvector
of A squared.
466
00:28:59 --> 00:29:01
I'm looking for a vector that
doesn't change direction
467
00:29:01 --> 00:29:05
when I multiply twice by A.
468
00:29:05 --> 00:29:10
You're going to tell me
it's y. y will work.
469
00:29:10 --> 00:29:15
If I multiply once by A
I get lambda times y.
470
00:29:15 --> 00:29:20
When I multiply again by A I
get lambda squared times y.
471
00:29:20 --> 00:29:30
You see eigenvalues are great
for powers of a matrix, for
472
00:29:30 --> 00:29:33
differential equations.
473
00:29:33 --> 00:29:37
The nth power will just take
the eigenvalue to the nth.
474
00:29:37 --> 00:29:42
The nth power of A will just
have lambda to the nth there.
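[The powers claim, verified on the same small example matrix — a sketch: the nth power of A acts on the eigenvector by the nth power of the eigenvalue:]

```python
import numpy as np

A = np.array([[2., -1.],
              [-1., 2.]])
lam, Y = np.linalg.eig(A)
l, y = lam[0], Y[:, 0]

# A^n y = lambda^n y: the nth power of the matrix just takes
# the eigenvalue to the nth power, same eigenvector.
n = 5
print(np.allclose(np.linalg.matrix_power(A, n) @ y, l**n * y))
```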
475
00:29:42 --> 00:29:47
You know, the pivots of
a matrix are all messed
476
00:29:47 --> 00:29:49
up when I square it.
477
00:29:49 --> 00:29:52
I can't see what's
happening with the pivots.
478
00:29:52 --> 00:29:56
The eigenvalues are a different
way to look at a matrix.
479
00:29:56 --> 00:30:01
The pivots are critical numbers
for steady-state problems.
480
00:30:01 --> 00:30:07
The eigenvalues are the
critical numbers for moving
481
00:30:07 --> 00:30:11
problems, dynamic problems,
things are oscillating
482
00:30:11 --> 00:30:13
or growing or decaying.
483
00:30:13 --> 00:30:21
And by the way, let's just
recognize since this is the
484
00:30:21 --> 00:30:29
only thing that's changing in
time, what would be the, I'll
485
00:30:29 --> 00:30:30
just go down here,
e^(lambda*t).
486
00:30:31 --> 00:30:32
Let's just look and see.
487
00:30:32 --> 00:30:35
When would I have decay?
488
00:30:35 --> 00:30:38
Which you might want
to call stability.
489
00:30:38 --> 00:30:40
A stable problem.
490
00:30:40 --> 00:30:43
What would be the condition
on lambda
491
00:30:43 --> 00:30:46
for this to decay?
492
00:30:46 --> 00:30:49
Lambda less than zero.
493
00:30:49 --> 00:30:52
Now there's one little
bit of bad news.
494
00:30:52 --> 00:30:55
Lambda could be complex.
495
00:30:55 --> 00:30:58
Lambda could be 3+4i.
496
00:30:58 --> 00:31:00
497
00:31:00 --> 00:31:03
It can be a complex
number, these eigenvalues
498
00:31:03 --> 00:31:09
even if A is real.
499
00:31:09 --> 00:31:11
You'll say, how'd that
happen, let me see?
500
00:31:11 --> 00:31:13
I didn't think.
501
00:31:13 --> 00:31:14
Well, let me finish
this thought.
502
00:31:14 --> 00:31:18
Suppose lambda was 3+4i.
503
00:31:18 --> 00:31:22
504
00:31:22 --> 00:31:25
So I'm thinking about what
would e^(lambda*t)
505
00:31:25 --> 00:31:27
do in that case?
506
00:31:27 --> 00:31:30
So this is a small example.
507
00:31:30 --> 00:31:32
If I had e^((3+4i)t).
508
00:31:32 --> 00:31:35
509
00:31:35 --> 00:31:40
What does that do
as time grows?
510
00:31:40 --> 00:31:42
It's going to grow
and oscillate.
511
00:31:42 --> 00:31:45
And what decides the growth?
512
00:31:45 --> 00:31:46
The real part.
513
00:31:46 --> 00:31:50
So it's really the decay
or growth is decided
514
00:31:50 --> 00:31:51
by the real part.
515
00:31:51 --> 00:31:55
The three, e^(3t),
that would be a growth.
516
00:31:55 --> 00:31:58
Let me put growth.
517
00:31:58 --> 00:32:01
And that would be, of
course, unstable.
518
00:32:01 --> 00:32:05
And that's a problem when
I have a real part of
519
00:32:05 --> 00:32:07
lambda bigger than zero.
520
00:32:07 --> 00:32:13
And then if lambda has a zero
real part, so it's pure
521
00:32:13 --> 00:32:17
oscillation, let me just
take a case like that.
522
00:32:17 --> 00:32:18
So e^(4it).
523
00:32:18 --> 00:32:20
524
00:32:20 --> 00:32:24
So that would be,
oscillating, right?
525
00:32:24 --> 00:32:31
It's cos(4t) + i*sin(4t),
it's just oscillating.
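That split into real part (growth or decay) and imaginary part (pure oscillation) can be verified numerically; a short NumPy sketch, with illustrative values of t:

```python
import numpy as np

lam = 3 + 4j   # a complex eigenvalue with real part 3

# |e^(lambda t)| = e^(Re(lambda) t): only the real part controls
# growth or decay; the imaginary part only makes it oscillate.
for t in [0.0, 0.5, 2.0]:
    assert np.isclose(abs(np.exp(lam * t)), np.exp(lam.real * t))

# Pure imaginary exponent: e^(4it) = cos(4t) + i sin(4t), magnitude exactly 1.
t = 1.3
z = np.exp(4j * t)
assert np.isclose(abs(z), 1.0)
assert np.isclose(z.real, np.cos(4 * t)) and np.isclose(z.imag, np.sin(4 * t))
```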
526
00:32:31 --> 00:32:39
So in this discussion we've
seen growth and decay.
527
00:32:39 --> 00:32:41
Tell me that parallels
because I'm always shooting
528
00:32:41 --> 00:32:43
for the parallels.
529
00:32:43 --> 00:32:45
What about the growth of A?
530
00:32:45 --> 00:32:52
What matrices, how can
I recognize a matrix
531
00:32:52 --> 00:32:56
whose powers grow?
532
00:32:56 --> 00:33:02
How can I recognize a matrix
whose powers go to zero?
533
00:33:02 --> 00:33:06
I'm asking you for powers
here, over there for
534
00:33:06 --> 00:33:08
exponentials somehow.
535
00:33:08 --> 00:33:16
So here would be A to higher
and higher powers goes to
536
00:33:16 --> 00:33:18
zero, the zero matrix.
537
00:33:18 --> 00:33:21
In other words, when I
multiply, multiply, multiply by
538
00:33:21 --> 00:33:25
that matrix I get smaller and
smaller and smaller matrices,
539
00:33:25 --> 00:33:26
zero in the limit.
540
00:33:26 --> 00:33:33
What do you think's the
test on the lambda now?
541
00:33:33 --> 00:33:37
So what are the eigenvalues
of A to the K?
542
00:33:37 --> 00:33:38
Let's see.
543
00:33:38 --> 00:33:41
If A had eigenvalues lambda, A
squared will have eigenvalues
544
00:33:41 --> 00:33:45
lambda squared, A cubed will
have eigenvalues lambda cubed,
545
00:33:45 --> 00:33:48
A to the thousandth will have
eigenvalues lambda
546
00:33:48 --> 00:33:49
to the thousandth.
547
00:33:49 --> 00:33:54
And what's the test for
that to be getting small?
548
00:33:54 --> 00:33:58
Lambda less than one.
549
00:33:58 --> 00:34:03
So the test for stability will
be in the discrete case.
550
00:34:03 --> 00:34:08
It won't be the real part of
lambda, it'll be the size
551
00:34:08 --> 00:34:10
of lambda less than one.
552
00:34:10 --> 00:34:16
And growth would be the size
of lambda greater than one.
553
00:34:16 --> 00:34:20
And again, there'd be this
borderline case when
554
00:34:20 --> 00:34:24
the eigenvalue has
magnitude exactly one.
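The discrete test can be sketched the same way: when every eigenvalue has magnitude below one, the powers of the matrix go to the zero matrix. The matrix below is an assumed illustrative example:

```python
import numpy as np

# Eigenvalues 0.5 and 0.25, both inside the unit circle.
A = np.array([[0.5, 1.0 ],
              [0.0, 0.25]])
assert np.max(np.abs(np.linalg.eigvals(A))) < 1.0

# So A^k shrinks toward the zero matrix as k grows.
Ak = np.linalg.matrix_power(A, 50)
assert np.max(np.abs(Ak)) < 1e-12
```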
555
00:34:24 --> 00:34:31
So you're seeing here and also
here the idea that we may have
556
00:34:31 --> 00:34:34
to deal with complex
numbers here.
557
00:34:34 --> 00:34:38
We don't have to deal with
the whole world of complex
558
00:34:38 --> 00:34:42
functions and everything but
it's possible for complex
559
00:34:42 --> 00:34:45
numbers to come in.
560
00:34:45 --> 00:34:49
Well while I'm saying that,
why don't I give an example
561
00:34:49 --> 00:34:58
where it would come in.
562
00:34:58 --> 00:35:03
This is going to be
a real matrix with
563
00:35:03 --> 00:35:07
complex eigenvalues.
564
00:35:07 --> 00:35:11
Complex lambdas.
565
00:35:11 --> 00:35:19
It'll be an example.
566
00:35:19 --> 00:35:26
So I guess I'm looking for a
matrix where y and Ay never
567
00:35:26 --> 00:35:29
come out in the same direction.
568
00:35:29 --> 00:35:34
For real y's I know, okay,
here's a good matrix.
569
00:35:34 --> 00:35:41
Take the matrix that rotates
every vector by 90 degrees.
570
00:35:41 --> 00:35:43
Or by theta.
571
00:35:43 --> 00:35:47
But let's say here's a
matrix that rotates every
572
00:35:47 --> 00:35:55
vector by 90 degrees.
573
00:35:55 --> 00:35:57
I'm going to raise this
board and hide it behind
574
00:35:57 --> 00:35:58
there in a minute.
575
00:35:58 --> 00:36:05
I just wanted to just to open
up this thought that we will
576
00:36:05 --> 00:36:09
have to face complex numbers.
577
00:36:09 --> 00:36:12
If you know how to multiply
two complex numbers and
578
00:36:12 --> 00:36:16
add them, you're ok.
579
00:36:16 --> 00:36:20
This isn't going to
turn into a big deal.
580
00:36:20 --> 00:36:25
But let's just realize that
suppose that matrix, if I put
581
00:36:25 --> 00:36:30
in a vector y and I multiply
by that matrix, it'll turn
582
00:36:30 --> 00:36:33
it through 90 degrees.
583
00:36:33 --> 00:36:35
So y couldn't be
an eigenvector.
584
00:36:35 --> 00:36:37
That's the point I'm
trying to make.
585
00:36:37 --> 00:36:42
No real vector could be the
eigenvector of a rotation
586
00:36:42 --> 00:36:46
matrix because every
vector gets turned.
587
00:36:46 --> 00:36:52
So that's an example where
you'd have to go to complex
588
00:36:52 --> 00:37:00
vectors. And I think if I tried
the vector (1, i), so I'm letting
589
00:37:00 --> 00:37:03
the square root of minus one
into here, then I think
590
00:37:03 --> 00:37:05
it would come out.
591
00:37:05 --> 00:37:09
If I do that multiplication
I get minus i.
592
00:37:09 --> 00:37:10
And I get one.
593
00:37:10 --> 00:37:15
And I think that this
is, what is it?
594
00:37:15 --> 00:37:17
This is probably
minus i times that.
595
00:37:17 --> 00:37:32
So this is minus i
times the input.
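That rotation example checks out numerically. A NumPy sketch of the 90-degree rotation matrix, its complex eigenvalue pair, and the eigenvector (1, i):

```python
import numpy as np

# Rotation by 90 degrees: turns every real vector, so no real eigenvectors.
Q = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# The eigenvalues are the complex pair +i and -i.
lams = np.linalg.eigvals(Q)
assert np.allclose(sorted(lams.imag), [-1.0, 1.0])
assert np.allclose(lams.real, 0.0)

# The complex vector (1, i) is an eigenvector: Q(1, i) = (-i, 1) = -i (1, i).
y = np.array([1.0, 1j])
assert np.allclose(Q @ y, -1j * y)
```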
596
00:37:32 --> 00:37:34
No big deal.
597
00:37:34 --> 00:37:36
That was like, you
can forget that.
598
00:37:36 --> 00:37:43
It's just complex
numbers can come in.
599
00:37:43 --> 00:37:52
Now let me come back to the
main point about eigenvectors.
600
00:37:52 --> 00:37:57
Things can be complex.
601
00:37:57 --> 00:38:02
So the main point is
how do we use them?
602
00:38:02 --> 00:38:08
And how many are there?
603
00:38:08 --> 00:38:10
Here's the key.
604
00:38:10 --> 00:38:14
A typical, good matrix which
includes every symmetric
605
00:38:14 --> 00:38:19
matrix, so it includes all of
our examples and more, if it's
606
00:38:19 --> 00:38:24
of size 1,000, it will have
1,000 different eigenvectors.
607
00:38:24 --> 00:38:29
And let me just say for our
symmetric matrices those
608
00:38:29 --> 00:38:33
eigenvectors will all be real.
609
00:38:33 --> 00:38:37
They're great, the eigenvectors
of symmetric matrices.
610
00:38:37 --> 00:38:40
Oh, let me find them for one
particular symmetric matrix.
611
00:38:40 --> 00:38:47
Say this guy: [2, -1; -1, 2].
612
00:38:47 --> 00:38:49
So that's a matrix, two by two.
613
00:38:49 --> 00:38:53
How many eigenvectors
am I now looking for?
614
00:38:53 --> 00:38:55
Two.
615
00:38:55 --> 00:38:59
You could say, how
do I find them?
616
00:38:59 --> 00:39:07
Maybe with a two by two,
I can even just wing it.
617
00:39:07 --> 00:39:13
We can come up with a vector
that is an eigenvector.
618
00:39:13 --> 00:39:18
Actually that's what we're
going to do here is we're going
619
00:39:18 --> 00:39:21
to guess the eigenvectors and
then we're going to show that
620
00:39:21 --> 00:39:24
they really are eigenvectors
and then we'll know the
621
00:39:24 --> 00:39:27
eigenvalues and it's fantastic.
622
00:39:27 --> 00:39:31
So like let's start here
with the two by two case.
623
00:39:31 --> 00:39:33
Anybody spot an eigenvector?
624
00:39:33 --> 00:39:35
Is (1, 0) an eigenvector?
625
00:39:35 --> 00:39:36
Try (1, 0).
626
00:39:36 --> 00:39:39
What comes out of (1, 0)?
627
00:39:39 --> 00:39:43
Well that picks the
first column, right?
628
00:39:43 --> 00:39:45
That's how I see
multiplying by (1, 0).
629
00:39:45 --> 00:39:48
That says take one of
the first column.
630
00:39:48 --> 00:39:52
And is it an eigenvector?
631
00:39:52 --> 00:39:53
Yes, no?
632
00:39:53 --> 00:39:55
No.
633
00:39:55 --> 00:39:59
This vector is not in the
same direction as that one.
634
00:39:59 --> 00:40:00
No good.
635
00:40:00 --> 00:40:09
Now can you tell
me one that is?
636
00:40:09 --> 00:40:15
You're going to guess
it. Try (1, 1).
637
00:40:15 --> 00:40:21
Do the multiplication
and what do you get?
638
00:40:21 --> 00:40:23
Right?
639
00:40:23 --> 00:40:29
If I input this vector
y, what do I get out?
640
00:40:29 --> 00:40:33
Actually I get y itself.
641
00:40:33 --> 00:40:36
Right?
642
00:40:36 --> 00:40:39
The point is it didn't change
direction, and it didn't
643
00:40:39 --> 00:40:40
even change length.
644
00:40:40 --> 00:40:42
So what's the
eigenvalue for that?
645
00:40:42 --> 00:40:47
So I've got one eigenvalue
now, one eigenvector: (1, 1).
646
00:40:47 --> 00:40:51
And I've got the eigenvalue.
647
00:40:51 --> 00:40:53
So here are the
vectors, the y's.
648
00:40:53 --> 00:40:55
And here are the lambdas.
649
00:40:55 --> 00:41:01
And I've got one of them
and it's one, right?
650
00:41:01 --> 00:41:03
Would you like to
guess the other one?
651
00:41:03 --> 00:41:05
I'm only looking for
two because it's a
652
00:41:05 --> 00:41:06
two by two matrix.
653
00:41:06 --> 00:41:10
So let me erase here, hope
that you'll come up with
654
00:41:10 --> 00:41:17
another one. (1, -1) is
certainly worth a try.
655
00:41:17 --> 00:41:19
Let's test it.
656
00:41:19 --> 00:41:21
If it's an eigenvector,
then it should come out
657
00:41:21 --> 00:41:22
in the same direction.
658
00:41:22 --> 00:41:26
What do I get when I do that?
659
00:41:26 --> 00:41:28
So I do that multiplication.
660
00:41:28 --> 00:41:33
I get three and minus
three, so have
661
00:41:33 --> 00:41:35
we got an eigenvector?
662
00:41:35 --> 00:41:37
Yep.
663
00:41:37 --> 00:41:42
And what's, so if this was
y, what is this vector?
664
00:41:42 --> 00:41:42
3y.
665
00:41:44 --> 00:41:47
So the other
eigenvector is (1, -1) and the
666
00:41:47 --> 00:41:56
other eigenvalue is three.
667
00:41:56 --> 00:42:01
So we did it by
spotting it here.
668
00:42:01 --> 00:42:03
MATLAB can't do it that way.
669
00:42:03 --> 00:42:06
It's got to figure it out.
670
00:42:06 --> 00:42:12
But we're ahead of
MATLAB this time.
671
00:42:12 --> 00:42:15
So what do I notice?
672
00:42:15 --> 00:42:17
What do I notice
about this matrix?
673
00:42:17 --> 00:42:20
It was symmetric.
674
00:42:20 --> 00:42:25
And what do I notice
about the eigenvectors?
675
00:42:25 --> 00:42:29
If I show you those two
vectors, (1, 1) and (1, -1),
676
00:42:29 --> 00:42:32
what do you see there?
677
00:42:32 --> 00:42:38
They're orthogonal.
(1, 1) is orthogonal to (1, -1),
678
00:42:38 --> 00:42:40
perpendicular is the
same as orthogonal.
679
00:42:40 --> 00:42:49
These are orthogonal,
perpendicular.
680
00:42:49 --> 00:42:54
I can draw them, of course, and
see that. (1, 1) will go, if
681
00:42:54 --> 00:42:58
this is one, it'll go here.
682
00:42:58 --> 00:43:00
So that's (1, 1).
683
00:43:00 --> 00:43:03
And (1, -1) will go there,
it'll go down, this would
684
00:43:03 --> 00:43:06
be the other one, (1, -1).
685
00:43:06 --> 00:43:06
So there's y_1.
686
00:43:07 --> 00:43:08
There's y_2.
687
00:43:08 --> 00:43:11
And they are perpendicular.
688
00:43:11 --> 00:43:17
But of course I don't draw
pictures all the time.
689
00:43:17 --> 00:43:21
What's the test for two
vectors being orthogonal?
690
00:43:21 --> 00:43:23
The dot product.
691
00:43:23 --> 00:43:24
The dot product.
692
00:43:24 --> 00:43:29
The inner product:
y_1 transpose * y_2.
693
00:43:31 --> 00:43:35
Do you prefer to write it
as y_1 with a dot, y_2?
694
00:43:35 --> 00:43:38
695
00:43:38 --> 00:43:42
This is maybe better because
it's matrix notation.
696
00:43:42 --> 00:43:51
And the point is orthogonal,
the dot product is zero.
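Both facts, Ay = lambda y for each eigenvector and the zero dot product, can be checked in a few lines of NumPy. The matrix here is the symmetric two-by-two example reconstructed from the lecture's own arithmetic (trace 4, determinant 3):

```python
import numpy as np

A = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])
y1 = np.array([1.0,  1.0])   # eigenvector with eigenvalue 1
y2 = np.array([1.0, -1.0])   # eigenvector with eigenvalue 3

assert np.allclose(A @ y1, 1.0 * y1)
assert np.allclose(A @ y2, 3.0 * y2)

# Orthogonality test: y1 transpose times y2 is zero.
assert y1 @ y2 == 0.0
```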
697
00:43:51 --> 00:43:53
So that's good.
698
00:43:53 --> 00:43:56
Very good, in fact.
699
00:43:56 --> 00:43:59
So here's a very
important fact.
700
00:43:59 --> 00:44:06
Symmetric matrices have
orthogonal eigenvectors.
701
00:44:06 --> 00:44:09
What I'm trying to say is
eigenvectors and eigenvalues
702
00:44:09 --> 00:44:13
are like a new way
to look at a matrix.
703
00:44:13 --> 00:44:16
A new way to see into it.
704
00:44:16 --> 00:44:22
And when the matrix is
symmetric, what we see is
705
00:44:22 --> 00:44:25
perpendicular eigenvectors.
706
00:44:25 --> 00:44:28
And what comment do you have
about the eigenvalues of
707
00:44:28 --> 00:44:32
this symmetric matrix?
708
00:44:32 --> 00:44:37
Remembering what was
on the board for this
709
00:44:37 --> 00:44:40
anti-symmetric matrix.
710
00:44:40 --> 00:44:44
What was the point about
that anti-symmetric matrix?
711
00:44:44 --> 00:44:51
Its eigenvalues were imaginary
actually, an i there.
712
00:44:51 --> 00:44:53
Here is the opposite.
713
00:44:53 --> 00:44:57
What's the property of the
eigenvalues for a symmetric
714
00:44:57 --> 00:45:00
matrix that you just guess?
715
00:45:00 --> 00:45:02
They're real.
716
00:45:02 --> 00:45:03
They're real.
717
00:45:03 --> 00:45:07
Symmetric matrices are great
because they have real
718
00:45:07 --> 00:45:19
eigenvalues and they have
perpendicular eigenvectors and
719
00:45:19 --> 00:45:22
actually, probably if a matrix
has real eigenvalues and
720
00:45:22 --> 00:45:27
perpendicular eigenvectors,
it's going to be symmetric.
721
00:45:27 --> 00:45:32
So symmetry is a great property
and it shows up in a great way
722
00:45:32 --> 00:45:38
in this real eigenvalue, real
lambdas, and orthogonal y's
723
00:45:38 --> 00:45:48
shows up perfectly in
the eigenpicture.
724
00:45:48 --> 00:45:53
Here's a handy little check
on the eigenvalues to
725
00:45:53 --> 00:45:55
see if we got it right.
726
00:45:55 --> 00:45:56
Course we did.
727
00:45:56 --> 00:45:59
That's one and
three we can get.
728
00:45:59 --> 00:46:03
But let me just show you two
useful checks if you haven't
729
00:46:03 --> 00:46:06
seen eigenvalues before.
730
00:46:06 --> 00:46:10
If I add the eigenvalues,
what do I get?
731
00:46:10 --> 00:46:12
Four.
732
00:46:12 --> 00:46:15
And I compare that
with adding down the
733
00:46:15 --> 00:46:17
diagonal of the matrix.
734
00:46:17 --> 00:46:19
Two and two, four.
735
00:46:19 --> 00:46:21
And that check always works.
736
00:46:21 --> 00:46:25
The sum of the eigenvalues
matches the sum
737
00:46:25 --> 00:46:26
down the diagonal.
738
00:46:26 --> 00:46:30
So that's like, if you got all
the eigenvalues but one, that
739
00:46:30 --> 00:46:32
would tell you the last one.
740
00:46:32 --> 00:46:36
Because the sum of the
eigenvalues matches the
741
00:46:36 --> 00:46:39
sum down the diagonal.
742
00:46:39 --> 00:46:45
You have no clue where that
comes from but it's true.
743
00:46:45 --> 00:46:48
And another useful fact.
744
00:46:48 --> 00:46:52
If I multiply the
eigenvalues what do I get?
745
00:46:52 --> 00:46:53
Three?
746
00:46:53 --> 00:46:58
And now, where do you
see a three over here?
747
00:46:58 --> 00:47:00
The determinant.
748
00:47:00 --> 00:47:01
4-1=3.
749
00:47:03 --> 00:47:07
Can I just write those two
facts with no idea of proof.
750
00:47:07 --> 00:47:17
The sum of the lambdas, I could
write "sum" this is for any
751
00:47:17 --> 00:47:22
matrix, the sum of the lambdas
is equal to the, it's called
752
00:47:22 --> 00:47:25
the trace of the matrix.
753
00:47:25 --> 00:47:29
The trace of the matrix is
the sum down the diagonal.
754
00:47:29 --> 00:47:37
And the product of the lambdas,
lambda_1 times lambda_2 is the
755
00:47:37 --> 00:47:40
determinant of the matrix.
756
00:47:40 --> 00:47:43
Or if I had ten eigenvalues,
I would multiply all ten and
757
00:47:43 --> 00:47:47
I'd get the determinant.
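Both checks, sum of eigenvalues = trace and product of eigenvalues = determinant, sketched in NumPy on the same symmetric example:

```python
import numpy as np

# Eigenvalues 1 and 3; diagonal entries 2 and 2; determinant 4 - 1 = 3.
A = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])
lams = np.linalg.eigvals(A)

# Sum of the lambdas equals the trace, the sum down the diagonal.
assert np.isclose(lams.sum(), np.trace(A))
# Product of the lambdas equals the determinant.
assert np.isclose(lams.prod(), np.linalg.det(A))
```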
758
00:47:47 --> 00:47:51
So that's some facts
about eigenvalues.
759
00:47:51 --> 00:47:56
There's more, of course, in
section 1.5 about how you
760
00:47:56 --> 00:48:04
would find eigenvalues and
how you would use them.
761
00:48:04 --> 00:48:09
That's of course the key point,
is how would we use them.
762
00:48:09 --> 00:48:15
Let me say something more about
that, how to use eigenvalues.
763
00:48:15 --> 00:48:22
Suppose I have this system of
1,000 differential equations.
764
00:48:22 --> 00:48:27
Linear, constant coefficients,
starts from some u(0).
765
00:48:27 --> 00:48:34
766
00:48:34 --> 00:48:37
How do eigenvalues and
eigenvectors help?
767
00:48:37 --> 00:48:40
Well, first I have to find
them, that's the job.
768
00:48:40 --> 00:48:44
So suppose I find 1,000
eigenvalues and eigenvectors.
769
00:48:44 --> 00:48:50
A times eigenvector number
i is eigenvalue number i
770
00:48:50 --> 00:48:52
times eigenvector number i.
771
00:48:52 --> 00:48:58
So these, y_1
to y_1,000, are
772
00:48:58 --> 00:49:00
the eigenvectors.
773
00:49:00 --> 00:49:03
And each one has its own
eigenvalue, lambda_1
774
00:49:03 --> 00:49:04
to lambda_1,000.
775
00:49:05 --> 00:49:11
And now if I did that work,
sort of like, in advance,
776
00:49:11 --> 00:49:14
now I come to the
differential equation.
777
00:49:14 --> 00:49:21
How could I use this?
778
00:49:21 --> 00:49:27
This is now going to be the
most-- it's three steps to
779
00:49:27 --> 00:49:33
use it, three steps to use
these to get the answer.
780
00:49:33 --> 00:49:37
Ready for step one.
781
00:49:37 --> 00:49:44
Step one is break u(0)
into eigenvectors.
782
00:49:44 --> 00:49:48
Split, separate, write,
express u(0) as a
783
00:49:48 --> 00:50:02
combination of eigenvectors.
784
00:50:02 --> 00:50:05
Now step two.
785
00:50:05 --> 00:50:08
What happens to
each eigenvector?
786
00:50:08 --> 00:50:10
So this is where the
differential equation
787
00:50:10 --> 00:50:11
starts from.
788
00:50:11 --> 00:50:13
This is the initial condition.
789
00:50:13 --> 00:50:19
1,000 components of u at the
start and it's separated into
790
00:50:19 --> 00:50:23
1,000 eigenvector pieces.
791
00:50:23 --> 00:50:28
Now step two is watch
each piece separately.
792
00:50:28 --> 00:50:41
So step two will be multiply
say, c_1 by e^(lambda_1*t),
793
00:50:41 --> 00:50:44
by its growth.
794
00:50:44 --> 00:50:47
This is following
eigenvector number one.
795
00:50:47 --> 00:50:50
And in general, I would
multiply every one of the
796
00:50:50 --> 00:50:55
c's by e to those guys.
797
00:50:55 --> 00:50:59
So what would I have now?
798
00:50:59 --> 00:51:01
This is one piece of the start.
799
00:51:01 --> 00:51:05
And that gives me one
piece of the finish.
800
00:51:05 --> 00:51:14
So the finish is, the answer is
to add up the 1,000 pieces.
801
00:51:14 --> 00:51:18
And if you're with me, you see
what those 1,000 pieces are.
802
00:51:18 --> 00:51:23
Here's a piece, some multiple
of the first eigenvector.
803
00:51:23 --> 00:51:27
Now if we only were working
with that piece, we follow it
804
00:51:27 --> 00:51:31
in time by multiplying it by
e^(lambda_1*t),
805
00:51:31 --> 00:51:35
and what do we have
at a later time?
806
00:51:36 --> 00:51:36
c_1*e^(lambda_1*t)y_1.
807
00:51:41 --> 00:51:45
This piece has grown into that.
808
00:51:45 --> 00:51:48
And other pieces have
grown into other things.
809
00:51:48 --> 00:51:50
And what about the last piece?
810
00:51:50 --> 00:51:57
So what is it that
I have to add up?
811
00:51:57 --> 00:51:59
Tell me what to write here.
812
00:51:59 --> 00:52:07
c_1,000, however much of
eigenvector 1,000 was in there,
813
00:52:07 --> 00:52:14
and then finally, never written
left-handed before,
814
00:52:14 --> 00:52:20
e to the who?
815
00:52:20 --> 00:52:25
Lambda number 1,000, not
1,000 itself, but its
816
00:52:25 --> 00:52:27
eigenvalue, lambda_1,000, times t.
817
00:52:31 --> 00:52:36
This is just splitting, this
is constantly, constantly
818
00:52:36 --> 00:52:41
the method, the way to use
eigenvalues and eigenvectors.
819
00:52:41 --> 00:52:46
Split the problem into
the pieces that go,
820
00:52:46 --> 00:52:48
that are eigenvectors.
821
00:52:48 --> 00:52:53
Watch each piece,
add up the pieces.
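The three steps, split into eigenvector pieces, follow each piece by e^(lambda t), add the pieces back up, can be sketched in NumPy. The matrix, initial vector, and time below are illustrative assumptions, not from the lecture:

```python
import numpy as np

# A stable symmetric system du/dt = A u:
# eigenvalues -1 and -3, eigenvectors (1, 1) and (1, -1).
A = np.array([[-2.0,  1.0],
              [ 1.0, -2.0]])
u0 = np.array([2.0, 0.0])    # initial condition u(0)
t = 0.7

# Step 1: express u(0) as a combination of eigenvectors, u(0) = sum c_i y_i.
lams, Y = np.linalg.eigh(A)  # columns of Y are the eigenvectors
c = np.linalg.solve(Y, u0)

# Step 2: multiply each c_i by its growth factor e^(lambda_i t).
# Step 3: add up the pieces.
u = Y @ (np.exp(lams * t) * c)

# Compare with the hand solution: u(0) = (1,1) + (1,-1), so
# u(t) = e^(-t) (1, 1) + e^(-3t) (1, -1).
u_exact = np.exp(-t) * np.array([1.0, 1.0]) + np.exp(-3 * t) * np.array([1.0, -1.0])
assert np.allclose(u, u_exact)
```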
822
00:52:53 --> 00:52:56
That's why eigenvectors
are so important.
823
00:52:56 --> 00:52:59
Yeah?
824
00:52:59 --> 00:53:02
Yes, right.
825
00:53:02 --> 00:53:08
Well, now, very good question.
826
00:53:08 --> 00:53:10
Let's see.
827
00:53:10 --> 00:53:12
Well, the first thing we
have to know is that we do
828
00:53:12 --> 00:53:14
find 1,000 eigenvectors.
829
00:53:14 --> 00:53:19
And so my answer is going to
be for symmetric matrices,
830
00:53:19 --> 00:53:21
everything always works.
831
00:53:21 --> 00:53:26
For symmetric matrices, if size
is 1,000, they have 1,000
832
00:53:26 --> 00:53:28
eigenvectors, and next time
we'll have a shot
833
00:53:28 --> 00:53:30
at some of these.
834
00:53:30 --> 00:53:33
What some of them are for
these special matrices.
835
00:53:33 --> 00:53:39
So this method always works
if I've got a full family of
836
00:53:39 --> 00:53:42
independent eigenvectors.
837
00:53:42 --> 00:53:47
If it's of size 1,000, I need,
you're right, exactly right.
838
00:53:47 --> 00:53:52
To see that this was
the questionable step.
839
00:53:52 --> 00:53:55
If I haven't got 1,000
eigenvectors, I'm not going to
840
00:53:55 --> 00:53:57
be able to take that step.
841
00:53:57 --> 00:53:59
And it happens.
842
00:53:59 --> 00:54:05
I am sad to report that
some matrices haven't
843
00:54:05 --> 00:54:07
got enough eigenvectors.
844
00:54:07 --> 00:54:11
Some matrices, they collapse.
845
00:54:11 --> 00:54:15
This always happens
in math, somehow.
846
00:54:15 --> 00:54:20
Two eigenvectors collapse
into one and the matrix is
847
00:54:20 --> 00:54:23
defective, like it's a loser.
848
00:54:23 --> 00:54:28
So now you have to, of
course, the equation
849
00:54:28 --> 00:54:31
still has a solution.
850
00:54:31 --> 00:54:37
So there has to be something
there, but the pure eigenvector
851
00:54:37 --> 00:54:40
method is not going to make
it on those special matrices.
852
00:54:40 --> 00:54:44
I could write down one
but why should we give
853
00:54:44 --> 00:54:46
space to a loser?
854
00:54:46 --> 00:54:51
But what happens in that case?
855
00:54:51 --> 00:54:55
You might remember from
differential equations when two
856
00:54:55 --> 00:54:59
of these roots, these are like
roots, these lambdas are like
857
00:54:59 --> 00:55:04
roots that you found in solving
a differential equation.
858
00:55:04 --> 00:55:09
When two of them come together,
that's when the danger is.
859
00:55:09 --> 00:55:11
When I have a double
eigenvalue, then there's a
860
00:55:11 --> 00:55:15
high risk that I've only
got one eigenvector.
861
00:55:15 --> 00:55:21
And I'll just put in this
little thing what the other, so
862
00:55:21 --> 00:55:23
the e^(lambda_1*t) is fine.
863
00:55:23 --> 00:55:29
But if that y_1 is like, if the
lambda_1's in there twice,
864
00:55:29 --> 00:55:30
I need something new.
865
00:55:30 --> 00:55:34
And the new thing turns out
to be t*e^(lambda* t).
866
00:55:34 --> 00:55:38
867
00:55:38 --> 00:55:40
I don't know if
anybody remembers.
868
00:55:40 --> 00:55:44
This was probably hammered back
in differential equations that
869
00:55:44 --> 00:55:50
if you had repeated something
or other then this, you didn't
870
00:55:50 --> 00:55:53
get pure e^(lambda*t)'s, you
got also a t*e^(lambda*t).
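A minimal sketch of such a defective matrix in NumPy (an assumed illustrative example; the lecture deliberately doesn't write one down): the eigenvalue repeats, and the two computed eigenvectors collapse into one direction, so there is no full eigenvector basis.

```python
import numpy as np

# Double eigenvalue 1, but only one eigenvector direction.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
lams, Y = np.linalg.eig(A)

assert np.allclose(lams, [1.0, 1.0])   # repeated eigenvalue

# The eigenvector columns are (numerically) parallel, so the
# eigenvector matrix is singular: the pure eigenvector method fails.
assert abs(np.linalg.det(Y)) < 1e-8
```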
871
00:55:54 --> 00:55:56
Anyway that's the answer.
872
00:55:56 --> 00:55:59
That if we're short
eigenvectors, and it can
873
00:55:59 --> 00:56:02
happen, but it won't
for our good matrices.
874
00:56:02 --> 00:56:07
Ok, so Monday I've
got lots to do.
875
00:56:07 --> 00:56:11
Special eigenvalues and vectors
and then positive definite.