1
00:00:12 --> 00:00:12
OK.
2
00:00:12 --> 00:00:14
Shall we start?
3
00:00:14 --> 00:00:18
This is the second lecture on
eigenvalues.
4
00:00:18 --> 00:00:24.59
So the first lecture
reached the key equation,
5
00:00:24.59 --> 00:00:29
A x equal lambda x.
x is the eigenvector and
6
00:00:29 --> 00:00:31
lambda's the eigenvalue.
7
00:00:31 --> 00:00:33
Now to use that.
8
00:00:33 --> 00:00:39
And the good way to use that
comes after we've found them --
9
00:00:39 --> 00:00:43
so job one is to find the
eigenvalues and find the
10
00:00:43 --> 00:00:44
eigenvectors.
11
00:00:44 --> 00:00:48.99
Now after we've found them,
what do we do with them?
12
00:00:48.99 --> 00:00:53
Well, the good way to see that
is to diagonalize the matrix.
13
00:00:53 --> 00:00:55
So the matrix is A.
14
00:00:55 --> 00:00:59
And I want to show -- first of
all, this is like
15
00:00:59 --> 00:01:01
the basic fact.
16
00:01:01 --> 00:01:03
This formula.
17
00:01:03 --> 00:01:06
That's the key to
today's lecture.
18
00:01:06 --> 00:01:10
This matrix A,
I put its eigenvectors in the
19
00:01:10 --> 00:01:12
columns of a matrix S.
20
00:01:12 --> 00:01:16
So S will be the eigenvector
matrix.
21
00:01:16 --> 00:01:21
And I want to look at this
magic combination S inverse A S.
22
00:01:21 --> 00:01:27
So can I show you
what happens there?
23
00:01:27 --> 00:01:32
And notice, there's an S
inverse.
24
00:01:32 --> 00:01:39
We have to be able to invert
this eigenvector matrix S.
25
00:01:39 --> 00:01:44
So for that,
we need n independent
26
00:01:44 --> 00:01:46
eigenvectors.
27
00:01:46 --> 00:01:50
So that's
the case.
28
00:01:50 --> 00:01:50
OK.
29
00:01:50 --> 00:02:00
So suppose we have n linearly
independent eigenvectors
30
00:02:00 --> 00:02:00
of A.
31
00:02:00 --> 00:02:05
Put them in the columns of this
matrix S.
32
00:02:05 --> 00:02:12
So I'm naturally going to call
that the eigenvector matrix,
33
00:02:12 --> 00:02:18
because it's got the
eigenvectors in its columns.
34
00:02:18 --> 00:02:25
And all I want to do is show
you what happens when you
35
00:02:25 --> 00:02:27
multiply A times S.
36
00:02:27 --> 00:02:28
So A times S.
37
00:02:28 --> 00:02:34
So this is A times the matrix
with
38
00:02:34 --> 00:02:38
the first eigenvector in its
first column,
39
00:02:38 --> 00:02:41
the second eigenvector in its
second column,
40
00:02:41 --> 00:02:45
the n-th eigenvector in its
n-th column.
41
00:02:45 --> 00:02:49
And how am I going to do this
matrix multiplication?
42
00:02:49 --> 00:02:53
Well, certainly I'll do it a
column at a time.
43
00:02:53 --> 00:02:56
And what do I get?
44
00:02:56 --> 00:03:00
A times the first column gives
me the first column of the
45
00:03:00 --> 00:03:02
answer, but what is it?
46
00:03:02 --> 00:03:04
That's an eigenvector.
47
00:03:04 --> 00:03:08
A times x1 is equal to
lambda times x1.
48
00:03:08 --> 00:03:12
And that lambda
we'll call lambda one,
49
00:03:12 --> 00:03:12
of course.
50
00:03:12 --> 00:03:15
So that's the first column.
51
00:03:15 --> 00:03:18
Ax1 is the same as lambda one
x1.
52
00:03:18 --> 00:03:20
A x2 is lambda two x2.
53
00:03:20 --> 00:03:26
So on, along to the n-th
column, where we now have lambda n xn.
54
00:03:26 --> 00:03:30
Looking good,
but the next step is even
55
00:03:30 --> 00:03:30
better.
56
00:03:30 --> 00:03:35
So for the next step,
I want to separate out those
57
00:03:35 --> 00:03:39
eigenvalues, those
multiplying numbers,
58
00:03:39 --> 00:03:42
from the x-s.
59
00:03:42 --> 00:03:45
So then I'll have just what I
want.
60
00:03:45 --> 00:03:45
OK.
61
00:03:45 --> 00:03:48
So how am I going to separate
them out?
62
00:03:48 --> 00:03:53
So that, that number lambda one
is multiplying the first column.
63
00:03:53 --> 00:03:57
So if I want to factor it out
of the first column,
64
00:03:57 --> 00:04:02
I better put -- here is going
to be x1, and that's going to
65
00:04:02 --> 00:04:06
multiply this
matrix lambda one in the first
66
00:04:06 --> 00:04:07
entry and all zeros.
67
00:04:07 --> 00:04:11
Do you see that
that's going to come out right
68
00:04:11 --> 00:04:13
for the first column?
69
00:04:13 --> 00:04:17
Because we remember --
we're going back to that
70
00:04:17 --> 00:04:19
original punchline.
71
00:04:19 --> 00:04:23
That if I want a number to
multiply x1 then I can do it by
72
00:04:23 --> 00:04:27
putting x1 in that
column, in the first column,
73
00:04:27 --> 00:04:30
and putting that number there.
74
00:04:30 --> 00:04:33.13
What am I going to have
here?
75
00:04:33.13 --> 00:04:37
I'm going to have x1,
76
00:04:37 --> 00:04:38
x2, ..., xn.
77
00:04:38 --> 00:04:41
These are going to be my
columns again.
78
00:04:41 --> 00:04:43
I'm getting S back again.
79
00:04:43 --> 00:04:45
I'm getting S back again.
80
00:04:45 --> 00:04:49
But now what's it multiplied
by,
81
00:04:49 --> 00:04:51
on the right?
82
00:04:51 --> 00:04:56
If I want lambda n xn in the
last column, how do I do it?
83
00:04:56 --> 00:05:02
Well, I'll take
the last column,
84
00:05:02 --> 00:05:07
use these coefficients,
put the lambda n down there,
85
00:05:07 --> 00:05:13
and it will multiply that n-th
column and give me lambda n xn.
86
00:05:13 --> 00:05:18
There you see matrix
multiplication just working for
87
00:05:18 --> 00:05:19
us.
88
00:05:19 --> 00:05:21
So I started with A S.
89
00:05:21 --> 00:05:25
I wrote down what it meant,
A times each eigenvector.
90
00:05:25 --> 00:05:29
That gave me lambda times the
eigenvector.
91
00:05:29 --> 00:05:33
And then when I peeled off the
lambdas, they were on the
92
00:05:33 --> 00:05:36
right-hand side,
so I've got S,
93
00:05:36 --> 00:05:39.48
my matrix, back again.
94
00:05:39.48 --> 00:05:42
And this matrix,
this diagonal matrix,
95
00:05:42 --> 00:05:47
the eigenvalue matrix --
I call it capital lambda.
96
00:05:47 --> 00:05:52
Using capital letters for
matrices and lambda to prompt me
97
00:05:52 --> 00:05:57
that it's eigenvalues
that are in there.
98
00:05:57 --> 00:06:02
So you see that the eigenvalues
are just sitting down that
99
00:06:02 --> 00:06:03
diagonal?
100
00:06:03 --> 00:06:07
If I had a column x2 here,
I
101
00:06:07 --> 00:06:12
would want the lambda two in
the two two position,
102
00:06:12 --> 00:06:18
in the diagonal position,
to multiply that x2 and give me
103
00:06:18 --> 00:06:20
the lambda two x2.
104
00:06:20 --> 00:06:21
That's my formula.
105
00:06:21 --> 00:06:23
A S is S lambda.
106
00:06:23 --> 00:06:23
OK.
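The key fact A S = S Lambda is easy to check numerically. Here is a minimal sketch in NumPy (the matrix A below is an invented example, not one from the lecture; `numpy.linalg.eig` returns the eigenvectors as columns, matching the lecture's convention for S):

```python
import numpy as np

# An invented diagonalizable example (not from the lecture).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig puts the eigenvectors in the columns of S,
# exactly the convention used in the lecture.
lam, S = np.linalg.eig(A)
Lambda = np.diag(lam)          # capital Lambda: diagonal eigenvalue matrix

# The key fact of the lecture: A S = S Lambda.
print(np.allclose(A @ S, S @ Lambda))   # True
```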
107
00:06:23 --> 00:06:28
You see,
it's just a calculation.
108
00:06:28 --> 00:06:32
Now -- I mentioned,
and I
109
00:06:32 --> 00:06:36
have to mention again,
this business about n
110
00:06:36 --> 00:06:38
independent eigenvectors.
111
00:06:38 --> 00:06:41
As it stands,
this is all fine,
112
00:06:41 --> 00:06:45
I mean,
I could be repeating the same
113
00:06:45 --> 00:06:49
eigenvector, but I'm not
interested in that.
114
00:06:49 --> 00:06:56
I want to be able to invert S,
and that's where this comes in.
115
00:06:56 --> 00:07:02.54
This n independent eigenvectors
business comes in to tell me
116
00:07:02.54 --> 00:07:05
that that matrix is invertible.
117
00:07:05 --> 00:07:11
So let me, on the next board,
write down what I've got.
118
00:07:11 --> 00:07:13
A S equals S lambda.
119
00:07:13 --> 00:07:17
And now
I can multiply on the left by S
120
00:07:17 --> 00:07:18
inverse.
121
00:07:18 --> 00:07:22
So really --
I can do that,
122
00:07:22 --> 00:07:24
provided S is invertible.
123
00:07:24 --> 00:07:29
Provided my assumption of n
independent eigenvectors is
124
00:07:29 --> 00:07:29
satisfied.
125
00:07:29 --> 00:07:34
And I mentioned at the end of
last time, and I'll say again,
126
00:07:34 --> 00:07:40
that there's a small number of
matrices that don't have
127
00:07:40 --> 00:07:43
n independent
eigenvectors.
128
00:07:43 --> 00:07:47.32
So I've got to discuss
that technical point.
129
00:07:47.32 --> 00:07:52
But most
matrices that we see have
130
00:07:52 --> 00:07:56
n independent eigenvectors,
and we can diagonalize.
131
00:07:56 --> 00:07:58
This is diagonalization.
132
00:07:58 --> 00:08:02
I could also write it,
and I often will,
133
00:08:02 --> 00:08:03
the other way round.
134
00:08:03 --> 00:08:08
If I multiply on the right by S
inverse, if I took this equation
135
00:08:08 --> 00:08:12
at the top and multiplied on the
right by S inverse,
136
00:08:12 --> 00:08:15
I would have A left
here.
137
00:08:15 --> 00:08:18
Now S inverse is coming from
the right.
138
00:08:18 --> 00:08:21
So can you keep those two
straight?
139
00:08:21 --> 00:08:26
A multiplies its eigenvectors,
that's how I keep them
140
00:08:26 --> 00:08:26
straight.
141
00:08:26 --> 00:08:28
So A multiplies S.
142
00:08:28 --> 00:08:29
A multiplies S.
143
00:08:29 --> 00:08:33
And then this S inverse makes
the whole thing diagonal.
144
00:08:33 --> 00:08:37
And this is another way of
saying the same thing,
145
00:08:37 --> 00:08:40.84
putting the Ss on the other
side of the equation.
146
00:08:40.84 --> 00:08:42.66
A is S lambda S inverse.
147
00:08:42.66 --> 00:08:45
So that's the new
148
00:08:45 --> 00:08:46
factorization.
149
00:08:46 --> 00:08:52
That's the replacement for L U
from elimination, or Q R
150
00:08:52 --> 00:08:53
from Gram-Schmidt.
151
00:08:53 --> 00:08:58
And notice the form --
it's a matrix times a
152
00:08:58 --> 00:09:03
diagonal matrix times the
inverse of the first one.
153
00:09:03 --> 00:09:08
That's the combination
that we'll see throughout this
154
00:09:08 --> 00:09:09
chapter.
155
00:09:09 --> 00:09:13
This combination with an S and
an S inverse.
156
00:09:13 --> 00:09:13
OK.
157
00:09:13 --> 00:09:16
Can I just begin to use that?
158
00:09:16 --> 00:09:19
For example,
what about A squared?
159
00:09:19 --> 00:09:23
What are the eigenvalues and
eigenvectors of A squared?
160
00:09:23 --> 00:09:27
That's a straightforward
question
161
00:09:27 --> 00:09:29
with an absolutely clean
answer.
162
00:09:29 --> 00:09:33
So let me
consider A squared.
163
00:09:33 --> 00:09:36
So I start with A x equal
lambda x.
164
00:09:36 --> 00:09:39.37
And I'm headed for A squared.
165
00:09:39.37 --> 00:09:42.32
So let me multiply both sides
by A.
166
00:09:42.32 --> 00:09:46
That's one way to get A squared
on the left.
167
00:09:46 --> 00:09:49
So -- I should write these ifs
in here.
168
00:09:49 --> 00:09:53
If A x equals lambda x,
then I multiply by A,
169
00:09:53 --> 00:09:57
so I get A
squared x equals -- well,
170
00:09:57 --> 00:10:00
I'm multiplying by A,
so that's lambda A x.
171
00:10:00 --> 00:10:05
That lambda was a number,
so I just put it on the left.
172
00:10:05 --> 00:10:09
And what do I -- tell me how to
make that look better.
173
00:10:09 --> 00:10:14.84
What have I got here?
If A has the eigenvalue lambda
174
00:10:14.84 --> 00:10:17
and eigenvector x,
what's
175
00:10:17 --> 00:10:19
up with A squared?
176
00:10:19 --> 00:10:23
A squared x,
I just multiplied by A,
177
00:10:23 --> 00:10:27
but now for Ax I'm going to
substitute lambda x.
178
00:10:27 --> 00:10:30
So I've got lambda squared x.
179
00:10:30 --> 00:10:35
So from that simple
calculation, my conclusion
180
00:10:35 --> 00:10:42
is that the eigenvalues of A
squared are lambda squared.
181
00:10:42 --> 00:10:46
And the eigenvectors -- I
always think about both of
182
00:10:46 --> 00:10:46
those.
183
00:10:46 --> 00:10:49
What can I say about the
eigenvalues?
184
00:10:49 --> 00:10:51.06
They're squared.
185
00:10:51.06 --> 00:10:54
What can I say about the
eigenvectors?
186
00:10:54 --> 00:10:55.43
They're the same.
187
00:10:55.43 --> 00:10:57
The same x as for A.
188
00:10:57 --> 00:11:02
Now let me see that also
from this formula.
189
00:11:02 --> 00:11:09
How can I see what A squared is
looking like from this formula?
190
00:11:09 --> 00:11:13
So let me -- that was one way
to do it.
191
00:11:13 --> 00:11:18
Let me do it by just taking A
squared from that.
192
00:11:18 --> 00:11:24
A squared is S lambda S inverse
-- that's A -- times S lambda S
193
00:11:24 --> 00:11:29
inverse -- that's A,
which is?
194
00:11:29 --> 00:11:33
This is the beauty of
eigenvalues, eigenvectors.
195
00:11:33 --> 00:11:37
Having that S inverse and S is
the identity,
196
00:11:37 --> 00:11:41
so I've got S lambda squared S
inverse.
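This claim -- same eigenvectors, squared eigenvalues -- can be verified numerically. A sketch with an invented 2 by 2 matrix:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])   # invented example; eigenvalues 5 and 2

lam, S = np.linalg.eig(A)

# The eigenvalues of A squared are the squares of the eigenvalues of A...
lam_sq = np.linalg.eigvals(A @ A)
print(np.allclose(np.sort(lam_sq.real), np.sort((lam**2).real)))  # True

# ...and the eigenvectors are the same: A^2 = S Lambda^2 S^{-1}.
print(np.allclose(A @ A, S @ np.diag(lam**2) @ np.linalg.inv(S)))  # True
```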
197
00:11:41 --> 00:11:44
Do you see what that's telling
me?
198
00:11:44 --> 00:11:50
It's telling me the same
thing that I just learned here,
199
00:11:50 --> 00:11:54
but in matrix form.
200
00:11:54 --> 00:11:58
It's telling me that the S is
the same, the eigenvectors are
201
00:11:58 --> 00:12:01
the same, but the eigenvalues
are squared.
202
00:12:01 --> 00:12:04
Because this is -- what's
lambda squared?
203
00:12:04 --> 00:12:05
That's still diagonal.
204
00:12:05 --> 00:12:09
It's got little lambda one
squared, lambda two squared,
205
00:12:09 --> 00:12:13
down to lambda n squared on
that diagonal.
206
00:12:13 --> 00:12:16
Those are the eigenvalues,
as we just learned,
207
00:12:16 --> 00:12:18
of A
squared.
208
00:12:18 --> 00:12:18
OK.
209
00:12:18 --> 00:12:25
So -- somehow those eigenvalues
and eigenvectors are really
210
00:12:25 --> 00:12:31
giving you a way to see
what's going on inside a matrix.
211
00:12:31 --> 00:12:37
Of course I can continue that
to the K-th power,
212
00:12:37 --> 00:12:39
A to the K-th power.
213
00:12:39 --> 00:12:44
If I have
K of these together,
214
00:12:44 --> 00:12:49
do you see how S inverse S will
keep canceling
215
00:12:49 --> 00:12:50
on the inside?
216
00:12:50 --> 00:12:56.19
I'll have the S outside at the
far left, and lambda will be in
217
00:12:56.19 --> 00:12:58
there K times,
and S inverse.
218
00:12:58 --> 00:13:01
So what's that telling me?
219
00:13:01 --> 00:13:05
That's telling me that the
eigenvalues
220
00:13:05 --> 00:13:09
of A to the K-th power are the
K-th powers.
221
00:13:09 --> 00:13:15
The eigenvalues of A cubed are
the cubes of the eigenvalues of
222
00:13:15 --> 00:13:15
A.
223
00:13:15 --> 00:13:19
And the eigenvectors
are the same.
224
00:13:19 --> 00:13:19
OK.
225
00:13:19 --> 00:13:23
In other words,
eigenvalues and eigenvectors
226
00:13:23 --> 00:13:28
give a great way to understand
the powers of a matrix.
227
00:13:28 --> 00:13:32
If I take the square of a
matrix,
228
00:13:32 --> 00:13:37
or the hundredth power of a
matrix, the pivots are all over
229
00:13:37 --> 00:13:38
the place.
230
00:13:38 --> 00:13:43
L U, if I multiply L U times L
U times L U times L U a hundred
231
00:13:43 --> 00:13:46.48
times, I've got a hundred L Us.
232
00:13:46.48 --> 00:13:49
I can't do anything with them.
233
00:13:49 --> 00:13:53
But when I multiply S lambda S
inverse by itself,
234
00:13:53 --> 00:13:58
when I look at the eigenvector
picture a hundred times,
235
00:13:58 --> 00:14:03
I get a hundred or ninety-nine
of these guys canceling out
236
00:14:03 --> 00:14:08
inside, and I get A to the
hundredth is S lambda to the
237
00:14:08 --> 00:14:10
hundredth S inverse.
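That telescoping -- the ninety-nine inner S inverse S factors canceling -- shows up in a short computation. A sketch, again with an invented matrix:

```python
import numpy as np

A = np.array([[0.9, 0.2],
              [0.1, 0.8]])    # invented example matrix

lam, S = np.linalg.eig(A)

# A to the hundredth, computed directly...
direct = np.linalg.matrix_power(A, 100)

# ...and via the factorization: the inner S^{-1} S pairs cancel,
# so only the eigenvalues get raised to the hundredth power.
via_eig = S @ np.diag(lam**100) @ np.linalg.inv(S)

print(np.allclose(direct, via_eig))   # True
```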
238
00:14:10 --> 00:14:15.45
I mean, eigenvalues tell you
about powers of a matrix in a
239
00:14:15.45 --> 00:14:20
way that we had no way to
approach previously.
240
00:14:20 --> 00:14:24
For example,
when does -- when do the powers
241
00:14:24 --> 00:14:27
of a matrix go to zero?
242
00:14:27 --> 00:14:31
I would call that matrix
stable, maybe.
243
00:14:31 --> 00:14:34
So I could write down a
theorem.
244
00:14:34 --> 00:14:41
I'll write it as a theorem just
to use that word to emphasize
245
00:14:41 --> 00:14:46
that here I'm
getting this great fact from
246
00:14:46 --> 00:14:48
this eigenvalue picture.
247
00:14:48 --> 00:14:48
OK.
248
00:14:48 --> 00:14:54
A to the K approaches zero
as K gets bigger if what?
249
00:14:54 --> 00:14:58
How can I tell,
for a matrix A,
250
00:14:58 --> 00:15:01
if its powers go to zero?
251
00:15:01 --> 00:15:06
Somewhere inside that
matrix is that information.
252
00:15:06 --> 00:15:10
That information is not present
in
253
00:15:10 --> 00:15:11
the pivots.
254
00:15:11 --> 00:15:14
It's present in the
eigenvalues.
255
00:15:14 --> 00:15:19
What do I need to
know that if I take higher and
256
00:15:19 --> 00:15:23
higher powers of A,
that this matrix gets smaller
257
00:15:23 --> 00:15:25
and smaller?
258
00:15:25 --> 00:15:28
Well, S and S inverse are not
moving.
259
00:15:28 --> 00:15:31
So it's this guy that has to
get small.
260
00:15:31 --> 00:15:35.73
And that's easy
to understand.
261
00:15:35.73 --> 00:15:39
The requirement is all
eigenvalues -- so what is the
262
00:15:39 --> 00:15:40
requirement?
263
00:15:40 --> 00:15:44
The eigenvalues have to be less
than one.
264
00:15:44 --> 00:15:47
Now I have to write that
absolute value,
265
00:15:47 --> 00:15:52
because those eigenvalues could
be negative, they could be
266
00:15:52 --> 00:15:53
complex numbers.
267
00:15:53 --> 00:15:56
So I'm taking the absolute
value.
268
00:15:56 --> 00:16:00
If all
of those are below one.
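A quick sanity check of this stability criterion (a sketch; the matrix is invented, triangular so its eigenvalues 0.5 and 0.25 sit on the diagonal, both below one in absolute value):

```python
import numpy as np

# Invented example: triangular, eigenvalues 0.5 and 0.25.
A = np.array([[0.5, 1.0],
              [0.0, 0.25]])

# All |lambda| < 1 ...
assert np.all(np.abs(np.linalg.eigvals(A)) < 1)

# ... so the powers of A die out.
print(np.abs(np.linalg.matrix_power(A, 100)).max())   # essentially zero
```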
269
00:16:00 --> 00:16:05
In fact,
we can practically see why.
270
00:16:05 --> 00:16:12
And let me just say that I'm
operating on one assumption
271
00:16:12 --> 00:16:18
here, and I've got to keep
remembering that that assumption
272
00:16:18 --> 00:16:20
is still present.
273
00:16:20 --> 00:16:25
That assumption was that I had
a
274
00:16:25 --> 00:16:30
full set of
n independent eigenvectors.
275
00:16:30 --> 00:16:34
If I don't have that,
then this approach is not
276
00:16:34 --> 00:16:35
working.
277
00:16:35 --> 00:16:40
So again, a pure
eigenvalue-eigenvector approach,
278
00:16:40 --> 00:16:44
needs n independent
eigenvectors.
279
00:16:44 --> 00:16:49
If we don't have n independent
eigenvectors,
280
00:16:49 --> 00:16:52
we can't diagonalize the
matrix.
281
00:16:52 --> 00:16:55
We can't get to a diagonal
matrix.
282
00:16:55 --> 00:17:00
This diagonalization is only
possible if S inverse makes
283
00:17:00 --> 00:17:01
sense.
284
00:17:01 --> 00:17:01
OK.
285
00:17:01 --> 00:17:05.21
Can I follow up on that
point now?
286
00:17:05.21 --> 00:17:10
So you see what we get
and why we want it,
287
00:17:10 --> 00:17:14
because we get information
about the
288
00:17:14 --> 00:17:19
powers of a matrix just
immediately from the
289
00:17:19 --> 00:17:20
eigenvalues.
290
00:17:20 --> 00:17:21
OK.
291
00:17:21 --> 00:17:27
Now let me follow up on this
business of which matrices are
292
00:17:27 --> 00:17:29
diagonalizable.
293
00:17:29 --> 00:17:32
Sorry about that long word.
294
00:17:32 --> 00:17:36
So --
295
00:17:36 --> 00:17:38
here's the main point.
296
00:17:38 --> 00:17:46
A is sure to have n
independent eigenvectors
297
00:17:46 --> 00:17:58
and be -- now here comes
that word -- diagonalizable
298
00:17:58 --> 00:18:09
if -- so we might as well get
the nice case out in the open.
299
00:18:09 --> 00:18:21
The nice case is when all
the lambdas are different.
300
00:18:21 --> 00:18:31.74
That means no
repeated eigenvalues.
301
00:18:31.74 --> 00:18:32
OK.
302
00:18:32 --> 00:18:35
That's the nice case.
303
00:18:35 --> 00:18:40
If I do a random
matrix in Matlab
304
00:18:40 --> 00:18:49
and compute its
eigenvalues -- so if I took
305
00:18:49 --> 00:18:57
eig of rand(10,10), gave
that Matlab command --
306
00:18:57 --> 00:19:05.69
we'd get a
random ten by ten matrix,
307
00:19:05.69 --> 00:19:10
we would get a list of its ten
eigenvalues, and they would be
308
00:19:10 --> 00:19:11
different.
309
00:19:11 --> 00:19:14
They would be distinct is the
best word.
310
00:19:14 --> 00:19:18.95
A random ten by
ten matrix
311
00:19:18.95 --> 00:19:22
will have ten
distinct eigenvalues.
312
00:19:22 --> 00:19:25.93
And if it does,
the eigenvectors
313
00:19:25.93 --> 00:19:28
are automatically independent.
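The same experiment translated from Matlab into NumPy (a sketch, seeded so the run is reproducible):

```python
import numpy as np

rng = np.random.default_rng(0)        # seeded for reproducibility
A = rng.random((10, 10))              # the NumPy analogue of rand(10,10)

lam, S = np.linalg.eig(A)             # ten eigenvalues (possibly complex)

# With probability 1, a random matrix has distinct eigenvalues...
gaps = np.abs(lam[:, None] - lam[None, :]) + np.eye(10)
print(gaps.min() > 1e-6)              # True: no repeated eigenvalue here

# ...and then the eigenvector matrix S is invertible (full rank).
print(np.linalg.matrix_rank(S) == 10)  # True
```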
314
00:19:28 --> 00:19:30
So that's a nice fact.
315
00:19:30 --> 00:19:34
I'll refer you to the text for
the proof.
316
00:19:34 --> 00:19:39
That A is sure to have n
independent eigenvectors if the
317
00:19:39 --> 00:19:42
eigenvalues are different --
if.
318
00:19:42 --> 00:19:46
If all the eigenvalues
are different.
319
00:19:46 --> 00:19:51
It's just if some lambdas are
repeated, then I have to look
320
00:19:51 --> 00:19:53
more
closely.
321
00:19:53 --> 00:19:58
If an eigenvalue is repeated,
I have to look,
322
00:19:58 --> 00:20:01
I have to count,
I have to check.
323
00:20:01 --> 00:20:06
Has it got enough eigenvectors?
Say it's repeated three times.
324
00:20:06 --> 00:20:11
So what's a possibility?
Here is
325
00:20:11 --> 00:20:14
the repeated
possibility.
326
00:20:14 --> 00:20:20
And let me emphasize the
conclusion.
327
00:20:20 --> 00:20:26
That if I have repeated
eigenvalues, I may or may not
328
00:20:26 --> 00:20:31
have
n independent
329
00:20:31 --> 00:20:33
eigenvectors.
330
00:20:33 --> 00:20:34
I might.
331
00:20:34 --> 00:20:38
You know,
this isn't a completely
332
00:20:38 --> 00:20:40
negative case.
333
00:20:40 --> 00:20:47
The identity matrix -- suppose
I take the ten by ten identity
334
00:20:47 --> 00:20:48
matrix.
335
00:20:48 --> 00:20:53
What are the eigenvalues of
that
336
00:20:53 --> 00:20:54
matrix?
337
00:20:54 --> 00:20:57
So just take the easiest
matrix, the identity.
338
00:20:57 --> 00:21:01
If I look for its eigenvalues,
they're all ones.
339
00:21:01 --> 00:21:04
So that eigenvalue one is
repeated ten times.
340
00:21:04 --> 00:21:08
But there's no shortage of
eigenvectors for the identity
341
00:21:08 --> 00:21:09
matrix.
342
00:21:09 --> 00:21:13
In fact, every vector is an
eigenvector.
343
00:21:13 --> 00:21:16
So I can take ten independent
vectors.
344
00:21:16 --> 00:21:20
Oh, well, what happens to
everything -- if A is the
345
00:21:20 --> 00:21:24
identity matrix,
let's just think that one
346
00:21:24 --> 00:21:26
through in our head.
347
00:21:26 --> 00:21:30
If A is the identity matrix,
then it's got plenty of
348
00:21:30 --> 00:21:33
eigenvectors.
349
00:21:33 --> 00:21:35
I choose ten independent
vectors.
350
00:21:35 --> 00:21:37
They're the columns of S.
351
00:21:37 --> 00:21:40
And what do I get from S
inverse A S?
352
00:21:40 --> 00:21:42
I get I again,
right?
353
00:21:42 --> 00:21:46
If A is the identity -- and of
course that's the correct
354
00:21:46 --> 00:21:47
lambda.
355
00:21:47 --> 00:21:49.89
The matrix was already
diagonal.
356
00:21:49.89 --> 00:21:54
So if the matrix is already
diagonal, then
357
00:21:54 --> 00:21:58
the lambda is the same as the
matrix.
358
00:21:58 --> 00:22:03
A diagonal matrix has got its
eigenvalues sitting right there
359
00:22:03 --> 00:22:05
in front of you.
360
00:22:05 --> 00:22:10
Now if it's triangular,
the eigenvalues are still
361
00:22:10 --> 00:22:14
sitting there,
so let's take a case where
362
00:22:14 --> 00:22:15
it's triangular.
363
00:22:15 --> 00:22:19
Suppose A is, like,
two one, zero two.
364
00:22:19 --> 00:22:23
So there's a case that's going
to be trouble.
365
00:22:23 --> 00:22:26
There's a case that's going to
be trouble.
366
00:22:26 --> 00:22:29
First of all --
I mean,
367
00:22:29 --> 00:22:33
if we start with a
matrix, the first thing we do,
368
00:22:33 --> 00:22:39
practically without thinking is
compute the eigenvalues and
369
00:22:39 --> 00:22:40
eigenvectors.
370
00:22:40 --> 00:22:40
OK.
371
00:22:40 --> 00:22:43
So what are the eigenvalues?
372
00:22:43 --> 00:22:46
You can tell me right away what
they are.
373
00:22:46 --> 00:22:48
They're two and two,
right.
374
00:22:48 --> 00:22:53
It's a triangular matrix,
so when I do this determinant,
375
00:22:53 --> 00:22:57
shall I do this determinant of
A minus lambda I?
376
00:22:57 --> 00:23:03
I'll get this: two minus lambda,
one; zero, two minus lambda,
377
00:23:03 --> 00:23:04
right?
378
00:23:04 --> 00:23:09
I take that determinant,
so I make those into vertical
379
00:23:09 --> 00:23:11
bars to mean determinant.
380
00:23:11 --> 00:23:14
And what's the determinant?
381
00:23:14 --> 00:23:16.89
It's two minus lambda squared.
382
00:23:16.89 --> 00:23:18
What are the roots?
383
00:23:18 --> 00:23:20
Lambda equals two, twice.
384
00:23:20 --> 00:23:26.28
So the eigenvalues are lambda
equals two and two.
385
00:23:26.28 --> 00:23:27
OK, fine.
386
00:23:27 --> 00:23:30
Now the next step,
find the eigenvectors.
387
00:23:30 --> 00:23:35.67
So I look for eigenvectors,
and what do I find for this
388
00:23:35.67 --> 00:23:36
guy?
389
00:23:36 --> 00:23:41
Eigenvectors for this guy,
when I subtract two times the
390
00:23:41 --> 00:23:44
identity, so A minus two I has
zeros here.
391
00:23:44 --> 00:23:47
And I'm looking for the null
space.
392
00:23:47 --> 00:23:50
What are the
eigenvectors?
393
00:23:50 --> 00:23:56
They're
the null space of A minus
394
00:23:56 --> 00:23:57.29
lambda I.
395
00:23:57.29 --> 00:24:01
The null space is only one
dimensional.
396
00:24:01 --> 00:24:07
This is a case where I don't
have enough eigenvectors.
397
00:24:07 --> 00:24:10
My algebraic multiplicity is
two.
398
00:24:10 --> 00:24:15
I would say,
when I count how
399
00:24:15 --> 00:24:20.91
often the eigenvalue is
repeated, that's
400
00:24:20.91 --> 00:24:23
the algebraic multiplicity.
401
00:24:23 --> 00:24:27
That's the multiplicity,
how many times is it the root
402
00:24:27 --> 00:24:28
of the polynomial?
403
00:24:28 --> 00:24:32
My polynomial is two minus
lambda squared.
404
00:24:32 --> 00:24:33
It's a double root.
405
00:24:33 --> 00:24:36
So my algebraic multiplicity is
two.
406
00:24:36 --> 00:24:40
But the geometric multiplicity,
which looks for vectors,
407
00:24:40 --> 00:24:45
looks for eigenvectors --
which means the null
408
00:24:45 --> 00:24:50
space of this thing,
and the only eigenvector is one
409
00:24:50 --> 00:24:50
zero.
410
00:24:50 --> 00:24:52
That's in the null space.
411
00:24:52 --> 00:24:55
Zero one is not in the null
space.
412
00:24:55 --> 00:24:58
The null space is only one
dimensional.
413
00:24:58 --> 00:25:02
So there's a matrix --
this A, or the original A --
414
00:25:02 --> 00:25:04.61
that is not diagonalizable.
415
00:25:04.61 --> 00:25:09.13
I can't find two independent
eigenvectors.
416
00:25:09.13 --> 00:25:10
There's only one.
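The shortage can be checked by counting: the algebraic multiplicity comes from the characteristic polynomial, and the geometric multiplicity is the dimension of the null space of A minus two I. A sketch for the lecture's triangular example:

```python
import numpy as np

# The lecture's triangular example.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])

# Algebraic multiplicity: lambda = 2 is a double root.
assert np.allclose(np.linalg.eigvals(A), [2.0, 2.0])

# Geometric multiplicity: dimension of the null space of A - 2I.
geometric = A.shape[0] - np.linalg.matrix_rank(A - 2.0 * np.eye(2))
print(geometric)   # 1 -- only one independent eigenvector
```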
417
00:25:10 --> 00:25:10
OK.
418
00:25:10 --> 00:25:16
So that's a case
that I'm not
419
00:25:16 --> 00:25:17
really handling.
420
00:25:17 --> 00:25:21
For example,
when I wrote down up here that
421
00:25:21 --> 00:25:26
the powers went to zero if the
eigenvalues were below one,
422
00:25:26 --> 00:25:33
I didn't really handle that
case of repeated eigenvalues,
423
00:25:33 --> 00:25:37.29
because my reasoning was based
on this formula.
424
00:25:37.29 --> 00:25:42
And this formula is based on n
independent eigenvectors.
425
00:25:42 --> 00:25:42
OK.
426
00:25:42 --> 00:25:46
Just to say then,
there are some matrices that
427
00:25:46 --> 00:25:51
we don't cover
through diagonalization,
428
00:25:51 --> 00:25:55
but the great majority we do.
429
00:25:55 --> 00:25:55
OK.
430
00:25:55 --> 00:26:00
And we're always OK if we
have distinct
431
00:26:00 --> 00:26:01
eigenvalues.
432
00:26:01 --> 00:26:04.32
OK, that's,
like, the typical case.
433
00:26:04.32 --> 00:26:08
Because for each eigenvalue
there's at least one
434
00:26:08 --> 00:26:09
eigenvector.
435
00:26:09 --> 00:26:14
The algebraic multiplicity here
is one for every eigenvalue and
436
00:26:14 --> 00:26:18.18
the geometric multiplicity is
one.
437
00:26:18.18 --> 00:26:21
There's one eigenvector.
438
00:26:21 --> 00:26:25
And they are independent.
439
00:26:25 --> 00:26:25
OK.
440
00:26:25 --> 00:26:25
OK.
441
00:26:25 --> 00:26:31
Now let me come back to the
important case,
442
00:26:31 --> 00:26:34
when we're OK.
443
00:26:34 --> 00:26:41
The important case,
when we are diagonalizable.
444
00:26:41 --> 00:26:47.79
So let me
solve
445
00:26:47.79 --> 00:26:49
this equation.
446
00:26:49 --> 00:26:59
The equation will be this: I
start with a
447
00:26:59 --> 00:27:01
given vector u0.
448
00:27:01 --> 00:27:09
And then my equation is: at
every step, I multiply what I
449
00:27:09 --> 00:27:11
have by A.
450
00:27:11 --> 00:27:18
That equation ought to be
simple to handle.
451
00:27:18 --> 00:27:25
And I'd like to be able to
solve it.
452
00:27:25 --> 00:27:31
How would I find -- if I start
with a vector u0 and I multiply
453
00:27:31 --> 00:27:35
by A a hundred times,
what have I got?
454
00:27:35 --> 00:27:42.38
Well, I could certainly write
down a formula for the answer,
455
00:27:42.38 --> 00:27:45
so -- u1 is A u0.
456
00:27:45 --> 00:27:52
And what's u2 then?
u2 I get
457
00:27:52 --> 00:27:58
from u1 by multiplying again
by A, so I've got A twice.
458
00:27:58 --> 00:28:02
And my formula is uk,
after k steps,
459
00:28:02 --> 00:28:07.76
I've multiplied by A k times
the original u0.
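The difference equation and its closed form can be sketched directly (the matrix and starting vector here are invented for illustration):

```python
import numpy as np

A = np.array([[0.9, 0.2],
              [0.1, 0.8]])     # invented example matrix
u0 = np.array([1.0, 0.0])      # invented starting vector

# Step the difference equation u_{k+1} = A u_k a hundred times...
u = u0.copy()
for _ in range(100):
    u = A @ u

# ...which is the same as applying A^100 to u0 in one shot.
print(np.allclose(u, np.linalg.matrix_power(A, 100) @ u0))   # True
```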
460
00:28:07.76 --> 00:28:10
You see what I'm doing?
461
00:28:10 --> 00:28:16
The next section is going to
solve systems of differential
462
00:28:16 --> 00:28:17.87
equations.
463
00:28:17.87 --> 00:28:21.24
I'm going to have derivatives.
464
00:28:21.24 --> 00:28:25
This section is the nice one.
465
00:28:25 --> 00:28:28
It solves difference equations.
466
00:28:28 --> 00:28:32
I would call that a difference
equation.
467
00:28:32 --> 00:28:37
I would call that
a first-order
468
00:28:37 --> 00:28:42
system, because
it only goes up one level.
469
00:28:42 --> 00:28:48
And it's a system because
these are vectors and that's a
470
00:28:48 --> 00:28:49
matrix.
471
00:28:49 --> 00:28:52
And the solution is just
that.
472
00:28:52 --> 00:28:53
OK.
473
00:28:53 --> 00:28:55
But, that's a nice formula.
474
00:28:55 --> 00:29:00
That's the, like,
the most compact formula I
475
00:29:00 --> 00:29:04
could ever get.
u100 would be A to the one
476
00:29:04 --> 00:29:05.86
hundred u0.
477
00:29:05.86 --> 00:29:09
But how would I actually find
u100?
478
00:29:09 --> 00:29:14
How would I
discover what u100 is?
479
00:29:14 --> 00:29:19
Let me
show you how.
480
00:29:19 --> 00:29:21
Here's the idea.
481
00:29:21 --> 00:29:27
So --
to really
482
00:29:27 --> 00:29:33
solve it, I would take this
483
00:29:33 --> 00:29:40
initial vector u0 and I would
write it as a combination of
484
00:29:40 --> 00:29:42
eigenvectors.
485
00:29:42 --> 00:29:46
To really solve,
write u
486
00:29:46 --> 00:29:52
nought as a combination,
say a certain amount of the first
487
00:29:52 --> 00:29:58
eigenvector plus a certain
amount of the second eigenvector
488
00:29:58 --> 00:30:03
plus a certain amount of the
last eigenvector.
489
00:30:03 --> 00:30:05
Now multiply by A.
490
00:30:05 --> 00:30:11
You've got to see
the magic of eigenvectors
491
00:30:11 --> 00:30:13
working here.
492
00:30:13 --> 00:30:14
Multiply by A.
493
00:30:14 --> 00:30:17
So Au0 is
what?
494
00:30:17 --> 00:30:18
So A times that.
495
00:30:18 --> 00:30:23
I can
separate it out into n separate
496
00:30:23 --> 00:30:26
pieces, and that's the whole
point.
497
00:30:26 --> 00:30:31
That each of those pieces is
going in its own merry way.
498
00:30:31 --> 00:30:35.81
Each of those pieces is an
eigenvector, and when I multiply
499
00:30:35.81 --> 00:30:39
by A,
what does this piece become?
500
00:30:39 --> 00:30:45
So that's some amount of the
first -- let's suppose the
501
00:30:45 --> 00:30:49
eigenvectors are normalized to
be unit vectors.
502
00:30:49 --> 00:30:53
So that says what the
eigenvector is.
503
00:30:53 --> 00:30:58
And I need some
multiple of it to produce u0.
504
00:30:58 --> 00:31:00
OK.
505
00:31:00 --> 00:31:04
Now when I multiply by A,
what do I get?
506
00:31:04 --> 00:31:09
I get c1, which is just a
factor, times Ax1,
507
00:31:09 --> 00:31:12
but Ax1 is lambda one x1.
508
00:31:12 --> 00:31:17.61
When I multiply this by A,
I get c2 lambda two x2.
509
00:31:17.61 --> 00:31:20.96
And here I get cn lambda n xn.
510
00:31:20.96 --> 00:31:26
And suppose I multiply by A to
the hundredth power now.
511
00:31:26 --> 00:31:32
Now, having
multiplied once by A,
512
00:31:32 --> 00:31:35
let's multiply by A to the
hundredth.
513
00:31:35 --> 00:31:41
What happens to this first term
when I multiply by A to the one
514
00:31:41 --> 00:31:42
hundredth?
515
00:31:42 --> 00:31:46
It's got that factor lambda to
the hundredth.
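This expansion can be checked numerically (a sketch of my own, assuming NumPy; the symmetric matrix A here is an illustrative example, not one from the lecture): expand u0 by solving S c = u0, then compare A to the hundredth times u0 against the eigenvector pieces, each scaled by its own lambda to the hundredth.

```python
import numpy as np

# Illustrative symmetric 2x2 matrix -- my own example, not one from the lecture.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
u0 = np.array([1.0, 0.0])

# Columns of S are the eigenvectors x1, x2; lam holds lambda_1, lambda_2.
lam, S = np.linalg.eig(A)

# Expand u0 as a combination of eigenvectors: solve S c = u0 for c1, c2.
c = np.linalg.solve(S, u0)

# A^100 u0 two ways: directly, and as c1 lambda1^100 x1 + c2 lambda2^100 x2.
direct = np.linalg.matrix_power(A, 100) @ u0
via_eigenvectors = S @ (lam**100 * c)

print(np.allclose(direct, via_eigenvectors))
```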
516
00:31:46 --> 00:31:48
That's the key.
517
00:31:48 --> 00:31:54
That -- that's what I mean by
going its own merry way.
518
00:31:54 --> 00:31:56
It, it is pure eigenvector.
519
00:31:56 --> 00:32:00
It's exactly in a direction
where multiplication by A just
520
00:32:00 --> 00:32:03.5
brings in a scalar factor,
lambda one.
521
00:32:03.5 --> 00:32:07
So a hundred times brings in
this a hundred times.
522
00:32:07 --> 00:32:10
Hundred times lambda two,
hundred times lambda n.
523
00:32:10 --> 00:32:15
Actually, we're -- what are we
seeing here?
524
00:32:15 --> 00:32:18
We're seeing
this same capital
525
00:32:18 --> 00:32:24
lambda to the hundredth as in
the diagonalization.
526
00:32:24 --> 00:32:28
And we're seeing the S matrix,
the matrix S of
527
00:32:28 --> 00:32:30
eigenvectors.
528
00:32:30 --> 00:32:35
That's what this has got to
amount to.
529
00:32:35 --> 00:32:41
A lambda to the hundredth
power times an S times this
530
00:32:41 --> 00:32:46
vector c that's telling us how
much of each one is in the
531
00:32:46 --> 00:32:47
original thing.
532
00:32:47 --> 00:32:52
So if, if I had to really find
the hundredth power,
533
00:32:52 --> 00:32:55
I would take u0,
I would expand it as a
534
00:32:55 --> 00:32:59
combination of eigenvectors --
this is really S,
535
00:32:59 --> 00:33:04
the eigenvector matrix,
times c, the,
536
00:33:04 --> 00:33:07
the coefficient vector.
537
00:33:07 --> 00:33:13
And then I would immediately
then, by inserting these
538
00:33:13 --> 00:33:19
hundredth powers of eigenvalues,
I'd have the answer.
539
00:33:19 --> 00:33:24.38
So -- huh, there must be -- oh,
let's see, OK.
540
00:33:24.38 --> 00:33:27
It's -- so, yeah.
541
00:33:27 --> 00:33:31
So if u100 is A to the
hundredth times u0,
542
00:33:31 --> 00:33:37
and u0 is S c -- then you see
this formula is just this
543
00:33:37 --> 00:33:44
formula, which is the way I
would actually get hold of this,
544
00:33:44 --> 00:33:49
of this u100,
which is -- let me put it here.
545
00:33:49 --> 00:33:49
u100.
546
00:33:49 --> 00:33:54
The way
I would actually get hold of
547
00:33:54 --> 00:33:58
that -- see what
the solution is after a
548
00:33:58 --> 00:34:02.43
hundred steps,
would be -- expand the initial
549
00:34:02.43 --> 00:34:08
vector into eigenvectors and let
each eigenvector go its own way,
550
00:34:08 --> 00:34:13
multiplying by lambda at every
step,
551
00:34:13 --> 00:34:17
and therefore by lambda to the
hundredth
552
00:34:17 --> 00:34:21
power after a hundred steps.
553
00:34:21 --> 00:34:23
Can I do an example?
554
00:34:23 --> 00:34:26.78
So that's the formulas.
555
00:34:26.78 --> 00:34:30
Now let me take an example.
556
00:34:30 --> 00:34:35
I'll use the Fibonacci sequence
as an example.
557
00:34:35 --> 00:34:38
So, so Fibonacci example.
558
00:34:38 --> 00:34:43
You remember the Fibonacci
numbers?
559
00:34:43 --> 00:34:50
If we start with one and one as
F0 -- oh, I think I start with
560
00:34:50 --> 00:34:51
zero, maybe.
561
00:34:51 --> 00:34:54
Let zero and one be the first
ones.
562
00:34:54 --> 00:34:58.85
So there's F0 and F1,
the first two Fibonacci
563
00:34:58.85 --> 00:34:59
numbers.
564
00:34:59 --> 00:35:03
Then what's the rule for
Fibonacci numbers?
565
00:35:03 --> 00:35:05
Ah, they're the sum.
566
00:35:05 --> 00:35:08
The next one is the sum of
those,
567
00:35:08 --> 00:35:09
so it's one.
568
00:35:09 --> 00:35:13
The next one is the sum of
those, so it's two.
569
00:35:13 --> 00:35:17
The next one is the sum of
those, so it's three.
570
00:35:17 --> 00:35:20
Well, it looks like one two
three four five,
571
00:35:20 --> 00:35:24
but somehow it's not going to
go on that way.
572
00:35:24 --> 00:35:26.48
The next one is five,
right.
573
00:35:26.48 --> 00:35:29
Two and three makes five.
574
00:35:29 --> 00:35:31
The next one is eight.
575
00:35:31 --> 00:35:33
The next one is thirteen.
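The rule being described -- each number is the sum of the previous two, starting from zero and one -- can be sketched in a few lines (the function name `fibonacci` is my own):

```python
# The Fibonacci rule from the lecture: F(k+2) = F(k+1) + F(k), with F0 = 0, F1 = 1.
def fibonacci(n):
    F = [0, 1]
    while len(F) <= n:
        F.append(F[-1] + F[-2])
    return F[n]

print([fibonacci(k) for k in range(9)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21]
```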
576
00:35:33 --> 00:35:38
And the one hundredth Fibonacci
number is what?
577
00:35:38 --> 00:35:39.77
That's my question.
578
00:35:39.77 --> 00:35:44
How could I get a formula for
the hundredth number?
579
00:35:44 --> 00:35:47
And, for example,
how could I answer the
580
00:35:47 --> 00:35:51
question, how fast are they
growing?
581
00:35:51 --> 00:35:55
How fast are those Fibonacci
numbers
582
00:35:55 --> 00:35:56.2
growing?
583
00:35:56.2 --> 00:35:58
They're certainly growing.
584
00:35:58 --> 00:36:00
It's not a stable case.
585
00:36:00 --> 00:36:04
Whatever the eigenvalues of
whatever matrix it is,
586
00:36:04 --> 00:36:06
they're not smaller than one.
587
00:36:06 --> 00:36:08
These numbers are growing.
588
00:36:08 --> 00:36:10.83
But how fast are they growing?
589
00:36:10.83 --> 00:36:13
The answer lies in the
eigenvalue.
590
00:36:13 --> 00:36:19
So I've got to find the matrix,
so let me write down the
591
00:36:19 --> 00:36:21
Fibonacci rule.
592
00:36:21 --> 00:36:24
F(k+2) = F(k+1)+F k,
right?
593
00:36:24 --> 00:36:31
Now that's not in my -- I want
to write that as uk plus one and
594
00:36:31 --> 00:36:32
Auk.
595
00:36:32 --> 00:36:37.61
But right now what I've got is
a single equation,
596
00:36:37.61 --> 00:36:41
not a system,
and it's second-order.
597
00:36:41 --> 00:36:49.12
It's like having a second-order
differential equation with
598
00:36:49.12 --> 00:36:50
second derivatives.
599
00:36:50 --> 00:36:53
I want to get first
derivatives.
600
00:36:53 --> 00:36:57
Here I want to get first
differences.
601
00:36:57 --> 00:37:03
So the way, the way to do it is
to introduce uk will be a vector
602
00:37:03 --> 00:37:05
-- see, a small trick.
603
00:37:05 --> 00:37:08
Let uk be a vector,
F(k+1)
604
00:37:08 --> 00:37:09
and Fk.
605
00:37:09 --> 00:37:14
So I'm going to get a two by
two system, first order,
606
00:37:14 --> 00:37:19
instead of a one -- instead of
a scalar system,
607
00:37:19 --> 00:37:23.14
second order,
by a simple trick.
608
00:37:23.14 --> 00:37:29
I'm just going to add in an
equation F(k+1) equals F(k+1).
609
00:37:29 --> 00:37:32
That will be my second
equation.
610
00:37:32 --> 00:37:38
Then this is my system,
this is my unknown,
611
00:37:38 --> 00:37:41.4
and what's my one step
equation?
612
00:37:41.4 --> 00:37:46.45
So, so now u(k+1),
that's -- so u(k+1) is the left
613
00:37:46.45 --> 00:37:51
side, and what have I got here
on the right side?
614
00:37:51 --> 00:37:55
I've got some matrix
multiplying uk.
615
00:37:55 --> 00:38:00
Can you, do -- can you see that
all right?
616
00:38:00 --> 00:38:03
If you can see it,
then you can tell me what the
617
00:38:03 --> 00:38:04
matrix is.
618
00:38:04 --> 00:38:07
Do you see that I'm taking my
system here.
619
00:38:07 --> 00:38:10
I artificially made it into a
system.
620
00:38:10 --> 00:38:13
I artificially made the unknown
into a vector.
621
00:38:13 --> 00:38:18
And now I'm ready to look at
and see what the matrix
622
00:38:18 --> 00:38:18
is.
623
00:38:18 --> 00:38:24
So do you see the left side,
u(k+1) is F(k+2) F(k+1),
624
00:38:24 --> 00:38:26
that's just what I want.
625
00:38:26 --> 00:38:29
On the right side,
this remember,
626
00:38:29 --> 00:38:35
this uk here -- let me for the
moment put it as F(k+1) Fk.
627
00:38:35 --> 00:38:37
So what's the matrix?
628
00:38:37 --> 00:38:45
Well, that has a one and a one,
and that has a one and a zero.
629
00:38:45 --> 00:38:47
There's the matrix.
630
00:38:47 --> 00:38:52
Do you see that that gives me
the right-hand side?
631
00:38:52 --> 00:38:55
So there's the matrix A.
632
00:38:55 --> 00:38:58
And this is our friend uk.
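The one-step system just written down, u(k+1) = A uk with uk = (F(k+1), Fk), can be checked directly (a small NumPy sketch of the setup described above):

```python
import numpy as np

# One step of the first-order system: u_{k+1} = A u_k,
# where u_k = (F(k+1), F(k)) and A is the matrix just written down.
A = np.array([[1, 1],
              [1, 0]])

u = np.array([1, 0])  # u_0 = (F1, F0)
for _ in range(7):
    u = A @ u         # each step adds the components and shifts them down

print(u)  # [21 13], i.e. (F8, F7)
```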
633
00:38:58 --> 00:39:03
So we've got -- so that simple
trick -- changed the
634
00:39:03 --> 00:39:09
second-order scalar problem to a
first-order system.
635
00:39:09 --> 00:39:12.3
Two by two, with two unknowns.
636
00:39:12.3 --> 00:39:14
With
a matrix.
637
00:39:14 --> 00:39:16
And now what do I do?
638
00:39:16 --> 00:39:20
Well, before I even think,
I find its eigenvalues and
639
00:39:20 --> 00:39:22
eigenvectors.
640
00:39:22 --> 00:39:27
So what are the eigenvalues and
eigenvectors of that matrix?
641
00:39:27 --> 00:39:28
Let's see.
642
00:39:28 --> 00:39:32
I always -- first let me just,
like, think for a minute.
643
00:39:32 --> 00:39:36
It's two by two,
so this shouldn't be impossible
644
00:39:36 --> 00:39:38
to do.
645
00:39:38 --> 00:39:39
Let's do it.
646
00:39:39 --> 00:39:40
OK.
647
00:39:40 --> 00:39:45
So my matrix,
again, is one one one zero.
648
00:39:45 --> 00:39:48
It's symmetric,
by the way.
649
00:39:48 --> 00:39:55.59
So what I will eventually know
about symmetric matrices is that
650
00:39:55.59 --> 00:39:59.67
the eigenvalues will come out
real.
651
00:39:59.67 --> 00:40:03
I won't get any complex numbers
here.
652
00:40:03 --> 00:40:09
And the eigenvectors,
once I get those,
653
00:40:09 --> 00:40:12
actually will be orthogonal.
654
00:40:12 --> 00:40:16
But two by two,
I'm more interested in what the
655
00:40:16 --> 00:40:18
actual numbers are.
656
00:40:18 --> 00:40:21.44
What do I know about the two
numbers?
657
00:40:21.44 --> 00:40:27
Well, do you want me to
find the determinant of A minus
658
00:40:27 --> 00:40:27
lambda I?
659
00:40:27 --> 00:40:29
Sure.
660
00:40:29 --> 00:40:35
So it's the determinant of one
minus lambda one one zero,
661
00:40:35 --> 00:40:35
right?
662
00:40:35 --> 00:40:37
Minus lambda,
yes.
663
00:40:37 --> 00:40:37
God.
664
00:40:37 --> 00:40:38
OK.
665
00:40:38 --> 00:40:38.32
OK.
666
00:40:38.32 --> 00:40:41
There'll be two eigenvalues.
667
00:40:41 --> 00:40:47
What will -- tell me again what
I know about the two eigenvalues
668
00:40:47 --> 00:40:49
before I go any further.
669
00:40:49 --> 00:40:55
Tell me something about these
two eigenvalues.
670
00:40:55 --> 00:40:57
What do they add up to?
671
00:40:57 --> 00:40:59
Lambda one plus lambda two is?
672
00:40:59 --> 00:41:03
Is the same as the trace down
the diagonal of the matrix.
673
00:41:03 --> 00:41:05
One and zero is one.
674
00:41:05 --> 00:41:09
So lambda one plus lambda two
should come out to be one.
675
00:41:09 --> 00:41:13
And lambda one times lambda two
should come out
676
00:41:13 --> 00:41:16
to be the determinant,
which is minus one.
677
00:41:16 --> 00:41:20
So I'm
expecting the eigenvalues to
678
00:41:20 --> 00:41:24
add to one and to multiply to
minus one.
679
00:41:24 --> 00:41:27
But let's just see it happen
here.
680
00:41:27 --> 00:41:32
If I multiply this out,
I get -- that times that'll be
681
00:41:32 --> 00:41:36
a lambda squared minus lambda
minus one.
682
00:41:36 --> 00:41:36
Good.
683
00:41:36 --> 00:41:40
Lambda squared minus lambda
minus one.
684
00:41:40 --> 00:41:46
Actually -- you see,
compare that with the
685
00:41:46 --> 00:41:50
original equation that I started
with.
686
00:41:50 --> 00:41:53
F(k+2) - F(k+1)-Fk is zero.
687
00:41:53 --> 00:42:00
The recursion that -- that the
Fibonacci numbers satisfy is
688
00:42:00 --> 00:42:07
somehow showing up directly here
for the eigenvalues when we set
689
00:42:07 --> 00:42:10
that to zero.
690
00:42:10 --> 00:42:10
OK.
691
00:42:10 --> 00:42:11
Let's solve.
692
00:42:11 --> 00:42:16
Well, I would like to be able
to factor that,
693
00:42:16 --> 00:42:20
that quadratic,
but I'm better off to use the
694
00:42:20 --> 00:42:22.81
quadratic formula.
695
00:42:22.81 --> 00:42:25
Lambda is -- let's see.
696
00:42:25 --> 00:42:31
Minus b is one plus or minus
the square root of b squared,
697
00:42:31 --> 00:42:35
which is one,
minus four times that times
698
00:42:35 --> 00:42:40
that, which is
plus four, over two.
699
00:42:40 --> 00:42:43
So that's the square root of
five.
700
00:42:43 --> 00:42:50
So the eigenvalues are lambda
one is one half of one plus
701
00:42:50 --> 00:42:55
square root of five,
and lambda two is one half of
702
00:42:55 --> 00:42:59
one minus square root of five.
703
00:42:59 --> 00:43:04
And sure enough,
they -- those add up to one and
704
00:43:04 --> 00:43:09
they multiply to give minus one.
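A quick numerical check of these two eigenvalues (assuming NumPy; this just verifies the quadratic-formula roots, the trace, and the determinant described above):

```python
import numpy as np

# Eigenvalues of the Fibonacci matrix, checked against the quadratic-formula roots.
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
lam = np.linalg.eigvals(A)

lam1 = (1 + np.sqrt(5)) / 2   # about  1.618
lam2 = (1 - np.sqrt(5)) / 2   # about -0.618

print(np.allclose(sorted(lam), [lam2, lam1]))  # roots of lambda^2 - lambda - 1
print(lam.sum(), lam.prod())                   # trace = 1, determinant = -1
```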
705
00:43:09 --> 00:43:09
OK.
706
00:43:09 --> 00:43:12
Those are the two eigenvalues.
707
00:43:12 --> 00:43:16
How -- what are those numbers
approximately?
708
00:43:16 --> 00:43:20
Square root of five,
well, it's more than two but
709
00:43:20 --> 00:43:22
less than three.
710
00:43:22 --> 00:43:22
Hmm.
711
00:43:22 --> 00:43:25.59
It'd be nice to know these
numbers.
712
00:43:25.59 --> 00:43:30
I think, I think that -- so
that number comes out bigger
713
00:43:30 --> 00:43:33.36
than
one, right?
714
00:43:33.36 --> 00:43:34
That's right.
715
00:43:34 --> 00:43:39
This number comes out bigger
than one.
716
00:43:39 --> 00:43:44
It's about one point six one
eight or something.
717
00:43:44 --> 00:43:46
Not exactly,
but.
718
00:43:46 --> 00:43:50
And suppose it's one point six.
719
00:43:50 --> 00:43:52
Just, like, I think so.
720
00:43:52 --> 00:43:55
Then what's lambda two?
721
00:43:55 --> 00:44:01
Is, is lambda two positive or
negative?
722
00:44:01 --> 00:44:04
Negative, right,
because it's
723
00:44:04 --> 00:44:08
obviously negative,
and I knew that the -- so it's
724
00:44:08 --> 00:44:13
minus -- and they add up to one,
so minus point six one eight,
725
00:44:13 --> 00:44:14
I guess.
726
00:44:14 --> 00:44:14
OK.
727
00:44:14 --> 00:44:16
And some more decimals.
728
00:44:16 --> 00:44:18
Those are the two eigenvalues.
729
00:44:18 --> 00:44:24
One eigenvalue bigger than one,
one eigenvalue smaller than
730
00:44:24 --> 00:44:24
one.
731
00:44:24 --> 00:44:27
Actually, that's a great
situation to be in.
732
00:44:27 --> 00:44:31
Of course, the eigenvalues are
different, so there's no doubt
733
00:44:31 --> 00:44:34
whatever -- is this matrix
diagonalizable?
734
00:44:34 --> 00:44:38
Is this matrix diagonalizable,
that original matrix A?
735
00:44:38 --> 00:44:39
Sure.
736
00:44:39 --> 00:44:42
We've got two distinct
eigenvalues and we can find the
737
00:44:42 --> 00:44:45
eigenvectors in
a moment.
738
00:44:45 --> 00:44:50
But they'll be independent,
so A will be diagonalizable.
739
00:44:50 --> 00:44:55
And now, you,
you can already answer my very
740
00:44:55 --> 00:44:56.94
first question.
741
00:44:56.94 --> 00:45:01
How fast are those Fibonacci
numbers increasing?
742
00:45:01 --> 00:45:06
They're
increasing,
743
00:45:06 --> 00:45:06
right?
744
00:45:06 --> 00:45:09
They're not doubling at every
step.
745
00:45:09 --> 00:45:12
Let me -- let's look again at
these numbers.
746
00:45:12 --> 00:45:15
Five, eight,
thirteen, it's not obvious.
747
00:45:15 --> 00:45:19
The next one would be
twenty-one, thirty-four.
748
00:45:19 --> 00:45:23.96
So to get some idea of what F
one hundred is,
749
00:45:23.96 --> 00:45:30
can you tell me
the crucial number?
750
00:45:30 --> 00:45:37
That is, what's
controlling the growth of
751
00:45:37 --> 00:45:40
these Fibonacci numbers?
752
00:45:40 --> 00:45:42
It's the eigenvalues.
753
00:45:42 --> 00:45:49
And which eigenvalue is
controlling that growth?
754
00:45:49 --> 00:45:50
The big one.
755
00:45:50 --> 00:45:54
So F100 will be approximately
some constant,
756
00:45:54 --> 00:45:59
c1 I guess, times this lambda
one, this one plus square root
757
00:45:59 --> 00:46:02
of five over two,
to the hundredth power.
758
00:46:02 --> 00:46:08
And the two hundredth F -- in
other words, the eigenvalue --
759
00:46:08 --> 00:46:13
the Fibonacci numbers are
growing by about that factor.
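One way to see that growth factor is to compare the ratio of successive Fibonacci numbers with lambda one (a minimal sketch of my own, standard library only):

```python
import math

# Ratio of successive Fibonacci numbers approaches the controlling eigenvalue,
# the golden ratio (1 + sqrt(5)) / 2.
a, b = 0, 1          # F0, F1
for _ in range(30):
    a, b = b, a + b  # step the recursion

print(b / a)                   # F31 / F30, about 1.618
print((1 + math.sqrt(5)) / 2)  # lambda_1
```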
760
00:46:13 --> 00:46:18
Do you see that we,
we've got precise information
761
00:46:18 --> 00:46:23
about the, about the Fibonacci
numbers out of the eigenvalues?
762
00:46:23 --> 00:46:23
OK.
763
00:46:23 --> 00:46:26
And again, why is that true?
764
00:46:26 --> 00:46:31
Let me go over to this board
and show what I'm doing here.
765
00:46:31 --> 00:46:35
The --
the original initial value is
766
00:46:35 --> 00:46:38
some combination of
eigenvectors.
767
00:46:38 --> 00:46:43
And then when we start going
out in the sequence
768
00:46:43 --> 00:46:47
of Fibonacci numbers,
when we start multiplying by A
769
00:46:47 --> 00:46:51
a hundred times,
it's this lambda one to the
770
00:46:51 --> 00:46:52
hundredth.
771
00:46:52 --> 00:46:56
This term is,
is the one that's taking over.
772
00:46:56 --> 00:46:59
It's -- I mean,
that's big, like one point six
773
00:46:59 --> 00:47:01
to the hundredth power.
774
00:47:01 --> 00:47:04
The second term is practically
nothing, right?
775
00:47:04 --> 00:47:06
The point six,
or minus point six,
776
00:47:06 --> 00:47:10
to the hundredth power is an
extremely small,
777
00:47:10 --> 00:47:12
extremely small number.
778
00:47:12 --> 00:47:17
So this is -- there are only two
terms, because we're two by two.
779
00:47:17 --> 00:47:20.71
This number is -- this piece of
it is there, but it's,
780
00:47:20.71 --> 00:47:23
it's disappearing,
where this piece is there and
781
00:47:23 --> 00:47:26
it's growing and controlling
everything.
782
00:47:26 --> 00:47:30
So really we're
doing problems that are
783
00:47:30 --> 00:47:32.03
evolving.
784
00:47:32.03 --> 00:47:37
We're doing dynamic problems;
Ax=b is a static
785
00:47:37 --> 00:47:38
problem.
786
00:47:38 --> 00:47:42
Now we're doing dynamics.
787
00:47:42 --> 00:47:47
A, A squared,
A cubed, things are evolving in
788
00:47:47 --> 00:47:47
time.
789
00:47:47 --> 00:47:52
And the eigenvalues are the
crucial numbers.
790
00:47:52 --> 00:47:52
OK.
791
00:47:52 --> 00:47:58.8
I guess to complete this,
I better write down the
792
00:47:58.8 --> 00:47:59
eigenvectors.
793
00:47:59 --> 00:48:04
So we should complete the
whole process by finding
794
00:48:04 --> 00:48:06
the eigenvectors.
795
00:48:06 --> 00:48:10
OK, well, I have to -- up in
the corner, then,
796
00:48:10 --> 00:48:13
I have to look at A minus
lambda I.
797
00:48:13 --> 00:48:19
So A minus lambda I is this one
minus lambda one one and minus
798
00:48:19 --> 00:48:20.94
lambda.
799
00:48:20.94 --> 00:48:24
And now can we spot an
eigenvector out of that?
800
00:48:24 --> 00:48:27
That's, that's,
for these two lambdas,
801
00:48:27 --> 00:48:29
this matrix is singular.
802
00:48:29 --> 00:48:33
I guess the eigenvector -- two
by two ought to be,
803
00:48:33 --> 00:48:34
I mean, easy.
804
00:48:34 --> 00:48:40
So if I know that this matrix
is singular, then it seems to me
805
00:48:40 --> 00:48:42
the eigenvector has to be
lambda and one,
806
00:48:42 --> 00:48:46
because that multiplication
will give me the zero.
807
00:48:46 --> 00:48:50
And this multiplication gives
me -- better give me also zero.
808
00:48:50 --> 00:48:52
Do you see why it does?
809
00:48:52 --> 00:48:56
This is the minus lambda
squared plus lambda plus one.
810
00:48:56 --> 00:49:00
It's the thing that's zero
because these lambdas
811
00:49:00 --> 00:49:01
are special.
812
00:49:01 --> 00:49:06
There's the eigenvector.
x1 is lambda one one,
813
00:49:06 --> 00:49:09
and x2 is lambda two one.
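The (lambda, 1) trick can be verified directly: A times (lambda, 1) gives (lambda plus one, lambda), which equals lambda times (lambda, 1) exactly because lambda squared equals lambda plus one. A short check, assuming NumPy:

```python
import numpy as np

# Check the trick: (lambda, 1) is an eigenvector of A for each eigenvalue,
# because lambda^2 = lambda + 1 makes A (lambda, 1) = lambda (lambda, 1).
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])

ok = []
for lam in [(1 + np.sqrt(5)) / 2, (1 - np.sqrt(5)) / 2]:
    x = np.array([lam, 1.0])
    ok.append(np.allclose(A @ x, lam * x))

print(ok)  # [True, True]
```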
814
00:49:09 --> 00:49:15
I did that as a little trick
that was available in the two by
815
00:49:15 --> 00:49:16
two case.
816
00:49:16 --> 00:49:23
So now I finally have to -- oh,
I have to take the initial u0
817
00:49:23 --> 00:49:23
now.
818
00:49:23 --> 00:49:31.17
So to complete this example
entirely, I have to say,
819
00:49:31.17 --> 00:49:34
OK, what was u0?
u0 was F1 F0.
820
00:49:34 --> 00:49:42
So u0, the starting vector is
F1 F0, and those were one and
821
00:49:42 --> 00:49:42
zero.
822
00:49:42 --> 00:49:46
So I have to use that vector.
823
00:49:46 --> 00:49:52
So I have to look for,
for a multiple of the first
824
00:49:52 --> 00:50:00
eigenvector and the second to
produce u0, the one zero
825
00:50:00 --> 00:50:00
vector.
826
00:50:00 --> 00:50:06
This is what will find c1 and
c2, and then I'm done.
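Carrying out that last step numerically -- solving S c = u0 and letting each eigenvector piece go its own way -- gives a closed formula for Fk (a sketch assuming NumPy; the helper name F is my own):

```python
import numpy as np

lam1 = (1 + np.sqrt(5)) / 2
lam2 = (1 - np.sqrt(5)) / 2

# Eigenvector matrix S with columns (lambda, 1); starting vector u0 = (F1, F0).
S = np.array([[lam1, lam2],
              [1.0,  1.0]])
u0 = np.array([1.0, 0.0])

c = np.linalg.solve(S, u0)  # c1 = 1/sqrt(5), c2 = -1/sqrt(5)

# F_k is the lower component of u_k = c1 lam1^k x1 + c2 lam2^k x2.
def F(k):
    return c[0] * lam1**k + c[1] * lam2**k

print(round(F(10)), round(F(20)))  # 55 6765
```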
827
00:50:06 --> 00:50:12
So instead of,
in the last five seconds,
828
00:50:12 --> 00:50:18
grinding out a formula,
let me repeat the idea.
829
00:50:18 --> 00:50:22
Because I'd really -- it's the
idea that's central.
830
00:50:22 --> 00:50:27
When things are evolving in
time -- let me come back to this
831
00:50:27 --> 00:50:30
board, because the ideas are
here.
832
00:50:30 --> 00:50:35
When things are evolving in
time by a first-order system,
833
00:50:35 --> 00:50:40
starting from an original u0,
the key is find the eigenvalues
834
00:50:40 --> 00:50:43
and eigenvectors of A.
835
00:50:43 --> 00:50:47
That will tell -- those
eigenvectors -- the eigenvalues
836
00:50:47 --> 00:50:50
will already tell you what's
happening.
837
00:50:50 --> 00:50:54
Is the solution blowing up,
is it going to zero,
838
00:50:54 --> 00:50:55
what's it doing.
839
00:50:55 --> 00:50:59
And then to,
to find out exactly a formula,
840
00:50:59 --> 00:51:04
you have to take your u0 and
write it as a combination of
841
00:51:04 --> 00:51:09.93
eigenvectors and then follow
each eigenvector separately.
842
00:51:09.93 --> 00:51:14
And that's really what this
formula, the formula for,
843
00:51:14 --> 00:51:19
-- that's what the formula for
A to the K is doing.
844
00:51:19 --> 00:51:24
So remember that formula for A
to the K is S lambda to the K S
845
00:51:24 --> 00:51:25
inverse.
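That formula is easy to confirm numerically for the Fibonacci matrix (a sketch assuming NumPy; k = 10 is an arbitrary choice of power):

```python
import numpy as np

# The formula from the board: A^k = S Lambda^k S^{-1}, checked for k = 10.
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
lam, S = np.linalg.eig(A)
Lam = np.diag(lam)

k = 10
left = np.linalg.matrix_power(A, k)
right = S @ np.linalg.matrix_power(Lam, k) @ np.linalg.inv(S)

print(np.allclose(left, right))
```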
846
00:51:25 --> 00:51:25
OK.
847
00:51:25 --> 00:51:29
That's,
that's difference equations.
848
00:51:29 --> 00:51:34
And you just have to -- so the
homework will give some
849
00:51:34 --> 00:51:39
examples, different from
Fibonacci, to follow through.
850
00:51:39 --> 00:51:43
And next time will be
differential equations.
851
00:51:43 --> 00:51:46
Thanks.