1
00:00:07 --> 00:00:07
Okay.
2
00:00:07 --> 00:00:12
This is a lecture where complex
numbers come in.
3
00:00:12 --> 00:00:18
Complex numbers have
slipped into this course because
4
00:00:18 --> 00:00:22
even a real matrix can have
complex eigenvalues.
5
00:00:22 --> 00:00:27
So we met complex numbers there
as the eigenvalues and complex
6
00:00:27 --> 00:00:29
eigenvectors.
7
00:00:29 --> 00:00:34
And this is probably the last
time they appear -- we
8
00:00:34 --> 00:00:38
have a lot of other things to do
about eigenvalues and
9
00:00:38 --> 00:00:39
eigenvectors.
10
00:00:39 --> 00:00:41
And that will be mostly real.
11
00:00:41 --> 00:00:46.65
But at one point somewhere,
we have to see what you do when
12
00:00:46.65 --> 00:00:49
the numbers become complex
numbers.
13
00:00:49 --> 00:00:55
What happens when the vectors
are complex, when the matrices
14
00:00:55 --> 00:00:58
are complex,
when the -- what's the inner
15
00:00:58 --> 00:01:01
product,
the dot product of two complex
16
00:01:01 --> 00:01:06
vectors -- we just have to make
the change, just see -- what is
17
00:01:06 --> 00:01:09
the change when numbers become
complex?
18
00:01:09 --> 00:01:13
Then, can I tell you about the
most important example of
19
00:01:13 --> 00:01:15
complex matrices?
20
00:01:15 --> 00:01:18
It is the Fourier matrix.
21
00:01:18 --> 00:01:21
So the Fourier matrix,
which I'll describe,
22
00:01:21 --> 00:01:23
is a complex matrix.
23
00:01:23 --> 00:01:27
It's certainly the most
important complex matrix.
24
00:01:27 --> 00:01:31
It's the matrix that we need in
Fourier transform.
25
00:01:31 --> 00:01:35
And really,
the special thing that I want
26
00:01:35 --> 00:01:39
to
tell you about is what's called
27
00:01:39 --> 00:01:44
the fast Fourier transform,
and everybody refers to it as
28
00:01:44 --> 00:01:49.82
the FFT, and it's in all computers
-- it's being used
29
00:01:49.82 --> 00:01:54
as we speak in a thousand
places, because it has,
30
00:01:54 --> 00:01:57
like, transformed whole
industries
31
00:01:57 --> 00:02:01.83
to be able to do the Fourier
transform fast,
32
00:02:01.83 --> 00:02:07
which means multiplying -- how
do I multiply fast by that
33
00:02:07 --> 00:02:10
matrix -- by that n by n matrix?
34
00:02:10 --> 00:02:15
Normally, multiplication by an
n by n matrix would
35
00:02:15 --> 00:02:21
be n squared multiplications,
because I've got n squared
36
00:02:21 --> 00:02:24
entries
and none of them is zero.
37
00:02:24 --> 00:02:27
This is a full matrix.
38
00:02:27 --> 00:02:30
And it's a matrix with
orthogonal columns.
39
00:02:30 --> 00:02:34
I mean, it's just,
like, the best matrix.
40
00:02:34 --> 00:02:40
And this fast Fourier transform
idea reduces this n squared,
41
00:02:40 --> 00:02:44
which was slowing up the
calculation of Fourier
42
00:02:44 --> 00:02:48
transforms
down to n log(n).
43
00:02:48 --> 00:02:52
n log(n), log to the base two,
actually.
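The speed-up he is describing can be seen with a rough operation count; this is a sketch only (the sizes are chosen just for illustration, not from the lecture):

```python
# Rough cost comparison: a full n-by-n matrix-vector multiply takes about
# n^2 multiplications, while the FFT takes on the order of (n/2) log2(n).
import math

for n in [64, 1024, 1_048_576]:
    direct = n * n                        # n squared, every entry nonzero
    fft = (n // 2) * int(math.log2(n))    # n log n, log to the base two
    print(f"n={n}: n^2={direct}, FFT~{fft}, speed-up ~{direct // fft}x")
```

For n around a million, the ratio is roughly a factor of a hundred thousand, which is why the FFT transformed whole industries.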
44
00:02:52 --> 00:02:58
And when that possibility hit,
45
00:02:58 --> 00:03:00
it made a big difference.
46
00:03:00 --> 00:03:07
Everybody realized gradually
that this simple idea
47
00:03:07 --> 00:03:13.96
-- you'll see it's just a simple
matrix factorization -- but it
48
00:03:13.96 --> 00:03:15
changed everything.
49
00:03:15 --> 00:03:16
Okay.
50
00:03:16 --> 00:03:21
So I want to talk about complex
vectors and matrices in general,
51
00:03:21 --> 00:03:26
recap a little bit from last
time, and the Fourier matrix in
52
00:03:26 --> 00:03:27
particular.
53
00:03:27 --> 00:03:28
Okay.
54
00:03:28 --> 00:03:29
So what's the deal?
55
00:03:29 --> 00:03:30
All right.
56
00:03:30 --> 00:03:35
The main point is,
what about length?
57
00:03:35 --> 00:03:38
I'm given a vector,
I have a vector x.
58
00:03:38 --> 00:03:42
Or let me call it z as a
reminder that it's complex,
59
00:03:42 --> 00:03:44
for the moment.
60
00:03:44 --> 00:03:48
But later I'll call
the components x.
61
00:03:48 --> 00:03:50
They'll be complex numbers.
62
00:03:50 --> 00:03:54.04
But it's a vector -- z1,
z2 down to zn.
63
00:03:54.04 --> 00:03:58
So the only novelty is it's not
in R^n
64
00:03:58 --> 00:03:59
anymore.
65
00:03:59 --> 00:04:02
It's in complex n dimensional
space.
66
00:04:02 --> 00:04:06.6
Each of those numbers is a
complex number.
67
00:04:06.6 --> 00:04:11
So this z is in C^n,
n dimensional complex space
68
00:04:11 --> 00:04:12
instead of R^n.
69
00:04:12 --> 00:04:18
So just a different letter
there, but now the point about
70
00:04:18 --> 00:04:21.56
its length
is what?
71
00:04:21.56 --> 00:04:29.29
The point about its length is
that z transpose z is no good.
72
00:04:29.29 --> 00:04:36
z transpose z -- if I just put
down z transpose here,
73
00:04:36 --> 00:04:39
it would be z1,
z2, to zn.
74
00:04:39 --> 00:04:46
Doing that multiplication
doesn't give me the right thing.
75
00:04:46 --> 00:04:48
Why not?
76
00:04:48 --> 00:04:53.92
Because the length squared
should
77
00:04:53.92 --> 00:04:54
be positive.
78
00:04:54 --> 00:04:58
And if I multiply -- suppose
this is, like,
79
00:04:58 --> 00:04:59
1 and i.
80
00:04:59 --> 00:05:04
What's the length of the vector
with components 1 and i?
81
00:05:04 --> 00:05:07
What if I do this,
so n is just two.
82
00:05:07 --> 00:05:12
I'm in C^2, two dimensional
space, complex space with the
83
00:05:12 --> 00:05:16
vector whose components are 1
and i.
84
00:05:16 --> 00:05:17
All right.
85
00:05:17 --> 00:05:21
So if I took one times one and
i times i and added,
86
00:05:21 --> 00:05:24
z transpose z would be zero.
87
00:05:24 --> 00:05:30
But that vector
doesn't have length zero
88
00:05:30 --> 00:05:34
-- the vector with the
components 1 and i -- this
89
00:05:34 --> 00:05:39
multiplication --
what I really want is z1
90
00:05:39 --> 00:05:40
conjugate z1.
91
00:05:40 --> 00:05:47
You remember that z1 conjugate
z1 is -- so you see that first
92
00:05:47 --> 00:05:53
step will be z1 conjugate z1,
which is the magnitude of z1
93
00:05:53 --> 00:05:56
squared, which is what I want.
94
00:05:56 --> 00:06:00
That's, like,
three squared or five squared.
95
00:06:00 --> 00:06:06
Now, if z1 is i, then i multiplied
96
00:06:06 --> 00:06:13
by minus i gives plus one --
so the component --
97
00:06:13 --> 00:06:18
the component i,
its modulus squared is plus
98
00:06:18 --> 00:06:18
one.
99
00:06:18 --> 00:06:20
That's great.
100
00:06:20 --> 00:06:26
So what I want to do then is do
that -- I want z1 bar z1,
101
00:06:26 --> 00:06:30
z2 bar z2, zn bar zn.
102
00:06:30 --> 00:06:35
And remember that -- you
remember this complex conjugate.
103
00:06:35 --> 00:06:38
So there's the point.
104
00:06:38 --> 00:06:43
Now I can erase the 'no good' and
put 'good', because that now
105
00:06:43 --> 00:06:48
gives the answer zero for the
zero vector, of course,
106
00:06:48 --> 00:06:54.41
but it gives a positive length
squared for any other vector.
107
00:06:54.41 --> 00:06:58
So it's a --
it's the right definition of
108
00:06:58 --> 00:07:03
length, and essentially the
message is that we're always
109
00:07:03 --> 00:07:09
going to be taking -- when we
transpose, we also take complex
110
00:07:09 --> 00:07:10
conjugate.
111
00:07:10 --> 00:07:15
So let's -- let's find the
length of one -- so the vector
112
00:07:15 --> 00:07:19
one i, that's z,
that's that vector z.
113
00:07:19 --> 00:07:24
Now I take the conjugate of one
is one, the conjugate of i is
114
00:07:24 --> 00:07:24
minus i.
115
00:07:24 --> 00:07:28
I take this vector,
I get one plus one -- I get
116
00:07:28 --> 00:07:28
two.
117
00:07:28 --> 00:07:34
So that's a vector and that's a
vector of length -- square root
118
00:07:34 --> 00:07:34
of two.
119
00:07:34 --> 00:07:39.13
Square root of two is the
length and not the zero that we
120
00:07:39.13 --> 00:07:43
would have got from one plus i
squared.
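That computation can be checked directly; a minimal Python sketch, not from the lecture itself (Python writes i as 1j):

```python
# z^H z for z = (1, i): conjugate, then multiply, gives the true length
# squared.  Plain z^T z, without the conjugate, gives 0 -- the "no good"
# definition of length.
import math

z = [1 + 0j, 1j]

no_conjugate = sum(zk * zk for zk in z)           # 1 + i^2 = 0
length_sq = sum(zk.conjugate() * zk for zk in z)  # 1 + 1 = 2
length = math.sqrt(length_sq.real)                # square root of two
```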
121
00:07:43 --> 00:07:43
Okay.
122
00:07:43 --> 00:07:48.81
So the message really is
whenever we transpose,
123
00:07:48.81 --> 00:07:51
we also take conjugates.
124
00:07:51 --> 00:07:55
So here's a symbol -- one
symbol to do both.
125
00:07:55 --> 00:08:00
So that symbol H,
it stands for a guy named
126
00:08:00 --> 00:08:05
Hermite, who didn't actually
pronounce the H,
127
00:08:05 --> 00:08:10
but let's pronounce it -- so I
would
128
00:08:10 --> 00:08:13
call that z Hermitian z.
129
00:08:13 --> 00:08:21
Let me write that word --
so his name was Hermite,
130
00:08:21 --> 00:08:27
and then we make it into an
adjective, Hermitian.
131
00:08:27 --> 00:08:30
So z Hermitian z.
z H z.
132
00:08:30 --> 00:08:30
Okay.
133
00:08:30 --> 00:08:36.49
So, that's the
length squared.
134
00:08:36.49 --> 00:08:40.21
Now what's the inner product?
135
00:08:40.21 --> 00:08:44
Well, it should match.
136
00:08:44 --> 00:08:50
The inner product of two
vectors -- so inner product is
137
00:08:50 --> 00:08:54
no longer -- used to be y
transpose x.
138
00:08:54 --> 00:08:57
That's for real vectors.
139
00:08:57 --> 00:09:02
For complex vectors,
whenever we transpose,
140
00:09:02 --> 00:09:05
we also take the conjugate.
141
00:09:05 --> 00:09:08
So it's y Hermitian x.
142
00:09:08 --> 00:09:13
Of course it's not real
anymore, usually.
143
00:09:13 --> 00:09:18
The inner product will
usually be a complex number.
144
00:09:18 --> 00:09:22
But if y and x are the same,
if they're the same z,
145
00:09:22 --> 00:09:26.56
then we have z -- z H z,
we have the length squared,
146
00:09:26.56 --> 00:09:30
and that's what we want,
the inner product of a vector
147
00:09:30 --> 00:09:34
with itself should be its length
squared.
148
00:09:34 --> 00:09:39
So this is, like,
forced on us because this is
149
00:09:39 --> 00:09:40
forced on us.
150
00:09:40 --> 00:09:45
So this --
everybody's picking up what this
151
00:09:45 --> 00:09:46
equals.
152
00:09:46 --> 00:09:49
This is the modulus of z1 squared
and so on up to the modulus of zn squared.
153
00:09:49 --> 00:09:51
That's the length squared.
154
00:09:51 --> 00:09:57
And that's the inner product
that we have to go with.
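The rule he keeps repeating -- conjugate whenever you transpose -- can be sketched as one small helper (an illustration in Python, names my own):

```python
# y^H x: conjugate the first vector, then multiply componentwise and add.
# When y = x = z this reduces to the length squared; in general the
# result is a complex number.
def inner(y, x):
    # Hermitian inner product -- the conjugate goes on the first factor
    return sum(yk.conjugate() * xk for yk, xk in zip(y, x))

z = [1 + 0j, 1j]
x = [1j, 2 + 0j]
```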
155
00:09:57 --> 00:10:02
So it could be a complex number
now.
156
00:10:02 --> 00:10:04
One more change.
157
00:10:04 --> 00:10:07
Well, two more changes.
158
00:10:07 --> 00:10:13
We've got to change the idea of
a symmetric matrix.
159
00:10:13 --> 00:10:19
So I'll just recap on symmetric
matrices.
160
00:10:19 --> 00:10:26.79
Symmetric means A transpose
equals A -- but that's no good if
161
00:10:26.79 --> 00:10:28.51
A is complex.
162
00:10:28.51 --> 00:10:34
So what do we do
instead? That applies
163
00:10:34 --> 00:10:37
perfectly to real matrices.
164
00:10:37 --> 00:10:42
But now if my matrices are
complex, I want to take the
165
00:10:42 --> 00:10:46
transpose and the conjugate to
equal A.
166
00:10:46 --> 00:10:52
So that's the
right complex version of
167
00:10:52 --> 00:10:53
symmetry.
168
00:10:53 --> 00:10:57
The symmetry now means
when
169
00:10:57 --> 00:11:01
I transpose it,
flip across the diagonal and
170
00:11:01 --> 00:11:02
take conjugates.
171
00:11:02 --> 00:11:06
So, for example -- here would
be an example.
172
00:11:06 --> 00:11:09
On the diagonal,
it had better be real,
173
00:11:09 --> 00:11:13
because when I flip it,
the diagonal is still there and
174
00:11:13 --> 00:11:19
it has to -- and then when
I take the complex conjugate it
175
00:11:19 --> 00:11:23
has to be still there,
so it better be a real number,
176
00:11:23 --> 00:11:25
let me say two and five.
177
00:11:25 --> 00:11:28
What about entries off the
diagonal?
178
00:11:28 --> 00:11:31
If this entry is,
say, three plus i,
179
00:11:31 --> 00:11:37
then this entry had better be
-- because I want whatever this
180
00:11:37 --> 00:11:40
--
when I transpose,
181
00:11:40 --> 00:11:43
it'll show up here and I
conjugate.
182
00:11:43 --> 00:11:46.84
So I need three minus i there.
183
00:11:46.84 --> 00:11:52
So there's a matrix with --
that corresponds to symmetry,
184
00:11:52 --> 00:11:54
but it's complex.
185
00:11:54 --> 00:11:59
And those matrices are called
Hermitian matrices.
186
00:11:59 --> 00:12:02
Hermitian matrices.
187
00:12:02 --> 00:12:04
A H equals A.
188
00:12:04 --> 00:12:04
Fine.
189
00:12:04 --> 00:12:10
Okay -- and those
matrices have real eigenvalues
190
00:12:10 --> 00:12:14
and they have perpendicular
eigenvectors.
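Those claims can be verified on the example on the board; a sketch (eigenvalues computed from the 2 by 2 characteristic polynomial, not from any library routine):

```python
# The lecture's example A = [[2, 3+i], [3-i, 5]]: check A^H = A, and check
# that the eigenvalues -- roots of lambda^2 - (trace) lambda + det -- are real.
import cmath

A = [[2 + 0j, 3 + 1j],
     [3 - 1j, 5 + 0j]]

# conjugate transpose: flip across the diagonal, then conjugate
AH = [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

tr = A[0][0] + A[1][1]                       # 7
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # 10 - (3+i)(3-i) = 0
disc = cmath.sqrt(tr * tr - 4 * det)
lam1 = (tr + disc) / 2
lam2 = (tr - disc) / 2
```

The eigenvalues come out 7 and 0, both real, as promised for a Hermitian matrix.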
191
00:12:14 --> 00:12:17
What does perpendicular mean?
192
00:12:17 --> 00:12:23
Perpendicular means the inner
product -- so let's go on to
193
00:12:23 --> 00:12:25
perpendicular.
194
00:12:25 --> 00:12:31
Well, when I had perpendicular
vectors, for example,
195
00:12:31 --> 00:12:34
they were like q1,
q2 up to qn.
196
00:12:34 --> 00:12:39.64
That's my -- q is my letter
that I use for perpendicular.
197
00:12:39.64 --> 00:12:44
Actually, I usually -- I also
mean unit length.
198
00:12:44 --> 00:12:48
So those are perpendicular unit
vectors.
199
00:12:48 --> 00:12:53
But now -- so it's an
orthonormal basis,
200
00:12:53 --> 00:12:58
I'll still use those words,
but how do I compute
201
00:12:58 --> 00:12:59.37
perpendicular?
202
00:12:59.37 --> 00:13:02
How do I check perpendicular?
203
00:13:02 --> 00:13:07
This means that the inner
product of qi with qj -- but now
204
00:13:07 --> 00:13:11
I not only transpose,
I must conjugate,
205
00:13:11 --> 00:13:16
right, to get zero if i is not
j and one if
206
00:13:16 --> 00:13:17
i is j.
207
00:13:17 --> 00:13:22
So it's a unit vector,
meaning unit length,
208
00:13:22 --> 00:13:27
orthogonal -- all the angles
are right angles,
209
00:13:27 --> 00:13:34
but these are angles in complex
n dimensional space.
210
00:13:34 --> 00:13:38
So it's qi bar
transpose qj.
211
00:13:38 --> 00:13:40
Or, for short,
qi H qj.
212
00:13:40 --> 00:13:48
So it will still be true --
let me again create
213
00:13:48 --> 00:13:51
a matrix out of those guys.
214
00:13:51 --> 00:13:55
The matrix will have these q-s
in its columns,
215
00:13:55 --> 00:13:56
q1, q2 to qn.
216
00:13:56 --> 00:14:00
And I want to turn that into
matrix language,
217
00:14:00 --> 00:14:01
just like before.
218
00:14:01 --> 00:14:03
What does that mean?
219
00:14:03 --> 00:14:07
That means I want all these
inner products,
220
00:14:07 --> 00:14:14
so I take these columns of Q,
multiply by their rows -- so it
221
00:14:14 --> 00:14:21
used to be Q transpose Q equals
222
00:14:21 --> 00:14:22
I, right?
223
00:14:22 --> 00:14:26
This was an orthogonal matrix.
224
00:14:26 --> 00:14:28
But what's changed?
225
00:14:28 --> 00:14:31.81
These are now complex vectors.
226
00:14:31.81 --> 00:14:38
Their inner products are --
involve conjugating the first
227
00:14:38 --> 00:14:39
factor.
228
00:14:39 --> 00:14:43
So it's not --
it's the conjugate of Q
229
00:14:43 --> 00:14:44
transpose.
230
00:14:44 --> 00:14:46.94
It's Q bar transpose Q.
231
00:14:46.94 --> 00:14:47
Q H.
232
00:14:47 --> 00:14:51
So can I call this -- let me
call it Q H Q,
233
00:14:51 --> 00:14:52
which is I.
234
00:14:52 --> 00:14:57
So that's our new -- you
see I'm just translating,
235
00:14:57 --> 00:15:02
and the book, on one
page gives a little dictionary
236
00:15:02 --> 00:15:08
of the right words in the real
case, R^n, and the corresponding
237
00:15:08 --> 00:15:13.07
words in
the complex case for the vector
238
00:15:13.07 --> 00:15:14
space C^n.
239
00:15:14 --> 00:15:19
Of course, C^n is a vector
space, the numbers we multiply
240
00:15:19 --> 00:15:25
are now complex numbers -- we're
just moving into complex n
241
00:15:25 --> 00:15:26
dimensional space.
242
00:15:26 --> 00:15:27
Okay.
243
00:15:27 --> 00:15:31
Now -- actually,
I have to say we changed the
244
00:15:31 --> 00:15:36
word
symmetric to Hermitian
245
00:15:36 --> 00:15:37
for those matrices.
246
00:15:37 --> 00:15:43
People also change this word
orthogonal into another word
247
00:15:43 --> 00:15:48
that happens to be unitary,
a word that
248
00:15:48 --> 00:15:54
signals that we might be dealing
with a complex matrix here.
249
00:15:54 --> 00:15:56
So what's a unitary matrix?
250
00:15:56 --> 00:16:02
It's just like an
orthogonal matrix.
251
00:16:02 --> 00:16:06
It's a square,
n by n matrix with orthonormal
252
00:16:06 --> 00:16:12
columns, perpendicular columns,
unit vectors -- unit vectors
253
00:16:12 --> 00:16:17
computed by -- and
perpendicularity computed by
254
00:16:17 --> 00:16:22
remembering that there's a
conjugate as well as a
255
00:16:22 --> 00:16:24
transpose.
256
00:16:24 --> 00:16:25.19
Okay.
257
00:16:25.19 --> 00:16:27
So those are the words.
258
00:16:27 --> 00:16:33
Now I'm ready to get into the
substance of the lecture which
259
00:16:33 --> 00:16:39
is the most famous complex
matrix, which happens to be one
260
00:16:39 --> 00:16:41
of these guys.
261
00:16:41 --> 00:16:46
It has orthogonal columns,
and it's named after Fourier
262
00:16:46 --> 00:16:52
because it comes into the
Fourier transform,
263
00:16:52 --> 00:16:57
so it's the matrix that's all
around us.
264
00:16:57 --> 00:16:57
Okay.
265
00:16:57 --> 00:17:04
Let me tell you what it is
first of all in the n by n case.
266
00:17:04 --> 00:17:10.95
Then often I'll let n be four
because four is a good size to
267
00:17:10.95 --> 00:17:12
work with.
268
00:17:12 --> 00:17:16
But here's the n by n Fourier
matrix.
269
00:17:16 --> 00:17:21
Its first column is the vector
of
270
00:17:21 --> 00:17:21.68
ones.
271
00:17:21.68 --> 00:17:23
It's n by n,
of course.
272
00:17:23 --> 00:17:28.45
Its second column is the
powers, the -- actually,
273
00:17:28.45 --> 00:17:34
better if I move from the math
department to EE for this one
274
00:17:34 --> 00:17:38
half hour and then,
please, let me move back again.
275
00:17:38 --> 00:17:39
Okay.
276
00:17:39 --> 00:17:44
What's the difference between
those two departments?
277
00:17:44 --> 00:17:49.05
It's just math starts counting
with
278
00:17:49.05 --> 00:17:53.43
one and electrical engineers
start counting at zero.
279
00:17:53.43 --> 00:17:56
Actually, they're probably
right.
280
00:17:56 --> 00:17:59
So anyway, we'll give them --
humor them.
281
00:17:59 --> 00:18:02
So this is really the zeroth
column.
282
00:18:02 --> 00:18:07
And the first column up to the
(n-1)th -- that's the one inconvenient
283
00:18:07 --> 00:18:11
spot in electrical engineering.
284
00:18:11 --> 00:18:15.84
All these expressions start at
zero, no problem,
285
00:18:15.84 --> 00:18:17
but they end at n-1.
286
00:18:17 --> 00:18:22.14
Well, that's -- that's the
difficulty of Course 6.
287
00:18:22.14 --> 00:18:27
So what's -- they're the powers
of a number that I'm going to
288
00:18:27 --> 00:18:32
call W -- W squared,
W cubed, W to the -- now what
289
00:18:32 --> 00:18:34
is the W here?
290
00:18:34 --> 00:18:35
What's the power?
291
00:18:35 --> 00:18:39
This was the zeroth power,
first power,
292
00:18:39 --> 00:18:42.69
second power,
this will be n minus first
293
00:18:42.69 --> 00:18:43
power.
294
00:18:43 --> 00:18:44
That's the column.
295
00:18:44 --> 00:18:46
What's the next column?
296
00:18:46 --> 00:18:50
It's the powers of W squared,
W to the fourth,
297
00:18:50 --> 00:18:53.78
W to the sixth,
W to the 2(n-1).
298
00:18:53.78 --> 00:19:00
And then more columns and more
columns and more columns and
299
00:19:00 --> 00:19:02
what's the last column?
300
00:19:02 --> 00:19:05
It's the powers of -- let's
see.
301
00:19:05 --> 00:19:08
Well, actually,
if we look along the rows,
302
00:19:08 --> 00:19:11
this matrix is symmetric.
303
00:19:11 --> 00:19:15
It's symmetric in the old not
quite perfect way,
304
00:19:15 --> 00:19:20.31
not perfect because these
numbers are complex.
305
00:19:20.31 --> 00:19:24
And so that first row
is all ones.
306
00:19:24 --> 00:19:27
One, W, W squared, up to W to the
n-1.
307
00:19:27 --> 00:19:33
And the last column is the
powers of W to the n-1,
308
00:19:33 --> 00:19:37
so this guy matches that,
and finally we get W to
309
00:19:37 --> 00:19:40
something
here.
310
00:19:40 --> 00:19:45
I guess we could actually
figure out what that something
311
00:19:45 --> 00:19:46
is.
312
00:19:46 --> 00:19:49.63
What are the entries of this
matrix?
313
00:19:49.63 --> 00:19:55
The i j entry of this matrix
is -- are you
314
00:19:55 --> 00:20:00
going to allow me to let i go
from zero to n minus one?
315
00:20:00 --> 00:20:05
So i and j go from zero
to n-1.
316
00:20:05 --> 00:20:11
So the zero zero
entry is a one -- it's just this
317
00:20:11 --> 00:20:15
same W guy to the power i times
j.
318
00:20:15 --> 00:20:16
Let's see.
319
00:20:16 --> 00:20:23
I'm jumping into formulas here
and I have to tell you what W is
320
00:20:23 --> 00:20:29
and then you know everything
about this matrix.
321
00:20:29 --> 00:20:33
So W is the -- well,
shall we finish here?
322
00:20:33 --> 00:20:37.44
What was this -- this is the
(n-1) (n-1) entry.
323
00:20:37.44 --> 00:20:40
This is W to the (n-1) squared.
324
00:20:40 --> 00:20:45
Everything's looking like a
mess here, because we have --
325
00:20:45 --> 00:20:50
not too bad, because all the
entries are powers of W.
326
00:20:50 --> 00:20:54.04
None of them are zero.
327
00:20:54.04 --> 00:20:56
This is a full matrix.
328
00:20:56 --> 00:21:00
But W is a very special number.
329
00:21:00 --> 00:21:05
W is the special number whose
n-th power is one.
330
00:21:05 --> 00:21:10
In fact -- well,
actually, there are n numbers
331
00:21:10 --> 00:21:11
like that.
332
00:21:11 --> 00:21:14
One of them is one,
of course.
333
00:21:14 --> 00:21:21
But the W we want is -- the angle
334
00:21:21 --> 00:21:23.89
is two pi over n.
335
00:21:23.89 --> 00:21:28
Is that what I mean?
n over two pi.
336
00:21:28 --> 00:21:30
No, two pi over n.
337
00:21:30 --> 00:21:36
W is e to the i times the angle,
and the angle is two pi over n.
338
00:21:36 --> 00:21:37
Right.
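With W now defined, the whole matrix can be written down in a few lines; a Python sketch of the construction (the function name is my own):

```python
# The n-by-n Fourier matrix: entry (i, j) is W^(i*j), with rows and columns
# numbered 0..n-1 (the EE convention) and W = e^(2 pi i / n).
import cmath

def fourier_matrix(n):
    W = cmath.exp(2j * cmath.pi / n)   # the primitive n-th root of one
    return [[W ** (i * j) for j in range(n)] for i in range(n)]

F4 = fourier_matrix(4)
```

The matrix is full -- no zero entries -- and symmetric in the old A-transpose-equals-A sense.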
339
00:21:37 --> 00:21:41
Where is this W in the complex
plane?
340
00:21:41 --> 00:21:47.45
It's -- it's on the unit
circle,
341
00:21:47.45 --> 00:21:48
right?
342
00:21:48 --> 00:21:54
This is -- it's the cosine of
two pi over n plus i times the
343
00:21:54 --> 00:21:56
sine of two pi over n.
344
00:21:56 --> 00:21:59
But actually,
forget this.
345
00:21:59 --> 00:22:05
It's never good to work with
the real and imaginary parts,
346
00:22:05 --> 00:22:10
the rectangular coordinates,
when we're taking powers.
347
00:22:10 --> 00:22:16
To take that to the tenth
power, we can't see what we're
348
00:22:16 --> 00:22:17
doing.
349
00:22:17 --> 00:22:21
To take this form to the tenth
power, we see immediately what
350
00:22:21 --> 00:22:22.56
we're doing.
351
00:22:22.56 --> 00:22:25
It would be e to the i 20 pi
over n.
352
00:22:25 --> 00:22:29
So when our matrix is full of
powers -- so it's this formula,
353
00:22:29 --> 00:22:32
and where is this on the
complex plane?
354
00:22:32 --> 00:22:37
Here are the real numbers,
here's the imaginary axis,
355
00:22:37 --> 00:22:42
here's the unit circle of
radius one, and this number is
356
00:22:42 --> 00:22:47
on the unit circle at this
angle, which is one n-th of the
357
00:22:47 --> 00:22:48
full way round.
358
00:22:48 --> 00:22:51
So if I drew,
for example,
359
00:22:51 --> 00:22:56
n equals six,
this would be e to the i
360
00:22:56 --> 00:23:00
two pi over six,
it would be one sixth of the
361
00:23:00 --> 00:23:03
way around, it would be 60
degrees.
362
00:23:03 --> 00:23:05
And where is W squared?
363
00:23:05 --> 00:23:10
So I -- my W is e to the two pi
I over six in this case,
364
00:23:10 --> 00:23:15
in this six by -- for the six
by six Fourier transform,
365
00:23:15 --> 00:23:19
it's totally constructed out of
this
366
00:23:19 --> 00:23:21
number and its powers.
367
00:23:21 --> 00:23:23
So what are its powers?
368
00:23:23 --> 00:23:27
Well, its powers are on the
unit circle, right?
369
00:23:27 --> 00:23:31
Because when I square a number,
a complex number,
370
00:23:31 --> 00:23:35.57
I square its absolute value,
which gives me one again.
371
00:23:35.57 --> 00:23:40
All the powers have -- are on
the unit circle.
372
00:23:40 --> 00:23:45
And the angle gets
doubled to a hundred and twenty,
373
00:23:45 --> 00:23:48
so there's W squared,
there's W cubed,
374
00:23:48 --> 00:23:52
there's W to the fourth,
there's W to the fifth and
375
00:23:52 --> 00:23:55
there is W to the sixth,
as we hoped,
376
00:23:55 --> 00:23:58
W to the sixth coming back to
one.
377
00:23:58 --> 00:24:02
So those are the six -- can I
say this on TV?
378
00:24:02 --> 00:24:07
The six sixth roots of one,
and it's this one,
379
00:24:07 --> 00:24:11
the primitive one we say,
the first one,
380
00:24:11 --> 00:24:12
which is W.
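The picture on the board -- six points equally spaced around the unit circle -- can be checked numerically; a small sketch for n = 6:

```python
# The n-th roots of one for n = 6: W^k = e^(2 pi i k / 6) for k = 0..6,
# all on the unit circle, with W^6 coming back to one.
import cmath

n = 6
W = cmath.exp(2j * cmath.pi / n)       # 60 degrees around the circle
powers = [W ** k for k in range(n + 1)]
```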
381
00:24:12 --> 00:24:18
Okay, so let me change -- I said I would
382
00:24:18 --> 00:24:21
probably switch to n equal four.
383
00:24:21 --> 00:24:23
What's W for that?
384
00:24:23 --> 00:24:25
It's the fourth root of one.
385
00:24:25 --> 00:24:28
W to the fourth will be one.
386
00:24:28 --> 00:24:32.89
W will be e to the two pi i
over
387
00:24:32.89 --> 00:24:34
four now.
388
00:24:34 --> 00:24:35
What's that?
389
00:24:35 --> 00:24:39
This is e to the i pi over two.
390
00:24:39 --> 00:24:45
This is a quarter of the way
around the unit circle,
391
00:24:45 --> 00:24:51
and that's exactly i,
a quarter of the way around.
392
00:24:51 --> 00:24:55
And sure enough,
the powers are i,
393
00:24:55 --> 00:25:02
i squared, which is minus one,
i cubed,
394
00:25:02 --> 00:25:07
which is minus i and finally i
to the fourth which is one,
395
00:25:07 --> 00:25:08
right.
396
00:25:08 --> 00:25:11
So there's W,
W squared, W cubed,
397
00:25:11 --> 00:25:17
W to the fourth -- I'm really
ready to write down this Fourier
398
00:25:17 --> 00:25:21
matrix for the four by four
case, just so we see that
399
00:25:21 --> 00:25:22
clearly.
400
00:25:22 --> 00:25:24
Let me do it here.
401
00:25:24 --> 00:25:30
F4 is -- all right,
one one one one, then one, W
402
00:25:30 --> 00:25:31
-- it's i.
403
00:25:31 --> 00:25:32
i squared.
404
00:25:32 --> 00:25:36
That's minus one.
i cubed is minus i.
405
00:25:36 --> 00:25:41
I'll -- I could write i squared
and i cubed.
406
00:25:41 --> 00:25:45
Why don't I,
just so we see the pattern for
407
00:25:45 --> 00:25:48
sure.
i squared,
408
00:25:48 --> 00:25:52.49
i cubed, then i squared,
i fourth,
409
00:25:52.49 --> 00:25:56
i sixth, and then i cubed,
i sixth and i ninth.
410
00:25:56 --> 00:26:02
You see the exponents fall in
this nice -- the exponent is the
411
00:26:02 --> 00:26:08
row number times the column
number, always starting at zero.
412
00:26:08 --> 00:26:08
Okay.
413
00:26:08 --> 00:26:14
And now I can put in those
numbers if you like -- one one
414
00:26:14 --> 00:26:19
one one,
one i minus one minus i,
415
00:26:19 --> 00:26:25
one minus one one minus one,
and one minus i
416
00:26:25 --> 00:26:27
minus one i.
417
00:26:27 --> 00:26:27
No.
418
00:26:27 --> 00:26:28
Yes.
419
00:26:28 --> 00:26:29
Right.
420
00:26:29 --> 00:26:36.56
Why do I think that
matrix is so remarkable?
421
00:26:36.56 --> 00:26:44
It's the four by four matrix
that comes into the four point
422
00:26:44 --> 00:26:47
Fourier transform.
423
00:26:47 --> 00:26:53
When we want to find the
Fourier
424
00:26:53 --> 00:26:57
transform, the four point
Fourier transform of a vector
425
00:26:57 --> 00:27:02
with four components,
we want to multiply by this F4
426
00:27:02 --> 00:27:05
or we want to multiply by F4
inverse.
427
00:27:05 --> 00:27:09.61
One way we're taking the
transform, one way we're taking
428
00:27:09.61 --> 00:27:11
the inverse transform.
429
00:27:11 --> 00:27:15.29
Actually, they're so close that
it's
430
00:27:15.29 --> 00:27:17
easy to confuse the two.
431
00:27:17 --> 00:27:22.33
The inverse of this matrix will
be a nice matrix also.
432
00:27:22.33 --> 00:27:26
And that's,
of course, what makes it --
433
00:27:26 --> 00:27:29
I guess Fourier knew
that.
434
00:27:29 --> 00:27:32
He knew the inverse of this
matrix.
435
00:27:32 --> 00:27:36
As you'll see,
it just comes from the fact
436
00:27:36 --> 00:27:40
that the
columns are orthogonal -- from
437
00:27:40 --> 00:27:45
the fact that the columns are
orthogonal, we will quickly
438
00:27:45 --> 00:27:48
figure out what is the inverse.
439
00:27:48 --> 00:27:53
What Fourier didn't know --
didn't notice -- I think Gauss
440
00:27:53 --> 00:27:58
noticed it but didn't make a
point of it -- and what then
441
00:27:58 --> 00:28:03
turned out to be really
important was the fact that this
442
00:28:03 --> 00:28:08
matrix is so special that you
can break it up into nice pieces
443
00:28:08 --> 00:28:12
with lots of zeroes,
factors that have lots of
444
00:28:12 --> 00:28:16.66
zeroes and multiply by it or by
its inverse very,
445
00:28:16.66 --> 00:28:17
very fast.
446
00:28:17 --> 00:28:17
Okay.
447
00:28:17 --> 00:28:22
But how did it get into this
lecture first?
448
00:28:22 --> 00:28:26
Because the columns are
orthogonal.
449
00:28:26 --> 00:28:32
Can I just check that the
columns of this matrix are
450
00:28:32 --> 00:28:33
orthogonal?
451
00:28:33 --> 00:28:40
So the inner product of that
column with that column is zero.
452
00:28:40 --> 00:28:46
The inner product of column one
with column three is zero.
453
00:28:46 --> 00:28:51
How about the inner product of
two and four?
454
00:28:51 --> 00:28:57
Can I take the inner product of
column two
455
00:28:57 --> 00:28:59.66
with column four?
456
00:28:59.66 --> 00:29:05
Or even the inner product of
two with three,
457
00:29:05 --> 00:29:13
let's see --
let me do two and
458
00:29:13 --> 00:29:13
four.
459
00:29:13 --> 00:29:14
Okay.
460
00:29:14 --> 00:29:18
What -- oh, I see,
yes, hmm.
461
00:29:18 --> 00:29:18
Hmm.
462
00:29:18 --> 00:29:27
Let's see, I believe that those
two columns are
463
00:29:27 --> 00:29:28
orthogonal.
464
00:29:28 --> 00:29:32
So let me take their inner
product and hope to get zero.
465
00:29:32 --> 00:29:36
Okay, now if you hadn't
listened to the first half of
466
00:29:36 --> 00:29:39
this lecture,
when you took the inner product
467
00:29:39 --> 00:29:42
of that with that,
you would have multiplied one
468
00:29:42 --> 00:29:46
by one, i by minus i,
and that would have given you
469
00:29:46 --> 00:29:50
one,
minus one by minus one would
470
00:29:50 --> 00:29:55
have given you another one, minus
i by i would have been minus i
471
00:29:55 --> 00:29:57
squared, that's another one.
472
00:29:57 --> 00:30:02.98
So do I conclude that the inner
product of columns -- I said
473
00:30:02.98 --> 00:30:07
columns two and four,
that's because I forgot those
474
00:30:07 --> 00:30:09
are columns one and three.
475
00:30:09 --> 00:30:14
I'm interested in their inner
product and I'm hoping it's
476
00:30:14 --> 00:30:17.06
zero, but it doesn't look like
zero.
477
00:30:17.06 --> 00:30:18
Nevertheless,
it is zero.
478
00:30:18 --> 00:30:20
Those columns are
perpendicular.
479
00:30:20 --> 00:30:21
Why?
480
00:30:21 --> 00:30:23
Because the inner product -- we
conjugate.
481
00:30:23 --> 00:30:27
Do you remember that the -- one
of the vectors in the inner
482
00:30:27 --> 00:30:30
product has to get conjugated.
483
00:30:30 --> 00:30:35
So when I conjugated,
it changes that i to a minus i,
484
00:30:35 --> 00:30:40.76
changes this to a plus i,
changes those -- that second
485
00:30:40.76 --> 00:30:44
sign and that fourth sign, and I
do get zero.
486
00:30:44 --> 00:30:47
So those columns are
orthogonal.
487
00:30:47 --> 00:30:50
So columns are orthogonal.
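The check he just did by hand can be repeated exactly; a Python sketch using columns one and three in the EE numbering:

```python
# Columns of F4: column 1 holds the powers of i, column 3 the powers of i^3.
# The plain dot product gives 4 and looks non-orthogonal -- only the
# conjugated inner product reveals that the columns are orthogonal.
col1 = [1 + 0j, 1j, -1 + 0j, -1j]    # 1, i, i^2, i^3
col3 = [1 + 0j, -1j, -1 + 0j, 1j]    # 1, i^3, i^6, i^9

plain = sum(a * b for a, b in zip(col1, col3))                 # no conjugate
hermitian = sum(a.conjugate() * b for a, b in zip(col1, col3)) # conjugate first
```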
488
00:30:50 --> 00:30:53
They're not quite orthonormal.
489
00:30:53 --> 00:30:56.33
But I could fix that easily.
490
00:30:56.33 --> 00:30:59
All those columns have
length two.
491
00:30:59 --> 00:31:04
Length squared is four,
like this -- the four I had
492
00:31:04 --> 00:31:08
there -- this length squared,
one squared plus one
493
00:31:08 --> 00:31:14
squared plus one squared plus one squared
is four, square root is two --
494
00:31:14 --> 00:31:20
so if I really wanted them --
suppose I really wanted to fix
495
00:31:20 --> 00:31:24
life perfectly,
I could divide by two,
496
00:31:24 --> 00:31:29.5
and now I have columns that are
actually orthonormal.
497
00:31:29.5 --> 00:31:30
So what?
498
00:31:30 --> 00:31:33
So I can invert right away,
right?
499
00:31:33 --> 00:31:39
Orthonormal columns means --
now I'm keeping this one half in
500
00:31:39 --> 00:31:44
here for the
moment -- means F4
501
00:31:44 --> 00:31:50
Hermitian, can I use that,
conjugate transpose times F4 is
502
00:31:50 --> 00:31:50
I.
503
00:31:50 --> 00:31:53
So I see what the inverse is.
504
00:31:53 --> 00:31:58
The inverse of F4 is -- it's
just like an orthogonal
505
00:31:58 --> 00:31:59
matrix.
506
00:31:59 --> 00:32:04
The inverse is the transpose --
here the inverse is the
507
00:32:04 --> 00:32:08
conjugate transpose.
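As a quick sketch of that claim (my own check, with exact powers of i): multiplying the conjugate transpose of F4 by F4 gives 4 times the identity, so F4 divided by 2 has orthonormal columns and the inverse of F4 is its conjugate transpose over 4.

```python
# Sketch: verify that F4^H * F4 = 4I, i.e. after dividing by 2 the
# columns are orthonormal and the inverse is the conjugate transpose.
n = 4
powers = [1, 1j, -1, -1j]                      # exact powers of i
F = [[powers[(j * k) % 4] for k in range(n)] for j in range(n)]

# Entry (j, k) of F^H F: column j, conjugated, against column k.
prod = [[sum(F[m][j].conjugate() * F[m][k] for m in range(n))
         for k in range(n)] for j in range(n)]

for j in range(n):
    for k in range(n):
        assert prod[j][k] == (n if j == k else 0)

print("F4^H F4 = 4I, so F4 inverse = (1/4) F4^H")
```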
508
00:32:08 --> 00:32:08
So, fine.
509
00:32:08 --> 00:32:13
That -- that tells me that
anything good that I learn about
510
00:32:13 --> 00:32:19
F4 I'll know the same -- I'll
know a similar fact about its
511
00:32:19 --> 00:32:24
inverse, because its inverse is
just its conjugate transpose.
512
00:32:24 --> 00:32:27
Okay, now -- so what's good?
513
00:32:27 --> 00:32:30
Well, first,
the columns are orthogonal.
514
00:32:30 --> 00:32:32
That's a key fact,
then.
515
00:32:32 --> 00:32:35
That's the thing that makes the
inverse easy.
516
00:32:35 --> 00:32:39
But what property is it that
leads to the fast Fourier
517
00:32:39 --> 00:32:40
transform?
518
00:32:40 --> 00:32:43
So now I'm going to talk,
in these last minutes,
519
00:32:43 --> 00:32:46
about the fast Fourier
transform.
520
00:32:46 --> 00:32:49
What -- here's the idea.
521
00:32:49 --> 00:32:54
F6, our six by six matrix --
there's a neat
522
00:32:54 --> 00:32:57
connection to F3,
half as big.
523
00:32:57 --> 00:33:00
There's a connection of F8 to
F4.
524
00:33:00 --> 00:33:04
There's a connection of F(64)
to F(32).
525
00:33:04 --> 00:33:08
Shall I write down what that
connection is?
526
00:33:08 --> 00:33:13
What's the connection of F(64)
to F(32)?
527
00:33:13 --> 00:33:21
So F(64) is a 64 by 64 matrix
whose W is the 64th root of one.
528
00:33:21 --> 00:33:26.56
So it's one 64th of the way
round in F(64).
529
00:33:26.56 --> 00:33:32
And F(32) is a 32
by 32 matrix.
530
00:33:32 --> 00:33:36
Remember, they're different
sizes.
531
00:33:36 --> 00:33:43
And the W in that 32 by 32
matrix is the 32nd root of one,
532
00:33:43 --> 00:33:50
which is twice as far around --
you see that key
533
00:33:50 --> 00:33:55
point -- that's
how 32 and 64 are connected in
534
00:33:55 --> 00:33:56
the Ws.
535
00:33:56 --> 00:34:02
The W for 64 is one 64th of the
way -- so all I'm saying is that
536
00:34:02 --> 00:34:07
if I square the W -- W(64),
that's what I'm using --
537
00:34:07 --> 00:34:12
this Wn is e to the i two pi
538
00:34:12 --> 00:34:17.83
over n --
so W(64) is one 64th of the way
539
00:34:17.83 --> 00:34:19
around it.
540
00:34:19 --> 00:34:24
When I square that,
what do I get but W(32)?
541
00:34:24 --> 00:34:24
Right?
542
00:34:24 --> 00:34:30
If I square this matrix,
I double the angle -- if I
543
00:34:30 --> 00:34:35
square this number,
I double the angle,
544
00:34:35 --> 00:34:37
I get W(32).
545
00:34:37 --> 00:34:43
So somehow there's a little
hope
546
00:34:43 --> 00:34:46
here to connect F(64) with
F(32).
547
00:34:46 --> 00:34:49
And here's the connection.
548
00:34:49 --> 00:34:49
Okay.
549
00:34:49 --> 00:34:55
Let me go back --
yes, I'll do it
550
00:34:55 --> 00:34:56
here.
551
00:34:56 --> 00:34:58
Here's the connection.
552
00:34:58 --> 00:34:59
F(64).
553
00:34:59 --> 00:35:05
The 64 by 64 Fourier matrix is
connected to two copies of
554
00:35:05 --> 00:35:06
F(32).
555
00:35:06 --> 00:35:11
Let me leave a little space for
the connection.
556
00:35:11 --> 00:35:13
So this is 64 by 64.
557
00:35:13 --> 00:35:18
Here's a matrix of that same
size,
558
00:35:18 --> 00:35:23
because it's got two copies of
F(32) and two zero matrixes.
559
00:35:23 --> 00:35:28
Those zero matrixes are the
key, because when I multiply by
560
00:35:28 --> 00:35:32
this matrix, just as it is,
regular multiplication,
561
00:35:32 --> 00:35:37
I would have 64 squared
562
00:35:37 --> 00:35:40
little multiplications to do.
563
00:35:40 --> 00:35:43
But this matrix is half zero.
564
00:35:43 --> 00:35:47
Well, of course,
the two aren't equal.
565
00:35:47 --> 00:35:52
I'm going to put an equals
sign, but there have to be some
566
00:35:52 --> 00:35:58
fix-up factors -- one there and
one there -- to make it true.
567
00:35:58 --> 00:36:05
The beauty is that these fix up
factors will be really -- almost
568
00:36:05 --> 00:36:07
all zeroes.
569
00:36:07 --> 00:36:11
So that as soon as we get this
formula right,
570
00:36:11 --> 00:36:16.49
we've got a great idea for how
to get from the
571
00:36:16.49 --> 00:36:21
64 squared calculations -- so
originally we
572
00:36:21 --> 00:36:26.75
have 64 squared calculations
from there, but this one will
573
00:36:26.75 --> 00:36:30
give us --
574
00:36:30 --> 00:36:35
we don't need that many -- we
only need two times 32 squared,
575
00:36:35 --> 00:36:38
because we've got that twice.
576
00:36:38 --> 00:36:40
Plus the fix-up.
577
00:36:40 --> 00:36:45
So I have to tell you what's in
this fix-up matrix.
578
00:36:45 --> 00:36:49
The one on the right is
actually a permutation matrix,
579
00:36:49 --> 00:36:54
a very simple odds and evens
permutation matrix,
580
00:36:54 --> 00:36:58.53
the --
ones show up -- I haven't put
581
00:36:58.53 --> 00:37:04
enough ones, I really need
32 of these guys at double
582
00:37:04 --> 00:37:10
space, and then you see
it's a permutation matrix.
583
00:37:10 --> 00:37:16
What it does -- shall I call it
P for permutation matrix?
584
00:37:16 --> 00:37:22
So what that P does when it
multiplies a vector,
585
00:37:22 --> 00:37:27
it takes the even
numbered components first and
586
00:37:27 --> 00:37:29
then the odds.
587
00:37:29 --> 00:37:34
You see this -- this one
skipping every time is going to
588
00:37:34 --> 00:37:38.76
pick out x0, x2,
x4, x6 and then below that will
589
00:37:38.76 --> 00:37:41
come --
will pick out x1,
590
00:37:41 --> 00:37:42
x3, x5.
591
00:37:42 --> 00:37:46
And of course,
that can be hard wired in the
592
00:37:46 --> 00:37:49
computer to be instantaneous.
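That permutation can be sketched in code (size 8 for brevity; the lecture's P is the 64-size version, and the function name is mine): even-indexed components first, then the odd ones.

```python
# The odd-even permutation P: pick out the even-indexed components
# first, then the odd-indexed ones below them.
def even_odd_permute(x):
    return x[0::2] + x[1::2]

x = ["x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7"]
print(even_odd_permute(x))
# ['x0', 'x2', 'x4', 'x6', 'x1', 'x3', 'x5', 'x7']
```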
593
00:37:49 --> 00:37:53
So that says -- so far,
what have we said?
594
00:37:53 --> 00:37:59.31
We're saying that the 64 by 64
Fourier matrix is really
595
00:37:59.31 --> 00:38:04
separated into --
separate your vector into the
596
00:38:04 --> 00:38:08
even components
and the odd components,
597
00:38:08 --> 00:38:13
then do a 32 size Fourier
transform onto those separately,
598
00:38:13 --> 00:38:16
and then put the pieces
together again.
599
00:38:16 --> 00:38:21
So the pieces -- putting them
together -- turns out
600
00:38:21 --> 00:38:26
to be I and a diagonal matrix
and I and a minus,
601
00:38:26 --> 00:38:28
that same diagonal matrix.
602
00:38:28 --> 00:38:34
So the fix-up cost is really
the cost of multiplying by D,
603
00:38:34 --> 00:38:39
this diagonal matrix,
because there's essentially no
604
00:38:39 --> 00:38:44.11
cost in -- in the I part or in
the permutation part,
605
00:38:44.11 --> 00:38:48
so really it's -- the fix-up
cost is
606
00:38:48 --> 00:38:53
essentially because D is
diagonal -- is 32
607
00:38:53 --> 00:38:54
multiplications.
608
00:38:54 --> 00:39:01
There you're
seeing it -- of course we didn't
609
00:39:01 --> 00:39:06
check the formula or we didn't
even say what D is yet,
610
00:39:06 --> 00:39:13
but I will -- this diagonal
matrix D is powers of W -- one W
611
00:39:13 --> 00:39:18
W squared down to W to the 31st.
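Here is a numeric sketch of the whole factorization, checked at size n = 8 instead of 64 (the function names are mine): the top half of Fn x is E plus D times O, the bottom half is E minus D times O, where E and O are half-size transforms of the even and odd components.

```python
import cmath

def dft(x):
    # Direct transform, the n^2 way: y[j] = sum_k W^(jk) x[k], W = e^(2 pi i / n).
    n = len(x)
    W = cmath.exp(2j * cmath.pi / n)
    return [sum(W ** (j * k) * x[k] for k in range(n)) for j in range(n)]

def one_fft_level(x):
    # One level of F_n = [I D; I -D] [F_{n/2} 0; 0 F_{n/2}] P.
    n = len(x)
    W = cmath.exp(2j * cmath.pi / n)
    E = dft(x[0::2])                     # F_{n/2} on the even components (P's doing)
    O = dft(x[1::2])                     # F_{n/2} on the odd components
    D = [W ** j for j in range(n // 2)]  # diagonal fix-up: 1, W, W^2, ...
    return ([E[j] + D[j] * O[j] for j in range(n // 2)] +
            [E[j] - D[j] * O[j] for j in range(n // 2)])

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
assert all(abs(a - b) < 1e-9 for a, b in zip(dft(x), one_fft_level(x)))
print("one recursion level matches the direct transform")
```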
612
00:39:18 --> 00:39:22
So you see that when I -- to do
a multiplication by D,
613
00:39:22 --> 00:39:25
I need to do 32
multiplications.
614
00:39:25 --> 00:39:26
There they are.
615
00:39:26 --> 00:39:31
Then -- but the other,
the more serious work is to do
616
00:39:31 --> 00:39:35
the F(32) twice on the --
separately on the even numbered
617
00:39:35 --> 00:39:40
and odd numbered components,
so twice 32 squared.
618
00:39:40 --> 00:39:43
So 64 squared is gone now.
619
00:39:43 --> 00:39:46
And that's the new count.
620
00:39:46 --> 00:39:49
Okay, great,
but what next?
621
00:39:49 --> 00:39:56
So we now have
the key idea -- we would have to
622
00:39:56 --> 00:40:02
check the algebra,
but it's just checking a lot of
623
00:40:02 --> 00:40:05
sums that come out correctly.
624
00:40:05 --> 00:40:11
This is
the right way to see the fast
625
00:40:11 --> 00:40:15
Fourier transform,
or one right way to see it.
626
00:40:15 --> 00:40:19
Then you've got to see what's
the next idea.
627
00:40:19 --> 00:40:22
The next idea is to break the
32s down.
628
00:40:22 --> 00:40:24
Break those 32s down.
629
00:40:24 --> 00:40:30
So we have this factor,
and now we have the F(32),
630
00:40:30 --> 00:40:35.59
but that breaks into
F(16)
631
00:40:35.59 --> 00:40:36
and F(16).
632
00:40:36 --> 00:40:41
Each -- each F(32) is breaking
into two copies of F(16),
633
00:40:41 --> 00:40:47
and then we have a permutation
and then the -- so this is a --
634
00:40:47 --> 00:40:51.95
like, this was a 64 size
permutation, this is a 32 size
635
00:40:51.95 --> 00:40:55
permutation -- I guess I've got
it twice.
636
00:40:55 --> 00:41:00
So I'm
just using the same idea
637
00:41:00 --> 00:41:05
recursively -- recursion is the
key word -- that on each of
638
00:41:05 --> 00:41:11
those F(32)s -- so here's zero
zero -- to get
639
00:41:11 --> 00:41:16
F(32) -- this is the odd even
permutations -- so you see,
640
00:41:16 --> 00:41:21
we're -- the combination of
those permutations,
641
00:41:21 --> 00:41:22
what's it doing?
642
00:41:22 --> 00:41:28
This guy separates
into evens and odds,
643
00:41:28 --> 00:41:33
and then this guy separates the
evens into the ones -- the
644
00:41:33 --> 00:41:39
numbers that are multiples of four -- the even
evens, which means zero four
645
00:41:39 --> 00:41:43
eight sixteen -- and even odds,
which means two,
646
00:41:43 --> 00:41:48
six, ten, fourteen -- and then
odd evens and
647
00:41:48 --> 00:41:49.65
odd odds.
648
00:41:49.65 --> 00:41:54
You see, together these
permutations then break it --
649
00:41:54 --> 00:42:00
break our vector down into
x even-even and three other
650
00:42:00 --> 00:42:01
pieces.
651
00:42:01 --> 00:42:07
Those are the four pieces that
separately get multiplied by
652
00:42:07 --> 00:42:13
F(16) -- separately fixed up by
these Is and Ds and Is and minus
653
00:42:13 --> 00:42:18
Ds -- so this count is now
reduced.
654
00:42:18 --> 00:42:21.61
This count is now -- what's it
reduced to?
655
00:42:21.61 --> 00:42:26
So that's going to be gone,
because 32 squared --
656
00:42:26 --> 00:42:29
that's the change I'm making,
right?
657
00:42:29 --> 00:42:34
The 32 squared -- so
it's this that's now reduced.
658
00:42:34 --> 00:42:39
So I still have two times it,
but now what's 32 squared?
659
00:42:39 --> 00:42:45
It's gone in favor of two
sixteen squareds plus sixteen.
660
00:42:45 --> 00:42:50
And then the original
32 for the fix-up.
661
00:42:50 --> 00:42:54
Maybe you see what's happening.
662
00:42:54 --> 00:43:01
Even easier than this formula
is what happens when I do the
663
00:43:01 --> 00:43:07
recursion more and more times,
I get simpler and simpler
664
00:43:07 --> 00:43:10
factors in
the middle.
665
00:43:10 --> 00:43:15
Eventually I'll be down to
two-point or one-point Fourier
666
00:43:15 --> 00:43:16
transforms.
667
00:43:16 --> 00:43:21
But I get more and more factors
piling up on the right and left.
668
00:43:21 --> 00:43:24
On the right,
I'm just getting permutation
669
00:43:24 --> 00:43:24
matrixes.
670
00:43:24 --> 00:43:28.55
On the left,
I'm getting these guys,
671
00:43:28.55 --> 00:43:32
these Is and Ds,
so that there was a 32 there
672
00:43:32 --> 00:43:36
and -- each one of these is
costing 32.
673
00:43:36 --> 00:43:37
Each one of those is costing
674
00:43:38 --> 00:43:39.4
32.
675
00:43:39.4 --> 00:43:41
And how many will there be?
676
00:43:41 --> 00:43:46
So you see the 32 for this
original fix-up,
677
00:43:46 --> 00:43:50
because D had 32 numbers,
32 for this next fix-up,
678
00:43:50 --> 00:43:53
because D has 16 and 16 more.
679
00:43:53 --> 00:43:56.01
I keep going.
680
00:43:56.01 --> 00:44:01
So the count in the middle goes
down to zip, but these fix up
681
00:44:01 --> 00:44:06
counts are all that I'm left
with, and how many factors --
682
00:44:06 --> 00:44:10
how many fix-ups have I got --
log n -- from 64,
683
00:44:10 --> 00:44:13
one step to 32,
one step to 16,
684
00:44:13 --> 00:44:16
one step to eight,
four, two and one.
685
00:44:16 --> 00:44:17
Six steps.
686
00:44:17 --> 00:44:23
So I have six
fix-up factors.
687
00:44:23 --> 00:44:26
Finally I get to six times the
688
00:44:26 --> 00:44:27
32.
689
00:44:28 --> 00:44:31
That's my final count.
690
00:44:31 --> 00:44:38
Instead of 64 squared,
this is log to the base two of
691
00:44:38 --> 00:44:43.36
64 times 64 -- actually,
half of 64.
692
00:44:43.36 --> 00:44:49.33
So actually,
the final count is n log to the
693
00:44:49.33 --> 00:44:56
base two of n, times a half --
that's the 32, half of 64.
694
00:44:56 --> 00:45:02
So can I put a box around that
wonderful, extremely important
695
00:45:02 --> 00:45:08.97
and satisfying conclusion --
that the fast Fourier transform
696
00:45:08.97 --> 00:45:15
multiplies by an n by n matrix,
but it does it not in n squared
697
00:45:15 --> 00:45:19
steps, but in one half n log n
steps.
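That count can be checked with a tiny recursion (a sketch; the cost here counts only the multiplications by the diagonal D at each level): cost(n) = 2 cost(n/2) + n/2, which solves to one half n log2 n.

```python
# Multiplication count for the FFT recursion: two half-size transforms
# plus n/2 fix-up multiplications by the diagonal matrix D.
def fft_cost(n):
    if n == 1:
        return 0
    return 2 * fft_cost(n // 2) + n // 2

n = 1024                          # two to the tenth
print(fft_cost(n))                # 5120 = (1/2) * 1024 * 10, i.e. five times n
print((n * n) // fft_cost(n))     # 204 -- the "factor of 200" savings over n^2
```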
698
00:45:19 --> 00:45:24
And if we just complete by
doing a count,
699
00:45:24 --> 00:45:32
let's suppose a
typical case would be two to the
700
00:45:32 --> 00:45:32
tenth.
701
00:45:32 --> 00:45:37
Now n squared is bigger than a
million.
702
00:45:37 --> 00:45:44.35
So it's a thousand twenty four
times a thousand twenty four.
703
00:45:44.35 --> 00:45:51
But what is one half n log n --
what is the new count,
704
00:45:51 --> 00:45:54
done the right way?
705
00:45:54 --> 00:45:59
It's n -- the thousand twenty
four times one half,
706
00:45:59 --> 00:46:02
and what's the logarithm?
707
00:46:02 --> 00:46:03
It's ten.
708
00:46:03 --> 00:46:05
So times ten over two.
709
00:46:05 --> 00:46:11.36
So it's five
times a thousand twenty four,
710
00:46:11.36 --> 00:46:17
where this one was a thousand
twenty four times a thousand
711
00:46:17 --> 00:46:18
twenty four.
712
00:46:18 --> 00:46:22
We've reduced the calculation
by a
713
00:46:22 --> 00:46:27
factor of 200 just by factoring
the matrix properly.
714
00:46:27 --> 00:46:32
This was a thousand times n,
we're now down to five times n.
715
00:46:32 --> 00:46:37
So we can do 200 Fourier
transforms, where before we
716
00:46:37 --> 00:46:40
could do one,
and in real scientific
717
00:46:40 --> 00:46:45.78
calculations where Fourier
transforms are happening all the
718
00:46:45.78 --> 00:46:49
time, we're saving a factor of 200
719
00:46:49 --> 00:46:55
in one of the major steps of
modern scientific computing.
720
00:46:55 --> 00:47:00.07
So that's the idea of the fast
Fourier transform,
721
00:47:00.07 --> 00:47:05.87
and you see the whole thing
hinged on being a special matrix
722
00:47:05.87 --> 00:47:08
with orthonormal columns.
723
00:47:08 --> 00:47:12
Okay, that's actually it for
complex numbers.
724
00:47:12 --> 00:47:17
I'm back next time really to --
to real numbers,
725
00:47:17 --> 00:47:22
eigenvalues and eigenvectors
and the key idea of positive
726
00:47:22 --> 00:47:25
definite matrixes is going to
show up.
727
00:47:25 --> 00:47:27
What's a positive definite
matrix?
728
00:47:27 --> 00:47:31
And it's terrific that this
course is going to reach
729
00:47:31 --> 00:47:35
positive definiteness,
because those are the matrixes
730
00:47:35 --> 00:47:39.67
that you see the most in
applications.
731
00:47:39.67 --> 00:47:41
Okay, see you next time.
732
00:47:41 --> 00:47:44
Thanks.