
These video lectures of Professor Gilbert Strang teaching 18.06 were recorded in Fall 1999 and do not correspond precisely to the current edition of the textbook. However, this book is still the best reference for more information on the topics covered in each lecture.

Strang, Gilbert. *Introduction to Linear Algebra*. 4th ed. Wellesley, MA: Wellesley-Cambridge Press, February 2009. ISBN: 9780980232714.

**Instructor/speaker:** Prof. Gilbert Strang

## Lecture 24b: Quiz 2 Review


OK, this is quiz review day. The quiz will be this hour, one o'clock Wednesday, in Walker, top floor of Walker, closed book, all normal.

I wrote down what we've covered in this second part of the course, and actually I'm impressed as I write it. So that's chapter four on orthogonality, and you're remembering these: those columns are orthonormal vectors, and then we call that matrix Q. And the key fact, how we state that those columns are orthonormal in terms of Q, is that Q transpose Q is the identity.

So that's the matrix statement of the property that the columns are orthonormal, the dot products are either one or zero. And then we computed the projections onto lines and onto subspaces, and we used that to solve problems Ax=b in the least-squares sense: when there was no solution, we found the best solution.

And then finally this Gram-Schmidt idea, which takes independent vectors and lines them up: it subtracts off the projections onto the part you've already done, so that the new part is orthogonal, and so it takes a basis to an orthonormal basis.

And you -- those calculations involve square roots a lot because you're making things unit vectors, but you should know that step.

OK, for determinants, the big picture is the properties of the determinant: properties one, two, and three define the determinant, and then four through ten were consequences.

Then the big formula that has n factorial terms, half of them have plus signs and half minus signs, and then the cofactor formula.

And that led us to a formula for the inverse.

And finally, just so you know what's covered from chapter six: it's sections six point one and six point two. So that's the basic idea of eigenvalues and eigenvectors, the equation for the eigenvalues, the mechanical step; this is really Ax equal lambda x for all n eigenvectors at once, if we have n independent eigenvectors; and then using that to compute powers of a matrix.

So you notice the differential equations are not on this list, because that's six point three; that's for the third quiz.

OK.

What I usually do for review is take an old exam and just try to pick out questions that are significant and write them quickly on the board. Shall I proceed that way again?

This -- this exam is really old.

November nineteen eighty-four, so that was before the Web existed.

So not only were the lectures not on the Web, nobody even had a Web page, my God.

Nevertheless, linear algebra was still as great as ever.

And that wasn't meant to be a joke. OK, all right, so let me just take these questions as they come.

All right.

OK.

So the first question's about projections.

It says we're given the vector a equal two, one, two, and I want to find the projection matrix P that projects onto the line through a.

So my picture is: I'm in three dimensions, of course. There's the vector a, two, one, two; let me draw the whole line through it. I want to project any vector b onto that line, and I'm looking for the projection matrix.

So the projection matrix is the matrix that I multiply b by to get here.

And I guess this first part, part one a, is really asking you: the quick way to find P is just to remember what the formula is.

We're projecting onto a line, so our usual formula is A times, A transpose A inverse, times A transpose. But now A is just a one-column matrix, so it'll be little a times little a transpose, and a transpose a is just a number now, one by one, so I can put it in the denominator. That's really what we want to remember. And using that two, one, two, what will I get?

I'm dividing by -- what's the length squared of that vector?

So what's a transpose a?

Looks like nine. And what's the matrix? Well, I'm doing two, one, two against two, one, two, so it's one-ninth of this matrix: four, two, four; two, one, two; four, two, four.
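As a quick numerical check of these numbers (mine, not part of the 1984 exam), a few lines of numpy confirm the entries of P, its rank, and the eigenvalues discussed next:

```python
import numpy as np

# Projection onto the line through a = (2, 1, 2):  P = a a^T / (a^T a).
a = np.array([2.0, 1.0, 2.0])
P = np.outer(a, a) / (a @ a)               # a^T a = 9

expected = np.array([[4, 2, 4],
                     [2, 1, 2],
                     [4, 2, 4]]) / 9.0
assert np.allclose(P, expected)

assert np.linalg.matrix_rank(P) == 1       # rank one
assert np.allclose(P @ P, P)               # projecting twice changes nothing
assert np.allclose(P @ a, a)               # a is fixed: eigenvalue 1
assert np.allclose(np.linalg.eigvalsh(P), [0, 0, 1])   # eigenvalues 0, 0, 1
```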

Now the next part asked about eigenvalues.

So you see, since we're learning, we know a lot more now; we can make connections between these chapters. What are the eigenvalues of P? I could ask, what's the rank of P? What's the rank of that matrix?

Uh -- one.

Rank is one.

What's the column space?

If I apply P to all vectors then I fill up the column space, it's combinations of the columns, so what's the column space? Well, it's just this line. The column space is the line through two, one, two.

And now what's the eigenvalue?

So, since that matrix has rank one, tell me the eigenvalues of this matrix.

It's a singular matrix, so it certainly has an eigenvalue zero.

Actually, the rank is only one, so that means there's going to be a two-dimensional null space; this lambda equals zero will be repeated twice, because I can find two independent eigenvectors with lambda equals zero. And then of course, since it's got three eigenvalues, what's the third one?

It's one.

How do I know it's one?

Either from the trace, which is nine over nine, which is one, or by remembering what the eigenvector is. And actually now it asks for the eigenvector: what's the eigenvector for that eigenvalue?

It's the vector that doesn't move. Eigenvalue one, so the vector that doesn't move is a. This a is also the eigenvector with lambda equal one, because if I apply the projection matrix to a, I get a again.

Everybody sees that if I apply that matrix to a, in little letters, then I have a transpose a canceling a transpose a, and I get a again.

So sure enough, Pa equals a.

And the eigenvalue is one.

OK.

Good. Now, actually it asks you further to solve this difference equation: solve u(k+1) = Puk, starting from u0 equal nine, nine, zero.

And find uk.

So -- so what's up?

Shall we find u1 first of all?

So just to get started.

So what is u1? It's Pu0 of course.

So if I do the projection of this vector onto the line, so this is like my vector b now that I'm projecting onto the line, I get a times a transpose u0 over a transpose a.

Well, one way or another I just do this multiplication.

But maybe this is the easiest way to do it.

a transpose, can I remember what a is on this board? Two, one, two, so I'm projecting onto the line through there.

This is the projection, it's P times the vector u0. So what do I have for a transpose u0? Looks like twenty-seven. And a transpose a we figured was nine, so it's three a; this is the x hat, the multiple of a in our formulas, and of course that's six, three, six.

So that's u1. Computed out directly.

That's on the line through a and it's the closest point to u0, and it's just Pu0. Just straightforward multiplication produces that.

OK. Now, what's u2? Well, u2 is Pu1, I agree.

Do I need to compute that again?

No, because once I'm already on the line through a, I could do the projection k times, but it's enough just to do it once.

It's the same, it's the same, six, three, six.
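A sketch of the same iteration in numpy (my check, not the exam's), showing that one projection lands on the line and every later projection stays put:

```python
import numpy as np

a = np.array([2.0, 1.0, 2.0])
P = np.outer(a, a) / (a @ a)

u0 = np.array([9.0, 9.0, 0.0])
u1 = P @ u0            # x_hat = (a . u0)/(a . a) = 27/9 = 3, so u1 = 3a
assert np.allclose(u1, [6.0, 3.0, 6.0])

# Once on the line through a, every further projection gives the same vector.
uk = u1
for _ in range(10):
    uk = P @ uk
assert np.allclose(uk, u1)
```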

So this is a special case. Actually on the quiz, if you see one of these, which could very well be there, it could very well not be a projection matrix, and then we would use all the eigenvalues and eigenvectors.

Let's think for a moment, how do you do those?

The point of this small part of the question was that when P is a projection matrix, so that P squared equals P and P cubed equals P, then we don't need to get into the mechanics of knowing all the other eigenvalues and eigenvectors.

We just can go directly.

But if P was now some other matrix, let's just remember from these very recent lectures how you would proceed. We would expand u0 as a combination of eigenvectors.

As a combination of eigenvectors: c1 x1, some multiple of the second eigenvector, some multiple of the third eigenvector. And then A to the k times u0 would be c1 lambda one to the k x1, plus c2 lambda two to the k x2, plus c3 lambda three to the k x3. We have to find these numbers; that's the work, actually.

The work is: find the eigenvalues, find the eigenvectors, and find the c's, because they all come into the formula.

So to do this, you can see what you have to compute.

You have to compute the eigenvalues, you have to compute the eigenvectors, and then to match u0 you compute the c-s, and then you've got it.

So it's a formula that shows what pieces we need.
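Here is a hedged sketch of that recipe for a general diagonalizable matrix; the 2-by-2 example matrix below is made up for illustration, not from the exam. Expand u0 in eigenvectors, then multiply each coefficient by lambda to the k:

```python
import numpy as np

def power_times_vector(A, u0, k):
    """Compute A^k u0 as c1 lam1^k x1 + c2 lam2^k x2 + ... (A diagonalizable)."""
    lam, X = np.linalg.eig(A)       # columns of X are the eigenvectors x_i
    c = np.linalg.solve(X, u0)      # coefficients in u0 = c1 x1 + c2 x2 + ...
    return X @ (c * lam**k)

# Made-up diagonalizable example, just to exercise the formula.
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])
u0 = np.array([1.0, 3.0])
for k in range(6):
    direct = np.linalg.matrix_power(A, k) @ u0
    assert np.allclose(power_times_vector(A, u0, k), direct)
```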

And what would actually happen in the case of this projection matrix?

If this A is a projection matrix, then a couple of eigenvalues are zero.

That's why we just throw those away.

The other eigenvalue was a one, so we got the same thing every time, c3 x3.

The first time, second time, third: all iterations left us with this constant, left us right here at six, three, six.

But maybe I take -- I'm taking this chance to remind you of what to do for other matrices.

OK. So that's part way through.

OK. The next question in nineteen eighty-four is fitting a straight line to points.

And actually a straight line through the origin.

A straight line through the origin.

So can I go to question two?

So this is fitting a straight line to these points. I'll just give you the points: at t equal one, y is four; at t equal two, y is five; at t equal three, y is eight.

So we've got the times one, two, three and the heights four, five, and eight.

And I'm trying to fit a straight line through the origin to these three values.

OK, so my equation that I'm allowing myself is just y equal Dt.

So I have only one unknown.

One degree of freedom.

One parameter D.

So I'm expecting my matrix to have just one column. When I try to fit a straight line that goes through the origin, I've lost the constant C here, so this should be a quick calculation.

And I can write down the three equations that I'd like to solve if the line went through the points; that's a good start.

Because that displays the matrix.

So can I continue that problem?

We would like to solve, so y is Dt: one times D equals four, two times D equals five, and three times D equals eight.

That would be perfection.

If I could find such a D, then the line y equal Dt would satisfy all three equations, would go through all three points, but it doesn't exist.

So I have to solve this, and now you can see my matrix; it just has one column.

Multiplying a scalar D.

And you can see the right-hand side.

This is my Ax=b.

I don't need three equals signs now because I've got vectors.

OK. There's Ax=b and you take it from there.

The best x will come from the key equation. This was the Ax = b that we can't solve exactly. So what's the equation for x hat? To find the best D, the best x, the equation is A transpose A times D hat equals A transpose times the right-hand side.

This is all coming from projection on a line, our -- our matrix only has one column.

So A transpose A is fourteen, times D hat, and for A transpose b I'm getting four, ten, and twenty-four. Is that right?

Four, ten, and twenty-four: that's thirty-eight. So that tells me the best D hat is thirty-eight over fourteen.
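The normal equation above is easy to check numerically; this sketch (mine, not the exam's) also confirms it against numpy's built-in least-squares solver:

```python
import numpy as np

t = np.array([1.0, 2.0, 3.0])      # times
b = np.array([4.0, 5.0, 8.0])      # heights

# One-column matrix A, fitting y = D t through the origin.
A = t.reshape(3, 1)

# Normal equation: A^T A D_hat = A^T b, i.e. 14 D_hat = 38.
D_hat = (t @ b) / (t @ t)
assert np.isclose(t @ t, 14) and np.isclose(t @ b, 38)
assert np.isclose(D_hat, 38 / 14)

# numpy's least-squares solver agrees.
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.isclose(sol[0], D_hat)
```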

OK. Fine.

All right, so we found the best line.

And now here's a -- here's the next question.

What vector did I just project onto what line?

See in this section on least squares here's the key point, I'm -- I'm asking you to think of the least squares problem in two ways.

Two different pictures.

Two different graphs.

One graph is this.

This is a graph in the ty plane: the line itself.

The other picture I'm asking you to think of is like my projection picture.

What vector am I projecting onto what line or what subspace when I do this? So my second picture is a projection picture that sees the whole thing with vectors.

Here's my vector of course that I'm projecting.

I'm projecting that vector b onto the column space of A.

Or if you like, it's just a line, the line through the one column of A, of course.

That's what this calculation is doing.

This is computing the best D, which is -- this is the x hat.

So -- so seeing it as a projection means I don't see the projection in this figure, right?

In this figure I'm not projecting those points onto that line or anything of the sort.

The projection picture for least squares is in the space where b lies, the whole vector b, and the columns of A.

And then the x is the best combination that gives the projection.

OK. So that's a chance to tell me that.

OK, now finally in orthogonality there's the Gram-Schmidt idea.

So that's problem two D here.

It asks me if I have two vectors, a1 equal one, two, three, and a2 equal one, one, one, find two orthogonal vectors in that plane.

So those two vectors give a plane, they give a plane.

Which is of course the column space of the matrix.

And I'm looking for an orthogonal basis for that plane.

So I'm looking for two orthogonal vectors.

And of course there are lots of -- I mean, I've got a plane there.

If I get one orthogonal pair, I can rotate it.

There's not just one answer here.

But Gram-Schmidt says OK, start with the first vector, and keep that one.

And then take the second one orthogonal to this.

So Gram-Schmidt says: start with this one, and then make a second vector B, can I call that second vector B, which is going to be orthogonal, perpendicular, to a1. Let me write the key equation.

This vector B is going to be this one, one, one, but one, one, one is not perpendicular to a1, so I have to subtract off its projection: I have to subtract off a1 transpose a2 over a1 transpose a1, that multiple of a1.

So I just have to compute what that is, and I get a vector B that's orthogonal to a1. It's the original vector minus its projection.

Oh, and I mean this to be a2; I'm projecting a2 onto the line through a1.

That's the part that I don't want because that's in the direction I already have, so I subtract off that projection and I get the part I want, the orthogonal part.

So that's the Gram-Schmidt thing, and we can put numbers in.

OK.

One, one, one, take away a1 transpose a2, which is six, over a1 transpose a1, which is fourteen, multiplying a1. And that gives us the new orthogonal vector B.

Because I only asked for orthogonal right now, I don't have to divide by the length, which would involve a square root.
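Putting in the numbers (my check, not the exam's): B = a2 minus (6/14) a1, which works out to (4/7, 1/7, -2/7), orthogonal to a1:

```python
import numpy as np

a1 = np.array([1.0, 2.0, 3.0])
a2 = np.array([1.0, 1.0, 1.0])

# Gram-Schmidt step: subtract from a2 its projection onto a1.
B = a2 - (a1 @ a2) / (a1 @ a1) * a1    # removes (6/14) a1
assert np.allclose(B, [4/7, 1/7, -2/7])
assert abs(a1 @ B) < 1e-12             # B is orthogonal to a1
```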

OK.

Third question.

Third question.

All right, let me move this board up. The third question will probably be about eigenvalues.

OK. Three.

This is a four-by-four matrix.

Its eigenvalues are lambda one, lambda two, lambda three, lambda four.

Question one.

What's the condition on the lambdas so that the matrix is invertible?

OK.

So under what conditions on the lambdas will the matrix be invertible?

So that's easy.

Invertible if what's the condition on the lambdas?

None of them are zero.

A zero eigenvalue would mean something in the null space, would mean a solution to Ax = 0x; but we're invertible, so none of them is zero. However you want to say it: no zero eigenvalues.

Good. OK, what's the determinant of A inverse?

The determinant of A inverse?

So where is that going to come from?

Well, if we knew the eigenvalues of A inverse, we could multiply them together to find the determinant.

And we do know the eigenvalues of A inverse.

What are they?

They're just one over lambda one, one over lambda two, and the same for the third eigenvalue and the fourth. So the product of the four eigenvalues of the inverse will give us the determinant of the inverse.

Fine. OK.

And what's the trace of A plus I?

So what do we know about trace?

It's the sum down the diagonal, but we don't know what our matrix is.

The trace is also the sum of the eigenvalues, and we do know the eigenvalues of A plus I.

So we just add them up.

So what -- what's the first eigenvalue of A plus I?

When the matrix A has eigenvalues lambda one, two, three and four, then the eigenvalues if I add the identity, that moves all the eigenvalues by one, so I just add up lambda one plus one, lambda two plus one, and so on, lambda three plus one, lambda four plus one, so it's lambda one plus lambda two plus lambda three plus lambda four plus four.

Right.

That movement by the identity moved all the eigenvalues by one, so it moved the whole trace by four.

So it was the trace of A plus four more.
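Both facts, that det of A inverse is the product of the 1/lambda's and that trace(A + I) is the sum of the lambda's plus four, can be spot-checked on any invertible 4-by-4; the triangular matrix below is made up for illustration:

```python
import numpy as np

# Made-up invertible 4x4 (upper triangular, so eigenvalues are 2, 3, 5, 7).
A = np.array([[2.0, 1.0, 0.0, 0.0],
              [0.0, 3.0, 1.0, 0.0],
              [0.0, 0.0, 5.0, 1.0],
              [0.0, 0.0, 0.0, 7.0]])
lam = np.linalg.eigvals(A)

# det(A^-1) = (1/lam1)(1/lam2)(1/lam3)(1/lam4)
assert np.isclose(np.linalg.det(np.linalg.inv(A)), np.prod(1.0 / lam))

# trace(A + I) = lam1 + lam2 + lam3 + lam4 + 4
assert np.isclose(np.trace(A + np.eye(4)), lam.sum() + 4)
```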

OK. Let's see.

We may be finished with this quiz twenty minutes early.

No. There's another question.

Oh, God, OK.

How did this class ever do it?

Well, you'll see. You'll be able to do it.

OK, this has got to be a determinant question.

All right.

More determinants and cofactors and big formula question.

OK. Let me do that.

So it's about a matrix, a -- a whole family of matrices.

Here's the four-by-four one.

The four-by-four one is, and all the matrices in this family are, tridiagonal with ones, otherwise zeroes. So that's the pattern, and we've seen this matrix.

OK.

So it's tridiagonal with ones on the diagonal, ones above and ones below. You see the general An; I'll use Dn for the determinant of An.

OK. All right.

So the first question is: use cofactors to show that Dn is something times D(n-1) plus something times D(n-2), and find those somethings.

OK. So this -- the fact that it's tridiagonal with these constant diagonals means that there is such a recurrence formula.

And so the first question is find it.

Well, what's the recurrence formula?

OK, how does it go?

So I'll use cofactors along the first row.

So I take that number times its cofactor.

So it's one times its cofactor, and what is its cofactor? D(n-1), right, exactly: this guy uses up row one and column one, so the cofactor is down here; it's one of those.

OK, that's the first cofactor term.

Now the other cofactor term is this guy.

Which uses up row one and column two and what's surprising about that?

When you use row one and column two that brings in a minus.

There'll be a minus, because the cofactor is this determinant times minus one. The one-two cofactor is that determinant with its sign changed.

OK.

So I have to look at that determinant and I have to remember in my head a sign is going to get changed.

OK. Now how do I do that determinant?

How do I make that one clear?

The neat way to do it is: here I'll use cofactors down the first column.

Because the first column is all zeroes except for that one, so this one is now -- and what's its cofactor?

Within this three-by-three its cofactor will be two-by-two, and what is it?

It's this, right?

So that part is all gone; I'm taking that times its cofactor, then zero times whatever its cofactor is. So it's really just one times this, and what is this in the general n-by-n case?

It's Dn minus two.

But now, is this a plus sign or a minus sign? The factor is just a one, because there's a one from there and a one from there.

And is it a plus or a minus?

It's minus, I guess, because there was a minus the first time and then a plus the second time, so overall it's a minus.

So my somethings a and b were one and minus one.

Those constants.

That's the recurrence.

OK.

Oh, and then it asks you to solve this thing, first by writing it as a system.

So now I'd like to know the solution.

I'd better know how it starts, right?

It starts with D1. What was D1? That's just the one-by-one case, so D1 is one. And what is D2, just to get us started? Then this would give us D3, D4, and forever. D2 is this two-by-two that I'm seeing here, and that determinant is obviously zero.
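A numerical sketch (not in the exam) of the recurrence Dn = D(n-1) minus D(n-2) with D1 = 1, D2 = 0, checked against the actual determinants:

```python
import numpy as np

def A(n):
    """n-by-n tridiagonal matrix of ones: diagonal, superdiagonal, subdiagonal."""
    M = np.eye(n)
    for i in range(n - 1):
        M[i, i + 1] = M[i + 1, i] = 1.0
    return M

# Recurrence D_n = D_{n-1} - D_{n-2}, starting from D_1 = 1, D_2 = 0.
D = [1.0, 0.0]
for _ in range(10):
    D.append(D[-1] - D[-2])

for n in range(1, 13):
    assert np.isclose(np.linalg.det(A(n)), D[n - 1])

# The sequence repeats with period six: 1, 0, -1, -1, 0, 1, 1, 0, ...
assert D[:6] == D[6:12]
```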

So those little ones will start the recurrence, and then we take off. And then the idea is to write this recurrence as a vector equation: Dn, D(n-1) is some matrix times the one before, D(n-1), D(n-2). What's the matrix?

You see, you remember this step of taking a single second order equation and by introducing a vector unknown to make it into a -- to a first order system.

OK. So Dn is one of D(n-1) minus one of D(n-2); I think that goes in the first row, right?

From the equation above?

And the second row just says D(n-1) equals D(n-1), so one and zero are fine.

So there's the matrix.

OK.

So now how do I proceed?

We can guess what this examiner's got in his little mind: well, find the eigenvalues.

And actually it tells us that the sixth power of these eigenvalues turns out to be one.

Uh, well, can -- can we get the equation for the eigenvalues?

Let's do it and let's get a formula for them.

OK. So what are the eigenvalues?

I look at the matrix, at this determinant: one minus lambda and zero minus lambda on the diagonal, and these guys are still there. I compute that determinant, and I get lambda squared minus lambda and then plus one.

And I set that to zero.

OK. So we're not Fibonacci here.

We're -- we're not seeing Fibonacci numbers.

Because the sign -- we had a sign change there.

And it's not clear right away. Is this matrix stable or unstable when we go further and further out?

Are these Ds increasing?

Are they going to zero?

Are they bouncing around periodically?

The answers have to be here.

I would like to know how big these lambdas are, right?

Let's see, what's lambda? From the quadratic formula, lambda is one plus or minus the square root of b squared minus four ac, that's one minus four, so minus three in there, all over two.

What's up?

They're complex.

The -- the eigenvalues are one plus square root of three I over two and one minus square root of three I over two.

What's the magnitude of lambda?

That's the key point for stability.

These are two numbers in the complex plane.

One plus something, somewhere here, and its complex conjugate there.

I want to know how far from the origin are those numbers.

What's the magnitude of lambda?

And do you see what it is?

Do you recognize this -- a number like that?

Take the real part squared and the imaginary part squared and add.

What do you get?

So the real part squared is a quarter.

The imaginary part squared is three-quarters. They add to one.

That's a number with -- that's on the unit circle.

That's an e to the i theta.

That's a cos(theta)+isin(theta). And what's theta?

This is a complex number that's worth knowing; it's not totally obvious, but it's nice.

That's -- I should see that as cos(theta)+isin(theta), and the angle that would do that is sixty degrees, pi over three.

So let me improve my picture. Those numbers are e to the i pi over three and e to the minus i pi over three.

We'll be doing a little more with complex numbers in the next two lectures.

Anyway, so what's the deal with stability? What do the Dn's do?

Well, look: if I take the sixth power, I'm around at one; the problem actually told me this. The sixth power of those eigenvalues brings me around to one. What does that tell you about the matrix, by the way?

Suppose you know, this was a great quiz question, so I should never have just said it, but it popped out.

Suppose lambda one to the sixth and lambda two to the sixth are -- are one, which they are.

What does that tell me about a matrix, about my matrix A here?

Well, what -- what matrix is connected with lambda one to the sixth and lambda two to the sixth?

It's got to be the matrix A to the sixth.

So what is A to the sixth for that matrix?

It's got eigenvalues one and one.

Because when I take the sixth power: the sixth power of this is e to the two pi i, that's one, and the sixth power of this is e to the minus two pi i, that's one.

So the sixth power of that matrix has eigenvalues one and one, so what is it?

It's the identity, right.

So if I operate this -- if I run this thing six times, I'm back where I was.

The sixth power of that matrix is the identity.

Good. OK. So it'll loop around; it doesn't go to zero, it doesn't blow up, it just periodically goes around with period six.
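That periodicity can be checked directly on the 2-by-2 companion matrix from above (one, minus one; one, zero):

```python
import numpy as np

M = np.array([[1.0, -1.0],
              [1.0,  0.0]])        # [D_n, D_{n-1}] = M [D_{n-1}, D_{n-2}]

lam = np.linalg.eigvals(M)         # roots of lambda^2 - lambda + 1 = 0
assert np.allclose(np.abs(lam), 1.0)                              # unit circle
assert np.allclose(np.sort(np.angle(lam)), [-np.pi/3, np.pi/3])   # +-60 degrees

# Sixth power is the identity, so the recurrence is periodic with period six.
assert np.allclose(np.linalg.matrix_power(M, 6), np.eye(2))
```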

OK, let's see. Could I also look at a final exam from nineteen ninety-two? Let me do that on this last board.

It starts -- a lot of the questions in this exam are about a family of matrices.

Let me give you the fourth guy in the family. It has zeroes on the diagonal, and the off-diagonal entries are going one, two, three, and so on.

One, two, three, and so on.

But, for the four-by-four case I'm stopping at four.

You see the pattern?

It's a family of matrices which is growing, and actually the numbers -- it's symmetric, right, it's equal to A4 transpose.

And we can ask all sorts of questions about its null space, its column space. "Find the projection matrix onto the column space of A3," for example, is in here.

So A3 is zero, one, zero; one, zero, two; zero, two, zero.

OK, find the projection matrix onto the column space.

By the way, is that matrix singular or invertible?

Singular.

Why do we know it's singular?

I see that column three is a multiple of column one.

Or we could take its determinant.

So it's certainly singular.

The projection matrix will be three-by-three, but it will project onto the column space; it'll project onto this plane.

The column space of A3. And I guess I would find it from the formula A times, A transpose A inverse, times A transpose; I guess I would do all this.

There may be a better way, perhaps I could think there might be a slightly quicker way, but that would come out pretty fast.

OK.

So that would be the projection matrix.
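One way to carry the computation out (a sketch; the lecture leaves the formula unevaluated) is to feed the projection formula a matrix whose columns are an independent pair of columns of A3, since A3 itself is singular and its full A transpose A would not be invertible:

```python
import numpy as np

A3 = np.array([[0.0, 1.0, 0.0],
               [1.0, 0.0, 2.0],
               [0.0, 2.0, 0.0]])

# Column 3 is twice column 1, so columns 1 and 2 span the column space.
A = A3[:, :2]
P = A @ np.linalg.inv(A.T @ A) @ A.T

assert np.allclose(P @ A3, A3)     # P fixes every column of A3
assert np.allclose(P @ P, P)       # it's a projection
assert np.allclose(P, P.T)         # and symmetric
```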

Next question.

Find the eigenvalues and eigenvectors of that matrix.

OK.

There's a three-by-three matrix. So what are its eigenvalues and eigenvectors? We haven't done any three-by-threes; let's do one.

I want to find, so how do I find eigenvalues?

I take the determinant of A3 minus lambda I.

So I'm subtracting lambda from the diagonal; I have a one, one, zero, zero, two, two there, and I just have to find that determinant.

OK, since it's three-by-three I'll just go for it.

This way gives me minus lambda cubed and a zero and zero.

Then in the direction that carries the minus sign: that's a zero, minus four lambda, and minus another lambda, so that's minus five lambda; but that direction goes with a minus sign, so it's plus five lambda.

That looks like the determinant of A3 minus lambda I, so I set it to zero.

So what are the eigenvalues?

Well, lambda factors out of this, times minus lambda squared plus five, so the eigenvalues are zero, square root of five, and minus square root of five. Thanks.

And I would never write down those three eigenvalues without checking the trace to tell the truth.

Because we did a bunch of calculations here, but then I can quickly add up the eigenvalues to get zero, add up the trace to get zero, and feel better. Well, I guess that wouldn't have caught my error if I'd made one; if that had been a four I wouldn't have noticed. And the determinant isn't anything greatly useful here, right, because the determinant is just zero.

And so I never would know whether that five was right or wrong, but thanks for making it right.
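A quick check (mine) of those eigenvalues and of the trace sanity test:

```python
import numpy as np

A3 = np.array([[0.0, 1.0, 0.0],
               [1.0, 0.0, 2.0],
               [0.0, 2.0, 0.0]])

lam = np.linalg.eigvalsh(A3)       # symmetric matrix, so real eigenvalues
assert np.allclose(lam, [-np.sqrt(5), 0.0, np.sqrt(5)])

# Sanity checks: trace (sum of eigenvalues) and determinant (product) are zero.
assert abs(np.trace(A3)) < 1e-12
assert abs(np.linalg.det(A3)) < 1e-12
```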

OK. Ha.

Question two c, whoever wrote this, probably me, said: this is not difficult. I don't know why I put that in. It asks for the projection matrix onto the column space of A4. How could I have thought that wasn't difficult? It looks extremely difficult. What's the projection matrix onto the column space of A4? I don't know whether "this is not difficult" is helpful or insulting.

Uh, what do you think?

What's the column space of A4 here?

Well, our first question is: is the matrix singular or invertible?

If the answer is invertible, then what's the column space?

If this matrix A4 is invertible, and that's my guess: if this problem's easy, it has to be because this matrix is probably invertible.

Then its column space is R^4, good, the column space is the whole space, and the answer to this easy question is the projection matrix is the identity, it's the four-by-four identity matrix.

If this matrix is invertible.

Shall we check invertibility?

How would you find its determinant?

Can we just like take the determinant of that matrix?

There are twenty-four terms; do we want to write all twenty-four terms down?

Not in the remaining ten seconds.

Better to use cofactors.

So I go along row one; the only nonzero is this guy, so I should take that one times its cofactor.

Now so I'm down to this determinant.

OK. So now I look at this first column: I see one times this, there's the cofactor of the one. So I'm using up row one and column one of this three-by-three matrix, and I'm down to this cofactor. And by the way, those were both plus signs, right?

No, they weren't.

That was a minus sign.

That was a minus, and then that was a plus, and then this. So what's the determinant?

Nine.

Nine. Determinant is nine.

Determinant of A4 is nine.
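Checking that determinant, and the singular/invertible guess for the first few matrices in the family (my sketch, not the exam's):

```python
import numpy as np

def A(n):
    """Tridiagonal family: zero diagonal, off-diagonal entries 1, 2, ..., n-1."""
    M = np.zeros((n, n))
    for i in range(n - 1):
        M[i, i + 1] = M[i + 1, i] = i + 1.0
    return M

assert round(np.linalg.det(A(4))) == 9     # det A4 = 9
assert round(np.linalg.det(A(3))) == 0     # A3 is singular
assert round(np.linalg.det(A(2))) == -1    # A2 is invertible
```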

OK.

Whereas A3 was singular. My guess is, and I'll put that on the final this year, that probably the odd-numbered ones are singular and the even-numbered ones are invertible.

And I don't know what the determinants are but I'm betting that they have some nice formula.

OK. So, recitations this week will also be quiz review and then the quiz is Wednesday at one o'clock.

Thanks.
