These video lectures of Professor Gilbert Strang teaching 18.06 were recorded in Fall 1999 and do not correspond precisely to the current edition of the textbook. However, this book is still the best reference for more information on the topics covered in each lecture.
Instructor/speaker: Prof. Gilbert Strang
OK. Good. The final class in linear algebra at MIT this Fall is to review the whole course.
And, you know the best way I know how to review is to take old exams and just think through the problems.
So it will be a three-hour exam next Thursday.
Nobody will be able to take the exam before Thursday; anybody who needs to take it in some different way after Thursday should see me next Monday.
I'll be in my office Monday.
May I just read out some problems and, let me bring the board down, and let's start.
Here's a question.
This is about a 3-by-n matrix.
And we're given -- so we're given -- given -- A x equals 1 0 0 has no solution.
And we're also given A x equals 0 1 0 has exactly one solution.
OK. So you can probably anticipate my first question, what can you tell me about m?
It's an m-by-n matrix of rank r, as always, what can you tell me about those three numbers?
So what can you tell me about m, the number of rows, n, the number of columns, and r, the rank?
See, do you want to tell me first what m is?
How many rows in this matrix?
Must be three, right?
We can't tell what n is, but we can certainly tell that m is three.
And, what do these things tell us?
Let's take them one at a time.
When I discover that some equation has no solution, that there's some right-hand side with no answer, what does that tell me about the rank of the matrix?
It's smaller than m.
Is that right?
If there is no solution, that tells me that some rows of the matrix are combinations of other rows.
Because if I had a pivot in every row, then I would certainly be able to solve the system.
I would have particular solutions and all the good stuff.
So any time that there's a system with no solutions, that tells me that r must be below m.
What about the fact that if, when there is a solution, there's only one?
What does that tell me?
Well, normally there would be one solution, and then we could add in anything in the null space.
So this is telling me the null space only has the 0 vector in it.
There's just one solution, period, so what does that tell me?
The null space has only the zero vector in it?
What does that tell me about the relation of r to n?
So this one solution only, that means the null space of the matrix must be just the zero vector, and what does that tell me about r and n?
The columns are independent.
So I've got, now, r equals n, and r less than m, and now I also know m is three.
So those are really the facts I know.
n=r and those numbers are smaller than three.
Sorry, yes, yes. r is smaller than m, and n, of course, is also.
So I guess this summarizes what we can tell.
In fact, why not give me a matrix -- because I would often ask for an example of such a matrix -- can you give me a matrix A that's an example?
That shows this possibility?
Exactly, that there's no solution with that right-hand side, but there's exactly one solution with this right-hand side.
Anybody want to suggest a matrix that does that?
What do I -- what vector do I want in the column space?
I want zero, one, zero, to be in the column space, because I'm able to solve for that.
So let's put zero, one, zero in the column space.
Actually, I could stop right there.
That would be a matrix with m equal three, three rows, and n and r are both one, rank one, one column, and, of course, there's no solution to that one.
So that's perfectly good as it is.
Or if you, kind of, have a prejudice against matrices that only have one column, I'll accept a second column.
So what could I include as a second column that would just be a different answer but equally good?
I could put this vector in the column space, too, if I wanted.
That would now be a case with r=n=2, but, of course, m is still three, and this vector is not in the column space.
So you're -- this is just like prompting us to remember all those things, column space, null space, all that stuff.
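As a sanity check, here is one such matrix worked out in NumPy. This particular A is just one possible example, not necessarily the one on the board: its columns are (0, 1, 0) and (0, 0, 1), so m = 3 and r = n = 2.

```python
import numpy as np

# One possible example: columns (0,1,0) and (0,0,1), so m = 3, n = r = 2.
A = np.array([[0., 0.],
              [1., 0.],
              [0., 1.]])

# Independent columns: the rank equals the number of columns.
assert np.linalg.matrix_rank(A) == 2

# Ax = (0,1,0) has exactly one solution, x = (1, 0): zero residual.
x, residual, rank, _ = np.linalg.lstsq(A, np.array([0., 1., 0.]), rcond=None)
print(x, residual)

# Ax = (1,0,0) has no solution: (1,0,0) is not in the column space,
# so the least-squares residual is nonzero.
_, residual2, _, _ = np.linalg.lstsq(A, np.array([1., 0., 0.]), rcond=None)
print(residual2)   # nonzero -> no exact solution
```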
Now, I probably asked a second question about this type of thing.
OK. Oh, I even asked, write down an example of a matrix that fits the description.
Ah.
Hm. I guess I haven't learned anything in twenty-six years.
Cross out all statements that are false about any matrix with these properties -- so again, these are the facts about my matrix, and this is one example.
But, of course, by having an example, it will be easy to check some of these facts, or non-facts. Let me, let me write down some, facts.
Some possible facts.
So this is really true or false.
The determinant -- this is part one, the determinant of A transpose A is the same as the determinant of A A transpose.
Is that true or not?
Second one, A transpose A, is invertible.
Third possible fact, A A transpose is positive definite.
So you see how, on an exam question, I try to connect the different parts of the course.
So, well, I mean, the simplest way would be to try it with that matrix as a good example, but maybe we can answer, even directly.
Let me take number two first.
Because I'm -- you know, I'm very, very fond of that matrix, A transpose A.
And when is it invertible?
When is the matrix A transpose A, invertible?
The great thing is that I can tell from the rank of A that I don't have to multiply out A transpose A.
A transpose A, is invertible -- well, if A has a null space other than the zero vector, then it -- it's -- no way it's going to be invertible.
But the beauty is, if the null space of A is just the zero vector, so the fact -- the key fact is, this is invertible if r=n, by which I mean, independent columns of A.
In the matrix A.
If r=n -- if the matrix A has independent columns, then this combination, A transpose A, is square and still that same null space, only the zero vector, independent columns all good, and so, what's the true/false? Is it -- is this middle one T or F for this, in this setup?
Well, we discovered that -- we discovered that -- that r was n, from that second fact.
So this is a true.
That's a true.
And, of course, A transpose A, in this example, would probably be -- what would A transpose A, be, for that matrix?
Can you multiply A transpose A, and see what it looks like for that matrix?
What shape would it be?
It will be two by two.
And what matrix will it be?
So, it checks out.
OK, what about A A transpose?
Well, depending on the shape of A, it could be good or not so good.
It's always symmetric, it's always square, but what's the size, now?
This is three by n, and this is n by three, so the result is three by three.
Is it positive definite?
I don't think so.
If I multiply that by A transpose, A A transpose, what would the rank be?
It would be the same as the rank of A, that's -- it would be just rank two.
And if it's three-by-three, and it's only rank two, it's certainly not positive definite.
So what could I say about A A transpose, if I wanted to, like, say something true about it?
It's true that it is positive semi-definite. If I made this semi-definite, it would always be true, always.
But if I'm looking for positive definite, then I'm looking at the null space of whatever's here, and, in this case, it's got a null space.
So A A transpose -- eh, shall we just figure it out here?
A A transpose, for that matrix, will be three-by-three. If I multiplied A by A transpose, what would the first row be?
All zeroes, right?
First row of A A transpose, could only be all zeroes, so it's probably a one there and a one there, or something like that.
But, I don't even know if that's right.
But it's all zeroes there, so it's certainly not positive definite.
Let me not put anything up that I don't check.
What about this determinant?
Oh, well, I guess -- that's a sort of tricky question.
Is it true or false in this case?
It's false, apparently, because A transpose A is invertible -- we just got a true for this one -- and we got a non-invertible one for this one.
So actually, this one is false, number one.
That surprises us, actually, because it's, I mean, why was it tricky?
Because what is true about determinants?
This would be true if those matrices were square.
If I have two square matrices, A and any other matrix B, could be A transpose, could be somebody else's matrix.
Then it would be true that the determinant of B A would equal the determinant of A B.
And if A were square, it would actually be true that these are equal -- this would equal the determinant of A times the determinant of A transpose.
We could even split up those two separate determinants.
And, of course, those would be equal.
But only when A is square.
So that's just, that's a question that rests on the, the falseness rests on the fact that the matrix isn't square in the first place.
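All three true/false statements can be checked numerically with the same kind of example -- a 3-by-2 matrix with independent columns. This particular A is mine, not necessarily the one from the board:

```python
import numpy as np

# A 3x2 matrix with independent columns: m = 3, r = n = 2.
A = np.array([[0., 0.],
              [1., 0.],
              [0., 1.]])

AtA = A.T @ A   # 2x2, invertible since the columns of A are independent
AAt = A @ A.T   # 3x3 but only rank 2, so singular

print(np.linalg.det(AtA))   # nonzero
print(np.linalg.det(AAt))   # zero: det(A^T A) != det(A A^T) here

# A A^T is only positive SEMI-definite: eigenvalues >= 0, with 0 among them.
eigs = np.linalg.eigvalsh(AAt)
print(eigs)
```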
Oh, now, even asks more.
Prove that A transpose y equals c -- ha, this question goes on and on.
Now I ask you about A transpose y=c.
So I'm asking you about the equation -- about the matrix A transpose.
And I want you to prove that it has at least one solution -- one solution for every c, every right-hand side c, and, in fact -- in fact, infinitely many solutions for every c.
Well, none -- none of this is difficult, but, it's been a little while.
So we just have to think again.
When I have a system of equations -- this is -- this matrix A transpose is now, instead of being three by n, it's n by three, it's n by m.
To show that a system has at least one solution, when does this, when does this system -- when is the system always solvable?
When it has full row rank, when the rows are independent.
Here, we have n rows, and that's the rank.
So at least one solution, because the number of rows, which is n, for the transpose, is equal to r, the rank.
This A transpose had independent rows because A had independent columns, right?
The original A had independent columns, when we transpose it, it has independent rows, so there's at least one solution.
But now, how do I even know that there are infinitely many solutions?
Oh, what do I -- I want to know something about the null space.
What's the dimension of the null space of A transpose?
So the answer has got to be the dimension of the null space of A transpose, what's the general fact?
If A is an m by n matrix of rank r, what's the dimension of the null space of A transpose?
Do you remember that little fourth subspace that's tagging along down in our big picture?
Its dimension was m-r. And that's bigger than zero, because m is bigger than r.
So there's a lot in that null space.
So there's always at least one solution, because this is speaking about A transpose.
For A transpose, the roles of m and n are reversed, of course -- keep in mind that this board was about A transpose -- so it's the null space of A transpose that matters, and there are m-r free variables.
OK, that's, like, just some, review.
Can I take another problem that's also sort of -- suppose the matrix A has three columns, v1, v2, v3. Those are the columns of the matrix.
Solve Ax=v1-v2+v3. Tell me what x is.
Well, there, you're seeing the most -- the one absolutely essential fact about matrix multiplication, how does it work, when we do it a column at a time, the very, very first day, way back in September, we did multiplication a column at a time.
So what's x?
Just tell me?
One minus one, one.
OK. Everybody's got that.
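That column picture of matrix multiplication is easy to check in NumPy -- the matrix below is made up just for the test:

```python
import numpy as np

# Any matrix with three columns will do; this one is invented for the check.
A = np.array([[1., 4., 7.],
              [2., 5., 8.],
              [3., 6., 10.]])
v1, v2, v3 = A[:, 0], A[:, 1], A[:, 2]

# Ax combines the columns with the entries of x as coefficients,
# so Ax = v1 - v2 + v3 exactly when x = (1, -1, 1).
x = np.array([1., -1., 1.])
assert np.allclose(A @ x, v1 - v2 + v3)
print(A @ x)
```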
OK? Then the next question is, suppose that combination is zero -- oh, yes, OK, so question (b) says -- part (b) says, suppose this thing is zero.
Suppose that's zero.
Then the solution is not unique.
Suppose I want true or false. -- and a reason.
Suppose this combination is zero.
Show that -- what does that tell me?
So it's a separate question, maybe I sort of saved time by writing it that way, but it's a totally separate question.
If I have a matrix, and I know that column one minus column two plus column three is zero, what does that tell me about whether the solution is unique or not?
Is there more than one solution?
What's uniqueness about?
Uniqueness is about, is there anything in the null space, right?
The solution is unique when there's nobody in the null space except the zero vector.
And, if that's zero, then this guy would be in the null space.
So if this were zero, then this x is in the null space of A.
So solutions are never unique, because I could always add that to any solution, and Ax wouldn't change.
So it's always that question.
Is there somebody in the null space?
Oh, now, here's a totally different question.
Suppose those three vectors, v1, v2, v3, are orthonormal.
So this isn't going to happen for orthonormal vectors.
OK, so part (c), forget part (b).
c. If v1, v2, v3, are orthonormal -- so that I would usually have called them q1, q2, q3. Now, what combination -- oh, here's a nice question, if I say so myself -- what combination of v1 and v2 is closest to v3? What point on the plane of v1 and v2 is the closest point to v3 if these vectors are orthonormal?
So let me -- I'll start the sentence -- then the combination something times v1 plus something times v2 is the closest combination to v3? And what's the answer?
What's the closest vector on that plane to v3? Zeroes. Right.
We just imagine the x, y, z axes, the v1, v2, th- v3 could be the standard basis, the x, y, z vectors, and, of course, the point on the xy plane that's closest to v3 on the z axis is zero.
So if we're orthonormal, then the projection of v3 onto that plane is perpendicular, it hits right at zero.
OK, so that's like a quick -- you know, an easy question, but still brings it out.
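Here is a quick numerical sketch of that projection, using a random orthonormal triple built with QR rather than the standard basis:

```python
import numpy as np

rng = np.random.default_rng(0)
# Three orthonormal vectors: the columns of Q from a QR factorization.
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
v1, v2, v3 = Q[:, 0], Q[:, 1], Q[:, 2]

# Projection of v3 onto the plane of v1 and v2: the coefficients are the
# inner products v1.v3 and v2.v3, which vanish by orthonormality.
p = (v1 @ v3) * v1 + (v2 @ v3) * v2
print(p)   # the zero vector, up to roundoff
```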
Let me see what, shall I write down a Markov matrix, and I'll ask you for its eigenvalues.
Here's a Markov matrix -- this -- and, tell me its eigenvalues.
So here -- I'll call the matrix A, and its columns are point two, point four, point four; then point four, point two, point four; then point three, point three, point four.
Let's see -- it helps out to notice that column one plus column two -- what's interesting about column one plus column two?
It's twice as much as column three.
So column one plus column two equals two times column three.
I put that in there, column one plus column two equals twice column three.
OK. Tell me the eigenvalues of the matrix.
OK, tell me one eigenvalue? Because the matrix is singular.
Tell me another eigenvalue?
One, because it's a Markov matrix -- the columns each add to one, so the all-ones vector is an eigenvector of A transpose.
And tell me the third eigenvalue?
Let's see, to make the trace come out right, which is point eight, we need minus point two.
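NumPy confirms those three eigenvalues -- the entries here are my reading of the matrix on the board:

```python
import numpy as np

# The Markov matrix as read off the board (columns sum to 1).
A = np.array([[0.2, 0.4, 0.3],
              [0.4, 0.2, 0.3],
              [0.4, 0.4, 0.4]])

# Markov: each column sums to 1.  Singular: col1 + col2 = 2 * col3.
assert np.allclose(A.sum(axis=0), 1.0)
assert np.allclose(A[:, 0] + A[:, 1], 2 * A[:, 2])

eigs = np.sort(np.linalg.eigvals(A).real)
print(eigs)   # approximately [-0.2, 0.0, 1.0]
```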
And now, suppose I start the Markov process.
Suppose I start with u(0) -- so I'm going to look at the powers of A applied to u(0). This is u(k).
And there's my matrix, and I'm going to let u(0) be -- this is going to be zero, ten, zero.
And my question is, what does that approach?
If u(0) is equal to this -- there is u(0). Shall I write it in?
Maybe I'll just write in u(0). So u(k) is A to the k times u(0) -- starting with ten people in state two, and with every step following the Markov rule, what does the solution look like after k steps?
Let me just ask you that.
And then, what happens as k goes to infinity?
This is a steady-state question, right?
I'm looking for the steady state.
Actually, the question doesn't ask for the k step answer, it just jumps right away to infinity -- but how would I express the solution after k steps?
It would be some multiple of the first eigenvalue to the k-th power -- times the first eigenvector, plus some other multiple of the second eigenvalue, times its eigenvector, and some multiple of the third eigenvalue, times its eigenvector.
Good. And these eigenvalues are zero, one, and minus point two.
So what happens as k goes to infinity?
The only thing that survives the steady state -- so at u infinity, this is gone, this is gone, all that's left is c2x2.
So I'd better find x2. I've got to find that eigenvector to complete the answer.
What's the eigenvector that corresponds to lambda equal one?
That's the key eigenvector in any Markov process, is that eigenvector.
Lambda equal one is an eigenvalue, I need its eigenvector x2, and then I need to know how much of it is in the starting vector u0. OK.
So, how do I find that eigenvector?
I guess I subtract one from the diagonal, right?
So I have minus point eight, minus point eight, and minus point six on the diagonal, and the rest, of course, is just the same -- point four, point four, point four, point four, point three, point three -- and hopefully that's a singular matrix, so I'm looking to solve (A minus I)x equals zero.
Let's see -- can anybody spot the solution here?
I don't know, I didn't make it easy for myself.
What do you think there?
Maybe those first two entries might be -- oh, no, what do you think?
Anybody see it?
We could use elimination if we were desperate.
Are we that desperate?
Anybody just call out if you see the vector that's in that null space.
Eh, there better be a vector in that null space, or I'm quitting.
Uh, ha- OK, well, I guess we could use elimination.
I thought maybe somebody might see it from further away.
Is there a chance that these guys are -- could it be that these two are equal and this is whatever it takes, like, something like three, three, two?
Would that possibly work?
I mean, that's great for this -- no, it's not that great.
Three, three, four -- this is, deeper mathematics you're watching now.
Three, three, four, is that -- it works!
Don't mess with it!
OK, it works, all right.
And, yes, OK, and, so that's x2, three, three, four, and, how much of that vector is in the starting vector?
Well, we could go through a complicated process.
But what's the beauty of Markov things?
That the total population -- the sum of these entries -- doesn't change.
The people are moving around, but they don't get born or die.
So there's ten of them at the start, so there's ten of them there, so c2 is actually one, yes.
So that would be the correct solution.
OK. That would be the u infinity.
So I used there, in that process, sort of, the main facts about Markov matrices to, to get a jump on the answer.
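The steady state can be checked by just taking a high power of A -- same caveat as before, these entries are my reading of the board:

```python
import numpy as np

A = np.array([[0.2, 0.4, 0.3],
              [0.4, 0.2, 0.3],
              [0.4, 0.4, 0.4]])
u0 = np.array([0., 10., 0.])

# Powers of A kill the lambda = 0 and lambda = -0.2 components, leaving
# only the lambda = 1 eigenvector, scaled so the total population stays 10.
u = np.linalg.matrix_power(A, 50) @ u0
print(u)   # approaches (3, 3, 4)

# And (3, 3, 4) really is the lambda = 1 eigenvector:
x2 = np.array([3., 3., 4.])
assert np.allclose(A @ x2, x2)
```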
OK. let's see.
OK, here's some, kind of quick, short questions.
Uh, maybe I'll move over to this board, and leave that for the moment.
I'm looking for two-by-two matrices.
And I'll read out the property I want, and you give me an example, or tell me there isn't such a matrix.
Here we go.
First -- so two-by-twos. I want the projection onto the line through A = (4, -3).
So it's a one-dimensional projection matrix I'm looking for.
And what's the formula for it?
What's the formula for the projection matrix P onto a line through A?
And then we'd just plug in this particular A.
Do you remember that formula?
There's an A and an A transpose, and normally we would have an A transpose A inverse in the middle, but here we've just got numbers, so we just divide by it.
And then plug in A and we've got it.
You can put in the numbers.
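Putting the numbers in (I write the vector as lowercase a in the code):

```python
import numpy as np

a = np.array([[4.], [-3.]])          # column vector
P = (a @ a.T) / (a.T @ a)            # P = a a^T / (a^T a)
print(P)   # [[16, -12], [-12, 9]] divided by 25

# A projection matrix is symmetric and satisfies P^2 = P.
assert np.allclose(P, P.T)
assert np.allclose(P @ P, P)
```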
OK. Number two.
So this is a new problem.
The matrix with eigenvalues zero and three and these eigenvectors -- well, let me write these down: eigenvalue zero with eigenvector (1, 2), and eigenvalue three with eigenvector (2, 1).
I'm giving you the eigenvalues and eigenvectors instead of asking for them.
Now I'm asking for the matrix.
What's the matrix, then?
Here was a formula, then we just put in some numbers, what's the formula here, into which we'll just put the given numbers?
It's the S lambda S inverse, right?
So it's S, which is this eigenvector matrix, it's the lambda, which is the eigenvalue matrix, it's the S inverse, whatever that turns out to be, let me just leave it as inverse.
That has to be it, right?
Because if we went in the other direction, that matrix S would diagonalize A to produce lambda.
So it's S lambda S inverse.
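Putting in the given numbers, S lambda S inverse produces a concrete matrix, which we can verify against the given eigenpairs:

```python
import numpy as np

S = np.array([[1., 2.],     # eigenvectors as columns: (1,2) and (2,1)
              [2., 1.]])
Lam = np.diag([0., 3.])     # eigenvalues 0 and 3

A = S @ Lam @ np.linalg.inv(S)
print(A)

# Check: A sends (1,2) to zero and (2,1) to 3*(2,1).
assert np.allclose(A @ np.array([1., 2.]), 0)
assert np.allclose(A @ np.array([2., 1.]), 3 * np.array([2., 1.]))
```

This comes out to the matrix with rows (4, -2) and (2, -1).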
Good. OK, ready for number three.
A real matrix that cannot be factored into A -- I'm looking for a matrix A that never could equal B transpose B, for any B.
A two-by-two matrix that could not be factored in the form B transpose B.
So all you have to do is think, well, what does B transpose B, look like, and then pick something different.
What do you suggest?
What shall we take for a matrix that could not have this form, B transpose B.
Well, what do we know about B transpose B?
It's always symmetric.
So just give me any non-symmetric matrix, it couldn't possibly have that form.
And let me ask the fourth part of this question -- a matrix that has orthogonal eigenvectors, but it's not symmetric.
What matrices have orthogonal eigenvectors, but they're not symmetric matrices?
What other families of matrices have orthogonal eigenvectors?
We know symmetric matrices do, but others, also.
So I'm looking for orthogonal eigenvectors, and, what do you suggest?
The matrix could be skew-symmetric. It could be an orthogonal matrix.
It could be symmetric, but that was too easy, so I ruled that out.
It could be skew-symmetric like one minus one, like that.
Or it could be an orthogonal matrix like cosine sine, minus sine, cosine.
All those matrices would have complex orthogonal eigenvectors.
But they would be orthogonal, and so those examples are fine.
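A quick check that the skew-symmetric example really has orthogonal (complex) eigenvectors -- orthogonality here means the complex inner product x1^H x2 is zero:

```python
import numpy as np

# The skew-symmetric example from the board.
K = np.array([[0., 1.],
              [-1., 0.]])
w, V = np.linalg.eig(K)     # eigenvalues are +i and -i
x1, x2 = V[:, 0], V[:, 1]

# np.vdot conjugates its first argument: this is x1^H x2.
print(np.vdot(x1, x2))      # essentially zero

assert not np.allclose(K, K.T)   # and K is certainly not symmetric
```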
OK. We can continue a little longer if you would like to, with these -- from this exam.
From these exams.
OK, here's a least squares problem in which, to make life quick, I've given the answer -- it's like Jeopardy!, right?
I just give the answer, and you give the question.
OK. Whoops, sorry.
Let's see, can I stay over here for the next question?
OK. least squares.
So I'm giving you the problem: the matrix with columns (1, 1, 1) and (0, 1, 2), times (c, d), equals (3, 4, 1), and that's b, of course -- this is Ax=b.
And the least squares solution -- Maybe I put c hat d hat to emphasize it's not the true solution.
So the least square solution -- the hats really go here -- is eleven-thirds and minus one.
Of course, you could have figured that out in no time.
So this year, I'll ask you to do it, probably.
But, suppose we're given the answer, then let's just remember what happened.
OK, good question.
What's the projection P of this vector onto the column space of that matrix?
So I'll write that question down, one.
What is P? The projection.
The projection of b onto the column space of A is what?
Hopefully, that's what the least squares problem solved.
What is it?
This was the best solution, it's eleven-thirds times column one, plus -- or rather, minus one times column two.
That's what least squares did.
It found the combination of the columns that was as close as possible to b.
That's what least squares was doing.
It found the projection.
Secondly, draw the straight line problem that corresponds to this system.
So the fitting-a-straight-line problem, I guess, we kind of recognize.
So we recognize, these are the heights, and these are the points: at t equal to zero, the height is three; at t equal to one, the height is four; and at t equal to two, the height is one.
So I'm trying to fit the best straight line through those points.
I could fit a triangle very well, but, I don't even know which way the best straight line goes.
Oh, I do know how it goes, because there's the answer, yes. It has a height eleven-thirds, and it has slope minus one, so it's something like that.
Now, finally -- and this completes the course -- find a different vector b, not all zeroes, for which the least square solution would be zero.
So I want you to find a different b so that the least squares solution changes to all zeroes. So tell me what I'm really looking for here.
I'm looking for a b where the best combination of these two columns is the zero combination.
So what kind of a vector b am I looking for?
I'm looking for a vector b that's orthogonal to those columns.
It's orthogonal to those columns, it's orthogonal to the column space, and then the best possible answer is zero.
So a vector b that's orthogonal to those columns -- let's see, maybe one of those, minus two of those, and one of those?
That would be orthogonal to those columns, and the best vector would be zero, zero.
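The whole least squares problem fits in a few lines of NumPy:

```python
import numpy as np

A = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])
b = np.array([3., 4., 1.])

# Normal equations: A^T A x_hat = A^T b.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)
print(x_hat)    # (11/3, -1)

# The projection of b onto the column space is A x_hat.
p = A @ x_hat
print(p)

# A right-hand side orthogonal to both columns gives x_hat = 0.
b2 = np.array([1., -2., 1.])
assert np.allclose(A.T @ b2, 0)
assert np.allclose(np.linalg.solve(A.T @ A, A.T @ b2), 0)
```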
So that's as many questions as I can do in an hour, but you get three hours, and, let me just say, as I've said by e-mail, thanks very much for your patience as this series of lectures was videotaped, and, thanks for filling out these forms, maybe just leave them on the table up there as you go out -- and above all, thanks for taking the course.