# Part 2: The Big Picture of Linear Algebra


## Description

Multiplication by A transforms the row space to the column space. Professor Strang then reveals the Big Picture of Linear Algebra where all four fundamental subspaces interact.

Slides Used in this Video: Slides 10 through 14

Instructor: Gilbert Strang

GILBERT STRANG: OK, in this second part, I'm going to start with linear equations, A times x equals b. And you see actually, the first really good starting point is A times x equals 0. So are there any solutions, any combinations of the columns that give 0, any solutions to A times x equals 0?

Now, I'm multiplying a matrix A by a vector x in a way you'll know. I take rows of A times x; that's called a dot product. So I have a row of numbers, and x is a column of numbers. I multiply those numbers and add to get the dot product.

And I'm wondering, can I get 0 for each? Is every row-- so having a 0 there is telling me, in geometry, that that row is perpendicular, orthogonal to that column. If a row dot product with a column gives me a 0, then in n dimensional space, that row is perpendicular, 90 degree angle to that column x. So I'm looking to see, are there any vectors x that are perpendicular to all the rows? That's what Ax equals 0 is asking for.
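That perpendicularity test is easy to sketch in a few lines of Python. The matrix A and the vector x below are made-up examples for illustration, not from the lecture:

```python
# Checking A x = 0 via row dot products (example matrix, assumed for illustration).
A = [[1, 2],
     [3, 6]]   # rank-1 matrix: the second row is 3 times the first
x = [2, -1]    # candidate null-space vector

def dot(u, v):
    """Dot product: multiply matching entries and add."""
    return sum(ui * vi for ui, vi in zip(u, v))

# A x = 0 means every row of A is perpendicular (orthogonal) to x.
Ax = [dot(row, x) for row in A]
print(Ax)  # [0, 0] -- so x is in the null space of A
```

Each zero in the output says one row of A meets x at a 90 degree angle.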

Oh, and that's what I've just said right there. I've used the word orthogonal. That's more of a high level word than perpendicular. So I'll stay with that. It sounds a little cooler. OK.

And now, we can also look at that transpose. Oh, do you know what the transpose of a matrix is? I take those rows and flip the matrix, so that those rows become the columns. And the columns of A become the rows of A transpose.

So I'll look at A transpose times-- we'll call it y for the new problem. A transpose y is all 0s. And then the null space will be any vector, any solutions, any y that's perpendicular to the rows of A transpose. So I would need a couple of hours of teaching to develop this properly, because we're talking here about the fundamental theorem of linear algebra, which tells me that the vectors in the null space, like that, are perpendicular to the vectors in the row space.
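The transpose and the second null space can be sketched the same way; again the numbers here are an assumed example, not from the slides:

```python
# Transpose flips rows into columns; y solves A^T y = 0 (example values, assumed).
A = [[1, 2],
     [3, 6]]

# Transpose: row i of A becomes column i of A^T.
AT = [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]
print(AT)  # [[1, 3], [2, 6]]

y = [3, -1]  # candidate vector for the null space of A transpose
ATy = [sum(row[k] * y[k] for k in range(len(y))) for row in AT]
print(ATy)  # [0, 0] -- y is perpendicular to every column of A
```

So x is perpendicular to the rows of A, and y is perpendicular to the columns of A, which is exactly the pairing the lecture is building toward.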

Oh, but maybe I have told you. We've said that, from this equation, that tells you the geometry that that row vectors are perpendicular to the x vector, the thing in the null space. So x is there. The rows are there. And they're perpendicular.

Now, if I transpose the matrix, remember that means exchanging rows and columns, so I have a new matrix, a new size even. It will be the same story-- it's a matrix. The same will be true for it. The rows become the columns. And the solutions to the new equation with A transpose go in that space.

So then that little perpendicular sign is reminding us of the geometry. So rows perpendicular to the x's. Columns perpendicular to the y's. That's the best. I finally saw the right way to say that.

So I have two pairs. And I know how big each of those four things are. Those are the four fundamental subspaces, two null spaces, two solution spaces with 0. Null means 0. So these x's are in the null space because of that 0. Those are the n's. And then this is the column space and the row space.

So we've got four spaces altogether, two pairs. And now, you get to see the big picture of linear algebra, where the four fundamental subspaces do their thing. There you go. You can die happy now.

The row space is there; those are rows of the matrix, independent rows of the matrix. That's why I don't put in all the rows. There are m rows. But I only put in independent ones. So that might be a smaller number r, r the rank.

And here are the solutions, the guys perpendicular to them. This is the rows of the matrix. These are the vectors perpendicular to it. These are the columns of the matrix. These are the vectors perpendicular to the columns. You see it's just a natural splitting of the whole spaces of vectors into two pieces and two pieces.
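That "natural splitting" has a standard dimension count behind it. If A is m by n with rank r, and we write C for a column space and N for a null space, the fundamental theorem says:

```latex
\begin{aligned}
\dim \mathbf{C}(A^{T}) &= r, & \dim \mathbf{N}(A) &= n - r & &\text{(the two subspaces of } \mathbb{R}^{n}\text{)}\\
\dim \mathbf{C}(A) &= r, & \dim \mathbf{N}(A^{T}) &= m - r & &\text{(the two subspaces of } \mathbb{R}^{m}\text{)}
\end{aligned}
```

So the two pieces on each side of the big picture add up to the full space: r plus n minus r on the input side, r plus m minus r on the output side.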

And I think of the matrix A, when it multiplies stuff there, it gives stuff here. When A multiplies a vector x, you get a combination of the columns. That was the very, very first slide. A times x is a combination of the columns. And then we look at some x's, if there are any, where A times x gives 0. And there's 0 right there. OK. OK, so that's the big picture.
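Both pictures of A times x, the row picture from earlier and the combination-of-columns picture from the first slide, give the same vector. A small sketch, using the 2 by 2 matrix that appears later in the lecture:

```python
# Two ways to compute A x, using the lecture's 2x2 matrix.
A = [[2, 3],
     [4, 7]]
x = [2, 1]

# Row picture: dot product of each row of A with x.
by_rows = [sum(A[i][k] * x[k] for k in range(2)) for i in range(2)]

# Column picture: A x is x[0] times column 0 plus x[1] times column 1.
by_cols = [x[0] * A[i][0] + x[1] * A[i][1] for i in range(2)]

print(by_rows, by_cols)  # [7, 15] [7, 15] -- the same answer both ways
```

Multiplication by A carries x from the input side of the big picture to a combination of columns on the output side.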

And I'll just point to another little point that's hiding in this picture. You see that little symbol there, that little thing, and it's also here? What that means is that those guys are perpendicular to these. And these are perpendicular to these.

So we have four subspaces, two pairs, two perpendicular pairs. And that's when you get the idea of knowing what they mean, knowing how to find them, at least for a small matrix, you've got the heart of linear algebra part one. This is the first half of linear algebra.

OK, I'll just see what else there is. Oh, here, oh, well, this is another comment. I've hardly told you how to multiply two matrices. The usual way is rows times columns.

But linear algebra being always interesting, there's another way that I happen to like, columns times rows. Now, there is a column times a row. Now, column times a row, we've seen that once for that rank one matrix.

Do you remember I said that those rank one matrices, one column times one row, are the building blocks? Well, here is the building. There are n of those blocks. A column times a row, a column times a row. And here is a reminder of the-- oh, we've only-- oh, we're coming up to A equal LU, the first one. Get on with it, Professor Strang. OK.
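The columns-times-rows idea can be checked directly: the usual rows-times-columns product and the sum of rank-one pieces come out the same. The matrices here are assumed examples:

```python
# AB two ways: rows times columns, and a sum of (column k of A)(row k of B).
# Example matrices, assumed for illustration.
A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]
n = 2

# Usual rows-times-columns multiplication.
usual = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]

# Columns times rows: add up n rank-one building blocks.
col_row = [[0, 0], [0, 0]]
for k in range(n):                      # one rank-one block per k
    for i in range(n):
        for j in range(n):
            col_row[i][j] += A[i][k] * B[k][j]

print(usual == col_row)  # True -- same product, built from rank-one blocks
```

The same n terms get added either way; columns-times-rows just groups them into n rank-one matrices.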

OK, now we're solving equations. Now we're going to get L times U. So right. So there are two equations and two unknowns, solved in high school. Do you remember how? That's the whole point.

If I take twice that equation, so it's 4x plus 6y equals 14, and subtract from this one, then I get an easy equation for only y by itself. So that's what I did. That's called elimination. I eliminated this 4x. It's gone.

It's 2 times that. That's why I chose to multiply it by 2. Then 2 times this gives me 4 x's. When I subtract it, it's gone and I'm left with 1y equals 1. So I know the answer y equals 1. And then I go backwards to x equals 2, because 2x plus 3 equals 7. 2x is 4. x is 2.
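Those elimination steps, on the lecture's system 2x + 3y = 7 and 4x + 7y = 15, can be written out line by line:

```python
# Elimination on the lecture's system: 2x + 3y = 7, 4x + 7y = 15.
a11, a12, b1 = 2, 3, 7
a21, a22, b2 = 4, 7, 15

m = a21 / a11            # multiplier: 4/2 = 2
a22 -= m * a12           # 7 - 2*3 = 1  -> row 2 becomes 1y = 1
b2 -= m * b1             # 15 - 2*7 = 1

y = b2 / a22             # y = 1
x = (b1 - a12 * y) / a11 # back substitution: x = (7 - 3)/2 = 2
print(x, y)  # 2.0 1.0
```

The multiplier 2 is exactly the number Strang chose; it is what will land in the L matrix in a moment.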

And the real point about linear algebra done right is that all those steps can be expressed as a break up, another way to break up the matrix A into a lower triangular matrix. You see that that matrix is triangular. It's lower triangular. And this one is upper triangular. So those are called L and U. Yeah, yeah.

So what we did here is expressed by that matrix multiplication. You really want to express everything, in the end, as multiplying a couple of matrices. Then you know exactly where you are. So that's the idea of elimination.

And now, we only were doing a 2 by 2 matrix. You remember our little matrix was pathetic, 2, 3, 4, 7. That was our matrix A. We can't stop there. So linear algebra goes on to matrices of any size.

And this is the way to find the lower triangular factor L and the upper triangular factor U. That would need more time. So all I want to say is, when you're doing elimination solving equations, then in the back of your mind or on the back page, you are producing an L matrix, lower, and a U matrix, upper. So yeah.
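Recording the multipliers as elimination runs is all it takes to produce L and U. A minimal sketch, assuming no row exchanges are needed, applied to the lecture's 2 by 2 matrix:

```python
# A = L U by recording elimination multipliers (sketch: no row exchanges).
A = [[2, 3],
     [4, 7]]  # the lecture's matrix

n = len(A)
U = [row[:] for row in A]                                    # copy of A; becomes upper triangular
L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

for k in range(n):
    for i in range(k + 1, n):
        L[i][k] = U[i][k] / U[k][k]          # the multiplier goes into L
        for j in range(k, n):
            U[i][j] -= L[i][k] * U[k][j]     # subtract that multiple of row k

print(L)  # [[1.0, 0.0], [2.0, 1.0]]
print(U)  # [[2, 3], [0.0, 1.0]]
```

Multiplying L times U puts the 4 and the 7 back, which is why elimination and the factorization A = LU are the same computation seen two ways.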

Let me see. Yeah, here we see them. The L matrix is all 0s above. The U matrix is all 0s below. And that's what is really happening. So that's what the computer system totally focuses on. OK, that's the first slide of a new part. So I'll stop here and come back to orthogonal vectors. Good.