Lecture 7: Linear Algebra: Vector Spaces and Operators (cont.)



Description: In this lecture, the professor talked about eigenvalues and eigenvectors of Hermitian operators acting on complex vector spaces, inner products on a vector space, etc.

Instructor: William Detmold

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare ocw.mit.edu.

PROFESSOR: OK, so let's get started. I just wanted to make one announcement before we start the lecture. So Prof. Zwiebach is away again today, which is why I'm lecturing. He's obviously not going to have his office hours, but Prof. Harrow has kindly agreed to take them over. So today I'll have office hours four to five, and then Prof. Harrow will have office hours afterwards, five to six. So feel free to come and talk to us.

So today we're going to try and cover a few things. So we're going to spend a little bit of time talking about eigenvalues and eigenvectors, finishing this discussion from last time. Then we'll talk about inner products and inner product spaces. And then we'll introduce Dirac's notation, some of which we've already been using. And then, depending on time, we'll also talk a little bit more about linear operators. OK?

So let's start with where we were last time. So we were talking about T-invariant subspaces. So we had that U is a T-invariant subspace if the following is satisfied: T of U-- which is the set of all vectors that are generated by T acting on vectors that live in U-- is contained inside U itself. OK?

And we can define this in general for any U. However, one class of these invariant subspaces is very useful. So we take U to be one dimensional, OK? And so that really means that every element of this subspace U is just some scalar multiple of a single vector u, with scalars from whatever field I'm defining my vector space over. So this is a one dimensional thing.

Now if this one dimensional U is going to be a T-invariant subspace, then we get a very simple equation that you've seen before. So we're taking all vectors in U, acting on them with T, and if the result stays within U, then it has to be able to be written like this. So we have some operator acting on our vector space producing something in the same vector space, just rescaling it, OK? For some lambda, which we haven't specified.

And you've seen this equation before in terms of matrices and vectors, right? This is an eigenvalue equation. So these are eigenvalues and these are eigenvectors. But now they're just an abstract version of what you've discussed before. And we'll come back to this in a moment.
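The abstract eigenvalue equation T u = lambda u is exactly what numerical libraries solve once you pick a basis. A minimal numpy sketch (the example matrix is an assumption, not from the lecture):

```python
import numpy as np

# A simple Hermitian (here real symmetric) operator in a chosen basis.
T = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh is numpy's solver for Hermitian matrices; it returns the eigenvalues
# in ascending order together with orthonormal eigenvectors (as columns).
eigenvalues, eigenvectors = np.linalg.eigh(T)

# Check the eigenvalue equation T u = lambda u for each pair.
for lam, u in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(T @ u, lam * u)

print(eigenvalues)   # the spectrum of T: [1. 3.]
```

Each column of `eigenvectors` spans a one dimensional T-invariant subspace, which is the coordinate version of the statement above.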

One thing that we just defined at the end is the spectrum of an operator. The spectrum of T is equal to all eigenvalues of that operator. And so later on these will become-- this object will become important. Let's just concentrate on this and ask what does it mean.

So if we have lambda being an eigenvalue, what does that tell us? What does this equation tell us? Well, it tells us that T minus lambda I acting on u gives the null vector. So all I'm doing is taking this term over to the other side of the equation and inserting the identity operator I. So this is in itself an operator now, right?

And so this tells us also that this operator, because it maps something that's non-zero to the null vector, is not injective, OK? And you can even write that the null space of T minus lambda I is equal to the set of all eigenvectors with eigenvalue lambda. OK?

So every eigenvector with eigenvalue lambda, T acting on it is just going to give me lambda times the eigenvector again, and so this will vanish. So for all eigenvectors with that eigenvalue.

And we've previously seen that, if something is not injective, it's also not invertible, right? So this lets us write something quite nice down. So there's a theorem. Let me write it out. So if we let T be in the space of linear operators acting on this vector space V, and we have a set of distinct eigenvalues of T, lambda 1, lambda 2, up to lambda n, and the corresponding eigenvectors, which we will call u1, u2, up to un, with the correspondence given by their labels, then we know that this list of eigenvectors is actually a linearly independent set.

So we can prove this one very quickly. So let's do that. So let's assume it's false. So the proof is by contradiction, so assume it's false. And what does that mean? Well, it means that there is a non-trivial relation. I could write down some relation c1 u1 plus c2 u2 plus dot dot dot plus ck uk equals 0 without all the c's being 0.

And what we'll do is we'll say, OK, let there be a k, a value of k that's less than or equal to n, such that this holds with the ci not equal to 0. So we're postulating that there is some linear dependence among some of these things.

So what we can do is then act on this vector here with T minus lambda k times the identity. So this is T minus lambda k I acting on c1 u1 plus dot dot dot plus ck uk. OK? And what do we get here? So if we act on this first piece, this is an eigenvector, so T acting on it just gives us lambda 1, right? And so we're going to get a factor of lambda 1 minus lambda k for this piece, et cetera.

So this will give us c1 times lambda 1 minus lambda k times u1, plus dot dot dot, up to ck minus 1 times lambda k minus 1 minus lambda k times uk minus 1. And then when we act on the last one here, the eigenvalue corresponding to the eigenvector uk is lambda k, so that last term gets killed. So we get plus 0 times uk. And we know this is still 0.

And now notice these factors here are just numbers-- all of these things. So we've actually written down a relation that involves fewer than k vectors.

Actually, I should have said this: let k be the least value, less than or equal to n, such that we have a linear dependence. But what we've just shown is that, in fact, there's a non-trivial relation involving fewer vectors, right? So we've contradicted what we assumed to start with. And you can just repeat this procedure, OK? And so this is a contradiction. And so, in fact, there must be no non-trivial relation, even for k equals n, between these vectors, OK?
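The theorem can be checked numerically: for a matrix with distinct eigenvalues, the matrix whose columns are the eigenvectors has full rank, which is exactly linear independence. A small sketch (the triangular example matrix is an assumption):

```python
import numpy as np

# Upper triangular, so the eigenvalues are the diagonal entries 1, 3, 5 -- all distinct.
T = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

vals, vecs = np.linalg.eig(T)

# Distinct eigenvalues...
assert len(set(np.round(vals, 8))) == 3
# ...so the eigenvector columns are linearly independent: full rank.
assert np.linalg.matrix_rank(vecs) == 3
```

If two eigenvalues coincided, the rank check could fail, which is why the theorem insists on distinct eigenvalues.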

Another brief theorem that we won't prove, although we'll sort of see why it works in a moment, is, again, for T in the linear operators on V, with V being a finite dimensional complex vector space, OK? T has at least one eigenvalue.

Now remember, in the last lecture, we looked at a matrix, two by two matrix, that was rotations in the xy-plane and found there were, in fact, no eigenvalues. But that's because we were looking at a real vector space. So we were looking at rotations of the real plane.

So this is something that you can prove. We will see why it's true, but we won't prove it. And so one way of saying this is to go to a basis. And so everything we've said so far about eigenvalues and eigenvectors has not been referring to any particular basis. And in fact, eigenvalues are basis independent. But we can use a basis. And then we have matrix representations of operators that we've talked about.

And the operator statement that T minus lambda I acting on u equals 0 is equivalent to saying-- well, we've said it here. We said that it's equivalent to saying that this operator is not invertible. But that's also equivalent to saying that the matrix representation of it in any basis is not invertible.

And by here we just mean inverses as in the inverses that you've taken of many matrices in your lives. And so what that means then-- I'm sure you remember-- is that if a matrix is not invertible, it has a vanishing determinant. So the det of this-- and now you can think of this as a matrix-- this determinant has to be 0. And remember, we can write this thing out. And so it has minus lambdas along the diagonal, and then whatever entries T has wherever it wants.

This just gives us a polynomial in lambda, right? So this gives us some f of lambda, which is a polynomial. And if you remember, this is called the characteristic polynomial. Characteristic. Right?

And so we can write it, if we want, as f of lambda is equal to lambda minus lambda 1, times lambda minus lambda 2, and so on, up to lambda minus lambda n. I have to be able to write it like this. I can just break it up into these factors here, where the lambda i's, the zeros of this polynomial, are in general complex and can be repeated.

Now what can happen is that you have, in the worst case-- I don't know if it's the worst case, but in one case, you could have all of the zeros being at the same place, and you'd have an eigenvalue that is n-fold degenerate, right? So if we, say, have lambda 1 occurring twice in this sequence, then we say that lambda 1 is a degenerate eigenvalue. And in principle, you could have just a single eigenvalue that's n-fold degenerate. But you can always write this factorization. There has to be at least one lambda i there. And so you can see why the theorem is true, right?
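The characteristic polynomial and its repeated roots can be seen directly in numpy, where `np.poly` returns the coefficients of det(lambda I minus T). A sketch with an assumed diagonal example whose eigenvalue 2 is two-fold degenerate:

```python
import numpy as np

T = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])   # eigenvalue 2 is two-fold degenerate

# Coefficients of the characteristic polynomial det(lambda*I - T):
# (lambda - 2)^2 (lambda - 5) = lambda^3 - 9 lambda^2 + 24 lambda - 20.
coeffs = np.poly(T)
assert np.allclose(coeffs, [1.0, -9.0, 24.0, -20.0])

# Its roots are the eigenvalues, with the degenerate one repeated.
roots = np.roots(coeffs)
assert np.allclose(sorted(roots.real), [2.0, 2.0, 5.0], atol=1e-4)
```

The loose tolerance on the roots is deliberate: numerical root-finders locate repeated roots less precisely than simple ones.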

Now if you're in a real vector space, you don't get to say that, because this polynomial may have only complex roots, and those are not part of the space you're talking about, OK? So the roots can be repeated, and this is called degeneracy. OK, so are there any questions?
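The rotation matrix from last lecture illustrates this: over R it has no eigenvalues, but over C the characteristic polynomial has roots e^{plus or minus i theta}. A quick numeric check (theta = 0.3 is an arbitrary assumed value):

```python
import numpy as np

theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

vals = np.linalg.eigvals(R)    # numpy works over C, so both roots appear

# The eigenvalues are cos(theta) +/- i sin(theta): on the unit circle,
# with genuinely nonzero imaginary parts -- so no real eigenvalues.
assert np.allclose(np.abs(vals), 1.0)
assert np.isclose(max(vals.imag), np.sin(theta))
```
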

AUDIENCE: Can you turn it? It should be lambda I minus T, just so that it matches the next line.

PROFESSOR: Thank you, OK. Thank you. I could have flipped the sign on the next line as well. So any other questions? No? OK, so let's move on and we can talk about inner products.

And so first, what is an inner product? So an inner product is a map, but it's a very specific map. So an inner product on a vector space V is a map from V cross V to the field, F. And that's really what it's going to be.

Now who has seen an inner product somewhere? OK, what do we call it?

AUDIENCE: Dot product.

PROFESSOR: Dot product, right. So we can learn a lot from thinking about this simple case, so the motivation for thinking about this is really the dot product. So we have a vector space Rn, OK? And on that vector space, we might have two vectors, a, which I'm going to write as a1, a2, dot dot dot, an, and b.

So we have two vectors, and these are in the vector space V. Then we can define the dot product, which is an example of one of these inner products. So a dot b. We can even put little vectors over these.

And so the definition that we've used for many years is that this is a1 b1 plus a2 b2 plus dot dot dot plus an bn. And you see that this does what we want it to. So it takes two vectors which live in our vector space, and from that you get a number, right? So this lives in R.

So this is a nice example of an inner product. And we can look at what properties it gives us. So what do we know about this dot product? Well, one property it has is that a dot b equals b dot a. So it doesn't care which order you give the arguments in, all right?

Also, if I take the same vector, I know that a dot a has got to be greater than or equal to 0, right? Because this is going to be our length. And the only case where it's 0 is when the vector is 0.

And we can write down linearity: a dotted into, say, beta 1 b1 plus beta 2 b2. So these betas are real numbers, and these b's are vectors, right? So this thing we can just write as equal to beta 1 a dot b1 plus beta 2 a dot b2. OK, so we've got three nice properties. And you can write down more if you want, but this will be enough for us.

And the other thing that we can do with this is we can define the length of a vector, right? So we can say that this defines a length. And more generally, we call this the norm of the vector. And that, of course, you know: mod a squared is just equal to a dot a, all right? So this is our definition of the norm.

OK, so this definition over here is really by no means unique in satisfying these properties. So if, instead of a1 b1 plus a2 b2 et cetera, I wrote down some positive number times a1 b1 plus some other positive number times a2 b2, et cetera, that would also satisfy all of these properties up here. So it's not unique.

And so you could consider another dot product, which we would write as c1 a1 b1 plus c2 a2 b2 plus dot dot dot plus cn an bn, where the c's are just positive real numbers. That would satisfy all of the things that we know about our standard dot product. But for obvious reasons, we don't choose to do this, because it's not a very natural definition to put these random positive numbers in here. But we could.

And I guess one other thing that we have is the Schwarz inequality. So the absolute value of the dot product of a and b is less than or equal to the product of the norms of the vectors, right? And one of the problems in the p-set is to consider this in the more abstract sense, but this is very easy to show for real vectors, right?

So this is all very nice. So we've talked about Rn. What we really are going to worry about is complex vector spaces. And so there we have a little problem. And the problem comes in defining what we mean by a norm, right? Because if I say now that this vector has complex components and write this same thing, I'm not guaranteed that this is a real number, right? And so I need to be a little bit careful.

So let's just talk about complex spaces. And we really want to have a useful definition of a length. So let's let z be in n-dimensional complex space. So my z is equal to z1, z2, up to zn, where the zi's are in C, right?

So how can we define a length for this object? Well, we have to do it sort of in two steps. So we already know how to define the length for a complex number, right? It's just the absolute value, the distance from the origin in the complex plane. But now we need to do this in terms of a more complicated vector space.

And so we can really think of the length squared of z as equal to the sum of the squares of the absolute values of these complex numbers, OK? Which, if we write it out, looks like z1 star z1 plus z2 star z2 plus dot dot dot plus zn star zn. OK?

And so, thinking about the inner product, we should now see that the appearance of complex conjugation is not entirely unnatural. So if we ask about the length of a vector here, then that's going to arise from an inner product, OK? This object we want to arise from our inner product.

So we can now define our general inner product with the following axioms. So firstly, we want to basically maintain the properties that we've written down here, because we don't want to make our dot product not be an inner product anymore. That'd be kind of silly.

So let's define our inner product in the following way. I'm going to write it in a particular way. So the inner product is going to be, again, a map. And it's going to take our vector space, two elements of the vector space to the field. And I'm in a complex vector space.

So it's a map that I'm going to write like this, that takes V cross V to C, OK? And what I mean here is you put the two elements of your vector space in these positions in this thing, OK? And so really a b is what I mean by this-- so let me write it this way: this thing is in C, where a and b are in V, right? So these dots are where I'm going to plug in my vectors.

And so this inner product should satisfy some axioms. And they look very much like what we've written here. So the first one is a slight modification. We want that a b is equal not to b a, but to its complex conjugate, OK? And this is related to what I was discussing here.

But from this, we can see that the inner product of a with itself is always real, because it and its complex conjugate are the same. So we know that a a is real. And we're also going to demand in the definition of this inner product that this is greater than or equal to 0. And it's only 0 if a equals 0, right? So that's pretty much unchanged.

And then we want the same sort of distributivity. We do want to have that a, inner producted with beta 1 b1 plus beta 2 b2, should be equal to beta 1 a b1 plus beta 2 a b2, where the betas are just complex numbers, right?

And that's what we need to ask of this. And then we can make a sensible definitions of it that will give us a useful norm as well. Now I'll just make one remark. This notation here, this is due to Dirac. And so it's very prevalent in physics.

You will see in most purely mathematical literature this written just like this. So let me write it as a comma b and put these things in explicitly. And sometimes you'll even see a combination of these written like this, OK? But they all mean the same thing.

Compared to what we've written up here, this seems a little asymmetric between the two arguments, right? Well, firstly, axiom one isn't symmetric. And then down here we demand something about the second argument, but we don't demand the same thing about the first argument. So why not? Can anyone see?

I guess what we would demand is exactly the same thing the other way around. So we would demand that alpha 1 a1 plus alpha 2 a2, inner producted with b, is equal to alpha 1 star a1 b plus alpha 2 star a2 b-- something like this. But I don't actually need to demand that, because it follows from number one, right? I take axiom one, apply it to this, and I automatically get this thing here. OK?

And notice what's arisen-- actually, let's just go through that, because you really do want to see these complex conjugates appearing here, because they are important. So this follows: axioms one and three imply this. Let's just do it.

So let's start with this expression. And we know that, by axiom one, this will be given by b, inner producted with alpha 1 a1 plus alpha 2 a2, all complex conjugated, all right? And then by the linearity in the second argument, we can distribute this piece, right? We can write this as alpha 1 b a1 plus alpha 2 b a2, all complex conjugated. Which-- let's put all the steps in-- is alpha 1 star times b a1 star, plus alpha 2 star times b a2 star.

And then, again, by the first axiom, we can flip these and get rid of the complex conjugation. And that gives us this one up here, right? So we only need to define this linearity, this distributive property, on one side. We could have chosen to define it for the first argument and then we wouldn't have needed it for the second, but we didn't.

OK, so let's look at a couple of examples. And the first one is a finite dimensional example. And we're going to take V equal to Cn. And our definition is going to be a pretty natural generalization of what we've written down before. So a and b are elements of Cn. And the inner product is just going to be a1 star b1 plus a2 star b2 plus dot dot dot plus an star bn. Another piece of chalk. So the only difference from the dot product on a real vector space is that we've put these complex conjugates here. And you can check that this satisfies all of these axioms.
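This Cn inner product is what numpy's `vdot` computes: it conjugates its first argument, matching the convention used here. A sketch with assumed example vectors:

```python
import numpy as np

a = np.array([1 + 1j, 2 - 1j])
b = np.array([3 + 0j, 1j])

ip = np.vdot(a, b)   # a1* b1 + a2* b2: conjugate the first argument

# Axiom 1: <a, b> = <b, a>*
assert np.isclose(ip, np.conj(np.vdot(b, a)))

# Axiom 2: <a, a> is real and positive for a nonzero vector.
assert np.isclose(np.vdot(a, a).imag, 0.0)
assert np.vdot(a, a).real > 0
```

Note that plain `np.dot` does not conjugate, so for complex vectors it is not an inner product in this sense.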

Another example is actually an infinite dimensional vector space. Let's take V to be the set of all complex functions f of x, with x living in some finite interval from 0 to L. OK?

And so a natural inner product to define on this space-- and this is something that we can certainly talk about in recitations-- is that if I have f and g in this vector space V, then f g I'm going to define-- this is my definition of what the inner product is-- as the integral from 0 to L of f star of x g of x dx. OK?

If you think of this as arising from evaluating f at a set of discrete points, where you've got a finite dimensional vector space, and then letting the space between those points go to 0, this is the natural thing to arise. It's really an integral as a limit of a sum. And over here, of course, I could write this one as just the sum over i, from i equals 1 to n, of ai star bi.

And so the integral is the infinite dimensional generalization of the sum, and so we have this. And that might be something to talk about in recitations. OK?

So we've gone from having just a vector space to having a vector space where we've added this new operation, this inner product. And that lets us do things that we couldn't do before. So firstly, it lets us talk about orthogonality. Previously we couldn't ask any question relating two objects within our vector space. This lets us ask a question about two objects.

So if we have the inner product a b in some vector space V, then if this is 0, we say they're orthogonal. We say that the vectors a and b are orthogonal. And I'm sure you know what orthogonal means in terms of Rn, but this is just the statement of what it means in an abstract vector space. This is the definition of orthogonality.

And one more thing: if we have a set of vectors e1, e2, up to en, such that ei ej is equal to delta ij-- the Kronecker delta-- this set is orthonormal. Again, a word you've seen many times.

OK, so we can also define the components of vectors now in basis dependent way. We're going to choose ei to be a set of vectors in our vector space V. We previously had things that form a basis, a basis of V. And if we also demand that they're orthonormal, then we can-- well, we can always decompose any vector in V in terms of its basis, right?

But if it's also orthonormal, then we can write a, where a is some vector in V, as: a is equal to the sum over i equals 1 to n of some ai ei. Well, we can do that for any basis.

But then we can take this vector and form its inner product with the basis vectors. So we can look at what ek a is, right? So we have our basis vectors ek, and we take one of them and we dot it into this vector here. And this is straightforward to see. This is going to be equal to the sum over i equals 1 to n of ai times the inner product of ek with ei, right? Because of this distributive property here. OK?

But we also know that, because this is an orthonormal basis, this thing here is a Kronecker delta, delta ik, right? And so I can, in fact, do this sum, and I get that this is equal to ak. And so we've defined what we mean by a component of this vector in this basis ei. The components are defined by this inner product.
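The component formula a_k = ek a can be sketched in numpy (the rotated basis is an assumed example): the columns of a unitary matrix form an orthonormal basis, and the components reconstruct the vector.

```python
import numpy as np

# Columns of a unitary (here real orthogonal) matrix form an orthonormal basis.
U = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)
e = [U[:, 0], U[:, 1]]

a = np.array([2.0, 3.0])

# Components a_k = <e_k, a>, using the conjugating inner product.
components = [np.vdot(ek, a) for ek in e]

# Reconstruct a = sum_k a_k e_k.
reconstructed = sum(ak * ek for ak, ek in zip(components, e))
assert np.allclose(reconstructed, a)
```
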

So we can also talk about the norm, which, unsurprisingly, we are going to take to be the same expression as in Rn, but now it's the more general inner product that defines our norm. And because of our axioms-- because of number two in particular-- this is a sensible norm. It's always going to be greater than or equal to 0. OK?

And conveniently we can also generalize this Schwarz inequality. So instead of the one that's specific to Rn, it becomes: the absolute value of a b is less than or equal to the norm of a times the norm of b. All right, so let's cross the old one out. This is what it becomes. And in the current p-set, you've got to prove this is true, right?

We can also write down a triangle inequality, which is really something that norms should satisfy. So the norm of a plus b should be less than or equal to the norm of a plus the norm of b. And the R3 version of this is that the longest side of a triangle is shorter than the sum of the two other sides, right? So this is fine.
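Both inequalities are easy to spot-check numerically for the Cn inner product (random complex vectors with a fixed seed, as an assumed illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=4) + 1j * rng.normal(size=4)
b = rng.normal(size=4) + 1j * rng.normal(size=4)

# Norm induced by the inner product: ||v|| = sqrt(<v, v>).
norm = lambda v: np.sqrt(np.vdot(v, v).real)

# Schwarz: |<a, b>| <= ||a|| ||b||   (small slack for floating point)
assert abs(np.vdot(a, b)) <= norm(a) * norm(b) + 1e-12

# Triangle: ||a + b|| <= ||a|| + ||b||
assert norm(a + b) <= norm(a) + norm(b) + 1e-12
```
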

OK, so you might ask why we're doing all of this seemingly abstract mathematics. Well, now we're in a place where we can actually talk about the space where all of our quantum states are going to live. And these inner product spaces-- these vector spaces that we've given an inner product-- we can call them inner product spaces.

So a vector space with an inner product is what we actually call a Hilbert space. And this needs a little qualifier. So if this is a finite dimensional vector space, then it's straightforward: it is just a Hilbert space. Let me write it here. So a finite dimensional vector space with an inner product is a Hilbert space.

But if we have an infinite dimensional vector space, we need to be a little bit careful. For an infinite dimensional vector space, we again need an inner product, but we also need to make sure that this space is complete, OK? And this is a kind of technical point that I don't want to spend too much time on, and I haven't defined what a complete vector space means. But if we have an infinite dimensional vector space that is complete, or that we make complete, and it has an inner product, we also get a Hilbert space. And all quantum mechanical states live in a Hilbert space.


PROFESSOR: Yes, that's true. So how's that? So we need to define what we mean by complete though, right? So I don't want to spend much time on this. But we can just do an example. If we take the space of-- let V equal the space of polynomials on an interval 0 to L, say.

So this means I've got all pn's: p0 plus p1 x plus dot dot dot plus pn x to the n. There are things that will live in the completed vector space that are not of this form. So for example, taking n larger and larger, I could write down this polynomial: pn of x is the sum over i equals 1 up to n of x to the i over i factorial, right?

And all of these pn's live in this space of polynomials. But as n becomes large, there's a sequence of these-- call it a Cauchy sequence-- such that, as n goes to infinity, I generate something that's actually not a polynomial. I generate the exponential of x, which lives in the completion of this space but is itself not a polynomial. Don't worry about this too much, but in order to really define a Hilbert space, we have to be a little bit careful in infinite dimensional cases.

OK, so a few more things to talk about. Well, how do we make an orthonormal basis? So I presume you've all heard of Gram-Schmidt? The Gram-Schmidt procedure? Yep. OK, so that's how we make an orthonormal basis. And just the way you do it in R3, you do it the same way in your arbitrary vector space.

So we have the Gram-Schmidt procedure. So we have a list v1, v2, up to vn of vectors in our vector space that are linearly independent. And from it we can construct another list that's also orthonormal, so it's a very useful thing for us to have.

And so you can define this recursively. You can write that ej is equal to vj, minus the sum over i less than j of the inner product ei vj times ei, all divided by its length. And so with the sum, you're orthogonalizing vj against all of the previous ei's that you've already defined. And then you normalize by dividing by the length, right? So that's something that's very useful.
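The recursion above can be sketched directly in code-- a minimal, unpivoted Gram-Schmidt (the three test vectors are an assumed example):

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a linearly independent list into an orthonormal list."""
    es = []
    for v in vectors:
        # Subtract the projections onto the e_i already constructed.
        w = v - sum(np.vdot(e, v) * e for e in es)
        # Normalize by the length induced by the inner product.
        es.append(w / np.sqrt(np.vdot(w, w).real))
    return es

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
es = gram_schmidt(vs)

# Check orthonormality: <e_i, e_j> = delta_ij.
for i, ei in enumerate(es):
    for j, ej in enumerate(es):
        assert np.isclose(np.vdot(ei, ej), 1.0 if i == j else 0.0)
```

Using `np.vdot` means the same code works unchanged on complex vectors. In practice this naive form can lose orthogonality for nearly dependent inputs; a reorthogonalization pass or a QR factorization is the numerically robust variant.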

And the last thing I want to say about these inner product spaces is that we can use these inner products to find the orthogonal complement of something-- of anything, really. So we have a vector space V, and I can just choose some things in it and make a set. So U is a set of vectors in V. It doesn't need to be a subspace. It's just a set.

For example, if V is Rn, I could just choose vectors pointing along two directions, and that would give me my set. But that's not a subspace, because it doesn't contain some multiple of this vector plus some multiple of this vector, which would be pointing over here. So this is just a set so far.

We can define U perpendicular, which we'll call the orthogonal complement of U. And this is defined as: U perpendicular is equal to the set of v's in V such that v u is equal to 0 for all u in U. All of the things that live in this space are orthogonal to everything that lives in U. OK?

And in fact, this one is automatically a subspace. So it is a vector space. So if I took my example of choosing the x direction and y direction for my set here, then everything perpendicular to the x direction and y direction is actually everything perpendicular to the xy-plane, and so that is actually a subspace of R3.

And so there's a nice theorem that you can think about, but it's actually kind of obvious. So if U is a subspace, then I can actually write that V is equal to the direct sum of U and its orthogonal complement, OK? So that one's fairly straightforward to prove, but we won't do it now.
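The direct sum decomposition can be sketched with the lecture's own R3 example, U = span of x hat and y hat: project any v onto U, and the remainder lands in U perpendicular.

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
U_basis = [np.array([1.0, 0.0, 0.0]),   # x hat
           np.array([0.0, 1.0, 0.0])]   # y hat

# Component of v in U: sum of <e, v> e over an orthonormal basis of U.
v_U = sum(np.vdot(e, v) * e for e in U_basis)
# The remainder is the component in U_perp (here, along z hat).
v_perp = v - v_U

assert np.allclose(v_U, [1.0, 2.0, 0.0])
assert np.allclose(v_perp, [0.0, 0.0, 3.0])
# v_perp really is orthogonal to everything in U.
assert all(np.isclose(np.vdot(e, v_perp), 0.0) for e in U_basis)
```

So v = v_U + v_perp uniquely, which is exactly the statement V = U direct sum U perpendicular.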

OK, so in the last little bit, I want to talk more about this notation that I've introduced, that Dirac introduced. What can we say? If I can find a [INAUDIBLE] here. Are there any questions about this? Yep.

AUDIENCE: So when we find space and the idea of basis balance, why is that [INAUDIBLE] decompose things into plane waves when we're not actually [INAUDIBLE]?

PROFESSOR: So it's because-- basically, it works. Mathematically, we're doing things that are not quite legitimate. And so we can generalize the Hilbert space a little bit, such that these non-normalizable things can live in this generalized space. But really the answer is that it works, even though no physical system is going to correspond to something like that. So if I take plane waves, that's not a physically realizable thing. It gives us an easy way to-- instead of talking about some wave packet that is some superposition of plane waves, we can talk about the plane waves by themselves and then form the wave packet afterwards, for example. Does that answer the question a little bit at least? Yep.

AUDIENCE: If p could be written as a sum of U [INAUDIBLE], why is U not [INAUDIBLE]?

PROFESSOR: Well, just think about the case that I was talking about. So if we're looking at R3, and we take U to be the set containing the unit vector in the x direction and the unit vector in the y direction, that's not a subspace, as I said, because I can take the unit vector in the x direction plus the unit vector in the y direction. That goes in the 45 degree direction, and it's not among the things I wrote down originally.

So then if I talk about the subspace spanned by x hat and y hat, then I have a subspace. It's the whole xy-plane. And the things that are orthogonal to it in R3 are just the things proportional to z hat.

And so then I've got the things in this x hat and y hat, and the thing that's in here is z hat. And so that really is the basis for my R3 that I started with. That contains everything.

And more generally, the reason I need to make this a subspace is just because-- so I define U by some set of vectors that I'm putting into it. The orthogonal complement automatically contains everything that's orthogonal to those, so there's no combination of the things in the orthogonal complement that's not already in that complement. Because I'm saying that this is everything in V that's orthogonal to the things in this subspace.

So I could write down some arbitrary vector v, and I could always write it as a projection onto things that live in here plus things that don't live in this one, right? And what I'm doing by defining this complement is I'm getting rid of the bits that are proportional to things in this, OK? All right, any-- yep?

AUDIENCE: So an orthogonal complement is automatically a subspace?


AUDIENCE: But that doesn't necessarily mean that any random collection of vectors is a subspace.

PROFESSOR: No. All right, so let's move on and talk about Dirac's notation. And let's do it here. So three or four lectures ago, we started talking about these objects, and we were calling them kets, right? And they were things that live in our vector space V. So these are just a way of writing down our vectors, OK?

So when I write down the inner product, which we have on the wall above, one of the bits of it looks a lot like this. So we can really think of a b with the b being a ket. We know that b is a vector, and here we're writing it in a particular way, in terms of a ket.

And what we can do is actually think about breaking this object, this inner product up into two pieces. So remember the dot product is taking two vectors, a and b. One of them, we already have written it like a vector, because a ket is a vector.

What Dirac did in breaking this up is he said, OK, well this thing is a bracket, and so he's going to call this one a ket, and this one a bra. So this is an object with something in it. The things inside these you should think of as just labels. OK?

Now we already know this thing here. So these kets are things that live in-- I should say this is Dirac notation. OK, so we already know these kets are things that live in the vector space.

But what are the bras? Well, they're not vectors in V. So b is a vector-- so maybe I should have labeled this one differently to be a little less confusing. So b is a ket, and this is something that lives in our vector space V. This inner product we're writing in terms of a bra and a ket.

The bra, what does it actually do? So I'm going to use it to make this inner product. And so what it's doing is it's taking a vector and returning a complex number. The inner product takes V cross V to C. But if I think of it as the action of this bra on this ket, then the action is that this bra eats a vector and spits back a complex number, OK?

So the bra a is actually a map, OK? So these bras live in a very different place than the kets do, although they are going to be very closely related. So firstly, a bra is not in V. You should be careful if you ever say that, because it's not right.

We actually say that it belongs to a dual space, which we label as V star, because it is very dependent on V, right? It's a map from V to C. And I should say it's a linear map.

Now what is V star? Well, at the moment it's just the space of all linear maps from V to C. But it itself is a vector space. So we can define addition of these maps. We can define addition on V star and also scalar multiplication of these maps.

And so what that means is that I can define some bra w that's equal to alpha times a plus beta times b. And all of these live in this V star space. Let me write that explicitly. So a, b, and w live in V star, OK?

And the way we define this is actually through the inner product. We define it such that-- so I take all vectors v in the vector space big V, and the definition of w is that this holds.

And then basically from the properties of the inner product, you inherit the vector structure, the vector space structure. So this tells us V star is a vector space.
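As a sketch of why the space of linear maps inherits a vector space structure, here is a small Python model (my own illustration, with made-up names like `bra` and `add_bras`) of bras on C^n as functions from kets to complex numbers:

```python
# Model bras on C^n as linear maps (ket -> complex number), and check that
# sums and scalar multiples of bras are again bras, so V* is a vector space.
def bra(a):
    # the bra corresponding to the ket a: v -> sum_i conj(a_i) * v_i
    return lambda v: sum(x.conjugate() * y for x, y in zip(a, v))

def add_bras(f, g):
    # addition on V*, defined pointwise through the action on kets
    return lambda v: f(v) + g(v)

def scale_bra(alpha, f):
    # scalar multiplication on V*
    return lambda v: alpha * f(v)

a = [1 + 1j, 2j]
b = [3, 1 - 1j]
v = [1j, 2]

# w = 2<a| + <b|, exactly the kind of combination the professor wrote down
w = add_bras(scale_bra(2, bra(a)), bra(b))
assert w(v) == 2 * bra(a)(v) + bra(b)(v)
```

The definition of `w` is only through its action on an arbitrary ket `v`, mirroring how the lecture defines the bra w by requiring the identity to hold for all v in V.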

Let's go over here. And there's actually a correspondence between the objects in the original vector space V and those that live in V star. So we can say for any v in V, there's a unique-- I should write it like this. Any ket v in the vector space, there is a unique bra, which I'm also going to label by v, and this lives in V star.

And so we can show uniqueness by assuming it fails. So let's assume that there exist a v and a v prime in here such that-- so we'll assume that this bra is not unique, but there are two kets, v and v prime, giving the same bra. And then we can construct-- from this, I can take this over to this side here, and I just get that v w minus v prime w is equal to 0, which I can then use the skew symmetry of these objects to write as w v minus w v prime, starred. So I've just changed the order of both of them.

And then I can use the linearity in the kets. I can combine them linearly. So I know this is equal to w acting on v minus v prime, starred. And essentially, that's it, because I know this has to be true for every w in the vector space V. So this thing is equal to 0. And the only vector that's orthogonal to every other vector is the zero vector, by the definition of the inner product.

So this implies that v minus v prime equals 0, the null vector, which implies that v equals v prime. So our assumption was wrong, and so this is unique. OK, let's see.
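The argument the professor is describing can be summarized in symbols (my transcription, not verbatim from the board) as:

```latex
% Suppose two kets v, v' give the same bra:
% \langle v | w\rangle = \langle v' | w\rangle for all w in V. Then
\begin{align*}
0 &= \langle v | w\rangle - \langle v' | w\rangle
   = \langle w | v\rangle^{*} - \langle w | v'\rangle^{*}
   = \langle w \,|\, v - v'\rangle^{*}
   \quad \text{for all } w \in V,
\end{align*}
% so v - v' is orthogonal to every w, which forces v - v' = 0, i.e. v = v'.
```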

And so we actually have really a one to one correspondence between things in the vector space and things in the dual space, OK? And so we can actually label the bras by the same thing that's labeling the kets. So I can really do what I've done in the top line up there and have something-- everything is labeled by the same little v. Both the thing in the big vector space, big V, and the thing in V star are labeled by the same thing.

And more generally, I could say that v-- so there's a correspondence between this thing and this thing. And notice the stars appearing here. They came out of how we defined the inner product.

OK, so really, in fact, any linear map you write down, any linear map like this defines one of these bras, because every linear map that takes V to C lives in V star. So there has to be an element that corresponds to it. And just if you want a concrete way of thinking about these, if I think of a ket as a column vector, v1 to vn, the way I should think about the bras is that they are really what you'd write as row vectors, with the components conjugated.

OK, and now you can ask what the dot product looks like. Alpha with v is then just matrix multiplication. But it's matrix multiplication of a 1 by n thing with an n by 1 thing. Alpha 1 star, alpha 2 star, up to alpha n star, times this one here, v1 to vn. And this is now just matrix multiplication. I guess I can write it like this.
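As a quick check of that row-times-column picture, here is the computation in plain Python (the numbers are mine, not from the lecture):

```python
# <alpha|v> as a 1-by-n conjugated row vector times an n-by-1 column vector.
alpha = [2 + 1j, -1j, 3]
v = [1, 1 + 1j, 2j]

row = [x.conjugate() for x in alpha]          # [2-1j, 1j, 3], components conjugated
inner = sum(r * c for r, c in zip(row, v))    # the 1-by-1 matrix product, a number
print(inner)  # (1+6j)
```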

So they're really, really quite concrete. They're as concrete as the kets are. So you can construct them as vectors, as strings of numbers, in this way. So I guess we should finish. So I didn't get to talk about linear operators, but we will resume there next week. Are there any questions about this last stuff or anything? No? OK. So see you next week, or see you tomorrow, some of you. Thanks.

