# Lecture 21: Angular Momentum (cont.)


Description: In this lecture, the professor talked about the algebraic theory of angular momentum, made comments on spherical harmonics, etc.

Instructor: Barton Zwiebach

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR: All right. Today we'll be talking a little about angular momentum. Continuing the discussion of those vector operators and their identities that we had last time. So it will allow us to make quite a bit of progress with those operators, and understand them better.

Then we'll go through the algebraic analysis of the spectrum. This is something that probably you've seen in some way or another, perhaps in not so much detail. But you're probably somewhat familiar, but it's good to see it again.

And finally at the end we'll discuss an application that is related to your last problem in the homework. It's a rather mysterious thing, and I think one should appreciate how unusual the result is; it's related to the two-dimensional harmonic oscillator.

So I'll begin by reminding you of a few things. We have L, which is r cross p. And we managed to prove last time that that was equal to p cross r, with a minus sign. And then part of the problems that you're solving with angular momentum use the concept of a vector under rotations.

So if u is a vector under rotations-- to say that something is a vector under rotations means the following. It means that if you compute the commutator of Li with uj-- you could put hats; all these things are operators, all these vectors, so maybe I won't put hats here on the blackboard--

then you're supposed to get ih bar, epsilon ijk, uk. So that's a definition if you wish. Any object that does that is a vector under rotations.

And something that in the homework you can verify is that r and p are vectors under rotations. That is, if you put here xj, you get this thing with xk. If you put here pj, you get this thing with pk, when you compute the commutator.

So r and p are vectors under rotation. Then comes that little theorem, that is awfully important, that shows that if u and v are vectors under rotations-- u and v vectors under rotations-- then u dot v is a scalar. And u cross v is a vector. And in both cases, under rotations.

So this is something you must prove, because if you know how u and v commute with the angular momentum, you know how u dot v and u cross v commute with L. So to say that something is a scalar, the translation is that Li commutator with u dot v will be 0.

You don't have to calculate it again. If you've shown that u and v are vectors, that they transform like that, this commutes with this. So what do you conclude from this?

That Li commutes with r squared, with p squared, and with r dot p. They're all 0.

Because r and p are vectors under rotations, you don't have to compute those commutators anymore. Li will commute with r squared, with p squared, and with r dot p.

And also, the fact that u cross v is a vector means that Li commutator with u cross v, j-- the j component of u cross v-- is ih bar, epsilon ijk, u cross v, k.

Which is to say that u cross v is a vector under rotations. This has a lot of important corollaries. The most important perhaps is the commutation of angular momentum with itself.

That is, since you've shown that r and p satisfy this, r cross p, which is angular momentum, is also a vector under rotations. So here, choosing u equal r and v equal p, you get that Li commutator Lj is equal to ih bar, epsilon ijk, Lk.
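If you want to see this commutator emerge from the x's and p's after all, sympy can grind through the product-rule algebra in a few lines. This is just a sketch (the helper functions `p`, `Lx`, `Ly`, `Lz` are my names, not from the lecture): it applies [Lx, Ly] to a test function and checks that the result equals ih bar Lz acting on the same function.

```python
import sympy as sp

x, y, z, hbar = sp.symbols('x y z hbar')
f = sp.Function('f')(x, y, z)

# momentum acting on a wavefunction: p_i = -i hbar d/dx_i
def p(g, v):
    return -sp.I * hbar * sp.diff(g, v)

# components of L = r x p acting on a test function
def Lx(g): return y * p(g, z) - z * p(g, y)
def Ly(g): return z * p(g, x) - x * p(g, z)
def Lz(g): return x * p(g, y) - y * p(g, x)

# [Lx, Ly] f should equal i hbar Lz f
comm = sp.expand(Lx(Ly(f)) - Ly(Lx(f)))
assert sp.simplify(comm - sp.I * hbar * Lz(f)) == 0
```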

And it's the end of the story. You got this commutation. The commutation you wanted.

In earlier courses, you probably found that this was a fairly complicated calculation. Which you had to put the x's and the p's, the x's and the p's, and start moving them. And it takes quite a while to do it. So, that's important.

Another property that follows from all of this, which is sort of interesting, is that since L is now also a vector under rotations, Li commutes with L squared. Because L squared is L dot L, therefore it's a scalar. So Li commutes with L squared.

And that property is absolutely crucial. It's so important that it's worth checking that, in fact, it follows just from this algebra. You see, the only thing you need to know to compute the commutator of Li with L squared is how the L's commute.

Therefore it should be possible to calculate this based on this algebra. So this property is true just because of this algebra, not because of anything we've said before. And that's important to realize.

Because you have an algebra like Si commutator Sj equals ih bar, epsilon ijk, Sk, which was the algebra of spin angular momentum. And we claim that, for the same reason that this algebra leads to this result, Si should commute with S squared. And you may remember that in the particular case we examined in this course, S squared-- that would be Sx squared plus Sy squared plus Sz squared-- was in fact 3 times h bar over 2 squared,

because each matrix squared was proportional to the identity-- that's where the 3 times the identity matrix comes from. And S squared, in the way we represent that spin by 2 by 2 matrices, commutes with Si, because it is proportional to the identity. So it's no accident that this commutator is 0.
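This is easy to check explicitly in the spin-1/2 representation. A minimal numerical sketch (numpy, working in units where h bar = 1; the variable names are mine):

```python
import numpy as np

hbar = 1.0  # work in units where hbar = 1
# spin-1/2 operators S_i = (hbar/2) sigma_i, from the Pauli matrices
Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)

S2 = Sx @ Sx + Sy @ Sy + Sz @ Sz

# S^2 is 3 (hbar/2)^2 times the identity ...
assert np.allclose(S2, 3 * (hbar / 2) ** 2 * np.eye(2))
# ... and therefore commutes with each S_i
for S in (Sx, Sy, Sz):
    assert np.allclose(S2 @ S - S @ S2, 0)
# and the algebra itself: [Sx, Sy] = i hbar Sz
assert np.allclose(Sx @ Sy - Sy @ Sx, 1j * hbar * Sz)
```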

Because this algebra, whatever the operators are, implies that they commute with the sum of their squares. So whenever we'll be talking about spin angular momentum, orbital angular momentum, total angular momentum when we add them-- there are all kinds of angular momentum-- our generic name for angular momentum will be J.

And we'll say that Ji commutator Jj equals ih bar, epsilon ijk, Jk is the algebra of angular momentum. And by using J, you're sending the signal that you may be talking about L, or maybe talking about S, but it's not obvious which one. And you're focusing on those properties of angular momentum that hold just because this algebra is supposed to be true.

So in this algebra, you will have that ji commutes with j squared. And what is j squared? Of course, j squared is j1 squared plus j2 squared plus j3 squared.

Now this is so important, and this derivation is a little bit indirect, that I encourage you all to just do it. Without using any formula, put the Jx here, and compute this commutator. It takes a couple of lines, but just convince yourself that this is true.
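If you want a numerical sanity check alongside the two-line symbolic computation, here is a sketch in the spin-1 representation (numpy, hbar = 1). The explicit 3-by-3 spin-1 matrices below are standard but are not written out in the lecture, so treat them as an assumption of the example.

```python
import numpy as np

hbar = 1.0
# standard spin-1 matrices in the m = +1, 0, -1 basis
Jx = hbar / np.sqrt(2) * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = hbar / np.sqrt(2) * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Jz = hbar * np.diag([1.0, 0.0, -1.0]).astype(complex)

J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz

# [Ji, J^2] = 0 for every component, as the algebra guarantees
for Ji in (Jx, Jy, Jz):
    assert np.allclose(Ji @ J2 - J2 @ Ji, 0)
# here J^2 is in fact j(j+1) hbar^2 times the identity, with j = 1
assert np.allclose(J2, 2 * hbar**2 * np.eye(3))
```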

OK, now we had a little more discussion. These are all things that are basically related to what you've been doing in the homework. Another fact is that this algebra can be translated into J cross J equals ih bar, J.

Another result, in this transcription of equations, is that the statement that u is a vector under rotations corresponds to a vector identity. Just the fact that the algebra here is this, and that L with u is this, implies the following: J cross u plus u cross J equals 2 ih bar, u.

So this is for a vector under rotations. Under rotations. So this I think is in the notes.

It's basically saying that if you want to translate this equation into vector form, which is a nice thing to have, it reads like this. And the way to do that is to just calculate the left-hand side. Put an index i, and just try to get the right-hand side. It will work out.

OK. Any questions so far with these identities?

OK. So we move on to another identity that you've been working on, based on the calculation of what a cross b dot a cross b is. If these things are operators, there are corrections to the classical formula for what this product is supposed to be.

Actually, it's not equal to a squared b squared minus a dot b squared, the classical formula. It's actually equal to this, plus dot dot dot-- a few more things.

Classically it's just that. You put 2 epsilons. Calculate the left hand side. And it's just these 2 terms.

Since there are more terms, let's look at what they are for a particular case of interest. So our case of interest is L squared, which corresponds to r cross p times r cross p. And indeed, it's not just r squared p squared minus r dot p squared. There's a little extra.

And perhaps you have computed that little extra by now. It's ih bar r dot p. So that's a pretty useful result. And from here, we typically look for what is p squared.

So for p squared, what we do is pass the other terms to the other side. And therefore we have 1 over r squared times r dot p squared minus ih bar, r dot p, plus 1 over r squared, L squared.

And we've done this with some prudence. The 1 over r squared is here in front; it may be fairly different from having it on the other side.

And therefore, when I apply the inverse, 1 over r squared, I apply it from the left. So I write it like that. And that's very different from having the r squared on the other side.

Could be completely different. Now, what is this? Well this is a simple computation, when you remember that p vector is h bar over i gradient.

And r dot p, therefore, is h bar over i, r d/dr. Because the vector r is the magnitude r times the unit vector in the radial direction, and the radial part of the gradient is d/dr.

So this can be simplified. I will not do it because it's in the notes. And you get minus h squared, 1 over r, d second, d r squared, r.

In a funny notation, the r is on the right. And the 1 over r is on the left. And you would say, this doesn't sound right.

You have here all these derivatives, and what is an r doing to the right of the derivatives? But this is a kind of trick to rewrite everything in a short way.

So if you want, think of this being acting on some function of r. And see what it is. And then you put a function of r here, and calculate it. And you will see, you get the same.
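You can also let sympy do this check. The sketch below (the names `r`, `f`, `rdotp` are mine) verifies that 1 over r squared times ((r dot p) squared minus ih bar r dot p), acting on a radial function f(r), equals the compact form minus h bar squared, 1 over r, d second dr squared, of r times f.

```python
import sympy as sp

r, hbar = sp.symbols('r hbar', positive=True)
f = sp.Function('f')(r)

# r . p acting on a radial function: (hbar/i) r d/dr
def rdotp(g):
    return (hbar / sp.I) * r * sp.diff(g, r)

# (1/r^2) [ (r.p)^2 - i hbar (r.p) ] f
lhs = (rdotp(rdotp(f)) - sp.I * hbar * rdotp(f)) / r**2
# the compact form: -hbar^2 (1/r) d^2/dr^2 (r f)
rhs = -hbar**2 / r * sp.diff(r * f, r, 2)

assert sp.simplify(lhs - rhs) == 0
```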

So it's a good thing to try that. So p squared is given by this. There's another formula for p squared.

p squared is, of course, minus h squared times the Laplacian. So p squared is also equal to minus h squared times the Laplacian operator. And the Laplacian operator is 1 over r, d second dr squared, r, plus 1 over r squared times-- 1 over sine theta, d d theta, sine theta, d d theta,

plus 1 over sine squared theta, d second d phi squared-- it's a little bit messy-- and then you close the bracket. So there are a few things to learn here.

And the first thing is that if you compare these 2 expressions, you have a formula for L squared. You have 1 over r squared, L squared, up there. And here you have minus h squared times this angular piece.

So l squared, that scalar operator is minus h squared, 1 over sine theta, dd theta, sine theta, dd theta, plus 1 over sine squared theta, d second, d phi squared. So in terms of functions of 3 variables, x, y, and z, L squared, which is a very complicated object, has become just a function of the angular variables.
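As a check, you can apply this angular operator to the simplest spherical harmonics and recover the eigenvalue h bar squared, l times l plus 1. A sympy sketch (the function name `L2` is mine), using Y 1 0 proportional to cos theta and Y 1 1 proportional to sine theta, e to the i phi, both with l = 1:

```python
import sympy as sp

theta, phi, hbar = sp.symbols('theta phi hbar', positive=True)

def L2(psi):
    # L^2 = -hbar^2 [ (1/sin th) d/dth ( sin th d/dth ) + (1/sin^2 th) d^2/dphi^2 ]
    term1 = sp.diff(sp.sin(theta) * sp.diff(psi, theta), theta) / sp.sin(theta)
    term2 = sp.diff(psi, phi, 2) / sp.sin(theta)**2
    return -hbar**2 * (term1 + term2)

# eigenvalue hbar^2 l(l+1) = 2 hbar^2 for l = 1
for psi in (sp.cos(theta), sp.sin(theta) * sp.exp(sp.I * phi)):
    assert sp.simplify(L2(psi) - 2 * hbar**2 * psi) == 0
```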

And this is a very important intuitive fact. L squared-- L is the operator of rotations.

So it shouldn't really affect r-- shouldn't change r, modify r in any way. So it's a nice thing to confirm here that this operator can be thought of as an operator acting on the angular variables. Or you could say, on functions on the unit sphere, for example.

It's a good thing. The other thing that you've learned here-- so this is a very nice result. It's not all that easy to get by direct computation.

If you had to do Lx squared plus Ly squared plus Lz squared-- first, all the possible orderings-- well, there are no ordering problems here. But you would have to write this in terms of px, py, and pz, and x, y, and z, then pass to angular variables and simplify all that.

It's a very bad way to do it. And it's painful. So the fact that we got this like that is very nice.

The other thing that we've got is some understanding of the Hamiltonian for a central potential, what we call a central potential problem. v of r.

Now, I will write a v of r like this. But then we'll simplify it. In fact, let me just go to a central potential case, which means that the potential just depends on the magnitude of r. So r is the magnitude of the vector r.

So at this moment, you have p squared over 2m plus v of r. So this whole Hamiltonian is minus h squared over 2m, 1 over r, d second dr squared, r, plus 1 over 2m r squared, L squared, plus v of r.

So our Hamiltonian has also been simplified. So this will be the starting point for writing the Schrodinger equation for central potentials. And you have the operator l squared. And as far as we can, we'll try to avoid computations in theta and phi very explicitly, but try to do things algebraically.

So at this moment, the last comment I want to make on this subject is the issue of set of commuting observables. So if you have a Hamiltonian like that, you can try to form a set of commuting observables that are going to help you understand the physics of your particular problem.

So the first thing that you would want to put in the list of complete set of observables is the Hamiltonian. We really want to know the energies of this thing. So what other operators do I have?

Well I have x1, x2, and x3. And well, can I add them to the Hamiltonian to have a complete set of commuting observables? Well, the x's commute among themselves. So can I add them?

Yes or no? No. No you can't add them, because the x's don't commute with the Hamiltonian.

There's a p here. p doesn't commute with x's. So that's out of the question. They cannot be added to our list.

How about the p's? p1, p2, and p3. Not good either, because they don't commute with the potential term. The potential has x dependence, and it would take a miracle for it to commute.

In general, it won't commute. So no reason for it to commute, unless the potential is 0. So this is not good.

Nor is it good to have r squared, or p squared, or r dot p. No good either.

On the other hand, r cross p is interesting. You have the angular momentum, L1, L2, and L3. Well, the angular momentum will commute, I think, with the Hamiltonian.

You can see it here. You have p squared, and the Li's commute with p squared because p is a vector under rotations. p doesn't commute with the Li's, but p squared does.

Because that was a scalar. So this term commutes with any angular momentum operator.

Moreover, v of r: r is the magnitude of the vector r, so v of r is a function of r squared, and r squared is the vector r squared. So ultimately, anything that is a function of r is a function of the operator r squared, and that also commutes with all the Li's.

So h commutes with all the Li's. And that's a great thing. So this is absolutely important. h commutes with all the Li's.

That's angular momentum conservation. As we've seen, the rate of change of any operator is equal to expectation value of the commutator of the operator with the Hamiltonian. So if you put any Li, this commutator is 0. And the operator is conserved in the sense of expectation values.

Now this conservation law is great. You could add this operators to the commuting set of observables. But this time, you have a different problem.

Yes, this commutes with h. This commutes with h. And this commutes with h. But these ones don't commute with each other.

So not quite good enough. You cannot add them all. So let's see how many can we add.

We can only add 1. Because once you have 2 of them, they don't commute. So you're going to add 1, and everybody has agreed to add L3.

So we have H, L3. And happily we have 1 more is L squared. Remember, L squared commutes with all the Li's, so that's another operator.

And for a central potential problem, this will be sufficient to label all of our states.

AUDIENCE: So how do we know that we need the L squared? How do we know that we can't get-- how do we know that just H and L3 isn't already a complete set?

PROFESSOR: I probably wouldn't know now, but in a little bit, as we calculate the kind of states that we get with angular momentum, I will see that there are many states with the same value of L3 that don't correspond to the same value of the total or length of the angular momentum.

So it's almost like saying that there are angular momenta-- here is-- let me draw a plane. Here is z component of angular momentum, Lz. And here you got it.

You can have an angular momentum that is like that, and has this Lz. Or you can have an angular momentum that is like this, L prime, that has the same Lz. And then it will be difficult to tell these 2 states apart.

And they will correspond to states of this angular momentum, or this angular momentum, have the same Lz. Now drawing these arrows is extraordinarily misleading. Hope you don't get upset that I did it.

It's misleading because, for this vector, you cannot measure the 3 components simultaneously. Because they don't commute. So what do I mean by drawing an arrow?

Nevertheless, the intuition is sort of there. And it's not wrong, the intuition. It will happen to be the case that states that have the same value of Lz will not be distinguished. But once we have this, we will distinguish them.

And that's also a peculiar result that we'll use: even though we're talking about 3 dimensions, the fact that the 1-dimensional Schrodinger equation has non-degenerate bound states. You say, what does that have to do with 3 dimensions?

What will happen is that the 3 dimensional Schrodinger equation will reduce to a 1 dimensional radial equation. And the fact that that doesn't have degeneracies tells you that for bound state problems, this will be enough to do it. So you will have to wait a little to be sure that this will do it.

But this is pretty much the best we can do now. And I don't think you will be able to add anything else to this at this stage.

Now there's of course funny things that you could add like-- if there's spin, the particles have spin, well we can add spin and things like that. But let's leave it at that and now begin really our calculation, algebraic calculation, of the angular momentum representations.

So at this moment, we really want to make sure we work with this. Only this formula over here. And learn things about the kind of states that can exist in a system in which there are operators like that.

So it's a funny thing. You're talking about a vector space. And in fact, you don't know almost anything about this vector space so far. But there is an action of those operators.

From that fact alone, and one more important fact-- the j's are Hermitian. From these 2 facts, we're going to derive incredibly powerful results, extremely powerful things.

And as we'll see, they have applications even in cases that you would imagine have nothing to do with angular momentum, which is really surprising. So how do we proceed with this stuff? Well, there's hermiticity.

And you immediately introduce things called J plus minus, which are J1 plus minus i J2-- or Jx plus minus i Jy. Then you calculate what J plus J minus is.

Well, J plus J minus will be J1 squared plus J2 squared, and then you have the cross terms, which don't cancel. So J plus times J minus would be J1 plus i J2 times J1 minus i J2.

So the cross term would be minus i times the commutator of J1 with J2. And that commutator is ih bar, J3. So this is J1 squared plus J2 squared plus h bar, J3.

So that's a nice formula for J plus J minus. J minus J plus would be J1 squared plus J2 squared minus h bar, J3. These 2 formulas are summarized by: J plus minus times J minus plus is equal to J1 squared plus J2 squared plus minus h bar, J3.

OK. Things to learn from this. Maybe I'll continue here for a little while to use the blackboards, up to here only.

The commutator of J plus and J minus can be obtained from this equation. You just subtract them. And that's 2h bar, J3.

And finally, one last thing that we'd like to know is how to write J squared. So J squared is J1 squared plus J2 squared plus J3 squared, and J1 squared plus J2 squared shows up here.

So we might as well add it and subtract it. So I add a J3 squared, and I add it on the left hand side. And pass this term to the other side.

So J squared would be J plus, J minus, plus J3 squared, minus h bar, J3. Or J minus, J plus, plus J3 squared, plus h bar, J3.

OK. So that's J squared. OK. So we're doing sort of simple things.

Basically at this moment, we decided that we like better J plus and J minus. And we tried to figure out everything that we should know about J plus, J minus.

If we trade Jx and Jy for J plus and J minus, you'd better know what the commutator of J plus and J minus is, and how to write J squared in terms of J plus and J minus.

And this is what we've done here. And in particular, we have a whole lot of nice formulas.

So one more formula is probably useful. And it's the formula for the commutator of J plus and J minus with Jz. Because after all, the J plus, J minus commutator, you've got it.

So if you're systematic about these things, you should figure out that at this point you would like to know what the commutator of J plus and J minus with Jz is.

So I can do Jz commutator with J plus. It's not hard. It's Jz with--

I'm sorry. I'm calling it 3. So, I think in the notes I call them x, y, and z. But never mind.

J1 plus i J2. The plus really comes with a plus i. So J3 with J1, by the cyclic ordering, is ih bar, J2.

And here you have plus i, and J3 with J2 is minus ih bar, J1. So this is h bar, J1, plus i, J2, which is h bar, J plus.

So what you've learned is that J3 with J plus is equal to h bar, J plus. And if you do it with J minus, you'll find a minus sign-- so there's a plus minus here: J3 with J plus minus equals plus minus h bar, J plus minus.
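All of these J plus minus relations can be verified at once in a finite representation. A numerical sketch using the spin-1 matrices (numpy, hbar = 1; the explicit matrices and the m = +1, 0, -1 basis ordering are assumptions of the example, not something written out in the lecture):

```python
import numpy as np

hbar = 1.0
# spin-1 matrices in the m = +1, 0, -1 basis
Jx = hbar / np.sqrt(2) * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = hbar / np.sqrt(2) * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Jz = hbar * np.diag([1.0, 0.0, -1.0]).astype(complex)

Jp = Jx + 1j * Jy   # J+
Jm = Jx - 1j * Jy   # J-
J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz

assert np.allclose(Jm, Jp.conj().T)                      # (J+)^dagger = J-
assert np.allclose(Jp @ Jm - Jm @ Jp, 2 * hbar * Jz)     # [J+, J-] = 2 hbar Jz
assert np.allclose(J2, Jp @ Jm + Jz @ Jz - hbar * Jz)    # J^2 = J+J- + Jz^2 - hbar Jz
assert np.allclose(Jz @ Jp - Jp @ Jz, hbar * Jp)         # [Jz, J+] = +hbar J+
assert np.allclose(Jz @ Jm - Jm @ Jz, -hbar * Jm)        # [Jz, J-] = -hbar J-
```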

So that is the complete result. And that should remind you of the analogous relation that you have in the harmonic oscillator: N commutator with a dagger was a dagger,

and N commutator with a was minus a. Because of the fact that-- maybe I didn't say it here, and I should have-- the dagger of J plus is J minus. Because the operators Ji are Hermitian.

So J plus and J minus are daggers of each other-- adjoints of each other. And here you see a very analogous situation: a and a dagger were adjoints of each other.

And with respect to N, the number operator, one increased it and one decreased it. a dagger increased the number eigenvalue of N; a decreased it. The same is going to happen here.

J plus is going to increase the z component of angular momentum. And J minus is going to decrease it.

OK. So we've done most of the calculations that we need. The rest is pretty easy work. Not that it was difficult so far. But it took a little time.

So what happens next is the following. You must make a declaration. There should exist states, basically.

We have a vector space. It's very large. It's actually infinite dimensional.

Because they will be related to all kinds of functions on the unit sphere. All these angular variables. So it's infinite dimensional.

So it's a little scary. But let's not worry about that. Something very nice happens with angular momentum. Something so nice that it didn't happen actually with a and a dagger.

With a and a dagger, you build states in the harmonic oscillator. And you build infinitely many ones. The operators x and p, you've learned you cannot represent them by finite dimensional matrices.

So this is a lot more complicated, you would say. And you would say, well, this is just much harder. This algebra is so much harder than this algebra.

Nevertheless, this algebra is the difficult one. It gives you infinite dimensional representations. You can keep piling up the a daggers.

Here, this is a very dense algebra. Mathematicians would say this is much simpler than this one. And we'll see the simplicity of this one, in that you will manage to get representations and matrices that are finite dimensional to work these things out.

So it's going to be nicer in that sense. So what do we have? We have to think of our commuting observables and the set of Hermitian operators that commute.

So we have J squared, and J3-- I call it Jz now, apologies. And we'll declare that there are states. These are Hermitian, and they commute. So they must be diagonalized simultaneously. And there should exist states that represent the diagonalization.

In fact, since they commute, and can be diagonalized simultaneously, the vector space must break into a list of vectors. All of them eigenstates of these 2 operators. And all of them orthogonal to each other.

AUDIENCE: I was just wondering when we showed that Jz is Hermitian?

PROFESSOR: We didn't show it. We postulated that J's are Hermitian operators. So you know that when J is L, yes it's Hermitian. You know when J is spin, yes it's Hermitian. Whatever you're doing we'll use Hermitian operators.

So not only can they be diagonalized simultaneously-- by our main theorem about Hermitian operators, they should provide an orthonormal basis for the full vector space. So the whole answer is supposed to be here. Let's see.

So I'll define states, Jm, that are eigenstates of both of these things. And I have 2 numbers to declare those eigenvalues. You would say J squared.

Now, any normal person would put here maybe h squared, for units, times j squared, and then Jm. Don't copy it yet. And Jz on Jm--

It has units of angular momentum. So an h, times m, times Jm. But that turns out not to be very convenient to put the J squared there. It ruins the algebra later. So we'll put something different that we hope has the same effect.

And I will discuss that. I'll put h squared, J times J plus 1. It's a funny way of declaring how you're going to build the states. But it's a possible thing to do.

So here are the states, J and m. And the only thing I know at this moment is that since these are Hermitian operators, their eigenvalues must be real. So J times J plus 1 is real. And m is real. So J and m belong to the reals.

And they are orthogonal to-- we can say they're orthonormal states. We will see very soon that these things get quantized. But basically, the overlap of a Jm with a J prime, m prime would be 0 whenever the J's or the m's are different.

As you know from our theory, any 2 eigenstates with different eigenvalues are orthonormal. And in fact, you can choose a basis so that in fact, everything is orthonormal. So there's no question like that.

So let's explain a little what's happening with this thing. Why do we put this like that? Or why can we get away with this?

And the reason is the following. Let's consider Jm, J squared, Jm. If I use this, J squared on this is this number. And Jm with itself will be 1.

And therefore I'll put here h-- I'm sorry. This should be an h squared. J has units of angular momentum. h squared, J times J plus 1.

And I'm assuming that this will be discretized, so I don't have to put the delta function normalization. At any rate, this thing is equal to this. And moreover, it's equal to the following: Jm, sum over i of Ji Ji, Jm.

But since each Ji is Hermitian, this is nothing but the sum over i of the norm squared of the state Ji acting on Jm. Because this ket times the bra, with Ji Hermitian, is the norm squared. So this is greater than or equal to 0.

Perhaps no surprise-- this is an operator which is the sum of squares of Hermitian operators. And therefore it should be like that.

Now, given that, we have the following-- oops-- the following fact that L times L plus-- no. J times J plus 1 must be greater or equal than 0.

J times J plus 1 must be greater or equal than 0. Well, plot it as a function of J. It vanishes at 0.

J times J plus 1 vanishes at 0, and vanishes at minus 1. It's a function like this. The function J times J plus 1.

And this shows that all you need is this thing to be positive. So to represent all the states that have J times J plus 1 positive, I could label them with J's that are positive. Or J's that are smaller than minus 1.

So either way, I can label those states uniquely. So if I get J times J plus 1 equals 3, it may correspond to one J or to some other J. But I will have just 1 state, so I will choose the J positive.

So given that J times J plus 1 is positive, I can label states with J positive, or 0. So it allows you to do this. Whatever value of this quantity that is positive corresponds to some J positive that you can put in here. A unique J positive. So this is a fine parametrization of the problem.
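In other words, given any eigenvalue lambda = j(j+1) greater than or equal to 0, there is a unique j greater than or equal to 0 that produces it: the positive root of the quadratic. A tiny sketch of the inversion (the function name is mine):

```python
import math

def j_from_eigenvalue(lam):
    # invert lam = j (j + 1), choosing the root with j >= 0
    return (-1 + math.sqrt(1 + 4 * lam)) / 2

assert j_from_eigenvalue(0.0) == 0.0                  # j = 0
assert abs(j_from_eigenvalue(0.75) - 0.5) < 1e-12     # j = 1/2
assert abs(j_from_eigenvalue(2.0) - 1.0) < 1e-12      # j = 1
```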

OK. Now what's next? Next, we have to understand what the J plus operators and J minus operators do to the states.

So, first thing is that J plus and J minus commute with J squared. That should not be a surprise. J1 and J2 commute. Every J commutes with J squared. So J plus and J minus commute with J squared.

What this means in words is that J plus and J minus do not change the eigenvalue of J squared on a state. That is, if I would have J squared on J plus or minus on Jm-- since I can move the J squared up across the J plus, minus-- it hits here.

Then I have J plus minus, J squared, Jm. And that's therefore h squared, J times J plus 1, times J plus minus on Jm.

So this state is also a state with the same value of J squared. Therefore, it must have the same value of J. In other words, this state J plus minus of Jm must be proportional to a state with J and maybe some different value of m, but the same value of J.

J cannot have changed. J must be the same. Then we have to see who changes m, or how J plus minus changes m.

So here comes a small calculation. You want to see what the m value of this thing is. So you have J plus minus on Jm. And you act on it with Jz, to see what it is.

And then you put, well, the commutator first: Jz, J plus minus, plus J plus minus, Jz, on the state. The commutator-- you've calculated it before-- Jz with J plus minus is plus minus h bar, J plus minus.

And in the second term, Jz already acts, so that gives plus h bar m times J plus minus on Jm. So we can get the J plus minus out, and this is h bar, m plus minus 1, times J plus minus, Jm.

So look what you got. Jz acting on this state is h bar, m plus minus 1, times this same state. So this state has Jz eigenvalue either m plus 1 or m minus 1. Something that we can write.

Clearly-- oops-- in this way, we'll say that J plus minus, Jm-- we know already it's a state with J and m plus minus 1. So it raises m.

Just like we said that the a's and a daggers raise or lower the number, J plus and J minus raise and lower the Jz eigenvalue. Therefore, this state is proportional to that state. But there's a constant of proportionality that we have to figure out.

And we'll call it the constant C, Jm. To be calculated. So the way to calculate this constant-- and that will bring us almost pretty close to what we need-- is to take inner products.

So we must take the dagger of this equation. So take the dagger, and you get Jm with the adjoint, J minus plus. And hit it with this equation.

So you'll have here-- well, maybe I'll write it. The dagger of this equation would be C plus minus star of Jm, times the bra J, m plus minus 1.

And now, sandwich this with that. So you have Jm, J minus plus, J plus minus, Jm equals to norm of C plus minus Jm.

And then you have this state times this state, but that's 1. Because it's J, m plus minus 1 with J, m plus minus 1. So this is an orthonormal basis. So we have just 1, and I don't have to write more.

Well the left hand side can be calculated. We have still that formula here. So let's calculate it.

The left hand side, I'll write it like this. I will have C plus minus, Jm squared, which is equal to the norm squared of J plus minus, Jm. It's equal to what?

Whatever this is, where you substitute that for this formula. So you'll put here Jm. And you'll have-- well, I want actually the formula I just erased. Because I actually would prefer to have J squared.

So I would have: this is equal to Jm, J squared minus J3 squared minus plus h bar J3, Jm. So let's see-- I have a J plus minus on the left, so the h bar J3 term comes with the opposite, minus plus, sign.

So it should be J squared, minus J3 squared, minus plus h bar J3, sandwiched between Jm and Jm. So this is equal to h bar squared, J times J plus 1, minus m squared, minus plus m.

I think I have it correct.

J squared gives h squared, J times J plus 1. J3 squared gives the m squared. And the minus plus h bar J3 correctly gives the minus plus m.

So this is h squared, J times J plus 1, minus m times m plus minus 1.

OK. So the C's have been found, and you can take their square roots. In fact, we can just take the square roots, because these things had better be non-negative numbers-- they're norms squared.

So whenever we do this, these things had better be non-negative, being the norm squared of some state. And therefore C plus minus of Jm can simply be taken to be h bar, square root of J times J plus 1, minus m, times m plus minus 1.

And it's because of this m times m plus minus 1 that it was convenient to write the eigenvalue of J squared as J times J plus 1. So that we can compare J's and m's better. Otherwise it would have been pretty disastrous.
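The coefficients can be checked numerically. Here is a small sketch-- my own, not from the lecture-- in units where h bar is 1: it builds Jz and J plus minus as matrices whose entries are exactly the C plus minus of Jm above, and confirms that the resulting operators close into the angular momentum algebra for several values of J.

```python
import numpy as np

hbar = 1.0  # work in units where h bar = 1

def angular_momentum_matrices(j):
    """Jz, J+, J- in the |j, m> basis, m = j, j-1, ..., -j, with
    matrix elements C±(j, m) = hbar * sqrt(j(j+1) - m(m±1))."""
    dim = int(round(2 * j)) + 1
    ms = [j - k for k in range(dim)]              # m values, top to bottom
    Jz = hbar * np.diag(ms)
    Jp = np.zeros((dim, dim))
    for k, m in enumerate(ms):
        if m < j:  # J+ |j, m> = C+(j, m) |j, m+1>
            Jp[k - 1, k] = hbar * np.sqrt(j * (j + 1) - m * (m + 1))
    return Jz, Jp, Jp.T                           # J- is the dagger of J+

# Consistency check: these matrices satisfy the angular momentum algebra.
for j in (0.5, 1.0, 1.5, 2.0):
    Jz, Jp, Jm = angular_momentum_matrices(j)
    assert np.allclose(Jz @ Jp - Jp @ Jz, hbar * Jp)      # [Jz, J+] = +hbar J+
    assert np.allclose(Jp @ Jm - Jm @ Jp, 2 * hbar * Jz)  # [J+, J-] = 2 hbar Jz
    J2 = Jm @ Jp + Jz @ Jz + hbar * Jz                    # J^2 = J-J+ + Jz^2 + hbar Jz
    assert np.allclose(J2, hbar**2 * j * (j + 1) * np.eye(Jz.shape[0]))
```

The asserts pass for every j listed, which is exactly the statement that the square roots above are the right constants of proportionality.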

So, OK, we're almost done now with the calculation of the spectrum. You might say, well, we seem to be getting nowhere. We learned all these properties of the states, and now we're just manipulating the states.

But the main point is that these norms must be non-negative. And that will give us the whole condition.

So, for example, we need, first, that the states J plus on Jm have non-negative norm squared. So for the plus sign, you should have J times J plus 1, minus m, times m plus 1, be greater than or equal to 0. Or m times m plus 1 less than or equal to J times J plus 1.

The best way, to my mind, to solve these kinds of things is to just plot them. So here is m. And here is m times m plus 1.

So you plot this function, and you want it to be less than or equal to some value, J times J plus 1. So here's J times J plus 1, some fixed value.

This function is 0 at m equals 0 and at m equals minus 1. So it's a parabola through those points. And there are 2 values of m at which it becomes equal to J times J plus 1.

One is clearly m equals J. When m is equal to J, it saturates the inequality. And the other one is m equals minus J minus 1. If m is minus J minus 1, you will have minus J minus 1 here, times minus J, which is again J times J plus 1. So, in order for these states to be good, the value of m must be between minus J minus 1 and J.

Then the other case: J minus on Jm. If you produce those states, they also must have non-negative norms. So J times J plus 1, minus m, times m minus 1 this time, must be greater than or equal to 0.

So m times m minus 1 must be less than or equal to J times J plus 1. And again, we try to do it geometrically. So here it is.

Here is m. And you plot m times m minus 1, and ask where it equals the fixed value J times J plus 1.

This function is 0 at m equals 0 and at m equals 1, so it's some parabola through those points. So you think in terms of m's: how far can they go?

It hits the value J times J plus 1 at m equals J plus 1, and at m equals minus J. Those are the two places where you get the saturation.

So m can run in this range. m can be less than or equal to J plus 1, and greater than or equal to minus J.

But these 2 inequalities must hold at the same time. You cannot allow either one to fail for any state. So if both must hold at the same time for any state, you get constrained.

For the upper range, m less than or equal to J is the stronger condition. For the lower range, m greater than or equal to minus J is the stronger condition. So m must lie between minus J and J for both to hold.

Now look what happens. Funny things happen. It's reasonable that the strongest upper bound comes from this equation, because J plus increases m. So at some point you run into trouble if you increase m too much.

How much can you increase it? You cannot go beyond J, and that makes sense. In some sense, your intuition should be that J measures the length of the angular momentum vector, and m is its z component.

So m should not go beyond J. And that's reasonable here. And in fact, when m is equal to J, this whole thing vanishes.

So if you reach the state with m equal to J-- only for m equal to J-- you get 0, and you cannot raise the state anymore. So actually, you see, if you choose some J over here, we need a few things to happen.

You choose some J, and some m. Well, you're going to be shifting the m's. And if you keep applying J pluses, eventually you will go beyond this point.

The only way not to go beyond this point is if m reaches the value J. Because if m reaches the value J, the state is killed. So m should reach the value J over here at some stage.

So you fix J, and you try to think what m can be. And m has to reach the value J. So whatever m is, you add 1. You add 1. You add 1.

And eventually you must reach the value J exactly, so that you don't produce another state that is higher.

If you stop short of it, that state is not killed. This coefficient is not equal to 0. You produce a state, and it's a bad state, of negative norm squared. So you must reach this one.

On the other hand, you can lower things. And if you go below minus J, you produce bad states. So when you decrease m, you must also reach this point, minus J, exactly.

Because if you didn't, and you stopped, say, half a unit away from it, the next state that you produce is bad. And that can't be. So you must reach this one too.

And that's the key logical part of the argument: the distance from J to minus J, which is 2J, must be equal to some integer. That's the key thing that must happen, because you must reach this point and you must reach that one, and m varies only by integers.

So the distance between J and minus J, which is 2J, must be an integer. And you've discovered something remarkable by getting to that point. Because if 2J has to be an integer, it may be 0, 1, 2, 3, and so on. And then J is equal to 0, 1/2, 1, 3/2, and so on.

And you get all these spins: particles without spin having spin 0, particles with spin 1/2, particles of spin 1-- or angular momentum 1, orbital angular momentum 1.

Now if you have 2J being an integer, the values of m go from J, to J minus 1, down to minus J. And there are 2J plus 1 values.

And that, in fact, is the main result of the theory of angular momentum: the allowed values of the angular momentum are 0, 1/2, 1, 3/2, and so on. So for J equals 0, there's just one state, m equal to 0.

For J equal to 1/2, there are two states: one with m equals 1/2, and one with m equals minus 1/2.

For J equals 1, there are three states: m equals 1, 0, and minus 1. And so on. OK.
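As a quick sanity check of the 2J plus 1 counting, here is a minimal sketch-- my own, not from the lecture-- that enumerates the m values of each multiplet, using exact fractions so half-integer spins come out cleanly.

```python
from fractions import Fraction

def multiplet(j):
    """The m values of the spin-j multiplet: j, j-1, ..., -j."""
    return [j - k for k in range(int(2 * j) + 1)]

assert multiplet(Fraction(0)) == [0]                              # one state
assert multiplet(Fraction(1, 2)) == [Fraction(1, 2), Fraction(-1, 2)]
assert multiplet(Fraction(1)) == [1, 0, -1]                       # three states
for two_j in range(0, 9):
    j = Fraction(two_j, 2)
    states = multiplet(j)
    assert len(states) == two_j + 1                               # 2j + 1 states
    assert states[-1] == -j                                       # ends at m = -j
```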

This is a great result. Let me give you an application in the last 10 minutes. It's a remarkable application.

Now actually, you might ask, what vector space were we talking about? And the punchline here is that the vector space was infinite dimensional, and it breaks down into states with J equals 0, states with J equals 1/2, states with J equals 1, states with J equals 3/2.

All these things are possibilities. They can all be present in your vector space. Maybe some are present. Some are not. That is part of figuring out what's going on.

When we do central potentials, 0, 1, 2, and so on will be present in the angular momentum theory. When we do spins, we have 1/2. And when we do other things, we can get some funny things as well.

So let's do a case where you get something funny: the 2D simple harmonic oscillator. You have ax and ay, and ax dagger and ay dagger. And this should seem very strange.

Why are we talking about 2 dimensional oscillators after talking about 3 dimensional angular momentum and all that? It doesn't seem to make any sense. Well, what's going to happen now is something more magical than a magician pulling a bunny out of a hat.

Out of this problem, an angular momentum-- a 3 dimensional angular momentum-- is going to pop out. No reason whatsoever it should be there at first sight. But it's there.

And it's an abstract angular momentum, but it's a full angular momentum. Let's see. Let's look at the spectrum.

Ground state. Then the first excited states, ax dagger and ay dagger on the vacuum. The oscillator is isotropic, so these are 2 states degenerate in energy.

Next level: ax dagger ax dagger, ax dagger ay dagger, ay dagger ay dagger. 3 states, degenerate.

And at level n, you go from ax dagger to the n, down to ax dagger to the 0 with ay dagger to the n. That's n a daggers shared between x and y in all possible ways, so n plus 1 states.

So: 1 state, 2 states, 3 states, and so on. And you come here and say, that's strange. 1 state, 2 states, 3 states, 4 states-- does that have anything to do with it?
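The n plus 1 counting can be made concrete with a tiny enumeration of the level-n states, labeled by how many x and y quanta they carry. The sketch and its names are mine.

```python
def level_states(n):
    """All states at level n of the 2D oscillator:
    (ax†)^k (ay†)^(n-k) |0>, for k = 0, ..., n."""
    return [(k, n - k) for k in range(n + 1)]

for n in range(6):
    assert len(level_states(n)) == n + 1   # degeneracy grows as 1, 2, 3, ...
assert level_states(2) == [(0, 2), (1, 1), (2, 0)]
```

The degeneracies 1, 2, 3, 4, ... are exactly the dimensions 2J plus 1 of the multiplets just derived, which is the coincidence the lecture is about to explain.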

Well, the surprise is it has something to do with it. Let's think about it.

Well, the first thing is to use the aR and aL oscillators. a right was 1 over square root of 2, ax minus i ay. And a left was 1 over square root of 2, ax plus i ay.

And we had number operators: N right, which was a right dagger a right, and N left, which was a left dagger a left. And they don't mix a lefts and a rights.

And now we can build the states the following way. The vacuum. Then a right dagger on the vacuum, and a left dagger on the vacuum.

Then a right dagger squared on the vacuum, a right dagger a left dagger on the vacuum, and a left dagger squared on the vacuum.

Up to a right dagger to the n on the vacuum, through a left dagger to the n on the vacuum. And this is completely analogous to what we had.

Now here comes the real thing. You did compute the angular momentum in the z direction, Lz.

And you could compute it: x py minus y px. And this was all legal. And the answer was h bar, N right minus N left.

That was the Lz component of angular momentum. So let's see what Lz those states have. The ground state has no right or left quanta, so it has Lz equal 0.

This state, a right dagger on the vacuum, has Lz equal h bar. And this one has minus h bar. OK. h bar and minus h bar.

That doesn't quite seem to fit here, because a doublet should have z component of angular momentum 1/2 of h bar, and minus 1/2 of h bar. Something went wrong.

OK. You go on. You say, well, what is Lz at the next level?

Lz at the first level was h bar and minus h bar. Here it is 2 h bar, 0, and minus 2 h bar. And you look there and say, no, that's not quite right either.

If you would say these 3 states correspond to angular momentum 1, they should have Lz equal plus h bar, 0, and minus h bar. So it's not right.

OK. Well, maybe one other thing can help us make sense of this. If we had an L plus, it should be the kind of thing that annihilates the top state.

Remember, L plus, or J plus, kept increasing m, so it should annihilate the top state. And I can try to devise something that annihilates the top state. It would be something like aR dagger times a left.

Why? Because if aR dagger a left goes to the top state-- the top state has no a left daggers, so the a left just zooms in, hits the vacuum, and kills it.

So actually I do have something like an L plus. And its dagger-- which would be something like an L minus-- would be aL dagger times a right. And this one should annihilate the bottom one.

And it does. Because the bottom state has no aR daggers, and therefore the aR comes in, hits the vacuum, and kills it.

So we seem to have more or less everything, but nothing is quite working. So we have to take one last conceptual step. You see, this system is moving in a plane.

There's no 3 dimensional angular momentum; you'd be fooling yourself with this. But what can exist is an abstract angular momentum.

And for that, it's time to change the letter from L to J. That means some kind of abstract angular momentum.

And I'll put a 1/2 here, now, as a definition: Jz is 1/2 of Lz. Well, then things may look good. Because this doublet now has Jz equal to 1/2 of h bar, and minus 1/2 of h bar.

And that fits with these 2 states. And with the 1/2, the other levels work too. The Jz here now becomes h bar, 0, minus h bar, and it looks right.

And now, with the 1/2 in place, you try to make these things into a J plus and a J minus: you put a number in front of aR dagger a left, and a number in front of aL dagger a right. If you enforce that the algebra be the algebra of angular momentum, consistent with this 1/2 in Jz, the numbers are fixed: J plus is h bar aR dagger a left, and J minus is h bar aL dagger a right.

But now we claim that in this 2 dimensional oscillator there is some sort of Jx, Jy, Jz, where Jz is 1/2 of Lz. And those have come out of thin air.

But they form an algebra of angular momentum. And what have we learned today? If you have an algebra of angular momentum, the states must organize themselves into representations of angular momentum. So the whole spectrum of the 2 dimensional harmonic oscillator in fact contains all spin representations.
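This abstract angular momentum can be exhibited explicitly on each level. Assuming, as above, J plus equals h bar aR dagger aL and Jz equals h bar over 2 times N right minus N left, the following sketch (mine, in units h bar = 1) builds their matrices on the level-n subspace and checks that they form an exact spin n/2 representation.

```python
import numpy as np

def schwinger_ops(n):
    """J+, J-, Jz on the level-n subspace of the 2D oscillator,
    basis |nR, nL> with nR = n, n-1, ..., 0 (so nL = n - nR).
    J+ = aR† aL, J- = aL† aR, Jz = (NR - NL)/2, in units hbar = 1."""
    dim = n + 1
    Jp = np.zeros((dim, dim))
    Jz = np.zeros((dim, dim))
    for k in range(dim):
        nR, nL = n - k, k
        Jz[k, k] = 0.5 * (nR - nL)
        if nL > 0:  # aR† aL |nR, nL> = sqrt((nR+1) nL) |nR+1, nL-1>
            Jp[k - 1, k] = np.sqrt((nR + 1) * nL)
    return Jp, Jp.T, Jz

for n in range(1, 6):
    Jp, Jm, Jz = schwinger_ops(n)
    assert np.allclose(Jz @ Jp - Jp @ Jz, Jp)       # [Jz, J+] = +J+
    assert np.allclose(Jp @ Jm - Jm @ Jp, 2 * Jz)   # [J+, J-] = 2 Jz
    j = n / 2                                       # level n carries spin n/2
    J2 = Jm @ Jp + Jz @ Jz + Jz
    assert np.allclose(J2, j * (j + 1) * np.eye(n + 1))
```

Note also that the Hamiltonian is proportional to N right plus N left, which is just n times the identity on each level, so these Ji manifestly commute with it there.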

J equals 0. J equals 1/2. J equals 1. J equals 3/2. And so on-- all of them.

So the best example of all the representations of angular momentum is in the states of the 2 dimensional simple harmonic oscillator. It's an abstract angular momentum, but it's very useful. The one step I didn't do here for you is to check that all of these Ji commute with the Hamiltonian.

It's a simple calculation to do. In fact, the Hamiltonian is proportional to N right plus N left, and you can check it. Since the Ji commute with it, these operators act on states and don't change the energy.

And they're a symmetry of the problem. So that's why the states fell into representations. This is our first example of a hidden symmetry: a problem where there was no reason a priori to expect an angular momentum to exist, but it's there, and it helps explain the degeneracies.

These degeneracies you could have said were accidental. But once you know they have to fall into angular momentum representations, you have great control over them. You couldn't have found a different number of degenerate states at any level here.

This was in fact discovered by Julian Schwinger, in a very famous paper. And it is a classic example of angular momentum.

All right. That's it for today. See you on Wednesday. If you come, I'll be here.