Description: In this lecture, the professor discussed resonant scattering.
Instructor: Wolfgang Ketterle
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: Good afternoon, everybody. Let me just remind you where we are. We're describing interaction between the electromagnetic field and atoms. And we had formulated an exact approach using the time evolution operators and diagrams. And well, I think we understood what it means when atoms in the ground state emit photons which are virtually absorbed and all that. So we figured out what is really inside this formalism and what are the processes.
What we want to continue discussing today is one problem which you often have with such approaches. And this is the problem of resonance. If you have a perturbative treatment, even if you carry it to infinite order, you have, formally, divergences if you have resonant interaction. Because the ground state plus the photon has exactly the same energy as the excited state. And that means if you write down the perturbative expansion, you have a 0 in the denominator. You have a divergence.
And I reminded you that in a phenomenological way, you've seen that this problem can be "fixed" by adding an imaginary part to the energy level-- just saying, well, the excited state couples by spontaneous emission to the radiation field. And the level has [INAUDIBLE]. Well, but you know, putting an imaginary part into Schrodinger's equation means it's no longer unitary time evolution. It has its problems.
But anyway, we want to now look deeper into it. I want to show you what the tools are to treat those infinities, those divergences, in a consistent and systematic way. And one hint for how we have to do it comes by simply taking this energy denominator and expanding it in gamma-- simply a Taylor expansion in gamma. And then, we realize gamma is usually calculated in second order via Fermi's golden rule.
But since you have here all orders n, that tells us that doing something here probably means infinite orders in a perturbation series. And that's what I want to show you today. I want to show you that I can go beyond this result. But I can reproduce this result by going to infinite order in perturbation theory. And it means to sum up an infinite number of diagrams.
How many of you have actually seen those kinds of diagrammatic tricks and summations? A few, OK. So it's maybe nice to see it again. But for those who haven't seen it, welcome to the magic of diagrams. I learned it from those examples. And I really like it. It's a very elegant way to combine equations with graphical manipulations. So that's our agenda for at least the first part of today.
And OK, we want to understand the time evolution of this system. Our tool is the time evolution operator. And at the end of the class on Monday, I told you, well, let's simplify things. Let's get rid of those temporal integrations and multiple integrals by simply doing a Fourier transform. Because a Fourier transform turns a convolution integral into a product.
And so therefore, we introduced the Fourier transform, or the Laplace transform, of the time evolution operator. And this iterative equation where we get the nth order by plugging the n minus first order on the right hand side, this iterative equation turns now into a simpler algebraic iterative equation for the Fourier transform. So this is now the starting point for our discussion today. We want to calculate the Fourier transform of the time evolution operator to infinite orders.
So let me-- well, of course, any questions before I continue? So unfortunately, [INAUDIBLE]. We should copy this equation. Because we need it. So OK, we want to take this equation. And now we want to iterate it.
So the resolvent G in zeroth order is G0. Now plug G0 into the right hand side of the equation, and you get the first order, which is G0VG0. Now plug that back into the iterative equation, and you get the second order, G0VG0VG0. I think you've got the idea.
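For those who like to see numbers, this iteration can be sketched with a small toy model. Everything below is made up-- a hypothetical two-level H0 and a small coupling V-- and the complex energy Z is kept off the real axis so nothing diverges; the only point is that the iterated series converges to the full resolvent:

```python
import numpy as np

# Toy check of the iteration G_n = G0 + G0 V G_{n-1} (all numbers made up).
z = 2.0 + 0.5j                      # complex energy, kept off the real axis
H0 = np.diag([0.0, 1.0])            # unperturbed energies
V = np.array([[0.0, 0.3],
              [0.3, 0.0]])          # coupling between the two levels

G0 = np.linalg.inv(z * np.eye(2) - H0)

G = G0.copy()
for _ in range(200):                # builds G0 + G0VG0 + G0VG0VG0 + ...
    G = G0 + G0 @ V @ G

G_exact = np.linalg.inv(z * np.eye(2) - H0 - V)
print(np.allclose(G, G_exact))      # the operator geometric series sums to 1/(z - H)
```

The iteration converges whenever the norm of G0V is below 1, which is exactly the statement that this operator geometric series can be summed.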
It's almost like a geometric series-- well, a geometric series with operators. But this is already sort of a hint. A geometric series can be summed up to infinity rather easily. So we have to introduce now the eigenfunctions of the unperturbed operator. So these are, let's say, the ground state, excited state, next excited state of your favorite atom. And they have eigenenergies Ek.
So if you are now expressing the equation above in the basis kl-- in other words, this is an operator, and we want to know the matrix element between eigenfunctions k and l. The other thing we need is G0. Remember the Fourier transform-- and I gave you sort of a mini derivation here-- is just 1 over energy minus H. And if you apply that now to H0, then G0 is given by 1 over Z minus H0.
So therefore, if you write now this equation in matrix elements, the first part, the G0, gives us 1 over Z minus Ek. And since we are diagonal in H0, it's delta kl. And well, now you see the structure if you sum over intermediate states, or if you introduce intermediate states. So this is how we have to write it.
So this is just writing it down in the basis functions of the unperturbed operator H0. But now we can formulate the problem we are encountering and want to solve. Namely, we have the problem with one state. Our problem is the excited state b. And we have a resonant excitation from the ground state a.
So this is a discrete eigenstate of the unperturbed Hamiltonian with energy Eb. And therefore, we have terms, and actually divergent terms, which are 1 over Z minus Eb. And just to make the connection, this is the notation of the book: Z is in the complex plane and can have imaginary values. Sometimes, scattering and evolution equations are better formulated when you do it in the complex plane.
It doesn't really matter for us here. Just remember, Z is the energy. And it is the initial energy. And if the initial energy is that of the ground state plus a resonant photon, we have a problem. Because the denominator is 0.
So in other words, for resonant excitation, we are interested in the case that Z is close to Eb. OK, so what we want to do now is-- and this is the basic idea. By looking at this formally exact calculation, we say the difficult parts are those where we have this energy denominator, which vanishes. The other parts are simple.
They don't have any divergences. They are not resonant and such. So what we want to do is now, in some way, we want to sort of give special treatment, factor out the problematic terms. And the rest is easy.
And for the easy part, which has no divergence, we can make any kind of approximation we want without altering the physics. But the resonant part, this needs special attention. Because if I treat it literally in those expressions, they don't make sense mathematically. Because they cause infinities.
I could continue to do it completely algebraically. But because I think it's just a beautiful method, I want to look at this equation and write it down in symbols. So we want to arrive at a diagrammatic representation for this matrix element of the resolvent, Gbb.
And I use symbols where the circle means the interaction. The straight line stands for the term which is problematic-- the resonant term, which has the divergent denominator. And then, I'll use a dashed line for all other terms, all other intermediate states, where i is not b, and therefore there is no problem. It's not resonant. We don't have a divergence.
So I'm just sort of-- you see the structure of the sum. We sort of propagate here. We make a transition from l to k. And then, we propagate here. So in this kind of order, you start in this state, you have a vertex, you go to the other state, you have a vertex, you go to the next state, you have a vertex. This is how the system works. This is sort of what quantum mechanics does for you.
And what we're going to do is, this is an algebraic equation. And now we want to order them in the following way. We want to figure out which of those expressions include this problematic term exactly twice or three times or four times. So we regroup those infinite sums, these algebraic terms, in such a way that we say, OK, which one has the occurrence of this once, twice, three times, four times? So we regroup the terms, and then we see what we can do.
Now, I've picked the matrix element Gbb. So that means over here and over here we start out in the state b. So if I write down all terms which contain this resonant term twice, well, we start with a resonant term. And we have to end with a resonant term, because we have the matrix element Gbb, which I'm focusing now on.
And so there can be one vertex. We can start with this term. And now we can do something else in between. But we are not allowed to go back to the state b. Because we are only looking at terms where the solid line appears twice. So therefore, this can only be a dashed line, another state.
We can go through two vertices. But it can only include dashed lines in between. So let me use another symbol for it. So we start with that state. We end with a state b. But in between, we can sort of once, twice, three times go through other intermediate states. But we never are allowed to create another divergence. And this infinite sum over all other states-- I don't know how to calculate it yet. But I'll just call it a square box.
So in other words, what is in lowest order the operator V, together with these other infinitely many terms-- I symbolize this much more complicated vertex, which includes an infinite sum, by a square and not by a circle. So diagrammatically-- feel free to write it down mathematically. It's obvious how to do it. The square box is the circle plus all terms like this.
Let's go back to equations. We call this the function Rb of Z. And what I've shown in diagrams above is nothing else than the circle Vbb, the matrix element of the interaction.
But now we have sums where we're not allowed to go through the resonant state. We have to go from b to an intermediate state. We propagate in the intermediate state. And then, we have to go back. And it's clear how to go to higher terms.
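This structure can be checked numerically in a sketch with a made-up three-level model (all numbers hypothetical; state b is index 0, the dashed lines are the other two states). The function Rb is the direct vertex Vbb plus all excursions that propagate only through the other states-- and precisely because those excursions never touch b, Rb stays finite right at z equals Eb, where 1/(z minus Eb) blows up. As a preview of where this regrouping is heading, the exact Gbb also equals 1 over z minus Eb minus Rb of z:

```python
import numpy as np

# Made-up three-level model: state b is index 0; the "dashed lines" are 1 and 2.
Eb = 0.0
H0 = np.diag([Eb, 1.0, 2.0])
V = 0.1 * np.array([[0.0, 1.0, 1.0],
                    [1.0, 0.0, 1.0],
                    [1.0, 1.0, 0.0]])
H = H0 + V

def Rb(z):
    """Vbb plus all paths that leave b and propagate only through states 1 and 2."""
    inv_sub = np.linalg.inv(z * np.eye(2) - H[1:, 1:])   # never touches state b
    return V[0, 0] + V[0, 1:] @ inv_sub @ V[1:, 0]

# Right at z = Eb, where 1/(z - Eb) diverges, Rb is perfectly finite:
print(abs(Rb(Eb)) < 1.0)

# And the resummed form reproduces the exact resolvent matrix element Gbb:
z = 0.3 + 0.2j
Gbb = np.linalg.inv(z * np.eye(3) - H)[0, 0]
print(abs(Gbb - 1.0 / (z - Eb - Rb(z))) < 1e-12)
```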
So we've just defined this function R by focusing on two occurrences of the straight line. Well, let's look at higher order terms. What happens when Z minus Eb, the divergent term, appears to the power n? Well, we just dealt with n equals 2. Let's now look at n equals 3. Colin.
AUDIENCE: I'm trying to find the definition. Is the box a T-matrix or the S-matrix? What's the definition of the [INAUDIBLE]. The S is--
PROFESSOR: No, the T-matrix is actually the matrix, the relevant matrix, of the time evolution operator. And if you factor out the delta function for the energy shell, you get the S-matrix. But we're not talking about a time evolution operator here. We're talking about the Fourier transform. And we're looking at the resolvent, which is the function G. And now, we've introduced a function R.
I actually tried to look up-- I wanted to use the correct word in class. I couldn't find the name. In the book, it's just called the function R. So it is the function R. And the function R turns out to be the kernel of the resolvent G. But none of it is the S-matrix and T-matrix.
It is related. Because if you do the inverse Fourier transform from G, we go back to U. And then, we have, I would say, the T-matrix.
AUDIENCE: This is the self energy?
PROFESSOR: We will find that R has a real and an imaginary part. One is a self energy, and the other one, the imaginary part, is a decay rate. But the real part will be the self energy. Yes, we connect a lot of buzzwords you may have heard here and there.
OK, the question is, do we have to now define pentagons and hexagons? Do we have to find more and more symbols for more and more complicated sums? But the nice thing is no. Because n equals 3 means we have to start in state b. We have to end in state b.
And one time in the time evolution, we can go through state b. And now, between that, we can go from here to there with any combination of states you want. But one thing is not allowed-- to involve the state b. Because we are focusing on three occurrences of the state b.
And everything else other than the state b has already a symbol. It is the square symbol. So this is the exact representation for n equals 3. And the contribution to the resolvent G, the Fourier transform of the time evolution operator, is-- well, we have factored out three occurrences of the state b. And between them, we need two square boxes. But the square box has already a name and an algebraic definition. So this is nothing else than the kernel Rb squared.
OK, I've shown you n equals 2. I've shown you n equals 3. I assume it's now absolutely clear how to continue. It just involves more and more powers. So the lowest order is that. And whenever we ask what happens when we allow more appearances of the state b, for each of them, we obtain another square box.
So by looking at the terms which are bothersome and regrouping the infinite terms according to one occurrence, two occurrences, three occurrences of this divergent denominator, we have now found an exact expression for Gb of Z. And since this is now an algebraic equation with a geometric series, we can sum it exactly: Gb of Z is 1 over Z minus Eb minus Rb of Z.
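As a scalar sketch with made-up numbers: each occurrence of the state b contributes a factor g equals 1 over z minus Eb, each square box a factor Rb, and the regrouped sum is a geometric series in Rb times g that sums to the closed form:

```python
# Made-up numbers: g = 1/(z - Eb) for each occurrence of state b,
# one factor Rb for each square box.  The regrouped sum is geometric.
Eb = 1.0
z = 1.0 + 0.4j          # kept off the real axis, so nothing diverges
Rb = 0.05 - 0.1j        # hypothetical (constant) kernel value

g = 1.0 / (z - Eb)
series = sum(g * (Rb * g) ** n for n in range(100))   # g + g*Rb*g + g*Rb*g*Rb*g + ...
closed = 1.0 / (z - Eb - Rb)
print(abs(series - closed) < 1e-12)
```

The series converges because the magnitude of Rb times g is below 1 for this z; the closed form on the last line is the exact resummation and is well-defined even where the bare g diverges.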
Well, like with every exact result, you have to ask, what is the use of it? Because we started out with U, which we couldn't calculate. With the Fourier transform, we had G, which we couldn't calculate. And now we've expressed G in R, which of course we cannot calculate exactly either. But there is an importance. We have made progress for the following reasons.
Namely, the first is that those resonant terms, which appear in the time evolution whenever the system goes back to the state b, are now fully accounted for. We have sort of factored those terms out. We've given them special treatment. And therefore, and this is the main result, the expression which is now the non-trivial one, the kernel function, has no divergences.
And therefore, because there is no critical part to it, rather simple approximations can be made and lead to physically meaningful results. That's an idea you may see often in physics. You have a theory where there is something complicated, a non-perturbative divergence. But you just rewrite the theory, transform the equations in such a way that the structure of the equations now accounts for the physics behind it. And the part which has to be calculated can now be calculated with crude approximations. And you still get the correct physics out of it.
The structure of the equation accounts for the physics. And the numerical part has become very harmless. But what it does is even if you do now a lowest order approximation to the function R, even those very simple approximations correspond to an infinite number of terms in the original expansion.
Or in other words, the message is, you have an expression. And if you do a perturbative expansion-- well, maybe you should do some form of perturbative expansion not to the whole expression. You should do it to some denominator or to some part of the denominator. Because then, the perturbative expansion involves no divergent terms and can be performed.
So therefore, we are now in a position to make approximations to the function R. And the simplest approximation which we can do is to just try to see if we can get away with very low order-- we are not getting into any divergences here. And let me call this now the triangle.
So the circle was the naked interaction. The square would be the exact function if you sum up those interactions to all orders. And the triangle is now, well, the step in between. We hope to get away with a triangle.
So that would mean the following, that the exact formulation involved-- let me get black-- had a propagator. The state b has to go through multiple squares. This is how it propagates.
This would be the exact result after the resummation we have done. And an approximate result is now that the squares are replaced by triangles. So that's pretty neat. But the importance comes now when I pull things together, I want to show you what we have exactly done for treating an atom in the excited state and for treating light scattering.
So in other words, that's our result. We have derived it. I just go now and apply it to an excited atomic state. So the state we are interested in is the atomic state b and no photons. And the properties of the atomic state are obtained when we know the function Gb of Z. Just to tell you what it means: we get back the time evolution when we do an inverse Fourier, or Laplace, transform, which is a contour integral in the complex plane-- a generalization of the Fourier transform.
This would take us then to the time evolution for the atom in state b. So this is now the diagonal matrix element of the T-matrix. And this is the matrix element of the time evolution operator between the state b, 0 photons, and itself.
So we are calculating the Fourier transform of the time evolution of the state b through the resolvent G. And all the work we have done with our diagrams means instead of calculating G, we are calculating the kernel R of Z. And if we use the lowest, second order approach, then diagrammatically-- just give me one second-- a photon is emitted.
Yeah, sorry, so the process we have considered is that we go to second order. We can go through an intermediate state by, let's say, emitting and reabsorbing a virtual photon. That's what the two vertices mean. At the first vertex, the photon appears. At the second vertex, the photon disappears.
So that means now that we, with this approximation, approximate Gb of Z, the Fourier transform, the resolvent, in the following way. We let b just propagate, which is problematic because this has divergences. But we are allowing now the state b to go through intermediate states a. And remember, since this appears in the denominator-- just make a Taylor expansion of G in R-- G contains this process to all orders.
So that propagator, the sort of propagation G, involves now all possibilities that we have to end up in state b again. But we can go through intermediate states a or a prime as often as we want. Or actually, we have summed it up. Our results contain those processes to infinite order.
So the question is, what have we neglected? I mean, that looks like a lot. We allow the state b to emit a photon, go to another state. It's reabsorbed and such. So the question is now, what is neglected?
So what we neglect is actually a lot. In this lowest order approximation for the function R, we approximated R by second order. We have only two vertices. So what we neglect are all processes where we have not just two vertices and one intermediate state, but several intermediate states between two occurrences of the state b, 0. Or diagrammatically, what we have neglected is-- let me just give you an example.
So what we have neglected is we have to always start with a state b, and we have to end with a state b. But now the idea was that-- one, two, let me just go through four vertices. The approximation we have done-- you have to sort of look through that maybe after the lecture and see that that's what we've really done, but trust me-- is that when the system goes away from the state b into an intermediate state a, at the next vertex it has to go back to the state b.
So what we have not included are processes where we go through states a, a prime, a double prime, and only then eventually go back to the state b. Or we have not included processes where we scatter, absorb. This is a state a. But then, we don't go back to b. We go to a prime. Then, we scatter again.
Here we are in state a double prime. And then, we are back in b. So in other words, we have said whenever something happens, and we go away from state b, the next vertex has to go back to state b. This is the nature of the lowest order approximation.
We have included to infinite order all processes where the state b emits a photon, reabsorbs it, emits and reabsorbs it. But the system cannot go sort of two steps away from the state b. It's only one step, and then go back. This is the nature of the approximation.
OK, so our result for this kernel, which describes the state b, is-- let me just go back so you can see it again. I'm writing down for you now the equation for the triangle, which is an interaction V, an intermediate state, and another interaction V. This is nothing else than Fermi's golden rule-- well, with a little twist. We have the initial state. The dipole interaction or the [? p.a ?] interaction takes us to an intermediate state with a photon with [INAUDIBLE] and polarization epsilon. We propagate in the intermediate state Ea.
And now we have to go back. But we have to go back with the same matrix element. So therefore the matrix element is squared. And we have a double sum. We sum over all possible states of the photon. And we can sum over all intermediate atomic states.
Yes, so this is what we have done. This expression has in general a real part and an imaginary part. It's a function of the initial energy E. Let me now interpret it in two ways. But before I do that, are there any questions? Yes.
AUDIENCE: Just mathematically, how do you get an imaginary part out of this? It's all real components, because we have a magnitude squared divided by, presumably, real energies and so on.
PROFESSOR: OK, this now has to do with the fact that we have resonant terms. And what we often do is we add an infinitesimal eta, and then let eta go to 0. It's the same if you have the function 1 over x and look at the real part and imaginary part. It needs a little bit of careful treatment of functions in the complex plane.
Let me actually write it down. Then it becomes clear. But yes, thanks for the question. So I always said, we do a Fourier transform, but it's really a Laplace transform. We would like to do a Fourier transform at real energy. But when you do the Fourier transform, you have to integrate along the real axis over omega.
But the function we integrate, the time evolution operator, has poles in omega. So we can't just Fourier transform. Because we go right through the poles. But what we can do is we can add an imaginary part plus or minus eta. And we can just go around the poles. And then, it becomes mathematically meaningful.
And what we're doing here is-- I'm not really explaining it mathematically-- we have played those tricks here. But I hope it becomes clear if I say what the real and imaginary parts are. So the real part is this matrix element squared in a double sum. But what we use is the principal part of it, which is well defined in the theory of complex functions. It's divergent, but you take a certain symmetric limiting procedure.
So you have to interpret that as-- you have to introduce a limiting procedure to make sure that the divergence cancels out. And for the imaginary part of something which diverges-- if you treat the eta correctly and let the eta go to 0, you will realize that the imaginary part turns into a delta function. So what we get is 2 pi over h bar times the matrix element squared times the delta function.
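The eta trick can be sketched numerically (all numbers made up): the negative imaginary part of 1 over x plus i eta is a Lorentzian of width eta and area pi, and for small eta it acts on a smooth test function exactly like a delta function, picking out the value at x equals 0:

```python
import numpy as np

# -Im 1/(x + i*eta) = eta / (x^2 + eta^2): a Lorentzian that narrows as eta -> 0.
eta = 1e-3
x = np.linspace(-5.0, 5.0, 1_000_001)
dx = x[1] - x[0]
f = np.exp(-x**2)                        # smooth test function, f(0) = 1

lorentzian = (eta / np.pi) / (x**2 + eta**2)   # normalized to unit area
integral = np.sum(f * lorentzian) * dx         # approaches f(0) as eta -> 0

print(abs(integral - 1.0) < 1e-2)        # acts like delta(x): picks out f(0)
```

Shrinking eta further pushes the integral ever closer to f(0); this is the numerical face of the statement that the imaginary part turns into a delta function in the limit.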
So that's something which you have seen. So the imaginary part gets us Fermi's golden rule. And the real part has actually-- remember when we discussed the AC Stark shift. The AC Stark shift has a 1 over [INAUDIBLE] dependence. And you recognize that here. So this is actually nothing else than the AC Stark shift not due to a laser beam, but due to one photon per mode.
Because we started with an atom in an excited state. It can emit a photon in any mode. And this photon is now creating an AC Stark shift. And this is mathematically the expression. And such AC Stark shifts which appear as self energies, as energy shifts created by the state, this is nothing else than the famous Lamb shift. So that's what we get out here.
What we have here is this function R with its real and imaginary part, which depends on the energy E. But remember, we worked so hard with diagrams to make sure that the triangle-- first the square, and then the triangle, and this is what we calculate here-- has no resonant structure at the energy Eb. So therefore, we can neglect the energy dependence of it and simply replace the argument E by the energy we are interested in, namely energies close to Eb.
So in other words, when we had the function, the resolvent Gb of Z, all the non-trivial dependence on energy came from this kernel. But this kernel is now so well-behaved-- there are no resonant terms-- that we can neglect its energy dependence. This actually has a name, which we will encounter later when we discuss the master equation and optical Bloch equations. Neglecting the energy dependence and replacing it by the value at Eb corresponds to the Markov approximation.
The Markov approximation often means that some relaxation time or some response of a system is replaced by a delta function. Well, we've done the same here. Because if you replace some temporal response function by a delta function, that means its Fourier transform becomes constant. And by neglecting the energy dependence, we are now saying everything is constant as a function of energy. And that means in the temporal domain that we have a delta function.
I don't want to go further here. But when we talk about the master equation, we will also make a Markov approximation later on, but then in the temporal domain. And the two are equivalent here.
So what we've got now is: we found a solution for the Fourier transform of the time evolution operator, which initially had a divergence at the energy Eb. And this was the problem we were facing. But by now calculating the function R, we have a correction, which is a radiative shift, which comes from the real part. And we obtained, as promised, the imaginary part, which we can approximate by Fermi's golden rule.
If we now Fourier transform back and obtain the time evolution of this state, it no longer evolves with the energy Eb. It has a shifted energy by this self energy. And this is called the radiative shift.
But in addition, because of the imaginary part, it has now an exponential decay. And you should now-- well, this is what we may have expected. But there are two things you should learn. The first thing is that the exponential decay would be different if we had not made the Markov approximation.
If we had kept the dependence of this imaginary part on energy, the Fourier transform would not have simply given us an exponential. So therefore, the exponential decay involves the approximation that the R function has no energy dependence. And you would say, well, is that really possible? If you have an atom in state b, and it decays, at very, very short times you need Fourier components over large amounts of energy. So maybe for the first femtosecond of the time evolution of an excited state, you need, whatever, a whole X-ray spectrum of energies.
And it's obvious that for the properties of this expression where you sum over all states, something will happen when you go past the normal excitation energy or the ionization energy of the atom. So what you can immediately read from here is that exponential decay is a simple approximation. It works very well. But at very early times, it will break down. Because then, the energy dependence matters.
But the longer you wait-- if you wait a few nanoseconds, the Fourier transform, the relevant part of the Fourier transform, is only a small energy or frequency interval around the resonance energy. And then, the density of states of your photon field is pretty much constant around here. And then, this approximation is excellent.
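The connection between the Markov approximation and exponential decay can be seen in a brute-force numerical sketch (all numbers made up, h bar set to 1): with a constant shift already absorbed into the level energy and a constant gamma, the resolvent is a pure Lorentzian, and transforming it back to the time domain gives an amplitude whose magnitude decays as exp of minus gamma t over 2:

```python
import numpy as np

E0, Gamma = 1.0, 0.2                      # made-up level energy and decay rate (hbar = 1)
E = np.linspace(-1000.0, 1000.0, 1_000_001)
dE = E[1] - E[0]
G = 1.0 / (E - E0 + 1j * Gamma / 2)       # Markov-approximated resolvent: a Lorentzian

def u(t):
    """Inverse transform (1 / 2 pi i) * integral of G(E) exp(-i E t) dE."""
    return np.sum(G * np.exp(-1j * E * t)) * dE / (2j * np.pi)

errors = [abs(abs(u(t)) - np.exp(-Gamma * t / 2)) for t in (1.0, 5.0, 10.0)]
print(max(errors) < 1e-3)                 # |u(t)| = exp(-Gamma t / 2): exponential decay
```

If G had extra structure in E-- a cutoff, an ionization threshold-- the transform would no longer be a pure exponential, which is exactly the early-time breakdown described above.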
So I hope what you have learned from this treatment is, number one, where the exponential decay comes from. Of course you knew that it comes from the coupling to all modes, but the approximation which leads to exponential decay also involves that the density of states and the density of modes is constant-- which is excellent for certain times but which is of course violated at early times.
And finally, if we hadn't done the infinite summation of diagrams, if we had done a perturbative expansion, we would have never obtained exponential decay. We would have obtained some polynomial decay. Questions about that? Yes.
AUDIENCE: What is the polynomial decay?
PROFESSOR: Some power law. 1 minus the time-- as far as I know, it would just involve powers of t. If you do lowest order perturbation theory, instead of getting an exponential decay, you would just get a linear slope. That's perturbation theory. And if you fix it, I think you get quadratic terms. So it's a sum of power laws.
Of course, an exponential function has a Taylor expansion, which is an infinite sum of polynomial terms. And therefore, we need infinite order to get the exponential. So it's not really profound what I'm saying. It's pretty much that an exponential function is non-perturbative. Other questions?
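That last remark can be made concrete with made-up numbers: truncating the Taylor series of the exponential at low order is the analogue of stopping perturbation theory at low order, and at first order the "survival probability" is a straight line that can even go negative:

```python
import math

Gamma, t = 0.2, 10.0                  # made-up rate and time, so Gamma*t = 2
exact = math.exp(-Gamma * t)          # result of the infinite resummation

def partial_sum(n):
    """Taylor series of exp(-Gamma*t) truncated at order n --
    the analogue of stopping the perturbation series at finite order."""
    return sum((-Gamma * t) ** k / math.factorial(k) for k in range(n + 1))

print(partial_sum(1))                 # 1 - Gamma*t = -1.0: linear "decay" gone negative
print(abs(partial_sum(12) - exact) < 1e-5)   # high order recovers the exponential
```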
So let me wrap up this chapter. What we have discussed here is-- I haven't really discussed resonant scattering. I've now focused on the function Gb. I focused on what happens to the state b.
But this is-- and this is what I want to show you now-- the only element we need to discuss resonant scattering. So when we have an atom in the ground state, and a photon comes along, and it takes the atom to the excited state b, then we go back to the same state-- it could also be another state-- by emitting a photon k prime epsilon prime. And the relevant matrix element of the time evolution operator, which is the T-matrix, involves now the matrix elements and, in the denominator, the initial energy minus the intermediate energy.
And the critical part is really-- you can do it mathematically. I just show it here as a summary. The critical part is really the propagation of the state b, which is problematic. But we have now learned, and it transfers exactly to the light scattering problem, that we have to include now radiative shifts and an imaginary part for the decay to the time evolution.
And that means that this diagram here for light scattering has been-- we have added other terms to it. And the other terms are, of course, that when we scatter light off the atom like this, the excited state can sort of emit photons and reabsorb them. And it can do that-- so we go to that state-- it can do that to infinite order.
So in other words, for any problem now which involves the excited state, we replace the zeroth order propagation of the state b. And mathematically, it means we replace this function by the resolvent, which we have calculated by doing an approximation to the kernel R. Questions?
AUDIENCE: Question, [INAUDIBLE]?
AUDIENCE: Is this [INAUDIBLE]?
PROFESSOR: Yeah, OK, what happens is, if you're off-resonant, you don't have a problem. This extra term, delta and gamma-- the radiative shift and the line width-- only matters when the black term is close to 0. If you have a large detuning delta here, then the small shift and the line width don't matter.
So everything we have done by correcting the naked propagation of the state b by the correct propagation with this infinite emission and reabsorption of virtual photons, this is only needed if the denominator is 0. And then, we have to figure out what else happens. And what else happens is obtained in higher order with this non-perturbative treatment. Other questions?
OK, 20 minutes left. So we now change gears. We move on to the optical Bloch equations. But let me give you one summary on this chapter of diagrams. Until maybe 10, 15 years ago here at MIT, we were not teaching this. And I felt often, in discussions with students, that a little bit more of a complete picture behind atom-photon processes is needed. What I summarized could of course cover a whole semester course in QED and how to do calculations. And if you're interested in mathematical rigor, the green book, Atom-Photon Interactions, is pretty rigorous and still very physical.
But on the other hand, for many of you experimentalists, I think you should have sort of this picture behind it: what really happens, what kind of emission processes are responsible for which effect. And at least this is, for me, a take-home message which I hope you can enjoy even without mathematical rigor: that the fact that the excited state has [INAUDIBLE] really comes from an infinite number of absorption, emission, and reabsorption processes. So you should maybe think, when you take an atom to the excited state, that the excited state does more than just absorb the photon, sit there excited, and then emit.
The real nature of this state is that it couples to many, many modes. It emits photons and reabsorbs them. And you can often neglect that in a simple description of your experiment. But if you take certain expressions seriously, they would have divergences without this infinite number of processes which happen. And that's what we discussed.
Of course, yes, there is the whole other regime which I should mention-- and this is when you can completely neglect the coupling to many modes. If you do Rabi oscillations with resonant interaction, you don't need all that. Because then, you're really looking at discrete states. So it really also depends on what you want to describe.
If you do cavity QED with an atom, you have the Jaynes-Cummings model. And you have an exact solution. Then you have a single-mode problem. But here we have discussed what happens if you want to scatter light in free space. And then, you have to deal with the divergence of the excited state.
OK, the next chapter is called Derivation of the Optical Bloch Equations. And yes, what we really need for a number of phenomena in AMO physics, for laser cooling, light forces, and much more, are the optical Bloch equations. But what I try in this chapter is to give you sort of the fundamental story, the profound insight behind the optical Bloch equations. Because what the optical Bloch equations are is the following.
We want to describe a quantum system. But the quantum system is coupled to the environment. And that means we have dissipation. And this is something which is not easily dealt with in simple quantum physics. Because a simple quantum system undergoes unitary time evolution described by a Hamiltonian.
And unitary time evolution does not allow entropy to increase, does not allow energy to be exchanged. So a lot of things which happen in our experiments and in daily life come about because the system we are looking at follows Schrodinger's equation, but is coupled to a much bigger system.
And so in this section, what I want to address at the most fundamental level is: what is the step where we go from a reversible equation, unitary time evolution, to something which is called relaxation, which is dissipative, where entropy is increased? And the step, of course, is that we go from pure states to statistical operators. We describe the big system, but then we focus on the small system.
And I want to show you, first with a simple example, but next week in more generality, how this completely changes the nature of the description of the small system. The small system, which was just described by Schrodinger's equation, now follows a density matrix equation, has relaxation, its entropy increases, and so on.
So to maybe set the stage, one of the simplest systems we can imagine is the Jaynes-Cummings model: we have an atom in a cavity interacting with one mode of the radiation field. And in this course, in part one, we've dealt with it and looked at vacuum Rabi oscillations and a few really neat things.
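As a reminder of that exact solution, here is a minimal numerical sketch (my own illustration; hbar = 1 and an arbitrary coupling g): restricted to the two states |e,0> and |g,1>, the on-resonance Jaynes-Cummings Hamiltonian produces the vacuum Rabi oscillation P_e(t) = cos^2(g t).

```python
import numpy as np

# On resonance, in the rotating frame, the Jaynes-Cummings dynamics in the
# {|e,0>, |g,1>} subspace reduces to H = g * sigma_x (hbar = 1).
g = 1.0  # vacuum Rabi coupling (arbitrary units)
H = g * np.array([[0.0, 1.0], [1.0, 0.0]])

def propagate(psi0, t):
    # Exact unitary evolution via eigendecomposition of the 2x2 Hamiltonian
    w, v = np.linalg.eigh(H)
    return v @ (np.exp(-1j * w * t) * (v.conj().T @ psi0))

psi0 = np.array([1.0, 0.0], dtype=complex)  # atom excited, cavity in vacuum
for t in [0.0, np.pi / (2 * g), np.pi / g]:
    p_e = abs(propagate(psi0, t)[0]) ** 2
    print(round(p_e, 6))  # 1.0, 0.0, 1.0 -- vacuum Rabi oscillation cos^2(g t)
```

Because only two discrete states are involved, the evolution is perfectly reversible, which is exactly the regime where no master equation is needed.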
But what happens is that the system is an open quantum system. You can have spontaneous emission. And if your mirrors do not have 100% reflectivity, some light leaks out. So in other words, we have coupling to the environment.
Of course, we can say we simply describe it all. We have our atom, plus maybe the one mode of the electromagnetic field. And then, we have the environment, which may consist of photons which are leaking out and photons which are spontaneously emitted. So this is our system. And of course, if you write down the total Hamiltonian and do the time evolution, something will come out which, in general, is very complicated, very entangled.
The atom is entangled: the atom moving to the left side is entangled with a photon which was emitted to the right side. And the recoil of the photons pushes the atom.
So you would have to keep track of all the photons which have been scattered in the lifetime of the atom. You can do that, and you do it by propagating the system with its total Hamiltonian. But often, what we do is we simply put the photons in a trash can. We trash them. We're not interested in what the photons are doing. We're not keeping track of them.
Actually, they hit the wall of our vacuum chamber, and we couldn't even keep track of them. The vacuum chamber has taken care of them. So all that we are interested in is: how do we now describe the atomic system? What, after the time evolution, is now the density matrix of the system?
Of course, if we used the full description-- we know the initial state of the environment, we know the initial state of our system, we propagate it exactly with the correct time evolution-- we would get everything. And then, we could reduce the description by doing a partial trace: what is now the probabilistic description, the density matrix, of the atom?
But this is rather complicated. We want to do the derivation that way, but in the end, we want to have a formulation which simply tells us: what is the atomic density matrix as a function of the initial atomic density matrix? So in other words, all that happens with the environment-- that it has an initial state, that it gets entangled, that we are maybe not keeping track of the photons-- we're not interested in all of that.
We really want to focus on what happens to the atom. How does an initial state of the atom propagate into a final state? And this is done by the optical Bloch equations. This is done by the master equation. So the master equation, you could say, focuses on the relevant part of the system, maybe just a single atom. And we neglect the million photons which have been emitted.
But those million photons which have been emitted into the environment-- they change, of course, the density matrix of the atom. And if you find a description, the master equation includes, with extra terms, what those photons have done. Maybe this sounds very abstract. But in the end, you will find that maybe the photons which are emitted produce some damping. Or if you put atoms in molasses, the atomic motion comes to a standstill.
So in other words, we want to develop a systematic approximation scheme for the master equation, where the effect of all these many, many degrees of freedom can maybe be simply expressed by a few damping terms. So that's the idea. And yes, you know already one example of a master equation.
And these are Einstein's rate equations, which we have discussed in the first part of the course. If you have a two-level system which is coupled to the radiation field, then we obtained equations that the rate of change of the ground state population is related to the excited state population through spontaneous emission, described by the Einstein A coefficient. And if you have a spectral density of the electromagnetic field, it causes stimulated emission, and it causes absorption, described by the Einstein B coefficient. And you have a similar equation for the excited state.
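Those rate equations can be sketched numerically. This is a minimal illustration with made-up rate constants, not values from the lecture:

```python
# Euler integration of Einstein's rate equations for a two-level system
# in a radiation field of spectral density rho (illustrative numbers):
#   dNg/dt = +A*Ne + B*rho*Ne - B*rho*Ng
#   dNe/dt = -dNg/dt  (population is conserved)
A, Brho = 1.0, 0.5    # spontaneous rate A; stimulated rate B*rho (hypothetical)
Ng, Ne = 1.0, 0.0     # start with all population in the ground state
dt, steps = 1e-3, 20000

for _ in range(steps):
    dNg = (A * Ne + Brho * Ne - Brho * Ng) * dt
    Ng, Ne = Ng + dNg, Ne - dNg

# Steady state: Ne/Ng = B*rho / (A + B*rho) = 0.5/1.5
print(round(Ne / Ng, 4))  # 0.3333
```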
So this is clearly the semi-classical limit of what we want to accomplish. But we want to know more. We really want to know not just the rate equation for the atom; we want to know, so to speak, the wave function. Well, wave function slash statistical operator-- the statistical operator contains all the information of the wave function. And if the state is a pure state, it has a certain statistical operator, and we can find the pure state from it. So that's what we want to do.
But sort of just generically, we really want to find the full quantum time evolution. And now, I just want to express that we have to be careful. The time evolution with a Hamiltonian, if you now bring in the environment, cannot simply be amended by adding an imaginary term. This here violates unitary time evolution.
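A one-line numerical check of that statement (my own illustration, arbitrary units): with a complex energy E - i*Gamma/2, the survival probability decays, so the evolution cannot be unitary.

```python
import numpy as np

# Adding -i*Gamma/2 to an energy makes the evolution non-unitary:
# the norm of the state decays instead of staying 1.
E, Gamma = 1.0, 0.2  # illustrative values
t = 3.0
amp = np.exp(-1j * (E - 1j * Gamma / 2.0) * t)  # amplitude of one level
print(abs(amp) ** 2)  # exp(-Gamma*t) = exp(-0.6), about 0.549, not 1
```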
In other words, when we find an equation for the quantum system, how it evolves, it's not that everything you think phenomenologically works will work. It has to be consistent with the laws of quantum physics. In other words, when we find an equation which describes the atomic system, it will be a requirement that a density matrix turns into a density matrix. So a certain structure has to be obeyed.
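The structure that does obey this requirement is what is known as the Lindblad form of the master equation. Here is a minimal sketch (my own illustration: spontaneous decay as the only jump operator and no Hamiltonian part), showing that it keeps the trace of the density matrix equal to 1 while the excited state decays.

```python
import numpy as np

Gamma = 1.0
sm = np.array([[0.0, 1.0], [0.0, 0.0]])  # lowering operator |g><e|

def lindblad_rhs(rho):
    # d rho/dt = Gamma * (sm rho sm^dag - 1/2 {sm^dag sm, rho}); H = 0 here
    smd = sm.conj().T
    return Gamma * (sm @ rho @ smd - 0.5 * (smd @ sm @ rho + rho @ smd @ sm))

rho = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)  # atom fully excited
dt = 1e-3
for _ in range(5000):  # Euler integration up to t = 5/Gamma
    rho = rho + lindblad_rhs(rho) * dt

print(np.trace(rho).real)        # 1.0: the trace is preserved
print(round(rho[1, 1].real, 4))  # excited population ~ exp(-5), nearly decayed
```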
And that is actually extremely restrictive. And our derivation of the master equation will actually show what kind of operators, applied to the atomic density matrix, are quantum mechanically consistent. This is actually something-- well, we're not doing a lot of equations or work today, so let me rather be a little bit chatty.
This is actually something which is at the frontier of our field, both in theory and in experiments with ion traps and neutral atoms: that we drive some evolution of an atomic system by coupling it to the environment. Well, the usual environment you can think of just takes photons away. It gives us a damping term for the excited state.
But now you can ask the question: can you construct an environment which has some degrees of freedom, with laser fields, RF fields, and such? You call it the environment, and this environment does something really fancy. The system comes into equilibrium with the environment. But could you engineer the environment such that the system comes into equilibrium with it in a fancy superfluid or fancy entangled state?
So can you engineer the environment in such a way that it does something really fancy to your system? Well, you can dream of it. But your dreams are restricted by the mathematical structure of all possible master equations in the world. Because the environment cannot do everything for you. The environment can only do for you what can come out of all possible Hamiltonians.
And the fact that the total system evolves in a unitary way with the total Hamiltonian of the system, this is really restricting. This operator sort of stands for all possible master equations. It's restricting the master equation for your atomic density matrix.
There was just-- I think it was this year or last year-- a nice Science or Nature paper by [INAUDIBLE] where they engineered the environment around ions in an ion trap. And that stabilized the ions in an unusual state, not what the normal environment would have done. So anyway, what I will be telling you is also relevant for understanding this frontier of our field, which is called environment engineering.
OK, density matrix-- good, five more minutes. So what we have is a system, and we have this environment. And what we are exchanging with the environment is both energy and entropy.
And so when we transfer energy or heat, there is a corresponding change in entropy. And it's a general property of all quantum systems-- it's a consequence of the fluctuation-dissipation theorem-- that you cannot have any form of relaxation without noise.
So for instance, when we discuss optical molasses-- we do that in a few weeks-- where the atomic motion is damped, well, you have a damping term. It brings the motion, through friction, to a standstill. But we know by general principles that damping is not possible without noise. So when somebody sells you a wonderful damping scheme which damps the motion, you should always ask: but there must be noise-- what is the ultimate noise?
And it's fundamental that it is there. So therefore, our derivation of the master equation will also display that we do not get any form of damping without at least the fundamental quantum noise. So what we need is a description of the quantum noise which comes from coupling to the environment.
The tool which we use for that is the density matrix. I assume everybody here is familiar with the density matrix. The lecture notes on the wiki have a small tutorial on the density matrix; if you want to freshen up your knowledge, maybe read it. I don't want to cover it in class.
The one thing we need, and I just want to remind you of it, is that any density matrix can always be unravelled. The density matrix can be written as a probabilistic sum over states. This will actually play a major role. We will make certain models for damping. And it's really beautiful.
On Monday, I will give you the beam splitter model for the optical Bloch equation. I really like it. Because it's a microscopic model. And it shows you a lot of fundamental principles.
But the important part is: whenever we have a way to construct a density matrix by saying we have certain quantum states k and we just add them up probabilistically, this kind of microscopic interpretation of the density matrix is called an unravelling. It's sort of writing it as a specific diagonal sum over states. But those unravellings are not unique. They describe one possibility, but there are other possibilities.
And the one example which I can give you: if you have a density matrix like this, you can write it, as this form suggests, as a probabilistic sum of being in the state 0 or in the state 1, with probabilities 1/4 and 3/4. But you can also write it as being, with equal probability, in two states a and b, where the states a and b are superpositions of the states 0 and 1. And you can see by inspection that this will do the trick.
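Checking the numbers from this example explicitly (the superposition states a and b below are one hypothetical choice that reproduces the same matrix):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Unravelling 1: state |0> with probability 1/4, state |1> with probability 3/4
rho1 = 0.25 * np.outer(ket0, ket0) + 0.75 * np.outer(ket1, ket1)

# Unravelling 2: equal probabilities of two superposition states a and b
a = 0.5 * ket0 + np.sqrt(3) / 2 * ket1
b = 0.5 * ket0 - np.sqrt(3) / 2 * ket1
rho2 = 0.5 * np.outer(a, a) + 0.5 * np.outer(b, b)

# The cross terms cancel, leaving the same diagonal density matrix
print(np.allclose(rho1, rho2))  # True: same matrix, different unravellings
```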
So we'll talk a lot about unravelling of the density matrix. That's why I want to say up front, that the same density matrix can be thought of as being created by different processes. But this actually makes it even more powerful. Because we have a unified description, or even an identical description, for different microscopic processes. OK, any last questions? Well then, let's enjoy the open house with incoming graduate students, and I'll see you on Monday.