Description: In this lecture, Prof. Adams discusses the time evolution of Gaussian wave packets both in free space and across potential steps.
Instructor: Allan Adams
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare ocw.mit.edu.
PROFESSOR: All right. Hello. So today, as people slowly trickle in, we're going to talk about scattering states.
So we've spent a lot of time talking about bound states, states corresponding to particles that are localized in a region and that can't get off to infinity. And in particular, when we distinguish bound states from scattering states, we're usually talking about energy eigenstates. They're eigenstates that are strictly localized in some region.
For example, in the infinite well, all of the states are strictly localized to be inside the well. In the finite well, as we saw last time, there are a finite number of energy eigenstates which are bound inside the well. But there are also states that aren't bound inside the well, as hopefully you'll see in recitation, and as we've discussed from the qualitative structure of wave functions.
So why would we particularly care? We also did the harmonic oscillator, where everything was nice and bound. So why would we particularly care about states that aren't bound, for scattering states? And the obvious answer is: I am not bound. Most things in the world are either bound or they are scattering states. By scattering states I just mean things that can get away. Things that can go away.
So it's easy to underemphasize, easy to miss, the importance of scattering experiments. It's easy to say, look, a scattering experiment is where you take some fixed target, you throw something at it, you look at how things bounce off, and you try to deduce something about the object from how things bounce off it.
For example, one of the great experiments of our day is the LHC. It's a gigantic ring-- it's huge-- at the foot of the Jura Mountains outside of Geneva. And those are people, and that's one of the detectors-- I think it's CMS.
And what are you doing in this experiment? You're taking two protons and accelerating them ridiculously fast-- bounded only by the speed of light, but just barely-- until they have a tremendous amount of kinetic energy, and you're colliding them into each other.
And the idea is, if you collide them into each other and watch how the shrapnel comes flying off, you can deduce what must have been going on when they actually collided. You can deduce something about the structure: how protons are built up out of quarks.
And it's easy to think of a scattering experiment as some bizarre thing that happens in particle detectors under mountains. But I am currently engaged in a scattering experiment. The scattering experiment I'm currently engaged in is: light is bouncing from light sources in the room, off of various bits of each of you. And some of it, improbably, scatters directly into my eye. That is shocking.
And through the scattering process, deducing what I can from the statistics of the photons that bounce into my eye, I can deduce that you're sitting there and not over there, which is pretty awesome. So we actually get a tremendous amount of information about the world around us from scattering processes. These are not esoteric, weird things. Scattering is how you interact with the world.
So when we do scattering problems, it's not because we're thinking about particle colliders. It's because at the end of the day I want to understand--
PROFESSOR: --how surfaces-- bless you-- how surfaces reflect. I want to understand why when you look in the mirror light bounces back. I want to understand how you see things.
So this is a much broader-- scattering's a much broader idea than just the sort of legacy of how we study particle physics. Scattering is what we do constantly. And we're going to come back to the computer in a bit. I'm going to close this for now. Yes, good.
So today we're going to begin a series of lectures in which we study scattering processes in one dimension in various different settings. So in particular that means we're going to be studying quantum particles being sent in from some distance, incident on some potential, incident on some object. And we're going to ask how likely are they to continue going through, how likely are they to bounce back.
And so we're going to start with an easy potential, an easy object off which to scatter something, which is the absence of an object, the free particle. And I want to think about how the system evolves.
So for the free particle where we just have something of mass, m, and its potential is constant, we know how to solve the--
AUDIENCE: This may be a little bit dumb, but given that these states, [INAUDIBLE]?
PROFESSOR: Fantastic question. I'm going to come back in just a second. It's a very good question.
So we're interested in energy eigenstates, and we know the energy eigenstates for the free particle-- we've written these down many times-- and I'm going to write them in the following form. Suppose we have a free particle in an energy eigenstate with energy E; then we know it's a superposition of plane waves, a e to the i kx plus b e to the minus i kx.
And the normalization is encoded in a and b, but, of course, these are not normalizable states. We typically normalize them with a 1 over root 2 pi for delta function normalizability.
Now, in order for this to be a solution to the Schrodinger equation, we need the frequency, or the energy: E is equal to h-bar squared k squared upon 2m, and omega is this upon h-bar. And knowing that, because these are energy eigenstates, we can immediately write down the solution of the Schrodinger equation which begins at time t equals 0 in this state.
So there's the solution of the time dependent Schrodinger equation. So we've now completely solved this in full generality, which is not so shocking since it's a free particle.
But let's think about what these two possible states are. What do these mean? So the first thing to say is this component of our superposition is a wave, and I'm going to call it a right moving wave. And why am I calling it a right moving wave?
If you draw the real part of this at some moment in time, it's got some crests. And if you draw it a short bit of time later, I want to ask how the wave has moved. For example, a peak in the wave will be where kx minus omega t is 0. If kx minus omega t is 0, then kx is equal to omega t, or x is equal to omega over k, times t.
And assuming, as we have, that k is positive, this says that as time increases the position increases. And for this one, by exactly the same logic, x is equal to minus omega over k, times t.
So we'll call this a right going or right moving wave, and we'll call this a left moving wave. So the general solution is an arbitrary superposition of a left moving and a right moving wave. This should be familiar from 8.03 in your study of waves.
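As a quick numerical check of the claim above, here is a sketch in Python (assuming numpy is available, and using natural units h-bar = m = 1, which is an assumption for illustration): track the crest of the right-moving wave whose phase kx minus omega t is zero, and see that it travels at omega over k.

```python
import numpy as np

# Natural units hbar = m = 1 -- an assumption for illustration.
hbar, m = 1.0, 1.0
k = 2.0                          # wavenumber of the right-moving component
omega = hbar * k**2 / (2 * m)    # dispersion relation omega = hbar k^2 / 2m

def crest_position(t):
    # The crest with phase kx - omega*t = 0 sits at x = (omega/k) * t,
    # so it moves to larger x as t increases.
    return omega * t / k

print(crest_position(0.0), crest_position(1.0))  # 0.0 1.0
```

The crest advances by omega over k per unit time, which is why we call the k-positive piece right moving.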
But as was pointed out just a moment ago, we've got a problem: phi sub E is not normalizable. And this is obvious: let's have b equal 0 for simplicity. Then this is a pure phase, and the norm squared of a pure phase is 1. And so the probability density is constant. It's 1 from minus infinity to infinity.
So the probability of finding it at any given point is equal. So assuming this is on the real line, which is what we mean by saying it's a free particle, there's no constant you can multiply that by to make it normalizable. This is not a normalizable function.
So more precisely, what we usually do is we take our states phi sub k equal to 1 over root 2 pi, e to the i kx, such that the inner product of phi k with phi k prime is equal to delta of k minus k prime.
This is as good as we can do. This is as good as we get to normalizing it. It's not normalized to 1. It's normalized, when k is equal to k prime, to the value of the delta function, which is ill-defined.
So we've dealt with this before, we've talked about it. So how do we deal with it? Well, we've just learned, actually, a very important thing. This question, a very good question, led to a very important observation, which is: can a quantum mechanical particle be meaningfully said to be placed in an energy eigenstate which is bound? Can you put a particle in the ground state of a harmonic oscillator?
PROFESSOR: Sure. That's fine. Can you put the particle in the k equals 7 state of a free particle?
PROFESSOR: No. So these scattering states-- you can never truly put your particle in a scattering state. It's always going to be some approximate plane wave. Scattering states are always going to be some plane wave asymptotically. So we can't put the particles directly in a plane wave. What can we do, however?
PROFESSOR: Right. We can build a wave packet. We can use these as a basis for states, which are normalizable and reasonably localized. So we need to deal with wave packets.
So today is going to be, the beginning of today is going to be dealing with the evolution of a wave packet. And in particular we'll start by looking at the evolution in time of a well localized wave packet in the potential corresponding to a free particle. And from that we're going to learn already some interesting things, and we'll use that intuition for the next several lectures as well.
Questions before we get going? All right.
So here's going to be my first example. Consider a free particle in a minimum uncertainty wave packet. So what is a minimum uncertainty wave packet? We've talked--
AUDIENCE: A Gaussian.
PROFESSOR: A Gaussian, exactly. So the minimum uncertainty wave packet's a Gaussian. Suppose that I take my particle and I place it, at time 0, in a Gaussian centered at x equals 0. I'm going to properly normalize this Gaussian. I hate these factors of pi, but whatever: 1 over the fourth root of pi, 1 over root a, e to the minus x squared over 2a squared.
So this is the wave function at time 0, I declare. I will just prepare the system in this state. And what I'm interested in knowing is how does the system evolve in time? What is x psi of x and t?
So how do we solve this problem? Well, we could just plug it into the Schrodinger equation directly and just by brute force try to crank it out. But the easier thing to do is to use the same technique we've used all the way along.
We are going to take the wave function, with its known dependence on x, and expand it as a superposition of energy eigenstates. Each energy eigenstate evolves in a known fashion, so we evolve each eigenstate in that superposition, and then do the sum again, re-sum it, to get the evolution as a function of position. Cool?
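For the free particle, where the energy eigenstates are plane waves, this expand-evolve-resum recipe is a Fourier transform, a phase multiplication, and an inverse transform. Here's a minimal numerical sketch (assuming numpy; natural units hbar = m = 1, and the grid size and box length are arbitrary illustrative choices):

```python
import numpy as np

# Natural units hbar = m = 1 -- an assumption for illustration.
hbar, m = 1.0, 1.0
N, L = 2048, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

def evolve_free(psi0, t):
    """Expand in plane waves (FFT), attach the phase e^{-iEt/hbar} with
    E = hbar^2 k^2 / 2m to each mode, and re-sum (inverse FFT)."""
    return np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * hbar * k**2 * t / (2 * m)))

a = 1.0
psi0 = np.exp(-x**2 / (2 * a**2)) / (np.pi**0.25 * np.sqrt(a))  # normalized Gaussian
psi_t = evolve_free(psi0, t=2.0)

# Evolution is unitary, so the total probability stays 1:
print(round(np.sum(np.abs(psi_t)**2) * dx, 6))  # 1.0
```

Each Fourier mode is an energy eigenstate, so the only thing time evolution does is attach a phase to each coefficient before the re-sum.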
AUDIENCE: So given that the x-basis and the t-bases are uncountable, [INAUDIBLE]?
PROFESSOR: Good. Here's the theorem. So the question is, roughly, how do you know you can do that? How do you know you can expand in the energy eigenbasis? And we have to go back to the spectral theorem, which tells us that for any observable, the corresponding operator has a basis of eigenfunctions. So this is a theorem that I haven't proven, but that I'm telling you, and that we will use over and over and over again: the spectral theorem.
But what it tells you is that any good operator corresponding to an observable has a good eigenbasis. That means that if we find the eigenfunctions of the energy operator, any state can be expanded as a superposition in that basis.
And the question of countable versus uncountable is a slightly subtle one. For example, how many points are there on a circle? Well, there's an uncountable number. How many momentum modes are there on a circle? Well, that's countable. How does that work? How can you describe that both in position and in momentum space?
Ask me afterwards and we'll talk about that in more detail. But don't get too hung up on countable versus uncountable. It's not that big a difference. You're just summing-- it's just whether you sum over a continuous or discrete thing. I'm being glib, but it's useful to be glib at that level.
Other questions before we-- OK. Good. Yeah.
AUDIENCE: So the reason we are going to use energy eigenstates is just because [INAUDIBLE]?
PROFESSOR: Almost. So the question is are we using the energy eigenstates because at the end of the day it's going to boil down to doing a Fourier transform, because the energy eigenstates are just plain waves, e to the ix? And that will be true, but that's not the reason we do it. It's a good question. So let me disentangle these. Well, I guess I can use this.
The point is that for any wave function in any system, if I know that psi of x at time 0 is some function, and I know the energy eigenstates of that system, then I can write this as a sum over n of cn, phi n of x, for some set of coefficients cn. So that's the spectral theorem.
Pick an operator here, the energy. Consider its eigenfunctions. They form a basis and any state can be expanded. But once we've expanded in the energy eigenbasis, specifically for the energy eigenbasis where e phi n is en phi n, then we know how these guys evolve in time. They evolve by an e to the minus i en t upon h-bar. And we also know that the Schrodinger equation is a linear equation, so we can superpose solutions and get a new solution.
So now, this is the expression for the full solution at time t. That's why we're using energy eigenstates. In this simple case of the free particle, it is a felicitous fact that the energy eigenstates are also Fourier modes. But that won't be true in general.
For example, in the harmonic oscillator system, the harmonic oscillator energy eigenstates are Gaussians times some special functions. And we know how to compute them. That's great. We use a raising, lowering operators. It's nice. But they're not plane waves.
Nonetheless, despite not being Fourier modes, we can always expand an arbitrary function in the energy eigenbasis. And once we've done so, writing down the time evolution is straightforward. That answer your question?
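To make the point concrete, here is a sketch (assuming numpy; hbar = m = omega = 1, and the grid and truncation at 40 modes are arbitrary illustrative choices) that expands a displaced Gaussian in harmonic oscillator eigenfunctions, which are not plane waves, and re-sums it:

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi

# Grid for numerical inner products (illustrative choice).
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

def phi(n):
    """Normalized n-th oscillator eigenfunction (hbar = m = omega = 1):
    H_n(x) e^{-x^2/2} / sqrt(2^n n! sqrt(pi))."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermval(x, c) * np.exp(-x**2 / 2) / np.sqrt(2.0**n * factorial(n) * np.sqrt(pi))

# A displaced Gaussian, expanded in the first 40 eigenfunctions:
psi = np.exp(-(x - 1.0)**2 / 2) / pi**0.25
coeffs = [np.sum(phi(n) * psi) * dx for n in range(40)]
recon = sum(c * phi(n) for n, c in enumerate(coeffs))

# The expansion captures essentially all of the probability...
print(round(sum(c**2 for c in coeffs), 6))  # 1.0
```

The coefficients fall off rapidly with n, and summing the truncated series reproduces psi to high accuracy, which is the spectral theorem in action for a non-plane-wave basis.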
PROFESSOR: Great. Anything else? OK. So let's do it for this guy.
So what are the energy eigenstates? They're these guys, and I'm just going to write them as e to the i kx without assuming that k is positive. So k could be positive or negative, and the energy is h-bar squared k squared upon 2m.
And so we can write the wave function in this fashion, but we could also write it in Fourier transformed form, which is the integral from minus infinity to infinity-- I'm mostly not going to write the bounds; they're always going to be minus infinity to infinity-- of dk, 1 over root 2 pi, e to the i kx. The plane wave, times the Fourier transform, psi tilde of k.
So here all I'm doing is defining for you the psi tilde. But you actually computed this Fourier transform on, I think, the second problem set. So this is equal to the integral dk over root 2 pi, e to the i kx, times-- and I'm going to be careful about the coefficients-- root a over the fourth root of pi, e to the minus k squared a squared over 2.
So here, this is just to remind us if we have a Gaussian of width a, the Fourier transform is Gaussian of width 1 upon a. So that's momentum and position uncertainty in Fourier space. And you did this in problem set two.
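The reciprocal widths can be checked numerically. This sketch (assuming numpy; the particular a, grid size, and box length are arbitrary illustrative choices) computes the rms widths in position and wavenumber and confirms the minimum uncertainty product Delta x times Delta k equals one half:

```python
import numpy as np

# A Gaussian of width a in position space has a Fourier transform of
# width 1/a, saturating Delta x * Delta k = 1/2.
a = 1.7                              # illustrative width
N, L = 4096, 120.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
psi = np.exp(-x**2 / (2 * a**2))     # width a in position space

k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
psi_k = np.fft.fft(psi)              # width ~ 1/a in k space

def rms(grid, f):
    # rms width of |f|^2 on the given grid
    w = np.abs(f)**2
    return np.sqrt(np.sum(grid**2 * w) / np.sum(w))

dx_rms, dk_rms = rms(x, psi), rms(k, psi_k)
print(round(dx_rms * dk_rms, 3))     # 0.5
```

Making a smaller shrinks dx_rms and inflates dk_rms in exact proportion, which is the position-momentum uncertainty trade-off in Fourier space.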
So this is an alternate way of writing this wave function through its Fourier transform. Everyone cool with that?
But the nice thing about this is that we know how to time evolve this wave function. This tells us that psi of x, t is equal to-- and I'm going to keep these coefficients pulled out-- root a over the fourth root of pi, integral dk upon root 2 pi, e to the i kx minus omega t. And remember, omega depends on k, because it's h-bar squared k squared upon 2m, divided by h-bar. Times our Gaussian, e to the minus k squared a squared upon 2.
So everyone cool with that? So all I've done is I've taken this line, pulled out the constant, and added the time evolution of each Fourier mode. So now we just have to do the integral.
And this is not so hard to do. What makes it totally tractable is that if I did this with some arbitrary function omega of k, that would be complicated-- who knows how to do that integral. But I happen to know that E is h-bar squared k squared upon 2m, which means that omega is equal to h-bar k squared upon 2m.
So I can rewrite this exponent, minus i omega t, as minus i h-bar k squared t upon 2m. Yes?
AUDIENCE: Where is the fourth [INAUDIBLE]?
PROFESSOR: That's a square root of a square root.
AUDIENCE: Oh, I didn't see it. My bad.
PROFESSOR: No, no. That's OK. It's a horrible, horrible factor.
So what do we take away from this? Well, this can be written in a nice form. Note that here we have a k squared, here we have a k squared, here we have a k. This is an integral over k, so this is still Gaussian. It's still the exponential of a quadratic function of k. So we can use our formulas for Gaussian integrals to do this integral.
So to make that a little more obvious, let's simplify the form of this. So the form of this is again going to be square root of a over square root pi. Integral dk upon root 2 pi. e to the i kx.
And I'm going to take this term and this term and group them together, because they both have a k squared in the exponential: e to the minus k squared upon 2, times-- now instead of just a squared, we have a squared plus i h-bar t upon m-- there's a typo in my notes. Crap. Good.
So before we do anything else, let's just check dimensional analysis. This has units of one over length squared, so this had better have units of length squared. That's a length squared, good. This, is this a length squared? h-bar is momentum times length, which is mass times length squared over time. Divide by mass, multiply by time, so that's length squared. Good. So our units make sense.
PROFESSOR: This, too, we need.
AUDIENCE: In terms of 2m.
PROFESSOR: Oh, this one. Good. Thank you. Yes. Thanks. So this is again a Gaussian. And look, before, we knew that for a Gaussian, the Fourier transform is just this Gaussian. The only difference is it's now a Gaussian whose width is not a, but a complex number.
But if you go through the analysis of the Gaussian integral, that's perfectly fine. It's not a problem at all. The effective width is this.
So what does that tell you about psi of t? So this tells you that psi of x and t is equal to root a over the fourth root of pi, and now, instead of the 1 over root a, 1 over the effective width: the square root of a squared plus i h-bar t upon m. Times e to the minus x squared over 2 times a squared plus i h-bar t upon m.
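As a check on the algebra, one can compare this closed form against brute-force numerical evolution of the initial Gaussian. A sketch, assuming numpy and natural units hbar = m = 1, with arbitrary illustrative grid parameters:

```python
import numpy as np

hbar, m, a, t = 1.0, 1.0, 1.0, 1.5   # natural units; illustrative values
N, L = 4096, 160.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

# Initial normalized Gaussian: e^{-x^2/2a^2} / (pi^{1/4} sqrt(a))
psi0 = np.exp(-x**2 / (2 * a**2)) / (np.pi**0.25 * np.sqrt(a))

# Brute force: Fourier transform, attach e^{-i omega t}, invert.
psi_num = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * hbar * k**2 * t / (2 * m)))

# Closed form with the complex effective width a^2 + i hbar t / m:
w = a**2 + 1j * hbar * t / m
psi_closed = (np.sqrt(a) / np.pi**0.25) * np.exp(-x**2 / (2 * w)) / np.sqrt(w)

print(np.max(np.abs(psi_num - psi_closed)) < 1e-6)  # True
```

The two agree to numerical precision, so the Gaussian integral really did all the work.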
So at this point it's not totally transparent to me what exactly this wave function is telling me, because there's still a phase downstairs in the width. So to get a little more intuition let's look at something that's purely real. Let's look at the probability to find the particle at position x at time t. That's just the norm squared of this guy.
And if you go through a little bit of algebra, this is equal to 1 over root pi, times 1 over the square root of a squared plus h-bar t upon m a, quantity squared, times e to the minus x squared upon a squared plus h-bar t upon m a, quantity squared.
So what do we get? The probability distribution is again a Gaussian, but at any given moment in time the width of this Gaussian is changing. Notice this is purely real again.
And meanwhile, the amplitude is also changing. And notice that it's changing in the following way. At time 0, this is the least it can possibly be. At time 0 this denominator is the least it can possibly be because that's gone. At any positive time, the denominator is larger so the probability has dropped off.
So the probability at any given point is decreasing. Except we also have this Gaussian whose width squared is increasing quadratically in time. It's getting wider and wider and wider. And you can check-- so let's check dimensional analysis again.
This should have units of length squared. h-bar upon m a is momentum times length divided by length divided by mass-- that's just a velocity. So this is velocity squared times time squared. That's good. So this has the correct units.
So it's spreading with the velocity v equal to h-bar upon m a. And what was a? a was the width at time 0. a was the minimum width.
So graphically, let's roll this out-- what is this telling us? And incidentally, a quick check, if this is supposed to be the wave function at time t, it had better reproduce the correct answer at time 0. So what is this wave function at time 0? We lose that, we lose that, and we recover the original wave function. So that's good.
So graphically what's going on? So let's plot p of x and t. At time 0 we have a Gaussian and it's centered at 0, and it's got a width, which is just a. And this is at time t equals 0.
At a subsequent time, it's again centered at x equals 0-- modulo my bad art-- but it's got a width, the square root of a squared plus h-bar t upon m a, quantity squared. So it's become shorter and wider, such that it's still properly normalized.
And at much later time it will be a much lower and much broader Gaussian. Again, properly normalized, and again centered at x equals 0. Is that cool?
So what this says is if you start with your well-localized wave packet and you let go, it disperses. Does that make sense? Is that reasonable? How about this: what happens if you run the system backwards in time? If it's dispersing and getting wider, then intuitively you might expect that it would get sharper if you integrated it back in time.
But, in fact, what happens if we take t negative in this analysis? We didn't assume it was positive, anyway.
PROFESSOR: It disperses again. It comes from being a very dispersed wave, to whap, being very well localized, and whap, it disperses out again. And notice that the sharper the wave function was localized-- the smaller a was in the first place-- the faster it disperses. This has a 1 upon a. It has an a downstairs.
So the more sharp it was at time 0, the faster it disperses away. This is what our analysis predicts.
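You can read this off directly from the time-dependent width parameter found above, w squared of t equal to a squared plus (h-bar t over m a) squared (factors here follow the convention used above; they shift around depending on how a is defined). A tiny sketch, assuming hbar = m = 1:

```python
# Width-squared of the probability distribution at time t for a packet
# of initial width a, in natural units hbar = m = 1 (an assumption).
hbar, m = 1.0, 1.0

def width2(a, t):
    return a**2 + (hbar * t / (m * a))**2

# A wide packet barely spreads; a sharp one blows up fast:
for a in (2.0, 0.5):
    print(a, width2(a, 0.0), width2(a, 4.0))
# 2.0 4.0 8.0
# 0.5 0.25 64.25
```

The a downstairs in the spreading term is exactly why the sharply localized packet (a = 0.5) grows its width-squared by a factor of 257 while the wide one (a = 2) only doubles.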
So let me give you two questions to think about. I'm not going to answer them right now. I want you to think about them-- not right-- well, think about them now, but also think about them more broadly as you do the problem set and as you're reading through the reading for the next week.
So the width in this example was least at time t equals 0. Why? How could you take this result and build a wave function whose width is minimum at some other time, say, time t0? So that's one question.
Second question. This wave function is sitting still. How do you make it move? And either the second or the third problem on the problem set is an introduction to that question. So here's a challenge to everyone. Construct a well-localized wave packet that's moving, in the sense that it has well-defined momentum expectation value.
And again, hint: look at your second or third problem on the problem set. And then repeat this entire analysis and check how the probability distribution evolves in time. In particular, what I'd like you to do is verify that the probability distribution disperses, but simultaneously moves in the direction corresponding to the initial momentum, with that momentum.
The center of the wave packet moves according to the momentum that you gave it in the first place. It's a free particle, so should the expectation value of the momentum change over time? No. And indeed it won't. So that's a challenge for you.
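As a numerical preview of that challenge (not a substitute for doing the algebra), here is a sketch, assuming numpy and hbar = m = 1 with illustrative grid choices: multiply the Gaussian by e to the i k0 x to give it momentum, evolve it, and watch the center advance at h-bar k0 over m while the packet disperses.

```python
import numpy as np

hbar, m, a, k0 = 1.0, 1.0, 1.0, 3.0   # natural units; illustrative values
N, L = 4096, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dxg = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dxg)

# Boost the Gaussian by the phase e^{i k0 x}, then normalize.
psi0 = np.exp(-x**2 / (2 * a**2)) * np.exp(1j * k0 * x)
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dxg)

def mean_x(psi):
    return np.sum(x * np.abs(psi)**2) * dxg

centers = {}
for t in (0.0, 5.0, 10.0):
    psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * hbar * k**2 * t / (2 * m)))
    centers[t] = mean_x(psi_t)
    print(t, round(centers[t], 3))
```

The center advances by hbar k0 / m per unit time (here 3 per unit time), even as the packet spreads, so the momentum expectation value stays put.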
OK, questions at this point?
AUDIENCE: Yeah. So this wave or this Gaussian's not moving, but it's centered over 0 at time 0 or x0?
PROFESSOR: Sorry, this is at x is equal to 0.
AUDIENCE: OK. So is that-- so all those [INAUDIBLE] are at x equals 0 over here. Shouldn't the amplitude increase and not the--
PROFESSOR: Good. So what I'm plotting-- sorry. So I'm plotting p of x and t as a function of x for different times. So this is time 0. This was time t equals something not 0, equals 1. And this is some time t equal to large. Right.
AUDIENCE: And so but is it if x is always 0, then shouldn't the [INAUDIBLE] not change because it would equal [INAUDIBLE]?
PROFESSOR: Good. So if x is 0, then the Gaussian contribution is always giving me 1. But there's still this amplitude. And the amplitude is the denominator is getting larger and larger with time. So, indeed, the amplitude should be falling off. Is that--
AUDIENCE: Yeah, but will the width change?
PROFESSOR: Yeah, absolutely. So what the width is saying is, look, as we increase x, how rapidly does this fall off as a Gaussian. And so if time is very, very large, then this denominator's huge. So in order for this Gaussian to suppress the wave function, x has to be very, very large.
AUDIENCE: So we're assuming [INAUDIBLE] x hasn't changed.
PROFESSOR: Ah, good. Remember, so this x is not the position of the particle. x is the position at which we're evaluating the probability. So what we're plotting is we're plotting-- good. Yeah, this is an easy thing to get confused by.
What is this quantity telling us? This quantity is telling us the probability density that we find the particle at position x at time t. So the x is like, look, where do you want to look? You tell me the x and I tell you how likely it is to be found there.
So what this is telling us is that the probability distribution to find the particle starts out sharply peaked, but then it becomes more and more dispersed. So at the initial time, how likely am I to find the particle, say, here? Not at all. But at a very late time, who knows, it could be there. I mean, my confidence is very, very limited because the probability distribution's very wide. Did that answer your question?
PROFESSOR: Great. Yup?
AUDIENCE: [INAUDIBLE] what happens after [INAUDIBLE].
PROFESSOR: Excellent question. That's a fantastic question.
So we've talked about measurement of position. We've talked about measure-- So after you measure an observable, the system is left in an eigenstate of the observable corresponding to the observed eigenvalue.
So suppose I have some potential or I have some particle and it's moving around. It's in some complicated wave function, and some complicated state. And then you measure its position to be here. But my measurement isn't perfect. I know it's here with some reasonable confidence, with some reasonable width.
What's going to be the state thereafter? Well, it's going to be in a state corresponding to being more or less here with some uncertainty. Oh look, there's that state. And what happened subsequently? What happened subsequently is my probability immediately decreases.
And this is exactly what we saw before. Thank you for asking this question. It's a very good question. This is exactly what we said before. If we can make an arbitrarily precise measurement of the position, what would be the width?
PROFESSOR: Yeah. It'd be a delta function, so the width would be 0. That little a would be 0. So how rapidly does it disperse?
PROFESSOR: Pretty fast. And you have to worry a little bit because that's getting a little relativistic. Yeah.
AUDIENCE: If you make a measurement that's not perfect, how do you account for that [INAUDIBLE]?
PROFESSOR: Yeah. So OK. That's a good question.
So you make a measurement, it's not perfect, there was some uncertainty. And so let me rephrase this question slightly. So here's another version of this question.
Suppose I make a measurement of the system. If there are two possible configurations, two possible states, and I measure an eigenvalue corresponding to one of them, then I know what state the system is in.
And then again, if I'm measuring something like position where I'm inescapably going to be measuring that position with some uncertainty, what state is it left in afterwards? Do I know what state it's left in afterwards? No, I-- I know approximately what state. It's going to be a state corr-- OK, so then I have to have some model for the probability distribution of being in a state.
But usually what we'll do is we'll say we'll approximate that. We'll model that unknown. We don't know exactly what state it is because we don't know exactly what position we measured. Well, model that by saying, look, by the law of large numbers it's going to be roughly a Gaussian centered around the position with the width, which is our expected uncertainty.
Now, it depends on exactly what measurement you're doing. Sometimes that's not the right thing to do, but that's sort of a maximally naive thing. This is an interesting but complicated question, so come ask me in office hours.
AUDIENCE: So if this probability isn't moving, what is that velocity we [INAUDIBLE] over there?
PROFESSOR: Yeah, exactly. So what is this velocity? So this velocity, what is it telling us? It's telling us how rapidly the wave spreads out. So does that indicate anything moving? No. What's the expected value of the position at any time?
PROFESSOR: 0, because this is an even function. What's the position? Well, it's just as likely to be here as here. It's just that the probability of being here gets greater and greater over time for a while and then it falls off because the Gaussian is spreading and falling.
Yeah. One last question. Yup.
AUDIENCE: Can you tell me a little bit more about why negative time is the same as positive time?
PROFESSOR: Yeah. So the question is can you talk a little bit more about why the behavior as time increases is the same as the behavior as we go backwards in time. And I'm not sure how best to answer that, but let me give it a go.
So the first thing to say is: look at our solution, and look at the probability distribution. The probability distribution is completely even in time. It depends only on t squared. So if I ask what happens at time t, it's the same as what happened at time minus t. That's interesting; it's a slightly unusual situation.
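This evenness is immediate from the closed form of the probability distribution, which depends on t only through the width-squared w2(t). A tiny sketch, assuming numpy and hbar = m = 1, with the width convention used above:

```python
import numpy as np

hbar, m, a = 1.0, 1.0, 1.0   # natural units; illustrative width

def prob(xv, t):
    # |psi(x,t)|^2 for the spreading Gaussian: t enters only as t**2.
    w2 = a**2 + (hbar * t / (m * a))**2
    return np.exp(-xv**2 / w2) / np.sqrt(np.pi * w2)

print(prob(1.3, 2.0) == prob(1.3, -2.0))  # True: even in time
```

Running time forward or backward from the moment of minimum width gives identical probability distributions; only the phase of the wave function knows the difference.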
Let's look back at the wave function. Is that true of the wave function?
PROFESSOR: No. Because the phase and the complex parts depend on-- OK, so the amplitude doesn't depend on whether it's t or minus t. But the phase does. And that's a useful clue. Here's a better way.
What was the weird thing that made this t reversal invariant? There's another weird property of this, which is that it's centered at 0. It's sitting still. If we had given it momentum that would also not have been true.
So the best answer I can give you, though, requires knowing what happens when you have a finite momentum. So let me just sketch quickly.
So when we have a momentum, how do we take this wave packet and give it momentum k naught? So this is problem three on your problem set, but I will answer it for you.
You'd multiply by the phase e to the i k naught x. And the expectation value of the momentum will now shift to h-bar k naught.
So what's that going to do for us? Well, as we do the Fourier transform, that's going to shift k to k minus k naught. And so we can repeat the entire analysis with k minus k naught in place of k.
The important thing is that we end up with phases. So the way to read off time 0 is going to be-- well, many answers to this. As time increases or decreases, we'll see from this phase that the entire wave packet is going to continue moving, it's going to continue walking across.
And that breaks the t minus t invariance. Because at minus t it would have been over here. And then think about that first question. How would you make it become minimum wave packet at some time which is not time equals 0?
So when you have answers to both of those you'll have the answer to your question.
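To make the boost concrete, here's a minimal numerical sketch, in units where h-bar is 1 and with an illustrative width and k naught, showing that multiplying a Gaussian by e to the i k naught x shifts the momentum expectation value to h-bar k naught:

```python
import numpy as np

# Grid and a Gaussian wave packet centered at x = 0 with width a (hbar = 1)
N, L = 2048, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
a = 1.0
k0 = 5.0  # the boost wavenumber

# Multiply the resting Gaussian by the phase e^{i k0 x}
psi = np.exp(-x**2 / (4 * a**2)) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * (x[1] - x[0]))

# Momentum-space amplitudes via FFT; the distribution is now centered at k0
k = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])
phi = np.fft.fft(psi)
p_mean = np.sum(k * np.abs(phi)**2) / np.sum(np.abs(phi)**2)

print(round(p_mean, 3))  # close to k0 = 5.0, i.e. <p> = hbar k0
```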
AUDIENCE: What does it really mean for us to say that we have a probability distribution and measure something like negative time? Because it seems like we did something at time equals 0, and we say the probability that we would have found it [INAUDIBLE] is here.
AUDIENCE: It seems that you have to be aware [INAUDIBLE].
PROFESSOR: Good. So that's not exactly what the probability distribution is telling us. So what the probability distribution is telling you is not what happens when you make a measurement. It's: given this state, what would happen?
And so when you ask about the probability distribution at time minus 7, what you're saying is: given the wave function now, what must the wave function have been a time earlier, such that Schrodinger evolution, not measurement, but Schrodinger evolution gives you this.
So what that's telling you is someone earlier-- had someone earlier been around to do that experiment, what value would they have got?
AUDIENCE: But wouldn't that have changed the wave function of that?
PROFESSOR: Absolutely, if they had done the measurement. There's a difference between knowing what the probability distribution would be if you were to do a measurement, and actually doing a measurement and changing the wave function.
AUDIENCE: So for the negative time, is it the earlier statement about what the wave function--
PROFESSOR: It's merely saying about what the wave function was at an earlier time.
AUDIENCE: It doesn't really have measurements.
PROFESSOR: No one's done any measurements, exactly. Writing down the probability distribution does not do a measurement. Exactly.
One last question. Sorry, I really-- Yup.
AUDIENCE: Why define it at negative t, and then prepare the state and its width at time 0?
PROFESSOR: Well, you could ask the question slightly differently. If you had wanted to prepare the state at time minus 7 such that you got this state at time 0--
PROFESSOR: This is the inverse of asking given that the state is prepared now, what is it going to be at time 7. So they're equally reasonable questions. What should you have done last week so that you got your problem set turned in today? There was no problem set due today.
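Here's a quick numerical sketch of the symmetry we've been discussing, with illustrative grid sizes and free evolution in units h-bar = m = 1: a Gaussian at rest evolved forward and backward by the same amount has identical probability densities.

```python
import numpy as np

# Free Gaussian at rest (no momentum kick): evolve to +t and to -t and
# compare probability densities -- they agree, since |psi|^2 depends only on t^2.
N, L = 2048, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])
psi0 = np.exp(-x**2 / 4)  # centered, zero mean momentum

def evolve(psi, t):
    # Exact free-particle evolution in k-space: each mode picks up e^{-i k^2 t / 2}
    return np.fft.ifft(np.fft.fft(psi) * np.exp(-1j * k**2 / 2 * t))

fwd = np.abs(evolve(psi0, +3.0))**2
bwd = np.abs(evolve(psi0, -3.0))**2
print(np.max(np.abs(fwd - bwd)) < 1e-12)  # True: same density either way
```

Note the phases of the two evolved wave functions do differ; only the density is time-reversal even.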
OK, so with that said, we've done an analytic analysis-- that was ridiculous-- we've done a quick computation that showed analytically what happens to a Gaussian wave packet.
We've done an analysis that showed us how the wave function evolved over time. What I'd like to do now is use some of the PhET simulations to see this effect, and also to predict some more effects.
So what I want you to think about this as, so this is the PhET quantum tunneling wave packet simulation. If you haven't played with these, you totally should. They're fantastic. They're a great way of developing some intuition. All props to the Colorado group. They're excellent.
So what I'm going to do is I'm going to run through a series of experiments that we could, in principle, do on a table top, although it would be fabulously difficult. But the basic physics is simple. Instead I'm going to run them on the computer on the table top because it's easier and cheaper. But I want you to think of these as experiments that we're actually running.
So here, what these diagrams show, for those of you who haven't played with this, is this is the potential that I'm going to be working with. The green, this position, tells me the actual value of the energy of my wave packet. And the width tells me how broad that wave packet is.
And roughly speaking, how green it is tells me how much support I have on that particular energy eigenstate. And I can change that by tuning the initial width of the wave packet. So if I make the width very narrow, I need lots and lots of different energy eigenstates to make a narrow, well-localized wave packet. And if I allow the wave packet to be quite wide, then I don't need as many energy eigenstates. So let me make that a little more obvious.
So fewer energy eigenstates are needed, so we have a thinner band in energy; more energy eigenstates are needed, so I have a wider band. Is everyone cool with that?
And what this program does is it just integrates the Schrodinger equation-- well, yeah, well it just integrates the Schrodinger equation.
So what I want to do is I want to quickly-- oops. So this is the wave function, its absolute value. And this is the probability density, the norm squared. So let's see what happened. So what is this initial state? This initial state-- in fact, I'm going to re-load the configuration.
So this initial state corresponds to being more or less here with some uncertainty. So that's our psi of x0. And let's see how this evolves. So first, before you actually have it evolved, what do you think is going to happen? And remember, it's got some initial momentum as well.
OK. So there it is evolving and it's continuing to evolve, and it's really quite boring. This is a free particle. Wow, that was dull.
OK, so let's try that again. Let's look also at the real part and look at what the phase of the wave function's doing. So here there's something really cool going on. It might not be obvious. But look back at this. Look at how rapidly the wave packet moves and how rapidly the phase moves. Which one's moving faster?
PROFESSOR: Yeah, the wave packet. And the wave packet, it's kind of hard to tell exactly how much faster it's moving. But we can get that from this. What's the velocity of a single plane wave or the phase velocity?
So the phase velocity from problem set one-- with omega equal to h-bar k squared upon 2m-- is omega upon k, which is just equal to h-bar k upon 2m. But notice that h-bar k is the momentum, and momentum divided by the mass is the classical velocity.
So this says that the phase of any plane wave is moving with half the classical velocity. That's weird. On the other hand, the group velocity of a wave packet, of a normalizable wave packet, is d omega dk. And d omega dk will pull down an extra factor of 2. This is equal to h-bar k upon m. This is the classical velocity.
So this is the difference between the phase and the group velocity, and that's exactly what we're seeing here. So the first thing you notice is you see the difference between the phase and the group velocity. So that's good. That should be reassuring.
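You can see the factor of 2 directly in a numerical sketch, with illustrative parameters and units h-bar = m = 1: the envelope of a boosted Gaussian moves at the group velocity d omega dk = k naught, twice the phase velocity omega over k.

```python
import numpy as np

# hbar = m = 1 units; free-particle dispersion omega(k) = k^2 / 2
k0 = 4.0
v_phase = (k0**2 / 2) / k0   # omega / k     = k0 / 2 = 2.0
v_group = k0                 # d omega / dk  = k0     = 4.0

# Evolve a boosted Gaussian packet exactly in k-space and track its peak
N, L = 4096, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])
psi0 = np.exp(-x**2 / 4) * np.exp(1j * k0 * x)

t = 5.0
psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * k**2 / 2 * t))
peak = x[np.argmax(np.abs(psi_t)**2)]

print(peak / t)  # close to 4.0: the envelope moves at v_group, not omega/k
```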
But there's something else that's really quite nice about this. Let's make the wave packet really very narrow. So here it's much more narrow, and let's let the system evolve. And now you can see that the probability density is falling off quite rapidly. Let's see that again. Cool.
The probability distribution is falling off very rapidly. And there's also kind of a cool thing that's happening is you're seeing an accumulation of phase up here and a diminution down here. We can make that a little more sharp. Oops, sorry.
Yeah. So that is a combination of good things and the code being slightly badly behaved. So the more narrow I make that initial-- so let's-- so I made that really narrow, and how rapidly does it fall off?
So that very narrow wave packet by time 10 is like at a quarter. But if we made it much wider, that very narrow wave packet started out much lower and it stays roughly the same height. It's just diminishing very, very slowly. And that's just the 1 over a effect.
So this is nice. And I guess the way I'd like to think about this is this is confirming rather nicely, our predictions.
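The 1 over a effect can also be checked against the analytic Gaussian result, width of a t equals a times the square root of 1 plus h-bar t over 2 m a squared, quantity squared. Here's a sketch with illustrative widths, in units h-bar = m = 1:

```python
import numpy as np

# hbar = m = 1; evolve resting Gaussians of two initial widths and measure
# the spread of |psi|^2 numerically, comparing to a*sqrt(1 + (t/(2 a^2))^2).
def width_at(a, t, N=4096, L=400.0):
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    k = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])
    psi = np.exp(-x**2 / (4 * a**2))
    psi_t = np.fft.ifft(np.fft.fft(psi) * np.exp(-1j * k**2 / 2 * t))
    p = np.abs(psi_t)**2
    p /= p.sum()
    return np.sqrt((p * x**2).sum() - (p * x).sum()**2)  # std dev of position

t = 10.0
for a in (0.5, 2.0):
    exact = a * np.sqrt(1 + (t / (2 * a**2))**2)
    print(a, round(width_at(a, t), 3), round(exact, 3))
# The narrow packet (a = 0.5) has spread far more than the wide one (a = 2).
```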
So now I want to take a slightly different system. So this system is really silly because we all know what's going to happen. Here's a hill, and if this were the real world I would think of this as some potential hill that has some nice finite fall off. The sort of place you don't want to go skiing off.
PROFESSOR: Well, I don't.
So imagine you take an object of mass m and you let it slide along in this potential, what will happen? It will move along at some velocity. It will get to this cliff, it will fall off the cliff and end up going much faster at the bottom. Yeah? So if you kick it here, it will get over there and it will end up much faster.
What happens to this wave packet. So we're solving the Schrodinger equation. It's exactly the same thing we've done before. Let's solve it. So here's our initial wave packet, nice and well-localized. There's our wave function. And what happens?
So at that point something interesting has happened. You see that for the most part, most of the probability is over here, but not all the probability is over here. There's still a finite probability that the particle stays near the wall for a while. Let's watch what happens to that probability as time goes by.
And now you should notice that it's decayed into a wave packet that's over here, in superposition with a wave packet over here moving to the left. This wave has scattered off the barrier going downhill. It's mostly transmitted, but some of it is reflected. Which is kind of spooky.
So let's watch that-- I'm going to make this much more extreme. So here's a much more extreme version of this. I'm now going to make the energy very close to the height of the potential. The energy is very, very close to the height of the potential, and the potential is far away. We're still reasonably local-- I can make it a little less localized just to really crank up the suspense.
So watch what happens now. So first off, what do you expect to happen? So we've made the energy of the wave packet just barely higher than the energy of the hill. So it's almost classically disallowed from being up here.
What does that mean about its velocity, its effective velocity?
PROFESSOR: Yeah. The momentum, the expectation value for momentum should be very low because it just has a little bit of spare energy. So the expected value for the momentum should be very low. It should dribble, dribble, dribble, dribble, dribble, dribble, dribble till it gets to the cliff, and then-- [WHISTLE]-- go flying off.
So let's see if that's what we see. So it's dribbling, but look at the probability density down here. Oh, indeed, we do see little waves, and those waves are moving off really fast. But the amplitude is exceedingly small.
Instead, what we're seeing is a huge pile up of probability just before the wall, similar to what we saw a minute ago. Except the probability is much, much larger. And what's going to happen now?
PROFESSOR: It's leaking. It's definitely leaking. But an awful lot of the probability density is not going off the cliff. This would be Thelma and Louise cruising off the cliff and then just not falling. This is a very disconcerting-- you guys get to-- yeah.
So see what happens, see what happened here. We've got this nice big amplitude to go across or to reflect back, to scatter back. You really want to fall down classically, but quantum mechanically you can't. It's impossible.
So, in fact, the probability that it reflects back is 41%, and we're going to see how to calculate that in the next few lectures.
AUDIENCE: So the transmission was actually really high.
PROFESSOR: Transmission was reasonably high, yup.
AUDIENCE: And it was really [INAUDIBLE].
PROFESSOR: It wasn't really, sorry?
AUDIENCE: It wasn't really apparent graphically.
PROFESSOR: Yeah, it wasn't apparent. The way it was apparent was just the fact that the amplitude that bounced back was relatively small. So what was going on is any little bit of probability that fell down had a large effective momentum. It just ran off the screen very rapidly. So it's hard to see that in the simulation, it's true.
We can make the-- we can squeeze-- ah, there we go. So here is-- that's about as good as I'm going to be able to get. And it's just going to be glacially slow. But this is just going to be preposterously slow. I'm not even sure there's much point in-- But you can see what's going to happen. They're going to build up, we're going to get a little leak off, but for the most part, the wave packet's going to go back to the left.
Questions about this one before we move on to the next? Yeah.
PROFESSOR: Are the-- Yeah, exactly. So r and t here are going to be defined as the probability that this wave packet gets off to infinity in either direction.
AUDIENCE: What happens if you make the energy lower than--
PROFESSOR: Then you can't have a wave packet on the left that's moving. Because remember that the wave packet, in order to have an expected momentum, needs to be oscillating. But if the energy is lower than the potential, it's exponential. So the expectation value for the momentum is 0. So you can't have a particle moving in from the left if it's got energy less than the height of the barrier. Cool?
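That claim, that a purely real, decaying wave function carries zero momentum expectation, is easy to check directly. A tiny sketch with an illustrative grid:

```python
import numpy as np

# For any real, normalizable psi, <p> = integral of psi * (-i d/dx) psi vanishes.
# Use a real exponential, like the under-barrier tail of an energy eigenstate.
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
psi = np.exp(-np.abs(x))
psi /= np.sqrt(np.sum(psi**2) * dx)  # normalize

dpsi = np.gradient(psi, dx)
p_exp = np.sum(psi * (-1j) * dpsi) * dx  # discrete <p>

print(round(abs(p_exp), 8))  # 0.0 (up to roundoff)
```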
AUDIENCE: Sorry, what's this green thing?
PROFESSOR: The green thing is the energy of the wave packet that I'm sending in.
OK, so let's go to the next one. So this is something we'd like to understand. The reason I'm showing you this is I want to explain this phenomenon. We already did the dispersion. I want to explain this phenomenon that you reflect when going downhill, which is perhaps surprising. And I want to ask how efficiently you reflect off any given barrier.
AUDIENCE: Can you describe the [INAUDIBLE] physical experiment that would show that done, though?
PROFESSOR: Oh yeah, absolutely. So suppose I have a little capacitor plate and there's a potential difference across it and a hole so a particle could shoot through it. I mean it doesn't have to be an insanely small hole, but a hole. So that if a particle goes from here to the other side of the capacitor plate, it will have accelerated across the potential difference.
So if you think about the effective potential energy, the potential is decreasing linearly. It's a constant electric field. So over this short domain between the capacitor plates, we have effectively a linear potential energy.
So I have a particle that I send with very little momentum, but it carries some charge. It gets to the capacitor plates, and if it goes through the capacitor plate, it ends up with a lot more energy relative to the potential energy. That cool?
So a little capacitor plate with a hole in it is a beautiful example of this system. Good? Now, making it infinitely sharp, well, that would require-- you know. But making it very sharp is no problem.
So here's the inverse of what we just did, which is sending a particle into a barrier. And so there you see it does some quite awesome things. And I want to pause it right here. This is the energy band. So the green represents-- it says that at any given energy inside this green band, there's some contribution from the corresponding energy eigenstate to our wave packet.
There isn't any contribution from states up here where there's no green, which means that all the contributions come from energy eigenstates with energy below the height of the barrier. None of them have enough energy to cross the barrier.
And what we see when we send in our wave packet is some complicated mucking around near the barrier, but in particular, right there, there's a finite non-zero probability that you're found in the classically disallowed region. In the region where you didn't have enough energy to get there. We see right there.
And meanwhile, there's also a huge pile up here. This is a hard wall. Normally classically from a hard wall, you roll along, you hit the hard wall, and you bounce off instantaneously with the same velocity.
But here what we're saying is no, the wave doesn't just exactly bounce off instantaneously. We get this complicated piling up and the interference effect. So what's going on there? Well, it's precisely an interference effect.
So look at the wave function. The wave function is a much easier thing to look at. So the red is the amplitude. The red is the real part. The black is the phase. So the real part is actually behaving quite reasonably. It looks like it's just bouncing right off.
But the physical thing is the probability distribution. The probability distribution has interference terms from the various different contributions to the wave function. So at the collision those interference effects are very important. And at late times notice what's happened. At late times the probability distribution all went right back off.
So that's something else we'd like to understand. Penetration into the barrier from a classically disallowed wave packet. Yup.
AUDIENCE: [INAUDIBLE] the wave packet is also expanding again?
PROFESSOR: Yeah. The wave packet, exactly. So the wave packet is always going to expand. And the reason is we're taking our wave packet, which is a reasonably well-localized approximately Gaussian wave packet. So whatever else is going on, while it's in the part of the potential where the potential is constant, it's also just dispersing. And so we can see that here. Let's make that more obvious.
Let's make the initial wave packet much more narrow. So if the wave packet's much more narrow, we're going to watch it disperse, and let's move the wall over here. So long before it hits the wall it's going to disperse, right? That's what we'd expect. And lo, watch it disperse.
OK, now it gets to the wall. And it's sort of a mess. Now, you see that there's contributions to the wave function over here, there's support for the wave function moving off to the right. But now notice that we have support on energy eigenstates with energy above the barrier. So, indeed, it was possible for some contribution to go off to the right.
So one more of these guys. I guess two more. These are fun.
So here's a hard wall. Let's make the-- So here's a hard wall. But it's a finite width barrier. What happens? Well, this is basically the same as what we just saw. We collide up against that first wall, and since there's an exponential fall off it doesn't really matter that there's this other wall where it falls down over here because the wave function is just exponentially suppressed in the classically disallowed region.
So let's see that again. It's always going to be exponentially suppressed in the classically disallowed region. As you see, there's that big exponential suppression. So there's some finite probability here, but you're exponentially unlikely to find it out here.
On the other hand, if we make the barrier-- so let's think about this barrier. Classically, this is just as good as a thick barrier. You can't get passed this wall. But quantum mechanically what happens? You go right through. Just--
PROFESSOR: --right on--
Yeah, uh-huh. Right through. It barely even changes shape. It barely even changes shape. Just goes right on through.
Now, there's some probability that you go off to the left, that you bounce off.
I share your pain. I totally do. So I would like to do another experiment which demonstrates the difference between classical and quantum mechanical physics.
So we have some explaining to do. And this is going to turn out to be-- surprisingly, this is not a hard thing to explain. It's going to be real easy. Just like the dispersion, this is going to be not a hard thing to explain.
But the next one and this is-- oh yeah, question.
AUDIENCE: Does the height of the spike matter?
PROFESSOR: It does, absolutely. So the height of the spike and the width of the spike matter. So let's make it a little wider. And what we're going to see now is oh, it's a little less likely to go through. If we make it just a tiny bit wider we'll see that it's much less likely to go through. We'll see much stronger reflected wave. So both the height and width are going to turn out to matter.
So here you can see that there's an appreciable reflection, which there wasn't previously. Let's make it just a little bit wider. And now it's an awful lot closer to half and half. See the bottom two lumps? Let's make it just ever so slightly wider.
And again, looking down here we're going to have a transmitted bit, but we're also going to have a reflected-- now it looks like the reflected bit is even a little bit larger maybe. And we can actually compute the reflection and transmission here. 72% gets reflected and 28% gets transmitted.
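We haven't derived it in lecture yet, but the standard closed-form transmission probability for a square barrier shows exactly this dependence on height and width. A sketch in units h-bar = m = 1, with illustrative numbers:

```python
import numpy as np

# hbar = m = 1; standard textbook transmission through a square barrier of
# height V0 and width L, for incident energy E < V0.
def T(E, V0, L):
    alpha = np.sqrt(2 * (V0 - E))  # decay constant under the barrier
    return 1.0 / (1 + V0**2 * np.sinh(alpha * L)**2 / (4 * E * (V0 - E)))

E, V0 = 0.5, 1.0  # E = V0/2, so here T reduces to 1/cosh(L)^2
for L in (0.5, 1.0, 2.0, 4.0):
    print(L, round(T(E, V0, L), 4))  # transmission falls fast as L grows
```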
I really strongly encourage you to play with these simulations. They're both fun and very illuminating. Yeah.
AUDIENCE: [INAUDIBLE] just the higher it got, the less got transmitted?
AUDIENCE: So if this were a delta function would [INAUDIBLE]?
PROFESSOR: Well, what's the width?
PROFESSOR: So what happens when we make the well less and less-- or more and more thin? More goes through, yeah. So we've got a competition between the height and the width.
So one of the problems on your problem set, either this week or next week, I can't recall, will be for the delta function barrier, compute the transmission probability. And, in fact, we're going to ask you to compute the transmission probability through two delta functions. And, in fact, that's this week.
And then next week we're going to show you a sneaky way of using something called the S matrix to construct bound states for those guys. Anyway, OK.
So next, the last simulation today. So this is the inverse of what we just did. Instead of having a potential barrier, we have a potential well.
So what do you expect to happen? Well, there's our wave packet, and it comes along, and all heck breaks loose inside. We get some excitations and it tunnels across. But we see there's all this sort of wiggling around inside the wave function.
And we get some support going off to the right, some support going off to the left. But nothing sticks around inside, which it looks like there is, but it's going to slowly decay away as it goes off to the boundary. These are scattering states.
So one way to think about what's going on here is it first scatters off, it scatters downhill, and then it scatters uphill. But there's a very funny thing that happens when you can scatter uphill and scatter downhill.
As we saw, any time you scatter downhill, there's some probability that you reflect and some probability that you transmit. And when we scatter uphill, there's some probability you transmit, and some probability that you reflect.
But if you have both an uphill and a downhill when you have a well, something amazing happens. Consider this well. This didn't work very well. Let's take a much larger one.
So at some point, I think it was last year, some student's cell phone went off, and I was like oh, come on, dude. You can't do that. And like five minutes later my cell phone went off.
So I feel your pain, but please don't let that happen again.
So here we go. So this is a very similar system-- well, the same system, but with a different wave packet, and a slightly different width. And now what happens? Well, all this wave packet just sort of goes on, but how much probability is going back off to the left? Very little. In fact, we can really work this out.
Sorry, I need this, otherwise I'm very bad at this. There's some probability that it reflects at the first wall. There's some probability that it reflects at the second wall. But the probability that it reflects from the combination of the two, 0. How can that be?
So this is just like the boxes at the very beginning. There's some probability. So to go in this system-- so let's go back to the beginning.
There's some amplitude to go from the left side to the right-hand side by going-- so how would you go from the left-hand side to the right-hand side? You transmit and then transmit, right? So it's the transmission amplitude times transmission amplitude gives you the product.
But is that the only thing that could happen? What are other possible things that could happen?
PROFESSOR: Yeah. You could transmit, reflect, reflect, transmit. You could also transmit, reflect, reflect, reflect, reflect, transmit. And you could do that an arbitrary number of times. And each of those contributions is a contribution to the amplitude. And when you compute the probability to transmit, you don't take the sum of the probabilities. You take the square of the sum of the amplitudes.
And what we will find when we-- I guess we'll do this next time when we actually do this calculation-- is that that interference effect from taking the square of the sum of the amplitudes from each of the possible bouncings can interfere destructively and it can interfere constructively.
We can find points when the reflection is extremely low. And we can find points where the transmission is perfect.
AUDIENCE: So two questions. One, the computer says that the reflection coefficient is 0. Why did it look like there was a wave packet going off back to the left?
PROFESSOR: Excellent question. And, indeed, there is a little bit here. So now I have to tell you a little bit about how the computer is calculating this reflection amplitude.
The way it's doing that is it's saying suppose I have a state which has a definite energy at the center of this distribution. Then the corresponding reflection amplitude would be 0. But, in fact, I have a superposition. I've taken a superposition of them. And so the contributions from slightly different energies are giving me some small reflection.
AUDIENCE: And also, why do you have to fine-tune the width of that as well?
PROFESSOR: Yeah. That's a really-- let me rephrase the question. The question is why do you have to fine-tune the width of that well? Let me ask the question slightly differently. How does the reflection probability depend on the width of the well?
What's going on at these special values of the energy or special values of the width of the well when the transmission is perfect? Which I will call a resonance. Why is that happening? And we'll see that.
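As a preview of that resonance calculation: summing the multiple-reflection amplitudes gives the standard closed-form transmission for a square well, and it hits exactly 1 when the wavenumber inside fits the well. A sketch in units h-bar = m = 1, with an illustrative depth and width:

```python
import numpy as np

# hbar = m = 1; standard transmission across a square well of depth V0,
# width L, for energy E > 0. T = 1 exactly when k_inside * L = n * pi.
def T_well(E, V0, L):
    k_in = np.sqrt(2 * (E + V0))  # wavenumber inside the well
    return 1.0 / (1 + V0**2 * np.sin(k_in * L)**2 / (4 * E * (E + V0)))

V0, L = 10.0, 1.0
n = 2
E_res = (n * np.pi)**2 / 2 - V0  # choose E so that k_in * L = 2 * pi

print(round(T_well(E_res, V0, L), 6))  # 1.0: perfect, resonant transmission
print(round(T_well(1.0, V0, L), 3))    # ~0.3: strong reflection off resonance
```

This perfect transmission at special energies is the Ramsauer-Townsend-type effect the lecture is pointing at.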
So that's a question I want you guys to ask. So I'm done now with the experiments. You guys should play with these on your own time to get some intuition.
AUDIENCE: Isn't the [INAUDIBLE] you just showed actually a classical [INAUDIBLE] objects?
PROFESSOR: So excellent. So here's what I'm going to tell you. I'm going to tell you two things.
First off, if I take a classical particle and I put it through these potentials, for example, this potential. If I take a classical particle and I put it in this potential, it's got an energy here, does it ever reflect back?
PROFESSOR: No. Always, always it rolls and continues on. Always. So if I take a quantum particle and I do this, will it reflect back?
PROFESSOR: Apparently sometimes. Now, how do I encode that? I encode that in studying the solutions to the wave equation-- i h-bar dt psi is equal to minus h-bar squared upon 2m dx squared psi plus V psi.
So this property of the quantum particle that it reflects is a property of solutions to this equation. When I interpret the amplitude squared as a probability.
But this equation doesn't just govern-- this isn't the only place that this equation's ever shown up. An equation at least very similar to this shows up in studying the propagation of waves on a string, for example. Where the role of the potential is played by the tension or the density of the string.
So as you have a wave coming along the string, as I'm sure you did in 8.03, you have a wave coming along a string. And then the thickness or density of the string changes, becomes thicker or thinner, then sometimes you have reflections off that interface-- an impedance mismatch is one way to phrase it.
And as you move across to the other side you get reflections again. And the thing that matters is not the amplitude of the-- one of the things that matters is not the amplitude, but it's the intensity. And the intensity's a square. And so you get interference effects.
So, indeed, this phenomenon of reflection and multiple reflection shows up in certain classical systems, but it shows up not in classical particle systems. It shows up in classical wave systems, like waves on a chain or waves in a rope. And they're governed by effectively the same equations, not exactly, but effectively the same equations, as the Schrodinger evolution governs the wave function.
But the difference is that those classical fields, those classical continuous objects have waves on them, but those waves are actual waves. You see the rope. The rope is everywhere.
In the case of the quantum mechanical description of a particle, the particle could be anywhere, but it is a particle. It is a chunk, it is a thing. And this wave is a probability wave in our knowledge or our command of the system.
Other questions? OK. So let's do the first example of this kind of system. So let's do the very first example we talked about, apart from the free particle, which is the potential step.
First example is the potential step. And what we want to do is we want to find the eigenfunctions of the following potential. Constant, barrier, constant. And I'm going to call the height here V naught, and the height here 0. And I'll call this position x equals 0.
And I want to send in a wave-- think about the physics of sending in a particle that has an energy E naught. Now to think about the dynamics, before we think about time evolution of a localized wave packet, as we saw in the free particle, it behooves us to find the energy eigenstates, then we can deduce the evolution of the wave packet by doing a re-summation by using the superposition principle.
So let's first find the energy eigenstates. And, in fact, this will turn out to encode all the information we need. So we want to find phi e of x. What is the energy eigenfunction with energy E, let's say.
And we know the answer on this side and we know the answer on this side because it's just constant. That's a free particle and we know how to write the solution. So on the left-hand side it's just a sum of plane waves, exactly of that form: a e to the i kx plus b e to the minus i kx on the left.
And on the right-- on this side it's classically disallowed-- we have c e to the plus alpha x plus d e to the minus alpha x. Where h-bar squared k squared upon 2m is equal to E, and h-bar squared alpha squared upon 2m is equal to V naught minus E, a positive quantity.
So we need to satisfy our various continuity and normalizability conditions. And in particular, what had the wave function better do out this way?
PROFESSOR: It had better not diverge-- are we going to be able to build normalizable wave packets-- or are we going to be able to find normalizable wave functions of this form? No. Because there are always going to be plane waves on the left.
So the best we'll be able to do is find delta function normalizable energy eigenstates. That's not such a big deal because we can build wave packets, as we discussed before.
But on the other hand, it's one thing to be delta function normalizable going to just a wave, it's another thing to diverge. So if we want to build something that's normalizable up to a delta function normalization condition, this had better vanish. Yeah?
But on top of that, we need that the wave function is continuous, so at x equals 0 we need that phi and phi prime are continuous. So that turns out to be an easy set of equations. So for phi, this says that a plus b is equal to d. And for phi prime, this says that ik times a minus b is equal to, on the right-hand side, minus alpha d. Because the exponentials all evaluate to 1 at x equals 0.
So we can invert these to get that d is equal to 2-- that's weird.
AUDIENCE: You have a d in the bottom. There's [INAUDIBLE].
PROFESSOR: Oh, sorry. d. d. Thank you. To invert those you get d is equal to 2k over k plus i alpha, I think? Yes. And b is equal to k minus i alpha over k plus i alpha.
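If you want to check that inversion yourself, here's a short numerical verification (my own sketch with illustrative numbers, not part of the lecture), in units where h-bar and m are 1:

```python
import numpy as np

# Illustrative, assumed values: hbar = m = 1, step height V0 > E.
hbar, m = 1.0, 1.0
V0, E = 5.0, 1.0
k = np.sqrt(2 * m * E) / hbar
alpha = np.sqrt(2 * m * (V0 - E)) / hbar

a = 1.0                                    # normalize the incident amplitude
b = (k - 1j * alpha) / (k + 1j * alpha)    # reflected amplitude
d = 2 * k / (k + 1j * alpha)               # amplitude of the decaying tail

# Matching conditions at x = 0:
assert np.isclose(a + b, d)                      # phi continuous
assert np.isclose(1j * k * (a - b), -alpha * d)  # phi prime continuous
```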
And so you can plug these back in and get the explicit form of the wave function. Now, what condition must the energy satisfy in order that this is a solution that's continuous and whose derivative is continuous?
Last time we found that for the finite well, in order to satisfy the continuity and normalizability conditions, only certain values of the energy were allowed. Here we've imposed normalizability and continuity. What's the condition on the energy?
This is a trick question. There isn't any. For any energy we could find a solution. There was no consistency condition amongst these. Any energy. So are the energy eigenvalues continuous or discrete?
PROFESSOR: Continuous. Anything above 0 energy is allowed for the system, it's continuous.
So what does this wave function look like? Oh, I really should have done this on the clean board. What does this wave function look like? Well, it's oscillating out here, and it's decaying in here, and it's smooth.
So what's the meaning of this? What's the physical meaning? Well, here's the way I want to interpret this. How does this system evolve in time? How does this wave function evolve in time?
So this is psi of x. What's psi of x and t? Well, it just gets multiplied by an e to the minus i omega t. Everyone cool with that?
So all I did is I just said that this is an energy eigenfunction with energy e, and frequency omega is equal to e upon h-bar. And saying that this is an energy eigenstate tells you that under time evolution, it evolves by rotation by an overall phase, e to the minus i omega t.
And then I just multiplied the whole thing by e to the minus i omega t and distributed it. So each term picks up the minus i omega t in its phase.
So doing that, though, gives us a simple interpretation. Compared to our free particle, e to the i kx minus omega t, it's a wave moving to the right with velocity omega over k, the phase velocity. This is a right-moving contribution. So plus. And this is a left-moving contribution. Everyone cool with that?
So on the left-hand side, what we have is a superposition of a wave moving to the right and a wave moving to the left. Yeah?
On the right, what do we have? We have an exponentially falling function whose phase rotates. Is this a traveling wave? No. It doesn't have any moving crests. It just has an overall phase that rotates.
And now here's one key thing. If we look at this, what's b? We did the calculation of b. b is k minus i alpha over k plus i alpha. What's the norm squared of b?
Well, to get the norm squared of b, you multiply this by its complex conjugate. But the numerator and denominator are each other's complex conjugates, so they have the same magnitude, and the magnitudes cancel. The norm squared is 1. So b is a pure phase.
So now look back at this. If b is a pure phase, then this left-moving piece has the same amplitude as the right-moving piece. This is a standing wave. All it's doing is rotating by an overall phase.
But it's a standing wave because it's a superposition of a wave moving this way and a wave moving that way with the same amplitude, just slightly shifted. And the fact that there's a little shift tells you that its norm squared is not constant. It oscillates in space.
So what we see is we get a standing wave that matches nicely onto a decaying exponential in the classically disallowed region. So what's the probability to be found arbitrarily far to the right? 0. It's exponentially suppressed. What's the probability that you're found at some point on the left? Well, it's the norm squared of that, which is some standing wave.
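To see that standing-wave pattern concretely, here's a short sketch (mine, with made-up values for k and the phase of b) showing that the norm squared oscillates in x rather than being constant:

```python
import numpy as np

# Illustrative, assumed values: k and the scattering phase phi are made up.
k = 1.3
phi = 0.7
x = np.linspace(-10.0, 0.0, 1001)   # the region left of the step

# Standing wave: incident piece plus reflected piece with b = e^{i phi}.
psi = np.exp(1j * k * x) + np.exp(1j * phi) * np.exp(-1j * k * x)

# Its norm squared is 2 + 2 cos(2kx - phi): it waves between 0 and 4.
rho = np.abs(psi) ** 2
assert np.allclose(rho, 2 + 2 * np.cos(2 * k * x - phi))
```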
So the question is, how good a mirror is this?
PROFESSOR: It's a perfect mirror. Well-- now we have to quibble about what you mean by perfect mirror. It reflects with a phase. b is that phase, and I'm going to call it-- I want to pick conventions here that are consistent throughout-- e to the i phi. So it reflects with a phase phi, which is to say that the norm of b squared is 1.
And what happens-- I don't want to do this. So in what situation would you expect this to be a truly perfect mirror, this potential? When do you expect reflection to be truly perfect?
PROFESSOR: Yeah, exactly. So if the height here were infinitely high, then there's no probability to leak in at all. There's no exponential tail. It's just 0. So in the case that v naught is much greater than e, what do we get? That means alpha's gigantic, and k is, by comparison, very, very small.
If alpha is gigantic and k is negligibly small, then this just becomes minus i alpha over plus i alpha. This just becomes minus 1 in that limit.
So in the limit that the wall is really a truly hard wall, this phase becomes minus 1. What happens to a wave when it bounces off a perfect mirror?
PROFESSOR: Its phase is inverted, exactly. Here we see that the phase shift is pi. You get a minus 1 precisely when the barrier is infinitely high. When the barrier's a finite height, classically it would be a perfect mirror still. But quantum mechanically it's no longer a perfect mirror. There's a phase shift which indicates that the wave is extending a little bit into the material. The wave is extending a little bit into the classically disallowed region.
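Here's a small sketch of that limit (my own, with illustrative units where h-bar and m are 1): the reflected amplitude b is always a pure phase, and it approaches minus 1 as the step height goes to infinity:

```python
import numpy as np

hbar, m = 1.0, 1.0   # assumed units

def reflection_amplitude(V0, E):
    """b = (k - i alpha) / (k + i alpha) for a step of height V0 > E."""
    k = np.sqrt(2 * m * E) / hbar
    alpha = np.sqrt(2 * m * (V0 - E)) / hbar
    return (k - 1j * alpha) / (k + 1j * alpha)

E = 1.0
# A pure phase for any finite step height above E:
assert np.isclose(abs(reflection_amplitude(2.0, E)), 1.0)
# In the hard-wall limit V0 >> E, b -> -1: a phase shift of pi.
assert abs(reflection_amplitude(1e8, E) - (-1.0)) < 1e-3
```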
This phase shift, called the scattering phase shift, is going to play a huge role for us. It encodes an enormous amount of the physics, as we'll see over the next few weeks. At the end of the semester it'll be important for us in our discussion of bands and solids.
So at this point, though, we've got a challenge ahead of us. I'm going to be using phrases like this is the part of the wave function moving to the right, and that's the part of the wave function moving to the left. I'm going to say the wave function is a superposition of those two things.
But I need a more precise version of stuff is going to the right. And this is where the probability density, psi norm squared, and the probability current come in, which you guys have constructed on the problem set. The current is h-bar upon 2mi times psi star dx psi minus psi dx psi star.
These guys satisfy the conservation equation, d rho dt-- the time rate of change of the density at some particular point at time t-- is equal to minus the derivative with respect to x of j. And I should really call this jx, the current in the x direction.
So this is the conservation equation, just like for the conservation of charge, and you guys should have all effectively derived this on the problem set.
AUDIENCE: Is the i under or above [INAUDIBLE]?
PROFESSOR: Is the i-- oh, sorry, the i is below. That's just my horrible handwriting.
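One consistency check you can run yourself (a sketch of mine with made-up numbers, not from the lecture): for the standing-wave energy eigenstate above, the density rho is time independent, so the conservation equation forces dj/dx to vanish. In fact j vanishes identically, because the left-moving and right-moving pieces carry equal and opposite current:

```python
import numpy as np

# Illustrative, assumed values: hbar = m = 1, k and phase phi made up.
hbar, m = 1.0, 1.0
k, phi = 1.3, 0.7
x = np.linspace(-10.0, 0.0, 20001)
psi = np.exp(1j * k * x) + np.exp(1j * phi) * np.exp(-1j * k * x)

# j = (hbar / 2mi) * (psi* d_x psi - psi d_x psi*)
dpsi = np.gradient(psi, x)
j = (hbar / (2j * m)) * (np.conj(psi) * dpsi - psi * np.conj(dpsi))

# The standing wave carries no net probability current anywhere.
assert np.max(np.abs(j)) < 1e-4
```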
So what I want to do very quickly is come up with an unambiguous definition of how much stuff is going left and how much stuff is going right. And the way I'm going to do that is to say that the wave function in general-- I'll do that here-- when we have a wave function at some point, it can be approximated in the following way, if the potential is roughly constant at that point.
The wave function is going to be psi is equal to psi incident plus psi reflected on the left of my barrier, plus psi transmitted on the right. Where psi incident is the right-moving part, the part that's moving towards the barrier.
Psi reflected is the left-moving part that's reflected. Psi transmitted is the part that is on the right-hand side, the entire thing. So here the idea is I'm sending in something from the left. It can either reflect or it can transmit. So this is just my notation for this for these guys.
And so what I want is a measure of how much stuff is going left and how much stuff is going right. And the way to do that is to use the probability current, and to say that associated to the incident term in the wave function is an incident current, which is h-bar k upon m times norm a squared.
So if we take this wave function, take that expression, plug it into j, this is what you get. If you take the reflected part, b e to the i times minus kx minus omega t, and you plug it into the current expression-- sorry, I should call this capital J-- you get the part of the current corresponding to the reflected term, which is minus h-bar k upon m times norm b squared.
And we'll get that J transmitted is 0-- something that you proved on your problem set: if the wave function is real, up to an overall phase, the current is 0.
So let's think about what this is telling us quickly. So what is a current? A current is the amount of stuff moving per unit time-- for electric current, charge times velocity. So what's the stuff we're interested in here? It's the probability density.
There's a norm a squared. That's the probability density of the right-moving piece, considered in isolation. So that's the probability density. It's the amount of stuff.
And this is the momentum of that wave, e to the i kx: there's h-bar k, the momentum, divided by the mass, which is the classical velocity. The amount of stuff times the velocity. That's a current. It's the rate at which probability is flowing across a point.
Similarly, Jr, that's the amount of stuff, and the velocity is minus h-bar k upon m. So again, the current is the amount of stuff times the velocity.
I'm going to define-- this is just a definition, but it's the reasonable choice of definition-- the transmission probability as the ratio of the magnitude of J transmitted over J incident. And here's why this is the reasonable quantity.
This is saying, how much current is moving in from the left, and how much stuff is moving off to the right? So what is this in this case? 0, although this is a general definition.
And similarly, the reflection probability you can either write as 1 minus the transmission-- because either you transmit or you reflect, so the probabilities must sum to 1-- or you can write r as the ratio of the reflected current to the incident current: what fraction of the incident current is, in fact, reflected? And here this is 1, because b is a phase, and the norm squared of a phase is 1.
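Putting the pieces together numerically (my own sketch with illustrative values, h-bar = m = 1): below the step the transmitted current vanishes and the reflected current exactly balances the incident one, so T = 0 and R = 1:

```python
import numpy as np

# Assumed illustrative values: hbar = m = 1, step height V0 > E.
hbar, m = 1.0, 1.0
V0, E = 5.0, 1.0
k = np.sqrt(2 * m * E) / hbar
alpha = np.sqrt(2 * m * (V0 - E)) / hbar

a = 1.0
b = (k - 1j * alpha) / (k + 1j * alpha)

J_incident = (hbar * k / m) * abs(a) ** 2      # right-moving stuff
J_reflected = -(hbar * k / m) * abs(b) ** 2    # left-moving stuff
J_transmitted = 0.0                            # real decaying tail: no current

T = abs(J_transmitted) / abs(J_incident)
R = abs(J_reflected) / abs(J_incident)
assert np.isclose(T, 0.0) and np.isclose(R, 1.0)
```

Note the net current J_incident plus J_reflected plus J_transmitted is zero, as it must be for a stationary state.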
So on your problems set, you'll be going through the computation of various reflection and transmission amplitudes for this potential. And we'll pick up on this with the barrier uphill next week.
Have a good spring break, guys.