Lecture 6: Wave Profiles, Heat Equation / point source


NARRATOR: The following content is provided by MIT OpenCourseWare under a Creative Commons license. Additional information about our license and MIT OpenCourseWare in general is available at ocw.mit.edu.

PROFESSOR: Ready to go. So first of all, to say thank you to several people in the class who sent codes to find these wavefronts when a shock, a step function, is moving along with velocity c. I had roughly drawn what these three methods produced, but it was much better to see it in Mr. Sekora's movie that's on the 18.086 site. And sure enough, behind the moving front is an overshoot that reminds us of the Gibbs phenomenon, from Lax-Wendroff, and I think it's sort of unavoidable if we have a negative coefficient -- one of the three coefficients at the previous time is negative for Lax-Wendroff; that's how it gets its second-order accuracy, and we see that better accuracy in a much steeper profile.

Then Lax-Friedrichs, if you've noticed, has a sort of jagged profile. Maybe you've realized why that would be. Somehow Lax-Friedrichs kind of operates on a staggered grid. We'll see, actually with -- I've drawn a staggered grid here, so I can say what I mean. Lax-Friedrichs computes this value from the two earlier values there and then at the next level again and so forth, so that actually, Lax-Friedrichs is using two staggered grids separately. I think -- normally, with Lax-Friedrichs, we call that n plus 1 and n plus 2 and n plus 3, but the picture is still right, so I think that's probably responsible for what you see, that there's two waves, two fronts, right, one delta x away from each other. And finally, upwind is also smeared out.

I guess I was so pleased to see these actually showing on the website and that makes me think of more questions. Maybe you too. First of all, as time goes forward, we see this bump beginning to appear. So am I right in thinking that it reaches a certain height and stays there? So I guess I'm thinking about sort of further questions, because every time you do a numerical experiment, good additional questions show up. So I'm wondering, as this wavefront forms, this characteristic profile for these three methods, first of all, does that profile settle down to a steady profile? Ultimately it'd be great to be able to predict what it would be. In particular, does that bump reach a certain height? Because you can see it just barely forming and then you see it growing to a certain point. And then, when the movie ends, fortunately it plots this picture and you really see the profile, and you can actually see, if you look closely, that it goes above 1 again and probably more.

That would be, does it approach a steady profile? Does that height settle down? It'd be fantastic to be able to predict that height. For example, you always want to know, how do things depend on the parameter? The one parameter we have in this problem is r -- c delta t over delta x. If I change r, I'm changing all the coefficients. If I changed r to be exactly 1, then the profile would be perfect, right? That's the golden ratio, r equal 1. I would be computing that value exactly from this value, at exactly the right slope, on the characteristic, and perfection.

Now as r goes away from 1, I presume -- it goes down below 1. Of course, if it goes above 1, catastrophe. If it goes below 1, I presume the bump begins to appear, goes higher. I don't know. Could you see how the dependence on r appears? And then I can even suggest one more thing, which is -- let's put a fourth guy onto this picture, which would be leapfrog. Leapfrog for first-order equations, so let me just put down below what that would be. So it would be on that kind of a staggered grid. A new value at time n plus 1 would come from these at time n, really, and this one, at time n minus 1. So leapfrog -- you'd think of it right away. I use a centered difference in time, U_(j, n+1) minus U_(j, n-1) over 2 delta t equals c times a centered difference in space, U_(j+1, n) minus U_(j-1, n) over 2 delta x. So you see right away, the 2's cancel. The delta t times c over delta x is our old ratio r, so this is easily written in terms of r again. It does have this two time step issue and of course, we have to think about stability.
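Since the one-way leapfrog scheme is written out just above, here is a minimal Python sketch of it. The periodic grid and the exact shifted profile used to seed the second starting level are assumptions for this demo, not part of the lecture:

```python
import numpy as np

def leapfrog_advection(f, c, dx, dt, steps, N):
    """Leapfrog for u_t = c u_x on a periodic grid of N points.
    f is the (periodic) initial profile.  The scheme needs two
    starting levels; here the second is seeded with the exact
    shifted solution u(x, dt) = f(x + c*dt)."""
    r = c * dt / dx                        # the ratio r = c*dt/dx
    x = dx * np.arange(N)
    U_prev = f(x)                          # level n = 0
    U = f(x + c * dt)                      # level n = 1 (exact seed)
    for _ in range(steps - 1):
        # centered in time and space -- the 2's cancel, leaving r
        U_next = U_prev + r * (np.roll(U, -1) - np.roll(U, 1))
        U_prev, U = U, U_next
    return x, U                            # U approximates u(x, steps*dt)
```

With r exactly 1, the scheme reproduces the true shifted profile, which matches the magic-ratio remark above.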

I'm writing down a fourth candidate for the equation u_t equals c*u_x, of course, before I come back to the second-order equations, or systems. I just wanted to put in these comments, partly because I'm thinking about projects that are -- and I hope you are -- so some people will have theses or problems from other courses which involve solving PDEs and that'd be perfectly natural to base some project on, but if you don't have some specific application, then here are a whole lot of interesting questions, to just go further with that. So the dependence on r would be a -- the profile dependence on r, c delta t over delta x, would be a question.

Let me just emphasize, what's the goal? When I say that this upwind one is smeared out, I mean that it doesn't satisfy what you really hope for. You want to capture that shock, that discontinuity -- not perfectly, you can't expect that -- but you want to capture it, maybe in, let's say, 2 delta x. A good method captures the shock within 2 delta x and does it without much undesirable, unphysical oscillation like that. So these are methods where we're trading off a good shock capture versus a smeared one, which is not satisfactory, really, in a lot of applications. So capturing the shock within 2 delta x, roughly, is highly desirable, and you might have to add an artificial viscosity, which we'll discuss, to get there.

So that's past things, but still very much present, if I can say. Here I've introduced a new leapfrog, a one-way equation leapfrog, I'll call this, where the last lecture was about two-way leapfrog. Leapfrog for u_tt equals c squared u_xx, a two-way wave equation. I might just mention about this one-way leapfrog, once you put in e to the i*k*x as we always do, or e to the i*k*j delta x in the discrete case -- so you would put in an e to the i*j delta x times k, and then find the G, the growth factor in -- that'll depend on k and will tell us the time dependence. It'll be a G to the n-th.

I think you'll find that for this guy, the absolute value of G is exactly 1 while it's stable. Of course, if the Courant condition is violated, then there's no way it could be stable and G's will grow, be bigger than 1. But, I think in the stable range, you will have -- this G will have absolute value exactly 1. So it'll be on the unit circle. You don't have much leeway, right? We found upwind was, like, well inside; Lax-Friedrichs was well inside; Lax-Wendroff, because it had higher accuracy for low frequencies, was closer; and I think this guy is right on. So these are the first-order methods. This would be Lax-Wendroff and this would be this second-order method. So you're really playing with fire there and maybe this is not a favorite, just because it's too close to the edge, in fact, right on the edge. So those are some comments really generated by your numerical experiments and I think that's the way this course should -- homeworks and projects should develop out of numerical experiment.
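That claim about |G| is easy to check numerically. Substituting U = G^n e^(i k j delta x) into the one-way leapfrog scheme gives the quadratic G^2 - 2i r sin(k delta x) G - 1 = 0; the particular r and k values below are just test points chosen for this sketch:

```python
import numpy as np

# Substituting U = G**n * exp(1j*k*j*dx) into one-way leapfrog gives
#   G**2 - 2j*r*sin(k*dx)*G - 1 = 0.
def growth_factors(r, k_dx):
    s = np.sin(k_dx)
    disc = np.sqrt(1 - (r * s) ** 2 + 0j)   # complex sqrt covers r > 1
    return 1j * r * s + disc, 1j * r * s - disc

# In the stable range r <= 1, both roots sit exactly on the unit circle.
for r in (0.5, 0.9, 1.0):
    for k_dx in np.linspace(0.0, np.pi, 7):
        for g in growth_factors(r, k_dx):
            assert abs(abs(g) - 1.0) < 1e-12

# Past the Courant limit, one root leaves the circle and grows.
g1, g2 = growth_factors(1.5, np.pi / 2)
assert max(abs(g1), abs(g2)) > 1.0
```

So the method is marginally stable: right on the edge of the unit circle, never inside it, which is the "playing with fire" point.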

Now I'll come back to complete my lecture on the second-order wave equation. So we studied leapfrog for that. I won't repeat that, but then with second-order wave equation, we have another option, which is to reduce a second-order equation with two time derivatives to a system of two equations that are first order in time and space. I've chosen to use the letters H and E as a kind of hint that we're touching a really big application, which is electromagnetism. So E for the electric field, H for the magnetic field, but here in 1-D -- so it allows me just to mention that in 3-D, when E has three components and H has three components, this is really the curl and this is really minus the curl. And the notes will show the beautiful Yee mesh, a staggered mesh that copies, for finite difference methods, the properties of the curl -- the physics that Maxwell discovered, you know, magnetic fields going around a wire along which a current is flowing.

But let me stay here with one dimension. So first of all, can we recover the wave equation from that? Let's just see that we haven't left behind the wave equation. I'm taking the time derivative of the first equation, so I get c times d second H / dt dx, right? But by the useful, extremely useful fact that this is the same as taking the t derivative first and the x derivative second, I can now use the x derivative of the second equation -- that brings another c, so now I'm up to c squared times the x derivative of the x derivative. So we eliminated H, got back to one equation, second order, and it's the wave equation.

I want to show the same thing now for finite differences. It's just -- it's such a simple and useful device and our point will be that it works for -- just as well for differences. That allows us to do this staggered grid idea. So the staggered grid idea is, I'll do E on this grid, at these grid points, H at the half points, E again at the integer points, H again at the half points. Now, so my difference method, leapfrog, on this staggered mesh, will copy -- so the first equation will give E at the new time from the difference in H at this time. So, you see it. dE/dt is going to be replaced by that time difference. dH/dx is going to be replaced by this space difference, and that will determine E at the new time. So I need time 0 and time 1/2 to get started. Maybe I put those down here, somewhere. I have to use the initial condition, the initial velocity to get started on E and H and then I go onwards. I've got to this point and this point. Then this guy comes from this one, this one and this one. Of course, that similarly comes from the same equation shifted over.

Now I know these. Now I'm ready for this one. Coming from -- what am I approximating now? I'm on the staggered part of the grid, the half steps. I'm approximating this on the half steps, where H is defined. So that minus that is the time difference. This minus this -- which, I know these now -- is the space difference in this equation, and that's what allows me to compute the new one. You see, it's a simple idea. It's close but different from my scalar leapfrog, one-way leapfrog here. Here it was the same U at all the points. Here it is E at the integer time levels and H at the half integer time levels. It's natural to say -- I'm calling this leapfrog. Could I do this for the difference equation? I just want to see that if I eliminate H -- you saw what happened there. By using the identity that the second derivative in t and x is always the same as the second derivative in x and t, I eliminated H and got an equation that involved E only.
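A minimal sketch of this staggered leapfrog in Python. The periodic boundary is an assumption, and treating the given H0 as already being the half-time-level value H^(1/2) is a simplification for the demo:

```python
import numpy as np

def yee_1d(E0, H0, c, dx, dt, steps):
    """Staggered leapfrog for E_t = c H_x, H_t = c E_x in 1-D.
    E lives at the integer points x_j; H (array index j) lives at
    the half points x_{j+1/2}.  H0 is taken as the level H^{1/2}."""
    r = c * dt / dx
    E, H = E0.copy(), H0.copy()
    for _ in range(steps):
        # new E at integer points from the surrounding H half-points
        E = E + r * (H - np.roll(H, 1))      # H_{j+1/2} - H_{j-1/2}
        # then new H at half points from the freshly updated E
        H = H + r * (np.roll(E, -1) - E)     # E_{j+1} - E_j
    return E, H
```

With r = 1 and a traveling wave E = H = sin(x + ct), the computed E stays exact -- the magic ratio at work again.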

What's going to be the corresponding identity? Trivial, but just -- trivial things are not bad things to notice -- for the different method. This is what I wanted to be sure of, for the H part. This H -- so the H were -- do you mind if I draw yet another symbol? I want to use this idea to eliminate H. So let me put two H's there. Is that right? Yeah. I think what I'd like to know -- let me just say what I believe. I believe that the time difference of the space difference of H at mesh points is equal to the space difference of the time difference of H at the same mesh points. I meant to put these as subscripts. It's not delta x. It's delta sub x. Difference in the time direction, difference in the space direction.

Let's just see what this means. Let me redraw these four points. These are H. H at all those points. So this says, figure out the space differences -- so this, this minus this and this minus this -- and then take the time difference of those. Can I see what I'll get if I do that? So this means I take the space differences, so I have 1 minus 1 and that would be 1 minus 1, but now I'm going to take the time difference of those. So that'll be this one minus this one and this one plus this one. Do you see that? What we have here is H at this point minus H at this point minus H at this point plus H at this point plus 1.

I claim that we have the same thing here, that if I take the time differences first -- can I do that? Instead of taking the space differences first, I'll take this one, this time difference, this minus this and I'll subtract -- I'm taking now the space difference -- I'll subtract -- are you with me? You see, it's just the same four values -- two 1's and two minus 1's, whether we go first -- in fact, this actually is a -- I think maybe throws a little light on the calculus identity, that we can exchange the order of the derivatives, because we know we can exchange the order of the differences, because we just got numbers like 1, minus 1, minus 1, and plus 1.
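That identity is easy to confirm numerically: both orders of differencing pick out the same four values with signs +1, -1, -1, +1. The random numbers below just stand in for H at the four mesh points:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((2, 2))      # H[time level, space point]

# space difference first, then time difference
space_diff = H[:, 1] - H[:, 0]
time_of_space = space_diff[1] - space_diff[0]

# time difference first, then space difference
time_diff = H[1, :] - H[0, :]
space_of_time = time_diff[1] - time_diff[0]

# both orders give  H[1,1] - H[1,0] - H[0,1] + H[0,0]
assert abs(time_of_space - space_of_time) < 1e-12
```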

If we let the mesh spacing go to 0 -- dividing by the delta x and the delta t -- I suppose we would recover the continuous case. So in a way, this throws light on something that we probably take for granted, but wouldn't want to be pressed on too closely. So having that identity allows me to do exactly what I did with the continuous case there: when I eliminate the half levels, I'm going to be connecting E values at three time levels -- one, two, three. It'll be leapfrog.

So since it's been a few days since we wrote down two-way leapfrog, or second-order equation leapfrog, let me just write it again, and that's what we would get. The time difference squared of E will be equal to c squared times the space difference squared of E. And of course, here we have a delta x squared and here a delta t squared. So altogether, an r squared. That's the formula we know. So if I simplify this to make it look good and put the delta t squared up there, you see that it's just r squared. Maybe it's worth realizing that, again, the magic ratio r equal 1 gives the exact U, agreeing exactly with u. This equation, if r is 1, would be satisfied exactly by the continuous solution, and so it's the same as this one.

So that's leapfrog. So can I summarize leapfrog? First of all, it worked for second-order equations. It led us to the same stability condition, that r had to be less than or equal to 1, but it took a quadratic equation for G. Remember, we had G squared minus 2G times some quantity a that involved r's and cosines, plus 1 equals 0. The reason it was a quadratic was that there were two time steps involved. We handled that. We got stability if r was less than or equal to 1: that led to a having magnitude less than or equal to 1, and that led to G having magnitude less than or equal to 1.
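For the record, that quadratic can be checked numerically too. One consistent form of the coefficient -- an assumption spelled out here, since the lecture only says it involves r's and cosines -- is a = 1 - 2 r^2 sin^2(k delta x / 2), which comes from substituting G^n e^(i k j delta x) into two-way leapfrog:

```python
import numpy as np

# Two-way leapfrog gives  G**2 - 2*a*G + 1 = 0  with
#   a = 1 - 2 * r**2 * sin(k*dx/2)**2,
# so |a| <= 1 exactly when r <= 1.
def roots(a):
    disc = np.sqrt(a**2 - 1 + 0j)    # complex sqrt handles |a| <= 1
    return a + disc, a - disc

# For |a| <= 1 the two roots are complex conjugates on the unit circle.
for r in (0.5, 1.0):
    for k_dx in np.linspace(0.0, np.pi, 9):
        a = 1 - 2 * r**2 * np.sin(k_dx / 2) ** 2
        for g in roots(a):
            assert abs(abs(g) - 1.0) < 1e-12
```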

Yes, the notes, including more about Maxwell's equations and Yee's beautiful method, with a figure that shows his mesh, will go up -- I mean, every section is getting upgraded and this one is the next one to be upgraded. Probably by tomorrow, you'll have a good section on second-order equations. For me, that's what I wanted to say about the wave equation alone, with no diffusion and now I'm ready to move to the heat equation. So what do we have coming up? Maybe important to say what's coming up.

So we've done one-way wave. Now we've done two-way waves. Next comes the heat equation, heat/diffusion, and then will come a mixture of the two, convection-diffusion -- and by the way, so I'll go straight to these -- in a couple of weeks, we'll have a guest lecturer who will do the applications to financial mathematics, mathematical finance. Specifically, the Black-Scholes equation. So you probably know that a lot of effort and a lot of money are going into the valuation of options and other financial derivatives and that this Black-Scholes equation beautifully produced the heat equation, and additional correction terms will produce a convection-diffusion equation. So this is an application that we wouldn't have made some years ago, but now it's worth focusing on that. I think it's interesting in itself. So this is what's coming and then you could say one-way wave equations, but nonlinear. So nonlinear wave equations and those are called conservation laws. That's after these linear cases. All right.

So I'm ready to tackle the heat equation, starting then on to the next section of the notes. So let me tackle it here. So the heat equation. Let me take the constant to be 1; u_t equals u_xx. Because we already see the big difference. Now, there's a second x derivative, but only a first derivative in time. The dimensions of t and x are no longer the same. We won't have delta t and delta x comparable. Now it will be delta t comparable with delta x squared, and that's a very significant difference, because that's -- if delta x is small, delta x squared is extremely small. So when delta t is constrained by delta x squared, we're looking at small time steps. We're looking at stiff systems, absolutely. This is going to be a stiff problem, because the low frequencies and the high frequencies differ enormously in the decay rate. In fact, why don't we see it right away, starting as always with exponentials.

So if I plug in u of x and t is some G of t e to the i*k*x, separating variables as always. Look for a pure exponential solution, plug that in and we get dG/dt times e to the i*k*x on the left, and now I have to take two x derivatives, so that brings i*k down twice, so that's minus k squared -- i squared giving the minus 1 -- times G, times e to the i*k*x. And now, of course, I can cancel that, which is never 0, and I've separated out the time part, the G part, dG/dt is minus k squared G. So G is e to the minus k squared t. Notice that it's k squared, and notice that it's real and negative. See, that's a very big difference. Compare e to the i*k*c*t, which it was for the one-way wave equation. The difference is enormous here. That's magnitude 1, energy conserved. This is magnitude smaller than 1, energy dissipated. We had a ratio delta t over delta x. Now we're going to see delta t over delta x squared because somehow x and t are -- now have -- it's x squared that matches t now. So now, when I put that into here -- let me just do it -- G of t is e to the minus k squared t.

So there is the pure exponential. Simple as always, because we have constant coefficients, but highly informative. I guess another thing that I read off of that is, what frequencies are decaying fast? High frequencies? If k is large, k squared is very large and e to the minus k squared t is decaying very quickly. So high-frequency noise in this system, roughness, discontinuities are going to get smoothed out, strongly, by moving forward in time. Physically, what's happening is, our wavefront, which stayed a perfect step function in the wave equation, will now instantly smear. So diffusion smears discontinuity. Heat travels. If we were back over here, if the temperature starts out as 1 on this side and 0 on this side, then in an instant, some heat flows from right to left. In fact, it flows with infinite speed. In a short delta t, there's a little heat all the way out there -- very little, of course, because it's -- I guess we want to see what that behavior is.

So I now want to write down -- staying with the differential equation, what do we want to do? We did the exponential case and learned a good bit from that, the fast decay of high frequencies. The next step would be to put different frequencies, different k's together. By linearity, we can add these solutions, we can multiply that by any numbers we want, depending on k; we still have solutions. And we can integrate over k, so we will, actually. That will be the next step. Let me write down what that does. I'm going to jump, take a little jump here to put on the board what I just said in words. I can multiply this, e to the minus k squared t e to the i*k*x -- that's what's happening to the pure frequency k. If I multiply that by the amount of that frequency which is in the initial condition -- so here is the initial condition, u_0, and I've taken its Fourier transform. This tells me how much of frequency k is in the initial problem. That amount multiplies this, with this rapid decay as time goes forward, but at any later time, I'm just adding up -- and since we're on the whole line, k is a continuous variable -- so we add up from k equal minus infinity to infinity and I think we have to remember the 2*pi that depends on our particular definition of that transform, but I'm going to stay with this one.

That's the answer. That's u of x and t. So I've said that in words, but actually, we could just check it. We would like to know that this left-hand side solves the heat equation, and we would like to know that it has the correct start, the correct u of x and 0. Let's check that. At t equals 0, at the start, this is 1. So at the start, this formula, without that thing there, is just the inverse Fourier transform that takes the Fourier transform, multiplies by this, integrates to recover u_0. So it does start out correctly. Should I say that again? It starts correctly because at t equal 0, this is just the inverse Fourier transform of u_0, so it produces u_0. It inverts this transform that we took. Secondly, does it solve the heat equation? Sure, because for each k, we've just found that that solves the heat equation and now we're just putting different k's together.
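On a periodic interval, the same recipe becomes directly computable: transform, multiply each mode by e^(-k^2 t), transform back. Here is a sketch with the FFT; the periodic setting and grid size are assumptions for the demo, whereas the lecture's formula is for the whole line:

```python
import numpy as np

def heat_fourier(u0, L, t):
    """Evolve u_t = u_xx on a periodic interval of length L:
    multiply each Fourier mode by exp(-k**2 * t), the decay
    factor found above, then invert the transform."""
    N = len(u0)
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)    # angular wavenumbers
    return np.fft.ifft(np.exp(-k**2 * t) * np.fft.fft(u0)).real
```

For u0 = sin x this returns e^(-t) sin x: the k = 1 mode decaying at exactly its own rate.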

So that's the answer. So you might say, OK, perfection. Do we need finite differences? I guess we do. This is typical of formulas that give the answer exactly, but in terms that are not numerically convenient. Because, to use this formula, we'd first have to find this transform -- an infinite integral to find u_0 hat from u_0 -- and then we have to do this integral for the inverse transform, for the answer, u. Two infinite integrals. And, not to mention the fact that on a finite interval, when x doesn't go all the way from minus infinity to infinity, we have to think all over again. So we have a nice formula for the answer -- it looks nice, but it's not numerically terrific. We can get a lot of information out of this, and there's one other special solution that's also extremely informative. So I'm going to do a special case, but it's the big case. What if the initial condition is a delta function? An impulse at time 0, x equal 0, an instant source of heat at a point -- a point source with finite strength, you could say. For the wave equation, what happened? That was like turning on a flash of light and it went along the characteristics. For the heat equation, it'll be quite different. There aren't characteristics here, but this is an extremely important solution, and I think you could call it the fundamental solution to the heat equation.

You may know what it is. You may have seen this formula. It's fantastic that we get a formula for this one. You could say, we've got a formula. What's the Fourier transform, what's u_0 hat of k for the delta function? 1, right? The Fourier transform of the delta function is a constant 1. All frequencies are there in equal amounts. This is a constant, so I can remove it. It's 1. I'm left with a specific integral to do. And the question is, can I do it? And the answer turns out to be yes. You can do that integral when this is a constant and that will give you the answer with that starting value.

Let me just say what that answer is. u of x and t -- this is the fundamental solution, u fundamental, maybe I should say, because it's such an important case -- is, turns out to be -- it's not obvious what that integral is. By the way, normally -- let's see -- normally it's -- to get the answer that I'm going to write down, we take a kind of end run on the integral, and I'm going to take a serious end run and write down the answer exactly. Here's the key part of the answer.

It's a solution which starts from a delta function, spreads of course, and the shape as it spreads is a bell. It's the famous bell-shaped Gaussian distribution of e to the minus -- so is it e to the minus x squared, I guess? And then, always here comes -- that scalar tells us the width of the bell. How far has it spread at time t? And what goes there is 4t. That's the key observation. And notice how x squared and t are coming in. It's that ratio, x squared to t, that's crucial. That's the parameter again. Of course, we need to multiply by a constant. How do I know I need a constant? Because the total heat is conserved. In other words, the integral of the delta function, the original source of heat, was 1. The integral from minus infinity to infinity of the delta function, which is all totally concentrated at one point, is 1. So I need to put on whatever constant it takes so that the integral of this thing shall stay 1 for later times. This is one integral from minus infinity to infinity that we can do and it turns out to need 1 over the square root of 4*pi*t.
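The conservation of total heat is easy to confirm numerically. The integration window and grid below are assumptions for the demo; a wide window makes the neglected Gaussian tails negligible:

```python
import numpy as np

# Fundamental solution u(x,t) = exp(-x**2 / (4t)) / sqrt(4*pi*t)
def u_fund(x, t):
    return np.exp(-x**2 / (4 * t)) / np.sqrt(4 * np.pi * t)

# The constant 1/sqrt(4*pi*t) keeps the total heat at 1 for all t > 0.
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
for t in (0.1, 1.0, 5.0):
    y = u_fund(x, t)
    total = dx * (y.sum() - 0.5 * (y[0] + y[-1]))   # trapezoid rule
    assert abs(total - 1.0) < 1e-6
```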

That solves the heat equation. It's not a lot of fun to plug that into the heat equation, but it's possible, right? The time derivative is going to be messy -- it'll have two terms -- and the x derivative will have two terms, and they'll agree. The notes will give one way to do the integral and get that answer. You see, integrals with e to the minus k squared in them are impossible on a finite interval, but here we're going all the way from minus infinity to infinity, which you might think makes the problem harder, but actually it makes it a great deal easier, and we can get an explicit answer, a beautiful answer.

So we learn that the solution to the heat equation, coming from a point source, is a bell-shaped Gaussian curve that gets wider and wider as t increases, but I guess you would want to look to see, what happens as t comes down to 0? Does it really converge? As t goes to 0, this should go to the delta function, and in some way, it does. It's wonderful. The delta function, of course, is 0 away from the origin.

So why does this approach 0 away from the origin? Suppose x is 1. Then do you see this thing going to 0 as t goes to 0? Look what's happening. As t goes to 0, this part is blowing up, but kind of weakly -- that's not a disastrous blowup -- compared to this e to the minus 1 over 4t. As t goes to 0, that's e to a very negative exponent. That's going to 0 very fast, much faster than this is blowing up. If you like to think of l'Hopital's rule or a ratio, this quantity is going to 0 way faster than that one is blowing up, and the result is 0, except at the one point x equals 0, where this factor is 1. The fact that the total amount of heat stays at 1 tells us that the delta function is the limit there.
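A few numbers make that competition concrete. The sample times below are arbitrary; at the fixed point x = 1, the Gaussian factor crushes the 1/sqrt(t) blowup long before t reaches 0:

```python
import numpy as np

# Fundamental solution at the fixed point x = 1, for shrinking t:
# exp(-1/(4t)) -> 0 far faster than 1/sqrt(4*pi*t) -> infinity.
values = [np.exp(-1.0 / (4 * t)) / np.sqrt(4 * np.pi * t)
          for t in (0.1, 0.01, 0.001)]
assert values[0] > values[1] > values[2]   # heading to 0 at x = 1
assert values[2] < 1e-100                  # already essentially zero
```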

Today was the heat equation, continuous case, with these explicit answers.

Then next time is -- tomorrow, I guess -- is the heat equation, finite differences. Again, we'll have several difference methods and to compare them would be terrific. Please think about that, comparison with the heat equation and/or what I was saying at the beginning of class, a deeper look at the comparison for the wave equation. See you tomorrow for difference equations for this problem. Thanks.
