Lecture 5: Second-order Wave Equation (including leapfrog)



NARRATOR: The following content is provided by MIT OpenCourseWare under a Creative Commons license. Additional information about our license and MIT OpenCourseWare in general is available at ocw.mit.edu.

PROFESSOR: Ready to go on lecture five, I guess this is. So we've done the one-way wave equation, the first order equation, and now it's natural to see what about the ordinary second order wave equation, and on this board, I've written it two ways. I've written it as a second order equation, so a second time derivative appears. That's going to make a change. It will have second differences in time, so we're going to have more steps, and a matching second difference in space. That's one way.

Then this is another important way to write the same equation, as a first order system. So I can get back to first order systems and I can use all those methods -- Lax-Friedrichs, for example, Lax-Wendroff, whatever improvements we think of -- on this first order system: d by dt of a vector is a matrix times d by dx of a vector. So you really have a choice here. Written this way, we'll notice one good thing. The matrix that shows up is symmetric. That was by the careful choice of putting a c in there. Then you see that the first equation in this system is u_tt equals c times c u_xx. So the first equation is the wave equation, and what about the second equation? This is typical when you reduce things to a lower order system, that you get an equation here which is just an identity. The left side is the time derivative of c u_x, which is c u_xt, and the right side is c times the x derivative of u_t, the cross derivative c u_tx. So that is an identity, because u_tx is the same as u_xt. We can take x and t derivatives in either order for a function of x and t. So we'll see that.

I guess what I want to do in the lecture is not to repeat Lax-Friedrichs or Lax-Wendroff on this system. That would be unexciting. Let me just mention, what are the eigenvalues of that matrix? For a system, it's the eigenvalues of the matrix that tell us what waves we've got and what speeds they go. When it was 1 by 1, of course, it was just that single number c, but what are the eigenvalues of that matrix? Now, having made it a symmetric matrix, we know right away it has real eigenvalues, and so it's a wave problem. For a heat equation, we would see something different, but this will be a wave equation, and this has two eigenvalues and they are c and minus c. You could quickly check that that 2 by 2 matrix has eigenvalues c and minus c. They add up to 0, which is the trace. They multiply, c times minus c, to give minus c squared, which is the determinant. So those are the right eigenvalues. That's going to tell us what we'll find every way, that the speeds of the waves are c and minus c, which means a wave going one way with speed c and a wave going the other way with speed c. So it's a two way wave effect, and this is the equation of what really happens, and I wanted to say just two words about real problems, because we can't, in these first days, tackle the difficult problems of electromagnetism, antenna design, all kinds of wave problems. I guess I'm hoping some of you may be meeting real wave problems in other courses and would make a project that tries our methods on real problems. I just wanted to write down here one sort of typical real problem.
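As a quick numeric sketch of that eigenvalue claim -- the wave speed c = 3 here is just an illustrative value, not from the lecture:

```python
import numpy as np

# Symmetric matrix from the first order system w_t = A w_x,
# with an illustrative wave speed c = 3.
c = 3.0
A = np.array([[0.0, c],
              [c, 0.0]])

eigs = np.linalg.eigvalsh(A)   # symmetric matrix, so eigenvalues are real
print(np.sort(eigs))           # the two wave speeds, -c and +c

assert abs(eigs.sum()) < 1e-12          # trace is 0
assert abs(eigs.prod() + c**2) < 1e-9   # determinant is -c^2
```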

So in a real problem, we have a variable coefficient c of x, because maybe our material is not homogeneous, not uniform, and we have a forcing term -- I just put f for the amplitude to emphasize that it's a forcing term, f standing for forcing, but it's an oscillating forcing term -- and this is what you're going to have. In electromagnetism, the frequency is going to be very, very high. So this is a problem that's not easy numerically. Very, very high frequencies would normally require very, very small wavelengths, very, very small delta x, because you remember there'll be a k delta x in the error. If we took a big delta x, or even an ordinary size delta x, and let the frequency go way, way up, high, high frequency oscillation, our k delta x would be big and the accuracy would be poor and the results useless.

I guess I'm thinking here that a course on wave propagation would study this as a first model problem, do some analysis, and then go into 2D and 3D. That's the reality of wave theory: you can get asymptotic results for k going to infinity. You're looking at matching powers of k, so to speak. You're getting plane waves and interesting stuff that we can't do right away here. So I'm going back to the ordinary wave equation, starting with second order: u_tt is c squared u_xx. That's our goal. As I said, we don't want to see Lax-Friedrichs and Lax-Wendroff again. We know those. So one new method is going to show up, and leapfrog is the right name, the natural name to give it. Leapfrog because in the time direction, we have times n minus 1, n and n plus 1.

There's a leap there, but let me start with semidiscrete. We haven't done this until today. Semidiscrete means, as you see, that the time variable stays continuous. We have ordinary differential equations in t, but the space variable is what's getting discrete. So the problem is half discrete -- discrete in x. There's a delta x, but there's no delta t yet. So it's natural to analyze this problem. What do we have, really? In the x direction, we have a mesh of width delta x. So we have unknowns: if that's 0, j is measuring, so j delta x is the typical point, point number j along there, and then in the time direction, we're going continuously. I don't have discrete steps, so -- because those are lines in the time direction -- semidiscrete is also referred to as the method of lines. So the method of lines is what we're doing here, and all I want to do is what I always do: take exponentials, look at their growth factors, understand the equation.

Actually, I better do it first with the wave equation itself. So I guess my organization for today is, the first step is to follow e to the ikx for the wave equation. So I'll do that here: u_tt equals c squared u_xx, and I'm going to look at u proportional to -- and I'll call this factor g again -- g of t times e to the ikx. So I'm going to look for pure exponentials, and we'll quickly find them. Then I'm going to do the same for semidiscrete, where that x becomes j delta x and I'm at point j. Then I'm going to eventually look at the fully discrete case, discrete in time and discrete in space, and the question of stability arises. The accuracy we pretty much know: second differences give a second order accurate method, because it's centered. But stability is going to come in. So this is our first step.

Then I'll just mention what other point remains in this wave equation, which is to look at the first order system. How does that look in space and time? As I said, the obvious thing to do would be Lax-Friedrichs or Lax-Wendroff, but there's another idea, and it brings in a staggered grid. You'll see it. Somehow that staggered grid is really an important idea that we haven't met yet: some unknowns are on the standard grid, u j n, but the other unknowns are on a grid that's half a delta x and half a delta t different. It's just right, actually. So that's a premier method in electromagnetism.

So that's what's coming, but let's start with the second order equation, because we haven't seen a second order method. Second order equation -- plug in g. The second time derivative gives me g double prime times e to the ikx -- I'm separating variables, of course; t here and x there. On the other side, I have c squared, and now I have the x variable, so that brings down an ik -- so that's (ik) squared times g e to the ikx. Simple, but substituting this is just a natural first step to see what's happening. Then of course, canceling e to the ikx leaves us with the equation for the g: g double prime is -- the i squared makes it minus -- minus c squared k squared g. Of course, the new aspect is that it's second order, so it's got two solutions instead of just one. It's constant coefficients, of course, and so we look for exponentials, and the two solutions are e to the ickt times e to the ikx and e to the minus ickt times e to the ikx. Those are the two possible g's. If we take their second time derivative, we get the same as taking c squared times the second x derivative -- the second time derivative brings this factor down, just as we wanted -- but the point is that there are two of them.

So this one I can write as e to the ik times x plus ct. That's our old friend. This one I can write as e to the ik times x minus ct, which is our new friend, the wave that's going to the right. This x plus ct follows the characteristic lines to the left, just the way we had in the one-way wave equation. This x minus ct stays constant on characteristic lines going up to the right; it travels with speed c, and the signal travels along those characteristics with speed c. So for the initial value now -- maybe I draw a picture over here -- here is t, this is x as always, this is t equals 0. I have some initial value u of x and 0, and I also have an initial velocity, because I've got a second order equation, but pure exponential information travels along characteristics this way -- that's the new one -- and characteristics this way -- that's the old one. This is space-time.

I'm graphing the two characteristic lines now. x minus ct equals -- if it starts from 0, then it would be 0. This is x plus ct equals 0, the one we had before -- the two characteristic lines. Waves are going both ways, right? Now, with the first order equation, we even got a completely explicit solution to the one-way wave equation, and we can do the same for the two way wave equation, so I better write down the solution. Let me just continue this same idea. The general, complete solution will be: u of x and t is some function f 1 of x minus ct plus some function f 2 of x plus ct. Now I'm taking the jump from one exponential, a particular k that gave me these particular guys, to all exponentials, and assembling all exponentials into functions. So why is that OK? It's only OK because of this very special situation, that all frequencies are behaving the same here. All frequencies have no dispersion.

Dispersion is going to be different behavior of different frequencies, so that's an important fact, which we don't face here. There will be dispersion in the discrete methods, the semidiscrete and the discrete method. Different frequencies will travel at different speeds, and that's what produces the oscillating error. Whatever shape the error has is because different k's are doing slightly different things, as we'll see when we do the discrete ones. But in the continuous case, every k travels the same way, with speed c or minus c, and therefore we put them all together and we have a whole function traveling with speed c or minus c.

Now we've got two functions, and we have to match two initial conditions. We have to match a u of x and 0, and the time derivative has to match a u_t of x and 0 -- those are the initial conditions. You just match them up and you find the answer; you discover what these functions really are. You remember before, we had a u of x plus ct and 0? That was everything in the solution to the one-way wave equation. Now we will have u of x minus ct and 0 plus u of x plus ct and 0, all divided by 2 -- so there you see a wave going each way, and the division by 2 makes us match the initial condition. That would be the total answer if the body starts from rest with 0 velocity, but I haven't matched the second condition -- I've only matched u at t equals 0. The time derivative of that is 0, because the time derivative of this would bring out a c, this would bring out a minus c, and at t equals 0 they cancel. So I need another term, and it turns out to be -- I'll just write it down -- it's got to depend on x minus ct and x plus ct, and it's this: the integral of the initial velocity, dx, between those two points, with a 1 over 2c. There's no reason to take time to derive that. I think it's right and you could check, [? right? ?]
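To make that formula concrete, here is a small sketch of d'Alembert's solution. The Gaussian initial displacement and the zero initial velocity are illustrative choices, not from the lecture:

```python
import numpy as np

c = 1.0
u0 = lambda x: np.exp(-x**2)   # illustrative initial displacement
v0 = lambda x: 0.0 * x         # start from rest (zero initial velocity)

def trapezoid(y, x):
    """Simple trapezoidal rule for the velocity integral."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def dalembert(x, t):
    """u(x,t) = [u0(x-ct) + u0(x+ct)]/2 + (1/2c) * integral of v0."""
    s = np.linspace(x - c*t, x + c*t, 201)   # quadrature points
    integral = trapezoid(v0(s), s)
    return 0.5*(u0(x - c*t) + u0(x + c*t)) + integral/(2*c)

# At t = 0 we recover the initial displacement exactly.
assert abs(dalembert(0.7, 0.0) - u0(0.7)) < 1e-12

# Starting from rest, half the bump goes each way: at x = 0, time t,
# both half-waves contribute exp(-t^2)/2, so u(0,t) = exp(-t^2).
assert abs(dalembert(0.0, 2.0) - np.exp(-4.0)) < 1e-12
```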

So what this means is that the initial velocity is entering all along -- the initial velocity is affecting things within this cone, the characteristic cone that's defined by the characteristic lines. In two dimensions, that cone will really look like a cone -- it'll look like an ice cream cone -- or maybe that's in three dimensions. Whatever. In other words, if the initial condition was a delta function at the origin, you could quickly see what the solution would be, and if the initial condition was a step function at the origin, what would happen then?

Suppose I started with this step function at the origin. So, like, a wall of water, and suppose it's at rest. At that moment t equals 0, I remove the dam and the water flows. Then this term is 0, because there was no initial velocity, and this term says half of that wall goes one way and half goes the other way. The notes draw a picture of that. So I've kind of quickly disposed of the solution of the wave equation, and of course, that's too easy. I mean, it's good to know and good to compare when we meet real problems, where c depends on x, where there's a forcing term, where there's a real complication. But it pays to stay with the model problems at the beginning.

So I'm just going to stay with the model problem and move on to semidiscrete, to try to understand what happens if I make the space variable discrete. I should say, another step toward reality would be to have some boundaries. Here, this is on the whole line, x from minus infinity to infinity, so reality would be that this picture would have some boundary, maybe there at, say, minus l and l, and some boundary conditions on that boundary. So now we have a finite problem. That means the calculations are fine, but it means some new effects are coming in. Waves can bounce back off the boundary, partly, or they can go out, be lost, go beyond, or disappear, depending what the boundary conditions are. So that's another part of reality: boundary conditions.

I'm suggesting, then, some more realistic numerical experiments. A realistic numerical experiment would be: put in some boundaries, make this a finite system, with 0 at the boundaries. Let's just have a finite number of unknowns, stop there. This would be one boundary, this would be the other. Then I have just seven unknowns in that problem. That'd be a seven by seven matrix, and it would be the second difference matrix that is so fundamental in all of scientific computing.

So we have here a second difference matrix, but it's infinite, because I haven't got a boundary right now. The fact of not having a boundary means Fourier is ready to go. So I'm going to plug in -- again, I'm going to try u. This is capital U now, because I've discretized something. I better write this in a form where we see the j, so can I write this equation just for U number j -- tracking up this line, for example, or this one? Do you see that U is now a system? In this case -- let's see, I shouldn't have put this one in, so that I'd have 1, 2, 3, 4, 5, 6, 7 -- that's good. So my unknowns are U_j, tracking up that line. Each U_j is going in the time direction, so this equation has the second time derivative -- an ordinary derivative; I can write d and not partial, because the only derivatives here are in time -- of U_j equal to c squared over delta x squared times U_j plus 1, minus 2 U_j, plus U_j minus 1. That's my equation -- my system of equations, sorry. For the moment, I'm thinking j goes from minus infinity to infinity. I'm not going to put in a boundary, but what I will do, of course, is try U_j as some growth factor -- j is counting space steps, so the growth factor is in time, and it's continuous in time, because time's continuous -- times e to the ikj delta x. That's the right thing to plug in. The reason for really doing this is just to reinforce the idea that trying a pure exponential is a good thing to do. It's even a good thing to do with variable coefficients and real problems. It's absolutely a good thing to do with these model problems.
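Here is a minimal sketch of that semidiscrete system. A periodic grid stands in for the infinite line, and the values of N, dx, and k are illustrative choices:

```python
import numpy as np

c = 1.0
N = 100
dx = 2*np.pi / N   # periodic grid standing in for the infinite line

def second_difference(U):
    """Centered second difference (U_{j+1} - 2 U_j + U_{j-1}) / dx^2, periodic."""
    return (np.roll(U, -1) - 2.0*U + np.roll(U, 1)) / dx**2

def rhs(U):
    """Right-hand side of the semidiscrete system: d^2 U_j / dt^2 = rhs(U)_j."""
    return c**2 * second_difference(U)

# On a pure exponential e^{i k j dx}, the second difference acts as
# multiplication by the factor -(2 - 2 cos(k dx)) / dx^2.
k = 3
j = np.arange(N)
U = np.exp(1j * k * j * dx)
factor = -(2 - 2*np.cos(k*dx)) / dx**2
assert np.allclose(second_difference(U), factor * U)
```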

Plug it in and what happens? I get g double prime times the exponential -- but of course you know in advance, I'm going to cancel that exponential -- equals c squared over delta x squared times -- what goes there? I'm imagining that exponential as factored out, but here it is at j plus 1. I've incremented j by 1, so that is going to give me an e to the ik delta x. Here I haven't incremented it, and here I've incremented it by minus 1, so that will give me an e to the minus ik delta x, all times the same exponential, which appeared there and there and got cancelled.

So that's it there: g double prime -- oh, it's got to have a g -- right, there's a g. When I plugged in on the right hand side, I got a g that was so uninteresting, I lost it. I guess the main point to see is this quantity, which is divided by delta x squared, which is coming because we have differences instead of derivatives. When we had derivatives, that quantity was c squared times (ik) squared -- in other words, minus c squared k squared. That was the correct factor, and this one is the discrete factor, and it's worth paying a moment of attention to compare the two. Where am I going to write that comparison? I guess that's going to be on the board that's hidden, so can you try to remember that? Let me focus just on this quantity for a minute, or even just on this one, just because we see it so many times, because it's coming directly from second differences.

For the moment, let me take k delta x and just call it theta. It's some angle. So can I just do a little bit of trig with this factor, 2 minus 2 cos theta? Let's see. I'm going to bring out a minus sign, so I'm going to put plus signs on those and minus signs on those. You see the main point, of course: e to the i theta and e to the minus i theta combine to give 2 cos theta. That's the key simplification that's going to make this much neater. So I have 2 cos theta minus 2, and if I bring out a minus sign, I'll have 2 minus 2 cos theta, and that's what I want to write. I want to see what that expression is.

Right, so that brought out a minus sign just to match that minus sign, and the c squareds match, but what's different is that this k squared has now turned into this 2 minus 2 cos k delta x, divided by delta x squared. That's the change: k squared, which is a positive quantity, became this, which is also a positive quantity, 2 minus 2 cos theta, but it's not a pure k squared. We are getting dispersion. Different frequencies are showing up inside the cosine, and that's not a linear function. So we're seeing different frequencies going their own way in a more interesting way than just what we saw here with k squared.

Of course, when we take the square root, as we did here, that produced something linear in k, no dispersion, something simple. But here, let's get ready to take the square root. How can we take the square root of that? By writing it as a square. It's never negative, and if I use a little trigonometry -- I'm just going to write it down -- this is 4 sine squared of theta over 2. That's just a handy fact, handy because I can take its square root and really see what's there. Remember, theta is standing for k delta x, and then I want to divide by delta x squared. Can I bring down that quantity just so you see it again? So this was the 2 minus 2 cos theta with the minus sign, divided by delta x squared, and it was: g double prime is minus c squared times 4 sine squared of k delta x over 2, divided by delta x squared, times g -- always forgetting the g. Maybe I'll put the delta x squared over there, where it kind of belongs. We're just following the golden way here, plugging in an exponential and seeing what happens, and what happens is pretty nice.
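The trig identity doing the work here can be spot-checked numerically:

```python
import numpy as np

# 2 - 2 cos(theta) = 4 sin^2(theta/2): it is never negative, so the
# square root we are about to take is real.
theta = np.linspace(-np.pi, np.pi, 181)
lhs = 2 - 2*np.cos(theta)
rhs = 4*np.sin(theta/2)**2
assert np.allclose(lhs, rhs)
assert np.all(lhs >= -1e-15)   # nonnegative, as claimed
```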

We now know what the solutions are. There'll be two solutions. It's a linear problem, so the solutions will be exponentials in t, and they will have a plus or minus, because of that minus sign. So g -- can I write it this way? The two exponential solutions will be e to the plus or minus i times c times the square root of this -- actually, that's not so hard; that was our point, wasn't it? -- sine of k delta x over 2, divided by delta x -- and look, here's something nice. Bring the 2 down there; make the denominator delta x over 2. Do you see what this expression is? Actually, do you know its name? It's the sinc function -- all times t, right? We just have an exponential. Of course we do. This was a constant coefficient problem in t, so being second order, I have two exponentials, and I just took the square root of that coefficient, put it there, times t. Absolutely straightforward.

This quantity is the key one, and this is the quantity which, for small k or small delta x, is approximately what? If delta x is small, what does this sinc function look like? What is that ratio approximately? What does sine theta look like when theta is a small angle? It looks like theta, right? Sine theta is about theta. So this thing is about k delta x over 2, divided by delta x over 2. It's approximately k for small delta x. That's the key thing to know about the sinc function: it begins by approximating k, but then, when k delta x gets large, it wanders away. It's not linear in k. It starts out practically linear in k, and that k is exactly the right factor -- that's the k here. But in the differential equation it stayed k, and in the difference equation, as k delta x moves away from 0, it changes, and of course, that's the source of the error.
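A quick numeric check of that approximation -- k = 2 here is an arbitrary illustrative frequency:

```python
import numpy as np

k = 2.0
for dx in [0.5, 0.05, 0.005]:
    discrete_k = np.sin(k*dx/2) / (dx/2)   # the sinc-type factor
    # sin(x) = x - x^3/6 + ..., so the error is about k^3 dx^2 / 24:
    # two extra powers of dx, which is the second order accuracy.
    print(dx, k - discrete_k)

# For the smallest dx, the factor agrees with k to well under 1e-4.
assert abs(np.sin(k*0.005/2)/(0.005/2) - k) < 1e-4
```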

We're seeing now why the method has second order accuracy. We know that these second differences give second order accuracy. If I looked at the next term -- what's the next term in sine theta? Sine theta starts out theta, and then what's the term after that? Everybody's Taylor series -- [UNINTELLIGIBLE] it's a theta cubed term. Exactly, it's minus theta cubed over 6, over 3 factorial, but it's a theta cubed term, and so the theta gave us the right thing, and then two more powers of theta give us a second order error, right?

I better move on -- so that's the story for semidiscrete. Semidiscrete produced a pure exponential, but with a phase factor, you could say, that wasn't linear in k where the real one was. Now let me move up to that board above, where it's discrete in time as well, so we're going to have single steps. What's going to be the difference? Again, I'm going to try -- you know, I only have one idea here -- u j n will be some growth factor g times e to the ijk delta x. The same: I'm going to try an e to the ikx in the space variable, and in time it will be g. After n steps, it'll be g to the nth. So this is the discrete trial solution -- this is the ansatz, to use a fancy word. This is what you plug in. A little crazy to use the word ansatz, which sounds fancy, and the word plug, which is far from fancy; I should say substitute. Substitute it in. Alright. Do we know what the process is going to do? When I plug it in -- sorry, substitute it in -- on the left, I'm going to cancel from every term -- maybe I'll cancel g to the n minus 1 -- so let me write down what I get. This is going to give me, in the time direction, a g squared minus 2 g plus 1, over delta t squared, and in the space direction, which is at the centered time, so it's going to multiply a g, it's going to be my very same thing. I could maybe give it a name here -- should I give it a name? I know what it is anyway, because it's the same thing that I had in the semidiscrete method. Let's do one thing.

Let me get this delta t squared up there, times the c squared, divided by the delta x squared. Let me just clean up those constants: when I multiply up by the delta t squared, times the c squared divided by the delta x squared, what's that? c delta t over delta x, squared, is my old ratio, the Courant number r, squared, and then it's this times g. So this will all be just exactly the same as what I simplified here. Let me just leave it with a minus. Let me remember that I did take a minus, so that I could write it as minus r squared times 2 minus 2 cosine of k delta x, times g. I really think I have kept track of everything here: the r squared coming from all this stuff, the minus 2 coming here and the plus 2 cosine coming from there. You OK with that?

But there's one new ingredient here. Again, I cancelled g to the n minus 1 times the exponential from every term. The new ingredient, that we just have five minutes to deal with, is the fact that a g squared just showed up. Why did g squared show up? Because it's a multistep method, and therefore I've got two g's. Of course, I had two g's in all these other cases too, because I had second derivatives, and now I have second differences. So I have two g's, I have to solve for g, and what is stability going to depend on? Stability is going to be whether both g's -- because there are two of them now -- are less than or equal to 1 in size. That's what I have to check, and that's the equation. What do we figure? We figure that if r is big, no way -- the Courant test will fail and stability will fail. But if r is small, then I have a right to hope for stability. So it's going to be the size of r.

So I've got this quadratic equation. I'm going to bring this thing over here and write it as g squared minus 2 a g plus 1 equals 0, where minus 2 a collects the middle terms -- notice the 2 is everywhere, so I put the 2 here. The a comes from the 1 there and from this times the r squared: a is 1 minus r squared plus r squared cos k delta x.

I'm just doing algebra, and it's coming out nicely -- a nice equation here with a slightly messy expression for a, but once I've named this quantity a, I can solve this. So what's the solution? I remember the quadratic formula, and I get g as a plus or minus the square root of a squared minus 1. That's g. I guess the quadratic formula takes a little memorizing, but the first point is that that coefficient a shows up there, and then the square root that we know.

So what's up? Again, none of this is deep. I'm just following my one way path: check stability, check the size of g. Get an equation for g, get a formula for g, and look at it. The key is going to be whether a is bigger than 1 or smaller than 1 in size. Is that right, a squared minus 1 under the square root? Professors are not responsible for the quadratic formula, because we proved that we could do it in third grade and lost it since. I guess it's right, because look, here's the story. I'm really looking at the roots of this equation. Notice, by the way, the roots of that equation multiply to give 1. So I'm on the edge here: either both roots have size 1, or one of those roots is too big. The test is going to be, is the size of a greater than 1? Size of a greater than 1 will be unstable; size of a less than 1 will be stable, and I'll connect that to r less than 1. That will give me r less than 1. Alright, I've given away the result because time is pressing. Maybe just take the remaining time to see this.

Suppose a is bigger than 1. I'm just looking at the roots of this equation, and here they are. If a is bigger than 1, like 2, I have 2 plus the square root of 3 -- I'm way up, bigger than 1, no question. Suppose a is less than 1, say a is 1/2. I have 1/2 plus or minus the square root of -- what? If a is smaller than 1, a squared minus 1 is negative, and if it's negative, its square root is imaginary. So this is good. I have a real part and then an imaginary part, and on some magic little bit of board, I'm going to add their squares. So I just want the real part squared plus the imaginary part squared -- now the imaginary part squared is going to be 1 minus a squared, because when I bring out an i, I have 1 minus a squared there -- and when I add them, a squared plus 1 minus a squared, I get 1. So let me just say again what it is: I figured out the absolute value of g squared as real part squared plus imaginary part squared, and I got the answer 1.
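That computation can be checked directly on the roots of g squared minus 2 a g plus 1 equals 0:

```python
import cmath

def growth_factors(a):
    """Roots g = a +/- sqrt(a^2 - 1) of g^2 - 2 a g + 1 = 0."""
    root = cmath.sqrt(a*a - 1.0)
    return a + root, a - root

# |a| <= 1: the square root is imaginary; both roots sit on the unit circle.
g1, g2 = growth_factors(0.5)
assert abs(abs(g1) - 1.0) < 1e-12 and abs(abs(g2) - 1.0) < 1e-12

# a bigger than 1, like 2: the root 2 + sqrt(3) is way outside -- unstable.
g1, g2 = growth_factors(2.0)
assert max(abs(g1), abs(g2)) > 1.0

# In every case the two roots multiply to 1, the constant term.
assert abs(g1 * g2 - 1.0) < 1e-12
```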

So the conclusion is, the roots are stable when the size of a is below 1, and I have instability when it's above 1. Then if I track that down, that reduces to the same test on r: whether r is below 1, which is the Courant-Friedrichs-Lewy condition, that I'm staying inside the characteristics, or r is bigger than 1 and I'm taking a delta t that isn't stable.
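That conclusion is easy to see in an experiment. This is a sketch, not the lecture's own code; the grid size, bump width, and step count are illustrative choices:

```python
import numpy as np

def leapfrog_max(r, steps=60, N=64, c=1.0, dx=1.0):
    """Run leapfrog on u_tt = c^2 u_xx (periodic) and return max |u|."""
    j = np.arange(N)
    u_old = np.exp(-((j - N/2) * dx / 4.0)**2)   # smooth bump, at rest
    u = u_old.copy()                              # zero initial velocity
    for _ in range(steps):
        lap = np.roll(u, -1) - 2*u + np.roll(u, 1)
        u_new = 2*u - u_old + r**2 * lap          # leapfrog update
        u_old, u = u, u_new
    return float(np.max(np.abs(u)))

print(leapfrog_max(0.9))   # r < 1: stays bounded, order 1
print(leapfrog_max(1.1))   # r > 1: the highest frequency mode explodes
```

With r below 1, every growth factor has size 1 and the bump just splits and travels; with r above 1, the mode at k delta x near pi has a root outside the unit circle and swamps everything within a few dozen steps.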

I'm sorry to go slightly over the time. Thanks for staying with me. I'll begin next time with this staggered grid. That won't take the whole lecture, but it's worth knowing, because we see staggered grids in many of the best methods. Thanks. Have a good weekend, and I guess it's Tuesday rather than Monday that I see you next.
