Computational Science and Engineering I
Instructor: Prof. Gilbert Strang
Recitation 5
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR STRANG: So this is review session number five, I guess it is. And it comes before an exam next Tuesday, and actually there'll be a further review session number six on Monday, right before the exam. So maybe today we would-- there's a homework problem set on Chapter 2, mostly the oscillating masses and springs, and today's lecture that you see traces of, networks. And masses and springs also in the static case. So I'm open as always to questions. Yes please, thank you.
AUDIENCE: 2.2 number six.
PROFESSOR STRANG: 2.2, number six. OK. Yeah. So this is, and of course you understand that, so I'm happy, that's a good question to discuss. And maybe number seven people will have something to say about. Good. So that's just fine, so let me start right in on those. So, number six is the fact, I mean everybody understands that when energy is conserved, that's an important thing. And so the question is first when is energy conserved in the differential equations? In the equation we're trying to solve? And if it is, then we want to know, we would like to choose difference methods that also conserve energy. They may not be exactly right, they may not be exactly at the right point on this circle, if we're in that model problem, but still on the circle. So, and the point is that the trapezoidal method does stay on the circle, and of course the differential equation stays on the circle. Can I, so, and I put quite a bit into this problem six. So this is 2.2.6, and let me try to say something about that, OK.
So, first of all, there's the continuous problem. du/dt=Au. When does that conserve energy? And then there's the discrete problem, for which we know that Euler's doesn't conserve energy, because we've seen it, the computer shows you right away. It spirals up from the circle, it spirals in, or forward and backward, or whatever. But trapezoidal method, is that the one that turns out to do well? OK, so it refers to equation 24 as the trapezoidal method, and let me try to follow that notation. Yep. The trapezoidal method is this one. Did we get music there for the trapezoidal method? OK. (I-A*delta t/2)u_(n+1) equals (I+A*delta t/2)u_n. OK. Right. OK. So that, and actually problem seven, that you maybe want to discuss too, is the question of how accurate this is compared to the differential equation.
Everybody should see that this really came from, the original way to look at this was (u_(n+1)-u_n)/delta t, that approximated the derivative, equals A and then I'm taking half at u-- at the new time and half at the old time. So it's got that centering. That we suspect will give us a little extra accuracy. OK. So two questions then. One was the stability, so problem six was the energy conserved, and problem 2.2.7, if I anticipate it, is the order of accuracy. So these are both topics that are extremely important in choosing a difference method. We would like to know first when is energy conserved there? What differential equations have conserved energy? Physically we kind of can see them coming. If a physical universe is not being-- Somehow, lots of physical problems, we see those masses and springs oscillating, and we say OK, nothing's coming in from outside, how could-- Energy there would be the sum of the kinetic energy of the masses, and the potential energy in the springs. So energy passes between kinetic, when the mass is zooming past equilibrium, and potential energy, when the mass is stretching the spring. So we've got two cases.
And we would hope, and this trapezoidal method comes through, that energy is conserved. So can I just begin with this one? Maybe I always ought to say, because you guys are also thinking about the quiz. So, for example, this question about how to find the order of accuracy, I'll speak about that. But let me just say that's not something that we've done in enough detail that I would expect you to be quick on the quiz and just be able to do it out. I'll try to choose questions on the exam that you really have had more practice with. But this is certainly important, so it was definitely right to put on the homework. And this is important. OK, so let me tackle this one.
First the differential equation. So by energy here I'm meaning just the length of u squared. So now I'm looking at energy conserved, OK. So I'm hoping energy conserved would mean that the derivative of u squared was zero. That's what conserving energy would mean. And my question is which differential equations-- What's the condition on A, in other words? How would I recognize from this matrix A that I have this interesting property? OK. So, let me just show you what I would do. Another way to write u squared is u transpose u. These are vectors, of course. So what's the derivative of u transpose u? My equation is telling me what the derivative of u is here, I've got the thing squared. OK. I have a product here, right? I've got u's times u's. So I'm going to use the standard-- They happen to be vectors, so if I want to use like freshman calculus, I'd have to get down to the scalars, down to the numbers, but I absolutely could do that and just follow them along, component by component. Or I could try to do it a whole column at a time. And let me try that.
It's going to be the product rule, right? In some form I'll have this guy times the derivative of this plus the derivative, I'll keep them in order, the derivative of this thing, times this guy. Right? That's the product rule. OK, now what do I know here? I know that du/dt is Au, right? So this is u transpose Au. And here I know du/dt is Au, again. Look, this isn't difficult. It's (Au) transpose u. And the question is, when is this zero, OK? So those were the two terms from the product rule, and notice they're not exactly the same. This is u transpose Au, and what's this guy? u transpose A transpose u. So if I put them together, I have u transpose times the A and the A transpose times u. And I'm hoping that this will be zero, for all the solutions that I've come up with. So the condition is simply that A plus A transpose-- we want that to be the zero matrix. In other words, if A transpose is minus A, that's the good one. If A transpose is minus A, this is zero, that's zero, that's zero, that's zero, energy's conserved. So the energy is conserved when A transpose is minus A. It's for the anti-symmetric A's that energy is conserved. And of course this all makes sense.
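(An editor's sketch, not from the course materials: the condition just derived is easy to check numerically in Python/NumPy. For an anti-symmetric A, the energy rate u transpose (A + A transpose) u vanishes for every u, since A + A transpose is the zero matrix.)

```python
import numpy as np

# For du/dt = A u, the product rule gave
#   d/dt (u^T u) = u^T (A + A^T) u.
# Build an anti-symmetric A (A^T = -A) and check the energy rate vanishes.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = B - B.T                       # anti-symmetric by construction

u = rng.standard_normal(4)
energy_rate = u @ (A + A.T) @ u   # should be zero for every u
print(energy_rate)
```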
What are the special solutions to that differential equation? The special solutions to this equation are e to the, just, this is connecting now with things that really are basic. The special solutions, the pure eigensolutions, the ones that follow their own paths, are the e^(lambda*t)x's, right? Where x is an eigenvector of A, and lambda's an eigenvalue. Those are the guys, and we expect to have n of them, and we expect a combination of those to give us the general solution and to match the boundary conditions. So these n guys with n different eigenvectors and their eigenvalues are the heart of problems like this. And of course the additional homework problem that wasn't in the book but that I added was exactly that, to get you to practice with these eigenvectors and eigenvalues.
OK, now, what's the deal, I want to connect this energy conserving with this picture of solutions. When would this keep the same energy? When would this have constant energy, constant length? The length would be constant, since x is certainly, that's an eigenvector, whatever it is. This is what's changing. And now I want to know, when does the length not change? Well, the test would be that this number should have absolute value one, right? If this keeps absolute value one, then in the eigenvalue picture I have energy staying the same. OK? Now, when will this have magnitude one? Time is running along, this is e to the lambda t, so which lambdas? Zero, certainly, but now there's more, you gotta know the others. What other lambdas will have, what other lambdas give me this thing stays on the unit circle, absolute value one? Key question you must know. lambda could be? Imaginary. lambda could be imaginary; right, lambda could be imaginary. e^(i*omega*t). That's just like basic fact about complex numbers, that if lambda's imaginary we would have cosine of something t plus i times the sine of something t, cos squared plus sine squared being one, we'd be on the unit circle.
So from this picture we would want the lambdas to be pure imaginary. And now a little next step, what we'd like for eigenvectors, because the real solution will not be just one of these guys but a combination. So when we have a combination each one is doing its thing, each one better have lambda imaginary but more than that, we would want the x's to be perpendicular. Because if the x's interact, then this guy, one of these, you will say there's only one there, but I'm thinking of n of them there. A combination of say, two of them. Suppose I have an e^(lambda_1 t)*x_1, and an e^(lambda_2*t)*x_2, when does that conserve energy? Well, each one will, but the combination will be fine if the x's are perpendicular. Because if I have perpendicular vectors, then the length of the whole combination by Pythagoras is just one squared and the other squared and each of those pieces is constant.
Let me say what I'm trying to say. That the eigenvalue, eigenfunction picture also tells us that we would like imaginary eigenvalues and perpendicular eigenvectors. And that is exactly what you get from A transpose equal minus A. So A transpose equal minus A is exactly, those matrices, anti-symmetric matrices, have perpendicular eigenvectors, just like symmetric, but the eigenvalues are pure imaginary. Instead of all being real, they're all pure imaginary. In other words, that answer and the discussion here came to the same conclusion, that A should be anti-symmetric. OK.
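(An editor's sketch, not course code: that claim about anti-symmetric matrices, pure imaginary eigenvalues and perpendicular eigenvectors, can be confirmed numerically with a random anti-symmetric matrix.)

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = B - B.T                      # anti-symmetric: A^T = -A

lam, X = np.linalg.eig(A)
print(lam)                       # real parts ~0: pure imaginary eigenvalues
# For distinct eigenvalues, the (complex) eigenvectors of this normal matrix
# come out orthonormal, so X^H X should be close to the identity.
print(np.abs(X.conj().T @ X))
```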
Now let me look at-- Is that OK, so that's a discussion which is worth knowing about differential equations. When is energy conserved. Now I want to do, or the problem asks me to do, what about this difference equation? When is energy conserved there? And I believe it will be, this is the requirement for the differential equation to be OK, to conserve energy and so I'm going to expect, I'm going to need that in this one. And is that enough? If I have this A transpose equal minus A, anti-symmetric, it was good for this, does it also do the job here? Is this trapezoidal method just cool so that it will conserve energy, too. And the answer, I think, is yes, and the problem was to prove it, or to see why. So what do I now want? If you don't mind my erasing, I'm now going to look at the discrete guy. So now I'm looking at when is u_(n+1) squared equal u_n squared? That's what I mean by conserving energy in the discrete case. At every step, same energy.
So now I want to look at the energy in u_(n+1) compared to the energy in u_n, and I want to see that this holds. Probably, there's some smart way to do that. Now we're down to just the math questions. Math is always looking for some-- you just sort of do the right thing, you stand back and poof, it works. OK, so what's the right thing? Hopefully I have helped you and me by saying what would be a good idea to do here. Can I just look? OK, does it say what to do? Yeah. It says multiply by u_(n+1)+u_n. Take the dot product, that's interesting. Take the dot product of both sides with u_(n+1)+u_n. Did anybody succeed with this idea? But that's the idea. And hopefully we'll get it to work. That if I multiply both sides by u_(n+1), oh, plus u_n. Maybe better if I look at it this way. I'm sort of OK to do it that way. Suppose I multiply both sides. So now I'm following on this idea. That equation I've rewritten here, and without practice I don't know which one is the good one to start with. But I'm pretty OK with starting with this one. So what's my idea? That's my equation, now I'm going to multiply both sides by (u_(n+1)+u_n) transpose.
Now I don't have room to do it, unfortunately. I want to stick in here u_(n+1)+u_n, with a plus sign in there, and of course I have to do the same thing here. OK. Are you OK, do you see what I'm doing? I want to show that this equation, which is the same as this equation, has this property, which is a copy of this property. Here would be another way to do it. The way we're going to do it now sort of compares with the way we started, with the derivative of the norm squared. I could also ask the same question by following eigenvectors. I'm guessing that here the eigenvalues-- u_(n+1) is, you see, I could do it both ways. Maybe having just done eigenvectors, let me do this one by eigenvectors. So what happens to an eigenvector of A? Suppose u_0 is an eigenvector x of A. What's u_1? Yeah, you really should see this question. So u_0 is the eigenvector x, then what is u_1? Let me just write it here. Ax equaling lambda*x. So these are the eigenvalues of A, and we've learned that they're pure imaginary in this case when we're ready to go, and now I'd like to know that we get the good thing here.
OK, so if u_n is an eigenvector, what is u_(n+1)? OK, so can I just do that? u_(n+1) is, so what do I have on that right hand side? x and what is Ax? It's lambda, right? It's lambda*x. So all this is one plus lambda delta t on two x but now I've also got to bring this guy over here, its inverse. And see what that does. Now it's the inverse, so it's going to have the same eigenvector and the eigenvalue's going to go in the denominator and it'll be one minus lambda delta t over two. OK, so that's u_(n+1). Do you see what's happening here? The eigenvector x, if we start with that eigenvector x, we come out with a multiple of x. And this is the multiple. So each finite difference step multiplies by a number just the way each, in the continuous case we were multiplying by e to the lambda t and in the discrete step by step case we're multiplying by that number.
Actually, this is why problem seven is important, because if we want to know how accurate the comparison is I want to compare e to the lambda t with that number. So problem six is asking a question about that ratio. And problem seven is asking another question about that very same ratio. Now what's the question for problem six? When will this vector have the same length as-- This x was u_n. So I started with the u_n, I multiplied by this number to get u_(n+1), when do they have the same length? When that number has absolute value one. So if I'm watching eigenvectors, this guy had absolute value one because lambda was imaginary. Now, what about this guy? lambda's still that same lambda, imaginary. What can you tell me about one plus, so lambda is some i*omega, delta t over two and down here I have one minus i*omega, that's the lambda, delta t over two. I believe that that does have absolute value one. Anybody tell me why? So this is checking that energy is conserved for each eigenvector. The energy-- Because the eigenvector is multiplied by that number and that's some number, it's some complex number, but I believe it has absolute value one and I believe you can tell me why. Yep. Because they're complex conjugates. This numerator and the denominator are complex conjugates, in the complex plane here's the one, and I go up by i*omega*delta t over two, or on this one I go down by-- But those lengths are the same. That numerator, the length of the numerator is that guy, the length of the denominator is this guy, and their ratio is one. So I think that this gives us the point about complex numbers. That a complex number and its conjugate automatically have ratio of magnitude one.
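(An editor's sketch, not from the book: the three growth factors for an imaginary eigenvalue lambda = i*omega can be compared directly in Python. Any small omega and delta t will do; the values below are arbitrary.)

```python
import numpy as np

omega, dt = 1.0, 0.1
lam = 1j * omega                            # pure imaginary eigenvalue

g_trap = (1 + lam*dt/2) / (1 - lam*dt/2)    # trapezoidal: conjugates, ratio 1
g_fwd  = 1 + lam*dt                         # forward Euler
g_bwd  = 1 / (1 - lam*dt)                   # backward Euler

print(abs(g_trap))   # 1 (up to round-off): energy conserved
print(abs(g_fwd))    # > 1: spirals out
print(abs(g_bwd))    # < 1: spirals in
```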
You see the difference with Euler's method. So Euler's method, so forward Euler-- Forward Euler would not have had this stuff on the left side. It would all have been on the right-hand side. Forward Euler would have been I plus A*delta t. And what are its eigenvalues? One plus i*omega*delta t, right? With no-- we're not dividing by anybody. This part is up top too, so it's one plus i*omega*delta t. Now, does that have absolute value one? Well, you know from the way I'm asking the question, what can you tell me about the absolute value of the forward Euler growth factor? Greater than one. Because this is the one, and this is the i*omega*delta t, maybe went up twice as far. And there was nobody to divide by. It's bigger than one, so it blows up. And backward Euler had only the one over one minus i*omega*delta t, so the backward was like this, one over it. And less than one. But this balance has absolute value equal one. So, OK, that's the sort of heart of what's going on. Can I, before I tackle the question using the hint there, which would take me on another blackboard, can I discuss question seven? Were you going to ask me about number seven?
AUDIENCE: Yeah, I was.
PROFESSOR STRANG: You were? OK. Alright. We'll get the answer. So, question seven is about the accuracy. So here's the correct number, this is my e^(i*omega*t), that's the correct number that I should be multiplying by. And the actual number that I'm multiplying by is that much. Or, in the forward Euler case, it's that one. And so I'm comparing the one step accuracy. So let me compare one step accuracy. So this is the topic now, of order of accuracy. This is question seven. And it amounts to comparing the-- So what is one delta t step in the continuous case? So how much does the eigenvector x, what does it get multiplied by if I take a delta t step in the differential equation? So this is the exact delta t step, what the finite difference won't get exactly right. So the exact step delta t. The differential equation, and of course I'm always looking at Ax=lambda*x, the differential equation multiplies x by what? What's the exact growth factor, you could say, if my equation is du/dt=Au, that's the differential equation, and I'm supposing that I'm on an eigenvector x, so that the solution is e^(i*omega*t), or e^(lambda*t) times x. Now, what happened over a delta t step? This is the answer like running along for all time, all I'm asking you to do is if the step is delta t, what's that number? I mean that number is telling us how much it grew in that delta t step, and of course it's e^(i*omega*delta t). That's the exact growth factor, that's G_exact. In one time step, the eigenvector gets multiplied by that, because that's the amount of time that elapsed.
And what's the approximate growth, the growth factor from trapezoidal is just what we wrote down here. One plus lambda*delta t, maybe I'll stay with lambda rather than i*omega. e^(lambda*delta t), and this was one plus delta t over two lambda, divided by one minus delta t over two lambda.
So question seven just says compare that with that. Thinking of delta t as a small time step, if delta t is zero, then of course e to the zero is one, if delta t is zero I get one here, they're correct if delta t is zero, that's no big deal. How do I understand what happens for small delta t? I'm comparing this exponential for a small delta t with this guy for a small delta t. How do you make comparisons for a small delta t? Well, that's what Taylor series is all about. Let's do the Taylor series. What's the series for the exponential? If delta t is small, I have e to some little number, tell me, start me out on the exponential. One, thanks. One plus lambda*delta t plus, this is the exponential series, there are only two series in this world that are worth knowing. Really, that's literally true. In calculus you study all these infinite series, there are two that are important, that are worth remembering long after calculus. And e to the x, e to the whatever, is one of them. OK, what's the next term? Over two, lambda delta t squared over two, and then there's a cube guy, if you don't mind telling me what's the denominator in that one? It's three factorial, six. Good. And onward. OK. So that's one of the series that everybody should know.
OK, how are we going to deal with this guy? We want to expand that, so what's my goal? I want you to expand that in powers of lambda*delta t and compare with this. And see where-- they aren't going to be equal, right? At some point they're going to be different. But at least they should start out equal. So here's the heart of problem seven. How do I expand this in powers of delta t? Do you mind if I just, this is just a number, let me put it times one over, so this is times one minus delta t over two lambda, inverse, right? I just bring that up as a number. So it's this guy times one over this guy. What do I do? This is, here's the moment when the math tools get used. And I'm well aware that it's like years since you did calculus or series or whatever, and those tools get rusty. And the point is that they're really genuine tools that we can now use. So what do you think? This is the one coming from the denominator; this is 1/(1-x). So I have a 1/(1-x) deal. And what's the series for that?
I said there were two series worth remembering, and sure enough the exponential was one of them and now we're ready for the other one. What's the series for that guy? 1+x, good start. Plus x squared. Right, x squared plus x cubed and so on. Real simple. It's all the same stuff with no factorials. Those are the two series to know. The exponential series and the geometric series. Right, that's the geometric series. OK, so that's what I've got out of this stuff. Can I write it below? I have one plus delta t over two lambda. Let me just call that x for the moment. Delta t over two lambda is my x. One plus x, and this is 1/(1-x), which you just told me is one plus x plus x squared plus x cubed and so on. And now I've got to do that multiplication. OK, x is, remember this is x, I'm just saving space. Can you multiply those guys? So that's one plus x times a lot of stuff here. What do I have all together? Well, the one, what's the next term? Two x's? Everybody spots the two x's there? And then the next term, you have to get these terms right because we plan to compare with this guy and see how many we get. How many x squareds are in there? Is it two? Looks like two. Two x squareds. And two x cubes, and so on. Yeah, that looks right, OK.
Now I'm ready, what am I ready for? I'm ready to say what x is, x is this delta t over two lambda. So what have I got here, one? What is this guy now? Two x's is delta t lambda. Is this good? Yes, right? We're pleased. Because the two x is the, two of these is delta t lambda and that's what we wanted to match. Absolutely. Delta t lambda, lambda delta t. Now let's keep going. By the way if this first term hadn't matched we would be extremely surprised. Because that first matching is only saying that my difference equation is quite consistent, it's a reasonable creation out of the differential equation. And we knew that. The question is how much further are we going to get? Euler will not get any further. With Euler the next ones will fail. But I think with trapezoidal the next ones are going to work. Does it work? It's like we're holding our breath, right?
Two now, I'm going to put in x squared and see about this term. x is what? x is this guy, delta t over two lambda. Delta t lambda over two, squared. And now you get the fun. Because you're going to compare this term with what? With this term. And are they the same? Yes. Yes. So that's the way, you see, that you got the extra accuracy which Euler did not give you, and that's why the trapezoidal rule is a second order accurate method. OK, you may say that I went overboard to say all that. You may say I didn't ask that question. But it's the right question to ask about order of accuracy, and it's what problem seven was intending to bring. Maybe I called it h in problem seven rather than x here.
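(An editor's sketch of that series comparison, with illustrative values of x = lambda*delta t: e^x and (1+x/2)/(1-x/2) agree through the x squared term, and the first mismatch is x cubed over 4 versus x cubed over 6, a gap of x cubed over 12. So the one-step error should shrink like x cubed.)

```python
import numpy as np

# Compare the exact factor e^x with the trapezoidal factor (1+x/2)/(1-x/2).
# The series agree through x^2; the first mismatch is x^3/4 - x^3/6 = x^3/12.
for x in [0.1, 0.05, 0.025]:
    err = abs(np.exp(x) - (1 + x/2) / (1 - x/2))
    print(x, err, err / x**3)    # the ratio settles near 1/12
```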
Well. Oh gosh, I realize I'm supposed to come back to this one. But some people might have other problems that they're interested in. But let me, because time is pushing along, and the solution to this one we'll post, let me at least offer the possibility to ask me about something completely, not six or seven here, but something entirely different, like what's the first question on the quiz or anything. And that, let me say, I'll hope to know by Tuesday. I love to teach, but making up exams is serious work. Anyway. Let me open a board and open to another question of any sort. Any place, Chapter 1, Chapter 2, whatever. Is there anything? So I know that you're in the middle of this homework. So I can say a little more here about that number six if you want, but I wanted to allow-- yep.
AUDIENCE: [INAUDIBLE].
PROFESSOR STRANG: The A, from today's lecture this was the incidence matrix, and this was the A transpose A that's probably still on the board somewhere. Yep. Yep. So this is the A, which you should take in and be able to create if I gave you the graph, and this is the A transpose A, so it's through today's lecture, yeah. Next lecture I'll be talking about the A transpose by itself, which involves Kirchhoff's current law, it's beautiful. A transpose w equals zero. But I think this part was straightforward enough to be able to add this to our list of problems which fit the framework. So that's what that was about. It doesn't mean that this will be on but it could be, right. OK, what else? You guys are patient, I come on-- Yeah, thanks.
AUDIENCE: [INAUDIBLE].
PROFESSOR STRANG: Yep.
AUDIENCE: This is only valid when x is less than one?
PROFESSOR STRANG: It's only valid when x is less than one, so that's now the math point: this expansion for e^x is valid for all x's. Because you're dividing by these bigger and bigger numbers. But this one is only valid up to x=1. At x=1 we're getting one plus one plus one, and we're getting one over one minus one, sort of infinity matches infinity, but then if x goes up to two, yeah, what happens if x is two? It's sort of not good, but you know mathematics, it's never completely crazy, right? If x is two, what does this say? What have I got on the left hand side? Negative one. And what have I got on the right hand side? One plus two plus four plus eight. I should not allow this to be videotaped, but that's actually not so completely crazy. In some nutty way that could still make some sense. That certainly will not be on the-- So you're right that x should be less than one, and of course it will be here because I'm looking at little delta t's. Little, so my delta t-- My x was this thing and my delta t, the time step, was small, and somehow that tells me, actually this is a good indication. It gives me the units. That stability and things going right will depend on lambda delta t, that's the key parameter there. That's like the dimensionless parameter that we're-- or lambda delta t over two, or whatever. But lambda delta t is the key.
And a highly important key. It tells us that as lambda gets bigger, as the matrix has bigger eigenvalues, delta t has got to get smaller. And I mentioned stiff equations. Stiff equations are equations where the eigenvalues lambda are out of scale. You know, you might have two eigenvalues, one of size one and the other of size ten to the fourth, because you've got two physical processes going on at the same time. And those equations are tough, because that ten to the fourth guy is forcing your delta t to be really small. Whereas the action might, the true, real solution might be controlled by the lambda=1 guy. So to follow this slow evolution, you're having to take very small steps because on top of that slow evolution with the lambda=1, there's some very fast evolution maybe with lambda equal minus 10,000. Yeah, there's a lot happening here. And always you have to think OK, is there some way around that box. Because forward Euler would not get you through.
OK, thanks for that question, you got another one? OK.
AUDIENCE: So then if you weren't using small enough time steps, [INAUDIBLE]?
PROFESSOR STRANG: If you weren't using small enough time steps, OK. For trapezoidal, let's say?
AUDIENCE: I mean, that expansion wouldn't hold if you were using a lambda--
PROFESSOR STRANG: Well, the expansion is really intended for a small delta t. Yeah. It's not intended, I never added up the whole series. I just compared a couple of terms to see how am I doing, and I got the extra term to match from trapezoidal that I didn't get from Euler. So what's to say? If you took delta t too big, what would happen in the trapezoidal method? Well, you would stay on this circle because the absolute value of this thing is truly one. Even if lambda is enormous and delta t is way too big, we still had complex conjugates and their ratio was one. So we would not leave the circle, at least in perfect arithmetic, as everybody says. If we didn't make any round-off error, we would not leave the circle. But boy would we skip all over the place on that circle. So if we took delta t too big, we would be completely inaccurate. We wouldn't be unstable, for trapezoidal, because it would stay on the circle, but the phase would be completely wrong, yeah. So it would be a complex number of absolute value one, but it would not be close to the exact growth factor.
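(An editor's sketch, not the book's code, of exactly that answer: trapezoidal on u'' = -u with a ridiculously large step. The radius stays at 1, because the step matrix is built from an anti-symmetric A, but the computed point bears no relation to cos(t).)

```python
import numpy as np

# u'' = -u written as d/dt [u, u'] = A [u, u'], with anti-symmetric A.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
dt = 2.0                             # far too big for any accuracy

# One trapezoidal step: (I - dt/2 A) u_{n+1} = (I + dt/2 A) u_n.
step = np.linalg.solve(np.eye(2) - dt/2 * A, np.eye(2) + dt/2 * A)

u = np.array([1.0, 0.0])             # start at (cos 0, -sin 0)
for _ in range(10):
    u = step @ u

print(np.hypot(u[0], u[1]))          # still 1: never leaves the circle
print(u[0], np.cos(10 * dt))         # but the phase is completely wrong
```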
Well, so many things to say. I realize that the course moves along pretty quickly but this topic of numerical methods for differential equations, that's a core part of 18.086. So I'm like anticipating here in just a couple of days what really takes longer is the stability and the accuracy and the best choices for time-dependent problems. OK, always good questions. Anything else that's on your mind of any sort? Yes, thanks.
AUDIENCE: [INAUDIBLE].
PROFESSOR STRANG: 114?
AUDIENCE: There is a figure 2.7.
PROFESSOR STRANG: OK. OK. 114 figure 2.7. Oh yes, OK. Oh yes.
AUDIENCE: I figure it's about how these shapes [INAUDIBLE].
PROFESSOR STRANG: Let's see. That has a bunch of figures, so, to say it for everybody who's not looking at the book, those figures are about the problem we've discussed here, a model problem where we're on a circle. So do I have space to draw a circle? Well, let me just make space here. OK, so page 114 has that model problem that we've drawn before. There's the exact solution, here's the phase plane; there's u and there's u', and the u was cos(t), so the u' was minus sin(t), and we travel around the circle. On the exact solution. Energy constant, u squared stays one-- u squared plus u prime squared stays one. Now which figure was it you wanted me to look at? So.
AUDIENCE: [INAUDIBLE]
PROFESSOR STRANG: Of any of them?
AUDIENCE: Yeah.
PROFESSOR STRANG: OK, that's fine. Let's see. Is trapezoidal on that one? Yeah. Trapezoidal was the first one. OK, so figure 2.6 shows the trapezoidal method moving around the circle. So what happens? Yeah, thanks, that's a very suitable question. OK. And I took, in that figure I took-- how long does it take for the exact solution to get exactly back where it started? At t equal what do I come back?
AUDIENCE: 2pi.
PROFESSOR STRANG: t equal to 2pi, I'm right back where I was. Right? Cosine has period 2pi. OK, now a single step of size 2pi would be really ridiculous, right? I mean, I want a reasonable delta t now. So in that figure I took delta t to be 2pi divided by 32. So I'm taking delta t to be the 2pi, that would bring me all the way around, but I'm dividing by 32. So, what does that mean? What does the exact solution do at those steps, 32 steps? It goes on the circle, 32 equal steps, 360 over 32 degrees, 2pi divided by 32 radians every time, comes back exactly there, the exact solution. And right where I started. So it's like following a planet. Now I do it by finite differences. So now I'm going to follow the trapezoidal rule, just what we've been talking about, with that time step, and with the equation-- Everybody remembers the equation was [u, u'] equals, do you remember what the matrix was in that equation? This is the derivative of it and this is [u, u']. Sorry to squeeze this in, but what I'm writing is: u' is u'. u'' is minus u. Now we know why that matrix was good, right? Why is that? That's my matrix A, why is it good? Because it's exactly, it fits. A transpose is minus A. It's anti-symmetric. Keeps me right on the circle.
OK now, trapezoidal method keeps me right on the circle, 32 steps. And so the picture just shows where it goes after 32 steps. And after 32 steps does it come back there? Well, not exactly, right? We don't expect to find a difference solution to be exactly in sync with cos(t), the real one. But it's really close. I think in that figure I can see that that's sort of a double point there, at 2pi. I put a little arrow indicating small phase error. It misses by a little bit. And actually, roughly what does it miss by? This was the point of the order of accuracy stuff. Roughly what size is that little error? That's what we did over here. The term that we got wrong was a delta t cubed. At each step. Can I just tell you the answer? The error here is of size delta t squared. Because over here we matched those series and we found the error was delta t cubed. That's in a single step. But now we've got one over delta t steps, you see what I'm saying? That if the error was delta t cubed per step, and I have one over delta t steps to get somewhere, or 2pi over delta t or whatever, then that gives me delta t squared. So that little error there is my error of size delta t squared. And that square tells me I've got a good method. At least, decent. Second order accurate. And the trapezoidal rule is sort of the natural one. Well, OK, so that's a full hour mostly devoted to two or three things. Actually the eigenvectors came into it. And the energy conservation came into it, the stability and matching series came into it. And the picture. OK, I'll see you Friday for more about these guys, and then Monday evening please ask me everything you want to, on Monday evening. OK. Thank you.
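(An editor's sketch of that last delta t squared claim, using the same setup as the figure, 32 trapezoidal steps of size 2pi/32, and then halving the step to confirm the endpoint error drops by about a factor of 4.)

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])          # d/dt [u, u'] = A [u, u'], so u'' = -u

def endpoint_error(N):
    """Go once around the circle in N trapezoidal steps of size 2*pi/N."""
    dt = 2 * np.pi / N
    step = np.linalg.solve(np.eye(2) - dt/2 * A, np.eye(2) + dt/2 * A)
    u = np.array([1.0, 0.0])
    for _ in range(N):
        u = step @ u
    # The exact solution returns to the starting point at t = 2*pi.
    return np.linalg.norm(u - np.array([1.0, 0.0]))

e32, e64 = endpoint_error(32), endpoint_error(64)
print(e32, e64, e32 / e64)           # ratio near 4: second-order accuracy
```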