
Lecture 23: Calculus of Variations

The following content is provided by MIT OpenCourseWare under a Creative Commons license. Additional information about our license and MIT OpenCourseWare in general is available at ocw.mit.edu.

PROFESSOR: Let's get started. Could I start first with the announcement of a talk this afternoon. I know your schedules are full, but the abstract for the talk I think will go up on the top of our web page. It's a whole range of other applications that I would hope to get to by an expert. Professor Ronicker, his applications include optimal control, which is certainly a big area of optimization. Actually, MathWorks, we think of them as doing linear algebra, but their number one customer is the control theory world, and it totally connects with everything we're doing. He's also interested in adaptive meshing for finite element or other methods -- how to refine the mesh where it pays off in error problems, all sorts of problems. The math behind it is this same saddle point structure, same KK -- when he says KKT equations, that's our two equations. I've been calling them sometimes Kuhn-Tucker, but after Kuhn and Tucker, it was noticed that a graduate student named Karush, also a K, had written a master's thesis in which these important equations appeared. So now, they're often called KKT equations. I think Ronicker will do that.

Anyway, I don't know if you're free, but it'll be a full talk in this program of computational design and optimization. I don't know if you know MIT's new Masters degree program in CDO. So it's mostly engineering, a little optimization that's down in Operations Research, and a couple of guys in math. So that's a talk this afternoon, which will be right on target for this area and bring up applications that are highly important.

Well, so I thought today my job is going to be pretty straightforward. I want to do now the continuous problem. So I have functions as unknowns. I have integrals as inner products. But I still have a minimization problem. So I have a minimization problem, and I'll call it -- it's often a potential energy -- so let me use p. Our unknown function is u, so that's a function -- u of x, u of xy, u of xyz. Instead of inner products we have integrals, so there's a c of x. This is going to be a pure quadratic, so it's going to lead me to a linear equation. In between comes something important -- what people now call the weak form of the equation.
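Written out as a formula (my rendering of the model problem being described, with the integral over the unit interval and the boundary conditions that come up later):

```latex
\min_{u}\; P(u) \;=\; \int_0^1 \tfrac{1}{2}\, c(x)\,\bigl(u'(x)\bigr)^2 \, dx
\;-\; \int_0^1 f(x)\, u(x)\, dx,
\qquad u(0)=a, \;\; u(1)=b .
```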

So, this would be the simplest example I could put forward of the calculus of variations. So that's what we're talking about. Calculus of variations. What's the derivative of p with respect to u? Somehow that's what we have to find and we're going to set it to zero to minimize. Then why do we know it's a minimum? Well, that's always the second derivative. The quadratic terms here are going to be positive. So we have a positive-definite problem. Positive-definite means things go this way, convex, and you locate the bottom, the minimum is where the derivative is zero. But what's the derivative? That's the question. It's not even called the derivative in this subject, it's called the first variation. Instead of saying first derivative, I'll say first variation. Instead of writing dpdu, I'll write -- where can I write it? So this is the key word, so this is going to be the first variation, and I'm going to write it with a different d, a Greek delta, dpdu. It's just sort of a reminder that we're dealing with functions and integrals of functions and so on. So it's just changing the notation and the name a little as a trigger to the memory.

But this board -- not all the details are here, of course. This is a summary of what the calculus of variations does. So it takes a minimization problem. So I'm looking for a function u of x. Always, since we're in continuous problems, we have boundary conditions. So as always, those could be -- let me imagine that those are the boundary conditions. So I would call those essential conditions. Every function that's allowed into the minimum has to satisfy the boundary conditions. So it's a minimum over all u with the boundary conditions, with those boundary conditions.

There are two kinds of boundary conditions, and maybe I'll postpone thinking about boundary conditions till I get the equations, the differential equation inside the interval. So all these intervals go from zero to one. I won't put that. So we're in 1d. So, how do you find the function? u prime of x -- that stands for dudx. So the given data for the problem is a load, some source term, f of x, which is going to show up on the right-hand side, and some coefficient c of x, which is going to show up in the equation. They depend on x, in general. Many, many, many physical problems look like this. Sort of steady state problems, I would say. I'm not talking about Navier-Stokes fluid flow convection, I'm talking about static problems, first of all.

So here's the general idea of the calculus of variations. It's the same as the general idea of calculus. How do you identify a minimum in calculus? If the minimum is at u, you perturb it a little by some delta u that I'm going to call v to have just one letter instead of two here. You say OK, if I look at that neighboring point, u plus delta u, my quantity is bigger. The minimum is at u. So we're remembering calculus -- I guess I'm saying I've written that here. Compare u with u plus v, which you could think of as u plus delta u. It's like v you might think of as a small movement away from the best function. In calculus it's a small movement away from the best point. So let me draw the calculus. If you think of this blackboard as being function space instead of just a blackboard, then I'm doing calculus of variations. But let me just do calculus here. So there's the minimum at u. Here is u plus v near it. Could be on this side or it could be on this side. Those are both u plus v. Well, you maybe want me to call one of them u minus v. But the point is v could have either sign.

I'm looking at a minimum sort of inside, where I can go to the right or the left. So what's the deal? Well, at that point or at that point or at any of these other points, p of u plus v is bigger than what it is at the minimum. That tells us that u is the winner. Now how do we get an equation out of that? Calculus comes in now. We expand this thing -- this is some small movement away from u. So we expand it, we look at the leading term -- well, of course, the leading term is p of u. Then what is the next term? What's the first order, first variation in p when I vary u to u plus v? Well, it's the whole point of calculus, actually. The central point of calculus is that this is some function that we call p prime of u times v, plus order of v squared. When v is small, v squared is very small. So what's our equation? Well, if this has to be bigger than p of u -- I could just cancel p of u -- so this thing has to be bigger or equal to zero now. I've squeezed it in a corner, but since it's calculus we kind of remember it. Also, it's easy to learn calculus and forget the main point.

So this has to be greater or equal to zero. Now this is going to be -- if v is small, this is going to be very small, so it won't help. So this thing had better be zero, right? That had better be zero. Because if it isn't zero, I could take v of the right sign to make it positive, and take v small, so that this would dominate this. Maybe I wanted to take v of the right sign to make that negative anyway. I need p prime of u to be zero. So that's what I end up with, of course, as everybody knew. That p prime of u had to be zero. So in that tiny picture, I've remembered what we know. Now let me come back to what we have to do when we have functions.
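Written out, the scalar calculus argument being recalled here is:

```latex
P(u+v) \;=\; P(u) \;+\; P'(u)\,v \;+\; O(v^2) \;\ge\; P(u)
\quad\text{for all small } v \text{ of either sign}
\;\;\Longrightarrow\;\; P'(u) \;=\; 0 .
```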

So what's happened? I compare p of u -- u is now a function, u of x -- with p of u plus v. So I plug in u plus v, I compare with what I get with only u, and what's the difference? I look at the difference and the difference will have a linear term from u prime plus v prime squared, and then I'm going to take away the u prime squared because I've got to compare the two. So what's left? There will be a 2u prime, v prime. I'm maybe just being lazy here. I'm asking you to do it mentally and then I'll do it a little bit. So the difference in this comparison is a 2u prime, v prime times c, and the 2's cancel. There's the difference right there. What's the difference over in this term? Well, I have that term and then I have the same term with u plus v, and then when I subtract I just have the term with v.

So this is the dpdu that has to be zero for every v. Now I'm really saying the important thing. This weak form is like saying this has to be zero for every v. Then, of course, in this scalar case, it was like a very small step to decide: well, if this is zero for every v, then that's zero, which is the strong form. Are you with me? So we have a minimum problem, minimize p; we have a weak form, the first variation, the first derivative, the first-order term has to be zero for every v. Then if it's zero for every v, that forces the derivative to be zero and that's the strong form.
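The weak form just described, written out (my transcription into a formula):

```latex
\delta P \;=\; \int_0^1 c(x)\,u'(x)\,v'(x)\,dx \;-\; \int_0^1 f(x)\,v(x)\,dx \;=\; 0
\qquad \text{for every } v \text{ with } v(0)=v(1)=0 .
```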

Now over here it took more space. But the ultimate idea is the same. We looked at p of u plus v compared with p of u, subtracted, looked at the linear term and there it is, the first variation. That has to be zero for every v. I'll just mention boundary conditions now, as long as we're at this weak form. Don't think of this weak form as just some mathematical nonsense to get to the differential equation, because the weak form is the foundation for the finite element method -- all sorts of discrete methods, discretization methods will begin with the weak form, the weighted integral form, rather than the strong form.

I was just going to say a word about boundary conditions. What are the boundary conditions on v? So you could say well, v stands for a virtual displacement. Virtual meaning we're kind of just imagining it; it's a displacement that we can imagine moving by that amount, but the whole point is that nature fixes the minimum. What's the boundary condition? Well, all the candidates have to satisfy these boundary conditions. So I have to have u plus v at zero also has to equal a, and u plus v at 1 also has to equal b, if I took these simple boundary conditions. So, by subtraction, I learn the boundary conditions on v. When I say all v, I mean v of zero has to be what? Zero. And v of 1 has to be zero -- when we had those boundary conditions. Different problems could bring different boundary conditions, of course, but this is the easiest. So when I say all v, I mean every function that starts at zero and ends at zero is a candidate in this weak form. I have to get the answer zero for all those functions.

Now somehow, I want to get to this point, the differential equation, and why don't we give it the name that everybody -- the two guys' names, Euler-Lagrange -- well, pretty famous names. This is the Euler-Lagrange equation. So maybe before, where I said Kuhn-Tucker or something, if I'm talking about differential equations, I go back to these guys. Now, how did I get from weak form to strong form? That's the key. If you see these two steps from the minimum principle to the weak form, that's just, again, plug in u plus v, subtract and take the linear part. Then it's true for all v's. Now how do I get from here to here? Notice that this form is an integrated form. This form is at every point. So it's much stronger and much more demanding. You could say OK, that gives us the equation -- that's the equation as we usually see them. I'll do 2d and that'll be Laplace's equation or somebody else's equation, the minimal surface equation, all sorts of equations. Everything. All sorts of applications including this afternoon's lecture.

How to get from here to here. Well, there's one trick in advanced calculus, actually. The most important trick in advanced calculus is integration by parts. Well, we use those words, integration by parts, in 1d here. So I'll do that. We use maybe somebody else's name -- Green's formula or Green's theorem, or Green-Gauss or the divergence theorem or whatever, in more dimensions. But in 1d, I just integrate by parts. Do you remember how integration by parts goes? I want to get v by itself, but I've got v prime, because when I plugged in u plus v, here out came -- it's the derivative so I've got a derivative. So how do I get rid of a derivative? Integrate by parts. Take the derivative off of v -- can I just do that this quick way. Put the derivative onto -- I almost said onto u. I'm doing integration by parts and saying how important it is but not writing out every step.

So I take the derivative off of this and I put it onto this, and it's gotta be -- and a minus sign appears when I do that. Where the heck am I gonna put that minus sign? Right in there. Minus. Then everybody knows that there's also a boundary term, right? So I have to squeeze in somewhere this boundary term. So now that I've done an integration by parts, the boundary term will be the v times the cu prime at the ends of the integral. I think that's right. What we hope is that that goes away. Well, of course, if it doesn't, then there's a good reason that we don't want it to, but here it's nice if it goes away. You see that it does, because we just decided that the boundary conditions on v were zero at both ends. So, v being zero at both ends kills that term.
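That integration by parts step, written out, with the boundary term that v(0) = v(1) = 0 kills:

```latex
\int_0^1 c\,u'\,v'\,dx
\;=\; \Bigl[\, c\,u'\,v \,\Bigr]_0^1 \;-\; \int_0^1 \bigl(c\,u'\bigr)'\,v\,dx
\;=\; -\int_0^1 \bigl(c\,u'\bigr)'\,v\,dx .
```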

So now, do you see what I have here? I could write it a little more cleanly. The whole point is that the v -- I now have v there and I have v there so I can factor v out of this. Just put it there. That was a good move. This minus sign is still in here. So I now have the integral of some function times v is zero. That's what I'm looking for. The integral of some function, some stuff, times v, and v can be anything -- is zero. What happens now? This integral has to be zero for every v. So if this stuff had a little bump up, I could take a v to have the same bump and the integral wouldn't be zero. So this stuff can't bump up, it can't bump down, it can't do anything. It has to be zero and that's the strong form. So the strong form is with this minus sign in there, minus the derivative -- see, an extra derivative came onto the cu prime because it came off the v, and the f was just sitting there in the linear, no-derivative term. So, do you see that pattern? You may have seen it before, but the calculus of variations has sort of disappeared as a subject to teach in advanced calculus. It used to be here in courses that Professor Hildebrand taught. But actually it comes back because we so much need the weak form in finite elements and other methods.
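The weak/strong equivalence can be checked numerically. This is my own sanity-check example, not one from the lecture: with c = 1, the strong form -(cu')' = f is solved by u = sin(pi x) when f = pi^2 sin(pi x), and the weak integral identity should then hold for any v vanishing at both ends.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 20001)

def trap(y):
    # trapezoid rule on the fixed grid x
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

u_prime = np.pi * np.cos(np.pi * x)        # u' for the exact minimizer u = sin(pi x)
f = np.pi**2 * np.sin(np.pi * x)           # f = -(c u')' with c = 1

# A few test functions v with v(0) = v(1) = 0, paired with their derivatives.
tests = [
    (np.sin(np.pi * x),     np.pi * np.cos(np.pi * x)),
    (x * (1.0 - x),         1.0 - 2.0 * x),
    (np.sin(3 * np.pi * x), 3 * np.pi * np.cos(3 * np.pi * x)),
]

# Weak form: integral of c u' v' should equal integral of f v, for every v.
max_err = max(abs(trap(u_prime * vp) - trap(f * v)) for v, vp in tests)
print("largest weak-form mismatch over the test v's:", max_err)
```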

What I wrote over here is the discrete equivalent. I can't resist looking at the matrix form for two reasons. First, it's simpler and it copies this. Do you see how this is a copy of that? That matrix form is supposed to be exactly analogous to this continuous form. Why is that? Because u transpose f, that's an inner product of u with f -- that's what this is. That integral. Au is u prime in this analogy. So this is u prime times c times u prime with 1/2, and that, again, that transpose is telling us inner product, integral. So if I forget u prime and think of it as a matrix problem, that's my minimum problem for matrices.

I want to find an equation for the winning u. In the end, this is going to be the equation. That's the equation that minimizes that. Half of 18.085 was about this problem. Well, I concentrated in 18.085 on this one, because minimum principles are just that little bit trickier. So that's 18.086. Then in between, people seldom write about but, of course, it's going to work, is that I change u to u plus v, I multiply it out, I subtract, I look at the term linear in v, and that's it. That would be, if I make that just a minus sign and put it all together the way I did here, the same thing. So this is the weak form. This is the minimum form, this is the weak form, and this is the strong form. You see that weak form? Somehow in the discrete case it's pretty clear that -- let's see. I could write this as -- you see, it's u transpose, A transpose, CAv equals f transpose v. The conclusion is if this holds for every v, then this is the same as this. If two things have the same inner product with every vector v, they're the same, and that's the strong form. You'd have to transpose the whole thing, but no problem.
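The discrete chain can be verified directly. This is a small sketch in the spirit of that matrix framework -- the difference matrix A, the coefficients C, and the load f below are made-up stand-ins, not data from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# A: first-difference matrix with both ends fixed (n+1 rows, n unknowns).
A = np.zeros((n + 1, n))
for i in range(n):
    A[i, i] = 1.0
    A[i + 1, i] = -1.0

C = np.diag(rng.uniform(1.0, 2.0, n + 1))   # positive "material" coefficients
f = rng.standard_normal(n)                  # load vector

K = A.T @ C @ A                              # A^T C A, symmetric positive definite
u = np.linalg.solve(K, f)                    # strong form: A^T C A u = f

def P(w):
    # discrete energy 1/2 w^T K w - f^T w
    return 0.5 * w @ K @ w - f @ w

# Every perturbation v raises the energy: P(u + v) - P(u) = 1/2 v^T K v > 0.
worst = min(P(u + 0.1 * rng.standard_normal(n)) - P(u) for _ in range(100))
print("smallest energy increase over 100 random perturbations:", worst)
```

The solved u really is the minimizer: every random perturbation strictly increases the energy, because the leftover quadratic term is positive definite.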

So now I guess I've tried to give the main sequence of logic in the continuous case, and it's parallel in the discrete case for this example. For this specific example because it's the easiest. Let me do what Euler and Lagrange did by extending to a larger class of examples. So now our minimization is still an integral -- I'll still say in 1d, I'll still keep these boundary conditions, but I'm going to allow some more general expression here. Instead of that pure quadratic, this could be whatever. Now I'm going to do calculus of variations. Still in 1d. Calculus of variations: minimize the integral of some function of u and u prime with the boundary conditions, and I'll keep those nice, so that integral's still zero to 1 and I'll keep these nice boundary conditions just to make my life easy. So that will lead to v of zero being zero, and v of 1 being zero.

What do I have to do? Again, I have to plug in u plus v and compare the result with what I had with just u. So essentially I've got to compare f at u plus v and u prime plus v prime with f of u and u prime. I have to find the leading term in the difference. So I'll just find that leading term, and then will come the integral. But the first job is really the leading term, and it's calculus, of course.

Now can we do that one? It's pure calculus. I have a function of two variables, the function of u and u prime. Actually, I did here. I had a u prime there and a u there. Once I write it down you're going to say sure, of course, I knew that. So the function of two variables and I'm looking for a little change. So a little change in the first argument produces the df -- the derivative of f with respect to that first argument times the delta u, which is what I'm calling v. That's the part that the dependence on u is responsible for. Now there's also a dependence on u prime. So I have the derivative of f with respect to u prime times the little movement in u prime, which is v prime. I can't leave it with an equal there, because that's only the linearized part, but that's all I really care about. This is order of v squared and v prime squared. Higher order, which is not going to -- when I think of v as small, v prime as small, then the linear part dominates.
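That two-variable expansion, written out as a formula:

```latex
F(u+v,\; u'+v') \;=\; F(u,u')
\;+\; \frac{\partial F}{\partial u}\,v
\;+\; \frac{\partial F}{\partial u'}\,v'
\;+\; O\!\bigl(v^2,\,(v')^2\bigr).
```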

So can you see what dpdu is now? Now I've got to integrate. I integrate this, that very same thing, dx. That has to be zero for all v. You don't mind if I lazily don't copy that into the--. That's the weak form. This was the minimum form, now I've got the weak form. This is the first variation. The integral of the change in f, which has two components when there's a little change in u. I should be doing an example, but allow me to just keep going here until we get to the strong form.

So this is the weak form, for every v. Let me just repeat that the weak form is quite important because in the finite element method the v's are the test functions, and we discretize the v's -- you know, we have a finite number of test functions. Well, I can't go into that entirely now; I'll come back to it. Let me stay with this continuous problem, calculus of variations problem. v is v of x, and it satisfies these boundary conditions. That's the only requirement that we need to think about here.

What's the strong form? Also called the Euler-Lagrange equation. How do I get to that strong form? How did I get to it before? I would like to get this into something times v. Here I've got something times v but I've also got something times v prime. I wanted to get it up here where it was times v. What do I do? You know what to do. There's only the one idea here. Integrate by parts. So I have the integral. Well, dfdu times v was no problem -- that had the v that I like. But it's this other guy that has a v prime and I want v. So, it doesn't take too much thinking. Integrate by parts, take the derivative off of v, get a minus sign, and put a derivative -- can I do it with a prime, but I'll do better below -- onto this, and then there's a boundary term, but the boundary term goes away because of the boundary condition.

So now I have the v. Can I make it on this board? There's the dfdu multiplying v, and then there's the minus -- this is the derivative d by dx of dfdu prime. Now all that is multiplying v and giving me the integral of zero. I promised to write that bigger now. But, again, the central point was to get the linear term times v. That's always the main point. Then what's the conclusion? What's the Euler-Lagrange equation? This integral is this quantity times v, and v can be anything, and I've gotta get zero. So, what's the equation? That stuff in brackets is zero. That's the Euler-Lagrange equation. Finally, let me just write it down here. Euler-Lagrange strong form. In this general problem it would be dfdu minus d by dx of dfdu prime equals zero. Would you like me to put on what -- if f happened to depend on u double prime, it would be a plus -- this would be what would happen, just so you get the pattern, equaling zero. So I've gone one step further by allowing f to depend on curvature, and writing down what the resulting term would be in the Euler-Lagrange equation. Could you figure out why it would be that? Where would this thing have come from? It would have come from -- there would have been a dfdu double prime times v double prime in the weak form. Then I would have done two integrations by parts -- two minus signs making a plus, two derivatives moving off of v and onto the other thing, and it would give me that. All times v -- now I've got everything times v, it's true for every v, so the quantity has to be zero. That's the Euler-Lagrange equation, strong form, 1d problem.
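Written out, the Euler-Lagrange equation just stated, together with the extra term that a dependence on u'' would bring in:

```latex
\frac{\partial F}{\partial u}
\;-\; \frac{d}{dx}\!\left(\frac{\partial F}{\partial u'}\right) \;=\; 0,
\qquad\text{and if } F = F(u, u', u''):\quad
\frac{\partial F}{\partial u}
\;-\; \frac{d}{dx}\!\left(\frac{\partial F}{\partial u'}\right)
\;+\; \frac{d^2}{dx^2}\!\left(\frac{\partial F}{\partial u''}\right) \;=\; 0 .
```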

Yes? AUDIENCE: [INAUDIBLE PHRASE]?

PROFESSOR: Oh, you're right. You're right. If we had this situation then we would be up to -- this would typically be a fourth-order equation and we would have two boundary conditions at each end. Absolutely. When I slipped this in just to sort of show the pattern, I didn't account for the boundary conditions. That would be one level up as well. Exactly. And nature -- well, fortunately we don't get many equations of order higher than four. This would be like the beam equation; the parallels of that would be a beam equation or a plate equation or a shell equation, God forbid. In shell theory they're incredibly complicated because they're on surfaces. But the pattern is always this. Well, and, of course, they're also complicated because they're in 2D. So maybe I should say a little bit. Could I maybe write -- I'm trying to do a lot today. Trying to do kind of the formal stuff today.

So another step in the formal stuff would be to get into 2D, which I haven't done. What would be the famous 2D problem that leads to Poisson's equation? So 2D, what would I minimize? p of u would be a double integral of c times dudx squared plus c times dudy squared -- I would really like a 1/2 on that just to make life good. Then minus a double integral of f times u, dxdy, to emphasize these are double integrals.
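In symbols (my rendering of that 2D energy):

```latex
P(u) \;=\; \iint \tfrac{1}{2}\, c \left[
\left(\frac{\partial u}{\partial x}\right)^{\!2}
+ \left(\frac{\partial u}{\partial y}\right)^{\!2}
\right] dx\,dy
\;-\; \iint f\, u \;\, dx\,dy .
```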

That's a very, very important problem. Many people have tried to study that. Euler and Lagrange would produce an equation for it. You might say OK, solve the equation, that finishes the problem. But mathematicians are always worried: is there a solution? Is there a minimum? So I'm dodging the bullet on that one. When I say minimize over all functions u, I could create problems where worse and worse and worse functions got closer and closer to a minimum and there was no limiting minimizer. I won't do that. This is a problem that works fine. What happens to it? Well, should we try to do the same weak form?

We'd have a dpdu, and what do you think it would look like? It would have the integral, double integral. What would it have? It will have a cdudx and there will be a dvdx. Then integration by parts will take that x derivative off of v and onto this with a minus. Did it fast, did it way fast. Then out of this thing will come, if I look at the v term, there'll be a dvdy, and I take that y derivative off of that and onto this. Oh, I can't use d anymore, I have to use partials. d by dy of cdudy. Then the f is just sitting there. Oh, it's all multiplied by v. dxdy I didn't put yet, equals zero for all v. Again, this is the same thing that we had in ordinary calculus where it was just p prime of u times v equals zero. This is a level of sophistication up. It's producing differential equations, not scalar equations.

So what's the deal? I've done the integration by parts, so I've got everything multiplying a v. So what's the strong form of this problem? Well, if this integral has to be zero for every v, then the conclusion is that this stuff in the brackets is zero. That's always the same. That's the strong form. So that's Laplace's equation, or actually Poisson's equation because I have a right-hand side f.
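To see that 2D strong form in action, here is a small finite-difference check, my own example rather than the lecture's: with c = 1 the strong form is -u_xx - u_yy = f, and the exact minimizer u = sin(pi x) sin(pi y) corresponds to f = 2 pi^2 sin(pi x) sin(pi y) with u = 0 on the boundary of the unit square.

```python
import numpy as np

n = 40                                  # interior grid points per direction
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
X, Y = np.meshgrid(x, x, indexing="ij")
f = 2 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)

# 1D second-difference matrix; the 2D discrete -Laplacian is a Kronecker sum.
T = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
K2D = np.kron(T, np.eye(n)) + np.kron(np.eye(n), T)

# Solve the strong form -Laplacian(u) = f and compare with the exact minimizer.
u = np.linalg.solve(K2D, f.ravel()).reshape(n, n)
exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
err = float(np.max(np.abs(u - exact)))
print("max error against the exact minimizer:", err)
```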

Well, those are the mechanics. Now, what else comes into the calculus of variations? You've seen the pattern here. There's one important further possibility that we met last time, which was constraints. We're dealing here with a pure minimization. I didn't impose any side conditions on u except maybe the boundary condition. I'll just close with an example that I'm going to follow up, and it's going to have a constraint. So it'll look like the original problem, but there will be two u's. So can I try to get it right here?

My minimization, my unknown will have two components, u1 and u2. And it'll be in 2D actually. What I'm going to produce here is called the Stokes' problem. I'll study it next time, so if I run out of time, as I probably will, that's part of the plan. So it's Stokes and not Navier-Stokes. All I want to do is to write down a problem in which there is a constraint. So I write it as a minimization, say, dv1d -- oh, probably all these guys are in here. dv1dx and dv1dy and dv2dx and dv2dy -- sorry about all this stuff. Probably an f1v -- oh, I've written v, because my mind is saying that's the usual notation; I should be writing u because that would fit with today's lecture. It's a velocity and that's why many people call it v, and then they have to call the perturbation some w. f1u1, f2u2, all that stuff. No problem. That would lead to Laplace's equation, just the same, Poisson's equation. But I'm going to add a constraint. This u1, u2 is a velocity, despite the letter. I want to make the material incompressible. I have flow here and it's like flow of water, probably incompressible. So incompressible means that du1dx plus du2dy is zero. So that's a constraint. How the heck do we deal with it?

So I could do this minimization but with the constraint. So all this stuff I'm totally cool with now. That would just be calculus of variations; that would get me the dpdu, but I have to account for the constraint. So how do you account for a constraint? You build it into the problem with a Lagrange multiplier. So I multiply this thing by some Lagrange multiplier, and as I emphasized last time, Lagrange multipliers always turn out to mean something physically, and here it's the pressure. So it's natural to call the Lagrange multiplier p of xy. I build that in, so I subtract the Lagrange multiplier times this thing that has to be zero. That gets it in the problem. Now my function depends on u and the pressure.
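The constrained-minimization structure being set up here can be sketched numerically. This is a toy saddle-point system with made-up matrices K and B standing in for the stiffness and the discrete divergence -- not the actual Stokes discretization, just the KKT pattern:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 3
M = rng.standard_normal((n, n))
K = M @ M.T + n * np.eye(n)          # symmetric positive definite "stiffness"
B = rng.standard_normal((m, n))      # plays the role of the discrete divergence
f = rng.standard_normal(n)

# Minimizing 1/2 u^T K u - f^T u subject to B u = 0 gives the block system
#   [ K  B^T ] [u]   [f]
#   [ B   0  ] [p] = [0]
# where the multiplier p plays the pressure role.
S = np.block([[K, B.T], [B, np.zeros((m, m))]])
sol = np.linalg.solve(S, np.concatenate([f, np.zeros(m)]))
u, p = sol[:n], sol[n:]

print("constraint residual ||B u||:", np.linalg.norm(B @ u))
print("stationarity residual ||K u + B^T p - f||:", np.linalg.norm(K @ u + B.T @ p - f))
```

The solution satisfies the constraint exactly (to roundoff), and the gradient condition K u + B^T p = f shows how the multiplier enters -- that indefinite block matrix is exactly the saddle-point structure mentioned at the end of the lecture.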

I'm not going to push this to the very limit to find the strong form. But the strong form is the Stokes' equations that we'll study. So we have a lot to do here to make this into practical calculations where we can compute something, and finite elements is a powerful way to do it. So we have to turn these continuous problems into discrete problems. Then later we have to turn this type of problem, which will be a saddle point problem because it's got this Lagrange multiplier in there, into a discrete problem. Let me just stop by putting the words saddle point there, and just as in the lecture this afternoon, saddle points appear as soon as you have constraints and Lagrange multipliers.

Well, thanks for your patience. That's a lot of material that went by quickly. Those basic steps will be in section 7.2, which will go up on the web just as soon as we get it revised. I'm writing notes on your projects and I hope I'll have them ready for Friday. I'll aim for Friday because Monday is Patriots' Day and you have to run the marathon. So I'll see you Friday. Good. Thanks.
