Video Description: Herb Gross illustrates the equivalence of the Fundamental Theorem of the Calculus of one variable to the Fundamental Theorem of Calculus for several variables. Topics include: The anti-derivative and the value of a definite integral; Iterated integrals.
Instructor/speaker: Prof. Herbert Gross
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu
HERBERT GROSS: Hi. Hopefully by now you have become well experienced at computing limits of double sums. You've learned to hate me a little bit more because the work was quite complicated, and hopefully you are convinced that there must be an easier way to do this stuff. And fortunately, the answer is there is an easier way sometimes.
And the concept that we're going to talk about today is again completely analogous to what happened in calculus of a single variable where we showed that the concept of definite integral existed independently of being able to form derivatives, but in the case where we knew a particular function whose derivative would be a given thing, we were able to perform the infinite sum-- or compute the infinite sum much more readily than had we had to rely on the arithmetic of infinite series.
By the way-- and I'll mention this later in a different context, I hope-- the converse problem was also true. In many cases one did not know how to find the required function with the given derivative, and in those cases knowing how to find the area by the limit process was what enabled us to invent new functions which had given derivatives.
Essentially the difference in point of view was the difference between what we called the first fundamental theorem of integral calculus, and the second fundamental theorem of integral calculus. As I said the same analogy will hold here, and let's get into this now without further ado. I call today's lesson the fundamental theorem. And what I'm going to do now is to pretend that we never had the lecture of last time. That we have never heard of an infinite sum.
I'm going to introduce what one calls the anti-derivative of a function of two independent variables, again keeping in mind the analogy that for more than two independent variables a similar treatment holds. And the exercises will include problems that have more than two independent variables. The idea is that when I write down this particular double integral, if I look at the innermost integral, notice that x appears only as a parameter. The variable of integration is y; notice then that x is being treated as a constant.
What this says is what? Pick a fixed value of x, compute this integral, evaluate this as y goes from g1 of x to g2 of x. The resulting function is a function of x alone, and integrate that function of x from a to b. And to do that for you in slow motion so you see what happens here, what I'm essentially saying is read this thing inside out. Imagine some brackets placed here. Not imagine them-- let's put them in. And what we're saying is look-- first of all, fix a value of x. Hold x constant. And for that fixed value of x, compute f of xy dy from g1 of x to g2 of x.
Compute that, remembering you're going to integrate it with respect to y. When you're all through with this, because the limits involve only x, the integrand that you get-- this bracketed expression-- will now be a function of x alone. Call it h of x. Integrate h of x dx between a and b, and the resulting expression, assuming that you can carry out the integration of course, will be a particular number.
That has nothing to do, as I say, with double sums so far. Though I suspect that you're getting a bit suspicious. In the same way, the definite integral was a sum and not an anti-derivative, yet the notation of the definite integral looked enough like the notation of the indefinite integral that we began to suspect there was going to be a connection between them.
That connection is what's going to be called the fundamental theorem in our present case. But going on, let me just give you an example showing how we compute the anti-derivative. Let's take, for example, the integral from 0 to 2, integral from 0 to x squared, x cubed y dy dx. Again we go from inside to out. The way we read this is that we're integrating with respect to y. That means we're holding x constant-- for a fixed value of x, integrate this as y goes from 0 to x squared.
So treating x as a constant, I integrate this with respect to y. The integral of y dy is 1/2 y squared. I evaluate that as y goes from 0 to x squared. That's crucial, you see. It's y that goes from 0 to x squared. When I replace y by the upper limit, I get 1/2 x cubed times the quantity x squared, squared.
When I replace y by the lower limit 0, the integrand vanishes. So that what I wind up with is a function of x alone. Specifically what function is it? It's 1/2 x to the seventh, and I now integrate that from 0 to 2. That gives me 1/8 x to the eighth over 2, that's x to the eighth over 16. Evaluated between 0 and 2. 2 to the eighth over 16 is 2 to the eighth over 2 to the fourth, which is 2 to the fourth, which in turn is 16.
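The arithmetic above can be checked numerically. The following is a minimal sketch, not part of the lecture, assuming a simple midpoint-rule approximation (the `midpoint` and `inner` helpers are names introduced here purely for illustration): the inner integral is taken with x held fixed, and the resulting function of x alone is then integrated from 0 to 2.

```python
def midpoint(f, a, b, n=1000):
    """Approximate the integral of f over [a, b] with the midpoint rule."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def inner(x):
    """Hold x constant and integrate x^3 * y as y runs from 0 to x^2."""
    return midpoint(lambda y: x**3 * y, 0.0, x**2)

# Now integrate the resulting function of x alone from 0 to 2.
result = midpoint(inner, 0.0, 2.0)
print(result)  # close to the exact value 2^8 / 16 = 16
```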
So again, note this-- I can carry out this operation purely manipulatively. It is truly the inverse of taking the partial derivative. In the same way that taking the partial derivative involved holding all the variables but one constant, taking a partial integral-- an anti-derivative-- involves simply doing what? Integrating while treating all the variables but the given one as constants.
Which reduces you again to the equivalent of what the anti-derivative meant in the calculus of a single variable. Now here's where we come to the so-called fundamental theorem. Let me take what we've talked about today and what we talked about last time, and see if I can't tie those together for you in a completely different way. Meaning, I want you to see that conceptually today's lesson and last time's lesson are completely different, but the punch line is that there's a remarkable connection between the two.
Let's suppose I take the region R, which is a very simple region that's bounded above by the curve y equals g2 of x. It's bounded below by the curve y equals g1 of x, and that these two curves happen to intersect at values corresponding to x equals a and x equals b. And what I would like to find is the mass of the region R-- if its density is some given function f of xy.
Now in terms of what we did last time, this involved a double sum. What double sum was it? We partitioned this into rectangles. We picked a particular number c sub ij comma d sub ij in the ij-th rectangle. We computed f of c sub ij comma d sub ij times delta a sub ij, added these all up in the double sum as i went from 1 to n and j went from 1 to n, and took the limit as the maximum delta x i and delta y j approached 0. And that limit, if it existed-- and it would exist if f was piecewise continuous-- that limit was denoted by what?
Double integral over the region R f of xy da. And that da could either be viewed as being dydx if you wanted, it could be dxdy, It could also be in different coordinate systems, we're not going to worry about that part just yet. We'll worry about that in future lectures.
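The limit-of-a-double-sum definition just described can be sketched directly in code. The following is a rough illustration, assuming the region is the unit square partitioned into n by n equal rectangles, with each sample point c sub ij, d sub ij taken at a rectangle's midpoint; the density x squared plus y squared is borrowed from the homework example discussed later in the lecture.

```python
def double_riemann_sum(f, n):
    """Sum f(c_ij, d_ij) * delta-a over an n-by-n partition of the unit square,
    sampling each rectangle at its midpoint."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            c = (i + 0.5) * h  # midpoint in x
            d = (j + 0.5) * h  # midpoint in y
            total += f(c, d) * h * h  # f(c_ij, d_ij) * delta-a_ij
    return total

approx = double_riemann_sum(lambda x, y: x**2 + y**2, 200)
print(approx)  # approaches 2/3 as n grows
```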
Now on the other hand, let's stop right here for a moment. Let's see how one might have been tempted to tackle this problem from a completely intuitive point of view-- the so-called engineering approach-- if you had never heard of the anti-derivative for several variables, but had taken part one of our course.
The engineering approach would be something like this-- you would say, let's pick a very small infinitesimal region here, in which we can assume that the density is constant. So we'll pick a little region that's a very, very small rectangle of dimension what? We'll say it's dy by dx. And the density of this particular rectangle can be assumed to be the constant value f of xy.
Because that density is constant, the mass of this little piece should be f of xy times dydx. And then we say what? Let's add these all up. Going down to here, we say look, this is a typical element, and we'll add them all up for all possible y's and all possible x's. Now how would you do this intuitively? You'd say, well look, let me hold x constant.
Well, for a constant value of x-- let's call it x sub 1. For that constant value of x, notice that to be in the region R, y can only vary so far. For that fixed value of x, y can be any place from here to here. In other words, y varies continuously from g1 of x1 to g2 of x1.
Because x1 could have been any x-- for a given x, y varies continuously from g1 of x to g2 of x. And notice that x could have been chosen, if we want to be in the region R, to be any place from a to b. And so, mechanically, we might say: here is the mass of a small element, and we'll add these all up so that for a fixed x, y goes from g1 of x to g2 of x, and x can be anywhere from a to b.
And it appears that this is a truism. This is not a truism. It's a rather remarkable result. It is analogous to what happened in calculus of a single variable. Notice that when we had the definite integral from a to b of f of x dx, that was an infinite sum. It was the summation of f of c sub k times delta x sub k, as k went from 1 to n, taking the limit as the maximum delta x sub k approached 0.
It turned out that if we knew a function capital f whose derivative was little f, then this sum could be evaluated by computing capital f of b minus capital f of a. Again the idea being if you knew a function whose derivative was f, that gave you an easy way to evaluate the sum. On the other hand, if you didn't know a function whose derivative was little f, evaluating the sum as a limit gave you a way of being able to find the anti-derivative.
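In code, the single-variable statement reads: a Riemann sum on one side, capital F of b minus capital F of a on the other. Here is a small sketch; the choice of little f of x equal to 3 x squared on the interval from 0 to 2 (so that capital F of x is x cubed) is purely illustrative.

```python
# Riemann sum of f(x) = 3x^2 over [0, 2], sampling at left endpoints c_k = k*h.
n = 100_000
h = 2.0 / n
riemann = sum(3 * (k * h) ** 2 * h for k in range(n))

# Fundamental theorem shortcut: F(b) - F(a) with F(x) = x^3, since F' = f.
ftc = 2.0**3 - 0.0**3

print(riemann, ftc)  # the sum approaches 8 = F(2) - F(0)
```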
A particular case in point, you may recall, was trying to handle the integral of e to the minus x squared dx as x goes from 0 to 1. This certainly existed as an area, but we did not know-- in fact, there is no-- elementary function whose derivative is e to the minus x squared. We invented one, called the error function. So the idea is-- and this is the key point-- this is what becomes known as the fundamental theorem for several variables.
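That case can be illustrated with Python's standard library, which happens to include the error function as math.erf; the numbers below are a sketch, not part of the lecture. By the usual normalization, the integral of e to the minus x squared from 0 to t equals the square root of pi over 2, times erf of t.

```python
import math

# Approximate the area under e^{-x^2} from 0 to 1 with a midpoint rule.
n = 1000
h = 1.0 / n
area = sum(math.exp(-((i + 0.5) * h) ** 2) for i in range(n)) * h

# The "invented" function: erf is normalized so that
# integral from 0 to t of e^{-x^2} dx = (sqrt(pi) / 2) * erf(t).
via_erf = math.sqrt(math.pi) / 2 * math.erf(1.0)

print(area, via_erf)  # both near 0.7468
```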
That there is a connection between a double infinite sum and a double anti-derivative. This particular expression here does not involve knowing anything about anti-derivatives. This particular expression here does not involve any knowledge of double sums. This simply says what? Integrate this thing twice. Once holding x constant and letting y go from g1 of x to g2 of x, and then integrating with respect to x.
And the amazing result is that if f is continuous, this limit exists, and in particular, if we happen to know how to actually carry out these repeated or iterated integrations, we can compute this complicated sum simply by carrying out this anti-derivative. And I think the best way to emphasize that to you is to repeat the punchline to the homework exercises of last assignment.
You may recall that last time we were dealing with the square whose vertices were 0 comma 0, 1 comma 0, 1 comma 1, and 0 comma 1. The density of the square at the point x comma y was x squared plus y squared. And I asked you, as a homework problem, to compute the mass of this plate R-- to compute it exactly. And we said OK, by definition-- remember, this thing here with the R under it indicates a limit of a double sum-- this mass was by definition this particular result. And notice that last time we showed, in terms of double sums, that this came out to be 2/3.
Let me show you, in terms of the new theory, how we can find this much more conveniently-- with no sweat, so to speak. What I'm going to do now is the following-- the same analogy I used before. I look at my region here, which I'll call R, and I say what? For a fixed value of x, notice that y varies continuously from 0 to 1. And then x in turn could have been chosen to be any place from 0 to 1.
That as we run through all these strips, added up from 0 to 1, that covers our region R. So what we're saying is, all right, a little element of our mass will be x squared plus y squared dydx. We'll add these all up from 0 to 1, holding x constant. Then add them all up again as x goes from 0 to 1. Again, this is the iterated integral. How do we evaluate this? Well, we treat x as a constant and integrate this with respect to y. If we're treating x as a constant, this then will come out to be x squared y plus 1/3 y cubed, and we now evaluate that as y goes from 0 to 1.
Leaving these details out because they are essentially the calculus of a single variable all over again, this turns out to be the integral from 0 to 1 of x squared plus 1/3, dx, which in turn is 1/3 x cubed plus 1/3 x, evaluated as x goes from 0 to 1. The upper limit gives me a third plus a third, the lower limit's 0. The mass here is 2/3, which checks with the result that we got last time.
Again, I'm not going to go through the proof of the fundamental theorem; I don't think it's that crucial. It's available in textbooks on advanced calculus, and some of the exercises may possibly give you hints as to how these results come about. But by and large, I'm more interested now in you seeing the overview, and at this stage of the game letting you get whatever specific theoretical details you desire on your own.
At any rate, let's continue on with examples. I think a very nice counterpart to example one-- and to refresh your memories without looking back at it, example one asked us to compute this anti-derivative. What I'd like to do now is to emphasize the fundamental theorem by wording this a different way. What I want to do now is to describe the plate R if its mass is given by the double integral over the region R of rho of x, y da, and that turns out to be the integral from 0 to 2, 0 to x squared, x cubed y, dydx.
What is the region R if this is how its mass is given? The first thing I want you to observe is that this part here is identified with the density part of the problem, and that these limits of integration determine the region R. For example, what this says is, if you hold x constant, y varies from 0 to x squared. Let's see what that means. If you hold x constant, y varies from 0-- well, y equals 0 is the x-axis-- to y equals x squared-- that's this particular parabola. What this says is, for a fixed value of x, say x equals x1, to be in the region R you can be any place from the x-axis along this strip up to y equals x1 squared. So your strip is like this.
Then we're told that in turn x could be chosen to be any place from 0 to 2. So now what we know is that this strip could have been chosen for any value of x between 0 and 2. And what that tells us, therefore, is that our region R is the region that's bounded above by the curve y equals x squared, below by the x-axis, and on the right by the line x equals 2.
This is our region R. You see, the region R is determined by the limits of integration, and the density of R at the point x comma y is given by rho of x, y equals x cubed y. OK?
We mentioned before the question: why do you have to write dydx? Couldn't you have written dxdy? What I thought I'd like to do for my next example is to show you how one inverts-- or changes-- the order of integration. For example, given the same integral-- the same region R-- suppose we now want to express the mass in the form double integral x cubed y dxdy. You see, the region R is still the same as it was before. But now, what do we want to do-- how would we read this?
This says we're integrating with respect to x; x is varying. So this says, for a fixed value of y, integrate this thing, evaluated between the appropriate limits of x-- limits expressed in terms of y. And then integrate this with respect to y. Well, what this says is, let's pick a fixed value of y. Let's say y equals y1.
For that fixed value of y, notice that the curve y equals x squared-- for x positive-- in inverted form is x equals the square root of y. So for the fixed value y1, notice that x varies from the square root of y1 up to x equals 2. That is, x goes from the square root of y to 2, for a fixed value of y.
Now where can y be? These are our limits here. To be in the region R, y could have been chosen as low as this, or as high as this. In other words, y varies continuously from 0 to 4. And again, because I don't want to have our lecture obscured by computational detail, I simply urge you to-- on your own-- compute this double integral, compute the double integral that we obtained in example number three, and show that the answers are the same.
What I do want you to observe is how different the limits look. Notice that in the other integral it was from 0 to 2 outside, now it's from 0 to 4. Notice also on the inside integral it was from 0 to x squared. Now it's from the square root of y to 2. There is no mechanical way of doing this, at least in the two dimensional case, we can see from the picture what's happening.
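Numerically, one can confirm that the two orders agree for this region and density; both should match example one's value of 16. The sketch below (again with an illustrative midpoint-rule helper of my own) evaluates both iterated integrals: dydx with y from 0 to x squared and x from 0 to 2, and dxdy with x from the square root of y to 2 and y from 0 to 4.

```python
import math

def midpoint(f, a, b, n=800):
    """Approximate the integral of f over [a, b] with the midpoint rule."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def density(x, y):
    return x**3 * y

# dydx order: fix x, let y run from 0 to x^2; then x runs from 0 to 2.
mass_dydx = midpoint(lambda x: midpoint(lambda y: density(x, y), 0.0, x**2), 0.0, 2.0)

# dxdy order: fix y, let x run from sqrt(y) to 2; then y runs from 0 to 4.
mass_dxdy = midpoint(lambda y: midpoint(lambda x: density(x, y), math.sqrt(y), 2.0), 0.0, 4.0)

print(mass_dydx, mass_dxdy)  # both close to 16
```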
In the multi-dimensional case, we have to resort to the theory of inverse functions, and this becomes at best a very messy procedure. Of course the answer is if it's such a messy procedure why bother with it?
And so in terms of another example that I would like to show you, let me give you an illustration in which it may be possible to find the required answer if we integrated one order, but not if we integrate with respect to the other order.
Let me evaluate the double integral 0 to 2, x to 2, e to the y squared, dydx. Let me see if I can at least write down what this thing means geometrically before I even begin. Notice that I can think of this as a plate R, whose density at the point x comma y is e to the y squared, and what is the shape of this plate?
Integrate with respect to y first. For a fixed value of x, y goes from y equals x-- well, that's this line here-- to y equals 2. That's this line here. So for a fixed x, y varies continuously from here to here. x can be any place from 0 to 2-- 2 comma 2 happens to be the point where these two lines intersect.
So the region R is this triangular region here. OK, this is our region R. And what we're saying is, find the mass of the region R if its density at the point x comma y is e to the y squared. Now, notice that this mass exists even if I've never heard of the anti-derivative.
If however I elect to use the fundamental theorem, and I say OK, what I'll do is I'll compute this. Notice I'm back to an old bugaboo. I don't know how to integrate e to the y squared dy, other than by an approximation. In terms of elementary functions, there is no function whose derivative with respect to y is e to the y squared.
So what I do is I elect to change the order of integration. Why is that? Well because if I change the order of integration sure I'm integrating e to the y squared, but now with respect to x, which means that e to the y squared is a constant. This is trivial to integrate. It's just going to be x times e to the y squared.
The problem is what? If I'm going to change the order here, I must make sure that I take care of the limits of integration accordingly. Now what this says is what? I'm going to integrate with respect to x first. x is going to vary, so I'm treating y as a constant. For that constant value of y, notice that to be in the region R, x varies from what? x varies from 0 to the line y equals x-- or, equivalently, x equals y.
So for that fixed value of y, x varies from x equals 0 to x equal y. And correspondingly, y can be chosen to be any fixed number between 0 and 2, and that gives me this particular double integral. And by the way, from this point on the rest is child's play.
Because when I integrate this, you see, the result is x e to the y squared.
When x is equal to y, this becomes ye to the y squared. When x is 0 the lower limit drops out. So what I want to integrate now is 0 to 2 ye to the y squared dy. But this is beautiful for me, because the factor of y in here is exactly what I need to be able to handle this. In other words, this is nothing more than 1/2 e to the y squared evaluated between 0 and 2.
Replacing y by 2 gives me 1/2 e to the fourth. Replacing y by 0-- remember, e to the 0 is 1-- gives me minus 1/2, because I'm subtracting the lower limit. The answer to evaluating this integral, which was impossible to do in this form because there is no elementary function whose derivative is e to the y squared, turns out quite nicely to be given by e to the fourth minus 1, over 2.
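The order-swap can also be checked on a computer, since a machine can approximate the "impossible" order directly even without an elementary anti-derivative. A sketch, again with an illustrative midpoint-rule helper:

```python
import math

def midpoint(f, a, b, n=600):
    """Approximate the integral of f over [a, b] with the midpoint rule."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# The original dydx order: for fixed x, y runs from x to 2; then x from 0 to 2.
# e^{y^2} has no elementary anti-derivative, but numerically this is fine.
hard_order = midpoint(lambda x: midpoint(lambda y: math.exp(y**2), x, 2.0), 0.0, 2.0)

# The reversed order collapses to the single integral of y * e^{y^2} from 0 to 2,
# whose anti-derivative is (1/2) e^{y^2}, giving exactly (e^4 - 1) / 2.
exact = (math.exp(4.0) - 1.0) / 2.0

print(hard_order, exact)  # both near 26.8
```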
To summarize today's lecture, and to show you what the fundamental theorem really means: all we're saying is that we often compute the infinite double sum-- the double integral over R of f of x, y da-- by an appropriate iterated integral. And that's what we mean by the iterated integral-- integrating the anti-derivative successively.
And conversely, the gist of this whole thing is that we now have two entirely different topics that are related by a fantastic unifying thread that allows us to solve one of the problems in terms of the other, and vice versa.
You see, again, the analogy is complete with what happened in the calculus of a single variable. We can evaluate double sums, which was the aim of the previous lecture, by means of an iterated integral; and we can sometimes evaluate iterated integrals by knowing how to find the limit of an appropriate double sum. And again, we shall make use of this in the exercises. We will go into this in more detail, from other aspects and other points of view, next time. At any rate, until next time, goodbye.
Funding for the publication of this video was provided by the Gabriella and Paul Rosenbaum Foundation. Help OCW continue to provide free and open access to MIT courses by making a donation at ocw.mit.edu/donate.