Topics covered: The concept of partial fractions; finding f(x) when f'(x) is the quotient of two polynomials; some notes about identities; application of partial fractions to the case where f is of the form f(sinx, cos x).
Instructor/speaker: Prof. Herbert Gross
FEMALE VOICE: The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: Hi. Today, we're going to learn a rather powerful technique, called the technique of partial fractions, that's particularly applicable for one special type of integrand. In particular, it's going to be applicable to the situation where the integrand is the quotient of two polynomials. In other words, suppose we have integral ''P of x' dx' over 'Q of x', where 'P' and 'Q' are both polynomials in 'x'.
And, by the way, for reasons that will become apparent later, we'll assume that the degree of 'P' is less than the degree of 'Q'. In other words, that the highest exponent in the numerator is less than the highest exponent in the denominator. By the way, if that's not the case, we can always carry out long division first, carrying out enough terms until we get to a remainder for which this is the case. You see, I'll make that clearer as we go along. But, for the time being, all we really care about is outlining this form-- and, again, as we will often do with these types of techniques, the place where we'll pick up the real fine computational points is in the exercises. We'll use the lectures to just outline what the technique is and how it's used.
But suppose, now, that we have the quotient of two polynomials and we want to integrate it. Now, the idea is this, there are two key steps that dictate this particular method. The first thing is, we saw it in the last lecture, that we can handle denominators if they involve nothing worse than linear and quadratic polynomials. In other words, we know how to integrate something like 'dx' over 'x plus a constant'. We know now how to integrate 'dx' / ''ax squared' plus 'bx' plus 'c'', so we know how to handle that type, OK?
And the second important thing-- and I'll amplify this, too, as we go along-- the second important thing is that, theoretically, every real polynomial can be factored into linear and quadratic terms. Now, this is a little bit misleading, if you try to read more into this than what it really says. It doesn't mean that we can always find a factorization quite simply. In other words, we may, at best, be able to approximate a root. We'll have to use things like we've talked about in the notes so far: Newton's method, and things like this, linear approximations, tangential approximations. We may have to approximate the roots.
The point is that, theoretically, if a polynomial with real coefficients has degree greater than 2, it has at least one real factor of lower degree. In other words, we can keep factoring things out this way. The only thing we can't do is guarantee that, once we get down to a quadratic, we can get real roots out of that particular thing. In fact, the classic example is to visualize 'x squared plus 1'. In other words, we can't factor 'x squared plus 1' unless we introduce non-real numbers.
Remember the technique: you can write 'x squared plus 1' as 'x squared' minus 'i squared', where 'i' is the square root of minus 1. And this factors into 'x plus i' times 'x minus i', et cetera, but these are non-real numbers. In other words, you can't always factor a quadratic to get real factors. In fact, if you recall the quadratic formula, it involves the square root of ''b squared' minus '4ac''. If 'b squared' is less than '4ac', what's inside the square root sign is negative, and that leads to non-real roots. So, you see, we can't always factor quadratics into real factors.
By the way, don't identify things that were difficult to factor with things that can't be factored. You know, a lot of times, we think of, say-- here's an example I thought you might enjoy seeing here-- take, for example, ''x' to the fourth plus 1'. That looks something like it belongs to the family 'x squared plus 1'. And 'x squared plus 1' can't be factored. You might think that ''x' to the fourth plus 1' can't be factored.
Now, again, it's not important that you understand what made me think of these tricks or what have you. What I do want to show is that even a polynomial that looks like it can't be factored often really can be. For example, with ''x' to the fourth plus 1', we can write this in the disguised form ''x squared plus 1' squared' minus '2 x squared'. See, ''x squared plus 1' squared' is ''x' to the fourth' plus '2 x squared' plus 1, so there's a middle term of '2 x squared' that doesn't belong, and I subtract it off. Now, this has the form of the difference of two squares, namely, ''x squared plus 1' squared' minus the square of 'the square root of 2 times x'.
In other words, I can write this as ''x squared plus 1' plus 'the square root of 2 times x'' times ''x squared plus 1' minus 'the square root of 2 times x''. Observe that, even though the square root of 2 is an irrational number, it is, nonetheless, a real number. The important point that I want to make, though, as far as setting up this technique called partial fractions, is that, whether it's easy or not easy, the fact remains that, when we have a polynomial in our denominator, it can always be factored into a combination of linear and quadratic factors using real numbers.
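As a quick sanity check (not part of the lecture; the function names here are just for illustration), the claimed factorization of 'x to the fourth plus 1' can be compared numerically against the original polynomial:

```python
import math

def lhs(x):
    # x^4 + 1, the polynomial that "looks" unfactorable
    return x**4 + 1

def rhs(x):
    # the real factorization: (x^2 + sqrt(2)x + 1)(x^2 - sqrt(2)x + 1)
    r2 = math.sqrt(2)
    return (x*x + r2*x + 1) * (x*x - r2*x + 1)

# spot-check at several points
for x in [-3.0, -1.0, 0.0, 0.5, 2.0]:
    assert abs(lhs(x) - rhs(x)) < 1e-9
```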
Well, it's difficult to factor some of these things, so, by way of illustration, let me pick out one that comes factored. Let me start with a particular problem here. Let's take the integral 'dx/ ''x minus 1' times 'x squared plus 1''. So, what is the integrand in this case? It's 1/ ''x minus 1' times 'x squared plus 1''.
Now, the idea is this. See, here's a quadratic factor, here's a linear factor. What fractions do I have to add to wind up with something like this? Well, again-- and this is going to be another example of our old adage that it's easier to scramble an egg than to unscramble one. You see, given two fractions, it's one thing to find their sum. Given the sum, it's quite another thing, computationally, to find what fractions you had to add to get that sum.
The idea is this. If you wind up with a denominator that has an 'x minus 1' term and an 'x squared plus 1' term, it appears that you must have started with terms, say, what? In other words, you must have had one term which had a denominator of 'x minus 1' and a term which had a denominator of 'x squared plus 1'. Because, you see, if I start with these kind of denominators when I cross-multiply and put things over a common denominator, I will wind up with 'x minus 1' times 'x squared plus 1'.
The question that comes up is: what shall our numerators be? And here's the main reason why we required the degree of the numerator to be less than the degree of the denominator. Notice, for example, in this particular case, the numerator has degree 0, namely, the highest power of 'x' that appears is the 'x to the 0' term. On the other hand, the denominator has degree 3. See, there's an 'x cubed' term in the denominator. You see, if there had been an 'x' to the fourth in the numerator, I could have multiplied out the denominator, divided it into the numerator, and just kept carrying out the division long enough until I wound up with a remainder whose degree was less than 3. In other words, less than a third degree polynomial remainder.
That's not the point. The point is that, as long as the degree of the numerator is less than the degree of the denominator, the terms that we're adding must each have the degree of the numerator less than the degree of the denominator. See, it's like adding fractions. If you start with one fraction whose numerator is greater than its denominator then, certainly, any sum that you get is going to be bigger than 1. In other words, if you want to wind up, dealing with positive numbers, with a fraction which is less than 1, it stands to reason that all of the fractions that you're adding must be less than 1.
So, if I'm going to wind up adding quotients of polynomials to get a sum in which the degree of the numerator is less than the degree of the denominator, it means that every one of the terms in my sum must have this particular property. In other words, with this as a hint, I say, look, my denominator here is 'x minus 1', that's degree 1. That means my numerator can't be greater than degree 0. But degree 0 means a constant. So I say, OK, that means that this term has the form: some constant over 'x minus 1'.
Now, I look at this denominator. It's quadratic. And I say to myself: I'm starting out with a quadratic, and the degree of the numerator must be less than the degree of the denominator. Since the degree of the denominator is 2, that means the degree of the numerator can't be more than 1. And the most general first degree polynomial has the form, what? Some constant times 'x' plus a constant.
So what I'm saying is, OK, to wind up with something of the form-- well, to wind up with 1/ ''x minus 1' times 'x squared plus 1''-- I had better start with something of the form 'A/ 'x minus 1'' plus ''Bx plus C' over 'x squared plus 1'', where 'A', 'B', and 'C' are constants. The key point is that if we weren't sure that the degree of the numerator were less than the degree of the denominator, we would not know where to stop in our numerators. In other words, by this convention, we're sure that the degree of the numerator in any one of these terms must be less than the degree of the corresponding denominator, you see?
And by the way, notice, for example, if it turns out that we put too much in-- for example, suppose it turns out that this numerator here should only be a constant-- there's no law against having 'B' turn out to be 0. By the way, again, what these things are called-- just to increase our vocabulary-- what I'm going to do now is called the method of undetermined coefficients. You see, what I know is that I must have the form 'A over 'x minus 1' plus ''Bx plus C' over 'x squared plus 1''. What I don't know is, specifically, how to choose the values of 'A', 'B', and 'C'. And the technique works something like this. What we do is we put this over a common denominator. What will the common denominator be? It'll be 'x minus 1' times 'x squared plus 1'. How will I put this over a common denominator? It'll be 'A' times 'x squared plus 1', plus 'Bx plus C' times 'x minus 1'.
Now, this is supposed to be an identity. Now, if two fractions are identical and the denominators are the same, which is what they will be after I put this over a common denominator, the only way they can be identical is for the numerators to be identical. So, you see, what I'm going to do is to cross-multiply here to obtain the numerator of the right-hand side. And I will equate that to the numerator on the left-hand side, which is 1.
All right. Now, what is the numerator on the right-hand side? It's 'A' times 'x squared plus 1', that's ''A x squared' plus 'A''. Then it's going to be 'x minus 1' times 'Bx plus C'. That's going to be 'B x squared' plus ''C minus B' times x' minus 'C'. And now the idea is something like this, and I'll come back to this in a few moments to hammer it home from a different point of view.
What is the coefficient of 'x squared' on the right-hand side of the equation? The coefficient of 'x squared' on the right-hand side of the equation is 'A plus B'. What is the coefficient of 'x squared' on the left-hand side of the equation? And at first glance you say there is no 'x squared' on the left-hand side of the equation. What that means, of course, is that the coefficient of 'x squared' on the left-hand side of the equation is 0. So what we say is OK, the coefficients of 'x squared' must match up, therefore, 'A plus B', which is a coefficient of 'x squared' on the right-hand side, must equal 0, which is the coefficient of 'x squared' on the left-hand side.
In a similar way, the coefficient of 'x' is 'C minus B' on the right-hand side. The coefficient of 'x' on the left-hand side is 0. Consequently, 'C minus B' must be 0. And, finally, the constant term on the right-hand side is given by 'A minus C'. On the left-hand side, the constant term is 1. Consequently, 'A minus C' must equal 1.
What do I wind up with? Three equations in three unknowns. Well, there are a number of ways of handling these things. The easiest one I see, off-hand, is to notice that if I add the first two equations, I wind up with 'A plus C' is 0. Knowing that 'A plus C' is 0 and, also, that 'A minus C' is 1, it's easy for me to conclude that 'A' must be 1/2 and 'C' must be minus 1/2.
And, by the way, now knowing what 'A' and 'C' are, I can use either of these two equations to determine 'B', and 'B' turns out to be minus 1/2. In other words, what does this tell me? It tells me that if I replace 'A' here by 1/2, and 'B' and 'C' each by minus 1/2, the right-hand side here will be an identity for the left-hand side, that there'll be two different ways of naming the same number for each value of 'x'.
In fact, doing this now out in more detail, here's what we've really shown: if I replace 'A' by 1/2 and 'B' and 'C' each by minus 1/2, and then factor the 1/2 out, what I've shown is that 1 over the quantity ''x minus 1' times 'x squared plus 1'' is equal to 1/2 times the quantity ''1 over 'x minus 1'' minus ''x plus 1' over 'x squared plus 1'''. By the way, I can separate this into two terms, each of which has a denominator of 'x squared plus 1', and I get, what? 1/2 times '1/ 'x minus 1'' minus 1/2 times ''x'/ 'x squared plus 1'' minus 1/2 times '1/ 'x squared plus 1''.
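The undetermined coefficients and the resulting identity can both be checked at a few sample points. A minimal sketch (not from the lecture; the function names are mine):

```python
# Undetermined coefficients for 1/((x-1)(x^2+1)) = A/(x-1) + (Bx+C)/(x^2+1)
A, B, C = 0.5, -0.5, -0.5

# the three equations from matching coefficients of x^2, x, and the constant term
assert A + B == 0
assert C - B == 0
assert A - C == 1

def integrand(x):
    return 1.0 / ((x - 1) * (x*x + 1))

def decomposition(x):
    # (1/2)/(x-1) - (1/2) x/(x^2+1) - (1/2)/(x^2+1)
    return 0.5/(x - 1) - 0.5*x/(x*x + 1) - 0.5/(x*x + 1)

# the identity holds at arbitrary sample points (avoiding x = 1)
for x in [-2.0, 0.0, 0.5, 3.0]:
    assert abs(integrand(x) - decomposition(x)) < 1e-9
```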
Now, the key is this. I didn't really want this, what I wanted was, what? This was to be my integrand. What I wanted was to integrate this with respect to 'x'. Well, if given this identity over here, the integral of 'dx'/ ''x minus 1' times 'x squared plus 1'', recalling that the integral of a sum is a sum of the integrals, et cetera, can now be written as, what? It's 1/2 integral 'dx'/ 'x minus 1' minus 1/2 integral 'x'/ 'x squared plus 1' times 'dx' minus 1/2 integral 1/ 'x squared plus 1' times 'dx'.
Now, here's the point. Notice that every one of my denominators, now, is either linear or quadratic. In fact, without going through the details here again, if I let 'u' equal 'x minus 1' in this first example, this reduces it to the form 'du/u'. If I let 'u' equal 'x squared plus 1' over here, notice that 'du' is '2x dx'. So my numerator becomes a constant multiple of 'du' and, again, I have a 'du/u' form.
And finally, if I look at my last integral here, notice that this is the sum of two squares, which suggests the circular trigonometric functions that we were talking about last time. Or, if you wish, you can go to tables and look these things up. They're all in there. But, again, without going through the details because this is the easy part, again, it turns out that once you have this relationship here we can integrate this to obtain log absolute value 'x minus 1'. The integral here is log, natural log, absolute value of 'x squared plus 1'.
I can leave the absolute value signs out because 'x squared plus 1' has to be, at least, as big as 1. It can't be negative. And finally, either by trigonometric substitution, or by memorization, or what have you, integral of 'dx'/ 'x squared plus 1' is just the inverse tangent of 'x'. In other words, by using partial fractions and reducing a complicated polynomial denominator into a sum of linear and quadratic terms, I was able, by knowing my techniques of last time, to integrate the given expression.
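Assembling the three pieces, with the constants of integration omitted, the antiderivative is '1/2 log absolute value 'x minus 1'' minus '1/4 log of 'x squared plus 1'' minus '1/2 inverse tangent x'. As the professor suggests, the way to check an antiderivative is to differentiate it; here is a quick numerical version of that check (a sketch, not part of the lecture):

```python
import math

def F(x):
    # candidate antiderivative (constant of integration omitted):
    # (1/2) ln|x-1| - (1/4) ln(x^2+1) - (1/2) arctan x
    return 0.5*math.log(abs(x - 1)) - 0.25*math.log(x*x + 1) - 0.5*math.atan(x)

def integrand(x):
    return 1.0 / ((x - 1) * (x*x + 1))

def numerical_derivative(f, x, h=1e-6):
    # central-difference approximation to f'(x)
    return (f(x + h) - f(x - h)) / (2*h)

# F'(x) should reproduce the integrand wherever x != 1
for x in [-2.0, 0.5, 3.0]:
    assert abs(numerical_derivative(F, x) - integrand(x)) < 1e-6
```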
And, by the way, I would be a little remiss at this stage of the game if I did not take the time to, once again, reinforce a very important concept, and that is: it may be difficult, starting with this, to get this. What should not be difficult is, starting with the answer, to differentiate it and show that you wind up with 1/ ''x minus 1' times 'x squared plus 1''.
In other words, as usual, with the inverse derivative, once you find an answer and you want to see whether your answer is correct or not, all you have to do is differentiate the answer and see if you get the integrand. But, at any rate, this is how the technique called partial fractions works. It works for the quotient of two polynomials. And to make the undetermined coefficients technique work right, you must assume that the degree of the numerator is less than the degree of the denominator. And, obviously, in the exercises, I'll give you some where the degree of the numerator is greater than the degree of the denominator. And, if you don't perform long division first, you're going to get into trouble trying to find the answer to the problem. But all I want to emphasize here is the technique.
And, by the way, what I want to do before I go any further, also, is to emphasize a rather special property of polynomial identities. You recall that undetermined coefficients hinged on the following. And I'll pick a quadratic to illustrate it with. Suppose you have two quadratic expressions in 'x' identically equal, in other words, 'a sub-2 'x squared'' plus 'a sub-1 x' plus 'a sub-0' was identically equal to 'b sub-2 'x squared'' plus 'b sub-1 x' plus 'b0'.
Notice the technique that we used was this: we said, look, let's compare the coefficients of 'x squared'. Let's equate the coefficients of 'x', and let's equate the constant terms. How do we know that you're allowed to do this? Well, let's see if we can show that this must be the case. For example, let's do this without any calculus at all. Suppose this is an identity. If this is an identity, it must be true for all values of 'x'. In particular, it must be true when 'x' is 0.
Notice that, when 'x' is 0, the left-hand side is 'a sub-0', the right-hand side is 'b sub-0', and we wind up with 'a sub-0' equals 'b sub-0'. See, the constant terms are equal. Well, if 'a sub-0' equals 'b sub-0', we can cancel 'a sub-0' and 'b sub-0' from this equation. That leaves us with this equaling this. From this, we can factor out an 'x', and get 'x' times ''a sub-2 x' plus 'a sub-1'' is identical with 'x' times ''b sub-2 x' plus 'b sub-1''.
If 'x' is not 0, we can cancel 'x' from both sides of the equation, and that shows, what? That, for any non-zero value of 'x', ''a sub-2 x' plus 'a sub-1'' must be identically equal to ''b sub-2 x' plus 'b sub-1''. Once we have this established, let's let 'x' equal 0 in here, and we see, what? With 'x' equal to 0, 'a sub-1' equals 'b sub-1'. In other words, the coefficient of 'x' on the left-hand side equals the coefficient of 'x' on the right-hand side. You can keep on in this way, but a very nice technique to use here is a reinforcement of something that we talked about before. I think it was when we were doing implicit differentiation, and we talked about identities versus equations.
You see, if this is an identity, and what I mean by an identity, that this-- we're not saying find what values of 'x' this is true for. By an identity, we're saying look at, these two expressions are the same for all values of 'x'. They're synonyms. And what we're saying is, if these two things are synonyms, their derivatives must be synonyms. And all we're saying is look at, if you want to use calculus here, differentiate both sides of this expression. And you wind-up with '2a sub-2 x' plus 'a1' is identically equal to '2b sub-2 x' plus 'b1'.
Since these two things are identical, let's equate their derivatives again. See, the derivatives of identities are identical, and we wind up with '2 a sub-2' is identically equal to '2 b sub-2' and, therefore, 'a sub-2' must equal 'b sub-2'. Knowing that 'a sub-2' equals 'b sub-2', we can come back to the previous step and show that 'a sub-1' equals 'b sub-1'. And now, knowing that 'a sub-2' equals 'b sub-2', and 'a sub-1' equals 'b sub-1', we can come back to the original equation and show that 'a sub-0' must equal 'b sub-0'.
Now, you may wonder why are we making all of this ado over what appears to be a very obvious thing? And I would like to give you a caution here. I'd like you to beware of something. It's something which works very nicely for polynomials, but doesn't always have to work all the time. In fact, later, when one studies differential equations, this becomes a very important concept, which later gets the name linear dependence and linear independence.
We're not going to go into that now, but the key idea is this. In general, knowing that 'a1 u1' plus 'a2 u2' is equal to 'b1 u1' plus 'b2 u2', you cannot say ah, therefore, the coefficients of 'u1' must be equal, and the coefficients of 'u2' must be equal. Don't get me wrong, if 'a1' equals 'b1', and 'a2' equals 'b2', then, certainly, 'a1 u1' plus 'a2 u2' is equal to 'b1 u1' plus 'b2 u2'. I'm not saying that. What I'm saying is, conversely, if you start, knowing that this is true, it does not follow that they must match up coefficient by coefficient, OK?
And you say, well, why doesn't it have to follow? And I think the best way to see that is by means of an example. For example, in this general expression, let 'u1' equal 'x', and let 'u2' be 'x/2'. Look at the expression '5x' plus '6 times x/2'. That's, what? It's '5x' plus '3x', which is '8x'. Look at '3x' plus '10 times x/2'. That's '3x' plus '5x', which is also '8x'. In other words, 5 times 'x' plus 6 times 'x/2' is identically equal to 3 times 'x' plus 10 times 'x/2'.
Yet, you can't say, therefore, the coefficients of 'x' must be equal, and the coefficients of 'x/2' must be equal. In fact, if you said that, you'd be saying that 5 is equal to 3 and 6 is equal to 10, which, of course, is not true, OK? So, at any rate, I just wanted to show you here the kind of mathematical rigor and cautions that have to be taken if one is going to use a technique called partial fractions, what the key ingredients are.
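The professor's counterexample can be stated as a tiny program (illustrative only, not part of the lecture): because 'u2' is a constant multiple of 'u1', different coefficient pairs produce the very same function.

```python
def combo(a1, a2, x):
    # a1*u1 + a2*u2 with u1 = x and u2 = x/2 (linearly dependent choices)
    u1, u2 = x, x / 2
    return a1*u1 + a2*u2

# 5x + 6(x/2) and 3x + 10(x/2) agree for every x (both equal 8x),
# even though the coefficient pairs (5, 6) and (3, 10) differ
for x in [-4.0, 0.0, 1.0, 7.5]:
    assert combo(5, 6, x) == combo(3, 10, x) == 8 * x
```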
Now, it turns out that not only are partial fractions important in their own right, it also turns out that partial fractions handle a rather difficult type of integral: one whose integrand is built up from polynomials in 'sine x' and 'cosine x'. And the reason I wanted to mention this was not so much because the technique is nice. The technique, by the way, is in the text. But there's something very interesting in the text, the way the author introduces the topic. And I thought that that was worth an aside in its own right.
In the section where this thing appears, it says, "It has been discovered that." The author makes no attempt to show logically why one would expect that a certain thing is going to work. All the author says is, "It has been discovered that." And this tells a long story that, in many cases, we wind up with an integrand that we don't know how to handle. We make all sorts of substitutions in the hope that we can reduce the given integrand to a form that we know how to handle. Sometimes we're successful, sometimes we're not.
In the cases where we're not successful, somebody, either by clever intuition or what have you, maybe it's just blind luck, stumbles across a technique that happens to work. And I, sort of, liked this particular example in the text where the author says, "It has been discovered that," because, to me, it's not at all self-evident, and yet, it's a rather pretty result.
The result says this: suppose you make the substitution 'z' equals tangent 'x/2'. Now, where do you pull this out of the hat from? 'z' equals tangent 'x/2', that's the ingenuity, the experience, the luck. But the idea is, let's suppose that we stumbled across this one way or the other. If we translate this equation into a reference triangle, we have, what? We'll call the angle 'x/2'. And the tangent is 'z', so we'll make the side opposite 'z', the side-adjacent 1. That makes the hypotenuse the square root of '1 plus z squared'.
Now, watch what happens when you do this. See, 'z' is equal to 'tangent 'x/2'', therefore, 'dz' is the differential of 'tangent 'x/2''. Remember, the derivative of 'tangent x', with respect to 'x', is 'secant squared x'. But we also, by the chain rule, have to multiply by the derivative of 'x/2' with respect to 'x'. In other words, if 'z' is 'tangent 'x/2'', 'dz' is not ''secant squared 'x/2'' dx', it's 1/2 ''secant squared 'x/2'' dx'.
But what's 'secant squared 'x/2''? Let's go back and look at our diagram. The secant of 'x/2' is the hypotenuse over side-adjacent. That's the square root of '1 plus 'z squared'' over 1 and, therefore, the square of the secant is just '1 plus 'z squared'' over 1, see, '1 plus 'z squared''. So, with the 1/2 in here, this becomes '1 plus 'z squared'' over 2 times 'dx'. And, consequently, if I compare these two now, notice that 'dx' is just twice 'dz'/ '1 plus 'z squared''. In other words, what's happened to 'dx'? It's been replaced by a differential in 'z', which involves the quotient of two polynomials. So, 'dx' comes out very nicely this way, in terms of, what? The quotient of two polynomials.
How about 'sine x' and 'cosine x'? And, again, notice the dependency on identities. 'Sine x' is twice 'sine 'x/2' cosine 'x/2''. But, from my reference triangle, I can pick off the trigonometric functions of 'x/2' very easily. Namely, the sine of 'x/2' is just 'z' over the square root of '1 plus 'z squared'', and the cosine of 'x/2' is just 1 over the square root of '1 plus 'z squared''.
Plugging that in here, I find that 'sine x' is twice 'z' over the square root of '1 plus 'z squared'' times 1 over the square root of '1 plus 'z squared''. Multiplying this out, I find, what? That 'sine x' is twice 'z', and the square root of '1 plus 'z squared'' times itself is just '1 plus 'z squared''. In other words, 'sine x' is '2z'/ '1 plus 'z squared''. In other words, with the substitution, what happens to 'sine x'? It becomes '2z'/ '1 plus 'z squared'', which is also the quotient of two polynomials in 'z', OK?
Finally, how about 'cosine x'? What identity can we use to reduce 'cosine x' in terms of 'x/2'? Why do we want the 'x/2'? Again, notice that even though we may not have invented this substitution by ourself, once it's invented, the relationship to the angle 'x/2' becomes very apparent.
At any rate, notice that the identity that says 'cosine 2x' is ''cosine squared x' minus 'sine squared x'' translates into: 'cosine x' is cosine squared of half the angle minus sine squared of half the angle. Well, 'cosine 'x/2'' is just 1 over the square root of '1 plus 'z squared'', so 'cosine squared 'x/2'' is just 1/ '1 plus 'z squared''. Similarly, 'sine squared 'x/2'' is just 'z squared' over '1 plus 'z squared''. And what we find is that 'cosine x' is '1 minus 'z squared'' over '1 plus 'z squared''.
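The three substitution formulas derived above-- 'dx' is '2 dz'/ '1 plus 'z squared'', 'sine x' is '2z'/ '1 plus 'z squared'', and 'cosine x' is '1 minus 'z squared'' over '1 plus 'z squared''-- can be spot-checked numerically. A sketch (not part of the lecture) comparing them against the built-in trigonometric functions:

```python
import math

def weierstrass(x):
    # with z = tan(x/2):  sin x = 2z/(1+z^2),  cos x = (1-z^2)/(1+z^2)
    z = math.tan(x / 2)
    return 2*z / (1 + z*z), (1 - z*z) / (1 + z*z)

# spot-check the sine and cosine formulas
for x in [-2.0, 0.3, 1.0, 2.5]:
    s, c = weierstrass(x)
    assert abs(s - math.sin(x)) < 1e-9
    assert abs(c - math.cos(x)) < 1e-9

# spot-check dz/dx = (1 + z^2)/2 by central difference
h = 1e-6
for x in [0.3, 1.0]:
    z = math.tan(x / 2)
    dz_dx = (math.tan((x + h)/2) - math.tan((x - h)/2)) / (2*h)
    assert abs(dz_dx - (1 + z*z)/2) < 1e-6
```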
And, again, what's happened? 'Cosine x' is now expressible as the quotient of two polynomials in 'z'. In fact, as an application of this, let's come back to an integrand that's been giving us some trouble for quite some time now. Let's look at the integral 'secant x dx' and see if we can't use this technique that we've just learned to solve this particular problem. Notice that secant is 1 over cosine, therefore, integral 'secant x dx' is integral 'dx' over 'cosine x'.
Now, let's come back here for a moment just to refresh our memories. We saw that 'dx' was two 'dz'/ '1 plus 'z squared''. We saw that 'cosine x' was '1 minus 'z squared'' over '1 plus 'z squared'', therefore, '1 over cosine x' will just be the reciprocal of this. In other words, coming back here now, we can replace 'dx' by '2 dz' over '1 plus 'z squared'' and '1 over cosine x' by '1 plus 'z squared'' over '1 minus 'z squared'', OK? Therefore, in terms of this substitution, integral 'secant x dx' is just twice integral 'dz'/ '1 minus 'z squared''.
But, what is this integrand? This integrand is the quotient of two polynomials in 'z', therefore, I could use partial fractions here, write this as a term: something over '1 plus z' plus something over '1 minus z', et cetera, solve the problem in terms of 'z', and then, remembering that 'z' is 'tangent 'x/2'', I can then replace 'z' by what it's equal to in terms of 'x', and in that way solve the problem.
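Carrying that sketch through goes a bit beyond what the lecture works out: partial fractions give '2 integral 'dz'/ '1 minus 'z squared''' equal to log absolute value of ''1 plus z' over '1 minus z'', and substituting back 'z' equals 'tangent 'x/2'' should agree with the standard table result, log absolute value of ''secant x' plus 'tangent x''. A numerical comparison (not part of the lecture; the function names are mine):

```python
import math

def antiderivative_via_z(x):
    # 2 * integral dz/(1 - z^2) = ln|(1+z)/(1-z)|, with z = tan(x/2) substituted back
    z = math.tan(x / 2)
    return math.log(abs((1 + z) / (1 - z)))

def antiderivative_from_tables(x):
    # the standard result: ln|sec x + tan x|
    return math.log(abs(1.0/math.cos(x) + math.tan(x)))

# the two forms agree on an interval around 0 (here they even share the same constant)
for x in [-1.0, 0.2, 1.2]:
    assert abs(antiderivative_via_z(x) - antiderivative_from_tables(x)) < 1e-9
```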
Now, you see, what I want you to see again, something that comes up here all the time, is how we are continually looking for ways of reducing integrals to equivalent integrals, but, hopefully, ones that are more familiar to us, meaning, what? Ones that we are able to handle. In the next lesson, we're going to find a very powerful technique, which is far more general than partial fractions, a technique which is used over, and over, again, which is probably the single most important technique. But, I won't say any more about that right now. We'll continue with this discussion next time. And, until next time, good bye.
MALE VOICE: Funding for the publication of this video was provided by the Gabriella and Paul Rosenbaum Foundation. Help OCW continue to provide free and open access to MIT courses by making a donation at ocw.mit.edu/donate.