Topics covered: Discussion of how infinity differs from "very large"; some subtle and not-so-subtle consequences of the difference; the case against intuition; motivating infinite series in terms of finding area as a limit.
Instructor/speaker: Prof. Herbert Gross
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: Hi. Today we begin our final block of material in this particular course, the segment entitled Infinite Series. And perhaps the best way to motivate this rather difficult block of material is in terms of the concept of many versus infinite. In many respects, this particular block could've been given much earlier in the course. But somehow or other, until we have some sort of a feeling as to what infinity really means, we have a maturity problem in trying to really grasp the significance of what's going on. What I shall do throughout this block is to utilize the lectures again to make sure that the concepts become crystallized, and use the learning exercises plus the text plus the supplementary notes to make sure that the details are taken care of in adequate fashion.
At any rate, I've entitled today's lecture 'Many Versus Infinite'. And I thought the best way to get started on this was to think of a number that's very easy to write in terms of exponential notation. Let capital 'N' be 10 to the 10 to the 10th power. 10 to the 10, by the way, is 10 billion, a 1 followed by 10 zeroes. So 'N' is 10 to the 10-billionth power. That means, if written in place-value notation, 'N' would be a 1 followed by 10 billion zeroes.
And for those of you who would like an exercise in multiplication and long division, if you compute the number of seconds in a year and what have you, it turns out without too much difficulty that writing 1 billion zeroes at the rate of one per second would take on the order of magnitude of some 32 years. In other words, writing this number capital 'N' in place-value notation at the rate of one digit per second would take some 320 years.
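The lecturer's order-of-magnitude estimate is easy to check with a few lines (a quick sketch of my own, not part of the lecture):

```python
# Rough check of the arithmetic: writing one digit per second,
# how long do a billion digits take? Ten billion?
seconds_per_year = 60 * 60 * 24 * 365

print(10**9 / seconds_per_year)    # about 31.7 years for a billion digits
print(10**10 / seconds_per_year)   # about 317 years for N's ten billion digits
```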
And you say so what? And the answer is, well, after you've got out that far-- and by the way, this is crucial. 320 years is a long time. I was going to say it's a lifetime. It's more than the lifetime. It's a long time, but it's finite. Eventually, the job could be completed. But the interesting point is that once it's completed, the next number in our system is capital 'N plus 1', capital 'N plus 2', capital 'N plus 3', where in a sense then, with 'N' as a new reference point, we're back to the beginning of our number system.
In other words, granted that 'N' is a fantastically large number, if you wanted to become wealthy, to own 'N' dollars would more than realize your dream. But if your aim was to own infinitely much money, 'N' would be no closer than having no money at all. 'N' is no nearer the end of a number system than is the number 1 itself. There is the story that signifies the difference between many and infinite. And to hammer this point home, let me give you a few more examples. I cleverly call this additional examples.
We all know that there are just as many odd numbers as even numbers, right? The odds and the evens match up. Now, watch the following little gimmick. Write the first two odd numbers, then the first even number, the next two odd numbers, then the next even number, the next two odd numbers, then the next even number. And go on like this as long as you want. And no matter where we stop, even if we go out to the 10 to the 10 to the 10th term, no matter what even number we stop at, there will always be twice as many odd numbers written on the board as there are even numbers.
In other words, even though in the long run in terms of the infinity of each there are as many odds and evens, if we stop this process at any finite time no matter how far out, there will always be twice as many odds as there are evens. In fact, if you want to compound this little dilemma, write the first two evens, then an odd, in other words, 2, 4, 1, 6, 8, 3, 10, 12, 5, and you can get twice as many evens as there are odds, et cetera. And the whole argument again hinges on what? Confusing the concept of going out very far with going out endlessly. Oh, let me give you another example or two. I just want to throw these around so you at least get the mood created as to what we're really dealing with right now.
Let's take the endless sum 1 plus 'minus 1' plus 1 plus 'minus 1', and say let's go on forever. What will this sum be? Now, lookit, one way of grouping these terms is in twos. In other words, we'll start with the first two terms, then the next two terms. In other words, we can write this as '1 plus minus 1' plus '1 plus minus 1', and so on. And writing it this way, we can see that each group adds up to 0, and the infinite sum would be 0.
On the other hand, if we now leave the first term alone and now start grouping the remaining terms in twos, we find that the infinite sum is 1. Now, we're not going to argue that something is fishy here. We're not going to say I wonder which is the right answer. What we have shown without fear of contradiction is that the answer that you get when you add infinitely many terms does depend on how you group them, unlike the situation of what happens when you add finitely many terms. In other words, notice the need for order as well as the terms themselves when you have a sum of infinitely many terms.
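The two groupings can be sketched numerically (a minimal illustration of my own, using hypothetical function names, not something from the lecture):

```python
# Grouping the first 2k terms of 1 - 1 + 1 - 1 + ... two different ways.
def paired_from_start(num_pairs):
    # (1 + -1) + (1 + -1) + ... : every pair contributes 0
    return sum(1 + -1 for _ in range(num_pairs))

def lead_term_then_pairs(num_pairs):
    # 1 + (-1 + 1) + (-1 + 1) + ... : a leading 1, then pairs of 0
    return 1 + sum(-1 + 1 for _ in range(num_pairs))

print(paired_from_start(1000))       # 0
print(lead_term_then_pairs(1000))    # 1
```

Both computations use the same terms; only the grouping differs, which is exactly the point.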
And the key point is don't be upset when you find out that your intuition is defied here. You say this doesn't seem real to me. It seems intuitively false. The point is our intuition is defied. Why? Because it doesn't apply. And why doesn't it apply? It doesn't apply because our intuition is based on visualizing large but finite amounts, not based on visualizing infinity.
You see, all of these paradoxes stem, because in our mind, we're trying to visualize infinity as meaning the same as very large. Well, you know, now we come to a very important crossroad. After all, if infinity is going to be this difficult a concept to handle, let's get rid of it the easy way. Let's refuse to study it. That's one way of solving problems. It's what I call the right wing conservative educational philosophy. If you don't like something, throw it out. The only trouble is we need it. For example, why do we need it? See, why deal with infinite sums? Well, because we need them. Among other places, we've already used them. For example, in computing areas. We've taken a limit as 'N' goes to infinity. Summation, 'k' goes from 1 to 'n', 'f of 'c sub k'' 'delta x', you see. And we need this limit.
And so the question comes up, how shall we add infinitely many terms? We have a choice now. We can throw the thing out, but we don't want to throw it out. We need it. So the question is how shall we add infinitely many terms? And even though we know that our intuition can get us in trouble, we do have nothing else to begin with. So we say OK, let's mimic what happened in the finite case and see if we can't extend that in a plausible way to cover the infinite case.
Let me pick a particularly straightforward example. Let's suppose I have the three numbers which I'll call 'a sub 1', 'a sub 2', and 'a sub 3', where 'a sub 1' will be 1/2, 'a sub 2' will be 1/4, and 'a sub 3' will be 1/8. In other words, just for reasons of identification later on in what I'm going to be doing, each term is half of the previous one. Now, I want to find the sum of these three terms. I want to find 'a1' plus 'a2' plus 'a3'. Now, colloquially, we just say, oh, that's 1/2 plus 1/4 plus 1/8, and I'll just add them up.
But do you remember how you learned to add? You may not have paid attention to it, but you learned to add as a sequence. You said I'll add the first one. Then the first plus the second gives me a number. That's my second partial sum. Then I'll add on the third number. That will give me my third partial sum. Then I have no more numbers to add. Consequently, my third partial sum is by definition the sum of these three numbers.
Writing it more symbolically, we say lookit, the first partial sum, 's sub 1', is 1/2. The second partial sum is 1/2 plus 1/4. Another way of saying that is what? It's the first partial sum plus the next term, which is 1/4. 1/2 plus 1/4 is 3/4. Then we said OK, the third partial sum is what we had before, namely, 3/4, plus the next term, which was 1/8, and that gives rise to 7/8. In other words, we said let's form 'a1', then 'a1 plus a2', then 'a1 plus a2 plus a3'. And when we finally finished with our sequence of partial sums, the last partial sum was the answer.
And by the way, let me take time out here to hit home at the most important point, the point that I think is extremely crucial as a starting point if we're going to understand what this whole block is all about. It's to distinguish between a series and a sequence. And I'll have much more to say about this in the supplementary notes. But for now, think of it this way. A series is a sum of terms. A sequence is just a listing of terms.
In other words, in this particular problem, do not confuse the role of the 'a's with the role of the 's's. Notice that the a's refer to the sequence of numbers being added. In other words, the 'a's were what? They were 1/2, 1/4 and 1/8. These were the three numbers being added. Notice that the 's's were the partial sums. In other words, the partial sums form the sequence 's1', 's2', 's3'.
And to refresh your memories on this, that would be the sequence what? 1/2, 3/4, 7/8. In other words, 1/2 was the sum of the first number that you were adding, 3/4 was the sum of the first two, and 7/8 was the sum of all three of them. And notice, by the way, that the sum was defined to be the last partial sum, 's sub 3'. And that is what? This is very, very crucial: 1/2 plus 1/4 plus 1/8 is the sum of three numbers, but it's one number, and that number is called 7/8.
OK, you see what we're talking about now? We're looking at a bunch of terms. We're adding them up, and we see how the sum changes with each term. In fact, in terms of a very trivial analogy, think of an adding machine. As you punch numbers in, the 's's are the sums that you see being read as your total sum, whereas the a's are the individual numbers being punched in to add up, OK? I hope that's a trivial example. As I listen to myself saying it, it sounds like I made it harder than it really is.
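As a quick sketch of the adding-machine picture (variable names are my own), the distinction between the terms, the 'a's, and the partial sums, the 's's, looks like this:

```python
# The terms being added (the "a's") ...
a = [1/2, 1/4, 1/8]

# ... and the running totals (the "s's"), built exactly the way we learned to add
s = []
total = 0.0
for term in a:
    total += term
    s.append(total)

print(s)        # [0.5, 0.75, 0.875] -- that is, 1/2, 3/4, 7/8
print(s[-1])    # 0.875: the last partial sum IS the sum of the three numbers
```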
At any rate, let's generalize this particular problem. Let's suppose now instead of wanting to add 1/2 plus 1/4 plus 1/8, we want to add up the first 'n' terms of the form 1/2 plus 1/4 plus 1/8. In other words, let the n-th term that we're going to add, 'a sub n', be '1 over '2 to the n''. Then the sum, the n-th partial sum here, the sum of these 'n' terms is, of course, what? 'a1' plus, et cetera, 'a sub n'. That turns out to be 1/2 plus 1/4 plus, et cetera, '1 over '2 to the n''.
And by the way, rather than take time to develop this recipe over here, I thought you might like to see another place where it might be interesting to review mathematical induction. If you'll bear with me and just come back over here where we were computing these partial sums, notice that in each of these partial sums, your denominator is always 2 raised to the same power as the subscript. See, 2 to the first power is 2. 2 to the second power is 4. 2 to the third power is 8. In other words, if your subscript is 'n', your denominator is '2 to the n'.
Notice that your numerator is always one less than your denominator. In other words, if your denominator is '2 to the n', the numerator is ''2 to the n' minus 1'. And once we suspect this, this particular result can be proven by induction. I won't take the time to do that here. What I will take the time to do is to observe that this particular sum can be written more conveniently if we divide through by '2 to the n' to get 1 minus '1 over '2 to the n''.
For example, suppose we wanted to add up the first 10 numbers of this form. 2 to the 10th is 1,024. But according to this recipe, 1/2 plus 1/4 plus 1/8 plus 1/16 plus, et cetera, plus 1/1,024 would add up to be what? 1 minus 1/1,024. In other words, this would be 1,023/1,024, which seems to be pretty close to 1. In fact, you can begin to suspect that as 'n' gets arbitrarily large, 's sub n' gets arbitrarily close to 1 in value. I'm just talking fairly intuitively for the time being.
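The closed form 's sub n' equals 1 minus '1 over '2 to the n'' can be checked against the term-by-term sum; here is a small sketch using exact fractions (function names are my own):

```python
from fractions import Fraction

def partial_sum(n):
    # s_n = 1/2 + 1/4 + ... + 1/2^n, added term by term
    return sum(Fraction(1, 2**k) for k in range(1, n + 1))

def closed_form(n):
    # the recipe from the lecture: s_n = 1 - 1/2^n
    return 1 - Fraction(1, 2**n)

print(partial_sum(10))   # 1023/1024, just as computed on the board
print(all(partial_sum(n) == closed_form(n) for n in range(1, 20)))   # True
```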
But, you see, the major question now is suppose you elect not to stop at that. And you see, this is a very key point. We've already seen how the whole world seems to change as soon as you say let's never stop as opposed to saying let's go out as far as you want. See, if we now say what happens if you go on endlessly over here? Well, it becomes very natural to say lookit, the n-th partial sum was 1 minus '1 over '2 to the n-th'. In the case where you were adding up a finite number of terms, when you came to the last partial sum, that was by definition your answer.
Now, what we're saying is lookit, because we have infinitely many terms to add, there is no last partial sum. And so what we say is lookit, instead of the last term, since there is no last term, why don't we just take the limit of the n-th partial sum as 'n' goes to infinity. In other words, in this particular case, notice that as 'n' approaches infinity, 1 minus '1 over '2 to the n'' approaches 1, and we then define the infinite sum, meaning what? I write it this way: sigma, 'n' goes from 1 to infinity, '1 over '2 to the n''. It really means what? The limit as 'n' goes to infinity of 1/2 plus 1/4 plus 1/8 plus 1/16-- endlessly. That limit is 1, and we define that to be the sum.
And again, as I say, I'm going to write that in greatly more detail in the notes, and also we'll have many exercises on this. I just wanted you to see how we get to infinite sums, which are called series by generalizing what happens in the finite case. And because this may seem a little vague to you, let me give you a pictorial representation of this same thing.
You see, what's happening here is this. Draw a little circle around 1 of bandwidth epsilon. In other words, let's mark off an interval epsilon on either side of 1. And let's call this point here 1 minus epsilon. Let's call this point here 1 plus epsilon. And what we're saying about our partial sums is this. That when you start off and you're adding up terms here, you have 1/2. 1/2 plus 1/4 brings you over to 3/4. The next partial sum is 7/8, et cetera.
And all we're saying is that these terms get arbitrarily close to 1 in value, meaning that after a while-- and I'll define more rigorously what after a while means in a moment-- all of the 's sub n's are within epsilon of 1. After a while, all of your partial sums are in here. And what you mean by after a while certainly depends on how big epsilon is. In other words, the smaller the bandwidth you allow yourself, the more terms you may have to take before you get within the tolerance limits that you allow yourself.
In any event, going back to something that we've been using for a long time, our basic definition is the following. If you have an infinite sequence, say, a collection of terms 'b sub n', in other words, 'b1', 'b2', 'b3', et cetera, without end, we say that that sequence converges to the limit 'L', written the limit of 'b sub n' as 'n' approaches infinity equals 'L', if and only if for every epsilon greater than 0 we can find a number 'N' which depends on epsilon-- notice the notation here: 'N' as a function of epsilon-- such that whenever little 'n' is greater than capital 'N', the absolute value of 'b sub n' minus 'L' is less than epsilon.
And, you see, again, you may wonder how in the world you're going to remember this. If you memorize this, I guarantee you, in two days' time, you'll have to memorize it again. I also hope you have enough faith in me to recognize I didn't memorize this. There is a feeling that one gets for this. And let me give you what that feeling is.
Again, in terms of a picture, what it means-- well, I'll change these to 'a's now because that's the symbols that we've been using before in terms of the sequence of terms. What we really mean-- and I don't care what symbol you really use here-- is if you want to talk about the limit of 'a sub n' as 'n' approaches infinity, if that limit equals 'L', what the rigorous definition says is this. Draw yourself an interval around 'L' of bandwidth epsilon, in other words, from 'L minus epsilon' to 'L plus epsilon'. And what this thing says is that beyond a certain term, say, the capital N-th term, every term beyond this certain one is in here.
Well, all 'a n's are in here if 'n' is sufficiently large. I don't know if you can read that very well, but just listen to what I'm saying. All of the terms are in here if 'n' is sufficiently large. What this means again is that to all intents and purposes, if you think of this bandwidth as giving you a dot, see, a thick dot here where the endpoints are 'L minus epsilon' and 'L plus epsilon', what we're saying is lookit, after a certain term, the way I've drawn here, after the fifth term, all the remaining terms of my sequence are in here.
By the way, notice the role of the subscripts here. All the subscript tells you is where the term appears in your sequence. For example, the third term in your sequence could be a smaller number than the second term of your sequence. Do not confuse the size of the terms with the subscripts. The subscripts order the terms, but the third term in the sequence can be smaller in size than the second term in the sequence. But again, I'll talk about that in more detail in the notes.
The point that I want you to see is that in concept what limit does is the following. Limit is to an infinite sequence as last term is to a finite sequence. In other words, a limit replaces infinitely many points by a finite number of points plus a dot. You see, going back to this example here, how many 'a sub n's were there? Well, there were infinitely many. Well, to keep track of these infinitely many, what do I have to keep track of now? Well, in this diagram, the first five 'a's plus this dot, because you see, every one of my infinitely many 'a's past the fifth one is inside this dot, you see?
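The epsilon-N picture can be checked numerically for the partial sums we just met. A hypothetical sketch (my own names and bound, not from the lecture): for 's sub n' equal to 1 minus '1 over '2 to the n'' with limit 1, the distance to the limit is exactly '1 over '2 to the n'', so any 'N' bigger than log base 2 of '1 over epsilon' works.

```python
import math

# For s_n = 1 - 1/2^n with L = 1, we have |s_n - L| = 1/2^n,
# so it suffices to take N(eps) = ceil(log2(1/eps)).
def N_of_eps(eps):
    return math.ceil(math.log2(1 / eps))

def s(n):
    return 1 - 1 / 2**n

eps = 0.001
N = N_of_eps(eps)   # here N = 10
# every term past the N-th lies inside the bandwidth around 1
tail_ok = all(abs(s(n) - 1) < eps for n in range(N + 1, N + 200))
print(N, tail_ok)   # 10 True
```

The smaller you make `eps`, the larger `N_of_eps(eps)` becomes, which is precisely the "after a while depends on the bandwidth" remark above.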
So in other words then, what's happened? The thing that had to happen. We had to deal with infinite sequences. We saw the big philosophic difference between infinitely many and just large. And so our definition of limit had to be such that we could reduce in a way that was compatible with our intuition the concept of infinitely many points to a finite number. Because, you see, as I'll show you in the notes also, all of our arithmetic is geared for just a finite number of operations.
See, this is why this definition of limit is so crucial. Again, you may notice, and I'll remind you of this also in the exercises, that structurally this definition of limit is the same as the limit that we use when we talked about the limit of 'f of x', as 'x' approaches 'a', equals 'L'. The absolute value signs have the same properties as before.
And by the way, before I go on, let me just remind you again of one more thing while I'm talking that way. Instead of memorizing this, remember how you read this. The absolute value of 'a sub n' minus 'L' is less than epsilon means what? That 'a sub n' is within epsilon of 'L'. That's what we use in our diagram. But it seems to me I forgot to mention this. And I want you to see that what? The key building block analytically is the absolute value, and the meaning of absolute value is the same here as it was in blocks one and two of our course. So what I'm driving at is that the same limit theorems that we've been able to use up until now still apply.
Oh, by way of an example. Suppose I have the limit as 'n' approaches infinity of ''2n plus 3' over '5n plus 7''. Notice that I can divide numerator and denominator through by 'n'. If I do that, I have the limit as 'n' approaches infinity of ''2 plus '3/n'' over '5 plus '7/n'''. Now using the fact that the limit of a sum is the sum of the limits, the limit of a quotient is the quotient of the limits, and the limit of '1/n' as 'n' goes to infinity is 0, notice that I can use the limit theorems to conclude that the limit of this particular sequence is 2/5. If I wanted to, the same way as we did in block one and block two, where we were talking about limits, given an epsilon, I could actually exhibit how far out I have to go before each of the terms in this sequence is within that given epsilon of 2/5.
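A quick numerical sketch of this example (my own illustration): the difference between ''2n plus 3' over '5n plus 7'' and 2/5 works out to '1 over '25n plus 35'', so the terms close in on 2/5 as fast as you like.

```python
def a(n):
    # the n-th term of the sequence
    return (2*n + 3) / (5*n + 7)

# a(n) - 2/5 = 1/(25n + 35), which shrinks to 0
print(a(1))                          # 5/12 = 0.41666...
print(abs(a(10**6) - 2/5) < 1e-6)    # True: a million terms out, within a millionth
```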
By the way, again to emphasize once more, because this is so important, the difference between an infinite sum and an infinite sequence, observe that whereas the limit of the sequence of terms ''2n plus 3' over '5n plus 7'' is 2/5, the infinite sum composed of the terms of the form ''2n plus 3' over '5n plus 7'' is infinity, since after a while each term that you're adding here behaves like 2/5. In other words, if you write this thing out to see what this means, pick 'n' to be 1. When 'n' is 1, this term is 5/12. When 'n' is 2, this is what? 4 plus 3 is 7, over 17: 7/17. When 'n' is 3, this is 9/22. When 'n' is 4, this is 8 plus 3 is 11, over 27.
In other words, what you're saying is the infinite sum means to add up all of these terms. The thing whose limit was 2/5 was the sequence of terms themselves. In other words, what we're saying is that after a certain point, every one of these terms behaves like 2/5. And what you're saying is lookit, after a point, what you're really doing is essentially adding on 2/5 every time you add on another term. And therefore, this sum can get as large as you want, just by adding on enough terms.
Again, observe the difference between the partial sums and the terms themselves. The terms that you're adding are approaching 2/5 as a limit. The thing that's becoming infinite is the sequence of partial sums. Because what you're saying is to get from one partial sum to the next, you're, roughly speaking, adding on 2/5 each time.
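That the partial sums pass any target you name can be sketched in a few lines (a hypothetical illustration of my own; the target 100 is arbitrary):

```python
def a(n):
    return (2*n + 3) / (5*n + 7)

# The partial sums grow without bound: each late term adds roughly 2/5,
# so passing any fixed target is only a matter of taking enough terms.
target = 100
s, n = 0.0, 0
while s <= target:
    n += 1
    s += a(n)
print(n)   # about 250 terms, since 100 / (2/5) = 250
```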
To generalize this, what we're saying is if the sequence of partial sums converges, the individual terms that you're adding must approach 0 in the limit. For if the limit of the 'a sub n's as 'n' approaches infinity is 'L', where 'L' is not 0, then beyond a certain term, the sum of the 'a sub n's behaves like the sum of the 'L's. And what you're saying is if 'L' is nonzero, by adding on enough of these fixed 'L's, you can make the sum as large as you wish.
In other words, then, a sort of negative test is that if you know that the series converges, then the terms that you're adding on must approach 0 in the limit. Unfortunately, by the way, the converse is not true. Namely, if you know that the terms that you're adding on go to 0, you cannot conclude that their sum is finite. Again, it's our old friend of infinity times 0. You see, as these terms approach 0, when you start to add them up, it may be that they're not going to 0 fast enough.
In other words, notice that the terms are getting small, but you're also adding more and more of them. You see, what I wrote here is what? On the other hand, the limit of 'a sub n' as 'n' approaches infinity equals 0 is not enough to guarantee the convergence of this particular sum.
In fact, a trivial example to show this is look at the following contrived example. Start out with the first number being 1. Then take 1/2 twice, 1/3 three times, 1/4 four times, 1/5 five times, 1/6 six times. Form the n-th partial sum. Lookit, is it clear that the terms that are going into your sum are approaching 0 in the limit? You see, you have a one, then there are halves, then thirds, then fourths, then fifths, then sixths, sevenths, et cetera. The terms themselves are getting arbitrarily close to 0.
On the other hand, what is the sum becoming? Well, this adds up to 1. This adds up to 1. This adds up to 1, and this adds up to 1. In other words, by taking enough terms, I can tack on as many ones as I want, and ultimately, even though the terms become small, the sum becomes large. In fact, it's precisely because of this unpleasantness that we have to go into a rather difficult lecture next time, talking about OK, how then can you tell when an infinite sum converges to a finite limit and when it doesn't?
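The contrived series can be sketched directly with exact fractions (my own generator, not from the lecture), confirming both halves of the point: the terms shrink to 0, yet the partial sums grow without bound.

```python
from fractions import Fraction

def terms(num_blocks):
    # 1, then 1/2 twice, 1/3 three times, ..., 1/k taken k times
    for k in range(1, num_blocks + 1):
        for _ in range(k):
            yield Fraction(1, k)

seq = list(terms(6))
print(seq[-1])    # 1/6: the terms themselves are heading to 0 ...
print(sum(seq))   # 6: ... yet each block of k copies of 1/k adds exactly 1
```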
At any rate, that's what I said we're going to talk about next time. As far as today's lesson is concerned, I hope that we've straightened out the difference between a sequence and a series, partial sums and the terms being added. And in the hopes that we've done that, let me say, until next time, goodbye.
Funding for the publication of this video was provided by the Gabriella and Paul Rosenbaum Foundation. Help OCW continue to provide free and open access to MIT courses by making a donation at ocw.mit.edu/donate.