Lecture 15: Introduction to Fourier Series
Topics covered: Introduction to Fourier Series; Basic Formulas for Period 2 pi
Instructor/speaker: Prof. Arthur Mattuck
Well, let's get started. The topic for today is -- Sorry. Thank you. For today and the next two lectures, we are going to be studying Fourier series. Today will be an introduction explaining what they are and how to calculate them, but I thought before we do that I ought to at least give a couple minutes' overview of why and where we're going with them, and why they're coming into the course at this place at all.
So, the situation up to now is that we've been trying to solve equations of the form y'' + ay' + by = f(t), constant coefficient second-order equations, where f(t) is the input. So, we are considering inhomogeneous equations. And so far, the response is the corresponding solution y(t), maybe with some given initial conditions to pick out a special one we call the response, the response to that particular input. And now, over the last few days, the inputs have been, however, extremely special.
The basic input has been an exponential, or sines and cosines, and we learned how to solve those. But the point is that those seem extremely special. Now, the point of Fourier series is to show you that they are not as special as they look. The reason is, let's put it this way: any reasonable f(t) which is periodic, and it doesn't even have to be very reasonable, it can be somewhat discontinuous, although not terribly discontinuous, which is periodic with period, maybe not the minimal period, but some period two pi.
Of course, sin(t) and cos(t) have the exact period two pi, but if I change the frequency to an integer frequency like sin(2t) or sin(26t), two pi would still be a period, although it would not be the period; the period would be shorter. The point is, such a thing can always be represented as an infinite sum of sines and cosines. So, it's going to look like this. There's a constant term you have to put out front. And then, the rest is rather long to write unless you use summation notation, so I will. So, it's a sum from n equal one to infinity, integer values of n, of a sine and a cosine. It's customary to put the cosine first, with coefficient an, and the n indicates the frequency of the thing.
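Written out in standard notation, the series just described is (a sketch; I'm writing the constant term as c_0 here, and the lecture later explains why it is usually written as a_0/2):

```latex
f(t) = c_0 + \sum_{n=1}^{\infty} \bigl( a_n \cos nt + b_n \sin nt \bigr)
```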
And, the bn multiplies sin(nt). Now, why does that solve the problem of general inputs for periodic functions, at least if the period is two pi or some fraction of it? Well, you could think of it this way. I'll make a little table. Let's put over here the input, and here I'll put the response. Okay, suppose the input is the function sin(nt). In other words, you put a sin(nt) here, and you know how to get the answer, find a particular solution, in other words. In fact, you do it by converting this to a complex exponential, and then all the rigmarole we've been going through. So, let's call the response something. Let's call it y.
I'd better index it by n because it, of course, is the response to this particular periodic function. So, yn(t). And if the input is cos(nt), that also will have a response, yn. Now, I really can't call them both by the same name. So, why don't we put a little s up here to indicate that that's the response to the sine, and here I'll put a little c to indicate the response to the cosine. You feed in cos(nt), and what you get out is this function.
Now what? Well, by the way, notice that if n is zero, it's going to take care of a constant term, too. In other words, the reason there is a constant term out front is because that corresponds to cos(0t), which is one. Now, suppose I input instead an cos(nt). All you do is multiply the response by an. Same here: if you multiply the input by bn, you multiply the response by bn. That's because the equation is a linear equation. And now, what am I going to do? I'm going to add them up. If I add them all up, and take account also of the n equals zero term corresponding to this first constant term, the sum of all these, according to my Fourier formula, is going to be f(t).
What's the sum of the corresponding responses? Well, that's going to be the sum from one to infinity of [an yn^c(t) + bn yn^s(t)], the responses to the cosines and the sines, and there will be some sort of constant term here; let's just call it c1. So, in other words, if this input produces that response, and these are things which we can calculate, we're led by Fourier's formula to the response to things which otherwise we would not have been able to calculate, namely, any periodic function of period two pi. So the procedure will be: you've got a periodic function of period two pi.
Find its Fourier series, and I'll show you how to do that today. Find its Fourier series, and then the response to that general f of t will be this infinite series of functions, where these things are things you already know how to calculate. They are the responses to sines and cosines. And, you just formed the sum with those coefficients. Now, why does that work? It works by the superposition principle. So, this is true. The reason I can do the adding and multiplying by constant, I'm using the superposition principle. If this input produces that response, then the sum of a bunch of inputs produces the sum of the corresponding responses.
And, why is that? Why can I use the superposition principle? Because the ODE is linear. It's okay, since the ODE is linear. That's what makes all this work. Now, so what we're going to do today is I will show you how to calculate those Fourier series. I will not be able to use it to actually solve any differential equation. It will take us pretty much all the period to show how to calculate a Fourier series. And, okay, so I'm going to solve differential equations on Monday. Wrong. I probably won't even get to it then because the calculation of a Fourier series is a sufficient amount of work that you really want to know all the possible tricks and shortcuts there are.
Unfortunately, they are not very clever tricks. They are just obvious things. But, it will take me a period to point out those obvious things, obvious in my sense if not in yours. And, finally, the third day, we'll solve differential equations. I will actually carry out the program. But the main thing we're going to get out of it is another approach to resonance because the things that we are going to be interested in are picking out which of these terms may possibly produce resonance, and therefore a very crazy response.
Some of the terms in the response suddenly get a much bigger amplitude than you would normally have thought they had, because the system is picking out resonant terms in the Fourier series of the input. Okay, well, that's a big mouthful. Let's get started on calculating. So, the program today is: calculate the Fourier series. Given f(t) periodic, having two pi as a period, find its Fourier series. How, in other words, do I calculate those coefficients an and bn?
Now, the answer is not immediately apparent, and it's really quite remarkable. I think it's quite remarkable, anyway. It's one of the basic things of higher mathematics. And, what it depends upon are certain things called the orthogonality relations. So, this is the place where you've got to learn what such things are. Well, I think it would be a good idea to have a general definition, rather than immediately get into the specifics.
So, I'm going to call them u(t) and v(t). Since Fourier analysis is most often applied when the variable is time, I think I will stick to the independent variable t all period long, if I remember to, at any rate. So, these are two continuous, or not very discontinuous, functions. Let's make them periodic; let's say two pi is a period. So, functions, for example, like those guys: sin(t), sin(nt), sin(22t), and so on. Well, I want them really on the whole real axis, not just on an interval: defined for all real numbers.
Then, I say that they are orthogonal, perpendicular, but nobody says perpendicular; orthogonal is the word. Orthogonal on the interval [-pi, pi] if the integral from minus pi to pi of u(t) v(t) dt is zero. That's called the orthogonality condition on [-pi, pi]. Now, that's just the definition. I would love to go into a little song and dance now on what the definition really means, and why the word orthogonal is used, because it really does have something to do with two vectors being orthogonal in the sense in which you learned it in 18.02.
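As a quick numerical sanity check of this definition (a sketch in pure Python; the midpoint-rule integrator and the particular frequencies 3 and 4 are my own illustrative choices, not anything from the lecture):

```python
import math

def inner_product(u, v, n_steps=100_000):
    """Approximate the integral of u(t)*v(t) over [-pi, pi] by the midpoint rule."""
    a, b = -math.pi, math.pi
    h = (b - a) / n_steps
    total = 0.0
    for k in range(n_steps):
        t = a + (k + 0.5) * h
        total += u(t) * v(t)
    return total * h

# Two distinct members of the sine/cosine family are orthogonal:
print(inner_product(lambda t: math.sin(3 * t), lambda t: math.cos(4 * t)))  # ~ 0
print(inner_product(lambda t: math.sin(2 * t), lambda t: math.sin(5 * t)))  # ~ 0
# A function paired with itself gives pi, not zero:
print(inner_product(lambda t: math.sin(3 * t), lambda t: math.sin(3 * t)))  # ~ pi
```

The last line anticipates the value pi that the lecture computes a little later for the integral of a square.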
I'll have to put that on ice for the moment, and whether I get to it or not depends on how fast I talk. But, you'd probably prefer I talk slowly. So, let's compromise. Anyway, that's the condition. And now, what I say is that that blue Fourier series, what finding the coefficients an and bn depends upon, is this theorem about the collection of functions sin(nt) for any value of the integer n. Of course, I can assume n is a positive integer, because sin(-nt) = -sin(nt). And cos(mt); let's give it a different letter, because I don't want you to think they are exactly the same integers.
So, this is a big collection of functions, as n runs from one to infinity. Here, I could let m run from zero to infinity, because cos(0t) means something: it's the constant one. The claim is that any two distinct ones, two distinct, you know, how can two things be not different? Well, you talk about two coincident roots; I'm just doing a little overkill. Any two distinct members of this collection are orthogonal on this interval. Of course, they all have two pi as a period.
So, they fall into this general category that I'm talking about, but any two distinct ones are orthogonal on the interval [-pi, pi]. So, the integral from -pi to pi of [sin(3t) cos(4t) dt] = 0. If I integrate sin(3t) cos(60t), the answer is zero. The same thing with two cosines, or a sine and a cosine. The only time you don't get zero is if you make the two functions the same. Now, how do you know that you could not possibly get the answer zero if the two functions are the same?
If the two functions are the same, then I'm integrating a square. A square is always positive, and therefore I cannot get the answer zero. But, in the other cases, I might get the answer zero, and the theorem is that you always do. Okay, so why is this? Well, there are three ways to prove this. It's like many fundamental facts in mathematics: there are different ways of going about it. By the way, along with the theorem, I probably should have included one more thing, so you might as well include it here, because we're going to need it: what happens if you use the same function? If I take u equal to v, in that case, as I've indicated, you're not going to get the answer zero.
But what you will get is, in other words, I'm asking: what is the integral from -pi to pi of [(sin(nt))^2 dt]? That's a case where two of them are the same; I use the same function. What's that? Well, the answer is, it's the same as what you will get if you take the integral from -pi to pi of [(cos(nt))^2 dt]. And, the answer to either one of these is pi. That's something you know how to do from 18.01 or the equivalent thereof. You can integrate sine squared; it's one of the things you had to learn for whatever exam you took on methods of integration. Anyway, I'm not going to calculate this out. The answer turns out to be pi. All right, now, one way to prove it is to use trig identities.
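The value pi comes straight from the half-angle identity; as a quick check of the integral the lecture quotes:

```latex
\int_{-\pi}^{\pi} \sin^2 nt \, dt
  = \int_{-\pi}^{\pi} \frac{1 - \cos 2nt}{2}\, dt
  = \frac{1}{2}\left[\, t - \frac{\sin 2nt}{2n} \,\right]_{-\pi}^{\pi}
  = \pi ,
```

and the same computation with cos^2(nt) = (1 + cos 2nt)/2 gives pi as well.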
And, I'm asking you in one of the early problems in the problem set to use identities for the product of sines and cosines, expressing it in a form in which it's easy to integrate, and you can prove it that way. Or, if you have forgotten the trigonometric identities and want to get some more exercise with complex numbers, you can use complex exponentials. In another part of the same problem I'm asking you to do one of these, at any rate, using complex exponentials. And now, I'm going to use a mysterious third method. I'm going to use the ODE. I'm going to do that because this is the method: it's not just sines and cosines which are orthogonal.
There are masses of orthogonal functions out there. And, the way they are discovered, and the way you prove they're orthogonal, is not with trig identities and complex exponentials, because those only work with sines and cosines. It is, instead, by going back to the differential equation that they solve. And that's, therefore, the method I'm going to use here, because this is the method which generalizes to many other differential equations, other than the simple ones satisfied by sines and cosines. But anyway, that is the source.
So, here is the way the proof of these orthogonality conditions goes. I'm going to assume that m is different from n, so that I'm not in either of these two cases. What it depends on is: what's the differential equation that all these functions satisfy? Well, it's a different differential equation depending upon the value of n, but they all look essentially the same. Let's call the function u; it's going to look better if you let me call it u. The functions sin(nt) and cos(nt) satisfy u'' + n^2 u = 0.
In other words, the frequency is n, and the square of the frequency is what you put as the coefficient here, and the whole thing equals zero. What these functions have in common is that they satisfy differential equations that look like that, and the only thing that's allowed to vary is the frequency, which appears in this coefficient of u. Now, the remarkable thing is that's all you need to know. The fact that they satisfy the differential equation is all you need to know to prove the orthogonality relationship. Okay, let's try to do it.
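A quick finite-difference check, in the spirit of the statement above, that these functions really do satisfy u'' + n^2 u = 0 (a Python sketch; the step size h, the frequency n = 7, and the sample point 0.3 are arbitrary choices of mine):

```python
import math

def second_derivative(u, t, h=1e-4):
    """Central-difference approximation to u''(t)."""
    return (u(t + h) - 2 * u(t) + u(t - h)) / h**2

n = 7
for u in (lambda t: math.sin(n * t), lambda t: math.cos(n * t)):
    # u'' + n^2 u should vanish for every member of the family
    residual = second_derivative(u, 0.3) + n**2 * u(0.3)
    print(residual)  # ~ 0, up to finite-difference error
```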
Well, I need some notation. So, I'm going to let un and vm be any two of the functions, and I'll assume m is different from n. For example, this one could be sin(nt), and that could be sin(mt); or this could be sin(nt) and that could be cos(mt). You get the idea: any two of those, and the subscript indicates what the n or the m in them is. Any two, and I mean really two distinct ones. Well, if I say that m is not n, then they positively have to be different; so again, a little overkill. And, what I'm going to calculate, well, first of all, from the equation, I'm going to write the equation this way.
It says that u'' = -n^2 u. That's true for any of these guys. Of course, here, it would be v'' = -m^2 v. You have to make those simple adjustments. And now, what we're going to calculate is the integral from -pi to pi of [un'' vm dt].
Now, just bear with me. Why am I going to do that? I can't explain yet why I'm doing it, but in five minutes you won't need to ask the question. The point is, this is highly un-symmetric: the u is differentiated twice, and the v isn't. But there is a way of turning this into an expression which looks extremely symmetric, where they are treated the same. And the way to do that is, I want to get rid of one of these primes here and put one on here. If you want to integrate one of these guys and differentiate the other to make them look the same, that's called integration by parts, the most important theoretical method you learned in 18.01, even though you didn't know it was the most important theoretical method.
Okay, we're going to use it now as a basis for Fourier series. So, I'm going to integrate by parts. Now, the first thing you do, of course, when you integrate by parts, is you just do the integration; you don't do the differentiation. So, the first term looks like this, and that's to be evaluated between negative pi and pi, since we're doing integration by parts between limits. Then minus what you get by doing both, the integration and the differentiation, and again evaluated between limits. Now, I'm just going to BS my way through this.
This is zero. I don't care which un you picked and which vm you picked; the answer here is always going to be zero. Instead of wasting six boards trying to write out the argument, let me wave my hands. Okay, it's clear, for example, that if v is a sine, sin(mt), of course it's zero, because the sine vanishes at both pi and minus pi. If the un were a cosine, after I differentiate it, it becomes a sine, and so now it's this guy that's zero at both ends. So, the only case in which we might have a little doubt is if this is a cosine, and after differentiation, this is also a cosine. In other words, it might look like cos(nt) cos(mt).
But, I claim that that's zero, too. Why? Because the cosines are even functions, and therefore they have the same value at both ends. So, if I take the value at pi and subtract the value at minus pi, again I get zero, because I have the same value at both ends. So, by this entirely convincing argument, no matter what combination of sines and cosines I have here, this boundary part will always be zero.
So, by calculation, but thought calculation; it's just a waste of time to write anything out. You stare at it until you agree that it's so. And now, by this integration by parts, I've taken this highly un-symmetric expression and turned it into something in which the u and the v are treated exactly alike. Well, good, that's nice, but why? Why did I go to this trouble? Okay, now we're going to use the fact that un satisfies the differential equation, in other words, that un'' = -n^2 un. I should have subscripted it before; you have to put in the subscript, since otherwise you wouldn't know which n goes with it.
All right, I'm now going to take that expression and evaluate it differently. The integral of [un'' vm dt]: well, un'', because it satisfies the differential equation, is equal to -n^2 un. So, what is this? This is -n^2 times the integral from negative pi to pi of [un vm dt]: I'm replacing un'' by -n^2 un, and I've pulled the -n^2 out. So, it's un here, and the other factor is vm dt. Now, that's the proof. Huh? What do you mean, that's the proof? Okay, well, I'll first state why, intuitively, that's the end of the argument.
And then, I'll spell it out in a little more detail, though the more detail you put into this, the more obscure it gets. Look, I just showed you that this is symmetric in u and v, after you massage it a little bit. Here, I'm calculating it a different way. Is this symmetric in u and v? Well, the answer is yes or no. Is this symmetric in u and v? No. Why? Because of the n. The n favors u. We have what is called a paradox. This thing is symmetric in u and v, because I can show it is. And, it's not symmetric in u and v, because I can show that, too: it favors the n.
Now, there's only one possible resolution of that paradox. Both statements would be true if what were true? You see, the only way this can happen is if this expression is zero. In other words, the only way something can be both symmetric and not symmetric is if it's zero all the time. And, that's what we're trying to prove, that this is zero. But, instead of doing it that way, let me show you. This is equal to that, and therefore, according to Euclid, two things equal to the same thing are equal to each other.
So, this equals that, which, in turn, equals what I would have gotten, I'm just using the symmetry a different way, if I had done the calculation on the other factor. And, that turns out to be -m^2 times the integral from -pi to pi of [un vm dt]. So, these two are equal because they are both equal to this. Therefore, how can -n^2 times the integral equal -m^2 times the integral, unless the integral is zero? Remember, m is different from n. So, what this proves is that the integral from -pi to pi of [un vm dt] = 0, at least if m is different from n.
Now, there is one case I didn't include. Which case didn't I include? un times un is not supposed to give zero, so that case I don't have to worry about. But there is a case that I didn't cover: for example, something like cos(nt) sin(nt). Here, the m is the same as the n. Nonetheless, I am claiming that this integral is zero, because these aren't the same function; one is a cosine and one is a sine. Why is that zero? Can you see mentally that that's zero? Well, this product is trying, in another life, to be (1/2) sin(2nt), right?
And obviously the integral from -pi to pi of [sin(2nt) dt] is zero, because you integrate it to a cosine, which has the same value at both ends. Well, that was a lot of talking. If this proof is too abstract for you, I won't ask you to reproduce it on an exam. You can go with the proofs using trigonometric identities and/or complex exponentials.
But, you ought to know at least one of those, and for the problem set I'm asking you to fool around a little with at least two of them. Okay, now, what has this got to do with the problem we started with originally? The problem is to explain this blue series. So, our problem is: how, from this, am I going to get the terms of this blue series? Given f(t) with two pi as a period, find the an and the bn. Okay, let's focus on the an; the bn are the same. Once you know how to do one, you know how to do the other. So, here's the idea. Again, it goes back to something you learned at the very beginning of 18.02, but I don't think it took.
But maybe some of you will recognize it. So, what I'm going to do is write it out. Here's the term we're looking for, this one. Okay, and there are others; it's an infinite series that goes on forever. And now, to make the argument, I've got to put in one more term here. So, I'm going to put in ak cos(kt). I don't mean to imply anything about whether k is more or less than n. I could equally well have used bk sin(kt) here, and I could have put it there. This is just some other term. This is the an term, the one we want, and this is some other term.
Okay, all right, now, what you do to get the an is: you focus on the one you want, so it's dot, dot, dot, dot, dot, and you multiply everything through by cos(nt). So, this becomes ak cos(kt) cos(nt); of course, that term gets multiplied, too. But, the one we want also gets multiplied, and it becomes an (cos(nt))^2. And now, I hope you can see what's going to happen. Now, oops, I didn't multiply the f(t); sorry. It's the oldest trick in the book.
I now integrate everything from minus pi to pi; so that I don't endlessly recopy, I'll indicate the integration in yellow chalk, and you are left to your own devices. This is definitely a colored chalk type of course. Okay, so, you want to integrate from minus pi to pi? Good. Just integrate everything on the right-hand side, also from minus pi to pi; plus these dots, just to indicate that the other terms are out there, too. And now, what happens? What's this? Zero. Every term is zero, because of the orthogonality relations. They are all of the form: a constant times cos(nt) times something different from cos(nt), namely sin(kt), cos(kt), or even that constant term.
All of the other terms are zero, and the only one which survives is this one. And, what's its value? The integral from minus pi to pi of cosine squared; I put that up somewhere, it's right here. It is pi. So, this term turns into an times pi: the an gets dragged along, and the integral of the square of the cosine turns out to be pi. And so, the end result is that we get a formula for an. What is an? Well, an times pi equals, since all these other terms are zero, nothing but the left-hand side. And therefore, an * pi = integral from -pi to pi of [f(t) cos(nt) dt].
But, that's an times pi. Therefore, if I want just an, I have to divide by pi, and that's the formula for the coefficient an. The argument is exactly the same if you want bn, but I will write it down for the sake of completeness, as they say, and to give you a chance to digest what I've done, you know, 30 seconds to digest it: bn = (1/pi) times the integral from -pi to pi of [f(t) sin(nt) dt]. And, that's because the argument is the same, and the integral of (sin(nt))^2 is also pi; so there's no difference there. Now, there's only one little caution. You have to be a little careful. This formula is for n = 1, 2, and so on, and this one also for n = 1, 2, and so on; unfortunately, the constant term is a slight exception. We'd better look at that specifically, because if you forget it, you can fall into gross, gross errors.
How about the constant term? Suppose I repeat the argument for that in miniature. There is a constant term, plus other stuff; a typical piece of other stuff is an an cosine term, let's say. How am I going to get that constant term? Well, think of the constant as being multiplied by cos(0t), which is one. So, that suggests I should multiply by one.
In other words, what I should do is simply take the integral from -pi to pi of [f(t) dt]. What's the answer? Well, this integrated from minus pi to pi is how much? It's 2 pi c0, right? And, the other terms all give me zero. Every other term is zero because if you integrate cos(nt) or sin(nt) over a complete period, you always get zero.
There is as much area above the axis as below. Or, you can look at the special cases. Anyway, you always get zero, and it's the same thing with the sine terms. So, the answer is that c0 is a little special. You don't just put n = 0 in the formula, because then you would lose a factor of two; c0 should be 1/(2pi) times this integral. Now, there are two kinds of people in the world: the ones who learn two separate formulas, and the ones who just learn one formula and a special notation. So, what most people do is they say: look, I want this always to be the formula for an, even for n = 0.
That means, even when n = 0, I want this to be the formula. Well, then you are not going to get the right leading term: instead of getting c0, you're going to get twice it. And therefore, the Fourier series isn't written the way I first wrote it. If you want an a0 there, calculated by this formula, then you've got to write the leading term not as c0 but as a0 / 2. If I have to give you advice, I think you will be happiest remembering a single formula for the an's and bn's, in which case you have to remember that the constant leading term is a0 / 2. Otherwise, you have to learn a special formula for the leading coefficient, namely with 1/(2pi) instead of 1/pi.
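The coefficient formulas, and the a0/2 caution, can be checked numerically. Here is a Python sketch; the midpoint-rule integrator and the particular test input f, chosen so that its coefficients are known in advance, are my own illustrative choices:

```python
import math

def fourier_coeff(f, n, n_steps=100_000):
    """Return (a_n, b_n) = (1/pi) * integral over [-pi, pi] of f(t)cos(nt), f(t)sin(nt)."""
    h = 2 * math.pi / n_steps
    a = b = 0.0
    for k in range(n_steps):
        t = -math.pi + (k + 0.5) * h  # midpoint rule sample point
        a += f(t) * math.cos(n * t) * h
        b += f(t) * math.sin(n * t) * h
    return a / math.pi, b / math.pi

# Hypothetical input with known coefficients: constant term 1.5, a_2 = 2, b_5 = 0.5.
f = lambda t: 1.5 + 2 * math.cos(2 * t) + 0.5 * math.sin(5 * t)

a0 = fourier_coeff(f, 0)[0]
print(a0 / 2)                  # ~ 1.5: the constant term is a0/2, not a0
print(fourier_coeff(f, 2)[0])  # ~ 2.0
print(fourier_coeff(f, 5)[1])  # ~ 0.5
```

Note that applying the single formula at n = 0 really does give a0 = 3, twice the constant term, which is exactly the factor-of-two trap the lecture warns about.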
Well, am I really going to calculate a Fourier series in four minutes? Not very likely, but I'll give it a brave college try. Anyway, you will be doing a great deal of it, and your book has lots and lots of examples; too many, in fact. It ruined all the good examples by calculating them for you. But, I will at least outline. Do you want me to spend three minutes outlining a calculation, just so you have something to work on in the next boring class you are in? Let's see; I'll just put a few key things on the board. I would advise you to sit still for this; otherwise you're going to hack at it and take twice as long as you should, even though I know you've been up till 3:00 in the morning doing your problem set.
Cheer up. I got up at 6:00 to make up the new one, so we're even. This should be zero here. So, here's minus pi; here's pi; here's one and negative one. The function starts out like that, and now, to be periodic, it has to continue on in the same way. I think that's enough of its path through life to indicate how it runs. This is a typical square wave function, as it's sometimes called. It's an odd function; it goes equally above and below the axis. Now, the integrals, when you calculate them: the an = 0, and you will get that with a little hacking. I'm much more worried about what you'll do with the bn's. Also, next Monday you'll see intuitively why the an are zero, in which case you won't even bother trying to calculate them. How about the bn, though? Well, you see, the function is discontinuous. This is my input: my f(t) is that orange discontinuous function.
The bn is going to be 1/pi times an integral I have to break into two parts. In the first part, the function is negative one, and there I will be taking the integral from -pi to 0 of [-1 * sin(nt) dt]. The other part, I integrate from zero to pi of what? Well, f(t) = +1, and so I simply integrate sin(nt) dt. Now, each of these is a perfectly simple integral; the only question is how you combine them. After you calculate it, this part will be (1 - cos(n pi)) / n, and this part will also turn out to be (1 - cos(n pi)) / n.
And therefore, the answer will be bn = (1/pi) * 2(1 - cos(n pi)) / n. Now, what's cos(n pi)? It's minus one if n is odd, and plus one if n is even. Either you can work with it that way, or you can combine the two cases into the single expression (-1)^n, which takes care of both of them. But the way the answer is normally expressed: if n is even, the factor (1 - cos(n pi)) is zero.
If n is odd, it's two. So, bn is 4/(n pi), or it's zero, and the final series is the sum of those coefficients times the appropriate, cosine or sine? Sine terms, because the cosine coefficients all turned out to be zero. I'm sorry I didn't have the chance to do that calculation in detail, but I think that's enough of a sketch for you to be able to do the rest of it yourself.
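To see the result in action, here is a Python sketch of the square wave and its partial Fourier sum; the truncation level N is an arbitrary choice of mine, and away from the jumps the partial sums do converge to the wave:

```python
import math

def square_wave(t):
    """The lecture's odd square wave: -1 on (-pi, 0), +1 on (0, pi), period 2 pi."""
    s = math.sin(t)
    return 1.0 if s > 0 else -1.0 if s < 0 else 0.0

def partial_sum(t, N):
    """Sum of (4 / (n pi)) sin(nt) over odd n up to N, the truncated Fourier series."""
    return sum(4 / (n * math.pi) * math.sin(n * t) for n in range(1, N + 1, 2))

t = math.pi / 2
print(square_wave(t), partial_sum(t, 2001))  # 1.0, and roughly 1.0
```

At t = pi/2 the series becomes (4/pi)(1 - 1/3 + 1/5 - ...), which is the Leibniz series for pi/4 scaled up to 1, a nice consistency check on the coefficient 4/(n pi).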
© 2001–2014
Massachusetts Institute of Technology
Your use of the MIT OpenCourseWare site and materials is subject to our Creative Commons License and other terms of use.