Topics covered: Partition function (Q) — many particles
Instructor/speaker: Moungi Bawendi, Keith Nelson
ANNOUNCER: The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: So you've been learning about statistical mechanics, the microscopic underpinnings of thermodynamics. And last time we ended up working with the canonical partition function, and showing that once you have the canonical partition function, you have basically every thermodynamic quantity that you've learned how to calculate thus far in the course. But before I start: in the notes for lecture 25, there were a couple of typos that actually didn't make any difference to the final result, because they cancel each other out. It's been corrected in the web version. Near the bottom of the page -- because we're going to be mostly using these notes today -- where it says A is equal to u minus TS: that should be a plus here, u plus T, dA/dT at constant volume. And then further down, the next line, minus u over T squared -- this should be a minus here, minus one over T, dA/dT at constant V, et cetera. So luckily these two typos cancel each other out and the result is correct. But there it is. OK. So last time, then, you saw how from the canonical partition function you could get something like the energy. You wrote down an equation: the energy is equal to k T squared, d log Q/dT, at constant volume and number of particles. And then you notice that the important variables are the volume, the number of particles, and the temperature. And we know that every thermodynamic quantity has a set of natural variables. For the Gibbs free energy it was the pressure, the temperature, and the number of particles. And for the volume, the number of particles, and the temperature, we know that that's the Helmholtz free energy. So the natural thermodynamic variable we would associate with that set of constraints is the Helmholtz free energy. So it becomes interesting, then, to figure out: how can we write the Helmholtz free energy in terms of the canonical partition function?
They seem to have the same set of natural variables.
And that's what you started doing, and what we'll do today. And that's where the typo comes in. So let's write what we know about the Helmholtz free energy in terms of the energy u. I already wrote it up there: u minus TS. That's the definition of the Helmholtz free energy. And from the fact that dA is equal to minus p dV, minus S dT, plus mu dN, we can read out what S is in terms of the Helmholtz free energy. S is just minus dA/dT at constant V and N. So we can plug that in here: u minus T, dA/dT -- with a minus sign there, so that becomes a plus, and hence the typo -- at constant V and N. OK. So now we have an equation that relates u and the partition function. We want an equation that relates A and the partition function. If we rearrange this slightly, we can get that u, then, is equal to A minus T, dA/dT, at constant V and N. So the question that we could ask ourselves is: is there a function of A that kind of looks like that? And I know the answer, so I'm going to give it to you. A function of A that looks like A minus something times the derivative with respect to that something -- we should try to look at something like the derivative d(A/T)/dT, at constant V and N. So if you take that derivative, you end up with one over T, dA/dT, minus A over T squared. But if you look at this result here, and you multiply by T squared, there's the A, and there's the T times dA/dT -- we just have the sign wrong. So multiply this by minus T squared: minus T squared times d(A/T)/dT gives A minus T, dA/dT, and you have the same thing as here. So that tells us, then, that u can be written as minus T squared, d(A/T)/dT, at constant volume and number. It's a nice way to relate those two energies. And we have an expression for u in terms of the canonical partition function. And we can then replace it in here.
And that gets us, then, that minus T squared, d(A/T)/dT, at constant number and volume, is equal to u. And u, we saw, was equal to k T squared, d log Q/dT, with V and N fixed. And then we can start -- the T squareds disappear here. And then we have d(A/T)/dT on this side, and minus k, d log Q/dT on that side. Let's just take the integral. You take the integral of both sides, and that gets us that A over T is equal to minus k log Q, plus the constant of integration. And we can take that constant of integration to be whatever we want -- energy is all relative to some reference point -- so we can take it to be zero. A is equal to minus k T log Q.
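As a quick numerical check of that derivation, here is a minimal sketch for a two-level system in reduced units with k = 1 (the level spacing is an illustrative made-up number): computing u as kT squared d(ln Q)/dT, or as minus T squared d(A/T)/dT with A = -kT ln Q, gives the same answer.

```python
import math

# Two-level system, levels at 0 and eps, reduced units (k = 1).
# eps = 1.0 is an illustrative choice, not from the lecture.
def Q(T, eps=1.0):
    return 1.0 + math.exp(-eps / T)

def A(T):
    # the relation derived above: A = -kT ln Q (k = 1 here)
    return -T * math.log(Q(T))

def u_from_A(T, dT=1e-5):
    # u = -T^2 d(A/T)/dT, by a centered finite difference
    return -T**2 * (A(T + dT)/(T + dT) - A(T - dT)/(T - dT)) / (2 * dT)

def u_from_Q(T, dT=1e-5):
    # u = k T^2 d(ln Q)/dT, by a centered finite difference
    return T**2 * (math.log(Q(T + dT)) - math.log(Q(T - dT))) / (2 * dT)

print(u_from_A(1.0), u_from_Q(1.0))  # both ≈ 0.269 = exp(-1)/(1 + exp(-1))
```

The two routes agree because A = -kT ln Q makes -T squared d(A/T)/dT identically equal to kT squared d(ln Q)/dT.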
That's a pretty neat result. There's the microscopic underpinning of things. Everything we know about atoms, and energies, and states, and even quantum mechanics, and all sorts of things goes into here. All the microscopic information goes in here. And there's a thermodynamic variable that only cares about the macroscopic state of matter. It doesn't care that there are atoms there. It just cares that you know the pressure, the volume, the temperature, or any couple of variables. And you get a direct equality, without any derivatives or anything, between the macroscopic and the microscopic. So this is really pretty remarkable.
And once you have A, you have everything. Just like before, when we were talking about things that depend on pressure and temperature, we said once you have G, the Gibbs free energy, you have everything. It's the same thing here. We've got A, we've got u. We have everything. We can calculate every single thermodynamic variable from then on. For instance, if we want to have the entropy: S is equal to u over T, minus A over T. Where did I get that? I got that from way up here. A is equal to u minus TS. I solved for S in terms of A and u. I've got expressions in terms of the canonical partition function for A and u. Plug that in there: get k log Q, plus k T, d log Q/dT, at constant number and volume. Et cetera. You can get the pressure. You can get the pressure from the fact that the pressure, up here, is minus the derivative of A with respect to volume. So you take the derivative of A with respect to volume here, you get the pressure. If you want the chemical potential: the chemical potential, from the fundamental equation up here, is the derivative of A with respect to the number of particles. You take the derivative of A with respect to the number of particles, you get the chemical potential in terms of the canonical partition function. So you've got everything. You've got u. You've got A. You've got p. You've got S. You've got H. You've got G. Name your variable, you've got it. You want the heat capacity? I can get you the heat capacity in terms of the partition function. Any questions?
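As one illustration of "name your variable": with ln Q in hand as a function of T, V, N, each quantity is one derivative away. A minimal sketch in reduced units (k = 1), using an assumed toy ln Q = N ln V that keeps only the volume dependence of an ideal gas, already yields the ideal gas law through p = -(dA/dV) at constant T.

```python
import math

# Reduced units (k = 1); N = 100 is an illustrative particle count.
N = 100

def lnQ(T, V):
    # assumed toy model: only the translational volume dependence
    return N * math.log(V)

def A(T, V):
    # A = -kT ln Q, with k = 1
    return -T * lnQ(T, V)

def pressure(T, V, dV=1e-6):
    # p = -(dA/dV) at constant T and N, centered finite difference
    return -(A(T, V + dV) - A(T, V - dV)) / (2 * dV)

T, V = 2.0, 5.0
print(pressure(T, V), N * T / V)  # both ≈ 40.0: p = NkT/V, the ideal gas law
```

Any other variable (S, mu, heat capacity) comes out the same way, as a different derivative of the same A.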
Alright, so let's go on. Let's look a little bit closer at the entropy, in terms of the microscopic theory here. So let's start with S is A minus u, over T. And let's see how far we can go. Alright. Let's write A and u in terms of things that we know. And let me just go to the end, to tell you where I'm going and why I'm going to make certain changes in my math here. So what we're going to get at the end is that S is this very nice quantity, which is minus k, sum over i, pi log pi. Where the pi's are the microstate probabilities. The probability that your state is in a particular -- yes?
STUDENT: [UNINTELLIGIBLE]
PROFESSOR: S is u minus A. Yes. u minus A. Thank you. I should read my notes. OK. So we're going to equate S here in terms of the probabilities of microstates. And that's going to be -- remember how we talked about S being related to concepts of randomness, or order, or disorder. So the number of possible microstates is related to the amount of disorder that you might have. If you have a pure crystal, and every atom is in its place, then the number of microstates at zero kelvin is one. So the probability of being in that microstate is one, and the probability of being in every other microstate is zero. Alright, so there's a relationship that we're going to have here, which is going to be interesting. Let's derive it.
That means that we're going to want to have this somehow pop out of the equation. If you remember, pi is e to the minus Ei over kT, divided by the partition function Q. So somehow we're going to have to get this to come out. We're going to have to have these e to the minus Ei over kT's come out. OK. So let's try to get them to come out right away. We know u: a way to write u is the average energy. So let's work with S over k -- divide everything by k, so a one over kT sits in front of the u term. And then the u is going to be the average energy: sum over i of Ei, e to the minus Ei over kT, over Q. That's just writing the average energy -- the energy times the probability of having that energy, divided by the normalization. Plus, well, minus A over kT is just log Q. So we've managed to extract this guy out here. OK. Now, this sum here, this is a sum over i. I'd really like to have this sum come all the way out. So I've got to find a way to do that. And it would be nice if I could find a sum here. Maybe if I multiply by one -- if I write one in a funny way. Get a log here. You know, if I've got a couple of logs here, maybe I can combine them to get a ratio. So let's rewrite this Ei in a funny way. Ei is -- I'm just rewriting it, but in a strange way -- minus kT, times log of e to the minus Ei over kT. Because if I take the log of e to the minus Ei over kT, I get minus Ei over kT, and the minus kT in front takes care of the kT and the sign. So I just get Ei is equal to Ei. I'm just writing something that's pretty obvious here. And then we're going to take that expression and put it in here. So now I'm going to be able to write S over k is minus -- so the kT here cancels out this kT here -- sum over i, e to the minus Ei over kT, over Q -- that's that term right here -- and then I have the Ei, which is now the log of e to the minus Ei over kT.
This whole thing is in the parentheses here. And then I have the plus log Q.
OK. So I have this nice thing here: e to the minus Ei over kT, divided by Q. Well, that's looking an awful lot like this pi here, which is what I'm trying to get out. I'm trying to get these pi's to come out. So that's a nice thing to have here. Now if I could only have a pi coming out here somehow, that would be great too. And if I also got a sum here, a sum over i, I could sort of combine everything together. So I'm going to write one in a funny way. One is equal to the sum of all the probabilities. That's obvious. And I'm going to write this one here in a form that looks like this: sum over all i, e to the minus Ei over kT, divided by Q. Just writing one -- the sum of all probabilities is equal to one. And I'm going to take this one here, and I'm going to put it right in here. Now, log Q doesn't depend on i; it's just a number. So that allows me to rewrite: S over k is minus the sum over i, e to the minus Ei over kT, divided by the partition function, times the log of e to the minus Ei over kT, plus sum over i, e to the minus Ei over kT, over Q, times log Q.
OK, good. I'm going to take these summations -- now everything is a sum over i. This is great. And there's this factor here, e to the minus Ei over kT divided by the partition function, that appears in both. I can factor that out. And then I have these two logs, the log of this and the log of that, that I can also combine together. This is the log of this, and then there's a minus sign. So it's going to be the log of this minus the log of that, and that ends up as the log of a ratio. S over k is equal to minus the sum over i -- taking the summation out -- of e to the minus Ei over kT, over Q. That's this term, that term. And then I have the logs: log of e to the minus Ei over kT. This is a minus sign, this is a plus sign; that means I divide here by Q. This is great. Look: this e to the minus Ei over kT, over Q; e to the minus Ei over kT, over Q. What is that? That's just pi. This is pi. That's the probability of microstate i. And this is equal to minus sum over i, pi log pi. There's the k here. S is equal to minus k, sum over i, pi, log pi. Another great result.
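That chain of manipulations can be checked numerically. A minimal sketch in reduced units (k = 1), with a handful of made-up microstate energies: for Boltzmann probabilities, minus the sum of pi ln pi comes out equal to (u minus A)/T, the thermodynamic entropy we started from.

```python
import math

# Reduced units (k = 1); the microstate energies are illustrative.
E = [0.0, 0.5, 0.5, 1.3]
T = 1.0

Q = sum(math.exp(-Ei / T) for Ei in E)         # canonical partition function
p = [math.exp(-Ei / T) / Q for Ei in E]        # Boltzmann probabilities p_i

S_gibbs = -sum(pi * math.log(pi) for pi in p)  # S/k = -sum_i p_i ln p_i
u = sum(pi * Ei for pi, Ei in zip(p, E))       # average energy
A = -T * math.log(Q)                           # A = -kT ln Q
S_thermo = (u - A) / T                         # S = (u - A)/T

print(S_gibbs, S_thermo)  # the two expressions agree
```

The agreement is exact (up to rounding), because the derivation above is just an algebraic rearrangement of (u - A)/T.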
Now, if your system is isolated -- if you have an isolated system, that means that the energy -- you've got your boundary, and the boundary doesn't let energy go in and out, doesn't let the number of particles go in and out. Every single microstate is going to have the same energy, if the system is isolated. The only thing you're going to change is the positions of the particles, or their vibrational energy, or something. But let's just stick with translation. You're just going to change the positions. You're not going to change the energy. So if the system is isolated, then the degeneracy of your energy is just the number of ways that you can flip the positions around -- indistinguishable ways. So the probability is just one over the number of possible ways of switching positions around for your particles. So for an isolated system, all microstates have the same energy. We can set that equal to zero as our reference point. And the probability of being in any one microstate is just one over the number of possible ways of rearranging things. So the probability of being in any one microstate is one over the number of microstates. They all have the same energy. Where this, capital omega, is the degeneracy. So now when you plug that in here: S is minus k, sum over all microstates from one to omega, of one over omega, log of one over omega. This log is a number; it can come out. The sum from one to omega, of one over omega, is omega times one over omega. It's one. The one over omega -- the log of one over omega is minus the log of omega. The negative signs cancel out. k log capital omega, for an isolated system. You've seen this before, probably. This is called the Boltzmann equation. And that is what is on his tombstone. If you go to -- I think it's in Germany somewhere. Is it in Germany? Do you guys know?
STUDENT: Austria.
PROFESSOR: Austria, thank you. I knew it was that part of the world. If you go to Austria, to some famous cemetery, and go look for the tombstone that says S is equal to k log omega -- that's where Boltzmann is buried. OK. So this picture of -- yes, question?
STUDENT: Can you repeat the argument for making that one over omega go away?
PROFESSOR: Making the one over omega go away here? Because you're taking the sum over all states, from i equals one to omega. So it's one over omega, plus one over omega, plus one over omega, and so on -- omega times. And this log is just a number, so it comes out. It's not in the sum. So the sum here is all by itself. Any other questions?
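That collapse is easy to verify numerically. A minimal sketch (the value of omega is an arbitrary made-up number): with all omega probabilities equal to one over omega, the Gibbs sum reduces to ln omega exactly.

```python
import math

# All Omega microstates equally likely: p_i = 1/Omega for each i.
# Omega = 10_000 is an illustrative choice.
Omega = 10_000
p = [1.0 / Omega] * Omega

# Gibbs form S/k = -sum_i p_i ln p_i ...
S_gibbs = -sum(pi * math.log(pi) for pi in p)

# ... collapses to the Boltzmann form S/k = ln Omega.
S_boltzmann = math.log(Omega)

print(S_gibbs, S_boltzmann)  # both ln(10000) ≈ 9.2103
```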
OK. That's why we talk about entropy as being this fundamental property that tells you about the number of available states. That's what it is. You've got this connection now between this variable, which is sort of hard to really intuitively understand, when you're talking about thermodynamics. And this is much easier to understand here, in terms of the available ways of distributing your energy, or your particles, in this case here. In different bins. OK any questions?
So the next topic is we're going to work a little bit with the partition functions. And see how when you have systems that have multiple degrees of freedom, where each degree of freedom has a different kind of energy. Let's say translation, rotation, vibration. Then you can have a partition function for each of these degrees of freedom. And whereas the energies of the degrees of freedom add up, the partition functions get multiplied. So it's the separation of the partition functions into subsystem partition functions.
So far, for the translational partition function, we've written that the system partition function is the molecular partition function to the Nth power, if you have distinguishable particles. And you have to divide by N factorial if you can swap particles without knowing the difference. N factorial is the number of ways of swapping N particles with each other, so that's for indistinguishable particles.
OK. So now let's say that -- Let's just make sure that -- Let's say that your energy of your system -- yes?
STUDENT: It says that when system's not isolated, [UNINTELLIGIBLE].
PROFESSOR: Where does it say that? In the notes?
STUDENT: [UNINTELLIGIBLE]
PROFESSOR: Let me see the notes here. What does it say? So that sentence has to do with this guy here. Basically it says that if you've got a huge number of particles, the average energy is a given number, and the fluctuations around that average are very small. And so the system behaves as if it's isolated. So when you have a system which is not isolated, then energy can come in and out of the system. So in principle, over time, you could have huge energy fluctuations, as energy comes out or energy comes in. And if you have a countable number of molecules in your system, then if one molecule suddenly captures a lot of energy, the whole system energy will go up a lot. But if you have ten to the 24th molecules, and one molecule suddenly gains a lot of energy, the system energy doesn't care. So big fluctuations in a small number of molecules don't make any difference to the total energy. And so you can still use this, then. It's good enough.
STUDENT: So how long is that good enough [UNINTELLIGIBLE]
PROFESSOR: Well, if it's countable -- a handful of things -- then it's not valid. If it's ten to the 24th, it's valid. And somewhere in between it breaks down, and there I don't know what the answer is. But usually, if you have a thermodynamic system, then it's big enough. That's what thermodynamics is about -- where you don't really care that you have atoms there. You don't even know you have atoms there. So it's big enough. Good question.
Alright, so now let's take our microstate energy here. And our microstate energy, Ei, is the sum of all the molecular energies. So it's the sum of the molecular energies epsilon, over all the molecules. And each one of these energies, if it's a molecular energy, can be indexed by a quantum number of some sort. So it would be the sum over all the molecules: n1 is some sort of quantum number for particle one, n2 is some sort of quantum number, n3 is some sort of quantum number. And then you have all the molecules: epsilon n1, plus epsilon n2, plus epsilon n3, et cetera. So this is the energy from molecule one, the energy from molecule two, the energy from molecule three, the energy from molecule four. And that little n tells you which energy state that molecule is in. And the sum of all these energies is your microstate energy. As long as you can write this this way, then you're allowed to write this this way.
So that basically means that they're not interacting with each other. They're independent from each other, in this case here. So now if I write Q in terms of the sum over all microstates i, e to the minus Ei over kT, I'm going to replace this Ei here with the sum over all these molecular energies here. And so the sum over all microstates, then, becomes the sum over all possible combinations of quantum numbers: n1, n2, n3, n4, et cetera. All the possible ways of getting molecule one in some state, all the possible ways of getting molecule two in some quantum number state. And then e to the minus -- instead of capital Ei, I'm going to write the molecular energies: epsilon n1, plus epsilon n2, plus epsilon n3, plus et cetera, all divided by kT. Basically, I'm going to prove that this is a fine statement to make, as long as you can write the energy as a sum of component energies.
OK, so now this term here, e to the minus epsilon n1 over kT, only cares about this sum here. Epsilon n2 -- that's molecule number two -- only cares about this sum here. Molecule number three only cares about the sum over all possible quantum numbers connected to molecule number three. So I can factor all these sums into a product of sums. It's equal to the sum over quantum number n1, of e to the minus epsilon n1 over kT, times the sum over n2, of e to the minus epsilon n2 over kT, times the sum over n3, of e to the minus epsilon n3 over kT, et cetera. And now each one of these is basically the molecular partition function. These are all the possible energies of that molecule, and the sum over all possible energies of e to the minus epsilon over kT is the partition function for the molecule. So we have q for molecule one, times q for molecule two. And they're all the same, and there are N of them. So, q to the N. So we've just, in a way, clarified that the reason why we're able to write the system partition function in terms of the molecular partition functions -- N of them, to the Nth power -- is because we were able to separate out the energy here, in terms of the independent molecular energies. Where this is saying the molecules don't interact with each other, and are independent from each other. And then the one over N factorial comes in so that you don't overcount, for translation, the positions. They're indistinguishable.
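The factoring step can be verified by brute force. A minimal sketch in reduced units (k = 1), with a made-up set of evenly spaced molecular levels: summing e to the minus (epsilon n1 plus epsilon n2) over kT over every pair of quantum numbers gives exactly q squared for two distinguishable, non-interacting molecules.

```python
import math
from itertools import product

# Reduced units (k = 1); four illustrative molecular energy levels.
T = 1.0
levels = [0.0, 1.0, 2.0, 3.0]

# Brute force: sum over every combination (n1, n2) of quantum numbers.
Q_sum = sum(math.exp(-(e1 + e2) / T) for e1, e2 in product(levels, levels))

# Factored: the molecular partition function q, squared.
q = sum(math.exp(-e / T) for e in levels)
Q_factored = q ** 2

print(Q_sum, Q_factored)  # identical: two distinguishable molecules, no 1/N!
```

The same identity, applied N times, is what turns the system sum into q to the N.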
OK. So now we can have -- actually, this basic concept of the partition functions multiplying each other, if the energies add, is not limited to going from the molecular partition functions to the system partition function. You can also look at the molecular partition function itself. And if the molecular energy can be written in terms of a sum of energies of different degrees of freedom -- for instance, the energy of a molecule could be the energy of the vibration, plus the energy of the translation, plus the energy of the rotation, plus the energy in the magnetic field, plus the energy in the electric field, et cetera -- you have many energies that can add up with each other to create the molecular energy. And what we're going to be able to write, then, is that this molecular partition function itself can be written in terms of a product of partition functions for the sub-parts of the molecular energy.
So let me clarify that statement. If I can write my molecular energy, epsilon, as a translational energy, plus a vibrational energy, plus a rotational energy, plus every other little energy that you can think of that is independent of the others, then, using the same argument we used to show that Q is the product of the molecular partition functions, we can write that the molecular partition function, little q, is just the product of the degree-of-freedom partition functions: the translational partition function, times the vibrational partition function, times the rotational partition function, et cetera. If the energies add, then the partition functions multiply each other. And that's going to be powerful, because when we look at something like a polymer, or DNA, or a protein, in solution, and we look at the configurations possible for that polymer or that biopolymer, then we'll know that the energy of that polymer in solution is going to be -- we'll be able to approximate it as the energy of the configuration for that polymer -- the different ways that you can fold the protein, for instance -- plus everything else: the energy of everything else.
OK. So if the configurational energy can be separated from the sum of all vibration energies of all the bonds in that polymer. The way that polymer interacts with -- The way that the solution itself interacts with itself. Then if we can do this and we can do this approximation most of the time, then we'll be able to take the partition function for the polymer, and write it as the configurational partition function times the partition function for everything else. And we'll find that this part here will tend to factor out. We won't have to worry about it. And that this will carry all the important information that we'll need to know to see about changes in the system. Changes in Gibbs free energy, changes in the chemical potential. Everything will be related to this partition function. This subsystem. And because of the fact that you can factor them out, then this thing will end up dropping out. And this will become the important factor. OK. Let's do a quick example.
OK. So this is the example of having a very, very short polymer, containing three monomers, which can be in two configurations. And the energies are the same for these two configurations. So the configurational partition function, which you would generally write as the sum over configurations of e to the minus epsilon i over kT -- we would usually write it as e to the minus epsilon one over kT, plus e to the minus epsilon two over kT, plus et cetera. They're all the same energy, and there are two configurations. The degeneracy is two. So you can write this as the degeneracy of the configuration, times e to the minus the configurational energy over kT. And you can set your energy reference to be zero. You can choose whatever you want it to be, and zero is a good number, so that e to the zero is equal to one. So the configurational partition function is just the degeneracy, which is equal to two, in this case here.
So now let's calculate the molecular and canonical partition functions for an ideal gas of these molecules here. And it's usually interesting to use a lattice model as a guide. So in this lattice model here, you would divide space up into little cells -- drawn in two dimensions, but in reality it would be three dimensions -- and then you place your molecules in lattice sites, something like this. And then you end up counting the number of ways of arranging the molecules on the lattice. And let's say that we have N molecules that are in the gas phase. And the molecular volume -- i.e., the size of a lattice site -- is little v. That's the molecular volume. And N times little v is the volume occupied by the particles. Capital V is the total volume, and capital V over little v is the number of lattice sites. And we're going to assume that all particles, all molecules, have the same translational energy. It's an adequate approximation. We're going to set that equal to zero. So every molecule, at any position here, has the same E translational, and we set that equal to zero. So the translational partition function -- the molecular translational partition function -- well, there's only one energy, and it's zero. So we only care about the degeneracy for that molecule. That molecule could be in here, or it could be here, or here, or anywhere, right? The number of ways of putting that molecule on the lattice is the number of lattice sites available, which is the total volume divided by the molecular volume. Right.
So the total volume is the number of lattice sites times the volume of each lattice site. So the total volume divided by the small volume is the total number of lattice sites. And the number of choices of putting that one molecule is anywhere on the lattice. That's your degeneracy.
So now if I look at the total molecular partition function, it's going to be the multiplication of the configurational partition function and the translational partition function. At each site, the molecule could have two configurations.
So q, for the molecule, that's q translational times q configurational. I'm going to ignore all vibrations, rotations, et cetera. I'm going to assume that there are two degrees of freedom: the translational one, which is basically the positional one, and then the configurational one, which is internal to the molecule. So this one is capital V over little v. That's the degeneracy -- the degeneracy of placing the molecule on the lattice. And the configurational one is the degeneracy of how the molecule folds. Yes.
STUDENT: [UNINTELLIGIBLE]
PROFESSOR: The notes are wrong. So usually capital V is large and little v is small. So in the notes, if we have it in reverse, we should fix that. This is lecture 25, right? So we have q. No, the notes seem to be right. Total volume is capital V, molecular volume is little v. Where is it wrong in the notes?
STUDENT: [UNINTELLIGIBLE]
PROFESSOR: Oh, look at that. My notes are different than yours. My notes are right. OK, well, it's obviously right on the web, because this is the latest version. Alright, so flip those big V's and little v's then. Huh. I thought they were from the same pile. OK. So this is your molecular partition function. And then when you look at the system, the system partition function can also be separated into a translational part and a configurational part for the system. We know what you need to do: take all the molecular translational partition functions, to the Nth power -- the number of particles. But now you've overcounted, so you need to divide by N factorial. You also have the system partition function for the configurations, and that's q configurational to the N. Except here we don't need to divide by N factorial, because we're not overcounting here. The overcounting only happens when you're placing identical particles on a lattice, and you can swap them without making a difference. Here we're talking about configurations. When we're talking about configurations, we're not talking about placing identical particles in different spots. We're just looking at these two configurations here. And then the next particle has two configurations, and the next particle has two configurations.
So this is really important: this N factorial only comes into play when you're talking about the translational degree of freedom, not the other degrees of freedom. And now the total system partition function is the multiplication of these two. It ends up being capital V over little v, to the Nth power, over N factorial, times two to the N.
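Assembled numerically, that result reads Q = (V/v) to the N, over N factorial, times 2 to the N. A minimal sketch with made-up values for the particle count and lattice size, also checking the product against its log form:

```python
import math

# Illustrative numbers: N molecules on a lattice of V/v sites,
# each molecule having 2 equal-energy configurations.
N = 50
sites = 10_000   # V / v, the number of lattice sites

Q_trans = sites**N / math.factorial(N)  # translation: indistinguishable, so /N!
Q_conf = 2**N                           # configuration: no N!, no overcounting
Q_total = Q_trans * Q_conf              # energies add, partition functions multiply

# The same thing assembled in log form (lgamma(N+1) = ln N!).
lnQ = N * math.log(sites) - math.lgamma(N + 1) + N * math.log(2)

print(math.log(Q_total), lnQ)  # agree
```

Working with ln Q directly, as in the second form, is what you do in practice, since Q itself overflows quickly as N grows.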
OK. In general, you could extend this analysis to include vibrations, rotations, energy in a magnetic field, an electric field, et cetera. Any questions?
Alright, the next thing that you're going to do, then, is to use this concept, as a sort of example, as a way to begin to calculate things that you've already calculated before. For instance, if you look at an expansion of an ideal gas: can we now calculate the entropy change, not based on thermodynamics, but based on statistical mechanics -- on the microscopic description that we've just gone through? And it turns out that you can do that. You get the same answer. Thank god you get the same answer -- otherwise we'd be in big trouble. So I'm just going to set up the problem, because I won't have time to do it, and then you can do it next time, when Keith comes back. So the problem is going to be the usual problem of having a volume V1 of a gas on one side, and a vacuum on the other, with the gas expanding to volume V2. And asking: what is the entropy change? And you know from thermo that delta S, in this case here, is nR log V2 over V1, when the temperature is constant. That's going to be our answer. It has to be our answer. But this time, instead of knowing the answer, we're going to calculate it from microscopics.
So what you do is you start out with your initial state. You write down the molecular volume, little v. The total volume here is V1. You assume that all molecules have the same translational energy, and you set that equal to zero. The system translational energy is equal to zero. And so the entropy for this gas here is just the number of ways of placing the molecules on the lattice -- this model that we have of space being separated into little cells. And so S is k log omega, where omega is the number of ways of placing the molecules on the lattice. For one molecule, that's basically k log of capital V over little v, where capital V over little v is the number of lattice sites.
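Turning the crank on that setup ahead of time, as a minimal sketch in reduced units (k = 1, with made-up values for N and the volumes): with omega = (V/v) to the N over N factorial, the N factorial and the little v drop out of the difference, and delta S over k comes out as N ln(V2/V1) -- the lattice-model version of nR ln(V2/V1).

```python
import math

# Reduced units (k = 1); illustrative particle count and volumes.
N = 1000
v = 1.0                 # molecular volume = one lattice site
V1, V2 = 500.0, 1500.0

def ln_Omega(V):
    # ln[(V/v)^N / N!], using lgamma(N+1) = ln N! to avoid huge integers
    return N * math.log(V / v) - math.lgamma(N + 1)

dS_lattice = ln_Omega(V2) - ln_Omega(V1)  # Delta S / k = ln(Omega2 / Omega1)
dS_thermo = N * math.log(V2 / V1)         # N k ln(V2/V1), with k = 1

print(dS_lattice, dS_thermo)  # both N ln 3 ≈ 1098.6
```

Note that the N factorial term is the same before and after the expansion, which is why it cancels from the entropy change.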
OK. And what you're going to do next time, then, is start from here. Calculate what it is before. Calculate what it is after, and turn the crank, and get to the right answer. Then you're going to do the same thing for liquids. And that'll be it for this simple statistical mechanics.