Computational Science and Engineering I — Lecture 17: Finite Elements in 1D (part 1)
Instructor: Prof. Gilbert Strang
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR STRANG: OK, so. This is a big day. Part One of the course is completed, and I have your quizzes for you, and that was a very successful result, I'm very pleased. I hope you are, too. Quiz average of 85, that's on the first part of the course. And then the second part, so this is Chapter 3 now. Starts in one dimension with an equation of a type that we've already seen a little bit. So there's some more things to say about the equation and the framework, but then we get to make a start on the finite element approach to solving it. Of course in 1-D, finite differences are probably the way to go, actually. In one dimension the special success of finite elements doesn't really show up that much, because one great reason for their success is that they handle different geometries. They're flexible; you could have regions in the plane, three-dimensional bodies of different shapes. Finite differences doesn't really know what to do on a curved boundary in 2- or 3-D. Finite elements cope much better. So, we'll make a start today, more on Friday, on one-dimensional finite elements, and then a couple of weeks later will be the real thing, 2-D and 3-D.
OK, so, ready to go on Chapter 3? So, that's our equation, and everybody sees right away what the framework is — that's A transpose C A in some way; but what's new of course is that we're dealing with functions, not vectors. So we're dealing with, you could say, operators, not matrices. And nevertheless, the big picture is still as it was. So let me take u(x) to be the displacements again. So I'm thinking more of mechanics than electronics here. Displacements, and then we have e(x), which will be du/dx — that'll be the stretching, the elongation — and of course at that step you already see the big new item. The A, the one that gets us from u to du/dx, instead of being a difference matrix, which it has been, our matrix A is now a derivative. A is d/dx. So maybe I'll just take out that arrow. So A is d/dx. OK, but if we dealt OK with difference matrices, we're going to deal OK with derivatives. Then, of course, this is the C part, that produces w(x). And it's a multiplication by this possibly varying, possibly jumping, stiffness constant c(x).
So w(x) is c(x) times e(x) — that's our old w=Ce, this is Hooke's Law. I'll put Hooke's Law, or whoever's law it is. It's like a diagonal matrix; I hope you see that it's like a diagonal matrix. This function u is kind of like a vector, but a continuum vector instead of just a fixed, finite number of values. Then at each value we used to multiply by c_i; now our values are continuous with x, so we multiply by c(x). And then you're going to expect that, going up here, there's going to be an A transpose w = f, and of course that A transpose we have to identify, and that's the first point of the lecture, really. To identify what is A transpose.
What do I mean by A transpose? And I've got to say right away that the notation, writing a transpose of a derivative, is like, that's not legal. Because we think of the transpose of a matrix — you sort of flip it over the main diagonal — but obviously there's got to be something more to it than that. And so that's a central math part of this lecture: what's really going on when you transpose? Because then we can copy what's going on, and it's quite important to get it. Because the transpose — well, other notations and other words for it: the notation might be a star. Star would be way more common than transpose; I'll just stay with transpose because I want to keep pressing the parallel with A transpose C A. And the name for it would be the adjoint. And the adjoint method, the adjoint operator, those appear a lot. And you'll see them here in finite elements. So this is a good thing to catch on to.
Why? Why should the transpose or the adjoint of the derivative be minus the derivative? And by the way, just while we're fixing this, this is a key fact, for which we certainly have a very strong hint from center differences, right? If I think of derivatives, if I associate them with differences, the center difference matrix — so the A matrix, if it's centered, would be, just to remind us: a center difference has zeroes on the diagonal and minus one, one on either side. Minus one, one. Takes that difference at every row. Except possibly boundary rows. And of course as soon as you look at that matrix you see, yeah, it's antisymmetric. It's an antisymmetric matrix. So A transpose is minus A for center differences, and therefore we're not so surprised to see a minus sign up here when we go to the continuous case, the derivative. But we still have to say what it means. So that's what I'll do next, OK?
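The antisymmetry Strang points to can be checked directly in a few lines. This is just an illustrative sketch — the size n and the carrying of the interior -1, 0, 1 pattern to every row are my assumptions, not from the lecture:

```python
import numpy as np

# Center-difference matrix with the -1, 0, 1 pattern from the lecture
# (interior pattern carried to every row; real boundary rows would differ).
n = 5
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = 1.0   # the "+1" one step to the right
    A[i + 1, i] = -1.0  # the "-1" one step to the left

# Antisymmetric: flipping across the diagonal reverses the sign.
print(np.array_equal(A.T, -A))  # True
```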
So this is a good thing to know. And I was just going to comment, what would be the transpose of the second derivative? I won't even write this down. If the derivative transposed sort of flips its sign to minus, what would you guess for the transpose of the second derivative, our more familiar d second by dx squared? Well, we'll have two minus signs. So it'll come out plus. So second derivatives, even order derivatives, are sort of like symmetric guys. Odd order derivatives — first and third and fifth derivatives, well, God forbid we ever meet a fifth derivative, but the first derivative anyway — are antisymmetric. Except for boundary conditions. So I really have to emphasize that the boundary conditions come in. And you'll see them come in. They have to come in. OK, so what meaning can I assign to the transpose, or what was the real thing happening when we flipped the matrix across its diagonal? I claim that we really define the transpose by this rule. We know what inner products are. I'll do vectors first; we know about inner products, dot products, we know what the dot product of two vectors is.
So, this is the transpose of A. How am I going to define the transpose of A? Well, I look at the dot product of Au with w. I'll use a dot here for once; I may erase it and replace it. If I take the dot product of Au with w, then that equals, for all vectors u and w, the dot product of u with something. Because u is coming out. If I write out what the dot product is, I see u_1 multiplies something, u_2 multiplies something. And what goes in that little space? This is just an identity. I mean, it's like, you'll say no big deal. But I'm saying there is at least a small deal. OK. So if I write it this way, you'll tell me right away this should be the same as u transpose times something. And again, I'm asking for the same something on both lines. What is that something? A transpose w. Whatever A transpose is, it's the matrix that makes this right. That's really my message. That A transpose — the reason we flip the matrix across the diagonal is that it makes that equation correct. And I'm writing the same thing here. OK.
So again, if we know what dot products are, what inner products of vectors are, then A transpose is the matrix that makes this identity correct. And of course if you write it all out in terms of i, j, every component, you find it is correct. So that defines the transpose of a matrix. And of course it coincides with flipping across the diagonal. Now, how about the transpose of a derivative? OK, so I'm going to follow the same rule. Here A is now going to be the derivative, and A transpose is going to be whatever it takes to make this true. But what do I mean? Now I have functions, so I have to think again, what do I mean by the inner product, the dot product? So for this to make sense I need to say — and it's a very important thing anyway, and it's the right natural choice — what the dot product, or the inner product is a better word, of two functions is. Say e(x) and w(x). If I have two functions, what do I mean by their inner product?
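A quick numerical check of this defining identity, (Au) · w = u · (A transpose w). The random rectangular A, u, and w here are illustrative choices of mine, not anything from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # any rectangular matrix will do
u = rng.standard_normal(3)
w = rng.standard_normal(4)

lhs = np.dot(A @ u, w)         # (Au) . w
rhs = np.dot(u, A.T @ w)       # u . (A^T w)
print(abs(lhs - rhs) < 1e-12)  # True: A^T is exactly what makes these match
```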
Well, really I just think back to what we meant in the finite dimensional case: I multiplied each component of e by its w, and I added. So what am I going to do here? Maybe my notation should be parentheses with a comma — that would be better than a dot, for functions. So I have a function. I'm in function space now. We moved out of R^n, today, into function space. Our vectors have become functions. And now what's the dot product of two vectors? Well, what am I going to do? I'm going to do what I have to do. I'm going to multiply each e by its corresponding w, and now they depend on this continuous variable x, so that's e(x) times w(x). And what do I do now? Integrate. Here, I added e_i times w_i, of course. Over here I have functions. I integrate dx. Over whatever the region of the problem is. In our examples in 1-D, that'll be zero to one. If these were functions of two variables I'd be integrating over some 2-D region, but we're in 1-D today. OK, so you see that I'm prepared to say this now makes sense.
I now want to say, I'm going to let A be the derivative, and I'm going to figure out what A transpose has to be. So if A is the derivative, now comes the key step: what is the transpose? OK. So I look at the derivative, du/dx, with w — so that's this integral, zero to one, of du/dx times w(x) dx — so that's my left side. Now I want to get u by itself. I want to get the dot product, so I want to get another integral here that has u(x) by itself. Times something, and that something is what I'm looking for. That something will be A transpose w. Right? Do you see what I'm doing? This is the dot product, this (Au, w); I've written out what Au in an inner product with w is. And now I want to get u out by itself, and what it multiplies here will be A transpose w, and my rule will be extended to the function case and I'll be ready to go.
Now do you recognize — this is a basic calculus step — what rule of calculus am I going to use? We're back to 18.01. I have the integral of a derivative times w, and what do I want to do? I want to get the derivative off of u. What happens? What's it called? Integration by parts. Very important thing. Very important. You mustn't miss it; it's important in calculus. It sometimes gets introduced as a rule, or a trick to find some goofy integral, but it's really the real thing. So what is integration by parts? What's the rule? You take the derivative off of u, you put it on to the other one, just what we hoped for, and then you also have to remember that there is a minus. Integration by parts has a minus. And usually you'd see it out there, but here I've left more room for it there. So I have identified now A transpose w. If this is Au in an inner product with w, then this is u in an inner product with A transpose w — it had to be minus dw/dx. And so that one integration by parts brought out a minus sign. If I was looking at second derivatives there would be two integrations by parts somewhere; I'd have minus twice, I'd be back to plus. And you're going to ask about boundary conditions.
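The integration-by-parts identity being used here — ∫₀¹ u′w dx = −∫₀¹ u w′ dx + [uw]₀¹ — can be spot-checked numerically. The sample functions u = x² and w = cos x are arbitrary choices of mine:

```python
import numpy as np

def integrate(g, x):
    """Composite trapezoid rule for samples g on the grid x."""
    return float(np.sum((g[1:] + g[:-1]) * np.diff(x)) / 2)

x = np.linspace(0.0, 1.0, 200001)
u, du = x**2, 2 * x               # sample u and its derivative
w, dw = np.cos(x), -np.sin(x)     # sample w and its derivative

lhs = integrate(du * w, x)                                   # (Au, w) = integral of u' w
rhs = -integrate(u * dw, x) + (u[-1] * w[-1] - u[0] * w[0])  # (u, -w') + boundary term [uw]
print(abs(lhs - rhs) < 1e-8)  # True
```

The boundary term [uw]₀¹ is exactly the piece the lecture turns to next: the minus sign gives A transpose = −d/dx in the interior, and the boundary conditions have to kill that integrated term.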
And you're right to ask about boundary conditions. I even circled that, because that is so important. So what we've done so far is to get the interior of the interval right. Between zero and one, if A is the derivative, then A transpose is minus the derivative. That's all we've done. We have not got the boundary conditions yet. And we can't go on without that. OK, so I'm ready now to say something about boundary conditions. And it will bring up this square versus rectangular also, so we're getting the rules straight before we tackle finite elements.
OK, let me take an example of a matrix and its transpose. Just so you see how boundary conditions. Suppose I have a free fixed problem. Suppose I have a free fixed line of springs. What's the matrix A for that? Well - question? Yes.
AUDIENCE: [INAUDIBLE]
PROFESSOR STRANG: Yeah, that's right. Yes. When I learned it, it was also that stupid trick. So you would like me to put plus — can I put plus, whatever. What do you want me to call that? An integrated term? It would be, yeah. I even remember what it is. Ah, you do better than me. u times w at the endpoints, is that good? Yeah, I think so. So it's really this part that I'm now coming to. It's the boundary part that I'm now coming to. And let me say, I'm glad you asked that question, because I made it seem unimportant, where that's not true at all. The boundary condition is part of the definition of A, and part of the definition of A transpose. Just the way I'm about to say free fixed — I had to tell you that for you to know what A was. Until I tell you the boundary condition, you don't know what the boundary rows are. You only know the inside of the matrix. Or one possible inside. So I'm thinking my inside is going to be minus one, one, minus one, one, and so on, taking the differences. Minus one, one. But, oh no, let's see. So I'm doing free fixed. Is that right? Am I doing free fixed? OK, so am I taking free at the left end? Yes. Alright, so if I'm free at the left end and fixed at the right end, what's my A? We're getting better at this, right? Minus one, one. Minus one, one. Minus one, one. Minus one, and the one here gets chopped off. You could say, if you want, the fifth row of A_0 — remember A_0 was the hint on the quiz, where it had five rows for the full thing, free free. And then when an end got fixed, the fifth column got removed, and that's my free fixed matrix.
At the left hand end, at the zero end, it's got the differences in there. Difference, difference, difference. But here at the right hand end, it's the fixing, the setting of u — whatever it would be, u_5 to zero, or maybe it's u_4, because it's like one, two, three — setting u_4 to zero knocked that out. OK. All I want to do is transpose that. And you'll see something that we maybe didn't notice before. So I transpose it: minus one, one becomes a column, minus one, one becomes a column, minus one, one becomes a column, minus one, all there is. So, have I got it right? Yes. What's happened? A transpose — what are the boundary conditions going with A transpose? Let me say first, what were the boundary conditions with A? Those are boundary conditions on u. So A has boundary conditions on u. And A transpose has boundary conditions on w. Because A transpose acts on w, and A acts on u. So there was no choice.
So now, what was the boundary condition here? The boundary condition was u_4=0, right? That was what I meant by that guy getting fixed. And no boundary condition at u_0; it was free. Now, what are the boundary conditions that go with A transpose? And remember, A transpose is multiplying w. I'm going to put w here, so what are the boundary conditions that go with A transpose? This thing — nothing got knocked off there. The boundary condition that came up here for A transpose was w_0=0. That got knocked out, setting w_0 to zero. No surprise. Free fixed: this is the free end at the left, this is the fixed end at the right. Did you ever notice that the matrix does it for you? I mean, when you transpose that matrix it just automatically built in the correct boundary conditions on w. You started with the conditions on u, you transposed the matrix, and you've discovered what the boundary conditions on w are.
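Here is that free-fixed example as an explicit matrix; transposing it shows the w condition appearing automatically. The 4×4 size matches the lecture's example, but treating row 0 of A transpose as the balance at the free node is my reading of the board work:

```python
import numpy as np

# Free-fixed difference matrix: free at the left, fixed (u_4 = 0) at the right.
# The last column of the free-free A_0 has been chopped off.
A = np.array([
    [-1.,  1.,  0.,  0.],
    [ 0., -1.,  1.,  0.],
    [ 0.,  0., -1.,  1.],
    [ 0.,  0.,  0., -1.],
])

# The transpose acts on w.  Its first row has a single entry: the balance at
# the free node involves only one w, because w_0 = 0 is already built in.
print(A.T[0].tolist())        # [-1.0, 0.0, 0.0, 0.0]
print(abs(np.linalg.det(A)))  # 1.0 -- square and invertible
```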
And I'm going to do the same for the continuous problem. So, the continuous free fixed. OK, what's the boundary condition on u? If it's free fixed — I just want you to repeat this, on the interval zero to one, for functions u(x), w(x) instead of for vectors — what's the boundary condition on u, if I have a free fixed problem? u(1)=0. u at one equals zero. So this is the boundary condition that goes with A in the free fixed case. And this is part of A. That is part of A. I don't know what A is until I know its boundary condition. Just the way I don't know what this matrix is — it could have been A_0, it could have lost one column, it could have lost two columns, whatever. I don't know until I've been told the boundary condition on u. And then transposing is going to tell me, automatically, without any further input, the boundary condition that goes on the adjoint. So what's the boundary condition on w that goes as part of A transpose? Well, you're going to tell me. Tell me. w(0) should be zero. It came out automatically, naturally. This is a big distinction between boundary conditions.
I would call that an essential boundary condition. I had to start with it, I had to decide on that. And then this, I call a natural boundary condition. Or there are even two guys' names, which are associated with these two types. So maybe a first chance to just mention these names. Because you'll often see, reading some paper, maybe a little on the mathematical side, you'll see that word used. The guy's name with this sort of boundary condition is a French name. Not so easy to say, Dirichlet. I'll say it more often in the future.
Anyway, I would call that a Dirichlet condition, and you would say it's a fixed boundary condition. And if you were doing heat flow you would say it's a fixed temperature. Whatever. Fixed is really the word to remember there. OK, and then I guess I'd better give Germany a shot here, too. So this boundary condition is associated with the name of Neumann. So if I said a Dirichlet problem, a total Dirichlet problem, I would be speaking about fixed fixed. And if I spoke about a Neumann problem I would be talking about free free. And this problem is Dirichlet at one end, Neumann at the other. Anyway, so essential and natural. And now of course I'm hoping that that's going to make this boundary term go away.
OK, now I'm paying attention to this thing that you made me write: uw, evaluated at zero and one. OK, what happens there? Oh, right, this isn't bad, because it shows that there's a boundary condition — I've got some little deals going. But do you see that that becomes zero? Why is it zero at the top end, at one? When I take u times w at one, why do I get zero? Because u(1) is zero. Good. And at the bottom end? When I take uw at the other boundary, why do I get zero? Because of w. You see that w was needed. That w(0)=0 was needed because there was no controlling u(0). I had no control of u at the left hand end, because it was free. So the control has to come from w. And so w naturally had to be zero, because I wasn't controlling u at that left hand, free end. So one way or the other, the integration by parts is the key.
So that's said what I critically wanted to say about transposing, taking the adjoint — except I was just going to add a comment about square A versus rectangular. And this was a case of square, right? This free fixed case, this example I happened to pick, was square. A_0, the free free guy that was a hint on the quiz, was rectangular. The fixed fixed, which was also on the quiz, was also rectangular. It was what, four by three or something. This A is four by four. And what is especially nice when it's square? If our problem happens to give a square matrix — in the truss case, if the number of displacement unknowns happens to equal the number of bars, so m equals n — I have a square matrix A. And this guy's invertible, so it's all good. Oh, that may be the whole point. That if it's rectangular I wouldn't talk about its inverse. But this is a square matrix, so A itself has an inverse. Instead of having, as I usually have, to deal with A transpose C A all at once — let me put this comment, because it's just a small one, up here. Right under these words. Square versus rectangular.
Square A, and let's say invertible, otherwise we're in the unstable case — so you know what I mean: in the network problems, the number of nodes matches the number of edges; in the spring problems, we have free fixed situations. Anyway, A comes out square, whatever the application. If it comes out square, what is especially good? What's especially good is that it has an inverse. So that in this square case, K inverse — the inverse of A transpose C A — can be split. This allows us to separate, to do what you better not do otherwise. In other words, there are three steps, which usually mash together so we can't separate them, and we have to deal with the whole matrix at once. In this square case, they do separate. And so that's worth noticing. It means that we can solve backwards; we can solve these three one at a time. The inverses can be done separately. When A and A transpose are square, then from this equation I can find w. From knowing w I can find e, just by inverting C. From knowing e I can find u, just by inverting A. You see the three steps? You can invert that, and then you can invert the middle step, and you can invert A. And you've got u. So the square case is worth noticing. It's special enough that in this case we would have an easy problem.
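The three-step backward solve can be sketched for the free-fixed matrix; the spring constants and forces below are made-up numbers, not from the lecture:

```python
import numpy as np

# Free-fixed difference matrix (free left end, fixed right end).
A = np.array([
    [-1.,  1.,  0.,  0.],
    [ 0., -1.,  1.,  0.],
    [ 0.,  0., -1.,  1.],
    [ 0.,  0.,  0., -1.],
])
C = np.diag([1.0, 2.0, 3.0, 4.0])   # hypothetical spring constants c_i
f = np.array([1.0, 1.0, 1.0, 1.0])  # hypothetical external forces

# Statically determinate: A is square and invertible, so the steps separate.
w = np.linalg.solve(A.T, f)   # balance:   A^T w = f
e = np.linalg.solve(C, w)     # Hooke:     w = C e
u = np.linalg.solve(A, e)     # geometry:  e = A u

# The one-at-a-time answer agrees with the all-at-once K = A^T C A solve.
K = A.T @ C @ A
print(np.allclose(K @ u, f))  # True
```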
And this case is called — for trusses and mechanics, there's a name for this, and unfortunately it's a little long. Statically — that's not the key word — determinate, that is the key word. Statically determinate: that goes with square. And you can guess, for the rectangular matrix A, what word would I use, what's the opposite of determinate? It's got to be indeterminate. Rectangular A will be indeterminate. And all that is referring to is the fact that in the determinate case the forces determine the stresses. You don't have to, as we usually would, get all three together, mix them, invert, go backwards, all that. You can just do them one at a time in this determinate case. And now I guess I'd better say — so here's the matrix case, but in this chapter I always have to do the continuous part. So let me just stay with free fixed, and what is this balance equation? So this is my force balance. I didn't give it its moment, but its moment has come now.
So the force balance equation is -dw/dx, because A transpose is minus the derivative, equals f(x). And my free boundary condition, my free end, gave me w of — what was it? The Neumann guy gave me w(0)=0. And what's the point? Do you see what I'm saying? I'm saying that this free fixed is a beautiful example of determinate. Square matrix A in the matrix case, and the parallel in the continuous case. I can solve that for w(x). I can solve directly for w(x), without involving — you see, I didn't have to know c(x). I hadn't even got that far. I'm just going backwards now. I can solve this, just the way I can invert that matrix. Inverting the matrix here is the same as solving the equation. You see I have a first order equation, a first derivative? I mean, it's so simple, right? It's the equation you solved in the final problem of the quiz, where the f was a delta function. It was simple because it was a square, determinate problem with one condition on w. When both conditions are on u, then it's not square any more. OK for that point?
Determinate versus indeterminate. OK. So that's sort of it, and I could do examples. Maybe I've asked you on the homework to take a particular f(x). I hope it was a free fixed problem, if I was feeling good that day, because free fixed, you'll be able to get w(x) right away. If it was fixed fixed, then I apologize, it's going to take you a little bit longer to get to w, to get to u. OK. But this, of course — I just integrate. Inverting a difference matrix is just integrating a function. Good. OK, so this lecture so far was the transition from vectors and matrices to functions and continuous problems. And then of course we're going to get deep into that, because we've got partial differential equations ahead. But today let's stay in one dimension and introduce finite elements. OK. Ready for finite elements? So that's now a major step.
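That backwards integration can be sketched for one concrete free-fixed case. The choices f(x) = 1 and c(x) = 1 are illustrative (they're mine, not the lecture's); for them the exact solution is u(x) = (1 − x²)/2:

```python
import numpy as np

# Free-fixed bar: solve -dw/dx = f with w(0) = 0 (free end),
# then w = c e, then du/dx = e with u(1) = 0 (fixed end).
x = np.linspace(0.0, 1.0, 100001)
f = np.ones_like(x)   # illustrative load
c = np.ones_like(x)   # illustrative stiffness

def cumtrap(g, x):
    """Cumulative trapezoid integral of g, starting from x[0]."""
    out = np.zeros_like(g)
    out[1:] = np.cumsum((g[1:] + g[:-1]) * np.diff(x) / 2)
    return out

w = -cumtrap(f, x)    # w(x) = w(0) - integral of f; w(0) = 0 at the free end
e = w / c             # invert C:  e = w / c
u = cumtrap(e, x)     # antiderivative of e
u -= u[-1]            # shift so that u(1) = 0 at the fixed end

print(np.allclose(u, (1 - x**2) / 2, atol=1e-8))  # True
```

The three lines for w, e, u are the continuous version of inverting A transpose, C, and A one at a time.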
Finite differences — maybe I'll mention those, probably in this afternoon's review session, where I'll just be open to homework problems. I'll say something more about truss examples, and I might say something about finite differences for this. But really, it's finite elements that get introduced right now. So let me do that. Finite elements, and introducing them. OK. So the prep, the getting ready for finite elements, is to get hold of something called the weak form of the equation. So that's going to be a statement of the equation — the finite elements aren't appearing yet. Matrices are not appearing yet. I'm talking about the differential equation. But what do I mean by this weak form? OK, let me just go for it directly.
You see the equation up there? Let me copy it. So here's the strong form. The strong form is, you would say, the ordinary equation. The strong form is what our equation is: minus d/dx of c(x) du/dx = f(x). OK, that's the strong form. That's the equation. Now, how do I get to the weak form? Let me just go to it directly, and then over the next days we'll see why it's so natural and important. If I go for it directly, what I do is this. I multiply both sides of the equation by something I'll call a test function. And I'll try to systematically use the letter v for the test function. u will be the solution. v isn't the solution; v is like any function that I test this equation with. I'm just multiplying both sides by the same thing, some v(x). Any v(x). We'll see if there are any limitations, OK? And I integrate. OK. So you're going to say no problem. I integrate from zero to one. Alright, this would be true for any v. So now I'll erase the words strong form, because the strong form isn't on the board anymore.
It's the weak form now that we're looking at. And this is for any — and I'll put "any" in quotes just because eventually I'll say a little more about this. I'll write the equation this way. And you might think, OK, if this has to hold for every v(x), I could let v(x) be concentrated in a little area. And this would have to hold; then I could try another v(x), concentrated around other points. You can maybe feel that if this holds for every v(x), then I can get back to the strong form. If this holds for every v(x), then somehow that had better be the same as that. Because if this was f(x)+1 and this is f(x), then I wouldn't have the equality any more. Should I just say that again? At this point it's just a feeling, that if this is true for every v(x), then that part had better equal that part. That'll be my way back to the strong form. It's a little bit like climbing a hill. Going downhill was easy: I just multiplied by v and integrated. Nobody objected to that. I'm saying I'll be able to get back to the strong form with a little patience. But I like the weak form. That's the whole point. You've got to begin to like the weak form. If you begin to take it in and think, OK.
Now, why do I like it? What am I going to do to that left side? The right side's cool, right? It looks good. Left side does not look good to me. When you see something like that, what do you think? Today's lecture has already said what to think. What should I do to make that look better? I should, yep. Integrate by parts. If I integrate by parts, you see what I don't like about it as it is, is two derivatives are hitting u, and v is by itself. And I want it to be symmetric. I'm going to integrate this by parts, this is minus the derivative of something. Times v. And when I integrate by parts, I'm going to have, it'll be an integral. And can you integrate by parts now? I mean, you probably haven't thought about integration by parts for a while. Just think of it as taking the derivative off of this, so it leaves that by itself. And putting it onto v, so it's dv/dx, and remembering the minus sign, but we have a minus sign so now it's coming up plus. That's the weak form. Can I put a circle around the weak form? Well, that wasn't exactly a circle. OK. But that's the weak form.
For every v — I could give you a physical interpretation, but I won't do it just this minute. This is going to hold for any v. That's the weak form. OK. Good. Now, why did I want to do that? The person who reminds me about boundary conditions should remind me again. When I did this integration by parts, there should have been also — what's the integrated part now, that has to be evaluated at zero and one? This c, so it's that times that, right? It's c(x) du/dx times v(x). Maybe minus. Yeah, you're right. Minus. Good. What do I want this to come out? Zero, of course. I don't want that term hanging around. Alright, so now I'm doing this free fixed problem still. So what's the deal on the free fixed problem? Well, let's see. OK, I've got the two ends and I want them to be zero.
OK, now at the free end, I'm not controlling v. I wasn't controlling u and I'm not going to be controlling its friend v. So this had to be zero. So this part will be zero at the free end — that boundary condition has just appeared again naturally. I had to have it, because I had no control over v. And what about at the fixed end? What was the fixed end? That's where u was zero. I'm going to make v also zero. So when I said any v(x), I'd better put in: with v=0 at the Dirichlet point, at the fixed end. I need that. I need to know that v is zero at that end, where I had u=0. Here's why I'm fine. So I'm saying that any time I have a Dirichlet condition, a fixed condition that tells me u — I think of v, and you'll begin to think of v, as a little movement away from u. u is the solution.
Now, remind me, this was free fixed. So the u might have been something like this. I'll just draw that. That's my u. This guy was fixed, right? Now, I'm thinking of v's as — the letter v is very fortunate, because it stands for virtual displacement. A virtual displacement is a little displacement away from u, but it has to satisfy the zero, the fixed condition that u satisfied. In other words, the little virtual v can't move away from zero there. So I get that this term is zero at the fixed end. OK, that's the little five minute time out to check the boundary condition part. The net result is that that term's gone and I've got the weak form that I wanted. OK, three minutes to start to tell you how to use the weak form.
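The finished weak form can be spot-checked numerically: with the illustrative choices c = 1 and f = 1, the free-fixed solution is u = (1 − x²)/2, and ∫ c u′v′ dx should equal ∫ f v dx for any v with v(1) = 0. Here v = cos(πx/2) is one arbitrary admissible test function of mine:

```python
import numpy as np

def integrate(g, x):
    """Composite trapezoid rule for samples g on the grid x."""
    return float(np.sum((g[1:] + g[:-1]) * np.diff(x)) / 2)

x = np.linspace(0.0, 1.0, 200001)
du = -x                    # derivative of u = (1 - x^2)/2, which solves -u'' = 1
f = np.ones_like(x)

# A test function with v(1) = 0 at the fixed (Dirichlet) end; v(0) is free.
v = np.cos(np.pi * x / 2)
dv = -np.pi / 2 * np.sin(np.pi * x / 2)

lhs = integrate(du * dv, x)   # integral of c u' v' dx, with c = 1
rhs = integrate(f * v, x)     # integral of f v dx
print(abs(lhs - rhs) < 1e-6)  # True: the weak form holds
```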
So this is called Galerkin's method. And it starts with the weak form. So, he's Russian. Russia gets into the picture now. We had France and Germany with the boundary conditions; now we've got Russia with this fundamental principle of how to turn a continuous problem into a discrete problem. That's what Galerkin's idea does. Instead of a function unknown, I want to have n unknowns. I want to get a discrete equation, which will eventually be KU=F. So I'm going to get to an equation KU=F, but not by finite differences, right? I could, but I'm not. I'm doing it this weak Galerkin finite element way. OK, so if I tell you the Galerkin idea, then next time we bring in — we have libraries of finite elements. But you have to get the principle straight. So it's Galerkin's idea. Galerkin's idea was: choose trial functions. Let me call them — have to get the names right — phi. OK, the Greeks get a shot. Trial functions, phi_1(x) to phi_n(x).
OK, so that's a choice you make. And we have a free choice. And it's a fundamental choice for all of applied math here. You choose some functions, and if you choose them well you get a great method; if you choose them badly you get a lousy method. OK, so you choose trial functions, and now what's the idea going to be? Your approximate U, approximate solution, will be some combination of these: U_1*phi_1 plus up to U_n*phi_n. So those are the unknowns. The n unknowns. I'll even remove that for the moment. You see, these are functions of x. And these are numbers. So our n unknown numbers are the coefficients, to be decided, of the functions we chose. OK, now I need n equations. I've got n unknowns now — they're the unknown coefficients of these functions. I need equations, so I get n equations by choosing test functions: V_1, V_2, up to V_n. Each V will give me an equation. So I'll have n equations at the end, I have n unknowns, I'll have a square matrix. And that'll be a linear system. I'll get to KU=F.
But do you see how I'm getting there? I'm getting there by using the weak form, by using Galerkin's idea of picking some trial functions and some test functions, and putting them into the weak form. So Galerkin's idea is: take these functions and these functions, and apply the weak form just to those guys. The real weak form, the continuous weak form, was for a whole lot of v's. We'll get n equations by picking n V's, and we'll get n unknowns by picking n phis. So this method, this idea, was decades older than finite elements. The finite element idea was a particular choice of these guys, a particular choice of the phis and the V's, as simple polynomials. And you might think, well, why didn't Galerkin try those first — maybe he did. But the key is that with the computing power we now have, compared to Galerkin, we can choose thousands of functions, if we keep them simple. So that's really what the finite element idea brought: keep the functions as simple polynomials and take many of them. Where Galerkin, who didn't have MATLAB — he probably didn't even have a desk calculator; he used pencil and paper — he took one function. Or maybe two. I mean, that took him a day. But we take thousands of functions, simple functions, and we'll see on Friday the steps that get us to KU=F. So this is the prep for finite elements.
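As a preview of Friday's steps, here is a minimal sketch of Galerkin's idea with the finite element choice — piecewise linear hat functions — applied to the free-fixed model problem. The choices c = 1, f = 1, and n = 8 elements are illustrative; the entries of K and F below are the standard hat-function integrals under those assumptions:

```python
import numpy as np

# Linear ("hat") finite elements for -u'' = 1 on [0, 1],
# free at x = 0 (w(0) = 0), fixed at x = 1 (u(1) = 0).
n = 8                  # number of elements
h = 1.0 / n
# Unknowns U_0 .. U_{n-1} at nodes x_j = j*h; node n is fixed, so no unknown there.
K = np.zeros((n, n))   # stiffness matrix: K_ij = integral of phi_i' phi_j'
F = np.zeros(n)        # load vector:      F_i  = integral of f phi_i
K[0, 0] = 1.0 / h      # half-hat at the free end
F[0] = h / 2
for j in range(1, n):
    K[j, j] = 2.0 / h
    K[j, j - 1] = K[j - 1, j] = -1.0 / h
    F[j] = h           # full hat, with f = 1

U = np.linalg.solve(K, F)   # Galerkin's discrete system KU = F

nodes = np.arange(n) * h
exact = (1 - nodes**2) / 2  # exact solution of the continuous problem
print(np.allclose(U, exact))  # True: in this 1-D problem the nodal values are exact
```

The remarkable exactness at the nodes is special to this 1-D constant-coefficient setup; in general Galerkin gives the best approximation within the span of the chosen phis.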
© 2001–2015 Massachusetts Institute of Technology