Description: This video covers models of synaptic transmission, spike trains, synaptic saturation, and somatic and dendritic inhibition.
Instructor: Michale Fee
MICHALE FEE: OK, so let's go ahead and get started. OK, so in the last lecture, we talked about how the inputs to neurons actually come into a cell mostly on the dendrite, which is this extended arborization of cylinders of cell membrane that give a very large surface area that allows many, many synapses to contact a neuron, many more than would be possible if all of those synapses were trying to connect to this neuron on its soma.
Today, we are going to follow up on that general picture of how neurons receive inputs. And today, we're going to focus on the question of how synapses work. So we're going to start by looking at a simple model of synapses. And we're going to end by understanding how synapses on different parts of the neuron can actually do quite different things.
So here's our list of learning objectives for today. So we're going to learn how to add a synapse to an equivalent circuit model. And we're going to describe a simple model of how that actually generates voltage changes in a neuron and what those synaptic inputs actually do. We're going to describe a mathematical process called convolution. That's going to allow us to extend the idea of how a neuron responds to a single input spike from a presynaptic neuron to how a neuron responds to multiple spikes coming from a presynaptic neuron.
So we're going to introduce this idea of convolution, which I'm sure many of you have heard of before. But it's going to play an increasingly important role in the class. And so we're going to introduce it here.
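As a preview of that idea, here's a minimal numerical sketch: the response to a whole spike train is the single-spike response convolved with the spike train. The kernel shape, time constant, and spike times below are made up for illustration, not taken from the lecture:

```python
import numpy as np

# Hypothetical single-spike response (EPSP-like kernel): an exponential
# decay with a 10 ms time constant, sampled at 1 ms resolution.
dt = 1.0                       # ms per sample
t = np.arange(0, 50, dt)       # kernel support, 0-50 ms
tau = 10.0                     # ms, decay time constant (illustrative)
kernel = np.exp(-t / tau)      # response to one presynaptic spike

# Presynaptic spike train: spikes at 20 ms and 35 ms
spikes = np.zeros(100)
spikes[[20, 35]] = 1.0

# The response to the whole train is the convolution of the spike
# train with the single-spike response (assuming linear summation).
v = np.convolve(spikes, kernel)[:len(spikes)]
```

Note the assumption of linear summation here; as the lecture goes on to discuss, real synaptic responses saturate, so this superposition picture is only an approximation.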
We're going to talk about the idea of synaptic saturation, which is the idea that a single synaptic input can generate a small response in a neuron. You would think that as you generate more and more synaptic inputs to a neuron that the response of the postsynaptic neuron might just keep increasing. But, in fact, the response of a neuron to its inputs saturates at some level. And that process of saturation actually has very important consequences for how neurons respond to their inputs. And then finally, we're going to end with a fun story about the different functions of somatic and dendritic inhibition. And we're going to tell that story in the context of a crayfish behavior.
All right, so let's start with chemical synapses. There are also electrical synapses, which we're not going to talk about. And that's basically where two neurons can actually contact each other. There are actually proteins that form little holes between the neurons. And so they're just directly electrically connected with each other. That's called an electrical synapse, or a gap junction. We're not going to talk about that today. We're going to focus on chemical synapses.
So this is the structure of a typical excitatory synapse from a presynaptic neuron. This is the axon of a presynaptic neuron onto the dendrites of a postsynaptic neuron. Postsynaptic dendrites often have these specializations called spines, which are just little mushroom-like protrusions of the cell membrane of the dendrite onto which presynaptic neurons can form a synapse. So this is called the presynaptic component or terminal. That's the postsynaptic component, or postsynaptic terminal.
On the presynaptic side, there are very small synaptic vesicles, about 30 to 40 nanometers in diameter, that sort of form a cloud or a cluster just on the inside surface of this synaptic junction. The synapses are typically about a half a micron across. And the synaptic cleft is very small. It's about 20 nanometers. So this is not quite to scale. This should be maybe a little bit closer here.
All right, notice that that is very small. That synapse is really tiny-- its size is about one wavelength of green light. So it's a tiny structure.
There's a lot going on inside that little thing, though. And here we're going to walk through the sequence of events that describes how a presynaptic action potential leads to depolarization in a postsynaptic neuron. So we have an action potential that propagates down. That's a pulse of depolarizing voltage. When it reaches the synaptic terminal, that pulse of depolarizing voltage is about plus 50 millivolts. That activates voltage gated calcium channels that turn on, just the same way that we've described voltage activated sodium channels and potassium channels.
That allows calcium ions to flow into the presynaptic terminal. That calcium flows in and binds to presynaptic proteins that dock these vesicles onto the membrane facing the synaptic cleft. That causes those vesicles to fuse with the membrane. They open up and release their neurotransmitters. So all of those vesicles are filled with neurotransmitter. The vesicles are coated with actual pumps that take neurotransmitter from inside of the cell and pump it into the vesicles.
So then calcium flows in. Vesicle fuses, releases neurotransmitter into the cleft. Neurotransmitter then diffuses in the cleft. You could actually calculate how long it takes to get from one side to the other now, I think. It's not very long.
Ligand gated ion channels, they're basically like the kinds of ion channels we've already been discussing. But instead of being gated by voltage, they're gated by the binding of a neurotransmitter to a binding site on the outside of the protein. That produces a conformational change that opens the pore to the flow of ions.
Now, neurotransmitter binds to these and opens the pore. You now have positive ions that flow into the cell, because the cell is hyperpolarized. So it has a low voltage. Positive ions flow into the cell. That corresponds to an increase in the synaptic conductance.
What does that flow of positive ions into the cell do? It depolarizes the cell. You have synaptic current flowing in that then-- I forgot to put it on there-- depolarizes the cell. Any questions about that?
OK, let's talk a little bit about some of the interesting numbers, like how many synapses there are, how many cells, how many dendrites in a little piece of neural tissue. It's pretty staggering actually. So the synapses are small.
But what's really amazing is that in a cubic millimeter of cortical tissue, there are a billion synapses. And if you think about what that means, there is a synapse on a grid, on a lattice, every 1.1 microns. So they're sort of some fraction of a micron big. And there's one of them every micron. Most of your brain is filled with synapses. There are 4.1 kilometers of axon in that same cubic millimeter and 500 meters of dendrite.
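That grid-spacing figure is just arithmetic. A quick sanity check, using the round number of one billion synapses per cubic millimeter:

```python
# If a billion synapses were arranged on a regular grid in a cubic
# millimeter, the spacing would be the cube root of the volume per synapse.
synapses_per_mm3 = 1e9
volume_per_synapse_mm3 = 1.0 / synapses_per_mm3
spacing_mm = volume_per_synapse_mm3 ** (1.0 / 3.0)
spacing_um = spacing_mm * 1000.0   # convert mm to microns
# ~1 micron -- consistent with the roughly-one-per-micron figure quoted
```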
A typical cortical cell receives about 10,000 synapses. Each cell has about 4 millimeters of dendrite and 4 centimeters of axon. And there are about 10 to the 5 neurons per cubic millimeter in the mouse cortex. And in your entire brain, there are about 10 to the 11 neurons, which is roughly the same as the number of stars in our galaxy. And we're going to figure out how it works.
OK, so let's come back to this. So let's start by adding a synapse to an equivalent circuit model and understanding how that model works. So let's start with an ionotropic receptor. So ionotropic receptors are neurotransmitter receptors that also form an ion channel. There are other kinds of neurotransmitter receptors where a neurotransmitter binds. And that sends a chemical signal that opens up a different kind of ion channel. Those are called metabotropic neurotransmitter receptors. We're going to focus today on ionotropic receptors.
So a neurotransmitter binds. That binding opens a gate. And that allows a current to flow.
So these guys, Magleby and Stevens, did an experiment to understand how that conductance-- so when that ion channel opens, it turns on a conductance. And you can directly measure that conductance by doing a voltage clamp experiment. So here's the experiment they did.
They took a muscle fiber from a frog. They set up a voltage clamp on it, so an electrode to measure the voltage, and another electrode to inject current. You hold the voltage at different levels. And then what they did was they electrically stimulated the motor axon, the axon of the motor neuron that innervates the muscle. So that then activates this neuromuscular junction that opens acetylcholine receptors and produces a current as a function of time after the synapse is activated.
Any questions about the setup? We're simply holding the cell at different voltages, activating the synapse, and measuring how much current flows through the synapse. Yes.
AUDIENCE: So the current flows through the ion channels in the muscle fibers?
MICHALE FEE: The current is flowing through these ion channels here in the synapse. Now, remember, there are all sorts of sodium channels and potassium channels and all those things. But do those do anything?
AUDIENCE: Not really.
MICHALE FEE: We're just holding the voltage-- that's why the voltage clamp is so important, because if you were to do this experiment without a voltage clamp, you would stimulate and the muscle would contract. And it would rip the electrodes out of the muscle fiber. So when you voltage clamp it, it holds the cell at a constant potential, so that the cell can't spike when the current flows in through the synapse. Yes.
AUDIENCE: On the graph, would the shock [INAUDIBLE]
MICHALE FEE: Yes, the shock--
AUDIENCE: The difference is like in positive and negative--
MICHALE FEE: Yeah, I'm going to explain this. I'm just setting it up right now. OK, any questions about the setup? Good questions. One step ahead of me.
All right, so now, let's look at what actually happens. So what you can see is that the current that goes through these ion channels is different depending on the voltage that you hold the cell at. So if you hold the cell at a negative potential-- so here you can see, the voltage is at like minus 120 here. And what happens is after you shock, you see this large, inward current. Remember, inward current, negative current, corresponds to positive ions going into the cell.
So after you activate the motor axon, you get a large inward current that lasts a couple milliseconds. That corresponds to current going into the cell that would depolarize the cell and activate it, right? But as you raise the voltage, you can see that the current gets smaller. And at some point, the current actually goes to zero. And it goes to zero when you are holding the membrane potential close to zero. And as you hold the membrane potential more positive, you can see that the current actually goes the other way.
So what we can do is we can now plot that peak current as a function of the holding potential. So we're measuring current through an ion channel at different voltages. So what are we going to plot next? Just like we did for the sodium channel, for the potassium channel, we're going to-- what are we going to plot? What kind of plot?
MICHALE FEE: An I-V curve. Excellent. Let's do that. You can see it's actually linear. So the current is negative when you hold the cell negative. The current's positive when you hold the cell above zero. And it crosses at zero. What do we call that place where it crosses zero? What does that tell us? When this ion channel is open--
AUDIENCE: Reversal potential.
MICHALE FEE: So reversal potential or the equilibrium potential, that's right. That's kind of weird, right? An equilibrium potential that's zero. What kind of channel has an equilibrium potential at zero? Remember, sodium was very negative, like minus 80-- sorry, potassium was very negative.
MICHALE FEE: Excellent. It's something that passes both potassium and sodium. It's like a hole. So this ion channel is basically like opening a hole that passes positive ions in both directions. Potassium goes out, sodium comes in. Yes.
AUDIENCE: But that is only like [INAUDIBLE] before the--
MICHALE FEE: Notice that we're plotting the current as a function of all voltages, and it crosses at zero. There's zero current at zero voltage, which happens when you have just a non-selective pore.
OK, what does that look like? An I-V curve that looks like that, what is that? There's a name for that. You use it when you build a circuit. You might have some transistors and some capacitors and some-- how about some resistors? It's just a resistor. That's what the I-V curve of a resistor looks like.
And if we were to put it in series with a battery, what would the voltage of the battery be? Zero. Remember, if we put it in series with a battery, it produces an offset.
So it's just our same equation-- the current is just a conductance times a voltage: I_syn = g_syn (V - E_syn). It's just Ohm's law. The conductance g_syn is 1 over resistance. V minus E_syn is called the driving potential, and E_syn here is just 0.
Can anyone take a guess at what the conductance looks like as a function of time? Somebody just hold your hand up, and-- I see two answers that are very close. Lena.
AUDIENCE: Would it be like that?
MICHALE FEE: Start over here. Like go this way. What would the conductance do as a function of time to make the current look like this? What would it be here? What would the conductance be here? Remember, the voltage is being held at some value. The current is zero. So the conductance must be?
MICHALE FEE: Zero. And how about here? It should be some big conductance. And then what about here?
MICHALE FEE: Zero. So what is that-- excellent, it just looks like that. The conductance just turns on and then turns off. Anybody want to take a guess at why that might be? Shock. What happens here? Why does this turn on? Because neurotransmitter binds to the receptor, opens the channel. And then what happens? The neurotransmitter falls off. And the neurotransmitter receptor closes. That's it.
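Putting the Ohm's-law picture together with that on-off conductance, here is a minimal sketch of the voltage clamp measurement. The square conductance pulse and all the numbers are made up for illustration; real synaptic conductances rise and decay smoothly:

```python
import numpy as np

E_syn = 0.0                    # mV, reversal potential of this synapse
g_syn = np.zeros(100)          # uS, conductance vs. time (1 ms steps)
g_syn[10:13] = 1.0             # pulse: transmitter binds, then unbinds

# Under voltage clamp the driving potential V - E_syn is held constant,
# so the current is just a scaled copy of the conductance:
#   I(t) = g_syn(t) * (V_hold - E_syn)
for V_hold in (-120.0, -60.0, 0.0, 40.0):   # mV, holding potentials
    I = g_syn * (V_hold - E_syn)
    # inward (negative) when held below E_syn, zero at E_syn,
    # outward (positive) when held above E_syn
```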
We're going to do a little bit of mathematical modeling of that. But it's going to be pretty simple. All right? OK.
So there is our equivalent circuit. This thing right here-- a conductance times a driving potential-- electrically is just that. You remember that, right? That's the same way we modeled the current of the sodium channel or the potassium channel.
AUDIENCE: So the graphical [INAUDIBLE] because it's linear, why would that [INAUDIBLE]
MICHALE FEE: The conductance constant?
MICHALE FEE: OK. Because look at the current. So, remember, these different voltages here just correspond to these voltages here: V minus E_syn. So E_syn is what? The synaptic reversal potential is just zero, right? So for each one of these experiments, this driving potential is just constant. It's just given by the holding potential in the voltage clamp experiment.
So in order to turn something that looks like this into something that looks like this, you have to multiply it by something that looks like that. Does that make sense? For each one of those experiments, this term is constant. And so to get a current that looks like this, the conductance has to look like that. Does that make sense?
AUDIENCE: guess she's asking the upper--
MICHALE FEE: Oh, sorry, did I misunderstand the question? Ask it again.
AUDIENCE: Sorry. I was talking about that one.
MICHALE FEE: Oh.
AUDIENCE: But the explanation did make sense.
MICHALE FEE: Did it help? But it was the wrong explanation. OK, so ask your question again.
AUDIENCE: Yes, so I was trying to relate it to what we were doing earlier with a [? coefficient ?] [INAUDIBLE]
MICHALE FEE: Yeah.
AUDIENCE: And it had an I-V curve that looked something like that curve--
MICHALE FEE: Oh, it like had some funny shape like this.
AUDIENCE: Yeah, it looked like [INAUDIBLE] at zero. And then it was like--
MICHALE FEE: Yeah, so remember, this is as a function of time. And this is exactly what the sodium conductance looked like as a function of time. It turns on. And then it turns off.
For the sodium conductance, this turning on happened with a voltage step. And the turning off happened because of the inactivation gate. In this case, it's different. What's turning this thing on is a neurotransmitter binding that turns it off. And what turns it off is not inactivation. It's the fact that the neurotransmitter falls off. But it has the same time dependence. It's just different mechanisms.
And then for this, was there a question still about this?
AUDIENCE: No. [INAUDIBLE] using that with the--
MICHALE FEE: Oh, with the tie-- it was, OK-- and you understand why this doesn't look like this, right?
MICHALE FEE: The reason that happened is because it looked like this for most of it, but it went back down to 0 here as a function of voltage. Why? Because the voltage dependence shuts the [AUDIO OUT] off down here. This doesn't have voltage dependence. It's cool how all this stuff ties together, right? It's sort of the same stuff we learned for Hodgkin-Huxley, just applied to this case here.
All right, so there's the circuit. Here's the model that we described before. There's a simple soma with a capacitance and some leak conductance. And now that is how we would model attaching a synapse, any kind of a synapse, onto a soma. And if we wanted the soma to be able to spike, we would add some voltage-dependent potassium and some voltage-dependent sodium conductances.
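That whole circuit-- capacitance, leak, and synapse-- can be sketched as a simple forward-Euler simulation. This is the non-spiking version, and all parameter values here are illustrative, not from the lecture:

```python
import numpy as np

# Passive soma parameters (illustrative values)
C     = 0.1     # nF, membrane capacitance
g_L   = 0.01    # uS, leak conductance
E_L   = -70.0   # mV, leak reversal (holds the cell hyperpolarized)
E_syn = 0.0     # mV, excitatory synaptic reversal potential

dt = 0.1                                            # ms
t = np.arange(0, 100, dt)
g_syn = np.where((t >= 20) & (t < 22), 0.05, 0.0)   # brief conductance pulse, uS

V = np.empty_like(t)
V[0] = E_L
for i in range(1, len(t)):
    # C dV/dt = -g_L (V - E_L) - g_syn(t) (V - E_syn)
    dVdt = (-g_L * (V[i-1] - E_L) - g_syn[i-1] * (V[i-1] - E_syn)) / C
    V[i] = V[i-1] + dt * dVdt
# V rises toward E_syn while the synapse is on (an EPSP),
# then the leak brings it back down to E_L
```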
All right, any questions? Yeah.
AUDIENCE: I probably should have asked this a long time ago and not [INAUDIBLE] circuit. Do you know whether to have the big line of the battery--
MICHALE FEE: Oh, yeah, so that's not a dumb question at all. The answer is don't worry about it. Like there's a convention that the big one is the plus side. And I'm not even 100% sure I've been perfectly consistent in all my slides. The long line is supposed to be the positive side of the battery.
MICHALE FEE: Just don't worry. Just make one big and one small. And don't-- just make it a battery symbol. And I don't care if it's the right way. OK. OK? You don't want to make it too much like a capacitor, because if they're the same length, then it looks like a capacitor. And if you're worried just draw an arrow and write battery. OK? All right, good.
OK, so now, let's step back from our voltage clamp experiment and attach this synapse to a real neuron, like this thing, the one that can't spike. It's just a leaky soma that's hyperpolarized. And now, what's the voltage in the cell going to do when we activate that synapse? What is the voltage here? We're turning on this conductance. What that means is we're making this resistance get really small.
So what is the voltage inside the cell going to do? It's going to approach something. What's it going to approach?
We have this circuit. We have a battery and a resistor. Let's make that resistor really, really big. That connects the battery between the outside and inside of our neuron. And now we make the resistor really small all of a sudden. What's going to happen to the voltage in here?
AUDIENCE: What is G I?
MICHALE FEE: Sorry, just some other conductance. And let's just imagine that it's like potassium conductance that's kind of holding the cell hyperpolarized. But I don't want you to focus on this right now. What I want you to focus on is what would happen here when I turn on that synapse, when I make the resistor really small, when I make the conductance really big. What's going to happen to the voltage inside the cell? It's going to approach something. It's going to be dragged toward something.
It's going to be dragged toward the voltage of that battery. We're hooking that battery up to the inside of our neuron. Does that make sense? OK, so that's what I'm going to show you now.
So if we have an excitatory synapse-- so what I'm going to show you is what happens when we activate a glutamatergic excitatory synapse on a cell. We're going to record the voltage in the cell. And we're going to activate that synapse. And what you see is that this is what you would see, for example, for the muscle fiber. You activate the synapse, and you see that the voltage of the cell-- if the cell is hyperpolarized, the voltage of the cell goes up.
If you hold the cell at a higher voltage, the voltage also goes up, but a little bit less. If you hold the cell at zero, you can see you activate that synapse, but the cell is already at the potential of the battery. And so there's no current and no change in the voltage. If you hold the cell at a positive voltage, and you activate the synapse, again, you're connecting the cell to a battery that has 0 volts. And so the voltage goes down.
So here's what I want to convey here, that when you activate a synapse, it forces the cell, forces the voltage in the cell, to approach the voltage of the reversal potential of the synapse, the voltage of that battery. Is that clear? Yes.
AUDIENCE: So [INAUDIBLE]
MICHALE FEE: It increases the conductance. Current flows into the cell. And it flows into the cell in a direction so that the voltage approaches the equilibrium potential. Yes.
MICHALE FEE: What's that?
MICHALE FEE: Ah, yes, good. In fact, that's a great way to phrase it, because that kind of experiment here where we're measuring the voltage inside the cell is called a current clamp experiment. So there's voltage clamp, where you force the voltage to be constant by varying the current. Here, we're holding the current constant, clamping the current, and measuring the voltage.
MICHALE FEE: Yeah, let's just go back to the setup here, this experiment here. You don't set it up like this. You set it up like we had on the first day of class, where you just have a current source connected here and a voltmeter attached to this electrode.
I didn't use that word. But that was the very first experiment we did. I think on the second day of class, we had a cell with an electrode to inject current, an electrode to measure voltage. That was a current clamp experiment, because we're holding the current at some constant value. OK? Here, we're holding the voltage at some constant value and measuring current.
All right, so the idea is really simple, when you have a synapse, the synapse has a reversal potential. When you activate the synapse, the cell is dragged toward the reversal potential. Here, the reversal potential was zero. So when I activate the synapse, the voltage goes toward the reversal potential. Notice-- yes.
AUDIENCE: Oh, just a very basic question. So is [INAUDIBLE] the top half is like the synaptic face of the circuit and the bottom half is like--
MICHALE FEE: Are you talking about like here versus here? Yeah, so I've put that pink box around the synapse.
MICHALE FEE: That's the circuit that corresponds to having a synapse attached to the cell. And this is the circuit that we developed several weeks ago.
AUDIENCE: OK. But it's acting like the presynaptic-- like a cell or--
MICHALE FEE: Oh, no, the presynaptic cell is not part of this picture. The presynaptic cell is kind of like spritzing neurotransmitter onto this thing, which increases the conductance, which is the same as reducing the size of a resistor. Does that make sense? Yes.
AUDIENCE: So is the reversal potential always going to be zero or like just like in this specific example?
MICHALE FEE: Great question. So the reversal potential is different for different kinds of synapses. You can see that this synapse, this kind of synapse here, if I'm hyperpolarized, it pushes the voltage of the cell up. What kind of synapse do you think that is? An excitatory synapse.
But notice something really cool. An excitatory synapse doesn't always push the voltage of the cell up. If the cell is sitting up here, it can push the voltage of the cell down. OK? Yes.
AUDIENCE: Why doesn't it just stay up?
MICHALE FEE: Why? Great. So great question. You can probably answer that already.
AUDIENCE: Well, it's some--
MICHALE FEE: Yeah. Yeah. I'm hearing a bunch of good answers. So what happens is we're stimulating the synapse here, releases neurotransmitter. The conductance goes up. Current flows in, depolarizes the cell, neurotransmitter unbinds. Current stops. And this thing brings the cell back down to its starting point, this other thing. Yeah, this thing is kind of holding the cell at some hyperpolarized potential.
AUDIENCE: So even though they've lost a current.
MICHALE FEE: The current is turning on and then turning off. Remember, the current-- here it is right here-- the current is turning on as the conductance turns on. And then when the conductance goes to zero, the current goes to zero.
AUDIENCE: Oh, I thought it was a big current clamp.
MICHALE FEE: No. The experiment we're doing is injecting constant current into the cell. And that's how we hold the cell at these different sort of kind of average voltages. OK, maybe I should say-- yes.
AUDIENCE: So I just [INAUDIBLE]
MICHALE FEE: Makes it do this?
AUDIENCE: Well, I don't understand how that changes. I just always want positive ions to flow in.
MICHALE FEE: Yeah. So positive ions flowing into the cell raise the voltage in the cell.
AUDIENCE: But I don't understand if the inside of cell is already positive, why adding more positive--
MICHALE FEE: Oh. OK, great question. You're talking about up here. OK, so let me just back up and explain one thing that maybe I didn't explain very well. In this experiment, we're injecting current through our current electrode to just hold the cell at different voltages while we stimulate the synapse. Does that make sense?
Down here, let's say, we're not injecting any current. And then we inject a little bit of current to kind of hold the cell up here. And we activate the synapse and measure changes from there. Does that make sense?
AUDIENCE: So when it's a current clamp, it just leaves the input the same?
MICHALE FEE: Exactly. You're just turning a knob and saying I'm going to put in 1 nanoamp. And that's just going to kind of hold the cell up here. And then I activate the synapse and ask how does the voltage change. OK? Oh.
AUDIENCE: That's not my question.
MICHALE FEE: I know.
AUDIENCE: My question--
MICHALE FEE: But I realized-- your question is why does this go down?
AUDIENCE: Like I thought that's--
MICHALE FEE: Yeah. Exactly. Exactly. Isn't that cool? Excitatory synapses are not always inject-- so what happens up here? You're holding the cell up here. And now you activate the synapse. And the voltage actually goes down. And the answer is that if you're holding this cell up here positive, more positive than here, when you turn on that conductance, the current goes the other way.
AUDIENCE: Passed the reversal potential.
MICHALE FEE: So you've passed the reversal potential. You're holding the cell up here. And so [AUDIO OUT] turn on the synapse, the current flows the other direction. You have a positive current, which is positive ions flowing out, which lowers the voltage of the cell.
So the way to think about this is that when you have a synapse, you turn the synapse on, it doesn't matter where the voltage is, it's always driving it toward the reversal potential. And if you start above the reversal potential, the voltage will go down. If you start below the reversal potential, the voltage will go up. It's not what you learned in 9.01, right?
MICHALE FEE: This is-- yeah. But this is how it really works. OK? Excitatory synapses, the reason you think of excitatory synapses as always pushing the voltage up is because most of the time, the cell is sitting [AUDIO OUT] here. But this is going to become much more important and obvious when we're talking about inhibitory synapses.
So let's talk about inhibitory synapses. So here's a model with the synaptic reversal at zero. All excitatory synapses have their reversal potential around zero. The neuromuscular junction, the glutamatergic synapse, they're all basically non-specific pores that have a reversal potential of zero.
Inhibitory synapses are different. So excitatory-- the reason we call it an excitatory synapse is because that reversal potential is above the threshold for the neuron to spike. And so when you activate the synapse, you're pushing the voltage of the cell always toward a voltage that's above the spiking threshold. That's why it's called an excitatory synapse. And those are called excitatory postsynaptic potentials, or EPSP. That little bump is an EPSP.
All right, now, inhibitory synapses look really different. And now the effect that I'm talking about is important. With an inhibitory synapse, the reversal potential is around minus 75. Remember, most inhibitory synapses are chloride channels. And chloride-- do you remember, from the lecture about equilibrium potentials? The reversal potential for chloride channels is around minus 75. And that's why the synaptic reversal potential is minus 75.
And now, you can see that if you hold the cell-- so here's where a cell normally sits. You activate the synapse. The voltage goes down. All right. And why does it go down? Because it's pulled toward the equilibrium potential for the chloride channel, chloride ion.
Now, notice that-- OK, so as the cell is more and more depolarized, you can see that it's more strongly pulled toward [AUDIO OUT] the voltage change is bigger. If you hold the cell at minus 75, there's no voltage change at all from activating an inhibitory synapse. And if you hyperpolarize the cell even more, you can see that when you activate an inhibitory synapse the voltage of the cell actually goes up.
So inhibitory synapses don't always make the potential of the cell go down. In fact, sometimes they can make the cell go up. In fact, what's really cool is that in juvenile animals, there's more chloride inside of a cell than there is in an adult. And so the reversal potential of the chloride channels is actually up here. And chloride inhibitory synapses can actually make neurons spike. You can see where this thing sits. It just depends on the concentration of chloride ions.
Most of the time inhibitory synapses have a reversal potential that's minus 75. And we call that inhibitory because the reversal potential of the synapse is less than the spiking threshold.
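That sign reversal is just the driving force changing sign. A tiny sketch, with an assumed conductance value:

```python
E_inh = -75.0   # mV, chloride reversal potential
g = 1.0         # uS, synaptic conductance when open (illustrative)

# Positive (outward) current hyperpolarizes; negative (inward) depolarizes.
# Either way, the current pushes V toward E_inh.
for V in (-55.0, -75.0, -90.0):
    I = g * (V - E_inh)
    # V = -55 mV: I > 0, voltage pushed down toward -75 (classic IPSP)
    # V = -75 mV: I = 0, no visible response
    # V = -90 mV: I < 0, voltage pushed UP toward -75
```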
AUDIENCE: What [INAUDIBLE] reversal potential?
MICHALE FEE: Just the-- OK, you know the answer to that question. You tell me. So what are the two things-- yeah, go ahead.
AUDIENCE: The type of ion.
MICHALE FEE: Good. The type of ion. And one more thing. There were two things we need to have a battery in a neuron. What are they? Anybody know?
MICHALE FEE: And?
MICHALE FEE: Ion selective permeability. So the reversal potential depends on what ion that channel is selective for and the concentrations of that ion inside and outside the cell. So for an inhibitory synapse, there are two types. There are chloride channels that have a reversal potential of minus 75. And there are also potassium channels that serve an inhibitory function, that can be activated by synapses. And they have a reversal potential more like minus 80.
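Those reversal potentials come from the Nernst equation. Here's a sketch for chloride, with typical adult concentrations; the exact concentration values are assumptions for illustration, not from the lecture:

```python
import math

# Nernst equation: E = (RT / zF) * ln([out] / [in])
R = 8.314       # J / (mol K), gas constant
T = 310.0       # K, body temperature
F = 96485.0     # C / mol, Faraday constant
z = -1          # valence of chloride

Cl_out = 120.0  # mM, assumed extracellular chloride
Cl_in  = 7.0    # mM, assumed adult intracellular chloride

E_Cl = (R * T / (z * F)) * math.log(Cl_out / Cl_in) * 1000.0   # mV
# ~ -76 mV with these numbers; raising Cl_in (as in juvenile animals)
# makes E_Cl less negative, moving the reversal potential up
```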
AUDIENCE: [INAUDIBLE] develop [INAUDIBLE] it's not chloride channel is not changing. So what the ion channel--
MICHALE FEE: It's the ion concen--
AUDIENCE: Change so the--
MICHALE FEE: It's the concentration that's different. OK? Cool. You see this kind of stuff all the time. Inhibitory postsynaptic potentials are often upward going.
Somebody will be super impressed with you if you look at a trace like this and you say, is that an EPSP or an IPSP? Because you don't know it just by looking at it. Most people would assume it's an EPSP. Yeah.
AUDIENCE: Was [INAUDIBLE] to cause like a spike?
MICHALE FEE: Well, so what do you think? It's inhibitory if the reversal potential [INAUDIBLE] in the threshold. So no matter how strong that inhibitory synapse is, can it ever cause a spike if the reversal potential is less than the threshold?
All right, let's go on. Any questions about this. It's so fundamental. Yeah.
AUDIENCE: [INAUDIBLE] potential [INAUDIBLE]
MICHALE FEE: So it's normally around minus 60 minus [AUDIO OUT] minus 75. OK? OK, let's go on.
All right, let me just talk a little bit more about this conductance. So if you do a single channel patch experiment, you can take an electrode. Remember, I showed you what it looks like if you take an electrode and you stick it on a single sodium channel or a single potassium channel. Well, you can do the same thing. You can stick it on a neurotransmitter receptor. You can flow neurotransmitter over the receptor.
And what you see is that when you put the neurotransmitter on, just like sodium and potassium channels, it flickers between an open state and a closed state. So it has two states-- open state and closed state. You can do the same kind of modeling of it with a kinetic rate equation with two states. You can write down the probability that the channel is in the open and closed states. That's going to be a function of what? Like neurotransmitter concentration and time, right?
And you can write down the total synaptic conductance as the conductance of a single open neurotransmitter receptor times the number of neurotransmitter receptors times the probability that any of them is open, any one of them is open.
And so now, let's think about this probability as a function of time. So we can model the neurotransmitter receptors and write down the probability P that the channel is open. The probability that it's closed then has to be 1 minus P. Alpha and beta are rate constants; they have units of 1 over time.
And what controls the rate at which the channels open? Good. So alpha will depend on the concentration of neurotransmitter. That controls the rate at which closed channels open. And how about open to closed? It's not the concentration of the neurotransmitter. It'll be something else.
MICHALE FEE: Exactly. So there will be basically some rate constant for the neurotransmitter unbinding. OK?
All right, so let's do that. So here's that model. This is a simplified version of the Magleby-Stevens model. And it looks like this. There's a closed and an open state. The open state corresponds to the closed neurotransmitter receptor R binding to an unbound neurotransmitter molecule, forming a bound receptor complex. There's usually another step-- the way this is usually modeled in the Magleby-Stevens model is that the bound receptor complex is closed and then it opens in another transition. But we're just going to keep it simple and do it this way.
So we have a closed state: the unbound neurotransmitter receptor binds to an unbound neurotransmitter molecule and forms a bound receptor complex that is then open. So that's P. And that's 1 minus P. And we can just write down the rate equation the same way that we did to analyze the time dependence of the sodium channel or the potassium channel.
Isn't that amazing? All that stuff we learned for Hodgkin-Huxley, just you can use all the same machinery here. OK?
OK, so that's a simple model. And you can take a simplification of it. We're going to assume that the binding is very fast, that the alpha is very fast. So that when you put a pulse of neurotransmitter concentration onto the synapse, the probability of being open, we're going to assume that this is super fast. So that the rate at which you go from unbound to bound is very high.
And now, the neurotransmitter goes away. And you can see zero concentration here. Let's say you bind, the probability of being open gets big. And then the neurotransmitter goes away. You can see that this first term goes to 0, because the neurotransmitter concentration is zero. You can see that dP/dt is just minus beta P. And that's just an exponential decay.
So the model is: binding neurotransmitter opens the neurotransmitter receptor. The neurotransmitter goes away. And then there's an exponential decay of the probability that you're in the open state. Yes.
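This simplified kinetic scheme is easy to simulate. Below is a minimal Python sketch using forward Euler integration; the rate constants, pulse duration, receptor count, and single-channel conductance are all made-up illustrative values, not measured ones:

```python
import numpy as np

def open_probability(t, transmitter, alpha=5000.0, beta=200.0, n_sites=1):
    """Integrate dP/dt = alpha * [T]^N * (1 - P) - beta * P  (forward Euler).

    t           -- evenly spaced time points (s)
    transmitter -- neurotransmitter concentration [T] at each time point
    alpha       -- binding rate constant (fast, so P rises quickly)
    beta        -- unbinding rate constant (sets the exponential decay)
    n_sites     -- number of binding sites N (the exponent on [T])
    """
    dt = t[1] - t[0]
    p = np.zeros_like(t)
    for i in range(1, len(t)):
        a = alpha * transmitter[i - 1] ** n_sites   # opening rate depends on [T]
        p[i] = p[i - 1] + dt * (a * (1.0 - p[i - 1]) - beta * p[i - 1])
    return p

dt = 1e-5
t = dt * np.arange(2000)                 # 20 ms of simulated time
T = np.where(t < 1e-3, 1.0, 0.0)         # 1 ms pulse of transmitter, then zero
p = open_probability(t, T)
g_syn = 20e-12 * 1000 * p                # G = (single-channel g) * N_receptors * P
```

While the pulse is on, P jumps up toward alpha/(alpha + beta); once the transmitter is gone, the first term is zero and P decays as exp(-beta * t), which is exactly the pulse-then-exponential shape described above.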
AUDIENCE: So like in a real synapse, the neurotransmitter wouldn't like go away, right?
MICHALE FEE: What's that? Oh, yeah, the neurotransmitter goes away. I forgot to say that. It's a super important point. There's one more step that I forgot to include. Habib is nodding at me like, yeah you forgot that.
What happens is the neurotransmitter goes into the cleft. And it gets taken up by neurotransmitter-- what they called again, Habib?
MICHALE FEE: Reuptake. Thank you. That neurotransmitter gets bound by receptors on glia and the presynaptic terminal and gets pumped out of the synaptic cleft. So that's what makes this go away, in addition to diffusion. OK.
AUDIENCE: [INAUDIBLE] like the time dependence of the concentration--
MICHALE FEE: So this is-- in the full model you do. And, in fact, what this really looks like is this kind of turns on more slowly and then goes away with an exponential. But I'm just kind of walking you through this super simplified model. That's my mental model for how a synapse works. OK? People who actually work on synapses would probably laugh at me, but that's kind of how I think of it. OK? Lena.
AUDIENCE: Is that N the binding site?
MICHALE FEE: Oh, yeah, the N, the N is really cool. What do you think it means?
AUDIENCE: Binding sites?
MICHALE FEE: Yes, it's the number of binding sites. So you can see that if the receptor requires two neurotransmitter molecules to bind before it opens, then you have a concentration squared. And that has really important consequences on the way it works, because what it does is it makes the receptor very sensitive to high concentrations of neurotransmitter.
So they don't respond to the little leakage of some residual neurotransmitter left around that the reuptake systems haven't dealt with yet. But when a vesicle releases, then you have neurotransmitter at a very high concentration. And this thing makes it only sensitive to the peak of the neurotransmitter concentration. OK. Yes.
AUDIENCE: With the probably [INAUDIBLE] number of [INAUDIBLE]
MICHALE FEE: Here. In this phase, does this N matter? Is that your question?
MICHALE FEE: Yeah, well, so you know the answer to that question. Here, you're asking if here this N matters? What's that concentration here?
AUDIENCE: Oh, it's 0.
MICHALE FEE: So what's 0 to the N?
MICHALE FEE: 0. OK, next, so we just did all of that. I better speed up. Let's talk about convolution.
So this is what happens when a single action potential comes down the axon. The presynaptic terminal releases neurotransmitter. Boom, you get a pulse of probability of the receptor being open. That, you recall, is just proportional-- yikes, where'd it go. You just multiply that probability by some constants to get the conductance. So that is what the conductance looks like. OK?
All right, now, so there's our input spike. Our input neuron, our presynaptic neuron generates a spike. And that produces a response, which is a conductance that looks like a pulse and an exponential decay. OK?
Now, what do you think would happen if the presynaptic neuron generated a bunch of spikes like this? Good. Just does it each time. OK? And we're ignoring fancy effects like paired pulse depression or facilitation or things like that. OK? There are all kinds of interesting things that synapses can do where there's one pulse and then there's some residual calcium left in the presynaptic terminal. And that makes the next pulse produce an even bigger response. Or sometimes one pulse will use up all the vesicles that are bound. And so the next pulse will come in and there won't be enough vesicles and this will be smaller.
We're just going to ignore all of those complications right now. And we're just going to think about how to model a postsynaptic response given a presynaptic input. And we're going to use what's called a linear model.
We're going to use convolution. And we're going to call this single event right here, we're going to call that the impulse response. The input is an impulse. And the response is the impulse response. And this is also called a linear kernel.
The response to multiple inputs can be modeled as a convolution. It looks a little messy. But I'm going to explain how to just visualize it very simply. All right? So here's how you think about it. Notice that the response of the system is just this operation where we're multiplying the kernel times the stimulus and integrating over time. OK?
So what we're going to do is-- the way to think about this is to take the kernel K, flip it backwards and plot it there. OK? Now notice that what this thing does is it says I'm going to integrate the product of the kernel and the stimulus-- notice I'm integrating over tau. But this has a tau and that has a t minus tau.
So what I'm doing is I'm multiplying at a particular-- and notice that it's shifted by time t. OK? So what I'm going to do is I'm going to put the kernel at time t. And I'm going to integrate the product of the kernel and the stimulus. So what do I get if I multiply this times this?
MICHALE FEE: 0. And so at that time, I write down a zero. OK? Now, I'm going to change t. And I'm going to shift this over and multiply them together. What do I get? Good. Another 0.
Now let's shift it again. I'm just increasing t here. I'm not integrating over t. I'm integrating over tau. So I just shift this a little bit. Good. And now when I multiply them together, what do I get?
MICHALE FEE: Like that [AUDIO OUT] good. I just get the area under that curve, and integrate. And I get a point here.
Now, let's shift it again. Good. Now, multiply them together. What do I get? Slightly smaller integral. And I plot that there. OK?
Shift it again. Integrate. Plot another point. Shift it again. Integrate. And if I keep shifting, the product is 0 and the integral is 0.
OK, so you can see that I can take this kernel, convolve it with a stimulus that's an impulse. And I get the kernel. I get the impulse response. Does that make sense?
So you can see what I'm doing here is I'm taking the stimulus at time t. And as I integrate over tau, I'm going backwards here, and I'm going forwards here. So starting from here, I'm integrating like this. I'm multiplying these like this. And then integrating. OK? And that's why you flip it over. OK?
So it's very easy to picture what it does. So when somebody shows you a linear kernel and asks you to convolve it with a stimulus, the first thing you do is you just mentally flip it backwards and slide it over the stimulus and integrate the product at each different position. OK?
Now, you can see that when you do that, when you convolve that kernel with a pulse, you just recover the impulse response. OK? So now, let's convolve our linear kernel-- I'm plotting it flipped backwards now-- with a single pulse. So Daniel made these little demos for us. And the resulting conductance is here. OK?
Now, that was really obvious and easy, right? We didn't need to have a Matlab program simulate that for us, right? But what about this? Here, we have a spike train. There's the postsynaptic response from one spike. Now, to get the postsynaptic response from a bunch of spikes, we can just convolve the kernel with this spike train. And let's see what happens.
Boom. Response from the first spike. Boom. Boom. Boom, boom, boom. OK? And that is actually really, really close to what it looks like when a train of pulses comes into a neuron. OK?
What happens is the first spike produces a conductance response. When you have two spikes close together, the conductance from the first spike has not decayed yet when the second spike hits. And so that comes in and adds to it.
And it adds linearly. So if this is halfway down, you add a full impulse response on top of it. And now the previous one and the current one are decaying back to zero. And you can add as many of those on top of each other as you want.
And it turns out this is super easy to do in Matlab. You're going to learn how to do this. This idea is so fundamental. This idea of convolution, we're going to use to describe receptive fields. We're going to use it to describe filtering when we start getting to processing data. It's incredibly useful and very powerful. OK? Yes.
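In code, the whole flip-slide-multiply-integrate procedure collapses into one call to a convolution routine. Here is a minimal Python sketch of the demo just described; the time step, spike times, and 5 ms decay constant are arbitrary choices for illustration:

```python
import numpy as np

dt = 1e-4                                # 0.1 ms time step (arbitrary choice)
t = dt * np.arange(1000)                 # 100 ms of simulated time

# presynaptic spike train: a unit impulse at each spike time
spikes = np.zeros_like(t)
spikes[[100, 300, 350, 400, 420, 440]] = 1.0

# linear kernel (impulse response): instantaneous rise, exponential decay
tau = 0.005                              # 5 ms conductance decay (illustrative)
kernel = np.exp(-dt * np.arange(500) / tau)

# postsynaptic conductance = spike train convolved with the kernel;
# overlapping responses just add linearly, as in the demo
g = np.convolve(spikes, kernel)[:len(t)]
```

An isolated spike reproduces the kernel exactly, and when two spikes arrive within a few milliseconds of each other, the second response rides on top of whatever is left of the first.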
MICHALE FEE: This linear kernel reflects that response, the conductance response, of the postsynaptic neuron to a single spike, single presynaptic spike.
MICHALE FEE: Because if you have a bunch of single spikes like this, the answer is really obvious, right? It's just one of those, one of those little exponentials for every spike that comes in. But it's less obvious when you have a complex spike train like this.
AUDIENCE: [INAUDIBLE] on top of that. So it's just [INAUDIBLE]
MICHALE FEE: Yeah, the linear convolution tells you how the postsynaptic neuron is going to respond to a train of action potentials that are overlapped, where the response has not gone away to the first spike by the time the second spike arrives.
AUDIENCE: So because the kernel can have like different shapes.
MICHALE FEE: Yeah, the kernel can have different shapes. I've just chosen a particularly simple one, because actually an exponentially decaying kernel turns out to be really important. It's actually a low pass filter.
AUDIENCE: So be it's a linear [INAUDIBLE]
MICHALE FEE: Yes. The thing you're using as the kernel in a convolution is always called a linear kernel, because the convolution is a linear operation. It's just terminology.
MICHALE FEE: Just focus on what it does and how it works, rather than the names. Just call it impulse response if that's easier. OK?
All right, let's push on. I want to introduce an idea of synaptic saturation. And I'm getting really worried about the crayfish. OK, synaptic saturation.
OK, so remember, we introduced the idea of a two-compartment model. So last time we talked about a model in which you have a soma and a dendrite. And you simplify the dendrites just by writing it down as another little piece of membrane or another little cellular compartment that's connected to the soma through a resistor. And now, you can write a model for the dendritic compartment that looks just like a capacitor and a conductance with a reversal potential.
And you have the same kind of model-- capacitor, conductance, reversal potential, battery-- for the soma. And, of course, those two compartments are connected to each other through a resistor that represents the axial resistance of the piece of dendrite that's really connecting them. OK? So that's called a two-compartment model.
And what we're going to do is just think briefly about how to think about what this looks like when you add a synapse to the dendrite. OK? And what we're going to study is how the voltage in the dendrite changes as a function of the amount of excitatory conductance that you add.
So we're going to start by-- we're doing steady state. So we don't need to worry about our capacitor. So we can actually just unsolder them and take them out of our circuit. And we're going to study the voltage response in the dendrite. So we're going to also throw away our soma and just ask, how does the dendrite respond to this synaptic input as a function of the amount of excitatory conductance?
And what I'm going to show you is just that the voltage change in the dendrite with zero conductance-- of course, it's sitting there at the potassium reversal potential, or E leak. And as you add conductance, it corresponds to making that conductance bigger, making that resistor smaller. You're basically attaching the battery to the inside of the cell. And you can see that what happens is as you add more and more conductance, as you put more and more neurotransmitter onto that receptor, or have more and more neurotransmitter receptors, the voltage response goes up and then saturates.
And it's really obvious why that happens, right? Once this resistor gets small enough, meaning you've added enough conductance, the inside of the cell is connected to the battery. And the voltage inside the cell just can't go any higher. It is forced to E synapse, the reversal potential of the synapse. And it can't go any higher. And that's why no matter how much excitatory conductance you add, the voltage inside the dendrite cannot go above E synapse. And that's called synaptic saturation.
And I was going to show you the derivation of this. It's just very simple. You just write down Kirchhoff's current law, substitute the equation for synaptic current, leak current, solve for voltage as a function of G synapse. And you can write down an approximation for the case of G synapse much smaller than G leak. And what you find is it's linear.
And so there's a linear part to the voltage change at small conductances. And you can write down an approximation at high synaptic conductance. And you show that it approaches-- the voltage approaches E synapse.
OK, so I'm not going to go through the math. But it's there. You don't have to be able to derive it yourself. But what I want you to understand is that for small synaptic conductance, the voltage responds linearly. But for high synaptic conductance, it saturates. OK?
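The steady-state result is easy to check numerically. Here is a sketch in Python; the leak conductance and reversal potentials are illustrative values, not parameters from any real neuron:

```python
import numpy as np

E_leak, E_syn = -75.0, 0.0      # mV: leak and excitatory reversal potentials
g_leak = 10.0                   # nS leak conductance (illustrative value)

def v_dendrite(g_syn):
    """Steady-state dendritic voltage from Kirchhoff's current law:
    g_syn * (E_syn - V) + g_leak * (E_leak - V) = 0, solved for V."""
    return (g_leak * E_leak + g_syn * E_syn) / (g_leak + g_syn)

# small g_syn: response is linear, dV ~ (E_syn - E_leak) * g_syn / g_leak
# large g_syn: V saturates at E_syn no matter how much conductance you add
v = v_dendrite(np.linspace(0.0, 1000.0, 11))
```

Plotting `v` against the conductance gives exactly the curve on the slide: linear at first, then flattening out at the synaptic reversal potential.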
All right, and now, I want to tell you a story about inhibition. And the basic story is that we can add inhibition to-- so in real neurons in the brain, inhibition sometimes connects to dendrites. And sometimes inhibitory synapses connect to somata. And they're actually different kinds of inhibitory neurons that preferentially connect onto dendrites and others that preferentially connect onto the somata. And it turns out there's a really interesting story about how that inhibition has a different effect whether it's connected to the dendrite or connected to the soma, right?
So you see what I've done here. I've got a dendrite that has an excitatory synapse. That's here. And it has an inhibitory synapse. Or-- so we can analyze this case-- or we consider the case where the excitatory synapse is still on the dendrite, but the inhibitory synapse comes onto the soma. And it turns out those two things do something very interesting.
And this was first shown in the crayfish. The crayfish is a really cool model system, because it has very stereotyped behaviors. And one of its interesting stereotyped behaviors is its escape reflexes. It has three different kinds of escape reflexes that involve different what are called command neurons that get sensory input and drive motor output. And one of these particular neurons is where this effect about inhibition was first shown. It's called the LG. There are LG neurons and MG neurons that drive two different kinds of escape reflexes.
So here are the two different kinds of escape reflexes. The medial giant neuron drives the MG escape, which is if you touch the crayfish on its nose, it flicks its tail and goes backwards. The LG escape is when you touch the crayfish on its tail, and it flicks its tail in a way that makes it go forward. OK?
So let's look at what those behaviors look like. We're going to post this video. This is from the Journal of Visualized Experiments. And there's a nice-- it's actually really nice--
- The recordings of neural and muscular field potentials, electronic recordings from a pair of bath electrodes are synchronized with high speed video recordings and display--
MICHALE FEE: So this video shows you actually how to set up a tank with electrodes so you can record the signals from these neurons in a crayfish while it's behaving. So you should watch the video.
MICHALE FEE: But I'm going to show you-- I'm going to show you what--
- Here is a look at a series of single, high speed video frames and corresponding electric field recordings for an escape tail flip in response to--
MICHALE FEE: Can you hear that?
- A stimulus delivered to the head or tail of a juvenile crayfish.
MICHALE FEE: So that was the MG response. And he puts two electrodes into the tank. And you can actually measure signals in the tank from that neuron firing.
- The giant neuron and the phasic deflection that follows enables non-ambiguous identification of the tail flip as mediated by giant neuron activity. The backward movement shown in the video traces determines the identity of the activated neural circuit. Here is a tail flip mediated by the--
MICHALE FEE: So here, you touch him on the tail--
- Tactile stimulus was applied to the tail. Upward and forward motion--
MICHALE FEE: You can see it's a different movement that makes him--
- Synchronized electronic trace, displaying the giant spike and the large phasic initial deflection determines the identity of the activated neural circuit. This video demonstrates the--
MICHALE FEE: Here's a different escape reflex that doesn't involve either of those two neurons. Here he spends a little more time thinking before he figures out what to do.
- A giant spike and consists of much--
MICHALE FEE: OK. All right, so let me just-- I probably won't get very far in explaining the inhibition part, but you can at least understand a little bit more about this behavior. So what's really cool is that inhibition is used to regulate these two behaviors.
So the idea is that-- so first let me just say that that LG neuron, the lateral giant neuron, is known as a command neuron. And that is because if you activate that neuron, if you just depolarize that one neuron, it activates that entire escape reflex. And if you hyperpolarize that neuron, inhibit that neuron, you completely suppress the escape reflex.
And now, what's really interesting is that that neuron has inhibitory inputs that control the probability that the animal will elicit this escape reflex. And that kind of modulation of that behavior is really interesting. And it has some interesting subtleties to it.
So first of all, if the animal is touched on the nose and elicits an MG response, the tail flips and the animal goes backwards, right? But now, when it's going backwards, if it bumps into something on his backside, you don't want that to immediately trigger an LG escape. He bumps into something, and he's like boom, boom, boom, right? That would be terrible. So what happens is that when the MG neuron fires and initiates that backwards movement, it sends a signal that inhibits the LG response. OK?
OK, other cool things. When the animal is restrained, when you pick the animal up, if he doesn't get away from you with his first escape attempt and you hold him, you can touch him on the nose and it won't elicit an escape reflex. So when the animal is held, the response probability goes way down. Maybe he's holding off until he feels your grip loosen a little bit and then he'll try. OK? So there's no point in wasting escape attempts when you're being restrained. They're very energetically expensive.
Another interesting modulation is that the LG escape response is suppressed while the animal is eating. This is threshold. This is how much you have to poke in the nose or in the tail for him to escape when he's just wandering around. But then while he's eating, he's getting food. And so the threshold for eliciting that escape reflex goes up. He's like, sorry, I'm eating. Leave me alone. OK?
It's not just being hungry, right? Because if he's hungry and searching for food, there's no increased threshold. So it's really because he's eating. He's found a food source. He doesn't want to leave it. So there's a higher-- he won't leave until there's like more danger.
So all these different kinds of modulation and inhibition of the behavior are controlled by inhibitory inputs projecting onto this LG neuron. So there are two kinds of escape modulation. One is absolute.
When the animal is engaged in an escape reflex, it is impossible to activate-- when the animal is engaged in an MG escape, it's impossible to activate the LG escape. No matter how much he gets poked in the tail, he will not initiate an LG. So this kind of modulation is absolute.
There's this other kind of modulation, where the likelihood of escape is just reduced, but the animal is still able to initiate escape if the danger is high enough, if the stimulus is high enough. And that's really the crux of the difference is that some kinds of suppression of a behavior are absolute. No matter how strong the stimulus is, you don't want to allow that neuron to spike.
The other case is, I'm just going to turn down the probability that I generate an escape, because I'm eating. But if it looks dangerous enough, I'm still going to go. But I'm just going to modulate, gently, the probability. And it turns out that's the crux of the difference.
So it turns out the LG neuron has two sites on it where you get sources of inhibition. There are a bunch of inhibitory inputs on the proximal dendrite near the spike initiation zone, which is right here. And that's recurrent inhibition, because it's coming from the other escape neuron. So it's recurrent within the motor system. And the other inhibitory synapses come out here on the dendrite. And they're coming from higher brain areas. And it's called tonic inhibition for historical reasons.
Now, the previous hypothesis was that those inputs here allowed inhibition to control different branches of the input. But it turns out the answer is very simple. So let me just summarize this.
So you have one inhibitory input that's coming from the other escape neuron that lands right there on the part of this neuron that initiates spikes. The other input that suppresses the response during feeding is coming far out on the dendrite at the same location where those excitatory inputs, the sensory inputs, are coming in. So that's what the circuit looks like.
So the input that absolutely suppresses the response of this neuron is coming right on the soma, right where the spike is generated. And the tonic inputs that sort of adjust the probability of spiking are coming out here where the excitatory inputs are.
And so they developed the simple equivalent circuit model where they have a dendrite out here. Out here on the dendrite, you have an excitatory input. And you can have recurrent input on the soma.
And there's another model. So in one of these models, they're modeling the proximal inhibition. In the other model, they put the inhibitory synapse out here on the dendrite. And they ask, how are those two different sources of inhibition different? What do they do differently to the neuron?
And they just did this model. They analyzed it mathematically, exactly like the way I just showed you. That very simple calculation works. You just use Kirchhoff's current law. And you write down the voltage response to the excitatory synapse as a function of the strength of these two different kinds of inhibition.
And here's what you find. The proximal inhibition suppresses the response to the excitatory input by the same fraction no matter how strong the excitation is. So if you put in strong inhibition, you can always cause that excitatory input to be suppressed to zero. So inhibitory input proximal to the soma can suppress the response of a neuron to an excitatory input, no matter how strong the excitatory input is.
On the other hand, inhibition out on the dendrite for any given amount of inhibition, there's always an excitation that's strong enough to allow the response to get through. So this shows the amount of suppression as a function of excitatory input. And you can see that no matter how strong the inhibition, there's always an amount of excitation that will overcome the inhibition.
And you can also do that analysis in a much more complicated model. And you get exactly the same results. But what's really cool is that you can understand this just from this very simple two-compartment model.
And I wish I had a little bit more time to go through those models and explain why that works. But basically, proximal inhibition is absolute. You can always make the inhibition win. Distal inhibition kind of gently varies the effect of the excitatory input.
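The two-compartment comparison can be reproduced with the same kind of steady-state calculation. Here is a sketch in Python; all the conductances and reversal potentials are illustrative numbers, and inhibition is modeled as a shunting conductance with its reversal at the resting potential:

```python
import numpy as np

E_L, E_e, E_i = -70.0, 0.0, -70.0    # mV; inhibitory reversal at rest (shunting)
g_ls = g_ld = 10.0                   # nS leak conductance, soma and dendrite
g_c = 20.0                           # nS axial (coupling) conductance

def soma_voltage(g_e, g_i, site):
    """Steady-state soma voltage of the two-compartment model.

    g_e  -- excitatory conductance on the dendrite
    g_i  -- inhibitory conductance, placed at `site` ('soma' or 'dendrite')
    """
    gi_d = g_i if site == 'dendrite' else 0.0
    gi_s = g_i if site == 'soma' else 0.0
    # Kirchhoff's current law in each compartment, as A @ [V_d, V_s] = b
    A = np.array([[g_ld + g_e + g_c + gi_d, -g_c],
                  [-g_c, g_ls + g_c + gi_s]])
    b = np.array([g_ld * E_L + g_e * E_e + gi_d * E_i,
                  g_ls * E_L + gi_s * E_i])
    v_d, v_s = np.linalg.solve(A, b)
    return v_s

# proximal (somatic) inhibition caps the soma response however big g_e gets;
# distal (dendritic) inhibition can always be overcome by enough excitation
v_prox = soma_voltage(1e6, 1000.0, 'soma')
v_dist = soma_voltage(1e6, 1000.0, 'dendrite')
```

Even with an enormous excitatory conductance, the somatic shunt holds the soma near rest, while the same inhibitory conductance out on the dendrite lets a strong excitatory input drive a large somatic depolarization, which is the crayfish result in miniature.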
All right, so that's what we covered-- synapses, a model, convolution, synaptic saturation, and the different functions of distal and proximal inhibition.