9.40 | Spring 2018 | Undergraduate

Introduction to Neural Computation

Lecture Notes

Lec 1: Overview and Ionic Currents (PDF - 1.7MB)

  • To understand how the timescale of diffusion relates to length scales
  • To understand how concentration gradients lead to currents (Fick’s First Law)
  • To understand how charge drift in an electric field leads to currents (Ohm’s Law and resistivity)

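A quick numerical feel for the first objective: in one dimension, the time for a particle to diffuse a distance x is roughly t ≈ x^2 / (2D). A minimal Python sketch, using an assumed diffusion coefficient of about 2e-9 m^2/s (typical of a small ion in water; an illustrative value, not one from the notes):

    # Rough diffusion timescales, t = x^2 / (2*D).
    # D is an assumed order-of-magnitude value for a small ion in water.
    D = 2e-9  # diffusion coefficient, m^2/s

    for label, x in [("across a synaptic cleft (20 nm)", 20e-9),
                     ("across a soma (10 um)", 10e-6),
                     ("along an axon (1 mm)", 1e-3)]:
        t = x**2 / (2 * D)
        print(f"{label}: ~{t:.3g} s")

    # Diffusion is fast over nanometers but hopeless over millimeters,
    # which is why long-range signaling uses propagating action potentials.
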
Lec 2: RC Circuit and Nernst Potential (PDF - 2.7MB)

  • To understand how neurons respond to injected currents
  • To understand how membrane capacitance and resistance allow neurons to integrate or smooth their inputs over time (RC model)
  • To understand how to derive the differential equations for the RC model
  • To be able to sketch the response of an RC neuron to different current inputs
  • To understand where the ‘batteries’ of a neuron come from

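A minimal sketch of the RC model from these objectives: forward-Euler integration of the membrane equation C dV/dt = -(V - E_L)/R + I_e. All parameter values are assumed for illustration.

    import numpy as np

    # Forward-Euler integration of the RC membrane equation:
    #   C dV/dt = -(V - E_L)/R + I_e
    C, R, E_L = 100e-12, 100e6, -70e-3    # 100 pF, 100 MOhm, -70 mV (assumed)
    dt, T = 0.1e-3, 0.3                   # 0.1 ms steps, 300 ms total
    t = np.arange(0, T, dt)
    I_e = np.where((t > 0.1) & (t < 0.2), 100e-12, 0.0)  # 100 pA current step

    V = np.empty_like(t)
    V[0] = E_L
    for i in range(1, len(t)):
        V[i] = V[i-1] + dt * (-(V[i-1] - E_L) / R + I_e[i-1]) / C

    # V relaxes toward E_L + I_e*R with time constant tau = R*C = 10 ms:
    print(f"tau = {R*C*1e3:.0f} ms, peak V = {V.max()*1e3:.1f} mV")
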
Lec 3: Nernst Potential and Integrate and Fire Models (PDF - 4.1MB)

  • To be able to construct a simplified model neuron by replacing the complex spike-generating mechanism of the real neuron (the HH model) with a simple threshold-and-reset mechanism
  • To understand what neurons spend most of their time doing: integrating inputs in the intervals between spikes
  • To be able to create a quantitative description of the firing rate of neurons in response to current inputs
  • To provide an easy-to-implement model that captures the basic properties of spiking neurons

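A sketch of a leaky integrate-and-fire neuron: the RC equation plus a threshold-and-reset rule, scanned over constant inputs to trace a crude f-I curve. Thresholds and parameter values are illustrative assumptions.

    import numpy as np

    # Leaky integrate-and-fire: RC dynamics plus threshold-and-reset.
    C, R, E_L = 100e-12, 100e6, -70e-3    # assumed values
    V_th, V_reset = -50e-3, -70e-3
    dt = 0.1e-3

    def spike_count(I_e, T=1.0):
        """Number of spikes in T seconds for a constant input current I_e."""
        V, n = E_L, 0
        for _ in range(int(T / dt)):
            V += dt * (-(V - E_L) / R + I_e) / C
            if V >= V_th:          # threshold crossing -> spike, then reset
                V = V_reset
                n += 1
        return n

    # Firing begins once I_e * R exceeds V_th - E_L (here 200 pA):
    for I in [150e-12, 250e-12, 400e-12]:
        print(f"I = {I*1e12:.0f} pA -> {spike_count(I)} spikes/s")
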
Lec 4: Hodgkin-Huxley Model Part 1 (PDF - 6.3MB)

  • To be able to draw the circuit diagram of the HH model
  • Understand what a voltage clamp is and how it works
  • Be able to plot the voltage and time dependence of the potassium current and conductance
  • Be able to explain the time and voltage dependence of the potassium conductance in terms of Hodgkin-Huxley gating variables

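The time and voltage dependence of the potassium conductance can be sketched with the gating variable n, where g_K = gbar_K * n^4 and dn/dt = alpha_n(V)(1 - n) - beta_n(V) n. The rate functions below are the standard Hodgkin-Huxley fits (modern sign convention, V in mV, rates in 1/ms); the clamp protocol is an illustrative assumption.

    import numpy as np

    # Potassium conductance under a voltage step, as in a voltage clamp.
    def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)

    gbar_K = 36.0                 # mS/cm^2 (classic HH value)
    dt = 0.01                     # ms
    V_rest, V_step = -65.0, 0.0

    # Start at the resting steady state n_inf(V_rest)
    n = alpha_n(V_rest) / (alpha_n(V_rest) + beta_n(V_rest))
    V = V_step
    for _ in range(int(10 / dt)):  # clamp at 0 mV for 10 ms
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)

    # n relaxes toward n_inf(V_step); g_K rises sigmoidally due to n^4.
    print(f"n = {n:.3f}, g_K = {gbar_K * n**4:.1f} mS/cm^2")
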
Lec 5: Hodgkin-Huxley Model Part 2 (PDF - 3.3MB)

Lec 6: Dendrites (PDF - 3.2MB)

  • To be able to draw the ‘circuit diagram’ of a dendrite
  • Be able to plot the voltage in a dendrite as a function of distance for leaky and non-leaky dendrites, and understand the concept of a length constant
  • Know how the length constant depends on dendritic radius
  • Understand the concept of electrotonic length
  • Be able to draw the circuit diagram of a two-compartment model

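A sketch of the length-constant objectives: in a semi-infinite passive cable the steady-state voltage decays as V(x) = V(0) exp(-x/lambda), with lambda = sqrt(R_m * a / (2 * R_i)), so lambda grows as the square root of the radius a. The resistivity values below are assumed textbook-scale numbers.

    import numpy as np

    # Length constant and electrotonic length of a passive dendrite.
    R_m = 20e3   # specific membrane resistance, Ohm * cm^2 (assumed)
    R_i = 100.0  # intracellular resistivity, Ohm * cm (assumed)

    for a_um in [0.5, 2.0, 8.0]:            # dendritic radius in micrometers
        a = a_um * 1e-4                     # convert to cm
        lam = np.sqrt(R_m * a / (2 * R_i))  # length constant, cm
        L = 0.1 / lam                       # electrotonic length of a 1 mm dendrite
        print(f"a = {a_um} um: lambda = {lam*1e4:.0f} um, L(1 mm) = {L:.2f}")
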
Lec 7: Synapses (PDF - 3.1MB)

  • Be able to add a synapse to an equivalent circuit model
  • To describe a simple model of synaptic transmission
  • To be able to describe synaptic transmission as a convolution of a linear kernel with a spike train
  • To understand synaptic saturation
  • To understand the different functions of somatic and dendritic inhibition

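A sketch of synaptic transmission as the convolution of a linear kernel with a spike train, per the third objective. The exponential kernel and the spike times are illustrative assumptions; note how closely spaced spikes summate (this linear model omits the saturation discussed in the lecture).

    import numpy as np

    # Synaptic conductance g(t) = (kernel * rho)(t) for spike train rho.
    dt = 0.1e-3                          # 0.1 ms time step
    t_kernel = np.arange(0, 0.05, dt)    # 50 ms of kernel support
    tau_s = 5e-3                         # synaptic time constant (assumed)
    kernel = np.exp(-t_kernel / tau_s)   # exponential synaptic kernel

    T = 0.5
    spikes = np.zeros(int(T / dt))       # binary spike train
    for t_sp in [0.10, 0.105, 0.11, 0.30]:   # a burst, then an isolated spike
        spikes[int(t_sp / dt)] = 1.0

    # Each spike adds one kernel-shaped transient; nearby spikes summate.
    g = np.convolve(spikes, kernel)[:len(spikes)]
    print(f"peak g during burst: {g.max():.2f} (vs 1.00 for a single spike)")
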
Lec 8: Spike Trains (PDF - 2.6MB)

  • To understand the origin of extracellular spike waveforms and local field potentials
  • To understand how to extract local field potentials and spike signals by low-pass and high-pass filtering, respectively
  • To be able to extract spike times as a threshold crossing
  • To understand what a peri-stimulus time histogram (PSTH) and a tuning curve are
  • To know how to compute the firing rate of a neuron by smoothing a spike train

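A sketch of the spike-processing pipeline in these objectives: detect spikes as threshold crossings, then estimate a firing rate by smoothing the spike train with a Gaussian kernel. The synthetic "recording" below is an assumption standing in for real high-pass-filtered data.

    import numpy as np

    rng = np.random.default_rng(0)
    dt, T = 1e-4, 2.0
    n = int(T / dt)
    v = rng.normal(0, 1, n)                     # noise floor
    true_times = rng.uniform(0, T, 40)
    v[(true_times / dt).astype(int)] += 8.0     # embedded "spikes"

    thresh = 5.0                                # threshold in noise SDs
    crossings = np.flatnonzero((v[1:] >= thresh) & (v[:-1] < thresh)) + 1
    spike_train = np.zeros(n)
    spike_train[crossings] = 1.0

    # Gaussian kernel, normalized so the smoothed rate is in spikes/s.
    sigma = 50e-3
    tk = np.arange(-3 * sigma, 3 * sigma, dt)
    kernel = np.exp(-tk**2 / (2 * sigma**2))
    kernel /= kernel.sum() * dt
    rate = np.convolve(spike_train, kernel, mode="same")
    print(f"{len(crossings)} spikes detected, mean rate = {rate.mean():.1f} Hz")
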
Lec 9: Receptive Fields (PDF - 2.1MB)

  • To be able to mathematically describe a neural response as a linear filter followed by a nonlinear function.
    • A correlation of a spatial receptive field with the stimulus
    • A convolution of a temporal receptive field with the stimulus
  • To understand the concept of a Spatio-temporal Receptive Field (STRF) and the concept of ‘separability’
  • To understand the idea of a Spike-Triggered Average and how to use it to compute a Spatio-temporal Receptive Field and a Spectro-temporal Receptive Field (STRF)

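A sketch of the spike-triggered average (STA): simulate a linear-nonlinear neuron with a known temporal receptive field, then recover that field by averaging the stimulus segments preceding each spike. The model and all parameters are assumptions made so the recovered STA can be checked against ground truth.

    import numpy as np

    rng = np.random.default_rng(1)
    dt, n, lags = 1e-3, 200_000, 150

    stim = rng.normal(0, 1, n)                      # white-noise stimulus
    t = np.arange(lags) * dt
    true_rf = np.exp(-t / 0.02) * np.sin(2 * np.pi * t / 0.06)  # temporal RF

    # LN model: linear filtering, then a rectifying nonlinearity
    drive = np.convolve(stim, true_rf)[:n]
    p_spike = np.clip(0.05 * np.maximum(drive, 0), 0, 1)
    spikes = rng.random(n) < p_spike

    sta = np.zeros(lags)
    spike_idx = np.flatnonzero(spikes)
    spike_idx = spike_idx[spike_idx >= lags]
    for i in spike_idx:
        sta += stim[i - lags + 1 : i + 1]           # stimulus preceding the spike
    sta /= len(spike_idx)

    # For white noise, the STA is proportional to the true kernel
    # (time-reversed relative to how the window is stored here).
    r = np.corrcoef(sta, true_rf[::-1])[0, 1]
    print(f"{len(spike_idx)} spikes, corr(STA, reversed RF) = {r:.2f}")
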
Lec 10: Time Series (PDF - 4.5MB)

  • Spike trains are probabilistic (Poisson Process)
  • Be able to use measures of spike train variability
    • Fano Factor
    • Interspike Interval (ISI)
  • Understand convolution, cross-correlation, and autocorrelation functions
  • Understand the concept of a Fourier series

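A sketch of Poisson spike-train variability: for a homogeneous Poisson process, both the Fano factor of spike counts and the coefficient of variation of the interspike intervals should be near 1. Rate and window size are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(2)
    rate, T = 20.0, 1000.0                    # 20 Hz for 1000 s (assumed)

    # Poisson process: exponential ISIs with mean 1/rate
    isi = rng.exponential(1 / rate, int(rate * T * 1.2))
    spike_times = np.cumsum(isi)
    spike_times = spike_times[spike_times < T]

    counts, _ = np.histogram(spike_times, bins=np.arange(0, T + 1, 1.0))
    fano = counts.var() / counts.mean()       # var/mean of counts in 1 s windows

    isi_obs = np.diff(spike_times)
    cv = isi_obs.std() / isi_obs.mean()       # coefficient of variation of ISIs
    print(f"Fano factor = {fano:.2f}, ISI CV = {cv:.2f} (both ~1 for Poisson)")
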
Lec 11: Spectral Analysis Part 1 (PDF - 4.3MB)

  • Fourier series for symmetric and asymmetric functions
  • Complex Fourier series
  • Fourier transform
  • Discrete Fourier transform (Fast Fourier Transform - FFT)
  • Power spectrum

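A sketch of the discrete Fourier transform and power spectrum: a 10 Hz sinusoid buried in noise produces a clear spectral peak. The test signal and the periodogram normalization are illustrative choices, not the course's conventions.

    import numpy as np

    rng = np.random.default_rng(3)
    fs, T = 1000.0, 4.0                       # 1 kHz sampling, 4 s of data
    t = np.arange(0, T, 1 / fs)
    x = np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)

    X = np.fft.rfft(x)                        # FFT of a real-valued signal
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    power = np.abs(X) ** 2 / (fs * t.size)    # one periodogram convention

    print(f"peak at {freqs[np.argmax(power[1:]) + 1]:.2f} Hz")  # skip DC bin
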
Lec 12: Spectral Analysis Part 2 (PDF - 3.1MB)

  • Fourier Transform Pairs
  • Convolution Theorem
  • Gaussian Noise (Fourier Transform and Power Spectrum)
  • Spectral Estimation
    • Filtering in the frequency domain
    • Wiener-Khinchin Theorem
  • Shannon-Nyquist Theorem (and zero padding)
  • Line noise removal

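A numerical check of the convolution theorem for the DFT: circular convolution in the time domain equals pointwise multiplication in the frequency domain.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 256
    a, b = rng.normal(size=n), rng.normal(size=n)

    # Circular convolution computed directly from the definition ...
    circ = np.array([np.sum(a * np.roll(b[::-1], k + 1)) for k in range(n)])
    # ... and via the FFT (multiply spectra, transform back)
    via_fft = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

    print("convolution theorem holds:", np.allclose(circ, via_fft))
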
Lec 13: Spectral Analysis Part 3 (PDF - 2.2MB)

  • Brief review of Fourier transform pairs and convolution theorem
  • Spectral estimation
    • Windows and Tapers
  • Spectrograms
  • Multi-taper spectral analysis
    • How to design the best tapers (DPSS)
    • Controlling the time-bandwidth product
  • Advanced filtering methods

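A sketch of a multitaper spectral estimate using DPSS tapers from scipy.signal.windows.dpss, averaging the usual K = 2NW - 1 eigenspectra. The test signal and the choice NW = 4 are illustrative assumptions.

    import numpy as np
    from scipy.signal import windows

    rng = np.random.default_rng(5)
    fs, n = 1000.0, 2048
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * 100 * t) + rng.normal(0, 2, n)

    NW = 4                                          # time-bandwidth product
    tapers = windows.dpss(n, NW, Kmax=2 * NW - 1)   # shape (K, n)

    # One eigenspectrum per taper; averaging trades variance for bandwidth.
    eigenspectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    S = eigenspectra.mean(axis=0)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    print(f"spectral peak near {freqs[np.argmax(S)]:.1f} Hz")
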
Lec 14: Rate Models and Perceptrons (PDF - 3.9MB)

  • Derive a mathematically tractable model of neural networks (the rate model)
  • Building receptive fields with neural networks
  • Vector notation and vector algebra
  • Neural networks for classification
  • Perceptrons

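A sketch of the perceptron learning rule on a linearly separable toy problem (logical OR): update w by y*x whenever an example is misclassified. The learning rate and sweep count are arbitrary; the rule converges for any separable data.

    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([-1, 1, 1, 1])               # OR, with -1/+1 labels

    w = np.zeros(2)
    b = 0.0
    for _ in range(20):                        # sweep until convergence
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) <= 0:         # misclassified -> update
                w += yi * xi
                b += yi

    print("weights:", w, "bias:", b)
    print("predictions:", np.sign(X @ w + b))
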
Lec 15: Matrix Operations (PDF - 4.0MB)

  • Perceptrons and perceptron learning rule
  • Neuronal logic, linear separability, and invariance
  • Two-layer feedforward networks
  • Matrix algebra review
  • Matrix transformations

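A small example of a matrix as a transformation: a 2-D rotation by 45 degrees, whose columns show exactly where the basis vectors land.

    import numpy as np

    theta = np.pi / 4
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

    e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    # The columns of R are the images of the basis vectors.
    print("R @ e1 =", R @ e1)   # first column of R
    print("R @ e2 =", R @ e2)   # second column of R
    print("lengths preserved:", np.isclose(np.linalg.norm(R @ e1), 1.0))
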
Lec 16: Basis Sets (PDF - 2.8MB)

  • More on two-layer feedforward networks
  • Matrix transformations (rotated transformations)
  • Basis sets
  • Linear independence
  • Change of basis

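A small change-of-basis example: with an orthonormal basis collected as the columns of B, the coordinates of a vector v in that basis are simply B^T v (since B^{-1} = B^T for orthonormal B).

    import numpy as np

    b1 = np.array([1.0, 1.0]) / np.sqrt(2)
    b2 = np.array([-1.0, 1.0]) / np.sqrt(2)
    B = np.column_stack([b1, b2])        # basis vectors as columns

    v = np.array([2.0, 0.0])
    coeffs = B.T @ v                     # coordinates in the new basis
    print("coordinates in new basis:", coeffs)
    print("reconstruction:", B @ coeffs) # back in the standard basis
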
Lec 17: Principal Components Analysis (PDF - 4.8MB)

  • Eigenvectors and eigenvalues
  • Variance and multivariate Gaussian distributions
  • Computing a covariance matrix from data
  • Principal Components Analysis (PCA)

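A sketch of PCA by eigendecomposition of the sample covariance matrix. The correlated 2-D Gaussian data are simulated so the recovered principal component can be checked against the generative direction.

    import numpy as np

    rng = np.random.default_rng(6)
    n = 5000
    latent = rng.normal(0, 1, n)
    data = np.column_stack([latent + 0.3 * rng.normal(size=n),
                            2 * latent + 0.3 * rng.normal(size=n)])

    Xc = data - data.mean(axis=0)            # center the data
    C = Xc.T @ Xc / (n - 1)                  # sample covariance matrix
    evals, evecs = np.linalg.eigh(C)         # eigh returns ascending eigenvalues
    order = np.argsort(evals)[::-1]
    evals, evecs = evals[order], evecs[:, order]

    print("variance explained:", evals / evals.sum())
    # Should be ~ (1, 2)/sqrt(5), up to sign:
    print("first principal component:", evecs[:, 0])
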
Lec 18: Recurrent Networks (PDF - 2.2MB)

  • Mathematical description of recurrent networks
  • Dynamics in simple autapse networks
  • Dynamics in fully recurrent networks
  • Recurrent networks for storing memories
  • Recurrent networks for decision making (winner-take-all)

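A sketch of the simplest recurrent network, a single autapse: tau dr/dt = -r + w*r + I(t). Feedback gain w < 1 gives leaky decay, w = 1 integrates its input and holds it (the neural-integrator case taken up in Lec 19), and w > 1 grows without bound in this linear model. Parameters are illustrative.

    import numpy as np

    tau, dt, T = 20e-3, 1e-4, 1.0
    t = np.arange(0, T, dt)
    I = np.where(t < 0.1, 1.0, 0.0)          # brief input pulse

    for w in [0.5, 1.0, 1.05]:
        r = 0.0
        trace = np.empty(t.size)
        for i in range(t.size):
            r += dt / tau * (-r + w * r + I[i])
            trace[i] = r
        # decay (w<1), persistence (w=1), or runaway growth (w>1)
        print(f"w = {w}: r(0.1 s) = {trace[int(0.1/dt)-1]:.2f}, "
              f"r(1 s) = {trace[-1]:.2f}")
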
Lec 19: Neural Integrators (PDF - 2.0MB)

  • Recurrent neural networks and memory
  • The oculomotor system as a model of short term memory and neural integration
  • Stability in neural integrators
  • Learning in neural integrators

Lec 20: Hopfield Networks (PDF - 2.7MB)

  • Recurrent networks with lambda greater than one
    • Attractors
  • Winner-take-all networks
  • Attractor networks for long-term memory (Hopfield model)
  • Energy landscape
  • Hopfield network capacity

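A sketch of a Hopfield network: store binary (+1/-1) patterns with the Hebbian outer-product rule, then recall one from a corrupted cue by asynchronous sign updates. Network size, pattern count, and corruption level are illustrative choices (P/N = 0.05, well below the ~0.14 capacity limit).

    import numpy as np

    rng = np.random.default_rng(7)
    N, P = 100, 5
    patterns = rng.choice([-1, 1], size=(P, N))

    # Hebbian weights, no self-connections
    W = sum(np.outer(p, p) for p in patterns) / N
    np.fill_diagonal(W, 0)

    cue = patterns[0].copy()
    flip = rng.choice(N, 15, replace=False)   # corrupt 15 of 100 bits
    cue[flip] *= -1

    state = cue.copy()
    for _ in range(5):                        # a few asynchronous sweeps
        for i in rng.permutation(N):
            state[i] = 1 if W[i] @ state >= 0 else -1

    overlap = (state @ patterns[0]) / N       # +1 means perfect recall
    print(f"overlap with stored pattern: {overlap:.2f}")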