1 The Column Space of \(A\) Contains All Vectors \(A\boldsymbol{x}\)
2 Multiplying and Factoring Matrices 
3 Orthonormal Columns in \(Q\) Give \(Q'Q = I\)
4 Eigenvalues and Eigenvectors
5 Positive Definite and Semidefinite Matrices
6 Singular Value Decomposition (SVD)
7 Eckart-Young: The Closest Rank \(k\) Matrix to \(A\)
8 Norms of Vectors and Matrices
9 Four Ways to Solve Least Squares Problems
10 Survey of Difficulties with \(A\boldsymbol{x} = \boldsymbol{b}\)
11 Minimizing \(\|\boldsymbol{x}\|\) Subject to \(A\boldsymbol{x} = \boldsymbol{b}\)
12 Computing Eigenvalues and Singular Values
13 Randomized Matrix Multiplication
14 Low Rank Changes in \(A\) and Its Inverse
15 Matrices \(A(t)\) Depending on \(t\), Derivative = \(dA/dt\)
16 Derivatives of Inverse and Singular Values
17 Rapidly Decreasing Singular Values
18 Counting Parameters in SVD, LU, QR, Saddle Points
19 Saddle Points Continued, Maxmin Principle
20 Definitions and Inequalities
21 Minimizing a Function Step by Step
22 Gradient Descent: Downhill to a Minimum
23 Accelerating Gradient Descent (Use Momentum)
24 Linear Programming and Two-Person Games
25 Stochastic Gradient Descent
26 Structure of Neural Nets for Deep Learning
27 Backpropagation: Find Partial Derivatives
28 Computing in Class [No video available]
29 Computing in Class (cont.) [No video available]
30 Completing a Rank-One Matrix, Circulants!
31 Eigenvectors of Circulant Matrices: Fourier Matrix
32 ImageNet is a Convolutional Neural Network (CNN), The Convolution Rule
33 Neural Nets and the Learning Function
34 Distance Matrices, Procrustes Problem
35 Finding Clusters in Graphs
36 Alan Edelman and Julia Language