A message from Prof. Strang:
This is an introduction to several videos that I have added to MIT OpenCourseWare (OCW) under the heading A Vision of Linear Algebra. They come from recent experience in teaching the course 18.06 Linear Algebra.
Since the OCW site for 18.06 was first published in the early 2000s, the reception has been wonderful. The number of viewers of the videos posted on the OCW site and on OCW’s YouTube channel exceeds 20 million! Very generous messages still come from all over the world. I am extremely grateful!
Naturally, the 18.06 course has added new topics and new emphasis over the years. My textbook Introduction to Linear Algebra is in its sixth and final edition. An alternative text is Linear Algebra for Everyone. (I am happy to send desk copies of either book to faculty.) The changes in today’s 18.06 course come mostly at the beginning and the end. You will see how they fit naturally—and the instructor is free to adopt or to skip over those changes.
The text always begins with vectors and their linear combinations and dot products. The new and more active step is to introduce linear independence early (by small examples!). It becomes natural to put the vectors into a matrix and to begin to visualize the “column space” of that matrix A. We are touching on the key ideas of a basis for the column space (and the dimension of that space), but the full story waits for Chapter 3. Chapter 1 does include one new factorization, the column-row factorization A = CR: the columns of C are independent columns taken from A, and R tells how to combine them to recover every column of A. That factorization explains the first magic fact of linear algebra: dimension of column space = dimension of row space.
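As a small illustration (my own sketch in Python, not from the text), here is A = CR computed with NumPy and SymPy; the 3-by-3 rank-2 matrix is a hypothetical example:

    import numpy as np
    from sympy import Matrix

    # A hypothetical 3x3 matrix of rank 2 (my own example).
    A = np.array([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]])

    # Row reduction finds the pivot columns: those are the independent
    # columns of A, and they go into C.
    R0, pivots = Matrix(A).rref()
    C = A[:, list(pivots)]

    # R is the nonzero part of the reduced row echelon form; its columns
    # tell how to combine the columns of C to rebuild every column of A.
    R = np.array(R0.tolist(), dtype=float)[:len(pivots), :]

    assert np.allclose(A, C @ R)          # A = CR
    print(C.shape[1], R.shape[0])         # column rank = row rank = 2

Since every column of A is a combination of the r columns of C, and every row of A is a combination of the r rows of R, both dimensions equal r.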
Moving toward the end of the course, we certainly meet eigenvalues and their applications! But there is another topic, singular values, which now has major importance in data science. It involves two sets of orthogonal vectors: inputs vᵢ and outputs uᵢ. Their connection is Avᵢ = σᵢuᵢ. The v's are perpendicular eigenvectors of AᵀA and the u's are perpendicular eigenvectors of AAᵀ. The positive numbers σᵢ are the square roots of the nonzero eigenvalues shared by those two matrices. Singular values are important because matrices of data are not symmetric or even square!
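A minimal numerical check of these relations (my own sketch, using NumPy's built-in SVD; the 5-by-3 random matrix is a hypothetical stand-in for a data matrix):

    import numpy as np

    # A hypothetical rectangular "data" matrix: 5 samples, 3 features.
    A = np.random.default_rng(0).standard_normal((5, 3))

    # A = U diag(sigma) V^T, with orthonormal columns in U and V.
    U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
    V = Vt.T

    # The defining relation: A v_i = sigma_i u_i for every singular pair.
    for i in range(len(sigma)):
        assert np.allclose(A @ V[:, i], sigma[i] * U[:, i])

    # sigma_i**2 are the eigenvalues of A^T A (eigvalsh returns them in
    # ascending order, so sort the squared singular values to match).
    assert np.allclose(np.sort(sigma**2), np.linalg.eigvalsh(A.T @ A))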
The sixth edition of the book ends with a short treatment of artificial intelligence and neural nets—enough to explain the general idea of deep learning and its close connection to matrices.
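To hint at that connection (a hypothetical two-layer sketch of my own, not the book's example): each layer of a neural net multiplies its input by a matrix of weights and then applies a simple nonlinear function entrywise.

    import numpy as np

    def relu(x):
        # The standard nonlinearity: set negative entries to zero.
        return np.maximum(0.0, x)

    # Hypothetical weight matrices and bias vectors for two layers.
    rng = np.random.default_rng(1)
    W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
    W2, b2 = rng.standard_normal((2, 4)), rng.standard_normal(2)

    x = rng.standard_normal(3)       # an input vector
    hidden = relu(W1 @ x + b1)       # layer 1: matrix times vector, then ReLU
    output = W2 @ hidden + b2        # layer 2: another matrix times vector
    print(output)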
I hope the videos that follow this message will help faculty and students to see that the essential mathematics is linear algebra!
My textbooks are published by Wellesley-Cambridge Press. Thank you again!