Lecture 7: Eckart-Young: The Closest Rank k Matrix to A
Description
In this lecture, Professor Strang reviews Principal Component Analysis (PCA), which is a major tool in understanding a matrix of data. In particular, he focuses on the Eckart-Young low-rank approximation theorem.
Summary
\(A_k = \sigma_1 u_1 v^{\mathtt{T}}_1 + \cdots + \sigma_k u_k v^{\mathtt{T}}_k\) (the \(k\) largest \(\sigma\)’s from \(A = U\Sigma V^{\mathtt{T}}\))
The norm of \(A - A_k\) is at most the norm of \(A - B_k\) for every other rank-\(k\) matrix \(B_k\) (Eckart-Young; a numerical sketch follows below).
Frobenius norm squared = sum of squares of all entries: \(||A||^2_F = \sum_{i,j} |a_{ij}|^2 = \sigma^2_1 + \cdots + \sigma^2_r\)
The idea of Principal Component Analysis (PCA)
Related section in textbook: I.9
Instructor: Prof. Gilbert Strang
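Below is a minimal NumPy sketch (not from the lecture; the random test matrix and seed are arbitrary) of how \(A_k\) is built from the SVD and how the Eckart-Young bound shows up numerically: the error \(||A - A_k||_2\) comes out equal to \(\sigma_{k+1}\), and a random rank-\(k\) competitor \(B_k\) does no better.

```python
import numpy as np

# Build A_k = sigma_1 u_1 v_1^T + ... + sigma_k u_k v_k^T from the SVD.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
k = 2

U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# L2 error is the first discarded singular value; Frobenius error
# squared is the sum of the remaining sigma^2.
print(np.linalg.norm(A - A_k, 2), s[k])
print(np.linalg.norm(A - A_k, 'fro') ** 2, np.sum(s[k:] ** 2))

# Eckart-Young: any other rank-k matrix B_k is at least as far from A.
B_k = rng.standard_normal((6, k)) @ rng.standard_normal((k, 4))
assert np.linalg.norm(A - B_k, 2) >= np.linalg.norm(A - A_k, 2)
```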
Problems for Lecture 7
From textbook Section I.9
2. Find a closest rank-1 approximation to these matrices (\(L^2\) or Frobenius norm):
$$A = \left[\begin{matrix}3 & 0&0 \\ 0 &2&0\\ 0 & 0&1\end{matrix}\right] \hspace{12pt} A = \left[\begin{matrix}0 & 3\\ 2 & 0\end{matrix}\right] \hspace{12pt} A = \left[\begin{matrix}2 & 1\\ 1 & 2\end{matrix}\right] $$
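Not part of the assignment, but a quick NumPy check of the method (the loop over the problem's three matrices is illustrative): the leading singular triple \(\sigma_1 u_1 v^{\mathtt{T}}_1\) is a closest rank-1 matrix in both norms.

```python
import numpy as np

for A in (np.diag([3., 2., 1.]),
          np.array([[0., 3.], [2., 0.]]),
          np.array([[2., 1.], [1., 2.]])):
    U, s, Vt = np.linalg.svd(A)
    A_1 = s[0] * np.outer(U[:, 0], Vt[0, :])  # keep only sigma_1 u_1 v_1^T
    print(A_1.round(3))
```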
10. If \(A\) is a 2 by 2 matrix with \(\sigma_1 \ge \sigma_2 > 0\), find \(||A^{-1}||_2\) and \(||A^{-1}||^2_F\).
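One way to start (a sketch, not the textbook's printed solution): invert the SVD. If \(A = U\Sigma V^{\mathtt{T}}\), then \(A^{-1} = V\Sigma^{-1}U^{\mathtt{T}}\), so the singular values of \(A^{-1}\) are \(1/\sigma_2 \ge 1/\sigma_1\) and

$$||A^{-1}||_2 = \frac{1}{\sigma_2} \hspace{12pt} ||A^{-1}||^2_F = \frac{1}{\sigma^2_1} + \frac{1}{\sigma^2_2}.$$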