Video Lectures

Lecture 11: Minimizing ‖x‖ Subject to Ax = b

Description

In this lecture, Professor Strang revisits the ways to solve least squares problems. In particular, he focuses on the Gram-Schmidt process that finds orthogonal vectors.

Summary

Picture the shortest \(x\) in \(\ell^1\) and \(\ell^2\) and \(\ell^\infty\) norms
The \(\ell^1\) norm gives a sparse solution \(x\) (see the first sketch after this list)
Details of Gram-Schmidt orthogonalization and \(A = QR\) (see the second sketch after this list)
Orthogonal vectors in \(Q\) from independent vectors in \(A\)
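As a concrete illustration of the first two points, here is a minimal sketch (not from the lecture; the matrix, right-hand side, and variable names are my own) that computes the minimum \(\ell^2\)-norm and minimum \(\ell^1\)-norm solutions of an underdetermined system \(Ax=b\), using NumPy, SciPy, and the standard linear-programming reformulation of the \(\ell^1\) problem:

```python
import numpy as np
from scipy.optimize import linprog

# One equation in three unknowns: a whole plane of solutions to Ax = b.
A = np.array([[3.0, 1.0, 2.0]])
b = np.array([6.0])

# Minimum l2-norm solution: the pseudoinverse picks the shortest x in the l2 sense.
x_l2 = np.linalg.pinv(A) @ b

# Minimum l1-norm solution: write x = u - v with u, v >= 0 and minimize sum(u + v)
# subject to A(u - v) = b.  This is the usual linear-programming reformulation.
n = A.shape[1]
c = np.ones(2 * n)                          # objective = sum(u) + sum(v) = ||x||_1
A_eq = np.hstack([A, -A])                   # enforces A u - A v = b
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x_l1 = res.x[:n] - res.x[n:]

print("minimum l2-norm x:", x_l2)           # weight spread over all three components
print("minimum l1-norm x:", x_l1)           # weight on a single component: sparse
```

For this particular \(A\) and \(b\), the \(\ell^2\) solution has all three components nonzero while the \(\ell^1\) solution puts everything on one component, which is the sparsity the summary refers to.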
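For the last two points, here is a minimal sketch of classical Gram-Schmidt (my own code, not the textbook's) that builds orthonormal columns \(q_j\) from the independent columns \(a_j\) of \(A\) and records the subtracted multiples in an upper-triangular \(R\), so that \(A = QR\):

```python
import numpy as np

def gram_schmidt_qr(A):
    """Classical Gram-Schmidt: Q has orthonormal columns, R is upper triangular, A = QR."""
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for j in range(n):
        v = A[:, j].copy()
        for i in range(j):
            R[i, j] = Q[:, i] @ A[:, j]   # component of column a_j along q_i
            v -= R[i, j] * Q[:, i]        # subtract that component
        R[j, j] = np.linalg.norm(v)       # length of what is left
        Q[:, j] = v / R[j, j]             # normalize it to get q_j
    return Q, R

A = np.array([[1.0, 2.0], [1.0, 0.0], [1.0, 1.0]])   # two independent columns
Q, R = gram_schmidt_qr(A)
print(np.allclose(Q @ R, A))               # True: A = QR
print(np.allclose(Q.T @ Q, np.eye(2)))     # True: the columns of Q are orthonormal
```

In practice `np.linalg.qr(A)` produces the same factorization (up to signs) with better numerical behavior; the loop above is only meant to mirror the steps described in the summary.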

Related section in textbook: I.11

Instructor: Prof. Gilbert Strang

Problems for Lecture 11
From textbook Section I.11

6. The first page of I.11 shows unit balls for the \(\ell^1\) and \(\ell^2\) and \(\ell^\infty\) norms. Those are the three sets of vectors \(\boldsymbol{v}=(v_1,v_2)\) with \(\|\boldsymbol{v}\|_1\leq 1,\ \|\boldsymbol{v}\|_2\leq 1,\ \|\boldsymbol{v}\|_\infty\leq 1\). Unit balls are always convex because of the triangle inequality for vector norms:

$$ \text{If } \|\boldsymbol{v}\|\leq 1 \text{ and } \|\boldsymbol{w}\|\leq 1, \text{ show that } \left\|\frac{\boldsymbol{v}}{2}+\frac{\boldsymbol{w}}{2}\right\|\leq 1. $$
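One way to see this (a short sketch, combining the triangle inequality \(\|\boldsymbol{v}+\boldsymbol{w}\|\leq\|\boldsymbol{v}\|+\|\boldsymbol{w}\|\) with the scaling rule \(\|c\boldsymbol{v}\|=|c|\,\|\boldsymbol{v}\|\)):

$$ \left\|\frac{\boldsymbol{v}}{2}+\frac{\boldsymbol{w}}{2}\right\| \leq \left\|\frac{\boldsymbol{v}}{2}\right\| + \left\|\frac{\boldsymbol{w}}{2}\right\| = \frac{1}{2}\|\boldsymbol{v}\| + \frac{1}{2}\|\boldsymbol{w}\| \leq \frac{1}{2} + \frac{1}{2} = 1. $$

So the midpoint of any two points of the unit ball stays inside the unit ball, which is the convexity statement.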

10. What multiple of \(\boldsymbol{a} = \left[\begin{matrix}1\\1\end{matrix}\right]\) should be subtracted from \(\boldsymbol{b} = \left[\begin{matrix}4\\0\end{matrix}\right]\) to make the result \(\boldsymbol{A}_2\) orthogonal to \(\boldsymbol{a}\)? Sketch a figure to show \(\boldsymbol{a}\), \(\boldsymbol{b}\), and \(\boldsymbol{A}_2\).
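A quick numerical check of this orthogonalization step (a sketch in NumPy; the variable names are my own), using the Gram-Schmidt rule of subtracting \(\dfrac{\boldsymbol{a}^T\boldsymbol{b}}{\boldsymbol{a}^T\boldsymbol{a}}\,\boldsymbol{a}\) from \(\boldsymbol{b}\):

```python
import numpy as np

a = np.array([1.0, 1.0])
b = np.array([4.0, 0.0])

c = (a @ b) / (a @ a)      # the multiple of a to subtract: (a.b)/(a.a)
A2 = b - c * a             # the part of b orthogonal to a
print(c, A2, a @ A2)       # the last number should print 0.0
```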
