Video Lectures

Lecture 13: Randomized Matrix Multiplication

Description

This lecture focuses on randomized linear algebra, specifically on randomized matrix multiplication. This process is useful when working with very large matrices. Professor Strang introduces and describes the basic steps of randomized computations.

Summary

Sample a few columns of \(A\) and rows of \(B\)
Use probabilities proportional to the length products \(\Vert A_i \Vert \, \Vert B_i \Vert\) (column \(i\) of \(A\) times row \(i\) of \(B\))
See the key ideas of probability: Mean and Variance
Mean \(= AB\) (the estimate is correct on average) and variance to be minimized (see the sketch below)
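
The sampling scheme in this summary can be written out in a few lines. Below is a minimal sketch in Python/NumPy (not code from the lecture): it draws \(s\) column-row pairs with probabilities \(p_i\) proportional to \(\Vert A_i \Vert \, \Vert B_i \Vert\) and rescales each sampled outer product by \(1/(s\,p_i)\), so that the mean of the estimate is exactly \(AB\).

import numpy as np

def randomized_matmul(A, B, s, rng=None):
    # Estimate A @ B from s sampled column-row outer products.
    # Column i of A (paired with row i of B) is chosen with probability
    # p_i proportional to ||A[:, i]|| * ||B[i, :]||.
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[1]                         # number of column-row pairs
    col_norms = np.linalg.norm(A, axis=0)  # lengths of the columns of A
    row_norms = np.linalg.norm(B, axis=1)  # lengths of the rows of B
    p = col_norms * row_norms
    p = p / p.sum()                        # probabilities adding to 1

    estimate = np.zeros((A.shape[0], B.shape[1]))
    for i in rng.choice(n, size=s, p=p):   # sample s indices with replacement
        # Rescaling by 1/(s * p_i) makes the expected value equal to A @ B.
        estimate += np.outer(A[:, i], B[i, :]) / (s * p[i])
    return estimate

Choosing \(p_i\) proportional to the length products minimizes the variance of this unbiased estimate; that minimization is exactly Problem 1 below.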

Related section in textbook: II.4

Instructor: Prof. Gilbert Strang

Problems for Lecture 13
From textbook Section II.4

1. Given positive numbers \(a_1,\ldots,a_n\), find positive numbers \(p_1,\ldots,p_n\) so that

\( p_1+\cdots+p_n=1 \text{ and } V=\dfrac{a_1^2}{p_1}+\cdots+\dfrac{a_n^2}{p_n} \text{ reaches its minimum } (a_1+\cdots+a_n)^2.\)

The derivatives of \(L(p,\lambda)=V-\lambda(p_1+\cdots+p_n-1)\) are zero as in equation (8).
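
A sketch of that Lagrange-multiplier step (assuming the standard approach, not necessarily the textbook's exact wording): setting \(\partial L/\partial p_i = -a_i^2/p_i^2 - \lambda = 0\) gives \(p_i = a_i/\sqrt{-\lambda}\), so each \(p_i\) is proportional to \(a_i\). The constraint \(p_1+\cdots+p_n=1\) then forces \(p_i = a_i/(a_1+\cdots+a_n)\), and substituting back gives \(V = (a_1+\cdots+a_n)^2\).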

4. If \(M=\mathbf{1}\mathbf{1}^{\mathsf T}\) is the \(n\) by \(n\) matrix of 1’s, prove that \(nI-M\) is positive semidefinite. Problem 3 was the energy test. For Problem 4, find the eigenvalues of \(nI-M\).
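
One way to see this (a sketch of the eigenvalue route): the rank-one matrix \(M=\mathbf{1}\mathbf{1}^{\mathsf T}\) has eigenvalue \(n\) with eigenvector \(\mathbf{1}\) and eigenvalue \(0\) repeated \(n-1\) times, for every vector orthogonal to \(\mathbf{1}\). Then \(nI-M\) has eigenvalues \(n-n=0\) and \(n-0=n\), all nonnegative.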
