## 32.2 Matrices

Matrices provide a convenient way to describe linear equations. If you take the coefficients of the unknowns, in some standard order, as the row elements of a matrix, you define a coefficient matrix for any set of equations.

For the example equations above, with the standard ordering of x, y, and z, the coefficient matrix, call it M, is:

We can then write the original equations as the single matrix equation Mv = r, where v is the column vector of unknowns and r is the column vector of right-hand sides.

Matrix multiplication is defined by taking the dot products of the rows of the first matrix with the columns of the second (here a single column) to produce the corresponding elements of the product. Using this definition, you should verify that this matrix equation is exactly the same as our original three equations.
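The row-by-column rule can be sketched in a few lines of Python. The matrix and vector below are hypothetical stand-ins, not the example system from the text:

```python
def mat_vec(M, v):
    # Each entry of the product is the dot product of a row of M with v.
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

# Hypothetical 3x3 example:
M = [[2, 1, -1],
     [1, 3, 2],
     [1, 0, 1]]
v = [1, 2, 3]
print(mat_vec(M, v))  # [1, 13, 4]
```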

Gaussian elimination can be applied directly in this matrix form. The rules are:

1. You can multiply an entire row (on both sides of the equation) by any non-zero number without changing the content of the equations.

2. You can add a multiple of any row to another without changing the content of the equations. You must, however, add entirely across the row, including the entry on the other side of the equation.

In this form such operations are called "elementary row operations" and Gaussian elimination is called row reduction.
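The two elementary row operations can be written out directly, treating each row as coefficients followed by its right-hand side entry (an "augmented" row). The sample rows here are hypothetical:

```python
def scale_row(row, c):
    # Rule 1: multiply an entire row by a nonzero constant c.
    assert c != 0
    return [c * a for a in row]

def add_multiple(target, source, c):
    # Rule 2: add c times `source` to `target`, entrywise across the whole row,
    # including the right-hand side entry at the end.
    return [t + c * s for t, s in zip(target, source)]

row1 = [2, 1, -1, 8]    # represents 2x + y - z = 8
row2 = [1, 3, 2, 13]    # represents  x + 3y + 2z = 13
print(add_multiple(row2, row1, -0.5))  # leading coefficient becomes 0
```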

What you do here is perform enough of operation 2 to produce 0's in the matrix on one side of the main diagonal (conventionally below it). When this is done, the last equation determines one unknown, and you can then substitute back successively to find the others.
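The procedure just described can be sketched as follows. This is a minimal version that assumes every pivot is nonzero (no row swaps); the 3x3 system used to exercise it is a hypothetical example, not the one from the text:

```python
def gaussian_solve(M, r):
    n = len(M)
    # Augment each row of M with the corresponding right-hand side entry.
    A = [row[:] + [ri] for row, ri in zip(M, r)]
    # Forward elimination: use row operation 2 to zero entries below the diagonal.
    for col in range(n):
        for row in range(col + 1, n):
            factor = A[row][col] / A[col][col]
            A[row] = [a - factor * b for a, b in zip(A[row], A[col])]
    # Back substitution: the last row gives one unknown; substitute upward.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

M = [[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]
r = [8, -11, -3]
print(gaussian_solve(M, r))  # [2.0, 3.0, -1.0]
```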

You can also continue these operations until all elements of your matrix off the main diagonal are 0's and the diagonal elements are 1. In that case each right-hand side entry is the solution for the corresponding variable, and you need not substitute back to find the unknowns.
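This full reduction (often called Gauss-Jordan elimination) uses both rules: rule 1 to make each diagonal entry 1, and rule 2 to clear the rest of each column, above as well as below. A minimal sketch, again assuming nonzero pivots and using the same hypothetical system as above:

```python
def gauss_jordan_solve(M, r):
    n = len(M)
    A = [row[:] + [ri] for row, ri in zip(M, r)]  # augmented matrix
    for col in range(n):
        # Rule 1: scale the pivot row so the diagonal entry becomes 1.
        pivot = A[col][col]
        A[col] = [a / pivot for a in A[col]]
        # Rule 2: clear every other entry in this column, above and below.
        for row in range(n):
            if row != col:
                factor = A[row][col]
                A[row] = [a - factor * b for a, b in zip(A[row], A[col])]
    # The coefficient part is now the identity; the last column is the solution.
    return [A[i][n] for i in range(n)]

M = [[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]
r = [8, -11, -3]
print(gauss_jordan_solve(M, r))  # [2.0, 3.0, -1.0]
```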

The n-dimensional matrix whose diagonal elements are 1 and whose off-diagonal elements are 0 is called the n-dimensional identity matrix. It is usually written as I, without any indication of its size, unless that could cause confusion, in which case it is written as I_n.

It has the property that its matrix product with any matrix M of the same dimension is M itself, and its action on any n-dimensional vector v is v itself.

Thus if you start with the matrix equation Mv = r and row reduce until M has become the identity matrix I, you have Iv = r', where r' is the result of applying to the right-hand side the same row operations that reduced M to I.

You thereby obtain the solution, v = r'.