If two square matrices M and A have the property that MA = I, then A is called the inverse of M, written M⁻¹ (and M is likewise the inverse of A).

A wonderful feature of row reduction as we have described it is that when
you have a matrix equation AB = C, you can apply your reduction operations for
the matrix A to the rows of A and C simultaneously and ignore B, and what you
get will be as true as what you started with. This is exactly what we did
when B was the column vector with components equal to our unknowns, x, y and z,
but it is equally true for any matrix B.

To find the inverse of a matrix A, then, we can place an identity matrix next to A and perform row operations simultaneously on both until A has been reduced to the identity. (This is the remark above applied to the equation AA⁻¹ = I: we operate on A and on I while ignoring the unknown factor A⁻¹, and once A has become I, the matrix that started as I must have become A⁻¹.) Here we first subtract 5 times the first row from the second row, then divide the second row by -9, and then subtract 3 times the second row from the first; the last matrix, the one that started as the identity, is the inverse, A⁻¹, of our original matrix A.
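For concreteness, here is a minimal sketch of this procedure in Python, using exact rational arithmetic. The 2×2 matrix used, with rows (1 3) and (5 6), is an assumption chosen so that the row operations match the ones described above; the worked matrices themselves are not reproduced in this text.

```python
from fractions import Fraction

def invert_2x2_by_row_reduction(a):
    """Adjoin the identity to a 2x2 matrix and row reduce the pair.

    Returns the inverse, or raises ValueError if a pivot is zero
    (i.e. the rows are linearly dependent).
    """
    # Build the augmented matrix [A | I] with exact rational entries.
    m = [[Fraction(a[i][j]) for j in range(2)] +
         [Fraction(1 if i == j else 0) for j in range(2)]
         for i in range(2)]

    for col in range(2):
        if m[col][col] == 0:
            raise ValueError("zero pivot: matrix is not invertible")
        # Divide the pivot row so the pivot becomes 1.
        pivot = m[col][col]
        m[col] = [entry / pivot for entry in m[col]]
        # Subtract a multiple of the pivot row from the other row
        # to clear the rest of the column.
        for row in range(2):
            if row != col:
                factor = m[row][col]
                m[row] = [m[row][j] - factor * m[col][j] for j in range(4)]

    # The right half of [I | A^-1] is the inverse.
    return [row[2:] for row in m]

# Hypothetical matrix consistent with the operations described in the text:
# subtract 5*(row 1) from row 2, divide row 2 by -9, subtract 3*(row 2) from row 1.
A = [[1, 3], [5, 6]]
print(invert_2x2_by_row_reduction(A))
# [[Fraction(-2, 3), Fraction(1, 3)], [Fraction(5, 9), Fraction(-1, 9)]]
```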
Exercise 32.3 Find the inverse to the matrix B whose rows are first (2 4); second (1 3).

Solution: Adjoin the identity and row reduce: divide the first row by 2, subtract the new first row from the second, then subtract twice the new second row from the first. The right half of the result is B⁻¹, whose rows are (3/2 -2) and (-1/2 1).

The inverse of a matrix can be useful for solving equations when you need to solve the same equations with different right hand sides; it is overkill if you only want to solve the equations once. If your original equations had the form Mv = r, then by multiplying both sides by M⁻¹ you obtain v = Iv = M⁻¹Mv = M⁻¹r, so you need only multiply the inverse M⁻¹ of M by your right hand side r to obtain a solution of your equations.
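As a quick check of the exercise and an illustration of this point, the sketch below computes B⁻¹ using the standard 2×2 inverse formula (which the text justifies later, when it relates inverses to determinants) and then reuses that single inverse to solve Bv = r for several right hand sides.

```python
from fractions import Fraction

# B from Exercise 32.3, with rows (2 4) and (1 3).
B = [[Fraction(2), Fraction(4)], [Fraction(1), Fraction(3)]]

# For a 2x2 matrix the inverse can be written down directly:
# the inverse of [[a, b], [c, d]] is [[d, -b], [-c, a]] divided by (ad - bc).
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
B_inv = [[ B[1][1] / det, -B[0][1] / det],
         [-B[1][0] / det,  B[0][0] / det]]
print(B_inv)   # rows (3/2, -2) and (-1/2, 1), as in the solution above

def solve(inv, r):
    """Solve Bv = r by multiplying the right hand side by B's inverse."""
    return [inv[0][0] * r[0] + inv[0][1] * r[1],
            inv[1][0] * r[0] + inv[1][1] * r[1]]

# Once B_inv is known, each new right hand side costs only a multiplication.
print(solve(B_inv, [Fraction(1), Fraction(0)]))   # the first column of B_inv
print(solve(B_inv, [Fraction(6), Fraction(4)]))   # gives (1, 1): check 2+4=6, 1+3=4
```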
If you think about what you do here to compute the inverse matrix, and realize that in the process the different columns of M⁻¹ do not interact with one another at all, you will see that you are essentially solving the inhomogeneous equation Mv = r for r given by each of the three columns of the identity matrix, and arranging the results next to each other.
What we are saying here, then, is that to solve the equations for a general r it is sufficient to solve them for each of the columns of I; the solution for a general linear combination r of these columns is then the same linear combination of the corresponding solutions.
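Here is a short numerical check of this observation, using a hypothetical 3×3 matrix and numpy's linear solver as a stand-in for the row reduction described above.

```python
import numpy as np

# Any invertible 3x3 matrix will do for illustration (hypothetical example).
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

# Solve Mv = e_i for each column e_i of the identity matrix.
columns = [np.linalg.solve(M, e) for e in np.eye(3)]

# Arranging those solutions next to each other gives the inverse of M.
M_inv = np.column_stack(columns)
assert np.allclose(M_inv, np.linalg.inv(M))

# A general right hand side is a linear combination of the e_i,
# and its solution is the same combination of the individual solutions.
r = np.array([1.0, -2.0, 5.0])          # r = 1*e1 - 2*e2 + 5*e3
v_direct = np.linalg.solve(M, r)
v_combined = 1.0 * columns[0] - 2.0 * columns[1] + 5.0 * columns[2]
assert np.allclose(v_direct, v_combined)
print(v_direct)
```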
Not every matrix has an inverse. As we have seen, when the rows of M are linearly dependent, the equations that M defines do not have unique solutions: for some right hand sides there are infinitely many solutions and for others there are none. In that case the matrix M does not have an inverse.

One way to characterize linear dependence of the rows (or of the columns: if the rows of a square matrix are linearly dependent, then so are its columns) in three dimensions is that the volume of the parallelepiped formed by the rows (or columns) of M is zero. This volume is not changed by the second kind of row operation, adding a multiple of one row to another, though it changes by a factor of |c| if you multiply each element of a row by c. The fact that volume is always positive, so that the absolute value |c| appears here, is a bit awkward, so it is customary to define a quantity that, when positive, is this volume, but that has the property of linearity: if you multiply a column by c it changes by a factor of c rather than |c|. This quantity (and it has an analogue in any dimension) is called the determinant of M.

Thus the absolute value of the determinant of M is, in three dimensions, the volume of the parallelepiped with sides given by the rows (or alternately the columns) of M; in higher dimensions it is the "hypervolume", the higher dimensional analogue of volume, of the region with sides given by the rows (or columns) of M. These statements specify the determinant up to a sign. The sign is determined by convention to be positive for the identity matrix I, whose determinant is always 1. We will soon see how to calculate determinants, and how to express the inverse of a matrix in terms of its determinant.
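The following sketch checks these statements numerically for a hypothetical 3×3 matrix; it uses numpy's determinant routine and the scalar triple product of the rows, anticipating formulas that are derived later.

```python
import numpy as np

# A hypothetical 3x3 matrix; its rows span a parallelepiped.
M = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [2.0, 0.0, 1.0]])

# In three dimensions the determinant is the scalar triple product of the rows,
# and its absolute value is the volume of the parallelepiped they span.
triple_product = np.dot(M[0], np.cross(M[1], M[2]))
assert np.isclose(triple_product, np.linalg.det(M))
volume = abs(triple_product)
print(volume)

# Adding a multiple of one row to another leaves the determinant unchanged.
M2 = M.copy()
M2[1] += 5.0 * M2[0]
assert np.isclose(np.linalg.det(M2), np.linalg.det(M))

# Multiplying a row by c multiplies the determinant by c (not |c|).
M3 = M.copy()
M3[2] *= -2.0
assert np.isclose(np.linalg.det(M3), -2.0 * np.linalg.det(M))
```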