18.013A, Chapter 32


We have defined the determinant of a matrix to be a linear function of its rows or columns whose magnitude is the hypervolume of the region with edges given by its columns, or by its rows.
The determinant has a number of important properties. We will list them, then offer proofs.
1. Linearity in columns: if we have column $n$-vectors $c(k)$ and $d(k)$, for $k=1$ to $n$, and pick any $j$ in this range, then the $n$ dimensional determinant obeys the condition
$\mathrm{det}(c(1), \dots, c(j-1), a\,c(j) + b\,d(j), c(j+1), \dots, c(n)) = a\,\mathrm{det}(c(1), \dots, c(j-1), c(j), c(j+1), \dots, c(n)) + b\,\mathrm{det}(c(1), \dots, c(j-1), d(j), c(j+1), \dots, c(n))$.
2. Linearity in rows: write this one out yourself.
3. The determinant is 0 if two columns are the same. (Likewise for rows.) Equivalently, it changes sign if you interchange two rows (or columns).
4. The determinant can be evaluated by a process like row reduction. You can add multiples of rows to one another until all elements on one side of the main diagonal are 0.
Then the product of the diagonal elements is the determinant.
5. The determinant of the matrix product of two matrices is the product of their determinants.
6. In terms of the elements of a matrix $M$ in any one column, say ${M}_{1j},{M}_{2j},\dots$, the determinant can be expressed as
$\mathrm{det}(M) = \sum_{i=1}^{n} {M}_{ij}\,C(i,j)$.
The quantities $C(i,j)$ that occur here are called cofactors of the matrix $M$ .
$C(i,j)$ must be linear in all the rows of $M$ except the ith and in all the columns of $M$ except the jth, and it must be 0 if two of those rows or columns are the same; so it is proportional to the determinant of the matrix obtained by removing the ith row and jth column from $M$ . The proportionality constant turns out to be ${(-1)}^{i+j}$ .
7. The inverse of the matrix $M$ is the matrix whose (i, j)th element is $\frac{C(j,i)}{\mathrm{det}(M)}$ .
8. If you have a set of equations of the form $M\vec{v}=\vec{c}$ , then the ith component of $\vec{v}$ is given by the determinant of the matrix obtained by taking $M$ and substituting $\vec{c}$ for the ith column of $M$ , divided by the determinant of $M$ itself. (This statement is called Cramer's Rule.)
9. The condition that the determinant of a matrix is 0 means that the hypervolume of the region determined by the columns is 0, which means that the columns are linearly dependent: there is a nonzero linear combination of the columns that is the zero vector. In other words, for some nonzero vector $\vec{v}$ we have $M\vec{v}=\vec{0}$ .
10. The determinant is unchanged by rotations of coordinates.
11. The polynomial of degree $n$ in $x$ defined by $\mathrm{det}(M-xI)$ is called the characteristic polynomial of $M$ . Its roots (the solutions of $\mathrm{det}(M-xI)=0$ ) are called the eigenvalues of $M$ .
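To make property 11 concrete, here is a small Python check for a 2 by 2 matrix (an illustration of ours, not part of the original notes; the function name is invented for this sketch):

```python
def char_poly_2x2(m, x):
    """Evaluate det(M - x I) for a 2x2 matrix M at the value x."""
    a, b = m[0]
    c, d = m[1]
    return (a - x) * (d - x) - b * c

M = [[2, 1], [1, 2]]
# For this M, det(M - x I) = (2-x)**2 - 1 = x**2 - 4x + 3 = (x-1)(x-3),
# so the eigenvalues are 1 and 3: the polynomial vanishes there.
print(char_poly_2x2(M, 1), char_poly_2x2(M, 3))   # 0 0
```

At $x=0$ the polynomial is just $\mathrm{det}(M)$, here $2 \cdot 2 - 1 \cdot 1 = 3$.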
We now comment on these claims.
The first three follow immediately from the definition of the determinant as a linear version of hypervolume.
It follows from these that you can add a multiple of one row to another without changing the determinant: because by linearity the change would have to be a multiple of the determinant of a matrix with two identical rows.
But then you can do this until the matrix is diagonal, at which point the determinant, again by linearity, is the product of the diagonal elements times the determinant of the identity matrix (which is 1).
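The row-reduction procedure just described can be sketched in Python (a small illustration; the function name and the use of exact rational arithmetic via `fractions` are our choices, not from the text). When a pivot is 0 we swap in a lower row, which flips the sign by property 3:

```python
from fractions import Fraction

def det_by_elimination(rows):
    """Determinant via row reduction: add multiples of rows to other
    rows (which leaves the determinant unchanged) until the matrix is
    upper triangular, then multiply the diagonal elements."""
    m = [[Fraction(x) for x in row] for row in rows]
    n = len(m)
    sign = 1
    for j in range(n):
        # Find a row at or below j with a nonzero entry in column j.
        pivot = next((i for i in range(j, n) if m[i][j] != 0), None)
        if pivot is None:
            return Fraction(0)            # no pivot: determinant is 0
        if pivot != j:
            m[j], m[pivot] = m[pivot], m[j]
            sign = -sign                  # each swap changes the sign
        for i in range(j + 1, n):
            factor = m[i][j] / m[j][j]
            m[i] = [a - factor * b for a, b in zip(m[i], m[j])]
    result = Fraction(sign)
    for j in range(n):
        result *= m[j][j]
    return result

print(det_by_elimination([[2, 1], [1, 3]]))   # 2*3 - 1*1 = 5
```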
The statement that the determinant of a product of two matrices is the product of the determinants is important and useful. It follows by these two observations:
1. If the matrix $A$ is diagonal, then $\mathrm{det}A$ is the product of the diagonal elements of $A$ .
On the other hand, the rows of $AB$ are just the rows of $B$ each multiplied by the corresponding diagonal element of $A$ .
By linearity then, the determinant of $AB$ is the product of the diagonal elements of $A$ times the determinant of $B$ , that is, it is the product of the determinant of $A$ and that of $B$ , as we have claimed.
2. If we apply a row operation (no multiplying rows by constants allowed), as discussed in property 4 above, on $A$ to obtain a new matrix $A\text{'}$ and apply the same row operation to $(AB)$ to obtain $(AB)\text{'}$ , we will have $(AB)\text{'} = A\text{'}B$ , and we will have $\mathrm{det}A=\mathrm{det}A\text{'}$ , and $\mathrm{det}AB=\mathrm{det}A\text{'}B$ .
We can do this until $A$ is diagonal, at which point we can use the first statement here to tell us: $(\mathrm{det}A\text{'})*(\mathrm{det}B)=\mathrm{det}A\text{'}B$ , from which our conclusion follows.
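A quick numeric check of this multiplicative property, for 2 by 2 matrices (an illustration of ours, not from the text):

```python
def det2(m):
    """Determinant of a 2x2 matrix [[a, b], [c, d]] is ad - bc."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def matmul2(a, b):
    """Product of two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
# det A = -2 and det B = -2, so det(AB) should be 4.
print(det2(matmul2(A, B)), det2(A) * det2(B))   # 4 4
```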
The statements about cofactors merely make explicit what it means to be linear in each row and column.
The sign factor can be deduced from the fact that it is 1 if you consider the first row and column (think of the identity matrix), and you can switch rows and columns with their neighbors $i-1$ and $j-1$ times to rearrange things so that the ith row and jth column become the first and everything else stays in its original order.
This will cause $i+j-2$ sign changes, which gives the sign factor noted.
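The cofactor expansion of property 6 can be sketched recursively (our illustration, not from the text; we expand along the first column, where the sign factor reduces to `(-1)**i`):

```python
def det_by_cofactors(m):
    """Expand det(M) along the first column:
    det M = sum over i of M[i][0] * (-1)**i * det(minor(i, 0)),
    where minor(i, 0) is M with row i and column 0 removed."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for i in range(n):
        minor = [row[1:] for k, row in enumerate(m) if k != i]
        total += (-1) ** i * m[i][0] * det_by_cofactors(minor)
    return total

# Expanding along the first column:
# 2*det([[3,0],[1,4]]) - 1*det([[0,1],[1,4]]) + 0 = 24 + 1 = 25.
print(det_by_cofactors([[2, 0, 1], [1, 3, 0], [0, 1, 4]]))   # 25
```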
As noted earlier, the cofactor formula for the inverse is a statement about the dot products of the rows of the inverse with the columns of the original matrix. The diagonal products must be 1, which follows for $\frac{C(j,i)}{\mathrm{det}(M)}$ from the cofactor formula for the determinant, and the off-diagonal ones must be 0 because, by that same formula, they represent the determinants of matrices with two identical columns or rows.
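Here is a sketch of the cofactor formula for the inverse (our illustration, not from the text; exact rational arithmetic via `fractions` keeps the check clean):

```python
from fractions import Fraction

def det(m):
    """Determinant by cofactor expansion along the first column."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** i * m[i][0] *
               det([row[1:] for k, row in enumerate(m) if k != i])
               for i in range(len(m)))

def cofactor(m, i, j):
    """C(i, j): (-1)**(i+j) times the determinant of M with
    row i and column j removed."""
    minor = [row[:j] + row[j + 1:] for k, row in enumerate(m) if k != i]
    return (-1) ** (i + j) * det(minor)

def inverse(m):
    """The (i, j) entry of the inverse is C(j, i) / det(M);
    note the transpose of the indices."""
    d = Fraction(det(m))
    n = len(m)
    return [[cofactor(m, j, i) / d for j in range(n)] for i in range(n)]

M = [[2, 1], [1, 3]]
print(inverse(M))   # [[3/5, -1/5], [-1/5, 2/5]]
```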
Cramer's rule is the observation that by the definition of the inverse, the desired coefficient is the dot product of the ith row of the inverse of $M$ with the vector $\stackrel{\u27f6}{c}$ . But by the cofactor formula this is the dot product of the ith column of the cofactor matrix with the vector $\stackrel{\u27f6}{c}$ , divided by the determinant of $M$ , and that is the ratio of the two determinants of Cramer's rule.
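Cramer's rule translates directly into a short routine (an illustration of ours, not from the text; the `det` helper expands along the first column):

```python
from fractions import Fraction

def det(m):
    """Determinant by cofactor expansion along the first column."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** i * m[i][0] *
               det([row[1:] for k, row in enumerate(m) if k != i])
               for i in range(len(m)))

def cramer(M, c):
    """Solve M v = c: component v[i] is the determinant of M with
    column i replaced by c, divided by det(M)."""
    d = Fraction(det(M))
    n = len(M)
    sol = []
    for i in range(n):
        Mi = [row[:i] + [c[k]] + row[i + 1:] for k, row in enumerate(M)]
        sol.append(Fraction(det(Mi)) / d)
    return sol

# Solves 2x + y = 5, x + 3y = 10, giving x = 1, y = 3.
print(cramer([[2, 1], [1, 3]], [5, 10]))   # [Fraction(1, 1), Fraction(3, 1)]
```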
