
This section is extremely sketchy. For further discussion see Chapter 32.
Evaluating a determinant by Gaussian elimination: to do this you add multiples of one row to another until all entries below the main diagonal are 0. The determinant (which is unchanged by these operations) is then the product of the diagonal entries. Machines can do such things for $n$ by $n$ matrices with $n$ in the hundreds or thousands, but people find the exercise a bit dull.
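The procedure just described can be sketched in a few lines of code. This is a minimal illustration, not a production routine: it works in floating point, and it also swaps rows when a zero pivot is met (each swap changes the sign of the determinant, which the running `sign` variable tracks).

```python
def det_gauss(m):
    """Determinant by Gaussian elimination: reduce below the diagonal
    to zeros, then multiply the diagonal entries (times the sign from
    any row swaps)."""
    a = [row[:] for row in m]   # work on a copy
    n = len(a)
    sign = 1.0
    for col in range(n):
        # find a row with a nonzero entry in this column, at or below the diagonal
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return 0.0          # no pivot: the determinant is 0
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            sign = -sign        # a row interchange flips the sign
        # add multiples of the pivot row to kill the entries below it
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    prod = sign
    for i in range(n):
        prod *= a[i][i]         # product of the diagonal entries
    return prod
```

For example, `det_gauss([[2, 1], [5, 3]])` gives $2\cdot 3-1\cdot 5=1$.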
Expansion of a determinant in a row or column: let the matrix $M$ have elements ${m}_{ij}$ . The first index describes the row number, the second the column number.
$M$ 's determinant is a sum of the elements of any single row each multiplied by a factor. What factor?
For the jth element of the ith row it is the determinant of the matrix obtained by removing that row and column, multiplied by a sign factor of $(-1)$ raised to the sum of the indices of the element, $i+j$ :
$\mathrm{det}M={\displaystyle \sum _{j}}{m}_{ij}{(-1)}^{i+j}{M}_{ij}$ (A)
where ${M}_{ij}$ is the determinant of the matrix obtained from $M$ by eliminating its ith row and jth column and closing the rest up into a square array.
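Formula (A) can be transcribed directly into a recursive routine. The sketch below (a brute-force illustration: it touches $n!$ terms, so it is only practical for small matrices) expands along the first row, i.e. takes $i=0$ in zero-based indexing, so the sign factor is $(-1)^j$.

```python
def det_expand(m):
    """Determinant by expansion in the first row, formula (A):
    det M = sum_j m[0][j] * (-1)^j * M_{0j}."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # M_{0j}: delete row 0 and column j, close the rest up into a square array
        minor = [row[:j] + row[j+1:] for row in m[1:]]
        total += m[0][j] * (-1) ** j * det_expand(minor)
    return total
```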
Why is this so?
The factor multiplying ${m}_{ij}$ must be linear in the other rows and be 0 if two of them are identical, so it must be proportional to their determinant, ${M}_{ij}$ . (Also since the determinant is linear in the jth column this term can have no factor other than ${m}_{ij}$ from that column.)
The only question left, then, is: why the sign factor?
You can interchange two rows or columns of a matrix which have the same parity (meaning both have even indices or both have odd indices) with an even number of single row or column interchanges, while you need an odd number of interchanges when they have opposite parity. Each interchange requires a sign change, so there must be a sign change when the parities of the row and column indices differ, to make the computation consistent across different indices.
Notice that we also have
$0={\displaystyle \sum _{j}}{m}_{kj}{(-1)}^{i+j}{M}_{ij}$ for $k\ne i$ (B)
since by equation (A) this is the determinant of a matrix with two of its rows, the ith and the kth, equal to the kth row of $M$ , and a matrix with two identical rows has 0 determinant.
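Identity (B) is easy to check numerically. The sketch below forms the sum $\sum_j m_{kj}(-1)^{i+j}M_{ij}$ for arbitrary $i$ and $k$; for $k\ne i$ it should come out 0, while for $k=i$ it reproduces $\det M$, exactly as (A) and (B) claim. (The helper `det` is a small brute-force determinant included to keep the example self-contained.)

```python
def det(m):
    # brute-force determinant by first-row expansion (for checking only)
    if len(m) == 1:
        return m[0][0]
    return sum(m[0][j] * (-1) ** j * det([r[:j] + r[j+1:] for r in m[1:]])
               for j in range(len(m)))

def cofactor_sum(m, i, k):
    """sum_j m[k][j] * (-1)^(i+j) * M_{ij}: this is det M when k == i
    (formula A) and 0 when k != i (formula B)."""
    n = len(m)
    total = 0
    for j in range(n):
        # M_{ij}: delete row i and column j
        minor = [row[:j] + row[j+1:] for r, row in enumerate(m) if r != i]
        total += m[k][j] * (-1) ** (i + j) * det(minor)
    return total
```

For instance, with `m = [[2, 1, 3], [1, 4, 1], [5, 2, 6]]`, `cofactor_sum(m, 0, 0)` equals `det(m)` while `cofactor_sum(m, 0, 1)` is 0.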
The formula (A) is called the expansion of $\mathrm{det}M$ in the ith row. The same thing can be done for a column, and even for several rows or columns together.
The expression ${(-1)}^{i+j}{M}_{ij}$ is called the ijth cofactor of the matrix $M$ . The statement (A) can then be phrased as: the dot product of any row of $M$ with the vector of cofactors of the entries in that row is the determinant of $M$ . The same statement holds with the word "row" replaced by "column".
Exercises:
4.5. Evaluate the determinant of the matrix whose rows are, in order, $(1,2,5),(3,1,2)$ and $(4,2,7)$ by each of the methods described above. Which do you find faster?
4.6. Do the same for a random but nontrivial 4 by 4 matrix. Which is faster?
