Chapter B: Fun with Determinants

We have discussed the determinant in Chapter 4 and again in Chapter 32. We discussed several methods for computing determinants, which are not difficult in principle, but rather tedious to perform in practice, if you want to do them by hand.
Here we give some ideas which allow us to deduce the determinant of several special types of matrices (but ones that are occasionally useful) with less effort than it takes to write down the matrix.
We also present a curious formula involving determinants that was discovered by Lewis Carroll, the author of Alice in Wonderland.

#### 1 Some Easily Calculated Determinants. The Vandermonde Determinant

When a matrix has elements that are monomials or even polynomials in some set of variables, then its determinant will in general be a polynomial in those variables, and this is sometimes useful in evaluating it.

The prime example of this is what is called a Vandermonde matrix, whose rows (or if you prefer, columns) all have the form $(1, x_j, x_j^2, x_j^3, \ldots, x_j^{n-1})$ for some $x_j$.

Here is an example of one, with $x$ values 1, 2, 3:

$$\begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 4 \\ 1 & 3 & 9 \end{pmatrix}$$

By the basic property of a determinant, that it is 0 if two of its rows are the same, we can deduce that the determinant of a Vandermonde matrix will be 0 whenever $x_i = x_j$ for some $i \neq j$, since then two of its rows are the same. But that means that, as a polynomial in these variables, it must have $(x_i - x_j)$ as a factor for every such pair $i$ and $j$.

This means that such a determinant must have $\prod_{i > j} (x_i - x_j)$ as a factor, with the product here taken over all pairs of variables with $i > j$. This factor is already a polynomial of degree $\frac{n(n-1)}{2}$ (where we are dealing with an $n$ by $n$ matrix).

And what is the degree of our Vandermonde determinant? Well, the determinant is a sum of terms, each of which has one factor from each column. Thus, as a polynomial, it has degree $0 + 1 + \cdots + (n - 1)$, which is $\frac{n(n-1)}{2}$.

And so, we have already evaluated our determinant as a polynomial, up to a constant factor. And what is that constant? We can find it by looking at the main diagonal term, which is

$$x_1^0\, x_2^1\, x_3^2 \cdots x_n^{n-1},$$

and this is exactly what you get if you take the first (positive) term in each factor of our product above. Since this term has the same coefficient, 1, in both the determinant and the product, the product is the determinant, and that is our answer.
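For concreteness, here is the $n = 3$ case of this product formula written out:

```latex
% n = 3 Vandermonde determinant, written out
\det\begin{pmatrix} 1 & x_1 & x_1^2 \\ 1 & x_2 & x_2^2 \\ 1 & x_3 & x_3^2 \end{pmatrix}
  = (x_2 - x_1)(x_3 - x_1)(x_3 - x_2)
```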

In our example, we can deduce immediately that the determinant is $(x_3 - x_1)(x_2 - x_1)(x_3 - x_2) = 2 \cdot 1 \cdot 1$, or 2.
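As a sanity check, here is a short Python sketch; the `det` helper below is a brute-force expansion of our own, and we take the $x$ values 1, 2, 3, which reproduce the $2 \cdot 1 \cdot 1 = 2$ of the example:

```python
from itertools import permutations

def det(m):
    """Brute-force determinant via the permutation expansion."""
    n = len(m)
    total = 0
    for perm in permutations(range(n)):
        # sign of the permutation from its inversion count
        inversions = sum(1 for a in range(n) for b in range(a + 1, n)
                         if perm[a] > perm[b])
        term = (-1) ** inversions
        for row in range(n):
            term *= m[row][perm[row]]
        total += term
    return total

xs = [1, 2, 3]  # sample values; any distinct numbers work
vandermonde = [[x ** k for k in range(len(xs))] for x in xs]

product = 1  # product of (x_i - x_j) over all pairs with i > j
for i in range(len(xs)):
    for j in range(i):
        product *= xs[i] - xs[j]

print(det(vandermonde), product)  # both equal 2
```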

#### 2 Another Easy Case: Cauchy's Determinant

Suppose we want the determinant of a matrix whose $(j, k)$ entry is $\frac{1}{x_j + y_k}$.

Here is an example, for the $x$ values 1, 2, 4 and $y$ values 1, 2, 3:

$$\begin{pmatrix} \tfrac{1}{2} & \tfrac{1}{3} & \tfrac{1}{4} \\ \tfrac{1}{3} & \tfrac{1}{4} & \tfrac{1}{5} \\ \tfrac{1}{5} & \tfrac{1}{6} & \tfrac{1}{7} \end{pmatrix}$$

Now we don't have a polynomial, but rather have a rational function of our variables. What do we do? We make it into a polynomial by factoring out all the denominators!

Again, by the way, we know that it will be 0 if any two x variables are the same, or if any two y variables are the same since that would make two rows or columns identical.

Thus, it must have factors of $(x_j - x_k)$ and $(y_j - y_k)$ in the numerator, one factor for each pair with the larger index first. In the denominator we will have $(x_j + y_k)$ for every pair of variables. Notice that the numerator already has degree $n(n - 1)$ in our variables and the denominator has degree $n^2$, which is $n$ more than that of the numerator.

This is also true of the determinant, all of whose terms are products of $n$ entries, each contributing degree one to the denominator and nothing to the numerator, for a net excess of $n$ in the denominator.

In fact the formula we have so far,

$$\det = \frac{\prod_{j>k}(x_j - x_k)\,\prod_{j>k}(y_j - y_k)}{\prod_{j,k}(x_j + y_k)},$$

where the first two products are over variable pairs with the first variable having greater index, while the product in the denominator is over all pairs, is the determinant we seek.

We can verify this by setting $y_k = -x_k + t$ for every $k$, in which case the terms in the products in the numerator cancel against the off diagonal factors $(x_j - x_k + t)$ in the denominator (the signs match up exactly), and we are left with $t^n$ in the denominator, which is exactly what we get from the diagonal term of the determinant, $\frac{1}{t^n}$.

In our example, we can deduce that our determinant is $\frac{1}{25200}$ (whose denominator includes all the factors $x_j + y_k$ except for a factor of 12, which cancels with the numerator).
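We can check this exactly with Python's `Fraction` type; the `det` helper here is a brute-force expansion of our own:

```python
from fractions import Fraction
from itertools import permutations

def det(m):
    """Brute-force determinant via the permutation expansion."""
    n = len(m)
    total = 0
    for perm in permutations(range(n)):
        inversions = sum(1 for a in range(n) for b in range(a + 1, n)
                         if perm[a] > perm[b])
        term = (-1) ** inversions
        for row in range(n):
            term *= m[row][perm[row]]
        total += term
    return total

xs, ys = [1, 2, 4], [1, 2, 3]
cauchy = [[Fraction(1, x + y) for y in ys] for x in xs]

numerator = 1          # product of (x_j - x_k)(y_j - y_k) over j > k
for j in range(len(xs)):
    for k in range(j):
        numerator *= (xs[j] - xs[k]) * (ys[j] - ys[k])

denominator = 1        # product of (x_j + y_k) over all pairs
for x in xs:
    for y in ys:
        denominator *= x + y

print(det(cauchy))                       # 1/25200
print(Fraction(numerator, denominator))  # 1/25200
```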

Exercise: There is one more famous example, which we give you as an exercise:
Consider a matrix whose $(j, k)$ element is $x_j$ if $j > k$, and $y_j$ otherwise.
For a 3 by 3 matrix this looks like

$$\begin{pmatrix} y_1 & y_1 & y_1 \\ x_2 & y_2 & y_2 \\ x_3 & x_3 & y_3 \end{pmatrix}$$

Find a formula for its determinant. (Notice that it is a polynomial in these variables; of what degree? When will it be 0? You can read off the answer from your answers to these questions.)

For example, consider the matrix:

$$\begin{pmatrix} 6 & 6 & 6 \\ 1 & 5 & 5 \\ 2 & 2 & 5 \end{pmatrix}$$

You can immediately deduce that its determinant is 6*4*3 or 72.
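If you want to experiment with this exercise numerically before committing to a formula, here is a Python sketch; both helpers are our own, and the sample values are made up (chosen so the determinant comes out to 72, matching the remark above):

```python
from itertools import permutations

def det(m):
    """Brute-force determinant via the permutation expansion."""
    n = len(m)
    total = 0
    for perm in permutations(range(n)):
        inversions = sum(1 for a in range(n) for b in range(a + 1, n)
                         if perm[a] > perm[b])
        term = (-1) ** inversions
        for row in range(n):
            term *= m[row][perm[row]]
        total += term
    return total

def pattern_matrix(xs, ys):
    """(j, k) entry is xs[j] when j > k and ys[j] otherwise (0-indexed)."""
    n = len(xs)
    return [[xs[j] if j > k else ys[j] for k in range(n)] for j in range(n)]

# made-up sample values; xs[0] never appears in the matrix
m = pattern_matrix([0, 1, 2], [6, 5, 5])
print(m)       # [[6, 6, 6], [1, 5, 5], [2, 2, 5]]
print(det(m))  # 72
```

Trying a few sets of values like this makes the pattern behind the exercise easy to spot.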

#### 3 Lewis Carroll's Theorem

The formula for the determinant of a matrix in two dimensions,

$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix},$$

is

$$\det A = a_{11} a_{22} - a_{12} a_{21}.$$

Charles Dodgson (whose pen name was Lewis Carroll) found an analog of this formula in every dimension.

We introduce the following notation: let $A_{i,j}$ be the determinant of the matrix obtained from $A$ by removing its $i$-th row and $j$-th column. If we omit two rows and two columns, let $A_{ij,kl}$ be the determinant of what is left (rows $i$ and $j$ and columns $k$ and $l$ removed); in the two dimensional case nothing is left, and we define the determinant of the empty matrix to be 1.

Then for a two dimensional matrix $A$, as above, we have $A_{2,2} = a_{11}$, $A_{1,2} = a_{21}$, and so on.
Our two dimensional formula above can then be written as

$$\det A \cdot A_{12,12} = A_{1,1} A_{2,2} - A_{1,2} A_{2,1}.$$
It is this formula that Dodgson generalized. He noticed, and proved, that if you pick any two distinct indices (say $j$ and $k$) for an $n$ by $n$ matrix, you get the same result:

$$\det A \cdot A_{jk,jk} = A_{j,j} A_{k,k} - A_{j,k} A_{k,j}.$$
This formula gives an $n$ by $n$ determinant in terms of determinants of smaller size, so it can be used as a recursive definition of the determinant. (If you define the 0 by 0 determinant to be 1, and the 1 by 1 determinant of a number to be the number itself, you can use this formula to define all higher dimensional determinants.)
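The recursive definition can be sketched in Python; the function names here are our own, and this naive version (using indices $j = 1$, $k = n$) simply fails whenever it must divide by a vanishing minor, as discussed below:

```python
from fractions import Fraction

def minor(m, rows, cols):
    """Copy of m with the given (0-indexed) rows and columns removed."""
    return [[m[r][c] for c in range(len(m)) if c not in cols]
            for r in range(len(m)) if r not in rows]

def dodgson_det(m):
    """Determinant via Dodgson's identity with j = 1, k = n.

    This naive sketch raises ZeroDivisionError whenever the central
    minor's determinant vanishes; a robust version needs a fallback.
    """
    n = len(m)
    if n == 0:
        return Fraction(1)   # the 0 by 0 determinant is defined to be 1
    if n == 1:
        return Fraction(m[0][0])
    last = n - 1
    a_11 = dodgson_det(minor(m, {0}, {0}))         # remove row 1, col 1
    a_nn = dodgson_det(minor(m, {last}, {last}))   # remove row n, col n
    a_1n = dodgson_det(minor(m, {0}, {last}))      # remove row 1, col n
    a_n1 = dodgson_det(minor(m, {last}, {0}))      # remove row n, col 1
    inner = dodgson_det(minor(m, {0, last}, {0, last}))
    return (a_11 * a_nn - a_1n * a_n1) / inner

print(dodgson_det([[2, 1], [1, 2]]))                    # 3
print(dodgson_det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -3
```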

We indicate a proof of Dodgson's formula. To do so we need only show that the right hand side is linear in each row of $A$ and is zero if two rows are identical.

We make the following observations:
If two rows other than rows $j$ and $k$ are identical, then every determinant on the right is 0. This case is a bit awkward, since the denominator vanishes as well, but there will be two zeros in the numerator for each one in the denominator, and the formula essentially says that $\det A$ is 0 when this happens.

If row j or k is identical to a different row then one factor in each term in the numerator will be 0, and det A will be 0 by this formula.

Finally, if row j and k are identical, the two terms in the numerator are the same and cancel, giving the same conclusion.

Dodgson's formula is obviously linear in row j and row k, since only one of the factors on the right in each term has either of these rows, and each is a determinant which is linear in its rows.

As far as the other rows are concerned, they occur in both factors of both terms in the numerator (with a column missing in each) as well as in the denominator, so that the expression on the right hand side of the formula is a quadratic in each such row divided by a term linear in it.

If you divide a quadratic by a linear function you get a linear function and perhaps a remainder. It is necessary to show that there is no remainder here.

It is obvious that Dodgson's formula only makes sense if the numerator on its right is zero whenever its denominator is zero. Otherwise, it would imply that the determinant could become infinite when the denominator was 0.

It turns out that this condition is all that is needed for us to deduce the linearity of the determinant here in rows other than $j$ and $k$ from its linearity in lower dimensions. This condition can be proven by straightforward use of row operations. We leave the details to the interested reader.

This formula does not produce a particularly efficient way to evaluate determinants, but it is fun to look at.