
## 3.8 Digression on Length and Distance in Vector Spaces

The distance between two vectors $\vec{v}$ and $\vec{w}$ is the length of the difference vector $\vec{v} - \vec{w}$.

There are many different distance functions that you will encounter in the world. Here we use the Euclidean distance, for which the Pythagorean theorem holds.

If the concepts of distance and length are used without additional description, this is what we will mean:

The square of the length of a vector $\vec{w}$ is the sum of the squares of its components (or, more generally, the sum of the squares of their absolute values, when the components are complex numbers). It is the dot product $(\vec{w}, \vec{w})$, also written $\vec{w} \cdot \vec{w}$.
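
This definition can be sketched directly in Python; the helper names below are illustrative, not from the text. For complex components we conjugate one factor so that the squared length comes out as the sum of $|w_i|^2$:

```python
# Euclidean length via the dot product: |w|^2 = (w, w).
# Conjugating the first factor makes the squared length real
# (the sum of |w_i|^2) even for complex components.

def dot(v, w):
    """Dot product; conjugates the first factor for complex inputs."""
    return sum(a.conjugate() * b for a, b in zip(v, w))

def length(w):
    """Euclidean length: the square root of (w, w)."""
    return abs(dot(w, w)) ** 0.5

print(length([3, 4]))    # 5.0, by the Pythagorean theorem
print(length([1j, 1]))   # sqrt(2): |i|^2 + |1|^2 = 2
```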

But that is not the only concept of distance you will encounter in life.

What properties should the length of a vector have?

The traditional requirements are as follows:

It should be non-negative, and zero for the zero vector.

It should obey the triangle inequality: the length of the sum of two vectors is no greater than the sum of their lengths.

It is nice if length $0$ means that the vector is the $\vec{0}$ vector.

What other concepts of length or distance are around?

Manhattan distance: the length of a vector is the sum of the absolute values of its components.

Hamming distance: length is the number of non-zero components.

Maximum component distance: length is the maximum absolute value of the components.
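
These three notions of length are easy to compute; the following sketch uses illustrative helper names:

```python
# Three alternative lengths of a vector, as defined above.

def manhattan(v):
    """Sum of the absolute values of the components."""
    return sum(abs(x) for x in v)

def hamming(v):
    """Number of non-zero components."""
    return sum(1 for x in v if x != 0)

def max_component(v):
    """Maximum absolute value among the components."""
    return max(abs(x) for x in v)

v = [3, 0, -4]
print(manhattan(v))      # 7
print(hamming(v))        # 2
print(max_component(v))  # 4
```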

Suppose we call the components $x_i$, a small quantity of any of them $dx_i$, and the resulting value of distance with components $dx_i$ let us call $ds$.

Then in Euclidean space we have $ds^2 = \sum_i |dx_i|^2$.

We define the metric $L^j$ by $ds^j = \sum_i |dx_i|^j$.

Euclidean space can then be described as $L^2$.
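
The $L^j$ length is one short formula; here is a sketch (the function name is illustrative), showing the familiar cases $j = 2$ and $j = 1$, and hinting at what happens for large $j$:

```python
def l_j_length(v, j):
    """Length of v in the L^j metric: (sum of |x_i|^j) ** (1/j)."""
    return sum(abs(x) ** j for x in v) ** (1.0 / j)

v = [3, 4]
print(l_j_length(v, 2))   # 5.0 (Euclidean)
print(l_j_length(v, 1))   # 7.0 (Manhattan)
print(l_j_length(v, 50))  # very close to 4, the maximum component
```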

Exercise 3.14 Which values of $j$ in the definition of $L^j$ correspond to Hamming, Manhattan, and maximum component size? (Hints: $j$ can be infinite; also, for Hamming distance the notions are similar but not exactly the same, and only similar in a limit.)

Length in Euclidean space when non-rectilinear coordinates are used:

When you describe ordinary vectors in Euclidean space by their polar coordinates, then these do not obey the linear properties of ordinary rectangular coordinates. For example, the length of the sum of two vectors is not the sum of their lengths, and the angle made with the $x$ axis of a sum is not the sum of the angles of the summands.

We can ask, what is the length of a small vector whose endpoints differ in the $r$ coordinate by $dr$ and in the angle by $d\theta$?

If we are at a specific point with given coordinates, the $r$ direction is the direction pointing away from the origin through that point, and distance in this direction is measured just as in the $x$ or $y$ direction. The length of a vector in this direction with coordinate $dr$ is $|dr|$.

The $\theta$ direction is perpendicular to the $r$ direction, increasing in the counterclockwise direction, but the distance corresponding to a change $d\theta$ is not $d\theta$ itself. The distance around a circle is proportional to the radius of the circle, and distance in the angular direction is in consequence proportional to $r$ as well.
The result is that distance in polar coordinates is measured by

$ds^2 = dr^2 + r^2\, d\theta^2$
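
A quick numerical sanity check of this formula: take a small displacement $(dr, d\theta)$, convert both endpoints to rectangular coordinates, and compare the exact Euclidean distance with $\sqrt{dr^2 + r^2\, d\theta^2}$ (the values of $r$, $\theta$, and the step sizes below are arbitrary choices):

```python
import math

def cartesian(r, theta):
    """Convert polar coordinates to rectangular coordinates."""
    return (r * math.cos(theta), r * math.sin(theta))

r, theta = 2.0, 0.7
dr, dtheta = 1e-4, 1e-4

x1, y1 = cartesian(r, theta)
x2, y2 = cartesian(r + dr, theta + dtheta)

exact = math.hypot(x2 - x1, y2 - y1)          # true Euclidean distance
metric = math.sqrt(dr**2 + (r * dtheta)**2)   # ds from the polar metric

print(exact, metric)  # the two agree to many decimal places
```

The agreement improves as $dr$ and $d\theta$ shrink, which is exactly the sense in which the metric describes the length of a *small* vector.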

Length in non-orthogonal coordinates:

Any $k$ linearly independent $k$-vectors may be used as a basis: any other $k$-vector can be expressed as a linear combination of them. (Why? By Exercise 3.11, any other $k$-vector is in a linear dependence with them, which can be solved for that $k$-vector in terms of the basis.)

Thus, in two dimensions, for example, any two vectors $\vec{a}$ and $\vec{b}$ with different directions can form a basis, and any vector $\vec{v}$ can be described by coordinates that are the coefficients of these two: if $\vec{v} = s\vec{a} + t\vec{b}$ then we can describe $\vec{v}$ by the 2-vector $(s, t)$.
However, if we are describing Euclidean space and the vectors $\vec{a}$ and $\vec{b}$ are not orthogonal, the squared length of $\vec{v}$ will not be $s^2 + t^2$. In general, though, if we define $(s, t)$ to be $\vec{v'}$, the squared length is $\langle \vec{v'} | G | \vec{v'} \rangle$ for some matrix $G$ which depends on the angle between $\vec{a}$ and $\vec{b}$.

Thus, if $\vec{a}$ and $\vec{b}$ are unit vectors at angle $\theta$, the matrix $G$ is

$\begin{pmatrix} 1 & \cos\theta \\ \cos\theta & 1 \end{pmatrix}$

The matrix $G$ is called the metric tensor for the given basis.
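
To see the metric tensor at work, here is a sketch (the variable names and the particular angle are arbitrary choices) that computes the length of $\vec{v} = s\vec{a} + t\vec{b}$ two ways: directly in rectangular coordinates, and from the coordinates $(s, t)$ via $|\vec{v}|^2 = s^2 + 2st\cos\theta + t^2$, which is what $\langle \vec{v'} | G | \vec{v'} \rangle$ expands to for the matrix above:

```python
import math

theta = math.pi / 3           # angle between the unit basis vectors
a = (1.0, 0.0)
b = (math.cos(theta), math.sin(theta))
s, t = 2.0, 3.0               # coordinates of v in the (a, b) basis

# Direct computation: build v in rectangular coordinates.
v = (s * a[0] + t * b[0], s * a[1] + t * b[1])
direct = math.hypot(*v)

# Via the metric tensor G = [[1, cos(theta)], [cos(theta), 1]]:
# <v'|G|v'> = s^2 + 2*s*t*cos(theta) + t^2.
via_G = math.sqrt(s * s + 2 * s * t * math.cos(theta) + t * t)

print(direct, via_G)  # the two computations agree
```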

Different metrics: Minkowski space:

There are even vector spaces in which the concept of distance is replaced by something whose square can be positive or negative: such is Minkowski space. It has four dimensions, three spatial and one temporal. In it the analog of distance is described by

$ds^2 = dx^2 + dy^2 + dz^2 - c^2\, dt^2$

Vectors with $s^2$ positive or negative are said to be space-like or time-like respectively; those with $s^2 = 0$ are said to lie on the "light cone".
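
This classification is a one-line sign test; the sketch below (with an illustrative function name, and units chosen so that $c = 1$) applies it to a few displacements:

```python
# Classify a displacement in Minkowski space by the sign of
# s^2 = dx^2 + dy^2 + dz^2 - c^2 * dt^2.  Units with c = 1 assumed.

def classify(dx, dy, dz, dt, c=1.0):
    s2 = dx * dx + dy * dy + dz * dz - (c * dt) ** 2
    if s2 > 0:
        return "space-like"
    if s2 < 0:
        return "time-like"
    return "light-like"   # on the light cone

print(classify(2, 0, 0, 1))  # space-like: spatial part dominates
print(classify(0, 0, 0, 1))  # time-like: temporal part dominates
print(classify(1, 0, 0, 1))  # light-like: s^2 = 0
```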

Why does anyone bother with such things?

Linear changes of coordinates in Euclidean space that do not alter distances (so that the distance between two points remains after the change exactly what it was before) are rotations in space. Similar changes in Minkowski space are symmetries of Maxwell's equations of electrodynamics, and correspond to rotations in space together with "Lorentz transformations". Thus even this last concept has important physical application. All the others do as well, in appropriate contexts.