We seek (Lie) groups of continuous linear transformations,

(16.41)   $x'_\mu = a_{\mu\nu}(\theta)\, x_\nu$

(16.42)   $a_{\mu\nu}(0) = \delta_{\mu\nu}$

(so that the transformation reduces to the identity when all of the continuous parameters $\theta$ vanish). Examples of transformations of importance in physics (that you should already be familiar with) include

(16.43)   $x'_i = x_i + a_i$

where $a_i$ is a constant vector. This is the (three parameter) translation group. Another is

(16.44)   $x'_i = a_{ij}\, x_j$

(16.45)   $a_{ij}\, a_{ik} = \delta_{jk}$

the (three parameter) rotation group, whose matrices preserve the Euclidean length of $\vec{x}$.

An **infinitesimal transformation** in one of the parameters is defined by

(16.46)   $a_{ij} = \delta_{ij} + \epsilon_{ij}, \qquad |\epsilon_{ij}| \ll 1$

(16.47)   $x'_i = x_i + \epsilon_{ij}\, x_j$

(16.48)   $x'_i x'_i = (\delta_{ij} + \epsilon_{ij})(\delta_{ik} + \epsilon_{ik})\, x_j x_k$

(16.49)   $\phantom{x'_i x'_i} = x_j x_j + (\epsilon_{jk} + \epsilon_{kj})\, x_j x_k + O(\epsilon^2)$

Putting this all together,

(16.50)   $|x'|^2 = |x|^2 + (\epsilon_{ij} + \epsilon_{ji})\, x_i x_j + O(\epsilon^2)$

(summed over repeated indices — $\mu,\nu = 0,\ldots,3$ in four dimensional space-time and $i,j = 1,2,3$ in space). Thus, requiring $|x'|^2 = |x|^2$, (unsurprisingly)

(16.51)   $\epsilon_{ij} = -\epsilon_{ji}$

One can easily verify that, to first order in the infinitesimal parameters,

(16.52)   $a(\epsilon)\, a(\epsilon') = a(\epsilon + \epsilon') + O(\epsilon \epsilon')$

(16.53)   $a^{-1}(\epsilon) = a(-\epsilon)$

The continuous transformation group (mentioned above) follows immediately from making $a_i$ (the displacement of coordinates) infinitesimal and finding finite displacements by integration. The rotation group (matrices) are a little trickier. They are generated by

(16.54)   $a(d\theta) = \mathbb{1} + d\theta\, (\hat{n} \cdot \vec{S})$

(16.55)   $(S_k)_{ij} = -\epsilon_{kij}$

or, explicitly,

(16.56)   $S_1 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix}$

(16.57)   $S_2 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix}$

(16.58)   $S_3 = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$

To obtain the rotation **group** we must show that *every* rotation can be obtained by integrating the infinitesimal rotation $a(d\theta)$. This follows by writing an arbitrary rotation or product of rotations as a single rotation through some angle $\theta$ about a fixed axis $\hat{n}$. For the generator $\hat{n} \cdot \vec{S}$ parallel to this axis, this is obviously true, as I show next. Since any rotation can be written this way, the rotations indeed form a group.

The integration proceeds like:

(16.59)   $a(\theta) = \lim_{N\to\infty} \left( \mathbb{1} + \frac{\theta}{N}\, \hat{n} \cdot \vec{S} \right)^N$

(16.60)   $\phantom{a(\theta)} = e^{\theta\, \hat{n} \cdot \vec{S}}$

(16.61)   $\phantom{a(\theta)} = \mathbb{1} + (\hat{n} \cdot \vec{S}) \sin\theta + (\hat{n} \cdot \vec{S})^2 (1 - \cos\theta)$

where the last line follows from $(\hat{n} \cdot \vec{S})^3 = -(\hat{n} \cdot \vec{S})$, which truncates the power series.
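This truncation is easy to check numerically. Below is a minimal sketch — assuming Python with NumPy/SciPy (neither appears in the notes themselves) and the generator sign convention $(S_k)_{ij} = -\epsilon_{kij}$ — comparing the matrix exponential against the closed (Rodrigues-type) form:

```python
import numpy as np
from scipy.linalg import expm

# Generators of 3D rotations, (S_k)_ij = -eps_kij (an assumed sign
# convention; only the relative signs matter for this check).
S1 = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
S2 = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=float)
S3 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)

def rotation(theta, n):
    """Finite rotation from the exponentiated generator."""
    n = np.asarray(n, dtype=float) / np.linalg.norm(n)
    M = n[0] * S1 + n[1] * S2 + n[2] * S3
    return expm(theta * M)

def rodrigues(theta, n):
    """Closed form: 1 + M sin(theta) + M^2 (1 - cos(theta))."""
    n = np.asarray(n, dtype=float) / np.linalg.norm(n)
    M = n[0] * S1 + n[1] * S2 + n[2] * S3
    return np.eye(3) + M * np.sin(theta) + M @ M * (1.0 - np.cos(theta))

theta, n = 0.7, (1.0, 2.0, 2.0)
R = rotation(theta, n)
# The series truncation is exact, and the result is orthogonal
# with unit determinant -- a proper rotation.
assert np.allclose(R, rodrigues(theta, n))
assert np.allclose(R.T @ R, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)
```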

With these known results from simpler days recalled to mind, we return to the homogeneous, proper Lorentz group. Here we seek the infinitesimal linear transformations, etc., in *four* dimensions. Algebraically one proceeds almost identically to the case of rotation, but now in four dimensions and with the goal of preserving length in a different metric. A general infinitesimal transformation can be written compactly as:

(16.62)   $\mathbf{A} = \mathbb{1} + \mathbf{L}, \qquad \mathbf{L}\ \text{infinitesimal}$

Thus

(16.63)   $\mathbf{A} = \lim_{N\to\infty} \left( \mathbb{1} + \frac{\mathbf{L}}{N} \right)^N = e^{\mathbf{L}}$

Thus, whenever we write $\mathbf{A}$ we mean the finite transformation

(16.64)   $x' = \mathbf{A} x = e^{\mathbf{L}} x$

To construct $\mathbf{L}$ (and find the distinct components of $\mathbf{A}$) we make use of its properties. Its determinant is

(16.65)   $\det \mathbf{A} = \det\left(e^{\mathbf{L}}\right) = e^{\operatorname{Tr} \mathbf{L}}$

(16.66)   $\det \mathbf{A} = \pm 1$

(the latter following from the metric-preserving condition imposed below). If $\mathbf{L}$ is real then $e^{\operatorname{Tr}\mathbf{L}} > 0$, so $\det \mathbf{A} = -1$ is excluded by this result. If $\mathbf{L}$ is traceless (and only if, given that it is real), then

(16.67)   $\det \mathbf{A} = +1$
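The determinant identity is easy to check numerically. A quick sketch (assuming Python with NumPy/SciPy, which the notes themselves do not use):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
L = rng.normal(size=(4, 4))          # an arbitrary real 4x4 matrix
L -= np.trace(L) / 4 * np.eye(4)     # project out the trace

A = expm(L)
# det(e^L) = e^{Tr L}; for a real traceless L this forces det A = +1.
assert np.isclose(np.linalg.det(A), np.exp(np.trace(L)))
assert np.isclose(np.linalg.det(A), 1.0)
```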

Think back to the requirement that:

(16.68)   $\mathbf{g} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}$

(16.69)   $\tilde{\mathbf{A}}\, \mathbf{g}\, \mathbf{A} = \mathbf{g}$

(which preserves the invariant interval). If we multiply from the right by $\mathbf{A}^{-1}$ and the left by $\mathbf{g}$, this equation is equivalent also to

(16.70)   $\mathbf{g}\, \tilde{\mathbf{A}}\, \mathbf{g} = \mathbf{A}^{-1}$

(16.71)   $\mathbf{g}\, e^{\tilde{\mathbf{L}}}\, \mathbf{g} = e^{-\mathbf{L}}$

(16.72)   $e^{\mathbf{g} \tilde{\mathbf{L}} \mathbf{g}} = e^{-\mathbf{L}}$

(using $\mathbf{g}^2 = \mathbb{1}$). Finally, if we multiply both sides from the *left* by $\mathbf{g}$ and express the left hand side as a transpose, we get

(16.73)   $\mathbf{g}\, \tilde{\mathbf{L}}\, \mathbf{g} = -\mathbf{L}$

(16.74)   $\widetilde{(\mathbf{g}\mathbf{L})} = \tilde{\mathbf{L}}\, \mathbf{g} = -\mathbf{g}\mathbf{L}$

that is, the matrix $\mathbf{g}\mathbf{L}$ is antisymmetric. A real, traceless $4\times 4$ matrix $\mathbf{L}$ with $\mathbf{g}\mathbf{L}$ antisymmetric has exactly six independent components.

So this is just great. Let us now separate out the individual components for our appreciation and easy manipulation. To do that we define six fundamental matrices (called the **generators** of the group) from which we can construct an arbitrary $\mathbf{L}$ and hence $\mathbf{A}$. They are basically the individual matrices with unit or zero components that can be scaled by the six parameters $\vec{\omega}, \vec{\zeta}$. The particular choices for the signs make certain relations work out nicely:

(16.75)   $S_1 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 \end{pmatrix}$

(16.76)   $S_2 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{pmatrix}$

(16.77)   $S_3 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$

(16.78)   $K_1 = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$

(16.79)   $K_2 = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}$

(16.80)   $K_3 = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}$

These satisfy

(16.81)   $\tilde{S}_i = -S_i$

(16.82)   $\tilde{K}_i = K_i$

(16.83)   $\mathbf{g}\, S_i\, \mathbf{g} = S_i$

(16.84)   $\mathbf{g}\, K_i\, \mathbf{g} = -K_i$

Note that these relations are exactly what (16.73) requires: each generator satisfies $\mathbf{g}\tilde{\mathbf{L}}\mathbf{g} = -\mathbf{L}$, and hence so does any real linear combination of them.
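These properties can be sanity-checked numerically. A sketch — Python with NumPy assumed; the explicit matrices encode sign conventions that are an assumption of this sketch — verifying that each generator is traceless and that $\mathbf{g}M$ is antisymmetric:

```python
import numpy as np

# Metric and the six generators (S rotations, K boosts).
g = np.diag([1.0, -1.0, -1.0, -1.0])

def E(i, j, v):
    """A 4x4 matrix with a single entry v at (i, j)."""
    m = np.zeros((4, 4)); m[i, j] = v; return m

S = [E(2, 3, -1) + E(3, 2, 1),
     E(1, 3, 1) + E(3, 1, -1),
     E(1, 2, -1) + E(2, 1, 1)]
K = [E(0, 1, 1) + E(1, 0, 1),
     E(0, 2, 1) + E(2, 0, 1),
     E(0, 3, 1) + E(3, 0, 1)]

for M in S + K:
    # Each generator is traceless and satisfies g M^T g = -M,
    # i.e. gM is antisymmetric -- the defining property of L.
    assert np.isclose(np.trace(M), 0.0)
    assert np.allclose(g @ M.T @ g, -M)
    assert np.allclose((g @ M).T, -(g @ M))
```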

The reason this is important is that if we form the dot product of a vector of these generators with a vector parameter (effectively decomposing a general $\mathbf{L}$ in terms of these matrices) in the exponential expansion, the following relations can be used to reduce powers of the generators:

(16.85)   $(\hat{\epsilon} \cdot \vec{S})^3 = -\,\hat{\epsilon} \cdot \vec{S}$

(16.86)   $(\hat{\epsilon}' \cdot \vec{K})^3 = \hat{\epsilon}' \cdot \vec{K}$

(for unit vectors $\hat{\epsilon}, \hat{\epsilon}'$).

It is easy (and important!) to determine the commutation relations of these generators. They are:

(16.87)   $[S_i, S_j] = \epsilon_{ijk} S_k$

(16.88)   $[S_i, K_j] = \epsilon_{ijk} K_k$

(16.89)   $[K_i, K_j] = -\,\epsilon_{ijk} S_k$

The first set are immediately recognizable: they tell us that ``two rotations performed in both orders differ by a rotation''. The second and third show that ``a boost and a rotation differ by a boost'' and ``two boosts differ by a rotation'', respectively. These statements are in quotes because they are somewhat oversimplified, but they get some of the idea across.
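Both the commutation relations and the power-reduction identities are quick to verify numerically. A sketch under the same assumed conventions (Python with NumPy, not part of the original notes):

```python
import numpy as np

def E(i, j, v):
    m = np.zeros((4, 4)); m[i, j] = v; return m

# Rotation (S) and boost (K) generators.
S = [E(2, 3, -1) + E(3, 2, 1),
     E(1, 3, 1) + E(3, 1, -1),
     E(1, 2, -1) + E(2, 1, 1)]
K = [E(0, 1, 1) + E(1, 0, 1),
     E(0, 2, 1) + E(2, 0, 1),
     E(0, 3, 1) + E(3, 0, 1)]

def comm(a, b):
    return a @ b - b @ a

eps = np.zeros((3, 3, 3))   # Levi-Civita symbol
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0

for i in range(3):
    for j in range(3):
        rhs_S = sum(eps[i, j, k] * S[k] for k in range(3))
        rhs_K = sum(eps[i, j, k] * K[k] for k in range(3))
        assert np.allclose(comm(S[i], S[j]), rhs_S)    # rotations close on rotations
        assert np.allclose(comm(S[i], K[j]), rhs_K)    # K transforms as a vector
        assert np.allclose(comm(K[i], K[j]), -rhs_S)   # boosts commute to a rotation

# The cube identities that truncate the exponential power series:
for Si in S:
    assert np.allclose(Si @ Si @ Si, -Si)
for Ki in K:
    assert np.allclose(Ki @ Ki @ Ki, Ki)
```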

These are the generators for the groups $SL(2,\mathbb{C})$ or $O(1,3)$. The latter is the group of relativity as we are currently studying it.

A question that has been brought up in class is ``where is the factor $i$ in the generators of rotation'', so that $[S_i, S_j] = i\epsilon_{ijk} S_k$ as we might expect from considering spin and angular momentum in other contexts. It is there, but subtly hidden, in the fact that $(\hat{n}\cdot\vec{S})^2 = -\mathbb{1}$ *in the projective block* of the rotation matrices only — in that block the generator itself behaves like $i$, squaring to minus the identity. Matrices appear to be *a* way to represent geometric algebras, as most readers of this text should already know from their study of the (quaternionic) Pauli spin matrices. We won't dwell on this here, but note well that the Pauli matrices $\{\sigma_0, \sigma_1, \sigma_2, \sigma_3\}$ are isomorphic to the unit quaternions $\{1, i, j, k\}$ via the mapping $1 \leftrightarrow \sigma_0$, $i \leftrightarrow -i\sigma_1$, $j \leftrightarrow -i\sigma_2$, $k \leftrightarrow -i\sigma_3$, as the reader can easily verify^{16.6}. Note well that:

(16.90)   $\sigma_i \sigma_j = \delta_{ij}\, \sigma_0 + i\, \epsilon_{ijk}\, \sigma_k$
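The ``easy verification'' is genuinely a few lines. A sketch assuming Python with NumPy and the mapping $q_k = -i\sigma_k$ stated above:

```python
import numpy as np

# Pauli matrices.
s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Quaternion units under the assumed mapping q_k = -i sigma_k.
qi, qj, qk = -1j * s1, -1j * s2, -1j * s3

# Hamilton's defining relations: i^2 = j^2 = k^2 = ijk = -1.
for q in (qi, qj, qk):
    assert np.allclose(q @ q, -s0)
assert np.allclose(qi @ qj @ qk, -s0)
assert np.allclose(qi @ qj, qk)   # ij = k (and cyclic permutations)
```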

With these definitions in hand, we can easily decompose $\mathbf{L}$ in terms of the $S_i$ and the $K_i$ matrices. We get:

(16.91)   $\mathbf{L} = -\vec{\omega} \cdot \vec{S} - \vec{\zeta} \cdot \vec{K}$

(16.92)   $\mathbf{A} = e^{-\vec{\omega} \cdot \vec{S} - \vec{\zeta} \cdot \vec{K}}$

Let us see that these are indeed the familiar boosts and rotations we are used to. After all, this exponential notation is not transparent. Suppose that $\vec{\omega} = 0$ and $\vec{\zeta} = \zeta \hat{x}_1$. Then $\mathbf{A} = e^{-\zeta K_1}$ and, using $K_1^3 = K_1$ to reduce the power series,

(16.93)   $\mathbf{A} = \mathbb{1} - K_1 \sinh\zeta + K_1^2 (\cosh\zeta - 1)$

or (in matrix form)

(16.94)   $\mathbf{A} = \begin{pmatrix} \cosh\zeta & -\sinh\zeta & 0 & 0 \\ -\sinh\zeta & \cosh\zeta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$

which is the familiar boost along $\hat{x}_1$ with $\zeta = \tanh^{-1}\beta$, so that $\cosh\zeta = \gamma$ and $\sinh\zeta = \gamma\beta$.

Now, a boost in an *arbitrary* direction is just

(16.95)   $\mathbf{A} = e^{-\vec{\zeta} \cdot \vec{K}}$

(16.96)   $\vec{\zeta} = \hat{\beta}\, \tanh^{-1}\beta$

(16.97)   $\mathbf{A} = \mathbb{1} - (\hat{\beta} \cdot \vec{K}) \sinh\zeta + (\hat{\beta} \cdot \vec{K})^2 (\cosh\zeta - 1)$

I can do no better than quote Jackson on the remainder:

``It is left as an exercise to verify that ...''

(16.98)   $x'_0 = \gamma \left( x_0 - \vec{\beta} \cdot \vec{x} \right)$

(16.99)   $\vec{x}' = \vec{x} + \frac{\gamma - 1}{\beta^2} \left( \vec{\beta} \cdot \vec{x} \right) \vec{\beta} - \gamma \vec{\beta}\, x_0$

(16.100)   $\gamma = \cosh\zeta = \frac{1}{\sqrt{1 - \beta^2}}$

from before.
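A numerical sketch (Python with NumPy/SciPy assumed, with the generator conventions used in the earlier sketches) confirming that exponentiating $-\vec{\zeta}\cdot\vec{K}$ reproduces the standard boost of an event's coordinates and preserves the metric:

```python
import numpy as np
from scipy.linalg import expm

def E(i, j, v):
    m = np.zeros((4, 4)); m[i, j] = v; return m

# Boost generators and metric.
K = [E(0, 1, 1) + E(1, 0, 1),
     E(0, 2, 1) + E(2, 0, 1),
     E(0, 3, 1) + E(3, 0, 1)]
g = np.diag([1.0, -1.0, -1.0, -1.0])

def boost(beta_vec):
    """A = exp(-zeta . K) with zeta = beta_hat * atanh(|beta|)."""
    b = np.asarray(beta_vec, dtype=float)
    beta = np.linalg.norm(b)
    zeta = np.arctanh(beta) * b / beta
    return expm(-sum(zeta[k] * K[k] for k in range(3)))

b = np.array([0.3, -0.2, 0.5])           # |beta| < 1
A = boost(b)
gamma = 1.0 / np.sqrt(1.0 - b @ b)

x = np.array([1.0, 0.7, -0.4, 2.0])      # an arbitrary event (x0, x1, x2, x3)
xp = A @ x
# x0' = gamma (x0 - beta . x)
assert np.isclose(xp[0], gamma * (x[0] - b @ x[1:]))
# x' = x + (gamma - 1)/beta^2 (beta . x) beta - gamma beta x0
xs = x[1:] + (gamma - 1) / (b @ b) * (b @ x[1:]) * b - gamma * b * x[0]
assert np.allclose(xp[1:], xs)
# A preserves the invariant interval: A^T g A = g.
assert np.allclose(A.T @ g @ A, g)
```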

Now, we have enough information to construct the exact form of a simultaneous boost and rotation, but this presents a dual problem. When we go to factorize the results (like before) the components of independent boosts and rotations do not commute! If you like,

(16.101)   $e^{-\vec{\omega}\cdot\vec{S} - \vec{\zeta}\cdot\vec{K}} \neq e^{-\vec{\omega}\cdot\vec{S}}\, e^{-\vec{\zeta}\cdot\vec{K}}$

(16.102)   $e^{-\vec{\omega}\cdot\vec{S}}\, e^{-\vec{\zeta}\cdot\vec{K}} \neq e^{-\vec{\zeta}\cdot\vec{K}}\, e^{-\vec{\omega}\cdot\vec{S}}$

(in general). The worst part, of course, is the algebra itself. A useful exercise for the algebraically inclined might be to construct the general solution using, e.g., Mathematica.

This suggests that for *rotating* relativistic systems (such as atoms or
orbits around neutron stars) we may need a kinematic correction to account for
the successive frame changes as the system rotates.

The atom perceives itself as being ``elliptically deformed''. The consequences of this are observable. This is known as ``Thomas precession''.
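The kinematic effect behind this can be seen numerically: the composition of two pure boosts along different axes is not a pure boost but contains a residual spatial (Wigner) rotation, and for a rotating system these residual rotations accumulate as a precession. A sketch, assuming Python with NumPy/SciPy and the conventions of the earlier sketches:

```python
import numpy as np
from scipy.linalg import expm, sqrtm

def E(i, j, v):
    m = np.zeros((4, 4)); m[i, j] = v; return m

# Boost generators.
K = [E(0, 1, 1) + E(1, 0, 1),
     E(0, 2, 1) + E(2, 0, 1),
     E(0, 3, 1) + E(3, 0, 1)]

def boost(b):
    b = np.asarray(b, dtype=float)
    zeta = np.arctanh(np.linalg.norm(b)) * b / np.linalg.norm(b)
    return expm(-sum(zeta[k] * K[k] for k in range(3)))

Bx, By = boost([0.6, 0.0, 0.0]), boost([0.0, 0.6, 0.0])
A = By @ Bx                      # two successive non-collinear boosts

# Each pure boost is a symmetric matrix, but their product is not:
# the composition hides a spatial rotation.
assert np.allclose(Bx, Bx.T) and np.allclose(By, By.T)
assert not np.allclose(A, A.T)

# Factor A = (pure boost) @ (rotation): P = sqrt(A A^T) is the
# symmetric boost factor; R = P^{-1} A is orthogonal, with a pure
# spatial rotation block -- the Wigner rotation.
P = np.real(sqrtm(A @ A.T))
R = np.linalg.inv(P) @ A
assert np.allclose(R @ R.T, np.eye(4))
angle = np.arccos((np.trace(R[1:, 1:]) - 1.0) / 2.0)
assert 0.0 < angle < np.pi       # a genuine residual rotation
```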