The Quaternions

Vector spaces are foundational because of their structure. They generalize the field of scalars they are defined over, like $\mathbb{C}$ or $\mathbb{R}$, to include other objects - the vectors - whose addition is consistent with that of the underlying field. That is to say, vector addition is commutative and scalar multiplication distributes over it. As we saw last time, all sorts of things qualify as vectors, like matrices. Matrices add, of course, but they also multiply.

An algebra is a vector space together with a multiplication operation for vectors that closes. That is, the vector product of any two vectors is itself a vector. The familiar cross product from $\mathbb{R}^{3}$ is an early example.

$$ \vec{v}\times \vec{w} \in \mathbb{R}^3,\quad \vec{v},\vec{w}\in\mathbb{R}^3.$$

Indeed, this antisymmetric product will play an important role in today's discussion.

Note: Those with some exposure to de Rham cohomology - or at least the idea of parity in physics - may cry foul at this point, but let's take things at face value for now.

The Complex Numbers

Let's first start with $\mathbb{C}$, which can be thought of as the vector space $\mathbb{R}^{2}$, where

$$\vec{z} = a\hat{e}_{x} \ + b\hat{e}_{y},\quad a,b\in\mathbb{R}.$$

Here $\hat{e}_{i}$ are basis vectors for $\mathbb{R}^{2}$. To fully represent $\mathbb{C}$, we need the additional capacity to multiply vectors, which amounts to the rules,

\begin{equation}\label{005:Calgebra} \hat{e}_{x}\cdot \hat{e}_{x} = 1,\quad \hat{e}_{x} \cdot \hat{e}_{y} = \hat{e}_{y},\quad \hat{e}_{y}\cdot \hat{e}_{y} = -1.\end{equation}

It's not terribly hard to see that these basis vectors are represented in $\mathbb{C}$ by

\begin{equation}\label{005:Calgebradef} \hat{e}_{x} = 1, \quad \hat{e}_{y} = i.\end{equation}

Thus $\mathbb{C}$ can be thought of as an algebra.
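As a quick sanity check, these rules (together with $\hat{e}_{y}\cdot\hat{e}_{x} = \hat{e}_{y}$) reproduce ordinary complex multiplication:

$$ (a\hat{e}_{x} + b\hat{e}_{y})\cdot(c\hat{e}_{x} + d\hat{e}_{y}) = (ac - bd)\,\hat{e}_{x} + (ad + bc)\,\hat{e}_{y}, $$

which is exactly $(a + bi)(c + di) = (ac - bd) + (ad + bc)i$.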

The Matrix $J$

Last time we saw that the matrix

\begin{equation}\label{005:J}J = \left(\begin{array}{cc} 0 & 1 \\ -1 & 0\end{array}\right), \end{equation}

squares to minus the identity, that is

$$J\cdot J = -\mathbb{1}.$$
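Written out explicitly, that's

$$ \left(\begin{array}{cc} 0 & 1 \\ -1 & 0\end{array}\right)\left(\begin{array}{cc} 0 & 1 \\ -1 & 0\end{array}\right) = \left(\begin{array}{cc} -1 & 0 \\ 0 & -1\end{array}\right) = -\mathbb{1}. $$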

Treating $\mathbb{1}$ as $\hat{e}_{x}$ and $J$ as $\hat{e}_{y}$, we've found yet another way to represent the complex numbers, this time as $2\times 2$ matrices of the form,

$$ z = a \left(\begin{array}{cc} 1 & 0 \\ 0 & 1\end{array}\right) + b\left(\begin{array}{cc} 0 & 1 \\-1 & 0\end{array}\right),\quad a,b\in \mathbb{R}.$$

This of course simplifies to

\begin{equation}z = \left(\begin{array}{cc} a & b \\-b & a\end{array}\right).\end{equation}

In other words, any matrix of this form represents a complex number.
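You can also check that matrix multiplication preserves this form, and reproduces the multiplication rule for complex numbers:

$$ \left(\begin{array}{cc} a & b \\ -b & a\end{array}\right)\left(\begin{array}{cc} c & d \\ -d & c\end{array}\right) = \left(\begin{array}{cc} ac - bd & ad + bc \\ -(ad+bc) & ac - bd\end{array}\right). $$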

The $\sigma$-matrices

Last time we discussed the four $\sigma$-matrices

\begin{equation}\label{0045:sigmas} \sigma_{0} = \left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right),\quad  \sigma_{1} = \left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right),\; \sigma_{2} = \left(\begin{array}{cc} 0 & -i \\ i & 0 \end{array}\right),\; \sigma_{3} = \left(\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right), \end{equation}

and observed that they form a representation of $\mathbb{R}^{4}$. Using matrix multiplication, we can see how this vector space becomes an algebra. We'll work through this algebra together, and you can verify the following claims as a series of exercises.

The first claim is that, for any $\alpha = 0,1,2,3$,

\begin{equation}\label{005:square} \sigma_{\alpha}\cdot\sigma_{\alpha} = \mathbb{1}.\end{equation}

The second claim is almost trivial to see:

\begin{equation}\label{005:j2} J = i\sigma_2,\end{equation}

where $J$ is defined as in \eqref{005:J}. This, together with the fact that $\sigma_{0} = \mathbb{1}$, tells us that the algebra of complex numbers sits inside the algebra generated by these $\sigma$-matrices. Given that $\mathbb{C}\sim \mathbb{R}^{2} \subset \mathbb{R}^{4}$, this probably isn't so surprising, but it motivates the extension of \eqref{005:j2}:

\begin{equation}\label{005:IJK} I = i\sigma_{1},\quad J = i\sigma_{2},\quad K = -i\sigma_{3}.\end{equation}

(The minus sign on $K$ is what makes the multiplication rules below come out cleanly.) This leads us to our third claim, that

\begin{equation}\label{005:IJKq} I\cdot J = K, \quad J\cdot K = I, \quad K \cdot I = J,\end{equation}

You can also verify that

$$ J\cdot I = - K, \quad K\cdot J = -I, \quad I \cdot K = -J. $$

In other words, $I,J$ and $K$ anticommute with each other.
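If you'd rather let a computer do the bookkeeping for these exercises, here's a minimal sketch in Python with numpy (the variable names are my own):

```python
import numpy as np

# The four sigma-matrices.
s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Claim one: every sigma-matrix squares to the identity.
for s in (s0, s1, s2, s3):
    assert np.allclose(s @ s, s0)

# Claim two: J = i*sigma_2 is the real matrix that squares to minus the identity.
I, J, K = 1j * s1, 1j * s2, -1j * s3
assert np.allclose(J, [[0, 1], [-1, 0]])
assert np.allclose(J @ J, -s0)

# Claim three: the cyclic products, and the sign flips when the order is reversed.
assert np.allclose(I @ J, K) and np.allclose(J @ K, I) and np.allclose(K @ I, J)
assert np.allclose(J @ I, -K) and np.allclose(K @ J, -I) and np.allclose(I @ K, -J)

print("All three claims check out.")
```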

In this basis, any member of $\mathbb{R}^{4}$ may be written,

$$ Z = a + bI + cJ + dK, \quad a,b,c,d \in \mathbb{R}.$$

This algebra is well known to be precisely that of the quaternions, $\mathbb{H}$.
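For concreteness, assembling the matrices above gives a general quaternion the explicit form

$$ Z = \left(\begin{array}{cc} a - di & c + bi \\ -c + bi & a + di \end{array}\right) = \left(\begin{array}{cc} w & z \\ -\bar{z} & \bar{w}\end{array}\right),\quad w = a - di,\; z = c + bi, $$

which neatly generalizes the $2\times 2$ matrix form we found for $\mathbb{C}$.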

Hopefully it's clear that $\mathbb{C}$ is a subalgebra of $\mathbb{H}$, and now we see that there are actually three equivalent copies of that subalgebra, all of them representable in terms of the $\sigma$-matrices.


Clifford Algebras

Finally, today we'll do a little definitional work to help facilitate future conversations. $\mathbb{R}$, $\mathbb{C}$ and $\mathbb{H}$ are all examples of division algebras. By the Frobenius theorem, they are the \textit{only} finite-dimensional, associative division algebras over the real numbers. What's more, the basis elements that generate each of these three algebras satisfy a relation of the form

\begin{equation}\label{005:clifford} \hat{e}_{i} \cdot \hat{e}_{j} + \hat{e}_{j} \cdot \hat{e}_{i} = 2g_{ij}\mathbb{1},\end{equation}

where $g_{ij}$ is a symmetric, bilinear form: $+\delta_{ij}$ for the $\sigma$-matrices $\sigma_{1},\sigma_{2},\sigma_{3}$, and $-\delta_{ij}$ for the imaginary units of $\mathbb{C}$ and $\mathbb{H}$. For the matrices we've been considering here, we can compute $g_{ij}$ with a normalized version of the inner product discussed previously,

\begin{equation}\label{005:cliffordTr} \hat{e}_{i} \cdot \hat{e}_{j} + \hat{e}_{j} \cdot \hat{e}_{i} = 2\langle \hat{e}_{i},\hat{e}_{j}\rangle\mathbb{1},\end{equation}

where

$$\langle M,N \rangle = \frac{1}{2}\mathsf{Tr}(MN).$$
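For example, $\langle \sigma_{1},\sigma_{1}\rangle = \tfrac{1}{2}\mathsf{Tr}(\sigma_{1}^{2}) = 1$, while $\langle J, J\rangle = \tfrac{1}{2}\mathsf{Tr}(J^{2}) = -1$, so the same formula keeps track of both the positive signs attached to the $\sigma$-matrices and the negative signs attached to the quaternionic units.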

An algebra whose basis satisfies \eqref{005:clifford} is called a \textbf{\textsf{Clifford Algebra}}, and we will have much to say about Clifford algebras in the days to come.


We close this section by defining the commutator and anticommutator of linear operators, for later use and convenience. The left-hand side of \eqref{005:clifford} is the anticommutator of $\hat{e}_{i}$ and $\hat{e}_{j}$, and is traditionally denoted with braces:

$$\{ X, Y \} = X\cdot Y + Y\cdot X.$$

The associated antisymmetric product of $X$ and $Y$ is called the commutator,

$$[X,Y] = X\cdot Y - Y \cdot X.$$

Evidently $X$ and $Y$ commute when their commutator vanishes. Any algebra in which the commutator vanishes for all pairs of elements is referred to as abelian.
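For example, since $\sigma_{1}\sigma_{2} = i\sigma_{3}$ while $\sigma_{2}\sigma_{1} = -i\sigma_{3}$, you can check that $\{\sigma_{1},\sigma_{2}\} = 0$ but $[\sigma_{1},\sigma_{2}] = 2i\sigma_{3}$: the distinct $\sigma_{i}$, for $i = 1,2,3$, anticommute with one another, but they certainly do not commute.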
TL;DR

Vector spaces are a solid foundation on which to build more complicated structures. Allowing vectors to multiply affords one such structure: the algebra. The $\sigma$-matrices we explored last time, viewed as a vector space, become an algebra under matrix multiplication. From them we built the algebra of quaternions, $\mathbb{H}$, which in turn contains numerous subalgebras isomorphic to $\mathbb{C}$. All three of these finite-dimensional, associative division algebras are examples of Clifford Algebras. These structures repeat in spirit over and over again in mathematics, so it's worthwhile getting familiar with the archetypes.
