05 - Algebras
We define the concept of an algebra as an extension of a vector space and explore some examples before introducing the idea of an algebra representation.
The complex numbers can be thought of as a two-dimensional, real vector space, where $1$ and $i$ serve as basis vectors for $\mathbb{C}$. What makes $\mathbb{C}$ different from $\mathbb{R}^{2}$ is that we can multiply two complex numbers together to get a third,
$$z_{1}z_{2} \in \mathbb{C},\quad z_{1},z_{2}\in\mathbb{C}.$$
By now we’re so familiar with it that you might have forgotten this multiplication rule had to be imposed by hand. Indeed, it only works because we define the rules:
\begin{equation}\label{crules}1\times 1 = 1, \quad 1\times i = i,\quad i\times i = -1.\end{equation}
These rules promote the vector space $\mathbb{R}^{2}$ to an algebra, which is defined to be a vector space equipped with a bilinear$^{1}$ multiplication operation.
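As a concrete sketch (my own illustration, not part of the text), the multiplication rules above are enough to multiply arbitrary pairs in $\mathbb{R}^{2}$ by bilinearity:

```python
# A minimal sketch: complex multiplication implemented directly on R^2,
# using only bilinearity and the rules 1*1 = 1, 1*i = i, i*i = -1.

def cmul(z, w):
    """Multiply z = (a, b) and w = (c, d), viewed as a + bi and c + di."""
    a, b = z
    c, d = w
    # Expanding (a + bi)(c + di) by bilinearity and applying i*i = -1
    # gives (ac - bd) + (ad + bc)i.
    return (a * c - b * d, a * d + b * c)

print(cmul((0.0, 1.0), (0.0, 1.0)))  # i * i = (-1.0, 0.0)
```

The point is that nothing beyond the vector space structure and the three rules in \eqref{crules} is needed to define the product.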
The vector space $\mathbb{R}^{3}$ can also be made into an algebra via the antisymmetric cross product, which is typically defined in terms of the basis vectors $e_{x}$, $e_{y}$ and $e_{z}$:
$$e_{i}\times e_{j} = \epsilon_{ijk}e_{k},$$
where $\epsilon_{ijk}$ is the totally antisymmetric object with three indices, i.e.
$$\epsilon_{123} = \epsilon_{231} = \epsilon_{312} = 1,\quad \epsilon_{213} = \epsilon_{132} = \epsilon_{321} = -1,$$
and all other values vanish.
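As a quick numerical sketch (my own, with numpy assumed available), we can build $\epsilon_{ijk}$ from exactly the values listed above and recover the familiar cross product:

```python
import numpy as np

# A hedged sketch: the cross product on R^3 built from the Levi-Civita
# symbol, as in e_i x e_j = epsilon_{ijk} e_k (sum over k implied).

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # the cyclic permutations: 123, 231, 312
    eps[j, i, k] = -1.0  # the anticyclic permutations: 213, 321, 132

def cross(u, v):
    # (u x v)_k = epsilon_{ijk} u_i v_j, summing over i and j
    return np.einsum('ijk,i,j->k', eps, u, v)

e_x = np.array([1.0, 0.0, 0.0])
e_y = np.array([0.0, 1.0, 0.0])
print(cross(e_x, e_y))  # e_x x e_y = e_z
```

This agrees with numpy's built-in `np.cross` on arbitrary vectors.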
The main idea here is that if a vector space $V$ together with a binary operation $\star$ forms an algebra, then for each $M$ and $N$ in $V$, we can always find an $L$ in $V$ such that $M\star N = L.$
Importantly, the elements of an algebra need not be invertible under an operation like $\star$. In fact, $\star$ need not even be an associative operation. Note that if $\star$ is not associative, triple products such as
$$a\star b \star c$$
are ambiguous, so an explicitly bracketed notation like
$$(a\star b)\star c \quad\text{or}\quad a\star(b\star c)$$
may be more appropriate.
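The cross product on $\mathbb{R}^{3}$ is exactly such a non-associative product. A small numerical check (my own example, not from the text) shows that the two bracketings genuinely differ:

```python
import numpy as np

# The cross product is NOT associative: (a x b) x c and a x (b x c)
# can disagree. Here a = b = e_x and c = e_y.

a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 0.0, 0.0])
c = np.array([0.0, 1.0, 0.0])

left = np.cross(np.cross(a, b), c)   # (e_x x e_x) x e_y = 0 x e_y = 0
right = np.cross(a, np.cross(b, c))  # e_x x (e_x x e_y) = e_x x e_z = -e_y
print(left, right)
```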
The Algebra of Matrices
As we have seen, the complex matrices $\mathsf{End}(\mathbb{C}^{n})$ are a vector space. Together with the standard idea of matrix multiplication, $\mathsf{End}(\mathbb{C}^{n})$ is an algebra. Because matrix multiplication is an associative operation, we say $\mathsf{End}(\mathbb{C}^{n})$ is an associative algebra. The same is of course true for $\mathsf{End}(\mathbb{R}^{n})$.
If $A$ is an algebra, a subalgebra $B$ of $A$ is a linear subspace of $A$ that is closed under the algebra’s multiplication operation. When $A$ has an identity element, we also require $B$ to contain it, just as a subgroup shares the identity of its parent group.
Certain subsets of $\mathsf{End}(\mathbb{C}^{n})$ form subalgebras. For example, the product of any two diagonal matrices is itself diagonal, so the diagonal matrices form a subalgebra. Less obviously perhaps, the upper (or lower) triangular matrices also form a subalgebra of $\mathsf{End}(\mathbb{C}^{n})$.
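These closure properties are easy to verify numerically. A sketch (my own check, using random test matrices):

```python
import numpy as np

# Closure checks: products of upper-triangular matrices stay upper
# triangular, and products of diagonal matrices stay diagonal.

rng = np.random.default_rng(0)

A = np.triu(rng.standard_normal((4, 4)))  # random upper-triangular
B = np.triu(rng.standard_normal((4, 4)))
product = A @ B
# np.triu zeroes the strictly-lower part; equality means nothing
# below the diagonal was generated by the product.
print(np.allclose(product, np.triu(product)))  # True

D1 = np.diag(rng.standard_normal(4))          # random diagonal
D2 = np.diag(rng.standard_normal(4))
print(np.allclose(D1 @ D2, np.diag(np.diag(D1 @ D2))))  # True
```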
Matrices and Complex Structure
Consider the two real matrices
$$\mathbb{1} = \left(\begin{array}{cc}1&0\\0&1\end{array}\right),\quad J = \left(\begin{array}{rr}0&1\\-1&0\end{array}\right).$$
Evidently we have
$$\mathbb{1}^{2}=\mathbb{1} \quad \mathrm{and}\quad \mathbb{1} J = J.$$
What’s curious is the matrix $J^{2}$. As you can easily check by explicit computation,
$$J^{2} = -\mathbb{1}.$$
Thus $J$ and $\mathbb{1}$ satisfy the same multiplication rules as $1$ and $i$, \eqref{crules}.
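The "explicit computation" is short enough to do by machine as well. A sketch of the check:

```python
import numpy as np

# Verify that 1 (the identity) and J obey the same multiplication
# table as 1 and i: 1*1 = 1, 1*i = i, i*i = -1.

one = np.eye(2)
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

print(np.allclose(one @ one, one))  # 1*1 = 1
print(np.allclose(one @ J, J))      # 1*i = i
print(np.allclose(J @ J, -one))     # i*i = -1
```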
It’s also not hard to see that $\mathbb{1}$ and $J$ are linearly independent as vectors in $\mathsf{End}(\mathbb{R}^{2})$, so their real span forms an algebra that behaves exactly like $\mathbb{C}$.
We say that $\mathbb{C}$ and $C = \mathsf{span}_{\mathbb{R}}\left\{\mathbb{1},J\right\}$ are isomorphic as algebras. To see this, we employ the linear map
$$\phi : \mathbb{C}\rightarrow C,$$
$$\phi : z \mapsto \mathsf{Re}(z)\,\mathbb{1} + \mathsf{Im}(z)\,J.$$
Because $\mathbb{1}$ and $J$ satisfy the same rules as \eqref{crules}, $\phi$ is a homomorphism of associative$^{2}$ algebras, i.e.
$$\phi(zw) = \phi(z)\phi(w).$$
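A numerical sanity check of this homomorphism property (my own sketch, with Python's built-in complex numbers standing in for $\mathbb{C}$):

```python
import numpy as np

# Check that phi(z) = Re(z)*1 + Im(z)*J is multiplicative:
# phi(z w) = phi(z) phi(w), with matrix multiplication on the right.

one = np.eye(2)
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def phi(z):
    return z.real * one + z.imag * J

z, w = 2 + 3j, -1 + 0.5j
print(np.allclose(phi(z * w), phi(z) @ phi(w)))  # True
```

The check works for any pair $z, w$, since expanding $\phi(z)\phi(w)$ only ever uses the rules $\mathbb{1}^{2} = \mathbb{1}$, $\mathbb{1}J = J$ and $J^{2} = -\mathbb{1}$.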
The kernel of $\phi$ is trivial (it contains only $0$), so $\phi$ is invertible, and the inverse map
$$\phi^{-1}: C \rightarrow \mathbb{C},$$
is also a homomorphism of algebras. In other words, $\phi$ is an isomorphism of algebras. In set-theoretic language, we would also call it a bijection.
Clearly, $C$ is a subalgebra of $\mathsf{End}(\mathbb{R}^{2})$.
Representations of Algebras
Let $A$ be an algebra and $V$ a vector space. If $\phi$ is an algebra homomorphism from $A$ to $\mathsf{End}(V)$, then $V$ is said to be a module for $A$, and $\phi$ is the associated representation of $A$ on the module $V$.
For the example of the algebra $C$ above, $\phi$ can be interpreted as a representation of the algebra $\mathbb{C}$ in $\mathsf{End}(\mathbb{R}^{2})$, with the vector space $\mathbb{R}^{2}$ serving as its module.
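Concretely, the module structure just means a complex number $z$ acts on real vectors through the matrix $\phi(z)$. A sketch (my own illustration):

```python
import numpy as np

# The module structure: z in C acts on the real vector space R^2
# through the matrix phi(z). In particular phi(i) = J.

one = np.eye(2)
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def phi(z):
    return z.real * one + z.imag * J

v = np.array([1.0, 0.0])           # a vector in the module R^2
print(phi(1j) @ v)                 # "multiplication by i" rotates v by 90 degrees
print(phi(1j) @ (phi(1j) @ v))     # acting twice gives J^2 v = -v
```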
$^{1}:$ A bilinear map on some vector space $V$ has the form:
$$f : V\times V \rightarrow V,$$
and is linear in each of its two arguments. Of course, you can also have a bilinear map involving three vector spaces $A$, $B$ and $C$, defined similarly,
$$f: A\times B \rightarrow C.$$
$^{2}$: A homomorphism of nonassociative algebras can be defined similarly. We shall see some examples in later lectures.