Ahoy! This page uses MathJax to typeset math symbols. If you’re seeing code rather than nice, typeset symbols, you may need to hit “refresh” to render things cleanly.

 
 

Applied Representation Theory : 002

Matrix Constructions

We construct each of the normed division algebras A using matrices. We also review some basic properties of matrices, and discuss a few important collections of them.

Matrices

A map between vector spaces is called linear if it preserves vector addition and scalar multiplication. Essentially, a linear map $f$ behaves as

$$f(a\cdot(v+w))=af(v)+af(w),$$

for any scalar $a$ and vectors $v$ and $w$.

In the first set of exercises, we considered the space of linear maps from $\mathbb{C}^{n}$ to itself, $\mathsf{Hom}_{n,n}\,\mathbb{C}$. These turned out to be the $n\times n$ matrices with complex components. You proved that $\mathsf{Hom}_{n,n}\,\mathbb{C}$ is a vector space, whose $\mathbb{C}$-dimension is $n^{2}$.

The subset of invertible $n\times n$ matrices is singled out by one restriction, namely:

$$\det M \neq 0.$$

This subset is useful and as such has its own name$^{1}$,

$$\mathsf{GL}_{n}\mathbb{C}=\left\{M \in \mathsf{Hom}_{n,n}\mathbb{C} \;\Big|\; \det M \neq 0\right\}.$$

From linear algebra, you might recall that the determinant of a matrix is an alternating multilinear function of its columns.  For a $2\times 2$ matrix,

$$M = \left(\begin{array}{cc}a & b \\ c & d\end{array}\right)\Rightarrow \det M = ad - bc.$$

Similarly, for a $3\times 3$ matrix,

$$M = \left(\begin{array}{ccc}a & b & c \\ d & e & f \\ g & h & j\end{array}\right)\Rightarrow \det M = a(ej - hf) - b(dj - gf) + c(dh - eg),$$

and so on.
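These cofactor formulas are easy to spot-check numerically. Here is a quick sketch using numpy; the particular component values are arbitrary examples:

```python
import numpy as np

# Arbitrary example values for the components named above.
a, b, c, d, e, f, g, h, j = 2.0, 1.0, 0.0, -1.0, 3.0, 4.0, 5.0, 0.0, 1.0
M = np.array([[a, b, c],
              [d, e, f],
              [g, h, j]])

# Cofactor expansion along the first row, as in the formula above.
det_by_cofactors = a*(e*j - h*f) - b*(d*j - g*f) + c*(d*h - e*g)

print(np.isclose(det_by_cofactors, np.linalg.det(M)))  # True
```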

You might also recall that the determinant of a product of matrices is the product of determinants:

$$ \det(MNP) =\det M \det N \det P. $$
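As a numerical sanity check of this multiplicative property, a sketch using numpy with random example matrices:

```python
import numpy as np

# Three random 3x3 example matrices.
rng = np.random.default_rng(0)
M, N, P = (rng.standard_normal((3, 3)) for _ in range(3))

# The determinant of the product equals the product of determinants.
lhs = np.linalg.det(M @ N @ P)
rhs = np.linalg.det(M) * np.linalg.det(N) * np.linalg.det(P)
print(np.isclose(lhs, rhs))  # True
```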

From this we can immediately see that the subset of invertible matrices is closed under matrix multiplication. Because matrix multiplication itself is an associative operation:

$$MNP=(MN)P=M(NP),$$

and the identity matrix $\mathsf{1}$ is trivially invertible, we have an important result (that you will prove in the exercises):

$\mathsf{GL}_{n}\,\mathbb{C}$ with matrix multiplication forms a group.

The general linear group, $\mathsf{GL}_{n}\,\mathbb{C}$, is not a vector space$^{2}$, but as a subset of $\mathsf{Hom}_{n,n}\mathbb{C}$ it is intimately related to one. It also has a number of notable subgroups - that is, subsets that are themselves groups under the same multiplication operation. In particular, the special linear group is defined to be the subset of invertible matrices with unit determinant:

$$ \mathsf{SL}_{n}\mathbb{C}=\left\{M\in \mathsf{GL}_{n}\mathbb{C}\;\Big|\;\det M=1\right\}. $$

$\mathsf{SL}_{n}\mathbb{C}$ will feature prominently in future discussions.

Matrix Facts

A few technical reminders about matrices are in order. First, the trace of a matrix is the sum of its diagonal components. Additionally, there is the cyclic property of the trace:

$$ \mathsf{Tr}(MNP)=\mathsf{Tr}(PMN)=\mathsf{Tr}(NPM).$$
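The cyclic property is also easy to confirm numerically; a sketch using numpy with random example matrices:

```python
import numpy as np

# Three random 4x4 example matrices.
rng = np.random.default_rng(1)
M, N, P = (rng.standard_normal((4, 4)) for _ in range(3))

# All three cyclic orderings give the same trace.
t1 = np.trace(M @ N @ P)
t2 = np.trace(P @ M @ N)
t3 = np.trace(N @ P @ M)
print(np.isclose(t1, t2) and np.isclose(t2, t3))  # True
```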

Thus if a matrix $M$ is diagonalizable by a similarity transformation $S$,

$$ M=S\Lambda S^{-1}, $$

the trace always amounts to the sum of the eigenvalues:

$$ \mathsf{Tr}\,M=\mathsf{Tr}(S\Lambda S^{-1})=\mathsf{Tr}\,\Lambda=\sum_{i} \Lambda_{ii}=\sum_{i}\lambda_{i}, $$

where the components $\Lambda_{ij}$ are simply $\lambda_{i}\delta_{ij}$.
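This trace-equals-sum-of-eigenvalues fact can be checked numerically. In this sketch a symmetric example matrix is used, so diagonalizability (with real eigenvalues) is guaranteed:

```python
import numpy as np

# Build a random symmetric 4x4 example matrix.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
M = A + A.T   # symmetric, hence diagonalizable with real eigenvalues

# The trace equals the sum of the eigenvalues.
eigenvalues = np.linalg.eigvalsh(M)
print(np.isclose(np.trace(M), eigenvalues.sum()))  # True
```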

Functions of matrices are defined in terms of their Taylor series. Notably,

$$ e^{M}= \sum_{n=0}^{\infty} \frac{1}{n!}M^{n}.$$

Amusingly,

$$ \det e^{M}=e^{\mathsf{Tr} M}, $$

which you can verify as an exercise in matrix algebra.
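One way to check this identity numerically is to sum the Taylor series directly. In this sketch, truncating at 30 terms is an arbitrary choice that is plenty for a small example matrix:

```python
import numpy as np

def expm_series(M, terms=30):
    """Partial sum of the Taylor series sum_n M^n / n!."""
    result = np.zeros_like(M, dtype=complex)
    power = np.eye(M.shape[0], dtype=complex)  # M^0 is the identity
    factorial = 1.0
    for n in range(terms):
        result += power / factorial
        power = power @ M
        factorial *= n + 1
    return result

# An arbitrary small example matrix.
M = np.array([[0.0, 1.0], [-2.0, -3.0]])
lhs = np.linalg.det(expm_series(M))
rhs = np.exp(np.trace(M))
print(np.isclose(lhs, rhs))  # True
```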

The Physicists’ $\sigma$ Matrices

The physicists' $\sigma$ matrices are given by

\begin{equation}\label{sigmas}\sigma_{1} = \left(\begin{array}{cc} 0 & 1 \\ 1 & 0\end{array}\right),\quad \sigma_{2} = \left(\begin{array}{rr} 0 & -i \\ i & 0\end{array}\right),\quad \sigma_{3} = \left(\begin{array}{rr} 1 & 0 \\ 0 & -1\end{array}\right).\end{equation}

Each of these matrices is traceless and Hermitian:

$$\sigma_{i}^{\dagger} = \sigma_{i}.$$

They show up in all sorts of applications.
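Translating the $\sigma$ matrices into numpy makes these properties easy to verify; the sketch below also checks that each $\sigma$ squares to the identity:

```python
import numpy as np

# The three sigma matrices, as defined above.
sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)
sigma2 = np.array([[0, -1j], [1j, 0]])
sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)

for sigma in (sigma1, sigma2, sigma3):
    assert np.isclose(np.trace(sigma), 0)          # traceless
    assert np.allclose(sigma.conj().T, sigma)      # Hermitian
    assert np.allclose(sigma @ sigma, np.eye(2))   # squares to the identity
print("all sigma checks pass")
```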

Let $\mathbb{1}$ be the identity matrix and define the matrices

$$ I=i\sigma_{1},\quad J=i\sigma_{2},\quad K=i\sigma_{3}.$$
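In matrix form, the quaternionic relations can be checked directly. One caveat worth flagging: with the sign convention $I=i\sigma_{1}$, $J=i\sigma_{2}$, $K=i\sigma_{3}$, one finds $IJ=-K$ rather than $+K$; either sign convention yields an algebra isomorphic to $\mathbb{H}$. A sketch using numpy:

```python
import numpy as np

sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)
sigma2 = np.array([[0, -1j], [1j, 0]])
sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)
I, J, K = 1j * sigma1, 1j * sigma2, 1j * sigma3
one = np.eye(2)

# Each imaginary unit squares to minus the identity.
for U in (I, J, K):
    assert np.allclose(U @ U, -one)

# With this sign convention IJ = -K and JI = +K, so I and J anticommute.
assert np.allclose(I @ J, -K)
assert np.allclose(J @ I, K)
print("quaternion relations verified")
```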

While trivial, it's hopefully clear that

$$ \mathbb{R}\cong\left\{c\,\mathbb{1} \;\Big|\; c\in \mathbb{R}\right\}.$$

Somewhat less trivially, you can show that

\begin{equation}\label{c}\mathbb{C}\cong \left\{ a\,\mathbb{1} + bJ\;\Big|\; a,b\in\mathbb{R}\right\}.\end{equation}

Similarly, you can show

\begin{equation}\label{h}\mathbb{H}\cong \left\{ a\,\mathbb{1} + bI + cJ + dK\;\Big|\; a,b,c,d\in\mathbb{R}\right\}.\end{equation}
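A numerical spot-check of the first of these isomorphisms: complex multiplication should agree with matrix multiplication under the identification $a+bi \leftrightarrow a\,\mathbb{1}+bJ$. A sketch using numpy, with arbitrary example values:

```python
import numpy as np

J = 1j * np.array([[0, -1j], [1j, 0]])   # i * sigma_2 = [[0, 1], [-1, 0]]
one = np.eye(2)

def to_matrix(z):
    """Represent the complex number a + bi as a*1 + b*J."""
    return z.real * one + z.imag * J

# Arbitrary example values.
z, w = 2 + 3j, -1 + 0.5j
# Matrix multiplication reproduces complex multiplication.
print(np.allclose(to_matrix(z) @ to_matrix(w), to_matrix(z * w)))  # True
```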

Complex and quaternionic conjugation require an operation that takes

$$ I\rightarrow -I,\quad J\rightarrow -J,\quad K\rightarrow -K.$$

An operation consistent with all three $\sigma$ matrices is the transpose together with component-wise complex conjugation:

$$I\rightarrow \overline{I}^{\mathsf T}=-I\,,\quad J\rightarrow\overline{J}^{\mathsf T}=-J\,,\quad K\rightarrow\overline{K}^{\mathsf T}=-K.$$

This is precisely the same operation that mapped $\mathsf{Hom}_{n,1}\mathbb{A}$ to $\mathsf{Hom}_{1,n}\mathbb{A}$.

Thus for the involution in $\mathbb{C}$ we may write:

$$z\rightarrow \overline{z} \quad\text{as}\quad Z\rightarrow Z^{\dagger}=\overline{Z}^{\mathsf T},$$

in terms of matrices. Here $Z$ is in the real span$^{3}$ of the two matrices $\mathbb{1}$ and $J$:

$$Z\in\mathsf{span}_{\mathbb{R}}\left\{\mathbb{1},J\right\}\cong\mathbb{C}.$$

Therefore if $Z=a\mathbb{1}+bJ$, we have

$$Z^{\dagger}Z=(a^{2}+b^{2})\,\mathbb{1}.$$
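This identity is easy to verify numerically for example values of $a$ and $b$; a sketch using numpy:

```python
import numpy as np

J = 1j * np.array([[0, -1j], [1j, 0]])   # i * sigma_2
one = np.eye(2)

a, b = 3.0, -2.0                          # arbitrary example values
Z = a * one + b * J

# Z-dagger Z is (a^2 + b^2) times the identity.
print(np.allclose(Z.conj().T @ Z, (a**2 + b**2) * one))  # True
```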

As for the involution in $\mathbb{H}$, we may write:

$$q\rightarrow \overline{q} \quad\text{as}\quad Q\rightarrow Q^{\dagger}=\overline{Q}^{\mathsf T},$$

where $Q$ is in the real span of the four matrices $\mathbb{1}, I, J$, and $K$:

$$ Q\in\mathsf{span}_{\mathbb{R}} \left\{\mathbb{1},I,J,K\right\}.$$

Thus, for $Q=a\mathbb{1}+bI+cJ+dK$, we have

$$Q^{\dagger}Q=(a^{2}+b^{2}+c^{2}+d^{2})\mathbb{1}.$$

These agree with the explicit computations:

\begin{equation}\label{detc}\det Z = \det\left(\begin{array}{rr} a & b \\ - b & a\end{array}\right) = a^{2} + b^{2} \end{equation}

\begin{equation}\label{detq}\det Q = \det\left(\begin{array}{cc} a + id & c + ib \\ -c + ib & a - id\end{array}\right) = a^{2} + b^{2} + c^{2} + d^{2}.\end{equation}
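Both determinant computations can be spot-checked numerically. Here is the quaternionic case, sketched in numpy with arbitrary example values:

```python
import numpy as np

sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)
sigma2 = np.array([[0, -1j], [1j, 0]])
sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)
I, J, K = 1j * sigma1, 1j * sigma2, 1j * sigma3
one = np.eye(2)

a, b, c, d = 1.0, 2.0, -1.0, 0.5          # arbitrary example values
Q = a * one + b * I + c * J + d * K

# det Q equals a^2 + b^2 + c^2 + d^2.
print(np.isclose(np.linalg.det(Q), a**2 + b**2 + c**2 + d**2))  # True
```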

Exercises

2.1 : Dimensionality

Argue that the nonzero subset of $\mathbb{R}^{n}$:

$$\mathbb{R}^{n\times}=\left\{x \in\mathbb{R}^{n}\;\Big|\; x \neq 0\right\},$$

is not a vector space. In what sense does $\mathbb{R}^{n}$ have dimension $n$? How is this space related to $\mathsf{GL}_{n}\,\mathbb{C}$? (Hint: think local vs global.)

2.2 : Matrices of Quaternions

Because the components do not commute, the determinant of a matrix in $\mathsf{Hom}_{n,n}\,\mathbb{H}$ is tricky to define. For an arbitrary matrix $q\in \mathsf{Hom}_{n,n}\,\mathbb{H}$, represent $q$ as a member of $\mathsf{Hom}_{2n,2n}\,\mathbb{C}$. Then take a determinant. Use that definition to verify that $q$ is invertible if and only if $\det q\neq 0$.

2.3 : Verifying Group Axioms

Look up the axioms for a group and verify explicitly that $\mathsf{GL}_{n}\mathbb{C}$ is a group. Hence verify that $\mathsf{SL}_{n}\mathbb{C}$ is a subgroup of $\mathsf{GL}_{n}\mathbb{C}$.

2.4 : Vector Spaces vs Groups

Demonstrate that $\mathfrak{sl}_{2}=\mathsf{span}_\mathbb{C} \left\{\sigma_{1},\sigma_{2},\sigma_{3}\right\}$ is a complex vector space. Why is $\mathfrak{sl}_{2}$ a vector space but not $\mathsf{SL}_{2}\mathbb{C}$? 

2.5 : Matrix Isomorphisms

Prove that the vector space isomorphisms (2) and (3) extend to the full algebra structure by identifying algebra multiplication with matrix multiplication.

2.6 : Magnitudes

Verify (5) by explicit computation, and verify that 

$$\det (Q^{\dagger}Q)=(\det Q)^{2}=|Q|^{4}.$$

Hence argue the same for (4). What choice in our definitions forced $\det Q$ to be a positive, real number? What happens when we relax that choice? (Hint: See Exercise 2.4.)


$^{1}$ The GL here stands for “general” and “linear”. The “$n$” gives the size of the matrix, alternatively the dimension of the vector space upon which it acts. The word general implicitly points to a specialized linear subset, which we shall visit soon enough.

$^{2}$ You might read that $\mathsf{GL}_{n}\,\mathbb{C}$ has dimension $n^{2}$. This is meant as its (topological) dimension as a smooth manifold, i.e. the number of complex parameters that can be used to characterize the space.

$^{3}$ The span of two vectors is the set of all linear combinations of those two vectors. The same language applies for n vectors. The span of m linearly independent vectors is an $m$-dimensional vector space. We can explicitly distinguish between $\mathbb{R}$ and $\mathbb{C}$ spans where appropriate.

©2021 The Pasayten Institute cc by-sa-4.0

Previous

Notes 01 : Three Basic Models

Next

Notes 03 : The Euler Relation