07 - Lie Algebras
Expanding on our discussion of the commutator and the exponential map, we introduce the concept of a Lie algebra and relate it to Lie groups.
A Summary of the Course So Far
So far in this course we’ve discussed two kinds of algebraic objects: groups and algebras. Groups are easier to define, but far more rigid in scope. For example, all group elements must have an inverse. An algebra is just a vector space equipped with a little extra structure.
Being vector spaces, algebras have a zero vector, and scalar multiplication by zero takes any vector to that zero vector.
Because of this fact, zero presents an obstruction to defining a group: in many of the contexts we have explored, it has no multiplicative inverse.
In all of these contexts the saving grace has been the exponential map. The exponential of the zero vector is typically a multiplicative identity. As we have seen, the image of the vector space of endomorphisms of some vector space $V$ under the exponential map lies inside the general linear group, $\mathsf{GL}(V)$.
This fact has many abstractions and refinements, and will characterize much of the (local) structure of Lie groups.
Lie Algebras
Let $V$ be a finite dimensional vector space and let $\mathsf{End}(V)$ be its associated vector space of matrices. $\mathsf{End}(V)$ is of course an associative algebra under the operation of matrix multiplication. We can make $\mathsf{End}(V)$ into a nonassociative algebra by considering the commutator as the vector multiplication instead.
A Lie algebra is a vector space equipped with a bilinear bracket $[\cdot,\cdot]$ which satisfies two requirements. First, it is alternating:
$$[A,A] = 0,$$
for all $A$ in the algebra. Second, it satisfies the so-called Jacobi Identity:
$$[A,[B,C]] + [B,[C,A]] + [C,[A,B]] = 0,$$
for all $A$, $B$, and $C$ in the algebra.
The Jacobi Identity plays a structural role analogous to that of associativity.
Any bilinear bracket that satisfies these conditions is called a Lie bracket.
Claim: The commutator of matrices is a Lie bracket.
Proof. Exercise.
A vector space with a Lie bracket is called a Lie algebra. Evidently $\mathsf{End}(\mathbb{F}^{n})$ is a Lie algebra under commutators for $\mathbb{F}$ equal to $\mathbb{R}$ or $\mathbb{C}$.
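If you want to convince yourself numerically before attempting the exercise, here is a minimal sketch (using NumPy purely for illustration; the helper name `comm` is an assumption) that checks both defining properties on random real matrices.

```python
# Numerical sanity check (not a proof): the matrix commutator is
# alternating and satisfies the Jacobi identity for random matrices.
import numpy as np

def comm(A, B):
    """Commutator [A, B] = AB - BA."""
    return A @ B - B @ A

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))

# Alternating: [A, A] = 0.
print(np.allclose(comm(A, A), 0))  # True

# Jacobi identity: [A,[B,C]] + [B,[C,A]] + [C,[A,B]] = 0.
jacobi = comm(A, comm(B, C)) + comm(B, comm(C, A)) + comm(C, comm(A, B))
print(np.allclose(jacobi, 0))      # True
```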
The Exponential Map
Let $\mathfrak{g}$ be a Lie algebra built out of matrices.
Proposition: The image of the exponential map of $\mathfrak{g}$ is a Lie group.
Proof. The image of the zero vector in $\mathfrak{g}$ serves as the identity matrix. The additive inverse of any element $g$ in $\mathfrak{g}$ exponentiates to a matrix $e^{-g}$. By bilinearity $[g,-g] = -[g,g] = 0$, as the bracket is alternating. Following the logic from our prior discussion of the Baker-Campbell-Hausdorff formula, all the bracket terms in the product vanish, so
$$e^{g}e^{-g} = e^{g - g + \frac{1}{2}[g,-g] + \cdots} = e^{0} = \mathbb{1}.$$
Finally we must show that the multiplication of elements is associative, $$(e^{g}e^{h})e^{j} = e^{g}(e^{h}e^{j}),$$
but this is guaranteed by the associativity of matrix multiplication$^{1}$. Closure also follows from the Baker-Campbell-Hausdorff formula: the product $e^{g}e^{h}$ is again the exponential of an element built out of $g$, $h$, and their nested brackets. Hence the image of $\mathfrak{g}$ is a group. Since $\mathfrak{g}$ is a vector space, its image is a continuous space, and therefore a Lie group. $\square$
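Here is a minimal numerical illustration of the inverse property, assuming SciPy’s matrix exponential `expm`:

```python
# For a random matrix g, e^{g} and e^{-g} are mutually inverse.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
g = rng.standard_normal((3, 3))

print(np.allclose(expm(g) @ expm(-g), np.eye(3)))  # True
```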
Lie groups and their Lie algebra
Therefore, to any Lie algebra there is an associated Lie group, and vice versa. However, this relationship is not one-to-one. Many Lie groups share the same Lie algebra. Essentially, the space of infinitesimal group elements near the identity is identified with the Lie algebra of a group.
For an example of how two Lie groups represent the same Lie algebra, consider the orthogonal group of real $n\times n$ matrices:
$$\mathsf{O}(n) = \left\{ x \in \mathsf{GL}(\mathbb{R}^{n}) \;\Big|\; x^{\sf T} = x^{-1}\right\}.$$
The defining condition on such matrices $x$ gives
$$x^{\sf T} x = \mathbb{1}.$$
Since the transpose operation does not affect the determinant, and the determinant of a product of matrices is the product of the determinants, we have
$$(\det x)^{2} = 1,$$
so $\det x = \pm 1$.
We can define a subgroup of $\mathsf{O}(n)$ with positive determinant, the so-called special orthogonal group,
$$\mathsf{SO}(n) = \left\{ x \in \mathsf{O}(n) \;\Big|\; \det x = 1\right\}.$$
It’s not hard to see that both $\mathsf{O}(n)$ and its subgroup $\mathsf{SO}(n)$ share the same Lie algebra.
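Concretely, the shared Lie algebra consists of the antisymmetric matrices, and the exponential of an antisymmetric matrix always lands in the $\det = +1$ component. A small sketch, assuming NumPy and SciPy:

```python
# An antisymmetric matrix X (X^T = -X) exponentiates to an orthogonal
# matrix with determinant +1, i.e. an element of SO(n). The exponential
# map never reaches the det = -1 component of O(n).
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3))
X = M - M.T                               # antisymmetric: X^T = -X

x = expm(X)
print(np.allclose(x.T @ x, np.eye(3)))    # True: x is orthogonal
print(np.isclose(np.linalg.det(x), 1.0))  # True: det x = +1
```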
$^{1}$: It is the Jacobi identity of the Lie bracket that affords associativity of the image of the exponential in the general case.
06 - The Commutator
We introduce the concept of a commutator of matrices and explore its implications in the study of matrix groups.
The Commutator
In general, matrices do not commute. We can quantify this failure to commute with the commutator:
$$[A,B] = AB - BA,$$
which is an antisymmetric, bilinear map on a chosen set of matrices, be it $\mathsf{Hom}(\mathbb{C}^{n})$, $\mathsf{Hom}(\mathbb{R}^{n})$, or perhaps a subspace thereof.
A potentially silly example of such a subspace might be the diagonal matrices, whose commutator vanishes. Evidently the product of two diagonal matrices is also diagonal, so they form their own subalgebra under matrix multiplication. Because the commutator vanishes for all members of this subalgebra, we call it an abelian algebra.
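For a concrete illustration (a minimal NumPy sketch, with matrices chosen purely for demonstration), generic matrices fail to commute while diagonal matrices always do:

```python
# The commutator [A, B] = AB - BA is generically nonzero,
# but vanishes identically on the diagonal matrices.
import numpy as np

def comm(A, B):
    return A @ B - B @ A

A = np.array([[0., 1.],
              [0., 0.]])
B = np.array([[0., 0.],
              [1., 0.]])
print(comm(A, B))                    # [[ 1.  0.]
                                     #  [ 0. -1.]]  -- nonzero

D1 = np.diag([1., 2.])
D2 = np.diag([3., 4.])
print(np.allclose(comm(D1, D2), 0))  # True: diagonal matrices commute
```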
The Commutator and the Exponential Map
Let $V$ be a finite dimensional vector space, like $\mathbb{C}^{n}$, so that $\mathsf{End}(V)$ can be modeled by matrices, say $\mathsf{Hom}(\mathbb{C}^{n})$. In our study of the exponential map, we saw that any element $g_{a}$ of the group $\mathsf{GL}(V)$ can be written as an exponential,
$$g_{a} = e^{M_{a}},\quad M_{a}\in \mathsf{End}(V).$$
Let us consider the matrix product of two such elements $g_{a}g_{b}$.
$$g_{a}g_{b} =e^{M_{a}}e^{M_{b}} = \sum_{m,n = 0}^{\infty} \frac{1}{m!n!}M_{a}^{m}M_{b}^{n}.$$
Because in general $[M_{a},M_{b}]\neq 0$, such a product of infinite sums$^{1}$ will always have terms with $M_{a}$ to the left and $M_{b}$ to the right. For example,
$$g_{a}g_{b} = \mathbb{1} + M_{a} + M_{b} +\frac{1}{2}M_{a}^{2}+ M_{a}M_{b} +\frac{1}{2}M_{b}^{2}\cdots.$$
By exchanging $a$ and $b$, we can write down the product in the other order,
$$g_{b}g_{a} = \mathbb{1} + M_{a} + M_{b} +\frac{1}{2}M_{a}^{2}+ M_{b}M_{a} +\frac{1}{2}M_{b}^{2}\cdots.$$
Thus the commutator of $g_{a}$ and $g_{b}$ is given by
$$[g_{a},g_{b}] = [M_{a},M_{b}] + \frac{1}{2!}\left( [M_{a}^{2},M_{b}] - [M_{b}^{2},M_{a}]\right) + \cdots,$$
so that, in particular, the commutator $[g_{a},g_{b}]$ depends on the commutator $[M_{a},M_{b}]$.
A more general statement of this fact, often referred to as the Baker-Campbell-Hausdorff formula, gives an explicit form of the product
\begin{equation}\label{bch}e^{A}e^{B} = e^{A + B + \frac{1}{2}[A,B] + \frac{1}{12}[A,[A,B]] - \frac{1}{12}[B,[A,B]] + \cdots }.\end{equation}
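As a sanity check, the two sides of the truncated formula can be compared numerically for matrices of small norm, where the dropped higher-order terms are negligible. A sketch, assuming NumPy and SciPy:

```python
# Compare e^A e^B against the exponential of the truncated BCH series
# A + B + [A,B]/2 + [A,[A,B]]/12 - [B,[A,B]]/12 for small-norm matrices.
import numpy as np
from scipy.linalg import expm

def comm(A, B):
    return A @ B - B @ A

rng = np.random.default_rng(3)
A = 0.05 * rng.standard_normal((3, 3))
B = 0.05 * rng.standard_normal((3, 3))

lhs = expm(A) @ expm(B)
Z = (A + B + 0.5 * comm(A, B)
     + comm(A, comm(A, B)) / 12
     - comm(B, comm(A, B)) / 12)
rhs = expm(Z)

print(np.max(np.abs(lhs - rhs)))  # tiny: set by the dropped higher-order terms
```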
We shall not prove this fact here, but rather motivate it through the proof of an easier Lemma.
Lemma: Let $A$ and $B$ be finite dimensional matrices and let $t$ be a formal variable such that $\frac{d}{dt}e^{At} = Ae^{At}$. Then we have the power series expansion in $t$
$$e^{At}Be^{-At} = B + [A,B]t + \frac{1}{2!}[A,[A,B]]t^{2} + \frac{1}{3!}[A,[A,[A,B]]]t^{3} + \cdots$$
Proof. We prove by induction. Let
$$f(t) = e^{At}Be^{-At},$$
and let
$$f^{(n)}(t) = \frac{d^{n}f(t)}{dt^{n}}.$$
Suppose that
$$f^{(n)}(t) = [A,f^{(n-1)}(t)].$$
Taking the derivative of the right hand side,
$$\frac{d}{dt}\left(Af^{(n-1)}(t) - f^{(n-1)}(t)A\right) = Af^{(n)}(t) - f^{(n)}(t)A,$$
since $A$ is independent of the formal parameter $t$. Thus
$$f^{(n)}(t) = [A,f^{(n-1)}(t)] \Rightarrow f^{(n+1)}(t) = [A,f^{(n)}(t)].$$
In particular this holds for $t=0$.
Next observe that
$$\frac{d}{dt}(e^{At}Be^{-At}) = Ae^{At}Be^{-At} + e^{At}B(-A)e^{-At},$$
so that at $t=0$,
$$\frac{d}{dt}(e^{At}Be^{-At})\Big|_{t=0} = [A,B].$$
The claim, that the coefficient of $t^{n}/n!$ in the formal series is the $n$-fold nested commutator of $A$ with $B$, namely $f^{(n)}(0)$, then follows by induction. $\square$
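The Lemma is also easy to check numerically: for a small value of $t$, the conjugation $e^{At}Be^{-At}$ agrees with the nested-commutator series truncated after a few terms. A minimal sketch, assuming NumPy and SciPy:

```python
# Compare e^{At} B e^{-At} with the series
# B + [A,B] t + [A,[A,B]] t^2/2! + ... truncated at order t^5.
import math
import numpy as np
from scipy.linalg import expm

def comm(A, B):
    return A @ B - B @ A

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
t = 0.01

lhs = expm(A * t) @ B @ expm(-A * t)

series = B.copy()
term = B
for n in range(1, 6):
    term = comm(A, term)                       # n-fold nested commutator
    series = series + term * t**n / math.factorial(n)

print(np.max(np.abs(lhs - series)))  # tiny: of order t^6
```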
$^{1}$: Convergence of such sums is not assumed. Often they are just written as formal sums. Sufficiently close to the identity these sums do converge.
05 - Algebras
We define the concept of an algebra as an extension of a vector space and explore some examples before introducing the idea of an algebra representation.
The complex numbers can be thought of as a two-dimensional, real vector space, where $1$ and $i$ represent basis vectors for $\mathbb{C}$. What makes $\mathbb{C}$ different from $\mathbb{R}^{2}$ is that we can multiply two complex numbers together to get a third,
$$z_{1}z_{2} \in \mathbb{C},\quad z_{1},z_{2}\in\mathbb{C}.$$
By now we’re so familiar with it that you might have forgotten this multiplication rule had to be imposed by hand. Indeed, it only works because we define the rules:
\begin{equation}\label{crules}1\times 1 = 1, \quad 1\times i = i,\quad i\times i = -1.\end{equation}
These rules promote the vector space $\mathbb{R}^{2}$ into an algebra, which is defined to be a vector space equipped with a bilinear$^{1}$ multiplication operation.
The vector space $\mathbb{R}^{3}$ can also be made into an algebra via the antisymmetric cross-product, which is typically defined in terms of the basis vectors $e_{x}$, $e_{y}$, and $e_{z}$:
$$e_{i}\times e_{j} = \epsilon_{ijk}e_{k},$$
where $\epsilon_{ijk}$ is the totally antisymmetric object with three indices, i.e.
$$\epsilon_{123} = \epsilon_{231} = \epsilon_{312} = 1,\quad \epsilon_{213} = \epsilon_{132} = \epsilon_{321} = -1,$$
and all other values vanish.
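As a quick check that this definition reproduces the familiar cross product, here is a small sketch (assuming NumPy; the helper name `cross` is just for illustration) that contracts the Levi-Civita symbol against two vectors:

```python
# The cross product defined through epsilon_{ijk} agrees with np.cross.
import numpy as np

# Totally antisymmetric epsilon_{ijk} on three indices.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[1, 0, 2] = eps[0, 2, 1] = eps[2, 1, 0] = -1.0

def cross(a, b):
    # (a x b)_k = epsilon_{ijk} a_i b_j, summing over repeated indices.
    return np.einsum('ijk,i,j->k', eps, a, b)

a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.5, 2.0])
print(np.allclose(cross(a, b), np.cross(a, b)))  # True
```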
The main idea here is that if a vector space $V$ together with a binary operation $\star$ forms an algebra, then for each $M$ and $N$ in $V$ we can always find an $L$ in $V$ such that $M\star N = L.$
Importantly, the elements of an algebra need not be invertible under any operation like $\star$. In fact, $\star$ need not even be an associative operation. Note that if $\star$ is not associative, this means that triple products such as
$$a\star b \star c$$
are ambiguous, so a bracket notation like
$$((a,b),c),$$
may be more appropriate.
The Algebra of Matrices
As we have seen, the complex matrices $\mathsf{Hom}(\mathbb{C}^{n})$, are a vector space. Together with the standard idea of matrix multiplication, $\mathsf{Hom}(\mathbb{C}^{n})$ is an algebra. Because matrix multiplication is an associative operation, we say $\mathsf{Hom}(\mathbb{C}^{n})$ is an associative algebra. The same is true of course for $\mathsf{Hom}(\mathbb{R}^{n})$.
If $A$ is an algebra, a subalgebra $B$ of $A$ is a linear subspace of $A$ that is closed under the algebra’s multiplication operation. As with groups and subgroups, if $A$ has a multiplicative identity, $B$ is usually required to contain it.
Certain subsets of $\mathsf{Hom}(\mathbb{C}^{n})$ may form subalgebras. For example, the product of any two diagonal matrices is itself diagonal. Less obviously perhaps, the upper (or lower) triangular matrices also form a subalgebra of $\mathsf{Hom}(\mathbb{C}^{n})$.
Matrices and Complex Structure
Consider the two real matrices
$$\mathbb{1} = \left(\begin{array}{cc}1&0\\0&1\end{array}\right),\quad J = \left(\begin{array}{rr}0&1\\-1&0\end{array}\right).$$
Evidently we have
$$\mathbb{1}^{2}=\mathbb{1} \quad \mathrm{and}\quad \mathbb{1} J = J.$$
What’s curious is the matrix $J^{2}$. As you can easily check by explicit computation,
$$J^{2} = -\mathbb{1}.$$
Thus $\mathbb{1}$ and $J$ satisfy the same multiplication rules as $1$ and $i$, \eqref{crules}.
It’s also not hard to see that $\mathbb{1}$ and $J$ are linearly independent as vectors in $\mathsf{End}(\mathbb{R}^{2})$, so that their real span forms an algebra that behaves exactly like $\mathbb{C}$.
We say that $\mathbb{C}$ and $C = \mathsf{span}_{\mathbb{R}}\left\{\mathbb{1},J\right\}$ are isomorphic as algebras. To see this we employ the linear map
$$\phi : \mathbb{C}\rightarrow C,$$
$$\phi : z \mapsto \mathsf{Re}z \mathbb{1} + \mathsf{Im}z J.$$
Because $\mathbb{1}$ and $J$ satisfy the same rules as \eqref{crules}, $\phi$ is a homomorphism of associative$^{2}$ algebras, i.e.
$$\phi(zw) = \phi(z)\phi(w).$$
The kernel of $\phi$ is trivial - it’s just $0$, so $\phi$ is invertible, and the inverse map
$$\phi^{-1}: C \rightarrow \mathbb{C},$$
is also a homomorphism of algebras. In other words, $\phi$ is an isomorphism of algebras. In set theory language we’d also call it a bijection.
Clearly, $C$ is a subalgebra of $\mathsf{End}(\mathbb{R}^{2})$.
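These claims are easy to check numerically. The sketch below (assuming NumPy; the names `I2`, `J`, and `phi` are chosen just for illustration) verifies that $J^{2} = -\mathbb{1}$ and that $\phi$ respects multiplication:

```python
# The matrices 1 and J reproduce complex multiplication:
# phi(z) = Re(z) * 1 + Im(z) * J satisfies phi(z w) = phi(z) phi(w).
import numpy as np

I2 = np.eye(2)
J = np.array([[0., 1.],
              [-1., 0.]])

print(np.allclose(J @ J, -I2))  # True: J^2 = -1

def phi(z):
    return z.real * I2 + z.imag * J

z, w = 1.0 + 2.0j, -0.5 + 3.0j
print(np.allclose(phi(z * w), phi(z) @ phi(w)))  # True: phi is a homomorphism
```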
Representations of Algebras
Let $A$ be an algebra and $V$ be a vector space. If $\phi$ is an algebra homomorphism from $A$ to $\mathsf{End}(V)$, then $V$ is said to be a module for $A$, and $\phi$ is the associated representation of $A$ on the module $V$.
For the example of the algebra $C$ above, $\phi$ can be interpreted as a representation of the algebra $\mathbb{C}$ in $\mathsf{End}(\mathbb{R}^{2})$, and the vector space $\mathbb{R}^{2}$ serves as its module.
$^{1}:$ A bilinear map on some vector space $V$ is of the form:
$$f : V\times V \rightarrow V,$$
which is linear in each factor. Of course, you can also have a bilinear map involving three vector spaces $A$, $B$, and $C$, defined similarly,
$$f: A\times B \rightarrow C.$$
$^{2}$: A homomorphism of nonassociative algebras can be defined similarly. We shall see some examples in later lectures.
04 - The Exponential Map
We explore the familiar exponential function in the context of groups of matrices.
The Trouble with Zero
As one-dimensional vector spaces, both $\mathbb{R}$ and $\mathbb{C}$ are abelian groups under the addition operation.
However, neither $\mathbb{R}$ nor $\mathbb{C}$ forms a group under multiplication. There's a glaring inconsistency with the group axioms: zero does not have a multiplicative inverse. There is no number $x$ such that
$$ x\cdot 0 = 1.$$
Instead, we consider the multiplicative groups $\mathbb{R}^{\times}$ and $\mathbb{C}^{\times}$, which consist of all nonzero numbers. Specifically, for example,
$$\mathbb{C}^{\times} = \left\{ z \in \mathbb{C} \;\Big|\; z \neq 0\right\}.$$
It's not hard to check that both of these “punctured” fields form groups under multiplication.
This structure generalizes in a way we've already seen. Consider the vector space $\mathbb{C}^{n}$. The endomorphisms of $\mathbb{C}^{n}$ are just the $n\times n$ complex matrices,
$$\mathsf{End}(\mathbb{C}^{n}) = \mathsf{Hom}(\mathbb{C}^{n}).$$
These endomorphisms are not a group, but rather contain a subset
$$\mathsf{GL}(\mathbb{C}^{n}) = \left\{ M \in \mathsf{Hom}(\mathbb{C}^{n})\;\Big|\; \det M \neq 0\right\},$$
that is a group. The trouble once again involves zero.
The Exponential Map
The Real Numbers
Another way to deal with zero is to simply restrict ourselves to positive numbers. For $\mathbb{R}$, consider the map
$$\exp : t \mapsto e^{t} = \sum_{n=0}^{\infty} \frac{1}{n!}t^{n}.$$
The exponential function maps all of $\mathbb{R}$ to the positive, real line. In this context
$$0\mapsto 1,$$
and the limit
$$\lim_{t\rightarrow -\infty} e^{t} = 0.$$
In this sense, all of the negative, real numbers get mapped to the interval $(0,1)$. Restricted to the strictly positive, real numbers, multiplication is once again a group operation.
Put another way, the image of the exponential map:
$$\exp[\mathbb{R}] = \left\{ \exp(x) \;\Big|\; x\in \mathbb{R}\right\},$$
is the set of strictly positive, real numbers,
$$\mathbb{R}^{+} = \left\{ x \in \mathbb{R} \;\Big|\; x > 0\right\}.$$
This forms a (connected) subgroup of $\mathbb{R}^{\times}$.
The Complex Numbers
This same construction carries over to the complex plane, with curious implications. The real axis behaves just as it did for $\mathbb{R}$, but the entire imaginary axis is mapped to the unit circle,
$$ \exp : iy \mapsto e^{iy}.$$
Thus we find, for real parameters $x$ and $y$, a map to polar coordinates
\begin{equation}\label{polar}\exp : x + iy \mapsto e^{x + iy} = e^{x}e^{iy} \rightsquigarrow r e^{i\theta}.\end{equation}
The image of the exponential map, $\exp[\mathbb{C}]$, is $\mathbb{C}^{\times}$.
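A tiny numerical illustration of the polar form \eqref{polar}, using Python’s built-in `cmath` module:

```python
# exp maps x + iy to a complex number with modulus e^x and phase y.
import cmath
import math

z = 0.7 + 1.2j
w = cmath.exp(z)
print(math.isclose(abs(w), math.exp(0.7)))  # True: modulus is e^x
print(math.isclose(cmath.phase(w), 1.2))    # True: phase is y (mod 2*pi)
```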
The Exponential of a Matrix
The definition of the exponential map ports directly to both $\mathsf{End}(\mathbb{R}^{n})$ and $\mathsf{End}(\mathbb{C}^{n})$. For either case, let $M$ be such an $n\times n$ matrix. Then
$$\exp: M \mapsto e^{M} = \sum_{n=0}^{\infty}\frac{1}{n!}M^{n}.$$
Using methods of multivariable calculus, one can show
\begin{equation}\label{jacobi}\det e^{M} = e^{\mathsf{Tr}\,M},\end{equation}
where $\mathsf{Tr}\,M$ represents the trace of the matrix $M$, that is to say, the sum of its diagonal elements. Therefore, given any matrix $M$ with finite entries, we find that the corresponding matrix $e^{M}$ is invertible.
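A one-line numerical check of the determinant-trace identity for a random matrix (a sketch, assuming NumPy and SciPy’s `expm`):

```python
# det(e^M) equals e^{Tr M}.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(5)
M = rng.standard_normal((4, 4))

print(np.isclose(np.linalg.det(expm(M)), np.exp(np.trace(M))))  # True
```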
TL;DR
To summarize, the exponential map took the fields $\mathbb{R}$ and $\mathbb{C}$ to the corresponding (multiplicative) groups, $\mathbb{R}^{+}$ and $\mathbb{C}^{\times}$. It also mapped endomorphisms to the groups,
$$ \exp : \mathsf{Hom}(\mathbb{F}^{n}) \rightarrow \mathsf{GL}(\mathbb{F}^{n}),$$
where $\mathbb{F}$ is either of $\mathbb{R}$ or $\mathbb{C}$.
The common thread here is that the exponential map takes vector spaces to groups by removing zero. More precisely, every member of the image of the exponential map is invertible.
Finally, notice that the exponential map itself is only invertible on its image. The so-called logarithm is undefined for zero, or for matrices with vanishing determinant.
03 - Lie groups and homomorphisms
We expand on our discussion of groups to include the maps between groups. We also present the idea behind Lie groups. Finally we give a formal definition of a group representation.
Lie Groups
A Lie group is a group with one or more continuous parameters that may also have a geometric interpretation. The abelian group $\mathsf{U}(1)$ has a single, continuous parameter that serves as a coordinate on the circle, $S^{1}$.
The translations associated to the vector space $\mathbb{R}^{n}$ also form an abelian Lie group. Actually, this idea holds for any vector space. Essentially, translations model the group of vector addition.
The general linear group of invertible matrices is, unsurprisingly, also a group; its operation is matrix multiplication. Given that each matrix element is a continuous parameter, $\mathsf{GL}(V)$ is also a Lie group.
Note that the full vector space of endomorphisms $\mathsf{End}(V)$ is not a group under matrix multiplication, as some matrices are not invertible.
Homomorphisms
Let $G$ and $H$ be groups. A map $f$ between these groups is called a group homomorphism if it respects the group multiplication,
$$f(gh) = f(g)f(h),\quad g,h \in G.$$
In this language, a linear map might be thought of as a vector space homomorphism.
We are now in a position to give our first precise definition of a representation. A group representation of a group $G$ on a vector space $V$ is a homomorphism $\pi$ from $G$ to a subgroup of $\mathsf{GL}(V)$:
$$\pi : G \rightarrow \mathsf{GL}(V).$$
In this context $V$ is called a module for $G$ or a $G$-module.
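For a concrete example, the sketch below (assuming NumPy; the function name `pi` is just for illustration) represents the cyclic group $\mathbb{Z}_{4}$ on the module $\mathbb{R}^{2}$ by rotation matrices and checks the homomorphism property:

```python
# A representation of Z_4 on R^2: pi(q) is rotation by 2*pi*q/4,
# and pi respects the group multiplication: pi(q) pi(r) = pi(q + r mod 4).
import numpy as np

def pi(q, n=4):
    theta = 2 * np.pi * q / n
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

q, r = 3, 2
print(np.allclose(pi(q) @ pi(r), pi((q + r) % 4)))  # True
```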
A Confounding Point
One confusing thing about groups like $\mathsf{GL}(V)$, $\mathsf{U}(n)$, and $\mathsf{O}(n)$ is that they are also defined in terms of matrices that are supposed to represent them.
In these so-called defining modules, the homomorphism $\pi$ is essentially tautological. As we shall see in detail, groups like $\mathsf{O}(n)$ have their own identity, independent of their definition in terms of a space like $\mathsf{End}(\mathbb{R}^{n})$. In particular, there is an infinite tower of modules for $\mathsf{O}(n)$, each with a different dimension.
02 - Groups
We present the formal idea of a group and give a few elementary examples. We also use these examples to sketch the idea of a group representation.
A group is a set $G$ together with a binary operation:
$$\star : G\times G \rightarrow G,$$
that is subject to three requirements. First, there is an identity element in $G$, say $1$, such that for all $g$ in $G$,
$$1\star g = g\star 1 = g.$$
Relatedly, every $g$ in $G$ must have an inverse element $g^{-1}$, such that
$$g\star g^{-1} = g^{-1}\star g = 1.$$
Finally, the operation $\star$ is associative, which means that for all $f,g,h$ in $G$
$$(f\star g)\star h = f\star(g\star h) = f\star g \star h.$$
Essentially, associativity means that the product of group elements
$$f\star g \star h \star \dots$$
is independent of the order of evaluation.
Quintessential examples of groups include the integers ($\mathbb{Z}$) with addition, or the real $(\mathbb{R})$ and complex $(\mathbb{C})$ numbers with addition (and, once zero is removed, with multiplication). These examples immediately extend to the rational numbers:
$$\mathbb{Q} = \left\{ \frac{a}{b} \;\Big|\; a,b \in \mathbb{Z},\; b \neq 0\right\}.$$
We will now consider a few more examples of groups and their associated representations.
Finite Group Representations
Consider the cyclic groups:
$$\mathbb{Z}_{n} = \left\{ q\; \mathrm{mod} \;n \;\Big|\;q \in \mathbb{Z}\right\},$$
where $n$ is a positive integer. Here $q$ mod or ``modulo'' $n$ means the remainder upon dividing $q$ by $n$.
Any complex vector space $V$ can be a module for $\mathbb{Z}_{n}$, where the representation of $\mathbb{Z}_{n}$ is furnished by the set of scalars
$$ \{ \pi_{q} = e^{2\pi i q/n} \;\Big|\; q \in \mathbb{Z}_{n}\}.$$
Here the scalars $\pi_{q}$ act as linear operators on $V$. To connect with our schematic picture from last time,
$$\pi : \mathcal{A} \rightarrow \mathsf{End}(V),$$
with $\mathcal{A} = \mathbb{Z}_{n}$. The representation $\pi$ then takes any $q$ in $\mathbb{Z}_{n}$ to the endomorphism
$$\pi(q) = e^{2\pi i q/n}\mathbb{1},$$
where $\mathbb{1}$ is the identity map:
$$\mathbb{1}\cdot v = v,$$
for all $v$ in $V$.
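A minimal numerical sketch of this scalar representation (assuming NumPy; the names are just for illustration), checking that acting twice matches acting with the sum modulo $n$:

```python
# The scalar representation pi(q) = e^{2 pi i q / n} of Z_n acting on a
# complex vector by multiplication respects the group law.
import numpy as np

n = 6

def pi(q):
    return np.exp(2j * np.pi * q / n)

v = np.array([1.0 + 0.5j, -2.0j])
q1, q2 = 4, 5

print(np.allclose(pi(q1) * (pi(q2) * v), pi((q1 + q2) % n) * v))  # True
```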
A notable example that we will see a lot of is $\mathbb{Z}_{2} = \left\{\pm 1\right\}$. Notice that any real vector space can also serve as a module for $\mathbb{Z}_{2}$.
Abelian Groups
Abelian groups are those whose group operation is commutative. Often we refer to it as addition and use the associated notation, $a + b.$
In this context the unit element is typically written as $0$ and inverse elements $a^{-1}$ are often written as $-a$.
The cyclic groups are abelian, and this additive notation can be inferred from the product of exponentials:
$$g_{1}\cdot g_{2} \rightsquigarrow e^{2\pi i\frac{m_{1}}{p}}e^{2\pi i\frac{m_{2}}{p}} = e^{2\pi i\frac{m_{1} + m_{2}}{p}},$$
for $g_{1},g_{2}$ in $\mathbb{Z}_{p}$.
Another common example that also happens to generalize the cyclic groups is the group $\mathsf{U}(1)$:
$$\mathsf{U}(1) = \left\{e^{i \theta} \;\Big|\; 0 \leq \theta < 2\pi \right\},$$
which also can be thought of as the unit circle
$$\mathsf{U}(1) = S^{1} = \left\{ z \in \mathbb{C} \;\Big|\; |z|^{2} = 1\right\}.$$
Again, any complex vector space $V$ can serve as a module for $\mathsf{U}(1)$, where the representation acts by scalar multiplication. For some $v$ in $V$,
$$v\mapsto \pi(\theta)\cdot v = e^{i\theta}v.$$
Evidently any of the cyclic groups can be found inside $\mathsf{U}(1)$. More precisely, they are subgroups. A subgroup is a subset of a group that is also a group under the same group operation. Any subgroup must contain the same identity element from the original group, and therefore inverse elements of a subgroup also coincide with those in the original group. For semantic reasons a group is sometimes trivially considered as a subgroup of itself.
Infinite groups like $\mathsf{U}(1)$ that have a geometric interpretation are often called Lie groups. We’ll talk about them next time.
01 - What is a Representation?
We sketch the idea of a representation and give a few common examples.
The Basics of Linear Representations
A representation is a way of expressing the structure of an algebraic object like a group or an algebra on some other kinds of set, typically a vector space.
More precisely, a representation is a map between two kinds of objects, schematically given by
\begin{equation}\label{schematic}\pi : \mathcal{A} \rightarrow \mathsf{End}(V).\end{equation}
Here $\pi$ maps between the aforementioned algebraic object $\mathcal{A}$ and the endomorphisms of some vector space $V$, called the module for $\mathcal{A}$.
Wait… the what and the what?
Endomorphisms are Linear Operators
A map $f$ between any two vector spaces $V$ and $W$
$$f: V\rightarrow W$$
is linear if it is compatible with scalar multiplication and vector addition:
$$f(a\overrightarrow{x} + b\overrightarrow{y}) = af(\overrightarrow{x}) + bf(\overrightarrow{y}),$$
where $\overrightarrow{x}$ and $\overrightarrow{y}$ are vectors in $V$, and $a$ and $b$ are scalars.
For any vector space $V$, the set of all such linear maps from $V$ to itself is called the endomorphisms of $V$, written $\mathsf{End}(V)$. Sometimes, we call endomorphisms linear operators.
Matrices are Endomorphisms
Because endomorphisms respect both vector addition and scalar multiplication, it may not surprise you to realize they too are a vector space.
Familiar vector spaces like $\mathbb{R}^{n}$ and $\mathbb{C}^{n}$ have familiar endomorphisms: the $n\times n$ matrices.
Let $\mathbb{F}$ be either of $\mathbb{R}$ or $\mathbb{C}$. These matrices are typically denoted $\mathsf{Hom}(\mathbb{F}^{n})$. Being endomorphisms, they also form a finite dimensional vector space. In particular, it has $\mathbb{F}$-dimension $n^{2}$: one dimension for each component.
Invertibility and Special Endomorphisms
Let $V$ and $W$ once again be vector spaces and let $f$ be a linear map between them:
$$f: V\rightarrow W.$$
The kernel of $f$ is the linear span of all vectors $v$ in $V$ that map to zero:
$$\ker(f) = \mathsf{span}\left\{ v \in V \;\Big|\; f(v) = 0\right\}.$$
Evidently, $\ker(f)$ is a subspace of $V$.
Now suppose $M$ is an endomorphism of $V$. The Rank-Nullity theorem of linear algebra tells us that if $\dim\ker(M) = 0$, then $M$ is invertible.
The subset of invertible endomorphisms of a finite dimensional vector space $V$ is called the general linear group of $V$,
$$\mathsf{GL}(V) = \left\{ M\in\mathsf{End}(V) \;\Big|\; \det M \neq 0\right\}.$$
For a finite-dimensional vector space $V$, an invertible matrix $M$ is one whose determinant is nonvanishing
$$\det M \neq 0.$$
One special subset of the endomorphisms of $\mathbb{R}^{n}$ is the set of orthogonal matrices$^{1}$,
$$\mathsf{O}(n) = \left\{ M \in \mathsf{End}(\mathbb{R}^{n}) \;\Big|\; M^{\sf T} = M^{-1}\right\}.$$
That is, the set of matrices whose inverse coincides with the transpose.
Another special class of endomorphisms of $\mathbb{C}^{n}$ are the unitary matrices,
$$\mathsf{U}(n) = \left\{ M \in \mathsf{End}(\mathbb{C}^{n}) \;\Big|\; M^{\dagger} = M^{-1}\right\}.$$
This is a slight extension of the orthogonal matrices to acknowledge the complex numbers, as
$$M^{\dagger} = (M^{\sf T})^{\star},$$
where the complex conjugate $\star$ acts component-wise.
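A small numerical illustration (assuming NumPy) of one orthogonal and one unitary matrix:

```python
# A rotation matrix is orthogonal (M^T = M^{-1}); multiplying it by a
# complex phase gives a simple unitary matrix (M^dagger = M^{-1}).
import numpy as np

theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(R.T, np.linalg.inv(R)))         # True: R is in O(2)

U = np.exp(0.5j) * R
print(np.allclose(U.conj().T, np.linalg.inv(U)))  # True: U is in U(2)
```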
Unitary matrices are of the utmost importance in Physics. Unitary representations are critical for preserving the statistical interpretation of the inner product of the Hilbert space of quantum states.
$^{1}$ : We can of course consider orthogonal matrices of complex numbers, but for practical applications we usually restrict to the reals.