01 - What is a Representation?
We sketch the idea of a representation and give a few common examples.
The Basics of Linear Representations
A representation is a way of expressing the structure of an algebraic object, like a group or an algebra, on some other kind of set, typically a vector space.
More precisely, a representation is a map between two kinds of objects, schematically given by
\begin{equation}\label{schematic}\pi : \mathcal{A} \rightarrow \mathsf{End}(V).\end{equation}
Here $\pi$ is a map from the aforementioned algebraic object $\mathcal{A}$ to the endomorphisms of some vector space $V$; the space $V$ is called the module for $\mathcal{A}$.
Wait… the what and the what?
Endomorphisms are Linear Operators
A map $f$ between any two vector spaces $V$ and $W$
$$f: V\rightarrow W$$
is linear if it is compatible with scalar multiplication and vector addition:
$$f(a\overrightarrow{x} + b\overrightarrow{y}) = af(\overrightarrow{x}) + bf(\overrightarrow{y}),$$
where $\overrightarrow{x}$ and $\overrightarrow{y}$ are vectors in $V$, and $a$ and $b$ are scalars.
For any vector space $V$, the set of all such linear maps from $V$ to itself is called the endomorphisms of $V$, written $\mathsf{End}(V)$. Sometimes, we call endomorphisms linear operators.
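If you like to see things concretely, here is a quick numerical sanity check (a minimal NumPy sketch; the matrix and vectors are arbitrary choices) that the map sending $\overrightarrow{x}$ to $A\overrightarrow{x}$, for a fixed matrix $A$, satisfies the linearity condition above.

```python
import numpy as np

# A fixed 2x2 matrix defines a linear map f(x) = A @ x on R^2.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

def f(x):
    return A @ x

# Arbitrary vectors and scalars.
x = np.array([1.0, -2.0])
y = np.array([0.5, 4.0])
a, b = 3.0, -1.5

# Linearity: f(a x + b y) == a f(x) + b f(y).
lhs = f(a * x + b * y)
rhs = a * f(x) + b * f(y)
print(np.allclose(lhs, rhs))  # True
```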
Matrices are Endomorphisms
Because endomorphisms respect both vector addition and scalar multiplication, it may not surprise you that they too form a vector space.
Familiar vector spaces such as $\mathbb{R}^{n}$ and $\mathbb{C}^{n}$ have familiar endomorphisms: the $n\times n$ matrices.
Let $\mathbb{F}$ be either $\mathbb{R}$ or $\mathbb{C}$. These matrices make up $\mathsf{End}(\mathbb{F}^{n})$, which is itself a finite-dimensional vector space. In particular, it has $\mathbb{F}$-dimension $n^{2}$: one for each matrix entry.
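For example, when $n = 2$, every endomorphism of $\mathbb{F}^{2}$ decomposes over the four elementary matrices,
$$\begin{pmatrix} a & b \\ c & d \end{pmatrix} = a\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + b\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} + c\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} + d\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix},$$
so $\mathsf{End}(\mathbb{F}^{2})$ indeed has $\mathbb{F}$-dimension $2^{2} = 4$.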
Invertibility and Special Endomorphisms
Let $V$ and $W$ once again be vector spaces and let $f$ be a linear map between them:
$$f: V\rightarrow W.$$
The kernel of $f$ is the linear span of all vectors $v$ in $V$ that map to zero:
$$\ker(f) = \mathsf{span}\left\{ v \in V \;\Big|\; f(v) = 0\right\}.$$
Evidently, $\ker(f)$ is a subspace of $V$.
Now suppose $M$ is an endomorphism of a finite-dimensional vector space $V$. The Rank-Nullity theorem of linear algebra tells us that $\dim V = \mathsf{rank}(M) + \dim\ker(M)$, so if $\dim\ker(M) = 0$, then $M$ has full rank and is therefore invertible.
The subset of invertible endomorphisms of a finite dimensional vector space $V$ is called the general linear group of $V$,
$$\mathsf{GL}(V) = \left\{ M\in\mathsf{End}(V) \;\Big|\; \det M \neq 0\right\}.$$
This is because, for a finite-dimensional vector space $V$, an invertible endomorphism $M$ is precisely one whose determinant is nonvanishing,
$$\det M \neq 0.$$
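To make this concrete (a small NumPy sketch with arbitrarily chosen matrices), we can compare an invertible endomorphism of $\mathbb{R}^{2}$ with a singular one, checking the rank, the dimension of the kernel, and the determinant.

```python
import numpy as np

n = 2
M_invertible = np.array([[1.0, 2.0],
                         [0.0, 1.0]])   # determinant 1
M_singular   = np.array([[1.0, 2.0],
                         [2.0, 4.0]])   # second row is twice the first

for M in (M_invertible, M_singular):
    rank = np.linalg.matrix_rank(M)
    nullity = n - rank                  # Rank-Nullity: dim ker(M) = n - rank(M)
    print(rank, nullity, np.linalg.det(M))
# The first matrix has a zero-dimensional kernel and nonzero determinant (invertible);
# the second has a one-dimensional kernel and vanishing determinant (not invertible).
```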
One special subset of the endomorphisms of $\mathbb{R}^{n}$ is the set of orthogonal matrices$^{1}$,
$$\mathsf{O}(n) = \left\{ M \in \mathsf{End}(\mathbb{R}^{n}) \;\Big|\; M^{\sf T} = M^{-1}\right\}.$$
That is, the set of matrices whose inverse coincides with the transpose.
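For instance (a minimal numerical check), the familiar $2\times 2$ rotation matrix is orthogonal:

```python
import numpy as np

theta = 0.7  # any angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Orthogonality: the transpose is the inverse, i.e. R^T R = identity.
print(np.allclose(R.T @ R, np.eye(2)))      # True
print(np.allclose(R.T, np.linalg.inv(R)))   # True
```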
Another special class of endomorphisms of $\mathbb{C}^{n}$ is the class of unitary matrices,
$$\mathsf{U}(n) = \left\{ M \in \mathsf{End}(\mathbb{C}^{n}) \;\Big|\; M^{\dagger} = M^{-1}\right\}.$$
This is a slight extension of the orthogonal matrices to accommodate the complex numbers, as
$$M^{\dagger} = (M^{\sf T})^{\star},$$
where the complex conjugate $\star$ acts component-wise.
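As a concrete check (again a small sketch; this particular matrix is just one convenient choice), here is a unitary matrix with genuinely complex entries:

```python
import numpy as np

# A unitary 2x2 matrix with complex entries.
U = (1 / np.sqrt(2)) * np.array([[1, 1j],
                                 [1j, 1]])

# The dagger is the conjugate transpose: U^dagger = (U^T)^*.
U_dagger = U.conj().T

print(np.allclose(U_dagger @ U, np.eye(2)))      # True: U^dagger = U^{-1}
print(np.allclose(U_dagger, np.linalg.inv(U)))   # True
```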
Unitary matrices are of the utmost importance in physics. Unitary representations are critical for preserving the statistical interpretation of the inner product on the Hilbert space of quantum states.
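To tie this back to the schematic map $\pi : \mathcal{A} \rightarrow \mathsf{End}(V)$ we started with, here is a toy example (a NumPy sketch of one standard construction, not the only one): the cyclic group $\mathbb{Z}_{4}$ represented on $\mathbb{R}^{2}$ by sending $k$ to rotation by $k\pi/2$. The resulting matrices are orthogonal, hence invertible endomorphisms, and the map respects the group law.

```python
import numpy as np

def pi(k):
    """Represent k in Z_4 as rotation of R^2 by k * 90 degrees."""
    theta = k * np.pi / 2
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# The map respects the group law: pi((j + k) mod 4) == pi(j) @ pi(k).
print(all(
    np.allclose(pi((j + k) % 4), pi(j) @ pi(k))
    for j in range(4) for k in range(4)
))  # True

# Each pi(k) is orthogonal, hence an invertible endomorphism of R^2.
print(all(np.allclose(pi(k).T @ pi(k), np.eye(2)) for k in range(4)))  # True
```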
$^{1}$ : We can of course consider orthogonal matrices with complex entries, but for practical applications we usually restrict to the reals.