Complex Linear Transformations

Last time we discussed the default inner product on a complex vector space. For two vectors $x,y\in\mathbb{C}^2$, we found

\begin{equation}\label{ip}\langle x,y\rangle = x^{\dagger}y = \bar{x}_{1}y_{1} + \bar{x}_{2}y_{2}.\end{equation}

The inner product $\langle\cdot,\cdot\rangle$ allowed us to define a norm, which for $z\in\mathbb{C}^{2}$ is,

\begin{equation}\label{norm}|z|^{2} = \langle z,z \rangle = |z_{1}|^{2} + |z_{2}|^{2}\end{equation}

and the notion of an adjoint,

\begin{equation}\label{adjoint}z=\left(\begin{array}{c}z_{1} \\ z_{2}\end{array}\right) \Rightarrow z^{\dagger} = \left(\begin{array}{cc}\bar{z}_{1} & \bar{z}_{2}\\\end{array}\right).\end{equation}
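If you like to check such formulas numerically, here's a minimal NumPy sketch of these three definitions; the vector $z$ is just an arbitrary example.

```python
import numpy as np

# An arbitrary example vector in C^2.
z = np.array([1 + 2j, 3 - 1j])

# The adjoint z† is the conjugate transpose; for a 1-D array
# the transpose is trivial, so conjugation is all that's left.
z_dagger = z.conj()

# np.vdot conjugates its first argument, so this is <z, z> = z† z.
norm_sq = np.vdot(z, z).real

print(norm_sq)                       # 15.0 = |1+2i|^2 + |3-i|^2
print(abs(z[0])**2 + abs(z[1])**2)   # same value, up to floating-point rounding
```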

Today we ask, what sort of linear transformations preserve the inner product \eqref{ip}?

If it's not immediately obvious why we would ask such a question, you're in good company. These kinds of questions are never obvious in the beginning. It's an intuition that borders on folklore.

Complex Linear Transformations

Vector spaces are of interest to mathematicians for the same reason they're of interest to physicists: they're easy to work with. Vectors add.

There are well-known algorithms for computation that involve nothing more than manipulating linear equations. That simplicity is a solid foundation for rich mathematical structure.

A linear transformation on a complex vector space $V$ is a function or map that respects vector addition and scalar multiplication. That is, given two vectors $x,y\in V$ and a scalar $\alpha\in\mathbb{C}$, a linear map $f$ acts term by term:

\begin{equation}\label{addition}f(x+y) = f(x) + f(y), \qquad f(\alpha x) = \alpha f(x).\end{equation}

Linear transformations can also map between different vector spaces, although we won't need that fact today. Instead, we're interested in so-called endomorphisms: those linear transformations that map a complex vector space to itself.

If $x$ and $y$ are vectors in a complex vector space $V$, a linear transformation $M$ taking $x$ to $y$ can be represented as an equation,

\begin{equation}\label{linear}y = M\cdot x,\end{equation}

which essentially amounts to a series of coupled algebraic equations, one for each component of $y$. Depending on context, $M$ is either called a matrix or an operator.

To be concrete, let's explore a linear transformation of $\mathbb{C}^{2}$:

\begin{equation}\label{c2linear} \left(\begin{array}{c} y_{1} \\ y_{2} \end{array}\right) = \left(\begin{array}{cc} m_{11} & m_{12}\\ m_{21} & m_{22} \end{array}\right) \left(\begin{array}{c} x_{1} \\ x_{2} \end{array}\right).\end{equation}

Here $M$ is a matrix, and this gives two coupled, algebraic equations:

$$y_{1} = m_{11}x_{1} + m_{12}x_{2},$$

and

$$y_{2} = m_{21}x_{1} + m_{22}x_{2}.$$

Here the $m_{ij}$ are complex numbers that parametrize the linear map $M$.
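As a sanity check, here's a minimal NumPy sketch of \eqref{c2linear}; the particular entries of $M$ and $x$ are arbitrary, illustrative choices.

```python
import numpy as np

# Illustrative, arbitrary entries for M and x.
M = np.array([[1 + 1j, 2j],
              [3 + 0j, 4 - 1j]])
x = np.array([1j, 2 + 0j])

# y = M · x, as in the equation above.
y = M @ x

# The same result, written out component by component.
y1 = M[0, 0] * x[0] + M[0, 1] * x[1]
y2 = M[1, 0] * x[0] + M[1, 1] * x[1]

print(np.allclose(y, [y1, y2]))  # True
```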

As you might expect, you can also apply any number of linear transformations in sequence,

$$M_{1}\cdot M_{2}\cdot M_{3}\dots M_{N}\cdot x,$$

using the same algebraic rules, \eqref{addition}.

Two comments are in order here. First, while sequential operation means that each matrix acts on a vector in turn, you could - as a matter of perspective - choose to multiply the matrices together before acting on any vector. This is fine so long as you're mindful of the order, since matrix multiplication doesn't obey the commutative law. This \textit{operator} perspective is another reason why vector spaces are so powerful: operators are functions that are easy to compose. Second, notice that all these matrices operate by acting on vectors from the left. That's a convention - like choosing to write on the page from left to right - but soon we'll see when it makes sense to \textit{switch} that convention.
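To see that order dependence concretely, here's a quick numerical sketch with two simple, arbitrarily chosen matrices:

```python
import numpy as np

M1 = np.array([[0, 1],
               [1, 0]])    # swaps the two components
M2 = np.array([[1, 0],
               [0, -1]])   # flips the sign of the second component

x = np.array([1 + 0j, 2 + 0j])

# Acting in sequence agrees with multiplying the matrices first...
print(np.allclose(M1 @ (M2 @ x), (M1 @ M2) @ x))  # True

# ...but swapping the order of the factors changes the answer.
print(np.allclose(M1 @ M2, M2 @ M1))              # False
```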

So now you might ask, when do equations like \eqref{linear} and \eqref{c2linear} have solutions? When can we solve them for $x$ or $y$?

Solving for $y$ - with a known $x$ - is arithmetic. Solving for $x$ - with a known $y$ - is more subtle.

Just like the equation for a real variable $x$:

$$ x^{2} + 1 = 0,$$

there may be no solution. To determine whether \eqref{c2linear} has a solution for fixed $y$, we must attempt to invert these equations.

Inverse Operators

Suppose a solution to \eqref{c2linear} exists. Then we may write an equation for $x$ as,

\begin{equation}\label{c2inverse} x = M^{-1}\cdot y.\end{equation}

The existence of a solution $x$ amounts to the existence of the inverse matrix $M^{-1}$. This procedure is hopefully familiar from elementary linear algebra. But a quick test for existence that generalizes beyond $\mathbb{C}^{2}$ involves taking the determinant of $M$.

Put differently, we can solve for $x$ whenever the determinant of $M$ is nonzero:

$$\det M = m_{11}m_{22} - m_{12}m_{21} \neq 0.$$

There's a theorem for matrix determinants that says:

The determinant of a product of matrices is the product of determinants.

So, starting from the defining equation of an inverse matrix,

$$M^{-1} \cdot M = \mathbb{1},$$

where $\mathbb{1}$ is the identity matrix, we apply the determinant theorem to find

\begin{equation}\label{inverse}\det M^{-1} \det M = 1,\end{equation}

and so 

$$\det M^{-1} = \frac{1}{\det M},$$

which shows why a non-vanishing determinant is necessary\footnote{Of course, proving sufficiency is a different matter.} for the existence of an inverse matrix.

Here’s a quick exercise to test yourself:

For the $2\times 2$ matrix $M$ in \eqref{c2linear}, compute $M^{-1}$ explicitly from the defining equation $M^{-1}\cdot M = \mathbb{1}$, and show that $\det M^{-1} = \frac{1}{\det M}$, consistent with \eqref{inverse}.
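If you'd like to verify your answer numerically, a sketch along these lines - with an arbitrary invertible $M$ - will do:

```python
import numpy as np

# An arbitrary invertible example; any M with det M != 0 will do.
M = np.array([[1 + 1j, 2 + 0j],
              [3j, 4 + 0j]])

M_inv = np.linalg.inv(M)

# M^{-1} · M = 1 ...
print(np.allclose(M_inv @ M, np.eye(2)))                       # True

# ... and det M^{-1} = 1 / det M.
print(np.isclose(np.linalg.det(M_inv), 1 / np.linalg.det(M)))  # True
```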

While it may seem like a lot of tedious work, linear spaces allow us to solve equations and perform computations in a straightforward manner.

If you followed our discussion from last time, you might ask what to make of the adjoint of \eqref{linear}. After all, $y$ is a vector in a complex vector space $V$ equipped with an inner product. So it must have a dual, $y^{\dagger}$. Indeed it does!

The Adjoint

$$y^{\dagger} = \left(M\cdot x\right)^{\dagger}.$$

We know what to make of $x^{\dagger}$, but what is $(M\cdot x)^{\dagger}$? Whatever it is, we define the adjoint with respect to the inner product, so

$$\langle y, y\rangle = |y|^{2} = \langle M\cdot x , M\cdot x\rangle.$$

Whatever $(M\cdot x)^{\dagger}$ is, it needs to compose with a vector. Working explicitly with \eqref{c2linear}, it's not hard to convince yourself that

\begin{equation} \label{002:adjoint} y^{\dagger} = \left(M\cdot x\right)^{\dagger} = x^{\dagger}\cdot M^{\dagger},\end{equation}

so that

\begin{equation}\label{y} |y|^2 = \langle y , y \rangle = x^{\dagger} \cdot M^{\dagger} \cdot M \cdot x.\end{equation}

Here $M^{\dagger}$ acts on $x^{\dagger}$ from the right. But it can also act on $M\cdot x$ from the left. This is another manifestation of the duality with respect to the default inner product we discussed last time. Operators on a given linear space $V$ also have duals, such as $M^{\dagger}$. Operators, then, have dual actions: from the left on the vector space, and from the right on the dual space.

Incidentally, just as $x^{\dagger}$ was interpreted in $\mathbb{C}^{2}$ as the complex conjugate of the transpose, so too is $M^{\dagger}$:

$$M^{\dagger} = \left(\begin{array}{cc} \bar{m}_{11} & \bar{m}_{21} \\ \bar{m}_{12} & \bar{m}_{22} \end{array}\right).$$
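In a numerical setting the adjoint is just the conjugate transpose, and \eqref{002:adjoint} can be checked directly; the entries below are again arbitrary:

```python
import numpy as np

M = np.array([[1 + 1j, 2j],
              [3 + 0j, 4 - 1j]])
x = np.array([1j, 2 + 0j])

# The adjoint: transpose, then complex conjugate.
M_dagger = M.conj().T

# (M · x)† as a row of conjugates ...
lhs = (M @ x).conj()

# ... equals x† · M†, with M† acting from the right.
rhs = x.conj() @ M_dagger

print(np.allclose(lhs, rhs))  # True
```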

So finally we're able to pose the question we asked at the beginning in an explicit way:

What can we say about those linear transformations that preserve the inner product? That is, those $M$

$$y = M\cdot x,$$

where $|y|^{2} = |x|^{2}$?

By \eqref{y}, we see this holds precisely when

\begin{equation}\label{unitary} M^{\dagger}\cdot M = \mathbb{1}.\end{equation}
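Matrices satisfying \eqref{unitary} are called unitary. (Preserving the norm of every vector is in fact enough to preserve the full inner product, thanks to the polarization identity.) Here's a quick numerical sketch of one standard example, with arbitrarily chosen parameters:

```python
import numpy as np

theta, phi = 0.7, 1.3   # arbitrary parameter choices

# A standard example of a matrix with M† · M = 1:
M = np.array([[np.cos(theta), -np.sin(theta) * np.exp(1j * phi)],
              [np.sin(theta) * np.exp(-1j * phi), np.cos(theta)]])

# Check the defining condition ...
print(np.allclose(M.conj().T @ M, np.eye(2)))  # True

# ... and norm preservation |M·x|^2 = |x|^2 for a sample vector.
x = np.array([1 + 2j, 3 - 1j])
y = M @ x
print(np.isclose(np.vdot(y, y).real, np.vdot(x, x).real))  # True
```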

TL;DR

Vector spaces are useful because they're easy to work with. Linear maps - within a single vector space, or even between different vector spaces - have really nice composition properties. Working explicitly, we often think of them as matrices. The adjoint on complex vector spaces acts naturally on linear maps, extending the ideas around duality and the default inner product.
