Ahoy! This page uses MathJax to typeset math symbols. If you’re seeing code rather than nice, typeset symbols, you may need to hit “refresh” to render things cleanly.

 
 

Applied Representation Theory : 001

Three Basic Models

This is a semi-formal review of some basic algebraic details needed in later lectures, using the backdrop of the normed division algebras. Despite reviewing some of the material, basic knowledge of real and complex numbers, as well as real vector spaces, is assumed. It is aimed at physicists. Later lectures will include more structure.

The Normed, Associative Division Algebras

Three basic models we'll come back to time and again are what effectively amount to number systems. These are the three normed division algebras:

  • the Real numbers, R

  • the Complex numbers, C

  • and the Quaternions, H.

We will refer to them collectively as \(\mathbb{A}\). We assume \(\mathbb{R}\) is familiar, as is the n-dimensional space \(\mathbb{R}^n\). We assume in particular that you are comfortable with column and row vectors,

$$\mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}, \qquad \mathbf{x}^T = \begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix},$$

and how to take their inner product:

$$\mathbf{x}^T\mathbf{x} = \begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = x_1^2 + x_2^2 + x_3^2 = ||\mathbf{x}||^2.$$
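As a quick sanity check, this inner product can be computed directly. A minimal Python sketch, with example component values of our own choosing:

```python
# Inner product of a vector in R^3 with itself: x^T x = x_1^2 + x_2^2 + x_3^2.
x = [1.0, 2.0, 3.0]  # example components, chosen arbitrarily

norm_squared = sum(xi * xi for xi in x)  # x^T x = ||x||^2
print(norm_squared)  # 14.0
```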

We also assume familiarity with \(\mathbb{C}\), but will review the details to establish notation.

The Complex Numbers and Cn

\(\mathbb{C}\) is related to the plane \(\mathbb{R}^2\), whose elements are ordered pairs of real numbers presented in vector notation:

$$z = (a, b) \leftrightarrow a\,e_x + b\,e_y.$$

The basis vectors \(e_x\) and \(e_y\) are orthonormal, so that their inner product is

$$e_i \cdot e_j = \delta_{ij}.$$

So in particular,

$$z \cdot e_x = a, \qquad z \cdot e_y = b, \qquad \text{and} \qquad z \cdot z = a^2 + b^2.$$

The complex numbers present a slight complication to \(\mathbb{R}^2\): we can multiply vectors directly, rather than only consider their inner product. This new, vector multiplication will be represented simply by juxtaposition. We define the multiplication of vectors in terms of the basis elements:

$$e_x e_x = 1, \qquad e_x e_y = e_y e_x = e_y, \qquad e_y e_y = -1.$$

Notice that the order of the basis vectors doesn't matter; we have defined a commutative operation. 
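To make the rule concrete, this multiplication can be implemented directly on ordered pairs of real numbers. A small sketch in Python — the function name `cmul` is ours, not standard:

```python
def cmul(z, w):
    """Multiply ordered pairs (a, b) <-> a e_x + b e_y using the rules
    e_x e_x = 1, e_x e_y = e_y e_x = e_y, e_y e_y = -1."""
    a, b = z
    c, d = w
    return (a * c - b * d, a * d + b * c)

# e_y e_y gives -e_x, and the operation is commutative:
print(cmul((0, 1), (0, 1)))                          # (-1, 0)
print(cmul((2, 3), (4, 5)) == cmul((4, 5), (2, 3)))  # True
```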

It is customary to represent \(e_x\) as 1 and \(e_y\) as \(i\). That way we may represent the vector z, now as a complex number, by

$$z = a + ib.$$

The inner product on C requires the notion of a complex conjugate,

$$\bar{z} = a - ib,$$

which maps the basis vectors

$$1 \mapsto 1, \qquad i \mapsto -i.$$

Just as column and row vectors are dual representations of R2:

$$\mathbf{z} = \begin{pmatrix} a \\ b \end{pmatrix}, \qquad \mathbf{z}^T = \begin{pmatrix} a & b \end{pmatrix},$$

and the inner product on R2 involves the use of both:

$$||\mathbf{z}||^2 = \mathbf{z}^T\mathbf{z} = a^2 + b^2, \qquad \mathbf{z} \in \mathbb{R}^2.$$

The inner product on the complex numbers \(\mathbb{C}\), built from \(\mathbb{R}^2\) using (1), instead involves the complex conjugate:

$$\mathbf{z}^T \to \bar{z},$$

so that

$$\mathbf{z}^T\mathbf{z} \to \bar{z}z = (a - ib)(a + ib) = a^2 + b^2.$$

So the duality between row and column vectors is now replaced by the duality between complex conjugate pairs.
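Python's built-in complex type makes this conjugate pairing easy to check. A quick sketch with arbitrary example values:

```python
# For z = a + ib, the pairing zbar * z recovers a^2 + b^2.
z = complex(3, 4)             # a = 3, b = 4, chosen as an example
pairing = z.conjugate() * z
print(pairing)                # (25+0j): real, equal to 3^2 + 4^2
```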

This pattern of dualities iterates in the natural way for the n-dimensional vector space of complex numbers, \(\mathbb{C}^n\). We can model “column” vectors, ordered collections of n complex numbers, as \(z \in \mathbb{C}^n\). The associated “row” vectors are now transposed, ordered collections of the n complex conjugates, \(\bar{z}^T\).

As might be familiar to physicists, the combination of transpose and complex conjugate operations is often denoted by a \(\dagger\):

$$z^\dagger = \bar{z}^T.$$

Although the meaning is usually clear from context, the \(\dagger\)-operation will be used for most any sort of duality operation. We will develop the idea of vector space dualities in later lectures.

For a concrete example, consider a column vector in C2,

$$z = \begin{pmatrix} z_1 \\ z_2 \end{pmatrix}, \qquad z_1, z_2 \in \mathbb{C}.$$

The dual vector to z in C2 is written

$$z^\dagger = \bar{z}^T = \begin{pmatrix} \bar{z}_1 & \bar{z}_2 \end{pmatrix},$$

so that the inner product

$$z^\dagger z = \begin{pmatrix} \bar{z}_1 & \bar{z}_2 \end{pmatrix}\begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \bar{z}_1 z_1 + \bar{z}_2 z_2 = |z_1|^2 + |z_2|^2 = ||z||^2,$$

is manifestly positive definite. One important takeaway from the study of \(\mathbb{C}^n\) is that it has the same vector space structure as \(\mathbb{R}^{2n}\), but repackaged slightly, pairwise in this case, courtesy of the multiplication operation.
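The positive definiteness is easy to see numerically. A minimal sketch for a vector in \(\mathbb{C}^2\), with example components of our own choosing:

```python
# z^dagger z in C^2: sum of conjugate(z_i) * z_i = |z_1|^2 + |z_2|^2.
z = [complex(1, 2), complex(3, -1)]  # example vector in C^2

inner = sum(zi.conjugate() * zi for zi in z)
print(inner)  # (15+0j): manifestly real and positive
```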

The Quaternions

\(\mathbb{H}\) is perhaps less familiar. It is a somewhat natural extension of \(\mathbb{C}\) that looks like \(\mathbb{R}^4\), represented by ordered quadruplets of real numbers which similarly map to basis vectors

$$q = (a, b, c, d) \leftrightarrow a\,1 + b\,i + c\,j + d\,k,$$

where 

$$i^2 = j^2 = k^2 = -1, \qquad \text{and} \qquad ijk = -1.$$

Cyclic permutations of \(ijk\), like \(jki\), are equivalent.

Notice that the restriction to 1 and any specific, single choice of the basis vectors \(i\), \(j\), or \(k\) gives you the complex numbers. In particular, there are three separate, but isomorphic, subspaces of \(\mathbb{H}\) that look like \(\mathbb{C}\).
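The defining relations can be checked numerically. Below is a sketch, under our own representation of a quaternion \(a + bi + cj + dk\) as a 4-tuple (a, b, c, d), with the Hamilton product written out by hand:

```python
def qmul(p, q):
    """Hamilton product of quaternions (a, b, c, d) <-> a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
minus_one = (-1, 0, 0, 0)

print(qmul(i, i) == qmul(j, j) == qmul(k, k) == minus_one)  # True
print(qmul(qmul(i, j), k) == minus_one)                     # True: ijk = -1
print(qmul(i, j), qmul(j, i))  # (0, 0, 0, 1) (0, 0, 0, -1): ij = k = -ji
```

Note the last line: unlike the basis multiplication in \(\mathbb{C}\), the order of the factors matters.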

As with \(\mathbb{C}\), we can consider the space of ordered n-tuples of quaternions, \(\mathbb{H}^n\). Without commutative multiplication, the resulting structure is a bit more intricate than a vector space over a field like \(\mathbb{R}\) or \(\mathbb{C}\). Nevertheless, \(\mathbb{H}^n\) is a vector space, although now the difference between the “column” and “row” constructions takes on new significance.

In the left vector space of column vectors of H, scalar multiplication1 is understood to act only from the left:

$$a \cdot (v + w) = a \cdot v + a \cdot w, \qquad a \in \mathbb{H}, \quad v, w \in \mathbb{H}^n.$$

More precisely, this definition applies component-wise, so that for H2:

$$q = \begin{pmatrix} q_1 \\ q_2 \end{pmatrix} \mapsto a \cdot q = \begin{pmatrix} a q_1 \\ a q_2 \end{pmatrix}, \qquad q_1, q_2, a \in \mathbb{H}.$$

In the dual, right vector space of row vectors of H, scalar multiplication is understood to act only from the right:

$$(v + w) \cdot a = v \cdot a + w \cdot a, \qquad a \in \mathbb{H}, \quad v, w \in \mathbb{H}^n.$$

Again, we mean that this operation acts component-wise,

$$q = \begin{pmatrix} q_1 & q_2 \end{pmatrix} \mapsto q \cdot a = \begin{pmatrix} q_1 a & q_2 a \end{pmatrix}, \qquad q_1, q_2, a \in \mathbb{H}.$$

Note that we could have similarly defined left or right vector spaces over commutative fields, as with \(\mathbb{C}^n\), but commutativity erases this distinction at the component level.
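The left and right structures genuinely differ for quaternions, and this can be checked directly. A sketch, again using our hand-rolled 4-tuple Hamilton product and example values:

```python
def qmul(p, q):
    """Hamilton product of quaternions (a, b, c, d) <-> a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

# An example vector in H^2 with components i and j, and the scalar a = k:
q1, q2 = (0, 1, 0, 0), (0, 0, 1, 0)
a = (0, 0, 0, 1)

left = (qmul(a, q1), qmul(a, q2))    # a . q: scalar acts from the left
right = (qmul(q1, a), qmul(q2, a))   # q . a: scalar acts from the right
print(left != right)  # True: the left and right actions disagree
```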

Finally, note that this structure implies that inner products on \(\mathbb{H}^n\) have unambiguous meaning; that is, they are well-defined:

$$q^\dagger \cdot q = \bar{q}_1 q_1 + \bar{q}_2 q_2 = ||q||^2.$$

Additionally, inner products like

$$q^\dagger \cdot a \cdot q = \bar{q}_1 a q_1 + \bar{q}_2 a q_2,$$

for \(a \in \mathbb{H}\) are also well-defined2. Notice that the meaning of \(\dagger\) extends from its definition on \(\mathbb{C}^n\) in two essential ways. First, the quaternionic conjugate:

$$\bar{q} = \overline{a + bi + cj + dk} = a - bi - cj - dk,$$

maps the “imaginary” quaternions to their additive inverses. Second, \(\dagger\) takes left vector spaces to right vector spaces, and vice versa. As with any of the spaces \(\mathbb{A}\), \(\dagger\) restricted to \(\mathbb{H}\) is an involution:

$$\left(q^\dagger\right)^\dagger = q, \qquad q \in \mathbb{A}.$$
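Both the reality of \(q^\dagger \cdot q\) and the consistency of the two groupings of \(q^\dagger \cdot a \cdot q\) discussed in the footnote can be verified numerically. A sketch, with our own 4-tuple Hamilton product and conjugate, and arbitrary example values:

```python
def qmul(p, q):
    """Hamilton product of quaternions (a, b, c, d) <-> a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qconj(q):
    """Quaternionic conjugate: a + bi + cj + dk -> a - bi - cj - dk."""
    a, b, c, d = q
    return (a, -b, -c, -d)

q1 = (1, 2, 0, 0)  # example component: 1 + 2i
a = (0, 0, 0, 3)   # example scalar: 3k

# qbar * q is real: here (1 - 2i)(1 + 2i) = 5.
print(qmul(qconj(q1), q1))  # (5, 0, 0, 0)

# Associativity makes (qbar a) q and qbar (a q) agree:
lhs = qmul(qmul(qconj(q1), a), q1)
rhs = qmul(qconj(q1), qmul(a, q1))
print(lhs == rhs)  # True
```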

Exercises

1.1 : The Quaternions

Using the basis vectors, write out the multiplication table for \(\mathbb{H}\). Hence show that multiplication in \(\mathbb{H}\) is associative.

1.2 : Hilbert Spaces

Let n be a finite, positive integer. Use the inner products defined for each of the \(\mathbb{A}^n\) to define a (Euclidean) distance function on \(\mathbb{A}^n\). Hence show that this distance function gives a norm for vectors in \(\mathbb{A}^n\). For a challenge requiring external material, prove that \(\mathbb{A}^n\) with that distance function defines a complete metric space, i.e. show that \(\mathbb{A}^n\) is a Hilbert space.

1.3 : Matrices as Linear Maps

The space of linear maps from \(\mathbb{A}^m\) to \(\mathbb{A}^n\) is denoted by \(\mathrm{Hom}_{m,n}\mathbb{A}\). In practice, \(\mathrm{Hom}_{m,n}\mathbb{A}\) is an \(m \times n\) array of numbers drawn from \(\mathbb{A}\). For each of the normed division algebras \(\mathbb{A}\), show that \(\mathrm{Hom}_{m,n}\mathbb{A}\) is a vector space. Hence show that \(\mathrm{Hom}_{m,n}\mathbb{A}\) and \(\mathrm{Hom}_{n,m}\mathbb{A}\) are dual in the sense defined above. Define the relevant \(\dagger\) operation as a map between these two spaces. How does the inner product construction on \(\mathrm{Hom}_{m,1}\mathbb{A}\) generalize to \(\mathrm{Hom}_{m,n}\mathbb{A}\)? In particular, comment on the ordering of the pairing of \(\mathrm{Hom}_{m,n}\mathbb{A}\) and \(\mathrm{Hom}_{n,m}\mathbb{A}\) in this product.

1.4 : Dual Spaces for Matrices

In the previous exercise you verified that, in particular, \(\mathrm{Hom}_{n,n}\mathbb{C}\) is a vector space. Argue that \(\dagger\) is an involution on \(\mathrm{Hom}_{n,n}\mathbb{C}\). Argue also that the generalized inner product construction considered above reduces to ordinary matrix multiplication. Why don't all matrices in \(\mathrm{Hom}_{n,n}\mathbb{C}\) have multiplicative inverses? Finally, consider the space of linear maps from \(\mathrm{Hom}_{n,n}\mathbb{C}\) to \(\mathbb{C}^n\):

$$X = \mathrm{Hom}(\mathrm{Hom}_{n,n}\mathbb{C}, \mathbb{C}^n).$$

What is the dual space for X? What is the associated \(\dagger\) operation?

1.5 : The Octonions

We have argued that \(\mathbb{C}\) is isomorphic to \(\mathbb{R}^2\) as a vector space by defining a multiplication operation on orthonormal basis vectors of \(\mathbb{R}^2\). Reconstruct the algebraic rules of \(\mathbb{C}\) directly from ordered pairs of real numbers. The result is the famous Cayley–Dickson construction. Generalize this construction to build \(\mathbb{H}\) from ordered pairs in \(\mathbb{C}^2\), using the notion of complex conjugate, component-wise, as needed. Iterate this construction to build an 8-dimensional algebra, \(\mathbb{O}\). Show that the multiplication of elements in \(\mathbb{O}\) fails to be associative.

1.6 : Involutions

An endomorphism is a linear map that takes a vector space V to itself:

$$\phi : V \to V.$$

A rotation is a great example of an endomorphism. An involution on V is an endomorphism that is its own inverse:

$$\phi \circ \phi = \phi \circ \phi^{-1} = 1.$$
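As a concrete instance, complex conjugation is an involution on \(\mathbb{C}\). A quick check in Python, where the name `phi` is ours:

```python
# Complex conjugation phi(z) = zbar is its own inverse on C.
def phi(z):
    return z.conjugate()

z = complex(2, -5)  # example value
print(phi(phi(z)) == z)  # True: phi composed with itself is the identity
```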

Specify the involutions on each of the normed division algebras, A.


1 Here, for emphasis, we have also represented scalar multiplication of the vector space with an explicit dot: \(\cdot\). We will do this somewhat frequently, but the meaning should be clear from context.

2 Eqn (3) is a great example where the meaning of the symbol \(\cdot\) is confused. It could be either the inner product or scalar multiplication. What makes (3) sensible is that both interpretations are equally valid! One interpretation takes the first \(\cdot\) to mean that \(a\) acts from the right, and the second as the inner product: \((q^\dagger \cdot a) \cdot q\). An alternative interpretation is \(q^\dagger \cdot (a \cdot q)\). Both statements are mathematically equivalent, which not only affords our compact notation, but also serves as a crucial consistency condition in dealing with vector spaces that have noncommutative scalars.

©2021 The Pasayten Institute cc by-sa-4.0

Previous: Lecture 1 : The Exponential Map
Next: Notes 02 : Matrix Constructions