bolic geometry. Unfortunately, hyperbolic geometry is much more complicated, since we cannot directly visualize it as a subset of R³. Instead, we need to develop the machinery of a Riemannian metric in order to describe hyperbolic geometry properly. In a nutshell, this allows us to take a subset of R² and measure distances in it in a funny way.

4.1 Review of derivatives and chain rule

We start by reviewing some facts about taking derivatives, and make explicit the notation we will use.

Definition (Smooth function). Let U ⊆ Rⁿ be open, and f = (f₁, …, fₘ): U → Rᵐ. We say f is smooth (or C∞) if each fᵢ has continuous partial derivatives of every order. In particular, a C∞ map is differentiable, with continuous first-order partial derivatives.

Definition (Derivative). The derivative of a function f: U → Rᵐ at a point a ∈ U is a linear map df_a: Rⁿ → Rᵐ (also written Df(a) or f′(a)) such that

  lim_{h→0} ‖f(a + h) − f(a) − df_a · h‖ / ‖h‖ = 0,

where h ∈ Rⁿ.

If m = 1, then df_a is given by the vector (∂f/∂x₁(a), …, ∂f/∂xₙ(a)), and the linear map is

  (h₁, …, hₙ) ↦ Σᵢ₌₁ⁿ (∂f/∂xᵢ)(a) hᵢ,

i.e. the dot product. For a general m, this vector becomes a matrix: the Jacobian matrix is J(f)_a = (∂fᵢ/∂xⱼ(a)), with the linear map given by matrix multiplication, namely h ↦ J(f)_a · h.

Example. Recall that a holomorphic (analytic) function of a complex variable f: U ⊆ C → C has a derivative f′(z), defined by

  lim_{|w|→0} |f(z + w) − f(z) − f′(z)w| / |w| = 0.

4 Hyperbolic geometry IB Geometry

We let f′(z) = a + ib and w = h₁ + ih₂. Then

  f′(z)w = ah₁ − bh₂ + i(ah₂ + bh₁).

We identify R² = C. Then f: U ⊆ R² → R² has a derivative df_z: R² → R² given by the matrix

  ( a  −b )
  ( b   a ).

We are also going to use the chain rule quite a lot, so we shall write it out explicitly.

Proposition (Chain rule). Let U ⊆ Rⁿ and V ⊆ Rᵖ be open. Let f: U → Rᵐ and g: V → U be smooth. Then f ∘ g: V → Rᵐ is smooth and has derivative

  d(f ∘ g)_p = (df)_{g(p)} ∘ (dg)_p.

In terms of the Jacobian matrices, we get

  J(f ∘ g)_p = J(f)_{g(p)} J(g)_p.

4.2 Riemannian metrics

Finally, we get to the idea of a Riemannian metric. The basic idea is not too unfamiliar. Presumably, we have all seen maps of the Earth, where we try to draw the spherical Earth on a piece of paper, i.e. a subset of R². However, this subset does not behave like R². You cannot measure distances on Earth by placing a ruler on the map, since distances are distorted. Instead, you have to find the coordinates of the points (e.g. the longitude and latitude), and then plug them into some complicated formula. Similarly, straight lines on the map are not really straight (spherical) lines on Earth.

We really should not think of the Earth as a subset of R². All we have done is "force" the Earth to live in R² to get a convenient way of depicting it, as well as a convenient system of labelling points (in many map projections, the x and y axes are the longitude and latitude).

This is the idea of a Riemannian metric. To describe some complicated surface, we take a subset U of R², and define a new way of measuring distances, angles and areas on U. All this information is packed into an entity known as the Riemannian metric.

Definition (Riemannian metric). We use coordinates (u, v) ∈ R². Let V ⊆ R² be open. Then a Riemannian metric on V is defined by giving C∞ functions E, F, G: V → R such that

  ( E(P)  F(P) )
  ( F(P)  G(P) )

is a positive definite matrix for all P ∈ V.

Alternatively, this is a smooth function assigning to each point of V a 2 × 2 symmetric positive definite matrix, i.e. an inner product ⟨·, ·⟩_P for each point P ∈ V. By definition, if e₁, e₂ is the standard basis, then

  ⟨e₁, e₁⟩_P = E(P),  ⟨e₁, e₂⟩_P = F(P),  ⟨e₂, e₂⟩_P = G(P).

Example. We can pick E = G = 1 and F = 0. Then this is just the standard Euclidean inner product.

As mentioned, we should not imagine V as a subset of R². Instead, we should think of it as an abstract two-dimensional surface, with some coordinate system given by a subset of R². However, this coordinate system is just a convenient way of labelling points. It does not represent any notion of distance. For example, (0, 1) need not be closer to (0, 2) than to (7, 0). These are just abstract labels.

With this in mind, V does not have any intrinsic notion of distances, angles and areas. However, we do want these notions. We can certainly write down things like the difference of two points, or even compute the derivative of a function. However, the numbers we get are not meaningful, since we can easily use a different coordinate system (e.g. by scaling the axes) and get different numbers. They have to be interpreted with the Riemannian metric, which tells us how to measure these things, via an inner product "that varies with space".

This variation in space is not an oddity arising from us not being able to make up our minds. It arises because we have "forced" our space to lie in R². Inside V, going from (0, 1) to (0, 2) might be very different from going from (5, 5) to (6, 5), since the coordinates don't mean anything.
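At a single point P, then, a Riemannian metric is nothing more than a symmetric positive definite matrix used as a bilinear form on tangent vectors. A small illustration of this (the numerical values of E, F, G below are made up for the example, not taken from the notes):

```python
import numpy as np

# sample values for E(P), F(P), G(P) at one point P (made up for illustration)
E, F, G = 4.0, 1.0, 3.0
gP = np.array([[E, F],
               [F, G]])

# positive definiteness for a symmetric 2x2 matrix: E > 0 and EG - F^2 > 0
assert E > 0 and np.linalg.det(gP) > 0

inner = lambda a, b: a @ gP @ b   # the inner product <a, b>_P

e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])
assert inner(e1, e1) == E and inner(e1, e2) == F and inner(e2, e2) == G

# <w, w>_P = E w1^2 + 2 F w1 w2 + G w2^2
w = np.array([2.0, -1.0])
assert inner(w, w) == E*2**2 + 2*F*2*(-1) + G*(-1)**2
```

Changing coordinates changes E, F, G, which is exactly why the raw coordinate differences carry no meaning on their own.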
Hence our inner product needs to measure "going from (0, 1) to (0, 2)" differently from "going from (5, 5) to (6, 5)", and so must vary with space.

We'll soon come to defining how this inner product gives rise to the notion of distance and similar things. Before that, we want to understand what we can put into the inner product ⟨·, ·⟩_P. Obviously these should be vectors in R², but where do these vectors come from? What are they supposed to represent?

The answer is "directions" (more formally, tangent vectors). For example, ⟨e₁, e₁⟩_P will tell us how far we actually go if we move in the direction of e₁ from P. Note that we say "move in the direction of e₁", not "move by e₁". We should really read this as "if we move by he₁ for some small h, then the distance covered is h√⟨e₁, e₁⟩_P". This statement is to be interpreted along the same lines as "if we vary x by some small h, then the value of f will vary by f′(x)h". Notice how the inner product allows us to translate a length in R² (namely ‖he₁‖_eucl = h) into the actual length in V.

All we needed for this was the norm induced by the inner product. Since what we have is the whole inner product, we can in fact define more interesting things, such as areas and angles. We will formalize these ideas very soon, after getting some more notation out of the way.

Often, instead of specifying the three functions separately, we write the metric as

  E du² + 2F du dv + G dv².

This notation has some mathematical meaning. We can view the coordinates as smooth functions u: V → R, v: V → R. Since they are smooth, they have derivatives. They are the linear maps

  du_P: R² → R,  (h₁, h₂) ↦ h₁;
  dv_P: R² → R,  (h₁, h₂) ↦ h₂.

These formulas are valid for all P ∈ V, so we just write du and dv. Since they are maps R² → R,
we can view du and dv as vectors in the dual space, du, dv ∈ (R²)*. Moreover, they form a basis for the dual space. In fact, they are the dual basis to the standard basis e₁, e₂ of R².

Then we can consider du², du dv and dv² as bilinear forms on R². For example,

  du²(h, k) = du(h) du(k),
  du dv(h, k) = ½(du(h) dv(k) + du(k) dv(h)),
  dv²(h, k) = dv(h) dv(k).

These have matrices

  ( 1 0 )    ( 0 ½ )    ( 0 0 )
  ( 0 0 ),   ( ½ 0 ),   ( 0 1 )

respectively. Then we indeed have

  E du² + 2F du dv + G dv² = ( E F ; F G ).

We can now start talking about what this is good for. In standard Euclidean space, we have a notion of length and area. A Riemannian metric also gives a notion of length and area.

Definition (Length). The length of a smooth curve γ = (γ₁, γ₂): [0, 1] → V is defined as

  ∫₀¹ (E γ₁′² + 2F γ₁′γ₂′ + G γ₂′²)^{1/2} dt,

where E = E(γ₁(t), γ₂(t)) etc. We can also write this as

  ∫₀¹ ⟨γ′, γ′⟩_{γ(t)}^{1/2} dt.

Definition (Area). The area of a region W ⊆ V is defined as

  ∫_W (EG − F²)^{1/2} du dv

when this integral exists.

In the area formula, what we are integrating is the square root of the determinant of the metric. This determinant is also known as the Gram determinant.

We define the distance between two points P and Q to be the infimum of the lengths of all curves from P to Q. It is an exercise on the second example sheet to prove that this is indeed a metric.

Example. We will not do this in full detail — the details are to be filled in on the third example sheet. Let V = R², and define the Riemannian metric by

  4(du² + dv²) / (1 + u² + v²)².

This looks somewhat arbitrary, but we shall see it actually makes sense by identifying R² with the sphere via the stereographic projection π: S² \ {N} → R².

For every point P ∈ S², the tangent plane to S² at P is given by

  {x ∈ R³ : x · OP→ = 0}.

Note that we translated it so that P is the origin, so that we can view it as a vector space (points on the tangent plane are points "from P"). Now given any two tangent vectors x₁, x₂ ⊥ OP→, we can take the inner product x₁ · x₂ in R³. We want to say this inner product is "the same as" the inner product provided by the Riemannian metric on R².

We cannot just require x₁ · x₂ = ⟨x₁, x₂⟩_{π(P)}, since this makes no sense at all. Apart from the obvious problem that x₁, x₂ have three components while the Riemannian metric takes in vectors of two components, we know that x₁ and x₂ are vectors tangent to S² at P, but to apply the Riemannian metric, we need the corresponding tangent vectors at π(P) ∈ R². To get these, we act by dπ_P. So what we want is

  x₁ · x₂ = ⟨dπ_P(x₁), dπ_P(x₂)⟩_{π(P)}.

Verification of this equality is left as an exercise on the third example sheet. It is helpful to notice

  π⁻¹(u, v) = (2u, 2v, u² + v² − 1) / (1 + u² + v²).

In some sense, the surface S² \ {N} is "isometric" to R² with this metric via the stereographic projection π. We can define the notion of isometry between two open sets with Riemannian metrics in general.

Definition (Isometry). Let V, Ṽ ⊆ R² be open sets endowed with Riemannian metrics, denoted ⟨·, ·⟩_P and ⟨·, ·⟩~_Q for P ∈ V, Q ∈ Ṽ respectively. A diffeomorphism (i.e. a C∞ map with C∞ inverse) ϕ: V → Ṽ is an isometry if for every P ∈ V and all x, y ∈ R², we have

  ⟨x, y⟩_P = ⟨dϕ_P(x), dϕ_P(y)⟩~_{ϕ(P)}.

Again, in the definition, x and y represent tangent vectors at P ∈ V, and on the right of the equality, we apply dϕ_P to get tangent vectors at ϕ(P) ∈ Ṽ.

How can we be sure this is the right definition? At the very least, we would expect isometries to preserve lengths. Let's see that this is indeed the case. If γ: [0, 1] → V is a C∞ curve, the composition γ̃ = ϕ ∘ γ: [0, 1] → Ṽ is a path in Ṽ. We let P = γ(t), and hence ϕ(P) = γ̃(t). Then

  ⟨γ̃′(t), γ̃′(t)⟩~_{γ̃(t)} = ⟨dϕ_P ∘ γ′(t), dϕ_P ∘ γ′(t)⟩~_{ϕ(P)} = ⟨γ′(t), γ′(t)⟩_{γ(t)=P}.

Integrating, we obtain

  length(γ̃) = length(γ) = ∫₀¹ ⟨γ′(t), γ′(t)⟩_{γ(t)}^{1/2} dt.

4.3 Two models for the hyperbolic plane

That's enough preparation. We can start talking about the hyperbolic plane. We will in fact provide two models of the hyperbolic plane. Each model has its own strengths, and often proving something is significantly easier in one model than in the other.

We start with the disk model.

Definition (Poincaré disk model). The (Poincaré) disk model for the hyperbolic plane is given by the unit disk

  D = {ζ ∈ C : |ζ| < 1} ⊆ C ≅ R²,

and a Riemannian metric on this disk given by

  4(du² + dv²) / (1 − u² − v²)² = 4|dζ|² / (1 − |ζ|²)²,   (∗)

where ζ = u + iv.

Note that this is similar to our previous metric for the sphere, but with 1 − u² − v² instead of 1 + u² + v². To interpret the term |dζ|², we can either formally set |dζ|² = du² + dv², or interpret it via the derivative dζ = du + i dv: C → C.

We see that (∗) is a scaling of the standard Riemannian metric by a factor depending on the polar radius r = |ζ|. The distances are scaled by 2/(1 − r²), while the areas are scaled by 4/(1 − r²)². Note, however, that the angles in the hyperbolic disk are the same as those in R². This is in general true for metrics that are scalings of the Euclidean metric (exercise).

Alternatively, we can define the hyperbolic plane with the upper half-plane.

Definition (Upper half-plane). The upper half-plane is H = {z ∈ C : Im(z) > 0}.

What is the appropriate Riemannian metric to put on the upper half-plane? We know D bijects to H via a Möbius transformation, and this bijection is in fact a conformal equivalence, as defined in IB Complex Analysis/Methods. The idea is to pick a metric on H such that this map is an isometry. Then H together with this Riemannian metric will be the upper half-plane model for the hyperbolic plane.

To avoid confusion, we reserve the letter z for points z ∈ H, with z = x + iy, while we use ζ for points ζ ∈ D, with ζ = u + iv. The maps between the two are given by

  ζ = (z − i)/(z + i),  z = i(1 + ζ)/(1 − ζ).

Instead of trying to convert the hyperbolic Riemannian metric on D to H, which would be a complete algebraic horror, we first try converting the Euclidean metric.
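Before converting metrics between D and H, it is reassuring to check the analogous earlier claim for the sphere: that 4(du² + dv²)/(1 + u² + v²)² really is the metric obtained from S² via stereographic projection. A symbolic sketch (this verification is the example-sheet exercise mentioned above; sympy is my own choice of tool):

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
r2 = u**2 + v**2
# inverse stereographic projection pi^{-1}(u, v)
sigma = sp.Matrix([2*u, 2*v, r2 - 1]) / (1 + r2)

# sigma really lands on the unit sphere
assert sp.simplify(sigma.dot(sigma) - 1) == 0

su, sv = sigma.diff(u), sigma.diff(v)
E = sp.simplify(su.dot(su))
F = sp.simplify(su.dot(sv))
G = sp.simplify(sv.dot(sv))

# the induced metric is 4(du^2 + dv^2)/(1 + u^2 + v^2)^2
assert F == 0
assert sp.simplify(E - 4/(1 + r2)**2) == 0
assert sp.simplify(G - 4/(1 + r2)**2) == 0
```

Since F = 0 and E = G, the pullback is a pointwise scaling of the Euclidean metric, which is why stereographic projection preserves angles.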
The Euclidean metric on R² = C is given by

  ⟨w₁, w₂⟩ = Re(w₁w̄₂) = ½(w₁w̄₂ + w̄₁w₂).

So if ⟨·, ·⟩_eucl denotes the Euclidean metric at ζ, then at the point z with ζ = (z − i)/(z + i), we require (by the definition of isometry)

  ⟨w, v⟩_z = ⟨(dζ/dz)w, (dζ/dz)v⟩_eucl = |dζ/dz|² Re(w v̄) = |dζ/dz|² (w₁v₁ + w₂v₂),

where w = w₁ + iw₂ and v = v₁ + iv₂. Hence, on H, we obtain the Riemannian metric

  |dζ/dz|² (dx² + dy²).

We can compute

  dζ/dz = 1/(z + i) − (z − i)/(z + i)² = 2i/(z + i)².

This is what we get if we start with the Euclidean metric on D. If we start with the hyperbolic metric on D instead, we get an additional scaling factor. We can do some computations to get

  1 − |ζ|² = 1 − |z − i|²/|z + i|²,

and hence

  1/(1 − |ζ|²) = |z + i|²/(|z + i|² − |z − i|²) = |z + i|²/(4 Im z).

Putting all these together, the metric on H corresponding to 4|dζ|²/(1 − |ζ|²)² on D is

  4 · (4/|z + i|⁴) · (|z + i|²/(4 Im z))² |dz|² = |dz|²/(Im z)² = (dx² + dy²)/y².

We now use all these ingredients to define the upper half-plane model.

Definition (Upper half-plane model). The upper half-plane model of the hyperbolic plane is the upper half-plane H with the Riemannian metric

  (dx² + dy²)/y².

The lengths on H are scaled (from the Euclidean ones) by 1/y, while the areas are scaled by 1/y². Again, the angles are the same.
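The algebra above can be double-checked symbolically: pulling the disk metric back along ζ = (z − i)/(z + i) should produce exactly the conformal factor 1/y². A sympy sketch (my own verification, not part of the notes):

```python
import sympy as sp

x = sp.Symbol('x', real=True)
y = sp.Symbol('y', positive=True)   # Im z > 0 on H
z = x + sp.I*y
zeta = (z - sp.I)/(z + sp.I)        # the Moebius map H -> D

# since zeta is holomorphic, d(zeta)/dz = d(zeta)/dx
dzeta = sp.simplify(sp.diff(zeta, x))
abs2 = lambda w: sp.simplify(sp.expand(w*sp.conjugate(w)))   # |w|^2

# conformal factor of the pulled-back disk metric 4|dzeta|^2/(1-|zeta|^2)^2
factor = sp.simplify(4*abs2(dzeta)/(1 - abs2(zeta))**2)
assert sp.simplify(factor - 1/y**2) == 0
```

This is exactly the computation in the text: |dζ/dz|² = 4/|z + i|⁴ and 1 − |ζ|² = 4y/|z + i|², and the powers of |z + i| cancel.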
Note that we did not have to go through so much mess in order to define the sphere. This is because we can easily "embed" the surface of the sphere in R³. However, there is no easy surface in R³ that gives us the hyperbolic plane. As we don't have an actual prototype, we need to rely on the more abstract data of a Riemannian metric in order to work with hyperbolic geometry.

We are next going to study the geometry of H. We claim that the following group of Möbius maps consists of isometries of H:

  PSL(2, R) = { z ↦ (az + b)/(cz + d) : a, b, c, d ∈ R, ad − bc = 1 }.

Note that the coefficients have to be real, not complex.

Proposition. The elements of PSL(2, R) are isometries of H; in particular, they preserve the lengths of curves.

Proof. It is easy to check that PSL(2, R) is generated by

(i) translations z ↦ z + a for a ∈ R,
(ii) dilations z ↦ az for a > 0,
(iii) the single map z ↦ −1/z.

So it suffices to show that each of these preserves the metric |dz|²/y², where z = x + iy. The first two are straightforward: plugging them into the formula, the metric does not change.

We now look at the last one, f(z) = −1/z. The derivative at z is f′(z) = 1/z². So

  dz ↦ d(−1/z) = dz/z²,

and hence

  |d(−1/z)|² = |dz|²/|z|⁴.

We also have

  Im(−1/z) = −(1/|z|²) Im z̄ = (Im z)/|z|².

So

  |d(−1/z)|² / (Im(−1/z))² = (|dz|²/|z|⁴) · (|z|⁴/(Im z)²) = |dz|²/(Im z)².

So this is an isometry, as required.

Note that each z ↦ az + b with a > 0, b ∈ R is in PSL(2, R), and we can use maps of this form to send any point to any other point.
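We can also spot-check the proposition numerically: for real a, b, c, d with ad − bc = 1, the map f(z) = (az + b)/(cz + d) has f′(z) = 1/(cz + d)², and the conformal factor |f′(z)|² exactly cancels the change in 1/(Im z)². A small self-contained check (the random sampling is my own illustration, not part of the notes):

```python
import random

random.seed(0)
for _ in range(200):
    a = random.uniform(0.5, 2.0)
    b = random.uniform(-2.0, 2.0)
    c = random.uniform(-2.0, 2.0)
    d = (1 + b*c)/a                      # force ad - bc = 1
    z = complex(random.uniform(-3, 3), random.uniform(0.1, 3.0))
    w = (a*z + b)/(c*z + d)              # the Moebius image of z
    dw = 1/(c*z + d)**2                  # f'(z), valid since ad - bc = 1
    assert w.imag > 0                    # PSL(2, R) preserves H
    # isometry condition: |f'(z)|^2 / Im(f(z))^2 == 1 / Im(z)^2
    lhs = abs(dw)**2 / w.imag**2
    rhs = 1/z.imag**2
    assert abs(lhs - rhs) <= 1e-9 * rhs
```

The identity Im f(z) = (Im z)/|cz + d|² used implicitly here is the same computation as for z ↦ −1/z in the proof above.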
So PSL(2, R) acts transitively on H. Moreover, everything in PSL(2, R) fixes R ∪ {∞} (as a set).

Recall also that each Möbius transformation preserves circles and lines in the complex plane, as well as angles between circles/lines. In particular, consider the line L = iR, which meets R perpendicularly, and let g ∈ PSL(2, R). Then the image of L is either a circle centered at a point of R, or a straight line perpendicular to R.

We let L⁺ = L ∩ H = {it : t > 0}. Then g(L⁺) is either a vertical half-line or a semicircle ending in R.

Definition (Hyperbolic lines). Hyperbolic lines in H are vertical half-lines or semicircles ending in R.

We will now prove some lemmas to justify why we call these hyperbolic lines.

Lemma. Given any two distinct points z₁, z₂ ∈ H, there exists a unique hyperbolic line through z₁ and z₂.

Proof. This is clear if Re z₁ = Re z₂ — we just pick the vertical half-line through them, and it is clear this is the only possible choice.

Otherwise, if Re z₁ ≠ Re z₂, we can find the desired semicircle by taking its centre to be the point where the perpendicular bisector of the segment z₁z₂ meets R. It is also clear that this is the only possible choice.

Lemma. PSL(2, R) acts transitively on the set of hyperbolic lines in H.

Proof. It suffices to show that for each hyperbolic line ℓ, there is some g ∈ PSL(2, R) such that g(ℓ) = L⁺. This is clear when ℓ is a vertical half-line, since we can just apply a horizontal translation.

If ℓ is a semicircle, suppose it has end-points s < t in R. Then consider

  g(z) = (z − t)/(z − s).

This has determinant −s + t > 0, so g ∈ PSL(2, R). Then g(t) = 0 and g(s) = ∞. So we must have g(ℓ) = L⁺, since g(ℓ) is a hyperbolic line, and the only hyperbolic lines passing through ∞ are the vertical
half-lines. So done.

Moreover, we can achieve g(s) = 0 and g(t) = ∞ instead by composing with z ↦ −1/z. Also, for any P ∈ ℓ that is not an endpoint, we can construct g with g(P) = i ∈ L⁺, by further composing with a map z ↦ az. So the isometries act transitively on pairs (ℓ, P), where ℓ is a hyperbolic line and P ∈ ℓ.

Definition (Hyperbolic distance). For points z₁, z₂ ∈ H, the hyperbolic distance ρ(z₁, z₂) is the length of the segment [z₁, z₂] ⊆ ℓ of the hyperbolic line ℓ through z₁ and z₂ (parametrized monotonically).

Thus PSL(2, R) preserves hyperbolic distances. Similar to Euclidean space and the sphere, we show these lines minimize distance.

Proposition. If γ: [0, 1] → H is a piecewise C¹-smooth curve with γ(0) = z₁, γ(1) = z₂, then length(γ) ≥ ρ(z₁, z₂), with equality iff γ is a monotonic parametrisation of [z₁, z₂] ⊆ ℓ, where ℓ is the hyperbolic line through z₁ and z₂.

Proof. We pick an isometry g ∈ PSL(2, R) so that g(ℓ) = L⁺. So without loss of generality, we may assume z₁ = iu and z₂ = iv, with real u < v.

We decompose the path as γ(t) = x(t) + iy(t). Then we have

  length(γ) = ∫₀¹ (1/y)√(x′² + y′²) dt
            ≥ ∫₀¹ |y′|/y dt
            ≥ ∫₀¹ y′/y dt
            = [log y(t)]₀¹
            = log(v/u).

This calculation also tells us that ρ(z₁, z₂) = log(v/u). So length(γ) ≥ ρ(z₁, z₂), with equality if and only if x(t) ≡ 0 (hence γ ⊆ L⁺) and y′ ≥ 0 (hence the parametrisation is monotonic).
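The formula ρ(iu, iv) = log(v/u) can be verified numerically, along with the fact that leaving the imaginary axis only makes a path longer. A quick midpoint-rule check (the particular detour path is my own choice for illustration):

```python
import math

u0, v0 = 0.5, 3.0
n = 100000

# length of the vertical segment from i*u0 to i*v0: integrate dy/y
h = (v0 - u0)/n
straight = sum(h/(u0 + (i + 0.5)*h) for i in range(n))
assert abs(straight - math.log(v0/u0)) < 1e-6

# a detour z(t) = sin(pi t)/2 + i(u0 + (v0 - u0) t) is strictly longer:
# integrate sqrt(x'^2 + y'^2)/y dt
dt = 1.0/n
detour = 0.0
for i in range(n):
    t = (i + 0.5)*dt
    xdot = 0.5*math.pi*math.cos(math.pi*t)
    ydot = v0 - u0
    yval = u0 + (v0 - u0)*t
    detour += math.sqrt(xdot**2 + ydot**2)/yval * dt
assert detour > straight + 0.01
```

The two inequalities in the proof correspond exactly to the two comparisons here: dropping the x′² term, and replacing |y′| by y′.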
Corollary (Triangle inequality). Given three points z₁, z₂, z₃ ∈ H, we have

  ρ(z₁, z₃) ≤ ρ(z₁, z₂) + ρ(z₂, z₃),

with equality if and only if z₂ lies on the hyperbolic segment between z₁ and z₃. Hence, (H, ρ) is a metric space.

4.4 Geometry of the hyperbolic disk

So far, we have worked with the upper half-plane model, since it is more convenient for these calculations. However, sometimes the disk model is more convenient, so we want to understand that as well.

Recall that ζ ∈ D ↦ z = i(1 + ζ)/(1 − ζ) ∈ H is an isometry, with (isometric) inverse z ∈ H ↦ ζ = (z − i)/(z + i) ∈ D. Moreover, since these are Möbius maps, circle-lines are preserved, and angles between them are also preserved. Hence, immediately from the previous work on H, we know:

(i) PSL(2, R) ≅ {Möbius transformations sending D to itself} = G.
(ii) Hyperbolic lines in D are circle segments meeting |ζ| = 1 orthogonally, including diameters.
(iii) G acts transitively on hyperbolic lines in D (and also on pairs consisting of a line and a point on the line).
(iv) The length-minimizing geodesics in D are segments of hyperbolic lines parametrized monotonically.

We write ρ for the (hyperbolic) distance in D as well.

Lemma. Let G be the set of isometries of the hyperbolic disk. Then

(i) rotations z ↦ e^{iθ}z (for θ ∈ R) are elements of G;
(ii) if a ∈ D, then g(z) = (z − a)/(1 − āz) is in G.

Proof.

(i) A rotation is clearly an isometry, since it is a linear map which preserves |z| and |dz|, and hence also the metric

  4|dz|² / (1 − |z|²)².

(ii) First, we need to check this indeed maps D
to itself. To do this, we first make sure it sends {|z| = 1} to itself. If |z| = 1, then

  |1 − āz| = |z̄(1 − āz)| = |z̄ − ā| = |z − a|.

So |g(z)| = 1. Also, it is easy to check that g(a) = 0. By continuity, g must then map D to itself. We can then show it is an isometry by plugging it into the formula.

It is an exercise on the second example sheet to show that every g ∈ G is of the form

  g(z) = e^{iθ} (z − a)/(1 − āz)  or  g(z) = e^{iθ} (z̄ − a)/(1 − āz̄)

for some θ ∈ R and a ∈ D.

We shall now use the disk model to do something useful. We start by coming up with an explicit formula for distances in the hyperbolic plane.

Proposition. If 0 ≤ r < 1, then

  ρ(0, re^{iθ}) = 2 tanh⁻¹ r.

In general, for z₁, z₂ ∈ D, we have

  ρ(z₁, z₂) = 2 tanh⁻¹ |(z₁ − z₂)/(1 − z̄₁z₂)|.

Proof. By the lemma above, we can rotate the hyperbolic disk so that re^{iθ} is sent to r. So ρ(0, re^{iθ}) = ρ(0, r). We can evaluate this by performing the integral

  ρ(0, r) = ∫₀ʳ 2 dt/(1 − t²) = 2 tanh⁻¹ r.

For the general case, we apply the Möbius transformation

  g(z) = (z − z₁)/(1 − z̄₁z).

Then we have g(z₁) = 0 and

  g(z₂) = (z₂ − z₁)/(1 − z̄₁z₂) = |(z₁ − z₂)/(1 − z̄₁z₂)| e^{iθ}

for some θ. So

  ρ(z₁, z₂) = ρ(g(z₁), g(z₂)) = 2 tanh⁻¹ |(z₁ − z₂)/(1 − z̄₁z₂)|.

Again, we exploited the idea of performing the calculation in an easy case, and then using isometries to move everything else to the easy case. In general, when we have a "distinguished" point in the hyperbolic plane, it is often convenient to use the disk model and move that point to 0 by an isometry.

Proposition. For every point P and hyperbolic line ℓ with P ∉ ℓ, there is a unique line ℓ′ with P ∈ ℓ′ such that ℓ′ meets ℓ orthogonally, say ℓ ∩ ℓ′ = {Q}, and ρ(P, Q) ≤ ρ(P, Q̃) for all Q̃ ∈ ℓ.

This is a familiar fact from Euclidean geometry. To prove it, we again apply the trick of moving P to 0.

Proof. wlog, assume P = 0 ∈ D. Note that a line ℓ in D that is not a diameter is part of a Euclidean circle, so it has a (Euclidean) centre, say C. Since every line through P = 0 is a diameter, there is clearly only one line through P that intersects ℓ perpendicularly, namely the diameter through C (recall that angles in D are the same as Euclidean angles).

It is also clear that Q minimizes the Euclidean distance between P and ℓ. While this is not the same as the hyperbolic distance, since the hyperbolic lines through P are diameters, a larger hyperbolic distance from P is equivalent to a larger Euclidean distance. So Q indeed minimizes the hyperbolic distance as well.

How does reflection in hyperbolic lines work? This time, we work in the upper half-plane model, since we have a favourite line L⁺.

Lemma (Hyperbolic reflection). Suppose g is an isometry of the hyperbolic half-plane H and g fixes every point in L⁺ = {iy : y > 0}. Then g is either the identity or g(z) = −z̄, i.e. the reflection in the vertical axis L⁺.

Observe that we have already proved a similar result in Euclidean geometry, and the spherical version was proven on the first example sheet.

Proof. For every P ∈ H \ L⁺, there is a unique line ℓ containing P such that ℓ ⊥ L⁺. Let Q = L⁺ ∩ ℓ. We see that ℓ is a semicircle, and by the definition of isometry, we must have ρ(P, Q) = ρ(g(P), Q).

Now note that g(ℓ) is also a line meeting L⁺ perpendicularly at Q, since g fixes L⁺ and preserves angles. So we must have g(ℓ) = ℓ. In particular, g(P) ∈ ℓ, and so we must have g(P) = P or g(P) = P′, where P′ is the image of P under (Euclidean) reflection in L⁺.

Now it suffices to prove that if g(P) = P for any one P ∉ L⁺, then g must be the identity (whereas if g(P) = P′ for all P, then g is given by g(z) = −z̄). So suppose g(P) = P, where wlog P ∈ H⁺ = {z ∈ H : Re z > 0}, and let A ∈ H⁺ be any other point on the same side.

Now if g(A) ≠ A, then g(A) = A′. Then ρ(A, P) = ρ(g(A), g(P)) = ρ(A′, P). But letting B = [A′, P] ∩ L⁺, we have

  ρ(A′, P) = ρ(A′, B) + ρ(B, P) = ρ(A, B) + ρ(B, P) > ρ(A, P),

by the triangle inequality, noting that B ∉ (A, P), since A and P lie strictly on the same side of L⁺ (here ρ(A′, B) = ρ(A, B) since z ↦ −z̄ visibly preserves the metric and fixes B). This is a contradiction. So g must fix everything.

Definition (Hyperbolic reflection). The map R: z ∈ H ↦ −z̄ ∈ H is the (hyperbolic) reflection in L⁺. More generally, given any hyperbolic line ℓ, let T be an isometry sending ℓ to L⁺. Then the (hyperbolic) reflection in ℓ is

  R_ℓ = T⁻¹ ∘ R ∘ T.

Again, we already know how to reflect in L⁺. So to reflect in another line ℓ, we move our plane so that ℓ becomes L⁺, do the reflection, and move back. By the previous lemma, R_ℓ is the unique isometry that is not the identity and fixes ℓ pointwise.

4.5 Hyperbolic triangles

Definition (Hyperbolic triangle). A hyperbolic triangle ABC is the region determined by three hyperbolic line segments AB, BC and CA, including extreme cases where some of the vertices A, B, C are allowed
to be "at infinity". More precisely, in the half-plane model, we allow them to lie in R ∪ {∞}; in the disk model, we allow them to lie on the unit circle |z| = 1. We see that if A is "at infinity", then the angle at A must be zero.

Recall that for a region R ⊆ H, we can compute the area of R as

  area(R) = ∫_R (dx dy)/y².

Similar to the sphere, we have

Theorem (Gauss-Bonnet theorem for hyperbolic triangles). For each hyperbolic triangle ∆, say ABC, with angles α, β, γ ≥ 0 (note that zero angles are possible), we have

  area(∆) = π − (α + β + γ).

Proof. First do the case γ = 0, so C is "at infinity". Recall that we like to use the disk model when we have a distinguished point in the hyperbolic plane. If we have a distinguished point at infinity, it is often advantageous to use the upper half-plane model instead, since ∞ is a distinguished point at infinity there.

So we use the upper half-plane model, and wlog C = ∞ (applying an element of PSL(2, R) if necessary). Then AC and BC are vertical half-lines, and AB is an arc of a semicircle. We use a translation z ↦ z + a (with a ∈ R) to centre the semicircle at 0, and then a dilation z ↦ bz (with b > 0) to give it radius 1. Thus wlog AB ⊆ {x² + y² = 1}. Now we have

  area(∆) = ∫_{cos(π−α)}^{cos β} ∫_{√(1−x²)}^{∞} (1/y²) dy dx
          = ∫_{cos(π−α)}^{cos β} dx/√(1 − x²)
          = [−cos⁻¹(x)]_{cos(π−α)}^{cos β}
          = π − α − β,

as required.

In general, we use H again, and we can arrange for AC to lie in a vertical half-line. Also, we can move AB to the circle x² + y² = 1, noting that this transformation keeps AC vertical. [Figure: C sits above A on the vertical half-line, B lies to the right; δ is the angle at B between BC and the vertical direction.]

We consider ∆₁ = AB∞ and ∆₂ = CB∞. Then, by the previous case, we can immediately write

  area(∆₁) = π − α − (β + δ),
  area(∆₂) = π − δ − (π − γ) = γ − δ.

So we have

  area(∆) = area(∆₁) − area(∆₂) = π − α − β − γ,

as required.

Similar to the spherical case, we have hyperbolic sine and cosine rules. For example, we have

Theorem (Hyperbolic cosine rule). In a triangle with sides a, b, c and angles α, β, γ, we have

  cosh c = cosh a cosh b − sinh a sinh b cos γ.

Proof. See example sheet 2.

Recall that in S², any two lines meet (in two points). In the Euclidean plane R², two lines meet (in one point) iff they are not parallel. Before we move on to the hyperbolic case, we first make a definition.

Definition (Parallel lines). We use the disk model of the hyperbolic plane. Two hyperbolic lines are parallel iff they meet only at the boundary of the disk (at |z| = 1).

Definition (Ultraparallel lines). Two hyperbolic lines are ultraparallel if they don't meet anywhere in {|z| ≤ 1}.

In the Euclidean plane, we have the parallel axiom: given a line ℓ and a point P ∉ ℓ, there exists a unique line ℓ′ containing P with ℓ ∩ ℓ′ = ∅. This fails in both S² and the hyperbolic plane — but for very different reasons! In S², there are no such parallel lines. In the hyperbolic plane, there are many parallel lines. There is a deeper reason for why this is the case, which we will come to at the very end of the course.

4.6 Hyperboloid model

Recall we said there is no way to view the hyperbolic plane as a subset of R³, and hence we
need to mess with Riemannian metrics. However, it turns out we can indeed embed the hyperbolic plane in R³, if we give R³ a different metric!

Definition (Lorentzian inner product). The Lorentzian inner product on R³ has the matrix

  ( 1 0  0 )
  ( 0 1  0 )
  ( 0 0 −1 ).

This is less arbitrary than it seems. Recall from IB Linear Algebra that we can always pick a basis in which a non-degenerate symmetric bilinear form has diagonal entries 1 and −1. If we further identify A and −A as the "same" symmetric bilinear form, then this is the only other possibility left.

Thus, we obtain the quadratic form given by

  q(x) = ⟨x, x⟩ = x² + y² − z².

We now define the 2-sheeted hyperboloid as

  S = {x ∈ R³ : q(x) = −1},

given explicitly by the formula

  x² + y² = z² − 1.

We don't actually need the two sheets, so we define

  S⁺ = S ∩ {z > 0}.

We let π: S⁺ → D ⊆ C = R² be the stereographic projection from (0, 0, −1), given by

  π(x, y, z) = (x + iy)/(1 + z) = u + iv.

We put r² = u² + v². Doing some calculations, we can show that:

(i) We always have r < 1, as promised.

(ii) The stereographic projection π is invertible, with

  σ(u, v) = π⁻¹(u, v) = (2u, 2v, 1 + r²)/(1 − r²) ∈ S⁺.

(iii) The tangent plane to S⁺ at P is spanned by σ_u = ∂σ/∂u and σ_v = ∂σ/∂v. We can explicitly compute these to be

  σ_u = (2/(1 − r²)²) (1 + u² − v², 2uv, 2u),
  σ_v = (2/(1 − r²)²) (2uv, 1 + v² − u², 2v).

We restrict the Lorentzian inner product ⟨·, ·⟩ to the span of σ_u, σ_v, and we get a symmetric bilinear form at each (u, v) ∈ D given by

  E du² + 2F du dv + G dv²,

where

  E = ⟨σ_u, σ_u⟩ = 4/(1 − r²)²,
  F = ⟨σ_u, σ_v⟩ = 0,
  G = ⟨σ_v, σ_v⟩ = 4/(1 − r²)².

We have thus recovered the Poincaré disk model of the hyperbolic plane.

5 Smooth embedded surfaces (in R³)

5.1 Smooth embedded surfaces

So far, we have been studying some specific geometries, namely Euclidean, spherical and hyperbolic geometry. From now on, we move towards greater generality, and study arbitrary surfaces. We will mostly work with surfaces that are smoothly embedded as subsets of R³, for which we can develop notions parallel to those we have had before, such as Riemannian metrics and lengths. At the very end of the course, we will move away from the needless restriction that the surface is embedded in R³, and study surfaces that just are.

Definition (Smooth embedded surface). A set S ⊆ R³ is a (parametrized) smooth embedded surface if every point P ∈ S has an open neighbourhood U ⊆ S (with the subspace topology on S ⊆ R³) and a map σ: V → U from an open set V ⊆ R² to U such that, writing σ(u, v) = (x(u, v), y(u, v), z(u, v)),

(i) σ is a homeomorphism (i.e. a bijection with continuous inverse);
(ii) σ is C∞ (smooth) on V (i.e. has continuous partial derivatives of all orders);
(iii) for all Q ∈ V, the partial derivatives σ_u(Q) and σ_v(Q) are linearly independent.

Recall that

  σ_u(Q) = ∂σ/∂u (Q) = (∂x/∂u, ∂y/∂u, ∂z/∂u)ᵀ (Q) = dσ_Q(e₁),

where e₁, e₂ is the standard basis of R². Similarly, we have

  σ_v(Q) = dσ_Q(e₂).

We define some terminology.

Definition (Smooth coordinates). We say (u, v) are smooth coordinates on U ⊆ S.

Definition (Tangent space). The subspace of R³ spanned by σ_u(Q), σ_v(Q) is the tangent space T_P S to S at P = σ(Q).

Definition (Smooth parametrisation). The function σ is a smooth parametrisation of U ⊆ S.

Definition (Chart). The function σ⁻¹: U → V is a chart of U.

Proposition. Let σ: V → U and σ̃: Ṽ → U be two C∞ parametrisations of a surface. Then the homeomorphism

  ϕ = σ⁻¹ ∘ σ̃: Ṽ → V

is in fact a diffeomorphism.

This proposition says that any two parametrizations of the same surface are compatible.

Proof. Since differentiability is a local property, it suffices to consider ϕ on some small neighbourhood of a point in Ṽ. Pick our favourite point (ũ₀, ṽ₀) ∈ Ṽ, and let (u₀, v₀) = ϕ(ũ₀, ṽ₀) ∈ V. We know σ = σ(u, v) is differentiable, so it has a Jacobian matrix

  ( x_u x_v )
  ( y_u y_v )
  ( z_u z_v ).

By definition, this matrix has rank two at each point. wlog, we assume the first two rows are linearly independent, so

  det ( x_u x_v ; y_u y_v ) ≠ 0

at (u₀, v₀). We define a new function

  F(u, v) = (x(u, v), y(u, v)).

Now the inverse function theorem applies, so F has a local C∞ inverse: there are open neighbourhoods N of (u₀, v₀) and N′ of F(u₀, v₀) in R² such that F: N → N′ is a diffeomorphism.
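The rank-two condition on parametrisations can be checked concretely; a convenient test case is the hyperboloid parametrisation σ from the previous section, for which we can also verify the claimed first fundamental form under the Lorentzian inner product. A symbolic sketch (my own verification with sympy, not part of the notes):

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
r2 = u**2 + v**2
# parametrisation of the upper sheet S+ by the open unit disk
sigma = sp.Matrix([2*u, 2*v, 1 + r2]) / (1 - r2)

# Lorentzian inner product <p, q> = p1 q1 + p2 q2 - p3 q3
lor = lambda p, q: p[0]*q[0] + p[1]*q[1] - p[2]*q[2]

# sigma lies on q(x) = -1
assert sp.simplify(lor(sigma, sigma) + 1) == 0

su, sv = sigma.diff(u), sigma.diff(v)
# rank-two condition: sigma_u, sigma_v linearly independent
assert su.row_join(sv).rank() == 2

E = sp.simplify(lor(su, su))
F = sp.simplify(lor(su, sv))
G = sp.simplify(lor(sv, sv))
assert F == 0
assert sp.simplify(E - 4/(1 - r2)**2) == 0
assert sp.simplify(G - 4/(1 - r2)**2) == 0
```

So σ is an honest rank-two parametrisation in the sense just defined, and restricting the ambient (here Lorentzian) inner product to its tangent planes recovers the disk metric, exactly as claimed.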
(u0, v0) ∈ N and F(u0, v0) ∈ N′ ⊆ R2 such that F : N → N′ is a diffeomorphism. Writing π : σ(N) → N′ for the projection π(x, y, z) = (x, y), we can put these things in a commutative diagram: σ maps N onto σ(N), π maps σ(N) onto N′, and F = π ∘ σ.

We now let Ñ = σ̃⁻¹(σ(N)) and F̃ = π ∘ σ̃, which is yet again smooth. This gives a larger commutative diagram, in which σ̃ maps Ñ into σ(N) and F̃ : Ñ → N′. Then we have

φ = σ⁻¹ ∘ σ̃ = σ⁻¹ ∘ π⁻¹ ∘ π ∘ σ̃ = F⁻¹ ∘ F̃,

which is smooth, since F⁻¹ and F̃ are. Hence φ is smooth everywhere. By symmetry of the argument, φ⁻¹ is smooth as well. So this is a diffeomorphism.

A more practical result is the following:

Corollary. The tangent space TP S is independent of parametrization.

Proof. We know

σ̃(ũ, ṽ) = σ(φ1(ũ, ṽ), φ2(ũ, ṽ)).

We can then compute the partial derivatives as

σ̃ũ = φ1,ũ σu + φ2,ũ σv,
σ̃ṽ = φ1,ṽ σu + φ2,ṽ σv.

Here the transformation is given by the Jacobian matrix

J(φ) = [φ1,ũ φ1,ṽ; φ2,ũ φ2,ṽ].

This is invertible since φ is a diffeomorphism. So (σ̃ũ, σ̃ṽ) and (σu, σv) are different bases of the same two-dimensional vector space. So done.

Note that we have

σ̃ũ × σ̃ṽ = det(J(φ)) σu × σv.

So we can define

Definition (Unit normal). The unit normal to S at Q ∈ S is

N = NQ = (σu × σv)/‖σu × σv‖,

which is well-defined up to a sign.

Often, instead of a parametrization σ : V ⊆ R2 → U ⊆ S, we want the function the other way round. We call this a chart.

Definition (Chart). Let S ⊆ R3 be an embedded surface. The map θ = σ⁻¹ : U ⊆ S → V ⊆ R2 is a chart.

Example. Let S2 ⊆ R3 be a sphere. The two stereographic projections from ±e3 give two charts, whose domains together cover S2.

Similar to what we did for the sphere, given a chart θ : U → V ⊆ R2, we can induce a Riemannian metric on V. We first get an inner product on the tangent space as follows:

Definition (First fundamental form). If S ⊆ R3 is an embedded surface, then each TQS for Q ∈ S has an inner product from R3, i.e. we have a family of inner products, one for each point. We call this family the first fundamental form.

This is a theoretical entity, and is more easily worked with when we have a chart. Suppose we have a parametrization σ : V → U ⊆ S, a, b ∈ R2, and P ∈ V. We can then define

⟨a, b⟩P = ⟨dσP(a), dσP(b)⟩R3.

With respect to the standard basis e1, e2 ∈ R2, we can write the first fundamental form as

E du² + 2F du dv + G dv²,

where

E = ⟨σu, σu⟩ = ⟨e1, e1⟩P, F = ⟨σu, σv⟩ = ⟨e1, e2⟩P, G = ⟨σv, σv⟩ = ⟨e2, e2⟩P.

Thus, this induces a Riemann
ian metric on V. This is also called the first fundamental form corresponding to σ. This is what we do in practical examples.

We will assume the following property, which we are not bothered to prove.

Proposition. If we have two parametrizations related by σ̃ = σ ∘ φ : Ṽ → U, then φ : Ṽ → V is an isometry of Riemannian metrics (on V and Ṽ).

Definition (Length and energy of curve). Given a smooth curve Γ : [a, b] → S ⊆ R3, the length of Γ is

length(Γ) = ∫_a^b ‖Γ′(t)‖ dt.

The energy of the curve is

energy(Γ) = ∫_a^b ‖Γ′(t)‖² dt.

We can think of the energy as something like the kinetic energy of a particle along the path, except that we are missing the factor of ½m, because they are annoying.

How does this work with parametrizations? For the sake of simplicity, we assume Γ([a, b]) ⊆ U for some parametrization σ : V → U. Then we define the new curve

γ = σ⁻¹ ∘ Γ : [a, b] → V.

This curve has two components, say γ = (γ1, γ2). Then we have

Γ′(t) = (dσ)γ(t)(γ̇1(t)e1 + γ̇2(t)e2) = γ̇1 σu + γ̇2 σv,

and thus

‖Γ′(t)‖ = ⟨γ̇, γ̇⟩P^{1/2} = (E γ̇1² + 2F γ̇1γ̇2 + G γ̇2²)^{1/2}.

So we get

length(Γ) = ∫_a^b (E γ̇1² + 2F γ̇1γ̇2 + G γ̇2²)^{1/2} dt.

Similarly, the energy is given by

energy(Γ) = ∫_a^b (E γ̇1² + 2F γ̇1γ̇2 + G γ̇2²) dt.

This agrees with what we
’ve had for Riemannian metrics.

Definition (Area). Given a smooth C∞ parametrization σ : V → U ⊆ S ⊆ R3, and a region T ⊆ U, we define the area of T to be

area(T) = ∫_{θ(T)} √(EG − F²) du dv,

whenever the integral exists (where θ = σ⁻¹ is a chart).

Proposition. The area of T is independent of the choice of parametrization. So it extends to more general subsets T ⊆ S, not necessarily living in the image of a parametrization.

Proof. Exercise!

Note that in examples, σ(V) = U often is a dense set in S. For example, if we work with the sphere, we can easily parametrize everything but the poles. In that case, it suffices to use just one parametrization σ for area(S). Note also that areas are invariant under isometries.

5.2 Geodesics

We now come to the important idea of a geodesic. We will first define these for Riemannian metrics, and then generalize it to general embedded surfaces.

Definition (Geodesic). We let V ⊆ R2 be open, with a Riemannian metric

E du² + 2F du dv + G dv²

on V, and let

γ = (γ1, γ2) : [a, b] → V

be a smooth curve. We say γ is a geodesic with respect to the Riemannian metric if it satisfies

d/dt (E γ̇1 + F γ̇2) = ½ (Eu γ̇1² + 2Fu γ̇1γ̇2 + Gu γ̇2²),
d/dt (F γ̇1 + G γ̇2) = ½ (Ev γ̇1² + 2Fv γ̇1γ̇2 + Gv γ̇2²)

for all t ∈ [a, b]. These equations are known as the geodesic ODEs.

What exactly do these equations mean? We will soon show that these are curves that minimize (more precisely, are stationary points of) energy. To do so, we need to come up with a way of describing what it means for γ to minimize energy among all possible curves.

Definition (Proper variation). Let γ : [a, b] → V be a smooth curve, and let γ(a) = p and γ(b) = q. A proper variation of γ is a C∞ map

h : [a, b] × (−ε, ε) ⊆ R2 → V

such that

h(t, 0) = γ(t) for all t ∈ [a, b],

and

h(a, τ) = p, h(b, τ) = q for all |τ| < ε,

and such that

γτ = h(·, τ) : [a, b] → V

is a C∞ curve for each fixed τ ∈ (−ε, ε).

Proposition. A smooth curve γ satisfies the geodesic ODEs if and only if γ is a stationary point of the energy function for all proper variations, i.e. if we define the function

E(τ) = energy(γτ) : (−ε, ε) → R,

then dE/dτ |τ=0 = 0.

Proof. We let γ(t) = (u(t), v(t)). Then we have

energy(γ) = ∫_a^b (E(u, v) u̇² + 2F(u, v) u̇v̇ + G(u, v) v̇²) dt = ∫_a^b I(u, v, u̇, v̇) dt.

We consider this as a function of four variables u, u̇, v, v̇, which are not necessarily related to one another. From the calculus of variations, we know γ is stationary if and only if

d/dt (∂I/∂u̇) = ∂I/∂u, d/dt (∂I/∂v̇) = ∂I
/∂v.

The first equation gives us

d/dt (2(E u̇ + F v̇)) = Eu u̇² + 2Fu u̇v̇ + Gu v̇²,

which is exactly the first geodesic ODE after dividing by 2. Similarly, the second equation gives the other geodesic ODE. So done.

Since the definition of a geodesic involves only derivatives, which are local properties, we can easily generalize the definition to arbitrary embedded surfaces.

Definition (Geodesic on smooth embedded surface). Let S ⊆ R3 be an embedded surface. Let Γ : [a, b] → S be a smooth curve in S, and suppose there is a parametrization σ : V → U ⊆ S such that im Γ ⊆ U. We let θ = σ⁻¹ be the corresponding chart. We define a new curve in V by

γ = θ ∘ Γ : [a, b] → V.

Then we say Γ is a geodesic on S if and only if γ is a geodesic with respect to the induced Riemannian metric.

For a general Γ : [a, b] → S, we say Γ is a geodesic if for each point t0 ∈ [a, b], there is a neighbourhood Ṽ of t0 such that im Γ|Ṽ lies in the domain of some chart, and Γ|Ṽ is a geodesic in the previous sense.

Corollary. If a curve Γ minimizes the energy among all curves from P = Γ(a) to Q = Γ(b), then Γ is a geodesic.

Proof. For any a1, b1 such that a ≤ a1 ≤ b1 ≤ b, we let Γ1 = Γ|[a1,b1]. Then Γ1 also minimizes the energy between a1 and b1 for all curves between Γ(a1) and Γ(b1). If we pick a1, b1 such that Γ([a1, b1]) ⊆ U for some parametrized neighbourhood U, then Γ1 is a geodesic by the previous proposition. Since the parametrized neighbourhoods cover S, at each point t0 ∈ [a, b], we can find such a1, b1. So done.

This is good, but we can do better. To do so, we need a lemma.

Lemma. Let V ⊆ R2 be an open set with a Riemannian metric, and let P, Q ∈ V. Consider C∞ curves γ : [0, 1] → V such that γ(0) = P, γ(1) = Q. Then such a γ will minimize the energy (and therefore is a geodesic) if and only if γ minimizes the length and has constant speed.

This means being a geodesic is almost the same as minimizing length. It’s just that to be a geodesic, we have to parametrize the curve carefully.

Proof. Recall the Cauchy-Schwarz inequality for continuous functions f, g ∈ C[0, 1], which says

(∫_0^1 f(x)g(x) dx)² ≤ (∫_0^1 f(x)² dx)(∫_0^1 g(x)² dx),

with equality iff g = λf for some λ ∈ R, or f = 0, i.e. f and g are linearly dependent.

We now put f = 1 and g = ‖γ̇‖. Then Cauchy-Schwarz says

(length γ)² ≤ energy(γ),

with equality if and only if ‖γ̇‖ is constant. From this, we see that a curve of minimal energy must have constant speed. Then it follows that minimizing energy is the same as minimizing length if we move at constant speed.

Is the converse true? Are all geodesics length-minimizing? The answer is “almost”. We have to be careful with our conditions in order for it to be true.

Proposition. A curve Γ is a geodesic if and only if it minimizes the energy locally, and this happens if it minimizes the length locally and has constant speed.

Here minimizing a quantity locally means for every t ∈ [a, b], there is some ε > 0 such that Γ|[t−ε,
t+ε] minimizes the quantity.

We will not prove this. Local minimization is the best we can hope for, since the definition of a geodesic involves differentiation, and derivatives are local properties.

Proposition. In fact, the geodesic ODEs imply that ‖Γ′(t)‖ is constant.

We will also not prove this, but in the special case of the hyperbolic plane, we can check it directly. This is an exercise on the third example sheet.

A natural question to ask is: if we pick a point P and a tangent direction a, can we find a geodesic through P whose tangent vector at P is a? In the geodesic equations, if we expand out the derivative, we can write the equations as

[E F; F G] (γ̈1, γ̈2) = something.

Since the Riemannian metric is positive definite, we can invert the matrix and get an equation of the form

(γ̈1, γ̈2) = H(γ1, γ2, γ̇1, γ̇2)

for some function H. From the general theory of ODEs in IB Analysis II, subject to some sensible conditions, given any P = (u0, v0) ∈ V and a = (p0, q0) ∈ R2, there is a unique geodesic curve γ(t) defined for |t| < ε with γ(0) = P and γ̇(0) = a. In other words, we can choose a point and a direction, and then there is a geodesic going that way.

Note that we need the restriction that γ is defined only for |t| < ε, since we might run off to the boundary in finite time. So we need not be able to define it for all t ∈ R.

How is this result useful? We can use the uniqueness part to find geodesics. We can try to find some family of curves C that are length-minimizing. To prove that we have found all of them, we can show that given any point P ∈ V and direction a,
there is some curve in C through P with direction a.

Example. Consider the sphere S2. Recall that arcs of great circles are length-minimizing, at least locally. So these are indeed geodesics. Are these all? We know that for any P ∈ S2 and any tangent direction, there exists a unique great circle through P in this direction. So there cannot be any other geodesics on S2, by uniqueness.

Similarly, we find that hyperbolic lines are precisely the geodesics on the hyperbolic plane.

We have defined these geodesics as solutions of certain ODEs. It is possible to show that the solutions of these ODEs depend C∞-smoothly on the initial conditions. We shall use this to construct, around each point P ∈ S of a surface, geodesic polar coordinates. The idea is that to specify a point near P, we can just say “go in direction θ, and then move along the corresponding geodesic for time r”.

We can make this (slightly) more precise, and provide a quick sketch of how we can do this formally. We let ψ : U → V be some chart with P ∈ U ⊆ S. We wlog ψ(P) = 0 ∈ V ⊆ R2. We denote by θ the polar angle (coordinate), defined on V \ {0}. Then for any given θ, there is a unique geodesic γθ : (−ε, ε) → V such that γθ(0) = 0, and γ̇θ(0) is the unit vector in the θ direction. We define

σ(r, θ) = γθ(r)

whenever this is defined. It is possible to check that σ is C∞-smooth.

While we would like to say that σ gives us a parametrization, this is not exactly true, since we cannot define θ continuously. Instead, for each θ0, we define the region

Wθ0 = {(r, θ) : 0 < r < ε, θ0 < θ < θ0 + 2π} ⊆ R2.

Writing V0 for the image of Wθ0 under σ, the composition

Wθ0 → V0 → U0 ⊆ S

(first σ, then ψ⁻¹) is a valid parametrization. Thus σ⁻¹ ∘ ψ is a valid chart. The coordinates (r, θ) given by this chart are the geodesic polar coordinates. We have the following lemma:

Lemma (Gauss’ lemma). The geodesic circles {r = r0} ⊆ W are orthogonal to their radii, i.e. to γθ, and the Riemannian metric (first fundamental form) on W is

dr² + G(r, θ) dθ².

This is why we like geodesic polar coordinates. Using these, we can put the Riemannian metric into a very simple form. Of course, this is just a sketch of what really happens, and there are many holes to fill in. For more details, go to IID Differential Geometry.

Definition (Atlas). An atlas is a collection of charts covering the whole surface.

The collection of all geodesic polars about all points gives us an example. Other interesting atlases are left as an exercise on example sheet 3.

5.3 Surfaces of revolution

So far, we do not have many examples of surfaces. We now describe a nice way of obtaining surfaces: we obtain a surface S by rotating a plane curve η around a line. We may wlog assume that coordinates are chosen so that the line is the z-axis, and η lies in the x-z plane. More precisely, we let

η : (a, b) → R3,

and write η(u) = (f(u), 0, g(u)). Note that it is possible that a = −∞ and/or b = ∞. We require ‖η′(u)‖ = 1 for all u. This is sometimes known as parametrization by arclength. We also require f(u) > 0 for all u, or else things won’t make sense. Finally, we require that η is a homeomorphism onto its image. This is more than requiring η to be injective; it eliminates things like a curve whose image comes back and accumulates onto itself.
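These requirements are easy to test for a concrete generating curve. As a sketch (the torus-generating circle f(u) = 2 + cos u, g(u) = sin u is our own choice of example, not one from the notes), we can verify the arclength condition f′² + g′² = 1 and the positivity of f numerically:

```python
import math

# Our own example of a generating curve: the circle
#   f(u) = 2 + cos u,  g(u) = sin u,
# which generates an embedded torus when rotated about the z-axis.

def f(u):
    return 2.0 + math.cos(u)

def g(u):
    return math.sin(u)

def speed(u, h=1e-6):
    """||eta'(u)|| computed by central finite differences."""
    fp = (f(u + h) - f(u - h)) / (2 * h)
    gp = (g(u + h) - g(u - h)) / (2 * h)
    return math.hypot(fp, gp)  # sqrt(f'^2 + g'^2)

# Check unit speed and f > 0 at a sample of points.
for k in range(12):
    u = 2 * math.pi * k / 12
    assert abs(speed(u) - 1.0) < 1e-5
    assert f(u) > 0
```

Any unit-speed curve with f > 0 passes the same check, while a non-arclength parametrization of the same circle would generally fail the speed assertion.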
S is the image of the following map:

σ(u, v) = (f(u) cos v, f(u) sin v, g(u))

for a < u < b and 0 ≤ v ≤ 2π. This is not exactly a parametrization, since it is not injective (v = 0 and v = 2π give the same points). To rectify this, for each α ∈ R, we define

σα : (a, b) × (α, α + 2π) → S,

given by the same formula, and this is a homeomorphism onto the image. The proof of this is left as an exercise for the reader.

Assuming this, we now show that this is indeed a parametrization. It is evidently smooth, since f and g both are. To show this is a parametrization, we need to show that the partial derivatives are linearly independent. We can compute the partial derivatives and show that their cross product is non-zero. We have

σu = (f′ cos v, f′ sin v, g′),
σv = (−f sin v, f cos v, 0).

We then compute the cross product as

σu × σv = (−f g′ cos v, −f g′ sin v, f f′).

So we have

‖σu × σv‖² = f²(g′² + f′²) = f² ≠ 0.

Thus every σα is a valid parametrization, and S is a valid embedded surface.

More generally, we can allow S to be covered by several families of parametrizations of type σα, i.e. we can consider more than one curve or more than one axis of rotation. This allows us to obtain, say, S2 or the embedded torus (in the old sense, we cannot view S2 as a surface of revolution in the obvious way, since we will be missing the poles).

Definition (Parallels and meridians). On a surface of revolution, parallels are curves of the form γ(t) = σ(u0, t) for fixed u0. Meridians are curves of the form γ(t) = σ(t, v0) for fixed v0.

These are generalizations of the notions of longitude and latitude (in some order) on Earth.

In a general surface of revolution, we can compute the first fundamental form with respect to σ as

E = ‖σu‖² = f′² + g′² = 1, F = σu · σv = 0, G = ‖σv‖² = f².

So its first fundamental form is also of the simple form, like the geodesic polar coordinates. Putting these explicit expressions into the geodesic formula, we find that the geodesic equations are

ü = f (df/du) v̇², d/dt (f² v̇) = 0.

Proposition. We assume ‖γ̇‖ = 1, i.e. u̇² + f²(u) v̇² = 1.

(i) Every unit-speed meridian is a geodesic.

(ii) A (unit-speed) parallel will be a geodesic if and only if

df/du (u0) = 0,

i.e. u0 is a critical point of f.

Proof.

(i) In a meridian, v = v0 is constant. So the second equation holds. Also, we know ‖γ̇‖ = |u̇| = 1. So ü = 0. So the first geodesic equation is also satisfied.

(ii) Since u = u0 is constant, we know f(u0)² v̇² = 1. So v̇ = ±1/f(u0) is constant. So the second equation holds. Since v̇ and f are non-zero, the first equation is satisfied if and only if df/du = 0.

5.4 Gaussian curvature

We will next consider the notion of curvature. Intuitively, Euclidean space is “flat”, while the sphere is “curved”. In this section, we will define a quantity known as the curvature that characterizes how curved a surface is.

The definition itself is not too intuitive. So what we will do is first study the curvature of curves, which is something we already know from, say, IA
Vector Calculus. Afterwards, we will make an analogous definition for surfaces.

Definition (Curvature of curve). We let η : [0, ℓ] → R2 be a curve parametrized with unit speed, i.e. ‖η′‖ = 1. The curvature κ at the point η(s) is determined by

η″ = κn,

where n is the unit normal, chosen so that κ is non-negative.

If f : [c, d] → [0, ℓ] is a smooth function with f′(t) > 0 for all t, then we can reparametrize our curve to get

γ(t) = η(f(t)).

We can then find

γ̇(t) = f′(t) η′(f(t)).

So we have

‖γ̇‖² = f′(t)².

We also have, by definition,

η″(f(t)) = κn,

where κ is the curvature at γ(t). On the other hand, Taylor’s theorem tells us

γ(t + Δt) − γ(t) = f′(t) η′(f(t)) Δt + ½ (f″(t) η′(f(t)) + f′(t)² η″(f(t))) (Δt)² + higher order terms.

Now we know by assumption that

η′ · η′ = 1.

Differentiating thus gives

η″ · η′ = 0.

Hence we get

η′ · n = 0.

We now take the dot product of the Taylor expansion with n, killing off all the η′ terms. Then we get

(γ(t + Δt) − γ(t)) · n = ½ κ ‖γ̇‖² (Δt)² + ···, (∗)

where κ is the curvature. This is the normal deviation (γ(t + Δt) − γ(t)) · n of the curve from its tangent line at γ(t), as shown in the picture in the notes.

We can also compute

‖γ(t + Δt) − γ(t)‖² = ‖γ̇‖² (Δt)² + ···. (†)

So we find that ½κ is the ratio of the leading (quadratic) terms of (∗) and (†), and is independent of the choice of parametrization.

We now try to apply this thinking to embedded surfaces. We let σ : V → U ⊆ S be a parametrization of a surface S (with V ⊆ R2 open). We apply Taylor’s theorem to σ to get

σ(u + Δu, v + Δv) − σ(u, v) = σu Δu + σv Δv + ½ (σuu (Δu)² + 2σuv ΔuΔv + σvv (Δv)²) + ···.

We now measure the deviation from the tangent plane, i.e.

(σ(u + Δu, v + Δv) − σ(u, v)) · N = ½ (L(Δu)² + 2M ΔuΔv + N(Δv)²) + ···,

where

L = σuu · N, M = σuv · N, N = σvv · N.

Note that N and N are different things. N is the unit normal, while N is the expression given above. We can also compute

‖σ(u + Δu, v + Δv) − σ(u, v)‖² = E(Δu)² + 2F ΔuΔv + G(Δv)² + ···.

We now define the second fundamental form as follows:

Definition (Second fundamental form). The second fundamental form on V with σ : V → U ⊆ S for S is

L du² + 2M du dv + N dv²,

where

L = σuu · N, M = σuv · N, N = σvv · N.

Definition (Gaussian curvature). The Gaussian curvature K of a surface S at P ∈ S is the ratio of the determinants of the two fundamental forms, i.e.

K = (LN − M²)/(EG − F²).
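Since K is built entirely out of the two fundamental forms, we can sanity-check the definition numerically. The following sketch (our own illustration, not part of the notes) computes K for the unit sphere with the parametrization σ(u, v) = (sin u cos v, sin u sin v, cos u), taking all derivatives by finite differences; the answer should be 1 everywhere:

```python
import math

def sigma(u, v):
    # Parametrization of the unit sphere (away from the poles).
    return (math.sin(u) * math.cos(v),
            math.sin(u) * math.sin(v),
            math.cos(u))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def gauss_curvature(u, v, h=1e-3):
    # First partial derivatives sigma_u, sigma_v (central differences).
    su = tuple((p - q) / (2 * h) for p, q in zip(sigma(u + h, v), sigma(u - h, v)))
    sv = tuple((p - q) / (2 * h) for p, q in zip(sigma(u, v + h), sigma(u, v - h)))
    # Second partial derivatives sigma_uu, sigma_vv, sigma_uv.
    suu = tuple((p - 2 * m + q) / h ** 2
                for p, m, q in zip(sigma(u + h, v), sigma(u, v), sigma(u - h, v)))
    svv = tuple((p - 2 * m + q) / h ** 2
                for p, m, q in zip(sigma(u, v + h), sigma(u, v), sigma(u, v - h)))
    suv = tuple((a - b - c + d) / (4 * h ** 2)
                for a, b, c, d in zip(sigma(u + h, v + h), sigma(u + h, v - h),
                                      sigma(u - h, v + h), sigma(u - h, v - h)))
    # Unit normal N = (sigma_u x sigma_v) / |sigma_u x sigma_v|.
    n = cross(su, sv)
    norm = math.sqrt(dot(n, n))
    n = tuple(x / norm for x in n)
    # First and second fundamental forms, then K = (LN - M^2)/(EG - F^2).
    E, F, G = dot(su, su), dot(su, sv), dot(sv, sv)
    L, M, N = dot(suu, n), dot(suv, n), dot(svv, n)
    return (L * N - M * M) / (E * G - F * F)

assert abs(gauss_curvature(1.0, 0.5) - 1.0) < 1e-4
```

Swapping in a parametrization of a saddle, say (u, v, u² − v²), makes the same function return a negative value near the origin, matching the sign discussion below.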
This is valid since the first fundamental form is positive-definite, and in particular has non-zero determinant. We can imagine that K is a capital κ, but it looks just like a normal capital K.

Note that K > 0 means the second fundamental form is definite (i.e. either positive definite or negative definite). If K < 0, then the second fundamental form is indefinite. If K = 0, then the second fundamental form is semi-definite (but not definite).

Example. Consider the unit sphere S2 ⊆ R3. This has K > 0 at each point. We can compute this directly, or we can, for the moment, pretend that M = 0. Then by symmetry, N = L. So K > 0.

On the other hand, we can imagine a Pringle crisp (also known as a hyperbolic paraboloid), and this has K < 0.

More examples are left on the third example sheet. For example, we will see that the embedded torus in R3 has points at which K > 0, some where K < 0, and others where K = 0.

It can be deduced, similarly to the case of curves, that K is independent of parametrization.

Recall that around each point, we can get some nice coordinates where the first fundamental form looks simple. We might expect the second fundamental form to look simple as well. That is indeed true, but we need to do some preparation first.

Proposition. We let

N = (σu × σv)/‖σu × σv‖

be our unit normal for a surface patch. Then at each point, we have

Nu = aσu + bσv, Nv = cσu + dσv,

and in particular

K = ad − bc.

Proof. Note that

N · N = 1.

Differentiating gives

N · Nu = 0 = N · Nv.

Since σu, σv and N form a basis of R3, and Nu, Nv are orthogonal to N, there are some a, b, c, d such that

Nu = aσu + bσv, Nv = cσu + dσv.

By definition of N, we have

N · σu = 0.

Differentiating gives

Nu · σu + N · σuu = 0.

So we know

Nu · σu = −L.

Similarly, we find

Nu · σv = −M = Nv · σu, Nv · σv = −N.

We dot our original expressions for Nu, Nv in terms of a, b, c, d with σu and σv to obtain

−L = aE + bF, −M = aF + bG,
−M = cE + dF, −N = cF + dG.

Taking determinants, we get

LN − M² = (ad − bc)(EG − F²),

which is the formula for the curvature.

If we have nice coordinates on S, then we get a nice formula for the Gaussian curvature K.

Theorem. Suppose for a parametrization σ : V → U ⊆ S ⊆ R3, the first fundamental form is given by

du² + G(u, v) dv²

for some G ∈ C∞(V). Then the Gaussian curvature is given by

K = −(√G)uu / √G.

In particular, we do not need to compute the second fundamental form of the surface.

This is purely a technical result.

Proof. We set

e = σu, f = σv/√G.

Then e and f are unit and orthogonal. We also let N = e × f be a third unit vector orthogonal to e and f, so that they form a basis of R3.

Using the notation of the previous proposition, we have

Nu × Nv = (aσu + bσv) × (cσu + dσv) = (ad − bc) σu × σv = K σu × σv = K √G (e × f) = K √G N.

Thus we know

K √G = (Nu × Nv) · N = (Nu × Nv) · (e × f) = (Nu · e)(Nv · f) − (Nu · f)(Nv · e).

Since N · e = 0, we know

Nu · e + N · eu = 0.

Hence to evaluate the expression
above, it suffices to compute N · eu instead of Nu · e.

Since e · e = 1, we know

e · eu = 0 = e · ev.

So we can write

eu = αf + λ1N, ev = βf + λ2N.

Similarly, we have

fu = −α̃e + µ1N, fv = −β̃e + µ2N.

Our objective now is to find the coefficients µi, λi, and then

K √G = λ1µ2 − λ2µ1.

Since we know e · f = 0, differentiating gives

eu · f + e · fu = 0, ev · f + e · fv = 0.

Thus we get

α̃ = α, β̃ = β.

But we have

α = eu · f = (σuu · σv)/√G = ((σu · σv)u − ½ (σu · σu)v) (1/√G) = 0,

since σu · σv = 0 and σu · σu = 1. So α vanishes.

Also, we have

β = ev · f = (σuv · σv)/√G = ½ Gu/√G = (√G)u.

Finally, we can use our equations again to find

λ1µ2 − λ2µ1 = eu · fv − ev · fu = (e · fv)u − (e · fu)v = −β̃u + α̃v = −(√G)uu.

So we have

K √G = −(√G)uu,

as required. Phew.

Observe, for σ as in the previous theorem, K depends only on the first fundamental form, not on the second fundamental form. When Gauss discovered this, he was so impressed that he called it the Theorema Egregium, which means the Remarkable Theorem.

Corollary (Theorema Egregium). If S1 and S2 have locally isometric charts, then K is locally the same.

Proof. We know that this corollary is valid under the assumption of the previous theorem, i.e. the existence of a parametrization σ of the surface S such that the first fundamental form is

du² + G(u, v) dv².

Suitable σ include, for each point P ∈ S, the geodesic polars (ρ, θ). However, P itself is not in the chart, i.e. P ∉ im σ, and there is no guarantee that there will be some geodesic polar chart that covers P. To solve this problem, we notice that K is a C∞ function on S, and in particular continuous. So we can determine the curvature at P as

K(P) = lim_{ρ→0} K(ρ, θ).

So done.

Note also that every surface of revolution has such a suitable parametrization, as we have previously seen explicitly.

6 Abstract smooth surfaces

While embedded surfaces are quite general surfaces, they do not include, say, the hyperbolic plane. We can generalize our notions by considering surfaces “without an embedding in R3”. These are known as abstract surfaces.

Definition (Abstract smooth surface). An abstract smooth surface S is a metric space (or Hausdorff (and second-countable) topological space) equipped with homeomorphisms θi : Ui → Vi, where Ui ⊆ S and Vi ⊆ R2 are open sets, such that

(i) S = ⋃i Ui;

(ii) for any i, j, the transition map

φij = θj ∘ θi⁻¹ : θi(Ui ∩ Uj) → θj(Ui ∩ Uj)

is a diffeomorphism.

Note that θi(Ui ∩ Uj) and θj(Ui ∩ Uj) are open sets in R2, so it makes sense to ask whether such a map is a diffeomorphism.

Like for embedded surfaces, the maps θi are called charts, and a collection of θi’s satisfying our conditions is an atlas, etc.

Definition (Riemannian metric on abstract surface). A Riemannian metric on an abstract surface is given by Riemannian metrics on each Vi = θi(Ui
) subject to the compatibility condition that for all i, j, the transition map φij is an isometry, i.e.

⟨dφP(a), dφP(b)⟩_{φ(P)} = ⟨a, b⟩_P.

Note that the two sides of this equation are computed with the Riemannian metrics on two different charts, one on Vi and one on Vj.

Then we can define lengths, areas, energies on an abstract surface S. It is clear that every embedded surface is an abstract surface, by forgetting that it is embedded in R3.

Example. The three classical geometries are all abstract surfaces.

(i) The Euclidean space R2 with dx² + dy² is an abstract surface.

(ii) The sphere S2 ⊆ R3, being an embedded surface, is an abstract surface with metric

4(dx² + dy²)/(1 + x² + y²)².

(iii) The hyperbolic disc D ⊆ R2 is an abstract surface with metric

4(dx² + dy²)/(1 − x² − y²)²,

and this is isometric to the upper half plane H with metric

(dx² + dy²)/y².

Note that in the first and last examples, it was sufficient to use just one chart to cover every point of the surface, but not for the sphere. Also, in the case of the hyperbolic plane, we can have many different charts, and they are compatible.

Finally, we notice that we really need the notion of abstract surface for the hyperbolic plane, since it cannot be realized as an embedded surface in R3. The proof is not obvious at all, and is a theorem of Hilbert.

One important thing we can do is to study the curvature of surfaces. Given a P ∈ S, the Riemannian metric (on a chart) around P determines a “reparametrization” by geodesics, similar to embedded surfaces. Then the metric takes the form

dρ² + G(ρ, θ) dθ².

We then define the curvature as

K = −(√G)ρρ / √G.

Note that for embedded surfaces, we obtained this formula as a theorem. For abstract surfaces, we take it as a definition.

We can check how this works in some familiar examples.

Example.

(i) In R2, we use the usual polar coordinates (ρ, θ), where x = ρ cos θ and y = ρ sin θ, and the metric becomes

dρ² + ρ² dθ².

So the curvature is

−(√G)ρρ / √G = −(ρ)ρρ / ρ = 0.

So the Euclidean space has zero curvature.

(ii) For the sphere S2, we use the spherical coordinates, fixing the radius to be 1. So we specify each point by

σ(ρ, θ) = (sin ρ cos θ, sin ρ sin θ, cos ρ).

Note that ρ is not really the radius in spherical coordinates, but just one of the angle coordinates. We then have the metric

dρ² + sin² ρ dθ².

Then we get √G = sin ρ, and

K = −(sin ρ)ρρ / sin ρ = 1.

(iii) For the hyperbolic plane, we use the disk model D, and we first express our original metric in polar coordinates of the Euclidean plane to get

(2/(1 − r²))² (dr² + r² dθ²).

This is not geodesic polar coordinates, since r is given by the Euclidean distance, not hyperbolic distance. We need to put

ρ = 2 tanh⁻¹ r, dρ = 2/(1 − r²) dr.

Then we have r = tanh(ρ/2), which gives

4r²/(1 − r²)² = sinh² ρ.

So we finally get the metric

dρ² + sinh² ρ dθ²,

with √G = sinh ρ, and

K = −1.

We see that the three classical geometries are characterized by having constant curvature 0, 1 and −1.

We are almost able to state the Gauss-Bonnet theorem. Before that, we need the notion of triangulations. We notice that our old definition makes sense for (compact) abstract surfaces S. So we just use the same definition. We then define the Euler number of an abstract surface as

e(S) = F − E + V,

as before. Assuming that the Euler
number is independent of triangulations, we know that this is invariant under homeomorphisms.

Theorem (Gauss-Bonnet theorem). If the sides of a triangle ABC ⊆ S are geodesic segments, then

∫∫_{ABC} K dA = (α + β + γ) − π,

where α, β, γ are the angles of the triangle, and dA is the “area element” given by

dA = √(EG − F²) du dv

on each domain U ⊆ S of a chart, with E, F, G as in the respective first fundamental form. Moreover, if S is a compact surface, then

∫∫_S K dA = 2π e(S).

We will not prove this theorem, but we will make some remarks. Note that we can deduce the second part from the first part. The basic idea is to take a triangulation of S, and then use facts like “each edge belongs to two triangles” and “each triangle has three edges”. This is a genuine generalization of what we previously had for the sphere and the hyperbolic plane, as one can easily see.

Using the Gauss-Bonnet theorem, we can define the curvature K(P) for a point P ∈ S alternatively, by considering triangles containing P, and then taking the limit

K(P) = lim_{area→0} ((α + β + γ) − π)/area.

Finally, we note how this relates to the problem of the parallel postulate we have mentioned previously. The parallel postulate, in some form, states that given a line and a point not on it, there is a unique line through the point and parallel to the line. This holds in Euclidean geometry, but not hyperbolic and spherical geometry. It is a fact that this is equivalent to the axiom that the angles of a triangle sum to π. Thus, the Gauss-Bonnet theorem tells us the parallel postulate is captured by the fact that the curvature of the Euclidean plane is zero everywhere.

pth in the next chapter. In this chapter, we will first look at results for general cosets. In particular, we will, step by step, prove the things we casually claimed above.

Definition (Cosets). Let H ≤ G and a ∈ G. Then the set aH = {ah : h ∈ H} is a left coset of H, and Ha = {ha : h ∈ H} is a right coset of H.

Example.

(i) Take 2Z ≤ Z. Then 6 + 2Z = {all even numbers} = 0 + 2Z, and 1 + 2Z = {all odd numbers} = 17 + 2Z.

(ii) Take G = S3, and let H = ⟨(1 2)⟩ = {e, (1 2)}. The left cosets are

eH = (1 2)H = {e, (1 2)},
(1 3)H = (1 2 3)H = {(1 3), (1 2 3)},
(2 3)H = (1 3 2)H = {(2 3), (1 3 2)}.

(iii) Take G = D6 (which is isomorphic to S3). Recall

D6 = ⟨r, s | r³ = e = s², rs = sr⁻¹⟩.

Take H = ⟨s⟩ = {e, s}. We have the left coset rH = {r, rs = sr⁻¹} and the right coset Hr = {r, sr}. Thus rH ̸= Hr.

Proposition. aH = bH ⇔ b⁻¹a ∈ H.

Proof. (⇒) Since a ∈ aH, a ∈ bH. Then a = bh for some h ∈ H. So b⁻¹a = h ∈ H.

(⇐) Let b⁻¹a = h0. Then a = bh0. Then for all ah ∈ aH, we have ah = b(h0h) ∈ bH. So aH ⊆ bH. Similarly, bH ⊆ aH. So aH = bH.

3 Lagrange’s Theorem IA Groups

Definition (Partition). Let X be a set, and X1, ···, Xn be subsets of X. The Xi are called a partition of X if ⋃ Xi = X and Xi ∩ Xj = ∅ for i ̸= j, i.e. every element is in exactly one of the Xi.

Lemma. The left cosets of a subgroup H ≤ G partition G, and every coset has the same size.

Proof. For each a
∈ G, a ∈ aH. Thus the union of all cosets gives all of G. Now we have to show that for all a, b ∈ G, the cosets aH and bH are either the same or disjoint. Suppose that aH and bH are not disjoint. Let ah1 = bh2 ∈ aH ∩ bH. Then b−1a = h2h−1 1 ∈ H. So aH = bH. To show that they each coset has the same size, note that f : H → aH with f (h) = ah is invertible with inverse f −1(h) = a−1h. Thus there exists a bijection between them and they have the same size. Definition (Index of a subgroup). The index of H in G, written |G : H|, is the number of left cosets of H in G. Theorem (Lagrange’s theorem). If G is a finite group and H is a subgroup of G, then |H| divides |G|. In particular, |H||G : H| = |G|. Note that the converse is not true. If k divides |G|, there is not necessarily a subgroup of order k, e.g. |A4| = 12 but there is no subgroup of order 6. However, we will later see that this is true if k is a prime (cf. Cauchy’s theorem). Proof. Suppose that there are |G : H| left cosets in total. Since the left cosets partition G, and each coset has size |H|, we have |H||G : H| = |G|. Again, the hard part of this proof is to prove that the left cosets partition G and have the same size. If you are asked to prove Lagrange’s theorem in exams, that is what you actually have to prove. Corollary. The order of an element divides the order of the group, i.e. for any finite group G and a ∈ G, ord(a) divides |G|. Proof. Consider the subgroup generated by a, which has order ord(a). Then by Lagrange’s theorem, ord(a) divides |G|. Corollary. The exponent of a group divides the order of the group,
i.e. for any finite group G and a ∈ G, a|G| = e. Proof. We know that |G| = k ord(a) for some k ∈ N. Then a|G| = (aord(a))k = ek = e. Corollary. Groups of prime order are cyclic and are generated by every non-identity element. Proof. Say |G| = p. If a ∈ G is not the identity, the subgroup generated by a must have order p since it has to divide p. Thus the subgroup generated by a has the same size as G and they must be equal. Then G must be cyclic since it is equal to the subgroup generated by a. 22 3 Lagrange’s Theorem IA Groups A useful way to think about cosets is to view them as equivalence classes. To do so, we need to first define what an equivalence class is. Definition (Equivalence relation). An equivalence relation ∼ is a relation that is reflexive, symmetric and transitive, i.e. (i) (∀x) x ∼ x (reflexivity) (ii) (∀x, y) x ∼ y ⇒ y ∼ x (symmetry) (iii) (∀x, y, z) [(x ∼ y) ∧ (y ∼ z) ⇒ x ∼ z] (transitivity) Example. The following relations are equivalence relations: (i) Consider Z. The relation ≡n defined as a ≡n b ⇔ n | (a − b). (ii) Consider the set (formally: class) of all finite groups. Then “is isomorphic to” is an equivalence relation. Definition (Equivalence class). Given an equivalence relation ∼ on A, the equivalence class of a is [a]∼ = [a] = {b ∈ A : a ∼ b} Proposition. The equivalence classes form a partition of A. Proof. By reflexivity, we have a ∈ [a]. Thus the equivalence classes cover the whole set. We must now show that for all a, b ∈ A, either [a] = [b] or [a] ∩ [b] = ∅. Suppose [a] ∩ [b] ̸= ∅. Then ∃c ∈ [a] ∩
[b]. So a ∼ c, b ∼ c. By symmetry, c ∼ b. By transitivity, we have a ∼ b. Now for all b′ ∈ [b], we have b ∼ b′. Thus by transitivity, we have a ∼ b′. Thus [b] ⊆ [a]. Similarly, [a] ⊆ [b] and [a] = [b]. Lemma. Given a group G and a subgroup H, define the equivalence relation on G with a ∼ b iff b−1a ∈ H. The equivalence classes are the left cosets of H. Proof. First show that it is an equivalence relation. (i) Reflexivity: Since a−1a = e ∈ H, a ∼ a. (ii) Symmetry: a ∼ b ⇒ b−1a ∈ H ⇒ (b−1a)−1 = a−1b ∈ H ⇒ b ∼ a. (iii) Transitivity: If a ∼ b and b ∼ c, we have b−1a, c−1b ∈ H. So c−1bb−1a = c−1a ∈ H. So a ∼ c. To show that the equivalence classes are the cosets, we have a ∼ b ⇔ b−1a ∈ H ⇔ aH = bH. Example. Consider (Z, +), and for fixed n, take the subgroup nZ. The cosets are 0 + H, 1 + H, · · ·, (n − 1) + H. We can write these as [0], [1], [2], · · ·, [n − 1]. To perform arithmetic “mod n”, define [a] + [b] = [a + b], and [a][b] = [ab]. We need to check that this is well-defined, i.e. it doesn’t depend on the choice of the representatives of [a] and [b]. If [a1] = [a2] and [b1] = [b2], then a1 = a2 + kn and b1 = b2 + ln for some k, l, so a1 + b1 = a2 + b2 + n(k + l) and a1b1 = a2b2 + n(kb2 + la2 + kln). So [a1 + b1] = [
a2 + b2] and [a1b1] = [a2b2]. 23 3 Lagrange’s Theorem IA Groups We have seen that (Zn, +n) is a group. What happens with multiplication? We can only take elements which have inverses (these are called units, cf. IB Groups, Rings and Modules). Call the set of them Un = {[a] : (a, n) = 1}. We’ll see these are the units. Definition (Euler totient function). ϕ(n) = |Un|. Example. If p is a prime, ϕ(p) = p − 1. ϕ(4) = 2. Proposition. Un is a group under multiplication mod n. Proof. The operation is well-defined as shown above. To check the axioms: 0. Closure: if a, b are coprime to n, then a · b is also coprime to n. So [a], [b] ∈ Un ⇒ [a] · [b] = [a · b] ∈ Un 1. Identity: [1] 2. Inverses: let [a] ∈ Un. Consider the map Un → Un with [c] → [ac]. This is injective: if [ac1] = [ac2], then n divides a(c1 − c2). Since a is coprime to n, n divides c1 − c2, so [c1] = [c2]. Since Un is finite, any injection Un → Un is also a surjection. So there exists a c such that [ac] = [a][c] = [1]. So [c] = [a]−1. 3. Associativity (and also commutativity): inherited from Z. Theorem (Fermat-Euler theorem). Let n ∈ N and a ∈ Z coprime to n. Then aϕ(n) ≡ 1 (mod n). In particular (Fermat’s Little Theorem), if n = p is a prime, then ap−1 ≡ 1 (mod p) for any a not a multiple of p. Proof. As a is coprime with n, [a] ∈ Un. Then [a]|Un| = [1], i.e. aϕ(n) ≡ 1
(mod n). 3.1 Small groups We will study the structures of certain small groups. Example (Using Lagrange’s theorem to find subgroups). To find subgroups of D10, we know that the subgroups must have size 1, 2, 5 or 10: 1: {e} 2: The groups generated by the 5 reflections of order 2 5: The group must be cyclic since it has prime order 5. It is then generated by an element of order 5, i.e. r, r2, r3 and r4. They generate the same group ⟨r⟩. 10: D10 As for D8, subgroups must have order 1, 2, 4 or 8. 1: {e} 24 3 Lagrange’s Theorem IA Groups 2: 5 subgroups, generated by the 5 elements of order 2, namely the 4 reflections and r2. 4: First consider the cyclic subgroup isomorphic to C4, which is ⟨r⟩. There are also two other non-cyclic subgroups, each isomorphic to C2 × C2. 8: D8 Proposition. Any group of order 4 is either isomorphic to C4 or C2 × C2. Proof. Let |G| = 4. By Lagrange’s theorem, possible element orders are 1 (e only), 2 and 4. If there is an element a ∈ G of order 4, then G = ⟨a⟩ ∼= C4. Otherwise all non-identity elements have order 2. Then G must be abelian (for any a, b, (ab)2 = e ⇒ ab = (ab)−1 ⇒ ab = b−1a−1 ⇒ ab = ba). Pick two distinct elements of order 2, say b, c ∈ G; then ⟨b⟩ = {e, b} and ⟨c⟩ = {e, c}. So ⟨b⟩ ∩ ⟨c⟩ = {e}. As G is abelian, ⟨b⟩ and ⟨c⟩ commute. We know that bc = cb has order 2 as well, and is the only element of G left. So G ∼= ⟨b⟩ × ⟨c⟩ ∼= C2 × C2 by the direct product theorem. Proposition. A group of order 6 is either cyclic or dihedral (i.e. is is
omorphic to C6 or D6). (See proof in next section.) 3.2 Left and right cosets As |aH| = |H| and similarly |H| = |Ha|, left and right cosets have the same size. Are they necessarily the same? We’ve previously shown that they might not be the same. In some other cases, they are. Example. (i) Take G = (Z, +) and H = 2Z. We have 0 + 2Z = 2Z + 0 = even numbers and 1 + 2Z = 2Z + 1 = odd numbers. Since G is abelian, aH = Ha for all a ∈ G, H ≤ G. (ii) Let G = D6 = ⟨r, s | r3 = e = s2, rs = sr−1⟩. Let U = ⟨r⟩. Since the cosets partition G, one must be U and the other sU = {s, sr = r2s, sr2 = rs} = U s. So for all a ∈ G, aU = U a. (iii) Let G = D6 and take H = ⟨s⟩. We have H = {e, s}, rH = {r, rs = sr−1} and r2H = {r2, r2s = sr}; while H = {e, s}, Hr = {r, sr} and Hr2 = {r2, sr2}. So the left and right cosets do not coincide. This distinction will become useful in the next chapter. 25 4 Quotient groups IA Groups 4 Quotient groups In the previous section, when attempting to pretend that a 3 × 3 × 3 Rubik’s cube is a 2 × 2 × 2 one, we came up with the cosets aH, and claimed that these form a group. We also said that this is not the case for an arbitrary subgroup H, but only for subgroups that satisfy aH = Ha. Before we prove these, we first study these subgroups a bit. 4.1 Normal subgroups Definition (Normal subgroup). A subgroup K of G is a normal subgroup if (∀a ∈ G)(∀k ∈ K) aka−1 ∈ K. We write K ◁ G. This is equivalent to: (i) (∀a ∈ G) aK = Ka, i.e. left coset = right coset (ii) (∀a ∈ G) aKa−1 = K (cf. conjugacy classes) From the example last time, H = ⟨s⟩ ≤ D6 is not a normal subgroup, but K = ⟨r⟩ ◁ D6. We know that every group G has at least two normal subgroups {e} and G. Lemma. (i) Every subgroup of index 2 is normal. (ii) Any subgroup of an abelian group is normal. Proof. (i) If K ≤ G has index 2, then there are only two possible cosets K and G \ K. As eK = Ke and cosets partition G, the other left coset and right coset must both be G \ K. So all left cosets and right cosets are the same. (ii) For all a ∈ G and k ∈ K, we have aka−1 = aa−1k = k ∈ K. Proposition. Every kernel is a normal subgroup. Proof. Given a homomorphism f : G → H and some a ∈ G, for all k ∈ ker f, we have f (aka−1) = f (a)f (k)f (a)−1 = f (a)ef (a)−1 = e. Therefore aka−1 ∈ ker f by definition of the kernel. In fact, we will see in the next section that all normal subgroups are kernels of some homomorphism. Example. Consider G = D8. We claim K = ⟨r2⟩ is normal. Check: Any element of G is either srℓ or rℓ for some ℓ. Clearly aea−1 = e ∈ K for any a. Now check r2: For the case of srℓ, we have srℓr2(srℓ)−1 = srℓr2r−ℓs−1 = sr2s = ssr−2 = r2. For the case of rℓ, rℓr2r−ℓ = r2. Proposition. A group of order 6 is either cyclic or dihedral (i.e. ∼= C6 or D6). 26 4
Quotient groups IA Groups Proof. Let |G| = 6. By Lagrange’s theorem, possible element orders are 1, 2, 3 and 6. If there is an a ∈ G of order 6, then G = ⟨a⟩ ∼= C6. Otherwise, we can only have elements of orders 2 and 3 other than the identity. If G only has elements of order 2, then |G| must be a power of 2 by Sheet 1 Q. 8, which is not the case. So there must be an element r of order 3. So ⟨r⟩ ◁ G as it has index 2. Now G must also have an element s of order 2 by Sheet 1 Q. 9. Since ⟨r⟩ is normal, we know that srs−1 ∈ ⟨r⟩. If srs−1 = e, then r = e, which is not true. If srs−1 = r, then sr = rs and sr has order 6 (the lcm of the orders of s and r), which was ruled out above. Otherwise, if srs−1 = r2 = r−1, then G is dihedral by definition of the dihedral group. 4.2 Quotient groups Proposition. Let K ◁ G. Then the set of (left) cosets of K in G is a group under the operation aK ∗ bK = (ab)K. Proof. First show that the operation is well-defined. If aK = a′K and bK = b′K, we want to show that aK ∗ bK = a′K ∗ b′K. We know that a′ = ak1 and b′ = bk2 for some k1, k2 ∈ K. Then a′b′ = ak1bk2. We know that b−1k1b ∈ K. Let b−1k1b = k3. Then k1b = bk3. So a′b′ = abk3k2 ∈ (ab)K. So picking a different representative of the coset gives the same product. 1. Closure: If aK, bK are cosets, then (ab)K is also a coset 2. Identity: The identity is eK = K (clear from definition) 3. Inverse: The inverse of aK is a−1
K (clear from definition) 4. Associativity: Follows from the associativity of G. Definition (Quotient group). Given a group G and a normal subgroup K, the quotient group or factor group of G by K, written as G/K, is the set of (left) cosets of K in G under the operation aK ∗ bK = (ab)K. Note that the set of left cosets also exists for non-normal subgroups (abnormal subgroups?), but the group operation above is not well-defined. Example. (i) Take G = Z and nZ (which must be normal since G is abelian). The cosets are k + nZ for 0 ≤ k < n. The quotient group is Zn. So we can write Z/(nZ) = Zn. In fact these are the only quotient groups of Z since the nZ are the only subgroups. Note that if G is abelian, G/K is also abelian. (ii) Take K = ⟨r⟩ ◁ D6. We have two cosets K and sK. So D6/K has order 2 and is isomorphic to C2. (iii) Take K = ⟨r2⟩ ◁ D8. We know that G/K should have 8/2 = 4 elements. We have G/K = {K, rK = r3K, sK = sr2K, srK = sr3K}. We see that all elements (except K) have order 2, so G/K ∼= C2 × C2. Note that quotient groups are not subgroups of G. They contain different kinds of elements. For example, Z/nZ ∼= Cn is finite, but all non-trivial subgroups of Z are infinite. 27 4 Quotient groups IA Groups Example. (Non-example) Consider D6 with H = ⟨s⟩. H is not a normal subgroup. We have rH ∗ r2H = r3H = H, but rH = rsH and r2H = srH (by considering the individual elements). So we have rsH ∗ srH = r2H ̸= H, and the operation is not well-defined. Lemma. Given K ◁ G, the quotient map q : G → G/K with g →
gK is a surjective group homomorphism. Proof. q(ab) = (ab)K = aKbK = q(a)q(b). So q is a group homomorphism. Also, for all aK ∈ G/K, q(a) = aK. So it is surjective. Note that the kernel of the quotient map is K itself. So any normal subgroup is a kernel of some homomorphism. Proposition. The quotient of a cyclic group is cyclic. Proof. Let G = Cn with H ≤ Cn. We know that H is also cyclic. Say Cn = ⟨c⟩ and H = ⟨ck⟩ ∼= Cℓ, where kℓ = n. We have Cn/H = {H, cH, c2H, · · ·, ck−1H} = ⟨cH⟩ ∼= Ck. 4.3 The Isomorphism Theorem Now we come to the Really Important Theorem™. Theorem (The Isomorphism Theorem). Let f : G → H be a group homomorphism with kernel K. Then K ◁ G and G/K ∼= im f. Proof. We have proved that K ◁ G before. We define a group homomorphism θ : G/K → im f by θ(aK) = f (a). First check that this is well-defined: If a1K = a2K, then a2−1a1 ∈ K. So f (a2)−1f (a1) = f (a2−1a1) = e. So f (a1) = f (a2) and θ(a1K) = θ(a2K). Now we check that it is a group homomorphism: θ(aKbK) = θ(abK) = f (ab) = f (a)f (b) = θ(aK)θ(bK). To show that it is injective, suppose θ(aK) = θ(bK). Then f (a) = f (b). Hence f (b)−1f (a) = e. Hence b−1a ∈ K. So aK = bK. By
definition, θ is surjective since im θ = im f. So θ gives an isomorphism G/K ∼= im f ≤ H. If f is injective, then the kernel is {e}, so G/K ∼= G and G is isomorphic to a subgroup of H. We can think of f as an inclusion map. If f is surjective, then im f = H. In this case, G/K ∼= H. Example. (i) Take f : GLn(R) → R∗ with A → det A. Then ker f = SLn(R) and im f = R∗, since for any λ ∈ R∗ the diagonal matrix diag(λ, 1, · · ·, 1) has determinant λ. So we know that GLn(R)/SLn(R) ∼= R∗. 28 4 Quotient groups IA Groups (ii) Define θ : (R, +) → (C∗, ×) with r → exp(2πir). This is a group homomorphism since θ(r + s) = exp(2πi(r + s)) = exp(2πir) exp(2πis) = θ(r)θ(s). We know that the kernel is Z ◁ R. Clearly the image is the unit circle (S1, ×). So R/Z ∼= (S1, ×). (iii) G = (Z∗p, ×) for prime p ̸= 2. We have f : G → G with a → a2. This is a homomorphism since (ab)2 = a2b2 (Z∗p is abelian). The kernel is {±1} = {1, p − 1}. We know that im f ∼= G/ ker f, which has order (p − 1)/2. The elements of im f are known as quadratic residues. Lemma. Any cyclic group is isomorphic to either Z or Z/(nZ) for some n ∈ N. Proof. Let G = ⟨c⟩. Define f : Z → G with m → cm. This is a group homomorphism
since cm1+m2 = cm1cm2. f is surjective since G is by definition all cm for all m. We know that ker f ◁ Z. We have three possibilities. Either (i) ker f = {0}, so f is an isomorphism and G ∼= Z; or (ii) ker f = Z, then G ∼= Z/Z = {e} = C1; or (iii) ker f = nZ for some n ≥ 2 (since every subgroup of Z is of the form nZ), then G ∼= Z/(nZ). Definition (Simple group). A group is simple if it has no non-trivial proper normal subgroup, i.e. only {e} and G are normal subgroups. Example. Cp for prime p are simple groups, since they have no subgroups other than {e} and Cp at all, let alone normal ones. A5 is simple, which we will prove after Chapter 6. The finite simple groups are the building blocks of all finite groups. All finite simple groups have been classified (The Atlas of Finite Groups). If we have K ◁ G with K ̸= G and K ̸= {e}, then we can “quotient out” G into G/K. If G/K is not simple, repeat. Then we can write G as an “inverse quotient” of simple groups. 29 5 Group actions IA Groups 5 Group actions Recall that we came up with groups to model symmetries and permutations. Intuitively, elements of groups are supposed to “do things”. However, as we developed group theory, we abstracted these away and just looked at how elements combine to form new elements. Group actions recapture this idea and make each group element correspond to some function. 5.1 Group acting on sets Definition (Group action). Let X be a set and G be a group. An action of G on X is a homomorphism φ : G → Sym X. This means that the homomorphism φ turns each element g ∈ G into a permutation of X, in a way that respects the group structure. Instead of writing φ(g)(x), we usually directly write g(x) or gx. Alternatively, we can define the group action as follows: Proposition. Let X be a set and G be a group. Then φ : G → Sym X is a homomorphism (i.e. an action)
iff θ : G × X → X defined by θ(g, x) = φ(g)(x) satisfies 0. (∀g ∈ G)(∀x ∈ X) θ(g, x) ∈ X. 1. (∀x ∈ X) θ(e, x) = x. 2. (∀g, h ∈ G)(∀x ∈ X) θ(g, θ(h, x)) = θ(gh, x). These criteria are almost the definition of a homomorphism. However, here we do not explicitly require θ(g, ·) to be a bijection, but require θ(e, ·) to be the identity function. This automatically ensures that θ(g, ·) is a bijection, since when composed with θ(g−1, ·), it gives θ(e, ·), which is the identity. So θ(g, ·) has an inverse. This is usually an easier thing to show. Example. (i) Trivial action: for any group G acting on any set X, we can have φ(g) = 1X for all g, i.e. G does nothing. (ii) Sn acts on {1, · · ·, n} by permutation. (iii) D2n acts on the vertices of a regular n-gon (or the set {1, · · ·, n}). (iv) The rotations of a cube act on the faces/vertices/diagonals/axes of the cube. Note that different groups can act on the same sets, and the same group can act on different sets. Definition (Kernel of action). The kernel of an action G on X is the kernel of φ, i.e. all g such that φ(g) = 1X. Note that by the isomorphism theorem, ker φ ◁ G and G/ ker φ is isomorphic to a subgroup of Sym X. Example. 30 5 Group actions IA Groups (i) D2n acting on {1, 2, · · ·, n} gives φ : D2n → Sn with kernel {e}. (ii) Let G be the rotations of a cube and let it act on the three axes x, y, z through the faces. We have φ : G → S3
. Then any rotation by 180◦ doesn’t change the axes, i.e. acts as the identity. So the kernel of the action has at least 4 elements: e and the three 180◦ rotations. In fact, we’ll see later that the kernel is exactly these 4 elements. Definition (Faithful action). An action is faithful if the kernel is just {e}. 5.2 Orbits and Stabilizers Definition (Orbit of action). Given an action G on X, the orbit of an element x ∈ X is orb(x) = G(x) = {y ∈ X : (∃g ∈ G) g(x) = y}. Intuitively, it is the set of elements that x can possibly get mapped to. Definition (Stabilizer of action). The stabilizer of x is stab(x) = Gx = {g ∈ G : g(x) = x} ⊆ G. Intuitively, it is the set of elements in G that do not change x. Lemma. stab(x) is a subgroup of G. Proof. We know that e(x) = x by definition. So stab(x) is non-empty. Suppose g, h ∈ stab(x); then gh−1(x) = g(h−1(x)) = g(x) = x. So gh−1 ∈ stab(x). So stab(x) is a subgroup. Example. (i) Consider D8 acting on the corners of the square X = {1, 2, 3, 4}. Then orb(1) = X since 1 can go anywhere by rotations. stab(1) = {e, reflection in the line through 1}. (ii) Consider the rotations of a cube acting on the three axes x, y, z. Then orb(x) is everything, and stab(x) contains e, 180◦ rotations and the rotations about the x axis. Definition (Transitive action). An action G on X is transitive if (∀x) orb(x) = X, i.e. you can reach any element from any element. Lemma. The orbits of an action partition X. Proof. Firstly, (∀x)(x ∈ orb(x)) as e(x) = x. So every x is in some orbit. Then suppose z ∈ orb(x) and
z ∈ orb(y); we have to show that orb(x) = orb(y). We know that z = g1(x) and z = g2(y) for some g1, g2. Then g1(x) = g2(y) and y = g2−1g1(x). For any w = g3(y) ∈ orb(y), we have w = g3g2−1g1(x). So w ∈ orb(x). Thus orb(y) ⊆ orb(x) and similarly orb(x) ⊆ orb(y). Therefore orb(x) = orb(y). 31 5 Group actions IA Groups Suppose a group G acts on X. We fix an x ∈ X. Then by definition of the orbit, given any g ∈ G, we have g(x) ∈ orb(x). So each g ∈ G gives us a member of orb(x). Conversely, every object in orb(x) arises this way, by definition of orb(x). However, different elements in G can give us the same member of orb(x). In particular, if g ∈ stab(x), then hg and h give us the same object in orb(x), since hg(x) = h(g(x)) = h(x). So we have a correspondence between things in orb(x) and members of G, “up to stab(x)”. Theorem (Orbit-stabilizer theorem). Let the group G act on X. Then there is a bijection between orb(x) and cosets of stab(x) in G. In particular, if G is finite, then | orb(x)|| stab(x)| = |G|. Proof. We biject the cosets of stab(x) with elements in the orbit of x. Recall that (G : stab(x)) is the set of cosets of stab(x). We can define θ : (G : stab(x)) → orb(x) g stab(x) → g(x). This is well-defined — if g stab(x) = h stab(x), then h = gk for some k ∈ stab(x). So h(x) = g(k(x)) = g(x). This map is surjective since for any y ∈ orb(x), there
is some g ∈ G such that g(x) = y, by definition. Then θ(g stab(x)) = y. It is injective since if g(x) = h(x), then h−1g(x) = x. So h−1g ∈ stab(x). So g stab(x) = h stab(x). Hence the number of cosets is | orb(x)|. Then the result follows from Lagrange’s theorem. An important application of the orbit-stabilizer theorem is determining group sizes. To find the order of the symmetry group of, say, a pyramid, we find something for it to act on, pick a favorite element, and find the orbit and stabilizer sizes. Example. (i) Suppose we want to know how big D2n is. D2n acts on the vertices {1, 2, 3, · · ·, n} transitively. So | orb(1)| = n. Also, stab(1) = {e, reflection in the line through 1}. So |D2n| = | orb(1)|| stab(1)| = 2n. Note that if the action is transitive, then all orbits have size |X| and thus all stabilizers have the same size. (ii) Let ⟨(1 2)⟩ act on {1, 2, 3}. Then orb(1) = {1, 2} and stab(1) = {e}. orb(3) = {3} and stab(3) = ⟨(1 2)⟩. (iii) Consider S4 acting on {1, 2, 3, 4}. We know that orb(1) = X and |S4| = 24, so | stab(1)| = 24/4 = 6. That makes it easier to find stab(1). Clearly S{2,3,4} ∼= S3 fixes 1, so S{2,3,4} ≤ stab(1). However, |S{2,3,4}| = 6 = | stab(1)|, so this is all of the stabilizer. 5.3 Important actions Given any group G, there are a few important actions we can define. In particular, we will define the conjugation action, which is a very important concept on 32 5 Group actions IA Groups its own. In fact, the whole of the next chapter
will be devoted to studying conjugation in the symmetric groups. First, we will study some less important examples of actions. Lemma (Left regular action). Any group G acts on itself by left multiplication. This action is faithful and transitive. Proof. We have 1. (∀g ∈ G)(∀x ∈ G) g(x) = g · x ∈ G by definition of a group. 2. (∀x ∈ G) e · x = x by definition of a group. 3. g(hx) = (gh)x by associativity. So it is an action. To show that it is faithful, we want to know that [(∀x ∈ G) gx = x] ⇒ g = e. This follows directly from the uniqueness of identity. To show that it is transitive, note that for all x, y ∈ G, we have (yx−1)(x) = y. So any x can be sent to any y. Theorem (Cayley’s theorem). Every group is isomorphic to some subgroup of some symmetric group. Proof. Take the left regular action of G on itself. This gives a group homomorphism φ : G → Sym G with ker φ = {e} as the action is faithful. By the isomorphism theorem, G ∼= im φ ≤ Sym G. Lemma (Left coset action). Let H ≤ G. Then G acts on the left cosets of H by left multiplication transitively. Proof. First show that it is an action: 0. g(aH) = (ga)H is a coset of H. 1. e(aH) = (ea)H = aH. 2. g1(g2(aH)) = g1((g2a)H) = (g1g2a)H = (g1g2)(aH). To show that it is transitive, given aH, bH, we know that (ba−1)(aH) = bH. So any aH can be mapped to bH. In the boring case where H = {e}, this is just the left regular action since G/{e} ∼= G. Definition (Conjugation of element). The conjugation of a ∈ G by b ∈ G is given by bab−1 ∈ G. Given any a,
c, if there exists some b such that c = bab−1, then we say a and c are conjugate. What is conjugation? This bab−1 form looks familiar from Vectors and Matrices. It is the formula used for changing basis. If b is the change-of-basis matrix and a is a matrix, then the matrix in the new basis is given by bab−1. In this case, bab−1 is the same matrix viewed from a different basis. In general, two conjugate elements are “the same” in some sense. For example, we will later show that in Sn, two elements are conjugate if and only if they have the same cycle type. Conjugate elements in general have many properties in common, such as their order. 33 5 Group actions IA Groups Lemma (Conjugation action). Any group G acts on itself by conjugation (i.e. g(x) = gxg−1). Proof. To show that this is an action, we have 0. g(x) = gxg−1 ∈ G for all g, x ∈ G. 1. e(x) = exe−1 = x 2. g(h(x)) = g(hxh−1) = ghxh−1g−1 = (gh)x(gh)−1 = (gh)(x) Definition (Conjugacy classes and centralizers). The conjugacy classes are the orbits of the conjugation action. ccl(a) = {b ∈ G : (∃g ∈ G) gag−1 = b}. The centralizers are the stabilizers of this action, i.e. the elements that commute with a. CG(a) = {g ∈ G : gag−1 = a} = {g ∈ G : ga = ag}. The centralizer is defined as the set of elements that commute with a particular element a. For the whole group G, we can define the center. Definition (Center of group). The center of G is the set of elements that commute with all other elements. Z(G) = {g ∈ G : (∀a) gag−1 = a} = {g ∈ G : (∀a) ga = ag}. It is sometimes written as C(G) instead of Z(G). In many ways, conjugation is related to normal
subgroups. Lemma. Let K ◁ G. Then G acts by conjugation on K. Proof. We only have to prove closure, as the other properties follow from the conjugation action. However, by definition of a normal subgroup, for every g ∈ G, k ∈ K, we have gkg−1 ∈ K. So it is closed. Proposition. Normal subgroups are exactly those subgroups which are unions of conjugacy classes. Proof. Let K ◁ G. If k ∈ K, then by definition for every g ∈ G, we get gkg−1 ∈ K. So ccl(k) ⊆ K. So K is the union of the conjugacy classes of all its elements. Conversely, if K is a union of conjugacy classes and a subgroup of G, then for all k ∈ K, g ∈ G, we have gkg−1 ∈ K. So K is normal. Lemma. Let X be the set of subgroups of G. Then G acts by conjugation on X. Proof. To show that it is an action, we have 0. If H ≤ G, then we have to show that gHg−1 is also a subgroup. We know that e ∈ H and thus geg−1 = e ∈ gHg−1, so gHg−1 is non-empty. For any two elements gag−1 and gbg−1 ∈ gHg−1, (gag−1)(gbg−1)−1 = g(ab−1)g−1 ∈ gHg−1. So gHg−1 is a subgroup. 34 5 Group actions IA Groups 1. eHe−1 = H. 2. g1(g2Hg2−1)g1−1 = (g1g2)H(g1g2)−1. Under this action, normal subgroups have singleton orbits. Definition (Normalizer of subgroup). The normalizer of a subgroup is the stabilizer of the (group) conjugation action. NG(H) = {g ∈ G : gHg−1 = H}. We clearly have H ⊆ NG(H). It is easy to show that NG(H) is the largest subgroup of G in which H is a normal subgroup, hence the name.
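These conjugation facts are easy to check by brute force on a small group. The following Python sketch (not part of the original notes; permutations are written as tuples acting on {0, 1, 2, 3}, an assumed convention) generates D8 and verifies that K = ⟨r2⟩ has a singleton orbit under the conjugation action on subgroups, while H = ⟨s⟩ does not:

```python
from itertools import product

def compose(p, q):
    """(p . q)(x) = p(q(x)); a permutation p sends x to p[x]."""
    return tuple(p[q[x]] for x in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for x, px in enumerate(p):
        inv[px] = x
    return tuple(inv)

# Generate D8 as the symmetries of a square with vertices 0, 1, 2, 3.
e, r, s = (0, 1, 2, 3), (1, 2, 3, 0), (0, 3, 2, 1)
G = {e}
while True:
    bigger = G | {compose(a, b) for a, b in product(G, (r, s))}
    if bigger == G:
        break
    G = bigger

def conjugate(g, H):
    """The subgroup g H g^-1."""
    return frozenset(compose(compose(g, h), inverse(g)) for h in H)

K = frozenset({e, compose(r, r)})   # <r^2>: normal, so a singleton orbit
H = frozenset({e, s})               # <s>: not normal
assert len(G) == 8
assert all(conjugate(g, K) == K for g in G)
assert len({conjugate(g, H) for g in G}) == 2   # orbit {<s>, <r^2 s>}
```

The last line shows the orbit of ⟨s⟩ has size 2, matching the earlier observation that ⟨s⟩ ≤ D8 is not normal.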
There is a connection between actions in general and conjugation of subgroups. Lemma. Stabilizers of the elements in the same orbit are conjugate, i.e. let G act on X and let g ∈ G, x ∈ X. Then stab(g(x)) = g stab(x)g−1. 5.4 Applications Example. Let G+ be the rotations of a cube acting on the vertices. Let X be the set of vertices. Then |X| = 8. Since the action is transitive, the orbit of any element is the whole of X. The stabilizer of vertex 1 is the set of rotations about the axis through 1 and the diagonally opposite vertex, of which there are 3. So |G+| = | orb(1)|| stab(1)| = 8 · 3 = 24. Example. Let G be a finite simple group of order greater than 2, and H ≤ G have index n ̸= 1. Then |G| ≤ n!/2. Proof. Consider the left coset action of G on the cosets of H. We get a group homomorphism φ : G → Sn since there are n cosets of H. Since H ̸= G, φ is non-trivial and ker φ ̸= G. Now ker φ ◁ G. Since G is simple, ker φ = {e}. So G ∼= im φ ≤ Sn by the isomorphism theorem. So |G| ≤ |Sn| = n!. We can further refine this by considering sgn ◦φ : G → {±1}. The kernel of this composite is normal in G. So K = ker(sgn ◦φ) = {e} or G. Since G/K ∼= im(sgn ◦φ), we know that |G|/|K| = 1 or 2 since im(sgn ◦φ) has at most two elements. Hence for |G| > 2, we cannot have K = {e}, or else |G|/|K| > 2. So we must have K = G, so sgn(φ(g)) = 1 for all g and im φ ≤ An. So |G| ≤ n!/2. We have seen on Sheet 1 that if |G| is even, then G has an element of order 2. In fact
, we have: Theorem (Cauchy’s Theorem). Let G be a finite group and p a prime dividing |G|. Then G has an element of order p (in fact there must be at least p − 1 elements of order p). It is important to remember that this only holds for prime p. For example, A4 doesn’t have an element of order 6 even though 6 | 12 = |A4|. The converse, however, holds for any number trivially by Lagrange’s theorem. Proof. Let G and p be fixed. Consider Gp = G × G × · · · × G, the set of p-tuples of G. Let X ⊆ Gp be X = {(a1, a2, · · ·, ap) ∈ Gp : a1a2 · · · ap = e}. 35 5 Group actions IA Groups In particular, if an element b has order p, then (b, b, · · ·, b) ∈ X. In fact, if (b, b, · · ·, b) ∈ X and b ̸= e, then b has order p, since p is prime. Now let H = ⟨h : hp = e⟩ ∼= Cp be a cyclic group of order p with generator h (this h is not related to G in any way). Let H act on X by “rotation”: h(a1, a2, · · ·, ap) = (a2, a3, · · ·, ap, a1) This is an action: 0. If a1 · · · ap = e, then a1−1 = a2 · · · ap. So a2 · · · apa1 = a1−1a1 = e. So (a2, a3, · · ·, ap, a1) ∈ X. 1. e acts as an identity by construction. 2. The “associativity” condition also works by construction. As orbits partition X, the sum of all orbit sizes must be |X|. We know that |X| = |G|p−1 since we can freely choose the first p − 1 entries and the last one must be the inverse of their product. Since p divides |G|, p also divides |X|. We have | orb(a1, · ·
·, ap)|| stabH (a1, · · ·, ap)| = |H| = p. So all orbits have size 1 or p, and they sum to |X| = p × something. We know that there is one orbit of size 1, namely (e, e, · · ·, e). So there must be at least p − 1 other orbits of size 1 for the sum to be divisible by p. In order to have an orbit of size 1, the tuple must look like (a, a, · · ·, a) for some a ∈ G, which then has order p. 36 6 Symmetric groups II IA Groups 6 Symmetric groups II In this chapter, we will look at conjugacy classes of Sn and An. It turns out this is easy for Sn, since two elements are conjugate if and only if they have the same cycle type. However, it is slightly more complicated in An. This is because, while (1 2 3) and (1 3 2) might be conjugate in S4, the element needed to perform the conjugation might be odd and not in An. 6.1 Conjugacy classes in Sn Recall σ, τ ∈ Sn are conjugate if ∃ρ ∈ Sn such that ρσρ−1 = τ. We first investigate the special case, when σ is a k-cycle. Proposition. If (a1 a2 · · · ak) is a k-cycle and ρ ∈ Sn, then ρ(a1 · · · ak)ρ−1 is the k-cycle (ρ(a1) ρ(a2) · · · ρ(ak)). Proof. Consider ρ(a1) acted on by ρ(a1 · · · ak)ρ−1. The three permutations send it to ρ(a1) → a1 → a2 → ρ(a2), and similarly for the other ai. Since ρ is bijective, any b can be written as ρ(a) for some a. So the result is the k-cycle (ρ(a1) ρ(a2) · · · ρ(ak)). Corollary. Two elements in Sn are conjugate iff they have the same cycle type. Proof. Suppose σ = σ1σ2 · · · σℓ
, where the σi are disjoint cycles. Then ρσρ−1 = ρσ1ρ−1ρσ2ρ−1 · · · ρσℓρ−1. Since the conjugation of a cycle preserves its length, ρσρ−1 has the same cycle type. Conversely, if σ, τ have the same cycle type, say σ = (a1 a2 · · · ak)(ak+1 · · · ak+ℓ), τ = (b1 b2 · · · bk)(bk+1 · · · bk+ℓ), if we let ρ(ai) = bi, then ρσρ−1 = τ. Example. Conjugacy classes of S4:

Cycle type    Example element   Size of ccl   Size of centralizer   Sign
(1, 1, 1, 1)  e                 1             24                    +1
(2, 1, 1)     (1 2)             6             4                     −1
(2, 2)        (1 2)(3 4)        3             8                     +1
(3, 1)        (1 2 3)           8             3                     +1
(4)           (1 2 3 4)         6             4                     −1

We know that a normal subgroup is a union of conjugacy classes. We can now find all normal subgroups by finding possible unions of conjugacy classes whose cardinality divides 24. Note that the normal subgroup must contain e. (i) Order 1: {e} (ii) Order 2: None (iii) Order 3: None (iv) Order 4: {e, (1 2)(3 4), (1 3)(2 4), (1 4)(2 3)} ∼= C2 × C2 = V4 is a possible candidate. We can check the group axioms and find that it is really a subgroup. 37 6 Symmetric groups II IA Groups (v) Order 6: None (vi) Order 8: None (vii) Order 12: A4 (We know it is a normal subgroup since it is the kernel of the signature and/or it has index 2) (viii) Order 24: S4 We can also obtain the quotients of S4: S4/{e} ∼= S4, S4/V4 ∼= S3 ∼= D6, S4/A4 ∼= C2, S4/S4 ∼= {e}. 6.2 Con
jugacy classes in An We have seen that |Sn| = 2|An| and that conjugacy classes in Sn are “nice”. How about in An? The first thought is that we write it down: cclSn (σ) = {τ ∈ Sn : (∃ρ ∈ Sn) τ = ρσρ−1} cclAn (σ) = {τ ∈ An : (∃ρ ∈ An) τ = ρσρ−1} Obviously cclAn (σ) ⊆ cclSn (σ), but the converse need not be true, since the permutation needed to conjugate σ to τ may be odd. Example. Consider (1 2 3) and (1 3 2). They are conjugate in S3 by (2 3), but (2 3) ̸∈ A3. (This does not automatically entail that they are not conjugate in A3, because there might be another even permutation conjugating (1 2 3) to (1 3 2). In A5, (2 3)(4 5) works, but it does not lie in A3.) We can use the orbit-stabilizer theorem: |Sn| = | cclSn (σ)||CSn (σ)| |An| = | cclAn (σ)||CAn (σ)| We know that An is half of Sn and cclAn (σ) is contained in cclSn (σ). So we have two options: either cclSn (σ) = cclAn (σ) and |CSn (σ)| = 2|CAn (σ)|; or | cclAn (σ)| = 1/2 | cclSn (σ)| and CAn (σ) = CSn (σ). Definition (Splitting of conjugacy classes). When | cclAn (σ)| = 1/2 | cclSn (σ)|, we say that the conjugacy class of σ splits in An. So the conjugacy classes are either retained or split. Proposition. For σ ∈ An, the conjugacy class of σ splits in An if and only if no odd permutation commutes with σ. Proof. We have the conjugacy class splitting if and only if the centralizer does not. So instead we check whether the centralizer splits. Clearly CAn (σ) = CSn (σ) ∩ An
. So the centralizer splits if and only if an odd permutation commutes with σ, and the conjugacy class splits exactly when the centralizer does not.

Example. Conjugacy classes in A4:

Cycle type      Example       |cclS4|   Odd element in CS4?   |cclA4|
(1, 1, 1, 1)    e             1         Yes: (1 2)            1
(2, 2)          (1 2)(3 4)    3         Yes: (1 2)            3
(3, 1)          (1 2 3)       8         No                    4, 4

In the (3, 1) case, by the orbit-stabilizer theorem |CS4((1 2 3))| = 24/8 = 3, which is odd, so the centralizer cannot split; hence the conjugacy class does.

Example. Conjugacy classes in A5:

Cycle type          Example         |cclS5|   Odd element in CS5?   |cclA5|
(1, 1, 1, 1, 1)     e               1         Yes: (1 2)            1
(2, 2, 1)           (1 2)(3 4)      15        Yes: (1 2)            15
(3, 1, 1)           (1 2 3)         20        Yes: (4 5)            20
(5)                 (1 2 3 4 5)     24        No                    12, 12

Since the centralizer of (1 2 3 4 5) has size 5 (see the lemma below), which is odd, the centralizer cannot split, so its conjugacy class must split.

Lemma. σ = (1 2 3 4 5) ∈ S5 has CS5(σ) = ⟨σ⟩.

Proof. |cclS5(σ)| = 24 and |S5| = 120. So |CS5(σ)| = 5. Clearly ⟨σ⟩ ⊆ CS5(σ). Since both have size 5, we know that CS5(σ) = ⟨σ⟩.

Theorem. A5 is simple.

Proof. Normal subgroups must be unions of conjugacy classes, must contain e, and must have order dividing 60. The possible orders are 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60. However, no sum of the conjugacy class sizes 1, 15, 20, 12, 12 that includes the 1 (for the class {e}) equals any of the possible orders apart from 1 and 60. So we only have the trivial normal subgroups.

In fact, all An for n ≥ 5 are simple, but the proof is horrible (cf. IB Groups, Rings and Modules).
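The counting in the examples above lends itself to a brute-force check. The following Python sketch (not part of the original notes; all helper names are our own) verifies that the class of a 5-cycle splits in A5, and that no union of A5-classes containing {e} has size a proper non-trivial divisor of 60:

```python
from itertools import combinations, permutations

def compose(p, q):
    """(p ∘ q)(i) = p[q[i]], for permutations of {0,...,n-1} stored as tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def sign(p):
    """sgn(p) = (-1)^(n - number of cycles of p)."""
    seen, cycles = set(), 0
    for i in range(len(p)):
        if i not in seen:
            cycles += 1
            while i not in seen:
                seen.add(i)
                i = p[i]
    return (-1) ** (len(p) - cycles)

S5 = list(permutations(range(5)))
A5 = [p for p in S5 if sign(p) == 1]

def ccl(sigma, group):
    """Conjugacy class of sigma under conjugation by elements of `group`."""
    return {compose(compose(r, sigma), inverse(r)) for r in group}

five_cycle = (1, 2, 3, 4, 0)          # the 5-cycle (0 1 2 3 4)
print(len(ccl(five_cycle, S5)))       # 24: its class in S5
print(len(ccl(five_cycle, A5)))       # 12: the class splits in A5

# A5-class sizes are 1, 15, 20, 12, 12.  Check that no subset sum
# containing the 1 (the class {e}) divides 60 except 1 and 60 itself:
sizes = [15, 20, 12, 12]
sums = {1 + sum(c) for k in range(len(sizes) + 1)
        for c in combinations(sizes, k)}
print(sorted(s for s in sums if 60 % s == 0))  # [1, 60]
```

This is only a check of the arithmetic in the proof above, of course, not a replacement for it.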
7 Quaternions

In the remainder of the course, we will look at different important groups. Here, we will have a brief look at the quaternions.

Definition (Quaternions). The quaternions are the set of matrices

(1 0; 0 1), (−1 0; 0 −1), (i 0; 0 −i), (−i 0; 0 i), (0 1; −1 0), (0 −1; 1 0), (0 i; i 0), (0 −i; −i 0)

(written row by row), which is a subgroup of GL2(C).

Notation. We can also write the quaternions as

Q8 = ⟨a, b : a4 = e, b2 = a2, bab−1 = a−1⟩.

Even better, we can write Q8 = {1, −1, i, −i, j, −j, k, −k} with

(i) (−1)2 = 1
(ii) i2 = j2 = k2 = −1
(iii) (−1)i = −i etc.
(iv) ij = k, jk = i, ki = j
(v) ji = −k, kj = −i, ik = −j

where 1 = (1 0; 0 1), −1 = (−1 0; 0 −1), i = (i 0; 0 −i), −i = (−i 0; 0 i), j = (0 1; −1 0), −j = (0 −1; 1 0), k = (0 i; i 0), −k = (0 −i; −i 0).

Lemma. If G has order 8, then either G is abelian (i.e. ∼= C8, C4 × C2 or C2 × C2 × C2), or G is not abelian and isomorphic to D8 or Q8 (dihedral or quaternion).

Proof. Consider the different possible cases:

– If G contains an element of order 8, then G ∼= C8.

– If all non-identity elements have order 2, then G is abelian (Sheet 1, Q8). Let a ̸= b ∈ G \ {e}. By the direct product theorem, ⟨a, b⟩ = ⟨a⟩ × ⟨b⟩. Then take c ̸∈ ⟨a, b⟩ (which exists since |⟨a, b⟩| = 4 < 8). By the direct product theorem again, ⟨a, b, c⟩ = ⟨a⟩ × ⟨b⟩ × ⟨c⟩ ∼= C2 × C2 × C2. Since ⟨a, b, c⟩ ⊆ G and |⟨a, b, c⟩| = |G|, we get G = ⟨a, b, c⟩ ∼= C2 × C2 × C2.

– G has no element of order 8 but has an element a ∈ G of order 4. Let H = ⟨a⟩. Since H has index 2, it is normal in G, and G/H ∼= C2 since |G/H| = 2. This means that for any b ̸∈ H, bH generates G/H. Then (bH)2 = b2H = H, so b2 ∈ H. Since b2 ∈ ⟨a⟩ and ⟨a⟩ is a cyclic group, b2 commutes with a. If b2 = a or a3, then b has order 8, a contradiction. So b2 = e or a2.

We also know that H is normal, so bab−1 ∈ H. Let bab−1 = aℓ. Since a and b2 commute, we know that

a = b2ab−2 = b(bab−1)b−1 = baℓb−1 = (bab−1)ℓ = aℓ².

So ℓ2 ≡ 1 (mod 4), hence ℓ ≡ ±1 (mod 4).

◦ When ℓ ≡ 1 (mod 4), bab−1 = a, i.e. ba = ab. So G is abelian.
∗ If b2 = e, then G = ⟨a, b⟩ ∼= ⟨a⟩ × ⟨b⟩ ∼= C4 × C2.
∗ If b2 = a2, then (ba−1)2 = e. So G = ⟨a, ba−1⟩ ∼= C4 × C2.
◦ When ℓ ≡ −1 (mod 4), bab−1 = a−1.
∗ If b2 = e, then G = ⟨a, b : a4 = e = b2, bab
−1 = a−1⟩. So G ∼= D8 by definition.
∗ If b2 = a2, then we have G ∼= Q8.

8 Matrix groups

8.1 General and special linear groups

Consider Mn×n(F), the set of n × n matrices over the field F = R or C (or Fp). We know that matrix multiplication is associative (since matrices represent functions), but in general not commutative. To make this a group, we want the identity matrix I to be the identity. To ensure everything has an inverse, we can only include invertible matrices.

(We do not necessarily need to take I as the identity of the group. We can, for example, take e = (1 0; 0 0); then the matrices of the form (a 0; 0 0) for non-zero a form a group with identity e, albeit a boring one — it is simply ∼= R∗.)

Definition (General linear group GLn(F)). GLn(F) = {A ∈ Mn×n(F) : A is invertible} is the general linear group. Alternatively, we can define GLn(F) as the matrices with non-zero determinant.

Proposition. GLn(F) is a group.

Proof. The identity is I, which is in GLn(F) by definition (it is its own inverse). The product of invertible matrices is invertible, so the set is closed. Inverses exist by definition, and matrix multiplication is associative.

Proposition. det : GLn(F) → F \ {0} is a surjective group homomorphism.

Proof. det AB = det A det B. If A is invertible, it has non-zero determinant, so det A ∈ F \ {0}. To show surjectivity: for any x ∈ F \ {0}, take the identity matrix and replace I11 with x; the determinant is then x.

Definition (Special linear group SLn(F)). The special linear group SLn(F) is the kernel of the determinant, i.e. SLn(F) = {A ∈ GLn(F) : det A = 1}. So SLn(F) ◁ GLn(F
) as it is a kernel. Note that Q8 ≤ SL2(C).

8.2 Actions of GLn(C)

Proposition. GLn(C) acts faithfully on Cn by left multiplication on vectors, with two orbits (0 and everything else).

Proof. First we show that it is a group action:

1. If A ∈ GLn(C) and v ∈ Cn, then Av ∈ Cn. So it is closed.
2. Iv = v for all v ∈ Cn.
3. A(Bv) = (AB)v.

Now we prove that it is faithful: a linear map is determined by what it does on a basis. Take the standard basis e1 = (1, 0, · · ·, 0), · · ·, en = (0, · · ·, 1). Any matrix which maps each ek to itself must be I (since the columns of a matrix are the images of the basis vectors).

To show that there are 2 orbits, note that A0 = 0 for all A. Also, as A is invertible, Av = 0 ⇔ v = 0. So 0 forms a singleton orbit. Given any two vectors v, w ∈ Cn \ {0}, there is a matrix A ∈ GLn(C) such that Av = w (cf. Vectors and Matrices).

Similarly, GLn(R) acts on Rn.

Proposition. GLn(C) acts on Mn×n(C) by conjugation. (The proof is trivial.)

This action can be thought of as a "change of basis" action. Two matrices are conjugate if they represent the same map with respect to different bases; the conjugating matrix P is the base change matrix. From Vectors and Matrices, we know that there are three different types of orbits for GL2(C): A is conjugate to a matrix of one of these forms:

(i) (λ 0; 0 µ) with λ ̸= µ, i.e. two distinct eigenvalues;
(ii) (λ 0; 0 λ), i.e. a repeated eigenvalue with a 2-dimensional eigenspace;
(iii) (λ 1; 0 λ), i.e. a repeated eigenvalue with a 1-dimensional eigenspace.

Note that we said there are three types of orbits, not three orbits. There are infinitely many orbits, e.g. one for each of
λI.

8.3 Orthogonal groups

Recall that the transpose AT is defined by (AT)ij = Aji, i.e. we reflect the matrix in the diagonal. Transposes have the following properties:

(i) (AB)T = BT AT
(ii) (A−1)T = (AT)−1
(iii) AT A = I ⇔ AAT = I ⇔ A−1 = AT. In this case A is orthogonal.
(iv) det AT = det A

We now work in R, because orthogonal matrices do not make sense for complex matrices. Note that a matrix is orthogonal if and only if its columns (or rows) form an orthonormal basis of Rn:

AT A = I ⇔ Σk aki akj = δij ⇔ ai · aj = δij,

where ai is the ith column of A. The importance of orthogonal matrices is that they are isometries of Rn.

Lemma (Orthogonal matrices are isometries). For any orthogonal A and x, y ∈ Rn, we have

(i) (Ax) · (Ay) = x · y
(ii) |Ax| = |x|

Proof. Treating the dot product as a matrix multiplication, (Ax)T(Ay) = xT AT Ay = xT Iy = xT y. Then |Ax|2 = (Ax) · (Ax) = x · x = |x|2; since both are non-negative, |Ax| = |x|.

It is important to note that orthogonal matrices are isometries, but not all isometries are orthogonal. For example, translations are isometries but are not represented by orthogonal matrices, since they are not linear maps and cannot be represented by matrices at all! However, it is true that all linear isometries can be represented by orthogonal matrices.

Definition (Orthogonal group O(n)). The orthogonal group is

O(n) = On = On(R) = {A ∈ GLn(R) : AT A = I},

i.e. the group of orthogonal matrices. We will later show that this is the set of linear maps that preserve distances in Rn.

Lemma. The orthogonal group
is a group.

Proof. We check that it is a subgroup of GLn(R). It is non-empty, since I ∈ O(n). If A, B ∈ O(n), then

(AB−1)(AB−1)T = AB−1(B−1)T AT = AB−1BA−1 = I

(using (B−1)T = B and AT = A−1), so AB−1 ∈ O(n) and this is indeed a subgroup.

Proposition. det : O(n) → {±1} is a surjective group homomorphism.

Proof. For A ∈ O(n), we know AT A = I. So det(AT A) = (det A)2 = 1, giving det A = ±1. Since det(AB) = det A det B, it is a homomorphism. We have det I = 1 and det diag(−1, 1, · · ·, 1) = −1, so it is surjective.

Definition (Special orthogonal group SO(n)). The special orthogonal group is the kernel of det : O(n) → {±1}:

SO(n) = SOn = SOn(R) = {A ∈ O(n) : det A = 1}.

By the isomorphism theorem, O(n)/SO(n) ∼= C2.

What's wrong with matrices of determinant −1? Why do we want to eliminate them? An important example of an orthogonal matrix with determinant −1 is a reflection. These transformations reverse orientation, which is often unwanted.

Lemma. O(n) = SO(n) ∪ diag(−1, 1, · · ·, 1) SO(n).

Proof. Cosets partition the group.

8.4 Rotations and reflections in R2 and R3

Lemma. SO(2) consists of all rotations of R2 around 0.

Proof. Let A ∈ SO(2), so AT A = I and det A = 1. Suppose A = (a b; c d). Then A−1 = (d −b; −c a), using det A = 1. So AT = A−1 implies ad − bc = 1, c = −b, d = a
. Combining these equations, we obtain a2 + c2 = 1. Set a = cos θ = d and c = sin θ = −b; these satisfy all three equations. So

A = (cos θ −sin θ; sin θ cos θ).

Note that A maps (1, 0) to (cos θ, sin θ), and maps (0, 1) to (−sin θ, cos θ). So A represents a rotation by θ counterclockwise.

Corollary. Any matrix in O(2) is either a rotation around 0 or a reflection in a line through 0.

Proof. If A ∈ SO(2), we've shown that it is a rotation. Otherwise, since O(2) = SO(2) ∪ (1 0; 0 −1) SO(2), we have

A = (1 0; 0 −1)(cos θ −sin θ; sin θ cos θ) = (cos θ −sin θ; −sin θ −cos θ).

This has eigenvalues 1, −1. So it is a reflection in the line of the eigenspace E1. The line goes through 0, since the eigenspace is a subspace, which must contain 0.

Lemma. Every matrix in SO(3) is a rotation around some axis.

Proof. Let A ∈ SO(3). We know that det A = 1 and A is an isometry. The eigenvalues λ must have |λ| = 1, and they multiply to det A = 1. Since we are in R, complex eigenvalues come in complex conjugate pairs. If there are complex eigenvalues λ and λ̄, then λλ̄ = |λ|2 = 1, and the third eigenvalue must be real and equal to +1.

If all eigenvalues are real, then they are 1 or −1 and must multiply to 1. The possibilities are 1, 1, 1 and −1, −1, 1, both of which contain an eigenvalue 1.

So pick an eigenvector for the eigenvalue 1 as the third basis vector. Then in some orthonormal basis A has third column (0, 0, 1)T, since the third column is the image of the third basis vector; by orthogonality the third row is (0, 0, 1) as well. Now let
A′ = (a b; c d) ∈ GL2(R) be the remaining 2 × 2 block, with det A′ = 1. A′ is still orthogonal, so A′ ∈ SO(2). Therefore A′ is a rotation, and in some basis

A = (cos θ −sin θ 0; sin θ cos θ 0; 0 0 1),

which is exactly a rotation about an axis.

Lemma. Every matrix in O(3) is the product of at most three reflections in planes through 0.

Note that a rotation is a product of two reflections. This lemma effectively states that every matrix in O(3) is a reflection, a rotation, or a product of a reflection and a rotation.

Proof. Recall O(3) = SO(3) ∪ (1 0 0; 0 1 0; 0 0 −1) SO(3). If A ∈ SO(3), we know that in some basis

A = (cos θ −sin θ 0; sin θ cos θ 0; 0 0 1) = (cos θ sin θ 0; sin θ −cos θ 0; 0 0 1)(1 0 0; 0 −1 0; 0 0 1),

which is a composite of two reflections. Then if A ∈ (1 0 0; 0 1 0; 0 0 −1) SO(3), it is automatically a product of three reflections.

We have shown that everything in O(3) \ SO(3) can be written as a product of three reflections, but it is possible that only 1 reflection is needed. However, some matrices do genuinely need 3 reflections, e.g. −I = (−1 0 0; 0 −1 0; 0 0 −1).

8.5 Unitary groups

The concept of orthogonal matrices only makes sense for real matrices. For complex matrices, we instead need unitary matrices. To define them, we replace the transpose by the Hermitian conjugate A† = (A∗)T, with (A†)ij = A∗ji, where the asterisk denotes the complex conjugate. We still have

(i) (AB)† = B†A†
(ii) (A−1)† =
(A†)−1
(iii) A†A = I ⇔ AA† = I ⇔ A† = A−1. In this case we say A is a unitary matrix.
(iv) det A† = (det A)∗

Definition (Unitary group U(n)). The unitary group is

U(n) = Un = {A ∈ GLn(C) : A†A = I}.

Lemma. det : U(n) → S1, where S1 is the unit circle in the complex plane, is a surjective group homomorphism.

Proof. We know that 1 = det I = det(A†A) = |det A|2, so |det A| = 1. Since det AB = det A det B, it is a group homomorphism. Now given λ ∈ S1, we have diag(λ, 1, · · ·, 1) ∈ U(n). So it is surjective.

Definition (Special unitary group SU(n)). The special unitary group SU(n) = SUn is the kernel of det : U(n) → S1.

Similarly, unitary matrices preserve the complex dot product: (Ax) · (Ay) = x · y.

9 More on regular polyhedra

In this section, we will look at the symmetry groups of the cube and the tetrahedron.

9.1 Symmetries of the cube

Rotations

Recall that there are |G+| = 24 rotations of the cube, by the orbit-stabilizer theorem.

Proposition. G+ ∼= S4, where G+ is the group of all rotations of the cube.

Proof. Consider G+ acting on the 4 diagonals of the cube. This gives a group homomorphism φ : G+ → S4. We have (1 2 3 4) ∈ im φ by rotation around the axis through the top and bottom faces. We also have (1 2) ∈ im φ by rotation around the axis through the mid-point of the edge connecting 1 and 2. Since (1 2) and (1 2 3 4) generate S4 (Sheet 2, Q5d), im φ = S4, i.e. φ is sur
jective. Since |S4| = |G+|, φ must be an isomorphism.

All symmetries

Consider the reflection τ through the mid-point of the cube, sending every point to its opposite. We can view this as −I in R3, so it commutes with all other symmetries of the cube.

Proposition. G ∼= S4 × C2, where G is the group of all symmetries of the cube.

Proof. Let τ be the "reflection in the mid-point" as above, which commutes with everything. (Actually it is enough to check that it commutes with rotations only.) We have to show that G = G+⟨τ⟩. This can be deduced using sizes: since G+ and ⟨τ⟩ intersect only in e, conditions (i) and (ii) of the Direct Product Theorem give an injective group homomorphism G+ × ⟨τ⟩ → G. Since both sides have the same size, the homomorphism must be surjective as well. So G ∼= G+ × ⟨τ⟩ ∼= S4 × C2.

In fact, we have also proved that the group of symmetries of the octahedron is S4 × C2, since the octahedron is the dual of the cube (if you join the centers of the faces of the cube, you get an octahedron).

9.2 Symmetries of the tetrahedron

Rotations

Let 1, 2, 3, 4 be the vertices (in any order). Let G+, the group of rotations, act on the vertices. Then orb(1) = {1, 2, 3, 4} and stab(1) = {rotations about the axis through 1 and the center of the opposite face} = {e, and the rotations by 2π/3 and 4π/3}. So |G+| = 4 · 3 = 12 by the orbit-stabilizer theorem.

The action gives a group homomorphism φ : G+ → S4, and clearly ker φ = {e}. So G+ ≤ S4 and G+ has size 12. We "guess" it is A4 (actually it must be A4, since that is the only subgroup of S4 of order 12, but it's
nice to see why that's the case).

If we rotate about an axis through vertex 1, we get (2 3 4) and (2 4 3). Similarly, rotating about axes through the other vertices gives all 3-cycles. If we rotate about an axis that passes through two opposite edges, e.g. through the 1-2 edge and the 3-4 edge, then we obtain (1 2)(3 4), and similarly all double transpositions. So G+ ∼= A4. This shows that there is no rotation that fixes two vertices and swaps the other two.

All symmetries

Now consider the plane through 1, 2 and the mid-point of 3 and 4. Reflection in this plane swaps 3 and 4, but fixes 1 and 2. So now stab(1) = ⟨(2 3 4), (3 4)⟩ ∼= D6. (Alternatively: to fix 1, we may move 2, 3, 4 around arbitrarily, giving the symmetries of the triangular base.) So |G| = 4 · 6 = 24 and G ∼= S4 (which makes sense, since we can permute the vertices in any way and still have a tetrahedron, so the symmetry group is all permutations of the vertices).

10 Möbius group

10.1 Möbius maps

We want to study maps f : C → C of the form f(z) = (az + b)/(cz + d), with a, b, c, d ∈ C and ad − bc ̸= 0.

We impose ad − bc ̸= 0 or else the map would be constant: for any z, w ∈ C,

f(z) − f(w) = [(az + b)(cw + d) − (aw + b)(cz + d)] / [(cw + d)(cz + d)] = (ad − bc)(z − w) / [(cw + d)(cz + d)].

If ad − bc = 0, then f is constant and boring (more importantly, it will not be invertible).

If c ̸= 0, then f(−d/c) involves division by 0. So we add ∞ to C to form the extended complex plane (Riemann sphere) C ∪ {∞}
= C∞ (cf. Vectors and Matrices). Then we define f(−d/c) = ∞. We call C∞ the one-point compactification of C (because it adds one point to C to make it compact, cf. Metric and Topological Spaces).

Definition (Möbius map). A Möbius map is a map C∞ → C∞ of the form

f(z) = (az + b)/(cz + d),

where a, b, c, d ∈ C and ad − bc ̸= 0, with f(−d/c) = ∞ and f(∞) = a/c when c ̸= 0 (if c = 0, then f(∞) = ∞).

Lemma. The Möbius maps are bijections C∞ → C∞.

Proof. The inverse of f(z) = (az + b)/(cz + d) is g(z) = (dz − b)/(−cz + a), which we can check by composition both ways.

Proposition. The Möbius maps form a group M under function composition (the Möbius group).

Proof. The group axioms are shown as follows:

0. If f1(z) = (a1z + b1)/(c1z + d1) and f2(z) = (a2z + b2)/(c2z + d2), then

f2 ◦ f1(z) = [a2(a1z + b1)/(c1z + d1) + b2] / [c2(a1z + b1)/(c1z + d1) + d2] = [(a1a2 + b2c1)z + (a2b1 + b2d1)] / [(c2a1 + d2c1)z + (c2b1 + d1d2)].

Now we have to check that the composite again satisfies "ad − bc ̸= 0":

(a1a2 + b2c1)(c2b1 + d1d2) − (a2b1 + b2d1)(c2a1 + d2c1) = (a1d1 − b1c1)(a2d2 − b2c2) ̸= 0.

(This works for z ̸= ∞, −d1/c1. We
have to manually check the special cases, which is simply yet more tedious algebra.)

1. The identity function is id(z) = (1z + 0)/(0z + 1), which satisfies ad − bc ̸= 0.
2. We have shown above that f−1(z) = (dz − b)/(−cz + a), with da − bc ̸= 0, so inverses are also Möbius maps.
3. Composition of functions is always associative.

M is not abelian: e.g. f1(z) = 2z and f2(z) = z + 1 do not commute, since f1 ◦ f2(z) = 2z + 2 and f2 ◦ f1(z) = 2z + 1.

Note that the point at "infinity" is not special: ∞ is no different from any other point of the Riemann sphere. However, from the way we write down a Möbius map, we have to check infinity separately. In this particular case, we can get quite far with conventions such as 1/0 = ∞, 1/∞ = 0 and (a · ∞ + b)/(c · ∞ + d) = a/c.

Clearly (az + b)/(cz + d) = (λaz + λb)/(λcz + λd) for any λ ̸= 0. So we do not have a unique representation of a map in terms of a, b, c, d. But a, b, c, d do uniquely determine a Möbius map.

Proposition. The map θ : GL2(C) → M sending (a b; c d) to the map z → (az + b)/(cz + d) is a surjective group homomorphism.

Proof. Firstly, since the determinant ad − bc of any matrix in GL2(C) is non-zero, each matrix does map to a Möbius map; this also shows that θ is surjective. We have previously calculated that

θ(A2) ◦ θ(A1)(z) = [(a1a2 + b2c1)z + (a2b1 + b2d1)] / [(c2a1 + d2c1)z + (c2b1 + d1d2)] = θ(A2A1)(z).

So it is a homomorphism. The kernel of θ is ker(θ) = {A ∈ GL2(C) :
(∀z) (az + b)/(cz + d) = z}. We can try different values of z: z = ∞ ⇒ c = 0; z = 0 ⇒ b = 0; z = 1 ⇒ d = a. So ker θ = Z = {λI : λ ∈ C, λ ̸= 0}, where I is the identity matrix; Z is the centre of GL2(C). By the isomorphism theorem, we have M ∼= GL2(C)/Z.

Definition (Projective general linear group PGL2(C)). (Non-examinable) The projective general linear group is PGL2(C) = GL2(C)/Z.

Since fA = fB iff B = λA for some λ ̸= 0 (where A, B are the corresponding matrices of the maps), if we restrict θ to SL2(C), then θ|SL2(C) : SL2(C) → M is still surjective. The kernel is now just {±I}. So

M ∼= SL2(C)/{±I} = PSL2(C).

Clearly PGL2(C) ∼= PSL2(C), since both are isomorphic to the Möbius group.

Proposition. Every Möbius map is a composite of maps of the following forms:

(i) Dilation/rotation: f(z) = az, a ̸= 0
(ii) Translation: f(z) = z + b
(iii) Inversion: f(z) = 1/z

Proof. Let g(z) = (az + b)/(cz + d) ∈ M. If c = 0, i.e. g(∞) = ∞, then g(z) = (a/d)z + (b/d) is of the above form. If c ̸= 0, let g(∞) = z0, and let h(z) = 1/(z − z0). Then hg(∞) = ∞, so hg is of the above form. We have h−1(w) = 1/w + z0, which is of type (iii) followed by (ii). So g = h−1(hg) is a composition of maps of the three forms listed above.

Alternatively, with sufficient magic, we have

z → z + d/c → 1/(z + d/c) → −(ad − bc)/(c2(z + d/c)) → a/c − (ad − bc)/(c2(z + d/c)) = (az + b)/(cz + d).
Note that this non-calculation method can be turned into another (different) composition with the same end result. So the way we compose a Möbius map from the "elementary" maps is not unique.

10.2 Fixed points of Möbius maps

Definition (Fixed point). A fixed point of f is a z such that f(z) = z.

We know that any Möbius map with c = 0 fixes ∞. We also know that z → z + b for b ̸= 0 fixes ∞ only, whereas z → az for a ̸= 0, 1 fixes 0 and ∞. It turns out that no Möbius map can have more than two fixed points, unless it is the identity.

Proposition. Any Möbius map with at least 3 fixed points must be the identity.

Proof. Consider f(z) = (az + b)/(cz + d). Its fixed points are those z which satisfy

(az + b)/(cz + d) = z ⇔ cz2 + (d − a)z − b = 0.

A quadratic has at most two roots, unless c = b = 0 and d = a, in which case the equation just says 0 = 0. But if c = b = 0 and d = a, then f is just the identity.

Proposition. Any Möbius map is conjugate to f(z) = νz for some ν ̸= 0, or to f(z) = z + 1.

Proof. We have the surjective group homomorphism θ : GL2(C) → M, and the conjugacy classes of GL2(C) have representatives of the types

(λ 0; 0 µ), giving g(z) = (λz + 0)/(0z + µ) = (λ/µ)z = νz;
(λ 0; 0 λ), giving g(z) = (λz + 0)/(0z + λ) = 1z = z;
(λ 1; 0 λ), giving g(z) = (λz + 1)/λ = z + 1/λ.

But the last one is not of the form z + 1. However, the last g(z) can also be represented by (1 1/λ; 0 1), which is conjugate to (1 1; 0 1) (since that's its Jordan normal form). So z + 1/λ is also conjugate to z + 1.

Now we
see easily that (for ν ̸= 0, 1) νz has exactly 0 and ∞ as fixed points, while z + 1 has only ∞. Does this transfer to their conjugates?

Proposition. Every non-identity Möbius map has exactly 1 or 2 fixed points.

Proof. Given f ∈ M with f ̸= id, there exists h ∈ M such that hf h−1(z) = νz or hf h−1(z) = z + 1. Now f(w) = w ⇔ hf(w) = h(w) ⇔ hf h−1(h(w)) = h(w). So h(w) is a fixed point of hf h−1. Since h is a bijection, f and hf h−1 have the same number of fixed points. So f has exactly 2 fixed points if f is conjugate to νz (with ν ̸= 1), and exactly 1 fixed point if f is conjugate to z + 1.

Intuitively, conjugation preserves the number of fixed points because if we conjugate by h, we first move the Riemann sphere around, apply f (which fixes its fixed points), then restore the Riemann sphere to its original orientation. So we have simply moved the fixed points around by h.

10.3 Permutation properties of Möbius maps

We have seen that a Möbius map with three fixed points is the identity. As a corollary, we obtain the following.

Proposition. Given f, g ∈ M, if there exist distinct z1, z2, z3 ∈ C∞ such that f(zi) = g(zi), then f = g, i.e. every Möbius map is uniquely determined by three points.

Proof. As Möbius maps are invertible, write f(zi) = g(zi) as g−1f(zi) = zi. So g−1f has three fixed points, hence g−1f must be the identity. So f = g.

Definition (Three-transitive action). An action of G on X is called three-transitive if the induced action on {(x1, x2, x3) ∈ X3 : xi pairwise distinct}, given by g(x1, x2, x3) = (g(x1), g(x2), g(x3
)), is transitive. This means that for any two triples x1, x2, x3 and y1, y2, y3 of distinct elements of X, there exists g ∈ G such that g(xi) = yi. If this g is always unique, then the action is called sharply three-transitive.

This is a really weird definition. The reason we raise it here is that the Möbius maps satisfy this property.

Proposition. The Möbius group M acts sharply three-transitively on C∞.

Proof. We want to show that we can send any three points to any other three points. However, it is easier to show that we can send any three points to ∞, 0, 1. Suppose we want to send z1 → ∞, z2 → 0, z3 → 1. Then the following works:

f(z) = [(z − z2)(z3 − z1)] / [(z − z1)(z3 − z2)].

If any zi is ∞, we simply remove the terms containing zi; e.g. if z1 = ∞, we have f(z) = (z − z2)/(z3 − z2).

So given also w1, w2, w3 distinct in C∞ and g ∈ M sending w1 → ∞, w2 → 0, w3 → 1, we have g−1f(zi) = wi. The uniqueness of the map follows from the fact that a Möbius map is uniquely determined by 3 points.

3 points not only define a Möbius map uniquely; they also uniquely define a line or circle. Note that on the Riemann sphere, we can think of a line as a circle through infinity, and it would be technically correct to refer to both of them as "circles". However, we would rather be clearer and say "line/circle".

We will see how Möbius maps relate to lines and circles. We will first recap some knowledge about lines and circles in the complex plane.

Lemma. The general equation of a circle or straight line in C is

Az z̄ + B̄z + Bz̄ + C = 0,

where A, C ∈ R and |B|2 > AC. A = 0 gives a straight line. If A ̸=
0, B = 0, we have a circle centered at the origin. If C = 0, the circle passes through 0.

Proof. This comes from noting that |z − a| = r for real r > 0 is a circle, and |z − a| = |z − b| with a ̸= b is a line. The detailed proof can be found in Vectors and Matrices.

Proposition. Möbius maps send circles/straight lines to circles/straight lines. Note that a Möbius map can send a circle to a straight line and vice versa. Alternatively: Möbius maps send circles on the Riemann sphere to circles on the Riemann sphere.

Proof. We can calculate directly: w = (az + b)/(cz + d) ⇔ z = (dw − b)/(−cw + a), and substituting this z into the circle equation gives A′w w̄ + B̄′w + B′w̄ + C′ = 0 with A′, C′ ∈ R.

Alternatively, we know that each Möbius map is a composition of translations, dilations/rotations and the inversion, so we can check each of the three types. Clearly dilations/rotations and translations map circles/lines to circles/lines. So we need only do the inversion: if w = z−1, then

Az z̄ + B̄z + Bz̄ + C = 0 ⇔ Cw w̄ + Bw + B̄w̄ + A = 0.

Example. Consider f(z) = (z − i)/(z + i). Where does the real line go? The real line is simply the circle through 0, 1, ∞. f maps this circle to the circle containing f(∞) = 1, f(0) = −1 and f(1) = −i, which is the unit circle.

Where does the upper half plane go? We know that the Möbius map is smooth, so the upper half plane maps either to the inside of the circle or to the outside. We try the point i, which maps to 0. So the upper half plane is mapped to the inside of the circle.

10.4 Cross-ratios

Finally, we'll look at an important concept known as cross-ratios. Roughly speaking, this is a quantity that is preserved by Möbius transforms.

Definition (Cross-ratios). Given four distinct points z
1, z2, z3, z4 ∈ C∞, their cross-ratio is [z1, z2, z3, z4] = g(z4), with g being the unique Möbius map that maps z1 → ∞, z2 → 0, z3 → 1. So [∞, 0, 1, λ] = λ for any λ ̸= ∞, 0, 1. Explicitly, we have

[z1, z2, z3, z4] = (z4 − z2)/(z4 − z1) · (z3 − z1)/(z3 − z2)

(with special cases as above). We know that this exists and is uniquely defined because M acts sharply three-transitively on C∞.

Note that different authors use different permutations of 1, 2, 3, 4 in the definition, but they all lead to the same results as long as one is consistent.

Lemma. For z1, z2, z3, z4 ∈ C∞ all distinct,

[z1, z2, z3, z4] = [z2, z1, z4, z3] = [z3, z4, z1, z2] = [z4, z3, z2, z1],

i.e. if we perform a double transposition on the entries, the cross-ratio is retained.

Proof. By inspection of the formula.

Proposition. If f ∈ M, then [z1, z2, z3, z4] = [f(z1), f(z2), f(z3), f(z4)].

Proof. Use our original definition of the cross-ratio (instead of the formula). Let g be the unique Möbius map with g(z1) = ∞, g(z2) = 0, g(z3) = 1, so that [z1, z2, z3, z4] = g(z4) = λ. Then gf−1 sends

f(z1) → z1 → ∞, f(z2) → z2 → 0, f(z3) → z3 → 1, f(z4) → z4 → λ.
So [f(z1), f(z2), f(z3), f(z4)] = gf−1(f(z4)) = g(z4) = λ.

In fact, we can see from this proof that: given z1, z2, z3, z4 all distinct and w1, w2, w3, w4 all distinct in C∞, there exists f ∈ M with f(zi) = wi iff [z1, z2, z3, z4] = [w1, w2, w3, w4].

Corollary. z1, z2, z3, z4 lie on some circle/straight line iff [z1, z2, z3, z4] ∈ R.

Proof. Let C be the circle/line through z1, z2, z3, and let g be the unique Möbius map with g(z1) = ∞, g(z2) = 0, g(z3) = 1. Then g(z4) = [z1, z2, z3, z4] by definition. Since Möbius maps preserve circles/lines, z4 ∈ C ⇔ g(z4) lies on the line through ∞, 0, 1, i.e. g(z4) ∈ R.

11 Projective line (non-examinable)

We have seen in matrix groups that GL2(C) acts on C2, the column vectors. Instead, we can also have GL2(C) act on the set of 1-dimensional subspaces (i.e. lines) of C2. For any v ∈ C2, write the line generated by v as ⟨v⟩. Then clearly ⟨v⟩ = {λv : λ ∈ C}.

Now for any A ∈ GL2(C), define the action by A⟨v⟩ = ⟨Av⟩. We check that this is well-defined: for any ⟨v⟩ = ⟨w⟩, we want ⟨Av⟩ = ⟨Aw⟩. This is true because ⟨v⟩ = ⟨w⟩ if and only if w = λv for some λ ∈ C \ {0}, and then ⟨Aw⟩ = ⟨Aλv⟩ = ⟨λ(Av)⟩ = ⟨Av⟩.

What is the kernel of this action? By definition, the kernel has to fix all lines. In particular, it has to fix our magic lines generated by (1, 0), (0, 1) and (1, 1). Since A⟨(1, 0)⟩ = ⟨(1, 0)⟩, we must have A(1, 0) = λ(1, 0) for some λ. Similarly, A(0, 1) = µ(0, 1) for some µ. So A = (λ 0; 0 µ). However, we also want A⟨(1, 1)⟩ = ⟨(1, 1)⟩. Since A is a linear function, A(1, 1) = A(1, 0) + A(0, 1) = (λ, µ), and for this vector to be parallel to (1, 1) we must have λ = µ. So A = λI for some λ. Clearly any matrix of this form fixes every line. So the kernel is Z = {λI : λ ∈ C \ {0}}.

Note that every line is uniquely determined by its slope: for any v = (v1, v2), w = (w1, w2), we have ⟨v⟩ = ⟨w⟩ iff v1/v2 = w1/w2. So we have a one-to-one correspondence between the lines and C∞, mapping ⟨(z1, z2)⟩ ↔ z1/z2. Finally, for each A = (a b; c d) ∈ GL2(C) and any line ⟨(z, 1)⟩, we have

A⟨(z, 1)⟩ = ⟨(az + b, cz + d)⟩ ↔ (az + b)/(cz + d).

So GL2(C) acting on the lines is just "the same" as the Möbius group acting on points.

term is divisible by p. Also |G| = pn is divisible by p. Hence the number of conjugacy classes of size 1 is divisible by p. We know {e} is a conjugacy class of size 1. So there must be at least p conjugacy classes of size 1. Since the smallest prime number is 2, there must be at least 2 such classes, i.e. there is a conjugacy class {x} ̸= {e
}. But if {x} is a conjugacy class on its own, then by definition g−1xg = x for all g ∈ G, i.e. xg = gx for all g ∈ G. So x ∈ Z(G), and Z(G) is non-trivial.

The theorem allows us to prove interesting things about p-groups by induction — we can quotient G by Z(G) and get a smaller p-group. One way to do this is via the lemma below.

Lemma. For any group G, if G/Z(G) is cyclic, then G is abelian.

In other words, if G/Z(G) is cyclic, then it is in fact trivial, since the center of an abelian group is the whole group.

Proof. Let gZ(G) be a generator of the cyclic group G/Z(G). Hence every coset of Z(G) is of the form grZ(G). So every element x ∈ G must be of the form grz for some z ∈ Z(G) and r ∈ Z. To show G is abelian, let x̄ = gr̄z̄ be another element, with z̄ ∈ Z(G), r̄ ∈ Z. Note that z and z̄ are in the center, and hence commute with every element. So we have

x x̄ = grz gr̄z̄ = gr gr̄ z z̄ = gr̄ gr z̄ z = gr̄ z̄ gr z = x̄ x.

So they commute. So G is abelian.

This is a general lemma for groups, but it is particularly useful when applied to p-groups.

Corollary. If p is prime and |G| = p2, then G is abelian.

Proof. Since Z(G) ≤ G, its order must be 1, p or p2. Since it is not trivial, it can only be p or p2. If it has order p2, then it is the whole group, and the group is abelian. Otherwise, G/Z(G) has order p2/p = p. But then it must be cyclic, and thus G must be abelian by the lemma — forcing Z(G) = G, which contradicts |Z(G)| = p. So G is
abelian.

Theorem. Let G be a group of order p^a, where p is a prime number. Then it has a subgroup of order p^b for any 0 ≤ b ≤ a.

This means there is a subgroup of every conceivable order. This is not true for general groups. For example, A_5 has no subgroup of order 30, or else that would be a normal subgroup (subgroups of index 2 are always normal), contradicting simplicity.

Proof. We induct on a. If a = 1, then {e} and G give subgroups of order p^0 and p^1. So done. Now suppose a > 1, and we want to construct a subgroup of order p^b. If b = 0, then this is trivial, namely {e} ≤ G has order 1. Otherwise, we know Z(G) is non-trivial. So pick x ≠ e in Z(G). Since ord(x) | |G|, its order is a power of p. If it in fact has order p^c, then x^{p^{c−1}} has order p. So we can suppose, by renaming, that x has order p. We have thus generated a subgroup ⟨x⟩ of order exactly p. Moreover, since x is in the center, x commutes with everything in G. So ⟨x⟩ is in fact a normal subgroup of G. This is the point of choosing it in the center. Therefore G/⟨x⟩ has order p^{a−1}. Since this is a strictly smaller p-group, we can by induction suppose it has a subgroup of any order p^c with 0 ≤ c ≤ a − 1. In particular, it has a subgroup L of order p^{b−1}. By the subgroup correspondence, there is some K ≤ G such that L = K/⟨x⟩ and ⟨x⟩ ≤ K. But then K has order p^b. So done.

1.6 Finite abelian groups

We now move on to a small section, which is small because we will come back to it later, and actually prove what we claim. It turns out finite abelian groups are very easy to classify. We can just write down a list of all finite abelian groups. We write down the classification theorem, and then prove it in the last part of the course, where we hit this with a huge sledgehammer.

Theorem (Classification of finite abelian groups). Let G be a finite abelian group. Then there exist some d_1, · · · , d_r such that

G ≅ C_{d_1} × C_{d_2} × · · · × C_{d_r}.

Moreover, we can pick d_i such that d_{i+1} | d_i for each i, and this expression is unique.

It turns out the best way to prove this is not to think of G as a group, but as a Z-module, which is something we will come to later.

Example. The abelian groups of order 8 are C_8, C_4 × C_2 and C_2 × C_2 × C_2.

Sometimes this is not the most useful form of decomposition. To get a nicer decomposition, we use the following lemma:

Lemma. If n and m are coprime, then C_{mn} ≅ C_m × C_n.

This is a grown-up version of the Chinese remainder theorem; indeed, it is what the Chinese remainder theorem really says.

Proof. It suffices to find an element of order nm in C_m × C_n. Then since C_m × C_n has order nm, it must be cyclic, and hence isomorphic to C_{nm}. Let g ∈ C_m have order m and h ∈ C_n have order n, and consider (g, h) ∈ C_m × C_n. Suppose the order of (g, h) is k. Then (g, h)^k = (e, e). Hence (g^k, h^k) = (e, e). So the orders of g and h divide k, i.e. m | k and n | k. As m and n are coprime, this means that mn | k. As k = ord((g, h)) and (g, h) is an element of the group C_m × C_n of order mn, we must have k | nm. So k = nm.

Corollary. For any finite abelian group G, we have G ≅ C_{d_1} × C_{d_2} × · · · × C_{d_r}, where each d_i is some prime power.

Proof. From the classification theorem, iteratively apply the previous lemma to break each component up into products of prime powers.

As promised, this is short.

1.7 Sylow theorems

We finally get to the big theorem of this part of the course.
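As a quick aside before the theorem, the Chinese-remainder lemma above is easy to check numerically. Writing C_m additively as Z/m, the pair (1, 1) ∈ Z/m × Z/n has order lcm(m, n), which equals mn exactly when gcd(m, n) = 1. A minimal sketch in Python (the helper name is mine, not from the notes):

```python
def order_in_product(m: int, n: int) -> int:
    """Order of (1, 1) in Z/m x Z/n under addition: the least k >= 1
    with k congruent to 0 both mod m and mod n, i.e. lcm(m, n)."""
    k = 1
    while k % m != 0 or k % n != 0:
        k += 1
    return k

# Coprime case: (1, 1) has order 15 = 3 * 5, so Z/3 x Z/5 is cyclic of order 15.
assert order_in_product(3, 5) == 15
# Non-coprime case: every element has order dividing lcm(4, 6) = 12 < 24,
# so Z/4 x Z/6 is not cyclic.
assert order_in_product(4, 6) == 12
```

Since every element of Z/m × Z/n has order dividing lcm(m, n), the element (1, 1) realizes the largest possible order, and the first assertion is exactly the content of the lemma for m = 3, n = 5.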
Theorem (Sylow theorems). Let G be a finite group of order p^a · m, with p a prime and p ∤ m. Then:

(i) The set of Sylow p-subgroups of G, given by Syl_p(G) = {P ≤ G : |P| = p^a}, is non-empty. In other words, G has a subgroup of order p^a.

(ii) All elements of Syl_p(G) are conjugate in G.

(iii) The number of Sylow p-subgroups n_p = |Syl_p(G)| satisfies n_p ≡ 1 (mod p) and n_p | |G| (in fact n_p | m, since p is not a factor of n_p).

These are sometimes known as Sylow's first/second/third theorems respectively. We will not prove this just yet. We first look at how we can apply this theorem. We can use it without knowing how to prove it.

Lemma. If n_p = 1, then the unique Sylow p-subgroup is normal in G.

Proof. Let P be the unique Sylow p-subgroup, let g ∈ G, and consider g^{−1}Pg. Since this is isomorphic to P, we must have |g^{−1}Pg| = p^a, i.e. it is also a Sylow p-subgroup. Since there is only one, we must have P = g^{−1}Pg. So P is normal.

Corollary. Let G be a non-abelian simple group. Then |G| | n_p!/2 for every prime p such that p | |G|.

Proof. The group G acts on Syl_p(G) by conjugation. So it gives a permutation representation φ : G → Sym(Syl_p(G)) ≅ S_{n_p}. We know ker φ ◁ G. But G is simple. So ker(φ) = {e} or G. We want to show it is not the whole of G. If we had G = ker(φ), then g^{−1}Pg = P for all g ∈ G. Hence P is a normal subgroup. As G is simple, either P = {e}, or P = G. We know P cannot be trivial since p | |
G|. But if G = P, then G is a p-group, and so has a non-trivial center; hence G is not non-abelian simple, a contradiction. So we must have ker(φ) = {e}. Then by the first isomorphism theorem, we know G ≅ im φ ≤ S_{n_p}.

We have proved the theorem without the divide-by-two part. To prove the whole result, we need to show that in fact im(φ) ≤ A_{n_p}. Consider the composition of homomorphisms

sgn ◦ φ : G → S_{n_p} → {±1}.

If this is surjective, then ker(sgn ◦ φ) ◁ G has index 2 (since the index is the size of the image), and is not the whole of G. This means G is not simple (the case G ≅ C_2 is ruled out since C_2 is abelian), a contradiction. So the kernel must be the whole of G, and sgn ◦ φ is the trivial map. In other words, sgn(φ(g)) = +1 for all g ∈ G. So φ(g) ∈ A_{n_p}. So in fact we have G ≅ im(φ) ≤ A_{n_p}. So we get |G| | n_p!/2.

Example. Suppose |G| = 1000. Then G is not simple. To show this, we need to factorize 1000. We have |G| = 2^3 · 5^3. We pick our favorite prime to be p = 5. We know n_5 ≡ 1 (mod 5), and n_5 | 2^3 = 8. The only number that satisfies both conditions is n_5 = 1. So the Sylow 5-subgroup is normal, and hence G is not simple.

Example. Let |G| = 132 = 2^2 · 3 · 11. We want to show this is not simple. So for a contradiction suppose it is. We start by looking at p = 11. We know n_11 ≡ 1 (mod 11). Also n_11 | 12. As G is simple, we must have n_11 = 12, since n_11 = 1 would give a normal Sylow 11-subgroup. Now look at p = 3. We have n_3 ≡ 1 (mod 3) and n_3 | 44. The possible values of n_3 are 4 and 22.

If n_3 = 4, then the corollary says |G| | 4!/2 = 12
, which is of course nonsense. So n_3 = 22.

At this point, we count how many elements of each order there are. This is particularly useful if p | |G| but p^2 ∤ |G|, i.e. the Sylow p-subgroups have order p and hence are cyclic. As all Sylow 11-subgroups are disjoint apart from {e}, we know there are 12 · (11 − 1) = 120 elements of order 11. We do the same thing with the Sylow 3-subgroups: we need 22 · (3 − 1) = 44 elements of order 3. But 120 + 44 = 164 is more elements than the group has. This can't happen. So G cannot be simple after all.

We now get to prove our big theorem. This involves a non-trivial amount of trickery.

Proof of Sylow's theorem. Let G be a finite group with |G| = p^a m, and p ∤ m.

(i) We need to show that Syl_p(G) ≠ ∅, i.e. we need to find some subgroup of order p^a. As always, we find something clever for G to act on. We let

Ω = {X ⊆ G : |X| = p^a}.

We let G act on Ω by

g ∗ {g_1, g_2, · · · , g_{p^a}} = {gg_1, gg_2, · · · , gg_{p^a}}.

Let Σ ⊆ Ω be an orbit. We first note that if {g_1, · · · , g_{p^a}} ∈ Σ, then by the definition of an orbit, for every g ∈ G,

(gg_1^{−1}) ∗ {g_1, · · · , g_{p^a}} = {g, gg_1^{−1}g_2, · · · , gg_1^{−1}g_{p^a}} ∈ Σ.

The important thing is that this set contains g. So for each g, Σ contains a set X which contains g. Since each set X has size p^a, we must have |Σ| ≥ |G|/p^a = m. Suppose |Σ| = m. Then the orbit-stabilizer theorem says the stabilizer H of any {g_1, · · · , g_{p^a}} ∈ Σ has index
m, hence |H| = p^a, and thus H ∈ Syl_p(G). So we need to show that not every orbit Σ can have size > m. Again by the orbit-stabilizer theorem, the size of any orbit divides the order of the group, |G| = p^a m. So if |Σ| > m, then p | |Σ|. Suppose we can show that p ∤ |Ω|. Then not every orbit Σ can have size > m, since Ω is the disjoint union of all the orbits, and thus we are done.

So we have to show p ∤ |Ω|. This is just some basic counting. We have

|Ω| = (|G| choose p^a) = (p^a m choose p^a) = ∏_{j=0}^{p^a − 1} (p^a m − j)/(p^a − j).

Now note that for 0 < j < p^a, the largest power of p dividing p^a m − j is the largest power of p dividing j, and similarly the largest power of p dividing p^a − j is also the largest power of p dividing j; for j = 0, both p^a m and p^a are divisible by exactly p^a, since p ∤ m. So we have the same power of p on top and bottom for each factor in the product, and they cancel. So the result is not divisible by p.

This proof is not straightforward. We first needed the clever idea of letting G act on Ω. But then, given this set, the obvious thing to do would be to find something in Ω that is also a group. This is not what we do. Instead, we find an orbit whose stabilizer is a Sylow p-subgroup.

(ii) We instead prove something stronger: if Q ≤ G is a p-subgroup (i.e. |Q| = p^b, for b not necessarily a), and P ≤ G is a Sylow p-subgroup, then there is a g ∈ G such that g^{−1}Qg ≤ P. Applying this to the case where Q is another Sylow p-subgroup says there is a g such that g^{−1}Qg ≤ P, but since g^{−1}Qg has the same size as P, they must be equal.

We let Q act on the set of cosets G/P via q ∗ gP = qgP. We know the orbits of this action have size dividing |Q|, so each is
either 1 or divisible by p. But they can't all be divisible by p, since |G/P| = m is coprime to p. So at least one of them has size 1, say {gP}. In other words, for every q ∈ Q, we have qgP = gP. This means g^{−1}qg ∈ P. This holds for every element q ∈ Q. So we have found a g such that g^{−1}Qg ≤ P.

(iii) Finally, we need to show that n_p ≡ 1 (mod p) and n_p | |G|, where n_p = |Syl_p(G)|. The second part is easier — by Sylow's second theorem, the action of G on Syl_p(G) by conjugation has one orbit. By the orbit-stabilizer theorem, the size of the orbit, which is |Syl_p(G)| = n_p, divides |G|. This proves the second part.

For the first part, let P ∈ Syl_p(G). Consider the action by conjugation of P on Syl_p(G). Again by the orbit-stabilizer theorem, the orbits each have size 1 or size divisible by p. But we know there is one orbit of size 1, namely {P} itself. To show n_p = |Syl_p(G)| ≡ 1 (mod p), it is enough to show there are no other orbits of size 1.

Suppose {Q} is an orbit of size 1. This means that for every x ∈ P, we get x^{−1}Qx = Q. In other words, P ≤ N_G(Q). Now N_G(Q) is itself a group, and we can look at its Sylow p-subgroups. We know Q ≤ N_G(Q) ≤ G. So p^a | |N_G(Q)| | p^a m. So p^a is the biggest power of p that divides |N_G(Q)|. So Q is a Sylow p-subgroup of N_G(Q). Now we know P ≤ N_G(Q) is also a Sylow p-subgroup of N_G(Q). By Sylow's second theorem, they must be conjugate in N_G(Q). But conjugating Q by anything in N_G(Q) sends Q to itself, by the definition of N_G(Q). So we must have P =
Q. So the only orbit of size 1 is {P} itself. So done.

This is all the theory of groups we've got. In the remaining time, we will look at some interesting examples of groups.

Example. Let G = GL_n(Z/p), i.e. the set of invertible n × n matrices with entries in Z/p, the integers modulo p. Here p is obviously a prime. When we do rings later, we will study this properly.

First of all, we would like to know the size of this group. A matrix A ∈ GL_n(Z/p) is the same as n linearly independent vectors in the vector space (Z/p)^n. We can just work out how many there are. This is not too difficult, when you know how. We can pick the first vector, which can be anything except zero. So there are p^n − 1 ways of choosing the first vector. Next, we need to pick the second vector. This can be anything that is not in the span of the first vector, and this rules out p possibilities. So there are p^n − p ways of choosing the second vector. Continuing this chain of thought, we have

|GL_n(Z/p)| = (p^n − 1)(p^n − p)(p^n − p^2) · · · (p^n − p^{n−1}).

What is a Sylow p-subgroup of GL_n(Z/p)? We first work out what its order is. We can factorize the above as

|GL_n(Z/p)| = (1 · p · p^2 · · · p^{n−1})((p^n − 1)(p^{n−1} − 1) · · · (p − 1)).

So the largest power of p that divides |GL_n(Z/p)| is p^{n(n−1)/2}. Let's find a subgroup of size p^{n(n−1)/2}. We consider the matrices of the form

( 1 ∗ · · · ∗ )
( 0 1 · · · ∗ )
(     . . .   )
( 0 0 · · · 1 )

i.e. upper triangular matrices with 1s on the diagonal and arbitrary entries ∗ above it, and let U ≤ GL_n(Z/p) be the set of all such matrices. Then we know |U| = p^{n(n−1)/2}, as each ∗ can be chosen to be anything in Z/p, and there are n(n−1)/2 ∗'s.

Is the Sylow p-subgroup unique? No. We can take the lower triangular matrices with 1s on the diagonal and get another Sylow p-subgroup.

Example. Let's be less ambitious and consider GL_2(Z/p). So |G| = p(p^2 − 1)(p − 1) = p(p − 1)^2(p + 1). Let ℓ be another prime number such that ℓ | p − 1, and suppose the largest power of ℓ that divides |G| is ℓ^2. Can we (explicitly) find a Sylow ℓ-subgroup? First, we want to find an element of order ℓ. How is p − 1 related to p (apart from the obvious way)? We know that

(Z/p)^× = {x ∈ Z/p : (∃y) xy ≡ 1 (mod p)} ≅ C_{p−1}.

So as ℓ | p − 1, there is a subgroup C_ℓ ≤ C_{p−1} ≅ (Z/p)^×. Then we immediately know where to find a subgroup of order ℓ^2: we have

C_ℓ × C_ℓ ≤ (Z/p)^× × (Z/p)^× ≤ GL_2(Z/p),

where the final inclusion is as the diagonal matrices, identifying (a, b) ↔ ( a 0; 0 b ). So this is the Sylow ℓ-subgroup.

2 Rings

2.1 Definitions and examples

We now move on to something completely different — rings. In a ring, we are allowed to add, subtract and multiply, but not divide. Our canonical example of a ring would be Z, the integers, as studied in
IA Numbers and Sets. In this course, we are only going to consider rings in which multiplication is commutative, since these rings behave like “number systems”, where we can study number theory. However, some of these rings do not behave like Z. Thus one major goal of this part is to understand the different properties of Z, whether they are present in arbitrary rings, and how different properties relate to one another.

Definition (Ring). A ring is a quintuple (R, +, ·, 0_R, 1_R) where 0_R, 1_R ∈ R, and +, · : R × R → R are binary operations such that

(i) (R, +, 0_R) is an abelian group.

(ii) The operation · : R × R → R satisfies associativity, i.e.

a · (b · c) = (a · b) · c,

and identity:

1_R · r = r · 1_R = r.

(iii) Multiplication distributes over addition, i.e.

r_1 · (r_2 + r_3) = (r_1 · r_2) + (r_1 · r_3),
(r_1 + r_2) · r_3 = (r_1 · r_3) + (r_2 · r_3).

Notation. If R is a ring and r ∈ R, we write −r for the inverse to r in (R, +, 0_R). This satisfies r + (−r) = 0_R. We write r − s to mean r + (−s), etc.

Some people don't insist on the existence of the multiplicative identity, but we will for the purposes of this course. Since we can add and multiply two elements, by induction, we can add and multiply any finite number of elements. However, the notions of infinite sum and product are undefined. It doesn't make sense to ask if an infinite sum converges.

Definition (Commutative ring). We say a ring R is commutative if a · b = b · a for all a, b ∈ R.

From now onwards, all rings in this course are going to be commutative. Just as we have groups and subgroups,