We also have subrings.

Definition (Subring). Let (R, +, ·, 0R, 1R) be a ring, and S ⊆ R be a subset. We say S is a subring of R if 0R, 1R ∈ S, and the operations +, · make S into a ring in its own right. In this case we write S ≤ R.

Example. The familiar number systems are all rings: we have Z ≤ Q ≤ R ≤ C, under the usual 0, 1, +, ·.

2 Rings IB Groups, Rings and Modules

Example. The set Z[i] = {a + ib : a, b ∈ Z} ≤ C of Gaussian integers is a ring. We also have the ring Q[√2] = {a + b√2 ∈ R : a, b ∈ Q} ≤ R. We will use the square brackets notation quite frequently. It should be clear what it means, and we will define it properly later.

In general, elements in a ring do not have inverses. This is not a bad thing. This is what makes rings interesting. For example, the division algorithm would be rather contentless if everything in Z had an inverse. Fortunately, Z has only two invertible elements: 1 and −1. We call these units.

Definition (Unit). An element u ∈ R is a unit if there is another element v ∈ R such that u · v = 1R.

It is important that this depends on R, not just on u. For example, 2 ∈ Z is not a unit, but 2 ∈ Q is a unit (since 1/2 is an inverse). A special case is when (almost) everything is a unit.

Definition (Field). A field is a non-zero ring in which every u ≠ 0R is a unit.

We will later show that 0R cannot be a unit except in a very degenerate case.

Example. Z is not a field, but Q, R, C are all fields. Similarly, Z[i] is not a field, while Q[√2] is.
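The definition of a unit can be explored concretely in a small ring. A minimal sketch (the modulus 12 and the brute-force search are our own choices, not from the notes):

```python
# Finding the units of Z/12Z by brute force: u is a unit iff some v gives
# u*v = 1 in the ring. (Illustrative sketch; the modulus 12 is our choice.)

n = 12
units = [u for u in range(n) if any(u * v % n == 1 for v in range(n))]
print(units)  # [1, 5, 7, 11]: exactly the residues coprime to 12
```

As with 2 ∈ Z versus 2 ∈ Q, being a unit depends on the ring: 5 is a unit in Z/12Z but not in Z.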
Example. Let R be a ring. Then 0R + 0R = 0R, since this is true in the group (R, +, 0R). Then for any r ∈ R, we get r · (0R + 0R) = r · 0R. We now use the fact that multiplication distributes over addition. So r · 0R + r · 0R = r · 0R. Adding −(r · 0R) to both sides gives r · 0R = 0R. This is true for any element r ∈ R.

From this, it follows that if R ≠ {0}, then 1R ≠ 0R: if they were equal, then take r ≠ 0R. Then r = r · 1R = r · 0R = 0R, which is a contradiction. Note, however, that {0} forms a ring (with the only possible operations and identities), the zero ring, albeit a boring one. However, it is often a counterexample to many things.

Definition (Product of rings). Let R, S be rings. Then the product R × S is a ring via (r1, s1) + (r2, s2) = (r1 + r2, s1 + s2) and (r1, s1) · (r2, s2) = (r1 · r2, s1 · s2). The zero is (0R, 0S) and the one is (1R, 1S). We can (but won't) check that this is indeed a ring.

Definition (Polynomial). Let R be a ring. Then a polynomial with coefficients in R is an expression f = a0 + a1X + a2X² + · · · + anXⁿ, with ai ∈ R. The symbols Xⁱ are formal symbols. We identify f and f + 0R · Xⁿ⁺¹ as the same thing.

Definition (Degree of polynomial). The degree of a polynomial f is the largest m such that am ≠ 0.

Definition (Monic polynomial). Let f have degree m. If am = 1, then f is called monic.

Definition (Polynomial ring). We write R[X] for the set of all polynomials with coefficients in R. The operations are performed in the obvious way, i.e. if
f = a0 + a1X + · · · + anXⁿ and g = b0 + b1X + · · · + bkXᵏ are polynomials, then

f + g = Σ_{i=0}^{max{n,k}} (ai + bi) Xⁱ,

and

f · g = Σ_{i=0}^{n+k} (Σ_{j=0}^{i} aj b_{i−j}) Xⁱ.

We identify R with the constant polynomials, i.e. polynomials Σ aiXⁱ with ai = 0 for i > 0. In particular, 0R ∈ R and 1R ∈ R are the zero and one of R[X]. This is in fact a ring.

Note that a polynomial is just a sequence of numbers, interpreted as the coefficients of some formal symbols. While it does indeed induce a function in the obvious way, we shall not identify the polynomial with the function given by it, since different polynomials can give rise to the same function. For example, in (Z/2Z)[X], f = X² + X is not the zero polynomial, since its coefficients are not zero. However, f(0) = 0 and f(1) = 0. As a function, it is identically zero. So f ≠ 0 as a polynomial but f = 0 as a function.

Definition (Power series). We write R[[X]] for the ring of power series on R, i.e. f = a0 + a1X + a2X² + · · ·, where each ai ∈ R. This has addition and multiplication the same as for polynomials, but without upper limits.

A power series is very much not a function. We don't talk about whether the sum converges or not, because it is not a sum.

Example. Is 1 − X ∈ R[X] a unit? For every g = a0 + · · · + anXⁿ (with an ≠ 0), we get (1 − X)g = stuff + · · · − anXⁿ⁺¹, which is not 1. So g cannot be the inverse of (1 − X). So (1 − X) is not a unit.
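The coefficient-wise operations above can be sketched concretely. A minimal illustration (representing polynomials as plain coefficient lists over Z/2Z is our own modelling choice):

```python
# Polynomials over Z/2Z as coefficient lists, illustrating that the
# polynomial f = X^2 + X is non-zero while the function it induces is zero.
# (Illustrative sketch; the list representation is our own choice.)

P = 2  # work in Z/2Z

def poly_add(f, g):
    """(f + g)_i = a_i + b_i, up to the max of the two lengths."""
    n = max(len(f), len(g))
    f, g = f + [0] * (n - len(f)), g + [0] * (n - len(g))
    return [(a + b) % P for a, b in zip(f, g)]

def poly_mul(f, g):
    """(f * g)_i = sum_j a_j * b_{i-j}, the convolution formula above."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % P
    return out

def evaluate(f, x):
    return sum(a * x**i for i, a in enumerate(f)) % P

f = [0, 1, 1]  # X + X^2: non-zero coefficients, so f != 0 as a polynomial
print([evaluate(f, x) for x in range(P)])  # but zero as a function: [0, 0]
```

This matches the example above: f is distinguishable from 0 as a coefficient sequence, but not as a function on Z/2Z.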
However, 1 − X ∈ R[[X]] is a unit, since (1 − X)(1 + X + X² + · · · ) = 1.

Definition (Laurent polynomials). The Laurent polynomials on R are the set R[X, X⁻¹], i.e. each element is of the form f = Σ_{i∈Z} aiXⁱ, where ai ∈ R and only finitely many ai are non-zero. The operations are the obvious ones.

We can also think of Laurent series, but we have to be careful. We allow infinitely many positive coefficients, but only finitely many negative ones. Otherwise, in the formula for multiplication, we will have an infinite sum, which is undefined.

Example. Let X be a set, and R be a ring. Then the set of all functions on X, i.e. functions f : X → R, is a ring with ring operations given by (f + g)(x) = f(x) + g(x) and (f · g)(x) = f(x) · g(x). Here zero is the constant function 0 and one is the constant function 1.

Usually, we don't want to consider all functions X → R. Instead, we look at some subrings of this. For example, we can consider the ring of all continuous functions R → R. This contains, for example, the polynomial functions, which is just R[X] (since in R, polynomials are functions).

2.2 Homomorphisms, ideals, quotients and isomorphisms

Just like groups, we will come up with analogues of homomorphisms, normal subgroups (which are now known as ideals), and quotients.

Definition (Homomorphism of rings). Let R, S be rings. A function φ : R → S is a ring homomorphism if it preserves everything we can think of, i.e.
(i) φ(r1 + r2) = φ(r1) + φ(r2),
(ii) φ(0R) = 0S,
(iii) φ(r1 · r2) = φ(r1) · φ(r2),
(iv) φ(1R) = 1S.
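The claim that 1 − X is invertible in R[[X]] can be checked by computing the inverse's coefficients one at a time: requiring the convolution (f · g)ₙ to be 1 for n = 0 and 0 for n > 0 forces each gₙ in turn. A sketch with rational coefficients (the truncation length and the choice of Q are ours):

```python
from fractions import Fraction

# Inverting a power series term by term: if f = a0 + a1 X + ... with a0 a
# unit, the coefficients of g = f^{-1} are forced one at a time by the
# convolution formula. (Sketch; rational coefficients, finite truncation.)

def series_inverse(coeffs, terms=8):
    a0 = Fraction(coeffs[0])
    assert a0 != 0, "constant term must be a unit"
    a = lambda i: Fraction(coeffs[i]) if i < len(coeffs) else Fraction(0)
    g = [1 / a0]
    for n in range(1, terms):
        # 0 = sum_{j=0}^{n} a_j g_{n-j}  =>  solve for g_n
        g.append(-sum(a(j) * g[n - j] for j in range(1, n + 1)) / a0)
    return g

# 1 - X is a unit in R[[X]]: its inverse is the geometric series
# 1 + X + X^2 + ..., so every computed coefficient equals 1.
print(series_inverse([1, -1], terms=5))
```

Note this never converges in any analytic sense and never needs to: the identity (1 − X)(1 + X + X² + · · ·) = 1 is purely formal.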
Definition (Isomorphism of rings). If a homomorphism φ : R → S is a bijection, we call it an isomorphism.

Definition (Kernel). The kernel of a homomorphism φ : R → S is ker(φ) = {r ∈ R : φ(r) = 0S}.

Definition (Image). The image of φ : R → S is im(φ) = {s ∈ S : s = φ(r) for some r ∈ R}.

Lemma. A homomorphism φ : R → S is injective if and only if ker φ = {0R}.

Proof. A ring homomorphism is in particular a group homomorphism φ : (R, +, 0R) → (S, +, 0S) of abelian groups. So this follows from the case of groups.

In the group scenario, we had groups, subgroups and normal subgroups, which are special subgroups. Here, we have a special kind of subset of a ring that acts like a normal subgroup, known as an ideal.

Definition (Ideal). A subset I ⊆ R is an ideal, written I ◁ R, if
(i) it is an additive subgroup of (R, +, 0R), i.e. it is closed under addition and additive inverses; (additive closure)
(ii) if a ∈ I and b ∈ R, then a · b ∈ I. (strong closure)
We say I is a proper ideal if I ≠ R.

Note that the multiplicative closure is stronger than what we require for subrings: for subrings, it has to be closed under multiplication by its own elements; for ideals, it has to be closed under multiplication by everything in the world. This is similar to how normal subgroups not only have to be closed under internal multiplication, but also under conjugation by external elements.

Lemma. If φ : R → S is a homomorphism, then ker(φ) ◁ R.

Proof. Since φ : (R, +, 0R) → (S, +, 0S) is a group homomorphism, the kernel is a subgroup of (R, +
, 0R). For the second part, let a ∈ ker(φ), b ∈ R. We need to show that their product is in the kernel. We have φ(a · b) = φ(a) · φ(b) = 0 · φ(b) = 0. So a · b ∈ ker(φ).

Example. Suppose I ◁ R is an ideal, and 1R ∈ I. Then for any r ∈ R, the axioms entail 1R · r ∈ I. But 1R · r = r. So if 1R ∈ I, then I = R. In other words, every proper ideal does not contain 1. In particular, every proper ideal is not a subring, since a subring must contain 1. We are starting to diverge from groups: in groups, a normal subgroup is a subgroup, but here an ideal is not a subring.

Example. We can generalize the above a bit. Suppose I ◁ R and u ∈ I is a unit, i.e. there is some v ∈ R such that u · v = 1R. Then by strong closure, 1R = u · v ∈ I. So I = R. Hence proper ideals are not allowed to contain any unit at all, not just 1R.

Example. Consider the ring Z of integers. Then every ideal of Z is of the form nZ = {· · ·, −2n, −n, 0, n, 2n, · · · } ⊆ Z. It is easy to see this is indeed an ideal. To show these are all the ideals, let I ◁ Z. If I = {0}, then I = 0Z. Otherwise, let n ∈ N be the smallest positive element of I. We want to show that in fact I = nZ. Certainly nZ ⊆ I by strong closure. Now let m ∈ I. By the Euclidean algorithm, we can write m = q · n + r with 0 ≤ r < n. Now n ∈ I, so by strong closure q · n ∈ I. So r = m − q · n ∈ I. As n is the smallest positive element of I, and r < n, we must have r = 0. So m = q · n ∈ nZ. So I
⊆ nZ. So I = nZ.

The key to proving this was that we can perform the Euclidean algorithm on Z. Thus, for any ring R in which we can "do the Euclidean algorithm", every ideal is of the form aR = {a · r : r ∈ R} for some a ∈ R. We will make this notion precise later.

Definition (Generator of ideal). For an element a ∈ R, we write (a) = aR = {a · r : r ∈ R} ◁ R. This is the ideal generated by a. More generally, for a1, a2, · · ·, ak ∈ R, we write (a1, a2, · · ·, ak) = {a1r1 + · · · + akrk : r1, · · ·, rk ∈ R}. This is the ideal generated by a1, · · ·, ak.

We can also have ideals generated by infinitely many objects, but we have to be careful, since we cannot have infinite sums.

Definition (Generator of ideal). For A ⊆ R a subset, the ideal generated by A is (A) = {Σ_{a∈A} ra · a : ra ∈ R, with only finitely many ra non-zero}.

These ideals are rather nice ideals, since they are easy to describe, and often have some nice properties.

Definition (Principal ideal). An ideal I is a principal ideal if I = (a) for some a ∈ R.

So what we have just shown for Z is that all its ideals are principal. Not all rings are like this. Those that are form special types of rings, which we will study in more depth later.

Example. Consider the following subset: {f ∈ R[X] : the constant coefficient of f is 0}. This is an ideal, as we can check manually (alternatively, it is the kernel of the "evaluate at 0" homomorphism). It turns out this is a principal ideal. In fact, it is (X).

We have said ideals are like normal subgroups. The key idea is that we can divide by ideals.
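The argument above that every ideal of Z is principal can be illustrated by brute force: the ideal (a, b) = {ax + by} is generated by its smallest positive element, which is gcd(a, b). A sketch (the search window of ±50 is an arbitrary choice):

```python
from math import gcd

# Every ideal of Z is principal: (a, b) = {a*x + b*y : x, y in Z} is
# generated by its smallest positive element, namely gcd(a, b).
# (Illustrative sketch; we only look at a finite window of the ideal.)

def ideal_elements(a, b, bound=50):
    """Elements of (a, b) lying in [-bound, bound]."""
    elems = {a * x + b * y
             for x in range(-bound, bound + 1)
             for y in range(-bound, bound + 1)}
    return {e for e in elems if -bound <= e <= bound}

a, b = 12, 18
bound = 50
I = ideal_elements(a, b, bound)
n = min(e for e in I if e > 0)   # smallest positive element of the ideal
print(n, n == gcd(a, b))         # 6 True
# Within the window, the ideal is exactly nZ, as in the proof above.
assert I == {k * n for k in range(-(bound // n), bound // n + 1)}
```

The division-with-remainder step in the proof is what guarantees that nothing in (a, b) escapes the multiples of n.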
Definition (Quotient ring). Let I ◁ R. The quotient ring R/I consists of the (additive) cosets r + I, with the zero and one given by 0R + I and 1R + I, and operations

(r1 + I) + (r2 + I) = (r1 + r2) + I,
(r1 + I) · (r2 + I) = r1r2 + I.

Proposition. The quotient ring is a ring, and the function R → R/I; r ↦ r + I is a ring homomorphism.

This is true, because we defined ideals to be those things that can be quotiented by. So we just have to check we made the right definition. Just as we could have come up with the definition of a normal subgroup by requiring operations on the cosets to be well-defined, we could have come up with the definition of an ideal by requiring the multiplication of cosets to be well-defined, and we would end up with the strong closure property.

Proof. We know the group (R/I, +, 0R/I) is well-defined, since I is a (normal) subgroup of R. So we only have to check multiplication is well-defined. Suppose r1 + I = r1′ + I and r2 + I = r2′ + I. Then r1′ − r1 = a1 ∈ I and r2′ − r2 = a2 ∈ I. So

r1′r2′ = (r1 + a1)(r2 + a2) = r1r2 + r1a2 + a1r2 + a1a2.

By the strong closure property, the last three terms are in I. So r1′r2′ + I = r1r2 + I. It is easy to check that 0R + I and 1R + I are indeed the zero and one, and the function given is clearly a homomorphism.

Example. We have the ideals nZ ◁ Z. So we have the quotient rings Z/nZ. The elements are of the form m + nZ, so they are just 0 + nZ, 1 + nZ, 2 + nZ, · · ·, (n − 1) + nZ. Addition and multiplication are just
what we are used to: addition and multiplication modulo n.

Note that it is easier to come up with ideals than normal subgroups: we can just pick random elements, and then take the ideal generated by them.

Example. Consider (X) ◁ C[X]. What is C[X]/(X)? Elements are represented by a0 + a1X + a2X² + · · · + anXⁿ + (X). But everything but the first term is in (X). So every such element is equivalent to a0 + (X). It is not hard to convince yourself that this representation is unique. So in fact C[X]/(X) ≅ C, with the bijection a0 + (X) ↔ a0. If we want to prove things like this, we have to convince ourselves the representation is unique. We can do that by hand here, but in general, we want to be able to do this properly.

Proposition (Euclidean algorithm for polynomials). Let F be a field and f, g ∈ F[X] with g non-zero. Then there are q, r ∈ F[X] such that f = gq + r, with deg r < deg g.

This is like the usual Euclidean algorithm, except that instead of the absolute value, we use the degree to measure how "big" a polynomial is.

Proof. Let deg(f) = n, so f = Σ_{i=0}^{n} aiXⁱ with an ≠ 0. Similarly, if deg g = m, then g = Σ_{i=0}^{m} biXⁱ with bm ≠ 0. If n < m, we let q = 0 and r = f, and we are done. Otherwise, suppose n ≥ m, and proceed by induction on n. We let f1 = f − an bm⁻¹ Xⁿ⁻ᵐ g. This is possible since bm ≠ 0 and F is a field. By construction, the coefficients of Xⁿ cancel out, so deg(f1) < n. If n = m, then deg(f1) < n = m. So we can write f = (an bm⁻¹ Xⁿ⁻ᵐ)g + f1 with deg(f1) < deg(g). So done. Otherwise, if n
> m, then as deg(f1) < n, by induction we can find q1, r1 such that f1 = gq1 + r1 and deg(r1) < deg g = m. Then

f = an bm⁻¹ Xⁿ⁻ᵐ g + q1g + r1 = (an bm⁻¹ Xⁿ⁻ᵐ + q1)g + r1.

So done.

Now that we have a Euclidean algorithm for polynomials, we should be able to show that every ideal of F[X] is generated by one polynomial. We will not prove it specifically here, but will later show that in general, in every ring where the Euclidean algorithm is possible, all ideals are principal.

We now look at some applications of the Euclidean algorithm.

Example. Consider R[X], and the principal ideal (X² + 1) ◁ R[X]. We let R = R[X]/(X² + 1). Elements of R are cosets f + (X² + 1) of polynomials f = a0 + a1X + a2X² + · · · + anXⁿ. By the Euclidean algorithm, we have f = q(X² + 1) + r, with deg(r) < 2, i.e. r = b0 + b1X. Thus f + (X² + 1) = r + (X² + 1). So every element of R[X]/(X² + 1) is representable as a + bX for some a, b ∈ R.

Is this representation unique? If a + bX + (X² + 1) = a′ + b′X + (X² + 1), then the difference (a − a′) + (b − b′)X ∈ (X² + 1). So it is (X² + 1)q for some q. This is possible only if q = 0, since for non-zero q, the polynomial (X² + 1)q has degree at least 2. So we must have (a − a′) + (b − b′)X = 0. So a + bX = a′ + b′X. So the representation is unique.
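The division step in the proof above, repeatedly killing the leading term of f with a multiple of g, can be sketched over F = Q (using dense coefficient lists, lowest degree first, is our own representation):

```python
from fractions import Fraction

# Division algorithm in Q[X]: repeatedly subtract (a_n * b_m^{-1}) X^{n-m} g
# from f until the remainder has degree < deg g, exactly as in the
# inductive proof above. (Sketch; coefficient lists, low degree first.)

def poly_divmod(f, g):
    f, g = [Fraction(c) for c in f], [Fraction(c) for c in g]
    while f and f[-1] == 0:
        f.pop()
    q = [Fraction(0)] * max(len(f) - len(g) + 1, 1)
    r = f[:]
    while len(r) >= len(g) and any(r):
        shift = len(r) - len(g)
        c = r[-1] / g[-1]            # a_n * b_m^{-1}
        q[shift] += c
        for i, b in enumerate(g):    # r -= c * X^shift * g
            r[i + shift] -= c * b
        while r and r[-1] == 0:
            r.pop()
    return q, r

# Reduce f = X^3 + 2X^2 + 3X + 4 modulo X^2 + 1: the remainder a + bX is
# the canonical representative of f + (X^2 + 1) in R[X]/(X^2 + 1).
q, r = poly_divmod([4, 3, 2, 1], [1, 0, 1])
print(q, r)  # quotient X + 2, remainder 2X + 2 (degree < 2)
```

The remainder always has degree below deg g, which is exactly why every coset of (X² + 1) has a representative of the form a + bX.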
What we have got is that every element of R is of the form a + bX, and X² + 1 = 0, i.e. X² = −1. This sounds like the complex numbers, just that we are calling it X instead of i. To show this formally, we define the function

φ : R[X]/(X² + 1) → C; a + bX + (X² + 1) ↦ a + bi.

This is well-defined and a bijection. It is also clearly additive. So to prove it is an isomorphism, we have to show it is multiplicative. We check this manually. We have

φ((a + bX + (X² + 1))(c + dX + (X² + 1)))
= φ(ac + (ad + bc)X + bdX² + (X² + 1))
= φ((ac − bd) + (ad + bc)X + (X² + 1))
= (ac − bd) + (ad + bc)i
= (a + bi)(c + di)
= φ(a + bX + (X² + 1)) φ(c + dX + (X² + 1)).

So this is indeed an isomorphism.

This is pretty tedious. Fortunately, we have some helpful results we can use, namely the isomorphism theorems. These are exactly analogous to those for groups.

Theorem (First isomorphism theorem). Let φ : R → S be a ring homomorphism. Then ker(φ) ◁ R, and R/ker(φ) ≅ im(φ) ≤ S.

Proof. We have already seen ker(φ) ◁ R. Now define Φ : R/ker(φ) → im(φ); r + ker(φ) ↦ φ(r). This is well-defined, since if r + ker(φ) = r′ + ker(φ), then r − r′ ∈ ker(φ). So φ(r − r′) = 0. So φ(r) = φ(r′). We don't have to check this is bijective and additive, since that comes for free from the (proof of the) isomorphism theorem for groups. So we just have to check it is multiplicative. To show Φ
is multiplicative, we have

Φ((r + ker(φ))(t + ker(φ))) = Φ(rt + ker(φ)) = φ(rt) = φ(r)φ(t) = Φ(r + ker(φ)) Φ(t + ker(φ)).

This is more-or-less the same proof as the one for groups, just that we had a few more things to check.

Since there is a first isomorphism theorem, we, obviously, have more coming.

Theorem (Second isomorphism theorem). Let R ≤ S be a subring and J ◁ S. Then J ∩ R ◁ R, the set (R + J)/J = {r + J : r ∈ R} ≤ S/J is a subring, and

R/(R ∩ J) ≅ (R + J)/J.

Proof. Define the function φ : R → S/J; r ↦ r + J. Since this is the composite of the inclusion with the quotient map, it is a ring homomorphism. The kernel is

ker(φ) = {r ∈ R : r + J = 0, i.e. r ∈ J} = R ∩ J,

and the image is

im(φ) = {r + J : r ∈ R} = (R + J)/J.

Then by the first isomorphism theorem, we know R ∩ J ◁ R, and (R + J)/J ≤ S/J, and R/(R ∩ J) ≅ (R + J)/J.

Before we get to the third isomorphism theorem, recall we had the subgroup correspondence for groups. Analogously, for I ◁ R, there is a bijection

{subrings of R/I} ←→ {subrings of R which contain I},

given by L ↦ {x ∈ R : x + I ∈ L} in one direction and S ↦ S/I in the other. This is exactly the same formula as for groups. For groups, we had a correspondence for normal subgroups. Here, we have a correspondence between ideals:

{ideals of R/I} ←→ {ideals of R which contain I}.

It is important to note here that quotienting in groups and rings have different purposes. In groups, we take quotients so that we have simpler groups to work with. In rings, we often take quotients to get more interesting rings. For example, R[X] is quite boring, but R[
X]/(X² + 1) ≅ C is more interesting. Thus this ideal correspondence allows us to occasionally get interesting ideals from boring ones.

Theorem (Third isomorphism theorem). Let I ◁ R and J ◁ R, with I ⊆ J. Then J/I ◁ R/I and

(R/I) / (J/I) ≅ R/J.

Proof. We define the map φ : R/I → R/J; r + I ↦ r + J. This is well-defined and surjective by the group case. It is also a ring homomorphism, since multiplication in R/I and R/J are "the same". The kernel is

ker(φ) = {r + I : r + J = 0, i.e. r ∈ J} = J/I.

So the result follows from the first isomorphism theorem.

Note that for any ring R, there is a unique ring homomorphism Z → R, given by

ι : Z → R
n ≥ 0 ↦ 1R + 1R + · · · + 1R (n times)
n < 0 ↦ −(1R + 1R + · · · + 1R) (−n times)

Any homomorphism Z → R must be given by this formula, since it must send the unit to the unit, and we can show this is indeed a homomorphism by distributivity. So the ring homomorphism is unique. In fancy language, we say Z is the initial object in (the category of) rings. We then know ker(ι) ◁ Z. Thus ker(ι) = nZ for some n.

Definition (Characteristic of ring). Let R be a ring, and ι : Z → R be the unique such map. The characteristic of R is the unique non-negative n such that ker(ι) = nZ.

Example. The rings Z, Q, R, C all have characteristic 0. The ring Z/nZ has characteristic n. In particular, all natural numbers can be characteristics. The notion of the characteristic will not be too useful in this course. However, fields of non-zero characteristic often provide interesting examples and counterexamples to some later theory.
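The characteristic can be computed straight from the definition by adding copies of 1R until we reach 0R. A sketch for R = Z/nZ (the search limit is an arbitrary cut-off of our own):

```python
# Computing the characteristic of Z/nZ from the definition: it is the
# smallest k > 0 with 1 + 1 + ... + 1 (k copies) = 0, or 0 if no such k
# exists. (Sketch; Z/nZ is modelled by integers mod n, limit is arbitrary.)

def characteristic_mod_n(n, limit=1000):
    total = 0
    for k in range(1, limit + 1):
        total = (total + 1) % n   # add another copy of 1_R
        if total == 0:
            return k              # ker(iota) = kZ
    return 0                      # no repetition found up to the limit

print([characteristic_mod_n(n) for n in [2, 3, 4, 6, 12]])  # [2, 3, 4, 6, 12]
```

For a ring like Z the loop would never terminate, reflecting characteristic 0.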
2.3 Integral domains, field of fractions, maximal and prime ideals

Many rings can be completely unlike Z. For example, in Z, we know that if a, b ≠ 0, then ab ≠ 0. However, in, say, Z/6Z, we have 2, 3 ≠ 0, but 2 · 3 = 0. Also, Z has some nice properties, such as every ideal being principal, and every integer having an (essentially) unique factorization. We will now classify rings according to which properties they have.

We start with the most fundamental property, that the product of two non-zero elements is non-zero. We will almost exclusively work with rings that satisfy this property.

Definition (Integral domain). A non-zero ring R is an integral domain if for all a, b ∈ R, a · b = 0R implies a = 0R or b = 0R.

An element that violates this property is known as a zero divisor.

Definition (Zero divisor). An element x ∈ R is a zero divisor if x ≠ 0 and there is a y ≠ 0 such that x · y = 0 ∈ R.

In other words, a ring is an integral domain if it has no zero divisors.

Example. All fields are integral domains: if a · b = 0 and b ≠ 0, then a = a · (b · b⁻¹) = 0. Similarly, if a ≠ 0, then b = 0.

Example. A subring of an integral domain is an integral domain, since a zero divisor in the small ring would also be a zero divisor in the big ring.

Example. Immediately, we know Z, Q, R, C are integral domains, since C is a field, and the others are subrings of it. Also, Z[i] ≤ C is an integral domain. These are the nice rings we like in number theory, since there we can sensibly talk about things like factorization.

It turns out there are no interesting finite integral domains.

Lemma. Let R be a finite ring which is an integral domain. Then R is a field.

Proof. Let a ∈ R be non-zero, and consider the map a · − : R → R given by r ↦ a · r. This is a homomorphism of additive groups, by distributivity. We want to show it is injective. For this, it suffices to show the kernel is trivial. If r ∈ ker(a · −
), then a · r = 0. So r = 0, since R is an integral domain. So the kernel is trivial. Since R is finite, a · − must then also be surjective. In particular, there is an element b ∈ R such that a · b = 1R. So a has an inverse. Since a was arbitrary, R is a field.

So far, we know fields are integral domains, and subrings of integral domains are integral domains. We have another good source of integral domains as follows:

Lemma. Let R be an integral domain. Then R[X] is also an integral domain.

Proof. We need to show that the product of two non-zero elements is non-zero. Let f, g ∈ R[X] be non-zero, say

f = a0 + a1X + · · · + anXⁿ ∈ R[X],
g = b0 + b1X + · · · + bmXᵐ ∈ R[X],

with an, bm ≠ 0. Then the coefficient of Xⁿ⁺ᵐ in fg is anbm. This is non-zero since R is an integral domain. So fg is non-zero. So R[X] is an integral domain.

So, for instance, Z[X] is an integral domain. We can also iterate this.

Notation. Write R[X, Y] for (R[X])[Y], the polynomial ring of R in two variables. In general, write R[X1, · · ·, Xn] = (· · · ((R[X1])[X2]) · · · )[Xn].

Then if R is an integral domain, so is R[X1, · · ·, Xn].

We now mimic the familiar construction of Q from Z. For any integral domain R, we want to construct a field F that consists of "fractions" of elements in R. Recall that a subring of any field is an integral domain. This says the converse: every integral domain is a subring of some field.
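The pigeonhole argument in the finite-integral-domain lemma can be watched in action: in Z/7Z, multiplication by any non-zero a is injective, hence surjective, so it hits 1; in Z/6Z the argument breaks down at a zero divisor. A sketch (the moduli 7 and 6 are our own choices):

```python
# In a finite integral domain, multiplication by a non-zero a is injective
# (no zero divisors), hence surjective, so some b satisfies a*b = 1:
# the argument of the lemma above, demonstrated in Z/7Z. (Sketch.)

n = 7  # a prime, so Z/7Z has no zero divisors
for a in range(1, n):
    image = [a * r % n for r in range(n)]   # the map a * (-) on Z/7Z
    assert len(set(image)) == n             # injective, hence surjective
    b = image.index(1)                      # the preimage of 1 is a^{-1}
    assert a * b % n == 1

# In Z/6Z the argument fails: 2 is a zero divisor, so 2 * (-) is not
# injective and never reaches 1.
image = [2 * r % 6 for r in range(6)]
print(sorted(set(image)))  # [0, 2, 4]: 1 is missed, so 2 has no inverse
```

The same counting argument shows, for instance, that Z/pZ is a field exactly when p is prime.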
Definition (Field of fractions). Let R be an integral domain. A field of fractions F of R is a field with the following properties:
(i) R ≤ F,
(ii) every element of F may be written as a · b⁻¹ with a, b ∈ R, where b⁻¹ means the multiplicative inverse of b ≠ 0 in F.

For example, Q is the field of fractions of Z.

Theorem. Every integral domain has a field of fractions.

Proof. The construction is exactly how we construct the rationals from the integers, as equivalence classes of pairs of integers. We let

S = {(a, b) ∈ R × R : b ≠ 0}.

We think of (a, b) ∈ S as a/b. We define the equivalence relation ∼ on S by

(a, b) ∼ (c, d) ⇔ ad = bc.

We need to show this is indeed an equivalence relation. Symmetry and reflexivity are obvious. To show transitivity, suppose (a, b) ∼ (c, d) and (c, d) ∼ (e, f), i.e. ad = bc and cf = de. We multiply the first equation by f and the second by b, to obtain adf = bcf and bcf = bde. Rearranging, we get d(af − be) = 0. Since d is in the denominator, d ≠ 0. Since R is an integral domain, we must have af − be = 0, i.e. af = be. So (a, b) ∼ (e, f). This is where being an integral domain is important.

Now let F = S/∼ be the set of equivalence classes. We now want to check this is indeed a field of fractions. We first want to show it is a field. We write a/b = [(a, b)] ∈ F, and define the operations by

a/b + c/d = (ad + bc)/(bd),
(a/b) · (c/d) = (ac)/(bd).

These are well-defined, and make (F, +, ·, 0/1, 1/1) into a ring. There are many things to check, but those are straightforward, and we will not waste time doing that here.

Finally, we need to show every non-zero element has an inverse. Let a/b
≠ 0F = 0/1, i.e. a · 1 ≠ b · 0 ∈ R, i.e. a ≠ 0. Then b/a ∈ F is defined, and

(a/b) · (b/a) = ba/ba = 1F.

So a/b has a multiplicative inverse. So F is a field.

We now need to construct a subring of F that is isomorphic to R. To do so, we define an injective homomorphism φ : R → F; r ↦ r/1. This is a ring homomorphism, as one can check easily. The kernel is the set of all r ∈ R such that r/1 = 0, i.e. r = 0. So the kernel is trivial, and φ is injective. Then by the first isomorphism theorem, R ≅ im(φ) ⊆ F.

Finally, we need to show everything is a quotient of two things in R. We have a/b = (a/1) · (1/b) = (a/1) · (b/1)⁻¹ = φ(a) φ(b)⁻¹, as required.

This gives us a very useful tool. Since this gives us a field from an integral domain, it allows us to use field techniques to study integral domains. Moreover, we can use this to construct new interesting fields from integral domains.

Example. Consider the integral domain C[X]. Its field of fractions is the field of all rational functions p(X)/q(X), where p, q ∈ C[X] and q ≠ 0.

To some people, it is a shame to think of rings as having elements. Instead, we should think of a ring as a god-like object, and the only things we should ever mention are its ideals. We should also not think of the ideals as containing elements, but just as some abstract objects, and all we know is how ideals relate to one another, e.g. if one contains the other. Under this philosophy, we can think of a field as follows:

Lemma. A (non-zero) ring R is a field if and only if its only ideals are {0} and R.

Note that we don't need elements to define the ideals {0} and R. {
0} can be defined as the ideal that all other ideals contain, and R is the ideal that contains all other ideals. Alternatively, we can reword this as "R is a field if and only if it has only two ideals" to avoid mentioning explicit ideals.

Proof. (⇒) Let I ◁ R and R be a field. Suppose some x ≠ 0 lies in I. Then as x is a unit, I = R. So the only ideals are {0} and R.

(⇐) Suppose x ≠ 0 ∈ R. Then (x) is an ideal of R. It is not {0} since it contains x. So (x) = R. In other words, 1R ∈ (x). But (x) is defined to be {x · y : y ∈ R}. So there is some u ∈ R such that x · u = 1R. So x is a unit. Since x was arbitrary, R is a field.

This is another reason why fields are special: they have the simplest possible ideal structure. This motivates the following definition:

Definition (Maximal ideal). An ideal I of a ring R is maximal if I ≠ R and for any ideal J with I ⊆ J ⊆ R, either J = I or J = R.

The relation with what we've done above is quite simple. There is an easy way to recognize whether an ideal is maximal.

Lemma. An ideal I ◁ R is maximal if and only if R/I is a field.

Proof. R/I is a field if and only if {0} and R/I are the only ideals of R/I. By the ideal correspondence, this is equivalent to saying that I and R are the only ideals of R which contain I, i.e. I is maximal. So done.

This is a nice result. It gives a correspondence between properties of ideals I and properties of the quotient R/I. Here is another one:

Definition (Prime ideal). An ideal I of a ring R is prime if I ≠ R and whenever a, b ∈ R are such that a · b ∈ I, then a ∈ I or b ∈ I.

This is like the opposite of the strong closure property: being an ideal means if we have
something in the ideal and something outside, the product is always in the ideal. Primality goes the other way: if the product of two things is in the ideal, then one of them must have been in the ideal to begin with.

Example. A non-zero ideal nZ ◁ Z is prime if and only if n is a prime. To show this, first suppose n = p is a prime, and a · b ∈ pZ. Then p | a · b. So p | a or p | b, i.e. a ∈ pZ or b ∈ pZ. For the other direction, suppose n = pq is a composite number (1 < p, q < n). Then n ∈ nZ, but p ∉ nZ and q ∉ nZ, since 0 < p, q < n.

So instead of talking about prime numbers, we can talk about prime ideals instead, because ideals are better than elements. We prove a result similar to the one above:

Lemma. An ideal I ◁ R is prime if and only if R/I is an integral domain.

Proof. Let I be prime. Let a + I, b + I ∈ R/I, and suppose (a + I)(b + I) = 0R/I. By definition, (a + I)(b + I) = ab + I. So we must have ab ∈ I. As I is prime, either a ∈ I or b ∈ I. So a + I = 0R/I or b + I = 0R/I. So R/I is an integral domain.

Conversely, suppose R/I is an integral domain. Let a, b ∈ R be such that ab ∈ I. Then (a + I)(b + I) = ab + I = 0R/I ∈ R/I. Since R/I is an integral domain, either a + I = 0R/I or b + I = 0R/I, i.e. a ∈ I or b ∈ I. So I is a prime ideal.

Prime ideals and maximal ideals are the main types of ideals we care about. Note that every field is an integral domain. So we immediately have the following result:

Proposition. Every maximal ideal is a prime ideal.

Proof. I ◁ R is maximal implies R/I is a field, which implies R/I is an integral domain, which implies I is prime.
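The example nZ ◁ Z above can be verified mechanically: nZ is prime exactly when no product of two non-multiples of n lands in nZ. A brute-force sketch (the search bound is an arbitrary choice of ours):

```python
# Checking the prime ideal property of nZ inside Z directly: nZ is prime
# iff whenever a*b is a multiple of n, one of a, b already is. We test all
# products in a finite window; n = 5 passes, n = 6 fails at 2 * 3. (Sketch.)

def is_prime_ideal_nZ(n, bound=100):
    for a in range(1, bound):
        for b in range(1, bound):
            if (a * b) % n == 0 and a % n != 0 and b % n != 0:
                return False, (a, b)   # witness: ab in nZ, but a, b not
    return True, None

print(is_prime_ideal_nZ(5))  # (True, None)
print(is_prime_ideal_nZ(6))  # (False, (2, 3))
```

The witness (2, 3) for n = 6 is the same pair of zero divisors that stops Z/6Z from being an integral domain, matching the lemma above.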
The converse is not true. For example, {0} ◁ Z is prime but not maximal. Less stupidly, (X) ◁ Z[X, Y] is prime but not maximal (since Z[X, Y]/(X) ≅ Z[Y], which is an integral domain but not a field). We can also give a more explicit proof of the proposition, which is essentially the same.

Alternative proof. Let I be a maximal ideal, and suppose a, b ∉ I but ab ∈ I. Then by maximality, I + (a) = I + (b) = R = (1). So we can find some p, q ∈ R and n, m ∈ I such that n + ap = m + bq = 1. Then

1 = (n + ap)(m + bq) = nm + apm + bqn + abpq ∈ I,

since n, m, ab ∈ I. So I = R, a contradiction.

Lemma. Let R be an integral domain. Then its characteristic is either 0 or a prime number.

Proof. Consider the unique map φ : Z → R, and let ker(φ) = nZ. Then n is the characteristic of R by definition. By the first isomorphism theorem, Z/nZ ≅ im(φ) ≤ R. So Z/nZ is an integral domain. So nZ ◁ Z is a prime ideal. So n = 0 or a prime number.

2.4 Factorization in integral domains

We now move on to tackle the problem of factorization in rings. For sanity, we suppose throughout the section that R is an integral domain. We start by making loads of definitions.

Definition (Unit). An element a ∈ R is a unit if there is a b ∈ R such that ab = 1R. Equivalently, if the ideal (a) = R.

Definition (Division). For elements a, b ∈ R, we say a divides b, written a | b, if there is a c ∈ R such that b = ac. Equivalently, if (b) ⊆ (a).

Definition (Associates). We say a, b ∈ R are associates if
a = bc for some unit c. Equivalently, if (a) = (b). Equivalently, if a | b and b | a.

In the integers, this can only happen if a and b differ by a sign, but in more interesting rings, more interesting things can happen. When considering division in rings, we often consider two associates to be "the same". For example, in Z, we can factorize 6 as 6 = 2 · 3 = (−2) · (−3), but this does not violate unique factorization, since 2 and −2 are associates (and so are 3 and −3), and we consider these two factorizations to be "the same".

Definition (Irreducible). We say a ∈ R is irreducible if a ≠ 0, a is not a unit, and whenever a = xy, either x or y is a unit.

For integers, being irreducible is the same as being a prime number. However, "prime" means something different in general rings.

Definition (Prime). We say a ∈ R is prime if a is non-zero, not a unit, and whenever a | xy, either a | x or a | y.

It is important to note that all these properties depend on the ring, not just the element itself.

Example. 2 ∈ Z is a prime, but 2 ∈ Q is not (since it is a unit). Similarly, the polynomial 2X ∈ Q[X] is irreducible (since 2 is a unit there), but 2X ∈ Z[X] is not irreducible, since 2X = 2 · X and neither 2 nor X is a unit in Z[X].

We have two things called prime, so they had better be related.

Lemma. A principal ideal (r) is a prime ideal in R if and only if r = 0 or r is prime.

Proof. (⇒) Let (r) be a prime ideal. If r = 0, then we are done. Otherwise, as prime ideals are proper, i.e. not the whole ring, r is not a unit. Now suppose r | a · b. Then a · b ∈ (r). But (r) is prime. So a ∈ (r) or b ∈ (r). So r | a or r | b. So r is prime.

(⇐) If r
= 0, then (0) = {0} ⊴ R, which is prime since R is an integral domain. Otherwise, let r ≠ 0 be prime. Suppose a · b ∈ (r). This means r | a · b. So r | a or r | b. So a ∈ (r) or b ∈ (r). So (r) is prime.

Note that in Z, the irreducibles are exactly (plus or minus) the prime numbers, and the prime numbers are also prime in the ring-theoretic sense (surprise!). In general, it is not true that irreducibles are the same as primes. However, one direction is always true.

Lemma. Let r ∈ R be prime. Then it is irreducible.

Proof. Let r ∈ R be prime, and suppose r = ab. Since r | r = ab, and r is prime, we must have r | a or r | b. wlog, r | a. So a = rc for some c ∈ R. So r = ab = rcb. Since we are in an integral domain, we can cancel r, so 1 = cb. So b is a unit.

We now do a long interesting example.

Example. Let

R = Z[√−5] = {a + b√−5 : a, b ∈ Z} ≤ C.

By definition, it is a subring of a field. So it is an integral domain. What are the units of the ring? There is a nice trick we can use, when things are lying inside C. Consider the function N : R → Z≥0 given by

N(a + b√−5) = a^2 + 5b^2.

It is convenient to think of this as z ↦ z z̄ = |z|^2. This satisfies N(z · w) = N(z)N(w). This is a desirable thing to have for a ring, since it immediately implies all units have norm 1 — if r · s = 1, then 1 = N(1) = N(rs) = N(r)N(s). So N(r) = N(s) = 1. So to find the units, we need to solve a^2 + 5b^2 = 1 for integers a, b. The only solutions are (a, b) = (±1, 0). So only ±
1 ∈ R can be units, and these obviously are units. So these are all the units.

Next, we claim 2 ∈ R is irreducible. We again use the norm. Suppose 2 = ab. Then 4 = N(2) = N(a)N(b). Now note that nothing has norm 2: a^2 + 5b^2 can never be 2 for integers a, b ∈ Z. So we must have, wlog, N(a) = 4, N(b) = 1. So b must be a unit. Similarly, we see that 3, 1 + √−5 and 1 − √−5 are irreducible (since there is also no element of norm 3).

We have four irreducible elements in this ring. Are they prime? No! Note that

(1 + √−5)(1 − √−5) = 6 = 2 · 3.

We now claim 2 does not divide 1 + √−5 or 1 − √−5. To show this, suppose 2 | 1 + √−5. Then N(2) | N(1 + √−5). But N(2) = 4 and N(1 + √−5) = 6, and 4 ∤ 6. Similarly, N(1 − √−5) = 6 as well. So 2 ∤ 1 ± √−5. But 2 divides the product (1 + √−5)(1 − √−5) = 6. So 2 is not prime.

There are several life lessons here. First is that primes and irreducibles are not the same thing in general. We've always thought they were the same because we've been living in the fantasy land of the integers. But we need to grow up. The second one is that factorization into irreducibles is not necessarily unique, since 2 · 3 = (1 + √−5)(1 − √−5) are two factorizations into irreducibles.

However, there is one situation when unique factorization holds. This is when we have a Euclidean algorithm available.

Definition (Euclidean domain). An integral domain R is a Euclidean domain (ED) if there is a Euclidean function φ : R \ {0} → Z≥0 such that

(i) φ(a · b) ≥ φ(b) for all a, b ≠ 0;

(ii) If a, b ∈ R
, with b ≠ 0, then there are q, r ∈ R such that

a = b · q + r,

and either r = 0 or φ(r) < φ(b).

What are examples? Every time in this course where we said "Euclidean algorithm", we have an example.

Example. Z is a Euclidean domain with φ(n) = |n|.

Example. For any field F, F[X] is a Euclidean domain with φ(f) = deg(f).

Example. The Gaussian integers R = Z[i] ≤ C form a Euclidean domain with φ(z) = N(z) = |z|^2. We now check this:

(i) We have φ(zw) = φ(z)φ(w) ≥ φ(z), since φ(w) is a positive integer.

(ii) Given a, b ∈ Z[i] with b ≠ 0, we consider the complex number a/b ∈ C. (Picture: the lattice Z[i], drawn as dots in the complex plane, with a/b marked among them.) By looking at the picture, we know that there is some q ∈ Z[i] such that |a/b − q| < 1. So we can write a/b = q + c with |c| < 1. Then we have a = bq + bc. We know r = a − bq = bc ∈ Z[i], and φ(r) = N(bc) = N(b)N(c) < N(b) = φ(b). So done.

This is not just true for the Gaussian integers. All we really needed was that R ≤ C, and that for any x ∈ C, there is some point in R that is less than 1 away from x. If we draw some more pictures, we will see this is not true for Z[√−5].

Before we move on to prove unique factorization, we first derive something we've previously mentioned. Recall we showed that every ideal in Z is principal, and we proved this by the Euclidean algorithm. So we might expect this to be true in an arbitrary Euclidean domain.
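The rounding argument in (ii) is easy to make concrete. Here is a minimal Python sketch (the helper names `norm` and `divmod_gaussian` are mine, not from the course), which finds q by rounding a/b to the nearest lattice point:

```python
# Euclidean division in Z[i]: round the exact quotient a/b to the
# nearest Gaussian integer q; then r = a - b*q satisfies N(r) < N(b),
# since |a/b - q| <= 1/sqrt(2) < 1.
def norm(z: complex) -> int:
    # N(x + iy) = x^2 + y^2; the components of z are integers here
    return round(z.real) ** 2 + round(z.imag) ** 2

def divmod_gaussian(a: complex, b: complex):
    """Return (q, r) with a = b*q + r and either r = 0 or N(r) < N(b)."""
    w = a / b                                   # exact quotient in C
    q = complex(round(w.real), round(w.imag))   # nearest point of Z[i]
    r = a - b * q
    return q, r
```

For example, dividing 7 + 3i by 2 + i gives q = 3 and r = 1, and indeed N(1) = 1 < 5 = N(2 + i).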
Definition (Principal ideal domain). A ring R is a principal ideal domain (PID) if it is an integral domain, and every ideal is a principal ideal, i.e. for all I ⊴ R, there is some a such that I = (a).

Example. Z is a principal ideal domain.

Proposition. Let R be a Euclidean domain. Then R is a principal ideal domain.

We have already proved this, just that we did it for the particular Euclidean domain Z. Nonetheless, we shall do it again.

Proof. Let R have a Euclidean function φ : R \ {0} → Z≥0. We let I ⊴ R be a non-zero ideal, and let b ∈ I \ {0} be an element with φ(b) minimal. Then for any a ∈ I, we write a = bq + r, with r = 0 or φ(r) < φ(b). However, any such r must be in I, since r = a − bq ∈ I. So, by the minimality of φ(b), we cannot have φ(r) < φ(b). So we must have r = 0. So a = bq. So a ∈ (b). Since this is true for all a ∈ I, we must have I ⊆ (b). On the other hand, since b ∈ I, we must have (b) ⊆ I. So we must have I = (b).

This is exactly, word by word, the same proof as we gave for the integers, except we replaced the absolute value with φ.

Example. Z is a Euclidean domain, and hence a principal ideal domain. Also, for any field F, F[X] is a Euclidean domain, hence a principal ideal domain. Also, Z[i] is a Euclidean domain, and hence a principal ideal domain.

What is a non-example of principal ideal domains? In Z[X], the ideal (2, X) ⊴ Z[X] is not a principal ideal. Suppose it were. Then (2, X) = (f). Since 2 ∈ (2, X) = (f), we know 2 ∈ (f), i.e. 2 = f · g for some g. So f has degree zero, and is hence constant. So f = ±1 or ±2. If f = ±1, since ±1 are units, then (f) = Z[X]. But (2
, X) ≠ Z[X], since, say, 1 ∉ (2, X). If f = ±2, then since X ∈ (2, X) = (f), we must have ±2 | X, but this is clearly false. So (2, X) cannot be a principal ideal.

Example. Let A ∈ Mn×n(F) be an n × n matrix over a field F. We consider the set

I = {f ∈ F[X] : f(A) = 0}.

This is an ideal — if f, g ∈ I, then (f + g)(A) = f(A) + g(A) = 0. Similarly, if f ∈ I and h ∈ F[X], then (fh)(A) = f(A)h(A) = 0. But we know F[X] is a principal ideal domain. So there must be some m ∈ F[X] such that I = (m). Suppose f ∈ F[X] is such that f(A) = 0, i.e. f ∈ I. Then m | f. So m is a polynomial that divides all polynomials that kill A, i.e. m is the minimal polynomial of A. We have just proved that all matrices have minimal polynomials, and that the minimal polynomial divides all other polynomials that kill A. Also, the minimal polynomial is unique up to multiplication by units.

Let's get further into number theory-like things. For a general ring, we cannot factorize things into irreducibles uniquely. However, in some rings, this is possible.

Definition (Unique factorization domain). An integral domain R is a unique factorization domain (UFD) if

(i) every non-zero non-unit may be written as a product of irreducibles;

(ii) if p1p2 ··· pn = q1 ··· qm with pi, qj irreducibles, then n = m, and they can be reordered such that pi is an associate of qi.

This is a really nice property, and here we can do things we are familiar with in number theory. So how do we know if something is a unique factor
ization domain? Our goal is to show that all principal ideal domains are unique factorization domains. To do so, we are going to prove several lemmas that give us some really nice properties of principal ideal domains.

Recall we saw that every prime is an irreducible, but in Z[√−5], there are some irreducibles that are not prime. However, this cannot happen in principal ideal domains.

Lemma. Let R be a principal ideal domain. If p ∈ R is irreducible, then it is prime.

Note that this is also true for general unique factorization domains, which we can prove directly using unique factorization.

Proof. Let p ∈ R be irreducible, and suppose p | a · b. Also, suppose p ∤ a. We need to show p | b. Consider the ideal (p, a) ⊴ R. Since R is a principal ideal domain, there is some d ∈ R such that (p, a) = (d). So d | p and d | a.

Since d | p, there is some q1 such that p = q1 d. As p is irreducible, either q1 or d is a unit. If q1 were a unit, then d = q1^{−1} p, and this divides a. So a = q1^{−1} p x for some x, and hence p | a. This is a contradiction, since p ∤ a. Therefore d is a unit. So (p, a) = (d) = R.

In particular, 1R ∈ (p, a). So suppose 1R = rp + sa for some r, s ∈ R. We now take the whole thing and multiply by b. Then we get

b = rpb + sab.

We observe that p divides rpb, and p divides sab (since p | ab). So p | b. So done.

This is similar to the argument for integers. For integers, we would say that if p ∤ a, then p and a are coprime. Therefore there are some r, s such that 1 = rp + sa. Then we continue the proof as above. Hence what we did in the middle was to show something similar to p and a being "coprime".

Another nice property of principal ideal domains is the following:

Lemma. Let R be a principal ideal domain. Let I1 ⊆ I2 ⊆ I3 ⊆ · ·
· be a chain of ideals. Then there is some N ∈ N such that In = IN for all n ≥ N.

So in a principal ideal domain, we cannot have an infinite chain of bigger and bigger ideals.

Definition (Ascending chain condition). A ring satisfies the ascending chain condition (ACC) if there is no infinite strictly increasing chain of ideals.

Definition (Noetherian ring). A ring that satisfies the ascending chain condition is known as a Noetherian ring.

So we are proving that every principal ideal domain is Noetherian.

Proof. The obvious thing to do when we have an infinite chain of ideals is to take the union of them. We let I = ⋃_{n≥1} In, which is again an ideal. Since R is a principal ideal domain, I = (a) for some a ∈ R. We know a ∈ I = ⋃_{n≥1} In. So a ∈ IN for some N. Then we have

(a) ⊆ IN ⊆ I = (a).

So we must have IN = I. So In = IN = I for all n ≥ N.

Notice it is not important that I is generated by one element. If, for some reason, we know I is generated by finitely many elements, then the same argument works. So if every ideal is finitely generated, then the ring must be Noetherian. It turns out this is an if-and-only-if — if you are Noetherian, then every ideal is finitely generated. We will prove this later on in the course.

Finally, we have done the setup, and we can prove the proposition promised.

Proposition. Let R be a principal ideal domain. Then R is a unique factorization domain.

Proof. We first need to show any non-zero non-unit r ∈ R is a product of irreducibles. Suppose r ∈ R cannot be factored as a product of irreducibles. Then it is certainly not irreducible. So we can write r = r1s1, with r1, s1 both non-units. Since r cannot be factored as a product of irreduc
ibles, wlog r1 cannot be factored as a product of irreducibles (if both could, then r would be a product of irreducibles). So we can write r1 = r2s2, with r2, s2 not units. Again, wlog r2 cannot be factored as a product of irreducibles. We continue this way. By assumption, the process does not end, and then we have the following chain of ideals:

(r) ⊆ (r1) ⊆ (r2) ⊆ ··· ⊆ (rn) ⊆ ···

But then we have an ascending chain of ideals. By the ascending chain condition, these are all eventually equal, i.e. there is some n such that (rn) = (rn+1) = (rn+2) = ···. In particular, since (rn) = (rn+1), and rn = rn+1 sn+1, it follows that sn+1 is a unit. But this is a contradiction, since sn+1 is not a unit. So r must be a product of irreducibles.

To show uniqueness, we let

p1p2 ··· pn = q1q2 ··· qm,

with pi, qi irreducible. So in particular p1 | q1 ··· qm. Since p1 is irreducible, it is prime. So p1 divides some qi. We reorder and suppose p1 | q1. So q1 = p1 · a for some a. But since q1 is irreducible, a must be a unit. So p1, q1 are associates. Since R is a principal ideal domain, hence an integral domain, we can cancel p1 to obtain

p2p3 ··· pn = (aq2)q3 ··· qm.

We now rename aq2 as q2, so that we in fact have p2p3 ··· pn = q2q3 ··· qm. We can then continue to show that pi and qi are associates for all i. This also shows that n = m, or else if n = m + k with k > 0, say, then cancelling all the q's would leave p_{m+1} ··· pn = 1, which is a contradiction, since irreducibles are not units.
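For R = Z, the existence half of this proof amounts to trial division: repeatedly split off the smallest non-trivial divisor, which is necessarily irreducible. A small illustrative sketch in Python (function name is mine):

```python
# Factor a positive integer into irreducibles (primes) by trial
# division; this mirrors the descent in the proof, which must stop.
def factor(n: int) -> list[int]:
    assert n > 1
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:   # d is the least divisor > 1, hence prime
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever is left over is itself prime
        factors.append(n)
    return factors
```

Uniqueness then says that any two such factorizations of the same integer agree up to reordering and units (here, signs).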
We can now use this to define other familiar notions from number theory.

Definition (Greatest common divisor). d is a greatest common divisor (gcd) of a1, a2, ···, an if d | ai for all i, and if any other d′ satisfies d′ | ai for all i, then d′ | d.

Note that the gcd of a set of elements, if it exists, is not unique. It is only well-defined up to a unit. This is a definition that says what it means to be a greatest common divisor. However, a gcd does not always have to exist.

Lemma. Let R be a unique factorization domain. Then greatest common divisors exist, and are unique up to associates.

Proof. We construct the greatest common divisor using the good old way of prime factorization. We let p1, p2, ···, pm be a list of all irreducible factors of the ai, such that no two of these are associates of each other. We now write

ai = ui ∏_{j=1}^{m} pj^{n_{ij}},

where n_{ij} ∈ N and the ui are units. We let m_j = min_i {n_{ij}}, and choose

d = ∏_{j=1}^{m} pj^{m_j}.

As, by definition, m_j ≤ n_{ij} for all i, we know d | ai for all i. Finally, if d′ | ai for all i, then we write d′ = v ∏_{j=1}^{m} pj^{t_j}. Then we must have t_j ≤ n_{ij} for all i, j. So we must have t_j ≤ m_j for all j. So d′ | d.

Uniqueness is immediate, since any two greatest common divisors have to divide each other.

2.5 Factorization in polynomial rings

Since polynomial rings are a bit more special than general integral domains, we can say a bit more about them. Recall that for F a field, we know F[X] is a Euclidean domain, hence a principal ideal domain, hence a unique factorization domain. Therefore we know

(i) If I ⊴ F[X], then I = (f) for some f ∈ F[X].

(ii) If f ∈ F
[X], then f is irreducible if and only if f is prime.

(iii) Let f be irreducible, and suppose (f) ⊆ J ⊆ F[X] for an ideal J. Then J = (g) for some g. Since (f) ⊆ (g), we must have f = gh for some h. But f is irreducible. So either g or h is a unit. If g is a unit, then (g) = F[X]. If h is a unit, then (f) = (g) = J. So (f) is a maximal ideal. Note that this argument is valid for any PID, not just polynomial rings.

(iv) Let (f) be a prime ideal. Then f is prime. So f is irreducible. So (f) is maximal. But we also know in complete generality that maximal ideals are prime. So in F[X], prime ideals are the same as maximal ideals. Again, this is true for all PIDs in general.

(v) Thus f is irreducible if and only if F[X]/(f) is a field.

To use the last item, we can first show that F[X]/(f) is a field, and then use this to deduce that f is irreducible. But we can also do something more interesting — find an irreducible f, and then generate an interesting field F[X]/(f).

So we want to understand reducibility, i.e. we want to know whether we can factorize a polynomial f. Firstly, we want to get rid of the trivial case where we just factor out a scalar, e.g. 2X^2 + 2 = 2(X^2 + 1) ∈ Z[X] is a boring factorization.

Definition (Content). Let R be a UFD and f = a0 + a1X + ··· + anX^n ∈ R[X]. The content c(f) of f is

c(f) = gcd(a0, a1, ···, an) ∈ R.

Again, since the gcd is only defined up to a unit, so is the content.

Definition (Primitive
polynomial). A polynomial f is primitive if c(f) is a unit, i.e. the ai are coprime.

Note that this is the best we can do. We cannot ask for c(f) to be exactly 1, since the gcd is only well-defined up to a unit. We now want to prove the following important lemma:

Lemma (Gauss' lemma). Let R be a UFD, and f ∈ R[X] be a primitive polynomial. Then f is reducible in R[X] if and only if f is reducible in F[X], where F is the field of fractions of R.

We can't do this right away. We first need some preparation. Before that, we do some examples.

Example. Consider X^3 + X + 1 ∈ Z[X]. This has content 1, so it is primitive. We show it is not reducible in Z[X], and hence not reducible in Q[X].

Suppose f is reducible in Q[X]. Then by Gauss' lemma, it is reducible in Z[X]. So we can write

X^3 + X + 1 = gh,

for some polynomials g, h ∈ Z[X], with g, h not units. But if g and h are not units, then they cannot be constant, since the coefficients of X^3 + X + 1 are all 1 or 0. So they have degree at least 1. Since the degrees add up to 3, we wlog suppose g has degree 1 and h has degree 2. So suppose

g = b0 + b1X, h = c0 + c1X + c2X^2.

Multiplying out and equating coefficients, we get b0c0 = 1 and b1c2 = 1. So b0 and b1 must be ±1. So g is either 1 + X, 1 − X, −1 + X or −1 − X, and hence g has ±1 as a root. But this is a contradiction, since ±1 is not a root of X^3 + X + 1. So f is not reducible in Q[X]. In particular, f has no root in Q.
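The last step is an instance of the rational root test: for a monic polynomial in Z[X], any rational root is an integer dividing the constant term, so for X^3 + X + 1 only ±1 need checking. A quick sketch (names are mine):

```python
# Rational root test for X^3 + X + 1: a rational root of a monic
# integer polynomial must be an integer dividing the constant term.
def f(x: int) -> int:
    return x ** 3 + x + 1

candidates = [1, -1]   # the divisors of the constant term 1
has_rational_root = any(f(x) == 0 for x in candidates)
```

Since f(1) = 3 and f(−1) = −1, neither candidate is a root, matching the contradiction in the example.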
We see the advantage of using Gauss' lemma — if we worked in Q[X] directly, we could still get to the step b0c0 = 1, but then we could conclude nothing, since b0 and c0 can be almost anything in Q.

Now we start working towards proving Gauss' lemma.

Lemma. Let R be a UFD. If f, g ∈ R[X] are primitive, then so is fg.

Proof. We let

f = a0 + a1X + ··· + anX^n, g = b0 + b1X + ··· + bmX^m,

where an, bm ≠ 0, and f, g are primitive. We want to show that the content of fg is a unit.

Suppose fg is not primitive. Then c(fg) is not a unit. Since R is a UFD, we can find an irreducible p which divides c(fg). By assumption, c(f) and c(g) are units. So p ∤ c(f) and p ∤ c(g). So suppose p | a0, p | a1, ..., p | a_{k−1} but p ∤ a_k. Note it is possible that k = 0. Similarly, suppose p | b0, p | b1, ···, p | b_{ℓ−1}, p ∤ b_ℓ.

We look at the coefficient of X^{k+ℓ} in fg. It is given by

∑_{i+j=k+ℓ} ai bj = a_{k+ℓ} b0 + ··· + a_{k+1} b_{ℓ−1} + a_k b_ℓ + a_{k−1} b_{ℓ+1} + ··· + a0 b_{ℓ+k}.

By assumption, this sum is divisible by p. However, the partial sum a_{k+ℓ} b0 + ··· + a_{k+1} b_{ℓ−1} is divisible by p, as p | bj for j < ℓ. Similarly, a_{k−1} b_{ℓ+1} + ··· + a0 b_{ℓ+k} is divisible by p, as p | ai for i < k. So we must have p | a_k b_ℓ. As p is irreducible, and hence prime, we must have p | a_k or p | b_ℓ. This is a contradiction. So c(fg) must be a unit.
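Over R = Z, the multiplicativity of contents can be sanity-checked numerically: taking contents as non-negative gcds fixes the unit, and then c(fg) = c(f)c(g) exactly. A small sketch (coefficient lists, lowest degree first; names are mine):

```python
from functools import reduce
from math import gcd

def content(f: list[int]) -> int:
    # gcd of the coefficients, normalized to be non-negative
    return reduce(gcd, f)

def poly_mul(f: list[int], g: list[int]) -> list[int]:
    # multiply polynomials given as coefficient lists (low degree first)
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] += a * b
    return h
```

For example, (2 + 4X)(3 + 9X) = 6 + 30X + 36X^2 has content 6 = 2 · 3, and a product of primitive polynomials comes out primitive.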
Corollary. Let R be a UFD. Then for f, g ∈ R[X], c(fg) is an associate of c(f)c(g).

Again, we cannot say they are equal, since the content is only well-defined up to a unit.

Proof. We can write f = c(f) f1 and g = c(g) g1, with f1 and g1 primitive. Then fg = c(f)c(g) f1g1. Since f1g1 is primitive, c(f)c(g) is a gcd of the coefficients of fg, and so is c(fg), by definition. So they are associates.

Finally, we can prove Gauss' lemma.

Lemma (Gauss' lemma). Let R be a UFD, and f ∈ R[X] be a primitive polynomial. Then f is reducible in R[X] if and only if f is reducible in F[X], where F is the field of fractions of R.

Proof. We will show that a primitive f ∈ R[X] is reducible in R[X] if and only if f is reducible in F[X].

One direction is almost immediately obvious. Let f = gh be a product in R[X] with g, h not units. As f is primitive, so are g and h. So both have degree > 0. So g, h are not units in F[X]. So f is reducible in F[X].

The other direction is less obvious. We let f = gh in F[X], with g, h not units. So g and h have degree > 0, since F is a field. So we can clear denominators by finding a, b ∈ R such that (ag), (bh) ∈ R[X] (e.g. let a be the product of the denominators of the coefficients of g). Then we get

abf = (ag)(bh),

and this is a factorization in R[X]. Here we have to be careful — (ag) is one thing that lives in R[X], and is not necessarily a product in R[X], since g might not be in R[X]. So we should just treat it as a single symbol. We now write (ag) = c(ag
)g1, (bh) = c(bh)h1, where g1, h1 are primitive. Then, taking contents and using the previous corollary,

ab = c(abf) = c((ag)(bh)) = u · c(ag)c(bh),

where u ∈ R is a unit (note c(abf) = ab c(f) = ab, as f is primitive). But also we have

abf = c(ag)c(bh) g1h1 = u^{−1} ab g1h1.

Cancelling ab gives

f = u^{−1} g1h1 ∈ R[X].

So f is reducible in R[X].

If this looks fancy and magical, you can try to do this explicitly in the case where R = Z and F = Q. Then you will probably get enlightened.

We will do another proof performed in a similar manner.

Proposition. Let R be a UFD, and F be its field of fractions. Let g ∈ R[X] be primitive. We let

J = (g) ⊴ R[X], I = (g) ⊴ F[X].

Then J = I ∩ R[X]. In other words, if f ∈ R[X] and we can write it as f = gh, with h ∈ F[X], then in fact h ∈ R[X].

Proof. The strategy is the same — we clear denominators in the equation f = gh, and then use contents to get back down into R[X]. We certainly have J ⊆ I ∩ R[X]. Now let f ∈ I ∩ R[X]. So we can write

f = gh,

with h ∈ F[X]. So we can choose b ∈ R such that bh ∈ R[X]. Then we know

bf = g(bh) ∈ R[X].

We write (bh) = c(bh) h1 for h1 ∈ R[X] primitive. Thus

bf = c(bh) g h1.

Since g is primitive, so is g h1. So c(bh) = u c(bf) for u a unit. But bf is a genuine product in R[X]. So we have

c(bf) = c(b)c(f) = b c(f).

So we have

bf = u b c(f) g h1.

Cancelling b gives f = g(uc(f
)h1). So g | f in R[X]. So f ∈ J.

From this we can get ourselves a large class of UFDs.

Theorem. If R is a UFD, then R[X] is a UFD. In particular, if R is a UFD, then R[X1, ···, Xn] is also a UFD.

Proof. We know R[X] has a notion of degree. So we will combine this with the fact that R is a UFD.

Let f ∈ R[X]. We can write f = c(f) f1, with f1 primitive. Firstly, as R is a UFD, we may factor

c(f) = p1p2 ··· pn,

for pi ∈ R irreducible (and also irreducible in R[X]). Now we want to deal with f1. If f1 is not irreducible, then we can write

f1 = f2f3,

with f2, f3 both not units. Since f1 is primitive, f2, f3 also cannot be constants. So we must have deg f2, deg f3 > 0. Also, since deg f2 + deg f3 = deg f1, we must have deg f2, deg f3 < deg f1. If f2, f3 are irreducible, then done. Otherwise, keep on going. We will eventually stop, since the degrees have to keep on decreasing. So we can write

f1 = q1 ··· qm,

with qi irreducible. So we can write

f = p1p2 ··· pn q1q2 ··· qm,

a product of irreducibles.

For uniqueness, we first deal with the p's. We note that c(f) = p1p2 ··· pn is a unique factorization of the content, up to reordering and associates, as R is a UFD. So, cancelling the content, we only have to show that primitives can be factored uniquely. Suppose we have two factorizations

f1 = q1q2 ··· qm = r1r2 ··· r_ℓ.

Note that each qi and each ri is a factor of the primitive polynomial f
1, so they are also primitive.

Now we do (maybe) the unexpected thing. We let F be the field of fractions of R, and consider qi, ri ∈ F[X]. Since F is a field, F[X] is a Euclidean domain, hence a principal ideal domain, hence a unique factorization domain. By Gauss' lemma, since the qi and ri are irreducible in R[X], they are also irreducible in F[X]. As F[X] is a UFD, we find that ℓ = m, and after reordering, ri and qi are associates, say

ri = ui qi,

with ui ∈ F[X] a unit. What we want to say is that ri is a unit times qi in R[X]. Firstly, note that ui ∈ F as it is a unit. Clearing denominators, we can write

ai ri = bi qi ∈ R[X].

Taking contents, since ri, qi are primitive, we know ai and bi are associates, say bi = vi ai, with vi ∈ R a unit. Cancelling ai on both sides, we find ri = vi qi, as required.

The key idea is to use Gauss' lemma to say that reducibility in R[X] is the same as reducibility in F[X], as long as we are primitive. The first part about contents is just to turn everything into primitives. Note that the last part of the proof is just our previous proposition. We could have applied it, but we decided to spell it out in full for clarity.

Example. We know Z[X] is a UFD, and if R is a UFD, then R[X1, ···, Xn] is also a UFD.

This is a useful thing to know. In particular, it gives us examples of UFDs that are not PIDs. However, in such rings, we would also like to have an easy way to determine whether something is reducible. Fortunately, we have the following criterion:

Proposition (Eisenstein's criterion). Let R be a UFD, and let

f = a0 + a1X + ··· + anX^n ∈ R[
X] be primitive with an ≠ 0. Let p ∈ R be irreducible (hence prime) such that

(i) p ∤ an;

(ii) p | ai for all 0 ≤ i < n;

(iii) p^2 ∤ a0.

Then f is irreducible in R[X], and hence in F[X] (where F is the field of fractions of R).

It is important that we work in R[X] all the time, until the end where we apply Gauss' lemma. Otherwise, we cannot possibly apply Eisenstein's criterion, since there are no primes in F.

Proof. Suppose we have a factorization f = gh with

g = r0 + r1X + ··· + rkX^k, h = s0 + s1X + ··· + s_ℓ X^ℓ,

for rk, s_ℓ ≠ 0. We know rk s_ℓ = an. Since p ∤ an, we have p ∤ rk and p ∤ s_ℓ. We can also look at the bottom coefficients. We know r0s0 = a0. We know p | a0 and p^2 ∤ a0. So p divides exactly one of r0 and s0. wlog, p | r0 and p ∤ s0.

Now let j be minimal such that p ∤ rj, i.e. p | r0, p | r1, ···, p | r_{j−1}, p ∤ rj. We now look at aj. This is, by definition,

aj = r0 sj + r1 s_{j−1} + ··· + r_{j−1} s1 + rj s0.

We know r0, ···, r_{j−1} are all divisible by p. So p | r0 sj + r1 s_{j−1} + ··· + r_{j−1} s1. Also, since p ∤ rj and p ∤ s0, we know p ∤ rj s0, using the fact that p is prime. So p ∤ aj. By condition (ii), we must then have j = n. But we also know that j ≤ k ≤ n. So we must have j = k = n. So deg g = n. Hence ℓ = n − k = 0. So h is a constant. But we also know f is primitive. So h must be a unit. So this is not a proper factorization.
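The three conditions are purely arithmetic, so over R = Z they can be tested mechanically. A minimal checker (my own function; coefficients listed lowest degree first):

```python
# Eisenstein's criterion over Z at a prime p:
#   p does not divide the leading coefficient,
#   p divides every other coefficient,
#   p^2 does not divide the constant term.
def eisenstein(coeffs: list[int], p: int) -> bool:
    a0, an = coeffs[0], coeffs[-1]
    return (an % p != 0
            and all(a % p == 0 for a in coeffs[:-1])
            and a0 % (p * p) != 0)
```

For instance, X^3 − 2 passes at p = 2, while X^2 − 1 passes at no prime; the criterion is sufficient, not necessary, for irreducibility.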
Example. Consider the polynomial X^n − p ∈ Z[X] for p a prime. Apply Eisenstein's criterion with the prime p, and observe that all the conditions hold. The polynomial is certainly primitive, since it is monic. So X^n − p is irreducible in Z[X], hence in Q[X]. In particular, X^n − p has no rational roots, i.e. ⁿ√p is irrational (for n > 1).

Example. Consider the polynomial

f = X^{p−1} + X^{p−2} + ··· + X + 1 ∈ Z[X],

where p is a prime number. If we look at this, we notice Eisenstein's criterion does not apply directly. What should we do? We observe that

f = (X^p − 1)/(X − 1).

So it might be a good idea to let Y = X − 1. Then we get a new polynomial

f̂ = f̂(Y) = ((Y + 1)^p − 1)/Y = Y^{p−1} + (p choose 1) Y^{p−2} + (p choose 2) Y^{p−3} + ··· + (p choose p−1).

When we look at it hard enough, we notice Eisenstein's criterion can be applied — we know p | (p choose i) for 1 ≤ i ≤ p − 1, but p^2 ∤ (p choose p−1) = p. So f̂ is irreducible in Z[Y].

Now if we had a factorization f(X) = g(X)h(X) ∈ Z[X], then we would get f̂(Y) = g(Y + 1)h(Y + 1) in Z[Y], contradicting the irreducibility of f̂. So f is irreducible. Hence none of the roots of f are rational (but we already knew that — they are not even real!).

2.6 Gaussian integers

We've mentioned the Gaussian integers already.

Definition (Gaussian integers). The Gaussian integers form the subring

Z[i] = {a + bi : a, b ∈ Z} ≤ C.

We have already shown that the norm N(a + ib) = a^2 + b^2 is a Euclidean function for Z[i]. So Z[i] is a Euclidean domain, hence a principal ideal domain, hence a unique factorization domain. Since the units must have norm 1, they are precisely ±1, ±i.
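As the discussion of Z[i] will show, whether a rational prime p stays prime there comes down to whether p = a^2 + b^2 is solvable; for small p this can simply be searched. A sketch (my own helper):

```python
from math import isqrt

# Search for a, b >= 0 with a^2 + b^2 = p. If a solution exists, then
# p = (a + bi)(a - bi) splits in Z[i]; otherwise p stays prime there.
def two_square_split(p: int):
    for a in range(isqrt(p) + 1):
        b2 = p - a * a
        b = isqrt(b2)
        if b * b == b2:
            return (a, b)
    return None
```

This finds 2 = 1^2 + 1^2 and 5 = 1^2 + 2^2, while it returns no solution for 3 or 7.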
What does factorization in Z[i] look like? What are the primes? We know we are going to get new primes, i.e. primes that are not integers, while we will lose some other primes. For example, we have

2 = (1 + i)(1 − i).

So 2 is not irreducible, hence not prime. However, 3 is a prime. We have N(3) = 9. So if 3 = uv, with u, v not units, then 9 = N(u)N(v), and neither N(u) nor N(v) is 1. So N(u) = N(v) = 3. However, 3 = a^2 + b^2 has no solutions with a, b ∈ Z. So there is nothing of norm 3. So 3 is irreducible, hence a prime. Also, 5 is not prime, since

5 = (1 + 2i)(1 − 2i).

How can we understand which primes stay as primes in the Gaussian integers?

Proposition. A prime number p ∈ Z is prime in Z[i] if and only if p ≠ a^2 + b^2 for a, b ∈ Z \ {0}.

The proof is exactly what we have done so far.

Proof. If p = a^2 + b^2, then p = (a + ib)(a − ib), and neither factor is a unit. So p is not irreducible, hence not prime. Conversely, suppose p = uv, with u, v not units. Taking norms, we get p^2 = N(u)N(v). Since u and v are not units, N(u) = N(v) = p. Writing u = a + ib, this says a^2 + b^2 = p.

So what we have to do is to understand when a prime p can be written as a sum of two squares. We will need the following helpful lemma:

Lemma. Let p be a prime number, and let Fp = Z/pZ be the field with p elements. Let F_p^× = Fp \ {0} be the group of invertible elements under multiplication. Then F_p^× ≅ C_{p−1}.

Proof. Certainly F_p^× has order p − 1, and is abelian. We know from the classification of finite abelian groups that
if F_p^× is not cyclic, then it must contain a subgroup Cm × Cm for some m > 1 (we can write the group as C_{d1} × C_{d2} × ··· with d1 | d2 | ···; if it is not cyclic, then d1 > 1, and C_{d2} has a subgroup isomorphic to C_{d1}, so we may take m = d1). We consider the polynomial X^m − 1 ∈ Fp[X]. Since Fp[X] is a UFD, X^m − 1 factors into at most m linear factors, so it has at most m distinct roots. But if Cm × Cm ≤ F_p^×, then we can find m^2 elements of order dividing m. So there are m^2 > m elements of Fp which are roots of X^m − 1. This is a contradiction. So F_p^× is cyclic.

This is a funny proof, since we have not found any element that has order p − 1.

Proposition. The primes in Z[i] are, up to associates,

(i) prime numbers p ∈ Z ≤ Z[i] such that p ≡ 3 (mod 4);

(ii) Gaussian integers z ∈ Z[i] with N(z) = z z̄ = p for some prime p such that p = 2 or p ≡ 1 (mod 4).

Proof. We first show these are primes. If p ≡ 3 (mod 4), then p ≠ a^2 + b^2, since a square number mod 4 is always 0 or 1, so a^2 + b^2 ≢ 3 (mod 4). So these p are primes in Z[i] by the previous proposition. On the other hand, if N(z) = p, and z = uv, then N(u)N(v) = p. So N(u) is 1 or N(v) is 1. So u or v is a unit. Note that here we did not use the condition that p = 2 or p ≡ 1 (mod 4). This is not needed, since N(z) is always a sum of two squares, and hence N(z) cannot be a prime that is 3 mod 4.

Now let z ∈ Z[i] be irreducible, hence prime. Then z̄ is also irreducible. So N(z) = z z̄ is a factorization of N(z) into irreducibles. Let p ∈ Z be an ordinary prime number dividing N(z), which exists since N(z) ≠ 1. Now if p ≡ 3 (mod 4), then p itself is prime in
Z[i] by the first part of the proof. So p | N(z) = z z̄. So p | z or p | z̄. Note that if p | z̄, then p | z by taking complex conjugates. So in either case p | z. Since both p and z are irreducible, they must be equal up to associates.

Otherwise, we get p = 2 or p ≡ 1 (mod 4). If p ≡ 1 (mod 4), then p − 1 = 4k for some k ∈ Z. As F_p^× ≅ C_{p−1} = C_{4k}, there is a unique element of order 2 (this is true for any cyclic group of order 4k — think of Z/4kZ). This must be [−1] ∈ Fp. Now let a ∈ F_p^× be an element of order 4. Then a^2 has order 2. So [a^2] = [−1]. This is a complicated way of saying we can find an a such that p | a^2 + 1. Thus p | (a + i)(a − i). In the case where p = 2, we know by checking directly that 2 = (1 + i)(1 − i).

In either case, we deduce that p (or 2) is not prime, hence not irreducible, since it clearly does not divide a ± i (or 1 ± i). So we can write p = z1z2, for z1, z2 ∈ Z[i] not units. Now we get

p^2 = N(p) = N(z1)N(z2).

As the zi are not units, we know N(z1) = N(z2) = p. By definition, this means p = z1 z̄1 = z2 z̄2. But also p = z1z2. So we must have z̄1 = z2.

Finally, we have p = z1 z̄1 | N(z) = z z̄. All these z, zi are irreducible. So z must be an associate of z1 (or maybe z̄1). So in particular N(z) = p.
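The order-4 element used above can be produced explicitly: for p ≡ 1 (mod 4), raise candidates to the power (p − 1)/4 until the square is −1. A small sketch (the function name is mine; exponentiation uses Python's built-in three-argument pow):

```python
# For a prime p ≡ 1 (mod 4), find a with a^2 ≡ -1 (mod p), i.e. an
# element of order 4 in the cyclic group F_p^*, so that p | a^2 + 1.
def sqrt_minus_one(p: int) -> int:
    assert p % 4 == 1
    k = (p - 1) // 4
    for g in range(2, p):
        a = pow(g, k, p)           # the order of a divides 4
        if (a * a) % p == p - 1:   # a^2 = -1, so a has order exactly 4
            return a
    raise ValueError("p does not appear to be prime")
```

For p = 13 this returns a = 8, and indeed 8^2 + 1 = 65 = 5 · 13, so 13 | (8 + i)(8 − i).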
Corollary. An integer n ∈ Z≥0 may be written as x^2 + y^2 (as the sum of two squares) if and only if, when we write n = p1^{n1} p2^{n2} ··· pk^{nk} as a product of distinct primes, pi ≡ 3 (mod 4) implies ni is even.

We have proved this in the case when n is a prime.

Proof. If n = x^2 + y^2, then we have

n = (x + iy)(x − iy) = N(x + iy).

Let z = x + iy. So we can write z = α1 ··· αq as a product of irreducibles in Z[i]. By the proposition, each αi is either αi = p for a genuine prime number p with p ≡ 3 (mod 4), or satisfies N(αi) = p for a prime number p which is either 2 or ≡ 1 (mod 4). We now take norms to obtain

n = x^2 + y^2 = N(z) = N(α1)N(α2) ··· N(αq).

Now each N(αi) is either p^2 with p ≡ 3 (mod 4), or is just p for p = 2 or p ≡ 1 (mod 4). So if p^m is the largest power of p dividing n, we find that m must be even whenever p ≡ 3 (mod 4).

Conversely, let n = p1^{n1} p2^{n2} ··· pk^{nk} be a product of distinct primes. Now for each pi, either pi ≡ 3 (mod 4) and ni is even, in which case pi^{ni} = (pi^2)^{ni/2} = N(pi^{ni/2}); or pi = 2 or pi ≡ 1 (mod 4), in which case the above proof shows that pi = N(αi) for some αi, so pi^{ni} = N(αi^{ni}). Since the norm is multiplicative, we can write n as the norm of some z ∈ Z[i]. So

n = N(z) = N(x + iy) = x^2 + y^2,

as required.

Example. We know 65 = 5 × 13. Since 5, 13 ≡ 1 (mod 4), it is a sum of two squares. Moreover, the proof tells us how to find 65 as a sum of squares.
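The corollary translates directly into a check-and-search procedure. A sketch in Python (function names are mine; the search for (x, y) is brute force rather than the Z[i] factorization method of the text):

```python
from math import isqrt

# n is a sum of two squares iff every prime p ≡ 3 (mod 4) divides n
# to an even power (the corollary above).
def is_sum_of_two_squares(n: int) -> bool:
    d = 2
    while d * d <= n:
        e = 0
        while n % d == 0:
            n //= d
            e += 1
        if d % 4 == 3 and e % 2 == 1:
            return False
        d += 1
    return n % 4 != 3      # any leftover prime factor has exponent 1

def two_squares(n: int):
    """Brute-force a representation n = x^2 + y^2, or return None."""
    for x in range(isqrt(n) + 1):
        y2 = n - x * x
        y = isqrt(y2)
        if y * y == y2:
            return (x, y)
    return None
```

For n = 65 the criterion holds and the search finds 65 = 1^2 + 8^2 (continuing the search would also find 4^2 + 7^2), while n = 21 = 3 · 7 fails the criterion.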
Example. We know 65 = 5 × 13. Since 5, 13 ≡ 1 (mod 4), it is a sum of two squares. Moreover, the proof tells us how to find 65 as the sum of squares. We have to factor 5 and 13 in Z[i]. We have

5 = (2 + i)(2 − i), 13 = (2 + 3i)(2 − 3i).

So we know

65 = N(2 + i)N(2 + 3i) = N((2 + i)(2 + 3i)) = N(1 + 8i) = 1² + 8².

But there is a choice here. We had to pick which factor is α and which is ᾱ. So we can also write

65 = N((2 + i)(2 − 3i)) = N(7 − 4i) = 7² + 4².

So not only are we able to write 65 as a sum of squares, but this also gives us many ways of writing 65 as a sum of squares.

2.7 Algebraic integers

We generalize the idea of Gaussian integers to algebraic integers. Definition (Algebraic integer). An α ∈ C is called an algebraic integer if it is a root of a monic polynomial in Z[X], i.e. there is a monic f ∈ Z[X] such that f(α) = 0. We can immediately check that this is a sensible definition — not all complex numbers are algebraic integers, since there are only countably many polynomials with integer coefficients, hence only countably many algebraic integers, but there are uncountably many complex numbers. Notation. For α an algebraic integer, we write Z[α] ≤ C for the smallest subring containing α. This can also be defined for arbitrary complex numbers, but it is less interesting. We can also construct Z[α] by taking it as the image of the map φ : Z[X] → C given by g → g(α). So we can also write Z[α] = Z[X]/I, where I = ker φ. Note that I is non-zero, since f ∈ I by the definition of an algebraic integer. Proposition. Let α ∈ C be an algebraic integer. Then the ideal I = ker(φ : Z[X] → C, f → f(α)) is principal, and equal to (fα) for some irreducible monic fα. This is a non
-trivial theorem, since Z[X] is not a principal ideal domain. So there is no immediate guarantee that I is generated by one polynomial. Definition (Minimal polynomial). Let α ∈ C be an algebraic integer. Then the minimal polynomial of α is the irreducible monic polynomial fα such that I = ker(φ) = (fα). Proof. By definition, there is a monic f ∈ Z[X] such that f(α) = 0. So f ∈ I. So I ≠ 0. Now let fα ∈ I be a polynomial of minimal degree. We may suppose that fα is primitive. We want to show that I = (fα), and that fα is irreducible. Let h ∈ I. We pretend we are living in Q[X]. Then we have the Euclidean algorithm. So we can write h = fα q + r, with r = 0 or deg r < deg fα. This was done over Q[X], not Z[X]. We now clear denominators. We multiply by some a ∈ Z to get ah = fα(aq) + (ar), where now (aq), (ar) ∈ Z[X]. We now evaluate these polynomials at α. Then we have ah(α) = fα(α) aq(α) + ar(α). We know fα(α) = h(α) = 0, since fα and h are both in I. So ar(α) = 0. So (ar) ∈ I. As fα ∈ I has minimal degree, we cannot have deg(r) = deg(ar) < deg(fα). So we must have r = 0. Hence ah = fα · (aq) is a factorization in Z[X]. This is almost right, but we want to factor h, not ah. Taking contents of everything, we get a c(h) = c(ah) = c(fα (aq)) = c(aq), as fα is primitive. In particular, a | c(aq). This, by definition of content, means (aq) can be written as a¯
q, where ¯q ∈ Z[X]. Cancelling a, we get q = ¯q ∈ Z[X]. So we know h = fα q ∈ (fα). So we know I = (fα). To show fα is irreducible, note that

Z[X]/(fα) ≅ Z[X]/ker φ ≅ im(φ) = Z[α] ≤ C.

Since C is an integral domain, so is im(φ). So we know Z[X]/(fα) is an integral domain. So (fα) is prime. So fα is prime, hence irreducible. If this final line looks magical, we can unravel this proof as follows: suppose fα = pq for some non-units p, q. Then since fα(α) = 0, we know p(α)q(α) = 0. Since p(α), q(α) ∈ C, which is an integral domain, we must have, say, p(α) = 0. But then p ∈ I, while deg p < deg fα, contradicting the minimality of deg fα. Example. (i) We know α = i is an algebraic integer with fα = X² + 1. (ii) Also, α = √2 is an algebraic integer with fα = X² − 2. (iii) More interestingly, α = (1 + √−3)/2 is an algebraic integer with fα = X² − X + 1. (iv) The polynomial X⁵ − X + d ∈ Z[X] with d ∈ Z≥1 has precisely one real root α, which is an algebraic integer. It is a theorem, which will be proved in IID Galois Theory, that this α cannot be constructed from integers via +, −, ×, ÷ and taking nth roots. It is also a theorem, found in IID Galois Theory, that degree 5 polynomials are the smallest degree for which this can happen (the proof involves writing down formulas analogous to the quadratic formula for degree 3 and 4 polynomials).
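The minimal polynomials in examples (i)–(iii) can be verified numerically (a plain Python check in floating point, so we only test that the values are approximately zero):

```python
# Verify that each claimed minimal polynomial vanishes at its root.
import cmath

def f(coeffs, x):
    # evaluate a0 + a1*x + a2*x^2 + ...
    return sum(c * x**k for k, c in enumerate(coeffs))

assert abs(f([1, 0, 1], 1j)) < 1e-12             # X^2 + 1 at i
assert abs(f([-2, 0, 1], 2 ** 0.5)) < 1e-12      # X^2 - 2 at sqrt(2)
alpha = (1 + cmath.sqrt(-3)) / 2
assert abs(f([1, -1, 1], alpha)) < 1e-12         # X^2 - X + 1 at (1 + sqrt(-3))/2
print("all minimal polynomials vanish at their roots")
```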
Lemma. Let α ∈ Q be an algebraic integer. Then α ∈ Z. Proof. Let fα ∈ Z[X] be the minimal polynomial, which is irreducible. In Q[X], the polynomial X − α must divide fα. However, by Gauss's lemma, fα is also irreducible in Q[X]. So we must have fα = X − α ∈ Z[X]. So α is an integer. It turns out the collection of all algebraic integers forms a subring of C. This is not at all obvious — given monic f, g ∈ Z[X] such that f(α) = g(β) = 0, there is no easy way to find a new monic h such that h(α + β) = 0. We will prove this much later on in the course.

2.8 Noetherian rings

We now revisit the idea of Noetherian rings, something we have briefly mentioned when proving that PIDs are UFDs. Definition (Noetherian ring). A ring is Noetherian if for any chain of ideals I1 ⊆ I2 ⊆ I3 ⊆ · · ·, there is some N such that IN = IN+1 = IN+2 = · · ·. This condition is known as the ascending chain condition. Example. Every finite ring is Noetherian, since there are only finitely many possible ideals. Example. Every field is Noetherian, since there are only two possible ideals. Example. Every principal ideal domain (e.g. Z) is Noetherian. This is easy to check directly, but the next proposition will make this utterly trivial. Most rings we love and know are indeed Noetherian. However, we can explicitly construct some non-Noetherian rings. Example. The ring Z[X1, X2, X3, · · · ] of polynomials in infinitely many variables is not Noetherian. It has the chain of strictly increasing ideals (X1) ⊊ (X1, X2) ⊊ (X1, X2, X3) ⊊ · · ·. We have the following proposition that makes Noetherian rings much more concrete, and makes it obvious why PIDs are Noetherian. Definition (Finitely generated ideal). An ideal I is finitely generated if it can be written as I = (r1, · · ·, rn) for some
r1, · · ·, rn ∈ R. Proposition. A ring is Noetherian if and only if every ideal is finitely generated. Every PID trivially satisfies this condition. So we know every PID is Noetherian. Proof. We start with the easier direction — from concrete to abstract. Suppose every ideal of R is finitely generated. Given the chain I1 ⊆ I2 ⊆ · · ·, consider the ideal I = I1 ∪ I2 ∪ I3 ∪ · · ·. This is indeed an ideal, which you will check manually in example sheet 2. We know I is finitely generated, say I = (r1, · · ·, rn), with ri ∈ I_{ki}. Let K = max{k1, · · ·, kn}. Then r1, · · ·, rn ∈ IK. So IK = I. So IK = IK+1 = IK+2 = · · ·. To prove the other direction, suppose there is an ideal I ◁ R that is not finitely generated. We pick r1 ∈ I. Since I is not finitely generated, we know (r1) ≠ I. So we can find some r2 ∈ I \ (r1). Again (r1, r2) ≠ I. So we can find r3 ∈ I \ (r1, r2). We continue on, and obtain an infinite strictly ascending chain (r1) ⊊ (r1, r2) ⊊ (r1, r2, r3) ⊊ · · ·. So R is not Noetherian.
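In Z this is very concrete: an ideal given by finitely many generators collapses to a single generator, their gcd, which is one way to see that every ideal of Z is finitely generated (indeed principal). A small illustration in plain Python:

```python
from math import gcd
from functools import reduce

# In the PID Z, the ideal (12, 18, 30) equals (g) where g = gcd(12, 18, 30).
gens = [12, 18, 30]
g = reduce(gcd, gens)
assert g == 6
assert all(a % g == 0 for a in gens)   # (12, 18, 30) is contained in (6)
assert 6 == 18 - 12                    # and 6 is a Z-combination of the generators
print(f"({', '.join(map(str, gens))}) = ({g})")
```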
When we have developed some property or notion, a natural thing to ask is whether it passes on to subrings and quotients. If R is Noetherian, does every subring of R have to be Noetherian? The answer is no. For example, since Z[X1, X2, · · · ] is an integral domain, we can take its field of fractions, which is a field, hence Noetherian; but the non-Noetherian ring Z[X1, X2, · · · ] is a subring of its field of fractions. How about quotients? Proposition. Let R be a Noetherian ring and I be an ideal of R. Then R/I is Noetherian. Proof. Whenever we see quotients, we should think of them as the image of a homomorphism. Consider the quotient map π : R → R/I, x → x + I. We can prove this result using either finite generation or the ascending chain condition. We go for the former. Let J ◁ R/I be an ideal. We want to show that J is finitely generated. Consider the inverse image π⁻¹(J). This is an ideal of R, and is hence finitely generated, since R is Noetherian. So π⁻¹(J) = (r1, · · ·, rn) for some r1, · · ·, rn ∈ R. Then J is generated by π(r1), · · ·, π(rn). So done. This gives us many examples of Noetherian rings. But there is one important case we have not tackled yet — polynomial rings. We know Z[X] is not a PID, since (2, X) is not principal. However, (2, X) is finitely generated. So we are not dead. We might try to construct some non-finitely-generated ideal, but we are bound to fail. This is because Z[X] is a Noetherian ring, a special case of the following powerful theorem: Theorem (Hilbert basis theorem). Let R be a Noetherian ring. Then so is R[X]. Since Z is Noetherian, we know Z[X] also is. Hence so is Z[X, Y ], etc. The Hilbert basis theorem was, surprisingly, proven by Hilbert himself. Before that, there were many mathematicians studying something known as invariant theory. The idea is that we have some interesting objects, and we want to look at their symmetries. Often, there are infinitely many possible such symmetries, and one interesting question to ask is whether there is a finite set of symmetries that generates all possible symmetries. This sounds like an interesting problem, so people
devoted much time, writing down funny proofs, showing that the symmetries are finitely generated. However, the collections of such symmetries are often just ideals of some funny ring. So Hilbert came along and proved the Hilbert basis theorem, showing once and for all that those rings are Noetherian, and hence the symmetries are finitely generated. Proof. The proof is not too hard, but we will need to use both the ascending chain condition and the fact that all ideals are finitely generated. Let I ◁ R[X] be an ideal. We want to show it is finitely generated. Since we know R is Noetherian, we want to generate some ideals of R from I. How can we do this? We could do the silly thing of taking all constants of I, i.e. I ∩ R. But we can do better. We can consider all linear polynomials in I, and take their leading coefficients. Thinking for a while, these form an ideal. In general, for n = 0, 1, 2, · · ·, we let

In = {r ∈ R : there is some f ∈ I such that f = rXⁿ + · · · } ∪ {0}.

Then it is easy to see, using the strong closure property, that each In is an ideal of R. Moreover, they form a chain, since if f ∈ I, then Xf ∈ I, by strong closure. So In ⊆ In+1 for all n. By the ascending chain condition of R, we know there is some N such that IN = IN+1 = · · ·. Now for each 0 ≤ n ≤ N, since R is Noetherian, we can write

In = (r_1^{(n)}, r_2^{(n)}, · · ·, r_{k(n)}^{(n)}).

For each r_i^{(n)}, we choose some f_i^{(n)} ∈ I with f_i^{(n)} = r_i^{(n)} Xⁿ + · · ·. We now claim the polynomials f_i^{(n)} for 0 ≤ n ≤ N and 1 ≤ i ≤ k(n) generate I. Suppose not. We pick g ∈ I of minimal degree not generated by the f_i^{(n)}. There are two possible cases. First suppose deg g = n ≤ N. Then
we can write g = rXⁿ + · · ·, with r ∈ In. So

r = Σi λi r_i^{(n)}

for some λi ∈ R, since that's what generating an ideal means. Then we know

Σi λi f_i^{(n)} = rXⁿ + · · · ∈ I.

But if g is not in the span of the f_i^{(j)}, then neither is g − Σi λi f_i^{(n)}. But this has lower degree than g. This is a contradiction. Now suppose deg g = n > N. This might look scary, but it is not, since In = IN. So we repeat the same proof. We write g = rXⁿ + · · ·. But we know r ∈ In = IN. So we can write r = Σi λi r_i^{(N)}. Then we know

Σi X^{n−N} λi f_i^{(N)} = rXⁿ + · · · ∈ I.

Hence g − Σi X^{n−N} λi f_i^{(N)} has smaller degree than g, but is not in the span of the f_i^{(j)}. This is again a contradiction. As an aside, let E ⊆ F[X1, X2, · · ·, Xn] be any set of polynomials. We view this as a set of equations f = 0 for each f ∈ E. The claim is that to solve the potentially infinite set of equations E, we actually only have to solve finitely many equations. Consider the ideal (E) ◁ F[X1, · · ·, Xn]. By the Hilbert basis theorem, there is a finite list f1, · · ·, fk such that (f1, · · ·, fk) = (E). We want to show that we only have to solve fi(x) = 0 for these fi. Given (α1, · · ·, αn) ∈ Fⁿ, consider the homomorphism

φα : F[X1, · · ·, Xn] → F, Xi → αi.

Then we know (α1, · · ·, αn) ∈ Fⁿ is a solution to the equations E if and only if (E) ⊆ ker(φα). By our choice of fi, this is true if and only
if (f1, · · ·, fk) ⊆ ker(φα). By inspection, this is true if and only if (α1, · · ·, αn) is a solution to all of f1, · · ·, fk. So solving E is the same as solving f1, · · ·, fk. This is useful in, say, algebraic geometry.

3 Modules

Finally, we are going to look at modules. Recall that to define a vector space, we first pick some base field F. We then defined a vector space to be an abelian group V with an action of F on V (i.e. scalar multiplication) that is compatible with the multiplicative and additive structure of F. In the definition, we did not at all mention division in F. So in fact we can make the same definition, but allow F to be a ring instead of a field. We call these modules. Unfortunately, most results we prove about vector spaces do use the fact that F is a field. So many linear algebra results do not apply to modules, and modules have much richer structures.

3.1 Definitions and examples

Definition (Module). Let R be a commutative ring. We say a quadruple (M, +, 0M, · ) is an R-module if (i) (M, +, 0M) is an abelian group; (ii) the operation · : R × M → M satisfies (a) (r1 + r2) · m = (r1 · m) + (r2 · m); (b) r · (m1 + m2) = (r · m1) + (r · m2); (c) r1 · (r2 · m) = (r1 · r2) · m; and (d) 1R · m = m. Note that there are two different additions going on — addition in the ring and addition in the module — and similarly two notions of multiplication. However, it is easy to distinguish them, since they operate on different things. If needed, we can make them explicit
by writing, say, +R and +M. We can imagine modules as rings acting on abelian groups, just as groups can act on sets. Hence we might say "R acts on M" to mean M is an R-module. Example. Let F be a field. An F-module is precisely the same as a vector space over F (the axioms are the same). Example. For any ring R, we have the R-module Rⁿ = R × R × · · · × R via r · (r1, · · ·, rn) = (rr1, · · ·, rrn), using the ring multiplication. This is the same as the definition of the vector space Fⁿ for fields F. Example. Let I ◁ R be an ideal. Then it is an R-module via r ·I a = r ·R a and r1 +I r2 = r1 +R r2. Also, R/I is an R-module via r ·R/I (a + I) = (r ·R a) + I. Example. A Z-module is precisely the same as an abelian group. For A an abelian group, we have the action Z × A → A given by

(n, a) → a + · · · + a (n times) for n ≥ 0, and (n, a) → (−a) + · · · + (−a) (−n times) for n < 0,

where adding something to itself 0 times is just 0. This definition is essentially forced upon us, since by the axioms of a module, we must have (1, a) → a. Then we must send, say, (2, a) = (1 + 1, a) → a + a. Example. Let F be a field, V a vector space over F, and α : V → V a linear map. Then V is an F[X]-module via

F[X] × V → V, (f, v) → f(α)(v).

This is a module. Note that we cannot just say that V is an F[X]-module. We have to specify the α as well. Picking a different α will give a different F[X]-module structure.
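This F[X]-module structure can be sketched computationally. Below is a minimal plain-Python illustration (ad hoc helper names, 2×2 integer matrices standing in for a linear map α on V = Q²), acting by f · v = f(α)(v):

```python
# The F[X]-module structure on V given by a linear map alpha:
# a polynomial f acts on v as f(alpha)(v).
def mat_vec(A, v):
    return (A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1])

def act(coeffs, A, v):
    """Apply f(alpha) to v, where f = coeffs[0] + coeffs[1]*X + ..."""
    result = (0, 0)
    power = v                     # alpha^k applied to v, starting with k = 0
    for c in coeffs:
        result = (result[0] + c*power[0], result[1] + c*power[1])
        power = mat_vec(A, power)
    return result

alpha = [[0, -1], [1, 0]]         # rotation by 90 degrees, alpha^2 = -I
v = (1, 0)
# f = X^2 + 1 acts as zero, since alpha^2 + 1 = 0:
print(act([1, 0, 1], alpha, v))   # (0, 0)
```

Choosing a different matrix for alpha genuinely changes the action, matching the remark above that the module structure depends on α.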
Example. Let φ : R → S be a homomorphism of rings. Then any S-module M may be considered as an R-module via

R × M → M, (r, m) → φ(r) ·M m.

Definition (Submodule). Let M be an R-module. A subset N ⊆ M is an R-submodule if it is a subgroup of (M, +, 0M), and if n ∈ N and r ∈ R, then rn ∈ N. We write N ≤ M. Example. We know R itself is an R-module. Then a subset of R is a submodule if and only if it is an ideal. Example. A subset of an F-module V, where F is a field, is an F-submodule if and only if it is a vector subspace of V. Definition (Quotient module). Let N ≤ M be an R-submodule. The quotient module M/N is the set of N-cosets in (M, +, 0M), with the R-action given by r · (m + N) = (r · m) + N. It is easy to check this is well-defined and is indeed a module. Note that modules are different from rings and groups. In groups, we had subgroups, and some really nice ones called normal subgroups. We are only allowed to quotient by normal subgroups. In rings, we have subrings and ideals, which are unrelated objects, and we only quotient by ideals. In modules, we only have submodules, and we can quotient by arbitrary submodules. Definition (R-module homomorphism and isomorphism). A function f : M → N between R-modules is an R-module homomorphism if it is a homomorphism of abelian groups, and satisfies f(r · m) = r · f(m) for all r ∈ R and m ∈ M. An isomorphism is a bijective homomorphism, and two R-modules are isomorphic if there is an isomorphism between them. Note that on the left, the multiplication is the action in M, while on the right, it is the
action in N. Example. If F is a field and V, W are F-modules (i.e. vector spaces over F), then an F-module homomorphism is precisely an F-linear map. Theorem (First isomorphism theorem). Let f : M → N be an R-module homomorphism. Then ker f = {m ∈ M : f(m) = 0} is an R-submodule of M. Similarly, im f = {f(m) : m ∈ M} is an R-submodule of N. Then

M/ker f ≅ im f.

We will not prove this again. The proof is exactly the same. Theorem (Second isomorphism theorem). Let A, B ≤ M. Then A + B = {m ∈ M : m = a + b for some a ∈ A, b ∈ B} ≤ M and A ∩ B ≤ M. We then have

(A + B)/A ≅ B/(A ∩ B).

Theorem (Third isomorphism theorem). Let N ≤ L ≤ M. Then we have

M/L ≅ (M/N)/(L/N).

Also, we have a correspondence {submodules of M/N} ←→ {submodules of M which contain N}. It is an exercise to see what these mean in the case where R is a field and the modules are vector spaces. We now have a new concept that was not present in rings and groups. Definition (Annihilator). Let M be an R-module, and m ∈ M. The annihilator of m is Ann(m) = {r ∈ R : r · m = 0}. For any set S ⊆ M, we define Ann(S) = {r ∈ R : r · m = 0 for all m ∈ S} = ⋂_{m∈S} Ann(m). In particular, for the module M itself, we have Ann(M) = {r ∈ R : r · m = 0 for all m ∈ M} = ⋂_{m∈M} Ann(m). Note that the annihilator is a subset of R. Moreover, it is an ideal — if r · m = 0 and s · m = 0, then (r + s) · m = r · m + s · m = 0. So
r + s ∈ Ann(m). Moreover, if r · m = 0, then also (sr) · m = s · (r · m) = 0. So sr ∈ Ann(m). What is this good for? We first note that any m ∈ M generates a submodule Rm, as follows: Definition (Submodule generated by element). Let M be an R-module, and m ∈ M. The submodule generated by m is Rm = {r · m ∈ M : r ∈ R}. We consider the R-module homomorphism φ : R → M, r → rm. This is clearly a homomorphism. Then we have Rm = im(φ) and Ann(m) = ker(φ). The conclusion is that

Rm ≅ R/Ann(m).

As we mentioned, rings acting on modules is like groups acting on sets. We can think of this as the analogue of the orbit-stabilizer theorem. In general, we can generate a submodule with many elements. Definition (Finitely generated module). An R-module M is finitely generated if there is a finite list of elements m1, · · ·, mk such that

M = Rm1 + Rm2 + · · · + Rmk = {r1m1 + r2m2 + · · · + rkmk : ri ∈ R}.

This is in some sense analogous to the idea of a vector space being finite-dimensional. However, it behaves much more differently. While this definition is rather concrete, it is often not the most helpful characterization of finitely generated modules. Instead, we use the following lemma: Lemma. An R-module M is finitely generated if and only if there is a surjective R-module homomorphism f : Rᵏ → M for some finite k. Proof. If M = Rm1 + Rm2 + · · · + Rmk, we define f : Rᵏ → M by (r1, · · ·, rk) → r1m1 + · · · + rkmk. It is clear that this
is an R-module homomorphism, and it is by definition surjective. So done. Conversely, given a surjection f : Rᵏ → M, we let mi = f(0, 0, · · ·, 0, 1, 0, · · ·, 0), where the 1 appears in the ith position. We now claim that M = Rm1 + Rm2 + · · · + Rmk. So let m ∈ M. As f is surjective, we know m = f(r1, r2, · · ·, rk) for some ri. We then have

f(r1, r2, · · ·, rk) = f((r1, 0, · · ·, 0) + (0, r2, 0, · · ·, 0) + · · · + (0, 0, · · ·, 0, rk)) = r1f(1, 0, · · ·, 0) + r2f(0, 1, 0, · · ·, 0) + · · · + rkf(0, 0, · · ·, 0, 1) = r1m1 + r2m2 + · · · + rkmk.

So the mi generate M. This view is a convenient way of thinking about finitely generated modules. For example, we can immediately prove the following corollary: Corollary. Let N ≤ M, with M finitely generated. Then M/N is also finitely generated. Proof. Since M is finitely generated, we have some surjection f : Rᵏ → M. Moreover, we have the surjective quotient map q : M → M/N. Then the composition

Rᵏ → M → M/N

of f followed by q is a surjection, since it is a composition of surjections. So M/N is finitely generated. It is very tempting to believe that if a module is finitely generated, then its submodules are also finitely generated. It would be very wrong to think
so. Example. A submodule of a finitely generated module need not be finitely generated. We let R = C[X1, X2, · · · ]. We consider the R-module M = R, which is finitely generated (by 1). A submodule of the ring R is the same as an ideal. Moreover, an ideal is finitely generated as an ideal if and only if it is finitely generated as a module. We pick the submodule I = (X1, X2, · · · ), which we have already shown to be not finitely generated. So done. Example. For a complex number α, the ring Z[α] (i.e. the smallest subring of C containing α) is finitely generated as a Z-module if and only if α is an algebraic integer. The proof is left as an exercise for the reader on the last example sheet. This allows us to prove that algebraic integers are closed under addition and multiplication, since it is easier to argue about whether Z[α] is finitely generated.

3.2 Direct sums and free modules

We've been secretly using the direct sum in many examples, but we shall define it properly now. Definition (Direct sum of modules). Let M1, M2, · · ·, Mk be R-modules. The direct sum is the R-module M1 ⊕ M2 ⊕ · · · ⊕ Mk, whose underlying set is M1 × M2 × · · · × Mk, with addition given by

(m1, · · ·, mk) + (m′1, · · ·, m′k) = (m1 + m′1, · · ·, mk + m′k),

and the R-action given by

r · (m1, · · ·, mk) = (rm1, · · ·, rmk).

We've been using one example of the direct sum already, namely Rⁿ = R ⊕ R ⊕ · · · ⊕ R, n times. Recall we said modules are like vector spaces. So we can try to define things like basis and linear independence. However, we will fail massively, since we really can't prove much about them. Still, we can define
them. Definition (Linear independence). Let m1, · · ·, mk ∈ M. Then {m1, · · ·, mk} is linearly independent if r1m1 + r2m2 + · · · + rkmk = 0 implies r1 = r2 = · · · = rk = 0. Lots of modules will not have a basis in the sense we are used to. The next best thing would be the following: Definition (Freely generate). A subset S ⊆ M generates M freely if (i) S generates M; (ii) any set function ψ : S → N to an R-module N extends to an R-module map θ : M → N. Note that if θ1, θ2 are two such extensions, we can consider θ1 − θ2 : M → N. Then θ1 − θ2 sends everything in S to 0. So S ⊆ ker(θ1 − θ2) ≤ M. So the submodule generated by S lies in ker(θ1 − θ2) too. But this is by definition all of M. So M ≤ ker(θ1 − θ2) ≤ M, i.e. equality holds. So θ1 − θ2 = 0, i.e. θ1 = θ2. So any such extension is unique. Thus, what this definition tells us is that giving a map from M to N is exactly the same thing as giving a function from S to N. Definition (Free module and basis). An R-module is free if it is freely generated by some subset S ⊆ M, and S is called a basis. We will soon prove that if R is a field, then every module is free. However, if R is not a field, then there are non-free modules. Example. The Z-module Z/2Z is not freely generated. Suppose Z/2Z were freely generated by some S ⊆ Z/2Z. Then this can only possibly be S = {1}. Then this implies there is a homomorphism θ : Z/2Z → Z sending 1 to 1. But it does not send 0 = 1 + 1 to 1 + 1, since homomorph
isms send 0 to 0. So Z/2Z is not freely generated. We now want to formulate free modules in a way more similar to what we do in linear algebra. Proposition. For a subset S = {m1, · · ·, mk} ⊆ M, the following are equivalent: (i) S generates M freely; (ii) S generates M and the set S is independent; (iii) every element of M is uniquely expressible as r1m1 + r2m2 + · · · + rkmk for some ri ∈ R. Proof. The fact that (ii) and (iii) are equivalent is something we would expect from what we know from linear algebra, and in fact the proof is the same. So we only show that (i) and (ii) are equivalent. Let S generate M freely. If S is not independent, then we can write r1m1 + · · · + rkmk = 0, with ri ∈ R and, say, r1 non-zero. We define the set function ψ : S → R by sending m1 → 1R and mi → 0 for all i ≠ 1. As S generates M freely, this extends to an R-module homomorphism θ : M → R. By definition of a homomorphism, we can compute

0 = θ(0) = θ(r1m1 + r2m2 + · · · + rkmk) = r1θ(m1) + r2θ(m2) + · · · + rkθ(mk) = r1.

This is a contradiction. So S must be independent. To prove the other direction, suppose every element can be uniquely written as r1m1 + · · · + rkmk. Given any set function ψ : S → N, we define θ : M → N by θ(r1m1 + · · · + rkmk) = r1ψ(m1) + · · · + rkψ(mk). This is well-defined by uniqueness, and is clearly a homomorphism. So it follows that S generates M freely. Example. The set {2, 3} ⊆ Z generates Z. However, it does not generate Z freely, since 3 · 2 + (−2) · 3 = 0.
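The failure of unique representation in this example can be seen directly (a plain Python check):

```python
from math import gcd

# {2, 3} generates Z, since gcd(2, 3) = 1 ...
assert gcd(2, 3) == 1
assert (-1)*2 + 1*3 == 1
# ... but not freely: the relation 3*2 + (-2)*3 = 0 means representations
# are never unique, e.g. two different expressions for 6:
assert 3*2 + (-2)*3 == 0
assert 3*2 + 0*3 == 0*2 + 2*3 == 6
print("{2, 3} generates Z, but not freely")
```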
Recall from linear algebra that if a set S spans a vector space V, and it is not independent, then we can just pick some useless vectors and throw them away in order to get a basis. However, this is no longer the case for modules: neither {2} nor {3} generates Z. Definition (Relations). If M is a finitely generated R-module, we have shown that there is a surjective R-module homomorphism φ : Rᵏ → M. We call ker(φ) the relation module for those generators. Definition (Finitely presented module). A finitely generated module M is finitely presented if there is a surjective homomorphism φ : Rᵏ → M such that ker φ is finitely generated. Being finitely presented means we can describe the module completely with a finite amount of paper. More precisely, if {m1, · · ·, mk} generates M and n1, n2, · · ·, nℓ generate ker(φ), then each ni = (ri1, · · ·, rik) corresponds to the relation ri1m1 + ri2m2 + · · · + rikmk = 0 in M. So M is the module generated by writing down R-linear combinations of m1, · · ·, mk, and saying two elements are the same if they are related to one another by these relations. Since there are only finitely many generators and finitely many such relations, we can specify the module with a finite amount of information. A natural question we might ask is: if n ≠ m, are Rⁿ and Rᵐ necessarily different? For vector spaces they obviously must be different, since basis size and dimension are well-defined concepts. Proposition (Invariance of dimension/rank). Let R be a non-zero ring. If Rⁿ ≅ Rᵐ as R-modules, then n = m. We know this is true if R is a field. We now want to reduce the general case to the case of a field. If R is an integral domain, then we can produce a field
by taking the field of fractions, and this might be a good starting point. However, we want to do this for general rings. So we need some more magic. We will need the following construction: let I ◁ R be an ideal, and let M be an R-module. We define

IM = {am ∈ M : a ∈ I, m ∈ M} ≤ M.

So we can take the quotient module M/IM, which is an R-module again. Now if b ∈ I, then its action on M/IM is b(m + IM) = bm + IM = IM. So everything in I kills everything in M/IM. So we can consider M/IM as an R/I-module by (r + I) · (m + IM) = r · m + IM. So we have proved: Proposition. If I ◁ R is an ideal and M is an R-module, then M/IM is an R/I-module in a natural way. We next need the following general fact: Proposition. Every non-zero ring has a maximal ideal. This is a rather strong statement, since it talks about all rings, and we can have weird rings. We need to use a more subtle argument, namely Zorn's lemma. You probably haven't seen it before, in which case you might want to skip the proof and just take the lecturer's word on it. Proof. We observe that an ideal I ◁ R is proper if and only if 1R ∉ I. So every increasing union of proper ideals is proper. Then by Zorn's lemma, there is a maximal ideal (Zorn's lemma says, roughly, that if an arbitrary union of increasing things is still a thing, then there is a maximal such thing). With these two notions, we get: Proposition (Invariance of dimension/rank). Let R be a non-zero ring. If Rⁿ ≅ Rᵐ as R-modules, then n = m. Proof. Let I be a maximal ideal of R. Suppose we have Rⁿ ≅ Rᵐ. Then we must have Rⁿ/IRⁿ ≅ Rᵐ/IRᵐ as R/I-modules. But staring at it long enough, we figure that Rⁿ/IRⁿ ≅ (R/I)ⁿ
, and similarly for m. Since R/I is a field, the result follows by linear algebra. The point of this proposition is not the result itself (which is not too interesting), but the general constructions used in the proof.

3.3 Matrices over Euclidean domains

This is the part of the course where we deliver all our promises about proving the classification of finite abelian groups and Jordan normal forms. Until further notice, we will assume R is a Euclidean domain, and we write φ : R \ {0} → Z≥0 for its Euclidean function. We know that in such a Euclidean domain, the greatest common divisor gcd(a, b) exists for all a, b ∈ R. We will consider some matrices with entries in R. Definition (Elementary row operations). Elementary row operations on an m × n matrix A with entries in R are operations of the form: (i) add c ∈ R times the ith row to the jth row — this may be done by left-multiplication by the matrix which is the identity except for an extra entry c in the jth row and ith column; (ii) swap the ith and jth rows — this is left-multiplication by the identity matrix with its ith and jth rows interchanged; (iii) multiply the ith row by a unit c ∈ R — this is left-multiplication by the identity matrix with its (i, i) entry replaced by c. Notice that if R is a field, then we can multiply any row by any non-zero number, since they are all units. We also have elementary column operations defined in a similar fashion, corresponding to right-multiplication by the analogous matrices. Notice that all these matrices are invertible. Definition (Equivalent matrices). Two matrices are equivalent if we can get from one to the other via a sequence of such elementary row and column operations. Note that if A and B are equivalent, then we can write B = QAT⁻¹ for some invertible matrices Q and T⁻¹. The aim of the game is to find, for each matrix, a matrix
equivalent to it that is as simple as possible. Recall from IB Linear Algebra that if R is a field, then we can put any matrix into the block form

( Ir 0
  0  0 )

via elementary row and column operations. This is no longer true when working with rings. For example, over Z, we cannot put the matrix

( 2 0
  0 0 )

into that form, since no operation can turn the 2 into a 1. What we get is the following result: Theorem (Smith normal form). An m × n matrix over a Euclidean domain R is equivalent to a diagonal matrix

( d1
     d2
        ...
           dr )

(with all other entries zero), with the di all non-zero and d1 | d2 | d3 | · · · | dr. Note that the divisibility condition is similar to the one in the classification of finitely generated abelian groups. In fact, we will derive that classification as a consequence of the Smith normal form. Definition (Invariant factors). The dk obtained in the Smith normal form are called the invariant factors of A. We first exhibit the algorithm of producing the Smith normal form with an example in Z. Example. We start with the matrix

( 3  7 4
  1 −1 2
  3  5 1 ).

We want to move the 1 to the top-left corner. So we swap the first and second rows to obtain

( 1 −1 2
  3  7 4
  3  5 1 ).

We then try to eliminate the other entries in the first row by column operations. We add multiples of the first column to the second and third to obtain

( 1  0  0
  3 10 −2
  3  8 −5 ).

We similarly clear the first column to get

( 1  0  0
  0 10 −2
  0  8 −5 ).

We are left with a 2 × 2 matrix to fiddle with. We swap the second and third columns so that 2 is in the (2, 2) entry, and secretly change signs to get

( 1 0  0
  0 2 10
  0 5  8 ).

We notice that (2
, 5) = 1. So we can use linear combinations to introduce a 1 at the bottom: subtracting twice the first row from the second gives
[ 2  10 ]
[ 1 −12 ]
Swapping rows, we get
[ 1 −12 ]
[ 2  10 ]
We then clear the remaining rows and columns to get
[ 1  0 ]
[ 0 34 ]
So the original matrix has Smith normal form diag(1, 1, 34).

Proof. Throughout the process, we will keep calling our matrix A, even though it keeps changing in each step, so that we don't have to invent hundreds of names for these matrices.

If A = 0, then done! So suppose A ≠ 0. So some entry is not zero, say, Aij ≠ 0. Swapping the ith and first rows, then the jth and first columns, we arrange that A11 ≠ 0. We now try to reduce A11 as much as possible. We have the following two possible moves:
(i) If there is an A1j not divisible by A11, then we can use the Euclidean algorithm to write A1j = qA11 + r. By assumption, r ≠ 0. So φ(r) < φ(A11) (where φ is the Euclidean function). So we subtract q copies of the first column from the jth column. Then in position (1, j), we now have r. We swap the first and jth columns such that r is in position (1, 1), and we have strictly reduced the value of φ at the first entry.
(ii) If there is an Ai1 not divisible by A11, we do the same thing with rows, and this again reduces φ(A11).
We keep performing these until no move is possible. Since the value of φ(A11) strictly decreases with every move, we stop after finitely many applications. Then we know that we must have A11 dividing all the A1j and Ai1. Now we can just subtract appropriate multiples of the first column from the others so that A1j = 0 for j ≠ 1. We do the same thing with rows so that the first column is cleared. Then we have a matrix of the form
[ d 0 · · · 0 ]
[ 0           ]
[ ⋮     C     ]
[ 0           ]
We would like to say "do the same thing with C", but then this would get us a regular diagonal matrix, not necessarily in Smith normal form. So we need some preparation.
(iii) Suppose there is an entry of C not divisible by d, say Aij with i, j > 1. We write Aij = qd + r, with r ≠ 0 and φ(r) < φ(d). We add column 1 to column j, and subtract q times row 1 from row i. Now we get r in the (i, j)th entry, and we want to send it back to the (1, 1) position. We swap row i with row 1, and column j with column 1, so that r is in the (1, 1)th entry, and φ(r) < φ(d). Now we have messed up the first row and column. So we go back and do (i) and (ii) again until the first row and column are cleared. Then we get a matrix of the form
[ d′ 0 · · · 0 ]
[ 0            ]
[ ⋮     C′     ]
[ 0            ]
where φ(d′) ≤ φ(r) < φ(d).

As this strictly decreases the value of φ(A11), we can only repeat this finitely many times. When we stop, we will end up with a matrix of the same shape
[ d 0 · · · 0 ]
[ 0           ]
[ ⋮     C     ]
[ 0           ]
where now d divides every entry of C. Now we apply the entire process to C. When we do this, notice that all the allowed operations don't change the fact that d divides every entry of C. So applying this recursively, we obtain a diagonal matrix with the claimed divisibility property.

Note that if we didn't have to care about the divisibility property, we could just do (i) and (ii), and we would get a diagonal matrix. The magic to get to the Smith normal form is (iii).

Recall that the di are called the invariant factors. So it would be nice if we could prove that the di are indeed invariant. It is not clear from the algorithm that we will always end up with the same di. Indeed, we can multiply a whole row
by −1 and get different intermediate matrices. However, it turns out that the invariant factors are unique up to multiplication by units.

To study the uniqueness of the invariant factors of a matrix A, we relate them to other invariants, which involve minors.

Definition (Minor). A k × k minor of a matrix A is the determinant of a k × k submatrix of A (i.e. a matrix formed by removing all but k rows and all but k columns).

Any given matrix has many minors, since we get to decide which rows and columns we throw away. The idea is to consider the ideal generated by all the minors of the matrix.

Definition (Fitting ideal). For a matrix A, the kth Fitting ideal Fitk(A) ≤ R is the ideal generated by the set of all k × k minors of A.

A key property is that equivalent matrices have the same Fitting ideals, even if they might have very different minors.

Lemma. Let A and B be equivalent matrices. Then Fitk(A) = Fitk(B) for all k.

Proof. It suffices to show that changing A by a single row or column operation does not change the Fitting ideal. Since taking the transpose does not change the determinant, i.e. Fitk(A) = Fitk(AT), it suffices to consider the row operations.

The most difficult one is taking linear combinations. Let B be the result of adding c times the ith row to the jth row, and fix C a k × k submatrix of A. Let C′ be the corresponding submatrix of B. We then want to show that det C′ ∈ Fitk(A). If the jth row is outside of C, then the minor is unchanged: det C′ = det C. If both the ith and jth rows are in C, then the submatrix changes by a row operation, which does not affect the determinant. These are the boring cases.

Suppose the jth row is in C and the ith row is not. Suppose the ith row of A, restricted to the chosen columns, is (f1, · · · , fk). Then C′ agrees with C except that its jth row is (Cj1 + cf1, Cj2 + cf2, ·
· ·, Cjk + cfk). We compute det C′ by expanding along this row. Then we get
det C′ = det C + c det D,
where D is the matrix obtained by replacing the jth row of C with (f1, · · · , fk). The point is that det C is certainly a minor of A, and det D is still a minor of A, just another one. Since ideals are closed under addition and multiplication, we know det C′ ∈ Fitk(A).

The other operations are much simpler. They just follow by standard properties of the effect of swapping rows or multiplying rows on determinants. So after any row operation, the resultant submatrix C′ satisfies det C′ ∈ Fitk(A). Since this is true for all minors, we must have
Fitk(B) ⊆ Fitk(A).
But row operations are invertible. So we must have
Fitk(A) ⊆ Fitk(B)
as well. So they must be equal. So done.

We now notice that if we have a matrix in Smith normal form, say
B = diag(d1, d2, · · · , dr),
then we can immediately read off
Fitk(B) = (d1 d2 · · · dk).
This is clear once we notice that the only possibly non-zero k × k minors are those coming from diagonal submatrices, and the minor from the top-left k × k submatrix divides all the other diagonal ones. So we have

Corollary. If A has Smith normal form
B = diag(d1, d2, · · · , dr),
then
Fitk(A) = (d1 d2 · · · dk).
So dk is unique up to associates.

This is since we can find dk by dividing a generator of Fitk(A) by a generator of Fitk−1(A).

Example. Consider the matrix over Z:
A = [ 2 0 ]
    [ 0 3 ]
This is diagonal, but not
in Smith normal form. We can potentially apply the algorithm, but that would be messy. We notice that
Fit1(A) = (2, 3) = (1).
So d1 = ±1. We can then look at the second Fitting ideal
Fit2(A) = (6).
So d1 d2 = ±6. So we must have d2 = ±6. So the Smith normal form is
[ 1 0 ]
[ 0 6 ]
That was much easier.

We are now going to use Smith normal forms to do things. We will need some preparation, in the form of the following lemma:

Lemma. Let R be a principal ideal domain. Then any submodule of Rm is generated by at most m elements.

This is obvious for vector spaces, but is slightly more difficult here.

Proof. Let N ≤ Rm be a submodule. Consider the ideal
I = {r ∈ R : (r, r2, · · · , rm) ∈ N for some r2, · · · , rm ∈ R}.
It is clear this is an ideal. Since R is a principal ideal domain, we must have I = (a) for some a ∈ R. We now choose an n = (a, a2, · · · , am) ∈ N. Then for any vector (r1, r2, · · · , rm) ∈ N, we know that r1 ∈ I. So a | r1. So we can write
r1 = ra.
Then we can form
(r1, r2, · · · , rm) − r(a, a2, · · · , am) = (0, r2 − ra2, · · · , rm − ram) ∈ N.
This lies in N′ = N ∩ ({0} × Rm−1) ≤ Rm−1. Thus everything in N can be written as a multiple of n plus something in N′. But by induction, since N′ is (isomorphic to) a submodule of Rm−1, we know N′ is generated by at most m − 1 elements. So there are n2, · · · , nm ∈ N′ generating N′. So n, n2, · · · , nm generate N.

If we have a submodule of Rm, then it has at most m generators. However, these might generate the submodule in a terrible way.
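Before moving on, note that the two ways we have seen of finding the invariant factors, the reduction algorithm from the proof and the Fitting ideal trick, are both easy to automate over Z. The following Python sketch is an illustration only (it is not part of the course, and the function names are my own): `smith_normal_form` carries out steps (i)–(iii), and `invariant_factors` instead uses the fact that over Z each Fitk(A) is generated by the gcd of the k × k minors.

```python
from itertools import combinations
from math import gcd

def smith_normal_form(A):
    """Row/column-reduce an integer matrix to Smith normal form,
    following steps (i)-(iii) of the proof (illustrative sketch)."""
    A = [row[:] for row in A]
    m, n = len(A), len(A[0])
    for t in range(min(m, n)):
        # Move some non-zero entry into the (t, t) position.
        piv = next(((i, j) for i in range(t, m) for j in range(t, n)
                    if A[i][j] != 0), None)
        if piv is None:
            break
        A[t], A[piv[0]] = A[piv[0]], A[t]
        for row in A:
            row[t], row[piv[1]] = row[piv[1]], row[t]
        while True:
            # (ii): clear column t; a non-zero remainder becomes the
            # new, phi-smaller pivot via a row swap.
            for i in range(t + 1, m):
                q = A[i][t] // A[t][t]
                for j in range(t, n):
                    A[i][j] -= q * A[t][j]
                if A[i][t] != 0:
                    A[t], A[i] = A[i], A[t]
            # (i): clear row t the same way, with column operations.
            for j in range(t + 1, n):
                q = A[t][j] // A[t][t]
                for i in range(t, m):
                    A[i][j] -= q * A[i][t]
                if A[t][j] != 0:
                    for row in A:
                        row[t], row[j] = row[j], row[t]
            if any(A[i][t] for i in range(t + 1, m)) or \
               any(A[t][j] for j in range(t + 1, n)):
                continue
            # (iii): the pivot must divide every remaining entry.
            bad = next(((i, j) for i in range(t + 1, m)
                        for j in range(t + 1, n) if A[i][j] % A[t][t]), None)
            if bad is None:
                break
            for j in range(t, n):          # fold the offending row in,
                A[t][j] += A[bad[0]][j]    # then reduce again
        if A[t][t] < 0:                    # normalise by the unit -1
            A[t] = [-x for x in A[t]]
    return A

def invariant_factors(A):
    """d_k via Fitting ideals: over Z, Fit_k(A) = (g_k), with g_k the
    gcd of all k x k minors, and then d_k = g_k / g_{k-1}."""
    def det(M):
        if len(M) == 1:
            return M[0][0]
        return sum((-1) ** j * M[0][j]
                   * det([r[:j] + r[j + 1:] for r in M[1:]])
                   for j in range(len(M)))
    m, n, prev, ds = len(A), len(A[0]), 1, []
    for k in range(1, min(m, n) + 1):
        g = 0
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                g = gcd(g, det([[A[i][j] for j in cols] for i in rows]))
        if g == 0:
            break  # all larger minors vanish; the rank is reached
        ds.append(g // prev)
        prev = g
    return ds
```

On the examples above, both functions agree: the 3 × 3 matrix from the worked example reduces to diag(1, 1, 34), and diag(2, 3) has invariant factors 1, 6.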
The next theorem tells us there is a nice way of finding generators.

Theorem. Let R be a Euclidean domain, and let N ≤ Rm be a submodule. Then there exists a basis v1, · · · , vm of Rm such that N is generated by d1v1, d2v2, · · · , drvr for some 0 ≤ r ≤ m and some di ∈ R such that
d1 | d2 | · · · | dr.

This is not hard, given what we've developed so far.

Proof. By the previous lemma, N is generated by some elements x1, · · · , xn with n ≤ m. Each xi is an element of Rm. So we can think of it as a column vector of length m, and we can form the matrix
A = [ x1 x2 · · · xn ]
with the xi as columns. We've got an m × n matrix. So we can put it in Smith normal form! Since there are at least as many rows as columns, this is of the form
[ diag(d1, d2, · · · , dr) ]
[            0             ]
i.e. the diagonal matrix diag(d1, · · · , dr) padded with zeros.

Recall that we got to the Smith normal form by row and column operations. Performing row operations just changes the basis of Rm, while each column operation changes the generators of N. So what this tells us is that there is a new basis v1, · · · , vm of Rm such that N is generated by d1v1, · · · , drvr. By definition of Smith normal form, the divisibility condition holds.

Corollary. Let R be a Euclidean domain. A submodule of Rm is free of rank
at most m. In other words, a submodule of a free module is free, and of smaller (or equal) rank.

Proof. Let N ≤ Rm be a submodule. By the above, there is a basis v1, · · · , vm of Rm such that N is generated by d1v1, · · · , drvr for some r ≤ m. So it is certainly generated by at most m elements. So we only have to show that d1v1, · · · , drvr are independent. But if they were linearly dependent, then so would be v1, · · · , vr. But v1, · · · , vm are a basis, hence linearly independent. So d1v1, · · · , drvr generate N freely. So N ≅ Rr.

Note that this is not true for all rings. For example, (2, X) ≤ Z[X] is a submodule of Z[X], but is not isomorphic to Z[X] (it is not principal, while any two of its elements are linearly dependent over Z[X], so it cannot be free).

Theorem (Classification of finitely-generated modules over a Euclidean domain). Let R be a Euclidean domain, and M be a finitely generated R-module. Then
M ≅ R/(d1) ⊕ R/(d2) ⊕ · · · ⊕ R/(dr) ⊕ R ⊕ · · · ⊕ R
for some di ≠ 0 with d1 | d2 | · · · | dr.

This is either a good or bad thing. If you are pessimistic, this says the world of finitely generated modules is boring, since there are only these modules we already know about. If you are optimistic, this tells you all finitely-generated modules are of this simple form, so we can prove things about them assuming they look like this.

Proof. Since M is finitely-generated, there is a surjection φ : Rm → M. So by the first isomorphism theorem, we have
M ≅ Rm / ker φ.
Since ker φ is a submodule of Rm, by the previous theorem, there is a basis v1, · · · , vm of Rm such that ker φ is generated by d1v1, · · · , drvr for some 0 ≤ r ≤ m and d1 | d2 | · · · | dr. So we know M
≅ Rm / ((d1, 0, · · · , 0), (0, d2, 0, · · · , 0), · · · , (0, · · · , 0, dr, 0, · · · , 0)).
This is just
R/(d1) ⊕ R/(d2) ⊕ · · · ⊕ R/(dr) ⊕ R ⊕ · · · ⊕ R,
with m − r copies of R.

This is particularly useful in the case where R = Z, where R-modules are exactly abelian groups.

Example. Let A be the abelian group generated by a, b, c with relations
2a + 3b + c = 0,
a + 2b = 0,
5a + 6b + 7c = 0.
In other words, we have
A = Z3 / ((2, 3, 1), (1, 2, 0), (5, 6, 7)).
We would like to get a better description of A. It is not even obvious if this module is the zero module or not.

To work out a good description, we consider the matrix
X = [ 2 3 1 ]
    [ 1 2 0 ]
    [ 5 6 7 ]
To figure out the Smith normal form, we find the Fitting ideals. We have
Fit1(X) = (1, · · · ) = (1),
since 1 is an entry of the matrix. So d1 = 1.

We have to work out the second Fitting ideal. In principle, we have to check all the 2 × 2 minors, but we immediately notice
det [ 2 3 ] = 1.
    [ 1 2 ]
So Fit2(X) = (1), and d2 = 1. Finally, we find
Fit3(X) = (det X) = (3).
So d3 = 3. So we know
A ≅ Z/(1) ⊕ Z/(1) ⊕ Z/(3) ≅ Z/(3) ≅ C3.
If you don't feel like computing determinants, doing row and column reduction is often just as quick and straightforward.

We re-state the previous theorem in the specific case where R is Z, since this is particularly useful.

Corollary (Classification of finitely-generated abelian groups). Any finitely-generated abelian group is isomorphic to
Cd1 × ·
· · × Cdr × C∞ × · · · × C∞,
where C∞ ≅ Z is the infinite cyclic group, with d1 | d2 | · · · | dr.

Proof. Let R = Z, and apply the classification of finitely generated R-modules.

Note that if the group is finite, then there cannot be any C∞ factors. So it is just a product of finite cyclic groups.

Corollary. If A is a finite abelian group, then
A ≅ Cd1 × · · · × Cdr,
with d1 | d2 | · · · | dr.

This is the result we stated at the beginning of the course.

Recall that we were also able to decompose a finite abelian group into a product of groups of the form Cpk with p prime, and we said it was just the Chinese remainder theorem. This again holds in the general setting, but we, again, need the Chinese remainder theorem.

Lemma (Chinese remainder theorem). Let R be a Euclidean domain, and a, b ∈ R be such that gcd(a, b) = 1. Then
R/(ab) ≅ R/(a) × R/(b)
as R-modules.

The proof is just that of the Chinese remainder theorem written in ring language.

Proof. Consider the R-module homomorphism
φ : R/(a) × R/(b) → R/(ab)
given by
(r1 + (a), r2 + (b)) ↦ br1 + ar2 + (ab).
To show this is well-defined, suppose
(r1 + (a), r2 + (b)) = (r1′ + (a), r2′ + (b)).
Then
r1 = r1′ + xa,
r2 = r2′ + yb
for some x, y ∈ R. So
br1 + ar2 + (ab) = br1′ + xab + ar2′ + yab + (ab) = br1′ + ar2′ + (ab).
So this is indeed well-defined. It is clear that this is a module map, by inspection.

We now have to show it is surjective and injective. So far, we have not used the
hypothesis that gcd(a, b) = 1. As we know gcd(a, b) = 1, by the Euclidean algorithm, we can write
1 = ax + by
for some x, y ∈ R. So we have
φ(y + (a), x + (b)) = by + ax + (ab) = 1 + (ab).
So 1 ∈ im φ. Since this is an R-module map, we get
φ(r(y + (a), x + (b))) = r · (1 + (ab)) = r + (ab).
The key fact is that R/(ab) as an R-module is generated by 1. Thus we know φ is surjective.

Finally, we have to show it is injective, i.e. that the kernel is trivial. Suppose
φ(r1 + (a), r2 + (b)) = 0 + (ab).
Then
br1 + ar2 ∈ (ab).
So we can write
br1 + ar2 = abx
for some x ∈ R. Since a | ar2 and a | abx, we know a | br1. Since a and b are coprime, unique factorization implies a | r1. Similarly, we know b | r2. So
(r1 + (a), r2 + (b)) = (0 + (a), 0 + (b)).
So the kernel is trivial.

Theorem (Prime decomposition theorem). Let R be a Euclidean domain, and M be a finitely-generated R-module. Then
M ≅ N1 ⊕ N2 ⊕ · · · ⊕ Nt,
where each Ni is either R or R/(p^n) for some prime p ∈ R and some n ≥ 1.

Proof. We already know
M ≅ R/(d1) ⊕ · · · ⊕ R/(dr) ⊕ R ⊕ · · · ⊕ R.
So it suffices to show that each R/(d1) can be written in that form. We let
d1 = p1^n1 p2^n2 · · · pk^nk
with the pi distinct primes, so that the pi^ni are pairwise coprime. So, by the lemma iterated, we have R
/(d1) ≅ R/(p1^n1) ⊕ · · · ⊕ R/(pk^nk).

3.4 Modules over F[X] and normal forms for matrices

That was one promise delivered. We next want to consider the Jordan normal form. This is less straightforward, since considering V directly as an F-module would not be too helpful (since that would just be pure linear algebra). Instead, we use the following trick:

For a field F, the polynomial ring F[X] is a Euclidean domain, so the results of the last few sections apply. If V is a vector space over F, and α : V → V is a linear map, then we can make V into an F[X]-module via
F[X] × V → V,
(f, v) ↦ (f(α))(v).
We write Vα for this F[X]-module.

Lemma. If V is a finite-dimensional vector space, then Vα is a finitely-generated F[X]-module.

Proof. If v1, · · · , vn generate V as an F-module, i.e. they span V as a vector space over F, then they also generate Vα as an F[X]-module, since F ≤ F[X].

Example. Suppose Vα ≅ F[X]/(X^r) as F[X]-modules. Then in particular they are isomorphic as F-modules (since being a map of F-modules has fewer requirements than being a map of F[X]-modules). Under this bijection, the elements 1, X, X^2, · · · , X^(r−1) ∈ F[X]/(X^r) form a vector space basis for Vα. Viewing F[X]/(X^r) as an F-vector space, the action of X sends 1 ↦ X ↦ X^2 ↦ · · · ↦ X^(r−1) ↦ 0, so it has matrix
[ 0 0 · · · 0 0 ]
[ 1 0 · · · 0 0 ]
[ 0 1 · · · 0 0 ]
[ ⋮     ⋱     ⋮ ]
[ 0 0 · · · 1 0 ]
with 1s just below the diagonal. We also know that in Vα, the action of X is by definition the linear map α. So under this basis, α
also has matrix
[ 0 0 · · · 0 0 ]
[ 1 0 · · · 0 0 ]
[ ⋮     ⋱     ⋮ ]
[ 0 0 · · · 1 0 ]

Example. Suppose
Vα ≅ F[X]/((X − λ)^r)
for some λ ∈ F. Consider the new linear map
β = α − λ · id : V → V.
Then Vβ ≅ F[Y]/(Y^r), for Y = X − λ. So there is a basis for V in which β looks like
[ 0 0 · · · 0 0 ]
[ 1 0 · · · 0 0 ]
[ ⋮     ⋱     ⋮ ]
[ 0 0 · · · 1 0 ]
So we know α = β + λ · id has matrix
[ λ 0 · · · 0 0 ]
[ 1 λ · · · 0 0 ]
[ ⋮     ⋱     ⋮ ]
[ 0 0 · · · 1 λ ]
So it is a Jordan block (except that our Jordan blocks are the other way round, with the 1s below the diagonal).

Example. Suppose Vα ≅ F[X]/(f) for some polynomial f, say
f = a0 + a1X + · · · + ar−1 X^(r−1) + X^r.
This has a basis 1, X, X^2, · · · , X^(r−1) as well, in which the matrix of α is
c(f) = [ 0 0 · · · 0 −a0   ]
       [ 1 0 · · · 0 −a1   ]
       [ 0 1 · · · 0 −a2   ]
       [ ⋮     ⋱     ⋮    ]
       [ 0 0 · · · 1 −ar−1 ]
We call this the companion matrix of the monic polynomial f.

These are the different things that can possibly happen. Since we have already classified all finitely generated F[X]-modules, this allows us to put matrices in a rather nice form.

Theorem (Rational canonical form). Let α : V → V be a linear endomorphism of a finite-dimensional
vector space over F, and Vα be the associated F[X]-module. Then
Vα ≅ F[X]/(f1) ⊕ F[X]/(f2) ⊕ · · · ⊕ F[X]/(fs),
with f1 | f2 | · · · | fs. Thus there is a basis for V in which the matrix for α is the block diagonal matrix
[ c(f1)                   ]
[       c(f2)             ]
[             ⋱          ]
[                  c(fs)  ]

This is the sort of theorem whose statement is longer than its proof.

Proof. We already know that Vα is a finitely-generated F[X]-module. By the structure theorem of F[X]-modules, we know
Vα ≅ F[X]/(f1) ⊕ F[X]/(f2) ⊕ · · · ⊕ F[X]/(fs).
We know there are no copies of F[X], since Vα = V is finite-dimensional over F, but F[X] is not. The divisibility criterion also follows from the structure theorem. Then the form of the matrix is immediate.

This is really a canonical form. The Jordan normal form is not canonical, since we can move the blocks around. The structure theorem determines the factors fi up to units, and once we require them to be monic, there is no choice left.

In terms of matrices, this says that if α is represented by a matrix A ∈ Mn,n(F) in some basis, then A is conjugate to a matrix of the form above.

From the rational canonical form, we can immediately read off the minimal polynomial as fs. This is since, viewing Vα via the decomposition above, fs(α) kills everything in the factor F[X]/(fs), and it also kills the other factors since fi | fs for all i. So fs(α) = 0. We also know no smaller polynomial kills V, since it does not kill F[X]/(fs). Similarly, we find that the characteristic polynomial of α is f1 f2 · · · fs.

Recall we had a diff
erent way of decomposing a module over a Euclidean domain, namely the prime decomposition, and this gives us the Jordan normal form. Before we can use that, we need to know what the primes are. This is why we need to work over C.

Lemma. The prime elements of C[X] are the X − λ for λ ∈ C (up to multiplication by units).

Proof. Let f ∈ C[X]. If f is constant, then it is either a unit or 0. Otherwise, by the fundamental theorem of algebra, it has a root λ. So it is divisible by X − λ. So if f is irreducible, it must have degree 1. And clearly everything of degree 1 is prime.

Applying the prime decomposition theorem to C[X]-modules gives us the Jordan normal form.

Theorem (Jordan normal form). Let α : V → V be an endomorphism of a finite-dimensional vector space V over C, and Vα be the associated C[X]-module. Then
Vα ≅ C[X]/((X − λ1)^a1) ⊕ C[X]/((X − λ2)^a2) ⊕ · · · ⊕ C[X]/((X − λt)^at),
where the λi ∈ C do not have to be distinct. So there is a basis of V in which α has matrix
[ Ja1(λ1)                      ]
[         Ja2(λ2)              ]
[                 ⋱           ]
[                    Jat(λt)   ]
where
Jm(λ) = [ λ           ]
        [ 1 λ         ]
        [   ⋱ ⋱      ]
        [      1 λ    ]
is an m × m matrix.

Proof. Apply the prime decomposition theorem to Vα. Then all the primes are of the form X − λ. We then use our second example above to get the form of the matrix.

The blocks Jm(λ) are called the Jordan λ-blocks. It turns out that the Jordan blocks are unique up to reordering, but this does not immediately follow from what we have so far, and we will not prove it. It is done in the IB Linear Algebra course.
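As a quick numerical sanity check of the convention above (Jordan blocks with the 1s below the diagonal), the following plain-Python sketch, which is an illustration rather than part of the course, builds a block-diagonal matrix out of Jordan blocks and confirms that it is annihilated by the product of factors (X − λ) whose multiplicities are the block sizes for each λ, while a smaller product is not enough.

```python
def matmul(A, B):
    # Multiply two square matrices given as lists of rows.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def apply_factors(A, roots):
    """Evaluate the product of (A - r*I) over the listed roots r,
    one linear factor per entry (so multiplicities are allowed)."""
    n = len(A)
    P = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    for r in roots:
        P = matmul(P, [[A[i][j] - r * (i == j) for j in range(n)]
                       for i in range(n)])
    return P

# J = J_2(2) + J_1(2) + J_1(3) as a block-diagonal matrix, with the
# 1 below the diagonal as in these notes.
J = [[2, 0, 0, 0],
     [1, 2, 0, 0],
     [0, 0, 2, 0],
     [0, 0, 0, 3]]
zero = [[0] * 4 for _ in range(4)]

# (X - 2)^2 (X - 3) kills J: the largest 2-block has size 2.
assert apply_factors(J, [2, 2, 3]) == zero
# (X - 2)(X - 3) does not, so no proper divisor can be the
# minimal polynomial.
assert apply_factors(J, [2, 3]) != zero
```

The two assertions match the rules stated next: the exponent of (X − λ) in the minimal polynomial is the size of the largest λ-block, and in the characteristic polynomial it is the sum of the λ-block sizes.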
We can also read off the minimal polynomial and characteristic polynomial of α. The minimal polynomial is
∏λ (X − λ)^aλ,
where the product runs over the distinct eigenvalues λ, and aλ is the size of the largest λ-block. The characteristic polynomial of α is
∏λ (X − λ)^bλ,
where bλ is the sum of the sizes of the λ-blocks. Alternatively, it is
∏(i = 1 to t) (X − λi)^ai.
From the Jordan normal form, we can also read off another invariant, namely the dimension of the λ-eigenspace of α, which is the number of λ-blocks.

We can also use the idea of viewing V as an F[X]-module to prove the Cayley–Hamilton theorem. In fact, we don't need F to be a field.

Theorem (Cayley–Hamilton theorem). Let M be a finitely-generated R-module, where R is some commutative ring. Let α : M → M be an R-module homomorphism. Let A be a matrix representation of α under some choice of generators, and let p(t) = det(tI − A). Then p(α) = 0.

Proof. We consider M as an R[X]-module with action given by
(f(X))(m) = f(α)(m).
Suppose e1, · · · , en span M, and that for all i, we have
α(ei) = ∑(j = 1 to n) aij ej.
Then
∑(j = 1 to n) (Xδij − aij) ej = 0.
We write C for the matrix with entries
cij = Xδij − aij ∈ R[X].
We now use the fact that
adj(C) C = det(C) I,
which we proved in IB Linear Algebra (and the proof did not assume that the underlying ring is a field). Expanding this out, we get the following equation (in R[X]):
χα(X) I = det(XI − A) I = (adj(XI − A))(XI − A).
Writing this in components, and multiplying by ek, we have
χα(X) δik ek = ∑(j = 1 to n)
(adj(XI − A)ij)(Xδjk − ajk)ek. Then for each i, we sum over k to obtain
∑(k = 1 to n) χα(X) δik ek = ∑(j, k = 1 to n) (adj(XI − A)ij)(Xδjk − ajk)ek = 0,
by our choice of the aij. But the left hand side is just χα(X) ei. So χα(X) acts trivially on all of the generators ei. So it in fact acts trivially on M. So χα(α) is the zero map (since acting by X is the same as acting by α, by construction).

Note that if we want to prove this just for matrices over a field, we don't really need the theory of rings and modules. It just provides a convenient language to write the proof in.

3.5 Conjugacy of matrices*

We are now going to do some fun computations of conjugacy classes of matrices, using what we have got so far.

Lemma. Let α, β : V → V be two linear maps. Then Vα ≅ Vβ as F[X]-modules if and only if α and β are conjugate as linear maps, i.e. there is some γ : V → V such that α = γ−1βγ.

This is not a deep theorem. It is in some sense just a tautology. All we have to do is to unwrap what these statements say.

Proof. Let γ : Vα → Vβ be an F[X]-module isomorphism. Then for v ∈ V, we notice that β(v) is just X · v in Vβ, and α(v) is just X · v in Vα. So we get
β ◦ γ(v) = X · (γ(v)) = γ(X · v) = γ ◦ α(v),
using the definition of an F[X]-module homomorphism. So we know
βγ = γα.
So
α = γ−1βγ.
Conversely, let γ : V → V be a linear isomorphism such that γ−1βγ = α. We now claim that γ : Vα → Vβ is an F[X]-module isomorphism
We just have to check that
γ(f · v) = γ(f(α)(v))
= γ((a0 + a1α + · · · + an α^n)(v))
= γ(a0 v) + γ(a1 α(v)) + γ(a2 α^2(v)) + · · · + γ(an α^n(v))
= (a0 + a1β + a2β^2 + · · · + an β^n)(γ(v))
= f · γ(v),
using γα = βγ.

So classifying linear maps up to conjugation is the same as classifying modules. We can reinterpret this a little bit, using our classification of finitely-generated modules.

Corollary. There is a bijection between conjugacy classes of n × n matrices over F and sequences of monic polynomials d1, · · · , dr such that d1 | d2 | · · · | dr and deg(d1 · · · dr) = n.

Example. Let's classify conjugacy classes in GL2(F), i.e. we need to classify F[X]-modules of the form
F[X]/(d1) ⊕ F[X]/(d2) ⊕ · · · ⊕ F[X]/(dr)
which are two-dimensional as F-vector spaces. As we must have deg(d1 d2 · · · dr) = 2, either
(i) r = 1 and deg(d1) = 2, or
(ii) r = 2 and deg(d1) = deg(d2) = 1. In this case, since we have d1 | d2, and they are both monic linear, we must have d1 = d2 = X − λ for some λ.
In the first case, the module is F[X]/(d1), where, say,
d1 = X^2 + a1X + a2.
In the second case, we get
F[X]/(X − λ) ⊕ F[X]/(X − λ).
What does this say? In the first case, we use the basis 1, X, and the linear map has matrix
[ 0 −a2 ]
[ 1 −a1 ]
In the second case, the matrix is
[ λ 0 ]
[ 0 λ ]
Do these cases overlap? Suppose the two of them are conjugate. Then they have the same determinant and same trace. So we know
−a1 = 2λ,  a2 = λ^2.
So in fact our polynomial is
X^2 + a1X + a2 = X^2 − 2λX + λ^2 = (X − λ)^2.
This is just the polynomial of a Jordan block. So the matrix
[ 0 −a2 ]
[ 1 −a1 ]
is conjugate to the Jordan block
[ λ 0 ]
[ 1 λ ]
but this is not conjugate to λI, e.g. by looking at eigenspaces. So these cases are disjoint. Note that we have done more work than we really needed, since λI is invariant under conjugation.

But the first case is not too satisfactory. We can further classify it as follows. If X^2 + a1X + a2 is reducible, then it is
(X − λ)(X − µ)
for some λ, µ ∈ F. If λ = µ, then the matrix is conjugate to
[ λ 0 ]
[ 1 λ ]
Otherwise, it is conjugate to
[ λ 0 ]
[ 0 µ ]
In the case where X^2 + a1X + a2 is irreducible, there is nothing we can do in general. However, we can look at some special scenarios and see if there is anything we can do.

Example. Consider GL2(Z/3). We want to classify its conjugacy classes. By the general theory, we know everything is conjugate to
[ 0 −a2 ]    [ λ 0 ]    [ λ 0 ]
[ 1 −a1 ],   [ 0 µ ],   [ 1 λ ],
with X^2 + a1X + a2 irreducible. So we need to figure out what the irreducibles are.

A reasonable strategy is to guess. Given any quadratic, it is easy to see if it is irreducible, since we can try to see if it has any roots, and there are just three things to try. However, we can be slightly more clever. We
first count how many irreducibles we are expecting, and then find that many of them. There are 9 monic quadratic polynomials in total, since a1, a2 ∈ Z/3. The reducible ones are the (X − λ)^2 and the (X − λ)(X − µ) with λ ≠ µ. There are three of each kind. So we have 6 reducible polynomials, and so 3 irreducible ones. We can then check that
X^2 + 1,  X^2 + X + 2,  X^2 + 2X + 2
are the irreducible polynomials. So every matrix in GL2(Z/3) is conjugate to one of
[ 0 −1 ]   [ 0 −2 ]   [ 0 −2 ]   [ λ 0 ]   [ λ 0 ]
[ 1  0 ],  [ 1 −1 ],  [ 1 −2 ],  [ 0 µ ],  [ 1 λ ],
where λ, µ ∈ (Z/3)^× (since the matrix has to be invertible). The numbers of conjugacy classes of each type are 1, 1, 1, 3, 2. So there are 8 conjugacy classes.

The first three classes have elements of order 4, 8, 8 respectively, by trying. We notice that the identity matrix has order 1, and the other diagonal matrices
[ λ 0 ]
[ 0 µ ]
have order 2. Finally, for the last type, we have
ord [ 1 0 ] = 3,   ord [ 2 0 ] = 6.
    [ 1 1 ]            [ 1 2 ]

Note that we also have |GL2(Z/3)| = 48 = 2^4 · 3. Since there is no element of order 16, the Sylow 2-subgroup of GL2(Z/3) is not cyclic.

To construct the Sylow 2-subgroup, we might start with an element of order 8, say
B = [ 0 −2 ]
    [ 1 −1 ]
To make a subgroup of order 16, a sensible guess would be to take an element of order 2, but that doesn't work, since B^4 will give you the element of order 2. Instead, we pick
A = [ 0 −1 ]
    [ 1  0 ]
We notice
A^−1 B A = B^3.
So this is a bit like the dihedral group. We know that ⟨B⟩ ⊴ ⟨A, B⟩. Also, we know |⟨B⟩| = 8. So if we can show that ⟨B⟩ has index 2 in ⟨A, B⟩, then this is the Sylow 2-subgroup. By the second isomorphism theorem, something we have never used in our life, we know
⟨A, B⟩ / ⟨B⟩ ≅
⟨A⟩ / (⟨A⟩ ∩ ⟨B⟩).
We can list things out, and then find
⟨A⟩ ∩ ⟨B⟩ = { I, [ 2 0 ] } ≅ C2.
             [ 0 2 ]
We also know ⟨A⟩ ≅ C4. So
|⟨A, B⟩| / |⟨B⟩| = 2.
So |⟨A, B⟩| = 16. So this is the Sylow 2-subgroup. In fact, it is
⟨A, B | A^4 = B^8 = e, A^−1 B A = B^3⟩.
We call this the semi-dihedral group of order 16, because it is a bit like a dihedral group.

Note that finding this subgroup was purely guesswork. There is no method to know that A and B are the right choices.

Index

An, 12 Sn, 12 Sym(X), 12 sgn, 12 p-group, 20 abelian group, 4 ACC, 48 algebraic integer, 59 alternating group, 12 annihilator, 68 ascending chain condition, 48 associates, 43 automorphism group, 17 basis, 71 Cayley's theorem, 15 Cayley-Hamilton theorem, 89 center, 17 centralizer, 17 characteristic, 37 Chinese remainder theorem, 84 classification of finite abelian groups, 22 classification of finitely-generated abelian groups, 83 classification of finitely-generated modules over Euclidean domains, 82 commutative ring, 27 conjugacy class, 17 conjugation, 17 content, 50 coset, 5 degree, 29 direct sum, 70 division, 43 ED, 44 Eisenstein's criterion, 55 elementary row operations, 74 equivalent matrices, 75 Euclidean algorithm for polynomials, 34 Euclidean domain, 44 even permutation, 12 field, 28 field of fractions, 39 finitely generated module, 68 finitely presented module, 72 finitely-generated ideal, 61 first isomorphism theorem, 8, 35, 67 Fitting ideal, 78 free module, 71 freely generated, 70 Gauss' lemma, 50 Gaussian integers, 56 gcd, 49 generator of ideal, 32 greatest common divisor, 49 group, 4 group action, 13 Hilbert basis theorem, 62 homomorphism, 7, 30,
67 ideal, 31 identity, 4 image, 7, 30 integral domain, 38 invariant factors, 76 inverse, 4 irreducible, 43 isomorphic, 8 isomorphism, 8, 30, 67 Jordan normal form, 88 kernel, 7, 30 Lagrange's theorem, 5 Laurent polynomial, 30 linear independence, 70 maximal ideal, 41 minimal polynomial, 59 minor, 78 module, 65 monic polynomial, 29 Noetherian ring, 48, 61 normal subgroup, 6 normalizer, 18 odd permutation, 12 orbit, 16 orbit-stabilizer theorem, 16 order, 5 permutation group, 13 permutation representation, 14 PID, 46 polynomial, 29 polynomial ring, 29 power series, 29 prime, 43 prime decomposition theorem, 85 prime ideal, 41 primitive polynomial, 50 principal ideal, 32 principal ideal domain, 46 product of rings, 28 quotient group, 6 quotient ring, 33 rational canonical form, 87 relation, 72 ring, 27 second isomorphism theorem, 9, 36, 67 sign, 12 simple group, 11 Smith normal form, 75 stabilizer, 16 submodule, 66 subring, 27 Sylow theorem, 22 symmetric group, 12 third isomorphism theorem, 10, 37, 67 UFD, 47 unique factorization domain, 47 unit, 28, 42 zero divisor, 38

for all g ∈ G. An intersection of normal subgroups of a group is again a normal subgroup (cf. 1.14). Therefore, we can define the normal subgroup generated by a subset X of a group G to be the intersection of the normal subgroups containing X. Its description in terms of X is a little complicated. We say that a subset X of a group G is normal (or closed under conjugation) if gXg^−1 ⊆ X for all g ∈ G.

LEMMA 1.38 If X is normal, then the subgroup ⟨X⟩ generated by it is normal.

PROOF. The map "conjugation by g", a ↦ gag^−1, is a homomorphism G → G. If a ∈ ⟨X⟩, say, a = x1 · · · xm with each xi or its inverse in X, then
gag^−1 = (g x1 g^−1) · · · (g xm g^−1).
As X is closed under conjugation, each g xi g^−1 or its inverse
lies in X, and so g⟨X⟩g^−1 ⊆ ⟨X⟩.

LEMMA 1.39 For any subset X of G, the subset ⋃(g ∈ G) gXg^−1 is normal, and it is the smallest normal set containing X.

PROOF. Obvious.

On combining these lemmas, we obtain the following proposition.

PROPOSITION 1.40 The normal subgroup generated by a subset X of G is ⟨⋃(g ∈ G) gXg^−1⟩.

Kernels and quotients

The kernel of a homomorphism α : G → G′ is
Ker(α) = {g ∈ G | α(g) = e}.
If α is injective, then Ker(α) = {e}. Conversely, if Ker(α) = {e}, then α is injective, because
α(g) = α(g′) ⟹ α(g^−1 g′) = e ⟹ g^−1 g′ = e ⟹ g = g′.

PROPOSITION 1.41 The kernel of a homomorphism is a normal subgroup.

PROOF. It is obviously a subgroup, and if a ∈ Ker(α), so that α(a) = e, and g ∈ G, then
α(gag^−1) = α(g) α(a) α(g)^−1 = α(g) α(g)^−1 = e.
Hence gag^−1 ∈ Ker(α).

For example, the kernel of the homomorphism det : GLn(F) → F^× is the group of n × n matrices with determinant 1 — this group SLn(F) is called the special linear group of degree n.

PROPOSITION 1.42 Every normal subgroup occurs as the kernel of a homomorphism. More precisely, if N is a normal subgroup of G, then there is a unique group structure on the set G/N of cosets of N in G for which the natural map a ↦ [a] : G → G/N is a homomorphism.

PROOF. Write the cosets as left cosets, and define (aN)(bN) = (ab)N. We have to check (a) that this is well-defined, and (b)
It will then be obvious that the map g ↦ gN is a homomorphism with kernel N.

(a) Let aN = a′N and bN = b′N; we have to show that abN = a′b′N. But

abN = a(bN) = a(b′N) = aNb′ (by 1.34) = a′Nb′ = a′b′N (by 1.34).

(b) The product is certainly associative, the coset N is an identity element, and a⁻¹N is an inverse for aN.

The group G/N is called the⁷ quotient of G by N. Propositions 1.41 and 1.42 show that the normal subgroups are exactly the kernels of homomorphisms.

PROPOSITION 1.43 The map a ↦ aN: G → G/N has the following universal property: for any homomorphism α: G → G′ of groups such that α(N) = {e}, there exists a unique homomorphism ᾱ: G/N → G′ making the following diagram commute:

[diagram: a ↦ aN: G → G/N, α: G → G′, and the induced ᾱ: G/N → G′]

PROOF. Note that for n ∈ N, α(gn) = α(g)α(n) = α(g), and so α is constant on each left coset gN of N in G. It therefore defines a map

ᾱ: G/N → G′, ᾱ(gN) = α(g),

and ᾱ is a homomorphism because

ᾱ((gN)(g′N)) = ᾱ(gg′N) = α(gg′) = α(g)α(g′) = ᾱ(gN)ᾱ(g′N).

The uniqueness of ᾱ follows from the surjectivity of G → G/N.

EXAMPLE 1.44 (a) Consider the subgroup mZ of Z. The quotient group Z/mZ is a cyclic group of order m.

(b) Let L be a line through the origin in R². Then R²/L is isomorphic to R (because it is a one-dimensional vector space over R).
(c) For n ≥ 2, the quotient Dn/⟨r⟩ = {ē, s̄} (cyclic group of order 2).

Theorems concerning homomorphisms

The theorems in this subsection are sometimes called the isomorphism theorems (first, second, ..., or first, third, ..., or ...).

⁷Some authors say "factor" instead of "quotient", but this can be confused with "direct factor".

1. BASIC DEFINITIONS AND RESULTS

FACTORIZATION OF HOMOMORPHISMS

Recall that the image of a map α: S → T is α(S) = {α(s) | s ∈ S}.

THEOREM 1.45 (HOMOMORPHISM THEOREM) For any homomorphism α: G → G′ of groups, the kernel N of α is a normal subgroup of G, the image I of α is a subgroup of G′, and α factors in a natural way into the composite of a surjection, an isomorphism, and an injection:

G —(g ↦ gN, surjective)→ G/N —(gN ↦ α(g), isomorphism)→ I —(injective)→ G′.

PROOF. We have already seen (1.41) that the kernel is a normal subgroup of G. If b = α(a) and b′ = α(a′), then bb′ = α(aa′) and b⁻¹ = α(a⁻¹), and so I ≝ α(G) is a subgroup of G′. The universal property of quotients (1.43) shows that the map x ↦ α(x): G → I defines a homomorphism ᾱ: G/N → I with ᾱ(gN) = α(g). The homomorphism ᾱ is certainly surjective, and if ᾱ(gN) = e, then g ∈ Ker(α) = N, and so ᾱ has trivial kernel. This implies that it is injective (p. 20).

THE ISOMORPHISM THEOREM

THEOREM 1.46 (ISOMORPHISM THEOREM) Let H be a subgroup of G and N a normal subgroup of G. Then HN is a subgroup of G, H ∩ N is a normal subgroup of H, and the map

h(H ∩ N) ↦ hN: H/(H ∩ N) → HN/N

is an isomorphism.

PROOF. We have already seen (1.37) that HN is a subgroup. Consider the map

H → G/N, h ↦ hN.

This is a homomorphism, and its kernel is H ∩ N, which is therefore normal in H. According to Theorem 1.45, the map induces an isomorphism H/(H ∩ N) → I, where I is its image. But I is the set of cosets of the form hN with h ∈ H, i.e., I = HN/N.

It is not necessary to assume that N be normal in G as long as hNh⁻¹ = N for all h ∈ H (i.e., H is contained in the normalizer of N — see later). Then H ∩ N is still normal in H, but it need not be a normal subgroup of G.

THE CORRESPONDENCE THEOREM

The next theorem shows that if Ḡ is a quotient group of G, then the lattice of subgroups of Ḡ captures the structure of the lattice of subgroups of G lying over the kernel of G → Ḡ.

THEOREM 1.47 (CORRESPONDENCE THEOREM) Let α: G → Ḡ be a surjective homomorphism, and let N = Ker(α). Then there is a one-to-one correspondence

{subgroups of G containing N} ↔ {subgroups of Ḡ}

under which a subgroup H of G containing N corresponds to H̄ = α(H) and a subgroup H̄ of Ḡ corresponds to H = α⁻¹(H̄). Moreover, if H ↔ H̄ and H′ ↔ H̄′, then

(a) H̄ ⊂ H̄′ ⟺ H ⊂ H′, in which case (H̄′ : H̄) = (H′ : H);
(b) H̄ is normal in Ḡ if and only if H is normal in G, in which case α induces an isomorphism G/H ≅ Ḡ/H̄.
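For a cyclic group the correspondence can be made completely explicit. The Python sketch below is an illustration under the assumption G = Z, Ḡ = Z/12Z, N = 12Z (my choice, not the text's): the subgroups of Z containing 12Z are the dZ with d | 12, and they biject with the subgroups of Z/12Z.

```python
from math import gcd

n = 12
divisors = [d for d in range(1, n + 1) if n % d == 0]   # the subgroups dZ ⊇ 12Z of Z

def subgroup(d):
    # image of dZ under the quotient map Z → Z/12Z
    return frozenset(range(0, n, d))

# every subgroup of the cyclic group Z/nZ arises this way, each exactly once
found = set()
for g in range(n):
    d = gcd(g, n) if g else n       # the subgroup generated by g is <gcd(g, n)>
    found.add(subgroup(d))
assert found == {subgroup(d) for d in divisors}
assert len(found) == len(divisors)  # the one-to-one correspondence

# part (a): indices match, e.g. H = 6Z, H' = 2Z gives (H' : H) = 3 on both sides
assert len(subgroup(2)) // len(subgroup(6)) == 6 // 2
```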
PROOF. If H̄ is a subgroup of Ḡ, then α⁻¹(H̄) is easily seen to be a subgroup of G containing N, and if H is a subgroup of G, then α(H) is a subgroup of Ḡ (see 1.45). Clearly, α⁻¹α(H) = HN, which equals H if and only if H ⊃ N, and αα⁻¹(H̄) = H̄. Therefore, the two operations give the required bijection. The remaining statements are easily verified. For example, a decomposition H′ = ⊔_{i∈I} aiH of H′ into a disjoint union of left cosets of H gives a similar decomposition H̄′ = ⊔_{i∈I} α(ai)H̄ of H̄′.

COROLLARY 1.48 Let N be a normal subgroup of G; then there is a one-to-one correspondence between the set of subgroups of G containing N and the set of subgroups of G/N, H ↔ H/N. Moreover H is normal in G if and only if H/N is normal in G/N, in which case the homomorphism g ↦ gN: G → G/N induces an isomorphism

G/H ≅ (G/N)/(H/N).

PROOF. This is the special case of the theorem in which α is g ↦ gN: G → G/N.

EXAMPLE 1.49 Let G = D4 and let N be its subgroup ⟨r²⟩. Recall (1.17) that srs⁻¹ = r³, and so sr²s⁻¹ = r⁶ = r². Therefore N is normal. The groups G and G/N have the following lattices of subgroups:

[Lattice diagrams: D4 lies above ⟨r², s⟩, ⟨r⟩, ⟨r², rs⟩; these lie above ⟨s⟩, ⟨r²s⟩, ⟨r²⟩, ⟨rs⟩, ⟨r³s⟩; and these above 1. For D4/⟨r²⟩: the subgroups ⟨s̄⟩, ⟨r̄⟩, ⟨r̄s̄⟩ lie between D4/⟨r²⟩ and 1.]

Direct products

Let G be a group, and let H1, …, Hk be subgroups of G. We say that G is a direct product of the subgroups Hi if the map

(h1, h2, …, hk) ↦ h1h2⋯hk : H1 × H2 × ⋯ × Hk → G

is an isomorphism of groups. This means that each element g of G can be written uniquely in the form g = h1h2⋯hk, hi ∈ Hi, and that if g = h1h2⋯hk and g′ = h1′h2′⋯hk′, then

gg′ = (h1h1′)(h2h2′)⋯(hkhk′).
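As a toy illustration (not from the text), Z/6Z is the internal direct product of its subgroups ⟨3⟩ and ⟨2⟩; the checks below are the additive form of the criteria in Proposition 1.50, together with the uniqueness of the decomposition:

```python
n = 6
G = set(range(n))               # Z/6Z, written additively
H1, H2 = {0, 3}, {0, 2, 4}      # the subgroups <3> and <2>

assert {(h1 + h2) % n for h1 in H1 for h2 in H2} == G   # (a) G = H1 + H2
assert H1 & H2 == {0}                                   # (b) H1 ∩ H2 = {0}
# (c) holds automatically: G is commutative

# consequently every g has a unique expression g = h1 + h2, i.e. Z/6 ≅ Z/2 × Z/3
for g in G:
    reps = [(h1, h2) for h1 in H1 for h2 in H2 if (h1 + h2) % n == g]
    assert len(reps) == 1
```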
The following propositions give criteria for a group to be a direct product of subgroups.

PROPOSITION 1.50 A group G is a direct product of subgroups H1, H2 if and only if
(a) G = H1H2,
(b) H1 ∩ H2 = {e}, and
(c) every element of H1 commutes with every element of H2.

PROOF. If G is the direct product of H1 and H2, then certainly (a) and (c) hold, and (b) holds because, for any g ∈ H1 ∩ H2, the element (g, g⁻¹) maps to e under (h1, h2) ↦ h1h2 and so equals (e, e). Conversely, (c) implies that (h1, h2) ↦ h1h2 is a homomorphism, and (b) implies that it is injective:

h1h2 = e ⟹ h1 = h2⁻¹ ∈ H1 ∩ H2 = {e}.

Finally, (a) implies that it is surjective.

PROPOSITION 1.51 A group G is a direct product of subgroups H1, H2 if and only if
(a) G = H1H2,
(b) H1 ∩ H2 = {e}, and
(c) H1 and H2 are both normal in G.

PROOF. Certainly, these conditions are implied by those in the previous proposition, and so it remains to show that they imply that each element h1 of H1 commutes with each element h2 of H2. Two elements h1, h2 of a group commute if and only if their commutator

[h1, h2] ≝ (h1h2)(h2h1)⁻¹

is e. But

(h1h2)(h2h1)⁻¹ = h1h2h1⁻¹h2⁻¹ = (h1h2h1⁻¹)·h2⁻¹ = h1·(h2h1⁻¹h2⁻¹),

which is in H2 because H2 is normal, and is in H1 because H1 is normal. Therefore (b) implies [h1, h2] = e.

PROPOSITION 1.52 A group G is a direct product of subgroups H1, H2, …, Hk if and only if
(a) G = H1H2⋯Hk,
(b) for each j, Hj ∩ (H1⋯Hj−1Hj+1⋯Hk) = {e}, and
(c) each of H1, H2, …, Hk is normal in G.

PROOF. The necessity of the conditions being obvious, we shall prove only the sufficiency. For k = 2, we have just done this, and so we argue by induction on k. An induction argument using (1.37) shows that H1⋯Hk−1 is a normal subgroup of G. The conditions (a,b,c) hold for the subgroups H1, …, Hk−1 of H1⋯Hk−1, and so the induction hypothesis shows that

(h1, h2, …, hk−1) ↦ h1h2⋯hk−1 : H1 × H2 × ⋯ × Hk−1 → H1H2⋯Hk−1

is an isomorphism. The pair H1⋯Hk−1, Hk satisfies the hypotheses of (1.51), and so

(h, hk) ↦ hhk : (H1⋯Hk−1) × Hk → G

is also an isomorphism. The composite of these isomorphisms

H1 × ⋯ × Hk−1 × Hk → (H1⋯Hk−1) × Hk → G

sends (h1, h2, …, hk) to h1h2⋯hk.

Commutative groups

The classification of finitely generated commutative groups is most naturally studied as part of the theory of modules over a principal ideal domain, but, for the sake of completeness, I include an elementary exposition here.

Let M be a commutative group, written additively. The subgroup ⟨x1, …, xk⟩ of M generated by the elements x1, …, xk consists of the sums Σ mixi, mi ∈ Z.
A subset {x1, …, xk} of M is a basis for M if it generates M and

m1x1 + ⋯ + mkxk = 0, mi ∈ Z ⟹ mixi = 0 for every i;

then M = ⟨x1⟩ ⊕ ⋯ ⊕ ⟨xk⟩.

LEMMA 1.53 Let x1, …, xk generate M. For any c1, …, ck ∈ N with gcd(c1, …, ck) = 1, there exist generators y1, …, yk for M such that y1 = c1x1 + ⋯ + ckxk.

PROOF. We argue by induction on s = c1 + ⋯ + ck. The lemma certainly holds if s = 1, and so we assume s > 1. Then at least two ci are nonzero, say c1 ≥ c2 > 0. Now
- {x1, x2 + x1, x3, …, xk} generates M,
- gcd(c1 − c2, c2, c3, …, ck) = 1, and
- (c1 − c2) + c2 + ⋯ + ck < s,
and so, by induction, there exist generators y1, …, yk for M such that

y1 = (c1 − c2)x1 + c2(x1 + x2) + c3x3 + ⋯ + ckxk = c1x1 + ⋯ + ckxk.

THEOREM 1.54 Every finitely generated commutative group M has a basis; hence it is a finite direct sum of cyclic groups.

PROOF.⁸ We argue by induction on the number of generators of M. If M can be generated by one element, the statement is trivial, and so we may assume that it requires at least k > 1 generators. Among the generating sets {x1, …, xk} for M with k elements there is one for which the order of x1 is the smallest possible. We shall show that M is then the direct sum of ⟨x1⟩ and ⟨x2, …, xk⟩. This will complete the proof, because the induction hypothesis provides us with a basis for the second group, which together with x1 forms a basis for M.
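The induction in the proof of Lemma 1.53 is effectively an algorithm: repeatedly subtract one coefficient from another while adjusting the generators. A hedged Python sketch (the function name and the choice M = Z² are mine, not the text's):

```python
from math import gcd
from functools import reduce

def lemma_1_53(cs, xs):
    # cs: nonnegative coefficients with gcd 1; xs: generators of Z^2 as lists.
    # Returns (new generators, y1) with y1 = sum(c_i * x_i), following the induction.
    cs = list(cs)
    xs = [list(x) for x in xs]
    assert reduce(gcd, cs) == 1
    while sum(cs) > 1:
        nz = sorted((c, k) for k, c in enumerate(cs) if c > 0)
        (cj, j), (ci, i) = nz[0], nz[-1]                 # c_i >= c_j > 0, i != j
        cs[i] -= cj                                      # (c_i, c_j) -> (c_i - c_j, c_j)
        xs[j] = [s + t for s, t in zip(xs[j], xs[i])]    # x_j -> x_j + x_i
    return xs, xs[cs.index(1)]                           # y1 is the surviving generator

ys, y1 = lemma_1_53([2, 3], [[1, 0], [0, 1]])
assert y1 == [2, 3]                 # y1 = 2*x1 + 3*x2, as the lemma promises
det = ys[0][0] * ys[1][1] - ys[0][1] * ys[1][0]
assert det in (1, -1)               # elementary operations: ys still generate Z^2
```

Each step preserves Σ cixi, exactly as in the displayed computation of the proof, and strictly decreases s, so the loop terminates.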
If M is not the direct sum of ⟨x1⟩ and ⟨x2, …, xk⟩, then there exists a relation

m1x1 + m2x2 + ⋯ + mkxk = 0   (10)

with m1x1 ≠ 0. After possibly changing the sign of some of the xi, we may suppose that m1, …, mk ∈ N and m1 < order(x1). Let d = gcd(m1, …, mk) > 0, and let ci = mi/d. According to the lemma, there exists a generating set y1, …, yk such that y1 = c1x1 + ⋯ + ckxk. But

dy1 = m1x1 + m2x2 + ⋯ + mkxk = 0

and d ≤ m1 < order(x1), and so this contradicts the choice of {x1, …, xk}.

⁸John Stillwell tells me that, for finite commutative groups, this is similar to the first proof of the theorem, given by Kronecker in 1870.

COROLLARY 1.55 A finite commutative group is cyclic if, for each n > 0, it contains at most n elements of order dividing n.

PROOF. After Theorem 1.54, we may suppose that G = Cn1 × ⋯ × Cnr with ni ∈ N. If n divides ni and nj with i ≠ j, then G has more than n elements of order dividing n. Therefore, the hypothesis implies that the ni are relatively prime. Let ai generate the i-th factor. Then (a1, …, ar) has order n1⋯nr, and so generates G.

EXAMPLE 1.56 Let F be a field. The elements of order dividing n in F^× are the roots of the polynomial Xⁿ − 1. Because unique factorization holds in F[X], there are at most n of these, and so the corollary shows that every finite subgroup of F^× is cyclic.

THEOREM 1.57 A nonzero finitely generated commutative group M can be expressed

M ≈ Cn1 × ⋯ × Cns × C∞^r   (11)

for certain integers n1, …, ns ≥ 2 and r ≥ 0. Moreover,
(a) r is uniquely determined by M;
(b) the ni can be chosen so that n1 ≥ 2 and n1|n2, …, ns−1|ns, and then they are uniquely determined by M;
(c) the ni can be chosen to be powers of prime numbers, and then they are uniquely determined by M.

The number r is called the rank of M. By r being uniquely determined by M, we mean that in any two decompositions of M of the form (11), the number of copies of C∞ will be the same (and similarly for the ni in (b) and (c)). The integers n1, …, ns in (b) are called the invariant factors of M. Statement (c) says that M can be expressed

M ≈ Cp1^e1 × ⋯ × Cpt^et × C∞^r, ei ≥ 1,   (12)

for certain prime powers pi^ei (repetitions of primes allowed), and that the integers p1^e1, …, pt^et are uniquely determined by M; they are called the elementary divisors of M.

PROOF. The first assertion is a restatement of Theorem 1.54.

(a) For a prime p not dividing any of the ni,

M/pM ≈ (C∞/pC∞)^r ≈ (Z/pZ)^r,

and so r is the dimension of M/pM as an Fp-vector space.

(b,c) If gcd(m, n) = 1, then Cm × Cn contains an element of order mn, and so

Cm × Cn ≈ Cmn.   (13)

Use (13) to decompose the Cni into products of cyclic groups of prime power order. Once this has been achieved, (13) can be used to combine factors to achieve a decomposition as in (b); for example, Cns = ∏ Cp^ep, where the product is over the distinct primes among the pi and ep is the highest exponent for the prime p.

In proving the uniqueness statements in (b) and (c), we can replace M with its torsion subgroup (and so assume r = 0). A prime p will occur as one of the primes pi in (12) if and only if M has an element of order p, in which case p will occur exactly a times, where p^a is the number of elements of order dividing p. Similarly, p² will divide some pi^ei in (12) if and only if M has an element of order p², in which case it will divide exactly b of the pi^ei, where p^(a−b)·p^(2b) is the number of elements in M of order dividing p². Continuing in this fashion, we find that the elementary divisors of M can be read off from knowing the numbers of elements of M of each prime power order.

The uniqueness of the invariant factors can be derived from that of the elementary divisors, or it can be proved directly: ns is the smallest integer > 0 such that nsM = 0; ns−1 is the smallest integer > 0 such that ns−1M is cyclic; ns−2 is the smallest integer such that ns−2M can be expressed as a product of two cyclic groups, and so on.

SUMMARY 1.58 Each finite commutative group is isomorphic to exactly one of the groups

Cn1 × ⋯ × Cnr, n1|n2, …, nr−1|nr.

The order of this group is n1⋯nr. For example, each commutative group of order 90 is isomorphic to exactly one of C90 or C3 × C30 — to see this, note that the largest invariant factor must be a factor of 90 divisible by all the prime factors of 90.

THE LINEAR CHARACTERS OF A COMMUTATIVE GROUP

Let μ(C) = {z ∈ C | |z| = 1}. This is an infinite group. For any integer n, the set μn(C) of elements of order dividing n is cyclic of order n; in fact,

μn(C) = {e^(2πim/n) | 0 ≤ m ≤ n − 1} = {1, ζ, …, ζ^(n−1)},

where ζ = e^(2πi/n) is a primitive nth root of 1.

A linear character (or just character) of a group G is a homomorphism G → μ(C). The homomorphism a ↦ 1 is called the trivial (or principal) character.

EXAMPLE 1.59 The Legendre symbol modulo p of an integer a not divisible by p is

(a/p) ≝ 1 if a is a square in Z/pZ, −1 otherwise.

Clearly, this depends only on a modulo p, and if neither a nor b is divisible by p, then (ab/p) = (a/p)(b/p) (because (Z/pZ)^× is cyclic). Therefore [a] ↦ (a/p): (Z/pZ)^× → {±1} = μ2(C) is a character of (Z/pZ)^×, sometimes called the quadratic character.

The set of characters of a group G becomes a group G^∨ under the addition

(χ + χ′)(g) = χ(g)χ′(g),

called the dual group of G. For example, the dual group Z^∨ of Z is isomorphic to μ(C) by the map χ ↦ χ(1).

THEOREM 1.60 Let G be a finite commutative group.
(a) The dual of G^∨ is isomorphic to G.
(b) The map G → G^∨∨ sending an element a of G to the character χ ↦ χ(a) of G^∨ is an isomorphism.

In other words, G ≈ G^∨ and G ≅ G^∨∨.

PROOF. The statements are obvious for cyclic groups, and (G × H)^∨ ≅ G^∨ × H^∨.

ASIDE 1.61 The statement that the natural map G → G^∨∨ is an isomorphism is a special case of the Pontryagin theorem. For infinite groups, it is necessary to consider groups together with a topology. For example, as we observed above, Z^∨ ≅ μ(C). Each m ∈ Z does define a character ζ ↦ ζ^m: μ(C) → μ(C), but there are many homomorphisms μ(C) → μ(C) not of this form, and so the dual of μ(C) is larger than Z. However, these are the only continuous homomorphisms. In general, let G be a commutative group endowed with a locally compact topology⁹ for which the group operations are continuous; then the group G^∨ of continuous characters G → μ(C) has a natural topology for which it is locally compact, and the Pontryagin duality theorem says that the natural map G → G^∨∨ is an isomorphism.

THEOREM 1.62 (ORTHOGONALITY RELATIONS) Let G be a finite commutative group. For any characters χ and ψ of G,

Σ_{a∈G} χ(a)ψ(a⁻¹) = |G| if χ = ψ, and 0 otherwise.

In particular,

Σ_{a∈G} χ(a) = |G| if χ is trivial, and 0 otherwise.

PROOF. If χ = ψ, then χ(a)ψ(a⁻¹) = 1, and so the sum is |G|. Otherwise there exists a b ∈ G such that χ(b) ≠ ψ(b). As a runs over G, so also does ab, and so

Σ_{a∈G} χ(a)ψ(a⁻¹) = Σ_{a∈G} χ(ab)ψ((ab)⁻¹) = χ(b)ψ(b)⁻¹ Σ_{a∈G} χ(a)ψ(a⁻¹).

Because χ(b)ψ(b)⁻¹ ≠ 1, this implies that Σ_{a∈G} χ(a)ψ(a⁻¹) = 0.

COROLLARY 1.63 For any a ∈ G,

Σ_{χ∈G^∨} χ(a) = |G| if a = e, and 0 otherwise.

PROOF. Apply the theorem to G^∨, noting that (G^∨)^∨ ≅ G.

The order of ab

Let a and b be elements of a group G. If a has order m and b has order n, what can we say about the order of ab? The next theorem shows that we can say nothing at all.

THEOREM 1.64 For any integers m, n, r > 1, there exists a finite group G with elements a and b such that a has order m, b has order n, and ab has order r.

PROOF. We shall show that, for a suitable prime power q, there exist elements a and b of SL2(Fq) such that a, b, and ab have orders 2m, 2n, and 2r respectively. As −I is the unique element of order 2 in SL2(Fq), the images of a, b, ab in SL2(Fq)/{±I} will then have orders m, n, and r as required.

⁹Following Bourbaki, I require locally compact spaces to be Hausdorff.

Let p be a prime number not dividing 2mnr. Then p is a unit in the finite ring Z/2mnrZ, and so some power of it, q say, is 1 in the ring. This means that 2mnr divides q − 1. As the group Fq^× has order q − 1 and is cyclic (see 1.56), there exist elements u, v, and w of Fq^× having orders 2m, 2n, and 2r respectively. Let

a = [ u 1 ; 0 u⁻¹ ] and b = [ v 0 ; t v⁻¹ ]   (elements of SL2(Fq)),

where t has been chosen so that

uv + t + u⁻¹v⁻¹ = w + w⁻¹.
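The construction can be tested numerically. The sketch below takes m = n = r = 2 and, as a shortcut, the prime q = 17 with 2mnr = 16 dividing q − 1 (the proof instead takes q a power of a prime p; any q with 2mnr | q − 1 works). The choices of u, v, w are mine:

```python
q = 17                          # prime with 2*m*n*r = 16 dividing q - 1 (m = n = r = 2)

def mat_mul(A, B):
    # 2x2 matrix product over F_17
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % q for j in range(2)]
            for i in range(2)]

def order(A):
    I = [[1, 0], [0, 1]]
    M, k = A, 1
    while M != I:
        M = mat_mul(M, A)
        k += 1
    return k

u = v = w = 4                   # 4 has order 2m = 4 in F_17^x, since 4^2 = 16 = -1 mod 17
u_inv = pow(u, q - 2, q)        # inverses by Fermat's little theorem
t = (w + pow(w, q - 2, q) - u * v - u_inv * pow(v, q - 2, q)) % q

a = [[u, 1], [0, u_inv]]
b = [[v, 0], [t, pow(v, q - 2, q)]]
ab = mat_mul(a, b)
assert (ab[0][0] + ab[1][1]) % q == (w + pow(w, q - 2, q)) % q   # trace = w + w^-1
assert order(a) == 4 and order(b) == 4 and order(ab) == 4
# so in SL2(F17)/{±I} the images of a, b, ab all have order m = n = r = 2
```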
The characteristic polynomial of a is (X − u)(X − u⁻¹), and so a is similar to diag(u, u⁻¹). Therefore a has order 2m. Similarly b has order 2n. The matrix

ab = [ uv + t  v⁻¹ ; u⁻¹t  u⁻¹v⁻¹ ]

has characteristic polynomial

X² − (uv + t + u⁻¹v⁻¹)X + 1 = (X − w)(X − w⁻¹),

and so ab is similar to diag(w, w⁻¹). Therefore ab has order 2r.¹⁰

Exercises

1-1 Show that the quaternion group has only one element of order 2, and that it commutes with all elements of Q. Deduce that Q is not isomorphic to D4, and that every subgroup of Q is normal.¹¹

1-2 Consider the elements a and b in GL2(Z). Show that a⁴ = 1 and b³ = 1, but that ab has infinite order, and hence that the group ⟨a, b⟩ is infinite.

1-3 Show that every finite group of even order contains an element of order 2.

1-4 Let n = n1 + ⋯ + nr be a partition of the positive integer n. Use Lagrange's theorem to show that n! is divisible by ∏_{i=1}^r ni!.

1-5 Let N be a normal subgroup of G of index n. Show that if g ∈ G, then g^n ∈ N. Give an example to show that this may be false when the subgroup is not normal.

¹⁰I don't know who found this beautiful proof. Apparently the original proof of G.A. Miller is very complicated; see mo24913.

¹¹This property of Q is unusual. In fact, the only noncommutative groups in which every subgroup is normal are the groups of the form Q × A × B with Q the quaternion group, A a commutative group whose elements have finite odd order, and B a commutative group whose elements have order 2 (or 1). See Hall 1959, 12.5.4.

1-6 A group G is said to have finite exponent if there exists an m > 0 such that a^m = e for every a in G; the smallest such m is then called the exponent of G.
(a) Show that every group of exponent 2 is commutative.
(b) Show that, for an odd prime p, the group of matrices [ 1 a b ; 0 1 c ; 0 0 1 ], a, b, c ∈ Fp, has exponent p, but is not commutative.

1-7 Two subgroups H and H′ of a group G are said to be commensurable if H ∩ H′ is of finite index in both H and H′. Show that commensurability is an equivalence relation on the subgroups of G.

1-8 Show that a nonempty finite set with an associative binary operation satisfying the cancellation laws is a group.

1-9 Let G be a set with an associative binary operation. Show that if left multiplication x ↦ ax by every element a is bijective and right multiplication by some element is injective, then G is a group. Give an example to show that the second condition is needed.

1-10 Show that a commutative monoid M is a submonoid of a commutative group if and only if cancellation holds in M: mn = m′n ⟹ m = m′. Hint: The group is constructed from M as Q is constructed from Z.

CHAPTER 2

Free Groups and Presentations; Coxeter Groups

It is frequently useful to describe a group by giving a set of generators for the group and a set of relations for the generators from which every other relation in the group can be deduced. For example, Dn can be described as the group with generators r, s and relations

r^n = e, s² = e, srsr = e.

In this chapter, we make precise what this means. First we need to define the free group on a set X of generators — this is a group generated by X and with no relations except for those implied by the group axioms. Because inverses cause problems, we first do this for monoids. Recall that a monoid is a set S with an associative binary operation having an identity element e. A homomorphism α: S → S′ of monoids is a map such that α(ab) = α(a)α(b) for all a, b ∈ S and α(e) = e — unlike the case of groups, the second condition is not automatic. A homomorphism of monoids preserves all finite products.

Free monoids

Let X = {a, b, c,