. Show that an m × n matrix gives rise to a well-defined map from R^n to R^m.

28. Find the error in the following argument by providing a counterexample. "The reflexive property is redundant in the axioms for an equivalence relation. If x ∼ y, then y ∼ x by the symmetric property. Using the transitive property, we can deduce that x ∼ x."

29. Projective Real Line. Define a relation on R^2 \ {(0, 0)} by letting (x1, y1) ∼ (x2, y2) if there exists a nonzero real number λ such that (x1, y1) = (λx2, λy2). Prove that ∼ defines an equivalence relation on R^2 \ {(0, 0)}. What are the corresponding equivalence classes? This equivalence relation defines the projective line, denoted by P(R), which is very important in geometry.

References and Suggested Readings

The following list contains references suitable for further reading. With the exception of [8] and [9] and perhaps [1] and [3], all of these books are more or less at the same level as this text. Interesting applications of algebra can be found in [2], [5], [10], and [11].

[1] Artin, M. Abstract Algebra. 2nd ed. Pearson, Upper Saddle River, NJ, 2011.
[2] Childs, L. A Concrete Introduction to Higher Algebra. 2nd ed. Springer-Verlag, New York, 1995.
[3] Dummit, D. and Foote, R. Abstract Algebra. 3rd ed. Wiley, New York, 2003.
[4] Fraleigh, J. B. A First Course in Abstract Algebra. 7th ed. Pearson, Upper Saddle River, NJ, 2003.
[5] Gallian, J. A. Contemporary Abstract Algebra. 7th ed. Brooks/Cole, Belmont, CA, 2009.
[6] Halmos, P. Naive Set Theory. Springer, New York, 1991. One of the best references for set theory.
[7] Herstein, I. N. Abstract Algebra. 3rd ed. Wiley, New York, 1996.
[8
] Hungerford, T. W. Algebra. Springer, New York, 1974. One of the standard graduate algebra texts.
[9] Lang, S. Algebra. 3rd ed. Springer, New York, 2002. Another standard graduate text.
[10] Lidl, R. and Pilz, G. Applied Abstract Algebra. 2nd ed. Springer, New York, 1998.
[11] Mackiw, G. Applications of Abstract Algebra. Wiley, New York, 1985.
[12] Nicholson, W. K. Introduction to Abstract Algebra. 3rd ed. Wiley, New York, 2006.
[13] Solow, D. How to Read and Do Proofs. 5th ed. Wiley, New York, 2009.
[14] van der Waerden, B. L. A History of Algebra. Springer-Verlag, New York, 1985. An account of the historical development of algebra.

Sage

Sage is free, open-source mathematical software with very impressive capabilities for the study of abstract algebra. See the Preface for more information about obtaining Sage and the supplementary material describing how to use Sage in the study of abstract algebra. At the end of each chapter, we will have a brief explanation of Sage's capabilities relevant to that chapter.

2 The Integers

The integers are the building blocks of mathematics. In this chapter we will investigate the fundamental properties of the integers, including mathematical induction, the division algorithm, and the Fundamental Theorem of Arithmetic.

2.1 Mathematical Induction

Suppose we wish to show that

1 + 2 + · · · + n = n(n + 1)/2

for any natural number n. This formula is easily verified for small numbers such as n = 1, 2, 3, or 4, but it is impossible to verify for all natural numbers on a case-by-case basis. To prove the formula true in general, a more generic method is required. Suppose we have verified the equation for the first n cases. We will attempt to show that we can generate the formula for the (n + 1)th case from this knowledge. The formula is true for n = 1 since 1 = 1(1 + 1)/2. If we have verified the first n cases, then

1 + 2 + · · · + n + (n + 1) = n(n + 1)/2 + (n + 1)
                           = (n^2 + 3n + 2)/2
                           = (n + 1)[(n + 1) + 1]/2.

This is exactly the formula for the (n + 1)th case. This method of proof is known as mathematical induction. Instead of attempting to verify a statement about some subset S of the positive integers N on a case-by-case basis, an impossible task if S is an infinite set, we give a specific proof for the smallest integer being considered, followed by a generic argument showing that if the statement holds for a given case, then it must also hold for the next case in the sequence. We summarize mathematical induction in the following axiom.

First Principle of Mathematical Induction. Let S(n) be a statement about integers for n ∈ N and suppose S(n0) is true for some integer n0. If for all integers k with k ≥ n0, S(k) implies that S(k + 1) is true, then S(n) is true for all integers n greater than or equal to n0.

Example 1. For all integers n ≥ 3, 2^n > n + 4. Since 8 = 2^3 > 3 + 4 = 7, the statement is true for n0 = 3. Assume that 2^k > k + 4 for k ≥ 3. Then 2^(k+1) = 2 · 2^k > 2(k + 4). But 2(k + 4) = 2k + 8 > k + 5 = (k + 1) + 4 since k is positive. Hence, by induction, the statement holds for all integers n ≥ 3.

Example 2. Every integer 10^(n+1) + 3 · 10^n + 5 is divisible by 9 for n ∈ N. For n = 1, 10^2 + 3 · 10 + 5 = 135 = 9 · 15 is divisible by 9. Suppose that 10^(k+1) + 3 · 10^k + 5 is divisible by 9 for k ≥ 1. Then

10^((k+1)+1) + 3 · 10^(k+1) + 5 = 10^(k+2) + 3 · 10^(k+1) + 50 − 45 = 10(10^(k+1) + 3 · 10^k + 5) − 45

is divisible by 9, since both terms on the right-hand side are divisible by 9.

Example 3. We will prove the binomial theorem using mathematical induction; that is,

(a + b)^n = Σ_{k=0}^{n} C(n, k) a^k b^(n−k),
where a and b are real numbers, n ∈ N, and

C(n, k) = n!/(k!(n − k)!)

is the binomial coefficient. We first show that

C(n, k − 1) + C(n, k) = C(n + 1, k).

This result follows from

n!/((k − 1)!(n − k + 1)!) + n!/(k!(n − k)!) = (n + 1)!/(k!(n + 1 − k)!).

If n = 1, the binomial theorem is easy to verify. Now assume that the result is true for n greater than or equal to 1. Then

(a + b)^(n+1) = (a + b)(a + b)^n
             = (a + b) Σ_{k=0}^{n} C(n, k) a^k b^(n−k)
             = Σ_{k=0}^{n} C(n, k) a^(k+1) b^(n−k) + Σ_{k=0}^{n} C(n, k) a^k b^(n+1−k)
             = a^(n+1) + Σ_{k=1}^{n} C(n, k − 1) a^k b^(n+1−k) + Σ_{k=1}^{n} C(n, k) a^k b^(n+1−k) + b^(n+1)
             = a^(n+1) + Σ_{k=1}^{n} [C(n, k − 1) + C(n, k)] a^k b^(n+1−k) + b^(n+1)
             = Σ_{k=0}^{n+1} C(n + 1, k) a^k b^(n+1−k).

We have an equivalent statement of the Principle of Mathematical Induction that is often very useful.

Second Principle of Mathematical Induction. Let S(n) be a statement about integers for n ∈ N and suppose S(n0) is true for some integer n0. If S(n0), S(n0 + 1), ..., S(k) imply that S(k + 1) is true for k ≥ n0, then the statement S(n) is true for all integers n greater than or equal to n0.

A nonempty subset S of Z is well-ordered if S contains a least element. Notice that the set Z is not well-ordered since it does not contain a smallest element. However, the natural numbers are well-ordered.

Principle of Well-Ordering. Every nonempty subset of the natural numbers is well-ordered.

The Principle of Well-Ordering is equivalent to the Principle of Mathematical Induction.

Lemma 2.1 The Principle of Mathematical Induction implies that 1 is the least positive natural number.

Proof. Let S = {
n ∈ N : n ≥ 1}. Then 1 ∈ S. Now assume that n ∈ S; that is, n ≥ 1. Since n + 1 ≥ 1, n + 1 ∈ S; hence, by induction, every natural number is greater than or equal to 1.

Theorem 2.2 The Principle of Mathematical Induction implies the Principle of Well-Ordering. That is, every nonempty subset of N contains a least element.

Proof. We must show that if S is a nonempty subset of the natural numbers, then S contains a smallest element. If S contains 1, then the theorem is true by Lemma 2.1. Assume that if S contains an integer k such that 1 ≤ k ≤ n, then S contains a smallest element. We will show that if a set S contains an integer less than or equal to n + 1, then S has a smallest element. If S does not contain an integer less than n + 1, then n + 1 is the smallest integer in S. Otherwise, since S is nonempty, S must contain an integer less than or equal to n. In this case, by induction, S contains a smallest integer.

Induction can also be very useful in formulating definitions. For instance, there are two ways to define n!, the factorial of a positive integer n.

• The explicit definition: n! = 1 · 2 · 3 · · · (n − 1) · n.

• The inductive or recursive definition: 1! = 1 and n! = n(n − 1)! for n > 1.

Every good mathematician or computer scientist knows that looking at problems recursively, as opposed to explicitly, often results in a better understanding of complex issues.

2.2 The Division Algorithm

An application of the Principle of Well-Ordering that we will use often is the division algorithm.

Theorem 2.3 (Division Algorithm) Let a and b be integers, with b > 0. Then there exist unique integers q and r such that

a = bq + r,

where 0 ≤ r < b.

Proof. This is a perfect example of the existence-and-uniqueness type of proof. We must first prove that the numbers q and r actually exist. Then we must show that if q′ and r′ are two other such numbers, then q = q′ and r = r′.
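The quotient and remainder promised by Theorem 2.3 are easy to compute in practice. Here is a minimal Python sketch (the function name division_algorithm is ours; Python's built-in divmod returns the same pair when b > 0):

```python
def division_algorithm(a, b):
    """Return the unique (q, r) with a = b*q + r and 0 <= r < b.

    A sketch of the Division Algorithm (Theorem 2.3) for b > 0.
    Python's // floors toward negative infinity, so the remainder
    a - b*(a // b) always lies in [0, b) when b is positive.
    """
    if b <= 0:
        raise ValueError("b must be a positive integer")
    q = a // b
    r = a - b * q
    return q, r

# Even for negative a the remainder is nonnegative:
q, r = division_algorithm(-17, 5)
assert -17 == 5 * q + r and 0 <= r < 5  # q = -4, r = 3
```

Note that for negative a this pair differs from truncating division in languages such as C, where the remainder of −17 by 5 can come out negative; the theorem's normalization 0 ≤ r < b is the floor-division convention.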
Existence of q and r. Let

S = {a − bk : k ∈ Z and a − bk ≥ 0}.

If 0 ∈ S, then b divides a, and we can let q = a/b and r = 0. If 0 ∉ S, we can use the Well-Ordering Principle. We must first show that S is nonempty. If a > 0, then a − b · 0 ∈ S. If a < 0, then a − b(2a) = a(1 − 2b) ∈ S. In either case S ≠ ∅. By the Well-Ordering Principle, S must have a smallest member, say r = a − bq. Therefore, a = bq + r with r ≥ 0. We now show that r < b. Suppose that r > b. Then

a − b(q + 1) = a − bq − b = r − b > 0.

In this case we would have a − b(q + 1) in the set S. But then a − b(q + 1) < a − bq, which would contradict the fact that r = a − bq is the smallest member of S. So r ≤ b. Since 0 ∉ S, r ≠ b and so r < b.

Uniqueness of q and r. Suppose there exist integers r, r′, q, and q′ such that

a = bq + r, 0 ≤ r < b   and   a = bq′ + r′, 0 ≤ r′ < b.

Then bq + r = bq′ + r′. Assume that r′ ≥ r. From the last equation we have b(q − q′) = r′ − r; therefore, b must divide r′ − r and 0 ≤ r′ − r ≤ r′ < b. This is possible only if r′ − r = 0. Hence, r = r′ and q = q′.

Let a and b be integers. If b = ak for some integer k, we write a | b. An integer d is called a common divisor of a and b if d | a and d | b. The greatest common divisor of integers a and b is a positive integer d such that d is a common divisor of a and b and if d′ is any other common divisor of a and b, then d′ | d. We write d = gcd(
a, b); for example, gcd(24, 36) = 12 and gcd(120, 102) = 6. We say that two integers a and b are relatively prime if gcd(a, b) = 1.

Theorem 2.4 Let a and b be nonzero integers. Then there exist integers r and s such that

gcd(a, b) = ar + bs.

Furthermore, the greatest common divisor of a and b is unique.

Proof. Let

S = {am + bn : m, n ∈ Z and am + bn > 0}.

Clearly, the set S is nonempty; hence, by the Well-Ordering Principle S must have a smallest member, say d = ar + bs. We claim that d = gcd(a, b). Write a = dq + r′ where 0 ≤ r′ < d. If r′ > 0, then

r′ = a − dq
   = a − (ar + bs)q
   = a − arq − bsq
   = a(1 − rq) + b(−sq),

which is in S. But this would contradict the fact that d is the smallest member of S. Hence, r′ = 0 and d divides a. A similar argument shows that d divides b. Therefore, d is a common divisor of a and b.

Suppose that d′ is another common divisor of a and b; we want to show that d′ | d. If we let a = d′h and b = d′k, then

d = ar + bs = d′hr + d′ks = d′(hr + ks).

So d′ must divide d. Hence, d must be the unique greatest common divisor of a and b.

Corollary 2.5 Let a and b be two integers that are relatively prime. Then there exist integers r and s such that ar + bs = 1.

The Euclidean Algorithm

Among other things, Theorem 2.4 allows us to compute the greatest common divisor of two integers.

Example 4. Let us compute the greatest common divisor of 945 and 2415. First observe that

2415 = 945 · 2 + 525
 945 = 525 · 1 + 420
 525 = 420 · 1 + 105
 420 = 105 · 4 + 0.

Reversing our steps, 105 divides 420, 105 divides 525, 105 divides 945, and
105 divides 2415. Hence, 105 divides both 945 and 2415. If d were another common divisor of 945 and 2415, then d would also have to divide 105. Therefore, gcd(945, 2415) = 105. If we work backward through the above sequence of equations, we can also obtain numbers r and s such that 945r + 2415s = 105. Observe that

105 = 525 + (−1) · 420
    = 525 + (−1) · [945 + (−1) · 525]
    = 2 · 525 + (−1) · 945
    = 2 · [2415 + (−2) · 945] + (−1) · 945
    = 2 · 2415 + (−5) · 945.

So r = −5 and s = 2. Notice that r and s are not unique, since r = 41 and s = −16 would also work.

To compute gcd(a, b) = d, we are using repeated divisions to obtain a decreasing sequence of positive integers r_1 > r_2 > · · · > r_n = d; that is,

b = a q_1 + r_1
a = r_1 q_2 + r_2
r_1 = r_2 q_3 + r_3
...
r_{n−2} = r_{n−1} q_n + r_n
r_{n−1} = r_n q_{n+1}.

To find r and s such that ar + bs = d, we begin with this last equation and substitute results obtained from the previous equations:

d = r_n
  = r_{n−2} − r_{n−1} q_n
  = r_{n−2} − q_n (r_{n−3} − q_{n−1} r_{n−2})
  = −q_n r_{n−3} + (1 + q_n q_{n−1}) r_{n−2}
  ...
  = ra + sb.

The algorithm that we have just used to find the greatest common divisor d of two integers a and b and to write d as the linear combination of a and b is known as the Euclidean algorithm.

Prime Numbers

Let p be an integer such that p > 1. We say that p is a prime number, or simply p is prime, if the only positive numbers that divide p are 1 and p itself. An integer n > 1 that is not prime is said to be composite.

Lemma 2.6 (Euclid) Let a and b be
integers and p be a prime number. If p | ab, then either p | a or p | b.

Proof. Suppose that p does not divide a. We must show that p | b. Since gcd(a, p) = 1, there exist integers r and s such that ar + ps = 1. So

b = b(ar + ps) = (ab)r + p(bs).

Since p divides both ab and itself, p must divide b = (ab)r + p(bs).

Theorem 2.7 (Euclid) There exist an infinite number of primes.

Proof. We will prove this theorem by contradiction. Suppose that there are only a finite number of primes, say p1, p2, ..., pn. Let P = p1 p2 · · · pn + 1. Then P must be divisible by some pi for 1 ≤ i ≤ n. In this case, pi must divide P − p1 p2 · · · pn = 1, which is a contradiction. Hence, either P is prime or there exists an additional prime number p ≠ pi that divides P.

Theorem 2.8 (Fundamental Theorem of Arithmetic) Let n be an integer such that n > 1. Then

n = p1 p2 · · · pk,

where p1, ..., pk are primes (not necessarily distinct). Furthermore, this factorization is unique; that is, if n = q1 q2 · · · ql, then k = l and the qi's are just the pi's rearranged.

Proof. Uniqueness. To show uniqueness we will use induction on n. The theorem is certainly true for n = 2 since in this case n is prime. Now assume that the result holds for all integers m such that 1 ≤ m < n, and

n = p1 p2 · · · pk = q1 q2 · · · ql,

where p1 ≤ p2 ≤ · · · ≤ pk and q1 ≤ q2 ≤ · · · ≤ ql. By Lemma 2.6, p1 | qi for some i = 1, ..., l and q1 | pj for some j = 1, ..., k. Since all of the pi's
and qi's are prime, p1 = qi and q1 = pj. Hence, p1 = q1 since p1 ≤ pj = q1 ≤ qi = p1. By the induction hypothesis, n/p1 = p2 · · · pk = q2 · · · ql has a unique factorization. Hence, k = l and qi = pi for i = 1, ..., k.

Existence. To show existence, suppose that there is some integer that cannot be written as the product of primes. Let S be the set of all such numbers. By the Principle of Well-Ordering, S has a smallest number, say a. If the only positive factors of a are a and 1, then a is prime, which is a contradiction. Hence, a = a1 a2 where 1 < a1 < a and 1 < a2 < a. Neither a1 ∈ S nor a2 ∈ S, since a is the smallest element in S. So

a1 = p1 · · · pr
a2 = q1 · · · qs.

Therefore,

a = a1 a2 = p1 · · · pr q1 · · · qs.

So a ∉ S, which is a contradiction.

Historical Note

Prime numbers were first studied by the ancient Greeks. Two important results from antiquity are Euclid's proof that an infinite number of primes exist and the Sieve of Eratosthenes, a method of computing all of the prime numbers less than a fixed positive integer n. One problem in number theory is to find a function f such that f(n) is prime for each integer n. Pierre Fermat (1601?–1665) conjectured that 2^(2^n) + 1 was prime for all n, but later it was shown by Leonhard Euler (1707–1783) that

2^(2^5) + 1 = 4,294,967,297

is a composite number. One of the many unproven conjectures about prime numbers is Goldbach's Conjecture. In a letter to Euler in 1742, Christian Goldbach stated the conjecture that every even integer with the exception of 2 seemed to be the sum of two primes: 4 = 2 + 2, 6 = 3 + 3, 8 = 3 + 5, ...
Although the conjecture has been verified for the numbers up through 100 million, it has yet to be proven in general. Since prime numbers play an important role in public key cryptography, there is currently a great deal of interest in determining whether or not a large number is prime.

Exercises

1. Prove that

1^2 + 2^2 + · · · + n^2 = n(n + 1)(2n + 1)/6

for n ∈ N.

2. Prove that

1^3 + 2^3 + · · · + n^3 = n^2(n + 1)^2/4

for n ∈ N.

3. Prove that n! > 2^n for n ≥ 4.

4. Prove that

x + 4x + 7x + · · · + (3n − 2)x = n(3n − 1)x/2

for n ∈ N.

5. Prove that 10^(n+1) + 10^n + 1 is divisible by 3 for n ∈ N.

6. Prove that 4 · 10^(2n) + 9 · 10^(2n−1) + 5 is divisible by 99 for n ∈ N.

7. Show that

(a1 a2 · · · an)^(1/n) ≤ (1/n) Σ_{k=1}^{n} ak.

8. Prove the Leibniz rule for f^(n)(x), where f^(n) is the nth derivative of f; that is, show that

(fg)^(n)(x) = Σ_{k=0}^{n} C(n, k) f^(k)(x) g^(n−k)(x).

9. Use induction to prove that 1 + 2 + 2^2 + · · · + 2^n = 2^(n+1) − 1 for n ∈ N.

10. Prove that

1/2 + 1/6 + · · · + 1/(n(n + 1)) = n/(n + 1)

for n ∈ N.

11. If x is a nonnegative real number, then show that (1 + x)^n − 1 ≥ nx for n = 0, 1, 2, ....

12. Power Sets. Let X be a set. Define the power set of X, denoted P(X), to be the set of all subsets of X. For example, P({a, b}) = {∅, {a}, {b}, {a, b}}. For every positive integer n, show that a set with exactly n elements has a power set with exactly 2^n elements.

13. Prove that the two principles of mathematical induction stated in Section 2.1 are equivalent.

14. Show that the Principle of Well-Ordering for the natural numbers implies that 1 is the smallest natural number. Use this result to show that the Principle of Well-Ordering implies the Principle of Mathematical Induction; that is, show that if S ⊂ N such that 1 ∈ S and n + 1 ∈ S whenever n ∈ S, then S = N.

15. For each of the following pairs of numbers a and b, calculate gcd(a, b) and find integers r and s such that gcd(a, b) = ra + sb.

(a) 14 and 39
(b) 234 and 165
(c) 1739 and 9923
(d) 471 and 562
(e) 23,771 and 19,945
(f) −4357 and 3754

16. Let a and b be nonzero integers. If there exist integers r and s such that ar + bs = 1, show that a and b are relatively prime.

17. Fibonacci Numbers. The Fibonacci numbers are 1, 1, 2, 3, 5, 8, 13, 21, .... We can define them inductively by f_1 = 1, f_2 = 1, and f_{n+2} = f_{n+1} + f_n for n ∈ N.

(a) Prove that f_n < 2^n.
(b) Prove that f_{n+1} f_{n−1} = f_n^2 + (−1)^n for n ≥ 2.
(c) Prove that f_n = [(1 + √5)^n − (1 − √5)^n]/(2^n √5).
(d) Show that lim_{n→∞} f_n/f_{n+1} = (√5 − 1)/2.
(e) Prove that f_n and f_{n+1} are relatively prime.

18. Let a and b be integers such that gcd(a, b) = 1. Let r and s be integers such that ar + bs = 1. Prove that gcd(a, s) = gcd(r, b) = gcd(r, s) = 1.

19. Let x, y ∈ N be relatively prime. If xy is a perfect square, prove that x and y must both be perfect squares.
20. Using the division algorithm, show that every perfect square is of the form 4k or 4k + 1 for some nonnegative integer k.

21. Suppose that a, b, r, s are pairwise relatively prime and that

a^2 + b^2 = r^2
a^2 − b^2 = s^2.

Prove that a, r, and s are odd and b is even.

22. Let n ∈ N. Use the division algorithm to prove that every integer is congruent mod n to precisely one of the integers 0, 1, ..., n − 1. Conclude that if r is an integer, then there is exactly one s in Z such that 0 ≤ s < n and [r] = [s]. Hence, the integers are indeed partitioned by congruence mod n.

23. Define the least common multiple of two nonzero integers a and b, denoted by lcm(a, b), to be the nonnegative integer m such that both a and b divide m, and if a and b divide any other integer n, then m also divides n. Prove that any two integers a and b have a unique least common multiple.

24. If d = gcd(a, b) and m = lcm(a, b), prove that dm = |ab|.

25. Show that lcm(a, b) = ab if and only if gcd(a, b) = 1.

26. Prove that gcd(a, c) = gcd(b, c) = 1 if and only if gcd(ab, c) = 1 for integers a, b, and c.

27. Let a, b, c ∈ Z. Prove that if gcd(a, b) = 1 and a | bc, then a | c.

28. Let p ≥ 2. Prove that if 2^p − 1 is prime, then p must also be prime.

29. Prove that there are an infinite number of primes of the form 6n + 1.

30. Prove that there are an infinite number of primes of the form 4n − 1.

31. Using the fact that 2 is prime, show that there do not exist integers p and q such that p^2 = 2q^2. Demonstrate that therefore √2 cannot be a rational number.

Programming Exercises

1. The Sieve of Eratosthenes. One method of computing all of the prime numbers less than a certain fixed positive integer N is to list all of the numbers n such that 1 < n < N. Begin by eliminating all of the multiples of 2. Next eliminate all of the multiples of 3. Now eliminate all of the multiples of 5. Notice that 4 has already been crossed out. Continue in this manner, noticing that we do not have to go all the way to N; it suffices to stop at √N. Using this method, compute all of the prime numbers less than N = 250. We can also use this method to find all of the integers that are relatively prime to an integer N. Simply eliminate the prime factors of N and all of their multiples. Using this method, find all of the numbers that are relatively prime to N = 120. Using the Sieve of Eratosthenes, write a program that will compute all of the primes less than an integer N.

2. Let N0 = N ∪ {0}. Ackermann's function is the function A : N0 × N0 → N0 defined by the equations

A(0, y) = y + 1,
A(x + 1, 0) = A(x, 1),
A(x + 1, y + 1) = A(x, A(x + 1, y)).

Use this definition to compute A(3, 1). Write a program to evaluate Ackermann's function. Modify the program to count the number of statements executed in the program when Ackermann's function is evaluated. How many statements are executed in the evaluation of A(4, 1)? What about A(5, 1)?

3. Write a computer program that will implement the Euclidean algorithm. The program should accept two positive integers a and b as input and should output gcd(a, b) as well as integers r and s such that gcd(a, b) = ra + sb.

References and Suggested Readings

References [2], [3], and [4] are good sources for elementary number theory.

[1] Brookshear, J. G. Theory of Computation: Formal Languages, Automata, and Complexity. Benjamin/Cummings, Redwood City, CA, 1989. Shows the relationships of the theoretical aspects of computer science to set theory and the integers.
[2] Hardy, G. H. and Wright, E. M. An Introduction to the Theory of Numbers. 6th ed. Oxford University Press, New York, 2008.
[3] Niven, I. and Zuckerman, H. S. An Introduction to the Theory of Numbers. 5th ed. Wiley, New York, 1991.
[4] Vanden Eynden, C. Elementary Number Theory. 2nd ed. Waveland Press, Long Grove, IL, 2001.

Sage

Sage's original purpose was to support research in number theory, so it is perfect for the types of computations with the integers that we have in this chapter.

3 Groups

We begin our study of algebraic structures by investigating sets associated with single operations that satisfy certain reasonable axioms; that is, we want to define an operation on a set in a way that will generalize such familiar structures as the integers Z together with the single operation of addition, or invertible 2 × 2 matrices together with the single operation of matrix multiplication. The integers and the 2 × 2 matrices, together with their respective single operations, are examples of algebraic structures known as groups.

The theory of groups occupies a central position in mathematics. Modern group theory arose from an attempt to find the roots of a polynomial in terms of its coefficients. Groups now play a central role in such areas as coding theory, counting, and the study of symmetries; many areas of biology, chemistry, and physics have benefited from group theory.

3.1 Integer Equivalence Classes and Symmetries

Let us now investigate some mathematical structures that can be viewed as sets with single operations.

The Integers mod n

The integers mod n have become indispensable in the theory and applications of algebra. In mathematics they are used in cryptography, coding theory, and the detection of errors in identification codes.
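Computationally, arithmetic mod n is just remainder arithmetic. A small Python sketch (the helper names add_mod and mul_mod are ours, not standard notation):

```python
from math import gcd

def add_mod(a, b, n):
    """Addition in Z_n: the remainder of a + b on division by n."""
    return (a + b) % n

def mul_mod(a, b, n):
    """Multiplication in Z_n: the remainder of a * b on division by n."""
    return (a * b) % n

# Sample computations in Z_8:
assert add_mod(3, 5, 8) == 0   # 3 + 5 is congruent to 0 (mod 8)
assert mul_mod(3, 5, 8) == 7   # 3 * 5 = 15 is congruent to 7 (mod 8)

# The elements of Z_8 that have a multiplicative inverse are exactly
# the ones relatively prime to 8:
invertible = [a for a in range(1, 8)
              if any(mul_mod(a, b, 8) == 1 for b in range(1, 8))]
assert invertible == [a for a in range(1, 8) if gcd(a, 8) == 1]  # [1, 3, 5, 7]
```

The last comparison previews a fact about multiplicative inverses mod n that is proved below as part of Proposition 3.1.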
We have already seen that two integers a and b are equivalent mod n if n divides a − b. The integers mod n also partition Z into n different equivalence classes; we will denote the set of these equivalence classes by Zn. Consider the integers modulo 12 and the corresponding partition of the integers:

[0] = {..., −12, 0, 12, 24, ...},
[1] = {..., −11, 1, 13, 25, ...},
...
[11] = {..., −1, 11, 23, 35, ...}.

When no confusion can arise, we will use 0, 1, ..., 11 to indicate the equivalence classes [0], [1], ..., [11] respectively. We can do arithmetic on Zn. For two integers a and b, define addition modulo n to be (a + b) (mod n); that is, the remainder when a + b is divided by n. Similarly, multiplication modulo n is defined as (ab) (mod n), the remainder when ab is divided by n.

Example 1. The following examples illustrate integer arithmetic modulo n:

7 + 4 ≡ 1 (mod 5)    7 · 3 ≡ 1 (mod 5)
3 + 5 ≡ 0 (mod 8)    3 · 5 ≡ 7 (mod 8)
3 + 4 ≡ 7 (mod 12)   3 · 4 ≡ 0 (mod 12).

In particular, notice that it is possible that the product of two nonzero numbers modulo n can be equivalent to 0 modulo n.

Example 2. Most, but not all, of the usual laws of arithmetic hold for addition and multiplication in Zn. For instance, it is not necessarily true that there is a multiplicative inverse. Consider the multiplication table for Z8 in Table 3.1. Notice that 2, 4, and 6 do not have multiplicative inverses; that is, for a = 2, 4, or 6, there is no integer k such that ak ≡ 1 (mod 8).

Table 3.1. Multiplication table for Z8

Proposition 3.1 Let Zn be the set of equivalence classes of the integers mod n and a, b, c ∈ Zn.

1. Addition and multiplication are commutative:

a + b ≡ b + a (mod n)
ab ≡ ba (mod n).

2. Addition and multiplication are associative:

(a + b) + c ≡ a + (b + c) (mod n)
(ab)c ≡ a(bc) (mod n).

3. There are
both an additive and a multiplicative identity:

a + 0 ≡ a (mod n)
a · 1 ≡ a (mod n).

4. Multiplication distributes over addition:

a(b + c) ≡ ab + ac (mod n).

5. For every integer a there is an additive inverse −a:

a + (−a) ≡ 0 (mod n).

6. Let a be a nonzero integer. Then gcd(a, n) = 1 if and only if there exists a multiplicative inverse b for a (mod n); that is, a nonzero integer b such that ab ≡ 1 (mod n).

Proof. We will prove (1) and (6) and leave the remaining properties to be proven in the exercises.

(1) Addition and multiplication are commutative modulo n since the remainder of a + b divided by n is the same as the remainder of b + a divided by n.

(6) Suppose that gcd(a, n) = 1. Then there exist integers r and s such that ar + ns = 1. Since ns = 1 − ar, ra ≡ 1 (mod n). Letting b be the equivalence class of r, ab ≡ 1 (mod n). Conversely, suppose that there exists a b such that ab ≡ 1 (mod n). Then n divides ab − 1, so there is an integer k such that ab − nk = 1. Let d = gcd(a, n). Since d divides ab − nk, d must also divide 1; hence, d = 1.

Symmetries

Figure 3.1. Rigid motions of a rectangle: the identity, a 180° rotation, a reflection across the vertical axis, and a reflection across the horizontal axis.

A symmetry of a geometric figure is a rearrangement of the figure preserving the arrangement of its sides and vertices as well as its distances and angles. A map from the plane to itself preserving the symmetry of an object is called a rigid motion. For example, if we look at the rectangle in Figure 3.1, it is easy to see that a rotation of 180° or 360° returns a rectangle in the plane with the same orientation as the original rectangle and the same relationship among the vertices. A reflection of the rectangle across
either the vertical axis or the horizontal axis can also be seen to be a symmetry. However, a 90° rotation in either direction cannot be a symmetry unless the rectangle is a square.

Figure 3.2. Symmetries of a triangle: the identity and two rotations,

id = (A B C / A B C),  ρ1 = (A B C / B C A),  ρ2 = (A B C / C A B),

and three reflections,

µ1 = (A B C / A C B),  µ2 = (A B C / C B A),  µ3 = (A B C / B A C).

Let us find the symmetries of the equilateral triangle ABC. To find a symmetry of ABC, we must first examine the permutations of the vertices A, B, and C and then ask if a permutation extends to a symmetry of the triangle. Recall that a permutation of a set S is a one-to-one and onto map π : S → S. The three vertices have 3! = 6 permutations, so the triangle has at most six symmetries. To see that there are six permutations, observe there are three different possibilities for the first vertex, and two for the second, and the remaining vertex is determined by the placement of the first two. So we have 3 · 2 · 1 = 3! = 6 different arrangements. To denote the permutation of the vertices of an equilateral triangle that sends A to B, B to C, and C to A, we write the array

(A B C / B C A),

where the top row lists the vertices and the bottom row lists their images. Notice that this particular permutation corresponds to the rigid motion of rotating the triangle by 120° in a clockwise direction. In fact, every permutation gives rise to a symmetry of the triangle. All of these symmetries are shown in Figure 3.2.

A natural question to ask is what happens if one motion of the triangle ABC is followed by another. Which symmetry is µ1ρ1; that is, what happens when we do the permutation ρ1 and then the permutation µ1? Remember that we are composing functions here. Although we usually multiply left to right, we compose functions right to left. We have

(µ1ρ1)(A) = µ1(ρ1(A)) = µ1(B) = C
(µ1ρ1)(B) = µ1(ρ1(B)) = µ1(C) = B
(µ1ρ1)(C) = µ1(ρ1(C)) = µ1(A) = A.

This is the same symmetry as µ2. Suppose we do these motions in the opposite order, ρ1 then µ1. It is easy to determine that this is the same as the symmetry µ3; hence, ρ1µ1 ≠ µ1ρ1. A multiplication table for the symmetries of an equilateral triangle ABC is given in Table 3.2.

Table 3.2. Symmetries of an equilateral triangle

◦  | id  ρ1  ρ2  µ1  µ2  µ3
id | id  ρ1  ρ2  µ1  µ2  µ3
ρ1 | ρ1  ρ2  id  µ2  µ3  µ1
ρ2 | ρ2  id  ρ1  µ3  µ1  µ2
µ1 | µ1  µ3  µ2  id  ρ2  ρ1
µ2 | µ2  µ1  µ3  ρ1  id  ρ2
µ3 | µ3  µ2  µ1  ρ2  ρ1  id

Notice that in the multiplication table for the symmetries of an equilateral triangle, for every motion of the triangle α there is another motion α′ such that αα′ = id; that is, for every motion there is another motion that takes the triangle back to its original orientation.

3.2 Definitions and Examples

The integers mod n and the symmetries of a triangle or a rectangle are both examples of groups. A binary operation or law of composition on a set G is a function G × G → G that assigns to each pair (a, b) ∈ G × G a unique element a ◦ b, or ab, in G, called the composition of a and b. A group (G, ◦) is a set G together with a law of composition (a, b) → a ◦ b that satisfies the following axioms.

• The law of composition is associative. That is, (a ◦ b) ◦ c = a ◦ (b ◦ c) for a, b, c ∈ G.

• There exists an element e ∈ G,
called the identity element, such that for any element a ∈ G, e ◦ a = a ◦ e = a.

• For each element a ∈ G, there exists an inverse element in G, denoted by a^(−1), such that a ◦ a^(−1) = a^(−1) ◦ a = e.

A group G with the property that a ◦ b = b ◦ a for all a, b ∈ G is called abelian or commutative. Groups not satisfying this property are said to be nonabelian or noncommutative.

Example 3. The integers Z = {..., −1, 0, 1, 2, ...} form a group under the operation of addition. The binary operation on two integers m, n ∈ Z is just their sum. Since the integers under addition already have a well-established notation, we will use the operator + instead of ◦; that is, we shall write m + n instead of m ◦ n. The identity is 0, and the inverse of n ∈ Z is written as −n instead of n^(−1). Notice that the integers under addition have the additional property that m + n = n + m and are therefore an abelian group.

Most of the time we will write ab instead of a ◦ b; however, if the group already has a natural operation such as addition in the integers, we will use that operation. That is, if we are adding two integers, we still write m + n, −n for the inverse, and 0 for the identity as usual. We also write m − n instead of m + (−n). It is often convenient to describe a group in terms of an addition or multiplication table. Such a table is called a Cayley table.

Example 4. The integers mod n form a group under addition modulo n. Consider Z5, consisting of the equivalence classes of the integers 0, 1, 2, 3, and 4. We define the group operation on Z5 by modular addition. We write the binary operation on the group additively; that is, we write m + n. The element 0 is the identity of the group and each element in Z5 has an inverse. For instance, 2 + 3 = 3 + 2 = 0. Table 3.3 is a Cayley table for Z5.

Table 3.3. Cayley table for (Z5, +)

+ | 0 1 2 3 4
0 | 0 1 2 3 4
1 | 1 2 3 4 0
2 | 2 3 4 0 1
3 | 3 4 0 1 2
4 | 4 0 1 2 3

By Proposition 3.1, Zn = {0, 1, ..., n − 1}
is a group under the binary operation of addition mod n.

Example 5. Not every set with a binary operation is a group. For example, if we let modular multiplication be the binary operation on Zn, then Zn fails to be a group. The element 1 acts as a group identity since 1 · k = k · 1 = k for any k ∈ Zn; however, a multiplicative inverse for 0 does not exist since 0 · k = k · 0 = 0 for every k in Zn. Even if we consider the set Zn \ {0}, we still may not have a group. For instance, let 2 ∈ Z6. Then 2 has no multiplicative inverse since there is no k ∈ Z6 with 2 · k ≡ 1 (mod 6). By Proposition 3.1, every nonzero k does have an inverse in Zn if k is relatively prime to n. Denote the set of all such nonzero elements in Zn by U(n). Then U(n) is a group called the group of units of Zn. Table 3.4 is a Cayley table for the group U(8).

Table 3.4. Multiplication table for U(8)

·   1   3   5   7
1   1   3   5   7
3   3   1   7   5
5   5   7   1   3
7   7   5   3   1

Example 6. The symmetries of an equilateral triangle described in Section 3.1 form a nonabelian group. As we observed, it is not necessarily true that αβ = βα for two symmetries α and β. Using Table 3.2, which is a Cayley table for this group, we can easily check that the symmetries of an equilateral triangle are indeed a group. We will denote this group by either S3 or D3, for reasons that will be explained later.

Example 7. We use M2(R) to denote the set of all 2 × 2 matrices. Let GL2(R) be the subset of M2(R) consisting of invertible matrices; that is, a matrix A = [ a b; c d ] is in GL2(R) if there exists a matrix A−1 such that AA−1 = A−1A = I, where I is the 2 × 2 identity matrix. For A to have an inverse is equivalent to requiring that the determinant of A be nonzero; that is, det A = ad − bc ≠ 0. The set of invertible matrices forms a group called the general linear group. The identity of the group is the identity matrix I = [ 1 0; 0 1 ]. The
inverse of A ∈ GL2(R) is

A−1 = 1/(ad − bc) [ d −b; −c a ].

The product of two invertible matrices is again invertible. Matrix multiplication is associative, satisfying the other group axiom. For matrices it is not true in general that AB = BA; hence, GL2(R) is another example of a nonabelian group.

Example 8. Let

1 = [ 1 0; 0 1 ],  I = [ 0 1; −1 0 ],  J = [ 0 i; i 0 ],  K = [ i 0; 0 −i ],

where i2 = −1. Then the relations I2 = J2 = K2 = −1, IJ = K, JK = I, KI = J, JI = −K, KJ = −I, and IK = −J hold. The set Q8 = {±1, ±I, ±J, ±K} is a group called the quaternion group. Notice that Q8 is noncommutative.

Example 9. Let C∗ be the set of nonzero complex numbers. Under the operation of multiplication C∗ forms a group. The identity is 1. If z = a + bi is a nonzero complex number, then

z−1 = (a − bi)/(a2 + b2)

is the inverse of z. It is easy to see that the remaining group axioms hold.

A group is finite, or has finite order, if it contains a finite number of elements; otherwise, the group is said to be infinite or to have infinite order. The order of a finite group is the number of elements that it contains. If G is a group containing n elements, we write |G| = n. The group Z5 is a finite group of order 5; the integers Z form an infinite group under addition, and we sometimes write |Z| = ∞.

Basic Properties of Groups

Proposition 3.2 The identity element in a group G is unique; that is, there exists only one element e ∈ G such that eg = ge = g for all g ∈ G.

Proof. Suppose that e and e′ are both identities in G. Then eg = ge = g and e′g = ge′ = g for all g ∈ G. We need to show that e = e′. If we think of e as the
identity, then ee′ = e′; but if e′ is the identity, then ee′ = e. Combining these two equations, we have e = ee′ = e′.

Inverses in a group are also unique. If g′ and g′′ are both inverses of an element g in a group G, then gg′ = g′g = e and gg′′ = g′′g = e. We want to show that g′ = g′′, but g′ = g′e = g′(gg′′) = (g′g)g′′ = eg′′ = g′′. We summarize this fact in the following proposition.

Proposition 3.3 If g is any element in a group G, then the inverse of g, g−1, is unique.

Proposition 3.4 Let G be a group. If a, b ∈ G, then (ab)−1 = b−1a−1.

Proof. Let a, b ∈ G. Then abb−1a−1 = aea−1 = aa−1 = e. Similarly, b−1a−1ab = e. But by the previous proposition, inverses are unique; hence, (ab)−1 = b−1a−1.

Proposition 3.5 Let G be a group. For any a ∈ G, (a−1)−1 = a.

Proof. Observe that a−1(a−1)−1 = e. Consequently, multiplying both sides of this equation by a, we have (a−1)−1 = e(a−1)−1 = aa−1(a−1)−1 = ae = a.

It makes sense to write equations with group elements and group operations. If a and b are two elements in a group G, does there exist an element x ∈ G such that ax = b? If such an x does exist, is it unique? The following proposition answers both of these questions positively.

Proposition 3.6 Let G be a group and a and b be any two elements in G. Then the equations ax = b and xa = b have unique solutions in G.

Proof. Suppose that ax = b. We must show that such an x exists. Multiplying both sides of ax = b by a−1, we have x = ex = a−1ax = a−1b. To show uniqueness, suppose that x1 and x2 are both solutions
of ax = b; then ax1 = b = ax2. So x1 = a−1ax1 = a−1ax2 = x2. The proof for the existence and uniqueness of the solution of xa = b is similar.

Proposition 3.7 If G is a group and a, b, c ∈ G, then ba = ca implies b = c and ab = ac implies b = c.

This proposition tells us that the right and left cancellation laws are true in groups. We leave the proof as an exercise.

We can use exponential notation for groups just as we do in ordinary algebra. If G is a group and g ∈ G, then we define g0 = e. For n ∈ N, we define

gn = g · g · · · g (n times)

and

g−n = g−1 · g−1 · · · g−1 (n times).

Theorem 3.8 In a group, the usual laws of exponents hold; that is, for all g, h ∈ G,

1. gmgn = gm+n for all m, n ∈ Z;
2. (gm)n = gmn for all m, n ∈ Z;
3. (gh)n = (h−1g−1)−n for all n ∈ Z. Furthermore, if G is abelian, then (gh)n = gnhn.

We will leave the proof of this theorem as an exercise. Notice that (gh)n ≠ gnhn in general, since the group may not be abelian. If the group is Z or Zn, we write the group operation additively and the exponential operation multiplicatively; that is, we write ng instead of gn. The laws of exponents now become

1. mg + ng = (m + n)g for all m, n ∈ Z;
2. m(ng) = (mn)g for all m, n ∈ Z;
3. m(g + h) = mg + mh for all m ∈ Z.

It is important to realize that the last statement can be made only because Z and Zn are commutative groups.

Historical Note

Although the first clear axiomatic definition of a group was not given until the late 1800s, group-theoretic methods had been employed before this time
in the development of many areas of mathematics, including geometry and the theory of algebraic equations. Joseph-Louis Lagrange used group-theoretic methods in a 1770–1771 memoir to study methods of solving polynomial equations. Later, Évariste Galois (1811–1832) succeeded in developing the mathematics necessary to determine exactly which polynomial equations could be solved in terms of the polynomials’ coefficients. Galois’ primary tool was group theory.

The study of geometry was revolutionized in 1872 when Felix Klein proposed that geometric spaces should be studied by examining those properties that are invariant under a transformation of the space. Sophus Lie, a contemporary of Klein, used group theory to study solutions of partial differential equations. One of the first modern treatments of group theory appeared in William Burnside’s The Theory of Groups of Finite Order [1], first published in 1897.

3.3 Subgroups

Definitions and Examples

Sometimes we wish to investigate smaller groups sitting inside a larger group. The set of even integers 2Z = {..., −2, 0, 2, 4,...} is a group under the operation of addition. This smaller group sits naturally inside of the group of integers under addition. We define a subgroup H of a group G to be a subset H of G such that when the group operation of G is restricted to H, H is a group in its own right. Observe that every group G with at least two elements will always have at least two subgroups, the subgroup consisting of the identity element alone and the entire group itself. The subgroup H = {e} of a group G is called the trivial subgroup. A subgroup that is a proper subset of G is called a proper subgroup. In many of the examples that we have investigated up to this point, there exist other subgroups besides the trivial and improper subgroups.

Example 10. Consider the set of nonzero real numbers, R∗, with the group operation of multiplication.
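For a small finite group, the subgroup conditions just defined can be checked mechanically. The following Python sketch is ours, not the text’s (the function name and the choice of Z12 are illustrative); it tests the identity, inverse, and closure conditions for a subset of Zn under addition mod n:

```python
def is_subgroup_of_Zn(H, n):
    """Check the subgroup conditions for a subset H of Z_n under addition mod n."""
    H = set(H)
    if 0 not in H:                        # the identity 0 must lie in H
        return False
    for a in H:
        if (-a) % n not in H:             # each element needs an inverse in H
            return False
        for b in H:
            if (a + b) % n not in H:      # H must be closed under the operation
                return False
    return True

# The even residues form a subgroup of Z_12, mirroring 2Z inside Z.
print(is_subgroup_of_Zn({0, 2, 4, 6, 8, 10}, 12))  # True
print(is_subgroup_of_Zn({0, 1, 2}, 12))            # False: 2 + 2 = 4 is missing
```

Associativity need not be tested here, since it is inherited from the operation of Zn.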
The identity of this group is 1 and the inverse of any element a ∈ R∗ is just 1/a. We will show that

Q∗ = {p/q : p and q are nonzero integers}

is a subgroup of R∗. The identity of R∗ is 1, and 1 = 1/1 is the quotient of two nonzero integers; hence, the identity of R∗ is in Q∗. Given two elements in Q∗, say p/q and r/s, their product pr/qs is also in Q∗. The inverse of any element p/q ∈ Q∗ is again in Q∗ since (p/q)−1 = q/p. Since multiplication in R∗ is associative, multiplication in Q∗ is associative.

Example 11. Recall that C∗ is the multiplicative group of nonzero complex numbers. Let H = {1, −1, i, −i}. Then H is a subgroup of C∗. It is quite easy to verify that H is a group under multiplication and that H ⊂ C∗.

Example 12. Let SL2(R) be the subset of GL2(R) consisting of matrices of determinant one; that is, a matrix A = [ a b; c d ] is in SL2(R) exactly when ad − bc = 1. To show that SL2(R) is a subgroup of the general linear group, we must show that it is a group under matrix multiplication. The 2 × 2 identity matrix is in SL2(R), as is the inverse of the matrix A:

A−1 = [ d −b; −c a ].

It remains to show that multiplication is closed; that is, that the product of two matrices of determinant one also has determinant one. We will leave this task as an exercise. The group SL2(R) is called the special linear group.

Example 13. It is important to realize that a subset H of a group G can be a group without being a subgroup of G. For H to be a subgroup of G it must inherit G’s binary operation. The set of all 2 × 2 matrices, M2(R), forms a group under the operation of addition. The 2 × 2 general linear group is a subset of M2(R) and is a group under matrix multiplication, but it is not a subgroup of M2(R). If we add two invertible matrices, we do not necessarily obtain another invertible matrix. Observe that

[ 1 0; 0 1 ] + [ −1 0; 0 −1 ] = [ 0 0; 0 0 ],

but the zero matrix is not in GL2(R).

Example 14. One way of telling whether or not two groups are the same is by examining their subgroups. Other than the trivial subgroup and the group itself, the group Z4 has a single subgroup consisting of the elements 0 and 2. From the group Z2, we can form another group of four elements as follows. As a set this group is Z2 × Z2. We perform the group operation coordinatewise; that is, (a, b) + (c, d) = (a + c, b + d). Table 3.5 is an addition table for Z2 × Z2. Since there are three nontrivial proper subgroups of Z2 × Z2, H1 = {(0, 0), (0, 1)}, H2 = {(0, 0), (1, 0)}, and H3 = {(0, 0), (1, 1)}, Z4 and Z2 × Z2 must be different groups.

Table 3.5. Addition table for Z2 × Z2

+       (0,0)  (0,1)  (1,0)  (1,1)
(0,0)   (0,0)  (0,1)  (1,0)  (1,1)
(0,1)   (0,1)  (0,0)  (1,1)  (1,0)
(1,0)   (1,0)  (1,1)  (0,0)  (0,1)
(1,1)   (1,1)  (1,0)  (0,1)  (0,0)

Some Subgroup Theorems

Let us examine some criteria for determining exactly when a subset of a group is a subgroup.

Proposition 3.9 A subset H of G is a subgroup if and only if it satisfies the following conditions.

1. The identity e of G is in H.
2. If h1, h2 ∈ H, then h1h2 ∈ H.
3. If h ∈ H, then h−1 ∈ H.

Proof. First suppose that H is a subgroup of G. We must show that the three conditions hold. Since H is a group, it must have an identity eH. We must show that eH = e, where e is the identity
of G. We know that eH eH = eH and that eeH = eH e = eH; hence, eeH = eH eH. By right-hand cancellation, e = eH. The second condition holds since a subgroup H is a group. To prove the third condition, let h ∈ H. Since H is a group, there is an element h′ ∈ H such that hh′ = h′h = e. By the uniqueness of the inverse in G, h′ = h−1.

Conversely, if the three conditions hold, we must show that H is a group under the same operation as G; however, these conditions plus the associativity of the binary operation are exactly the axioms stated in the definition of a group.

Proposition 3.10 Let H be a subset of a group G. Then H is a subgroup of G if and only if H ≠ ∅, and whenever g, h ∈ H then gh−1 is in H.

Proof. Let H be a nonempty subset of G satisfying the condition. Then H contains some element g. So gg−1 = e is in H. If g ∈ H, then eg−1 = g−1 is also in H. Finally, let g, h ∈ H. We must show that their product is also in H. However, g(h−1)−1 = gh ∈ H. Hence, H is indeed a subgroup of G. Conversely, if g and h are in H, we want to show that gh−1 ∈ H. Since h is in H, its inverse h−1 must also be in H. Because of the closure of the group operation, gh−1 ∈ H.

Exercises

1. Find all x ∈ Z satisfying each of the following equations.
(a) 3x ≡ 2 (mod 7)
(b) 5x + 1 ≡ 13 (mod 23)
(c) 5x + 1 ≡ 13 (mod 26)
(d) 9x ≡ 3 (mod 5)
(e) 5x ≡ 1 (mod 6)
(f) 3x ≡ 1 (mod 6)

2. Which of the following multiplication tables defined on the set G = {a, b, c, d} form a group? Support your answer in each case. (The four tables (a)–(d) are not reproduced here.)

3. Write out Cayley
tables for groups formed by the symmetries of a rectangle and for (Z4, +). How many elements are in each group? Are the groups the same? Why or why not?

4. Describe the symmetries of a rhombus and prove that the set of symmetries forms a group. Give Cayley tables for both the symmetries of a rectangle and the symmetries of a rhombus. Are the symmetries of a rectangle and those of a rhombus the same?

5. Describe the symmetries of a square and prove that the set of symmetries is a group. Give a Cayley table for the symmetries. How many ways can the vertices of a square be permuted? Is each permutation necessarily a symmetry of the square? The symmetry group of the square is denoted by D4.

6. Give a multiplication table for the group U(12).

7. Let S = R \ {−1} and define a binary operation on S by a ∗ b = a + b + ab. Prove that (S, ∗) is an abelian group.

8. Give an example of two elements A and B in GL2(R) with AB ≠ BA.

9. Prove that the product of two matrices in SL2(R) has determinant one.

10. Prove that the set of matrices of the form

[ 1 x z
  0 1 y
  0 0 1 ]

with x, y, z ∈ R is a group under matrix multiplication. This group, known as the Heisenberg group, is important in quantum physics. Matrix multiplication in the Heisenberg group is defined by

[ 1 x z     [ 1 x′ z′     [ 1 x+x′ z+z′+xy′
  0 1 y  ·    0 1  y′  =    0 1  y+y′
  0 0 1 ]     0 0  1 ]      0 0  1  ].

11. Prove that det(AB) = det(A) det(B) in GL2(R). Use this result to show that the binary operation in the group GL2(R) is closed; that is, if A and B are in GL2(R), then AB ∈ GL2(R).

12. Let Zn2 = {(a1, a2,..., an) : ai ∈ Z2}. Define a binary operation on Zn2 by

(a1, a2,..., an) + (b1, b2,..., bn) = (a1
+ b1, a2 + b2,..., an + bn). Prove that Zn2 is a group under this operation. This group is important in algebraic coding theory.

13. Show that R∗ = R \ {0} is a group under the operation of multiplication.

14. Given the groups R∗ and Z, let G = R∗ × Z. Define a binary operation ◦ on G by (a, m) ◦ (b, n) = (ab, m + n). Show that G is a group under this operation.

15. Prove or disprove that every group containing six elements is abelian.

16. Give a specific example of some group G and elements g, h ∈ G where (gh)n ≠ gnhn.

17. Give an example of three different groups with eight elements. Why are the groups different?

18. Show that there are n! permutations of a set containing n items.

19. Show that for all a ∈ Zn, a + 0 ≡ 0 + a ≡ a (mod n).

20. Prove that there is a multiplicative identity for the integers modulo n: a · 1 ≡ a (mod n).

21. For each a ∈ Zn find a b ∈ Zn such that a + b ≡ 0 (mod n).

22. Show that addition and multiplication mod n are associative operations.

23. Show that multiplication distributes over addition modulo n: a(b + c) ≡ ab + ac (mod n).

24. Let a and b be elements in a group G. Prove that abna−1 = (aba−1)n for n ∈ Z.

25. Let U(n) be the group of units in Zn. If n > 2, prove that there is an element k ∈ U(n) such that k2 = 1 and k ≠ 1.

26. Prove that the inverse of g1g2 · · · gn is gn−1 gn−1−1 · · · g1−1 (the inverses taken in reverse order).

27. Prove the remainder of Proposition 3.6: if G is a group and a, b ∈ G, then the equation xa = b has unique solutions in G.

28. Prove Theorem 3.8.

29. Prove the right and left cancellation laws for a group G; that
is, show that in the group G, ba = ca implies b = c and ab = ac implies b = c for elements a, b, c ∈ G.

30. Show that if a2 = e for all elements a in a group G, then G must be abelian.

31. Show that if G is a finite group of even order, then there is an a ∈ G such that a is not the identity and a2 = e.

32. Let G be a group and suppose that (ab)2 = a2b2 for all a and b in G. Prove that G is an abelian group.

33. Find all the subgroups of Z3 × Z3. Use this information to show that Z3 × Z3 is not the same group as Z9. (See Example 14 for a short description of the product of groups.)

34. Find all the subgroups of the symmetry group of an equilateral triangle.

35. Compute the subgroups of the symmetry group of a square.

36. Let H = {2k : k ∈ Z}. Show that H is a subgroup of Q∗.

37. Let n = 0, 1, 2,... and nZ = {nk : k ∈ Z}. Prove that nZ is a subgroup of Z. Show that these subgroups are the only subgroups of Z.

38. Let T = {z ∈ C∗ : |z| = 1}. Prove that T is a subgroup of C∗.

39. Let G consist of the 2 × 2 matrices of the form

[ cos θ  − sin θ
  sin θ    cos θ ]

where θ ∈ R. Prove that G is a subgroup of SL2(R).

40. Prove that

G = {a + b√2 : a, b ∈ Q and a and b are not both zero}

is a subgroup of R∗ under the group operation of multiplication.

41. Let G be the group of 2 × 2 matrices under addition and H = { [ a b; c d ] : a + d = 0 }. Prove that H is a subgroup of G.

42. Prove or disprove: SL2(Z), the set of 2 × 2 matrices with integer entries and determinant one, is a subgroup of SL2(R).

43. List the subgroups of the quaternion group, Q8.
44. Prove that the intersection of two subgroups of a group G is also a subgroup of G.

45. Prove or disprove: If H and K are subgroups of a group G, then H ∪ K is a subgroup of G.

46. Prove or disprove: If H and K are subgroups of a group G, then HK = {hk : h ∈ H and k ∈ K} is a subgroup of G. What if G is abelian?

47. Let G be a group. Show that Z(G) = {x ∈ G : gx = xg for all g ∈ G} is a subgroup of G. This subgroup is called the center of G.

48. Let a and b be elements of a group G. If a4b = ba and a3 = e, prove that ab = ba.

49. Give an example of an infinite group in which every nontrivial subgroup is infinite.

50. Give an example of an infinite group in which every proper subgroup is finite.

51. If xy = x−1y−1 for all x and y in G, prove that G must be abelian.

52. If (xy)2 = xy for all x and y in G, prove that G must be abelian.

53. Prove or disprove: Every nontrivial subgroup of a nonabelian group is nonabelian.

54. Let H be a subgroup of G and C(H) = {g ∈ G : gh = hg for all h ∈ H}. Prove C(H) is a subgroup of G. This subgroup is called the centralizer of H in G.

55. Let H be a subgroup of G. If g ∈ G, show that gHg−1 is also a subgroup of G.

Figure 3.3. A UPC code

Additional Exercises: Detecting Errors

Credit card companies, banks, book publishers, and supermarkets all take advantage of the properties of integer arithmetic modulo n and group theory to obtain error detection schemes for the identification codes that they use.

1. UPC Symbols. Universal Product Code (UPC) symbols are now found on most products
in grocery and retail stores. The UPC symbol is a 12-digit code identifying the manufacturer of a product and the product itself (Figure 3.3). The first 11 digits contain information about the product; the twelfth digit is used for error detection. If d1d2 · · · d12 is a valid UPC number, then

3 · d1 + 1 · d2 + 3 · d3 + · · · + 3 · d11 + 1 · d12 ≡ 0 (mod 10).

(a) Show that the UPC number 0-50000-30042-6, which appears in Figure 3.3, is a valid UPC number.
(b) Show that the number 0-50000-30043-6 is not a valid UPC number.
(c) Write a formula to calculate the check digit, d12, in the UPC number.
(d) The UPC error detection scheme can detect most transposition errors; that is, it can determine if two digits have been interchanged. Show that the transposition error 0-05000-30042-6 is not detected. Find a transposition error that is detected. Can you find a general rule for the types of transposition errors that can be detected?
(e) Write a program that will determine whether or not a UPC number is valid.

2. It is often useful to use an inner product notation for this type of error detection scheme; hence, we will use the notation

(d1, d2,..., dk) · (w1, w2,..., wk) ≡ 0 (mod n)

to mean

d1w1 + d2w2 + · · · + dkwk ≡ 0 (mod n).

Suppose that (d1, d2,..., dk) · (w1, w2,..., wk) ≡ 0 (mod n) is an error detection scheme for the k-digit identification number d1d2 · · · dk, where 0 ≤ di < n. Prove that all single-digit errors are detected if and only if gcd(wi, n) = 1 for 1 ≤ i ≤ k.

3. Let (d1, d2,..., dk) · (w1, w2,
..., wk) ≡ 0 (mod n) be an error detection scheme for the k-digit identification number d1d2 · · · dk, where 0 ≤ di < n. Prove that all transposition errors of two digits di and dj are detected if and only if gcd(wi − wj, n) = 1 for i and j between 1 and k.

4. ISBN Codes. Every book has an International Standard Book Number (ISBN) code. This is a 10-digit code indicating the book’s publisher and title. The tenth digit is a check digit satisfying

(d1, d2,..., d10) · (10, 9,..., 1) ≡ 0 (mod 11).

One problem is that d10 might have to be a 10 to make the inner product zero; in this case, 11 digits would be needed to make this scheme work. Therefore, the character X is used for the eleventh digit. So ISBN 3-540-96035-X is a valid ISBN code.

(a) Is ISBN 0-534-91500-0 a valid ISBN code? What about ISBN 0-534-91700-0 and ISBN 0-534-19500-0?
(b) Does this method detect all single-digit errors? What about all transposition errors?
(c) How many different ISBN codes are there?
(d) Write a computer program that will calculate the check digit for the first nine digits of an ISBN code.
(e) A publisher has houses in Germany and the United States. Its German prefix is 3-540. If its United States prefix will be 0-abc, find abc such that the rest of the ISBN code will be the same for a book printed in Germany and in the United States. Under the ISBN coding method the first digit identifies the language; German is 3 and English is 0. The next group of numbers identifies the publisher, and the last group identifies the specific book.

References and Suggested Readings

References [2] and [3] show how group theory can be used in error detection schemes. Other sources cover more advanced topics in group theory.

[1] Burnside, W
. Theory of Groups of Finite Order. 2nd ed. Cambridge University Press, Cambridge, 1911; Dover, New York, 1953. A classic. Also available at books.google.com.
[2] Gallian, J. A. and Winters, S. “Modular Arithmetic in the Marketplace,” The American Mathematical Monthly 95 (1988): 548–51.
[3] Gallian, J. A. Contemporary Abstract Algebra. 7th ed. Brooks/Cole, Belmont, CA, 2009.
[4] Hall, M. Theory of Groups. 2nd ed. American Mathematical Society, Providence, 1959.
[5] Kurosh, A. E. The Theory of Groups, vols. I and II. American Mathematical Society, Providence, 1979.
[6] Rotman, J. J. An Introduction to the Theory of Groups. 4th ed. Springer, New York, 1995.

Sage

The first half of this text is about group theory. Sage includes GAP, a program designed primarily for just group theory, and in continuous development since 1986. Many of Sage’s computations for groups ultimately are performed by GAP.

4 Cyclic Groups

The groups Z and Zn, which are among the most familiar and easily understood groups, are both examples of what are called cyclic groups. In this chapter we will study the properties of cyclic groups and cyclic subgroups, which play a fundamental part in the classification of all abelian groups.

4.1 Cyclic Subgroups

Often a subgroup will depend entirely on a single element of the group; that is, knowing that particular element will allow us to compute any other element in the subgroup.

Example 1. Suppose that we consider 3 ∈ Z and look at all multiples (both positive and negative) of 3. As a set, this is 3Z = {..., −3, 0, 3, 6,...}. It is easy to see that 3Z is a subgroup of the integers. This subgroup is completely determined by the element 3 since we can obtain all of the other elements of the group by taking multiples of 3. Every element in the subgroup is “generated” by 3.

Example 2. If H = {2n : n ∈ Z}, then H is a subgroup of
the multiplicative group of nonzero rational numbers, Q∗. If a = 2m and b = 2n are in H, then ab−1 = 2m2−n = 2m−n is also in H. By Proposition 3.10, H is a subgroup of Q∗ determined by the element 2.

Theorem 4.1 Let G be a group and a be any element in G. Then the set

⟨a⟩ = {ak : k ∈ Z}

is a subgroup of G. Furthermore, ⟨a⟩ is the smallest subgroup of G that contains a.

Proof. The identity is in ⟨a⟩ since a0 = e. If g and h are any two elements in ⟨a⟩, then by the definition of ⟨a⟩ we can write g = am and h = an for some integers m and n. So gh = aman = am+n is again in ⟨a⟩. Finally, if g = an is in ⟨a⟩, then the inverse g−1 = a−n is also in ⟨a⟩. Clearly, any subgroup H of G containing a must contain all the powers of a by closure; hence, H contains ⟨a⟩. Therefore, ⟨a⟩ is the smallest subgroup of G containing a.

Remark. If we are using the “+” notation, as in the case of the integers under addition, we write ⟨a⟩ = {na : n ∈ Z}.

For a ∈ G, we call ⟨a⟩ the cyclic subgroup generated by a. If G contains some element a such that G = ⟨a⟩, then G is a cyclic group. In this case a is a generator of G. If a is an element of a group G, we define the order of a to be the smallest positive integer n such that an = e, and we write |a| = n. If there is no such integer n, we say that the order of a is infinite and write |a| = ∞ to denote the order of a.

Example 3. Notice that a cyclic group can have more than a single generator. Both 1 and 5 generate Z6; hence, Z6 is a cyclic group. Not every element in a cyclic group is necessarily a generator of the group. The order of 2 ∈ Z6 is 3. The cyclic subgroup generated by 2 is ⟨2⟩ = {0, 2, 4}.

The groups Z and Zn
are cyclic groups. The elements 1 and −1 are generators for Z. We can certainly generate Zn with 1 although there may be other generators of Zn, as in the case of Z6.

Example 4. The group of units, U(9), in Z9 is a cyclic group. As a set, U(9) is {1, 2, 4, 5, 7, 8}. The element 2 is a generator for U(9) since

21 = 2, 22 = 4, 23 = 8, 24 = 7, 25 = 5, 26 = 1.

Example 5. Not every group is a cyclic group. Consider the symmetry group of an equilateral triangle S3. The multiplication table for this group is Table 3.2. The subgroups of S3 are shown in Figure 4.1. Notice that every subgroup is cyclic; however, no single element generates the entire group.

Figure 4.1. Subgroups of S3: the whole group S3; the subgroups {id, ρ1, ρ2}, {id, µ1}, {id, µ2}, and {id, µ3}; and the trivial subgroup {id}.

Theorem 4.2 Every cyclic group is abelian.

Proof. Let G be a cyclic group and a ∈ G be a generator for G. If g and h are in G, then they can be written as powers of a, say g = ar and h = as. Since gh = aras = ar+s = as+r = asar = hg, G is abelian.

Subgroups of Cyclic Groups

We can ask some interesting questions about cyclic subgroups of a group and subgroups of a cyclic group. If G is a group, which subgroups of G are cyclic? If G is a cyclic group, what type of subgroups does G possess?

Theorem 4.3 Every subgroup of a cyclic group is cyclic.

Proof. The main tools used in this proof are the division algorithm and the Principle of Well-Ordering. Let G be a cyclic group generated by a and suppose that H is a subgroup of G. If H = {e}, then trivially H is cyclic. Suppose that H contains some other element g distinct from the identity. Then g can be written as an for some integer n. We can assume that n > 0. Let m be the smallest
natural number such that am ∈ H. Such an m exists by the Principle of Well-Ordering. We claim that h = am is a generator for H. We must show that every h′ ∈ H can be written as a power of h. Since h′ ∈ H and H is a subgroup of G, h′ = ak for some integer k. Using the division algorithm, we can find numbers q and r such that k = mq + r where 0 ≤ r < m; hence,

ak = amq+r = (am)qar = hqar.

So ar = akh−q. Since ak and h−q are in H, ar must also be in H. However, m was the smallest positive number such that am was in H; consequently, r = 0 and so k = mq. Therefore, h′ = ak = amq = hq and H is generated by h.

Corollary 4.4 The subgroups of Z are exactly nZ for n = 0, 1, 2,....

Proposition 4.5 Let G be a cyclic group of order n and suppose that a is a generator for G. Then ak = e if and only if n divides k.

Proof. First suppose that ak = e. By the division algorithm, k = nq + r where 0 ≤ r < n; hence, e = ak = anq+r = anqar = ear = ar. Since the smallest positive integer m such that am = e is n, r = 0. Conversely, if n divides k, then k = ns for some integer s. Consequently, ak = ans = (an)s = es = e.

Theorem 4.6 Let G be a cyclic group of order n and suppose that a ∈ G is a generator of the group. If b = ak, then the order of b is n/d, where d = gcd(k, n).

Proof. We wish to find the smallest integer m such that e = bm = akm. By Proposition 4.5, this is the smallest integer m such that n divides km or, equivalently, n/d divides m(k/d). Since d is the greatest common divisor of n and k, n/d and k/d are relatively prime. Hence, for n/d to divide m(k/d) it must divide m. The smallest such m is n
/d.

Corollary 4.7 The generators of Zn are the integers r such that 1 ≤ r < n and gcd(r, n) = 1.

Example 6. Let us examine the group Z16. The numbers 1, 3, 5, 7, 9, 11, 13, and 15 are the elements of Z16 that are relatively prime to 16. Each of these elements generates Z16. For example,

1 · 9 = 9, 2 · 9 = 2, 3 · 9 = 11, 4 · 9 = 4, 5 · 9 = 13, 6 · 9 = 6, 7 · 9 = 15, 8 · 9 = 8, 9 · 9 = 1, 10 · 9 = 10, 11 · 9 = 3, 12 · 9 = 12, 13 · 9 = 5, 14 · 9 = 14, 15 · 9 = 7.

4.2 Multiplicative Group of Complex Numbers

The complex numbers are defined as C = {a + bi : a, b ∈ R}, where i2 = −1. If z = a + bi, then a is the real part of z and b is the imaginary part of z.

To add two complex numbers z = a + bi and w = c + di, we just add the corresponding real and imaginary parts: z + w = (a + bi) + (c + di) = (a + c) + (b + d)i. Remembering that i2 = −1, we multiply complex numbers just like polynomials. The product of z and w is (a + bi)(c + di) = ac + bdi2 + adi + bci = (ac − bd) + (ad + bc)i.

Every nonzero complex number z = a + bi has a multiplicative inverse; that is, there exists a z−1 ∈ C∗ such that zz−1 = z−1z = 1. If z = a + bi, then z−1 = (a − bi)/(a2 + b2). The complex conjugate of a complex number z = a + bi is defined to be z̄ = a − bi. The absolute value or modulus of z = a + bi is |z| = √(a2 + b2).

Example 7. Let z = 2 + 3i and w = 1 − 2i. Then z + w = (2 + 3i) + (1 − 2i) = 3 + i and zw =
(2 + 3i)(1 − 2i) = 8 − i. Also,

z−1 = 2/13 − (3/13)i,  |z| = √13,  and  z̄ = 2 − 3i.

There are several ways of graphically representing complex numbers. We can represent a complex number z = a + bi as an ordered pair on the xy plane where a is the x (or real) coordinate and b is the y (or imaginary) coordinate. This is called the rectangular or Cartesian representation. The rectangular representations of z1 = 2 + 3i, z2 = 1 − 2i, and z3 = −3 + 2i are depicted in Figure 4.2.

Figure 4.2. Rectangular coordinates of a complex number

Nonzero complex numbers can also be represented using polar coordinates. To specify any nonzero point on the plane, it suffices to give an angle θ from the positive x axis in the counterclockwise direction and a distance r from the origin, as in Figure 4.3.

Figure 4.3. Polar coordinates of a complex number

We can see that a = r cos θ and b = r sin θ. Hence,

z = a + bi = r(cos θ + i sin θ)

and

r = |z| = √(a2 + b2).

We sometimes abbreviate r(cos θ + i sin θ) as r cis θ. To assure that the representation of z is well-defined, we also require that 0◦ ≤ θ < 360◦. If the measurement is in radians, then 0 ≤ θ < 2π.

Example 8. Suppose that z = 2 cis 60◦. Then a = 2 cos 60◦ = 1 and b = 2 sin 60◦ = √3. Hence, the rectangular representation is z = 1 + √3 i.

Conversely, if we are given a rectangular representation of a complex number, it is often useful to know the number’s polar representation. If z = 3√2 − 3√2 i, then r = √(a2 + b2) = √36 = 6 and θ =
arctan(b/a) = arctan(−1) = 315◦, so 3√2 − 3√2 i = 6 cis 315◦.

The polar representation of a complex number makes it easy to find products and powers of complex numbers. The proof of the following proposition is straightforward and is left as an exercise.

Proposition 4.8 Let z = r cis θ and w = s cis φ be two nonzero complex numbers. Then zw = rs cis(θ + φ).

Example 9. If z = 3 cis(π/3) and w = 2 cis(π/6), then zw = 6 cis(π/2) = 6i.

Theorem 4.9 (DeMoivre) Let z = r cis θ be a nonzero complex number. Then [r cis θ]^n = r^n cis(nθ) for n = 1, 2, ....

Proof. We will use induction on n. For n = 1 the theorem is trivial. Assume that the theorem is true for all k such that 1 ≤ k ≤ n. Then

z^(n+1) = z^n z
= r^n(cos nθ + i sin nθ) r(cos θ + i sin θ)
= r^(n+1)[(cos nθ cos θ − sin nθ sin θ) + i(sin nθ cos θ + cos nθ sin θ)]
= r^(n+1)[cos(nθ + θ) + i sin(nθ + θ)]
= r^(n+1)[cos(n + 1)θ + i sin(n + 1)θ].

Example 10. Suppose that z = 1 + i and we wish to compute z^10. Rather than computing (1 + i)^10 directly, it is much easier to switch to polar coordinates and calculate z^10 using DeMoivre's Theorem:

z^10 = (1 + i)^10 = (√2 cis(π/4))^10 = (√2)^10 cis(10π/4) = 32 cis(5π/2) = 32 cis(π/2) = 32i.

The Circle Group and the Roots of Unity

The multiplicative group of the complex numbers, C∗, possesses some interesting subgroups. Whereas Q∗ and R∗ have no interesting
subgroups of finite order, C∗ has many. We first consider the circle group, T = {z ∈ C : |z| = 1}. The following proposition is a direct result of Proposition 4.8.

Proposition 4.10 The circle group is a subgroup of C∗.

Although the circle group has infinite order, it has many interesting finite subgroups. Suppose that H = {1, −1, i, −i}. Then H is a subgroup of the circle group. Also, 1, −1, i, and −i are exactly those complex numbers that satisfy the equation z⁴ = 1. The complex numbers satisfying the equation z^n = 1 are called the nth roots of unity.

Theorem 4.11 If z^n = 1, then the nth roots of unity are z = cis(2kπ/n), where k = 0, 1, ..., n − 1. Furthermore, the nth roots of unity form a cyclic subgroup of T of order n.

Proof. By DeMoivre's Theorem, z^n = cis(n · 2kπ/n) = cis(2kπ) = 1. The z's are distinct since the numbers 2kπ/n are all distinct and are greater than or equal to 0 but less than 2π. The fact that these are all of the roots of the equation z^n = 1 follows from Corollary 17.6, which states that a polynomial of degree n can have at most n roots. We will leave the proof that the nth roots of unity form a cyclic subgroup of T as an exercise.

A generator for the group of the nth roots of unity is called a primitive nth root of unity.

Example 11. The 8th roots of unity can be represented as eight equally spaced points on the unit circle (Figure 4.4). The primitive 8th roots of unity are

ω = √2/2 + (√2/2)i
ω³ = −√2/2 + (√2/2)i
ω⁵ = −√2/2 − (√2/2)i
ω⁷ = √2/2 − (√2/2)i.

Figure 4.4. 8th roots of unity

4.3 The Method of Repeated Squares¹

Computing large powers can be very time-consuming. Just as anyone can compute 2^2 or 2^8, everyone knows how to compute 2^(2^1000000). However, such numbers are so large that we do not want to attempt the calculations
; moreover, past a certain point the computations would not be feasible even if we had every computer in the world at our disposal. Even writing down the decimal representation of a very large number may not be reasonable. It could be thousands or even millions of digits long. However, if we could compute something like 2^37398332 (mod 46389), we could very easily write the result down since it would be a number between 0 and 46,388.

¹The results in this section are needed only in Chapter 7.

If we want to compute powers modulo n quickly and efficiently, we will have to be clever. The first thing to notice is that any number a can be written as the sum of distinct powers of 2; that is, we can write

a = 2^k1 + 2^k2 + · · · + 2^kn,

where k1 < k2 < · · · < kn. This is just the binary representation of a. For example, the binary representation of 57 is 111001, since we can write 57 = 2^0 + 2^3 + 2^4 + 2^5. The laws of exponents still work in Zn; that is, if b ≡ a^x (mod n) and c ≡ a^y (mod n), then bc ≡ a^(x+y) (mod n). We can compute a^(2^k) (mod n) in k multiplications by computing

a^(2^0) (mod n)
a^(2^1) (mod n)
...
a^(2^k) (mod n).

Each step involves squaring the answer obtained in the previous step, dividing by n, and taking the remainder.

Example 12. We will compute 271^321 (mod 481). Notice that 321 = 2^0 + 2^6 + 2^8; hence, computing 271^321 (mod 481) is the same as computing

271^(2^0 + 2^6 + 2^8) ≡ 271^(2^0) · 271^(2^6) · 271^(2^8) (mod 481).

So it will suffice to compute 271^(2^i) (mod 481) where i = 0, 6, 8. It is very easy to see that 271^(2^1) ≡ 73,441 (mod 481) ≡ 329 (mod 481). We can square this result to obtain a value for 271^(2^2) (mod 481):

271^(2^2) ≡ (271^(2^1))² (mod 481) ≡ (329)² (mod 481) ≡ 108,241 (mod 481) ≡ 16 (mod 481).

We are using the fact that (a^(2^n))² ≡ a^(2·2^n) ≡ a^(2^(n+1)) (mod n). Continuing, we can calculate 271^(2^6) ≡ 419 (mod 481) and 271^(2^8) ≡ 16 (mod 481). Therefore,

271^321 ≡ 271^(2^0 + 2^6 + 2^8) (mod 481)
≡ 271^(2^0) · 271^(2^6) · 271^(2^8) (mod 481)
≡ 271 · 419 · 16 (mod 481)
≡ 1,816,784 (mod 481)
≡ 47 (mod 481).

The method of repeated squares will prove to be a very useful tool when we explore RSA cryptography in Chapter 7. To encode and decode messages in a reasonable manner under this scheme, it is necessary to be able to quickly compute large powers of integers mod n.

Exercises

1. Prove or disprove each of the following statements.
(a) U(8) is cyclic.
(b) All of the generators of Z60 are prime.
(c) Q is cyclic.
(d) If every proper subgroup of a group G is cyclic, then G is a cyclic group.
(e) A group with a finite number of subgroups is finite.

2. Find the order of each of the following elements.
(a) 5 ∈ Z12
(b) √3 ∈ R
(c) √3 ∈ R∗
(d) −i ∈ C∗
(e) 72 in Z240
(f) 312 in Z471

3. List all of the elements in each of the following subgroups.
(a) The subgroup of Z generated by 7
(b) The subgroup of Z24 generated by 15
(c) All subgroups of Z12
(d) All subgroups of Z60
(e) All subgroups of Z13
(f) All subgroups of Z48
(g) The subgroup generated by 3 in U(20)
(h) The subgroup generated by 5 in U(18)
(i) The subgroup of R∗ generated by 7
(j) The subgroup of C∗ generated by i where i² = −1
(k) The subgroup of C∗
generated by 2i
(l) The subgroup of C∗ generated by (1 + i)/√2
(m) The subgroup of C∗ generated by (1 + √3 i)/2

4. Find the subgroups of GL2(R) generated by each of the following matrices, each written row by row with the rows separated by a slash.
(a) ( 0 1 / −1 0 )
(b) ( 0 1/3 / 3 0 )
(c) ( 1 −1 / 0 1 )
(d) ( 1 −1 / 0 −1 )
(e) ( 1 −1 / −1 0 )
(f) ( √3/2 1/2 / −1/2 √3/2 )

5. Find the order of every element in Z18.
6. Find the order of every element in the symmetry group of the square, D4.
7. What are all of the cyclic subgroups of the quaternion group, Q8?
8. List all of the cyclic subgroups of U(30).
9. List every generator of each subgroup of order 8 in Z32.
10. Find all elements of finite order in each of the following groups. Here the "∗" indicates the set with zero removed.
(a) Z
(b) Q∗
(c) R∗
11. If a^24 = e in a group G, what are the possible orders of a?
12. Find a cyclic group with exactly one generator. Can you find cyclic groups with exactly two generators? Four generators? How about n generators?
13. For n ≤ 20, which groups U(n) are cyclic? Make a conjecture as to what is true in general. Can you prove your conjecture?
14. Let A = ( 0 −1 / 1 0 ) and B = ( 0 −1 / 1 −1 ) be elements in GL2(R). Show that A and B have finite orders but AB does not.
15. Evaluate each of the following.
(a) (3 − 2i) + (5i − 6)
(b) (4 − 5i) − (4i − 4)
(c) (5 − 4i)(7 + 2i)
(d) (9 − i)(9 − i)
(e) i^45
(f) (1 + i) + (1 + i)
16. Convert the following complex numbers to the form a + bi.
(a) 2 cis(π
/6)
(b) 5 cis(9π/4)
(c) 3 cis(π)
(d) cis(7π/4)/2
17. Change the following complex numbers to polar representation.
(a) 1 − i
(b) −5
(c) 2 + 2i
(d) √3 + i
(e) −3i
(f) 2i + 2√3
18. Calculate each of the following expressions.
(a) (1 + i)^−1
(b) (1 − i)^6
(c) (√3 + i)^5
(d) (−i)^10
(e) ((1 − i)/2)^4
(f) (−√2 − √2 i)^12
(g) (−2 + 2i)^−5
19. Prove each of the following statements.
(a) |z| = |z̄|
(b) zz̄ = |z|²
(c) z^−1 = z̄/|z|²
(d) |z + w| ≤ |z| + |w|
(e) |z − w| ≥ ||z| − |w||
(f) |zw| = |z||w|
20. List and graph the 6th roots of unity. What are the generators of this group? What are the primitive 6th roots of unity?
21. List and graph the 5th roots of unity. What are the generators of this group? What are the primitive 5th roots of unity?
22. Calculate each of the following.
(a) 292^3171 (mod 582)
(b) 2557^341 (mod 5681)
(c) 2071^9521 (mod 4724)
(d) 971^321 (mod 765)
23. Let a, b ∈ G. Prove the following statements.
(a) The order of a is the same as the order of a^−1.
(b) For all g ∈ G, |a| = |g^−1ag|.
(c) The order of ab is the same as the order of ba.
24. Let p and q be distinct primes. How many generators does Zpq have?
25. Let p be prime and r be a positive integer. How many generators does Zpr have?
26. Prove that Zp has no nontrivial subgroups if p is prime.
27. If g and
h have orders 15 and 16 respectively in a group G, what is the order of ⟨g⟩ ∩ ⟨h⟩?
28. Let a be an element in a group G. What is a generator for the subgroup ⟨a^m⟩ ∩ ⟨a^n⟩?
29. Prove that Zn has an even number of generators for n > 2.
30. Suppose that G is a group and let a, b ∈ G. Prove that if |a| = m and |b| = n with gcd(m, n) = 1, then ⟨a⟩ ∩ ⟨b⟩ = {e}.
31. Let G be an abelian group. Show that the elements of finite order in G form a subgroup. This subgroup is called the torsion subgroup of G.
32. Let G be a finite cyclic group of order n generated by x. Show that if y = x^k where gcd(k, n) = 1, then y must be a generator of G.
33. If G is an abelian group that contains a pair of cyclic subgroups of order 2, show that G must contain a subgroup of order 4. Does this subgroup have to be cyclic?
34. Let G be an abelian group of order pq where gcd(p, q) = 1. If G contains elements a and b of order p and q respectively, then show that G is cyclic.
35. Prove that the subgroups of Z are exactly nZ for n = 0, 1, 2, ....
36. Prove that the generators of Zn are the integers r such that 1 ≤ r < n and gcd(r, n) = 1.
37. Prove that if G has no proper nontrivial subgroups, then G is a cyclic group.
38. Prove that the order of an element in a cyclic group G must divide the order of the group.
39. For what integers n is −1 an nth root of unity?
40. If z = r(cos θ + i sin θ) and w = s(cos φ + i sin φ) are two nonzero complex numbers, show that zw = rs[cos(θ + φ) + i sin(θ + φ)].
41. Prove that the circle group is a subgroup of C∗.
42. Prove that the nth roots of unity form a cyclic subgroup of T of order n.
43. Prove that α^m = 1 and α^n = 1 if and only if α^d = 1 for d = gcd(m, n).
44. Let z ∈ C∗. If |z| ≠ 1, prove that the order of z is infinite.
45. Let z = cos θ + i sin θ be in T where θ ∈ Q. Prove that the order of z is infinite.

Programming Exercises

1. Write a computer program that will write any decimal number as the sum of distinct powers of 2. What is the largest integer that your program will handle?
2. Write a computer program to calculate a^x (mod n) by the method of repeated squares. What are the largest values of n and x that your program will accept?

References and Suggested Readings

[1] Koblitz, N. A Course in Number Theory and Cryptography. 2nd ed. Springer, New York, 1994.
[2] Pomerance, C. "Cryptology and Computational Number Theory—An Introduction," in Cryptology and Computational Number Theory, Pomerance, C., ed. Proceedings of Symposia in Applied Mathematics, vol. 42, American Mathematical Society, Providence, RI, 1990. This book gives an excellent account of how the method of repeated squares is used in cryptography.

Sage

Sage support for cyclic groups is a little spotty — but this situation could change soon.

5 Permutation Groups

Permutation groups are central to the study of geometric symmetries and to Galois theory, the study of finding solutions of polynomial equations. They also provide abundant examples of nonabelian groups. Let us recall for a moment the symmetries of the equilateral triangle ABC from Chapter 3. The symmetries actually consist of permutations of the three vertices, where a permutation of the set S = {A, B, C} is a one-to-one and onto map π : S → S. The three vertices have the following six permutations:

( A B C / A B C )   ( A B C / C A B )   ( A B C / B C A )
( A B C / A C B )   ( A B C / C B A )   ( A B C / B A C )

We have used the array ( A B C / B C A ) to denote the permutation that sends A to B, B to C, and C to A. That is, A ↦ B, B ↦ C, and C ↦ A. The symmetries of a triangle form a group. In this chapter we will
study groups of this type.

5.1 Definitions and Notation

In general, the permutations of a set X form a group SX. If X is a finite set, we can assume X = {1, 2, ..., n}. In this case we write Sn instead of SX. The following theorem says that Sn is a group. We call this group the symmetric group on n letters.

Theorem 5.1 The symmetric group on n letters, Sn, is a group with n! elements, where the binary operation is the composition of maps.

Proof. The identity of Sn is just the identity map that sends 1 to 1, 2 to 2, ..., n to n. If f is a permutation in Sn, then f^−1 exists, since f is one-to-one and onto; hence, every permutation has an inverse. Composition of maps is associative, which makes the group operation associative. We leave the proof that |Sn| = n! as an exercise.

A subgroup of Sn is called a permutation group.

Example 1. Consider the subgroup G of S5 consisting of the identity permutation id and the permutations σ, τ, and µ. The following table tells us how to multiply elements in the permutation group G.

◦    id   σ    τ    µ
id   id   σ    τ    µ
σ    σ    id   µ    τ
τ    τ    µ    id   σ
µ    µ    τ    σ    id

Remark. Though it is natural to multiply elements in a group from left to right, functions are composed from right to left. Let σ and τ be permutations on a set X. To compose σ and τ as functions, we calculate (σ ◦ τ)(x) = σ(τ(x)). That is, we do τ first, then σ. There are several ways to approach this inconsistency. We will adopt the convention of multiplying permutations right to left. To compute στ, do τ first and then σ. That is, by στ(x) we mean σ(τ(x)). (Another way of solving this problem would be to write functions on the right; that is, instead of writing σ(x), we could write (x)σ. We could also multiply
permutations left to right to agree with the usual way of multiplying elements in a group. Certainly all of these methods have been used.

Example 2. Permutation multiplication is not usually commutative; for two permutations σ and τ, the products στ and τσ need not be equal.

Cycle Notation

The notation that we have used to represent permutations up to this point is cumbersome, to say the least. To work effectively with permutation groups, we need a more streamlined method of writing down and manipulating permutations. A permutation σ ∈ SX is a cycle of length k if there exist elements a1, a2, ..., ak ∈ X such that

σ(a1) = a2
σ(a2) = a3
...
σ(ak) = a1

and σ(x) = x for all other elements x ∈ X. We will write (a1, a2, ..., ak) to denote the cycle σ. Cycles are the building blocks of all permutations.

Example 3. The permutation (162354) is a cycle of length 6, whereas (243) is a cycle of length 3. Not every permutation is a cycle. Consider the permutation (1243)(56). This permutation actually contains a cycle of length 2 and a cycle of length 4.

Example 4. It is very easy to compute products of cycles. Suppose that σ = (1352) and τ = (256). If we think of σ as 1 → 3, 3 → 5, 5 → 2, 2 → 1, and τ as 2 → 5, 5 → 6, 6 → 2, then for στ, remembering that we apply τ first and then σ, it must be the case that 1 → 3, 3 → 5, 5 → 6, and 6 → 2 → 1, or στ = (1356). If µ = (1634), then σµ = (1652)(34).

Two cycles in SX, σ = (a1, a2, ..., ak) and τ = (b1, b2, ..., bl), are disjoint if ai ≠ bj for all i and j.

Example 5. The cycles (135) and (27) are disjoint; however, the cycles (135) and (347) are not. Calculating
their products, we find that (135)(27) = (135)(27) (135)(347) = (13475). 80 CHAPTER 5 PERMUTATION GROUPS The product of two cycles that are not disjoint may reduce to something less complicated; the product of disjoint cycles cannot be simplified. Proposition 5.2 Let σ and τ be two disjoint cycles in SX. Then στ = τ σ. Proof. Let σ = (a1, a2,..., ak) and τ = (b1, b2,..., bl). We must show that στ (x) = τ σ(x) for all x ∈ X. If x is neither in {a1, a2,..., ak} nor {b1, b2,..., bl}, then both σ and τ fix x. That is, σ(x) = x and τ (x) = x. Hence, στ (x) = σ(τ (x)) = σ(x) = x = τ (x) = τ (σ(x)) = τ σ(x). Do not forget that we are multiplying permutations right to left, which is the opposite of the order in which we usually multiply group elements. Now suppose that x ∈ {a1, a2,..., ak}. Then σ(ai) = a(i mod k)+1; that is, a1 → a2 a2 → a3... ak−1 → ak ak → a1. However, τ (ai) = ai since σ and τ are disjoint. Therefore, στ (ai) = σ(τ (ai)) = σ(ai) = a(i mod k)+1 = τ (a(i mod k)+1) = τ (σ(ai)) = τ σ(ai). Similarly, if x ∈ {b1, b2,..., bl}, then σ and τ also commute. Theorem 5.3 Every permutation in Sn can be written as the product of disjoint cycles. Proof. We can assume that X = {1, 2,..., n}. Let σ ∈ Sn, and define X1 to be
{σ(1), σ²(1), ...}. The set X1 is finite since X is finite. Now let i be the first integer in X that is not in X1 and define X2 by {σ(i), σ²(i), ...}. Again, X2 is a finite set. Continuing in this manner, we can define finite disjoint sets X3, X4, .... Since X is a finite set, we are guaranteed that this process will end and there will be only a finite number of these sets, say r. If σi is the cycle defined by

σi(x) = σ(x) if x ∈ Xi,
σi(x) = x if x ∉ Xi,

then σ = σ1σ2 · · · σr. Since the sets X1, X2, ..., Xr are disjoint, the cycles σ1, σ2, ..., σr must also be disjoint.

Example 6. Let σ and τ be the permutations of {1, 2, 3, 4, 5, 6} given in cycle notation by σ = (1624) and τ = (13)(456). Then στ = (136)(245) and τσ = (143)(256).

Remark. From this point forward we will find it convenient to use cycle notation to represent permutations. When using cycle notation, we often denote the identity permutation by (1).

Transpositions

The simplest permutation is a cycle of length 2. Such cycles are called transpositions. Since

(a1, a2, ..., an) = (a1an)(a1an−1) · · · (a1a3)(a1a2),

any cycle can be written as the product of transpositions, leading to the following proposition.

Proposition 5.4 Any permutation of a finite set containing at least two elements can be written as the product of transpositions.

Example 7. Consider the permutation (16)(253) = (16)(23)(25) = (16)(45)(23)(45)(25). As we can see, there is
no unique way to represent a permutation as the product of transpositions. For instance, we can write the identity permutation as (12)(12), as (13)(24)(13)(24), and in many other ways. However, as it turns out, no permutation can be written as the product of both an even number of transpositions and an odd number of transpositions. For instance, we could represent the permutation (16) by (23)(16)(23) or by (35)(16)(13)(16)(13)(35)(56), but (16) will always be the product of an odd number of transpositions.

Lemma 5.5 If the identity is written as the product of r transpositions, id = τ1τ2 · · · τr, then r is an even number.

Proof. We will employ induction on r. A transposition cannot be the identity; hence, r > 1. If r = 2, then we are done. Suppose that r > 2. In this case the product of the last two transpositions, τr−1τr, must be one of the following cases:

(ab)(ab) = id
(bc)(ab) = (ac)(bc)
(cd)(ab) = (ab)(cd)
(ac)(ab) = (ab)(bc),

where a, b, c, and d are distinct. The first equation simply says that a transposition is its own inverse. If this case occurs, delete τr−1τr from the product to obtain id = τ1τ2 · · · τr−3τr−2. By induction r − 2 is even; hence, r must be even. In each of the other three cases, we can replace τr−1τr with the right-hand side of the corresponding equation to obtain a new product of r transpositions for the identity. In this new product the last occurrence of a will be in the next-to-the-last transposition. We can continue this process with τr−2τr−1 to obtain either a product of r − 2 transpositions or a new product of r transpositions where the last occurrence of a is in τr−2. If the identity is the product of r − 2 transpositions, then again we are done, by our induction hypothesis; otherwise,
we will repeat the procedure with τr−3τr−2. At some point either we will have two adjacent, identical transpositions canceling each other out or a will be shuffled so that it will appear only in the first transposition. However, the latter case cannot occur, because the identity would not fix a in this instance. Therefore, the identity permutation must be the product of r − 2 transpositions and, again by our induction hypothesis, we are done.

Theorem 5.6 If a permutation σ can be expressed as the product of an even number of transpositions, then any other product of transpositions equaling σ must also contain an even number of transpositions. Similarly, if σ can be expressed as the product of an odd number of transpositions, then any other product of transpositions equaling σ must also contain an odd number of transpositions.

Proof. Suppose that σ = σ1σ2 · · · σm = τ1τ2 · · · τn, where m is even. We must show that n is also an even number. The inverse of σ is σ^−1 = σm · · · σ1. Since id = σσm · · · σ1 = τ1 · · · τnσm · · · σ1, n must be even by Lemma 5.5. The proof for the case in which σ can be expressed as an odd number of transpositions is left as an exercise.

In light of Theorem 5.6, we define a permutation to be even if it can be expressed as an even number of transpositions and odd if it can be expressed as an odd number of transpositions.

The Alternating Groups

One of the most important subgroups of Sn is the set of all even permutations, An. The group An is called the alternating group on n letters.

Theorem 5.7 The set An is a subgroup of Sn.

Proof. Since the product of two even permutations must also be an even permutation, An is closed. The identity is an even permutation and therefore is in An. If σ is an even permutation, then σ = σ1σ2 · · · σr, where σi is a
transposition and r is even. Since the inverse of any transposition is itself, σ−1 = σrσr−1 · · · σ1 is also in An. Proposition 5.8 The number of even permutations in Sn, n ≥ 2, is equal to the number of odd permutations; hence, the order of An is n!/2. Proof. Let An be the set of even permutations in Sn and Bn be the set of odd permutations. If we can show that there is a bijection between these sets, they must contain the same number of elements. Fix a transposition σ in Sn. Since n ≥ 2, such a σ exists. Define by λσ : An → Bn λσ(τ ) = στ. Suppose that λσ(τ ) = λσ(µ). Then στ = σµ and so τ = σ−1στ = σ−1σµ = µ. Therefore, λσ is one-to-one. We will leave the proof that λσ is surjective to the reader. Example 8. The group A4 is the subgroup of S4 consisting of even permutations. There are twelve elements in A4: (1) (123) (134) (12)(34) (132) (143) (13)(24) (124) (234) (14)(23) (142) (243). One of the end-of-chapter exercises will be to write down all the subgroups of A4. You will find that there is no subgroup of order 6. Does this surprise you? 5.2 DIHEDRAL GROUPS 85 Historical Note Lagrange first thought of permutations as functions from a set to itself, but it was Cauchy who developed the basic theorems and notation for permutations. He was the first to use cycle notation. Augustin-Louis Cauchy (1789–1857) was born in Paris at the height of the French Revolution. His family soon left Paris for the village of Arcueil to escape the Reign of Terror. One of the family’s neighbors there was Pierre-Simon Laplace (1749–1827), who encouraged him to seek a career in mathematics. Cauchy began his career as a mathematician by solving a
problem in geometry given to him by Lagrange. Over 800 papers were written by Cauchy on such diverse topics as differential equations, finite groups, applied mathematics, and complex analysis. He was one of the mathematicians responsible for making calculus rigorous. Perhaps more theorems and concepts in mathematics have the name Cauchy attached to them than that of any other mathematician.

Figure 5.1. A regular n-gon

5.2 Dihedral Groups

Another special type of permutation group is the dihedral group. Recall the symmetry group of an equilateral triangle in Chapter 3. Such groups consist of the rigid motions of a regular n-sided polygon or n-gon. For n = 3, 4, ..., we define the nth dihedral group to be the group of rigid motions of a regular n-gon. We will denote this group by Dn. We can number the vertices of a regular n-gon by 1, 2, ..., n (Figure 5.1). Notice that there are exactly n choices to replace the first vertex. If we replace the first vertex by k, then the second vertex must be replaced either by vertex k + 1 or by vertex k − 1; hence, there are 2n possible rigid motions of the n-gon. We summarize these results in the following theorem.

Theorem 5.9 The dihedral group, Dn, is a subgroup of Sn of order 2n.

Figure 5.2. Rotations and reflections of a regular n-gon

Theorem 5.10 The group Dn, n ≥ 3, consists of all products of the two elements r and s, satisfying the relations

r^n = id
s² = id
srs = r^−1.

Proof. The possible motions of a regular n-gon are either reflections or rotations (Figure 5.2). There are exactly n possible rotations: those through the angles

id, 360◦/n, 2 · 360◦/n, ..., (n − 1) · 360◦/n.

We will denote the rotation through 360◦/n by r. The rotation r generates all of the other rotations. That is, r^k is the rotation through k · 360◦/n.
Label the n reflections s1, s2, ..., sn, where sk is the reflection that leaves vertex k fixed. There are two cases of reflection, depending on whether n is even or odd (Figure 5.3). If there are an even number of vertices, then 2 vertices are left fixed by a reflection. If there are an odd number of vertices, then only a single vertex is left fixed by a reflection.

Figure 5.3. Types of reflections of a regular n-gon

In either case, the order of sk is two. Let s = s1. Then s² = id and r^n = id. Since any rigid motion t of the n-gon replaces the first vertex by the vertex k, the second vertex must be replaced by either k + 1 or by k − 1. If the second vertex is replaced by k + 1, then t = r^(k−1). If it is replaced by k − 1, then t = r^(k−1)s. Hence, r and s generate Dn; that is, Dn consists of all finite products of r and s. We will leave the proof that srs = r^−1 as an exercise.

Figure 5.4. The group D4

Example 9. The group of rigid motions of a square, D4, consists of eight elements. With the vertices numbered 1, 2, 3, 4 (Figure 5.4), the rotations are

r = (1234)
r² = (13)(24)
r³ = (1432)
r⁴ = id

and the reflections are

s1 = (24)
s2 = (13).

The order of D4 is 8. The remaining two elements are

rs1 = (12)(34)
r³s1 = (14)(23).

Figure 5.5. The motion group of a cube

The Motion Group of a Cube

We can investigate the groups of rigid motions of geometric objects other than a regular n-sided polygon to obtain interesting examples of permutation groups. Let us consider the group of rigid motions of a cube
. One of the first questions that we can ask about this group is "what is its order?" A cube has 6 sides. If a particular side is facing upward, then there are four possible rotations of the cube that will preserve the upward-facing side. Hence, the order of the group is 6 · 4 = 24. We have just proved the following proposition.

Proposition 5.11 The group of rigid motions of a cube contains 24 elements.

Theorem 5.12 The group of rigid motions of a cube is S4.

Figure 5.6. Transpositions in the motion group of a cube

Proof. From Proposition 5.11, we already know that the motion group of the cube has 24 elements, the same number of elements as there are in S4. There are exactly four diagonals in the cube. If we label these diagonals 1, 2, 3, and 4, we must show that the motion group of the cube will give us any permutation of the diagonals (Figure 5.5). If we can obtain all of these permutations, then S4 and the group of rigid motions of the cube must be the same. To obtain a transposition we can rotate the cube 180◦ about the axis joining the midpoints of opposite edges (Figure 5.6). There are six such axes, giving all transpositions in S4. Since every element in S4 is the product of a finite number of transpositions, the motion group of a cube must be S4.

Exercises

1. Write the following permutations in cycle notation.

2. Compute each of the following.
(a) (1345)(234)
(b) (12)(1253)
(c) (143)(23)(24)
(d) (1423)(34)(56)(1324)
(e) (1254)(13)(25)
(f) (1254)(13)(25)²
(g) (1254)^−1(123)(45)(1254)
(h) (1254)²(123)(45)
(i) (123)(45)(1254)^−2
(j) (1254)^100
(k) |(1254)|
(l) |(1254)²|
(m) (12)^−1
(n
) (12537)−1 (o) [(12)(34)(12)(47)]−1 (p) [(1235)(467)]−1 3. Express the following permutations as products of transpositions and identify them as even or odd. (a) (14356) (b) (156)(234) (c) (1426)(142) 4. Find (a1, a2,..., an)−1. (d) (17254)(1423)(154632) (e) (142637) 5. List all of the subgroups of S4. Find each of the following sets. (a) {σ ∈ S4 : σ(1) = 3} (b) {σ ∈ S4 : σ(2) = 2} (c) {σ ∈ S4 : σ(1) = 3 and σ(2) = 2} Are any of these sets subgroups of S4? 6. Find all of the subgroups in A4. What is the order of each subgroup? 7. Find all possible orders of elements in S7 and A7. 8. Show that A10 contains an element of order 15. 9. Does A8 contain an element of order 26? 10. Find an element of largest order in Sn for n = 3,..., 10. EXERCISES 91 11. What are the possible cycle structures of elements of A5? What about A6? 12. Let σ ∈ Sn have order n. Show that for all integers i and j, σi = σj if and only if i ≡ j (mod n). 13. Let σ = σ1 · · · σm ∈ Sn be the product of disjoint cycles. Prove that the order of σ is the least common multiple of the lengths of the cycles σ1,..., σm. 14. Using cycle notation, list the elements in D5. What are r and s? Write every element as a product of r and s. 15. If the diagonals of a cube are labeled as Figure 5.5, to which motion of the cube does the permutation (12)(34) correspond? What about the other permutations of the diagonals? 16. Find the group of rigid motions of a tetrahedron. Show that this is the same
group as A4. 17. Prove that Sn is nonabelian for n ≥ 3. 18. Show that An is nonabelian for n ≥ 4. 19. Prove that Dn is nonabelian for n ≥ 3. 20. Let σ ∈ Sn. Prove that σ can be written as the product of at most n − 1 transpositions. 21. Let σ ∈ Sn. If σ is not a cycle, prove that σ can be written as the product of at most n − 2 transpositions. 22. If σ can be expressed as an odd number of transpositions, show that any other product of transpositions equaling σ must also be odd. 23. If σ is a cycle of odd length, prove that σ2 is also a cycle. 24. Show that a 3-cycle is an even permutation. 25. Prove that in An with n ≥ 3, any permutation is a product of cycles of length 3. 26. Prove that any element in Sn can be written as a finite product of the following permutations. (a) (12), (13),..., (1n) (b) (12), (23),..., (n − 1, n) (c) (12), (12... n) 27. Let G be a group and define a map λg : G → G by λg(a) = ga. Prove that λg is a permutation of G. 28. Prove that there exist n! permutations of a set containing n elements. 92 CHAPTER 5 PERMUTATION GROUPS 29. Recall that the center of a group G is Z(G) = {g ∈ G : gx = xg for all x ∈ G}. Find the center of D8. What about the center of D10? What is the center of Dn? 30. Let τ = (a1, a2,..., ak) be a cycle of length k. (a) Prove that if σ is any permutation, then στ σ−1 = (σ(a1), σ(a2),..., σ(ak)) is a cycle of length k. (b) Let µ be a cycle of length k. Prove that there
is a permutation σ such that στσ^−1 = µ.
31. For α and β in Sn, define α ∼ β if there exists a σ ∈ Sn such that σασ^−1 = β. Show that ∼ is an equivalence relation on Sn.
32. Let σ ∈ SX. If σ^n(x) = y, we will say that x ∼ y.
(a) Show that ∼ is an equivalence relation on X.
(b) If σ ∈ An and τ ∈ Sn, show that τ^−1στ ∈ An.
(c) Define the orbit of x ∈ X under σ ∈ SX to be the set Ox,σ = {y : x ∼ y}. Compute the orbits of α, β, γ where α = (1254), β = (123)(45), γ = (13)(25).
(d) If Ox,σ ∩ Oy,σ ≠ ∅, prove that Ox,σ = Oy,σ. The orbits under a permutation σ are the equivalence classes corresponding to the equivalence relation ∼.
(e) A subgroup H of SX is transitive if for every x, y ∈ X, there exists a σ ∈ H such that σ(x) = y. Prove that ⟨σ⟩ is transitive if and only if Ox,σ = X for some x ∈ X.
33. Let α ∈ Sn for n ≥ 3. If αβ = βα for all β ∈ Sn, prove that α must be the identity permutation; hence, the center of Sn is the trivial subgroup.
34. If α is even, prove that α^−1 is also even. Does a corresponding result hold if α is odd?
35. Show that α^−1β^−1αβ is even for α, β ∈ Sn.
36. Let r and s be the elements in Dn described in Theorem 5.10.
(a) Show that srs = r^−1.
(b) Show that r^k s = sr^−k in Dn.
(c) Prove that the order of r^k ∈ Dn is n/gcd(k, n).

Sage

A permutation group is a very concrete representation of a group, and Sage support for permutation groups is very good — making
Sage a natural place for beginners to learn about group theory. 6 Cosets and Lagrange’s Theorem Lagrange’s Theorem, one of the most important results in finite group theory, states that the order of a subgroup must divide the order of the group. This theorem provides a powerful tool for analyzing finite groups; it gives us an idea of exactly what type of subgroups we might expect a finite group to possess. Central to understanding Lagrange’s Theorem is the notion of a coset. 6.1 Cosets Let G be a group and H a subgroup of G. Define a left coset of H with representative g ∈ G to be the set gH = {gh : h ∈ H}. Right cosets can be defined similarly by Hg = {hg : h ∈ H}. If left and right cosets coincide or if it is clear from the context to which type of coset we are referring, we will use the word coset without specifying left or right. Example 1. Let H be the subgroup of Z6 consisting of the elements 0 and 3. The cosets are 0 + H = 3 + H = {0, 3}, 1 + H = 4 + H = {1, 4}, and 2 + H = 5 + H = {2, 5}. We will always write the cosets of subgroups of Z and Zn with the additive notation we have used for cosets here. In a commutative group, left and right cosets are always identical. Example 2. Let H be the subgroup of S3 defined by the permutations {(1), (123), (132)}. The left cosets of H are (1)H = (123)H = (132)H = {(1), (123), (132)} and (12)H = (13)H = (23)H = {(12), (13), (23)}. The right cosets of H are exactly the same as the left cosets: H(1) = H(123) = H(132) = {(1), (123), (132)} and H(12) = H(13) = H(23) = {(12), (13), (23)}. It is not always the case that a left coset is the same as a right coset. Let K be the subgroup of S
3 defined by the permutations {(1), (12)}. Then the left cosets of K are (1)K = (12)K = {(1), (12)}, (13)K = (123)K = {(13), (123)}, and (23)K = (132)K = {(23), (132)}; however, the right cosets of K are K(1) = K(12) = {(1), (12)}, K(13) = K(132) = {(13), (132)}, and K(23) = K(123) = {(23), (123)}. The following lemma is quite useful when dealing with cosets. (We leave its proof as an exercise.) Lemma 6.1 Let H be a subgroup of a group G and suppose that g1, g2 ∈ G. The following conditions are equivalent. 1. g1H = g2H; 2. Hg1−1 = Hg2−1; 3. g1H ⊆ g2H; 4. g2 ∈ g1H; 5. g1−1g2 ∈ H. In all of our examples the cosets of a subgroup H partition the larger group G. The following theorem proclaims that this will always be the case. Theorem 6.2 Let H be a subgroup of a group G. Then the left cosets of H in G partition G. That is, the group G is the disjoint union of the left cosets of H in G. Proof. Let g1H and g2H be two cosets of H in G. We must show that either g1H ∩ g2H = ∅ or g1H = g2H. Suppose that g1H ∩ g2H ≠ ∅ and a ∈ g1H ∩ g2H. Then by the definition of a left coset, a = g1h1 = g2h2 for some elements h1 and h2 in H. Hence, g1 = g2h2h1−1 or g1 ∈ g2H. By Lemma 6.1, g1H = g2H. Remark. There is nothing special in this theorem about left cosets.
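The left and right coset computations in the examples above are easy to check by machine. The following short script is a sketch in plain Python rather than Sage (the representation of S3 as tuples of images is our own convention); it verifies that the left cosets of K = {(1), (12)} partition S3 and that they differ from the right cosets:

```python
from itertools import permutations

# Elements of S3 as tuples (sigma(1), sigma(2), sigma(3)).
S3 = list(permutations((1, 2, 3)))

def compose(s, t):
    # (s ∘ t)(i) = s(t(i)); tuples are 0-indexed, points are 1, 2, 3.
    return tuple(s[t[i] - 1] for i in range(3))

K = [(1, 2, 3), (2, 1, 3)]  # the subgroup {(1), (12)}

left = {g: frozenset(compose(g, h) for h in K) for g in S3}   # gK
right = {g: frozenset(compose(h, g) for h in K) for g in S3}  # Kg

# The distinct left cosets cover all of S3 ...
assert set().union(*left.values()) == set(S3)
print(len(set(left.values())))               # 3 distinct left cosets
# ... but for K, left and right cosets do not coincide.
print(any(left[g] != right[g] for g in S3))  # True
```

Replacing K by H = {(1), (123), (132)}, represented as tuples, shows in the same way that for H the left and right cosets agree, as in Example 2.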
Right cosets also partition G; the proof of this fact is exactly the same as the proof for left cosets except that all group multiplications are done on the opposite side of H. Let G be a group and H be a subgroup of G. Define the index of H in G to be the number of left cosets of H in G. We will denote the index by [G : H]. Example 3. Let G = Z6 and H = {0, 3}. Then [G : H] = 3. Example 4. Suppose that G = S3, H = {(1), (123), (132)}, and K = {(1), (12)}. Then [G : H] = 2 and [G : K] = 3. Theorem 6.3 Let H be a subgroup of a group G. The number of left cosets of H in G is the same as the number of right cosets of H in G. Proof. Let LH and RH denote the set of left and right cosets of H in G, respectively. If we can define a bijective map φ : LH → RH, then the theorem will be proved. If gH ∈ LH, let φ(gH) = Hg−1. By Lemma 6.1, the map φ is well-defined; that is, if g1H = g2H, then Hg1−1 = Hg2−1. To show that φ is one-to-one, suppose that Hg1−1 = φ(g1H) = φ(g2H) = Hg2−1. Again by Lemma 6.1, g1H = g2H. The map φ is onto since φ(g−1H) = Hg. 6.2 Lagrange’s Theorem Proposition 6.4 Let H be a subgroup of G with g ∈ G and define a map φ : H → gH by φ(h) = gh. The map φ is bijective; hence, the number of elements in H is the same as the number of elements in gH. Proof. We first show that the map φ is one-to-one. Suppose
that φ(h1) = φ(h2) for elements h1, h2 ∈ H. We must show that h1 = h2, but φ(h1) = gh1 and φ(h2) = gh2. So gh1 = gh2, and by left cancellation h1 = h2. To show that φ is onto is easy. By definition every element of gH is of the form gh for some h ∈ H and φ(h) = gh. Theorem 6.5 (Lagrange) Let G be a finite group and let H be a subgroup of G. Then |G|/|H| = [G : H] is the number of distinct left cosets of H in G. In particular, the number of elements in H must divide the number of elements in G. Proof. The group G is partitioned into [G : H] distinct left cosets. Each left coset has |H| elements; therefore, |G| = [G : H]|H|. Corollary 6.6 Suppose that G is a finite group and g ∈ G. Then the order of g must divide the number of elements in G. Corollary 6.7 Let |G| = p with p a prime number. Then G is cyclic and any g ∈ G such that g ≠ e is a generator. Proof. Let g be in G such that g ≠ e. Then by Corollary 6.6, the order of g must divide the order of the group. Since |g| > 1, it must be p. Hence, g generates G. Corollary 6.7 suggests that groups of prime order p must somehow look like Zp. Corollary 6.8 Let H and K be subgroups of a finite group G such that G ⊃ H ⊃ K. Then [G : K] = [G : H][H : K]. Proof. Observe that [G : K] = |G|/|K| = (|G|/|H|) · (|H|/|K|) = [G : H][H : K]. The converse of Lagrange’s Theorem is false. The group A4 has order 12
; however, it can be shown that it does not possess a subgroup of order 6. According to Lagrange’s Theorem, subgroups of a group of order 12 can have orders of 1, 2, 3, 4, 6, or 12. However, we are not guaranteed that subgroups of every possible order exist. To prove that A4 has no subgroup of order 6, we will assume that it does have such a subgroup H and show that a contradiction must occur. Since A4 contains eight 3-cycles, we know that H must contain a 3-cycle. We will show that if H contains one 3-cycle, then it must contain more than 6 elements. Proposition 6.9 The group A4 has no subgroup of order 6. Proof. Since [A4 : H] = 2, there are only two cosets of H in A4. Inasmuch as one of the cosets is H itself, right and left cosets must coincide; therefore, gH = Hg or gHg−1 = H for every g ∈ A4. Since there are eight 3-cycles in A4, at least one 3-cycle must be in H. Without loss of generality, assume that (123) is in H. Then (123)−1 = (132) must also be in H. Since ghg−1 ∈ H for all g ∈ A4 and all h ∈ H and (124)(123)(124)−1 = (124)(123)(142) = (243) (243)(123)(243)−1 = (243)(123)(234) = (142) we can conclude that H must have at least seven elements (1), (123), (132), (243), (243)−1 = (234), (142), (142)−1 = (124). Therefore, A4 has no subgroup of order 6. In fact, we can say more about when two cycles have the same length. Theorem 6.10 Two cycles τ and µ in Sn have the same length if and only if there exists a σ ∈ Sn such that µ = στ σ−1. Proof. Suppose that τ = (a1, a2,..., ak) and µ = (b1, b2,..., bk). Define σ to be the permutation σ(a1) = b1
σ(a2) = b2... σ(ak) = bk. Then µ = στ σ−1. Conversely, suppose that τ = (a1, a2,..., ak) is a k-cycle and σ ∈ Sn. Let µ = στ σ−1. If σ(ai) = b and σ(a(i mod k)+1) = b′, then µ(b) = b′. Hence, µ = (σ(a1), σ(a2),..., σ(ak)). Since σ is one-to-one and onto, µ is a cycle of the same length as τ. 6.3 Fermat’s and Euler’s Theorems The Euler φ-function is the map φ : N → N defined by φ(n) = 1 for n = 1, and, for n > 1, φ(n) is the number of positive integers m with 1 ≤ m < n and gcd(m, n) = 1. From Proposition 3.1, we know that the order of U (n), the group of units in Zn, is φ(n). For example, |U (12)| = φ(12) = 4 since the numbers that are relatively prime to 12 are 1, 5, 7, and 11. For any prime p, φ(p) = p − 1. We state these results in the following theorem. Theorem 6.11 Let U (n) be the group of units in Zn. Then |U (n)| = φ(n). The following theorem is an important result in number theory, due to Leonhard Euler. Theorem 6.12 (Euler’s Theorem) Let a and n be integers such that n > 0 and gcd(a, n) = 1. Then aφ(n) ≡ 1 (mod n). Proof. By Theorem 6.11 the order of U (n) is φ(n). Consequently, aφ(n) = 1 for all a ∈ U (n); or aφ(n) − 1 is divisible by n. Therefore, aφ(n) ≡ 1 (mod n). If we consider the special case of Euler
’s Theorem in which n = p is prime and recall that φ(p) = p − 1, we obtain the following result, due to Pierre de Fermat. Theorem 6.13 (Fermat’s Little Theorem) Let p be any prime number and suppose that p ∤ a. Then ap−1 ≡ 1 (mod p). Furthermore, for any integer b, bp ≡ b (mod p). Historical Note Joseph-Louis Lagrange (1736–1813), born in Turin, Italy, was of French and Italian descent. His talent for mathematics became apparent at an early age. Leonhard Euler recognized Lagrange’s abilities when Lagrange, who was only 19, communicated to Euler some work that he had done in the calculus of variations. That year he was also named a professor at the Royal Artillery School in Turin. At the age of 23 he joined the Berlin Academy. Frederick the Great had written to Lagrange proclaiming that the “greatest king in Europe” should have the “greatest mathematician in Europe” at his court. For 20 years Lagrange held the position vacated by his mentor, Euler. His works include contributions to number theory, group theory, physics and mechanics, the calculus of variations, the theory of equations, and differential equations. Along with Laplace and Lavoisier, Lagrange was one of the people responsible for designing the metric system. During his life Lagrange profoundly influenced the development of mathematics, leaving much to the next generation of mathematicians in the form of examples and new problems to be solved. Exercises 1. Suppose that G is a finite group with an element g of order 5 and an element h of order 7. Why must |G| ≥ 35? 2. Suppose that G is a finite group with 60 elements. What are the orders of possible subgroups of G? 3. Prove or disprove: Every subgroup of the integers has finite index. 4. Prove or disprove: Every subgroup of the integers has finite order. 5. List the left and right cosets of the subgroups in each of the following. (a) ⟨8⟩ in Z24 (b) ⟨3⟩ in U (8) (c) 3Z in
Z (d) A4 in S4 (e) An in Sn (f) D4 in S4 (g) T in C∗ (h) H = {(1), (123), (132)} in S4 6. Describe the left cosets of SL2(R) in GL2(R). What is the index of SL2(R) in GL2(R)? 7. Verify Euler’s Theorem for n = 15 and a = 4. 8. Use Fermat’s Little Theorem to show that if p = 4n + 3 is prime, there is no solution to the equation x2 ≡ −1 (mod p). 9. Show that the integers have infinite index in the additive group of rational numbers. 10. Show that the additive group of real numbers has infinite index in the additive group of the complex numbers. 11. Let H be a subgroup of a group G and suppose that g1, g2 ∈ G. Prove that the following conditions are equivalent. (a) g1H = g2H (b) Hg1−1 = Hg2−1 (c) g1H ⊆ g2H (d) g2 ∈ g1H (e) g1−1g2 ∈ H 12. If ghg−1 ∈ H for all g ∈ G and h ∈ H, show that right cosets are identical to left cosets. 13. What fails in the proof of Theorem 6.3 if φ : LH → RH is defined by φ(gH) = Hg? 14. Suppose that gn = e. Show that the order of g divides n. 15. Modify the proof of Theorem 6.10 to show that any two permutations α, β ∈ Sn have the same cycle structure if and only if there exists a permutation γ such that β = γαγ−1. If β = γαγ−1 for some γ ∈ Sn, then α and β are conjugate. 16. If |G| = 2n, prove that the number of elements of order 2 is odd. Use this result to show that G must contain a subgroup of order 2. 17. Suppose that [G : H] = 2. If a and b are
not in H, show that ab ∈ H. 18. If [G : H] = 2, prove that gH = Hg. 19. Let H and K be subgroups of a group G. Prove that gH ∩ gK is a coset of H ∩ K in G. 20. Let H and K be subgroups of a group G. Define a relation ∼ on G by a ∼ b if there exists an h ∈ H and a k ∈ K such that hak = b. Show that this relation is an equivalence relation. The corresponding equivalence classes are called double cosets. Compute the double cosets of H = {(1), (123), (132)} in A4. 21. Let G be a cyclic group of order n. Show that there are exactly φ(n) generators for G. 22. Let n = p1e1 p2e2 · · · pkek be the factorization of n into distinct primes. Prove that φ(n) = n(1 − 1/p1)(1 − 1/p2) · · · (1 − 1/pk). 23. Show that n = ∑d|n φ(d) for all positive integers n. Sage Sage can create all the subgroups of a group, so long as the group is not too large. It can also create the cosets of a subgroup. 7 Introduction to Cryptography Cryptography is the study of sending and receiving secret messages. The aim of cryptography is to send messages across a channel so only the intended recipient of the message can read it. In addition, when a message is received, the recipient usually requires some assurance that the message is authentic; that is, that it has not been sent by someone who is trying to deceive the recipient. Modern cryptography is heavily dependent on abstract algebra and number theory. The message to be sent is called the plaintext message. The disguised message is called the ciphertext. The plaintext and the ciphertext are both written in an alphabet, consisting of letters or characters. Characters can include not only the familiar alphabetic characters A,..., Z and a,..., z but also digits, punctuation marks, and blanks. A cryptosystem, or cipher, has two parts: encryption, the process of transforming a plaintext message to a ciphertext
message, and decryption, the reverse transformation of changing a ciphertext message into a plaintext message. There are many different families of cryptosystems, each distinguished by a particular encryption algorithm. Cryptosystems in a specified cryptographic family are distinguished from one another by a parameter to the encryption function called a key. A classical cryptosystem has a single key, which must be kept secret, known only to the sender and the receiver of the message. If person A wishes to send secret messages to two different people B and C, and does not wish to have B understand C’s messages or vice versa, A must use two separate keys, so one cryptosystem is used for exchanging messages with B, and another is used for exchanging messages with C. Systems that use two separate keys, one for encoding and another for decoding, are called public key cryptosystems. Since knowledge of the encoding key does not allow anyone to guess at the decoding key, the encoding key can be made public. A public key cryptosystem allows A and B to send messages to C using the same encoding key. Anyone is capable of encoding a message to be sent to C, but only C knows how to decode such a message. 7.1 Private Key Cryptography In single or private key cryptosystems the same key is used for both encrypting and decrypting messages. To encrypt a plaintext message, we apply to the message some function which is kept secret, say f. This function will yield an encrypted message. Given the encrypted form of the message, we can recover the original message by applying the inverse transformation f −1. The transformation f must be relatively easy to compute, as must f −1; however, f must be extremely difficult to guess at if only examples of coded messages are available. Example 1. One of the first and most famous private key cryptosystems was the shift code used by Julius Caesar. 
We first digitize the alphabet by letting A = 00, B = 01,..., Z = 25. The encoding function will be f (p) = p + 3 mod 26; that is, A → D, B → E,..., Z → C. The decoding function is then f −1(p) = p − 3 mod 26 =
p + 23 mod 26. Suppose we receive the encoded message DOJHEUD. To decode this message, we first digitize it: 3, 14, 9, 7, 4, 20, 3. Next we apply the inverse transformation to get 0, 11, 6, 4, 1, 17, 0, or ALGEBRA. Notice here that there is nothing special about either of the numbers 3 or 26. We could have used a larger alphabet or a different shift. Cryptanalysis is concerned with deciphering a received or intercepted message. Methods from probability and statistics are great aids in deciphering an intercepted message; for example, the frequency analysis of the characters appearing in the intercepted message often makes its decryption possible. Example 2. Suppose we receive a message that we know was encrypted by using a shift transformation on single letters of the 26-letter alphabet. To find out exactly what the shift transformation was, we must compute b in the equation f (p) = p + b mod 26. We can do this using frequency analysis. The letter E = 04 is the most commonly occurring letter in the English language. Suppose that S = 18 is the most commonly occurring letter in the ciphertext. Then we have good reason to suspect that 18 = 4 + b mod 26, or b = 14. Therefore, the most likely encrypting function is f (p) = p + 14 mod 26. The corresponding decrypting function is f −1(p) = p + 12 mod 26. It is now easy to determine whether or not our guess is correct. Simple shift codes are examples of monoalphabetic cryptosystems. In these ciphers a character in the enciphered message represents exactly one character in the original message. Such cryptosystems are not very sophisticated and are quite easy to break. In fact, in a simple shift as described in Example 1, there are only 26 possible keys. It would be quite easy to try them all rather than to use frequency analysis. Let us investigate a slightly more sophisticated cryptosystem. Suppose that the encoding function is given by f (p) = ap + b mod 26. 
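A function of this form is easy to experiment with. The following sketch uses helper names of our own, takes a = 5 and b = 3 (a legitimate choice since gcd(5, 26) = 1), and relies on Python's pow(a, -1, 26) to supply the inverse of a modulo 26; letters are digitized as A = 00, ..., Z = 25:

```python
def affine(p, a=5, b=3):
    # Encrypt a single digitized letter: f(p) = ap + b mod 26.
    return (a * p + b) % 26

def affine_inv(c, a=5, b=3):
    # Decrypt via f^{-1}(c) = a^{-1}(c - b) mod 26; pow(a, -1, 26)
    # computes the inverse of a modulo 26 (requires gcd(a, 26) = 1).
    return (pow(a, -1, 26) * (c - b)) % 26

def encode(word):
    return "".join(chr(affine(ord(ch) - ord("A")) + ord("A")) for ch in word)

def decode(word):
    return "".join(chr(affine_inv(ord(ch) - ord("A")) + ord("A")) for ch in word)

print(encode("ALGEBRA"))  # DGHXIKD
print(decode("DGHXIKD"))  # ALGEBRA
```

Trying a = 4 instead raises an error in pow(4, -1, 26), since gcd(4, 26) = 2 means no inverse exists; this is exactly the invertibility question taken up next in the text.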
We first need to find out when a decoding function f −1 exists. Such a decoding function exists when we can solve the equation c = ap + b mod 26 for p. By Proposition 3.1, this is possible
exactly when a has an inverse or, equivalently, when gcd(a, 26) = 1. In this case f −1(p) = a−1p − a−1b mod 26. Such a cryptosystem is called an affine cryptosystem. Example 3. Let us consider the affine cryptosystem f (p) = ap + b mod 26. For this cryptosystem to work we must choose an a ∈ Z26 that is invertible. This is only possible if gcd(a, 26) = 1. Recognizing this fact, we will let a = 5 since gcd(5, 26) = 1. It is easy to see that a−1 = 21. Therefore, we can take our encryption function to be f (p) = 5p + 3 mod 26. Thus, ALGEBRA is encoded as 3, 6, 7, 23, 8, 10, 3, or DGHXIKD. The decryption function will be f −1(p) = 21p − 21 · 3 mod 26 = 21p + 15 mod 26. A cryptosystem would be more secure if a ciphertext letter could represent more than one plaintext letter. To give an example of this type of cryptosystem, called a polyalphabetic cryptosystem, we will generalize affine codes by using matrices. The idea works roughly the same as before; however, instead of encrypting one letter at a time we will encrypt pairs of letters. We can store a pair of letters p1 and p2 in a vector p = (p1, p2)t. Let A be a 2 × 2 invertible matrix with entries in Z26. We can define an encoding function by f (p) = Ap + b, where b is a fixed column vector and matrix operations are performed in Z26. The decoding function must be f −1(p) = A−1p − A−1b. Example 4. Suppose that we wish to encode the word HELP. The corresponding digit string is 7, 4, 11, 15. If A is the matrix with rows (3 5) and (1 2), then A−1 is the matrix with rows (2 21) and (25 3). If b = (2, 2)t, then our message is encrypted as RRCR. The encrypted letter R represents more than one plaintext letter. Frequency analysis can still be performed on a polyalphabetic cryptosystem, because we have a good understanding of how pairs of letters appear in the English language. The pair th appears quite often; the pair qz never appears. To avoid decryption by a third party, we must use a larger matrix than the one we used in Example 4. 7.2 Public Key Cryptography If traditional cryptosystems are used, anyone who knows enough to encode a message will also know enough to decode an intercepted message. In 1976, W. Diffie and M. Hellman proposed public key cryptography, which is based on the observation that the encryption and decryption procedures need not have the same key. This removes the requirement that the encoding key be kept secret. The encoding function f must be relatively easy to compute, but f −1 must be extremely difficult to compute without some additional information, so that someone who knows only the encrypting key cannot find the decrypting key without prohibitive computation. It is interesting to note that to date, no system has been proposed that has been proven to be “one-way;” that is, for any existing public key cryptosystem, it has never been shown to be computationally prohibitive to decode messages with only knowledge of the encoding key. The RSA Cryptosystem The RSA cryptosystem, introduced by R. Rivest, A. Shamir, and L. Adleman in 1978, is based on the difficulty of factoring large numbers. Though it is not a difficult task to find two large random primes and multiply them together, factoring a 150-digit number that is the product of two large primes would take 100 million computers operating at 10 million instructions per second about 50 million years under the fastest algorithms currently known. The RSA cryptosystem works as follows. Suppose that we choose two random 150-digit prime numbers p and q. 
Next, we compute the product n = pq and also compute φ(n) = m = (p − 1)(q − 1), where φ is the Euler φ-function. Now we start choosing random integers E until we find one that is relatively prime to m; that is, we choose E such that gcd(E, m) = 1. Using the Eucl
idean algorithm, we can find a number D such that DE ≡ 1 (mod m). The numbers n and E are now made public. Suppose now that person B (Bob) wishes to send person A (Alice) a message over a public line. Since E and n are known to everyone, anyone can encode messages. Bob first digitizes the message according to some scheme, say A = 00, B = 01,..., Z = 25. If necessary, he will break the message into pieces such that each piece is a positive integer less than n. Suppose x is one of the pieces. Bob forms the number y = xE mod n and sends y to Alice. For Alice to recover x, she need only compute x = yD mod n. Only Alice knows D. Example 5. Before exploring the theory behind the RSA cryptosystem or attempting to use large integers, we will use some small integers just to see that the system does indeed work. Suppose that we wish to send some message, which when digitized is 25. Let p = 23 and q = 29. Then n = pq = 667 and φ(n) = m = (p − 1)(q − 1) = 616. We can let E = 487, since gcd(616, 487) = 1. The encoded message is computed to be 25487 mod 667 = 169. This computation can be reasonably done by using the method of repeated squares as described in Chapter 4. Using the Euclidean algorithm, we determine that 191E = 1 + 151m; therefore, the decrypting key is (n, D) = (667, 191). We can recover the original message by calculating 169191 mod 667 = 25. Now let us examine why the RSA cryptosystem works. We know that DE ≡ 1 (mod m); hence, there exists a k such that DE = km + 1 = kφ(n) + 1. There are two cases to consider. In the first case assume that gcd(x, n) = 1. Then by Theorem 6.12, yD = (xE)D = xDE = xkm+1 = (xφ(n))kx = (1)kx = x mod n. So we see that Alice recovers the original message x when she
computes yD mod n. For the other case, assume that gcd(x, n) ≠ 1. Since n = pq and x < n, we know x is a multiple of p or a multiple of q, but not both. We will describe the first possibility only, since the second is entirely similar. There is then an integer r, with r < q and x = rp. Note that we have gcd(x, q) = 1 and that m = φ(n) = (p − 1)(q − 1) = φ(p)φ(q). Then, using Theorem 6.12, but now mod q, xkm = xkφ(p)φ(q) = (xφ(q))kφ(p) = (1)kφ(p) = 1 mod q. So there is an integer t such that xkm = 1 + tq. Thus, Alice also recovers the message in this case, yD = xkm+1 = xkmx = (1 + tq)x = x + tq(rp) = x + trn = x mod n. We can now ask how one would go about breaking the RSA cryptosystem. To find D given n and E, we simply need to factor n and solve for D by using the Euclidean algorithm. If we had known that 667 = 23 · 29 in Example 5, we could have recovered D. Message Verification There is a problem of message verification in public key cryptosystems. Since the encoding key is public knowledge, anyone has the ability to send an encoded message. If Alice receives a message from Bob, she would like to be able to verify that it was Bob who actually sent the message. Suppose that Bob’s encrypting key is (n′, E′) and his decrypting key is (n′, D′). Also, suppose that Alice’s encrypting key is (n, E) and her decrypting key is (n, D). Since encryption keys are public information, they can exchange coded messages at their convenience. Bob wishes to assure Alice that the message he is sending is authentic. Before Bob sends the message x to Alice, he decrypts x with his own key: x′ = xD′ mod n′
. Anyone can change x′ back to x just by encryption, but only Bob has the ability to form x′. Now Bob encrypts x′ with Alice’s encryption key to form y = x′E mod n, a message that only Alice can decode. Alice decodes the message and then encodes the result with Bob’s key to read the original message, a message that could have only been sent by Bob. Historical Note Encrypting secret messages goes as far back as ancient Greece and Rome. As we know, Julius Caesar used a simple shift code to send and receive messages. However, the formal study of encoding and decoding messages probably began with the Arabs in the 1400s. In the fifteenth and sixteenth centuries mathematicians such as Alberti and Viete discovered that monoalphabetic cryptosystems offered no real security. In the 1800s, F. W. Kasiski established methods for breaking ciphers in which a ciphertext letter can represent more than one plaintext letter, if the same key was used several times. This discovery led to the use of cryptosystems with keys that were used only a single time. Cryptography was placed on firm mathematical foundations by such people as W. Friedman and L. Hill in the early part of the twentieth century. During World War II mathematicians were very active in cryptography. Efforts to penetrate the cryptosystems of the Axis nations were organized in England and in the United States by such notable mathematicians as Alan Turing and A. A. Albert. The period after World War I saw the development of special-purpose machines for encrypting and decrypting messages. The Allies gained a tremendous advantage in World War II by breaking the ciphers produced by the German Enigma machine and the Japanese Purple ciphers. By the 1970s, interest in commercial cryptography had begun to take hold. There was a growing need to protect banking transactions, computer data, and electronic mail. 
In the early 1970s, IBM developed and implemented LUCIFER, the forerunner of the National Bureau of Standards’ Data Encryption Standard (DES). The concept of a public key cryptosystem, due to Diffie and Hellman, is very recent (1976). It was further developed by Rivest, Shamir, and Adleman with the RSA cryptosystem (1978). It is not
known how secure any of these systems are. The trapdoor knapsack cryptosystem, developed by Merkle and Hellman, has been broken. It is still an open question whether or not the RSA system can be broken. At the time of the writing of this book, the largest number factored is 135 digits long, and at the present moment a code is considered secure if the key is about 400 digits long and is the product of two 200-digit primes. There has been a great deal of controversy about research in cryptography in recent times: the National Security Agency would like to keep information about cryptography secret, whereas the academic community has fought for the right to publish basic research. Modern cryptography has come a long way since 1929, when Henry Stimson, Secretary of State under Herbert Hoover, dismissed the Black Chamber (the State Department’s cryptography division) on the ethical grounds that “gentlemen do not read each other’s mail.” Exercises 1. Encode IXLOVEXMATH using the cryptosystem in Example 1. 2. Decode ZLOOA WKLVA EHARQ WKHA ILQDO, which was encoded using the cryptosystem in Example 1. 3. Assuming that a monoalphabetic code was used to encode the following secret message, what was the original message? NBQFRSMXZF YAWJUFHWFF ESKGQCFWDQ AFNBQFTILO FCWP 4. What is the total number of possible monoalphabetic cryptosystems? How secure are such cryptosystems? 5. Prove that a 2 × 2 matrix A with entries in Z26 is invertible if and only if gcd(det(A), 26) = 1. 6. Given the 2 × 2 matrix A with rows (3 2) and (4 3), use the encryption function f (p) = Ap + b to encode the message CRYPTOLOGY, where b = (2, 5)t. What is the decoding function? 7. Encrypt each of the following RSA messages x so that x is divided into blocks of integers of length 2; that is, if x = 142528, encode 14, 25, and 28 separately. (a) n = 3551, E = 629, x = 31 (b) n = 2257, E = 47, x = 23 (c) n = 120979, E = 13
251, x = 142371 (d) n = 45629, E = 781, x = 231561 8. Compute the decoding key D for each of the encoding keys in Exercise 7. 9. Decrypt each of the following RSA messages y. (a) n = 3551, D = 1997, y = 2791 (b) n = 5893, D = 81, y = 34 (c) n = 120979, D = 27331, y = 112135 (d) n = 79403, D = 671, y = 129381 10. For each of the following encryption keys (n, E) in the RSA cryptosystem, compute D. (a) (n, E) = (451, 231) (b) (n, E) = (3053, 1921) (c) (n, E) = (37986733, 12371) (d) (n, E) = (16394854313, 34578451) 11. Encrypted messages are often divided into blocks of n letters. A message such as THE WORLD WONDERS WHY might be encrypted as JIW OCFRJ LPOEVYQ IOC but sent as JIW OCF RJL POE VYQ IOC. What are the advantages of using blocks of n letters? 12. Find integers n, E, and X such that XE ≡ X (mod n). Is this a potential problem in the RSA cryptosystem? 13. Every person in the class should construct an RSA cryptosystem using primes that are 10 to 15 digits long. Hand in (n, E) and an encoded message. Keep D secret. See if you can break one another’s codes. Additional Exercises: Primality and Factoring In the RSA cryptosystem it is important to be able to find large prime numbers easily. Also, this cryptosystem is not secure if we can factor a composite number that is the product of two large primes. The solutions to both of these problems are quite easy. To find out if a number n is prime or to factor n, we can use trial division. We simply divide n by d = 2, 3,..., √n. Either a factorization will be obtained, or n is prime if no d divides n. The problem
is that such a computation is prohibitively time-consuming if n is very large. 1. A better algorithm for factoring odd positive integers is Fermat’s factorization algorithm. (a) Let n = ab be an odd composite number. Prove that n can be written as the difference of two perfect squares: n = x2 − y2 = (x − y)(x + y). Consequently, a positive odd integer can be factored exactly when we can find integers x and y such that n = x2 − y2. (b) Write a program to implement the following factorization algorithm based on the observation in part (a).
x ← ⌈√n⌉
y ← 1
1: while x2 − y2 > n do
    y ← y + 1
if x2 − y2 < n then
    x ← x + 1
    y ← 1
    goto 1
else if x2 − y2 = n then
    a ← x − y
    b ← x + y
    write n = a ∗ b
The expression ⌈√n⌉ means the smallest integer greater than or equal to the square root of n. Write another program to do factorization using trial division and compare the speed of the two algorithms. Which algorithm is faster and why? 2. Primality Testing. Recall Fermat’s Little Theorem from Chapter 6. Let p be prime with gcd(a, p) = 1. Then ap−1 ≡ 1 (mod p). We can use Fermat’s Little Theorem as a screening test for primes. For example, 15 cannot be prime since 215−1 ≡ 214 ≡ 4 (mod 15). However, 17 is a potential prime since 217−1 ≡ 216 ≡ 1 (mod 17). We say that an odd composite number n is a pseudoprime if 2n−1 ≡ 1 (mod n). Which of the following numbers are primes and which are pseudoprimes? (a) 342 (b) 811 (c) 601 (d) 561 (e) 771 (f) 631 3. Let n be an odd composite number and b be a positive integer such that gcd(b, n) = 1. If bn−1 ≡ 1 (mod n), then n is a pseudoprime base b. Show that 341 is a pseudoprime base 2 but not a pseudoprime base 3. 4. Write a program to determine all pr
imes less than 2000 using trial division. Write a second program that will determine all numbers less than 2000 that are either primes or pseudoprimes. Compare the speed of the two programs. How many pseudoprimes are there below 2000?

There exist composite numbers that are pseudoprimes for all bases to which they are relatively prime. These numbers are called Carmichael numbers. The first Carmichael number is 561 = 3 · 11 · 17. In 1992, Alford, Granville, and Pomerance proved that there are an infinite number of Carmichael numbers [4]. However, Carmichael numbers are very rare. There are only 2163 Carmichael numbers less than 25 × 10^9. For more sophisticated primality tests, see [1], [6], or [7].

References and Suggested Readings

[1] Bressoud, D. M. Factorization and Primality Testing. Springer-Verlag, New York, 1989.
[2] Diffie, W. and Hellman, M. E. “New Directions in Cryptography,” IEEE Trans. Inform. Theory 22 (1976), 644–54.
[3] Gardner, M. “Mathematical games: A new kind of cipher that would take millions of years to break,” Scientific American 237 (1977), 120–24.
[4] Granville, A. “Primality Testing and Carmichael Numbers,” Notices of the American Mathematical Society 39 (1992), 696–700.
[5] Hellman, M. E. “The Mathematics of Public Key Cryptography,” Scientific American 241 (1979), 130–39.
[6] Koblitz, N. A Course in Number Theory and Cryptography. 2nd ed. Springer, New York, 1994.
[7] Pomerance, C., ed. Cryptology and Computational Number Theory. Proceedings of Symposia in Applied Mathematics, vol. 42. American Mathematical Society, Providence, RI, 1990.
[8] Rivest, R. L., Shamir, A., and Adleman, L. “A Method for Obtaining Digital Signatures and Public-key Cryptosystems,” Comm. ACM 21 (1978), 120–26.

Sage

With Sage
’s excellent implementations of basic number-theory computations, it is easy to work non-trivial examples of RSA and the exercises about primality and factoring. 8 Algebraic Coding Theory Coding theory is an application of algebra that has become increasingly important over the last several decades. When we transmit data, we are concerned about sending a message over a channel that could be affected by “noise.” We wish to be able to encode and decode the information in a manner that will allow the detection, and possibly the correction, of errors caused by noise. This situation arises in many areas of communications, including radio, telephone, television, computer communications, and even compact disc player technology. Probability, combinatorics, group theory, linear algebra, and polynomial rings over finite fields all play important roles in coding theory. 8.1 Error-Detecting and Correcting Codes Let us examine a simple model of a communications system for transmitting and receiving coded messages (Figure 8.1). Uncoded messages may be composed of letters or characters, but typically they consist of binary m-tuples. These messages are encoded into codewords, consisting of binary n-tuples, by a device called an encoder. The message is transmitted and then decoded. We will consider the occurrence of errors during transmission. An error occurs if there is a change in one or more bits in the codeword. A decoding scheme is a method that either converts an arbitrarily received n-tuple into a meaningful decoded message or gives an error message for that n-tuple. If the received message is a codeword (one of the special n-tuples allowed to be transmitted), then the decoded message must be the unique message that was encoded into the codeword. 
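In the simplest case such a decoding scheme is just a table lookup on the set of codewords. A minimal sketch (the one-bit repetition code and the function name here are illustrative assumptions, not from the text):

```python
# Hypothetical one-bit code: message -> codeword (triple repetition).
encode_table = {"0": "000", "1": "111"}
# Invert the (one-to-one) encoding so each codeword decodes uniquely.
decode_table = {cw: msg for msg, cw in encode_table.items()}

def decode(received: str) -> str:
    """Return the unique message for a codeword, or an error message otherwise."""
    return decode_table.get(received, "error: not a codeword")

assert decode("111") == "1"                       # a codeword decodes uniquely
assert decode("101") == "error: not a codeword"   # a non-codeword is flagged
```

A lookup like this only flags non-codewords; the next paragraphs discuss schemes that try to correct them as well.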
For received non-codewords, the decoding scheme will give an error indication, or, if we are more clever, will actually try to correct the error and reconstruct the original message.

Figure 8.1. Encoding and decoding messages: m-digit message → Encoder → n-digit codeword → Transmitter → Noise → Receiver → n-digit received word → Decoder → m-digit received message or error

Our goal is to transmit error-free messages as cheaply and quickly as possible.

Example 1. One possible coding scheme would be to send a message several times and to compare the received copies
with one another. Suppose that the message to be encoded is a binary n-tuple (x1, x2, ..., xn). The message is encoded into a binary 3n-tuple by simply repeating the message three times:

(x1, x2, ..., xn) → (x1, x2, ..., xn, x1, x2, ..., xn, x1, x2, ..., xn).

To decode the message, we choose as the ith digit the one that appears in the ith place in at least two of the three transmissions. For example, if the original message is (0110), then the transmitted message will be (0110 0110 0110). If there is a transmission error in the fifth digit, then the received codeword will be (0110 1110 0110), which will be correctly decoded as (0110). (We will adopt the convention that bits are numbered left to right in binary n-tuples.) This triple-repetition method will automatically detect and correct all single errors, but it is slow and inefficient: to send a message consisting of n bits, 2n extra bits are required, and we can only detect and correct single errors. We will see that it is possible to find an encoding scheme that will encode a message of n bits into m bits with m much smaller than 3n.

Example 2. Even parity, a commonly used coding scheme, is much more efficient than the simple repetition scheme. The ASCII (American Standard Code for Information Interchange) coding system uses binary 8-tuples, yielding 2^8 = 256 possible 8-tuples. However, only seven bits are needed since there are only 2^7 = 128 ASCII characters. What can or should be done with the extra bit? Using the full eight bits, we can detect single transmission errors. For example, the ASCII codes for A, B, and C are

A = 65₁₀ = 01000001₂,
B = 66₁₀ = 01000010₂,
C = 67₁₀ = 01000011₂.

Notice that the leftmost bit is always set to 0; that is, the 128 ASCII characters have codes

00000000₂ = 0₁₀, ..., 01111111₂ = 127₁₀.

The bit can be used for error checking
on the other seven bits. It is set to either 0 or 1 so that the total number of 1 bits in the representation of a character is even. Using even parity, the codes for A, B, and C now become

A = 01000001₂,
B = 01000010₂,
C = 11000011₂.

Suppose an A is sent and a transmission error in the sixth bit is caused by noise over the communication channel so that (0100 0101) is received. We know an error has occurred since the received word has an odd number of 1's, and we can now request that the codeword be transmitted again. When used for error checking, the leftmost bit is called a parity check bit.

By far the most common error-detecting codes used in computers are based on the addition of a parity bit. Typically, a computer stores information in m-tuples called words. Common word lengths are 8, 16, and 32 bits. One bit in the word is set aside as the parity check bit, and is not used to store information. This bit is set to either 0 or 1, depending on the number of 1's in the word. Adding a parity check bit allows the detection of all single errors because changing a single bit either increases or decreases the number of 1's by one, and in either case the parity has been changed from even to odd, so the new word is not a codeword. (We could also construct an error detection scheme based on odd parity; that is, we could set the parity check bit so that a codeword always has an odd number of 1's.)

The even parity system is easy to implement, but has two drawbacks. First, multiple errors are not detectable. Suppose an A is sent and the first and seventh bits are changed from 0 to 1. The received word is a codeword, but will be decoded into a C instead of an A. Second, we do not have the ability to correct errors. If the 8-tuple (1001 1000) is received, we know that an error has occurred, but we have no idea which bit has been changed.
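The even parity scheme is easy to state in code. A short sketch (the function names are mine, not the text's); the last assertion reproduces the drawback just described, where two flipped bits turn an A into a valid codeword for C:

```python
def parity_encode(char7: int) -> int:
    """Prepend an even-parity bit (bit 8, the leftmost) to a 7-bit ASCII code."""
    parity = bin(char7).count("1") % 2
    return (parity << 7) | char7

def parity_ok(word8: int) -> bool:
    """A codeword must have an even number of 1 bits."""
    return bin(word8).count("1") % 2 == 0

A = parity_encode(65)                   # 'A' = 1000001 already has even weight,
assert A == 0b01000001                  # so the parity bit stays 0
assert parity_encode(67) == 0b11000011  # 'C' = 1000011 has odd weight: bit set
assert parity_ok(A)
assert not parity_ok(A ^ 0b00000100)    # any single flipped bit is detected
assert parity_ok(A ^ 0b10000010)        # flipping bits 1 and 7 goes unnoticed:
assert A ^ 0b10000010 == parity_encode(67)  # the result is the codeword for C
```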
We will now investigate a coding scheme that will not only allow us to detect transmission errors but will actually correct the errors.

Received Word   Transmitted Codeword
                  (000)   (111)
   000              0       3
   001              1       2
   010              1       2
   011              2       1
   100              1       2
   101              2       1
   110              2       1
   111              3       0

Table 8.1. A repetition code

Example 3. Suppose that our original message is either a 0 or a 1, and that 0 encodes to (000) and 1 encodes to (111). If only a single error occurs during transmission, we can detect and correct the error. For example, if a (101) is received, then the second bit must have been changed from a 1 to a 0. The originally transmitted codeword must have been (111). This method will detect and correct all single errors. In Table 8.1, we present all possible words that might be received for the transmitted codewords (000) and (111). Table 8.1 also shows the number of bits by which each received 3-tuple differs from each original codeword.

Maximum-Likelihood Decoding

The coding scheme presented in Example 3 is not a complete solution to the problem because it does not account for the possibility of multiple errors. For example, either a (000) or a (111) could be sent and a (001) received. We have no means of deciding from the received word whether there was a single error in the third bit or two errors, one in the first bit and one in the second. No matter what coding scheme is used, an incorrect message could be received: we could transmit a (000), have errors in all three bits, and receive the codeword (111). It is important to make explicit assumptions about the likelihood and distribution of transmission errors so that, in a particular application, it will be known whether a given error detection scheme is appropriate. We will assume that transmission errors are rare, and, that when they do occur, they occur independently in each bit; that is, if p is the probability of an error in one bit and q is the probability of an error in a different bit, then the probability of errors occurring in both of these bits at the same time is pq.
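The distance columns of Table 8.1, and the decode-by-proximity rule of Example 3, can be reproduced mechanically. A sketch (the helper names are mine):

```python
def hamming(x: str, y: str) -> int:
    """Number of positions in which two equal-length bit strings differ."""
    return sum(a != b for a, b in zip(x, y))

codewords = ["000", "111"]

def decode(received: str) -> str:
    """Decode to the codeword at minimum Hamming distance from the received word."""
    return min(codewords, key=lambda c: hamming(received, c))

assert [hamming("011", c) for c in codewords] == [2, 1]  # the row for 011 in Table 8.1
assert decode("101") == "111"   # the single error of Example 3 is corrected
assert decode("001") == "000"   # but two errors in a transmitted (111) decode wrongly
```

The last assertion illustrates exactly the ambiguity discussed above: a received (001) is decoded as (000) even if (111) was sent with two errors.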
We will also assume that a received n-tuple is decoded into a codeword that is closest to it; that is, we assume that the receiver uses maximum-likelihood decoding.

Figure 8.2. Binary symmetric channel: each transmitted bit (0 or 1) is received correctly with probability p and flipped with probability q

A binary symmetric channel is a model that consists of a transmitter capable of sending a binary signal,
either a 0 or a 1, together with a receiver. Let p be the probability that the signal is correctly received. Then q = 1 − p is the probability of an incorrect reception. If a 1 is sent, then the probability that a 1 is received is p and the probability that a 0 is received is q (Figure 8.2). The probability that no errors occur during the transmission of a binary codeword of length n is p^n. For example, if p = 0.999 and a message consisting of 10,000 bits is sent, then the probability of a perfect transmission is

(0.999)^10,000 ≈ 0.00005.

Theorem 8.1 If a binary n-tuple (x1, ..., xn) is transmitted across a binary symmetric channel with probability p that no error will occur in each coordinate, then the probability that there are errors in exactly k coordinates is

C(n, k) q^k p^(n−k).

Proof. Fix k different coordinates. We first compute the probability that an error has occurred in this fixed set of coordinates. The probability of an error occurring in a particular one of these k coordinates is q; the probability that an error will not occur in any of the remaining n − k coordinates is p. The probability of each of these n independent events is q^k p^(n−k). The number of possible error patterns with exactly k errors occurring is equal to the binomial coefficient

C(n, k) = n! / (k!(n − k)!),

the number of combinations of n things taken k at a time. Each of these error patterns has probability q^k p^(n−k) of occurring; hence, the probability of all of these error patterns is C(n, k) q^k p^(n−k).

Example 4. Suppose that p = 0.995 and a 500-bit message is sent. The probability that the message was sent error-free is

p^n = (0.995)^500 ≈ 0.082.

The probability of exactly one error occurring is

C(n, 1) q p^(n−1) = 500(0.005)(0.995)^499 ≈ 0.204.

The probability of exactly two errors is

C(n, 2) q^2 p^(n−2) = (500 · 499 / 2)(0.005)^2(0.995)^498 ≈ 0.257.

The probability of more than two errors is approximately

1 − 0.082 − 0.204 −
0.257 = 0.457.

Block Codes

If we are to develop efficient error-detecting and error-correcting codes, we will need more sophisticated mathematical tools. Group theory will allow faster methods of encoding and decoding messages. A code is an (n, m)-block code if the information that is to be coded can be divided into blocks of m binary digits, each of which can be encoded into n binary digits. More specifically, an (n, m)-block code consists of an encoding function

E : Z2^m → Z2^n

and a decoding function

D : Z2^n → Z2^m.

A codeword is any element in the image of E. We also require that E be one-to-one so that two information blocks will not be encoded into the same codeword. If our code is to be error-correcting, then D must be onto.

Example 5. The even-parity coding system developed to detect single errors in ASCII characters is an (8, 7)-block code. The encoding function is

E(x7, x6, ..., x1) = (x8, x7, ..., x1),

where x8 = x7 + x6 + · · · + x1 with addition in Z2.

Let x = (x1, ..., xn) and y = (y1, ..., yn) be binary n-tuples. The Hamming distance or distance, d(x, y), between x and y is the number of bits in which x and y differ. The distance between two codewords is the minimum number of transmission errors required to change one codeword into the other. The minimum distance for a code, dmin, is the minimum of all distances d(x, y), where x and y are distinct codewords. The weight, w(x), of a binary codeword x is the number of 1's in x. Clearly, w(x) = d(x, 0), where 0 = (00 · · · 0).

Example 6. Let x = (10101), y = (11010), and z = (00011) be all of the codewords in some code C. Then we have the following Hamming distances
: d(x, y) = 4, d(x, z) = 3, d(y, z) = 3. The minimum distance for this code is 3. We also have the following weights: w(x) = 3, w(y) = 3, w(z) = 2.

The following proposition lists some basic properties about the weight of a codeword and the distance between two codewords. The proof is left as an exercise.

Proposition 8.2 Let x, y, and z be binary n-tuples. Then
1. w(x) = d(x, 0);
2. d(x, y) ≥ 0;
3. d(x, y) = 0 exactly when x = y;
4. d(x, y) = d(y, x);
5. d(x, y) ≤ d(x, z) + d(z, y).

The weights in a particular code are usually much easier to compute than the Hamming distances between all codewords in the code. If a code is set up carefully, we can use this fact to our advantage.

Suppose that x = (1101) and y = (1100) are codewords in some code. If we transmit (1101) and an error occurs in the rightmost bit, then (1100) will be received. Since (1100) is a codeword, the decoder will decode (1100) as the transmitted message. This code is clearly not very appropriate for error detection. The problem is that d(x, y) = 1. If x = (1100) and y = (1010) are codewords, then d(x, y) = 2. If x is transmitted and a single error occurs, then y can never be received. Table 8.2 gives the distances between all 4-bit codewords in which the first three bits carry information and the fourth is an even parity check bit. We can see that the minimum distance here is 2; hence, the code is suitable as a single error-detecting code.

       0000 0011 0101 0110 1001 1010 1100 1111
0000     0    2    2    2    2    2    2    4
0011     2    0    2    2    2    2    4    2
0101     2    2    0    2    2    4    2    2
0110     2    2    2    0    4    2    2    2
1001     2    2    2    4    0    2    2    2
1010     2    2    4    2    2    0    2    2
1100     2    4    2    2    2    2    0    2
1111     4    2    2    2    2    2    2    0

Table 8.2. Distances between 4-bit codewords

To determine exactly what the error-detecting and error-correcting capabilities for a code are, we need to analyze the minimum distance for the code. Let x and y be codewords. If d(x, y) = 1 and an error occurs where x and y differ, then x is changed to y. The received codeword is y and no error message is given. Now suppose d(x, y) = 2. Then a single error cannot change x to y. Therefore, if dmin = 2, we have the ability to detect single errors. However, suppose that d(x, y) = 2, y is sent, and a noncodeword z is received such that d(x, z) = d(y, z) = 1. Then the decoder cannot decide between x and y. Even though we are aware that an error has occurred, we do not know what the error is.

Suppose dmin ≥ 3. Then the maximum-likelihood decoding scheme corrects all single errors. Starting with a codeword x, an error in the transmission of a single bit gives y with d(x, y) = 1, but d(z, y) ≥ 2 for any other codeword z ≠ x. If we do not require the correction of errors, then we can detect multiple errors when a code has a minimum distance that is greater than 3.

Theorem 8.3 Let C be a code with dmin = 2n + 1. Then C can correct any n or fewer errors. Furthermore, any 2n or fewer errors can be detected in C.

Proof. Suppose that a codeword x is sent and the word y is received with at most n errors. Then d(x, y) ≤ n. If z is any codeword other than x, then

2n + 1 ≤ d(x, z) ≤ d(x, y) + d(y, z) ≤ n + d(y, z).

Hence, d(y, z) ≥ n + 1 and y will be correctly decoded as x. Now suppose that x is transmitted and y
is received and that at least one error has occurred, but not more than 2n errors. Then 1 ≤ d(x, y) ≤ 2n. Since the minimum distance between codewords is 2n + 1, y cannot be a codeword. Consequently, the code can detect between 1 and 2n errors.

Example 7. In Table 8.3, the codewords c1 = (00000), c2 = (00111), c3 = (11100), and c4 = (11011) determine a single error-correcting code.

        00000 00111 11100 11011
00000     0     3     3     4
00111     3     0     4     3
11100     3     4     0     3
11011     4     3     3     0

Table 8.3. Hamming distances for an error-correcting code

Historical Note

Modern coding theory began in 1948 with C. Shannon's paper, “A Mathematical Theory of Communication” [7]. This paper offered an example of an algebraic code, and Shannon's Theorem proclaimed exactly how good codes could be expected to be. Richard Hamming began working with linear codes at Bell Labs in the late 1940s and early 1950s after becoming frustrated because the programs that he was running could not recover from simple errors generated by noise. Coding theory has grown tremendously in the past several years. The Theory of Error-Correcting Codes, by MacWilliams and Sloane [5], published in 1977, already contained over 1500 references. Linear codes (Reed-Muller (32, 6)-block codes) were used on NASA's Mariner space probes. More recent space probes such as Voyager have used what are called convolution codes. Currently, very active research is being done with Goppa codes, which are heavily dependent on algebraic geometry.

8.2 Linear Codes

To gain more knowledge of a particular code and develop more efficient techniques of encoding, decoding, and error detection, we need to add additional structure to our codes. One way to accomplish this is to require that the code also be a group. A group code is a code that is also a subgroup of Z2^n. To check that a code is a group code, we need only verify one thing.
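Namely, closure under addition. The check is mechanical; here is a sketch for the four codewords of Example 7, with bitwise XOR on 5-bit integers standing in for addition in Z2^5:

```python
# The codewords of Example 7 as 5-bit integers; XOR is coordinatewise addition in Z2.
code = {0b00000, 0b00111, 0b11100, 0b11011}

# Closure: the sum (XOR) of any two codewords must again be a codeword.
assert all(x ^ y in code for x in code for y in code)

# Every element is automatically its own inverse: x + x = 0.
assert all(x ^ x == 0 for x in code)
```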
If we add any two elements in the code, the result must be an n-tuple that is again in the code. It is not
necessary to check that the inverse of the n-tuple is in the code, since every codeword is its own inverse, nor is it necessary to check that 0 is a codeword. For instance,

(11000101) + (11000101) = (00000000).

Example 8. Suppose that we have a code that consists of the following 7-tuples:

(0000000) (0100110) (1000011) (1100101)
(0001111) (0101001) (1001100) (1101010)
(0010101) (0110011) (1010110) (1110000)
(0011010) (0111100) (1011001) (1111111).

It is a straightforward though tedious task to verify that this code is also a subgroup of Z2^7 and, therefore, a group code. This code is a single error-detecting and single error-correcting code, but it is a long and tedious process to compute all of the distances between pairs of codewords to determine that dmin = 3. It is much easier to see that the minimum weight of all the nonzero codewords is 3. As we will soon see, this is no coincidence. However, the relationship between weights and distances in a particular code is heavily dependent on the fact that the code is a group.

Lemma 8.4 Let x and y be binary n-tuples. Then w(x + y) = d(x, y).

Proof. Suppose that x and y are binary n-tuples. Then the distance between x and y is exactly the number of places in which x and y differ. But x and y differ in a particular coordinate exactly when the sum in the coordinate is 1, since 1 + 0 = 0 + 1 = 1 and 0 + 0 = 1 + 1 = 0 in Z2. Consequently, the weight of the sum must be the distance between the two codewords.

Theorem 8.5 Let dmin be the minimum distance for a group code C. Then dmin is the minimum of all the nonzero weights of the nonzero codewords in C. That is,

dmin = min{w(x) : x ≠ 0}.

Proof. Observe that

dmin = min{d(x, y) : x ≠ y}
     = min{d(x, y) : x + y ≠ 0}
     = min{w(x + y) : x + y ≠ 0}
     = min{w(z) : z ≠ 0}.

Linear Codes

From Example 8, it is now easy to check that the minimum nonzero weight is 3; hence, the code does indeed detect and correct all single errors. We have now reduced the problem of finding “good” codes to that of generating group codes. One easy way to generate group codes is to employ a bit of matrix theory. Define the inner product of two binary n-tuples to be

x · y = x1y1 + · · · + xnyn,

where x = (x1, x2, ..., xn)^t and y = (y1, y2, ..., yn)^t are column vectors.² For example, if x = (011001)^t and y = (110101)^t, then x · y = 0. We can also look at an inner product as the product of a row matrix with a column matrix; that is,

x · y = x^t y = (x1 x2 · · · xn)(y1, y2, ..., yn)^t = x1y1 + x2y2 + · · · + xnyn.

Example 9. Suppose that the words to be encoded consist of all binary 3-tuples and that our encoding scheme is even-parity. To encode an arbitrary 3-tuple, we add a fourth bit to obtain an even number of 1's. Notice that an arbitrary n-tuple x = (x1, x2, ..., xn)^t has an even number of 1's exactly when x1 + x2 + · · · + xn = 0; hence, a 4-tuple x = (x1, x2, x3, x4)^t has an even number of 1's if x1 + x2 + x3 + x4 = 0, or

x · 1 = x^t 1 = (x1 x2 x3 x4)(1, 1, 1, 1)^t = 0.

This example leads us to hope that there is a
connection between matrices and coding theory.

²Since we will be working with matrices, we will write binary n-tuples as column vectors for the remainder of this chapter.

Let Mm×n(Z2) denote the set of all m × n matrices with entries in Z2. We do matrix operations as usual except that all our addition and multiplication operations occur in Z2. Define the null space of a matrix H ∈ Mm×n(Z2) to be the set of all binary n-tuples x such that Hx = 0. We denote the null space of a matrix H by Null(H).

Example 10. Suppose that

H = ( 0 1 0 1 0 )
    ( 1 1 1 1 0 )
    ( 0 0 1 1 1 ).

For a 5-tuple x = (x1, x2, x3, x4, x5)^t to be in the null space of H, Hx = 0. Equivalently, the following system of equations must be satisfied:

x2 + x4 = 0
x1 + x2 + x3 + x4 = 0
x3 + x4 + x5 = 0.

The set of binary 5-tuples satisfying these equations is

(00000) (11110) (10101) (01011).

This code is easily determined to be a group code.

Theorem 8.6 Let H be in Mm×n(Z2). Then the null space of H is a group code.

Proof. Since each element of Z2^n is its own inverse, the only thing that really needs to be checked here is closure. Let x, y ∈ Null(H) for some matrix H in Mm×n(Z2). Then Hx = 0 and Hy = 0. So

H(x + y) = Hx + Hy = 0 + 0 = 0.

Hence, x + y is in the null space of H and therefore must be a codeword.

A code is a linear code if it is determined by the null space of some matrix H ∈ Mm×n(Z2).

Example 11. Let C be the code given by the matrix H. Suppose that the 6-tuple x = (010011)^t is received. It is a simple matter of matrix multiplication to determine whether or not x is a codeword. Since Hx
= (0, 1, 1)^t,

the received word is not a codeword. We must either attempt to correct the word or request that it be transmitted again.

8.3 Parity-Check and Generator Matrices

We need to find a systematic way of generating linear codes as well as fast methods of decoding. By examining the properties of a matrix H and by carefully choosing H, it is possible to develop very efficient methods of encoding and decoding messages. To this end, we will introduce standard generator and canonical parity-check matrices. Suppose that H is an m × n matrix with entries in Z2 and n > m. If the last m columns of the matrix form the m × m identity matrix, Im, then the matrix is a canonical parity-check matrix. More specifically, H = (A | Im), where A is the m × (n − m) matrix

( a11  a12  · · ·  a1,n−m )
( a21  a22  · · ·  a2,n−m )
( ...  ...  ...    ...    )
( am1  am2  · · ·  am,n−m )

and Im is the m × m identity matrix

( 1 0 · · · 0 )
( 0 1 · · · 0 )
( ...         )
( 0 0 · · · 1 ).

With each canonical parity-check matrix we can associate an n × (n − m) standard generator matrix

G = ( In−m )
    (   A  ).

Our goal will be to show that Gx = y if and only if Hy = 0. Given a message block x to be encoded, G will allow us to quickly encode it into a linear codeword y.

Example 12. Suppose that we have the following eight words to be encoded: (000), (001), (010), ..., (111). For

A = ( 0 1 1 )
    ( 1 1 0 )
    ( 1 0 1 ),

the associated standard generator and canonical parity-check matrices are

G = ( 1 0 0 )
    ( 0 1 0 )
    ( 0 0 1 )
    ( 0 1 1 )
    ( 1 1 0 )
    ( 1 0 1 )

and

H = ( 0 1 1 1 0 0 )
    ( 1 1 0 0 1 0 )
    ( 1 0 1 0 0 1 ),

respectively. Observe that the rows in H represent the parity checks on certain bit positions in a 6-tuple. The 1's in the identity matrix serve as parity checks for the 1's in the same row. If x = (x1, x2, x3, x4, x5, x6), then

0 = Hx = ( x2 + x3 + x4 )
         ( x1 + x2 + x5 )
         ( x1 + x3 + x6 ),

which yields a system of equations:

x2 + x3 + x4 = 0
x1 + x2 + x5 = 0
x1 + x3 + x6 = 0.

Here x4 serves as a check bit for x2 and x3; x5 is a check bit for x1 and x2; and x6 is a check bit for x1 and x3. The identity matrix keeps x4, x5, and x6 from having to check on each other. Hence, x1, x2, and x3 can be arbitrary but x4, x5, and x6 must be chosen to ensure parity. The null space of H is easily computed to be

(000000) (100011) (001101) (101110)
(010110) (110101) (011011) (111000).

An even easier way to compute the null space is with the generator matrix G (Table 8.4).

Message Word x    Codeword Gx
000               000000
001               001101
010               010110
011               011011
100               100011
101               101110
110               110101
111               111000

Table 8.4. A matrix-generated code

Theorem 8.7 If H ∈ Mm×n(Z2) is a canonical parity-check matrix, then Null(H) consists of all x ∈ Z2^n whose first n − m bits are arbitrary but whose last m bits are determined by Hx = 0. Each of the last m bits serves as an even parity check bit for some of the first n − m bits. Hence, H gives rise to an (n, n − m)-block code.

We leave the proof of this theorem as an exercise. In light of the theorem, the first n − m bits in x are called information
bits and the last m bits are called check bits. In Example 12, the first three bits are the information bits and the last three are the check bits.

Theorem 8.8 Suppose that G is an n × k standard generator matrix. Then C = {y : Gx = y for x ∈ Z2^k} is an (n, k)-block code. More specifically, C is a group code.

Proof. Let Gx1 = y1 and Gx2 = y2 be two codewords. Then y1 + y2 is in C since

G(x1 + x2) = Gx1 + Gx2 = y1 + y2.

We must also show that two message blocks cannot be encoded into the same codeword. That is, we must show that if Gx = Gy, then x = y. Suppose that Gx = Gy. Then

Gx − Gy = G(x − y) = 0.

However, the first k coordinates in G(x − y) are exactly x1 − y1, ..., xk − yk, since they are determined by the identity matrix, Ik, part of G. Hence, G(x − y) = 0 exactly when x = y.

Before we can prove the relationship between canonical parity-check matrices and standard generating matrices, we need to prove a lemma.

Lemma 8.9 Let H = (A | Im) be an m × n canonical parity-check matrix and let

G = ( In−m )
    (   A  )

be the corresponding n × (n − m) standard generator matrix. Then HG = 0.

Proof. Let C = HG. The ijth entry in C is

cij = Σ_{k=1}^{n} hik gkj
    = Σ_{k=1}^{n−m} hik gkj + Σ_{k=n−m+1}^{n} hik gkj
    = Σ_{k=1}^{n−m} aik δkj + Σ_{k=n−m+1}^{n} δ_{i, k−(n−m)} a_{k−(n−m), j}
    = aij + aij
    = 0,

where

δij = 1 if i = j, and 0 if i ≠ j,

is the Kronecker delta.

Theorem 8.10 Let H = (A | Im) be an m × n canonical parity-check matrix and let

G = ( In−m )
    (   A  )

be the n × (n − m) standard generator matrix associated with H. Let C be the code generated by G. Then y is in C if and only if Hy = 0. In particular, C is a linear code with canonical parity-check matrix H.

Proof. First suppose that y ∈ C. Then Gx = y for some x ∈ Z2^(n−m). By Lemma 8.9, Hy = HGx = 0.

Conversely, suppose that y = (y1, ..., yn)^t is in the null space of H. We need to find an x in Z2^(n−m) such that Gx = y. Since Hy = 0, the following set of equations must be satisfied:

a11y1 + a12y2 + · · · + a1,n−myn−m + yn−m+1 = 0
a21y1 + a22y2 + · · · + a2,n−myn−m + yn−m+2 = 0
...
am1y1 + am2y2 + · · · + am,n−myn−m + yn = 0.

Equivalently, yn−m+1, ..., yn are determined by y1, ..., yn−m:

yn−m+1 = a11y1 + a12y2 + · · · + a1,n−myn−m
yn−m+2 = a21y1 + a22y2 + · · · + a2,n−myn−m
...
yn = am1y1 + am2y2 + · · · + am,n−myn−m.

Consequently, we can let xi = yi for i = 1, ..., n − m.

It would be helpful if we could compute the minimum distance of a linear code directly from its matrix H in order to determine the error-detecting and error-correcting capabilities of the code. Suppose that

e1 = (100 · · · 00)^t
e2 = (010 · · · 00)^t
...
en = (000 · · · 01)^t

are the n-tuples in Z2^n of weight 1. For an m × n binary matrix H, Hei is exactly the ith column of the matrix H.

Example 13. Observe that Hei singles out the ith column of H: multiplying H by ei adds up the columns of H weighted by the entries of ei, and only the ith entry of ei is nonzero.

We state this result in the following proposition and leave the proof as an exercise.

Proposition 8.11 Let ei be the binary n-tuple with a 1 in the ith coordinate and 0's elsewhere and suppose that H ∈ Mm×n(Z2). Then Hei is the ith column of the matrix H.

Theorem 8.12 Let H be an m × n binary matrix. Then the null space of H is a single error-detecting code if and only if no column of H consists entirely of zeros.

Proof. Suppose that Null(H) is a single error-detecting code. Then the minimum distance of the code must be at least 2. Since the null space is a group code, it is sufficient to require that the code contain no codewords of less than weight 2 other than the zero codeword. That is, ei must not be a codeword for i = 1, ..., n. Since Hei is the ith column of H, the only way in which ei could be in the null space of H would be if the ith column were all zeros; hence, no column of H can consist entirely of zeros.

Conversely, suppose that no column of H is the zero column. By Proposition 8.11, Hei ≠ 0 for each i, so ei is not a codeword; hence, the code must have the capability to detect at least single errors.

Example 14. If we consider a matrix H1 with no zero column and a matrix H2 containing a zero column, then the null space of H1 is a single error-detecting code and the null space of H2 is not.

We can even do better than Theorem 8.12. This theorem gives us conditions on a matrix H that tell us when the minimum weight of the code formed by the null space of H is 2. We can also determine when the minimum distance of a linear code is 3 by examining the corresponding matrix.

Example 15. If we let H be a candidate canonical parity-check matrix with four columns and want to determine whether or not H is the canonical parity-check matrix for an error-correcting code, it is necessary to make certain that Null(H) does
not contain any 4-tuples of weight 2. That is, (1100), (1010), (1001), (0110), (0101), and (0011) must not be in Null(H). The next theorem states that we can indeed determine that the code generated by H is error-correcting by examining the columns of H. Notice in this example that not only does H have no zero columns, but also that no two columns are the same.

Theorem 8.13 Let H be a binary matrix. The null space of H is a single error-correcting code if and only if H does not contain any zero columns and no two columns of H are identical.

Proof. The n-tuple ei + ej has 1's in the ith and jth entries and 0's elsewhere, and w(ei + ej) = 2 for i ≠ j. Since

0 = H(ei + ej) = Hei + Hej

can only occur if the ith and jth columns are identical, no codeword of weight 2 lies in the null space exactly when the columns of H are distinct; together with the absence of zero columns, this forces the minimum weight to be at least 3, and the null space of H is a single error-correcting code.

Suppose now that we have a canonical parity-check matrix H with three rows. Then we might ask how many more columns we can add to the matrix and still have a null space that is a single error-detecting and single error-correcting code. Since each column has three entries, there are 2^3 = 8 possible distinct columns. We cannot add the zero column or repeat any of the three columns e1, e2, e3 already present in the identity matrix. So we can add as many as four columns and still maintain a minimum distance of 3.

In general, if H is an m × n canonical parity-check matrix, then there are n − m information positions in each codeword. Each column has m bits, so there are 2^m possible distinct columns. It is necessary that the columns 0, e1, ..., em be excluded, leaving 2^m − (1 + m) remaining columns for information if we are still to maintain the ability not only to detect but also to correct single errors.

8.4 Efficient Decoding

We are now at the stage where we are able to generate linear codes that detect and correct errors fairly easily, but it is still a time-consuming process to decode a received n-tuple and determine which is the closest codeword, because the received n-tuple must be compared to each
possible codeword to determine the proper decoding. This can be a serious impediment if the code is very large.

Example 16. Given a binary matrix H and the 5-tuples x = (11011)^t and y = (01011)^t, we can compute

Hx = (0, 0, 0)^t and Hy = (1, 0, 1)^t.

Hence, x is a codeword and y is not, since x is in the null space and y is not. Notice that Hy is identical to the first column of H. In fact, this is where the error occurred. If we flip the first bit in y from 0 to 1, then we obtain x.

If H is an m × n matrix and x ∈ Z2^n, then we say that the syndrome of x is Hx. The following proposition allows the quick detection and correction of errors.

Proposition 8.14 Let the m × n binary matrix H determine a linear code and let x be the received n-tuple. Write x as x = c + e, where c is the transmitted codeword and e is the transmission error. Then the syndrome Hx of the received codeword x is also the syndrome of the error e.

Proof.

Hx = H(c + e) = Hc + He = 0 + He = He.

This proposition tells us that the syndrome of a received word depends solely on the error and not on the transmitted codeword. The proof of the following theorem follows immediately from Proposition 8.14 and from the fact that Hei is the ith column of the matrix H.

Theorem 8.15 Let H ∈ Mm×n(Z2) and suppose that the linear code corresponding to H is single error-correcting. Let r be a received n-tuple that was transmitted with at most one error. If the syndrome of r is 0, then no error has occurred; otherwise, if the syndrome of r is equal to some column of H, say the ith column, then the error has occurred in the ith bit.

Example 17. Consider the matrix H and suppose that the 6-tuples x = (111110)^t, y = (111111)^t