Corollary 13.4.7 Every finite Boolean algebra \( \mathcal{B} = \left\lbrack {B;\vee ,\land , - }\right\rbrack \) has \( {2}^{n} \) elements for some positive integer \( n \) .
Proof. Let \( A \) be the set of all atoms of \( \mathcal{B} \) and let \( \left| A\right| = n \) . Then there are exactly \( {2}^{n} \) elements (subsets) in \( \mathcal{P}\left( A\right) \), and by Theorem 13.4.6, \( \left\lbrack {B;\vee ,\land , - }\right\rbrack \) is isomorphic to \( \left\lbrack {\mathcal{P}\left( A\right) ;\cup ,\cap ,{}^{c}}\right\rbrack \) and must also have \( {2}^{n} \) elements.
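As a quick computational illustration (a Python sketch, not part of the text; the atom names are made up), the power set of a set of \( n = 3 \) atoms has exactly \( 2^3 = 8 \) elements:

```python
from itertools import combinations

def power_set(atoms):
    """All subsets of a finite set, as frozensets."""
    items = sorted(atoms)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

# A finite Boolean algebra with n atoms is isomorphic to the power
# set of its atoms, so its order is 2**n.
atoms = {"a1", "a2", "a3"}    # hypothetical atom names, n = 3
print(len(power_set(atoms)))  # 8 = 2**3
```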
Yes
All Boolean algebras of order \( {2}^{n} \) are isomorphic to one another.
Every Boolean algebra of order \( {2}^{n} \) is isomorphic to \( \left\lbrack {\mathcal{P}\left( A\right) ;\cup ,\cap ,{}^{c}}\right\rbrack \) when \( \left| A\right| = n \) . Hence, if \( {\mathcal{B}}_{1} \) and \( {\mathcal{B}}_{2} \) each have \( {2}^{n} \) elements, they each have \( n \) atoms. Suppose their sets of atoms are \( {A}_{1} \) and \( {A}_{2} \), respectively. We know there are isomorphisms \( {T}_{1} \) and \( {T}_{2} \), where \( {T}_{i} : {\mathcal{B}}_{i} \rightarrow \mathcal{P}\left( {A}_{i}\right), i = 1,2 \) . In addition we have an isomorphism \( N \) from \( \mathcal{P}\left( {A}_{1}\right) \) onto \( \mathcal{P}\left( {A}_{2}\right) \), which we ask you to prove in Exercise 13.4.1.9. We can combine these isomorphisms to produce the isomorphism \( {T}_{2}^{-1} \circ N \circ {T}_{1} : {\mathcal{B}}_{1} \rightarrow {\mathcal{B}}_{2} \), which proves the corollary.
No
Consider any Boolean algebra of order 2, \(\left\lbrack {B;\vee ,\land , - }\right\rbrack\). How many functions \( f : {B}^{2} \rightarrow B \) are there?
First, all Boolean algebras of order 2 are isomorphic to \(\left\lbrack {{B}_{2};\vee ,\land , - }\right\rbrack\), so we want to determine the number of functions \( f : {B}_{2}^{2} \rightarrow {B}_{2} \). If we consider a Boolean function of two variables, \({x}_{1}\) and \({x}_{2}\), we note that each variable has two possible values, 0 and 1, so there are \({2}^{2}\) ways of assigning these two values to the \(k = 2\) variables. Hence, the table below has \({2}^{2} = 4\) rows. So far we have a table such as this one:

<table><thead><tr><th>\({x}_{1}\)</th><th>\({x}_{2}\)</th><th>\(f\left( {{x}_{1},{x}_{2}}\right)\)</th></tr></thead><tr><td>0</td><td>0</td><td>?</td></tr><tr><td>0</td><td>1</td><td>?</td></tr><tr><td>1</td><td>0</td><td>?</td></tr><tr><td>1</td><td>1</td><td>?</td></tr></table>

How many possible different functions can there be? To list a few: \({f}_{1}\left( {{x}_{1},{x}_{2}}\right) = {x}_{1}\), \({f}_{2}\left( {{x}_{1},{x}_{2}}\right) = {x}_{2}\), \({f}_{3}\left( {{x}_{1},{x}_{2}}\right) = {x}_{1} \vee {x}_{2}\), \({f}_{4}\left( {{x}_{1},{x}_{2}}\right) = \left( {{x}_{1} \land \overline{{x}_{2}}}\right) \vee {x}_{2}\), \({f}_{5}\left( {{x}_{1},{x}_{2}}\right) = {x}_{1} \land {x}_{2} \vee \overline{{x}_{2}}\), etc. Each of these will fill in the question marks in the table above. The tables for \({f}_{1}\) and \({f}_{3}\) are

![6dd34435-8451-4aec-abdd-96bf3c6137fe_368_0.jpg](images/6dd34435-8451-4aec-abdd-96bf3c6137fe_368_0.jpg)

Two functions are different if and only if their tables differ in at least one row. Of course, by using the basic laws of Boolean algebra we can see that \({f}_{3} = {f}_{4}\). Why? Since each of the four rows can be completed with either a 0 or a 1, there are \({2}^{4} = 16\) different ways of filling in the table, and hence \({2}^{4} = 16\) different functions.
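The count can be confirmed by brute force; here is a small Python sketch (not part of the text) that enumerates all \( 2^4 = 16 \) ways of filling the output column and also checks that \( f_3 = f_4 \):

```python
from itertools import product

rows = list(product([0, 1], repeat=2))            # the four input rows
tables = list(product([0, 1], repeat=len(rows)))  # all ways to fill the '?' column
print(len(tables))  # 16 = 2**(2**2)

# f3(x1,x2) = x1 v x2 and f4(x1,x2) = (x1 ^ ~x2) v x2 have identical
# tables, so they are the same function.
f3 = lambda x1, x2: x1 | x2
f4 = lambda x1, x2: (x1 & (1 - x2)) | x2
assert all(f3(*r) == f4(*r) for r in rows)
```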
No
Theorem 13.6.6 Uniqueness of Minterm Normal Form. Let \( e\left( {{x}_{1},\ldots ,{x}_{k}}\right) \) be a Boolean expression over \( B \) . There exists a unique minterm normal form \( M\left( {{x}_{1},\ldots ,{x}_{k}}\right) \) that is equivalent to \( e\left( {{x}_{1},\ldots ,{x}_{k}}\right) \) in the sense that \( e \) and \( M \) define the same function from \( {B}^{k} \) into \( B \) .
The uniqueness in this theorem does not extend to the ordering of the minterms in \( M \); two minterm normal forms that differ only in the order of their minterms define the same function and are considered the same normal form.
No
Consider the Boolean expression \( f\left( {{x}_{1},{x}_{2}}\right) = {x}_{1} \vee \overline{{x}_{2}} \) . One method of determining the minterm normal form of \( f \) is to think in terms of sets. Consider the diagram with the usual translation of notation in Figure 13.6.8.
\[ f\left( {{x}_{1},{x}_{2}}\right) = \left( {\overline{{x}_{1}} \land \overline{{x}_{2}}}\right) \vee \left( {{x}_{1} \land \overline{{x}_{2}}}\right) \vee \left( {{x}_{1} \land {x}_{2}}\right) \] \[ = {M}_{00} \vee {M}_{10} \vee {M}_{11} \]
Yes
Consider the function \( g : {B}_{2}^{3} \rightarrow {B}_{2} \) defined by Table 13.6.9.
The minterm normal form for \( g \) can be obtained by taking the join of minterms that correspond to rows that have an image value of 1 . If \( g\left( {{a}_{1},{a}_{2},{a}_{3}}\right) = 1 \), then include the minterm \( {y}_{1} \land {y}_{2} \land {y}_{3} \) where

\[ 
{y}_{j} = \left\{ \begin{array}{ll} {x}_{j} & \text{ if }{a}_{j} = 1 \\ \overline{{x}_{j}} & \text{ if }{a}_{j} = 0 \end{array}\right. 
\]

Or, to use alternate notation, include \( {M}_{{a}_{1}{a}_{2}{a}_{3}} \) in the expression if and only if \( g\left( {{a}_{1},{a}_{2},{a}_{3}}\right) = 1 \).

Therefore,

\[ 
g\left( {{x}_{1},{x}_{2},{x}_{3}}\right) = \left( {\overline{{x}_{1}} \land \overline{{x}_{2}} \land \overline{{x}_{3}}}\right) \vee \left( {\overline{{x}_{1}} \land {x}_{2} \land {x}_{3}}\right) \vee \left( {{x}_{1} \land {x}_{2} \land \overline{{x}_{3}}}\right) . 
\]
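A Python sketch of this construction (not part of the text; the rows on which \( g = 1 \) are read off the minterm expression above, since Table 13.6.9 itself is not reproduced here):

```python
from itertools import product

# Rows of the table on which g = 1, read off the expression above.
ones = {(0, 0, 0), (0, 1, 1), (1, 1, 0)}

def minterm_normal_form(ones, k=3):
    """Join of the minterms M_a over the rows a with image value 1.
    Complemented variables are rendered as (1 - xj) so the string
    evaluates correctly over {0, 1}."""
    terms = []
    for a in sorted(ones):
        lits = [f"x{j+1}" if a[j] == 1 else f"(1 - x{j+1})" for j in range(k)]
        terms.append("(" + " & ".join(lits) + ")")
    return " | ".join(terms)

expr = minterm_normal_form(ones)
# The generated expression defines the same function as g on all rows.
for row in product([0, 1], repeat=3):
    env = {f"x{j+1}": row[j] for j in range(3)}
    assert bool(eval(expr, {}, env)) == (row in ones)
print(expr)
```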
Yes
Consider the circuit in Figure 13.7.15. As usual, we assume that three inputs enter on the left and the output exits on the right. If we trace the inputs through the gates we see that this circuit realizes the boolean function \[ f\left( {{x}_{1},{x}_{2},{x}_{3}}\right) = {x}_{1} \cdot \overline{{x}_{2}} \cdot \left( {\left( {{x}_{1} + {x}_{2}}\right) + \left( {{x}_{1} + {x}_{3}}\right) }\right) . \] We now simplify the boolean expression that defines \( f \), and in so doing simplify the circuit as well.
\[ {x}_{1} \cdot \overline{{x}_{2}} \cdot \left( {\left( {{x}_{1} + {x}_{2}}\right) + \left( {{x}_{1} + {x}_{3}}\right) }\right) = {x}_{1} \cdot \overline{{x}_{2}} \cdot \left( {{x}_{1} + {x}_{2} + {x}_{3}}\right) \] \[ = {x}_{1} \cdot \overline{{x}_{2}} \cdot {x}_{1} + {x}_{1} \cdot \overline{{x}_{2}} \cdot {x}_{2} + {x}_{1} \cdot \overline{{x}_{2}} \cdot {x}_{3} \] \[ = {x}_{1} \cdot \overline{{x}_{2}} + 0 \cdot {x}_{1} + {x}_{3} \cdot {x}_{1} \cdot \overline{{x}_{2}} \] \[ = {x}_{1} \cdot \overline{{x}_{2}} + {x}_{3} \cdot {x}_{1} \cdot \overline{{x}_{2}} \] \[ = {x}_{1} \cdot \overline{{x}_{2}} \cdot \left( {1 + {x}_{3}}\right) \] \[ = {x}_{1} \cdot \overline{{x}_{2}} \] Therefore, \( f\left( {{x}_{1},{x}_{2},{x}_{3}}\right) = {x}_{1} \cdot \overline{{x}_{2}} \), which can be realized with the much simpler circuit in Figure 13.7.16, without using the input \( {x}_{3} \) .
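Since \( {B}_{2}^{3} \) has only eight elements, the simplification can be double-checked by brute force; a Python sketch (not part of the text):

```python
from itertools import product

def f(x1, x2, x3):
    """The original circuit: x1 * ~x2 * ((x1 + x2) + (x1 + x3))."""
    return x1 & (1 - x2) & ((x1 | x2) | (x1 | x3))

def f_simplified(x1, x2, x3):
    """The simplified circuit: x1 * ~x2 (input x3 is not needed)."""
    return x1 & (1 - x2)

assert all(f(*row) == f_simplified(*row)
           for row in product([0, 1], repeat=3))
print("the two circuits agree on all eight inputs")
```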
Yes
Consider the following table of desired outputs for the three input bits \( {x}_{1},{x}_{2},{x}_{3} \).
The first step is to write the Minterm Normal Form of \( f \). Since we are working with the two-value Boolean algebra, \( {B}_{2} \), the constants in each minterm are either 0 or 1, and we simply list the minterms that have a 1. These correspond to the rows of the table above that have an output of 1.

\[ f\left( {{x}_{1},{x}_{2},{x}_{3}}\right) = \left( {\overline{{x}_{1}} \cdot \overline{{x}_{2}} \cdot {x}_{3}}\right) + \left( {{x}_{1} \cdot \overline{{x}_{2}} \cdot \overline{{x}_{3}}}\right) + \left( {{x}_{1} \cdot \overline{{x}_{2}} \cdot {x}_{3}}\right) \]

\[ = \overline{{x}_{2}} \cdot \left( {\left( {\overline{{x}_{1}} \cdot {x}_{3}}\right) + \left( {{x}_{1} \cdot \overline{{x}_{3}}}\right) + \left( {{x}_{1} \cdot {x}_{3}}\right) }\right) \]

\[ = \overline{{x}_{2}} \cdot \left( {\left( {\overline{{x}_{1}} \cdot {x}_{3}}\right) + {x}_{1} \cdot \left( {\overline{{x}_{3}} + {x}_{3}}\right) }\right) \]

\[ = \overline{{x}_{2}} \cdot \left( {\left( {\overline{{x}_{1}} \cdot {x}_{3}}\right) + {x}_{1}}\right) \]

Therefore we can realize our table with the boolean function \( f\left( {{x}_{1},{x}_{2},{x}_{3}}\right) = \overline{{x}_{2}} \cdot \left( {\left( {\overline{{x}_{1}} \cdot {x}_{3}}\right) + {x}_{1}}\right) \).
Yes
Example 14.1.9 \( {M}_{1} = \left\lbrack {\mathcal{P}\{ 1,2,3\} ; \cap }\right\rbrack \) is isomorphic to \( {M}_{2} = \left\lbrack {{\mathbb{Z}}_{2}^{3}; \cdot }\right\rbrack \), where the operation in \( {M}_{2} \) is componentwise mod 2 multiplication. A translation rule is that if \( A \subseteq \{ 1,2,3\} \), then it is translated to \( \left( {{d}_{1},{d}_{2},{d}_{3}}\right) \) where
\[ {d}_{i} = \left\{ \begin{array}{ll} 1 & \text{ if }i \in A \\ 0 & \text{ if }i \notin A \end{array}\right. \]

Two cases of how this translation rule works: \( \{ 1,2,3\} \) is the identity for \( {M}_{1} \), and it translates to \( \left( {1,1,1}\right) \), the identity for \( {M}_{2} \); similarly, \( \{ 1,2\} \cap \{ 2,3\} = \{ 2\} \) translates to \( \left( {1,1,0}\right) \cdot \left( {0,1,1}\right) = \left( {0,1,0}\right) \).

A more precise definition of a monoid isomorphism is identical to the definition of a group isomorphism, Definition 11.7.9.
No
Theorem 14.2.3 If \( A \) is countable, then \( {A}^{ * } \) is countable.
Proof. Case 1. Given the alphabet \( B = \{ 0,1\} \), we can define a bijection from the positive integers onto \( {B}^{ * } \) . Each positive integer has a binary expansion \( {d}_{k}{d}_{k - 1}\cdots {d}_{1}{d}_{0} \), where each \( {d}_{j} \) is 0 or 1 and \( {d}_{k} = 1 \) . If \( n \) has such a binary expansion, then \( {2}^{k} \leq n \leq {2}^{k + 1} - 1 \) . We define \( f : \mathbb{P} \rightarrow {B}^{ * } \) by \( f\left( n\right) = f\left( {{d}_{k}{d}_{k - 1}\cdots {d}_{1}{d}_{0}}\right) = {d}_{k - 1}\cdots {d}_{1}{d}_{0} \), where \( f\left( 1\right) = \lambda \), the empty string. Each of the \( {2}^{k} \) strings of length \( k \) is the image of exactly one of the integers between \( {2}^{k} \) and \( {2}^{k + 1} - 1 \) . From its definition, \( f \) is clearly a bijection; therefore, \( {B}^{ * } \) is countable.

Case 2: \( A \) is finite. We will describe how this case is handled with an example first and then give the general proof. If \( A = \{ a, b, c, d, e\} \), then we can code the letters in \( A \) into strings from \( {B}^{3} \) . One coding scheme (there are many) is \( a \leftrightarrow {000}, b \leftrightarrow {001}, c \leftrightarrow {010}, d \leftrightarrow {011} \), and \( e \leftrightarrow {100} \) . Now every string in \( {A}^{ * } \) corresponds to a different string in \( {B}^{ * } \) ; for example, \( ace \) would correspond with 000010100. The cardinality of \( {A}^{ * } \) is equal to the cardinality of the set of strings that can be obtained from this encoding system. The possible coded strings must be countable, since they are a subset of a countable set, \( {B}^{ * } \) . Therefore, \( {A}^{ * } \) is countable.

If \( \left| A\right| = m \), then the letters in \( A \) can be coded using a set of fixed-length strings from \( {B}^{ * } \) . If \( {2}^{k - 1} < m \leq {2}^{k} \), then there are at least as many strings of length \( k \) in \( {B}^{k} \) as there are letters in \( A \) . Now we can associate each letter in \( A \) with a different element of \( {B}^{k} \) . Then any string in \( {A}^{ * } \) corresponds to a string in \( {B}^{ * } \) . By the same reasoning as in the example above, \( {A}^{ * } \) is countable.
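The bijection in Case 1 is easy to experiment with; a Python sketch (not part of the text):

```python
def f(n):
    """Drop the leading 1 of n's binary expansion; f(1) is the empty
    string (lambda)."""
    return bin(n)[3:]   # bin(n) == '0b1...': slice off '0b' and the leading 1

assert f(1) == ""   # lambda, the empty string
assert [f(n) for n in range(2, 8)] == ["0", "1", "00", "01", "10", "11"]
# The 2**k strings of length k come from the integers 2**k .. 2**(k+1) - 1:
assert {f(n) for n in range(8, 16)} == {format(i, "03b") for i in range(8)}
```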
Yes
Theorem 14.2.12 Recursive implies Generating.

(a) If \( A \) is countable, then there exists a generating algorithm for \( {A}^{ * } \) .

(b) If \( L \) is a recursive language over a countable alphabet, then there exists a generating algorithm for \( L \) .
Proof. Part (a) follows from the fact that \( {A}^{ * } \) is countable; therefore, there exists a complete list of strings in \( {A}^{ * } \) .

To generate all strings of \( L \), start with a list of all strings in \( {A}^{ * } \) and an empty list, \( W \), of strings in \( L \) . For each string \( s \), use a recognition algorithm (one exists since \( L \) is recursive) to determine whether \( s \in L \) . If \( s \in L \), add it to \( W \) ; otherwise, discard it and go on to the next string. This process generates a complete list of the strings in \( L \) .
No
The language over \( B \) consisting of strings of alternating 0 's and 1's is a phrase structure language. It can be defined by the following grammar:
These rules can be visualized with a graph:

![6dd34435-8451-4aec-abdd-96bf3c6137fe_386_0.jpg](images/6dd34435-8451-4aec-abdd-96bf3c6137fe_386_0.jpg)

Figure 14.2.17 Production rules for the language of alternating 0s and 1s

We can verify that a string such as 10101 belongs to the language by starting with \( S \) and producing 10101 using the production rules a finite number of times: \( S \rightarrow {1U} \rightarrow {101U} \rightarrow {10101} \).
Yes
Example 14.3.3 A Parity Checking Machine. The following machine is called a parity checker. It recognizes whether or not a string in \( {B}^{ * } \) contains an even number of 1s. The memory structure of this machine reflects the fact that in order to check the parity of a string, we need only keep track of whether an odd or even number of 1s has been detected.
The input alphabet is \( B = \{ 0,1\} \) and the output alphabet is also \( B \) . The state set is \( \{ \) even, odd \( \} \) . The following table defines the output and next-state functions.

<table><thead><tr><th>\( x \)</th><th>\( s \)</th><th>\( w\left( {x, s}\right) \)</th><th>\( t\left( {x, s}\right) \)</th></tr></thead><tr><td>0</td><td>even</td><td>0</td><td>even</td></tr><tr><td>0</td><td>odd</td><td>1</td><td>odd</td></tr><tr><td>1</td><td>even</td><td>1</td><td>odd</td></tr><tr><td>1</td><td>odd</td><td>0</td><td>even</td></tr></table>

Note how the value of the most recent output at any time is an indication of the current state of the machine. Therefore, if we start in the even state and read any finite input tape, the last output corresponds to the final state of the parity checker and tells us the parity of the string on the input tape. For example, if the string 11001010 is read from left to right, the output tape, also from left to right, will be 10001100. Since the last character is a 0, we know that the input string has even parity.
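A Python simulation of this table (not part of the text) reproduces the example output tape:

```python
def parity_checker(tape):
    """Simulate the parity checker, starting in the 'even' state;
    returns the output tape as a string."""
    w = {('0', 'even'): '0', ('0', 'odd'): '1',
         ('1', 'even'): '1', ('1', 'odd'): '0'}       # output function w(x, s)
    t = {('0', 'even'): 'even', ('0', 'odd'): 'odd',
         ('1', 'even'): 'odd', ('1', 'odd'): 'even'}  # next-state function t(x, s)
    state, out = 'even', []
    for x in tape:
        out.append(w[(x, state)])
        state = t[(x, state)]
    return "".join(out)

print(parity_checker("11001010"))   # 10001100; the final 0 signals even parity
```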
Yes
Example 14.3.5 A Baseball Machine. Consider the following simplified version of the game of baseball. To be precise, this machine describes one half-inning of a simplified baseball game. Suppose that in addition to home plate, there is only one base instead of the usual three bases. Also, assume that there are only two outs per inning instead of the usual three. Our input alphabet will consist of the types of hits that the batter could have: out (O), double play (DP), single (S), and home run (HR). The input DP is meant to represent a batted ball that would result in a double play (two outs), if possible. The input DP can then occur at any time. The output alphabet is the numbers 0, 1, and 2 for the number of runs that can be scored as a result of any input. The state set contains the current situation in the inning: the number of outs and whether a base runner is currently on the base. The list of possible states is then 00 (for 0 outs and 0 runners), 01, 10, 11, and end (when the half-inning is over). The transition diagram for this machine appears in Figure 14.3.6.
Let's concentrate on one state. If the current state is 01,0 outs and 1 runner on base, each input results in a different combination of output and next-state. If the batter hits the ball poorly (a double play) the output is zero runs and the inning is over (the limit of two outs has been made). A simple out also results in an output of 0 runs and the next state is 11 , one out and one runner on base. If the batter hits a single, one run scores (output \( = 1 \) ) while the state remains 01 . If a home run is hit, two runs are scored (output \( = 2 \) ) and the next state is 00 . If we had allowed three outs per inning, this graph would only be marginally more complicated. The usual game with three bases would be quite a bit more complicated, however.
No
Recognition in Regular Languages. As we mentioned at the outset of this section, finite-state machines can recognize strings in a regular language. Consider the language \( L \) over \( \{ a, b, c\} \) that contains the strings of positive length in which each \( a \) is followed by \( b \) and each \( b \) is followed by \( c \) . One such string is bccabcbc. This language is regular. A grammar for the language would be nonterminal symbols \( \{ A, B, C\} \) with starting symbol \( C \) and production rules \( A \rightarrow {bB}, B \rightarrow {cC}, C \rightarrow {aA}, C \rightarrow {bB}, C \rightarrow {cC}, C \rightarrow c \) .
A finite-state machine (Figure 14.3.8) that recognizes this language can be constructed with one state for each nonterminal symbol and an additional state (Reject) that is entered if any invalid production takes place. At the end of an input tape that encodes a string in \( \{ a, b, c{\} }^{ * } \), we will know when the string belongs to \( L \) based on the final output. If the final output is 1, the string belongs to \( L \) and if it is 0, the string does not belong to \( L \) . In addition, recognition can be accomplished by examining the final state of the machine. The input string belongs to the language if and only if the final state is \( C \) .
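A Python sketch of such a recognizer (not part of the text). The transition table follows the description above; the explicit `Start` state is an assumption added here so that the empty string is rejected, and it behaves exactly like \( C \) on every input:

```python
TRANS = {
    'Start':  {'a': 'A', 'b': 'B', 'c': 'C'},
    'A':      {'a': 'Reject', 'b': 'B', 'c': 'Reject'},  # an a must be followed by b
    'B':      {'a': 'Reject', 'b': 'Reject', 'c': 'C'},  # a b must be followed by c
    'C':      {'a': 'A', 'b': 'B', 'c': 'C'},
    'Reject': {'a': 'Reject', 'b': 'Reject', 'c': 'Reject'},
}

def accepts(s):
    """A string belongs to L if and only if the final state is C."""
    state = 'Start'
    for ch in s:
        state = TRANS[state][ch]
    return state == 'C'

print(accepts("bccabcbc"))   # True
print(accepts("ab"))         # False: the trailing b is not followed by c
```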
Yes
Example 14.4.3 The Unit-time Delay Machine. A finite-state machine called the unit-time delay machine does not echo its current state, but prints its previous state. For this reason, when we find the monoid of the unit-time delay machine, we must consider both state and output. The transition diagram of this machine appears in Figure 14.4.4.
![6dd34435-8451-4aec-abdd-96bf3c6137fe_395_0.jpg](images/6dd34435-8451-4aec-abdd-96bf3c6137fe_395_0.jpg)

Figure 14.4.4

<table><thead><tr><th>Input</th><th>0</th><th>1</th><th>00</th><th>01</th><th>10</th><th>11</th><th>000 or 100</th><th>001 or 101</th><th>010 or 110</th><th>011 or 111</th></tr></thead><tr><td>State 0</td><td>\( \left( {0,0}\right) \)</td><td>\( \left( {1,0}\right) \)</td><td>\( \left( {0,0}\right) \)</td><td>\( \left( {1,0}\right) \)</td><td>\( \left( {0,1}\right) \)</td><td>\( \left( {1,1}\right) \)</td><td>\( \left( {0,0}\right) \)</td><td>\( \left( {1,0}\right) \)</td><td>\( \left( {0,1}\right) \)</td><td>\( \left( {1,1}\right) \)</td></tr><tr><td>State 1</td><td>\( \left( {0,1}\right) \)</td><td>\( \left( {1,1}\right) \)</td><td>\( \left( {0,0}\right) \)</td><td>\( \left( {1,0}\right) \)</td><td>\( \left( {0,1}\right) \)</td><td>\( \left( {1,1}\right) \)</td><td>\( \left( {0,0}\right) \)</td><td>\( \left( {1,0}\right) \)</td><td>\( \left( {0,1}\right) \)</td><td>\( \left( {1,1}\right) \)</td></tr><tr><td>Same as</td><td></td><td></td><td></td><td></td><td></td><td></td><td>00</td><td>01</td><td>10</td><td>11</td></tr></table>

Again, since no new outcomes were obtained from strings of length 3, only strings of length 2 or less contribute to the monoid of the machine.
The table for the strings of positive length shows that we must add \( {T}_{\lambda } \) to obtain a monoid.

<table><thead><tr><th>*</th><th>\( {T}_{0} \)</th><th>\( {T}_{1} \)</th><th>\( {T}_{00} \)</th><th>\( {T}_{01} \)</th><th>\( {T}_{10} \)</th><th>\( {T}_{11} \)</th></tr></thead><tr><td>\( {T}_{0} \)</td><td>\( {T}_{00} \)</td><td>\( {T}_{01} \)</td><td>\( {T}_{00} \)</td><td>\( {T}_{01} \)</td><td>\( {T}_{10} \)</td><td>\( {T}_{11} \)</td></tr><tr><td>\( {T}_{1} \)</td><td>\( {T}_{10} \)</td><td>\( {T}_{11} \)</td><td>\( {T}_{00} \)</td><td>\( {T}_{01} \)</td><td>\( {T}_{10} \)</td><td>\( {T}_{11} \)</td></tr><tr><td>\( {T}_{00} \)</td><td>\( {T}_{00} \)</td><td>\( {T}_{01} \)</td><td>\( {T}_{00} \)</td><td>\( {T}_{01} \)</td><td>\( {T}_{10} \)</td><td>\( {T}_{11} \)</td></tr><tr><td>\( {T}_{01} \)</td><td>\( {T}_{10} \)</td><td>\( {T}_{11} \)</td><td>\( {T}_{00} \)</td><td>\( {T}_{01} \)</td><td>\( {T}_{10} \)</td><td>\( {T}_{11} \)</td></tr><tr><td>\( {T}_{10} \)</td><td>\( {T}_{00} \)</td><td>\( {T}_{01} \)</td><td>\( {T}_{00} \)</td><td>\( {T}_{01} \)</td><td>\( {T}_{10} \)</td><td>\( {T}_{11} \)</td></tr><tr><td>\( {T}_{11} \)</td><td>\( {T}_{10} \)</td><td>\( {T}_{11} \)</td><td>\( {T}_{00} \)</td><td>\( {T}_{01} \)</td><td>\( {T}_{10} \)</td><td>\( {T}_{11} \)</td></tr></table>
Yes
We will construct the machine of the monoid \( \left\lbrack {{\mathbb{Z}}_{2};{ + }_{2}}\right\rbrack \) . As mentioned above, the state set and the input set are both \( {\mathbb{Z}}_{2} \) . The next-state function is defined by \( t\left( {s, x}\right) = s{ + }_{2}x \) .
The transition diagram for \( m\left( {\mathbb{Z}}_{2}\right) \) appears in Figure 14.5.3. Note how it is identical to the transition diagram of the parity checker, which has an associated monoid that was isomorphic to \( \left\lbrack {{\mathbb{Z}}_{2};{ + }_{2}}\right\rbrack \) .
No
Example 14.5.4 The transition diagram of the monoids \( \left\lbrack {{\mathbb{Z}}_{2};{ \times }_{2}}\right\rbrack \) and \( \left\lbrack {{\mathbb{Z}}_{3};{ \times }_{3}}\right\rbrack \) appear in Figure 14.5.5.
![6dd34435-8451-4aec-abdd-96bf3c6137fe_397_0.jpg](images/6dd34435-8451-4aec-abdd-96bf3c6137fe_397_0.jpg)\n\nFigure 14.5.5 The machines of \( \left\lbrack {{\mathbb{Z}}_{2};{ \times }_{2}}\right\rbrack \) and \( \left\lbrack {{\mathbb{Z}}_{3};{ \times }_{3}}\right\rbrack \)
Yes
Example 14.5.6 Let \( U \) be the monoid that we obtained from the unit-time delay machine (Example 14.4.3). We have seen that the machine of the monoid of the parity checker is essentially the parity checker. Will we obtain a unit-time delay machine when we construct the machine of \( U \) ?
We can’t expect to get exactly the same machine because the unit-time delay machine is not a state machine and the machine of a monoid is a state machine. However, we will see that our new machine is capable of telling us what input was received in the previous time period. The operation table for the monoid serves as a table to define the transition function for the machine. The row headings are the state values, while the column headings are the inputs. If we were to draw a transition diagram with all possible inputs, the diagram would be too difficult to read. Since \( U \) is generated by the two elements \( {T}_{0} \) and \( {T}_{1} \), we will include only those inputs. Suppose that we wanted to read the transition function for the input \( {T}_{01} \) . Since \( {T}_{01} = {T}_{0}{T}_{1} \), in any state \( s, t\left( {s,{T}_{01}}\right) = t\left( {t\left( {s,{T}_{0}}\right) ,{T}_{1}}\right) \) . The transition diagram appears in Figure 14.5.7.

![6dd34435-8451-4aec-abdd-96bf3c6137fe_397_1.jpg](images/6dd34435-8451-4aec-abdd-96bf3c6137fe_397_1.jpg)

Figure 14.5.7 Unit time delay machine

If we start reading a string of 0’s and 1’s while in state \( {T}_{\lambda } \) and are in state \( {T}_{ab} \) at any one time, the input from the previous time period (not the input that sent us into \( {T}_{ab} \), the one before that) is \( a \) . In states \( {T}_{\lambda } \), \( {T}_{0} \), and \( {T}_{1} \), no previous input exists.
Yes
Example 15.1.2 A Finite Cyclic Group. \( {\mathbb{Z}}_{12} = \left\lbrack {{\mathbb{Z}}_{12};{ + }_{12}}\right\rbrack \), where \( { + }_{12} \) is addition modulo 12, is a cyclic group. To verify this statement, all we need to do is demonstrate that some element of \( {\mathbb{Z}}_{12} \) is a generator.
One such element is 5; that is, \( \langle 5\rangle = {\mathbb{Z}}_{12} \) . One more obvious generator is 1 . In fact, 1 is a generator of every \( \left\lbrack {{\mathbb{Z}}_{n};{ + }_{n}}\right\rbrack \) . The reader is asked to prove that if an element is a generator, then its inverse is also a generator. Thus, \( - 5 = 7 \) and \( - 1 = {11} \) are the other generators of \( {\mathbb{Z}}_{12} \) . The remaining eight elements of the group are not generators.
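These claims are quick to verify by computation; a Python sketch (not part of the text), using the fact that \( k \) generates \( {\mathbb{Z}}_{n} \) exactly when \( \gcd(n, k) = 1 \):

```python
from math import gcd

def cyclic_subgroup(k, n):
    """The subgroup <k> of [Z_n; +_n]: all multiples of k mod n."""
    h, elem = set(), 0
    while elem not in h:
        h.add(elem)
        elem = (elem + k) % n
    return sorted(h)

# 5 generates all of Z_12, and k is a generator iff gcd(12, k) = 1.
assert cyclic_subgroup(5, 12) == list(range(12))
gens = [k for k in range(1, 12) if gcd(12, k) == 1]
print(gens)   # [1, 5, 7, 11] -- note 7 = -5 and 11 = -1 in Z_12
```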
No
The additive group of integers, \( \left\lbrack {\mathbb{Z}; + }\right\rbrack \), is cyclic.
\[ \mathbb{Z} = \langle 1\rangle = \{ n \cdot 1 \mid n \in \mathbb{Z}\} \] This observation does not mean that every integer is the product of an integer times 1. It means that \[ \mathbb{Z} = \{ 0\} \cup \{ \overset{n\text{ terms }}{\overbrace{1 + 1 + \cdots + 1}} \mid n \in \mathbb{P}\} \cup \{ \overset{n\text{ terms }}{\overbrace{\left( {-1}\right) + \left( {-1}\right) + \cdots + \left( {-1}\right) }} \mid n \in \mathbb{P}\} \]
Yes
Theorem 15.1.5 Cyclic Implies Abelian. If \( \left\lbrack {G; * }\right\rbrack \) is cyclic, then it is abelian.
Proof. Let \( a \) be any generator of \( G \) and let \( b, c \in G \) . By the definition of the generator of a group, there exist integers \( m \) and \( n \) such that \( b = {ma} \) and \( c = {na} \) . Thus, using Theorem 11.3.14,\n\n\[ b * c = \left( {ma}\right) * \left( {na}\right) \]\n\n\[ = \left( {m + n}\right) a \]\n\n\[ = \left( {n + m}\right) a \]\n\n\[ = \left( {na}\right) * \left( {ma}\right) \]\n\n\[ = c * b \]
Yes
Example 15.1.6 A Cyclic Multiplicative Group. The group of positive integers modulo 11 with modulo 11 multiplication, \( \left\lbrack {{\mathbb{Z}}_{11}^{ * };{ \times }_{11}}\right\rbrack \), is cyclic.
One of its generators is \( 6 : {6}^{1} = 6,{6}^{2} = 3,{6}^{3} = 7,\ldots ,{6}^{9} = 2 \), and \( {6}^{10} = 1 \), the identity of the group.
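A Python check of the powers of 6 (not part of the text):

```python
# Successive powers of 6 modulo 11 sweep out every element of the
# group, confirming that 6 is a generator of [Z*_11; x_11].
powers = [pow(6, k, 11) for k in range(1, 11)]
print(powers)   # [6, 3, 7, 9, 10, 5, 8, 4, 2, 1]
assert sorted(powers) == list(range(1, 11))   # all ten elements appear
assert powers[-1] == 1                        # 6**10 is the identity
```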
Yes
A Non-cyclic Group. The real numbers with addition, \( \left\lbrack {\mathbb{R}; + }\right\rbrack \), form a noncyclic group.
The proof of this statement requires a bit more generality since we are saying that for all \( r \in \mathbb{R},\langle r\rangle \) is a proper subset of \( \mathbb{R} \) . If \( r \) is nonzero, the multiples of \( r \) are distributed over the real line, as in Figure 15.1.8. It is clear then that there are many real numbers, like \( r/2 \), that are not in \( \langle r\rangle \) .
Yes
Theorem 15.1.9 Possible Cyclic Group Structures. If \( G \) is a cyclic group, then \( G \) is either finite or countably infinite. If \( G \) is finite and \( \left| G\right| = n \) , it is isomorphic to \( \left\lbrack {{\mathbb{Z}}_{n};{ + }_{n}}\right\rbrack \) . If \( G \) is infinite, it is isomorphic to \( \left\lbrack {\mathbb{Z}; + }\right\rbrack \) .
Proof. Case 1: \( \left| G\right| < \infty \) . If \( a \) is a generator of \( G \) and \( \left| G\right| = n \), define \( \phi : {\mathbb{Z}}_{n} \rightarrow G \) by \( \phi \left( k\right) = {ka} \) for all \( k \in {\mathbb{Z}}_{n} \). Since \( \langle a\rangle \) is finite, we can use the fact that the elements of \( \langle a\rangle \) are the first \( n \) nonnegative multiples of \( a \) . From this observation, we see that \( \phi \) is a surjection. A surjection between finite sets of the same cardinality must be a bijection. Finally, if \( p, q \in {\mathbb{Z}}_{n} \), \[ \phi \left( p\right) + \phi \left( q\right) = {pa} + {qa} \] \[ = \left( {p + q}\right) a \] \[ = \left( {p{ + }_{n}q}\right) a\;\text{ see exercise }{10} \] \[ = \phi \left( {p{ + }_{n}q}\right) \] Therefore \( \phi \) is an isomorphism. Case 2: \( \left| G\right| = \infty \) . We will leave this case as an exercise.
No
Theorem 15.1.10 Subgroups of Cyclic Groups. Every subgroup of a cyclic group is cyclic.
Proof. Let \( G \) be cyclic with generator \( a \) and let \( H \leq G \) . If \( H = \{ e\}, H \) has \( e \) as a generator. We may now assume that \( \left| H\right| \geq 2 \) and \( a \neq e \) . Let \( m \) be the least positive integer such that \( {ma} \) belongs to \( H \) . This is the key step. It lets us get our hands on a generator of \( H \) . We will now show that \( c = {ma} \) generates \( H \) . Certainly, \( \langle c\rangle \subseteq H \), but suppose that \( \langle c\rangle \neq H \) . Then there exists \( b \in H \) such that \( b \notin \langle c\rangle \) . Now, since \( b \) is in \( G \), there exists \( n \in \mathbb{Z} \) such that \( b = {na} \) . We now apply the division property and divide \( n \) by \( m \) . \( b = {na} = \left( {{qm} + r}\right) a = \left( {qm}\right) a + {ra} \), where \( 0 \leq r < m \) . We note that \( r \) cannot be zero for otherwise we would have \( b = {na} = q\left( {ma}\right) = {qc} \in \langle c\rangle \) . Therefore, \( {ra} = {na} - \left( {qm}\right) a \in H \) . This contradicts our choice of \( m \) because \( 0 < r < m \) .
Yes
All subgroups of \( {\mathbb{Z}}_{10} \)
The only proper subgroups of \( {\mathbb{Z}}_{10} \) are \( {H}_{1} = \{ 0,5\} \) and \( {H}_{2} = \{ 0,2,4,6,8\} \) . They are both cyclic: \( {H}_{1} = \langle 5\rangle \) , while \( {H}_{2} = \langle 2\rangle = \langle 4\rangle = \langle 6\rangle = \langle 8\rangle \) . The generators of \( {\mathbb{Z}}_{10} \) are \( 1,3,7 \), and 9 .
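These subgroups and generators can be enumerated directly; a Python sketch (not part of the text):

```python
def cyclic_subgroup(k, n):
    """The subgroup <k> of [Z_n; +_n], as a sorted tuple."""
    h, elem = set(), 0
    while elem not in h:
        h.add(elem)
        elem = (elem + k) % n
    return tuple(sorted(h))

# Four subgroups in all: {0}, H1 = {0,5}, H2 = {0,2,4,6,8}, and Z_10 itself.
subgroups = {cyclic_subgroup(k, 10) for k in range(10)}
for h in sorted(subgroups, key=len):
    print(h)

assert cyclic_subgroup(2, 10) == cyclic_subgroup(4, 10) == (0, 2, 4, 6, 8)
assert {k for k in range(10)
        if cyclic_subgroup(k, 10) == tuple(range(10))} == {1, 3, 7, 9}
```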
Yes
All subgroups of \( \mathbb{Z} \). With the exception of \( \{ 0\} \), all subgroups of \( \mathbb{Z} \) are isomorphic to \( \mathbb{Z} \). If \( H \leq \mathbb{Z} \), then \( H \) is the cyclic subgroup generated by the least positive element of \( H \).
Such a subgroup is cyclic by Theorem 15.1.10; since it is infinite, it is isomorphic to \( \mathbb{Z} \) by Theorem 15.1.9.
No
Theorem 15.1.13 The order of elements of a finite cyclic group. If \( G \) is a cyclic group of order \( n \) and \( a \) is a generator of \( G \), the order of \( {ka} \) is \( n/d \) , where \( d \) is the greatest common divisor of \( n \) and \( k \) .
Proof. The proof of this theorem is left to the reader.
No
Computation of an order in a cyclic group. To compute the order of \( \langle {18}\rangle \) in \( {\mathbb{Z}}_{30} \), we first observe that 1 is a generator of \( {\mathbb{Z}}_{30} \) and \( {18} = {18}\left( 1\right) \) .
The greatest common divisor of 18 and 30 is 6 . Hence, the order of \( \langle {18}\rangle \) is \( {30}/6 \), or 5 .
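The formula from Theorem 15.1.13 is easy to check directly; a Python sketch (not part of the text):

```python
from math import gcd

def order_in_Zn(k, n):
    """Order of the element k in [Z_n; +_n]: n / gcd(n, k)."""
    return n // gcd(n, k)

# <18> in Z_30: gcd(18, 30) = 6, so the order is 30 / 6 = 5.
print(order_in_Zn(18, 30))                       # 5
multiples = {(j * 18) % 30 for j in range(30)}   # direct check of <18>
assert multiples == {0, 6, 12, 18, 24}
assert len(multiples) == order_in_Zn(18, 30)
```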
Yes
Theorem 15.2.6 If \( b \in a * H \), then \( a * H = b * H \), and if \( b \in H * a \), then \( H * a = H * b. \)
Proof. In light of the remark above, we need only prove the first part of this theorem. Suppose that \( x \in a * H \) . We need only find a way of expressing \( x \) as \( b * h \) for some \( h \in H \) .
No
In Figure 15.2.1, you can start at either 1 or 7 and obtain the same path by taking jumps of three tacks in each step. Thus,
\[ 1 + {}_{12}\{ 0,3,6,9\} = 7 + {}_{12}\{ 0,3,6,9\} = \{ 1,4,7,{10}\} .\]
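The coset computation is easy to replicate; a Python sketch (not part of the text):

```python
H = {0, 3, 6, 9}   # the subgroup <3> of Z_12

def left_coset(a, n=12):
    return {(a + h) % n for h in H}

# Starting at 1 or at 7 traces out the same coset, since 7 - 1 = 6 is in H.
assert left_coset(1) == left_coset(7) == {1, 4, 7, 10}
# The cosets partition Z_12 into |G| / |H| = 12 / 4 = 3 classes.
cosets = {frozenset(left_coset(a)) for a in range(12)}
print(sorted(sorted(c) for c in cosets))   # [[0, 3, 6, 9], [1, 4, 7, 10], [2, 5, 8, 11]]
```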
Yes
Theorem 15.2.8 Cosets Partition a Group. If \( \left\lbrack {G; * }\right\rbrack \) is a group and \( H \leq G \), the set of left cosets of \( H \) is a partition of \( G \) . In addition, all of the left cosets of \( H \) have the same cardinality. The same is true for right cosets.
Proof. That every element of \( G \) belongs to a left coset is clear because \( a \in a * H \) for all \( a \in G \) . If \( a * H \) and \( b * H \) are left cosets, we will prove that they are either equal or disjoint. If \( a * H \) and \( b * H \) are not disjoint, \( a * H \cap b * H \) is nonempty and some element \( c \in G \) belongs to the intersection. Then by Theorem 15.2.6, \( c \in a * H \Rightarrow a * H = c * H \) and \( c \in b * H \Rightarrow b * H = c * H \) . Hence \( a * H = b * H \) . We complete the proof by showing that each left coset has the same cardinality as \( H \) . To do this, we simply observe that if \( a \in G,\rho : H \rightarrow a * H \) defined by \( \rho \left( h\right) = a * h \) is a bijection and hence \( \left| H\right| = \left| {a * H}\right| \) .
Yes
Corollary 15.2.9 A Coset Counting Formula. If \( \left| G\right| < \infty \) and \( H \leq G \) , the number of distinct left cosets of \( H \) equals \( \frac{\left| G\right| }{\left| H\right| } \) . For this reason we use \( G/H \) to denote the set of left cosets of \( H \) in \( G \) .
Proof. This follows from the partitioning of \( G \) into equal sized sets, one of which is \( H \) .
No
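Both the partition theorem and the counting formula can be confirmed computationally for a concrete case. A Python sketch using \( G = \mathbb{Z}_{12} \) and \( H = \{0,3,6,9\} \) (our choice of example):

```python
# Enumerate the left cosets of H = {0, 3, 6, 9} in [Z_12; +_12] and check
# that they partition the group into |G|/|H| = 12/4 = 3 equal-sized cells.
n = 12
H = {0, 3, 6, 9}
cosets = {frozenset((a + h) % n for h in H) for a in range(n)}

print(len(cosets))                              # 3 distinct cosets
print(all(len(C) == len(H) for C in cosets))    # True: all have |H| elements
union = set().union(*cosets)
print(union == set(range(n)))                   # True: the cosets cover Z_12
```

Since the three cosets are pairwise disjoint, have four elements each, and cover all twelve elements, they form a partition, and \( 12/4 = 3 \) as the corollary predicts.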
Consider the cosets described in Example 15.2.10. For brevity, we rename \( 0 + 4\mathbb{Z},1 + 4\mathbb{Z},2 + 4\mathbb{Z} \) , and \( 3 + 4\mathbb{Z} \) with the symbols \( \overline{0},\overline{1},\overline{2} \), and \( \overline{3} \) . Let’s do a typical calculation, \( \overline{1} + \overline{3} \) . We will see that the result is always going to be \( \overline{0} \), no matter what representatives we select.
For example, \( 9 \in \overline{1},7 \in \overline{3} \), and \( 9 + 7 = {16} \in \overline{0} \) . Our choice of the representatives 9 and 7 for \( \overline{1} \) and \( \overline{3} \) was completely arbitrary.
Yes
Consider the group of real numbers, \( \left\lbrack {\mathbb{R}; + }\right\rbrack \), and its subgroup of integers, \( \mathbb{Z} \). Every element of \( \mathbb{R}/\mathbb{Z} \) has the same cardinality as \( \mathbb{Z} \). Let \( s, t \in \mathbb{R} \). Then \( s \in t + \mathbb{Z} \) if and only if \( s \) can be written \( t + n \) for some \( n \in \mathbb{Z} \). Hence \( s \) and \( t \) belong to the same coset if and only if they differ by an integer.
Now consider the coset \( {0.25} + \mathbb{Z} \). Real numbers that differ by an integer from 0.25 are \( {1.25},{2.25},{3.25},\ldots \) and \( - {0.75}, - {1.75}, - {2.75},\ldots \) If any real number is selected, there exists a representative of its coset that is greater than or equal to 0 and less than 1. We will call that representative the distinguished representative of the coset. For example, 43.125 belongs to the coset represented by \( {0.125}; - {6.382} + \mathbb{Z} \) has 0.618 as its distinguished representative. The operation on \( \mathbb{R}/\mathbb{Z} \) is commonly called addition modulo 1. A few typical calculations in \( \mathbb{R}/\mathbb{Z} \) are \[ \left( {{0.1} + \mathbb{Z}}\right) + \left( {{0.48} + \mathbb{Z}}\right) = {0.58} + \mathbb{Z} \] \[ \left( {{0.7} + \mathbb{Z}}\right) + \left( {{0.31} + \mathbb{Z}}\right) = {0.01} + \mathbb{Z} \] \[ - \left( {{0.41} + \mathbb{Z}}\right) = - {0.41} + \mathbb{Z} = {0.59} + \mathbb{Z} \] \[ \text{and in general,} - \left( {a + \mathbb{Z}}\right) = \left( {1 - a}\right) + \mathbb{Z} \]
Yes
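In Python the distinguished representative of \( x + \mathbb{Z} \) is just `x % 1.0`, which reduces `x` to the interval \( [0,1) \). A small sketch of addition modulo 1 (outputs are only approximate because of floating point):

```python
def rep(x):
    """Distinguished representative of the coset x + Z, in [0, 1)."""
    return x % 1.0

print(rep(43.125))       # 0.125
print(rep(-6.382))       # approximately 0.618
print(rep(0.1 + 0.48))   # approximately 0.58
print(rep(0.7 + 0.31))   # approximately 0.01
print(rep(-0.41))        # approximately 0.59, illustrating -(a + Z) = (1 - a) + Z
```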
Consider the group \( {\mathbb{Z}}_{2}{}^{4} = {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \) . Let \( H \) be \( \langle \left( {1,0,1,0}\right) \rangle \), the cyclic subgroup of \( {\mathbb{Z}}_{2}{}^{4} \) generated by \( \left( {1,0,1,0}\right) \) .
Since\n\n\[ \left( {1,0,1,0}\right) + \left( {1,0,1,0}\right) = \left( {1{ + }_{2}1,0{ + }_{2}0,1{ + }_{2}1,0{ + }_{2}0}\right) = \left( {0,0,0,0}\right) \]\n\nthe order of \( H \) is 2 and, \( {\mathbb{Z}}_{2}{}^{4}/H \) has \( \left| {{\mathbb{Z}}_{2}^{4}/H}\right| = \frac{\left| {\mathbb{Z}}_{2}^{4}\right| }{\left| H\right| } = \frac{16}{2} = 8 \) elements. A typical coset is\n\n\[ C = \left( {0,1,1,1}\right) + H = \{ \left( {0,1,1,1}\right) ,\left( {1,1,0,1}\right) \} \]\n\nNote that since \( 2\left( {0,1,1,1}\right) = \left( {0,0,0,0}\right) ,{2C} = C \otimes C = H \), the identity for the operation on \( {\mathbb{Z}}_{2}{}^{4}/H \) . The orders of non-identity elements of this factor group are all 2, and it can be shown that the factor group is isomorphic to \( {\mathbb{Z}}_{2}{}^{3} \) .
Yes
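The coset count and the typical coset \( C \) from this example can be enumerated directly. A Python sketch:

```python
# Cosets of H = <(1,0,1,0)> in Z_2^4, confirming |H| = 2 and 16/2 = 8 cosets.
from itertools import product

def add(u, v):
    """Componentwise addition mod 2."""
    return tuple((a + b) % 2 for a, b in zip(u, v))

G = list(product((0, 1), repeat=4))
H = {(0, 0, 0, 0), (1, 0, 1, 0)}
cosets = {frozenset(add(g, h) for h in H) for g in G}

# The typical coset from the example.
C = frozenset(add((0, 1, 1, 1), h) for h in H)
print(len(cosets))   # 8
print(sorted(C))     # [(0, 1, 1, 1), (1, 1, 0, 1)]
```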
Theorem 15.2.18 Coset operation is well-defined (Abelian Case). If \( G \) is an abelian group, and \( H \leq G \), the operation induced on cosets of \( H \) by the operation of \( G \) is well defined.
Proof. Suppose that \( a, b \) and \( {a}^{\prime },{b}^{\prime } \) are two choices for representatives of cosets \( C \) and \( D \) . That is to say that \( a,{a}^{\prime } \in C, b,{b}^{\prime } \in D \) . We will show that \( a * b \) and \( {a}^{\prime } * {b}^{\prime } \) are representatives of the same coset. Theorem 15.2.6 implies that \( C = a * H \) and \( D = b * H \), thus we have \( {a}^{\prime } \in a * H \) and \( {b}^{\prime } \in b * H \) . Then there exist \( {h}_{1},{h}_{2} \in H \) such that \( {a}^{\prime } = a * {h}_{1} \) and \( {b}^{\prime } = b * {h}_{2} \) and so\n\n\[ {a}^{\prime } * {b}^{\prime } = \left( {a * {h}_{1}}\right) * \left( {b * {h}_{2}}\right) = \left( {a * b}\right) * \left( {{h}_{1} * {h}_{2}}\right) \]\n\nby various group properties and the assumption that \( G \) is abelian, which lets us reverse the order in which \( b \) and \( {h}_{1} \) appear in the chain of equalities. This last expression for \( {a}^{\prime } * {b}^{\prime } \) implies that \( {a}^{\prime } * {b}^{\prime } \in \left( {a * b}\right) * H \) since \( {h}_{1} * {h}_{2} \in H \) because \( H \) is a subgroup of \( G \) . Thus, we get the same coset for both pairs of representatives.
Yes
Theorem 15.2.19 Let \( G \) be a group and \( H \leq G \) . If the operation induced on left cosets of \( H \) by the operation of \( G \) is well defined, then the set of left cosets forms a group under that operation.
Proof. Let \( {C}_{1},{C}_{2} \), and \( {C}_{3} \) be the left cosets with representatives \( {r}_{1},{r}_{2} \), and \( {r}_{3} \) , respectively. The values of \( {C}_{1} \otimes \left( {{C}_{2} \otimes {C}_{3}}\right) \) and \( \left( {{C}_{1} \otimes {C}_{2}}\right) \otimes {C}_{3} \) are determined by \( {r}_{1} * \left( {{r}_{2} * {r}_{3}}\right) \) and \( \left( {{r}_{1} * {r}_{2}}\right) * {r}_{3} \), respectively. By the associativity of \( * \) in \( G \) , these two group elements are equal and so the two coset expressions must be equal. Therefore, the induced operation is associative. As for the identity and inverse properties, there is no surprise. The identity coset is \( H \), or \( e * H \), the coset that contains \( G \) ’s identity. If \( C \) is a coset with representative \( a \) ; that is,\n\nif \( C = a * H \), then \( {C}^{-1} \) is \( {a}^{-1} * H \) .\n\n\[ \left( {a * H}\right) \otimes \left( {{a}^{-1} * H}\right) = \left( {a * {a}^{-1}}\right) * H = e * H = \text{ identity coset. } \]
Yes
The significance of \( {S}_{3} \) . Our opening example, \( {S}_{3} \), is the smallest non-abelian group. For that reason, all of its proper subgroups are abelian: in fact, they are all cyclic.
Figure 15.3.7 shows the Hasse diagram for the subgroups of \( {S}_{3} \).
No
The only abelian symmetric groups are \( {S}_{1} \) and \( {S}_{2} \), with 1 and 2 elements, respectively.
The elements of \( {S}_{2} \) are \( i = \left( \begin{array}{ll} 1 & 2 \\ 1 & 2 \end{array}\right) \) and \( \alpha = \left( \begin{array}{ll} 1 & 2 \\ 2 & 1 \end{array}\right) \) . \( {S}_{2} \) is isomorphic to \( {\mathbb{Z}}_{2} \).
No
Theorem 15.3.9 For \( n \geq 1,\left| {S}_{n}\right| = n \) ! and for \( n \geq 3,{S}_{n} \) is non-abelian.
Proof. The first part of the theorem follows from the extended rule of products (see Chapter 2). We leave the details of proof of the second part to the reader after the following hint. Consider \( f \) in \( {S}_{n} \) where \( f\left( 1\right) = 2, f\left( 2\right) = 3, f\left( 3\right) = 1 \) , and \( f\left( j\right) = j \) for \( 3 < j \leq n \) . Therefore the cycle representation of \( f \) is \( \left( {1,2,3}\right) \) . Now define \( g \) in a similar manner so that when you compare \( f\left( {g\left( 1\right) }\right) \) and \( g\left( {f\left( 1\right) }\right) \) you get different results.
No
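The hint can be carried out concretely. A Python sketch with \( f = (1,2,3) \) and, as our own choice, \( g = (1,2) \), representing permutations as dictionaries and composing by \( (f \circ g)(x) = f(g(x)) \):

```python
def compose(f, g):
    """(f o g)(x) = f(g(x))."""
    return {x: f[g[x]] for x in g}

f = {1: 2, 2: 3, 3: 1}   # the cycle (1,2,3)
g = {1: 2, 2: 1, 3: 3}   # the transposition (1,2), our choice of g

print(compose(f, g)[1], compose(g, f)[1])   # f(g(1)) = 3 but g(f(1)) = 1
print(compose(f, g) == compose(g, f))       # False: S_3 is non-abelian
```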
Theorem 15.3.16 Decomposition into Cycles. Every cycle of length greater than 2 can be expressed as a product of transpositions.
Proof. We need only indicate how the product of transpositions can be obtained. It is easy to verify that a cycle of length \( k,\left( {{a}_{1},{a}_{2},{a}_{3},\ldots ,{a}_{k}}\right) \), is equal to the following product of \( k - 1 \) transpositions:\n\n\[ \left( {{a}_{1},{a}_{k}}\right) \cdots \left( {{a}_{1},{a}_{3}}\right) \left( {{a}_{1},{a}_{2}}\right) \]
Yes
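The decomposition in the proof can be verified for a sample cycle. A Python sketch checking that \( (1,5)(1,4)(1,3)(1,2) = (1,2,3,4,5) \) in \( S_5 \), with the rightmost transposition applied first (the helper names are ours):

```python
def transposition(a, b, n):
    p = {x: x for x in range(1, n + 1)}
    p[a], p[b] = b, a
    return p

def compose(f, g):
    """(f o g)(x) = f(g(x)), i.e. g is applied first."""
    return {x: f[g[x]] for x in g}

def cycle(elts, n):
    """The cycle (elts[0], elts[1], ..., elts[-1]) as a permutation of {1..n}."""
    p = {x: x for x in range(1, n + 1)}
    for i, a in enumerate(elts):
        p[a] = elts[(i + 1) % len(elts)]
    return p

n, c = 5, [1, 2, 3, 4, 5]
prod = {x: x for x in range(1, n + 1)}        # start from the identity
for b in c[1:]:                               # builds (1,5)(1,4)(1,3)(1,2)
    prod = compose(transposition(c[0], b, n), prod)

print(prod == cycle(c, n))   # True: k - 1 = 4 transpositions suffice
```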
Theorem 15.3.19 Let \( n \geq 2 \) . The alternating group is indeed a group and has order \( \frac{n!}{2} \) .
Proof. In this proof, the symbols \( {s}_{i} \) and \( {t}_{i} \) stand for transpositions and \( p, q \) are even nonnegative integers. If \( f, g \in {A}_{n} \), we can write the two permutations as products of even numbers of transpositions, \( f = {s}_{1}{s}_{2}\cdots {s}_{p} \) and \( g = {t}_{1}{t}_{2}\cdots {t}_{q} \) . Then\n\n\[ f \circ g = {s}_{1}{s}_{2}\cdots {s}_{p}{t}_{1}{t}_{2}\cdots {t}_{q} \]\n\nSince \( p + q \) is even, \( f \circ g \in {A}_{n} \), and \( {A}_{n} \) is closed with respect to function composition. With this, we have proven that \( {A}_{n} \) is a subgroup of \( {S}_{n} \) by Theorem 11.5.5.\n\nTo prove the final assertion, let \( {B}_{n} \) be the set of odd permutations and let \( \tau = \left( {1,2}\right) \) . Define \( \theta : {A}_{n} \rightarrow {B}_{n} \) by \( \theta \left( f\right) = f \circ \tau \) . Suppose that \( \theta \left( f\right) = \theta \left( g\right) \) . Then \( f \circ \tau = g \circ \tau \) and by the right cancellation law, \( f = g \) . Hence, \( \theta \) is an injection. Next we show that \( \theta \) is also a surjection. If \( h \in {B}_{n}, h \) is the image of an element of \( {A}_{n} \) . Specifically, \( h \) is the image of \( h \circ \tau \) .\n\n\[ \theta \left( {h \circ \tau }\right) = \left( {h \circ \tau }\right) \circ \tau \]\n\n\[ = h \circ \left( {\tau \circ \tau }\right) \;\text{Why?} \]\n\n\[ = h \circ i\;\text{ Why? } \]\n\n\[ = h \]\n\nSince \( \theta \) is a bijection, \( \left| {A}_{n}\right| = \left| {B}_{n}\right| = \frac{n!}{2} \).
Yes
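The order formula \( \left| A_n \right| = n!/2 \) can be spot-checked by counting even permutations directly, using inversion parity. A Python sketch for \( n = 4 \):

```python
from itertools import permutations
from math import factorial

def is_even(p):
    """A permutation (as a tuple) is even iff its inversion count is even."""
    inversions = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
                     if p[i] > p[j])
    return inversions % 2 == 0

evens = [p for p in permutations(range(4)) if is_even(p)]
print(len(evens), factorial(4) // 2)   # 12 12
```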
Example 15.3.20 The Sliding Tile Puzzle. Consider the sliding-tile puzzles pictured in Figure 15.3.21. Each numbered square is a tile and the dark square is a gap. Any tile that is adjacent to the gap can slide into the gap. In most versions of this puzzle, the tiles are locked into a frame so that they can be moved only in the manner described above. The object of the puzzle is to arrange the tiles as they appear in Configuration (a). Configurations (b) and (c) are typical starting points. We propose to show why the puzzle can be solved starting with (b), but not with (c).
We will associate a change in the configuration of the puzzle with an element of \( {S}_{16} \) . Imagine that a tile numbered 16 fills in the gap. For any configuration of the puzzle, the identity \( i \) is the function that leaves the configuration unchanged.
No
Cosets of \( {A}_{3} \) . We have seen that \( {A}_{3} = \left\{ {i,{r}_{1},{r}_{2}}\right\} \) is a subgroup of \( {S}_{3} \), and its left cosets are \( {A}_{3} \) itself and \( {B}_{3} = \left\{ {{f}_{1},{f}_{2},{f}_{3}}\right\} \) . Whether \( \left\{ {{A}_{3},{B}_{3}}\right\} \) is a group boils down to determining whether the induced operation is well defined.
Consider the operation table for \( {S}_{3} \) in Figure 15.4.2. We have shaded in all occurrences of the elements of \( {B}_{3} \) in gray. We will call these elements the gray elements and the elements of \( {A}_{3} \) the white ones. Now consider the process of computing the coset product \( {A}_{3} \circ {B}_{3} \) . The product of any white element with any gray element is gray, so every choice of representatives from \( {A}_{3} \) and \( {B}_{3} \) produces an element of \( {B}_{3} \), and \( {A}_{3} \circ {B}_{3} = {B}_{3} \) no matter which representatives are selected.
No
Example 15.4.3 Cosets of another subgroup of \( {S}_{3} \) . Now let’s try the left cosets of \( \left\langle {f}_{1}\right\rangle \) in \( {S}_{3} \) . There are three of them. Will we get a complicated version of \( {\mathbb{Z}}_{3} \) ? The left cosets are \( {C}_{0} = \left\langle {f}_{1}\right\rangle ,{C}_{1} = {r}_{1}\left\langle {f}_{1}\right\rangle = \left\{ {{r}_{1},{f}_{3}}\right\} \), and \( {C}_{2} = {r}_{2}\left\langle {f}_{1}\right\rangle = \left\{ {{r}_{2},{f}_{2}}\right\} \)
The reader might be expecting something to go wrong eventually, and here it is. To determine \( {C}_{1} \circ {C}_{2} \) we can choose from four pairs of representatives:\n\n\[ {r}_{1} \in {C}_{1},{r}_{2} \in {C}_{2} \rightarrow {r}_{1} \circ {r}_{2} = i \in {C}_{0} \]\n\n\[ {r}_{1} \in {C}_{1},{f}_{2} \in {C}_{2} \rightarrow {r}_{1} \circ {f}_{2} = {f}_{1} \in {C}_{0} \]\n\n\[ {f}_{3} \in {C}_{1},{r}_{2} \in {C}_{2} \rightarrow {f}_{3} \circ {r}_{2} = {f}_{2} \in {C}_{2} \]\n\n\[ {f}_{3} \in {C}_{1},{f}_{2} \in {C}_{2} \rightarrow {f}_{3} \circ {f}_{2} = {r}_{2} \in {C}_{2} \]\n\nThis time, we don't get the same coset for each pair of representatives. Therefore, the induced operation is not well defined and no factor group is produced.
Yes
Theorem 15.4.6 If \( H \leq G \), then the operation induced on left cosets of \( H \) by the operation of \( G \) is well defined if and only if any one of the following conditions is true:\n\n(a) \( H \) is a normal subgroup of \( G \) .\n\n(b) If \( h \in H, a \in G \), then there exists \( {h}^{\prime } \in H \) such that \( h * a = a * {h}^{\prime } \) .\n\n(c) If \( h \in H, a \in G \), then \( {a}^{-1} * h * a \in H \) .
Proof. We leave the proof of this theorem to the reader.
No
A non-normal subgroup. The right cosets of \( \left\langle {f}_{1}\right\rangle \leq {S}_{3} \) are \( \left\{ {i,{f}_{1}}\right\} ,\left\{ {{r}_{1},{f}_{2}}\right\} \), and \( \left\{ {{r}_{2},{f}_{3}}\right\} \) . These are not the same as the left cosets of \( \left\langle {f}_{1}\right\rangle \) .
In addition, \( {f}_{2}{}^{-1}{f}_{1}{f}_{2} = {f}_{2}{f}_{1}{f}_{2} = {f}_{3} \notin \left\langle {f}_{1}\right\rangle \) . Thus, \( \left\langle {f}_{1}\right\rangle \) is not normal.
Yes
Subgroups of \( {A}_{5} \). \( {A}_{5} \), a group in its own right with 60 elements, has many proper subgroups, but none are normal.
Although this could be done by brute force, the number of elements in the group would make the process tedious. A far more elegant way to approach the verification of this statement is to use the following fact about the cycle structure of permutations. If \( f \in {S}_{n} \) is a permutation with a certain cycle structure, \( {\sigma }_{1}{\sigma }_{2}\cdots {\sigma }_{k} \), where the length of \( {\sigma }_{i} \) is \( {\ell }_{i} \), then for any \( g \in {S}_{n},{g}^{-1} \circ f \circ g \), which is the conjugate of \( f \) by \( g \), will have a cycle structure with exactly the same cycle lengths. For example, if we take \( f = \left( {1,2,3,4}\right) \left( {5,6}\right) \left( {7,8,9}\right) \in {S}_{9} \) and conjugate by \( g = \left( {1,3,5,7,9}\right) \), \[ {g}^{-1} \circ f \circ g = \left( {1,9,7,5,3}\right) \circ \left( {1,2,3,4}\right) \left( {5,6}\right) \left( {7,8,9}\right) \circ \left( {1,3,5,7,9}\right) \] \[ = \left( {1,4,9,2}\right) \left( {3,6}\right) \left( {5,8,7}\right) \] Notice that the condition for normality of a subgroup \( H \) of \( G \) is that the conjugate of any element of \( H \) by an element of \( G \) must remain in \( H \) . To verify that \( {A}_{5} \) has no proper normal subgroups, you can start by cataloging the different cycle structures that occur in \( {A}_{5} \) and how many elements have those structures. Then consider what happens when you conjugate these different cycle structures with elements of \( {A}_{5} \) . An outline of the process is in the exercises.
No
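The invariance of cycle structure under conjugation can be tested on the example \( f = (1,2,3,4)(5,6)(7,8,9) \), \( g = (1,3,5,7,9) \). The Python sketch below compares only the multiset of cycle lengths, which is independent of the composition convention (all helper names are ours):

```python
def compose(f, g):
    """(f o g)(x) = f(g(x))."""
    return {x: f[g[x]] for x in g}

def inverse(f):
    return {v: k for k, v in f.items()}

def from_cycles(cycles, n):
    """Build a permutation of {1..n} from a list of cycles."""
    p = {x: x for x in range(1, n + 1)}
    for c in cycles:
        for i, a in enumerate(c):
            p[a] = c[(i + 1) % len(c)]
    return p

def cycle_lengths(p):
    """Sorted list of the cycle lengths of p (including fixed points)."""
    seen, lengths = set(), []
    for start in p:
        if start not in seen:
            x, length = start, 0
            while x not in seen:
                seen.add(x)
                x = p[x]
                length += 1
            lengths.append(length)
    return sorted(lengths)

f = from_cycles([[1, 2, 3, 4], [5, 6], [7, 8, 9]], 9)
g = from_cycles([[1, 3, 5, 7, 9]], 9)
conj = compose(inverse(g), compose(f, g))   # the conjugate of f by g

print(cycle_lengths(f))      # [2, 3, 4]
print(cycle_lengths(conj))   # [2, 3, 4], the same cycle structure
```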
Define \( \alpha : {\mathbb{Z}}_{6} \rightarrow {\mathbb{Z}}_{3} \) by \( \alpha \left( n\right) = \) \( n{\;\operatorname{mod}\;3} \). Therefore, \( \alpha \left( 0\right) = 0,\alpha \left( 1\right) = 1,\alpha \left( 2\right) = 2,\alpha \left( 3\right) = 0 \), \( \alpha \left( 4\right) = 1 \), and \( \alpha \left( 5\right) = 2 \). For \( n, m \in {\mathbb{Z}}_{6} \), we could actually show that \( \alpha \) is a homomorphism by checking all \( {6}^{2} = {36} \) different cases for the formula
\[ \alpha \left( {n{ + }_{6}m}\right) = \alpha \left( n\right) { + }_{3}\alpha \left( m\right) \] (15.4.1) but we will use a line of reasoning that generalizes. We have already encountered the Chinese Remainder Theorem, which implies that the function \( \beta : {\mathbb{Z}}_{6} \rightarrow {\mathbb{Z}}_{3} \times {\mathbb{Z}}_{2} \) defined by \( \beta \left( n\right) = \left( {n{\;\operatorname{mod}\;3}, n{\;\operatorname{mod}\;2}}\right) \) is an isomorphism. We need only observe that equating the first coordinates of both sides of the equation \[ \beta \left( {n{ + }_{6}m}\right) = \beta \left( n\right) + \beta \left( m\right) \] (15.4.2) gives us precisely the homomorphism property.
Yes
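The 36-case check mentioned above takes one line in Python. A minimal sketch:

```python
# Verify alpha(n +_6 m) = alpha(n) +_3 alpha(m) for all 36 pairs in Z_6 x Z_6.
alpha = lambda n: n % 3

ok = all(alpha((n + m) % 6) == (alpha(n) + alpha(m)) % 3
         for n in range(6) for m in range(6))
print(ok)   # True
```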
Theorem 15.4.14 Group Homomorphism Properties. If \( \theta : G \rightarrow {G}^{\prime } \) is a homomorphism, then:\n\n(a) \( \theta \left( e\right) = \theta \) (the identity of \( G \) ) \( = \) the identity of \( {G}^{\prime } = {e}^{\prime } \) .\n\n(b) \( \theta \left( {a}^{-1}\right) = \theta {\left( a\right) }^{-1} \) for all \( a \in G \) .\n\n(c) If \( H \leq G \), then \( \theta \left( H\right) = \{ \theta \left( h\right) \mid h \in H\} \leq {G}^{\prime } \) .
Proof.\n\n(a) Let \( a \) be any element of \( G \) . Then \( \theta \left( a\right) \in {G}^{\prime } \).\n\n\( \theta \left( a\right) \diamond {e}^{\prime } = \theta \left( a\right) \; \) by the definition of \( {e}^{\prime } \)\n\n\( = \theta \left( {a * e}\right) \; \) by the definition of \( e \)\n\n\( = \theta \left( a\right) \diamond \theta \left( e\right) \; \) by the fact that \( \theta \) is a homomorphism\n\nBy cancellation, \( {e}^{\prime } = \theta \left( e\right) \).\n\n(b) Again, let \( a \in G.{e}^{\prime } = \theta \left( e\right) = \theta \left( {a * {a}^{-1}}\right) = \theta \left( a\right) \diamond \theta \left( {a}^{-1}\right) \) . Hence, by the uniqueness of inverses, \( \theta {\left( a\right) }^{-1} = \theta \left( {a}^{-1}\right) \).\n\n(c) Let \( {b}_{1},{b}_{2} \in \theta \left( H\right) \) . Then there exists \( {a}_{1},{a}_{2} \in H \) such that \( \theta \left( {a}_{1}\right) = {b}_{1} \) , \( \theta \left( {a}_{2}\right) = {b}_{2} \) . Recall that a compact necessary and sufficient condition for \( H \leq G \) is that \( x * {y}^{-1} \in H \) for all \( x, y \in H \) . Now we apply the same condition in \( {G}^{\prime } \):\n\n\[ \n{b}_{1}\diamond {b}_{2}{}^{-1} = \theta \left( {a}_{1}\right) \diamond \theta {\left( {a}_{2}\right) }^{-1} \]\n\n\[ = \theta \left( {a}_{1}\right) \diamond \theta \left( {{a}_{2}{}^{-1}}\right) \]\n\n\[ = \theta \left( {{a}_{1} * {a}_{2}{}^{-1}}\right) \in \theta \left( H\right) \]\n\nsince \( {a}_{1} * {a}_{2}{}^{-1} \in H \), and so we can conclude that \( \theta \left( H\right) \leq {G}^{\prime } \) .
Yes
If we define \( \pi : \mathbb{Z} \rightarrow \mathbb{Z}/4\mathbb{Z} \) by \( \pi \left( n\right) = n + 4\mathbb{Z} \), then \( \pi \) is a homomorphism.
The image of the subgroup \( 4\mathbb{Z} \) is the single coset \( 0 + 4\mathbb{Z} \), the identity of the factor group. Homomorphisms of this type are called natural homomorphisms. The following theorems will verify that \( \pi \) is a homomorphism and also show the connection between homomorphisms and normal subgroups.
No
Theorem 15.4.17 If \( H \vartriangleleft G \), then the function \( \pi : G \rightarrow G/H \) defined by \( \pi \left( a\right) = {aH} \) is a homomorphism.
Proof. We leave the proof of this theorem to the reader.
No
Theorem 15.4.20 Let \( \theta : G \rightarrow {G}^{\prime } \) be a homomorphism from \( G \) into \( {G}^{\prime } \). The kernel of \( \theta \) is a normal subgroup of \( G \).
Proof. Let \( K = \ker \theta \). We can see that \( K \) is a subgroup of \( G \) by letting \( a, b \in K \) and verifying that \( a * {b}^{-1} \in K \) by computing \( \theta \left( {a * {b}^{-1}}\right) = \theta \left( a\right) \diamond \theta {\left( b\right) }^{-1} = {e}^{\prime }\diamond {e}^{\prime -1} = {e}^{\prime } \). To prove normality, we let \( g \) be any element of \( G \) and \( k \in K \). We compute \( \theta \left( {g * k * {g}^{-1}}\right) \) to verify that \( g * k * {g}^{-1} \in K \).\n\n\[ \theta \left( {g * k * {g}^{-1}}\right) = \theta \left( g\right) \diamond \theta \left( k\right) \diamond \theta \left( {g}^{-1}\right) \]\n\n\[ = \theta \left( g\right) \diamond \theta \left( k\right) \diamond \theta {\left( g\right) }^{-1} \]\n\n\[ = \theta \left( g\right) \diamond {e}^{\prime }\diamond \theta {\left( g\right) }^{-1} \]\n\n\[ = \theta \left( g\right) \diamond \theta {\left( g\right) }^{-1} \]\n\n\[ = {e}^{\prime } \]
Yes
Define \( \theta : \mathbb{Z} \rightarrow {\mathbb{Z}}_{10} \) by \( \theta \left( n\right) = n{\;\operatorname{mod}\;{10}} \) . The three previous theorems imply the following:
- \( \pi : \mathbb{Z} \rightarrow \mathbb{Z}/{10}\mathbb{Z} \) defined by \( \pi \left( n\right) = n + {10}\mathbb{Z} \) is a homomorphism.\n- \( \{ n \in \mathbb{Z} \mid \theta \left( n\right) = 0\} = \{ {10n} \mid n \in \mathbb{Z}\} = {10}\mathbb{Z} \vartriangleleft \mathbb{Z} \).\n- \( \mathbb{Z}/{10}\mathbb{Z} \) is isomorphic to \( {\mathbb{Z}}_{10} \).
Yes
Let \( G \) be the same group of two by two invertible real matrices as in Example 15.4.11. Define \( \Phi : G \rightarrow G \) by \( \Phi \left( A\right) = \frac{A}{\sqrt{\left| \det A\right| }} \). We will let the reader verify that \( \Phi \) is a homomorphism. The theorems above imply the following.
\[ \text{-}\ker \Phi = \{ A \in G \mid \Phi \left( A\right) = I\} = \left\{ {\left. \left( \begin{array}{ll} a & 0 \\ 0 & a \end{array}\right) \right| \;a \in \mathbb{R}, a \neq 0}\right\} \vartriangleleft G\text{. This} \] verifies our statement in Example 15.4.11. As in that example, let \( \ker \Phi = \) \( {H}_{1} \) .
No
Consider \( \Phi : {\mathbb{Z}}_{2}{}^{2} \rightarrow {\mathbb{Z}}_{2}{}^{3} \) defined by \( \Phi \left( {a, b}\right) = \left( {a, b, a{ + }_{2}b}\right) \) . If \( \left( {{a}_{1},{b}_{1}}\right) ,\left( {{a}_{2},{b}_{2}}\right) \in {\mathbb{Z}}_{2}{}^{2} \)
\[ \Phi \left( {\left( {{a}_{1},{b}_{1}}\right) + \left( {{a}_{2},{b}_{2}}\right) }\right) = \Phi \left( {{a}_{1}{ + }_{2}{a}_{2},{b}_{1}{ + }_{2}{b}_{2}}\right) \] \[ = \left( {{a}_{1}{ + }_{2}{a}_{2},{b}_{1}{ + }_{2}{b}_{2},{a}_{1}{ + }_{2}{a}_{2}{ + }_{2}{b}_{1}{ + }_{2}{b}_{2}}\right) \] \[ = \left( {{a}_{1},{b}_{1},{a}_{1}{ + }_{2}{b}_{1}}\right) + \left( {{a}_{2},{b}_{2},{a}_{2}{ + }_{2}{b}_{2}}\right) \] \[ = \Phi \left( {{a}_{1},{b}_{1}}\right) + \Phi \left( {{a}_{2},{b}_{2}}\right) \]
Yes
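The algebraic verification above can also be done exhaustively, since the domain has only four elements. A Python sketch checking all 16 pairs:

```python
# Verify that Phi(a, b) = (a, b, a +_2 b) is a homomorphism Z_2^2 -> Z_2^3.
from itertools import product

def phi(a, b):
    return (a, b, (a + b) % 2)

def add(u, v):
    """Componentwise addition mod 2."""
    return tuple((x + y) % 2 for x, y in zip(u, v))

ok = all(phi(*add(u, v)) == add(phi(*u), phi(*v))
         for u in product((0, 1), repeat=2)
         for v in product((0, 1), repeat=2))
print(ok)   # True
```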
Theorem 15.5.3 There is a system of distinguished representatives of \( {\mathbb{Z}}_{2}{}^{6}/W \) such that each of the six-bit blocks having a single 1 is a distinguished representative of its own coset.
Now we can describe the error-correcting process. First match each of the blocks with a single 1 with its syndrome. In addition, match the identity of \( W \) with the syndrome \( \left( {0,0,0}\right) \) as in the table below. Since there are eight cosets of \( W \), select any representative of the eighth coset to be distinguished. This is the coset with syndrome \( \left( {1,1,1}\right) \) .\n\n[Table: each syndrome in \( {\mathbb{Z}}_{2}{}^{3} \) paired with the distinguished representative of its coset]
No
The ring of integers. \( \left\lbrack {\mathbb{Z};+, \cdot }\right\rbrack \) is a ring, where \( + \) and \( \cdot \) stand for ordinary addition and multiplication on \( \mathbb{Z} \) .
From Chapter 11, we already know that \( \left\lbrack {\mathbb{Z}; + }\right\rbrack \) is an abelian group, so we need only check parts 2 and 3 of the definition of a ring. From elementary algebra, we know that the associative law under multiplication and the distributive laws are true for \( \mathbb{Z} \) . This is our main example of an infinite ring.
No
The ring of integers modulo \( n.\left\lbrack {{\mathbb{Z}}_{n};{ + }_{n},{ \times }_{n}}\right\rbrack \) is a ring.
The properties of modular arithmetic on \( {\mathbb{Z}}_{n} \) were described in Section 11.4, and they give us the information we need to convince ourselves that \( \left\lbrack {{\mathbb{Z}}_{n};{ + }_{n},{ \times }_{n}}\right\rbrack \) is a ring.
No
To determine the unity in the ring \( {\mathbb{Z}}_{4} \times {\mathbb{Z}}_{3} \), we look for the element \( \left( {m, n}\right) \) such that for all elements \( \left( {a, b}\right) \in {\mathbb{Z}}_{4} \times {\mathbb{Z}}_{3},\left( {a, b}\right) = \left( {a, b}\right) \cdot \left( {m, n}\right) = \left( {m, n}\right) \cdot \left( {a, b}\right) \) , or, equivalently, \n\n\[ \left( {a{ \times }_{4}m, b{ \times }_{3}n}\right) = \left( {m{ \times }_{4}a, n{ \times }_{3}b}\right) = \left( {a, b}\right) \]
So we want \( m \) such that \( a{ \times }_{4}m = m{ \times }_{4}a = a \) in the ring \( {\mathbb{Z}}_{4} \) . The only element \( m \) in \( {\mathbb{Z}}_{4} \) that satisfies this equation is \( m = 1 \) . Similarly, we obtain a value of 1 for \( n \) . So the unity of \( {\mathbb{Z}}_{4} \times {\mathbb{Z}}_{3} \), which is unique by Exercise 15 of this section, is \( \left( {1,1}\right) \) . We leave it to the reader to verify that this ring is commutative.
No
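Since the ring is finite, the unity can also be found by brute force. A Python sketch over all 12 elements of \( \mathbb{Z}_4 \times \mathbb{Z}_3 \):

```python
# Search Z_4 x Z_3 for an element e with e*x = x*e = x for every x.
from itertools import product

elements = list(product(range(4), range(3)))

def mult(u, v):
    """Componentwise multiplication, mod 4 and mod 3 respectively."""
    return ((u[0] * v[0]) % 4, (u[1] * v[1]) % 3)

unity = [e for e in elements
         if all(mult(e, x) == x and mult(x, e) == x for x in elements)]
print(unity)   # [(1, 1)]
```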
The equation \( {2x} = 3 \) has a solution in the ring \( \left\lbrack {\mathbb{Q};+, \cdot }\right\rbrack \) but does not have a solution in \( \left\lbrack {\mathbb{Z};+, \cdot }\right\rbrack \)
since, to solve this equation, we multiply both sides of the equation \( {2x} = 3 \) by the multiplicative inverse of 2 . This number, \( {2}^{-1} \), exists in \( \mathbb{Q} \) but does not exist in \( \mathbb{Z} \) .
Yes
Let us find the multiplicative inverses, when they exist, of each element of the ring \( \left\lbrack {{\mathbb{Z}}_{6};{ + }_{6},{ \times }_{6}}\right\rbrack \) . If \( u = 3 \), we want an element \( v \) such that \( u{ \times }_{6}v = 1 \) .
We do not have to check whether \( v{ \times }_{6}u = 1 \) since \( {\mathbb{Z}}_{6} \) is commutative. If we try each of the six elements, \( 0,1,2,3,4 \), and 5, of \( {\mathbb{Z}}_{6} \), we find that none of them satisfies the above equation, so 3 does not have a multiplicative inverse in \( {\mathbb{Z}}_{6} \) .
Yes
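Trying all six candidates for an inverse is a six-step loop. A Python sketch that confirms 3 has no inverse in \( \mathbb{Z}_6 \) and finds the elements that do (the helper name `inverse_mod6` is ours):

```python
def inverse_mod6(u):
    """Return v with u *_6 v = 1 if one exists, else None."""
    for v in range(6):
        if (u * v) % 6 == 1:
            return v
    return None

print(inverse_mod6(3))                                        # None
print([u for u in range(6) if inverse_mod6(u) is not None])   # [1, 5]
```

Only 1 and 5 are units of \( \mathbb{Z}_6 \), consistent with the fact that \( u \) is invertible mod 6 exactly when \( \gcd(u, 6) = 1 \).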
Consider the rings \( \left\lbrack {\mathbb{Z};+, \cdot }\right\rbrack \) and \( \left\lbrack {2\mathbb{Z};+, \cdot }\right\rbrack \) . In Chapter 11 we showed that as groups, the two sets \( \mathbb{Z} \) and \( 2\mathbb{Z} \) with addition were isomorphic. The group isomorphism that proved this was the function \( f : \mathbb{Z} \rightarrow 2\mathbb{Z} \), defined by \( f\left( n\right) = {2n} \) . Is \( f \) a ring isomorphism?
We need only check whether \( f\left( {m \cdot n}\right) = \) \( f\left( m\right) \cdot f\left( n\right) \) for all \( m, n \in \mathbb{Z} \) . In fact, this condition is not satisfied:\n\n\[ f\left( {m \cdot n}\right) = 2 \cdot m \cdot n\;\text{ and }\;f\left( m\right) \cdot f\left( n\right) = {2m} \cdot {2n} = 4 \cdot m \cdot n \]\n\nTherefore, \( f \) is not a ring isomorphism. This does not necessarily mean that the two rings \( \mathbb{Z} \) and \( 2\mathbb{Z} \) are not isomorphic, but simply that \( f \) doesn’t satisfy the conditions.
Yes
Next consider whether \( \left\lbrack {2\mathbb{Z};+, \cdot }\right\rbrack \) and \( \left\lbrack {3\mathbb{Z};+, \cdot }\right\rbrack \) are isomorphic.
The equation \( x + x = x \cdot x \), or \( {2x} = {x}^{2} \), makes sense in both rings. However, this equation has a nonzero solution, \( x = 2 \), in \( 2\mathbb{Z} \), but does not have a nonzero solution in \( 3\mathbb{Z} \) . Thus we have an equation solvable in one ring that cannot be solved in the other, so they cannot be isomorphic.
Yes
The set of even integers, \( 2\mathbb{Z} \), is a subring of the ring \( \left\lbrack {\mathbb{Z};+, \cdot }\right\rbrack \) since \( \left\lbrack {2\mathbb{Z}; + }\right\rbrack \) is a subgroup of the group \( \left\lbrack {\mathbb{Z}; + }\right\rbrack \) and since it is also closed with respect to multiplication:
\[ {2m},{2n} \in 2\mathbb{Z} \Rightarrow \left( {2m}\right) \cdot \left( {2n}\right) = 2\left( {2 \cdot m \cdot n}\right) \in 2\mathbb{Z} \]
Yes
Theorem 16.1.18 Some Basic Properties. Let \( \left\lbrack {R;+, \cdot }\right\rbrack \) be a ring, with \( a, b \in R \) . Then\n\n(1) \( a \cdot 0 = 0 \cdot a = 0 \)\n\n(2) \( a \cdot \left( {-b}\right) = \left( {-a}\right) \cdot b = - \left( {a \cdot b}\right) \)\n\n(3) \( \left( {-a}\right) \cdot \left( {-b}\right) = a \cdot b \)
Proof.\n\n(1) \( a \cdot 0 = a \cdot \left( {0 + 0}\right) = a \cdot 0 + a \cdot 0 \), the last equality valid by the left distributive axiom. Hence if we add \( - \left( {a \cdot 0}\right) \) to both sides of the equality above, we obtain \( a \cdot 0 = 0 \) . Similarly, we can prove that \( 0 \cdot a = 0 \).\n\n(2) Before we begin the proof of this part, recall that the inverse of each element of the group \( \left\lbrack {R; + }\right\rbrack \) is unique. Hence the inverse of the element \( a \cdot b \) is unique and it is denoted \( - \left( {a \cdot b}\right) \) . Therefore, to prove that \( a \cdot \left( {-b}\right) = \) \( - \left( {a \cdot b}\right) \), we need only show that \( a \cdot \left( {-b}\right) \) inverts \( a \cdot b \).\n\n\[ a \cdot \left( {-b}\right) + a \cdot b = a \cdot \left( {-b + b}\right) \;\text{ by the left distributive axiom } \]\n\n\[ = a \cdot 0\;\text{ since } - b\text{ inverts }b \]\n\n\[ = 0\;\text{by part 1 of this theorem} \]\n\nSimilarly, it can be shown that \( \left( {-a}\right) \cdot b = - \left( {a \cdot b}\right) \).\n\n(3) We leave the proof of part 3 to the reader as an exercise.
No
We will compute \( 2 \cdot \left( {-2}\right) \) in the ring \( \left\lbrack {{\mathbb{Z}}_{6};{ + }_{6},{ \times }_{6}}\right\rbrack \) .
\( 2{ \times }_{6}\left( {-2}\right) = - \left( {2{ \times }_{6}2}\right) = - 4 = 2 \), since the additive inverse of 4 (mod 6) is 2. Of course, we could have done the calculation directly as \( 2{ \times }_{6}\left( {-2}\right) = 2{ \times }_{6}4 = 2 \) .
Yes
Theorem 16.1.22 Multiplicative Cancellation. The multiplicative cancellation laws hold in a ring \( \left\lbrack {R;+, \cdot }\right\rbrack \) if and only if \( R \) has no zero divisors.
Proof. We prove the theorem using the left cancellation axiom, namely that if \( a \neq 0 \) and \( a \cdot b = a \cdot c \), then \( b = c \) for all \( a, b, c \in R \). The proof using the right cancellation axiom is its mirror image.\n\n\( \left( \Rightarrow \right) \) Assume the left cancellation law holds in \( R \) and assume that \( a \) and \( b \) are two elements in \( R \) such that \( a \cdot b = 0 \). We must show that either \( a = 0 \) or \( b = 0 \). To do this, assume that \( a \neq 0 \) and show that \( b \) must be 0.\n\n\[ a \cdot b = 0 \Rightarrow a \cdot b = a \cdot 0 \]\n\n\[ \Rightarrow b = 0\;\text{by the left cancellation law} \]\n\n\( \left( \Leftarrow \right) \) Conversely, assume that \( R \) has no zero divisors and we will prove that the left cancellation law must hold. To do this, assume that \( a, b, c \in R, a \neq 0 \), such that \( a \cdot b = a \cdot c \) and show that \( b = c \).\n\n\[ a \cdot b = a \cdot c \Rightarrow a \cdot b - a \cdot c = 0 \]\n\n\[ \Rightarrow a \cdot \left( {b - c}\right) = 0 \]\n\n\[ \Rightarrow b - c = 0\;\text{since there are no zero divisors} \]\n\n\[ \Rightarrow b = c \]\n\nHence, the only time that the cancellation laws hold in a ring is when there are no zero divisors.
Yes
Both \( \left\lbrack {{\mathbb{Z}}_{2};{ + }_{2},{ \times }_{2}}\right\rbrack \) and \( \left\lbrack {{\mathbb{Z}}_{3};{ + }_{3},{ \times }_{3}}\right\rbrack \) are integral domains. Consider the direct product \( {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{3} \). It’s true that \( {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{3} \) is a commutative ring with unity (see Exercise 13). However, \( \left( {1,0}\right) \cdot \left( {0,2}\right) = \left( {0,0}\right) \), so \( {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{3} \) has zero divisors and is therefore not an integral domain.
However, \( \left( {1,0}\right) \cdot \left( {0,2}\right) = \left( {0,0}\right) \), so \( {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{3} \) has zero divisors and is therefore not an integral domain.
Yes
Theorem 16.2.4 Field \( \Rightarrow \) Integral Domain. Every field is an integral domain.
Proof. The proof is fairly easy and a good exercise, so we provide a hint. Starting with the assumption that \( a \cdot b = 0 \) if we assume that \( a \neq 0 \) then the existence of \( {a}^{-1} \) makes it possible to infer that \( b = 0 \) .
No
Theorem 16.2.5 Finite Integral Domain \( \Rightarrow \) Field. Every finite integral domain is a field.
Proof. We leave the details to the reader, but observe that if \( D \) is a finite integral domain, we can list all elements as \( {a}_{1},{a}_{2},\ldots ,{a}_{n} \), where \( {a}_{1} = 1 \) . Now, to show that any \( {a}_{i} \) has a multiplicative inverse, consider the \( n \) products \( {a}_{i} \cdot {a}_{1},{a}_{i} \cdot {a}_{2},\ldots ,{a}_{i} \cdot {a}_{n} \) . What can you say about these products?
No
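The hint in the proof can be made concrete. The Python sketch below (our own illustration, taking \( {\mathbb{Z}}_{7} \) as a sample finite integral domain) checks that the products \( {a}_{i} \cdot {a}_{1},\ldots ,{a}_{i} \cdot {a}_{n} \) are a permutation of the nonzero elements, so 1 appears among them and every nonzero element has an inverse.

```python
# In the finite integral domain Z_7, multiply a fixed nonzero a by every
# nonzero element: cancellation makes the products distinct, so they are a
# permutation of the nonzero elements, and in particular 1 shows up.
n = 7
nonzero = list(range(1, n))
for a in nonzero:
    products = [(a * x) % n for x in nonzero]
    assert sorted(products) == nonzero       # distinct, hence a permutation
    inv = nonzero[products.index(1)]         # the element paired with 1
    assert (a * inv) % n == 1                # a has a multiplicative inverse
```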
In \( {\mathbb{Z}}_{3}\left\lbrack x\right\rbrack \), if \( f\left( x\right) = 1 + x \) and \( g\left( x\right) = 2 + x \), then
\[ f\left( x\right) + g\left( x\right) = \left( {1 + x}\right) + \left( {2 + x}\right) \] \[ = \left( {1{ + }_{3}2}\right) + \left( {1{ + }_{3}1}\right) x \] \[ = 0 + {2x} \] \[ = {2x} \] and \[ f\left( x\right) g\left( x\right) = \left( {1 + x}\right) \cdot \left( {2 + x}\right) \] \[ = 1{ \times }_{3}2 + \left( {1{ \times }_{3}1{ + }_{3}1{ \times }_{3}2}\right) x + \left( {1{ \times }_{3}2}\right) {x}^{2} \] \[ = 2 + {0x} + {x}^{2} \] \[ = 2 + {x}^{2} \]
Yes
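The arithmetic in \( {\mathbb{Z}}_{3}\left\lbrack x\right\rbrack \) can be mechanized with coefficient lists. A minimal Python sketch (the function names are ours, not from the text) that reproduces the sum and product above:

```python
def poly_add(f, g, m):
    """Add coefficient lists (lowest degree first), coefficients mod m."""
    n = max(len(f), len(g))
    f, g = f + [0] * (n - len(f)), g + [0] * (n - len(g))
    return [(a + b) % m for a, b in zip(f, g)]

def poly_mul(f, g, m):
    """Multiply coefficient lists mod m by the convolution formula."""
    d = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            d[i + j] = (d[i + j] + a * b) % m
    return d

f, g = [1, 1], [2, 1]                  # 1 + x and 2 + x in Z_3[x]
assert poly_add(f, g, 3) == [0, 2]     # 2x
assert poly_mul(f, g, 3) == [2, 0, 1]  # 2 + x^2
```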
Let \( f\left( x\right) = 2 + {x}^{2} \) and \( g\left( x\right) = - 1 + {4x} + 3{x}^{2} \) . We will compute \( f\left( x\right) \cdot g\left( x\right) \) in \( \mathbb{Z}\left\lbrack x\right\rbrack \) .
Using the notation of the above definition, \( {a}_{0} = 2,{a}_{1} = 0 \) , \( {a}_{2} = 1,{b}_{0} = - 1,{b}_{1} = 4 \), and \( {b}_{2} = 3 \) . We want to compute the coefficients \( {d}_{0} \) , \( {d}_{1},{d}_{2},{d}_{3} \), and \( {d}_{4} \) . We will compute \( {d}_{3} \), the coefficient of the \( {x}^{3} \) term of the product, and leave the remainder to the reader (see Exercise 2 of this section). Since the degree of both factors is 2, \( {a}_{i} = {b}_{i} = 0 \) for \( i \geq 3 \) . The coefficient of \( {x}^{3} \) is\n\n\[ \n{d}_{3} = {a}_{0}{b}_{3} + {a}_{1}{b}_{2} + {a}_{2}{b}_{1} + {a}_{3}{b}_{0} = 2 \cdot 0 + 0 \cdot 3 + 1 \cdot 4 + 0 \cdot \left( {-1}\right) = 4 \n\]
No
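The remaining coefficients come from the same convolution formula. A short Python sketch (ours, not from the text) computing every \( {d}_{k} \) for this example:

```python
def product_coeffs(a, b):
    """Coefficients d_k = sum_j a_j b_{k-j} of the product in Z[x]."""
    d = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            d[i + j] += ai * bj
    return d

a = [2, 0, 1]       # f(x) = 2 + x^2
b = [-1, 4, 3]      # g(x) = -1 + 4x + 3x^2
d = product_coeffs(a, b)
assert d[3] == 4                        # matches the d_3 computed above
assert d == [-2, 8, 5, 4, 3]            # all coefficients of f(x) g(x)
```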
Let \( f\left( x\right) = 1 + x + {x}^{3} \) and \( g\left( x\right) = 1 + x \) be two polynomials in \( {\mathbb{Z}}_{2}\left\lbrack x\right\rbrack \). Let us divide \( f\left( x\right) \) by \( g\left( x\right) \).
\[ \frac{{x}^{3} + x + 1}{x + 1} = {x}^{2} + x + \frac{1}{x + 1} \] or equivalently, \[ {x}^{3} + x + 1 = \left( {{x}^{2} + x}\right) \cdot \left( {x + 1}\right) + 1 \] That is, \( f\left( x\right) = g\left( x\right) \cdot q\left( x\right) + r\left( x\right) \) where \( q\left( x\right) = {x}^{2} + x \) and \( r\left( x\right) = 1 \). Notice that \( \deg \left( {r\left( x\right) }\right) = 0 \), which is strictly less than \( \deg \left( {g\left( x\right) }\right) = 1 \).
Yes
Let \( f\left( x\right) = 1 + {x}^{4} \) and \( g\left( x\right) = 1 + x \) be polynomials in \( {\mathbb{Z}}_{2}\left\lbrack x\right\rbrack \) . Let us divide \( f\left( x\right) \) by \( g\left( x\right) \) :
Carrying out the long division in \( {\mathbb{Z}}_{2}\left\lbrack x\right\rbrack \) one step at a time gives\n\n\[ \begin{aligned} {x}^{4} + 1 &= {x}^{3}\left( {x + 1}\right) + {x}^{3} + 1 \\ &= \left( {{x}^{3} + {x}^{2}}\right) \left( {x + 1}\right) + {x}^{2} + 1 \\ &= \left( {{x}^{3} + {x}^{2} + x}\right) \left( {x + 1}\right) + x + 1 \\ &= \left( {{x}^{3} + {x}^{2} + x + 1}\right) \left( {x + 1}\right) \end{aligned} \]\n\nThus \( {x}^{4} + 1 = \left( {{x}^{3} + {x}^{2} + x + 1}\right) \left( {x + 1}\right) \) . Since we have 0 as a remainder, \( x + 1 \) must be a factor of \( {x}^{4} + 1 \) . Also, since \( x + 1 \) is a factor of \( {x}^{4} + 1 \), 1 is a zero (or root) of \( {x}^{4} + 1 \) . Of course we could have determined that 1 is a root of \( f\left( x\right) \) simply by computing \( f\left( 1\right) = {1}^{4}{ + }_{2}1 = 1{ + }_{2}1 = 0 \) .
Yes
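Polynomial long division over \( {\mathbb{Z}}_{2} \) reduces to XOR on coefficient lists. Here is a Python sketch (the function `divmod_gf2` is our own name; it assumes the divisor has a nonzero leading coefficient) that reproduces both division examples above:

```python
def divmod_gf2(f, g):
    """Divide f by g in Z_2[x]; coefficient lists are lowest degree first."""
    r = f[:]
    q = [0] * max(len(f) - len(g) + 1, 1)
    while True:
        while r and r[-1] == 0:     # drop leading (highest-degree) zeros
            r.pop()
        if len(r) < len(g):
            return q, r
        shift = len(r) - len(g)
        q[shift] = 1                # subtract (= XOR) g * x^shift from r
        for i, c in enumerate(g):
            r[i + shift] ^= c

# (x^3 + x + 1) / (x + 1): quotient x^2 + x, remainder 1
q, r = divmod_gf2([1, 1, 0, 1], [1, 1])
assert q == [0, 1, 1] and r == [1]

# (x^4 + 1) / (x + 1): quotient x^3 + x^2 + x + 1, remainder 0
q, r = divmod_gf2([1, 0, 0, 0, 1], [1, 1])
assert q == [1, 1, 1, 1] and not any(r)
```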
Theorem 16.3.13 Division Property for Polynomials. Let \( \left\lbrack {F;+, \cdot }\right\rbrack \) be a field and let \( f\left( x\right) \) and \( g\left( x\right) \) be two elements of \( F\left\lbrack x\right\rbrack \) with \( g\left( x\right) \neq 0 \) . Then there exist unique polynomials \( q\left( x\right) \) and \( r\left( x\right) \) in \( F\left\lbrack x\right\rbrack \) such that \( f\left( x\right) = g\left( x\right) q\left( x\right) + r\left( x\right) \) , where \( \deg r\left( x\right) < \deg g\left( x\right) \) .
Proof. This theorem can be proven by induction on \( \deg f\left( x\right) \) .
No
Theorem 16.3.14 The Factor Theorem. Let \( \left\lbrack {F;+, \cdot }\right\rbrack \) be a field. An element \( a \in F \) is a zero of \( f\left( x\right) \in F\left\lbrack x\right\rbrack \) if and only if \( x - a \) is a factor of \( f\left( x\right) \) in \( F\left\lbrack x\right\rbrack \) .
Proof.\n\n\( \left( \Rightarrow \right) \) Assume that \( a \in F \) is a zero of \( f\left( x\right) \in F\left\lbrack x\right\rbrack \) . We wish to show that \( x - a \) is a factor of \( f\left( x\right) \) . To do so, apply the division property to \( f\left( x\right) \) and \( g\left( x\right) = \) \( x - a \) . Hence, there exist unique polynomials \( q\left( x\right) \) and \( r\left( x\right) \) from \( F\left\lbrack x\right\rbrack \) such that \( f\left( x\right) = \left( {x - a}\right) \cdot q\left( x\right) + r\left( x\right) \) and the \( \deg r\left( x\right) < \deg \left( {x - a}\right) = 1 \), so \( r\left( x\right) = c \in F \) , that is, \( r\left( x\right) \) is a constant. Also, the fact that \( a \) is a zero of \( f\left( x\right) \) means that \( f\left( a\right) = 0 \) . So \( f\left( x\right) = \left( {x - a}\right) \cdot q\left( x\right) + c \) becomes \( 0 = f\left( a\right) = \left( {a - a}\right) q\left( a\right) + c \) . Hence \( c = 0 \), so \( f\left( x\right) = \left( {x - a}\right) \cdot q\left( x\right) \), and \( x - a \) is a factor of \( f\left( x\right) \) . The reader should note that a critical point of the proof of this half of the theorem was the part of the division property that stated that \( \deg r\left( x\right) < \deg g\left( x\right) \) .\n\n\( \left( \Leftarrow \right) \) We leave this half to the reader as an exercise.
No
Theorem 16.3.15 A nonzero polynomial \( f\left( x\right) \in F\left\lbrack x\right\rbrack \) of degree \( n \) can have at most \( n \) zeros.
Proof. Let \( a \in F \) be a zero of \( f\left( x\right) \) . Then \( f\left( x\right) = \left( {x - a}\right) \cdot {q}_{1}\left( x\right) ,{q}_{1}\left( x\right) \in F\left\lbrack x\right\rbrack \), by the Factor Theorem. If \( b \in F \) is a zero of \( {q}_{1}\left( x\right) \), then again by Factor Theorem, \( f\left( x\right) = \left( {x - a}\right) \left( {x - b}\right) {q}_{2}\left( x\right) ,{q}_{2}\left( x\right) \in F\left\lbrack x\right\rbrack \) . Continue this process, which must terminate in at most \( n \) steps since the degree of \( {q}_{k}\left( x\right) \) would be \( n - k \) .
Yes
The polynomial \( f\left( x\right) = {x}^{4} + 1 \) is reducible over \( {\mathbb{Z}}_{2} \)
since \( {x}^{4} + 1 = \left( {x + 1}\right) \left( {{x}^{3} + {x}^{2} + x + 1}\right) \)
No
Is the polynomial \( f\left( x\right) = {x}^{3} + x + 1 \) reducible over \( {\mathbb{Z}}_{2} \) ?
Since a factorization of a cubic polynomial can only be as a product of linear and quadratic factors, or as a product of three linear factors, \( f\left( x\right) \) is reducible if and only if it has at least one linear factor. From the Factor Theorem, \( x - a \) is a factor of \( {x}^{3} + x + 1 \) over \( {\mathbb{Z}}_{2} \) if and only if \( a \in {\mathbb{Z}}_{2} \) is a zero of \( {x}^{3} + x + 1 \) . So \( {x}^{3} + x + 1 \) is reducible over \( {\mathbb{Z}}_{2} \) if and only if it has a zero in \( {\mathbb{Z}}_{2} \) . Since \( {\mathbb{Z}}_{2} \) has only two elements,0 and 1, this is easy enough to check. \( f\left( 0\right) = {0}^{3} + {}_{2}0 + {}_{2}1 = 1 \) and \( f\left( 1\right) = {1}^{3} + {}_{2}1 + {}_{2}1 = 1 \), so neither 0 nor 1 is a zero of \( f\left( x\right) \) over \( {\mathbb{Z}}_{2} \) . Hence, \( {x}^{3} + x + 1 \) is irreducible over \( {\mathbb{Z}}_{2} \) .
Yes
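The root test used here is easy to automate. A small Python sketch (ours; note that the no-root criterion settles irreducibility only for degrees 2 and 3, since a quartic could factor into two quadratics):

```python
def has_root_mod2(coeffs):
    """coeffs lowest degree first; test whether 0 or 1 is a zero in Z_2."""
    return any(sum(coeffs[i] * (a ** i) for i in range(len(coeffs))) % 2 == 0
               for a in (0, 1))

# For degree <= 3, no root in Z_2 means irreducible over Z_2.
assert not has_root_mod2([1, 1, 0, 1])      # x^3 + x + 1 is irreducible
assert has_root_mod2([1, 0, 0, 0, 1])       # x^4 + 1 has the zero 1
```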
Example 16.4.1 Extending the Rational Numbers. Let \( f\left( x\right) = {x}^{2} - 2 \in \) \( \mathbb{Q}\left\lbrack x\right\rbrack \) . It is important to remember that we are considering \( {x}^{2} - 2 \) over \( \mathbb{Q} \), no other field. We would like to find all zeros of \( f\left( x\right) \) and the smallest field, call it \( S \) for now, that contains them. The zeros are \( x = \pm \sqrt{2} \), neither of which is an element of \( \mathbb{Q} \) . The set \( S \) we are looking for must satisfy the conditions:\n\n(1) \( S \) must be a field.\n\n(2) \( S \) must contain \( \mathbb{Q} \) as a subfield,\n\n(3) \( S \) must contain all zeros of \( f\left( x\right) = {x}^{2} - 2 \)
By the last condition \( \sqrt{2} \) must be an element of \( S \), and, if \( S \) is to be a field, the sum, product, difference, and quotient of elements in \( S \) must be in \( S \) . So operations involving this number, such as \( \sqrt{2},{\left( \sqrt{2}\right) }^{2},{\left( \sqrt{2}\right) }^{3},\sqrt{2} + \sqrt{2} \) , \( \sqrt{2} - \sqrt{2} \), and \( \frac{1}{\sqrt{2}} \) must all be elements of \( S \) . Further, since \( S \) contains \( \mathbb{Q} \) as a subset, any element of \( \mathbb{Q} \) combined with \( \sqrt{2} \) under any field operation must be an element of \( S \) . Hence, every element of the form \( a + b\sqrt{2} \), where \( a \) and \( b \) can be any elements in \( \mathbb{Q} \), is an element of \( S \) . We leave it to the reader to show that \( S = \{ a + b\sqrt{2} \mid a, b \in \mathbb{Q}\} \) is a field (see Exercise 1 of this section). We note that the second zero of \( {x}^{2} - 2 \), namely \( - \sqrt{2} \), is an element of this set. To see this, simply take \( a = 0 \) and \( b = - 1 \) . The field \( S \) is frequently denoted as \( \mathbb{Q}\left( \sqrt{2}\right) \), and it is referred to as an extension field of \( \mathbb{Q} \) . Note that the polynomial \( {x}^{2} - 2 = \left( {x - \sqrt{2}}\right) \left( {x + \sqrt{2}}\right) \) factors into linear factors, or splits, in \( \mathbb{Q}\left( \sqrt{2}\right) \left\lbrack x\right\rbrack \) ; that is, all coefficients of both factors are elements of the field \( \mathbb{Q}\left( \sqrt{2}\right) \) .
No
Extending \( {\mathbb{Z}}_{2} \) . Consider the polynomial \( g\left( x\right) = {x}^{2} + x + 1 \in \) \( {\mathbb{Z}}_{2}\left\lbrack x\right\rbrack \) . Let’s repeat the steps from the previous example to factor \( g\left( x\right) \) . First, \( g\left( 0\right) = 1 \) and \( g\left( 1\right) = 1 \), so none of the elements of \( {\mathbb{Z}}_{2} \) are zeros of \( g\left( x\right) \) . Hence, the zeros of \( g\left( x\right) \) must lie in an extension field of \( {\mathbb{Z}}_{2} \) . By Theorem 16.3.15, \( g\left( x\right) = {x}^{2} + x + 1 \) can have at most two zeros. Let \( a \) be a zero of \( g\left( x\right) \) . Then the extension field \( S \) of \( {\mathbb{Z}}_{2} \) must contain, besides \( a, a \cdot a = {a}^{2},{a}^{3}, a + a \) , \( a + 1 \), and so on. But, since \( g\left( a\right) = 0 \), we have \( {a}^{2} + a + 1 = 0 \), or equivalently, \( {a}^{2} = - \left( {a + 1}\right) = a + 1 \) (remember, we are working in an extension of \( {\mathbb{Z}}_{2} \) ). We can use this recurrence relation to reduce powers of \( a \) . So far our extension field, \( S \) , of \( {\mathbb{Z}}_{2} \) must contain the set \( \{ 0,1, a, a + 1\} \), and we claim that this is the complete extension. For \( S \) to be a field, all possible sums, products, and differences of elements in \( S \) must be in \( S \) . Let’s try a few: \( a + a = a\left( {1{ + }_{2}1}\right) = a \cdot 0 = 0 \in S \) . Since \( a + a = 0 \), we have \( - a = a \), which is in \( S \) . Adding three \( a \) ’s together doesn’t give us anything new: \( a + a + a = a \in S \) . In fact, \( {na} \) is in \( S \) for all possible positive integers \( n \) . Next,
\[ {a}^{3} = {a}^{2} \cdot a \] \[ = \left( {a + 1}\right) \cdot a \] \[ = {a}^{2} + a \] \[ = \left( {a + 1}\right) + a \] \[ = 1 \] Therefore, \( {a}^{-1} = a + 1 = {a}^{2} \) and \( {\left( a + 1\right) }^{-1} = a \) . It is not difficult to see that \( {a}^{n} \) is in \( S \) for all positive \( n \) . Does \( S \) contain all zeros of \( {x}^{2} + x + 1 \) ? Remember, \( g\left( x\right) \) can have at most two distinct zeros and we called one of them \( a \), so if there is a second, it must be \( a + 1 \) . To see if \( a + 1 \) is indeed a zero of \( g\left( x\right) \), simply compute \( g\left( {a + 1}\right) \) : \[ g\left( {a + 1}\right) = {\left( a + 1\right) }^{2} + \left( {a + 1}\right) + 1 \] \[ = {a}^{2} + 1 + a + 1 + 1 \] \[ = {a}^{2} + a + 1 \] \[ = 0 \] Therefore, \( a + 1 \) is also a zero of \( {x}^{2} + x + 1 \) . Hence, \( S = \{ 0,1, a, a + 1\} \) is the smallest field that contains \( {\mathbb{Z}}_{2} = \{ 0,1\} \) as a subfield and contains all zeros of \( {x}^{2} + x + 1 \) . This extension field is denoted by \( {\mathbb{Z}}_{2}\left( a\right) \) . Note that \( {x}^{2} + x + 1 \) splits in \( {\mathbb{Z}}_{2}\left( a\right) \) ; that is, it factors into linear factors in \( {\mathbb{Z}}_{2}\left( a\right) \) . We also observe that \( {\mathbb{Z}}_{2}\left( a\right) \) is a field containing exactly four elements. By Theorem 16.2.10, we expected that \( {\mathbb{Z}}_{2}\left( a\right) \) would be of order \( {p}^{n} \) for some prime \( p \) and positive integer \( n \) . Also recall that all fields of order \( {p}^{n} \) are isomorphic. Hence, we have described all fields of order \( {2}^{2} = 4 \) by finding the extension field of a polynomial that is irreducible over \( {\mathbb{Z}}_{2} \) .
Yes
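The arithmetic of \( {\mathbb{Z}}_{2}\left( a\right) \) can be modeled directly from the relation \( {a}^{2} = a + 1 \) . A Python sketch (the pair representation and function names are our own choices, not from the text):

```python
# Elements of GF(4) as pairs (c0, c1) standing for c0 + c1*a, with a^2 = a + 1.
def add(p, q):
    return (p[0] ^ q[0], p[1] ^ q[1])       # characteristic 2: addition is XOR

def mul(p, q):
    c0 = p[0] & q[0]
    c1 = (p[0] & q[1]) ^ (p[1] & q[0])
    c2 = p[1] & q[1]                        # coefficient of a^2
    return (c0 ^ c2, c1 ^ c2)               # reduce with a^2 = a + 1

one, a = (1, 0), (0, 1)
g = lambda x: add(add(mul(x, x), x), one)   # g(x) = x^2 + x + 1

assert g(a) == (0, 0)                       # a is a zero of g
assert g(add(a, one)) == (0, 0)             # and so is a + 1
assert mul(a, add(a, one)) == one           # a^{-1} = a + 1, as in the text
```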
An Error Correcting Polynomial Code. An important observation regarding the previous example is that the nonzero elements of \( {GF}\left( 4\right) \) can be represented in two ways. First, as a linear combination of 1 and \( a \) : there are four such linear combinations, one of which is zero. Second, as powers of \( a \) : there are three distinct powers, and each matches up with a nonzero linear combination:
\[ \n{a}^{0} = 1 \cdot 1 + 0 \cdot a \n\] \n\[ \n{a}^{1} = 0 \cdot 1 + 1 \cdot a \n\] \n\[ \n{a}^{2} = 1 \cdot 1 + 1 \cdot a \n\] \n\nNext, we briefly describe the field \( {GF}\left( 8\right) \) and how an error correcting code can be built on the same observation about that field. \n\nFirst, we start with the irreducible polynomial \( p\left( x\right) = {x}^{3} + x + 1 \) over \( {\mathbb{Z}}_{2} \) . There is another such cubic polynomial, but its choice produces essentially the same result. Just as we did in the previous example, we assume we have a zero of \( p\left( x\right) \) and call it \( \beta \) . Since we have assumed that \( p\left( \beta \right) = {\beta }^{3} + \beta + 1 = 0 \), we get the recurrence relation \( {\beta }^{3} = \beta + 1 \) that lets us reduce the seven powers \( {\beta }^{k},0 \leq k \leq 6 \), to linear combinations of \( 1,\beta \), and \( {\beta }^{2} \) . Higher powers will reduce to these seven, which make up the nonzero elements of a field with \( {2}^{3} = 8 \) elements when we add zero to the set. We leave as an exercise for you to set up a table relating powers of \( \beta \) with the linear combinations. \n\nWith this information we are now in a position to take blocks of four bits and encode them with three parity bits to create an error correcting code. If the bits are \( {b}_{3}{b}_{4}{b}_{5}{b}_{6} \), then we reduce the expression \( {B}_{m} = {b}_{3} \cdot {\beta }^{3} + {b}_{4} \cdot {\beta }^{4} + {b}_{5} \cdot {\beta }^{5} + {b}_{6} \cdot {\beta }^{6} \) using the recurrence relation to an expression \( {B}_{p} = {b}_{0} \cdot 1 + {b}_{1} \cdot \beta + {b}_{2} \cdot {\beta }^{2} \) . Since we are equating equals within \( {GF}\left( 8\right) \), we have \( {B}_{p} = {B}_{m} \), or \( {B}_{p} + {B}_{m} = 0 \) . The encoded message is \( {b}_{0}{b}_{1}{b}_{2}{b}_{3}{b}_{4}{b}_{5}{b}_{6} \), which is a representation of 0 in \( {GF}\left( 8\right) \) .
If the transmitted sequence of bits is received as \( {c}_{0}{c}_{1}{c}_{2}{c}_{3}{c}_{4}{c}_{5}{c}_{6} \) we reduce \( C = {c}_{0} \cdot 1 + {c}_{1} \cdot \beta + {c}_{2} \cdot {\beta }^{2} + {c}_{3} \cdot {\beta }^{3} + {c}_{4} \cdot {\beta }^{4} + {c}_{5} \cdot {\beta }^{5} + {c}_{6} \cdot {\beta }^{6} \) using the recurrence. If there was no transmission error, the result is zero, and it is most likely that the original message was \( {c}_{3}{c}_{4}{c}_{5}{c}_{6} \) . If bit \( k \) is switched in the transmission, then \n\n\[ \nC = {B}_{p} + {B}_{m} + {\beta }^{k} = {\beta }^{k} \n\] \n\nTherefore if we reduce \( C \) with the recurrence, we get the linear combination of \( 1,\beta \), and \( {\beta }^{2} \) that is equal to \( {\beta }^{k} \), and so we can identify the location of the error and correct it.
No
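The scheme can be prototyped with the table of powers of \( \beta \) written as 3-bit integers. The Python sketch below is our own illustration, assuming the bit convention that bit \( i \) is the coefficient of \( {\beta }^{i} \) and the power table derived from \( {\beta }^{3} = \beta + 1 \) ; it checks that every single-bit error is corrected.

```python
# POW[k] is beta^k reduced via beta^3 = beta + 1; bit i = coefficient of beta^i.
POW = [0b001, 0b010, 0b100, 0b011, 0b110, 0b111, 0b101]

def encode(msg):
    """msg = [b3, b4, b5, b6]; returns the 7-bit codeword [b0, ..., b6]."""
    s = 0
    for bit, p in zip(msg, POW[3:]):
        if bit:
            s ^= p                      # addition in GF(8) is bitwise XOR
    parity = [(s >> i) & 1 for i in range(3)]
    return parity + list(msg)

def decode(word):
    """Correct at most one flipped bit and return the 4 message bits."""
    s = 0
    for bit, p in zip(word, POW):
        if bit:
            s ^= p                      # syndrome C = sum of c_k * beta^k
    word = list(word)
    if s:                               # nonzero syndrome: s = beta^k, error at k
        word[POW.index(s)] ^= 1
    return word[3:]

msg = [1, 0, 1, 1]
cw = encode(msg)
assert decode(cw) == msg
for k in range(7):                      # flip each bit in turn; all are corrected
    bad = cw[:]
    bad[k] ^= 1
    assert decode(bad) == msg
```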
Let\n\n\[ f\left( x\right) = \mathop{\sum }\limits_{{i = 0}}^{\infty }i{x}^{i} = 0 + {1x} + 2{x}^{2} + 3{x}^{3} + \cdots \;\text{ and }\n\]\n\n\[ g\left( x\right) = \mathop{\sum }\limits_{{i = 0}}^{\infty }{2}^{i}{x}^{i} = 1 + {2x} + 4{x}^{2} + 8{x}^{3} + \cdots \n\]\n\nbe elements in \( \mathbb{Z}\left\lbrack \left\lbrack x\right\rbrack \right\rbrack \) . Let us compute \( f\left( x\right) + g\left( x\right) \) and \( f\left( x\right) \cdot g\left( x\right) \) .
First the sum:\n\n\[ f\left( x\right) + g\left( x\right) = \mathop{\sum }\limits_{{i = 0}}^{\infty }i{x}^{i} + \mathop{\sum }\limits_{{i = 0}}^{\infty }{2}^{i}{x}^{i} \]\n\n\[ = \mathop{\sum }\limits_{{i = 0}}^{\infty }\left( {i + {2}^{i}}\right) {x}^{i} \]\n\n\[ = 1 + {3x} + 6{x}^{2} + {11}{x}^{3} + \cdots \]\n\nThe product is a bit more involved:\n\n\[ f\left( x\right) \cdot g\left( x\right) = \left( {\mathop{\sum }\limits_{{i = 0}}^{\infty }i{x}^{i}}\right) \cdot \left( {\mathop{\sum }\limits_{{i = 0}}^{\infty }{2}^{i}{x}^{i}}\right) \]\n\n\[ = \left( {0 + {1x} + 2{x}^{2} + 3{x}^{3} + \cdots }\right) \cdot \left( {1 + {2x} + 4{x}^{2} + 8{x}^{3} + \cdots }\right) \]\n\n\[ = 0 \cdot 1 + \left( {0 \cdot 2 + 1 \cdot 1}\right) x + \left( {0 \cdot 4 + 1 \cdot 2 + 2 \cdot 1}\right) {x}^{2} + \cdots \]\n\n\[ = x + 4{x}^{2} + {11}{x}^{3} + \cdots \]\n\n\[ = \mathop{\sum }\limits_{{i = 0}}^{\infty }{d}_{i}{x}^{i}\;\text{ where }{d}_{i} = \mathop{\sum }\limits_{{j = 0}}^{i}j{2}^{i - j} \]\n\nWe can compute any value of \( {d}_{i} \), with the amount of time/work required increasing as \( i \) increases.
Yes
Theorem 16.5.6 Polynomial Units. Let \( \left\lbrack {F;+, \cdot }\right\rbrack \) be a field. Polynomial \( f\left( x\right) \) is a unit in \( F\left\lbrack x\right\rbrack \) if and only if it is a nonzero constant polynomial.
Proof.\n\n\( \left( \Rightarrow \right) \) Let \( f\left( x\right) \) be a unit in \( F\left\lbrack x\right\rbrack \) . Then \( f\left( x\right) \) has a multiplicative inverse, call it \( g\left( x\right) \), such that \( f\left( x\right) \cdot g\left( x\right) = 1 \) . Hence, the \( \deg \left( {f\left( x\right) \cdot g\left( x\right) }\right) = \deg \left( 1\right) = 0 \) . But \( \deg \left( {f\left( x\right) \cdot g\left( x\right) }\right) = \deg f\left( x\right) + \deg g\left( x\right) \) . So \( \deg f\left( x\right) + \deg g\left( x\right) = 0 \), and since the degree of a polynomial is always nonnegative, this can only happen when the \( \deg f\left( x\right) = \deg g\left( x\right) = 0 \) . Hence, \( f\left( x\right) \) is a constant, an element of \( F \) , which is a unit if and only if it is nonzero.\n\n\( \left( \Leftarrow \right) \) If \( f\left( x\right) \) is a nonzero element of \( F \), then it is a unit since \( F \) is a field. Thus it has an inverse, which is also in \( F\left\lbrack x\right\rbrack \) and so \( f\left( x\right) \) is a unit of \( F\left\lbrack x\right\rbrack \) .
Yes
Theorem 16.5.7 Power Series Units. Let \( \left\lbrack {F;+, \cdot }\right\rbrack \) be a field. Then \( f\left( x\right) = \) \( \mathop{\sum }\limits_{{i = 0}}^{\infty }{a}_{i}{x}^{i} \) is a unit of \( F\left\lbrack \left\lbrack x\right\rbrack \right\rbrack \) if and only if \( {a}_{0} \neq 0 \) .
Proof.\n\n\( \left( \Rightarrow \right) \) If \( f\left( x\right) \) is a unit of \( F\left\lbrack \left\lbrack x\right\rbrack \right\rbrack \), then there exists \( g\left( x\right) = \mathop{\sum }\limits_{{i = 0}}^{\infty }{b}_{i}{x}^{i} \) in \( F\left\lbrack \left\lbrack x\right\rbrack \right\rbrack \) such that\n\n\[ f\left( x\right) \cdot g\left( x\right) = \left( {{a}_{0} + {a}_{1}x + {a}_{2}{x}^{2} + \cdots }\right) \cdot \left( {{b}_{0} + {b}_{1}x + {b}_{2}{x}^{2} + \cdots }\right) \]\n\n\[ = 1 \]\n\n\[ = 1 + {0x} + 0{x}^{2} + \cdots \]\n\nSince corresponding coefficients in the equation above must be equal, \( {a}_{0} \cdot {b}_{0} = 1 \), which implies that \( {a}_{0} \neq 0 \).\n\n\( \left( \Leftarrow \right) \) Assume that \( {a}_{0} \neq 0 \). To prove that \( f\left( x\right) \) is a unit of \( F\left\lbrack \left\lbrack x\right\rbrack \right\rbrack \) we need to find \( g\left( x\right) = \mathop{\sum }\limits_{{i = 0}}^{\infty }{b}_{i}{x}^{i} \) in \( F\left\lbrack \left\lbrack x\right\rbrack \right\rbrack \) such that \( f\left( x\right) \cdot g\left( x\right) = \mathop{\sum }\limits_{{i = 0}}^{\infty }{d}_{i}{x}^{i} = 1 \).
If we use the formula for the coefficients of \( f\left( x\right) \cdot g\left( x\right) \) and equate coefficients, we get\n\n\[ {d}_{0} = {a}_{0} \cdot {b}_{0} = 1\; \Rightarrow \;{b}_{0} = {a}_{0}{}^{-1} \]\n\n\[ {d}_{1} = {a}_{0} \cdot {b}_{1} + {a}_{1} \cdot {b}_{0} = 0\; \Rightarrow \;{b}_{1} = - {a}_{0}{}^{-1} \cdot \left( {{a}_{1} \cdot {b}_{0}}\right) \]\n\n\[ {d}_{2} = {a}_{0}{b}_{2} + {a}_{1}{b}_{1} + {a}_{2}{b}_{0} = 0\; \Rightarrow \;{b}_{2} = - {a}_{0}{}^{-1} \cdot \left( {{a}_{1} \cdot {b}_{1} + {a}_{2} \cdot {b}_{0}}\right) \]\n\n\[ {d}_{s} = {a}_{0} \cdot {b}_{s} + {a}_{1} \cdot {b}_{s - 1} + \cdots + {a}_{s} \cdot {b}_{0} = 0\; \Rightarrow \;{b}_{s} = - {a}_{0}{}^{-1} \cdot \left( {{a}_{1} \cdot {b}_{s - 1} + {a}_{2} \cdot {b}_{s - 2} + \cdots + {a}_{s} \cdot {b}_{0}}\right) \]\n\nTherefore the power series \( \mathop{\sum }\limits_{{i = 0}}^{\infty }{b}_{i}{x}^{i} \) is an expression whose coefficients lie in \( F \) and that satisfies the statement \( f\left( x\right) \cdot g\left( x\right) = 1 \). Hence, \( g\left( x\right) \) is the multiplicative inverse of \( f\left( x\right) \) and \( f\left( x\right) \) is a unit.
Yes
Let \( f\left( x\right) = 1 + {2x} + 3{x}^{2} + 4{x}^{3} + \cdots = \mathop{\sum }\limits_{{i = 0}}^{\infty }\left( {i + 1}\right) {x}^{i} \) be an element of \( \mathbb{Q}\left\lbrack \left\lbrack x\right\rbrack \right\rbrack \) . Then, by Theorem 16.5.7, since \( {a}_{0} = 1 \neq 0, f\left( x\right) \) is a unit and has an inverse, call it \( g\left( x\right) \) . To compute \( g\left( x\right) \), we follow the procedure outlined in the above theorem.
Using the formulas for the \( {b}_{i}^{\prime }\mathrm{s} \), we obtain\n\n\[ \n{b}_{0} = 1 \n\]\n\n\[ \n{b}_{1} = - 1\left( {2 \cdot 1}\right) = - 2 \n\]\n\n\[ \n{b}_{2} = - 1\left( {2 \cdot \left( {-2}\right) + 3 \cdot 1}\right) = 1 \n\]\n\n\[ \n{b}_{3} = - 1\left( {2 \cdot 1 + 3 \cdot \left( {-2}\right) + 4 \cdot 1}\right) = 0 \n\]\n\n\[ \n{b}_{4} = - 1\left( {2 \cdot 0 + 3 \cdot 1 + 4 \cdot \left( {-2}\right) + 5 \cdot 1}\right) = 0 \n\]\n\n\[ \n{b}_{5} = - 1\left( {2 \cdot 0 + 3 \cdot 0 + 4 \cdot \left( 1\right) + 5 \cdot \left( {-2}\right) + 6 \cdot 1}\right) = 0 \n\]\n\n\[ \n\vdots \n\]\n\nFor \( s \geq 3 \), we have\n\n\[ \n{b}_{s} = - 1\left( {2 \cdot 0 + 3 \cdot 0 + \cdots \left( {s - 2}\right) \cdot 0 + \left( {s - 1}\right) \cdot 1 + s \cdot \left( {-2}\right) + \left( {s + 1}\right) \cdot 1}\right) = 0 \n\]\n\nHence, \( g\left( x\right) = 1 - {2x} + {x}^{2} \) is the multiplicative inverse of \( f\left( x\right) \) .
Yes
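The recurrence from Theorem 16.5.7 is directly computable. A Python sketch (our own, using exact rational arithmetic) that recovers \( g\left( x\right) = 1 - {2x} + {x}^{2} \) from this example:

```python
from fractions import Fraction

def series_inverse(a, n):
    """First n coefficients of the inverse of sum a_i x^i, assuming a[0] != 0."""
    b = [Fraction(1, 1) / a[0]]                 # b_0 = a_0^{-1}
    for s in range(1, n):
        # b_s = -a_0^{-1} (a_1 b_{s-1} + a_2 b_{s-2} + ... + a_s b_0)
        acc = sum(a[i] * b[s - i] for i in range(1, min(s, len(a) - 1) + 1))
        b.append(-b[0] * acc)
    return b

f = [i + 1 for i in range(8)]                   # 1 + 2x + 3x^2 + 4x^3 + ...
assert series_inverse(f, 8) == [1, -2, 1, 0, 0, 0, 0, 0]
```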
Consider a circle of radius 5 in 2-space centered at the origin. We know that we can parameterize this circle as\n\n\[ \mathbf{r}\left( t\right) = \langle 5\cos \left( t\right) ,5\sin \left( t\right) \rangle \]\n\nwhere \( t \) runs from 0 to \( {2\pi } \) .
We see that \( {\mathbf{r}}^{\prime }\left( t\right) = \langle - 5\sin \left( t\right) ,5\cos \left( t\right) \rangle \), and hence \( \left| {{\mathbf{r}}^{\prime }\left( t\right) }\right| = 5 \) . It then follows that\n\n\[ s = L\left( t\right) = {\int }_{0}^{t}\left| {{\mathbf{r}}^{\prime }\left( w\right) }\right| {dw} = {\int }_{0}^{t}{5dw} = {5t}. \]\n\nSince \( s = L\left( t\right) = {5t} \), we may solve for \( t \) in terms of \( s \) to obtain \( t\left( s\right) = \) \( {L}^{-1}\left( s\right) = s/5 \) . We then find the arc length parametrization by composing\n\n\[ \mathbf{r}\left( {t\left( s\right) }\right) = \mathbf{r}\left( {{L}^{-1}\left( s\right) }\right) = \left\langle {5\cos \left( \frac{s}{5}\right) ,5\sin \left( \frac{s}{5}\right) }\right\rangle . \]\n\nMore generally, for a circle of radius \( a \) centered at the origin, a similar computation shows that\n\n\[ \left\langle {a\cos \left( \frac{s}{a}\right), a\sin \left( \frac{s}{a}\right) }\right\rangle \]\n\nis an arc length parametrization.
Yes
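A quick numerical sanity check (our own sketch, not from the text): an arc length parametrization should have unit speed, which we can test with a finite difference.

```python
import math

def r(s, a=5.0):
    """Arc length parametrization of the circle of radius a."""
    return (a * math.cos(s / a), a * math.sin(s / a))

# Unit-speed check at an arbitrary s value via a centered finite difference.
h, s0 = 1e-6, 2.0
dx = (r(s0 + h)[0] - r(s0 - h)[0]) / (2 * h)
dy = (r(s0 + h)[1] - r(s0 - h)[1]) / (2 * h)
assert abs(math.hypot(dx, dy) - 1.0) < 1e-6

# The full circle is traced as s runs from 0 to 2*pi*a, the circumference.
assert abs(r(2 * math.pi * 5.0)[0] - 5.0) < 1e-9
```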
Let us parameterize the curve defined by\n\n\[ \n\mathbf{r}\left( t\right) = \left\langle {{t}^{2},\frac{8}{3}{t}^{3/2},{4t}}\right\rangle \n\]\n\nfor \( t \geq 0 \) in terms of arc length.
To write \( t \) in terms of \( s \) we find \( s \) in terms of \( t \) :\n\n\[ \ns\left( t\right) = {\int }_{0}^{t}\sqrt{{\left( {x}^{\prime }\left( w\right) \right) }^{2} + {\left( {y}^{\prime }\left( w\right) \right) }^{2} + {\left( {z}^{\prime }\left( w\right) \right) }^{2}}{dw} \n\]\n\n\[ \n= {\int }_{0}^{t}\sqrt{{\left( 2w\right) }^{2} + {\left( 4{w}^{1/2}\right) }^{2} + {\left( 4\right) }^{2}}{dw} \n\]\n\n\[ \n= {\int }_{0}^{t}\sqrt{4{w}^{2} + {16w} + {16}}{dw} \n\]\n\n\[ \n= 2{\int }_{0}^{t}\sqrt{{\left( w + 2\right) }^{2}}{dw} \n\]\n\n\[ \n= 2{\int }_{0}^{t}\left( {w + 2}\right) {dw} \n\]\n\n\[ \n= {\left. \left( {w}^{2} + 4w\right) \right| }_{0}^{t} \n\]\n\n\[ \n= {t}^{2} + {4t}. \n\]\n\nSince \( t \geq 0 \), we can solve the equation \( s = {t}^{2} + {4t} \) (or \( {t}^{2} + {4t} - s = 0 \) ) for \( t \) to obtain \( t = \frac{-4 + \sqrt{{16} + {4s}}}{2} = - 2 + \sqrt{4 + s} \) . So we can parameterize our curve in terms of arc length by\n\n\[ \n\mathbf{r}\left( s\right) = \left\langle {{\left( -2 + \sqrt{4 + s}\right) }^{2},\frac{8}{3}{\left( -2 + \sqrt{4 + s}\right) }^{3/2},4\left( {-2 + \sqrt{4 + s}}\right) }\right\rangle . \n\]
Yes
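We can corroborate the closed form \( s\left( t\right) = {t}^{2} + {4t} \) numerically (our own sketch, using a midpoint-rule approximation of the arc length integral):

```python
import math

def speed(t):
    """|r'(t)| for r(t) = <t^2, (8/3) t^{3/2}, 4t>."""
    return math.sqrt((2 * t) ** 2 + (4 * math.sqrt(t)) ** 2 + 16)

def arclen(t, n=100000):
    """Midpoint-rule approximation of s(t), the integral of |r'| from 0 to t."""
    h = t / n
    return sum(speed((k + 0.5) * h) for k in range(n)) * h

t = 3.0
assert abs(arclen(t) - (t ** 2 + 4 * t)) < 1e-6       # s(3) = 21
assert abs((-2 + math.sqrt(4 + 21.0)) - t) < 1e-12    # t(s) recovers t = 3 at s = 21
```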
Consider the function \( f \) defined by\n\n\[ f\left( {x, y}\right) = \frac{{x}^{2}{y}^{2}}{{x}^{2} + {y}^{2}}.\]\n\nWe want to know whether \( \mathop{\lim }\limits_{{\left( {x, y}\right) \rightarrow \left( {0,0}\right) }}f\left( {x, y}\right) \) exists.
Note that if either \( x \) or \( y \) is 0, then \( f\left( {x, y}\right) = 0 \) . Therefore, if \( f \) has a limit at \( \left( {0,0}\right) \), it must be 0 . We will therefore argue that\n\n\[ \mathop{\lim }\limits_{{\left( {x, y}\right) \rightarrow \left( {0,0}\right) }}f\left( {x, y}\right) = 0 \]\n\nby showing that we can make \( f\left( {x, y}\right) \) as close to 0 as we wish by taking \( \left( {x, y}\right) \) sufficiently close (but not equal) to \( \left( {0,0}\right) \) . In what follows, we view \( x \) and \( y \) as being real numbers that are close, but not equal, to 0 .\n\nSince \( 0 \leq {x}^{2} \), we have\n\n\[ {y}^{2} \leq {x}^{2} + {y}^{2} \]\n\nwhich implies that\n\n\[ \frac{{y}^{2}}{{x}^{2} + {y}^{2}} \leq 1 \]\n\nMultiplying both sides by \( {x}^{2} \) and observing that \( f\left( {x, y}\right) \geq 0 \) for all \( \left( {x, y}\right) \) gives\n\n\[ 0 \leq f\left( {x, y}\right) = \frac{{x}^{2}{y}^{2}}{{x}^{2} + {y}^{2}} = {x}^{2}\left( \frac{{y}^{2}}{{x}^{2} + {y}^{2}}\right) \leq {x}^{2}. \]\n\nThus, \( 0 \leq f\left( {x, y}\right) \leq {x}^{2} \) . Since \( {x}^{2} \rightarrow 0 \) as \( x \rightarrow 0 \), we can make \( f\left( {x, y}\right) \) as close to 0 as we like by taking \( x \) sufficiently close to 0 (for this example, it turns out that we don’t even need to worry about making \( y \) close to 0 ). Therefore,\n\n\[ \mathop{\lim }\limits_{{\left( {x, y}\right) \rightarrow \left( {0,0}\right) }}\frac{{x}^{2}{y}^{2}}{{x}^{2} + {y}^{2}} = 0. \]
Yes
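The squeeze \( 0 \leq f\left( {x, y}\right) \leq {x}^{2} \) is easy to probe numerically (our own sketch, sampling points that approach the origin along a line):

```python
# f(x, y) = x^2 y^2 / (x^2 + y^2); check the bound 0 <= f <= x^2 numerically.
f = lambda x, y: (x * x * y * y) / (x * x + y * y)
for k in range(1, 8):
    x, y = 10.0 ** -k, 3.0 * 10.0 ** -k      # approach (0, 0) along y = 3x
    assert 0.0 <= f(x, y) <= x * x
assert f(1e-7, 3e-7) < 1e-13                 # the values shrink toward 0
```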
Suppose we have a machine that manufactures rectangles of width \( x = {20}\mathrm{\;{cm}} \) and height \( y = {10}\mathrm{\;{cm}} \). However, the machine isn’t perfect, and therefore the width could be off by \( {dx} = {\Delta x} = {0.2}\mathrm{\;{cm}} \) and the height could be off by \( {dy} = {\Delta y} = {0.4}\mathrm{\;{cm}} \). The area of the rectangle is \( A\left( {x, y}\right) = {xy} \) so that the area of a perfectly manufactured rectangle is \( A\left( {{20},{10}}\right) = {200} \) square centimeters. Since the machine isn't perfect, we would like to know how much the area of a given manufactured rectangle could differ from the perfect rectangle.
We will estimate the uncertainty in the area using (10.4.2), and find that \( {\Delta A} \approx {dA} = {A}_{x}\left( {{20},{10}}\right) {dx} + {A}_{y}\left( {{20},{10}}\right) {dy}. \) Since \( {A}_{x} = y \) and \( {A}_{y} = x \), we have \( {\Delta A} \approx {dA} = {10dx} + {20dy} = {10} \cdot {0.2} + {20} \cdot {0.4} = {10}. \) That is, we estimate that the area of a given manufactured rectangle could be off by as much as 10 square centimeters.
Yes
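The estimate is a one-line computation; the sketch below (ours, not from the text) also compares it with the exact worst-case change in area, \( {20.2} \cdot {10.4} - {200} = {10.08} \) square centimeters, showing how close the linear approximation is.

```python
# Differential estimate dA = A_x dx + A_y dy for A(x, y) = x*y at (20, 10).
x, y, dx, dy = 20.0, 10.0, 0.2, 0.4
dA = y * dx + x * dy                  # A_x = y, A_y = x
assert dA == 10.0

# Compare with the exact worst-case change in area.
actual = (x + dx) * (y + dy) - x * y
assert abs(actual - 10.08) < 1e-9
```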
Let \( f\left( {x, y}\right) = {x}^{2}y \) be defined on the triangle \( D \) with vertices \( \left( {0,0}\right) ,\left( {2,0}\right) \), and \( \left( {2,3}\right) \). To evaluate \( {\iint }_{D}f\left( {x, y}\right) {dA} \), we must first describe the region \( D \) in terms of the variables \( x \) and \( y \).
Approach 1: Integrate first with respect to \( y \). In this case we choose to evaluate the double integral as an iterated integral in the form\n\n\[ \n{\iint }_{D}{x}^{2}{ydA} = {\int }_{x = a}^{x = b}{\int }_{y = {g}_{1}\left( x\right) }^{y = {g}_{2}\left( x\right) }{x}^{2}{ydydx} \n\]\n\nand therefore we need to describe \( D \) in terms of inequalities\n\n\[ \n{g}_{1}\left( x\right) \leq y \leq {g}_{2}\left( x\right) \;\text{ and }\;a \leq x \leq b. \n\]\n\nSince we are integrating with respect to \( y \) first, the iterated integral has the form\n\n\[ \n{\iint }_{D}{x}^{2}{ydA} = {\int }_{x = a}^{x = b}A\left( x\right) {dx} \n\]\n\nwhere \( A\left( x\right) \) is a cross sectional area in the \( y \) direction. So we are slicing the domain perpendicular to the \( x \) -axis and want to understand what a cross sectional area of the overall solid will look like. Several slices of the domain are shown in the middle image in Figure 11.3.4. On a slice with fixed \( x \) value, the \( y \) values are bounded below by 0 and above by the \( y \) coordinate on the hypotenuse of the right triangle. Thus, \( {g}_{1}\left( x\right) = 0 \) ; to find \( y = {g}_{2}\left( x\right) \), we need to write the hypotenuse as a function of \( x \) . The hypotenuse connects the points \( \left( {0,0}\right) \) and \( \left( {2,3}\right) \) and hence has equation \( y = \frac{3}{2}x \) . This gives the upper bound on \( y \) as \( {g}_{2}\left( x\right) = \frac{3}{2}x \) . The leftmost vertical cross section is at \( x = 0 \) and the rightmost one is at \( x = 2 \), so we have \( a = 0 \) and \( b = 2 \) . Therefore,\n\n\[ \n{\iint }_{D}{x}^{2}{ydA} = {\int }_{x = 0}^{x = 2}{\int }_{y = 0}^{y = \frac{3}{2}x}{x}^{2}{ydydx}. 
\n\]\n\nWe evaluate the iterated integral by applying the Fundamental Theorem of Calculus first to the inner integral, and then to the outer one, and find that\n\n\[ \n{\int }_{x = 0}^{x = 2}{\int }_{y = 0}^{y = \frac{3}{2}x}{x}^{2}{ydydx} = {\int }_{x = 0}^{x = 2}\left\lbrack {{x}^{2} \cdot \frac{{y}^{2}}{2}}\right\rbrack {\left. \right| }_{y = 0}^{y = \frac{3}{2}x}\;{dx} \n\]\n\n\[ \n= {\int }_{x = 0}^{x = 2}\frac{9}{8}{x}^{4}{dx} \n\]\n\n\[ \n= {\left. \frac{9}{8}\frac{{x}^{5}}{5}\right| }_{x = 0}^{x = 2} \n\]\n\n\[ \n= \left( \frac{9}{8}\right) \left( \frac{32}{5}\right) \n\]\n\n\[ \n= \frac{36}{5}\text{. } \n\]
Yes
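The value \( {36}/5 \) can be corroborated with a midpoint Riemann sum over the triangle (our own numerical sketch, not part of the text's solution):

```python
# Midpoint Riemann sum for the double integral of x^2 y over the triangle
# 0 <= x <= 2, 0 <= y <= (3/2) x; the exact value computed above is 36/5.
n = 400
hx = 2.0 / n
total = 0.0
for i in range(n):
    x = (i + 0.5) * hx
    hy = 1.5 * x / n                  # subdivide the vertical slice at this x
    for j in range(n):
        y = (j + 0.5) * hy
        total += x * x * y * hx * hy
assert abs(total - 36 / 5) < 1e-3
```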
Example 11.5.3 Let \( f\left( {x, y}\right) = {e}^{{x}^{2} + {y}^{2}} \) on the disk \( D = \left\{ {\left( {x, y}\right) : {x}^{2} + {y}^{2} \leq 1}\right\} \) . We will evaluate \( {\iint }_{D}f\left( {x, y}\right) {dA} \) .
In rectangular coordinates the double integral \( {\iint }_{D}f\left( {x, y}\right) {dA} \) can be written as the iterated integral\n\n\[ \n{\iint }_{D}f\left( {x, y}\right) {dA} = {\int }_{x = - 1}^{x = 1}{\int }_{y = - \sqrt{1 - {x}^{2}}}^{y = \sqrt{1 - {x}^{2}}}{e}^{{x}^{2} + {y}^{2}}{dydx}. \n\]\n\nWe cannot evaluate this iterated integral, because \( {e}^{{x}^{2} + {y}^{2}} \) does not have an elementary antiderivative with respect to either \( x \) or \( y \) . However, since \( {r}^{2} = {x}^{2} + {y}^{2} \) and the region \( D \) is circular, it is natural to wonder whether converting to polar coordinates will allow us to evaluate the new integral. To do so, we replace \( x \) with \( r\cos \left( \theta \right), y \) with \( r\sin \left( \theta \right) \), and \( {dydx} \) with \( {rdrd\theta } \) to\n\nobtain\n\[ \n{\iint }_{D}f\left( {x, y}\right) {dA} = {\iint }_{D}{e}^{{r}^{2}}{rdrd\theta }. \n\]\n\nThe disc \( D \) is described in polar coordinates by the constraints \( 0 \leq r \leq 1 \) and \( 0 \leq \theta \leq {2\pi } \) . Therefore, it follows that\n\n\[ \n{\iint }_{D}{e}^{{r}^{2}}{rdrd\theta } = {\int }_{\theta = 0}^{\theta = {2\pi }}{\int }_{r = 0}^{r = 1}{e}^{{r}^{2}}{rdrd\theta }. \n\]\n\nWe can evaluate the resulting iterated polar integral as follows:\n\n\[ \n{\int }_{\theta = 0}^{\theta = {2\pi }}{\int }_{r = 0}^{r = 1}{e}^{{r}^{2}}{rdrd\theta } = {\int }_{\theta = 0}^{2\pi }\left( {\left. \frac{1}{2}{e}^{{r}^{2}}\right| }_{r = 0}^{r = 1}\right) {d\theta } \n\]\n\n\[ \n= \frac{1}{2}{\int }_{\theta = 0}^{\theta = {2\pi }}\left( {e - 1}\right) {d\theta } \n\]\n\n\[ \n= \frac{1}{2}\left( {e - 1}\right) {\int }_{\theta = 0}^{\theta = {2\pi }}{d\theta } \n\]\n\n\[ \n= {\left. \frac{1}{2}\left( e - 1\right) \left\lbrack \theta \right\rbrack \right| }_{\theta = 0}^{\theta = {2\pi }} \n\]\n\n\[ \n= \pi \left( {e - 1}\right) \text{.} \n\]
Example 11.6.1 Consider the torus (or doughnut) shown in Figure 11.6.2.
To find a parametrization of this torus, we recall our work in Preview Activity 11.6.1. There, we saw that a circle of radius \( r \) that has its center at the point \( (0,0,z_0) \) and is contained in the horizontal plane \( z = z_0 \), as shown in Figure 11.6.3, can be parametrized using the vector-valued function \( \mathbf{r} \) defined by

\[ \mathbf{r}(t) = r\cos(t)\,\mathbf{i} + r\sin(t)\,\mathbf{j} + z_0\,\mathbf{k}, \]

where \( 0 \leq t \leq 2\pi \).

To obtain the torus in Figure 11.6.2, we begin with a circle of radius \( a \) in the \( xz \)-plane centered at \( (b,0) \), as shown on the left of Figure 11.6.4. We may parametrize the points on this circle, using the parameter \( s \), by the equations

\[ x(s) = b + a\cos(s) \quad\text{and}\quad z(s) = a\sin(s), \]

where \( 0 \leq s \leq 2\pi \).

Let's focus our attention on one point on this circle, such as the indicated point, which has coordinates \( (x(s), 0, z(s)) \) for a fixed value of the parameter \( s \). When this point is revolved about the \( z \)-axis, we obtain a circle contained in a horizontal plane centered at \( (0,0,z(s)) \) and having radius \( x(s) \), as shown on the right of Figure 11.6.4. If we let \( t \) be the new parameter that generates the circle for the rotation about the \( z \)-axis, this circle may be parametrized by

\[ \mathbf{r}(s,t) = x(s)\cos(t)\,\mathbf{i} + x(s)\sin(t)\,\mathbf{j} + z(s)\,\mathbf{k}. \]

Now using our earlier parametric equations for \( x(s) \) and \( z(s) \) for the original smaller circle, we have an overall parametrization of the torus given by

\[ \mathbf{r}(s,t) = (b + a\cos(s))\cos(t)\,\mathbf{i} + (b + a\cos(s))\sin(t)\,\mathbf{j} + a\sin(s)\,\mathbf{k}. \]

To trace out the entire torus, we require that the parameters vary through the values \( 0 \leq s \leq 2\pi \) and \( 0 \leq t \leq 2\pi \).
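A quick way to check the parametrization is to verify that every sampled point satisfies the torus's implicit equation \( (\sqrt{x^2+y^2} - b)^2 + z^2 = a^2 \). The sketch below is our own check (the values \( a = 1 \), \( b = 3 \) are illustrative choices, not from the text).

```python
import math

# Sample r(s, t) and confirm each point lies on the implicit surface
# (sqrt(x^2 + y^2) - b)^2 + z^2 = a^2, here with a = 1, b = 3.
a, b = 1.0, 3.0

def torus_point(s, t):
    x = (b + a * math.cos(s)) * math.cos(t)
    y = (b + a * math.cos(s)) * math.sin(t)
    z = a * math.sin(s)
    return x, y, z

ok = True
for s in (0.0, 1.0, 2.5, 4.0):
    for t in (0.0, 1.5, 3.0, 5.5):
        x, y, z = torus_point(s, t)
        residual = (math.hypot(x, y) - b) ** 2 + z ** 2 - a ** 2
        ok = ok and abs(residual) < 1e-12
print(ok)  # → True
```

The residual reduces algebraically to \( a^2\cos^2(s) + a^2\sin^2(s) - a^2 = 0 \), so only floating-point rounding remains.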
Find the mass of the tetrahedron in the first octant bounded by the coordinate planes and the plane \( x + {2y} + {3z} = 6 \) if the density at point \( \left( {x, y, z}\right) \) is given by \( \delta \left( {x, y, z}\right) = x + y + z \) .
We find the mass, \( M \), of the tetrahedron by the triple integral

\[ M = \iiint_S \delta(x,y,z)\,dV, \]

where \( S \) is the solid tetrahedron described above. In this example, we choose to integrate with respect to \( z \) first for the innermost integral. The top of the tetrahedron is given by the equation

\[ x + 2y + 3z = 6; \]

solving for \( z \) then yields

\[ z = \frac{1}{3}(6 - x - 2y). \]

The bottom of the tetrahedron is the \( xy \)-plane, so the limits on \( z \) in the iterated integral will be \( 0 \leq z \leq \frac{1}{3}(6 - x - 2y) \).

To find the bounds on \( x \) and \( y \) we project the tetrahedron onto the \( xy \)-plane; this corresponds to setting \( z = 0 \) in the equation \( z = \frac{1}{3}(6 - x - 2y) \). The resulting relation between \( x \) and \( y \) is

\[ x + 2y = 6. \]

The right image in Figure 11.7.6 shows the projection of the tetrahedron onto the \( xy \)-plane.

If we choose to integrate with respect to \( y \) for the middle integral in the iterated integral, then the lower limit on \( y \) is the \( x \)-axis and the upper limit is the hypotenuse of the triangle. Note that the hypotenuse joins the points \( (6,0) \) and \( (0,3) \) and so has equation \( y = 3 - \frac{1}{2}x \). Thus, the bounds on \( y \) are \( 0 \leq y \leq 3 - \frac{1}{2}x \). Finally, the \( x \) values run from 0 to 6, so the iterated integral that gives the mass of the tetrahedron is

\[ M = \int_{0}^{6}\int_{0}^{3 - (1/2)x}\int_{0}^{(1/3)(6 - x - 2y)} (x + y + z)\,dz\,dy\,dx. \]

Evaluating the triple integral gives us

\[ M = \int_{0}^{6}\int_{0}^{3 - (1/2)x}\int_{0}^{(1/3)(6 - x - 2y)} (x + y + z)\,dz\,dy\,dx \]

\[ = \int_{0}^{6}\int_{0}^{3 - (1/2)x} \left. \left\lbrack xz + yz + \frac{z^{2}}{2}\right\rbrack \right|_{0}^{(1/3)(6 - x - 2y)}\,dy\,dx \]

\[ = \int_{0}^{6}\int_{0}^{3 - (1/2)x} \frac{4}{3}x - \frac{5}{18}x^{2} - \frac{7}{9}xy + \frac{2}{3}y - \frac{4}{9}y^{2} + 2\,dy\,dx \]

\[ = \int_{0}^{6} \left. \left\lbrack \frac{4}{3}xy - \frac{5}{18}x^{2}y - \frac{7}{18}x y^{2} + \frac{1}{3}y^{2} - \frac{4}{27}y^{3} + 2y\right\rbrack \right|_{0}^{3 - (1/2)x}\,dx \]

\[ = \int_{0}^{6} 5 + \frac{1}{2}x - \frac{7}{12}x^{2} + \frac{13}{216}x^{3}\,dx \]

\[ = \left. \left\lbrack 5x + \frac{1}{4}x^{2} - \frac{7}{36}x^{3} + \frac{13}{864}x^{4}\right\rbrack \right|_{0}^{6} \]

\[ = \frac{33}{2}. \]
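The value \( M = 33/2 \) can be cross-checked numerically. The sketch below (our own check; the function name is illustrative) evaluates the inner \( z \)-antiderivative \( (x+y)z + z^2/2 \) at the top surface \( z = (6 - x - 2y)/3 \) and sums it over the triangular projection with a midpoint rule.

```python
# Midpoint-rule approximation of the remaining double integral over the
# triangle 0 <= x <= 6, 0 <= y <= 3 - x/2.
def mass(n=400):
    dx = 6.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        ytop = 3 - 0.5 * x
        dy = ytop / n
        for j in range(n):
            y = (j + 0.5) * dy
            ztop = (6 - x - 2 * y) / 3          # top of the tetrahedron
            total += ((x + y) * ztop + ztop ** 2 / 2) * dx * dy
    return total

print(abs(mass() - 33 / 2) < 1e-2)  # → True
```

Because the integrand is a low-degree polynomial, the midpoint rule with 400 subdivisions per axis is accurate to roughly \( 10^{-4} \).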
Say we have an object, and 5 measurements of its length from the same ruler but from different people, \[ {5.1}\left\lbrack \mathrm{\;{cm}}\right\rbrack ,{4.9}\left\lbrack \mathrm{\;{cm}}\right\rbrack ,{4.7}\left\lbrack \mathrm{\;{cm}}\right\rbrack ,{4.9}\left\lbrack \mathrm{\;{cm}}\right\rbrack ,{5.0}\left\lbrack \mathrm{\;{cm}}\right\rbrack \] Unlike earlier, let's say that we don't know the uncertainty (given this ruler) of one measurement. What is the best estimate of the length?
Again, the best estimate should be given by the sample mean of these 5 samples,

\[ \widehat{\mu} = \frac{x_1 + x_2 + \cdots + x_N}{N} = \frac{5.1\,[\mathrm{cm}] + 4.9\,[\mathrm{cm}] + 4.7\,[\mathrm{cm}] + 4.9\,[\mathrm{cm}] + 5.0\,[\mathrm{cm}]}{5} = 4.92\,[\mathrm{cm}], \]

with uncertainty related to the adjusted sample deviation,

\[ S^2 = \frac{1}{N-1}\left( (x_1 - \bar{x})^2 + \cdots + (x_N - \bar{x})^2 \right) \]

\[ = \frac{1}{5-1}\Big( (5.1\,[\mathrm{cm}] - 4.92\,[\mathrm{cm}])^2 + (4.9\,[\mathrm{cm}] - 4.92\,[\mathrm{cm}])^2 + (4.7\,[\mathrm{cm}] - 4.92\,[\mathrm{cm}])^2 \]

\[ \qquad + (4.9\,[\mathrm{cm}] - 4.92\,[\mathrm{cm}])^2 + (5.0\,[\mathrm{cm}] - 4.92\,[\mathrm{cm}])^2 \Big) \]

\[ = 0.022\,[\mathrm{cm}]^2 \]

\[ S = \sqrt{0.022\,[\mathrm{cm}]^2} = 0.148\,[\mathrm{cm}] \]

\[ \frac{S}{\sqrt{N}} = \frac{0.148\,[\mathrm{cm}]}{\sqrt{5}} = 0.066\,[\mathrm{cm}] \]

\[ k = 1 + \frac{20}{5^2} = 1.8 \]

\[ k \cdot \frac{S}{\sqrt{N}} = 1.8 \cdot 0.066\,[\mathrm{cm}] = 0.119\,[\mathrm{cm}], \]

yielding a final best estimate of

\[ \widehat{\mu} = 4.92\,[\mathrm{cm}] \pm 0.119\,[\mathrm{cm}], \]

or (with \( 2\sigma \) range),

\[ 4.92\,[\mathrm{cm}],\ 95\%\,\mathrm{CI} = [4.68\,[\mathrm{cm}],\, 5.16\,[\mathrm{cm}]]. \]
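The arithmetic above is easy to reproduce with Python's `statistics` module; this is a quick cross-check of our own (the variable names are illustrative), where `stdev` applies the same \( N - 1 \) adjustment used in the formula for \( S \).

```python
import math
import statistics

data = [5.1, 4.9, 4.7, 4.9, 5.0]   # measured lengths in cm
N = len(data)

mean = statistics.fmean(data)       # sample mean
S = statistics.stdev(data)          # adjusted (N - 1) sample deviation
sem = S / math.sqrt(N)              # standard error of the mean
k = 1 + 20 / N**2                   # small-sample factor from the text

print(round(mean, 3), round(S**2, 4), round(k * sem, 3))  # → 4.92 0.022 0.119
```

The same pattern works for any small batch of repeated measurements: only the `data` list changes.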
A study (Murphy and Abbey, Cancer in Families, 1959) addressed the question of whether cancer runs in families. The investigator identified 200 women with breast cancer and another 200 women without breast cancer and asked them whether their mothers had had breast cancer. Of the 400 women in the two groups combined, 10 of the mothers had had breast cancer. If there is no genetic connection, then about half of these 10 would come from each group. The data is that 7 of the daughters had cancer and 3 did not. Is there strong evidence of a connection?
The proper way, assuming total initial ignorance, is to use the Beta distribution:

\[ P(\theta_{\text{cancer}} \mid \text{data}) = \operatorname{Beta}(h = 7, N = 10), \]

which has a median of \( \widehat{\theta}_{\text{cancer}} = 0.68 \) but a \( 95\% \) credible interval running from \( \widehat{\theta}_{\text{cancer}} = 0.39 \) up to \( \widehat{\theta}_{\text{cancer}} = 0.89 \). Because this interval includes \( \theta_{\text{cancer}} = 0.5 \), the value expected if there were no genetic connection, there is not strong evidence of an effect.
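These numbers can be reproduced without any statistics library. With a flat prior, 7 affected mothers out of 10 gives a posterior density proportional to \( \theta^7 (1-\theta)^3 \) (i.e. a Beta(8, 4) distribution); the sketch below, our own check, tabulates the normalized CDF on a fine grid and reads off quantiles.

```python
import bisect
from itertools import accumulate

# Unnormalized Beta(8, 4) density theta^7 * (1 - theta)^3 on a grid of
# midpoints, accumulated into a CDF.
n = 100_000
h = 1.0 / n
pdf = [((i + 0.5) * h) ** 7 * (1 - (i + 0.5) * h) ** 3 for i in range(n)]
cdf = list(accumulate(pdf))

def quantile(q):
    """Smallest grid point whose cumulative mass reaches fraction q."""
    i = bisect.bisect_left(cdf, q * cdf[-1])
    return (i + 0.5) * h

lo, med, hi = quantile(0.025), quantile(0.5), quantile(0.975)
print(round(med, 2), round(lo, 2), round(hi, 2))  # → 0.68 0.39 0.89
```

Normalization cancels in the quantile computation, which is why the density constant \( 1/B(8,4) \) never needs to be evaluated.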
Consider \( E = A\left( {B \cup {C}^{c}}\right) \cup {A}^{c}{\left( B \cup {C}^{c}\right) }^{c} \) and \( F = {A}^{c}{B}^{c} \cup {AC} \) of the example above, and suppose the respective minterm probabilities are \[ {p}_{0} = {0.21},{p}_{1} = {0.06},{p}_{2} = {0.29},{p}_{3} = {0.11},{p}_{4} = {0.09},{p}_{5} = {0.03},{p}_{6} = {0.14},{p}_{7} = {0.07} \]
Use of a minterm map shows \( E = M(1,4,6,7) \) and \( F = M(0,1,5,7) \), so that

\[ P(E) = p_1 + p_4 + p_6 + p_7 = p(1,4,6,7) = 0.36 \quad\text{and}\quad P(F) = p(0,1,5,7) = 0.37. \]
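The minterm lists and both probabilities can be verified with a short script. We assume the standard numbering in which minterm \( i \) has indicator bits \( (a, b, c) \) for \( (A, B, C) \) with \( A \) as the high-order bit, so \( i = 4a + 2b + c \).

```python
p = [0.21, 0.06, 0.29, 0.11, 0.09, 0.03, 0.14, 0.07]

E, F = [], []
for i in range(8):
    a, b, c = (i >> 2) & 1, (i >> 1) & 1, i & 1
    if (a and (b or not c)) or (not a and not (b or not c)):
        E.append(i)                  # A(B ∪ Cᶜ) ∪ Aᶜ(B ∪ Cᶜ)ᶜ
    if (not a and not b) or (a and c):
        F.append(i)                  # AᶜBᶜ ∪ AC

print(E, F)                                  # → [1, 4, 6, 7] [0, 1, 5, 7]
print(round(sum(p[i] for i in E), 2),
      round(sum(p[i] for i in F), 2))        # → 0.36 0.37
```

Summing the minterm probabilities over each index set reproduces \( P(E) = 0.36 \) and \( P(F) = 0.37 \).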