Theorem 16.39. If \( D \) is a PID, then every non-zero, non-unit element of \( D \) can be expressed as a product of irreducibles in \( D \) .
Proof. Let \( c \in D \), \( c \neq 0 \), and \( c \) not a unit. If \( c \) is irreducible, we are done. Otherwise, we can write \( c = {ab} \), where neither \( a \) nor \( b \) is a unit. As ideals, we have \( {cD} \varsubsetneq {aD} \) and \( {cD} \varsubsetneq {bD} \). If we continue this process recursively, building up a …
No
Theorem 16.40. Let \( D \) be a PID. For all \( a, b \in D \), there exists a greatest common divisor \( d \) of \( a \) and \( b \), and moreover, \( {aD} + {bD} = {dD} \) .
Proof. Exercise.
No
Theorem 16.41. Let \( D \) be a PID. For all \( a, b, c \in D \) such that \( c \mid {ab} \) and \( a \) and \( c \) are relatively prime, we have \( c \mid b \) .
Proof. Exercise.
No
Theorem 16.42. Let \( D \) be a PID. Let \( p \in D \) be irreducible, and let \( a, b \in D \) . Then \( p \mid {ab} \) implies that \( p \mid a \) or \( p \mid b \) .
Proof. Exercise.
No
Theorem 16.44. Let \( D \) be a UFD. Every non-zero, non-unit element of \( D\left\lbrack X\right\rbrack \) can be expressed as a product of irreducibles in \( D\left\lbrack X\right\rbrack \) .
Proof. Let \( f \) be a non-zero, non-unit polynomial in \( D\left\lbrack X\right\rbrack \). If \( f \) is a constant, then because \( D \) is a UFD, \( f \) factors into irreducibles in \( D \). So assume \( f \) is not constant. If \( f \) is not primitive, we can write \( f = c{f}^{\prime } \), where \( c \) is a non-zero, non-unit in \( D \), and \( {f}^{\prime } \) is a primitive, non-constant polynomial in \( D\left\lbrack X\right\rbrack \). Again, as \( D \) is a UFD, \( c \) factors into irreducibles in \( D \).

From the above discussion, it suffices to prove the theorem for non-constant, primitive polynomials \( f \in D\left\lbrack X\right\rbrack \). If \( f \) is itself irreducible, we are done. Otherwise, we can write \( f = {gh} \), where \( g, h \in D\left\lbrack X\right\rbrack \) and neither \( g \) nor \( h \) is a unit. Further, by the assumption that \( f \) is a primitive, non-constant polynomial, both \( g \) and \( h \) must also be primitive, non-constant polynomials; in particular, both \( g \) and \( h \) have degree strictly less than \( \deg \left( f\right) \), and the theorem follows by induction on degree.
Yes
Theorem 16.45. Let \( D \) be a UFD, let \( p \) be an irreducible in \( D \), and let \( g, h \in D\left\lbrack X\right\rbrack \) . Then \( p \mid {gh} \) implies \( p \mid g \) or \( p \mid h \) .
Proof. Consider the quotient ring \( D/{pD} \), which is an integral domain (because \( D \) is a UFD and \( p \) is irreducible), and the corresponding ring of polynomials \( \left( {D/{pD}}\right) \left\lbrack X\right\rbrack \), which is also an integral domain. Also consider the natural map that sends \( a \in D \) to \( \bar{a} \mathrel{\text{:=}} {\left\lbrack a\right\rbrack }_{p} \in D/{pD} \), which we can extend coefficient-wise to a ring homomorphism from \( D\left\lbrack X\right\rbrack \) to \( \left( {D/{pD}}\right) \left\lbrack X\right\rbrack \) (see Example 7.46). If \( p \mid {gh} \), then we have

\[ 0 = \overline{gh} = \bar{g}\bar{h}, \]

and since \( \left( {D/{pD}}\right) \left\lbrack X\right\rbrack \) is an integral domain, it follows that \( \bar{g} = 0 \) or \( \bar{h} = 0 \), which means that \( p \mid g \) or \( p \mid h \).
Yes
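The proof above turns on the fact that coefficient-wise reduction \( D\left\lbrack X\right\rbrack \rightarrow \left( {D/{pD}}\right) \left\lbrack X\right\rbrack \) is a ring homomorphism. As a small illustration (not part of the text), here is a sketch in Python for \( D = \mathbb{Z} \), checking that reducing a product equals the product of the reductions; the sample polynomials and the modulus are arbitrary assumptions.

```python
def polymul(f, g):
    # multiply two polynomials given as coefficient lists, lowest degree first
    if not f or not g:
        return []
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def reduce_mod(f, p):
    # the coefficient-wise reduction map Z[X] -> (Z/pZ)[X]
    return [c % p for c in f]

p = 7
g = [3, -2, 5]       # 3 - 2X + 5X^2
h = [4, 0, 1, 6]     # 4 + X^2 + 6X^3
# overline(g*h) == overline(g) * overline(h) in (Z/pZ)[X]
assert reduce_mod(polymul(g, h), p) == reduce_mod(polymul(reduce_mod(g, p), reduce_mod(h, p)), p)
```
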
Theorem 16.46. Let \( D \) be a UFD. The product of two primitive polynomials in \( D\left\lbrack X\right\rbrack \) is also primitive.
Proof. Let \( g, h \in D\left\lbrack X\right\rbrack \) be primitive polynomials, and let \( f \mathrel{\text{:=}} {gh} \) . If \( f \) is not primitive, then \( c \mid f \) for some non-zero, non-unit \( c \in D \), and as \( D \) is a UFD, there is some irreducible element \( p \in D \) that divides \( c \), and therefore, divides \( f \) as well. By Theorem 16.45, it follows that \( p \mid g \) or \( p \mid h \), which implies that either \( g \) is not primitive or \( h \) is not primitive.
Yes
Theorem 16.47. Let \( D \) be a UFD and let \( F \) be its field of fractions. Suppose that \( f, g \in D\left\lbrack X\right\rbrack \) and \( h \in F\left\lbrack X\right\rbrack \) are non-zero polynomials such that \( f = {gh} \) and \( g \) is primitive. Then \( h \in D\left\lbrack X\right\rbrack \) .
Proof. Write \( h = \left( {c/d}\right) {h}^{\prime } \), where \( c, d \in D \) and \( {h}^{\prime } \in D\left\lbrack X\right\rbrack \) is primitive. We may assume that \( c \) and \( d \) are relatively prime. Then we have

\[ d \cdot f = c \cdot g{h}^{\prime }. \]

(16.9)

We claim that \( d \in {D}^{ * } \). To see this, note that (16.9) implies that \( d \mid \left( {c \cdot g{h}^{\prime }}\right) \), and the assumption that \( c \) and \( d \) are relatively prime implies that \( d \mid g{h}^{\prime } \). But by Theorem 16.46, \( g{h}^{\prime } \) is primitive, from which it follows that \( d \) is a unit. That proves the claim.

It follows that \( c/d \in D \), and hence \( h = \left( {c/d}\right) {h}^{\prime } \in D\left\lbrack X\right\rbrack \).
Yes
Theorem 16.48. Let \( D \) be a UFD and \( F \) its field of fractions. If \( f \in D\left\lbrack X\right\rbrack \) with \( \deg \left( f\right) > 0 \) is irreducible, then \( f \) is also irreducible in \( F\left\lbrack X\right\rbrack \) .
Proof. Suppose that \( f \) is not irreducible in \( F\left\lbrack X\right\rbrack \), so that \( f = {gh} \) for non-constant polynomials \( g, h \in F\left\lbrack X\right\rbrack \), both of degree strictly less than that of \( f \) . We may write \( g = \left( {c/d}\right) {g}^{\prime } \), where \( c, d \in D \) and \( {g}^{\prime } \in D\left\lbrack X\right\rbrack \) is primitive. Set \( {h}^{\prime } \mathrel{\text{:=}} \left( {c/d}\right) h \), so that \( f = {gh} = {g}^{\prime }{h}^{\prime } \) . By Theorem 16.47, we have \( {h}^{\prime } \in D\left\lbrack X\right\rbrack \), and this shows that \( f \) is not irreducible in \( D\left\lbrack X\right\rbrack \) .
Yes
Theorem 16.49. Let \( D \) be a UFD. Let \( f \in D\left\lbrack X\right\rbrack \) with \( \deg \left( f\right) > 0 \) be irreducible, and let \( g, h \in D\left\lbrack X\right\rbrack \) . If \( f \) divides \( {gh} \) in \( D\left\lbrack X\right\rbrack \), then \( f \) divides either \( g \) or \( h \) in \( D\left\lbrack X\right\rbrack \) .
Proof. Suppose that \( f \in D\left\lbrack X\right\rbrack \) with \( \deg \left( f\right) > 0 \) is irreducible. This implies that \( f \) is a primitive polynomial. By Theorem 16.48, \( f \) is irreducible in \( F\left\lbrack X\right\rbrack \), where \( F \) is the field of fractions of \( D \) . Suppose \( f \) divides \( {gh} \) in \( D\left\lbrack X\right\rbrack \) . Then because \( F\left\lbrack X\right\rbrack \) is a UFD, \( f \) divides either \( g \) or \( h \) in \( F\left\lbrack X\right\rbrack \) . But Theorem 16.47 implies that \( f \) divides either \( g \) or \( h \) in \( D\left\lbrack X\right\rbrack \) .
Yes
Theorem 16.50 (Eisenstein's criterion). Let \( D \) be a UFD and \( F \) its field of fractions. Let \( f = {c}_{n}{X}^{n} + {c}_{n - 1}{X}^{n - 1} + \cdots + {c}_{0} \in D\left\lbrack X\right\rbrack \). If there exists an irreducible \( p \in D \) such that

\[ p \nmid {c}_{n},\; p \mid {c}_{n - 1},\;\ldots ,\; p \mid {c}_{0},\;{p}^{2} \nmid {c}_{0}, \]

then \( f \) is irreducible over \( F \).
Proof. Let \( f \) be as above, and suppose it were not irreducible in \( F\left\lbrack X\right\rbrack \). Then by Theorem 16.48, we could write \( f = {gh} \), where \( g, h \in D\left\lbrack X\right\rbrack \), both of degree strictly less than that of \( f \). Let us write

\[ g = {a}_{k}{X}^{k} + \cdots + {a}_{0}\text{ and }h = {b}_{\ell }{X}^{\ell } + \cdots + {b}_{0}, \]

where \( {a}_{k} \neq 0 \) and \( {b}_{\ell } \neq 0 \), so that \( 0 < k < n \) and \( 0 < \ell < n \). Now, since \( {c}_{n} = {a}_{k}{b}_{\ell } \) and \( p \nmid {c}_{n} \), it follows that \( p \nmid {a}_{k} \) and \( p \nmid {b}_{\ell } \). Further, since \( {c}_{0} = {a}_{0}{b}_{0} \), and \( p \mid {c}_{0} \) but \( {p}^{2} \nmid {c}_{0} \), it follows that \( p \) divides one of \( {a}_{0} \) or \( {b}_{0} \), but not both; for concreteness, let us assume that \( p \mid {a}_{0} \) but \( p \nmid {b}_{0} \). Also, let \( m \) be the smallest positive integer such that \( p \nmid {a}_{m} \); note that \( 0 < m \leq k < n \).

Now consider the natural map that sends \( a \in D \) to \( \bar{a} \mathrel{\text{:=}} {\left\lbrack a\right\rbrack }_{p} \in D/{pD} \), which we can extend coefficient-wise to a ring homomorphism from \( D\left\lbrack X\right\rbrack \) to \( \left( {D/{pD}}\right) \left\lbrack X\right\rbrack \) (see Example 7.46). Because \( D \) is a UFD and \( p \) is irreducible, \( D/{pD} \) is an integral domain. Since \( f = {gh} \), we have

\[ {\bar{c}}_{n}{X}^{n} = \bar{f} = \bar{g}\bar{h} = \left( {{\bar{a}}_{k}{X}^{k} + \cdots + {\bar{a}}_{m}{X}^{m}}\right) \left( {{\bar{b}}_{\ell }{X}^{\ell } + \cdots + {\bar{b}}_{0}}\right) . \]

(16.10)

But notice that when we multiply out the two polynomials on the right-hand side of (16.10), the coefficient of \( {X}^{m} \) is \( {\bar{a}}_{m}{\bar{b}}_{0} \neq 0 \), and as \( m < n \), this clearly contradicts the fact that the coefficient of \( {X}^{m} \) in the polynomial on the left-hand side of (16.10) is zero.
Yes
Theorem 16.51. For every prime number \( q \), the \( q \)th cyclotomic polynomial

\[ {\Phi }_{q} \mathrel{\text{:=}} \frac{{X}^{q} - 1}{X - 1} = {X}^{q - 1} + {X}^{q - 2} + \cdots + 1 \]

is irreducible over \( \mathbb{Q} \).
Proof. Let

\[ f \mathrel{\text{:=}} {\Phi }_{q}\left( {X + 1}\right) = \frac{{\left( X + 1\right) }^{q} - 1}{\left( {X + 1}\right) - 1}. \]

It is easy to see that

\[ f = \mathop{\sum }\limits_{{i = 0}}^{{q - 1}}{c}_{i}{X}^{i},\text{ where }{c}_{i} = \binom{q}{i + 1}\;\left( {i = 0,\ldots, q - 1}\right) . \]

Thus, \( {c}_{q - 1} = 1 \), \( {c}_{0} = q \), and for \( 0 < i < q - 1 \), we have \( q \mid {c}_{i} \) (see Exercise 1.14). Theorem 16.50 therefore applies, and we conclude that \( f \) is irreducible over \( \mathbb{Q} \). It follows that \( {\Phi }_{q} \) is irreducible over \( \mathbb{Q} \), since if \( {\Phi }_{q} = {gh} \) were a non-trivial factorization of \( {\Phi }_{q} \), then \( f = {\Phi }_{q}\left( {X + 1}\right) = g\left( {X + 1}\right) \cdot h\left( {X + 1}\right) \) would be a non-trivial factorization of \( f \).
Yes
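The Eisenstein conditions in this proof are easy to check numerically. The following Python sketch (an illustration, not part of the text) computes the coefficients \( {c}_{i} = \binom{q}{i+1} \) of \( f = {\Phi }_{q}\left( {X + 1}\right) \) and verifies the criterion at \( p = q \); the helper names are my own.

```python
from math import comb

def shifted_cyclotomic_coeffs(q):
    # coefficients c_0, ..., c_{q-1} of f = Phi_q(X+1) = ((X+1)^q - 1)/X,
    # where c_i = binom(q, i+1)
    return [comb(q, i + 1) for i in range(q)]

def eisenstein_applies(coeffs, p):
    # Eisenstein's criterion at p: p does not divide the leading
    # coefficient, p divides every other coefficient, and p^2 does not
    # divide the constant term
    *lower, lead = coeffs
    return (lead % p != 0
            and all(c % p == 0 for c in lower)
            and lower[0] % (p * p) != 0)

# the criterion applies to Phi_q(X+1) for every prime q
for q in [2, 3, 5, 7, 11, 13]:
    assert eisenstein_applies(shifted_cyclotomic_coeffs(q), q)
```

Note that the criterion fails for \( {\Phi }_{q} \) itself (all of its coefficients are 1), which is exactly why the substitution \( X \mapsto X + 1 \) is needed.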
Theorem 17.2. Let \( g, h \in F\left\lbrack X\right\rbrack \), with \( \deg \left( g\right) \geq \deg \left( h\right) \) and \( g \neq 0 \). Define the polynomials \( {r}_{0},{r}_{1},\ldots ,{r}_{\lambda + 1} \in F\left\lbrack X\right\rbrack \) and \( {q}_{1},\ldots ,{q}_{\lambda } \in F\left\lbrack X\right\rbrack \), where \( \lambda \geq 0 \), as follows:

\[ g = {r}_{0}, \]
\[ h = {r}_{1}, \]
\[ {r}_{0} = {r}_{1}{q}_{1} + {r}_{2}\;\left( {0 \leq \deg \left( {r}_{2}\right) < \deg \left( {r}_{1}\right) }\right) , \]
\[ \vdots \]
\[ {r}_{i - 1} = {r}_{i}{q}_{i} + {r}_{i + 1}\;\left( {0 \leq \deg \left( {r}_{i + 1}\right) < \deg \left( {r}_{i}\right) }\right) , \]
\[ \vdots \]
\[ {r}_{\lambda - 2} = {r}_{\lambda - 1}{q}_{\lambda - 1} + {r}_{\lambda }\;\left( {0 \leq \deg \left( {r}_{\lambda }\right) < \deg \left( {r}_{\lambda - 1}\right) }\right) , \]
\[ {r}_{\lambda - 1} = {r}_{\lambda }{q}_{\lambda }\;\left( {{r}_{\lambda + 1} = 0}\right) . \]

Note that by definition, \( \lambda = 0 \) if \( h = 0 \), and \( \lambda > 0 \) otherwise. Then we have \( {r}_{\lambda }/\operatorname{lc}\left( {r}_{\lambda }\right) = \gcd \left( {g, h}\right) \), and if \( h \neq 0 \), then \( \lambda \leq \deg \left( h\right) + 1 \).
Proof. Arguing as in the proof of Theorem 4.1, one sees that

\[ \gcd \left( {g, h}\right) = \gcd \left( {{r}_{0},{r}_{1}}\right) = \cdots = \gcd \left( {{r}_{\lambda },{r}_{\lambda + 1}}\right) = \gcd \left( {{r}_{\lambda },0}\right) = {r}_{\lambda }/\operatorname{lc}\left( {r}_{\lambda }\right) . \]

That proves the first statement.

For the second statement, if \( h \neq 0 \), then the degree sequence

\[ \deg \left( {r}_{1}\right) ,\deg \left( {r}_{2}\right) ,\ldots ,\deg \left( {r}_{\lambda }\right) \]

is strictly decreasing, with \( \deg \left( {r}_{\lambda }\right) \geq 0 \), from which it follows that \( \deg \left( h\right) = \deg \left( {r}_{1}\right) \geq \lambda - 1 \).
Yes
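The recurrence \( {r}_{i - 1} = {r}_{i}{q}_{i} + {r}_{i + 1} \) of Theorem 17.2 can be transcribed directly. Below is a minimal Python sketch over the prime field \( \mathbb{Z}/p\mathbb{Z} \), with polynomials as coefficient lists (lowest degree first); the modulus \( P \) and the representation are assumptions made for this example, not part of the text.

```python
P = 17  # an assumed prime modulus; F = Z/PZ

def trim(f):
    # drop trailing zero coefficients, so [] represents the zero polynomial
    while f and f[-1] == 0:
        f.pop()
    return f

def polydivmod(f, g):
    # quotient and remainder of f divided by g (g != 0), coefficients mod P
    f, g = trim(f[:]), trim(g[:])
    inv_lead = pow(g[-1], -1, P)
    q = [0] * max(len(f) - len(g) + 1, 0)
    while len(f) >= len(g):
        shift = len(f) - len(g)
        c = f[-1] * inv_lead % P
        q[shift] = c
        for j, gj in enumerate(g):
            f[shift + j] = (f[shift + j] - c * gj) % P
        trim(f)  # the leading term cancels each pass, so this terminates
    return trim(q), f

def polygcd(g, h):
    # iterate r_{i+1} = r_{i-1} mod r_i until the remainder vanishes,
    # then normalize by the leading coefficient to get the monic gcd
    r0, r1 = trim(g[:]), trim(h[:])
    while r1:
        r0, r1 = r1, polydivmod(r0, r1)[1]
    inv = pow(r0[-1], -1, P)
    return [c * inv % P for c in r0]

# example: gcd(X^2 - 1, X - 1) = X - 1 over Z/17Z
assert polygcd([16, 0, 1], [16, 1]) == [16, 1]
```
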
Theorem 17.3. Euclid’s algorithm for polynomials performs \( O\left( {\operatorname{len}\left( g\right) \operatorname{len}\left( h\right) }\right) \) operations in \( F \) .
Proof. The proof is almost identical to that of Theorem 4.2. Details are left to the reader.
No
Theorem 17.4. Let \( g, h,{r}_{0},\ldots ,{r}_{\lambda + 1} \) and \( {q}_{1},\ldots ,{q}_{\lambda } \) be as in Theorem 17.2. Define polynomials \( {s}_{0},\ldots ,{s}_{\lambda + 1} \in F\left\lbrack X\right\rbrack \) and \( {t}_{0},\ldots ,{t}_{\lambda + 1} \in F\left\lbrack X\right\rbrack \) as follows:

\[ {s}_{0} \mathrel{\text{:=}} 1,\;{t}_{0} \mathrel{\text{:=}} 0, \]
\[ {s}_{1} \mathrel{\text{:=}} 0,\;{t}_{1} \mathrel{\text{:=}} 1, \]

and for \( i = 1,\ldots ,\lambda \),

\[ {s}_{i + 1} \mathrel{\text{:=}} {s}_{i - 1} - {s}_{i}{q}_{i},\;{t}_{i + 1} \mathrel{\text{:=}} {t}_{i - 1} - {t}_{i}{q}_{i}. \]

Then:

(i) for \( i = 0,\ldots ,\lambda + 1 \), we have \( g{s}_{i} + h{t}_{i} = {r}_{i} \); in particular, \( g{s}_{\lambda } + h{t}_{\lambda } = \operatorname{lc}\left( {r}_{\lambda }\right) \gcd \left( {g, h}\right) \);

(ii) for \( i = 0,\ldots ,\lambda \), we have \( {s}_{i}{t}_{i + 1} - {t}_{i}{s}_{i + 1} = {\left( -1\right) }^{i} \);

(iii) for \( i = 0,\ldots ,\lambda + 1 \), we have \( \gcd \left( {{s}_{i},{t}_{i}}\right) = 1 \);

(iv) for \( i = 1,\ldots ,\lambda + 1 \), we have

\[ \deg \left( {t}_{i}\right) = \deg \left( g\right) - \deg \left( {r}_{i - 1}\right) , \]

and for \( i = 2,\ldots ,\lambda + 1 \), we have

\[ \deg \left( {s}_{i}\right) = \deg \left( h\right) - \deg \left( {r}_{i - 1}\right) ; \]

(v) for \( i = 1,\ldots ,\lambda + 1 \), we have \( \deg \left( {t}_{i}\right) \leq \deg \left( g\right) \) and \( \deg \left( {s}_{i}\right) \leq \deg \left( h\right) \); if \( \deg \left( g\right) > 0 \) and \( h \neq 0 \), then \( \deg \left( {t}_{\lambda }\right) < \deg \left( g\right) \) and \( \deg \left( {s}_{\lambda }\right) < \deg \left( h\right) \).
Proof. (i), (ii), and (iii) are proved just as in the corresponding parts of Theorem 4.3.

For (iv), the proof will hinge on the following facts:

- For \( i = 1,\ldots ,\lambda \), we have \( \deg \left( {r}_{i - 1}\right) \geq \deg \left( {r}_{i}\right) \), and since \( {q}_{i} \) is the quotient in dividing \( {r}_{i - 1} \) by \( {r}_{i} \), we have \( \deg \left( {q}_{i}\right) = \deg \left( {r}_{i - 1}\right) - \deg \left( {r}_{i}\right) \).

- For \( i = 2,\ldots ,\lambda \), we have \( \deg \left( {r}_{i - 1}\right) > \deg \left( {r}_{i}\right) \).

We prove the statement involving the \( {t}_{i} \)'s by induction on \( i \), and leave the proof of the statement involving the \( {s}_{i} \)'s to the reader.

One can see by inspection that this statement holds for \( i = 1 \), since \( \deg \left( {t}_{1}\right) = 0 \) and \( {r}_{0} = g \). If \( \lambda = 0 \), there is nothing more to prove, so assume that \( \lambda > 0 \) and \( h \neq 0 \).

Now, for \( i = 2 \), we have \( {t}_{2} = 0 - 1 \cdot {q}_{1} = - {q}_{1} \). Thus, \( \deg \left( {t}_{2}\right) = \deg \left( {q}_{1}\right) = \deg \left( {r}_{0}\right) - \deg \left( {r}_{1}\right) = \deg \left( g\right) - \deg \left( {r}_{1}\right) \).

Now for the induction step. Assume \( i \geq 3 \). Then we have

\[ \deg \left( {{t}_{i - 1}{q}_{i - 1}}\right) = \deg \left( {t}_{i - 1}\right) + \deg \left( {q}_{i - 1}\right) \]
\[ = \deg \left( g\right) - \deg \left( {r}_{i - 2}\right) + \deg \left( {q}_{i - 1}\right) \;\text{(by induction)} \]
\[ = \deg \left( g\right) - \deg \left( {r}_{i - 1}\right) \;\left( {\text{since }\deg \left( {q}_{i - 1}\right) = \deg \left( {r}_{i - 2}\right) - \deg \left( {r}_{i - 1}\right) }\right) \]
\[ > \deg \left( g\right) - \deg \left( {r}_{i - 3}\right) \;\left( {\text{since }\deg \left( {r}_{i - 3}\right) > \deg \left( {r}_{i - 1}\right) }\right) \]
\[ = \deg \left( {t}_{i - 2}\right) \;\text{(by induction).} \]

By definition, \( {t}_{i} = {t}_{i - 2} - {t}_{i - 1}{q}_{i - 1} \), and from the above reasoning, we see that

\[ \deg \left( g\right) - \deg \left( {r}_{i - 1}\right) = \deg \left( {{t}_{i - 1}{q}_{i - 1}}\right) > \deg \left( {t}_{i - 2}\right) , \]

from which it follows that \( \deg \left( {t}_{i}\right) = \deg \left( {{t}_{i - 1}{q}_{i - 1}}\right) = \deg \left( g\right) - \deg \left( {r}_{i - 1}\right) \).
Yes
Theorem 17.5. The extended Euclidean algorithm for polynomials performs \( O\left( {\operatorname{len}\left( g\right) \operatorname{len}\left( h\right) }\right) \) operations in \( F \) .
Proof. Exercise.
No
Theorem 17.6. Suppose we are given polynomials \( f, h \in F\left\lbrack X\right\rbrack \), where \( \deg \left( h\right) < \) \( \deg \left( f\right) \) . Then using \( O\left( {\operatorname{len}{\left( f\right) }^{2}}\right) \) operations in \( F \), we can determine if \( h \) is relatively prime to \( f \), and if so, compute \( {h}^{-1}{\;\operatorname{mod}\;f} \) .
Proof. We may assume \( \deg \left( f\right) > 0 \), since \( \deg \left( f\right) = 0 \) implies \( h = 0 = {h}^{-1}{\;\operatorname{mod}\;f} \) . We run the extended Euclidean algorithm on input \( f, h \), obtaining polynomials \( d, s, t \) such that \( d = \gcd \left( {f, h}\right) \) and \( {fs} + {ht} = d \) . If \( d \neq 1 \), then \( h \) does not have a multiplicative inverse modulo \( f \) . Otherwise, if \( d = 1 \), then \( t \) is a multiplicative inverse of \( h \) modulo \( f \) . Moreover, by part (v) of Theorem 17.4, we have \( \deg \left( t\right) < \deg \left( f\right) \), and so \( t = {h}^{-1}{\;\operatorname{mod}\;f} \) . Based on Theorem 17.5, it is clear that all the computations can be performed using \( O\left( {\operatorname{len}{\left( f\right) }^{2}}\right) \) operations in \( F \) .
Yes
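Theorem 17.6's procedure — run the extended Euclidean recurrence of Theorem 17.4 on \( \left( {f, h}\right) \) and read the inverse off \( {t}_{\lambda } \) when the gcd is 1 — can be sketched concretely over \( \mathbb{Z}/p\mathbb{Z} \). In this Python sketch the modulus \( P \) and the coefficient-list representation are assumptions; only the \( {t}_{i} \) sequence is maintained, since the \( {s}_{i} \)'s are not needed for the inverse.

```python
P = 13  # an assumed prime modulus; F = Z/PZ

def trim(f):
    while f and f[-1] == 0:
        f.pop()
    return f

def polydivmod(f, g):
    # quotient and remainder of f divided by g (g != 0), coefficients mod P
    f, g = trim(f[:]), trim(g[:])
    inv_lead = pow(g[-1], -1, P)
    q = [0] * max(len(f) - len(g) + 1, 0)
    while len(f) >= len(g):
        shift = len(f) - len(g)
        c = f[-1] * inv_lead % P
        q[shift] = c
        for j, gj in enumerate(g):
            f[shift + j] = (f[shift + j] - c * gj) % P
        trim(f)
    return trim(q), f

def polymul(f, g):
    if not f or not g:
        return []
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % P
    return trim(out)

def polysub(f, g):
    n = max(len(f), len(g))
    f, g = f + [0] * (n - len(f)), g + [0] * (n - len(g))
    return trim([(a - b) % P for a, b in zip(f, g)])

def invert_mod(h, f):
    # maintain the invariant r_i = f*s_i + h*t_i of Theorem 17.4(i);
    # returns h^{-1} mod f, or None if gcd(f, h) is not 1
    r0, r1 = trim(f[:]), trim(h[:])
    t0, t1 = [], [1]
    while r1:
        q, r = polydivmod(r0, r1)
        r0, r1 = r1, r
        t0, t1 = t1, polysub(t0, polymul(q, t1))
    if len(r0) != 1:
        return None  # gcd is zero or has positive degree: no inverse
    inv = pow(r0[0], -1, P)
    return [c * inv % P for c in t0]

# example: the inverse of X modulo X^2 + 1 over Z/13Z is -X = 12X
assert invert_mod([0, 1], [1, 0, 1]) == [0, 12]
```
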
Theorem 17.7 (Effective Chinese remainder theorem). Suppose we are given polynomials \( {f}_{1},\ldots ,{f}_{k} \in F\left\lbrack X\right\rbrack \) and \( {g}_{1},\ldots ,{g}_{k} \in F\left\lbrack X\right\rbrack \), where the family \( {\left\{ {f}_{i}\right\} }_{i = 1}^{k} \) is pairwise relatively prime, and where \( \deg \left( {f}_{i}\right) > 0 \) and \( \deg \left( {g}_{i}\right) < \deg \left( {f}_{i}\right) \) for \( i = 1,\ldots, k \). Let \( f \mathrel{\text{:=}} \mathop{\prod }\limits_{{i = 1}}^{k}{f}_{i} \). Then using \( O\left( {\operatorname{len}{\left( f\right) }^{2}}\right) \) operations in \( F \), we can compute the unique polynomial \( g \in F\left\lbrack X\right\rbrack \) satisfying \( \deg \left( g\right) < \deg \left( f\right) \) and \( g \equiv {g}_{i}\left( {\;\operatorname{mod}\;{f}_{i}}\right) \) for \( i = 1,\ldots, k \).
Proof. Exercise (just use the formulas given after Theorem 16.19).
No
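In the special case \( {f}_{i} = X - {a}_{i} \) with distinct \( {a}_{i} \in F \), the congruences \( g \equiv {g}_{i}\left( {\;\operatorname{mod}\;X - {a}_{i}}\right) \) say exactly that \( g\left( {a}_{i}\right) = {g}_{i} \) for constant \( {g}_{i} \), so Theorem 17.7 specializes to Lagrange interpolation. A Python sketch over \( \mathbb{Z}/p\mathbb{Z} \) (the modulus and sample points are assumptions for this example, and the \( {a}_{i} \) must be distinct residues):

```python
p = 101  # an assumed prime modulus; F = Z/pZ

def interpolate(points):
    # returns the coefficients (lowest degree first) of the unique g with
    # deg(g) < k and g(a_i) = g_i for each (a_i, g_i) in points
    g = [0] * len(points)
    for a_i, g_i in points:
        num = [1]   # running product of (X - a_j) over j != i
        denom = 1   # running product of (a_i - a_j) over j != i
        for a_j, _ in points:
            if a_j != a_i:
                # multiply num by (X - a_j): new[k] = old[k-1] - a_j*old[k]
                num = [(c1 - a_j * c0) % p for c0, c1 in zip(num + [0], [0] + num)]
                denom = denom * (a_i - a_j) % p
        scale = g_i * pow(denom, -1, p) % p
        for k, c in enumerate(num):
            g[k] = (g[k] + scale * c) % p
    return g

# example: the quadratic through (1,2), (2,3), (3,6) is X^2 - 2X + 3
assert interpolate([(1, 2), (2, 3), (3, 6)]) == [3, 99, 1]
```

The general case replaces the evaluations by remainders modulo the \( {f}_{i} \) and the scalar inverses by polynomial inverses modulo \( {f}_{i} \), per the formulas after Theorem 16.19.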
Theorem 17.8 (Rational function reconstruction). Let \( f, h \in F\left\lbrack X\right\rbrack \) be polynomials, and let \( {r}^{ * },{t}^{ * } \) be non-negative integers, such that

\[ \deg \left( h\right) < \deg \left( f\right) \text{ and }{r}^{ * } + {t}^{ * } \leq \deg \left( f\right) . \]

Further, let \( \operatorname{EEA}\left( {f, h}\right) = {\left\{ \left( {r}_{i},{s}_{i},{t}_{i}\right) \right\} }_{i = 0}^{\lambda + 1} \), let \( j \) be the smallest index (among \( 0,\ldots ,\lambda + 1 \)) such that \( \deg \left( {r}_{j}\right) < {r}^{ * } \), and set

\[ {r}^{\prime } \mathrel{\text{:=}} {r}_{j},\;{s}^{\prime } \mathrel{\text{:=}} {s}_{j},\text{ and }{t}^{\prime } \mathrel{\text{:=}} {t}_{j}. \]

Finally, suppose that there exist polynomials \( r, s, t \in F\left\lbrack X\right\rbrack \) such that

\[ r = {fs} + {ht},\;\deg \left( r\right) < {r}^{ * },\text{ and }0 \leq \deg \left( t\right) \leq {t}^{ * }. \]

Then for some non-zero polynomial \( q \in F\left\lbrack X\right\rbrack \), we have

\[ r = {r}^{\prime }q,\; s = {s}^{\prime }q,\; t = {t}^{\prime }q. \]
Proof. Since \( \deg \left( {r}_{0}\right) = \deg \left( f\right) \geq {r}^{ * } > - \infty = \deg \left( {r}_{\lambda + 1}\right) \), the value of \( j \) is well defined; moreover, \( j \geq 1 \), \( \deg \left( {r}_{j - 1}\right) \geq {r}^{ * } \), and \( {t}_{j} \neq 0 \).

From the equalities \( {r}_{j} = f{s}_{j} + h{t}_{j} \) and \( r = {fs} + {ht} \), we have the two congruences:

\[ {r}_{j} \equiv h{t}_{j}\;\left( {\operatorname{mod}\;f}\right) , \]
\[ r \equiv {ht}\;\left( {\operatorname{mod}\;f}\right) . \]

Subtracting \( t \) times the first from \( {t}_{j} \) times the second, we obtain

\[ r{t}_{j} \equiv {r}_{j}t\;\left( {\operatorname{mod}\;f}\right) . \]

This says that \( f \) divides \( r{t}_{j} - {r}_{j}t \).

We want to show that, in fact, \( r{t}_{j} - {r}_{j}t = 0 \). To this end, first observe that by part (iv) of Theorem 17.4 and the inequality \( \deg \left( {r}_{j - 1}\right) \geq {r}^{ * } \), we have

\[ \deg \left( {t}_{j}\right) = \deg \left( f\right) - \deg \left( {r}_{j - 1}\right) \leq \deg \left( f\right) - {r}^{ * }. \]

Combining this with the inequality \( \deg \left( r\right) < {r}^{ * } \), we see that

\[ \deg \left( {r{t}_{j}}\right) = \deg \left( r\right) + \deg \left( {t}_{j}\right) < \deg \left( f\right) . \]

Furthermore, using the inequalities

\[ \deg \left( {r}_{j}\right) < {r}^{ * },\;\deg \left( t\right) \leq {t}^{ * },\text{ and }{r}^{ * } + {t}^{ * } \leq \deg \left( f\right) , \]

we see that

\[ \deg \left( {{r}_{j}t}\right) = \deg \left( {r}_{j}\right) + \deg \left( t\right) < \deg \left( f\right) , \]

and it immediately follows that

\[ \deg \left( {r{t}_{j} - {r}_{j}t}\right) < \deg \left( f\right) . \]

Since \( f \) divides \( r{t}_{j} - {r}_{j}t \) and \( \deg \left( {r{t}_{j} - {r}_{j}t}\right) < \deg \left( f\right) \), the only possibility is that

\[ r{t}_{j} - {r}_{j}t = 0. \]

The rest of the proof follows exactly the same line of reasoning as in the last paragraph in the proof of Theorem 4.9, as the reader may easily verify.
Yes
Theorem 18.1. The set \( G\left( \Psi \right) \) is an ideal of \( F\left\lbrack X\right\rbrack \) .
Proof. First, note that for all \( g, h \in F\left\lbrack X\right\rbrack \), we have \( \left( {g + h}\right) \star \Psi = \left( {g \star \Psi }\right) + \left( {h \star \Psi }\right) \); this is clear from the definitions. It is also clear that for all \( c \in F \) and \( g \in F\left\lbrack X\right\rbrack \), we have \( \left( {cg}\right) \star \Psi = c \cdot \left( {g \star \Psi }\right) \). From these two observations, it follows that \( G\left( \Psi \right) \) is closed under addition and scalar multiplication. It is also easy to see from the definition that \( G\left( \Psi \right) \) is closed under multiplication by \( X \): indeed, if \( \left( {{X}^{i}g}\right) \star \Psi = 0 \) for all \( i \geq 0 \), then certainly \( \left( {{X}^{i}\left( {Xg}\right) }\right) \star \Psi = \left( {{X}^{i + 1}g}\right) \star \Psi = 0 \) for all \( i \geq 0 \). But any non-empty subset of \( F\left\lbrack X\right\rbrack \) that is closed under addition, multiplication by elements of \( F \), and multiplication by \( X \) is an ideal of \( F\left\lbrack X\right\rbrack \) (see Exercise 7.27).
Yes
One can always define a linearly generated sequence by simply choosing an initial segment \( {\alpha }_{0},{\alpha }_{1},\ldots ,{\alpha }_{k - 1} \), along with scalars \( {c}_{0},\ldots ,{c}_{k - 1} \in F \) defining the recurrence relation.
One can enumerate as many elements of the sequence as one wants by using storage for \( k \) elements of \( V \), along with storage for the scalars \( {c}_{0},\ldots ,{c}_{k - 1} \), as follows:

\( \left( {{\beta }_{0},\ldots ,{\beta }_{k - 1}}\right) \leftarrow \left( {{\alpha }_{0},\ldots ,{\alpha }_{k - 1}}\right) \)

repeat

output \( {\beta }_{0} \)

\( {\beta }^{\prime } \leftarrow \mathop{\sum }\limits_{{j = 0}}^{{k - 1}}{c}_{j}{\beta }_{j} \)

\( \left( {{\beta }_{0},\ldots ,{\beta }_{k - 1}}\right) \leftarrow \left( {{\beta }_{1},\ldots ,{\beta }_{k - 1},{\beta }^{\prime }}\right) \)

forever

Because of the structure of the above algorithm, linearly generated sequences are sometimes also called shift register sequences. Also observe that if \( F \) is a finite field, and \( V \) is finite dimensional, the value stored in the register \( \left( {{\beta }_{0},\ldots ,{\beta }_{k - 1}}\right) \) can take only finitely many values, and so the output sequence must be ultimately periodic.
Yes
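The shift-register loop above can be transcribed directly as a Python generator. This is an illustrative sketch (the Fibonacci-style parameters and the choice of \( F = \mathbb{Z}/p\mathbb{Z} \) are assumptions, not from the text):

```python
from itertools import islice

def linearly_generated(initial, coeffs, p):
    # enumerate alpha_0, alpha_1, ... from the initial segment
    # alpha_0..alpha_{k-1} and recurrence coefficients c_0..c_{k-1}:
    #   alpha_{i+k} = c_0*alpha_i + ... + c_{k-1}*alpha_{i+k-1}  (mod p)
    beta = list(initial)
    while True:
        yield beta[0]                                   # output beta_0
        new = sum(c * b for c, b in zip(coeffs, beta)) % p
        beta = beta[1:] + [new]                         # shift the register

# Fibonacci mod 101: k = 2, recurrence alpha_{i+2} = alpha_i + alpha_{i+1}
fib10 = list(islice(linearly_generated([0, 1], [1, 1], 101), 10))
# fib10 == [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

Since the register holds one of at most \( {p}^{k} \) values, the output of this generator is ultimately periodic, as noted above.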
Let \( V \) be a vector space over \( F \) of dimension \( \ell > 0 \), and let \( \tau : V \rightarrow V \) be an \( F \)-linear map. Let \( \beta \in V \), and consider the sequence \( \Psi \mathrel{\text{:=}} {\left\{ {\alpha }_{i}\right\} }_{i = 0}^{\infty } \), where \( {\alpha }_{i} = {\tau }^{i}\left( \beta \right) \); that is, \( {\alpha }_{0} = \beta ,{\alpha }_{1} = \tau \left( \beta \right) ,{\alpha }_{2} = \tau \left( {\tau \left( \beta \right) }\right) \), and so on. For every polynomial \( g = \mathop{\sum }\limits_{{j = 0}}^{k}{a}_{j}{X}^{j} \in F\left\lbrack X\right\rbrack \), we have
\[ g \star \Psi = \mathop{\sum }\limits_{{j = 0}}^{k}{a}_{j}{\tau }^{j}\left( \beta \right) , \]

and for every \( i \geq 0 \), we have

\[ \left( {{X}^{i}g}\right) \star \Psi = \mathop{\sum }\limits_{{j = 0}}^{k}{a}_{j}{\tau }^{i + j}\left( \beta \right) = {\tau }^{i}\left( {\mathop{\sum }\limits_{{j = 0}}^{k}{a}_{j}{\tau }^{j}\left( \beta \right) }\right) = {\tau }^{i}\left( {g \star \Psi }\right) . \]

Thus, if \( g \star \Psi = 0 \), then clearly \( \left( {{X}^{i}g}\right) \star \Psi = {\tau }^{i}\left( {g \star \Psi }\right) = {\tau }^{i}\left( 0\right) = 0 \) for all \( i \geq 0 \). Conversely, if \( \left( {{X}^{i}g}\right) \star \Psi = 0 \) for all \( i \geq 0 \), then in particular, \( g \star \Psi = 0 \). Thus, \( g \) is a generating polynomial for \( \Psi \) if and only if \( g \star \Psi = 0 \). The minimal polynomial \( \phi \) of \( \Psi \) is non-zero and its degree \( m \) is at most \( \ell \); indeed, \( m \) may be characterized as the least non-negative integer such that \( {\left\{ {\tau }^{i}\left( \beta \right) \right\} }_{i = 0}^{m} \) is linearly dependent, and since \( V \) has dimension \( \ell \) over \( F \), we must have \( m \leq \ell \).
Yes
Theorem 18.2. Let \( \Psi = {\left\{ {z}_{i}\right\} }_{i = 0}^{\infty } \) be a sequence of elements of \( F \), and define the reversed Laurent series

\[ z \mathrel{\text{:=}} \mathop{\sum }\limits_{{i = 0}}^{\infty }{z}_{i}{X}^{-\left( {i + 1}\right) } \in F\left( \left( {X}^{-1}\right) \right) , \]

whose coefficients are the elements of the sequence \( \Psi \). Then for every \( g \in F\left\lbrack X\right\rbrack \), we have \( g \in G\left( \Psi \right) \) if and only if \( {gz} \in F\left\lbrack X\right\rbrack \). In particular, \( \Psi \) is linearly generated if and only if \( z \) is a rational function, in which case, its minimal polynomial is the denominator of \( z \) when expressed as a fraction in lowest terms.
Proof. Observe that for every polynomial \( g \in F\left\lbrack X\right\rbrack \) and every integer \( i \geq 0 \), the coefficient of \( {X}^{-\left( {i + 1}\right) } \) in the product \( {gz} \) is equal to \( \left( {{X}^{i}g}\right) \star \Psi \); just look at the formulas defining these expressions! It follows that \( g \) is a generating polynomial for \( \Psi \) if and only if the coefficients of the negative powers of \( X \) in \( {gz} \) are all zero, which is the same as saying that \( {gz} \in F\left\lbrack X\right\rbrack \). Further, if \( g \neq 0 \) and \( h \mathrel{\text{:=}} {gz} \in F\left\lbrack X\right\rbrack \), then \( \deg \left( h\right) < \deg \left( g\right) \); this follows simply from the fact that \( \deg \left( z\right) < 0 \), together with the fact that \( \deg \left( h\right) = \deg \left( g\right) + \deg \left( z\right) \). All the statements in the theorem follow immediately from these observations.
Yes
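The key observation in this proof is finitely checkable: for \( g = {a}_{0} + {a}_{1}X + \cdots + {a}_{k}{X}^{k} \), the coefficient of \( {X}^{-\left( {i + 1}\right) } \) in \( {gz} \) is \( \mathop{\sum }\nolimits_{j}{a}_{j}{z}_{i + j} \), and \( g \) generates \( \Psi \) exactly when these all vanish. A Python sketch testing the generating polynomial \( {X}^{2} - X - 1 \) against the Fibonacci sequence mod \( p \) (the modulus and window size are assumptions for this example):

```python
p = 997         # an assumed prime modulus
window = 50     # how many terms of the sequence to inspect

z = [0, 1]
while len(z) < window:
    z.append((z[-1] + z[-2]) % p)        # Fibonacci mod p

g = [-1, -1, 1]                           # a_0, a_1, a_2 for X^2 - X - 1

def neg_power_coeff(g, z, i):
    # coefficient of X^{-(i+1)} in g*z, valid while i + deg(g) < len(z)
    return sum(a * z[i + j] for j, a in enumerate(g)) % p

# every inspected negative-power coefficient of g*z vanishes, as the
# recurrence z_{i+2} = z_{i+1} + z_i demands
assert all(neg_power_coeff(g, z, i) == 0 for i in range(len(z) - len(g) + 1))
```
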
Theorem 18.3. Let \( \Psi = {\left\{ {\alpha }_{i}\right\} }_{i = 0}^{\infty } \) be a linearly generated sequence over the field \( F \), where the \( {\alpha }_{i} \)'s are elements of a vector space \( V \) of finite dimension \( \ell > 0 \). Let \( \phi \) be the minimal polynomial of \( \Psi \) over \( F \), let \( m \mathrel{\text{:=}} \deg \left( \phi \right) \), and assume that \( \Psi \) has full rank (i.e., \( {\left\{ {\alpha }_{i}\right\} }_{i = 0}^{m - 1} \) is linearly independent). Finally, let \( F{\left\lbrack X\right\rbrack }_{ < m} \) denote the vector space over \( F \) consisting of all polynomials in \( F\left\lbrack X\right\rbrack \) of degree less than \( m \).

Under the above assumptions, there exists a surjective \( F \)-linear map

\[ \sigma : {\mathcal{D}}_{F}\left( V\right) \rightarrow F{\left\lbrack X\right\rbrack }_{ < m} \]

such that for all \( \pi \in {\mathcal{D}}_{F}\left( V\right) \), the minimal polynomial \( {\phi }_{\pi } \) of the projected sequence \( {\Psi }_{\pi } \mathrel{\text{:=}} {\left\{ \pi \left( {\alpha }_{i}\right) \right\} }_{i = 0}^{\infty } \) satisfies

\[ {\phi }_{\pi } = \frac{\phi }{\gcd \left( {\sigma \left( \pi \right) ,\phi }\right) }. \]
Proof. While the statement of this theorem looks a bit complicated, its proof is quite straightforward, given our characterization of linearly generated sequences in Theorem 18.2 in terms of rational functions. We build the linear map \( \sigma \) as the composition of two linear maps, \( {\sigma }_{0} \) and \( {\sigma }_{1} \).

Let us define the map

\[ {\sigma }_{0} : {\mathcal{D}}_{F}\left( V\right) \rightarrow F\left( \left( {X}^{-1}\right) \right) \]
\[ \pi \mapsto \mathop{\sum }\limits_{{i = 0}}^{\infty }\pi \left( {\alpha }_{i}\right) {X}^{-\left( {i + 1}\right) }. \]

We also define the map \( {\sigma }_{1} \) to be the \( \phi \)-multiplication map on \( F\left( \left( {X}^{-1}\right) \right) \), that is, the map that sends \( z \in F\left( \left( {X}^{-1}\right) \right) \) to \( \phi \cdot z \in F\left( \left( {X}^{-1}\right) \right) \). The map \( \sigma \) is just the composition \( \sigma = {\sigma }_{1} \circ {\sigma }_{0} \). It is clear that both \( {\sigma }_{0} \) and \( {\sigma }_{1} \) are \( F \)-linear maps, and hence, so is \( \sigma \).

First, observe that for \( \pi \in {\mathcal{D}}_{F}\left( V\right) \), the series \( z \mathrel{\text{:=}} {\sigma }_{0}\left( \pi \right) \) is the series associated with the projected sequence \( {\Psi }_{\pi } \), as in Theorem 18.2. Let \( {\phi }_{\pi } \) be the minimal polynomial of \( {\Psi }_{\pi } \). Since \( \phi \) is a generating polynomial for \( \Psi \), it is also a generating polynomial for \( {\Psi }_{\pi } \). Therefore, Theorem 18.2 tells us that

\[ h \mathrel{\text{:=}} \sigma \left( \pi \right) = \phi \cdot z \in F{\left\lbrack X\right\rbrack }_{ < m}, \]

and that \( {\phi }_{\pi } \) is the denominator of \( z \) when expressed as a fraction in lowest terms. Now, we have \( z = h/\phi \), and it follows that \( {\phi }_{\pi } = \phi /\gcd \left( {h,\phi }\right) \) is this denominator.

Second, the hypothesis that \( {\left\{ {\alpha }_{i}\right\} }_{i = 0}^{m - 1} \) is linearly independent implies that \( {\dim }_{F}\left( {\operatorname{Im}{\sigma }_{0}}\right) \geq m \) (see Exercise 13.21). Also, observe that \( {\sigma }_{1} \) is an injective map. Therefore, \( {\dim }_{F}\left( {\operatorname{Im}\sigma }\right) \geq m \). In the previous paragraph, we observed that \( \operatorname{Im}\sigma \subseteq F{\left\lbrack X\right\rbrack }_{ < m} \), and since \( {\dim }_{F}\left( {F{\left\lbrack X\right\rbrack }_{ < m}}\right) = m \), we may conclude that \( \operatorname{Im}\sigma = F{\left\lbrack X\right\rbrack }_{ < m} \). That proves the theorem.
Theorem 18.4. If \( F \) is a finite field of cardinality \( q \), and \( m \) and \( s \) are positive integers, then we have\n\n\[ \n{\Lambda }_{F}^{m}\left( s\right) = 1 - 1/{q}^{s - 1} + \left( {q - 1}\right) /{q}^{sm}.\n\]
Proof. For each positive integer \( n \), let \( {U}_{n} \) denote the set of all tuples of polynomials \( \left( {{f}_{1},\ldots ,{f}_{s}}\right) \in F{\left\lbrack X\right\rbrack }_{ < n}^{\times s} \) with \( \gcd \left( {{f}_{1},\ldots ,{f}_{s}}\right) = 1 \), and let \( {u}_{n} \mathrel{\text{:=}} \left| {U}_{n}\right| \) . Also, for each monic polynomial \( h \in F\left\lbrack X\right\rbrack \) of degree less than \( n \), let \( {U}_{n, h} \) denote the set of all \( s \) -tuples of polynomials of degree less than \( n \) whose gcd is \( h \) . Observe that the set \( {U}_{n, h} \) is in one-to-one correspondence with \( {U}_{n - k} \), where \( k \mathrel{\text{:=}} \deg \left( h\right) \), via the map that sends \( \left( {{f}_{1},\ldots ,{f}_{s}}\right) \in {U}_{n, h} \) to \( \left( {{f}_{1}/h,\ldots ,{f}_{s}/h}\right) \in {U}_{n - k} \) . As there are \( {q}^{k} \) possible choices for \( h \) of degree \( k \), if we define \( {V}_{n, k} \) to be the set of tuples \( \left( {{f}_{1},\ldots ,{f}_{s}}\right) \in F{\left\lbrack X\right\rbrack }_{ < n}^{\times s} \) with \( \deg \left( {\gcd \left( {{f}_{1},\ldots ,{f}_{s}}\right) }\right) = k \), we see that \( \left| {V}_{n, k}\right| = {q}^{k}{u}_{n - k} \) . Every non-zero tuple in \( F{\left\lbrack X\right\rbrack }_{ < n}^{\times s} \) appears in exactly one of the sets \( {V}_{n, k} \), for \( k = 0,\ldots, n - 1 \) . Taking into account the zero tuple, it follows that\n\n\[ \n{q}^{sn} = 1 + \mathop{\sum }\limits_{{k = 0}}^{{n - 1}}{q}^{k}{u}_{n - k}\n\]\n\n(18.2)\n\nwhich holds for all \( n \geq 1 \) . Replacing \( n \) by \( n - 1 \) in (18.2), we obtain\n\n\[ \n{q}^{s\left( {n - 1}\right) } = 1 + \mathop{\sum }\limits_{{k = 0}}^{{n - 2}}{q}^{k}{u}_{n - 1 - k}\n\]\n\n(18.3)\n\nwhich holds for all \( n \geq 2 \), and indeed, holds for \( n = 1 \) as well.
Subtracting \( q \) times (18.3) from (18.2), we deduce that for all \( n \geq 1 \),\n\n\[ \n{q}^{sn} - {q}^{{sn} - s + 1} = 1 + {u}_{n} - q,\n\]\n\nand rearranging terms:\n\n\[ \n{u}_{n} = {q}^{sn} - {q}^{{sn} - s + 1} + q - 1.\n\]\n\nTherefore,\n\n\[ \n{\Lambda }_{F}^{m}\left( s\right) = {u}_{m}/{q}^{sm} = 1 - 1/{q}^{s - 1} + \left( {q - 1}\right) /{q}^{sm}.\n\]
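The counting argument above can be checked by brute force for a small case. The following Python sketch (illustrative only, not part of the text's development) works over \( F = \mathrm{GF}(2) \), encoding a polynomial as an integer bitmask (bit \( i \) holds the coefficient of \( {X}^{i} \)), and counts the coprime \( s \)-tuples directly; for \( q = 2, m = 2, s = 2 \), the formula predicts \( {u}_{m} = {q}^{sm} - {q}^{sm-s+1} + q - 1 = 9 \).

```python
from itertools import product

# Polynomials over GF(2) as integer bitmasks: bit i = coefficient of X^i.

def gf2_mod(a, b):
    """Remainder of a divided by b (carry-less arithmetic, coefficients mod 2)."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def gf2_gcd(a, b):
    # Euclidean algorithm; over GF(2) the result is automatically monic.
    while b:
        a, b = b, gf2_mod(a, b)
    return a

def count_coprime_tuples(n, s):
    """Number of s-tuples of polynomials of degree < n with gcd equal to 1."""
    count = 0
    for tup in product(range(2 ** n), repeat=s):
        g = 0
        for f in tup:
            g = gf2_gcd(g, f)
        if g == 1:  # gcd is the constant polynomial 1
            count += 1
    return count

q, m, s = 2, 2, 2
u_m = count_coprime_tuples(m, s)
predicted = q**(s*m) - q**(s*m - s + 1) + q - 1  # = q^{sm} * Lambda_F^m(s)
print(u_m, predicted)  # both 9
```

The brute-force count agrees with the closed form, e.g. \( {u}_{2} = 9 \) out of \( {2}^{4} = 16 \) pairs, matching \( {\Lambda }_{F}^{2}\left( 2\right) = 9/16 \).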
Theorem 18.6. For all \( \tau ,{\tau }^{\prime },{\tau }^{\prime \prime } \in {\mathcal{L}}_{F}\left( V\right) \), and for all \( c \in F \), we have:\n\n(i) \( \tau \circ \left( {{\tau }^{\prime } + {\tau }^{\prime \prime }}\right) = \tau \circ {\tau }^{\prime } + \tau \circ {\tau }^{\prime \prime } \) ;\n\n(ii) \( \left( {{\tau }^{\prime } + {\tau }^{\prime \prime }}\right) \circ \tau = {\tau }^{\prime } \circ \tau + {\tau }^{\prime \prime } \circ \tau \) ;\n\n(iii) \( \left( {c\tau }\right) \circ {\tau }^{\prime } = c\left( {\tau \circ {\tau }^{\prime }}\right) = \tau \circ \left( {c{\tau }^{\prime }}\right) \) .
Proof. Exercise.
Theorem 18.7. For all \( \tau \in {\mathcal{L}}_{F}\left( V\right) \), for all \( c \in F \), and for all \( g, h \in F\left\lbrack X\right\rbrack \), we have:\n\n(i) \( g\left( \tau \right) + h\left( \tau \right) = \left( {g + h}\right) \left( \tau \right) \) ;\n\n(ii) \( c \cdot g\left( \tau \right) = \left( {cg}\right) \left( \tau \right) \) ;\n\n(iii) \( g\left( \tau \right) \circ h\left( \tau \right) = \left( {gh}\right) \left( \tau \right) = h\left( \tau \right) \circ g\left( \tau \right) \) .
Proof. Exercise.
Theorem 18.8. The scalar multiplication \( \odot \), together with the usual addition operation on \( V \), makes \( V \) into an \( F\left\lbrack X\right\rbrack \) -module; that is, for all \( g, h \in F\left\lbrack X\right\rbrack \) and \( \alpha ,\beta \in V \), we have\n\n\[ g \odot \left( {h \odot \alpha }\right) = \left( {gh}\right) \odot \alpha ,\left( {g + h}\right) \odot \alpha = g \odot \alpha + h \odot \alpha ,\]\n\n\[ g \odot \left( {\alpha + \beta }\right) = g \odot \alpha + g \odot \beta ,1 \odot \alpha = \alpha . \]
Proof. Exercise.
Theorem 18.12. Let \( \tau \in {\mathcal{L}}_{F}\left( V\right) \), and suppose that \( \tau \) has non-zero minimal polynomial \( \phi \) . Then there exists \( \beta \in V \) such that the minimal polynomial of \( \beta \) under \( \tau \) is \( \phi \) .
Proof. Let \( \odot \) be the scalar multiplication associated with \( \tau \) . Let \( \phi = {\phi }_{1}^{{e}_{1}}\cdots {\phi }_{r}^{{e}_{r}} \) be the factorization of \( \phi \) into monic irreducible polynomials in \( F\left\lbrack X\right\rbrack \) . First, we claim that for each \( i = 1,\ldots, r \), there exists \( {\alpha }_{i} \in V \) such that \( \phi /{\phi }_{i} \odot {\alpha }_{i} \neq 0 \) . Suppose the claim were false: then for some \( i \), we would have \( \phi /{\phi }_{i} \odot \alpha = 0 \) for all \( \alpha \in V \) ; however, this means that \( \left( {\phi /{\phi }_{i}}\right) \left( \tau \right) = 0 \), contradicting the minimality property in the definition of the minimal polynomial \( \phi \) . That proves the claim. Let \( {\alpha }_{1},\ldots ,{\alpha }_{r} \) be as in the above claim. Then by Theorem 18.10, each \( \phi /{\phi }_{i}^{{e}_{i}} \odot {\alpha }_{i} \) has minimal polynomial \( {\phi }_{i}^{{e}_{i}} \) under \( \tau \) . Finally, by Theorem 18.9, \[ \beta \mathrel{\text{:=}} \phi /{\phi }_{1}^{{e}_{1}} \odot {\alpha }_{1} + \cdots + \phi /{\phi }_{r}^{{e}_{r}} \odot {\alpha }_{r} \] has minimal polynomial \( \phi \) under \( \tau \) .
Theorem 19.1. If \( F \) is a field, and \( f \in F\left\lbrack X\right\rbrack \) with \( \gcd \left( {f,\mathbf{D}\left( f\right) }\right) = 1 \), then \( f \) is square-free.
Proof. Suppose \( f \) is not square-free, and write \( f = {g}^{2}h \), for \( g, h \in F\left\lbrack X\right\rbrack \) with \( \deg \left( g\right) > 0 \) . Taking formal derivatives, we have\n\n\[ \mathbf{D}\left( f\right) = {2g}\mathbf{D}\left( g\right) h + {g}^{2}\mathbf{D}\left( h\right) ,\]\n\nand so clearly, \( g \) is a common divisor of \( f \) and \( \mathbf{D}\left( f\right) \).
Theorem 19.2. Let \( F \) be a field, and let \( k,\ell \) be positive integers. Then \( {X}^{k} - 1 \) divides \( {X}^{\ell } - 1 \) in \( F\left\lbrack X\right\rbrack \) if and only if \( k \) divides \( \ell \) .
Proof. Let \( \ell = {kq} + r \), with \( 0 \leq r < k \) . We have\n\n\[ \n{X}^{\ell } \equiv {X}^{kq}{X}^{r} \equiv {X}^{r}\left( {{\;\operatorname{mod}\;{X}^{k}} - 1}\right) ,\n\]\n\nand \( {X}^{r} \equiv 1\left( {{\;\operatorname{mod}\;{X}^{k}} - 1}\right) \) if and only if \( r = 0 \) .
Theorem 19.3. Let \( a \geq 2 \) be an integer and let \( k,\ell \) be positive integers. Then \( {a}^{k} - 1 \) divides \( {a}^{\ell } - 1 \) if and only if \( k \) divides \( \ell \) .
Proof. The proof is analogous to that of Theorem 19.2. We leave the details to the reader.
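Theorem 19.3 is easy to confirm numerically. The following Python check (a sanity check, not part of the text) tests the equivalence for several bases and all small exponent pairs:

```python
# Check Theorem 19.3: a^k - 1 divides a^l - 1 exactly when k divides l.
for a in (2, 3, 10):
    for k in range(1, 8):
        for l in range(1, 8):
            divides = (a**l - 1) % (a**k - 1) == 0
            assert divides == (l % k == 0)
print("ok")
```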
Theorem 19.4. Let \( a \geq 2 \) be an integer, \( k,\ell \) be positive integers, and \( F \) a field. Then \( {X}^{{a}^{k}} - X \) divides \( {X}^{{a}^{\ell }} - X \) in \( F\left\lbrack X\right\rbrack \) if and only if \( k \) divides \( \ell \) .
Proof. Now, \( {X}^{{a}^{k}} - X \) divides \( {X}^{{a}^{\ell }} - X \) if and only if \( {X}^{{a}^{k} - 1} - 1 \) divides \( {X}^{{a}^{\ell } - 1} - 1 \) . By Theorem 19.2, this happens if and only if \( {a}^{k} - 1 \) divides \( {a}^{\ell } - 1 \) . By Theorem 19.3, this happens if and only if \( k \) divides \( \ell \) .
Theorem 19.6. We have\n\n\[ \n{X}^{q} - X = \mathop{\prod }\limits_{{a \in F}}\left( {X - a}\right) \n\]
Proof. Since each \( a \in F \) is a root of \( {X}^{q} - X \), by Theorem 7.13, the polynomial \( \mathop{\prod }\limits_{{a \in F}}\left( {X - a}\right) \) divides the polynomial \( {X}^{q} - X \) . Since the degrees and leading coefficients of these two polynomials are the same, the two polynomials must be equal. \( ▱ \)
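For a concrete instance of Theorem 19.6, take \( F = \mathrm{GF}(5) \) and \( q = 5 \). The Python sketch below (illustrative only) multiplies out \( \mathop{\prod }\limits_{{a \in F}}\left( {X - a}\right) \) with coefficient lists (index \( i \) is the coefficient of \( {X}^{i} \)) and compares against \( {X}^{5} - X \):

```python
# Check Theorem 19.6 over F = GF(5): X^5 - X = prod_{a in F} (X - a).
p = 5

def polymul(f, g):
    """Product of two coefficient-list polynomials, coefficients mod p."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % p
    return h

prod = [1]
for a in range(p):
    prod = polymul(prod, [(-a) % p, 1])  # multiply by (X - a)

# X^5 - X has coefficient -1 = 4 at index 1 and 1 at index 5.
expected = [0, p - 1, 0, 0, 0, 1]
print(prod == expected)  # True
```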
Theorem 19.7. Let \( E \) be an \( F \) -algebra. Then the map \( \sigma : E \rightarrow E \) that sends \( \alpha \in E \) to \( {\alpha }^{q} \) is an \( F \) -algebra homomorphism.
Proof. By Theorem 16.3, either \( E \) is trivial or contains an isomorphic copy of \( F \) as a subring. In the former case, there is nothing to prove. So assume that \( E \) contains an isomorphic copy of \( F \) as a subring. It follows that \( E \) must have characteristic \( p \) .\n\nSince \( q = {p}^{w} \), we see that \( \sigma = {\tau }^{w} \), where \( \tau \left( \alpha \right) \mathrel{\text{:=}} {\alpha }^{p} \) . By the discussion in Example 7.48, the map \( \tau \) is a ring homomorphism, and hence so is \( \sigma \) . Moreover, by Theorem 19.5, we have\n\n\[ \sigma \left( {c{1}_{E}}\right) = {\left( c{1}_{E}\right) }^{q} = {c}^{q}{1}_{E}^{q} = c{1}_{E} \]\n\nfor all \( c \in F \) . Thus (see Theorem 16.5), \( \sigma \) is an \( F \) -algebra homomorphism.
Theorem 19.8. Let \( E \) be a finite extension of \( F \), and let \( \sigma \) be the Frobenius map on \( E \) over \( F \) . Then \( \sigma \) is an \( F \) -algebra automorphism on \( E \) . Moreover, for all \( \alpha \in E \), we have \( \sigma \left( \alpha \right) = \alpha \) if and only if \( \alpha \in F \) .
Proof. The fact that \( \sigma \) is an \( F \) -algebra homomorphism follows from the previous theorem. Any ring homomorphism from a field into a field is injective (see Exercise 7.47). Surjectivity follows from injectivity and finiteness.\n\nFor the second statement, observe that \( \sigma \left( \alpha \right) = \alpha \) if and only if \( \alpha \) is a root of the polynomial \( {X}^{q} - X \), and since all \( q \) elements of \( F \) are already roots, by Theorem 7.14, there can be no other roots.
Theorem 19.9. Let \( E \) be an extension of degree \( \ell \) over \( F \), and let \( \sigma \) be the Frobenius map on \( E \) over \( F \) . Then for all integers \( i \) and \( j \), we have \( {\sigma }^{i} = {\sigma }^{j} \) if and only if \( i \equiv j\left( {\;\operatorname{mod}\;\ell }\right) \) .
Proof. We may assume \( i \geq j \) . We have\n\n\[ \n{\sigma }^{i} = {\sigma }^{j} \Leftrightarrow {\sigma }^{i - j} = {\sigma }^{0} \Leftrightarrow {\alpha }^{{q}^{i - j}} - \alpha = 0\text{ for all }\alpha \in E \n\] \n\n\[ \n\Leftrightarrow \left( {\mathop{\prod }\limits_{{\alpha \in E}}\left( {X - \alpha }\right) }\right) \mid \left( {{X}^{{q}^{i - j}} - X}\right) \text{ (by Theorem 7.13) } \n\] \n\n\[ \n\Leftrightarrow \left( {{X}^{{q}^{\ell }} - X}\right) \mid \left( {{X}^{{q}^{i - j}} - X}\right) \text{(by Theorem 19.6, applied to}E\text{)} \n\] \n\n\[ \n\Leftrightarrow \ell \mid \left( {i - j}\right) \text{ (by Theorem 19.4) } \n\] \n\n\[ \n\Leftrightarrow i \equiv j\left( {\;\operatorname{mod}\;\ell }\right) \text{.} \n\]
Theorem 19.10. For \( k \geq 1 \), let \( {P}_{k} \) denote the product of all the monic irreducible polynomials in \( F\left\lbrack X\right\rbrack \) of degree \( k \) . For all positive integers \( \ell \), we have\n\n\[ \n{X}^{{q}^{\ell }} - X = \mathop{\prod }\limits_{{k \mid \ell }}{P}_{k} \n\]\n\nwhere the product is over all positive divisors \( k \) of \( \ell \) .
Proof. First, we claim that the polynomial \( {X}^{{q}^{\ell }} - X \) is square-free. This follows immediately from Theorem 19.1, since \( \mathbf{D}\left( {{X}^{{q}^{\ell }} - X}\right) = {q}^{\ell }{X}^{{q}^{\ell } - 1} - 1 = - 1 \) .\n\nThus, we have reduced the proof to showing that if \( f \) is a monic irreducible polynomial of degree \( k \), then \( f \) divides \( {X}^{{q}^{\ell }} - X \) if and only if \( k \) divides \( \ell \) .\n\nSo let \( f \) be a monic irreducible polynomial of degree \( k \) . Let \( E \mathrel{\text{:=}} F\left\lbrack X\right\rbrack /\left( f\right) = \) \( F\left\lbrack \xi \right\rbrack \), where \( \xi \mathrel{\text{:=}} {\left\lbrack X\right\rbrack }_{f} \in E \) . Observe that \( E \) is an extension field of degree \( k \) over \( F \) . Let \( \sigma \) be the Frobenius map on \( E \) over \( F \) .\n\nFirst, we claim that \( f \) divides \( {X}^{{q}^{\ell }} - X \) if and only if \( {\sigma }^{\ell }\left( \xi \right) = \xi \) . Indeed, \( f \) is the minimal polynomial of \( \xi \) over \( F \), and so \( f \) divides \( {X}^{{q}^{\ell }} - X \) if and only if \( \xi \) is a root of \( {X}^{{q}^{\ell }} - X \), which is the same as saying \( {\xi }^{{q}^{\ell }} = \xi \), or equivalently, \( {\sigma }^{\ell }\left( \xi \right) = \xi \) .\n\nSecond, we claim that \( {\sigma }^{\ell }\left( \xi \right) = \xi \) if and only if \( {\sigma }^{\ell }\left( \alpha \right) = \alpha \) for all \( \alpha \in E \) . To see this, first suppose that \( {\sigma }^{\ell }\left( \alpha \right) = \alpha \) for all \( \alpha \in E \) . Then in particular, this holds for \( \alpha = \xi \) . Conversely, suppose that \( {\sigma }^{\ell }\left( \xi \right) = \xi \) . 
Every \( \alpha \in E \) can be written as \( \alpha = g\left( \xi \right) \) for some \( g \in F\left\lbrack X\right\rbrack \), and since \( {\sigma }^{\ell } \) is an \( F \) -algebra homomorphism, by Theorem 16.7 we have\n\n\[ \n{\sigma }^{\ell }\left( \alpha \right) = {\sigma }^{\ell }\left( {g\left( \xi \right) }\right) = g\left( {{\sigma }^{\ell }\left( \xi \right) }\right) = g\left( \xi \right) = \alpha .\n\]\n\nFinally, we see that \( {\sigma }^{\ell }\left( \alpha \right) = \alpha \) for all \( \alpha \in E \) if and only if \( {\sigma }^{\ell } = {\sigma }^{0} \), which by Theorem 19.9 holds if and only if \( k \mid \ell \) .
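The identity of Theorem 19.10 can be seen concretely over \( F = \mathrm{GF}(2) \) with \( \ell = 2 \): the monic irreducibles of degree dividing 2 are \( X \), \( X + 1 \), and \( {X}^{2} + X + 1 \), and their product should be \( {X}^{4} - X \). The sketch below (illustrative only) verifies this with bitmask-encoded GF(2) polynomials:

```python
# Check Theorem 19.10 over GF(2), l = 2:
# X^{q^l} - X = P_1 * P_2 = X * (X + 1) * (X^2 + X + 1).
# Polynomials as bitmasks: bit i = coefficient of X^i.

def gf2_mul(a, b):
    """Carry-less (GF(2)) polynomial multiplication."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

P1 = gf2_mul(0b10, 0b11)      # X * (X + 1), the degree-1 irreducibles
lhs = gf2_mul(P1, 0b111)      # times X^2 + X + 1, the degree-2 irreducible
rhs = (1 << 4) | 0b10         # X^4 + X  (note -X = X over GF(2))
print(lhs == rhs)  # True
```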
Theorem 19.11. For all \( \ell \geq 1 \), we have\n\n\[ \n{q}^{\ell } = \mathop{\sum }\limits_{{k \mid \ell }}k{\Pi }_{F}\left( k\right) \n\]
Proof. Just equate the degrees of both sides of the identity in Theorem 19.10.
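The identity in Theorem 19.11 determines \( {\Pi }_{F}\left( \ell \right) \) recursively: \( \ell {\Pi }_{F}\left( \ell \right) = {q}^{\ell } - \mathop{\sum }\limits_{{k \mid \ell, k < \ell }}k{\Pi }_{F}\left( k\right) \). The Python sketch below (ours, not the text's) computes these counts and reproduces the well-known values over \( \mathrm{GF}(2) \): 2, 1, 2, 3 irreducibles of degrees 1 through 4.

```python
# Count monic irreducibles of each degree over a field of cardinality q,
# by solving q^l = sum_{k | l} k * Pi_F(k) for Pi_F(l) recursively.

def irreducible_counts(q, max_deg):
    Pi = {}
    for l in range(1, max_deg + 1):
        lower = sum(k * Pi[k] for k in range(1, l) if l % k == 0)
        Pi[l] = (q**l - lower) // l   # division is exact by Theorem 19.11
    return Pi

print(irreducible_counts(2, 4))  # {1: 2, 2: 1, 3: 2, 4: 3}
```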
Theorem 19.12. For all \( \ell \geq 1 \), we have\n\n\[ \frac{{q}^{\ell }}{2\ell } \leq {\Pi }_{F}\left( \ell \right) \leq \frac{{q}^{\ell }}{\ell } \]\n\n(19.2)\n\nand\n\n\[ {\Pi }_{F}\left( \ell \right) = \frac{{q}^{\ell }}{\ell } + O\left( \frac{{q}^{\ell /2}}{\ell }\right) . \]\n\n(19.3)
Proof. First, since all the terms in the sum on the right hand side of (19.1) are non-negative, and \( \ell {\Pi }_{F}\left( \ell \right) \) is one of these terms, we may deduce that \( \ell {\Pi }_{F}\left( \ell \right) \leq {q}^{\ell } \) , which proves the second inequality in (19.2). Since this holds for all \( \ell \), we have\n\n\[ \ell {\Pi }_{F}\left( \ell \right) = {q}^{\ell } - \mathop{\sum }\limits_{\substack{{k \mid \ell } \\ {k < \ell } }}k{\Pi }_{F}\left( k\right) \geq {q}^{\ell } - \mathop{\sum }\limits_{\substack{{k \mid \ell } \\ {k < \ell } }}{q}^{k} \geq {q}^{\ell } - \mathop{\sum }\limits_{{k = 1}}^{{\lfloor \ell /2\rfloor }}{q}^{k}. \]\n\nLet us set\n\n\[ S\left( {q,\ell }\right) \mathrel{\text{:=}} \mathop{\sum }\limits_{{k = 1}}^{{\lfloor \ell /2\rfloor }}{q}^{k} = \frac{q}{q - 1}\left( {{q}^{\lfloor \ell /2\rfloor } - 1}\right) ,\]\n\nso that \( \ell {\Pi }_{F}\left( \ell \right) \geq {q}^{\ell } - S\left( {q,\ell }\right) \) . It is easy to see that \( S\left( {q,\ell }\right) = O\left( {q}^{\ell /2}\right) \), which proves (19.3). For the first inequality of (19.2), it suffices to show that \( S\left( {q,\ell }\right) \leq {q}^{\ell }/2 \) . One can verify this directly for \( \ell \in \{ 1,2,3\} \), and for \( \ell \geq 4 \), we have\n\n\[ S\left( {q,\ell }\right) \leq {q}^{\ell /2 + 1} \leq {q}^{\ell - 1} \leq {q}^{\ell }/2 \]\n\nWe note that the inequalities in (19.2) are tight, in the sense that \( {\Pi }_{F}\left( \ell \right) = {q}^{\ell }/2\ell \) when \( q = 2 \) and \( \ell = 2 \), and \( {\Pi }_{F}\left( \ell \right) = {q}^{\ell } \) when \( \ell = 1 \) . The first inequality in (19.2) implies not only that \( {\Pi }_{F}\left( \ell \right) > 0 \), but that the fraction of all monic degree \( \ell \) polynomials that are irreducible is at least \( 1/2\ell \), while (19.3) says that this fraction gets arbitrarily close to \( 1/\ell \) as either \( q \) or \( \ell \) are sufficiently large.
Theorem 19.13. Let \( E \) be an extension of degree \( \ell \) over a finite field \( F \) . Let \( \sigma \) be the Frobenius map on \( E \) over \( F \) . Then the intermediate fields \( K \), with \( F \subseteq K \subseteq E \) , are in one-to-one correspondence with the divisors \( k \) of \( \ell \), where the divisor \( k \) corresponds to the subalgebra of \( E \) fixed by \( {\sigma }^{k} \), which has degree \( k \) over \( F \) .
Proof. Let \( q \) be the cardinality of \( F \) .\n\nSuppose \( k \) is a divisor of \( \ell \) . By Theorem 19.6 (applied to \( E \) ), the polynomial \( {X}^{{q}^{\ell }} - X \) splits into distinct monic linear factors over \( E \) . By Theorem 19.4, the polynomial \( {X}^{{q}^{k}} - X \) divides \( {X}^{{q}^{\ell }} - X \) . Hence, \( {X}^{{q}^{k}} - X \) also splits into distinct monic linear factors over \( E \) . This says that the subalgebra of \( E \) fixed by \( {\sigma }^{k} \), which consists of the roots of \( {X}^{{q}^{k}} - X \), has precisely \( {q}^{k} \) elements, and hence is an extension of degree \( k \) over \( F \) .\n\nNow let \( K \) be an arbitrary intermediate field, and let \( k \) be the degree of \( K \) over \( F \) . As already mentioned, we must have \( k \mid \ell \) . Also, by Theorem 19.8 (applied with \( K \) in place of \( F \) ), \( K \) is the subalgebra of \( E \) fixed by \( {\sigma }^{k} \) .
Theorem 19.14. Let \( E \) and \( {E}^{\prime } \) be finite extensions of the same degree over a finite field \( F \) . Then \( E \) and \( {E}^{\prime } \) are isomorphic as \( F \) -algebras.
Proof. Let \( q \) be the cardinality of \( F \), and let \( \ell \) be the degree of the extensions. As we have argued before, we have \( {E}^{\prime } = F\left\lbrack {\alpha }^{\prime }\right\rbrack \) for some \( {\alpha }^{\prime } \in {E}^{\prime } \), and so \( {E}^{\prime } \) is isomorphic as an \( F \) -algebra to \( F\left\lbrack X\right\rbrack /\left( \phi \right) \), where \( \phi \) is the minimal polynomial of \( {\alpha }^{\prime } \) over \( F \). As \( \phi \) is an irreducible polynomial of degree \( \ell \), by Theorem 19.10, \( \phi \) divides \( {X}^{{q}^{\ell }} - X \), and by Theorem 19.6 (applied to \( E \) ), \( {X}^{{q}^{\ell }} - X = \mathop{\prod }\limits_{{\alpha \in E}}\left( {X - \alpha }\right) \) , from which it follows that \( \phi \) has a root \( \alpha \in E \). Since \( \phi \) is irreducible, \( \phi \) is the minimal polynomial of \( \alpha \) over \( F \), and hence \( F\left\lbrack \alpha \right\rbrack \) is isomorphic as an \( F \) -algebra to \( F\left\lbrack X\right\rbrack /\left( \phi \right) \). Since \( \alpha \) has degree \( \ell \) over \( F \), we must have \( E = F\left\lbrack \alpha \right\rbrack \). Thus, \( E = F\left\lbrack \alpha \right\rbrack \cong F\left\lbrack X\right\rbrack /\left( \phi \right) \cong F\left\lbrack {\alpha }^{\prime }\right\rbrack = {E}^{\prime } \).
Theorem 19.16. If \( \alpha \in {E}^{ * } \) has multiplicative order \( r \), then the degree of \( \alpha \) over \( F \) is equal to the multiplicative order of \( q \) modulo \( r \) .
For \( \alpha \in E \), define the polynomial\n\n\[ \chi \mathrel{\text{:=}} \mathop{\prod }\limits_{{i = 0}}^{{\ell - 1}}\left( {X - {\sigma }^{i}\left( \alpha \right) }\right) \]\n\nIt is easy to see, using the same type of argument as was used to prove Theorem 19.15, that \( \chi \in F\left\lbrack X\right\rbrack \), and indeed,\n\n\[ \chi = {\phi }^{\ell /k} \]\n\nwhere \( k \) is the degree of \( \alpha \) over \( F \) . The polynomial \( \chi \) is called the characteristic polynomial of \( \alpha \) (from \( E \) to \( F \) ).
Theorem 19.17. The function \( {\mathbf{N}}_{E/F} \), restricted to \( {E}^{ * } \), is a group homomorphism from \( {E}^{ * } \) onto \( {F}^{ * } \) .
Proof. We have\n\n\[ \n{\mathbf{N}}_{E/F}\left( \alpha \right) = \mathop{\prod }\limits_{{i = 0}}^{{\ell - 1}}{\alpha }^{{q}^{i}} = {\alpha }^{\mathop{\sum }\limits_{{i = 0}}^{{\ell - 1}}{q}^{i}} = {\alpha }^{\left( {{q}^{\ell } - 1}\right) /\left( {q - 1}\right) }. \n\]\n\nSince \( {E}^{ * } \) is a cyclic group of order \( {q}^{\ell } - 1 \), the image of the \( \left( {{q}^{\ell } - 1}\right) /\left( {q - 1}\right) \) -power map on \( {E}^{ * } \) is the unique subgroup of \( {E}^{ * } \) of order \( q - 1 \) (see Theorem 6.32). Since \( {F}^{ * } \) is a subgroup of \( {E}^{ * } \) of order \( q - 1 \), it follows that the image of this power map is \( {F}^{ * } \). \( ▱ \)
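As a small concrete instance (an illustrative sketch, not from the text): take \( F = \mathrm{GF}(3) \) and \( E = \mathrm{GF}(9) \), realized as \( F\left\lbrack X\right\rbrack /\left( {X}^{2} + 1\right) \) (the polynomial \( {X}^{2} + 1 \) is irreducible mod 3). Here the norm is the map \( \alpha \mapsto {\alpha }^{4} \), since \( \left( {q}^{\ell } - 1\right) /\left( q - 1\right) = 8/2 = 4 \), and it should map \( {E}^{ * } \) onto \( {F}^{ * } = \{ 1, 2\} \), hitting each value exactly 4 times (the kernel has order 4).

```python
# Norm from E = GF(9) down to F = GF(3). Elements of E are pairs (a, b)
# representing a + b*xi, where xi^2 = -1 (so E = GF(3)[X]/(X^2 + 1)).

def mul(u, v):
    (a, b), (c, d) = u, v
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)

def norm(u):
    r = (1, 0)
    for _ in range(4):   # alpha^4, since (q^l - 1)/(q - 1) = 4
        r = mul(r, u)
    return r

values = [norm((a, b)) for a in range(3) for b in range(3) if (a, b) != (0, 0)]
print(values.count((1, 0)), values.count((2, 0)))  # 4 4
```

Every value taken is in \( {F}^{ * } \), and the two elements 1 and 2 are each hit four times, as the surjective-homomorphism argument predicts.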
As an application of some of the above theory, let us investigate the factorization of the polynomial \( {X}^{r} - 1 \) over \( F \), a finite field of cardinality \( q \) . Let us assume that \( r > 0 \) and is relatively prime to \( q \) . Let \( E \) be a splitting field of \( {X}^{r} - 1 \) (see Theorem 16.25), so that \( E \) is a finite extension of \( F \) in which \( {X}^{r} - 1 \) splits into linear factors:
\[ {X}^{r} - 1 = \mathop{\prod }\limits_{{i = 1}}^{r}\left( {X - {\alpha }_{i}}\right) \] We claim that the roots \( {\alpha }_{i} \) of \( {X}^{r} - 1 \) are distinct; this follows from Theorem 19.1 and the fact that \( \gcd \left( {{X}^{r} - 1, r{X}^{r - 1}}\right) = 1 \) . Next, observe that the \( r \) roots of \( {X}^{r} - 1 \) in \( E \) actually form a subgroup of \( {E}^{ * } \) , and since \( {E}^{ * } \) is cyclic, this subgroup must be cyclic as well. So the roots of \( {X}^{r} - 1 \) form a cyclic subgroup of \( {E}^{ * } \) of order \( r \) . Let \( \zeta \) be a generator for this group. Then all the roots of \( {X}^{r} - 1 \) are contained in \( F\left\lbrack \zeta \right\rbrack \), and so we may as well assume that \( E = F\left\lbrack \zeta \right\rbrack \).
Theorem 20.1. Algorithm IPT uses \( O\left( {{\ell }^{3}\operatorname{len}\left( q\right) }\right) \) operations in \( F \) .
Proof. Consider an execution of a single iteration of the main loop. The cost of the \( q \) th-powering step (using a standard repeated-squaring algorithm) is \( O\left( {\operatorname{len}\left( q\right) }\right) \) multiplications modulo \( f \), and so \( O\left( {{\ell }^{2}\operatorname{len}\left( q\right) }\right) \) operations in \( F \) . The cost of the gcd computation is \( O\left( {\ell }^{2}\right) \) operations in \( F \) . Thus, the cost of a single loop iteration is \( O\left( {{\ell }^{2}\operatorname{len}\left( q\right) }\right) \) operations in \( F \), from which it follows that the cost of the entire algorithm is \( O\left( {{\ell }^{3}\operatorname{len}\left( q\right) }\right) \) operations in \( F \) .
Theorem 20.2. Algorithm RIP uses an expected number of \( O\left( {{\ell }^{4}\operatorname{len}\left( q\right) }\right) \) operations in \( F \), and its output is uniformly distributed over all monic irreducibles of degree \( \ell \) .
Proof. This is a simple application of the generate-and-test paradigm (see Theorem 9.3, and Example 9.10 in particular). Because of Theorem 19.12, the expected number of loop iterations of the above algorithm is \( O\left( \ell \right) \) . Since Algorithm IPT uses \( O\left( {{\ell }^{3}\operatorname{len}\left( q\right) }\right) \) operations in \( F \), the statement about the running time of Algorithm RIP is immediate. The statement about its output distribution is clear.
Theorem 20.3. Suppose that \( f \in F\left\lbrack X\right\rbrack \) is a monic polynomial of degree \( \ell > 0 \) , and that \( \gcd \left( {f,\mathbf{D}\left( f\right) }\right) = f \) . Then \( f = g\left( {X}^{p}\right) \) for some \( g \in F\left\lbrack X\right\rbrack \) . Moreover, if \( g = \mathop{\sum }\limits_{i}{a}_{i}{X}^{i} \), then \( f = {h}^{p} \), where\n\n\[ h \mathrel{\text{:=}} \mathop{\sum }\limits_{i}{a}_{i}^{{p}^{w - 1}}{X}^{i}. \]
Proof. Since \( \deg \left( {\mathbf{D}\left( f\right) }\right) < \deg \left( f\right) \) and \( \gcd \left( {f,\mathbf{D}\left( f\right) }\right) = f \), we must have \( \mathbf{D}\left( f\right) = 0 \) . If \( f = \mathop{\sum }\limits_{i}{c}_{i}{X}^{i} \), then \( \mathbf{D}\left( f\right) = \mathop{\sum }\limits_{i}i{c}_{i}{X}^{i - 1} \) . Since this derivative must be zero, it follows that all the coefficients \( {c}_{i} \) with \( i ≢ 0\left( {\;\operatorname{mod}\;p}\right) \) must be zero to begin with. That proves that \( f = g\left( {X}^{p}\right) \) for some \( g \in F\left\lbrack X\right\rbrack \) . Furthermore, if \( h \) is defined as above, then \[ {h}^{p} = {\left( \mathop{\sum }\limits_{i}{a}_{i}^{{p}^{\left( w - 1\right) }}{X}^{i}\right) }^{p} = \mathop{\sum }\limits_{i}{a}_{i}^{{p}^{w}}{X}^{ip} = \mathop{\sum }\limits_{i}{a}_{i}{\left( {X}^{p}\right) }^{i} = g\left( {X}^{p}\right) = f. \]
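Here is Theorem 20.3 in action over \( F = \mathrm{GF}(3) \) (so \( q = p = 3 \) and \( w = 1 \), making the coefficient powers \( {a}_{i}^{{p}^{w-1}} = {a}_{i} \) trivial). The polynomial \( f = {X}^{6} + 1 = {\left( {X}^{2} + 1\right) }^{3} \) has \( \mathbf{D}\left( f\right) = 6{X}^{5} = 0 \), all its nonzero coefficients sit at exponents divisible by \( p \), and its \( p \)th root \( h \) is read off by dividing exponents by \( p \). A Python sketch (illustrative only):

```python
# Extract the p-th root of f = X^6 + 1 over GF(3), where D(f) = 0.
p = 3

def polymul(f, g):
    """Product of coefficient-list polynomials, coefficients mod p."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % p
    return h

f = [1, 0, 0, 0, 0, 0, 1]                        # X^6 + 1
# D(f) = 0 forces nonzero coefficients only at exponents divisible by p,
# so h is obtained by keeping every p-th coefficient:
h = [f[p * i] for i in range(len(f) // p + 1)]   # h = X^2 + 1

hp = [1]
for _ in range(p):
    hp = polymul(hp, h)
print(hp == f)  # True: (X^2 + 1)^3 = X^6 + 1 over GF(3)
```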
Theorem 20.4. Let \( f \in F\left\lbrack X\right\rbrack \) be a monic polynomial of degree \( \ell > 0 \) . Suppose that the factorization of \( f \) into irreducibles is \( f = {f}_{1}^{{e}_{1}}\cdots {f}_{r}^{{e}_{r}} \) . Then\n\n\[ \frac{f}{\gcd \left( {f,\mathbf{D}\left( f\right) }\right) } = \mathop{\prod }\limits_{\substack{{1 \leq i \leq r} \\ {{e}_{i} ≢ 0\left( {\;\operatorname{mod}\;p}\right) } }}{f}_{i}. \]
Proof. The theorem can be restated in terms of the following claim: for each \( i = 1,\ldots, r \), we have\n\n- \( {f}_{i}^{{e}_{i}} \mid \mathbf{D}\left( f\right) \) if \( {e}_{i} \equiv 0\left( {\;\operatorname{mod}\;p}\right) \), and\n\n- \( {f}_{i}^{{e}_{i} - 1} \mid \mathbf{D}\left( f\right) \) but \( {f}_{i}^{{e}_{i}} \nmid \mathbf{D}\left( f\right) \) if \( {e}_{i} ≢ 0\left( {\;\operatorname{mod}\;p}\right) \) .\n\nTo prove the claim, we take formal derivatives using the usual rule for products, obtaining\n\n\[ \mathbf{D}\left( f\right) = \mathop{\sum }\limits_{j}{e}_{j}{f}_{j}^{{e}_{j} - 1}\mathbf{D}\left( {f}_{j}\right) \mathop{\prod }\limits_{{k \neq j}}{f}_{k}^{{e}_{k}}. \]\n\n(20.3)\n\nConsider a fixed index \( i \) . Clearly, \( {f}_{i}^{{e}_{i}} \) divides every term in the sum on the righthand side of (20.3), with the possible exception of the term with \( j = i \) . In the case where \( {e}_{i} \equiv 0\left( {\;\operatorname{mod}\;p}\right) \), the term with \( j = i \) vanishes, and that proves the claim in this case. So assume that \( {e}_{i} ≢ 0\left( {\;\operatorname{mod}\;p}\right) \) . By the previous theorem, and the fact that \( {f}_{i} \) is irreducible, and in particular, not the \( p \) th power of any polynomial, we see that \( \mathbf{D}\left( {f}_{i}\right) \) is non-zero, and (of course) has degree strictly less than that of \( {f}_{i} \) . From this, and (again) the fact that \( {f}_{i} \) is irreducible, it follows that the term with \( j = i \) is divisible by \( {f}_{i}^{{e}_{i} - 1} \), but not by \( {f}_{i}^{{e}_{i}} \), from which the claim follows.
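Theorem 20.4 can be illustrated over \( F = \mathrm{GF}(5) \): for \( f = {\left( X + 1\right) }^{2}\left( {X + 2}\right) \), neither exponent is divisible by \( p = 5 \), so \( f/\gcd \left( {f,\mathbf{D}\left( f\right) }\right) \) should be the square-free part \( \left( {X + 1}\right) \left( {X + 2}\right) \). The Python sketch below (illustrative only, with basic coefficient-list arithmetic) verifies this:

```python
# Square-free part via f / gcd(f, D(f)) over GF(5).
p = 5

def trim(f):
    while f and f[-1] == 0:
        f.pop()
    return f

def polymul(f, g):
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % p
    return h

def polydivmod(f, g):
    """Quotient and remainder of f by g (g must be trimmed and non-zero)."""
    r = trim(f[:])
    q = [0] * max(1, len(r) - len(g) + 1)
    inv = pow(g[-1], p - 2, p)          # inverse of leading coefficient
    while len(r) >= len(g):
        d = len(r) - len(g)
        c = (r[-1] * inv) % p
        q[d] = c
        for i, b in enumerate(g):
            r[i + d] = (r[i + d] - c * b) % p
        trim(r)
    return trim(q), r

def polygcd(f, g):
    f, g = trim(f[:]), trim(g[:])
    while g:
        f, g = g, polydivmod(f, g)[1]
    inv = pow(f[-1], p - 2, p)          # normalize to a monic gcd
    return [(c * inv) % p for c in f]

def deriv(f):
    return trim([(i * c) % p for i, c in enumerate(f)][1:])

f = polymul(polymul([1, 1], [1, 1]), [2, 1])   # (X+1)^2 (X+2)
g = polygcd(f, deriv(f))                       # expect X + 1
radical, _ = polydivmod(f, g)
print(radical)  # [2, 3, 1], i.e. (X+1)(X+2) = X^2 + 3X + 2
```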
Theorem 20.6. Algorithm DDF uses \( O\left( {{\ell }^{3}\operatorname{len}\left( q\right) }\right) \) operations in \( F \) .
Proof. Note that the body of the main loop is executed at most \( \ell \) times, since after \( \ell \) iterations, we will have removed all the factors of \( f \) . Thus, we perform at most \( \ell q \) th-powering steps, each of which takes \( O\left( {{\ell }^{2}\operatorname{len}\left( q\right) }\right) \) operations in \( F \), and so the total contribution to the running time of these is \( O\left( {{\ell }^{3}\operatorname{len}\left( q\right) }\right) \) operations in \( F \) . We also have to take into account the cost of the gcd and division computations. The cost per loop iteration of these is \( O\left( {\ell }^{2}\right) \) operations in \( F \), contributing a term of \( O\left( {\ell }^{3}\right) \) to the total operation count. This term is dominated by the cost of the \( q \) th-powering steps, and so the total cost of Algorithm DDF is \( O\left( {{\ell }^{3}\operatorname{len}\left( q\right) }\right) \) operations in \( F \) .
Theorem 20.7. In the case \( p = 2 \), Algorithm EDF uses an expected number of \( O\left( {k{\ell }^{2}\operatorname{len}\left( q\right) }\right) \) operations in \( F \) .
Proof. We may assume \( r \geq 2 \) . Let \( L \) be the random variable that represents the number of iterations of the main loop of the algorithm. For \( n \geq 1 \), let \( {H}_{n} \) be the random variable that represents the value of \( H \) at the beginning of the \( n \) th loop iteration. For \( i, j = 1,\ldots, r \), we define \( {L}_{ij} \) to be the largest value of \( n \) (with \( 1 \leq n \leq L) \) such that \( {f}_{i} \mid h \) and \( {f}_{j} \mid h \) for some \( h \in {H}_{n} \) .\n\nWe first claim that \( \mathrm{E}\left\lbrack L\right\rbrack = O\left( {\operatorname{len}\left( r\right) }\right) \) . To prove this claim, we make use of the fact (see Theorem 8.17) that\n\n\[ \mathrm{E}\left\lbrack L\right\rbrack = \mathop{\sum }\limits_{{n \geq 1}}\mathrm{P}\left\lbrack {L \geq n}\right\rbrack \]\n\nNow, \( L \geq n \) if and only if for some \( i, j \) with \( 1 \leq i < j \leq r \), we have \( {L}_{ij} \geq n \) . Moreover, if \( {f}_{i} \) and \( {f}_{j} \) have not been separated at the beginning of one loop iteration, then they will be separated at the beginning of the next with probability \( 1/2 \) . It follows that\n\n\[ \mathrm{P}\left\lbrack {{L}_{ij} \geq n}\right\rbrack = {2}^{-\left( {n - 1}\right) }.\]\n\nSo we have\n\n\[ \mathrm{P}\left\lbrack {L \geq n}\right\rbrack \leq \mathop{\sum }\limits_{{i < j}}\mathrm{P}\left\lbrack {{L}_{ij} \geq n}\right\rbrack \leq {r}^{2}{2}^{-n}. 
\]\n\nTherefore,\n\n\[ \mathrm{E}\left\lbrack L\right\rbrack = \mathop{\sum }\limits_{{n \geq 1}}\mathrm{P}\left\lbrack {L \geq n}\right\rbrack = \mathop{\sum }\limits_{{n \leq 2{\log }_{2}r}}\mathrm{P}\left\lbrack {L \geq n}\right\rbrack + \mathop{\sum }\limits_{{n > 2{\log }_{2}r}}\mathrm{P}\left\lbrack {L \geq n}\right\rbrack \]\n\n\[ \leq 2{\log }_{2}r + \mathop{\sum }\limits_{{n > 2{\log }_{2}r}}{r}^{2}{2}^{-n} \leq 2{\log }_{2}r + \mathop{\sum }\limits_{{n \geq 0}}{2}^{-n} = 2{\log }_{2}r + 2, \]\n\nwhich proves the claim.\n\nAs discussed in the paragraph above this theorem, the cost of each iteration of the main loop is \( O\left( {k{\ell }^{2}\operatorname{len}\left( q\right) }\right) \) operations in \( F \) . Combining this with the fact that \( \mathrm{E}\left\lbrack L\right\rbrack = O\left( {\operatorname{len}\left( r\right) }\right) \), it follows that the expected number of operations in \( F \) for the entire algorithm is \( O\left( {\operatorname{len}\left( r\right) k{\ell }^{2}\operatorname{len}\left( q\right) }\right) \) . This is significantly better than the above quick-and-dirty estimate, but is not quite the result we are after. For this, we have to work a little harder.
Theorem 20.8. In the case \( p > 2 \), Algorithm EDF uses an expected number of \( O\left( {k{\ell }^{2}\operatorname{len}\left( q\right) }\right) \) operations in \( F \) .
Proof. The analysis is essentially the same as in the case \( p = 2 \), except that now the probability that we fail to split a given pair of irreducible factors is at most \( 5/9 \) , rather than equal to \( 1/2 \) . The details are left as an exercise for the reader.
Theorem 20.9. The Cantor-Zassenhaus factoring algorithm uses an expected number of \( O\left( {{\ell }^{3}\operatorname{len}\left( q\right) }\right) \) operations in \( F \) .
This bound is tight, since in the worst case, when the input is irreducible, the algorithm really does do this much work. Also, we have assumed that the input to the Cantor-Zassenhaus algorithm is a square-free polynomial. However, we may use Algorithm SFD as a preprocessing step to ensure that this is the case. Even if we include the cost of this preprocessing step, the running time estimate in Theorem 20.9 remains valid.
Theorem 20.10. Algorithm B1 uses \( O\left( {{\ell }^{3} + {\ell }^{2}\operatorname{len}\left( q\right) }\right) \) operations in \( F \) .
Proof. This is just a matter of counting. The computation of \( \alpha \) takes \( O\left( {\operatorname{len}\left( q\right) }\right) \) operations in \( E \) using repeated squaring, and hence \( O\left( {{\ell }^{2}\operatorname{len}\left( q\right) }\right) \) operations in \( F \) . To build the matrix \( Q \), we have to perform an additional \( O\left( \ell \right) \) operations in \( E \) to compute the successive powers of \( \alpha \), which translates into \( O\left( {\ell }^{3}\right) \) operations in \( F \) . Finally, the cost of Gaussian elimination is an additional \( O\left( {\ell }^{3}\right) \) operations in \( F \) .
Theorem 20.11. Algorithm B2 uses an expected number of\n\n\[ O\left( {\operatorname{len}\left( r\right) {\ell }^{2}\operatorname{len}\left( q\right) }\right) \]\n\noperations in \( F \) .
Proof. The proof follows the same line of reasoning as the analysis of Algorithm EDF. Indeed, using the same argument as was used there, the expected number of iterations of the main loop is \( O\left( {\operatorname{len}\left( r\right) }\right) \) . As discussed in the paragraph above this theorem, the cost per loop iteration is \( O\left( {{\ell }^{2}\operatorname{len}\left( q\right) }\right) \) operations in \( F \) . The theorem follows.
Theorem 20.12. Berlekamp's factoring algorithm uses an expected number of \( O\left( {{\ell }^{3} + {\ell }^{2}\operatorname{len}\left( \ell \right) \operatorname{len}\left( q\right) }\right) \) operations in \( F \) .
We have assumed the input to Berlekamp's algorithm is a square-free polynomial. However, we may use Algorithm SFD as a preprocessing step to ensure that this is the case. Even if we include the cost of this preprocessing step, the running time estimate in Theorem 20.12 remains valid.
Theorem 21.1. Let \( n > 1 \) be an integer. If \( n \) is prime, then for all \( a \in {\mathbb{Z}}_{n} \), we have the following identity in the ring \( {\mathbb{Z}}_{n}\left\lbrack X\right\rbrack \) :\n\n\[ \n{\left( X + a\right) }^{n} = {X}^{n} + a. \tag{21.1}\n\]\n\nConversely, if \( n \) is composite, then for all \( a \in {\mathbb{Z}}_{n}^{ * } \), the identity (21.1) does not hold.
Proof. Note that\n\n\[ \n{\left( X + a\right) }^{n} = {X}^{n} + {a}^{n} + \mathop{\sum }\limits_{{i = 1}}^{{n - 1}}\left( \begin{matrix} n \\ i \end{matrix}\right) {a}^{i}{X}^{n - i}.\n\]\n\nIf \( n \) is prime, then by Fermat’s little theorem (Theorem 2.14), we have \( {a}^{n} = a \) , and by Exercise 1.14, all of the binomial coefficients \( \left( \begin{array}{l} n \\ i \end{array}\right) \), for \( i = 1,\ldots, n - 1 \), are divisible by \( n \), and hence their images in the ring \( {\mathbb{Z}}_{n} \) vanish. That proves that the identity (21.1) holds when \( n \) is prime.\n\nConversely, suppose that \( n \) is composite and that \( a \in {\mathbb{Z}}_{n}^{ * } \) . Consider any prime factor \( p \) of \( n \), and suppose \( n = {p}^{k}m \), where \( p \nmid m \).\n\nWe claim that \( {p}^{k} \nmid \left( \begin{array}{l} n \\ p \end{array}\right) \) . To prove the claim, one simply observes that\n\n\[ \n\left( \begin{array}{l} n \\ p \end{array}\right) = \frac{n\left( {n - 1}\right) \cdots \left( {n - p + 1}\right) }{p!}\n\]\n\nand the numerator of this fraction is an integer divisible by \( {p}^{k} \), but no higher power of \( p \), and the denominator is divisible by \( p \), but no higher power of \( p \) . That proves the claim.\n\nFrom the claim, and the fact that \( a \in {\mathbb{Z}}_{n}^{ * } \), it follows that the coefficient of \( {X}^{n - p} \) in \( {\left( X + a\right) }^{n} \) is not zero, and hence the identity (21.1) does not hold.
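Theorem 21.1 is easy to check experimentally for small \( n \) by expanding \( {\left( X + a\right) }^{n} \) with exact coefficient arithmetic over \( {\mathbb{Z}}_{n} \). The following Python sketch is our own illustration (names are ours); note that it expands the full degree-\( n \) polynomial, so it is feasible only for small \( n \), unlike the reduction modulo \( {X}^{r} - 1 \) used by Algorithm AKS:

```python
def poly_mul_mod(f, g, n):
    """Multiply polynomials f, g (coefficient lists, lowest degree first) over Z_n."""
    h = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] = (h[i + j] + fi * gj) % n
    return h

def identity_holds(n, a):
    """Test whether (X + a)^n = X^n + a holds in Z_n[X]."""
    lhs = [1]                            # the constant polynomial 1
    for _ in range(n):                   # multiply in the factor (X + a), n times
        lhs = poly_mul_mod(lhs, [a % n, 1], n)
    rhs = [a % n] + [0] * (n - 1) + [1]  # coefficients of X^n + a
    return lhs == rhs
```

For example, `identity_holds(7, a)` is true for every \( a \), while `identity_holds(6, 1)` is false, just as the theorem predicts.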
Theorem 21.2. For integers \( n > 1 \) and \( m \geq 1 \), the least prime \( r \) such that \( r \nmid n \) and the multiplicative order of \( {\left\lbrack n\right\rbrack }_{r} \in {\mathbb{Z}}_{r}^{ * } \) is greater than \( m \) is \( O\left( {{m}^{2}\operatorname{len}\left( n\right) }\right) \) .
Proof. Call a prime \( r \) “good” if \( r \nmid n \) and the multiplicative order of \( {\left\lbrack n\right\rbrack }_{r} \in {\mathbb{Z}}_{r}^{ * } \) is greater than \( m \), and “bad” otherwise.
Theorem 21.3. Algorithm AKS can be implemented so that its running time is \( O\left( {\operatorname{len}{\left( n\right) }^{16.5}}\right) \) .
Proof. As discussed above, the value of \( r \) determined in step 2 will be \( O\left( {\operatorname{len}{\left( n\right) }^{5}}\right) \) . It is fairly straightforward to see that the running time of the algorithm is dominated by the running time of step 5 . Here, we have to perform \( O\left( {{r}^{1/2}\operatorname{len}\left( n\right) }\right) \) exponentiations to the power \( n \) in the ring \( {\mathbb{Z}}_{n}\left\lbrack X\right\rbrack /\left( {{X}^{r} - 1}\right) \) . Each of these exponentiations takes \( O\left( {\operatorname{len}\left( n\right) }\right) \) operations in \( {\mathbb{Z}}_{n}\left\lbrack X\right\rbrack /\left( {{X}^{r} - 1}\right) \), each of which takes \( O\left( {r}^{2}\right) \) operations in \( {\mathbb{Z}}_{n} \), each of which takes time \( O\left( {\operatorname{len}{\left( n\right) }^{2}}\right) \) . This yields a running time bounded by a constant times\n\n\[ \n{r}^{1/2}\operatorname{len}\left( n\right) \times \operatorname{len}\left( n\right) \times {r}^{2} \times \operatorname{len}{\left( n\right) }^{2} = {r}^{2.5}\operatorname{len}{\left( n\right) }^{4}.\n\]\n\nSubstituting the bound \( O\left( {\operatorname{len}{\left( n\right) }^{5}}\right) \) for \( r \), we obtain the desired bound.
Theorem 21.4. If the input to Algorithm AKS is prime, then the output is true.
Proof. Assume that the input \( n \) is prime. The test in step 1 will certainly fail. If the algorithm does not return true in step 3, then certainly the test in step 4 will fail as well. If the algorithm reaches step 5, then all of the tests in the loop in step 5 will fail; this follows from Theorem 21.1.
For all \( k \in {\mathbb{Z}}^{\left( r\right) } \), the kernel of \( {\widehat{\sigma }}_{k} \) is \( \left( {{X}^{r} - 1}\right) \), and the image of \( {\widehat{\sigma }}_{k} \) is \( E \) .
Proof. Let \( J \mathrel{\text{:=}} \operatorname{Ker}{\widehat{\sigma }}_{k} \), which is an ideal of \( {\mathbb{Z}}_{p}\left\lbrack X\right\rbrack \) . Let \( {k}^{\prime } \) be a positive integer such that \( k{k}^{\prime } \equiv 1\left( {\;\operatorname{mod}\;r}\right) \), which exists because \( \gcd \left( {r, k}\right) = 1 \) .\n\nTo show that \( J = \left( {{X}^{r} - 1}\right) \), we first observe that\n\n\[ \n{\widehat{\sigma }}_{k}\left( {{X}^{r} - 1}\right) = {\left( {\xi }^{k}\right) }^{r} - 1 = {\left( {\xi }^{r}\right) }^{k} - 1 = {1}^{k} - 1 = 0, \n\] \n\nand hence \( \left( {{X}^{r} - 1}\right) \subseteq J \) .\n\nNext, we show that \( J \subseteq \left( {{X}^{r} - 1}\right) \) . Let \( g \in J \) . We want to show that \( \left( {{X}^{r} - 1}\right) \mid g \) . Now, \( g \in J \) means that \( g\left( {\xi }^{k}\right) = 0 \) . If we set \( h \mathrel{\text{:=}} g\left( {X}^{k}\right) \), this implies that \( h\left( \xi \right) = 0 \) , which means that \( \left( {{X}^{r} - 1}\right) \mid h \) . So let us write \( h = \left( {{X}^{r} - 1}\right) f \), for some \( f \in {\mathbb{Z}}_{p}\left\lbrack X\right\rbrack \) . Then\n\n\[ \ng\left( \xi \right) = g\left( {\xi }^{k{k}^{\prime }}\right) = h\left( {\xi }^{{k}^{\prime }}\right) = \left( {{\xi }^{{k}^{\prime }r} - 1}\right) f\left( {\xi }^{{k}^{\prime }}\right) = 0, \n\] \n\nwhich implies that \( \left( {{X}^{r} - 1}\right) \mid g \) .\n\nThat finishes the proof that \( J = \left( {{X}^{r} - 1}\right) \) .\n\nFinally, to show that \( {\widehat{\sigma }}_{k} \) is surjective, suppose we are given an arbitrary element of \( E \), which we can express as \( g\left( \xi \right) \) for some \( g \in {\mathbb{Z}}_{p}\left\lbrack X\right\rbrack \) . Now set \( h \mathrel{\text{:=}} g\left( {X}^{{k}^{\prime }}\right) \), and observe that\n\n\[ \n{\widehat{\sigma }}_{k}\left( h\right) = h\left( {\xi }^{k}\right) = g\left( {\xi }^{k{k}^{\prime }}\right) = g\left( \xi \right) . \n\]
Lemma 21.7. For every \( \alpha \in E \), if \( k \in C\left( \alpha \right) \) and \( {k}^{\prime } \in C\left( \alpha \right) \), then \( k{k}^{\prime } \in C\left( \alpha \right) \) .
Proof. If \( {\sigma }_{k}\left( \alpha \right) = {\alpha }^{k} \) and \( {\sigma }_{{k}^{\prime }}\left( \alpha \right) = {\alpha }^{{k}^{\prime }} \), then\n\n\[ \n{\sigma }_{k{k}^{\prime }}\left( \alpha \right) = {\sigma }_{k}\left( {{\sigma }_{{k}^{\prime }}\left( \alpha \right) }\right) = {\sigma }_{k}\left( {\alpha }^{{k}^{\prime }}\right) = {\left( {\sigma }_{k}\left( \alpha \right) \right) }^{{k}^{\prime }} = {\left( {\alpha }^{k}\right) }^{{k}^{\prime }} = {\alpha }^{k{k}^{\prime }}, \n\]\n\nwhere we have made use of the homomorphic property of \( {\sigma }_{k} \) .
Lemma 21.8. For every \( k \in {\mathbb{Z}}^{\left( r\right) } \), if \( \alpha \in D\left( k\right) \) and \( \beta \in D\left( k\right) \), then \( {\alpha \beta } \in D\left( k\right) \) .
Proof. If \( {\sigma }_{k}\left( \alpha \right) = {\alpha }^{k} \) and \( {\sigma }_{k}\left( \beta \right) = {\beta }^{k} \), then\n\n\[ \n{\sigma }_{k}\left( {\alpha \beta }\right) = {\sigma }_{k}\left( \alpha \right) {\sigma }_{k}\left( \beta \right) = {\alpha }^{k}{\beta }^{k} = {\left( \alpha \beta \right) }^{k}, \n\]\n\nwhere again, we have made use of the homomorphic property of \( {\sigma }_{k} \) .
Lemma 21.11. Under assumptions (A4) and (A5), we have\n\n\[ \n{2}^{\min \left( {t,\ell }\right) } - 1 > {n}^{2\left\lfloor {t}^{1/2}\right\rfloor } \n\]
Proof. Observe that \( {\log }_{2}n \leq \operatorname{len}\left( n\right) \), and so it suffices to show that\n\n\[ \n{2}^{\min \left( {t,\ell }\right) } - 1 > {2}^{2\operatorname{len}\left( n\right) \left\lfloor {t}^{1/2}\right\rfloor } \n\]\n\nand for this, it suffices to show that\n\n\[ \n\min \left( {t,\ell }\right) > 2\operatorname{len}\left( n\right) \left\lfloor {t}^{1/2}\right\rfloor \n\]\n\nsince for all integers \( a, b \) with \( a > b \geq 1 \), we have \( {2}^{a} > {2}^{b} + 1 \) .\n\nTo show that \( t > 2\operatorname{len}\left( n\right) \left\lfloor {t}^{1/2}\right\rfloor \), it suffices to show that \( t > 2\operatorname{len}\left( n\right) {t}^{1/2} \), or equivalently, that \( t > 4\operatorname{len}{\left( n\right) }^{2} \) . But observe that by definition, \( t \) is the order of the subgroup of \( {\mathbb{Z}}_{r}^{ * } \) generated by \( {\left\lbrack n\right\rbrack }_{r} \) and \( {\left\lbrack p\right\rbrack }_{r} \), which is at least as large as the multiplicative order of \( {\left\lbrack n\right\rbrack }_{r} \) in \( {\mathbb{Z}}_{r}^{ * } \), and by assumption (A4), this is larger than \( 4\operatorname{len}{\left( n\right) }^{2} \) .\n\nFinally, directly by assumption (A5), we have \( \ell > 2\operatorname{len}\left( n\right) \left\lfloor {t}^{1/2}\right\rfloor \) .\n\nThat concludes the proof of Theorem 21.5.
For which real numbers \( x \) do you have \( \left| x\right| = 3 \) ?
Since \( \left| 3\right| = 3 \) and \( \left| {-3}\right| = 3 \), we see that there are two solutions, \( x = 3 \) or \( x = - 3 \) . The solution set is \( S = \{ - 3,3\} \).
Example 1.6. Solve for \( x : \left| x\right| = - 7 \) .
Solution. Note that \( \left| {-7}\right| = 7 \) and \( \left| 7\right| = 7 \) so that these cannot give any solutions. Indeed, there are no solutions, since the absolute value is always non-negative. The solution set is the empty set \( S = \{ \} \) .
Example 1.7. Solve for \( x : \left| x\right| = 0 \) .
Solution. Since \( - 0 = 0 \), there is only one solution, \( x = 0 \) . Thus, \( S = \{ 0\} \) .
Solve for \( x : \left| {x + 2}\right| = 6 \) .
Since the absolute value of \( x + 2 \) is 6, we see that \( x + 2 \) has to be either 6 or -6 . We evaluate each case,\n\n\[ \n\begin{array}{ll} \text{ either }x + 2 = 6, & \text{ or }x + 2 = - 6, \\ \Rightarrow x = 6 - 2, & \Rightarrow x = - 6 - 2, \\ \Rightarrow x = 4; & \Rightarrow x = - 8. \end{array} \n\]\n\nThe solution set is \( S = \{ - 8,4\} \) .
Example 1.9. Solve for \( x : \left| {{3x} - 4}\right| = 5 \)
Solution.\n\n\[ \n\begin{array}{ll} \text{ Either }{3x} - 4 = 5 & \text{ or }{3x} - 4 = - 5 \\ \Rightarrow {3x} = 9 & \Rightarrow {3x} = - 1 \\ \Rightarrow x = 3 & \Rightarrow x = - \frac{1}{3} \end{array} \n\] \n\nThe solution set is \( S = \left\{ {-\frac{1}{3},3}\right\} \) .
Solve for \( x : - 2 \cdot \left| {{12} + {3x}}\right| = - {18} \)
Solution. Dividing both sides by -2 gives \( \left| {{12} + {3x}}\right| = 9 \) . With this, we have the two cases\n\n\[ \begin{array}{ll} \text{ Either }{12} + {3x} = 9 & \text{ or }{12} + {3x} = - 9 \\ \Rightarrow {3x} = - 3 & \Rightarrow {3x} = - {21} \\ \Rightarrow x = - 1 & \Rightarrow x = - 7 \end{array} \]\n\nThe solution set is \( S = \{ - 7, - 1\} \) .
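The case distinctions in Examples 1.6 through 1.10 all follow one pattern: \( \left| {{ax} + b}\right| = c \) has no solution for \( c < 0 \), one solution for \( c = 0 \), and two solutions for \( c > 0 \) . A small Python routine (our own sketch, not part of the text) captures this:

```python
def solve_abs_linear(a, b, c):
    """Solution set of |a*x + b| = c, assuming a != 0."""
    if c < 0:
        return set()                     # an absolute value is never negative
    if c == 0:
        return {-b / a}                  # a*x + b must equal 0
    return {(c - b) / a, (-c - b) / a}   # either a*x + b = c or a*x + b = -c
```

For instance, `solve_abs_linear(3, -4, 5)` reproduces the solution set \( \left\{ {-\frac{1}{3},3}\right\} \) of Example 1.9, and `solve_abs_linear(3, 12, 9)` reproduces \( \{ - 7, - 1\} \) from the example above (after dividing by \( - 2 \)).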
Example 1.14. Graph the inequality \( \pi < x \leq 5 \) on the number line and write it in interval notation.
Solution.\n\nOn the number line:\n\n![37f2b8bb-bce0-4882-a084-b5aa2f5782a5_16_0.jpg](images/37f2b8bb-bce0-4882-a084-b5aa2f5782a5_16_0.jpg)\n\nInterval notation: \( \left( {\pi ,5}\right\rbrack \)
Example 1.15. Write the following interval as an inequality and in interval notation:
Solution.\n\n\[ \text{Inequality notation:}\; - 3 \leq x \]\n\n\[ \text{Interval notation:}\;\lbrack - 3,\infty ) \]
Example 1.16. Write the following interval as an inequality and in interval notation:
Solution.\n\n\[ \text{Inequality notation:}\;x < 2 \]\n\n\[ \text{Interval notation:}\;\left( {-\infty ,2}\right) \]
Solve for \( x \) : a) \( \left| {x + 7}\right| < 2 \)
Solution. a) We follow the three steps described above. In step 1, we solve the corresponding equality, \( \left| {x + 7}\right| = 2 \). \n\n\[ \n\begin{array}{l} x + 7 = 2 \\ \Rightarrow x = - 5 \end{array}\left| {\;\begin{array}{l} x + 7 = - 2 \\ \Rightarrow x = - 9 \end{array}}\right. \n\] \n\nThe solutions \( x = - 5 \) and \( x = - 9 \) divide the number line into three subintervals: \n\n![37f2b8bb-bce0-4882-a084-b5aa2f5782a5_18_0.jpg](images/37f2b8bb-bce0-4882-a084-b5aa2f5782a5_18_0.jpg) \n\nNow, in step 2, we check the inequality for one number in each of these subintervals. \n\nCheck: ![37f2b8bb-bce0-4882-a084-b5aa2f5782a5_18_1.jpg](images/37f2b8bb-bce0-4882-a084-b5aa2f5782a5_18_1.jpg) \n\nSince \( x = - 7 \) in the subinterval given by \( - 9 < x < - 5 \) solves the inequality \( \left| {x + 7}\right| < 2 \), it follows that all numbers in the subinterval given by \( - 9 < x < - 5 \) solve the inequality. Similarly, since \( x = - {10} \) and \( x = 0 \) do not solve the inequality, no number in these subintervals will solve the inequality. For step 3, we note that the numbers \( x = - 9 \) and \( x = - 5 \) are not included as solutions since the inequality is strict (that is we have \( < \) instead of \( \leq \) ).The solution set is therefore the interval \( S = \left( {-9, - 5}\right) \). The solution on the number line is: \n\n![37f2b8bb-bce0-4882-a084-b5aa2f5782a5_18_2.jpg](images/37f2b8bb-bce0-4882-a084-b5aa2f5782a5_18_2.jpg)
Solve for \( x : \;\left| {{12} - {5x}}\right| \leq 1 \)
Note that \( \left| {{12} - {5x}}\right| \leq 1 \) implies that\n\n\[- 1 \leq {12} - {5x} \leq 1\]\n\nso that\n\n\[- {13} \leq - {5x} \leq - {11}\]\n\nand by dividing by -5 (remembering to switch the direction of the inequalities when multiplying or dividing by a negative number) we see that\n\n\[\frac{13}{5} \geq x \geq \frac{11}{5}\]\n\nor in interval notation, we have the solution set\n\n\[S = \left\lbrack {\frac{11}{5},\frac{13}{5}}\right\rbrack\]
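The endpoints \( \frac{11}{5} \) and \( \frac{13}{5} \) can be double-checked numerically. The following Python sketch (our own illustration) scans a grid of rational points with exact arithmetic, avoiding floating-point rounding at the endpoints:

```python
from fractions import Fraction

# all grid points x = k/100 in [-5, 5] satisfying |12 - 5x| <= 1
sols = [Fraction(k, 100) for k in range(-500, 501)
        if abs(12 - 5 * Fraction(k, 100)) <= 1]
```

The smallest and largest such grid points are exactly \( \frac{11}{5} \) and \( \frac{13}{5} \), the endpoints of the solution interval \( S \).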
Example 1.22. Solve for \( x \) : a) \( \left| {x - 6}\right| = 4 \)
Solution. a) Consider the distance between \( x \) and 6 to be 4 on a number line:\n\n![37f2b8bb-bce0-4882-a084-b5aa2f5782a5_21_1.jpg](images/37f2b8bb-bce0-4882-a084-b5aa2f5782a5_21_1.jpg)\n\nThere are two solutions, \( x = 2 \) or \( x = {10} \) . That is, the distance between 2 and 6 is 4 and the distance between 10 and 6 is 4 .
Example 2.2. Graph the line \( y = {2x} + 3 \) .
Solution. We calculate \( y \) for various values of \( x \) . For example, when \( x \) is \( - 2, - 1,0,1,2 \), or 3, we calculate\n\n<table><tr><td>\( x \)</td><td>\( - 2 \)</td><td>\( - 1 \)</td><td>0</td><td>1</td><td>2</td><td>3</td></tr><tr><td>\( y \)</td><td>\( - 1 \)</td><td>1</td><td>3</td><td>5</td><td>7</td><td>9</td></tr></table>\n\nIn the above table each \( y \) value is calculated by substituting the corresponding \( x \) value into our equation \( y = {2x} + 3 \) :\n\n\[ x = - 2\; \Rightarrow \;y = 2 \cdot \left( {-2}\right) + 3 = - 4 + 3 = - 1 \]\n\n\[ x = - 1\; \Rightarrow \;y = 2 \cdot \left( {-1}\right) + 3 = - 2 + 3 = 1 \]\n\n\[ x = 0\; \Rightarrow \;y = 2 \cdot \left( 0\right) + 3 = 0 + 3 = 3 \]\n\n\[ x = 1\; \Rightarrow \;y = 2 \cdot \left( 1\right) + 3 = 2 + 3 = 5 \]\n\n\[ x = 2\; \Rightarrow \;y = 2 \cdot \left( 2\right) + 3 = 4 + 3 = 7 \]\n\n\[ x = 3\; \Rightarrow \;y = 2 \cdot \left( 3\right) + 3 = 6 + 3 = 9 \]\n\nIn the above calculation, the values for \( x \) were arbitrarily chosen. Since a line is completely determined by knowing two points on it, any two values for \( x \) would have worked for the purpose of graphing the line.\n\nDrawing the above points in the coordinate plane and connecting them gives the graph of the line \( y = {2x} + 3 \) :\n\n![37f2b8bb-bce0-4882-a084-b5aa2f5782a5_28_0.jpg](images/37f2b8bb-bce0-4882-a084-b5aa2f5782a5_28_0.jpg)\n\nAlternatively, note that the \( y \)-intercept is \( \left( {0,3}\right) \) (3 is the additive constant in our initial equation \( y = {2x} + 3 \)) and the slope \( m = 2 \) determines the rate at which the line grows: for each step to the right, we have to move two steps up.\n\nTo plot the graph, we first plot the \( y \)-intercept \( \left( {0,3}\right) \) . 
Then from that point, rise 2 and run 1 so that you find yourself at \( \left( {1,5}\right) \) (which must be on the graph), and similarly rise 2 and run 1 to get to \( \left( {2,7}\right) \) (which must be on the graph), etc. Plot these points on the graph and connect the dots to form a straight line. As noted above, any 2 distinct points on the graph of a straight line are enough to plot the complete line.
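The table of values above can equally well be produced programmatically (a Python one-liner, our own illustration; a plotting tool or graph paper then does the drawing):

```python
# points on the line y = 2x + 3 for x = -2, -1, 0, 1, 2, 3
points = [(x, 2 * x + 3) for x in range(-2, 4)]
```

This reproduces the table: the list begins with the point \( \left( {-2, - 1}\right) \) and ends with \( \left( {3,9}\right) \) .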
Example 2.3. Find the equation of the line in slope-intercept form.
Solution. The \( y \) -intercept can be read off the graph giving us that \( b = 2 \) . As for the slope, we use formula (2.1) and the two points on the line \( {P}_{1}\left( {0,2}\right) \) and \( {P}_{2}\left( {4,0}\right) \) . We obtain\n\n\[ m = \frac{0 - 2}{4 - 0} = \frac{-2}{4} = - \frac{1}{2}. \]\n\nThus, the line has the slope-intercept form \( y = - \frac{1}{2}x + 2 \) .
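Formula (2.1) for the slope through two points translates directly into code; a short Python sketch (the function name is ours, not from the text):

```python
def slope(p1, p2):
    """Slope of the line through p1 = (x1, y1) and p2 = (x2, y2), with x1 != x2."""
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)
```

For the points used above, `slope((0, 2), (4, 0))` returns \( - \frac{1}{2} \), matching the hand computation.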
Example 2.4. Find the equation of the line in slope-intercept form.
Solution. The \( y \) -intercept is \( b = - 4 \) . To obtain the slope we can again use the \( y \) -intercept \( {P}_{1}\left( {0, - 4}\right) \) . To use (2.1), we need another point \( {P}_{2} \) on the line. We may pick any second point on the line, for example, \( {P}_{2}\left( {3, - 3}\right) \) . With this,\n\nwe obtain\n\[ m = \frac{\left( {-3}\right) - \left( {-4}\right) }{3 - 0} = \frac{-3 + 4}{3} = \frac{1}{3}. \]\n\nThus, the line has the slope-intercept form \( y = \frac{1}{3}x - 4 \) .
Find the equation of the line in point-slope form (2.2).
Solution. We need to identify one point \( \left( {{x}_{1},{y}_{1}}\right) \) on the line together with the slope \( m \) of the line so that we can write the line in point-slope form: \( y - {y}_{1} = m\left( {x - {x}_{1}}\right) \) . By direct inspection, we identify the two points \( {P}_{1}\left( {5,1}\right) \) and \( {P}_{2}\left( {8,3}\right) \) on the line, and with this we calculate the slope as\n\n\[ m = \frac{3 - 1}{8 - 5} = \frac{2}{3} \]\n\nUsing the point \( \left( {5,1}\right) \) we write the line in point-slope form as follows:\n\n\[ y - 1 = \frac{2}{3}\left( {x - 5}\right) \]\n\nNote that our answer depends on the chosen point \( \left( {5,1}\right) \) on the line. Indeed, if we choose a different point on the line, such as \( \left( {8,3}\right) \), we obtain a different equation, (which nevertheless represents the same line):\n\n\[ y - 3 = \frac{2}{3}\left( {x - 8}\right) \]\n\nNote, that we do not need to solve this for \( y \), since we are looking for an answer in point-slope form.
Find the slope, find the \( y \) -intercept, and graph the line\n\n\[ {4x} + {2y} - 2 = 0. \]
Solution. We first rewrite the equation in slope-intercept form.\n\n\[ {4x} + {2y} - 2 = 0\;\overset{\left( \text{add } - {4x} + 2\right) }{ \Rightarrow }\;{2y} = - {4x} + 2 \]\n\n\[ \overset{\left( \text{divide }2\right) }{ \Rightarrow }\;y = - {2x} + 1 \]\n\nWe see that the slope is -2 and the \( y \)-intercept is \( \left( {0,1}\right) \).\n\nWe can then plot the \( y \)-intercept \( \left( {0,1}\right) \) and use the slope \( m = \frac{-2}{1} \) to find another point \( \left( {1, - 1}\right) \). Plot that point, connect the two plotted points, and extend to see the graph below.
Find the slope, \( y \) -intercept, and graph the line \( {5y} + {2x} = - {10} \) .
Solution. Again, we first rewrite the equation in slope-intercept form.\n\n\[ \n{5y} + {2x} = - {10}\;\overset{\left( \text{subtract }2x\right) }{ \Rightarrow }\;{5y} = - {2x} - {10} \n\]\n\n\[ \n\overset{\left( \text{divide }5\right) }{ \Rightarrow }\;y = \frac{-{2x} - {10}}{5} \n\]\n\n\[ \n\Rightarrow \;y = - \frac{2}{5}x - 2 \n\]\n\nNow, the slope is \( - \frac{2}{5} \) and the \( y \) -intercept is \( \left( {0, - 2}\right) \) .\n\nWe can plot the \( y \) -intercept and from there move 2 units down and 5 units to the right to find another point on the line. The graph is given below.\n\nTo graph it by plotting points, we only need to find points on the line; in fact, any two points are enough to completely determine the graph of the line.
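Rewriting a general equation \( {Ax} + {By} + C = 0 \) in slope-intercept form, as in the last two examples, amounts to two divisions. A minimal Python sketch (our own naming):

```python
def slope_intercept(A, B, C):
    """Return (m, b) with y = m*x + b for the line A*x + B*y + C = 0, B != 0."""
    return -A / B, -C / B
```

For \( {4x} + {2y} - 2 = 0 \) this gives \( m = - 2 \) and \( b = 1 \); for \( {2x} + {5y} + {10} = 0 \) (the equation \( {5y} + {2x} = - {10} \) rearranged) it gives \( m = - \frac{2}{5} \) and \( b = - 2 \) .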
Define the assignment \( f \) by the following table\n\n<table><tr><td>\( x \)</td><td>2</td><td>5</td><td>\( - 3 \)</td><td>0</td><td>7</td><td>4</td></tr><tr><td>\( y \)</td><td>6</td><td>8</td><td>6</td><td>4</td><td>\( - 1 \)</td><td>8</td></tr></table>\n\nThe assignment \( f \) assigns to the input 2 the output 6, which is also written as\n\n\[ f\left( 2\right) = 6\text{.} \]\n\nSimilarly, \( f \) assigns to 5 the number 8, in short \( f\left( 5\right) = 8 \), etc:\n\n\[ f\left( 5\right) = 8,\;f\left( {-3}\right) = 6,\;f\left( 0\right) = 4,\;f\left( 7\right) = - 1,\;f\left( 4\right) = 8. \]
The domain \( D \) is the set of all inputs. The domain is therefore\n\n\[ D = \{ - 3,0,2,4,5,7\} \]\n\nThe range \( R \) is the set of all outputs. The range is therefore\n\n\[ R = \{ - 1,4,6,8\} \]\n\nThe assignment \( f \) is indeed a function since each element of the domain gets assigned exactly one element in the range. Note that for an input number that is not in the domain, \( f \) does not assign an output to it. For example,\n\n\[ f\left( 1\right) = \text{undefined.} \]\n\nNote also that \( f\left( 5\right) = 8 \) and \( f\left( 4\right) = 8 \), so that \( f \) assigns to the inputs 5 and 4 the same output 8. Similarly, \( f \) also assigns the same output to the inputs 2 and -3 . Therefore we see that:\n\n- A function may assign the same output to two different inputs!
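A function given by a finite table is naturally modeled as a dictionary; here is the table above as a Python sketch (our own illustration):

```python
# the table defining f, stored as input -> output pairs
f = {2: 6, 5: 8, -3: 6, 0: 4, 7: -1, 4: 8}

domain = set(f)           # the set of all inputs
range_ = set(f.values())  # the set of all outputs
```

Two different keys may map to the same value (\( f\left( 5\right) = f\left( 4\right) = 8 \)), but a dictionary cannot hold two values under one key, mirroring the requirement that a function assigns exactly one output to each input; likewise, looking up `f[1]` raises a `KeyError`, just as \( f\left( 1\right) \) is undefined.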
Example 2.12. Consider the assignment \( f \) that is given by the following table.\n\n<table><tr><td>\( x \)</td><td>2</td><td>5</td><td>\( - 3 \)</td><td>0</td><td>5</td><td>4</td></tr><tr><td>\( y \)</td><td>6</td><td>8</td><td>6</td><td>4</td><td>\( - 1 \)</td><td>8</td></tr></table>\n\nThis assignment does not define a function! What went wrong?
Consider the input value 5 . What does \( f \) assign to the input 5 ? The third column states that \( f \) assigns to 5 the output 8, whereas the sixth column states that \( f \) assigns to 5 the output -1,\n\n\[ f\left( 5\right) = 8,\;f\left( 5\right) = - 1.\]\n\nHowever, by the definition of a function, to each input we have to assign exactly one output. So, here, to the input 5 we have assigned two outputs 8 and -1 . Therefore, \( f \) is not a function.\n\n- A function cannot assign two outputs to one input!
A university creates a mentoring program, which matches each freshman student with a senior student as his or her mentor. Within this program it is guaranteed that each freshman gets precisely one mentor, however two freshmen may receive the same mentor. Does the assignment of freshmen to mentor, or mentor to freshmen describe a function? If so, what is its domain, what is its range?
Since a senior may mentor several freshmen, we cannot take a mentor as an “input” and assign to him or her a unique freshman; the assignment of mentors to freshmen is therefore not a function. On the other hand, each freshman is matched with precisely one mentor, so the assignment of freshmen to mentors is a function. Its domain is the set of all freshmen in the program, and its range is the set of all seniors who mentor at least one freshman.
a) Is the rainfall a function of the month?\nb) Is the month a function of the rainfall?
a) Each month has exactly one amount of rainfall associated to it. Therefore, the assignment that associates to a month its rainfall (in inches) is a function.\nb) If we take a certain rainfall amount as our input data, can we associate a unique month to it? For example, February and March have the same amount of rainfall. Therefore, to one input amount of rainfall we cannot assign a unique month. The month is not a function of the rainfall.
Consider the function \( y = {5x} + 4 \) with domain all real numbers and range all real numbers. Note that for each input \( x \), we obtain exactly one output \( y \) .
For example, for the input \( x = 3 \) we get the output \( y = 5 \cdot 3 + 4 = {19} \), etc.
For each real number \( x \), denote by \( \lfloor x\rfloor \) the greatest integer that is less than or equal to \( x \). We call \( \lfloor x\rfloor \) the floor of \( x \).
For example, to calculate \( \lfloor {4.37}\rfloor \), note that all integers \( 4,3,2,\ldots \) are less or equal to 4.37 :\n\n\[ \ldots , - 3, - 2, - 1,0,1,2,3,4\; \leq \;{4.37} \]\n\nThe greatest of these integers is 4, so that \( \lfloor {4.37}\rfloor = 4 \) . We define the floor function as \( f\left( x\right) = \lfloor x\rfloor \) . Here are more examples of function values of the floor function.\n\n\[ \lfloor {7.3}\rfloor = 7,\;\lfloor \pi \rfloor = 3,\;\lfloor - {4.65}\rfloor = - 5, \]\n\n\[ \lfloor {12}\rfloor = {12},\;\left\lfloor \frac{-{26}}{3}\right\rfloor = \lfloor - {8.667}\rfloor = - 9 \]
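Most programming languages provide the floor function directly; the hand computations above can be checked in Python (our own illustration):

```python
import math

# floors of the sample values discussed above
floors = [math.floor(v) for v in (4.37, 7.3, math.pi, -4.65, 12, -26 / 3)]
```

The resulting list is [4, 7, 3, -5, 12, -9], matching the values computed by hand; note in particular that the floor of a negative non-integer rounds down, away from zero.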
Let \( A \) be the area of an isosceles right triangle with base side length \( x \) . Express \( A \) as a function of \( x \) .
Being an isosceles right triangle means that two side lengths are \( x \), and the angles are \( {45}^{ \circ },{45}^{ \circ } \), and \( {90}^{ \circ } \) (or in radian measure \( \frac{\pi }{4},\frac{\pi }{4} \), and \( \left. \frac{\pi }{2}\right) \) :\n\n![37f2b8bb-bce0-4882-a084-b5aa2f5782a5_38_0.jpg](images/37f2b8bb-bce0-4882-a084-b5aa2f5782a5_38_0.jpg)\n\nRecall that the area of a triangle is: area \( = \frac{1}{2} \) base \( \cdot \) height. In this case, we have base \( = x \), and height \( = x \), so that the area\n\n\[ A = \frac{1}{2}x \cdot x = \frac{1}{2}{x}^{2} \]\n\nTherefore, the area \( A\left( x\right) = \frac{1}{2} \cdot {x}^{2} \) .
Consider the equation \( y = {x}^{2} + 3 \) . This equation associates to each input number \( a \) exactly one output number \( b = {a}^{2} + 3 \) . Therefore, the equation defines a function.
The domain \( D \) is all real numbers, \( D = \mathbb{R} \) . Since \( {x}^{2} \) is always \( \geq 0 \), we see that \( {x}^{2} + 3 \geq 3 \), and conversely every number \( y \geq 3 \) can be written as \( y = {x}^{2} + 3 \) . (To see this, note that the input \( x = \sqrt{y - 3} \) for \( y \geq 3 \) gives the output \( {x}^{2} + 3 = {\left( \sqrt{y - 3}\right) }^{2} + 3 = y - 3 + 3 = y \) .) Therefore, the range is \( R = \lbrack 3,\infty ) \) .
Consider the equation \( {x}^{2} + {y}^{2} = {25} \) . Does this equation define \( y \) as a function of \( x \) ? That is, does this equation assign to each input \( x \) exactly one output \( y \) ?
An input number \( x \) gets assigned a \( y \) with \( {x}^{2} + {y}^{2} = {25} \) . Solving this for \( y \), we obtain\n\n\[ \n{y}^{2} = {25} - {x}^{2}\; \Rightarrow \;y = \pm \sqrt{{25} - {x}^{2}}. \n\]\n\nTherefore, there are two possible outputs associated to each input \( x \) with \( \left| x\right| < 5 \) :\n\n\[ \n\text{either}\;y = + \sqrt{{25} - {x}^{2}}\;\text{or}\;y = - \sqrt{{25} - {x}^{2}}\text{.} \n\]\n\nFor example, the input \( x = 0 \) has two outputs \( y = 5 \) and \( y = - 5 \) . However, a function cannot assign two outputs to one input \( x! \) The conclusion is that \( {x}^{2} + {y}^{2} = {25} \) does not determine \( y \) as a function!
For the given function \( f \), calculate the outputs \( f\left( 2\right), f\left( {-3}\right) \) , and \( f\left( {-1}\right) \) .
Solution. We substitute the input values into the function and simplify.\n\n\[ \text{a)}\;f\left( 2\right) = 3 \cdot 2 + 4 = 6 + 4 = {10}\text{,}\]\n\n\[ f\left( {-3}\right) = 3 \cdot \left( {-3}\right) + 4 = - 9 + 4 = - 5,\]\n\n\[ f\left( {-1}\right) = 3 \cdot \left( {-1}\right) + 4 = - 3 + 4 = 1.\]
Example 3.2. Let \( f \) be the function given by \( f\left( x\right) = {x}^{2} + {2x} - 3 \) . Find the following function values.\n\na) \( f\left( 5\right) \) b) \( f\left( 2\right) \) c) \( f\left( {-2}\right) \) d) \( f\left( 0\right) \)\n\ne) \( f\left( \sqrt{5}\right) \) f) \( f\left( {\sqrt{3} + 1}\right) \) g) \( f\left( a\right) \) h) \( f\left( a\right) + 5 \)\n\nj) \( f\left( {x + h}\right) - f\left( x\right) \) k) \( \frac{f\left( {x + h}\right) - f\left( x\right) }{h} \) l) \( \frac{f\left( x\right) - f\left( a\right) }{x - a} \)
Solution. The first four function values ((a)-(d)) can be calculated directly:\n\n\[ f\left( 5\right) = {5}^{2} + 2 \cdot 5 - 3 = {25} + {10} - 3 = {32}, \]\n\n\[ f\left( 2\right) = {2}^{2} + 2 \cdot 2 - 3 = 4 + 4 - 3 = 5, \]\n\n\[ f\left( {-2}\right) = {\left( -2\right) }^{2} + 2 \cdot \left( {-2}\right) - 3 = 4 - 4 - 3 = - 3, \]\n\n\[ f\left( 0\right) = {0}^{2} + 2 \cdot 0 - 3 = 0 + 0 - 3 = - 3. \]\n\nThe next two values ((e) and (f)) are similar, but the arithmetic is a bit more involved.\n\n\[ f\left( \sqrt{5}\right) = {\sqrt{5}}^{2} + 2 \cdot \sqrt{5} - 3 = 5 + 2 \cdot \sqrt{5} - 3 = 2 + 2 \cdot \sqrt{5}, \]\n\n\[ f\left( {\sqrt{3} + 1}\right) = {\left( \sqrt{3} + 1\right) }^{2} + 2 \cdot \left( {\sqrt{3} + 1}\right) - 3 \]\n\n\[ = \;\left( {\sqrt{3} + 1}\right) \cdot \left( {\sqrt{3} + 1}\right) + 2 \cdot \left( {\sqrt{3} + 1}\right) - 3 \]\n\n\[ = \sqrt{3} \cdot \sqrt{3} + 2 \cdot \sqrt{3} + 1 \cdot 1 + 2 \cdot \sqrt{3} + 2 - 3 \]\n\n\[ = 3 + 2 \cdot \sqrt{3} + 1 + 2 \cdot \sqrt{3} + 2 - 3 \]\n\n\[ = 3 + 4 \cdot \sqrt{3}\text{. 
} \]\n\nThe last five values \( \left( {\left( \mathrm{g}\right) - \left( \mathrm{l}\right) }\right) \) are purely algebraic:\n\n\[ f\left( a\right) = {a}^{2} + 2 \cdot a - 3, \]\n\n\[ f\left( a\right) + 5 = {a}^{2} + 2 \cdot a - 3 + 5 = {a}^{2} + 2 \cdot a + 2, \]\n\n\[ f\left( {x + h}\right) = {\left( x + h\right) }^{2} + 2 \cdot \left( {x + h}\right) - 3 \]\n\n\[ = {x}^{2} + {2xh} + {h}^{2} + {2x} + {2h} - 3, \]\n\n\[ f\left( {x + h}\right) - f\left( x\right) = \left( {{x}^{2} + {2xh} + {h}^{2} + {2x} + {2h} - 3}\right) - \left( {{x}^{2} + {2x} - 3}\right) \]\n\n\[ = {x}^{2} + {2xh} + {h}^{2} + {2x} + {2h} - 3 - {x}^{2} - {2x} + 3 \]\n\n\[ = {2xh} + {h}^{2} + {2h} \]\n\n\[ \frac{f\left( {x + h}\right) - f\left( x\right) }{h} = \frac{{2xh} + {h}^{2} + {2h}}{h} \]\n\n\[ = \frac{h \cdot \left( {{2x} + h + 2}\right) }{h} = {2x} + h + 2 \]\n\nand\n\n\[ \frac{f\left( x\right) - f\left( a\right) }{x - a} = \frac{\left( {{x}^{2} + {2x} - 3}\right) - \left( {{a}^{2} + {2a} - 3}\right) }{x - a} \]\n\n\[ = \frac{{x}^{2} + {2x} - 3 - {a}^{2} - {2a} + 3}{x - a} = \frac{{x}^{2} - {a}^{2} + {2x} - {2a}}{x - a} \]\n\n\[ = \frac{\left( {x + a}\right) \left( {x - a}\right) + 2\left( {x - a}\right) }{x - a} = \frac{\left( {x - a}\right) \left( {x + a + 2}\right) }{\left( x - a\right) } = x + a + 2. \]
Calculate the difference quotient \( \frac{f\left( {x + h}\right) - f\left( x\right) }{h} \) for a) \( f\left( x\right) = {x}^{3} + 2 \)
Solution. We calculate first the difference quotient step by step.\n\n\[ \text{a)}f\left( {x + h}\right) = {\left( x + h\right) }^{3} + 2 = \left( {x + h}\right) \cdot \left( {x + h}\right) \cdot \left( {x + h}\right) + 2 \]\n\n\[ = \left( {{x}^{2} + {2xh} + {h}^{2}}\right) \cdot \left( {x + h}\right) + 2 \]\n\n\[ = {x}^{3} + 2{x}^{2}h + x{h}^{2} + {x}^{2}h + {2x}{h}^{2} + {h}^{3} + 2, \]\n\n\[ = {x}^{3} + 3{x}^{2}h + {3x}{h}^{2} + {h}^{3} + 2. \]\n\nSubtracting \( f\left( x\right) \) from \( f\left( {x + h}\right) \) gives\n\n\[ f\left( {x + h}\right) - f\left( x\right) = \left( {{x}^{3} + 3{x}^{2}h + {3x}{h}^{2} + {h}^{3} + 2}\right) - \left( {{x}^{3} + 2}\right) \]\n\n\[ = {x}^{3} + 3{x}^{2}h + {3x}{h}^{2} + {h}^{3} + 2 - {x}^{3} - 2 \]\n\n\[ = 3{x}^{2}h + {3x}{h}^{2} + {h}^{3}. \]\n\nWith this we obtain:\n\n\[ \frac{f\left( {x + h}\right) - f\left( x\right) }{h} = \frac{3{x}^{2}h + {3x}{h}^{2} + {h}^{3}}{h} \]\n\n\[ = \frac{h \cdot \left( {3{x}^{2} + {3xh} + {h}^{2}}\right) }{h} = 3{x}^{2} + {3xh} + {h}^{2}. \]
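The simplified difference quotient \( 3{x}^{2} + {3xh} + {h}^{2} \) can be cross-checked numerically; a Python sketch (the function names are ours):

```python
def f(x):
    return x**3 + 2

def difference_quotient(f, x, h):
    """(f(x + h) - f(x)) / h, assuming h != 0."""
    return (f(x + h) - f(x)) / h
```

For instance, with \( x = 2 \) and \( h = {0.5} \), both the quotient and the simplified formula give 15.25.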
Example 3.5. Find the domain of each of the following functions.
Solution.\n\na) There is no problem taking a real number \( x \) to any (positive) power. Therefore, \( f \) is defined for all real numbers \( x \), and the domain is written as \( D = \mathbb{R} \) .\n\nb) Again, we can take the absolute value for any real number \( x \) . The domain is all real numbers, \( D = \mathbb{R} \) .\n\nc) The square root \( \sqrt{x} \) is only defined for \( x \geq 0 \) (remember we are not using complex numbers yet!). Thus, the domain is \( D = \lbrack 0,\infty ) \) .\n\nd) Again, the square root is only defined for non-negative numbers. Thus, the argument in the square root has to be greater than or equal to zero: \( x - 3 \geq 0 \) . Solving this for \( x \) gives\n\n\[ \n x - 3 \geq 0\;\overset{\left( \text{add }3\right) }{ \Rightarrow }\;x \geq 3.\n\]\n\nThe domain is therefore \( D = \lbrack 3,\infty ) \) .\n\ne) A fraction is defined whenever the denominator is not zero, so that in this case \( \frac{1}{x} \) is defined whenever \( x \neq 0 \) . Therefore, the domain is all real numbers except zero, \( D = \mathbb{R} - \{ 0\} \) .\n\nf) Again, we need to make sure that the denominator does not become zero; however, we do not care about the numerator. The denominator is zero exactly when \( {x}^{2} + {8x} + {15} = 0 \) . Solving this for \( x \) gives:\n\n\[ \n {x}^{2} + {8x} + {15} = 0\; \Rightarrow \;\left( {x + 3}\right) \cdot \left( {x + 5}\right) = 0\n\]\n\n\[ \n \Rightarrow \;x + 3 = 0\text{or}x + 5 = 0\n\]\n\n\[ \n \Rightarrow \;x = - 3\text{ or }x = - 5\text{. }\n\]\n\nThe domain is all real numbers except for -3 and -5, that is \( D = \mathbb{R} - \{ - 5, - 3\} \) .\n\ng) The function is explicitly defined for all \( 2 < x \leq 4 \) and \( 5 \leq x \) . Therefore, the domain is \( D = \left( {2,4\rbrack \cup \lbrack 5,\infty }\right) \) .
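For part f), the factoring step can be confirmed by direct substitution. This small Python check (an illustration, not part of the original solution) verifies that \( x = -5 \) and \( x = -3 \) are exactly the integer roots of the denominator:

```python
# Verify the roots of the denominator x^2 + 8x + 15 found by factoring.

def denom(x):
    return x**2 + 8*x + 15

# Scan a range of integers; only -5 and -3 make the denominator zero.
roots = [x for x in range(-10, 11) if denom(x) == 0]
print(roots)
```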
Let \( y = {x}^{2} \) with domain \( D = \mathbb{R} \) being the set of all real numbers. We can graph this after calculating a table as follows:
<table><tr><td>\( x \)</td><td>\( - 3 \)</td><td>\( - 2 \)</td><td>\( - 1 \)</td><td>0</td><td>1</td><td>2</td><td>3</td></tr><tr><td>\( y \)</td><td>9</td><td>4</td><td>1</td><td>0</td><td>1</td><td>4</td><td>9</td></tr></table>
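A table like this can also be produced programmatically. The following Python snippet (a sketch) recomputes the values of \( y = {x}^{2} \) for \( x = -3,\ldots ,3 \):

```python
# Recompute the table of values for y = x^2 over x = -3, ..., 3.
table = [(x, x**2) for x in range(-3, 4)]
for x, y in table:
    print(x, y)
```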
Let \( f \) be the function given by the following graph.
Here, the dashed lines show that the input \( x = 3 \) gives an output of \( y = 2 \) . Similarly, we can obtain other output values from the graph:\n\n\[ f\left( 2\right) = 4,\;f\left( 3\right) = 2,\;f\left( 5\right) = 2,\;f\left( 7\right) = 4. \]\n\nNote that in the above graph, a closed point means that the point is part of the graph, whereas an open point means that it is not part of the graph.\n\nThe domain is the set of all possible input values on the \( x \) -axis. Since we can take any number \( 2 \leq x \leq 7 \) as an input, the domain is the interval \( D = \left\lbrack {2,7}\right\rbrack \) . The range is the set of all possible output values on the \( y \) -axis. Since any number \( 1 < y \leq 4 \) is obtained as an output, the range is \( R = (1,4\rbrack \) . Note in particular that \( y = 1 \) is not an output, since \( f\left( 5\right) = 2 \) .
Let \( f \) be the function given by the following graph.
Here are some function values that can be read from the graph:\n\n\[ f\left( {-5}\right) = 2,\;f\left( {-4}\right) = 3,\;f\left( {-3}\right) \text{and}f\left( {-2}\right) \text{are undefined,}\]\n\n\[ f\left( {-1}\right) = 2,\;f\left( 0\right) = 1,\;f\left( 1\right) = 2,\;f\left( 2\right) = - 1,\;f\left( 4\right) = 0,\;f\left( 5\right) = 1.\]\n\nNote that the output value \( f\left( 3\right) \) is somewhere between -1 and 0, but we can only read off an approximation of \( f\left( 3\right) \) from the graph.\n\nTo find the domain of the function, we need to determine all possible \( x \)-coordinates (or in other words, we need to project the graph onto the \( x \) -axis). The possible \( x \)-coordinates are from the interval \( \lbrack - 5, - 3) \) together with the intervals \( \left( {-2,2}\right) \) and \( \left\lbrack {2,5}\right\rbrack \) . The last two intervals may be combined. We get the domain:\n\n\[ D = \left\lbrack {-5, - 3)\cup ( - 2,5}\right\rbrack .\]\n\nFor the range, we look at all possible \( y \) -values. These are given by the intervals \( (1,3\rbrack \) and \( \left( {0,3}\right) \) and \( \left\lbrack {-1,1}\right\rbrack \) . Combining these three intervals, we obtain the range\n\n\[ R = \left\lbrack {-1,3}\right\rbrack .\]
Consider the input \( x = 4 \) . There are several outputs that we get for \( x = 4 \) from this graph:\n\n\[ f\left( 4\right) = 1,\;f\left( 4\right) = 2,\;f\left( 4\right) = 3. \]\n\nHowever, in a function, it is not allowed to obtain more than one output for one input! Therefore, this graph is not the graph of a function!
The graph in the previous example fails to be the graph of a function because some input has more than one output: \( f\left( 4\right) = 1, f\left( 4\right) = 2, f\left( 4\right) = 3 \) .\n\n![37f2b8bb-bce0-4882-a084-b5aa2f5782a5_51_0.jpg](images/37f2b8bb-bce0-4882-a084-b5aa2f5782a5_51_0.jpg)\n\nIn other words, there is a vertical line \( \left( {x = 4}\right) \) which intersects the graph in more than one point. This observation is generalized in the following vertical line test.\n\nObservation 3.10 (Vertical Line Test). A graph is the graph of a function precisely when every vertical line intersects the graph at most once.
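For a finite list of plotted points, the vertical line test amounts to checking that no \( x \)-value appears with two different \( y \)-values. The helper below is a sketch (the function name and sample point lists are illustrative; the failing points are the ones from the example above):

```python
# Vertical line test for a finite set of (x, y) points: no x-value
# may occur with two different y-values.

def passes_vertical_line_test(points):
    seen = {}
    for x, y in points:
        if x in seen and seen[x] != y:
            return False  # a vertical line hits the graph more than once
        seen[x] = y
    return True

bad = [(4, 1), (4, 2), (4, 3)]           # x = 4 has three outputs
good = [(2, 4), (3, 2), (5, 2), (7, 4)]  # one output per input
print(passes_vertical_line_test(bad), passes_vertical_line_test(good))
```

Note that repeated \( y \)-values (as in the `good` list) are allowed; only repeated \( x \)-values with different outputs violate the test.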
Consider the graph of the equation \( x = {y}^{2} \)
This graph does not pass the vertical line test, so \( y \) is not a function of \( x \) . However, \( x \) is a function of \( y \) since, if you consider \( y \) to be the input, each input has exactly one output (the graph passes the analogous 'horizontal line' test).
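The failure of the vertical line test for \( x = {y}^{2} \) can be made concrete: solving for \( y \) gives \( y = \pm \sqrt{x} \), so every positive \( x \) yields two outputs. A quick Python illustration (the sample values are chosen for convenience):

```python
import math

# For x = y^2, each positive x corresponds to two y-values...
x = 4
ys = [math.sqrt(x), -math.sqrt(x)]
print(ys)  # two distinct y-values for the single input x = 4

# ...but each y determines exactly one x, so x IS a function of y.
y = -2
print(y**2)
```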