Consider the following algorithm, which models an experiment in which we toss a fair coin repeatedly until it comes up heads:

repeat

\[ b \overset{\phi}{\leftarrow} \{0, 1\} \]

until \( b = 1 \)

For each positive integer \( n \), let \( \beta_n \) be the probability that the algorithm executes at least \( n \) loop iterations, in the sense of Example 9.4.
It is not hard to see that \( \beta_n = 2^{-n+1} \), and since \( \beta_n \rightarrow 0 \) as \( n \rightarrow \infty \), the algorithm halts with probability 1, even though the loop is not guaranteed to terminate after any particular, finite number of steps.
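The claim \( \beta_n = 2^{-n+1} \) is easy to corroborate empirically. The following Python sketch (illustrative only; the helper name is ours, not the text's) simulates the algorithm and compares the observed tail probabilities with the predicted values:

```python
import random

def toss_until_heads(rng):
    """Repeat: draw b uniformly from {0, 1}; stop when b = 1. Return the
    number of loop iterations executed."""
    n = 0
    while True:
        n += 1
        if rng.randint(0, 1) == 1:
            return n

rng = random.Random(1)
trials = 100_000
counts = [toss_until_heads(rng) for _ in range(trials)]

# Compare the empirical tail probability P[at least n iterations]
# with the predicted value beta_n = 2^(-n+1).
for n in (1, 2, 3, 4, 5):
    est = sum(1 for c in counts if c >= n) / trials
    print(n, est, 2.0 ** (-n + 1))
```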
Yes
Consider the following algorithm:

\( i \leftarrow 0 \)

repeat

\[ i \leftarrow i + 1 \]

\[ \sigma \overset{\phi}{\leftarrow} \{0, 1\}^{\times i} \]

until \( \sigma = 0^{\times i} \)

For each positive integer \( n \), let \( \beta_n \) be the probability that the algorithm executes at least \( n \) loop iterations, in the sense of Example 9.4.
It is not hard to see that

\[ \beta_n = \prod_{i=1}^{n-1} \left( 1 - 2^{-i} \right) \geq \prod_{i=1}^{n-1} e^{-2^{-i+1}} = e^{-\sum_{i=0}^{n-2} 2^{-i}} \geq e^{-2}, \]

where we have made use of the estimate (iii) in §A1. Therefore,

\[ \lim_{n \rightarrow \infty} \beta_n \geq e^{-2} > 0, \]

and so the algorithm does not halt with probability 1, even though it never falls into an infinite loop.
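The bound can be checked numerically. This short Python sketch (ours, not from the text) evaluates \( \beta_n = \prod_{i=1}^{n-1}(1 - 2^{-i}) \) directly and compares it with the lower bound \( e^{-2} \):

```python
import math

def beta(n):
    """beta_n = prod_{i=1}^{n-1} (1 - 2^(-i)), the probability of executing
    at least n loop iterations."""
    prod = 1.0
    for i in range(1, n):
        prod *= 1.0 - 2.0 ** (-i)
    return prod

# beta_n decreases with n, but stays above the lower bound e^(-2) ~ 0.135.
for n in (2, 5, 10, 30):
    print(n, beta(n))
print("e^-2 =", math.exp(-2))
```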
Yes
Consider the following probabilistic algorithm that takes as input a positive integer \( m \) . It models an experiment in which we toss a fair coin repeatedly until it comes up heads \( m \) times.
Let \( L \) be the random variable that represents the number of loop iterations executed by the algorithm on a fixed input \( m \). We claim that \( \mathrm{E}[L] = 2m \). To see this, define random variables \( L_1, \ldots, L_m \), where \( L_1 \) is the number of loop iterations needed to get \( b = 1 \) for the first time, \( L_2 \) is the number of additional loop iterations needed to get \( b = 1 \) for the second time, and so on. Clearly, we have \( L = L_1 + \cdots + L_m \), and moreover, \( \mathrm{E}[L_i] = 2 \) for \( i = 1, \ldots, m \); therefore, by linearity of expectation, we have \( \mathrm{E}[L] = \mathrm{E}[L_1] + \cdots + \mathrm{E}[L_m] = 2m \). It follows that the expected running time of this algorithm on input \( m \) is \( O(m) \).
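A quick simulation supports the claim \( \mathrm{E}[L] = 2m \). The sketch below (illustrative; the function name is ours) tosses a fair coin until \( m \) heads have appeared and averages the number of tosses over many trials:

```python
import random

def loops_until_m_heads(m, rng):
    """Toss a fair coin until heads has come up m times; return the total
    number of tosses (= loop iterations L)."""
    heads = tosses = 0
    while heads < m:
        tosses += 1
        heads += rng.randint(0, 1)
    return tosses

rng = random.Random(7)
m = 10
trials = 20_000
avg = sum(loops_until_m_heads(m, rng) for _ in range(trials)) / trials
print(avg)  # should be close to E[L] = 2m = 20
```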
Yes
Consider the following algorithm:

\( n \leftarrow 0 \)

repeat \( n \leftarrow n + 1 \), \( b \overset{\phi}{\leftarrow} \{0, 1\} \) until \( b = 1 \)

repeat \( \sigma \overset{\phi}{\leftarrow} \{0, 1\}^{\times n} \) until \( \sigma = 0^{\times n} \)

The expected running time is infinite (even though it does halt with probability 1).
To see this, define random variables \( L_1 \) and \( L_2 \), where \( L_1 \) is the number of iterations of the first loop, and \( L_2 \) is the number of iterations of the second. As in Example 9.7, the distribution of \( L_1 \) is a geometric distribution with associated success probability \( 1/2 \), and \( \mathrm{E}[L_1] = 2 \). For each \( k \geq 1 \), the conditional distribution of \( L_2 \) given \( L_1 = k \) is a geometric distribution with associated success probability \( 1/2^k \), and so \( \mathrm{E}[L_2 \mid L_1 = k] = 2^k \). Therefore,

\[ \mathrm{E}[L_2] = \sum_{k \geq 1} \mathrm{E}[L_2 \mid L_1 = k] \, \mathrm{P}[L_1 = k] = \sum_{k \geq 1} 2^k \cdot 2^{-k} = \sum_{k \geq 1} 1 = \infty. \]
Yes
Theorem 9.3. Under the assumptions above, (i) \( L \) has a geometric distribution with associated success probability \( \mathrm{P}\left\lbrack {\mathcal{H}}_{1}\right\rbrack \) , and in particular, \( \mathrm{E}\left\lbrack L\right\rbrack = 1/\mathrm{P}\left\lbrack {\mathcal{H}}_{1}\right\rbrack \) ;
Proof. (i) is clear.
No
Suppose \( T \) is a finite set, and \( T' \) is a non-empty, finite subset of \( T \). Consider the following generalization of Algorithm RN:

repeat

\[ y \overset{\phi}{\leftarrow} T \]

until \( y \in T' \)

output \( y \)

Here, we assume that we have an algorithm to generate a random element of \( T \) (i.e., uniformly distributed over \( T \)), and an efficient algorithm to test for membership in \( T' \). Let \( L \) denote the number of loop iterations, and \( Y \) the output. Also, let \( Y_1 \) be the value of \( y \) in the first iteration, and \( \mathcal{H}_1 \) the event that the algorithm halts in the first iteration.
Since \( Y_1 \) is uniformly distributed over \( T \), and \( \mathcal{H}_1 \) is the event that \( Y_1 \in T' \), we have \( \mathrm{P}[\mathcal{H}_1] = |T'|/|T| \). It follows that \( \mathrm{E}[L] = |T|/|T'| \). As for the output, for every \( t \in T \), we have

\[ \mathrm{P}[Y = t] = \mathrm{P}[Y_1 = t \mid \mathcal{H}_1] = \mathrm{P}[Y_1 = t \mid Y_1 \in T'], \]

which is 0 if \( t \notin T' \) and is \( 1/|T'| \) if \( t \in T' \). It follows that \( Y \) is uniformly distributed over \( T' \).
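The analysis can be illustrated with a direct implementation of the generalized Algorithm RN. In this Python sketch (ours; the particular \( T \) and membership test are arbitrary choices for illustration), the output frequencies come out uniform over \( T' \):

```python
import random
from collections import Counter

def sample_from_subset(T, in_T_prime, rng):
    """Generalized Algorithm RN: draw y uniformly from T until y lies in T'.
    The output is then uniformly distributed over T'."""
    while True:
        y = rng.choice(T)
        if in_T_prime(y):
            return y

rng = random.Random(0)
T = list(range(10))                  # T = {0, ..., 9}
in_T_prime = lambda y: y % 3 == 0    # T' = {0, 3, 6, 9}, so E[L] = 10/4
counts = Counter(sample_from_subset(T, in_T_prime, rng) for _ in range(40_000))
print(counts)  # each element of T' should appear roughly 10,000 times
```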
Yes
Let us analyze the following algorithm:

repeat

\[ y \overset{\phi}{\leftarrow} \{1, 2, 3, 4\} \]

\[ z \overset{\phi}{\leftarrow} \{1, \ldots, y\} \]

until \( z = 1 \)

output \( y \)

With each loop iteration, the algorithm chooses \( y \) uniformly at random, and then decides to halt with probability \( 1/y \). Let \( L \) denote the number of loop iterations, and \( Y \) the output. Also, let \( Y_1 \) be the value of \( y \) in the first iteration, and \( \mathcal{H}_1 \) the event that the algorithm halts in the first iteration. \( Y_1 \) is uniformly distributed over \( \{1, \ldots, 4\} \), and for \( t = 1, \ldots, 4 \), \( \mathrm{P}[\mathcal{H}_1 \mid Y_1 = t] = 1/t \). Therefore,

\[ \mathrm{P}[\mathcal{H}_1] = \sum_{t=1}^{4} \mathrm{P}[\mathcal{H}_1 \mid Y_1 = t] \, \mathrm{P}[Y_1 = t] = \sum_{t=1}^{4} (1/t)(1/4) = 25/48. \]

Thus, \( \mathrm{E}[L] = 48/25 \). For the output distribution, for \( t = 1, \ldots, 4 \), we have

\[ \mathrm{P}[Y = t] = \mathrm{P}[Y_1 = t \mid \mathcal{H}_1] = \mathrm{P}[(Y_1 = t) \cap \mathcal{H}_1] / \mathrm{P}[\mathcal{H}_1] = \mathrm{P}[\mathcal{H}_1 \mid Y_1 = t] \, \mathrm{P}[Y_1 = t] / \mathrm{P}[\mathcal{H}_1] = (1/t)(1/4)(48/25) = \frac{12}{25t}. \]
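Simulating the algorithm confirms the computed output distribution \( \mathrm{P}[Y = t] = 12/(25t) \). A hypothetical Python sketch (ours, not the text's):

```python
import random
from collections import Counter

def run_once(rng):
    """One execution: choose y uniformly from {1,2,3,4}, then z uniformly
    from {1,...,y}; halt (so with probability 1/y) when z = 1."""
    while True:
        y = rng.randint(1, 4)
        z = rng.randint(1, y)
        if z == 1:
            return y

rng = random.Random(3)
trials = 100_000
counts = Counter(run_once(rng) for _ in range(trials))
for t in (1, 2, 3, 4):
    print(t, counts[t] / trials, 12 / (25 * t))  # empirical vs. 12/(25t)
```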
Yes
Theorem 10.1. If \( n \) is prime, then \( {L}_{n} = {\mathbb{Z}}_{n}^{ * } \) . If \( n \) is composite and \( {L}_{n} \varsubsetneq {\mathbb{Z}}_{n}^{ * } \), then \( \left| {L}_{n}\right| \leq \left( {n - 1}\right) /2 \) .
Proof. Note that \( {L}_{n} \) is the kernel of the \( \left( {n - 1}\right) \) -power map on \( {\mathbb{Z}}_{n}^{ * } \), and hence is a subgroup of \( {\mathbb{Z}}_{n}^{ * } \) . If \( n \) is prime, then we know that \( {\mathbb{Z}}_{n}^{ * } \) is a group of order \( n - 1 \) . Since the order of a group element divides the order of the group, we have \( {\alpha }^{n - 1} = 1 \) for all \( \alpha \in {\mathbb{Z}}_{n}^{ * } \) . That is, \( {L}_{n} = {\mathbb{Z}}_{n}^{ * } \) . Suppose that \( n \) is composite and \( {L}_{n} \varsubsetneq {\mathbb{Z}}_{n}^{ * } \) . Since the order of a subgroup divides the order of the group, we have \( \left| {\mathbb{Z}}_{n}^{ * }\right| = t\left| {L}_{n}\right| \) for some integer \( t > 1 \) . From this, we conclude that \[ \left| {L}_{n}\right| = \frac{1}{t}\left| {\mathbb{Z}}_{n}^{ * }\right| \leq \frac{1}{2}\left| {\mathbb{Z}}_{n}^{ * }\right| \leq \frac{n - 1}{2}. \]
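Theorem 10.1 can be checked exhaustively for small odd \( n \). The following Python sketch (an illustration of ours; the helper names are not from the text) computes \( L_n \) as the kernel of the \( (n-1) \)-power map and verifies both cases of the theorem:

```python
from math import gcd

def L(n):
    """The kernel of the (n-1)-power map on Z_n^*: all units a with
    a^(n-1) = 1 (mod n)."""
    return [a for a in range(1, n) if gcd(a, n) == 1 and pow(a, n - 1, n) == 1]

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for n in range(3, 60, 2):
    size_Ln = len(L(n))
    size_units = sum(1 for a in range(1, n) if gcd(a, n) == 1)
    if is_prime(n):
        assert size_Ln == size_units        # L_n = Z_n^*
    elif size_Ln < size_units:              # L_n a proper subgroup
        assert size_Ln <= (n - 1) // 2      # the bound of Theorem 10.1
print("Theorem 10.1 checked for all odd n < 60")
```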
Yes
Theorem 10.2. Every Carmichael number \( n \) is of the form \( n = {p}_{1}\cdots {p}_{r} \), where the \( {p}_{i} \) ’s are distinct primes, \( r \geq 3 \), and \( \left( {{p}_{i} - 1}\right) \mid \left( {n - 1}\right) \) for \( i = 1,\ldots, r \) .
Proof. Let \( n = p_1^{e_1} \cdots p_r^{e_r} \) be a Carmichael number. By the Chinese remainder theorem, we have an isomorphism of \( \mathbb{Z}_n^* \) with the group

\[ \mathbb{Z}_{p_1^{e_1}}^* \times \cdots \times \mathbb{Z}_{p_r^{e_r}}^*, \]

and we know that each group \( \mathbb{Z}_{p_i^{e_i}}^* \) is cyclic of order \( p_i^{e_i - 1}(p_i - 1) \). Thus, the power \( n - 1 \) kills the group \( \mathbb{Z}_n^* \) if and only if it kills all the groups \( \mathbb{Z}_{p_i^{e_i}}^* \), which happens if and only if \( p_i^{e_i - 1}(p_i - 1) \mid (n - 1) \). Now, on the one hand, \( n \equiv 0 \pmod{p_i} \). On the other hand, if \( e_i > 1 \), then \( p_i \) would divide \( n - 1 \), and so we would have \( n \equiv 1 \pmod{p_i} \), which is clearly impossible. Thus, we must have \( e_i = 1 \).

It remains to show that \( r \geq 3 \). Suppose \( r = 2 \), so that \( n = p_1 p_2 \). We have

\[ n - 1 = p_1 p_2 - 1 = (p_1 - 1) p_2 + (p_2 - 1). \]

Since \( (p_1 - 1) \mid (n - 1) \), we must have \( (p_1 - 1) \mid (p_2 - 1) \). By a symmetric argument, \( (p_2 - 1) \mid (p_1 - 1) \). Hence, \( p_1 = p_2 \), a contradiction.
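For a concrete instance: \( 561 = 3 \cdot 11 \cdot 17 \) is the smallest Carmichael number, and the conditions of Theorem 10.2 can be verified directly (a small Python check of ours):

```python
from math import gcd

n = 561                  # 561 = 3 * 11 * 17 is the smallest Carmichael number
primes = [3, 11, 17]
assert 3 * 11 * 17 == n
assert len(set(primes)) >= 3                          # r >= 3, distinct primes
assert all((n - 1) % (p - 1) == 0 for p in primes)    # (p_i - 1) | (n - 1)
# As the proof shows, these conditions force a^(n-1) = 1 for every unit a:
assert all(pow(a, n - 1, n) == 1 for a in range(1, n) if gcd(a, n) == 1)
print("561 meets every condition of Theorem 10.2")
```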
Yes
Theorem 10.3. If \( n \) is prime, then \( {L}_{n}^{\prime } = {\mathbb{Z}}_{n}^{ * } \) . If \( n \) is composite, then \( \left| {L}_{n}^{\prime }\right| \leq \left( {n - 1}\right) /4 \) .
Proof. Let \( n - 1 = t 2^h \), where \( t \) is odd.

Case 1: \( n \) is prime. Let \( \alpha \in \mathbb{Z}_n^* \). Since \( \mathbb{Z}_n^* \) is a group of order \( n - 1 \), and the order of a group element divides the order of the group, we know that \( \alpha^{t 2^h} = \alpha^{n-1} = 1 \). Now consider any index \( j = 0, \ldots, h - 1 \) such that \( \alpha^{t 2^{j+1}} = 1 \), and consider the value \( \beta \mathrel{:=} \alpha^{t 2^j} \). Then since \( \beta^2 = \alpha^{t 2^{j+1}} = 1 \), the only possible choices for \( \beta \) are \( \pm 1 \); this is because \( \mathbb{Z}_n^* \) is cyclic of even order, and so there are exactly two elements of \( \mathbb{Z}_n^* \) whose multiplicative order divides 2, namely \( \pm 1 \). So we have shown that \( \alpha \in L_n' \).
Yes
Theorem 10.4. We have

\[ \gamma(m, 1) \leq \exp\left[ -(1 + o(1)) \log(m) \log(\log(\log(m))) / \log(\log(m)) \right]. \]

Proof. Literature: see §10.5.
No
Theorem 10.5. For all real \( x \geq 0 \) and \( y \geq 0 \), we have

\[ \left| R(x, y) - x \prod_{p \leq y} (1 - 1/p) \right| \leq 2^{\pi(y)}. \]

Proof. To simplify the notation, we shall use the Möbius function \( \mu \) (see §2.9). Also, for a real number \( u \), let us write \( u = \lfloor u \rfloor + \{u\} \), where \( 0 \leq \{u\} < 1 \). Let \( Q \) be the product of the primes up to the bound \( y \).

Now, there are \( \lfloor x \rfloor \) positive integers up to \( x \), and of these, for each prime \( p \) dividing \( Q \), precisely \( \lfloor x/p \rfloor \) are divisible by \( p \); for each pair \( p, p' \) of distinct primes dividing \( Q \), precisely \( \lfloor x/pp' \rfloor \) are divisible by \( pp' \); and so on. By inclusion/exclusion (see Theorem 8.1), we have

\[ R(x, y) = \sum_{d \mid Q} \mu(d) \lfloor x/d \rfloor = \sum_{d \mid Q} \mu(d)(x/d) - \sum_{d \mid Q} \mu(d) \{x/d\}. \]

Moreover,

\[ \sum_{d \mid Q} \mu(d)(x/d) = x \sum_{d \mid Q} \mu(d)/d = x \prod_{p \leq y} (1 - 1/p), \]

and

\[ \left| \sum_{d \mid Q} \mu(d) \{x/d\} \right| \leq \sum_{d \mid Q} 1 = 2^{\pi(y)}. \]

That proves the theorem.
Yes
Theorem 10.6. For all \( \ell \geq 2 \), we have

\[ \gamma'(\ell, 1) \leq \ell^2 4^{2 - \sqrt{\ell}}. \]

Proof. Literature: see §10.5.
No
Theorem 12.1. Let \( p \) be an odd prime, and let \( a, b \in \mathbb{Z} \). Then we have:

(i) \( (a \mid p) \equiv a^{(p-1)/2} \pmod{p} \); in particular, \( (-1 \mid p) = (-1)^{(p-1)/2} \);
Part (i) of the theorem is just a restatement of Euler's criterion (Theorem 2.21). As was observed in Theorem 2.31, this implies that -1 is a quadratic residue modulo \( p \) if and only if \( p \equiv 1\left( {\;\operatorname{mod}\;4}\right) \) . Thus, the quadratic residuosity of -1 modulo \( p \) is determined by the residue class of \( p \) modulo 4 .
Yes
Let us characterize those primes \( p \) modulo which 5 is a quadratic residue.
Since \( 5 \equiv 1\left( {\;\operatorname{mod}\;4}\right) \), the law of quadratic reciprocity tells us that \( \left( {5 \mid p}\right) = \left( {p \mid 5}\right) \). Now, among the numbers \( \pm 1, \pm 2 \), the quadratic residues modulo 5 are \( \pm 1 \). It follows that 5 is a quadratic residue modulo \( p \) if and only if \( p \equiv \pm 1\left( {\;\operatorname{mod}\;5}\right) \).
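This characterization is easy to spot-check against Euler's criterion (part (i) of Theorem 12.1) with a few lines of Python (an illustration of ours):

```python
# Euler's criterion: (5|p) = 5^((p-1)/2) mod p, read as +-1.
for p in [3, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43]:
    is_qr = pow(5, (p - 1) // 2, p) == 1
    assert is_qr == (p % 5 in (1, 4))    # p = +-1 (mod 5)
print("5 is a QR mod p exactly when p = +-1 (mod 5), for all primes tested")
```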
Yes
Let us characterize those primes \( p \) modulo which 3 is a quadratic residue.
Since \( 3 \not\equiv 1 \pmod{4} \), we must be careful in our application of the law of quadratic reciprocity. First, suppose that \( p \equiv 1 \pmod{4} \). Then \( (3 \mid p) = (p \mid 3) \), and so 3 is a quadratic residue modulo \( p \) if and only if \( p \equiv 1 \pmod{3} \). Second, suppose that \( p \not\equiv 1 \pmod{4} \). Then \( (3 \mid p) = -(p \mid 3) \), and so 3 is a quadratic residue modulo \( p \) if and only if \( p \equiv -1 \pmod{3} \). Putting this all together, we see that 3 is a quadratic residue modulo \( p \) if and only if

\[ p \equiv 1 \pmod{4} \text{ and } p \equiv 1 \pmod{3}, \]

or

\[ p \equiv -1 \pmod{4} \text{ and } p \equiv -1 \pmod{3}. \]

Using the Chinese remainder theorem, we can restate this criterion in terms of residue classes modulo 12: 3 is a quadratic residue modulo \( p \) if and only if \( p \equiv \pm 1 \pmod{12} \).
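The mod-12 criterion can likewise be verified numerically (a Python sketch of ours; the helper names are not from the text):

```python
def legendre(a, p):
    """(a|p) via Euler's criterion: a^((p-1)/2) mod p, reported as +-1."""
    t = pow(a, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# 3 should be a quadratic residue mod p exactly when p = +-1 (mod 12).
for p in (q for q in range(5, 200) if is_prime(q)):
    assert (legendre(3, p) == 1) == (p % 12 in (1, 11))
print("criterion p = +-1 (mod 12) holds for every odd prime 5 <= p < 200")
```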
Yes
Theorem 12.2 (Gauss' lemma). Let \( p \) be an odd prime and let \( a \) be an integer not divisible by \( p \) . Define \( {\alpha }_{j} \mathrel{\text{:=}} {ja}{\;\operatorname{mod}\;p} \) for \( j = 1,\ldots ,\left( {p - 1}\right) /2 \), and let \( n \) be the number of indices \( j \) for which \( {\alpha }_{j} > p/2 \) . Then \( \left( {a \mid p}\right) = {\left( -1\right) }^{n} \) .
Proof. Let \( r_1, \ldots, r_n \) denote the values \( \alpha_j \) that exceed \( p/2 \), and let \( s_1, \ldots, s_k \) denote the remaining values \( \alpha_j \). The \( r_i \)'s and \( s_i \)'s are all distinct and non-zero. We have \( 0 < p - r_i < p/2 \) for \( i = 1, \ldots, n \), and no \( p - r_i \) is an \( s_j \); indeed, if \( p - r_i = s_j \), then \( s_j \equiv -r_i \pmod{p} \), and writing \( s_j = ua \bmod p \) and \( r_i = va \bmod p \), for some \( u, v = 1, \ldots, (p-1)/2 \), we have \( ua \equiv -va \pmod{p} \), which implies \( u \equiv -v \pmod{p} \), which is impossible.

It follows that the sequence of numbers \( s_1, \ldots, s_k, p - r_1, \ldots, p - r_n \) is just a reordering of \( 1, \ldots, (p-1)/2 \). Then we have

\[ ((p-1)/2)! \equiv s_1 \cdots s_k (-r_1) \cdots (-r_n) \equiv (-1)^n s_1 \cdots s_k r_1 \cdots r_n \equiv (-1)^n ((p-1)/2)! \, a^{(p-1)/2} \pmod{p}, \]

and canceling the factor \( ((p-1)/2)! \), we obtain \( a^{(p-1)/2} \equiv (-1)^n \pmod{p} \), and the result follows from the fact that \( (a \mid p) \equiv a^{(p-1)/2} \pmod{p} \).
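Gauss' lemma lends itself to direct numerical verification against Euler's criterion. A Python sketch (ours; illustrative only):

```python
def gauss_lemma_sign(a, p):
    """(-1)^n, where n counts the j in 1..(p-1)/2 with (j*a mod p) > p/2."""
    n = sum(1 for j in range(1, (p - 1) // 2 + 1) if (j * a) % p > p / 2)
    return (-1) ** n

def euler_sign(a, p):
    t = pow(a, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

for p in (3, 5, 7, 11, 13, 17, 19, 23):
    for a in range(1, p):
        assert gauss_lemma_sign(a, p) == euler_sign(a, p)
print("Gauss' lemma agrees with Euler's criterion for all tested (a, p)")
```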
Yes
Theorem 12.3. If \( p \) is an odd prime and \( \gcd \left( {a,{2p}}\right) = 1 \), then \( \left( {a \mid p}\right) = {\left( -1\right) }^{t} \) where \( t = \mathop{\sum }\limits_{{j = 1}}^{{\left( {p - 1}\right) /2}}\lfloor {ja}/p\rfloor \) . Also, \( \left( {2 \mid p}\right) = {\left( -1\right) }^{\left( {{p}^{2} - 1}\right) /8} \) .
Proof. Let \( a \) be an integer not divisible by \( p \), but which may be even, and let us adopt the same notation as in the statement and proof of Theorem 12.2; in particular, \( \alpha_1, \ldots, \alpha_{(p-1)/2} \), \( r_1, \ldots, r_n \), and \( s_1, \ldots, s_k \) are as defined there. Note that \( ja = p \lfloor ja/p \rfloor + \alpha_j \), for \( j = 1, \ldots, (p-1)/2 \), so we have

\[ \sum_{j=1}^{(p-1)/2} ja = \sum_{j=1}^{(p-1)/2} p \lfloor ja/p \rfloor + \sum_{j=1}^{n} r_j + \sum_{j=1}^{k} s_j. \tag{12.1} \]

Moreover, as we saw in the proof of Theorem 12.2, the sequence of numbers \( s_1, \ldots, s_k, p - r_1, \ldots, p - r_n \) is a reordering of \( 1, \ldots, (p-1)/2 \), and hence

\[ \sum_{j=1}^{(p-1)/2} j = \sum_{j=1}^{n} (p - r_j) + \sum_{j=1}^{k} s_j = np - \sum_{j=1}^{n} r_j + \sum_{j=1}^{k} s_j. \tag{12.2} \]

Subtracting (12.2) from (12.1), we get

\[ (a - 1) \sum_{j=1}^{(p-1)/2} j = p \left( \sum_{j=1}^{(p-1)/2} \lfloor ja/p \rfloor - n \right) + 2 \sum_{j=1}^{n} r_j. \tag{12.3} \]

Note that

\[ \sum_{j=1}^{(p-1)/2} j = \frac{p^2 - 1}{8}, \tag{12.4} \]

which together with (12.3) implies

\[ (a - 1) \frac{p^2 - 1}{8} \equiv \sum_{j=1}^{(p-1)/2} \lfloor ja/p \rfloor - n \pmod{2}. \tag{12.5} \]

If \( a \) is odd, (12.5) implies

\[ n \equiv \sum_{j=1}^{(p-1)/2} \lfloor ja/p \rfloor \pmod{2}. \tag{12.6} \]

If \( a = 2 \), then \( \lfloor 2j/p \rfloor = 0 \) for \( j = 1, \ldots, (p-1)/2 \), and (12.5) implies

\[ n \equiv \frac{p^2 - 1}{8} \pmod{2}. \tag{12.7} \]

The theorem now follows from (12.6) and (12.7), together with Theorem 12.2. Note that this last theorem proves part (iv) of Theorem 12.1. The next theorem proves part (v).
Yes
Theorem 12.4. If \( p \) and \( q \) are distinct odd primes, then

\[ (p \mid q)(q \mid p) = (-1)^{\frac{p-1}{2} \frac{q-1}{2}}. \]

Proof. Let \( S \) be the set of pairs of integers \( (x, y) \) with \( 1 \leq x \leq (p-1)/2 \) and \( 1 \leq y \leq (q-1)/2 \). Note that \( S \) contains no pair \( (x, y) \) with \( qx = py \), so let us partition \( S \) into two subsets: \( S_1 \) contains all pairs \( (x, y) \) with \( qx > py \), and \( S_2 \) contains all pairs \( (x, y) \) with \( qx < py \). Note that \( (x, y) \in S_1 \) if and only if \( 1 \leq x \leq (p-1)/2 \) and \( 1 \leq y \leq \lfloor qx/p \rfloor \). So \( |S_1| = \sum_{x=1}^{(p-1)/2} \lfloor qx/p \rfloor \). Similarly, \( |S_2| = \sum_{y=1}^{(q-1)/2} \lfloor py/q \rfloor \). So we have

\[ \frac{p-1}{2} \frac{q-1}{2} = |S| = |S_1| + |S_2| = \sum_{x=1}^{(p-1)/2} \lfloor qx/p \rfloor + \sum_{y=1}^{(q-1)/2} \lfloor py/q \rfloor, \]

and Theorem 12.3 implies

\[ (p \mid q)(q \mid p) = (-1)^{\frac{p-1}{2} \frac{q-1}{2}}. \]
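The reciprocity law can be spot-checked for small prime pairs with a few lines of Python (an illustration of ours, using Euler's criterion for the Legendre symbol):

```python
def legendre(a, p):
    """(a|p) via Euler's criterion, reported as +-1 (a not divisible by p)."""
    t = pow(a, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

odd_primes = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31]
for i, p in enumerate(odd_primes):
    for q in odd_primes[i + 1:]:
        lhs = legendre(p, q) * legendre(q, p)
        rhs = (-1) ** (((p - 1) // 2) * ((q - 1) // 2))
        assert lhs == rhs
print("reciprocity verified for all pairs of distinct odd primes up to 31")
```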
Yes
Theorem 12.5. Let \( m, n \) be odd, positive integers, and let \( a, b \in \mathbb{Z} \). Then we have:

(i) \( (ab \mid n) = (a \mid n)(b \mid n) \);

(ii) \( (a \mid mn) = (a \mid m)(a \mid n) \);

(iii) \( a \equiv b \pmod{n} \) implies \( (a \mid n) = (b \mid n) \);

(iv) \( (-1 \mid n) = (-1)^{(n-1)/2} \);

(v) \( (2 \mid n) = (-1)^{(n^2-1)/8} \);

(vi) \( (m \mid n) = (-1)^{\frac{m-1}{2} \frac{n-1}{2}} (n \mid m) \).

Proof. Parts (i)-(iii) follow directly from the definition (exercise).

For parts (iv) and (vi), one can easily verify (exercise) that for all odd integers \( n_1, \ldots, n_k \),

\[ \sum_{i=1}^{k} (n_i - 1)/2 \equiv (n_1 \cdots n_k - 1)/2 \pmod{2}. \]

Part (iv) easily follows from this fact, along with part (ii) of this theorem and part (i) of Theorem 12.1 (exercise). Part (vi) easily follows from this fact, along with parts (i) and (ii) of this theorem, and part (v) of Theorem 12.1 (exercise).

For part (v), one can easily verify (exercise) that for odd integers \( n_1, \ldots, n_k \),

\[ \sum_{i=1}^{k} (n_i^2 - 1)/8 \equiv (n_1^2 \cdots n_k^2 - 1)/8 \pmod{2}. \]

Part (v) easily follows from this fact, along with part (ii) of this theorem, and part (iv) of Theorem 12.1 (exercise).
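The identities of Theorem 12.5 yield an efficient way to compute the Jacobi symbol without factoring \( n \). The sketch below is the standard binary-reciprocity method written by us, not an algorithm stated in the text:

```python
def jacobi(a, n):
    """Jacobi symbol (a|n) for odd n > 0, computed with the identities of
    Theorem 12.5: periodicity (iii), the value of (2|n) from (v), and
    reciprocity (vi). Returns 0 when gcd(a, n) > 1."""
    assert n > 0 and n % 2 == 1
    a %= n                               # (iii)
    result = 1
    while a != 0:
        while a % 2 == 0:                # strip factors of 2 using (i) and (v)
            a //= 2
            if n % 8 in (3, 5):          # (2|n) = (-1)^((n^2-1)/8)
                result = -result
        a, n = n, a                      # reciprocity (vi)
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n                           # (iii)
    return result if n == 1 else 0

# Against an odd prime modulus, the Jacobi symbol is the Legendre symbol:
for p in (3, 5, 7, 11, 13):
    for a in range(1, p):
        t = pow(a, (p - 1) // 2, p)
        assert jacobi(a, p) == (-1 if t == p - 1 else t)
print("jacobi matches Euler's criterion for all tested prime moduli")
```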
No
Theorem 13.2. If \( M \) is a module over \( R \), then for all \( c \in R \), \( \alpha \in M \), and \( k \in \mathbb{Z} \), we have:

(i) \( 0_R \cdot \alpha = 0_M \);

(ii) \( c \cdot 0_M = 0_M \);

(iii) \( (-c)\alpha = -(c\alpha) = c(-\alpha) \);

(iv) \( (kc)\alpha = k(c\alpha) = c(k\alpha) \).
Proof. Exercise.
No
The set \( {R}^{\times n} \), which consists of all of \( n \) -tuples of elements of \( R \) , forms an \( R \) -module, with addition and scalar multiplication defined componentwise: for \( \alpha = \left( {{a}_{1},\ldots ,{a}_{n}}\right) \in {R}^{\times n},\beta = \left( {{b}_{1},\ldots ,{b}_{n}}\right) \in {R}^{\times n} \), and \( c \in R \), we define
\[ \alpha + \beta \mathrel{\text{:=}} \left( {{a}_{1} + {b}_{1},\ldots ,{a}_{n} + {b}_{n}}\right) \text{and}{c\alpha } \mathrel{\text{:=}} \left( {c{a}_{1},\ldots, c{a}_{n}}\right) \text{.} \]
Yes
Let \( G \) be any group, written additively, whose exponent divides \( n \) . Then we may define a scalar multiplication that maps \( {\left\lbrack k\right\rbrack }_{n} \in {\mathbb{Z}}_{n} \) and \( \alpha \in G \) to \( {k\alpha } \) . That this map is unambiguously defined follows from the fact that \( G \) has exponent dividing \( n \), so that if \( k \equiv {k}^{\prime }\left( {\;\operatorname{mod}\;n}\right) \), we have \( {k\alpha } - {k}^{\prime }\alpha = \left( {k - {k}^{\prime }}\right) \alpha = {0}_{G} \), since \( n \mid \left( {k - {k}^{\prime }}\right) \) .
It is easy to check that this scalar multiplication map indeed makes \( G \) into a \( {\mathbb{Z}}_{n} \)-module.
No
If \( I \) is an arbitrary set, and \( M \) is an \( R \) -module, then \( \operatorname{Map}\left( {I, M}\right) \) , which is the set of all functions \( f : I \rightarrow M \), may be naturally viewed as an \( R \) - module, with point-wise addition and scalar multiplication: for \( f, g \in \operatorname{Map}\left( {I, M}\right) \) and \( c \in R \), we define
\[ \left( {f + g}\right) \left( i\right) \mathrel{\text{:=}} f\left( i\right) + g\left( i\right) \text{ and }\left( {cf}\right) \left( i\right) \mathrel{\text{:=}} {cf}\left( i\right) \text{ for all }i \in I. \]
Yes
Suppose \( N \) is a submodule of an \( R \)-module \( M \). Then the natural map (see Example 6.36)

\[ \rho : M \rightarrow M/N, \quad \alpha \mapsto [\alpha]_N \]

is not just a group homomorphism; it is also easily seen to be an \( R \)-linear map.
No
Generalizing the previous example, let \( M \) be an \( R \)-module, and let \( \alpha_1, \ldots, \alpha_k \) be elements of \( M \). The map

\[ \rho : R^{\times k} \rightarrow M, \quad (c_1, \ldots, c_k) \mapsto c_1 \alpha_1 + \cdots + c_k \alpha_k \]
is easily seen to be an \( R \) -linear map whose image is the submodule \( R{\alpha }_{1} + \cdots + R{\alpha }_{k} \) (i.e., the submodule generated by \( {\alpha }_{1},\ldots ,{\alpha }_{k} \) ).
Yes
Suppose that \( M_1, \ldots, M_k \) are submodules of an \( R \)-module \( M \). Then the map

\[ \rho : M_1 \times \cdots \times M_k \rightarrow M, \quad (\alpha_1, \ldots, \alpha_k) \mapsto \alpha_1 + \cdots + \alpha_k \]

is easily seen to be an \( R \)-linear map whose image is the submodule \( M_1 + \cdots + M_k \).
No
Let \( E \) and \( {E}^{\prime } \) be extension rings of \( R \), which may be viewed as \( R \) -modules as in Example 13.5. Suppose that \( \rho : E \rightarrow {E}^{\prime } \) is a ring homomorphism whose restriction to \( R \) is the identity map (i.e., \( \rho \left( c\right) = c \) for all \( c \in R \) ). Then \( \rho \) is an \( R \) -linear map.
Indeed, for every \( c \in R \) and \( \alpha ,\beta \in E \), we have \( \rho \left( {\alpha + \beta }\right) = \rho \left( \alpha \right) + \rho \left( \beta \right) \) and \( \rho \left( {c\alpha }\right) = \rho \left( c\right) \rho \left( \alpha \right) = {c\rho }\left( \alpha \right) \).
Yes
Let \( G \) and \( {G}^{\prime } \) be abelian groups. As we saw in Example 13.6, \( G \) and \( {G}^{\prime } \) may be viewed as \( \mathbb{Z} \) -modules. In addition, every group homomorphism \( \rho : G \rightarrow {G}^{\prime } \) is also a \( \mathbb{Z} \) -linear map.
Since an \( R \) -module homomorphism is also a group homomorphism on the underlying additive groups, all of the statements in Theorem 6.19 apply. In particular, an \( R \) -linear map is injective if and only if the kernel is trivial (i.e., contains only the zero element). However, in the case of \( R \) -module homomorphisms, we can extend Theorem 6.19, as follows:
No
Theorem 13.5. Let \( \rho : M \rightarrow M' \) be an \( R \)-linear map. Then:

(i) for every submodule \( N \) of \( M \), \( \rho(N) \) is a submodule of \( M' \); in particular (setting \( N \mathrel{:=} M \)), \( \operatorname{Im} \rho \) is a submodule of \( M' \);

(ii) for every submodule \( N' \) of \( M' \), \( \rho^{-1}(N') \) is a submodule of \( M \); in particular (setting \( N' \mathrel{:=} \{0_{M'}\} \)), \( \operatorname{Ker} \rho \) is a submodule of \( M \).
Proof. Exercise.
No
Theorem 13.9 (First isomorphism theorem). Let \( \rho : M \rightarrow M' \) be an \( R \)-linear map with kernel \( K \) and image \( N' \). Then we have an \( R \)-module isomorphism

\[ M/K \cong N'. \]

Specifically, the map

\[ \bar{\rho} : M/K \rightarrow M', \quad [\alpha]_K \mapsto \rho(\alpha) \]

is an injective \( R \)-linear map whose image is \( N' \).
Yes
Theorem 13.11 (Internal direct product). Let \( M \) be an \( R \)-module with submodules \( N_1, N_2 \), where \( N_1 \cap N_2 = \{0_M\} \). Then we have an \( R \)-module isomorphism

\[ N_1 \times N_2 \cong N_1 + N_2, \]

given by the map

\[ \rho : N_1 \times N_2 \rightarrow N_1 + N_2, \quad (\alpha_1, \alpha_2) \mapsto \alpha_1 + \alpha_2. \]
Yes
Consider again the \( R \)-module \( R[X]/(f) \) discussed in Example 13.4, where \( f \in R[X] \) is of degree \( \ell \geq 0 \) and \( \operatorname{lc}(f) \in R^* \). As an \( R \)-module, \( R[X]/(f) \) is isomorphic to \( R[X]_{<\ell} \) (see Example 13.11).
Indeed, based on the observations in Example 7.39, the map \( \rho : R[X]_{<\ell} \rightarrow R[X]/(f) \) that sends a polynomial \( g \in R[X] \) of degree less than \( \ell \) to \( [g]_f \in R[X]/(f) \) is an isomorphism of \( R[X]_{<\ell} \) with \( R[X]/(f) \). Furthermore, \( R[X]_{<\ell} \) is isomorphic as an \( R \)-module to \( R^{\times \ell} \). Indeed, the map \( \rho' : R[X]_{<\ell} \rightarrow R^{\times \ell} \) that sends \( g = \sum_{i=0}^{\ell-1} a_i X^i \in R[X]_{<\ell} \) to \( (a_0, \ldots, a_{\ell-1}) \in R^{\times \ell} \) is an isomorphism of \( R[X]_{<\ell} \) with \( R^{\times \ell} \).
Yes
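As an illustrative check of the second isomorphism (not part of the text), here is a small Python sketch over \( R = \mathbb{Z} \) with \( \ell = 3 \): polynomials of degree less than \( \ell \) are represented by coefficient lists, and the map \( {\rho }^{\prime } \) to coefficient tuples respects addition and scalar multiplication componentwise.

```python
# Illustration over R = Z with ell = 3: a polynomial g = a0 + a1*X + a2*X^2
# of degree < 3 corresponds to the tuple (a0, a1, a2) in Z^3.

def poly_to_tuple(coeffs, ell):
    """Pad a coefficient list of a polynomial of degree < ell to length ell."""
    return tuple(coeffs) + (0,) * (ell - len(coeffs))

def poly_add(g, h):
    n = max(len(g), len(h))
    return [(g[i] if i < len(g) else 0) + (h[i] if i < len(h) else 0)
            for i in range(n)]

def poly_scale(c, g):
    return [c * a for a in g]

ell = 3
g = [1, 2]        # 1 + 2X
h = [4, 0, 5]     # 4 + 5X^2

# rho'(g + h) == rho'(g) + rho'(h), computed componentwise in Z^3
lhs = poly_to_tuple(poly_add(g, h), ell)
rhs = tuple(a + b for a, b in zip(poly_to_tuple(g, ell), poly_to_tuple(h, ell)))
assert lhs == rhs == (5, 2, 5)

# rho'(c * g) == c * rho'(g)
c = 7
assert poly_to_tuple(poly_scale(c, g), ell) == \
       tuple(c * a for a in poly_to_tuple(g, ell))
```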
Consider the \( R \) -module \( {R}^{\times n} \) . Define \( {\alpha }_{1},\ldots ,{\alpha }_{n} \in {R}^{\times n} \) as follows:\n\n\[ \n{\alpha }_{1} \mathrel{\text{:=}} \left( {1,0,\ldots ,0}\right) ,{\alpha }_{2} \mathrel{\text{:=}} \left( {0,1,0,\ldots ,0}\right) ,\ldots ,{\alpha }_{n} \mathrel{\text{:=}} \left( {0,\ldots ,0,1}\right) ; \n\]\n\nthat is, \( {\alpha }_{i} \) has a 1 in position \( i \) and is zero everywhere else. It is easy to see that \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{n} \) is a basis for \( {R}^{\times n} \) .
Indeed, for all \( {c}_{1},\ldots ,{c}_{n} \in R \), we have\n\n\[ \n{c}_{1}{\alpha }_{1} + \cdots + {c}_{n}{\alpha }_{n} = \left( {{c}_{1},\ldots ,{c}_{n}}\right) , \n\]\n\nfrom which it is clear that \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{n} \) spans \( {R}^{\times n} \) and is linearly independent. The family \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{n} \) is called the standard basis for \( {R}^{\times n} \) .
Yes
Consider the \( \mathbb{Z} \) -module \( {\mathbb{Z}}^{\times 3} \) . In addition to the standard basis, which consists of the tuples\n\n\[ \left( {1,0,0}\right) ,\left( {0,1,0}\right) ,\left( {0,0,1}\right) ,\]\n\nthe tuples\n\n\[ {\alpha }_{1} \mathrel{\text{:=}} \left( {1,1,1}\right) ,{\alpha }_{2} \mathrel{\text{:=}} \left( {0,1,0}\right) ,{\alpha }_{3} \mathrel{\text{:=}} \left( {2,0,1}\right) \]\n\nalso form a basis.
To see this, first observe that for all \( {c}_{1},{c}_{2},{c}_{3},{d}_{1},{d}_{2},{d}_{3} \in \mathbb{Z} \), we have\n\n\[ \left( {{d}_{1},{d}_{2},{d}_{3}}\right) = {c}_{1}{\alpha }_{1} + {c}_{2}{\alpha }_{2} + {c}_{3}{\alpha }_{3} \]\n\nif and only if\n\n\[ {d}_{1} = {c}_{1} + 2{c}_{3},{d}_{2} = {c}_{1} + {c}_{2}\text{, and}{d}_{3} = {c}_{1} + {c}_{3}\text{.} \]\n\n(13.1)\n\nIf (13.1) holds with \( {d}_{1} = {d}_{2} = {d}_{3} = 0 \), then subtracting the equation \( {c}_{1} + {c}_{3} = 0 \) from \( {c}_{1} + 2{c}_{3} = 0 \), we see that \( {c}_{3} = 0 \), from which it easily follows that \( {c}_{1} = {c}_{2} = 0 \) . This shows that the family \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{3} \) is linearly independent. To show that it spans \( {\mathbb{Z}}^{\times 3} \), the reader may verify that for any given \( {d}_{1},{d}_{2},{d}_{3} \in \mathbb{Z} \), the values\n\n\[ {c}_{1} \mathrel{\text{:=}} - {d}_{1} + 2{d}_{3},{c}_{2} \mathrel{\text{:=}} {d}_{1} + {d}_{2} - 2{d}_{3},{c}_{3} \mathrel{\text{:=}} {d}_{1} - {d}_{3} \]\n\nsatisfy (13.1).
Yes
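The computations in this example are easy to check mechanically. The following Python sketch (an illustration, not part of the text) verifies the spanning formulas for several targets and, over a small search range, the linear independence claim.

```python
# Verify the basis claim for alpha1=(1,1,1), alpha2=(0,1,0), alpha3=(2,0,1)
# in the Z-module Z^3.

def combo(c1, c2, c3):
    """Compute c1*alpha1 + c2*alpha2 + c3*alpha3 in Z^3."""
    alpha1, alpha2, alpha3 = (1, 1, 1), (0, 1, 0), (2, 0, 1)
    return tuple(c1 * a + c2 * b + c3 * g
                 for a, b, g in zip(alpha1, alpha2, alpha3))

# Spanning: the coefficient formulas from the text hit every (d1, d2, d3).
for d1, d2, d3 in [(0, 0, 0), (1, 0, 0), (5, -3, 2), (7, 7, 7)]:
    c1, c2, c3 = -d1 + 2 * d3, d1 + d2 - 2 * d3, d1 - d3
    assert combo(c1, c2, c3) == (d1, d2, d3)

# Linear independence: only zero coefficients give the zero vector
# (checked here over a small search range).
zeros = [(c1, c2, c3)
         for c1 in range(-3, 4) for c2 in range(-3, 4) for c3 in range(-3, 4)
         if combo(c1, c2, c3) == (0, 0, 0)]
assert zeros == [(0, 0, 0)]
```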
Theorem 13.14. If \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{n} \) is a basis for an \( R \) -module \( M \), then the map\n\n\[ \varepsilon : \;{R}^{\times n} \rightarrow M \]\n\n\[ \left( {{c}_{1},\ldots ,{c}_{n}}\right) \mapsto {c}_{1}{\alpha }_{1} + \cdots + {c}_{n}{\alpha }_{n} \]\n\nis an \( R \) -module isomorphism. In particular, every element of \( M \) can be expressed in a unique way as \( {c}_{1}{\alpha }_{1} + \cdots + {c}_{n}{\alpha }_{n} \), for \( {c}_{1},\ldots ,{c}_{n} \in R \) .
Proof. We already saw that \( \varepsilon \) is an \( R \) -linear map in Example 13.21. Since \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{n} \) is linearly independent, it follows that the kernel of \( \varepsilon \) is trivial, so that \( \varepsilon \) is injective. That \( \varepsilon \) is surjective follows immediately from the fact that \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{n} \) spans \( M \) .
Yes
Theorem 13.16. Let \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{n} \) be a basis for an \( R \) -module \( M \), and let \( \rho : M \rightarrow {M}^{\prime } \) be an \( R \) -linear map. Then:\n\n(i) \( \rho \) is surjective if and only if \( {\left\{ \rho \left( {\alpha }_{i}\right) \right\} }_{i = 1}^{n} \) spans \( {M}^{\prime } \) ;\n\n(ii) \( \rho \) is injective if and only if \( {\left\{ \rho \left( {\alpha }_{i}\right) \right\} }_{i = 1}^{n} \) is linearly independent;\n\n(iii) \( \rho \) is an isomorphism if and only if \( {\left\{ \rho \left( {\alpha }_{i}\right) \right\} }_{i = 1}^{n} \) is a basis for \( {M}^{\prime } \) .
Proof. By the previous theorem, we know that every element of \( M \) can be written uniquely as \( \mathop{\sum }\limits_{i}{c}_{i}{\alpha }_{i} \), where the \( {c}_{i} \) ’s are in \( R \) . Therefore, every element in \( \operatorname{Im}\rho \) can be expressed as \( \rho \left( {\mathop{\sum }\limits_{i}{c}_{i}{\alpha }_{i}}\right) = \mathop{\sum }\limits_{i}{c}_{i}\rho \left( {\alpha }_{i}\right) \) . It follows that \( \operatorname{Im}\rho \) is equal to the submodule of \( {M}^{\prime } \) spanned by \( {\left\{ \rho \left( {\alpha }_{i}\right) \right\} }_{i = 1}^{n} \) . From this, (i) is clear.\n\nFor (ii), consider a non-zero element \( \mathop{\sum }\limits_{i}{c}_{i}{\alpha }_{i} \) of \( M \), so that not all \( {c}_{i} \) ’s are zero. Now, \( \mathop{\sum }\limits_{i}{c}_{i}{\alpha }_{i} \in \operatorname{Ker}\rho \) if and only if \( \mathop{\sum }\limits_{i}{c}_{i}\rho \left( {\alpha }_{i}\right) = {0}_{{M}^{\prime }} \), and thus, \( \operatorname{Ker}\rho \) is non-trivial if and only if \( {\left\{ \rho \left( {\alpha }_{i}\right) \right\} }_{i = 1}^{n} \) is linearly dependent. That proves (ii).\n\n(iii) follows from (i) and (ii).
Yes
Theorem 13.17. Suppose that \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{n} \) is a linearly independent family of elements that spans a subspace \( W \varsubsetneq V \), and that \( {\alpha }_{n + 1} \in V \smallsetminus W \) . Then \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{n + 1} \) is also linearly independent.
Proof. Suppose we have a linear relation\n\n\[ \n{0}_{V} = {c}_{1}{\alpha }_{1} + \cdots + {c}_{n}{\alpha }_{n} + {c}_{n + 1}{\alpha }_{n + 1}, \n\]\n\nwhere the \( {c}_{i} \) ’s are in \( F \) . We want to show that all the \( {c}_{i} \) ’s are zero. If \( {c}_{n + 1} \neq 0 \), then we have\n\n\[ \n{\alpha }_{n + 1} = - {c}_{n + 1}^{-1}\left( {{c}_{1}{\alpha }_{1} + \cdots + {c}_{n}{\alpha }_{n}}\right) \in W, \n\]\n\ncontradicting the assumption that \( {\alpha }_{n + 1} \notin W \) . Therefore, we must have \( {c}_{n + 1} = 0 \) , and the linear independence of \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{n} \) implies that \( {c}_{1} = \cdots = {c}_{n} = 0 \) .
Yes
Theorem 13.18. Suppose \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{n} \) is a family of elements that spans \( V \) . Then for some subset \( J \subseteq \{ 1,\ldots, n\} \), the subfamily \( {\left\{ {\alpha }_{j}\right\} }_{j \in J} \) is a basis for \( V \) .
Proof. We prove this by induction on \( n \) . If \( n = 0 \), the theorem is clear, so assume \( n > 0 \) . Consider the subspace \( W \) of \( V \) spanned by \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{n - 1} \) . By the induction hypothesis, for some \( K \subseteq \{ 1,\ldots, n - 1\} \), the subfamily \( {\left\{ {\alpha }_{k}\right\} }_{k \in K} \) is a basis for \( W \) . There are two cases to consider.\n\nCase 1: \( {\alpha }_{n} \in W \) . In this case, \( W = V \), and the theorem clearly holds with \( J \mathrel{\text{:=}} K \) .\n\nCase 2: \( {\alpha }_{n} \notin W \) . We claim that setting \( J \mathrel{\text{:=}} K \cup \{ n\} \), the subfamily \( {\left\{ {\alpha }_{j}\right\} }_{j \in J} \) is a basis for \( V \) . Indeed, since \( {\left\{ {\alpha }_{k}\right\} }_{k \in K} \) is linearly independent, and \( {\alpha }_{n} \notin W \) , Theorem 13.17 immediately implies that \( {\left\{ {\alpha }_{j}\right\} }_{j \in J} \) is linearly independent. Also, since \( {\left\{ {\alpha }_{k}\right\} }_{k \in K} \) spans \( W \), it is clear that \( {\left\{ {\alpha }_{j}\right\} }_{j \in J} \) spans \( W + {\left\langle {\alpha }_{n}\right\rangle }_{F} = V \) .
Yes
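The induction in Theorem 13.18 amounts to a greedy left-to-right scan that keeps each vector enlarging the span. The Python sketch below (an illustration under the stated assumptions, using a rank test by Gaussian elimination over \( \mathbb{Q} \); not part of the text) extracts such a basis from a spanning family.

```python
from fractions import Fraction

def rank(rows):
    """Rank of a list of rational row vectors, via Gaussian elimination."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def select_basis(vectors):
    """Keep each vector that increases the rank; this mirrors the induction
    in Theorem 13.18 (Case 2 keeps alpha_n, Case 1 drops it)."""
    chosen = []
    for v in vectors:
        if rank(chosen + [v]) > rank(chosen):
            chosen.append(v)
    return chosen

spanning = [(1, 0, 1), (2, 0, 2), (0, 1, 0), (1, 1, 1)]
basis = select_basis(spanning)
assert basis == [(1, 0, 1), (0, 1, 0)]
assert rank(basis) == rank(spanning) == 2
```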
Theorem 13.19. If \( V \) is spanned by some family of \( n \) elements of \( V \), then every family of \( n + 1 \) elements of \( V \) is linearly dependent.
We prove this by induction on \( n \) . If \( n = 0 \), the theorem is clear, so assume that \( n > 0 \) . Let \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{n} \) be a family that spans \( V \), and let \( {\left\{ {\beta }_{i}\right\} }_{i = 1}^{n + 1} \) be an arbitrary family of elements of \( V \) . We wish to show that \( {\left\{ {\beta }_{i}\right\} }_{i = 1}^{n + 1} \) is linearly dependent.\n\nWe know that \( {\beta }_{n + 1} \) is a linear combination of the \( {\alpha }_{i} \) ’s, say,\n\n\[ \n{\beta }_{n + 1} = {c}_{1}{\alpha }_{1} + \cdots + {c}_{n}{\alpha }_{n} \n\]\n\n(13.2)\n\nIf all the \( {c}_{i} \) ’s were zero, then we would have \( {\beta }_{n + 1} = {0}_{V} \), and so trivially, \( {\left\{ {\beta }_{i}\right\} }_{i = 1}^{n + 1} \) is linearly dependent. So assume that some \( {c}_{i} \) is non-zero, and for concreteness, say \( {c}_{n} \neq 0 \) . Dividing equation (13.2) through by \( {c}_{n} \), it follows that \( {\alpha }_{n} \) is an \( F \) -linear combination of \( {\alpha }_{1},\ldots ,{\alpha }_{n - 1},{\beta }_{n + 1} \) . Therefore,\n\n\[ \n{\left\langle {\alpha }_{1},\ldots ,{\alpha }_{n - 1},{\beta }_{n + 1}\right\rangle }_{F} \supseteq {\left\langle {\alpha }_{1},\ldots ,{\alpha }_{n - 1}\right\rangle }_{F} + {\left\langle {\alpha }_{n}\right\rangle }_{F} = V. \n\]\n\nNow consider the subspace \( W \mathrel{\text{:=}} {\left\langle {\beta }_{n + 1}\right\rangle }_{F} \) and the quotient space \( V/W \) . Since the family of elements \( {\alpha }_{1},\ldots ,{\alpha }_{n - 1},{\beta }_{n + 1} \) spans \( V \), it is easy to see that \( {\left\{ {\left\lbrack {\alpha }_{i}\right\rbrack }_{W}\right\} }_{i = 1}^{n - 1} \) spans \( V/W \) ; therefore, by induction, \( {\left\{ {\left\lbrack {\beta }_{i}\right\rbrack }_{W}\right\} }_{i = 1}^{n} \) is linearly dependent. 
This means that there exist \( {d}_{1},\ldots ,{d}_{n} \in F \), not all zero, such that \( {d}_{1}{\beta }_{1} + \cdots + {d}_{n}{\beta }_{n} \equiv 0\left( {\;\operatorname{mod}\;W}\right) \) , which means that for some \( {d}_{n + 1} \in F \), we have \( {d}_{1}{\beta }_{1} + \cdots + {d}_{n}{\beta }_{n} = {d}_{n + 1}{\beta }_{n + 1} \) . That proves that \( {\left\{ {\beta }_{i}\right\} }_{i = 1}^{n + 1} \) is linearly dependent.
Yes
Theorem 13.20. If \( V \) is finitely generated, then any two bases for \( V \) have the same size.
Proof. If one basis had more elements than another, then Theorem 13.19 would imply that the first basis was linearly dependent, which contradicts the definition of a basis.
Yes
Theorem 13.22. Suppose that \( {\dim }_{F}\left( V\right) = n \), and that \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{n} \) is a family of \( n \) elements of \( V \) . The following are equivalent:\n\n(i) \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{n} \) is linearly independent;\n\n(ii) \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{n} \) spans \( V \) ;\n\n(iii) \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{n} \) is a basis for \( V \) .
Proof. Let \( W \) be the subspace of \( V \) spanned by \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{n} \) .\n\nFirst, let us show that (i) implies (ii). Suppose \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{n} \) is linearly independent. Also, by way of contradiction, suppose that \( W \varsubsetneq V \), and choose \( {\alpha }_{n + 1} \in V \smallsetminus W \) . Then Theorem 13.17 implies that \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{n + 1} \) is linearly independent. But then we have a linearly independent family of \( n + 1 \) elements of \( V \), which is impossible by Theorem 13.19.\n\nSecond, let us prove that (ii) implies (i). Let us assume that \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{n} \) is linearly dependent, and prove that \( W \varsubsetneq V \) . By Theorem 13.18, we can find a basis for \( W \) among the \( {\alpha }_{i} \) ’s, and since \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{n} \) is linearly dependent, this basis must contain strictly fewer than \( n \) elements. Hence, \( {\dim }_{F}\left( W\right) < {\dim }_{F}\left( V\right) \), and therefore, \( W \varsubsetneq V \) .\n\nThe theorem now follows from the above arguments, and the fact that, by definition, (iii) holds if and only if both (i) and (ii) hold.
Yes
Theorem 13.23. Suppose that \( V \) is finite dimensional and \( W \) is a subspace of \( V \) . Then \( W \) is also finite dimensional, with \( {\dim }_{F}\left( W\right) \leq {\dim }_{F}\left( V\right) \) . Moreover, \( {\dim }_{F}\left( W\right) = {\dim }_{F}\left( V\right) \) if and only if \( W = V \) .
Proof. Suppose \( {\dim }_{F}\left( V\right) = n \) . Consider the set \( \mathcal{S} \) of all linearly independent families of the form \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{m} \), where \( m \geq 0 \) and each \( {\alpha }_{i} \) is in \( W \) . The set \( \mathcal{S} \) is certainly non-empty, as it contains the empty family. Moreover, by Theorem 13.19, every member of \( \mathcal{S} \) must have at most \( n \) elements. Therefore, we may choose some particular element \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{m} \) of \( \mathcal{S} \), where \( m \) is as large as possible. We claim that this family \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{m} \) is a basis for \( W \) . By definition, \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{m} \) is linearly independent and spans some subspace \( {W}^{\prime } \) of \( W \) . If \( {W}^{\prime } \varsubsetneq W \), we can choose an element \( {\alpha }_{m + 1} \in W \smallsetminus {W}^{\prime } \), and by Theorem 13.17, the family \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{m + 1} \) is linearly independent, and therefore, this family also belongs to \( \mathcal{S} \), contradicting the assumption that \( m \) is as large as possible.\n\nThat proves that \( W \) is finite dimensional with \( {\dim }_{F}\left( W\right) \leq {\dim }_{F}\left( V\right) \) . It remains to show that these dimensions are equal if and only if \( W = V \) . Now, if \( W = V \) , then clearly \( {\dim }_{F}\left( W\right) = {\dim }_{F}\left( V\right) \) . Conversely, if \( {\dim }_{F}\left( W\right) = {\dim }_{F}\left( V\right) \), then by Theorem 13.22, any basis for \( W \) must already span \( V \) .
Yes
Theorem 13.24. If \( V \) is finite dimensional, and \( W \) is a subspace of \( V \), then the quotient space \( V/W \) is also finite dimensional, and\n\n\[{\dim }_{F}\left( {V/W}\right) = {\dim }_{F}\left( V\right) - {\dim }_{F}\left( W\right)\]
Proof. Suppose that \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{n} \) spans \( V \) . Then it is clear that \( {\left\{ {\left\lbrack {\alpha }_{i}\right\rbrack }_{W}\right\} }_{i = 1}^{n} \) spans \( V/W \) . By Theorem 13.18, we know that \( V/W \) has a basis of the form \( {\left\{ {\left\lbrack {\alpha }_{i}\right\rbrack }_{W}\right\} }_{i = 1}^{\ell } \), where \( \ell \leq n \) (renumbering the \( {\alpha }_{i} \) ’s as necessary). By Theorem 13.23, we know that \( W \) has a basis, say \( {\left\{ {\beta }_{j}\right\} }_{j = 1}^{m} \) . The theorem will follow immediately from the following:\n\nClaim. The elements\n\n\[{\alpha }_{1},\ldots ,{\alpha }_{\ell },{\beta }_{1},\ldots ,{\beta }_{m}\]\n\n(13.3)\n\nform a basis for \( V \) .\n\nTo see that this family spans \( V \), consider any element \( \gamma \) of \( V \) . Then since \( {\left\{ {\left\lbrack {\alpha }_{i}\right\rbrack }_{W}\right\} }_{i = 1}^{\ell } \) spans \( V/W \), we have \( \gamma \equiv \mathop{\sum }\limits_{i}{c}_{i}{\alpha }_{i}\left( {\;\operatorname{mod}\;W}\right) \) for some \( {c}_{1},\ldots ,{c}_{\ell } \in F \) . If we set \( \beta \mathrel{\text{:=}} \gamma - \mathop{\sum }\limits_{i}{c}_{i}{\alpha }_{i} \in W \), then since \( {\left\{ {\beta }_{j}\right\} }_{j = 1}^{m} \) spans \( W \), we have \( \beta = \mathop{\sum }\limits_{j}{d}_{j}{\beta }_{j} \) for some \( {d}_{1},\ldots ,{d}_{m} \in F \), and hence \( \gamma = \mathop{\sum }\limits_{i}{c}_{i}{\alpha }_{i} + \mathop{\sum }\limits_{j}{d}_{j}{\beta }_{j} \) . That proves that the family of elements (13.3) spans \( V \) . To prove this family is linearly independent, suppose we have a relation of the form \( \mathop{\sum }\limits_{i}{c}_{i}{\alpha }_{i} + \mathop{\sum }\limits_{j}{d}_{j}{\beta }_{j} = {0}_{V} \), where \( {c}_{1},\ldots ,{c}_{\ell } \in F \) and \( {d}_{1},\ldots ,{d}_{m} \in F \) . 
Reducing this relation modulo \( W \) yields \( \mathop{\sum }\limits_{i}{c}_{i}{\left\lbrack {\alpha }_{i}\right\rbrack }_{W} = {\left\lbrack {0}_{V}\right\rbrack }_{W} \), since each \( {\beta }_{j} \) lies in \( W \) . If any of the \( {c}_{i} \) ’s were non-zero, this would contradict the assumption that \( {\left\{ {\left\lbrack {\alpha }_{i}\right\rbrack }_{W}\right\} }_{i = 1}^{\ell } \) is linearly independent. So all the \( {c}_{i} \) ’s are zero, and the relation becomes \( \mathop{\sum }\limits_{j}{d}_{j}{\beta }_{j} = {0}_{V} \) . If any of the \( {d}_{j} \) ’s were non-zero, this would contradict the assumption that \( {\left\{ {\beta }_{j}\right\} }_{j = 1}^{m} \) is linearly independent. Thus, all the \( {c}_{i} \) ’s and \( {d}_{j} \) ’s must be zero, which proves that the family of elements (13.3) is linearly independent. That proves the claim.
Yes
Theorem 13.25. If \( V \) is finite dimensional, then every linearly independent family of elements of \( V \) can be extended to form a basis for \( V \) .
Proof. One can prove this by generalizing the proof of Theorem 13.18. Alternatively, we can adapt the proof of the previous theorem. Let \( {\left\{ {\beta }_{j}\right\} }_{j = 1}^{m} \) be a linearly independent family of elements that spans a subspace \( W \) of \( V \) . As in the proof of the previous theorem, if \( {\left\{ {\left\lbrack {\alpha }_{i}\right\rbrack }_{W}\right\} }_{i = 1}^{\ell } \) is a basis for the quotient space \( V/W \), then the elements\n\n\[ \n{\alpha }_{1},\ldots ,{\alpha }_{\ell },{\beta }_{1},\ldots ,{\beta }_{m}\n\]\n\nform a basis for \( V \) .
No
Theorem 13.26. If \( V \) is of finite dimension \( n \), and \( V \) is isomorphic to \( {V}^{\prime } \), then \( {V}^{\prime } \) is also of finite dimension \( n \) .
Proof. Let \( \rho : V \rightarrow {V}^{\prime } \) be an \( F \) -linear isomorphism. If \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{n} \) is a basis for \( V \), then by Theorem 13.16, \( {\left\{ \rho \left( {\alpha }_{i}\right) \right\} }_{i = 1}^{n} \) is a basis for \( {V}^{\prime } \) . \( ▱ \)
Yes
Theorem 13.27. If \( \rho : V \rightarrow {V}^{\prime } \) is an \( F \) -linear map, and if \( V \) and \( {V}^{\prime } \) are finite dimensional with \( {\dim }_{F}\left( V\right) = {\dim }_{F}\left( {V}^{\prime }\right) \), then we have:\n\n\( \rho \) is injective if and only if \( \rho \) is surjective.
Proof. Let \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{n} \) be a basis for \( V \) . Then\n\n\( \rho \) is injective \( \Leftrightarrow {\left\{ \rho \left( {\alpha }_{i}\right) \right\} }_{i = 1}^{n} \) is linearly independent (by Theorem 13.16)\n\n\[ \Leftrightarrow {\left\{ \rho \left( {\alpha }_{i}\right) \right\} }_{i = 1}^{n}\text{spans}{V}^{\prime }\text{(by Theorem 13.22)} \]\n\n\[ \Leftrightarrow \rho \text{is surjective (again by Theorem 13.16).} \]
Yes
Theorem 13.28. If \( V \) is finite dimensional, and \( \rho : V \rightarrow {V}^{\prime } \) is an \( F \) -linear map, then \( \operatorname{Im}\rho \) is a finite dimensional vector space, and\n\n\[{\dim }_{F}\left( V\right) = {\dim }_{F}\left( {\operatorname{Im}\rho }\right) + {\dim }_{F}\left( {\operatorname{Ker}\rho }\right)\]
Proof. As the reader may verify, this follows immediately from Theorem 13.24, together with Theorems 13.26 and 13.9.
No
Theorem 14.1. With addition and scalar multiplication as defined above, \( {R}^{m \times n} \) is an \( R \) -module. The matrix \( {0}_{R}^{m \times n} \) is the additive identity, and the additive inverse of a matrix \( A \in {R}^{m \times n} \) is the \( m \times n \) matrix whose \( \left( {i, j}\right) \) entry is \( - A\left( {i, j}\right) \) .
Proof. To prove this, one first verifies that matrix addition is associative and commutative, which follows from the associativity and commutativity of addition in \( R \) . The claims made about the additive identity and additive inverses are also easily verified. These observations establish that \( {R}^{m \times n} \) is an abelian group. One also has to check that all of the properties in Definition 13.1 hold. We leave this to the reader.
No
Matrix multiplication is associative; that is, \( A\left( {BC}\right) = \left( {AB}\right) C \) for all \( A \in {R}^{m \times n}, B \in {R}^{n \times p} \), and \( C \in {R}^{p \times q} \) .
Associativity requires just a bit of computation: the \( \left( {i,\ell }\right) \) entry of both \( A\left( {BC}\right) \) and \( \left( {AB}\right) C \) is equal to (as the reader may verify)\n\n\[ \mathop{\sum }\limits_{\substack{{1 \leq j \leq n} \\ {1 \leq k \leq p} }}A\left( {i, j}\right) B\left( {j, k}\right) C\left( {k,\ell }\right) . \]
No
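The double-sum identity is easy to confirm numerically. The following Python sketch (illustrative only, not part of the text) computes the \( \left( {i,\ell }\right) \) entry by the double sum over \( j \) and \( k \), and compares it with both parenthesizations for random integer matrices.

```python
import random

def matmul(A, B):
    """(i, k) entry: sum over j of A[i][j] * B[j][k]."""
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

def triple(A, B, C):
    """The (i, l) entry as the double sum over j, k from the text."""
    return [[sum(A[i][j] * B[j][k] * C[k][l]
                 for j in range(len(B)) for k in range(len(C)))
             for l in range(len(C[0]))] for i in range(len(A))]

random.seed(1)
m, n, p, q = 2, 3, 4, 2
A = [[random.randint(-5, 5) for _ in range(n)] for _ in range(m)]
B = [[random.randint(-5, 5) for _ in range(p)] for _ in range(n)]
C = [[random.randint(-5, 5) for _ in range(q)] for _ in range(p)]

# Both parenthesizations agree with the double-sum formula.
assert matmul(A, matmul(B, C)) == matmul(matmul(A, B), C) == triple(A, B, C)
```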
Theorem 14.3. If \( A, B \in {R}^{m \times n}, C \in {R}^{n \times p} \), and \( c \in R \), then:\n\n(i) \( {\left( A + B\right) }^{\top } = {A}^{\top } + {B}^{\top } \) ;\n\n(ii) \( {\left( cA\right) }^{\top } = c{A}^{\top } \) ;\n\n(iii) \( {\left( {A}^{\top }\right) }^{\top } = A \) ;\n\n(iv) \( {\left( AC\right) }^{\top } = {C}^{\top }{A}^{\top } \) .
Proof. Exercise.
No
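Parts (i)-(iv) of Theorem 14.3 can be spot-checked numerically; a Python sketch (an illustration, not part of the text):

```python
import random

def transpose(A):
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

def matmul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))] for i in range(len(A))]

def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def smul(c, A):
    return [[c * x for x in row] for row in A]

random.seed(2)
m, n, p, c = 3, 4, 2, 5
A = [[random.randint(-9, 9) for _ in range(n)] for _ in range(m)]
B = [[random.randint(-9, 9) for _ in range(n)] for _ in range(m)]
C = [[random.randint(-9, 9) for _ in range(p)] for _ in range(n)]

assert transpose(add(A, B)) == add(transpose(A), transpose(B))        # (i)
assert transpose(smul(c, A)) == smul(c, transpose(A))                 # (ii)
assert transpose(transpose(A)) == A                                   # (iii)
assert transpose(matmul(A, C)) == matmul(transpose(C), transpose(A))  # (iv)
```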
Consider the quotient ring \( E = R\left\lbrack X\right\rbrack /\left( f\right) \), where \( f \in R\left\lbrack X\right\rbrack \) with \( \deg \left( f\right) = \ell > 0 \) and \( \operatorname{lc}\left( f\right) \in {R}^{ * } \) . Let \( \xi \mathrel{\text{:=}} {\left\lbrack X\right\rbrack }_{f} \in E \) . As an \( R \) -module, \( E \) has a basis \( \mathcal{S} \mathrel{\text{:=}} {\left\{ {\xi }^{i - 1}\right\} }_{i = 1}^{\ell } \) (see Example 13.30). Let \( \rho : E \rightarrow E \) be the \( \xi \) - multiplication map, which sends \( \alpha \in E \) to \( {\xi \alpha } \in E \) . This is an \( R \) -linear map. If \( f = {c}_{0} + {c}_{1}X + \cdots + {c}_{\ell - 1}{X}^{\ell - 1} + {c}_{\ell }{X}^{\ell } \), then the matrix of \( \rho \) relative to \( \mathcal{S} \) is the \( \ell \times \ell \) matrix
\[ A = \left( \begin{matrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ & & & \ddots & \\ 0 & 0 & 0 & \cdots & 1 \\ - {c}_{0}/{c}_{\ell } & - {c}_{1}/{c}_{\ell } & - {c}_{2}/{c}_{\ell } & \cdots & - {c}_{\ell - 1}/{c}_{\ell } \end{matrix}\right) ,\]
Yes
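A concrete instance (illustrative, not from the text): over \( {\mathbb{Z}}_{7} \) with the monic polynomial \( f = 1 + 2X + {X}^{3} \) (so \( \ell = 3 \) and \( {c}_{\ell } = 1 \) ), the sketch below builds the matrix \( A \) and checks that multiplication by \( \xi \), computed by shifting coefficients and reducing modulo \( f \), agrees with the row-vector-times-matrix map \( v \mapsto {vA} \) in coordinates relative to \( \left( {1,\xi ,{\xi }^{2}}\right) \).

```python
p = 7
f = [1, 2, 0, 1]                      # c0 + c1 X + c2 X^2 + c3 X^3

ell = len(f) - 1
# Rows 1..ell-1 of A shift each basis element to the next one; the last
# row is (-c0/c_ell, ..., -c_{ell-1}/c_ell), and here c_ell = 1.
A = [[1 if j == i + 1 else 0 for j in range(ell)] for i in range(ell - 1)]
A.append([(-f[j]) % p for j in range(ell)])

def mul_by_X_mod_f(v):
    """Coordinates of xi * (v0 + v1 xi + v2 xi^2): shift, then reduce by f."""
    shifted = [0] + list(v)           # multiply by X
    lead = shifted.pop()              # coefficient of X^ell
    return [(shifted[j] - lead * f[j]) % p for j in range(ell)]

def vec_mat(v, M):
    return [sum(v[i] * M[i][j] for i in range(len(v))) % p
            for j in range(len(M[0]))]

for v in [[1, 0, 0], [0, 0, 1], [3, 5, 2]]:
    assert mul_by_X_mod_f(v) == vec_mat(v, A)
```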
Theorem 14.5. Let \( A \in {R}^{n \times n} \), and let \( {\lambda }_{A} : {R}^{1 \times n} \rightarrow {R}^{1 \times n} \) be the corresponding \( R \) -linear map. Then \( A \) is invertible if and only if \( {\lambda }_{A} \) is bijective, in which case \( {\lambda }_{{A}^{-1}} = {\lambda }_{A}^{-1} \) .
Proof. Suppose \( A \) is invertible, and that \( B \) is its inverse. We have \( {AB} = {BA} = I \) , and hence \( {\lambda }_{AB} = {\lambda }_{BA} = {\lambda }_{I} \), from which it follows (see (14.1)) that \( {\lambda }_{B} \circ {\lambda }_{A} = \) \( {\lambda }_{A} \circ {\lambda }_{B} = {\lambda }_{I} \) . Since \( {\lambda }_{I} \) is the identity map, this implies \( {\lambda }_{A} \) is bijective.\n\nSuppose \( {\lambda }_{A} \) is bijective. We know that the inverse map \( {\lambda }_{A}^{-1} \) is also an \( R \) -linear map, and since the mapping \( \Lambda \) above is surjective, we have \( {\lambda }_{A}^{-1} = {\lambda }_{B} \) for some \( B \in {R}^{n \times n} \) . Therefore, we have \( {\lambda }_{B} \circ {\lambda }_{A} = {\lambda }_{A} \circ {\lambda }_{B} = {\lambda }_{I} \), and hence (again, see (14.1)) \( {\lambda }_{AB} = {\lambda }_{BA} = {\lambda }_{I} \) . Since the mapping \( \Lambda \) is injective, it follows that \( {AB} = {BA} = I \) . This implies \( A \) is invertible, with \( {A}^{-1} = B \) .
Yes
Theorem 14.6. Let \( A \in {R}^{n \times n} \) . The following are equivalent:\n\n(i) \( A \) is invertible;\n\n(ii) \( {\left\{ {\operatorname{Row}}_{i}\left( A\right) \right\} }_{i = 1}^{n} \) is a basis for \( {R}^{1 \times n} \) ;\n\n(iii) \( {\left\{ {\operatorname{Col}}_{j}\left( A\right) \right\} }_{j = 1}^{n} \) is a basis for \( {R}^{n \times 1} \) .
Proof. We first prove the equivalence of (i) and (ii). By the previous theorem, \( A \) is invertible if and only if \( {\lambda }_{A} \) is bijective. Also, in the previous section, we observed that \( {\lambda }_{A} \) is surjective if and only if \( {\left\{ {\operatorname{Row}}_{i}\left( A\right) \right\} }_{i = 1}^{n} \) spans \( {R}^{1 \times n} \), and that \( {\lambda }_{A} \) is injective if and only if \( {\left\{ {\operatorname{Row}}_{i}\left( A\right) \right\} }_{i = 1}^{n} \) is linearly independent.\n\nThe equivalence of (i) and (iii) follows by considering the transpose of \( A \) .
Yes
The following \( 4 \times 6 \) matrix \( B \) over the rational numbers is in reduced row echelon form:
\[ B = \left( \begin{matrix} 0 & 1 & - 2 & 0 & 0 & 3 \\ 0 & 0 & 0 & 1 & 0 & 2 \\ 0 & 0 & 0 & 0 & 1 & - 4 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{matrix}\right) \] The pivot sequence of \( B \) is \( \left( {2,4,5}\right) \). Notice that the first three rows of \( B \) form a linearly independent family of vectors, that columns 2, 4, and 5 form a linearly independent family of vectors, and that all of the other columns of \( B \) are linear combinations of columns 2, 4, and 5. Indeed, if we truncate the pivot columns to their first three rows, we get the \( 3 \times 3 \) identity matrix.
Yes
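The claims about the pivot columns can be checked directly for this particular \( B \); a Python sketch (illustrative only, with the combination coefficients read off from the first three rows of each non-pivot column):

```python
B = [[0, 1, -2, 0, 0, 3],
     [0, 0, 0, 1, 0, 2],
     [0, 0, 0, 0, 1, -4],
     [0, 0, 0, 0, 0, 0]]

def col(j):
    """Column j of B, with 1-based column index as in the text."""
    return [B[i][j - 1] for i in range(4)]

# Pivot columns 2, 4, 5, truncated to the first three rows, give the identity.
assert [col(j)[:3] for j in (2, 4, 5)] == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# Every other column is a linear combination of the pivot columns.
for j in (1, 3, 6):
    a, b, c = col(j)[:3]
    combo = [a * x + b * y + c * z
             for x, y, z in zip(col(2), col(4), col(5))]
    assert combo == col(j)
```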
Theorem 14.7. If \( B \) is a matrix in reduced row echelon form with pivot sequence \( \left( {{p}_{1},\ldots ,{p}_{r}}\right) \), then:\n\n(i) rows \( 1,2,\ldots, r \) of \( B \) form a linearly independent family of vectors;\n\n(ii) columns \( {p}_{1},\ldots ,{p}_{r} \) of \( B \) form a linearly independent family of vectors, and all other columns of \( B \) can be expressed as linear combinations of columns \( {p}_{1},\ldots ,{p}_{r} \) .
Proof. Exercise: just look at the matrix!
No
Consider the execution of the Gaussian elimination algorithm on input\n\n\[ A = \left( \begin{array}{lll} \left\lbrack 0\right\rbrack & \left\lbrack 1\right\rbrack & \left\lbrack 1\right\rbrack \\ \left\lbrack 2\right\rbrack & \left\lbrack 1\right\rbrack & \left\lbrack 2\right\rbrack \\ \left\lbrack 2\right\rbrack & \left\lbrack 2\right\rbrack & \left\lbrack 0\right\rbrack \end{array}\right) \in {\mathbb{Z}}_{3}^{3 \times 3} \]
After copying \( A \) into \( B \), the algorithm transforms \( B \) as follows:\n\n\[ \left( \begin{matrix} \left\lbrack 0\right\rbrack & \left\lbrack 1\right\rbrack & \left\lbrack 1\right\rbrack \\ \left\lbrack 2\right\rbrack & \left\lbrack 1\right\rbrack & \left\lbrack 2\right\rbrack \\ \left\lbrack 2\right\rbrack & \left\lbrack 2\right\rbrack & \left\lbrack 0\right\rbrack \end{matrix}\right) \xrightarrow[]{{\mathrm{{Row}}}_{1} \leftrightarrow {\mathrm{{Row}}}_{2}}\left( \begin{matrix} \left\lbrack 2\right\rbrack & \left\lbrack 1\right\rbrack & \left\lbrack 2\right\rbrack \\ \left\lbrack 0\right\rbrack & \left\lbrack 1\right\rbrack & \left\lbrack 1\right\rbrack \\ \left\lbrack 2\right\rbrack & \left\lbrack 2\right\rbrack & \left\lbrack 0\right\rbrack \end{matrix}\right) \xrightarrow[]{{\mathrm{{Row}}}_{1} \leftarrow \left\lbrack 2\right\rbrack {\mathrm{{Row}}}_{1}}\left( \begin{matrix} \left\lbrack 1\right\rbrack & \left\lbrack 2\right\rbrack & \left\lbrack 1\right\rbrack \\ \left\lbrack 0\right\rbrack & \left\lbrack 1\right\rbrack & \left\lbrack 1\right\rbrack \\ \left\lbrack 2\right\rbrack & \left\lbrack 2\right\rbrack & \left\lbrack 0\right\rbrack \end{matrix}\right) \]\n\n\[ \xrightarrow[]{{\operatorname{Row}}_{3} \leftarrow {\operatorname{Row}}_{3} - \left\lbrack 2\right\rbrack {\operatorname{Row}}_{1}}\left( \begin{matrix} \left\lbrack 1\right\rbrack & \left\lbrack 2\right\rbrack & \left\lbrack 1\right\rbrack \\ \left\lbrack 0\right\rbrack & \left\lbrack 1\right\rbrack & \left\lbrack 1\right\rbrack \\ \left\lbrack 0\right\rbrack & \left\lbrack 1\right\rbrack & \left\lbrack 1\right\rbrack \end{matrix}\right) \xrightarrow[]{{\operatorname{Row}}_{1} \leftarrow {\operatorname{Row}}_{1} - \left\lbrack 2\right\rbrack {\operatorname{Row}}_{2}}\left( \begin{matrix} \left\lbrack 1\right\rbrack & \left\lbrack 0\right\rbrack & \left\lbrack 2\right\rbrack \\ \left\lbrack 0\right\rbrack & \left\lbrack 1\right\rbrack & \left\lbrack 1\right\rbrack \\ \left\lbrack 0\right\rbrack & \left\lbrack 1\right\rbrack & \left\lbrack 1\right\rbrack \end{matrix}\right) \]\n\n\[ \xrightarrow[]{{\operatorname{Row}}_{3} \leftarrow {\operatorname{Row}}_{3} - {\operatorname{Row}}_{2}}\left( \begin{array}{lll} \left\lbrack 1\right\rbrack & \left\lbrack 0\right\rbrack & \left\lbrack 2\right\rbrack \\ \left\lbrack 0\right\rbrack & \left\lbrack 1\right\rbrack & \left\lbrack 1\right\rbrack \\ \left\lbrack 0\right\rbrack & \left\lbrack 0\right\rbrack & \left\lbrack 0\right\rbrack \end{array}\right) \]
Yes
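Replaying the five row operations mod 3 in Python confirms the final matrix (an illustrative check, not from the text; rows are indexed from 0 here):

```python
p = 3
A = [[0, 1, 1],
     [2, 1, 2],
     [2, 2, 0]]

B = [row[:] for row in A]

def swap(M, i, j):
    M[i], M[j] = M[j], M[i]

def scale(M, i, c):
    M[i] = [(c * x) % p for x in M[i]]

def addrow(M, i, c, j):
    """Row i <- Row i + c * Row j, mod p."""
    M[i] = [(x + c * y) % p for x, y in zip(M[i], M[j])]

# The row operations from the text.
swap(B, 0, 1)          # Row1 <-> Row2
scale(B, 0, 2)         # Row1 <- [2] Row1   (2 = 2^{-1} in Z_3)
addrow(B, 2, -2, 0)    # Row3 <- Row3 - [2] Row1
addrow(B, 0, -2, 1)    # Row1 <- Row1 - [2] Row2
addrow(B, 2, -1, 1)    # Row3 <- Row3 - Row2

assert B == [[1, 0, 2],
             [0, 1, 1],
             [0, 0, 0]]
```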
Continuing with Example 14.4, the execution of the extended Gaussian elimination algorithm initializes \( X \) to the identity matrix, and then transforms \( X \) as follows:
\[ \left( \begin{matrix} \left\lbrack 1\right\rbrack & \left\lbrack 0\right\rbrack & \left\lbrack 0\right\rbrack \\ \left\lbrack 0\right\rbrack & \left\lbrack 1\right\rbrack & \left\lbrack 0\right\rbrack \\ \left\lbrack 0\right\rbrack & \left\lbrack 0\right\rbrack & \left\lbrack 1\right\rbrack \end{matrix}\right) \xrightarrow[]{{\mathrm{{Row}}}_{1} \leftrightarrow {\mathrm{{Row}}}_{2}}\left( \begin{matrix} \left\lbrack 0\right\rbrack & \left\lbrack 1\right\rbrack & \left\lbrack 0\right\rbrack \\ \left\lbrack 1\right\rbrack & \left\lbrack 0\right\rbrack & \left\lbrack 0\right\rbrack \\ \left\lbrack 0\right\rbrack & \left\lbrack 0\right\rbrack & \left\lbrack 1\right\rbrack \end{matrix}\right) \xrightarrow[]{{\mathrm{{Row}}}_{1} \leftarrow \left\lbrack 2\right\rbrack {\mathrm{{Row}}}_{1}}\left( \begin{matrix} \left\lbrack 0\right\rbrack & \left\lbrack 2\right\rbrack & \left\lbrack 0\right\rbrack \\ \left\lbrack 1\right\rbrack & \left\lbrack 0\right\rbrack & \left\lbrack 0\right\rbrack \\ \left\lbrack 0\right\rbrack & \left\lbrack 0\right\rbrack & \left\lbrack 1\right\rbrack \end{matrix}\right) \] \[ \xrightarrow[]{{\mathrm{{Row}}}_{3} \leftarrow {\mathrm{{Row}}}_{3} - \left\lbrack 2\right\rbrack {\mathrm{{Row}}}_{1}}\left( \begin{matrix} \left\lbrack 0\right\rbrack & \left\lbrack 2\right\rbrack & \left\lbrack 0\right\rbrack \\ \left\lbrack 1\right\rbrack & \left\lbrack 0\right\rbrack & \left\lbrack 0\right\rbrack \\ \left\lbrack 0\right\rbrack & \left\lbrack 2\right\rbrack & \left\lbrack 1\right\rbrack \end{matrix}\right) \xrightarrow[]{{\mathrm{{Row}}}_{1} \leftarrow {\mathrm{{Row}}}_{1} - \left\lbrack 2\right\rbrack {\mathrm{{Row}}}_{2}}\left( \begin{matrix} \left\lbrack 1\right\rbrack & \left\lbrack 2\right\rbrack & \left\lbrack 0\right\rbrack \\ \left\lbrack 1\right\rbrack & \left\lbrack 0\right\rbrack & \left\lbrack 0\right\rbrack \\ \left\lbrack 0\right\rbrack & \left\lbrack 2\right\rbrack & \left\lbrack 1\right\rbrack \end{matrix}\right) \]\n\n\[ \xrightarrow[]{{\operatorname{Row}}_{3} \leftarrow {\operatorname{Row}}_{3} - {\operatorname{Row}}_{2}}\left( \begin{array}{lll} \left\lbrack 1\right\rbrack & \left\lbrack 2\right\rbrack & \left\lbrack 0\right\rbrack \\ \left\lbrack 1\right\rbrack & \left\lbrack 0\right\rbrack & \left\lbrack 0\right\rbrack \\ \left\lbrack 2\right\rbrack & \left\lbrack 2\right\rbrack & \left\lbrack 1\right\rbrack \end{array}\right) . \]
Yes
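The transformation matrix records the row operations, so the final \( X \) satisfies \( {XA} = B \) over \( {\mathbb{Z}}_{3} \), where \( B \) is the final reduced row echelon form; a quick Python check for this example (illustrative only, not part of the text):

```python
p = 3
A = [[0, 1, 1], [2, 1, 2], [2, 2, 0]]   # the original input matrix
X = [[1, 2, 0], [1, 0, 0], [2, 2, 1]]   # final transformation matrix
B = [[1, 0, 2], [0, 1, 1], [0, 0, 0]]   # final reduced row echelon form

# Multiply X by A mod 3 and compare with B.
XA = [[sum(X[i][k] * A[k][j] for k in range(3)) % p for j in range(3)]
      for i in range(3)]
assert XA == B
```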
Theorem 15.1. Let \( y \) be a function of \( x \) such that\n\n\[ \frac{y}{\log x} \rightarrow \infty \text{ and }u \mathrel{\text{:=}} \frac{\log x}{\log y} \rightarrow \infty \]\n\nas \( x \rightarrow \infty \) . Then\n\n\[ \Psi \left( {y, x}\right) \geq x \cdot \exp \left\lbrack {\left( {-1 + o\left( 1\right) }\right) u\log \log x}\right\rbrack . \]
Proof. Let us write \( u = \lfloor u\rfloor + \delta \), where \( 0 \leq \delta < 1 \) . Let us split the primes up to \( y \) into two sets: the set \( V \) of \
No
Lemma 15.4. For \( i = 1,\ldots, k + 2 \), we have \( \mathrm{E}\left\lbrack {L}_{i}\right\rbrack \leq {\sigma }^{-1} \) .
Proof. We first compute \( \mathrm{E}\left\lbrack {L}_{1}\right\rbrack \) . As \( \delta \) is chosen uniformly from \( {\mathbb{Z}}_{n}^{ * } \) and independent of \( {\alpha }_{1} \), at each attempt to find a relation, \( {\alpha }_{1}^{2}\delta \) is uniformly distributed over \( {\mathbb{Z}}_{n}^{ * } \), and hence the probability that the attempt succeeds is precisely \( \sigma \) . This means \( \mathrm{E}\left\lbrack {L}_{1}\right\rbrack = {\sigma }^{-1} \) .\n\nWe next compute \( \mathrm{E}\left\lbrack {L}_{i}\right\rbrack \) for \( i > 1 \) . To this end, let us denote the cosets of \( {\left( {\mathbb{Z}}_{n}^{ * }\right) }^{2} \) in \( {\mathbb{Z}}_{n}^{ * } \) as \( {C}_{1},\ldots ,{C}_{t} \) . As it happens, \( t = {2}^{w} \), but this fact plays no role in the analysis. For \( j = 1,\ldots, t \), let \( {\sigma }_{j} \) denote the probability that a random element of \( {C}_{j} \) is \( y \) -smooth, and let \( {\tau }_{j} \) denote the probability that the final value of \( \delta \) belongs to \( {C}_{j} \).\n\nWe claim that for \( j = 1,\ldots, t \), we have \( {\tau }_{j} = {\sigma }_{j}{\sigma }^{-1}{t}^{-1} \) . To see this, note that each coset \( {C}_{j} \) has the same number of elements, namely, \( \left| {\mathbb{Z}}_{n}^{ * }\right| {t}^{-1} \), and so the number of \( y \) -smooth elements in \( {C}_{j} \) is equal to \( {\sigma }_{j}\left| {\mathbb{Z}}_{n}^{ * }\right| {t}^{-1} \) . 
Moreover, the final value of \( {\alpha }_{1}^{2}\delta \) is equally likely to be any one of the \( y \) -smooth numbers in \( {\mathbb{Z}}_{n}^{ * } \), of which there are \( \sigma \left| {\mathbb{Z}}_{n}^{ * }\right| \), and hence\n\n\[{\tau }_{j} = \frac{{\sigma }_{j}\left| {\mathbb{Z}}_{n}^{ * }\right| {t}^{-1}}{\sigma \left| {\mathbb{Z}}_{n}^{ * }\right| } = {\sigma }_{j}{\sigma }^{-1}{t}^{-1},\]\n\nwhich proves the claim.\n\nNow, for a fixed value of \( \delta \) and a random choice of \( {\alpha }_{i} \in {\mathbb{Z}}_{n}^{ * } \), one sees that \( {\alpha }_{i}^{2}\delta \) is uniformly distributed over the coset containing \( \delta \) . Therefore, for \( j = 1,\ldots, t \), if \( {\tau }_{j} > 0 \), we have\n\n\[ \mathrm{E}\left\lbrack {{L}_{i} \mid \delta \in {C}_{j}}\right\rbrack = {\sigma }_{j}^{-1}. \]\n\nSumming over all \( j = 1,\ldots, t \) with \( {\tau }_{j} > 0 \), it follows that\n\n\[ \mathrm{E}\left\lbrack {L}_{i}\right\rbrack = \mathop{\sum }\limits_{{{\tau }_{j} > 0}}\mathrm{E}\left\lbrack {{L}_{i} \mid \delta \in {C}_{j}}\right\rbrack \cdot \mathrm{P}\left\lbrack {\delta \in {C}_{j}}\right\rbrack = \mathop{\sum }\limits_{{{\tau }_{j} > 0}}{\sigma }_{j}^{-1}{\tau }_{j} = \mathop{\sum }\limits_{{{\tau }_{j} > 0}}{\sigma }^{-1}{t}^{-1} \leq {\sigma }^{-1}. \]
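Underlying the lemma is the standard fact that a trial succeeding with probability \( \sigma \) requires \( {\sigma }^{-1} \) attempts in expectation. This can be confirmed numerically by truncating the series \( \mathrm{E}\left\lbrack L\right\rbrack = \mathop{\sum }\limits_{{k \geq 1}}{k\sigma }{\left( 1 - \sigma \right) }^{k - 1} \); the sketch below is ours, in exact rational arithmetic.

```python
from fractions import Fraction

# Truncation of E[L] = sum_{k>=1} k * sigma * (1 - sigma)^(k-1) for a trial
# that succeeds with probability sigma; the sum converges to 1/sigma.

def expected_iterations(sigma, terms=200):
    sigma = Fraction(sigma)
    return sum(k * sigma * (1 - sigma) ** (k - 1)
               for k in range(1, terms + 1))

e = expected_iterations(Fraction(1, 4))   # very close to 1/sigma = 4
```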
Theorem 15.7. Let \( y \) be a function of \( x \) such that for some \( \varepsilon > 0 \), we have\n\n\[ y = \Omega \left( {\left( \log x\right) }^{1 + \varepsilon }\right) \text{ and }u \mathrel{\text{:=}} \frac{\log x}{\log y} \rightarrow \infty \]\n\nas \( x \rightarrow \infty \) . Then\n\n\[ \Psi \left( {y, x}\right) = x \cdot \exp \left\lbrack {\left( {-1 + o\left( 1\right) }\right) u\log u}\right\rbrack . \]
Proof. See §15.5.
Theorem 16.2. If \( E \) is an \( R \) -algebra, then the map\n\n\[ \n\tau : \;R \rightarrow E \]\n\n\[ \nc \mapsto c \cdot {1}_{E} \]\n\nis a ring homomorphism, and \( {c\alpha } = \tau \left( c\right) \alpha \) for all \( c \in R \) and \( \alpha \in E \) .
Proof. Exercise.
Let \( E \) be an \( R \) -algebra and let \( I \) be an ideal of \( E \) . Then it is easily verified that \( I \) is also a submodule of \( E \) . This means that the quotient ring \( E/I \) may also be viewed as an \( R \) -module, and indeed, it is an \( R \) -algebra, called the quotient algebra (over \( R \) ) of \( E \) modulo \( I \) .
For \( \alpha ,\beta \in E \) and \( c \in R \), addition, multiplication, and scalar multiplication in \( E/I \) are defined as follows:\n\n\[ \n{\left\lbrack \alpha \right\rbrack }_{I} + {\left\lbrack \beta \right\rbrack }_{I} \mathrel{\text{:=}} {\left\lbrack \alpha + \beta \right\rbrack }_{I},\;{\left\lbrack \alpha \right\rbrack }_{I} \cdot {\left\lbrack \beta \right\rbrack }_{I} \mathrel{\text{:=}} {\left\lbrack \alpha \cdot \beta \right\rbrack }_{I},\;c \cdot {\left\lbrack \alpha \right\rbrack }_{I} \mathrel{\text{:=}} {\left\lbrack c \cdot \alpha \right\rbrack }_{I}. \n\]
The ring of polynomials \( R\left\lbrack X\right\rbrack \) is an \( R \) -algebra via inclusion. Let \( f \in R\left\lbrack X\right\rbrack \) be a non-zero polynomial with \( \operatorname{lc}\left( f\right) \in {R}^{ * } \). We may form the quotient ring \( E \mathrel{\text{:=}} R\left\lbrack X\right\rbrack /\left( f\right) \), which may naturally be viewed as an \( R \) -algebra, as in the previous example. If \( \deg \left( f\right) = 0 \), then \( E \) is trivial; so assume \( \deg \left( f\right) > 0 \), and consider the map
\[ \tau : \;R \rightarrow E \] \[ c \mapsto c \cdot {1}_{E} \] from Theorem 16.2. By definition, \( \tau \left( c\right) = {\left\lbrack c\right\rbrack }_{f} \). As discussed in Example 7.55, the map \( \tau \) is a natural embedding of rings, and so by identifying \( R \) with its image in \( E \) under \( \tau \), we can view \( R \) as a subring of \( E \) ; therefore, we can also view \( E \) as an \( R \) -algebra via inclusion.
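To make the quotient concrete, the following sketch (ours, not from the text) computes canonical representatives in the monic instance \( E \mathrel{\text{:=}} \mathbb{Z}\left\lbrack X\right\rbrack /\left( {{X}^{2} + 1}\right) \): every class reduces to a unique representative of degree less than 2, using the relation \( {\xi }^{2} = - 1 \) for \( \xi \mathrel{\text{:=}} {\left\lbrack X\right\rbrack }_{f} \).

```python
# Reducing a polynomial modulo f = X^2 + 1 over Z (monic, so lc(f) is a unit).
# Coefficient lists: index i holds the coefficient of X^i.

def reduce_mod_f(g):
    """Canonical representative of [g]_f of degree < 2, for f = X^2 + 1."""
    g = list(g) + [0, 0]
    for i in range(len(g) - 1, 1, -1):
        # X^i = -X^(i-2) modulo X^2 + 1
        g[i - 2] -= g[i]
        g[i] = 0
    return g[:2]

assert reduce_mod_f([0, 0, 1]) == [-1, 0]   # xi^2 = -1
```

For instance, \( 1 + {2X} + 3{X}^{2} + 4{X}^{3} \) reduces to \( - 2 - {2X} \), since \( {X}^{2} \equiv - 1 \) and \( {X}^{3} \equiv - X \).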
Theorem 16.4. If \( E \) is an \( R \) -algebra via inclusion, and \( S \) is a subring of \( E \), then \( S \) is a subalgebra if and only if \( S \) contains \( R \) . More generally, if \( E \) is an arbitrary \( R \) -algebra, and \( S \) is a subring of \( E \), then \( S \) is a subalgebra of \( E \) if and only if \( S \) contains \( c \cdot {1}_{E} \) for all \( c \in R \) .
Proof. Exercise.
Theorem 16.5. If \( E \) and \( {E}^{\prime } \) are \( R \) -algebras via inclusion, and \( \rho : E \rightarrow {E}^{\prime } \) is a ring homomorphism, then \( \rho \) is an \( R \) -algebra homomorphism if and only if the restriction of \( \rho \) to \( R \) is the identity map. More generally, if \( E \) and \( {E}^{\prime } \) are arbitrary \( R \) -algebras and \( \rho : E \rightarrow {E}^{\prime } \) is a ring homomorphism, then \( \rho \) is an \( R \) -algebra homomorphism if and only if \( \rho \left( {c \cdot {1}_{E}}\right) = c \cdot {1}_{{E}^{\prime }} \) for all \( c \in R \) .
Proof. Exercise.
If \( E \) is an \( R \) -algebra and \( I \) is an ideal of \( E \), then as observed in Example 16.6, \( I \) is also a submodule of \( E \), and we may form the quotient algebra \( E/I \).
The natural map\n\n\[ \rho : \;E \rightarrow E/I \]\n\n\[ \alpha \mapsto {\left\lbrack \alpha \right\rbrack }_{I} \]\n\n is both a ring homomorphism and an \( R \) -linear map, and hence is an \( R \) -algebra homomorphism.
Theorem 16.6. Let \( E \) be an \( R \) -algebra, and let \( \rho : E \rightarrow E \) be an \( R \) -algebra homomorphism. Then the set \( S \mathrel{\text{:=}} \{ \alpha \in E : \rho \left( \alpha \right) = \alpha \} \) is a subalgebra of \( E \) , called the subalgebra of \( E \) fixed by \( \rho \) . Moreover, if \( E \) is a field, then so is \( S \) .
Proof. Let us verify that \( S \) is closed under addition. If \( \alpha ,\beta \in S \), then we have\n\n\[ \rho \left( {\alpha + \beta }\right) = \rho \left( \alpha \right) + \rho \left( \beta \right) \text{ (since }\rho \text{ is a group homomorphism) } \]\n\n\[ = \alpha + \beta \text{ (since }\alpha ,\beta \in S\text{ ). } \]\n\nUsing the fact that \( \rho \) is a ring homomorphism, one can similarly show that \( S \) is closed under multiplication, and that \( {1}_{E} \in S \) . Likewise, using the fact that \( \rho \) is an \( R \) -linear map, one can also show that \( S \) is closed under scalar multiplication.\n\nThis shows that \( S \) is a subalgebra, proving the first statement. For the second statement, suppose that \( E \) is a field. Let \( \alpha \) be a non-zero element of \( S \), and suppose \( \beta \in E \) is its multiplicative inverse, so that \( {\alpha \beta } = {1}_{E} \) . We want to show that \( \beta \) lies in \( S \) . Again, using the fact that \( \rho \) is a ring homomorphism, we have\n\n\[ {\alpha \beta } = {1}_{E} = \rho \left( {1}_{E}\right) = \rho \left( {\alpha \beta }\right) = \rho \left( \alpha \right) \rho \left( \beta \right) = {\alpha \rho }\left( \beta \right) ,\]\n\nand hence \( {\alpha \beta } = {\alpha \rho }\left( \beta \right) \) ; canceling \( \alpha \), we obtain \( \beta = \rho \left( \beta \right) \), and so \( \beta \in S \) .
Theorem 16.7. Let \( \rho : E \rightarrow {E}^{\prime } \) be an \( R \) -algebra homomorphism. Then for all \( g \in R\left\lbrack X\right\rbrack \) and \( \alpha \in E \), we have\n\n\[ \rho \left( {g\left( \alpha \right) }\right) = g\left( {\rho \left( \alpha \right) }\right) . \]
Proof. Let \( g = \mathop{\sum }\limits_{i}{a}_{i}{X}^{i} \in R\left\lbrack X\right\rbrack \) . Then we have\n\n\[ \rho \left( {g\left( \alpha \right) }\right) = \rho \left( {\mathop{\sum }\limits_{i}{a}_{i}{\alpha }^{i}}\right) = \mathop{\sum }\limits_{i}\rho \left( {{a}_{i}{\alpha }^{i}}\right) = \mathop{\sum }\limits_{i}{a}_{i}\rho \left( {\alpha }^{i}\right) = \mathop{\sum }\limits_{i}{a}_{i}\rho {\left( \alpha \right) }^{i} \]\n\n\[ = g\left( {\rho \left( \alpha \right) }\right) \text{.} \]
Let \( f \in R\left\lbrack X\right\rbrack \) be a non-zero polynomial with \( \operatorname{lc}\left( f\right) \in {R}^{ * } \). As in Example 16.7, we may form the quotient algebra \( E \mathrel{\text{:=}} R\left\lbrack X\right\rbrack /\left( f\right) \). Let \( \xi \mathrel{\text{:=}} {\left\lbrack X\right\rbrack }_{f} \in E \). Then \( E = R\left\lbrack \xi \right\rbrack \), and moreover, every element of \( E \) can be expressed uniquely as \( g\left( \xi \right) \), where \( g \in R\left\lbrack X\right\rbrack \) and \( \deg \left( g\right) < \deg \left( f\right) \). In addition, \( \xi \) is a root of \( f \). If \( \deg \left( f\right) > 0 \), these facts were already observed in Example 7.55, and otherwise, they are trivial.
Now let \( {E}^{\prime } \) be any \( R \) -algebra, and suppose that \( \rho : E \rightarrow {E}^{\prime } \) is an \( R \) -algebra homomorphism, and let \( {\xi }^{\prime } \mathrel{\text{:=}} \rho \left( \xi \right) \). By the previous theorem, \( \rho \) sends \( g\left( \xi \right) \) to \( g\left( {\xi }^{\prime }\right) \), for each \( g \in R\left\lbrack X\right\rbrack \). Thus, the image of \( \rho \) is \( R\left\lbrack {\xi }^{\prime }\right\rbrack \). Also, we have \( f\left( {\xi }^{\prime }\right) = f\left( {\rho \left( \xi \right) }\right) = \rho \left( {f\left( \xi \right) }\right) = \rho \left( {0}_{E}\right) = {0}_{{E}^{\prime }} \). Therefore, \( {\xi }^{\prime } \) must be a root of \( f \). Conversely, suppose that \( {\xi }^{\prime } \in {E}^{\prime } \) is a root of \( f \). Then the polynomial evaluation map from \( R\left\lbrack X\right\rbrack \) to \( {E}^{\prime } \) that sends \( g \in R\left\lbrack X\right\rbrack \) to \( g\left( {\xi }^{\prime }\right) \in {E}^{\prime } \) is an \( R \) -algebra homomorphism whose kernel contains \( f \). Using the generalized versions of the first isomorphism theorems for rings and \( R \) -modules (Theorems 7.27 and 13.10), we obtain the \( R \) -algebra homomorphism \[ \rho : \;E \rightarrow {E}^{\prime } \] \[ g\left( \xi \right) \mapsto g\left( {\xi }^{\prime }\right) \] One sees that complex conjugation is just a special case of this construction (see Example 7.57).
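Complex conjugation fixes \( \mathbb{R} \) and is a ring homomorphism, so by Theorem 16.7 it commutes with evaluating any polynomial with real coefficients. A quick numerical check (ours, using floating-point complex arithmetic; the sample polynomial is arbitrary):

```python
# Complex conjugation is an R-algebra homomorphism of C, so it commutes with
# evaluating a real-coefficient polynomial: conj(g(alpha)) = g(conj(alpha)).

def poly_eval(coeffs, z):
    """Evaluate sum_i coeffs[i] * z**i by Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = acc * z + c
    return acc

g = [1.0, -2.0, 0.0, 3.0]            # g = 1 - 2X + 3X^3, real coefficients
alpha = 2.0 + 1.0j
assert poly_eval(g, alpha).conjugate() == poly_eval(g, alpha.conjugate())
```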
Lemma 16.8. For all \( \left( {{a}_{1},{b}_{1}}\right) ,\left( {{a}_{2},{b}_{2}}\right) ,\left( {{a}_{3},{b}_{3}}\right) \in S \), we have\n\n(i) \( \left( {{a}_{1},{b}_{1}}\right) \sim \left( {{a}_{1},{b}_{1}}\right) \) ;\n\n(ii) \( \left( {{a}_{1},{b}_{1}}\right) \sim \left( {{a}_{2},{b}_{2}}\right) \) implies \( \left( {{a}_{2},{b}_{2}}\right) \sim \left( {{a}_{1},{b}_{1}}\right) \) ;\n\n(iii) \( \left( {{a}_{1},{b}_{1}}\right) \sim \left( {{a}_{2},{b}_{2}}\right) \) and \( \left( {{a}_{2},{b}_{2}}\right) \sim \left( {{a}_{3},{b}_{3}}\right) \) implies \( \left( {{a}_{1},{b}_{1}}\right) \sim \left( {{a}_{3},{b}_{3}}\right) \) .
Proof. (i) and (ii) are rather trivial, and we do not comment on these any further. As for (iii), assume that \( {a}_{1}{b}_{2} = {a}_{2}{b}_{1} \) and \( {a}_{2}{b}_{3} = {a}_{3}{b}_{2} \) . Multiplying the first equation by \( {b}_{3} \), we obtain \( {a}_{1}{b}_{2}{b}_{3} = {a}_{2}{b}_{1}{b}_{3} \) and substituting \( {a}_{3}{b}_{2} \) for \( {a}_{2}{b}_{3} \) on the right-hand side of this last equation, we obtain \( {a}_{1}{b}_{2}{b}_{3} = {a}_{3}{b}_{2}{b}_{1} \) . Now, using the fact that \( {b}_{2} \) is non-zero and that \( \mathbf{D} \) is an integral domain, we may cancel \( {b}_{2} \) from both sides, obtaining \( {a}_{1}{b}_{3} = {a}_{3}{b}_{1} \) .
Lemma 16.9. Let \( \left( {{a}_{1},{b}_{1}}\right) ,\left( {{a}_{1}^{\prime },{b}_{1}^{\prime }}\right) ,\left( {{a}_{2},{b}_{2}}\right) ,\left( {{a}_{2}^{\prime },{b}_{2}^{\prime }}\right) \in S \), where \( \left( {{a}_{1},{b}_{1}}\right) \sim \left( {{a}_{1}^{\prime },{b}_{1}^{\prime }}\right) \) and \( \left( {{a}_{2},{b}_{2}}\right) \sim \left( {{a}_{2}^{\prime },{b}_{2}^{\prime }}\right) \) . Then we have\n\n\[ \left( {{a}_{1}{b}_{2} + {a}_{2}{b}_{1},{b}_{1}{b}_{2}}\right) \sim \left( {{a}_{1}^{\prime }{b}_{2}^{\prime } + {a}_{2}^{\prime }{b}_{1}^{\prime },{b}_{1}^{\prime }{b}_{2}^{\prime }}\right) \]\n\nand\n\n\[ \left( {{a}_{1}{a}_{2},{b}_{1}{b}_{2}}\right) \sim \left( {{a}_{1}^{\prime }{a}_{2}^{\prime },{b}_{1}^{\prime }{b}_{2}^{\prime }}\right) . \]
Proof. This is a straightforward calculation. Since \( {a}_{1}{b}_{1}^{\prime } = {a}_{1}^{\prime }{b}_{1} \) and \( {a}_{2}{b}_{2}^{\prime } = {a}_{2}^{\prime }{b}_{2} \) , we have\n\n\[ \left( {{a}_{1}{b}_{2} + {a}_{2}{b}_{1}}\right) {b}_{1}^{\prime }{b}_{2}^{\prime } = {a}_{1}{b}_{2}{b}_{1}^{\prime }{b}_{2}^{\prime } + {a}_{2}{b}_{1}{b}_{1}^{\prime }{b}_{2}^{\prime } = {a}_{1}^{\prime }{b}_{2}{b}_{1}{b}_{2}^{\prime } + {a}_{2}^{\prime }{b}_{1}{b}_{1}^{\prime }{b}_{2} \]\n\n\[ = \left( {{a}_{1}^{\prime }{b}_{2}^{\prime } + {a}_{2}^{\prime }{b}_{1}^{\prime }}\right) {b}_{1}{b}_{2} \]\n\nand\n\n\[ {a}_{1}{a}_{2}{b}_{1}^{\prime }{b}_{2}^{\prime } = {a}_{1}^{\prime }{a}_{2}{b}_{1}{b}_{2}^{\prime } = {a}_{1}^{\prime }{a}_{2}^{\prime }{b}_{1}{b}_{2} \]
Lemma 16.10. With addition and multiplication as defined above, \( K \) is a ring, with additive identity \( \left\lbrack {{0}_{D},{1}_{D}}\right\rbrack \) and multiplicative identity \( \left\lbrack {{1}_{D},{1}_{D}}\right\rbrack \) .
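The construction of \( K \) can be modeled directly in code. The sketch below is ours, with \( \mathbf{D} = \mathbb{Z} \) for concreteness: each class \( \left\lbrack {a, b}\right\rbrack \) is stored by a canonical representative in lowest terms with positive second component, which turns the equivalence relation of Lemma 16.8 into ordinary equality of pairs. This is essentially how Python's own `fractions.Fraction` works.

```python
from math import gcd

# Classes [a, b] over D = Z, stored in lowest terms with positive "denominator".

def normalize(a, b):
    """Canonical representative of the class [a, b]; requires b != 0."""
    g = gcd(a, b) * (1 if b > 0 else -1)
    return (a // g, b // g)

def add(x, y):
    return normalize(x[0] * y[1] + y[0] * x[1], x[1] * y[1])

def mul(x, y):
    return normalize(x[0] * y[0], x[1] * y[1])

assert add((1, 2), (1, 3)) == (5, 6)      # 1/2 + 1/3 = 5/6
assert mul((2, 3), (3, 4)) == (1, 2)      # (2/3)(3/4) = 1/2
```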
Proof. Exercise.
Theorem 16.11. Every non-zero polynomial \( f \in F\left\lbrack X\right\rbrack \) can be expressed as\n\n\[ f = c \cdot {p}_{1}^{{e}_{1}}\cdots {p}_{r}^{{e}_{r}} \]\n\nwhere \( c \in {F}^{ * },{p}_{1},\ldots ,{p}_{r} \) are distinct monic irreducible polynomials, and \( {e}_{1},\ldots ,{e}_{r} \) are positive integers. Moreover, this expression is unique, up to a reordering of the irreducible polynomials.
To prove this theorem, we may assume that \( f \) is monic, since the non-monic case trivially reduces to the monic case.\n\nThe proof of the existence part of Theorem 16.11 is just as for Theorem 1.3. If \( f \) is 1 or a monic irreducible, we are done. Otherwise, there exist \( g, h \in F\left\lbrack X\right\rbrack \) of degree strictly less than that of \( f \) such that \( f = {gh} \), and again, we may assume that \( g \) and \( h \) are monic. By induction on degree, both \( g \) and \( h \) can be expressed as a product of monic irreducible polynomials, and hence, so can \( f \) .\n\nThe proof of the uniqueness part of Theorem 16.11 is almost identical to that of Theorem 1.3. The key to the proof is the division with remainder property, Theorem 7.10, from which we can easily derive the following analog of Theorem 1.6:
Theorem 16.12. Let \( I \) be an ideal of \( F\left\lbrack X\right\rbrack \) . Then there exists a unique polynomial \( d \in F\left\lbrack X\right\rbrack \) such that \( I = {dF}\left\lbrack X\right\rbrack \) and \( d \) is either zero or monic.
Proof. We first prove the existence part of the theorem. If \( I = \{ 0\} \), then \( d = 0 \) does the job, so let us assume that \( I \neq \{ 0\} \) . Since \( I \) contains non-zero polynomials, it must contain monic polynomials, since if \( g \) is a non-zero polynomial in \( I \), then its monic associate \( \operatorname{lc}{\left( g\right) }^{-1}g \) is also in \( I \) . Let \( d \) be a monic polynomial of minimal degree in \( I \) . We want to show that \( I = {dF}\left\lbrack X\right\rbrack \) . We first show that \( I \subseteq {dF}\left\lbrack X\right\rbrack \) . To this end, let \( g \) be any element in \( I \) . It suffices to show that \( d \mid g \) . Using Theorem 7.10, we may write \( g = {dq} + r \), where \( \deg \left( r\right) < \deg \left( d\right) \) . Then by the closure properties of ideals, one sees that \( r = g - {dq} \) is also an element of \( I \), and by the minimality of the degree of \( d \), we must have \( r = 0 \) . Thus, \( d \mid g \) . We next show that \( {dF}\left\lbrack X\right\rbrack \subseteq I \) . This follows immediately from the fact that \( d \in I \) and the closure properties of ideals. That proves the existence part of the theorem. As for uniqueness, note that if \( {dF}\left\lbrack X\right\rbrack = {eF}\left\lbrack X\right\rbrack \), we have \( d \mid e \) and \( e \mid d \), from which it follows that \( d \) and \( e \) are associate, and so if \( d \) and \( e \) are both either monic or zero, they must be equal.
Theorem 16.13. For all \( g, h \in F\left\lbrack X\right\rbrack \), there exists a unique greatest common divisor \( d \) of \( g \) and \( h \), and moreover, \( {gF}\left\lbrack X\right\rbrack + {hF}\left\lbrack X\right\rbrack = {dF}\left\lbrack X\right\rbrack \) .
Proof. We apply the previous theorem to the ideal \( I \mathrel{\text{:=}} {gF}\left\lbrack X\right\rbrack + {hF}\left\lbrack X\right\rbrack \) . Let \( d \in F\left\lbrack X\right\rbrack \) with \( I = {dF}\left\lbrack X\right\rbrack \), as in that theorem. Note that \( g, h, d \in I \) and \( d \) is monic or zero.\n\nIt is clear that \( d \) is a common divisor of \( g \) and \( h \) . Moreover, there exist \( s, t \in F\left\lbrack X\right\rbrack \) such that \( {gs} + {ht} = d \) . If \( {d}^{\prime }\left| {g\text{and}{d}^{\prime }}\right| h \), then clearly \( {d}^{\prime } \mid \left( {{gs} + {ht}}\right) \), and hence \( {d}^{\prime } \mid d \) .\n\nFinally, for uniqueness, if \( e \) is a greatest common divisor of \( g \) and \( h \), then \( d \mid e \) and \( e \mid d \), and hence \( e \) is associate to \( d \), and the requirement that \( e \) is monic or zero implies that \( e = d \) .
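Theorem 16.13 is effective: the monic gcd can be computed by the Euclidean algorithm, driven by division with remainder in \( F\left\lbrack X\right\rbrack \). A sketch over \( F = \mathbb{Q} \) (ours, not from the text), with polynomials as coefficient lists:

```python
from fractions import Fraction

# Polynomials over Q as coefficient lists; index i holds the coefficient of X^i.

def divmod_poly(g, h):
    """Division with remainder in Q[X]: returns (q, r) with g = h*q + r and
    deg(r) < deg(h); h must have a non-zero leading coefficient."""
    g = [Fraction(c) for c in g]
    h = [Fraction(c) for c in h]
    q = [Fraction(0)] * max(len(g) - len(h) + 1, 0)
    for shift in range(len(g) - len(h), -1, -1):
        c = g[shift + len(h) - 1] / h[-1]
        q[shift] = c
        for i, hc in enumerate(h):
            g[shift + i] -= c * hc
    return q, g[:len(h) - 1]

def gcd_poly(g, h):
    """Monic gcd of g and h (g non-zero) via the Euclidean algorithm."""
    g, h = list(g), list(h)
    while h and h[-1] == 0:          # strip leading zero coefficients of h
        h.pop()
    while h:
        _, r = divmod_poly(g, h)
        while r and r[-1] == 0:
            r.pop()
        g, h = h, r
    return [Fraction(c) / g[-1] for c in g]

# gcd of (X - 1)(X + 2) and (X - 1)(X - 3) is X - 1:
assert gcd_poly([-2, 1, 1], [3, -4, 1]) == [-1, 1]
```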
Theorem 16.14. For \( f, g, h \in F\left\lbrack X\right\rbrack \) such that \( f \mid {gh} \) and \( \gcd \left( {f, g}\right) = 1 \), we have \( f \mid h \) .
Proof. Suppose that \( f \mid {gh} \) and \( \gcd \left( {f, g}\right) = 1 \) . Then since \( \gcd \left( {f, g}\right) = 1 \), by Theorem 16.13 we have \( {fs} + {gt} = 1 \) for some \( s, t \in F\left\lbrack X\right\rbrack \) . Multiplying this equation by \( h \), we obtain \( {fhs} + {ght} = h \) . Now, \( f \) divides \( {fhs} \) trivially, and \( f \) divides \( {ght} \) because \( f \mid {gh} \) by hypothesis; therefore, \( f \) divides their sum, and it follows that \( f \mid h \) .
Theorem 16.15. Let \( p \in F\left\lbrack X\right\rbrack \) be irreducible, and let \( g, h \in F\left\lbrack X\right\rbrack \) . Then \( p \mid {gh} \) implies that \( p \mid g \) or \( p \mid h \) .
Proof. Assume that \( p \mid {gh} \) . The only divisors of \( p \) are associate to 1 or \( p \) . Thus, \( \gcd \left( {p, g}\right) \) is either 1 or the monic associate of \( p \) . If \( p \mid g \), we are done; otherwise, if \( p \nmid g \), we must have \( \gcd \left( {p, g}\right) = 1 \), and by the previous theorem, we conclude that \( p \mid h \) .
Theorem 16.16. There are infinitely many monic irreducible polynomials in \( F\left\lbrack X\right\rbrack \) .
If \( F \) is infinite, then this theorem is true simply because there are infinitely many monic linear polynomials; in any case, one can easily prove this theorem by mimicking the proof of Theorem 1.11 (as the reader may verify).
Let \( f \in F\left\lbrack X\right\rbrack \) be a polynomial of degree 2 or 3 . Then it is easy to see that \( f \) is irreducible if and only if \( f \) has no roots in \( F \) .
Indeed, if \( f \) is reducible, then it must have a factor of degree 1 , which we can assume is monic; thus, we can write \( f = \left( {X - x}\right) g \), where \( x \in F \) and \( g \in F\left\lbrack X\right\rbrack \), and so \( f\left( x\right) = \left( {x - x}\right) g\left( x\right) = 0 \) . Conversely, if \( x \in F \) is a root of \( f \), then \( X - x \) divides \( f \) (see Theorem 7.12), and so \( f \) is reducible.
As a special case of the previous example, consider the polynomials \( f \mathrel{\text{:=}} {X}^{2} - 2 \in \mathbb{Q}\left\lbrack X\right\rbrack \) and \( g \mathrel{\text{:=}} {X}^{3} - 2 \in \mathbb{Q}\left\lbrack X\right\rbrack \) . We claim that as polynomials over \( \mathbb{Q}, f \) and \( g \) are irreducible.
Indeed, neither of them has integer roots, and so neither has rational roots (see Exercise 1.26); therefore, they are irreducible.
Theorem 16.19 (Chinese remainder theorem). Let \( {\left\{ {f}_{i}\right\} }_{i = 1}^{k} \) be a pairwise relatively prime family of non-zero polynomials in \( F\left\lbrack X\right\rbrack \), and let \( {g}_{1},\ldots ,{g}_{k} \) be arbitrary polynomials in \( F\left\lbrack X\right\rbrack \) . Then there exists a solution \( g \in F\left\lbrack X\right\rbrack \) to the system of congruences\n\n\[ g \equiv {g}_{i}\left( {\;\operatorname{mod}\;{f}_{i}}\right) \;\left( {i = 1,\ldots, k}\right) . \]\n\nMoreover, any \( {g}^{\prime } \in F\left\lbrack X\right\rbrack \) is a solution to this system of congruences if and only if \( g \equiv {g}^{\prime }\left( {\;\operatorname{mod}\;f}\right) \), where \( f \mathrel{\text{:=}} \mathop{\prod }\limits_{{i = 1}}^{k}{f}_{i} \) .
Let us recall the formula for the solution \( g \) (see proof of Theorem 2.6). We have\n\n\[ g \mathrel{\text{:=}} \mathop{\sum }\limits_{{i = 1}}^{k}{g}_{i}{e}_{i} \]\n\nwhere\n\n\[ {e}_{i} \mathrel{\text{:=}} {f}_{i}^{ * }{t}_{i},\;{f}_{i}^{ * } \mathrel{\text{:=}} f/{f}_{i},\;{t}_{i} \mathrel{\text{:=}} {\left( {f}_{i}^{ * }\right) }^{-1}{\;\operatorname{mod}\;{f}_{i}}\;\left( {i = 1,\ldots, k}\right) . \]
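A familiar special case: taking \( {f}_{i} \mathrel{\text{:=}} X - {x}_{i} \) for distinct \( {x}_{i} \in F \) (pairwise relatively prime), the congruence \( g \equiv {g}_{i}\left( {\;\operatorname{mod}\;{f}_{i}}\right) \) just says \( g\left( {x}_{i}\right) = {g}_{i}\left( {x}_{i}\right) \), and the formula above becomes Lagrange interpolation. A sketch over \( \mathbb{Q} \) (ours; the sample points are illustrative only):

```python
from fractions import Fraction

# With f_i := X - x_i, the CRT solution g is the Lagrange interpolant:
# each e_i is the polynomial that is 1 at x_i and 0 at the other points.

def lagrange(points):
    """Return a function evaluating the unique interpolant of degree < len(points)."""
    def g(x):
        total = Fraction(0)
        for i, (xi, yi) in enumerate(points):
            term = Fraction(yi)
            for j, (xj, _) in enumerate(points):
                if j != i:
                    term *= Fraction(x - xj, xi - xj)
            total += term
        return total
    return g

g = lagrange([(0, 1), (1, 2), (2, 5)])    # the interpolant here is X^2 + 1
assert [g(x) for x in (0, 1, 2)] == [1, 2, 5]
```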
The polynomial \( {X}^{2} + 1 \) is irreducible over \( \mathbb{R} \)
since if it were not, it would have a root in \( \mathbb{R} \) (see Example 16.12), which is clearly impossible, since -1 is not the square of any real number. It follows immediately that \( \mathbb{C} = \mathbb{R}\left\lbrack X\right\rbrack /\left( {{X}^{2} + 1}\right) \) is a field, without having to explicitly calculate a formula for the inverse of a nonzero complex number.
Consider the polynomial \( f \mathrel{\text{:=}} {X}^{4} + {X}^{3} + 1 \) over \( {\mathbb{Z}}_{2} \). We claim that \( f \) is irreducible. It suffices to show that \( f \) has no irreducible factors of degree 1 or 2.
If \( f \) had a factor of degree 1, then it would have a root; however, \( f\left( 0\right) = 0 + 0 + 1 = 1 \) and \( f\left( 1\right) = 1 + 1 + 1 = 1 \). So \( f \) has no factors of degree 1.\n\nDoes \( f \) have a factor of degree 2? The polynomials of degree 2 are \( {X}^{2},{X}^{2} + X \), \( {X}^{2} + 1 \), and \( {X}^{2} + X + 1 \). The first and second of these polynomials are divisible by \( X \), and hence not irreducible, while the third has 1 as a root, and hence is also not irreducible. The last polynomial, \( {X}^{2} + X + 1 \), has no roots, and hence is the only irreducible polynomial of degree 2 over \( {\mathbb{Z}}_{2} \). So now we may conclude that if \( f \) were not irreducible, it would have to be equal to\n\n\[ {\left( {X}^{2} + X + 1\right) }^{2} = {X}^{4} + 2{X}^{3} + 3{X}^{2} + {2X} + 1 = {X}^{4} + {X}^{2} + 1, \]\n\nwhich it is not.\n\nThus, \( E \mathrel{\text{:=}} {\mathbb{Z}}_{2}\left\lbrack X\right\rbrack /\left( f\right) \) is a field with \( {2}^{4} = 16 \) elements.
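Since \( f \) is irreducible, every non-zero element of \( E \) should have a multiplicative inverse, and with only 16 elements this can be verified exhaustively by machine. In the sketch below (ours), an element of \( E \) is a 4-bit integer whose bit \( i \) is the coefficient of \( {X}^{i} \):

```python
# Elements of E = Z_2[X]/(f), f = X^4 + X^3 + 1, as 4-bit integers:
# bit i of an element is the coefficient of X^i.

F = 0b11001  # f = X^4 + X^3 + 1

def mul(a, b):
    """Multiply in E: carry-less (GF(2)) product, then reduce modulo f."""
    prod = 0
    while b:
        if b & 1:
            prod ^= a
        b >>= 1
        a <<= 1
    for i in range(prod.bit_length() - 1, 3, -1):  # eliminate degree >= 4 terms
        if prod >> i & 1:
            prod ^= F << (i - 4)
    return prod

# Since f is irreducible, every non-zero element of E has an inverse:
assert all(any(mul(a, b) == 1 for b in range(1, 16)) for a in range(1, 16))
```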
Consider the real numbers \( \sqrt{2} \) and \( \sqrt[3]{2} \). We claim that \( {X}^{2} - 2 \) is the minimal polynomial of \( \sqrt{2} \) over \( \mathbb{Q} \).
To see this, first observe that \( \sqrt{2} \) is a root of \( {X}^{2} - 2 \). Thus, the minimal polynomial of \( \sqrt{2} \) divides \( {X}^{2} - 2 \). However, as we saw in Example 16.13, the polynomial \( {X}^{2} - 2 \) is irreducible over \( \mathbb{Q} \), and hence must be equal to the minimal polynomial of \( \sqrt{2} \) over \( \mathbb{Q} \).
Theorem 16.20. Suppose \( E \) is an \( F \) -algebra, and that as an \( F \) -vector space, \( E \) has finite dimension \( n \) . Then every \( \alpha \in E \) has a non-zero minimal polynomial of degree at most \( n \) .
Proof. Indeed, the family of elements\n\n\[ \n{1}_{E},\alpha ,\ldots ,{\alpha }^{n} \n\]\n\nmust be linearly dependent (as must any family of \( n + 1 \) elements of a vector space\n\nof dimension \( n \) ), and hence there exist \( {c}_{0},\ldots ,{c}_{n} \in F \), not all zero, such that\n\n\[ \n{c}_{0}{1}_{E} + {c}_{1}\alpha + \cdots + {c}_{n}{\alpha }^{n} = {0}_{E}, \n\]\n\nand therefore, the non-zero polynomial \( f \mathrel{\text{:=}} \mathop{\sum }\limits_{i}{c}_{i}{X}^{i} \) vanishes at \( \alpha \) .
Theorem 16.22. Suppose \( E \) is a finite extension of a field \( K \), with a basis \( {\left\{ {\beta }_{j}\right\} }_{j = 1}^{m} \) over \( K \), and \( K \) is a finite extension of \( F \), with a basis \( {\left\{ {\alpha }_{i}\right\} }_{i = 1}^{n} \) over \( F \) . Then the elements\n\n\[ \n{\alpha }_{i}{\beta }_{j}\;\left( {i = 1,\ldots, n;j = 1,\ldots, m}\right)\n\]\n\nform a basis for \( E \) over \( F \) . In particular, \( E \) is a finite extension of \( F \) and\n\n\[ \n\left( {E : F}\right) = \left( {E : K}\right) \left( {K : F}\right) .\n\]
Now suppose that \( E \) is a finite extension of a field \( F \) . Let \( K \) be an intermediate field, that is, a subfield of \( E \) containing \( F \) . Then evidently, \( E \) is a finite extension of \( K \) (since any basis for \( E \) over \( F \) also spans \( E \) over \( K \) ), and \( K \) is a finite extension of \( F \) (since as \( F \) -vector spaces, \( K \) is a subspace of \( E \) ). The previous theorem then implies that \( \left( {E : F}\right) = \left( {E : K}\right) \left( {K : F}\right) \) . We have proved:
Theorem 16.25. Let \( F \) be a field, and \( f \in F\left\lbrack X\right\rbrack \) a non-zero polynomial of degree \( n \) . Then there exists a finite extension \( E \) of \( F \) over which \( f \) factors as\n\n\[ f = c\left( {X - {\alpha }_{1}}\right) \left( {X - {\alpha }_{2}}\right) \cdots \left( {X - {\alpha }_{n}}\right) ,\]\n\nwhere \( c \in F \) and \( {\alpha }_{1},\ldots ,{\alpha }_{n} \in E \) .
Proof. We may assume that \( f \) is monic. We prove the existence of \( E \) by induction on the degree \( n \) of \( f \) . If \( n = 0 \), then the theorem is trivially true. Otherwise, let \( h \) be an irreducible factor of \( f \), and set \( K \mathrel{\text{:=}} F\left\lbrack X\right\rbrack /\left( h\right) \), so that \( \xi \mathrel{\text{:=}} {\left\lbrack X\right\rbrack }_{h} \in K \) is a root of \( h \), and hence of \( f \) . So over \( K \), which is a finite extension of \( F \), the polynomial \( f \) factors as\n\n\[ f = \left( {X - \xi }\right) g \]\n\nwhere \( g \in K\left\lbrack X\right\rbrack \) is a monic polynomial of degree \( n - 1 \) . Applying the induction hypothesis, there exists a finite extension \( E \) of \( K \) over which \( g \) splits into linear factors. Thus, over \( E, f \) splits into linear factors, and by Theorem 16.22, \( E \) is a finite extension of \( F \) .
Theorem 16.26. We have:\n\n(i) \( \mathbf{D}\left( c\right) = 0 \) for all \( c \in R \) ;\n\n(ii) \( \mathbf{D}\left( X\right) = 1 \) ;\n\n(iii) \( \mathbf{D}\left( {g + h}\right) = \mathbf{D}\left( g\right) + \mathbf{D}\left( h\right) \) for all \( g, h \in R\left\lbrack X\right\rbrack \) ;\n\n(iv) \( \mathbf{D}\left( {gh}\right) = \mathbf{D}\left( g\right) h + g\mathbf{D}\left( h\right) \) for all \( g, h \in R\left\lbrack X\right\rbrack \) .
Proof. Parts (i) and (ii) are immediate from the definition. Parts (iii) and (iv) follow from the definition by a simple calculation. Suppose\n\n\[ g\left( {X + Y}\right) \equiv g + {g}_{1}Y\left( {\;\operatorname{mod}\;{Y}^{2}}\right) \text{ and }h\left( {X + Y}\right) \equiv h + {h}_{1}Y\left( {\;\operatorname{mod}\;{Y}^{2}}\right) ,\]\n\nwhere \( {g}_{1} = \mathbf{D}\left( g\right) \) and \( {h}_{1} = \mathbf{D}\left( h\right) \) . Then\n\n\[ \left( {g + h}\right) \left( {X + Y}\right) \equiv g\left( {X + Y}\right) + h\left( {X + Y}\right) \equiv \left( {g + h}\right) + \left( {{g}_{1} + {h}_{1}}\right) Y\left( {\;\operatorname{mod}\;{Y}^{2}}\right) ,\]\n\nand\n\n\[ \left( {gh}\right) \left( {X + Y}\right) \equiv g\left( {X + Y}\right) h\left( {X + Y}\right) \equiv {gh} + \left( {{g}_{1}h + g{h}_{1}}\right) Y\left( {\;\operatorname{mod}\;{Y}^{2}}\right) . \]
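These identities are easy to spot-check on coefficient lists; the sketch below is ours (index \( i \) holds the coefficient of \( {X}^{i} \)), and verifies parts (iii) and (iv) on a sample pair of polynomials:

```python
# The formal derivative on coefficient lists, with a spot check of the sum
# rule (iii) and the product rule (iv) of Theorem 16.26.

def D(g):
    """Formal derivative: D(sum a_i X^i) = sum i*a_i X^(i-1)."""
    return [i * a for i, a in enumerate(g)][1:] or [0]

def mul(g, h):
    prod = [0] * (len(g) + len(h) - 1)
    for i, a in enumerate(g):
        for j, b in enumerate(h):
            prod[i + j] += a * b
    return prod

def add(g, h):
    n = max(len(g), len(h))
    g = g + [0] * (n - len(g))
    h = h + [0] * (n - len(h))
    return [a + b for a, b in zip(g, h)]

g, h = [1, 2, 3], [4, 0, 5]          # 1 + 2X + 3X^2 and 4 + 5X^2
assert D(add(g, h)) == add(D(g), D(h))
assert D(mul(g, h)) == add(mul(D(g), h), mul(g, D(h)))
```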
Theorem 16.27. Let \( g = \mathop{\sum }\limits_{{i = 0}}^{\infty }{a}_{i}{X}^{i} \in R\llbracket X\rrbracket \) . Then \( g \in {\left( R\llbracket X\rrbracket \right) }^{ * } \) if and only if \( {a}_{0} \in {R}^{ * } \) .
Proof. If \( {a}_{0} \) is not a unit, then it is clear that \( g \) is not a unit, since the constant term of a product of formal power series is equal to the product of the constant terms.\n\nConversely, if \( {a}_{0} \) is a unit, we show how to define the coefficients of the inverse \( h = \mathop{\sum }\limits_{{i = 0}}^{\infty }{b}_{i}{X}^{i} \) of \( g \) . Let \( f = {gh} = \mathop{\sum }\limits_{{i = 0}}^{\infty }{c}_{i}{X}^{i} \) . We want \( f = 1 \), which means that \( {c}_{0} = 1 \) and \( {c}_{i} = 0 \) for all \( i > 0 \) . Now, \( {c}_{0} = {a}_{0}{b}_{0} \), so we set \( {b}_{0} \mathrel{\text{:=}} {a}_{0}^{-1} \) . Next, we have \( {c}_{1} = {a}_{0}{b}_{1} + {a}_{1}{b}_{0} \), so we set \( {b}_{1} \mathrel{\text{:=}} - {a}_{1}{b}_{0} \cdot {a}_{0}^{-1} \) . Next, we have \( {c}_{2} = {a}_{0}{b}_{2} + {a}_{1}{b}_{1} + {a}_{2}{b}_{0} \) , so we set \( {b}_{2} \mathrel{\text{:=}} - \left( {{a}_{1}{b}_{1} + {a}_{2}{b}_{0}}\right) \cdot {a}_{0}^{-1} \) . Continuing in this way, we see that if we define \( {b}_{i} \mathrel{\text{:=}} - \left( {{a}_{1}{b}_{i - 1} + \cdots + {a}_{i}{b}_{0}}\right) \cdot {a}_{0}^{-1} \) for \( i \geq 1 \), then \( {gh} = 1 \) .
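The recursion in the proof can be run directly. The sketch below is ours, over \( R \mathrel{\text{:=}} \mathbb{Q} \); it returns the first \( n \) coefficients of the inverse of a power series whose constant term is a unit:

```python
from fractions import Fraction

# First n coefficients of 1/g for g = sum a_i X^i with a[0] a unit, following
# the recursion b_0 = 1/a_0, b_i = -(a_1 b_{i-1} + ... + a_i b_0) / a_0.

def series_inverse(a, n):
    b = [Fraction(1) / a[0]]
    for i in range(1, n):
        s = sum((a[j] * b[i - j] for j in range(1, min(i, len(a) - 1) + 1)),
                Fraction(0))
        b.append(-s / a[0])
    return b

# 1 / (1 - X) = 1 + X + X^2 + ... :
assert series_inverse([1, -1], 5) == [1, 1, 1, 1, 1]
```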
Theorem 16.28. If \( D \) is an integral domain, then \( D\left( \left( X\right) \right) \) is an integral domain.
Proof. Let \( g = \mathop{\sum }\limits_{{i = m}}^{\infty }{a}_{i}{X}^{i} \) and \( h = \mathop{\sum }\limits_{{i = n}}^{\infty }{b}_{i}{X}^{i} \), where \( {a}_{m} \neq 0 \) and \( {b}_{n} \neq 0 \) . Then \( {gh} = \mathop{\sum }\limits_{{i = m + n}}^{\infty }{c}_{i}{X}^{i} \), where \( {c}_{m + n} = {a}_{m}{b}_{n} \neq 0 \) .
Theorem 16.29. Let \( g \in R\left( \left( X\right) \right) \), and suppose that \( g \neq 0 \) and \( g = \mathop{\sum }\limits_{{i = m}}^{\infty }{a}_{i}{X}^{i} \) with \( {a}_{m} \in {R}^{ * } \) . Then \( g \) has a multiplicative inverse in \( R\left( \left( X\right) \right) \).
Proof. We can write \( g = {X}^{m}{g}^{\prime } \), where \( {g}^{\prime } \) is a formal power series whose constant term is a unit, and hence there is a formal power series \( h \) such that \( {g}^{\prime }h = 1 \) . Thus, \( {X}^{-m}h \) is the multiplicative inverse of \( g \) in \( R\left( \left( X\right) \right) \).
Theorem 16.31. For \( g, h \in R\left( \left( {X}^{-1}\right) \right) \), we have \( \deg \left( {gh}\right) \leq \deg \left( g\right) + \deg \left( h\right) \), where equality holds unless both \( \operatorname{lc}\left( g\right) \) and \( \operatorname{lc}\left( h\right) \) are zero divisors. Furthermore, if \( h \neq 0 \) and \( \operatorname{lc}\left( h\right) \) is a unit, then \( h \) is a unit, and we have \( \deg \left( {g{h}^{-1}}\right) = \deg \left( g\right) - \deg \left( h\right) \) .
Proof. Exercise.
Theorem 16.32. Let \( g, h \in R\left\lbrack X\right\rbrack \) with \( h \neq 0 \) and \( \operatorname{lc}\left( h\right) \in {R}^{ * } \), and using the usual division with remainder property for polynomials, write \( g = {hq} + r \), where \( q, r \in R\left\lbrack X\right\rbrack \) with \( \deg \left( r\right) < \deg \left( h\right) \) . Let \( {h}^{-1} \) denote the multiplicative inverse of \( h \) in \( R\left( \left( {X}^{-1}\right) \right) \) . Then \( q = \left\lfloor {g{h}^{-1}}\right\rfloor \) .
Proof. Multiplying the equation \( g = {hq} + r \) by \( {h}^{-1} \), we obtain \( g{h}^{-1} = q + r{h}^{-1} \) , and \( \deg \left( {r{h}^{-1}}\right) < 0 \), from which it follows that \( \left\lfloor {g{h}^{-1}}\right\rfloor = q \) .
Consider the subring \( \mathbb{Z}\left\lbrack \sqrt{-3}\right\rbrack \) of the complex numbers, which consists of all complex numbers of the form \( a + b\sqrt{-3} \), where \( a, b \in \mathbb{Z} \). As this is a subring of the field \( \mathbb{C} \), it is an integral domain (one may also view \( \mathbb{Z}\left\lbrack \sqrt{-3}\right\rbrack \) as the quotient ring \( \mathbb{Z}\left\lbrack X\right\rbrack /\left( {{X}^{2} + 3}\right) \) ).
Let us first determine the units in \( \mathbb{Z}\left\lbrack \sqrt{-3}\right\rbrack \) . For \( a, b \in \mathbb{Z} \), we have \( N\left( {a + b\sqrt{-3}}\right) = \) \( {a}^{2} + 3{b}^{2} \), where \( N \) is the usual norm map on \( \mathbb{C} \) (see Example 7.5). If \( \alpha \in \mathbb{Z}\left\lbrack \sqrt{-3}\right\rbrack \) is a unit, then there exists \( {\alpha }^{\prime } \in \mathbb{Z}\left\lbrack \sqrt{-3}\right\rbrack \) such that \( \alpha {\alpha }^{\prime } = 1 \) . Taking norms, we obtain

\[ 1 = N\left( 1\right) = N\left( {\alpha {\alpha }^{\prime }}\right) = N\left( \alpha \right) N\left( {\alpha }^{\prime }\right) . \]

Since the norm of an element of \( \mathbb{Z}\left\lbrack \sqrt{-3}\right\rbrack \) is a non-negative integer, this implies that \( N\left( \alpha \right) = 1 \) . If \( \alpha = a + b\sqrt{-3} \), with \( a, b \in \mathbb{Z} \), then \( N\left( \alpha \right) = {a}^{2} + 3{b}^{2} \), and it is clear that \( N\left( \alpha \right) = 1 \) if and only if \( \alpha = \pm 1 \) . We conclude that the only units in \( \mathbb{Z}\left\lbrack \sqrt{-3}\right\rbrack \) are \( \pm 1 \) .

Now consider the following two factorizations of 4 in \( \mathbb{Z}\left\lbrack \sqrt{-3}\right\rbrack \) :

\[ 4 = 2 \cdot 2 = \left( {1 + \sqrt{-3}}\right) \left( {1 - \sqrt{-3}}\right) . \]

(16.8)

We claim that 2 is irreducible. For suppose, say, that \( 2 = \alpha {\alpha }^{\prime } \), for \( \alpha ,{\alpha }^{\prime } \in \mathbb{Z}\left\lbrack \sqrt{-3}\right\rbrack \) , with neither a unit. Taking norms, we have \( 4 = N\left( 2\right) = N\left( \alpha \right) N\left( {\alpha }^{\prime }\right) \) ; since neither factor is a unit, neither norm is 1, and therefore \( N\left( \alpha \right) = N\left( {\alpha }^{\prime }\right) = 2 \) . But this is impossible, since there are no integers \( a \) and \( b \) such that \( {a}^{2} + 3{b}^{2} = 2 \) .
By the same reasoning, since \( N\left( {1 + \sqrt{-3}}\right) = N\left( {1 - \sqrt{-3}}\right) = 4 \) , we see that \( 1 + \sqrt{-3} \) and \( 1 - \sqrt{-3} \) are both irreducible. Further, it is clear that 2 is not associate to either \( 1 + \sqrt{-3} \) or \( 1 - \sqrt{-3} \), and so the two factorizations of 4 in (16.8) are fundamentally different.
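Both claims in this example can be checked mechanically. A brute-force sketch (representing \( a + b\sqrt{-3} \) as the pair \( \left( {a, b}\right) \) ): since \( {a}^{2} + 3{b}^{2} = 2 \) forces \( \left| a\right| \leq 1 \) and \( b = 0 \), a tiny search range suffices.

```python
def norm(a, b):
    # N(a + b*sqrt(-3)) = a^2 + 3*b^2
    return a * a + 3 * b * b

def mul(x, y):
    # (a + b*s)(c + d*s) with s = sqrt(-3), so s^2 = -3
    a, b = x
    c, d = y
    return (a * c - 3 * b * d, a * d + b * c)

# no element of Z[sqrt(-3)] has norm 2 ...
assert all(norm(a, b) != 2 for a in range(-2, 3) for b in range(-2, 3))

# ... and both factorizations in (16.8) really give 4
assert mul((2, 0), (2, 0)) == (4, 0)
assert mul((1, 1), (1, -1)) == (4, 0)
```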
Theorem 16.34. Suppose \( D \) satisfies part (i) of Definition 16.33, and that \( D/{pD} \) is an integral domain for every irreducible \( p \in D \) . Then \( D \) is a UFD.
Proof. Exercise.
Both \( \mathbb{Z} \) and \( F\left\lbrack X\right\rbrack \) are Euclidean domains.
In \( \mathbb{Z} \), we can take the ordinary absolute value function \( \left| \cdot \right| \) as a size function, and for \( F\left\lbrack X\right\rbrack \), the function \( \deg \left( \cdot \right) \) will do.
Let us show that this is a Euclidean domain, using the usual norm map \( N \) on complex numbers (see Example 7.5) for the size function. Let \( \alpha ,\beta \in \mathbb{Z}\left\lbrack i\right\rbrack \), with \( \beta \neq 0 \) . We want to show the existence of \( \kappa ,\rho \in \mathbb{Z}\left\lbrack i\right\rbrack \) such that \( \alpha = {\beta \kappa } + \rho \), where \( N\left( \rho \right) < N\left( \beta \right) \) .
Suppose that in the field \( \mathbb{C} \), we compute \( \alpha {\beta }^{-1} = r + {si} \), where \( r, s \in \mathbb{Q} \) . Let \( m, n \) be integers such that \( \left| {m - r}\right| \leq 1/2 \) and \( \left| {n - s}\right| \leq 1/2 \) (such integers \( m \) and \( n \) always exist, but may not be uniquely determined). Set \( \kappa \mathrel{\text{:=}} m + {ni} \in \mathbb{Z}\left\lbrack i\right\rbrack \) and \( \rho \mathrel{\text{:=}} \alpha - {\beta \kappa } \) . Then we have

\[ \alpha {\beta }^{-1} = \kappa + \delta \]

where \( \delta \in \mathbb{C} \) with \( N\left( \delta \right) \leq 1/4 + 1/4 = 1/2 \), and

\[ \rho = \alpha - {\beta \kappa } = \alpha - \beta \left( {\alpha {\beta }^{-1} - \delta }\right) = {\delta \beta }, \]

and hence

\[ N\left( \rho \right) = N\left( {\delta \beta }\right) = N\left( \delta \right) N\left( \beta \right) \leq \frac{1}{2}N\left( \beta \right) . \]
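The rounding argument above translates directly into code. This is a sketch with hypothetical helpers (Gaussian integers as pairs \( \left( {a, b}\right) \) meaning \( a + {bi} \) ), using integer floor division to pick an integer within \( 1/2 \) of each rational coordinate of \( \alpha {\beta }^{-1} \):

```python
def norm(z):
    a, b = z
    return a * a + b * b

def gauss_divmod(alpha, beta):
    """Division with remainder in Z[i]: returns (kappa, rho) with
    alpha = beta*kappa + rho and N(rho) <= N(beta)/2."""
    a, b = alpha
    c, d = beta
    n = c * c + d * d                     # N(beta) > 0
    # alpha/beta = r + s*i with r = (a*c + b*d)/n, s = (b*c - a*d)/n
    rn, sn = a * c + b * d, b * c - a * d
    m = (2 * rn + n) // (2 * n)           # an integer within 1/2 of r
    k = (2 * sn + n) // (2 * n)           # an integer within 1/2 of s
    kappa = (m, k)
    # rho = alpha - beta*kappa, with beta*kappa = (c*m - d*k) + (c*k + d*m)i
    rho = (a - (c * m - d * k), b - (c * k + d * m))
    return kappa, rho

kappa, rho = gauss_divmod((7, 3), (2, 1))
assert 2 * norm(rho) <= norm((2, 1))
```

The formula `(2*t + n) // (2*n)` computes \( \lfloor t/n + 1/2\rfloor \) for \( n > 0 \), which is a nearest integer to \( t/n \).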
Theorem 16.36. If \( D \) is a Euclidean domain and \( I \) is an ideal of \( D \), then there exists \( d \in D \) such that \( I = {dD} \) .
Proof. If \( I = \{ 0\} \), then \( d = 0 \) does the job, so let us assume that \( I \neq \{ 0\} \) . Let \( d \) be any non-zero element of \( I \) such that \( S\left( d\right) \) is minimal, where \( S \) is a size function that makes \( D \) into a Euclidean domain. We claim that \( I = {dD} \) .

It will suffice to show that for all \( c \in I \), we have \( d \mid c \) . Now, we know that there exist \( q, r \in D \) such that \( c = {dq} + r \), where either \( r = 0 \) or \( S\left( r\right) < S\left( d\right) \) . If \( r = 0 \), we are done; otherwise, \( r = c - {dq} \) is a non-zero element of \( I \) (since \( c \in I \) and \( {dq} \in I \) ) with \( S\left( r\right) < S\left( d\right) \) , contradicting the minimality of \( S\left( d\right) \) .
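In the Euclidean domain \( \mathbb{Z} \) (with size function \( \left| \cdot \right| \) ), the generator promised by Theorem 16.36 for an ideal \( a\mathbb{Z} + b\mathbb{Z} \) can be found exactly as the proof suggests: keep replacing elements by remainders until the element of minimal size emerges. This is just Euclid's algorithm (a sketch, not from the text):

```python
def ideal_generator(a, b):
    """Generator of the ideal aZ + bZ of Z, found by repeated
    division with remainder (Euclid's algorithm)."""
    while b != 0:
        a, b = b, a % b      # replace (a, b) by (b, a mod b)
    return abs(a)

assert ideal_generator(12, 18) == 6
```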
Theorem 16.38. If \( D \) is a PID, and \( {I}_{1} \subseteq {I}_{2} \subseteq \cdots \) are ideals of \( D \), then there exists an integer \( k \) such that \( {I}_{k} = {I}_{k + 1} = \cdots \) .
Proof. Let \( I \mathrel{\text{:=}} \mathop{\bigcup }\limits_{{i = 1}}^{\infty }{I}_{i} \), which is an ideal of \( D \) (see Exercise 7.37). Since \( D \) is a PID, we have \( I = {dD} \) for some \( d \in D \) . But \( d \in \mathop{\bigcup }\limits_{{i = 1}}^{\infty }{I}_{i} \) implies that \( d \in {I}_{k} \) for some \( k \), which shows that \( I = {dD} \subseteq {I}_{k} \) . It follows that \( I = {I}_{k} = {I}_{k + 1} = \cdots \) .