| Q | A | Result |
|---|---|---|
Theorem 7.17. Let \( R \) be a ring and let \( a \in R \) . Then \( {aR} \mathrel{\text{:=}} \{ {ar} : r \in R\} \) is an ideal of \( R \) .
|
Proof. This is an easy calculation. For all \( {ar}, a{r}^{\prime } \in {aR} \) and \( {r}^{\prime \prime } \in R \), we have \( {ar} + a{r}^{\prime } = a\left( {r + {r}^{\prime }}\right) \in {aR} \) and \( \left( {ar}\right) {r}^{\prime \prime } = a\left( {r{r}^{\prime \prime }}\right) \in {aR} \).
|
Yes
|
Theorem 7.18. If \( {I}_{1} \) and \( {I}_{2} \) are ideals of a ring \( R \), then so are \( {I}_{1} + {I}_{2} \) and \( {I}_{1} \cap {I}_{2} \) .
|
Proof. We already know that \( {I}_{1} + {I}_{2} \) and \( {I}_{1} \cap {I}_{2} \) are additive subgroups of \( R \), so it suffices to show that they are closed under multiplication by elements of \( R \) . The reader may easily verify that this is the case.
|
No
|
Let \( n \) be a positive integer, and let \( x \) be any integer. Define \( I \mathrel{\text{:=}} \{ g \in \mathbb{Z}\left\lbrack X\right\rbrack : g\left( x\right) \equiv 0\left( {\;\operatorname{mod}\;n}\right) \} \). We claim that \( I \) is the ideal \( \left( {X - x, n}\right) \) of \( \mathbb{Z}\left\lbrack X\right\rbrack \).
|
To see this, consider any fixed \( g \in \mathbb{Z}\left\lbrack X\right\rbrack \). Using Theorem 7.12, we have \( g = \left( {X - x}\right) q + g\left( x\right) \) for some \( q \in \mathbb{Z}\left\lbrack X\right\rbrack \). Using the division with remainder property for integers, we have \( g\left( x\right) = n{q}^{\prime } + r \) for some \( r \in \{ 0,\ldots, n - 1\} \) and \( {q}^{\prime } \in \mathbb{Z} \). Thus, \( g\left( x\right) \equiv r\left( {\;\operatorname{mod}\;n}\right) \), and if \( g\left( x\right) \equiv 0\left( {\;\operatorname{mod}\;n}\right) \), then we must have \( r = 0 \), and hence \( g = \left( {X - x}\right) q + n{q}^{\prime } \in \left( {X - x, n}\right) \). Conversely, if \( g \in \left( {X - x, n}\right) \), we can write \( g = \left( {X - x}\right) q + n{q}^{\prime } \) for some \( q,{q}^{\prime } \in \mathbb{Z}\left\lbrack X\right\rbrack \), and from this, it is clear that \( g\left( x\right) = n{q}^{\prime }\left( x\right) \equiv 0\left( {\;\operatorname{mod}\;n}\right) \).
|
Yes
|
Theorem 7.19. Suppose \( I \) is an ideal of a ring \( R \) . For all \( a,{a}^{\prime }, b,{b}^{\prime } \in R \), if \( a \equiv {a}^{\prime }\left( {\;\operatorname{mod}\;I}\right) \) and \( b \equiv {b}^{\prime }\left( {\;\operatorname{mod}\;I}\right) \), then \( {ab} \equiv {a}^{\prime }{b}^{\prime }\left( {\;\operatorname{mod}\;I}\right) \) .
|
Proof. If \( a = {a}^{\prime } + x \) for some \( x \in I \) and \( b = {b}^{\prime } + y \) for some \( y \in I \), then \( {ab} = {a}^{\prime }{b}^{\prime } + {a}^{\prime }y + {b}^{\prime }x + {xy} \) . Since \( I \) is closed under multiplication by elements of \( R \) , we see that \( {a}^{\prime }y,{b}^{\prime }x,{xy} \in I \), and since \( I \) is closed under addition, \( {a}^{\prime }y + {b}^{\prime }x + {xy} \in I \) . Hence, \( {ab} - {a}^{\prime }{b}^{\prime } \in I \) .
|
Yes
|
Let \( f \) be a polynomial over a ring \( R \) with \( \deg \left( f\right) = \ell \geq 0 \) and \( \operatorname{lc}\left( f\right) \in {R}^{ * } \), and consider the quotient ring \( E \mathrel{\text{:=}} R\left\lbrack X\right\rbrack /{fR}\left\lbrack X\right\rbrack \) . By the division with remainder property for polynomials (Theorem 7.10), for every \( g \in R\left\lbrack X\right\rbrack \) , there exists a unique polynomial \( h \in R\left\lbrack X\right\rbrack \) such that \( g \equiv h\left( {\;\operatorname{mod}\;f}\right) \) and \( \deg \left( h\right) < \ell \) .
|
From this, it follows that every element of \( E \) can be written uniquely as \( {\left\lbrack h\right\rbrack }_{f} \), where \( h \in R\left\lbrack X\right\rbrack \) is a polynomial of degree less than \( \ell \) . Note that in this situation, we will generally prefer the more compact notation \( R\left\lbrack X\right\rbrack /\left( f\right) \), instead of \( R\left\lbrack X\right\rbrack /{fR}\left\lbrack X\right\rbrack \) .
|
Yes
|
Consider the polynomial \( f \mathrel{\text{:=}} {X}^{2} + X + 1 \in {\mathbb{Z}}_{2}\left\lbrack X\right\rbrack \) and the quotient ring \( E \mathrel{\text{:=}} {\mathbb{Z}}_{2}\left\lbrack X\right\rbrack /\left( f\right) \) . Let us name the elements of \( E \) as follows:\n\n\[ \n{00} \mathrel{\text{:=}} {\left\lbrack 0\right\rbrack }_{f},{01} \mathrel{\text{:=}} {\left\lbrack 1\right\rbrack }_{f},{10} \mathrel{\text{:=}} {\left\lbrack X\right\rbrack }_{f},{11} \mathrel{\text{:=}} {\left\lbrack X + 1\right\rbrack }_{f}. \n\]\n\nWith this naming convention, addition of two elements in \( E \) corresponds to just computing the bit-wise exclusive-or of their names. More precisely, the addition table for \( E \) is the following:\n\n<table><thead><tr><th>\( + \)</th><th>00</th><th>01</th><th>10</th><th>11</th></tr></thead><tr><td>00</td><td>00</td><td>01</td><td>10</td><td>11</td></tr><tr><td>01</td><td>01</td><td>00</td><td>11</td><td>10</td></tr><tr><td>10</td><td>10</td><td>11</td><td>00</td><td>01</td></tr><tr><td>11</td><td>11</td><td>10</td><td>01</td><td>00</td></tr></table>\n\nNote that 00 acts as the additive identity for \( E \), and that as an additive group, \( E \) is isomorphic to the additive group \( {\mathbb{Z}}_{2} \times {\mathbb{Z}}_{2} \) .\n\nAs for multiplication in \( E \), one has to compute the product of two polynomials, and then reduce modulo \( f \) . For example, to compute \( {10} \cdot {11} \), using the identity \( {X}^{2} \equiv X + 1\left( {\;\operatorname{mod}\;f}\right) \), one sees that\n\n\[ \nX \cdot \left( {X + 1}\right) \equiv {X}^{2} + X \equiv \left( {X + 1}\right) + X \equiv 1\left( {\;\operatorname{mod}\;f}\right) ; \n\]\n\nthus, \( {10} \cdot {11} = {01} \) . 
The reader may verify the following multiplication table for \( E \) :\n\n\[ \n\begin{matrix} \cdot & {00} & {01} & {10} & {11} \\ {00} & {00} & {00} & {00} & {00} \\ {01} & {00} & {01} & {10} & {11} \\ {10} & {00} & {10} & {11} & {01} \\ {11} & {00} & {11} & {01} & {10} \end{matrix} \n\]\n\nObserve that 01 acts as the multiplicative identity for \( E \) . Notice that every non-zero element of \( E \) has a multiplicative inverse, and so \( E \) is in fact a field. Observe that \( {E}^{ * } \) is cyclic: the reader may verify that both 10 and 11 have multiplicative order 3 .
|
This is the first example we have seen of a finite field whose cardinality is not prime.
|
No
|
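The tables above can be reproduced mechanically. The following is a minimal sketch (not from the source) that represents each element of \( E = {\mathbb{Z}}_{2}\left\lbrack X\right\rbrack /\left( {{X}^{2} + X + 1}\right) \) by its 2-bit name and reduces products using the identity \( {X}^{2} \equiv X + 1\left( {\;\operatorname{mod}\;f}\right) \):

```python
# Sketch: arithmetic in E = Z_2[X]/(X^2 + X + 1).
# An element b1*X + b0 is encoded as the 2-bit integer (b1 b0),
# matching the names 00, 01, 10, 11 used in the text.

def add(a, b):
    # Coefficient-wise addition mod 2 is bitwise XOR of the names.
    return a ^ b

def mul(a, b):
    a1, a0 = a >> 1, a & 1
    b1, b0 = b >> 1, b & 1
    c2 = a1 * b1                    # coefficient of X^2 before reduction
    c1 = (a1 * b0 + a0 * b1) % 2    # coefficient of X
    c0 = a0 * b0                    # constant coefficient
    # reduce: X^2 = X + 1 (mod f), so fold c2 into both lower coefficients
    c1 = (c1 + c2) % 2
    c0 = (c0 + c2) % 2
    return (c1 << 1) | c0

# print the multiplication table row by row
for a in range(4):
    print([format(mul(a, b), '02b') for b in range(4)])
```

Running this reproduces the multiplication table, and one can check directly that 10 and 11 each have multiplicative order 3.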
Suppose \( I \) is an ideal of a ring \( R \) . Analogous to Example 6.36, we may define the natural map from the ring \( R \) to the quotient ring \( R/I \) as follows:\n\n\[ \rho : \;R \rightarrow R/I \]\n\n\[ a \mapsto {\left\lbrack a\right\rbrack }_{I}. \]
|
Not only is this a surjective homomorphism of additive groups, with kernel \( I \), it is a ring homomorphism. Indeed, we have\n\n\[ \rho \left( {ab}\right) = {\left\lbrack ab\right\rbrack }_{I} = {\left\lbrack a\right\rbrack }_{I} \cdot {\left\lbrack b\right\rbrack }_{I} = \rho \left( a\right) \cdot \rho \left( b\right) ,\]\n\nand \( \rho \left( {1}_{R}\right) = {\left\lbrack {1}_{R}\right\rbrack }_{I} \), which is the multiplicative identity in \( R/I \) .
|
Yes
|
Let \( R \) be a subring of a ring \( E \), and fix \( \alpha \in E \). The polynomial evaluation map\n\n\[ \rho : \;R\left\lbrack X\right\rbrack \rightarrow E \]\n\n\[ g \mapsto g\left( \alpha \right) \]\n\nis a ring homomorphism.
|
The image of \( \rho \) consists of all polynomial expressions in \( \alpha \) with coefficients in \( R \), and is denoted \( R\left\lbrack \alpha \right\rbrack \). As the reader may verify, \( R\left\lbrack \alpha \right\rbrack \) is a subring of \( E \) containing \( \alpha \) and all of \( R \), and is the smallest such subring of \( E \).
|
No
|
Let \( \rho : R \rightarrow {R}^{\prime } \) be a ring homomorphism. We can extend the domain of definition of \( \rho \) from \( R \) to \( R\left\lbrack X\right\rbrack \) by defining \( \rho \left( {\mathop{\sum }\limits_{i}{a}_{i}{X}^{i}}\right) \mathrel{\text{:=}} \mathop{\sum }\limits_{i}\rho \left( {a}_{i}\right) {X}^{i} \) . This yields a ring homomorphism from \( R\left\lbrack X\right\rbrack \) into \( {R}^{\prime }\left\lbrack X\right\rbrack \) .
|
To verify this, suppose \( g = \mathop{\sum }\limits_{i}{a}_{i}{X}^{i} \) and \( h = \mathop{\sum }\limits_{i}{b}_{i}{X}^{i} \) are polynomials in \( R\left\lbrack X\right\rbrack \) . Let \( s \mathrel{\text{:=}} g + h \in R\left\lbrack X\right\rbrack \) and \( p \mathrel{\text{:=}} {gh} \in R\left\lbrack X\right\rbrack \), and write \( s = \mathop{\sum }\limits_{i}{s}_{i}{X}^{i} \) and \( p = \mathop{\sum }\limits_{i}{p}_{i}{X}^{i} \), so that\n\n\[ \n{s}_{i} = {a}_{i} + {b}_{i}\text{ and }{p}_{i} = \mathop{\sum }\limits_{{i = j + k}}{a}_{j}{b}_{k} \n\]\n\nThen we have\n\n\[ \n\rho \left( {s}_{i}\right) = \rho \left( {{a}_{i} + {b}_{i}}\right) = \rho \left( {a}_{i}\right) + \rho \left( {b}_{i}\right) , \n\]\n\nwhich is the coefficient of \( {X}^{i} \) in \( \rho \left( g\right) + \rho \left( h\right) \), and\n\n\[ \n\rho \left( {p}_{i}\right) = \rho \left( {\mathop{\sum }\limits_{{i = j + k}}{a}_{j}{b}_{k}}\right) = \mathop{\sum }\limits_{{i = j + k}}\rho \left( {{a}_{j}{b}_{k}}\right) = \mathop{\sum }\limits_{{i = j + k}}\rho \left( {a}_{j}\right) \rho \left( {b}_{k}\right) , \n\]\n\nwhich is the coefficient of \( {X}^{i} \) in \( \rho \left( g\right) \rho \left( h\right) \) .
|
Yes
|
Consider the natural map that sends \( a \in \mathbb{Z} \) to \( \bar{a} \mathrel{\text{:=}} {\left\lbrack a\right\rbrack }_{n} \in {\mathbb{Z}}_{n} \) (see Example 7.43). As in the previous example, we may extend this to a ring homomorphism from \( \mathbb{Z}\left\lbrack X\right\rbrack \) to \( {\mathbb{Z}}_{n}\left\lbrack X\right\rbrack \) that sends \( g = \mathop{\sum }\limits_{i}{a}_{i}{X}^{i} \in \mathbb{Z}\left\lbrack X\right\rbrack \) to \( \bar{g} = \mathop{\sum }\limits_{i}{\bar{a}}_{i}{X}^{i} \in {\mathbb{Z}}_{n}\left\lbrack X\right\rbrack . \) This homomorphism is clearly surjective. Let us determine its kernel.
|
Observe that if \( g = \mathop{\sum }\limits_{i}{a}_{i}{X}^{i} \), then \( \bar{g} = 0 \) if and only if \( n \mid {a}_{i} \) for each \( i \) ; therefore, the kernel is the ideal \( n\mathbb{Z}\left\lbrack X\right\rbrack \) of \( \mathbb{Z}\left\lbrack X\right\rbrack \) .
|
Yes
|
Let \( R \) be a ring of prime characteristic \( p \) . For all \( a, b \in R \), we have (see Exercise 7.1)\n\n\[ \n{\left( a + b\right) }^{p} = \mathop{\sum }\limits_{{k = 0}}^{p}\left( \begin{array}{l} p \\ k \end{array}\right) {a}^{p - k}{b}^{k} \n\]
|
However, by Exercise 1.14, all of the binomial coefficients are multiples of \( p \) , except for \( k = 0 \) and \( k = p \), and hence in the ring \( R \), all of these terms vanish, leaving us with\n\n\[ \n{\left( a + b\right) }^{p} = {a}^{p} + {b}^{p}. \n\]
|
Yes
|
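The identity \( {\left( a + b\right) }^{p} = {a}^{p} + {b}^{p} \) can be spot-checked by brute force in the rings \( {\mathbb{Z}}_{p} \), which have characteristic \( p \). A small sketch (the choice of primes is arbitrary):

```python
# Sketch: verify the Frobenius identity (a + b)^p = a^p + b^p in Z_p.

def frobenius_holds(p):
    return all(pow(a + b, p, p) == (pow(a, p, p) + pow(b, p, p)) % p
               for a in range(p) for b in range(p))

for p in (2, 3, 5, 7, 11):
    assert frobenius_holds(p)
print("Frobenius identity verified for p = 2, 3, 5, 7, 11")
```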
In special situations, part (iii) of Definition 7.20 may be redundant. One such situation arises when \( \rho : R \rightarrow {R}^{\prime } \) is surjective.
|
In this case, we know that \( {1}_{{R}^{\prime }} = \rho \left( a\right) \) for some \( a \in R \), and by part (ii) of the definition, we have\n\n\[ \rho \left( {1}_{R}\right) = \rho \left( {1}_{R}\right) \cdot {1}_{{R}^{\prime }} = \rho \left( {1}_{R}\right) \rho \left( a\right) = \rho \left( {{1}_{R} \cdot a}\right) = \rho \left( a\right) = {1}_{{R}^{\prime }}. \]
|
Yes
|
Theorem 7.21. Let \( \rho : R \rightarrow {R}^{\prime } \) be a ring homomorphism.\n\n(i) If \( S \) is a subring of \( R \), then \( \rho \left( S\right) \) is a subring of \( {R}^{\prime } \) ; in particular (setting \( S \mathrel{\text{:=}} R),\operatorname{Im}\rho \) is a subring of \( {R}^{\prime } \) .
|
Proof. In each part, we already know that the relevant object is an additive subgroup, and so it suffices to show that the appropriate additional properties are satisfied.\n\n(i) For all \( a, b \in S \), we have \( {ab} \in S \), and hence \( \rho \left( S\right) \) contains \( \rho \left( {ab}\right) = \rho \left( a\right) \rho \left( b\right) \) . Also, \( {1}_{R} \in S \), and hence \( \rho \left( S\right) \) contains \( \rho \left( {1}_{R}\right) = {1}_{{R}^{\prime }} \) .
|
Yes
|
Theorem 7.24. If \( \rho \) is a ring isomorphism of \( R \) with \( {R}^{\prime } \), then the inverse function \( {\rho }^{-1} \) is a ring isomorphism of \( {R}^{\prime } \) with \( R \) .
|
Proof. Exercise.
|
No
|
Theorem 7.25. Let \( \rho : R \rightarrow {R}^{\prime } \) be a ring isomorphism.\n\n(i) For all \( a \in R, a \) is a zero divisor if and only if \( \rho \left( a\right) \) is a zero divisor.\n\n(ii) For all \( a \in R, a \) is a unit if and only if \( \rho \left( a\right) \) is a unit.\n\n(iii) The restriction of \( \rho \) to \( {R}^{ * } \) is a group isomorphism of \( {R}^{ * } \) with \( {\left( {R}^{\prime }\right) }^{ * } \) .
|
Proof. Exercise.
|
No
|
Theorem 7.26 (First isomorphism theorem). Let \( \rho : R \rightarrow {R}^{\prime } \) be a ring homomorphism with kernel \( K \) and image \( {S}^{\prime } \) . Then we have a ring isomorphism\n\n\[ R/K \cong {S}^{\prime }\text{.} \]
|
Specifically, the map\n\n\[ \bar{\rho } : \;R/K \rightarrow {R}^{\prime } \]\n\n\[ {\left\lbrack a\right\rbrack }_{K} \mapsto \rho \left( a\right) \]\n\nis an injective ring homomorphism whose image is \( {S}^{\prime } \) .
|
Yes
|
Returning again to the Chinese remainder theorem and the discussion in Example 6.48, if \( {\left\{ {n}_{i}\right\} }_{i = 1}^{k} \) is a pairwise relatively prime family of positive integers, and \( n \mathrel{\text{:=}} \mathop{\prod }\limits_{{i = 1}}^{k}{n}_{i} \), then the map \[ \rho : \;\mathbb{Z} \rightarrow {\mathbb{Z}}_{{n}_{1}} \times \cdots \times {\mathbb{Z}}_{{n}_{k}} \] \[ a \mapsto \left( {{\left\lbrack a\right\rbrack }_{{n}_{1}},\ldots ,{\left\lbrack a\right\rbrack }_{{n}_{k}}}\right) \] is not just a surjective group homomorphism with kernel \( n\mathbb{Z} \), it is also a ring homomorphism.
|
Applying Theorem 7.26, we get a ring isomorphism \[ \bar{\rho } : \;{\mathbb{Z}}_{n} \rightarrow {\mathbb{Z}}_{{n}_{1}} \times \cdots \times {\mathbb{Z}}_{{n}_{k}} \] \[ {\left\lbrack a\right\rbrack }_{n} \mapsto \left( {{\left\lbrack a\right\rbrack }_{{n}_{1}},\ldots ,{\left\lbrack a\right\rbrack }_{{n}_{k}}}\right) , \] which is the same function as the function \( \theta \) in Theorem 2.8. By part (iii) of Theorem 7.25, the restriction of \( \theta \) to \( {\mathbb{Z}}_{n}^{ * } \) is a group isomorphism of \( {\mathbb{Z}}_{n}^{ * } \) with the multiplicative group of units of \( {\mathbb{Z}}_{{n}_{1}} \times \cdots \times {\mathbb{Z}}_{{n}_{k}} \), which (according to Example 7.15) is \( {\mathbb{Z}}_{{n}_{1}}^{ * } \times \cdots \times {\mathbb{Z}}_{{n}_{k}}^{ * } \) . Thus, part (iii) of Theorem 2.8 is an immediate consequence of the above observations.
|
Yes
|
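For a concrete instance, the following sketch (primes 3, 5, 7 chosen for illustration) checks that the map \( a \mapsto \left( {{\left\lbrack a\right\rbrack }_{{n}_{1}},\ldots ,{\left\lbrack a\right\rbrack }_{{n}_{k}}}\right) \) on \( {\mathbb{Z}}_{n} \) is a bijection onto the product and respects multiplication:

```python
# Sketch: the CRT map Z_105 -> Z_3 x Z_5 x Z_7 is a ring isomorphism.

from itertools import product

moduli = (3, 5, 7)
n = 3 * 5 * 7

def rho(a):
    return tuple(a % m for m in moduli)

images = {rho(a) for a in range(n)}
assert len(images) == n                                       # injective on Z_n
assert images == set(product(range(3), range(5), range(7)))   # surjective

# multiplicative property holds componentwise
for a in (2, 17, 52):
    for b in (9, 33, 104):
        assert rho(a * b % n) == tuple(x * y % m
                                       for x, y, m in zip(rho(a), rho(b), moduli))
print("CRT isomorphism verified for n = 105")
```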
For a ring \( R \), consider the map \( \rho : \mathbb{Z} \rightarrow R \) that sends \( m \in \mathbb{Z} \) to \( m \cdot {1}_{R} \) in \( R \). It is easily verified that \( \rho \) is a ring homomorphism. Since \( \operatorname{Ker}\rho \) is an ideal of \( \mathbb{Z} \), it is either \( \{ 0\} \) or of the form \( n\mathbb{Z} \) for some \( n > 0 \).
|
In the first case, if \( \operatorname{Ker}\rho = \{ 0\} \), then \( \operatorname{Im}\rho \cong \mathbb{Z} \), and so the ring \( \mathbb{Z} \) is embedded in \( R \), and \( R \) has characteristic zero. In the second case, if \( \operatorname{Ker}\rho = n\mathbb{Z} \) for some \( n > 0 \), then by Theorem 7.26, \( \operatorname{Im}\rho \cong {\mathbb{Z}}_{n} \), and so the ring \( {\mathbb{Z}}_{n} \) is embedded in \( R \), and \( R \) has characteristic \( n \).
|
Yes
|
We can generalize Example 7.44 by evaluating polynomials at several points. This is most fruitful when the underlying coefficient ring is a field, and the evaluation points belong to the same field. So let \( F \) be a field, and let \( {x}_{1},\ldots ,{x}_{k} \) be distinct elements of \( F \) . Define the map\n\n\[ \rho : \;F\left\lbrack X\right\rbrack \rightarrow {F}^{\times k} \]\n\n\[ g \mapsto \left( {g\left( {x}_{1}\right) ,\ldots, g\left( {x}_{k}\right) }\right) . \]
|
This is a ring homomorphism (as seen by applying Theorem 7.23 to the polynomial evaluation maps at the points \( \left. {{x}_{1},\ldots ,{x}_{k}}\right) \) . By Theorem 7.13, \( \operatorname{Ker}\rho = \left( f\right) \), where \( f \mathrel{\text{:=}} \mathop{\prod }\limits_{{i = 1}}^{k}\left( {X - {x}_{i}}\right) \) . By Theorem 7.15, \( \rho \) is surjective. Therefore, by Theorem 7.26, we get a ring isomorphism\n\n\[ \bar{\rho } : \;F\left\lbrack X\right\rbrack /\left( f\right) \rightarrow {F}^{\times k} \]\n\n\[ {\left\lbrack g\right\rbrack }_{f} \mapsto \left( {g\left( {x}_{1}\right) ,\ldots, g\left( {x}_{k}\right) }\right) . \]
|
Yes
|
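A small numerical sketch of this isomorphism (with \( F = {\mathbb{Z}}_{5} \) and three arbitrarily chosen distinct points): since each residue class modulo \( f \) has a unique representative of degree less than \( k \), it suffices to check that evaluation at the points maps the \( {p}^{k} \) such representatives bijectively onto \( {F}^{\times k} \).

```python
# Sketch: over F = Z_5, evaluation at distinct points x1, x2, x3 maps the
# 125 polynomials of degree < 3 bijectively onto F^3.

from itertools import product

p = 5
points = (0, 2, 3)   # distinct elements of Z_5 (arbitrary choice)

def evaluate(coeffs, x):
    # coeffs = (c0, c1, c2) represents c0 + c1*X + c2*X^2
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

images = {tuple(evaluate(g, x) for x in points)
          for g in product(range(p), repeat=3)}
assert len(images) == p ** 3   # injective, hence bijective onto F^3
print("evaluation map is a bijection onto F^3")
```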
As in Example 7.39, let \( f \) be a polynomial over a ring \( R \) with \( \deg \left( f\right) = \ell \) and \( \operatorname{lc}\left( f\right) \in {R}^{ * } \), but now assume that \( \ell > 0 \) . Consider the natural map \( \rho \) from \( R\left\lbrack X\right\rbrack \) to the quotient ring \( E \mathrel{\text{:=}} R\left\lbrack X\right\rbrack /\left( f\right) \) that sends \( g \in R\left\lbrack X\right\rbrack \) to \( {\left\lbrack g\right\rbrack }_{f} \) . Let \( \tau \) be the restriction of \( \rho \) to the subring \( R \) of \( R\left\lbrack X\right\rbrack \) . Evidently, \( \tau \) is a ring homomorphism from \( R \) into \( E \) . Moreover, since distinct polynomials of degree less than \( \ell \) belong to distinct residue classes modulo \( f \), we see that \( \tau \) is injective. Thus, \( \tau \) is an embedding of \( R \) into \( E \) . As \( \tau \) is a very natural embedding, we can identify elements of \( R \) with their images in \( E \) under \( \tau \), and regard \( R \) as a subring of \( E \) . Taking this point of view, we see that if \( g = \mathop{\sum }\limits_{i}{a}_{i}{X}^{i} \), then
|
\n\[
{\left\lbrack g\right\rbrack }_{f} = {\left\lbrack \mathop{\sum }\limits_{i}{a}_{i}{X}^{i}\right\rbrack }_{f} = \mathop{\sum }\limits_{i}{\left\lbrack {a}_{i}\right\rbrack }_{f}{\left( {\left\lbrack X\right\rbrack }_{f}\right) }^{i} = \mathop{\sum }\limits_{i}{a}_{i}{\xi }^{i} = g\left( \xi \right) ,
\]
where \( \xi \mathrel{\text{:=}} {\left\lbrack X\right\rbrack }_{f} \in E \) . Therefore, the natural map \( \rho \) may be viewed as the polynomial evaluation map (see Example 7.44) that sends \( g \in R\left\lbrack X\right\rbrack \) to \( g\left( \xi \right) \in E \) .
|
Yes
|
As a special case of Example 7.55, let \( f \mathrel{\text{:=}} {X}^{2} + 1 \in \mathbb{R}\left\lbrack X\right\rbrack \) , and consider the quotient ring \( \mathbb{R}\left\lbrack X\right\rbrack /\left( f\right) \) . If we set \( i \mathrel{\text{:=}} {\left\lbrack X\right\rbrack }_{f} \in \mathbb{R}\left\lbrack X\right\rbrack /\left( f\right) \), then every element of \( \mathbb{R}\left\lbrack X\right\rbrack /\left( f\right) \) can be expressed uniquely as \( a + {bi} \), where \( a, b \in \mathbb{R} \) . Moreover, we have \( {i}^{2} = - 1 \), and more generally, for all \( a, b,{a}^{\prime },{b}^{\prime } \in \mathbb{R} \), we have
|
\[
\left( {a + {bi}}\right) + \left( {{a}^{\prime } + {b}^{\prime }i}\right) = \left( {a + {a}^{\prime }}\right) + \left( {b + {b}^{\prime }}\right) i
\]
and
\[
\left( {a + {bi}}\right) \cdot \left( {{a}^{\prime } + {b}^{\prime }i}\right) = \left( {a{a}^{\prime } - b{b}^{\prime }}\right) + \left( {a{b}^{\prime } + {a}^{\prime }b}\right) i.
\]
Thus, the rules for arithmetic in \( \mathbb{R}\left\lbrack X\right\rbrack /\left( f\right) \) are precisely the familiar rules of complex arithmetic, and so \( \mathbb{C} \) and \( \mathbb{R}\left\lbrack X\right\rbrack /\left( f\right) \) are essentially the same, as rings.
|
Yes
|
Consider the polynomial evaluation map\n\n\[ \rho : \;\mathbb{R}\left\lbrack X\right\rbrack \rightarrow \mathbb{C} = \mathbb{R}\left\lbrack X\right\rbrack /\left( {{X}^{2} + 1}\right) \]\n\n\[ g \mapsto g\left( {-i}\right) \text{.} \]
|
For every \( g \in \mathbb{R}\left\lbrack X\right\rbrack \), we may write \( g = \left( {{X}^{2} + 1}\right) q + a + {bX} \), where \( q \in \mathbb{R}\left\lbrack X\right\rbrack \) and \( a, b \in \mathbb{R} \) . Since \( {\left( -i\right) }^{2} + 1 = {i}^{2} + 1 = 0 \), we have\n\n\[ g\left( {-i}\right) = \left( {{\left( -i\right) }^{2} + 1}\right) q\left( {-i}\right) + a - {bi} = a - {bi}. \]\n\nClearly, then, \( \rho \) is surjective and the kernel of \( \rho \) is the ideal of \( \mathbb{R}\left\lbrack X\right\rbrack \) generated by the polynomial \( {X}^{2} + 1 \) . By Theorem 7.26, we therefore get a ring automorphism \( \bar{\rho } \) on \( \mathbb{C} \) that sends \( a + {bi} \in \mathbb{C} \) to \( a - {bi} \) . In fact, \( \bar{\rho } \) is none other than the complex conjugation map.
|
Yes
|
We defined the ring \( \mathbb{Z}\left\lbrack i\right\rbrack \) of Gaussian integers in Example 7.25 as a subring of \( \mathbb{C} \). Let us verify that the notation \( \mathbb{Z}\left\lbrack i\right\rbrack \) introduced in Example 7.25 is consistent with that introduced in Example 7.44.
|
Consider the polynomial evaluation map \( \rho : \mathbb{Z}\left\lbrack X\right\rbrack \rightarrow \mathbb{C} \) that sends \( g \in \mathbb{Z}\left\lbrack X\right\rbrack \) to \( g\left( i\right) \in \mathbb{C} \). For every \( g \in \mathbb{Z}\left\lbrack X\right\rbrack \), we may write \( g = \left( {{X}^{2} + 1}\right) q + a + {bX} \), where \( q \in \mathbb{Z}\left\lbrack X\right\rbrack \) and \( a, b \in \mathbb{Z} \). Since \( {i}^{2} + 1 = 0 \), we have \( g\left( i\right) = \left( {{i}^{2} + 1}\right) q\left( i\right) + a + {bi} = a + {bi} \). Clearly, then, the image of \( \rho \) is the set \( \{ a + {bi} : a, b \in \mathbb{Z}\} \), and the kernel of \( \rho \) is the ideal of \( \mathbb{Z}\left\lbrack X\right\rbrack \) generated by the polynomial \( {X}^{2} + 1 \). This shows that \( \mathbb{Z}\left\lbrack i\right\rbrack \) in Example 7.25 is the same as \( \mathbb{Z}\left\lbrack i\right\rbrack \) in Example 7.44, and moreover, Theorem 7.26 implies that \( \mathbb{Z}\left\lbrack i\right\rbrack \) is isomorphic to \( \mathbb{Z}\left\lbrack X\right\rbrack /\left( {{X}^{2} + 1}\right) \).
|
Yes
|
Let \( p \) be a prime, and consider the quotient ring \( E \mathrel{\text{:=}} {\mathbb{Z}}_{p}\left\lbrack X\right\rbrack /\left( f\right) \), where \( f \mathrel{\text{:=}} {X}^{2} + 1 \) . If we set \( i \mathrel{\text{:=}} {\left\lbrack X\right\rbrack }_{f} \in E \), then \( E = {\mathbb{Z}}_{p}\left\lbrack i\right\rbrack = \left\{ {a + {bi} : a, b \in {\mathbb{Z}}_{p}}\right\} \) . In particular, \( E \) is a ring of cardinality \( {p}^{2} \) . Moreover, we have \( {i}^{2} = - 1 \), and the rules for addition and multiplication in \( E \) look exactly the same as they do in \( \mathbb{C} \) : for all \( a, b,{a}^{\prime },{b}^{\prime } \in {\mathbb{Z}}_{p} \), we have\n\n\[ \left( {a + {bi}}\right) + \left( {{a}^{\prime } + {b}^{\prime }i}\right) = \left( {a + {a}^{\prime }}\right) + \left( {b + {b}^{\prime }}\right) i \]\n\nand\n\n\[ \left( {a + {bi}}\right) \cdot \left( {{a}^{\prime } + {b}^{\prime }i}\right) = \left( {a{a}^{\prime } - b{b}^{\prime }}\right) + \left( {a{b}^{\prime } + {a}^{\prime }b}\right) i. \]\n\nThe ring \( E \) may or may not be a field. We now determine for which primes \( p \) we get a field.
|
If \( p = 2 \), then \( 0 = 1 + {i}^{2} = {\left( 1 + i\right) }^{2} \) (see Example 7.48), and so in this case, \( 1 + i \) is a zero divisor and \( E \) is not a field.\n\nNow suppose \( p \) is odd. There are two subcases to consider: \( p \equiv 1\left( {\;\operatorname{mod}\;4}\right) \) and \( p \equiv 3\left( {\;\operatorname{mod}\;4}\right) \).\n\nSuppose \( p \equiv 1\left( {\;\operatorname{mod}\;4}\right) \) . By Theorem 2.31, there exists \( c \in {\mathbb{Z}}_{p} \) such that \( {c}^{2} = - 1 \), and therefore \( f = {X}^{2} + 1 = {X}^{2} - {c}^{2} = \left( {X - c}\right) \left( {X + c}\right) \), and by Example 7.45, we have a ring isomorphism \( E \cong {\mathbb{Z}}_{p} \times {\mathbb{Z}}_{p} \) (which maps \( a + {bi} \in E \) to \( \left( {a + {bc}, a - {bc}}\right) \in {\mathbb{Z}}_{p} \times {\mathbb{Z}}_{p} \) ); in particular, \( E \) is not a field. Indeed, \( c + i \) is a zero divisor, since \( \left( {c + i}\right) \left( {c - i}\right) = {c}^{2} - {i}^{2} = {c}^{2} + 1 = 0 \).\n\nSuppose \( p \equiv 3\left( {\;\operatorname{mod}\;4}\right) \) . By Theorem 2.31, there is no \( c \in {\mathbb{Z}}_{p} \) such that \( {c}^{2} = - 1 \) . It follows that for all \( a, b \in {\mathbb{Z}}_{p} \), not both zero, we must have \( {a}^{2} + {b}^{2} \neq 0 \) ; indeed, suppose that \( {a}^{2} + {b}^{2} = 0 \), and that, say, \( b \neq 0 \) ; then we would have \( {\left( a/b\right) }^{2} = - 1 \) , contradicting the assumption that -1 has no square root in \( {\mathbb{Z}}_{p} \) . Therefore, \( {a}^{2} + {b}^{2} \) has a multiplicative inverse in \( {\mathbb{Z}}_{p} \), from which it follows that the formula for multiplicative inverses in \( \mathbb{C} \) applies equally well in \( E \) ; that is,\n\n\[ {\left( a + bi\right) }^{-1} = \frac{a - {bi}}{{a}^{2} + {b}^{2}} \]\n\nTherefore, in this case, \( E \) is a field.
|
Yes
|
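This case analysis is easy to confirm numerically. The sketch below (primes chosen for illustration) uses the criterion implicit in the argument above: \( a + {bi} \) is invertible if and only if \( {a}^{2} + {b}^{2} \neq 0 \) in \( {\mathbb{Z}}_{p} \), so \( E \) is a field exactly when \( {a}^{2} + {b}^{2} \neq 0 \) for all \( \left( {a, b}\right) \neq \left( {0,0}\right) \).

```python
# Sketch: Z_p[i] (with i^2 = -1) is a field exactly when p = 3 (mod 4).

def zp_i_is_field(p):
    # a + bi is invertible iff its "norm" a^2 + b^2 is non-zero in Z_p
    return all((a * a + b * b) % p != 0
               for a in range(p) for b in range(p) if (a, b) != (0, 0))

for p in (3, 7, 11, 19):        # p = 3 (mod 4): fields
    assert zp_i_is_field(p)
for p in (2, 5, 13, 17):        # p = 2 or p = 1 (mod 4): zero divisors exist
    assert not zp_i_is_field(p)
print("Z_p[i] is a field exactly for p = 3 (mod 4), as claimed")
```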
Example 7.60. Let \( p \) be an odd prime, and let \( d \in {\mathbb{Z}}_{p}^{ * } \) . Let \( f \mathrel{\text{:=}} {X}^{2} - d \in {\mathbb{Z}}_{p}\left\lbrack X\right\rbrack \) , and consider the ring \( E \mathrel{\text{:=}} {\mathbb{Z}}_{p}\left\lbrack X\right\rbrack /\left( f\right) = {\mathbb{Z}}_{p}\left\lbrack \xi \right\rbrack \), where \( \xi \mathrel{\text{:=}} {\left\lbrack X\right\rbrack }_{f} \in E \) . We have \( E = \left\{ {a + {b\xi } : a, b \in {\mathbb{Z}}_{p}}\right\} \) and \( \left| E\right| = {p}^{2} \) . Note that \( {\xi }^{2} = d \), and the general rules for arithmetic in \( E \) look like this: for all \( a, b,{a}^{\prime },{b}^{\prime } \in {\mathbb{Z}}_{p} \), we have\n\n\[ \left( {a + {b\xi }}\right) + \left( {{a}^{\prime } + {b}^{\prime }\xi }\right) = \left( {a + {a}^{\prime }}\right) + \left( {b + {b}^{\prime }}\right) \xi \]\n\nand\n\n\[ \left( {a + {b\xi }}\right) \cdot \left( {{a}^{\prime } + {b}^{\prime }\xi }\right) = \left( {a{a}^{\prime } + b{b}^{\prime }d}\right) + \left( {a{b}^{\prime } + {a}^{\prime }b}\right) \xi . \]
|
Suppose that \( d \in {\left( {\mathbb{Z}}_{p}^{ * }\right) }^{2} \), so that \( d = {c}^{2} \) for some \( c \in {\mathbb{Z}}_{p}^{ * } \) . Then \( f = \left( {X - c}\right) \left( {X + c}\right) \) , and as in the previous example, we have a ring isomorphism \( E \cong {\mathbb{Z}}_{p} \times {\mathbb{Z}}_{p} \) (which maps \( a + {b\xi } \in E \) to \( \left( {a + {bc}, a - {bc}}\right) \in {\mathbb{Z}}_{p} \times {\mathbb{Z}}_{p} \) ); in particular, \( E \) is not a field.\n\nSuppose that \( d \notin {\left( {\mathbb{Z}}_{p}^{ * }\right) }^{2} \) . This implies that for all \( a, b \in {\mathbb{Z}}_{p} \), not both zero, we have \( {a}^{2} - {b}^{2}d \neq 0 \) . Using this, we get the following formula for multiplicative inverses in \( E \) :\n\n\[ {\left( a + b\xi \right) }^{-1} = \frac{a - {b\xi }}{{a}^{2} - {b}^{2}d}. \]\n\nTherefore, \( E \) is a field in this case.\n\nBy Theorem 2.20, we know that \( \left| {\left( {\mathbb{Z}}_{p}^{ * }\right) }^{2}\right| = \left( {p - 1}\right) /2 \), and hence there exists \( d \in {\mathbb{Z}}_{p}^{ * } \smallsetminus {\left( {\mathbb{Z}}_{p}^{ * }\right) }^{2} \) for all odd primes \( p \) . Thus, we have a general (though not explicit) construction for finite fields of cardinality \( {p}^{2} \) for all odd primes \( p \) .
|
Yes
|
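The construction can be exercised concretely. In the sketch below (with the illustrative choice \( p = 7 \)), a non-residue \( d \) is found by Euler's criterion, and the inverse formula \( {\left( a + b\xi \right) }^{-1} = \left( {a - {b\xi }}\right) /\left( {{a}^{2} - {b}^{2}d}\right) \) is verified for every non-zero element:

```python
# Sketch: build the field Z_p[xi] with xi^2 = d, d a quadratic non-residue.

p = 7

# find a non-residue: d with d^((p-1)/2) != 1 (Euler's criterion)
d = next(x for x in range(2, p) if pow(x, (p - 1) // 2, p) != 1)

def mul(u, v):
    # (a + b*xi)(c + e*xi) = (ac + bed) + (ae + bc)*xi, since xi^2 = d
    a, b = u
    c, e = v
    return ((a * c + b * e * d) % p, (a * e + b * c) % p)

def inv(u):
    # (a + b*xi)^{-1} = (a - b*xi) / (a^2 - b^2 d)
    a, b = u
    t = pow(a * a - b * b * d, p - 2, p)   # inverse of the norm in Z_p
    return (a * t % p, -b * t % p)

for a in range(p):
    for b in range(p):
        if (a, b) != (0, 0):
            assert mul((a, b), inv((a, b))) == (1, 0)
print(f"every non-zero element of Z_{p}[xi] is invertible (d = {d})")
```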
Theorem 7.29. Let \( D \) be an integral domain and \( G \) a subgroup of \( {D}^{ * } \) of finite order. Then \( G \) is cyclic.
|
Proof. Suppose \( G \) is not cyclic. If \( m \) is the exponent of \( G \), then by Theorem 6.41, we know that \( m < \left| G\right| \) . Moreover, by definition, \( {a}^{m} = 1 \) for all \( a \in G \) ; that is, every element of \( G \) is a root of the polynomial \( {X}^{m} - 1 \in D\left\lbrack X\right\rbrack \) . But by Theorem 7.14, a polynomial of degree \( m \) over an integral domain has at most \( m \) distinct roots, and this contradicts the fact that \( m < \left| G\right| \) .
|
Yes
|
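In particular, Theorem 7.29 implies that \( {\mathbb{Z}}_{p}^{ * } \) is cyclic for every prime \( p \), since it is a finite subgroup of the unit group of the field \( {\mathbb{Z}}_{p} \). A brute-force sketch confirming this for a few small primes:

```python
# Sketch: Z_p* is cyclic; search for an element of order p - 1.

def multiplicative_order(a, p):
    k, x = 1, a % p
    while x != 1:
        x = x * a % p
        k += 1
    return k

for p in (5, 7, 11, 13):
    assert any(multiplicative_order(a, p) == p - 1 for a in range(2, p))
print("Z_p* has a generator for p = 5, 7, 11, 13")
```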
Lemma 7.30. Let \( p \) be a prime. For every positive integer \( e \), if \( a \equiv b\left( {\;\operatorname{mod}\;{p}^{e}}\right) \), then \( {a}^{p} \equiv {b}^{p}\left( {\;\operatorname{mod}\;{p}^{e + 1}}\right) \).
|
Proof. Suppose \( a \equiv b\left( {\;\operatorname{mod}\;{p}^{e}}\right) \), so that \( a = b + c{p}^{e} \) for some \( c \in \mathbb{Z} \). Then \( {a}^{p} = {b}^{p} + p{b}^{p - 1}c{p}^{e} + d{p}^{2e} \) for some \( d \in \mathbb{Z} \), and it follows that \( {a}^{p} \equiv {b}^{p}\left( {\;\operatorname{mod}\;{p}^{e + 1}}\right) \). \( \square \)
|
Yes
|
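Lemma 7.30 is also easy to spot-check numerically; the following sketch (ranges chosen arbitrarily) tests \( a = b + c{p}^{e} \) directly:

```python
# Sketch: if a = b (mod p^e), then a^p = b^p (mod p^(e+1)).

def lemma_7_30_holds(p, e, trials=200):
    pe = p ** e
    mod = p ** (e + 1)
    return all(pow(b + c * pe, p, mod) == pow(b, p, mod)
               for b in range(1, trials) for c in (1, 2, 3))

assert lemma_7_30_holds(3, 2)
assert lemma_7_30_holds(5, 1)
print("Lemma 7.30 verified on a range of examples")
```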
Lemma 7.31. Let \( p \) be a prime, and let \( e \) be a positive integer such that \( {p}^{e} > 2 \) . If \( a \equiv 1 + {p}^{e}\left( {\;\operatorname{mod}\;{p}^{e + 1}}\right) \), then \( {a}^{p} \equiv 1 + {p}^{e + 1}\left( {\;\operatorname{mod}\;{p}^{e + 2}}\right) \) .
|
Proof. Suppose \( a \equiv 1 + {p}^{e}\left( {\;\operatorname{mod}\;{p}^{e + 1}}\right) \) . By Lemma 7.30, \( {a}^{p} \equiv {\left( 1 + {p}^{e}\right) }^{p}\left( {\;\operatorname{mod}\;{p}^{e + 2}}\right) \) . Expanding \( {\left( 1 + {p}^{e}\right) }^{p} \), we have\n\n\[ \n{\left( 1 + {p}^{e}\right) }^{p} = 1 + p \cdot {p}^{e} + \mathop{\sum }\limits_{{k = 2}}^{{p - 1}}\left( \begin{array}{l} p \\ k \end{array}\right) {p}^{ek} + {p}^{ep}.\n\] \n\nBy Exercise 1.14, all of the terms in the sum on \( k \) are divisible by \( {p}^{1 + {2e}} \), and \( 1 + {2e} \geq e + 2 \) for all \( e \geq 1 \) . For the term \( {p}^{ep} \), the assumption that \( {p}^{e} > 2 \) means that either \( p \geq 3 \) or \( e \geq 2 \), which implies \( {ep} \geq e + 2 \) . Hence \( {\left( 1 + {p}^{e}\right) }^{p} \equiv 1 + {p}^{e + 1}\left( {\;\operatorname{mod}\;{p}^{e + 2}}\right) \), and therefore \( {a}^{p} \equiv 1 + {p}^{e + 1}\left( {\;\operatorname{mod}\;{p}^{e + 2}}\right) \) .
|
Yes
|
Example 7.61. Let \( p \) be an odd prime, and let \( d \) be a positive integer dividing \( p - 1 \) . Since \( {\mathbb{Z}}_{p}^{ * } \) is a cyclic group of order \( p - 1 \), Theorem 6.32 implies that \( {\left( {\mathbb{Z}}_{p}^{ * }\right) }^{d} \) is the unique subgroup of \( {\mathbb{Z}}_{p}^{ * } \) of order \( \left( {p - 1}\right) /d \), and moreover, \( {\left( {\mathbb{Z}}_{p}^{ * }\right) }^{d} = {\mathbb{Z}}_{p}^{ * }\{ \left( {p - 1}\right) /d\} \) ; that is, for all \( \alpha \in {\mathbb{Z}}_{p}^{ * } \), we have
|
\[ \alpha = {\beta }^{d}\text{ for some }\beta \in {\mathbb{Z}}_{p}^{ * } \Leftrightarrow {\alpha }^{\left( {p - 1}\right) /d} = 1. \]
|
Yes
|
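This \( d \)-th power criterion can be checked exhaustively for small parameters; the sketch below uses the illustrative choice \( p = 13, d = 3 \):

```python
# Sketch: for d | p - 1, alpha in Z_p* is a d-th power iff alpha^((p-1)/d) = 1.

p, d = 13, 3
dth_powers = {pow(b, d, p) for b in range(1, p)}

for alpha in range(1, p):
    assert (alpha in dth_powers) == (pow(alpha, (p - 1) // d, p) == 1)

# the d-th powers form the unique subgroup of order (p-1)/d
assert len(dth_powers) == (p - 1) // d
print("d-th power criterion verified for p = 13, d = 3")
```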
De Morgan’s law says that for all events \( \mathcal{A} \) and \( \mathcal{B} \) ,
|
\[\overline{\mathcal{A} \cup \mathcal{B}} = \overline{\mathcal{A}} \cap \overline{\mathcal{B}}\text{ and }\overline{\mathcal{A} \cap \mathcal{B}} = \overline{\mathcal{A}} \cup \overline{\mathcal{B}}.\]
|
Yes
|
Alice rolls two dice, and asks Bob to guess a value that appears on either of the two dice (without looking). Let us model this situation by considering the uniform distribution on \( \Omega \mathrel{\text{:=}} \{ 1,\ldots ,6\} \times \{ 1,\ldots ,6\} \), where for each pair \( \left( {s, t}\right) \in \Omega, s \) represents the value of the first die, and \( t \) the value of the second.\n\nFor \( k = 1,\ldots ,6 \), let \( {\mathcal{A}}_{k} \) be the event that the first die is \( k \), and \( {\mathcal{B}}_{k} \) the event that the second die is \( k \) . Let \( {\mathcal{C}}_{k} = {\mathcal{A}}_{k} \cup {\mathcal{B}}_{k} \) be the event that \( k \) appears on either of the two dice. No matter what value \( k \) Bob chooses, the probability that this choice is correct is
|
\n\[ \mathrm{P}\left\lbrack {\mathcal{C}}_{k}\right\rbrack = \mathrm{P}\left\lbrack {{\mathcal{A}}_{k} \cup {\mathcal{B}}_{k}}\right\rbrack = \mathrm{P}\left\lbrack {\mathcal{A}}_{k}\right\rbrack + \mathrm{P}\left\lbrack {\mathcal{B}}_{k}\right\rbrack - \mathrm{P}\left\lbrack {{\mathcal{A}}_{k} \cap {\mathcal{B}}_{k}}\right\rbrack \]\n\n\[ = 1/6 + 1/6 - 1/{36} = {11}/{36}, \]\n\nwhich is slightly less than the estimate \( \mathrm{P}\left\lbrack {\mathcal{A}}_{k}\right\rbrack + \mathrm{P}\left\lbrack {\mathcal{B}}_{k}\right\rbrack \) obtained from (8.3).
|
Yes
|
Suppose we have a coin that comes up heads with some probability \( p \), and tails with probability \( q \mathrel{\text{:=}} 1 - p \). We toss the coin \( n \) times, and record the outcomes. We can model this as the product distribution \( \mathrm{P} = {\mathrm{P}}_{1}^{n} \), where \( {\mathrm{P}}_{1} \) is the distribution of a Bernoulli trial (see Example 8.3) with success probability \( p \), and where we identify success with heads, and failure with tails. The sample space \( \Omega \) of \( \mathrm{P} \) is the set of all \( {2}^{n} \) tuples \( \omega = \left( {{\omega }_{1},\ldots ,{\omega }_{n}}\right) \), where each \( {\omega }_{i} \) is either heads or tails. If the tuple \( \omega \) has \( k \) heads and \( n - k \) tails, then \( \mathrm{P}\left( \omega \right) = {p}^{k}{q}^{n - k} \), regardless of the positions of the heads and tails in the tuple.
|
For each \( k = 0,\ldots, n \), let \( {\mathcal{A}}_{k} \) be the event that our coin comes up heads exactly \( k \) times. As a set, \( {\mathcal{A}}_{k} \) consists of all those tuples in the sample space with exactly \( k \) heads, and so

\[ \left| {\mathcal{A}}_{k}\right| = \binom{n}{k}, \]

from which it follows that

\[ \mathrm{P}\left\lbrack {\mathcal{A}}_{k}\right\rbrack = \binom{n}{k}{p}^{k}{q}^{n - k}. \]

If our coin is a fair coin, so that \( p = q = 1/2 \), then \( \mathrm{P} \) is the uniform distribution on \( \Omega \), and for each \( k = 0,\ldots, n \), we have

\[ \mathrm{P}\left\lbrack {\mathcal{A}}_{k}\right\rbrack = \binom{n}{k}{2}^{-n}. \]
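As a sanity check, the following Python sketch compares the closed form against direct enumeration of all \( 2^n \) outcomes; the parameters \( n = 5 \) and \( p = 1/3 \) are arbitrary illustrative choices:

```python
from fractions import Fraction
from itertools import product
from math import comb

# Compare P[A_k] = C(n,k) p^k q^(n-k) against summing P(omega) over all
# 2^n outcomes with exactly k heads.
n, p = 5, Fraction(1, 3)
q = 1 - p

def prob_k_heads(k):
    total = Fraction(0)
    for omega in product("HT", repeat=n):
        if omega.count("H") == k:
            # P(omega) = p^(#heads) * q^(#tails), independent of positions
            total += p ** omega.count("H") * q ** omega.count("T")
    return total

assert all(prob_k_heads(k) == comb(n, k) * p**k * q**(n - k)
           for k in range(n + 1))
```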
|
Yes
|
Consider again Example 8.4, where \( \mathcal{A} \) is the event that the value on the die is odd, and \( \mathcal{B} \) is the event that the value of the die exceeds 2 . Then as we calculated, \( \mathrm{P}\left\lbrack \mathcal{A}\right\rbrack = 1/2,\mathrm{P}\left\lbrack \mathcal{B}\right\rbrack = 2/3 \), and \( \mathrm{P}\left\lbrack {\mathcal{A} \cap \mathcal{B}}\right\rbrack = 1/3 \) ; thus, \( \mathrm{P}\left\lbrack {\mathcal{A} \cap \mathcal{B}}\right\rbrack = \) \( \mathrm{P}\left\lbrack \mathcal{A}\right\rbrack \mathrm{P}\left\lbrack \mathcal{B}\right\rbrack \), and we conclude that \( \mathcal{A} \) and \( \mathcal{B} \) are independent.
|
Indeed, \( \mathrm{P}\left\lbrack {\mathcal{A} \mid \mathcal{B}}\right\rbrack = \) \( \left( {1/3}\right) /\left( {2/3}\right) = 1/2 = \mathrm{P}\left\lbrack \mathcal{A}\right\rbrack \) ; intuitively, given the partial knowledge that the value on the die exceeds 2, we know it is equally likely to be either 3, 4, 5, or 6, and so the conditional probability that it is odd is \( 1/2 \) .
|
Yes
|
For example, suppose Alice tells Bob the sum is 4. Then what is Bob's best strategy in this case? Let \( {\mathcal{D}}_{\ell } \) be the event that the sum is \( \ell \), for \( \ell = 2,\ldots ,{12} \), and consider the conditional distribution given \( {\mathcal{D}}_{4} \). This conditional distribution is essentially the uniform distribution on the set \( \{ \left( {1,3}\right) ,\left( {2,2}\right) ,\left( {3,1}\right) \} \).
|
The numbers 1 and 3 both appear in two pairs, while the number 2 appears in just one pair. Therefore, \[ \mathrm{P}\left\lbrack {{\mathcal{C}}_{1} \mid {\mathcal{D}}_{4}}\right\rbrack = \mathrm{P}\left\lbrack {{\mathcal{C}}_{3} \mid {\mathcal{D}}_{4}}\right\rbrack = 2/3 \] while \[ \mathrm{P}\left\lbrack {{\mathcal{C}}_{2} \mid {\mathcal{D}}_{4}}\right\rbrack = 1/3 \] and \[ \mathrm{P}\left\lbrack {{\mathcal{C}}_{4} \mid {\mathcal{D}}_{4}}\right\rbrack = \mathrm{P}\left\lbrack {{\mathcal{C}}_{5} \mid {\mathcal{D}}_{4}}\right\rbrack = \mathrm{P}\left\lbrack {{\mathcal{C}}_{6} \mid {\mathcal{D}}_{4}}\right\rbrack = 0. \] Thus, if the sum is 4, Bob's best strategy is to guess either 1 or 3 , which will be correct with probability \( 2/3 \) .
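These conditional probabilities can be checked by restricting the uniform distribution to the three outcomes summing to 4:

```python
from fractions import Fraction

# Conditional probabilities P[C_k | D_4]: condition the uniform
# distribution on two dice on the event that the sum is 4.
omega = [(s, t) for s in range(1, 7) for t in range(1, 7)]
given = [w for w in omega if sum(w) == 4]   # (1,3), (2,2), (3,1)

def cond_prob_appears(k):
    return Fraction(sum(1 for w in given if k in w), len(given))

assert cond_prob_appears(1) == cond_prob_appears(3) == Fraction(2, 3)
assert cond_prob_appears(2) == Fraction(1, 3)
assert all(cond_prob_appears(k) == 0 for k in (4, 5, 6))
```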
|
Yes
|
Let us continue with Example 8.11, and compute Bob's overall probability of winning, assuming he follows an optimal strategy. If the sum is 2 or 12, clearly there is only one sensible choice for Bob to make, and it will certainly be correct. If the sum is any other number \( \ell \), and there are \( {N}_{\ell } \) pairs in the sample space that sum to that number, then there will always be a value that appears in exactly 2 of these \( {N}_{\ell } \) pairs, and Bob should choose such a value (see the diagram in Example 8.11). Indeed, this is achieved by the simple rule of choosing the value 1 if \( \ell \leq 7 \), and the value 6 if \( \ell > 7 \) . This is an optimal strategy for Bob, and if \( \mathcal{C} \) is the event that Bob wins following this strategy, then by total probability (8.10), we have
|
\[ \mathrm{P}\left\lbrack \mathcal{C}\right\rbrack = \mathop{\sum }\limits_{{\ell = 2}}^{{12}}\mathrm{P}\left\lbrack {\mathcal{C} \mid {\mathcal{D}}_{\ell }}\right\rbrack \mathrm{P}\left\lbrack {\mathcal{D}}_{\ell }\right\rbrack . \]

Moreover,

\[ \mathrm{P}\left\lbrack {\mathcal{C} \mid {\mathcal{D}}_{2}}\right\rbrack \mathrm{P}\left\lbrack {\mathcal{D}}_{2}\right\rbrack = 1 \cdot \frac{1}{36} = \frac{1}{36},\;\mathrm{P}\left\lbrack {\mathcal{C} \mid {\mathcal{D}}_{12}}\right\rbrack \mathrm{P}\left\lbrack {\mathcal{D}}_{12}\right\rbrack = 1 \cdot \frac{1}{36} = \frac{1}{36}, \]

and for \( \ell = 3,\ldots ,{11} \), we have

\[ \mathrm{P}\left\lbrack {\mathcal{C} \mid {\mathcal{D}}_{\ell }}\right\rbrack \mathrm{P}\left\lbrack {\mathcal{D}}_{\ell }\right\rbrack = \frac{2}{{N}_{\ell }} \cdot \frac{{N}_{\ell }}{36} = \frac{1}{18}. \]

Therefore,

\[ \mathrm{P}\left\lbrack \mathcal{C}\right\rbrack = \frac{1}{36} + \frac{1}{36} + \frac{9}{18} = \frac{10}{18}. \]
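The overall probability can also be obtained by enumerating all 36 outcomes under the simple rule described above (guess 1 if the sum is at most 7, and 6 otherwise):

```python
from fractions import Fraction

# Bob's overall winning probability under the simple optimal rule:
# guess 1 when the sum is at most 7, and 6 otherwise.
omega = [(s, t) for s in range(1, 7) for t in range(1, 7)]
wins = sum(1 for w in omega if (1 if sum(w) <= 7 else 6) in w)

# 20/36 = 10/18
assert Fraction(wins, len(omega)) == Fraction(10, 18)
```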
|
Yes
|
Suppose that the rate of incidence of disease \( X \) in the overall population is \( 1\% \) . Also suppose that there is a test for disease \( X \) ; however, the test is not perfect: it has a 5% false positive rate (i.e., 5% of healthy patients test positive for the disease), and a \( 2\% \) false negative rate (i.e., \( 2\% \) of sick patients test negative for the disease). A doctor gives the test to a patient and it comes out positive. How should the doctor advise his patient? In particular, what is the probability that the patient actually has disease \( X \), given a positive test result?
|
Let \( \mathcal{A} \) be the event that the test is positive and let \( \mathcal{B} \) be the event that the patient has disease \( X \). The relevant quantity that we need to estimate is \( \mathrm{P}\left\lbrack {\mathcal{B} \mid \mathcal{A}}\right\rbrack \); that is, the probability that the patient has disease \( X \), given a positive test result. We use Bayes' theorem to do this:

\[ \begin{aligned} \mathsf{P}\left\lbrack {\mathcal{B} \mid \mathcal{A}}\right\rbrack & = \frac{\mathsf{P}\left\lbrack {\mathcal{A} \mid \mathcal{B}}\right\rbrack \mathsf{P}\left\lbrack \mathcal{B}\right\rbrack }{\mathsf{P}\left\lbrack {\mathcal{A} \mid \mathcal{B}}\right\rbrack \mathsf{P}\left\lbrack \mathcal{B}\right\rbrack + \mathsf{P}\left\lbrack {\mathcal{A} \mid \overline{\mathcal{B}}}\right\rbrack \mathsf{P}\left\lbrack \overline{\mathcal{B}}\right\rbrack } \\ & = \frac{{0.98} \cdot {0.01}}{{0.98} \cdot {0.01} + {0.05} \cdot {0.99}} \approx {0.17}. \end{aligned} \]

Thus, the chances that the patient has disease \( X \) given a positive test result are just \( {17}\% \). The correct intuition here is that it is much more likely to get a false positive than it is to actually have the disease.
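The same computation in Python, using the rates stated above:

```python
# Bayes' theorem for the diagnostic test: 1% incidence,
# 5% false positive rate, 2% false negative rate.
p_B = 0.01                 # P[B]: patient has disease X
p_A_given_B = 0.98         # P[A | B]: true positive rate (1 - false negative)
p_A_given_notB = 0.05      # P[A | not B]: false positive rate

p_B_given_A = (p_A_given_B * p_B) / (
    p_A_given_B * p_B + p_A_given_notB * (1 - p_B))

# roughly 0.165, i.e. about 17%
assert abs(p_B_given_A - 0.17) < 0.01
```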
|
Yes
|
In this game, a contestant chooses one of three doors. Behind two doors is a 'zonk,' and behind one of the doors is a 'grand prize.' After the contestant chooses a door, the host of the show, Monty Hall, always reveals a zonk behind one of the two doors not chosen by the contestant. The contestant is then given a choice: either stay with his initial choice of door, or switch to the other unopened door. After the contestant finalizes his decision on which door to choose, that door is opened and he wins whatever is behind it. The question is, which strategy is better for the contestant: to stay or to switch?
|
Let us evaluate the two strategies. If the contestant always stays with his initial selection, then it is clear that his probability of success is exactly \( 1/3 \).

Now consider the strategy of always switching. Let \( \mathcal{B} \) be the event that the contestant's initial choice was correct, and let \( \mathcal{A} \) be the event that the contestant wins the grand prize. On the one hand, if the contestant's initial choice was correct, then switching will certainly lead to failure (in this case, Monty has two doors to choose from, but his choice does not affect the outcome). Thus, \( \mathrm{P}\left\lbrack {\mathcal{A} \mid \mathcal{B}}\right\rbrack = 0 \). On the other hand, suppose that the contestant's initial choice was incorrect, so that one of the zonks is behind the initially chosen door. Since Monty reveals the other zonk, switching will lead with certainty to success. Thus, \( \mathrm{P}\left\lbrack {\mathcal{A} \mid \overline{\mathcal{B}}}\right\rbrack = 1 \). Furthermore, it is clear that \( \mathrm{P}\left\lbrack \mathcal{B}\right\rbrack = 1/3 \). So using total probability (8.10), we compute

\[ \mathsf{P}\left\lbrack \mathcal{A}\right\rbrack = \mathsf{P}\left\lbrack {\mathcal{A} \mid \mathcal{B}}\right\rbrack \mathsf{P}\left\lbrack \mathcal{B}\right\rbrack + \mathsf{P}\left\lbrack {\mathcal{A} \mid \overline{\mathcal{B}}}\right\rbrack \mathsf{P}\left\lbrack \overline{\mathcal{B}}\right\rbrack = 0 \cdot \left( {1/3}\right) + 1 \cdot \left( {2/3}\right) = 2/3. \]

Thus, the 'stay' strategy has a success probability of \( 1/3 \), while the 'switch' strategy has a success probability of \( 2/3 \). So it is better to switch than to stay.
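An exhaustive enumeration of the 'switch' strategy, weighting Monty's choice uniformly when he has two zonk doors to choose from, confirms the \( 2/3 \) success probability:

```python
from fractions import Fraction

# Enumerate prize door and initial choice (each uniform over 3 doors),
# and every door Monty may open (a zonk door not chosen by the contestant).
wins = total = Fraction(0)
for prize in range(3):
    for choice in range(3):
        for monty in range(3):
            if monty == choice or monty == prize:
                continue  # Monty never opens these doors
            # Monty has two options when choice == prize, one otherwise
            weight = Fraction(1, 2) if choice == prize else Fraction(1)
            switched = 3 - choice - monty  # the remaining unopened door
            total += weight
            wins += weight * (switched == prize)

assert wins / total == Fraction(2, 3)
```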
|
Yes
|
Suppose we toss a fair coin three times, which we formally model using the uniform distribution on the set of all 8 possible outcomes of the three coin tosses: (heads, heads, heads), (heads, heads, tails), etc., as in Example 8.8. For \( i = 1,2,3 \), let \( {\mathcal{A}}_{i} \) be the event that the \( i \) th toss comes up heads. Then \( {\left\{ {\mathcal{A}}_{i}\right\} }_{i = 1}^{3} \) is a mutually independent family of events, where each individual \( {\mathcal{A}}_{i} \) occurs with probability \( 1/2 \).
|
Now let \( {\mathcal{B}}_{12} \) be the event that the first and second tosses agree (i.e., both heads or both tails), let \( {\mathcal{B}}_{13} \) be the event that the first and third tosses agree, and let \( {\mathcal{B}}_{23} \) be the event that the second and third tosses agree. Then the family of events \( {\mathcal{B}}_{12},{\mathcal{B}}_{13},{\mathcal{B}}_{23} \) is pairwise independent, but not mutually independent. Indeed, the probability that any given individual event occurs is \( 1/2 \), and the probability that any given pair of events occurs is \( 1/4 \) ; however, the probability that all three events occur is also \( 1/4 \), since if any two events occur, then so does the third.
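These claims are easy to verify by enumerating the eight equally likely outcomes:

```python
from fractions import Fraction
from itertools import product

# Three fair coin tosses; B_ij is the event that tosses i and j agree.
omega = list(product((0, 1), repeat=3))

def P(event):
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

def B12(w): return w[0] == w[1]
def B13(w): return w[0] == w[2]
def B23(w): return w[1] == w[2]

# pairwise independent ...
assert P(B12) == P(B13) == P(B23) == Fraction(1, 2)
assert P(lambda w: B12(w) and B13(w)) == P(B12) * P(B13) == Fraction(1, 4)
# ... but not mutually independent: if any two occur, so does the third
assert P(lambda w: B12(w) and B13(w) and B23(w)) == Fraction(1, 4)
assert P(B12) * P(B13) * P(B23) == Fraction(1, 8)
```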
|
Yes
|
Theorem 8.2. If \( \mathcal{A} \) and \( \mathcal{B} \) are independent events, then so are \( \mathcal{A} \) and \( \overline{\mathcal{B}} \) .
|
Proof. We have

\[ \mathsf{P}\left\lbrack \mathcal{A}\right\rbrack = \mathsf{P}\left\lbrack {\mathcal{A} \cap \mathcal{B}}\right\rbrack + \mathsf{P}\left\lbrack {\mathcal{A} \cap \overline{\mathcal{B}}}\right\rbrack \;\text{(by total probability (8.9))} \]

\[ = \mathsf{P}\left\lbrack \mathcal{A}\right\rbrack \mathsf{P}\left\lbrack \mathcal{B}\right\rbrack + \mathsf{P}\left\lbrack {\mathcal{A} \cap \overline{\mathcal{B}}}\right\rbrack \text{ (since }\mathcal{A}\text{ and }\mathcal{B}\text{ are independent).} \]

Therefore,

\[ \begin{aligned} \mathrm{P}\left\lbrack {\mathcal{A} \cap \overline{\mathcal{B}}}\right\rbrack & = \mathrm{P}\left\lbrack \mathcal{A}\right\rbrack - \mathrm{P}\left\lbrack \mathcal{A}\right\rbrack \mathrm{P}\left\lbrack \mathcal{B}\right\rbrack \\ & = \mathrm{P}\left\lbrack \mathcal{A}\right\rbrack \left( {1 - \mathrm{P}\left\lbrack \mathcal{B}\right\rbrack }\right) \\ & = \mathrm{P}\left\lbrack \mathcal{A}\right\rbrack \mathrm{P}\left\lbrack \overline{\mathcal{B}}\right\rbrack . \end{aligned} \]
|
Yes
|
Theorem 8.3. Let \( {\left\{ {\mathcal{A}}_{i}\right\} }_{i \in I} \) be a finite, \( k \) -wise independent family of events. Let \( J \) be a subset of \( I \), and for each \( i \in I \), define \( {\mathcal{A}}_{i}^{\prime } \mathrel{\text{:=}} {\mathcal{A}}_{i} \) if \( i \in J \), and \( {\mathcal{A}}_{i}^{\prime } \mathrel{\text{:=}} {\overline{\mathcal{A}}}_{i} \) if \( i \notin J \). Then \( {\left\{ {\mathcal{A}}_{i}^{\prime }\right\} }_{i \in I} \) is also \( k \) -wise independent.
|
Proof. It suffices to prove the theorem for the case where \( J = I \smallsetminus \{ d\} \), for an arbitrary \( d \in I \): this allows us to complement any single member of the family that we wish, without affecting independence; by repeating the procedure, we can complement any number of them.

To this end, it will suffice to show the following: if \( J \subseteq I,\left| J\right| < k, d \in I \smallsetminus J \), and \( {\mathcal{A}}_{J} \mathrel{\text{:=}} \mathop{\bigcap }\limits_{{j \in J}}{\mathcal{A}}_{j} \), we have

\[ \mathrm{P}\left\lbrack {{\overline{\mathcal{A}}}_{d} \cap {\mathcal{A}}_{J}}\right\rbrack = \left( {1 - \mathrm{P}\left\lbrack {\mathcal{A}}_{d}\right\rbrack }\right) \mathop{\prod }\limits_{{j \in J}}\mathrm{P}\left\lbrack {\mathcal{A}}_{j}\right\rbrack . \]

(8.12)

Using total probability (8.9), along with the independence hypothesis (twice), we have

\[ \mathop{\prod }\limits_{{j \in J}}\mathrm{P}\left\lbrack {\mathcal{A}}_{j}\right\rbrack = \mathrm{P}\left\lbrack {\mathcal{A}}_{J}\right\rbrack = \mathrm{P}\left\lbrack {{\mathcal{A}}_{d} \cap {\mathcal{A}}_{J}}\right\rbrack + \mathrm{P}\left\lbrack {{\overline{\mathcal{A}}}_{d} \cap {\mathcal{A}}_{J}}\right\rbrack \]

\[ = \mathrm{P}\left\lbrack {\mathcal{A}}_{d}\right\rbrack \cdot \mathop{\prod }\limits_{{j \in J}}\mathrm{P}\left\lbrack {\mathcal{A}}_{j}\right\rbrack + \mathrm{P}\left\lbrack {{\overline{\mathcal{A}}}_{d} \cap {\mathcal{A}}_{J}}\right\rbrack , \]

from which (8.12) follows immediately.
|
Yes
|
If \( \mathcal{A} \) is an event, we may define a random variable \( X \) as follows: \( X \mathrel{\text{:=}} 1 \) if the event \( \mathcal{A} \) occurs, and \( X \mathrel{\text{:=}} 0 \) otherwise. The variable \( X \) is called the indicator variable for \( \mathcal{A} \). Formally, \( X \) is the function that maps \( \omega \in \mathcal{A} \) to 1, and \( \omega \in \Omega \smallsetminus \mathcal{A} \) to 0 ; that is, \( X \) is simply the characteristic function of \( \mathcal{A} \). The distribution of \( X \) is that of a Bernoulli trial: \( \mathrm{P}\left\lbrack {X = 1}\right\rbrack = \mathrm{P}\left\lbrack \mathcal{A}\right\rbrack \) and \( \mathrm{P}\left\lbrack {X = 0}\right\rbrack = 1 - \mathrm{P}\left\lbrack \mathcal{A}\right\rbrack \).
|
It is not hard to see that \( 1 - X \) is the indicator variable for \( \overline{\mathcal{A}} \). Now suppose \( \mathcal{B} \) is another event, with indicator variable \( Y \). Then it is also not hard to see that \( {XY} \) is the indicator variable for \( \mathcal{A} \cap \mathcal{B} \), and that \( X + Y - {XY} \) is the indicator variable for \( \mathcal{A} \cup \mathcal{B} \); in particular, if \( \mathcal{A} \cap \mathcal{B} = \varnothing \), then \( X + Y \) is the indicator variable for \( \mathcal{A} \cup \mathcal{B} \).
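These identities can be checked pointwise; at each sample point the indicators take values in \( \{ 0,1\} \):

```python
from itertools import product

# Pointwise check of the indicator identities: with x, y in {0,1} the
# values of the indicators of A and B at a sample point, 1 - x, x*y, and
# x + y - x*y are the indicators of complement, intersection, and union.
def check(x, y):
    assert 1 - x == (0 if x else 1)                  # complement of A
    assert x * y == (1 if (x and y) else 0)          # A ∩ B
    assert x + y - x * y == (1 if (x or y) else 0)   # A ∪ B
    return True

ok = all(check(x, y) for x, y in product((0, 1), repeat=2))
```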
|
Yes
|
Consider again Example 8.8, where we have a coin that comes up heads with probability \( p \), and tails with probability \( q \mathrel{\text{:=}} 1 - p \), and we toss it \( n \) times. For each \( i = 1,\ldots, n \), let \( {\mathcal{A}}_{i} \) be the event that the \( i \) th toss comes up heads, and let \( {X}_{i} \) be the corresponding indicator variable. Let us also define \( X \mathrel{\text{:=}} {X}_{1} + \cdots + {X}_{n} \) , which represents the total number of tosses that come up heads. The image of \( X \) is \( \{ 0,\ldots, n\} \).
|
By the calculations made in Example 8.8, for each \( k = 0,\ldots, n \), we have

\[ \mathrm{P}\left\lbrack {X = k}\right\rbrack = \binom{n}{k}{p}^{k}{q}^{n - k}. \]
|
Yes
|
Theorem 8.4. Suppose \( f : S \rightarrow T \) is a surjective, regular function, and that \( X \) is a random variable that is uniformly distributed over \( S \) . Then \( f\left( X\right) \) is uniformly distributed over \( T \) .
|
Proof. The assumption that \( f \) is surjective and regular implies that for every \( t \in T \), the set \( {S}_{t} \mathrel{\text{:=}} {f}^{-1}\left( {\{ t\} }\right) \) has size \( \left| S\right| /\left| T\right| \). So, for each \( t \in T \), working directly from the definitions, we have

\[ \mathrm{P}\left\lbrack {f\left( X\right) = t}\right\rbrack = \mathop{\sum }\limits_{{\omega \in {X}^{-1}\left( {S}_{t}\right) }}\mathrm{P}\left( \omega \right) = \mathop{\sum }\limits_{{s \in {S}_{t}}}\mathop{\sum }\limits_{{\omega \in {X}^{-1}\left( {\{ s\} }\right) }}\mathrm{P}\left( \omega \right) = \mathop{\sum }\limits_{{s \in {S}_{t}}}\mathrm{P}\left\lbrack {X = s}\right\rbrack \]

\[ = \mathop{\sum }\limits_{{s \in {S}_{t}}}1/\left| S\right| = \left( {\left| S\right| /\left| T\right| }\right) /\left| S\right| = 1/\left| T\right| . \]
|
Yes
|
Theorem 8.5. Suppose that \( \rho : G \rightarrow {G}^{\prime } \) is a surjective homomorphism of finite abelian groups \( G \) and \( {G}^{\prime } \), and that \( X \) is a random variable that is uniformly distributed over \( G \) . Then \( \rho \left( X\right) \) is uniformly distributed over \( {G}^{\prime } \) .
|
Proof. It suffices to show that \( \rho \) is regular. Recall that the kernel \( K \) of \( \rho \) is a subgroup of \( G \), and that for every \( {g}^{\prime } \in {G}^{\prime } \), the set \( {\rho }^{-1}\left( \left\{ {g}^{\prime }\right\} \right) \) is a coset of \( K \) (see Theorem 6.19); moreover, every coset of \( K \) has the same size (see Theorem 6.14). These facts imply that \( \rho \) is regular.
|
No
|
We claim that \( {Z}^{\prime } \) is uniformly distributed over \( {\mathbb{Z}}_{6} \).
|
This follows immediately from the fact that the map that sends \( \left( {a, b}\right) \in {\mathbb{Z}}_{6} \times {\mathbb{Z}}_{6} \) to \( a + b \in {\mathbb{Z}}_{6} \) is a surjective group homomorphism (see Example 6.45).
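A brute-force check of this uniformity claim:

```python
from fractions import Fraction
from itertools import product

# The map (a, b) -> a + b on Z_6 x Z_6 is a surjective group homomorphism,
# so the sum of two independent uniform elements of Z_6 is again uniform.
omega = list(product(range(6), repeat=2))
dist = {c: Fraction(sum(1 for (a, b) in omega if (a + b) % 6 == c),
                    len(omega))
        for c in range(6)}

assert all(pr == Fraction(1, 6) for pr in dist.values())
```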
|
Yes
|
Let us continue with Examples 8.16 and 8.19. The random variables \( X \) and \( Y \) are independent: each is uniformly distributed over \( \{ 1,\ldots ,6\} \), and \( \left( {X, Y}\right) \) is uniformly distributed over \( \{ 1,\ldots ,6\} \times \{ 1,\ldots ,6\} \) . Let us calculate the conditional distribution of \( X \) given \( Z = 4 \) .
|
We have \( \mathrm{P}\left\lbrack {X = s \mid Z = 4}\right\rbrack = 1/3 \) for \( s = 1,2,3 \), and \( \mathrm{P}\left\lbrack {X = s \mid Z = 4}\right\rbrack = 0 \) for \( s = 4,5,6 \) . Thus, the conditional distribution of \( X \) given \( Z = 4 \) is essentially the uniform distribution on \( \{ 1,2,3\} \) .
|
Yes
|
Returning again to Examples 8.16, 8.19, and 8.20, we see that the family of random variables \( {X}^{\prime },{Y}^{\prime },{Z}^{\prime } \) is pairwise independent, but not mutually independent; for example,
|
\[ \mathrm{P}\left\lbrack {\left( {{X}^{\prime } = {\left\lbrack 0\right\rbrack }_{6}}\right) \cap \left( {{Y}^{\prime } = {\left\lbrack 0\right\rbrack }_{6}}\right) \cap \left( {{Z}^{\prime } = {\left\lbrack 0\right\rbrack }_{6}}\right) }\right\rbrack = 1/{6}^{2}, \] but \[ \mathrm{P}\left\lbrack {{X}^{\prime } = {\left\lbrack 0\right\rbrack }_{6}}\right\rbrack \cdot \mathrm{P}\left\lbrack {{Y}^{\prime } = {\left\lbrack 0\right\rbrack }_{6}}\right\rbrack \cdot \mathrm{P}\left\lbrack {{Z}^{\prime } = {\left\lbrack 0\right\rbrack }_{6}}\right\rbrack = 1/{6}^{3}. \]
|
Yes
|
Theorem 8.6. Let \( X \) be a random variable with image \( S \), and \( Y \) be a random variable with image \( T \). Further, suppose that \( f : S \rightarrow \left\lbrack {0,1}\right\rbrack \) and \( g : T \rightarrow \left\lbrack {0,1}\right\rbrack \) are functions such that

\[ \mathop{\sum }\limits_{{s \in S}}f\left( s\right) = \mathop{\sum }\limits_{{t \in T}}g\left( t\right) = 1, \]

(8.14)

and that for all \( s \in S \) and \( t \in T \), we have

\[ \mathrm{P}\left\lbrack {\left( {X = s}\right) \cap \left( {Y = t}\right) }\right\rbrack = f\left( s\right) g\left( t\right) . \]

(8.15)

Then \( X \) and \( Y \) are independent, the distribution of \( X \) is \( f \), and the distribution of \( Y \) is \( g \).
|
Proof. Since \( {\left\{ {Y = t}\right\} }_{t \in T} \) is a partition of the sample space, making use of total probability (8.9), along with (8.15) and (8.14), we see that for all \( s \in S \), we have

\[ \mathrm{P}\left\lbrack {X = s}\right\rbrack = \mathop{\sum }\limits_{{t \in T}}\mathrm{P}\left\lbrack {\left( {X = s}\right) \cap \left( {Y = t}\right) }\right\rbrack = \mathop{\sum }\limits_{{t \in T}}f\left( s\right) g\left( t\right) = f\left( s\right) \mathop{\sum }\limits_{{t \in T}}g\left( t\right) = f\left( s\right) . \]

Thus, the distribution of \( X \) is indeed \( f \). Exchanging the roles of \( X \) and \( Y \) in the above argument, we see that the distribution of \( Y \) is \( g \). Combining this with (8.15), we see that \( X \) and \( Y \) are independent.
|
Yes
|
Theorem 8.7. Let \( {\left\{ {X}_{i}\right\} }_{i \in I} \) be a finite family of random variables, where each \( {X}_{i} \) has image \( {S}_{i} \) . Also, let \( {\left\{ {f}_{i}\right\} }_{i \in I} \) be a family of functions, where for each \( i \in I \) , \( {f}_{i} : {S}_{i} \rightarrow \left\lbrack {0,1}\right\rbrack \) and \( \mathop{\sum }\limits_{{{s}_{i} \in {S}_{i}}}{f}_{i}\left( {s}_{i}\right) = 1 \) . Further, suppose that for every assignment \( {\left\{ {s}_{i}\right\} }_{i \in I} \) to \( {\left\{ {X}_{i}\right\} }_{i \in I} \), we have

\[ \mathrm{P}\left\lbrack {\mathop{\bigcap }\limits_{{i \in I}}\left( {{X}_{i} = {s}_{i}}\right) }\right\rbrack = \mathop{\prod }\limits_{{i \in I}}{f}_{i}\left( {s}_{i}\right) . \]

Then \( {\left\{ {X}_{i}\right\} }_{i \in I} \) is mutually independent, and for each \( i \in I \), the distribution of \( {X}_{i} \) is \( {f}_{i} \).
|
Proof. To prove the theorem, it suffices to prove the following statement: for every subset \( J \) of \( I \), and every assignment \( {\left\{ {s}_{j}\right\} }_{j \in J} \) to \( {\left\{ {X}_{j}\right\} }_{j \in J} \), we have

\[ \mathrm{P}\left\lbrack {\mathop{\bigcap }\limits_{{j \in J}}\left( {{X}_{j} = {s}_{j}}\right) }\right\rbrack = \mathop{\prod }\limits_{{j \in J}}{f}_{j}\left( {s}_{j}\right) . \]

Moreover, it suffices to prove this statement for the case where \( J = I \smallsetminus \{ d\} \), for an arbitrary \( d \in I \): this allows us to eliminate any one variable from the family, without affecting the hypotheses, and by repeating this procedure, we can eliminate any number of variables.

Thus, let \( d \in I \) be fixed, let \( J \mathrel{\text{:=}} I \smallsetminus \{ d\} \), and let \( {\left\{ {s}_{j}\right\} }_{j \in J} \) be a fixed assignment to \( {\left\{ {X}_{j}\right\} }_{j \in J} \). Then, since \( {\left\{ {X}_{d} = {s}_{d}\right\} }_{{s}_{d} \in {S}_{d}} \) is a partition of the sample space, we have

\[ \mathrm{P}\left\lbrack {\mathop{\bigcap }\limits_{{j \in J}}\left( {{X}_{j} = {s}_{j}}\right) }\right\rbrack = \mathrm{P}\left\lbrack {\mathop{\bigcup }\limits_{{{s}_{d} \in {S}_{d}}}\left( {\mathop{\bigcap }\limits_{{i \in I}}\left( {{X}_{i} = {s}_{i}}\right) }\right) }\right\rbrack = \mathop{\sum }\limits_{{{s}_{d} \in {S}_{d}}}\mathrm{P}\left\lbrack {\mathop{\bigcap }\limits_{{i \in I}}\left( {{X}_{i} = {s}_{i}}\right) }\right\rbrack \]

\[ = \mathop{\sum }\limits_{{{s}_{d} \in {S}_{d}}}\mathop{\prod }\limits_{{i \in I}}{f}_{i}\left( {s}_{i}\right) = \mathop{\prod }\limits_{{j \in J}}{f}_{j}\left( {s}_{j}\right) \cdot \mathop{\sum }\limits_{{{s}_{d} \in {S}_{d}}}{f}_{d}\left( {s}_{d}\right) = \mathop{\prod }\limits_{{j \in J}}{f}_{j}\left( {s}_{j}\right) . \]
|
Yes
|
Theorem 8.12. Suppose \( {\left\{ {X}_{i}\right\} }_{i = 1}^{n} \) is a mutually independent family of random variables. Further, suppose that for \( i = 1,\ldots, n,{Y}_{i} \mathrel{\text{:=}} {g}_{i}\left( {X}_{i}\right) \) for some function \( {g}_{i} \) . Then \( {\left\{ {Y}_{i}\right\} }_{i = 1}^{n} \) is mutually independent.
|
Proof. It suffices to prove the theorem for \( n = 2 \); the general case follows easily by induction, using Theorem 8.9. For \( i = 1,2 \), let \( {t}_{i} \) be any value in the image of \( {Y}_{i} \), and let \( {S}_{i}^{\prime } \mathrel{\text{:=}} {g}_{i}^{-1}\left( \left\{ {t}_{i}\right\} \right) \). We have

\[ \mathrm{P}\left\lbrack {\left( {{Y}_{1} = {t}_{1}}\right) \cap \left( {{Y}_{2} = {t}_{2}}\right) }\right\rbrack = \mathrm{P}\left\lbrack {\left( {\mathop{\bigcup }\limits_{{{s}_{1} \in {S}_{1}^{\prime }}}\left( {{X}_{1} = {s}_{1}}\right) }\right) \cap \left( {\mathop{\bigcup }\limits_{{{s}_{2} \in {S}_{2}^{\prime }}}\left( {{X}_{2} = {s}_{2}}\right) }\right) }\right\rbrack \]

\[ = \mathrm{P}\left\lbrack {\mathop{\bigcup }\limits_{{{s}_{1} \in {S}_{1}^{\prime }}}\mathop{\bigcup }\limits_{{{s}_{2} \in {S}_{2}^{\prime }}}\left( {\left( {{X}_{1} = {s}_{1}}\right) \cap \left( {{X}_{2} = {s}_{2}}\right) }\right) }\right\rbrack \]

\[ = \mathop{\sum }\limits_{{{s}_{1} \in {S}_{1}^{\prime }}}\mathop{\sum }\limits_{{{s}_{2} \in {S}_{2}^{\prime }}}\mathrm{P}\left\lbrack {\left( {{X}_{1} = {s}_{1}}\right) \cap \left( {{X}_{2} = {s}_{2}}\right) }\right\rbrack \]

\[ = \mathop{\sum }\limits_{{{s}_{1} \in {S}_{1}^{\prime }}}\mathop{\sum }\limits_{{{s}_{2} \in {S}_{2}^{\prime }}}\mathrm{P}\left\lbrack {{X}_{1} = {s}_{1}}\right\rbrack \mathrm{P}\left\lbrack {{X}_{2} = {s}_{2}}\right\rbrack \]

\[ = \left( {\mathop{\sum }\limits_{{{s}_{1} \in {S}_{1}^{\prime }}}\mathrm{P}\left\lbrack {{X}_{1} = {s}_{1}}\right\rbrack }\right) \left( {\mathop{\sum }\limits_{{{s}_{2} \in {S}_{2}^{\prime }}}\mathrm{P}\left\lbrack {{X}_{2} = {s}_{2}}\right\rbrack }\right) \]

\[ = \mathrm{P}\left\lbrack {\mathop{\bigcup }\limits_{{{s}_{1} \in {S}_{1}^{\prime }}}\left( {{X}_{1} = {s}_{1}}\right) }\right\rbrack \mathrm{P}\left\lbrack {\mathop{\bigcup }\limits_{{{s}_{2} \in {S}_{2}^{\prime }}}\left( {{X}_{2} = {s}_{2}}\right) }\right\rbrack = \mathrm{P}\left\lbrack {{Y}_{1} = {t}_{1}}\right\rbrack \mathrm{P}\left\lbrack {{Y}_{2} = {t}_{2}}\right\rbrack . \]
|
Yes
|
Theorem 8.13. Suppose that \( G \) is a finite abelian group, and that \( W \) is a random variable uniformly distributed over \( G \) . Let \( Z \) be another random variable, taking values in some finite set \( U \), and suppose that \( W \) and \( Z \) are independent. Let \( \sigma : U \rightarrow G \) be some function, and define \( Y \mathrel{\text{:=}} W + \sigma \left( Z\right) \) . Then \( Y \) is uniformly distributed over \( G \), and \( Y \) and \( Z \) are independent.
|
Proof. Consider any fixed values \( t \in G \) and \( u \in U \). Evidently, the events \( \left( {Y = t}\right) \cap \left( {Z = u}\right) \) and \( \left( {W = t - \sigma \left( u\right) }\right) \cap \left( {Z = u}\right) \) are the same, and therefore, because \( W \) and \( Z \) are independent, we have

\[ \mathrm{P}\left\lbrack {\left( {Y = t}\right) \cap \left( {Z = u}\right) }\right\rbrack = \mathrm{P}\left\lbrack {W = t - \sigma \left( u\right) }\right\rbrack \mathrm{P}\left\lbrack {Z = u}\right\rbrack = \frac{1}{\left| G\right| }\mathrm{P}\left\lbrack {Z = u}\right\rbrack . \]

(8.16)

Since this holds for every \( u \in U \), making use of total probability (8.9), we have

\[ \mathrm{P}\left\lbrack {Y = t}\right\rbrack = \mathop{\sum }\limits_{{u \in U}}\mathrm{P}\left\lbrack {\left( {Y = t}\right) \cap \left( {Z = u}\right) }\right\rbrack = \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{u \in U}}\mathrm{P}\left\lbrack {Z = u}\right\rbrack = \frac{1}{\left| G\right| }. \]

Thus, \( Y \) is uniformly distributed over \( G \), and by (8.16), \( Y \) and \( Z \) are independent. (Alternatively, this conclusion could have been deduced directly from (8.16) using Theorem 8.6.)
|
Yes
|
Let \( k \) be a positive integer. This example shows how we can take a mutually independent family of \( k \) random variables, and, from it, construct a much larger, \( k \) -wise independent family of random variables.
|
Let \( p \) be a prime, with \( p \geq k \) . Let \( {\left\{ {H}_{i}\right\} }_{i = 0}^{k - 1} \) be a mutually independent family of random variables, each of which is uniformly distributed over \( {\mathbb{Z}}_{p} \) . Let us set \( H \mathrel{\text{:=}} \left( {{H}_{0},\ldots ,{H}_{k - 1}}\right) \), which, by assumption, is uniformly distributed over \( {\mathbb{Z}}_{p}^{\times k} \) . For each \( s \in {\mathbb{Z}}_{p} \), we define the function \( {\rho }_{s} : {\mathbb{Z}}_{p}^{\times k} \rightarrow {\mathbb{Z}}_{p} \) as follows: for \( r = \left( {{r}_{0},\ldots ,{r}_{k - 1}}\right) \in {\mathbb{Z}}_{p}^{\times k},{\rho }_{s}\left( r\right) \mathrel{\text{:=}} \mathop{\sum }\limits_{{i = 0}}^{{k - 1}}{r}_{i}{s}^{i} \) ; that is, \( {\rho }_{s}\left( r\right) \) is the value obtained by evaluating the polynomial \( {r}_{0} + {r}_{1}X + \cdots + {r}_{k - 1}{X}^{k - 1} \in {\mathbb{Z}}_{p}\left\lbrack X\right\rbrack \) at the point \( s \) . Each \( s \in {\mathbb{Z}}_{p} \) defines a random variable \( {\rho }_{s}\left( H\right) = {H}_{0} + {H}_{1}s + \cdots + {H}_{k - 1}{s}^{k - 1} \) . We claim that the family of random variables \( {\left\{ {\rho }_{s}\left( H\right) \right\} }_{s \in {\mathbb{Z}}_{p}} \) is \( k \) -wise independent, with each individual \( {\rho }_{s}\left( H\right) \) uniformly distributed over \( {\mathbb{Z}}_{p} \) . By Theorem 8.10, it suffices to show the following: for all distinct points \( {s}_{1},\ldots ,{s}_{k} \in {\mathbb{Z}}_{p} \), the random variable \( W \mathrel{\text{:=}} \left( {{\rho }_{{s}_{1}}\left( H\right) ,\ldots ,{\rho }_{{s}_{k}}\left( H\right) }\right) \) is uniformly distributed over \( {\mathbb{Z}}_{p}^{\times k} \) . 
So let \( {s}_{1},\ldots ,{s}_{k} \) be fixed, distinct elements of \( {\mathbb{Z}}_{p} \), and define the function \[ \rho : \;{\mathbb{Z}}_{p}^{\times k} \rightarrow {\mathbb{Z}}_{p}^{\times k} \] \[ r \mapsto \left( {{\rho }_{{s}_{1}}\left( r\right) ,\ldots ,{\rho }_{{s}_{k}}\left( r\right) }\right) . \] Thus, \( W = \rho \left( H\right) \), and by Lagrange interpolation (Theorem 7.15), the function \( \rho \) is a bijection; moreover, since \( H \) is uniformly distributed over \( {\mathbb{Z}}_{p}^{\times k} \), so is \( W \) .
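A small-scale check of this construction in Python; the choices \( p = 5 \), \( k = 2 \), and the evaluation points are arbitrary. The key fact is that \( \rho \) is a bijection, so the image of a uniform \( H \) is uniform:

```python
from itertools import product

# Evaluating a polynomial with random coefficients (H_0,...,H_{k-1}) at
# k distinct points is a bijection on Z_p^{x k}, so the vector of
# evaluations is uniform whenever the coefficient vector is uniform.
p, k = 5, 2
points = (1, 3)  # any k distinct elements of Z_p

def rho(r):
    # r = (r_0, ..., r_{k-1}); return (rho_{s}(r) for each point s)
    return tuple(sum(ri * pow(s, i, p) for i, ri in enumerate(r)) % p
                 for s in points)

images = {rho(r) for r in product(range(p), repeat=k)}
assert len(images) == p ** k  # rho is injective, hence bijective
```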
|
Yes
|
Consider again the secret sharing scenario of Example 8.26. Suppose at the critical moment, one of the officers is missing in action. The military planners would perhaps like a more flexible secret sharing scheme; for example, perhaps shares of the launch code should be distributed to three officers, in such a way that no single officer can authorize a launch, but any two can. More generally, for positive integers \( k \) and \( \ell \), with \( \ell \geq k + 1 \), the scheme should distribute shares among \( \ell \) officers, so that no coalition of \( k \) (or fewer) officers can authorize a launch, yet any coalition of \( k + 1 \) officers can.
|
First, we show how any coalition of \( k + 1 \) officers can reconstruct the launch code from their collection of shares, say, \( {Y}_{{s}_{1}},\ldots ,{Y}_{{s}_{k + 1}} \). This is easily done by means of the Lagrange interpolation formula (again, Theorem 7.15). Indeed, we only need to recover the high-order coefficient, \( \sigma \left( Z\right) \), which we can obtain via the formula

\[ \sigma \left( Z\right) = \mathop{\sum }\limits_{{i = 1}}^{{k + 1}}\frac{{Y}_{{s}_{i}}}{\mathop{\prod }\limits_{{j \neq i}}\left( {{s}_{i} - {s}_{j}}\right) }. \]

Second, we show that no coalition of \( k \) officers learns anything about the launch code, even if they pool their shares. Formally, this means that if \( {s}_{1},\ldots ,{s}_{k} \) are fixed, distinct points, then \( {Y}_{{s}_{1}},\ldots ,{Y}_{{s}_{k}}, Z \) form a mutually independent family of random variables. This is easily seen, as follows. Define \( H \mathrel{\text{:=}} \left( {{H}_{0},\ldots ,{H}_{k - 1}}\right) \), and \( W \mathrel{\text{:=}} \rho \left( H\right) \), where \( \rho : {\mathbb{Z}}_{p}^{\times k} \rightarrow {\mathbb{Z}}_{p}^{\times k} \) is as defined in (8.17), and set \( Y \mathrel{\text{:=}} \left( {{Y}_{{s}_{1}},\ldots ,{Y}_{{s}_{k}}}\right) \). Now, by hypothesis, \( H \) and \( Z \) are independent, and \( H \) is uniformly distributed over \( {\mathbb{Z}}_{p}^{\times k} \). As we noted in Example 8.27, \( \rho \) is a bijection, and hence, \( W \) is uniformly distributed over \( {\mathbb{Z}}_{p}^{\times k} \); moreover (by Theorem 8.12), \( W \) and \( Z \) are independent.
Observe that \( Y = W + {\sigma }^{\prime }\left( Z\right) \), where \( {\sigma }^{\prime } \) maps \( u \in U \) to \( \left( {\sigma \left( u\right) {s}_{1}^{k},\ldots ,\sigma \left( u\right) {s}_{k}^{k}}\right) \in {\mathbb{Z}}_{p}^{\times k} \), and so applying Theorem 8.13 (with the group \( {\mathbb{Z}}_{p}^{\times k} \), the random variables \( W \) and \( Z \), and the function \( {\sigma }^{\prime } \) ), we see that \( Y \) and \( Z \) are independent, where \( Y \) is uniformly distributed over \( {\mathbb{Z}}_{p}^{\times k} \). From this, it follows (using Theorems 8.9 and 8.10) that the family of random variables \( {Y}_{{s}_{1}},\ldots ,{Y}_{{s}_{k}}, Z \) is mutually independent, with each \( {Y}_{{s}_{i}} \) uniformly distributed over \( {\mathbb{Z}}_{p} \).
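As a sanity check of the reconstruction formula, here is a small Python sketch; the prime \( p = 97 \), the threshold \( k = 2 \), and the share points are made-up illustrative parameters, not values from the text. It forms shares of a degree-\( k \) polynomial whose high-order coefficient is the secret, and recovers that coefficient from \( k + 1 \) shares.

```python
import random

# Illustrative parameters (not from the text): a small prime and threshold
# k = 2, so any k + 1 = 3 shares suffice to reconstruct.
p = 97
k = 2
secret = 42  # plays the role of sigma(Z), the degree-k coefficient

random.seed(1)
# Low-order coefficients H_0, ..., H_{k-1} are chosen uniformly at random.
coeffs = [random.randrange(p) for _ in range(k)] + [secret]

def share(s):
    # Y_s := H_0 + H_1*s + ... + H_{k-1}*s^(k-1) + secret*s^k  (mod p)
    return sum(c * pow(s, i, p) for i, c in enumerate(coeffs)) % p

def reconstruct(points):
    # sigma(Z) = sum_i Y_{s_i} / prod_{j != i} (s_i - s_j), computed mod p
    total = 0
    for i, si in enumerate(points):
        denom = 1
        for j, sj in enumerate(points):
            if j != i:
                denom = denom * (si - sj) % p
        total = (total + share(si) * pow(denom, p - 2, p)) % p  # Fermat inverse
    return total

assert reconstruct([1, 2, 3]) == secret
```

Any three distinct nonzero points recover the same coefficient, since the interpolating polynomial of degree at most \( k \) through \( k + 1 \) points is unique.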
|
Yes
|
Theorem 8.14 (Linearity of expectation). If \( X \) and \( Y \) are real-valued random variables, and a is a real number, then\n\n\[ \mathrm{E}\left\lbrack {X + Y}\right\rbrack = \mathrm{E}\left\lbrack X\right\rbrack + \mathrm{E}\left\lbrack Y\right\rbrack \text{ and }\mathrm{E}\left\lbrack {aX}\right\rbrack = a\mathrm{E}\left\lbrack X\right\rbrack . \]
|
Proof. It is easiest to prove this using the defining equation (8.18) for expectation. For \( \omega \in \Omega \), the value of the random variable \( X + Y \) at \( \omega \) is by definition \( X\left( \omega \right) + Y\left( \omega \right) \), and so we have\n\n\[ \mathrm{E}\left\lbrack {X + Y}\right\rbrack = \mathop{\sum }\limits_{\omega }\left( {X\left( \omega \right) + Y\left( \omega \right) }\right) \mathrm{P}\left( \omega \right) \]\n\n\[ = \mathop{\sum }\limits_{\omega }X\left( \omega \right) \mathrm{P}\left( \omega \right) + \mathop{\sum }\limits_{\omega }Y\left( \omega \right) \mathrm{P}\left( \omega \right) \]\n\n\[ = \mathrm{E}\left\lbrack X\right\rbrack + \mathrm{E}\left\lbrack Y\right\rbrack \]\n\nFor the second part of the theorem, by a similar calculation, we have\n\n\[ \mathrm{E}\left\lbrack {aX}\right\rbrack = \mathop{\sum }\limits_{\omega }\left( {{aX}\left( \omega \right) }\right) \mathrm{P}\left( \omega \right) = a\mathop{\sum }\limits_{\omega }X\left( \omega \right) \mathrm{P}\left( \omega \right) = a\mathrm{E}\left\lbrack X\right\rbrack . \]
|
Yes
|
Theorem 8.15. If \( X \) and \( Y \) are independent, real-valued random variables, then \( \mathrm{E}\left\lbrack {XY}\right\rbrack = \mathrm{E}\left\lbrack X\right\rbrack \mathrm{E}\left\lbrack Y\right\rbrack \) .
|
Proof. It is easiest to prove this using (8.20), with the function \( f\left( {s, t}\right) \mathrel{\text{:=}} {st} \) applied to the random variable \( \left( {X, Y}\right) \) . We have\n\n\[ \mathrm{E}\left\lbrack {XY}\right\rbrack = \mathop{\sum }\limits_{{s, t}}{st}\mathrm{P}\left\lbrack {\left( {X = s}\right) \cap \left( {Y = t}\right) }\right\rbrack \]\n\n\[ = \mathop{\sum }\limits_{{s, t}}{st}\mathrm{P}\left\lbrack {X = s}\right\rbrack \mathrm{P}\left\lbrack {Y = t}\right\rbrack \]\n\n\[ = \left( {\mathop{\sum }\limits_{s}s\mathrm{P}\left\lbrack {X = s}\right\rbrack }\right) \left( {\mathop{\sum }\limits_{t}t\mathrm{P}\left\lbrack {Y = t}\right\rbrack }\right) \]\n\n\[ = \mathrm{E}\left\lbrack X\right\rbrack \mathrm{E}\left\lbrack Y\right\rbrack \]
|
Yes
|
Theorem 8.16. Let \( X \) be a \( 0/1 \) -valued random variable. Then \( \mathrm{E}\left\lbrack X\right\rbrack = \mathrm{P}\left\lbrack {X = 1}\right\rbrack \) .
|
Proof. \( \mathrm{E}\left\lbrack X\right\rbrack = 0 \cdot \mathrm{P}\left\lbrack {X = 0}\right\rbrack + 1 \cdot \mathrm{P}\left\lbrack {X = 1}\right\rbrack = \mathrm{P}\left\lbrack {X = 1}\right\rbrack \) .
|
Yes
|
Theorem 8.17. If \( X \) is a random variable that takes only non-negative integer values, then\n\n\[ \mathrm{E}\left\lbrack X\right\rbrack = \mathop{\sum }\limits_{{i \geq 1}}\mathrm{P}\left\lbrack {X \geq i}\right\rbrack \]\n\nNote that since \( X \) has a finite image, the sum appearing above is finite.
|
Proof. Suppose that the image of \( X \) is contained in \( \{ 0,\ldots, n\} \), and for \( i = 1,\ldots, n \) , let \( {X}_{i} \) be the indicator variable for the event \( X \geq i \) . Then \( X = {X}_{1} + \cdots + {X}_{n} \), and by linearity of expectation and Theorem 8.16, we have\n\n\[ \mathrm{E}\left\lbrack X\right\rbrack = \mathop{\sum }\limits_{{i = 1}}^{n}\mathrm{E}\left\lbrack {X}_{i}\right\rbrack = \mathop{\sum }\limits_{{i = 1}}^{n}\mathrm{P}\left\lbrack {X \geq i}\right\rbrack \]
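As a quick numerical illustration of this identity (not part of the proof), the following snippet checks \( \mathrm{E}\left\lbrack X\right\rbrack = \mathop{\sum }_{i \geq 1}\mathrm{P}\left\lbrack {X \geq i}\right\rbrack \) for a small, made-up distribution on \( \{ 0,\ldots ,4\} \) in exact rational arithmetic.

```python
from fractions import Fraction

# A small, made-up distribution on {0,...,4}.
dist = {0: Fraction(1, 4), 1: Fraction(1, 4), 2: Fraction(1, 8),
        3: Fraction(1, 8), 4: Fraction(1, 4)}
assert sum(dist.values()) == 1

# E[X] computed directly from the distribution.
expectation = sum(s * pr for s, pr in dist.items())

# The tail sum: P[X >= 1] + P[X >= 2] + ... (zero beyond the largest value).
tail_sum = sum(sum(pr for s, pr in dist.items() if s >= i)
               for i in range(1, 5))

assert expectation == tail_sum
```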
|
Yes
|
Theorem 8.18. Let \( X \) be a real-valued random variable, with \( \mu \mathrel{\text{:=}} \mathrm{E}\left\lbrack X\right\rbrack \), and let \( a \) and \( b \) be real numbers. Then we have\n\n(i) \( \operatorname{Var}\left\lbrack X\right\rbrack = \mathrm{E}\left\lbrack {X}^{2}\right\rbrack - {\mu }^{2} \), and\n\n(ii) \( \operatorname{Var}\left\lbrack {{aX} + b}\right\rbrack = {a}^{2}\operatorname{Var}\left\lbrack X\right\rbrack \) .
|
Proof. For part (i), observe that\n\n\[ \operatorname{Var}\left\lbrack X\right\rbrack = \mathrm{E}\left\lbrack {\left( X - \mu \right) }^{2}\right\rbrack = \mathrm{E}\left\lbrack {{X}^{2} - {2\mu X} + {\mu }^{2}}\right\rbrack \]\n\n\[ = \mathrm{E}\left\lbrack {X}^{2}\right\rbrack - {2\mu }\mathrm{E}\left\lbrack X\right\rbrack + \mathrm{E}\left\lbrack {\mu }^{2}\right\rbrack = \mathrm{E}\left\lbrack {X}^{2}\right\rbrack - 2{\mu }^{2} + {\mu }^{2} \]\n\n\[ = \mathrm{E}\left\lbrack {X}^{2}\right\rbrack - {\mu }^{2}, \]\n\nwhere in the third equality, we used the fact that expectation is linear, and in the fourth equality, we used the fact that \( \mathrm{E}\left\lbrack c\right\rbrack = c \) for constant \( c \) (in this case, \( c = {\mu }^{2} \) ).
|
Yes
|
Theorem 8.20. If \( {\left\{ {X}_{i}\right\} }_{i \in I} \) is a finite, pairwise independent family of real-valued random variables, then\n\n\[ \operatorname{Var}\left\lbrack {\mathop{\sum }\limits_{{i \in I}}{X}_{i}}\right\rbrack = \mathop{\sum }\limits_{{i \in I}}\operatorname{Var}\left\lbrack {X}_{i}\right\rbrack \]
|
Proof. We have\n\n\[ \operatorname{Var}\left\lbrack {\mathop{\sum }\limits_{{i \in I}}{X}_{i}}\right\rbrack = \mathrm{E}\left\lbrack {\left( \mathop{\sum }\limits_{{i \in I}}{X}_{i}\right) }^{2}\right\rbrack - {\left( \mathrm{E}\left\lbrack \mathop{\sum }\limits_{{i \in I}}{X}_{i}\right\rbrack \right) }^{2} \]\n\n\[ = \mathop{\sum }\limits_{{i \in I}}\mathrm{E}\left\lbrack {X}_{i}^{2}\right\rbrack + \mathop{\sum }\limits_{\substack{{i, j \in I} \\ {i \neq j} }}\left( {\mathrm{E}\left\lbrack {{X}_{i}{X}_{j}}\right\rbrack - \mathrm{E}\left\lbrack {X}_{i}\right\rbrack \mathrm{E}\left\lbrack {X}_{j}\right\rbrack }\right) - \mathop{\sum }\limits_{{i \in I}}\mathrm{E}{\left\lbrack {X}_{i}\right\rbrack }^{2} \]\n\n(by linearity of expectation and rearranging terms)\n\n\[ = \mathop{\sum }\limits_{{i \in I}}\mathrm{E}\left\lbrack {X}_{i}^{2}\right\rbrack - \mathop{\sum }\limits_{{i \in I}}\mathrm{E}{\left\lbrack {X}_{i}\right\rbrack }^{2} \]\n\n(by pairwise independence and Theorem 8.15)\n\n\[ = \mathop{\sum }\limits_{{i \in I}}\operatorname{Var}\left\lbrack {X}_{i}\right\rbrack \]
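Pairwise independence really is enough here. The classic family \( {X}_{1},{X}_{2},{X}_{1} \oplus {X}_{2} \) (two independent fair bits and their XOR) is pairwise but not mutually independent; the following sketch verifies the identity for it by direct computation over the four-point sample space. The construction is a standard example, not taken from the text.

```python
from fractions import Fraction
from itertools import product

# Sample space: two independent fair bits (b1, b2), each outcome with
# probability 1/4. X1 := b1, X2 := b2, X3 := b1 XOR b2 is pairwise
# independent but not mutually independent.
outcomes = list(product([0, 1], repeat=2))
pr = Fraction(1, 4)

def E(f):
    # Expectation of f over the four equally likely outcomes.
    return sum(pr * f(w) for w in outcomes)

def Var(f):
    return E(lambda w: f(w) ** 2) - E(f) ** 2

X = [lambda w: w[0], lambda w: w[1], lambda w: w[0] ^ w[1]]
S = lambda w: sum(x(w) for x in X)

# Var of the sum equals the sum of the variances (3/4 on both sides).
assert Var(S) == sum(Var(x) for x in X)
```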
|
Yes
|
Theorem 8.21. Let \( X \) be a \( 0/1 \) -valued random variable, with \( p \mathrel{\text{:=}} \mathrm{P}\left\lbrack {X = 1}\right\rbrack \) and \( q \mathrel{\text{:=}} \mathrm{P}\left\lbrack {X = 0}\right\rbrack = 1 - p \) . Then \( \operatorname{Var}\left\lbrack X\right\rbrack = {pq} \) .
|
Proof. We have \( \mathrm{E}\left\lbrack X\right\rbrack = p \) and \( \mathrm{E}\left\lbrack {X}^{2}\right\rbrack = \mathrm{P}\left\lbrack {{X}^{2} = 1}\right\rbrack = \mathrm{P}\left\lbrack {X = 1}\right\rbrack = p \) . Therefore,\n\n\[ \operatorname{Var}\left\lbrack X\right\rbrack = \mathrm{E}\left\lbrack {X}^{2}\right\rbrack - \mathrm{E}{\left\lbrack X\right\rbrack }^{2} = p - {p}^{2} = p\left( {1 - p}\right) = {pq}. \]
|
Yes
|
Let \( X \) be uniformly distributed over \( \{ 1,\ldots, m\} \) . Let us compute \( \mathrm{E}\left\lbrack X\right\rbrack \) and \( \operatorname{Var}\left\lbrack X\right\rbrack \) .
|
We have\n\n\[ \mathrm{E}\left\lbrack X\right\rbrack = \mathop{\sum }\limits_{{s = 1}}^{m}s \cdot \frac{1}{m} = \frac{m\left( {m + 1}\right) }{2} \cdot \frac{1}{m} = \frac{m + 1}{2}. \]\n\nWe also have\n\n\[ \mathrm{E}\left\lbrack {X}^{2}\right\rbrack = \mathop{\sum }\limits_{{s = 1}}^{m}{s}^{2} \cdot \frac{1}{m} = \frac{m\left( {m + 1}\right) \left( {{2m} + 1}\right) }{6} \cdot \frac{1}{m} = \frac{\left( {m + 1}\right) \left( {{2m} + 1}\right) }{6}. \]\n\nTherefore,\n\n\[ \operatorname{Var}\left\lbrack X\right\rbrack = \mathrm{E}\left\lbrack {X}^{2}\right\rbrack - \mathrm{E}{\left\lbrack X\right\rbrack }^{2} = \frac{{m}^{2} - 1}{12}. \]
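A short exact-arithmetic check of these two formulas for small \( m \) (illustrative only):

```python
from fractions import Fraction

# Check E[X] = (m+1)/2 and Var[X] = (m^2 - 1)/12 for X uniform on {1..m}.
for m in range(1, 20):
    values = range(1, m + 1)
    mean = sum(Fraction(s, m) for s in values)
    second = sum(Fraction(s * s, m) for s in values)  # E[X^2]
    assert mean == Fraction(m + 1, 2)
    assert second - mean ** 2 == Fraction(m * m - 1, 12)
```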
|
Yes
|
Let \( X \) denote the value of a roll of a die. Let \( \mathcal{A} \) be the event that \( X \) is even. Then the conditional distribution of \( X \) given \( \mathcal{A} \) is essentially the uniform distribution on \( \{ 2,4,6\} \), and hence
|
\[ \mathrm{E}\left\lbrack {X \mid \mathcal{A}}\right\rbrack = \frac{2 + 4 + 6}{3} = 4. \] Similarly, the conditional distribution of \( X \) given \( \overline{\mathcal{A}} \) is essentially the uniform distribution on \( \{ 1,3,5\} \), and so \[ \mathrm{E}\left\lbrack {X \mid \overline{\mathcal{A}}}\right\rbrack = \frac{1 + 3 + 5}{3} = 3. \] Using the law of total expectation, we can compute the expected value of \( X \) as follows: \[ \mathrm{E}\left\lbrack X\right\rbrack = \mathrm{E}\left\lbrack {X \mid \mathcal{A}}\right\rbrack \mathrm{P}\left\lbrack \mathcal{A}\right\rbrack + \mathrm{E}\left\lbrack {X \mid \overline{\mathcal{A}}}\right\rbrack \mathrm{P}\left\lbrack \overline{\mathcal{A}}\right\rbrack = 4 \cdot \frac{1}{2} + 3 \cdot \frac{1}{2} = \frac{7}{2}, \] which agrees with the calculation in the previous example.
|
Yes
|
Let \( X \) be a random variable with a binomial distribution, as in Example 8.18, that counts the number of successes among \( n \) Bernoulli trials, each of which succeeds with probability \( p \). Let us compute \( \mathrm{E}\left\lbrack X\right\rbrack \) and \( \operatorname{Var}\left\lbrack X\right\rbrack \).
|
We can write \( X \) as the sum of indicator variables, \( X = \mathop{\sum }\limits_{{i = 1}}^{n}{X}_{i} \), where \( {X}_{i} \) is the indicator variable for the event that the \( i \) th trial succeeds; each \( {X}_{i} \) takes the value 1 with probability \( p \) and 0 with probability \( q \mathrel{\text{:=}} 1 - p \), and the family of random variables \( {\left\{ {X}_{i}\right\} }_{i = 1}^{n} \) is mutually independent (see Example 8.24). By Theorems 8.16 and 8.21, we have \( \mathrm{E}\left\lbrack {X}_{i}\right\rbrack = p \) and \( \operatorname{Var}\left\lbrack {X}_{i}\right\rbrack = {pq} \) for \( i = 1,\ldots, n \). By linearity of expectation, we have\n\n\[ \mathrm{E}\left\lbrack X\right\rbrack = \mathop{\sum }\limits_{{i = 1}}^{n}\mathrm{E}\left\lbrack {X}_{i}\right\rbrack = {np}. \]\n\nBy Theorem 8.20, and the fact that \( {\left\{ {X}_{i}\right\} }_{i = 1}^{n} \) is mutually independent (and hence pairwise independent), we have\n\n\[ \operatorname{Var}\left\lbrack X\right\rbrack = \mathop{\sum }\limits_{{i = 1}}^{n}\operatorname{Var}\left\lbrack {X}_{i}\right\rbrack = {npq} \]
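These formulas can also be checked directly against the binomial distribution itself; the snippet below does so in exact arithmetic for made-up parameters \( n = 8 \) and \( p = 1/3 \).

```python
from fractions import Fraction
from math import comb

# Check E[X] = np and Var[X] = npq directly from the binomial pmf.
n = 8
p = Fraction(1, 3)   # a made-up success probability
q = 1 - p

pmf = {k: comb(n, k) * p**k * q**(n - k) for k in range(n + 1)}
assert sum(pmf.values()) == 1  # the pmf sums to 1 exactly

mean = sum(k * w for k, w in pmf.items())
var = sum(k * k * w for k, w in pmf.items()) - mean ** 2

assert mean == n * p
assert var == n * p * q
```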
|
Yes
|
Example 8.32. Our proof of Theorem 8.1 could be elegantly recast in terms of indicator variables. For \( \mathcal{B} \subseteq \Omega \), let \( {X}_{\mathcal{B}} \) be the indicator variable for \( \mathcal{B} \), so that \( {X}_{\mathcal{B}}\left( \omega \right) = {\delta }_{\omega }\left\lbrack \mathcal{B}\right\rbrack \) for each \( \omega \in \Omega \) . Equation (8.8) then becomes\n\n\[ \n{X}_{\mathcal{A}} = \mathop{\sum }\limits_{{\varnothing \varsubsetneq J \subseteq I}}{\left( -1\right) }^{\left| J\right| - 1}{X}_{{\mathcal{A}}_{J}} \n\]
|
and by Theorem 8.16 and linearity of expectation, we have\n\n\[ \n\mathrm{P}\left\lbrack \mathcal{A}\right\rbrack = \mathrm{E}\left\lbrack {X}_{\mathcal{A}}\right\rbrack = \mathop{\sum }\limits_{{\varnothing \varsubsetneq J \subseteq I}}{\left( -1\right) }^{\left| J\right| - 1}\mathrm{E}\left\lbrack {X}_{{\mathcal{A}}_{J}}\right\rbrack = \mathop{\sum }\limits_{{\varnothing \varsubsetneq J \subseteq I}}{\left( -1\right) }^{\left| J\right| - 1}\mathrm{P}\left\lbrack {\mathcal{A}}_{J}\right\rbrack \n\]
|
Yes
|
Theorem 8.22 (Markov’s inequality). Let \( X \) be a random variable that takes only non-negative real values. Then for every \( \alpha > 0 \), we have\n\n\[ \mathrm{P}\left\lbrack {X \geq \alpha }\right\rbrack \leq \mathrm{E}\left\lbrack X\right\rbrack /\alpha . \]
|
Proof. We have\n\n\[ \mathrm{E}\left\lbrack X\right\rbrack = \mathop{\sum }\limits_{s}s\mathrm{P}\left\lbrack {X = s}\right\rbrack = \mathop{\sum }\limits_{{s < \alpha }}s\mathrm{P}\left\lbrack {X = s}\right\rbrack + \mathop{\sum }\limits_{{s \geq \alpha }}s\mathrm{P}\left\lbrack {X = s}\right\rbrack ,\]\n\nwhere the summations are over elements \( s \) in the image of \( X \) . Since \( X \) takes only non-negative values, all of the terms are non-negative. Therefore,\n\n\[ \mathrm{E}\left\lbrack X\right\rbrack \geq \mathop{\sum }\limits_{{s \geq \alpha }}s\mathrm{P}\left\lbrack {X = s}\right\rbrack \geq \mathop{\sum }\limits_{{s \geq \alpha }}\alpha \mathrm{P}\left\lbrack {X = s}\right\rbrack = \alpha \mathrm{P}\left\lbrack {X \geq \alpha }\right\rbrack . \]
|
Yes
|
Theorem 8.23 (Chebyshev’s inequality). Let \( X \) be a real-valued random variable, with \( \mu \mathrel{\text{:=}} \mathrm{E}\left\lbrack X\right\rbrack \) and \( \nu \mathrel{\text{:=}} \operatorname{Var}\left\lbrack X\right\rbrack \) . Then for every \( \alpha > 0 \), we have\n\n\[ \mathrm{P}\left\lbrack {\left| {X - \mu }\right| \geq \alpha }\right\rbrack \leq \nu /{\alpha }^{2}. \]
|
Proof. Let \( Y \mathrel{\text{:=}} {\left( X - \mu \right) }^{2} \) . Then \( Y \) is always non-negative, and \( \mathrm{E}\left\lbrack Y\right\rbrack = \nu \) . Applying Markov’s inequality to \( Y \), we have\n\n\[ \mathrm{P}\left\lbrack {\left| {X - \mu }\right| \geq \alpha }\right\rbrack = \mathrm{P}\left\lbrack {Y \geq {\alpha }^{2}}\right\rbrack \leq \nu /{\alpha }^{2}. \]
|
Yes
|
Theorem 8.20 (along with part (ii) of Theorem 8.18) that \( \operatorname{Var}\left\lbrack \bar{X}\right\rbrack = \nu /n \) . Applying Chebyshev’s inequality, for every \( \varepsilon > 0 \), we have
|
\[ \mathrm{P}\left\lbrack {\left| {\bar{X} - \mu }\right| \geq \varepsilon }\right\rbrack \leq \frac{\nu }{n{\varepsilon }^{2}}. \]
|
Yes
|
Suppose we toss a fair coin 10,000 times. The expected number of heads is 5,000. What is an upper bound on the probability \( \alpha \) that we get 6,000 or more heads?
|
Using Markov’s inequality, we get \( \alpha \leq 5/6 \). Using Chebyshev’s inequality, and in particular, the inequality (8.27), we get\n\n\[ \alpha \leq \frac{1/4}{{10}^{4} \cdot {10}^{-2}} = \frac{1}{400}. \]\n\nFinally, using the Chernoff bound, we obtain\n\n\[ \alpha \leq {e}^{-{10}^{4} \cdot {10}^{-2}/\left( {2 \cdot {0.5}}\right) } = {e}^{-{100}} \approx {10}^{-{43.4}}. \]
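The three bounds can be reproduced numerically; in the sketch below, the variable names and the deviation \( \varepsilon = 0.1 \) follow the calculation above.

```python
import math

# n = 10^4 fair tosses; the event is "6000 or more heads",
# i.e., the sample mean deviates from p = 0.5 by eps = 0.1.
n, p, eps = 10_000, 0.5, 0.1
q = 1 - p

markov = (n * p) / 6000              # Markov: E[X]/alpha = 5000/6000
chebyshev = (p * q) / (n * eps**2)   # Chebyshev via (8.27): nu/(n*eps^2)
chernoff = math.exp(-n * eps**2 / (2 * q))  # Chernoff exponent -100

assert abs(markov - 5 / 6) < 1e-12
assert abs(chebyshev - 1 / 400) < 1e-15
assert chernoff < 1e-43              # e^{-100} is about 3.7e-44
```

The three estimates differ enormously: the Chernoff bound is smaller than the Chebyshev bound by over forty orders of magnitude.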
|
Yes
|
Theorem 8.25. Suppose \( {\left\{ {X}_{i}\right\} }_{i \in I} \) is pairwise independent. Then for all \( i, j \in I \) with \( i \neq j \), we have \( \mathrm{P}\left\lbrack {{X}_{i} = {X}_{j}}\right\rbrack = 1/m \) .
|
Proof. The event \( {X}_{i} = {X}_{j} \) occurs if and only if \( {X}_{i} = s \) and \( {X}_{j} = s \) for some \( s \in S \) . Therefore,\n\n\[ \mathrm{P}\left\lbrack {{X}_{i} = {X}_{j}}\right\rbrack = \mathop{\sum }\limits_{{s \in S}}\mathrm{P}\left\lbrack {\left( {{X}_{i} = s}\right) \cap \left( {{X}_{j} = s}\right) }\right\rbrack \text{ (by Boole’s equality (8.7)) } \]\n\n\[ = \mathop{\sum }\limits_{{s \in S}}1/{m}^{2}\text{ (by pairwise independence) } \]\n\n\[ = 1/m\text{.} \]
|
Yes
|
Theorem 8.26. Suppose \( {\left\{ {X}_{i}\right\} }_{i \in I} \) is pairwise independent. Then\n\n\[ \mathrm{P}\left\lbrack \mathcal{C}\right\rbrack \leq \frac{n\left( {n - 1}\right) }{2m} \]
|
Proof. Let \( {I}^{\left( 2\right) } \mathrel{\text{:=}} \{ J \subseteq I : \left| J\right| = 2\} \) . Then using Boole’s inequality (8.6) and Theorem 8.25, we have\n\n\[ \mathrm{P}\left\lbrack \mathcal{C}\right\rbrack \leq \mathop{\sum }\limits_{{\{ i, j\} \in {I}^{\left( 2\right) }}}\mathrm{P}\left\lbrack {{X}_{i} = {X}_{j}}\right\rbrack = \mathop{\sum }\limits_{{\{ i, j\} \in {I}^{\left( 2\right) }}}\frac{1}{m} = \frac{\left| {I}^{\left( 2\right) }\right| }{m} = \frac{n\left( {n - 1}\right) }{2m}. \]
|
Yes
|
Theorem 8.27. Suppose \( {\left\{ {X}_{i}\right\} }_{i \in I} \) is pairwise independent. Then\n\n\[ \mathrm{E}\left\lbrack M\right\rbrack \leq \sqrt{{n}^{2}/m + n} \]
|
Proof. To prove this, we use the fact that \( \mathrm{E}{\left\lbrack M\right\rbrack }^{2} \leq \mathrm{E}\left\lbrack {M}^{2}\right\rbrack \) (see Theorem 8.19), and that \( {M}^{2} \leq Z \mathrel{\text{:=}} \mathop{\sum }\limits_{{s \in S}}{N}_{s}^{2} \) . It will therefore suffice to show that\n\n\[ \mathrm{E}\left\lbrack Z\right\rbrack \leq {n}^{2}/m + n \]\n\n(8.32)\n\nTo this end, for \( i \in I \) and \( s \in S \), let \( {L}_{is} \) be the indicator variable for the event that ball \( i \) lands in bin \( s \) (i.e., \( {X}_{i} = s \) ), and for \( i, j \in I \), let \( {C}_{ij} \) be the indicator variable for the event that balls \( i \) and \( j \) land in the same bin (i.e., \( {X}_{i} = {X}_{j} \) ). Observing that\n\n\( {C}_{ij} = \mathop{\sum }\limits_{{s \in S}}{L}_{is}{L}_{js} \), we have\n\n\[ Z = \mathop{\sum }\limits_{{s \in S}}{N}_{s}^{2} = \mathop{\sum }\limits_{{s \in S}}{\left( \mathop{\sum }\limits_{{i \in I}}{L}_{is}\right) }^{2} = \mathop{\sum }\limits_{{s \in S}}\left( {\mathop{\sum }\limits_{{i \in I}}{L}_{is}}\right) \left( {\mathop{\sum }\limits_{{j \in I}}{L}_{js}}\right) = \mathop{\sum }\limits_{{i, j \in I}}\mathop{\sum }\limits_{{s \in S}}{L}_{is}{L}_{js} \]\n\n\[ = \mathop{\sum }\limits_{{i, j \in I}}{C}_{ij} \]\n\nFor \( i, j \in I \), we have \( \mathrm{E}\left\lbrack {C}_{ij}\right\rbrack = \mathrm{P}\left\lbrack {{X}_{i} = {X}_{j}}\right\rbrack \) (see Theorem 8.16), and so by Theorem 8.25, we have \( \mathrm{E}\left\lbrack {C}_{ij}\right\rbrack = 1/m \) if \( i \neq j \), and clearly, \( \mathrm{E}\left\lbrack {C}_{ij}\right\rbrack = 1 \) if \( i = j \) . 
By linearity of expectation, we have\n\n\[ \mathrm{E}\left\lbrack Z\right\rbrack = \mathop{\sum }\limits_{{i, j \in I}}\mathrm{E}\left\lbrack {C}_{ij}\right\rbrack = \mathop{\sum }\limits_{\substack{{i, j \in I} \\ {i \neq j} }}\mathrm{E}\left\lbrack {C}_{ij}\right\rbrack + \mathop{\sum }\limits_{{i \in I}}\mathrm{E}\left\lbrack {C}_{ii}\right\rbrack = \frac{n\left( {n - 1}\right) }{m} + n \leq {n}^{2}/m + n, \]\n\nwhich proves (8.32).
|
Yes
|
Theorem 8.28. Suppose \( {\left\{ {X}_{i}\right\} }_{i \in I} \) is mutually independent. Then\n\n\[ \mathrm{P}\left\lbrack \mathcal{C}\right\rbrack \geq 1 - {e}^{-n\left( {n - 1}\right) /{2m}}. \]
|
Proof. Let \( \alpha \mathrel{\text{:=}} \mathrm{P}\left\lbrack \overline{\mathcal{C}}\right\rbrack \) . We want to show \( \alpha \leq {e}^{-n\left( {n - 1}\right) /{2m}} \) . We may assume that \( I = \{ 1,\ldots, n\} \) (the labels make no difference) and that \( n \leq m \) (otherwise, \( \alpha = 0 \) ). Under the hypothesis of the theorem, the random variable \( \left( {{X}_{1},\ldots ,{X}_{n}}\right) \) is uniformly distributed over \( {S}^{\times n} \) . Among all \( {m}^{n} \) sequences \( \left( {{s}_{1},\ldots ,{s}_{n}}\right) \in {S}^{\times n} \), there are a total of \( m\left( {m - 1}\right) \cdots \left( {m - n + 1}\right) \) that contain no repetitions: there are \( m \) choices for \( {s}_{1} \), and for any fixed value of \( {s}_{1} \), there are \( m - 1 \) choices for \( {s}_{2} \), and so on. Therefore\n\n\[ \alpha = m\left( {m - 1}\right) \cdots \left( {m - n + 1}\right) /{m}^{n} = \left( {1 - \frac{1}{m}}\right) \left( {1 - \frac{2}{m}}\right) \cdots \left( {1 - \frac{n - 1}{m}}\right) . \]\n\nUsing part (i) of §A1, we obtain\n\n\[ \alpha \leq {e}^{-\mathop{\sum }\limits_{{i = 1}}^{{n - 1}}i/m} = {e}^{-n\left( {n - 1}\right) /{2m}}. \]
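The following snippet compares the exact non-collision product from the proof against the upper bound of Theorem 8.26 and the lower bound of Theorem 8.28, using the classic birthday parameters \( n = 23 \), \( m = 365 \) (chosen here only as an example).

```python
import math

# Exact collision probability for n balls thrown mutually independently
# and uniformly into m bins.
def collision_prob(n, m):
    no_collision = 1.0
    for i in range(n):
        no_collision *= (m - i) / m
    return 1.0 - no_collision

n, m = 23, 365  # the classic birthday numbers, used here only as an example
exact = collision_prob(n, m)
upper = n * (n - 1) / (2 * m)                 # Theorem 8.26
lower = 1 - math.exp(-n * (n - 1) / (2 * m))  # Theorem 8.28

assert lower <= exact <= upper
assert exact > 0.5   # the well-known birthday surprise
```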
|
Yes
|
Suppose Alice wants to send a message to Bob in such a way that Bob can be reasonably sure that the message he receives really came from Alice, and was not modified in transit by some malicious adversary. We present a solution to this problem here that works assuming that Alice and Bob share a randomly generated secret key, and that this key is used to authenticate just a single message (multiple messages can be authenticated using multiple keys).
|
Suppose that \( {\left\{ {\Phi }_{r}\right\} }_{r \in R} \) is a pairwise independent family of hash functions from \( S \) to \( T \) . We model the shared random key as a random variable \( H \), uniformly distributed over \( R \) . We also model Alice’s message as a random variable \( X \), taking values in the set \( S \) . We make no assumption about the distribution of \( X \), but we do assume that \( X \) and \( H \) are independent. When Alice sends the message \( X \) to Bob, she also sends the "tag" \( V \mathrel{\text{:=}} {\Phi }_{H}\left( X\right) \).
|
No
|
Example 8.36. By setting \( k \mathrel{\text{:=}} 2 \) in Example 8.27, for each prime \( p \), we immediately get a pairwise independent family of hash functions \( {\left\{ {\Phi }_{r}\right\} }_{r \in R} \) from \( {\mathbb{Z}}_{p} \) to \( {\mathbb{Z}}_{p} \) , where \( R = {\mathbb{Z}}_{p} \times {\mathbb{Z}}_{p} \), and for \( r = \left( {{r}_{0},{r}_{1}}\right) \in R \), the hash function \( {\Phi }_{r} \) is given by
|
\[ {\Phi }_{r} : \;{\mathbb{Z}}_{p} \rightarrow {\mathbb{Z}}_{p} \] \[ s \mapsto {r}_{0} + {r}_{1}s. \]
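Since \( p \) is small here, pairwise independence of this family can be checked exhaustively: for every fixed pair \( s \neq {s}^{\prime } \), each pair of hash values should arise from exactly one key \( \left( {{r}_{0},{r}_{1}}\right) \). A Python sketch with a made-up small prime:

```python
from itertools import product
from collections import Counter

# For each pair s != t, the map (r0, r1) -> (Phi_r(s), Phi_r(t)) should be
# a bijection on Z_p x Z_p, i.e., every output pair occurs exactly once.
def is_pairwise_independent(p):
    for s, t in product(range(p), repeat=2):
        if s == t:
            continue
        counts = Counter(((r0 + r1 * s) % p, (r0 + r1 * t) % p)
                         for r0, r1 in product(range(p), repeat=2))
        if len(counts) != p * p or any(c != 1 for c in counts.values()):
            return False
    return True

assert is_pairwise_independent(5)
```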
|
Yes
|
Let \( p \) be a prime, and let \( \ell \) be a positive integer. Let \( S \mathrel{\text{:=}} {\mathbb{Z}}_{p}^{\times \ell } \) and \( R \mathrel{\text{:=}} {\mathbb{Z}}_{p}^{\times \left( {\ell + 1}\right) } \) . For each \( r = \left( {{r}_{0},{r}_{1},\ldots ,{r}_{\ell }}\right) \in R \), we define the hash function \[ {\Phi }_{r} : \;S \rightarrow {\mathbb{Z}}_{p} \] \[ \left( {{s}_{1},\ldots ,{s}_{\ell }}\right) \mapsto {r}_{0} + {r}_{1}{s}_{1} + \cdots + {r}_{\ell }{s}_{\ell }. \] We will show that \( {\left\{ {\Phi }_{r}\right\} }_{r \in R} \) is a pairwise independent family of hash functions from \( S \) to \( {\mathbb{Z}}_{p} \) .
|
To this end, let \( H \) be a random variable uniformly distributed over \( R \) . We want to show that for each \( s,{s}^{\prime } \in S \) with \( s \neq {s}^{\prime } \), the random variable \( \left( {{\Phi }_{H}\left( s\right) ,{\Phi }_{H}\left( {s}^{\prime }\right) }\right) \) is uniformly distributed over \( {\mathbb{Z}}_{p} \times {\mathbb{Z}}_{p} \) . So let \( s \neq {s}^{\prime } \) be fixed, and define the function \[ \rho : \;R \rightarrow {\mathbb{Z}}_{p} \times {\mathbb{Z}}_{p} \] \[ r \mapsto \left( {{\Phi }_{r}\left( s\right) ,{\Phi }_{r}\left( {s}^{\prime }\right) }\right) . \] Because \( \rho \) is a group homomorphism, it will suffice to show that \( \rho \) is surjective (see Theorem 8.5). Suppose \( s = \left( {{s}_{1},\ldots ,{s}_{\ell }}\right) \) and \( {s}^{\prime } = \left( {{s}_{1}^{\prime },\ldots ,{s}_{\ell }^{\prime }}\right) \) . Since \( s \neq {s}^{\prime } \), we must have \( {s}_{j} \neq {s}_{j}^{\prime } \) for some \( j = 1,\ldots ,\ell \) . For this \( j \), consider the function \[ {\rho }^{\prime } : \;R \rightarrow {\mathbb{Z}}_{p} \times {\mathbb{Z}}_{p} \] \[ \left( {{r}_{0},{r}_{1},\ldots ,{r}_{\ell }}\right) \mapsto \left( {{r}_{0} + {r}_{j}{s}_{j},{r}_{0} + {r}_{j}{s}_{j}^{\prime }}\right) . \] Evidently, the image of \( \rho \) includes the image of \( {\rho }^{\prime } \), and by Example 8.36, the function \( {\rho }^{\prime } \) is surjective.
|
Yes
|
Our goal is to show that \( {\left\{ {\Phi }_{r}\right\} }_{r \in R} \) is a universal family of hash functions from \( {\mathbb{Z}}_{p} \) to \( {\mathbb{Z}}_{m} \) . So let \( s,{s}^{\prime } \in {\mathbb{Z}}_{p} \) with \( s \neq {s}^{\prime } \), let \( {H}_{0} \) and \( {H}_{1} \) be independent random variables, with \( {H}_{0} \) uniformly distributed over \( {\mathbb{Z}}_{p} \) and \( {H}_{1} \) uniformly distributed over \( {\mathbb{Z}}_{p}^{ * } \), and let \( H \mathrel{\text{:=}} \left( {{H}_{0},{H}_{1}}\right) \) . Also, let \( \mathcal{C} \) be the event that \( {\Phi }_{H}\left( s\right) = {\Phi }_{H}\left( {s}^{\prime }\right) \) . We want to show that \( \mathrm{P}\left\lbrack \mathcal{C}\right\rbrack \leq 1/m \) .
|
Let us define random variables \( Y \mathrel{\text{:=}} {H}_{0} + {H}_{1}s \) and \( {Y}^{\prime } \mathrel{\text{:=}} {H}_{0} + {H}_{1}{s}^{\prime } \) . Also, let \( \widehat{s} \mathrel{\text{:=}} {s}^{\prime } - s \neq 0 \) . Then we have\n\n\[ \mathrm{P}\left\lbrack \mathcal{C}\right\rbrack = \mathrm{P}\left\lbrack {\llbracket Y{\rrbracket }_{m} = \llbracket {Y}^{\prime }{\rrbracket }_{m}}\right\rbrack \]\n\n\[ = \mathrm{P}\left\lbrack {\llbracket Y{\rrbracket }_{m} = \llbracket Y + {H}_{1}\widehat{s}{\rrbracket }_{m}}\right\rbrack \;\left( {\text{since}\;{Y}^{\prime } = Y + {H}_{1}\widehat{s}}\right) \]\n\n\[ = \mathop{\sum }\limits_{{\alpha \in {\mathbb{Z}}_{p}}}\mathrm{P}\left\lbrack {\left( {\llbracket Y{\rrbracket }_{m} = \llbracket Y + {H}_{1}\widehat{s}{\rrbracket }_{m}}\right) \cap \left( {Y = \alpha }\right) }\right\rbrack \;\text{(law of total probability (8.9))} \]\n\n\[ = \mathop{\sum }\limits_{{\alpha \in {\mathbb{Z}}_{p}}}\mathrm{P}\left\lbrack {\left( {\llbracket \alpha {\rrbracket }_{m} = \llbracket \alpha + {H}_{1}\widehat{s}{\rrbracket }_{m}}\right) \cap \left( {Y = \alpha }\right) }\right\rbrack \]\n\n\[ = \mathop{\sum }\limits_{{\alpha \in {\mathbb{Z}}_{p}}}\mathrm{P}\left\lbrack {\llbracket \alpha {\rrbracket }_{m} = \llbracket \alpha + {H}_{1}\widehat{s}{\rrbracket }_{m}}\right\rbrack \mathrm{P}\left\lbrack {Y = \alpha }\right\rbrack \]\n\n(by Theorem 8.13, \( Y \) and \( {H}_{1} \) are independent).\n\nIt will suffice to show that\n\n\[ \mathrm{P}\left\lbrack {\llbracket \alpha {\rrbracket }_{m} = \llbracket \alpha + {H}_{1}\widehat{s}{\rrbracket }_{m}}\right\rbrack \leq 1/m \]\n\n(8.33)\n\nfor each \( \alpha \in {\mathbb{Z}}_{p} \), since then\n\n\[ \mathrm{P}\left\lbrack \mathcal{C}\right\rbrack \leq \mathop{\sum }\limits_{{\alpha \in {\mathbb{Z}}_{p}}}\left( {1/m}\right) \mathrm{P}\left\lbrack {Y = \alpha }\right\rbrack = \left( {1/m}\right) \mathop{\sum }\limits_{{\alpha \in {\mathbb{Z}}_{p}}}\mathrm{P}\left\lbrack {Y = \alpha }\right\rbrack 
= 1/m. \]\n\nSo consider a fixed \( \alpha \in {\mathbb{Z}}_{p} \) . As \( \widehat{s} \neq 0 \) and \( {H}_{1} \) is uniformly distributed over \( {\mathbb{Z}}_{p}^{ * } \), it follows that \( {H}_{1}\widehat{s} \) is uniformly distributed over \( {\mathbb{Z}}_{p}^{ * } \), and hence \( \alpha + {H}_{1}\widehat{s} \) is uniformly distributed over the set \( {\mathbb{Z}}_{p} \smallsetminus \{ \alpha \} \) . Let \( {M}_{\alpha } \mathrel{\text{:=}} \left\{ {\beta \in {\mathbb{Z}}_{p} : \llbracket \alpha {\rrbracket }_{m} = \llbracket \beta {\rrbracket }_{m}}\right\} \) . Then \( \left| {M}_{\alpha }\right| \leq \lceil p/m\rceil \), and since \( \alpha + {H}_{1}\widehat{s} \) is uniformly distributed over \( {\mathbb{Z}}_{p} \smallsetminus \{ \alpha \} \), we have\n\n\[ \mathrm{P}\left\lbrack {\llbracket \alpha {\rrbracket }_{m} = \llbracket \alpha + {H}_{1}\widehat{s}{\rrbracket }_{m}}\right\rbrack = \frac{\left| {M}_{\alpha }\right| - 1}{p - 1} \leq \frac{\lceil p/m\rceil - 1}{p - 1} \leq \frac{1}{m}, \]\n\nwhere the last inequality uses \( \lceil p/m\rceil - 1 \leq \left( {p - 1}\right) /m \).
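For small, made-up parameters, the \( 1/m \) collision bound can be confirmed by brute force over all keys \( \left( {{r}_{0},{r}_{1}}\right) \) with \( {r}_{1} \neq 0 \):

```python
from itertools import product

# Worst-case collision rate of Phi_{(r0,r1)}(s) = ((r0 + r1*s) mod p) mod m
# over all pairs s != t, taken over all keys with r1 != 0.
def max_collision_rate(p, m):
    keys = [(r0, r1) for r0 in range(p) for r1 in range(1, p)]
    worst = 0.0
    for s, t in product(range(p), repeat=2):
        if s == t:
            continue
        hits = sum(1 for r0, r1 in keys
                   if ((r0 + r1 * s) % p) % m == ((r0 + r1 * t) % p) % m)
        worst = max(worst, hits / len(keys))
    return worst

# Universality: no pair collides on more than a 1/m fraction of the keys.
assert max_collision_rate(7, 3) <= 1 / 3
assert max_collision_rate(11, 4) <= 1 / 4
```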
|
Yes
|
Our goal is to show that \( {\left\{ {\Phi }_{r}\right\} }_{r \in R} \) is a universal family of hash functions from \( S \) to \( {\mathbb{Z}}_{p} \) . So let \( s,{s}^{\prime } \in S \) with \( s \neq {s}^{\prime } \), and let \( H \) be a random variable that is uniformly distributed over \( R \) . We want to show that \( \mathrm{P}\left\lbrack {{\Phi }_{H}\left( s\right) = {\Phi }_{H}\left( {s}^{\prime }\right) }\right\rbrack \leq 1/p \) .
|
Let \( s = \left( {{s}_{0},{s}_{1},\ldots ,{s}_{\ell }}\right) \) and \( {s}^{\prime } = \left( {{s}_{0}^{\prime },{s}_{1}^{\prime },\ldots ,{s}_{\ell }^{\prime }}\right) \), and set \( {\widehat{s}}_{i} \mathrel{\text{:=}} {s}_{i}^{\prime } - {s}_{i} \) for \( i = 0,1,\ldots ,\ell \) . Let us define the function\n\n\[ \rho : \;R \rightarrow {\mathbb{Z}}_{p} \]\n\n\[ \left( {{r}_{1},\ldots ,{r}_{\ell }}\right) \mapsto {r}_{1}{\widehat{s}}_{1} + \cdots + {r}_{\ell }{\widehat{s}}_{\ell }. \]\n\nClearly, \( {\Phi }_{H}\left( s\right) = {\Phi }_{H}\left( {s}^{\prime }\right) \) if and only if \( \rho \left( H\right) = - {\widehat{s}}_{0} \) . Moreover, \( \rho \) is a group homomorphism. There are two cases to consider. In the first case, \( {\widehat{s}}_{i} = 0 \) for all \( i = 1,\ldots ,\ell \) ; in this case, the image of \( \rho \) is \( \{ 0\} \), but \( {\widehat{s}}_{0} \neq 0 \) (since \( s \neq {s}^{\prime } \) ), and so \( \mathrm{P}\left\lbrack {\rho \left( H\right) = - {\widehat{s}}_{0}}\right\rbrack = 0 \) . In the second case, \( {\widehat{s}}_{i} \neq 0 \) for some \( i = 1,\ldots ,\ell \) ; in this case, the image of \( \rho \) is \( {\mathbb{Z}}_{p} \), and so \( \rho \left( H\right) \) is uniformly distributed over \( {\mathbb{Z}}_{p} \) (see Theorem 8.5); thus, \( \mathrm{P}\left\lbrack {\rho \left( H\right) = - {\widehat{s}}_{0}}\right\rbrack = 1/p \) .
|
Yes
|
Theorem 8.30. For random variables \( X, Y, Z \), we have\n\n(i) \( 0 \leq \Delta \left\lbrack {X;Y}\right\rbrack \leq 1 \) ,\n\n(ii) \( \Delta \left\lbrack {X;X}\right\rbrack = 0 \) ,\n\n(iii) \( \Delta \left\lbrack {X;Y}\right\rbrack = \Delta \left\lbrack {Y;X}\right\rbrack \), and\n\n(iv) \( \Delta \left\lbrack {X;Z}\right\rbrack \leq \Delta \left\lbrack {X;Y}\right\rbrack + \Delta \left\lbrack {Y;Z}\right\rbrack \) .
|
Proof. Exercise.
|
No
|
Suppose \( X \) has the uniform distribution on \( \{ 1,\ldots, m\} \), and \( Y \) has the uniform distribution on \( \{ 1,\ldots, m - \delta \} \), where \( \delta \in \{ 0,\ldots, m - 1\} \). Let us compute \( \Delta \left\lbrack {X;Y}\right\rbrack \).
|
The statistical distance between \( X \) and \( Y \) is just \( 1/2 \) times the area of regions \( A \) and \( C \) in the diagram. Moreover, because probability distributions sum to 1, we must have\n\n\[ \text{area of}B + \text{area of}A = 1 = \text{area of}B + \text{area of}C\text{,}\]\n\nand hence, the areas of region \( A \) and region \( C \) are the same. Therefore,\n\n\[ \Delta \left\lbrack {X;Y}\right\rbrack = \text{ area of }A = \text{ area of }C = \delta /m. \]
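A direct computation in exact rational arithmetic confirms \( \Delta \left\lbrack {X;Y}\right\rbrack = \delta /m \) for small parameters:

```python
from fractions import Fraction

# Statistical distance as half the L1 distance between the two
# probability mass functions.
def stat_dist(m, delta):
    px = {s: Fraction(1, m) for s in range(1, m + 1)}
    py = {s: Fraction(1, m - delta) for s in range(1, m - delta + 1)}
    support = set(px) | set(py)
    return sum(abs(px.get(s, 0) - py.get(s, 0)) for s in support) / 2

for m in range(2, 10):
    for delta in range(m):
        assert stat_dist(m, delta) == Fraction(delta, m)
```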
|
Yes
|
Theorem 8.31. Let \( X \) and \( Y \) be random variables taking values in a set \( S \). For every \( {S}^{\prime } \subseteq S \), we have\n\n\[ \Delta \left\lbrack {X;Y}\right\rbrack \geq \left| {\mathrm{P}\left\lbrack {X \in {S}^{\prime }}\right\rbrack - \mathrm{P}\left\lbrack {Y \in {S}^{\prime }}\right\rbrack }\right| \]\n\nand equality holds for some \( {S}^{\prime } \subseteq S \), and in particular, for the set\n\n\[ {S}^{\prime } \mathrel{\text{:=}} \{ s \in S : \mathrm{P}\left\lbrack {X = s}\right\rbrack < \mathrm{P}\left\lbrack {Y = s}\right\rbrack \} ,\]\n\nas well as its complement.
|
Proof. Suppose we split the set \( S \) into two disjoint subsets: the set \( {S}_{0} \) consisting of those \( s \in S \) such that \( \mathrm{P}\left\lbrack {X = s}\right\rbrack < \mathrm{P}\left\lbrack {Y = s}\right\rbrack \), and the set \( {S}_{1} \) consisting of those \( s \in S \) such that \( \mathrm{P}\left\lbrack {X = s}\right\rbrack \geq \mathrm{P}\left\lbrack {Y = s}\right\rbrack \). Consider the following rough graph of the distributions of \( X \) and \( Y \), where the elements of \( {S}_{0} \) are placed to the left of the elements of \( {S}_{1} \):\n\n\n\nNow, as in Example 8.40,\n\n\[ \Delta \left\lbrack {X;Y}\right\rbrack = \text{ area of }A = \text{ area of }C. \]\n\nNow consider any subset \( {S}^{\prime } \) of \( S \), and observe that\n\n\[ \mathrm{P}\left\lbrack {X \in {S}^{\prime }}\right\rbrack - \mathrm{P}\left\lbrack {Y \in {S}^{\prime }}\right\rbrack = \text{ area of }{C}^{\prime } - \text{ area of }{A}^{\prime }, \]\n\nwhere \( {C}^{\prime } \) is the subregion of \( C \) that lies above \( {S}^{\prime } \), and \( {A}^{\prime } \) is the subregion of \( A \) that lies above \( {S}^{\prime } \). It follows that \( \left| {\mathrm{P}\left\lbrack {X \in {S}^{\prime }}\right\rbrack - \mathrm{P}\left\lbrack {Y \in {S}^{\prime }}\right\rbrack }\right| \) is maximized when \( {S}^{\prime } = {S}_{0} \) or \( {S}^{\prime } = {S}_{1} \), in which case it is equal to \( \Delta \left\lbrack {X;Y}\right\rbrack \).
|
Yes
|
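As a concrete check of Theorem 8.31, the following sketch computes the statistical distance between two small hypothetical distributions by brute force and confirms that the set \( S' = \{ s : \mathrm{P}[X = s] < \mathrm{P}[Y = s] \} \) attains the maximum gap over all subsets.

```python
from fractions import Fraction
from itertools import combinations

# Two hypothetical distributions on S = {0, 1, 2, 3}.
S = [0, 1, 2, 3]
pX = {0: Fraction(1, 2), 1: Fraction(1, 4), 2: Fraction(1, 8), 3: Fraction(1, 8)}
pY = {0: Fraction(1, 4), 1: Fraction(1, 4), 2: Fraction(1, 4), 3: Fraction(1, 4)}

# Statistical distance: half the L1 distance between the two distributions.
delta = sum(abs(pX[s] - pY[s]) for s in S) / 2

# The distinguished set from Theorem 8.31 attains the bound...
S_prime = [s for s in S if pX[s] < pY[s]]
gap = abs(sum(pX[s] for s in S_prime) - sum(pY[s] for s in S_prime))

# ...and no subset exceeds it.
all_gaps = [
    abs(sum(pX[s] for s in sub) - sum(pY[s] for s in sub))
    for r in range(len(S) + 1)
    for sub in combinations(S, r)
]
```

Exact rational arithmetic keeps the equality check `gap == delta` meaningful, which floating point would not.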
Theorem 8.32. If \( S \) and \( T \) are finite sets, \( X \) and \( Y \) are random variables taking values in \( S \), and \( f : S \rightarrow T \) is a function, then \( \Delta \left\lbrack {f\left( X\right) ;f\left( Y\right) }\right\rbrack \leq \Delta \left\lbrack {X;Y}\right\rbrack \) .
|
Proof. We have\n\n\[ \Delta \left\lbrack {f\left( X\right) ;f\left( Y\right) }\right\rbrack = \left| {\mathrm{P}\left\lbrack {f\left( X\right) \in {T}^{\prime }}\right\rbrack - \mathrm{P}\left\lbrack {f\left( Y\right) \in {T}^{\prime }}\right\rbrack }\right| \text{ for some }{T}^{\prime } \subseteq T \]\n\n\[ \text{(by Theorem 8.31)} \]\n\n\[ = \left| {\mathrm{P}\left\lbrack {X \in {f}^{-1}\left( {T}^{\prime }\right) }\right\rbrack - \mathrm{P}\left\lbrack {Y \in {f}^{-1}\left( {T}^{\prime }\right) }\right\rbrack }\right| \]\n\n\[ \leq \Delta \left\lbrack {X;Y}\right\rbrack \text{ (again by Theorem 8.31). } \]
|
Yes
|
Let \( X \) be uniformly distributed over the set \( \{ 0,\ldots, m - 1\} \), and let \( Y \) be uniformly distributed over the set \( \{ 0,\ldots, n - 1\} \), for \( n \geq m \). Let \( f\left( t\right) \mathrel{\text{:=}} t{\;\operatorname{mod}\;m} \). We want to compute an upper bound on the statistical distance between \( X \) and \( f\left( Y\right) \).
|
We can do this as follows. Let \( n = {qm} - r \), where \( 0 \leq r < m \), so that \( q = \lceil n/m\rceil \). Also, let \( Z \) be uniformly distributed over \( \{ 0,\ldots ,{qm} - 1\} \). Then \( f\left( Z\right) \) is uniformly distributed over \( \{ 0,\ldots, m - 1\} \), since every element of \( \{ 0,\ldots, m - 1\} \) has the same number (namely, \( q \) ) of pre-images under \( f \) which lie in the set \( \{ 0,\ldots ,{qm} - 1\} \). Since statistical distance depends only on the distributions of the random variables, by the previous theorem, we have\n\n\[ \Delta \left\lbrack {X;f\left( Y\right) }\right\rbrack = \Delta \left\lbrack {f\left( Z\right) ;f\left( Y\right) }\right\rbrack \leq \Delta \left\lbrack {Z;Y}\right\rbrack \]\n\nand as we saw in Example 8.40,\n\n\[ \Delta \left\lbrack {Z;Y}\right\rbrack = r/{qm} < 1/q \leq m/n. \]\n\nTherefore,\n\n\[ \Delta \left\lbrack {X;f\left( Y\right) }\right\rbrack < m/n. \]
|
Yes
|
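The bound \( \Delta[X; f(Y)] < m/n \) in this example can be checked by direct computation; the sketch below uses hypothetical parameters \( m = 5 \), \( n = 13 \) and exact rational arithmetic.

```python
from fractions import Fraction

m, n = 5, 13  # hypothetical parameters with n >= m

# X is uniform on {0,...,m-1}; f(Y) = Y mod m with Y uniform on {0,...,n-1}.
pX = [Fraction(1, m)] * m
pfY = [Fraction(0)] * m
for t in range(n):
    pfY[t % m] += Fraction(1, n)

# Statistical distance between X and f(Y).
delta = sum(abs(pX[s] - pfY[s]) for s in range(m)) / 2
```

Here the exact distance is 6/65, comfortably below m/n = 25/65; the intermediate bound r/(qm) = 2/15 from the example sits in between.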
Theorem 8.33. Suppose \( X, Y \), and \( Z \) are random variables, where \( X \) and \( Z \) are independent, and \( Y \) and \( Z \) are independent. Then \( \Delta \left\lbrack {X, Z;Y, Z}\right\rbrack = \Delta \left\lbrack {X;Y}\right\rbrack \) .
|
Proof. Suppose \( X \) and \( Y \) take values in a finite set \( S \), and \( Z \) takes values in a finite set \( T \). From the definition of statistical distance,\n\n\[ \n{2\Delta }\left\lbrack {X, Z;Y, Z}\right\rbrack = \mathop{\sum }\limits_{{s, t}}\left| {\mathrm{P}\left\lbrack {\left( {X = s}\right) \cap \left( {Z = t}\right) }\right\rbrack - \mathrm{P}\left\lbrack {\left( {Y = s}\right) \cap \left( {Z = t}\right) }\right\rbrack }\right| \n\]\n\n\[ \n= \mathop{\sum }\limits_{{s, t}}\left| {\mathrm{P}\left\lbrack {X = s}\right\rbrack \mathrm{P}\left\lbrack {Z = t}\right\rbrack - \mathrm{P}\left\lbrack {Y = s}\right\rbrack \mathrm{P}\left\lbrack {Z = t}\right\rbrack }\right| \n\]\n\n\[ \n\text{(by independence)} \n\]\n\n\[ \n= \mathop{\sum }\limits_{{s, t}}\mathrm{P}\left\lbrack {Z = t}\right\rbrack \left| {\mathrm{P}\left\lbrack {X = s}\right\rbrack - \mathrm{P}\left\lbrack {Y = s}\right\rbrack }\right| \n\]\n\n\[ \n= \left( {\mathop{\sum }\limits_{t}\mathrm{P}\left\lbrack {Z = t}\right\rbrack }\right) \left( {\mathop{\sum }\limits_{s}\left| {\mathrm{P}\left\lbrack {X = s}\right\rbrack - \mathrm{P}\left\lbrack {Y = s}\right\rbrack }\right| }\right) \n\]\n\n\[ \n= 1 \cdot {2\Delta }\left\lbrack {X;Y}\right\rbrack \n\]
|
Yes
|
Theorem 8.34. Let \( {X}_{1},\ldots ,{X}_{n},{Y}_{1},\ldots ,{Y}_{n} \) be random variables, where \( {\left\{ {X}_{i}\right\} }_{i = 1}^{n} \) is mutually independent, and \( {\left\{ {Y}_{i}\right\} }_{i = 1}^{n} \) is mutually independent. Then we have\n\n\[ \Delta \left\lbrack {{X}_{1},\ldots ,{X}_{n};{Y}_{1},\ldots ,{Y}_{n}}\right\rbrack \leq \mathop{\sum }\limits_{{i = 1}}^{n}\Delta \left\lbrack {{X}_{i};{Y}_{i}}\right\rbrack \]
|
Proof. Since \( \Delta \left\lbrack {{X}_{1},\ldots ,{X}_{n};{Y}_{1},\ldots ,{Y}_{n}}\right\rbrack \) depends only on the individual distributions of the random variables \( \left( {{X}_{1},\ldots ,{X}_{n}}\right) \) and \( \left( {{Y}_{1},\ldots ,{Y}_{n}}\right) \), without loss of generality, we may assume that \( \left( {{X}_{1},\ldots ,{X}_{n}}\right) \) and \( \left( {{Y}_{1},\ldots ,{Y}_{n}}\right) \) are independent, so that \( {X}_{1},\ldots ,{X}_{n},{Y}_{1},\ldots ,{Y}_{n} \) form a mutually independent family of random variables.\n\nWe introduce random variables \( {Z}_{0},\ldots ,{Z}_{n} \), defined as follows:\n\n\[ {Z}_{0} \mathrel{\text{:=}} \left( {{X}_{1},\ldots ,{X}_{n}}\right) \]\n\n\[ {Z}_{i} \mathrel{\text{:=}} \left( {{Y}_{1},\ldots ,{Y}_{i},{X}_{i + 1},\ldots ,{X}_{n}}\right) \text{ for }i = 1,\ldots, n - 1\text{, and } \]\n\n\[ {Z}_{n} \mathrel{\text{:=}} \left( {{Y}_{1},\ldots ,{Y}_{n}}\right) \]\n\nBy definition, \( \Delta \left\lbrack {{X}_{1},\ldots ,{X}_{n};{Y}_{1},\ldots ,{Y}_{n}}\right\rbrack = \Delta \left\lbrack {{Z}_{0};{Z}_{n}}\right\rbrack \) . Moreover, by part (iv) of Theorem 8.30, we have \( \Delta \left\lbrack {{Z}_{0};{Z}_{n}}\right\rbrack \leq \mathop{\sum }\limits_{{i = 1}}^{n}\Delta \left\lbrack {{Z}_{i - 1};{Z}_{i}}\right\rbrack \) . Now consider any fixed index \( i = 1,\ldots, n \) . By Theorem 8.33, we have\n\n\[ \Delta \left\lbrack {{Z}_{i - 1};{Z}_{i}}\right\rbrack = \Delta \left\lbrack {{X}_{i},\left( {{Y}_{1},\ldots ,{Y}_{i - 1},{X}_{i + 1},\ldots ,{X}_{n}}\right) ;}\right. \]\n\n\[ \left. {{Y}_{i},\left( {{Y}_{1},\ldots ,{Y}_{i - 1},{X}_{i + 1},\ldots ,{X}_{n}}\right) }\right\rbrack \]\n\n\[ = \Delta \left\lbrack {{X}_{i};{Y}_{i}}\right\rbrack \]\n\nThe theorem now follows immediately.
|
Yes
|
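Theorem 8.34 can likewise be verified exactly on a toy instance. Below, two independent pairs \( (X_1, X_2) \) and \( (Y_1, Y_2) \) are built from hypothetical marginals on \{0, 1\}, and the joint distance is compared against the sum of the coordinate distances.

```python
from fractions import Fraction
from itertools import product

def dist(p, q):
    # Statistical distance between two distributions given as dicts.
    return sum(abs(p[s] - q[s]) for s in p) / 2

# Hypothetical marginals on {0, 1}.
pX1 = {0: Fraction(3, 4), 1: Fraction(1, 4)}
pY1 = {0: Fraction(1, 2), 1: Fraction(1, 2)}
pX2 = {0: Fraction(2, 3), 1: Fraction(1, 3)}
pY2 = {0: Fraction(1, 2), 1: Fraction(1, 2)}

# Joint distributions under mutual independence of the coordinates.
pX = {(s, t): pX1[s] * pX2[t] for s, t in product([0, 1], repeat=2)}
pY = {(s, t): pY1[s] * pY2[t] for s, t in product([0, 1], repeat=2)}

joint = dist(pX, pY)
bound = dist(pX1, pY1) + dist(pX2, pY2)
```

On this instance the joint distance 1/4 is strictly below the sum 5/12, showing the triangle-style bound need not be tight.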
Theorem 8.35. Suppose \( X \) is a random variable that takes values in a finite set \( S \) of size \( m \) . If \( X \) has collision probability \( \beta \), guessing probability \( \gamma \), and distance \( \delta \) from uniform on \( S \), then:\n\n(i) \( \beta \geq 1/m \) ;\n\n(ii) \( {\gamma }^{2} \leq \beta \leq \gamma \leq 1/m + \delta \) .
|
Proof. Part (i) is immediate from Exercise 8.37. The other inequalities are left as easy exercises.
|
No
|
Theorem 8.36. Suppose \( X \) is a random variable that takes values in a finite set \( S \) of size \( m \) . If \( X \) has collision probability \( \beta \), and distance \( \delta \) from uniform on \( S \), then \( \delta \leq \frac{1}{2}\sqrt{{m\beta } - 1}. \)
|
Proof. We may assume that \( \delta > 0 \), since otherwise the theorem is already true, simply from the fact that \( \beta \geq 1/m \) . For \( s \in S \), let \( {p}_{s} \mathrel{\text{:=}} \mathrm{P}\left\lbrack {X = s}\right\rbrack \) . We have \( \delta = \frac{1}{2}\mathop{\sum }\limits_{s}\left| {{p}_{s} - 1/m}\right| \), and hence \( 1 = \mathop{\sum }\limits_{s}{q}_{s} \), where \( {q}_{s} \mathrel{\text{:=}} \left| {{p}_{s} - 1/m}\right| /{2\delta } \) . So we have \[ \frac{1}{m} \leq \mathop{\sum }\limits_{s}{q}_{s}^{2}\;\text{ (by Exercise 8.36) } \] \[ = \frac{1}{4{\delta }^{2}}\mathop{\sum }\limits_{s}{\left( {p}_{s} - 1/m\right) }^{2} \] \[ = \frac{1}{4{\delta }^{2}}\left( {\mathop{\sum }\limits_{s}{p}_{s}^{2} - 1/m}\right) \text{ (again by Exercise 8.36) } \] \[ = \frac{1}{4{\delta }^{2}}\left( {\beta - 1/m}\right) \] from which the theorem follows immediately.
|
No
|
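A quick numerical sanity check of Theorem 8.36, on another hypothetical distribution; the distance from uniform is computed exactly, the bound \( \frac{1}{2}\sqrt{m\beta - 1} \) in floating point.

```python
import math
from fractions import Fraction

# A hypothetical distribution on a set of size m = 4.
p = [Fraction(5, 12), Fraction(4, 12), Fraction(2, 12), Fraction(1, 12)]
m = len(p)

beta = sum(q * q for q in p)                           # collision probability
delta = sum(abs(q - Fraction(1, m)) for q in p) / 2    # distance from uniform

bound = math.sqrt(m * beta - 1) / 2   # the bound of Theorem 8.36
```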
Theorem 8.37 (Leftover hash lemma). Let \( {\left\{ {\Phi }_{r}\right\} }_{r \in R} \) be a \( \left( {1 + \alpha }\right) /m \) -almost universal family of hash functions from \( S \) to \( T \), where \( m \mathrel{\text{:=}} \left| T\right| \) . Let \( H \) and \( X \) be independent random variables, where \( H \) is uniformly distributed over \( R \), and \( X \) takes values in \( S \) . If \( \beta \) is the collision probability of \( X \), and \( {\delta }^{\prime } \) is the distance of \( \left( {H,{\Phi }_{H}\left( X\right) }\right) \) from uniform on \( R \times T \), then \( {\delta }^{\prime } \leq \frac{1}{2}\sqrt{{m\beta } + \alpha } \) .
|
Proof. Let \( {\beta }^{\prime } \) be the collision probability of \( \left( {H,{\Phi }_{H}\left( X\right) }\right) \) . Our goal is to bound \( {\beta }^{\prime } \) from above, and then apply Theorem 8.36 to the random variable \( \left( {H,{\Phi }_{H}\left( X\right) }\right) \) . To this end, let \( \ell \mathrel{\text{:=}} \left| R\right| \), and suppose \( {H}^{\prime } \) and \( {X}^{\prime } \) are random variables, where \( {H}^{\prime } \) has the same distribution as \( H,{X}^{\prime } \) has the same distribution as \( X \), and \( H,{H}^{\prime }, X,{X}^{\prime } \) form a mutually independent family of random variables. Then we have\n\n\[{\beta }^{\prime } = \mathrm{P}\left\lbrack {\left( {H = {H}^{\prime }}\right) \cap \left( {{\Phi }_{H}\left( X\right) = {\Phi }_{{H}^{\prime }}\left( {X}^{\prime }\right) }\right) }\right\rbrack\]\n\n\[= \mathrm{P}\left\lbrack {\left( {H = {H}^{\prime }}\right) \cap \left( {{\Phi }_{H}\left( X\right) = {\Phi }_{H}\left( {X}^{\prime }\right) }\right) }\right\rbrack\]\n\n\[= \frac{1}{\ell }\mathrm{P}\left\lbrack {{\Phi }_{H}\left( X\right) = {\Phi }_{H}\left( {X}^{\prime }\right) }\right\rbrack \text{ (a special case of Exercise 8.15) }\]\n\n\[\leq \frac{1}{\ell }\left( {\mathrm{P}\left\lbrack {X = {X}^{\prime }}\right\rbrack + \left( {1 + \alpha }\right) /m}\right) \text{ (by Exercise 8.48) }\]\n\n\[= \frac{1}{\ell m}\left( {{m\beta } + 1 + \alpha }\right)\]\n\nThe theorem now follows immediately from Theorem 8.36.
|
Yes
|
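The leftover hash lemma can be checked exhaustively on a toy instance. The sketch below uses the \( (1/p) \)-universal family \( \Phi_a(x) = ax \bmod p \) over \( \mathbb{Z}_p \) (so \( m = p \) and \( \alpha = 0 \); the prime and the input distribution are hypothetical toy choices), computes the exact distribution of \( (H, \Phi_H(X)) \), and compares its distance from uniform with the bound \( \frac{1}{2}\sqrt{m\beta} \).

```python
import math
from fractions import Fraction

p = 7  # a small prime; S = T = Z_p, so m = p (toy parameters)

# A non-uniform input distribution for X on Z_p.
pX = {0: Fraction(1, 2), 1: Fraction(1, 4), 2: Fraction(1, 4)}
pX.update({s: Fraction(0) for s in range(3, p)})

beta = sum(q * q for q in pX.values())  # collision probability of X

# Exact distribution of (H, Phi_H(X)), with H uniform on Z_p and
# Phi_a(x) = a*x mod p: for x != x', Pr_a[a*x = a*x'] = Pr[a = 0] = 1/p.
pHZ = {(a, t): Fraction(0) for a in range(p) for t in range(p)}
for a in range(p):
    for s, q in pX.items():
        pHZ[(a, (a * s) % p)] += Fraction(1, p) * q

uniform = Fraction(1, p * p)
delta = sum(abs(v - uniform) for v in pHZ.values()) / 2

bound = math.sqrt(p * beta) / 2   # leftover hash bound with alpha = 0
```

With such a small prime the guarantee is weak (here delta = 30/49 against a bound of about 0.81), which matches the lemma's message: the output only approaches uniform once \( m\beta \) is small.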
Theorem 8.38. Let \( {\left\{ {\Phi }_{r}\right\} }_{r \in R} \) be a \( \left( {1 + \alpha }\right) /m \) -almost universal family of hash functions from \( S \) to \( T \), where \( m \mathrel{\text{:=}} \left| T\right| \) . Let \( H,{X}_{1},\ldots ,{X}_{n} \) be random variables, where \( H \) is uniformly distributed over \( R \), each \( {X}_{i} \) takes values in \( S \), and \( H,{X}_{1},\ldots ,{X}_{n} \) form a mutually independent family of random variables. If \( \beta \) is an upper bound on the collision probability of each \( {X}_{i} \), and \( {\delta }^{\prime } \) is the distance of \( \left( {H,{\Phi }_{H}\left( {X}_{1}\right) ,\ldots ,{\Phi }_{H}\left( {X}_{n}\right) }\right) \) from uniform on \( R \times {T}^{\times n} \), then \( {\delta }^{\prime } \leq \frac{1}{2}n\sqrt{{m\beta } + \alpha } \) .
|
Proof. Let \( {Y}_{1},\ldots ,{Y}_{n} \) be random variables, each uniformly distributed over \( T \), and assume that \( H,{X}_{1},\ldots ,{X}_{n},{Y}_{1},\ldots ,{Y}_{n} \) form a mutually independent family of random variables. We shall make a hybrid argument (as in the proof of Theorem 8.34). Define random variables \( {Z}_{0},{Z}_{1},\ldots ,{Z}_{n} \) as follows:\n\n\[ \n{Z}_{0} \mathrel{\text{:=}} \left( {H,{\Phi }_{H}\left( {X}_{1}\right) ,\ldots ,{\Phi }_{H}\left( {X}_{n}\right) }\right) \n\]\n\n\[ \n{Z}_{i} \mathrel{\text{:=}} \left( {H,{Y}_{1},\ldots ,{Y}_{i},{\Phi }_{H}\left( {X}_{i + 1}\right) ,\ldots ,{\Phi }_{H}\left( {X}_{n}\right) }\right) \text{ for }i = 1,\ldots, n - 1\text{, and } \n\]\n\n\[ \n{Z}_{n} \mathrel{\text{:=}} \left( {H,{Y}_{1},\ldots ,{Y}_{n}}\right) \n\]\n\nWe have\n\n\[ \n{\delta }^{\prime } = \Delta \left\lbrack {{Z}_{0};{Z}_{n}}\right\rbrack \n\]\n\n\[ \n\leq \mathop{\sum }\limits_{{i = 1}}^{n}\Delta \left\lbrack {{Z}_{i - 1};{Z}_{i}}\right\rbrack \;\text{ (by part (iv) of Theorem 8.30) } \n\]\n\n\[ \n\begin{array}{r} \leq \mathop{\sum }\limits_{{i = 1}}^{n}\Delta \left\lbrack {H,{Y}_{1},\ldots ,{Y}_{i - 1},{\Phi }_{H}\left( {X}_{i}\right) ,{X}_{i + 1},\ldots ,{X}_{n};}\right. \\ \left. {H,{Y}_{1},\ldots ,{Y}_{i - 1},\;{Y}_{i},\;{X}_{i + 1},\ldots ,{X}_{n}}\right\rbrack \end{array} \n\]\n\n\[ \n\text{(by Theorem 8.32)} \n\]\n\n\[ \n= \mathop{\sum }\limits_{{i = 1}}^{n}\Delta \left\lbrack {H,{\Phi }_{H}\left( {X}_{i}\right) ;H,{Y}_{i}}\right\rbrack \;\text{ (by Theorem 8.33) } \n\]\n\n\[ \n\leq \frac{1}{2}n\sqrt{{m\beta } + \alpha }\;\text{(by Theorem 8.37).} \n\]
|
Yes
|
Example 8.43. Suppose we toss a fair coin repeatedly until it comes up heads, and let \( k \) be the total number of tosses. We can model this experiment as a discrete probability distribution \( \mathrm{P} \), where the sample space is the set of all positive integers: for each positive integer \( k \), we set \( \mathrm{P}\left( k\right) \mathrel{\text{:=}} {2}^{-k} \).
|
We can check that indeed \( \mathop{\sum }\limits_{{k = 1}}^{\infty }{2}^{-k} = 1 \), as required.
|
Yes
|
More generally, suppose we repeatedly execute a Bernoulli trial until it succeeds, where each execution succeeds with probability \( p > 0 \) independently of the previous trials, and let \( k \) be the total number of trials executed. Then we associate the probability \( \mathrm{P}\left( k\right) \mathrel{\text{:=}} {q}^{k - 1}p \) with each positive integer \( k \), where \( q \mathrel{\text{:=}} 1 - p \), since we have \( k - 1 \) failures before the one success.
|
One can easily check that these probabilities sum to 1. Such a distribution is called a geometric distribution.
|
No
|
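The claim that the geometric probabilities sum to 1 follows from the telescoping identity \( \sum_{k=1}^{N} q^{k-1}p = 1 - q^N \); a quick exact check, with a hypothetical success probability \( p = 1/3 \):

```python
from fractions import Fraction

p = Fraction(1, 3)   # hypothetical success probability
q = 1 - p

def partial_sum(N):
    # Sum of the geometric probabilities P(k) = q^(k-1) * p for k = 1,...,N.
    return sum(q ** (k - 1) * p for k in range(1, N + 1))
```

Since the tail mass after \( N \) trials is exactly \( q^N \), which tends to 0, the total mass is 1.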
The series \( \mathop{\sum }\limits_{{k = 1}}^{\infty }1/{k}^{3} \) converges to some positive number \( c \) . Therefore, we can define a probability distribution on the set of positive integers, where we associate with each \( k \geq 1 \) the probability \( 1/\left( {c{k}^{3}}\right) \) .
|
As in the finite case, an event is an arbitrary subset \( \mathcal{A} \) of \( \Omega \) . The probability \( \mathrm{P}\left\lbrack \mathcal{A}\right\rbrack \) of \( \mathcal{A} \) is defined as the sum of the probabilities associated with the elements of \( \mathcal{A} \) . This sum is treated as an infinite series when \( \mathcal{A} \) is infinite. This series is guaranteed to converge, and its value does not depend on the particular enumeration of the elements of \( \mathcal{A} \) .
|
No
|
Consider the geometric distribution discussed in Example 8.44, where \( p \) is the success probability of each Bernoulli trial, and \( q \mathrel{\text{:=}} 1 - p \). For a given integer \( i \geq 1 \), consider the event \( \mathcal{A} \) that the number of trials executed is at least \( i \). Formally, \( \mathcal{A} \) is the set of all integers greater than or equal to \( i \). Intuitively, \( \mathrm{P}\left\lbrack \mathcal{A}\right\rbrack \) should be \( {q}^{i - 1} \), since we perform at least \( i \) trials if and only if the first \( i - 1 \) trials fail.
|
\[ \mathrm{P}\left\lbrack \mathcal{A}\right\rbrack = \mathop{\sum }\limits_{{k \geq i}}\mathrm{P}\left( k\right) = \mathop{\sum }\limits_{{k \geq i}}{q}^{k - 1}p = {q}^{i - 1}p\mathop{\sum }\limits_{{k \geq 0}}{q}^{k} = {q}^{i - 1}p \cdot \frac{1}{1 - q} = {q}^{i - 1}. \]
|
Yes
|
Theorem 8.39. Suppose \( \mathcal{A} \mathrel{\text{:=}} \mathop{\bigcup }\limits_{{i = 1}}^{\infty }{\mathcal{A}}_{i} \), where \( {\left\{ {\mathcal{A}}_{i}\right\} }_{i = 1}^{\infty } \) is an infinite sequence of events. Then\n\n(i) \( \mathrm{P}\left\lbrack \mathcal{A}\right\rbrack \leq \mathop{\sum }\limits_{{i = 1}}^{\infty }\mathrm{P}\left\lbrack {\mathcal{A}}_{i}\right\rbrack \), and\n\n(ii) \( \mathrm{P}\left\lbrack \mathcal{A}\right\rbrack = \mathop{\sum }\limits_{{i = 1}}^{\infty }\mathrm{P}\left\lbrack {\mathcal{A}}_{i}\right\rbrack \) if \( {\left\{ {\mathcal{A}}_{i}\right\} }_{i = 1}^{\infty } \) is pairwise disjoint.
|
Proof. As in the proof of Theorem 8.1, for \( \omega \in \Omega \) and \( \mathcal{B} \subseteq \Omega \), define \( {\delta }_{\omega }\left\lbrack \mathcal{B}\right\rbrack \mathrel{\text{:=}} 1 \) if \( \omega \in \mathcal{B} \), and \( {\delta }_{\omega }\left\lbrack \mathcal{B}\right\rbrack \mathrel{\text{:=}} 0 \) if \( \omega \notin \mathcal{B} \) . First, suppose that \( {\left\{ {\mathcal{A}}_{i}\right\} }_{i = 1}^{\infty } \) is pairwise disjoint.\n\nEvidently, \( {\delta }_{\omega }\left\lbrack \mathcal{A}\right\rbrack = \mathop{\sum }\limits_{{i = 1}}^{\infty }{\delta }_{\omega }\left\lbrack {\mathcal{A}}_{i}\right\rbrack \) for each \( \omega \in \Omega \), and so\n\n\[ \n\mathrm{P}\left\lbrack \mathcal{A}\right\rbrack = \mathop{\sum }\limits_{{\omega \in \Omega }}\mathrm{P}\left( \omega \right) {\delta }_{\omega }\left\lbrack \mathcal{A}\right\rbrack = \mathop{\sum }\limits_{{\omega \in \Omega }}\mathrm{P}\left( \omega \right) \mathop{\sum }\limits_{{i = 1}}^{\infty }{\delta }_{\omega }\left\lbrack {\mathcal{A}}_{i}\right\rbrack \n\]\n\n\[ \n= \mathop{\sum }\limits_{{i = 1}}^{\infty }\mathop{\sum }\limits_{{\omega \in \Omega }}\mathrm{P}\left( \omega \right) {\delta }_{\omega }\left\lbrack {\mathcal{A}}_{i}\right\rbrack = \mathop{\sum }\limits_{{i = 1}}^{\infty }\mathrm{P}\left\lbrack {\mathcal{A}}_{i}\right\rbrack \n\]\n\nwhere we use the fact that we may reverse the order of summation in an infinite double summation of non-negative terms (see §A7). 
That proves (ii), and (i) follows from (ii), applied to the sequence \( {\left\{ {\mathcal{A}}_{i}^{\prime }\right\} }_{i = 1}^{\infty } \), where \( {\mathcal{A}}_{i}^{\prime } \mathrel{\text{:=}} {\mathcal{A}}_{i} \smallsetminus \mathop{\bigcup }\limits_{{j = 1}}^{{i - 1}}{\mathcal{A}}_{j} \), as \( \mathrm{P}\left\lbrack \mathcal{A}\right\rbrack = \mathop{\sum }\limits_{{i = 1}}^{\infty }\mathrm{P}\left\lbrack {\mathcal{A}}_{i}^{\prime }\right\rbrack \leq \mathop{\sum }\limits_{{i = 1}}^{\infty }\mathrm{P}\left\lbrack {\mathcal{A}}_{i}\right\rbrack . \)
|
Yes
|
Theorem 8.40. Let \( {\left\{ {X}_{i}\right\} }_{i = 1}^{\infty } \) be an infinite sequence of random variables. Suppose that for each \( i \geq 1,{X}_{i} \) takes non-negative values only, and has finite expectation. Also suppose that \( \mathop{\sum }\limits_{{i = 1}}^{\infty }{X}_{i}\left( \omega \right) \) converges for each \( \omega \in \Omega \), and define \( X \mathrel{\text{:=}} \mathop{\sum }\limits_{{i = 1}}^{\infty }{X}_{i} \) . Then we have\n\n\[ \mathrm{E}\left\lbrack X\right\rbrack = \mathop{\sum }\limits_{{i = 1}}^{\infty }\mathrm{E}\left\lbrack {X}_{i}\right\rbrack \]
|
Proof. This is a calculation just like the one made in the proof of Theorem 8.39, where, again, we use the fact that we may reverse the order of summation in an infinite double summation of non-negative terms:\n\n\[ \mathrm{E}\left\lbrack X\right\rbrack = \mathop{\sum }\limits_{{\omega \in \Omega }}\mathrm{P}\left( \omega \right) X\left( \omega \right) = \mathop{\sum }\limits_{{\omega \in \Omega }}\mathrm{P}\left( \omega \right) \mathop{\sum }\limits_{{i = 1}}^{\infty }{X}_{i}\left( \omega \right) \]\n\n\[ = \mathop{\sum }\limits_{{i = 1}}^{\infty }\mathop{\sum }\limits_{{\omega \in \Omega }}\mathrm{P}\left( \omega \right) {X}_{i}\left( \omega \right) = \mathop{\sum }\limits_{{i = 1}}^{\infty }\mathrm{E}\left\lbrack {X}_{i}\right\rbrack \]
|
Yes
|
Suppose \( X \) is a random variable with a geometric distribution, as in Example 8.44, with an associated success probability \( p \) and failure probability \( q \mathrel{\text{:=}} 1 - p \). As we saw in Example 8.46, for every integer \( i \geq 1 \), we have \( \mathrm{P}\left\lbrack {X \geq i}\right\rbrack = {q}^{i - 1} \). We may therefore apply the infinite version of Theorem 8.17 to easily compute the expected value of \( X \).
|
\[ \mathrm{E}\left\lbrack X\right\rbrack = \mathop{\sum }\limits_{{i = 1}}^{\infty }\mathrm{P}\left\lbrack {X \geq i}\right\rbrack = \mathop{\sum }\limits_{{i = 1}}^{\infty }{q}^{i - 1} = \frac{1}{1 - q} = \frac{1}{p} \]
|
Yes
|
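The formula \( \mathrm{E}[X] = 1/p \) is easy to corroborate by simulation; the sketch below (hypothetical \( p = 1/4 \), fixed seed for reproducibility) counts Bernoulli trials until the first success and averages.

```python
import random

random.seed(1)
p = 0.25   # hypothetical success probability; E[X] should be 1/p = 4

def geometric_sample():
    # Count Bernoulli(p) trials up to and including the first success.
    k = 1
    while random.random() >= p:
        k += 1
    return k

n = 200_000
avg = sum(geometric_sample() for _ in range(n)) / n
```

With 200,000 samples the standard deviation of the mean is under 0.01, so the average lands very close to 4.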
Theorem 9.1. Let \( \Omega \) be the set of all exact execution paths for \( A \) on input \( x \) . Then \( \mathop{\sum }\limits_{{\omega \in \Omega }}{2}^{-\left| \omega \right| } \leq 1 \)
|
Proof. Let \( k \) be a non-negative integer. Let \( {\Omega }_{k} \subseteq \Omega \) be the set of all exact execution paths of length at most \( k \), and let \( {\alpha }_{k} \mathrel{\text{:=}} \mathop{\sum }\limits_{{\omega \in {\Omega }_{k}}}{2}^{-\left| \omega \right| } \) . We shall show below that\n\n\[{\alpha }_{k} \leq 1\]\n\n(9.1)\n\nFrom this, it will follow that\n\n\[\mathop{\sum }\limits_{{\omega \in \Omega }}{2}^{-\left| \omega \right| } = \mathop{\lim }\limits_{{k \rightarrow \infty }}{\alpha }_{k} \leq 1\]\n\nTo prove the inequality (9.1), consider the set \( {C}_{k} \) of all complete execution paths of length equal to \( k \) . We claim that\n\n\[{\alpha }_{k} = {2}^{-k}\left| {C}_{k}\right|\]\n\n(9.2)\n\nfrom which (9.1) follows, since clearly, \( \left| {C}_{k}\right| \leq {2}^{k} \) . So now we are left to prove (9.2). Observe that by definition, each \( \lambda \in {C}_{k} \) extends some \( \omega \in {\Omega }_{k} \) ; that is, \( \omega \) is a prefix of \( \lambda \) ; moreover, \( \omega \) is uniquely determined by \( \lambda \), since no exact execution path is a proper prefix of any other exact execution path. Also observe that for each \( \omega \in {\Omega }_{k} \), if \( {C}_{k}\left( \omega \right) \) is the set of execution paths \( \lambda \in {C}_{k} \) that extend \( \omega \), then \( \left| {{C}_{k}\left( \omega \right) }\right| = {2}^{k - \left| \omega \right| } \), and by the previous observation, \( {\left\{ {C}_{k}\left( \omega \right) \right\} }_{\omega \in {\Omega }_{k}} \) is a partition of \( {C}_{k} \) . 
Thus, we have\n\n\[{\alpha }_{k} = \mathop{\sum }\limits_{{\omega \in {\Omega }_{k}}}{2}^{-\left| \omega \right| } = \mathop{\sum }\limits_{{\omega \in {\Omega }_{k}}}{2}^{-\left| \omega \right| }\mathop{\sum }\limits_{{\lambda \in {C}_{k}\left( \omega \right) }}{2}^{-k + \left| \omega \right| } = {2}^{-k}\mathop{\sum }\limits_{{\omega \in {\Omega }_{k}}}\mathop{\sum }\limits_{{\lambda \in {C}_{k}\left( \omega \right) }}1 = {2}^{-k}\left| {C}_{k}\right| ,\]\n\nwhich proves (9.2).
|
Yes
|
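Theorem 9.1 is an instance of Kraft's inequality: since an algorithm halts at most once, its exact execution paths form a prefix-free set of bit strings (the coin tosses consumed before halting). A minimal sketch with a hypothetical path set:

```python
from fractions import Fraction

# Exact execution paths of a hypothetical algorithm, as strings of coin
# tosses; an unbounded all-ones run would correspond to never halting.
paths = ["0", "10", "110", "1110"]

# No exact execution path is a proper prefix of another.
prefix_free = all(
    not b.startswith(a) for a in paths for b in paths if a != b
)

# The sum of 2^{-|omega|} over the paths is at most 1.
kraft = sum(Fraction(1, 2 ** len(w)) for w in paths)
```

Here the sum is 15/16 < 1; the missing 1/16 is exactly the probability mass of the executions that never halt.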
Suppose that on input \( x, A \) always halts within a finite number of steps, regardless of its random choices. More precisely, this means that there is a bound \( \ell \) (depending on \( A \) and \( x \) ), such that all execution paths of length \( \ell \) are complete.
|
In this case, we say that \( A \)’s running time on input \( x \) is strictly bounded by \( \ell \), and it is clear that \( A \) halts with probability 1 on input \( x \) . Moreover, one can much more simply model \( A \)’s computation on input \( x \) by working with the uniform distribution on execution paths of length \( \ell \) .
|
Yes
|
Suppose \( A \) and \( B \) are probabilistic algorithms that both halt with probability 1 on all inputs. Using \( A \) and \( B \) as subroutines, we can form their serial composition; that is, we can construct the algorithm \[ C\left( x\right) : \;\text{ output }B\left( {A\left( x\right) }\right) , \] which on input \( x \), first runs \( A \) on input \( x \), obtaining a value \( y \), then runs \( B \) on input \( y \), obtaining a value \( z \), and finally, outputs \( z \) . We claim that \( C \) halts with probability 1 on all inputs.
|
For simplicity, we may assume that \( A \) places its output \( y \) in a location in memory where \( B \) expects to find its input, and that \( B \) places its output in a location in memory where \( C \) ’s output should go. With these assumptions, the program for \( C \) is obtained by simply concatenating the programs for \( A \) and \( B \), making the following adjustments: every halt instruction in \( A \) ’s program is translated into an instruction that branches to the first instruction of \( B \) ’s program, and every target in a branch instruction in \( B \) ’s program is increased by the length of \( A \) ’s program. Let \( \Omega \) be the sample space representing \( A \) ’s execution on an input \( x \) . Each \( \omega \in \Omega \) determines an output \( y \), and a corresponding sample space \( {\Omega }_{\omega }^{\prime } \) representing \( B \) ’s execution on input \( y \) . The sample space representing \( C \) ’s execution on input \( x \) is \[ {\Omega }^{\prime \prime } = \left\{ {\omega {\omega }^{\prime } : \omega \in \Omega ,{\omega }^{\prime } \in {\Omega }_{\omega }^{\prime }}\right\} \] where \( \omega {\omega }^{\prime } \) is the concatenation of \( \omega \) and \( {\omega }^{\prime } \) . We have \[ \mathop{\sum }\limits_{{\omega {\omega }^{\prime } \in {\Omega }^{\prime \prime }}}{2}^{-\left| {\omega {\omega }^{\prime }}\right| } = \mathop{\sum }\limits_{{\omega \in \Omega }}{2}^{-\left| \omega \right| }\mathop{\sum }\limits_{{{\omega }^{\prime } \in {\Omega }_{\omega }^{\prime }}}{2}^{-\left| {\omega }^{\prime }\right| } = \mathop{\sum }\limits_{{\omega \in \Omega }}{2}^{-\left| \omega \right| } \cdot 1 = 1, \] which shows that \( C \) halts with probability 1 on input \( x \) .
|
Yes
|
Example 9.3. Suppose \( A, B \), and \( C \) are probabilistic algorithms that halt with probability 1 on all inputs, and that \( A \) always outputs either true or false. Then we can form the conditional construct \[ D\left( x\right) : \;\text{if}A\left( x\right) \text{then output}B\left( x\right) \text{else output}C\left( x\right) \text{.} \]
|
By a calculation similar to that in the previous example, it is easy to see that \( D \) halts with probability 1 on all inputs.
|
No
|
Suppose \( A \) and \( B \) are probabilistic algorithms that halt with probability 1 on all inputs, and that \( A \) always outputs either true or false. We can form the iterative construct\n\n\[ C\left( x\right) : \;\text{ while }A\left( x\right) \text{ do }x \leftarrow B\left( x\right) \text{,} \]\n\n\[ \text{output }x. \]
|
Algorithm \( C \) may or may not halt with probability 1 . To analyze \( C \), we define an infinite sequence of algorithms \( {\left\{ {C}_{n}\right\} }_{n = 0}^{\infty } \) ; namely, we define \( {C}_{0} \) as\n\n\[ {C}_{0}\left( x\right) : \text{ halt,}\]\n\nand for \( n > 0 \), we define \( {C}_{n} \) as\n\n\[ {C}_{n}\left( x\right) : \;\text{ if }A\left( x\right) \text{ then }{C}_{n - 1}\left( {B\left( x\right) }\right) .\]\n\nEssentially, \( {C}_{n} \) drives \( C \) for up to \( n \) loop iterations before halting, if necessary, in \( {C}_{0} \) . By the previous three examples, it follows by induction on \( n \) that each \( {C}_{n} \) halts with probability 1 on all inputs. Therefore, we have a well-defined probability distribution for each \( {C}_{n} \) and each input \( x \) .\n\nConsider a fixed input \( x \) . For each \( n \geq 0 \), let \( {\beta }_{n} \) be the probability that on input \( x,{C}_{n} \) terminates by executing algorithm \( {C}_{0} \) . Intuitively, \( {\beta }_{n} \) is the probability that \( C \) executes at least \( n \) loop iterations; however, this probability is defined with respect to the probability distribution associated with algorithm \( {C}_{n} \) on input \( x \) . It is not hard to see that the sequence \( {\left\{ {\beta }_{n}\right\} }_{n = 0}^{\infty } \) is non-increasing, and so the limit \( \beta \mathrel{\text{:=}} \mathop{\lim }\limits_{{n \rightarrow \infty }}{\beta }_{n} \) exists; moreover, \( C \) halts with probability \( 1 - \beta \) on input \( x \) .
|
Yes
|
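To make the sequence \( \{\beta_n\} \) concrete, suppose (as a hypothetical simplification) that each evaluation of \( A(x) \) returns true with the same probability \( t \), independently of \( x \) and of previous iterations. Then \( \beta_n = t^n \), the sequence is non-increasing, and for \( t < 1 \) the loop halts with probability 1:

```python
t = 0.75   # hypothetical probability that A(x) returns true on each test

# beta_n: probability that C_n terminates by executing C_0, i.e. that the
# loop runs for at least n iterations.
betas = [t ** n for n in range(60)]

halting_prob = 1 - betas[-1]   # approximates 1 - lim beta_n = 1
```

Conversely, if \( A(x) \) always returned true (t = 1), every \( \beta_n \) would equal 1 and \( C \) would halt with probability 0.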