Example 21.7. Convert the complex number to polar form.\n\n\[ \text{a)}2 + {3i}\text{, b)} - 2 - 2\sqrt{3}i\text{, c)}4 - {3i}\text{, d)} - {4i} \]
Solution. a) First, the absolute value is \( r = \left| {2 + {3i}}\right| = \sqrt{{2}^{2} + {3}^{2}} = \sqrt{13} \) . Furthermore, since \( a = 2 \) and \( b = 3 \), we have \( \tan \left( \theta \right) = \frac{3}{2} \) . To obtain \( \theta \), we\ncalculate\n\[ {\tan }^{-1}\left( \frac{3}{2}\right) \approx {56.3}^{ \circ } \]\n\nNote that \( {56.3}^{ \circ } \) is in the first quadrant, and so is the complex number \( 2 + {3i} \)\n\n![37f2b8bb-bce0-4882-a084-b5aa2f5782a5_302_0.jpg](images/37f2b8bb-bce0-4882-a084-b5aa2f5782a5_302_0.jpg)\n\nTherefore, \( \theta \approx {56.3}^{ \circ } \), and we obtain our answer:\n\n\[ 2 + {3i} \approx \sqrt{13} \cdot \left( {\cos \left( {56.3}^{ \circ }\right) + i\sin \left( {56.3}^{ \circ }\right) }\right) . \]
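As a numerical cross-check (not part of the original solution), the same conversion can be sketched in Python; `cmath.polar` computes the pair \( (r,\theta) \), and it uses `atan2` internally, so the angle already lands in the correct quadrant:

```python
import cmath
import math

z = 2 + 3j                       # the complex number from part a)
r, theta = cmath.polar(z)        # r = |z|, theta in radians
theta_deg = math.degrees(theta)

# r = sqrt(13) ≈ 3.606 and theta_deg ≈ 56.3°, matching the text
```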
No
Convert the number from polar form into the standard form \( a + {bi} \) .
\[ \text{a)}3 \cdot \left( {\cos \left( {117}^{ \circ }\right) + i\sin \left( {117}^{ \circ }\right) }\right) \;\text{b)}4 \cdot \left( {\cos \left( \frac{5\pi }{4}\right) + i\sin \left( \frac{5\pi }{4}\right) }\right) \] Solution. a) Since we don’t have an exact formula for \( \cos \left( {117}^{ \circ }\right) \) or \( \sin \left( {117}^{ \circ }\right) \) , we use the calculator to obtain approximate values. \[ 3 \cdot \left( {\cos \left( {{117}{^\circ}}\right) + i\sin \left( {{117}{^\circ}}\right) }\right) \approx 3 \cdot \left( {-{0.454} + i \cdot {0.891}}\right) = - {1.362} + {2.673i} \] b) We recall that \( \cos \left( \frac{5\pi }{4}\right) = - \frac{\sqrt{2}}{2} \) and \( \sin \left( \frac{5\pi }{4}\right) = - \frac{\sqrt{2}}{2} \) . (This can be seen as in Example 17.6 on page 237 by considering the point \( P\left( {-1, - 1}\right) \) on the terminal side of the angle \( \frac{5\pi }{4} = \frac{5 \cdot {180}^{ \circ }}{4} = {225}^{ \circ } \) . ![37f2b8bb-bce0-4882-a084-b5aa2f5782a5_305_0.jpg](images/37f2b8bb-bce0-4882-a084-b5aa2f5782a5_305_0.jpg) Therefore, \( \cos \left( \frac{5\pi }{4}\right) = \frac{-1}{\sqrt{2}} = - \frac{\sqrt{2}}{2} \) and \( \sin \left( \frac{5\pi }{4}\right) = \frac{-1}{\sqrt{2}} = - \frac{\sqrt{2}}{2} \) .) With this, we obtain the complex number in standard form. \[ 4 \cdot \left( {\cos \left( \frac{5\pi }{4}\right) + i\sin \left( \frac{5\pi }{4}\right) }\right) = 4 \cdot \left( {-\frac{\sqrt{2}}{2} - i\frac{\sqrt{2}}{2}}\right) \] \[ = - 4\frac{\sqrt{2}}{2} - i \cdot 4\frac{\sqrt{2}}{2} = - 2\sqrt{2} - 2\sqrt{2} \cdot i \]
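The reverse conversion can also be checked numerically (a sketch, not part of the text): `cmath.rect(r, theta)` computes \( r(\cos\theta + i\sin\theta) \) directly.

```python
import cmath
import math

# Part a): 3 * (cos(117°) + i sin(117°)) ≈ -1.362 + 2.673i
z_a = cmath.rect(3, math.radians(117))

# Part b): 4 * (cos(5π/4) + i sin(5π/4)) = -2√2 - 2√2 i
z_b = cmath.rect(4, 5 * math.pi / 4)
```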
Yes
Proposition 21.9. Let \( {r}_{1}\left( {\cos \left( {\theta }_{1}\right) + i\sin \left( {\theta }_{1}\right) }\right) \) and \( {r}_{2}\left( {\cos \left( {\theta }_{2}\right) + i\sin \left( {\theta }_{2}\right) }\right) \) be two complex numbers in polar form. Then, the product and quotient of these are given by\n\n\[ \n{r}_{1}\left( {\cos \left( {\theta }_{1}\right) + i\sin \left( {\theta }_{1}\right) }\right) \cdot {r}_{2}\left( {\cos \left( {\theta }_{2}\right) + i\sin \left( {\theta }_{2}\right) }\right) \n\]\n\n\[ \n= {r}_{1}{r}_{2} \cdot \left( {\cos \left( {{\theta }_{1} + {\theta }_{2}}\right) + i\sin \left( {{\theta }_{1} + {\theta }_{2}}\right) }\right) \n\]\n\n(21.7)\n\n\[ \n\frac{{r}_{1}\left( {\cos \left( {\theta }_{1}\right) + i\sin \left( {\theta }_{1}\right) }\right) }{{r}_{2}\left( {\cos \left( {\theta }_{2}\right) + i\sin \left( {\theta }_{2}\right) }\right) } = \frac{{r}_{1}}{{r}_{2}} \cdot \left( {\cos \left( {{\theta }_{1} - {\theta }_{2}}\right) + i\sin \left( {{\theta }_{1} - {\theta }_{2}}\right) }\right) \n\]\n\n(21.8)
Proof. The proof uses the addition formulas for the trigonometric functions \( \sin \left( {\alpha + \beta }\right) \) and \( \cos \left( {\alpha + \beta }\right) \) from Proposition 18.1 on page 252:\n\n\[ \n{r}_{1}\left( {\cos \left( {\theta }_{1}\right) + i\sin \left( {\theta }_{1}\right) }\right) \cdot {r}_{2}\left( {\cos \left( {\theta }_{2}\right) + i\sin \left( {\theta }_{2}\right) }\right) \n\]\n\n\[ \n= {r}_{1}{r}_{2} \cdot \left( {\cos \left( {\theta }_{1}\right) \cos \left( {\theta }_{2}\right) + i\cos \left( {\theta }_{1}\right) \sin \left( {\theta }_{2}\right) + i\sin \left( {\theta }_{1}\right) \cos \left( {\theta }_{2}\right) + {i}^{2}\sin \left( {\theta }_{1}\right) \sin \left( {\theta }_{2}\right) }\right) \n\]\n\n\[ \n= {r}_{1}{r}_{2} \cdot \left( {\left( {\cos \left( {\theta }_{1}\right) \cos \left( {\theta }_{2}\right) - \sin \left( {\theta }_{1}\right) \sin \left( {\theta }_{2}\right) }\right) + i\left( {\cos \left( {\theta }_{1}\right) \sin \left( {\theta }_{2}\right) + \sin \left( {\theta }_{1}\right) \cos \left( {\theta }_{2}\right) }\right) }\right) \n\]\n\n\[ \n= {r}_{1}{r}_{2} \cdot \left( {\cos \left( {{\theta }_{1} + {\theta }_{2}}\right) + i\sin \left( {{\theta }_{1} + {\theta }_{2}}\right) }\right) \n\]\n\nFor the division formula, note that the multiplication formula (21.7) gives\n\n\[ \n{r}_{2}\left( {\cos \left( {\theta }_{2}\right) + i\sin \left( {\theta }_{2}\right) }\right) \cdot \frac{1}{{r}_{2}}\left( {\cos \left( {-{\theta }_{2}}\right) + i\sin \left( {-{\theta }_{2}}\right) }\right) = {r}_{2}\frac{1}{{r}_{2}}\left( {\cos \left( {{\theta }_{2} - {\theta }_{2}}\right) + i\sin \left( {{\theta }_{2} - {\theta }_{2}}\right) }\right) \n\]\n\n\[ \n= 1 \cdot \left( {\cos 0 + i\sin 0}\right) = 1 \cdot \left( {1 + i \cdot 0}\right) = 1 \n\]\n\n\[ \n\Rightarrow \;\frac{1}{{r}_{2}\left( {\cos \left( {\theta }_{2}\right) + i\sin \left( {\theta }_{2}\right) }\right) } = \frac{1}{{r}_{2}}\left( {\cos \left( {-{\theta }_{2}}\right) + i\sin \left( {-{\theta }_{2}}\right) }\right) , \n\]\n\nso that\n\n\[ \n\frac{{r}_{1}\left( {\cos \left( {\theta }_{1}\right) + i\sin \left( {\theta }_{1}\right) }\right) }{{r}_{2}\left( {\cos \left( {\theta }_{2}\right) + i\sin \left( {\theta }_{2}\right) }\right) } = {r}_{1}\left( {\cos \left( {\theta }_{1}\right) + i\sin \left( {\theta }_{1}\right) }\right) \cdot \frac{1}{{r}_{2}\left( {\cos \left( {\theta }_{2}\right) + i\sin \left( {\theta }_{2}\right) }\right) } \n\]\n\n\[ \n= {r}_{1}\left( {\cos \left( {\theta }_{1}\right) + i\sin \left( {\theta }_{1}\right) }\right) \cdot \frac{1}{{r}_{2}}\left( {\cos \left( {-{\theta }_{2}}\right) + i\sin \left( {-{\theta }_{2}}\right) }\right) = \frac{{r}_{1}}{{r}_{2}} \cdot \left( {\cos \left( {{\theta }_{1} - {\theta }_{2}}\right) + i\sin \left( {{\theta }_{1} - {\theta }_{2}}\right) }\right) . \n\]
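The two formulas of Proposition 21.9 can be spot-checked numerically against ordinary complex arithmetic (a sketch with arbitrary sample values, not from the text):

```python
import cmath

def polar_mul(r1, t1, r2, t2):
    # Formula (21.7): multiply the magnitudes, add the angles.
    return (r1 * r2, t1 + t2)

def polar_div(r1, t1, r2, t2):
    # Formula (21.8): divide the magnitudes, subtract the angles.
    return (r1 / r2, t1 - t2)

# Compare against direct complex multiplication/division.
r1, t1, r2, t2 = 2.0, 0.7, 3.0, 1.1
z1, z2 = cmath.rect(r1, t1), cmath.rect(r2, t2)

prod_err = abs(z1 * z2 - cmath.rect(*polar_mul(r1, t1, r2, t2)))
quot_err = abs(z1 / z2 - cmath.rect(*polar_div(r1, t1, r2, t2)))
```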
Yes
Multiply or divide the complex numbers, and write your answer in polar and standard form.
Solution. We will multiply and divide the complex numbers using equations (21.7) and (21.8), respectively, and then convert them to standard notation \( a + {bi} \) .\n\n\[ \text{a)}5\left( {\cos \left( {11}^{ \circ }\right) + i\sin \left( {11}^{ \circ }\right) }\right) \cdot 8\left( {\cos \left( {34}^{ \circ }\right) + i\sin \left( {34}^{ \circ }\right) }\right) \]\n\n\[ = 5 \cdot 8 \cdot \left( {\cos \left( {{11}^{ \circ } + {34}^{ \circ }}\right) + i\sin \left( {{11}^{ \circ } + {34}^{ \circ }}\right) }\right) = {40}\left( {\cos \left( {45}^{ \circ }\right) + i\sin \left( {45}^{ \circ }\right) }\right) \]\n\n\[ = {40}\left( {\frac{\sqrt{2}}{2} + i\frac{\sqrt{2}}{2}}\right) = {40}\frac{\sqrt{2}}{2} + i \cdot {40}\frac{\sqrt{2}}{2} = {20}\sqrt{2} + {20}\sqrt{2}i \]
Yes
Graph the vectors \( \overrightarrow{v},\overrightarrow{w},\overrightarrow{r},\overrightarrow{s},\overrightarrow{t} \) in the plane, where \( \overrightarrow{v} = \overrightarrow{PQ} \) with \( P\left( {6,3}\right) \) and \( Q\left( {4, - 2}\right) \), and \[ \overrightarrow{w} = \langle 3, - 1\rangle ,\;\overrightarrow{r} = \langle - 4, - 2\rangle ,\;\overrightarrow{s} = \langle 0,2\rangle ,\;\overrightarrow{t} = \langle - 5,3\rangle . \]
![37f2b8bb-bce0-4882-a084-b5aa2f5782a5_311_0.jpg](images/37f2b8bb-bce0-4882-a084-b5aa2f5782a5_311_0.jpg)
Yes
Example 22.4. Find the magnitude and directional angle of the given vectors. \[ \begin{array}{lll} \text{ a) }\langle - 6,6\rangle & \text{ b) }\langle 4, - 3\rangle & \text{ c) }\langle - 2\sqrt{3}, - 2\rangle \\ \text{ d) }\langle 8,4\sqrt{5}\rangle & \text{ e) }\overrightarrow{PQ},\;\text{ where } & P\left( {9,2}\right) \text{ and }Q\left( {3,{10}}\right) \end{array} \]
Solution. a) We use formulas (22.2), and the calculation is analogous to Example 21.7. The magnitude of \( \overrightarrow{v} = \langle - 6,6\rangle \) is \[ \parallel \overrightarrow{v}\parallel = \sqrt{{\left( -6\right) }^{2} + {6}^{2}} = \sqrt{{36} + {36}} = \sqrt{72} = \sqrt{{36} \cdot 2} = 6\sqrt{2}. \] The directional angle \( \theta \) is given by \( \tan \left( \theta \right) = \frac{6}{-6} = - 1 \) . Now, since \( {\tan }^{-1}\left( {-1}\right) = - {\tan }^{-1}\left( 1\right) = - {45}^{ \circ } \) is in the fourth quadrant, but \( \overrightarrow{v} = \langle - 6,6\rangle \) drawn at the origin \( O\left( {0,0}\right) \) has its endpoint in the second quadrant, we see that the angle is \( \theta = - {45}^{ \circ } + {180}^{ \circ } = {135}^{ \circ } \) .
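As an aside (not in the original text), the same computation can be sketched in Python. The two-argument `math.atan2(y, x)` already chooses the correct quadrant, so the manual \( \pm {180}^{ \circ } \) adjustment is handled automatically:

```python
import math

def magnitude_and_direction(x, y):
    # ||v|| = sqrt(x^2 + y^2); atan2(y, x) returns the angle in the
    # correct quadrant, so no manual 180° correction is needed.
    r = math.hypot(x, y)
    theta = math.degrees(math.atan2(y, x)) % 360
    return r, theta

r, theta = magnitude_and_direction(-6, 6)   # part a): 6*sqrt(2) and 135°
```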
No
Example 22.6. Multiply, and graph the vectors.\n\n\[ \text{a)}4 \cdot \langle - 2,1\rangle \;\text{b)}\left( {-3}\right) \cdot \langle - 6, - 2\rangle \]
Solution. a) The calculation is straightforward.\n\n\[ 4 \cdot \langle - 2,1\rangle = \langle 4 \cdot \left( {-2}\right) ,4 \cdot 1\rangle = \langle - 8,4\rangle \]\n\nThe vectors are displayed below. We see that \( \langle - 2,1\rangle \) and \( \langle - 8,4\rangle \) both have the same directional angle, and the latter stretches the former by a factor of 4.\n\n![37f2b8bb-bce0-4882-a084-b5aa2f5782a5_314_0.jpg](images/37f2b8bb-bce0-4882-a084-b5aa2f5782a5_314_0.jpg)
Yes
Find a unit vector in the direction of \( \overrightarrow{v} \) .
\[ \text{a)}\langle 8,6\rangle \;\text{b)}\langle - 2,3\sqrt{7}\rangle \]\n\nSolution. a) Note that the magnitude of \( \overrightarrow{v} = \langle 8,6\rangle \) is\n\n\[ \parallel \langle 8,6\rangle \parallel = \sqrt{{8}^{2} + {6}^{2}} = \sqrt{{64} + {36}} = \sqrt{100} = {10}. \]\n\nTherefore, if we multiply \( \langle 8,6\rangle \) by \( \frac{1}{10} \), we obtain \( \frac{1}{10} \cdot \langle 8,6\rangle = \langle \frac{8}{10},\frac{6}{10}\rangle = \langle \frac{4}{5},\frac{3}{5}\rangle \), which (according to Observation 22.7 above) has the same directional angle as \( \langle 8,6\rangle \) and has a magnitude of 1:\n\n\[ \parallel \frac{1}{10} \cdot \langle 8,6\rangle \parallel = \frac{1}{10} \cdot \parallel \langle 8,6\rangle \parallel = \frac{1}{10} \cdot {10} = 1 \]\n\nb) The magnitude of \( \langle - 2,3\sqrt{7}\rangle \) is\n\n\[ \parallel \langle - 2,3\sqrt{7}\rangle \parallel = \sqrt{{\left( -2\right) }^{2} + {\left( 3\sqrt{7}\right) }^{2}} = \sqrt{4 + 9 \cdot 7} = \sqrt{4 + {63}} = \sqrt{67}. \]\n\nTherefore, we have the unit vector\n\n\[ \frac{1}{\sqrt{67}} \cdot \langle - 2,3\sqrt{7}\rangle = \langle \frac{-2}{\sqrt{67}},\frac{3\sqrt{7}}{\sqrt{67}}\rangle = \langle \frac{-2\sqrt{67}}{67},\frac{3\sqrt{7}\sqrt{67}}{67}\rangle = \langle \frac{-2\sqrt{67}}{67},\frac{3\sqrt{469}}{67}\rangle \]\n\nwhich again has the same directional angle as \( \langle - 2,3\sqrt{7}\rangle \) . Indeed, \( \frac{1}{\sqrt{67}} \cdot \langle - 2,3\sqrt{7}\rangle \) is a unit vector, since \( \parallel \frac{1}{\sqrt{67}} \cdot \langle - 2,3\sqrt{7}\rangle \parallel = \frac{1}{\sqrt{67}} \cdot \parallel \langle - 2,3\sqrt{7}\rangle \parallel = \frac{1}{\sqrt{67}} \cdot \sqrt{67} = 1. \)
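The normalization step can be sketched as a short Python helper (an illustration, not from the text): divide each component by the magnitude and confirm the result has length 1.

```python
import math

def unit_vector(x, y):
    # Scale v by 1/||v||; assumes v is not the zero vector.
    r = math.hypot(x, y)
    return (x / r, y / r)

u = unit_vector(8, 6)                      # part a): (4/5, 3/5)
w = unit_vector(-2, 3 * math.sqrt(7))      # part b)
```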
Yes
Perform the vector addition and simplify as much as possible.\n\n\[ \text{a)}\langle 3, - 5\rangle + \langle 6,4\rangle \;\text{b)}5 \cdot \langle - 6,2\rangle - 7 \cdot \langle 1, - 3\rangle \;\text{c)}4\overrightarrow{i} + 9\overrightarrow{j} \]
Solution. We can find the answer by direct algebraic computation.\n\na) \( \langle 3, - 5\rangle + \langle 6,4\rangle = \langle 3 + 6, - 5 + 4\rangle = \langle 9, - 1\rangle \)\n\nb) \( 5 \cdot \langle - 6,2\rangle - 7 \cdot \langle 1, - 3\rangle = \langle - {30},{10}\rangle + \langle - 7,{21}\rangle = \langle - {37},{31}\rangle \)\n\nc) \( \;4\overrightarrow{i} + 9\overrightarrow{j} = 4 \cdot \langle 1,0\rangle + 9 \cdot \langle 0,1\rangle = \langle 4,0\rangle + \langle 0,9\rangle = \langle 4,9\rangle \)
Yes
The forces \( \overrightarrow{{F}_{1}} \) and \( \overrightarrow{{F}_{2}} \) are applied to an object. Find the resulting total force \( \overrightarrow{F} = {\overrightarrow{F}}_{1} + {\overrightarrow{F}}_{2} \) . Determine the magnitude and directional angle of the total force \( \overrightarrow{F} \) . Approximate these values as necessary. Recall that the international system of units for force is the newton \( \left\lbrack {{1N} = 1\frac{{kg} \cdot m}{{s}^{2}}}\right\rbrack \) .
a) The vectors \( {\overrightarrow{F}}_{1} \) and \( {\overrightarrow{F}}_{2} \) are given by their magnitudes and directional angles. However, the addition of vectors (in Definition 22.10) is defined in terms of their components. Therefore, our first task is to find the vectors in component form. As was stated in equation (22.3) on page 301 the vectors are calculated by \( \overrightarrow{v} = \langle \parallel \overrightarrow{v}\parallel \cdot \cos \left( \theta \right) ,\parallel \overrightarrow{v}\parallel \cdot \sin \left( \theta \right) \rangle \) . Therefore,\n\n\[ \n{\overrightarrow{F}}_{1} = \left\langle {3 \cdot \cos \left( {45}^{ \circ }\right) ,3 \cdot \sin \left( {45}^{ \circ }\right) }\right\rangle = \left\langle {3 \cdot \frac{\sqrt{2}}{2},3 \cdot \frac{\sqrt{2}}{2}}\right\rangle = \left\langle {\frac{3\sqrt{2}}{2},\frac{3\sqrt{2}}{2}}\right\rangle \n\]\n\n\[ \n{\overrightarrow{F}}_{2} = \left\langle {5 \cdot \cos \left( {135}^{ \circ }\right) ,5 \cdot \sin \left( {135}^{ \circ }\right) }\right\rangle = \left\langle {5 \cdot \frac{-\sqrt{2}}{2},5 \cdot \frac{\sqrt{2}}{2}}\right\rangle = \left\langle {\frac{-5\sqrt{2}}{2},\frac{5\sqrt{2}}{2}}\right\rangle \n\]\n\nThe total force is the sum of the forces.\n\n\[ \n\overrightarrow{F} = \overrightarrow{{F}_{1}} + \overrightarrow{{F}_{2}} = \left\langle {\frac{3\sqrt{2}}{2},\frac{3\sqrt{2}}{2}}\right\rangle + \left\langle {\frac{-5\sqrt{2}}{2},\frac{5\sqrt{2}}{2}}\right\rangle \n\]\n\n\[ \n= \langle \frac{3\sqrt{2}}{2} + \frac{-5\sqrt{2}}{2},\frac{3\sqrt{2}}{2} + \frac{5\sqrt{2}}{2}\rangle = \langle \frac{3\sqrt{2} - 5\sqrt{2}}{2},\frac{3\sqrt{2} + 5\sqrt{2}}{2}\rangle \n\]\n\n\[ \n= \left\langle {\frac{-2\sqrt{2}}{2},\frac{8\sqrt{2}}{2}}\right\rangle = \langle - \sqrt{2},4\sqrt{2}\rangle \n\]\n\nThe total force applied in components is \( \overrightarrow{F} = \langle - \sqrt{2},4\sqrt{2}\rangle . 
\) It has a magnitude of \( \parallel \overrightarrow{F}\parallel = \sqrt{{\left( -\sqrt{2}\right) }^{2} + {\left( 4\sqrt{2}\right) }^{2}} = \sqrt{2 + {16} \cdot 2} = \sqrt{34} \approx {5.83} \) newton. The directional angle is given by \( \tan \left( \theta \right) = \frac{4\sqrt{2}}{-\sqrt{2}} = - 4 \) . Since \( {\tan }^{-1}\left( {-4}\right) \approx - {76.0}^{ \circ } \) is in quadrant IV, and \( \overrightarrow{F} = \langle - \sqrt{2},4\sqrt{2}\rangle \) is in quadrant II, we see that the directional angle of \( \overrightarrow{F} \) is\n\n\[ \n\theta = {180}^{ \circ } + {\tan }^{-1}\left( {-4}\right) \approx {180}^{ \circ } - {76.0}^{ \circ } \approx {104}^{ \circ }. \n\]
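The whole force computation can be reproduced numerically (a sketch, not part of the book; the helper name `from_polar` is illustrative). It converts each force to components via equation (22.3), adds componentwise, and recovers magnitude and angle:

```python
import math

def from_polar(mag, angle_deg):
    # Component form: v = <||v|| cos(θ), ||v|| sin(θ)>  (equation (22.3))
    a = math.radians(angle_deg)
    return (mag * math.cos(a), mag * math.sin(a))

F1 = from_polar(3, 45)
F2 = from_polar(5, 135)
F = (F1[0] + F2[0], F1[1] + F2[1])             # componentwise sum

mag = math.hypot(*F)                            # sqrt(34) ≈ 5.83 newtons
angle = math.degrees(math.atan2(F[1], F[0]))    # ≈ 104°
```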
Yes
Here are some examples of sequences.
For many of these sequences we can find rules that describe how to obtain the individual terms. For example, in (a), we always add the fixed number 2 to the previous number to obtain the next, starting from the first term 4. This is an example of an arithmetic sequence, and we will study those in more detail in section 23.2 below.\n\nIn (b), we start with the first element 1 and multiply by the fixed number 3 to obtain the next term. This is an example of a geometric sequence, and we will study those in more detail in section 24 below.\n\nThe sequence in (c) alternates between +5 and -5, starting from +5. Note that we can get from one term to the next by multiplying by \( \left( {-1}\right) \) . Therefore, this is another example of a geometric sequence.\n\nThe sequence in (d) is called the Fibonacci sequence. In the Fibonacci sequence, each term is calculated by adding the previous two terms, starting with the first two terms 1 and 1:\n\n\[ 1 + 1 = 2,\;1 + 2 = 3,\;2 + 3 = 5,\;3 + 5 = 8,\;5 + 8 = {13},\;\ldots \]\n\nFinally, the sequence in (e) does not seem to have any obvious rule by which the terms are generated.\n\nIn many cases, the sequence is given by a formula for the \( n \) th term \( {a}_{n} \) .
No
Consider the sequence \( \left\{ {a}_{n}\right\} \) with \( {a}_{n} = {4n} + 3 \) .
We can calculate the individual terms of this sequence:\n\nfirst term: \( \;{a}_{1} = 4 \cdot 1 + 3 = 7 \) ,\n\nsecond term: \( {a}_{2} = 4 \cdot 2 + 3 = {11} \) ,\n\nthird term: \( \;{a}_{3} = 4 \cdot 3 + 3 = {15} \) ,\n\nfourth term: \( \;{a}_{4} = 4 \cdot 4 + 3 = {19} \) ,\n\nfifth term: \( \;{a}_{5} = 4 \cdot 5 + 3 = {23} \) \n\n\( \vdots \)\n\nThus, the sequence is: \( \;7,{11},{15},{19},{23},{27},{31},{35},\ldots \)\n\nFurthermore, from the formula, we can directly calculate a higher term, for example the 200th term is given by:\n\n\[ \n\text{200th term:}\;{a}_{200} = 4 \cdot {200} + 3 = {803} \n\]
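As an aside (not in the text), a closed formula like \( {a}_{n} = {4n} + 3 \) translates directly into a function, from which any term can be generated on demand:

```python
def a(n):
    # n-th term of the sequence a_n = 4n + 3
    return 4 * n + 3

terms = [a(n) for n in range(1, 9)]   # 7, 11, 15, 19, 23, 27, 31, 35
```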
Yes
Example 23.4. Find the first 6 terms of each sequence.
Solution. a) We can easily calculate the first 6 terms of \( {a}_{n} = {n}^{2} \) directly:\n\n\[ 1,4,9,{16},{25},{36},\ldots \]\n\nWe can also use the calculator to produce the terms of a sequence. To this end, we switch the calculator from function mode to sequence mode in the mode menu (press mode):\n\n![37f2b8bb-bce0-4882-a084-b5aa2f5782a5_324_0.jpg](images/37f2b8bb-bce0-4882-a084-b5aa2f5782a5_324_0.jpg)\n\nTo enter the sequence, we need to use the LIST menu:\n\n![37f2b8bb-bce0-4882-a084-b5aa2f5782a5_324_1.jpg](images/37f2b8bb-bce0-4882-a084-b5aa2f5782a5_324_1.jpg)\n\nand choose the seq( item (press 5):\n\n![37f2b8bb-bce0-4882-a084-b5aa2f5782a5_324_2.jpg](images/37f2b8bb-bce0-4882-a084-b5aa2f5782a5_324_2.jpg)\n\nFor the sequence, we need to specify four pieces of information, where the index \( n \) can be entered via the \( \left( {\mathrm{X},\mathrm{T},\theta ,\mathrm{n}}\right) \) key.\n\n![37f2b8bb-bce0-4882-a084-b5aa2f5782a5_324_3.jpg](images/37f2b8bb-bce0-4882-a084-b5aa2f5782a5_324_3.jpg)\n\nThe 1st entry is the given formula for \( {a}_{n} = {n}^{2} \), the 2nd entry is the index \( n \) of the sequence \( {a}_{n} \), and the 3rd and 4th entries are the first and last index, 1 and 6, of the wanted sequence, here \( {a}_{1},\ldots ,{a}_{6} \) .\n\nAlternatively, we can enter the function in the \( y = \) menu, starting from the first index \( n = 1 \), with the \( n \) th term given by \( {a}_{n} = {n}^{2} \) . The values of the sequence are then displayed in the table window:\n\n![37f2b8bb-bce0-4882-a084-b5aa2f5782a5_325_0.jpg](images/37f2b8bb-bce0-4882-a084-b5aa2f5782a5_325_0.jpg)
Yes
Find the first 6 terms in the sequence described below.
Solution. a) The first term is explicitly given as \( {a}_{1} = 4 \) . Then, we can calculate the following terms via \( {a}_{n} = {a}_{n - 1} + 5 \) :\n\n\[ {a}_{2} = {a}_{1} + 5 = 4 + 5 = 9 \]\n\n\[ {a}_{3} = {a}_{2} + 5 = 9 + 5 = {14} \]\n\n\[ {a}_{4} = {a}_{3} + 5 = {14} + 5 = {19} \]\n\n\[ {a}_{5} = {a}_{4} + 5 = {19} + 5 = {24} \]\n\n\[ \vdots \]\n\nb) We have \( {a}_{1} = 3 \), and calculate \( {a}_{2} = 2 \cdot {a}_{1} = 2 \cdot 3 = 6,{a}_{3} = 2 \cdot {a}_{2} = \) \( 2 \cdot 6 = {12},{a}_{4} = 2 \cdot {a}_{3} = 2 \cdot {12} = {24} \), etc. We see that the effect of the recursive relation \( {a}_{n} = 2 \cdot {a}_{n - 1} \) is to double the previous number. The sequence is:\n\n\[ 3,6,{12},{24},{48},{96},{192},\ldots \]\n\nc) Starting from \( {a}_{1} = 1 \), and \( {a}_{2} = 1 \), we can calculate the higher terms:\n\n\[ {a}_{3} = {a}_{1} + {a}_{2} = 1 + 1 = 2 \]\n\n\[ {a}_{4} = {a}_{2} + {a}_{3} = 1 + 2 = 3 \]\n\n\[ {a}_{5} = {a}_{3} + {a}_{4} = 2 + 3 = 5 \]\n\n\[ {a}_{6} = {a}_{4} + {a}_{5} = 3 + 5 = 8 \]\n\n\[ \vdots \]\n\nStudying the sequence for a short while, we see that this is precisely the Fibonacci sequence from example 23.2(d).
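The recursive rules above can be sketched in Python (an illustration, not from the text); each function builds the list of terms by repeatedly applying the recursion:

```python
def add_step(first, step, count):
    # Part a): a_1 = first, a_n = a_{n-1} + step
    terms = [first]
    while len(terms) < count:
        terms.append(terms[-1] + step)
    return terms

def fibonacci(count):
    # Part c): a_1 = a_2 = 1, a_n = a_{n-2} + a_{n-1}
    terms = [1, 1]
    while len(terms) < count:
        terms.append(terms[-2] + terms[-1])
    return terms
```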
No
\[ \text{a)}\mathop{\sum }\limits_{{i = 1}}^{4}{a}_{i}\text{, for}{a}_{i} = {7i} + 3 \]
Solution. a) The first four terms \( {a}_{1},{a}_{2},{a}_{3},{a}_{4} \) of the sequence \( {\left\{ {a}_{i}\right\} }_{i \geq 1} \) are:\n\n\[ {10},{17},{24},{31} \]\n\nThe sum is therefore:\n\n\[ \mathop{\sum }\limits_{{i = 1}}^{4}{a}_{i} = {a}_{1} + {a}_{2} + {a}_{3} + {a}_{4} = {10} + {17} + {24} + {31} = {82} \]
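The summation symbol corresponds directly to Python's `sum` over a generator (a quick check, not part of the text):

```python
# Sum of a_i = 7i + 3 for i = 1, ..., 4, i.e. 10 + 17 + 24 + 31
total = sum(7 * i + 3 for i in range(1, 5))
```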
Yes
Determine if the sequence is an arithmetic sequence. If so, then find the general formula for \( {a}_{n} \) in the form of equation (23.2).
Solution. a) Calculating the difference between two consecutive terms always gives the same answer: \( {13} - 7 = 6 \), \( {19} - {13} = 6 \), \( {25} - {19} = 6 \), etc. Therefore, the common difference is \( d = 6 \), which shows that this is an arithmetic sequence. Furthermore, the first term is \( {a}_{1} = 7 \), so that the general formula for the \( n \) th term is \( {a}_{n} = 7 + 6 \cdot \left( {n - 1}\right) \) .
Yes
Find the general formula of an arithmetic sequence with the given property.
Solution. a) According to equation (23.2), the general term is \( {a}_{n} = {a}_{1} + d\left( {n - 1}\right) \) . We know that \( d = {12} \), so that we only need to find \( {a}_{1} \) . Plugging \( {a}_{6} = {68} \) into \( {a}_{n} = {a}_{1} + d\left( {n - 1}\right) \), we may solve for \( {a}_{1} \) :\n\n\[ {68} = {a}_{6} = {a}_{1} + {12} \cdot \left( {6 - 1}\right) = {a}_{1} + {12} \cdot 5 = {a}_{1} + {60}\;\overset{\left( -{60}\right) }{ \Rightarrow }\;{a}_{1} = {68} - {60} = 8 \]\n\nThe \( n \) th term is therefore \( {a}_{n} = 8 + {12} \cdot \left( {n - 1}\right) \) .
Yes
Find the sum of the first 100 integers, starting from 1. In other words, we want to find the sum of \( 1 + 2 + 3 + \cdots + {99} + {100} \) .
Let \( S = 1 + 2 + 3 + \cdots + {98} + {99} + {100} \) be what we want to find. Note that\n\n\[ \n{2S} = \begin{matrix} & 1 & + & 2 & + & 3 & \cdots & + & {98} & + & {99} & + & {100} \\ + & {100} & + & {99} & + & {98} & \cdots & + & 3 & + & 2 & + & 1 \end{matrix}. \n\]\n\nNote that the second line is also \( S \) but is added in the reverse order. Adding vertically we see then that\n\n\[ \n{2S} = {101} + {101} + {101} + \cdots + {101} + {101} + {101}, \n\]\n\nwhere there are 100 terms on the right hand side. So\n\n\[ \n{2S} = {100} \cdot {101}\text{and therefore}S = \frac{{100} \cdot {101}}{2} = {5050}\text{.} \n\]
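Gauss's pairing argument above can be verified in a few lines (an aside, not part of the text): pairing each term of the forward sum with the corresponding term of the reversed sum gives 100 pairs, each totaling 101.

```python
forward = list(range(1, 101))
backward = list(range(100, 0, -1))
pair_sums = [a + b for a, b in zip(forward, backward)]   # 100 copies of 101
two_S = sum(pair_sums)                                    # 2S = 100 * 101
S = two_S // 2
```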
Yes
a) Find the sum \( {a}_{1} + \cdots + {a}_{60} \) for the arithmetic sequence \( {a}_{n} = 2 + {13}\left( {n - 1}\right) \) .
Solution. a) The sum is given by the formula (23.3): \( \mathop{\sum }\limits_{{i = 1}}^{k}{a}_{i} = \frac{k}{2} \cdot \left( {{a}_{1} + {a}_{k}}\right) \) . Here, \( k = {60} \), \( {a}_{1} = 2 \), and \( {a}_{60} = 2 + {13} \cdot \left( {{60} - 1}\right) = 2 + {13} \cdot {59} = 2 + {767} = {769} \) . We obtain a sum of\n\n\[ \n{a}_{1} + \cdots + {a}_{60} = \mathop{\sum }\limits_{{i = 1}}^{{60}}{a}_{i} = \frac{60}{2} \cdot \left( {2 + {769}}\right) = {30} \cdot {771} = {23130}.\n\]\n\nWe may confirm this with the calculator as described in Example 23.9 (on page 316) in the previous section.\n\nEnter:\n\n![37f2b8bb-bce0-4882-a084-b5aa2f5782a5_333_0.jpg](images/37f2b8bb-bce0-4882-a084-b5aa2f5782a5_333_0.jpg)\n\n\[ \n\operatorname{sum}\left( \operatorname{seq}\left( {2 + {13} \cdot \left( {n - 1}\right), n,1,{60}}\right) \right)\n\]
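The same confirmation can be done in Python instead of on the calculator (a sketch, not from the text): compare formula (23.3) against the direct term-by-term sum.

```python
def arithmetic_sum(a1, d, k):
    # Formula (23.3): a_1 + ... + a_k = (k/2) * (a_1 + a_k)
    ak = a1 + d * (k - 1)
    return k * (a1 + ak) // 2

# Direct term-by-term sum of a_n = 2 + 13(n - 1) for n = 1, ..., 60
direct = sum(2 + 13 * (n - 1) for n in range(1, 61))
```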
Yes
Example 24.2. Determine if the sequence is a geometric, or arithmetic sequence, or neither or both. If it is a geometric or arithmetic sequence, then find the general formula for \( {a}_{n} \) in the form (24.1) or (23.2).
a) Calculating the quotient of two consecutive terms always gives the same number \( 6 \div 3 = 2,{12} \div 6 = 2,{24} \div {12} = 2 \), etc. Therefore the common ratio is \( r = 2 \), which shows that this is a geometric sequence. Furthermore, the first term is \( {a}_{1} = 3 \), so that the general formula for the \( n \) th term is \( {a}_{n} = 3 \cdot {2}^{n - 1} \) .
Yes
Find the general formula of a geometric sequence with the given property.
a) Since \( \left\{ {a}_{n}\right\} \) is a geometric sequence, its general term is \( {a}_{n} = {a}_{1} \cdot {r}^{n - 1} \) . We know that \( r = 4 \), so we still need to find \( {a}_{1} \) . Using \( {a}_{5} = {6400} \), we obtain:\n\n\[ \n{6400} = {a}_{5} = {a}_{1} \cdot {4}^{5 - 1} = {a}_{1} \cdot {4}^{4} = {256} \cdot {a}_{1}\;\overset{\left( \div {256}\right) }{ \Rightarrow }\;{a}_{1} = \frac{6400}{256} = {25} \n\]\n\nThe sequence is therefore given by the formula \( {a}_{n} = {25} \cdot {4}^{n - 1} \) .
Yes
Consider the geometric sequence \( {a}_{n} = 8 \cdot {5}^{n - 1} \), that is the sequence:\n\n\[ 8,{40},{200},{1000},{5000},{25000},{125000},\ldots \]\n\nWe want to add the first 6 terms of this sequence.\n\n\[ 8 + {40} + {200} + {1000} + {5000} + {25000} = {31248} \]
In general, it may be much more difficult to simply add the terms as we did above, and we need a better general method. For this, we multiply the sum \( \left( {8 + {40} + {200} + {1000} + {5000} + {25000}}\right) \) by \( \left( {1 - 5}\right) \) and simplify using the distributive law:\n\n\[ \left( {1 - 5}\right) \cdot \left( {8 + {40} + {200} + {1000} + {5000} + {25000}}\right) \]\n\n\[ = 8 - {40} + {40} - {200} + {200} - {1000} + {1000} - {5000} + {5000} - {25000} + {25000} - {125000} \]\n\n\[ = 8 - {125000} \]\n\nThe middle terms form what is called a telescoping sum: they cancel in pairs, leaving only the very first and last terms. Dividing by \( \left( {1 - 5}\right) \), we obtain:\n\n\[ 8 + {40} + {200} + {1000} + {5000} + {25000} = \frac{8 - {125000}}{1 - 5} = \frac{-{124992}}{-4} = {31248} \]
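The telescoping argument yields the closed formula for a finite geometric sum, which can be checked against the direct sum in Python (a sketch, not from the text):

```python
def geometric_sum(a1, r, k):
    # The telescoping argument gives, for r != 1:
    #   a1 + a1*r + ... + a1*r**(k-1) = a1 * (1 - r**k) / (1 - r)
    return a1 * (1 - r**k) // (1 - r)

# Direct term-by-term sum of a_n = 8 * 5**(n-1) for n = 1, ..., 6
direct = sum(8 * 5**(n - 1) for n in range(1, 7))
```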
Yes
a) Find the sum \( \mathop{\sum }\limits_{{n = 1}}^{6}{a}_{n} \) for the geometric sequence \( {a}_{n} = {10} \cdot {3}^{n - 1} \) .
Solution. a) We need to find the sum \( {a}_{1} + {a}_{2} + {a}_{3} + {a}_{4} + {a}_{5} + {a}_{6} \), and we will do so using the formula provided in equation (24.2). Since \( {a}_{n} = {10} \cdot {3}^{n - 1} \), we have \( {a}_{1} = {10} \) and \( r = 3 \), so that\n\n\[ \mathop{\sum }\limits_{{n = 1}}^{6}{a}_{n} = {10} \cdot \frac{1 - {3}^{6}}{1 - 3} = {10} \cdot \frac{1 - {729}}{1 - 3} = {10} \cdot \frac{-{728}}{-2} = {10} \cdot {364} = {3640} \]
Yes
Consider the geometric sequence\n\n\[ 1,\frac{1}{2},\frac{1}{4},\frac{1}{8},\frac{1}{16},\ldots \]\n\nHere, the common ratio is \( r = \frac{1}{2} \), and the first term is \( {a}_{1} = 1 \), so that the formula for \( {a}_{n} \) is \( {a}_{n} = {\left( \frac{1}{2}\right) }^{n - 1} \) . We are interested in summing all infinitely many terms of this sequence:\n\n\[ 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \ldots \]
We see that adding each term takes the sum closer and closer to the number 2. More precisely, adding a term \( {a}_{n} \) to the partial sum \( {a}_{1} + \cdots + {a}_{n - 1} \) cuts the distance between 2 and \( {a}_{1} + \cdots + {a}_{n - 1} \) in half. For this reason we can, in fact, get arbitrarily close to 2 , so that it is reasonable to expect that\n\n\[ 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \frac{1}{16} + \cdots = 2 \]\n\nIn the next definition and observation, this equation will be justified and made more precise.
Yes
Example 24.10. Find the value of the infinite geometric series.\n\n\[ \n\text{a)}\mathop{\sum }\limits_{{j = 1}}^{\infty }{a}_{j}\text{, for}{a}_{j} = 5 \cdot {\left( \frac{1}{3}\right) }^{j - 1} \n\]
Solution. a) We use formula (24.4) for the geometric series \( {a}_{n} = 5 \cdot {\left( \frac{1}{3}\right) }^{n - 1} \), that is, \( {a}_{1} = 5 \cdot {\left( \frac{1}{3}\right) }^{1 - 1} = 5 \cdot {\left( \frac{1}{3}\right) }^{0} = 5 \cdot 1 = 5 \) and \( r = \frac{1}{3} \) . Therefore,\n\n\[ \n\mathop{\sum }\limits_{{j = 1}}^{\infty }{a}_{j} = {a}_{1} \cdot \frac{1}{1 - r} = 5 \cdot \frac{1}{1 - \frac{1}{3}} = 5 \cdot \frac{1}{\frac{3 - 1}{3}} = 5 \cdot \frac{1}{\frac{2}{3}} = 5 \cdot \frac{3}{2} = \frac{15}{2} \n\]
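The convergence can be illustrated with exact rational arithmetic (an aside, not from the text): the partial sums approach the limit \( \frac{15}{2} \), with the remaining gap shrinking like \( {r}^{n} \).

```python
from fractions import Fraction

a1 = Fraction(5)
r = Fraction(1, 3)

limit = a1 / (1 - r)                                # formula (24.4)
partial = sum(a1 * r**(j - 1) for j in range(1, 31))  # first 30 terms
gap = limit - partial                                # shrinks like r**n
```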
Yes
The fraction \( {0.55555}\ldots \) may be written as:
\[ {0.55555}\cdots = {0.5} + {0.05} + {0.005} + {0.0005} + {0.00005} + \ldots \] Noting that in the sequence \[ {0.5},\;{0.05},\;{0.005},\;{0.0005},\;{0.00005},\ldots \] each term is obtained from the previous one by multiplying by \( {0.1} \), we have a geometric sequence with \( {a}_{1} = {0.5} \) and \( r = {0.1} \), and we can calculate the infinite sum as: \[ {0.55555}\cdots = \mathop{\sum }\limits_{{i = 1}}^{\infty }{0.5} \cdot {\left( {0.1}\right) }^{i - 1} = {0.5} \cdot \frac{1}{1 - {0.1}} = {0.5} \cdot \frac{1}{0.9} = \frac{0.5}{0.9} = \frac{5}{9}. \] Here, we multiplied the numerator and denominator by 10 in the last step in order to eliminate the decimals.
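The result can be confirmed with exact fractions (a quick check, not part of the text):

```python
from fractions import Fraction

a1 = Fraction(5, 10)    # 0.5
r = Fraction(1, 10)     # 0.1
value = a1 / (1 - r)    # geometric series: a1 / (1 - r)
```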
Yes
\[ 4! = 1 \cdot 2 \cdot 3 \cdot 4 = {24} \]
\[ 4! = 1 \cdot 2 \cdot 3 \cdot 4 = {24} \]
Yes
Example 25.5. Calculate the binomial coefficients.\n\n\[ \n\text{a)}\left( \begin{array}{l} 6 \\ 4 \end{array}\right) \;\text{b)}\left( \begin{array}{l} 8 \\ 5 \end{array}\right) \;\text{c)}\left( \begin{array}{l} {25} \\ {23} \end{array}\right) \;\text{d)}\left( \begin{array}{l} 7 \\ 1 \end{array}\right) \;\text{e)}\left( \begin{array}{l} {11} \\ {11} \end{array}\right) \;\text{f)}\left( \begin{array}{l} {11} \\ 0 \end{array}\right) \n\]
Solution. a) Many binomial coefficients may be calculated by hand, such as:\n\n\[ \n\left( \begin{array}{l} 6 \\ 4 \end{array}\right) = \frac{6!}{4!\left( {6 - 4}\right) !} = \frac{6!}{4!2!} = \frac{1 \cdot 2 \cdot 3 \cdot 4 \cdot 5 \cdot 6}{1 \cdot 2 \cdot 3 \cdot 4 \cdot 1 \cdot 2} = \frac{5 \cdot 6}{2} = {15} \n\]
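The factorial definition translates directly into code, and agrees with Python's built-in `math.comb` (a sketch, not from the text):

```python
from math import comb, factorial

def binomial(n, k):
    # Definition: n! / (k! * (n - k)!)
    return factorial(n) // (factorial(k) * factorial(n - k))

# Parts a) through f) of Example 25.5
values = [binomial(6, 4), binomial(8, 5), binomial(25, 23),
          binomial(7, 1), binomial(11, 11), binomial(11, 0)]
```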
No
\[ {\left( a + b\right) }^{3} = \left( {a + b}\right) \cdot \left( {a + b}\right) \cdot \left( {a + b}\right) \]
\[ = \left( {{a}^{2} + {2ab} + {b}^{2}}\right) \cdot \left( {a + b}\right) \] \[ = {a}^{3} + 2{a}^{2}b + a{b}^{2} + {a}^{2}b + {2a}{b}^{2} + {b}^{3} \] \[ = {a}^{3} + 3{a}^{2}b + {3a}{b}^{2} + {b}^{3} \]
Yes
Theorem 25.9 (Binomial theorem). The \( n \) th power \( {\left( a + b\right) }^{n} \) can be expanded as:\n\n\[ \n{\left( a + b\right) }^{n} = \left( \begin{array}{l} n \\ 0 \end{array}\right) {a}^{n} + \left( \begin{array}{l} n \\ 1 \end{array}\right) {a}^{n - 1}{b}^{1} + \left( \begin{array}{l} n \\ 2 \end{array}\right) {a}^{n - 2}{b}^{2} + \cdots + \left( \begin{matrix} n \\ n - 1 \end{matrix}\right) {a}^{1}{b}^{n - 1} + \left( \begin{array}{l} n \\ n \end{array}\right) {b}^{n} \n\]
Using the summation symbol, we may write this in short:\n\n\[ \n{\left( a + b\right) }^{n} = \mathop{\sum }\limits_{{r = 0}}^{n}\left( \begin{array}{l} n \\ r \end{array}\right) \cdot {a}^{n - r} \cdot {b}^{r} \n\]
Yes
Expand \( {\left( a + b\right) }^{5} \).
\[ {\left( a + b\right) }^{5} = \left( \begin{array}{l} 5 \\ 0 \end{array}\right) {a}^{5} + \left( \begin{array}{l} 5 \\ 1 \end{array}\right) {a}^{4}{b}^{1} + \left( \begin{array}{l} 5 \\ 2 \end{array}\right) {a}^{3}{b}^{2} + \left( \begin{array}{l} 5 \\ 3 \end{array}\right) {a}^{2}{b}^{3} + \left( \begin{array}{l} 5 \\ 4 \end{array}\right) {a}^{1}{b}^{4} + \left( \begin{array}{l} 5 \\ 5 \end{array}\right) {b}^{5} \] \[ = {a}^{5} + 5{a}^{4}b + {10}{a}^{3}{b}^{2} + {10}{a}^{2}{b}^{3} + {5a}{b}^{4} + {b}^{5} \]
Yes
Expand the expression. a) \( {\left( {x}^{2} + 2{y}^{3}\right) }^{5} \)
Solution. a) We use the binomial theorem with \( a = {x}^{2} \) and \( b = 2{y}^{3} \) :\n\n\[ \n{\left( {x}^{2} + 2{y}^{3}\right) }^{5} = {\left( {x}^{2}\right) }^{5} + \left( \begin{array}{l} 5 \\ 1 \end{array}\right) {\left( {x}^{2}\right) }^{4}\left( {2{y}^{3}}\right) + \left( \begin{array}{l} 5 \\ 2 \end{array}\right) {\left( {x}^{2}\right) }^{3}{\left( 2{y}^{3}\right) }^{2} \n\]\n\n\[ \n+ \left( \begin{array}{l} 5 \\ 3 \end{array}\right) {\left( {x}^{2}\right) }^{2}{\left( 2{y}^{3}\right) }^{3} + \left( \begin{array}{l} 5 \\ 4 \end{array}\right) \left( {x}^{2}\right) {\left( 2{y}^{3}\right) }^{4} + {\left( 2{y}^{3}\right) }^{5} \n\]\n\n\[ \n= {x}^{10} + 5{x}^{8} \cdot 2{y}^{3} + {10}{x}^{6} \cdot 4{y}^{6} + {10}{x}^{4} \cdot {2}^{3}{y}^{9} + 5{x}^{2} \cdot {2}^{4}{y}^{12} + {2}^{5}{y}^{15} \n\]\n\n\[ \n= {x}^{10} + {10}{x}^{8}{y}^{3} + {40}{x}^{6}{y}^{6} + {80}{x}^{4}{y}^{9} + {80}{x}^{2}{y}^{12} + {32}{y}^{15} \n\]
Determine: a) the \( {x}^{4}{y}^{12} \) -term in the binomial expansion of \( {\left( 5{x}^{2} + 2{y}^{3}\right) }^{6} \)
Solution. a) In this case we have \( a = 5{x}^{2} \) and \( b = 2{y}^{3} \) . The term \( {x}^{4}{y}^{12} \) can be rewritten as \( {x}^{4}{y}^{12} = {\left( {x}^{2}\right) }^{2} \cdot {\left( {y}^{3}\right) }^{4} \), so that the full term \( \left( \begin{matrix} n \\ k - 1 \end{matrix}\right) {a}^{n - k + 1}{b}^{k - 1} \) (including the coefficients) is\n\n\[ \left( \begin{array}{l} 6 \\ 4 \end{array}\right) \cdot {\left( 5{x}^{2}\right) }^{2} \cdot {\left( 2{y}^{3}\right) }^{4} = {15} \cdot {25}{x}^{4} \cdot {16}{y}^{12} = {6000} \cdot {x}^{4}{y}^{12} \]
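As a quick check of this arithmetic, again using Python's math.comb:

```python
from math import comb

# Coefficient of x^4 y^12: the term has a = 5x^2 squared and b = 2y^3
# to the fourth power, i.e. C(6, 4) * 5^2 * 2^4 = 15 * 25 * 16.
coefficient = comb(6, 4) * 5**2 * 2**4
print(coefficient)  # 6000
```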
Example 1.2 (Coin Tossing) As we have noted, our intuition suggests that the probability of obtaining a head on a single toss of a coin is \( 1/2 \) . To have the computer toss a coin, we can ask it to pick a random real number in the interval \( \left\lbrack {0,1}\right\rbrack \) and test to see if this number is less than \( 1/2 \) . If so, we shall call the outcome heads; if not we call it tails. Another way to proceed would be to ask the computer to pick a random integer from the set \( \{ 0,1\} \) . The program CoinTosses carries out the experiment of tossing a coin \( n \) times. Running this program, with \( n = {20} \), resulted in:\n\n## THTTTHTTTTHTTTTTHHTT.\n\nNote that in 20 tosses, we obtained 5 heads and 15 tails. Let us toss a coin \( n \) times, where \( n \) is much larger than 20, and see if we obtain a proportion of heads closer to our intuitive guess of \( 1/2 \) . The program CoinTosses keeps track of the number of heads. When we ran this program with \( n = {1000} \), we obtained 494 heads. When we ran it with \( n = {10000} \), we obtained 5039 heads.
We notice that when we tossed the coin 10,000 times, the proportion of heads was close to the intuitive value of \( 1/2 \) .
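The program CoinTosses itself is not reproduced here; a minimal Python sketch of the same experiment (the function name and the seed are our own) might look like this:

```python
import random

def coin_tosses(n, seed=None):
    """Toss a fair coin n times; return the number of heads."""
    rng = random.Random(seed)
    # Call the toss heads when a random real in [0, 1) is below 1/2.
    return sum(1 for _ in range(n) if rng.random() < 0.5)

heads = coin_tosses(10000, seed=1)
print(heads / 10000)  # a proportion close to 1/2
```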
Example 1.3 (Dice Rolling) We consider a dice game that played an important role in the historical development of probability. The famous letters between Pascal and Fermat, which many believe started a serious study of probability, were instigated by a request for help from a French nobleman and gambler, Chevalier de Méré. It is said that de Méré had been betting that, in four rolls of a die, at least one six would turn up. He was winning consistently and, to get more people to play, he changed the game to bet that, in 24 rolls of two dice, a pair of sixes would turn up. It is claimed that de Méré lost with 24 and felt that 25 rolls were necessary to make the game favorable. It was un grand scandale that mathematics was wrong.
The relevant probabilities can be computed as follows: The probability that no 6 turns up on the first toss is \( 5/6 \) . The probability that no 6 turns up on either of the first two tosses is \( {\left( 5/6\right) }^{2} \) . Reasoning in the same way, the probability that no 6 turns up on any of the first four tosses is \( {\left( 5/6\right) }^{4} \) . Thus, the probability of at least one 6 in the first four tosses is \( 1 - {\left( 5/6\right) }^{4} \approx {.518} \) . Similarly, for the second bet, with 24 rolls, the probability that de Méré wins is \( 1 - {\left( {35}/{36}\right) }^{24} \approx {.491} \), and for 25 rolls it is \( 1 - {\left( {35}/{36}\right) }^{25} \approx {.506} \).
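These three probabilities can be evaluated directly:

```python
# Winning probabilities for de Mere's two bets.
p_four_rolls = 1 - (5 / 6) ** 4     # at least one six in 4 rolls of one die
p_24_rolls = 1 - (35 / 36) ** 24    # at least one double six in 24 rolls
p_25_rolls = 1 - (35 / 36) ** 25    # ... in 25 rolls

print(round(p_four_rolls, 3), round(p_24_rolls, 3), round(p_25_rolls, 3))
# 0.518 0.491 0.506
```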
For our next example, we consider a problem where the exact answer is difficult to obtain but for which simulation easily gives the qualitative results. Peter and Paul play a game called heads or tails. In this game, a fair coin is tossed a sequence of times - we choose 40 . Each time a head comes up Peter wins 1 penny from Paul, and each time a tail comes up Peter loses 1 penny to Paul. For example, if the results of the 40 tosses are\n\n## THTHHHHTTHTHHTTHHTTTTHHHTHHTHHHTHHHTTTHH.\n\nPeter's winnings may be graphed as in Figure 1.1.\n\nPeter has won 6 pennies in this particular game. It is natural to ask for the probability that he will win \( j \) pennies; here \( j \) could be any even number from -40 to 40. It is reasonable to guess that the value of \( j \) with the highest probability is \( j = 0 \), since this occurs when the number of heads equals the number of tails. Similarly, we would guess that the values of \( j \) with the lowest probabilities are \( j = \pm {40} \) .
It is easy to settle this by simulating the game a large number of times and keeping track of the number of times that Peter’s final winnings are \( j \), and the number of times that Peter ends up being in the lead by \( k \) . The proportions over all games then give estimates for the corresponding probabilities. The program HTSimulation carries out this simulation. Note that when there are an even number of tosses in the game, it is possible to be in the lead only an even number of times. We have simulated this game 10,000 times. The results are shown in Figures 1.2 and 1.3. These graphs, which we call spike graphs, were generated using the program Spikegraph. The vertical line, or spike, at position \( x \) on the horizontal axis, has a height equal to the proportion of outcomes which equal \( x \) . Our intuition about Peter's final winnings was quite correct, but our intuition about the number of times Peter was in the lead was completely wrong. The simulation suggests that the least likely number of times in the lead is 20 and the most likely is 0 or 40 . This is indeed correct, and the explanation for it is suggested by playing the game of heads or tails with a large number of tosses and looking at a graph of Peter's winnings. In Figure 1.4 we show the results of a simulation of the game, for 1000 tosses and in Figure 1.5 for 10,000 tosses.
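The program HTSimulation is not listed in the text; a minimal sketch of the final-winnings part of the simulation (names are ours, and the actual program also tracks the number of times in the lead) could read:

```python
import random

def final_winnings(tosses, games, seed=None):
    """Simulate Peter's final winnings in many games of heads or tails."""
    rng = random.Random(seed)
    return [sum(1 if rng.random() < 0.5 else -1 for _ in range(tosses))
            for _ in range(games)]

finals = final_winnings(40, 10000, seed=2)
# Every final amount is an even number between -40 and 40.
print(min(finals), max(finals))
```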
Example 1.5 (Horse Races) Four horses (Acorn, Balky, Chestnut, and Dolby) have raced many times. It is estimated that Acorn wins 30 percent of the time, Balky 40 percent of the time, Chestnut 20 percent of the time, and Dolby 10 percent of the time.
We can have our computer carry out one race as follows: Choose a random number \( x \) . If \( x < {.3} \) then we say that Acorn won. If \( {.3} \leq x < {.7} \) then Balky wins. If \( {.7} \leq x < {.9} \) then Chestnut wins. Finally, if \( {.9} \leq x \) then Dolby wins.\n\nThe program HorseRace uses this method to simulate the outcomes of \( n \) races. Running this program for \( n = {10} \) we found that Acorn won 40 percent of the time, Balky 20 percent of the time, Chestnut 10 percent of the time, and Dolby 30 percent of the time. A larger number of races would be necessary to have better agreement with the past experience. Therefore we ran the program to simulate 1000 races with our four horses. Although very tired after all these races, they performed in a manner quite consistent with our estimates of their abilities. Acorn won 29.8 percent of the time, Balky 39.4 percent, Chestnut 19.5 percent, and Dolby 11.3 percent of the time.
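A sketch of this method, assuming Python's random module (the function name is ours, not the program HorseRace itself):

```python
import random

def horse_race(n, seed=None):
    """Simulate n races with win probabilities .3, .4, .2, and .1."""
    rng = random.Random(seed)
    wins = {"Acorn": 0, "Balky": 0, "Chestnut": 0, "Dolby": 0}
    for _ in range(n):
        x = rng.random()
        if x < 0.3:
            wins["Acorn"] += 1
        elif x < 0.7:
            wins["Balky"] += 1
        elif x < 0.9:
            wins["Chestnut"] += 1
        else:
            wins["Dolby"] += 1
    return wins

print(horse_race(1000, seed=3))
```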
Let \( E = \{ \mathrm{{HH}},\mathrm{{HT}},\mathrm{{TH}}\} \) be the event that at least one head comes up. Then, the probability of \( E \) can be calculated as follows:
\[ P\left( E\right) = m\left( \mathrm{{HH}}\right) + m\left( \mathrm{{HT}}\right) + m\left( \mathrm{{TH}}\right) \] \[ = \frac{1}{4} + \frac{1}{4} + \frac{1}{4} = \frac{3}{4}\text{.} \]
The sample space for the experiment in which the die is rolled is the 6-element set \( \Omega = \{ 1,2,3,4,5,6\} \) . We assumed that the die was fair, and we chose the distribution function defined by\n\n\[ m\left( i\right) = \frac{1}{6},\;\text{ for }i = 1,\ldots ,6. \]\n\nIf \( E \) is the event that the result of the roll is an even number, then \( E = \{ 2,4,6\} \) and\n\n\[ P\left( E\right) = m\left( 2\right) + m\left( 4\right) + m\left( 6\right) \]
\[ = \frac{1}{6} + \frac{1}{6} + \frac{1}{6} = \frac{1}{2}\text{.} \]
Example 1.9 Three people, A, B, and C, are running for the same office, and we assume that one and only one of them wins. The sample space may be taken as the 3-element set \( \Omega = \{ \mathrm{A},\mathrm{B},\mathrm{C}\} \) where each element corresponds to the outcome of that candidate's winning. Suppose that A and B have the same chance of winning, but that \( \mathrm{C} \) has only \( 1/2 \) the chance of \( \mathrm{A} \) or \( \mathrm{B} \) . Then we assign\n\n\[ m\left( \mathrm{\;A}\right) = m\left( \mathrm{\;B}\right) = {2m}\left( \mathrm{C}\right) . \]\n\nSince\n\n\[ m\left( \mathrm{\;A}\right) + m\left( \mathrm{\;B}\right) + m\left( \mathrm{C}\right) = 1, \]
we see that\n\n\[ {2m}\left( \mathrm{C}\right) + {2m}\left( \mathrm{C}\right) + m\left( \mathrm{C}\right) = 1, \]\n\nwhich implies that \( {5m}\left( \mathrm{C}\right) = 1 \) . Hence,\n\n\[ m\left( \mathrm{\;A}\right) = \frac{2}{5},\;m\left( \mathrm{\;B}\right) = \frac{2}{5},\;m\left( \mathrm{C}\right) = \frac{1}{5}. \]\n\nLet \( E \) be the event that either \( \mathrm{A} \) or \( \mathrm{C} \) wins. Then \( E = \{ \mathrm{A},\mathrm{C}\} \), and\n\n\[ P\left( E\right) = m\left( \mathrm{\;A}\right) + m\left( \mathrm{C}\right) = \frac{2}{5} + \frac{1}{5} = \frac{3}{5}. \]
Theorem 1.1 The probabilities assigned to events by a distribution function on a sample space \( \Omega \) satisfy the following properties:\n\n1. \( P\left( E\right) \geq 0 \) for every \( E \subset \Omega \) .\n\n2. \( P\left( \Omega \right) = 1 \) .\n\n3. If \( E \subset F \subset \Omega \), then \( P\left( E\right) \leq P\left( F\right) \) .\n\n4. If \( A \) and \( B \) are disjoint subsets of \( \Omega \), then \( P\left( {A \cup B}\right) = P\left( A\right) + P\left( B\right) \) .\n\n5. \( P\left( \widetilde{A}\right) = 1 - P\left( A\right) \) for every \( A \subset \Omega \) .
Proof. For any event \( E \) the probability \( P\left( E\right) \) is determined from the distribution \( m \) by\n\n\[ P\left( E\right) = \mathop{\sum }\limits_{{\omega \in E}}m\left( \omega \right) \]\n\nfor every \( E \subset \Omega \) . Since the function \( m \) is nonnegative, it follows that \( P\left( E\right) \) is also nonnegative. Thus, Property 1 is true.\n\nProperty 2 is proved by the equations\n\n\[ P\left( \Omega \right) = \mathop{\sum }\limits_{{\omega \in \Omega }}m\left( \omega \right) = 1. \]\n\nSuppose that \( E \subset F \subset \Omega \) . Then every element \( \omega \) that belongs to \( E \) also belongs to \( F \) . Therefore,\n\n\[ \mathop{\sum }\limits_{{\omega \in E}}m\left( \omega \right) \leq \mathop{\sum }\limits_{{\omega \in F}}m\left( \omega \right) \]\n\nsince each term in the left-hand sum is in the right-hand sum, and all the terms in both sums are non-negative. This implies that\n\n\[ P\left( E\right) \leq P\left( F\right) \]\n\nand Property 3 is proved.\n\nSuppose next that \( A \) and \( B \) are disjoint subsets of \( \Omega \) . Then every element \( \omega \) of \( A \cup B \) lies either in \( A \) and not in \( B \) or in \( B \) and not in \( A \) . It follows that\n\n\[ P\left( {A \cup B}\right) = \mathop{\sum }\limits_{{\omega \in A \cup B}}m\left( \omega \right) = \mathop{\sum }\limits_{{\omega \in A}}m\left( \omega \right) + \mathop{\sum }\limits_{{\omega \in B}}m\left( \omega \right) \]\n\n\[ = P\left( A\right) + P\left( B\right) \]\n\nand Property 4 is proved.\n\nFinally, to prove Property 5, consider the disjoint union\n\n\[ \Omega = A \cup \widetilde{A} \]\n\nSince \( P\left( \Omega \right) = 1 \), the property of disjoint additivity (Property 4) implies that\n\n\[ 1 = P\left( A\right) + P\left( \widetilde{A}\right) \]\n\nwhence \( P\left( \widetilde{A}\right) = 1 - P\left( A\right) \) .
Theorem 1.2 If \( {A}_{1},\ldots ,{A}_{n} \) are pairwise disjoint subsets of \( \Omega \) (i.e., no two of the \( {A}_{i} \)’s have an element in common), then\n\n\[ P\left( {{A}_{1} \cup \cdots \cup {A}_{n}}\right) = \mathop{\sum }\limits_{{i = 1}}^{n}P\left( {A}_{i}\right) . \]
Proof. Let \( \omega \) be any element in the union\n\n\[ {A}_{1} \cup \cdots \cup {A}_{n}\text{.} \]\n\nSince the sets \( {A}_{1},\ldots ,{A}_{n} \) are pairwise disjoint, \( \omega \) belongs to exactly one of them. Hence \( m\left( \omega \right) \) occurs exactly once on each side of the equality in the statement of the theorem.
Theorem 1.3 Let \( {A}_{1},\ldots ,{A}_{n} \) be pairwise disjoint events with \( \Omega = {A}_{1} \cup \cdots \cup {A}_{n} \) , and let \( E \) be any event. Then\n\n\[ P\left( E\right) = \mathop{\sum }\limits_{{i = 1}}^{n}P\left( {E \cap {A}_{i}}\right) \]
Proof. The sets \( E \cap {A}_{1},\ldots, E \cap {A}_{n} \) are pairwise disjoint, and their union is the set \( E \) . The result now follows from Theorem 1.2.
Theorem 1.4 If \( A \) and \( B \) are subsets of \( \Omega \), then\n\n\[ P\left( {A \cup B}\right) = P\left( A\right) + P\left( B\right) - P\left( {A \cap B}\right) . \]
Proof. The left side of Equation 1.1 is the sum of \( m\left( \omega \right) \) for \( \omega \) in either \( A \) or \( B \) . We must show that the right side of Equation 1.1 also adds \( m\left( \omega \right) \) for \( \omega \) in \( A \) or \( B \) . If \( \omega \) is in exactly one of the two sets, then it is counted in only one of the three terms on the right side of Equation 1.1. If it is in both \( A \) and \( B \), it is added twice from the calculations of \( P\left( A\right) \) and \( P\left( B\right) \) and subtracted once for \( P\left( {A \cap B}\right) \) . Thus it is counted exactly once by the right side. Of course, if \( A \cap B = \varnothing \), then Equation 1.1 reduces to Property 4. (Equation 1.1 can also be generalized; see Theorem 3.8.) \( ▱ \)
What is the probability of getting a sum of 7 on the roll of two dice - or getting a sum of 11 ?
The first event, denoted by \( E \), is the subset\n\n\[ E = \{ \left( {1,6}\right) ,\left( {6,1}\right) ,\left( {2,5}\right) ,\left( {5,2}\right) ,\left( {3,4}\right) ,\left( {4,3}\right) \} . \]\n\nA sum of 11 is the subset \( F \) given by\n\n\[ F = \{ \left( {5,6}\right) ,\left( {6,5}\right) \} . \]\n\nConsequently,\n\n\[ P\left( E\right) = \mathop{\sum }\limits_{{\omega \in E}}m\left( \omega \right) = 6 \cdot \frac{1}{36} = \frac{1}{6}, \]\n\n\[ P\left( F\right) = \mathop{\sum }\limits_{{\omega \in F}}m\left( \omega \right) = 2 \cdot \frac{1}{36} = \frac{1}{18}. \]
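These counts can be verified by enumerating all 36 equally likely outcomes, keeping the probabilities as exact fractions:

```python
from fractions import Fraction

# Enumerate the 36 equally likely outcomes for two dice.
outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]
p_seven = Fraction(sum(1 for i, j in outcomes if i + j == 7), 36)
p_eleven = Fraction(sum(1 for i, j in outcomes if i + j == 11), 36)

print(p_seven, p_eleven)   # 1/6 1/18
print(p_seven + p_eleven)  # 2/9, since the two events are disjoint
```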
Example 1.13 A coin is tossed until the first time that a head turns up. Let the outcome of the experiment, \( \omega \), be the first time that a head turns up. Then the possible outcomes of our experiment are\n\n\[ \Omega = \{ 1,2,3,\ldots \} .\n\]\n\nNote that even though the coin could come up tails every time, we have not allowed for this possibility. We will explain why in a moment. The probability that heads comes up on the first toss is \( 1/2 \) . The probability that tails comes up on the first toss and heads on the second is \( 1/4 \) . The probability that we have two tails followed by a head is \( 1/8 \), and so forth. This suggests assigning the distribution function \( m\left( n\right) = 1/{2}^{n} \) for \( n = 1,2,3,\ldots \) To see that this is a distribution function we must show that\n\[ \mathop{\sum }\limits_{\omega }m\left( \omega \right) = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1. \]
That this is true follows from the formula for the sum of a geometric series,\n\n\[ 1 + r + {r}^{2} + {r}^{3} + \cdots = \frac{1}{1 - r}, \]\n\nor\n\[ r + {r}^{2} + {r}^{3} + {r}^{4} + \cdots = \frac{r}{1 - r}, \]\n\n(1.2)\n\nfor \( - 1 < r < 1 \) .\n\nPutting \( r = 1/2 \), we see that we have a probability of 1 that the coin eventually turns up heads. The possible outcome of tails every time has to be assigned probability 0 , so we omit it from our sample space of possible outcomes.
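A quick numerical check of the geometric series with \( r = 1/2 \):

```python
# Partial sums of m(n) = 1/2^n approach 1, as the geometric series predicts.
partial_sums = [sum(1 / 2**n for n in range(1, k + 1)) for k in (5, 10, 50)]
print(partial_sums)  # each partial sum is 1 - 1/2^k
```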
Example 2.1 We begin by constructing a spinner, which consists of a circle of unit circumference and a pointer as shown in Figure 2.1. We pick a point on the circle and label it 0 , and then label every other point on the circle with the distance, say \( x \), from 0 to that point, measured counterclockwise. The experiment consists of spinning the pointer and recording the label of the point at the tip of the pointer. We let the random variable \( X \) denote the value of this outcome. The sample space is clearly the interval \( \lbrack 0,1) \) . We would like to construct a probability model in which each outcome is equally likely to occur.
If we proceed as we did in Chapter 1 for experiments with a finite number of possible outcomes, then we must assign the probability 0 to each outcome, since otherwise, the sum of the probabilities, over all of the possible outcomes, would not equal 1. (In fact, summing an uncountable number of real numbers is a tricky business; in particular, in order for such a sum to have any meaning, at most countably many of the summands can be different than 0 .) However, if all of the assigned probabilities are 0 , then the sum is 0 , not 1 , as it should be.\n\nIn the next section, we will show how to construct a probability model in this situation. At present, we will assume that such a model can be constructed. We will also assume that in this model, if \( E \) is an arc of the circle, and \( E \) is of length \( p \), then the model will assign the probability \( p \) to \( E \) . This means that if the pointer is spun, the probability that it ends up pointing to a point in \( E \) equals \( p \), which is certainly a reasonable thing to expect.
In this example we show how simulation can be used to estimate areas of plane figures. Suppose that we program our computer to provide a pair \( \left( {x, y}\right) \) of numbers, each chosen independently at random from the interval \( \left\lbrack {0,1}\right\rbrack \) . Then we can interpret this pair \( \left( {x, y}\right) \) as the coordinates of a point chosen at random from the unit square. Events are subsets of the unit square. Our experience with Example 2.1 suggests that the point is equally likely to fall in subsets of equal area. Since the total area of the square is 1 , the probability of the point falling in a specific subset \( E \) of the unit square should be equal to its area. Thus, we can estimate the area of any subset of the unit square by estimating the probability that a point chosen at random from this square falls in the subset.
We can use this method to estimate the area of the region \( E \) under the curve \( y = {x}^{2} \) in the unit square (see Figure 2.2). We choose a large number of points \( \left( {x, y}\right) \) at random and record what fraction of them fall in the region \( E = \left\{ {\left( {x, y}\right) : y \leq {x}^{2}}\right\} \) .\n\nThe program MonteCarlo will carry out this experiment for us. Running this program for 10,000 experiments gives an estimate of .325 (see Figure 2.3).\n\nFrom these experiments we would estimate the area to be about \( 1/3 \) . Of course,\n\n![201c5617-be5d-49b1-8926-165ae51a0e51_53_0.jpg](images/201c5617-be5d-49b1-8926-165ae51a0e51_53_0.jpg)\n\nFigure 2.2: Area under \( y = {x}^{2} \) .\n\nfor this simple region we can find the exact area by calculus. In fact,\n\n\[ \text{Area of}E = {\int }_{0}^{1}{x}^{2}{dx} = \frac{1}{3}\text{.} \]
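A minimal version of this Monte Carlo experiment (the names are ours, not those of the program listed in the text):

```python
import random

def monte_carlo_area(n, seed=None):
    """Estimate the area under y = x^2 in the unit square."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if y <= x * x:  # the point falls in the region E
            hits += 1
    return hits / n

print(monte_carlo_area(10000, seed=4))  # an estimate near 1/3
```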
Suppose that we take a card table and draw across the top surface a set of parallel lines a unit distance apart. We then drop a common needle of unit length at random on this surface and observe whether or not the needle lies across one of the lines. We can describe the possible outcomes of this experiment by coordinates as follows: Let \( d \) be the distance from the center of the needle to the nearest line. Next, let \( L \) be the line determined by the needle, and define \( \theta \) as the acute angle that the line \( L \) makes with the set of parallel lines. (The reader should certainly be wary of this description of the sample space. We are attempting to coordinatize a set of line segments. To see why one must be careful in the choice of coordinates, see Example 2.6.) Using this description, we have \( 0 \leq d \leq 1/2 \), and \( 0 \leq \theta \leq \pi /2 \) . Moreover, we see that the needle lies across the nearest line if and only if the hypotenuse of the triangle (see Figure 2.4) is less than half the length of the needle, that is,\n\n\[ \frac{d}{\sin \theta } < \frac{1}{2}. \]\n\nNow we assume that when the needle drops, the pair \( \left( {\theta, d}\right) \) is chosen at random from the rectangle \( 0 \leq \theta \leq \pi /2,0 \leq d \leq 1/2 \) . We observe whether the needle lies across the nearest line (i.e., whether \( d \leq \left( {1/2}\right) \sin \theta \) ). The probability of this event \( E \) is the fraction of the area of the rectangle which lies inside \( E \) (see Figure 2.5).
Now the area of the rectangle is \( \pi /4 \), while the area of \( E \) is\n\n\[ \text{Area} = {\int }_{0}^{\pi /2}\frac{1}{2}\sin \theta \,d\theta = \frac{1}{2}. \]\n\nHence, we get\n\n\[ P\left( E\right) = \frac{1/2}{\pi /4} = \frac{2}{\pi }. \]
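The simulation suggested by this analysis can be sketched as follows (names ours); the empirical crossing frequency should approach \( 2/\pi \approx {.6366} \):

```python
import math
import random

def buffon(n, seed=None):
    """Drop n unit needles; return the fraction that cross a line."""
    rng = random.Random(seed)
    crossings = 0
    for _ in range(n):
        theta = rng.uniform(0.0, math.pi / 2)  # acute angle with the lines
        d = rng.uniform(0.0, 0.5)              # distance to the nearest line
        if d <= 0.5 * math.sin(theta):
            crossings += 1
    return crossings / n

print(buffon(100000, seed=5))  # an estimate near 2/pi
```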
Example 2.4 Suppose that we choose two random real numbers in \( \left\lbrack {0,1}\right\rbrack \) and add them together. Let \( X \) be the sum. How is \( X \) distributed?
To help understand the answer to this question, we can use the program Areabargraph. This program produces a bar graph with the property that on each interval, the area, rather than the height, of the bar is equal to the fraction of outcomes that fell in the corresponding interval. We have carried out this experiment 1000 times; the data is shown in Figure 2.7. It appears that the function defined by\n\n\[ f\left( x\right) = \left\{ \begin{array}{ll} x, & \text{ if }0 \leq x \leq 1 \\ 2 - x, & \text{ if }1 < x \leq 2 \end{array}\right. \]\n\nfits the data very well. (It is shown in the figure.) In the next section, we will see that this function is the "density" of \( X \) .
Example 2.5 Suppose that we choose 100 random numbers in \( \left\lbrack {0,1}\right\rbrack \), and let \( X \) represent their sum. How is \( X \) distributed?
It turns out that the type of function which does the job is called a normal density function. This type of function is sometimes referred to as a "bell-shaped" curve, since its graph has the shape of a bell.
Example 2.7 The spinner experiment described in Example 2.1 has the interval \( \lbrack 0,1) \) as the set of possible outcomes. We would like to construct a probability model in which each outcome is equally likely to occur. We saw that in such a model, it is necessary to assign the probability 0 to each outcome. This does not at all mean that the probability of every event must be zero. On the contrary, if we let the random variable \( X \) denote the outcome, then the probability\n\n\[ P\left( {0 \leq X \leq 1}\right) \]\n\nthat the head of the spinner comes to rest somewhere in the circle, should be equal to 1. Also, the probability that it comes to rest in the upper half of the circle should be the same as for the lower half, so that\n\n\[ P\left( {0 \leq X < \frac{1}{2}}\right) = P\left( {\frac{1}{2} \leq X < 1}\right) = \frac{1}{2}. \]\n\nMore generally, in our model, we would like the equation\n\n\[ P\left( {c \leq X < d}\right) = d - c \]\n\nto be true for every choice of \( c \) and \( d \) .
If we let \( E = \left\lbrack {c, d}\right\rbrack \), then we can write the above formula in the form\n\n\[ P\left( E\right) = {\int }_{E}f\left( x\right) {dx} \]\n\nwhere \( f\left( x\right) \) is the constant function with value 1 . This should remind the reader of the corresponding formula in the discrete case for the probability of an event:\n\n\[ P\left( E\right) = \mathop{\sum }\limits_{{\omega \in E}}m\left( \omega \right) \]
Example 2.8 A game of darts involves throwing a dart at a circular target of unit radius. Suppose we throw a dart once so that it hits the target, and we observe where it lands.
To describe the possible outcomes of this experiment, it is natural to take as our sample space the set \( \Omega \) of all the points in the target. It is convenient to describe these points by their rectangular coordinates, relative to a coordinate system with origin at the center of the target, so that each pair \( \left( {x, y}\right) \) of coordinates with \( {x}^{2} + {y}^{2} \leq \) 1 describes a possible outcome of the experiment. Then \( \Omega = \left\{ {\left( {x, y}\right) : {x}^{2} + {y}^{2} \leq 1}\right\} \) is a subset of the Euclidean plane, and the event \( E = \{ \left( {x, y}\right) : y > 0\} \), for example, corresponds to the statement that the dart lands in the upper half of the target, and so forth. Unless there is reason to believe otherwise (and with experts at the game there may well be!), it is natural to assume that the coordinates are chosen at random. (When doing this with a computer, each coordinate is chosen uniformly from the interval \( \left\lbrack {-1,1}\right\rbrack \) . If the resulting point does not lie inside the unit circle, the point is not counted.) Then the arguments used in the preceding example show that the probability of any elementary event, consisting of a single outcome, must be zero, and suggest that the probability of the event that the dart lands in any subset \( E \) of the target should be determined by what fraction of the target area lies in \( E \) . Thus,\n\n\[ P\left( E\right) = \frac{\text{ area of }E}{\text{ area of target }} = \frac{\text{ area of }E}{\pi }.\]\n\nThis can be written in the form\n\n\[ P\left( E\right) = {\int }_{E}f\left( x\right) {dx} \]\n\nwhere \( f\left( x\right) \) is the constant function with value \( 1/\pi \) . In particular, if \( E = \{ \left( {x, y}\right) \) : \( \left. {{x}^{2} + {y}^{2} \leq {a}^{2}}\right\} \) is the event that the dart lands within distance \( a < 1 \) of the center of the target, then\n\n\[ P\left( E\right) = \frac{\pi {a}^{2}}{\pi } = {a}^{2}. 
\]\n\nFor example, the probability that the dart lies within a distance \( 1/2 \) of the center is \( 1/4 \) .
What probabilities should we assign to the events \( E \) of \( \Omega \) ? If\n\n\[ E = \{ r : 0 \leq r \leq a\} \]\n\nthen \( E \) occurs if the dart lands within a distance \( a \) of the center, that is, within the circle of radius \( a \), and we saw in the previous example that under our assumptions the probability of this event is given by\n\n\[ P\left( \left\lbrack {0, a}\right\rbrack \right) = {a}^{2}. \]
More generally, if\n\n\[ E = \{ r : a \leq r \leq b\} ,\]\n\nthen by our basic assumptions,\n\n\[ P\left( E\right) = P\left( \left\lbrack {a, b}\right\rbrack \right) = P\left( \left\lbrack {0, b}\right\rbrack \right) - P\left( \left\lbrack {0, a}\right\rbrack \right)\]\n\n\[ = {b}^{2} - {a}^{2}\]\n\n\[ = \left( {b - a}\right) \left( {b + a}\right)\]\n\n\[ = 2\left( {b - a}\right) \frac{\left( b + a\right) }{2}. \]
In the spinner experiment, we choose for our set of outcomes the interval \( 0 \leq x < 1 \), and for our density function\n\n\[ f\left( x\right) = \left\{ \begin{array}{ll} 1, & \text{ if }0 \leq x < 1 \\ 0, & \text{ otherwise. } \end{array}\right. \]\n\nIf \( E \) is the event that the head of the spinner falls in the upper half of the circle, then \( E = \{ x : 0 \leq x \leq 1/2\} \), and so
\[ P\left( E\right) = {\int }_{0}^{1/2}{1dx} = \frac{1}{2}. \]
In the first dart game experiment, we choose for our sample space a disc of unit radius in the plane and for our density function the function\n\n\[ f\left( {x, y}\right) = \left\{ \begin{array}{ll} 1/\pi , & \text{ if }{x}^{2} + {y}^{2} \leq 1 \\ 0, & \text{ otherwise. } \end{array}\right. \]\n\nThe probability that the dart lands inside the subset \( E \) is then given by
\[ P\left( E\right) = {\iint }_{E}\frac{1}{\pi }{dxdy} \]\n\n\[ = \frac{1}{\pi } \cdot \left( {\text{ area of }E}\right) . \]
In the second dart game experiment, we choose for our sample space the unit interval on the real line and for our density the function \[ f\left( r\right) = \left\{ \begin{array}{ll} {2r}, & \text{ if }0 < r < 1 \\ 0, & \text{ otherwise. } \end{array}\right. \] Then the probability that the dart lands at distance \( r, a \leq r \leq b \), from the center of the target is given by \[ P\left( \left\lbrack {a, b}\right\rbrack \right) = {\int }_{a}^{b}{2rdr} \]
\[ = {b}^{2} - {a}^{2}\text{.} \] Here again, since the density is small when \( r \) is near 0 and large when \( r \) is near 1, we see that in this experiment the dart is more likely to land near the rim of the target than near the center. In terms of the bar graph of Example 2.9, the heights of the bars approximate the density function, while the areas of the bars approximate the probabilities of the subintervals (see Figure 2.12).
Theorem 2.1 Let \( X \) be a continuous real-valued random variable with density function \( f\left( x\right) \) . Then the function defined by\n\n\[ F\left( x\right) = {\int }_{-\infty }^{x}f\left( t\right) {dt} \]\n\nis the cumulative distribution function of \( X \) . Furthermore, we have\n\n\[ \frac{d}{dx}F\left( x\right) = f\left( x\right) \]
Proof. By definition,\n\n\[ F\left( x\right) = P\left( {X \leq x}\right) . \]\n\nLet \( E = ( - \infty, x\rbrack \) . Then\n\n\[ P\left( {X \leq x}\right) = P\left( {X \in E}\right) ,\]\n\nwhich equals\n\n\[ {\int }_{-\infty }^{x}f\left( t\right) {dt} \]\n\nApplying the Fundamental Theorem of Calculus to the first equation in the statement of the theorem yields the second statement.
A real number is chosen at random from \( \left\lbrack {0,1}\right\rbrack \) with uniform probability, and then this number is squared. Let \( X \) represent the result. What is the cumulative distribution function of \( X \) ? What is the density of \( X \) ?
We begin by letting \( U \) represent the chosen real number. Then \( X = {U}^{2} \) . If \( 0 \leq x \leq 1 \), then we have\n\n\[ \n{F}_{X}\left( x\right) = P\left( {X \leq x}\right) \]\n\n\[ \n= P\left( {{U}^{2} \leq x}\right) \]\n\n\[ \n= P\left( {U \leq \sqrt{x}}\right) \]\n\n\[ \n= \sqrt{x}\text{.} \]\n\nIt is clear that \( X \) always takes on a value between 0 and 1, so the cumulative distribution function of \( X \) is given by\n\n\[ \n{F}_{X}\left( x\right) = \left\{ \begin{array}{ll} 0, & \text{ if }x \leq 0 \\ \sqrt{x}, & \text{ if }0 \leq x \leq 1 \\ 1, & \text{ if }x \geq 1 \end{array}\right. \]\n\nFrom this we easily calculate that the density function of \( X \) is\n\n\[ \n{f}_{X}\left( x\right) = \left\{ \begin{array}{ll} 0, & \text{ if }x \leq 0 \\ 1/\left( {2\sqrt{x}}\right) , & \text{ if }0 \leq x \leq 1 \\ 0, & \text{ if }x > 1 \end{array}\right. \]\n\nNote that \( {F}_{X}\left( x\right) \) is continuous, but \( {f}_{X}\left( x\right) \) is not. (See Figure 2.13.)
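An empirical check of the formula \( {F}_{X}\left( x\right) = \sqrt{x} \) at a single point (seed and sample size are our own choices):

```python
import random

# Check P(U^2 <= x) = sqrt(x) empirically at x = 0.25.
rng = random.Random(6)
samples = [rng.random() ** 2 for _ in range(100000)]
empirical = sum(1 for s in samples if s <= 0.25) / len(samples)
print(empirical)  # close to sqrt(0.25) = 0.5
```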
In Example 2.4, we considered a random variable, defined to be the sum of two random real numbers chosen uniformly from \( \left\lbrack {0,1}\right\rbrack \) . Let the random variables \( X \) and \( Y \) denote the two chosen real numbers. Define \( Z = X + Y \) . We will now derive expressions for the cumulative distribution function and the density function of \( Z \) .
Here we take for our sample space \( \Omega \) the unit square in \( {\mathbf{R}}^{2} \) with uniform density. A point \( \omega \in \Omega \) then consists of a pair \( \left( {x, y}\right) \) of numbers chosen at random. Then \( 0 \leq Z \leq 2 \) . Let \( {E}_{z} \) denote the event that \( Z \leq z \) . In Figure 2.14, we show the set \( {E}_{.8} \) . The event \( {E}_{z} \), for any \( z \) between 0 and 1, looks very similar to the shaded set in the figure. For \( 1 < z \leq 2 \), the set \( {E}_{z} \) looks like the unit square with a triangle removed from the upper right-hand corner. We can now calculate the probability distribution \( {F}_{Z} \) of \( Z \) ; it is given by\n\n\[ \n{F}_{Z}\left( z\right) = P\left( {Z \leq z}\right) \n\]\n\n\[ \n= \text{Area of}{E}_{z} \n\]\n\n\[ \n= \left\{ \begin{array}{ll} 0, & \text{ if }z < 0, \\ \left( {1/2}\right) {z}^{2}, & \text{ if }0 \leq z \leq 1, \\ 1 - \left( {1/2}\right) {\left( 2 - z\right) }^{2}, & \text{ if }1 \leq z \leq 2, \\ 1, & \text{ if }2 < z. \end{array}\right. \n\]\n\nThe density function is obtained by differentiating this function:\n\n\[ \n{f}_{Z}\left( z\right) = \left\{ \begin{array}{ll} 0, & \text{ if }z < 0 \\ z, & \text{ if }0 \leq z \leq 1 \\ 2 - z, & \text{ if }1 \leq z \leq 2 \\ 0, & \text{ if }2 < z \end{array}\right. \n\]
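An empirical check of the formula \( {F}_{Z}\left( z\right) = \left( {1/2}\right) {z}^{2} \) for \( 0 \leq z \leq 1 \), at the value \( z = {.8} \) used in the figure:

```python
import random

# Check P(X + Y <= z) = z^2 / 2 empirically at z = 0.8.
rng = random.Random(8)
n = 100000
count = sum(1 for _ in range(n) if rng.random() + rng.random() <= 0.8)
print(count / n)  # close to 0.8^2 / 2 = 0.32
```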
In the dart game described in Example 2.8, what is the distribution of the distance of the dart from the center of the target? What is its density?
Here, as before, our sample space \( \Omega \) is the unit disk in \( {\mathbf{R}}^{2} \), with coordinates \( \left( {X, Y}\right) \) . Let \( Z = \sqrt{{X}^{2} + {Y}^{2}} \) represent the distance from the center of the target. Let\n\n![201c5617-be5d-49b1-8926-165ae51a0e51_75_0.jpg](images/201c5617-be5d-49b1-8926-165ae51a0e51_75_0.jpg)\n\nFigure 2.17: Distribution and density for \( Z = \sqrt{{X}^{2} + {Y}^{2}} \) .\n\n\( E \) be the event \( \{ Z \leq z\} \) . Then the distribution function \( {F}_{Z} \) of \( Z \) (see Figure 2.16) is given by\n\n\[ \n{F}_{Z}\left( z\right) = P\left( {Z \leq z}\right) \n\]\n\n\[ \n= \frac{\text{ Area of }E}{\text{ Area of target }}\text{. } \n\]\n\nThus, we easily compute that\n\n\[ \n{F}_{Z}\left( z\right) = \left\{ \begin{array}{ll} 0, & \text{ if }z \leq 0 \\ {z}^{2}, & \text{ if }0 \leq z \leq 1 \\ 1, & \text{ if }z > 1 \end{array}\right. \n\]\n\nThe density \( {f}_{Z}\left( z\right) \) is given again by the derivative of \( {F}_{Z}\left( z\right) \) :\n\n\[ \n{f}_{Z}\left( z\right) = \left\{ \begin{array}{ll} 0, & \text{ if }z \leq 0 \\ {2z}, & \text{ if }0 \leq z \leq 1 \\ 0, & \text{ if }z > 1 \end{array}\right. \n\]\n\nThe reader is referred to Figure 2.17 for the graphs of these functions.
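A quick way to check \( F_Z(z) = z^2 \) empirically (an added sketch, not part of the text) is to generate uniform points in the disk by rejection sampling from the enclosing square:

```python
import random

random.seed(1)

def random_point_in_disk():
    # Rejection sampling: draw from the square [-1, 1]^2 until the
    # point lands inside the unit disk.
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return x, y

N = 100_000
dists = [(x * x + y * y) ** 0.5
         for x, y in (random_point_in_disk() for _ in range(N))]
frac_within_half = sum(d <= 0.5 for d in dists) / N           # near 0.25
frac_within_median = sum(d <= 0.5 ** 0.5 for d in dists) / N  # near 0.5
```

About a quarter of the darts should land within distance \( 1/2 \), and half within distance \( \sqrt{1/2} \), as \( F_Z(z) = z^2 \) predicts.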
Suppose Mr. and Mrs. Lockhorn agree to meet at the Hanover Inn between 5:00 and 6:00 P.M. on Tuesday. Suppose each arrives at a time between 5:00 and 6:00 chosen at random with uniform probability. What is the distribution function for the length of time that the first to arrive has to wait for the other? What is the density function?
Here again we can take the unit square to represent the sample space, and \( \left( {X, Y}\right) \) as the arrival times (after 5:00 P.M.) for the Lockhorns. Let \( Z = \left| {X - Y}\right| \) . Then we have \( {F}_{X}\left( x\right) = x \) and \( {F}_{Y}\left( y\right) = y \) . Moreover (see Figure 2.19),\n\n\[ \n{F}_{Z}\left( z\right) = P\left( {Z \leq z}\right) \]\n\n\[ \n= P\left( {\left| {X - Y}\right| \leq z}\right) \]\n\n\[ \n= \text{Area of}E\text{.} \]\n\nThus, we have\n\n\[ \n{F}_{Z}\left( z\right) = \left\{ \begin{array}{ll} 0, & \text{ if }z \leq 0 \\ 1 - {\left( 1 - z\right) }^{2}, & \text{ if }0 \leq z \leq 1 \\ 1, & \text{ if }z > 1 \end{array}\right. \]\n\nThe density \( {f}_{Z}\left( z\right) \) is again obtained by differentiation:\n\n\[ \n{f}_{Z}\left( z\right) = \left\{ \begin{array}{ll} 0, & \text{ if }z \leq 0 \\ 2\left( {1 - z}\right) , & \text{ if }0 \leq z \leq 1 \\ 0, & \text{ if }z > 1 \end{array}\right. \]
Consider an experiment in which a fair coin is tossed repeatedly, without stopping. We have seen in Example 1.6 that, for a coin tossed \( n \) times, the natural sample space is a binary tree with \( n \) stages. On this evidence we expect that for a coin tossed repeatedly, the natural sample space is a binary tree with an infinite number of stages, as indicated in Figure 2.22.
It is surprising to learn that, although the \( n \) -stage tree is obviously a finite sample space, the unlimited tree can be described as a continuous sample space. To see how this comes about, let us agree that a typical outcome of the unlimited coin tossing experiment can be described by a sequence of the form \( \omega = \{ \mathrm{H}\mathrm{H}\mathrm{T}\mathrm{H}\mathrm{T}\mathrm{T}\mathrm{H}\ldots \} \) . If we write 1 for \( \mathrm{H} \) and 0 for \( \mathrm{T} \), then \( \omega = \{ 1,1,0,0,1\ldots \} \) . In this way, each outcome is described by a sequence of 0 's and 1 's.\n\nNow suppose we think of this sequence of 0 's and 1's as the binary expansion of some real number \( x = {.1101001}\cdots \) lying between 0 and 1 . (A binary expansion is like a decimal expansion but based on 2 instead of 10.) Then each outcome is described by a value of \( x \), and in this way \( x \) becomes a coordinate for the sample space, taking on all real values between 0 and 1 . (We note that it is possible for two different sequences to correspond to the same real number; for example, the sequences \( \{ \mathrm{{THHHHH}}\ldots \} \) and \( \{ \mathrm{{HTTTTT}}\ldots \} \) both correspond to the real number \( 1/2 \) . We will not concern ourselves with this apparent problem here.)\n\nWhat probabilities should be assigned to the events of this sample space? Consider, for example, the event \( E \) consisting of all outcomes for which the first toss comes up heads and the second tails. Every such outcome has the form \( {.10} * * * * \cdots \) , where \( * \) can be either 0 or 1 . Now if \( x \) is our real-valued coordinate, then the value of \( x \) for every such outcome must lie between \( 1/2 = {.10000}\cdots \) and \( 3/4 = {.11000}\cdots \) , and moreover, every value of \( x \) between \( 1/2 \) and \( 3/4 \) has a binary expansion of the\n\nform \( {.10} * * * * \cdots \) . 
This means that \( \omega \in E \) if and only if \( 1/2 \leq x < 3/4 \), and in this way we see that we can describe \( E \) by the interval \( \lbrack 1/2,3/4) \) . More generally, every event consisting of outcomes for which the results of the first \( n \) tosses are prescribed is described by a binary interval of the form \( \left\lbrack {k/{2}^{n},\left( {k + 1}\right) /{2}^{n}}\right) \) .\n\nWe have already seen in Section 1.2 that in the experiment involving \( n \) tosses, the probability of any one outcome must be exactly \( 1/{2}^{n} \) . It follows that in the unlimited toss experiment, the probability of any event consisting of outcomes for which the results of the first \( n \) tosses are prescribed must also be \( 1/{2}^{n} \) . But \( 1/{2}^{n} \) is exactly the length of the interval of \( x \) -values describing \( E \) ! Thus we see that, just as with the spinner experiment, the probability of an event \( E \) is determined by what fraction of the unit interval lies in \( E \) .
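The correspondence between toss prefixes and binary intervals can be made concrete in a few lines (an illustrative addition):

```python
def prefix_interval(prefix):
    """Return the interval [k/2^n, (k+1)/2^n) of x-values whose binary
    expansion begins with the given H/T prefix (H = 1, T = 0)."""
    n = len(prefix)
    k = int(prefix.replace("H", "1").replace("T", "0"), 2)
    return k / 2 ** n, (k + 1) / 2 ** n
```

Here `prefix_interval("HT")` gives \( \lbrack 1/2, 3/4) \), and the interval's length \( 1/2^n \) is exactly the probability of the corresponding event.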
How many possible choices do you have for your complete meal?
We illustrate the possible meals by a tree diagram shown in Figure 3.1. Your menu is decided in three stages, and at each stage the number of possible choices does not depend on what was chosen in the previous stages: two choices at the first stage, three at the second, and two at the third. From the tree diagram we see that the total number of choices is the product of the number of choices at each stage. In this example we have \( 2 \cdot 3 \cdot 2 = {12} \) possible menus.
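The multiplication idea is easy to reproduce with itertools (the menu items below are invented for the sketch; only the counts 2, 3, and 2 matter):

```python
from itertools import product

# Hypothetical menu: 2 appetizers, 3 entrees, 2 desserts.
appetizers = ["soup", "salad"]
entrees = ["beef", "chicken", "fish"]
desserts = ["cake", "fruit"]

# Every complete meal is one path through the tree diagram.
menus = list(product(appetizers, entrees, desserts))
```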
Example 3.2 We can show that there are at least two people in Columbus, Ohio, who have the same three initials. Assuming that each person has three initials, there are 26 possibilities for a person's first initial, 26 for the second, and 26 for the third. Therefore, there are \( {26}^{3} = {17},{576} \) possible sets of initials.
This number is smaller than the number of people living in Columbus, Ohio; hence, there must be at least two people with the same three initials.
How many people do we need to have in a room to make it a favorable bet (probability of success greater than \( 1/2 \) ) that two people in the room will have the same birthday?
Since there are 365 possible birthdays, it is tempting to guess that we would need about \( 1/2 \) this number, or 183. You would surely win this bet. In fact, the number required for a favorable bet is only 23 . To show this, we find the probability \( {p}_{r} \) that, in a room with \( r \) people, there is no duplication of birthdays; we will have a favorable bet if this probability is less than one half.

Number of people    Probability that all birthdays are different
20                  .5885616
21                  .5563117
22                  .5243047
23                  .4927028
24                  .4616557
25                  .4313003

Table 3.1: Birthday problem.

Assume that there are 365 possible birthdays for each person (we ignore leap years). Order the people from 1 to \( r \) . For a sample point \( \omega \), we choose a possible sequence of length \( r \) of birthdays each chosen as one of the 365 possible dates. There are 365 possibilities for the first element of the sequence, and for each of these choices there are 365 for the second, and so forth, making \( {365}^{r} \) possible sequences of birthdays. We must find the number of these sequences that have no duplication of birthdays. For such a sequence, we can choose any of the 365 days for the first element, then any of the remaining 364 for the second, 363 for the third, and so forth, until we make \( r \) choices. For the \( r \) th choice, there will be \( {365} - r + 1 \) possibilities. Hence, the total number of sequences with no duplications is

\[ 
{365} \cdot {364} \cdot {363} \cdot \ldots \cdot \left( {{365} - r + 1}\right) .
\]

Thus, assuming that each sequence is equally likely,

\[ 
{p}_{r} = \frac{{365} \cdot {364} \cdot \ldots \cdot \left( {{365} - r + 1}\right) }{{365}^{r}}.
\]

We denote the product

\[ 
\left( n\right) \left( {n - 1}\right) \cdots \left( {n - r + 1}\right)
\]

by \( {\left( n\right) }_{r} \) (read "\( n \) down \( r \)"), so that \( {p}_{r} = {\left( {365}\right) }_{r}/{365}^{r} \) .
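The values in Table 3.1 are easy to reproduce (an added check): compute \( {p}_{r} \) directly and find the smallest room size that makes the bet favorable.

```python
def p_all_different(r):
    # Probability that r people all have different birthdays: (365)_r / 365^r.
    p = 1.0
    for i in range(r):
        p *= (365 - i) / 365
    return p

# Smallest number of people for which the bet is favorable.
r = 1
while p_all_different(r) >= 0.5:
    r += 1
```

The loop stops at \( r = 23 \), with \( {p}_{23} \approx {.4927} \), in agreement with the table.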
Theorem 3.3 (Stirling's Formula) The sequence \( n! \) is asymptotically equal to

\[ {n}^{n}{e}^{-n}\sqrt{2\pi n}. \]
The proof of Stirling's formula may be found in most analysis texts. Let us verify this approximation by using the computer. The program StirlingApproximations prints \( n! \), the Stirling approximation, and, finally, the ratio of these two numbers. Sample output of this program is shown in Table 3.4. Note that, while the ratio of the numbers is getting closer to 1 , the difference between the exact value and the approximation is increasing, and indeed, this difference will tend to infinity as \( n \) tends to infinity, even though the ratio tends to 1 . (This was also true in our Example 3.4 where \( n + \sqrt{n} \sim n \), but the difference is \( \sqrt{n} \).)
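A few lines of Python reproduce the behavior just described (an added sketch of what the program's output shows): the ratio tends to 1 while the absolute difference grows.

```python
import math

def stirling(n):
    # Stirling's approximation to n!.
    return n ** n * math.exp(-n) * math.sqrt(2 * math.pi * n)

ratios = {n: stirling(n) / math.factorial(n) for n in (1, 2, 5, 10)}
diffs = {n: math.factorial(n) - stirling(n) for n in (1, 2, 5, 10)}
```

The approximation always comes in slightly under \( n! \); the ratio climbs toward 1 while the difference keeps growing.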
Example 3.5 Let \( U = \{ a, b, c\} \) . The subsets of \( U \) are\n\n\[ \phi ,\{ a\} ,\{ b\} ,\{ c\} ,\{ a, b\} ,\{ a, c\} ,\{ b, c\} ,\{ a, b, c\} . \]
In the above example, there is one subset with no elements, three subsets with exactly 1 element, three subsets with exactly 2 elements, and one subset with exactly 3 elements. Thus, \( \left( \begin{array}{l} 3 \\ 0 \end{array}\right) = 1,\left( \begin{array}{l} 3 \\ 1 \end{array}\right) = 3,\left( \begin{array}{l} 3 \\ 2 \end{array}\right) = 3 \), and \( \left( \begin{array}{l} 3 \\ 3 \end{array}\right) = 1 \) . Note that there are \( {2}^{3} = 8 \) subsets in all. (We have already seen that a set with \( n \) elements has \( {2}^{n} \) subsets; see Exercise 3.1.8.) It follows that\n\n\[ \left( \begin{array}{l} 3 \\ 0 \end{array}\right) + \left( \begin{array}{l} 3 \\ 1 \end{array}\right) + \left( \begin{array}{l} 3 \\ 2 \end{array}\right) + \left( \begin{array}{l} 3 \\ 3 \end{array}\right) = {2}^{3} = 8 \]
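Enumerating the subsets confirms these counts (illustrative code, not from the text):

```python
from itertools import combinations

U = ["a", "b", "c"]
# Number of subsets of each size j = 0, 1, 2, 3.
counts = [len(list(combinations(U, j))) for j in range(len(U) + 1)]
```

The counts come out 1, 3, 3, 1 and sum to \( 2^3 = 8 \).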
Theorem 3.4 For integers \( n \) and \( j \), with \( 0 < j < n \), the binomial coefficients satisfy:\n\n\[ \left( \begin{array}{l} n \\ j \end{array}\right) = \left( \begin{matrix} n - 1 \\ j \end{matrix}\right) + \left( \begin{matrix} n - 1 \\ j - 1 \end{matrix}\right) \]
Proof. We wish to choose a subset of \( j \) elements. Choose an element \( u \) of \( U \) . Assume first that we do not want \( u \) in the subset. Then we must choose the \( j \) elements from a set of \( n - 1 \) elements; this can be done in \( \left( \begin{matrix} n - 1 \\ j \end{matrix}\right) \) ways. On the other hand, assume that we do want \( u \) in the subset. Then we must choose the other \( j - 1 \) elements from the remaining \( n - 1 \) elements of \( U \) ; this can be done in \( \left( \begin{matrix} n - 1 \\ j - 1 \end{matrix}\right) \) ways. Since \( u \) is either in our subset or not, the number of ways that we can choose a subset of \( j \) elements is the sum of the number of subsets of \( j \) elements which have \( u \) as a member and the number which do not - this is what Equation 3.1 states. \( ▱ \)
Theorem 3.5 The binomial coefficients are given by the formula\n\n\[ \left( \begin{array}{l} n \\ j \end{array}\right) = \frac{{\left( n\right) }_{j}}{j!}. \]
Proof. Each subset of size \( j \) of a set of size \( n \) can be ordered in \( j \) ! ways. Each of these orderings is a \( j \) -permutation of the set of size \( n \) . The number of \( j \) -permutations is \( {\left( n\right) }_{j} \), so the number of subsets of size \( j \) is\n\n\[ \frac{{\left( n\right) }_{j}}{j!}. \]\n\nThis completes the proof.
How many hands have four of a kind?
There are 13 ways that we can specify the value for the four cards. For each of these, there are 48 possibilities for the fifth card. Thus, the number of four-of-a-kind hands is \( {13} \cdot {48} = {624} \) . Since the total number of possible hands is \( \left( \begin{matrix} {52} \\ 5 \end{matrix}\right) = {2598960} \), the probability of a hand with four of a kind is \( {624}/{2598960} = {.00024} \) .
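The arithmetic, as an added check:

```python
from math import comb

# 13 choices of rank for the four matching cards, then any of the
# remaining 48 cards as the fifth card.
four_of_a_kind_hands = 13 * 48
total_hands = comb(52, 5)
prob = four_of_a_kind_hands / total_hands
```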
Theorem 3.6 Given \( n \) Bernoulli trials with probability \( p \) of success on each experiment, the probability of exactly \( j \) successes is\n\n\[ b\left( {n, p, j}\right) = \left( \begin{array}{l} n \\ j \end{array}\right) {p}^{j}{q}^{n - j} \]\n\nwhere \( q = 1 - p \) .
Proof. We construct a tree measure as described above. We want to find the sum of the probabilities for all paths which have exactly \( j \) successes and \( n - j \) failures. Each such path is assigned a probability \( {p}^{j}{q}^{n - j} \) . How many such paths are there? To specify a path, we have to pick, from the \( n \) possible trials, a subset of \( j \) to be successes, with the remaining \( n - j \) outcomes being failures. We can do this in \( \left( \begin{array}{l} n \\ j \end{array}\right) \) ways. Thus the sum of the probabilities is\n\n\[ b\left( {n, p, j}\right) = \left( \begin{array}{l} n \\ j \end{array}\right) {p}^{j}{q}^{n - j}. \]
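Theorem 3.6 is one line of code (an added helper, not part of the text):

```python
from math import comb

def b(n, p, j):
    # Probability of exactly j successes in n Bernoulli trials
    # with success probability p (Theorem 3.6).
    return comb(n, j) * p ** j * (1 - p) ** (n - j)
```

For example, `b(6, 0.5, 3)` gives \( {.3125} \), and for any \( n \) and \( p \) the values \( b(n, p, 0), \ldots, b(n, p, n) \) sum to 1.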
A fair coin is tossed six times. What is the probability that exactly three heads turn up?
\[ b\left( {6,{.5},3}\right) = \left( \begin{array}{l} 6 \\ 3 \end{array}\right) {\left( \frac{1}{2}\right) }^{3}{\left( \frac{1}{2}\right) }^{3} = {20} \cdot \frac{1}{64} = {.3125}. \]
A die is rolled four times. What is the probability that we obtain exactly one 6 ?
We treat this as Bernoulli trials with success \( = \) "rolling a 6" and failure \( = \) "rolling a number other than 6." Then \( p = 1/6, q = 5/6 \), and \( n = 4 \) . Thus, the required probability is

\[ b\left( {4,\frac{1}{6},1}\right) = \left( \begin{array}{l} 4 \\ 1 \end{array}\right) {\left( \frac{1}{6}\right) }^{1}{\left( \frac{5}{6}\right) }^{3} = \frac{500}{1296} \approx {.386}. \]
Example 3.10 A Galton board is a board in which a large number of BB-shots are dropped from a chute at the top of the board and deflected off a number of pins on their way down to the bottom of the board. The final position of each slot is the result of a number of random deflections either to the left or the right. We have written a program GaltonBoard to simulate this experiment.
Note that if we write 0 every time the shot is deflected to the left, and 1 every time it is deflected to the right, then the path of the shot can be described by a sequence of 0 ’s and 1’s of length \( n \), just as for the \( n \) -fold coin toss. The distribution shown in Figure 3.6 is an example of an empirical distribution, in the sense that it comes about by means of a sequence of experiments. As expected, this empirical distribution resembles the corresponding binomial distribution with parameters \( n = {20} \) and \( p = 1/2 \) .
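A minimal simulation in the spirit of GaltonBoard (a sketch, not the book's program):

```python
import random

random.seed(2)

def galton_slot(n=20):
    # Each of n pins deflects the shot right (1) or left (0) with equal
    # probability; the final slot is the number of rightward deflections.
    return sum(random.randint(0, 1) for _ in range(n))

N = 50_000
slots = [galton_slot() for _ in range(N)]
mean_slot = sum(slots) / N
frac_middle = sum(s == 10 for s in slots) / N
```

The empirical distribution should resemble the binomial distribution with parameters \( n = 20 \) and \( p = 1/2 \) : a mean near 10 , and roughly \( \binom{20}{10}/2^{20} \approx {.176} \) of the shots in the middle slot.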
Theorem 3.7 (Binomial Theorem) The quantity \( {\left( a + b\right) }^{n} \) can be expressed in the form\n\n\[ \n{\left( a + b\right) }^{n} = \mathop{\sum }\limits_{{j = 0}}^{n}\left( \begin{array}{l} n \\ j \end{array}\right) {a}^{j}{b}^{n - j}.\n\]
Proof. To see that this expansion is correct, write\n\n\[ \n{\left( a + b\right) }^{n} = \left( {a + b}\right) \left( {a + b}\right) \cdots \left( {a + b}\right) .\n\]\n\nWhen we multiply this out we will have a sum of terms each of which results from a choice of an \( a \) or \( b \) for each of \( n \) factors. When we choose \( j \) a’s and \( \left( {n - j}\right) b \) ’s, we obtain a term of the form \( {a}^{j}{b}^{n - j} \) . To determine such a term, we have to specify \( j \) of the \( n \) terms in the product from which we choose the \( a \) . This can be done in \( \left( \begin{array}{l} n \\ j \end{array}\right) \) ways. Thus, collecting these terms in the sum contributes a term \( \left( \begin{array}{l} n \\ j \end{array}\right) {a}^{j}{b}^{n - j} \) . \( ▱ \)
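A numerical check of the theorem (added); note that taking \( a = -1, b = 1 \) also verifies the alternating-sum identity used in Corollary 3.1:

```python
from math import comb

def binomial_expansion(a, b, n):
    # Right-hand side of the Binomial Theorem.
    return sum(comb(n, j) * a ** j * b ** (n - j) for j in range(n + 1))
```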
Corollary 3.1 The sum of the elements in the \( n \) th row of Pascal’s triangle is \( {2}^{n} \) . If the elements in the \( n \) th row of Pascal’s triangle are added with alternating signs, the sum is 0 .
Proof. The first statement in the corollary follows from the fact that\n\n\[ \n{2}^{n} = {\left( 1 + 1\right) }^{n} = \left( \begin{array}{l} n \\ 0 \end{array}\right) + \left( \begin{array}{l} n \\ 1 \end{array}\right) + \left( \begin{array}{l} n \\ 2 \end{array}\right) + \cdots + \left( \begin{array}{l} n \\ n \end{array}\right) ,\n\]\n\nand the second from the fact that\n\n\[ \n0 = {\left( 1 - 1\right) }^{n} = \left( \begin{array}{l} n \\ 0 \end{array}\right) - \left( \begin{array}{l} n \\ 1 \end{array}\right) + \left( \begin{array}{l} n \\ 2 \end{array}\right) - \cdots + {\left( -1\right) }^{n}\left( \begin{array}{l} n \\ n \end{array}\right) .\n\]
Theorem 3.8 Let \( P \) be a probability distribution on a sample space \( \Omega \), and let \( \left\{ {{A}_{1},{A}_{2},\ldots ,{A}_{n}}\right\} \) be a finite set of events. Then

\[ P\left( {{A}_{1} \cup {A}_{2} \cup \cdots \cup {A}_{n}}\right) = \mathop{\sum }\limits_{{i = 1}}^{n}P\left( {A}_{i}\right) - \mathop{\sum }\limits_{{1 \leq i < j \leq n}}P\left( {{A}_{i} \cap {A}_{j}}\right) \]

\[ + \mathop{\sum }\limits_{{1 \leq i < j < k \leq n}}P\left( {{A}_{i} \cap {A}_{j} \cap {A}_{k}}\right) - \cdots . \]

(3.3)

That is, to find the probability that at least one of \( n \) events \( {A}_{i} \) occurs, first add the probability of each event, then subtract the probabilities of all possible two-way intersections, add the probability of all three-way intersections, and so forth.
Proof. If the outcome \( \omega \) occurs in at least one of the events \( {A}_{i} \), its probability is added exactly once by the left side of Equation 3.3. We must show that it is added exactly once by the right side of Equation 3.3. Assume that \( \omega \) is in exactly \( k \) of the sets. Then its probability is added \( k \) times in the first term, subtracted \( \left( \begin{array}{l} k \\ 2 \end{array}\right) \) times in the second, added \( \left( \begin{array}{l} k \\ 3 \end{array}\right) \) times in the third term, and so forth. Thus, the total number of times that it is added is

\[ \left( \begin{array}{l} k \\ 1 \end{array}\right) - \left( \begin{array}{l} k \\ 2 \end{array}\right) + \left( \begin{array}{l} k \\ 3 \end{array}\right) - \cdots {\left( -1\right) }^{k - 1}\left( \begin{array}{l} k \\ k \end{array}\right) . \]

But

\[ 0 = {\left( 1 - 1\right) }^{k} = \mathop{\sum }\limits_{{j = 0}}^{k}\left( \begin{array}{l} k \\ j \end{array}\right) {\left( -1\right) }^{j} = \left( \begin{array}{l} k \\ 0 \end{array}\right) - \mathop{\sum }\limits_{{j = 1}}^{k}\left( \begin{array}{l} k \\ j \end{array}\right) {\left( -1\right) }^{j - 1}. \]

Hence,

\[ 1 = \left( \begin{array}{l} k \\ 0 \end{array}\right) = \mathop{\sum }\limits_{{j = 1}}^{k}\left( \begin{array}{l} k \\ j \end{array}\right) {\left( -1\right) }^{j - 1}. \]

If the outcome \( \omega \) is not in any of the events \( {A}_{i} \), then it is not counted on either side of the equation.
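Inclusion-exclusion is easy to verify by brute force on a small sample space (an added check; the three events below are chosen arbitrarily):

```python
from functools import reduce
from itertools import combinations

def union_prob(events, m):
    # Right-hand side of Equation 3.3: alternating sum over all
    # k-way intersections of the events.
    total = 0.0
    for k in range(1, len(events) + 1):
        sign = (-1) ** (k - 1)
        for group in combinations(events, k):
            inter = reduce(lambda s, t: s & t, group)
            total += sign * sum(m[w] for w in inter)
    return total

# Uniform distribution on a die, with three arbitrary events.
m = {w: 1 / 6 for w in range(1, 7)}
A, B, C = {1, 2, 3}, {2, 3, 4}, {3, 6}
lhs = sum(m[w] for w in A | B | C)   # direct probability of the union
rhs = union_prob([A, B, C], m)
```

Both sides come out \( 5/6 \) here.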
Example 3.12 We return to the hat check problem discussed in Section 3.1, that is, the problem of finding the probability that a random permutation contains at least one fixed point. Recall that a permutation is a one-to-one map of a set \( A = \left\{ {{a}_{1},{a}_{2},\ldots ,{a}_{n}}\right\} \) onto itself. Let \( {A}_{i} \) be the event that the \( i \) th element \( {a}_{i} \) remains fixed under this map. If we require that \( {a}_{i} \) is fixed, then the map of the remaining \( n - 1 \) elements provides an arbitrary permutation of \( \left( {n - 1}\right) \) objects. Since there are \( \left( {n - 1}\right) \) ! such permutations, \( P\left( {A}_{i}\right) = \left( {n - 1}\right) !/n! = 1/n \) . Since there are \( n \) choices for \( {a}_{i} \), the first term of Equation 3.3 is 1 . In the same way, to have a particular pair \( \left( {{a}_{i},{a}_{j}}\right) \) fixed, we can choose any permutation of the remaining \( n - 2 \) elements; there are \( \left( {n - 2}\right) \) ! such choices and thus
\[ P\left( {{A}_{i} \cap {A}_{j}}\right) = \frac{\left( {n - 2}\right) !}{n!} = \frac{1}{n\left( {n - 1}\right) }. \] The number of terms of this form in the right side of Equation 3.3 is \[ \left( \begin{array}{l} n \\ 2 \end{array}\right) = \frac{n\left( {n - 1}\right) }{2!}. \] Hence, the second term of Equation 3.3 is \[ - \frac{n\left( {n - 1}\right) }{2!} \cdot \frac{1}{n\left( {n - 1}\right) } = - \frac{1}{2!}. \] Similarly, for any specific three events \( {A}_{i},{A}_{j},{A}_{k} \) , \[ P\left( {{A}_{i} \cap {A}_{j} \cap {A}_{k}}\right) = \frac{\left( {n - 3}\right) !}{n!} = \frac{1}{n\left( {n - 1}\right) \left( {n - 2}\right) }, \] and the number of such terms is \[ \left( \begin{array}{l} n \\ 3 \end{array}\right) = \frac{n\left( {n - 1}\right) \left( {n - 2}\right) }{3!}, \] making the third term of Equation 3.3 equal to \( 1/3 \) !. Continuing in this way, we obtain \[ P\left( \text{ at least one fixed point }\right) = 1 - \frac{1}{2!} + \frac{1}{3!} - \cdots {\left( -1\right) }^{n - 1}\frac{1}{n!} \] and \[ P\left( \text{ no fixed point }\right) = \frac{1}{2!} - \frac{1}{3!} + \cdots {\left( -1\right) }^{n}\frac{1}{n!}. \]
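For small \( n \) the alternating series can be checked against a brute-force count of permutations with no fixed point (added code, not part of the text):

```python
from itertools import permutations
from math import factorial

def p_no_fixed_exact(n):
    # Fraction of the n! permutations that leave no element fixed.
    count = sum(all(p[i] != i for i in range(n))
                for p in permutations(range(n)))
    return count / factorial(n)

def p_no_fixed_series(n):
    # The alternating series derived above.
    return sum((-1) ** j / factorial(j) for j in range(2, n + 1))
```

The two computations agree for every \( n \) small enough to enumerate.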
In the quantum mechanical model of the helium atom, various parameters can be used to classify the energy states of the atom. In the triplet spin state \( \left( {S = 1}\right) \) with orbital angular momentum \( 1\left( {L = 1}\right) \), there are three possibilities, \( 0,1 \), or 2, for the total angular momentum \( \left( J\right) \) . We would like to assign probabilities to the three possibilities for \( J \) .
These assignments originate empirically: the frequencies of the three values of \( J \) were observed in experiments, and further parameters were then developed in the theory so that these frequencies could be predicted.
Theorem 3.9 Let \( a \) and \( b \) be two positive integers. Let \( {S}_{a, b} \) be the set of all ordered pairs in which the first entry is an \( a \) -shuffle and the second entry is a \( b \) -shuffle. Let \( {S}_{ab} \) be the set of all \( {ab} \) -shuffles. Then there is a 1-1 correspondence between \( {S}_{a, b} \) and \( {S}_{ab} \) with the following property. Suppose that \( \left( {{T}_{1},{T}_{2}}\right) \) corresponds to \( {T}_{3} \) . If \( {T}_{1} \) is applied to the identity ordering, and \( {T}_{2} \) is applied to the resulting ordering, then the final ordering is the same as the ordering that is obtained by applying \( {T}_{3} \) to the identity ordering.
Proof. The easiest way to describe the required correspondence is through the idea of an unshuffle. An \( a \) -unshuffle begins with a deck of \( n \) cards. One by one, cards are taken from the top of the deck and placed, with equal probability, on the bottom of any one of \( a \) stacks, where the stacks are labelled from 0 to \( a - 1 \) . After all of the cards have been distributed, we combine the stacks to form one stack by placing stack \( i \) on top of stack \( i + 1 \), for \( 0 \leq i \leq a - 1 \) . It is easy to see that if one starts with a deck, there is exactly one way to cut the deck to obtain the \( a \) stacks generated by the \( a \) -unshuffle, and with these \( a \) stacks, there is exactly one way to interleave them to obtain the deck in the order that it was in before the unshuffle was performed. Thus, this \( a \) -unshuffle corresponds to a unique \( a \) -shuffle, and this \( a \) -shuffle is the inverse of the original \( a \) -unshuffle.\n\nIf we apply an \( {ab} \) -unshuffle \( {U}_{3} \) to a deck, we obtain a set of \( {ab} \) stacks, which are then combined, in order, to form one stack. We label these stacks with ordered pairs of integers, where the first coordinate is between 0 and \( a - 1 \), and the second coordinate is between 0 and \( b - 1 \) . Then we label each card with the label of its stack. The number of possible labels is \( {ab} \), as required. Using this labelling, we can describe how to find a \( b \) -unshuffle and an \( a \) -unshuffle, such that if these two unshuffles are applied in this order to the deck, we obtain the same set of \( {ab} \) stacks as were obtained by the \( {ab} \) -unshuffle.\n\nTo obtain the \( b \) -unshuffle \( {U}_{2} \), we sort the deck into \( b \) stacks, with the \( i \) th stack containing all of the cards with second coordinate \( i \), for \( 0 \leq i \leq b - 1 \) . Then these stacks are combined to form one stack. 
The \( a \) -unshuffle \( {U}_{1} \) proceeds in the same manner, except that the first coordinates of the labels are used. The resulting \( a \) stacks are then combined to form one stack.\n\nThe above description shows that the cards ending up on top are all those labelled \( \left( {0,0}\right) \) . These are followed by those labelled \( \left( {0,1}\right) ,\left( {0,2}\right) ,\ldots ,(0, b - \) \( 1),\left( {1,0}\right) ,\left( {1,1}\right) ,\ldots ,\left( {a - 1, b - 1}\right) \) . Furthermore, the relative order of any pair of cards with the same labels is never altered. But this is exactly the same as an \( {ab} \) -unshuffle, if, at the beginning of such an unshuffle, we label each of the cards with one of the labels \( \left( {0,0}\right) ,\left( {0,1}\right) ,\ldots ,\left( {0, b - 1}\right) ,\left( {1,0}\right) ,\left( {1,1}\right) ,\ldots ,\left( {a - 1, b - 1}\right) \) . This completes the proof.
Theorem 3.10 If \( D \) is any ordering that is the result of applying an \( a \) -shuffle and then a \( b \) -shuffle to the identity ordering, then the probability assigned to \( D \) by this pair of operations is the same as the probability assigned to \( D \) by the process of applying an \( {ab} \) -shuffle to the identity ordering.
Proof. Call the sample space of \( a \) -shuffles \( {S}_{a} \) . If we label the stacks by the integers from 0 to \( a - 1 \), then each cut-interleaving pair, i.e., shuffle, corresponds to exactly one \( n \) -digit base \( a \) integer, where the \( i \) th digit in the integer is the stack of which the \( i \) th card is a member. Thus, the number of cut-interleaving pairs is equal to the number of \( n \) -digit base \( a \) integers, which is \( {a}^{n} \) . Of course, not all of these pairs lead to different orderings. The number of pairs leading to a given ordering will be discussed later. For our purposes it is enough to point out that it is the cut-interleaving pairs that determine the probability assignment.

The previous theorem shows that there is a 1-1 correspondence between \( {S}_{a, b} \) and \( {S}_{ab} \) . Furthermore, corresponding elements give the same ordering when applied to the identity ordering. Given any ordering \( D \), let \( {m}_{1} \) be the number of elements of \( {S}_{a, b} \) which, when applied to the identity ordering, result in \( D \) . Let \( {m}_{2} \) be the number of elements of \( {S}_{ab} \) which, when applied to the identity ordering, result in \( D \) . The previous theorem implies that \( {m}_{1} = {m}_{2} \) . Thus, both sets assign the probability

\[ \frac{{m}_{1}}{{\left( ab\right) }^{n}} \]

to \( D \) . This completes the proof.
Theorem 3.11 If an ordering of length \( n \) has \( r \) rising sequences, then the number of cut-interleaving pairs under an \( a \) -shuffle of the identity ordering which lead to the ordering is 

\[ \left( \begin{matrix} n + a - r \\ n \end{matrix}\right) . \]
Proof. To see why this is true, we need to count the number of ways in which the cut in an \( a \) -shuffle can be performed which will lead to a given ordering with \( r \) rising sequences. We can disregard the interleavings, since once a cut has been made, at most one interleaving will lead to a given ordering. Since the given ordering has \( r \) rising sequences, \( r - 1 \) of the division points in the cut are determined. The remaining \( a - 1 - \left( {r - 1}\right) = a - r \) division points can be placed anywhere. The number of places to put these remaining division points is \( n + 1 \) (which is the number of spaces between the consecutive pairs of cards, including the positions at the beginning and the end of the deck). These places are chosen with repetition allowed, so the number of ways to make these choices is \n\n\[ \left( \begin{matrix} n + a - r \\ a - r \end{matrix}\right) = \left( \begin{matrix} n + a - r \\ n \end{matrix}\right) . \] \n\nIn particular, this means that if \( D \) is an ordering that is the result of applying an \( a \) -shuffle to the identity ordering, and if \( D \) has \( r \) rising sequences, then the probability assigned to \( D \) by this process is \n\n\[ \frac{\left( \begin{matrix} n + a - r \\ n \end{matrix}\right) }{{a}^{n}}. \] \n\nThis completes the proof.
Theorem 3.12 Let \( a \) and \( n \) be positive integers. Then

\[ 
{a}^{n} = \mathop{\sum }\limits_{{r = 1}}^{a}\left( \begin{matrix} n + a - r \\ n \end{matrix}\right) A\left( {n, r}\right) .
\]

(3.5)

Thus,

\[ 
A\left( {n, a}\right) = {a}^{n} - \mathop{\sum }\limits_{{r = 1}}^{{a - 1}}\left( \begin{matrix} n + a - r \\ n \end{matrix}\right) A\left( {n, r}\right) .
\]

In addition, \( A\left( {n,1}\right) = 1 \) .
Proof. The second equation can be used to calculate the values of the Eulerian numbers, and follows immediately from Equation 3.5. The last equation is a consequence of the fact that the only ordering of \( \{ 1,2,\ldots, n\} \) with one rising sequence is the identity ordering. Thus, it remains to prove Equation 3.5. We will count the set of \( a \) -shuffles of a deck with \( n \) cards in two ways. First, we know that there are \( {a}^{n} \) such shuffles (this was noted in the proof of Theorem 3.10). But there are \( A\left( {n, r}\right) \) orderings of \( \{ 1,2,\ldots, n\} \) with \( r \) rising sequences, and Theorem 3.11 states that for each such ordering, there are exactly

\[ 
\left( \begin{matrix} n + a - r \\ n \end{matrix}\right)
\]

cut-interleaving pairs that lead to the ordering. Therefore, the right-hand side of Equation 3.5 counts the set of \( a \) -shuffles of an \( n \) -card deck. This completes the proof.
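The recursion makes the Eulerian numbers easy to tabulate, and Equation 3.5 can then be checked directly (added code, not part of the text):

```python
from math import comb

def eulerian_numbers(n, a_max):
    # A[r] = number of orderings of {1, ..., n} with r rising sequences,
    # computed from the recursion in Theorem 3.12.
    A = {1: 1}  # only the identity ordering has one rising sequence
    for a in range(2, a_max + 1):
        A[a] = a ** n - sum(comb(n + a - r, n) * A[r] for r in range(1, a))
    return A

A = eulerian_numbers(4, 4)
```

For \( n = 4 \) this gives \( A(4, r) = 1, 11, 11, 1 \), which indeed sum to \( 4! = 24 \), the total number of orderings of four cards.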
An experiment consists of rolling a die once. Let \( X \) be the outcome. Let \( F \) be the event \( \{ X = 6\} \), and let \( E \) be the event \( \{ X > 4\} \) . We assign the distribution function \( m\left( \omega \right) = 1/6 \) for \( \omega = 1,2,\ldots ,6 \) . Thus, \( P\left( F\right) = 1/6 \) .
Now suppose that the die is rolled and we are told that the event \( E \) has occurred. This leaves only two possible outcomes: 5 and 6 . In the absence of any information, we would still regard these outcomes to be equally likely, so the probability of \( F \) becomes \( 1/2 \), making \( P\left( {F \mid E}\right) = 1/2 \) .
Example 4.2 In the Life Table (see Appendix C), one finds that in a population of 100,000 females, \( {89.835}\% \) can expect to live to age 60, while \( {57.062}\% \) can expect to live to age 80 . Given that a woman is 60 , what is the probability that she lives to age 80 ?
This is an example of a conditional probability. In this case, the original sample space can be thought of as a set of 100,000 females. The events \( E \) and \( F \) are the subsets of the sample space consisting of all women who live at least 60 years, and at least 80 years, respectively. We consider \( E \) to be the new sample space, and note that \( F \) is a subset of \( E \) . Thus, the size of \( E \) is \( {89},{835} \), and the size of \( F \) is 57,062 . So, the probability in question equals \( {57},{062}/{89},{835} = {.6352} \) . Thus, a woman who is 60 has a \( {63.52}\% \) chance of living to age 80 .
Example 4.4 (Example 4.1 continued) Let us return to the example of rolling a die. Recall that \( F \) is the event \( X = 6 \), and \( E \) is the event \( X > 4 \) . Note that \( E \cap F \) is the event \( F \) . So, the above formula gives
\[ P\left( {F \mid E}\right) = \frac{P\left( {F \cap E}\right) }{P\left( E\right) } = \frac{1/6}{1/3} = \frac{1}{2}, \]

in agreement with the calculations performed earlier.
Suppose we wish to calculate \( P\left( {I \mid B}\right) \) .
Using the formula, we obtain\n\n\[ P\left( {I \mid B}\right) = \frac{P\left( {I \cap B}\right) }{P\left( B\right) } \]\n\n\[ = \frac{P\left( {I \cap B}\right) }{P\left( {B \cap I}\right) + P\left( {B \cap {II}}\right) } \]\n\n\[ = \frac{1/5}{1/5 + 1/4} = \frac{4}{9}\text{.} \]
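Using only the joint probabilities quoted above, the arithmetic can be checked with exact fractions:

```python
from fractions import Fraction

# Joint probabilities as quoted in the text.
P_I_and_B = Fraction(1, 5)
P_II_and_B = Fraction(1, 4)

P_B = P_I_and_B + P_II_and_B       # P(B) = P(B ∩ I) + P(B ∩ II)
P_I_given_B = P_I_and_B / P_B

print(P_I_given_B)  # 4/9
```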
Theorem 4.1 Two events \( E \) and \( F \) are independent if and only if\n\n\[ P\left( {E \cap F}\right) = P\left( E\right) P\left( F\right) . \]
Proof. If either event has probability 0 , then the two events are independent and the above equation is true, so the theorem is true in this case. Thus, we may assume that both events have positive probability in what follows. Assume that \( E \) and \( F \) are independent. Then \( P\left( {E \mid F}\right) = P\left( E\right) \), and so\n\n\[ P\left( {E \cap F}\right) = P\left( {E \mid F}\right) P\left( F\right) \]\n\n\[ = P\left( E\right) P\left( F\right) \text{.} \]\n\nAssume next that \( P\left( {E \cap F}\right) = P\left( E\right) P\left( F\right) \) . Then\n\n\[ P\left( {E \mid F}\right) = \frac{P\left( {E \cap F}\right) }{P\left( F\right) } = P\left( E\right) . \]\n\nAlso,\n\n\[ P\left( {F \mid E}\right) = \frac{P\left( {F \cap E}\right) }{P\left( E\right) } = P\left( F\right) . \]\n\nTherefore, \( E \) and \( F \) are independent.
Example 4.7 Suppose that we have a coin which comes up heads with probability \( p \), and tails with probability \( q \) . Now suppose that this coin is tossed twice. Using a frequency interpretation of probability, it is reasonable to assign to the outcome \( \left( {H, H}\right) \) the probability \( {p}^{2} \), to the outcome \( \left( {H, T}\right) \) the probability \( {pq} \), and so on. Let \( E \) be the event that heads turns up on the first toss and \( F \) the event that tails turns up on the second toss. We will now check that with the above probability assignments, these two events are independent, as expected.
We have \( P\left( E\right) = \) \( {p}^{2} + {pq} = p, P\left( F\right) = {pq} + {q}^{2} = q \) . Finally \( P\left( {E \cap F}\right) = {pq} \), so \( P\left( {E \cap F}\right) = \) \( P\left( E\right) P\left( F\right) \) .
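The same check can be run symbolically for any head probability; the sketch below enumerates the four outcomes for one arbitrary choice of \( p \) (the value 3/10 is our assumption, purely for the demonstration).

```python
from fractions import Fraction
from itertools import product

p = Fraction(3, 10)   # an arbitrary head probability, just for the check
q = 1 - p

def prob(seq):
    # P(sequence) is the product of the per-toss probabilities.
    return (p if seq[0] == 'H' else q) * (p if seq[1] == 'H' else q)

outcomes = list(product('HT', repeat=2))
P_E = sum(prob(s) for s in outcomes if s[0] == 'H')    # heads on first toss
P_F = sum(prob(s) for s in outcomes if s[1] == 'T')    # tails on second toss
P_EF = sum(prob(s) for s in outcomes if s[0] == 'H' and s[1] == 'T')

print(P_E == p, P_F == q, P_EF == P_E * P_F)  # True True True
```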
In the coin-tossing example above, let \( {X}_{i} \) denote the outcome of the \( i \) th toss. Then the joint random variable \( \bar{X} = \) \( \left( {{X}_{1},{X}_{2},{X}_{3}}\right) \) has eight possible outcomes. Suppose that we now define \( {Y}_{i} \), for \( i = 1,2,3 \), as the number of heads which occur in the first \( i \) tosses. Then \( {Y}_{i} \) has \( \{ 0,1,\ldots, i\} \) as possible outcomes, so at first glance, the set of possible outcomes of the joint random variable \( \bar{Y} = \left( {{Y}_{1},{Y}_{2},{Y}_{3}}\right) \) should be the set \[ \left\{ {\left( {{a}_{1},{a}_{2},{a}_{3}}\right) : 0 \leq {a}_{1} \leq 1,0 \leq {a}_{2} \leq 2,0 \leq {a}_{3} \leq 3}\right\} . \] However, the outcome \( \left( {1,0,1}\right) \) cannot occur, since we must have \( {a}_{1} \leq {a}_{2} \leq {a}_{3} \) . The solution to this problem is to define the probability of the outcome \( \left( {1,0,1}\right) \) to be 0 . In addition, we must have \( {a}_{i + 1} - {a}_{i} \leq 1 \) for \( i = 1,2 \) .
We now illustrate the assignment of probabilities to the various outcomes for the joint random variables \( \bar{X} \) and \( \bar{Y} \) . In the first case, each of the eight outcomes should be assigned the probability \( 1/8 \), since we are assuming that we have a fair coin. In the second case, since \( {Y}_{i} \) has \( i + 1 \) possible outcomes, the set of possible outcomes has size 24. Only eight of these 24 outcomes can actually occur, namely the ones satisfying \( {a}_{1} \leq {a}_{2} \leq {a}_{3} \) . Each of these outcomes corresponds to exactly one of the outcomes of the random variable \( \bar{X} \), so it is natural to assign probability \( 1/8 \) to each of these. We assign probability 0 to the other 16 outcomes. In each case, the probability function is called a joint distribution function.
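The assignment can be reproduced by enumerating the eight equally likely toss sequences and tallying the induced values of \( \bar{Y} \); a small sketch (variable names are ours):

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# Tally the values of (Y1, Y2, Y3), where Yi = heads among the first i tosses.
joint = Counter()
for tosses in product('HT', repeat=3):
    y = tuple(tosses[:i].count('H') for i in (1, 2, 3))
    joint[y] += Fraction(1, 8)

print(len(joint))        # 8 outcomes receive positive probability
print(joint[(1, 1, 2)])  # 1/8
print(joint[(1, 0, 1)])  # 0: an impossible outcome
```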
Example 4.12 (Example 4.10 continued) We now consider the assignment of probabilities in the above example. In the case of the random variable \( \bar{X} \), the probability of any outcome \( \left( {{a}_{1},{a}_{2},{a}_{3}}\right) \) is just the product of the probabilities \( P\left( {{X}_{i} = {a}_{i}}\right) \), for \( i = 1,2,3 \) . However, in the case of \( \bar{Y} \), the probability assigned to the outcome \( \left( {1,1,0}\right) \) is not the product of the probabilities \( P\left( {{Y}_{1} = 1}\right), P\left( {{Y}_{2} = 1}\right) \), and \( P\left( {{Y}_{3} = 0}\right) \) . The difference between these two situations is that the value of \( {X}_{i} \) does not affect the value of \( {X}_{j} \), if \( i \neq j \), while the values of \( {Y}_{i} \) and \( {Y}_{j} \) affect one another. For example, if \( {Y}_{1} = 1 \), then \( {Y}_{2} \) cannot equal 0 . This prompts the next definition.
Definition 4.4 The random variables \( {X}_{1},{X}_{2},\ldots ,{X}_{n} \) are mutually independent if \[ P\left( {{X}_{1} = {r}_{1},{X}_{2} = {r}_{2},\ldots ,{X}_{n} = {r}_{n}}\right) \] \[ = P\left( {{X}_{1} = {r}_{1}}\right) P\left( {{X}_{2} = {r}_{2}}\right) \cdots P\left( {{X}_{n} = {r}_{n}}\right) \] for any choice of \( {r}_{1},{r}_{2},\ldots ,{r}_{n} \) . Thus, if \( {X}_{1},{X}_{2},\ldots ,{X}_{n} \) are mutually independent, then the joint distribution function of the random variable \[ \bar{X} = \left( {{X}_{1},{X}_{2},\ldots ,{X}_{n}}\right) \] is just the product of the individual distribution functions. When two random variables are mutually independent, we shall say more briefly that they are independent.
In a group of 60 people, the numbers who do or do not smoke and do or do not have cancer are reported as shown in Table 4.1. Let \( \Omega \) be the sample space consisting of these 60 people. A person is chosen at random from the group. Let \( C\left( \omega \right) = 1 \) if this person has cancer and 0 if not, and \( S\left( \omega \right) = 1 \) if this person smokes and 0 if not. Then the joint distribution of \( \{ C, S\} \) is given in Table 4.2. For example \( P\left( {C = 0, S = 0}\right) = {40}/{60}, P\left( {C = 0, S = 1}\right) = {10}/{60} \), and so forth. The distributions of the individual random variables are called marginal distributions. The marginal distributions of \( C \) and \( S \) are:\n\n\[ \n{p}_{C} = \left( \begin{matrix} 0 & 1 \\ {50}/{60} & {10}/{60} \end{matrix}\right) \n\]\n\n\[ \n{p}_{S} = \left( \begin{matrix} 0 & 1 \\ {47}/{60} & {13}/{60} \end{matrix}\right) .\n\]\n\nThe random variables \( S \) and \( C \) are not independent, since\n\n\[ \nP\left( {C = 1, S = 1}\right) = \frac{3}{60} = {.05}, \n\]\n\n\[ \nP\left( {C = 1}\right) P\left( {S = 1}\right) = \frac{10}{60} \cdot \frac{13}{60} = {.036}. \n\]
Note that we would also see this from the fact that\n\n\[ \nP\left( {C = 1 \mid S = 1}\right) = \frac{3}{13} = {.23}, \n\]\n\n\[ \nP\left( {C = 1}\right) = \frac{1}{6} = {.167}. \n\]
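The dependence can be confirmed directly from the table counts. The count for the cell \( C = 1, S = 0 \) is not stated explicitly above; in the sketch below it is inferred from the quoted marginals (an assumption on our part).

```python
from fractions import Fraction

# Cell counts from Table 4.1; the (C=1, S=0) count of 7 is inferred from
# the marginal distributions quoted in the text.
counts = {(0, 0): 40, (0, 1): 10, (1, 0): 7, (1, 1): 3}
n = sum(counts.values())                     # 60 people

def P(pred):
    return Fraction(sum(v for k, v in counts.items() if pred(k)), n)

P_C1 = P(lambda k: k[0] == 1)                # 10/60
P_S1 = P(lambda k: k[1] == 1)                # 13/60
P_C1_S1 = P(lambda k: k == (1, 1))           # 3/60

print(P_C1_S1, P_C1 * P_S1)  # 1/20 13/360 — not equal, so C and S are dependent
```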
Example 4.16 A doctor is trying to decide if a patient has one of three diseases \( {d}_{1},{d}_{2} \), or \( {d}_{3} \) . Two tests are to be carried out, each of which results in a positive (+) or a negative (-) outcome. There are four possible test patterns \( + + , + - \) , -+, and --. National records have indicated that, for 10,000 people having one of these three diseases, the distribution of diseases and test results are as in Table 4.3.
From this data, we can estimate the prior probabilities for each of the diseases and, given a particular disease, the probability of a particular test outcome. For example, the prior probability of disease \( {d}_{1} \) may be estimated to be \( {3215}/{10,000} = {.3215} \). The probability of the test result \( + - \), given disease \( {d}_{1} \), may be estimated to be \( {301}/{3215} = {.094} \). We can now use Bayes' formula to compute various posterior probabilities. The computer program Bayes computes these posterior probabilities. The results for this example are shown in Table 4.4. We note from the outcomes that, when the test result is \( + + \), the disease \( {d}_{1} \) has a significantly higher probability than the other two. When the outcome is \( + - \), this is true for disease \( {d}_{3} \). When the outcome is \( - + \), this is true for disease \( {d}_{2} \). Note that these statements might have been guessed by looking at the data. If the outcome is \( - - \), the most probable cause is \( {d}_{3} \), but the probability that a patient has \( {d}_{2} \) is only slightly smaller. If one looks at the data in this case, one can see that it might be hard to guess which of the two diseases \( {d}_{2} \) and \( {d}_{3} \) is more likely.
A doctor gives a patient a test for a particular cancer. Before the results of the test, the only evidence the doctor has to go on is that 1 woman in 1000 has this cancer. Experience has shown that, in 99 percent of the cases in which cancer is present, the test is positive; and in 95 percent of the cases in which it is not present, it is negative. If the test turns out to be positive, what probability should the doctor assign to the event that cancer is present? An alternative form of this question is to ask for the relative frequencies of false positives and cancers.
We are given that \( \operatorname{prior}\left( \text{cancer}\right) = {.001} \) and \( \operatorname{prior}\left( \text{not cancer}\right) = {.999} \) . We know also that \( P\left( {+ \mid \text{cancer}}\right) = {.99}, P\left( {- \mid \text{cancer}}\right) = {.01}, P\left( {+ \mid \text{not cancer}}\right) = {.05} \) , and \( P\left( {- \mid \text{not cancer}}\right) = {.95} \) . Using this data gives the result shown in Figure 4.5.\n\nWe see now that the probability of cancer given a positive test has only increased from .001 to .019 . While this is nearly a twenty-fold increase, the probability that the patient has the cancer is still small. Stated in another way, among the positive results, 98.1 percent are false positives, and 1.9 percent are cancers. When a group of second-year medical students was asked this question, over half of the students incorrectly guessed the probability to be greater than .5 .
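A quick numerical check of the posterior, using only the rates stated above:

```python
# Posterior probability of cancer given a positive test, via Bayes' formula.
prior_cancer, prior_not = 0.001, 0.999
p_pos_given_cancer = 0.99     # test positive when cancer is present
p_pos_given_not = 0.05        # false-positive rate

p_pos = prior_cancer * p_pos_given_cancer + prior_not * p_pos_given_not
posterior = prior_cancer * p_pos_given_cancer / p_pos

print(round(posterior, 3))       # 0.019
print(round(1 - posterior, 3))   # 0.981 of positives are false positives
```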
In the spinner experiment (cf. Example 2.1), suppose we know that the spinner has stopped with head in the upper half of the circle, \( 0 \leq x \leq 1/2 \) . What is the probability that \( 1/6 \leq x \leq 1/3 \) ?
Here \( E = \left\lbrack {0,1/2}\right\rbrack, F = \left\lbrack {1/6,1/3}\right\rbrack \), and \( F \cap E = F \) . Hence\n\n\[ P\left( {F \mid E}\right) = \frac{P\left( {F \cap E}\right) }{P\left( E\right) } \]\n\n\[ = \frac{1/6}{1/2} \]\n\n\[ = \frac{1}{3} \]\n\nwhich is reasonable, since \( F \) is \( 1/3 \) the size of \( E \) . The conditional density function here is given by\n\n\[ f\left( {x \mid E}\right) = \left\{ \begin{array}{ll} 2, & \text{ if }0 \leq x < 1/2 \\ 0, & \text{ if }1/2 \leq x < 1 \end{array}\right. \]\n\nThus the conditional density function is nonzero only on \( \left\lbrack {0,1/2}\right\rbrack \), and is uniform there.
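The same answer falls out of the interval lengths alone; a one-line check with exact fractions:

```python
from fractions import Fraction

# E = [0, 1/2], F = [1/6, 1/3]; the spinner position is uniform on [0, 1).
P_E = Fraction(1, 2)
P_F_and_E = Fraction(1, 3) - Fraction(1, 6)   # length of F, which lies inside E

print(P_F_and_E / P_E)  # 1/3
```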
In the dart game (cf. Example 2.8), suppose we know that the dart lands in the upper half of the target. What is the probability that its distance from the center is less than \( 1/2 \) ?
Here \( E = \{ \left( {x, y}\right) : y \geq 0\} \), and \( F = \left\{ {\left( {x, y}\right) : {x}^{2} + {y}^{2} < {\left( 1/2\right) }^{2}}\right\} \) . Hence,\n\n\[ P\left( {F \mid E}\right) = \frac{P\left( {F \cap E}\right) }{P\left( E\right) } = \frac{\left( {1/\pi }\right) \left\lbrack {\left( {1/2}\right) \left( {\pi /4}\right) }\right\rbrack }{\left( {1/\pi }\right) \left( {\pi /2}\right) } \]\n\n\[ = 1/4\text{.} \]
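A Monte Carlo estimate agrees with the exact answer of \( 1/4 \). The sketch below rejection-samples uniform points on the unit disc; the seed and sample size are arbitrary choices of ours.

```python
import random

random.seed(0)
N = 200_000
hits_E = hits_EF = 0
for _ in range(N):
    # Rejection-sample a point uniform on the unit disc.
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            break
    if y >= 0:                      # event E: upper half of the target
        hits_E += 1
        if x * x + y * y < 0.25:    # event F: distance from center < 1/2
            hits_EF += 1

print(hits_EF / hits_E)  # ≈ 0.25
```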
What is the probability that there is no emission in a further \( s \) seconds, given that the clock reads \( r \) seconds and is still running?
Let \( G\left( t\right) \) be the probability that the next particle is emitted after time \( t \) . Then

\[ G\left( t\right) = {\int }_{t}^{\infty }\lambda {e}^{-{\lambda x}}{dx} = - {\left. {e}^{-{\lambda x}}\right| }_{t}^{\infty } = {e}^{-{\lambda t}}. \]

Let \( E \) be the event "the next particle is emitted after time \( r \)," and let \( F \) be the event "the next particle is emitted after time \( r + s \)." Since \( F \subset E \), we have

\[ P\left( {F \mid E}\right) = \frac{P\left( {F \cap E}\right) }{P\left( E\right) } = \frac{G\left( {r + s}\right) }{G\left( r\right) } = \frac{{e}^{-\lambda \left( {r + s}\right) }}{{e}^{-{\lambda r}}} = {e}^{-{\lambda s}}. \]

Thus the probability of no emission in a further \( s \) seconds does not depend on the time \( r \) that has already elapsed; the exponential density is memoryless.
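The quantity asked for works out to \( G(r+s)/G(r) \); the sketch below checks numerically that this equals \( e^{-\lambda s} \) (the memoryless property), computing \( G \) by direct numerical integration of the density. The values of \( \lambda, r, s \) are arbitrary choices for the check.

```python
import math

lam, r, s = 0.7, 2.0, 1.5   # arbitrary rate and times for the check

def G(t, steps=100_000, upper=60.0):
    # Midpoint-rule integration of λe^{-λx} from t to a large cutoff.
    h = (upper - t) / steps
    return sum(lam * math.exp(-lam * (t + (i + 0.5) * h)) * h
               for i in range(steps))

ratio = G(r + s) / G(r)
print(ratio, math.exp(-lam * s))  # both ≈ 0.3499
```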
In the dart game (see Example 4.18), let \( E \) be the event that the dart lands in the upper half of the target \( \left( {y \geq 0}\right) \) and \( F \) the event that the dart lands in the right half of the target \( \left( {x \geq 0}\right) \). Then \( P\left( {E \cap F}\right) \) is the probability that the dart lies in the first quadrant of the target, and

\[ P\left( {E \cap F}\right) = \frac{1}{\pi }{\int }_{E \cap F}{1dxdy} = \frac{1}{\pi }\operatorname{Area}\left( {E \cap F}\right) = \frac{1}{\pi } \cdot \frac{\pi }{4} = \frac{1}{4}, \]

while

\[ P\left( E\right) P\left( F\right) = \left( {\frac{1}{\pi }{\int }_{E}{1dxdy}}\right) \left( {\frac{1}{\pi }{\int }_{F}{1dxdy}}\right) = \frac{1}{2} \cdot \frac{1}{2} = \frac{1}{4}, \]

so that \( P\left( {E \cap F}\right) = P\left( E\right) P\left( F\right) \) and \( E \) and \( F \) are independent.
In this example, we define three random variables, \( {X}_{1},{X}_{2} \), and \( {X}_{3} \). We will show that \( {X}_{1} \) and \( {X}_{2} \) are independent, and that \( {X}_{1} \) and \( {X}_{3} \) are not independent. Choose a point \( \omega = \left( {{\omega }_{1},{\omega }_{2}}\right) \) at random from the unit square. Set \( {X}_{1} = {\omega }_{1}^{2},{X}_{2} = {\omega }_{2}^{2} \), and \( {X}_{3} = {\omega }_{1} + {\omega }_{2} \). Find the joint distributions \( {F}_{12}\left( {{r}_{1},{r}_{2}}\right) \) and \( {F}_{13}\left( {{r}_{1},{r}_{3}}\right) \).
We have already seen (see Example 2.13) that

\[ {F}_{1}\left( {r}_{1}\right) = P\left( {-\infty < {X}_{1} \leq {r}_{1}}\right) = \sqrt{{r}_{1}},\;\text{ if }0 \leq {r}_{1} \leq 1, \]

and similarly,

\[ {F}_{2}\left( {r}_{2}\right) = \sqrt{{r}_{2}}, \]

if \( 0 \leq {r}_{2} \leq 1 \). Now we have (see Figure 4.7)

\[ {F}_{12}\left( {{r}_{1},{r}_{2}}\right) = P\left( {{X}_{1} \leq {r}_{1}\text{ and }{X}_{2} \leq {r}_{2}}\right) = P\left( {{\omega }_{1} \leq \sqrt{{r}_{1}}\text{ and }{\omega }_{2} \leq \sqrt{{r}_{2}}}\right) = \operatorname{Area}\left( {E}_{1}\right) = \sqrt{{r}_{1}}\sqrt{{r}_{2}} = {F}_{1}\left( {r}_{1}\right) {F}_{2}\left( {r}_{2}\right). \]

In this case \( {F}_{12}\left( {{r}_{1},{r}_{2}}\right) = {F}_{1}\left( {r}_{1}\right) {F}_{2}\left( {r}_{2}\right) \), so that \( {X}_{1} \) and \( {X}_{2} \) are independent.
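The product form \( {F}_{12}\left( {{r}_{1},{r}_{2}}\right) = \sqrt{{r}_{1}}\sqrt{{r}_{2}} \) can be spot-checked by simulation; the seed, sample size, and test point below are arbitrary choices.

```python
import math
import random

random.seed(1)
N = 200_000
r1, r2 = 0.4, 0.7            # an arbitrary test point in the unit square

count = 0
for _ in range(N):
    w1, w2 = random.random(), random.random()   # a random point (ω1, ω2)
    if w1 * w1 <= r1 and w2 * w2 <= r2:         # X1 ≤ r1 and X2 ≤ r2
        count += 1

estimate = count / N
exact = math.sqrt(r1) * math.sqrt(r2)
print(abs(estimate - exact) < 0.01)  # True
```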