254 CHAPTER 6. EXPECTED VALUE AND VARIANCE

(d) In Example 6.11 we stated that

1 + 1/2 + 1/3 + · · · + 1/n ∼ log n + .5772 + 1/(2n) .

Use this to estimate the expression in (c). Compare these estimates with the exact values and also with your estimates obtained by simulation for the case n = 26.

*31 (Feller¹⁴) A large number, N, of people are subjected to a blood test. This can be administered in two ways: (1) Each person can be tested separately; in this case N tests are required. (2) The blood samples of k persons can be pooled and analyzed together. If this test is negative, this one test suffices for the k people. If the test is positive, each of the k persons must be tested separately, and in all, k + 1 tests are required for the k people. Assume that the probability p that a test is positive is the same for all people and that these events are independent.

(a) Find the probability that the test for a pooled sample of k people will be positive.
(b) What is the expected value of the number X of tests necessary under plan (2)? (Assume that N is divisible by k.)
(c) For small p, show that the value of k which will minimize the expected number of tests under the second plan is approximately 1/√p.

32 Write a program to add random numbers chosen from [0, 1] until the first time the sum is greater than one. Have your program repeat this experiment a number of times to estimate the expected number of selections necessary in order that the sum of the chosen numbers first exceeds 1. On the basis of your experiments, what is your estimate for this number?

*33 The following related discrete problem also gives a good clue for the answer to Exercise 32. Randomly select with replacement t1, t2, . . . , tr from the set (1/n, 2/n, . . . , n/n). Let X be the smallest value of r satisfying

t1 + t2 + · · · + tr > 1 .

Then E(X) = (1 + 1/n)ⁿ. To prove this, we can just as well choose t1, t2, . . . , tr randomly with replacement from the set (1, 2, . . . , n) and let X be the smallest value of r for which

t1 + t2 + · · · + tr > n .

(a) Use Exercise 3.2.36 to show that

P(X ≥ j + 1) = C(n, j)(1/n)^j .

¹⁴W. Feller, Introduction to Probability Theory and Its Applications, 3rd ed., vol. 1 (New York: John Wiley and Sons, 1968), p. 240.
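Exercise 32 can be explored directly by simulation. The sketch below is a hypothetical implementation (the function names are my own), estimating the expected number of uniform [0, 1) draws needed before the running sum first exceeds 1; Exercise 33 suggests the answer should be near e ≈ 2.718.

```python
import random

def draws_until_sum_exceeds_one():
    """Add uniform [0, 1) numbers until the running sum first exceeds 1."""
    total, count = 0.0, 0
    while total <= 1.0:
        total += random.random()
        count += 1
    return count

def estimate_expected_draws(trials=100_000):
    """Average the number of draws over many repetitions of the experiment."""
    return sum(draws_until_sum_exceeds_one() for _ in range(trials)) / trials
```

With 100,000 trials the estimate typically agrees with e to about two decimal places.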
(b) Show that

E(X) = Σ_{j=0}^{n} P(X ≥ j + 1) .

(c) From these two facts, find an expression for E(X). This proof is due to Harris Schultz.¹⁵

*34 (Banach's Matchbox¹⁶) A man carries in each of his two front pockets a box of matches originally containing N matches. Whenever he needs a match, he chooses a pocket at random and removes one from that box. One day he reaches into a pocket and finds the box empty.

(a) Let p_r denote the probability that the other pocket contains r matches. Define a sequence of counter random variables as follows: Let Xi = 1 if the ith draw is from the left pocket, and 0 if it is from the right pocket. Interpret p_r in terms of Sn = X1 + X2 + · · · + Xn. Find a binomial expression for p_r.
(b) Write a computer program to compute the p_r, as well as the probability that the other pocket contains at least r matches, for N = 100 and r from 0 to 50.
(c) Show that

(N − r)p_r = (1/2)(2N + 1)p_{r+1} − (1/2)(r + 1)p_{r+1} .

(d) Evaluate Σ_r p_r.
(e) Use (c) and (d) to determine the expectation E of the distribution {p_r}.
(f) Use Stirling's formula to obtain an approximation for E. How many matches must each box contain to ensure a value of about 13 for the expectation E? (Take π = 22/7.)

35 A coin is tossed until the first time a head turns up. If this occurs on the nth toss and n is odd you win 2^n/n, but if n is even then you lose 2^n/n. Then if your expected winnings exist they are given by the convergent series

1 − 1/2 + 1/3 − 1/4 + · · · ,

called the alternating harmonic series. It is tempting to say that this should be the expected value of the experiment. Show that if we were to do this, the expected value of an experiment would depend upon the order in which the outcomes are listed.

36 Suppose we have an urn containing c yellow balls and d green balls. We draw k balls, without replacement, from the urn. Find the expected number of yellow balls drawn. Hint: Write the number of yellow balls drawn as the sum of c random variables.

¹⁵H. Schultz, "An Expected Value Problem," Two-Year Mathematics Journal, vol. 10, no. 4 (1979), pp. 277–78.
¹⁶W. Feller, Introduction to Probability Theory, vol. 1, p. 166.
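Part (b) of the matchbox problem asks for a program to compute the p_r. A minimal sketch, assuming the standard answer to part (a), p_r = C(2N − r, N)(1/2)^{2N−r} — verify that binomial expression yourself before relying on this code:

```python
from math import comb

def matchbox_probability(N, r):
    """p_r = C(2N - r, N) * (1/2)^(2N - r): probability that the other
    pocket holds exactly r matches when a box is first found empty.
    (This formula is assumed from part (a) of the exercise.)"""
    return comb(2 * N - r, N) * 0.5 ** (2 * N - r)

def at_least(N, r):
    """Probability that the other pocket holds at least r matches."""
    return sum(matchbox_probability(N, s) for s in range(r, N + 1))
```

Summing p_r over r = 0, . . . , N gives 1, which is part (d) of the exercise.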
37 The reader is referred to Example 6.13 for an explanation of the various options available in Monte Carlo roulette.
(a) Compute the expected winnings of a 1 franc bet on red under option (a).
(b) Repeat part (a) for option (b).
(c) Compare the expected winnings for all three options.

*38 (from Pittel¹⁷) Telephone books, n in number, are kept in a stack. The probability that the book numbered i (where 1 ≤ i ≤ n) is consulted for a given phone call is pi > 0, where the pi's sum to 1. After a book is used, it is placed at the top of the stack. Assume that the calls are independent and evenly spaced, and that the system has been employed indefinitely far into the past. Let di be the average depth of book i in the stack. Show that di ≤ dj whenever pi ≥ pj. Thus, on the average, the more popular books have a tendency to be closer to the top of the stack. Hint: Let pij denote the probability that book i is above book j. Show that

pij = pij(1 − pj) + pji pi .

*39 (from Propp¹⁸) In the previous problem, let P be the probability that at the present time, each book is in its proper place, i.e., book i is ith from the top. Find a formula for P in terms of the pi's. In addition, find the least upper bound on P, if the pi's are allowed to vary. Hint: First find the probability that book 1 is in the right place. Then find the probability that book 2 is in the right place, given that book 1 is in the right place. Continue.

*40 (from H. Shultz and B. Leonard¹⁹) A sequence of random numbers in [0, 1) is generated until the sequence is no longer monotone increasing. The numbers are chosen according to the uniform distribution. What is the expected length of the sequence? (In calculating the length, the term that destroys monotonicity is included.) Hint: Let a1, a2, . . . be the sequence and let X denote the length of the sequence. Then

P(X > k) = P(a1 < a2 < · · · < ak) ,

and the probability on the right-hand side is easy to calculate. Furthermore, one can show that

E(X) = 1 + P(X > 1) + P(X > 2) + · · · .

41 Let T be the random variable that counts the number of 2-unshuffles performed on an n-card deck until all of the labels on the cards are distinct. This random variable was discussed in Section 3.3. Using Equation 3.4 in that section, together with the formula

E(T) = Σ_{s=0}^{∞} P(T > s)

¹⁷B. Pittel, Problem #1195, Mathematics Magazine, vol. 58, no. 3 (May 1985), p. 183.
¹⁸J. Propp, Problem #1159, Mathematics Magazine, vol. 57, no. 1 (Feb. 1984), p. 50.
¹⁹H. Shultz and B. Leonard, "Unexpected Occurrences of the Number e," Mathematics Magazine, vol. 62, no. 4 (October 1989), pp. 269–271.
that was proved in Exercise 33, show that

E(T) = Σ_{s=0}^{∞} [ 1 − C(2^s, n) n!/2^{sn} ] .

Show that for n = 52, this expression is approximately equal to 11.7. (As was stated in Chapter 3, this means that on the average, almost 12 riffle shuffles of a 52-card deck are required in order for the process to be considered random.)

6.2 Variance of Discrete Random Variables

The usefulness of the expected value as a prediction for the outcome of an experiment is increased when the outcome is not likely to deviate too much from the expected value. In this section we shall introduce a measure of this deviation, called the variance.

Variance

Definition 6.3 Let X be a numerically valued random variable with expected value µ = E(X). Then the variance of X, denoted by V(X), is

V(X) = E((X − µ)²) . □

Note that, by Theorem 6.1, V(X) is given by

V(X) = Σ_x (x − µ)² m(x) ,   (6.1)

where m is the distribution function of X.

Standard Deviation

The standard deviation of X, denoted by D(X), is D(X) = √V(X). We often write σ for D(X) and σ² for V(X).

Example 6.17 Consider one roll of a die. Let X be the number that turns up. To find V(X), we must first find the expected value of X. This is

µ = E(X) = 1(1/6) + 2(1/6) + 3(1/6) + 4(1/6) + 5(1/6) + 6(1/6) = 7/2 .

To find the variance of X, we form the new random variable (X − µ)² and compute its expectation. We can easily do this using the following table.
x   m(x)   (x − 7/2)²
1   1/6    25/4
2   1/6    9/4
3   1/6    1/4
4   1/6    1/4
5   1/6    9/4
6   1/6    25/4

Table 6.6: Variance calculation.

From this table we find that E((X − µ)²) is

V(X) = (1/6)(25/4 + 9/4 + 1/4 + 1/4 + 9/4 + 25/4) = 35/12 ,

and the standard deviation D(X) = √(35/12) ≈ 1.707. □

Calculation of Variance

We next prove a theorem that gives us a useful alternative form for computing the variance.

Theorem 6.6 If X is any random variable with E(X) = µ, then

V(X) = E(X²) − µ² .

Proof. We have

V(X) = E((X − µ)²) = E(X² − 2µX + µ²)
     = E(X²) − 2µE(X) + µ²
     = E(X²) − µ² . □

Using Theorem 6.6, we can compute the variance of the outcome of a roll of a die by first computing

E(X²) = 1(1/6) + 4(1/6) + 9(1/6) + 16(1/6) + 25(1/6) + 36(1/6) = 91/6 ,

and

V(X) = E(X²) − µ² = 91/6 − (7/2)² = 35/12 ,

in agreement with the value obtained directly from the definition of V(X).
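Both computations of the die's variance can be checked mechanically. A small sketch using exact rational arithmetic (the variable names are my own):

```python
from fractions import Fraction

# Fair die: each face 1..6 with probability 1/6.
die = {x: Fraction(1, 6) for x in range(1, 7)}

mu = sum(x * p for x, p in die.items())                   # E(X) = 7/2
var_def = sum((x - mu) ** 2 * p for x, p in die.items())  # definition: E((X - mu)^2)
ex2 = sum(x * x * p for x, p in die.items())              # E(X^2) = 91/6
var_thm = ex2 - mu ** 2                                   # Theorem 6.6: E(X^2) - mu^2
```

Both routes give exactly 35/12, as in the text.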
Properties of Variance

The variance has properties very different from those of the expectation. If c is any constant, E(cX) = cE(X) and E(X + c) = E(X) + c. These two statements imply that the expectation is a linear function. However, the variance is not linear, as seen in the next theorem.

Theorem 6.7 If X is any random variable and c is any constant, then

V(cX) = c²V(X)   and   V(X + c) = V(X) .

Proof. Let µ = E(X). Then E(cX) = cµ, and

V(cX) = E((cX − cµ)²) = E(c²(X − µ)²) = c²E((X − µ)²) = c²V(X) .

To prove the second assertion, we note that, to compute V(X + c), we would replace x by x + c and µ by µ + c in Equation 6.1. Then the c's would cancel, leaving V(X). □

We turn now to some general properties of the variance. Recall that if X and Y are any two random variables, E(X + Y) = E(X) + E(Y). This is not always true for the case of the variance. For example, let X be a random variable with V(X) ≠ 0, and define Y = −X. Then V(X) = V(Y), so that V(X) + V(Y) = 2V(X). But X + Y is always 0 and hence has variance 0. Thus V(X + Y) ≠ V(X) + V(Y).

In the important case of mutually independent random variables, however, the variance of the sum is the sum of the variances.

Theorem 6.8 Let X and Y be two independent random variables. Then

V(X + Y) = V(X) + V(Y) .

Proof. Let E(X) = a and E(Y) = b. Then

V(X + Y) = E((X + Y)²) − (a + b)²
         = E(X²) + 2E(XY) + E(Y²) − a² − 2ab − b² .

Since X and Y are independent, E(XY) = E(X)E(Y) = ab. Thus,

V(X + Y) = E(X²) − a² + E(Y²) − b² = V(X) + V(Y) . □
It is easy to extend this proof, by mathematical induction, to show that the variance of the sum of any number of mutually independent random variables is the sum of the individual variances. Thus we have the following theorem.

Theorem 6.9 Let X1, X2, . . . , Xn be an independent trials process with E(Xj) = µ and V(Xj) = σ². Let

Sn = X1 + X2 + · · · + Xn

be the sum, and

An = Sn/n

be the average. Then

E(Sn) = nµ ,   V(Sn) = nσ² ,   σ(Sn) = σ√n ,
E(An) = µ ,    V(An) = σ²/n ,  σ(An) = σ/√n .

Proof. Since all the random variables Xj have the same expected value, we have

E(Sn) = E(X1) + · · · + E(Xn) = nµ ,
V(Sn) = V(X1) + · · · + V(Xn) = nσ² ,

and

σ(Sn) = σ√n .

We have seen that, if we multiply a random variable X with mean µ and variance σ² by a constant c, the new random variable has expected value cµ and variance c²σ². Thus,

E(An) = E(Sn/n) = nµ/n = µ ,

and

V(An) = V(Sn/n) = V(Sn)/n² = nσ²/n² = σ²/n .

Finally, the standard deviation of An is given by

σ(An) = σ/√n . □
[Figure 6.7: Empirical distribution of An, for n = 10 and n = 100.]

The last equation in the above theorem implies that in an independent trials process, if the individual summands have finite variance, then the standard deviation of the average goes to 0 as n → ∞. Since the standard deviation tells us something about the spread of the distribution around the mean, we see that for large values of n, the value of An is usually very close to the mean of An, which equals µ, as shown above. This statement is made precise in Chapter 8, where it is called the Law of Large Numbers. For example, let X represent the roll of a fair die. In Figure 6.7, we show the distribution of a random variable An corresponding to X, for n = 10 and n = 100.

Example 6.18 Consider n rolls of a die. We have seen that, if Xj is the outcome of the jth roll, then E(Xj) = 7/2 and V(Xj) = 35/12. Thus, if Sn is the sum of the outcomes, and An = Sn/n is the average of the outcomes, we have E(An) = 7/2 and V(An) = (35/12)/n. Therefore, as n increases, the expected value of the average remains constant, but the variance tends to 0. If the variance is a measure of the expected deviation from the mean, this would indicate that, for large n, we can expect the average to be very near the expected value. This is in fact the case, and we shall justify it in Chapter 8. □

Bernoulli Trials

Consider next the general Bernoulli trials process. As usual, we let Xj = 1 if the jth outcome is a success and 0 if it is a failure. If p is the probability of a success, and q = 1 − p, then

E(Xj) = 0q + 1p = p ,
E(Xj²) = 0²q + 1²p = p ,

and

V(Xj) = E(Xj²) − (E(Xj))² = p − p² = pq .

Thus, for Bernoulli trials, if Sn = X1 + X2 + · · · + Xn is the number of successes, then E(Sn) = np, V(Sn) = npq, and D(Sn) = √(npq). If An = Sn/n is the average number of successes, then E(An) = p, V(An) = pq/n, and D(An) = √(pq/n). We see that the expected proportion of successes remains p and the variance tends to 0.
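The shrinking variance of An is easy to see empirically. A hypothetical simulation sketch (names are my own) comparing n = 10 and n = 100 rolls of a fair die:

```python
import random

def sample_variance_of_average(n, reps=2000):
    """Empirical variance of A_n = S_n / n over `reps` independent experiments."""
    avgs = [sum(random.randint(1, 6) for _ in range(n)) / n for _ in range(reps)]
    mean = sum(avgs) / reps
    return sum((a - mean) ** 2 for a in avgs) / reps
```

Theorem 6.9 predicts values near (35/12)/10 ≈ 0.29 and (35/12)/100 ≈ 0.029, a tenfold drop.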
This suggests that the frequency interpretation of probability is a correct one. We shall make this more precise in Chapter 8.

Example 6.19 Let T denote the number of trials until the first success in a Bernoulli trials process. Then T is geometrically distributed. What is the variance of T? In Example 4.15, we saw that

mT = ( 1   2    3     · · ·
       p   qp   q²p   · · · ) .

In Example 6.4, we showed that E(T) = 1/p. Thus, V(T) = E(T²) − 1/p², so we need only find

E(T²) = 1p + 4qp + 9q²p + · · · = p(1 + 4q + 9q² + · · ·) .

To evaluate this sum, we start again with

1 + x + x² + · · · = 1/(1 − x) .

Differentiating, we obtain

1 + 2x + 3x² + · · · = 1/(1 − x)² .

Multiplying by x,

x + 2x² + 3x³ + · · · = x/(1 − x)² .

Differentiating again gives

1 + 4x + 9x² + · · · = (1 + x)/(1 − x)³ .

Thus,

E(T²) = p(1 + q)/(1 − q)³ = (1 + q)/p²

and

V(T) = E(T²) − (E(T))² = (1 + q)/p² − 1/p² = q/p² .

For example, the variance for the number of tosses of a coin until the first head turns up is (1/2)/(1/2)² = 2. The variance for the number of rolls of a die until the first six turns up is (5/6)/(1/6)² = 30. Note that, as p decreases, the variance increases rapidly. This corresponds to the increased spread of the geometric distribution as p decreases (noted in Figure 5.1). □
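The result V(T) = q/p² can also be checked by simulation. An illustrative sketch (not from the text) for a fair coin, where the mean should be near 2 and the variance near 2:

```python
import random

def trials_until_success(p):
    """Simulate a geometric random variable: Bernoulli(p) trials until the first success."""
    n = 1
    while random.random() >= p:
        n += 1
    return n

def empirical_mean_var(p, reps=200_000):
    """Empirical mean and variance of the geometric distribution with parameter p."""
    xs = [trials_until_success(p) for _ in range(reps)]
    mean = sum(xs) / reps
    var = sum((x - mean) ** 2 for x in xs) / reps
    return mean, var
```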
Poisson Distribution

Just as in the case of expected values, it is easy to guess the variance of the Poisson distribution with parameter λ. We recall that the variance of a binomial distribution with parameters n and p equals npq. We also recall that the Poisson distribution could be obtained as a limit of binomial distributions, if n goes to ∞ and p goes to 0 in such a way that their product is kept fixed at the value λ. In this case, npq = λq approaches λ, since q goes to 1. So, given a Poisson distribution with parameter λ, we should guess that its variance is λ. The reader is asked to show this in Exercise 29.

Exercises

1 A number is chosen at random from the set S = {−1, 0, 1}. Let X be the number chosen. Find the expected value, variance, and standard deviation of X.

2 A random variable X has the distribution

pX = ( 0    1    2    4
       1/3  1/3  1/6  1/6 ) .

Find the expected value, variance, and standard deviation of X.

3 You place a 1-dollar bet on the number 17 at Las Vegas, and your friend places a 1-dollar bet on black (see Exercises 1.1.6 and 1.1.7). Let X be your winnings and Y be her winnings. Compare E(X), E(Y), and V(X), V(Y). What do these computations tell you about the nature of your winnings if you and your friend make a sequence of bets, with you betting each time on a number and your friend betting on a color?

4 X is a random variable with E(X) = 100 and V(X) = 15. Find
(a) E(X²).
(b) E(3X + 10).
(c) E(−X).
(d) V(−X).
(e) D(−X).

5 In a certain manufacturing process, the (Fahrenheit) temperature never varies by more than 2° from 62°. The temperature is, in fact, a random variable F with distribution

PF = ( 60    61    62    63    64
       1/10  2/10  4/10  2/10  1/10 ) .

(a) Find E(F) and V(F).
(b) Define T = F − 62. Find E(T) and V(T), and compare these answers with those in part (a).
(c) It is decided to report the temperature readings on a Celsius scale, that is, C = (5/9)(F − 32). What is the expected value and variance for the readings now?

6 Write a computer program to calculate the mean and variance of a distribution which you specify as data. Use the program to compare the variances for the following densities, both having expected value 0:

pX = ( −2    −1    0     1     2
       3/11  2/11  1/11  2/11  3/11 ) ;

pY = ( −2    −1    0     1     2
       1/11  2/11  5/11  2/11  1/11 ) .

7 A coin is tossed three times. Let X be the number of heads that turn up. Find V(X) and D(X).

8 A random sample of 2400 people are asked if they favor a government proposal to develop new nuclear power plants. If 40 percent of the people in the country are in favor of this proposal, find the expected value and the standard deviation for the number S2400 of people in the sample who favored the proposal.

9 A die is loaded so that the probability of a face coming up is proportional to the number on that face. The die is rolled with outcome X. Find V(X) and D(X).

10 Prove the following facts about the standard deviation.
(a) D(X + c) = D(X).
(b) D(cX) = |c|D(X).

11 A number is chosen at random from the integers 1, 2, 3, . . . , n. Let X be the number chosen. Show that E(X) = (n + 1)/2 and V(X) = (n − 1)(n + 1)/12. Hint: The following identity may be useful:

1² + 2² + · · · + n² = n(n + 1)(2n + 1)/6 .

12 Let X be a random variable with µ = E(X) and σ² = V(X). Define X* = (X − µ)/σ. The random variable X* is called the standardized random variable associated with X. Show that this standardized random variable has expected value 0 and variance 1.

13 Peter and Paul play Heads or Tails (see Example 1.4). Let Wn be Peter's winnings after n matches. Show that E(Wn) = 0 and V(Wn) = n.

14 Find the expected value and the variance for the number of boys and the number of girls in a royal family that has children until there is a boy or until there are three children, whichever comes first.
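Exercise 6 asks for a program of exactly this kind. A minimal sketch (the distribution format and names are my own choices), applied to the two densities of that exercise:

```python
def mean_and_variance(dist):
    """dist is a list of (value, probability) pairs whose probabilities sum to 1."""
    mu = sum(x * p for x, p in dist)
    var = sum((x - mu) ** 2 * p for x, p in dist)
    return mu, var

pX = [(-2, 3/11), (-1, 2/11), (0, 1/11), (1, 2/11), (2, 3/11)]
pY = [(-2, 1/11), (-1, 2/11), (0, 5/11), (1, 2/11), (2, 1/11)]
```

Both densities have mean 0, but pX, whose mass sits farther from 0, has the larger variance (28/11 versus 12/11).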
15 Suppose that n people have their hats returned at random. Let Xi = 1 if the ith person gets his or her own hat back and 0 otherwise. Let Sn = Σ_{i=1}^{n} Xi. Then Sn is the total number of people who get their own hats back. Show that
(a) E(Xi²) = 1/n.
(b) E(Xi · Xj) = 1/(n(n − 1)) for i ≠ j.
(c) E(Sn²) = 2 (using (a) and (b)).
(d) V(Sn) = 1.

16 Let Sn be the number of successes in n independent trials. Use the program BinomialProbabilities (Section 3.2) to compute, for given n, p, and j, the probability

P(−j√(npq) < Sn − np < j√(npq)) .

(a) Let p = .5, and compute this probability for j = 1, 2, 3 and n = 10, 30, 50. Do the same for p = .2.
(b) Show that the standardized random variable Sn* = (Sn − np)/√(npq) has expected value 0 and variance 1. What do your results from (a) tell you about this standardized quantity Sn*?

17 Let X be the outcome of a chance experiment with E(X) = µ and V(X) = σ². When µ and σ² are unknown, the statistician often estimates them by repeating the experiment n times with outcomes x1, x2, . . . , xn, estimating µ by the sample mean

x̄ = (1/n) Σ_{i=1}^{n} xi ,

and σ² by the sample variance

s² = (1/n) Σ_{i=1}^{n} (xi − x̄)² .

Then s is the sample standard deviation. These formulas should remind the reader of the definitions of the theoretical mean and variance. (Many statisticians define the sample variance with the coefficient 1/n replaced by 1/(n − 1). If this alternative definition is used, the expected value of s² is equal to σ². See Exercise 18, part (d).) Write a computer program that will roll a die n times and compute the sample mean and sample variance. Repeat this experiment several times for n = 10 and n = 1000. How well do the sample mean and sample variance estimate the true mean 7/2 and variance 35/12?

18 Show that, for the sample mean x̄ and sample variance s² as defined in Exercise 17,
(a) E(x̄) = µ.
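A sketch of the die-rolling program requested in Exercise 17 (a hypothetical implementation using the 1/n form of s² given above):

```python
import random

def die_sample_stats(n):
    """Roll a fair die n times; return the sample mean and sample variance (1/n form)."""
    xs = [random.randint(1, 6) for _ in range(n)]
    xbar = sum(xs) / n
    s2 = sum((x - xbar) ** 2 for x in xs) / n
    return xbar, s2
```

For n = 1000 the estimates usually fall within a few percent of 7/2 and 35/12; for n = 10 they fluctuate much more, as the exercise invites you to observe.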
(b) E
21 Let X be a random variable with E(X) = µ and V(X) = σ². Show that the function f(x) defined by

f(x) = Σ_ω (X(ω) − x)² p(ω)

has its minimum value when x = µ.

22 Let X and Y be two random variables defined on the finite sample space Ω. Assume that X, Y, X + Y, and X − Y all have the same distribution. Prove that P(X = Y = 0) = 1.

23 If X and Y are any two random variables, then the covariance of X and Y is defined by Cov(X, Y) = E((X − E(X))(Y − E(Y))). Note that Cov(X, X) = V(X). Show that, if X and Y are independent, then Cov(X, Y) = 0; and show, by an example, that we can have Cov(X, Y) = 0 and X and Y not independent.

*24 A professor wishes to make up a true-false exam with n questions. She assumes that she can design the problems in such a way that a student will answer the jth problem correctly with probability pj, and that the answers to the various problems may be considered independent experiments. Let Sn be the number of problems that a student will get correct. The professor wishes to choose pj so that E(Sn) = .7n and so that the variance of Sn is as large as possible. Show that, to achieve this, she should choose pj = .7 for all j; that is, she should make all the problems have the same difficulty.

25 (Lamperti²⁰) An urn contains exactly 5000 balls, of which an unknown number X are white and the rest red, where X is a random variable with a probability distribution on the integers 0, 1, 2, . . . , 5000.
(a) Suppose we know that E(X) = µ. Show that this is enough to allow us to calculate the probability that a ball drawn at random from the urn will be white. What is this probability?
(b) We draw a ball from the urn, examine its color, replace it, and then draw another. Under what conditions, if any, are the results of the two drawings independent; that is, does P(white, white) = P(white)²?
(c) Suppose the variance of X is σ². What is the probability of drawing two white balls in part (b)?

26 For a sequence of Bernoulli trials, let X1 be the number of trials until the first success. For j ≥ 2, let Xj be the number of trials after the (j − 1)st success until the jth success. It can be shown that X1, X2, . . . is an independent trials process.

²⁰Private communication.
(a) What is the common distribution, expected value, and variance for Xj?
(b) Let Tn = X1 + X2 + · · · + Xn. Then Tn is the time until the nth success. Find E(Tn) and V(Tn).
(c) Use the results of (b) to find the expected value and variance for the number of tosses of a coin until the nth occurrence of a head.

27 Referring to Exercise 6.1.30, find the variance for the number of boxes of Wheaties bought before getting half of the players' pictures and the variance for the number of additional boxes needed to get the second half of the players' pictures.

28 In Example 5.3, assume that the book in question has 1000 pages. Let X be the number of pages with no mistakes. Show that E(X) = 905 and V(X) = 86. Using these results, show that the probability is ≤ .05 that there will be more than 924 pages without errors or fewer than 866 pages without errors.

29 Let X be Poisson distributed with parameter λ. Show that V(X) = λ.

6.3 Continuous Random Variables

In this section we consider the properties of the expected value and the variance of a continuous random variable. These quantities are defined just as for discrete random variables and share the same properties.

Expected Value

Definition 6.4 Let X be a real-valued random variable with density function f(x). The expected value µ = E(X) is defined by

µ = E(X) = ∫_{−∞}^{+∞} x f(x) dx ,

provided the integral

∫_{−∞}^{+∞} |x| f(x) dx

is finite. □

The reader should compare this definition with the corresponding one for discrete random variables in Section 6.1. Intuitively, we can interpret E(X), as we did in the previous sections, as the value that we should expect to obtain if we perform a large number of independent experiments and average the resulting values of X. We can summarize the properties of E(X) as follows (cf. Theorem 6.2).
Theorem 6.10 If X and Y are real-valued random variables and c is any constant, then

E(X + Y) = E(X) + E(Y) ,
E(cX) = cE(X) .

The proof is very similar to the proof of Theorem 6.2, and we omit it. □

More generally, if X1, X2, . . . , Xn are n real-valued random variables, and c1, c2, . . . , cn are n constants, then

E(c1X1 + c2X2 + · · · + cnXn) = c1E(X1) + c2E(X2) + · · · + cnE(Xn) .

Example 6.20 Let X be uniformly distributed on the interval [0, 1]. Then

E(X) = ∫_0^1 x dx = 1/2 .

It follows that if we choose a large number N of random numbers from [0, 1] and take the average, then we can expect that this average should be close to the expected value of 1/2. □

Example 6.21 Let Z = (x, y) denote a point chosen uniformly and randomly from the unit disk, as in the dart game in Example 2.8, and let X = (x² + y²)^{1/2} be the distance from Z to the center of the disk. The density function of X can easily be shown to equal f(x) = 2x, so by the definition of expected value,

E(X) = ∫_0^1 x f(x) dx = ∫_0^1 x(2x) dx = 2/3 . □

Example 6.22 In the example of the couple meeting at the Inn (Example 2.16), each person arrives at a time which is uniformly distributed between 5:00 and 6:00 PM. The random variable Z under consideration is the length of time the first person has to wait until the second one arrives. It was shown that

fZ(z) = 2(1 − z) ,

for 0 ≤ z ≤ 1. Hence,

E(Z) = ∫_0^1 z fZ(z) dz
     = ∫_0^1 2z(1 − z) dz = [z² − (2/3)z³]_0^1 = 1/3 . □

Expectation of a Function of a Random Variable

Suppose that X is a real-valued random variable and φ(x) is a continuous function from R to R. The following theorem is the continuous analogue of Theorem 6.1.

Theorem 6.11 If X is a real-valued random variable and if φ : R → R is a continuous real-valued function with domain [a, b], then

E(φ(X)) = ∫_{−∞}^{+∞} φ(x) fX(x) dx ,

provided the integral exists. □

For a proof of this theorem, see Ross.²¹

Expectation of the Product of Two Random Variables

In general, it is not true that E(XY) = E(X)E(Y), since the integral of a product is not the product of integrals. But if X and Y are independent, then the expectations multiply.

Theorem 6.12 Let X and Y be independent real-valued continuous random variables with finite expected values. Then we have

E(XY) = E(X)E(Y) .

Proof. We will prove this only in the case that the ranges of X and Y are contained in the intervals [a, b] and [c, d], respectively. Let the density functions of X and Y be denoted by fX(x) and fY(y), respectively. Since X and Y are independent, the joint density function of X and Y is the product of the individual density functions. Hence

E(XY) = ∫_a^b ∫_c^d xy fX(x) fY(y) dy dx
       = ∫_a^b x fX(x) dx ∫_c^d y fY(y) dy
       = E(X)E(Y) .

The proof in the general case involves using sequences of bounded random variables that approach X and Y, and is somewhat technical, so we will omit it. □

²¹S. Ross, A First Course in Probability (New York: Macmillan, 1984), pp. 241–245.
In the same way, one can show that if X1, X2, . . . , Xn are n mutually independent real-valued random variables, then

E(X1 X2 · · · Xn) = E(X1) E(X2) · · · E(Xn) .

Example 6.23 Let Z = (X, Y) be a point chosen at random in the unit square. Let A = X² and B = Y². Then Theorem 4.3 implies that A and B are independent. Using Theorem 6.11, the expectations of A and B are easy to calculate:

E(A) = E(B) = ∫_0^1 x² dx = 1/3 .

Using Theorem 6.12, the expectation of AB is just the product of E(A) and E(B), or 1/9. The usefulness of this theorem is demonstrated by noting that it is quite a bit more difficult to calculate E(AB) from the definition of expectation. One finds that the density function of AB is

fAB(t) = −log(t) / (4√t) ,

so

E(AB) = ∫_0^1 t fAB(t) dt = 1/9 . □

Example 6.24 Again let Z = (X, Y) be a point chosen at random in the unit square, and let W = X + Y. Then Y and W are not independent, and we have

E(Y) = 1/2 ,
E(W) = 1 ,
E(YW) = E(XY + Y²) = E(X)E(Y) + 1/3 = 7/12 ≠ E(Y)E(W) . □

We turn now to the variance.

Variance

Definition 6.5 Let X be a real-valued random variable with density function f(x). The variance σ² = V(X) is defined by

σ² = V(X) = E((X − µ)²) . □
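Example 6.23 lends itself to a Monte Carlo check. An illustrative sketch (not from the text) estimating E(AB) for A = X², B = Y², with X and Y independent and uniform on [0, 1]:

```python
import random

def estimate_E_AB(n=200_000):
    """Monte Carlo estimate of E(X^2 * Y^2) for independent uniform X, Y on [0, 1]."""
    total = 0.0
    for _ in range(n):
        x, y = random.random(), random.random()
        total += (x * x) * (y * y)
    return total / n
```

The estimate should land near E(A)E(B) = (1/3)(1/3) = 1/9 ≈ 0.111, as the product theorem predicts.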
The next result follows easily from Theorem 6.1. There is another way to calculate the variance of a continuous random variable, which is usually slightly easier. It is given in Theorem 6.15.

Theorem 6.13 If X is a real-valued random variable with E(X) = µ, then

σ² = ∫_{−∞}^{∞} (x − µ)² f(x) dx . □

The properties listed in the next three theorems are all proved in exactly the same way that the corresponding theorems for discrete random variables were proved in Section 6.2.

Theorem 6.14 If X is a real-valued random variable defined on Ω and c is any constant, then (cf. Theorem 6.7)

V(cX) = c²V(X) ,
V(X + c) = V(X) . □

Theorem 6.15 If X is a real-valued random variable with E(X) = µ, then (cf. Theorem 6.6)

V(X) = E(X²) − µ² . □

Theorem 6.16 If X and Y are independent real-valued random variables on Ω, then (cf. Theorem 6.8)

V(X + Y) = V(X) + V(Y) . □

Example 6.25 (continuation of Example 6.20) If X is uniformly distributed on [0, 1], then, using Theorem 6.15, we have

V(X) = ∫_0^1 (x − 1/2)² dx = 1/12 . □
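A quick numerical check of Example 6.25, using Theorem 6.15's form V(X) = E(X²) − µ² on simulated uniform draws (an illustrative sketch, not from the text):

```python
import random

def uniform_mean_and_variance(n=200_000):
    """Estimate E(X) and V(X) = E(X^2) - mu^2 for X uniform on [0, 1]."""
    xs = [random.random() for _ in range(n)]
    mu = sum(xs) / n
    var = sum(x * x for x in xs) / n - mu * mu
    return mu, var
```

The estimates should be close to 1/2 and 1/12 ≈ 0.0833.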
Example 6.26 Let X be an exponentially distributed random variable with parameter λ. Then the density function of X is

fX(x) = λe^{−λx} .

From the definition of expectation and integration by parts, we have

E(X) = ∫_0^∞ x fX(x) dx = λ ∫_0^∞ x e^{−λx} dx
     = [−x e^{−λx}]_0^∞ + ∫_0^∞ e^{−λx} dx
     = 0 + [e^{−λx}/(−λ)]_0^∞ = 1/λ .

Similarly, using Theorems 6.11 and 6.15, we have

V(X) = ∫_0^∞ x² fX(x) dx − 1/λ² = λ ∫_0^∞ x² e^{−λx} dx − 1/λ²
     = [−x² e^{−λx}]_0^∞ + 2 ∫_0^∞ x e^{−λx} dx − 1/λ²
     = [−x² e^{−λx}]_0^∞ − [2x e^{−λx}/λ]_0^∞ − [2 e^{−λx}/λ²]_0^∞ − 1/λ²
     = 2/λ² − 1/λ² = 1/λ² .

In this case, both E(X) and V(X) are finite if λ > 0. □
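The results E(X) = 1/λ and V(X) = 1/λ² of Example 6.26 can be checked with inverse-CDF sampling (X = −ln(1 − U)/λ for U uniform on [0, 1)); this sketch is mine, not the book's:

```python
import math
import random

def exponential_sample(lam):
    """Sample Exp(lam) via the inverse CDF: X = -ln(1 - U) / lam."""
    return -math.log(1.0 - random.random()) / lam

def exp_mean_and_variance(lam, n=200_000):
    """Empirical mean and variance of n exponential samples."""
    xs = [exponential_sample(lam) for _ in range(n)]
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    return mu, var
```

For λ = 2 the estimates should be near 1/2 and 1/4.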
which one can easily show is finite. Thus, the expected value of Z is 0.

To calculate the variance of Z, we begin by applying Theorem 6.15:

V(Z) = ∫_{−∞}^{+∞} x² f_Z(x) dx − µ² .

If we write x² as x · x, and integrate by parts, we obtain

(1/√(2π)) [−x e^{−x²/2}]_{−∞}^{+∞} + (1/√(2π)) ∫_{−∞}^{+∞} e^{−x²/2} dx .

The first summand above can be shown to equal 0, since as x → ±∞, e^{−x²/2} gets small more quickly than x gets large. The second summand is just the standard normal density integrated over its domain, so the value of this summand is 1. Therefore, the variance of the standard normal density equals 1.

Now let X be a (not necessarily standard) normal random variable with parameters µ and σ. Then the density function of X is

f_X(x) = (1/(√(2π)σ)) e^{−(x−µ)²/(2σ²)} .

We can write X = σZ + µ, where Z is a standard normal random variable. Since E(Z) = 0 and V(Z) = 1 by the calculation above, Theorems 6.10 and 6.14 imply that

E(X) = E(σZ + µ) = µ ,
V(X) = V(σZ + µ) = σ² . □

Example 6.28 Let X be a continuous random variable with the Cauchy density function

f_X(x) = (a/π) · 1/(a² + x²) .

Then the expectation of X does not exist, because the integral

(a/π) ∫_{−∞}^{+∞} |x| dx/(a² + x²)

diverges. Thus the variance of X also fails to exist. Densities whose variance is not defined, like the Cauchy density, behave quite differently in a number of important respects from those whose variance is finite. We shall see one instance of this difference in Section 8.2. □

Independent Trials
Corollary 6.1 If X_1, X_2, . . . , X_n is an independent trials process of real-valued random variables, with E(X_i) = µ and V(X_i) = σ², and if

S_n = X_1 + X_2 + · · · + X_n ,
A_n = S_n/n ,

then

E(S_n) = nµ ,   E(A_n) = µ ,
V(S_n) = nσ² ,  V(A_n) = σ²/n .

It follows that if we set

S_n* = (S_n − nµ)/√(nσ²) ,

then

E(S_n*) = 0 ,   V(S_n*) = 1 .

We say that S_n* is a standardized version of S_n (see Exercise 12 in Section 6.2). □

Queues

Example 6.29 Let us consider again the queueing problem, that is, the problem of the customers waiting in a queue for service (see Example 5.7). We suppose again that customers join the queue in such a way that the time between arrivals is an exponentially distributed random variable X with density function

f_X(t) = λe^{−λt} .

Then the expected value of the time between arrivals is simply 1/λ (see Example 6.26), as was stated in Example 5.7. The reciprocal λ of this expected value is often referred to as the arrival rate. The service time of an individual who is first in line is defined to be the amount of time that the person stays at the head of the line before leaving. We suppose that the customers are served in such a way that the service time is another exponentially distributed random variable Y with density function

f_Y(t) = µe^{−µt} .

Then the expected value of the service time is

E(Y) = ∫_0^∞ t f_Y(t) dt = 1/µ .

The reciprocal µ of this expected value is often referred to as the service rate.
We expect on grounds of our everyday experience with queues that if the service rate is greater than the arrival rate, then the average queue size will tend to stabilize, but if the service rate is less than the arrival rate, then the queue will tend to increase in length without limit (see Figure 5.7). The simulations in Example 5.7 tend to bear out our everyday experience. We can make this conclusion more precise if we introduce the traffic intensity as the product

ρ = (arrival rate)(average service time) = λ/µ = (1/µ)/(1/λ) .

The traffic intensity is also the ratio of the average service time to the average time between arrivals. If the traffic intensity is less than 1 the queue will perform reasonably, but if it is greater than 1 the queue will grow indefinitely large. In the critical case of ρ = 1, it can be shown that the queue will become large but there will always be times at which the queue is empty.²²

In the case that the traffic intensity is less than 1 we can consider the length of the queue as a random variable Z whose expected value is finite,

E(Z) = N .

The time spent in the queue by a single customer can be considered as a random variable W whose expected value is finite,

E(W) = T .

Then we can argue that, when a customer joins the queue, he expects to find N people ahead of him, and when he leaves the queue, he expects to find λT people behind him. Since, in equilibrium, these should be the same, we would expect to find that

N = λT .

This last relationship is called Little's law for queues.²³ We will not prove it here. A proof may be found in Ross.²⁴ Note that in this case we are counting the waiting time of all customers, even those that do not have to wait at all. In our simulation in Section 4.2, we did not consider these customers. If we knew the expected queue length then we could use Little's law to obtain the expected waiting time, since

T = N/λ .
The queue length is a random variable with a discrete distribution. We can estimate this distribution by simulation, keeping track of the queue lengths at the times at which a customer arrives. We show the result of this simulation (using the program Queue) in Figure 6.8.

²²L. Kleinrock, Queueing Systems, vol. 2 (New York: John Wiley and Sons, 1975).
²³Ibid., p. 17.
²⁴S. M. Ross, Applied Probability Models with Optimization Applications (San Francisco: Holden-Day, 1970).
[Figure 6.8: Distribution of queue lengths.]

We note that the distribution appears to be a geometric distribution. In the study of queueing theory it is shown that the distribution for the queue length in equilibrium is indeed a geometric distribution with

s_j = (1 − ρ)ρ^j   for j = 0, 1, 2, . . . ,

if ρ < 1. The expected value of a random variable with this distribution is

N = ρ/(1 − ρ)

(see Example 6.4). Thus by Little's result the expected waiting time is

T = ρ/(λ(1 − ρ)) = 1/(µ − λ) ,

where µ is the service rate, λ the arrival rate, and ρ the traffic intensity. In our simulation, the arrival rate is 1 and the service rate is 1.1. Thus, the traffic intensity is 1/1.1 = 10/11, the expected queue size is

(10/11)/(1 − 10/11) = 10 ,

and the expected waiting time is

1/(1.1 − 1) = 10 .

In our simulation the average queue size was 8.19 and the average waiting time was 7.37. In Figure 6.9, we show the histogram for the waiting times. This histogram suggests that the density for the waiting times is exponential with parameter µ − λ, and this is the case. □
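The quantities in this example are easy to recompute, and the queue itself can be simulated in a few lines. The sketch below is our own illustration (not the book's Queue program); it uses the Lindley recursion for the time a customer spends in the queue (waiting plus service), with arbitrary seed and sample size:

```python
import random

# Theory from the example: N = rho/(1 - rho), T = 1/(mu - lam).
lam, mu = 1.0, 1.1              # arrival rate and service rate from the text
rho = lam / mu                  # traffic intensity 10/11
N = rho / (1 - rho)             # expected queue length: 10
T = 1 / (mu - lam)              # expected waiting time: 10

# Lindley recursion: each customer's delay before service is
# max(0, previous delay + previous service - interarrival gap).
random.seed(42)
wait = 0.0
total_time = 0.0
n_customers = 200_000
for _ in range(n_customers):
    service = random.expovariate(mu)
    total_time += wait + service        # time this customer spends in all
    interarrival = random.expovariate(lam)
    wait = max(0.0, wait + service - interarrival)

avg_T = total_time / n_customers
print(N, T, avg_T)              # avg_T should be roughly 10
```

As in the book's simulation, the empirical average falls somewhat short of 10, since the queue starts empty and converges to equilibrium slowly when ρ is close to 1.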
[Figure 6.9: Distribution of queue waiting times.]

Exercises

1 Let X be a random variable with range [−1, 1] and let f_X(x) be the density function of X. Find µ(X) and σ²(X) if, for |x| < 1,
(a) f_X(x) = 1/2.
(b) f_X(x) = |x|.
(c) f_X(x) = 1 − |x|.
(d) f_X(x) = (3/2)x².

2 Let X be a random variable with range [−1, 1] and f_X its density function. Find µ(X) and σ²(X) if, for |x| > 1, f_X(x) = 0, and for |x| < 1,
(a) f_X(x) = (3/4)(1 − x²).
(b) f_X(x) = (π/4) cos(πx/2).
(c) f_X(x) = (x + 1)/2.
(d) f_X(x) = (3/8)(x + 1)².

3 The lifetime, measured in hours, of the ACME super light bulb is a random variable T with density function f_T(t) = λ²te^{−λt}, where λ = .05. What is the expected lifetime of this light bulb? What is its variance?

4 Let X be a random variable with range [−1, 1] and density function f_X(x) = ax + b if |x| < 1.
(a) Show that if ∫_{−1}^{+1} f_X(x) dx = 1, then b = 1/2.
(b) Show that if f_X(x) ≥ 0, then −1/2 ≤ a ≤ 1/2.
(c) Show that µ = (2/3)a, and hence that −1/3 ≤ µ ≤ 1/3.
(d) Show that σ²(X) = (2/3)b − (4/9)a² = 1/3 − (4/9)a².

5 Let X be a random variable with range [−1, 1] and density function f_X(x) = ax² + bx + c if |x| < 1 and 0 otherwise.
(a) Show that 2a/3 + 2c = 1 (see Exercise 4).
(b) Show that 2b/3 = µ(X).
(c) Show that 2a/5 + 2c/3 = σ²(X).
(d) Find a, b, and c if µ(X) = 0, σ²(X) = 1/15, and sketch the graph of f_X.
(e) Find a, b, and c if µ(X) = 0, σ²(X) = 1/2, and sketch the graph of f_X.

6 Let T be a random variable with range [0, ∞) and f_T its density function. Find µ(T) and σ²(T) if, for t < 0, f_T(t) = 0, and for t > 0,
(a) f_T(t) = 3e^{−3t}.
(b) f_T(t) = 9te^{−3t}.
(c) f_T(t) = 3/(1 + t)⁴.

7 Let X be a random variable with density function f_X. Show, using elementary calculus, that the function

φ(a) = E((X − a)²)

takes its minimum value when a = µ(X), and in that case φ(a) = σ²(X).

8 Let X be a random variable with mean µ and variance σ². Let Y = aX² + bX + c. Find the expected value of Y.

9 Let X, Y, and Z be independent random variables, each with mean µ and variance σ².
(a) Find the expected value and variance of S = X + Y + Z.
(b) Find the expected value and variance of A = (1/3)(X + Y + Z).
(c) Find the expected value of S² and A².

10 Let X and Y be independent random variables with uniform density functions on [0, 1]. Find
(a) E(|X − Y|).
(b) E(max(X, Y)).
(c) E(min(X, Y)).
(d) E(X² + Y²).
(e) E((X + Y)²).
11 The Pilsdorff Beer Company runs a fleet of trucks along the 100-mile road from Hangtown to Dry Gulch. The trucks are old, and are apt to break down at any point along the road with equal probability. Where should the company locate a garage so as to minimize the expected distance from a typical breakdown to the garage? In other words, if X is a random variable giving the location of the breakdown, measured, say, from Hangtown, and b gives the location of the garage, what choice of b minimizes E(|X − b|)? Now suppose X is not distributed uniformly over [0, 100], but instead has density function f_X(x) = 2x/10,000. Then what choice of b minimizes E(|X − b|)?

12 Find E(XY), where X and Y are independent random variables which are uniform on [0, 1]. Then verify your answer by simulation.

13 Let X be a random variable that takes on nonnegative values and has distribution function F(x). Show that

E(X) = ∫_0^∞ (1 − F(x)) dx .

Hint: Integrate by parts.
Illustrate this result by calculating E(X) by this method if X has an exponential distribution F(x) = 1 − e^{−λx} for x ≥ 0, and F(x) = 0 otherwise.

14 Let X be a continuous random variable with density function f_X(x). Show that if

∫_{−∞}^{+∞} x² f_X(x) dx < ∞ ,

then

∫_{−∞}^{+∞} |x| f_X(x) dx < ∞ .

Hint: Except on the interval [−1, 1], the first integrand is greater than the second integrand.

15 Let X be a random variable distributed uniformly over [0, 20]. Define a new random variable Y by Y = ⌊X⌋ (the greatest integer in X). Find the expected value of Y. Do the same for Z = ⌊X + .5⌋. Compute E
This result is correct but quite difficult to prove. Write a program that will allow you to specify the density f_X, and the time t, and simulate this experiment to find N(t)/t. Have your program repeat the experiment 500 times and plot a bar graph for the random outcomes of N(t)/t. From this data, estimate E(N(t)/t) and compare this with 1/E(X). In particular, do this for t = 100 with the following two densities:
(a) f_X = e^{−t}.
(b) f_X = te^{−t}.

17 Let X and Y be random variables. The covariance cov(X, Y) is defined by (see Exercise 6.2.23)

cov(X, Y) = E((X − µ(X))(Y − µ(Y))) .

(a) Show that cov(X, Y) = E(XY) − E(X)E(Y).
(b) Using (a), show that cov(X, Y) = 0, if X and Y are independent. (Caution: the converse is not always true.)
(c) Show that V(X + Y) = V(X) + V(Y) + 2cov(X, Y).

18 Let X and Y be random variables with positive variance. The correlation of X and Y is defined as

ρ(X, Y) = cov(X, Y)/√(V(X)V(Y)) .

(a) Using Exercise 17(c), show that

0 ≤ V(X/σ(X) + Y/σ(Y)) = 2(1 + ρ(X, Y)) .

(b) Now show that

0 ≤ V(X/σ(X) − Y/σ(Y)) = 2(1 − ρ(X, Y)) .

(c) Using (a) and (b), show that

−1 ≤ ρ(X, Y) ≤ 1 .

19 Let X and Y be independent random variables with uniform densities in [0, 1]. Let Z = X + Y and W = X − Y. Find
(a) ρ(X, Y) (see Exercise 18).
(b) ρ(X, Z).
(c) ρ(Y, W).
(d) ρ(Z, W).
*20 When studying certain physiological data, such as heights of fathers and sons, it is often natural to assume that these data (e.g., the heights of the fathers and the heights of the sons) are described by random variables with normal densities. These random variables, however, are not independent but rather are correlated. For example, a two-dimensional standard normal density for correlated random variables has the form

f_{X,Y}(x, y) = (1/(2π√(1 − ρ²))) · e^{−(x² − 2ρxy + y²)/(2(1 − ρ²))} .

(a) Show that X and Y each have standard normal densities.
(b) Show that the correlation of X and Y (see Exercise 18) is ρ.

*21 For correlated random variables X and Y it is natural to ask for the expected value for X given Y. For example, Galton calculated the expected value of the height of a son given the height of the father. He used this to show that tall men can be expected to have sons who are less tall on the average. Similarly, students who do very well on one exam can be expected to do less well on the next exam, and so forth. This is called regression on the mean. To define this conditional expected value, we first define a conditional density of X given Y = y by

f_{X|Y}(x|y) = f_{X,Y}(x, y)/f_Y(y) ,

where f_{X,Y}(x, y) is the joint density of X and Y, and f_Y is the density for Y. Then the conditional expected value of X given Y is

E(X|Y = y) = ∫_a^b x f_{X|Y}(x|y) dx .

For the normal density in Exercise 20, show that the conditional density of f_{X|Y}(x|y) is normal with mean ρy and variance 1 − ρ². From this we see that if X and Y are positively correlated (0 < ρ < 1), and if y > E(Y), then the expected value for X given Y = y will be less than y (i.e., we have regression on the mean).

22 A point Y is chosen at random from [0, 1]. A second point X is then chosen from the interval [0, Y]. Find the density for X. Hint: Calculate f_{X|Y} as in Exercise 21 and then use

f_X(x) = ∫_x^1 f_{X|Y}(x|y) f_Y(y) dy .

Can you also derive your result geometrically?
*23 Let X and V be two standard normal random variables. Let ρ be a real number between −1 and 1.
(a) Let Y = ρX + √(1 − ρ²) V. Show that E(Y) = 0 and V(Y) = 1. We shall see later (see Example 7.5 and Example 10.17) that the sum of two independent normal random variables is again normal. Thus, assuming this fact, we have shown that Y is standard normal.
(b) Using Exercises 17 and 18, show that the correlation of X and Y is ρ.
(c) In Exercise 20, the joint density function f_{X,Y}(x, y) for the random variable (X, Y) is given. Now suppose that we want to know the set of points (x, y) in the xy-plane such that f_{X,Y}(x, y) = C for some constant C. This set of points is called a set of constant density. Roughly speaking, a set of constant density is a set of points where the outcomes (X, Y) are equally likely to fall. Show that for a given C, the set of points of constant density is a curve whose equation is

x² − 2ρxy + y² = D ,

where D is a constant which depends upon C. (This curve is an ellipse.)
(d) One can plot the ellipse in part (c) by using the parametric equations

x = (r cos θ)/√(2(1 − ρ)) + (r sin θ)/√(2(1 + ρ)) ,
y = (r cos θ)/√(2(1 − ρ)) − (r sin θ)/√(2(1 + ρ)) .

Write a program to plot 1000 pairs (X, Y) for ρ = −1/2, 0, 1/2. For each plot, have your program plot the above parametric curves for r = 1, 2, 3.

*24 Following Galton, let us assume that the fathers and sons have heights that are dependent normal random variables. Assume that the average height is 68 inches, the standard deviation is 2.7 inches, and the correlation coefficient is .5 (see Exercises 20 and 21). That is, assume that the heights of the fathers and sons have the form 2.7X + 68 and 2.7Y + 68, respectively, where X and Y are correlated standardized normal random variables, with correlation coefficient .5.
(a) What is the expected height for the son of a father whose height is 72 inches?
(b) Plot a scatter diagram of the heights of 1000 father and son pairs. Hint: You can choose standardized pairs as in Exercise 23 and then plot (2.7X + 68, 2.7Y + 68).
*25 When we have pairs of data (x_i, y_i) that are outcomes of the pairs of dependent random variables X, Y we can estimate the correlation coefficient ρ by

r̄ = Σ_i (x_i − x̄)(y_i − ȳ) / ((n − 1) s_X s_Y) ,

where x̄ and ȳ are the sample means for X and Y, respectively, and s_X and s_Y are the sample standard deviations for X and Y (see Exercise 6.2.17). Write a program to compute the sample means, variances, and correlation for such dependent data. Use your program to compute these quantities for Galton's data on heights of parents and children given in Appendix B.
Plot the equal density ellipses as defined in Exercise 23 for r = 4, 6, and 8, and on the same graph print the values that appear in the table at the appropriate points. For example, print 12 at the point (70.5, 68.2), indicating that there were 12 cases where the parent's height was 70.5 and the child's was 68.2. See if Galton's data is consistent with the equal density ellipses.

26 (from Hamming²⁵) Suppose you are standing on the bank of a straight river.
(a) Choose, at random, a direction which will keep you on dry land, and walk 1 km in that direction. Let P denote your position. What is the expected distance from P to the river?
(b) Now suppose you proceed as in part (a), but when you get to P, you pick a random direction (from among all directions) and walk 1 km. What is the probability that you will reach the river before the second walk is completed?

27 (from Hamming²⁶) A game is played as follows: A random number X is chosen uniformly from [0, 1]. Then a sequence Y_1, Y_2, . . . of random numbers is chosen independently and uniformly from [0, 1]. The game ends the first time that Y_i > X. You are then paid (i − 1) dollars. What is a fair entrance fee for this game?

28 A long needle of length L much bigger than 1 is dropped on a grid with horizontal and vertical lines one unit apart. Show that the average number a of lines crossed is approximately

a = 4L/π .

²⁵R. W. Hamming, The Art of Probability for Scientists and Engineers (Redwood City: Addison-Wesley, 1991), p. 192.
²⁶Ibid., p. 205.
Chapter 7

Sums of Independent Random Variables

7.1 Sums of Discrete Random Variables

In this chapter we turn to the important question of determining the distribution of a sum of independent random variables in terms of the distributions of the individual constituents. In this section we consider only sums of discrete random variables, reserving the case of continuous random variables for the next section.

We consider here only random variables whose values are integers. Their distribution functions are then defined on these integers. We shall find it convenient to assume here that these distribution functions are defined for all integers, by defining them to be 0 where they are not otherwise defined.

Convolutions

Suppose X and Y are two independent discrete random variables with distribution functions m_1(x) and m_2(x). Let Z = X + Y. We would like to determine the distribution function m_3(x) of Z. To do this, it is enough to determine the probability that Z takes on the value z, where z is an arbitrary integer. Suppose that X = k, where k is some integer. Then Z = z if and only if Y = z − k. So the event Z = z is the union of the pairwise disjoint events

(X = k) and (Y = z − k) ,

where k runs over the integers. Since these events are pairwise disjoint, we have

P(Z = z) = Σ_{k=−∞}^{∞} P(X = k) · P(Y = z − k) .

Thus, we have found the distribution function of the random variable Z. This leads to the following definition.
Definition 7.1 Let X and Y be two independent integer-valued random variables, with distribution functions m_1(x) and m_2(x) respectively. Then the convolution of m_1(x) and m_2(x) is the distribution function m_3 = m_1 ∗ m_2 given by

m_3(j) = Σ_k m_1(k) · m_2(j − k) ,

for j = . . . , −2, −1, 0, 1, 2, . . . . The function m_3(x) is the distribution function of the random variable Z = X + Y. □

It is easy to see that the convolution operation is commutative, and it is straightforward to show that it is also associative.

Now let S_n = X_1 + X_2 + · · · + X_n be the sum of n independent random variables of an independent trials process with common distribution function m defined on the integers. Then the distribution function of S_1 is m. We can write

S_n = S_{n−1} + X_n .

Thus, since we know the distribution function of X_n is m, we can find the distribution function of S_n by induction.

Example 7.1 A die is rolled twice. Let X_1 and X_2 be the outcomes, and let S_2 = X_1 + X_2 be the sum of these outcomes. Then X_1 and X_2 have the common distribution function

m = ( 1    2    3    4    5    6
      1/6  1/6  1/6  1/6  1/6  1/6 ) .

The distribution function of S_2 is then the convolution of this distribution with itself. Thus,

P(S_2 = 2) = m(1)m(1) = 1/6 · 1/6 = 1/36 ,
P(S_2 = 3) = m(1)m(2) + m(2)m(1) = 1/6 · 1/6 + 1/6 · 1/6 = 2/36 ,
P(S_2 = 4) = m(1)m(3) + m(2)m(2) + m(3)m(1) = 1/6 · 1/6 + 1/6 · 1/6 + 1/6 · 1/6 = 3/36 .

Continuing in this way we would find P(S_2 = 5) = 4/36, P(S_2 = 6) = 5/36, P(S_2 = 7) = 6/36, P(S_2 = 8) = 5/36, P(S_2 = 9) = 4/36, P(S_2 = 10) = 3/36, P(S_2 = 11) = 2/36, and P(S_2 = 12) = 1/36.

The distribution for S_3 would then be the convolution of the distribution for S_2 with the distribution for X_3. Thus

P(S_3 = 3) = P(S_2 = 2)P(X_3 = 1)
= 1/36 · 1/6 = 1/216 ,
P(S_3 = 4) = P(S_2 = 3)P(X_3 = 1) + P(S_2 = 2)P(X_3 = 2)
= 2/36 · 1/6 + 1/36 · 1/6 = 3/216 ,

and so forth.

This is clearly a tedious job, and a program should be written to carry out this calculation. To do this we first write a program to form the convolution of two densities p and q and return the density r. We can then write a program to find the density for the sum S_n of n independent random variables with a common density p, at least in the case that the random variables have a finite number of possible values. Running this program for the example of rolling a die n times for n = 10, 20, 30 results in the distributions shown in Figure 7.1. We see that, as in the case of Bernoulli trials, the distributions become bell-shaped. We shall discuss in Chapter 9 a very general theorem called the Central Limit Theorem that will explain this phenomenon. □

Example 7.2 A well-known method for evaluating a bridge hand is: an ace is assigned a value of 4, a king 3, a queen 2, and a jack 1. All other cards are assigned a value of 0. The point count of the hand is then the sum of the values of the cards in the hand. (It is actually more complicated than this, taking into account voids in suits, and so forth, but we consider here this simplified form of the point count.) If a card is dealt at random to a player, then the point count for this card has distribution

p_X = ( 0      1     2     3     4
        36/52  4/52  4/52  4/52  4/52 ) .

Let us regard the total hand of 13 cards as 13 independent trials with this common distribution. (Again this is not quite correct because we assume here that we are always choosing a card from a full deck.) Then the distribution for the point count C for the hand can be found from the program NFoldConvolution by using the distribution for a single card and choosing n = 13. A player with a point count of 13 or more is said to have an opening bid. The probability of having an opening bid is then

P(C ≥ 13) .
Since we have the distribution of C, it is easy to compute this probability. Doing this we find that

P(C ≥ 13) = .2845 ,

so that about one in four hands should be an opening bid according to this simplified model. A more realistic discussion of this problem can be found in Epstein, The Theory of Gambling and Statistical Logic.¹ □

¹R. A. Epstein, The Theory of Gambling and Statistical Logic, rev. ed. (New York: Academic Press, 1977).
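A short sketch of the convolution programs referred to above (our own Python illustration; the function names `convolve` and `n_fold_convolve` are ours, standing in for the book's Convolution and NFoldConvolution programs):

```python
from collections import defaultdict

def convolve(p, q):
    """Distribution of X + Y for independent X ~ p, Y ~ q (dicts value -> prob)."""
    r = defaultdict(float)
    for x, px in p.items():
        for y, qy in q.items():
            r[x + y] += px * qy
    return dict(r)

def n_fold_convolve(p, n):
    """Distribution of the sum of n independent copies of p."""
    r = p
    for _ in range(n - 1):
        r = convolve(r, p)
    return r

# Example 7.1: sums of die rolls.
die = {k: 1 / 6 for k in range(1, 7)}
s2 = convolve(die, die)
s3 = n_fold_convolve(die, 3)
print(s2[7], s3[3])          # 6/36 and 1/216

# Example 7.2: 13-fold convolution of the single-card point count.
card = {0: 36 / 52, 1: 4 / 52, 2: 4 / 52, 3: 4 / 52, 4: 4 / 52}
c = n_fold_convolve(card, 13)
opening = sum(prob for v, prob in c.items() if v >= 13)
print(opening)               # about .28
```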
[Figure 7.1: Density of S_n for rolling a die n times; panels for n = 10, 20, 30.]
For certain special distributions it is possible to find an expression for the distribution that results from convoluting the distribution with itself n times.

The convolution of two binomial distributions, one with parameters m and p and the other with parameters n and p, is a binomial distribution with parameters (m + n) and p. This fact follows easily from a consideration of the experiment which consists of first tossing a coin m times, and then tossing it n more times.

The convolution of k geometric distributions with common parameter p is a negative binomial distribution with parameters p and k. This can be seen by considering the experiment which consists of tossing a coin until the kth head appears.

Exercises

1 A die is rolled three times. Find the probability that the sum of the outcomes is
(a) greater than 9.
(b) an odd number.

2 The price of a stock on a given trading day changes according to the distribution

p_X = ( −1   0    1    2
        1/4  1/2  1/8  1/8 ) .

Find the distribution for the change in stock price after two (independent) trading days.

3 Let X_1 and X_2 be independent random variables with common distribution

p_X = ( 0    1    2
        1/8  3/8  1/2 ) .

Find the distribution of the sum X_1 + X_2.

4 In one play of a certain game you win an amount X with distribution

p_X = ( 1    2    3
        1/4  1/4  1/2 ) .

Using the program NFoldConvolution find the distribution for your total winnings after ten (independent) plays. Plot this distribution.

5 Consider the following two experiments: the first has outcome X taking on the values 0, 1, and 2 with equal probabilities; the second results in an (independent) outcome Y taking on the value 3 with probability 1/4 and 4 with probability 3/4. Find the distribution of
(a) Y + X.
(b) Y − X.
6 People arrive at a queue according to the following scheme: During each minute of time either 0 or 1 person arrives. The probability that 1 person arrives is p and that no person arrives is q = 1 − p. Let C_r be the number of customers arriving in the first r minutes. Consider a Bernoulli trials process with a success if a person arrives in a unit time and failure if no person arrives in a unit time. Let T_r be the number of failures before the rth success.
(a) What is the distribution for T_r?
(b) What is the distribution for C_r?
(c) Find the mean and variance for the number of customers arriving in the first r minutes.

7 (a) A die is rolled three times with outcomes X_1, X_2, and X_3. Let Y_3 be the maximum of the values obtained. Show that

P(Y_3 ≤ j) = P(X_1 ≤ j)³ .

Use this to find the distribution of Y_3. Does Y_3 have a bell-shaped distribution?
(b) Now let Y_n be the maximum value when n dice are rolled. Find the distribution of Y_n. Is this distribution bell-shaped for large values of n?

8 A baseball player is to play in the World Series. Based upon his season play, you estimate that if he comes to bat four times in a game the number of hits he will get has a distribution

p_X = ( 0   1   2   3   4
        .4  .2  .2  .1  .1 ) .

Assume that the player comes to bat four times in each game of the series.
(a) Let X denote the number of hits that he gets in a series. Using the program NFoldConvolution, find the distribution of X for each of the possible series lengths: four-game, five-game, six-game, seven-game.
(b) Using one of the distributions found in part (a), find the probability that his batting average exceeds .400 in a four-game series. (The batting average is the number of hits divided by the number of times at bat.)
(c) Given the distribution p_X, what is his long-term batting average?

9 Prove that you cannot load two dice in such a way that the probabilities for any sum from 2 to 12 are the same.
(Be sure to consider the case where one or more sides turn up with probability zero.)

10 (Lévy²) Assume that n is an integer, not prime. Show that you can find two distributions a and b on the nonnegative integers such that the convolution of

²See M. Krasner and B. Ranulac, "Sur une Propriété des Polynomes de la Division du Circle"; and the following note by J. Hadamard, in C. R. Acad. Sci., vol. 204 (1937), pp. 397–399.
a and b is the equiprobable distribution on the set 0, 1, 2, . . . , n − 1. If n is prime this is not possible, but the proof is not so easy. (Assume that neither a nor b is concentrated at 0.)

11 Assume that you are playing craps with dice that are loaded in the following way: faces two, three, four, and five all come up with the same probability (1/6) + r. Faces one and six come up with probability (1/6) − 2r, with 0 < r < .02. Write a computer program to find the probability of winning at craps with these dice, and using your program find which values of r make craps a favorable game for the player with these dice.

7.2 Sums of Continuous Random Variables

In this section we consider the continuous version of the problem posed in the previous section: How are sums of independent random variables distributed?

Convolutions

Definition 7.2 Let X and Y be two continuous random variables with density functions f(x) and g(y), respectively. Assume that both f(x) and g(y) are defined for all real numbers. Then the convolution f ∗ g of f and g is the function given by

(f ∗ g)(z) = ∫_{−∞}^{+∞} f(z − y)g(y) dy = ∫_{−∞}^{+∞} g(z − x)f(x) dx . □

This definition is analogous to the definition, given in Section 7.1, of the convolution of two distribution functions. Thus it should not be surprising that if X and Y are independent, then the density of their sum is the convolution of their densities. This fact is stated as a theorem below, and its proof is left as an exercise (see Exercise 1).

Theorem 7.1 Let X and Y be two independent random variables with density functions f_X(x) and f_Y(y) defined for all x. Then the sum Z = X + Y is a random variable with density function f_Z(z), where f_Z is the convolution of f_X and f_Y. □

To get a better understanding of this important result, we will look at some examples.
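Before turning to the worked examples, the convolution integral of Definition 7.2 can be explored numerically. Below is a grid-based sketch (our own illustration; the uniform test density, the integration window, and the step size are all arbitrary choices) that approximates (f ∗ g)(z) with a midpoint rule:

```python
# Midpoint-rule approximation of the convolution integral
# (f * g)(z) = integral over y of f(z - y) g(y) dy.
def convolution(f, g, z, a=-2.0, b=3.0, n=2000):
    h = (b - a) / n
    return sum(f(z - (a + (i + 0.5) * h)) * g(a + (i + 0.5) * h)
               for i in range(n)) * h

# Test density: uniform on [0, 1].
uniform = lambda x: 1.0 if 0 <= x <= 1 else 0.0

peak = convolution(uniform, uniform, 1.0)   # value of f*g at z = 1
half = convolution(uniform, uniform, 0.5)   # value of f*g at z = 0.5
print(peak, half)
```

For this test density the approximation reproduces the triangular shape derived in the next example: the value 1.0 at z = 1 and 0.5 at z = 0.5.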
Sum of Two Independent Uniform Random Variables

Example 7.3 Suppose we choose independently two numbers at random from the interval [0, 1] with uniform probability density. What is the density of their sum?

Let X and Y be random variables describing our choices and Z = X + Y their sum. Then we have

f_X(x) = f_Y(x) = { 1 if 0 ≤ x ≤ 1,
                    0 otherwise;

and the density function for the sum is given by

f_Z(z) = ∫_{−∞}^{+∞} f_X(z − y)f_Y(y) dy .

Since f_Y(y) = 1 if 0 ≤ y ≤ 1 and 0 otherwise, this becomes

f_Z(z) = ∫_0^1 f_X(z − y) dy .

Now the integrand is 0 unless 0 ≤ z − y ≤ 1 (i.e., unless z − 1 ≤ y ≤ z) and then it is 1. So if 0 ≤ z ≤ 1, we have

f_Z(z) = ∫_0^z dy = z ,

while if 1 < z ≤ 2, we have

f_Z(z) = ∫_{z−1}^1 dy = 2 − z ,

and if z < 0 or z > 2 we have f_Z(z) = 0 (see Figure 7.2). Hence,

f_Z(z) = { z,      if 0 ≤ z ≤ 1,
           2 − z,  if 1 < z ≤ 2,
           0,      otherwise.

Note that this result agrees with that of Example 2.4. □

Sum of Two Independent Exponential Random Variables

Example 7.4 Suppose we choose two numbers at random from the interval [0, ∞) with an exponential density with parameter λ. What is the density of their sum?

Let X, Y, and Z = X + Y denote the relevant random variables, and f_X, f_Y, and f_Z their densities. Then

f_X(x) = f_Y(x) = { λe^{−λx}, if x ≥ 0,
                    0,        otherwise;
[Figure 7.2: Convolution of two uniform densities.]
[Figure 7.3: Convolution of two exponential densities with λ = 1.]

and so, if z > 0,

f_Z(z) = ∫_{−∞}^{+∞} f_X(z − y)f_Y(y) dy
       = ∫_0^z λe^{−λ(z−y)} λe^{−λy} dy
       = ∫_0^z λ²e^{−λz} dy
       = λ²ze^{−λz} ,

while if z < 0, f_Z(z) = 0 (see Figure 7.3). Hence,

f_Z(z) = { λ²ze^{−λz}, if z ≥ 0,
           0,          otherwise. □
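Both of the densities just derived are easy to check by simulation. The following Monte Carlo sketch (our own illustration, with an arbitrary seed, sample size, and the choice λ = 1) compares samples against the closed forms of Examples 7.3 and 7.4:

```python
import math
import random

random.seed(1)
n = 100_000

# Example 7.3: Z = X + Y with X, Y uniform on [0, 1].  The triangular
# density gives P(Z <= 1) = 1/2.
zu = [random.random() + random.random() for _ in range(n)]
p_half = sum(z <= 1 for z in zu) / n

# Example 7.4: Z = X + Y with X, Y exponential, lambda = 1, so
# f_Z(z) = z e^{-z}; then E(Z) = 2 and P(Z <= 2) = 1 - 3 e^{-2}.
ze = [random.expovariate(1.0) + random.expovariate(1.0) for _ in range(n)]
mean = sum(ze) / n
p_two = sum(z <= 2 for z in ze) / n
exact = 1 - 3 * math.exp(-2)
print(p_half, mean, p_two, exact)
```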
Sum of Two Independent Normal Random Variables

Example 7.5 It is an interesting and important fact that the convolution of two normal densities with means µ1 and µ2 and variances σ1² and σ2² is again a normal density, with mean µ1 + µ2 and variance σ1² + σ2². We will show this in the special case that both random variables are standard normal. The general case can be done in the same way, but the calculation is messier. Another way to show the general result is given in Example 10.17.

Suppose X and Y are two independent random variables, each with the standard normal density (see Example 5.8). We have

fX(x) = fY(y) = (1/√(2π)) e^{−x²/2} ,

and so

fZ(z) = (fX ∗ fY)(z)
      = (1/2π) ∫_{−∞}^{+∞} e^{−(z−y)²/2} e^{−y²/2} dy
      = (1/2π) e^{−z²/4} ∫_{−∞}^{+∞} e^{−(y−z/2)²} dy
      = (1/2π) e^{−z²/4} √π [ (1/√π) ∫_{−∞}^{∞} e^{−(y−z/2)²} dy ] .

The expression in the brackets equals 1, since it is the integral of a normal density function with mean z/2 and σ = √2/2. So, we have

fZ(z) = (1/√(4π)) e^{−z²/4} . 2

Sum of Two Independent Cauchy Random Variables

Example 7.6 Choose two numbers at random from the interval (−∞, +∞) with the Cauchy density with parameter a = 1 (see Example 5.10). Then

fX(x) = fY(x) = 1/(π(1 + x²)) ,

and Z = X + Y has density

fZ(z) = (1/π²) ∫_{−∞}^{+∞} (1/(1 + (z − y)²)) (1/(1 + y²)) dy .
7.2. SUMS OF CONTINUOUS RANDOM VARIABLES 295 This integral requires some effort, and we give here only the result (see Section 10.3, or Dwass3): fZ(z) = 2 π(4 + z2) . Now, suppose that we ask for the density function of the average A = (1/2)(X + Y ) of X and Y . Then A = (1/2)Z. Exercise 5.2.19 shows that if U and V are two continuous random variables with density functions fU(x) and fV (x), respectively, and if V = aU, then fV (x) = 1 a  fU x a  . Thus, we have fA(z) = 2fZ(2z) = 1 π(1 + z2) . Hence, the density function for the average of two random variables, each having a Cauchy density, is again a random variable with a Cauchy density; this remarkable property is a peculiarity of the Cauchy density. One consequence of this is if the error in a certain measurement process had a Cauchy density and you averaged a number of measurements, the average could not be expected to be any more accurate than any one of your individual measurements! 2 Rayleigh Density Example 7.7 Suppose X and Y are two independent standard normal random variables. Now suppose we locate a point P in the xy-plane with coordinates (X, Y ) and ask: What is the density of the square of the distance of P from the origin? (We have already simulated this problem in Example 5.9.) Here, with the preceding notation, we have fX(x) = fY (x) = 1 √ 2π e−x2/2 . Moreover, if X2 denotes the square of X, then (see Theorem 5.1 and the discussion following) fX2(r) =  1 2√r(fX(√r) + fX(−√r)) if r > 0, 0 otherwise. =  1 √ 2πr(e−r/2) if r > 0, 0 otherwise. 3M. Dwass, “On the Convolution of Cauchy Distributions,” American Mathematical Monthly, vol. 92, no. 1, (1985), pp. 55–57; see also R. Nelson, letters to the Editor, ibid., p. 679.
This is a gamma density with λ = 1/2, β = 1/2 (see Example 7.4). Now let R² = X² + Y². Then

f_{R²}(r) = ∫_{−∞}^{+∞} f_{X²}(r − s) f_{Y²}(s) ds
          = (1/4π) ∫_{−∞}^{+∞} e^{−(r−s)/2} ((r − s)/2)^{−1/2} e^{−s/2} (s/2)^{−1/2} ds
          = (1/2) e^{−r/2}, if r ≥ 0,

and f_{R²}(r) = 0 otherwise. Hence, R² has a gamma density with λ = 1/2, β = 1. We can interpret this result as giving the density for the square of the distance of P from the center of a target if its coordinates are normally distributed. The density of the random variable R is obtained from that of R² in the usual way (see Theorem 5.1), and we find

f_R(r) = (1/2) e^{−r²/2} · 2r = r e^{−r²/2}, if r ≥ 0, and 0 otherwise.

Physicists will recognize this as a Rayleigh density. Our result here agrees with our simulation in Example 5.9. 2

Chi-Squared Density

More generally, the same method shows that the sum of the squares of n independent normally distributed random variables with mean 0 and standard deviation 1 has a gamma density with λ = 1/2 and β = n/2. Such a density is called a chi-squared density with n degrees of freedom. This density was introduced in Section 4.3. In Example 5.10, we used this density to test the hypothesis that two traits were independent.

Another important use of the chi-squared density is in comparing experimental data with a theoretical discrete distribution, to see whether the data supports the theoretical model. More specifically, suppose that we have an experiment with a finite set of outcomes. If the set of outcomes is countable, we group them into finitely many sets of outcomes. We propose a theoretical distribution which we think will model the experiment well. We obtain some data by repeating the experiment a number of times. Now we wish to check how well the theoretical distribution fits the data.

Let X be the random variable which represents a theoretical outcome in the model of the experiment, and let m(x) be the distribution function of X.
In a manner similar to what was done in Example 5.10, we calculate the value of the expression

V = Σ_x (o_x − n · m(x))² / (n · m(x)) ,

where the sum runs over all possible outcomes x, n is the number of data points, and o_x denotes the number of outcomes of type x observed in the data. Then
Outcome   Observed Frequency
   1             15
   2              8
   3              7
   4              5
   5              7
   6             18

Table 7.1: Observed data.

for moderate or large values of n, the quantity V is approximately chi-squared distributed, with ν − 1 degrees of freedom, where ν represents the number of possible outcomes. The proof of this is beyond the scope of this book, but we will illustrate the reasonableness of this statement in the next example. If the value of V is very large, when compared with the appropriate chi-squared density function, then we would tend to reject the hypothesis that the model is an appropriate one for the experiment at hand. We now give an example of this procedure.

Example 7.8 Suppose we are given a single die. We wish to test the hypothesis that the die is fair. Thus, our theoretical distribution is the uniform distribution on the integers between 1 and 6. So, if we roll the die n times, the expected number of data points of each type is n/6. Thus, if oi denotes the actual number of data points of type i, for 1 ≤ i ≤ 6, then the expression

V = Σ_{i=1}^{6} (o_i − n/6)² / (n/6)

is approximately chi-squared distributed with 5 degrees of freedom.

Now suppose that we actually roll the die 60 times and obtain the data in Table 7.1. If we calculate V for this data, we obtain the value 13.6. The graph of the chi-squared density with 5 degrees of freedom is shown in Figure 7.4. One sees that values as large as 13.6 are rarely taken on by V if the die is fair, so we would reject the hypothesis that the die is fair. (When using this test, a statistician will reject the hypothesis if the data gives a value of V which is larger than 95% of the values one would expect to obtain if the hypothesis is true.) In Figure 7.5, we show the results of rolling a die 60 times, then calculating V , and then repeating this experiment 1000 times. The program that performs these calculations is called DieTest.
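A computation of this kind can be sketched as follows (a re-implementation in the spirit of DieTest; the structure and names are ours, not the text's actual program):

```python
import random

# Compute V = Σ (o_i - n/6)² / (n/6) for n rolls of a die, and see how
# rarely a fair die produces values as large as the 13.6 of Example 7.8.

def chi_squared_statistic(counts, n):
    expected = n / 6
    return sum((o - expected) ** 2 / expected for o in counts)

# Observed frequencies from Table 7.1 (60 rolls).
observed = [15, 8, 7, 5, 7, 18]
v = chi_squared_statistic(observed, 60)
print(v)  # 13.6, the value computed in Example 7.8

random.seed(0)

def simulated_v(n=60):
    counts = [0] * 6
    for _ in range(n):
        counts[random.randrange(6)] += 1
    return chi_squared_statistic(counts, n)

vs = [simulated_v() for _ in range(1000)]
frac_as_large = sum(1 for x in vs if x >= 13.6) / len(vs)
print(frac_as_large)  # a small fraction, consistent with rejecting fairness
```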
We have superimposed the chi-squared density with 5 degrees of freedom; one can see that the data values fit the curve fairly well, which supports the statement that the chi-squared density is the correct one to use. 2 So far we have looked at several important special cases for which the convolution integral can be evaluated explicitly. In general, the convolution of two continuous densities cannot be evaluated explicitly, and we must resort to numerical methods. Fortunately, these prove to be remarkably effective, at least for bounded densities.
Figure 7.4: Chi-squared density with 5 degrees of freedom.

Figure 7.5: Rolling a fair die (1000 experiments; 60 rolls per experiment).
Figure 7.6: Convolution of n uniform densities (n = 2, 4, 6, 8, 10).

Independent Trials

We now consider briefly the distribution of the sum of n independent random variables, all having the same density function. If X1, X2, . . . , Xn are these random variables and Sn = X1 + X2 + · · · + Xn is their sum, then we will have

f_{Sn}(x) = (f_{X1} ∗ f_{X2} ∗ · · · ∗ f_{Xn})(x) ,

where the right-hand side is an n-fold convolution. It is possible to calculate this density for general values of n in certain simple cases.

Example 7.9 Suppose the Xi are uniformly distributed on the interval [0, 1]. Then

f_{Xi}(x) = 1, if 0 ≤ x ≤ 1, and 0 otherwise,

and f_{Sn}(x) is given by the formula4

f_{Sn}(x) = (1/(n − 1)!) Σ_{0 ≤ j ≤ x} (−1)^j (n choose j) (x − j)^{n−1}, if 0 ≤ x ≤ n,

and f_{Sn}(x) = 0 otherwise.
Figure 7.7: Convolution of n standard normal densities (n = 5, 10, 15, 20, 25).

If the Xi all have the standard normal density, so that

f_{Xi}(x) = (1/√(2π)) e^{−x²/2} ,

then

f_{Sn}(x) = (1/√(2πn)) e^{−x²/2n} .

Here the density f_{Sn} for n = 5, 10, 15, 20, 25 is shown in Figure 7.7.

If the Xi are all exponentially distributed, with mean 1/λ, then

f_{Xi}(x) = λe^{−λx} ,

and

f_{Sn}(x) = λe^{−λx} (λx)^{n−1} / (n − 1)! .

In this case the density f_{Sn} for n = 2, 4, 6, 8, 10 is shown in Figure 7.8. 2

Exercises

1 Let X and Y be independent real-valued random variables with density functions fX(x) and fY (y), respectively. Show that the density function of the sum X + Y is the convolution of the functions fX(x) and fY (y). Hint: Let X̄ be the joint random variable (X, Y ). Then the joint density function of X̄ is fX(x)fY (y), since X and Y are independent. Now compute the probability that X + Y ≤ z, by integrating the joint density function over the appropriate region in the plane. This gives the cumulative distribution function of Z. Now differentiate this function with respect to z to obtain the density function of Z.

2 Let X and Y be independent random variables defined on the space Ω, with density functions fX and fY , respectively. Suppose that Z = X + Y . Find the density fZ of Z if
7.2. SUMS OF CONTINUOUS RANDOM VARIABLES 301 5 10 15 20 0.05 0.1 0.15 0.2 0.25 0.3 0.35 n = 2 n = 4 n = 6 n = 8 n = 10 Figure 7.8: Convolution of n exponential densities with λ = 1. (a) fX(x) = fY (x) =  1/2, if −1 ≤x ≤+1, 0, otherwise. (b) fX(x) = fY (x) =  1/2, if 3 ≤x ≤5, 0, otherwise. (c) fX(x) =  1/2, if −1 ≤x ≤1, 0, otherwise. fY (x) =  1/2, if 3 ≤x ≤5, 0, otherwise. (d) What can you say about the set E = { z : fZ(z) > 0 } in each case? 3 Suppose again that Z = X + Y . Find fZ if (a) fX(x) = fY (x) =  x/2, if 0 < x < 2, 0, otherwise. (b) fX(x) = fY (x) =  (1/2)(x −3), if 3 < x < 5, 0, otherwise. (c) fX(x) =  1/2, if 0 < x < 2, 0, otherwise,
302 CHAPTER 7. SUMS OF RANDOM VARIABLES fY (x) =  x/2, if 0 < x < 2, 0, otherwise. (d) What can you say about the set E = { z : fZ(z) > 0 } in each case? 4 Let X, Y , and Z be independent random variables with fX(x) = fY (x) = fZ(x) =  1, if 0 < x < 1, 0, otherwise. Suppose that W = X + Y + Z. Find fW directly, and compare your answer with that given by the formula in Example 7.9. Hint: See Example 7.3. 5 Suppose that X and Y are independent and Z = X + Y . Find fZ if (a) fX(x) =  λe−λx, if x > 0, 0, otherwise. fY (x) =  µe−µx, if x > 0, 0, otherwise. (b) fX(x) =  λe−λx, if x > 0, 0, otherwise. fY (x) =  1, if 0 < x < 1, 0, otherwise. 6 Suppose again that Z = X + Y . Find fZ if fX(x) = 1 √ 2πσ1 e−(x−µ1)2/2σ2 1 fY (x) = 1 √ 2πσ2 e−(x−µ2)2/2σ2 2 . *7 Suppose that R2 = X2 + Y 2. Find fR2 and fR if fX(x) = 1 √ 2πσ1 e−(x−µ1)2/2σ2 1 fY (x) = 1 √ 2πσ2 e−(x−µ2)2/2σ2 2 . 8 Suppose that R2 = X2 + Y 2. Find fR2 and fR if fX(x) = fY (x) =  1/2, if −1 ≤x ≤1, 0, otherwise. 9 Assume that the service time for a customer at a bank is exponentially dis- tributed with mean service time 2 minutes. Let X be the total service time for 10 customers. Estimate the probability that X > 22 minutes.
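For Exercise 9, a simulation sketch of the kind the chapter's programs perform (the setup and names below are ours; the total service time for 10 customers is a sum of 10 independent exponentials with mean 2):

```python
import random

# Estimate P(X > 22) where X is the total service time for 10 customers,
# each exponentially distributed with mean 2 minutes (rate 1/2).

random.seed(2)
n_trials = 100_000
count = 0
for _ in range(n_trials):
    total = sum(random.expovariate(1 / 2) for _ in range(10))  # mean 2 each
    if total > 22:
        count += 1

p_estimate = count / n_trials
print(p_estimate)  # roughly 0.34
```

The same probability can be checked analytically, since the sum has a gamma density with n = 10 and λ = 1/2.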
7.2. SUMS OF CONTINUOUS RANDOM VARIABLES 303 10 Let X1, X2, . . . , Xn be n independent random variables each of which has an exponential density with mean µ. Let M be the minimum value of the Xj. Show that the density for M is exponential with mean µ/n. Hint: Use cumulative distribution functions. 11 A company buys 100 lightbulbs, each of which has an exponential lifetime of 1000 hours. What is the expected time for the first of these bulbs to burn out? (See Exercise 10.) 12 An insurance company assumes that the time between claims from each of its homeowners’ policies is exponentially distributed with mean µ. It would like to estimate µ by averaging the times for a number of policies, but this is not very practical since the time between claims is about 30 years. At Galambos’5 suggestion the company puts its customers in groups of 50 and observes the time of the first claim within each group. Show that this provides a practical way to estimate the value of µ. 13 Particles are subject to collisions that cause them to split into two parts with each part a fraction of the parent. Suppose that this fraction is uniformly distributed between 0 and 1. Following a single particle through several split- tings we obtain a fraction of the original particle Zn = X1 · X2 · . . . · Xn where each Xj is uniformly distributed between 0 and 1. Show that the density for the random variable Zn is fn(z) = 1 (n −1)!(−log z)n−1. Hint: Show that Yk = −log Xk is exponentially distributed. Use this to find the density function for Sn = Y1 +Y2 +· · ·+Yn, and from this the cumulative distribution and density of Zn = e−Sn. 14 Assume that X1 and X2 are independent random variables, each having an exponential density with parameter λ. Show that Z = X1 −X2 has density fZ(z) = (1/2)λe−λ|z| . 15 Suppose we want to test a coin for fairness. We flip the coin n times and record the number of times X0 that the coin turns up tails and the number of times X1 = n −X0 that the coin turns up heads. 
Now we set

Z = Σ_{i=0}^{1} (X_i − n/2)² / (n/2) .

Then for a fair coin Z has approximately a chi-squared distribution with 2 − 1 = 1 degree of freedom. Verify this by computer simulation first for a fair coin (p = 1/2) and then for a biased coin (p = 1/3).

5J. Galambos, Introductory Probability Theory (New York: Marcel Dekker, 1984), p. 159.
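A sketch of the simulation requested in Exercise 15 (our own helper names, Python assumed):

```python
import random

# Compute Z = Σ_{i=0}^{1} (X_i - n/2)² / (n/2) for repeated batches of
# n coin flips, for a fair coin and for a biased coin.

def z_statistic(n, p, rng):
    heads = sum(1 for _ in range(n) if rng.random() < p)
    tails = n - heads
    half = n / 2
    return (tails - half) ** 2 / half + (heads - half) ** 2 / half

rng = random.Random(3)
fair = [z_statistic(100, 1 / 2, rng) for _ in range(2000)]
biased = [z_statistic(100, 1 / 3, rng) for _ in range(2000)]

mean_fair = sum(fair) / len(fair)        # near 1, the chi-squared(1) mean
mean_biased = sum(biased) / len(biased)  # much larger for the biased coin
print(mean_fair, mean_biased)
```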
304 CHAPTER 7. SUMS OF RANDOM VARIABLES 16 Verify your answers in Exercise 2(a) by computer simulation: Choose X and Y from [−1, 1] with uniform density and calculate Z = X + Y . Repeat this experiment 500 times, recording the outcomes in a bar graph on [−2, 2] with 40 bars. Does the density fZ calculated in Exercise 2(a) describe the shape of your bar graph? Try this for Exercises 2(b) and Exercise 2(c), too. 17 Verify your answers to Exercise 3 by computer simulation. 18 Verify your answer to Exercise 4 by computer simulation. 19 The support of a function f(x) is defined to be the set {x : f(x) > 0} . Suppose that X and Y are two continuous random variables with density functions fX(x) and fY (y), respectively, and suppose that the supports of these density functions are the intervals [a, b] and [c, d], respectively. Find the support of the density function of the random variable X + Y . 20 Let X1, X2, . . . , Xn be a sequence of independent random variables, all having a common density function fX with support [a, b] (see Exercise 19). Let Sn = X1 + X2 + · · · + Xn, with density function fSn. Show that the support of fSn is the interval [na, nb]. Hint: Write fSn = fSn−1 ∗fX. Now use Exercise 19 to establish the desired result by induction. 21 Let X1, X2, . . . , Xn be a sequence of independent random variables, all having a common density function fX. Let A = Sn/n be their average. Find fA if (a) fX(x) = (1/ √ 2π)e−x2/2 (normal density). (b) fX(x) = e−x (exponential density). Hint: Write fA(x) in terms of fSn(x).
Chapter 8

Law of Large Numbers

8.1 Law of Large Numbers for Discrete Random Variables

We are now in a position to prove our first fundamental theorem of probability. We have seen that an intuitive way to view the probability of a certain outcome is as the frequency with which that outcome occurs in the long run, when the experiment is repeated a large number of times. We have also defined probability mathematically as a value of a distribution function for the random variable representing the experiment. The Law of Large Numbers, which is a theorem proved about the mathematical model of probability, shows that this model is consistent with the frequency interpretation of probability. This theorem is sometimes called the law of averages. To find out what would happen if this law were not true, see the article by Robert M. Coates.1

Chebyshev Inequality

To discuss the Law of Large Numbers, we first need an important inequality called the Chebyshev Inequality.

Theorem 8.1 (Chebyshev Inequality) Let X be a discrete random variable with expected value µ = E(X), and let ϵ > 0 be any positive real number. Then

P(|X − µ| ≥ ϵ) ≤ V(X)/ϵ² .

Proof. Let m(x) denote the distribution function of X. Then the probability that X differs from µ by at least ϵ is given by

P(|X − µ| ≥ ϵ) = Σ_{|x−µ|≥ϵ} m(x) .

1R. M. Coates, “The Law,” The World of Mathematics, ed. James R. Newman (New York: Simon and Schuster, 1956).
306 CHAPTER 8. LAW OF LARGE NUMBERS

We know that

V(X) = Σ_x (x − µ)² m(x) ,

and this is clearly at least as large as

Σ_{|x−µ|≥ϵ} (x − µ)² m(x) ,

since all the summands are positive and we have restricted the range of summation in the second sum. But this last sum is at least

Σ_{|x−µ|≥ϵ} ϵ² m(x) = ϵ² Σ_{|x−µ|≥ϵ} m(x) = ϵ² P(|X − µ| ≥ ϵ) .

So,

P(|X − µ| ≥ ϵ) ≤ V(X)/ϵ² . 2

Note that X in the above theorem can be any discrete random variable, and ϵ any positive number.

Example 8.1 Let X be any random variable with E(X) = µ and V(X) = σ². Then, if ϵ = kσ, Chebyshev’s Inequality states that

P(|X − µ| ≥ kσ) ≤ σ²/(k²σ²) = 1/k² .

Thus, for any random variable, the probability of a deviation from the mean of more than k standard deviations is ≤ 1/k². If, for example, k = 5, 1/k² = .04. 2

Chebyshev’s Inequality is the best possible inequality in the sense that, for any ϵ > 0, it is possible to give an example of a random variable for which Chebyshev’s Inequality is in fact an equality. To see this, given ϵ > 0, choose X with distribution

pX = ( −ϵ  +ϵ ; 1/2  1/2 ) ,

that is, X takes the values −ϵ and +ϵ with probability 1/2 each. Then E(X) = 0, V(X) = ϵ², and

P(|X − µ| ≥ ϵ) = V(X)/ϵ² = 1 .

We are now prepared to state and prove the Law of Large Numbers.
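To see how conservative the bound of Example 8.1 typically is, one can compare it with an exact tail probability. A sketch (ours, Python assumed) for a binomial random variable:

```python
import math

# Compare Chebyshev's bound V(X)/ε² with the exact tail probability for
# X binomial with n = 100, p = 1/2, so µ = 50 and V(X) = 25.

n, p = 100, 0.5
mu = n * p
var = n * p * (1 - p)

def binom_pmf(k):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

eps = 15  # three standard deviations
exact = sum(binom_pmf(k) for k in range(n + 1) if abs(k - mu) >= eps)
bound = var / eps ** 2
print(exact, bound)  # the bound 1/9 is far above the true tail probability
```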
8.1. DISCRETE RANDOM VARIABLES 307 Law of Large Numbers Theorem 8.2 (Law of Large Numbers) Let X1, X2, . . . , Xn be an independent trials process, with finite expected value µ = E(Xj) and finite variance σ2 = V (Xj). Let Sn = X1 + X2 + · · · + Xn. Then for any ϵ > 0, P  Sn n −µ ≥ϵ  →0 as n →∞. Equivalently, P  Sn n −µ < ϵ  →1 as n →∞. Proof. Since X1, X2, . . . , Xn are independent and have the same distributions, we can apply Theorem 6.9. We obtain V (Sn) = nσ2 , and V (Sn n ) = σ2 n . Also we know that E(Sn n ) = µ . By Chebyshev’s Inequality, for any ϵ > 0, P  Sn n −µ ≥ϵ  ≤σ2 nϵ2 . Thus, for fixed ϵ, P  Sn n −µ ≥ϵ  →0 as n →∞, or equivalently, P  Sn n −µ < ϵ  →1 as n →∞. 2 Law of Averages Note that Sn/n is an average of the individual outcomes, and one often calls the Law of Large Numbers the “law of averages.” It is a striking fact that we can start with a random experiment about which little can be predicted and, by taking averages, obtain an experiment in which the outcome can be predicted with a high degree of certainty. The Law of Large Numbers, as we have stated it, is often called the “Weak Law of Large Numbers” to distinguish it from the “Strong Law of Large Numbers” described in Exercise 15.
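The convergence asserted by the theorem can be watched directly. The following simulation sketch (ours, Python assumed, helper names hypothetical) estimates P(|Sn/n − 1/2| < ϵ) for Bernoulli(1/2) trials and growing n:

```python
import random

# Estimate the fraction of experiments in which the average of n
# Bernoulli(1/2) trials lands within ε = .05 of 1/2.

random.seed(4)

def fraction_within(n, eps=0.05, trials=1000):
    hits = 0
    for _ in range(trials):
        s = sum(random.getrandbits(1) for _ in range(n))
        if abs(s / n - 0.5) < eps:
            hits += 1
    return hits / trials

fracs = {n: fraction_within(n) for n in (10, 100, 1000)}
for n, frac in fracs.items():
    print(n, frac)  # the fraction increases toward 1 as n grows
```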
308 CHAPTER 8. LAW OF LARGE NUMBERS Consider the important special case of Bernoulli trials with probability p for success. Let Xj = 1 if the jth outcome is a success and 0 if it is a failure. Then Sn = X1 + X2 + · · ·+ Xn is the number of successes in n trials and µ = E(X1) = p. The Law of Large Numbers states that for any ϵ > 0 P  Sn n −p < ϵ  →1 as n →∞. The above statement says that, in a large number of repetitions of a Bernoulli experiment, we can expect the proportion of times the event will occur to be near p. This shows that our mathematical model of probability agrees with our frequency interpretation of probability. Coin Tossing Let us consider the special case of tossing a coin n times with Sn the number of heads that turn up. Then the random variable Sn/n represents the fraction of times heads turns up and will have values between 0 and 1. The Law of Large Numbers predicts that the outcomes for this random variable will, for large n, be near 1/2. In Figure 8.1, we have plotted the distribution for this example for increasing values of n. We have marked the outcomes between .45 and .55 by dots at the top of the spikes. We see that as n increases the distribution gets more and more con- centrated around .5 and a larger and larger percentage of the total area is contained within the interval (.45, .55), as predicted by the Law of Large Numbers. Die Rolling Example 8.2 Consider n rolls of a die. Let Xj be the outcome of the jth roll. Then Sn = X1 +X2 +· · ·+Xn is the sum of the first n rolls. This is an independent trials process with E(Xj) = 7/2. Thus, by the Law of Large Numbers, for any ϵ > 0 P  Sn n −7 2 ≥ϵ  →0 as n →∞. An equivalent way to state this is that, for any ϵ > 0, P  Sn n −7 2 < ϵ  →1 as n →∞. 2 Numerical Comparisons It should be emphasized that, although Chebyshev’s Inequality proves the Law of Large Numbers, it is actually a very crude inequality for the probabilities involved. 
However, its strength lies in the fact that it is true for any random variable at all, and it allows us to prove a very powerful theorem. In the following example, we compare the estimates given by Chebyshev’s In- equality with the actual values.
8.1. DISCRETE RANDOM VARIABLES 309 0 0.2 0.4 0.6 0.8 1 0 0.02 0.04 0.06 0.08 0.1 0 0.2 0.4 0.6 0.8 1 0 0.02 0.04 0.06 0.08 0 0.2 0.4 0.6 0.8 1 0 0.02 0.04 0.06 0.08 0.1 0.12 0.14 0 0.2 0.4 0.6 0.8 1 0 0.02 0.04 0.06 0.08 0.1 0.12 0 0.2 0.4 0.6 0.8 1 0 0.05 0.1 0.15 0.2 0.25 0 0.2 0.4 0.6 0.8 1 0 0.025 0.05 0.075 0.1 0.125 0.15 0.175 n=10 n=20 n=40 n=30 n=60 n=100 Figure 8.1: Bernoulli trials distributions.
310 CHAPTER 8. LAW OF LARGE NUMBERS Example 8.3 Let X1, X2, . . . , Xn be a Bernoulli trials process with probability .3 for success and .7 for failure. Let Xj = 1 if the jth outcome is a success and 0 otherwise. Then, E(Xj) = .3 and V (Xj) = (.3)(.7) = .21. If An = Sn n = X1 + X2 + · · · + Xn n is the average of the Xi, then E(An) = .3 and V (An) = V (Sn)/n2 = .21/n. Chebyshev’s Inequality states that if, for example, ϵ = .1, P(|An −.3| ≥.1) ≤ .21 n(.1)2 = 21 n . Thus, if n = 100, P(|A100 −.3| ≥.1) ≤.21 , or if n = 1000, P(|A1000 −.3| ≥.1) ≤.021 . These can be rewritten as P(.2 < A100 < .4) ≥ .79 , P(.2 < A1000 < .4) ≥ .979 . These values should be compared with the actual values, which are (to six decimal places) P(.2 < A100 < .4) ≈ .962549 P(.2 < A1000 < .4) ≈ 1 . The program Law can be used to carry out the above calculations in a systematic way. 2 Historical Remarks The Law of Large Numbers was first proved by the Swiss mathematician James Bernoulli in the fourth part of his work Ars Conjectandi published posthumously in 1713.2 As often happens with a first proof, Bernoulli’s proof was much more difficult than the proof we have presented using Chebyshev’s inequality. Cheby- shev developed his inequality to prove a general form of the Law of Large Numbers (see Exercise 12). The inequality itself appeared much earlier in a work by Bien- aym´e, and in discussing its history Maistrov remarks that it was referred to as the Bienaym´e-Chebyshev Inequality for a long time.3 In Ars Conjectandi Bernoulli provides his reader with a long discussion of the meaning of his theorem with lots of examples. In modern notation he has an event 2J. Bernoulli, The Art of Conjecturing IV, trans. Bing Sung, Technical Report No. 2, Dept. of Statistics, Harvard Univ., 1966 3L. E. Maistrov, Probability Theory: A Historical Approach, trans. and ed. Samual Kotz, (New York: Academic Press, 1974), p. 202
that occurs with probability p but he does not know p. He wants to estimate p by the fraction p̄ of the times the event occurs when the experiment is repeated a number of times. He discusses in detail the problem of estimating, by this method, the proportion of white balls in an urn that contains an unknown number of white and black balls. He would do this by drawing a sequence of balls from the urn, replacing the ball drawn after each draw, and estimating the unknown proportion of white balls in the urn by the proportion of the balls drawn that are white. He shows that, by choosing n large enough, he can obtain any desired accuracy and reliability for the estimate. He also provides a lively discussion of the applicability of his theorem to estimating the probability of dying of a particular disease, of different kinds of weather occurring, and so forth.

In speaking of the number of trials necessary for making a judgement, Bernoulli observes that the “man on the street” believes the “law of averages.”

Further, it cannot escape anyone that for judging in this way about any event at all, it is not enough to use one or two trials, but rather a great number of trials is required. And sometimes the stupidest man—by some instinct of nature per se and by no previous instruction (this is truly amazing)—knows for sure that the more observations of this sort that are taken, the less the danger will be of straying from the mark.4

But he goes on to say that he must contemplate another possibility.

Something further must be contemplated here which perhaps no one has thought about till now.
It certainly remains to be inquired whether after the number of observations has been increased, the probability is increased of attaining the true ratio between the number of cases in which some event can happen and in which it cannot happen, so that this probability finally exceeds any given degree of certainty; or whether the problem has, so to speak, its own asymptote—that is, whether some degree of certainty is given which one can never exceed.5

Bernoulli recognized the importance of this theorem, writing:

Therefore, this is the problem which I now set forth and make known after I have already pondered over it for twenty years. Both its novelty and its very great usefulness, coupled with its just as great difficulty, can exceed in weight and value all the remaining chapters of this thesis.6

Bernoulli concludes his long proof with the remark:

Whence, finally, this one thing seems to follow: that if observations of all events were to be continued throughout all eternity, (and hence the ultimate probability would tend toward perfect certainty), everything in
the world would be perceived to happen in fixed ratios and according to a constant law of alternation, so that even in the most accidental and fortuitous occurrences we would be bound to recognize, as it were, a certain necessity and, so to speak, a certain fate. I do not know whether Plato wished to aim at this in his doctrine of the universal return of things, according to which he predicted that all things will return to their original state after countless ages have past.7

Exercises

1 A fair coin is tossed 100 times. The expected number of heads is 50, and the standard deviation for the number of heads is (100 · 1/2 · 1/2)^{1/2} = 5. What does Chebyshev’s Inequality tell you about the probability that the number of heads that turn up deviates from the expected number 50 by three or more standard deviations (i.e., by at least 15)?

2 Write a program that uses the function binomial(n, p, x) to compute the exact probability that you estimated in Exercise 1. Compare the two results.

3 Write a program to toss a coin 10,000 times. Let Sn be the number of heads in the first n tosses. Have your program print out, after every 1000 tosses, Sn − n/2. On the basis of this simulation, is it correct to say that you can expect heads about half of the time when you toss a coin a large number of times?

4 A 1-dollar bet on craps has an expected winning of −.0141. What does the Law of Large Numbers say about your winnings if you make a large number of 1-dollar bets at the craps table? Does it assure you that your losses will be small? Does it assure you that if n is very large you will lose?

5 Let X be a random variable with E(X) = 0 and V(X) = 1. What integer value k will assure us that P(|X| ≥ k) ≤ .01?

6 Let Sn be the number of successes in n Bernoulli trials with probability p for success on each trial. Show, using Chebyshev’s Inequality, that for any ϵ > 0

P(|Sn/n − p| ≥ ϵ) ≤ p(1 − p)/(nϵ²) .

7 Find the maximum possible value for p(1 − p) if 0 < p < 1.
Using this result and Exercise 6, show that the estimate

P(|Sn/n − p| ≥ ϵ) ≤ 1/(4nϵ²)

is valid for any p.

7ibid., pp. 65–66.
8 A fair coin is tossed a large number of times. Does the Law of Large Numbers assure us that, if n is large enough, with probability > .99 the number of heads that turn up will not deviate from n/2 by more than 100?

9 In Exercise 6.2.15, you showed that, for the hat check problem, the number Sn of people who get their own hats back has E(Sn) = V(Sn) = 1. Using Chebyshev’s Inequality, show that P(Sn ≥ 11) ≤ .01 for any n ≥ 11.

10 Let X be any random variable which takes on values 0, 1, 2, . . . , n and has E(X) = V(X) = 1. Show that, for any positive integer k,

P(X ≥ k + 1) ≤ 1/k² .

11 We have two coins: one is a fair coin and the other is a coin that produces heads with probability 3/4. One of the two coins is picked at random, and this coin is tossed n times. Let Sn be the number of heads that turns up in these n tosses. Does the Law of Large Numbers allow us to predict the proportion of heads that will turn up in the long run? After we have observed a large number of tosses, can we tell which coin was chosen? How many tosses suffice to make us 95 percent sure?

12 (Chebyshev8) Assume that X1, X2, . . . , Xn are independent random variables with possibly different distributions and let Sn be their sum. Let mk = E(Xk), σk² = V(Xk), and Mn = m1 + m2 + · · · + mn. Assume that σk² < R for all k. Prove that, for any ϵ > 0,

P(|Sn/n − Mn/n| < ϵ) → 1 as n → ∞.

13 A fair coin is tossed repeatedly. Before each toss, you are allowed to decide whether to bet on the outcome. Can you describe a betting system with infinitely many bets which will enable you, in the long run, to win more than half of your bets? (Note that we are disallowing a betting system that says to bet until you are ahead, then quit.) Write a computer program that implements this betting system. As stated above, your program must decide whether to bet on a particular outcome before that outcome is determined.
For example, you might select only outcomes that come after there have been three tails in a row. See if you can get more than 50% heads by your “system.” *14 Prove the following analogue of Chebyshev’s Inequality: P(|X −E(X)| ≥ϵ) ≤1 ϵ E(|X −E(X)|) . 8P. L. Chebyshev, “On Mean Values,” J. Math. Pure. Appl., vol. 12 (1867), pp. 177–184.
*15 We have proved a theorem often called the “Weak Law of Large Numbers.” Most people’s intuition and our computer simulations suggest that, if we toss a coin a sequence of times, the proportion of heads will really approach 1/2; that is, if Sn is the number of heads in n tosses, then we will have

An = Sn/n → 1/2 as n → ∞.

Of course, we cannot be sure of this since we are not able to toss the coin an infinite number of times, and, if we could, the coin could come up heads every time. However, the “Strong Law of Large Numbers,” proved in more advanced courses, states that

P(Sn/n → 1/2) = 1 .

Describe a sample space Ω that would make it possible for us to talk about the event

E = { ω : Sn/n → 1/2 } .

Could we assign the equiprobable measure to this space? (See Example 2.18.)

*16 In this exercise, we shall construct an example of a sequence of random variables that satisfies the weak law of large numbers, but not the strong law. The distribution of Xi will have to depend on i, because otherwise both laws would be satisfied. (This problem was communicated to us by David Maslen.)

Suppose we have an infinite sequence of mutually independent events A1, A2, . . .. Let ai = P(Ai), and let r be a positive integer.

(a) Find an expression for the probability that none of the Ai with i > r occur.

(b) Use the fact that 1 − x ≤ e^{−x} to show that

P(no Ai with i > r occurs) ≤ e^{−Σ_{i>r} ai} .

(c) (The first Borel-Cantelli lemma) Prove that if Σ_{i=1}^{∞} ai diverges, then P(infinitely many Ai occur) = 1.

Now, let Xi be a sequence of mutually independent random variables such that for each positive integer i ≥ 2,

P(Xi = i) = 1/(2i log i), P(Xi = −i) = 1/(2i log i), P(Xi = 0) = 1 − 1/(i log i).

When i = 1 we let Xi = 0 with probability 1. As usual we let Sn = X1 + · · · + Xn. Note that the mean of each Xi is 0.
8.1. DISCRETE RANDOM VARIABLES 315

(d) Find the variance of Sn.

(e) Show that the sequence ⟨Xi⟩ satisfies the Weak Law of Large Numbers, i.e. prove that for any ϵ > 0

P(|Sn/n| ≥ ϵ) → 0

as n tends to infinity.

We now show that {Xi} does not satisfy the Strong Law of Large Numbers. Suppose that Sn/n → 0. Then because

Xn/n = Sn/n − ((n − 1)/n) · (Sn−1/(n − 1)) ,

we know that Xn/n → 0. From the definition of limits, we conclude that the inequality |Xi| ≥ i/2 can only be true for finitely many i.

(f) Let Ai be the event |Xi| ≥ i/2. Find P(Ai). Show that ∑_{i=1}^∞ P(Ai) diverges (use the Integral Test).

(g) Prove that Ai occurs for infinitely many i.

(h) Prove that

P(Sn/n → 0) = 0 ,

and hence that the Strong Law of Large Numbers fails for the sequence {Xi}.

*17 Let us toss a biased coin that comes up heads with probability p and assume the validity of the Strong Law of Large Numbers as described in Exercise 15. Then, with probability 1,

Sn/n → p

as n → ∞. If f(x) is a continuous function on the unit interval, then we also have

f(Sn/n) → f(p) .

Finally, we could hope that

E(f(Sn/n)) → E(f(p)) = f(p) .

Show that, if all this is correct, as in fact it is, we would have proven that any continuous function on the unit interval is a limit of polynomial functions. This is a sketch of a probabilistic proof of an important theorem in mathematics called the Weierstrass approximation theorem.
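The expectation E(f(Sn/n)) in Exercise 17 is a polynomial in p of degree n (a Bernstein polynomial), since it equals the sum of f(k/n) b(n, p, k) over k. A small sketch (Python; the function names and the test function f are our own choices) shows the convergence numerically:

```python
from math import comb

# E(f(Sn/n)) for Sn binomial(n, p): the nth Bernstein polynomial of f at p.
def bernstein(f, n, p):
    return sum(f(k / n) * comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1))

# A continuous function that is not differentiable at 1/2.
f = lambda x: abs(x - 0.5)

approx = {n: bernstein(f, n, 0.3) for n in (10, 100, 1000)}
for n in sorted(approx):
    print(n, round(approx[n], 4))   # approaches f(0.3) = 0.2 as n grows
```

The values approach f(0.3) = 0.2, illustrating how the Strong Law argument yields polynomial approximations even to non-smooth continuous functions.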
316 CHAPTER 8. LAW OF LARGE NUMBERS

8.2 Law of Large Numbers for Continuous Random Variables

In the previous section we discussed in some detail the Law of Large Numbers for discrete probability distributions. This law has a natural analogue for continuous probability distributions, which we consider somewhat more briefly here.

Chebyshev Inequality

Just as in the discrete case, we begin our discussion with the Chebyshev Inequality.

Theorem 8.3 (Chebyshev Inequality) Let X be a continuous random variable with density function f(x). Suppose X has a finite expected value µ = E(X) and finite variance σ² = V(X). Then for any positive number ϵ > 0 we have

P(|X − µ| ≥ ϵ) ≤ σ²/ϵ² . □

The proof is completely analogous to the proof in the discrete case, and we omit it. Note that this theorem says nothing if σ² = V(X) is infinite.

Example 8.4 Let X be any continuous random variable with E(X) = µ and V(X) = σ². If ϵ = kσ, that is, k standard deviations for some integer k, then

P(|X − µ| ≥ kσ) ≤ σ²/(k²σ²) = 1/k² ,

just as in the discrete case. □

Law of Large Numbers

With the Chebyshev Inequality we can now state and prove the Law of Large Numbers for the continuous case.

Theorem 8.4 (Law of Large Numbers) Let X1, X2, . . . , Xn be an independent trials process with a continuous density function f, finite expected value µ, and finite variance σ². Let Sn = X1 + X2 + · · · + Xn be the sum of the Xi. Then for any real number ϵ > 0 we have

lim n→∞ P(|Sn/n − µ| ≥ ϵ) = 0 ,

or equivalently,

lim n→∞ P(|Sn/n − µ| < ϵ) = 1 . □
8.2. CONTINUOUS RANDOM VARIABLES 317

Note that this theorem is not necessarily true if σ² is infinite (see Example 8.8). As in the discrete case, the Law of Large Numbers says that the average value of n independent trials tends to the expected value as n → ∞, in the precise sense that, given ϵ > 0, the probability that the average value and the expected value differ by more than ϵ tends to 0 as n → ∞. Once again, we suppress the proof, as it is identical to the proof in the discrete case.

Uniform Case

Example 8.5 Suppose we choose at random n numbers from the interval [0, 1] with uniform distribution. Then if Xi describes the ith choice, we have

µ = E(Xi) = ∫₀¹ x dx = 1/2 ,
σ² = V(Xi) = ∫₀¹ x² dx − µ² = 1/3 − 1/4 = 1/12 .

Hence,

E(Sn/n) = 1/2 ,  V(Sn/n) = 1/(12n) ,

and for any ϵ > 0,

P(|Sn/n − 1/2| ≥ ϵ) ≤ 1/(12nϵ²) .

This says that if we choose n numbers at random from [0, 1], then the chances are better than 1 − 1/(12nϵ²) that the difference |Sn/n − 1/2| is less than ϵ. Note that ϵ plays the role of the amount of error we are willing to tolerate: If we choose ϵ = 0.1, say, then the chances that |Sn/n − 1/2| is less than 0.1 are better than 1 − 100/(12n). For n = 100, this is about .92, but if n = 1000, this is better than .99 and if n = 10,000, this is better than .999.

We can illustrate what the Law of Large Numbers says for this example graphically. The density for An = Sn/n is determined by

fAn(x) = n fSn(nx) .

We have seen in Section 7.2 that we can compute the density fSn(x) for the sum of n uniform random variables. In Figure 8.2 we have used this to plot the density for An for various values of n. We have shaded in the area for which An would lie between .45 and .55. We see that as we increase n, we obtain more and more of the total area inside the shaded region. The Law of Large Numbers tells us that we can obtain as much of the total area as we please inside the shaded region by choosing n large enough (see also Figure 8.1). □
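The Chebyshev bound above is easy to compare with simulation. This sketch (Python, standard library only; ϵ and the trial counts are arbitrary choices of ours) estimates P(|Sn/n − 1/2| ≥ ϵ) empirically and prints the bound 1/(12nϵ²) next to it:

```python
import random

# Empirical tail probability P(|Sn/n - 1/2| >= eps) for sums of
# uniform [0, 1] variables, alongside the Chebyshev bound 1/(12 n eps^2).
random.seed(2)
eps, trials = 0.05, 5_000
results = {}
for n in (10, 100, 1000):
    tail = sum(
        abs(sum(random.random() for _ in range(n)) / n - 0.5) >= eps
        for _ in range(trials)
    ) / trials
    results[n] = tail
    print(f"n={n:5d}  empirical={tail:.4f}  bound={1/(12*n*eps**2):.4f}")
```

For n = 10 the bound exceeds 1 and says nothing; by n = 1000 the empirical tail is essentially 0, well under the bound of about .033, which illustrates both that the bound is valid and that it is far from tight.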
318 CHAPTER 8. LAW OF LARGE NUMBERS

[Figure 8.2: Illustration of Law of Large Numbers — uniform case; densities of An for n = 2, 5, 10, 20, 30, 50.]

Normal Case

Example 8.6 Suppose we choose n real numbers at random, using a normal distribution with mean 0 and variance 1. Then

µ = E(Xi) = 0 ,  σ² = V(Xi) = 1 .

Hence,

E(Sn/n) = 0 ,  V(Sn/n) = 1/n ,

and, for any ϵ > 0,

P(|Sn/n − 0| ≥ ϵ) ≤ 1/(nϵ²) .

In this case it is possible to compare the Chebyshev estimate for P(|Sn/n − µ| ≥ ϵ) in the Law of Large Numbers with exact values, since we know the density function for Sn/n exactly (see Example 7.9). The comparison is shown in Table 8.1, for ϵ = .1. The data in this table were produced by the program LawContinuous. We see here that the Chebyshev estimates are in general not very accurate. □
8.2. CONTINUOUS RANDOM VARIABLES 319

       n     P(|Sn/n| ≥ .1)   Chebyshev
     100        .31731         1.00000
     200        .15730          .50000
     300        .08326          .33333
     400        .04550          .25000
     500        .02535          .20000
     600        .01431          .16667
     700        .00815          .14286
     800        .00468          .12500
     900        .00270          .11111
    1000        .00157          .10000

Table 8.1: Chebyshev estimates.

Monte Carlo Method

Here is a somewhat more interesting example.

Example 8.7 Let g(x) be a continuous function defined for x ∈ [0, 1] with values in [0, 1]. In Section 2.1, we showed how to estimate the area of the region under the graph of g(x) by the Monte Carlo method, that is, by choosing a large number of random values for x and y with uniform distribution and seeing what fraction of the points P(x, y) fell inside the region under the graph (see Example 2.2). Here is a better way to estimate the same area (see Figure 8.3). Let us choose a large number of independent values Xn at random from [0, 1] with uniform density, set Yn = g(Xn), and find the average value of the Yn. Then this average is our estimate for the area. To see this, note that if the density function for Xn is uniform,

µ = E(Yn) = ∫₀¹ g(x)f(x) dx = ∫₀¹ g(x) dx = average value of g(x) ,

while the variance is

σ² = E((Yn − µ)²) = ∫₀¹ (g(x) − µ)² dx < 1 ,

since for all x in [0, 1], g(x) is in [0, 1], hence µ is in [0, 1], and so |g(x) − µ| ≤ 1. Now let An = (1/n)(Y1 + Y2 + · · · + Yn). Then by Chebyshev’s Inequality, we have

P(|An − µ| ≥ ϵ) ≤ σ²/(nϵ²) < 1/(nϵ²) .

This says that to get within ϵ of the true value for µ = ∫₀¹ g(x) dx with probability at least p, we should choose n so that 1/(nϵ²) ≤ 1 − p (i.e., so that n ≥ 1/(ϵ²(1 − p))). Note that this method tells us how large to take n to get a desired accuracy. □
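A minimal sketch of this estimator (Python; the choice g(x) = x² and the tolerances are our own example). With ϵ = .01 and p = .95 the rule above suggests n ≥ 1/(.01² · .05) = 200,000 samples:

```python
import random

# Monte Carlo estimate of the area under g(x) = x^2 on [0, 1] by
# averaging g at uniformly chosen points.  The exact area is 1/3.
random.seed(3)

def g(x):
    return x * x

n = 200_000   # n >= 1/(eps^2 (1 - p)) for eps = .01, p = .95
estimate = sum(g(random.random()) for _ in range(n)) / n
print(f"estimate = {estimate:.4f}, exact = {1/3:.4f}")
```

Averaging g(X) typically beats the hit-or-miss method of Section 2.1 because the variance of g(X) is smaller than the variance of the 0–1 indicator used there.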
320 CHAPTER 8. LAW OF LARGE NUMBERS

[Figure 8.3: Area problem.]

The Law of Large Numbers requires that the variance σ² of the original underlying density be finite: σ² < ∞. In cases where this fails to hold, the Law of Large Numbers may fail, too. An example follows.

Cauchy Case

Example 8.8 Suppose we choose n numbers from (−∞, +∞) with a Cauchy density with parameter a = 1. We know that for the Cauchy density the expected value and variance are undefined (see Example 6.28). In this case, the density function for

An = Sn/n

is given by (see Example 7.6)

fAn(x) = 1/(π(1 + x²)) ;

that is, the density function for An is the same for all n. In this case, as n increases, the density function does not change at all, and the Law of Large Numbers does not hold. □

Exercises

1 Let X be a continuous random variable with mean µ = 10 and variance σ² = 100/3. Using Chebyshev’s Inequality, find an upper bound for the following probabilities.
8.2. CONTINUOUS RANDOM VARIABLES 321

(a) P(|X − 10| ≥ 2).
(b) P(|X − 10| ≥ 5).
(c) P(|X − 10| ≥ 9).
(d) P(|X − 10| ≥ 20).

2 Let X be a continuous random variable with values uniformly distributed over the interval [0, 20].

(a) Find the mean and variance of X.
(b) Calculate P(|X − 10| ≥ 2), P(|X − 10| ≥ 5), P(|X − 10| ≥ 9), and P(|X − 10| ≥ 20) exactly. How do your answers compare with those of Exercise 1? How good is Chebyshev’s Inequality in this case?

3 Let X be the random variable of Exercise 2.

(a) Calculate the function f(x) = P(|X − 10| ≥ x).
(b) Now graph the function f(x), and on the same axes, graph the Chebyshev function g(x) = 100/(3x²). Show that f(x) ≤ g(x) for all x > 0, but that g(x) is not a very good approximation for f(x).

4 Let X be a continuous random variable with values exponentially distributed over [0, ∞) with parameter λ = 0.1.

(a) Find the mean and variance of X.
(b) Using Chebyshev’s Inequality, find an upper bound for the following probabilities: P(|X − 10| ≥ 2), P(|X − 10| ≥ 5), P(|X − 10| ≥ 9), and P(|X − 10| ≥ 20).
(c) Calculate these probabilities exactly, and compare with the bounds in (b).

5 Let X be a continuous random variable with values normally distributed over (−∞, +∞) with mean µ = 0 and variance σ² = 1.

(a) Using Chebyshev’s Inequality, find upper bounds for the following probabilities: P(|X| ≥ 1), P(|X| ≥ 2), and P(|X| ≥ 3).
(b) The area under the normal curve between −1 and 1 is .6827, between −2 and 2 is .9545, and between −3 and 3 it is .9973 (see the table in Appendix A). Compare your bounds in (a) with these exact values. How good is Chebyshev’s Inequality in this case?

6 If X is normally distributed, with mean µ and variance σ², find an upper bound for the following probabilities, using Chebyshev’s Inequality.

(a) P(|X − µ| ≥ σ).
(b) P(|X − µ| ≥ 2σ).
(c) P(|X − µ| ≥ 3σ).
322 CHAPTER 8. LAW OF LARGE NUMBERS

(d) P(|X − µ| ≥ 4σ).

Now find the exact value using the program NormalArea or the normal table in Appendix A, and compare.

7 If X is a random variable with mean µ ≠ 0 and variance σ², define the relative deviation D of X from its mean by

D = |(X − µ)/µ| .

(a) Show that P(D ≥ a) ≤ σ²/(µ²a²).
(b) If X is the random variable of Exercise 1, find an upper bound for P(D ≥ .2), P(D ≥ .5), P(D ≥ .9), and P(D ≥ 2).

8 Let X be a continuous random variable and define the standardized version X∗ of X by:

X∗ = (X − µ)/σ .

(a) Show that P(|X∗| ≥ a) ≤ 1/a².
(b) If X is the random variable of Exercise 1, find bounds for P(|X∗| ≥ 2), P(|X∗| ≥ 5), and P(|X∗| ≥ 9).

9 (a) Suppose a number X is chosen at random from [0, 20] with uniform probability. Find a lower bound for the probability that X lies between 8 and 12, using Chebyshev’s Inequality.
(b) Now suppose 20 real numbers are chosen independently from [0, 20] with uniform probability. Find a lower bound for the probability that their average lies between 8 and 12.
(c) Now suppose 100 real numbers are chosen independently from [0, 20]. Find a lower bound for the probability that their average lies between 8 and 12.

10 A student’s score on a particular calculus final is a random variable with values of [0, 100], mean 70, and variance 25.

(a) Find a lower bound for the probability that the student’s score will fall between 65 and 75.
(b) If 100 students take the final, find a lower bound for the probability that the class average will fall between 65 and 75.

11 The Pilsdorff beer company runs a fleet of trucks along the 100 mile road from Hangtown to Dry Gulch, and maintains a garage halfway in between. Each of the trucks is apt to break down at a point X miles from Hangtown, where X is a random variable uniformly distributed over [0, 100].

(a) Find a lower bound for the probability P(|X − 50| ≤ 10).
8.2. CONTINUOUS RANDOM VARIABLES 323

(b) Suppose that in one bad week, 20 trucks break down. Find a lower bound for the probability P(|A20 − 50| ≤ 10), where A20 is the average of the distances from Hangtown at the time of breakdown.

12 A share of common stock in the Pilsdorff beer company has a price Yn on the nth business day of the year. Finn observes that the price change Xn = Yn+1 − Yn appears to be a random variable with mean µ = 0 and variance σ² = 1/4. If Y1 = 30, find a lower bound for the following probabilities, under the assumption that the Xn’s are mutually independent.

(a) P(25 ≤ Y2 ≤ 35).
(b) P(25 ≤ Y11 ≤ 35).
(c) P(25 ≤ Y101 ≤ 35).

13 Suppose one hundred numbers X1, X2, . . . , X100 are chosen independently at random from [0, 20]. Let S = X1 + X2 + · · · + X100 be the sum, A = S/100 the average, and S∗ = (S − 1000)/(10/√3) the standardized sum. Find lower bounds for the probabilities

(a) P(|S − 1000| ≤ 100).
(b) P(|A − 10| ≤ 1).
(c) P(|S∗| ≤ √3).

14 Let X be a continuous random variable normally distributed on (−∞, +∞) with mean 0 and variance 1. Using the normal table provided in Appendix A, or the program NormalArea, find values for the function f(x) = P(|X| ≥ x) as x increases from 0 to 4.0 in steps of .25. Note that for x ≥ 0 the table gives NA(0, x) = P(0 ≤ X ≤ x) and thus P(|X| ≥ x) = 2(.5 − NA(0, x)). Plot by hand the graph of f(x) using these values, and the graph of the Chebyshev function g(x) = 1/x², and compare (see Exercise 3).

15 Repeat Exercise 14, but this time with mean 10 and variance 3. Note that the table in Appendix A presents values for a standard normal variable. Find the standardized version X∗ for X, find values for f∗(x) = P(|X∗| ≥ x) as in Exercise 14, and then rescale these values for f(x) = P(|X − 10| ≥ x). Graph and compare this function with the Chebyshev function g(x) = 3/x².

16 Let Z = X/Y, where X and Y have normal densities with mean 0 and standard deviation 1. Then it can be shown that Z has a Cauchy density.
(a) Write a program to illustrate this result by plotting a bar graph of 1000 samples obtained by forming the ratio of two standard normal outcomes. Compare your bar graph with the graph of the Cauchy density. Depending upon which computer language you use, you may or may not need to tell the computer how to simulate a normal random variable. A method for doing this was described in Section 5.2.
324 CHAPTER 8. LAW OF LARGE NUMBERS (b) We have seen that the Law of Large Numbers does not apply to the Cauchy density (see Example 8.8). Simulate a large number of experi- ments with Cauchy density and compute the average of your results. Do these averages seem to be approaching a limit? If so can you explain why this might be? 17 Show that, if X ≥0, then P(X ≥a) ≤E(X)/a. 18 (Lamperti9) Let X be a non-negative random variable. What is the best upper bound you can give for P(X ≥a) if you know (a) E(X) = 20. (b) E(X) = 20 and V (X) = 25. (c) E(X) = 20, V (X) = 25, and X is symmetric about its mean. 9Private communication.
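A quick sketch for Exercise 16(b) (Python, standard library only; the sample sizes and seed are arbitrary choices of ours): generate Cauchy samples as ratios of standard normals, as in part (a), and track the running average.

```python
import random

# Running averages of Cauchy samples (Exercise 16(b)).  Each sample is
# the ratio of two standard normal outcomes, which has a Cauchy density.
random.seed(4)

def cauchy_sample():
    return random.gauss(0, 1) / random.gauss(0, 1)

total, averages = 0.0, {}
for count in range(1, 300_001):
    total += cauchy_sample()
    if count in (100, 10_000, 300_000):
        averages[count] = total / count
print(averages)
```

Unlike the uniform and normal cases, the average of n Cauchy samples is itself Cauchy distributed with the same parameter (Example 8.8), so the running averages never concentrate around any limiting value, no matter how large n becomes.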
Chapter 9

Central Limit Theorem

9.1 Central Limit Theorem for Bernoulli Trials

The second fundamental theorem of probability is the Central Limit Theorem. This theorem says that if Sn is the sum of n mutually independent random variables, then the distribution function of Sn is well-approximated by a certain type of continuous function known as a normal density function, which is given by the formula

fµ,σ(x) = (1/(√(2π)σ)) e^{−(x−µ)²/(2σ²)} ,

as we have seen in Chapter 4.3. In this section, we will deal only with the case that µ = 0 and σ = 1. We will call this particular normal density function the standard normal density, and we will denote it by φ(x):

φ(x) = (1/√(2π)) e^{−x²/2} .

A graph of this function is given in Figure 9.1. It can be shown that the area under any normal density equals 1.

The Central Limit Theorem tells us, quite generally, what happens when we have the sum of a large number of independent random variables each of which contributes a small amount to the total. In this section we shall discuss this theorem as it applies to the Bernoulli trials and in Section 9.2 we shall consider more general processes. We will discuss the theorem in the case that the individual random variables are identically distributed, but the theorem is true, under certain conditions, even if the individual random variables have different distributions.

Bernoulli Trials

Consider a Bernoulli trials process with probability p for success on each trial. Let Xi = 1 or 0 according as the ith outcome is a success or failure, and let Sn = X1 + X2 + · · · + Xn. Then Sn is the number of successes in n trials. We know that Sn has as its distribution the binomial probabilities b(n, p, j). In Section 3.2,
326 CHAPTER 9. CENTRAL LIMIT THEOREM

[Figure 9.1: Standard normal density.]

we plotted these distributions for p = .3 and p = .5 for various values of n (see Figure 3.5).

We note that the maximum values of the distributions appeared near the expected value np, which causes their spike graphs to drift off to the right as n increased. Moreover, these maximum values approach 0 as n increased, which causes the spike graphs to flatten out.

Standardized Sums

We can prevent the drifting of these spike graphs by subtracting the expected number of successes np from Sn, obtaining the new random variable Sn − np. Now the maximum values of the distributions will always be near 0.

To prevent the spreading of these spike graphs, we can normalize Sn − np to have variance 1 by dividing by its standard deviation √npq (see Exercise 6.2.12 and Exercise 6.2.16).

Definition 9.1 The standardized sum of Sn is given by

S∗n = (Sn − np)/√npq .

S∗n always has expected value 0 and variance 1. □

Suppose we plot a spike graph with the spikes placed at the possible values of S∗n: x0, x1, . . . , xn, where

xj = (j − np)/√npq .   (9.1)

We make the height of the spike at xj equal to the distribution value b(n, p, j). An example of this standardized spike graph, with n = 270 and p = .3, is shown in Figure 9.2. This graph is beautifully bell-shaped. We would like to fit a normal density to this spike graph. The obvious choice to try is the standard normal density, since it is centered at 0, just as the standardized spike graph is. In this figure, we
9.1. BERNOULLI TRIALS 327

[Figure 9.2: Normalized binomial distribution and standard normal density.]

have drawn this standard normal density. The reader will note that a horrible thing has occurred: Even though the shapes of the two graphs are the same, the heights are quite different.

If we want the two graphs to fit each other, we must modify one of them; we choose to modify the spike graph. Since the shapes of the two graphs look fairly close, we will attempt to modify the spike graph without changing its shape. The reason for the differing heights is that the sum of the heights of the spikes equals 1, while the area under the standard normal density equals 1. If we were to draw a continuous curve through the top of the spikes, and find the area under this curve, we see that we would obtain, approximately, the sum of the heights of the spikes multiplied by the distance between consecutive spikes, which we will call ϵ. Since the sum of the heights of the spikes equals one, the area under this curve would be approximately ϵ. Thus, to change the spike graph so that the area under this curve has value 1, we need only multiply the heights of the spikes by 1/ϵ. It is easy to see from Equation 9.1 that

ϵ = 1/√npq .

In Figure 9.3 we show the standardized sum S∗n for n = 270 and p = .3, after correcting the heights, together with the standard normal density. (This figure was produced with the program CLTBernoulliPlot.) The reader will note that the standard normal fits the height-corrected spike graph extremely well. In fact, one version of the Central Limit Theorem (see Theorem 9.1) says that as n increases, the standard normal density will do an increasingly better job of approximating the height-corrected spike graphs corresponding to a Bernoulli trials process with n summands.

Let us fix a value x on the x-axis and let n be a fixed positive integer. Then, using Equation 9.1, the point xj that is closest to x has a subscript j given by the
328 CHAPTER 9. CENTRAL LIMIT THEOREM

[Figure 9.3: Corrected spike graph with standard normal density.]

formula

j = ⟨np + x√npq⟩ ,

where ⟨a⟩ means the integer nearest to a. Thus the height of the spike above xj will be

√npq b(n, p, j) = √npq b(n, p, ⟨np + xj√npq⟩) .

For large n, we have seen that the height of the spike is very close to the height of the normal density at x. This suggests the following theorem.

Theorem 9.1 (Central Limit Theorem for Binomial Distributions) For the binomial distribution b(n, p, j) we have

lim n→∞ √npq b(n, p, ⟨np + x√npq⟩) = φ(x) ,

where φ(x) is the standard normal density.

The proof of this theorem can be carried out using Stirling’s approximation from Section 3.1. We indicate this method of proof by considering the case x = 0. In this case, the theorem states that

lim n→∞ √npq b(n, p, ⟨np⟩) = 1/√(2π) = .3989 . . . .

In order to simplify the calculation, we assume that np is an integer, so that ⟨np⟩ = np. Then

√npq b(n, p, np) = √npq p^{np} q^{nq} n!/((np)! (nq)!) .

Recall that Stirling’s formula (see Theorem 3.3) states that

n! ∼ √(2πn) nⁿ e^{−n}  as n → ∞.
9.1. BERNOULLI TRIALS 329

Using this, we have

√npq b(n, p, np) ∼ √npq p^{np} q^{nq} √(2πn) nⁿ e^{−n} / (√(2πnp) √(2πnq) (np)^{np} (nq)^{nq} e^{−np} e^{−nq}) ,

which simplifies to 1/√(2π). □

Approximating Binomial Distributions

We can use Theorem 9.1 to find approximations for the values of binomial distribution functions. If we wish to find an approximation for b(n, p, j), we set

j = np + x√npq

and solve for x, obtaining

x = (j − np)/√npq .

Theorem 9.1 then says that √npq b(n, p, j) is approximately equal to φ(x), so

b(n, p, j) ≈ φ(x)/√npq = (1/√npq) φ((j − np)/√npq) .

Example 9.1 Let us estimate the probability of exactly 55 heads in 100 tosses of a coin. For this case np = 100 · 1/2 = 50 and √npq = √(100 · 1/2 · 1/2) = 5. Thus x55 = (55 − 50)/5 = 1 and

P(S100 = 55) ∼ φ(1)/5 = (1/5)(1/√(2π)) e^{−1/2} = .0484 .

To four decimal places, the actual value is .0485, and so the approximation is very good. □

The program CLTBernoulliLocal illustrates this approximation for any choice of n, p, and j. We have run this program for two examples. The first is the probability of exactly 50 heads in 100 tosses of a coin; the estimate is .0798, while the actual value, to four decimal places, is .0796. The second example is the probability of exactly eight sixes in 36 rolls of a die; here the estimate is .1196, while the actual value, to four decimal places, is .1093.
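The local approximation of Example 9.1 is a few lines of code to check. This sketch (Python, standard library only; not the book's CLTBernoulliLocal program, just an equivalent computation of ours) compares the approximation with the exact binomial probability:

```python
from math import comb, exp, pi, sqrt

# b(n, p, j) ~ phi(x)/sqrt(npq) with x = (j - np)/sqrt(npq) (Theorem 9.1),
# checked against the exact binomial probability for Example 9.1.
def phi(x):
    return exp(-x * x / 2) / sqrt(2 * pi)

n, p, j = 100, 0.5, 55
q = 1 - p
x = (j - n * p) / sqrt(n * p * q)
approx = phi(x) / sqrt(n * p * q)
exact = comb(n, j) * p**j * q**(n - j)
print(f"approx = {approx:.4f}, exact = {exact:.4f}")
```

Changing n, p, j reproduces the other comparisons quoted above.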
330 CHAPTER 9. CENTRAL LIMIT THEOREM

The individual binomial probabilities tend to 0 as n tends to infinity. In most applications we are not interested in the probability that a specific outcome occurs, but rather in the probability that the outcome lies in a given interval, say the interval [a, b]. In order to find this probability, we add the heights of the spike graphs for values of j between a and b. This is the same as asking for the probability that the standardized sum S∗n lies between a∗ and b∗, where a∗ and b∗ are the standardized values of a and b. But as n tends to infinity the sum of these areas could be expected to approach the area under the standard normal density between a∗ and b∗. The Central Limit Theorem states that this does indeed happen.

Theorem 9.2 (Central Limit Theorem for Bernoulli Trials) Let Sn be the number of successes in n Bernoulli trials with probability p for success, and let a and b be two fixed real numbers. Then

lim n→∞ P(a ≤ (Sn − np)/√npq ≤ b) = ∫ₐᵇ φ(x) dx .  □

This theorem can be proved by adding together the approximations to b(n, p, k) given in Theorem 9.1. It is also a special case of the more general Central Limit Theorem (see Section 10.3).

We know from calculus that the integral on the right side of this equation is equal to the area under the graph of the standard normal density φ(x) between a and b. We denote this area by NA(a∗, b∗). Unfortunately, there is no simple way to integrate the function e^{−x²/2}, and so we must either use a table of values or else a numerical integration program. (See Figure 9.4 for values of NA(0, z). A more extensive table is given in Appendix A.)

It is clear from the symmetry of the standard normal density that areas such as that between −2 and 3 can be found from this table by adding the area from 0 to 2 (same as that from −2 to 0) to the area from 0 to 3.

Approximation of Binomial Probabilities

Suppose that Sn is binomially distributed with parameters n and p.
We have seen that the above theorem shows how to estimate a probability of the form

P(i ≤ Sn ≤ j) ,   (9.2)

where i and j are integers between 0 and n. As we have seen, the binomial distribution can be represented as a spike graph, with spikes at the integers between 0 and n, and with the height of the kth spike given by b(n, p, k). For moderate-sized values of n, if we standardize this spike graph, and change the heights of its spikes, in the manner described above, the sum of the heights of the spikes is approximated by the area under the standard normal density between i∗ and j∗. It turns out that a slightly more accurate approximation is afforded by the area under the standard
9.1. BERNOULLI TRIALS 331

NA(0, z) = area under the standard normal curve from 0 to z

     z   NA(z)      z   NA(z)      z   NA(z)      z   NA(z)
    .0   .0000    1.0   .3413    2.0   .4772    3.0   .4987
    .1   .0398    1.1   .3643    2.1   .4821    3.1   .4990
    .2   .0793    1.2   .3849    2.2   .4861    3.2   .4993
    .3   .1179    1.3   .4032    2.3   .4893    3.3   .4995
    .4   .1554    1.4   .4192    2.4   .4918    3.4   .4997
    .5   .1915    1.5   .4332    2.5   .4938    3.5   .4998
    .6   .2257    1.6   .4452    2.6   .4953    3.6   .4998
    .7   .2580    1.7   .4554    2.7   .4965    3.7   .4999
    .8   .2881    1.8   .4641    2.8   .4974    3.8   .4999
    .9   .3159    1.9   .4713    2.9   .4981    3.9   .5000

Figure 9.4: Table of values of NA(0, z), the normal area from 0 to z.
332 CHAPTER 9. CENTRAL LIMIT THEOREM

normal density between the standardized values corresponding to (i − 1/2) and (j + 1/2); these values are

i∗ = (i − 1/2 − np)/√npq

and

j∗ = (j + 1/2 − np)/√npq .

Thus,

P(i ≤ Sn ≤ j) ≈ NA((i − 1/2 − np)/√npq, (j + 1/2 − np)/√npq) .

It should be stressed that the approximations obtained by using the Central Limit Theorem are only approximations, and sometimes they are not very close to the actual values (see Exercise 12). We now illustrate this idea with some examples.

Example 9.2 A coin is tossed 100 times. Estimate the probability that the number of heads lies between 40 and 60 (the word “between” in mathematics means inclusive of the endpoints). The expected number of heads is 100 · 1/2 = 50, and the standard deviation for the number of heads is √(100 · 1/2 · 1/2) = 5. Thus, since n = 100 is reasonably large, we have

P(40 ≤ Sn ≤ 60) ≈ P((39.5 − 50)/5 ≤ S∗n ≤ (60.5 − 50)/5)
             = P(−2.1 ≤ S∗n ≤ 2.1)
             ≈ NA(−2.1, 2.1)
             = 2NA(0, 2.1)
             ≈ .9642 .

The actual value is .96480, to five decimal places.

Note that in this case we are asking for the probability that the outcome will not deviate by more than two standard deviations from the expected value. Had we asked for the probability that the number of successes is between 35 and 65, this would have represented three standard deviations from the mean, and, using our 1/2 correction, our estimate would be the area under the standard normal curve between −3.1 and 3.1, or 2NA(0, 3.1) = .9980. The actual answer in this case, to five places, is .99821. □

It is important to work a few problems by hand to understand the conversion from a given inequality to an inequality relating to the standardized variable. After this, one can then use a computer program that carries out this conversion, including the 1/2 correction. The program CLTBernoulliGlobal is such a program for estimating probabilities of the form P(a ≤ Sn ≤ b).

Example 9.3 Dartmouth College would like to have 1050 freshmen. This college cannot accommodate more than 1060.
Assume that each applicant accepts with
9.1. BERNOULLI TRIALS 333

probability .6 and that the acceptances can be modeled by Bernoulli trials. If the college accepts 1700, what is the probability that it will have too many acceptances?

If it accepts 1700 students, the expected number of students who matriculate is .6 · 1700 = 1020. The standard deviation for the number that accept is √(1700 · .6 · .4) ≈ 20. Thus we want to estimate the probability

P(S1700 > 1060) = P(S1700 ≥ 1061)
              = P(S∗1700 ≥ (1060.5 − 1020)/20)
              = P(S∗1700 ≥ 2.025) .

From Table 9.4, if we interpolate, we would estimate this probability to be .5 − .4784 = .0216. Thus, the college is fairly safe using this admission policy. □

Applications to Statistics

There are many important questions in the field of statistics that can be answered using the Central Limit Theorem for independent trials processes. The following example is one that is encountered quite frequently in the news. Another example of an application of the Central Limit Theorem to statistics is given in Section 9.2.

Example 9.4 One frequently reads that a poll has been taken to estimate the proportion of people in a certain population who favor one candidate over another in a race with two candidates. (This model also applies to races with more than two candidates A and B, and to ballot propositions.) Clearly, it is not possible for pollsters to ask everyone for their preference. What is done instead is to pick a subset of the population, called a sample, and ask everyone in the sample for their preference. Let p be the actual proportion of people in the population who are in favor of candidate A and let q = 1 − p. If we choose a sample of size n from the population, the preferences of the people in the sample can be represented by random variables X1, X2, . . . , Xn, where Xi = 1 if person i is in favor of candidate A, and Xi = 0 if person i is in favor of candidate B. Let Sn = X1 + X2 + · · · + Xn.
If each subset of size n is chosen with the same probability, then Sn is hypergeometrically distributed. If n is small relative to the size of the population (which is typically true in practice), then Sn is approximately binomially distributed, with parameters n and p.

The pollster wants to estimate the value p. An estimate for p is provided by the value p̄ = Sn/n, which is the proportion of people in the sample who favor candidate A.

The Central Limit Theorem says that the random variable p̄ is approximately normally distributed. (In fact, our version of the Central Limit Theorem says that the distribution function of the random variable

S∗n = (Sn − np)/√npq

is approximated by the standard normal density.) But we have

p̄ = S∗n √(pq/n) + p ,
334 CHAPTER 9. CENTRAL LIMIT THEOREM

i.e., p̄ is just a linear function of S∗n. Since the distribution of S∗n is approximated by the standard normal density, the distribution of the random variable p̄ must also be bell-shaped. We also know how to write the mean and standard deviation of p̄ in terms of p and n. The mean of p̄ is just p, and the standard deviation is

√(pq/n) .

Thus, it is easy to write down the standardized version of p̄; it is

p̄∗ = (p̄ − p)/√(pq/n) .

Since the distribution of the standardized version of p̄ is approximated by the standard normal density, we know, for example, that 95% of its values will lie within two standard deviations of its mean, and the same is true of p̄. So we have

P(p − 2√(pq/n) < p̄ < p + 2√(pq/n)) ≈ .954 .

Now the pollster does not know p or q, but he can use p̄ and q̄ = 1 − p̄ in their place without too much danger. With this idea in mind, the above statement is equivalent to the statement

P(p̄ − 2√(p̄q̄/n) < p < p̄ + 2√(p̄q̄/n)) ≈ .954 .

The resulting interval

(p̄ − 2√(p̄q̄)/√n, p̄ + 2√(p̄q̄)/√n)

is called the 95 percent confidence interval for the unknown value of p. The name is suggested by the fact that if we use this method to estimate p in a large number of samples we should expect that in about 95 percent of the samples the true value of p is contained in the confidence interval obtained from the sample. In Exercise 11 you are asked to write a program to illustrate that this does indeed happen.

The pollster has control over the value of n. Thus, if he wants to create a 95% confidence interval with length 6%, then he should choose a value of n so that

2√(p̄q̄)/√n ≤ .03 .

Using the fact that p̄q̄ ≤ 1/4, no matter what the value of p̄ is, it is easy to show that if he chooses a value of n so that

1/√n ≤ .03 ,

he will be safe. This is equivalent to choosing

n ≥ 1111 .
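Exercise 11 asks for a program illustrating the coverage claim. A minimal sketch (Python, standard library only; p = .54 and n = 1200 match the values used in the text's polling simulation, and the number of repeated samples is our own choice):

```python
import random
from math import sqrt

# Coverage check for the 95% confidence interval pbar +/- 2*sqrt(pbar*qbar/n):
# draw many samples of size n from a population with true proportion p,
# and count how often the interval contains p.
random.seed(5)
p_true, n, samples = 0.54, 1200, 2000
hits = 0
for _ in range(samples):
    s = sum(random.random() < p_true for _ in range(n))
    pbar = s / n
    half = 2 * sqrt(pbar * (1 - pbar) / n)
    if pbar - half < p_true < pbar + half:
        hits += 1
print(f"coverage = {hits / samples:.3f}")   # should be near .954
```

Rerunning with a different population proportion or sample size shows the same coverage near 95%, which is the sense in which the interval "works" regardless of the unknown p.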
9.1. BERNOULLI TRIALS 335

Figure 9.5: Polling simulation.

So if the pollster chooses n to be 1200, say, and calculates p̄ using his sample of size 1200, then 19 times out of 20 (i.e., 95% of the time), his confidence interval, which is of length 6%, will contain the true value of p. This type of confidence interval is typically reported in the news as follows: this survey has a 3% margin of error. In fact, most of the surveys that one sees reported in the paper will have sample sizes around 1000. A somewhat surprising fact is that the size of the population apparently has no effect on the sample size needed to obtain a 95% confidence interval for p with a given margin of error. To see this, note that the value of n that was needed depended only on the number .03, which is the margin of error. In other words, whether the population is of size 100,000 or 100,000,000, the pollster needs only to choose a sample of size 1200 or so to get the same accuracy of estimate of p. (We did use the fact that the sample size was small relative to the population size in the statement that Sn is approximately binomially distributed.)

In Figure 9.5, we show the results of simulating the polling process. The population is of size 100,000, and for the population, p = .54. The sample size was chosen to be 1200. The spike graph shows the distribution of p̄ for 10,000 randomly chosen samples. For this simulation, the program kept track of the number of samples for which p̄ was within 3% of .54. This number was 9648, which is close to 95% of the number of samples used.

Another way to see what the idea of confidence intervals means is shown in Figure 9.6. In this figure, we show 100 confidence intervals, obtained by computing p̄ for 100 different samples of size 1200 from the same population as before. The reader can see that most of these confidence intervals (96, to be exact) contain the true value of p.
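The simulation behind Figure 9.5 is easy to reproduce. The sketch below (our own code, not the book's simulation program; the seed and trial count are our choices) draws repeated samples of size 1200 from a population with p = .54 and counts how often p̄ lands within 3% of the truth; the proportion should come out near 95%.

```python
import random

random.seed(17)

def sample_p_bar(p, n):
    """Proportion of successes in a sample of size n (binomial model)."""
    return sum(random.random() < p for _ in range(n)) / n

p, n, trials = 0.54, 1200, 2000
hits = sum(abs(sample_p_bar(p, n) - p) <= 0.03 for _ in range(trials))
print(hits / trials)  # close to .95, as in the text's 9648 out of 10,000
```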
The Gallup Poll has used these polling techniques in every Presidential election since 1936 (and in innumerable other elections as well). Table 9.1 shows the results

¹The Gallup Poll Monthly, November 1992, No. 326, p. 33. Supplemented with the help of
Figure 9.6: Confidence interval simulation.

of their efforts. The reader will note that most of the approximations to p are within 3% of the actual value of p. The sample sizes for these polls were typically around 1500. (In the table, both the predicted and actual percentages for the winning candidate refer to the percentage of the vote among the “major” political parties. In most elections, there were two major parties, but in several elections, there were three.)

This technique also plays an important role in the evaluation of the effectiveness of drugs in the medical profession. For example, it is sometimes desired to know what proportion of patients will be helped by a new drug. This proportion can be estimated by giving the drug to a subset of the patients, and determining the proportion of this sample who are helped by the drug.

Historical Remarks

The Central Limit Theorem for Bernoulli trials was first proved by Abraham de Moivre and appeared in his book, The Doctrine of Chances, first published in 1718.²

De Moivre spent his years from age 18 to 21 in prison in France because of his Protestant background. When he was released he left France for England, where he worked as a tutor to the sons of noblemen. Newton had presented a copy of his Principia Mathematica to the Earl of Devonshire. The story goes that, while de Moivre was tutoring at the Earl’s house, he came upon Newton’s work and found that it was beyond him. It is said that he then bought a copy of his own and tore

Lydia K. Saab, The Gallup Organization.
²A. de Moivre, The Doctrine of Chances, 3d ed. (London: Millar, 1756).
Year   Winning Candidate   Gallup Final Survey   Election Result   Deviation
1936   Roosevelt           55.7%                 62.5%             6.8%
1940   Roosevelt           52.0%                 55.0%             3.0%
1944   Roosevelt           51.5%                 53.3%             1.8%
1948   Truman              44.5%                 49.9%             5.4%
1952   Eisenhower          51.0%                 55.4%             4.4%
1956   Eisenhower          59.5%                 57.8%             1.7%
1960   Kennedy             51.0%                 50.1%             0.9%
1964   Johnson             64.0%                 61.3%             2.7%
1968   Nixon               43.0%                 43.5%             0.5%
1972   Nixon               62.0%                 61.8%             0.2%
1976   Carter              48.0%                 50.0%             2.0%
1980   Reagan              47.0%                 50.8%             3.8%
1984   Reagan              59.0%                 59.1%             0.1%
1988   Bush                56.0%                 53.9%             2.1%
1992   Clinton             49.0%                 43.2%             5.8%
1996   Clinton             52.0%                 50.1%             1.9%

Table 9.1: Gallup Poll accuracy record.

it into separate pages, learning it page by page as he walked around London to his tutoring jobs. De Moivre frequented the coffeehouses in London, where he started his probability work by calculating odds for gamblers. He also met Newton at such a coffeehouse and they became fast friends. De Moivre dedicated his book to Newton.

The Doctrine of Chances provides the techniques for solving a wide variety of gambling problems. In the midst of these gambling problems de Moivre rather modestly introduces his proof of the Central Limit Theorem, writing

A Method of approximating the Sum of the Terms of the Binomial (a + b)^n expanded into a Series, from whence are deduced some practical Rules to estimate the Degree of Assent which is to be given to Experiments.³

De Moivre’s proof used the approximation to factorials that we now call Stirling’s formula. De Moivre states that he had obtained this formula before Stirling but without determining the exact value of the constant √2π. While he says it is not really necessary to know this exact value, he concedes that knowing it “has spread a singular Elegancy on the Solution.” The complete proof and an interesting discussion of the life of de Moivre can be found in the book Games, Gods and Gambling by F. N. David.⁴

³ibid., p. 243.
⁴F. N. David, Games, Gods and Gambling (London: Griffin, 1962).
Exercises

1 Let S100 be the number of heads that turn up in 100 tosses of a fair coin. Use the Central Limit Theorem to estimate
(a) P(S100 ≤ 45).
(b) P(45 < S100 < 55).
(c) P(S100 > 63).
(d) P(S100 < 57).

2 Let S200 be the number of heads that turn up in 200 tosses of a fair coin. Estimate
(a) P(S200 = 100).
(b) P(S200 = 90).
(c) P(S200 = 80).

3 A true-false examination has 48 questions. June has probability 3/4 of answering a question correctly. April just guesses on each question. A passing score is 30 or more correct answers. Compare the probability that June passes the exam with the probability that April passes it.

4 Let S be the number of heads in 1,000,000 tosses of a fair coin. Use (a) Chebyshev’s inequality, and (b) the Central Limit Theorem, to estimate the probability that S lies between 499,500 and 500,500. Use the same two methods to estimate the probability that S lies between 499,000 and 501,000, and the probability that S lies between 498,500 and 501,500.

5 A rookie is brought to a baseball club on the assumption that he will have a .300 batting average. (Batting average is the ratio of the number of hits to the number of times at bat.) In the first year, he comes to bat 300 times and his batting average is .267. Assume that his at bats can be considered Bernoulli trials with probability .3 for success. Could such a low average be considered just bad luck or should he be sent back to the minor leagues? Comment on the assumption of Bernoulli trials in this situation.

6 Once upon a time, there were two railway trains competing for the passenger traffic of 1000 people leaving from Chicago at the same hour and going to Los Angeles. Assume that passengers are equally likely to choose each train. How many seats must a train have to assure a probability of .99 or better of having a seat for each passenger?

7 Assume that, as in Example 9.3, Dartmouth admits 1750 students.
What is the probability of too many acceptances?

8 A club serves dinner to members only. They are seated at 12-seat tables. The manager observes over a long period of time that 95 percent of the time there are between six and nine full tables of members, and the remainder of the
time the numbers are equally likely to fall above or below this range. Assume that each member decides to come with a given probability p, and that the decisions are independent. How many members are there? What is p?

9 Let Sn be the number of successes in n Bernoulli trials with probability .8 for success on each trial. Let An = Sn/n be the average number of successes. In each case give the value for the limit, and give a reason for your answer.
(a) lim_{n→∞} P(An = .8).
(b) lim_{n→∞} P(.7n < Sn < .9n).
(c) lim_{n→∞} P(Sn < .8n + .8√n).
(d) lim_{n→∞} P(.79 < An < .81).

10 Find the probability that among 10,000 random digits the digit 3 appears not more than 931 times.

11 Write a computer program to simulate 10,000 Bernoulli trials with probability .3 for success on each trial. Have the program compute the 95 percent confidence interval for the probability of success based on the proportion of successes. Repeat the experiment 100 times and see how many times the true value of .3 is included within the confidence limits.

12 A balanced coin is flipped 400 times. Determine the number x such that the probability that the number of heads is between 200 − x and 200 + x is approximately .80.

13 A noodle machine in Spumoni’s spaghetti factory makes about 5 percent defective noodles even when properly adjusted. The noodles are then packed in crates containing 1900 noodles each. A crate is examined and found to contain 115 defective noodles. What is the approximate probability of finding at least this many defective noodles if the machine is properly adjusted?

14 A restaurant feeds 400 customers per day. On the average 20 percent of the customers order apple pie.
(a) Give a range (called a 95 percent confidence interval) for the number of pieces of apple pie ordered on a given day such that you can be 95 percent sure that the actual number will fall in this range.
(b) How many customers must the restaurant have, on the average, to be at least 95 percent sure that the number of customers ordering pie on that day falls in the 19 to 21 percent range?

15 Recall that if X is a random variable, the cumulative distribution function of X is the function F(x) defined by

\[ F(x) = P(X \le x)\ . \]

(a) Let Sn be the number of successes in n Bernoulli trials with probability p for success. Write a program to plot the cumulative distribution for Sn.
(b) Modify your program in (a) to plot the cumulative distribution F_n^*(x) of the standardized random variable

\[ S_n^* = \frac{S_n - np}{\sqrt{npq}}\ . \]

(c) Define the normal distribution N(x) to be the area under the normal curve up to the value x. Modify your program in (b) to plot the normal distribution as well, and compare it with the cumulative distribution of S_n^*. Do this for n = 10, 50, and 100.

16 In Example 3.11, we were interested in testing the hypothesis that a new form of aspirin is effective 80 percent of the time rather than the 60 percent of the time as reported for standard aspirin. The new aspirin is given to n people. If it is effective in m or more cases, we accept the claim that the new drug is effective 80 percent of the time and if not we reject the claim. Using the Central Limit Theorem, show that you can choose the number of trials n and the critical value m so that the probability that we reject the hypothesis when it is true is less than .01 and the probability that we accept it when it is false is also less than .01. Find the smallest value of n that will suffice for this.

17 In an opinion poll it is assumed that an unknown proportion p of the people are in favor of a proposed new law and a proportion 1 − p are against it. A sample of n people is taken to obtain their opinion. The proportion p̄ in favor in the sample is taken as an estimate of p. Using the Central Limit Theorem, determine how large a sample will ensure that the estimate will, with probability .95, be correct to within .01.

18 A description of a poll in a certain newspaper says that one can be 95% confident that error due to sampling will be no more than plus or minus 3 percentage points.
A poll in the New York Times taken in Iowa says that “according to statistical theory, in 19 out of 20 cases the results based on such samples will differ by no more than 3 percentage points in either direction from what would have been obtained by interviewing all adult Iowans.” These are both attempts to explain the concept of confidence intervals. Do both statements say the same thing? If not, which do you think is the more accurate description?

9.2 Central Limit Theorem for Discrete Independent Trials

We have illustrated the Central Limit Theorem in the case of Bernoulli trials, but this theorem applies to a much more general class of chance processes. In particular, it applies to any independent trials process such that the individual trials have finite variance. For such a process, both the normal approximation for individual terms and the Central Limit Theorem are valid.
Let Sn = X1 + X2 + · · · + Xn be the sum of n independent discrete random variables of an independent trials process with common distribution function m(x) defined on the integers, with mean µ and variance σ². We have seen in Section 7.2 that the distributions for such independent sums have shapes resembling the normal curve, but the largest values drift to the right and the curves flatten out (see Figure 7.6). We can prevent this just as we did for Bernoulli trials.

Standardized Sums

Consider the standardized random variable

\[ S_n^* = \frac{S_n - n\mu}{\sqrt{n\sigma^2}}\ . \]

This standardizes Sn to have expected value 0 and variance 1. If Sn = j, then S_n^* has the value x_j with

\[ x_j = \frac{j - n\mu}{\sqrt{n\sigma^2}}\ . \]

We can construct a spike graph just as we did for Bernoulli trials. Each spike is centered at some x_j. The distance between successive spikes is

\[ b = \frac{1}{\sqrt{n\sigma^2}}\ , \]

and the height of the spike is

\[ h = \sqrt{n\sigma^2}\,P(S_n = j)\ . \]

The case of Bernoulli trials is the special case for which X_j = 1 if the jth outcome is a success and 0 otherwise; then µ = p and σ² = pq.

We now illustrate this process for two different discrete distributions. The first is the distribution m, given by

\[ m = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ .2 & .2 & .2 & .2 & .2 \end{pmatrix} . \]

In Figure 9.7 we show the standardized sums for this distribution for the cases n = 2 and n = 10. Even for n = 2 the approximation is surprisingly good.

For our second discrete distribution, we choose

\[ m = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ .4 & .3 & .1 & .1 & .1 \end{pmatrix} . \]

This distribution is quite asymmetric and the approximation is not very good for n = 3, but by n = 10 we again have an excellent approximation (see Figure 9.8). Figures 9.7 and 9.8 were produced by the program CLTIndTrialsPlot.
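The spike heights can be computed by repeated convolution. The sketch below (our own code, not the book's CLTIndTrialsPlot program) builds the distribution of Sn for the uniform distribution m above and checks that the central spike height h = √(nσ²) P(Sn = j) is close to the normal density φ(0) ≈ .3989 when n = 10.

```python
import math

def convolve(d1, d2):
    """Distribution of the sum of two independent integer-valued variables."""
    out = {}
    for x, px in d1.items():
        for y, py in d2.items():
            out[x + y] = out.get(x + y, 0.0) + px * py
    return out

m = {1: .2, 2: .2, 3: .2, 4: .2, 5: .2}             # uniform on 1..5
mu = sum(x * p for x, p in m.items())                # mean, here 3
var = sum((x - mu) ** 2 * p for x, p in m.items())   # variance, here 2

n = 10
dist = m
for _ in range(n - 1):
    dist = convolve(dist, m)

scale = math.sqrt(n * var)
j = round(n * mu)                     # central spike, where x_j = 0
h = scale * dist[j]                   # spike height
print(h, 1 / math.sqrt(2 * math.pi))  # h is close to phi(0) = .3989...
```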
Figure 9.7: Distribution of standardized sums (n = 2 and n = 10).

Figure 9.8: Distribution of standardized sums (n = 3 and n = 10).

Approximation Theorem

As in the case of Bernoulli trials, these graphs suggest the following approximation theorem for the individual probabilities.

Theorem 9.3 Let X1, X2, . . . , Xn be an independent trials process and let Sn = X1 + X2 + · · · + Xn. Assume that the greatest common divisor of the differences of all the values that the Xj can take on is 1. Let E(Xj) = µ and V(Xj) = σ². Then for n large,

\[ P(S_n = j) \sim \frac{\phi(x_j)}{\sqrt{n\sigma^2}}\ , \]

where x_j = (j − nµ)/√(nσ²), and φ(x) is the standard normal density. □

The program CLTIndTrialsLocal implements this approximation. When we run this program for 6 rolls of a die, and ask for the probability that the sum of the rolls equals 21, we obtain an actual value of .09285, and a normal approximation value of .09537. If we run this program for 24 rolls of a die, and ask for the probability that the sum of the rolls is 72, we obtain an actual value of .01724 and a normal approximation value of .01705. These results show that the normal approximations are quite good.
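Theorem 9.3 is easy to check numerically. The following sketch (our own code, not the book's CLTIndTrialsLocal program) compares the exact probability that 6 die rolls sum to 21 with the approximation φ(x_j)/√(nσ²); it reproduces the values .09285 and .09537 quoted above.

```python
import math

def exact_prob(n, target):
    """P(sum of n fair dice = target), by repeated convolution."""
    dist = {0: 1.0}
    for _ in range(n):
        new = {}
        for s, p in dist.items():
            for face in range(1, 7):
                new[s + face] = new.get(s + face, 0.0) + p / 6
        dist = new
    return dist.get(target, 0.0)

def local_approx(n, j):
    """phi(x_j) / sqrt(n sigma^2) for fair dice (mu = 7/2, sigma^2 = 35/12)."""
    mu, var = 7 / 2, 35 / 12
    x = (j - n * mu) / math.sqrt(n * var)
    phi = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    return phi / math.sqrt(n * var)

print(round(exact_prob(6, 21), 5))    # 0.09285
print(round(local_approx(6, 21), 5))  # 0.09537
```

The same two functions reproduce the 24-roll values .01724 and .01705 as well.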
Central Limit Theorem for a Discrete Independent Trials Process

The Central Limit Theorem for a discrete independent trials process is as follows.

Theorem 9.4 (Central Limit Theorem) Let Sn = X1 + X2 + · · · + Xn be the sum of n discrete independent random variables with common distribution having expected value µ and variance σ². Then, for a < b,

\[ \lim_{n \to \infty} P\left( a < \frac{S_n - n\mu}{\sqrt{n\sigma^2}} < b \right) = \frac{1}{\sqrt{2\pi}} \int_a^b e^{-x^2/2}\, dx\ . \]

□

We will give the proofs of Theorems 9.3 and 9.4 in Section 10.3. Here we consider several examples.

Examples

Example 9.5 A die is rolled 420 times. What is the probability that the sum of the rolls lies between 1400 and 1550?

The sum is a random variable

\[ S_{420} = X_1 + X_2 + \cdots + X_{420}\ , \]

where each Xj has distribution

\[ m_X = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 \\ 1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6 \end{pmatrix} . \]

We have seen that µ = E(X) = 7/2 and σ² = V(X) = 35/12. Thus, E(S420) = 420 · 7/2 = 1470, σ²(S420) = 420 · 35/12 = 1225, and σ(S420) = 35. Therefore,

\[
\begin{aligned}
P(1400 \le S_{420} \le 1550) &\approx P\left( \frac{1399.5 - 1470}{35} \le S_{420}^* \le \frac{1550.5 - 1470}{35} \right) \\
&= P(-2.01 \le S_{420}^* \le 2.30) \\
&\approx NA(-2.01, 2.30) = .9670\ .
\end{aligned}
\]

We note that the program CLTIndTrialsGlobal could be used to calculate these probabilities. □

Example 9.6 A student’s grade point average is the average of his grades in 30 courses. The grades are based on 100 possible points and are recorded as integers. Assume that, in each course, the instructor makes an error in grading of k with probability |p/k|, where k = ±1, ±2, ±3, ±4, ±5. The probability of no error is then 1 − (137/30)p. (The parameter p represents the inaccuracy of the instructor’s grading.) Thus, in each course, there are two grades for the student, namely the
“correct” grade and the recorded grade. So there are two average grades for the student, namely the average of the correct grades and the average of the recorded grades.

We wish to estimate the probability that these two average grades differ by less than .05 for a given student. We now assume that p = 1/20. We also assume that the total error is the sum S30 of 30 independent random variables each with distribution

\[ m_X = \begin{pmatrix} -5 & -4 & -3 & -2 & -1 & 0 & 1 & 2 & 3 & 4 & 5 \\ \frac{1}{100} & \frac{1}{80} & \frac{1}{60} & \frac{1}{40} & \frac{1}{20} & \frac{463}{600} & \frac{1}{20} & \frac{1}{40} & \frac{1}{60} & \frac{1}{80} & \frac{1}{100} \end{pmatrix} . \]

One can easily calculate that E(X) = 0 and σ²(X) = 1.5. Then we have P
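The moments quoted for this error distribution are quick to verify exactly. A minimal check (our own code), using rational arithmetic with p = 1/20:

```python
from fractions import Fraction

p = Fraction(1, 20)
# P(error = k) = p/|k| for k = ±1..±5; P(no error) = 1 - (137/30) p
dist = {k: p / abs(k) for k in range(-5, 6) if k != 0}
dist[0] = 1 - Fraction(137, 30) * p

assert sum(dist.values()) == 1       # dist[0] comes out to 463/600
mean = sum(k * q for k, q in dist.items())
var = sum((k - mean) ** 2 * q for k, q in dist.items())
print(mean, var)  # 0 and 3/2, matching E(X) = 0 and sigma^2(X) = 1.5
```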
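Normal interval probabilities such as the NA(−2.01, 2.30) of Example 9.5 can be evaluated with the error function. A minimal sketch (our own code, not the book's normal-area table or the CLTIndTrialsGlobal program), assuming the identity Φ(x) = (1 + erf(x/√2))/2 for the standard normal cdf:

```python
import math

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def NA(a, b):
    """Area under the standard normal density between a and b."""
    return Phi(b) - Phi(a)

# Example 9.5, with the continuity-corrected endpoints rounded to -2.01 and 2.30
print(round(NA(-2.01, 2.30), 3))  # 0.967
```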
Theorem 9.5 (Central Limit Theorem) Let X1, X2, . . . , Xn, . . . be a sequence of independent discrete random variables, and let Sn = X1 + X2 + · · · + Xn. For each n, denote the mean and variance of Xn by µn and σ²n, respectively. Define the mean and variance of Sn to be mn and s²n, respectively, and assume that sn → ∞. If there exists a constant A, such that |Xn| ≤ A for all n, then for a < b,

\[ \lim_{n \to \infty} P\left( a < \frac{S_n - m_n}{s_n} < b \right) = \frac{1}{\sqrt{2\pi}} \int_a^b e^{-x^2/2}\, dx\ . \]

□

The condition that |Xn| ≤ A for all n is sometimes described by saying that the sequence {Xn} is uniformly bounded. The condition that sn → ∞ is necessary (see Exercise 15).

We illustrate this theorem by generating a sequence of n random distributions on the interval [a, b]. We then convolute these distributions to find the distribution of the sum of n independent experiments governed by these distributions. Finally, we standardize the distribution for the sum to have mean 0 and standard deviation 1 and compare it with the normal density. The program CLTGeneral carries out this procedure. In Figure 9.9 we show the result of running this program for [a, b] = [−2, 4], and n = 1, 4, and 10. We see that our first random distribution is quite asymmetric. By the time we choose the sum of ten such experiments we have a very good fit to the normal curve.

The above theorem essentially says that anything that can be thought of as being made up as the sum of many small independent pieces is approximately normally distributed. This brings us to one of the most important questions that was asked about genetics in the 1800’s.

The Normal Distribution and Genetics

When one looks at the distribution of heights of adults of one sex in a given population, one cannot help but notice that this distribution looks like the normal distribution. An example of this is shown in Figure 9.10. This figure shows the distribution of heights of 9593 women between the ages of 21 and 74.
These data come from the Health and Nutrition Examination Survey I (HANES I). For this survey, a sample of the U.S. civilian population was chosen. The survey was carried out between 1971 and 1974. A natural question to ask is “How does this come about?”. Francis Galton, an English scientist in the 19th century, studied this question, and other related questions, and constructed probability models that were of great importance in explaining the genetic effects on such attributes as height. In fact, one of the most important ideas in statistics, the idea of regression to the mean, was invented by Galton in his attempts to understand these genetic effects. Galton was faced with an apparent contradiction. On the one hand, he knew that the normal distribution arises in situations in which many small independent effects are being summed. On the other hand, he also knew that many quantitative
Figure 9.9: Sums of randomly chosen random variables.
Figure 9.10: Distribution of heights of adult women.

attributes, such as height, are strongly influenced by genetic factors: tall parents tend to have tall offspring. Thus in this case, there seem to be two large effects, namely the parents. Galton was certainly aware of the fact that non-genetic factors played a role in determining the height of an individual. Nevertheless, unless these non-genetic factors overwhelm the genetic ones, thereby refuting the hypothesis that heredity is important in determining height, it did not seem possible for sets of parents of given heights to have offspring whose heights were normally distributed.

One can express the above problem symbolically as follows. Suppose that we choose two specific positive real numbers x and y, and then find all pairs of parents one of whom is x units tall and the other of whom is y units tall. We then look at all of the offspring of these pairs of parents. One can postulate the existence of a function f(x, y) which denotes the genetic effect of the parents’ heights on the heights of the offspring. One can then let W denote the effects of the non-genetic factors on the heights of the offspring. Then, for a given set of heights {x, y}, the random variable which represents the heights of the offspring is given by

\[ H = f(x, y) + W\ , \]

where f is a deterministic function, i.e., it gives one output for a pair of inputs {x, y}. If we assume that the effect of f is large in comparison with the effect of W, then the variance of W is small. But since f is deterministic, the variance of H equals the variance of W, so the variance of H is small. However, Galton observed from his data that the variance of the heights of the offspring of a given pair of parent heights is not small. This would seem to imply that inheritance plays a small role in the determination of the height of an individual.
Later in this section, we will describe the way in which Galton got around this problem. We will now consider the modern explanation of why certain traits, such as heights, are approximately normally distributed. In order to do so, we need to introduce some terminology from the field of genetics. The cells in a living organism that are not directly involved in the transmission of genetic material to offspring are called somatic cells, and the remaining cells are called germ cells. Organisms of
a given species have their genetic information encoded in sets of physical entities, called chromosomes. The chromosomes are paired in each somatic cell. For example, human beings have 23 pairs of chromosomes in each somatic cell. The sex cells contain one chromosome from each pair. In sexual reproduction, two sex cells, one from each parent, contribute their chromosomes to create the set of chromosomes for the offspring.

Chromosomes contain many subunits, called genes. Genes consist of molecules of DNA, and one gene has, encoded in its DNA, information that leads to the regulation of proteins. In the present context, we will consider those genes containing information that has an effect on some physical trait, such as height, of the organism. The pairing of the chromosomes gives rise to a pairing of the genes on the chromosomes.

In a given species, each gene can be any one of several forms. These various forms are called alleles. One should think of the different alleles as potentially producing different effects on the physical trait in question. Of the two alleles that are found in a given gene pair in an organism, one of the alleles came from one parent and the other allele came from the other parent. The possible types of pairs of alleles (without regard to order) are called genotypes.

If we assume that the height of a human being is largely controlled by a specific gene, then we are faced with the same difficulty that Galton was. We are assuming that each parent has a pair of alleles which largely controls their heights. Since each parent contributes one allele of this gene pair to each of its offspring, there are four possible allele pairs for the offspring at this gene location. The assumption is that these pairs of alleles largely control the height of the offspring, and we are also assuming that genetic factors outweigh non-genetic factors.
It follows that among the offspring we should see several modes in the height distribution of the offspring, one mode corresponding to each possible pair of alleles. This distribution does not correspond to the observed distribution of heights.

An alternative hypothesis, which does explain the observation of normally distributed heights in offspring of a given sex, is the multiple-gene hypothesis. Under this hypothesis, we assume that there are many genes that affect the height of an individual. These genes may differ in the amount of their effects. Thus, we can represent each gene pair by a random variable Xi, where the value of the random variable is the allele pair’s effect on the height of the individual. Thus, for example, if each parent has two different alleles in the gene pair under consideration, then the offspring has one of four possible pairs of alleles at this gene location. Now the height of the offspring is a random variable, which can be expressed as

\[ H = X_1 + X_2 + \cdots + X_n + W\ , \]

if there are n genes that affect height. (Here, as before, the random variable W denotes non-genetic effects.) Although n is fixed, if it is fairly large, then Theorem 9.5 implies that the sum X1 + X2 + · · · + Xn is approximately normally distributed. Now, if we assume that the Xi’s have a significantly larger cumulative effect than W does, then H is approximately normally distributed.

Another observed feature of the distribution of heights of adults of one sex in
a population is that the variance does not seem to increase or decrease from one generation to the next. This was known at the time of Galton, and his attempts to explain this led him to the idea of regression to the mean. This idea will be discussed further in the historical remarks at the end of the section. (The reason that we only consider one sex is that human heights are clearly sex-linked, and in general, if we have two populations that are each normally distributed, then their union need not be normally distributed.)

Using the multiple-gene hypothesis, it is easy to explain why the variance should be constant from generation to generation. We begin by assuming that for a specific gene location, there are k alleles, which we will denote by A1, A2, . . . , Ak. We assume that the offspring are produced by random mating. By this we mean that given any offspring, it is equally likely that it came from any pair of parents in the preceding generation. There is another way to look at random mating that makes the calculations easier. We consider the set S of all of the alleles (at the given gene location) in all of the germ cells of all of the individuals in the parent generation. In terms of the set S, by random mating we mean that each pair of alleles in S is equally likely to reside in any particular offspring. (The reader might object to this way of thinking about random mating, as it allows two alleles from the same parent to end up in an offspring; but if the number of individuals in the parent population is large, then whether or not we allow this event does not affect the probabilities very much.)

For 1 ≤ i ≤ k, we let pi denote the proportion of alleles in the parent population that are of type Ai. It is clear that this is the same as the proportion of alleles in the germ cells of the parent population, assuming that each parent produces roughly the same number of germ cells.
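Random mating in the sense just described is easy to simulate. In the sketch below (our own illustration; the allele pool, its proportions, and the seed are hypothetical choices of ours), each offspring receives a pair of alleles chosen uniformly from the parental pool S, and the allele proportions among the offspring come out the same as in the parent generation.

```python
import random
from collections import Counter

random.seed(11)

# Hypothetical parental allele pool S with proportions p1 = .5, p2 = .3, p3 = .2
S = ["A1"] * 500 + ["A2"] * 300 + ["A3"] * 200

# Each offspring gets a pair of alleles chosen uniformly from S
offspring = [tuple(sorted(random.sample(S, 2))) for _ in range(50_000)]

alleles = Counter(a for pair in offspring for a in pair)
total = sum(alleles.values())
for a in ("A1", "A2", "A3"):
    print(a, round(alleles[a] / total, 2))  # close to .5, .3, .2
```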
Consider the distribution of alleles in the offspring. Since each germ cell is equally likely to be chosen for any particular offspring, the distribution of alleles in the offspring is the same as in the parents.

We next consider the distribution of genotypes in the two generations. We will prove the following fact: the distribution of genotypes in the offspring generation depends only upon the distribution of alleles in the parent generation (in particular, it does not depend upon the distribution of genotypes in the parent generation). Consider the possible genotypes; there are k(k + 1)/2 of them. Under our assumptions, the genotype A_iA_i will occur with frequency p_i², and the genotype A_iA_j, with i ≠ j, will occur with frequency 2p_ip_j. Thus, the frequencies of the genotypes depend only upon the allele frequencies in the parent generation, as claimed.

This means that if we start with a certain generation, and a certain distribution of alleles, then in all generations after the one we started with, both the allele distribution and the genotype distribution will be fixed. This last statement is known as the Hardy-Weinberg Law.

We can describe the consequences of this law for the distribution of heights among adults of one sex in a population. We recall that the height of an offspring was given by a random variable H, where

\[ H = X_1 + X_2 + \cdots + X_n + W\ , \]

with the Xi’s corresponding to the genes that affect height, and the random variable
W denoting non-genetic effects. The Hardy-Weinberg Law states that for each Xi, the distribution in the offspring generation is the same as the distribution in the parent generation. Thus, if we assume that the distribution of W is roughly the same from generation to generation (or if we assume that its effects are small), then the distribution of H is the same from generation to generation. (In fact, dietary effects are part of W, and it is clear that in many human populations, diets have changed quite a bit from one generation to the next in recent times. This change is thought to be one of the reasons that humans, on the average, are getting taller. It is also the case that the effects of W are thought to be small relative to the genetic effects of the parents.)

Discussion

Generally speaking, the Central Limit Theorem contains more information than the Law of Large Numbers, because it gives us detailed information about the shape of the distribution of S_n^*; for large n the shape is approximately the same as the shape of the standard normal density. More specifically, the Central Limit Theorem says that if we standardize and height-correct the distribution of Sn, then the normal density function is a very good approximation to this distribution when n is large. Thus, we have a computable approximation for the distribution for Sn, which provides us with a powerful technique for generating answers for all sorts of questions about sums of independent random variables, even if the individual random variables have different distributions.

Historical Remarks

In the mid-1800’s, the Belgian mathematician Quetelet⁷ had shown empirically that the normal distribution occurred in real data, and had also given a method for fitting the normal curve to a given data set. Laplace⁸ had shown much earlier that the sum of many independent identically distributed random variables is approximately normal.
Galton knew that certain physical traits in a population appeared to be approximately normally distributed, but he did not consider Laplace's result to be a good explanation of how this distribution comes about. We give a quote from Galton that appears in the fascinating book by S. Stigler9 on the history of statistics:

First, let me point out a fact which Quetelet and all writers who have followed in his paths have unaccountably overlooked, and which has an intimate bearing on our work to-night. It is that, although characteristics of plants and animals conform to the law, the reason of their doing so is as yet totally unexplained. The essence of the law is that differences should be wholly due to the collective actions of a host of independent petty influences in various combinations...Now the processes of heredity...are not petty influences, but very important ones...The conclusion

7. S. Stigler, The History of Statistics, (Cambridge: Harvard University Press, 1986), p. 203.
8. ibid., p. 136.
9. ibid., p. 281.
9.2. DISCRETE INDEPENDENT TRIALS 351

Figure 9.11: Two-stage version of the quincunx.

is...that the processes of heredity must work harmoniously with the law of deviation, and be themselves in some sense conformable to it.

Galton invented a device known as a quincunx (now commonly called a Galton board), which we used in Example 3.10 to show how to physically obtain a binomial distribution. Of course, the Central Limit Theorem says that for large values of the parameter n, the binomial distribution is approximately normal. Galton used the quincunx to explain how inheritance affects the distribution of a trait among offspring.

We consider, as Galton did, what happens if we interrupt, at some intermediate height, the progress of the shot that is falling in the quincunx. The reader is referred to Figure 9.11. This figure is a drawing of Karl Pearson,10 based upon Galton's notes. In this figure, the shot is being temporarily segregated into compartments at the line AB. (The line A′B′ forms a platform on which the shot can rest.) If the line AB is not too close to the top of the quincunx, then the shot will be approximately normally distributed at this line. Now suppose that one compartment is opened, as shown in the figure. The shot from that compartment will fall, forming a normal distribution at the bottom of the quincunx. If now all of the compartments are

10. Karl Pearson, The Life, Letters and Labours of Francis Galton, vol. IIIB, (Cambridge at the University Press, 1930), p. 466. Reprinted with permission.
opened, all of the shot will fall, producing the same distribution as would occur if the shot were not temporarily stopped at the line AB. But the action of stopping the shot at the line AB, and then releasing the compartments one at a time, is just the same as convoluting two normal distributions. The normal distributions at the bottom, corresponding to each compartment at the line AB, are being mixed, with their weights being the number of shot in each compartment. On the other hand, it is already known that if the shot are unimpeded, the final distribution is approximately normal. Thus, this device shows that the convolution of two normal distributions is again normal.

Galton also considered the quincunx from another perspective. He segregated into seven groups, by weight, a set of 490 sweet pea seeds. He gave 10 seeds from each of the seven groups to each of seven friends, who grew the plants from the seeds. Galton found that each group produced seeds whose weights were normally distributed. (The sweet pea reproduces by self-pollination, so he did not need to consider the possibility of interaction between different groups.) In addition, he found that the variances of the weights of the offspring were the same for each group. This segregation into groups corresponds to the compartments at the line AB in the quincunx. Thus, the sweet peas were acting as though they were being governed by a convolution of normal distributions.

He now was faced with a problem. We have shown in Chapter 7, and Galton knew, that the convolution of two normal distributions produces a normal distribution with a larger variance than either of the original distributions. But his data on the sweet pea seeds showed that the variance of the offspring population was the same as the variance of the parent population. His answer to this problem was to postulate a mechanism that he called reversion, and is now called regression to the mean.
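The two-stage quincunx argument rests on the fact that the convolution of two normal distributions is again normal, with the variances adding. A quick simulation makes this concrete (a sketch; the two stage standard deviations are arbitrary choices of ours, and each sample point is the sum of a "fall to line AB" and a "fall from AB to the bottom"):

```python
import math
import random
import statistics

random.seed(2)

s1, s2 = 1.0, 0.5   # standard deviations of the two stages (illustrative)
trials = 50000

# First stage: horizontal position at the line AB.
# Second stage: further displacement below AB.
# The final position is the sum, i.e. the convolution of the two laws.
final = [random.gauss(0.0, s1) + random.gauss(0.0, s2) for _ in range(trials)]

print(f"sample variance of final position: {statistics.pvariance(final):.3f}")
print(f"predicted s1^2 + s2^2:             {s1**2 + s2**2:.3f}")
```

The sample variance matches s1^2 + s2^2, and a histogram of `final` would show the familiar bell shape — exactly what Galton's interrupted quincunx demonstrates physically.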
As Stigler puts it:11

The seven groups of progeny were normally distributed, but not about their parents' weight. Rather they were in every case distributed about a value that was closer to the average population weight than was that of the parent. Furthermore, this reversion followed "the simplest possible law," that is, it was linear. The average deviation of the progeny from the population average was in the same direction as that of the parent, but only a third as great. The mean progeny reverted to type, and the increased variation was just sufficient to maintain the population variability.

Galton illustrated reversion with the diagram shown in Figure 9.12.12 The parent population is shown at the top of the figure, and the slanted lines are meant to correspond to the reversion effect. The offspring population is shown at the bottom of the figure.

11. ibid., p. 282.
12. Karl Pearson, The Life, Letters and Labours of Francis Galton, vol. IIIA, (Cambridge at the University Press, 1930), p. 9. Reprinted with permission.
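Stigler's description pins down the arithmetic of reversion: if each offspring's deviation from the population mean is a fraction r of its parent's deviation plus independent noise, then for the population variance σ^2 to be maintained the noise variance must be (1 − r^2)σ^2, since Var(offspring) = r^2 σ^2 + (1 − r^2)σ^2 = σ^2. The sketch below checks this balance with r = 1/3, the fraction Stigler quotes; the mean, standard deviation, and sample size are illustrative numbers of ours, not Galton's data:

```python
import math
import random
import statistics

random.seed(3)

m, sigma = 100.0, 9.0      # population mean and sd (illustrative)
r = 1.0 / 3.0              # offspring deviation is one third of the parent's

parents = [random.gauss(m, sigma) for _ in range(40000)]

# Noise sd chosen so that reversion plus added variation exactly
# maintains the population variance: r^2*sigma^2 + (1-r^2)*sigma^2 = sigma^2.
noise_sd = sigma * math.sqrt(1.0 - r * r)
offspring = [m + r * (p - m) + random.gauss(0.0, noise_sd) for p in parents]

print(f"parent sd:    {statistics.pstdev(parents):.2f}")
print(f"offspring sd: {statistics.pstdev(offspring):.2f}")
```

The two standard deviations agree, which is Galton's resolution of the puzzle: reversion shrinks each family's contribution, and the fresh variation restores exactly what was lost.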
Figure 9.12: Galton's explanation of reversion.