Let $P_{\theta,U}$ denote the marginal probability measure of $U$ induced by $P_\theta$. By the theorem of total expectation (see Theorem 3.5.2), we have that

$$\mathrm{MSE}_\theta(T) = E_{P_{\theta,U}}\!\left( E_{P_\theta(\cdot\,|\,U=u)}\!\left( (T-\psi(\theta))^2 \right) \right),$$

where $E_{P_\theta(\cdot\,|\,U=u)}((T-\psi(\theta))^2)$ denotes the conditional MSE of $T$, given $U=u$. Now, by Theorem 8.1.1 applied to the conditional distribution,

$$E_{P_\theta(\cdot\,|\,U=u)}\!\left( (T-\psi(\theta))^2 \right) = \mathrm{Var}_{P_\theta(\cdot\,|\,U=u)}(T) + \left( E_{P_\theta(\cdot\,|\,U=u)}(T) - \psi(\theta) \right)^2. \quad (8.1.3)$$

As both terms in (8.1.3) are nonnegative, and recalling the definition of $T_U$, we have

$$\mathrm{MSE}_\theta(T) \ge E_{P_{\theta,U}}\!\left( (T_U - \psi(\theta))^2 \right).$$

Now $(T_U - \psi(\theta))^2$ is a function of $U$ (Theorem 3.5.4), and so, by the theorem of total expectation,

$$E_{P_{\theta,U}}\!\left( (T_U - \psi(\theta))^2 \right) = E_\theta\!\left( (T_U - \psi(\theta))^2 \right) = \mathrm{MSE}_\theta(T_U),$$

and the theorem is proved.

Theorem 8.1.3 shows that we can always improve on (or at least make no worse) any estimator $T$ that possesses a finite second moment, by replacing $T(s)$ by the estimate $T_U(s)$. This process is sometimes referred to as the Rao–Blackwellization of an estimator.

Notice that putting $c = \psi(\theta)$ in Theorem 8.1.1 implies that

$$\mathrm{MSE}_\theta(T) = \mathrm{Var}_\theta(T) + \left( E_\theta(T) - \psi(\theta) \right)^2. \quad (8.1.4)$$

So the MSE of $T$ can be decomposed as the sum of the variance of $T$ plus the squared bias of $T$ (this was also proved in Theorem 6.3.1).

Theorem 8.1.1 has another important implication, for (8.1.4) is minimized by taking $\psi(\theta) = E_\theta(T)$. This indicates that, on average, the estimator $T$ comes closer (in terms of squared error) to $E_\theta(T)$ than to any other value. So, if we are sampling from the distribution specified by $\theta$, then $T(s)$ is a natural estimate of $E_\theta(T)$. Therefore, for a general characteristic $\psi(\theta)$, it makes sense to restrict attention to estimators that have bias equal to 0. This leads to the following definition.

Chapter 8: Optimal Inferences 437

Definition 8.1.1 An estimator $T$ of $\psi(\theta)$ is unbiased if $E_\theta(T) = \psi(\theta)$ for every $\theta \in \Omega$.

Notice that, for unbiased estimators with finite second moment, (8.1.4) becomes $\mathrm{MSE}_\theta(T) = \mathrm{Var}_\theta(T)$. Therefore, our search for an optimal estimator has become the search for an unbiased estimator with smallest variance. If such an estimator exists, we give it a special name.

Definition 8.1.2 An unbiased estimator $T$ of $\psi(\theta)$ with smallest variance for each $\theta \in \Omega$ is called a uniformly minimum variance unbiased (UMVU) estimator.

It is important to note that the Rao–Blackwell theorem (Theorem 8.1.3) also applies to unbiased estimators. This is because the Rao–Blackwellization of an unbiased estimator yields an unbiased estimator, as the following result demonstrates.

Theorem 8.1.4 (Rao–Blackwell for unbiased estimators) If $T$ has finite second moment, is unbiased for $\psi(\theta)$ for every $\theta \in \Omega$, and $U$ is a sufficient statistic, then $E_\theta(T_U) = \psi(\theta)$ (so $T_U$ is also unbiased for $\psi(\theta)$) and $\mathrm{Var}_\theta(T_U) \le \mathrm{Var}_\theta(T)$.

PROOF Using the theorem of total expectation (Theorem 3.5.2), we have $E_\theta(T_U) = E_{P_{\theta,U}}(T_U) = E_\theta(T) = \psi(\theta)$. So $T_U$ is unbiased for $\psi(\theta)$, and therefore $\mathrm{MSE}_\theta(T) = \mathrm{Var}_\theta(T)$ and $\mathrm{MSE}_\theta(T_U) = \mathrm{Var}_\theta(T_U)$. Applying Theorem 8.1.3 gives $\mathrm{Var}_\theta(T_U) \le \mathrm{Var}_\theta(T)$.

There are many situations in which the theory of unbiased estimation leads to good estimators. However, the following example illustrates that in some problems there are no unbiased estimators, and hence the theory has some limitations.

EXAMPLE 8.1.2 The Nonexistence of an Unbiased Estimator
Suppose that $(x_1, \dots, x_n)$ is a sample from the Bernoulli$(\theta)$ and we wish to find a UMVU estimator of $\psi(\theta) = \theta/(1-\theta)$, the odds in favor of a success occurring. From Theorem 8.1.4, we can restrict our search to unbiased estimators $T$ that are functions of the sufficient statistic $n\bar{x}$. Such a $T$ satisfies $E_\theta(T(n\bar X)) = \theta/(1-\theta)$ for every $\theta \in [0, 1)$. Recalling that $n\bar X \sim$ Binomial$(n, \theta)$, this implies that

$$\sum_{k=0}^{n} T(k) \binom{n}{k} \theta^k (1-\theta)^{n-k} = \frac{\theta}{1-\theta}$$

for every $\theta \in [0, 1)$. By the binomial theorem, we have

$$(1-\theta)^{n-k} = \sum_{j=0}^{n-k} \binom{n-k}{j} (-\theta)^j.$$

438 Section 8.1: Optimal Unbiased Estimation

Substituting this into the preceding expression for $E_\theta(T(n\bar X))$ and writing this in terms of powers of $\theta$ leads to

$$\frac{\theta}{1-\theta} = \sum_{i=0}^{n} c_i \theta^i \quad (8.1.5)$$

for constants $c_i$ determined by the values $T(0), \dots, T(n)$. Now the left-hand side of (8.1.5) goes to $\infty$ as $\theta \to 1$, but the right-hand side is a polynomial in $\theta$, which is bounded on $[0, 1]$. Therefore, an unbiased estimator of $\psi(\theta)$ cannot exist.

If a characteristic $\psi(\theta)$ has an unbiased estimator, then it is said to be U-estimable. It should be kept in mind, however, that just because a parameter is not U-estimable does not mean that we cannot estimate it! For example, $\psi(\theta) = \theta/(1-\theta)$ in Example 8.1.2 is a 1–1 function of $\theta$, so the MLE of $\psi(\theta)$ is given by $\bar{x}/(1-\bar{x})$ (see Theorem 6.2.1); this seems like a sensible estimator, even if it is biased.

8.1.2 Completeness and the Lehmann–Scheffé Theorem

In certain circumstances, if an unbiased estimator exists and is a function of a sufficient statistic $U$, then there is only one such estimator, so it must be UMVU. We need the concept of completeness to establish this.

Definition 8.1.3 A statistic $U$ is complete if any function $h$ of $U$ that satisfies $E_\theta(h(U)) = 0$ for each $\theta \in \Omega$ also satisfies $h(U(s)) = 0$ with probability 1 for every $\theta$ (i.e., $P_\theta(\{s : h(U(s)) = 0\}) = 1$ for every $\theta \in \Omega$).

In probability theory, we treat two functions as equivalent if they differ only on a set having probability content 0, as the probability of the functions taking different values at an observed response value is 0. So in Definition 8.1.3, we need not distinguish between $h$ and the constant 0. Therefore, a statistic $U$ is complete if the only unbiased estimator of 0, based on $U$, is given by 0 itself. We can now derive the following result.

Theorem 8.1.5 (Lehmann–Scheffé) If $U$ is a complete sufficient statistic, and if $T$ depends on the data only through the value of $U$, has finite second moment for every $\theta \in \Omega$, and is unbiased for $\psi(\theta)$, then $T$ is UMVU.

PROOF Suppose that $T'$ is also an unbiased estimator of $\psi(\theta)$. By Theorem 8.1.4, we can assume that $T'$ depends on the data only through the value of $U$. Then there exist functions $h$ and $h'$ such that $T(s) = h(U(s))$ and $T'(s) = h'(U(s))$. Now $E_\theta(h(U) - h'(U)) = \psi(\theta) - \psi(\theta) = 0$ for every $\theta$, so, by the completeness of $U$, we have that $h(U) = h'(U)$ with probability 1 for each $\theta$, which implies that $T = T'$ with probability 1 for each $\theta$. This says that there is essentially only one unbiased estimator for $\psi(\theta)$ based on $U$, and so it must be UMVU.
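The variance reduction achieved by Rao–Blackwellization can be seen in a small simulation. The sketch below is an illustration with hypothetical parameter values, not part of the text: for a Bernoulli$(\theta)$ sample, the crude unbiased estimator $T = x_1$ is conditioned on the sufficient statistic $U = \sum_i x_i$; since $E(X_1 \mid U = t) = t/n$, the Rao–Blackwellized estimator is just the sample mean, whose MSE should be roughly $n$ times smaller.

```python
import random

def rao_blackwell_mse(theta=0.3, n=10, trials=100_000, seed=1):
    # T = x1 is unbiased for theta; its Rao-Blackwellization given the
    # sufficient statistic U = sum(x) is T_U = E(X1 | U) = U/n = xbar.
    rng = random.Random(seed)
    se_t = se_tu = 0.0
    for _ in range(trials):
        xs = [1 if rng.random() < theta else 0 for _ in range(n)]
        se_t += (xs[0] - theta) ** 2           # squared error of T
        se_tu += (sum(xs) / n - theta) ** 2    # squared error of T_U
    return se_t / trials, se_tu / trials

mse_t, mse_tu = rao_blackwell_mse()
print(mse_t, mse_tu)  # roughly theta(1-theta) = 0.21 versus 0.21/10 = 0.021
```

Both estimators are unbiased, so the MSEs here are essentially the variances $\theta(1-\theta)$ and $\theta(1-\theta)/n$, in line with Theorem 8.1.4.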
The Rao–Blackwell theorem for unbiased estimators (Theorem 8.1.4), together with the Lehmann–Scheffé theorem, provides a method for obtaining a UMVU estimator of $\psi(\theta)$. Suppose we can find an unbiased estimator $T$ that has finite second moment. If we also have a complete sufficient statistic $U$, then, by Theorem 8.1.4, $T_U$ is unbiased for $\psi(\theta)$ and depends on the data only through the value of $U$, because $T_U(s_1) = T_U(s_2)$ whenever $U(s_1) = U(s_2)$. Therefore, by Theorem 8.1.5, $T_U$ is UMVU for $\psi(\theta)$.

It is not necessary, in a given problem, that a complete sufficient statistic exist. In fact, it can be proved that the only candidate for this is a minimal sufficient statistic (recall the definition in Section 6.1.1). So in a given problem, we must obtain a minimal sufficient statistic and then determine whether or not it is complete. We illustrate this via an example.

EXAMPLE 8.1.3 Location Normal
Suppose that $(x_1, \dots, x_n)$ is a sample from an $N(\mu, \sigma_0^2)$ distribution, where $\mu \in R^1$ is unknown and $\sigma_0^2 > 0$ is known. In Example 6.1.7, we showed that $\bar{x}$ is a minimal sufficient statistic for this model. In fact, $\bar{x}$ is also complete for this model. The proof of this is a bit involved and is presented in Section 8.5.

Given that $\bar{x}$ is a complete, minimal sufficient statistic, this implies that $T(\bar{x})$ is a UMVU estimator of its mean $E_\mu(T(\bar X))$ whenever $T$ has a finite second moment for every $\mu \in R^1$. In particular, $\bar{x}$ is the UMVU estimator of $\mu$, and $\bar{x} + \sigma_0 z_p$ is the UMVU estimator of $\mu + \sigma_0 z_p$ (the $p$th quantile of the true distribution). Furthermore, $\bar{x}^2 - \sigma_0^2/n$ is the UMVU estimator of $\mu^2$, because $E_\mu(\bar X^2) = \mu^2 + \sigma_0^2/n$.

The arguments needed to show the completeness of a minimal sufficient statistic in a problem are often similar to the one required in Example 8.1.3 (see Challenge 8.1.27). Rather than pursue such technicalities here, we quote some important examples in which the minimal sufficient statistic is complete.

EXAMPLE 8.1.4 Location-Scale Normal
Suppose that $(x_1, \dots, x_n)$ is a sample from an $N(\mu, \sigma^2)$ distribution, where $\mu \in R^1$ and $\sigma > 0$ are unknown. The parameter in this model is two-dimensional and is given by $(\mu, \sigma^2) \in R^1 \times (0, \infty)$. We showed, in Example 6.1.8, that $(\bar{x}, s^2)$ is a minimal sufficient statistic for this model. In fact, it can be shown that $(\bar{x}, s^2)$ is a complete minimal sufficient statistic. Therefore, $T(\bar{x}, s^2)$ is a UMVU estimator of $E_{(\mu,\sigma^2)}(T(\bar X, S^2))$ whenever the second moment of $T(\bar{x}, s^2)$ is finite for every $(\mu, \sigma^2)$. In particular, $\bar{x}$ is the UMVU estimator of $\mu$, and $s^2$ is UMVU for $\sigma^2$.

EXAMPLE 8.1.5 Distribution-Free Models
Suppose that $(x_1, \dots, x_n)$ is a sample from some continuous distribution on $R^1$. The statistical model comprises all continuous distributions on $R^1$. It can be shown that the order statistics $(x_{(1)}, \dots, x_{(n)})$ make up a complete minimal sufficient statistic for this model. Therefore, $T(x_{(1)}, \dots, x_{(n)})$ is UMVU for $E(T(X_{(1)}, \dots, X_{(n)}))$ whenever

$$E\!\left( T^2(X_{(1)}, \dots, X_{(n)}) \right) < \infty \quad (8.1.6)$$

for every continuous distribution. In particular, if $T : R^n \to R^1$ is bounded, then this is the case. For example, if $T(x_1, \dots, x_n)$ is the relative frequency of the event $A$ in the sample, then $T$ is UMVU for $P(A)$.

Now change the model assumption so that $(x_1, \dots, x_n)$ is a sample from some continuous distribution on $R^1$ that possesses its first $m$ moments. Again, it can be shown that the order statistics make up a complete minimal sufficient statistic. Therefore, $T(x_{(1)}, \dots, x_{(n)})$ is UMVU for $E(T(X_{(1)}, \dots, X_{(n)}))$ whenever (8.1.6) holds for every continuous distribution possessing its first $m$ moments. For example, when $m = 2$, $\bar{x}$ is UMVU for $E(X)$. When $m = 4$, we have that $s^2$ is UMVU for the population variance (see Exercise 8.1.2).

8.1.3 The Cramer–Rao Inequality (Advanced)

There is a fundamental inequality that holds for the variance of an estimator $T$. This is given by the Cramer–Rao inequality (sometimes called the information inequality). It is a corollary of the following inequality.

Theorem 8.1.6 (Covariance inequality) Suppose $T, U : S \to R^1$ with $0 < E_\theta(T^2) < \infty$ and $0 < E_\theta(U^2) < \infty$ for every $\theta \in \Omega$. Then

$$\mathrm{Var}_\theta(T) \ge \frac{\mathrm{Cov}_\theta(T, U)^2}{\mathrm{Var}_\theta(U)}$$

for every $\theta \in \Omega$. Equality holds if and only if

$$T(s) = E_\theta(T) + \frac{\mathrm{Cov}_\theta(T, U)}{\mathrm{Var}_\theta(U)} \left( U(s) - E_\theta(U) \right)$$

with probability 1 for every $\theta$ (i.e., if and only if $T(s)$ and $U(s)$ are linearly related).

PROOF This result follows immediately from the Cauchy–Schwartz inequality (Theorem 3.6.3).

Now suppose that $\Omega$ is an open subinterval of $R^1$ and we take

$$U(s) = S(\theta \,|\, s) = \frac{\partial}{\partial \theta} \ln f_\theta(s), \quad (8.1.7)$$

i.e., $U$ is the score function. Assume that the conditions discussed in Section 6.5 hold, so that $E_\theta(S(\theta \,|\, s)) = 0$ and Fisher's information $I(\theta) = \mathrm{Var}_\theta(S(\theta \,|\, s)) > 0$ is finite for all $\theta$. Then, using $E_\theta(U) = 0$, we have

$$\mathrm{Cov}_\theta(T, U) = \sum_s T(s) \left( \frac{\partial}{\partial \theta} \ln f_\theta(s) \right) f_\theta(s) = \sum_s T(s) \frac{\partial f_\theta(s)}{\partial \theta} = \frac{\partial}{\partial \theta} E_\theta(T) \quad (8.1.8)$$

in the discrete case, where we have assumed conditions like those discussed in Section 6.5, so we can pull the partial derivative through the sum. A similar argument gives the equality (8.1.8) in the continuous case as well.

The covariance inequality, applied with $U$ specified as in (8.1.7) and using (8.1.8), gives the following result.

Corollary 8.1.1 (Cramer–Rao or information inequality) Under the conditions described above,

$$\mathrm{Var}_\theta(T) \ge \left( \frac{\partial}{\partial \theta} E_\theta(T) \right)^2 I^{-1}(\theta)$$

for every $\theta \in \Omega$. Equality holds if and only if

$$T(s) = E_\theta(T) + \frac{\partial E_\theta(T)}{\partial \theta}\, I^{-1}(\theta)\, S(\theta \,|\, s)$$

with probability 1 for every $\theta$.

The Cramer–Rao inequality provides a fundamental lower bound on the variance of an estimator $T$. From (8.1.4), we know that the variance is a relevant measure of the accuracy of an estimator only when the estimator is unbiased, so we restate Corollary 8.1.1 for this case.

Corollary 8.1.2 Under the conditions of Corollary 8.1.1, when $T$ is an unbiased estimator of $\psi(\theta)$,

$$\mathrm{Var}_\theta(T) \ge \left( \psi'(\theta) \right)^2 I^{-1}(\theta)$$

for every $\theta \in \Omega$. Equality holds if and only if

$$T(s) = \psi(\theta) + \psi'(\theta)\, I^{-1}(\theta)\, S(\theta \,|\, s) \quad (8.1.9)$$

with probability 1 for every $\theta$.

Notice that when $\psi(\theta) = \theta$, Corollary 8.1.2 says that the variance of the unbiased estimator $T$ is bounded below by the reciprocal of the Fisher information. More generally, when $\psi$ is a 1–1, smooth transformation, we have (using Challenge 6.5.19) that the variance of an unbiased $T$ is again bounded below by the reciprocal of the Fisher information, but this time the model uses the parameterization in terms of $\psi$.

Corollary 8.1.2 has several interesting implications. First, if we obtain an unbiased estimator $T$ with variance at the lower bound, then we know immediately that it is UMVU. Second, we know that any unbiased estimator that achieves the lower bound is of the form given in (8.1.9). Note that the right-hand side of (8.1.9) must be independent of $\theta$ in order for this to be an estimator. If this is not the case, then there are no UMVU estimators whose variance achieves the lower bound. The following example demonstrates that there are cases in which UMVU estimators exist, but their variance does not achieve the lower bound.

EXAMPLE 8.1.6 Poisson Model
Suppose that $(x_1, \dots, x_n)$ is a sample from the Poisson$(\lambda)$ distribution, where $\lambda > 0$ is unknown. The log-likelihood is given by $l(\lambda \,|\, x_1, \dots, x_n) = n\bar{x} \ln \lambda - n\lambda$, so the score function is given by

$$S(\lambda \,|\, x_1, \dots, x_n) = \frac{n\bar{x}}{\lambda} - n,$$

and thus

$$I(\lambda) = E_\lambda\!\left( \left( \frac{n\bar X}{\lambda} - n \right)^{\!2} \right) = \frac{n^2}{\lambda^2}\, \mathrm{Var}_\lambda(\bar X) = \frac{n}{\lambda}.$$

Suppose we are estimating $\psi(\lambda) = \lambda$. Then the Cramer–Rao lower bound is given by $I^{-1}(\lambda) = \lambda/n$. Noting that $\bar{x}$ is unbiased for $\lambda$ and that $\mathrm{Var}_\lambda(\bar X) = \lambda/n$, we see immediately that $\bar{x}$ is UMVU and achieves the lower bound.

Now suppose that we are estimating $\psi(\lambda) = e^{-\lambda} = P_\lambda(X_1 = 0)$. The Cramer–Rao lower bound equals $(\psi'(\lambda))^2 I^{-1}(\lambda) = e^{-2\lambda} \lambda / n$, and

$$\psi(\lambda) + \psi'(\lambda)\, I^{-1}(\lambda)\, S(\lambda \,|\, x_1, \dots, x_n) = e^{-\lambda} - e^{-\lambda} \frac{\lambda}{n} \left( \frac{n\bar{x}}{\lambda} - n \right) = e^{-\lambda} (1 + \lambda - \bar{x}),$$

which is clearly not independent of $\lambda$. So there does not exist a UMVU estimator for $e^{-\lambda}$ that attains the lower bound.

Does there exist a UMVU estimator for $e^{-\lambda}$? Observe that when $n = 1$, $I_{\{0\}}(x_1)$ is an unbiased estimator of $e^{-\lambda}$, since $E_\lambda(I_{\{0\}}(X_1)) = P_\lambda(X_1 = 0) = e^{-\lambda}$. As it turns out, $\bar{x}$ is (for every $n$) a complete minimal sufficient statistic for this model, so when $n = 1$, by the Lehmann–Scheffé theorem, $I_{\{0\}}(x_1)$ is UMVU for $e^{-\lambda}$. Furthermore, $I_{\{0\}}(X_1)$ has variance $e^{-\lambda}(1 - e^{-\lambda})$, since $I_{\{0\}}(X_1) \sim$ Bernoulli$(e^{-\lambda})$. This implies that $e^{-\lambda}(1 - e^{-\lambda}) > \lambda e^{-2\lambda}$, so the variance exceeds the lower bound.

In general, we have that $n^{-1} \sum_{i=1}^n I_{\{0\}}(x_i)$ is an unbiased estimator of $e^{-\lambda}$, but it is not a function of $\bar{x}$. Thus we cannot apply the Lehmann–Scheffé theorem, but we can Rao–Blackwellize this estimator. Therefore, the UMVU estimator of $e^{-\lambda}$ is given by

$$\frac{1}{n} \sum_{i=1}^n E\!\left( I_{\{0\}}(X_i) \,\middle|\, \bar X = \bar{x} \right).$$

To determine this estimator in closed form, we reason as follows. The conditional probability function of $(X_1, \dots, X_n)$ given $\bar X = \bar{x}$, because $n\bar X$ is distributed Poisson$(n\lambda)$, is

$$\frac{\dfrac{\lambda^{x_1}}{x_1!} e^{-\lambda} \cdots \dfrac{\lambda^{x_n}}{x_n!} e^{-\lambda}}{\dfrac{(n\lambda)^{n\bar{x}}}{(n\bar{x})!} e^{-n\lambda}} = \binom{n\bar{x}}{x_1 \cdots x_n} \left( \frac{1}{n} \right)^{x_1} \cdots \left( \frac{1}{n} \right)^{x_n},$$

i.e., $(X_1, \dots, X_n)$ given $\bar X = \bar{x}$ is distributed Multinomial$(n\bar{x}, 1/n, \dots, 1/n)$. Accordingly, the UMVU estimator is given by

$$E\!\left( I_{\{0\}}(X_1) \,\middle|\, \bar X = \bar{x} \right) = P\!\left( X_1 = 0 \,\middle|\, \bar X = \bar{x} \right) = \left( 1 - \frac{1}{n} \right)^{n\bar{x}},$$

because $X_i \,|\, \bar X = \bar{x} \sim$ Binomial$(n\bar{x}, 1/n)$ for each $i = 1, \dots, n$. Certainly, it is not at all obvious from the functional form that this estimator is unbiased, let alone UMVU. So this result can be viewed as a somewhat remarkable application of the theory.

Recall now Theorems 6.5.2 and 6.5.3. The implications of these results, with some additional conditions, are that the MLE of $\theta$ is asymptotically unbiased for $\theta$ and that the asymptotic variance of the MLE is at the information lower bound. This is often interpreted to mean that, with large samples, the MLE makes full use of the information about $\theta$ contained in the data.
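The somewhat surprising closed form in Example 8.1.6 can be checked by simulation. The sketch below is illustrative, with hypothetical parameter values: it compares the UMVU estimator $(1 - 1/n)^{n\bar{x}}$ of $e^{-\lambda}$ against the plug-in MLE $e^{-\bar{x}}$; averaged over many samples, the first should center on $e^{-\lambda}$, while the second shows an upward bias.

```python
import math, random

def poisson_draw(rng, lam):
    # Knuth's product-of-uniforms method (adequate for small lam).
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def compare_estimators(lam=2.0, n=5, trials=200_000, seed=7):
    # UMVU estimator of psi(lam) = e^{-lam}: (1 - 1/n)^(n*xbar),
    # versus the biased plug-in MLE exp(-xbar).
    rng = random.Random(seed)
    umvu = plug = 0.0
    for _ in range(trials):
        total = sum(poisson_draw(rng, lam) for _ in range(n))  # n * xbar
        umvu += (1 - 1 / n) ** total
        plug += math.exp(-total / n)
    return umvu / trials, plug / trials

u, p = compare_estimators()
print(u, p, math.exp(-2.0))  # u sits near e^{-2}; p overshoots it
```

Exact unbiasedness also follows directly from the probability generating function of $n\bar X \sim$ Poisson$(n\lambda)$: $E(t^{n\bar X}) = e^{n\lambda(t-1)}$ equals $e^{-\lambda}$ at $t = 1 - 1/n$.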
Summary of Section 8.1

- An estimator comes closest (using squared distance) on average to its mean (see Theorem 8.1.1), so we can restrict attention to unbiased estimators for quantities of interest.
- The Rao–Blackwell theorem says that we can restrict attention to functions of a sufficient statistic when looking for an estimator minimizing MSE.
- When a sufficient statistic is complete, then any function of that sufficient statistic is UMVU for its mean.
- The Cramer–Rao lower bound gives a lower bound on the variance of an unbiased estimator and a method for obtaining an estimator that has variance at this lower bound, when such an estimator exists.

EXERCISES

8.1.1 Suppose that a statistical model is given by the two distributions in the following table:

s        1      2      3      4
f_a(s)   1/2    1/6    1/6    1/6
f_b(s)   1/4    1/4    5/12   1/12

If $T : \{1, 2, 3, 4\} \to \{1, 2, 3, 4\}$ is defined by $T(s) = 1$ for $s \in \{1, 4\}$ and $T(s) = s$ otherwise, then prove that $T$ is a sufficient statistic. Derive the conditional distributions of $s$ given $T(s)$, and show that these are independent of $\theta \in \{a, b\}$.

8.1.2 Suppose that $(x_1, \dots, x_n)$ is a sample from a distribution with mean $\mu$ and variance $\sigma^2 > 0$. Prove that $s^2 = (n-1)^{-1} \sum_{i=1}^n (x_i - \bar{x})^2$ is unbiased for $\sigma^2$.

8.1.3 Suppose that $(x_1, \dots, x_n)$ is a sample from an $N(\mu, \sigma_0^2)$ distribution, where $\mu \in R^1$ is unknown and $\sigma_0^2 > 0$ is known. Determine a UMVU estimator of the second moment $\mu^2 + \sigma_0^2$.

8.1.4 Suppose that $(x_1, \dots, x_n)$ is a sample from an $N(\mu, \sigma_0^2)$ distribution, where $\mu \in R^1$ is unknown and $\sigma_0^2 > 0$ is known. Determine a UMVU estimator of the first quartile $\mu + \sigma_0 z_{0.25}$.

8.1.5 Suppose that $(x_1, \dots, x_n)$ is a sample from an $N(\mu, \sigma_0^2)$ distribution, where $\mu \in R^1$ is unknown and $\sigma_0^2 > 0$ is known. Is $2\bar{x} + 3$ a UMVU estimator of anything? If so, what is it UMVU for? Justify your answer.

8.1.6 Suppose that $(x_1, \dots, x_n)$ is a sample from a Bernoulli$(\theta)$ distribution, where $\theta \in [0, 1]$ is unknown. Determine a UMVU estimator of $\theta$ (use the fact that a minimal sufficient statistic for this model is complete).

8.1.7 Suppose that $(x_1, \dots, x_n)$ is a sample from a Gamma$(\alpha_0, \theta)$ distribution, where $\alpha_0 > 0$ is known and $\theta > 0$ is unknown. Using the fact that $\bar{x}$ is a complete sufficient statistic (see Challenge 8.1.27), determine a UMVU estimator of $\alpha_0 \theta^{-1}$.

8.1.8 Suppose that $(x_1, \dots, x_n)$ is a sample from an $N(\mu_0, \sigma^2)$ distribution, where $\mu_0$ is known and $\sigma^2 > 0$ is unknown. Show that $\sum_{i=1}^n (x_i - \mu_0)^2$ is a sufficient statistic for this problem. Using the fact that it is complete, determine a UMVU estimator for $\sigma^2$.

8.1.9 Suppose a statistical model comprises all continuous distributions on $R^1$. Based on a sample of $n$, determine a UMVU estimator of $P((-1, 1])$, where $P$ is the true probability measure. Justify your answer.

8.1.10 Suppose a statistical model comprises all continuous distributions on $R^1$ that have a finite second moment. Based on a sample of $n$, determine a UMVU estimator of $\mu^2$, where $\mu$ is the true mean. Justify your answer. (Hint: Find an unbiased estimator of $\mu^2$ for $n = 2$, Rao–Blackwellize this estimator for a sample of $n$, and then use the Lehmann–Scheffé theorem.)

8.1.11 The estimator determined in Exercise 8.1.10 is also unbiased for $\mu^2$ when the statistical model comprises all continuous distributions on $R^1$ that have a finite first moment. Is this estimator still UMVU for $\mu^2$?

PROBLEMS

8.1.12 Suppose that $(x_1, \dots, x_n)$ is a sample from a Uniform$[0, \theta]$ distribution, where $\theta > 0$ is unknown. Show that $x_{(n)}$ is a sufficient statistic and determine its distribution. Using the fact that $x_{(n)}$ is complete, determine a UMVU estimator of $\theta$.

8.1.13 Suppose that $(x_1, \dots, x_n)$ is a sample from a Bernoulli$(\theta)$ distribution, where $\theta \in [0, 1]$ is unknown. Determine the conditional distribution of $(x_1, \dots, x_n)$ given the value of the sufficient statistic $\bar{x}$.

8.1.14 Prove that $L(a) = (\psi(\theta) - a)^2$ satisfies $L(\lambda a_1 + (1 - \lambda) a_2) \le \lambda L(a_1) + (1 - \lambda) L(a_2)$ for $\lambda \in [0, 1]$, when $a$ ranges in a subinterval of $R^1$. Use this result together with Jensen's inequality (Theorem 3.6.4) to prove the Rao–Blackwell theorem.

8.1.15 Prove that $L(a) = |\psi(\theta) - a|$ satisfies $L(\lambda a_1 + (1 - \lambda) a_2) \le \lambda L(a_1) + (1 - \lambda) L(a_2)$ for $\lambda \in [0, 1]$, when $a$ ranges in a subinterval of $R^1$. Use this result together with Jensen's inequality (Theorem 3.6.4) to prove the Rao–Blackwell theorem for absolute error. (Hint: First show that $||x| - |y|| \le |x - y|$ for any $x$ and $y$.)

8.1.16 Suppose that $(x_1, \dots, x_n)$ is a sample from an $N(\mu, \sigma^2)$ distribution, where $(\mu, \sigma^2)$ is unknown. Show that the optimal estimator (in the sense of minimizing the MSE) of $\sigma^2$, of the form $c s^2$, is given by $c = (n - 1)/(n + 1)$. Determine the bias of this estimator and show that it goes to 0 as $n \to \infty$.

8.1.17 Prove that if a statistic $T$ is complete for a model and $U = h(T)$ for a 1–1 function $h$, then $U$ is also complete.

8.1.18 Suppose that $(x_1, \dots, x_n)$ is a sample from an $N(\mu, \sigma^2)$ distribution, where $(\mu, \sigma^2) \in R^1 \times (0, \infty)$ is unknown. Derive a UMVU estimator of the standard deviation $\sigma$. (Hint: Calculate the expected value of the sample standard deviation $s$.)

8.1.19 Suppose that $(x_1, \dots, x_n)$ is a sample from an $N(\mu, \sigma^2)$ distribution, where $(\mu, \sigma^2) \in R^1 \times (0, \infty)$ is unknown. Derive a UMVU estimator of the first quartile $\mu + \sigma z_{0.25}$. (Hint: Problem 8.1.17.)

8.1.20 Suppose that $(x_1, \dots, x_n)$ is a sample from an $N(\mu, \sigma_0^2)$ distribution, where $\mu \in \{1, 2\}$ is unknown and $\sigma_0^2 > 0$ is known. Establish that $\bar{x}$ is a minimal sufficient statistic for this model but that it is not complete.

8.1.21 Suppose that $(x_1, \dots, x_n)$ is a sample from an $N(\mu, \sigma_0^2)$ distribution, where $\mu \in R^1$ is unknown and $\sigma_0^2 > 0$ is known. Determine the information lower bound, for an unbiased estimator, when we consider estimating the second moment $\mu^2 + \sigma_0^2$. Does the UMVU estimator in Exercise 8.1.3 attain the information lower bound?

8.1.22 Suppose that $(x_1, \dots, x_n)$ is a sample from a Gamma$(\alpha_0, \theta)$ distribution, where $\alpha_0 > 0$ is known and $\theta > 0$ is unknown. Determine the information lower bound for the estimation of $\alpha_0 \theta^{-1}$ using unbiased estimators, and determine if the UMVU estimator obtained in Exercise 8.1.7 attains this.

8.1.23 Suppose that $(x_1, \dots, x_n)$ is a sample from the distribution with density $f_\theta(x) = \theta x^{\theta - 1}$ for $x \in [0, 1]$, where $\theta > 0$ is unknown. Determine the information lower bound for estimating $\theta$ using unbiased estimators. Does a UMVU estimator with variance at the lower bound exist for this problem?

8.1.24 Suppose that a statistic $T$ is a complete statistic based on some statistical model. A submodel is a statistical model that comprises only some of the distributions in the original model. Why is it not necessarily the case that $T$ is complete for a submodel?

8.1.25 Suppose that a statistic $T$ is a complete statistic based on some statistical model. If we construct a larger model that contains all the distributions in the original model and is such that any set that has probability content equal to 0 for every distribution in the original model also has probability content equal to 0 for every distribution in the larger model, then prove that $T$ is complete for the larger model as well.

CHALLENGES

8.1.26 If $X$ is a random variable such that $E(X)$ either does not exist or is infinite, then show that $E((X - c)^2) = \infty$ for any constant $c$.

8.1.27 Suppose that $(x_1, \dots, x_n)$ is a sample from a Gamma$(\alpha_0, \theta)$ distribution, where $\alpha_0 > 0$ is known and $\theta > 0$ is unknown. Show that $\bar{x}$ is a complete minimal sufficient statistic.

8.2 Optimal Hypothesis Testing

Suppose we want to assess a hypothesis about the real-valued characteristic $\psi(\theta)$ for the model $\{f_\theta : \theta \in \Omega\}$. Typically, this will take the form $H_0 : \psi(\theta) = \psi_0$, where we have specified a value $\psi_0$ for $\psi(\theta)$. After observing data $s$, we want to assess whether or not we have evidence against $H_0$.

In Section 6.3.3, we discussed methods for assessing such a hypothesis based on the plug-in MLE for $\psi(\theta)$. These involved computing a P-value as a measure of how surprising the data $s$ are when the null hypothesis is assumed to be true.
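The P-value computation recalled here from Section 6.3.3 is easy to sketch in code. The helper below is illustrative, not from the text: it evaluates the two-sided z-test P-value $2(1 - \Phi(|z|))$, expressing the standard normal cdf $\Phi$ through the error function available in the standard library.

```python
import math

def z_test_pvalue(xbar, mu0, sigma0, n):
    # Two-sided P-value for H0: mu = mu0 under the N(mu, sigma0^2) model.
    z = (xbar - mu0) / (sigma0 / math.sqrt(n))
    # Standard normal cdf via the error function: Phi(t) = (1 + erf(t/sqrt(2)))/2.
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))
    return 2.0 * (1.0 - phi)

print(z_test_pvalue(1.96, 0.0, 1.0, 1))  # about 0.05
```

Small P-values indicate data that would be surprising under $H_0$; the quantity returned is exactly the P-value formula developed for the z-test in Chapter 6.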
If $s$ is surprising for each of the distributions $f_\theta$ for which $\psi(\theta) = \psi_0$, then we have evidence against $H_0$. The development of such procedures was largely based on the intuitive justification for the likelihood function.

8.2.1 The Power Function of a Test

Closely associated with a specific procedure for computing a P-value is the concept of a power function, as defined in Section 6.3.6. For this, we specified a critical value $\alpha$ such that we declare the results of the test statistically significant whenever the P-value is less than or equal to $\alpha$. The power $\beta(\theta)$ is then the probability of the P-value being less than or equal to $\alpha$ when we are sampling from $f_\theta$. The greater the value of $\beta(\theta)$ when $\psi(\theta) \neq \psi_0$, the better the procedure is at detecting departures from $H_0$. The power function is thus a measure of the sensitivity of the testing procedure to detecting departures from $H_0$. Recall the following fundamental example.

EXAMPLE 8.2.1 Location Normal Model
Suppose we have a sample $(x_1, \dots, x_n)$ from the $N(\mu, \sigma_0^2)$ model, where $\mu \in R^1$ is unknown and $\sigma_0^2 > 0$ is known, and we want to assess the null hypothesis $H_0 : \mu = \mu_0$. In Example 6.3.9, we showed that a sensible test for this problem is based on the z-statistic

$$z = \frac{\bar{x} - \mu_0}{\sigma_0 / \sqrt{n}},$$

with $Z \sim N(0, 1)$ under $H_0$. The P-value is then given by

$$P(|Z| > |z|) = 2\left( 1 - \Phi\!\left( \left| \frac{\bar{x} - \mu_0}{\sigma_0 / \sqrt{n}} \right| \right) \right),$$

where $\Phi$ denotes the $N(0, 1)$ distribution function. In Example 6.3.18, we showed that, for critical value $\alpha$, the power function of the z-test is given by

$$\beta(\mu) = 1 - \Phi\!\left( z_{1-\alpha/2} - \frac{\sqrt{n}(\mu - \mu_0)}{\sigma_0} \right) + \Phi\!\left( -z_{1-\alpha/2} - \frac{\sqrt{n}(\mu - \mu_0)}{\sigma_0} \right),$$

because $\bar X \sim N(\mu, \sigma_0^2 / n)$.

We see that specifying a value for $\alpha$ specifies a set of data values

$$R = \left\{ (x_1, \dots, x_n) : 2\left( 1 - \Phi\!\left( \left| \frac{\bar{x} - \mu_0}{\sigma_0 / \sqrt{n}} \right| \right) \right) \le \alpha \right\}$$

such that the results of the test are determined to be statistically significant whenever $(x_1, \dots, x_n) \in R$. Using the fact that $\Phi$ is 1–1 increasing, we can also write $R$ as

$$R = \left\{ (x_1, \dots, x_n) : \left| \frac{\bar{x} - \mu_0}{\sigma_0 / \sqrt{n}} \right| \ge z_{1-\alpha/2} \right\}.$$

Furthermore, the power function is given by $\beta(\mu) = P_\mu(R)$, and $\beta(\mu_0) = P_{\mu_0}(R) = \alpha$.

8.2.2 Type I and Type II Errors

We now adopt a different point of view. We are going to look for tests that are optimal for testing the null hypothesis $H_0 : \psi(\theta) = \psi_0$. First, we will assume that, having observed the data $s$, we will decide either to accept or reject $H_0$. If we reject $H_0$, then this is equivalent to accepting the alternative $H_a : \psi(\theta) \neq \psi_0$. Our performance measure for assessing testing procedures will then be the probability that the testing procedure makes an error.

There are two types of error. We can make a type I error, rejecting $H_0$ when it is true, or make a type II error, accepting $H_0$ when $H_0$ is false. Note that if we reject $H_0$, then this implies that we are accepting the alternative hypothesis $H_a : \psi(\theta) \neq \psi_0$.

It turns out that, except in very artificial circumstances, there are no testing procedures that simultaneously minimize the probabilities of making the two kinds of errors. Accordingly, we will place an upper bound $\alpha$, called the critical value, on the probability of making a type I error. We then search among those tests whose probability of making a type I error is less than or equal to $\alpha$ for a testing procedure that minimizes the probability of making a type II error.

Sometimes hypothesis testing problems for real-valued parameters are distinguished as being one-sided or two-sided. For example, if $\psi$ is real-valued, then $H_0 : \psi(\theta) = \psi_0$ versus $H_a : \psi(\theta) \neq \psi_0$ is a two-sided testing problem, while $H_0 : \psi(\theta) \le \psi_0$ versus $H_a : \psi(\theta) > \psi_0$ or $H_0 : \psi(\theta) \ge \psi_0$ versus $H_a : \psi(\theta) < \psi_0$ are examples of one-sided problems. Notice, however, that if we define $\psi^*(\theta) = I_{(-\infty, \psi_0]}(\psi(\theta))$, then $H_0 : \psi(\theta) \le \psi_0$ versus $H_a : \psi(\theta) > \psi_0$ is equivalent to the problem $H_0 : \psi^*(\theta) = 1$ versus $H_a : \psi^*(\theta) \neq 1$. Similarly, if we define $\psi^*(\theta) = I_{[\psi_0, \infty)}(\psi(\theta))$, then $H_0 : \psi(\theta) \ge \psi_0$ versus $H_a : \psi(\theta) < \psi_0$ is equivalent to the problem $H_0 : \psi^*(\theta) = 1$ versus $H_a : \psi^*(\theta) \neq 1$. So the formulation we have adopted for testing problems, $H_0 : \psi(\theta) = \psi_0$, includes the one-sided problems as special cases.

8.2.3 Rejection Regions and Test Functions

One approach to specifying a testing procedure is to select a subset $R \subset S$ before we observe $s$. We then reject $H_0$ whenever $s \in R$ and accept $H_0$ whenever $s \notin R$. The set $R$ is referred to as a rejection region. Putting an upper bound on the probability of rejecting $H_0$ when it is true leads to the following.

Definition 8.2.1 A rejection region $R$ satisfying

$$P_\theta(R) \le \alpha \quad (8.2.1)$$

whenever $\psi(\theta) = \psi_0$ is called a size $\alpha$ rejection region for $H_0 : \psi(\theta) = \psi_0$.

So (8.2.1) expresses the bound on the probability of making a type I error.
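The power function derived for the z-test in Example 8.2.1 can be evaluated numerically. The sketch below is illustrative; the bisection inverse-cdf is just a device to keep the example free of external dependencies. It computes $\beta(\mu) = 1 - \Phi(z_{1-\alpha/2} - \sqrt{n}(\mu - \mu_0)/\sigma_0) + \Phi(-z_{1-\alpha/2} - \sqrt{n}(\mu - \mu_0)/\sigma_0)$ and, at $\mu = \mu_0$, returns $\alpha$, consistent with the claim that $\beta(\mu_0) = \alpha$.

```python
import math

def phi(t):
    # Standard normal cdf via the error function.
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def phi_inv(p, lo=-10.0, hi=10.0):
    # Inverse standard normal cdf by bisection (phi is increasing).
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def z_test_power(mu, mu0, sigma0, n, alpha=0.05):
    # Power of the two-sided size-alpha z-test of H0: mu = mu0.
    zc = phi_inv(1.0 - alpha / 2.0)
    shift = math.sqrt(n) * (mu - mu0) / sigma0
    return (1.0 - phi(zc - shift)) + phi(-zc - shift)

print(z_test_power(0.0, 0.0, 1.0, 10))  # equals alpha = 0.05 at the null
```

The power increases toward 1 as $\mu$ moves away from $\mu_0$ or as $n$ grows, which is the sensitivity property the text attributes to a good testing procedure.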
Among all size rejection regions R we want to find the one (if it exists) that will minimize the probability of making a type II error. This is equivalent to finding the size
rejection region R that maximizes the probability of rejecting the null hypothesis when it is false. This probability can be expressed in terms of the power function of R and is given by P R whenever 0 To fully specify the optimality approach to testing hypotheses, we need one addi­ rejection region R is E IR tional ingredient. Observe that our search for an optimal size equivalent to finding the indicator function IR that satisfies P R Chapter 8: Optimal Inferences when 0 and maximizes E IR P R, when turns out that, in a number of problems, there is no such rejection region. 449 0 It On the other hand, there is often a solution to the more general problem of finding a function : S [0 1] satisfying E, (8.2.2) when 0 and maximizes when 0 We have the following terminology. E, Definition 8.2.2 We call power function associated with the test function : S 0 it is called a size 0 it is called an exact size maximizes (UMP) size E test function. when. test function. [0 1] a test function and the satisfies (8.2.2) when when that 0 is called a uniformly most powerful If satisfies E If test function. A size test function E Note that P R. IR is a test function with power function given by E IR s s For observed data s we interpret 1 to mean that we reject H0 In general, we interpret 0 to mean that we accept H0 and interpret s to be the conditional probability that we reject H0 given the data s Operationally, this means that, after we random variable. If we get a 1 we reject s observe s we generate a Bernoulli H0 if we get a 0 we accept H0 Therefore, by the theorem of total expectation, E is the unconditional probability of rejecting H0. The randomization that occurs when 1 may seem somewhat counterintuitive, but it is forced on us by our search 0 s test, as we can increase power by doing this in certain problems. 
for a UMP size 8.2.4 The Neyman–Pearson Theorem For a testing problem specified by a null hypothesis H0 : value function 0 for H0 : of 0 is characterized (letting we want to find a UMP size test function ) by
Note that a UMP size 0 and a critical test denote the power function when 0 and by 0 0, when 0, for any other size test function Still, this optimization problem does not have a solution in general. In certain prob­ lems, however, an optimal solution can be found. The following result gives one such example. It is fundamental to the entire theory of optimal hypothesis testing. 450 Section 8.2: Optimal Hypothesis Testing Theorem 8.2.1 (Neyman–Pearson) Suppose that 0 Then an exact size test H0 : test function 0 exists of the form 0 1 and that we want to c0 c0 c0 (8.2.3) for some [0 1] and c0 0 This test is UMP size PROOF See Section 8.5 for the proof of this result. The following result can be established by a simple extension of the proof of the Neyman–Pearson theorem. Corollary 8.2.1 If possibly on the boundary B size unless the power of a UMP size is a UMP size s : f 1 s s test, then f 0 s test equals 1. 0 s everywhere except has exact c0 Furthermore, PROOF See Challenge 8.2.22. Notice the intuitive nature of the test given by the Neyman–Pearson theorem, for (8.2.3) indicates that we categorically reject H0 as being true when the likelihood ratio of 0 is greater than the constant c0 and we accept H0 when it is smaller. When the likelihood ratio equals c0, we randomly decide to reject H0 with probability test is basically unique, although there. Also, Corollary 8.2.1 says that a UMP size 1 versus are possibly different randomization strategies on the boundary. The proof of the Neyman–Pearson theorem reveals that c0 is the smallest real num­ ber such that and P 0 f 1 s f 0 s c0 (8.2.4 c0 c0 0 P 0 f 1 s f 0 s otherwise. c0 0 (8.2.5) We use (8.2.4) and (8.2.5) to calculate c0 and, and so determine the UMP size in a particular problem. test, Note that the test is nonrandomized whenever P 0 as 0, i.e., we categorically accept or reject H0 after seeing the data. This then always
occurs whenever the distribution of f 1 s P 0. Interestingly, it can happen that the distribution of the ratio is not continuous even when the distribution of s is continuous (see Problem 8.2.17). f 0 s is continuous when s f 0 s f 1 s c0 Before considering some applications of the Neyman–Pearson theorem, we estab­ lish the analog of the Rao–Blackwell theorem for hypothesis testing problems. Given Chapter 8: Optimal Inferences 451 the value of the sufficient statistic U s measure for the response s by P U sure does not depend on ) For test function expectation of given the value of U s namely, u, we denote the conditional probability u (by Theorem 8.1.2, this probability mea­ put U s equal to the conditional U s E P U U s. Theorem 8.2.2 Suppose that U is a sufficient statistic and 0 Then U is a size for H0 : depends on the data only through the value of U Furthermore, same power function. test function for H0 : is a size test function 0 that and U have the and so U PROOF It is clear that U s1 depends on the data only through the value of U Now let P U denote the marginal probability measure of U induced by P. Then by the theorem of total expectation, we when E P U have E U s2 whenever U s1 U s2 E E P U E P U u 0, which implies that E U U when U. Now E 0, and E E U when 0 This result allows us to restrict our search for a UMP size that depend on the data only through the value of a sufficient statistic. test to those test functions We now consider some applications of the Neyman–Pearson theorem. The follow­ ing example shows that this result can lead to solutions to much more general problems than the simple case being addressed. EXAMPLE 8.2.2 Optimal Hypothesis Testing in the Location Normal Model 2 0 distribution, where Suppose that x1 1 and 2 0 versus Ha : 0 0 is known, and we want to test H0 : xn is a sample from an N 0 The likelihood function is given by 1. L x1 xn exp n 2 2 0 x 2, and x is a sufficient statistic for this restricted model. By Theorem 8.2.2, we
can restrict our attention to test functions that depend on the data through x̄. Now X̄ ~ N(μ, σ0²/n), so that

f_μ1(x̄)/f_μ0(x̄) = exp(−(n/2σ0²)(x̄ − μ1)²) / exp(−(n/2σ0²)(x̄ − μ0)²)
                 = exp((n/2σ0²)(2x̄(μ1 − μ0) − (μ1² − μ0²))).

Therefore, when μ1 > μ0,

P_μ0(f_μ1(X̄)/f_μ0(X̄) > c0) = P_μ0(X̄ > c0*),

where

c0* = (σ0²/(n(μ1 − μ0))) ln c0 + (μ1 + μ0)/2,

while when μ1 < μ0, the inequality reverses and the event becomes X̄ < c0*.

Using (8.2.4), when μ1 > μ0 we select c0* so that P_μ0(X̄ > c0*) = α, namely, c0* = μ0 + (σ0/√n) z_(1−α); when μ1 < μ0 we select c0* so that P_μ0(X̄ < c0*) = α, namely, c0* = μ0 − (σ0/√n) z_(1−α). These choices imply that P_μ0(f_μ1(X̄)/f_μ0(X̄) = c0) = 0 and, by (8.2.5), γ = 0. So the UMP size α test is nonrandomized. When μ1 > μ0, the test is given by

φ0(x̄) = 1 when x̄ > μ0 + (σ0/√n) z_(1−α), and 0 otherwise.   (8.2.6)

When μ1 < μ0, the test is given by

φ0(x̄) = 1 when x̄ < μ0 − (σ0/√n) z_(1−α), and 0 otherwise.   (8.2.7)

Notice that the test function in (8.2.6) does not depend on μ1 in any way. The subsequent implication is that this test function is UMP size α for H0: μ = μ0 versus the alternative Ha: μ = μ1 for any μ1 > μ0. This implies that (8.2.6) is UMP size α for H0: μ = μ0 versus Ha: μ > μ0.

Chapter 8: Optimal Inferences 453

Furthermore, we have

β_φ0(μ) = P_μ(X̄ > μ0 + (σ0/√n) z_(1−α))
        = P_μ(√n(X̄ − μ)/σ0 > z_(1−α) − √n(μ − μ0)/σ0)
        = 1 − Φ(z_(1−α) − √n(μ − μ0)/σ0).

Note that this is increasing in μ. From this, we conclude that (8.2.6) is a size α test function for H0: μ ≤ μ0 versus Ha: μ > μ0, which implies that (8.2.6) is UMP size α for H0: μ ≤ μ0 versus Ha: μ > μ0. Similarly (see Problem 8.2.12), it can be shown that (8.2.7) is UMP size α for H0: μ ≥ μ0 versus Ha: μ < μ0.

We might wonder if a UMP size α test exists for the two-sided problem H0: μ = μ0 versus Ha: μ ≠ μ0. Suppose that φ is a size α UMP test for this problem. Then φ is also size α for H0: μ = μ0 versus Ha: μ = μ1 with μ1 > μ0. Using Corollary 8.2.1 and the preceding developments (which also show that there does not exist a test of the form (8.2.3) having power equal to 1 for this problem), this implies that φ must equal the test (8.2.6) (the boundary B_c0 has probability 0 here). But φ is also UMP size α for H0: μ = μ0 versus Ha: μ = μ1 with μ1 < μ0; thus, by the same reasoning, φ must equal (8.2.7). But clearly (8.2.6) and (8.2.7) are different tests, so there is no UMP size α test for the two-sided problem.

Intuitively, we would expect that the size α test given by

φ0(x̄) = 1 when |x̄ − μ0| > (σ0/√n) z_(1−α/2), and 0 otherwise,   (8.2.8)

would be a good test to use, but it is not UMP size α. It turns out, however, that the test in (8.2.8) is UMP size α among all tests satisfying β_φ(μ0) = α and β_φ(μ) ≥ α when μ ≠ μ0.

Example 8.2.2 illustrated a hypothesis testing problem for which no UMP size α test exists. Sometimes, however, by requiring that the test possess another very natural property, we can obtain an optimal test.

Definition 8.2.3 A test φ that satisfies β_φ(θ) ≤ α when θ ∈ H0 and β_φ(θ) ≥ α when θ ∈ Ha is said to be an unbiased size α test for the hypothesis testing problem H0 versus Ha.

So (8.2.8) is a UMP unbiased size α test. An unbiased test has the property that the probability of rejecting the null hypothesis, when the null hypothesis is false, is always at least as large as the probability of rejecting the null hypothesis when the null hypothesis is true. This seems like a very reasonable property. In particular, it can be proved that any UMP size α test is always an unbiased size α test (Problem 8.2.14). We do not pursue the theory of unbiased tests further in this text.

We now consider an example which shows that we cannot dispense with the use of randomized tests.

EXAMPLE 8.2.3 Optimal Hypothesis Testing in the Bernoulli Model
Suppose that x1, ..., xn is a sample from a Bernoulli(θ) distribution, where θ ∈ {θ0, θ1} with θ1 > θ0, and we want to test H0: θ = θ0 versus Ha: θ = θ1. Then nx̄ is a minimal sufficient statistic and, by Theorem 8.2.2, we can restrict our attention to test functions that
depend on the data only through nx̄. Now nX̄ ~ Binomial(n, θ), so

f_θ1(nx̄)/f_θ0(nx̄) = (θ1^(nx̄)(1 − θ1)^(n−nx̄)) / (θ0^(nx̄)(1 − θ0)^(n−nx̄))
                    = ((1 − θ1)/(1 − θ0))^n (θ1(1 − θ0)/(θ0(1 − θ1)))^(nx̄).

Therefore,

P_θ0(f_θ1(nX̄)/f_θ0(nX̄) > c0) = P_θ0(nX̄ > c0*),

where

c0* = (ln c0 − n ln((1 − θ1)/(1 − θ0))) / ln(θ1(1 − θ0)/(θ0(1 − θ1))),

because θ1(1 − θ0)/(θ0(1 − θ1)) is increasing in θ1 and equals 1 when θ1 = θ0, which implies ln(θ1(1 − θ0)/(θ0(1 − θ1))) > 0.

Now, using (8.2.4), we choose c0* to be the smallest integer satisfying P_θ0(nX̄ > c0*) ≤ α. Because nX̄ ~ Binomial(n, θ0) is a discrete distribution, we see that, in general, we will not be able to achieve P_θ0(nX̄ > c0*) = α exactly. So, using (8.2.5),

γ = (α − P_θ0(nX̄ > c0*)) / P_θ0(nX̄ = c0*)

will not, in general, be equal to 0. The resulting test,

φ0(nx̄) = 1 when nx̄ > c0*, γ when nx̄ = c0*, and 0 when nx̄ < c0*,

is UMP size α for H0: θ = θ0 versus Ha: θ = θ1. Note that we can use statistical software (or Table D.6) for the binomial distribution to obtain c0*.

For example, suppose n = 6 and θ0 = 0.25. The following table gives the values of the Binomial(6, 0.25) distribution function to three decimal places.

x       0      1      2      3      4      5      6
F(x)    0.178  0.534  0.831  0.962  0.995  1.000  1.000

Chapter 8: Optimal Inferences 455

Therefore, if α = 0.05, we have that c0* = 3 because P_0.25(nX̄ > 3) = 1 − 0.962 = 0.038 ≤ 0.05 and P_0.25(nX̄ > 2) = 1 − 0.831 = 0.169 > 0.05. This implies that

γ = (0.05 − (1 − 0.962)) / (0.962 − 0.831) = 0.012/0.131 ≈ 0.092.

So with this test, we reject H0: θ = 0.25 categorically if the number of successes is greater than 3, accept H0: θ = 0.25 categorically when the number of successes is less than 3, and when the number of successes equals 3, we randomly reject H0: θ = 0.25 with probability 0.092 (e.g., generate U ~ Uniform[0, 1] and reject whenever U ≤ 0.092).

Notice that the test φ0 does not involve θ1, so indeed it is UMP size α for H0: θ = θ0 versus Ha: θ > θ0. Furthermore, using Problem 8.2.18, we have that P_θ(nX̄ > c0*) can be written as an integral of a nonnegative function from 0 to θ, so P_θ(nX̄ > c0*) is increasing in θ. Arguing as in Example 8.2.2, we conclude that φ0 is UMP size α for H0: θ ≤ θ0 versus Ha: θ > θ0. Similarly, we obtain a UMP size α test for H0: θ ≥ θ0 versus Ha: θ < θ0. As in Example 8.2.2, there is no UMP size α test for H0: θ = θ0 versus Ha: θ ≠ θ0, but there is a UMP unbiased size α test for this problem.

8.2.5 Likelihood Ratio Tests (Advanced)

In the examples considered so far, the Neyman–Pearson theorem has led to solutions to problems in which H0 or Ha are not just single values of the parameter, even though the theorem was only stated for the single-value case. We also noted, however, that this is not true in general (for example, the two-sided problems discussed in Examples 8.2.2 and 8.2.3). The method of generalized likelihood ratio tests for H0: θ ∈ H0 has been developed to deal with the general case. This is motivated by the Neyman–Pearson theorem, for observe that in (8.2.3), f_θ1(s)/f_θ0(s) = L(θ1 | s)/L(θ0 | s). Therefore, (8.2.3) can be thought of as being based on the ratio of the likelihood at θ1 to the likelihood at θ0, and we reject H0: θ = θ0 when the likelihood gives much more support to θ1 than to θ0. The amount of the additional support required for rejection is determined by c0: the larger c0 is, the larger L(θ1 | s) has to be relative to L(θ0 | s) before we reject H0.

Denote the overall MLE of θ by θ̂(s) and the MLE, when θ ∈ H0, by θ̂_H0(s). So we have L(θ̂_H0(s) | s) ≤ L(θ̂(s) | s) for all s, because θ̂(s) maximizes the likelihood over the whole parameter space. The generalized likelihood ratio test then rejects H0 when

L(θ̂(s) | s) / L(θ̂_H0(s) | s)   (8.2.9)

is large, as this indicates evidence against H0 being true.

How do we determine when (8.2.9) is large enough to reject? Denoting the observed data by s0, we do this by computing the P-value

P_θ( L(θ̂(s) | s)/L(θ̂_H0(s) | s) ≥ L(θ̂(s0) | s0)/L(θ̂_H0(s0) | s0) )   (8.2.10)

when θ ∈ H0. Small values of (8.2.10) are evidence against H0. Of course, when H0 contains more than one value of θ, it is not clear which value of (8.2.10) to use. It can be shown, however, that under conditions such as those discussed in Section 6.5, if s corresponds to a sample of n values from a distribution whose true parameter value is in H0, then

2 ln( L(θ̂(s) | s)/L(θ̂_H0(s) | s) ) →D χ²(dim Ω − dim H0)   (8.2.11)

as n → ∞. Here, dim Ω and dim H0 are the dimensions of these sets. This leads us to a test that rejects H0 whenever the quantity on the left in (8.2.11) is greater than a particular quantile of the χ²(dim Ω − dim H0) distribution. For example, suppose that in a location-scale normal model we are testing H0: μ = μ0. Then Ω = R¹ × (0, ∞) has dim Ω = 2, H0 corresponds to {μ0} × (0, ∞) with dim H0 = 1, and, for a size 0.05 test, we reject whenever (8.2.11) is greater than the 0.95 quantile of the χ²(1) distribution. Note that, strictly speaking, likelihood ratio tests are not derived via optimality considerations. We will not discuss likelihood ratio tests further in this text.

Chapter 8: Optimal Inferences 457

Summary of Section 8.2

- In searching for an optimal hypothesis testing procedure, we place an upper bound α on the probability of making a type I error (rejecting H0 when it is true) and search for a test that minimizes the probability of making a type II error (accepting H0 when it is false).
- The Neyman–Pearson theorem prescribes an optimal size α test when H0 and Ha each specify a single value for the full parameter θ.
- Sometimes the Neyman–Pearson theorem leads to solutions to hypothesis testing problems in which the null or alternative hypotheses allow for more than one possible value for θ, but in general we must resort to likelihood ratio tests for such problems.

EXERCISES

8.2.1 Suppose that a statistical model is given by the two distributions f_a and f_b on S given in the following table.

12 1 6 4 s 5 12 1 12 fa s fb s

Determine the UMP size 0.10 test for testing H0: θ = a versus Ha: θ = b. What is the power of this test? Repeat this with the size equal to 0.
05.

8.2.2 Suppose for the hypothesis testing problem of Exercise 8.2.1, a statistician decides to generate U ~ Uniform[0, 1] and reject H0 whenever U ≤ 0.05. Show that this test has size 0.05. Explain why this is not a good choice of test and why the test derived in Exercise 8.2.1 is better. Provide numerical evidence for this.

8.2.3 Suppose an investigator knows that an industrial process yields a response variable that follows an N(1, 2) distribution. Some changes have been made in the industrial process, and the investigator believes that these have possibly made a change in the mean of the response (not the variance), increasing its value. The investigator wants the probability of a type I error occurring to be less than 1%. Determine an appropriate testing procedure for this problem based on a sample of size 10.

8.2.4 Suppose you have a sample of 20 from an N(μ, 1) distribution, where μ ∈ R¹ is unknown. You form a 0.975-confidence interval for μ and use it to test H0: μ = 0 by rejecting H0 whenever 0 is not in the confidence interval.
(a) What is the size of this test?
(b) Determine the power function of this test.

8.2.5 Suppose you have a sample of size n = 1 from a Uniform[0, θ] distribution, where θ > 0 is unknown. You test H0: θ ≤ 1 by rejecting H0 whenever the sampled value is greater than 1.
(a) What is the size of this test?
(b) Determine the power function of this test.

8.2.6 Suppose you are testing a null hypothesis H0: μ = 0, where μ ∈ R¹, using a size 0.05 testing procedure, and you accept H0. You feel you have a fairly large sample, but when you compute the power at μ = 0.2, you obtain a value of 0.10, where 0.2 represents the smallest difference from 0 that is of practical importance. Do you believe it makes sense to conclude that the null hypothesis is true? Justify your conclusion.

8.2.7 Suppose you want to test the null hypothesis H0: μ = 1 based on a sample of n from an N(μ, 1) distribution. How large does n have to be so that the power at μ = 2 of the optimal size 0.05 test is equal to 0.99?
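Power calculations like those in the exercises above reduce to evaluating the power function derived in Example 8.2.2, β(μ) = 1 − Φ(z_(1−α) − √n(μ − μ0)/σ0). The following sketch illustrates this; the language (Python), the helper name `power`, and the use of the standard library's `statistics.NormalDist` are my choices, not the text's:

```python
from statistics import NormalDist

def power(n, mu, mu0=1.0, sigma0=1.0, alpha=0.05):
    """Power of the UMP size-alpha test (8.2.6) of H0: mu = mu0 vs Ha: mu > mu0,
    which rejects when xbar > mu0 + sigma0 * z_{1-alpha} / sqrt(n)."""
    z = NormalDist().inv_cdf(1 - alpha)
    # beta(mu) = 1 - Phi(z_{1-alpha} - sqrt(n) * (mu - mu0) / sigma0)
    return 1 - NormalDist().cdf(z - n ** 0.5 * (mu - mu0) / sigma0)

# Smallest n whose power at mu = 2 reaches 0.99 (the setup of Exercise 8.2.7)
n = 1
while power(n, mu=2.0) < 0.99:
    n += 1
print(n)  # -> 16
```

Because the power is increasing in n, the loop terminates at the first sample size meeting the requirement.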
8.2.8 Suppose we have available two different test procedures in a problem and these have the same power function. Explain why, from the point of view of optimal hypothesis testing theory, we should not care which test is used.

8.2.9 Suppose you have a UMP size α test φ for testing the hypothesis H0: θ = θ0, where θ is real-valued. Explain how the graph of the power function of another size α test that was not UMP would differ from the graph of the power function of φ.

COMPUTER EXERCISES

8.2.10 Suppose you have a coin and you want to test the hypothesis that the coin is fair, i.e., you want to test H0: θ = 1/2, where θ is the probability of getting a head on a single toss. You decide to reject H0 using the rejection region R = {0, 1, 7, 8}, based on n = 8 tosses. Tabulate the power function for this procedure for θ = 0, 1/8, 2/8, ..., 7/8, 1.

8.2.11 On the same graph, plot the power functions for the two-sided z-test of H0: μ = 0 with α = 0.05 for samples of sizes n = 1, 4, 10, 20, and 100.
(a) What do you observe about these graphs?
(b) Explain how these graphs demonstrate the unbiasedness of this test.

PROBLEMS

8.2.12 Prove that φ0 in (8.2.7) is UMP size α for H0: μ ≥ μ0 versus Ha: μ < μ0.

8.2.13 Prove that the test function φ(s) = α for every s ∈ S is an exact size α test function. What is the interpretation of this test function?

8.2.14 Using the test function in Problem 8.2.13, show that a UMP size α test is also a UMP unbiased size α test.

8.2.15 Suppose that x1, ..., xn is a sample from a Gamma(α0, β) distribution, where α0 is known and β > 0 is unknown. Determine the UMP size α test for testing H0: β = β0 versus Ha: β = β1, where β1 > β0. Is this test UMP size α for H0: β = β0 versus Ha: β > β0?

8.2.16 Suppose that x1, ..., xn is a sample from an N(μ0, σ²) distribution, where μ0 is known and σ² > 0 is unknown. Determine the UMP size α test for testing H0: σ² = σ0² versus Ha: σ² = σ1², where σ1² > σ0². Is this test UMP size α for H0: σ² = σ0² versus Ha: σ² > σ0²?

8.2.17 Suppose that x1, ..., xn is a sample from a Uniform[0, θ] distribution, where θ > 0 is unknown. Determine the UMP size α test for testing H0: θ = θ0 versus Ha: θ = θ1, where θ1 > θ0. Is this test function UMP size α for H0: θ = θ0 versus Ha: θ > θ0?

Chapter 8: Optimal Inferences 459

8.2.18 Suppose that F is the distribution function for the Binomial(n, θ) distribution. Then prove that

F(x) = (n − x) C(n, x) ∫_θ^1 y^x (1 − y)^(n−x−1) dy

for x = 0, 1, ..., n − 1. This establishes a relationship between the binomial distribution and the beta function. (Hint: Integration by parts.)

8.2.19 Suppose that F is the distribution function for the Poisson(λ) distribution. Then prove that

F(x) = (1/x!) ∫_λ^∞ y^x e^(−y) dy

for x = 0, 1, 2, .... This establishes a relationship between the Poisson distribution and the gamma function. (Hint: Integration by parts.)

8.2.20 Suppose that x1, ..., xn is a sample from a Poisson(λ) distribution, where λ > 0 is unknown. Determine the UMP size α test for testing H0: λ = λ0 versus Ha: λ = λ1, where λ1 > λ0. Is this test function UMP size α for H0: λ = λ0 versus Ha: λ > λ0? (Hint: You will need the result of Problem 8.2.19.)

8.2.21 Suppose that x1, ..., xn is a sample from an N(μ, σ²) distribution, where μ ∈ R¹ and σ² > 0 is unknown. Derive the form of the exact size α likelihood ratio test for testing H0: μ = μ0 versus Ha: μ ≠ μ0.

8.2.22 (Optimal confidence intervals) Suppose that for the model {f_θ : θ ∈ Ω} we have a UMP size α test function φ_θ0 for H0: θ = θ0 versus Ha: θ ≠ θ0, for each possible value of θ0. Suppose further that each φ_θ0 only takes values in {0, 1}, i.e., each φ_θ0 is a nonrandomized size α test function.
(a) Prove that C(s) = {θ0 : φ_θ0(s) = 0} is a (1 − α)-confidence set for θ.
(b) If C is a (1 − α)-confidence set for θ, then prove that the test function defined by φ_θ0(s) = 0 when θ0 ∈ C(s), and φ_θ0(s) = 1 otherwise, is size α for H0: θ = θ0 versus Ha: θ ≠ θ0.
(c) Suppose that for each value θ0, the test function φ_θ0 defined in (b) is UMP size α for testing H0: θ = θ0 versus Ha: θ ≠ θ0. Then prove that

P_θ(θ0 ∈ C(s))   (8.2.12)

is minimized, when θ ≠ θ0, among all (1 − α)-confi
dence sets for θ. The probability (8.2.12) is the probability of C containing the false value θ0, and a (1 − α)-confidence region that minimizes this probability when θ ≠ θ0 is called a uniformly most accurate (UMA) (1 − α)-confidence region for θ.

CHALLENGES

8.2.23 Prove Corollary 8.2.1 in the discrete case.

460 Section 8.3: Optimal Bayesian Inferences

8.3 Optimal Bayesian Inferences

We now add the prior probability measure Π, with density π. As we will see, this completes the specification of an optimality problem, as now there is always a solution. Solutions to Bayesian optimization problems are known as Bayes rules.

In Section 8.1, the unrestricted optimization problem was to find the estimator T of ψ(θ) that minimizes MSE_θ(T) = E_θ((T − ψ(θ))²) for each θ ∈ Ω. The Bayesian version of this problem is to minimize

E(MSE(T)) = E(E((T(s) − ψ(θ))² | θ)).   (8.3.1)

By the theorem of total expectation (Theorem 3.5.2), (8.3.1) is the expected value of the squared error (T(s) − ψ(θ))² under the joint distribution on (θ, s) induced by the conditional distribution for s given θ (the sampling model), and by the marginal distribution for θ (the prior distribution of θ). Again, by the theorem of total expectation, we can write this as

E(MSE(T)) = E_M(E_{Π(·|s)}((T(s) − ψ(θ))²)),   (8.3.2)

where Π(· | s) denotes the posterior probability measure for θ given the data s (the conditional distribution of θ given s), and M denotes the prior predictive probability measure for s (the marginal distribution of s).

We have the following result.

Theorem 8.3.1 When (8.3.1) is finite, a Bayes rule is given by

T(s) = E(ψ(θ) | s),

namely, the posterior expectation of ψ(θ).

PROOF First, consider the expected posterior squared error E_{Π(·|s)}((T(s) − ψ(θ))²) of an estimate T(s). By Theorem 8.1.1, this is minimized by taking T(s) equal to the posterior mean E(ψ(θ) | s) (note that the "random" quantity here is ψ(θ), not T(s)). Now suppose that T is any estimator of ψ(θ). Then we have just shown that

E_{Π(·|s)}((E(ψ(θ) | s) − ψ(θ))²) ≤ E_{Π(·|s)}((T(s) − ψ(θ))²)

Chapter 8: Optimal Inference Methods 461

for each s and thus, by (8.3.2), E(MSE(E(ψ(θ) | s))) ≤ E(MSE(T)). Therefore, T(s) = E(ψ(θ) | s) minimizes (8.3.1) and is a Bayes rule.

So we see that, under mild conditions, the
optimal Bayesian estimation problem always has a solution, and there is no need to restrict ourselves to unbiased estimators, etc.

For the hypothesis testing problem H0: ψ(θ) = ψ0, we want to find the test function φ that minimizes the prior probability of making an error (type I or type II). Such a φ is a Bayes rule. We have the following result.

Theorem 8.3.2 A Bayes rule for the hypothesis testing problem H0: ψ(θ) = ψ0 is given by

φ(s) = 1 when Π(ψ(θ) = ψ0 | s) ≤ Π(ψ(θ) ≠ ψ0 | s), and 0 otherwise.

PROOF Consider a test function φ and let I_H0 denote the indicator function of the set H0 = {θ : ψ(θ) = ψ0} (so I_H0(θ) = 1 when ψ(θ) = ψ0 and equals 0 otherwise). Observe that, given θ and s, φ(s) is the probability of rejecting H0 having observed s, which is an error when I_H0(θ) = 1; and 1 − φ(s) is the probability of accepting H0 having observed s, which is an error when I_H0(θ) = 0. Therefore, given s and θ, the probability of making an error is

e(s, θ) = φ(s) I_H0(θ) + (1 − φ(s))(1 − I_H0(θ)).

By the theorem of total expectation, the prior probability of making an error (taking the expectation of e(s, θ) under the joint distribution of (θ, s)) is

E_M(E_{Π(·|s)}(e(s, θ))).   (8.3.3)

As in the proof of Theorem 8.3.1, if we can find φ that minimizes E_{Π(·|s)}(e(s, θ)) for each s, then φ also minimizes (8.3.3) and is a Bayes rule. Using Theorem 3.5.4 to pull φ(s) through the conditional expectation, and the fact that E_{Π(·|s)}(I_A(θ)) = Π(A | s) for any event A, we have

E_{Π(·|s)}(e(s, θ)) = φ(s) Π(H0 | s) + (1 − φ(s))(1 − Π(H0 | s)).

Because φ(s) ∈ [0, 1], this quantity is at least min{Π(H0 | s), 1 − Π(H0 | s)}. Therefore, the minimum value of E_{Π(·|s)}(e(s, θ)) is attained by setting φ(s) = 1 when Π(H0 | s) ≤ 1 − Π(H0 | s), and φ(s) = 0 otherwise.

462 Section 8.3: Optimal Bayesian Inferences

Observe that Theorem 8.3.2 says that the Bayes rule rejects H0 whenever the posterior probability of the null hypothesis is less than or equal to the posterior probability of the alternative. This is an intuitively satisfying result.

The following problem does arise with this approach, however. We have

Π(ψ(θ) = ψ0 | s) = 0 whenever Π(ψ(θ) = ψ0) = 0.   (8.3.4)

When Π(ψ(θ) = ψ0) = 0, (8.3.4) implies that φ(s) = 1 for every s. Therefore, using the Bayes rule, we would always reject H0 no matter what data s are obtained, which does not seem sensible.
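For a finite parameter space, both Bayes rules reduce to direct posterior computations. The following sketch (Python is my choice of language, and the two-point model, its probabilities, and the uniform prior are made-up illustrative values, not the text's) computes the posterior, the posterior mean as the Bayes estimate of θ under squared error (Theorem 8.3.1), and the Bayes test of H0: θ = 1 (Theorem 8.3.2):

```python
# Hypothetical two-point model: Omega = {1, 2}, S = {1, 2, 3}.
f = {1: [0.5, 0.3, 0.2],   # f_1(s) for s = 1, 2, 3  (illustrative values)
     2: [0.2, 0.3, 0.5]}   # f_2(s)
prior = {1: 0.5, 2: 0.5}   # uniform prior

def posterior(s):
    # Pi(theta | s) is proportional to prior(theta) * f_theta(s)
    weights = {t: prior[t] * f[t][s - 1] for t in f}
    total = sum(weights.values())
    return {t: w / total for t, w in weights.items()}

post = posterior(1)                                   # observe s = 1
bayes_estimate = sum(t * p for t, p in post.items())  # posterior mean
reject_H0 = post[1] <= 0.5                            # Bayes test of H0: theta = 1
print(post[1], bayes_estimate, reject_H0)
```

With these numbers, observing s = 1 gives posterior probability 5/7 for θ = 1, so the Bayes estimate is 9/7 and the Bayes test accepts H0.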
As discussed in Section 7.2.3, we have to be careful to make sure we use a prior that assigns positive mass to H0 if we are going to use the optimal Bayes approach to a hypothesis testing problem.

Summary of Section 8.3

- Optimal Bayesian procedures are obtained by minimizing the expected performance measure using the posterior distribution.
- In estimation problems, when using squared error as the performance measure, the posterior mean is optimal.
- In hypothesis testing problems, when minimizing the probability of making an error as the performance measure, computing the posterior probability of the null hypothesis and accepting H0 when this is greater than 1/2 is optimal.

EXERCISES

8.3.1 Suppose that S = {1, 2, 3} and Ω = {1, 2}, with data distributions f_1 and f_2 given by the following table. We place a uniform prior on Ω. Using a Bayes rule, test the hypothesis H0: θ = 2 when s = 2 is observed.

f1 s f2 s

8.3.2 For the situation described in Exercise 8.3.1, determine the Bayes rule estimator of θ when using expected squared error as our performance measure for estimators.

8.3.3 Suppose that we have a sample x1, ..., xn from an N(μ, σ0²) distribution, where μ is unknown and σ0² is known, and we want to estimate μ using expected squared error as our performance measure for estimators. If we use the prior distribution μ ~ N(μ0, τ0²), then determine the Bayes rule for this problem. Determine the limiting Bayes rule as τ0² → ∞.

Chapter 8: Optimal Inference Methods 463

8.3.4 Suppose that we observe a sample x1, ..., xn from a Bernoulli(θ) distribution, where θ is completely unknown, and we want to estimate θ using expected squared error as our performance measure for estimators. If we use a prior distribution θ ~ Beta, then determine a Bayes rule for this problem.

8.3.5 Suppose that x1, ..., xn is a sample from a Gamma(α0, β) distribution, where α0 is known and β > 0 is unknown. If we use a prior distribution β ~ Gamma(τ0, υ0), where τ0 and υ0 are known, and we want to estimate β using expected squared error as our performance measure for estimators, then determine the Bayes rule.
Use the weak (or strong) law of large numbers to determine what this estimator converges to as n → ∞.

8.3.6 For the situation described in Exercise 8.3.5, determine the Bayes rule for estimating β^(−1) when using expected squared error as our performance measure for estimators.

8.3.7 Suppose that we have a sample x1, ..., xn from an N(μ, σ0²) distribution, where μ is unknown and σ0² is known, and we want to find the test of H0: μ = μ0 that minimizes the prior probability of making an error (type I or type II). If we use the prior distribution Π = p0 I_{μ0} + (1 − p0) N(μ0, τ0²), where p0 ∈ (0, 1) is known (i.e., the prior is a mixture of a distribution degenerate at μ0 and an N(μ0, τ0²) distribution), then determine the Bayes rule for this problem. Determine the limiting Bayes rule as τ0² → ∞. (Hint: Make use of the computations in Example 7.2.13.)

8.3.8 Suppose that we have a sample x1, ..., xn from a Bernoulli(θ) distribution, where θ is unknown, and we want to find the test of H0: θ = θ0 that minimizes the prior probability of making an error (type I or type II). If we use the prior distribution Π = p0 I_{θ0} + (1 − p0) Uniform[0, 1], where p0 ∈ (0, 1) is known (i.e., the prior is a mixture of a distribution degenerate at θ0 and a uniform distribution), then determine the Bayes rule for this problem.

PROBLEMS

8.3.9 Suppose that Ω = {θ1, θ2}, that we put a prior π on Ω, and that we want to estimate θ. If the model is denoted {f_θ : θ ∈ Ω}, and our performance measure for estimators is the probability of making an incorrect choice of θ, then obtain the form of the Bayes rule when data s are observed.

8.3.10 For the situation described in Exercise 8.3.1, use the Bayes rule obtained via the method of Problem 8.3.9 to estimate θ when s = 2. What advantage does this estimate have over that obtained in Exercise 8.3.2?

8.3.11 Suppose that x1, ..., xn is a sample from an N(μ, σ²) distribution, where (μ, σ²) ∈ R¹ × (0, ∞) is unknown, and we want to estimate μ using expected squared error as our performance measure for estimators. Using the prior distribution given by

μ | σ² ~ N(μ0, τ0² σ²)  and  1/σ² ~ Gamma(α0, β0),

where μ0, τ0², α0, and β0 are fixed and known, determine the Bayes rule for μ.

464 Section 8.4: Decision Theory (Advanced)

8.3.12 (Model selection) Generalize Problem 8.3.9 to the case Ω = {θ1, ..., θk}.

CHALLENGES

8.3.13 In Section 7.2.4, we described the Bayesian prediction problem. Using the notation
found there, suppose we wish to predict t ∈ R¹ using a predictor t̃. If we assess the accuracy of a predictor by E((t̃ − t)²), then determine the prior predictor that minimizes this quantity (assume all relevant expectations are finite). If we observe s0, then determine the best predictor. (Hint: Assume all the probability measures are discrete.)

8.4 Decision Theory (Advanced)

To determine an optimal inference, we chose a performance measure and then attempted to find an inference, of a given type, that has optimal performance with respect to this measure. For example, when considering estimates of a real-valued characteristic of interest ψ(θ), we took the performance measure to be MSE and then searched for the estimator that minimizes this for each value of θ.

Decision theory is closely related to the optimal approach to deriving inferences, but it is a little more specialized. In the decision framework, we take the point of view that, in any statistical problem, the statistician is faced with making a decision, e.g., deciding on a particular value for ψ(θ). Furthermore, associated with a decision is the notion of a loss incurred whenever the decision is incorrect. A decision rule is a procedure, based on the observed data s, that the statistician uses to select a decision. The decision problem is then to find a decision rule that minimizes the average loss incurred.

There are a number of real-world contexts in which losses are an obvious part of the problem, e.g., the monetary losses associated with various insurance plans that an insurance company may consider offering. So the decision theory approach has many applications. It is clear in many practical problems, however, that losses (as well as performance measures) are somewhat arbitrary components of a statistical problem, often chosen simply for convenience. In such circumstances, the approaches to deriving inferences described in Chapters 6 and 7 are preferred by many statisticians.

So the decision theory model for inference adds another ingredient to the sampling model (or to the sampling model and prior) to derive inferences: the loss function. To formalize this, we conceive of a set of possible actions or decisions that the statistician could take after observing the data s. This set of possible actions is denoted by A and is called the action space. To connect these actions with the statistical model, there is a correct action function A: Ω → A, such that A(θ) is the correct action to take when θ is the true value of the parameter. Of course, because we do not know the true value of θ, we do not know the correct action A(θ), so there is uncertainty involved in our decision. Consider
a simple example.

Chapter 8: Optimal Inference Methods 465

EXAMPLE 8.4.1
Suppose you are told that an urn containing 100 balls has either 50 white and 50 black balls or 60 white and 40 black balls. Five balls are drawn from the urn without replacement and their colors are observed. The statistician's job is to make a decision about the true proportion of white balls in the urn based on these data.

The statistical model then comprises two distributions {P1, P2} where, using parameter space Ω = {1, 2}, P1 is the Hypergeometric(100, 50, 5) distribution (see Example 2.3.7) and P2 is the Hypergeometric(100, 60, 5) distribution. The action space is A = {0.5, 0.6}, and A: Ω → A is given by A(1) = 0.5 and A(2) = 0.6. The data are given by the colors of the five balls drawn.

We suppose now that there is also a loss or penalty L(θ, a) incurred when we select action a ∈ A and θ is true. If we select the correct action, then the loss is 0; it is greater than 0 otherwise.

Definition 8.4.1 A loss function L is a function defined on Ω × A and taking values in [0, ∞), such that L(θ, a) = 0 if and only if a = A(θ).

Sometimes the loss can be an actual monetary loss. Actually, decision theory is a little more general than what we have just described, as we can allow for negative losses (gains or profits), but the restriction to nonnegative losses is suitable for purely statistical applications. In a specific problem, the statistician chooses a loss function that is believed to lead to reasonable statistical procedures. This choice is dependent on the particular application. Consider some examples.

EXAMPLE 8.4.2 (Example 8.4.1 continued)
Perhaps a sensible choice in this problem would be a loss function with L(2, 0.5) > L(1, 0.6) > 0, and L(θ, a) = 0 otherwise. Here we have decided that selecting a = 0.5 when it is not correct is a more serious error than selecting a = 0.6 when it is not correct. If we want to treat errors symmetrically, then we could take L(2, 0.5) = L(1, 0.6) = 1, i.e., the losses are 1 or 0.
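The error probabilities that such a loss function penalizes can be computed directly for the urn of Example 8.4.1. The sketch below is mine, not the text's: Python is an assumed choice of language, and the decision rule (choose 0.6 exactly when three or more white balls appear in the five draws) is a hypothetical rule used only to illustrate the 0-1 loss.

```python
from math import comb

def hyper_pmf(N, K, n, k):
    """Hypergeometric(N, K, n) probability of seeing k white balls in the sample."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

def error_prob(K, decide_06):
    """Probability, when the urn holds K white balls out of 100, that the rule
    takes the wrong action under 0-1 loss (decide_06(k) says whether we choose
    a = 0.6 after seeing k white balls in 5 draws)."""
    correct = 0.6 if K == 60 else 0.5
    return sum(hyper_pmf(100, K, 5, k)
               for k in range(6)
               if (0.6 if decide_06(k) else 0.5) != correct)

def rule(k):
    return k >= 3   # hypothetical rule: say "0.6" iff at least 3 of 5 are white

print(error_prob(50, rule), error_prob(60, rule))
```

By the symmetry of the Hypergeometric(100, 50, 5) distribution, this rule errs with probability exactly 1/2 when the urn is half white, and with probability about 0.31 when it is 60% white, which shows how little five draws discriminate between the two urns.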
EXAMPLE 8.4.3 Estimation as a Decision Problem
Suppose we have a marginal parameter ψ(θ) ∈ R¹ of interest, and we want to specify an estimate T(s) after observing s ∈ S. Here, the action space is A = R¹, and the correct action function is A(θ) = ψ(θ). Naturally, we want T(s) to be close to A(θ).

For example, suppose x1, ..., xn is a sample from an N(μ, σ²) distribution, where (μ, σ²) ∈ R¹ × (0, ∞) is unknown, and we want to estimate μ. In this case, ψ(μ, σ²) = μ, and a possible estimator is the sample average T(x1, ..., xn) = x̄.

466 Section 8.4: Decision Theory (Advanced)

There are many possible choices for the loss function. Perhaps a natural choice is to use

L(θ, a) = |ψ(θ) − a|,   (8.4.1)

the absolute deviation between ψ(θ) and a. Alternatively, it is common to use

L(θ, a) = (ψ(θ) − a)²,   (8.4.2)

the squared deviation between ψ(θ) and a. We refer to (8.4.2) as squared error loss. Notice that (8.4.2) is just the square of the Euclidean distance between ψ(θ) and a. It might seem more natural to actually use the distance (8.4.1) as the loss function. It turns out, however, that there are a number of mathematical conveniences that arise from using squared distance.

EXAMPLE 8.4.4 Hypothesis Testing as a Decision Problem
In this problem, we have a characteristic of interest ψ(θ) and want to assess the plausibility of the value ψ0 after viewing the data s. In a hypothesis testing problem, this is written as H0: ψ(θ) = ψ0 versus Ha: ψ(θ) ≠ ψ0. As in Section 8.2, we refer to H0 as the null hypothesis and to Ha as the alternative hypothesis.

The purpose of a hypothesis testing procedure is to decide which of H0 or Ha is true based on the observed data s. So in this problem, the action space is A = {H0, Ha}, and the correct action function is given by A(θ) = H0 when ψ(θ) = ψ0 and A(θ) = Ha when ψ(θ) ≠ ψ0.

An alternative, and useful, way of thinking of the two hypotheses is as subsets of Ω. We write H0 = {θ : ψ(θ) = ψ0} as the subset of all θ values that make the null hypothesis true, and Ha = H0^c as the subset of all θ values that make the null hypothesis false. Then, based on the data s, we want to decide if the true value of θ is in H0 or in H0^c. If H0 (or Ha) is composed of a single point, then it is called a simple hypothesis or a point hypothesis; otherwise, it is referred to as a composite hypothesis.
For example, suppose that x1, ..., xn is a sample from an N
(μ, σ²) distribution, where (μ, σ²) ∈ Ω = R¹ × (0, ∞), and we want to test the null hypothesis H0: μ = 0 versus the alternative Ha: μ ≠ 0. Then H0 = {0} × (0, ∞) and Ha = (R¹ − {0}) × (0, ∞). For the same model, let ψ(μ, σ²) = I_{(−∞,0]×(0,∞)}(μ, σ²), where I_{(−∞,0]×(0,∞)} is the indicator function for the subset (−∞, 0] × (0, ∞) ⊂ R². Then testing H0: ψ = 1 versus the alternative Ha: ψ = 0 is equivalent to testing that the mean is less than or equal to 0 versus the alternative that it is greater than 0. This one-sided hypothesis testing problem is often denoted as H0: μ ≤ 0 versus Ha: μ > 0.

There are a number of possible choices for the loss function, but the most commonly used is of the form

L(θ, a) = 0 when θ ∈ H0 and a = H0, or θ ∈ Ha and a = Ha,
          b when θ ∈ Ha and a = H0,
          c when θ ∈ H0 and a = Ha.

Chapter 8: Optimal Inference Methods 467

If we reject H0 when H0 is true (a type I error), we incur a loss of c; if we accept H0 when H0 is false (a type II error), we incur a loss of b. When b = c, we can take b = c = 1 and produce the commonly used 0-1 loss function.

A statistician faced with a decision problem (i.e., a model, action space, correct action function, and loss function) must now select a rule for choosing an element of the action space when the data s are observed. A decision function is a procedure that specifies how an action is to be selected in the action space A.

Definition 8.4.2 A nonrandomized decision function d is a function d: S → A.

So after observing s, we decide that the appropriate action is d(s). Actually, we will allow our decision procedures to be a little more general than this, as we permit a random choice of an action after observing s.

Definition 8.4.3 A decision function δ is such that δ(s, ·) is a probability measure on the action space A for each s ∈ S (so δ(s, A) is the probability that the action taken is in A, for A ⊂ A).

Operationally, after observing s, a random mechanism with distribution specified by δ(s, ·) is used to select the action from the set of possible actions. Notice that if δ(s, ·) is a probability measure degenerate at the point d(s) (so δ(s, {d(s)}) = 1) for each s, then δ is equivalent to the nonrandomized decision function d, and conversely (see Problem 8.4.8).

The use of randomized decision procedures may seem rather unnatural, but, as we will see, sometimes they are an essential ingredient of decision theory. In many estimation problems,
the use of randomized procedures provides no advantage, but this is not the case in hypothesis testing problems.

We let D denote the set of all decision functions for the specific problem of interest. The decision problem is to choose a decision function δ ∈ D to make the loss as small as possible. The selected δ will then be used to generate decisions in applications. We base this choice on how the various decision functions perform with respect to the loss function. Intuitively, we cannot simply minimize specific losses, because when s ~ f_θ and a ~ δ(s, ·), the loss L(θ, a) is a random quantity. Therefore, rather than minimizing specific losses, we speak instead about minimizing some aspect of the distribution of the losses for each θ. Perhaps a reasonable choice is to minimize the average loss. Accordingly, we define the risk function associated with δ ∈ D as the average loss incurred by δ. The risk function plays a central role in determining an appropriate decision function for a problem.

Definition 8.4.4 The risk function associated with decision function δ is given by

R_δ(θ) = E_θ(E_{δ(s,·)}(L(θ, a))).   (8.4.3)

Notice that to calculate the risk function, we first calculate the average of L(θ, a) based on s fixed and a ~ δ(s, ·). Then we average this conditional average with respect to s ~ f_θ. By the theorem of total expectation, this is the average loss. When δ(s, ·) is
Notice that this decision function 3 as we 1 when we observe s 1 in this case. We have so the risk function of is then given by R 1 and R 2 L 1 1 L 1 2 E1 E s 1 1 3 4 3 12 3 12 E2 0L
EXAMPLE 8.4.6 Estimation
We will restrict our attention to nonrandomized decision functions and note that these are also called estimators. The risk function associated with estimator T and loss function (8.4.1) is given by

RT(θ) = Eθ(|T − θ|)

and is called the mean absolute deviation (MAD). The risk function associated with the estimator T and loss function (8.4.2) is given by

RT(θ) = Eθ((T − θ)²)

and is called the MSE. We want to choose the estimator T to minimize RT(θ) for every θ. Note that, when using (8.4.2), this decision problem is exactly the same as the optimal estimation problem discussed in Section 8.1.

EXAMPLE 8.4.7 Hypothesis Testing
We note that, for a given decision function δ for this problem and a data value s, the distribution δ(s) is specified by δ(s)({Ha}), which is the probability of rejecting H0 when s has been observed. This is because the probability measure δ(s) is concentrated on two points, so we need only give its value at one of these to completely specify it. We call φ(s) = δ(s)({Ha}) the test function associated with δ and observe that a decision function for this problem is also specified by a test function. We have immediately that

Eδ(s)(L(θ, a)) = (1 − φ(s))L(θ, H0) + φ(s)L(θ, Ha). (8.4.4)

Therefore, when using the 0–1 loss function,

Rδ(θ) = Eθ((1 − φ(s))L(θ, H0) + φ(s)L(θ, Ha)),

which equals Eθ(φ(s)) when θ ∈ H0, and 1 − Eθ(φ(s)) when θ ∈ Ha.

Recall that in Section 6.3.6, we introduced the power function associated with a hypothesis assessment procedure that rejected H0 whenever the P-value was smaller than some prescribed value. The power function, evaluated at θ, is the probability that such a procedure rejects H0 when θ is the true value. Because φ(s) is the conditional probability, given s, that H0 is rejected, the theorem of total expectation implies that Eθ(φ(s)) equals the unconditional probability that we reject H0 when θ is the true value. So, in general, we refer to the function βφ(θ) = Eθ(φ(s)) as the power function of the decision procedure δ or, equivalently, as the power function of the test function φ. Therefore, minimizing the risk function in this case is equivalent to choosing φ to minimize βφ(θ) for every θ ∈ H0 and to maximize βφ(θ) for every θ ∈ Ha. Accordingly, this decision problem is exactly the same as the optimal inference problem discussed in Section 8.2.
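The identity between risk under 0–1 loss and the power function is easy to compute with. The following sketch is our own illustration (not taken from the text): for a sample of n from an N(μ, 1) distribution and the test of H0 : μ = μ0 that rejects when |√n(x̄ − μ0)| > c, the power function is available in closed form, and the risk at μ is βφ(μ) for μ ∈ H0 and 1 − βφ(μ) otherwise. The names `power` and `risk_01` are our own.

```python
from math import erf, sqrt

def norm_cdf(z):
    # standard normal cdf, written via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def power(mu, n, c, mu0=0.0):
    # beta_phi(mu) = P_mu(|sqrt(n)(xbar - mu0)| > c);
    # sqrt(n)(Xbar - mu0) ~ N(sqrt(n)(mu - mu0), 1)
    shift = sqrt(n) * (mu - mu0)
    return 1.0 - (norm_cdf(c - shift) - norm_cdf(-c - shift))

def risk_01(mu, n, c, mu0=0.0):
    # risk under 0-1 loss: probability of rejecting when H0 is true,
    # probability of accepting when Ha is true
    b = power(mu, n, c, mu0)
    return b if mu == mu0 else 1.0 - b

# the size of the test (the risk at mu = mu0) with c = 1.96 is about 0.05
print(round(risk_01(0.0, 10, 1.96), 3))   # prints 0.05
print(round(power(1.0, 10, 1.96), 3))     # prints 0.885
```

With c = 1.96 the size is about 0.05, and the power (so 1 minus the risk on Ha) grows toward 1 as μ moves away from μ0.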
Once we have written down all the ingredients for a decision problem, it is then clear what form a solution to the problem will take. In particular, any decision function δ0 that satisfies

Rδ0(θ) ≤ Rδ(θ)

for every θ ∈ Ω and every δ ∈ D is an optimal decision function and is a solution. If two decision functions have the same risk function, then, from the point of view of decision theory, they are equivalent. So it is conceivable that there might be more than one solution to a decision problem.

Actually, it turns out that an optimal decision function exists only in extremely unrealistic cases, namely, when the data always tell us categorically what the correct decision is (see Problem 8.4.9). We do not really need statistical inference for such situations. For example, suppose we have two coins: coin A has two heads and coin B has two tails. As soon as we observe an outcome from a coin toss, we know exactly which coin was tossed, and there is no need for statistical inference.

Still, we can identify some decision rules that we do not want to use. If δ ∈ D is such that there exists a δ0 ∈ D satisfying Rδ0(θ) ≤ Rδ(θ) for every θ, with Rδ0(θ) < Rδ(θ) for at least one θ, then naturally we strictly prefer δ0 to δ.

Definition 8.4.5 A decision function δ is said to be admissible if there is no δ0 that is strictly preferred to it.

A consequence of decision theory is that we should use only admissible decision functions. Still, there are many admissible decision functions, and typically none is optimal. Furthermore, a procedure that is only admissible may be a very poor choice (see Challenge 8.4.11).

There are several routes out of this impasse for decision theory. One approach is to use reduction principles. By this we mean that we look for an optimal decision function in some subclass D0 ⊂ D that is considered appropriate, i.e., we look for a δ0 ∈ D0 such that Rδ0(θ) ≤ Rδ(θ) for every θ ∈ Ω and every δ ∈ D0; that is, an optimal decision function in D0.
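Strict preference is easy to exhibit concretely. As a sketch of our own (not an example from the text), consider estimating θ from a Bernoulli(θ) sample of size n under squared error loss: the estimator T1 = x1 (the first observation alone) has risk θ(1 − θ), while T0 = x̄ has risk θ(1 − θ)/n, so T0 is strictly preferred for n > 1 and T1 is inadmissible.

```python
def risk_first_obs(theta, n):
    # MSE of T1(x) = x1 under Bernoulli(theta): T1 is unbiased, so the
    # risk is Var(X1) = theta(1 - theta), free of n
    return theta * (1.0 - theta)

def risk_mean(theta, n):
    # MSE of T0(x) = xbar: unbiased, so the risk is Var(Xbar) = theta(1 - theta)/n
    return theta * (1.0 - theta) / n

n = 10
grid = [i / 100.0 for i in range(101)]
# T0 is at least as good everywhere, and strictly better away from the
# endpoints, so T1 is strictly dominated and hence inadmissible
assert all(risk_mean(t, n) <= risk_first_obs(t, n) for t in grid)
assert risk_mean(0.5, n) < risk_first_obs(0.5, n)
print("xbar strictly dominates the first observation for n =", n)
```

Domination of this kind is exactly the relation "δ0 is strictly preferred to δ" above, with the inequality strict on the interior of the parameter space.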
EXAMPLE 8.4.8 Size α Tests for Hypothesis Testing
Consider a hypothesis testing problem H0 versus Ha. Recall that in Section 8.2, we restricted attention to those test functions φ that satisfy

Eθ(φ) ≤ α for every θ ∈ H0.

Such a φ is called a size α test function for this problem. So in this case, we are restricting to the class D0 of all decision functions for this problem that correspond to size α test functions.

In Section 8.2, we showed that sometimes there is an optimal δ0 ∈ D0. For example, when H0 and Ha are simple, the Neyman–Pearson theorem (Theorem 8.2.1) provides a test function φ0, defined by φ0(s) = δ0(s)({Ha}), that is optimal. We also showed in Section 8.2, however, that in general there is no optimal size α test function, and so there is no optimal δ ∈ D0. In this case, further reduction principles are necessary.

Another approach to selecting a δ ∈ D is based on choosing one particular real-valued characteristic of the risk function of δ and ordering the decision functions based on that. There are several possibilities.

One way is to introduce a prior π into the problem and then look for the decision procedure δ ∈ D that has smallest prior risk

r(δ) = E(Rδ(θ)),

where the expectation is taken with respect to the prior. We then look for a rule δ that has prior risk equal to min δ∈D r(δ) (or inf δ∈D r(δ)). This approach is called Bayesian decision theory.

Definition 8.4.6 The quantity r(δ) is called the prior risk of δ, min δ∈D r(δ) is called the Bayes risk, and a rule with prior risk equal to the Bayes risk is called a Bayes rule.

We derived Bayes rules for several problems in Section 8.3. Interestingly, Bayesian decision theory always effectively produces an answer to a decision problem. This is a very desirable property for any theory of statistics.

Another way to order decision functions uses the maximum (or supremum) risk. So for a decision function δ, we calculate max θ Rδ(θ) (or sup θ Rδ(θ)) and then select a δ ∈ D that minimizes this quantity. Such a δ has the smallest largest risk, i.e., the smallest worst-case behavior.

Definition 8.4.7 A decision function δ0 satisfying

max θ Rδ0(θ) = min δ∈D max θ Rδ(θ)

is called a minimax decision function.

Again, this approach will always effectively produce an answer to a decision problem (see Problem 8.4.10). Much more can be said about decision theory than this brief introduction to the basic concepts.
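As a numerical illustration of the minimax criterion (a classical example, not derived in this text), consider estimating θ from a Bernoulli(θ) sample of size n under squared error loss. The sample mean has risk θ(1 − θ)/n, with maximum 1/(4n), while the estimator T = (nx̄ + √n/2)/(n + √n) has constant risk n/(4(n + √n)²), which is smaller; T is in fact minimax (it is the Bayes rule under a Beta(√n/2, √n/2) prior). The function names below are our own.

```python
from math import sqrt

def risk_mean(theta, n):
    # MSE of xbar: theta(1 - theta)/n, maximized at theta = 1/2
    return theta * (1.0 - theta) / n

def risk_shrunk(theta, n):
    # MSE of T = (n*xbar + sqrt(n)/2)/(n + sqrt(n)), computed as bias^2 + variance
    rn = sqrt(n)
    bias = rn * (0.5 - theta) / (n + rn)
    var = n * theta * (1.0 - theta) / (n + rn) ** 2
    return bias ** 2 + var

n = 25
grid = [i / 1000.0 for i in range(1001)]
max_mean = max(risk_mean(t, n) for t in grid)      # equals 1/(4n) = 0.01
max_shrunk = max(risk_shrunk(t, n) for t in grid)  # constant n/(4(n + sqrt(n))^2)
print(round(max_mean, 5), round(max_shrunk, 5))    # prints 0.01 0.00694
assert max_shrunk < max_mean
```

Note that neither estimator dominates the other: x̄ has smaller risk near the endpoints of [0, 1], while T does better near θ = 1/2. The minimax criterion resolves this by comparing only the worst cases.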
Many interesting, general results have been established for the decision theoretic approach to statistical inference.

Summary of Section 8.4
• The decision theoretic approach to statistical inference introduces an action space A and a loss function L(θ, s, a).
• A decision function δ prescribes a probability distribution δ(s) for each s; the statistician generates a decision in A using this distribution after observing s.
• The problem in decision theory is to select δ; for this, the risk function Rδ is used. The value Rδ(θ) is the average loss incurred when the decision function δ is used, and the goal is to minimize risk.
• Typically, no optimal decision function exists. So, to select a δ, various reduction criteria are used to reduce the class of possible decision functions, or the decision functions are ordered using some real-valued characteristic of their risk functions, e.g., maximum risk or average risk with respect to some prior.

EXERCISES
8.4.1 Suppose we observe a sample x1, ..., xn from a Bernoulli(θ) distribution, where θ is completely unknown, and we want to estimate θ using squared error loss. Write out all the ingredients of this decision problem. Calculate the risk function of the estimator T(x1, ..., xn) = x̄ and graph the risk function when n = 10.
8.4.2 Suppose we have a sample x1, ..., xn from a Poisson(λ) distribution, where λ > 0 is completely unknown, and we want to estimate λ using squared error loss. Write out all the ingredients of this decision problem. Consider the estimator T(x1, ..., xn) = x̄ and calculate its risk function. Graph the risk function when n = 25.
8.4.3 Suppose we have a sample x1, ..., xn from an N(μ, σ0²) distribution, where μ is unknown and σ0² > 0 is known, and we want to estimate μ using squared error loss. Write out all the ingredients of this decision problem. Consider the estimator T(x1, ..., xn) = x̄ and calculate its risk function. Graph the risk function when n = 25.
8.4.4 Suppose we observe a sample x1, ..., xn from a Bernoulli(θ) distribution, where θ is completely unknown, and we want to test the null hypothesis that θ = 1/2 versus the alternative that it is not equal to this quantity, and we use 0–1 loss. Write out all the ingredients of this decision problem. Suppose we reject the null hypothesis whenever we observe nx̄ ∈ {0, 1, n − 1, n}. Determine the form of the test function and its associated power function. Graph the power function when n = 10.
8.4.5 Consider the decision problem with sample space S = {1, 2, 3, 4}, parameter space Ω = {a, b}, with the parameter indexing the distributions given in the following table.

fa(s)
fb(s)
? and is equal to 0 otherwise. a and d 4 with A and the loss function is given d 3 A b COMPUTER EXERCISES xn n 0 from a Poisson 8.4.6 Suppose we have a sample x1 is completely unknown, and we want to test the hypothesis that distribution, where 0 versus the alternative that 0 using the 0–1 loss function. Write out all the ingredients of this decision problem. Suppose we decide to reject the null hypothesis whenever 2 n 0 and randomly reject the null hypothesis with probability 1/2 nx when nx 2 n 0 Determine the form of the test function and its associated power function. Graph the power function when 0 8.4.7 Suppose we have a sample x1 5. 2 0 distribution, where from an N 0 is known, and we want to test the null hypothesis that the mean response is 0 versus the alternative that the mean response is not equal to 0 using the 0–1 loss function. Write out all the ingredients of this decision problem. Suppose is unknown and 2 1 and n n 0 xn Chapter 8: Optimal Inference Methods 473 n]. Determine the that we decide to reject whenever x form of the test function and its associated power function. Graph the power function when 0 3 and n 2 0 2 0 [ 0 10 0 n 0 0 PROBLEMS s d s E L prove that R degen­ and that gives a probability measure S is equivalent to specifying a function d : S 8.4.8 Prove that a decision function erate at d s for each s conversely. For such a 8.4.9 Suppose we have a decision problem and that each probability distribution in the model is discrete. (a) Prove that for which P s (b) Prove that if there exist such that A 1 concentrated on disjoint sets, then there is no optimal 8.4.10 If decision function minimax. has constant risk and is admissible, then prove that is optimal in D if and only if and P 1 P 2 are not is degenerate at A A 2 D for each s is 0 s 1 2. CHALLENGES 8.4.11 Suppose we have a decision problem in which 0 0 Further assume that there is no optimal 0 for every implies that P C decision function (see Problem 8.4.9). 
Then prove that the nonrandomized decision function d given by d s A 0 is admissible. What does this
result tell you about the concept of admissibility? is such that P 0 C DISCUSSION TOPICS 8.4.12 Comment on the following statement: A natural requirement for any theory of inference is that it produce an answer for every inference problem posed. Have we discussed any theories so far that you believe will satisfy this? 8.4.13 Decision theory produces a decision in a given problem. It says nothing about how likely it is that the decision is in error. Some statisticians argue that a valid ap­ proach to inference must include some quantification of our uncertainty concerning any statement we make about an unknown, as only then can a recipient judge the reliability of the inference. Comment on this. 8.5 Further Proofs (Advanced) Proof of Theorem 8.1.2 We want to show that a statistic U is sufficient for a model if and only if the conditional distribution of the data s given U u is the same for every We prove this in the discrete case so that f s P s. The general case re­ quires more mathematics, and we leave that to a further course. 474 Section 8.5: Further Proofs (Advanced) Let u be such that P U 1 u is the set of values of s such that U s 0 where U 1 u u We have s : U s u so U 1 u P s s1 U u P s s1 U u P U u (8.5.1) Whenever s1 U 1 u, P s s1 U u P s1 s : U s u P 0 independently of Therefore, P s s1 U u 0 independently of So let us suppose that s1 U 1 u Then P s s1 U u P s1 s : U s u P s1 f s1 If U is a sufficient statistic, the factorization theorem (Theorem 6.1.1) implies f h s g U s for some h and g. Therefore, since 8.5.1) equals s1 f s U 1 u f s s1 f s U 1 u c s s1 f s1 1 s U 1 u c s s1 where f f s s1 h s h s1 c s s1. We conclude that (8.5.1) is independent of Conversely, if (8.5.1) is independent of then for s1 s2 U 1 u we have P U
u P s s2 s2 U u. P s Thus where f s1 P s s1 P s P s s1 U u P s P s s1 U u s2 U u P s s1 U u P U u s2 s2 U u P s f s2 c s1 s2 f s2, c s1 s2 P s P s s1 U u s2 U u. By the definition of sufficiency in Section 6.1.1, this establishes the sufficiency of U Chapter 8: Optimal Inference Methods 475 Establishing the Completeness of x in Example 8.1.3 Suppose that x1 is unknown and 2 0 sufficient statistic. R1 xn is a sample from an N 0 is known. In Example 6.1.7, we showed that x is a minimal 2 0 distribution, where Suppose that the function h is such that E h x 0 for every R1 Then defining h x max 0 h x and h x max 0 h x we have h x h x h x. Therefore, setting c E h X and c E h X, we must have and so c c c. Because h and h are nonnegative functions, we have that 0 and c 0 If c 0 then we have that h 0 with probability 1, because a non­ x negative function has mean 0 if and only if it is 0 with probability 1 (see Challenge 3.3.22). Then h 0 with probability 1. If c 0 with probability 1 also, and we conclude that h x 0 then h 0 for all x in a set A having positive probability 0 with probability 1, x 2 0 n distribution (otherwise h x x 0). This implies that c 0 for every 2 0 n distribution assigns positive probability to A as well (you 0 with respect to the N 0 which implies, as above, that c because every N can think of A as a subinterval of R1). 0 Now note that g x h x 1 2 0 exp nx 2 2 2 0 is nonnegative and is strictly positive on A. We can write c E h X h x 1 2 exp n 2 2 2 0 exp n x 0 2 0 g exp n x 2 2 2 0 dx x dx (8.5.2) Setting every 0 establishes that 0 g x dx because 0 c for Therefore
, g g x x dx is a probability density of a distribution concentrated on A 0. Fur­ thermore, using (8.5.2) and the definition of moment­generating function in Section 3.4, x : h x c exp n 2 2 2 0 g x dx (8.5.3) 476 Section 8.5: Further Proofs (Advanced) is the moment­generating function of this distribution evaluated at n Similarly, we define 2 0 so that g x h x 1 2 0 exp nx 2 2 2 0 g g x x dx is a probability density of a distribution concentrated on A x : h x 0 Also, c exp n 2 2 2 0 g x dx (8.5.4) is the moment­generating function of this distribution evaluated at n Because c c we have that (setting 0) 2 0. g x dx g x dx This implies that (8.5.3) equals (8.5.4) for every and so the moment­generating functions of these two distributions are the same everywhere. By Theorem 3.4.6, these distributions must be the same. But this is impossible, as the distribution given by g is concentrated on A whereas the distribution given by g is concentrated on A and A 0 and we are done. Accordingly, we conclude that we cannot have c A The Proof of Theorem 8.2.1 (the Neyman–Pearson Theorem) We want to prove that when exact size test function 0 exists of the form 0 1 and we want to test H0 : 0 then an c0 c0 c0 (8.5.5) for some [0 1] and c0 0 and this test is UMP size We develop the proof of this result in the discrete case. The proof in the more general context is similar. First, we note that s : 0 has P measure equal to 0 for both 1 Accordingly, without loss we can remove this set from the sample space and assume hereafter that f 0 s and f 1 s cannot be simultaneously 0. Therefore, the ratio f 1 s f 0 s is always defined. 0 and f 0 s f 1 s 1 Then setting c 1 Therefore, 0 and 0 is UMP size 1 in (8.5.5), we see that because no test can have power 0 s Suppose that 1 and so E 1 greater than 1 0 Chapter 8:
Optimal Inference Methods 477 0 Setting c0 0 (if f 0 s Suppose that and only if f 0 s is the indicator function for the set A Further, any size 0 test function must be 0 on Ac to have E 0 0 s and so E 1 that 0 and 0 then in (8.5.5), we see that f 0 s 0 s 0 if and conversely). So 0 0 0 On A we have 0 0 and therefore E 0 0 Therefore, 1. Consider the distribution function of the likelihood 0 is UMP size Now assume that 0 ratio when 0, namely So 1 c is a nondecreasing function of c with 1 0 and 1 Let c0 be the smallest value of c such that 1 1 c (recall that 1 is right continuous because it is a distribution function). Then we have that 1 0 in a distribution function at a point equals the probability of the point) lim 0 c0 and (using the fact that the jump c0 1 1 1 1 c c0 P 0 f 1 s f 0 s c0 1 c0 c0 0 1 c0 c0 0 Using this value of c0 in (8.5.5), put c0 c0 c0 0 c0 0 c0 0 otherwise, and note that [0 1] Then we have E 0 0 P 0 f 1 s c0 f 0 s c0 c0 P 0 f 1 s f 0 s c0 so 0 has exact size Now suppose that is another size test and E 1 E 1 0 We partition the sample space as S S0 S1 S2 where S0 S1 S2 Note that S1 because f 1 s 0 as c0 implies s 0 s 1 Also 0 f 1 s f 0 s c0 1 which implies 0 s s 1 S2 because f 1 s s 0 as 0 f 0 s 1 c0 implies 0 s 0 which implies c0 0 s s s 478 Section 8.5: Further Proofs (Advanced) Therefore, 0 0 E 1 E 1 IS1 IS2 s 0 s s Now note that E 1 IS1 S1 c0 s S1 0 s s f 0 s c0 E 0 IS1 s 0 s s because that 0 s s 0 and f 1 s f 0 s c0 when s S1 Similarly, we have E 1 IS2 S2 c0 s S2 0 s s f 0 s c0 E 0 IS2 s 0 s s because 0 s f
Combining these inequalities, we obtain

Eθ1(φ0) − Eθ1(φ) ≥ c0(Eθ0(φ0) − Eθ0(φ)) = c0(α − Eθ0(φ)) ≥ 0,

because Eθ0(φ) ≤ α = Eθ0(φ0). Therefore, Eθ1(φ) ≤ Eθ1(φ0), which proves that φ0 is UMP among all size α tests.

Chapter 9
Model Checking

CHAPTER OUTLINE
Section 1 Checking the Sampling Model
Section 2 Checking for Prior–Data Conflict
Section 3 The Problem with Multiple Checks

The statistical inference methods developed in Chapters 6 through 8 all depend on various assumptions. For example, in Chapter 6 we assumed that the data s were generated from a distribution in the statistical model {Pθ : θ ∈ Ω}. In Chapter 7, we also assumed that our uncertainty concerning the true value of the model parameter θ could be described by a prior probability distribution. As such, any inferences drawn are of questionable validity if these assumptions do not make sense in a particular application.

In fact, all statistical methodology is based on assumptions or choices made by the statistical analyst, and these must be checked if we want to feel confident that our inferences are relevant. We refer to the process of checking these assumptions as model checking, the topic of this chapter. Obviously, this is of enormous importance in applications of statistics, and good statistical practice demands that effective model checking be carried out. Methods range from fairly informal graphical methods to more elaborate hypothesis assessment, and we will discuss a number of these.

9.1 Checking the Sampling Model

Frequency-based inference methods start with a statistical model {fθ : θ ∈ Ω} for the true distribution that generated the data s. This means we are assuming that the true distribution for the observed data is in this set. If this assumption is not true, then it seems reasonable to question the relevance of any subsequent inferences we make about θ. Except in relatively rare circumstances, we can never know categorically that a model is correct.
The most we can hope for is that we can assess whether or not the observed data s could plausibly have arisen from the model.

If the observed data are surprising for each distribution in the model, then we have evidence that the model is incorrect. This leads us to think in terms of computing a P-value to check the correctness of the model. Of course, in this situation the null hypothesis is that the model is correct; the alternative is that the model could be any of the other possible models for the type of data we are dealing
with. We recall now our discussion of P-values in Chapter 6, where we distinguished between practical significance and statistical significance. It was noted that, while a P-value may indicate that a null hypothesis is false, in practical terms the deviation from the null hypothesis may be so small as to be immaterial for the application. When the sample size gets large, it is inevitable that any reasonable approach via P-values will detect such a deviation and indicate that the null hypothesis is false. This is also true when we are carrying out model checking using P-values. The resolution of this is to estimate, in some fashion, the size of the deviation of the model from correctness, and so determine whether or not the model will be adequate for the application. Even if we ultimately accept the use of the model, it is still valuable to know, however, that we have detected evidence of model incorrectness when this is the case.

One P-value approach to model checking entails specifying a discrepancy statistic D : S → R1 that measures deviations from the model under consideration. Typically, large values of D are meant to indicate that a deviation has occurred. The actual value D(s) is, of course, not necessarily an indication of this. The relevant issue is whether or not the observed value D(s) is surprising under the assumption that the model is correct. Therefore, we must assess whether or not D(s) lies in a region of low probability for the distribution of this quantity when the model is correct. For example, consider the density of a potential D statistic plotted in Figure 9.1.1. Here a value D(s) in the left tail (near 0), right tail (out past 15), or between the two modes (in the interval from about 7 to 9) all would indicate that the model is incorrect, because such values have a low probability of occurrence when the model is correct.
Figure 9.1.1: Plot of a density for a discrepancy statistic D.

The above discussion places the restriction that, when the model is correct, D must have a single distribution, i.e., the distribution cannot depend on θ. For many commonly used discrepancy statistics, this distribution is unimodal. A value in the right tail then indicates a lack of fit, or underfitting
, by the model (the discrepancies are unnaturally large); a value in the left tail then indicates overfitting by the model (the discrepancies are unnaturally small).

There are two general methods available for obtaining a single distribution for the computation of P-values. One method requires that D be ancillary.

Definition 9.1.1 A statistic D whose distribution under the model does not depend upon θ is called ancillary, i.e., if s ~ Pθ, then D(s) has the same distribution for every θ ∈ Ω.

If D is ancillary, then it has a single distribution specified by the model. If D(s) is a surprising value for this distribution, then we have evidence against the model being true. It is not the case that any ancillary D will serve as a useful discrepancy statistic. For example, if D is a constant, then it is ancillary, but it is obviously not useful for model checking. So we have to be careful in choosing D.

Quite often we can find useful ancillary statistics for a model by looking at residuals. Loosely speaking, residuals are based on the information in the data that is left over after we have fit the model. If we have used all the relevant information in the data for fitting, then the residuals should contain no useful information for inference about the parameter. Example 9.1.1 will illustrate more clearly what we mean by residuals. Residuals play a major role in model checking.

The second method works with any discrepancy statistic D. For this, we use the conditional distribution of D given the value of a sufficient statistic T. By Theorem 8.1.2, this conditional distribution is the same for every value of θ. If D(s) is a surprising value for this distribution, then we have evidence against the model being true.

Sometimes the two approaches we have just described agree, but not always. Consider some examples.

EXAMPLE 9.1.1 Location Normal
Suppose we assume that x1, ..., xn is a sample from an N(μ, σ0²) distribution, where μ ∈ R1 is unknown and σ0² > 0 is known. We know that x̄ is a minimal sufficient statistic for this problem (see Example 6.1.7). Also, x̄ represents the fitting of the model to the data, as it is the estimate of the unknown parameter value μ.

Now consider

r = r(x1, ..., xn) = (r1, ..., rn) = (x1 − x̄, ..., xn − x̄)

as one possible definition of the residual. Note that we can reconstruct the original data from the values of x̄ and r. It turns out that R = (X1 − X̄, ..., Xn − X̄) has a distribution that is independent of μ, with E(Ri) = 0, Ri ~ N(0, σ0²(1 − 1/n)), and Cov(Ri, Rj) = −σ0²/n for every i ≠ j. Moreover, R is independent of X̄ (see Problems 9.1.19 and 9.1.20).

Accordingly, we have that r is ancillary, and so is any discrepancy statistic D that depends on the data only through r. Furthermore, the conditional distribution of D(R) given X̄ = x̄ is the same as the marginal distribution of D(R), because they are independent. Therefore, the two approaches to obtaining a P-value agree here, whenever the discrepancy statistic depends on the data only through r.

By Theorem 4.6.6, we have that

D(R) = (1/σ0²) Σ_{i=1}^n Ri² = (1/σ0²) Σ_{i=1}^n (Xi − X̄)²

is distributed χ²(n − 1), so this is a possible discrepancy statistic. Therefore, the P-value

P(D ≥ D(r)), (9.1.1)

where D ~ χ²(n − 1), provides an assessment of whether or not the model is correct. Note that values of (9.1.1) near 0 or near 1 are both evidence against the model, as both indicate that D(r) is in a region of low probability when assuming the model is correct. A value near 0 indicates that D(r) is in the right tail, whereas a value near 1 indicates that D(r) is in the left tail.

The necessity of examining the left tail of the distribution of D(r), as well as the right, is seen as follows. Consider the situation where we are in fact sampling from an N(μ, σ²) distribution where σ² is much smaller than σ0². In this case, we expect D(r) to be a value in the left tail, because E(D(R)) = (n − 1)σ²/σ0², which is then much smaller than n − 1.

There are obviously many other choices that could be made for the D statistic. At present, there is not a theory that prescribes one choice over another. One caution should be noted, however. The choice of a statistic D cannot be based upon looking at the data first.
Doing so invalidates the computation of the P-value as described above, as then we must condition on the data feature that led us to choose that particular D.

EXAMPLE 9.1.2 Location-Scale Normal
Suppose we assume that x1, ..., xn is a sample from an N(μ, σ²) distribution, where (μ, σ²) ∈ R1 × (0, ∞) is unknown. We know that (x̄, s²) is a minimal sufficient statistic for this model (Example 6.1.8). Consider

r = r(x1, ..., xn) = (r1, ..., rn) = ((x1 − x̄)/s, ..., (xn − x̄)/s)

as one possible definition of the residual. Note that we can reconstruct the data from the values of (x̄, s²) and r. It turns out that R has a distribution that is independent of (μ, σ²) (and hence is ancillary; see Challenge 9.1.28) as well as independent of (X̄, S²). So again, the two approaches to obtaining a P-value agree here, as long as the discrepancy statistic depends on the data only through r. One possible discrepancy statistic is given by

D r 1 n n ln i 1 r 2 i n 1

To use this statistic for model checking, we need to obtain its distribution when the model is correct. Then we compare the observed value D(r) with this distribution, to see if it is surprising. We can do this via simulation.

Because the distribution of D(R) is independent of (μ, σ²), we can generate N samples of size n from the N(0, 1) distribution (or of any other normal distribution) and calculate D(R) for each sample. Then we look at histograms of the simulated values to see if D(r), from the original sample, is a surprising value, i.e., if it lies in a region of low probability like a left or right tail.

For example, suppose we observed the sample 2.08, 0.28, 2.01, 1.37, 40.08, obtaining the value D(r) = 4.93. Then, simulating 10⁴ values of D under the assumption of model correctness, we obtained the density histogram given in Figure 9.1.2. See Appendix B for some code used to carry out this simulation. The value D(r) = 4.93 is out in the right tail and thus indicates that the sample is not from a normal distribution.
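A simulation check of this kind takes only a few lines. The sketch below is our own (it uses the skewness discrepancy Dskew(r) = (1/n) Σ r_i³ on the standardized residuals as a stand-in for the statistic D of this example; the function names are ours): we simulate the discrepancy under N(0, 1) sampling and estimate a two-sided P-value for the observed sample.

```python
import random
from statistics import mean, stdev

def skew_discrepancy(xs):
    # standardized residuals r_i = (x_i - xbar)/s (s uses the n-1 divisor),
    # discrepancy Dskew(r) = (1/n) * sum of r_i^3
    xbar, s = mean(xs), stdev(xs)
    n = len(xs)
    return sum(((x - xbar) / s) ** 3 for x in xs) / n

def model_check_pvalue(xs, num_sim=10_000, seed=0):
    # the distribution of the discrepancy is the same for every (mu, sigma^2),
    # so simulate it under N(0, 1) sampling and estimate a two-sided P-value
    rng = random.Random(seed)
    n = len(xs)
    d_obs = skew_discrepancy(xs)
    sims = [skew_discrepancy([rng.gauss(0.0, 1.0) for _ in range(n)])
            for _ in range(num_sim)]
    p = sum(abs(d) >= abs(d_obs) for d in sims) / num_sim
    return d_obs, p

sample = [2.08, 0.28, 2.01, 1.37, 40.08]   # the sample of this example
d_obs, p = model_check_pvalue(sample)
print(round(d_obs, 3))   # prints 1.069
print(p)                 # small: this skewness is extreme for a normal sample of 5
```

The same template works for any discrepancy statistic that depends on the data only through r: replace `skew_discrepancy` and rerun the simulation.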
In fact, only 0.0057 of the simulated values are larger, so this is definite evidence against the model being correct.

Figure 9.1.2: A density histogram for a simulation of 10⁴ values of D in Example 9.1.2.

Obviously, there are other possible functions of r that we could use for model checking here. In particular, Dskew(r) = (1/n) Σ_{i=1}^n r_i³, the skewness statistic, and Dkurtosis(r) = (1/n) Σ_{i=1}^n r_i⁴, the kurtosis statistic, are commonly used. The skewness statistic measures the symmetry in the data, while the kurtosis statistic measures the "peakedness" in the data. As just described, we can simulate the distribution of these statistics under the normality assumption and then compare the observed values with these distributions to see if we have any evidence against the model (see Computer Problem 9.1.27).

The following examples present contexts in which the two approaches to computing a P-value for model checking are not the same.

EXAMPLE 9.1.3 Location-Scale Cauchy
Suppose we assume that x1, ..., xn is a sample from the distribution given by μ + σZ, where Z ~ t(1) and (μ, σ²) ∈ R1 × (0, ∞) is unknown. This time, (x̄, s²) is not a minimal sufficient statistic, but the statistic r defined in Example 9.1.2 is still ancillary (Challenge 9.1.28). We can again simulate values from the distribution of R (just generate samples from the t(1) distribution and compute r for each sample) to estimate P-values for any discrepancy statistic, such as the D(r) statistics discussed in Example 9.1.2.

EXAMPLE 9.1.4 Fisher's Exact Test
Suppose we take a sample of n from a population of students and observe the values (a1, b1), ..., (an, bn), where ai is gender (A = 1 indicating male, A = 2 indicating female) and bi is a categorical variable for part-time employment status (B = 1 indicating employed, B = 2 indicating unemployed). So each individual is being categorized into one of four categories, namely, Category 1 when A = 1 and B = 1, Category 2 when A = 1 and B = 2, Category 3 when A = 2 and B = 1, and Category 4 when A = 2 and B = 2.

Suppose our model for this situation is that A and B are independent with P(A = 1) = α1 and P(B = 1) = β1, where α1 ∈ [0, 1] and β1 ∈ [0, 1] are completely unknown. Then, letting Xij denote the count for the category where A = i and B = j, and writing α2 = 1 − α1, β2 = 1 − β1, Example 2.8.5 gives that

(X11, X12, X21, X22) ~ Multinomial(n, α1β1, α1β2, α2β1, α2β2).

As we will see in Chapter 10, this model is equivalent to saying that there is no relationship between gender and employment status.

Denoting the observed cell counts by x11, x12, x21, x22, the likelihood function is given by

(α1β1)^x11 (α1β2)^x12 (α2β1)^x21 (α2β2)^x22 = α1^{x1·}(1 − α1)^{n − x1·} β1^{x·1}(1 − β1)^{n − x·1},

where x1· = x11 + x12 and x·1 = x11 + x21. Therefore, the MLE (Problem 9.1.14) is given by α̂1 = x1·/n and β̂1 = x·1/n. Note that α̂1 is the proportion of males in the sample and β̂1 is the proportion of all employed in the sample. Because (x1·, x·1) determines the likelihood function and can be calculated from the likelihood function, we have that (x1·, x·1) is a minimal sufficient statistic.

In this example, a natural definition of residual does not seem readily apparent. So we consider looking at the conditional distribution of the data, given the minimal
P(x11 = i | x1·, x·1) = C(x1·, i) C(n − x1·, x·1 − i) / C(n, x·1).

This is the Hypergeometric(n, x·1, x1·) probability function. So we have evidence against the model holding whenever x11 is out in the tails of this distribution. Assessing this requires a tabulation of this distribution or the use of a statistical package with the hypergeometric distribution function built in.

As a simple numerical example, suppose that we took a sample of n = 20 students, obtaining x1· = 6 males, x·1 = 12 employed, and x11 = 2 employed males. Then the Hypergeometric(20, 12, 6) probability function is given by the following table.

i      0      1      2      3      4      5      6
p(i)   0.001  0.017  0.119  0.318  0.358  0.163  0.024

The probability of getting a value as far, or farther, out in the tails than x11 = 2 is equal to the probability of observing a value of x11 with probability of occurrence as small as or smaller than that of x11 = 2. This P-value equals 0.119 + 0.017 + 0.001 + 0.024 = 0.161. Therefore, we have no evidence against the model of independence between A and B. Of course, the sample size is quite small here.

There is another approach here to testing the independence of A and B. In particular, we could only assume the independence of the initial unclassified sample, and then we always have

(X11, X12, X21, X22) ~ Multinomial(n, θ11, θ12, θ21, θ22),

where the θij comprise an unknown probability distribution. Given this model, we could then test for the independence of A and B. We will discuss this in Section 10.2.

Another approach to model checking proceeds as follows. We enlarge the model to include more distributions and then test the null hypothesis that the true model is the submodel we initially started with. If we can apply the methods of Section 8.2 to come up with a uniformly most powerful (UMP) test of this null hypothesis, then we will have a check of departures from the model of interest — at least as expressed by the possible alternatives in the enlarged model.
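Returning to the numerical illustration in Example 9.1.4, the hypergeometric tail computation is easy to reproduce; a minimal sketch in Python using only the standard library (the variable names are ours, not the text's):

```python
from math import comb

# Example 9.1.4: n = 20 students, x1. = 6 males, x.1 = 12 employed;
# under independence, x11 | (x1., x.1) is Hypergeometric(20, 12, 6):
#   P(x11 = i) = C(12, i) C(8, 6 - i) / C(20, 6)
n, employed, males = 20, 12, 6

def pmf(i):
    return comb(employed, i) * comb(n - employed, males - i) / comb(n, males)

obs = 2                     # observed count of employed males
p_obs = pmf(obs)
# P-value: total probability of outcomes no more probable than the observed one
p_value = sum(pmf(i) for i in range(males + 1) if pmf(i) <= p_obs + 1e-12)
print(round(p_obs, 3), round(p_value, 3))  # → 0.119 0.161
```

This reproduces the table entry p(2) = 0.119 and the P-value 0.161 reported above.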
If the model passes such a check, however, we are still required to check the validity of the enlarged model. This can be viewed as a technique for generating relevant discrepancy statistics D.

9.1.1 Residual and Probability Plots

There is another, more informal approach to checking model correctness
that is often used when we have residuals available. These methods involve various plots of the residuals that should exhibit specific characteristics if the model is correct. While this approach lacks the rigor of the P-value approach, it is good at demonstrating gross deviations from model assumptions. We illustrate this via some examples.

EXAMPLE 9.1.5 Location and Location-Scale Normal Models
Using the residuals for the location normal model discussed in Example 9.1.1, we have that E(Ri) = 0 and Var(Ri) = σ₀²(1 − 1/n). We standardize these values so that they also have variance 1, and so obtain the standardized residuals r1, ..., rn given by

r_i = (n / (σ₀²(n − 1)))^{1/2} (x_i − x̄).  (9.1.3)

The standardized residuals are distributed N(0, 1) and, assuming that n is reasonably large, it can be shown that they are approximately independent. Accordingly, we can think of r1, ..., rn as an approximate sample from the N(0, 1) distribution. Therefore, a plot of the points (i, ri) should not exhibit any discernible pattern. Furthermore, all the values in the y-direction should lie in (−3, 3), unless of course n is very large, in which case we might expect a few values outside this interval. A discernible pattern, or several extreme values, can be taken as some evidence that the model assumption is not correct. Always keep in mind, however, that any observed pattern could have arisen simply from sampling variability when the true model is correct. Simulating a few of these residual plots (just generating several samples of n from the N(0, 1) distribution and obtaining a residual plot for each sample) will give us some idea of whether or not the observed pattern is unusual.

Figure 9.1.3 shows a plot of the standardized residuals (9.1.3) for a sample of 100 from the N(0, 1) distribution. Figure 9.1.4 shows a plot of the standardized residuals for a sample of 100 from the distribution given by X = 3^{−1/2} Z, where Z ~ t(3). Note that a t(3) distribution has mean 0 and variance equal to 3, so Var(3^{−1/2} Z) = 1 (Problem 4.6.16).
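Such a residual plot is easy to simulate; a minimal sketch using the standard library (μ, σ₀, and the seed are arbitrary illustrative choices, not values from the text):

```python
import math
import random

random.seed(1)
n, mu, sigma0 = 100, 5.0, 2.0  # illustrative values only

x = [random.gauss(mu, sigma0) for _ in range(n)]
xbar = sum(x) / n

# Standardized residuals (9.1.3): r_i = (n / (sigma0^2 (n - 1)))^{1/2} (x_i - xbar)
c = math.sqrt(n / (sigma0 ** 2 * (n - 1)))
r = [c * (xi - xbar) for xi in x]

# Under the model the r_i behave like an approximate N(0, 1) sample:
# mean 0, variance about 1, and very few values outside (-3, 3)
mean_r = sum(r) / n
var_r = sum(ri ** 2 for ri in r) / n
outside = sum(1 for ri in r if abs(ri) >= 3)
print(round(var_r, 2), outside)
```

Plotting the pairs (i, r[i]) then gives the residual plot described above.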
Figure 9.1.5 shows the standardized residuals for a sample of 100 from an Exponential(1) distribution.

Figure 9.1.3: A plot of the standardized residuals for
a sample of 100 from an N(0, 1) distribution.
Figure 9.1.4: A plot of the standardized residuals for a sample of 100 from X = 3^{−1/2} Z, where Z ~ t(3).
Figure 9.1.5: A plot of the standardized residuals for a sample of 100 from an Exponential(1) distribution.

Note that the distributions of the standardized residuals for all these samples have mean 0 and variance equal to 1. The difference in Figures 9.1.3 and 9.1.4 is due to the fact that the t distribution has much longer tails. This is reflected in the fact that a few of the standardized residuals are outside (−3, 3) in Figure 9.1.4 but not in Figure 9.1.3. Even though the two distributions are quite different — e.g., the N(0, 1) distribution has all of its moments whereas the 3^{−1/2} t(3) distribution has only two moments — the plots of the standardized residuals are otherwise very similar. The difference in Figures 9.1.3 and 9.1.5 is due to the asymmetry in the Exponential(1) distribution, as it is skewed to the right.

Using the residuals for the location-scale normal model discussed in Example 9.1.2, we define the standardized residuals r1, ..., rn by

r_i = (n / (s²(n − 1)))^{1/2} (x_i − x̄).  (9.1.4)

Here, the unknown variance is estimated by s². Again, it can be shown that when n is large, then r1, ..., rn is an approximate sample from the N(0, 1) distribution. So we plot the values (i, ri) and interpret the plot just as we described for the location normal model.

It is very common in statistical applications to assume some basic form for the distribution of the data, e.g., we might assume we are sampling from a normal distribution with some mean and variance. To assess such an assumption, the use of a probability plot has proven to be very useful. To illustrate, suppose that x1, ..., xn is a sample from an N(μ, σ²) distribution. Then it can be shown that when n is large, the expectation of the i-th order statistic satisfies

E(x_(i)) ≈ μ + σ Φ^{−1}(i/(n + 1)).  (9.1.5)

If the data value x_j corresponds to order statistic x_(i) (i.e., x_j = x_(i)), then we call Φ^{−1}(i/(n + 1)) the
normal score of x_j in the sample. Then (9.1.5) indicates that if we plot the points (Φ^{−1}(i/(n + 1)), x_(i)) for 1 ≤ i ≤ n, these should lie approximately on a line with intercept μ and slope σ. We call such a plot a normal probability plot or normal quantile plot. Similar plots can be obtained for other distributions.

EXAMPLE 9.1.6 Location-Scale Normal
Suppose we want to assess whether or not the following data set can be considered a sample of size n = 10 from some normal distribution.

2.00 0.28 0.47 3.33 1.66 8.17 1.18 4.15 6.43 1.77

The order statistics and associated normal scores for this sample are given in the following table.

i    x_(i)   score        i    x_(i)   score
1    0.28    −1.34        6    2.00    0.11
2    0.47    −0.91        7    3.33    0.34
3    1.18    −0.61        8    4.15    0.60
4    1.66    −0.35        9    6.43    0.90
5    1.77    −0.12        10   8.17    1.33

The values (Φ^{−1}(i/(n + 1)), x_(i)), 1 ≤ i ≤ n, are then plotted in Figure 9.1.6. There is some definite deviation from a straight line here, but note that it is difficult to tell whether this is unexpected in a sample of this size from a normal distribution. Again, simulating a few samples of the same size (say, from an N(0, 1) distribution) and looking at their normal probability plots is recommended. In this case, we conclude that the plot in Figure 9.1.6 looks reasonable.

Figure 9.1.6: Normal probability plot of the data in Example 9.1.6.

We will see in Chapter 10 that the use of normal probability plots of standardized residuals is an important part of model checking for more complicated models. So, while they are not really needed here, we consider some of the characteristics of such plots when assessing whether or not a sample is from a location normal or location-scale normal model. Assume that n is large so that we can consider the standardized residuals, given by (9.1.3) or (9.1.4), as an approximate sample from the N(0, 1) distribution. Then a normal probability plot of the standardized residuals should be approximately linear, with y-intercept approximately equal to 0 and slope approximately equal to 1.
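The normal scores in Example 9.1.6 can be reproduced directly; a minimal sketch using the standard library's NormalDist (Python 3.8+) for Φ^{−1}:

```python
from statistics import NormalDist

# Data from Example 9.1.6 (n = 10)
data = [2.00, 0.28, 0.47, 3.33, 1.66, 8.17, 1.18, 4.15, 6.43, 1.77]
order = sorted(data)
n = len(order)

# Normal score of the i-th order statistic, as in (9.1.5): Phi^{-1}(i / (n + 1))
scores = [NormalDist().inv_cdf(i / (n + 1)) for i in range(1, n + 1)]

for i, (x, z) in enumerate(zip(order, scores), start=1):
    print(f"{i:2d}  {x:5.2f}  {z:6.2f}")
```

The printed scores agree with the table in Example 9.1.6 to within 0.01 rounding, and plotting the pairs (scores[i], order[i]) reproduces the normal probability plot of Figure 9.1.6; a least-squares line through those points estimates μ (intercept) and σ (slope).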
If we get a substantial deviation from this, then we have evidence that the assumed model is
incorrect.

In Figure 9.1.7, we have plotted a normal probability plot of the standardized residuals for a sample of n = 25 from an N(0, 1) distribution. In Figure 9.1.8, we have plotted a normal probability plot of the standardized residuals for a sample of n = 25 from the distribution given by X = 3^{−1/2} Z, where Z ~ t(3). Both distributions have mean 0 and variance 1, so the difference in the normal probability plots is due to other distributional differences.

Figure 9.1.7: Normal probability plot of the standardized residuals of a sample of 25 from an N(0, 1) distribution.
Figure 9.1.8: Normal probability plot of the standardized residuals of a sample of 25 from X = 3^{−1/2} Z, where Z ~ t(3).

9.1.2 The Chi-Squared Goodness of Fit Test

The chi-squared goodness of fit test has an important historical place in any discussion of assessing model correctness. We use this test to assess whether or not a categorical random variable W, which takes its values in the finite sample space {1, 2, ..., k}, has a specified probability measure P, after having observed a sample of n. When we have a random variable that is discrete and takes infinitely many values, then we partition the possible values into k categories and let W simply indicate which category has occurred. If we have a random variable that is quantitative, then we partition R1 into k subintervals and let W indicate in which interval the response occurred. In effect, we want to check whether or not a specific probability model, as given by P, is correct for W, based on an observed sample.

Let X1, ..., Xk be the observed counts or frequencies of 1, ..., k, respectively. If P is correct, then, from Example 2.8.5, (X1, ..., Xk) ~ Multinomial(n, p1, ..., pk), where pi = P({i}). This implies that E(Xi) = npi and Var(Xi) = npi(1 − pi) (recall that Xi ~ Binomial(n, pi)).
From this, we deduce that

R_i = (X_i − np_i) / (np_i(1 − p_i))^{1/2} → N(0, 1) in distribution  (9.1.6)

as n → ∞ (see Example 4.4.9).
For finite n, the distribution of Ri when the model is correct is dependent on P, but the limiting distribution is not. Thus we can think of the Ri as standardized residuals when n is large. Therefore, it would seem that a reasonable discrepancy statistic is given by the sum of the squares of the standardized residuals, Σ_{i=1}^k R_i², with this approximately distributed χ²(k). The restriction X1 + ··· + Xk = n holds, however, so the Ri are not independent and the limiting distribution is not χ²(k). We do, however, have the following result, which provides a similar discrepancy statistic.

Theorem 9.1.1 If (X1, ..., Xk) ~ Multinomial(n, p1, ..., pk), then

X² = Σ_{i=1}^k (1 − p_i) R_i² = Σ_{i=1}^k (X_i − np_i)² / (np_i) → χ²(k − 1) in distribution

as n → ∞.

The proof of this result is a little too involved for this text, so see, for example, Theorem 17.2 of Asymptotic Statistics by A. W. van der Vaart (Cambridge University Press, Cambridge, 1998), which we will use here. We refer to X² as the chi-squared statistic. The process of assessing the correctness of the model by computing the P-value P(X² ≥ X₀²), where X² ~ χ²(k − 1) and X₀² is the observed value of the chi-squared statistic, is referred to as the chi-squared goodness of fit test. P-values near 0 provide evidence of the incorrectness of the probability model, as a small P-value indicates that some of the residuals are too large.

Note that the ith term of the chi-squared statistic can be written as

(X_i − np_i)² / (np_i) = (number in the ith cell − expected number in the ith cell)² / expected number in the ith cell.

It is recommended, for example, in Statistical Methods, by G. Snedecor and W. Cochran (Iowa State Press, 6th ed., Ames, 1967), that grouping (combining cells) be employed to ensure that E(Xi) = np_i ≥ 1 for every i, as simulations have shown that this improves the accuracy of the approximation to the P-value.

We consider an important application.
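Computing X² and its P-value requires only the counts and the null cell probabilities; a minimal sketch using the digit counts reported in Example 9.1.7 below (we approximate the χ²(k − 1) tail probability by numerically integrating its density, rather than relying on a table):

```python
import math

def chi2_sf(x, k, steps=200000, upper=200.0):
    """P(X > x) for X ~ chi-squared(k), via trapezoid integration of the density."""
    c = 1.0 / (2 ** (k / 2) * math.gamma(k / 2))
    f = lambda t: c * t ** (k / 2 - 1) * math.exp(-t / 2)
    h = (upper - x) / steps
    # composite trapezoid rule on [x, upper]; the tail beyond upper is negligible
    s = 0.5 * (f(x) + f(upper)) + sum(f(x + i * h) for i in range(1, steps))
    return s * h

# Digit counts from Example 9.1.7: n = 10^4, k = 10 cells, each p_i = 1/10
counts = [993, 1044, 1061, 1021, 1017, 973, 975, 965, 996, 955]
n, k = sum(counts), len(counts)
expected = n / k
X2 = sum((x - expected) ** 2 / expected for x in counts)
print(round(X2, 4), round(chi2_sf(X2, k - 1), 3))  # → 11.056 0.272
```

The degrees of freedom here are k − 1 = 9, matching Theorem 9.1.1.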
EXAMPLE 9.1.7 Testing the Accuracy of a Random Number Generator
In effect, every Monte Carlo simulation can be considered to be a set of mathematical operations applied to a stream of numbers U1, U2, ... in [0, 1] that are supposed to
be i.i.d. Uniform[0, 1]. Of course, they cannot satisfy this requirement exactly because they are generated according to some deterministic function. Typically, a function f : [0, 1]^m → [0, 1] is chosen and is applied iteratively to obtain the sequence. So we select U1, ..., Um as initial seed values and then U_{m+1} = f(U1, ..., Um), U_{m+2} = f(U2, ..., U_{m+1}), etc. There are many possibilities for f, and a great deal of research and study have gone into selecting functions that will produce sequences that adequately mimic the properties of an i.i.d. Uniform[0, 1] sequence.

Of course, it is always possible that the underlying f used in a particular statistical package or other piece of software is very poor. In such a case, the results of the simulations can be grossly in error. How do we assess whether a particular f is good or not? One approach is to run a battery of statistical tests to see whether the sequence is behaving as we know an ideal sequence would. For example, if the sequence U1, U2, ... is i.i.d. Uniform[0, 1], then ⌈10U1⌉, ⌈10U2⌉, ... is i.i.d. Uniform{1, 2, ..., 10} (⌈x⌉ denotes the smallest integer greater than x, e.g., ⌈3.2⌉ = 4). So we can test the adequacy of the underlying function f by generating U1, ..., Un for large n, putting xi = ⌈10Ui⌉, and then carrying out a chi-squared goodness of fit test with the 10 categories {1}, ..., {10}, each with cell probability equal to 1/10.

Doing this using a popular statistical package (with n = 10⁴) gave the following table of counts xi and standardized residuals ri, as specified in (9.1.6).

i    1         2        3        4        5        6         7         8         9         10
xi   993       1044     1061     1021     1017     973       975       965       996       955
ri   −0.23333  1.46667  2.03333  0.70000  0.56667  −0.90000  −0.83333  −1.16667  −0.13333  −1.50000

All the standardized residuals look reasonable as possible values from an N(0, 1) distribution. Furthermore,

X² = (1 − 0.1)[(−0.23333)² + (1.46667)² + (2.03333)² + (0.70000)² + (0.56667)² + (−0.90000)² + (−0.83333)² + (−1.16667)² + (−0.13333)² + (−1.50000)²] = 11.0560,

which gives the P-value
P(X² ≥ 11.0560) = 0.27190, where X² ~ χ²(9). This indicates that we have no evidence that the random number generator is defective.

Of course, the story does not end with a single test like this. Many other features of the sequence should be tested. For example, we might want to investigate the independence properties of the sequence and so test if each possible combination (i, j) occurs with probability 1/100, etc.

More generally, we will not have a prescribed probability distribution P for W, but rather a statistical model {P_θ : θ ∈ Ω}, where each P_θ is a probability measure on the finite set {1, 2, ..., k}. Then, based on the sample from the model, we have that (X1, ..., Xk) ~ Multinomial(n, p1(θ), ..., pk(θ)), where pi(θ) = P_θ({i}). Perhaps a natural way to assess whether or not this model fits the data is to find the MLE θ̂ from the likelihood function L(θ | x1, ..., xk) = p1(θ)^{x1} ··· pk(θ)^{xk} and then look at the standardized residuals

r_i = (x_i − np_i(θ̂)) / (np_i(θ̂)(1 − p_i(θ̂)))^{1/2}.

We have the following result, which we state without proof.

Theorem 9.1.2 Under conditions (similar to those discussed in Section 6.5), we have that R_i → N(0, 1) in distribution and

X² = Σ_{i=1}^k (1 − p_i(θ̂)) R_i² = Σ_{i=1}^k (X_i − np_i(θ̂))² / (np_i(θ̂)) → χ²(k − 1 − dim Ω) in distribution

as n → ∞.

By dim Ω we mean the dimension of the set Ω. Loosely speaking, this is the minimum number of coordinates required to specify a point in the set, e.g., a line requires one coordinate (positive or negative distance from a fixed point), a circle requires one coordinate, a plane in R3 requires two coordinates, etc. Of course, this result implies that the number of cells must satisfy k − 1 − dim Ω ≥ 1. Consider an example.

EXAMPLE 9.1.8 Testing for Exponentiality
Suppose that a sample of lifelengths of light bulbs (measured in thousands of hours) is supposed to be from an Exponential(θ) distribution, where θ > 0 is unknown. So here dim Ω = 1, and we require at least three cells for the chi-squared test. The manufacturer expects that most bulbs will last at least 1000 hours, 50% will last less than 2000 hours, and most will have failed by 3000 hours.
So based on this, we partition the sample space (0, ∞) as (0, 1], (1, 2], (2, 3], (3, ∞).

Suppose that a sample of n = 30 light bulbs was taken and that the counts x1 = 5, x2 = 16, x3 = 8, and x4 = 1 were obtained for the four intervals, respectively. Then the likelihood function based on these counts is given by

L(θ | x1, ..., x4) = (1 − e^{−θ})^5 (e^{−θ} − e^{−2θ})^{16} (e^{−2θ} − e^{−3θ})^8 (e^{−3θ})^1,

because, for example, the probability of a value falling in (1, 2] is e^{−θ} − e^{−2θ} and we have x2 = 16 observations in this interval. Figure 9.1.9 is a plot of the log-likelihood.

Figure 9.1.9: Plot of the log-likelihood function in Example 9.1.8.

By successively plotting the likelihood on shorter and shorter intervals, the MLE was determined to be θ̂ = 0.603535. This value leads to the probabilities

p1(θ̂) = 1 − e^{−0.603535} = 0.453125,
p2(θ̂) = e^{−0.603535} − e^{−2(0.603535)} = 0.247803,
p3(θ̂) = e^{−2(0.603535)} − e^{−3(0.603535)} = 0.135517,
p4(θ̂) = e^{−3(0.603535)} = 0.163555,

the fitted values

30 p1(θ̂) = 13.59375, 30 p2(θ̂) = 7.43409, 30 p3(θ̂) = 4.06551, 30 p4(θ̂) = 4.90665,

and the standardized residuals

r1 = (5 − 13.59375) / (30(0.453125)(1 − 0.453125))^{1/2} = −3.151875,
r2 = (16 − 7.43409) / (30(0.247803)(1 − 0.247803))^{1/2} = 3.622378,
r3 = (8 − 4.06551) / (30(0.135517)(1 − 0.135517))^{1/2} = 2.098711,
r4 = (1 − 4.90665) / (30(0.163555)(1 − 0.163555))^{1/2} = −1.928382.

Note that two of the standardized residuals look large. Finally, we compute

X² = (1 − 0.453125)(−3.151875)² + (1 − 0.247803)(3.622378)² + (1 − 0.135517)(2.098711)² + (1 − 0.163555)(−1.928382)² = 22.221018,

and P(X² ≥ 22.221018) = 0.0000 when X² ~ χ²(2). Therefore, we have strong evidence that the Exponential model is not correct for these data, and we would not use this model to make inference about θ. Note that we used the MLE of θ
based on the count data and not the original sample! If instead we were to use the MLE for θ based on the original sample (in this case, equal to 1/x̄ and so much easier to compute), then Theorem 9.1.2 would no longer be valid.

The chi-squared goodness of fit test is but one of many discrepancy statistics that have been proposed for model checking in the statistical literature. The general approach is to select a discrepancy statistic D, like X², such that the exact or asymptotic distribution of D is independent of θ and known. We then compute a P-value based on D. The Kolmogorov–Smirnov test and the Cramér–von Mises test are further examples of such discrepancy statistics, but we do not discuss these here.

9.1.3 Prediction and Cross-Validation

Perhaps the most rigorous test that a scientific model or theory can be subjected to is assessing how well it predicts new data after it has been fit to an independent data set. In fact, this is a crucial step in the acceptance of any new empirically developed scientific theory — to be accepted, it must predict new results beyond the data that led to its formulation. If a model does not do a good job at predicting new data, then it is reasonable to say that we have evidence against the model being correct. If the model is too simple, then the fitted model will underfit the observed data and also the future data. If the model is too complicated, then the model will overfit the original data, and this will be detected when we consider the new data in light of this fitted model.

In statistical applications, we typically cannot wait until new data are generated to check the model. So statisticians use a technique called cross-validation. For this, we split an original data set x1, ..., xn into two parts: the training set T, comprising k of the data values and used to fit the model; and the validation set V, which comprises
Based on the training data, we construct predictors of the remaining n various aspects of the validation data. Using the discrepancies between the predicted and actual values, we then assess whether or not the validation set V is surprising as a possible future sample from the model. Of course, there
are n k possible such splits of the data and we would not want to make a decision based on just one of these. So a cross­validational analysis will have to take this into account. Furthermore, we will have to decide how to measure the discrepancies between T and V and choose a value for k We do not pursue this topic any further in this text. 9.1.4 What Do We Do When a Model Fails? So far we have been concerned with determining whether or not an assumed model is appropriate given observed data. Suppose the result of our model checking is that we decide a particular model is inappropriate. What do we do now? Perhaps the obvious response is to say that we have to come up with a more appro­ priate model — one that will pass our model checking. It is not obvious how we should go about this, but statisticians have devised some techniques. One of the simplest techniques is the method of transformations. For example, sup­ exp X pose that we observe a sample y1 2. A normal probability plot based on the yi, as in Figure 9.1.10, with X will detect evidence of the nonnormality of the distribution. Transforming these yi values to ln yi will, however, yield a reasonable looking normal probability plot, as in Figure 9.1.11. yn from the distribution given by Y N So in this case, a simple transformation of the sample yields a data set that passes this check. In fact, this is something statisticians commonly do. Several transforma­ tions from the family of power transformations given by Y p for p 0 or the logarithm transformation ln Y are tried to see if a distributional assumption can be satisfied by a transformed sample. We will see some applications of this in Chapter 10. Surprisingly, this simple technique often works, although there are no guarantees that it always will. Perhaps the most commonly applied transformation is the logarithm when our data values are positive (note that this is a necessity for this transformation). 
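The effect of the logarithm transformation is easy to demonstrate by simulation. A minimal sketch using the standard library: as a crude stand-in for eyeballing the probability plot, we use the correlation between the sorted sample and the normal scores, which is close to 1 when the plot is nearly straight (this correlation summary is our illustrative device, not a method from the text):

```python
import math
import random
from statistics import NormalDist, mean

random.seed(2)
n = 50
x = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [math.exp(v) for v in x]        # Y = exp(X): lognormal, so the plot bends
logy = [math.log(v) for v in y]     # ln Y recovers the normal sample

scores = [NormalDist().inv_cdf(i / (n + 1)) for i in range(1, n + 1)]

def probplot_corr(sample):
    """Correlation of the sorted sample with normal scores; near 1 => straight plot."""
    s = sorted(sample)
    ms, mz = mean(s), mean(scores)
    num = sum((a - ms) * (b - mz) for a, b in zip(s, scores))
    den = math.sqrt(sum((a - ms) ** 2 for a in s) * sum((b - mz) ** 2 for b in scores))
    return num / den

print(round(probplot_corr(y), 3), round(probplot_corr(logy), 3))
```

The correlation for the raw lognormal sample is visibly below that of the log-transformed sample, mirroring the bend in Figure 9.1.10 versus the straight line in Figure 9.1.11.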
Another very common transformation is the square root transformation, i.e., p = 1/2, when we have count data. Of course, it is not correct to try many, many transformations until we find one that makes the probability plots or residual plots look acceptable. Rather, we try a few simple transformations.

Figure 9.1.10:
A normal probability plot of a sample of n = 50 from the distribution given by Y = exp(X) with X ~ N(0, 1).
Figure 9.1.11: A normal probability plot of a sample of n = 50 from the distribution given by ln Y, where Y = exp(X) and X ~ N(0, 1).

Summary of Section 9.1

Model checking is a key component of the practical application of statistics. One approach to model checking involves choosing a discrepancy statistic D and then assessing whether or not the observed value of D is surprising by computing a P-value.

Computation of the P-value requires that the distribution of D be known under the assumption that the model is correct. There are two approaches to accomplishing this. One involves choosing D to be ancillary, and the other involves computing the P-value using the conditional distribution of the data given the minimal sufficient statistic.

The chi-squared goodness of fit statistic is a commonly used discrepancy statistic. For large samples, with the expected cell counts determined by the MLE based on the multinomial likelihood, the chi-squared goodness of fit statistic is approximately ancillary.

There are also many informal methods of model checking based on various plots of residuals.

If a model is rejected, then there are several techniques for modifying the model. These typically involve transformations of the data. Also, a model that fails a model-checking procedure may still be useful, if the deviation from correctness is small.

EXERCISES

9.1.1 Suppose the following sample is assumed to be from an N(μ, 4) distribution with μ ∈ R1 unknown. Check this model using the discrepancy statistic of Example 9.1.1.

9.1.2 Suppose the following sample is assumed to be from an N(μ, 2) distribution with μ unknown.
(a) Plot the standardized residuals.
(b) Construct a normal probability plot of the standardized residuals.
(c) What conclusions do you draw based on the results of parts (a) and (b)?
9.1.3 Suppose the following sample is assumed to be from an N(μ, σ²) distribution, where μ ∈ R1 and σ² > 0 are unknown.

14.0 9.4 12.1 13.4 6.3 8.5 7.1 12.4 13.3 9.1

(a) Plot the standardized residuals.
(b) Construct a normal probability plot of the standardized residuals.
(c) What conclusions do you draw based on the results of parts (a) and (b)?

9.1.4 Suppose the following table was obtained from classifying members of a sample of n = 10 from a student population according to the classification variables A and B, where A = 0, 1 indicates male, female and B = 0, 1 indicates conservative, liberal. Check the model that says gender and political orientation are independent, using Fisher's exact test.

9.1.5 The following sample of n = 20 is supposed to be from a Uniform[0, 1] distribution.

0.11 0.45 0.56 0.22 0.72 0.08 0.18 0.65 0.26 0.32
0.32 0.88 0.42 0.76 0.22 0.32 0.96 0.21 0.04 0.80

After grouping the data, using a partition of five equal-length intervals, carry out the chi-squared goodness of fit test to assess whether or not we have evidence against this assumption. Record the standardized residuals.

9.1.6 Suppose a die is tossed 1000 times, and the following frequencies are obtained for the number of pips up when the die comes to a rest.

x1 = 163, x2 = 178, x3 = 142, x4 = 150, x5 = 183, x6 = 184

Using the chi-squared goodness of fit test, assess whether we have evidence that this is not a symmetrical die. Record the standardized residuals.

9.1.7 Suppose the sample space for a response is given by S = {0, 1, 2, 3, ...}.
(a) Suppose that a statistician believes that in fact the response will lie in the set {10, 11, 12, 13} and so chooses a probability measure P that reflects this. When the data are collected, however, the value s = 3 is observed. What is an appropriate P-value to quote as a measure of how surprising this value is as a potential value from P?
(b) Suppose instead P is taken to be a Geometric(0.1) distribution. Determine an appropriate P-value to quote as a measure of how surprising s = 3 is as a potential value from P.

9.1.8 Suppose we observe s = 3 heads in n = 10 independent tosses of a purportedly fair coin. Compute a P-value that assesses how surprising this value is as a potential value from the distribution prescribed.
Do not use the chi-squared test.

9.1.9 Suppose you check a model by computing a P-value based on some discrepancy statistic and conclude that there is no evidence against the model. Does this mean the model is correct? Explain your answer.

9.1.10 Suppose you are told that standardized scores on a test are distributed N(0, 1). A student's standardized score is 4. Compute an appropriate P-value to assess whether or not the statement is correct.

9.1.11 It is asserted that a coin is being tossed in independent tosses. You are somewhat skeptical about the independence of the tosses because you know that some people practice tossing coins so that they can increase the frequency of getting a head. So you wish to assess the independence of x1, ..., xn from a Bernoulli(θ) distribution.
(a) Show that the conditional distribution of x1, ..., xn, given x̄, is uniform on the set of all sequences of length n with entries from {0, 1}.
(b) Using this conditional distribution, determine the distribution of the number of 1's observed in the first n/2 observations. (Hint: The hypergeometric distribution.)
(c) Suppose you observe. Compute an appropriate P-value to assess the independence of these tosses using (b).

COMPUTER EXERCISES

9.1.12 For the data of Exercise 9.1.1, present a normal probability plot of the standardized residuals and comment on it.

9.1.13 Generate 25 samples from the N(0, 1) distribution with n = 10 and look at their normal probability plots. Draw any general conclusions.

9.1.14 Suppose the following table was obtained from classifying members of a sample of n = 100 from a student population according to the classification variables A and B, where A = 0, 1 indicates male, female and B = 0, 1 indicates conservative, liberal.

        B = 0   B = 1
A = 0   20      15
A = 1   36      29

Check the model that gender and political orientation are independent using Fisher's exact test.

9.1.15 Using software, generate a sample of n = 1000 from the Binomial(10, 0.2) distribution.
Then, using the chi-squared goodness of fit test, check that this sample is indeed from this distribution. Use grouping to ensure E(Xi) = npi ≥ 1. What would you conclude if you got a P-value close to 0?

9.1.16 Using a statistical package,
generate a sample of n = 1000 from the Poisson(5) distribution. Then, using the chi-squared goodness of fit test based on grouping the observations into five cells chosen to ensure E(Xi) = npi ≥ 1, check that this sample is indeed from this distribution. What would you conclude if you got a P-value close to 0?

9.1.17 Using a statistical package, generate a sample of n = 1000 from the N(0, 1) distribution. Then, using the chi-squared goodness of fit test based on grouping the observations into five cells chosen to ensure E(Xi) = npi ≥ 1, check that this sample is indeed from this distribution. What would you conclude if you got a P-value close to 0?

PROBLEMS

9.1.18 (Multivariate normal distribution) A random vector Y = (Y1, ..., Yk) is said to have a multivariate normal distribution with mean vector μ ∈ Rk and variance matrix Σ = (σij) ∈ R^{k×k} if

a1Y1 + ··· + akYk ~ N(Σ_i ai μi, Σ_i Σ_j ai aj σij)

for every choice of a1, ..., ak ∈ R1. We write Y ~ Nk(μ, Σ). Prove that E(Yi) = μi, Cov(Yi, Yj) = σij, and that Yi ~ N(μi, σii). (Hint: Theorem 3.3.4.)

9.1.19 In Example 9.1.1, prove that the residual R = (R1, ..., Rn) ∈ Rn is distributed multivariate normal (see Problem 9.1.18) with mean vector 0 and variance matrix Σ = (σij) ∈ R^{n×n}, where σii = σ0²(1 − 1/n) and σij = −σ0²/n when i ≠ j. (Hint: Theorem 4.6.1.)

9.1.20 If Y = (Y1, ..., Yk) ∈ Rk and X = (X1, ..., Xl) ∈ Rl are distributed multivariate normal, then it can be shown that Y and X are independent whenever Σ_{i=1}^k ai Yi and Σ_{i=1}^l bi Xi are independent for every choice of a1, ..., ak and b1, ..., bl. Use this fact to show that, in Example 9.1.1, x̄ and R are independent. (Hint: Theorem 4.6.2 and Problem 9.1.19.)

9.1.21 In Example 9.1.4, prove that (α̂₁, β̂₁) = (x1·/n, x·1/n) is the MLE.

9.1.22 In Example 9.1.4, prove that the number of samples satisfying the constraints (9.1.2) equals C(n, x1·) C(n, x·1). (Hint: Using i for the count x11, show that the number of such samples equals

Σ_{i = max(0, x1· + x·1 − n)}^{min(x1·, x·1)} C(n, x1·) C(x1·, i) C(n − x1·, x·1 − i)

and sum this using the fact that the sum of Hypergeometric(n, x·1, x1·) probabilities equals 1.)

COMPUTER PROBLEMS

9.1.23 For the data of Exercise 9.1.3, carry out a simulation to estimate the P-value for the discrepancy statistic of Example 9.1.2. Plot a density histogram of the simulated values. (Hint: See Appendix B for appropriate code.)

9.1.24 When n = 10, generate 10⁴ values of the discrepancy statistic in Example 9.1.2 when we have a sample from an N(0, 1) distribution. Plot these in a density histogram. Repeat this, but now generate from a Cauchy distribution. Compare the histograms (do not forget to make sure both plots have the same scales).

9.1.25 The following data are supposed to have come from an Exponential(θ) distribution, where θ > 0 is unknown.

12 1 12 10 1 0 1 4 9

Check this model using a chi-squared goodness of fit test based on the intervals (0, 2.0], (2.0, 4.0], (4.0, 6.0], (6.0, 8.0], (8.0, 10.0], (10.0, ∞). (Hint: Calculate the MLE by plotting the log-likelihood over successively smaller intervals.)

502 Section 9.2: Checking for Prior–Data Conflict

9.1.26 The following table, taken from Introduction to the Practice of Statistics, by D. Moore and G. McCabe (W. H. Freeman, New York, 1999), gives the measurements in milligrams of daily calcium intake for 38 women between the ages of 18 and 24 years.

808 651 626 1156 882 716 774 684 1062 438 1253 1933 970 1420 549 748 909 1425 1325 1203 802 948 446 2433 374 1050 465 1255 416 976 1269 110 784 572 671 997 403 696

(0, 600], (600, 1200], (1200, 1800], (1800, ∞)

(a) Suppose that
Bayesian model was incorrect after deciding that s is a surprising value from M. This only tells us, however, that the probability measure M is unlikely to have produced the data and not that the model {fθ : θ ∈ Ω} was wrong. Consider the following example.

EXAMPLE 9.2.1 Prior–Data Conflict
Suppose we obtain a sample consisting of n = 20 values of s = 1 from the model with Ω = {1, 2} and probability functions for the basic response given by the following table.

s      f1(s)    f2(s)
1      0.1      0.9
2      0.9      0.1

Then the probability of obtaining this sample from f2 is given by (0.9)^20 = 0.12158, which is a reasonable value, so we have no evidence against the model {f1, f2}. Suppose we place a prior on Ω such that π(1) = 0.9999, so that we are virtually certain that θ = 1. Then the probability of getting these data from the prior predictive M is given by

0.9999 (0.1)^20 + 0.0001 (0.9)^20 ≈ 1.2158 × 10^−5.

The prior probability of observing a sample of 20 whose prior predictive probability is no greater than 1.2158 × 10^−5 can be calculated (using statistical software to tabulate the prior predictive) to be approximately 0.04. This tells us that the observed data are "in the tails" of the prior predictive and thus are surprising, which leads us to conclude that we have evidence that M is incorrect.
So in this example, checking the model {fθ : θ ∈ Ω} leads us to conclude that it is plausible for the data observed. On the other hand, checking the model given by M leads us to the conclusion that the Bayesian model is implausible.
The lesson of Example 9.2.1 is that we can have model failure in the Bayesian context in two ways. First, the data s may be surprising in light of the model {fθ : θ ∈ Ω}. Second, even when the data are plausibly from this model, the prior and the data may conflict. This conflict will occur whenever the prior assigns most of its probability to distributions in the model for which the data are surprising. In either situation, inferences drawn from the Bayesian model may be flawed.
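The two probabilities in Example 9.2.1 are quick to reproduce. The following sketch (Python; the table values and the prior are the ones given in the example) computes the probability of the sample under the sampling model and its prior predictive probability.

```python
# Sketch of the computations in Example 9.2.1. The probabilities f1(1) = 0.1,
# f2(1) = 0.9 and the prior pi(1) = 0.9999 are the values from the example.
f1, f2 = 0.1, 0.9              # probability of observing s = 1 under theta = 1, 2
prior = {1: 0.9999, 2: 0.0001}
n = 20                         # the sample consists of twenty values of s = 1

p_f2 = f2 ** n                                     # probability under f2 alone
m_obs = prior[1] * f1 ** n + prior[2] * f2 ** n    # prior predictive probability

print(round(p_f2, 5))   # 0.12158 -- no evidence against the model {f1, f2}
print(m_obs)            # about 1.2158e-05 -- far in the tails of M
```

The first value shows the sampling model is plausible; the second shows the same data are very surprising under the prior predictive, which is exactly the prior–data conflict the example describes.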
If, however, the prior assigns positive probability (or density) to every possible value of θ, then the consistency results for Bayesian inference mentioned in Chapter 7 indicate that a large amount of data will overcome a prior–data conflict (see Example 9.2.4). This is because the effect of the prior decreases with increasing amounts of data. So the
existence of a prior–data conflict does not necessarily mean that our inferences are in error. Still, it is useful to know whether or not this conflict exists, as it is often difficult to detect whether or not we have sufficient data to avoid the problem.
Therefore, we should first use the checks discussed in Section 9.1 to ensure that the data s is plausibly from the model {fθ : θ ∈ Ω}. If we accept the model, then we look for any prior–data conflict. We now consider how to go about this.
The prior predictive distribution of any ancillary statistic is the same as its distribution under the sampling model, i.e., its prior predictive distribution is not affected by the choice of the prior. So the observed value of any ancillary statistic cannot tell us anything about the existence of a prior–data conflict. We conclude from this that, if we are going to use some function of the data to assess whether or not there is prior–data conflict, then its marginal distribution has to depend on θ. We now show that the prior predictive conditional distribution of the data given a minimal sufficient statistic T is independent of the prior.

Theorem 9.2.1 Suppose T is a sufficient statistic for the model {fθ : θ ∈ Ω} for data s. Then the conditional prior predictive distribution of the data s given T is independent of the prior π.

PROOF We will prove this in the case that each sample distribution fθ and the prior π are discrete. A similar argument can be developed for the more general case.
By Theorem 6.1.1 (the factorization theorem), we have that fθ(s) = h(s) gθ(T(s)) for some functions gθ and h. Therefore, the prior predictive probability function of s is given by

m(s) = Σ_θ π(θ) h(s) gθ(T(s)) = h(s) Σ_θ π(θ) gθ(T(s)).

The prior predictive probability function of T at t is given by

m(t) = Σ_{s : T(s) = t} h(s) Σ_θ π(θ) gθ(t) = (Σ_θ π(θ) gθ(t)) Σ_{s : T(s) = t} h(s).

Therefore, the conditional prior predictive probability function of the data s, given T(s) = t, is

m(s)/m(t) = h(s) / Σ_{s′ : T(s′) = t} h(s′),

which is independent of π.

So, from Theorem 9.2.1, we conclude that any aspects of the data, beyond the value of a minimal sufficient statistic, can tell us nothing about the existence of a prior–data conflict. Therefore, if we want to base our check for a prior–data conflict on the prior predictive, then we must use the prior predictive for a minimal sufficient statistic. Consider the following examples.

EXAMPLE 9.2.2 Checking a Beta Prior for a Bernoulli Model
Suppose that x1, ..., xn is a sample from a Bernoulli(θ) model, where θ ∈ [0, 1] is unknown, and θ is given a Beta(α, β) prior distribution. Then we have that the sample count y = Σ_{i=1}^n xi is a minimal sufficient statistic and is distributed Binomial(n, θ). Therefore, the prior predictive probability function for y is given by

m(y) = C(n, y) [Γ(α + β)/(Γ(α)Γ(β))] [Γ(y + α)Γ(n − y + β)/Γ(n + α + β)].

Now observe that when α = β = 1, then m(y) = 1/(n + 1), i.e., the prior predictive of y is Uniform{0, 1, ..., n} and no values of y are surprising. This is not unexpected, as with the uniform prior on θ we are implicitly saying that any count y is reasonable. On the other hand, when α = β = 2, the prior puts more weight around 1/2. The prior predictive is then proportional to (y + 1)(n − y + 1). This prior predictive is plotted in Figure 9.2.1 when n = 20. Note that counts near 0 or 20 lead to evidence that there is a conflict between the data and the prior. For example, if we obtain the count y = 2, we can assess how surprising this value is by computing the probability of obtaining a value with a lower probability of occurrence. Using the symmetry of the prior predictive, we have that this probability equals (using statistical software for the computation)

m(0) + m(1) + m(19) + m(20) = 0.0688876.

Therefore, this observation is not surprising at the 5% level.

Figure 9.2.1: Plot of the prior predictive of the sample count y in Example 9.2.2 when α = β = 2 and n = 20.

Suppose now that n = 50 and α = 2, β = 4. The mean of this prior is 2/(2 + 4) = 1/3, and the prior is right-skewed. The prior predictive is plotted in Figure 9.2.2. Clearly, values of y near 50 give evidence against the model in this case.
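The beta-binomial form of the prior predictive makes this check easy to automate. The following sketch (Python; the n = 20, α = β = 2 setting is the one used above, and the tail computation mirrors the assessment of a count of 2) tabulates m(y) on the log scale for numerical stability.

```python
# Prior predictive (beta-binomial) check of a Beta(a, b) prior for a Bernoulli
# model, as in Example 9.2.2.
from math import comb, lgamma, exp

def log_beta(p, q):
    return lgamma(p) + lgamma(q) - lgamma(p + q)

def prior_predictive(y, n, a, b):
    """m(y) = C(n, y) B(y + a, n - y + b) / B(a, b)."""
    return comb(n, y) * exp(log_beta(y + a, n - y + b) - log_beta(a, b))

n = 20
# With a = b = 1 the prior predictive is Uniform{0, ..., n}: no count is surprising.
assert abs(prior_predictive(7, n, 1, 1) - 1 / (n + 1)) < 1e-12

# With a = b = 2, the probability of a count less probable than y = 2 is the
# two-sided tail m(0) + m(1) + m(19) + m(20).
m = [prior_predictive(y, n, 2, 2) for y in range(n + 1)]
print(round(m[0] + m[1] + m[19] + m[20], 7))  # 0.0688876
```

Exact arithmetic confirms this: for α = β = 2 and n = 20, m(y) = (y + 1)(21 − y)/1771, so the tail is (21 + 40 + 40 + 21)/1771 = 122/1771 ≈ 0.0689.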
For example, if we observe y = 35, then the probability of getting a count with smaller probability of occurrence is given by (using statistical software for the computation)

m(36) + · · · + m(50) = 0.0500457.

Only values more extreme than this would provide evidence against the model at the 5% level.

Figure 9.2.2: Plot of the prior predictive of the sample count y in Example 9.2.2 when α = 2, β = 4, and n = 50.

EXAMPLE 9.2.3 Checking a Normal Prior for a Location Normal Model
Suppose that x1, ..., xn is a sample from an N(μ, σ0²) distribution, where μ ∈ R^1 is unknown and σ0² is known. Suppose we take the prior distribution of μ to be an N(μ0, τ0²) distribution, for some specified choice of μ0 and τ0². Note that x̄ is a minimal sufficient statistic for this model, so we need to compare the observed value of this statistic to its prior predictive distribution to assess whether or not there is prior–data conflict.
Now we can write x̄ = μ + z, where μ ~ N(μ0, τ0²) independent of z ~ N(0, σ0²/n). From this, we immediately deduce (see Exercise 9.2.3) that the prior predictive distribution of x̄ is N(μ0, τ0² + σ0²/n). From the symmetry of the prior predictive density about μ0, we immediately see that the appropriate P-value is

M(|X̄ − μ0| ≥ |x̄ − μ0|) = 2(1 − Φ(|x̄ − μ0|/(τ0² + σ0²/n)^{1/2})).    (9.2.1)

So a small value of (9.2.1) is evidence that there is a conflict between the observed data and the prior, i.e., the prior is putting most of its mass on values of μ for which the observed data are surprising.
Another possibility for model checking in this context is to look at the posterior predictive distribution of the data. Consider, however, the following example.

EXAMPLE 9.2.4 (Example 9.2.1 continued)
Recall that, in Example 9.2.1, we concluded that a prior–data conflict existed. Note, however, that the posterior probability of θ = 2 is

0.0001 (0.9)^20 / (0.9999 (0.1)^20 + 0.0001 (0.9)^20) ≈ 1.

Therefore, the posterior predictive probability of the observed sequence of 20 values of 1 is approximately (0.9)^20 = 0.12158, which does not indicate any prior–data conflict. We note, however, that in this example, the amount of data is sufficient to overwhelm the prior; thus we are led to a sensible inference about θ.
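The P-value (9.2.1) is straightforward to compute from the standard normal cdf. The sketch below (Python; the numerical values are illustrative, not taken from the text) evaluates 2(1 − Φ(|x̄ − μ0|/√(τ0² + σ0²/n))), which follows from the symmetry of the N(μ0, τ0² + σ0²/n) prior predictive about μ0.

```python
# P-value for checking prior-data conflict in the location normal model of
# Example 9.2.3, using the N(mu0, tau0^2 + sigma0^2/n) prior predictive of xbar.
from math import erf, sqrt

def Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def conflict_pvalue(xbar, n, mu0, tau0sq, sigma0sq):
    sd = sqrt(tau0sq + sigma0sq / n)
    return 2 * (1 - Phi(abs(xbar - mu0) / sd))

# A sample mean at the prior mean gives P-value 1; one far out in the tails of
# the prior predictive gives a small P-value, i.e., evidence of conflict.
print(conflict_pvalue(0.0, 5, 0.0, 1.0, 2.0))         # 1.0
print(conflict_pvalue(7.3, 5, 0.0, 1.0, 2.0) < 0.05)  # True
```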
The problem with using the posterior predictive to assess whether or not a prior–data conflict exists is that we have an instance of the so-called double use of the data. For we have fit the model, i.e., constructed the posterior predictive, using the observed data, and then we tried to use this posterior predictive to assess whether or not a prior–data conflict exists. The double use of the data results in overly optimistic assessments of the validity of the Bayesian model and will often not detect discrepancies. We will not discuss posterior model checking further in this text.
We have only touched on the basics of checking for prior–data conflict here. With more complicated models, the possibility exists of checking individual components of a prior, e.g., the components of the prior specified in Example 7.1.4 for the location-scale normal model, to ascertain more precisely where a prior–data conflict is arising. Also, ancillary statistics play a role in checking for prior–data conflict, as we must remove any ancillary variation when computing the P-value because this variation does not depend on the prior. Furthermore, when the prior predictive distribution of a minimal sufficient statistic is continuous, then issues concerning exactly how P-values are to be computed must be addressed. These are all topics for a further course in statistics.

Summary of Section 9.2
In Bayesian inference, there are two potential sources of model incorrectness. First, the sampling model for the data may be incorrect. Second, even if the sampling model is correct, the prior may conflict with the data in the sense that most of the prior probability is assigned to distributions in the model for which the data are surprising.
We first check for the correctness of the sampling model using the methods of Section 9.1.
If we do not find evidence against the sampling model, we next check for prior–data conflict by seeing if the observed value of a minimal sufficient statistic is surprising or not, with respect to the prior predictive distribution of this quantity.
Even if a prior–data conflict exists, posterior inferences may still be valid if we have enough data.

EXERCISES
9.2.1 Suppose we observe the value s = 2 from the model with Ω = {1, 2}, given by
the following table of probability functions f1(s), f2(s).
(a) Do the observed data lead us to doubt the validity of the model? Explain why or why not.
(b) Suppose the prior given by π(1) = 0.3 is placed on the parameter. Is there any evidence of a prior–data conflict? (Hint: Compute the prior predictive for each possible data set and assess whether or not the observed data set is surprising.)
(c) Repeat part (b) using the prior given by π(1) = 0.01.
9.2.2 Suppose a sample of n = 6 is taken from a Bernoulli(θ) distribution, where θ has a Beta(3, 3) prior distribution. If the value nx̄ = 2 is obtained, then determine whether there is any prior–data conflict.
9.2.3 In Example 9.2.3, establish that the prior predictive distribution of x̄ is given by the N(μ0, τ0² + σ0²/n) distribution.
9.2.4 Suppose we have a sample of n = 5 from an N(μ, 2) distribution, where μ is unknown and the value x̄ = 7.3 is observed. An N(0, 1) prior is placed on μ. Compute the appropriate P-value to check for prior–data conflict.
9.2.5 Suppose that x ~ Uniform[0, θ] and θ ~ Uniform[0, 1]. If the value x = 2.2 is observed, then determine an appropriate P-value for checking for prior–data conflict.
COMPUTER EXERCISES
9.2.6 Suppose a sample of n = 20 is taken from a Bernoulli(θ) distribution, where θ has a Beta(3, 3) prior distribution. If the value nx̄ = 6 is obtained, then determine whether there is any prior–data conflict.
PROBLEMS
9.2.7 Suppose that x1, ..., xn is a sample from an N(μ, σ0²) distribution, where μ ~ N(μ0, τ0²). Determine the prior predictive distribution of (x1, ..., xn).
9.2.8 Suppose that x1, ..., xn is a sample from an Exponential(λ) distribution, where λ ~ Gamma(α0, β0). Determine the prior predictive distribution of x̄.
9.2.9 Suppose that s1, ..., sn is a sample from a Multinomial(1, θ1, ..., θk) distribution, where (θ1, ..., θk−1) ~ Dirichlet(α1, ..., αk). Determine the prior predictive distribution of (x1, ..., xk), where xi is the count in the ith category.
9.2.10 Suppose that x1, ..., x
n is a sample from a Uniform[0, θ] distribution, where θ has prior density given by I[0,1](θ). Determine the prior predictive distribution of x(n).
9.2.11 Suppose we have the context of Example 9.2.3. Determine the limiting P-value for checking for prior–data conflict as n → ∞. Interpret the meaning of this P-value in terms of the prior and the true value of μ.
9.2.12 Suppose that x ~ Geometric(θ) and θ ~ Uniform[0, 1].
(a) Determine the appropriate P-value for checking for prior–data conflict.
(b) Based on the P-value determined in part (a), describe the circumstances under which evidence of prior–data conflict will exist.
(c) If we use a continuous prior that is positive at a point, then this is an assertion that the point is possible. In light of this, discuss whether or not a continuous prior that is positive at 0 makes sense for the Geometric(θ) distribution.
CHALLENGES
9.2.13 Suppose that X1, ..., Xn is a sample from an N(μ, σ²) distribution, where μ | σ² ~ N(μ0, τ0²σ²) and 1/σ² ~ Gamma(α0, β0). Then determine a form for the prior predictive density of (X̄, S²) that you could evaluate without integrating. (Hint: Use the algebraic manipulations found in Section 7.5.)

9.3 The Problem with Multiple Checks
As we have mentioned throughout this text, model checking is a part of good statistical practice. In other words, one should always be wary of the value of statistical work in which the investigators have not engaged in, and reported the results of, reasonably rigorous model checking. It is really the job of those who report statistical results to convince us that their models are reasonable for the data collected, bearing in mind the effects of both underfitting and overfitting.
In this chapter, we have reported some of the possible model-checking approaches available. We have focused on the main categories of procedures and perhaps the most often used methods from within these. There are many others.
At this point, we cannot say that any one approach is the best possible method. Perhaps greater insight along these lines will come with further research into the topic, and then a clearer recommendation could be made. One recommendation that can be made now, however, is that it is not reasonable to go about model checking by
implementing every possible model-checking procedure you can. A simple example illustrates the folly of such an approach.

EXAMPLE 9.3.1
Suppose that x1, ..., xn is supposed to be a sample from the N(0, 1) distribution. Suppose we decide to check this model by computing the P-values

Pi = P(Xi² ≥ xi²)

for i = 1, ..., n, where Xi² ~ χ²(1). Furthermore, we will decide that the model is incorrect if the minimum of these P-values is less than 0.05.
Now consider the repeated sampling behavior of this method when the model is correct. We have that min(P1, ..., Pn) ≤ 0.05 if and only if max(x1², ..., xn²) ≥ χ²_{0.95}(1), where χ²_{0.95}(1) is the 0.95 quantile of the χ²(1) distribution, and so

P(min(P1, ..., Pn) ≤ 0.05) = P(max(X1², ..., Xn²) ≥ χ²_{0.95}(1)) = 1 − P(max(X1², ..., Xn²) ≤ χ²_{0.95}(1)) = 1 − (0.95)^n → 1

as n → ∞. This tells us that if n is large enough, we will reject the model with virtual certainty even though it is correct! Note that n does not have to be very large for there to be an appreciable probability of making an error. For example, when n = 10, the probability of making an error is 0.40; when n = 20, the probability of making an error is 0.64; and when n = 100, the probability of making an error is 0.99.

We can learn an important lesson from Example 9.3.1, for, if we carry out too many model-checking procedures, we are almost certain to find something wrong — even if the model is correct. The cure for this is that before actually observing the data (so that our choices are not determined by the actual data obtained), we decide on a few relevant model-checking procedures to be carried out and implement only these.
The problem we have been discussing here is sometimes referred to as the problem of multiple comparisons, which comes up in other situations as well — e.g., see Section 10.4.1, where multiple means are compared via pairwise tests for differences in the means. One approach for avoiding the multiple-comparisons problem is to simply lower the cutoff for the P-value so that the probability of making a mistake is appropriately small. For example, if we decided in Example 9.3.1 that evidence against the model is only warranted when an individual P-value is smaller than 0.0001, then the probability of making a
mistake is 0.01 when n = 100. A difficulty with this approach generally is that our model-checking procedures will not be independent, and it does not always seem possible to determine an appropriate cutoff for the individual P-values. More advanced methods are needed to deal with this problem.

Summary of Section 9.3
Carrying out too many model checks is not a good idea, as we will invariably find something that leads us to conclude that the model is incorrect.
Rather than engaging in a "fishing expedition," where we just keep on checking the model, it is better to choose a few procedures before we see the data, and use these, and only these, for the model checking.

Chapter 10
Relationships Among Variables

CHAPTER OUTLINE
Section 1 Related Variables
Section 2 Categorical Response and Predictors
Section 3 Quantitative Response and Predictors
Section 4 Quantitative Response and Categorical Predictors
Section 5 Categorical Response and Quantitative Predictors
Section 6 Further Proofs (Advanced)

In this chapter, we are concerned with perhaps the most important application of statistical inference: the problem of analyzing whether or not a relationship exists among variables and what form the relationship takes. As a particular instance of this, recall the example and discussion in Section 5.1.
Many of the most important problems in science and society are concerned with relationships among variables. For example, what is the relationship between the amount of carbon dioxide placed into the atmosphere and global temperatures? What is the relationship between class size and scholastic achievement by students? What is the relationship between weight and carbohydrate intake in humans? What is the relationship between lifelength and the dosage of a certain drug for cancer patients? These are all examples of questions whose answers involve relationships among variables. We will see that statistics plays a key role in answering such questions.
In Section 10.1, we provide a precise definition of what it means for variables to be related, and we distinguish between two broad categories of relationship, namely, association and cause–effect. Also, we discuss some of the key ideas involved in collecting data when we want to determine whether a cause–effect relationship exists. In the remaining sections, we examine the various statistical methodologies that are used to analyze data when we are concerned with relationships. We emphasize the use of frequentist methodologies in this chapter. We give some examples of the
Bayesian approach, but there are some complexities involved with the distributional problems associated with Bayesian methods that are best avoided at this stage. Sampling algorithms for the Bayesian approach have been developed, along the lines of those discussed in Chapter 7 (see also Chapter 11), but their full discussion would take us beyond the scope of this text. It is worth noting, however, that Bayesian analyses with diffuse priors will often yield results very similar to those obtained via the frequentist approach.
As discussed in Chapter 9, model checking is an important feature of any statistical analysis. For the models used in this chapter, a full discussion of the more rigorous P-value approach to model checking requires more development than we can accomplish in this text. As such, we emphasize the informal approach to model checking, via residual and probability plots. This should not be interpreted as a recommendation that these are the preferred methods for such models.

10.1 Related Variables
Consider a population Π with two variables X, Y : Π → R^1 defined on it. What does it mean to say that the variables X and Y are related? Perhaps our first inclination is to say that there must be a formula relating the two variables, such as Y = a + bX² for some choice of constants a and b, or Y = exp(X), etc. But consider a population of humans and suppose X is the weight of an individual in kilograms and Y is the height of the individual in centimeters. From our experience, we know that taller people tend to be heavier, so we believe that there is some kind of relationship between height and weight. We know, too, that there cannot be an exact formula that describes this relationship, because people with the same weight will often have different heights, and people with the same height will often have different weights.
But consider a population in kilograms and Y is the weight of a 10.1.1 The Definition of Relationship If we think of all the people with a given weight x, then there will be a distribution that have weight x. We call this distribution the of heights for all those individuals conditional distribution of Y given that X x. We can now express what we mean by our intuitive idea that X and Y are related, for, as we change the value of the weight that we condition on, we expect the condi­ tional distribution to change. In particular, as x increases, we expect that the location of the conditional distribution will increase, although other features of the distribution may change as well. For example, in
Figure 10.1.1 we provide a possible plot of two approximating densities for the conditional distribution of Y given X = 70 kg and the conditional distribution of Y given X = 90 kg. We see that the conditional distribution has shifted up when X goes from 70 to 90 kg, but also that the shape of the distribution has changed somewhat as well. So we can say that a relationship definitely exists between X and Y, at least in this population. Notice that, as defined so far, X and Y are not random variables, but they become so when we randomly select an individual from the population. In that case, the conditional distributions referred to become the conditional probability distributions of the random variable Y, given that we observe X = 70 and X = 90, respectively.

Figure 10.1.1: Plot of two approximating densities for the conditional distribution of Y given X = 70 kg (dashed line) and the conditional distribution of Y given X = 90 kg (solid line).

We will adopt the following definition to precisely specify what we mean when we say that variables are related.

Definition 10.1.1 Variables X and Y are related variables if there is any change in the conditional distribution of Y given X = x, as x changes.

We could instead define what it means for variables to be unrelated. We say that variables X and Y are unrelated if they are independent. This is equivalent to Definition 10.1.1, because two variables are independent if and only if the conditional distribution of one given the other does not depend on the condition (Exercise 10.1.1).
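Definition 10.1.1 can be illustrated by simulation: generate a population in which the conditional distribution of Y given X = x shifts with x, and compare the estimated conditional distributions at two values of x. The weight/height numbers below are invented for illustration only.

```python
# Simulated illustration of Definition 10.1.1: Y (height, cm) is related to
# X (weight, kg) because the conditional distribution of Y shifts with x.
import random

random.seed(1)
population = []
for _ in range(100_000):
    x = random.gauss(80, 10)                 # weight
    y = 100 + 0.9 * x + random.gauss(0, 5)   # height depends on weight
    population.append((x, y))

def conditional_mean(x0, width=1.0):
    """Estimate E(Y | X = x0) from individuals with X near x0."""
    ys = [y for x, y in population if abs(x - x0) <= width]
    return sum(ys) / len(ys)

# The location of the conditional distribution increases with x, so X and Y
# are related variables in the sense of Definition 10.1.1.
print(conditional_mean(70.0) < conditional_mean(90.0))  # True
```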
There is an apparent asymmetry in Definition 10.1.1, because the definition considers only the conditional distribution of Y given X and not the conditional distribution of X given Y. But, if there is a change in the conditional distribution of Y given X = x as we change x, then by the above comment, X and Y are not independent; thus there must be a change in the conditional distribution of X given Y = y as we change y (also see Problem 10.1.23).
Notice that the definition is applicable no matter what kind of variables we are dealing with. So both could be quantitative variables, or both categorical variables
, or one could be a quantitative variable while the other is a categorical variable.
Definition 10.1.1 says that X and Y are related if any change is observed in the conditional distribution. In reality, this would mean that there is practically always a relationship between variables X and Y. It seems likely that we will always detect some difference if we carry out a census and calculate all the relevant conditional distributions. This is where the idea of the strength of a relationship among variables becomes relevant, for if we see large changes in the conditional distributions, then we can say a strong relationship exists. If we see only very small changes, then we can say a very weak relationship exists that is perhaps of no practical importance.

The Role of Statistical Models
If a relationship exists between two variables, then its form is completely described by the set of conditional distributions of Y given X. Sometimes it may be necessary to describe the relationship using all these conditional distributions. In many problems, however, we look for a simpler presentation. In fact, we often assume a statistical model that prescribes a simple form for how the conditional distributions change as we change X. Consider the following example.

EXAMPLE 10.1.1 Simple Normal Linear Regression Model
In Section 10.3.2, we will discuss the simple normal linear regression model, where the conditional distribution of the quantitative variable Y, given the quantitative variable X = x, is assumed to be the N(β1 + β2x, σ²) distribution, where β1, β2, and σ² are unknown. For example, Y could be the blood pressure of an individual and X the amount of salt the person consumed each day.
In this case, the conditional distributions have constant shape and change, as x changes, only through the conditional mean.
The mean moves along the line given by β1 + β2x, for some intercept β1 and slope β2. If this model is correct, then the variables are unrelated if and only if β2 = 0, as this is the only situation in which the conditional distributions can remain constant as we change x.
Statistical models, like that described in Example 10.1.1, can be wrong. There is nothing requiring that two quantitative variables must be related in that way. For example, the conditional variance of Y can vary with x, and the very shape of the conditional distribution can vary with x, too. The model of Example 10.1.1 is an instance of a simplifying assumption that is appropriate in many practical contexts. However, methods such as those discussed in Chapter
9 must be employed to check model assumptions before accepting statistical inferences based on such a model. We will always consider model checking as part of our discussion of the various models used to examine the relationship among variables.

Response and Predictor Variables
Often, we think of Y as a dependent variable (depending on X) and of X as an independent variable (free to vary). Our goal, then, is to predict the value of Y given the value of X. In this situation, we call Y the response variable and X the predictor variable. Sometimes, though, there is really nothing to distinguish the roles of X and Y. For example, suppose that X is the weight of an individual in kilograms and Y is the height in centimeters. We could then think of predicting weight from height or conversely. It is then immaterial which we choose to condition on.
In many applications, there is more than one response variable and more than one predictor variable. We will not consider the situation in which we have more than one response variable, but we will consider the case in which X = (X1, ..., Xk) is
Xk x1 EXAMPLE 10.1.2 The Normal Linear Model with k Predictors In Section 10.3.4, we will discuss the normal multiple linear regression model. For this, the conditional distribution of quantitative variable Y given that the quantitative predictors X1 is assumed to be the Xk x1 xk N 1 2x1 k 1xk 2 k 1 and 2 are unknown. For example, Y could be blood distribution, where 1 pressure, X1 the amount of daily salt intake, X2 the age of
the individual, X3 the weight of the individual, etc.
In this case, the conditional distributions have constant shape and change, as the values of the predictors x1, ..., xk change, only through the conditional mean, which changes according to the function β1 + β2x1 + · · · + βk+1xk. Notice that, if this model is correct, then the variables are unrelated if and only if β2 = · · · = βk+1 = 0, as this is the only situation in which the conditional distributions can remain constant as we change x1, ..., xk.
When we split a set of variables Y, X1, ..., Xk into response Y and predictors X1, ..., Xk, we are implicitly saying that we are directly interested only in the conditional distributions of Y given X1, ..., Xk. There may be relationships among the predictors X1, ..., Xk, however, and these can be of interest.
For example, suppose we have two predictors X1 and X2, and the conditional distribution of X1 given X2 is virtually degenerate at a value a + cX2 for some constants a and c. Then it is not a good idea to include both X1 and X2 in a model, such as that discussed in Example 10.1.2, as this can make the analysis very sensitive to small changes in the data. This is known as the problem of multicollinearity. The effect of multicollinearity, and how to avoid it, will not be discussed any further in this text. This is, however, a topic of considerable practical importance.

Regression Models
Suppose that the response Y is quantitative and we have k predictors X1, ..., Xk. One of the most important simplifying assumptions used in practice is the regression assumption, namely, we assume that, as we change X1, ..., Xk, the only thing that can possibly change about the conditional distribution of Y given X1, ..., Xk is the conditional mean E(Y | X1, ..., Xk). The importance of this assumption is that, to analyze the relationship between Y and X1, ..., Xk, we now need only consider how E(Y | X1, ..., Xk) changes as X1, ..., Xk changes. Indeed, if E(Y | X1, ..., Xk) does not change as X1, ..., Xk changes, then there is no relationship between Y and the predictors.
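Under the regression assumption, analyzing the relationship reduces to estimating the conditional mean. As a sketch (Python; the parameter values are invented for illustration), we can simulate from the simple model of Example 10.1.1 and recover E(Y | X = x) = β1 + β2x by least squares; an estimated slope near 0 would indicate no relationship.

```python
# Simulate from Y = beta1 + beta2 * x + Z with Z ~ N(0, sigma^2), then recover
# the conditional mean E(Y | X = x) = beta1 + beta2 * x by least squares.
import random

random.seed(2)
beta1, beta2, sigma = 2.0, 0.5, 1.0
xs = [random.uniform(0, 10) for _ in range(50_000)]
ys = [beta1 + beta2 * x + random.gauss(0, sigma) for x in xs]

n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
sxx = sum((x - xbar) ** 2 for x in xs)
b2 = sxy / sxx            # estimated slope
b1 = ybar - b2 * xbar     # estimated intercept

print(round(b1, 1), round(b2, 2))  # close to 2.0 and 0.5
```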
Of course, this kind of an analysis is dependent on the regression assumption holding, and the methods of Section 9.1 must be used to check this. Regression models — namely, statistical models where we make the regression assumption — are among the most important statistical models used in practice. Sections 10.3 and 10.4 discuss several instances of regression models.
Regression models are often presented in the form

Y = E(Y | X1, ..., Xk) + Z,    (10.1.1)

where Z = Y − E(Y | X1, ..., Xk) is known as the error term. We see immediately that, if the regression assumption applies, then the conditional distribution of Z given X1, ..., Xk is fixed as we change X1, ..., Xk and, conversely, if the conditional distribution of Z given X1, ..., Xk is fixed as we change X1, ..., Xk, then the regression assumption holds. So when the regression assumption applies, (10.1.1) provides a decomposition of Y into two parts: (1) a part possibly dependent on X1, ..., Xk, namely, E(Y | X1, ..., Xk), and (2) a part that is always independent of X1, ..., Xk, namely, the error Z. Note that Examples 10.1.1 and 10.1.2 can be written in the form (10.1.1), where Z ~ N(0, σ²).

10.1.2 Cause–Effect Relationships and Experiments
Suppose now that we have variables X and Y defined on a population Π and have concluded that a relationship exists according to Definition 10.1.1. This may be based on having conducted a full census of Π or, more typically, we will have drawn a simple random sample from Π and then used the methods of the remaining sections of this chapter to conclude that such a relationship exists.
If Y is playing the role of the response and if X is the predictor, then we often want to be able to assert that changes in X are causing the observed changes in the conditional distributions of Y. Of course, if there are no changes in the conditional distributions, then there is no relationship between X and Y and hence no cause–effect relationship, either.
For example, suppose that the amount of carbon dioxide gas being released in the atmosphere is increasing, and we observe that mean global temperatures are rising.
If we have reason to believe that the amount of carbon dioxide released can have an effect on temperature, then perhaps it is sensible to believe that the increase in carbon dioxide emissions is causing the observed increase in mean global temperatures. As another example, for many years it has been observed that smokers suffer from respiratory diseases much more frequently than do nonsmokers. It seems reasonable, then, to conclude that smoking causes an increased risk for respiratory disease. On the other hand, suppose we consider the relationship between weight and height. It seems clear that a relationship exists, but it does not make any sense to say that changes in one of the variables is causing the changes in the conditional distributions of the other.

Chapter 10: Relationships Among Variables 517

Confounding Variables

When can we say that an observed relationship between X and Y is a cause–effect relationship? If a relationship exists between X and Y, then we know that there are at least two values x1 and x2 such that fY|X(· | x1) ≠ fY|X(· | x2), i.e., these two conditional distributions are not equal. If we wish to say that this difference is caused by the change in X, then we have to know categorically that there is no other variable Z defined on the population that confounds with X. The following example illustrates the idea of two variables confounding.

EXAMPLE 10.1.3
Suppose we have a population of students such that most females hold a part-time job and most males do not. A researcher is interested in the distribution of grades, as measured by grade point average (GPA), and is looking to see if there is a relationship between GPA and gender. On the basis of the data collected, the researcher observes a difference in the conditional distribution of GPA given gender and concludes that a relationship exists between these variables. It seems clear, however, that an assertion of a cause–effect relationship existing between GPA and gender is not warranted, as the difference in the conditional distributions could also be attributed to the difference in part-time work status rather than gender. In this example, part-time work status and gender are confounded.
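The confounding in Example 10.1.3 can be imitated in a small simulation. In the sketch below (all numbers are invented for illustration), GPA depends only on part-time work status, yet the conditional distributions of GPA given gender differ, because gender and work status are associated; conditioning on the confounder removes the difference.

```python
import random
import statistics

random.seed(2)

# hypothetical population: most females hold a part-time job, most males do not
population = []
for _ in range(20000):
    gender = random.choice(["F", "M"])
    job = random.random() < (0.8 if gender == "F" else 0.2)  # confounder Z
    # GPA depends only on job status, not on gender (illustrative means/sd)
    gpa = random.gauss(2.7 if job else 3.1, 0.3)
    population.append((gender, job, gpa))

def mean_gpa(rows):
    return statistics.mean(g for _, _, g in rows)

by_f = [r for r in population if r[0] == "F"]
by_m = [r for r in population if r[0] == "M"]
# marginal conditional means differ by gender (an apparent relationship)...
print(round(mean_gpa(by_f), 2), round(mean_gpa(by_m), 2))

f_job = [r for r in by_f if r[1]]
m_job = [r for r in by_m if r[1]]
# ...but conditioning on the confounder removes the difference
print(round(mean_gpa(f_job), 2), round(mean_gpa(m_job), 2))
```

The first pair of means differ noticeably while the second pair agree, even though gender never enters the mechanism generating GPA.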
A more careful analysis might rescue the situation described in Example 10.1.3, for if X and Z denote the confounding variables, then we could collect data on Z as well and examine the conditional distributions fY|X,Z(· | x, z). In Example 10.1.3, these will be the conditional distributions of GPA, given gender and part-time work status. If these conditional distributions change as we change x for some fixed value of z, then we could assert that a cause–effect relationship exists between X and Y, provided there are no further confounding variables. Of course, there are probably still more confounding variables, and we really should be conditioning on all of them. This brings up the point that, in any practical
application, we almost certainly will never even know all the potential confounding variables.

Controlling Predictor Variable Assignments

Fortunately, there is sometimes a way around the difficulties raised by confounding variables. Suppose we can control the value of the variable X for any member of the population, i.e., we can assign the value x to any selected member, for any of the possible values of x. In Example 10.1.3, this would mean that we could assign a part-time work status to any student in the population.

Now consider the following idealized situation. Imagine assigning every member of the population the value X = x1 and then carrying out a census to obtain the conditional distribution fY|X(· | x1). Now imagine assigning every member the value X = x2 and then carrying out a census to obtain the conditional distribution fY|X(· | x2). If there is any difference in fY|X(· | x1) and fY|X(· | x2), then the only possible reason is that the value of X differs. Therefore, if fY|X(· | x1) ≠ fY|X(· | x2), we can assert that a cause–effect relationship exists.

518 Section 10.1: Related Variables

A difficulty with the above argument is that typically we can never exactly determine fY|X(· | x1) and fY|X(· | x2). But in fact, we may be able to sample from them; then the methods of statistical inference become available to us to infer whether or not there is any difference. Suppose we take a random sample of n members of the population and randomly assign n1 of these the value X = x1, with the remaining n2 = n − n1 members assigned the value x2. We obtain the Y values y11, ..., y1n1 for those assigned the value x1 and obtain the Y values y21, ..., y2n2 for those assigned the value x2. Then it is apparent that y11, ..., y1n1 is a sample from fY|X(· | x1) and y21, ..., y2n2 is a sample from fY|X(· | x2). In fact, provided that n = n1 + n2 is small relative to the population size, we can consider these as i.i.d. samples from these conditional distributions. So we see that in certain circumstances, it is possible to collect data in such a way that we can make inferences about whether or not a cause–effect relationship exists. We now specify the characteristics of the relevant data collection technique.

Conditions for Cause–Effect Relationships

First,
if our inferences are to apply to a population, then we must have a random sample from that population. This is just the characteristic of what we called a sampling study in Section 5.4, and we must do this to avoid any selection effects. So if the purpose of a study is to examine the relationship between the duration of migraine headaches and the dosage of a certain drug, the investigator must have a random sample from the population of migraine headache sufferers.

Second, we must be able to assign any possible value of the predictor variable X to any individual selected. If we cannot do this, or do not do this, then there may be hidden confounding variables (sometimes called lurking variables) that are influencing the conditional distributions of Y. So in a study of the effects of the dosage of a drug on migraine headaches, the investigator must be able to impose the dosage on each participant in the study.

Third, after deciding what values of X we will use in our study, we must randomly allocate these values to members of the sample. This is done to avoid the possibility of selection effects. So, after deciding what dosages to use in the study of the effects of the dosage of a drug on migraine headaches, and how many participants will receive each dosage, the investigator must randomly select the individuals who will receive each dosage. This will (hopefully) avoid selection effects, such as only the healthiest individuals getting the lowest dosage, etc.

When these requirements are met, we refer to the data collection process as an experiment. Statistical inference based on data collected via an experiment has the capability of inferring that cause–effect relationships exist, so this represents an important and powerful scientific tool.

A Hierarchy of Studies

Combining this discussion with Section 5.4, we see a hierarchy of data collection methods. Observational studies reside at the bottom of the hierarchy.
Inferences drawn from observational studies must be taken with a degree of caution, for selection effects could mean that the results do not apply to the population intended, and the existence of confounding variables means that we cannot make inferences about cause–effect relationships. For sampling studies, we know that any inferences drawn will be about the appropriate population; but the existence of confounding variables again causes difficulties for any statements about the existence of cause–effect relationships, e.g., just taking random samples of males and females from the population of Example 10.1.3 will not avoid the confounding variables. At the top of the hierarchy reside experiments.

It is probably apparent that it is often impossible to conduct an experiment. In Example 10.1.3, we cannot assign the value of gender, so nothing can be said about the existence of a cause–effect relationship between GPA and gender.

There are many notorious examples in which assertions are made about the existence of cause–effect relationships but for which no experiment is possible. For example, there have been a number of studies conducted where differences have been noted among the IQ distributions of various racial groups. It is impossible, however, to control the variable racial origin, so it is impossible to assert that the observed differences in the conditional distributions of IQ, given race, are caused by changes in race.

Another example concerns smoking and lung cancer in humans. It has been pointed out that it is impossible to conduct an experiment, as we cannot assign values of the predictor variable (perhaps different amounts of smoking) to humans at birth and then observe the response, namely, whether someone contracts lung cancer or not.

This raises an important point. We do not simply reject the results of analyses based on observational studies or sampling studies because the data did not arise from an experiment. Rather, we treat these as evidence — potentially flawed evidence, but still evidence. Think of eyewitness evidence in a court of law suggesting that a crime was committed by a certain individual. Eyewitness evidence may be unreliable, but if two or three unconnected eyewitnesses give similar reports, then our confidence grows in the reliability of the evidence. Similarly, if many observational and sampling studies seem to indicate that smoking leads to an increased risk for contracting lung cancer, then our confidence grows that a cause–effect relationship does indeed exist.
Furthermore, if we can identify potentially confounding variables, then observational or sampling studies can be conducted taking these into account, increasing our confidence still more. Ultimately, we may not be able to definitively settle the issue via an experiment, but it is still possible to build overwhelming evidence that smoking and lung cancer do have a cause–effect relationship.

10.1.3 Design of Experiments

Suppose we have a response Y and a predictor X (sometimes called a factor in experimental contexts) defined on a population, and we
want to collect data to determine whether a cause–effect relationship exists between them. Following the discussion in Section 10.1.2, we will conduct an experiment. There are now a number of decisions to be made, and our choices constitute what we call the design of the experiment.

For example, we are going to assign values of X to the n sampled elements, now called experimental units. Which of the possible values of X should we use? When X can take only a small finite number of values, then it is natural to use these values. On the other hand, when the number of possible values of X is very large or even infinite, as with quantitative predictors, then we have to choose values of X to use in the experiment.

Suppose we have chosen the values x1, ..., xk for X. We refer to x1, ..., xk as the levels of X; any particular assignment of a level xi to a sampled unit will be called a treatment. Typically, we will choose the levels so that they span the possible range of X fairly uniformly. For example, if X is temperature in degrees Celsius, and we want to examine the relationship between Y and X for X in the range [0, 100], then, using k = 5 levels, we might take x1 = 0, x2 = 25, x3 = 50, x4 = 75, and x5 = 100.

Having chosen the levels of X, we then have to choose how many treatments of each level we are going to use in the experiment, i.e., decide how many response values ni ≥ 1 we are going to observe at level xi for i = 1, ..., k.

In any experiment, we will have a finite amount of resources (money, time, etc.) at our disposal, which determines the sample size n. The question then is how should we choose the ni so that n1 + ··· + nk = n? If we know nothing about the conditional distributions fY|X(· | xi), then it makes sense to use balance, namely, choose n1 = ··· = nk.

On the other hand, suppose we know that some of the fY|X(· | xi) will exhibit greater variability than others. For example, we might measure variability by the variance of fY|X(· | xi).
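Such variability information can be turned directly into a choice of the ni. A common heuristic (the analogue of Neyman allocation in survey sampling) gives each level a share of the total sample size proportional to its standard deviation; the standard deviations below are made up for illustration.

```python
# assumed standard deviations of f(Y | X = x_i) at each level (illustrative values)
sds = {0: 1.0, 25: 1.0, 50: 2.0, 75: 2.0, 100: 4.0}
n = 100  # total number of observations available

total = sum(sds.values())
# allocate n_i proportional to sd_i (rounding; a real design would fix remainders)
alloc = {x: round(n * s / total) for x, s in sds.items()}
print(alloc)  # {0: 10, 25: 10, 50: 20, 75: 20, 100: 40}
```

Note how the most variable level (x = 100) receives four times as many observations as the least variable ones, which is exactly the kind of allocation motivated in the text.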
Then it makes sense to allocate more treatments to the levels of X where the response is more variable. This is because it will take more observations to make accurate inferences about characteristics of such an fY|X(· | xi) than for the less variable conditional distributions.

As discussed in Sections 6.3.4 and 6.3.5, we also want to choose the ni so that any inferences we make have desired accuracy. Methods for choosing the sample sizes ni, similar to those discussed in Chapter 7, have been developed for these more complicated designs, but we will not discuss these any further here.

Suppose, then, that we have determined the set of ordered pairs (x1, n1), ..., (xk, nk). We refer to this set as the experimental design. Consider some examples.

EXAMPLE 10.1.4
Suppose we have a population of students at a given university. The administration is concerned with determining the value of each student being assigned an academic advisor. The response variable Y will be a rating that a student assigns on a scale of 1 to 10 (completely dissatisfied to completely satisfied with their university experience) at the end of a given semester. We treat Y as a quantitative variable. A random sample of n = 100 students is selected from the population, and 50 of these are randomly selected to receive advisors while the remaining 50 are not assigned advisors.

Here, the predictor X is a categorical variable that indicates whether or not the student has an advisor. There are only k = 2 levels, and both are used in the experiment. If x1 = 0 denotes no advisor and x2 = 1 denotes having an advisor, then n1 = n2 = 50 and we have a balanced experiment. The experimental design is given by {(0, 50), (1, 50)}.

At the end of the experiment, we want to use the data to make inferences about the conditional distributions fY|X(· | 0) and fY|X(· | 1) to determine whether a cause–effect relationship exists. The methods of Section 10.4 will be relevant for this.

EXAMPLE 10.1.5
Suppose we have a population of dairy cows. A feed company is concerned with the relationship between weight gain, measured in kilograms, over a specific time period and the amount of a supplement, measured in grams/liter, of an additive put into the cows’ feed.
Here, the response Y is the weight gain — a quantitative variable. The predictor X is the concentration of the additive. Suppose X can plausibly range between 0 and 2, so it is also a quantitative variable. The experimenter decides to use k = 4 levels, with x1 = 0.00, x2 = 0.66, x3 = 1.32, and x4 = 2.00. Further, the sample sizes n1 = n2 = n3 = n4 = 10 were determined to be appropriate. So the balanced experimental design is given by {(0.00, 10), (0.66, 10), (1.32, 10), (2.00, 10)}.

At the end of the experiment, we want to make inferences about the conditional distributions fY|X(· | 0.00), fY|X(· | 0.66), fY|X(· | 1.32), and fY|X(· | 2.00). The methods of Section 10.3 are relevant for this.

Control Treatment, the Placebo Effect, and Blinding

Notice that in Example 10.1.5, we included the level X = 0, which corresponds to no application of the additive. This is called a control treatment, as it gives a baseline against which we can assess the effect of the predictor. In many experiments, it is important to include a control treatment.

In medical experiments, there is often a placebo effect — that is, a disease sufferer given any treatment will often record an improvement in symptoms. The placebo effect is believed to be due to the fact that a sufferer will start to feel better simply because someone is paying attention to the condition. Accordingly, in any experiment to determine the efficacy of a drug in alleviating disease symptoms, it is important that a control treatment be used as well. For example, if we want to investigate whether or not a given drug alleviates migraine headaches, then among the dosages we select for the experiment, we should make sure that we include a pill containing none of the drug (the so-called sugar pill); that way we can assess the extent of the placebo effect. Of course, the recipients should not know whether they are receiving the sugar pill or the drug. This is called a blind experiment. If we also conceal the identity of the treatment from the experimenters, so as to avoid any biasing of the results on their part, then this is known as a double-blind experiment.

In Example 10.1.5, we assumed that it is possible to take a sample from the population of all dairy cows. Strictly speaking, this is necessary if we want to avoid selection effects and make sure that our inferences apply to the population of interest.
In practice, however, taking a sample of experimental units from the full population of interest is often not feasible. For example, many medical experiments are conducted on animals, and these are definitely not random samples from the population of the particular animal in question, e.g., rats. In such cases, however, we simply recognize the possibility that selection effects or lurking variables could render invalid the conclusions drawn from such analyses when they are to be applied to the population of interest. But we still regard the results as evidence concerning the phenomenon under study. It is the job of the experimenter to come as close as possible to the idealized situation specified by a valid experiment; for example, randomization is still employed when assigning treatments to experimental units so that selection effects are avoided as much as possible.

Interactions

In the experiments we have discussed so far, there has been one predictor. In many practical contexts, there is more than one predictor. Suppose, then, that there are two predictors X and W, and that we have decided on the levels x1, ..., xk for X and the levels w1, ..., wl for W. One possibility is to look at the conditional distributions fY|X(· | xi) for i = 1, ..., k and fY|W(· | wj) for j = 1, ..., l to determine whether X and W individually have a relationship with the response Y. Such an approach, however, ignores the effect of the two predictors together. In particular, the way the conditional distributions fY|X,W(· | x, w) change as we change x may depend on w; when this is the case, we say that there is an interaction between the predictors.

To investigate the possibility of an interaction existing between X and W, we must sample from each of the kl distributions fY|X,W(· | xi, wj) for i = 1, ..., k and j = 1, ..., l. The experimental design then takes the form

{(x1, w1, n11), (x2, w1, n21), ..., (xk, wl, nkl)}

where nij gives the number of applications of the treatment (xi, wj). We say that the two predictors X and W are completely crossed in such a design because each value of X used in the experiment occurs with each value of W used in the experiment. Of course, we can extend this discussion to the case where there are more than two predictors.
We will discuss in Section 10.4.3 how to analyze data to determine whether there are any interactions between predictors.

EXAMPLE 10.1.6
Suppose we have a population of students at a particular university and are investigating the relationship between the response Y, given by a student’s grade in calculus, and the predictors W and X. The predictor W is the number of
hours of academic advising given monthly to a student; it can take the values 0, 1, or 2. The predictor X indicates class size, where X = 0 indicates small class size and X = 1 indicates large class size. So we have a quantitative response Y, a quantitative predictor W taking three values, and a categorical predictor X taking two values. The crossed values of the predictors (W, X) are given by the set

{(0, 0), (0, 1), (1, 0), (1, 1), (2, 0), (2, 1)},

so there are six treatments. To conduct the experiment, the university then takes a random sample of 6n students and randomly assigns n students to each treatment.

Sometimes we include additional predictors in an experimental design even when we are not primarily interested in their effects on the response Y. We do this because we know that such a variable has a relationship with Y. Including such predictors allows us to condition on their values and so investigate more precisely the relationship Y has with the remaining predictors. We refer to such a variable as a blocking variable.

EXAMPLE 10.1.7
Suppose the response variable Y is yield of wheat in bushels per acre, and the predictor variable X is an indicator variable for which of three types of wheat is being planted in an agricultural study. Each type of wheat is going to be planted on a plot of land, where all the plots are of the same size, but it is known that the plots used in the experiment will vary considerably with respect to their fertility. Note that such an experiment is another example of a situation in which it is impossible to randomly sample the experimental units (the plots) from the full population of experimental units.

Suppose the experimenter can group the available experimental units into plots of low fertility and high fertility. We call these two classes of fields blocks. Let W indicate the type of plot. So W is a categorical variable taking two values.
It then seems clear that the conditional distributions fY|X,W(· | x, w) will be much less variable than the conditional distributions fY|X(· | x). In this case, W is serving as a blocking variable. The experimental units in a particular block, the one of low fertility or the one of high fertility, are more homogeneous than the full set of plots, so variability will be reduced and inferences will be more accurate.

Summary of Section 10.1

We say two variables are related if the conditional distribution of one given the other changes at all, as we change the value of the conditioning variable.

To conclude that a relationship between two variables is a cause–effect relationship, we must make sure that (through conditioning) we have taken account of all confounding variables.

Statistics provides a practical way of avoiding the effects of confounding variables via conducting an experiment. For this, we must be able to assign the values of the predictor variable to experimental units sampled from the population of interest.

The design of experiments is concerned with determining methods of collecting the data so that the analysis of the data will lead to accurate inferences concerning questions of interest.

EXERCISES

10.1.1 Prove that discrete random variables X and Y are unrelated if and only if X and Y are independent.

10.1.2 Suppose that two variables X and Y defined on a finite population are functionally related as Y = g(X) for some unknown nonconstant function g. Explain how this situation is covered by Definition 10.1.1, i.e., the definition will lead us to conclude that X and Y are related. What about the situation in which g(x) = c for some value c for every x? (Hint: Use the relative frequency functions of the variables.)

10.1.3 Suppose that a census is conducted on a population and the joint distribution of (X, Y) is obtained as in the following table.

          Y = 1   Y = 2   Y = 3
X = 1     0.15    0.18    0.40
X = 2     0.12    0.09    0.06

Determine whether or not a relationship exists between Y and X.

10.1.4 Suppose that a census is conducted on a population and the joint distribution of (X, Y) is obtained as in the following table.

          Y = 1   Y = 2   Y = 3
X = 1     1/12    1/6     1/12
X = 2     1/3     1/6     1/6

Determine whether or not a relationship exists between Y and X.

10.1.5 Suppose that X is a random variable and Y = X. Determine whether or not X and Y are related. What happens when X has a degenerate distribution?

10.1.6 Suppose a researcher wants to investigate the relationship between birth weight and performance on a standardized test administered to children at two years of age. If a relationship is found, can this be claimed to be a cause–effect relationship?
Explain why or why not.

10.1.7 Suppose a large study of all doctors in Canada was undertaken to determine the relationship between various lifestyle choices and lifelength. If the conditional distribution of lifelength given various smoking habits changes, then discuss what can be concluded from
this study.

10.1.8 Suppose a teacher wanted to determine whether an open- or closed-book exam was a more appropriate way to test students on a particular topic. The response variable is the grade obtained on the exam out of 100. Discuss how the teacher could go about answering this question.

10.1.9 Suppose a researcher wanted to determine whether or not there is a cause–effect relationship between the type of political ad (negative or positive) seen by a voter from a particular population and the way the voter votes. Discuss your advice to the researcher about how best to conduct the study.

10.1.10 If two random variables have a nonzero correlation, are they necessarily related? Explain why or why not.

10.1.11 An experimenter wants to determine the relationship between weight change Y over a specified period and the use of a specially designed diet. The predictor variable X is a categorical variable indicating whether or not a person is on the diet. A total of 200 volunteers signed on for the study; a random selection of 100 of these were given the diet and the remaining 100 continued their usual diet.
(a) Record the experimental design.
(b) If the results of the study are to be applied to the population of all humans, what concerns do you have about how the study was conducted?
(c) It is felt that the amount of weight lost or gained also is dependent on the initial weight W of a participant. How would you propose that the experiment be altered to take this into account?

10.1.12 A study will be conducted, involving the population of people aged 15 to 19 in a particular country, to determine whether a relationship exists between the response Y (amount spent in dollars in a week on music downloads) and the predictors W (gender) and X (age in years).
(a) If observations are to be taken from every possible conditional distribution of Y given the two factors, then how many such conditional distributions are there?
(b) Identify the types of each variable involved in the study.
(c) Suppose there are enough funds available to monitor 2000 members of the population. How would you recommend that these resources be allocated among the various combinations of factors?
(d) If a relationship is found between the response and the predictors, can this be claimed to be a cause–effect relationship? Explain why or why not.
(e) Suppose that in
addition, it was believed that family income would likely have an effect on Y and that families could be classified into low and high income. Indicate how you would modify the study to take this into account.

10.1.13 A random sample of 100 households, from the set of all households containing two or more members in a given geographical area, is selected and their television viewing habits are monitored for six months. A random selection of 50 of the households is sent a brochure each week advertising a certain program. The purpose of the study is to determine whether there is any relationship between exposure to the brochure and whether or not this program is watched.
(a) Identify suitable response and predictor variables.
(b) If a relationship is found, can this be claimed to be a cause–effect relationship? Explain why or why not.

10.1.14 Suppose we have a quantitative response variable Y and two categorical predictor variables W and X, both taking values in {0, 1}. Suppose the conditional distributions of Y are given by. Does W have a relationship with Y? Does X have a relationship with Y? Explain your answers.

10.1.15 Suppose we have a quantitative response variable Y and two categorical predictor variables W and X, both taking values in {0, 1}. Suppose the conditional distributions of Y are given by. Does W have a relationship with Y? Does X have a relationship with Y? Explain your answers.

10.1.16 Do the predictors interact in Exercise 10.1.14? Do the predictors interact in Exercise 10.1.15? Explain your answers.

10.1.17 Suppose we have variables X and Y defined on the population {1, 2, ..., 10}, where X(i) = 0 when i is even, X(i) = 1 when i is odd, and Y(i) = 1 when i is divisible by 3, Y(i) = 0 otherwise.
(a) Determine the relative frequency function of X.
(b) Determine the relative frequency function of Y.
(c) Determine the joint relative frequency function of (X, Y).
(d) Determine all the conditional distributions of Y given X.
(e) Are X and Y related? Justify your answer.

10.1.18 A mathematical approach to examining the relationship between variables X and Y is to see whether there is a function g such that Y = g(X). Explain why this approach does not work for many practical
applications where we are examining the relationship between variables. Explain how statistics treats this problem.

10.1.19 Suppose a variable X takes the values 1 and 2 on a population and the conditional distributions of Y given X are N(0, 5) when X = 1 and N(0, 7) when X = 2. Determine whether X and Y are related and if so, describe their relationship.

10.1.20 A variable Y has conditional distribution given X specified by N(2x, 1) when X = x. Determine if X and Y are related and if so, describe what their relationship is.

10.1.21 Suppose that X ~ Uniform[−1, 1] and Y = X². Determine the correlation between Y and X. Are X and Y related?

PROBLEMS

10.1.22 If there is more than one predictor involved in an experiment, do you think it is preferable for the predictors to interact or not? Explain your answer. Can the experimenter control whether or not predictors interact?

10.1.23 Prove directly, using Definition 10.1.1, that when X and Y are related variables defined on a finite population, then Y and X are also related.

10.1.24 Suppose that X, Y, Z are independent N(0, 1) random variables and that U = X + Z, V = Y + Z. Determine whether or not the variables U and V are related. (Hint: Calculate Cov(U, V).)

10.1.25 Suppose that (X, Y, Z) ~ Multinomial(n, 1/3, 1/3, 1/3). Are X and Y related?

10.1.26 Suppose that (X, Y) ~ Bivariate Normal(μ1, μ2, σ1², σ2², ρ). Show that X and Y are unrelated if and only if Corr(X, Y) = 0.

10.1.27 Suppose that X, Y, Z have probability function pX,Y,Z. If Y is related to X but not to Z, then prove that pX,Y,Z(x, y, z) = pY|X(y | x) pX|Z(x | z) pZ(z).

10.2 Categorical Response and Predictors

There are two possible situations when we have a single categorical response Y and a single categorical predictor X. The categorical predictor is either random or deterministic, depending on how we sample. We examine these two situations separately.

10.2.1 Random Predictor

We consider the situation in which X is categorical, taking values in {1, ..., a}, and Y is
categorical, taking values in {1, ..., b}. If we take a sample of n members from the population, then the observed values of X are random, as are the observed values of Y.

Suppose the sample size n is very small relative to the population size (so we can assume that i.i.d. sampling is applicable). Then, letting θij = P(X = i, Y = j), we obtain the likelihood function (see Problem 10.2.15)

L(θ11, ..., θab | (x1, y1), ..., (xn, yn)) = ∏_{i=1}^{a} ∏_{j=1}^{b} θij^{fij}   (10.2.1)

where fij is the number of sample values with (X, Y) = (i, j). An easy computation (see Problem 10.2.16) shows that the MLE of θij is given by θ̂ij = fij/n and that the standard error of this estimate (because the incidence of a sample member falling in the (i, j)-th cell is distributed Bernoulli(θij), and using Example 6.3.2) is given by

√(θ̂ij(1 − θ̂ij)/n).

We are interested in whether or not there is a relationship between X and Y. To answer this, we look at the conditional distributions of Y given X. The conditional distributions of Y given X, using θi· = P(X = i) = θi1 + ··· + θib, are given in the following table.

            Y = 1      ···    Y = b
X = 1     θ11/θ1·    ···    θ1b/θ1·
  ⋮
X = a     θa1/θa·    ···    θab/θa·

528 Section 10.2: Categorical Response and Predictors

Then, estimating θij by fij/n and θi· by fi·/n, where fi· = fi1 + ··· + fib, the estimated conditional distributions are as in the following table.

            Y = 1      ···    Y = b
X = 1     f11/f1·    ···    f1b/f1·
  ⋮
X = a     fa1/fa·    ···    fab/fa·

If we conclude that there is a relationship between X and Y, then we look at the table of estimated conditional distributions to determine the form of the relationship, i.e., how the conditional distributions change as we change the value of X we are conditioning on.

How, then, do we infer whether or not a relationship exists between X and Y? No relationship exists between Y and X if and only if the conditional distributions of Y given X = x do not change with x. This is the case if and only if X and Y are independent, and this is true if and only if θij = θi·θ·j for every i and j, where θ·j = P(Y = j).
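These estimates are mechanical to compute from the table of cell counts fij. A minimal Python sketch (the 2×3 table of counts is made up for illustration):

```python
# made-up cell counts f_ij for a table with a = 2 rows (X) and b = 3 columns (Y)
f = [[10, 20, 30],
     [15, 15, 10]]
n = sum(sum(row) for row in f)

# MLE of theta_ij is f_ij / n
theta_hat = [[fij / n for fij in row] for row in f]

# estimated conditional distributions of Y given X = i: f_ij / f_i.
row_totals = [sum(row) for row in f]                # f_i. = f_i1 + ... + f_ib
cond = [[fij / fi for fij in row] for row, fi in zip(f, row_totals)]

print(n)             # 100
print(theta_hat[0])  # [0.1, 0.2, 0.3]
print(cond[1])       # [0.375, 0.375, 0.25]
```

Each row of `cond` sums to 1, and comparing the rows is exactly the comparison of estimated conditional distributions described in the text.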
Therefore, to assess whether or not there is a relationship between X and Y, it is equivalent to assess the null hypothesis

H0 : θ_{ij} = θ_{i·}θ_{·j} for every i and j.

How should we assess whether or
not the observed data are surprising when H0 holds? The methods of Section 9.1.2, and in particular Theorem 9.1.2, can be applied here, as we have that

(F_{11}, F_{12}, ..., F_{ab}) ~ Multinomial(n, θ_{1·}θ_{·1}, θ_{1·}θ_{·2}, ..., θ_{a·}θ_{·b})

when H0 holds, where F_{ij} is the count in the (i, j)-th cell. To apply Theorem 9.1.2, we need the MLE of the parameters of the model under H0. The likelihood, when H0 holds, is

L(θ_{1·}, ..., θ_{a·}, θ_{·1}, ..., θ_{·b} | x_1, y_1, ..., x_n, y_n) = ∏_{i=1}^{a} ∏_{j=1}^{b} (θ_{i·}θ_{·j})^{f_{ij}}.    (10.2.2)

From this, we deduce (see Problem 10.2.17) that the MLE's of the θ_{i·} and θ_{·j} are given by θ̂_{i·} = f_{i·}/n and θ̂_{·j} = f_{·j}/n. Therefore, the relevant chi-squared statistic is

X² = ∑_{i=1}^{a} ∑_{j=1}^{b} (f_{ij} − n θ̂_{i·}θ̂_{·j})² / (n θ̂_{i·}θ̂_{·j}).

Under H0, the parameter space has dimension (a − 1) + (b − 1), while the full model has dimension ab − 1, so we compare the observed value of X² with the χ²((a − 1)(b − 1)) distribution because ab − 1 − (a − 1) − (b − 1) = (a − 1)(b − 1).

Consider an example.

EXAMPLE 10.2.1 Piston Ring Data
The following table gives the counts of piston ring failures, where variable Y is the compressor number and variable X is the leg position, based on a sample of n = 166. These data were taken from Statistical Methods in Research and Production, by O. L. Davies (Hafner Publishers, New York, 1961). Here, Y takes four values and X takes three values (N = North, C = Central, and S = South).

          Y = 1   Y = 2   Y = 3   Y = 4
X = N     17      11      11      14
X = C     17      9       8       7
X = S     12      13      19      28

The question of interest is whether or not there is any relation between compressor and leg position. Because f_{1·} = 53, f_{2·} = 41, and f_{3·} = 72, the conditional distributions of Y given X are estimated as in the rows of the following table.

          Y = 1              Y = 2              Y = 3              Y = 4
X = N     17/53 = 0.321      11/53 = 0.208      11/53 = 0.208      14/53 = 0.264
X = C     17/41 = 0.415      9/41 = 0.220       8/41 = 0.195       7/41 = 0.171
X = S     12/72 = 0.167      13/72 = 0.181      19/72 = 0.264      28/72 = 0.389

Comparing the rows, it certainly looks as if there is a difference in the conditional distributions, but we must assess whether or not the observed differences can be explained as due to sampling error. To see if the observed differences
are real, we carry out the chi-squared test. Under the null hypothesis of independence, the MLE's are given by

θ̂_{·1} = 46/166, θ̂_{·2} = 33/166, θ̂_{·3} = 38/166, θ̂_{·4} = 49/166

for the Y probabilities, and by

θ̂_{1·} = 53/166, θ̂_{2·} = 41/166, θ̂_{3·} = 72/166

for the X probabilities. Then the estimated expected counts n θ̂_{i·}θ̂_{·j} are given by the following table.

          Y = 1      Y = 2      Y = 3      Y = 4
X = N     14.6867    10.5361    12.1325    15.6446
X = C     11.3614    8.1506     9.3855     12.1024
X = S     19.9518    14.3133    16.4819    21.2530

The standardized residuals (using (9.1.6)) (f_{ij} − n θ̂_{i·}θ̂_{·j}) / (n θ̂_{i·}θ̂_{·j}(1 − θ̂_{i·}θ̂_{·j}))^{1/2} are as in the following table.

          Y = 1      Y = 2      Y = 3      Y = 4
X = N     0.6322     0.1477     −0.3377    −0.4369
X = C     1.7332     0.3051     −0.4656    −1.5233
X = S     −1.8979    −0.3631    0.6536     1.5673

All of the standardized residuals seem reasonable, and we have that X² = 11.7223 with P(χ²_6 ≥ 11.7223) = 0.0685, which is not unreasonably small. So, while there may be some indication that the null hypothesis of no relationship is false, this evidence is not overwhelming. Accordingly, in this case, we may assume that Y and X are independent and use the estimates of cell probabilities obtained under this assumption.

We must also be concerned with model checking, i.e., is the model that we have assumed for the data (x_1, y_1), ..., (x_n, y_n) correct? If these observations are i.i.d., then indeed the model is correct, as that is all that is being effectively assumed. So we need to check that the observations are a plausible i.i.d. sample. Because the minimal sufficient statistic is given by (f_{11}, ..., f_{ab}), such a test could be based on the conditional distribution of the sample (x_1, y_1), ..., (x_n, y_n) given (f_{11}, ..., f_{ab}). The distribution theory for such tests is computationally difficult to implement, however, and we do not pursue this topic further in this text.

10.2.2 Deterministic Predictor

Consider again the situation in which X is categorical, taking values in {1, ..., a}, and Y is categorical, taking values in {1, ..., b}. But now suppose that we take a sample of n from the population, where we have specified that n_i sample members have the value X = i, etc. This could be by assignment, when we are trying to determine whether a cause–effect relationship exists; or we might have a populations and want to see whether there is any difference in the distribution of Y between populations. Note that n_1 + ··· + n_a = n.

In both cases, we again want to make inferences about the conditional distributions of Y given X, as represented by the following table.

          Y = 1          ...   Y = b
X = 1     θ_{1|X=1}      ...   θ_{b|X=1}
  ...
X = a     θ_{1|X=a}      ...   θ_{b|X=a}

A difference in the conditional distributions means there is a relationship between Y and X. If we denote the number of observations in the ith sample that have Y = j by f_{ij}, then, assuming the sample sizes are small relative to the population sizes, the likelihood function is given by

L(θ_{1|X=1}, ..., θ_{b|X=a} | x_1, y_1, ..., x_n, y_n) = ∏_{i=1}^{a} ∏_{j=1}^{b} θ_{j|X=i}^{f_{ij}},    (10.2.3)

and the MLE is given by θ̂_{j|X=i} = f_{ij}/n_i (Problem 10.2.18). There is no relationship between Y and X if and only if the conditional distributions do not vary as we vary X, or if and only if

H0 : θ_{j|X=1} = ··· = θ_{j|X=a} = θ_j for all j = 1, ..., b,

for some probability distribution θ_1, ..., θ_b. Under H0, the likelihood function is given by

L(θ_1, ..., θ_b | x_1, y_1, ..., x_n, y_n) = ∏_{j=1}^{b} θ_j^{f_{·j}},    (10.2.4)

and the MLE of θ_j is given by θ̂_j = f_{·j}/n (see Problem 10.2.19). Then, applying Theorem 9.1.2, we have that the statistic

X² = ∑_{i=1}^{a} ∑_{j=1}^{b} (f_{ij} − n_i θ̂_j)² / (n_i θ̂_j)

has an approximate χ²((a − 1)(b − 1)) distribution under H0, because there are a(b − 1) free parameters in the full model and b − 1 parameters in the independence model, and a(b − 1) − (b − 1) = (a − 1)(b − 1).

Consider an example.

EXAMPLE 10.2.2
This example is taken from a famous applied statistics book, Statistical Methods, 6th ed., by G. Snedecor and W. Cochran (Iowa State University Press, Ames, 1967). Individuals were classified according to their blood type Y (O, A, B, and AB, although the AB individuals were eliminated, as they were small in number) and also classified according to X, their disease status (peptic ulcer = P, gastric cancer = G, or control = C). So we have
three populations; namely, those suffering from a peptic ulcer, those suffering from gastric cancer, and those suffering from neither. We suppose further that the individuals involved in the study can be considered as random samples from the respective populations. The data are given in the following table.

          Y = O   Y = A   Y = B   Total
X = P     983     679     134     1796
X = G     383     416     84      883
X = C     2892    2625    570     6087

The estimated conditional distributions of Y given X are then as follows.

          Y = O                 Y = A                 Y = B
X = P     983/1796 = 0.547      679/1796 = 0.378      134/1796 = 0.075
X = G     383/883 = 0.434       416/883 = 0.471       84/883 = 0.095
X = C     2892/6087 = 0.475     2625/6087 = 0.431     570/6087 = 0.093

We now want to assess whether or not there is any evidence for concluding that a difference exists among these conditional distributions. Under the null hypothesis that no difference exists, the MLE's of the probabilities θ_1 = P(Y = O), θ_2 = P(Y = A), and θ_3 = P(Y = B) are given by

θ̂_1 = (983 + 383 + 2892)/(1796 + 883 + 6087) = 0.4857,
θ̂_2 = (679 + 416 + 2625)/(1796 + 883 + 6087) = 0.4244,
θ̂_3 = (134 + 84 + 570)/(1796 + 883 + 6087) = 0.0899.

Then the estimated expected counts n_i θ̂_j are given by the following table.

          Y = O        Y = A        Y = B
X = P     872.3172     762.2224     161.4604
X = G     428.8731     374.7452     79.3817
X = C     2956.4559    2583.3228    547.2213

The standardized residuals (using (9.1.6)) (f_{ij} − n_i θ̂_j)/(n_i θ̂_j(1 − θ̂_j))^{1/2} are given by the following table.

          Y = O      Y = A      Y = B
X = P     5.2219     −3.9705    −2.2643
X = G     −3.0910    2.8111     0.5441
X = C     −1.6590    1.0861     1.0227

We have that X² = 40.5434 and P(χ²_4 ≥ 40.5434) = 0.0000, so we have strong evidence against the null hypothesis of no relationship existing between Y and X. Observe the large residuals when X = P and Y = O, Y = A. We are left with examining the conditional distributions to ascertain what form the relationship between Y and X takes. A useful tool in this regard is to plot the conditional distributions in bar charts, as we have done in Figure 10.2.1. From this, we see that the peptic ulcer population has a greater proportion of blood type O than the other populations.

Figure 10.2.1: Plot of the conditional distributions of Y given X in Example 10.2.2.

10.2.3 Bayesian Formulation

We now add a prior density for the unknown values of the parameters of the models discussed in Sections 10.2.1 and 10.2.2. Depending on how we choose the prior, and depending on the particular computation we want to carry out, we could be faced with some difficult computational problems. Of course, we have the Monte Carlo methods available in such circumstances, which can often render a computation fairly straightforward.

The most common choice of prior in these circumstances is to choose a conjugate prior. Because the likelihoods discussed in this section are as in Example 7.1.3, we see immediately that Dirichlet priors will be conjugate for the full model in Section 10.2.1 and that products of independent Dirichlet priors will be conjugate for the full model in Section 10.2.2.

In Section 10.2.1, the general likelihood (i.e., no restrictions on the θ_{ij}) is of the form

L(θ_{11}, ..., θ_{ab} | x_1, y_1, ..., x_n, y_n) = ∏_{i=1}^{a} ∏_{j=1}^{b} θ_{ij}^{f_{ij}}.

If we place a Dirichlet(α_{11}, ..., α_{ab}) prior on the parameter, then the posterior density is proportional to

∏_{i=1}^{a} ∏_{j=1}^{b} θ_{ij}^{f_{ij} + α_{ij} − 1},

so the posterior is a Dirichlet(f_{11} + α_{11}, ..., f_{ab} + α_{ab}) distribution.

In Section 10.2.2, the general likelihood is of the form

L(θ_{1|X=1}, ..., θ_{b|X=a} | x_1, y_1, ..., x_n, y_n) = ∏_{i=1}^{a} ∏_{j=1}^{b} θ_{j|X=i}^{f_{ij}}.

Because θ_{1|X=i} + ··· + θ_{b|X=i} = 1 for each i = 1, ..., a, we must place a prior on each distribution (θ_{1|X=i}, ..., θ_{b|X=i}). If we choose the prior on the ith distribution to be Dirichlet(α_{i1}, ..., α_{ib}), then the posterior density is proportional to

∏_{i=1}^{a} ∏_{j=1}^{b} θ_{j|X=i}^{f_{ij} + α_{ij} − 1}.

We recognize this as the product of independent Dirichlet distributions, with the posterior distribution on (θ_{1|X=i}, ..., θ_{b|X=i}) equal to a Dirichlet(f_{i1} + α_{i1}, ..., f_{ib} + α_{ib}) distribution.

A special and important case of the Dirichlet priors corresponds to the situation in which we feel that we have