dent that the estimate $\bar{x}$ is within an inch of the true mean height.

6.3.3 Testing Hypotheses and P-Values

As discussed in Section 5.5.3, another class of inference procedures is concerned with what we call hypothesis assessment. Suppose there is a theory, conjecture, or hypothesis that specifies a value for a characteristic of interest $\psi(\theta)$, say $\psi(\theta) = \psi_0$. Often this hypothesis is written $H_0 : \psi(\theta) = \psi_0$ and is referred to as the null hypothesis. The word null is used because, as we will see in Chapter 10, the value specified in $H_0$ is often associated with a treatment having no effect. For example, if we want to assess whether or not a proposed new drug does a better job of treating a particular condition than a standard treatment does, the null hypothesis will often be equivalent to the new drug providing no improvement. Of course, we have to show how this can be expressed in terms of some characteristic of an unknown distribution, and we will do so in Chapter 10.

The statistician is then charged with assessing whether or not the observed data $s$ are in accord with this hypothesis. So we wish to assess the evidence in $s$ for $H_0 : \psi(\theta) = \psi_0$ being true. A statistical procedure that does this can be referred to as a hypothesis assessment, a test of significance, or a test of hypothesis. Such a procedure involves measuring how surprising the observed $s$ is when we assume $H_0$ to be true. It is clear that $s$ is surprising whenever $s$ lies in a region of low probability for each of the distributions specified by the null hypothesis, i.e., for each of the distributions in the model for which $\psi(\theta) = \psi_0$ is true. If we decide that the data are surprising under $H_0$, then this is evidence against $H_0$. This assessment is carried out by calculating a probability, called a P-value, so that small values of the P-value indicate that $s$ is surprising.

It is important to always remember that while a P-value is a probability, this probability is a measure of surprise.
Small values of the P-value indicate that a surprising event has occurred, if the null hypothesis $H_0$ is true. A large P-value is not evidence that the null hypothesis is true. Moreover, a P-value is not the probability that the null hypothesis is true. The power of a hypothesis assessment method (see Section 6.3.6) also has a bearing on how we interpret a P-value.
z-Tests

We now illustrate the computation and use of P-values via several examples.

EXAMPLE 6.3.9 Location Normal Model and the z-Test
Suppose we have a sample $x_1, \ldots, x_n$ from the $N(\mu, \sigma_0^2)$ model, where $\mu \in R^1$ is unknown and $\sigma_0^2$ is known, and we have a theory that specifies a value for the unknown mean, say, $H_0 : \mu = \mu_0$. Note that, by Corollary 4.6.1, when $H_0$ is true, the sampling distribution of the MLE is given by $\bar{X} \sim N(\mu_0, \sigma_0^2/n)$.

So one method of assessing whether or not the hypothesis $H_0$ makes sense is to compare the observed value $\bar{x}$ with this distribution. If $\bar{x}$ is in a region of low probability for the $N(\mu_0, \sigma_0^2/n)$ distribution, then this is evidence that $H_0$ is false. Because the density of the $N(\mu_0, \sigma_0^2/n)$ distribution is unimodal, the regions of low probability for this distribution occur in its tails. The farther out in the tails $\bar{x}$ lies, the more surprising this will be when $H_0$ is true, and thus the more evidence we will have against $H_0$. In Figure 6.3.4, we have plotted a density of the MLE together with an observed value $\bar{x}$ that lies far in the right tail of the distribution. This would clearly be a surprising value from this distribution.

Chapter 6: Likelihood Inference

So we want to measure how far out in the tails of the $N(\mu_0, \sigma_0^2/n)$ distribution the value $\bar{x}$ is. We can do this by computing the probability of observing a value of $\bar{X}$ as far, or farther, away from the center of the distribution under $H_0$ as $\bar{x}$. The center of this distribution is given by $\mu_0$. Because

$$Z = \frac{\bar{X} - \mu_0}{\sigma_0/\sqrt{n}} \sim N(0, 1) \quad (6.3.7)$$

under $H_0$, the P-value is then given by

$$2\left(1 - \Phi\left(\frac{|\bar{x} - \mu_0|}{\sigma_0/\sqrt{n}}\right)\right),$$

where $\Phi$ denotes the $N(0, 1)$ distribution function. If the P-value is small, then we have evidence that $\bar{x}$ is a surprising value, because this tells us that $\bar{x}$ is out in a tail of the $N(\mu_0, \sigma_0^2/n)$ distribution. Because this P-value is based on the statistic $Z$ defined in (6.3.7), this is referred to as the z-test procedure.
Figure 6.3.4: Plot of the density of the MLE in Example 6.3.9 when $n = 10$, together with the observed value $\bar{x}$.

EXAMPLE 6.3.10 Application of the z-Test
We generated the following sample of $n = 10$ from an $N(26, 4)$ distribution:

29.0651  28.6592  27.3980  25.5546  23.4346
29.4477  26.3665  28.0979  23.4994  25.2850

Section 6.3: Inferences Based on the MLE

Even though we know the true value of $\mu$, let us suppose we do not and test the hypothesis $H_0 : \mu = 25$. To assess this, we compute (using a statistical package to evaluate $\Phi$) the P-value

$$2\left(1 - \Phi\left(\frac{|26.6808 - 25|}{2/\sqrt{10}}\right)\right) = 2(1 - \Phi(2.6576)) = 0.0078,$$

which is quite small. For example, if the hypothesis $H_0$ is correct, then, in repeated sampling, we would see data giving a value of $\bar{x}$ at least as surprising as what we have observed only 0.78% of the time. So we conclude that we have evidence against $H_0$ being true, which, of course, is appropriate in this case.

If you do not use a statistical package for the evaluation of $\Phi$, then you will have to use Table D.2 of Appendix D to get an approximation. For example, rounding 2.6576 to 2.66, Table D.2 gives $\Phi(2.66) = 0.9961$, and the approximate P-value is $2(1 - 0.9961) = 0.0078$. In this case, the approximation is exact to four decimal places.
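This computation is easy to reproduce. The sketch below (Python, standard library only; `phi` and `z_test_p_value` are our own helper names, not from any package) evaluates $\Phi$ via `math.erf` for the data of Example 6.3.10.

```python
import math

def phi(x):
    """Standard normal cdf, computed from the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def z_test_p_value(data, mu0, sigma0):
    """Two-sided z-test: P-value = 2(1 - Phi(|xbar - mu0| / (sigma0/sqrt(n))))."""
    n = len(data)
    xbar = sum(data) / n
    z = abs(xbar - mu0) / (sigma0 / math.sqrt(n))
    return xbar, z, 2.0 * (1.0 - phi(z))

sample = [29.0651, 28.6592, 27.3980, 25.5546, 23.4346,
          29.4477, 26.3665, 28.0979, 23.4994, 25.2850]
xbar, z, p = z_test_p_value(sample, mu0=25.0, sigma0=2.0)
print(round(xbar, 4), round(z, 4), round(p, 4))   # → 26.6808 2.6576 0.0079
```

The final digit of the P-value differs from the table-based 0.0078 only through rounding.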
EXAMPLE 6.3.11 Bernoulli Model
Suppose that $x_1, \ldots, x_n$ is a sample from a Bernoulli$(\theta)$ distribution, where $\theta \in [0, 1]$ is unknown, and we want to test $H_0 : \theta = \theta_0$. As in Example 6.3.7, when $H_0$ is true, we have

$$Z = \frac{\bar{X} - \theta_0}{\sqrt{\theta_0(1 - \theta_0)/n}} \xrightarrow{D} N(0, 1)$$

as $n \to \infty$. So we can test this hypothesis by computing the approximate P-value when $n$ is large.

As a specific example, suppose that a psychic claims the ability to predict the value of a randomly tossed fair coin. To test this, a coin was tossed 100 times and the psychic's guesses were recorded as successes or failures. A total of 54 successes were observed. If the psychic has no predictive ability, then we would expect the successes to occur randomly, just as heads occur when we toss the coin. Therefore, we want to test the null hypothesis that the probability $\theta$ of a success occurring is equal to $1/2$. This is equivalent to saying that the psychic has no predictive ability. The MLE is $\bar{x} = 0.54$, and the approximate P-value is

$$2\left(1 - \Phi\left(\frac{|0.54 - 0.5|}{\sqrt{0.5(1 - 0.5)/100}}\right)\right) = 2(1 - \Phi(0.80)) = 2(1 - 0.7881) = 0.4238,$$

and we would appear to have no evidence that $H_0$ is false, i.e., no reason to doubt that the psychic has no predictive ability.

Often cutoff values like 0.05 or 0.01 are used to determine whether the results of a test are significant or not. For example, if the P-value is less than 0.05, then the results are said to be statistically significant at the 5% level. There is nothing sacrosanct about the 0.05 level, however, and different values can be used depending on the application. For example, if the result of concluding that we have evidence against $H_0$ is that something very expensive or important will take place, then naturally we might demand that the cutoff value be much smaller than 0.05.
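The approximate P-value for the psychic example can be sketched the same way (again a minimal illustration with our own function names; the standard error uses the null value $\theta_0(1 - \theta_0)/n$):

```python
import math

def phi(x):
    """Standard normal cdf, computed from the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bernoulli_z_p_value(successes, n, theta0):
    """Approximate two-sided P-value for H0: theta = theta0, valid for large n."""
    xbar = successes / n
    z = abs(xbar - theta0) / math.sqrt(theta0 * (1.0 - theta0) / n)
    return z, 2.0 * (1.0 - phi(z))

z, p = bernoulli_z_p_value(54, 100, 0.5)
print(round(z, 2), round(p, 4))   # → 0.8 0.4237
```

The large P-value reproduces the conclusion above: no evidence against $\theta = 1/2$.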
When Is Statistical Significance Practically Significant?

It is also important to point out here the difference between statistical significance and practical significance. Consider the situation in Example 6.3.9, when the true value of $\mu$ is so close to $\mu_0$ that, practically speaking, they are indistinguishable. By the strong law of large numbers, we have that $\bar{X} \xrightarrow{a.s.} \mu$ as $n \to \infty$, and therefore, when $\mu \neq \mu_0$,

$$\frac{|\bar{X} - \mu_0|}{\sigma_0/\sqrt{n}} \xrightarrow{a.s.} \infty.$$

This implies that

$$2\left(1 - \Phi\left(\frac{|\bar{X} - \mu_0|}{\sigma_0/\sqrt{n}}\right)\right) \xrightarrow{a.s.} 0.$$

We conclude that, if we take a large enough sample size $n$, we will inevitably conclude that $\mu \neq \mu_0$, because the P-value of the z-test goes to 0. Of course, this is correct, because the hypothesis is false. In spite of this, we do not want to conclude that, just because we have statistical significance, the difference between the true value and $\mu_0$ is of any practical importance. If we examine the observed absolute difference $|\bar{x} - \mu_0|$, however, we will not make this mistake.

If this absolute difference is smaller than some threshold $\delta$ that we consider represents a practically significant difference, then, even if the P-value leads us to conclude that a difference exists, we might conclude that no difference of any importance exists. Of course, the value of $\delta$ is application dependent.
For example, in coin tossing, where we are testing $H_0 : \theta = 1/2$, we might not care if the coin is slightly unfair, say, $|\theta - 1/2| \leq 0.01$. In testing the abilities of a psychic, as in Example 6.3.11, however, we might take $\delta$ much lower, as any evidence of psychic powers would be an astounding finding. The issue of practical significance is something we should always be aware of when conducting a test of significance.

Hypothesis Assessment via Confidence Intervals

Another approach to testing hypotheses is via confidence intervals. For example, if we have a $\gamma$-confidence interval $C(s)$ for $\psi(\theta)$, then $\psi_0 \notin C(s)$ seems like clear evidence against $H_0 : \psi(\theta) = \psi_0$, at least when $\gamma$ is close to 1. It turns out that, in many problems, the approach to testing via confidence intervals is equivalent to using P-values with a specific cutoff for the P-value to determine statistical significance. We illustrate this equivalence using the z-test and z-confidence intervals.

EXAMPLE 6.3.12 An Equivalence Between z-Tests and z-Confidence Intervals
We develop this equivalence by showing that obtaining a P-value less than $1 - \gamma$ for $H_0 : \mu = \mu_0$ is equivalent to $\mu_0$ not being in a $\gamma$-confidence interval for $\mu$. Observe that

$$2\left(1 - \Phi\left(\frac{|\bar{x} - \mu_0|}{\sigma_0/\sqrt{n}}\right)\right) > 1 - \gamma$$

if and only if

$$\Phi\left(\frac{|\bar{x} - \mu_0|}{\sigma_0/\sqrt{n}}\right) < \frac{1 + \gamma}{2}.$$

This is true if and only if

$$\frac{|\bar{x} - \mu_0|}{\sigma_0/\sqrt{n}} < z_{(1+\gamma)/2},$$

which holds if and only if

$$\mu_0 \in \left[\bar{x} - z_{(1+\gamma)/2}\frac{\sigma_0}{\sqrt{n}},\ \bar{x} + z_{(1+\gamma)/2}\frac{\sigma_0}{\sqrt{n}}\right].$$

This implies that the $\gamma$-confidence interval for $\mu$ comprises those values $\mu_0$ for which the P-value for the hypothesis $H_0 : \mu = \mu_0$ is greater than $1 - \gamma$. Therefore, the P-value, based on the z-statistic, for the null hypothesis $H_0 : \mu = \mu_0$ will be smaller than $1 - \gamma$ if and only if $\mu_0$ is not in the $\gamma$-confidence interval derived in Example 6.3.6.
For example, if we decide that for any P-value less than $1 - \gamma = 0.05$ we will declare the results statistically significant, then the results will be significant whenever the 0.95-confidence interval for $\mu$ does not contain $\mu_0$. For the data of Example 6.3.10, a 0.95-confidence interval is given by $[25.441, 27.920]$. As this interval does not contain $\mu_0 = 25$, we have evidence against the null hypothesis at the 0.05 level.

We can apply the same reasoning for tests about $\theta$ when we are sampling from a Bernoulli$(\theta)$ model. For the data in Example 6.3.11, we obtain the 0.95-confidence interval

$$\bar{x} \pm z_{0.975}\sqrt{\frac{\bar{x}(1 - \bar{x})}{n}} = 0.54 \pm 1.96\sqrt{\frac{0.54(1 - 0.54)}{100}} = [0.44231, 0.63769],$$

which includes the value $\theta_0 = 0.5$. So we have no evidence against the null hypothesis of no predictive ability for the psychic at the 0.05 level.
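Both intervals used in this equivalence can be sketched as follows, with $z_{0.975} = 1.96$ taken as the table value used in the text (the helper names are ours):

```python
import math

Z_975 = 1.96  # table value of the 0.975 quantile of N(0, 1)

def z_interval(xbar, sigma0, n, z=Z_975):
    """z-confidence interval for a mean with known variance."""
    half = z * sigma0 / math.sqrt(n)
    return xbar - half, xbar + half

def bernoulli_interval(xbar, n, z=Z_975):
    """Approximate (large-n) confidence interval for a Bernoulli theta."""
    half = z * math.sqrt(xbar * (1.0 - xbar) / n)
    return xbar - half, xbar + half

lo, hi = z_interval(26.6808, 2.0, 10)
print(round(lo, 3), round(hi, 3))      # → 25.441 27.92
blo, bhi = bernoulli_interval(0.54, 100)
print(round(blo, 5), round(bhi, 5))    # → 0.44231 0.63769
```

Checking whether $\mu_0 = 25$ or $\theta_0 = 0.5$ lies inside the corresponding interval reproduces the two conclusions above.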
t-Tests

We now consider an example pertaining to the important location-scale normal model.

EXAMPLE 6.3.13 Location-Scale Normal Model and t-Tests
Suppose that $x_1, \ldots, x_n$ is a sample from an $N(\mu, \sigma^2)$ distribution, where $\mu \in R^1$ and $\sigma^2 > 0$ are unknown, and suppose we want to test the null hypothesis $H_0 : \mu = \mu_0$. In Example 6.3.8, we obtained a $\gamma$-confidence interval for $\mu$ based on the t-statistic given by (6.3.6), so we base our test on this statistic also. In fact, it can be shown that the test we derive here is equivalent to using those confidence intervals to assess the hypothesis, as described in Example 6.3.12.

As in Example 6.3.8, we can prove that, when the null hypothesis is true,

$$T = \frac{\bar{X} - \mu_0}{S/\sqrt{n}} \quad (6.3.8)$$

is distributed $t(n - 1)$. The $t$ distributions are unimodal, with the mode at 0, and the regions of low probability are given by the tails. So we test, or assess, this hypothesis by computing the probability of observing a value as far or farther away from 0 than (6.3.8). Therefore, the P-value is given by

$$2\left(1 - G\left(\frac{|\bar{x} - \mu_0|}{s/\sqrt{n}}\right)\right),$$

where $G$ is the distribution function of the $t(n - 1)$ distribution. We then have evidence against $H_0$ whenever this probability is small. This procedure is called the t-test. Again, it is a good idea to look at the difference $\bar{x} - \mu_0$, when we conclude that $H_0$ is false, to determine whether or not the detected difference is of practical importance.

Consider now the data in Example 6.3.10, and let us pretend that we do not know $\mu$ or the value of $\sigma^2$. Then we have $\bar{x} = 26.6808$ and $s^2 = 4.8620$, so to test $H_0 : \mu = 25$, the value of the t-statistic is

$$t = \frac{\bar{x} - \mu_0}{s/\sqrt{n}} = \frac{26.6808 - 25}{2.2050/\sqrt{10}} = 2.4105.$$

From a statistics package (or Table D.4) we obtain $t_{0.975}(9) = 2.2622$, so we have a statistically significant result at the 5% level and conclude that we have evidence against $H_0 : \mu = 25$. Using a statistical package, we can determine the precise value of the P-value to be 0.039 in this case.
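The t-statistic for this example can be computed directly. Python's standard library has no $t$ distribution function, so the sketch below simply compares $|t|$ with the tabulated critical value $t_{0.975}(9) = 2.2622$ quoted above, rather than computing the exact P-value.

```python
import math
import statistics

sample = [29.0651, 28.6592, 27.3980, 25.5546, 23.4346,
          29.4477, 26.3665, 28.0979, 23.4994, 25.2850]
mu0 = 25.0
n = len(sample)
xbar = statistics.fmean(sample)
s = statistics.stdev(sample)       # sample standard deviation (divisor n - 1)
t = (xbar - mu0) / (s / math.sqrt(n))

T_CRIT = 2.2622                    # table value of t_0.975(9)
print(round(s, 3), round(t, 2))    # → 2.205 2.41
print(abs(t) > T_CRIT)             # → True: significant at the 5% level
```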
One-Sided Tests

All the tests we have discussed so far in this section have been two-sided tests, meaning that the null hypothesis specified the value of a characteristic of interest $\psi(\theta)$ to be a single value $\psi_0$. Sometimes, however, we want to test a null hypothesis of the form $H_0 : \psi(\theta) \leq \psi_0$ or $H_0 : \psi(\theta) \geq \psi_0$. To carry out such tests, we use the same test statistics as we have developed in the various examples here, but compute the P-value in a way that reflects the one-sided nature of the null. These are known as one-sided tests. We illustrate a one-sided test using the location normal model.

EXAMPLE 6.3.14 One-Sided Tests
Suppose we have a sample $x_1, \ldots, x_n$ from the $N(\mu, \sigma_0^2)$ model, where $\mu \in R^1$ is unknown and $\sigma_0^2$ is known. Suppose further that it is hypothesized that $H_0 : \mu \leq \mu_0$ is true, and we wish to assess this after observing the data.

We will base our test on the z-statistic

$$Z = \frac{\bar{X} - \mu_0}{\sigma_0/\sqrt{n}}.$$

So $Z$ is the sum of a random variable having an $N(0, 1)$ distribution and the constant $\sqrt{n}(\mu - \mu_0)/\sigma_0$, which implies that

$$Z \sim N\left(\frac{\sqrt{n}(\mu - \mu_0)}{\sigma_0},\ 1\right).$$

Note that $\sqrt{n}(\mu - \mu_0)/\sigma_0 \leq 0$ if and only if $H_0$ is true. This implies that, when the null hypothesis is false, we will tend to see values of $Z$ in the right tail of the $N(0, 1)$ distribution; when the null hypothesis is true, we will tend to see values of $Z$ that are reasonable for the $N(0, 1)$ distribution, or in the left tail of this distribution. Accordingly, to test $H_0$ we compute the P-value

$$P(Z \geq z) = 1 - \Phi\left(\frac{\bar{x} - \mu_0}{\sigma_0/\sqrt{n}}\right)$$

with $Z \sim N(0, 1)$, and conclude that we have evidence against $H_0$ when this is small. Using the same reasoning, the P-value for the null hypothesis $H_0 : \mu \geq \mu_0$ equals

$$\Phi\left(\frac{\bar{x} - \mu_0}{\sigma_0/\sqrt{n}}\right).$$

For more discussion of one-sided tests and confidence intervals, see Problems 6.3.25 through 6.3.32.

6.3.4 Inferences for the Variance

In Sections 6.3.1, 6.3.2, and 6.3.3, we focused on inferences for the unknown mean of a distribution, e.g., $\mu$ when we are sampling from an $N(\mu, \sigma^2)$ distribution or $\theta$ when we are sampling from a Bernoulli$(\theta)$ distribution, respectively. In general, location parameters tend to play a much more important role in a statistical analysis than other characteristics of a distribution. There are logical reasons for this, discussed in Chapter 10, when we consider regression models. Sometimes we refer to a parameter such as $\sigma^2$ as a nuisance parameter, because our interest is in $\mu$. Note that the variance of a Bernoulli$(\theta)$ distribution is $\theta(1 - \theta)$, so inferences about $\theta$ are logically inferences about the variance too, i.e., there are no nuisance parameters in that model.

But sometimes we are primarily interested in making inferences about $\sigma^2$ in the $N(\mu, \sigma^2)$ distribution when it is unknown. For example, suppose that previous experience with a system under study indicates that the true value of the variance is well-approximated by $\sigma_0^2$, i.e., the true value does not differ from $\sigma_0^2$ by an amount having any practical significance. Based on a new sample, we may then want to assess the hypothesis $H_0 : \sigma^2 = \sigma_0^2$, i.e., we wonder whether or not the basic variability in the process has changed.

The discussion in Section 6.3.1 led to consideration of the standard error $s/\sqrt{n}$ of $\bar{x}$ as an estimate of the standard deviation $\sigma/\sqrt{n}$ of $\bar{x}$. In many ways, $s^2$ seems like a very natural estimator of $\sigma^2$, even when we aren't sampling from a normal distribution.
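To make the distinction between the two variance estimates concrete, here is the plug-in MLE (divisor $n$) next to the unbiased $s^2$ (divisor $n - 1$) for the data of Example 6.3.10. Because the data are printed to four decimals, the last digit can differ slightly from the rounded figures quoted elsewhere in the text.

```python
sample = [29.0651, 28.6592, 27.3980, 25.5546, 23.4346,
          29.4477, 26.3665, 28.0979, 23.4994, 25.2850]
n = len(sample)
xbar = sum(sample) / n

# Plug-in MLE of the variance (divisor n) versus the unbiased s^2 (divisor n - 1):
mle_var = sum((x - xbar) ** 2 for x in sample) / n
s2 = sum((x - xbar) ** 2 for x in sample) / (n - 1)
print(round(mle_var, 4), round(s2, 4))   # → 4.3752 4.8613
```

The two always satisfy $s^2 = \frac{n}{n-1}\hat{\sigma}^2$, so the plug-in estimate is smaller, with the gap vanishing as $n$ grows.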
The following example develops confidence intervals and P-values for $\sigma^2$.

EXAMPLE 6.3.15 Location-Scale Normal Model and Inferences for the Variance
Suppose that $x_1, \ldots, x_n$ is a sample from an $N(\mu, \sigma^2)$ distribution, where $\mu \in R^1$ and $\sigma^2 > 0$ are unknown, and we want to make inferences about the population variance $\sigma^2$. The plug-in MLE is given by

$$\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2,$$

the average of the squared deviations of the data values from $\bar{x}$. Often $s^2$ is recommended as the estimate instead, because it has the unbiasedness property, and we will use it here. An expression can be determined for the standard error of this estimate but, as it is somewhat complicated, we will not pursue this further here.
We can form a $\gamma$-confidence interval for $\sigma^2$ using the fact that

$$\frac{(n - 1)S^2}{\sigma^2} \sim \chi^2(n - 1)$$

for every $\mu \in R^1$, $\sigma^2 > 0$ (Theorem 4.6.6). There are a number of possibilities for this interval, but one is to note that, letting $\chi^2_\alpha(n - 1)$ denote the $\alpha$th quantile of the $\chi^2(n - 1)$ distribution,

$$\gamma = P\left(\chi^2_{(1-\gamma)/2}(n - 1) \leq \frac{(n - 1)S^2}{\sigma^2} \leq \chi^2_{(1+\gamma)/2}(n - 1)\right) = P\left(\frac{(n - 1)S^2}{\chi^2_{(1+\gamma)/2}(n - 1)} \leq \sigma^2 \leq \frac{(n - 1)S^2}{\chi^2_{(1-\gamma)/2}(n - 1)}\right).$$

So

$$\left[\frac{(n - 1)s^2}{\chi^2_{(1+\gamma)/2}(n - 1)},\ \frac{(n - 1)s^2}{\chi^2_{(1-\gamma)/2}(n - 1)}\right]$$

is an exact $\gamma$-confidence interval for $\sigma^2$. To test a hypothesis such as $H_0 : \sigma^2 = \sigma_0^2$ at the $1 - \gamma$ level, we need only see whether or not $\sigma_0^2$ is in the interval. If $\gamma^*$ is the smallest value of $\gamma$ such that $\sigma_0^2$ is in the interval, then $1 - \gamma^*$ is the P-value for this hypothesis assessment procedure.

For the data in Example 6.3.10, let us pretend that we do not know that $\sigma^2 = 4$. Here, $n = 10$ and $s^2 = 4.8620$. From a statistics package (or Table D.3 in Appendix D) we obtain $\chi^2_{0.025}(9) = 2.700$ and $\chi^2_{0.975}(9) = 19.023$. So a 0.95-confidence interval for $\sigma^2$ is given by

$$\left[\frac{(n - 1)s^2}{\chi^2_{(1+\gamma)/2}(n - 1)},\ \frac{(n - 1)s^2}{\chi^2_{(1-\gamma)/2}(n - 1)}\right] = \left[\frac{9(4.8620)}{19.023},\ \frac{9(4.8620)}{2.700}\right] = [2.3003, 16.207].$$

The length of the interval indicates that there is a reasonable degree of uncertainty concerning the true value of $\sigma^2$. We see, however, that a test of $H_0 : \sigma^2 = 4$ would not reject this hypothesis at the 5% level, because the value 4 is in the 0.95-confidence interval.
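With the two chi-square quantiles from the text's Table D.3 ($\chi^2_{0.025}(9) = 2.700$ and $\chi^2_{0.975}(9) = 19.023$, taken here as given constants), the interval is a two-line computation; $s^2 = 4.8620$ is the value quoted above.

```python
n = 10
s2 = 4.8620          # sample variance from Example 6.3.10, as quoted in the text
chi2_lo = 2.700      # table value of the 0.025 quantile of chi-square(9)
chi2_hi = 19.023     # table value of the 0.975 quantile of chi-square(9)

# Exact 0.95-confidence interval for sigma^2:
lower = (n - 1) * s2 / chi2_hi
upper = (n - 1) * s2 / chi2_lo
print(round(lower, 4), round(upper, 3))   # → 2.3003 16.207
print(lower <= 4.0 <= upper)              # → True: H0 sigma^2 = 4 is not rejected
```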
6.3.5 Sample-Size Calculations: Confidence Intervals

Quite often a statistician is asked to determine the sample size $n$ needed to ensure that, with very high probability, the results of a statistical analysis will be definitive. For example, suppose we are going to take a sample of size $n$ from a population and want to estimate the population mean so that the estimate is within 0.5 of the true mean with probability at least 0.95. This means that we want the half-length, or margin of error, of the 0.95-confidence interval for the mean to be guaranteed to be less than 0.5. We consider such problems in the following examples. Note that, in general, sample-size calculations are the domain of experimental design, which we will discuss more extensively in Chapter 10.

First, we consider the problem of selecting the sample size to ensure that a confidence interval is shorter than some prescribed value.

EXAMPLE 6.3.16 The Length of a Confidence Interval for a Mean
Suppose we are in the situation described in Example 6.3.6, in which we have a sample $x_1, \ldots, x_n$ from the $N(\mu, \sigma_0^2)$ model, with $\mu \in R^1$ unknown and $\sigma_0^2$ known. Further suppose that the statistician is asked to determine $n$ so that the margin of error for a $\gamma$-confidence interval for the population mean $\mu$ is no greater than a prescribed value $\delta > 0$. This entails that $n$ be chosen so that

$$z_{(1+\gamma)/2}\frac{\sigma_0}{\sqrt{n}} \leq \delta$$

or, equivalently, so that

$$n \geq \sigma_0^2\left(\frac{z_{(1+\gamma)/2}}{\delta}\right)^2.$$

For example, if $\sigma_0^2 = 10$, $\gamma = 0.95$, and $\delta = 0.5$, then the smallest possible value for $n$ is 154.

Now consider the situation described in Example 6.3.8, in which we have a sample $x_1, \ldots, x_n$ from the $N(\mu, \sigma^2)$ model, with $\mu \in R^1$ and $\sigma^2 > 0$ both unknown. In this case, we want $n$ so that

$$t_{(1+\gamma)/2}(n - 1)\frac{s}{\sqrt{n}} \leq \delta,$$

which entails

$$n \geq s^2\left(\frac{t_{(1+\gamma)/2}(n - 1)}{\delta}\right)^2.$$

But note this also depends on the unobserved value of $s$, so we cannot determine an appropriate value of $n$ from this alone.

Often, however, we can determine an upper bound on the population standard deviation, say $\sigma \leq b$. For example, suppose we are measuring human heights in centimeters. Then we have a pretty good idea of upper and lower bounds on the possible heights we will actually obtain. Therefore, with the normality assumption, the interval given by the population mean, plus or minus three standard deviations, must be contained within the interval given by the upper and lower bounds. So dividing the length of this interval by 6 gives a plausible upper bound $b$ for the value of $\sigma$. In any case, when we have such an upper bound, we can expect that $s \leq b$, at least conservatively. Therefore, we take $n$ to satisfy

$$n \geq b^2\left(\frac{t_{(1+\gamma)/2}(n - 1)}{\delta}\right)^2.$$
Note that we need to evaluate $t_{(1+\gamma)/2}(n - 1)$ for each $n$ as well. It is wise to be fairly conservative in our choice of $n$ in this case, i.e., do not choose the smallest possible value of $n$.
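For the known-variance case, the sample-size rule reduces to one line ($z_{0.975} = 1.96$ is the table value; in the unknown-variance case, the same search must additionally be iterated over $n$ because $t_{(1+\gamma)/2}(n - 1)$ depends on $n$). A sketch with our own function name:

```python
import math

def sample_size_mean(sigma0_sq, delta, z):
    """Smallest n with z * sigma0 / sqrt(n) <= delta (known variance)."""
    return math.ceil(sigma0_sq * (z / delta) ** 2)

n = sample_size_mean(sigma0_sq=10.0, delta=0.5, z=1.96)
print(n)   # → 154
```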
EXAMPLE 6.3.17 The Length of a Confidence Interval for a Proportion
Suppose we are in the situation described in Example 6.3.2, in which we have a sample $x_1, \ldots, x_n$ from the Bernoulli$(\theta)$ model, and $\theta \in [0, 1]$ is unknown. The statistician is required to specify the sample size $n$ so that the margin of error of a $\gamma$-confidence interval for $\theta$ is no greater than a prescribed value $\delta > 0$. So, from Example 6.3.7, we want $n$ to satisfy

$$z_{(1+\gamma)/2}\sqrt{\frac{\bar{x}(1 - \bar{x})}{n}} \leq \delta, \quad (6.3.9)$$

and this entails

$$n \geq \bar{x}(1 - \bar{x})\left(\frac{z_{(1+\gamma)/2}}{\delta}\right)^2.$$

Because this also depends on the unobserved $\bar{x}$, we cannot determine $n$ this way. Note, however, that $\bar{x}(1 - \bar{x}) \leq 1/4$ for every $\bar{x}$ (plot this function) and that this upper bound is achieved when $\bar{x} = 1/2$. Therefore, if we determine $n$ so that

$$n \geq \frac{1}{4}\left(\frac{z_{(1+\gamma)/2}}{\delta}\right)^2,$$

then we know that (6.3.9) is satisfied. For example, if $\gamma = 0.95$ and $\delta = 0.1$, the smallest possible value of $n$ is 97; if $\gamma = 0.95$ and $\delta = 0.01$, the smallest possible value of $n$ is 9604.
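The conservative bound $\bar{x}(1 - \bar{x}) \leq 1/4$ makes the proportion case a one-liner as well (again a sketch with our own function name; $z_{0.975} = 1.96$ is the table value):

```python
import math

def sample_size_proportion(delta, z):
    """Conservative sample size using xbar * (1 - xbar) <= 1/4."""
    return math.ceil(0.25 * (z / delta) ** 2)

print(sample_size_proportion(0.10, 1.96))   # → 97
print(sample_size_proportion(0.01, 1.96))   # → 9604
```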
6.3.6 Sample-Size Calculations: Power

Suppose the purpose of a study is to assess a specific hypothesis $H_0 : \psi(\theta) = \psi_0$, and it has been decided that the results will be declared statistically significant whenever the P-value is less than $\alpha$. Suppose that the statistician is asked to choose $n$ so that the P-value obtained is smaller than $\alpha$, with probability at least $\beta_0$, for a specific value $\psi(\theta) = \psi_1$ such that $\psi_1 \neq \psi_0$.

The probability that the P-value is less than $\alpha$ is called the power of the test at $\theta$. We will denote this by $\beta(\theta)$ and call $\beta$ the power function of the test. The notation $\beta(\theta)$ is not really complete, as it suppresses the dependence on $n$ and the test procedure, but we will assume that these are clear in a particular context. The problem the statistician is presented with can then be stated as: find $n$ so that $\beta(\theta) \geq \beta_0$ whenever $\psi(\theta) = \psi_1$.

The power function of a test is a measure of the sensitivity of the test to detect departures from the null hypothesis. We choose $\alpha$ small (0.05, 0.01, etc.) so that we do not erroneously declare that we have evidence against the null hypothesis when the null hypothesis is in fact true. When $\psi(\theta) \neq \psi_0$, then $\beta(\theta)$ is the probability that the test does the right thing and detects that $H_0$ is false.

For any test procedure, it is a good idea to examine its power function, perhaps by evaluating it for several choices of $\theta$, to see how good the test is at detecting departures. For it can happen that we do not find any evidence against a null hypothesis when it is false, simply because the sample size is too small. In such a case, the power will be small at values of $\theta$ that represent practically significant departures from $H_0$. To avoid this problem, we should always choose a value $\psi_1$ that represents a practically significant departure from $\psi_0$ and then determine $n$ so that we reject $H_0$ with high probability when $\psi(\theta) = \psi_1$.

We consider the computation and use of the power function in several examples.

EXAMPLE 6.3.18 The Power Function in the Location Normal Model
For the two-sided z-test in Example 6.3.9, we have

$$\beta(\mu) = P_\mu\left(\frac{|\bar{X} - \mu_0|}{\sigma_0/\sqrt{n}} \geq z_{1-\alpha/2}\right) = 1 - \Phi\left(z_{1-\alpha/2} - \frac{\sqrt{n}(\mu - \mu_0)}{\sigma_0}\right) + \Phi\left(-z_{1-\alpha/2} - \frac{\sqrt{n}(\mu - \mu_0)}{\sigma_0}\right). \quad (6.3.10)$$

Notice that $\beta(\mu)$ is symmetric about $\mu_0$ (put $\mu_0 - \mu$ in place of $\mu - \mu_0$ in the expression for $\beta(\mu)$ and we get the same value) and that $\beta(\mu_0) = \alpha$.

Differentiating (6.3.10) with respect to $n$, we obtain

$$\frac{\partial\beta(\mu)}{\partial n} = \frac{\mu - \mu_0}{2\sqrt{n}\,\sigma_0}\left[\varphi\left(z_{1-\alpha/2} - \frac{\sqrt{n}(\mu - \mu_0)}{\sigma_0}\right) - \varphi\left(-z_{1-\alpha/2} - \frac{\sqrt{n}(\mu - \mu_0)}{\sigma_0}\right)\right], \quad (6.3.11)$$

where $\varphi$ is the density of the $N(0, 1)$ distribution. We can establish that (6.3.11) is always nonnegative (see Challenge 6.3.34). This implies that $\beta(\mu)$ is increasing in $n$, so to determine a suitable sample size we need only solve $\beta(\mu_1) = \beta_0$ for $n$ (the solution may not be an integer); all larger values of $n$ will give a larger power.

For example, when $\alpha = 0.05$, $\beta_0 = 0.99$, $\sigma_0 = 1$, $\mu_0 = 0$, and $\mu_1 = 0.1$, we must find $n$ satisfying

$$1 - \Phi(1.96 - 0.1\sqrt{n}) + \Phi(-1.96 - 0.1\sqrt{n}) = 0.99. \quad (6.3.12)$$
(Note that the symmetry of $\beta$ about $\mu_0$ means we will get the same answer if we use $\mu_1 = -0.1$ here instead of $\mu_1 = 0.1$.) Tabulating (6.3.12) as a function of $n$ using a statistical package determines that $n = 1838$ is the smallest value achieving the required bound.

Also observe that the derivative of (6.3.10) with respect to $\mu$ is given by

$$\frac{\partial\beta(\mu)}{\partial\mu} = \frac{\sqrt{n}}{\sigma_0}\left[\varphi\left(z_{1-\alpha/2} - \frac{\sqrt{n}(\mu - \mu_0)}{\sigma_0}\right) - \varphi\left(-z_{1-\alpha/2} - \frac{\sqrt{n}(\mu - \mu_0)}{\sigma_0}\right)\right]. \quad (6.3.13)$$

This is positive when $\mu > \mu_0$, negative when $\mu < \mu_0$, and takes the value 0 when $\mu = \mu_0$ (see Challenge 6.3.35).
From (6.3.10) we have that $\beta(\mu) \to 1$ as $\mu \to \pm\infty$. These facts establish that $\beta(\mu)$ takes its minimum value at $\mu = \mu_0$ and that it is increasing as $\mu$ moves away from $\mu_0$. Therefore, once we have determined $n$ so that the power is at least $\beta_0$ at some $\mu_1$, we know that the power is at least $\beta_0$ for all values of $\mu$ satisfying $|\mu - \mu_0| \geq |\mu_1 - \mu_0|$.

As an example of this, consider Figure 6.3.5, where we have plotted the power function when $n = 10$, $\sigma_0 = 1$, $\mu_0 = 0$, and $\alpha = 0.05$, so that

$$\beta(\mu) = 1 - \Phi\left(1.96 - \sqrt{10}\,\mu\right) + \Phi\left(-1.96 - \sqrt{10}\,\mu\right).$$

Notice the symmetry about $\mu_0 = 0$ and the fact that $\beta(\mu)$ increases as $\mu$ moves away from 0. We obtain $\beta(1.2) = 0.967$, so that when $\mu = 1.2$, the probability that the P-value for testing $H_0 : \mu = 0$ will be less than 0.05 is 0.967. Of course, as we increase $n$, this graph will rise even more steeply to 1 as we move away from 0.

Figure 6.3.5: Plot of the power function $\beta(\mu)$ for Example 6.3.18 when $n = 10$, $\mu_0 = 0$, $\alpha = 0.05$, and $\sigma_0 = 1$ is assumed known.

Many statistical packages contain the power function as a built-in function for various tests. This is very convenient for examining the sensitivity of the test and determining sample sizes.

EXAMPLE 6.3.19 The Power Function for $\theta$ in the Bernoulli Model
For the two-sided test in Example 6.3.11, the power function is given by

$$\beta(\theta) = P_\theta\left(\frac{|\bar{X} - \theta_0|}{\sqrt{\theta_0(1 - \theta_0)/n}} \geq z_{1-\alpha/2}\right).$$

Under the assumption that we choose $n$ large enough so that $\bar{X}$ is approximately distributed $N(\theta, \theta(1 - \theta)/n)$, the approximate calculation of this power function can be approached as in Example 6.3.18. We do not pursue this calculation further here, but note that many statistical packages will evaluate $\beta(\theta)$ as a built-in function.

EXAMPLE 6.3.20 The Power Function in the Location-Scale Normal Model
For the two-sided t-test in Example 6.3.13, we have

$$\beta(\mu, \sigma^2) = P_{(\mu,\sigma^2)}\left(\frac{|\bar{X} - \mu_0|}{S/\sqrt{n}} \geq t_{1-\alpha/2}(n - 1)\right),$$

where the critical value is determined by $G$, the cumulative distribution function of the $t(n - 1)$ distribution. Notice that the power is a function of both $\mu$ and $\sigma^2$. In particular, we have to specify both $\mu$ and $\sigma^2$ and then determine $n$ so that $\beta(\mu, \sigma^2) \geq \beta_0$.
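As a numerical sketch of Examples 6.3.18 through 6.3.20: the z-test power (6.3.10) can be evaluated exactly with the normal cdf, and the t-test power can be estimated by simulation. Here `z = 1.96` and `t_crit = 2.2622` are the table values used earlier; the function names, simulation size, and seed are our own arbitrary choices.

```python
import math
import random
import statistics

def phi(x):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def z_power(mu, n, mu0=0.0, sigma0=1.0, z=1.96):
    """Exact power (6.3.10) of the two-sided z-test at mu."""
    shift = math.sqrt(n) * (mu - mu0) / sigma0
    return 1.0 - phi(z - shift) + phi(-z - shift)

def mc_t_power(mu, sigma, mu0, n, t_crit, sims=20000, seed=1):
    """Monte Carlo estimate of the two-sided t-test power at (mu, sigma)."""
    rng = random.Random(seed)
    reject = 0
    for _ in range(sims):
        x = [rng.gauss(mu, sigma) for _ in range(n)]
        t = (statistics.fmean(x) - mu0) / (statistics.stdev(x) / math.sqrt(n))
        if abs(t) > t_crit:
            reject += 1
    return reject / sims

print(round(z_power(1.2, n=10), 3))   # → 0.967 (Figure 6.3.5's setting)

# Smallest n giving z-test power at least 0.99 at mu1 = 0.1, as in (6.3.12):
n = 1
while z_power(0.1, n) < 0.99:
    n += 1
print(n)                              # → 1838

# t-test power at a one-standard-deviation departure, n = 10, alpha = 0.05:
print(mc_t_power(mu=1.0, sigma=1.0, mu0=0.0, n=10, t_crit=2.2622))  # roughly 0.80
```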
Many statistical packages will have the calculation of this power function built in, so that an appropriate $n$ can be determined using them. Alternatively, we can use Monte Carlo methods to approximate the distribution function of

$$T = \frac{\bar{X} - \mu_0}{S/\sqrt{n}}$$

when sampling from the $N(\mu, \sigma^2)$ distribution, for a variety of values of $n$, to determine an appropriate value.

Summary of Section 6.3

The MLE $\hat{\theta}$ is the best-supported value of the parameter by the model and data. As such, it makes sense to base the derivation of inferences about some characteristic $\psi(\theta)$ on the MLE. These inferences include estimates and their standard errors, confidence intervals, and the assessment of hypotheses via P-values.

An important aspect of the design of a sampling study is to decide on the size $n$ of the sample to ensure that the results of the study are sufficiently accurate. Prescribing the half-lengths of confidence intervals (margins of error) or the power of a test are two techniques for doing this.

EXERCISES

6.3.1 Suppose measurements (in centimeters) are taken using an instrument. There is error in the measuring process, and a measurement is assumed to be distributed $N(\mu, \sigma_0^2)$, where $\mu$ is the exact measurement and $\sigma_0^2 = 0.5$. If the ($n = 10$) measurements 4.7, 5.5, 4.4, 3.3, 4.6, 5.3, 5.2, 4.8, 5.7, 5.3 were obtained, assess the hypothesis $H_0 : \mu = 5$ by computing the relevant P-value. Also compute a 0.95-confidence interval for the unknown $\mu$.
6.3.3 that we drop the assumption that the population vari­ ance is 5. Assess the hypothesis H0 : 60 by computing the relevant P­value and compute a 0.95­confidence interval for the unknown 6.3.5 Suppose that in Exercise 6.3.3 we had observed only one mark and that it was 60 by computing the relevant P­value and compute 52. Assess the hypothesis H0 : a 0.95­confidence interval for the unknown Is it possible to compute a P­value and construct a 0.95­confidence interval for without the assumption that we know the population variance? Explain your answer and, if your answer is no, determine the minimum sample size n for which inference is possible without the assumption that the population variance is known. 6.3.6 Assume that the speed of light data in Table 6.3.1 is a sample from an N distribution for some unknown values of val for Assess the null hypothesis H0 : 6.3.7 A manufacturer wants to assess whether or not rods are being constructed appro­ priately, where the diameter of the rods is supposed to be 1 0 cm and the variation in the diameters is known to be distributed N 0 1. The manufacturer is willing to tolerate a deviation of the population mean from this value of no more than 0 1 cm, i.e., if the 0 1 cm, then the manufacturing process is population mean is within the interval 1 0 500 rods is taken, and the average diameter performing correctly. A sample of n 0 083 cm2. Are these results 1 05 cm, with s2 of these rods is found to be x statistically significant? Are the results practically significant? Justify your answers. 6.3.8 A polling firm conducts a poll to determine what proportion of voters in a given 250 was taken population will vote in an upcoming election. A random sample of n from the population, and the proportion answering yes was 0.62. Assess the hypothesis H0 : 6.3.9 A coin was tossed n 0.51. Do we have evidence to conclude that the coin is unfair? 
6.3.10 How many times must we toss a coin to ensure that a 0.95-confidence interval for the probability of heads on a single toss has length less than 0.1, 0.05, and 0.01, respectively?
6.3.11 Suppose a possibly biased die is rolled 30 times and that the face containing two pips comes up 10 times. Do we have evidence to conclude that the die is biased?
6.3.12 Suppose a measurement on a population is assumed to be distributed $N(\mu, 2)$, where $\mu \in R^1$ is unknown and the size of the population is very large. A researcher wants to determine a 0.95-confidence interval for $\mu$ that is no longer than 1. What is the minimum sample size that will guarantee this?
6.3.13 Suppose $x_1, \ldots, x_n$ is a sample from a Bernoulli$(\theta)$ with $\theta \in [0, 1]$ unknown.
(a) Show that the plug-in MLE of the variance $\theta(1 - \theta)$ is given by $\bar{x}(1 - \bar{x})$. (Hint: $x_i^2 = x_i$, so $\sum_{i=1}^{n} x_i^2 = n\bar{x}$.)
(b) If $X \sim$ Bernoulli$(\theta)$, then $\mathrm{Var}(X) = \theta(1 - \theta)$. Record the relationship between the plug-in estimate of this variance and the estimate given by $s^2$ in (5.5.5).
(c) Since $s^2$ is an unbiased estimator of the variance (see Problem 6.3.23), use the results in part (b) to determine the bias in the plug-in estimate. What happens to this bias as $n \to \infty$?
6.3.14 Suppose you are told that, based on some data, a 0.95-confidence interval for a characteristic $\psi(\theta)$ is given by $(1.23, 2.45)$. You are then asked if there is any evidence against the hypothesis $H_0 : \psi(\theta) = 2$. State your conclusion and justify your reasoning.
6.3.15 Suppose that $x_1$ is a value from a Bernoulli$(\theta)$ with $\theta \in [0, 1]$ unknown.
(a) Is $x_1$ an unbiased estimator of $\theta$?
(b) Is $x_1^2$ an unbiased estimator of $\theta^2$?
6.3.16 Suppose a plug-in MLE of a characteristic $\psi(\theta)$ is given by 5.3. Also, a P-value was computed to assess the hypothesis $H_0 : \psi(\theta) = 5$, and the value was 0.000132. If you are told that differences among values of $\psi(\theta)$ less than 0.5 are of no importance as far as the application is concerned, then what do you conclude from these results? Suppose instead you were told that differences among values of $\psi(\theta)$ less than 0.25 are of no importance as far as the application is concerned; then what do you conclude from these results?
no importance as far as the application is concerned, then what do you conclude from these results? 6.3.17 A P­value was computed to assess the hypothesis H0 : 0 and the value 0 22 was obtained. The investigator says this is strong evidence that the hypothesis is correct. How do you respond? 1 and the value 6.3.18 A P­value was computed to assess the hypothesis H0 : 0 55 was obtained. You are told that differences in greater than 0 5 are considered to be practically significant but not otherwise. The investigator wants to know if enough data were collected to reliably detect a difference of this size or greater. How would you respond? COMPUTER EXERCISES 2 2 0 R1 is unknown and the size of the population is is given by 5. A researcher wants to that is no longer than 1. Determine a sample 6.3.19 Suppose a measurement on a population can be assumed to follow the N distribution, where very large. A very conservative upper bound on determine a 0.95­confidence interval for size that will guarantee this. (Hint: Start with a large sample approximation.) 2, 6.3.20 Suppose a measurement on a population is assumed to be distributed N R1 is unknown and the size of the population is very large. A researcher where 0 and ensure that the probability is at wants to assess a null hypothesis H0 : least 0.80 that the P­value is less than 0.05 when 0 5 What is the minimum sample size that will guarantee this? (Hint: Tabulate the power as a function of the sample size n ) 6.3.21 Generate 103 samples of size n 5 from the Bernoulli 0 5 distribution. For each of these samples, calculate (6.3.5) with 0 95 and record the proportion of intervals that contain the true value. What do you notice? Repeat this simulation with n 20 What do you notice? 0 Chapter 6: Likelihood Inference 347 6.3.22 Generate 104 samples of size n these samples, calculate the interval x dard deviation, and compute the proportion of times this interval contains this simulation with n 5 from the N 0 1 distribution. 
For each of 5 where s is the sample stan­. Repeat 10 and 100 and compare your results. 5 x s s PROBLEMS 2 1 xn and R1. whenever T2 is also an unbiased estimator
of is a sample from a distribution with mean 1 s2 n, then determine the bias in this estimate 6.3.23 Suppose that x1 2 variance (a) Prove that s2 given by (5.5.5) is an unbiased estimator of 2 by n (b) If instead we estimate and what happens to it as n 6.3.24 Suppose we have two unbiased estimators T1 and T2 of (a) Show that T1 [0 1] (b) If T1 and T2 are also independent, e.g., determined from independent samples, then calculate Var (c) For the situation in part (b), determine the best choice of choice Var of T1 having a very large variance relative to T2? (d) Repeat parts (b) and (c), but now do not assume that T1 and T2 are independent, so Var 6.3.25 (One­sided confidence intervals for means) Suppose that x1 ple from an N pose we want to make inferences about the interval problem of finding an interval C x1 interval in the sense that for this T2 is smallest. What is the effect on this combined estimator xn is a sam­ 0 is known. Sup­. Consider the that covers the T2 in terms of Var T1 and Var T2 T2 will also involve Cov T1 T2 So we want u such that for every, 2 0 distribution, where R1 is unknown and 2 with probability at least u x1 T1 T1 T1 xn xn 1 1 1 P u X1 Xn 0 k x xn xn u x1 u x1 using u x1 is unknown and 2 0 distribution, where xn, so Obtain an exact left­ n, i.e., find the if and only if xn is called a left­sided ­confidence interval for Note that C x1 sided ­confidence interval for k that gives this property xn is a sample from 6.3.26 (One­sided hypotheses for means ) Suppose that x1 2 0 is known. Suppose we want a N to assess the hypothesis H0 : 0. Under these circumstances, we say that the observed value x is surprising if x occurs in a region of low probability for every distribution in H0. Therefore, a sensible P­value for this problem is max
H0 P X x. Show that this leads to the P­value 1 6.3.27 Determine the form of the power function associated with the hypothesis assess­ ment procedure of Problem 6.3.26, when we declare a test result as being statistically significant whenever the P­value is less than 6.3.28 Repeat Problems 6.3.25 and 6.3.26, but this time obtain a right­sided ­confidence interval for and assess the hypothesis H0 : n x 0 0 0. 348 Section 6.3: Inferences Based on the MLE 6.3.29 Repeat Problems 6.3.25 and 6.3.26, but this time do not assume the population variance is known. In particular, determine k so that u x1 n gives an exact left­sided and show that the P­value for testing H0 : ­confidence interval for 0 is given by k s xn.3.30 (One­sided confidence intervals for variances) Suppose that x1 2 distribution, where sample from the N we want a ­confidence interval of the form R1 0 2 is a is unknown, and xn C x1 xn 0 u x1 xn xn ks2 then determine k so that this interval is an exact 2 If u x1 for confidence interval. is a sample 6.3.31 (One­sided hypotheses for variances) Suppose that x1 2 is unknown, and we from the N 0 Argue that the sample variance s2 is 2 want to assess the hypothesis H0 : surprising if s2 is large and that, therefore, a sensible P­value for this problem is to compute max s2 Show that this leads to the P­value 2 distribution, where 2 R1 xn 0 ­ 2 H0 P S2 n 1 H 1 s2 2 0 n 1 2 n n 1 distribution. 1 is the distribution function of the where H 6.3.32 Determine the form of the power function associated with the hypothesis as­ sessment procedure of Problem 6.3.31, for computing the probability that the P­value is less than 6.3.33 Repeat Exercise 6.3.7, but this time do not assume that the population variance is known. In this case, the manufacturer deems the process to be under control
if the population standard deviation is less than or equal to 0.1 and the population mean is in the interval 1 0 0 1 cm. Use Problem 6.3.31 for the test concerning the population variance. CHALLENGES 6.3.34 Prove that (6.3.11) is always nonnegative. (Hint: Use the facts that metric about 0, increases to the left of 0, and decreases to the right of 0.) 6.3.35 Establish that (6.3.13) is positive when 0, negative when takes the value 0 when 0 is sym­ 0 and DISCUSSION TOPICS 6.3.36 Discuss the following statement: The accuracy of the results of a statistical analysis is so important that we should always take the largest possible sample size. Chapter 6: Likelihood Inference 349 6.3.37 Suppose we have a sequence of estimators T1 T2 as n for each might consider Tn a useful estimator of for Discuss under what circumstances you and Tn P 6.4 Distribution­Free Methods The likelihood methods we have been discussing all depend on the assumption that the. There is typically nothing that guarantees that true distribution lies in P : is correct. If the distribution we are sampling from is far the assumption P : different from any of the distributions in P :, then methods of inference that depend on this assumption, such as likelihood methods, can be very misleading. So it is important in any application to check that our assumptions make sense. We will discuss the topic of model checking in Chapter 9. Another approach to this problem is to take the model P : as large as possible, reecting the fact that we may have very little information about what the true distribution is like. For example, inferences based on the Bernoulli model with [0 1] really specify no information about the true distribution because this 0 1. 
model includes all possible distributions on the sample space S = {0, 1}. Inference methods that are suitable when {P_θ : θ ∈ Ω} is very large are sometimes called distribution-free, to reflect the fact that very little information is specified in the model about the true distribution. For finite sample spaces, it is straightforward to adopt the distribution-free approach, as with the just-cited Bernoulli model, but when the sample space is infinite, things are more complicated. In fact, sometimes it is very difficult.
Notice that the model {P_θ : θ ∈ Ω} is very large (all distributions on R1 having their first l ≥ 2 moments finite), and these approximate inferences are appropriate for every distribution in the model. A cautionary note is that the estimation of moments becomes more difficult as the order of the moments rises. Very large sample sizes are required for the accurate estimation of high-order moments.

The general method of moments principle allows us to make inferences about characteristics that are functions of moments. This takes the following form.

Method of moments principle: A function ψ(μ1, ..., μk) of the first k ≤ l moments is estimated by ψ(m1, ..., mk).

When ψ is continuously differentiable and nonzero at (μ1, ..., μk), it can be proved that ψ(M1, ..., Mk) converges in distribution to a normal with mean given by ψ(μ1, ..., μk) and variance given by an expression involving the variances and covariances of M1, ..., Mk and the partial derivatives of ψ. We do not pursue this topic further here but note that, in the case k = 1 and l = 2, these conditions lead to the so-called delta theorem, which says that

n^{1/2} (ψ(M1) − ψ(μ1)) / (|ψ′(M1)| S) →D N(0, 1)   (6.4.1)

as n → ∞, provided that ψ is continuously differentiable at μ1 and ψ′(μ1) ≠ 0; see Approximation Theorems of Mathematical Statistics, by R. J. Serfling (John Wiley & Sons, New York, 1980), for a proof of this result. This result provides approximate confidence intervals and tests for ψ(μ1).

EXAMPLE 6.4.1 Inference about a Characteristic Using the Method of Moments
Suppose x1, ..., xn is a sample from a distribution with unknown mean μ and variance σ², and we want to construct a γ-confidence interval for ψ(μ) = μ^{−2}. Then ψ′(μ) = −2μ^{−3}, so the delta theorem says that

n^{1/2} (X̄^{−2} − μ^{−2}) / (2 S |X̄|^{−3}) →D N(0, 1)

as n → ∞. Therefore,

x̄^{−2} ± 2 s n^{−1/2} |x̄|^{−3} z_{(1+γ)/2}

is an approximate γ-confidence interval for μ^{−2}. Notice that ψ is not continuously differentiable at 0, so if μ = 0, then this confidence interval is not valid. So if you think the population mean could be 0, or even close to 0, this would not be an appropriate choice of confidence interval.
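To make the method of moments principle concrete, here is a small Python sketch. The data and the function names are illustrative only (not from the text): a smooth function of the first two moments, here the coefficient of variation σ/μ of Exercise 6.4.3, is estimated by plugging the sample moments m1 and m2 into the same function.

```python
import math

def raw_moments(xs, k):
    """First k raw sample moments m_j = (1/n) * sum_i x_i**j."""
    n = len(xs)
    return [sum(x ** j for x in xs) / n for j in range(1, k + 1)]

def mom_coefficient_of_variation(xs):
    """Method of moments estimate of sigma/mu, using sigma^2 = mu_2 - mu_1^2."""
    m1, m2 = raw_moments(xs, 2)
    return math.sqrt(m2 - m1 ** 2) / m1

# Illustrative data only.
xs = [4.1, 5.2, 3.9, 4.8, 5.5, 4.4]
m1, m2 = raw_moments(xs, 2)
cv = mom_coefficient_of_variation(xs)
```

An approximate confidence interval would then follow from the delta theorem, with the variance expression depending on the partial derivatives of the chosen function.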
6.4.2 Bootstrapping

Suppose that {P_θ : θ ∈ Ω} is the set of all distributions on R1 and that x1, ..., xn is a sample from some unknown distribution with cdf F. Then the empirical distribution function

F̂(x) = (1/n) Σ_{i=1}^n I_{(−∞, x]}(x_i),

introduced in Section 5.4.1, is a natural estimator of the cdf F(x). We have

E(F̂(x)) = (1/n) Σ_{i=1}^n E(I_{(−∞, x]}(X_i)) = F(x)

for every x, so that F̂(x) is unbiased for F(x). The weak and strong laws of large numbers then establish the consistency of F̂(x) for F(x) as n → ∞. Observing that the I_{(−∞, x]}(x_i) constitute a sample from the Bernoulli(F(x)) distribution, we have that the standard error of F̂(x) is given by (F(x)(1 − F(x))/n)^{1/2}. These facts can be used to form approximate confidence intervals and test hypotheses for F(x), just as in Examples 6.3.7 and 6.3.11.

Observe that F̂ prescribes a distribution on the set {x1, ..., xn}; e.g., if the sample values are distinct, this probability distribution puts mass 1/n on each x_i. Note that it is easy to sample a value from F̂, as we just select a value from x1, ..., xn, where each point has probability 1/n of occurring. When the x_i are not distinct, then this is changed in an obvious way, namely, x_i has probability f_i/n, where f_i is the number of times x_i occurs in x1, ..., xn.

Suppose we are interested in estimating ψ = T(F), where T is a function of the distribution F. We use this notation to emphasize that ψ corresponds to some characteristic of the distribution rather than just being an arbitrary mathematical function of the data. For example, T(F) could be a moment of F, a quantile of F, etc. Now suppose we have an estimator ψ̂ = ψ̂(x1, ..., xn) that is being proposed for inferences about ψ. Naturally, we are interested in the accuracy of ψ̂, and we could choose to measure this by

MSE_θ(ψ̂) = E_θ((ψ̂ − ψ)²) = Var_θ(ψ̂) + (E_θ(ψ̂) − ψ)².   (6.4.2)

Then, to assess the accuracy of our estimate ψ̂(x1, ..., xn), we need to estimate (6.4.2). When n is large, we expect F̂ to be close to F, so a natural estimate of ψ is T(F̂), i.e., we simply compute the same characteristic of the empirical distribution. This is the approach adopted in Chapter 5 when we discussed descriptive statistics. Then we estimate the square of the bias in ψ̂ by (ψ̂(x1, ..., xn) − T(F̂))².
(6.4.3)

To estimate the variance of ψ̂, we use

Var_F̂(ψ̂) = E_F̂((ψ̂ − E_F̂(ψ̂))²) = (1/n^n) Σ_{i_1=1}^n ··· Σ_{i_n=1}^n ( ψ̂(x_{i_1}, ..., x_{i_n}) − (1/n^n) Σ_{j_1=1}^n ··· Σ_{j_n=1}^n ψ̂(x_{j_1}, ..., x_{j_n}) )²,   (6.4.4)

i.e., we treat x1, ..., xn as i.i.d. random values with cdf given by F̂. So to calculate an estimate of (6.4.2), we simply have to calculate Var_F̂(ψ̂). This is rarely feasible, however, because the sums in (6.4.4) involve n^n terms. For even very modest sample sizes, like n = 10, this cannot be carried out, even on a computer.

The solution to this problem is to approximate (6.4.4) by drawing m independent samples of size n from F̂, evaluating ψ̂ for each of these samples to obtain ψ̂_1, ..., ψ̂_m, and then using the sample variance

V̂ar_F̂(ψ̂) = (1/(m − 1)) Σ_{i=1}^m (ψ̂_i − ψ̄)²   (6.4.5)

as the estimate. The m samples from F̂ are referred to as bootstrap samples or resamples, and this technique is referred to as bootstrapping or resampling. Combining (6.4.3) and (6.4.5) gives an estimate of MSE_θ(ψ̂). Furthermore, ψ̄ = (1/m) Σ_{i=1}^m ψ̂_i is called the bootstrap mean, and (V̂ar_F̂(ψ̂))^{1/2} is the bootstrap standard error. Note that the bootstrap standard error is a valid estimate of the error in ψ̂ whenever ψ̂ has little or no bias. Consider the following example.

EXAMPLE 6.4.2 The Sample Median as an Estimator of the Population Mean
Suppose we want to estimate the location of a unimodal, symmetric distribution. While the sample mean might seem like the obvious choice for this, it turns out that for some distributions there are better estimators. This is because the distribution we are sampling from may have long tails, i.e., may produce extreme values that are far from the center of the distribution. This implies that the sample average itself could be highly influenced by a few extreme observations and would thus be a poor estimate of the true mean.

Not all estimators suffer from this defect. For example, if we are sampling from a symmetric distribution, then either the sample mean or the sample median could serve as an estimator of the population mean. But, as we have previously discussed, the sample median is not influenced by extreme values, i
.e., it does not change as we move the smallest (or largest) values away from the rest of the data, and this is not the case for the sample mean.

A problem with working with the sample median x̂_0.5 rather than the sample mean x̄ is that the sampling distribution of x̂_0.5 is typically more difficult to study than that of x̄. In this situation, bootstrapping becomes useful. If we are estimating the population mean ψ = T(F) by using the sample median (which is appropriate when we know the distribution we are sampling from is symmetric), then the estimate of the squared bias in the sample median is given by (x̂_0.5 − T(F̂))² = (x̂_0.5 − x̄)², because ψ̂ = x̂_0.5 and T(F̂) = x̄ (the mean of the empirical distribution is x̄). This should be close to 0, or else our assumption of a symmetric distribution would seem to be incorrect. To calculate (6.4.5), we have to generate m samples of size n from {x1, ..., xn} (with replacement) and calculate x̂_0.5 for each sample.

To illustrate, suppose we have a sample of size n = 15 given by the following table. Then, using the definition of the sample median given by (5.5.4), x̂_0.5 = −2.000 and x̄ = −2.087. The estimate of the squared bias (6.4.3) equals 7.569 × 10^−3, which is appropriately small. Using a statistical package, we generated m = 10^3 samples of size n = 15 from the distribution that has probability 1/15 at each of the sample points and obtained V̂ar_F̂(x̂_0.5) = 0.770866. Based on m = 10^4 samples, we obtained V̂ar_F̂(x̂_0.5) = 0.718612, and based on m = 10^5 samples, we obtained V̂ar_F̂(x̂_0.5) = 0.704928. Because these estimates appear to be stabilizing, we take this as our estimate. So in this case, the bootstrap estimate of the MSE of the sample median at the true value of θ is given by

MSE ≈ 0.007569 + 0.704928 = 0.71250.

Note that the estimated MSE of the sample average is given by s²/n = 0.62410, so the sample mean and sample median appear to be providing similar accuracy in this problem.
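The computation in this example is easy to reproduce in outline. The following Python sketch uses (6.4.3) for the squared bias and the resampling estimate (6.4.5) for the variance; the 15 data values below are hypothetical stand-ins, not the example's sample.

```python
import random
import statistics

def bootstrap_mse_of_median(xs, m=1000, seed=0):
    """Bootstrap estimate of the MSE of the sample median as an estimator
    of the population mean: squared bias (median minus the mean of F-hat,
    which is x-bar) plus the sample variance of m resampled medians."""
    rng = random.Random(seed)
    n = len(xs)
    sq_bias = (statistics.median(xs) - statistics.mean(xs)) ** 2
    meds = [statistics.median(rng.choices(xs, k=n)) for _ in range(m)]
    var_hat = statistics.variance(meds)
    return sq_bias + var_hat, sq_bias, var_hat

# Hypothetical, roughly symmetric sample of size n = 15.
xs = [-3.2, -1.9, -1.1, -0.7, -0.3, 0.1, 0.4, 0.8,
      1.2, 1.5, 1.9, 2.3, 2.8, 3.4, 4.1]
mse, sq_bias, var_hat = bootstrap_mse_of_median(xs)
```

As in the example, one would rerun this with increasing m until the variance estimate stabilizes.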
In Figure 6.4.1, we have plotted a density histogram of the sample medians obtained from the m = 10^5 bootstrap samples. Note that the histogram is very skewed. See Appendix B for more
details on how these computations were carried out.

Figure 6.4.1: A density histogram of m = 10^5 sample medians, each obtained from a bootstrap sample of size n = 15 from the data in Example 6.4.2.

Even with the very small sample size here, it was necessary to use the computer to carry out our calculations. To evaluate (6.4.4) exactly would have required computing the median of 15^15 (roughly 4.4 × 10^17) samples, which is clearly impossible even using a computer. So the bootstrap is a very useful device.

The validity of the bootstrapping technique depends on ψ̂ having its first two moments, so the family {P_θ : θ ∈ Ω} must be appropriately restricted, but we can see that the technique is very general. Broadly speaking, it is not clear how to choose m. Perhaps the most direct method is to implement bootstrapping for successively higher values of m and stop when we see that the results stabilize for several values. This is what we did in Example 6.4.2, but it must be acknowledged that this approach is not foolproof, as we could have a sample x1, ..., xn such that the estimate (6.4.5) is very slowly convergent.

Bootstrap Confidence Intervals

Bootstrap methods have also been devised to obtain approximate γ-confidence intervals for characteristics such as ψ = T(F). One very simple method is to simply form the bootstrap t γ-confidence interval

ψ̂ ± t_{(1+γ)/2}(n − 1) (V̂ar_F̂(ψ̂))^{1/2},

where t_{(1+γ)/2}(n − 1) is the (1 + γ)/2th quantile of the t(n − 1) distribution. Another possibility is to compute a bootstrap percentile confidence interval given by (ψ̂_{(1−γ)/2}, ψ̂_{(1+γ)/2}), where ψ̂_p denotes the pth empirical quantile of the ψ̂ values in the bootstrap sample of m. It should be noted that to be applicable, these intervals require some conditions to hold. In particular, ψ̂ should be at least approximately unbiased for ψ, and the bootstrap distribution should be approximately normal.
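Both intervals are straightforward to sketch once the m resampled values of the estimator are in hand. In this Python sketch the data and names are illustrative, and a standard normal quantile stands in for the t(n − 1) quantile (an assumption made here for simplicity, not the text's exact recipe):

```python
import math
import random
import statistics

def bootstrap_intervals(xs, estimator, m=1000, seed=1):
    """Bootstrap t and percentile 0.95-confidence intervals for T(F),
    computed from m resamples drawn from the empirical distribution."""
    rng = random.Random(seed)
    n = len(xs)
    est = estimator(xs)
    boot = sorted(estimator(rng.choices(xs, k=n)) for _ in range(m))
    se = math.sqrt(statistics.variance(boot))  # bootstrap standard error
    z = 1.959964                               # 0.975 N(0,1) quantile, in place of t(n-1)
    t_interval = (est - z * se, est + z * se)
    pct_interval = (boot[round(0.025 * m)], boot[round(0.975 * m) - 1])
    return t_interval, pct_interval

# Illustrative data only.
xs = [0.2, -1.4, 2.5, 0.9, -0.3, 1.7, 0.6, -2.2, 1.1, 0.4]
(t_lo, t_hi), (p_lo, p_hi) = bootstrap_intervals(xs, statistics.mean)
```

The percentile interval simply reads off the empirical 0.025 and 0.975 quantiles of the sorted bootstrap values, which is why approximate normality and unbiasedness of the estimator matter for its validity.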
Looking at the plot of the bootstrap distribution in Figure 6.4.1, we can see that the median does not have an approximately normal bootstrap distribution, so these
intervals are not applicable with the median. Consider the following example.

EXAMPLE 6.4.3 The 0.25-Trimmed Mean as an Estimator of the Population Mean
One of the virtues of the sample median as an estimator of the population mean is that it is not affected by extreme values in the sample. On the other hand, the sample median discards all but one or two of the data values and so seems to be discarding a lot of information. Estimators known as trimmed means can be seen as an attempt at retaining the virtues of the median while at the same time not discarding too much information. Let ⌊x⌋ denote the greatest integer less than or equal to x ∈ R1.

Definition 6.4.1 For α ∈ [0, 1], a sample α-trimmed mean is given by

x̄_α = (1/(n − 2⌊nα⌋)) Σ_{i=⌊nα⌋+1}^{n−⌊nα⌋} x_(i),

where x_(i) is the ith-order statistic.

Thus, for a sample α-trimmed mean, we toss out (approximately) nα of the smallest data values and nα of the largest data values and calculate the average of the n − 2⌊nα⌋ data values remaining. We need the greatest integer function because, in general, nα will not be an integer. Note that the sample mean arises with α = 0 and the sample median arises with α = 0.5.

For the data in Example 6.4.2 and α = 0.25, we have ⌊0.25 × 15⌋ = ⌊3.75⌋ = 3, so we discard the three smallest and three largest observations, leaving nine data values. The average of these nine values gives x̄_0.25 = −1.97778, which we note is close to both the sample median and the sample mean.

Now suppose we use a 0.25-trimmed mean as an estimator of a population mean, where we believe the population distribution is symmetric. Consider the data in Example 6.4.2 and suppose we generated m = 10^4 bootstrap samples. We have plotted a histogram of the 10^4 values of x̄_0.25 in Figure 6.4.2. Notice that it is very normal looking, so we feel justified in using the confidence intervals associated with the bootstrap.
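The trimmed-mean computation in Definition 6.4.1 is mechanical to implement. A minimal Python sketch (the integer data below is illustrative, not the example's sample):

```python
import math

def trimmed_mean(xs, alpha):
    """Sample alpha-trimmed mean: drop the floor(n*alpha) smallest and
    floor(n*alpha) largest order statistics and average the rest."""
    n = len(xs)
    k = math.floor(n * alpha)
    kept = sorted(xs)[k:n - k]
    return sum(kept) / len(kept)

vals = list(range(1, 16))  # 1, 2, ..., 15
```

With alpha = 0 this reduces to the sample mean, and with n odd and alpha = 0.5 only the middle order statistic survives, recovering the sample median; for n = 15 and alpha = 0.25, three values are trimmed from each end, exactly as in the example.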
In this case, we obtained V̂ar_F̂(x̄_0.25) = 0.7380, so the bootstrap t 0.95-confidence interval for the mean is given by −1.97778 ± 2.14479 (0.7380)^{1/2}.
Sorting the bootstrap sample gives a bootstrap percentile 0.95-confidence interval of (−3.36667, −0.488889), which shows that the two intervals are very similar.

Figure 6.4.2: A density histogram of m = 10^4 sample 0.25-trimmed means, each obtained from a bootstrap sample of size n = 15 from the data in Example 6.4.3.

More details about the bootstrap can be found in An Introduction to the Bootstrap, by B. Efron and R. J. Tibshirani (Chapman and Hall, New York, 1993).

6.4.3 The Sign Statistic and Inferences about Quantiles

Suppose that {P_θ : θ ∈ Ω} is the set of all distributions on R1 such that the associated distribution functions are continuous. Suppose we want to make inferences about a pth quantile of P_θ. We denote this quantile by x_p(θ), so that, when the distribution function associated with P_θ is denoted by F_θ, we have F_θ(x_p(θ)) = p. Note that continuity implies there is always a solution in x to F_θ(x) = p and that x_p(θ) is the smallest solution.

Recall the definitions and discussion of the estimation of these quantities in Example 5.5.2, based on a sample x1, ..., xn. For simplicity, let us restrict attention to the cases where p = i/n for some i; in this case, the order statistic x_(i) is the natural estimate of x_p(θ).

Now consider assessing the evidence in the data concerning the hypothesis H0: x_p(θ) = x0. For testing this hypothesis, we can use the sign test statistic, given by

S = Σ_{i=1}^n I_{(−∞, x0]}(x_i).

So S is the number of sample values less than or equal to x0. Notice that when H0 is true, I_{(−∞, x0]}(x1), ..., I_{(−∞, x0]}(xn) is a sample from the Bernoulli(p) distribution. This implies that, when H0 is true, S ~ Binomial(n, p). Therefore, we can test H0 by computing the observed value of S, denoted So, and seeing whether this value lies in a region of low probability for the Binomial(n, p) distribution. Because the binomial distribution is unimodal, the
regions of low probability correspond to the left and right tails of this distribution. See, for example, Figure 6.4.3, where we have plotted the probability function of a Binomial(20, 0.7) distribution. The P-value is therefore obtained by computing the probability of the set

{i : C(n, i) p^i (1 − p)^(n−i) ≤ C(n, So) p^So (1 − p)^(n−So)}   (6.4.6)

using the Binomial(n, p) probability distribution. This is a measure of how far out in the tails the observed value So is (see Figure 6.4.3). Notice that this P-value is completely independent of θ and is thus valid for the entire model. Tables of binomial probabilities (Table D.6 in Appendix D), or built-in functions available in most statistical packages, can be used to calculate this P-value.

Figure 6.4.3: Plot of the Binomial(20, 0.7) probability function.

When n is large, we have that, under H0,

Z = (S − np) / (np(1 − p))^{1/2} →D N(0, 1)

as n → ∞. Therefore, an approximate P-value is given by

2(1 − Φ((|So − np| − 0.5) / (np(1 − p))^{1/2}))

(as in Example 6.3.11), where we have replaced |So − np| by |So − np| − 0.5 as a correction for continuity (see Example 4.4.9 for discussion of the correction for continuity).

A special case arises when p = 1/2, i.e., when we are making inferences about an unknown population median x_0.5(θ). In this case, the distribution of S under H0 is Binomial(n, 1/2). Because the Binomial(n, 1/2) is unimodal and symmetrical about n/2, (6.4.6) becomes

{i : |i − n/2| ≥ |So − n/2|}.

If we want a γ-confidence interval for x_0.5(θ), then we can use the equivalence between tests, which reject whenever the P-value is less than or equal to 1 − γ, and γ-confidence intervals (see Example 6.3.12). For this, let j be the smallest integer greater than n/2 satisfying

P({i : |i − n/2| ≥ |j − n/2|}) ≤ 1 − γ,   (6.4.7)

where P is the Binomial(n, 1/2) distribution. If
So ≥ j or So ≤ n − j, then we will reject the hypothesis H0: x_0.5(θ) = x0 at the 1 − γ level, and will not otherwise. This leads to the γ-confidence interval consisting of all those values x0 for which H0: x_0.5(θ) = x0 is not rejected at the 1 − γ level, namely,

C(x1, ..., xn) = {x0 : |Σ_{i=1}^n I_{(−∞, x0]}(x_i) − n/2| < j − n/2} = [x_(n−j+1), x_(j))   (6.4.8)

because, for example, Σ_{i=1}^n I_{(−∞, x0]}(x_i) ≤ n − j if and only if x0 < x_(n−j+1).

EXAMPLE 6.4.4 Application of the Sign Test
Suppose we have the following sample of size n = 10 from a continuous random variable X, and we wish to test the hypothesis H0: x_0.5(θ) = 0:

0.44  1.15  0.06  1.08  0.43  5.67  0.16  4.97  2.13  0.11

The boxplot in Figure 6.4.4 indicates that it is very unlikely that this sample came from a normal distribution, as there are two extreme observations. So it is appropriate to measure the location of the distribution of X by the median.

Figure 6.4.4: Boxplot of the data in Example 6.4.4.

In this case, the sample median (using (5.5.4)) is given by (0.11 + 0.43)/2 = 0.27. The sign statistic for the null is given by So = Σ_{i=1}^{10} I_{(−∞, 0]}(x_i) = 4. The P-value is given by

1 − C(10, 5)(1/2)^10 = 1 − 0.24609 = 0.75391,

and we have no reason to reject the null hypothesis.

Now suppose that we want a 0.95-confidence interval for the median. Using software (or Table D.6), we calculate

C(10, 5)(1/2)^10 = 0.24609, C(10, 4)(1/2)^10 = 0.20508, C(10, 3)(1/2)^10 = 0.11719, C(10, 2)(1/2)^10 = 4.3945 × 10^−2, C(10, 1)(1/2)^10 = 9.7656 × 10^−3, C(10, 0)(1/2)^10 = 9.7656 × 10^−4.

We will use these values to compute the value of j in (6.4.7). We can use the symmetry of the Binomial(10, 1/2) distribution about n/2 = 5 to compute the values of P({i : |i − n/2| ≥ |j − n/2|}) as follows. For j = 10, (6.4.7) equals

2 C(10, 0)(1/2)^10 = 1.9531 ×
10^−3, and note that 1.9531 × 10^−3 < 1 − 0.95 = 0.05. For j = 9, (6.4.7) equals

2(C(10, 0) + C(10, 1))(1/2)^10 = 2.148 × 10^−2,

which is also less than 0.05. For j = 8, (6.4.7) equals

2(C(10, 0) + C(10, 1) + C(10, 2))(1/2)^10 = 0.10938,

and this is greater than 0.05. Therefore, the appropriate value is j = 9, and a 0.95-confidence interval for the median is given by [x_(2), x_(9)) = [−0.16, 1.15).

There are many other distribution-free methods for a variety of statistical situations. While some of these are discussed in the problems, we leave a thorough study of such methods to further courses in statistics.

Summary of Section 6.4

Distribution-free methods of statistical inference are appropriate methods when we feel we can make only very minimal assumptions about the distribution from which we are sampling. The method of moments, bootstrapping, and methods of inference based on the sign statistic are three distribution-free methods that are applicable in different circumstances.

EXERCISES

6.4.1 Suppose we obtained the following sample from a distribution that we know has its first six moments. Determine an approximate 0.95-confidence interval for μ³, where μ is the population mean.

3.27  1.42  1.24  2.75  3.97  2.25  3.47  1.48  4.97  8.00
0.09  7.45  3.26  0.15  6.20  3.74  4.12  3.64  4.88  4.55

6.4.2 Determine the method of moments estimator of the population variance. Is this estimator unbiased for the population variance? Justify your answer.

6.4.3 (Coefficient of variation) The coefficient of variation for a population measurement with nonzero mean is given by σ/μ, where μ is the population mean and σ is the population standard deviation. What is the method of moments estimate of the coefficient of variation? Prove that the coefficient of variation is invariant under rescalings of the distribution, i.e., under transformations of the form T(x) = cx for constant c > 0.
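Returning to the sign test of Section 6.4.3: the P-value (6.4.6) is short to compute directly. In this Python sketch the ten data values are hypothetical, chosen so that the observed sign statistic equals 4 when x0 = 0, matching Example 6.4.4:

```python
from math import comb

def sign_test_pvalue(xs, x0, p=0.5):
    """P-value (6.4.6): the total Binomial(n, p) probability of all counts i
    whose probability is no larger than that of the observed statistic S."""
    n = len(xs)
    s_obs = sum(1 for x in xs if x <= x0)  # sign statistic S
    probs = [comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(n + 1)]
    return sum(pr for pr in probs if pr <= probs[s_obs])

# Hypothetical sample with four values <= 0, so S = 4 for x0 = 0.
xs = [-2.1, -0.5, 0.3, 1.2, -0.8, 2.4, 0.1, 3.3, -1.6, 0.9]
pvalue = sign_test_pvalue(xs, 0.0)
```

With n = 10 and S = 4, this returns 1 − C(10, 5)(1/2)^10 = 0.75391 (to five places), agreeing with Example 6.4.4; the corresponding confidence interval would then be read off the order statistics via (6.4.8).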
It is this invariance that leads to the coefficient of variation being an appropriate measure of sampling variability in certain problems, as it is independent of the units we
use for the measurement.

6.4.4 For the context described in Exercise 6.4.1, determine an approximate 0.95-confidence interval for exp.

6.4.5 Verify that the third moment of an N(μ, σ²) distribution is given by μ³ + 3μσ². Because the normal distribution is specified by its first two moments, any characteristic of the normal distribution can be estimated by simply plugging in the MLE estimates of μ and σ². Compare the method of moments estimator of the third moment with this plug-in MLE estimator, i.e., determine whether they are the same or not.

6.4.6 Suppose we have the sample data 1.48, 4.10, 2.02, 56.59, 2.98, 1.51, 76.49, 50.25, 43.52, 2.96. Consider this as a sample from a normal distribution with unknown mean and variance, and assess the hypothesis that the population median (which is the same as the mean in this case) is 3. Also carry out a sign test that the population median is 3 and compare the results. Plot a boxplot for these data. Does this support the assumption that we are sampling from a normal distribution? Which test do you think is more appropriate? Justify your answer.

6.4.7 Determine the empirical distribution function based on the sample given below. Using the empirical cdf, determine the sample median, the first and third quartiles, and the interquartile range. What is your estimate of F(2)?

1.06  1.42  0.00  0.98  1.28  0.44  1.02  0.38  0.40  0.58
1.35  2.13  1.36  0.24  2.05  0.03  0.35  1.34  1.06  1.29

6.4.8 Suppose you obtain the sample of n = 3 distinct values given by 1, 2, and 3.
(a) Write down all possible bootstrap samples.
(b) If you are bootstrapping the sample median, what are the possible values for the sample median for a bootstrap sample?
(c) If you are bootstrapping the sample mean, what are the possible values for the sample mean for a bootstrap sample?
(d) What do you conclude about the bootstrap distribution of the sample median compared to the bootstrap distribution of the sample mean?
6.4.9
Explain why the central limit theorem justifies saying that the bootstrap distribution of the sample mean is approximately normal when n and m are large. What result justifies the approximate normality of the bootstrap distribution of a function of the sample mean under certain conditions?
6.4.10 For the data in Exercise 6.4.1, determine an approximate 0.95-confidence interval for the population median when we assume the distribution we are sampling from is symmetric with finite first and second moments. (Hint: Use large sample results.)
6.4.11 Suppose you have a sample of n distinct values and are interested in the bootstrap distribution of the sample range given by x₍ₙ₎ − x₍₁₎. What is the maximum number of values that this statistic can take over all bootstrap samples? What are the largest and smallest values that the sample range can take in a bootstrap sample? Do you think the bootstrap distribution of the sample range will be approximately normal? Justify your answer.
6.4.12 Suppose you obtain the data 1.1, 1.0, 1.1, 3.1, 2.2, and 3.1. How many distinct bootstrap samples are there?

Section 6.4: Distribution-Free Methods

COMPUTER EXERCISES
6.4.13 For the data of Exercise 6.4.7, assess the hypothesis that the population median is 0. State a 0.95-confidence interval for the population median. What is the exact coverage probability of this interval?
6.4.14 For the data of Exercise 6.4.7, assess the hypothesis that the first quartile of the distribution we are sampling from is 1.0.
6.4.15 With a bootstrap sample size of m = 10³, use bootstrapping to estimate the MSE of the plug-in MLE estimator of μ³ for the normal distribution, using the sample data in Exercise 6.4.1. Determine whether m = 1000 is a large enough sample for accurate results.
6.4.16 For the data of Exercise 6.4.1, use the plug-in MLE to estimate the first quartile of an N(μ, σ²) distribution. Use bootstrapping to estimate the MSE of this estimate for m = 10⁴ (use (5.5.3) to compute the first quartile of the empirical distribution).
6.4.17 For the data of Exercise 6.4.1, use the plug-in MLE to estimate F(3) for an N(μ, σ²) distribution. Use bootstrapping to estimate the MSE of this estimate for m = 10³ and m = 10⁴.
6.4.18 For the data of Exercise 6.4.1, form a 0.95-confidence interval for μ assuming that this is a sample from an N(μ, σ²) distribution. Also compute a 0.95-confidence interval for μ based on the sign statistic, a bootstrap t 0.95-confidence interval, and a bootstrap percentile 0.95-confidence interval using m = 10³ for the bootstrapping. Compare the four intervals.
6.4.19 For the data of Exercise 6.4.1, use the plug-in MLE to estimate the first quintile, i.e., x₀.₂, of an N(μ, σ²) distribution. Plot a density histogram estimate of the bootstrap distribution of this estimator for m = 10³ and compute a bootstrap t 0.95-confidence interval for x₀.₂, if you think it is appropriate.
6.4.20 For the data of Exercise 6.4.1, use the plug-in MLE to estimate μ³ of an N(μ, σ²) distribution. Plot a density histogram estimate of the bootstrap distribution of this estimator for m = 10³ and compute a bootstrap percentile 0.95-confidence interval for μ³ if you think it is appropriate.

PROBLEMS
6.4.21 Prove that when x₁, …, xₙ is a sample of distinct values from a distribution on R¹, then the ith moment of the empirical distribution on R¹ (i.e., the distribution with cdf given by F̂) is mᵢ = (1/n) Σⱼ xⱼⁱ.
6.4.22 Suppose that x₁, …, xₙ is a sample from a distribution on R¹. Determine the general form of the ith moment of F̂, i.e., in contrast to Problem 6.4.21, we are now allowing for several of the data values to be equal.
6.4.23 (Variance stabilizing transformations) From the delta theorem, we have that when M₁ is asymptotically normal with mean μ₁ and variance σ²/n, and Ψ is continuously differentiable with Ψ′(μ₁) ≠ 0, then Ψ(M₁) is asymptotically normal with mean Ψ(μ₁) and variance (Ψ′(μ₁))² σ²/n. In some applications, it is important to choose the transformation Ψ so that the asymptotic variance does not depend on the mean, i.e., (Ψ′(μ₁))² σ² is constant as μ₁ varies (note that σ² may change as μ₁ changes). Such transformations are known as variance stabilizing transformations.
(a) If we are sampling from a Poisson(μ₁) distribution, then show that Ψ(x) = √x is variance stabilizing.
(b) If we are sampling from a Bernoulli(μ₁) distribution, then show that Ψ(x) = arcsin(√x) is variance stabilizing.
(c) If we are sampling from a distribution on (0, ∞) whose variance is proportional to the square of its mean (like the Gamma distribution), then show that Ψ(x) = ln x is variance stabilizing.

CHALLENGES
6.4.24 Suppose that X has an absolutely continuous distribution on R¹ with density f that is symmetrical about its median. Assuming that the median is 0, prove that |X| and sgn(X) are independent, with |X| having density 2f and sgn(X) uniformly distributed on {−1, 1}.
6.4.25 (Fisher signed deviation statistic) Suppose that x₁, …, xₙ is a sample from an absolutely continuous distribution on R¹ with density that is symmetrical about its median. Suppose we want to assess the hypothesis H₀ : x₀.₅ = x₀. One possibility for this is to use the Fisher signed deviation test based on the statistic S = Σᵢ sgn(xᵢ − x₀)|xᵢ − x₀|. The observed value of S is denoted S₀. We then assess H₀ by comparing S₀ with the conditional distribution of S given the absolute deviations |x₁ − x₀|, …, |xₙ − x₀|. If the value S₀ occurs near the smallest or largest possible value for S under this conditional distribution, then we assert that we have evidence against H₀. We measure this by computing the P-value given by the conditional probability of obtaining a value as far, or farther, from the center of the conditional distribution of S as S₀ is, using the conditional mean as the center.
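For small n, the conditional distribution of S described above can be computed by exhaustively enumerating all 2ⁿ sign assignments. The following sketch is ours, not the text's (the function name and the two-sided tail rule centered at the conditional mean 0 are our assumptions); it applies the test to the data used in part (c) below, with x₀ = 2:

```python
import itertools

import numpy as np

def fisher_signed_deviation_test(x, x0):
    """Two-sided randomization P-value for H0: the median equals x0."""
    d = np.asarray(x, dtype=float) - x0
    s_obs = d.sum()                 # observed S = sum of sgn(d_i) * |d_i|
    abs_d = np.abs(d)
    # Given the |d_i|, H0 makes the signs i.i.d. uniform on {-1, 1};
    # enumerate all 2^n sign vectors (feasible only for small n).
    svals = np.array([np.dot(signs, abs_d)
                      for signs in itertools.product((-1.0, 1.0), repeat=len(abs_d))])
    # Values as far or farther from the conditional mean 0 than S_obs.
    return float(np.mean(np.abs(svals) >= abs(s_obs) - 1e-12))

data = [2.2, 1.5, 3.4, 0.4, 5.3, 4.3, 2.1]
p_value = fisher_signed_deviation_test(data, x0=2.0)
```

For these data the P-value comes out well above 0.05, so H₀ : x₀.₅ = 2 would not be declared statistically significant at that level.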
This is an example of a randomization test, as the distribution for the test statistic is determined by randomly modifying the observed data (in this case, by randomly changing the signs of the deviations of the xᵢ from x₀).
(a) Prove that S₀ = n(x̄ − x₀).
(b) Prove that the P-value described above does not depend on which distribution we are sampling from in the model. Prove that the conditional mean of S is 0 and the conditional distribution of S is symmetric about this value.
(c) Use the Fisher signed deviation test statistic to assess the hypothesis H₀ : x₀.₅ = 2 when the data are 2.2, 1.5, 3.4, 0.4, 5.3, 4.3, 2.1, with the results declared to be statistically significant if the P-value is less than or equal to 0.05. (Hint: Based on the results obtained in part (b), you need only compute probabilities for the extreme values of S.)
(d) Show that using the Fisher signed deviation test statistic to assess the hypothesis H₀ : x₀.₅ = x₀ is equivalent to the following randomized t-test hypothesis assessment procedure. For this, we compute the conditional distribution of T = √n (X̄ − x₀)/S (where S here denotes the sample standard deviation) when the |xᵢ − x₀| are fixed and the sgn(Xᵢ − x₀) are i.i.d. uniform on {−1, 1}. Compare the observed value of the t-statistic with this distribution, as we did for the Fisher signed deviation test statistic. (Hint: Show that Σᵢ (xᵢ − x₀)² = Σᵢ (xᵢ − x̄)² + n(x̄ − x₀)² and that large absolute values of T correspond to large absolute values of S.)

6.5 Asymptotics for the MLE (Advanced)

As we saw in Examples 6.3.7 and 6.3.11, implementing exact sampling procedures based on the MLE can be difficult. In those examples, because the MLE was the sample average and we could use the central limit theorem, large sample theory allowed us to work out approximate procedures. In fact, there is some general large sample theory available for the MLE that allows us to obtain approximate sampling inferences. This is the content of this section. The results we develop are all for the case when θ is one-dimensional. Similar results exist for higher-dimensional problems, but we leave those to a later course.

In Section 6.3, the basic issue was the need to measure the accuracy of the MLE.
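As a concrete, and entirely illustrative, way to think about this accuracy (this simulation is ours, not the text's, and the parameter values are arbitrary choices): for i.i.d. Poisson sampling the MLE of the mean is the sample average, and its spread over repeated samples can be compared with the asymptotic standard error (nI(θ))^(−1/2) = (θ/n)^(1/2) obtained for the Poisson model in Example 6.5.5.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 3.0, 50, 20000          # true mean, sample size, replications

# Each row is one Poisson(theta) sample of size n; the MLE is its average.
mles = rng.poisson(theta, size=(reps, n)).mean(axis=1)

empirical_se = mles.std()                # spread of the MLE across samples
asymptotic_se = np.sqrt(theta / n)       # (n I(theta))**-0.5 for the Poisson
```

The two standard errors agree closely, which is what the theory developed in this section predicts.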
One approach is to plot the likelihood and examine how concentrated it is about its peak, with a more highly concentrated likelihood implying greater accuracy for the MLE. There are several problems with this. In particular, the appearance of the likelihood will depend greatly on how we choose the scales for the axes. With appropriate choices, we can make a likelihood look as concentrated or as diffuse as we want. Also, when θ is more than two-dimensional, we cannot even plot the likelihood. One solution, when the likelihood is a smooth function of θ, is to compute a numerical measure of how concentrated the log-likelihood is at its peak. The quantity typically used for this is called the observed Fisher information.

Definition 6.5.1 The observed Fisher information is given by

I(θ̂(s)) = −∂²l(θ | s)/∂θ² evaluated at θ = θ̂(s),   (6.5.1)

where θ̂(s) is the MLE.

The larger the observed Fisher information is, the more peaked the likelihood function is at its maximum value. We will show that the observed Fisher information is estimating a quantity of considerable importance in statistical inference.

Suppose that the response X is real-valued, θ is real-valued, and the model {f_θ : θ ∈ Ω} satisfies the following regularity conditions:

∂² ln f_θ(x)/∂θ² exists for each x,   (6.5.2)

E_θ(S(θ | X)) = ∫ (∂ ln f_θ(x)/∂θ) f_θ(x) dx = 0,   (6.5.3)

∫ ∂²f_θ(x)/∂θ² dx = 0,   (6.5.4)

and

Var_θ(S(θ | X)) = E_θ(S²(θ | X)) < ∞.   (6.5.5)

Note that we have ∂f_θ(x)/∂θ = (∂ ln f_θ(x)/∂θ) f_θ(x), so we can write (6.5.3) equivalently as

∫ ∂f_θ(x)/∂θ dx = 0.

Also note that (6.5.4) can be written as

0 = ∫ ∂²f_θ(x)/∂θ² dx = ∫ (∂²l(θ | x)/∂θ²) f_θ(x) dx + ∫ S²(θ | x) f_θ(x) dx = E_θ(∂²l(θ | X)/∂θ²) + E_θ(S²(θ | X)).

This, together with (6.5.3) and (6.5.5), implies that we can write (6.5.4) equivalently as

Var_θ(S(θ | X)) = E_θ(S²(θ | X)) = −E_θ(∂²l(θ | X)/∂θ²).

We give a name to the quantity on the left.

Definition 6.5.2 The function I(θ) = Var_θ(S(θ | X)) is called the Fisher information of the model.

Our developments above have proven the following result.

Theorem 6.5.1 If (6.5.2) and (6.5.3) are satisfied, then E_θ(S(θ | X)) = 0. If, in addition, (6.5.4) and
(6.5.5) are satisfied, then

I(θ) = Var_θ(S(θ | X)) = −E_θ(∂²l(θ | X)/∂θ²).

Now we see why I(θ̂(s)) is called the observed Fisher information, as it is a natural estimate of the Fisher information at the true value θ. We note that there is another natural estimate of the Fisher information at the true value, given by I(θ̂(s)). We call this the plug-in Fisher information.

When we have a sample x₁, …, xₙ from f_θ, then

S(θ | x₁, …, xₙ) = ∂(Σᵢ ln f_θ(xᵢ))/∂θ = Σᵢ ∂ ln f_θ(xᵢ)/∂θ = Σᵢ S(θ | xᵢ).

So, if (6.5.3) holds for the basic model, then E_θ(S(θ | X₁, …, Xₙ)) = 0 and (6.5.3) also holds for the sampling model. Furthermore, if (6.5.4) holds for the basic model, then

Var_θ(S(θ | X₁, …, Xₙ)) = Σᵢ E_θ(S²(θ | Xᵢ)) = nI(θ),

which implies

Var_θ(S(θ | X₁, …, Xₙ)) = −E_θ(∂²l(θ | X₁, …, Xₙ)/∂θ²)

because l(θ | x₁, …, xₙ) = Σᵢ l(θ | xᵢ). Therefore, (6.5.4) holds for the sampling model as well, and the Fisher information for the sampling model is given by the sample size times the Fisher information for the basic model. We have established the following result.

Corollary 6.5.1 Under i.i.d. sampling from a model with Fisher information I(θ), the Fisher information for a sample of size n is given by nI(θ).

The conditions necessary for Theorem 6.5.1 to apply do not hold in general and have to be checked in each example. There are, however, many models where these conditions do hold.

EXAMPLE 6.5.1 Nonexistence of the Fisher Information
If X ∼ Uniform[0, θ], then f_θ(x) = (1/θ) I_[0,θ](x), which is not differentiable at θ = x for any x. Indeed, if we ignored the lack of differentiability at θ = x and wrote

∂f_θ(x)/∂θ = −(1/θ²) I_[0,θ](x),

then

∫ ∂f_θ(x)/∂θ dx = −(1/θ²) ∫ I_[0,θ](x) dx = −1/θ ≠ 0.

So we cannot define the Fisher information for this model.

EXAMPLE 6.5.2 Location Normal
Suppose we have a sample x₁, …, xₙ from an N(μ, σ₀²) distribution, where μ ∈ R¹ is unknown and σ₀² > 0 is known. We saw in Example 6.2.2
that

S(μ | x₁, …, xₙ) = n(x̄ − μ)/σ₀²,

and therefore −∂²l(μ | x₁, …, xₙ)/∂μ² = n/σ₀², so that

nI(μ) = −E_μ(∂²l(μ | X₁, …, Xₙ)/∂μ²) = n/σ₀².

We also determined in Example 6.2.2 that the MLE is given by μ̂(x₁, …, xₙ) = x̄. Then the plug-in Fisher information is nI(x̄) = n/σ₀², while the observed Fisher information is −∂²l(μ | x₁, …, xₙ)/∂μ² evaluated at μ = x̄, which also equals n/σ₀². In this case, there is no need to estimate the Fisher information, but it is comforting that both of our estimates give the exact value.

We now state, without proof, some theorems about the large sample behavior of the MLE under repeated sampling from the model. First, we have a result concerning the consistency of the MLE as an estimator of the true value of θ.

Theorem 6.5.2 Under regularity conditions (like those specified above) for the model {f_θ : θ ∈ Ω}, the MLE θ̂ exists a.s. and θ̂ → θ a.s. as n → ∞.
PROOF See Approximation Theorems of Mathematical Statistics, by R. J. Serfling (John Wiley & Sons, New York, 1980), for the proof of this result.

We see that Theorem 6.5.2 serves as a kind of strong law for the MLE. It also turns out that when the sample size is large, the sampling distribution of the MLE is approximately normal.

Theorem 6.5.3 Under regularity conditions (like those specified above) for the model {f_θ : θ ∈ Ω},

(nI(θ))^(1/2) (θ̂ − θ) →D N(0, 1) as n → ∞.

PROOF See Approximation Theorems of Mathematical Statistics, by R. J. Serfling (John Wiley & Sons, New York, 1980), for the proof of this result.

We see that Theorem 6.5.3 serves as a kind of central limit theorem for the MLE. To make this result fully useful to us for inference, we need the following corollary to this theorem.

Corollary 6.5.2 When I(θ) is a continuous function of θ, then

(nI(θ̂))^(1/2) (θ̂ − θ) →D N(0, 1).

In Corollary 6.5.2, we have estimated the Fisher information nI(θ) by the plug-in Fisher information nI(θ̂). Often it is very difficult to evaluate the function I. In such a case, we instead estimate nI(θ) by the observed Fisher information I(θ̂(x₁, …, xₙ)). A result such as Coroll
ary 6.5.2 again holds in this case.

From Corollary 6.5.2, we can devise large sample approximate inference methods based on the MLE. For example, the approximate standard error of the MLE is (nI(θ̂))^(−1/2). An approximate γ-confidence interval is given by

θ̂ ± (nI(θ̂))^(−1/2) z₍(1+γ)/2₎.

Finally, if we want to assess the hypothesis H₀ : θ = θ₀, we can do this by computing the approximate P-value

2(1 − Φ((nI(θ₀))^(1/2) |θ̂ − θ₀|)).

Notice that we are using Theorem 6.5.3 for the P-value, rather than Corollary 6.5.2, as, when H₀ is true, we know the asymptotic variance of the MLE is (nI(θ₀))^(−1). So we do not have to estimate this quantity.

When evaluating I(θ) is difficult, we can replace nI(θ̂) by the observed Fisher information I(θ̂(x₁, …, xₙ)) in the above expressions for the confidence interval and P-value. We now see very clearly the significance of the observed information. Of course, as we move from using nI(θ) to nI(θ̂) to I(θ̂(x₁, …, xₙ)), we expect that larger sample sizes n are needed to make the normality approximation accurate. We consider some examples.

EXAMPLE 6.5.3 Location Normal Model
Using the Fisher information derived in Example 6.5.2, the approximate γ-confidence interval based on the MLE is

μ̂ ± (nI(μ̂))^(−1/2) z₍(1+γ)/2₎ = x̄ ± (σ₀/√n) z₍(1+γ)/2₎.

This is just the z-confidence interval derived in Example 6.3.6. Rather than being an approximate γ-confidence interval, the coverage is exact in this case. Similarly, the approximate P-value corresponds to the z-test, and the P-value is exact.

EXAMPLE 6.5.4 Bernoulli Model
Suppose that x₁, …, xₙ is a sample from a Bernoulli(θ) distribution, where θ ∈ [0, 1] is unknown. The likelihood function is given by

L(θ | x₁, …, xₙ) = θ^(nx̄) (1 − θ)^(n(1−x̄)),

and the MLE of θ is x̄. The log-likelihood is

l(θ | x₁, …, xₙ) = nx̄ ln θ + n(1 − x̄) ln(1 − θ),

the score function is given by

S(θ | x₁, …, xₙ) = nx̄/θ − n(1 − x̄)/(1 − θ),

and

∂S(θ | x₁, …, xₙ)/∂θ = −nx̄/θ² − n(1 − x̄)/(1 − θ)².

Therefore, the Fisher information for the sample is

nI(θ) = −E_θ(∂S(θ | X₁, …, Xₙ)/∂θ) = n/θ + n/(1 − θ) = n/(θ(1 − θ)),

and the plug-in Fisher information is nI(x̄) = n/(x̄(1 − x̄)). Note that the plug-in Fisher information is the same as the observed Fisher information in this case. So an approximate γ-confidence interval is given by

x̄ ± (nI(x̄))^(−1/2) z₍(1+γ)/2₎ = x̄ ± z₍(1+γ)/2₎ (x̄(1 − x̄)/n)^(1/2),

which is precisely the interval obtained in Example 6.3.7 using large sample considerations based on the central limit theorem. Similarly, we obtain the same P-value as in Example 6.3.11 when testing H₀ : θ = θ₀.

EXAMPLE 6.5.5 Poisson Model
Suppose that x₁, …, xₙ is a sample from a Poisson(λ) distribution, where λ > 0 is unknown. The likelihood function is given by

L(λ | x₁, …, xₙ) = λ^(nx̄) e^(−nλ).

The log-likelihood is

l(λ | x₁, …, xₙ) = nx̄ ln λ − nλ,

the score function is given by

S(λ | x₁, …, xₙ) = nx̄/λ − n,

and

∂S(λ | x₁, …, xₙ)/∂λ = −nx̄/λ².

From this we deduce that the MLE of λ is x̄. Therefore, the Fisher information for the sample is

nI(λ) = −E_λ(∂S(λ | X₁, …, Xₙ)/∂λ) = E_λ(nX̄/λ²) = n/λ,

and the plug-in Fisher information is nI(x̄) = n/x̄. Note that the plug-in Fisher information is the same as the observed Fisher information in this case. So an approximate γ-confidence interval is given by

x̄ ± (nI(x̄))^(−1/2) z₍(1+γ)/2₎ = x̄ ± z₍(1+γ)/2₎ (x̄/n)^(1/2).

Similarly, the approximate P-value for testing H₀ : λ = λ₀ is given by

2(1 − Φ((n/λ₀)^(1/2) |x̄ − λ₀|)).

Note that we have used the Fisher information evaluated at λ₀ for this test.

Summary of Section 6.5
- Under regularity conditions on the statistical model with parameter θ, we can define the Fisher information I(θ) for the model.
- Under regularity conditions on the statistical model, it can be proved that, when θ is the true value of the parameter, the MLE is consistent for θ, and the MLE is approximately normally distributed with mean given by θ and with variance given by (nI(θ))^(−1).
- The Fisher information I(θ) can be estimated by plugging in the MLE or by using the observed Fisher information. These estimates lead to practically useful inferences for θ in many problems.

EXERCISES
6.5.1 If x₁, …, xₙ is a sample from an N(μ, σ₀²) distribution, where μ ∈ R¹ is unknown and σ₀² is known, determine the Fisher information.
6.5.2 If x₁, …, xₙ is a sample from a Gamma(α₀, θ) distribution, where α₀ is known and θ > 0 is unknown, determine the Fisher information.
6.5.3 If x₁, …, xₙ is a sample from a Pareto(α) distribution (see Exercise 6.2.9), where α > 0 is unknown, determine the Fisher information.
6.5.4 Suppose the number of calls arriving at an answering service during a given hour of the day is Poisson(λ), where λ > 0 is unknown. The number of calls actually received during this hour was recorded for 20 days and the following data were obtained.

9 10 7 8 12 11 12 5 16 13 9 5 13 5 13 9 9 8 9 10

Construct an approximate 0.95-confidence interval for λ. Assess the hypothesis that this is a sample from a Poisson(11) distribution. If you are going to decide that the hypothesis is false when the P-value is less than 0.05, then compute an approximate power for this procedure when λ = 10.0.
6.5.5 Suppose the lifelengths in hours of lightbulbs from a manufacturing process are known to be distributed Gamma(2, θ), where θ > 0 is unknown. A random sample of 27 bulbs was taken and their lifelengths measured, with the following data obtained.

336.87 2750.71 2199.44 710.64 2162.01 1856.47 2225.68 3524.23 2618.51 979.54 2159.18 1908.94 1397.96 292.99 1835.55 1385.36 2690.52 361.68 914.41 1548.48 1801.84 753.24 1016.16 1666.71 1196.42 1225.68 2422.53

Determine an approximate 0.90-confidence interval for θ.
6.5.6 Repeat the analysis of Exercise 6.5.5, but this time assume that the lifelengths are distributed Gamma(1, θ). Comment on the differences in the two analyses.
6.5.7 Suppose that incomes (measured in thousands of dollars) above $20K can be assumed to be distributed Pareto(α), where α > 0 is unknown, for a particular population. A sample of 20 is taken from the population and the following data obtained.

21.265 20.857 21.090 20.047 20.019 32.509 21.622 20.693 20.109 23.182 21.199 20.035 20.084 20.038 22.054 20.190 20.488 20.456 20.066 20.302

Construct an approximate 0.95-confidence interval for α. Assess the hypothesis that the mean income in this population is $25K.
6.5.8 Suppose that x₁, …, xₙ is a sample from an Exponential(λ) distribution. Construct an approximate left-sided γ-confidence interval for λ. (See Problem 6.3.25.)
6.5.9 Suppose that x₁, …, xₙ is a sample from a Geometric(θ) distribution. Construct an approximate left-sided γ-confidence interval for θ. (See Problem 6.3.25.)
6.5.10 Suppose that x₁, …, xₙ is a sample from a Negative-Binomial(r, θ) distribution. Construct an approximate left-sided γ-confidence interval for θ. (See Problem 6.3.25.)

PROBLEMS
6.5.11 In Exercise 6.5.1, verify that (6.5.2), (6.5.3), (6.5.4), and (6.5.5) are satisfied.
6.5.12 In Exercise 6.5.2, verify that (6.5.2), (6.5.3), (6.5.4), and (6.5.5) are satisfied.
6.5.13 In Exercise 6.5.3, verify that (6.5.2), (6.5.3), (6.5.4), and (6.5.5) are satisfied.
6.5.14 Suppose that sampling from the model {f_θ : θ ∈ Ω} satisfies (6.5.2), (6.5.3), (6.5.4), and (6.5.5). Prove that, under appropriate regularity conditions for the model, n⁻¹ I(θ̂(X₁, …, Xₙ)) → I(θ) as n → ∞.
6.5.15 (MV) When Θ ⊆ R², the Fisher information matrix is defined to be the variance matrix of the score, I(θ) = Var_θ(S(θ | X)). If (X₁, X₂, X₃) ∼ Multinomial(1, θ₁, θ₂, θ₃) (Example 6.1.5), then determine the Fisher information for this model. Recall that θ₃ = 1 − θ₁ − θ₂ and so is determined from (θ₁, θ₂).
6.5.16 (MV) Generalize Problem 6.5.15 to the case where (X₁, …, X_k) ∼ Multinomial(1, θ₁, …, θ_k).
6.5.17 (MV) Using the definition of the Fisher information matrix in Problem 6.5.15, determine the Fisher information for the Bivariate Normal(μ₁, μ₂, 1, 1, 0) model, where μ₁, μ₂ ∈ R¹ are unknown.
6.5.18 (MV) Extending the definition in Problem 6.5.15 to the three-dimensional case, determine the Fisher information for the Bivariate Normal(μ₁, μ₂, σ², σ², 0) model, where μ₁, μ₂ ∈ R¹ and σ² > 0 are unknown.

CHALLENGES
6.5.19 Suppose that the model {f_θ : θ ∈ Ω} satisfies the regularity conditions (6.5.2), (6.5.3), (6.5.4), and (6.5.5) and has Fisher information I(θ). If Ψ : Ω → R¹ is 1-1, and Ψ and Ψ⁻¹ are continuously differentiable, then, putting ψ = Ψ(θ), prove that the model given by {g_ψ = f_(Ψ⁻¹(ψ)) : ψ ∈ Ψ(Ω)} satisfies the regularity conditions and that its Fisher information at ψ is given by I(Ψ⁻¹(ψ)) ((Ψ⁻¹)′(ψ))².

DISCUSSION TOPICS
6.5.20 The method of moments inference methods discussed in Section 6.4.1 are essentially large sample methods based on the central limit theorem. The large sample methods in Section 6.5 are based on the form of the likelihood function. Which methods do you think are more likely to be correct when we know very little about the form of the distribution from which we are sampling? In what sense will your choice be "more correct"?

Chapter 7
Bayesian Inference

CHAPTER OUTLINE
Section 1 The Prior and Posterior Distributions
Section 2 Inferences Based on the Posterior
Section 3 Bayesian Computations
Section 4 Choosing Priors
Section 5 Further Proofs (Advanced)

In Chapter 5, we introduced the basic concepts of inference. At the heart of the theory of inference is the concept of the statistical model {f_θ : θ ∈ Ω} that describes the statistician's uncertainty about how the observed data were produced. Chapter 6 dealt with the analysis of this uncertainty based on the model and the data alone.
In some cases, this seemed quite successful, but we note that we dealt only with some of the simpler contexts there.

If we accept the principle that, to be amenable to analysis, all uncertainties need to be described by probabilities, then the prescription of a model alone is incomplete, as this does not tell us how to make probability statements about the unknown true value of θ. In this chapter, we complete the description so that all uncertainties are described by probabilities. This leads to a probability distribution for θ and, in essence, we are in the situation of Section 5.2, with the parameter now playing the role of the unobserved response. This is the Bayesian approach to inference.

Many statisticians prefer to develop statistical theory without the additional ingredients necessary for a full probability description of the unknowns. In part, this is motivated by the desire to avoid the prescription of the additional model ingredients necessary for the Bayesian formulation. Of course, we would prefer to have our statistical analysis proceed based on the fewest and weakest model assumptions possible. For example, in Section 6.4, we introduced distribution-free methods. A price is paid for this weakening, however, and this typically manifests itself in ambiguities about how inference should proceed. The Bayesian formulation in essence removes the ambiguity, but at the price of a more involved model.

The Bayesian approach to inference is sometimes presented as antagonistic to methods that are based on repeated sampling properties (often referred to as frequentist methods), as discussed, for example, in Chapter 6. The approach taken in this text, however, is that the Bayesian model arises naturally from the statistician assuming more ingredients for the model. It is up to the statistician to decide what ingredients can be justified and then use appropriate methods. We must be wary of all model assumptions, because using inappropriate ones may invalidate our inferences. Model checking will be taken up in Chapter 9.
7.1 The Prior and Posterior Distributions

The Bayesian model for inference contains the statistical model {f_θ : θ ∈ Ω} for the data s ∈ S and adds to this the prior probability measure Π for θ, with density π. The prior describes the statistician's beliefs about the true value of the parameter θ a priori, i.e., before observing the data. For example, if Ω = [0, 1] and θ equals the probability of getting a head on the toss of a coin, then the prior density π plotted in Figure 7.1.1 indicates that the statistician has some belief that the true value of θ is around 0.5. But this information is not very precise.

Figure 7.1.1: A fairly diffuse prior on [0,1].

On the other hand, the prior density plotted in Figure 7.1.2 indicates that the statistician has very precise information about the true value of θ. In fact, if the statistician knows nothing about the true value of θ, then using the uniform distribution on [0, 1] might be appropriate.

Figure 7.1.2: A fairly precise prior on [0,1].

It is important to remember that the probabilities prescribed by the prior represent beliefs. They do not in general correspond to long-run frequencies, although they could in certain circumstances. A natural question to ask is: Where do these beliefs come from in an application? An easy answer is to say that they come from previous experience with the random system under investigation or perhaps with related systems. To be honest, however, this is rarely the case, and one has to admit that the prior, as well as the statistical model, is often a somewhat arbitrary construction used to drive the statistician's investigations. This raises the issue as to whether or not the inferences derived have any relevance to the practical context, if the model ingredients suffer from this arbitrariness. This is where the concept of model checking comes into play, a topic we will discuss in Chapter 9. At this point, we will assume that all the ingredients make sense, but remember that in an application, these must be checked if the inferences drawn are to be practically meaningful.

We note that the ingredients of the Bayesian formulation for inference prescribe a marginal distribution for θ, namely, the prior Π, and a set of conditional distributions for the data s given θ, namely, {f_θ : θ ∈ Ω}. By the law of total probability (Theorems 2.3.1 and 2.8.1), these ingredients specify a joint distribution for (θ, s), namely, π(θ) f_θ(s), where π denotes the probability or density function associated with Π. When the prior distribution is absolutely continuous, the marginal distribution for s is given by

m(s) = ∫_Ω π(θ) f_θ(s) dθ

and is referred to as the prior predictive distribution of the data. When the prior distribution of θ is discrete, we replace (as usual) the integral by a sum.

If we did not observe any data, then the prior predictive distribution is the relevant distribution for making probability statements about the unknown value of s. Similarly, the prior π is the relevant distribution to use in making probability statements about θ before we observe s. Inference about these unobserved quantities then proceeds as described in Section 5.2.

Recall now the principle of conditional probability; namely, P(A) is replaced by P(A | C) after we are told that C is true. Therefore, after observing the data, the relevant distribution to use in making probability statements about θ is the conditional distribution of θ given s. We denote this conditional probability measure by Π(· | s) and refer to it as the posterior distribution of θ. Note that the density (or probability function, whichever is relevant) of the posterior is obtained immediately by taking the joint density π(θ) f_θ(s) of (θ, s) and dividing it by the marginal m(s) of s.

Definition 7.1.1 The posterior distribution of θ is the conditional distribution of θ, given s. The posterior density, or posterior probability function (whichever is relevant), is given by

π(θ | s) = π(θ) f_θ(s) / m(s).   (7.1.1)

Sometimes this use of conditional probability is referred to as an application of Bayes' theorem (Theorem 1.5.2). This is because we can think of a value of θ being selected first according to π, and then s being generated from f_θ. We then want to make probability statements about the outcome of the first stage, having observed the outcome of the second stage. It is important to remember, however, that choosing to use the posterior distribution for probability statements about θ is an axiom, or principle, not a theorem.

We note that in (7.1.1), the prior predictive of the data s plays the role of the inverse normalizing constant for the posterior density.
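The role of m(s) as a pure normalizing constant can be checked numerically (an illustrative sketch of ours, not part of the text): on a grid, multiply a uniform prior by the Bernoulli likelihood with nx̄ = 10 successes in n = 40 trials and divide by a Riemann-sum approximation of m(s); the result matches the Beta(11, 31) posterior derived analytically in Example 7.1.1 below. The grid size is an arbitrary choice.

```python
import math

import numpy as np

# Midpoint grid on (0, 1).
theta = np.linspace(0.0005, 0.9995, 1000)
dtheta = theta[1] - theta[0]

prior = np.ones_like(theta)              # uniform prior on [0, 1]
n, successes = 40, 10                    # n * xbar = 10 heads in n = 40 tosses
likelihood = theta**successes * (1.0 - theta)**(n - successes)

joint = prior * likelihood               # proportional to the posterior density
m_s = np.sum(joint) * dtheta             # numerical prior predictive m(s)
post = joint / m_s                       # properly normalized posterior density

# Exact posterior from Example 7.1.1: the Beta(11, 31) density.
log_B = math.lgamma(11) + math.lgamma(31) - math.lgamma(42)
exact = np.exp(10 * np.log(theta) + 30 * np.log(1.0 - theta) - log_B)
```

Here `m_s` approximates the Beta function B(11, 31), and `post` agrees with `exact` up to small numerical error.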
By this we mean that the posterior density of θ, as a function of θ, is proportional to π(θ) f_θ(s); to convert this into a proper density function, we need only divide by m(s). In many examples, we do not need to compute the inverse normalizing constant. This is because we recognize the functional form of π(θ) f_θ(s), as a function of θ, and so immediately deduce the posterior probability distribution of θ. Also, there are Monte Carlo methods, such as those discussed in Chapter 4, that allow us to sample from the posterior without knowing m(s) (also see Section 7.3).

We consider some applications of Bayesian inference.

EXAMPLE 7.1.1 Bernoulli Model
Suppose that we observe a sample x₁, …, xₙ from the Bernoulli(θ) distribution with θ ∈ [0, 1] unknown. For the prior, we take π to be equal to a Beta(α, β) density (see Problem 2.4.16). Then the posterior of θ is proportional to the likelihood

∏ᵢ θ^(xᵢ) (1 − θ)^(1−xᵢ) = θ^(nx̄) (1 − θ)^(n(1−x̄))

times the prior

(B(α, β))⁻¹ θ^(α−1) (1 − θ)^(β−1).

This product is proportional to

θ^(nx̄+α−1) (1 − θ)^(n(1−x̄)+β−1).

We recognize this as the unnormalized density of a Beta(nx̄ + α, n(1 − x̄) + β) distribution. So in this example, we did not need to compute m(x₁, …, xₙ) to obtain the posterior.

As a specific case, suppose that we observe nx̄ = 10 in a sample of n = 40 and α = β = 1, i.e., we have a uniform prior on θ. Then the posterior of θ is given by the Beta(11, 31) distribution. We plot the posterior density in Figure 7.1.3 as well as the prior.

Figure 7.1.3: Prior (dashed line) and posterior densities (solid line) in Example 7.1.1.

The spread of the posterior distribution gives us some idea of the precision of any probability statements we make about θ. Note how much information the data have added, as reflected in the graphs of the prior and posterior densities.

EXAMPLE 7.1.2 Location Normal Model
Suppose that x₁, …, xₙ is a sample from an N(μ, σ₀²) distribution, where μ ∈ R¹ is unknown and σ₀² > 0 is known. The likelihood function is then given by

L(μ | x₁, …, xₙ) = exp(−n(x̄ − μ)²/(2σ₀²)).

Suppose we take the prior distribution of μ to be an N(μ₀, τ₀²) for some specified choice of μ₀ and τ₀². The posterior density of μ is then proportional to
as a function of of an as being proportional to the density distribution. Notice that the posterior mean is a weighted average of the prior mean 0 and the sample mean x, with weights and respectively. This implies that the posterior mean lies between the prior mean and the sample mean. Furthermore, the posterior variance is smaller than the variance of the sample mean. So if the information expressed by the prior is accurate, inferences about based on the posterior will be more accurate than those based on the sample mean alone. Note 2 0 is — the less inuence the that the more diffuse the prior is — namely, the larger 2 1 then the ratio of the prior has. For example, when n 1 0 posterior variance to the sample mean variance is 20 21 0 95 So there has been a 5% improvement due to the use of prior information. 20 and 2 0 Chapter 7: Bayesian Inference 379 For example, suppose that 0 1 2 Then the prior is an N 0 2 distribution, while the posterior is an 2 and that for n observe x 1 0 10 we 2 0 2 0 N 1 2 10 1 1 0 2 10 1 1 2 1 1 2 10 1 N 1 1429 9 523 8 10 2 distribution. These densities are plotted in Figure 7.1.4. Notice that the posterior is quite concentrated compared to the prior, so we have learned a lot from the data. 1.2 1.0 0.8 0.6 0.4 0.2 ­5 ­4 ­3 ­2 ­1 0 1 2 3 4 5 x Figure 7.1.4: Plot of the N 0 2 prior (dashed line) and the N 1 1429 9 523 8 posterior (solid line) in Example 7.1.2. 10 2 S EXAMPLE 7.1.3 Multinomial Model Suppose we have a categorical response s that takes k possible values, say, s 1 1 observing its label. k. For example, suppose we have a bowl containing chips labelled one of k. A proportion i of the chips are labelled i, and we randomly draw a chip, When the i are unknown, the statistical model is given by where and : 0 k i 1 i 1 k and 1 k 1 Note that the parameter space is really only k k 1 namely, once we have determined k 1 ­dimensional because, for example, the 1 of the i 1 k 1 remaining value is specified. Now suppose we observe a sample
s1 sn from this model. Let the frequency (count) of the ith category in the sample be denoted by xi Then, from Example 2.8.5, we see that the likelihood is given by L 1 k s1 sn x1 1 x2 2 xk k 380 Section 7.1: The Prior and Posterior Distributions For the prior we assume that sity (see Problem 2.7.13) given by 1 k 1 Dirichlet 1 2 k with den7.1.3) k 1 (recall that for i are nonnega­ tive constants chosen by the statistician to reect her beliefs about the unknown value of 1 corresponds to a uniform k. The choice distribution, as then (7.1.3) is constant on k 1). The 1 1 1 1 2 k k. The posterior density of 1 k 1 is then proportional to x1 1 1 1 x2 2 2 1 k 1 xk k for 1 ution of k 1. From (7.1.3), we immediately deduce that the posterior distrib­ k 1 is Dirichlet x1 1 x2 2 xk k. EXAMPLE 7.1.4 Location­Scale Normal Model xn is a sample from an N Suppose that x1 0 are unknown. The likelihood function is then given by and 2 distribution, where R1 L 2 x1 xn 2 2 n 2 exp n 2 2 x 2 exp n 1 2 2 s2 Suppose we put the following prior on 2. First, we specify that 2 N 0 2 2 0 i.e., the conditional prior distribution of ance 2 0 2. Then we specify the marginal prior distribution of given 2 is normal with mean 0 and vari­ 2 as 1 2 Gamma 0 0. (7.1.4) Sometimes (7.1.4) is referred to by saying that values 0 2 0 0 and 0 are selected by the statistician to reect his prior beliefs. 2 is distributed inverse Gamma. The From this, we can deduce (see Section 7.5 for the full derivation) that the posterior distribution of 2 is given by and where 2 x1 xn x1 xn Gamma nx 0 2 0 (7.1.5) (7.1.6) (7.1.7) Chapter 7: Bayesian Inference 381 and n 1 x 0 n 2 0 2 from the posterior
, we can make use of the method of composition (see Problem 2.10.13) by first generating σ² using (7.1.6) and then using (7.1.5) to generate μ. We will discuss this further in Section 7.3.

Notice that, as τ₀ → ∞, the conditional posterior distribution of μ given σ² converges in distribution to an N(x̄, σ²/n) distribution, because

μₓ → x̄   (7.1.9)

and

(n + 1/τ₀²)⁻¹ → 1/n,   (7.1.10)

i.e., as the prior on μ becomes increasingly diffuse. Furthermore, as τ₀ → ∞ and β₀ → 0, the marginal posterior of 1/σ² converges in distribution to a Gamma(α₀ + n/2, (n − 1)s²/2) distribution, because

βₓ → (n − 1)s²/2.   (7.1.11)

Actually, it does not really seem to make sense to let τ₀ → ∞ and β₀ → 0 in the prior distribution of (μ, σ²), as the prior does not converge to a proper probability distribution. The idea here, however, is that we think of taking τ₀ large and β₀ small, so that the posterior inferences are approximately those obtained from the limiting posterior. There is still a need to choose α₀, however, even in the diffuse case, as the limiting inferences are dependent on this quantity.

Summary of Section 7.1
- Bayesian inference adds the prior probability distribution to the sampling model for the data as an additional ingredient to be used in determining inferences about the unknown value of the parameter.
- Having observed the data, the principle of conditional probability leads to the posterior distribution of the parameter as the basis for inference.
- Inference about marginal parameters is handled by marginalizing the full posterior.

EXERCISES
7.1.1 Suppose that S = {1, 2}, Ω = {1, 2, 3}, and the class of probability distributions for the response s is given by the following table.

          s = 1    s = 2
f₁(s)      1/2      1/2
f₂(s)      1/3      2/3
f₃(s)      3/4      1/4

If we use the prior π given by the table

θ         1      2      3
π(θ)     1/5    2/5    2/5

then determine the posterior distribution of θ.
7.1.2 In Example 7.1.1, determine the posterior mean and variance of θ for each possible sample of size 2.
7.1.3 In Example 7.1.2, what is the posterior probability that $\mu$ is positive, given that $\mu_0 = 0$, $\sigma_0^2 = \tau_0^2 = 1$, $n = 10$, and $\bar{x} = 0.10$? Compare this with the prior probability of this event.
7.1.4 Suppose that $x_1, \ldots, x_n$ is a sample from a $\mbox{Poisson}(\lambda)$ distribution with $\lambda > 0$ unknown. If we use the prior distribution for $\lambda$ given by the $\mbox{Gamma}(\alpha_0, \beta_0)$ distribution, then determine the posterior distribution of $\lambda$.
7.1.5 Suppose that $x_1, \ldots, x_n$ is a sample from a $\mbox{Uniform}[0, \theta]$ distribution with $\theta > 0$ unknown. If the prior distribution of $\theta$ is $\mbox{Gamma}(\alpha_0, \beta_0)$, then obtain the form of the posterior density of $\theta$.
7.1.6 Find the posterior mean and variance of $\theta_i$ in Example 7.1.3 when $k = 3$. (Hint: See Problems 3.2.16 and 3.3.20.)
7.1.7 Suppose we have the sample

6.56  6.39  3.30  3.03  5.31
5.62  5.10  2.45  8.24  3.71
4.14  2.80  7.43  6.82  4.75
4.09  7.95  5.84  8.44  9.36

from an $N(\mu, \sigma^2)$ distribution, where $\mu$ and $\sigma^2$ are unknown, and we determine that a prior specified by $\mu \mid \sigma^2 \sim N(3, 4\sigma^2)$ and $1/\sigma^2 \sim \mbox{Gamma}(1, 1)$ is appropriate. Determine the posterior distribution of $(\mu, \sigma^2)$.
7.1.8 Suppose that the prior probability of $\theta$ being in a set $A$ is 0.25 and the posterior probability of $\theta$ being in $A$ is 0.80.
(a) Explain what effect the data have had on your beliefs concerning the true value of $\theta$ being in $A$.
(b) Explain why a posterior probability is more relevant to report than a prior probability.
7.1.9 Suppose you toss a coin and put a $\mbox{Uniform}[0.4, 0.6]$ prior on $\theta$, the probability of getting a head on a single toss.
(a) If you toss the coin $n$ times and obtain $n$ heads, then determine the posterior density of $\theta$.
(b) Suppose the true value of $\theta$ is, in fact, 0.99. Will the posterior distribution of $\theta$ ever put any probability mass around $\theta = 0.99$ for any sample of $n$ tosses?
(c) What do you conclude from part (b) about how you should choose a prior?
7.1.10 Suppose that for the statistical model $\{f_\theta : \theta \in R^1\}$, we assign the prior density $\pi$. Now suppose that we reparameterize the model via the function $\psi = \Psi(\theta)$, where $\Psi : R^1 \to R^1$ is differentiable and strictly increasing.
(a) Determine the prior density of $\psi$.
(b) Show that $m(x)$ is the same whether we parameterize the model by $\theta$ or by $\psi$.
7.1.11 Suppose that for the statistical model $\{f_\theta : \theta \in \{1, 2, 3\}\}$, we assign the prior probability function $\pi$, which is uniform on $\{1, 2, 3\}$. Now suppose we are interested primarily in making inferences about $\psi = I_{\{1,2\}}(\theta)$.
(a) Determine the prior probability distribution of $\psi$. Is this distribution uniform?
(b) A uniform prior distribution is sometimes used to express complete ignorance about the value of a parameter. Does complete ignorance about the value of a parameter imply complete ignorance about a function of the parameter? Explain.
7.1.12 Suppose that for the statistical model $\{f_\theta : \theta \in [0, 1]\}$, we assign the prior density $\pi$, which is uniform on $[0, 1]$. Now suppose we are interested primarily in making inferences about $\psi = \theta^2$.
(a) Determine the prior density of $\psi$. Is this distribution uniform?
(b) Does complete ignorance about the value of a parameter imply complete ignorance about a function of the parameter? Explain.

COMPUTER EXERCISES
7.1.13 In Example 7.1.2, when $n = 20$ and with $\bar{x}$ and the hyperparameters as specified there, generate a sample of $10^4$ (or as large as is feasible) from the posterior distribution of $\mu$ and estimate the posterior probability that the coefficient of variation $\sigma_0/\mu$ is greater than 0.125. Estimate the error in your approximation.
7.1.14 In Example 7.1.2, under the same specifications, generate a sample of $10^4$ (or as large as is feasible) from the posterior distribution of $\mu$ and estimate the posterior expectation of the coefficient of variation $\sigma_0/\mu$. Estimate the error in your approximation.
7.1.15 In Example 7.1.1, plot the prior and posterior densities on the same graph and compare them when $\alpha = \beta = 3$, $n = 30$, and $\bar{x} = 0.73$. (Hint: Calculate the logarithm of the posterior density and then exponentiate this. You will need the log-gamma function, defined by $\ln \Gamma(\alpha)$.)

PROBLEMS
7.1.16 Suppose the prior of a real-valued parameter $\mu$ is given by the $N(\mu_0, \tau_0^2)$ distribution. Show that this distribution does not converge to a probability distribution as $\tau_0^2 \to \infty$. (Hint: Consider the limits of the distribution functions.)
7.1.17 Suppose that $x_1, \ldots, x_n$ is a sample from $f_\theta$ and that we have a prior $\pi$. Show that if we observe a further sample $x_{n+1}, \ldots, x_{n+m}$, then the posterior obtained by using the posterior $\pi(\cdot \mid x_1, \ldots, x_n)$ as a prior and then conditioning on $x_{n+1}, \ldots, x_{n+m}$ is the same as the posterior obtained by using the prior $\pi$ and conditioning on $x_1, \ldots, x_{n+m}$. This is the Bayesian updating property.
7.1.18 In Example 7.1.1, determine $m(x)$. If you were asked to generate a value from this distribution, how would you do it? (Hint: For the generation part, use the theorem of total probability.)
7.1.19 Prove that the posterior distribution depends on the data only through the value of a sufficient statistic.

COMPUTER PROBLEMS
7.1.20 For the data of Exercise 7.1.7, plot the prior and posterior densities of $\sigma^2$ over $(0, 10)$ on the same graph and compare them. (Hint: Evaluate the logarithms of the densities first and then plot the exponential of these values.)
7.1.21 In Example 7.1.4, for given values of $\bar{x}$, $s^2$, and the hyperparameters $\mu_0$, $\tau_0^2$, $\alpha_0$, and $\beta_0$, generate a sample of $10^4$ (or as large as is feasible) from the posterior distribution of $(\mu, \sigma^2)$ and estimate the posterior expectation of $\sigma^2$. Estimate the error in your approximation.
7.1.22 In Example 7.1.4, under the same specifications as in Problem 7.1.21, generate a sample of $10^4$ (or as large as is feasible) from the posterior distribution of $(\mu, \sigma^2)$ and estimate the posterior probability that $\mu$ exceeds a specified value. Estimate the error in your approximation.

DISCUSSION TOPICS
7.1.23 One of the objections raised concerning Bayesian inference methodology is that it is subjective in nature. Comment on this, and on the role of subjectivity in scientific investigations.
7.1.24 Two statisticians are asked to analyze a data set $x$ produced by a system under study. Statistician I chooses to use a sampling model $\{f_\theta : \theta \in \Omega\}$ and prior $\pi_{\mathrm{I}}$, while statistician II chooses to use a sampling model $\{g_\theta : \theta \in \Omega\}$ and prior $\pi_{\mathrm{II}}$. Comment on the fact that these ingredients can be completely different, and so the subsequent analyses can be completely different. What is the relevance of this for the role of subjectivity in scientific analyses of data?
7.2 Inferences Based on the Posterior
In
Section 7.1, we determined the posterior distribution of $\theta$ as a fundamental object of Bayesian inference. In essence, the principle of conditional probability asserts that the posterior distribution $\pi(\theta \mid s)$ contains all the relevant information in the sampling model $\{f_\theta : \theta \in \Omega\}$, the prior $\pi$, and the data $s$ about the unknown true value of $\theta$. While this is a major step forward, it does not completely tell us how to make the types of inferences we discussed in Section 5.5.3. In particular, we must specify how to compute estimates and credible regions and how to carry out hypothesis assessment, which is what we will do in this section. It turns out that there are often several plausible ways of proceeding, but they all have the common characteristic that they are based on the posterior.

In general, we are interested in specifying inferences about a real-valued characteristic of interest $\psi = \Psi(\theta)$. One of the great advantages of the Bayesian approach is that inferences about $\psi$ are determined in the same way as inferences about the full parameter $\theta$, but with the marginal posterior distribution for $\psi$ replacing the full posterior. This situation can be compared with the likelihood methods of Chapter 6, where it is not always entirely clear how we should proceed to determine inferences about $\psi$ based upon the likelihood. Still, we have paid a price for this in requiring the addition of another model ingredient, namely, the prior.

So we need to determine the posterior distribution of $\psi$. This can be a difficult task in general, even if we have a closed-form expression for the posterior distribution of $\theta$. When the posterior distribution of $\theta$ is discrete, the posterior probability function of $\psi$ is given by
$$\omega(\psi_0 \mid s) = \sum_{\theta\,:\,\Psi(\theta) = \psi_0} \pi(\theta \mid s).$$
When the posterior distribution of $\theta$ is absolutely continuous, we can often find a complementing function $\lambda = \Lambda(\theta)$ such that $(\Psi, \Lambda)$ is 1–1, and such that the methods of Section 2.9.2 can be applied.
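In the discrete case, the posterior probability function of $\psi$ is computed by direct summation over $\{\theta : \Psi(\theta) = \psi_0\}$, as just described. A minimal sketch; the prior, likelihoods, and characteristic of interest below are hypothetical illustrations, not taken from the text:

```python
# Marginal posterior of psi = Psi(theta) for a discrete parameter space,
# obtained by summing posterior probabilities over {theta : Psi(theta) = psi0}.
# The prior, likelihood values, and Psi below are hypothetical.

prior = {1: 0.2, 2: 0.4, 3: 0.4}            # pi(theta)
likelihood = {1: 0.5, 2: 1 / 3, 3: 0.75}    # f_theta(s) at the observed s

m_s = sum(prior[t] * likelihood[t] for t in prior)         # prior predictive m(s)
posterior = {t: prior[t] * likelihood[t] / m_s for t in prior}

Psi = {1: 0, 2: 0, 3: 1}                    # a non-1-1 characteristic of interest
omega = {}
for t, p in posterior.items():
    omega[Psi[t]] = omega.get(Psi[t], 0.0) + p             # omega(psi | s)

print(omega)  # approximately {0: 0.4375, 1: 0.5625}
```

The same pattern applies for any finite parameter space: compute the posterior once, then aggregate it along the level sets of $\Psi$.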
Then, denoting the inverse of this transformation by $\theta = h(\psi, \lambda)$, the methods of Section 2.9.2 show that the marginal posterior density of $\psi$ is given by
$$\omega(\psi \mid s) = \int \pi(h(\psi, \lambda) \mid s)\, |J(h(\psi, \lambda))|^{-1}\, d\lambda, \qquad (7.2.1)$$
where $J$ denotes the Jacobian derivative of this transformation (see Problem 7.2.35). Evaluating (7.2.1) can be difficult, and we will generally avoid doing so here. An example illustrates how we can sometimes avoid directly implementing (7.2.1) and still obtain the marginal posterior distribution of $\psi$.

EXAMPLE 7.2.1 Location-Scale Normal Model
Suppose that $x_1, \ldots, x_n$ is a sample from an $N(\mu, \sigma^2)$ distribution, where $\mu \in R^1$ and $\sigma > 0$ are unknown, and we use the prior given in Example 7.1.4. The posterior distribution of $(\mu, \sigma^2)$ is then given by (7.1.5) and (7.1.6).

Suppose we are primarily interested in $\psi = \sigma^2$. We see immediately that the marginal posterior of $\sigma^2$ is prescribed by (7.1.6), and thus we have no further work to do, unless we want a closed form for the marginal posterior density of $\sigma^2$. We can use the methods of Section 2.6 for this (see Exercise 7.2.4).

If we want the marginal posterior distribution of $\psi = \mu$, then things are not quite so simple, because (7.1.5) prescribes only the conditional posterior distribution of $\mu$ given $\sigma^2$. We can, however, avoid the necessity to implement (7.2.1). Note that (7.1.5) implies that
$$Z = \left(n + \frac{1}{\tau_0^2}\right)^{1/2} \frac{\mu - \mu_x}{\sigma} \,\Big|\, \sigma^2, x_1, \ldots, x_n \sim N(0, 1),$$
where $\mu_x$ is given in (7.1.7). Because this distribution does not involve $\sigma^2$, the posterior distribution of $Z$ is independent of the posterior distribution of $\sigma^2$. Now if $X \sim \mbox{Gamma}(\alpha, \beta)$, then $2\beta X \sim \mbox{Gamma}(\alpha, 1/2) = \chi^2(2\alpha)$ (see Problem 4.6.16 for the definition of the general chi-squared distribution), and so, from (7.1.6),
$$Y = \frac{2\beta_x}{\sigma^2} \,\Big|\, x_1, \ldots, x_n \sim \chi^2(2\alpha_0 + n),$$
where $\beta_x$ is given in (7.1.8). Therefore (using Problem 4.6.14), as we are dividing an $N(0, 1)$ random variable by the square root of an independent $\chi^2(2\alpha_0 + n)$ random variable divided by its degrees of freedom, we conclude that the posterior distribution of
$$T = \frac{Z}{\sqrt{Y/(2\alpha_0 + n)}} = \left(n + \frac{1}{\tau_0^2}\right)^{1/2} \left(\frac{2\beta_x}{2\alpha_0 + n}\right)^{-1/2} (\mu - \mu_x)$$
is $t(2\alpha_0 + n)$. Equivalently, we can say that the posterior distribution of $\mu$ is the same as the distribution of
$$\mu_x + \left(n + \frac{1}{\tau_0^2}\right)^{-1/2} \left(\frac{2\beta_x}{2\alpha_0 + n}\right)^{1/2} T,$$
where $T \sim t(2\alpha_0 + n)$. By (7.1.9), (7.1.10), and (7.1.11), the posterior distribution of $\mu$ converges to the distribution of
$$\bar{x} + \left(\frac{(n-1)s^2}{n^2}\right)^{1/2} T,$$
where $T \sim t(n)$, as $\tau_0^2 \to \infty$ and $\alpha_0 \to 0$, $\beta_0 \to 0$.

In other cases, we cannot avoid the use of (7.2.1) if we want the marginal posterior density of $\psi$. For example, suppose we are interested in the posterior distribution of the coefficient of variation
$$\psi = \Psi(\mu, \sigma^2) = \frac{\sigma}{\mu}$$
(we exclude the line given by $\mu = 0$ from the parameter space). Then a complementing function $\lambda = \Lambda(\mu, \sigma^2)$ and its Jacobian $J$ can be obtained (see Section 7.5). If we let $\omega(\cdot \mid x_1, \ldots, x_n)$ and $\pi(\cdot, \cdot \mid x_1, \ldots, x_n)$ denote the posterior densities of $\psi$ and $(\mu, \sigma^2)$, respectively, then, from (7.2.1), the marginal density of $\psi$ is given by
$$\omega(\psi \mid x_1, \ldots, x_n) = \int_0^\infty \pi(h(\psi, \lambda) \mid x_1, \ldots, x_n)\, |J(h(\psi, \lambda))|^{-1}\, d\lambda. \qquad (7.2.2)$$
Without writing this out (see Problem 7.2.22), we note that we are left with a rather messy integral to evaluate. In some cases, integrals such as (7.2.2) can be evaluated in closed form; in other cases, they cannot.

While it is convenient to have a closed form for a density, often this is not necessary, as we can use Monte Carlo methods to approximate posterior probabilities and expectations of interest. We will return to this in Section 7.3. We should always remember that our goal, in implementing Bayesian inference methods, is not to find the marginal posterior densities of quantities of interest, but rather to have a computational algorithm that allows us to implement our inferences.

Under fairly weak conditions, it can be shown that the posterior distribution of $\theta$ converges, as the sample size increases, to a distribution degenerate at the true value. This is very satisfying, as it indicates that Bayesian inference methods are consistent.

7.2.1 Estimation
Suppose now that we want to calculate an estimate of a characteristic of interest $\psi = \Psi(\theta)$. We base this on the posterior distribution of this quantity. There are several different approaches to this problem.

Perhaps the most natural estimate is to obtain the posterior density (or probability function when relevant) of $\psi$ and use the posterior mode $\hat{\psi}(s)$, i.e., the point where the posterior probability or density function of $\psi$ takes its maximum. In the discrete case, this is the value of $\psi$ with the greatest posterior probability; in the continuous case, it is the value that has the greatest amount of posterior probability in short intervals containing it.
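As noted above, Monte Carlo methods can stand in for evaluating integrals such as (7.2.2): sample $(\mu, \sigma^2)$ from the posterior of Example 7.1.4 by composition and transform the draws into draws of the coefficient of variation. A sketch in Python; the data and hyperparameter values are invented for illustration:

```python
import numpy as np

# Monte Carlo approximation of the posterior of the coefficient of
# variation psi = sigma/mu (Example 7.2.1), avoiding the integral (7.2.2):
# draw (mu, sigma^2) by the method of composition, then transform.
# The data and hyperparameter values below are hypothetical.
rng = np.random.default_rng(1)
x = rng.normal(5.0, 1.5, size=40)
mu0, tau0sq, alpha0, beta0 = 4.0, 2.0, 2.0, 1.0

n, xbar, s2 = len(x), x.mean(), x.var(ddof=1)
mu_x = (mu0 / tau0sq + n * xbar) / (1 / tau0sq + n)                  # (7.1.7)
beta_x = (beta0 + 0.5 * (n - 1) * s2
          + 0.5 * n * (xbar - mu0) ** 2 / (1 + n * tau0sq))          # (7.1.8)

# (7.1.6): Gamma(alpha0 + n/2, beta_x) with rate beta_x, so scale = 1/beta_x.
inv_sigma2 = rng.gamma(alpha0 + n / 2, 1 / beta_x, size=100_000)
sigma2 = 1 / inv_sigma2
mu = rng.normal(mu_x, np.sqrt(sigma2 / (n + 1 / tau0sq)))            # (7.1.5)

psi = np.sqrt(sigma2) / mu
print(psi.mean(), (psi > 0.5).mean())  # posterior mean and a tail probability
```

Any posterior probability or expectation for $\psi$ is then just an average over the transformed draws, which is the computational algorithm the text recommends in place of closed-form densities.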
To calculate the posterior mode, we need to maximize $\omega(\psi \mid s)$ as a function of $\psi$. Note that it is equivalent to maximize $m(s)\,\omega(\psi \mid s)$, so that we do not need to compute the inverse normalizing constant to implement this. In fact, we can conveniently choose to maximize any function that is a 1–1 increasing function of $\omega(\psi \mid s)$ and get the same answer. In general, $\omega(\psi \mid s)$ may not have a unique mode, but
typically there is only one.

An alternative estimate, which is commonly used and has a natural interpretation, is the posterior mean
$$E(\psi \mid s),$$
whenever this exists. When the posterior distribution of $\psi$ is symmetrical about its mode, and the expectation exists, then the posterior expectation is the same as the posterior mode; otherwise, these estimates will be different. If we want the estimate to reflect where the central mass of posterior probability lies, then in cases where $\omega(\psi \mid s)$ is highly skewed, perhaps the mode is a better choice than the mean. We will see in Chapter 8, however, that there are other ways of justifying the posterior mean as an estimate.

We now consider some examples.

EXAMPLE 7.2.2 Bernoulli Model
Suppose we observe a sample $x_1, \ldots, x_n$ from the $\mbox{Bernoulli}(\theta)$ distribution with $\theta \in [0, 1]$ unknown, and we place a $\mbox{Beta}(\alpha, \beta)$ prior on $\theta$. In Example 7.1.1, we determined the posterior distribution of $\theta$ to be $\mbox{Beta}(n\bar{x} + \alpha,\, n(1 - \bar{x}) + \beta)$. Let us suppose that the characteristic of interest is $\psi = \theta$.

The posterior expectation of $\theta$ is given by
$$E(\theta \mid x_1, \ldots, x_n) = \int_0^1 \theta\, \frac{\Gamma(n + \alpha + \beta)}{\Gamma(n\bar{x} + \alpha)\,\Gamma(n(1-\bar{x}) + \beta)}\, \theta^{n\bar{x} + \alpha - 1} (1 - \theta)^{n(1-\bar{x}) + \beta - 1}\, d\theta$$
$$= \frac{\Gamma(n + \alpha + \beta)}{\Gamma(n\bar{x} + \alpha)} \cdot \frac{\Gamma(n\bar{x} + \alpha + 1)}{\Gamma(n + \alpha + \beta + 1)} = \frac{n\bar{x} + \alpha}{n + \alpha + \beta}.$$
When we have a uniform prior, i.e., $\alpha = \beta = 1$, the posterior expectation is given by
$$E(\theta \mid x_1, \ldots, x_n) = \frac{n\bar{x} + 1}{n + 2}.$$
To determine the posterior mode, we need to maximize
$$\ln\left(\theta^{n\bar{x} + \alpha - 1}(1 - \theta)^{n(1-\bar{x}) + \beta - 1}\right) = (n\bar{x} + \alpha - 1)\ln\theta + (n(1 - \bar{x}) + \beta - 1)\ln(1 - \theta).$$
This function has first derivative
$$\frac{n\bar{x} + \alpha - 1}{\theta} - \frac{n(1 - \bar{x}) + \beta - 1}{1 - \theta}$$
and second derivative
$$-\frac{n\bar{x} + \alpha - 1}{\theta^2} - \frac{n(1 - \bar{x}) + \beta - 1}{(1 - \theta)^2}.$$
Setting the first derivative equal to 0 and solving gives the solution
$$\hat{\theta} = \frac{n\bar{x} + \alpha - 1}{n + \alpha + \beta - 2}.$$
Now, if $\alpha \geq 1$ and $\beta \geq 1$, we see that the second derivative is always negative, and so $\hat{\theta}$ is the unique posterior mode. The restriction $\alpha \geq 1$, $\beta \geq 1$ on the choice of prior implies that the prior has a mode in $(0, 1)$ rather than at 0 or 1. Note that when $\alpha = \beta = 1$, namely, when we put a uniform prior on $\theta$, the posterior mode is $\hat{\theta} = \bar{x}$. This is the same as the maximum likelihood estimate (MLE).

The posterior is highly skewed whenever $n\bar{x} + \alpha$ and $n(1 - \bar{x}) + \beta$ are far apart (plot Beta densities to see this). Thus, in such a case, we might consider the posterior mode as a more sensible estimate of $\theta$. Note that when $n$ is large, the mode and the mean will be very close together and, in fact, very close to the MLE $\bar{x}$.

EXAMPLE 7.2.3 Location Normal Model
Suppose that $x_1, \ldots, x_n$ is a sample from an $N(\mu, \sigma_0^2)$ distribution, where $\mu \in R^1$ is unknown and $\sigma_0^2$ is known, and we take the prior distribution on $\mu$ to be $N(\mu_0, \tau_0^2)$. Let us suppose that the characteristic of interest is $\psi = \mu$.
the probability that It can be shown (see Problem 7.2.25) that, if 1 1 k 1 is distributed Dirichlet 1 2 k then i is distributed where distribution of i 2 1
1 is Dirichlet i i Beta i i k i This result implies that the marginal posterior Beta x1 1 x2 xk 2 k. 390 Section 7.2: Inferences Based on the Posterior Then, assuming that each i 1 and using the argument in Example 7.2.2 and x1 xk n, the marginal posterior mode of 1 is 1 n x1 2 When the prior is the uniform, namely, 1 1 n k k 1 then 1 1 1 x1 k 2 As in Example 7.2.2, we compute the posterior expectation to be E 1 x x1 1 n 1 k The posterior distribution is highly skewed whenever x1 1 and x2 xk 2 k are far apart. From Problem 7.2.26, we have that the plug­in MLE of 1 is x1 n When n is large, the Bayesian estimates are close to this value, so there is no conict between the estimates. Notice, however, that when the prior is uniform, then 1 k, hence the plug­in MLE and the Bayesian estimates will be quite different when k is large relative to n. In fact, the posterior mode will always be smaller than the plug­in MLE when k 0 This is a situation in which the Bayesian and frequentist approaches to inference differ. 2 and x1 k At this point, the decision about which estimate to use is left with the practitioner, as theory does not seem to provide a clear answer. We can be comforted by the fact that the estimates will not differ by much in many contexts of practical importance. EXAMPLE 7.2.5 Location­Scale Normal Model Suppose that x1 xn is a sample from an N and that the characteristic of interest is R1 0 are unknown, and we use the prior given in Example 7.1.4. Let us suppose 2 distribution, where 2. In Example 7.2.1, we derived the marginal posterior distribution of to be the same as the distribution of where T t n 2 0. This is a t n 2 0 distribution relocated to have its mode at x and rescaled by the factor So the marginal posterior mode of is x n 1 1 2 0 nx 0 2 0 Chapter 7: Bayesian Inference 391, provided that n Because a t distribution is symmetric about its mode, this is also the posterior mean of 1 (see x is a 1 as a t Problem 4.6.16) This
will always be the case as the sample size n weighted average of the prior mean 0 and the sample average x distribution has a mean only when 1 Again, 2 0 The marginal posterior mode and expectation can also be obtained for 2 These computations are left to the reader (see Exercise 7.2.4). 2 One issue that we have not yet addressed is how we will assess the accuracy of Bayesian estimates. Naturally, this is based on the posterior distribution and how con­ centrated it is about the estimate being used. In the case of the posterior mean, this means that we compute the posterior variance as a measure of spread for the posterior distribution of about its mean. For the posterior mode, we will discuss this issue further in Section 7.2.3. EXAMPLE 7.2.6 Posterior Variances In Example 7.2.2, the posterior variance of is given by (see Exercise 7.2.6) nx n x n 1 2 n 1 Notice that the posterior variance converges to 0 as n In Example 7.2.3, the posterior variance is given by 1 the posterior variance converges to 0 as 2 variance of x, as 0 0 and converges to 2 2 0 n 2 1. Notice that 0 0 n the sampling 2 0 In Example 7.2.4, the posterior variance of 1 is given by (see Exercise 7.2.7) x1 1 x2 xk Notice that the posterior variance converges to 0 as n In Example 7.2.5, the posterior variance of is given by (see Problem 7.2.28 provided n 2 0 2 because the variance of a t distribution is 2 when 2 (see Problem 4.6.16). Notice that the posterior variance goes to 0 as n 7.2.2 Credible Intervals A credible interval, for a real­valued parameter that we believe will contain the true value of we specify a probability and then find an interval C s satisfying is an interval C s [l s u s ] As with the sampling theory approach7.2.3) We then refer to C s as a ­credible interval for 392 Section 7.2: Inferences Based on the Posterior Naturally, we try to find a s is as possible, and such that C s is as short as possible. 
This leads to the as close to consideration of highest posterior density (HPD) intervals, which are of the form ­credible interval
C s so that C s C s : s c, s is the marginal posterior density of where and where c is chosen as large as possible so that (7.2.3) is satisfied. In Figure 7.2.1, we have plotted an example of an HPD interval for a given value of c   | s) c [ l(s) ] u(s)  Figure 7.2.1: An HPD interval C s [l s u s ] : s c Clearly, C s contains the mode whenever c max length of an HPD interval as a measure of the accuracy of the mode of estimator of. The length of a 0 95­credible interval for purpose as the margin of error does with confidence intervals. Consider now some applications of the concept of credible interval. s. We can take the s as an will serve the same EXAMPLE 7.2.7 Location Normal Model Suppose that x1 unknown and 2 Example 7.1.2, we showed that the posterior distribution of 0 is known, and we take the prior distribution on xn is a sample from an N 2 0 distribution, where to be N 0 is given by the R1 is 2 0. In distribution. Since this distribution is symmetric about its mode (also mean) est ­HPD interval is of the form, a short­ 1 2 c, 1 2 0 n 2 0 Chapter 7: Bayesian Inference 393 where c is such that Since x1 xn c x1 xn x1 xn we have function (cdf). This immediately implies that c given by c, where is the standard normal cumulative distribution ­HPD interval is 2 and the Note that as 2 0 interval converges to the interval namely, as the prior becomes increasingly diffuse, this x z 1 0 n 2 which is also the a diffuse normal prior, the Bayesian and frequentist approaches agree. ­confidence interval derived in Chapter 6 for this problem. So under EXAMPLE 7.2.8 Location­Scale Normal Model Suppose that x1 xn is a sample from an N and 7.2.1, we derived the marginal posterior distribution of R1 0 are unknown, and we use the prior given in Example 7.1.4. In Example 2 distribution, where to be the same as where T t 2 0 ­HPD interval is of
the form
$$\mu_x \pm c \left(n + \frac{1}{\tau_0^2}\right)^{-1/2} \left(\frac{2\beta_x}{2\alpha_0 + n}\right)^{1/2},$$
where $c$ satisfies
$$\gamma = \Pi(\mu \in C(x_1, \ldots, x_n) \mid x_1, \ldots, x_n) = G_{2\alpha_0+n}(c) - G_{2\alpha_0+n}(-c).$$
Here, $G_{2\alpha_0+n}$ is the $t(2\alpha_0 + n)$ cdf, and therefore $c = t_{(1+\gamma)/2}(2\alpha_0 + n)$. Using (7.1.9), (7.1.10), and (7.1.11), we have that this interval converges to the interval
$$\bar{x} \pm t_{(1+\gamma)/2}(n) \left(\frac{(n-1)s^2}{n^2}\right)^{1/2}$$
as $\tau_0^2 \to \infty$ and $\alpha_0 \to 0$, $\beta_0 \to 0$. Note that this is a little different from the $\gamma$-confidence interval we obtained for $\mu$ in Example 6.3.8, but for large $n$ they are virtually identical.

In the examples we have considered so far, we could obtain closed-form expressions for the HPD intervals. In general, this is not the case. In such situations, we have to resort to numerical methods to obtain the HPD intervals, but we do not pursue this topic further here.

There are other methods of deriving credible intervals. For example, a common method of obtaining a $\gamma$-credible interval for $\psi$ is to take the interval $[\psi_l, \psi_r]$, where $\psi_l$ is a $(1-\gamma)/2$ quantile and $\psi_r$ is a $1 - (1-\gamma)/2$ quantile for the posterior distribution of $\psi$. Alternatively, we could form one-sided intervals. These credible intervals avoid the more extensive computations that may be needed for HPD intervals.

7.2.3 Hypothesis Testing and Bayes Factors
Suppose now that we want to assess the evidence in the observed data concerning the hypothesis $H_0 : \Psi(\theta) = \psi_0$. It seems clear how we should assess this, namely, compute the posterior probability
$$\Pi(\Psi(\theta) = \psi_0 \mid s). \qquad (7.2.4)$$
If this is small, then we conclude that we have evidence against $H_0$. We will see further justification for this approach in Chapter 8.

EXAMPLE 7.2.9
Suppose we want to assess the evidence concerning whether or not $\theta \in A$. If we let $\Psi = I_A$, then we are assessing the hypothesis $H_0 : \Psi(\theta) = 1$. So in this case, we simply compute the posterior probability that $\theta \in A$,
$$\Pi(\Psi(\theta) = 1 \mid s) = \Pi(A \mid s).$$

There can be a problem, however, with using (7.2.4) to assess a hypothesis. For when the prior distribution of $\psi = \Psi(\theta)$ is absolutely continuous, then $\Pi(\Psi(\theta) = \psi_0 \mid s) = 0$ for all data $s$. Therefore, we would always find evidence against $H_0$, no matter what is observed, which does not make sense. In general, if the value $\psi_0$ is assigned small prior probability, then it can happen that this value also has a small posterior probability, no matter what data are observed.

To avoid this problem, there is an alternative approach to hypothesis assessment that is sometimes used. Recall that, if $\Pi(\Psi(\theta) = \psi_0 \mid s)$ is small, then this is evidence that $H_0$ is false. The value $\psi_0$ is surprising whenever it occurs in a region of low probability for the posterior distribution of $\psi$. A region of low probability will correspond to a region where the posterior density $\omega(\psi \mid s)$ is relatively low. So, one possible method of assessment is to compute the (Bayesian) P-value
$$\Pi\left(\omega(\Psi(\theta) \mid s) \leq \omega(\psi_0 \mid s) \,\middle|\, s\right). \qquad (7.2.5)$$
Note that when $\omega(\psi \mid s)$ is unimodal, (7.2.5) corresponds to computing a tail probability.

If the probability (7.2.5) is small, then $\psi_0$ is surprising, at least with respect to our posterior beliefs. When we decide to reject $H_0$ whenever the P-value is less than $1 - \gamma$, this approach is equivalent to computing a $\gamma$-HPD region for $\psi$ and rejecting $H_0$ whenever $\psi_0$ is not in the region.

EXAMPLE 7.2.10 (Example 7.2.9 continued)
Applying the P-value approach to this problem, we see that $\psi = I_A(\theta)$ has posterior given by the $\mbox{Bernoulli}(\Pi(A \mid s))$ distribution. Therefore, $\omega(\cdot \mid s)$ is defined by $\omega(1 \mid s) = \Pi(A \mid s)$ and $\omega(0 \mid s) = \Pi(A^c \mid s) = 1 - \Pi(A \mid s)$. Now $\psi_0 = 1$, so if $\Pi(A \mid s) \leq \Pi(A^c \mid s)$, then $\{\psi : \omega(\psi \mid s) \leq \omega(1 \mid s)\} = \{1\}$ and (7.2.5) equals $\Pi(A \mid s)$; otherwise, (7.2.5) equals 1. So again we have evidence against $H_0$ whenever $\Pi(A \mid s)$ is small.

We see from Examples 7.2.9 and 7.2.10 that computing the P-value (7.2.5) is essentially equivalent to using (7.2.4) whenever the marginal parameter $\psi$ takes only two values. This is not the case whenever $\psi$ takes more than two values, however, and the statistician has to decide which method is more appropriate in such a context. As previously noted, when the prior distribution of $\psi$ is absolutely continuous, then (7.2.4) is always 0, no matter what data are observed.
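For a continuous posterior, (7.2.5) is straightforward to approximate by Monte Carlo: draw from the posterior of $\psi$ and count how often the posterior density at a draw is no larger than at $\psi_0$. A sketch for a Beta posterior; the posterior parameters and hypothesized value are illustrative:

```python
import numpy as np
from math import lgamma

# Monte Carlo approximation of the Bayesian P-value (7.2.5) when the
# posterior of psi is Beta(a, b); the numbers here are illustrative.
rng = np.random.default_rng(3)
a, b, psi0 = 8.0, 4.0, 0.3

def log_beta_pdf(p, a, b):
    # log density of a Beta(a, b) distribution at p (p may be an array)
    return (lgamma(a + b) - lgamma(a) - lgamma(b)
            + (a - 1) * np.log(p) + (b - 1) * np.log(1 - p))

draws = rng.beta(a, b, size=200_000)
pvalue = np.mean(log_beta_pdf(draws, a, b) <= log_beta_pdf(psi0, a, b))
print(pvalue)  # small values indicate psi0 is a surprising value
```

Working on the log scale avoids overflow in the normalizing constant, and comparing log densities gives the same indicator as comparing densities directly.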
As the following example illustrates, there is also a difficulty with using (7.2.5) in such a situation
.

EXAMPLE 7.2.11
Suppose that the posterior distribution of $\theta$ is $\mbox{Beta}(2, 1)$, i.e., $\omega(\theta \mid s) = 2\theta$, and we want to assess $H_0 : \theta = 3/4$. Then $\omega(\theta \mid s) \leq \omega(3/4 \mid s)$ if and only if $\theta \leq 3/4$, and (7.2.5) is given by
$$\int_0^{3/4} 2\theta\, d\theta = \frac{9}{16}.$$
On the other hand, suppose we make the 1–1 transformation to $\psi = \theta^2$, so that the hypothesis is now $H_0 : \psi = 9/16$. The posterior distribution of $\psi$ is $\mbox{Beta}(1, 1)$, i.e., uniform. Since the posterior density of $\psi$ is constant, the posterior density at every possible value is less than or equal to the posterior density evaluated at $9/16$. Therefore, (7.2.5) equals 1, and we would never find evidence against $H_0$ using this parameterization.

This example shows that our assessment of $H_0$ via (7.2.5) depends on the parameterization used, which does not seem appropriate.

The difficulty in using (7.2.5), as demonstrated in Example 7.2.11, only occurs with continuous posterior distributions. So, to avoid this problem, it is often recommended that the hypothesis to be tested always be assigned a positive prior probability. As demonstrated in Example 7.2.10, the approach via (7.2.5) is then essentially equivalent to using (7.2.4) to assess $H_0$. In problems where it seems natural to use continuous priors, this is accomplished by taking the prior $\Pi$ to be a mixture of probability distributions, as discussed in Section 2.5.4, namely, the prior distribution equals
$$\Pi = p\,\Pi_1 + (1 - p)\,\Pi_2,$$
where $\Pi_1$ is degenerate at $\psi_0$, $\Pi_2$ is continuous at $\psi_0$, and $p = \Pi(\Psi(\theta) = \psi_0)$, i.e., $p$ is the prior probability that $H_0$ is true.

The prior predictive for the data $s$ is then given by
$$m(s) = p\, m_1(s) + (1 - p)\, m_2(s),$$
where $m_i$ is the prior predictive obtained via the prior $\Pi_i$ (see Problem 7.2.34). This implies (see Problem 7.2.34) that the posterior probability measure for $\psi$, when using the prior $\Pi$, is
$$\Pi(A \mid s) = \frac{p\, m_1(s)}{p\, m_1(s) + (1 - p)\, m_2(s)}\, \Pi_1(A \mid s) + \frac{(1 - p)\, m_2(s)}{p\, m_1(s) + (1 - p)\, m_2(s)}\, \Pi_2(A \mid s), \qquad (7.2.6)$$
where $\Pi_i(\cdot \mid s)$ is the posterior probability measure obtained via the prior $\Pi_i$. Note that this is a mixture of the posterior probability measures $\Pi_1(\cdot \mid s)$ and $\Pi_2(\cdot \mid s)$, with mixture probabilities
$$\frac{p\, m_1(s)}{p\, m_1(s) + (1 - p)\, m_2(s)} \quad \mbox{and} \quad \frac{(1 - p)\, m_2(s)}{p\, m_1(s) + (1 - p)\, m_2(s)}.$$

Now $\Pi_1(\cdot \mid s)$ is degenerate at $\psi_0$ (if the prior is degenerate at a point, then the posterior must be degenerate at that point too), and $\Pi_2(\cdot \mid s)$ is continuous at $\psi_0$. Therefore,
$$\Pi(\Psi(\theta) = \psi_0 \mid s) = \frac{p\, m_1(s)}{p\, m_1(s) + (1 - p)\, m_2(s)}, \qquad (7.2.7)$$
and we use this probability to assess $H_0$. The following example illustrates this approach.

EXAMPLE 7.2.12 Location Normal Model
Suppose that $x_1, \ldots, x_n$ is a sample from an $N(\mu, \sigma_0^2)$ distribution, where $\mu \in R^1$ is unknown and $\sigma_0^2$ is known, and we want to assess the hypothesis $H_0 : \mu = \mu_0$. As in Example 7.1.2, we will take the prior for $\mu$ to be an $N(\mu_0, \tau_0^2)$ distribution. Given that we are assessing whether or not $\mu = \mu_0$, it seems reasonable to place the mode of the prior at the hypothesized value. The choice of the hyperparameter $\tau_0^2$ then reflects the degree of our prior belief that $H_0$ is true. We let $\Pi_2$ denote this prior probability measure, i.e., $\Pi_2$ is the $N(\mu_0, \tau_0^2)$ measure.

If we use $\Pi_2$ as our prior, then, as shown in Example 7.1.2, the posterior distribution of $\mu$ is absolutely continuous. This implies that (7.2.4) is 0. So, following the preceding discussion, we consider instead the prior $\Pi = p\,\Pi_1 + (1 - p)\,\Pi_2$, obtained by mixing $\Pi_2$ with a probability measure $\Pi_1$ degenerate at $\mu_0$; here $p$ is the prior probability that $H_0$ is true. As shown in Example 7.1.2, under $\Pi_2$ the posterior distribution of $\mu$ is the
$$N\left(\left(\frac{1}{\tau_0^2} + \frac{n}{\sigma_0^2}\right)^{-1}\left(\frac{\mu_0}{\tau_0^2} + \frac{n\bar{x}}{\sigma_0^2}\right),\ \left(\frac{1}{\tau_0^2} + \frac{n}{\sigma_0^2}\right)^{-1}\right)$$
probability measure, while the posterior under $\Pi_1$ is the distribution degenerate at $\mu_0$. We now need to evaluate (7.2.7), and we will do this in Example 7.2.13.

Bayes Factors
Bayes factors comprise another method of hypothesis assessment and are defined in terms of odds.

Definition 7.2.1 In a probability model with sample space $S$ and probability measure $P$, the odds in favor of an event $A \subset S$ is defined to be $P(A)/P(A^c)$, namely, the ratio of the probability of $A$ to the probability of $A^c$.

Obviously, large values of the odds in favor of $A$ indicate a strong belief that $A$ is true. Odds represent another way of presenting probabilities that is convenient in certain contexts, e.g., horse racing.

Bayes factors compare posterior odds with prior odds.

Definition 7.2.2 The Bayes factor $BF_{H_0}$ in favor of the hypothesis $H_0 : \Psi(\theta) = \psi_0$ is defined, whenever the prior probability of $H_0$ is not 0 or 1, to be the ratio of the posterior odds in favor of $H_0$ to the prior odds in favor of $H_0$, or
$$BF_{H_0} = \frac{\Pi(\Psi(\theta) = \psi_0 \mid s)}{1 - \Pi(\Psi(\theta) = \psi_0 \mid s)} \Big/ \frac{\Pi(\Psi(\theta) = \psi_0)}{1 - \Pi(\Psi(\theta) = \psi_0)}. \qquad (7.2.8)$$

So the Bayes factor in favor of $H_0$ measures the degree to which the data have changed the odds in favor of the hypothesis. If $BF_{H_0}$ is small, then the data are providing evidence against $H_0$; they provide evidence in favor of $H_0$ when $BF_{H_0}$ is large.

There is a relationship between the posterior probability of $H_0$ being true and $BF_{H_0}$. From (7.2.8), we obtain
$$\Pi(\Psi(\theta) = \psi_0 \mid s) = \frac{r\, BF_{H_0}}{1 + r\, BF_{H_0}}, \qquad (7.2.9)$$
where $r = \Pi(\Psi(\theta) = \psi_0)/(1 - \Pi(\Psi(\theta) = \psi_0))$ is the prior odds in favor of $H_0$. So, when $BF_{H_0}$ is small, then $\Pi(\Psi(\theta) = \psi_0 \mid s)$ is small, and conversely.

One reason for using Bayes factors to assess hypotheses is the following result, which establishes a connection with likelihood ratios.

Theorem 7.2.1 If the prior is the mixture $\Pi = p\,\Pi_1 + (1 - p)\,\Pi_2$, where $\Pi_1(A) = 1$ and $\Pi_2(A^c) = 1$, and we want to assess the hypothesis $H_0 : \theta \in A$, then
$$BF_{H_0} = \frac{m_1(s)}{m_2(s)},$$
where $m_i$ is the prior predictive of the data under $\Pi_i$.

PROOF Recall that, if a prior concentrates all of its probability on a set, then the posterior concentrates all of its probability on this set, too. Then, using (7.2.6), we have
$$BF_{H_0} = \frac{\Pi(A \mid s)}{1 - \Pi(A \mid s)} \Big/ \frac{p}{1 - p} = \frac{p\, m_1(s)}{(1 - p)\, m_2(s)} \cdot \frac{1 - p}{p} = \frac{m_1(s)}{m_2(s)}.$$

Interestingly, Theorem 7.2.1 indicates that the Bayes factor is independent of $p$. We note, however, that it is not immediately clear how to interpret the value of $BF_{H_0}$. In particular, how large does $BF_{H_0}$ have to be to provide strong evidence in favor of $H_0$? One approach to this problem
is to use (7.2.9), as this gives the posterior probability of H₀, which is directly interpretable. So we can calibrate the Bayes factor. Note, however, that this requires the specification of p.

EXAMPLE 7.2.13 Location Normal Model (Example 7.2.12 continued)
We now compute the prior predictive under Π₂. We have that the joint density of x₁, ..., xₙ given μ equals

(2πσ₀²)^(−n/2) exp(−(n − 1)s²/(2σ₀²)) exp(−n(x̄ − μ)²/(2σ₀²)),

Chapter 7: Bayesian Inference 399

and so

m₂(x₁, ..., xₙ) = (2πσ₀²)^(−n/2) exp(−(n − 1)s²/(2σ₀²)) (2πτ₀²)^(−1/2) ∫ exp(−n(x̄ − μ)²/(2σ₀²)) exp(−(μ − μ₀)²/(2τ₀²)) dμ.

Then, using (7.1.2) and completing the square in μ, we obtain

m₂(x₁, ..., xₙ) = (2πσ₀²)^(−n/2) exp(−(n − 1)s²/(2σ₀²)) (1 + nτ₀²/σ₀²)^(−1/2) exp(−n(x̄ − μ₀)²/(2σ₀²) + n²τ₀²(x̄ − μ₀)²/(2σ₀²(σ₀² + nτ₀²))).    (7.2.10)

Because Π₁ is degenerate at μ₀, it is immediate that the prior predictive under Π₁ is given by

m₁(x₁, ..., xₙ) = (2πσ₀²)^(−n/2) exp(−(n − 1)s²/(2σ₀²)) exp(−n(x̄ − μ₀)²/(2σ₀²)).

Therefore, cancelling the common factors, BF(H₀) = m₁(x₁, ..., xₙ)/m₂(x₁, ..., xₙ) equals exp(−n(x̄ − μ₀)²/(2σ₀²)) divided by

(1 + nτ₀²/σ₀²)^(−1/2) exp(−n(x̄ − μ₀)²/(2σ₀²) + n²τ₀²(x̄ − μ₀)²/(2σ₀²(σ₀² + nτ₀²))).

400 Section 7.2: Inferences Based on the Posterior

For example, suppose that μ₀ = 0, τ₀² = 2, σ₀² = 1, n = 10, and x̄ = 0.2. Then

exp(−n(x̄ − μ₀)²/(2σ₀²)) = exp(−10(0.2)²/2) = 0.81873,

while the denominator equals

(1 + 10(2))^(−1/2) exp(−10(0.2)²/2 + (10)²(2)(0.2)²/(2(21))) = 0.21615.

So BF(H₀) = 0.81873/0.21615 = 3.7878, which gives some evidence in favor of H₀ : μ = 0. If we suppose that p = 1/2, so that we are completely indifferent between H₀ being true and not being true, then r = 1 and (7.2.9) gives

Π({0} | x₁, ..., xₙ) = 3.7878/(1 + 3.7878) = 0.79114,

indicating a large degree of support for H₀.

7.2.4 Prediction

Prediction problems arise when we have an unobserved response value t ∈ T and an observed response s ∈ S. Furthermore, we
have the statistical model {P_θ : θ ∈ Ω} for s and the conditional statistical model {Q_θ(· | s) : θ ∈ Ω} for t given s. We assume that both models have the same true value of θ. The objective is to construct a prediction t(s) ∈ T of the unobserved value t, based on the observed data s. The value of t could be unknown simply because it represents a future outcome.

If we denote the conditional density or probability function (whichever is relevant) of t by q_θ(t | s), then the joint distribution of (θ, s, t) is given by π(θ) q_θ(t | s) f_θ(s). Then, once we have observed s (assume here that the distributions of θ and t are absolutely continuous; if not, we replace integrals by sums), the conditional density of (θ, t) given s is

π(θ) q_θ(t | s) f_θ(s) / m(s) = q_θ(t | s) π(θ | s).

Then the marginal posterior distribution of t, known as the posterior predictive of t, is

q(t | s) = ∫_Ω q_θ(t | s) π(θ | s) dθ.

Chapter 7: Bayesian Inference 401

Notice that the posterior predictive of t is obtained by averaging the conditional density of t, given (θ, s), with respect to the posterior distribution of θ. Now that we have obtained the posterior predictive distribution of t, we can use it to select an estimate of the unobserved value. Again, we could choose the posterior mode t̂, or the posterior expectation E(t | s) = ∫_T t q(t | s) dt, as our prediction, whichever is deemed most relevant.

EXAMPLE 7.2.14 Bernoulli Model
Suppose we want to predict the next independent outcome X_{n+1}, having observed a sample x₁, ..., xₙ from the Bernoulli(θ), with θ ~ Beta(α, β). Here, the future observation is independent of the observed data, given θ. The posterior predictive probability function of X_{n+1} at t is then given by

q(t | x₁, ..., xₙ) = ∫₀¹ θ^t (1 − θ)^(1−t) [Γ(n + α + β)/(Γ(nx̄ + α)Γ(n(1 − x̄) + β))] θ^(nx̄+α−1) (1 − θ)^(n(1−x̄)+β−1) dθ
= [Γ(n + α + β)/(Γ(nx̄ + α)Γ(n(1 − x̄) + β))] [Γ(nx̄ + α + t)Γ(n(1 − x̄) + β + 1 − t)/Γ(n + α + β + 1)],

which is the probability function of a Bernoulli((nx̄ + α)/(n + α + β)) distribution.

Using the posterior mode as the predictor, i.e., maximizing q(t | x₁, ..., xₙ) for t, leads to the prediction

t̂ = 1 if nx̄ + α > n(1 − x̄) + β, and t̂ = 0 otherwise.

The posterior expectation predictor is given by

E(t | x₁, ..., xₙ) = (nx̄ + α)/(n + α + β).

Note that the posterior mode takes a value in {0, 1}, and the future X_{n+1} will be in this set, too.
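The two predictors of Example 7.2.14 are easy to compute. Here is a minimal sketch in plain Python (the data and the choice of a uniform Beta(1, 1) prior are illustrative assumptions, not values from the text):

```python
def bernoulli_predictors(x, alpha, beta):
    """Posterior-predictive predictors of the next Bernoulli outcome under a
    Beta(alpha, beta) prior: the predictive distribution of X_{n+1} is
    Bernoulli((n*xbar + alpha) / (n + alpha + beta))."""
    n, s = len(x), sum(x)                  # s = n * xbar, the number of successes
    p = (s + alpha) / (n + alpha + beta)   # predictive P(X_{n+1} = 1 | data)
    mode = 1 if p > 0.5 else 0             # posterior-predictive mode, in {0, 1}
    return mode, p                         # p is also the predictive mean

# e.g., 7 successes in 10 trials with a uniform prior
mode, mean = bernoulli_predictors([1, 1, 1, 1, 1, 1, 1, 0, 0, 0], alpha=1, beta=1)
print(mode, mean)   # 1 0.6666666666666666
```

Note how the mode is forced into {0, 1} while the mean, 8/12 here, can be any value in the unit interval.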
The posterior mean, however, can be any value in [0, 1].

EXAMPLE 7.2.15 Location Normal Model
Suppose that x₁, ..., xₙ is a sample from an N(μ, σ₀²) distribution, where μ ∈ R¹ is unknown and σ₀² is known, and we use the prior given in Example 7.1.2. Suppose we want to predict a future observation X_{n+1}, but this time X_{n+1} is from the N(μ, σ₀²) distribution. So, in this case, the future observation is not independent of the observed data, although it is conditionally independent of the data given the parameter. A simple calculation (see Exercise 7.2.9) shows that

X_{n+1} | x₁, ..., xₙ ~ N( μₓ, σ₀² + (1/τ₀² + n/σ₀²)⁻¹ )    (7.2.11)

is the posterior predictive distribution of t = X_{n+1}, and so we would predict t by μₓ, as this is both the posterior predictive mode and mean.

402 Section 7.2: Inferences Based on the Posterior

We can also construct a γ-prediction region C(s) for a future value t from the model {q_θ(· | s) : θ ∈ Ω}. A γ-prediction region C(s) satisfies Q(C(s) | s) ≥ γ, where Q(· | s) is the posterior predictive measure for t. One approach to constructing C(s) is to apply the HPD concept to q(t | s). We illustrate this via several examples.

EXAMPLE 7.2.16 Bernoulli Model (Example 7.2.14 continued)
Suppose we want a γ-prediction region for a future value X_{n+1}. In Example 7.2.14, we derived the posterior predictive distribution of X_{n+1} to be Bernoulli((nx̄ + α)/(n + α + β)). Writing q = (nx̄ + α)/(n + α + β), a γ-prediction region for t, derived via the HPD concept, is given by

C(x₁, ..., xₙ) = {0, 1}   if γ > max{q, 1 − q},
C(x₁, ..., xₙ) = {1}      if γ ≤ max{q, 1 − q} and q = max{q, 1 − q},
C(x₁, ..., xₙ) = {0}      if γ ≤ max{q, 1 − q} and 1 − q = max{q, 1 − q}.

We see that this predictive region contains just the mode or encompasses all possible values for X_{n+1}. In the latter case, this is not an informative inference.

EXAMPLE 7.2.17 Location Normal Model (Example 7.2.15 continued)
Suppose we want a γ-prediction interval for a future observation X_{n+1} from the N(μ, σ₀²) distribution. As (7.2.11) is also the posterior predictive distribution of X_{n+1}, and is symmetric about μₓ, a γ-prediction interval for X_{n+1}, derived via the HPD concept, is given by

μₓ ± z_{(1+γ)/2} ( σ₀² + (1/τ₀² + n/σ₀²)⁻¹ )^(1/2).

Summary of Section 7.2

Based on the posterior distribution of a parameter, we can obtain estimates of the parameter (posterior modes or means), construct credible intervals for the parameter (HPD intervals), and assess hypotheses about the parameter (posterior probability of the hypothesis, Bayesian P-values, Bayes factors).
A new type of inference was discussed in this section, namely, prediction problems, where we are concerned with predicting an unobserved value from a sampling model.

EXERCISES

7.2.1 For the model discussed in Example 7.1.1, derive the posterior mean of θ.
7.2.2 For the model discussed in Example 7.1.2, determine the posterior distribution of the third quartile μ + σ₀z₀.₇₅. Determine the posterior mode and the posterior expectation of this quantity.
7.2.3 In Example 7.2.1, determine the posterior expectation and mode of 1/σ².
7.2.4 In Example 7.2.1, determine the posterior expectation and mode of σ². (You will need the posterior density of σ².)
7.2.5 Carry out the calculations to verify the posterior mode and posterior expectation of θ₁ in Example 7.2.4. (Hint: Use the posterior density to determine the mode.)
7.2.6 Establish that the variance of the posterior distribution of μ in Example 7.2.2 is as given in Example 7.2.6. Prove that this goes to 0 as n → ∞.
7.2.7 Establish that the variance of the posterior distribution of θ₁ in Example 7.2.4 is as given in Example 7.2.6. Prove that this goes to 0 as n → ∞.
7.2.8 In Example 7.2.14, which of the two predictors derived there do you find more sensible? Why?
7.2.9 In Example 7.2.15, prove that the posterior predictive distribution for X_{n+1} is as stated. (Hint: Write the posterior predictive density as an expectation.)
7.2.10 Suppose that x₁, ..., xₙ is a sample from the Exponential(θ) distribution, where θ > 0 is unknown, and θ ~ Gamma(α₀, β₀). Determine the mode of the posterior distribution of θ. Also determine the posterior expectation and posterior variance of θ.
7.2.11 Suppose that x₁, ..., xₙ is a sample from the Exponential(θ) distribution, where θ > 0 is unknown, and θ ~ Gamma(α₀, β₀). Determine the mode of the posterior distribution of a future independent observation X_{n+1}. Also determine the posterior expectation of X_{n+1} and the posterior variance of X_{n+1}. (Hint: Problems 3.2.16 and 3.3.20.)
7.2.12 Suppose that in a population of students in a course with a large enrollment, the mark, out of 100, on a final exam is approximately distributed N(μ, 9). The instructor places the prior μ ~ N(65, 1) on the unknown parameter. A sample of 10 marks is obtained as given below.

46 68 34 86 75 56 77 73 53 64

(a) Determine the posterior mode and a 0.95-credible interval for μ. What does this interval tell you about the accuracy of the estimate?
(b) Use the 0.95-credible interval for μ to test the hypothesis H₀ : μ = 65.
(c) Suppose we assign prior probability 0.5 to μ = 65. Using the mixture prior Π = 0.5Π₁ + 0.5Π₂, where Π₁ is degenerate at 65 and Π₂ is the N(65, 1) distribution, compute the posterior probability of the null hypothesis.

404 Section 7.2: Inferences Based on the Posterior

(d) Compute the Bayes factor in favor of H₀ : μ = 65 when using the mixture prior.
7.2.13 A manufacturer believes that a machine produces rods with lengths in centimeters distributed N(μ₀, σ²), where μ₀ is known and σ² > 0 is unknown, and that the prior distribution 1/σ² ~ Gamma(α₀, β₀) is appropriate.
(a) Determine the posterior distribution of 1/σ².
(b) Determine the posterior mean of σ².
(c) Indicate how you would assess the hypothesis H₀ : σ² ≤ σ₀².
7.2.14 Consider the sampling model and prior in Exercise 7.1.1.
(a) Suppose we want to estimate θ, based upon having observed s = 1. Determine the posterior mode and posterior mean. Which would you prefer in this situation? Explain why.
(b) Determine a 0.8 HPD region for θ, based on having observed s = 1.
(c) Suppose instead interest was in θ², based on having observed s = 1. Identify the prior distribution of θ². Identify the posterior distribution of θ². Determine a 0.5 HPD region for θ².
7.2.15 For an event A, we have that P(Aᶜ) = 1 − P(A).
(a) What is the relationship between the odds in favor of A and the odds in favor of Aᶜ?
(b) When A is a subset of the parameter space, what is the relationship between the Bayes factor in favor of A and the Bayes factor in favor of Aᶜ?
7.2.16 Suppose you are told that the odds in favor of a subset A are 3 to 1. What is the probability of A?
If the Bayes factor in favor of A is 10 and the prior probability of A is 1/2, then determine the posterior probability of A.
7.2.17 Suppose data s is obtained. Two statisticians analyze these data using the same sampling model but different priors, and they are asked to assess a hypothesis H₀. Both statisticians report a Bayes factor in favor of H₀ equal to 100. Statistician I assigned prior probability 1/2 to H₀, whereas statistician II assigned prior probability 1/4 to H₀. Which statistician has the greater posterior degree of belief in H₀ being true?
7.2.18 You are told that a 0.95-credible interval, determined using the HPD criterion, for a quantity ψ is (−3.3, 2.6). If you are asked to assess the hypothesis H₀ : ψ = 0, then what can you say about the Bayesian P-value? Explain your answer.
7.2.19 What is the range of possible values for a Bayes factor in favor of A? Under what conditions will a Bayes factor in favor of A take its smallest value?

PROBLEMS

7.2.20 Suppose that x₁, ..., xₙ is a sample from the Uniform[0, θ] distribution, where θ > 0 is unknown, and we have θ ~ Gamma(α₀, β₀). Determine the mode of the posterior distribution of θ. (Hint: The posterior is not differentiable at x₍ₙ₎.)
7.2.21 Suppose that x₁, ..., xₙ is a sample from the Uniform[0, θ] distribution, where θ ∈ (0, 1] is unknown, and we have θ ~ Uniform[0, 1]. Determine the form of the γ-credible interval for θ based on the HPD concept.
7.2.22 In Example 7.2.1, write out the integral given in (7.2.2).

Chapter 7: Bayesian Inference 405

7.2.23 (MV) In Example 7.2.1, write out the integral that you would need to evaluate if you wanted to compute the posterior density of the third quartile of the population distribution, i.e., μ + σz₀.₇₅.
7.2.24 Consider the location normal model discussed in Example 7.1.2 and the population coefficient of variation ψ = σ₀/μ.
(a) Show that the posterior expectation of ψ does not exist. (Hint: Show that we can write the posterior expectation as

∫ (σ₀/(a + bz)) (2π)^(−1/2) e^(−z²/2) dz

for constants a and b, and show that this integral does not exist by considering the behavior of the integrand at the zero of a + bz.)
(b) Determine the posterior density of ψ.
(c) Show that you can determine the posterior mode of ψ by evaluating the posterior density at two specific points. (Hint: Proceed by maximizing the logarithm of the posterior density using the methods of calculus.)
7.2.25 (MV) Suppose that (θ₁, ..., θ_{k−1}) ~ Dirichlet(α₁, ..., α_k).
(a) Prove that θ₁ ~ Beta(α₁, α₂ + ··· + α_k). (Hint: In the integral to integrate out θ₂, ..., θ_{k−1}, make an appropriate transformation.)
(b) Prove that (θ_{i₁}, ..., θ_{i_{k−1}}) ~ Dirichlet(α_{i₁}, ..., α_{i_k}), where (i₁, i₂, ..., i_k) is a permutation of (1, ..., k). (Hint: What is the Jacobian of this transformation?)
(c) Prove that θᵢ ~ Beta(αᵢ, Σ_{j≠i} αⱼ). (Hint: Use parts (a) and (b).)
(d) Prove that E(θᵢ) = αᵢ/(α₁ + ··· + α_k). (Hint: Use parts (b) and (c).)
7.2.26 (MV) In Example 7.2.4, show that the plug-in MLE of θ₁ is given by x₁/n, i.e., find the MLE of (θ₁, ..., θ_k) and determine the first coordinate. (Hint: Show there is a unique solution to the score equations and then use the facts that the log-likelihood is bounded above and goes to −∞ at the boundary of the parameter space.)
7.2.27 Compare the results obtained in Exercises 7.2.3 and 7.2.4. What do you conclude about the invariance properties of these estimation procedures? (Hint: Consider Theorem 6.2.1.)
7.2.28 In Example 7.2.5, establish that the posterior variance is as stated in Example 7.2.6. (Hint: Problem 4.6.16.)
7.2.29 In a prediction problem, as described in Section 7.2.4, derive the form of the prior predictive density for t, when the joint density of (θ, s, t) is π(θ) q_θ(t | s) f_θ(s) (assume s and t are real-valued).
7.2.30 In Example 7.2.16, derive the posterior predictive probability function of (X_{n+1}, X_{n+2}), having observed x₁, ..., xₙ, when X₁, ..., Xₙ, X_{n+1}, X_{n+2} are independently and identically distributed (i.i.d.) Bernoulli(θ).
7.2.31 In Example 7.2.15, derive the posterior predictive distribution for X_{n+1}, having observed x₁, ..., xₙ, when X₁, ..., Xₙ, X_{n+1} are i.i.d. N(μ, σ₀²). (Hint: We can write X_{n+1} = μ + σ₀Z, where Z ~ N(0, 1) is independent of the posterior distribution of μ.)

406 Section 7.2: Inferences Based on the Posterior

7.2.32 For the context of Example 7.2.1, prove that the posterior predictive distribution of an additional future observation X_{n+1} from the population distribution is, after relocation and rescaling, a t(2α₀ + n) distribution. (Hint: Note that we can write X_{n+1} = μ + σU, where U ~ N(0, 1) independent of X₁, ..., Xₙ, and then reason as in Example 7.2.1.)
7.2.33 In Example 7.2.1, determine the form of an exact γ-prediction interval for an additional future observation X_{n+1} from the population distribution, based on the HPD concept. (Hint: Use Problem 7.2.32.)
7.2.34 Suppose that Π₁ and Π₂ are discrete probability distributions on the parameter space. Prove that, when the prior Π is a mixture Π = pΠ₁ + (1 − p)Π₂, then the prior predictive for the data s is given by m(s) = p m₁(s) + (1 − p) m₂(s), and the posterior probability measure is given by (7.2.6).
7.2.35 (MV) Suppose that θ = (θ₁, θ₂) ∈ Ω ⊂ R² and h : R² → R¹. Assume that h satisfies the necessary conditions and establish (7.2.1). (Hint: Theorem 2.9.2.)

CHALLENGES

7.2.36 Another way to assess the null hypothesis H₀ : θ = θ₀ is to compute the P-value
a value a least relative suprise estimate of (c) Indicate how to use (7.2.12) to form a surprise region, for (d) Suppose that both continuous and positive at Show that B FA takes its values in an 0 Generalize this to the case where open subset of Rk This shows that we can think of the observed relative surprise as a way of calibrating Bayes factors. is real­valued with prior density 0 Let A 0 that makes (7.2.12) smallest, maximizes ­credible region, known as a and posterior density ­relative 0 as 0 s 0 s s 0 0 0 Chapter 7: Bayesian Inference 407 7.3 Bayesian Computations In virtually all the examples in this chapter so far, we have been able to work out the exact form of the posterior distributions and carry out a number of important com­ putations using these. It often occurs, however, that we cannot derive any convenient form for the posterior distribution. Furthermore, even when we can derive the posterior distribution, there computations might arise that cannot be carried out exactly — e.g., recall the discussion in Example 7.2.1 that led to the integral (7.2.2). These calculations involve evaluating complicated sums or integrals. Therefore, when we apply Bayesian inference in a practical example, we need to have available methods for approximating these quantities. The subject of approximating integrals is an extensive topic that we cannot deal with fully here.1 We will, however, introduce several approximation methods that arise very naturally in Bayesian inference problems. 7.3.1 Asymptotic Normality of the Posterior R1 is approx­ In many circumstances, it turns out that the posterior distribution of imately normally distributed. We can then use this to compute approximate credible, carry out hypothesis assessment, etc. One such re­ regions for the true value of xn is a sult says that, under conditions that we will not describe here, when x1 sample from f, then x1 x1 xn xn z x1 xn z as n where x1 xn is the posterior mode, and 2 x1 xn 2 ln L x1 xn 2 1. 
x1 Note that this result is similar to Theorem 6.5.3 for the MLE. Actually, we can replace 2 x1 xn by the observed information is k­dimensional, there is a similar xn by the MLE and replace
(see Section 6.5), and the result still holds. When θ is k-dimensional, there is a similar but more complicated result.

7.3.2 Sampling from the Posterior

Typically, there are many things we want to compute as part of implementing a Bayesian analysis. Many of these can be written as expectations with respect to the posterior distribution of θ. For example, we might want to compute the posterior probability content of a subset A ⊂ Ω, namely,

Π(A | s) = E(I_A(θ) | s).

¹See, for example, Approximating Integrals via Monte Carlo and Deterministic Methods, by M. Evans and T. Swartz (Oxford University Press, Oxford, 2000).

408 Section 7.3: Bayesian Computations

More generally, we want to be able to compute the posterior expectation of some arbitrary function ψ(θ), namely,

E(ψ(θ) | s).    (7.3.1)

It would certainly be convenient if we could compute all these quantities exactly, but quite often we cannot. In fact, it is not really necessary that we evaluate (7.3.1) exactly. This is because we naturally expect any inference we make about the true value of the parameter to be subject to sampling error (different data sets of the same size lead to different inferences). It is not necessary to carry out our computations to a much higher degree of precision than what sampling error contributes. For example, if the sampling error only allows us to know the value of a parameter to within 0.1 units, then there is no point in computing an estimate to many more digits of accuracy.

In light of this, many of the computational problems associated with implementing Bayesian inference are effectively solved if we can sample from the posterior for θ. For when this is possible, we simply generate an i.i.d. sequence θ₁, θ₂, ..., θ_N from the posterior distribution of θ and estimate (7.3.1) by

ψ̄ = (1/N) Σ_{i=1}^N ψ(θᵢ).

We know then, from the strong law of large numbers (see Theorem 4.3.2), that ψ̄ → E(ψ(θ) | s) as N → ∞ with probability 1. Of course, for any given N, the value of ψ̄ only approximates (7.3.1); we would like to know that we have chosen N large enough so that the approximation is appropriately accurate.
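The Monte Carlo estimate of (7.3.1), together with a standard error for judging whether N is large enough, can be sketched as follows in plain Python. The N(0, 1) "posterior" and the choice ψ(θ) = θ², for which the exact answer is E(ψ(θ) | s) = 1, are stand-in assumptions for illustration:

```python
import math
import random

def mc_posterior_expectation(sampler, psi, N):
    """Estimate E(psi(theta) | s) by averaging psi over N posterior draws,
    and attach the standard error s/sqrt(N) of the Monte Carlo average."""
    vals = [psi(sampler()) for _ in range(N)]
    mean = sum(vals) / N
    var = sum((v - mean) ** 2 for v in vals) / (N - 1)   # sample variance
    se = math.sqrt(var / N)
    return mean, se   # mean +/- 3*se gives a virtual-certainty interval

# toy check: "posterior" is N(0, 1) and psi(theta) = theta^2, so E = 1
random.seed(1)
mean, se = mc_posterior_expectation(lambda: random.gauss(0.0, 1.0),
                                    lambda t: t * t, N=100_000)
print(round(mean, 2))   # close to 1.0
```

Monitoring 3*se as N grows, as recommended below, is just a matter of rerunning this with larger N.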
When E(ψ²(θ) | s) < ∞, then the central limit theorem (see Theorem 4.4.3) tells us that

√N (ψ̄ − E(ψ(θ) | s)) / σ_ψ →ᴰ N(0, 1)

as N → ∞, where σ²_ψ = Var(ψ(θ) | s). In general, we do not know the value of σ²_ψ, but we can estimate it by

s²_ψ = (1/(N − 1)) Σ_{i=1}^N (ψ(θᵢ) − ψ̄)²

when ψ is a quantitative variable, and by s²_ψ = ψ̄(1 − ψ̄) when ψ = I_A for A ⊂ Ω. As shown in Section 4.4.2, in either case, s²_ψ is a consistent estimate of σ²_ψ. Then, by Corollary 4.4.4, we have that

√N (ψ̄ − E(ψ(θ) | s)) / s_ψ →ᴰ N(0, 1)

as N → ∞. From this result we know that

Chapter 7: Bayesian Inference 409

ψ̄ ± 3s_ψ/√N

is an approximate 100% confidence interval for E(ψ(θ) | s), so we can look at 3s_ψ/√N to determine whether or not N is large enough for the accuracy required.

One caution concerning this approach to assessing error is that 3s_ψ/√N is itself subject to error, as s_ψ is an estimate of σ_ψ, so this could be misleading. A common recommendation then is to monitor the value of 3s_ψ/√N for successively larger values of N and stop the sampling only when it is clear that the value of 3s_ψ/√N is small enough for the accuracy desired and appears to be declining appropriately. Even this approach, however, will not give a guaranteed bound on the accuracy of the computations, so it is necessary to be cautious.

It is also important to remember that application of these results requires that σ²_ψ < ∞. For a bounded ψ, this is always true, as any bounded random variable always has a finite variance. For an unbounded ψ, however, this must be checked; sometimes this is very difficult to do.

We consider an example where it is possible to exactly sample from the posterior.

EXAMPLE 7.3.1 Location-Scale Normal
Suppose that x₁, ..., xₙ is a sample from an N(μ, σ²) distribution, where μ ∈ R¹ and σ > 0 are unknown, and we use the prior given in Example 7.1.4. The posterior distribution developed there is

μ | σ², x₁, ..., xₙ ~ N( μₓ, (n + 1/τ₀²)⁻¹ σ² )    (7.3.2)

and

1/σ² | x₁, ..., xₙ ~ Gamma( α₀ + n/2, βₓ ),    (7.3.3)

where μₓ is given by (7.1.7) and βₓ is given by (7.1.8). Most statistical packages have built-in generators for gamma distributions and for the normal distribution. Accordingly, it is very easy to generate a sample (μ₁, σ₁²), ..., (μ_N, σ_N²) from this posterior. We simply generate a value for
1/σᵢ² from the specified gamma distribution; then, given this value, we generate μᵢ from the specified normal distribution.

Suppose, then, that we want to derive the posterior distribution of the coefficient of variation ψ = σ/μ. To do this, we generate N values (μᵢ, σᵢ²) from the joint posterior of (μ, σ²), using (7.3.2) and (7.3.3), and compute ψᵢ = σᵢ/μᵢ for each of these. We then know immediately that ψ₁, ..., ψ_N is a sample from the posterior distribution of ψ.

As a specific numerical example, suppose that we observed the following sample x₁, ..., x₁₅:

11.6714  8.1631  1.9020  1.8957  1.8236
 7.4899  2.1228  4.0362  4.9233  2.1286
 6.8513  8.3223  1.0751  7.6461  7.9486

Here, x̄ = 5.2 and s = 3.3. Suppose further that the prior is specified by μ₀ = 4, τ₀² = 2, α₀ = 2, and β₀ = 1. From (7.1.7), we have

μₓ = (1/τ₀² + n)⁻¹ (μ₀/τ₀² + nx̄) = (1/2 + 15)⁻¹ (4/2 + 15(5.2)) = 5.161,

410 Section 7.3: Bayesian Computations

and from (7.1.8),

βₓ = β₀ + (n − 1)s²/2 + n(x̄ − μ₀)²/(2(1 + nτ₀²)) = 1 + 14(3.3)²/2 + 15(5.2 − 4)²/(2(1 + 15(2))) = 77.578.

Therefore, we generate

1/σ² | x₁, ..., xₙ ~ Gamma(9.5, 77.578),

followed by

μ | σ², x₁, ..., xₙ ~ N(5.161, (15.5)⁻¹σ²).

In Figure 7.3.1, we have plotted a sample of N = 200 values of (μ, σ²) from this joint posterior. See Appendix B for some code that can be used to generate from this joint distribution. In Figure 7.3.2, we have plotted a density histogram of the 200 values of ψ that arise from this sample.

Figure 7.3.1: A sample of 200 values of (μ, σ²) from the joint posterior in Example 7.3.1 when n = 15, x̄ = 5.2, s = 3.3, μ₀ = 4, τ₀² = 2, α₀ = 2, and β₀ = 1.

Figure 7.3.2: A density histogram of 200 values from the posterior distribution of ψ in Example 7.3.1.

A sample of 200 is not very large, so we next generated a sample of N = 10³ values from the posterior distribution of ψ. A density histogram of these values is provided in Figure
7.3.3. In Figure 7.3.4, we have provided a density histogram based on a sample of N = 10⁴ values. We can see from this that at N = 10³, the basic shape of the distribution has been obtained, although the right tail is not being very accurately estimated. Things look better in the right tail for N = 10⁴, but note there are still some extreme values quite disconnected from the main mass of values. As is characteristic of most distributions, we will need very large values of N to accurately estimate the tails. In any case, we have learned that this distribution is skewed to the right with a long right tail.

Figure 7.3.3: A density histogram of 1000 values from the posterior distribution of ψ in Example 7.3.1.

412 Section 7.3: Bayesian Computations

Figure 7.3.4: A density histogram of N = 10⁴ values from the posterior distribution of ψ in Example 7.3.1.

Suppose we want to estimate

Π(ψ ≤ 0.5 | x₁, ..., xₙ) = E(I_{(0, 0.5]}(ψ) | x₁, ..., xₙ).

Now I_{(0, 0.5]} is bounded, so its posterior variance exists. In the following table, we have recorded the estimates for each N, together with the standard error, based on each of the generated samples. We have included some code for computing these estimates and their standard errors in Appendix B. Based on the results from N = 10⁴, it would appear that this posterior probability is in the interval 0.289 ± 3(0.0045) = [0.2755, 0.3025].

N      Estimate of Π(ψ ≤ 0.5 | x₁, ..., xₙ)   Standard Error
200    0.265                                   0.0312
10³    0.271                                   0.0141
10⁴    0.289                                   0.0045

This example also demonstrates an important point. It would be very easy for us to calculate the sample mean of the values of ψ generated from its posterior distribution and then consider this as an estimate of the posterior mean of ψ. But Problem 7.2.24 suggests (see Problem 7.3.15) that this mean will not exist. Accordingly, a Monte Carlo estimate of this quantity does not make any sense! So we must always check first that any expectation we want to estimate exists, before we proceed with some estimation procedure.
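The estimates in the table can be reproduced, up to simulation error, by sampling the posterior as described and applying the indicator-function standard error (ψ̄(1 − ψ̄)/N)^(1/2). A sketch in plain Python (not the Appendix B code; it assumes the Gamma in (7.3.3) is parameterized by its rate, so the generator receives scale = 1/77.578, and the exact estimates vary with the random seed):

```python
import math
import random

def sample_psi(N):
    """Draw N values of psi = sigma/mu from the Example 7.3.1 posterior:
    1/sigma^2 ~ Gamma(9.5, rate 77.578), mu | sigma^2 ~ N(5.161, sigma^2/15.5)."""
    psi = []
    for _ in range(N):
        sigma2 = 1.0 / random.gammavariate(9.5, 1.0 / 77.578)  # rate -> scale
        mu = random.gauss(5.161, math.sqrt(sigma2 / 15.5))
        psi.append(math.sqrt(sigma2) / mu)
    return psi

random.seed(1)
for N in (200, 10**3, 10**4):
    psi = sample_psi(N)
    p = sum(1 for v in psi if v <= 0.5) / N   # estimate of Pi(psi <= 0.5 | x)
    se = math.sqrt(p * (1 - p) / N)           # standard error for an indicator
    print(N, round(p, 3), round(se, 4))
```

Note that the loop never averages the ψ values themselves, in keeping with the caution above that the posterior mean of ψ does not exist.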
When we cannot sample directly from the posterior, then the methods of the following section are needed.

7.3.3 Sampling from the Posterior Via Gibbs Sampling (Advanced)

Sampling from the posterior, as described in Section 7.3.2, is very effective when it can be implemented. Unfortunately, it is often difficult or even impossible to do this directly, as we did in Example 7.3.1. There are, however, a number of algorithms that allow us to approximately sample from the posterior. One of these, known as Gibbs sampling, is applicable in many statistical contexts.

To describe this algorithm, suppose we want to generate samples from the joint distribution of (Y₁, ..., Y_k) ∈ Rᵏ. Further suppose that we can generate from each of the full conditional distributions Yᵢ | Y₋ᵢ = y₋ᵢ, where

Y₋ᵢ = (Y₁, ..., Yᵢ₋₁, Yᵢ₊₁, ..., Y_k),

namely, we can generate from the conditional distribution of Yᵢ given the values of all the other coordinates. The Gibbs sampler then proceeds iteratively as follows.

1. Specify an initial value (y₁(0), ..., y_k(0)) for (Y₁, ..., Y_k).
2. For N > 0, generate Yᵢ(N) from its conditional distribution given

(y₁(N), ..., yᵢ₋₁(N), yᵢ₊₁(N − 1), ..., y_k(N − 1)),

for each i = 1, ..., k.

For example, if k = 3, we first specify (y₁(0), y₂(0), y₃(0)). Then we generate

Y₁(1) | Y₂(0) = y₂(0), Y₃(0) = y₃(0),
Y₂(1) | Y₁(1) = y₁(1), Y₃(0) = y₃(0),
Y₃(1) | Y₁(1) = y₁(1), Y₂(1) = y₂(1)

to obtain (Y₁(1), Y₂(1), Y₃(1)). Next we generate

Y₁(2) | Y₂(1) = y₂(1), Y₃(1) = y₃(1),
Y₂(2) | Y₁(2) = y₁(2), Y₃(1) = y₃(1),
Y₃(2) | Y₁(2) = y₁(2), Y₂(2) = y₂(2)

to obtain (Y₁(2), Y₂(2), Y₃(2)), etc. Note that we actually did not need to specify y₁(0), as it is never used.

It can then be shown (see Section 11.3) that, in fairly general circumstances, (Y₁(N), ..., Y_k(N)) converges in distribution to the joint distribution of (Y₁, ..., Y_k) as N → ∞. So for large N, we have that the distribution of (Y₁(N), ..., Y_k(N)) is approximately the same as the joint distribution of (Y₁, ..., Y_k), from which we want to sample. So Gibbs sampling provides an approximate method for sampling from a distribution of interest.

Furthermore, and this
is the result that is most relevant for simulations, it can be shown that, under conditions,

ḡ = (1/N) Σ_{i=1}^N g(Y₁(i), ..., Y_k(i)) → E(g(Y₁, ..., Y_k))

with probability 1 as N → ∞.

414 Section 7.3: Bayesian Computations

Estimation of the variance of ḡ is different than in the i.i.d. case, where we used the sample variance, because now the g(Y₁(i), ..., Y_k(i)) terms are not independent. There are several approaches to estimating the variance of ḡ, but perhaps the most commonly used is the technique of batching. For this, we divide the sequence

(Y₁(0), ..., Y_k(0)), ..., (Y₁(N), ..., Y_k(N))

into N/m nonoverlapping sequential batches of size m (assuming here that N is divisible by m), calculate the mean of g in each batch, obtaining ḡ₁, ..., ḡ_{N/m}, and then estimate the variance of ḡ by

s²_b / (N/m),    (7.3.4)

where s²_b is the sample variance obtained from the batch means. It can be shown that (Y₁(i), ..., Y_k(i)) and (Y₁(i + m), ..., Y_k(i + m)) are approximately independent for m large enough. Accordingly, we choose the batch size m large enough so that the batch means are approximately independent, but not so large as to leave very few degrees of freedom for the estimation of the variance. Under ideal conditions, ḡ₁, ..., ḡ_{N/m} is an i.i.d. sequence with sample mean

ḡ = (m/N) Σ_{i=1}^{N/m} ḡᵢ,

and, as usual, we estimate the variance of ḡ by (7.3.4).

Sometimes even Gibbs sampling cannot be directly implemented because we cannot obtain algorithms to generate from all the full conditionals. There are a variety of techniques for dealing with this, but in many statistical applications the technique of latent variables often works. For this, we search for some random variables, say V₁, ..., V_l, where each Yᵢ is a function of (V₁, ..., V_l), and such that we can apply Gibbs sampling to the joint distribution of (V₁, ..., V_l). We illustrate Gibbs sampling via latent variables in the following example.

EXAMPLE 7.3.2 Location-Scale Student
Suppose now that x₁, ..., xₙ is a sample from a distribution that is of the form X = μ + σZ, where Z ~ t(λ) (see Section 4.6.2 and Problem 4.6.14).
If λ > 2, then μ is the mean and σ(λ/(λ − 2))^(1/2) is the standard deviation of the distribution (see Problem 4.6.16). Note that λ = ∞ corresponds to normal variation, while λ = 1 corresponds to Cauchy variation. We will fix λ at some specified value to reflect the fact that we are interested in modeling situations in which the variable under consideration has a distribution with longer tails than the normal distribution. Typically, this manifests itself in a histogram of the data with a roughly symmetric shape but exhibiting a few extreme values out in the tails, so a t distribution might be appropriate.

Chapter 7: Bayesian Inference 415

Suppose we place the prior on (μ, σ²) given by

μ | σ² ~ N(μ₀, τ₀²σ²)   and   1/σ² ~ Gamma(α₀, β₀).

The likelihood function is given by

L(μ, σ² | x₁, ..., xₙ) ∝ (1/σ²)^(n/2) Π_{i=1}^n (1 + (xᵢ − μ)²/(λσ²))^(−(λ+1)/2),    (7.3.5)

hence the posterior density of (μ, 1/σ²) is proportional to

(1/σ²)^(n/2) Π_{i=1}^n (1 + (xᵢ − μ)²/(λσ²))^(−(λ+1)/2) (1/σ²)^(1/2) exp(−(μ − μ₀)²/(2τ₀²σ²)) (1/σ²)^(α₀−1) exp(−β₀/σ²).

This distribution is not immediately recognizable, and it is not at all clear how to generate from it. It is natural, then, to see if we can implement Gibbs sampling. To do this directly, we need an algorithm to generate from the posterior of μ given the value of 1/σ², and an algorithm to generate from the posterior of 1/σ² given the value of μ. Unfortunately, neither of these conditional distributions is amenable to the techniques discussed in Section 2.10, so we cannot implement Gibbs sampling directly.

Recall, however, that when V ~ Gamma(λ/2, 1/2) (see Problem 4.6.13) independent of Y ~ N(0, 1), then (Problem 4.6.14)

Z = Y/(V/λ)^(1/2) ~ t(λ).

Therefore, writing X = μ + σZ = μ + σY/(V/λ)^(1/2), we have that

X | V = v ~ N(μ, λσ²/v).

We now introduce the n latent or hidden variables V₁, ..., Vₙ, which are i.i.d. Gamma(λ/2, 1/2), and suppose Xᵢ | Vᵢ = vᵢ ~ N(μ, λσ²/vᵢ). The Vᵢ are considered latent because they are not really part of the problem formulation but have been added here for convenience (as we shall see). Then, noting that the density of (Xᵢ, Vᵢ) is the product of the conditional density of Xᵢ given Vᵢ and the density of Vᵢ, the joint density of the values (X₁, V₁), ..., (Xₙ, Vₙ) is proportional to

(1/σ²)^(n/2) Π_{i=1}^n vᵢ^(1/2) exp(−vᵢ(xᵢ − μ)²/(2λσ²)) vᵢ^(λ/2 − 1) exp(−vᵢ/2).
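The latent representation above is easy to check by simulation: drawing V ~ Gamma(λ/2, 1/2) (that is, shape λ/2 with rate 1/2, i.e., scale 2, which is a chi-squared with λ degrees of freedom) and then X | V = v ~ N(μ, λσ²/v) should reproduce X = μ + σZ with Z ~ t(λ). A sketch, where the choices μ = 0, σ = 1, λ = 3 are illustrative assumptions:

```python
import math
import random

random.seed(4)
lam, mu, sigma = 3.0, 0.0, 1.0
N = 100_000

xs = []
for _ in range(N):
    v = random.gammavariate(lam / 2, 2.0)               # Gamma(lam/2, rate 1/2)
    xs.append(random.gauss(mu, sigma * math.sqrt(lam / v)))  # X | V = v

xs.sort()
q90 = xs[int(0.9 * N)]
print(round(q90, 2))   # should be near the t(3) 0.90 quantile, about 1.64
```

A quantile is used for the check rather than the sample variance, since for λ = 3 the fourth moment of the t distribution does not exist and sample variances are unstable.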
From the above argument, the marginal joint density of (X₁, ..., Xₙ) (after integrating out the Vᵢ's) is proportional to (7.3.5), namely, a sample of n from the distribution

416 Section 7.3: Bayesian Computations

specified by X = μ + σZ, where Z ~ t(λ). With the same prior structure as before, we have that the joint density of ((X₁, V₁), ..., (Xₙ, Vₙ), μ, 1/σ²) is proportional to

(1/σ²)^(n/2) Π_{i=1}^n vᵢ^(1/2) exp(−vᵢ(xᵢ − μ)²/(2λσ²)) vᵢ^(λ/2 − 1) exp(−vᵢ/2) (1/σ²)^(1/2) exp(−(μ − μ₀)²/(2τ₀²σ²)) (1/σ²)^(α₀−1) exp(−β₀/σ²).    (7.3.6)

In (7.3.6), treat x₁, ..., xₙ as constants (we observed these values) and consider the conditional distributions of each of the variables V₁, ..., Vₙ, μ, 1/σ² given all the other variables. From (7.3.6), we have that the full conditional density of μ is proportional to

exp( −(1/(2λσ²)) Σ_{i=1}^n vᵢ(xᵢ − μ)² − (μ − μ₀)²/(2τ₀²σ²) ),

which is proportional to

exp( −(1/(2σ²))( Σ_{i=1}^n vᵢ/λ + 1/τ₀² )( μ − r(v₁, ..., vₙ) )² ).

From this, we immediately deduce that

μ | v₁, ..., vₙ, σ², x₁, ..., xₙ ~ N( r(v₁, ..., vₙ), σ²( Σ_{i=1}^n vᵢ/λ + 1/τ₀² )⁻¹ ),

where

r(v₁, ..., vₙ) = ( Σ_{i=1}^n vᵢxᵢ/λ + μ₀/τ₀² ) / ( Σ_{i=1}^n vᵢ/λ + 1/τ₀² ).

From (7.3.6), we have that the conditional density of 1/σ² is proportional to

(1/σ²)^(α₀ + (n+1)/2 − 1) exp( −(1/σ²)( β₀ + (μ − μ₀)²/(2τ₀²) + (1/(2λ)) Σ_{i=1}^n vᵢ(xᵢ − μ)² ) ),

and we immediately deduce that

1/σ² | v₁, ..., vₙ, μ, x₁, ..., xₙ ~ Gamma( α₀ + (n + 1)/2, β₀ + (μ − μ₀)²/(2τ₀²) + (1/(2λ)) Σ_{i=1}^n vᵢ(xᵢ − μ)² ).

Chapter 7: Bayesian Inference 417

Finally, the conditional density of Vᵢ is proportional to

vᵢ^((λ+1)/2 − 1) exp( −(vᵢ/2)(1 + (xᵢ − μ)²/(λσ²)) ),

and it is immediate that

Vᵢ | v₁, ..., vᵢ₋₁, vᵢ₊₁, ..., vₙ, μ, σ², x₁, ..., xₙ ~ Gamma( (λ + 1)/2, (1/2)(1 + (xᵢ − μ)²/(λσ²)) ).

We can now easily generate from all these distributions and implement a Gibbs sampling algorithm. As we are not interested in the values of V₁, ..., Vₙ, we simply discard these as we iterate.

Let us now consider a specific computation using the same data and prior as in Example 7.3.1. The analysis of Example 7.3.1 assumed that the data were coming from a normal distribution, but now we are going to assume that the data are a sample from a distribution of the form X = μ + σZ with Z ~ t(3). We again consider approximating the posterior distribution of the coefficient of variation ψ = σ/μ.

We carry out the Gibbs sampling iteration in the order v₁, ..., vₙ, μ, 1/σ². This implies that we need starting values only for μ and σ² (the full conditionals of the vᵢ do not depend on the other vⱼ's). We take the starting value of μ to be x̄ = 5.2 and the starting value of σ to be s = 3.3.
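This Gibbs iteration, with a batch-means standard error, can be sketched as follows in plain Python (not the Appendix B code). The sketch assumes the parameterizations used above: Vᵢ ~ Gamma(λ/2, 1/2) with rates throughout, μ | σ² ~ N(μ₀, τ₀²σ²), 1/σ² ~ Gamma(α₀, β₀), and the Example 7.3.1 data and prior:

```python
import math
import random

x = [11.6714, 8.1631, 1.9020, 1.8957, 1.8236, 7.4899, 2.1228, 4.0362,
     4.9233, 2.1286, 6.8513, 8.3223, 1.0751, 7.6461, 7.9486]
lam, mu0, tau0_sq, a0, b0 = 3.0, 4.0, 2.0, 2.0, 1.0
n = len(x)

random.seed(3)
mu, sigma2 = 5.2, 3.3 ** 2           # starting values: xbar and s^2
psi = []
for _ in range(10_000):
    # V_i | rest ~ Gamma((lam+1)/2, rate (1/2)(1 + (x_i - mu)^2/(lam*sigma^2)))
    v = [random.gammavariate((lam + 1) / 2,
                             2.0 / (1 + (xi - mu) ** 2 / (lam * sigma2)))
         for xi in x]
    w = [vi / lam for vi in v]
    # mu | rest ~ N(r, sigma^2 / (sum(w) + 1/tau0_sq))
    prec = sum(w) + 1 / tau0_sq
    r = (sum(wi * xi for wi, xi in zip(w, x)) + mu0 / tau0_sq) / prec
    mu = random.gauss(r, math.sqrt(sigma2 / prec))
    # 1/sigma^2 | rest ~ Gamma(a0 + (n+1)/2, rate below)
    rate = b0 + (mu - mu0) ** 2 / (2 * tau0_sq) \
              + sum(wi * (xi - mu) ** 2 for wi, xi in zip(w, x)) / 2
    sigma2 = 1.0 / random.gammavariate(a0 + (n + 1) / 2, 1.0 / rate)
    psi.append(math.sqrt(sigma2) / mu)   # coefficient of variation draw

# batch-means standard error for the estimate of Pi(psi <= 0.5 | x)
ind = [1.0 if p <= 0.5 else 0.0 for p in psi]
m = 10                                              # batch size
batches = [sum(ind[i:i + m]) / m for i in range(0, len(ind), m)]
est = sum(batches) / len(batches)
s2b = sum((b - est) ** 2 for b in batches) / (len(batches) - 1)
se = math.sqrt(s2b / len(batches))
print(round(est, 2), round(se, 4))
```

The latent vᵢ draws are discarded each iteration, as described above; only the (μ, σ²) values are retained through ψ.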
\sigma^2, we calculate \psi = \sigma/\mu.

The values \psi^{(1)}, \psi^{(2)}, ..., \psi^{(N)} are not i.i.d. from the posterior of \psi. The best we can say is that \psi^{(m)} converges in distribution, as m \to \infty, to the posterior distribution of \psi given x_1, ..., x_n. Also, values sufficiently far apart in the sequence will be like i.i.d. values from the posterior of \psi. Thus, one approach is to determine an appropriate value m and then extract \psi^{(m)}, \psi^{(2m)}, \psi^{(3m)}, ... as an approximate i.i.d. sequence from the posterior. Often it is difficult to determine an appropriate value for m, however.

In any case, it is known that, under fairly weak conditions, we can use the whole sequence \psi^{(1)}, ..., \psi^{(N)} and record a density histogram, just as we did in Example 7.3.1. The value of the density histogram between two cut points will converge almost surely to the correct value as N \to \infty. However, we will have to take N larger when using the Gibbs sampling algorithm than with i.i.d. sampling, to achieve the same accuracy. For many examples, the effect of the deviation of the sequence from being i.i.d. is very small, so N will not have to be much larger. We always need to be cautious, however, and the general recommendation is to compute estimates for successively higher values of N, only stopping when the results seem to have stabilized.

In Figure 7.3.5, we have plotted the density histogram of the \psi values that resulted from N = 10^4 iterations of the Gibbs sampler. In this case, plotting the density histogram for larger values of N resulted in only minor deviations from this plot. Note that this density looks very similar to that plotted in Example 7.3.1, but it is not quite so peaked and it has a shorter right tail.

Figure 7.3.5: A density histogram of N = 10^4 values of \psi generated sequentially via Gibbs sampling in Example 7.3.2.

We can also estimate \Pi(\psi \le 0.5 \mid x_1, ..., x_n), just as we did in Example 7.3.1, by recording the proportion of \psi values in the sequence that are smaller than 0.5. In this case, we obtained the
estimate 0.5441, which is quite different from the value obtained in Example 7.3.1. So using a t(3) distribution to describe the variation in the response has made a big difference in the results.

Of course, we must also quantify how accurate we believe our estimate is. Using a batch size of m = 10, we obtained the standard error of the estimate 0.5441 to be 0.00639. When we took the batch size to be m = 20, the standard error of the mean is 0.00659; with a batch size of m = 40, the standard error of the mean is 0.00668. So we feel quite confident that we are assessing the error in the estimate appropriately. Again, under conditions, we have that the estimate is asymptotically normal, so that in this case we can assert that the interval 0.5441 \pm 3(0.0066) = [0.5243, 0.5639] contains the true value of \Pi(\psi \le 0.5 \mid x_1, ..., x_n) with virtual certainty.

See Appendix B for some code that was used to implement the Gibbs sampling algorithm described here.

It is fair to say that the introduction of Gibbs sampling has resulted in a revolution in statistical applications due to the wide variety of previously intractable problems that it successfully handles. There are a number of modifications and closely related algorithms. We refer the interested reader to Chapter 11, where the general theory of what is called Markov chain Monte Carlo (MCMC) is discussed.

Summary of Section 7.3

Implementation of Bayesian inference often requires the evaluation of complicated integrals or sums. If, however, we can sample from the posterior of the parameter, this will often lead to sufficiently accurate approximations to these integrals or sums via Monte Carlo.

It is often difficult to sample exactly from a posterior distribution of interest. In such circumstances, Gibbs sampling can prove to be an effective method for generating an approximate sample from this distribution.

EXERCISES

7.3.1 Suppose we have the following sample from an N(\mu,
2) distribution, where \mu is unknown. If the prior on \mu is Uniform[2, 6], determine an approximate 0.95-credible interval for \mu based on the large sample results described in Section 7.3.1.

7.3.2 Determine the form of the approximate 0.95-credible interval of Section 7.3.1, for the Bernoulli model with a Uniform[0, 1] prior, discussed in Example 7.2.2.

7.3.3 Determine the form of the approximate 0.95-credible intervals of Section 7.3.1, for the location-normal model with an N(\mu_0, \tau_0^2) prior, discussed in Example 7.2.3.

7.3.4 Suppose that X \mid \theta \sim \text{Exponential}(\theta) and \theta \sim \text{Uniform}[0, 1]. Derive a crude Monte Carlo algorithm, based on generating from a gamma distribution, to generate a value from the conditional distribution of \theta given x. Generalize this to a sample of n from the Exponential(\theta) distribution. When will this algorithm be inefficient in the sense that we need a lot of computation to generate a single value?

7.3.5 Suppose that X \mid \theta \sim N(\theta, 1) and \theta \sim \text{Uniform}[0, 1]. Derive a crude Monte Carlo algorithm, based on generating from a normal distribution, to generate from the conditional distribution of \theta given x. Generalize this to a sample of n from the N(\theta, 1) distribution. When will this algorithm be inefficient in the sense that we need a lot of computation to generate a single value?

7.3.6 Suppose that X \mid \theta \sim 0.5 N(\theta, 1) + 0.5 N(\theta, 2) and \theta \sim \text{Uniform}[0, 1]. Derive a crude Monte Carlo algorithm, based on generating from a mixture of normal distributions, to generate from the conditional distribution of \theta given x. Generalize this to a sample of n from the 0.5 N(\theta, 1) + 0.5 N(\theta, 2) distribution.

COMPUTER EXERCISES

7.3.7 In the context of Example 7.3.1, construct a density histogram of the posterior distribution of z_{0.25}, i.e., the population first quartile, using N = 10^3 and N = 10^4, and compare the results. Estimate the posterior mean of this distribution and assess the error in your approximation. (Hint: Modify the program in Appendix B.)

7.3.8 Suppose that a manufacturer takes a random sample of manufactured items and tests each item as to whether it is defective or not. The responses are felt to be i.i.d. Bernoulli(\theta), where \theta is the probability that the item is defective. The manufacturer places a Beta(0.5, 10) distribution on \theta. If a sample of n = 100 items is taken and 5 defectives are observed, then, using a Monte Carlo sample with N = 1000, estimate the posterior probability that \theta < 0.1 and assess the error in your estimate.

7.3.9 Suppose that lifelengths (in years) of a manufactured item are known to follow an Exponential(\theta) distribution, where \theta > 0 is unknown, and for the prior we take \theta \sim \text{Gamma}(10, 2). Suppose that the lifelengths 4.3, 6.2, 8.4, 3.1, 6.0, 5.5, and 7.8 were observed.
(a) Using a Monte Carlo sample of size N = 10^3, approximate the posterior probability that 1/\theta \in [3, 6] and assess the error of your estimate.
(b) Using a Monte Carlo sample of size N = 10^3, approximate the posterior probability function of \lfloor 1/\theta \rfloor (\lfloor x \rfloor equals the greatest integer less than or equal to x).
(c) Using a Monte Carlo sample of size N = 10^3, approximate the posterior expectation of 1/\theta and assess the error in your approximation.

7.3.10 Generate a sample of n = 10 from a Pareto(2) distribution. Now pretend you only know that you have a sample from a Pareto(\theta) distribution, where \theta > 0 is unknown, and place a Gamma(2, 1) prior on \theta. Using a Monte Carlo sample of size N = 10^4, approximate the posterior expectation of 1/\theta based on the observed sample, and assess the accuracy of your approximation by quoting an interval that contains the exact value with virtual certainty. (Hint: Problem 2.10.15.)

PROBLEMS

7.3.11 Suppose X_1, ..., X_n is a sample from the model \{f_\theta : \theta \in \Omega\} and all the regularity conditions of Section 6.5 apply. Assume that the prior \pi is a continuous function of \theta and that the posterior mode \hat{\theta} converges almost surely to \theta when X_1, ..., X_n is a sample from f_\theta (the latter assumption holds under very general conditions).
(a) Using the fact that, if Y_n \to Y almost surely and g is a continuous function, then g(Y_n) \to g(Y) almost surely, prove that

-\frac{1}{n} \frac{\partial^2 \ln L(\hat{\theta} \mid x_1, ..., x_n)}{\partial \theta^2} \to I(\theta)

almost surely when X_1, ..., X_n is a sample from f_\theta.
(b) Explain to what extent the large sample approximate methods of Section 7.3.1 depend on the prior if the assumptions just described apply.

Chapter 7: Bayesian Inference 421

7.3.12 In Exercise 7.3.10, explain why the interval you constructed to contain the posterior mean of 1/\theta
with virtual certainty may or may not contain the true value of 1/\theta.

7.3.13 Suppose that (X, Y) is distributed Bivariate Normal(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho). Determine a Gibbs sampling algorithm to generate from this distribution. Assume that you have an algorithm for generating from univariate normal distributions. Is this the best way to sample from this distribution? (Hint: Problem 2.8.27.)

7.3.14 Suppose that the joint density of (X, Y) is given by f_{X,Y}(x, y) = 8xy for 0 < x < y < 1. Fully describe a Gibbs sampling algorithm for this distribution. In particular, indicate how you would generate all random variables. Can you design an algorithm to generate exactly from this distribution?

7.3.15 In Example 7.3.1, prove that the posterior mean of \psi = \sigma/\mu does not exist. (Hint: Use Problem 7.2.24 and the theorem of total expectation to split the integral into two parts, where one part has value \infty and the other part has value -\infty.)

7.3.16 (Importance sampling based on the prior) Suppose we have an algorithm to generate from the prior.
(a) Indicate how you could use this to approximate a posterior expectation using importance sampling (see Problem 4.5.21).
(b) What do you suppose is the major weakness of this approach?

COMPUTER PROBLEMS

7.3.17 In the context of Example 7.3.2, construct a density histogram of the posterior distribution of z_{0.25}, i.e., the population first quartile, using N = 10^4. Estimate the posterior mean of this distribution and assess the error in your approximation.

7.4 Choosing Priors

The issue of selecting a prior for a problem is an important one. Of course, the idea is that we choose a prior to reflect our a priori beliefs about the true value of \theta. Because this will typically vary from statistician to statistician, this is often criticized as being too subjective for scientific studies. It should be remembered, however, that the sampling model is also a subjective choice by the statistician. These choices are guided by the statistician's judgment.
What then justifies one choice of a statistical model or prior over another? In effect, when statisticians choose a prior \pi and a model \{f_\theta : \theta \in \Omega\}, they are prescribing a joint distribution for (\theta, s). The only way to assess whether or not an appropriate choice was made is to check whether the observed s is reasonable given this choice. If s is surprising, when compared to the distribution prescribed by the model and prior, then we have evidence against the statistician's choices. Methods designed to assess this are called model-checking procedures and are discussed in Chapter 9. At this point, however, we should recognize the subjectivity that enters into statistical analyses, but take some comfort that we have a methodology for checking whether or not the choices made by the statistician make sense.

422 Section 7.4: Choosing Priors

Often a statistician will consider a particular family \{\pi_\lambda : \lambda \in \Lambda\} of priors for \theta and try to select a suitable prior from this family for the problem. In such a context, the parameter \lambda is called a hyperparameter. Note that this family could be the set of all possible priors, so there is no restriction in this formulation. We now discuss some commonly used families and methods for selecting \lambda.

7.4.1 Conjugate Priors

Depending on the sampling model, the family may be conjugate.

Definition 7.4.1 The family of priors \{\pi_\lambda : \lambda \in \Lambda\} for the parameter \theta of the model \{f_\theta : \theta \in \Omega\} is conjugate, if for all data s \in S and all \lambda \in \Lambda, the posterior \pi_\lambda(\cdot \mid s) \in \{\pi_\lambda : \lambda \in \Lambda\}.

Conjugacy is usually a great convenience, as we start with some choice \lambda_0 for the prior, and then we find the relevant \lambda(s) for the posterior, often without much computation. While conjugacy can be criticized as a mere mathematical convenience, it has to be acknowledged that many conjugate families offer sufficient variety to allow for the expression of a wide spectrum of prior beliefs.

EXAMPLE 7.4.1 Conjugate Families
In Example 7.1.1, we have effectively shown that the family of all Beta distributions is conjugate for sampling from the Bernoulli model. In Example 7.1.2, it is shown that the family of normal priors is conjugate for sampling from the location normal model. In Example 7.1.3, it is shown that the family of Dirichlet distributions is conjugate for Multinomial models.
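The Beta–Bernoulli case of conjugacy is easy to check numerically. The sketch below is only an illustration (the sample size, success count, and prior parameters are hypothetical choices, not from the text): it forms the claimed Beta posterior by the conjugate update and compares it against a brute-force normalization of prior times likelihood on a grid.

```python
import math

def beta_pdf(t, a, b):
    """Density of the Beta(a, b) distribution at t in (0, 1)."""
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1) * math.log(t) + (b - 1) * math.log(1 - t))

def posterior_by_conjugacy(a, b, n, successes):
    # Beta prior + Bernoulli likelihood => Beta posterior (conjugacy)
    return a + successes, b + n - successes

# Hypothetical data: n = 40 Bernoulli responses, 12 successes, Beta(2, 2) prior.
a0, b0, n, s = 2.0, 2.0, 40, 12
a1, b1 = posterior_by_conjugacy(a0, b0, n, s)

# Brute-force check: normalize prior(t) * t^s * (1 - t)^(n - s) on a grid
# and compare with the claimed Beta(a1, b1) density.
grid = [(i + 0.5) / 10000 for i in range(10000)]
unnorm = [beta_pdf(t, a0, b0) * t**s * (1 - t)**(n - s) for t in grid]
norm_const = sum(unnorm) / len(grid)  # midpoint Riemann sum approximation
max_gap = max(abs(u / norm_const - beta_pdf(t, a1, b1))
              for t, u in zip(grid, unnorm))
print(a1, b1, max_gap)
```

The discrepancy `max_gap` is tiny, confirming that the posterior is exactly Beta(a + nx̄, b + n − nx̄), so no numerical integration is ever needed in this model.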
In Example 7.1.4, it is shown that the family of priors specified there is conjugate for sampling from the location-scale normal model.

Of course, using a conjugate family does not tell us how to select \lambda_0. Perhaps the most justifiable approach is to use prior elicitation.

7.4.2 Elicitation

Elicitation involves explicitly using the statistician's beliefs about the true value of \theta to select a prior in \{\pi_\lambda : \lambda \in \Lambda\} that reflects these beliefs. Typically, this involves the statistician asking questions of himself, or of experts in the application area, in such a way that the answers specify a prior from the family.

EXAMPLE 7.4.2 Location Normal
Suppose we are sampling from an N(\mu, \sigma_0^2) distribution with \mu unknown and \sigma_0^2 known, and we restrict attention to the family \{N(\mu_0, \tau_0^2) : \mu_0 \in R^1, \tau_0^2 > 0\} of priors for \mu. So here, \lambda = (\mu_0, \tau_0^2), and there are two degrees of freedom in this family. Thus, specifying two independent characteristics specifies a prior.

Accordingly, we could ask an expert to specify two quantiles of his or her prior distribution for \mu (see Exercise 7.4.10), as this specifies a prior in the family. For example, we might ask an expert to specify a number \mu_0 such that the true value of \mu was as likely to be greater than as less than \mu_0, so that \mu_0 is the median of the prior. We might also ask the expert to specify a value such that there is 99% certainty that the true value of \mu is less than this value. This of course is the 0.99-quantile of their prior. Alternatively, we could ask the expert to specify the center \mu_0 of their prior distribution and a value \tau_0 such that the interval \mu_0 \pm 3\tau_0 contains the true value of \mu with virtual certainty, where \mu_0 is the prior mean and \tau_0 is the prior standard deviation.

Elicitation is an important part of any Bayesian statistical analysis. If the experts used are truly knowledgeable about the application, then it seems intuitively clear that we will improve a statistical analysis by including such prior information. The process of elicitation can be somewhat involved, however, for complicated problems. Furthermore, there are various considerations that need to be taken into account, involving prejudices and flaws in the way we reason about probability outside of a mathematical formulation.
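The two-quantile elicitation just described is easy to automate. The sketch below (the elicited numbers are hypothetical, chosen only for illustration) recovers (\mu_0, \tau_0) for an N(\mu_0, \tau_0^2) prior from an elicited median and an elicited 0.99-quantile, using the standard normal 0.99-quantile z_{0.99} \approx 2.326.

```python
from statistics import NormalDist

def normal_prior_from_quantiles(median, q99):
    """Recover (mu0, tau0) for an N(mu0, tau0^2) prior from its
    median and its 0.99-quantile (q99 must exceed the median)."""
    z99 = NormalDist().inv_cdf(0.99)   # standard normal 0.99-quantile, ~2.3263
    mu0 = median                        # median = mean for a normal distribution
    tau0 = (q99 - median) / z99         # solves q99 = mu0 + z99 * tau0
    return mu0, tau0

# Hypothetical elicitation: median 70, and 99% certainty the value is below 90.
mu0, tau0 = normal_prior_from_quantiles(70.0, 90.0)
check = NormalDist(mu0, tau0).inv_cdf(0.99)  # should reproduce the elicited 90
print(mu0, round(tau0, 4), round(check, 6))
```

The same idea handles any two distinct quantiles, since two quantiles pin down the two free parameters of the family (compare Problem 7.4.15).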
See Garthwaite, Kadane and O'Hagan (2005), "Statistical methods for eliciting probability distributions", Journal of the American Statistical Association (Vol. 100, No. 470, pp. 680–700), for a deeper discussion of these issues.

7.4.3 Empirical Bayes

When the choice of the prior is based on the data s, these methods are referred to as empirical Bayesian methods. Logically, such methods would seem to violate a basic principle of inference, namely, the principle of conditional probability. For when we compute the posterior using a prior based on s, in general this is no longer the conditional distribution of \theta given the data. While this is certainly an important concern, in many problems the application of empirical Bayes leads to inferences with satisfying properties.

For example, one empirical Bayesian method is to compute the prior predictive m_\lambda(s) for the data s and then base the choice of \lambda on these values. Note that the prior predictive is like a likelihood function for \lambda (as m_\lambda(s) is the density or probability function for the observed s), and so the methods of Chapter 6 apply for inference about \lambda. For example, we could select the value of \lambda that maximizes m_\lambda(s). The required computations can be extensive, as \lambda is typically multidimensional. We illustrate with a simple example.

EXAMPLE 7.4.3 Bernoulli
Suppose we have a sample x_1, ..., x_n from a Bernoulli(\theta) distribution, and we contemplate putting a Beta(\lambda, \lambda) prior on \theta for some \lambda > 0. So the prior is symmetric about 1/2, and the spread in this distribution is controlled by \lambda. Since the prior mean is 1/2 and the prior variance is 1/(4(2\lambda + 1)), we see that choosing \lambda large leads to a very precise prior. Then we have that

m_\lambda(x_1, ..., x_n) = \frac{B(n\bar{x} + \lambda,\; n(1 - \bar{x}) + \lambda)}{B(\lambda, \lambda)},

where B(\cdot, \cdot) denotes the beta function. It is difficult to find the value of \lambda that maximizes this analytically, but for real data we can tabulate and plot m_\lambda(x_1, ..., x_n) to obtain this value. More advanced computational methods can also be used.

For example, suppose that n = 20 and we obtained n\bar{x} = 5 as the number of 1's observed. In Figure 7.4.1 we have plotted the graph of m_\lambda(x_1, ..., x_n) as a function of \lambda. We can see from this that the maximum occurs near \lambda = 2. More precisely, from a tabulation we determine that \lambda = 2.3 is close to the maximum. Accordingly, we use the Beta(5 + 2.3, 15 + 2.3) = Beta(7.3, 17.3) distribution for inferences about \theta.

Figure 7.4.1: Plot of m_\lambda(x_1, ..., x_n) as a function of \lambda in Example 7.4.3.
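The tabulation over \lambda used in this example can be sketched in a few lines. The grid spacing below is an arbitrary choice; the formula m_\lambda(x_1,...,x_n) = B(n\bar{x} + \lambda, n(1-\bar{x}) + \lambda)/B(\lambda, \lambda) is evaluated via log-gamma functions for numerical stability.

```python
import math

def log_beta(a, b):
    # ln B(a, b) via the log-gamma function
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def prior_predictive(lam, n, s):
    """m_lambda(x_1,...,x_n) = B(s + lam, n - s + lam) / B(lam, lam)
    for a Bernoulli(theta) sample with s ones and a Beta(lam, lam) prior."""
    return math.exp(log_beta(s + lam, n - s + lam) - log_beta(lam, lam))

n, s = 20, 5                                    # n = 20 observations, 5 ones
grid = [0.1 + 0.01 * i for i in range(2000)]    # lambda grid on (0, 20]
lam_hat = max(grid, key=lambda lam: prior_predictive(lam, n, s))
print(round(lam_hat, 2), prior_predictive(lam_hat, n, s))
```

The maximizing value lands near \lambda = 2.3, matching the tabulation reported in the example; the same loop with a finer grid refines the answer as far as desired.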
An alternative to the hierarchical Bayes approach is to prescribe a noninformative prior based on ignorance. Such a prior is also referred to as a default prior or reference prior. The motivation is to specify a prior that puts as little information into the analysis as possible and in some sense characterizes ignorance. The idea here is to give a rule such that, if a statistician has no prior beliefs about the value of a parameter or hyperparameter, then a prior is prescribed that reflects this. In the hierarchical Bayes approach, one continues up the chain until the statistician declares ignorance, and a default prior completes the specification. Unfortunately, just how ignorance is to be expressed turns out to be a rather subtle issue.

Surprisingly, in many contexts, statisticians have been led to choose noninformative priors that are improper, i.e., the integral or sum of the prior over the whole parameter space equals \infty, so the prior does not correspond to a probability distribution. The interpretation of an improper prior is not at all clear, and their use is somewhat controversial. Of course, (\theta, s) no longer has a joint probability distribution when we are using improper priors, and we cannot use the principle of conditional probability to justify basing our inferences on the posterior.

There have been numerous difficulties associated with the use of improper priors, which is perhaps not surprising. In particular, it is important to note that there is no reason in general for the posterior of \theta to exist as a proper probability distribution when the prior is improper. If an improper prior is being used, then we should always check to make sure the posterior is proper, as inferences will not make sense if we are using an improper posterior.

426 Section 7.4: Choosing Priors

When using an improper prior \pi, it is completely equivalent to instead use the prior c\pi for any c > 0: the posterior under \pi is proper if and only if the posterior under c\pi is proper, and then the posteriors are identical (see Exercise 7.4.6).
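The equivalence of \pi and c\pi is immediate from the definition of the posterior. Writing it out in one line, with f_\theta(s) the sampling density and the integrals taken over \Omega:

```latex
\pi_{c\pi}(\theta \mid s)
  \;=\; \frac{c\,\pi(\theta)\, f_{\theta}(s)}
             {\int_{\Omega} c\,\pi(\vartheta)\, f_{\vartheta}(s)\, d\vartheta}
  \;=\; \frac{\pi(\theta)\, f_{\theta}(s)}
             {\int_{\Omega} \pi(\vartheta)\, f_{\vartheta}(s)\, d\vartheta}
  \;=\; \pi(\theta \mid s),
```

so the constant c cancels, and the normalizing integral is finite under c\pi exactly when it is finite under \pi.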
The following example illustrates the use of an improper prior.

EXAMPLE 7.4.5 Location Normal Model with an Improper Prior
Suppose that x_1, ..., x_n is a sample from an N(\mu, \sigma_0^2) distribution, where \mu \in R^1 is unknown and \sigma_0^2 is known. Many arguments for default priors in this context lead to the choice \pi(\mu) \equiv 1, which is clearly improper
. Proceeding as in Example 7.1.2, namely, pretending that this is a proper probability density, we get that the posterior density of \mu is proportional to

\exp\left(-\frac{n}{2\sigma_0^2}(\mu - \bar{x})^2\right).

This immediately implies that the posterior distribution of \mu is N(\bar{x}, \sigma_0^2/n). Note that this is the same as the limiting posterior obtained in Example 7.1.2 as \tau_0 \to \infty, although the point of view is quite different.

One commonly used method of selecting a default prior is to use, when it is available, the prior given by \pi(\theta) = I^{1/2}(\theta) (and by (\det I(\theta))^{1/2} in the multidimensional case), where I(\theta) is the Fisher information for the statistical model as defined in Section 6.5. This is referred to as Jeffreys' prior. Note that Jeffreys' prior is dependent on the model.

Jeffreys' prior has an important invariance property. From Challenge 6.5.19, we have that, under some regularity conditions, if we make a 1–1 transformation of the real-valued parameter \theta via \psi = \Psi(\theta), then the Fisher information of \psi is given by

I(\Psi^{-1}(\psi)) \left(\frac{d\Psi^{-1}(\psi)}{d\psi}\right)^2.

Therefore, the default Jeffreys' prior for \psi is

I^{1/2}(\Psi^{-1}(\psi)) \left|\frac{d\Psi^{-1}(\psi)}{d\psi}\right|. (7.4.1)

Now we see that, if we had started with the default prior I^{1/2}(\theta) and made the change of variable to \psi, then this prior transforms to (7.4.1) by Theorems 2.6.2 and 2.6.3. A similar result can be obtained when \theta is multidimensional.

Jeffreys' prior often turns out to be improper, as the next example illustrates.

EXAMPLE 7.4.6 Location Normal (Example 7.4.5 continued)
In this case, Jeffreys' prior is given by \sqrt{n}/\sigma_0, which gives the same posterior as in Example 7.4.5. Note that Jeffreys' prior is effectively a constant and hence the prior of Example 7.4.5 is equivalent to Jeffreys' prior.

Research into rules for determining noninformative priors and the consequences of using such priors is an active area in statistics. While the impropriety seems counterintuitive, their usage often produces inferences with good properties.
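The claim that the flat-prior posterior N(x̄, σ₀²/n) is the τ₀ → ∞ limit of the Example 7.1.2 posterior can be checked numerically. Under a proper N(μ₀, τ₀²) prior, the posterior of μ is N(m, v) with v = (1/τ₀² + n/σ₀²)⁻¹ and m = v(μ₀/τ₀² + n x̄/σ₀²); the sketch below (with hypothetical data summaries) lets τ₀² grow and watches (m, v) approach (x̄, σ₀²/n).

```python
def posterior_normal(mu0, tau0_sq, sigma0_sq, n, xbar):
    """Posterior mean and variance of mu for the location normal model
    with known variance sigma0_sq and an N(mu0, tau0_sq) prior."""
    v = 1.0 / (1.0 / tau0_sq + n / sigma0_sq)
    m = v * (mu0 / tau0_sq + n * xbar / sigma0_sq)
    return m, v

# Hypothetical summaries: n = 25 observations, xbar = 4.2, sigma0^2 = 2.
n, xbar, sigma0_sq, mu0 = 25, 4.2, 2.0, 0.0
for tau0_sq in (1.0, 100.0, 10000.0):
    m, v = posterior_normal(mu0, tau0_sq, sigma0_sq, n, xbar)
    print(tau0_sq, round(m, 6), round(v, 6))
# As tau0_sq grows, (m, v) approaches (xbar, sigma0_sq / n), the posterior
# obtained formally from the improper prior pi(mu) = 1.
```

This is the sense in which the improper uniform prior acts as a limit of increasingly diffuse proper priors in this model.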
Summary of Section 7.4

To implement Bayesian inference, the statistician must choose a prior as well as the sampling model for the data. These choices must be checked if the inferences obtained are supposed to have practical validity. This topic is discussed in Chapter 9.

Various techniques have been devised to allow for automatic selection of a prior. These include empirical Bayes methods, hierarchical Bayes, and the use of noninformative priors to express ignorance.

Noninformative priors are often improper. We must always check that an improper prior leads to a proper posterior.

EXERCISES

7.4.1 Prove that the family of Gamma(\alpha, \beta) distributions with \alpha > 0 and \beta > 0 is a conjugate family of priors with respect to sampling from the model given by the Pareto(\theta) distributions with \theta > 0.

7.4.2 Prove that the family of priors given by \pi_{\lambda_1, \lambda_2}(\theta) = \lambda_1 \lambda_2^{\lambda_1} \theta^{-\lambda_1 - 1} I_{[\lambda_2, \infty)}(\theta), with \lambda_1 > 0 and \lambda_2 > 0, is a conjugate family of priors with respect to sampling from the model given by the Uniform[0, \theta] distributions with \theta > 0.

7.4.3 Suppose that the statistical model is given by the Uniform[0, \theta] distributions with \theta > 0, that we consider a family of priors for \theta indexed by a hyperparameter, and that we observe the sample (x_1, x_2).
(a) If we use the maximum value of the prior predictive for the data to determine the value of the hyperparameter, and hence the prior, which prior is selected here?
(b) Determine the posterior of \theta based on the selected prior.

7.4.4 For the situation described in Exercise 7.4.3, put a uniform prior on the hyperparameter and determine the posterior of \theta.

7.4.5 For the model for proportions described in Example 7.1.1, determine the prior predictive density. If n = 10 and n\bar{x} = 7, which of the priors given by Beta(1, 1) or Beta(5, 5) would the prior predictive criterion select for further inferences about \theta? (Hint: Theorem of total probability.)

428 Section 7.4: Choosing Priors

7.4.6 Prove that, when using an improper prior \pi, the posterior under \pi is proper if and only if the posterior under c\pi is proper for c > 0, and then the posteriors are identical.

7.4.7 Determine Jeffreys' prior for the Bernoulli(\theta) model and determine the posterior distribution of \theta based on this prior.
7.4.8 Suppose we are sampling from a Uniform[0, \theta] model and we want to use the improper prior \pi(\theta) \equiv 1.
(a) Does the posterior exist in this context?
(b) Does Jeffreys' prior exist in this context?

7.4.9 Suppose a student wants to put a prior on the mean grade out of 100 that their class will obtain on the next statistics exam. The student feels that a normal prior centered at 66 is appropriate and that the interval (40, 92) should contain 99% of the marks. Fully identify the prior.

7.4.10 A lab has conducted many measurements in the past on water samples from a particular source to determine the existence of a certain contaminant. From their records, it was determined that 50% of the samples had contamination less than 5.3 parts per million, while 95% had contamination less than 7.3 parts per million. If a normal prior is going to be used for a future analysis, what prior do these data determine?

7.4.11 Suppose that a manufacturer wants to construct a 0.95-credible interval for the mean lifetime \theta of an item sold by the company. A consulting engineer is 99% certain that the mean lifetime is less than 50 months. If the prior on \theta is an Exponential(\lambda), then determine \lambda based on this information.

7.4.12 Suppose the prior on a model parameter \mu is taken to be N(\mu_0, \sigma_0^2), where \mu_0 and \sigma_0^2 are hyperparameters. The statistician is able to elicit a value for \mu_0 but feels unable to do this for \sigma_0^2. Accordingly, the statistician puts a hyperprior on \sigma_0^2, given by 1/\sigma_0^2 \sim \text{Gamma}(\alpha_0, \beta_0) for some values of \alpha_0 and \beta_0. Determine the prior on \mu. (Hint: Write \mu = \mu_0 + \sigma_0 z, where z \sim N(0, 1).)

COMPUTER EXERCISES

7.4.13 Consider the situation discussed in Exercise 7.4.5.
(a) If we observe n = 10 and n\bar{x} = 7, and we are using a symmetric prior, i.e., Beta(\lambda, \lambda), plot the prior predictive as a function of \lambda in the range (0, 20] (you will need a statistical package that provides evaluations of the gamma function for this). Does this graph clearly select a value for \lambda?
(b) If we observe n = 10 and n\bar{x} = 9, plot the prior predictive as a function of \lambda in the range (0, 20]. Compare this plot with that in part (a).
7.4.14 Reproduce the plot given in Example 7.4.3 and verify that the maximum occurs near \lambda = 2.3.

PROBLEMS

7.4.15 Show that a distribution in the family \{N(\mu_0, \tau_0^2) : \mu_0 \in R^1, \tau_0^2 > 0\} is completely determined once we specify two quantiles of the distribution.

Chapter 7: Bayesian Inference 429

7.4.16 (Scale normal model) Consider the family of N(\mu_0, \sigma^2) distributions, where \mu_0 is known and \sigma^2 > 0 is unknown. Determine Jeffreys' prior for this model.

7.4.17 Suppose that for the location-scale normal model described in Example 7.1.4, we use the prior formed by the Jeffreys' prior for the location model (just a constant) times the Jeffreys' prior for the scale normal model. Determine the posterior distribution of (\mu, \sigma^2).

7.4.18 Consider the location normal model described in Example 7.1.2.
(a) Determine the prior predictive density m. (Hint: Write down the joint density of the sample and \mu, and use (7.1.2) to integrate out \mu; do not worry about getting m into a recognizable form.)
(b) How would you generate a value (X_1, ..., X_n) from this distribution?
(c) Are X_1, ..., X_n mutually independent? Justify your answer. (Hint: Write X_i = \mu + \sigma_0 Z_i and \mu = \mu_0 + \tau_0 Z, where Z, Z_1, ..., Z_n are i.i.d. N(0, 1).)

7.4.19 Consider Example 7.3.2, but this time use the prior \pi(\mu, \sigma^2) \propto 1/\sigma^2. Develop the Gibbs sampling algorithm for this situation. (Hint: Simply adjust each full conditional in Example 7.3.2 appropriately.)

COMPUTER PROBLEMS

7.4.20 Use the formulation described in Problem 7.4.17 and the data in the following table

2.6 3.0 4.2 4.0 3.1 4.1 5.2 3.2 3.7 2.2
3.8 3.4 5.6 4.5 1.8 2.9 5.3 4.7 4.0 5.2

to generate a sample of size N = 10^4 from the posterior. Plot a density histogram estimate of the posterior density of \mu based on this sample.

CHALLENGES

7.4.21 When \theta = (\theta_1, \theta_2), the Fisher information matrix I(\theta_1, \theta_2) is defined in Problem 6.5.15. The Jeffreys' prior is then defined as (\det I(\theta_1, \theta_2))^{1/2}. Determine Jeffreys' prior for the location-scale normal model and compare this with the prior used in Problem 7.4.17.

DISCUSSION TOPICS

7.4.22 Using empirical Bayes methods to determine a prior violates the
Bayesian principle that all unknowns should be assigned probability distributions. Comment on this. Is the hierarchical Bayesian approach a solution to this problem?

430 Section 7.5: Further Proofs (Advanced)

7.5 Further Proofs (Advanced)

Derivation of the Posterior Distribution for the Location-Scale Normal Model

In Example 7.1.4, the likelihood function is given by

L(\mu, \sigma^2 \mid x_1, ..., x_n) = (2\pi\sigma^2)^{-n/2} \exp\left(-\frac{n}{2\sigma^2}(\bar{x} - \mu)^2\right) \exp\left(-\frac{n-1}{2\sigma^2} s^2\right).

The prior on (\mu, \sigma^2) is given by

\mu \mid \sigma^2 \sim N(\mu_0, \tau_0^2 \sigma^2) and \frac{1}{\sigma^2} \sim \text{Gamma}(\alpha_0, \beta_0),

where \mu_0, \tau_0^2, \alpha_0, and \beta_0 are fixed and known. The posterior density of (\mu, \sigma^2) is then proportional to the likelihood times the joint prior. Therefore, retaining only those parts of the likelihood and the prior that depend on \mu and \sigma^2, the joint posterior density is proportional to

\left(\frac{1}{\sigma^2}\right)^{n/2} \exp\left(-\frac{1}{2\sigma^2}\left[n(\bar{x} - \mu)^2 + (n-1)s^2\right]\right) \left(\frac{1}{\sigma^2}\right)^{1/2} \exp\left(-\frac{(\mu - \mu_0)^2}{2\tau_0^2\sigma^2}\right) \left(\frac{1}{\sigma^2}\right)^{\alpha_0 - 1} \exp\left(-\frac{\beta_0}{\sigma^2}\right).

Completing the square in \mu gives

n(\bar{x} - \mu)^2 + \frac{(\mu - \mu_0)^2}{\tau_0^2} = \left(n + \frac{1}{\tau_0^2}\right)(\mu - \mu_x)^2 + \frac{n}{1 + n\tau_0^2}(\bar{x} - \mu_0)^2,

where

\mu_x = \left(n + \frac{1}{\tau_0^2}\right)^{-1}\left(n\bar{x} + \frac{\mu_0}{\tau_0^2}\right).

From this, we deduce that the posterior distribution of (\mu, \sigma^2) is given by

\mu \mid \sigma^2, x \sim N\left(\mu_x,\; \sigma^2\left(n + \frac{1}{\tau_0^2}\right)^{-1}\right)

Chapter 7: Bayesian Inference 431

and

\frac{1}{\sigma^2} \mid x \sim \text{Gamma}\left(\alpha_0 + \frac{n}{2},\; \beta_x\right),

where

\beta_x = \beta_0 + \frac{(n-1)s^2}{2} + \frac{n(\bar{x} - \mu_0)^2}{2(1 + n\tau_0^2)}.

Derivation of J(\theta_0) for the Location-Scale Normal

Here we have that 2 1 2 1 1 2 and We have that det and so det det J 0 2 1 2 1.

Chapter 8

Optimal Inferences

CHAPTER OUTLINE
Section 1 Optimal Unbiased Estimation
Section 2 Optimal Hypothesis Testing
Section 3 Optimal Bayesian Inferences
Section 4 Decision Theory (Advanced)
Section 5 Further Proofs (Advanced)

In Chapter 5, we introduced the basic ingredient of statistical inference — the statistical model. In Chapter 6, inference methods were developed based on the model alone via the likelihood function. In Chapter 7, we added the prior distribution on the model parameter, which led to the posterior distribution as the basis for deriving inference methods. With both the likelihood and the posterior, however, the inferences were derived largely based on intuition.
For example, when we had a characteristic of interest, there was nothing in the theory in Chapters 6 and 7 that forced us to choose a particular estimator, confidence or credible interval,
or testing procedure. A complete theory of statistical inference, however, would totally prescribe our inferences.

One attempt to resolve this issue is to introduce a performance measure on inferences and then choose an inference that does best with respect to this measure. For example, we might choose to measure the performance of estimators by their mean-squared error (MSE) and then try to obtain an estimator that had the smallest possible MSE. This is the optimality approach to inference, and it has been quite successful in a number of problems. In this chapter, we will consider several successes for the optimality approach to deriving inferences.

Sometimes the performance measure we use can be considered to be based on what is called a loss function. Loss functions form the basis for yet another approach to statistical inference called decision theory. While it is not always the case that a performance measure is based on a loss function, this holds in some of the most important problems in statistical inference. Decision theory provides a general framework in which to discuss these problems. A brief introduction to decision theory is provided in Section 8.4 as an advanced topic.

434 Section 8.1: Optimal Unbiased Estimation

8.1 Optimal Unbiased Estimation

Suppose we want to estimate the real-valued characteristic \psi(\theta) for the statistical model \{f_\theta : \theta \in \Omega\}. If we have observed the data s, an estimate is a value T(s) that the statistician hopes will be close to the true value of \psi(\theta). We refer to T as an estimator of \psi. The error in the estimate is given by T(s) - \psi(\theta). For a variety of reasons (mostly to do with mathematics), it is more convenient to consider the squared error (T(s) - \psi(\theta))^2.
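The squared-error criterion can be illustrated by simulation. The sketch below (a hypothetical setup, not from the text) compares the sample mean and the sample median as estimators of a normal mean \mu by averaging their squared errors over many simulated samples; for normal data the mean comes out ahead.

```python
import random
from statistics import mean, median

random.seed(7)
mu, sigma, n, reps = 3.0, 1.0, 25, 4000  # hypothetical simulation settings

se_mean = se_median = 0.0
for _ in range(reps):
    x = [random.gauss(mu, sigma) for _ in range(n)]
    se_mean += (mean(x) - mu) ** 2       # squared error of the sample mean
    se_median += (median(x) - mu) ** 2   # squared error of the sample median

mse_mean, mse_median = se_mean / reps, se_median / reps
print(round(mse_mean, 4), round(mse_median, 4))
# mse_mean should sit near sigma^2 / n = 0.04 and below mse_median.
```

Averaging the squared errors in this way is exactly the mean-squared error criterion defined next, estimated by Monte Carlo.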
We would then like to choose the estimator T so that these distributions of the squared error are as concentrated as possible about 0. A convenient measure of the concentration of these distributions about 0 is given by their means, or, for each \theta \in \Omega,

\text{MSE}_\theta(T) = E_\theta\left((T - \psi(\theta))^2\right), (8.1.1)

called the mean-squared error (recall Definition 6.3.1). An optimal estimator of \psi(\theta) is then a T that minimizes (8.1.1) for every \theta \in \Omega. In other words, T would be optimal if, for any other estimator T^* defined on S, we have that \text{MSE}_\theta(T) \le \text{MSE}_\theta(T^*) for each \theta \in \Omega.

Unfortunately, it can be shown that, except in very artificial circumstances, there is no such T, so we need to modify our optimization problem. This modification takes the form of restricting the estimators T that we will entertain as possible choices for the inference. Consider an estimator T such that E_\theta(T) does not exist or is infinite. It can then be shown that (8.1.1) is infinite (see Challenge 8.1.26). So we will first restrict our search to those T for which E_\theta(T) is finite for every \theta \in \Omega. Further restrictions on the types of estimators that we consider make use of the following result (recall also Theorem 6.3.1).

Theorem 8.1.1 If T is such that E_\theta(T^2) is finite, then

E_\theta\left((T - c)^2\right) = \text{Var}_\theta(T) + (E_\theta(T) - c)^2.

This is minimized by taking c = E_\theta(T).

PROOF We have that

E_\theta\left((T - c)^2\right) = E_\theta\left((T - E_\theta(T) + E_\theta(T) - c)^2\right) = \text{Var}_\theta(T) + 2(E_\theta(T) - c)E_\theta(T - E_\theta(T)) + (E_\theta(T) - c)^2 = \text{Var}_\theta(T) + (E_\theta(T) - c)^2, (8.1.2)

because E_\theta(T - E_\theta(T)) = 0. As \text{Var}_\theta(T) does not depend on c, the value of (8.1.2) is minimized by taking c = E_\theta(T).

Chapter 8: Optimal Inferences 435

8.1.1 The Rao–Blackwell Theorem and Rao–Blackwellization

We will prove that, when we are looking for T to minimize (8.1.1), we can further restrict our attention to estimators T that depend on the data only through the value of a sufficient statistic. This simplifies our search, as sufficiency often results in a reduction of the dimension of the data (recall the discussion and examples in Section 6.1.1). First, however, we need the following property of sufficiency.

Theorem 8.1.2 A statistic U is sufficient for a model if and only if the conditional distribution of the data s given U = u is the same for every \theta \in \Omega.

PROOF See Section 8.5 for the proof of this result.

The implication of this result is that the information in the data s beyond the value of U(s) can tell us nothing about the true value of \theta, because this information comes from a distribution that does not depend on the parameter. Notice that Theorem 8.1.2 is a characterization of sufficiency, alternative to that provided in Section 6.1.1. Consider a simple example that illustrates the content of Theorem 8.1.2.

EXAMPLE 8.1.1
Suppose that S = \{1, 2, 3, 4\} and \Omega = \{a, b\}, where the two probability distributions are given by the following table. Then L(\cdot \mid 2) = L(\cdot \mid 3) = L(\cdot \mid 4), and so U : S \to \{0, 1\}, given by U(1) = 0 and U(2) = U(3) = U(4) = 1, is a sufficient statistic. When we observe U(s) = 0, the conditional distribution of the response s given U(s) = 0 is degenerate at 1 (i.e., all the probability mass is at the point 1) for both \theta = a and \theta = b. When we observe U(s) = 1, the conditional distribution of the response s given U(s) = 1 places 1/3 of its mass at each of the points in \{2, 3, 4\}, for both \theta = a and \theta = b; the conditional distributions are as in the following table. Thus, we see that indeed the conditional distributions are independent of \theta.

We now combine Theorems 8.1.1 and 8.1.2 to show that we can restrict our attention to estimators T that depend on the data only through the value of a sufficient statistic U. By Theorem 8.1.2, we can denote the conditional probability measure for s given U(s) = u by P(\cdot \mid U = u), i.e., this probability measure does not depend on \theta, as it is the same for every \theta \in \Omega. For an estimator T of \psi(\theta), such that E_\theta(T) is finite for every \theta \in \Omega, put T_U(s) equal to the conditional expectation of T given the value of U(s), namely,

T_U(s) = E_{P(\cdot \mid U = U(s))}(T),

i.e., T_U is the average value of T when we average using P(\cdot \mid U = U(s)). Notice that T_U(s_1) = T_U(s_2) whenever U(s_1) = U(s_2) (this is because P(\cdot \mid U = U(s_1)) = P(\cdot \mid U = U(s_2))), and so T_U depends on the data s only through the value of U(s).

Theorem 8.1.3 (Rao–Blackwell) Suppose that U is a sufficient statistic and E_\theta(T^2) is finite for every \theta \in \Omega. Then \text{MSE}_\theta(T_U) \le \text{MSE}_\theta(T) for every \theta \in \Omega.

PROOF