If a population $X$ has the probability density function

$$f(x;\theta) = \begin{cases} \sqrt{\frac{2}{\pi}}\, e^{-\frac{1}{2}(x-\theta)^2} & \text{for } x \ge \theta \\ 0 & \text{otherwise,} \end{cases}$$

then the likelihood function of $\theta$ is

$$L(\theta) = \prod_{i=1}^{n} \sqrt{\frac{2}{\pi}}\, e^{-\frac{1}{2}(x_i-\theta)^2}, \qquad \text{where } x_1 \ge \theta,\ x_2 \ge \theta, \ldots,\ x_n \ge \theta.$$

This likelihood function simplifies to

$$L(\theta) = \left(\frac{2}{\pi}\right)^{\frac{n}{2}} e^{-\frac{1}{2}\sum_{i=1}^{n}(x_i-\theta)^2}, \qquad \text{where } \min\{x_1, x_2, \ldots, x_n\} \ge \theta.$$

Taking the natural logarithm of $L(\theta)$ and maximizing, we obtain the maximum likelihood estimator of $\theta$ as the first order statistic of the sample $X_1, X_2, \ldots, X_n$, that is

$$\hat{\theta} = X_{(1)}, \qquad \text{where } X_{(1)} = \min\{X_1, X_2, \ldots, X_n\}.$$

Suppose the true value of $\theta$ is 1. Using the maximum likelihood estimator of $\theta$, we try to guess this value from a random sample. Suppose $X_1 = 1.5$, $X_2 = 1.1$, $X_3 = 1.7$, $X_4 = 2.1$, $X_5 = 3.1$ is a set of sample data from the above population. Based on this random sample, we get $\hat{\theta}_{ML} = X_{(1)} = \min\{1.5, 1.1, 1.7, 2.1, 3.1\} = 1.1$. If we take another random sample, say $X_1 = 1.8$, $X_2 = 2.1$, $X_3 = 2.5$, $X_4 = 3.1$, $X_5 = 2.6$, then the maximum likelihood estimate of $\theta$ based on this sample will be $\hat{\theta} = 1.8$. From the graph of the density function $f(x;\theta)$ for $\theta = 1$ (not reproduced here), it is clear that a number close to 1 has a higher chance of being picked by the sampling process than a number substantially bigger than 1. Hence, it makes sense to estimate $\theta$ by the smallest sample value. However, from this example we see that the point estimate of $\theta$ is not equal to the true value of $\theta$.
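The sampling variability of $\hat{\theta} = X_{(1)}$ is easy to see in a quick simulation. The following is a minimal sketch (assuming NumPy is available; the seed, sample size, and repetition count are arbitrary choices), using the fact that $X = \theta + |Z|$ with $Z \sim N(0,1)$ has exactly the density above.

```python
# Minimal simulation sketch of the MLE X_(1): it always overshoots theta and
# essentially never equals it exactly, motivating interval estimates.
import numpy as np

rng = np.random.default_rng(0)
theta = 1.0            # true parameter
n, reps = 5, 10000     # sample size and number of repeated samples

# X = theta + |Z| with Z ~ N(0,1) has density sqrt(2/pi) e^{-(x-theta)^2/2}, x >= theta
samples = theta + np.abs(rng.standard_normal((reps, n)))
mle = samples.min(axis=1)               # X_(1) for each sample

print(mle[:5])                          # a few point estimates, all >= theta
print(mle.mean())                       # on average the estimate exceeds theta
print(np.mean(np.isclose(mle, theta)))  # exact equality essentially never occurs
```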
Even if we take many random samples, the estimate of $\theta$ will rarely equal the actual value of the parameter. Hence, instead of reporting a single value for $\theta$, we should report a range of probable values for the parameter with a certain degree of confidence. This brings us to the notion of a confidence interval for a parameter.

17.1. Interval Estimators and Confidence Intervals for Parameters

The interval estimation problem can be stated as follows: given a random sample $X_1, X_2, \ldots, X_n$ and a probability value $1-\alpha$, find a pair of statistics $L = L(X_1, \ldots, X_n)$ and $U = U(X_1, \ldots, X_n)$ with $L \le U$ such that the probability of $\theta$ being in the random interval $[L, U]$ is $1-\alpha$. That is,

$$P(L \le \theta \le U) = 1-\alpha.$$

Recall that a sample is a portion of the population, usually chosen by the method of random sampling, and as such it is a set of random variables $X_1, \ldots, X_n$ with the same probability density function $f(x;\theta)$ as the population. Once the sampling is done, we get $X_1 = x_1, \ldots, X_n = x_n$, where $x_1, \ldots, x_n$ are the sample data.

Definition 17.1. Let $X_1, \ldots, X_n$ be a random sample of size $n$ from a population $X$ with density $f(x;\theta)$, where $\theta$ is an unknown parameter. An interval estimator of $\theta$ is a pair of statistics $L = L(X_1, \ldots, X_n)$ and $U = U(X_1, \ldots, X_n)$ with $L \le U$ such that if $x_1, \ldots, x_n$ is a set of sample data, then $\theta$ belongs to the interval $[L(x_1, \ldots, x_n),\, U(x_1, \ldots, x_n)]$.

The interval $[l, u]$ will be denoted as an interval estimate of $\theta$, whereas the random interval $[L, U]$ will denote the interval estimator of $\theta$. Notice
that the interval estimator of $\theta$ is the random interval $[L, U]$. Next, we define the $100(1-\alpha)\%$ confidence interval for the unknown parameter $\theta$.

Definition 17.2. Let $X_1, \ldots, X_n$ be a random sample of size $n$ from a population $X$ with density $f(x;\theta)$, where $\theta$ is an unknown parameter. An interval estimator of $\theta$ is called a $100(1-\alpha)\%$ confidence interval for $\theta$ if

$$P(L \le \theta \le U) = 1-\alpha.$$

The random variable $L$ is called the lower confidence limit and $U$ is called the upper confidence limit. The number $1-\alpha$ is called the confidence coefficient or degree of confidence.

There are several methods for constructing confidence intervals for an unknown parameter $\theta$. Some well-known methods are: (1) the pivotal quantity method, (2) the maximum likelihood estimator (MLE) method, (3) the Bayesian method, (4) invariant methods, (5) inversion of a test statistic, and (6) the statistical or general method. In this chapter, we focus on the pivotal quantity method and the MLE method, and we also briefly examine the statistical or general method. The pivotal quantity method is mainly due to George Barnard and David Fraser of the University of Waterloo, and it is perhaps one of the most elegant methods of constructing confidence intervals for unknown parameters.

17.2. Pivotal Quantity Method

In this section, we explain how the notion of a pivotal quantity can be used to construct a confidence interval for an unknown parameter. We will also examine how to find pivotal quantities for parameters associated with certain probability density functions. We begin with the formal definition of the pivotal quantity.

Definition 17.3. Let $X_1, \ldots, X_n$ be a random sample of size $n$ from a population $X$ with probability density function $f(x;\theta)$, where $\theta$ is an unknown parameter. A pivotal quantity $Q$ is a function of $X_1, \ldots, X_n$ and $\theta$ whose probability distribution is independent of the parameter $\theta$.
Notice that the pivotal quantity $Q(X_1, \ldots, X_n, \theta)$ will usually contain both the parameter $\theta$ and an estimator (that is, a statistic) of $\theta$. Now we give an example of a pivotal quantity.

Example 17.1. Let $X_1, \ldots, X_n$ be a random sample from a normal population $X$ with mean $\mu$ and a known variance $\sigma^2$. Find a pivotal quantity for the unknown parameter $\mu$.

Answer: Since each $X_i \sim N(\mu, \sigma^2)$,

$$\bar{X} \sim N\!\left(\mu, \frac{\sigma^2}{n}\right).$$

Standardizing $\bar{X}$, we see that

$$\frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \sim N(0, 1).$$

The statistic $Q$ given by

$$Q(X_1, \ldots, X_n, \mu) = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}}$$

is a pivotal quantity, since it is a function of $X_1, \ldots, X_n$ and $\mu$ and its probability density function is free of the parameter $\mu$.
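A pivotal quantity can be checked empirically: whatever value of $\mu$ generates the data, the simulated distribution of $Q$ should be the same. Here is a short sketch (assuming NumPy; all numerical settings are illustrative).

```python
# Sketch: Q = (Xbar - mu)/(sigma/sqrt(n)) has the same N(0,1) law for every mu.
import numpy as np

rng = np.random.default_rng(1)
n, sigma, reps = 25, 2.0, 100000

def pivot_draws(mu):
    x = rng.normal(mu, sigma, size=(reps, n))
    return (x.mean(axis=1) - mu) / (sigma / np.sqrt(n))

for mu in (-3.0, 0.0, 7.5):
    q = pivot_draws(mu)
    # quantiles agree across mu and match N(0,1): about -1.645, 0.0, 1.645
    print(mu, np.round(np.quantile(q, [0.05, 0.5, 0.95]), 2))
```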
There is no general rule for finding a pivotal quantity (or pivot) for a parameter $\theta$ of an arbitrarily given density function $f(x;\theta)$. Hence, to some extent, finding pivots relies on guesswork. However, if the probability density function $f(x;\theta)$ belongs to the location-scale family, then there is a systematic way to find pivots.

Definition 17.4. Let $g: \mathbb{R} \to \mathbb{R}$ be a probability density function. Then for any $\mu$ and any $\sigma > 0$, the family of functions

$$\mathcal{F} = \left\{ f(x; \mu, \sigma) = \frac{1}{\sigma}\, g\!\left(\frac{x-\mu}{\sigma}\right) \;\middle|\; \mu \in (-\infty, \infty),\ \sigma \in (0, \infty) \right\}$$

is called the location-scale family with standard probability density $g(x)$. The parameter $\mu$ is called the location parameter and the parameter $\sigma$ is called the scale parameter. If $\sigma = 1$, then $\mathcal{F}$ is called the location family. If $\mu = 0$, then $\mathcal{F}$ is called the scale family.

It should be noted that each member $f(x; \mu, \sigma)$ of the location-scale family is a probability density function. If we take $g(x) = \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}x^2}$, then the normal density function

$$f(x; \mu, \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}, \qquad -\infty < x < \infty,$$

belongs to the location-scale family. The density function

$$f(x;\theta) = \begin{cases} \frac{1}{\theta}\, e^{-\frac{x}{\theta}} & \text{if } 0 < x < \infty \\ 0 & \text{otherwise} \end{cases}$$

belongs to the scale family. However, the density function

$$f(x;\theta) = \begin{cases} \theta\, x^{\theta-1} & \text{if } 0 < x < 1 \\ 0 & \text{otherwise} \end{cases}$$

does not belong to the location-scale family.

It is relatively easy to find pivotal quantities for the location or scale parameter when the density function of the population belongs to the location-scale family $\mathcal{F}$. When the density function belongs to the location family, the pivot for the location parameter $\mu$ is $\hat{\mu} - \mu$, where $\hat{\mu}$ is the maximum likelihood estimator of $\mu$. If $\hat{\sigma}$ is the maximum likelihood estimator of $\sigma$, then the pivot for the scale parameter $\sigma$ is $\hat{\sigma}/\sigma$ when the density function belongs to the scale family. When the density function belongs to the location-scale family, the pivot for the location parameter $\mu$ is $(\hat{\mu} - \mu)/\hat{\sigma}$ and the pivot for the scale parameter $\sigma$ is $\hat{\sigma}/\sigma$. Sometimes it is appropriate to make a minor modification to the pivot obtained in this way, such as multiplying by a constant, so that the modified pivot will have a known distribution.

Remark 17.1. Pivotal quantities can also be constructed using a sufficient statistic for the parameter. Suppose $T = T(X_1, \ldots, X_n)$ is a sufficient statistic based on a random sample $X_1, \ldots, X_n$ from a population $X$ with probability density function $f(x;\theta)$. Let the probability density function of $T$ be $g(t;\theta)$. If $g(t;\theta)$ belongs to the location family, then an appropriate constant multiple of $T - a(\theta)$ is a pivotal quantity for the location parameter $\theta$ for some suitable expression $a(\theta)$. If $g(t;\theta)$ belongs to the scale family, then an appropriate constant multiple of $T/b(\theta)$ is a pivotal quantity for the scale parameter $\theta$ for some suitable expression $b(\theta)$.
In what follows, $z_\alpha$ denotes the point of a standard normal distribution for which $P(Z \le z_\alpha) = 1-\alpha$, where $\alpha \le 0.5$ (see the figure description below). Note that $\alpha = P(Z \ge z_\alpha)$ if $\alpha \le 0.5$.

[Figure: standard normal density; the area $1-\alpha$ lies to the left of $z_\alpha$ and the area $\alpha$ lies to the right.]

A $100(1-\alpha)\%$ confidence interval for a parameter $\theta$ has the following interpretation. If $X_1 = x_1, \ldots, X_n = x_n$ is a sample of size $n$, then based on this sample we construct a $100(1-\alpha)\%$ confidence interval $[l, u]$, which is a subinterval of the real line $\mathbb{R}$. Suppose we take a large number of samples from the underlying population and construct all the corresponding $100(1-\alpha)\%$ confidence intervals; then approximately $100(1-\alpha)\%$ of these intervals would include the unknown value of the parameter $\theta$.

In the next several sections, we illustrate how the pivotal quantity method can be used to determine confidence intervals for various parameters.

17.3. Confidence Interval for Population Mean

At the outset, we use the pivotal quantity method to construct a confidence interval for the mean of a normal population. Here we assume first that the population variance is known, and then that it is unknown. Next, we construct the confidence interval for the mean of a population with a continuous, symmetric and unimodal probability distribution by applying the central limit theorem.

Let $X_1, \ldots, X_n$ be a random sample from a population $X \sim N(\mu, \sigma^2)$, where $\mu$ is an unknown parameter and $\sigma^2$ is a known parameter. First of all, we need a pivotal quantity $Q(X_1, \ldots, X_n, \mu)$. To construct this pivotal quantity, we find the maximum likelihood estimator of the parameter $\mu$, namely $\hat{\mu} = \bar{X}$. Since each $X_i \sim N(\mu, \sigma^2)$, the distribution of the sample mean is

$$\bar{X} \sim N\!\left(\mu, \frac{\sigma^2}{n}\right).$$

The distribution of this estimator depends on $\mu$, but the standardized quantity $Q(X_1, \ldots, X_n, \mu) = (\bar{X}-\mu)/(\sigma/\sqrt{n}) \sim N(0,1)$ is free of $\mu$ and serves as a pivot. From $P(-z_{\alpha/2} \le Q \le z_{\alpha/2}) = 1-\alpha$ we obtain
$$1-\alpha = P\!\left(\bar{X} - z_{\frac{\alpha}{2}}\, \frac{\sigma}{\sqrt{n}} \le \mu \le \bar{X} + z_{\frac{\alpha}{2}}\, \frac{\sigma}{\sqrt{n}}\right) = P(L \le \mu \le U).$$

One can find infinitely many pairs $(L, U)$ such that $1-\alpha = P(L \le \theta \le U)$. Hence, there are infinitely many confidence intervals for a given parameter. However, we only consider the confidence interval of shortest length. If a confidence interval is constructed by omitting equal tail areas, then we obtain what is known as the central confidence interval. For a symmetric distribution, it can be shown that the central confidence interval is of the shortest length.

Example 17.2. Let $X_1, \ldots, X_{11}$ be a random sample of size 11 from a normal distribution with unknown mean $\mu$ and variance $\sigma^2 = 9.9$. If $\sum_{i=1}^{11} x_i = 132$, then what is the 95% confidence interval for $\mu$?

Answer: Since each $X_i \sim N(\mu, 9.9)$, the confidence interval for $\mu$ is given by

$$\left[\bar{X} - z_{\frac{\alpha}{2}}\, \frac{\sigma}{\sqrt{n}},\ \bar{X} + z_{\frac{\alpha}{2}}\, \frac{\sigma}{\sqrt{n}}\right].$$

Since $\sum_{i=1}^{11} x_i = 132$, the sample mean is $\bar{x} = \frac{132}{11} = 12$. Also, we see that $\sqrt{\frac{\sigma^2}{n}} = \sqrt{\frac{9.9}{11}} = \sqrt{0.9}$. Further, since $1-\alpha = 0.95$, we have $\alpha = 0.05$ and thus $z_{\frac{\alpha}{2}} = z_{0.025} = 1.96$ (from the normal table). Using this information in the expression for the confidence interval for $\mu$, we get

$$\left[12 - 1.96\sqrt{0.9},\ 12 + 1.96\sqrt{0.9}\right],$$

that is, $[10.141, 13.859]$.

Example 17.3. Let $X_1, \ldots, X_{11}$ be a random sample of size 11 from a normal distribution with unknown mean $\mu$ and variance $\sigma^2 = 9.9$. If $\sum_{i=1}^{11} x_i = 132$, then for what value of the constant $k$ is

$$\left[12 - k\sqrt{0.9},\ 12 + k\sqrt{0.9}\right]$$

a 90% confidence interval for $\mu$?
Answer: The 90% confidence interval for $\mu$ when the variance is known is

$$\left[\bar{x} - z_{\frac{\alpha}{2}}\, \frac{\sigma}{\sqrt{n}},\ \bar{x} + z_{\frac{\alpha}{2}}\, \frac{\sigma}{\sqrt{n}}\right].$$

Thus we need to find $\bar{x}$, $\sqrt{\sigma^2/n}$ and $z_{\frac{\alpha}{2}}$ corresponding to $1-\alpha = 0.9$. Hence

$$\bar{x} = \frac{\sum_{i=1}^{11} x_i}{11} = \frac{132}{11} = 12, \qquad \sqrt{\frac{\sigma^2}{n}} = \sqrt{\frac{9.9}{11}} = \sqrt{0.9}, \qquad z_{0.05} = 1.64 \ \text{(from the normal table)}.$$

Hence, the confidence interval for $\mu$ at the 90% confidence level is

$$\left[12 - (1.64)\sqrt{0.9},\ 12 + (1.64)\sqrt{0.9}\right].$$

Comparing this interval with the given interval, we get $k = 1.64$, and the corresponding 90% confidence interval is $[10.444, 13.556]$.

Remark 17.3. Notice that the length of the 90% confidence interval for $\mu$ is 3.112, whereas the length of the 95% confidence interval is 3.718. Thus the higher the confidence level, the longer the confidence interval: the length grows with the confidence level. In view of this fact, if the confidence level is zero, then the length is also zero; that is, when the confidence level is zero, the confidence interval for $\mu$ degenerates into the point $\bar{X}$.
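The arithmetic of Examples 17.2 and 17.3 can be reproduced in a few lines. This is a sketch assuming SciPy is available; `norm.ppf` plays the role of the normal table, so the critical values differ slightly from the rounded table entries used above.

```python
# Sketch of the z-interval computation for Examples 17.2 and 17.3.
import math
from scipy.stats import norm

n, total, var = 11, 132.0, 9.9
xbar = total / n                  # 12.0
se = math.sqrt(var / n)           # sqrt(0.9)

for level in (0.95, 0.90):
    z = norm.ppf(1 - (1 - level) / 2)     # 1.960 and 1.645 (table: 1.96, 1.64)
    print(level, (round(xbar - z * se, 3), round(xbar + z * se, 3)))
# -> approximately [10.141, 13.859] and [10.440, 13.560]
```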
Until now we have considered the case when the population is normal with unknown mean $\mu$ and known variance $\sigma^2$. Now we consider the case when the population is non-normal but its probability density function is continuous, symmetric and unimodal. If the sample size is large, then by the central limit theorem

$$\frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \sim N(0, 1) \qquad \text{as } n \to \infty.$$

Thus, in this case we can take the pivotal quantity to be

$$Q(X_1, \ldots, X_n, \mu) = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}}$$

if the sample size is large (generally $n \ge 32$). Since the pivotal quantity is the same as before, we get the same expression for the $100(1-\alpha)\%$ confidence interval, that is,

$$\left[\bar{X} - z_{\frac{\alpha}{2}}\, \frac{\sigma}{\sqrt{n}},\ \bar{X} + z_{\frac{\alpha}{2}}\, \frac{\sigma}{\sqrt{n}}\right].$$

Example 17.4. Let $X_1, \ldots, X_{40}$ be a random sample of size 40 from a distribution with known variance and unknown mean $\mu$. If $\sum_{i=1}^{40} x_i = 286.56$ and $\sigma^2 = 10$, then what is the 90 percent confidence interval for the population mean $\mu$?

Answer: Since $1-\alpha = 0.90$, we get $\frac{\alpha}{2} = 0.05$. Hence $z_{0.05} = 1.64$ (from the standard normal table). Next, we find the sample mean $\bar{x} = \frac{286.56}{40} = 7.164$. Hence, the confidence interval for $\mu$ is given by

$$\left[7.164 - (1.64)\sqrt{\frac{10}{40}},\ 7.164 + (1.64)\sqrt{\frac{10}{40}}\right],$$

that is, $[6.344, 7.984]$.

Example 17.5. In sampling from a non-normal distribution with a variance of 25, how large must the sample size be so that the length of a 95% confidence interval for the mean is 1.96?

Answer: The confidence interval when the sample is taken from a normal population with a variance of 25 is

$$\left[\bar{x} - z_{\frac{\alpha}{2}}\sqrt{\frac{25}{n}},\ \bar{x} + z_{\frac{\alpha}{2}}\sqrt{\frac{25}{n}}\right].$$

Thus the length of the confidence interval is

$$\ell = 2\, z_{0.025} \sqrt{\frac{25}{n}} = 2\,(1.96)\sqrt{\frac{25}{n}}.$$

But we are given that the length of the confidence interval is $\ell = 1.96$. Thus

$$1.96 = 2\,(1.96)\sqrt{\frac{25}{n}} \quad\Longrightarrow\quad \sqrt{n} = 10 \quad\Longrightarrow\quad n = 100.$$

Hence, the sample size must be 100 so that the length of the 95% confidence interval will be 1.96.
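The sample-size computation in Example 17.5 inverts the length formula $\ell = 2\, z_{\alpha/2}\, \sigma/\sqrt{n}$. A quick check in plain Python:

```python
# Check of Example 17.5: n = (2 * z * sigma / length)^2.
z, sigma, length = 1.96, 5.0, 1.96
n = (2 * z * sigma / length) ** 2
print(n)   # 100.0
```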
So far, we have discussed the construction of confidence intervals for the mean when the variance is known. When the population is normal with both $\mu$ and $\sigma^2$ unknown, the pivot $(\bar{X}-\mu)/(S/\sqrt{n}) \sim t(n-1)$ leads in the same way to the interval $\left[\bar{X} \pm t_{\frac{\alpha}{2}}(n-1)\, S/\sqrt{n}\right]$.

Example 17.6. A random sample of 9 observations from a normal population yields the observed statistics $\bar{x} = 5$ and $\frac{1}{8}\sum_{i=1}^{9}(x_i - \bar{x})^2 = 36$. What is the 95% confidence interval for $\mu$?

Answer: Since $n = 9$, $\bar{x} = 5$, $s^2 = 36$ and $1-\alpha = 0.95$, the 95% confidence interval for $\mu$ is given by

$$\left[\bar{x} - t_{\frac{\alpha}{2}}(n-1)\, \frac{s}{\sqrt{n}},\ \bar{x} + t_{\frac{\alpha}{2}}(n-1)\, \frac{s}{\sqrt{n}}\right],$$

that is,

$$\left[5 - t_{0.025}(8)\, \frac{6}{\sqrt{9}},\ 5 + t_{0.025}(8)\, \frac{6}{\sqrt{9}}\right],$$

which is

$$\left[5 - (2.306)\frac{6}{\sqrt{9}},\ 5 + (2.306)\frac{6}{\sqrt{9}}\right].$$

Hence, the 95% confidence interval for $\mu$ is given by $[0.388, 9.612]$.

Example 17.7. Which of the following is true of a 95% confidence interval for the mean of a population? (a) The interval includes 95% of the population values on the average. (b) The interval includes 95% of the sample values on the average. (c) The interval has a 95% chance of including the sample mean.

Answer: None of the statements is correct, since a 95% confidence interval for the population mean $\mu$ means that the interval has a 95% chance of including the population mean $\mu$.

Finally, we consider the case when the population is non-normal but its probability density function is continuous, symmetric and unimodal. If some weak conditions are satisfied, then the sample variance $S^2$ of a random sample of size $n \ge 2$ converges stochastically to $\sigma^2$. Therefore, in

$$\frac{\dfrac{\bar{X} - \mu}{\sqrt{\sigma^2/n}}}{\sqrt{\dfrac{(n-1)S^2}{\sigma^2(n-1)}}} = \frac{\bar{X} - \mu}{\sqrt{S^2/n}}$$

the numerator of the left-hand member converges to $N(0,1)$ and the denominator of that member converges to 1. Hence

$$\frac{\bar{X} - \mu}{\sqrt{S^2/n}} \sim N(0, 1) \qquad \text{approximately, for large } n.$$
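The $t$-interval of Example 17.6 above can be verified with SciPy (a sketch; `t.ppf` replaces the $t$-table):

```python
# Sketch of the t-interval from Example 17.6.
import math
from scipy.stats import t

n, xbar, s2 = 9, 5.0, 36.0
s = math.sqrt(s2)
tcrit = t.ppf(0.975, df=n - 1)            # about 2.306 for 8 degrees of freedom
half = tcrit * s / math.sqrt(n)
print((round(xbar - half, 3), round(xbar + half, 3)))   # (0.388, 9.612)
```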
17.4. Confidence Interval for Population Variance

For a normal population, $(n-1)S^2/\sigma^2$ is a pivot for $\sigma^2$ with a chi-square distribution, and any pair of constants $a < b$ with $\int_a^b f(u)\, du = 1-\alpha$ (where $f$ is the density of the pivot) gives the interval $\left[(n-1)S^2/b,\ (n-1)S^2/a\right]$. Minimizing the length of this interval over such pairs leads to the condition

$$a^{\frac{n}{2}}\, e^{-\frac{a}{2}} = b^{\frac{n}{2}}\, e^{-\frac{b}{2}}, \qquad \text{that is,} \qquad \ln\frac{a}{b} = \frac{a-b}{n}.$$

Hence, to obtain the pair of constants $a$ and $b$ that will produce the shortest confidence interval for $\sigma^2$, we have to solve the following system of nonlinear equations:

$$\left.\begin{aligned} \int_a^b f(u)\, du &= 1-\alpha \\ \ln\frac{a}{b} &= \frac{a-b}{n} \end{aligned}\right\} \qquad (\star)$$

If $a_o$ and $b_o$ are solutions of $(\star)$, then the shortest confidence interval for $\sigma^2$ is given by

$$\left[\frac{(n-1)S^2}{b_o},\ \frac{(n-1)S^2}{a_o}\right].$$

Since this system of nonlinear equations is hard to solve analytically, numerical solutions are given in the statistical literature in the form of tables for finding the shortest interval for the variance.
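A numerical solution of the system $(\star)$ is straightforward with a root finder. The sketch below assumes SciPy; taking $f$ to be the $\chi^2(n-1)$ density of the pivot $(n-1)S^2/\sigma^2$ and starting from the equal-tail endpoints are both assumptions made for illustration.

```python
# Sketch: solve the system (*) numerically for the shortest variance interval.
import numpy as np
from scipy.optimize import fsolve
from scipy.stats import chi2

n, alpha = 20, 0.05
df = n - 1          # assumed degrees of freedom of the pivot (n-1)S^2/sigma^2

def equations(v):
    a, b = v
    return [chi2.cdf(b, df) - chi2.cdf(a, df) - (1 - alpha),  # coverage condition
            np.log(a / b) - (a - b) / n]                      # shortest-length condition

start = [chi2.ppf(alpha / 2, df), chi2.ppf(1 - alpha / 2, df)]  # equal-tail guess
a0, b0 = fsolve(equations, start)
print(round(a0, 3), round(b0, 3))   # endpoints for [(n-1)S^2/b0, (n-1)S^2/a0]
```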
17.5. Confidence Interval for Parameters of some Distributions not belonging to the Location-Scale Family

In this section, we illustrate the pivotal quantity method for finding confidence intervals for a parameter $\theta$ when the density function does not belong to the location-scale family. The following density functions do not belong to the location-scale family:

$$f(x;\theta) = \begin{cases} \theta\, x^{\theta-1} & \text{if } 0 < x < 1 \\ 0 & \text{otherwise} \end{cases} \qquad\text{or}\qquad f(x;\theta) = \begin{cases} \frac{1}{\theta} & \text{if } 0 < x < \theta \\ 0 & \text{otherwise.} \end{cases}$$

We will construct interval estimators for the parameters in these density functions. The same idea can be used to find interval estimators for parameters of density functions that do belong to the location-scale family, such as

$$f(x;\theta) = \begin{cases} \frac{1}{\theta}\, e^{-\frac{x}{\theta}} & \text{if } 0 < x < \infty \\ 0 & \text{otherwise.} \end{cases}$$

To find the pivotal quantities for the above distributions and others, we need the following three results. The first result is Theorem 6.2, while the proof of the second result is easy and we leave it to the reader.

Theorem 17.1. Let $F(x;\theta)$ be the cumulative distribution function of a continuous random variable $X$. Then $F(X;\theta) \sim UNIF(0,1)$.

Theorem 17.2. If $X \sim UNIF(0,1)$, then $-\ln X \sim EXP(1)$.

Theorem 17.3. Let $X_1, \ldots, X_n$ be a random sample from a distribution with density function

$$f(x;\theta) = \begin{cases} \frac{1}{\theta}\, e^{-\frac{x}{\theta}} & \text{if } 0 < x < \infty \\ 0 & \text{otherwise,} \end{cases}$$

where $\theta > 0$ is a parameter. Then the random variable

$$\frac{2}{\theta} \sum_{i=1}^n X_i \sim \chi^2(2n).$$

Proof: Let $Y = \frac{2}{\theta} \sum_{i=1}^n X_i$. We show that the sampling distribution of $Y$ is chi-square with $2n$ degrees of freedom using the moment generating function method. The moment generating function of $Y$ is given by

$$M_Y(t) = \prod_{i=1}^{n} M_{X_i}\!\left(\frac{2}{\theta}\, t\right) = \prod_{i=1}^{n} \left(1 - \theta \cdot \frac{2t}{\theta}\right)^{-1} = (1-2t)^{-n} = (1-2t)^{-\frac{2n}{2}}.$$

Since $(1-2t)^{-\frac{2n}{2}}$ corresponds to the moment generating function of a chi-square random variable with $2n$ degrees of freedom, we conclude that

$$\frac{2}{\theta} \sum_{i=1}^n X_i \sim \chi^2(2n).$$

Theorem 17.4. Let $X_1, \ldots, X_n$ be a random sample from a distribution with density function

$$f(x;\theta) = \begin{cases} \theta\, x^{\theta-1} & \text{if } 0 \le x \le 1 \\ 0 & \text{otherwise,} \end{cases}$$

where $\theta > 0$ is a parameter. Then the random variable $-2\theta \sum_{i=1}^n \ln X_i$ has a chi-square distribution with $2n$ degrees of freedom.

Proof: We are given that $X_i \sim \theta\, x^{\theta-1}$, $0 < x < 1$. Hence, the cdf of $f$ is

$$F(x;\theta) = \int_0^x \theta\, u^{\theta-1}\, du = x^\theta.$$

Thus by Theorem 17.1, each $F(X_i;\theta) \sim UNIF(0,1)$, that is, $X_i^\theta \sim UNIF(0,1)$. By Theorem 17.2, each $-\ln X_i^\theta \sim EXP(1)$, that is, $-\theta \ln X_i \sim EXP(1)$. By Theorem 17.3 (with $\theta = 1$), we obtain

$$-2\theta \sum_{i=1}^n \ln X_i \sim \chi^2(2n).$$

Hence, the sampling distribution of $-2\theta \sum_{i=1}^n \ln X_i$ is chi-square with $2n$ degrees of freedom.
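Theorem 17.4 is easy to confirm by simulation. A sketch (NumPy and SciPy assumed; the parameter values are arbitrary) that samples from $F(x;\theta) = x^\theta$ by inverting the cdf:

```python
# Sketch: -2*theta*sum(ln X_i) behaves like chi-square with 2n degrees of freedom.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
theta, n, reps = 3.0, 8, 50000

u = rng.random((reps, n))
x = u ** (1.0 / theta)                     # inverse-cdf sampling from F(x) = x^theta
q = -2.0 * theta * np.log(x).sum(axis=1)

print(np.round([q.mean(), q.var()], 2))    # chi2(2n) has mean 16 and variance 32 here
print(np.round(np.quantile(q, 0.95), 2), np.round(chi2.ppf(0.95, 2 * n), 2))
```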
The following theorem, whose proof follows from Theorems 17.1, 17.2 and 17.3, is the key to finding pivotal quantities for many distributions that do not belong to the location-scale family. Further, this theorem can also be used for finding pivotal quantities for parameters of some distributions that do belong to the location-scale family.

Theorem 17.5. Let $X_1, \ldots, X_n$ be a random sample from a continuous population $X$ with a distribution function $F(x;\theta)$. If $F(x;\theta)$ is monotone in $\theta$, then the statistic $Q = -2\sum_{i=1}^n \ln F(X_i;\theta)$ is a pivotal quantity and has a chi-square distribution with $2n$ degrees of freedom (that is, $Q \sim \chi^2(2n)$).

It should be noted that the condition that $F(x;\theta)$ is monotone in $\theta$ is needed to ensure an interval; otherwise we may get a confidence region instead of a confidence interval. Further, note that the statistic $-2\sum_{i=1}^n \ln\big(1 - F(X_i;\theta)\big)$ also has a chi-square distribution with $2n$ degrees of freedom, that is,

$$-2\sum_{i=1}^n \ln\big(1 - F(X_i;\theta)\big) \sim \chi^2(2n).$$

Example 17.11. If $X_1, \ldots, X_n$ is a random sample from a population with density

$$f(x;\theta) = \begin{cases} \theta\, x^{\theta-1} & \text{if } 0 < x < 1 \\ 0 & \text{otherwise,} \end{cases}$$

where $\theta > 0$ is an unknown parameter, what is a $100(1-\alpha)\%$ confidence interval for $\theta$?

Answer: To construct a confidence interval for $\theta$, we need a pivotal quantity, that is, a random variable which is a function of the sample and the parameter whose distribution is free of $\theta$. By Theorem 17.4, $Q = -2\theta \sum_{i=1}^n \ln X_i \sim \chi^2(2n)$ is such a pivot, and from

$$1-\alpha = P\!\left(\chi^2_{\frac{\alpha}{2}}(2n) \le -2\theta \sum_{i=1}^n \ln X_i \le \chi^2_{1-\frac{\alpha}{2}}(2n)\right)$$

the $100(1-\alpha)\%$ confidence interval for $\theta$ is

$$\left[\frac{\chi^2_{\frac{\alpha}{2}}(2n)}{-2\sum_{i=1}^n \ln X_i},\ \frac{\chi^2_{1-\frac{\alpha}{2}}(2n)}{-2\sum_{i=1}^n \ln X_i}\right].$$
Example 17.12. If $X_1, \ldots, X_n$ is a random sample from a population with the uniform density

$$f(x;\theta) = \begin{cases} \frac{1}{\theta} & \text{if } 0 < x < \theta \\ 0 & \text{otherwise,} \end{cases}$$

where $\theta > 0$ is an unknown parameter, what is a $100(1-\alpha)\%$ confidence interval for $\theta$?

Answer: The cumulative distribution function of $f(x;\theta)$ is

$$F(x;\theta) = \frac{x}{\theta} \qquad \text{for } 0 < x < \theta.$$

Since

$$-2\sum_{i=1}^n \ln F(X_i;\theta) = -2\sum_{i=1}^n \ln\frac{X_i}{\theta} = 2n\ln\theta - 2\sum_{i=1}^n \ln X_i,$$

by Theorem 17.5 the quantity $2n\ln\theta - 2\sum_{i=1}^n \ln X_i \sim \chi^2(2n)$. Since $2n\ln\theta - 2\sum_{i=1}^n \ln X_i$ is a function of the sample and the parameter, and its distribution is independent of $\theta$, it is a pivot for $\theta$. Hence, we take

$$Q(X_1, \ldots, X_n, \theta) = 2n\ln\theta - 2\sum_{i=1}^n \ln X_i.$$

The $100(1-\alpha)\%$ confidence interval for $\theta$ can be constructed from

$$\begin{aligned} 1-\alpha &= P\!\left(\chi^2_{\frac{\alpha}{2}}(2n) \le Q \le \chi^2_{1-\frac{\alpha}{2}}(2n)\right) \\ &= P\!\left(\chi^2_{\frac{\alpha}{2}}(2n) \le 2n\ln\theta - 2\sum_{i=1}^n \ln X_i \le \chi^2_{1-\frac{\alpha}{2}}(2n)\right) \\ &= P\!\left(\chi^2_{\frac{\alpha}{2}}(2n) + 2\sum_{i=1}^n \ln X_i \le 2n\ln\theta \le \chi^2_{1-\frac{\alpha}{2}}(2n) + 2\sum_{i=1}^n \ln X_i\right) \\ &= P\!\left(e^{\frac{1}{2n}\left\{\chi^2_{\frac{\alpha}{2}}(2n) + 2\sum_{i=1}^n \ln X_i\right\}} \le \theta \le e^{\frac{1}{2n}\left\{\chi^2_{1-\frac{\alpha}{2}}(2n) + 2\sum_{i=1}^n \ln X_i\right\}}\right). \end{aligned}$$

Hence, the $100(1-\alpha)\%$ confidence interval for $\theta$ is given by

$$\left[\, e^{\frac{1}{2n}\left\{\chi^2_{\frac{\alpha}{2}}(2n) + 2\sum_{i=1}^n \ln X_i\right\}},\ e^{\frac{1}{2n}\left\{\chi^2_{1-\frac{\alpha}{2}}(2n) + 2\sum_{i=1}^n \ln X_i\right\}}\,\right].$$
The density function of the following example belongs to the scale family. However, one can use Theorem 17.5 to find a pivot for the parameter and determine the interval estimators for the parameter.

Example 17.13. If $X_1, \ldots, X_n$ is a random sample from a distribution with density function

$$f(x;\theta) = \begin{cases} \frac{1}{\theta}\, e^{-\frac{x}{\theta}} & \text{if } 0 < x < \infty \\ 0 & \text{otherwise,} \end{cases}$$

where $\theta > 0$ is a parameter, then what is the $100(1-\alpha)\%$ confidence interval for $\theta$?

Answer: The cumulative distribution function $F(x;\theta)$ of this density is

$$F(x;\theta) = 1 - e^{-\frac{x}{\theta}}.$$

Hence

$$-2\sum_{i=1}^n \ln\big(1 - F(X_i;\theta)\big) = \frac{2}{\theta} \sum_{i=1}^n X_i.$$

Thus

$$\frac{2}{\theta} \sum_{i=1}^n X_i \sim \chi^2(2n).$$

We take $Q = \frac{2}{\theta}\sum_{i=1}^n X_i$ as the pivotal quantity. The $100(1-\alpha)\%$ confidence interval for $\theta$ can be constructed using

$$\begin{aligned} 1-\alpha &= P\!\left(\chi^2_{\frac{\alpha}{2}}(2n) \le Q \le \chi^2_{1-\frac{\alpha}{2}}(2n)\right) \\ &= P\!\left(\chi^2_{\frac{\alpha}{2}}(2n) \le \frac{2}{\theta}\sum_{i=1}^n X_i \le \chi^2_{1-\frac{\alpha}{2}}(2n)\right) \\ &= P\!\left(\frac{2\sum_{i=1}^n X_i}{\chi^2_{1-\frac{\alpha}{2}}(2n)} \le \theta \le \frac{2\sum_{i=1}^n X_i}{\chi^2_{\frac{\alpha}{2}}(2n)}\right). \end{aligned}$$

Hence, the $100(1-\alpha)\%$ confidence interval for $\theta$ is given by

$$\left[\frac{2\sum_{i=1}^n X_i}{\chi^2_{1-\frac{\alpha}{2}}(2n)},\ \frac{2\sum_{i=1}^n X_i}{\chi^2_{\frac{\alpha}{2}}(2n)}\right].$$
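In practice, the interval of Example 17.13 is computed directly from the data. A sketch (NumPy/SciPy assumed; the data below are simulated stand-ins for real observations):

```python
# Sketch of the chi-square interval of Example 17.13 for the exponential scale.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
theta_true, n, alpha = 2.0, 20, 0.05
x = rng.exponential(theta_true, size=n)   # simulated sample

s = 2.0 * x.sum()
lower = s / chi2.ppf(1 - alpha / 2, 2 * n)
upper = s / chi2.ppf(alpha / 2, 2 * n)
print((round(lower, 3), round(upper, 3))) # covers theta_true in about 95% of samples
```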
In this section, we have seen that a $100(1-\alpha)\%$ confidence interval for the parameter $\theta$ can be constructed by taking the pivotal quantity $Q$ to be either

$$Q = -2\sum_{i=1}^n \ln F(X_i;\theta) \qquad\text{or}\qquad Q = -2\sum_{i=1}^n \ln\big(1 - F(X_i;\theta)\big).$$

In either case, the distribution of $Q$ is chi-square with $2n$ degrees of freedom, that is, $Q \sim \chi^2(2n)$. Since the chi-square distribution is not symmetric about the $y$-axis, the confidence intervals constructed in this section do not have the shortest length. In order to have a shortest confidence interval, one has to solve the following minimization problem:

$$\left.\begin{aligned} &\text{Minimize } L(a,b) \\ &\text{subject to the condition } \int_a^b f(u)\, du = 1-\alpha, \end{aligned}\right\} \qquad (MP)$$

where $f(x) = \frac{1}{\Gamma(n)\,2^n}\, x^{n-1} e^{-\frac{x}{2}}$ is the probability density function of $\chi^2(2n)$. In the case of Example 17.13, the minimization process leads to the following system of nonlinear equations:

$$\left.\begin{aligned} \int_a^b f(u)\, du &= 1-\alpha \\ \ln\frac{a}{b} &= \frac{a-b}{2(n+1)} \end{aligned}\right\} \qquad (NE)$$

If $a_o$ and $b_o$ are solutions of $(NE)$, then the shortest confidence interval for $\theta$ is given by

$$\left[\frac{2\sum_{i=1}^n X_i}{b_o},\ \frac{2\sum_{i=1}^n X_i}{a_o}\right].$$

17.6. Approximate Confidence Interval for Parameter with MLE

In this section, we discuss how to construct an approximate $100(1-\alpha)\%$ confidence interval for a population parameter $\theta$ using its maximum likelihood estimator $\hat{\theta}$. Let $X_1, \ldots, X_n$ be a random sample from a population $X$ with density $f(x;\theta)$, and let $\hat{\theta}$ be the maximum likelihood estimator of $\theta$. If the sample size $n$ is large, then using the asymptotic property of the maximum likelihood estimator, we have

$$\frac{\hat{\theta} - E(\hat{\theta})}{\sqrt{Var(\hat{\theta})}} \sim N(0,1) \qquad \text{as } n \to \infty,$$

where $Var(\hat{\theta})$ denotes the variance of the estimator $\hat{\theta}$. Since, for large $n$, the maximum likelihood estimator of $\theta$ is unbiased, we get

$$\frac{\hat{\theta} - \theta}{\sqrt{Var(\hat{\theta})}} \sim N(0,1) \qquad \text{as } n \to \infty.$$
 1.96.  (1.96)2 (p Squaring both sides of the above inequality and simplifying, we get 78 (0.4231 p)2  The last inequality is equivalent to p2). 13.96306158 69.84520000 p + 81.84160000 p2 0.  Solving this quadratic inequality, we obtain [0.3196, 0.5338] as a 95% confidence interval for p. This interval is an improvement since its length is 0.2142 where as the length of the interval [0.3135, 0.5327] is 0.2192. Probability and Mathematical Statistics 529 Example 17.16. If X1, X2,..., Xn is a random sample from a population with density ✓ x✓ 1 if 0 < x < 1 f (x; ✓) = 8 < 0 otherwise, where ✓ > 0 is an unknown parameter, what is a 100(1 confidence interval for ✓ if the sample size is large? : ↵)% approximate Answer: The likelihood function L(✓) of the sample is L(✓) = ✓ x✓ i 1 . n i=1 Y Hence ln L(✓) = n ln ✓ + (✓ 1) n ln xi. i=1 X The first derivative of the logarithm of the likelihood function is d d✓ ln L(✓) = n ✓ + n i=1 X ln xi. Setting this derivative to zero and solving for ✓, we obtain ✓ = n n i=1 ln xi. Hence, the maximum likelihood estimator of ✓ is given by P ✓ = n n i=1 ln Xi. Finding the variance of this estimator is difficult. We compute its variance by computing the Cram´er-Rao bound for this estimator. The second derivative of the logarithm of the likelihood function is given by P b d2 d✓2 ln L(✓) = = d d✓ n ✓2. n ✓ + n i=1 X ln
17.7. The Statistical or General Method

In this method one uses a statistic $T = T(X_1, \ldots, X_n)$ and two probabilities $p_1$ and $p_2$ to determine functions $u_1$ and $u_2$ with

$$P(u_1 < \theta < u_2) = 1 - p_1 - p_2,$$

where $u_1 = u_1(t)$ and $u_2 = u_2(t)$. The statistic $T(X_1, \ldots, X_n)$ may be a sufficient statistic or a maximum likelihood estimator. If we minimize the length $u_2 - u_1$ of the confidence interval, subject to the condition $1 - p_1 - p_2 = 1-\alpha$ for $0 < \alpha < 1$, we obtain the shortest confidence interval based on the statistic $T$.

17.8. Criteria for Evaluating Confidence Intervals

In many situations, one can have more than one confidence interval for the same parameter $\theta$. Thus it is necessary to have a set of criteria to decide whether a particular interval is better than the others. Some well-known criteria are: (1) shortest length and (2) unbiasedness. We now briefly describe these criteria.

The criterion of shortest length demands that a good $100(1-\alpha)\%$ confidence interval $[L, U]$ of a parameter $\theta$ should have the shortest length $\ell = U - L$. In the pivotal quantity method, one finds a pivot $Q$ for a parameter $\theta$ and then converts the probability statement

$$P(a < Q < b) = 1-\alpha$$

to

$$P(L < \theta < U) = 1-\alpha,$$

and thereby obtains a $100(1-\alpha)\%$ confidence interval for $\theta$. If the constants $a$ and $b$ can be found such that the difference $U - L$, which depends on the sample $X_1, \ldots, X_n$, is minimum for every realization of the sample, then the random interval $[L, U]$ is said to be the shortest confidence interval based on $Q$.

If the pivotal quantity $Q$ has certain types of density functions, then one can easily construct a confidence interval of shortest length. The following result is important in this regard.
Theorem 17.6. Let the density function $h(q;\theta)$ of the pivot $Q$ be continuous and unimodal. If the interval $[a, b]$ contains a mode of $h$ and satisfies the conditions (i) $\int_a^b h(q;\theta)\, dq = 1-\alpha$ and (ii) $h(a) = h(b) > 0$, then the interval $[a, b]$ is of the shortest length among all intervals that satisfy condition (i).

If the density function is not unimodal, then minimization of $\ell$ is necessary to construct a shortest confidence interval. One weakness of this shortest-length criterion is that in some cases $\ell$ itself is a random variable. Often, the expected length of the interval, $E(\ell) = E(U - L)$, is also used as a criterion for evaluating the goodness of an interval. However, this too has a weakness: minimization of $E(\ell)$ depends on the unknown true value of the parameter $\theta$. If the sample size is very large, then every approximate confidence interval constructed using the MLE method has minimum expected length.

A confidence interval is only shortest based on a particular pivot $Q$. It is possible to find another pivot $Q^\star$ which may yield an even shorter interval than the shortest interval found based on $Q$. The question that naturally arises is how to find the pivot that gives the shortest confidence interval among all pivots. It has been pointed out that a pivotal quantity $Q$ which is a function of the complete and sufficient statistic gives the shortest confidence interval.

Unbiasedness is yet another criterion for judging the goodness of an interval estimator. It is defined as follows: a $100(1-\alpha)\%$ confidence interval $[L, U]$ of the parameter $\theta$ is said to be unbiased if

$$P(L \le \theta^\star \le U) \begin{cases} \ge 1-\alpha & \text{if } \theta^\star = \theta \\ \le 1-\alpha & \text{if } \theta^\star \ne \theta. \end{cases}$$

17.9. Review Exercises

1. Let $X_1, \ldots, X_n$ be a random sample from a population with gamma density function

$$f(x;\theta,\beta) = \begin{cases} \frac{1}{\Gamma(\beta)\,\theta^\beta}\, x^{\beta-1}\, e^{-\frac{x}{\theta}} & \text{for } 0 < x < \infty \\ 0 & \text{otherwise,} \end{cases}$$
where $\theta$ is an unknown parameter and $\beta > 0$ is a known parameter. Show that

$$\left[\frac{2\sum_{i=1}^n X_i}{\chi^2_{1-\frac{\alpha}{2}}(2n\beta)},\ \frac{2\sum_{i=1}^n X_i}{\chi^2_{\frac{\alpha}{2}}(2n\beta)}\right]$$

is a $100(1-\alpha)\%$ confidence interval for the parameter $\theta$.

2. Let $X_1, \ldots, X_n$ be a random sample from a population with Weibull density function

$$f(x;\theta,\beta) = \begin{cases} \frac{\beta}{\theta}\, x^{\beta-1}\, e^{-\frac{x^\beta}{\theta}} & \text{for } 0 < x < \infty \\ 0 & \text{otherwise,} \end{cases}$$

where $\theta$ is an unknown parameter and $\beta > 0$ is a known parameter. Show that

$$\left[\frac{2\sum_{i=1}^n X_i^\beta}{\chi^2_{1-\frac{\alpha}{2}}(2n)},\ \frac{2\sum_{i=1}^n X_i^\beta}{\chi^2_{\frac{\alpha}{2}}(2n)}\right]$$

is a $100(1-\alpha)\%$ confidence interval for the parameter $\theta$.

3. Let $X_1, \ldots, X_n$ be a random sample from a population with Pareto density function

$$f(x;\theta,\nu) = \begin{cases} \theta\, \nu^{\theta}\, x^{-(\theta+1)} & \text{for } \nu \le x < \infty \\ 0 & \text{otherwise,} \end{cases}$$

where $\theta$ is an unknown parameter and $\nu > 0$ is a known parameter. Show that

$$\left[\frac{2\sum_{i=1}^n \ln\!\left(\frac{X_i}{\nu}\right)}{\chi^2_{1-\frac{\alpha}{2}}(2n)},\ \frac{2\sum_{i=1}^n \ln\!\left(\frac{X_i}{\nu}\right)}{\chi^2_{\frac{\alpha}{2}}(2n)}\right]$$

is a $100(1-\alpha)\%$ confidence interval for $\frac{1}{\theta}$.

4. Let $X_1, \ldots, X_n$ be a random sample from a population with Laplace density function

$$f(x;\theta) = \frac{1}{2\theta}\, e^{-\frac{|x|}{\theta}}, \qquad -\infty < x < \infty,$$

where $\theta$ is an unknown parameter. Show that

$$\left[\frac{2\sum_{i=1}^n |X_i|}{\chi^2_{1-\frac{\alpha}{2}}(2n)},\ \frac{2\sum_{i=1}^n |X_i|}{\chi^2_{\frac{\alpha}{2}}(2n)}\right]$$
is a $100(1-\alpha)\%$ confidence interval for $\theta$.

5. Let $X_1, \ldots, X_n$ be a random sample from a population with density function

$$f(x;\theta) = \begin{cases} \frac{1}{2\theta^2}\, x^3\, e^{-\frac{x^2}{2\theta}} & \text{for } 0 < x < \infty \\ 0 & \text{otherwise,} \end{cases}$$

where $\theta$ is an unknown parameter. Show that

$$\left[\frac{\sum_{i=1}^n X_i^2}{\chi^2_{1-\frac{\alpha}{2}}(4n)},\ \frac{\sum_{i=1}^n X_i^2}{\chi^2_{\frac{\alpha}{2}}(4n)}\right]$$

is a $100(1-\alpha)\%$ confidence interval for $\theta$.

6. Let $X_1, \ldots, X_n$ be a random sample from a population with density function

$$f(x;\theta,\beta) = \begin{cases} \dfrac{\theta\,\beta\, x^{\beta-1}}{(1+x^\beta)^{\theta+1}} & \text{for } 0 < x < \infty \\ 0 & \text{otherwise,} \end{cases}$$

where $\theta$ is an unknown parameter and $\beta > 0$ is a known parameter. Show that

$$\left[\frac{\chi^2_{\frac{\alpha}{2}}(2n)}{2\sum_{i=1}^n \ln\!\left(1 + X_i^\beta\right)},\ \frac{\chi^2_{1-\frac{\alpha}{2}}(2n)}{2\sum_{i=1}^n \ln\!\left(1 + X_i^\beta\right)}\right]$$

is a $100(1-\alpha)\%$ confidence interval for $\theta$.

7. Let $X_1, \ldots, X_n$ be a random sample from a population with density function

$$f(x;\theta) = \begin{cases} e^{-(x-\theta)} & \text{if } \theta < x < \infty \\ 0 & \text{otherwise,} \end{cases}$$

where $\theta \in \mathbb{R}$ is an unknown parameter. Then show that $Q = X_{(1)} - \theta$ is a pivotal quantity. Using this pivotal quantity, find a $100(1-\alpha)\%$ confidence interval for $\theta$.

8. Let $X_1, \ldots, X_n$ be a random sample from a population with density function

$$f(x;\theta) = \begin{cases} e^{-(x-\theta)} & \text{if } \theta < x < \infty \\ 0 & \text{otherwise,} \end{cases}$$

where $\theta \in \mathbb{R}$ is an unknown parameter. Then show that $Q = 2n\left(X_{(1)} - \theta\right)$ is
a pivotal quantity. Using this pivotal quantity, find a $100(1-\alpha)\%$ confidence interval for $\theta$.

9. Let $X_1, \ldots, X_n$ be a random sample from a population with density function

$$f(x;\theta) = \begin{cases} e^{-(x-\theta)} & \text{if } \theta < x < \infty \\ 0 & \text{otherwise,} \end{cases}$$

where $\theta \in \mathbb{R}$ is an unknown parameter. Then show that $Q = e^{-\left(X_{(1)} - \theta\right)}$ is a pivotal quantity. Using this pivotal quantity, find a $100(1-\alpha)\%$ confidence interval for $\theta$.

10. Let $X_1, \ldots, X_n$ be a random sample from a population with uniform density function

$$f(x;\theta) = \begin{cases} \frac{1}{\theta} & \text{if } 0 \le x \le \theta \\ 0 & \text{otherwise,} \end{cases}$$

where $0 < \theta$ is an unknown parameter. Then show that $Q = \frac{X_{(n)}}{\theta}$ is a pivotal quantity. Using this pivotal quantity, find a $100(1-\alpha)\%$ confidence interval for $\theta$.

11. Let $X_1, \ldots, X_n$ be a random sample from a population with uniform density function

$$f(x;\theta) = \begin{cases} \frac{1}{\theta} & \text{if } 0 \le x \le \theta \\ 0 & \text{otherwise,} \end{cases}$$

where $0 < \theta$ is an unknown parameter. Then show that $Q = \frac{X_{(n)} - X_{(1)}}{\theta}$ is a pivotal quantity. Using this pivotal quantity, find a $100(1-\alpha)\%$ confidence interval for $\theta$.

12. If $X_1, \ldots, X_n$ is a random sample from a population with density

$$f(x;\theta) = \begin{cases} \sqrt{\frac{2}{\pi}}\, e^{-\frac{1}{2}(x-\theta)^2} & \text{if } \theta \le x < \infty \\ 0 & \text{otherwise,} \end{cases}$$

where $\theta$ is an unknown parameter, what is a $100(1-\alpha)\%$ approximate confidence interval for $\theta$ if the sample size is large?

13. Let $X_1, \ldots, X_n$ be a random sample of size $n$ from a distribution with a probability density function

$$f(x;\theta) = \begin{cases} (\theta+1)\, x^{-(\theta+2)} & \text{if } 1 < x < \infty \\ 0 & \text{otherwise,} \end{cases}$$

where $0 < \theta$ is a parameter.
What is a $100(1-\alpha)\%$ approximate confidence interval for $\theta$ if the sample size is large?

14. Let $X_1, \ldots, X_n$ be a random sample of size $n$ from a distribution with a probability density function

$$f(x;\theta) = \begin{cases} \theta^2\, x\, e^{-\theta x} & \text{if } 0 < x < \infty \\ 0 & \text{otherwise,} \end{cases}$$

where $0 < \theta$ is a parameter. What is a $100(1-\alpha)\%$ approximate confidence interval for $\theta$ if the sample size is large?

15. Let $X_1, \ldots, X_n$ be a random sample from a distribution with density function

$$f(x;\theta) = \begin{cases} \frac{1}{\theta}\, e^{-\frac{x-4}{\theta}} & \text{for } x > 4 \\ 0 & \text{otherwise,} \end{cases}$$

where $\theta > 0$. What is a $100(1-\alpha)\%$ approximate confidence interval for $\theta$ if the sample size is large?

16. Let $X_1, \ldots, X_n$ be a random sample from a distribution with density function

$$f(x;\theta) = \begin{cases} \frac{1}{\theta} & \text{for } 0 \le x \le \theta \\ 0 & \text{otherwise,} \end{cases}$$

where $0 < \theta$. What is a $100(1-\alpha)\%$ approximate confidence interval for $\theta$ if the sample size is large?

17. A sample $X_1, \ldots, X_n$ of size $n$ is drawn from a gamma distribution

$$f(x;\theta) = \begin{cases} \dfrac{x^3\, e^{-\frac{x}{\theta}}}{6\,\theta^4} & \text{if } 0 < x < \infty \\ 0 & \text{otherwise.} \end{cases}$$

What is a $100(1-\alpha)\%$ approximate confidence interval for $\theta$ if the sample size is large?

18. Let $X_1, \ldots, X_n$ be a random sample from a continuous population $X$ with a distribution function $F(x;\theta)$. Show that the statistic $Q = -2\sum_{i=1}^n \ln F(X_i;\theta)$ is a pivotal quantity and has a chi-square distribution with $2n$ degrees of freedom.

19. Let $X_1, \ldots, X_n$ be a random sample from a continuous population $X$ with a distribution function $F(x;\theta)$. Show that the statistic $Q = -2\sum_{i=1}^n \ln\left(1 - F(X_i;\theta)\right)$ is a pivotal quantity and has a chi-square
distribution with $2n$ degrees of freedom.

Chapter 18

TEST OF STATISTICAL HYPOTHESES FOR PARAMETERS

18.1. Introduction

Inferential statistics consists of estimation and hypothesis testing. We have already discussed various methods of finding point and interval estimators of parameters. We have also examined the goodness of an estimator.

Suppose $X_1, \ldots, X_n$ is a random sample from a population with probability density function given by

$$f(x;\theta) = \begin{cases} (1+\theta)\, x^\theta & \text{for } 0 < x < 1 \\ 0 & \text{otherwise,} \end{cases}$$

where $\theta > 0$ is an unknown parameter. Further, let $n = 4$ and suppose $x_1 = 0.92$, $x_2 = 0.75$, $x_3 = 0.85$, $x_4 = 0.80$ is a set of random sample data from the above distribution. If we apply the maximum likelihood method, then we find that the estimator $\hat{\theta}$ of $\theta$ is

$$\hat{\theta} = -1 - \frac{4}{\ln(X_1) + \ln(X_2) + \ln(X_3) + \ln(X_4)}.$$

Hence, the maximum likelihood estimate of $\theta$ is

$$\hat{\theta} = -1 - \frac{4}{\ln(0.92) + \ln(0.75) + \ln(0.85) + \ln(0.80)} = -1 + \frac{4}{0.7567} = 4.2861.$$

Therefore, the corresponding probability density function of the population is given by

$$f(x) = \begin{cases} 5.2861\, x^{4.2861} & \text{for } 0 < x < 1 \\ 0 & \text{otherwise.} \end{cases}$$

Since the point estimate will rarely equal the true value of $\theta$, we would like to report a range of values with some degree of confidence. If we want to report an interval of values for $\theta$ with a confidence level of 90%, then we need a 90% confidence interval for $\theta$. If we use the pivotal quantity method, then we find that the confidence interval for $\theta$ is

$$\left[-1 - \frac{\chi^2_{\frac{\alpha}{2}}(8)}{2\sum_{i=1}^4 \ln X_i},\ -1 - \frac{\chi^2_{1-\frac{\alpha}{2}}(8)}{2\sum_{i=1}^4 \ln X_i}\right].$$

Since $\chi^2_{0.05}(8) = 2.73$, $\chi^2_{0.95}(8) = 15.51$, and $\sum_{i=1}^4 \ln(x_i) = -0.7567$, we obtain

$$\left[-1 + \frac{2.73}{2\,(0.7567)},\ -1 + \frac{15.51}{2\,(0.7567)}\right],$$

which is $[0.803, 9.249]$.
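Both the estimate and the interval can be reproduced numerically. A sketch (NumPy/SciPy assumed; `chi2.ppf` replaces the chi-square table, so the endpoints differ from the table-based values in the last decimals):

```python
# Sketch reproducing the introduction's point estimate and 90% interval for theta.
import numpy as np
from scipy.stats import chi2

x = np.array([0.92, 0.75, 0.85, 0.80])
n, alpha = len(x), 0.10
s = np.log(x).sum()                       # about -0.7567

theta_hat = -1 - n / s
lo = -1 - chi2.ppf(alpha / 2, 2 * n) / (2 * s)
hi = -1 - chi2.ppf(1 - alpha / 2, 2 * n) / (2 * s)
print(round(theta_hat, 4), (round(lo, 3), round(hi, 3)))  # ~4.286, ~(0.806, 9.247)
```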
Thus we may draw the inference, at a 90% confidence level, that the population $X$ has the distribution

$$f(x;\theta) = \begin{cases} (1+\theta)\, x^\theta & \text{for } 0 < x < 1 \\ 0 & \text{otherwise,} \end{cases} \qquad (\star)$$

where $\theta \in [0.803, 9.249]$. If we think carefully, we notice that we have made one assumption: that the observable quantity $X$ can be modeled by a density function as shown in $(\star)$. Since we are concerned with parametric statistics, our assumption is in fact about $\theta$.

Based on the sample data, we found that an interval estimate of $\theta$ at a 90% confidence level is $[0.803, 9.249]$. But we assumed that $\theta \in [0.803, 9.249]$. However, we cannot be sure that our assumption regarding the parameter is real and is not due to chance in the random sampling process. The validation of this assumption can be done by a hypothesis test. In this chapter, we discuss the testing of statistical hypotheses. Most of the ideas regarding hypothesis testing came from Jerzy Neyman and Egon Pearson during 1928-1938.

Definition 18.1. A statistical hypothesis $H$ is a conjecture about the distribution $f(x;\theta)$ of a population $X$. This conjecture is usually about the parameter $\theta$ if one is dealing with parametric statistics; otherwise it is about the form of the distribution of $X$.

Definition 18.2. A hypothesis $H$ is said to be a simple hypothesis if $H$ completely specifies the density $f(x;\theta)$ of the population; otherwise it is called a composite hypothesis.

Definition 18.3. The hypothesis to be tested is called the null hypothesis.
The negation of the null hypothesis is called the alternative hypothesis. The null and alternative hypotheses are denoted by $H_o$ and $H_a$, respectively.

If $\theta$ denotes a population parameter, then the general format of the null hypothesis and alternative hypothesis is

$$H_o: \theta \in \Omega_o \qquad \text{and} \qquad H_a: \theta \in \Omega_a, \qquad (\star)$$

where $\Omega_o$ and $\Omega_a$ are subsets of the parameter space $\Omega$ with $\Omega_o \cap \Omega_a = \emptyset$ and $\Omega_o \cup \Omega_a \subseteq \Omega$.

Remark 18.1. If $\Omega_o \cup \Omega_a = \Omega$, then $(\star)$ becomes

$$H_o: \theta \in \Omega_o \qquad \text{and} \qquad H_a: \theta \notin \Omega_o.$$

If $\Omega_o$ is a singleton set, then $H_o$ reduces to a simple hypothesis. For example, if $\Omega_o = \{4.2861\}$, the null hypothesis becomes $H_o: \theta = 4.2861$ and the alternative hypothesis becomes $H_a: \theta \ne 4.2861$. Hence, the null hypothesis $H_o: \theta = 4.2861$ is a simple hypothesis and the alternative $H_a: \theta \ne 4.2861$ is a composite hypothesis.

Definition 18.4. A hypothesis test is an ordered sequence

$$(X_1, X_2, \ldots, X_n;\ H_o,\ H_a;\ C),$$

where $X_1, \ldots, X_n$ is a random sample from a population $X$ with the probability density function $f(x;\theta)$, $H_o$ and $H_a$ are hypotheses concerning the parameter $\theta$ in $f(x;\theta)$, and $C$ is a Borel set in $\mathbb{R}^n$.

Remark 18.2. Borel sets are defined using the notion of a $\sigma$-algebra. A collection $\mathcal{A}$ of subsets of a set $S$ is called a $\sigma$-algebra if (i) $S \in \mathcal{A}$, (ii) $A^c \in \mathcal{A}$ whenever $A \in \mathcal{A}$, and (iii) $\cup_{k=1}^{\infty} A_k \in \mathcal{A}$ whenever $A_1, A_2, \ldots, A_n, \ldots \in \mathcal{A}$. The Borel sets are the members of the smallest $\sigma$-algebra containing all open sets of $\mathbb{R}^n$. Two examples of Borel sets in $\mathbb{R}^n$ are the sets that arise by countable unions of closed intervals in $\mathbb{R}^n$, and countable intersections of open sets in $\mathbb{R}^n$.

The set $C$ is called the critical region
in the hypothesis test. The critical region is obtained using a test statistic $W(X_1, \ldots, X_n)$. If the outcome of $(X_1, \ldots, X_n)$ turns out to be an element of $C$, then we decide to accept $H_a$; otherwise we accept $H_o$.

Broadly speaking, a hypothesis test is a rule that tells us for which sample values we should decide to accept $H_o$ as true and for which sample values we should decide to reject $H_o$ and accept $H_a$ as true. Typically, a hypothesis test is specified in terms of a test statistic $W$. For example, a test might specify that $H_o$ is to be rejected if the sample total $\sum_{k=1}^n X_k$ is less than 8. In this case the critical region $C$ is the set $\{(x_1, \ldots, x_n) \mid x_1 + x_2 + \cdots + x_n < 8\}$.

18.2. A Method of Finding Tests

There are several methods to find test procedures: (1) likelihood ratio tests, (2) invariant tests, (3) Bayesian tests, and (4) union-intersection and intersection-union tests. In this section, we only examine likelihood ratio tests.

Definition 18.5. The likelihood ratio test statistic for testing the simple null hypothesis $H_o: \theta \in \Omega_o$ against the composite alternative hypothesis $H_a: \theta \notin \Omega_o$ based on a set of random sample data $x_1, \ldots, x_n$ is defined as

$$W(x_1, \ldots, x_n) = \frac{\displaystyle\max_{\theta \in \Omega_o} L(\theta, x_1, \ldots, x_n)}{\displaystyle\max_{\theta \in \Omega} L(\theta, x_1, \ldots, x_n)},$$

where $\Omega$ denotes the parameter space and $L(\theta, x_1, \ldots, x_n)$ denotes the likelihood function of the random sample, that is,

$$L(\theta, x_1, \ldots, x_n) = \prod_{i=1}^n f(x_i;\theta).$$

A likelihood ratio test (LRT) is any test that has a critical region $C$ (that is, rejection region) of the form
$$C = \{(x_1, \ldots, x_n) \mid W(x_1, \ldots, x_n) \le k\},$$

where $k$ is a number in the unit interval $[0, 1]$.

If $H_o: \theta = \theta_o$ and $H_a: \theta = \theta_a$ are both simple hypotheses, then the likelihood ratio test statistic is defined as

$$W(x_1, \ldots, x_n) = \frac{L(\theta_o, x_1, \ldots, x_n)}{L(\theta_a, x_1, \ldots, x_n)}.$$

Now we give some examples to illustrate this definition.

Example 18.1. Let $X_1, X_2, X_3$ denote three independent observations from a distribution with density

$$f(x;\theta) = \begin{cases} (1+\theta)\, x^\theta & \text{if } 0 \le x \le 1 \\ 0 & \text{otherwise.} \end{cases}$$

What is the form of the LRT critical region for testing $H_o: \theta = 1$ versus $H_a: \theta = 2$?

Answer: In this example, $\theta_o = 1$ and $\theta_a = 2$. By the above definition, the form of the critical region is given by

$$\begin{aligned} C &= \left\{(x_1,x_2,x_3) \in \mathbb{R}^3 \;\middle|\; \frac{L(\theta_o, x_1, x_2, x_3)}{L(\theta_a, x_1, x_2, x_3)} \le k\right\} \\ &= \left\{(x_1,x_2,x_3) \in \mathbb{R}^3 \;\middle|\; \frac{(1+\theta_o)^3 \prod_{i=1}^3 x_i^{\theta_o}}{(1+\theta_a)^3 \prod_{i=1}^3 x_i^{\theta_a}} \le k\right\} \\ &= \left\{(x_1,x_2,x_3) \in \mathbb{R}^3 \;\middle|\; \frac{8\, x_1 x_2 x_3}{27\, x_1^2 x_2^2 x_3^2} \le k\right\} \\ &= \left\{(x_1,x_2,x_3) \in \mathbb{R}^3 \;\middle|\; \frac{1}{x_1 x_2 x_3} \le \frac{27}{8}\, k\right\} \\ &= \left\{(x_1,x_2,x_3) \in \mathbb{R}^3 \;\middle|\; x_1 x_2 x_3 \ge a\right\}, \end{aligned}$$

where $a$ is some constant. Hence the likelihood ratio test is of the form: "Reject $H_o$ if $\prod_{i=1}^3 X_i \ge a$."
In a similar example, the alternative density is the standard normal, $L_a(x) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}x^2}$, and the critical region

$$C = \left\{x \in \mathbb{R} \;\middle|\; \frac{L_o(x)}{L_a(x)} \le k\right\} = \left\{x \in \mathbb{R} \;\middle|\; \sqrt{2\pi}\, L_o(x)\, e^{\frac{x^2}{2}} \le k\right\}$$

simplifies, after taking logarithms, to a region of the form $\{x \in \mathbb{R} \mid x \le a\}$ for some constant $a$; the likelihood ratio test is then of the form: "Reject $H_o$ if $X \le a$."

In the above examples, we have dealt with the case when the null as well as the alternative hypotheses were simple. If the null hypothesis is simple (for example, $H_o: \theta = \theta_o$) and the alternative is a composite hypothesis (for example, $H_a: \theta \ne \theta_o$), then the following algorithm can be used to construct the likelihood ratio critical region:

(1) Find the likelihood function $L(\theta, x_1, \ldots, x_n)$ for the given sample.
(2) Find $L(\theta_o, x_1, \ldots, x_n)$.

(3) Find $\displaystyle\max_{\theta \in \Omega} L(\theta, x_1, \ldots, x_n)$.

(4) Rewrite $\dfrac{L(\theta_o, x_1, \ldots, x_n)}{\max_{\theta \in \Omega} L(\theta, x_1, \ldots, x_n)}$ in a "suitable form".

(5) Use step (4) to construct the critical region.

Now we give an example to illustrate these steps.

Example 18.4. Let $X$ be a single observation from a population with probability density

$$f(x;\theta) = \begin{cases} \dfrac{\theta^x\, e^{-\theta}}{x!} & \text{for } x = 0, 1, 2, \ldots \\ 0 & \text{otherwise,} \end{cases}$$

where $\theta \ge 0$. Find the likelihood ratio critical region for testing the null hypothesis $H_o: \theta = 2$ against the composite alternative $H_a: \theta \ne 2$.

Answer: The likelihood function based on one observation $x$ is

$$L(\theta, x) = \frac{\theta^x\, e^{-\theta}}{x!}.$$

Next, we find $L(\theta_o, x)$, which is given by

$$L(2, x) = \frac{2^x\, e^{-2}}{x!}.$$

Our next step is to evaluate $\max_{\theta \ge 0} L(\theta, x)$. For this we differentiate $L(\theta, x)$ with respect to $\theta$, and then set the derivative to 0 and solve for $\theta$. Hence

$$\frac{dL(\theta,x)}{d\theta} = \frac{1}{x!}\, e^{-\theta} \left[x\,\theta^{x-1} - \theta^x\right],$$

and $\frac{dL(\theta,x)}{d\theta} = 0$ gives $\theta = x$. Hence

$$\max_{\theta \ge 0} L(\theta, x) = \frac{x^x\, e^{-x}}{x!}.$$

For step (4), we consider

$$\frac{L(2,x)}{\max_{\theta \in \Omega} L(\theta, x)} = \frac{\dfrac{2^x e^{-2}}{x!}}{\dfrac{x^x e^{-x}}{x!}},$$

which simplifies to

$$\frac{L(2,x)}{\max_{\theta \in \Omega} L(\theta, x)} = \left(\frac{2e}{x}\right)^x e^{-2}.$$

Thus, the likelihood ratio critical region is given by

$$C = \left\{x \in \mathbb{R} \;\middle|\; \left(\frac{2e}{x}\right)^x e^{-2} \le k\right\} = \left\{x \in \mathbb{R} \;\middle|\; \left(\frac{2e}{x}\right)^x \le a\right\},$$

where $a$ is some constant. The likelihood ratio test is of the form: "Reject $H_o$ if $\left(\frac{2e}{X}\right)^X \le a$."
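The shape of this critical region is easy to see numerically: the ratio statistic peaks at the null value and decays in both tails, so small values of the statistic correspond to extreme observations. A quick sketch in plain Python:

```python
# Sketch: the Poisson LRT statistic w(x) = (2e/x)^x * e^(-2) from Example 18.4.
import math

def w(x):
    if x == 0:
        return math.exp(-2)               # limiting value of the ratio at x = 0
    return (2 * math.e / x) ** x * math.exp(-2)

for x in range(0, 11):
    print(x, round(w(x), 4))
# w(x) is largest near x = 2 (the null value) and small in both tails, so the
# rule "reject when w(X) <= a" rejects for observations far from 2.
```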
So far, we have learned how to find tests for testing the null hypothesis against the alternative hypothesis. However, we have not considered the goodness of these tests. In the next section, we consider various criteria for evaluating the goodness of a hypothesis test.

18.3. Methods of Evaluating Tests

There are several criteria to evaluate the goodness of a test procedure. Some well-known criteria are: (1) powerfulness, (2) unbiasedness and invariancy, and (3) local powerfulness. In order to examine some of these criteria, we need some terminology, such as error probabilities, power functions, type I error, and type II error. First, we develop this terminology.

A statistical hypothesis is a conjecture about the distribution $f(x;\theta)$ of the population $X$. This conjecture is usually about the parameter $\theta$ if one is dealing with parametric statistics; otherwise it is about the form of the distribution of $X$. If the hypothesis completely specifies the density $f(x;\theta)$ of the population, then it is said to be a simple hypothesis; otherwise it is called a composite hypothesis. The hypothesis to be tested is called the null hypothesis. We often hope to reject the null hypothesis based on the sample information. The negation of the null hypothesis is called the alternative hypothesis. The null and alternative hypotheses are denoted by $H_o$ and $H_a$, respectively.

In a hypothesis test, the basic problem is to decide, based on the sample information, whether the null hypothesis is true. There are four possible situations that determine whether our decision is correct or in error. These four situations are summarized below:

                    $H_o$ is true        $H_o$ is false
    Accept $H_o$    Correct decision     Type II error
    Reject $H_o$    Type I error         Correct decision

Definition 18.6. Let $H_o: \theta \in \Omega_o$ and $H_a: \theta \notin \Omega_o$ be the null and alternative hypotheses to be tested based on a random sample $X_1, \ldots, X_n$ from a population $X$ with density $f(x;\theta)$, where $\theta$ is a parameter. The significance level of the hypothesis test

$$H_o: \theta \in \Omega_o \qquad \text{and} \qquad H_a: \theta \notin \Omega_o,$$

denoted by $\alpha$, is defined as

$$\alpha = P(\text{Type I error}).$$

Thus, by the significance level of a hypothesis test we mean the probability of rejecting a true null hypothesis, that is,

$$\alpha = P(\text{Reject } H_o \mid H_o \text{ is true}).$$

This is also equivalent to

$$\alpha = P(\text{Accept } H_a \mid H_o \text{ is true}).$$

Definition 18.7. Let $H_o: \theta \in \Omega_o$ and $H_a: \theta \notin \Omega_o$ be the null and alternative hypotheses to be tested based on a random sample $X_1, \ldots, X_n$ from a population $X$ with density $f(x;\theta)$, where $\theta$ is a parameter. The probability of type II error of the hypothesis test, denoted by $\beta$, is defined as

$$\beta = P(\text{Accept } H_o \mid H_o \text{ is false}).$$

Similarly, this is also equivalent to

$$\beta = P(\text{Accept } H_o \mid H_a \text{ is true}).$$

Remark 18.3.
Note that $\alpha$ can be numerically evaluated if the null hypothesis is a simple hypothesis and the rejection rule is given. Similarly, $\beta$ can be evaluated if the alternative hypothesis is simple and the rejection rule is known. If the null and the alternative are composite hypotheses, then $\alpha$ and $\beta$ become functions of $\theta$.

Example 18.5. Let $X_1, \ldots, X_{20}$ be a random sample from a distribution with probability density function

$$f(x;p) = \begin{cases} p^x (1-p)^{1-x} & \text{if } x = 0, 1 \\ 0 & \text{otherwise,} \end{cases}$$

where $0 < p \le \frac{1}{2}$ is a parameter. The hypothesis $H_o: p = \frac{1}{2}$ is to be tested against $H_a: p < \frac{1}{2}$. If $H_o$ is rejected when $\sum_{i=1}^{20} X_i \le 6$, then what is the probability of type I error?

Answer: Since each observation $X_i \sim BER(p)$, the sum of the observations $\sum_{i=1}^{20} X_i \sim BIN(20, p)$. The probability of type I error is given by

$$\begin{aligned} \alpha &= P(\text{Type I error}) = P(\text{Reject } H_o \mid H_o \text{ is true}) \\ &= P\!\left(\sum_{i=1}^{20} X_i \le 6 \;\middle|\; H_o: p = \tfrac{1}{2}\right) \\ &= \sum_{k=0}^{6} \binom{20}{k} \left(\tfrac{1}{2}\right)^k \left(\tfrac{1}{2}\right)^{20-k} = 0.0577 \end{aligned}$$

(from the binomial table). Hence the probability of type I error is 0.0577.

Example 18.6. Let $p$ represent the proportion of defectives in a manufacturing process. To test $H_o: p \le \frac{1}{4}$ versus $H_a: p > \frac{1}{4}$, a random sample of size 5 is taken from the process. If the number of defectives is 4 or more, the null hypothesis is rejected. What is the probability of rejecting $H_o$ if $p = \frac{1}{5}$?

Answer: Let $X$ denote the number of defectives out of a random sample of size 5. Then $X$ is a binomial random variable with $n = 5$ and $p = \frac{1}{5}$. Hence, the probability of rejecting $H_o$ is given by

$$\begin{aligned} \alpha &= P(\text{Reject } H_o \mid H_o \text{ is true}) = P\!\left(X \ge 4 \,\middle|\, p = \tfrac{1}{5}\right) \\ &= \binom{5}{4}\, p^4 (1-p) + \binom{5}{5}\, p^5 = \frac{21}{3125}. \end{aligned}$$

Hence, the probability of rejecting $H_o$ at $p = \frac{1}{5}$ is $\frac{21}{3125} \approx 0.0067$.
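Both probabilities can be checked against the binomial distribution in SciPy (a sketch; `binom.cdf` and `binom.sf` replace the binomial table):

```python
# Sketch checking the two type I error computations above.
from scipy.stats import binom

print(round(binom.cdf(6, 20, 0.5), 4))   # Example 18.5: P(sum <= 6) = 0.0577
print(round(binom.sf(3, 5, 0.2), 4))     # Example 18.6: P(X >= 4) = 21/3125 = 0.0067
```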
A small probability of type II error is equivalent to a large power of the test.

Definition 18.8. Let $H_o: \theta \in \Omega_o$ and $H_a: \theta \notin \Omega_o$ be the null and alternative hypotheses to be tested based on a random sample $X_1, \ldots, X_n$ from a population $X$ with density $f(x;\theta)$, where $\theta$ is a parameter. The power function of the hypothesis test

$$H_o: \theta \in \Omega_o \qquad \text{versus} \qquad H_a: \theta \notin \Omega_o$$

is a function $\pi: \Omega \to [0,1]$ defined by

$$\pi(\theta) = \begin{cases} P(\text{Type I error}) & \text{if } H_o \text{ is true} \\ 1 - P(\text{Type II error}) & \text{if } H_a \text{ is true.} \end{cases}$$

Example 18.9. A manufacturing firm needs to test the null hypothesis $H_o$ that the probability $p$ of a defective item is 0.1 or less, against the alternative hypothesis $H_a: p > 0.1$. The procedure is to select two items at random. If both are defective, $H_o$ is rejected; otherwise, a third item is selected. If the third item is defective, $H_o$ is rejected. In all other cases, $H_o$ is accepted. What is the power of the test in terms of $p$ (if $H_o$ is true)?

Answer: Let $p$ be the probability of a defective item. We want to calculate the power of the test at the null hypothesis. The power function of the test is given by

$$\pi(p) = \begin{cases} P(\text{Type I error}) & \text{if } p \le 0.1 \\ 1 - P(\text{Type II error}) & \text{if } p > 0.1. \end{cases}$$

Hence, we have

$$\begin{aligned} \pi(p) &= P(\text{Reject } H_o \mid p) \\ &= P(\text{first two items are both defective} \mid p) \\ &\qquad + P(\text{at least one of the first two items is not defective and the third is} \mid p) \\ &= p^2 + (1-p)^2\, p + \binom{2}{1}\, p\,(1-p)\, p \\ &= p + p^2 - p^3. \end{aligned}$$

[Figure: graph of the power function $\pi(p) = p + p^2 - p^3$.]
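The closed form $\pi(p) = p + p^2 - p^3$ can be validated by simulating the two-or-three-item inspection plan directly. A sketch (NumPy assumed; seed and repetition count are arbitrary):

```python
# Sketch: Monte Carlo check of the power function in Example 18.9.
import numpy as np

rng = np.random.default_rng(5)

def reject_prob(p, reps=200000):
    items = rng.random((reps, 3)) < p     # True marks a defective item
    both = items[:, 0] & items[:, 1]      # reject if the first two are defective
    third = ~both & items[:, 2]           # otherwise reject if the third is defective
    return np.mean(both | third)

for p in (0.05, 0.1, 0.3):
    print(p, round(reject_prob(p), 4), round(p + p**2 - p**3, 4))
```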
Remark 18.4. If $X$ denotes the number of independent trials needed to obtain the first success, then $X \sim GEO(p)$, and

$$P(X = k) = (1-p)^{k-1}\, p, \qquad k = 1, 2, 3, \ldots.$$

Further,

$$P(X \le n) = \sum_{k=1}^{n} (1-p)^{k-1}\, p = p\, \frac{1 - (1-p)^n}{1 - (1-p)} = 1 - (1-p)^n.$$

Example 18.10. Let $X$ be the number of independent trials required to obtain a success, where $p$ is the probability of success on each trial. The hypothesis $H_o: p = 0.1$ is to be tested against the alternative $H_a: p = 0.3$. The hypothesis is rejected if $X \le 4$. What is the power of the test if $H_a$ is true?

Answer: The power function is given by

$$\pi(p) = \begin{cases} P(\text{Type I error}) & \text{if } p = 0.1 \\ 1 - P(\text{Type II error}) & \text{if } p = 0.3. \end{cases}$$

Hence, the power at the alternative is

$$\begin{aligned} \pi(0.3) &= 1 - P(\text{Accept } H_o \mid H_o \text{ is false}) = P(\text{Reject } H_o \mid H_a \text{ is true}) \\ &= P(X \le 4 \mid p = 0.3) = \sum_{k=1}^{4} (0.7)^{k-1}\,(0.3) \\ &= 1 - (0.7)^4 = 0.7599. \end{aligned}$$

Hence, the power of the test at the alternative is 0.7599.
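A one-line check of this geometric computation in plain Python:

```python
# Check of Example 18.10: P(X <= 4) = 1 - (1-p)^4 for geometric X.
p = 0.3
print(round(1 - (1 - p) ** 4, 4))   # 0.7599
```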
Example 18.11. Let $X_1, \ldots, X_{25}$ be a random sample of size 25 drawn from a normal distribution with unknown mean $\mu$ and variance $\sigma^2 = 100$. It is desired to test the null hypothesis $\mu = 4$ against the alternative $\mu = 6$. The power at $\mu = 6$ is computed in the same manner for a test with rejection rule of the form: reject $\mu = 4$ if the sample total $\sum_{i=1}^{25} X_i$ is sufficiently large.

In designing a test, one wants the probability of type I error to be as small as possible and the power of the test at the alternative to be as large as possible. Next, we give two definitions that will lead us to the definition of a uniformly most powerful test.

Definition 18.9. Given $0 \le \alpha \le 1$, a test (or test procedure) $T$ for testing the null hypothesis $H_o: \theta \in \Omega_o$ against the alternative $H_a: \theta \in \Omega_a$ is said to be a test of level $\alpha$ if

$$\max_{\theta \in \Omega_o} \pi(\theta) \le \alpha,$$

where $\pi(\theta)$ denotes the power function of the test $T$.

Definition 18.10. Given $0 \le \alpha \le 1$, a test (or test procedure) for testing the null hypothesis $H_o: \theta \in \Omega_o$ against the alternative $H_a: \theta \in \Omega_a$ is said to be a test of size $\alpha$ if

$$\max_{\theta \in \Omega_o} \pi(\theta) = \alpha.$$

Definition 18.11. Let $T$ be a test procedure for testing the null hypothesis $H_o: \theta \in \Omega_o$ against the alternative $H_a: \theta \in \Omega_a$. The test (or test procedure) $T$ is said to be the uniformly most powerful (UMP) test of level $\alpha$ if $T$ is of level $\alpha$ and for any other test $W$ of level $\alpha$,

$$\pi_T(\theta) \ge \pi_W(\theta) \qquad \text{for all } \theta \in \Omega_a.$$

Here $\pi_T(\theta)$ and $\pi_W(\theta)$ denote the power functions of the tests $T$ and $W$, respectively.

Remark 18.5. If $T$ is a test procedure for testing $H_o: \theta = \theta_o$ against $H_a: \theta = \theta_a$ based on sample data $x_1, \ldots, x_n$ from a population $X$ with a continuous probability density function $f(x;\theta)$, then there is a critical region $C$ associated with the test procedure $T$, and the power function of $T$ can be computed as

$$\pi_T = \int_C L(\theta_a, x_1, \ldots, x_n)\, dx_1 \cdots dx_n.$$

Similarly, the size of the critical region $C$, say $\alpha$, can be given by
$$\alpha = \int_C L(\theta_o, x_1, \ldots, x_n)\, dx_1 \cdots dx_n.$$

The following famous result tells us which tests are uniformly most powerful if the null hypothesis and the alternative hypothesis are both simple.

Theorem 18.1 (Neyman-Pearson). Let $X_1, \ldots, X_n$ be a random sample from a population with probability density function $f(x;\theta)$. Let

$$L(\theta, x_1, \ldots, x_n) = \prod_{i=1}^n f(x_i;\theta)$$

be the likelihood function of the sample. Then any critical region $C$ of the form

$$C = \left\{(x_1, \ldots, x_n) \;\middle|\; \frac{L(\theta_o, x_1, \ldots, x_n)}{L(\theta_a, x_1, \ldots, x_n)} \le k\right\}$$

for some constant $0 \le k < \infty$ is best (or uniformly most powerful) of its size for testing $H_o: \theta = \theta_o$ against $H_a: \theta = \theta_a$.

Proof: We assume that the population has a continuous probability density function. If the population has a discrete distribution, the proof can be appropriately modified by replacing integration by summation.

Let $C$ be the critical region of size $\alpha$ as described in the statement of the theorem. Let $B$ be any other critical region of size $\alpha$. We want to show that the power of $C$ is greater than or equal to that of $B$. In view of Remark 18.5, we would like to show that

$$\int_C L(\theta_a, x_1, \ldots, x_n)\, dx_1 \cdots dx_n \ge \int_B L(\theta_a, x_1, \ldots, x_n)\, dx_1 \cdots dx_n. \qquad (1)$$

Since $C$ and $B$ are both critical regions of size $\alpha$, we have

$$\int_C L(\theta_o, x_1, \ldots, x_n)\, dx_1 \cdots dx_n = \int_B L(\theta_o, x_1, \ldots, x_n)\, dx_1 \cdots dx_n. \qquad (2)$$

Writing $L(\theta_o)$ for $L(\theta_o, x_1, \ldots, x_n)$ and $dx$ for $dx_1 \cdots dx_n$, the last equality (2) can be written as

$$\int_{C \cap B} L(\theta_o)\, dx + \int_{C \cap B^c} L(\theta_o)\, dx = \int_{C \cap B} L(\theta_o)\, dx + \int_{C^c \cap B} L(\theta_o)\, dx,$$
since

$$C = (C \cap B) \cup (C \cap B^c) \qquad \text{and} \qquad B = (C \cap B) \cup (C^c \cap B). \qquad (3)$$

Therefore, from the last equality, we have

$$\int_{C \cap B^c} L(\theta_o)\, dx = \int_{C^c \cap B} L(\theta_o)\, dx. \qquad (4)$$

Since

$$C = \left\{(x_1, \ldots, x_n) \;\middle|\; \frac{L(\theta_o, x_1, \ldots, x_n)}{L(\theta_a, x_1, \ldots, x_n)} \le k\right\}, \qquad (5)$$

we have

$$L(\theta_a) \ge \frac{L(\theta_o)}{k} \quad \text{on } C, \qquad (6)$$

and

$$L(\theta_a) < \frac{L(\theta_o)}{k} \quad \text{on } C^c. \qquad (7)$$

Therefore, from (4), (6) and (7), we have

$$\int_{C \cap B^c} L(\theta_a)\, dx \ge \int_{C \cap B^c} \frac{L(\theta_o)}{k}\, dx = \int_{C^c \cap B} \frac{L(\theta_o)}{k}\, dx > \int_{C^c \cap B} L(\theta_a)\, dx.$$

Thus, we obtain

$$\int_{C \cap B^c} L(\theta_a)\, dx \ge \int_{C^c \cap B} L(\theta_a)\, dx.$$

From (3) and the last inequality, we see that
$$\begin{aligned} \int_C L(\theta_a)\, dx &= \int_{C \cap B} L(\theta_a)\, dx + \int_{C \cap B^c} L(\theta_a)\, dx \\ &\ge \int_{C \cap B} L(\theta_a)\, dx + \int_{C^c \cap B} L(\theta_a)\, dx \\ &= \int_B L(\theta_a)\, dx, \end{aligned}$$

and hence the theorem is proved.

Now we give several examples to illustrate the use of this theorem.

Example 18.13. Let $X$ be a random variable with a density function $f(x)$. What is the critical region for the best test of

$$H_o: f(x) = \begin{cases} \frac{1}{2} & \text{if } -1 < x < 1 \\ 0 & \text{elsewhere,} \end{cases}$$

against

$$H_a: f(x) = \begin{cases} 1 - |x| & \text{if } -1 < x < 1 \\ 0 & \text{elsewhere,} \end{cases}$$

at the significance size $\alpha = 0.10$?

Answer: We assume that the test is performed with a sample of size 1. Using the Neyman-Pearson theorem, the best critical region of size $\alpha$ is given by

$$\begin{aligned} C &= \left\{x \in \mathbb{R} \;\middle|\; \frac{L_o(x)}{L_a(x)} \le k\right\} = \left\{x \in \mathbb{R} \;\middle|\; \frac{1/2}{1-|x|} \le k\right\} \\ &= \left\{x \in \mathbb{R} \;\middle|\; |x| \le 1 - \frac{1}{2k}\right\} = \left\{x \in \mathbb{R} \;\middle|\; -\left(1 - \frac{1}{2k}\right) \le x \le 1 - \frac{1}{2k}\right\}. \end{aligned}$$

Since

$$0.1 = P(C) = P\!\left(-\left(1 - \frac{1}{2k}\right) \le X \le 1 - \frac{1}{2k} \,\middle|\, H_o \text{ is true}\right) = \int_{-(1-\frac{1}{2k})}^{\,1-\frac{1}{2k}} \frac{1}{2}\, dx = 1 - \frac{1}{2k},$$

we get the critical region $C$ to be

$$C = \{x \in \mathbb{R} \mid -0.1 \le x \le 0.1\}.$$
Thus the best critical region is $C = [-0.1, 0.1]$ and the best test is: "Reject $H_o$ if $-0.1 \le X \le 0.1$."

Example 18.14. Suppose $X$ has the density function

$$f(x;\theta) = \begin{cases} (1+\theta)\, x^\theta & \text{if } 0 \le x \le 1 \\ 0 & \text{otherwise.} \end{cases}$$

Based on a single observed value of $X$, find the most powerful critical region of size $\alpha = 0.1$ for testing $H_o: \theta = 1$ against $H_a: \theta = 2$.

Answer: By the Neyman-Pearson theorem, the form of the critical region is given by

$$\begin{aligned} C &= \left\{x \in \mathbb{R} \;\middle|\; \frac{L(\theta_o, x)}{L(\theta_a, x)} \le k\right\} = \left\{x \in \mathbb{R} \;\middle|\; \frac{(1+\theta_o)\, x^{\theta_o}}{(1+\theta_a)\, x^{\theta_a}} \le k\right\} \\ &= \left\{x \in \mathbb{R} \;\middle|\; \frac{2x}{3x^2} \le k\right\} = \left\{x \in \mathbb{R} \;\middle|\; \frac{1}{x} \le \frac{3}{2}\, k\right\} = \left\{x \in \mathbb{R} \;\middle|\; x \ge a\right\}, \end{aligned}$$

where $a$ is some constant. Hence the most powerful or best test is of the form: "Reject $H_o$ if $X \ge a$."

Since the significance level of the test is given to be $\alpha = 0.1$, the constant $a$ can be determined. Since

$$0.1 = \alpha = P(\text{Reject } H_o \mid H_o \text{ is true}) = P(X \ge a \mid \theta = 1) = \int_a^1 2x\, dx = 1 - a^2,$$

we get $a^2 = 1 - 0.1 = 0.9$, and therefore $a = \sqrt{0.9}$, since $k$ in the Neyman-Pearson theorem is positive. Hence, the most powerful test is given by "Reject $H_o$ if $X \ge \sqrt{0.9}$."
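The size and power of this most powerful test can be confirmed by simulation, sampling by inverting the cdf. A sketch (NumPy assumed):

```python
# Sketch: size and power of "reject H_o if X >= sqrt(0.9)" from Example 18.14.
import numpy as np

rng = np.random.default_rng(6)
a, reps = np.sqrt(0.9), 200000

x0 = rng.random(reps) ** (1 / 2)    # theta = 1: f = 2x,   cdf x^2, X = U^(1/2)
x1 = rng.random(reps) ** (1 / 3)    # theta = 2: f = 3x^2, cdf x^3, X = U^(1/3)
print(round(np.mean(x0 >= a), 4))   # ~0.10, the size alpha under H_o
print(round(np.mean(x1 >= a), 4))   # ~0.1462 = 1 - 0.9^(3/2), the power under H_a
```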
Theorem, the form of the critical region is given by x 2 2 2 IR | Lo (x) La (x)  2 x2 1 IR | p2⇡ e IR | x2 2 ln  a, } IR | x k k  o k p2⇡ ✓ ◆  where a is some constant. Hence the most powerful or best test is of the form: “Reject Ho if X a.” 2  Since, the significance level of the test is given to be ↵ = 0.05, the constant a can be determined. Now we proceed to find a. Since 0.05 = ↵ = P (Reject Ho / Ho is true} a / X ⇠ U N IF (0, 1))  dx = P (X a = 0 Z = a, hence a = 0.05. Thus, the most powerful critical region is given by C = {x 2 IR | 0 < x 0.05}  based on the support of the uniform distribution on the open interval (0, 1). Since the support of this uniform distribution is the interval (0, 1), the acceptance region (or the complement of C in (0, 1)) is Cc = {x 2 IR | 0.05 < x < 1}. Test of Statistical Hypotheses for Parameters 564 However, since the support of the standard normal distribution is IR, the actual critical region should be the complement of Cc in IR. Therefore, the critical region of this hypothesis test is the set {x 2 IR | x  0.05 or x 1}. The most powerful test for ↵ = 0.05 is: “Reject Ho if X 0.05 or X 1.”  Example 18.16. Let X1, X2, X3 denote three independent observations from a distribution with density f (x; ✓) = (1 + ✓) x✓ ( 0 if 0 x 1   otherwise. What is the form of the best critical region of size 0.034 for testing Ho : ✓ = 1 versus Ha : ✓ = 2? Answer: By Neyman-Pearson Theorem, the form of the critical region is given by (with ✓o = 1 and ✓a = 2) C
= (x1, x2, x3) ⇢ = ( (x1, x2, x3) = (x1, x2, x3) ⇢ = = (x1, x2, x3) ⇢ (x1, x2, x3) k i i  k ) IR3 | IR3 | IR3 | 2 2 2 IR3 | 3 L (✓o, x1, x2, x3) L (✓a, x1, x2, x3)  i=1 x✓o (1 + ✓o)3 i=1 x✓a (1 + ✓a)3 8x1x2x3 27x2 2x2 1x2 1 3 Q k Q 3  27 8 k x1x2x3  2 IR3 | x1x2x3 2 a, where a is some constant. Hence the most powerful or best test is of the form: “Reject Ho if 3 a.” Xi i=1 Y Since, the significance level of the test is given to be ↵ = 0.034, the constant a can be determined. To evaluate the constant a, we need the probability distribution of X1X2X3. The distribution of X1X2X3 is not easy to get. Hence, we will use Theorem 17.5. There, we have shown that Probability and Mathematical Statistics 565 2(1 + ✓) 3 i=1 ln Xi ⇠ 0.034 = ↵ P 2(6). Now we proceed to find a. Since = P (Reject Ho / Ho is true} = P (X1X2X3 = P (ln(X1X2X3) a / ✓ = 1) ln a / ✓ = 1) = P ( 4 ln(X1X2X3) 2(6) = P 4 ln a   4 ln a)  = P ( 2(1 + ✓) ln(X1X2X3) 2(1 + ✓) ln a / ✓ = 1) hence
.” i=1 X Since, the significance level of the test is given to be ↵ = 0.025, the constant a can be determined. To evaluate the constant a, we need the probability distribution of X 2 12. It can be shown that the distribution of 1 + X 2 2(12). Now we proceed to find a. Since 2 + · · · + X 2 2 12 i=1 Xi ⇠ P 0.025 = ↵ = P (Reject Ho / Ho is true} = P = P = P 12 i=1 ✓ X 12 Xi 2 ◆ Xi p10 i=1 ✓ X 2(12)  a / 2 = 10!  2 a / 2 = 10! , ⌘ ◆ a 10 hence from chi-square table, we get ⇣ Therefore a 10 = 4.4. a = 44. Probability and Mathematical Statistics 567 Hence, the most powerful test is given by “Reject Ho if best critical region of size 0.025 is given by 12 i=1 X 2 i  44.” The P C = ( (x1, x2,..., x12) IR12 | 2 12 i=1 X x2 i  44. ) In last five examples, we have found the most powerful tests and corresponding critical regions when the both Ho and Ha are simple hypotheses. If either Ho or Ha is not simple, then it is not always possible to find the most powerful test and corresponding critical region. In this situation, hypothesis test is found by using the likelihood ratio. A test obtained by using likelihood ratio is called the likelihood ratio test and the corresponding critical region is called the likelihood ratio critical region. 18.4. Some Examples of Likelihood Ratio Tests In this section, we illustrate, using likelihood ratio, how one can construct hypothesis test when one of the hypotheses is not simple. As pointed out earlier, the test we will construct using the likelihood ratio is not the most powerful test. However, such a test has all the desirable properties of a hypothesis test. To construct the test one has to follow a sequence of steps. These steps are outlined below: (1) Find the likelihood function L(✓, x1, x2,..., xn)
for the given sample. (2) Evaluate max Ωo ✓ 2 L(✓, x1, x2,..., xn). (3) Find the maximum likelihood estimator ✓ of ✓. (4) Compute max Ω L(✓, x1, x2,..., xn) using L ✓ 2 (5) Using steps (2) and (4), find W (x1,..., xn) = ✓, x1, x2,..., xn b ⇣ ⌘. b L(✓, x1, x2,..., xn) L(✓, x1, x2,..., xn). max Ωo ✓ 2 max Ω ✓ 2 (6) Using step (5) determine C = {(x1, x2,..., xn) | W (x1,..., xn) k},  where k [0, 1]. 2 (7) Reduce W (x1,..., xn) k to an equivalent inequality W (x1,..., xn) A.   (8) Determine the distribution of W (x1,..., xn). c (9) Find A such that given ↵ equals P c W (x1,..., xn) ⇣ c  A | Ho is true. ⌘ Test of Statistical Hypotheses for Parameters 568 In the remaining examples, for notational simplicity, we will denote the likelihood function L(✓, x1, x2,..., xn) simply as L(✓). Example 18.19. Let X1, X2,..., Xn be a random sample from a normal population with mean µ and known variance 2. What is the likelihood ratio test of size ↵ for testing the null hypothesis Ho : µ = µo versus the alternative hypothesis Ha : µ = µo? Answer: The likelihood function of the sample is given by L(µ) = n i=1 ✓ Y 1 p2⇡ ◆ 1 22 (xi µ)2 e = ✓ Since Ωo = {µo}, we obtain 1 p2⇡ 1 22 n e ◆ n (xi i=
t = x µo s pn then the above inequality becomes Thus critical region is given by |T | t ↵ 2 (n 1). C = (x1, x2,..., xn) | |t| t ↵ 2 (n 1) }. This tells us that the null hypothesis must be rejected when the absolute value of t takes on a value greater than or equal to t ↵ 2 1). (n - tα/2(n-1) tα/2(n-1) Reject Ho Accept Ho Reject Ho Remark 18.7. In the above example, if we had a right-sided alternative, that is Ha : µ > µo, then the critical region would have been C = {(x1, x2,..., xn) | t t↵(n 1) }. Test of Statistical Hypotheses for Parameters 576 Similarly, if the alternative would have been left-sided, that is Ha : µ < µo, then the critical region would have been C = {(x1, x2,..., xn) | t t↵(n  1) }. We summarize the three cases of hypotheses test of the mean (of the normal population with unknown variance) in the following table. Ho Ha Critical Region (or Test) µ = µo µ > µo t = x µo s pn t↵(n 1) µ = µo µ < µo t = x µo s pn  t↵(n 1) µ = µo µ = µo |t| = t ↵ 2 (n 1) x µo s pn Example 18.21. Let X1, X2,..., Xn be a random sample from a normal population with mean µ and variance 2. What is the likelihood ratio test of significance of size ↵ for testing the null hypothesis Ho : 2 = 2 o versus Ha : 2 = 2 o? Answer: In this example, µ, 2 µ, 2
�� 2 1 ↵/2(n 1) 2 ↵/2(n 1) 18.5. Review Exercises 1. Five trials X1, X2,..., X5 of a Bernoulli experiment were conducted to test Ho : p = 1 4. The null hypothesis Ho will be rejected if 2 against Ha : p = 3 5 i=1 Xi = 5. Find the probability of Type I and Type II errors. 2. A manufacturer of car batteries claims that the life of his batteries is P normally distributed with a standard deviation equal to 0.9 year. If a random 6 Probability and Mathematical Statistics 581 sample of 10 of these batteries has a standard deviation of 1.2 years, do you think that > 0.9 year? Use a 0.05 level of significance. 3. Let X1, X2,..., X8 be a random sample of size 8 from a Poisson distribution with parameter . Reject the null hypothesis Ho : = 0.5 if the observed 8. First, compute the significance level ↵ of the test. Second, sum find the power function () of the test as a sum of Poisson probabilities when Ha is true. i=1 xi P 8 4. Suppose X has the density function f (x; ✓) = 1 ✓ ( 0 for 0 < x < ✓ otherwise. If one observation of X is taken, what are the probabilities of Type I and Type II errors in testing the null hypothesis Ho : ✓ = 1 against the alternative hypothesis Ha : ✓ = 2, if Ho is rejected for X > 0.92. 5. Let X have the density function (✓ + 1) x✓ for 0 < x < 1 where ✓ > 0 f (x; ✓) = ( 0 otherwise. The hypothesis Ho : ✓ = 1 is to be rejected in favor of H1 : ✓ = 2 if X > 0.90. What is the probability of Type I error? 6. Let X1, X2,..., X6 be a random sample from a distribution with density function ✓ x✓ 1 for 0 < x < 1 where ✓ > 0 f (x; ✓) = ( 0 otherwise. The null hypothesis Ho : ✓ = 1 is to be rejected in favor of the
alternative Ha : ✓ > 1 if and only if at least 5 of the sample observations are larger than 0.7. What is the significance level of the test? 7. A researcher wants to test Ho : ✓ = 0 versus Ha : ✓ = 1, where ✓ is a parameter of a population of interest. The statistic W, based on a random sample of the population, is used to test the hypothesis. Suppose that under Ho, W has a normal distribution with mean 0 and variance 1, and under Ha, W has a normal distribution with mean 4 and variance 1. If Ho is rejected when W > 1.50, then what are the probabilities of a Type I or Type II error respectively? Test of Statistical Hypotheses for Parameters 582 8. Let X1 and X2 be a random sample of size 2 from a normal distribution N (µ, 1). Find the likelihood ratio critical region of size 0.005 for testing the null hypothesis Ho : µ = 0 against the composite alternative Ha : µ = 0? 9. Let X1, X2,..., X10 be a random sample from a Poisson distribution with mean ✓. What is the most powerful (or best) critical region of size 0.08 for testing the null hypothesis H0 : ✓ = 0.1 against Ha : ✓ = 0.5? 10. Let X be a random sample of size 1 from a distribution with probability density function f (x; ✓) = (1 ✓ 2 ) + ✓ x if 0 x 1   ( 0 otherwise. For a significance level ↵ = 0.1, what is the best (or uniformly most powerful) 1 against Ha : ✓ = 1? critical region for testing the null hypothesis Ho : ✓ = 11. Let X1, X2 be a random sample of size 2 from a distribution with probability density function f (x; ✓) = ✓x e ✓ x! if x = 0, 1, 2, 3,.... 8 < 0 otherwise, where ✓ 0. For a significance level ↵ = 0.053, what is the best critical region for testing the null hypothesis Ho : ✓ = 1 against Ha : ✓ = 2? Sketch the graph of the best critical region. : 12. Let X1, X2,..., X8 be a random sample of size 8 from a distribution with probability
density function f (x; ✓) = ✓x e ✓ x! if x = 0, 1, 2, 3,.... 8 < 0 otherwise, where ✓ hypothesis Ho : ✓ = 1 against Ha : ✓ best likelihood ratio critical region? 0. What is the likelihood ratio critical region for testing the null = 1? If ↵ = 0.1 can you determine the : 13. Let X1, X2,..., Xn be a random sample of size n from a distribution with probability density function f (x; ) = x x6 e Γ(7)7, if x > 0 0 8 < : otherwise, 6 6 Probability and Mathematical Statistics 583 where hypothesis Ho : = 5 against Ha : 0. What is the likelihood ratio critical region for testing the null = 5? What is the most powerful test? 14. Let X1, X2,..., X5 denote a random sample of size 5 from a population X with probability density function ✓)x 1 ✓ f (x; ✓) = (1 8 < 0 if x = 1, 2, 3,..., 1 otherwise, where 0 < ✓ < 1 is a parameter. What is the likelihood ratio critical region of size 0.05 for testing Ho : ✓ = 0.5 versus Ha : ✓ = 0.5? : 15. Let X1, X2, X3 denote a random sample of size 3 from a population X with probability density function f (x; µ) = 1 p2⇡ (x µ)2 2 e 1 < x <, 1 where region of size 0.05 for testing Ho : µ = 3 versus Ha : µ < µ < 1 1 is a parameter. What is the likelihood ratio critical = 3? 16. Let X1, X2, X3 denote a random sample of size 3 from a population X with probability density function f (x; ✓) = x ✓ 1 ✓ e 8 < 0 if 0 < x < 1 otherwise, where 0 < ✓ < 1 for testing Ho : ✓ = 3 versus Ha : ✓ : = 3? is a parameter. What is the likelihood ratio critical region 17. Let X1, X2, X3 denote a random sample of size 3 from a population X with probability density function f (x; ✓) = e ✓ ✓x x
! 8 < 0 if x = 0, 1, 2, 3,..., 1 otherwise, where 0 < ✓ < for testing Ho : ✓ = 0.1 versus Ha : ✓ 1 : = 0.1? is a parameter. What is the likelihood ratio critical region 18. A box contains 4 marbles, ✓ of which are white and the rest are black. A sample of size 2 is drawn to test Ho : ✓ = 2 versus Ha : ✓ = 2. If the null 6 6 6 6 6 6 Test of Statistical Hypotheses for Parameters 584 hypothesis is rejected if both marbles are the same color, find the significance level of the test. 19. Let X1, X2, X3 denote a random sample of size 3 from a population X with probability density function f (x; ✓) = 1 ✓ 8 < 0 for 0 x ✓   otherwise, is a parameter. What is the likelihood ratio critical region : where 0 < ✓ < of size 117 1 125 for testing Ho : ✓ = 5 versus Ha : ✓ = 5? 20. Let X1, X2 and X3 denote three independent observations from a distribution with density f (x; ) = x 1 e 8 < 0 for 0 < x < 1 otherwise, where 0 < < powerful critical region for testing Ho : = 5 versus Ha : = 10? is a parameter. What is the best (or uniformly most 1 : 21. Suppose X has the density function f (x; ✓) = 1 ✓ ( 0 for 0 < x < ✓ otherwise. If X1, X2, X3, X4 is a random sample of size 4 taken from X, what are the probabilities of Type I and Type II errors in testing the null hypothesis Ho : ✓ = 1 against the alternative hypothesis Ha : ✓ = 2, if Ho is rejected for max{X1, X2, X3, X4} 1 2.  22. Let X1, X2, X3 denote a random sample of size 3 from a population X with probability density function f (x; ✓) = x ✓ 1 ✓ e 8 < 0 if 0 < x < 1 otherwise, where 0 < ✓ < rejected in favor of the alternative Ha : ✓ is the significance level of the test? 1 is a parameter. The null hypothesis Ho :
✓ = 3 is to be = 3 if and only if X > 6.296. What : 6 6 Probability and Mathematical Statistics 585 Chapter 19 SIMPLE LINEAR REGRESSION AND CORRELATION ANALYSIS Let X and Y be two random variables with joint probability density function f (x, y). Then the conditional density of Y given that X = x is where f (y/x) = f (x, y) g(x) g(x) = 1 f (x, y) dy Z 1 is the marginal density of X. The conditional mean of Y E (Y |X = x) = yf (y/x) dy 1 Z 1 is called the regression equation of Y on X. Example 19.1. Let X and Y be two random variables with the joint probability density function f (x, y) = x(1+y) xe if x > 0, y > 0 ( 0 otherwise. Find the regression equation of Y on X and then sketch the regression curve. Simple Linear Regression and Correlation Analysis 586 Answer: The marginal density of X is given by g(x) = 1 xe x(1+y) dy Z 1 1 = Z 1 x = xe x = xe = e x. xe x e xy dy 1 e xy dy 1 1 x Z  1 xy e 0 The conditional density of Y given X = x is f (y/x) = f (x, y) g(x) = x(1+y) xe x e = xe xy, y > 0. The conditional mean of Y given X = x is given by E(Y /x) = 1 yf (y/x) dy = 1 y x e xy dy = Z Thus the regression equation of Y on X is 1 Z 1 1 x. E(Y /x) = 1 x, x > 0. The graph of this equation of Y on X is shown below. Graph of the regression equation E(Y/x) = 1/ x Probability and Mathematical Statistics 587 From this example it is clear that the conditional mean E(Y /x) is a function of x. If this function is of the form ↵ + x, then the
corresponding regression equation is called a linear regression equation; otherwise it is called a nonlinear regression equation. The term linear regression refers to a specification that is linear in the parameters. Thus E(Y /x) = ↵ + x2 is also a linear regression equation. The regression equation E(Y /x) = ↵x is an example of a nonlinear regression equation. The main purpose of regression analysis is to predict Yi from the knowl- edge of xi using the relationship like E(Yi/xi) = ↵ + xi. The Yi is called the response or dependent variable where as xi is called the predictor or independent variable. The term regression has an interesting history, dating back to Francis Galton (1822-1911). Galton studied the heights of fathers and sons, in which he observed a regression (a “turning back”) from the heights of sons to the heights of their fathers. That is tall fathers tend to have tall sons and short fathers tend to have short sons. However, he also found that very tall fathers tend to have shorter sons and very short fathers tend to have taller sons. Galton called this phenomenon regression towards the mean. In regression analysis, that is when investigating the relationship between a predictor and response variable, there are two steps to the analysis. The first step is totally data oriented. This step is always performed. The second step is the statistical one, in which we draw conclusions about the (population) regression equation E(Yi/xi). Normally the regression equation contains several parameters. There are two well known methods for finding the estimates of the parameters of the regression equation. These two methods are: (1) The least square method and (2) the normal regression method. 19.1. The Least Squares Method Let {(xi, yi) | i = 1, 2,..., n} be a set of data. Assume that E(Yi/xi) = ↵ + xi, (1) that is yi = ↵ + xi, i = 1, 2,..., n. Simple Linear Regression and Correlation Analysis Then the sum of the squares of the error is given by E(↵, ) = n i=1 X (yi ↵ xi)2. 588 (2) The least squares
estimates of ↵ and are defined to be those values which minimize E(↵, ). That is, ↵, = arg min (↵,) E(↵, ). ⇣ b ⌘ b This least squares method is due to Adrien M. Legendre (1752-1833). Note that the least squares method also works even if the regression equation is nonlinear (that is, not of the form (1)). Next, we give several examples to illustrate the method of least squares. Example 19.2. Given the five pairs of points (x, y) shown in table below what is the line of the form y = x + b best fits the data by method of least squares? Answer: Suppose the best fit line is y = x + b. Then for each xi, xi + b is the estimated value of yi. The difference between yi and the estimated value of yi is the error or the residual corresponding to the ith measurement. That is, the error corresponding to the ith measurement is given by xi Hence the sum of the squares of the errors is ✏i = yi b. E(b) = 5 ✏2 i = i=1 X 5 i=1 X (yi xi b)2. Differentiating E(b) with respect to b, we get d db E(b) = 2 5 i=1 X (yi xi b) ( 1). Probability and Mathematical Statistics 589 Setting d db E(b) equal to 0, we get which is Using the data, we see that 5 i=1 X (yi xi b) = 0 5b = 5 5 yi i=1 X i=1 X xi. 5b = 14 6 which yields b = 8 5. Hence the best fitted line is y = x + 8 5. Example 19.3. Suppose the line y = bx + 1 is fit by the method of least squares to the 3 data points x y 1 2 2 2 4 0 What is the value of the constant b? Answer: The error corresponding to the ith measurement is given by ✏i = yi
bxi 1. Hence the sum of the squares of the errors is E(b) = 3 ✏2 i = i=1 X 3 i=1 X (yi bxi 1)2. Differentiating E(b) with respect to b, we get d db E(b) = 2 3 i=1 X (yi bxi 1) ( xi). Simple Linear Regression and Correlation Analysis 590 Setting d db E(b) equal to 0, we get 3 i=1 X (yi bxi 1) xi = 0 which in turn yields n xiyi b = i=1 X n n xi i=1 X Using the given data we see that x2 i i=1 X and the best fitted line is b = 7 = 6 21 1 21, y = 1 21 x + 1. Example 19.4. Observations y1, y2,..., yn are assumed to come from a model with E(Yi/xi) = ✓ + 2 ln xi where ✓ is an unknown parameter and x1, x2,..., xn are given constants. What is the least square estimate of the parameter ✓? Answer: The sum of the squares of errors is n n E(✓) = ✏2 i = i=1 X i=1 X (yi ✓ 2 ln xi)2. Differentiating E(✓) with respect to ✓, we get n E(✓) = 2 d d✓ i=1 X d✓ E(✓) equal to 0, we get Setting d (yi ✓ 2 ln xi) ( 1). n i=1 X (yi ✓ 2 ln xi) = 0 which is ✓ = 1 n n n 2 yi ln xi.! i=1 X i=1 X Probability and Mathematical Statistics 591 Hence the least squares estimate of ✓ is ✓ = y 2 n Example 19.5. Given the three pairs of points (x, y) shown below: b n ln xi. i= What is the curve of the form y = x best fits the data by method
) and @ @ E(↵, ) to 0, we get and From (3), we obtain which is (yi ↵ xi) = 0 n i=1 X n (yi ↵ xi) xi = 0. i=1 X n i=1 X n yi = n↵ + xi i=1 X y = ↵ + x. Similarly, from (4), we have n i=1 X n n xiyi = ↵ xi + i=1 X x2 i i=1 X (3) (4) (5) Probability and Mathematical Statistics 593 which can be rewritten as follows n (xi i=1 X Defining x)(yi y) + nx y = n ↵ x + n (xi i=1 X x)(xi x) + n x2 (6) Sxy := n (xi i=1 X x)(yi y) we see that (6) reduces to Sxy + nx y = ↵ n x + Sxx + nx2 (7) Substituting (5) into (7), we have ⇥ ⇤ Sxy + nx y = [y x] n x + Sxx + nx2. Simplifying the last equation, we get ⇥ ⇤ which is In view of (8) and (5), we get Sxy = Sxx = Sxy Sxx. ↵ = y Sxy Sxx x. Thus the least squares estimates of ↵ and are ↵ = y Sxy Sxx x and = Sxy Sxx, respectively. b b (8) (9) We need some notations. The random variable Y given X = x will be denoted by Yx. Note that this is the variable appears in the model E(Y /x) = ↵ + x. When one chooses in succession values x1, x2,..., xn for x, a sequence Yx1, Yx2,..., Yxn of random variable is obtained. For the sake of convenience, we denote the random variables Yx1, Yx2,...,
x is given by yx = 9.761 + (4.067) (14) = 66.699. Therefore Similarly n 2) (n + 1) Sxx + n s (n 2) Sxx b b = 66.699 = 66.699 = 58.4501. t0.025(8) (3.047) b (11) (376) + 10 (8) (376) s (2.306) (3.047) (1.1740n 2) s (n + 1) Sxx + n (n 2) Sxx b b = 66.699 + t0.025(8) (3.047) b (11) (376) + 10 (8) (376) s = 66.699 + (2.306) (3.047) (1.1740) = 74.9479. Hence the 95% prediction interval for yx when x = 14 is [58.4501, 74.9479]. 19.3. The Correlation Analysis In the first two sections of this chapter, we examine the regression problem and have done an in-depth study of the least squares and the normal regression analysis. In the regression analysis, we assumed that the values of X are not random variables, but are fixed. However, the values of Yx for Probability and Mathematical Statistics 613 a given value of x are randomly distributed about E(Yx) = µx = ↵ + x. Further, letting " to be a random variable with E(") = 0 and V ar(") = 2, one can model the so called regression problem by Yx = ↵ + x + ". In this section, we examine the correlation problem. Unlike the regression problem, here both X and Y are random variables and the correlation problem can be modeled by E(Y ) = ↵ + E(X). From an experimental point of view this means that we are observing random vector (X, Y ) drawn from some bivariate population. Recall that if (X, Y ) is a bivariate random variable then the correlation coefficient ⇢ is defined as ⇢ = E ((X E ((X µX ) (Y µX
)2) E ((Y µY )) µY )2) p where µX and µY are the mean of the random variables X and Y, respectively. Definition 19.1. If (X1, Y1), (X2, Y2),..., (Xn, Yn) is a random sample from a bivariate population, then the sample correlation coefficient is defined as n R = n i=1 X (Xi (Xi X) (Yi Y ) n X)2 (Yi v u u t i=1 X v u u t i=1 X. Y )2 The corresponding quantity computed from data (x1, y1), (x2, y2),..., (xn, yn) will be denoted by r and it is an estimate of the correlation coefficient ⇢. Now we give a geometrical interpretation of the sample correlation coefficient based on a paired data set {(x1, y1), (x2, y2),..., (xn, yn)}. We can associate this data set with two vectors ~x = (x1, x2,..., xn) and ~y = (y1, y2,..., yn) in IRn. Let L be the subset { ~e | IRn. Consider the linear space V given by IRn modulo L, that is V = IRn/L. The linear space V is illustrated in a figure on next page when n = 2. IR} of IRn, where ~e = (1, 1,..., 1) 2 2 Simple Linear Regression and Correlation Analysis 614 y L x V [x] Illustration of the linear space V for n=2 We denote the equivalence class associated with the vector ~x by [~x]. In the linear space V it can be shown that the points (x1, y1), (x2, y2),..., (xn, yn) are collinear if and only if the the vectors [~x] and [~y] in V are proportional. We define an inner product on this linear space V by [~x], [~y] i h = n (xi
cient r is given by r = Sxy Sxx Syy = 18.557 (8.565) (65.788) = 0.782. The computed t value is give by p p t = pn 2 p1 r = (6 2) r2 p From the t-table we have t0.005(4) = 4.604. Since p 0.782 (0.782)2 1 = 2.509. 2.509 = |t| 6 t0.005(4) = 4.604 we do not reject the null hypothesis Ho : ⇢ = 0. 19.4. Review Exercises N (xi, 2), where both and 2 are unknown parameters. 1. Let Y1, Y2,..., Yn be n independent random variables such that each If Yi ⇠ {(x1, y1), (x2, y2),..., (xn, yn)} is a data set where y1, y2,..., yn are the observed values based on x1, x2,..., xn, then find the maximum likelihood estimators of 2 of and 2. and b b 6 Simple Linear Regression and Correlation Analysis 618 N (xi, 2), where both and 2 are unknown parameters. 2. Let Y1, Y2,..., Yn be n independent random variables such that each If Yi ⇠ {(x1, y1), (x2, y2),..., (xn, yn)} is a data set where y1, y2,..., yn are the observed values based on x1, x2,..., xn, then show that the maximum likelihood is normally distributed. What are the mean and variance of estimator of ? b N (xi, 2), where both and 2 are unknown parameters. 3. Let Y1, Y2,..., Yn be n independent random variables such that each b If Yi ⇠ {(x1, y1), (x2, y2),..., (xn, yn)} is a data set where y1, y2,..., yn are the ob2 of served values based on x1,
x2,..., xn, then find an unbiased estimator 2 and then find a constant k such that k 2(2n). 2 ⇠ b N (xi, 2), where both and 2 are unknown parameters. 4. Let Y1, Y2,..., Yn be n independent random variables such that each If Yi ⇠ {(x1, y1), (x2, y2),..., (xn, yn)} is a data set where y1, y2,..., yn are the observed values based on x1, x2,..., xn, then find a pivotal quantity for and )100% confidence interval for . using this pivotal quantity construct a (1 b N (xi, 2), where both and 2 are unknown parameters. 5. Let Y1, Y2,..., Yn be n independent random variables such that each If Yi ⇠ {(x1, y1), (x2, y2),..., (xn, yn)} is a data set where y1, y2,..., yn are the observed values based on x1, x2,..., xn, then find a pivotal quantity for 2 and )100% confidence interval for using this pivotal quantity construct a (1 2. EXP (xi), where is an unknown parameter. 6. Let Y1, Y2,..., Yn be n independent random variables such that If each Yi ⇠ {(x1, y1), (x2, y2),..., (xn, yn)} is a data set where y1, y2,..., yn are the observed values based on x1, x2,..., xn, then find the maximum likelihood estimator of of . EXP (xi), where is an unknown parameter. 7. Let Y1, Y2,..., Yn be n independent random variables such that b If each Yi ⇠ {(x1, y1), (x2, y2),..., (xn, yn)} is a data set where y1, y
2,..., yn are the observed values based on x1, x2,..., xn, then find the least squares estimator of of . 8. Let Y1, Y2,..., Yn be n independent random variables such that b If each Yi ⇠ {(x1, y1), (x2, y2),..., (xn, yn)} is a data set where y1, y2,..., yn are the ob- P OI(xi), where is an unknown parameter. Probability and Mathematical Statistics 619 served values based on x1, x2,..., xn, then find the maximum likelihood estimator of of . b P OI(xi), where is an unknown parameter. 9. Let Y1, Y2,..., Yn be n independent random variables such that If each Yi ⇠ {(x1, y1), (x2, y2),..., (xn, yn)} is a data set where y1, y2,..., yn are the observed values based on x1, x2,..., xn, then find the least squares estimator of of . P OI(xi), where is an unknown parameter. 10. Let Y1, Y2,..., Yn be n independent random variables such that b If each Yi ⇠ {(x1, y1), (x2, y2),..., (xn, yn)} is a data set where y1, y2,..., yn are the observed values based on x1, x2,..., xn, show that the least squares estimator and the maximum likelihood estimator of are both unbiased estimator of . P OI(xi), where is an unknown parameter. 11. Let Y1, Y2,..., Yn be n independent random variables such that If each Yi ⇠ {(x1, y1), (x2, y2),..., (xn, yn)} is a data set where y1, y2,..., yn are the observed values based on x1, x2,..., xn, the find the variances of both the least squares estimator
and the maximum likelihood estimator of . 12. Given the five pairs of points (x, y) shown below: x y 10 50.071 20 0.078 30 0.112 40 0.120 50 0.131 What is the curve of the form y = a + bx + cx2 best fits the data by method of least squares? 13. Given the five pairs of points (x, y) shown below: x y 4 10 7 16 9 22 10 20 11 25 What is the curve of the form y = a + b x best fits the data by method of least squares? 14. The following data were obtained from the grades of six students selected at random: Mathematics Grade, x English Grade, y 72 76 94 86 82 65 74 89 65 80 85 92 Simple Linear Regression and Correlation Analysis 620 Find the sample correlation coefficient r and then test the null hypothesis Ho : ⇢ = 0 against the alternative hypothesis Ha : ⇢ = 0 at a significance level 0.01. 15. Given a set of data {(x1, y2), (x2, y2),..., (xn, yn)} what is the least square estimate of ↵ if y = ↵ is fitted to this data set. 16. Given a set of data points {(2, 3), (4, 6), (5, 7)} what is the curve of the form y = ↵ + x2 best fits the data by method of least squares? N (↵ + 17. Given a data set {(1, 1), (2, 1), (2, 3), (3, 2), (4, 3)} and Yx ⇠ x, 2), find the point estimate of 2 and then construct a 90% confidence interval for . 18. For the data set {(1, 1), (2, 1), (2, 3), (3, 2), (4, 3)} determine the correlation coefficient r. Test the null hypothesis H0 : ⇢ = 0 versus Ha : ⇢ = 0 at a significance level 0.01. 6 6 Probability and Mathematical Statistics 621 Chapter 20 ANALYSIS OF V
ARIANCE In Chapter 19, we examine how a quantitative independent variable x can be used for predicting the value of a quantitative dependent variable y. In this chapter we would like to examine whether one or more independent (or predictor) variable affects a dependent (or response) variable y. This chapter differs from the last chapter because the independent variable may now be either quantitative or qualitative. It also differs from the last chapter in assuming that the response measurements were obtained for specific settings of the independent variables. Selecting the settings of the independent variables is another aspect of experimental design. It enables us to tell whether changes in the independent variables cause changes in the mean response and it permits us to analyze the data using a method known as analysis of variance (or ANOVA). Sir Ronald Aylmer Fisher (1890-1962) developed the analysis of variance in 1920’s and used it to analyze data from agricultural experiments. The ANOVA investigates independent measurements from several treatments or levels of one or more than one factors (that is, the predictor variables). The technique of ANOVA consists of partitioning the total sum of squares into component sum of squares due to different factors and the error. For instance, suppose there are Q factors. Then the total sum of squares (SST) is partitioned as SST = SSA + SSB + · · · + SSQ + SSError, where SSA, SSB,..., and SSQ represent the sum of squares associated with the factors A, B,..., and Q, respectively. If the ANOVA involves only one factor, then it is called one-way analysis of variance. Similarly if it involves two factors, then it is called the two-way analysis of variance. If it involves Analysis of Variance 622 more then two factors, then the corresponding ANOVA is called the higher order analysis of variance. In this chapter we only treat the one-way analysis of variance. The analysis of variance is a special case of the linear models that represent the relationship between a continuous response variable y and one or more predictor variables (either continuous or categorical) in the form y = X + ✏ (1) ⇥ n design matrix determined by the predictor variables, is n where y is an m m of parameters, and ✏ is an m independent of each other and having distribution. 1 vector of observations of response variable
, X is the 1 vector 1 vector of random error (or disturbances) ⇥ ⇥ ⇥ 20.1. One-Way Analysis of Variance with Equal Sample Sizes The standard model of one-way ANOVA is given by Yij = µi + ✏ij for i = 1, 2,..., m, j = 1, 2,..., n, (2) where m 2 and n 2. In this model, we assume that each random variable Yij ⇠ N (µi, 2) for i = 1, 2,..., m, j = 1, 2,..., n. (3) Note that because of (3), each ✏ij in model (2) is normally distributed with mean zero and variance 2. Given m independent samples, each of size n, where the members of the ith sample, Yi1, Yi2,..., Yin, are normal random variables with mean µi and unknown variance 2. That is, Yij ⇠ N µi, 2, i = 1, 2,..., m, j = 1, 2,..., n. We will be interested in testing the null hypothesis Ho : µ1 = µ2 = · · · = µm = µ against the alternative hypothesis Ha : not all the means are equal. Probability and Mathematical Statistics 623 In the following theorem we present the maximum likelihood estimators of the parameters µ1, µ2,..., µm and 2. Theorem 20.1. Suppose the one-way ANOVA model is given by the equation (2) where the ✏ij’s are independent and normally distributed random variables with mean zero and variance 2 for i = 1, 2,..., m and j = 1, 2,..., n. Then the MLE’s of the parameters µi (i = 1, 2,..., m) and 2 of the model are given by µi = Y i• 1 nm 2 = b i = 1, 2,..., m, SSW, where Y i• = 1 n n c Yij and SSW = m n sum of squares. j=1 X i=1 X j=1 X Yij 2 Y i• Proof: The likelihood function is given by is the within samples L(
Yij 2 Y •• Yij 2 Y i• and i=1 X j=1 X m n SSB = Y i• Here SST is the total sum of square, SSW is the within sum of square, and SSB is the between sum of square. j=1 X i=1 X (10) Y •• 2 Next we consider the partitioning of the total sum of squares. The fol- lowing lemma gives us such a partition. Lemma 20.1. The total sum of squares is equal to the sum of within and between sum of squares, that is SST = SSW + SSB. (11) (7) (8) (9) Probability and Mathematical Statistics 625 Proof: Rewriting (8) we have Y •• 2 m n SST = Yij (Yij ⇥ (Yij = = i=1 X m j=1 X n i=1 X m j=1 X n i=1 X j=1 X + 2 m = SSW + SSB + 2 Y i•) + (Yi• Y ••) 2 Y i•)2 + m n ⇤ (Y i• Y ••)2 j=1 i=1 X X m n (Yij i=1 X n j=1 X (Yij i=1 X j=1 X Y i•) (Y i• Y ••) Y i•) (Y i• Y ••). The cross-product term vanishes, that is m n (Yij i=1 X j=1 X Y i•) (Y i• Y ••) = m (Yi• i=1 X n Y••) j=1 X (Yij Y i•) = 0. Hence we obtain the asserted result SST = SSW + SSB and the proof of the lemma is complete. The following theorem is a technical result and is needed for testing the null hypothesis against the alternative hypothesis. Theorem 20.2. Consider the ANOVA model Yij = µi + ✏ij i = 1, 2,..., m, j = 1, 2,..., n, N where Y
mean of the m values of µi, and ↵i = 0. The quantity ↵i is called the effect of the ith treatment. Thus any observed value is the sum of i=1 X m Probability and Mathematical Statistics 633 an overall mean µ, a treatment or class deviation ↵i, and a random element from a normally distributed random variable ✏ij with mean zero and variance 2. This model is called model I, the fixed effects model. The effects of the treatments or classes, measured by the parameters ↵i, are regarded as fixed but unknown quantities to be estimated. In this fixed effect model the null hypothesis H0 is now Ho : ↵1 = ↵2 = · · · = ↵m = 0 and the alternative hypothesis is Ha : not all the ↵i are zero. The random effects model, also known as model II, is given by Yij = µ + Ai + ✏ij for i = 1, 2,..., m, j = 1, 2,..., n, where µ is the overall mean and Ai ⇠ N (0, 2 A) and N (0, 2). ✏ij ⇠ In this model, the variances 2 mated. The null hypothesis of the random effect model is Ho : 2 the alternative hypothesis is Ha : 2 the random effect model. A and 2 are unknown quantities to be estiA = 0 and A > 0. In this chapter we do not consider Before we present some examples, we point out the assumptions on which the ANOVA is based on. The ANOVA is based on the following three assumptions: (1) Independent Samples: The samples taken from the population under consideration should be independent of one another. (2) Normal Population: For each population, the variable under considera- tion should be normally distributed. (3) Equal Variance: The variances of the variables under consideration should be the same for all the populations. Example 20.1. The data in the following table gives the number of hours of relief provided by 5 different brands of headache tablets administered to 25 subjects experiencing fevers of 38oC or more. Perform the analysis of variance
Analysis of Variance 634 and test the hypothesis at the 0.05 level of significance that the mean number of hours of relief provided by the tablets is same for all 5 brands Tablets Answer: Using the formulas (8), (9) and (10), we compute the sum of squares SSW, SSB and SST as SSW = 57.60, SSB = 79.94, and SST = 137.04. The ANOVA table for this problem is shown below. Source of variation Sums of squares Degree of freedom Mean squares F-statistics F Between Within 79.94 57.60 Total 137.04 4 20 24 6.90 19.86 2.88 At the significance level ↵ = 0.05, we find the F-table that F0.05(4, 20) = 2.8661. Since 6.90 = F > F0.05(4, 20) = 2.8661 we reject the null hypothesis that the mean number of hours of relief provided by the tablets is same for all 5 brands. Note that using a statistical package like MINITAB, SAS or SPSS we can compute the p-value to be value = P (F (4, 20) p 6.90) = 0.001. Hence again we reach the same conclusion since p-value is less then the given ↵ for this problem. Probability and Mathematical Statistics 635 Example 20.2. Perform the analysis of variance and test the null hypothesis at the 0.05 level of significance for the following two data sets. Data Set 1 Data Set 2 A 8.1 4.2 14.7 9.9 12.1 6.2 Sample B 8.0 15.1 4.7 10.4 9.0 9.8 C 14.8 5.3 11.1 7.9 9.3 7.4 Sample B 9.5 9.5 9.5 9.6 9.5 9.4 C 9.4 9.3 9.3 9.3 9.2 9.3 A 9.2 9.1 9.2 9.2 9.3 9.2 Answer: Computing the sum of squares SSW, SSB and SST, we have the following two ANOVA tables: Source of variation Sums of squares Degree of freedom Mean squares F-
statistics F Between 0.3 Within Total 187.2 187.5 2 15 17 0.01 0.1 12.5 and Source of variation Sums of squares Degree of freedom Mean squares F-statistics F Between Within Total 0.280 0.600 0.340 2 15 17 35.0 0.140 0.004 Analysis of Variance 636 At the significance level ↵ = 0.05, we find from the F-table that F0.05(2, 15) = 3.68. For the first data set, since 0.01 = F < F0.05(2, 15) = 3.68 we do not reject the null hypothesis whereas for the second data set, 35.0 = F > F0.05(2, 15) = 3.68 we reject the null hypothesis. Remark 20.1. Note that the sample means are same in both the data sets. However, there is a less variation among the sample points in samples of the second data set. The ANOVA finds a more significant differences among the means in the second data set. This example suggests that the larger the variation among sample means compared with the variation of the measurements within samples, the greater is the evidence to indicate a difference among population means. 20.2. One-Way Analysis of Variance with Unequal Sample Sizes In the previous section, we examined the theory of ANOVA when samples are same sizes. When the samples are same sizes we say that the ANOVA is in the balanced case. In this section we examine the theory of ANOVA for unbalanced case, that is when the samples are of different sizes. In experimental work, one often encounters unbalance case due to the death of experimental animals in a study or drop out of the human subjects from a study or due to damage of experimental materials used in a study. Our analysis of the last section for the equal sample size will be valid but have to be modified to accommodate the different sample size. Consider m independent samples of respective sizes n1, n2,..., nm, where the members of the ith sample, Yi1, Yi2,..., Yini, are normal random variables with mean µi and unknown variance 2. That is, Y
ij ⇠ N µi, 2, i = 1, 2,..., m, j = 1, 2,..., ni. Let us denote N = n1 + n2 + · · · + nm. Again, we will be interested in testing the null hypothesis Ho : µ1 = µ2 = · · · = µm = µ Probability and Mathematical Statistics 637 against the alternative hypothesis Ha : not all the means are equal. Now we defining Y i• = n Yij, 1 ni j=1 X ni m Yij, i=1 X j=1 X Y •• = 1 N SST = SSW = m ni i=1 X m j=1 X ni Yij Yij i=1 X j=1 X m ni and Y •• 2, Y i• 2, 2 (17) (18) (19) (20) Y i• we have the following results analogous to the results in the previous section. SSB = i=1 X j=1 X (21) Y •• Theorem 20.4. Suppose the one-way ANOVA model is given by the equation (2) where the ✏ij’s are independent and normally distributed random variables with mean zero and variance 2 for i = 1, 2,..., m and j = 1, 2,..., ni. Then the MLE’s of the parameters µi (i = 1, 2,..., m) and 2 of the model are given by µi = Y i• i = 1, 2,..., m, 2 = b 1 N SSW, where Y i• = 1 ni ni c Yij and SSW = m ni sum of squares. j=1 X i=1 X j=1 X Yij 2 Y i• is the within samples Lemma 20.2. The total sum of squares is equal to the sum of within and between sum of squares, that is SST = SSW + SSB. Theorem 20.5. Consider the ANOVA model Yij = µi + ✏ij i = 1, 2,..., m, j = 1, 2,..., ni, Analysis of Variance 638 where Yij ⇠ (a)
Elementary Statistics Instructor A Instructor B Instructor C 75 91 83 45 82 75 68 47 38 90 80 50 93 53 87 76 82 78 80 33 79 17 81 55 70 61 43 89 73 58 70 Answer: Using the formulas (17) - (21), we compute the sum of squares SSW, SSB and SST as SSW = 10362, SSB = 755, and SST = 11117. The ANOVA table for this problem is shown below. Source of variation Sums of squares Degree of freedom Mean squares F-statistics F Between 755 Within 10362 Total 11117 2 28 30 1.02 377 370 At the significance level ↵ = 0.05, we find the F-table that F0.05(2, 28) = 3.34. Since 1.02 = F < F0.05(2, 28) = 3.34 we accept the null hypothesis that there is no difference in the average grades given by the three instructors. Note that using a statistical package like MINITAB, SAS or SPSS we can compute the p-value to be value = P (F (2, 28) p 1.02) = 0.374. Analysis of Variance 640 Hence again we reach the same conclusion since p-value is less then the given ↵ for this problem. We conclude this section pointing out the advantages of choosing equal sample sizes (balance case) over the choice of unequal sample sizes (unbalance case). The first advantage is that the F-statistics is insensitive to slight departures from the assumption of equal variances when the sample sizes are equal. The second advantage is that the choice of equal sample size minimizes the probability of committing a type II error. 20.3. Pair wise Comparisons When the null hypothesis is rejected using the F -test in ANOVA, one may still wants to know where the difference among the means is. There are several methods to find out where the significant differences in the means lie after the ANOVA procedure is performed. Among the most commonly used tests are Scheff´e test and Tuckey test. In this section, we give a brief description of these tests. In order to perform the Scheff´e test, we have to compare the means two at a time using all possible combinations
of means. Since we have m means, pair wise comparisons. A pair wise comparison can be viewed as we need = µk a test of the null hypothesis H0 : µi = µk against the alternative Ha : µi 6 for all i = k. m 2 To conduct this test we compute the statistics Fs = 2 Y i• M SW Y k• 1 ni + 1 nk, ⇣ where Y i• and Y k• are the means of the samples being compared, ni and nk are the respective sample sizes, and M SW is the mean sum of squared of within group. We reject the null hypothesis at a significance level of ↵ if ⌘ Fs > (m 1)F↵(m 1, N m) where N = n1 + n2 + · · · + nm. Example 20.4. Perform the analysis of variance and test the null hypothesis at the 0.05 level of significance for the following data given in the table below. Further perform a Scheff´e test to determine where the significant differences in the means lie. 6 Probability and Mathematical Statistics 641 Sample 2 9.5 9.5 9.5 9.6 9.5 9.4 3 9.4 9.3 9.3 9.3 9.2 9.3 1 9.2 9.1 9.2 9.2 9.3 9.2 Answer: The ANOVA table for this data is given by Source of variation Sums of squares Degree of freedom Mean squares F-statistics F Between Within Total 0.280 0.600 0.340 2 15 17 35.0 0.140 0.004 At the significance level ↵ = 0.05, we find the F-table that F0.05(2, 15) = 3.68. Since 35.0 = F > F0.05(2, 15) = 3.68 we reject the null hypothesis. Now we perform the Scheff´e test to determine where the significant differences in the means lie. From given data, we obtain Y 1• = 9.2, Y 2• = 9.5 and Y 3• = 9
.3. Since m = 3, we have to make 3 pair wise comparisons, namely µ1 with µ2, µ1 with µ3, and µ2 with µ3. First we consider the comparison of µ1 with µ2. For this case, we find Fs = 2 Y 1• M SW Y 2• 1 n1 + 1 n2 = (9.2 0.004 9.5)2 1 6 + 1 6 = 67.5. Since ⇣ ⌘ 67.5 = Fs > 2 F0.05(2, 15) = 7.36 we reject the null hypothesis H0 : µ1 = µ2 in favor of the alternative Ha : µ1 6 = µ2. Analysis of Variance 642 Next we consider the comparison of µ1 with µ3. For this case, we find Fs = 2 Y 1• M SW Y 3• 1 n1 + 1 n3 = (9.2 0.004 9.3)2 1 6 + 1 6 = 7.5. Since ⇣ ⌘ 7.5 = Fs > 2 F0.05(2, 15) = 7.36 we reject the null hypothesis H0 : µ1 = µ3 in favor of the alternative Ha : µ1 6 = µ3. Finally we consider the comparison of µ2 with µ3. For this case, we find Fs = 2 Y 2• M SW Y 3• 1 n2 + 1 n3 = (9.5 0.004 9.3)2 6 + 1 1 6 = 30.0. Since ⇣ ⌘ 30.0 = Fs > 2 F0.05(2, 15) = 7.36 we reject the null hypothesis H0 : µ2 = µ3 in favor of the alternative Ha : µ2 6 = µ3. Next consider the Tukey test. Tuckey test is applicable when we have a balanced case, that is when the sample sizes are equal. For Tukey test we compute the statistics Q = Y i• Y k• M SW n, q where Y i• and Y k• are the means of the samples being compared, n is the size of the samples, and M SW is the
mean sum of squared of within group. At a significance level ↵, we reject the null hypothesis H0 if |Q| > Q↵(m, ⌫) where ⌫ represents the degrees of freedom for the error mean square. Example 20.5. For the data given in Example 20.4 perform a Tukey test to determine where the significant differences in the means lie. Answer: We have seen that Y 1• = 9.2, Y 2• = 9.5 and Y 3• = 9.3. First we compare µ1 with µ2. For this we compute Q = Y 1• Y 2• M SW n 9.2 = 9.3 0.004 6 = 11.6189. q q Probability and Mathematical Statistics 643 Since 11.6189 = |Q| > Q0.05(2, 15) = 3.01 we reject the null hypothesis H0 : µ1 = µ2 in favor of the alternative Ha : µ1 6 = µ2. Next we compare µ1 with µ3. For this we compute Q = Y 1• Y 3• M SW n 9.2 = 9.5 0.004 6 = 3.8729. Since q q 3.8729 = |Q| > Q0.05(2, 15) = 3.01 we reject the null hypothesis H0 : µ1 = µ3 in favor of the alternative Ha : µ1 6 = µ3. Finally we compare µ2 with µ3. For this we compute Q = Y 2• Y 3• M SW n 9.5 = 9.3 0.004 6 = 7.7459. Since q q 7.7459 = |Q| > Q0.05(2, 15) = 3.01 we reject the null hypothesis H0 : µ2 = µ3 in favor of the alternative Ha : µ2 6 = µ3. Often in scientific and engineering problems, the experiment dictates the need for comparing simultaneously each treatment with a control. Now we describe a test developed by C. W. Dunnett for determining significant differences between each treatment mean and the control. Suppose we wish to test the m hypotheses H0 : µ0 = µi versus Ha :
µ0 6 = µi for i = 1, 2,..., m, where µ0 represents the mean yield for the population of measurements in which the control is used. To test the null hypotheses specified by H0 against two-sided alternatives for an experimental situation in which there are m treatments, excluding the control, and n observation per treatment, we first calculate Di = Y i• Y 0•, i = 1, 2,..., m. 2 M SW n q Analysis of Variance 644 At a significance level ↵, we reject the null hypothesis H0 if |Di| > D ↵ 2 (m, ⌫) where ⌫ represents the degrees of freedom for the error mean square. The values of the quantity D ↵ 2 (m, ⌫) are tabulated for various ↵, m and ⌫. Example 20.6. For the data given in the table below perform a Dunnett test to determine any significant differences between each treatment mean and the control. Control Sample 1 Sample 2 9.2 9.1 9.2 9.2 9.3 9.2 9.5 9.5 9.5 9.6 9.5 9.4 9.4 9.3 9.3 9.3 9.2 9.3 Answer: The ANOVA table for this data is given by Source of variation Sums of squares Degree of freedom Mean squares F-statistics F Between Within Total 0.280 0.600 0.340 2 15 17 35.0 0.140 0.004 At the significance level ↵ = 0.05, we find that D0.025(2, 15) = 2.44. Since 35.0 = D > D0.025(2, 15) = 2.44 we reject the null hypothesis. Now we perform the Dunnett test to determine if there is any significant differences between each treatment mean and the control. From given data, we obtain Y 0• = 9.2, Y 1• = 9.5 and Y 2• = 9.3. Since m = 2, we have to make 2 pair wise comparisons, namely µ0 with µ1, and µ0 with µ2. First we consider the comparison of
µ0 with µ1. For this case, we find D1 = Y 1• Y 0• 2 M SW n = 9.5 9.2 2 (0.004) 6 = 8.2158. q q Probability and Mathematical Statistics 645 Since 8.2158 = D1 > D0.025(2, 15) = 2.44 we reject the null hypothesis H0 : µ1 = µ0 in favor of the alternative Ha : µ1 6 = µ0. Next we find D2 = Y 2• Y 0• 2 M SW n = 9.3 9.2 2 (0.004) 6 = 2.7386. Since q q 2.7386 = D2 > D0.025(2, 15) = 2.44 we reject the null hypothesis H0 : µ2 = µ0 in favor of the alternative Ha : µ2 6 = µ0. 20.4. Tests for the Homogeneity of Variances One of the assumptions behind the ANOVA is the equal variance, that is the variances of the variables under consideration should be the same for all population. Earlier we have pointed out that the F-statistics is insensitive to slight departures from the assumption of equal variances when the sample sizes are equal. Nevertheless it is advisable to run a preliminary test for homogeneity of variances. Such a test would certainly be advisable in the case of unequal sample sizes if there is a doubt concerning the homogeneity of population variances. Suppose we want to test the null hypothesis 2 = · · · 2 m H0 : 2 1 = 2 versus the alternative hypothesis Ha : not all variances are equal. A frequently used test for the homogeneity of population variances is the Bartlett test. Bartlett (1937) proposed a test for equal variances that was modification of the normal-theory likelihood ratio test. We will use this test to test the above null hypothesis H0 against Ha. m from the samples of First, we compute the m sample variances S2 2,..., S2 1, S2 Analysis of Variance 646 size n1, n2,..., nm, with n1 + n2 + · · · + nm = N. The test statistics Bc is given by m m) ln S2 (N Bc = 1 +
Wij = |Yij Y i•|. Example 20.8. For the data in Example 20.7 do a Levene test to examine if the homogeneity of variances condition is met for a significance level 0.05. Answer: From data we find that Y 1• = 33.00, Y 2• = 32.83, Y 3• = 31.83, and Y 4• = 33.42. Next we compute Wij =. The resulting values are given in the table below. Y i• 2 Yij Transformed Data Sample 1 Sample 2 Sample 3 Sample 4 1 25 16 16 81 36 16 4 64 16 64 49 14.7 0.7 3.4 103.4 3.4 14.7 23.4 8.0 17.4 124.7 14.7 3.4 0.0 4.7 3.4 103.4 0.0 1.4 8.0 23.4 26.7 34.0 0.0 0.7 0.3 19.5 2.0 29.3 2.0 0.3 19.5 5.8 11.7 12.8 91.8 73.7 Probability and Mathematical Statistics 649 Now we perform an ANOVA to the data given in the table above. The ANOVA table for this data is given by Source of variation Sums of squares Degree of freedom Mean squares F-statistics F Between 1430 Within 45491 Total 46922 3 44 47 0.46 477 1034 At the significance level ↵ = 0.05, we find the F-table that F0.05(3, 44) = 2.84. Since 0.46 = F < F0.05(3, 44) = 2.84 we do not reject the null hypothesis that the variances are equal. Hence Bartlett test suggests that the homogeneity of variances condition is met. Although Bartlet test is most widely used test for homogeneity of variances a test due to Cochran provides a computationally simple procedure. Cochran test is one of the best method for detecting cases where the variance of one of the groups is much larger than that of the other groups. The test statistics of Cochran test is give by max i m 1   m S2 i. C = S2 i The
Cochran test rejects the null hypothesis H0 : 2 significance level ↵ if 1 = 2 2 = · · · 2 m at a i=1 X C > C↵. The critical values of C↵ were originally published by Eisenhart et al (1947) for some combinations of degrees of freedom ⌫ and the number of groups m. Here the degrees of freedom ⌫ are ⌫ = max m i 1   (ni 1). Example 20.9. For the data in Example 20.7 perform a Cochran test to examine if the homogeneity of variances condition is met for a significance level 0.05. Analysis of Variance 650 Answer: From the data the variances of each group can be found to be S2 1 = 35.2836, S2 2 = 30.1401, S2 3 = 19.4481, S2 4 = 24.4036. Hence the test statistic for Cochran test is C = 35.2836 35.2836 + 30.1401 + 19.4481 + 24.4036 = 35.2836 109.2754 = 0.3328. The critical value C0.5(3, 11) is given by 0.4884. Since 0.3328 = C < C0.5(3, 11) = 0.4884. At a significance level ↵ = 0.05, we do not reject the null hypothesis that the variances are equal. Hence Cochran test suggests that the homogeneity of variances condition is met. 20.5. Exercises 1. A consumer organization wants to compare the prices charged for a particular brand of refrigerator in three types of stores in Louisville: discount stores, department stores and appliance stores. Random samples of 6 stores of each type were selected. The results were shown below. Discount Department Appliance 1200 1300 1100 1400 1250 1150 1700 1500 1450 1300 1300 1500 1600 1500 1300 1500 1700 1400 At the 0.05 level of significance, is there any evidence of a difference in the average price between the types of stores? 2. It is conjectured that a certain gene might be linked to ovarian cancer. The ovarian cancer is sub-classified into three categories: stage I, stage
II and stage III-IV. There are three random samples available; one from each stage. The samples are labelled with three colors dyes and hybridized on a four channel cDNA microarray (one channel remains unused). The experiment is repeated 5 times and the following data were obtained. Probability and Mathematical Statistics 651 Microarray Data Array mRNA 1 mRNA 2 mRNA 3 1 2 3 4 5 100 90 105 83 78 95 93 79 85 90 70 72 81 74 75 Is there any difference between the averages of the three mRNA samples at 0.05 significance level? 3. A stock market analyst thinks 4 stock of mutual funds generate about the same return. He collected the accompaning rate-of-return data on 4 different mutual funds during the last 7 years. The data is given in table below. Year 2000 2001 2002 2004 2005 2006 2007 Mutual Funds A B 12 12 13 18 17 18 12 11 17 18 20 19 12 15 C 13 19 15 25 19 17 20 D 15 11 12 11 10 10 12 Do a one-way ANOVA to decide whether the funds give different performance at 0.05 significance level. 4. Give a proof of the Theorem 20.4. 5. Give a proof of the Lemma 20.2. 6. Give a proof of the Theorem 20.5. 7. Give a proof of the Theorem 20.6. 8. An automobile company produces and sells its cars under 3 different brand names. An autoanalyst wants to see whether different brand of cars have same performance. He tested 20 cars from 3 different brands and recorded the mileage per gallon. Analysis of Variance 652 Brand 1 Brand 2 Brand 3 32 29 32 25 35 33 34 31 34 25 31 37 32 31 28 30 34 39 36 38 Do the data suggest a rejection of the null hypothesis at a significance level 0.05 that the mileage per gallon generated by three different brands are same. Probability and Mathematical Statistics 653 Chapter 21 GOODNESS OF FITS TESTS In point estimation, interval estimation or hypothesis test we always started with a random sample X1, X2,..., Xn of size n from a known distribution. In order to apply the theory to data analysis one has to know the distribution of the sample. Quite often the experiment
er (or data analyst) assumes the nature of the sample distribution based on his subjective knowledge. Goodness of fit tests are performed to validate experimenter opinion about the distribution of the population from where the sample is drawn. The most commonly known and most frequently used goodness of fit tests are the Kolmogorov-Smirnov (KS) test and the Pearson chi-square (2) test. There is a controversy over which test is the most powerful, but the general feeling seems to be that the Kolmogorov-Smirnov test is probably more powerful than the chi-square test in most situations. The KS test measures the distance between distribution functions, while the 2 test measures the distance between density functions. Usually, if the population distribution is continuous, then one uses the Kolmogorov-Smirnov where as if the population distribution is discrete, then one performs the Pearson’s chi-square goodness of fit test. 21.1. Kolmogorov-Smirnov Test Let X1, X2,..., Xn be a random sample from a population X. We hypothesized that the distribution of X is F (x). Further, we wish to test our hypothesis. Thus our null hypothesis is Ho : X F (x). ⇠ Goodness of Fit Tests 654 We would like to design a test of this null hypothesis against the alternative Ha : X F (x). 6⇠ In order to design a test, first of all we need a statistic which will unbiasedly estimate the unknown distribution F (x) of the population X using the random sample X1, X2,..., Xn. Let x(1) < x(2) < · · · < x(n) be the observed values of the ordered statistics X(1), X(2),..., X(n). The empirical distribution of the random sample is defined as 0 k n 1 if if if x < x(1), x(k)  x(n)  x < x(k+1), for k = 1, 2,..., n 1, x. Fn(x) = 8 >>>< >>>: The graph of the empirical distribution function F4(x) is shown below. F4(x) 1.00 0.75 0.50 0.25 0 x (1)
For a fixed value of x, the empirical distribution function can be considered as a random variable that takes on the values 0, 1/n, 2/n, ..., (n−1)/n, n/n. First we show that Fn(x) is an unbiased estimator of the population distribution F(x). That is,

    E(Fn(x)) = F(x)     (1)

for a fixed value of x. To establish (1), we need the probability density function of the random variable Fn(x). From the definition of the empirical distribution we see that if exactly k observations are less than or equal to x, then Fn(x) = k/n, which is n Fn(x) = k. The probability that an observation is less than or equal to x is given by F(x).

[Figure: Distribution of the empirical distribution function. There are k sample observations at or below x, each with probability F(x), and n−k sample observations above x, each with probability 1−F(x).]

Hence (see the figure above)

    P(n Fn(x) = k) = P(Fn(x) = k/n) = \binom{n}{k} [F(x)]^k [1 − F(x)]^{n−k}

for k = 0, 1, ..., n. Thus

    n Fn(x) ∼ BIN(n, F(x)).

Thus the expected value of the random variable n Fn(x) is given by

    E(n Fn(x)) = n F(x)
    n E(Fn(x)) = n F(x)
    E(Fn(x)) = F(x).

This shows that, for a fixed x, Fn(x) on average equals the population distribution function F(x). Hence the empirical distribution function Fn(x) is an unbiased estimator of F(x).

Since n Fn(x) ∼ BIN(n, F(x)), the variance of n Fn(x) is given by

    Var(n Fn(x)) = n F(x) [1 − F(x)].

Hence the variance of Fn(x) is
    Var(Fn(x)) = F(x) [1 − F(x)] / n.

It is easy to see that Var(Fn(x)) → 0 as n → ∞ for all values of x. Thus the empirical distribution function Fn(x) and F(x) tend to be closer to each other with large n. As a matter of fact, Glivenko, a Russian mathematician, proved that Fn(x) converges to F(x) uniformly in x as n → ∞ with probability one.

Because of the convergence of the empirical distribution function to the theoretical distribution function, it makes sense to construct a goodness of fit test based on the closeness of Fn(x) and the hypothesized distribution F(x). Let

    Dn = max_{x ∈ ℝ} |Fn(x) − F(x)|.

That is, Dn is the maximum of all pointwise differences |Fn(x) − F(x)|.

The distribution of the Kolmogorov-Smirnov statistic Dn can be derived. However, we shall not do that here, as the derivation is quite involved. Instead, we give a closed form formula for P(Dn ≤ d). If X1, X2, ..., Xn is a sample from a population with continuous distribution function F(x), then

    P(Dn ≤ d) = 0                                   if d ≤ 1/(2n),
                n! ∏_{i=1}^{n} ∫_{(2i−1)/(2n)−d}^{(2i−1)/(2n)+d} du   if 1/(2n) < d < 1,
                1                                   if d ≥ 1,

where du = du1 du2 ··· dun with 0 < u1 < u2 < ··· < un < 1. Further,

    lim_{n→∞} P(√n Dn ≤ d) = 1 − 2 Σ_{k=1}^{∞} (−1)^{k−1} e^{−2 k² d²}.

These formulas show that the distribution of the Kolmogorov-Smirnov statistic Dn is distribution free, that is, it does not depend on the distribution F of the population. For most situations, it is sufficient to use the following approximation due to Kolmogorov:

    P(Dn ≤ d) ≈ 1 − 2 e^{−2 n d²}   for d > 1/√n.
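The limiting series and the one-term approximation are easy to compare numerically. The following Python sketch (our own illustration; the function names are ours) does so for a few values of d on the √n Dn scale.

```python
import math

def ks_limit_cdf(d, terms=100):
    """Limiting distribution of sqrt(n)*D_n:
    1 - 2 * sum_{k>=1} (-1)^(k-1) * exp(-2 k^2 d^2)."""
    return 1.0 - 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * k * k * d * d)
                           for k in range(1, terms + 1))

def ks_one_term(d):
    """One-term (k = 1) truncation of the series, which underlies
    Kolmogorov's approximation used in the text."""
    return 1.0 - 2.0 * math.exp(-2.0 * d * d)

# For moderately large d the one-term version is already very close:
for d in [0.8, 1.0, 1.2]:
    print(d, round(ks_limit_cdf(d), 4), round(ks_one_term(d), 4))
```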
If the null hypothesis Ho : X ∼ F(x) is true, the statistic Dn is small. It is therefore reasonable to reject Ho if and only if the observed value of Dn is larger than some constant dn. If the level of significance is given to be α, then the constant dn can be found from

    α = P(Dn > dn | Ho is true) ≈ 2 e^{−2 n dn²}.

This yields the following hypothesis test: Reject Ho if Dn ≥ dn, where

    dn = √( (1/(2n)) ln(2/α) )

is obtained from the above Kolmogorov approximation. Note that the approximate value of d12 obtained from this formula is 0.3533 when α = 0.1; a more accurate tabulated value of d12 is 0.34.

Next we address the issue of the computation of the statistic Dn. Let us define

    Dn⁺ = max_{x ∈ ℝ} {Fn(x) − F(x)}

and

    Dn⁻ = max_{x ∈ ℝ} {F(x) − Fn(x)}.

Then it is easy to see that Dn = max{Dn⁺, Dn⁻}. Further, since Fn(x(i)) = i/n, it can be shown that

    Dn⁺ = max_{1 ≤ i ≤ n} max{ i/n − F(x(i)), 0 }

and

    Dn⁻ = max_{1 ≤ i ≤ n} max{ F(x(i)) − (i−1)/n, 0 }.

Therefore it can also be shown that

    Dn = max_{1 ≤ i ≤ n} max{ i/n − F(x(i)), F(x(i)) − (i−1)/n }.

[Figure: The Kolmogorov-Smirnov statistic D4, shown as the largest vertical distance between the empirical distribution F4(x) and the hypothesized F(x), with jump points x(1), x(2), x(3), x(4).]
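The approximate critical value dn is equally easy to compute; this sketch (our own code) reproduces the value 0.3533 quoted above for n = 12 and α = 0.1.

```python
import math

def ks_critical_value(n, alpha):
    """Approximate d_n obtained by inverting alpha ≈ 2 exp(-2 n d_n^2):
    d_n = sqrt(ln(2/alpha) / (2n))."""
    return math.sqrt(math.log(2 / alpha) / (2 * n))

print(round(ks_critical_value(12, 0.1), 4))   # 0.3533, as noted above
```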
Example 21.1. The data on the heights of 12 infants are given below: 18.2, 21.4, 22.6, 17.4, 17.6, 16.7, 17.1, 21.4, 20.1, 17.9, 16.8, 23.1. Test the hypothesis that the data came from some normal population at significance level α = 0.1.

Answer: Here the null hypothesis is Ho : X ∼ N(µ, σ²). First we estimate µ and σ² from the data. Thus we get

    x̄ = 230.3 / 12 = 19.2

and

    s² = [4482.01 − (230.3)²/12] / (12 − 1) = 62.17 / 11 = 5.65.

Hence s = 2.38. Then by the null hypothesis

    F(x(i)) = P( Z ≤ (x(i) − 19.2) / 2.38 ),

where Z ∼ N(0, 1) and i = 1, 2, ..., n. Next we compute the Kolmogorov-Smirnov statistic Dn for the given sample of size 12 using the following tabular form:

     i    x(i)    F(x(i))   i/12 − F(x(i))   F(x(i)) − (i−1)/12
     1    16.7    0.1469       −0.0636            0.1469
     2    16.8    0.1562        0.0105            0.0729
     3    17.1    0.1894        0.0606            0.0227
     4    17.4    0.2236        0.1097           −0.0264
     5    17.6    0.2514        0.1653           −0.0819
     6    17.9    0.2912        0.2088           −0.1255
     7    18.2    0.3372        0.2461           −0.1628
     8    20.1    0.6480        0.0187            0.0647
     9    21.4    0.8212       −0.0712            0.1545
    10    21.4    0.8212        0.0121            0.0712
    11    22.6    0.9236       −0.0069            0.0903
    12    23.1    0.9495        0.0505            0.0328

Thus D12 = 0.2461. From the tabulated value, we see that d12 = 0.34 for significance level α = 0.1. Since D12 is smaller than
d12, we accept the null hypothesis Ho : X ∼ N(µ, σ²). Hence the data are consistent with a normal population.

Example 21.2. Let X1, X2, ..., X10 be a random sample from a distribution whose probability density function is

    f(x) = 1 if 0 < x < 1, and 0 otherwise.

Based on the observed values 0.62, 0.36, 0.23, 0.76, 0.65, 0.09, 0.55, 0.26, 0.38, 0.24, test the hypothesis Ho : X ∼ UNIF(0, 1) against Ha : X ≁ UNIF(0, 1) at significance level α = 0.1.

Answer: The null hypothesis is Ho : X ∼ UNIF(0, 1). Thus

    F(x) = 0 if x < 0;  x if 0 ≤ x < 1;  1 if x ≥ 1.

Hence F(x(i)) = x(i) for i = 1, 2, ..., n. Next we compute the Kolmogorov-Smirnov statistic Dn for the given sample of size 10 using the following tabular form:

     i    x(i)    F(x(i))   i/10 − F(x(i))   F(x(i)) − (i−1)/10
     1    0.09    0.09          0.01              0.09
     2    0.23    0.23         −0.03              0.13
     3    0.24    0.24          0.06              0.04
     4    0.26    0.26          0.14             −0.04
     5    0.36    0.36          0.14             −0.04
     6    0.38    0.38          0.22             −0.12
     7    0.55    0.55          0.15             −0.05
     8    0.62    0.62          0.18             −0.08
     9    0.65    0.65          0.25             −0.15
    10    0.76    0.76          0.24             −0.14

Thus D10 = 0.25. From the tabulated value, we see that d10 = 0.37 for significance level α = 0.1. Since D10 is smaller than d10, we accept the null hypothesis Ho : X ∼ UNIF(0, 1).
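As a numerical check on Example 21.2, the following sketch (our own code, not the book's) computes Dn directly from the order-statistic formula derived earlier.

```python
def ks_statistic(sample, F):
    """D_n = max over i of max{ i/n - F(x_(i)), F(x_(i)) - (i-1)/n }."""
    xs = sorted(sample)
    n = len(xs)
    return max(max(i / n - F(x), F(x) - (i - 1) / n)
               for i, x in enumerate(xs, start=1))

data = [0.62, 0.36, 0.23, 0.76, 0.65, 0.09, 0.55, 0.26, 0.38, 0.24]
d10 = ks_statistic(data, lambda x: x)   # F(x) = x under Ho : UNIF(0, 1)
print(round(d10, 2))                    # 0.25, matching Example 21.2
```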
21.2. Chi-square Test

The chi-square goodness of fit test was introduced by Karl Pearson in 1900. Recall that the Kolmogorov-Smirnov test applies only to a specified continuous distribution. Thus if we wish to test the null hypothesis Ho : X ∼ BIN(n, p) against the alternative Ha : X ≁ BIN(n, p), then we cannot use the Kolmogorov-Smirnov test. The Pearson chi-square goodness of fit test can be used for testing null hypotheses involving discrete as well as continuous distributions. Unlike the Kolmogorov-Smirnov test, the Pearson chi-square test uses the density function of the population X.

Let X1, X2, ..., Xn be a random sample from a population X with probability density function f(x). We wish to test the null hypothesis

    Ho : X ∼ f(x)

against

    Ha : X ≁ f(x).

If the probability density function f(x) is continuous, then we divide up the abscissa of the probability density function f(x) and calculate the probability pi for each of the intervals by using

    pi = ∫_{x_{i−1}}^{x_i} f(x) dx,

where {x0, x1, ..., xm} is a partition of the domain of f(x).

[Figure: Discretization of a continuous density function f(x) into intervals.]

Let Y1, Y2, ..., Ym denote the number of observations (from the random sample X1, X2, ..., Xn) in the 1st, 2nd, 3rd, ..., m-th interval, respectively. Since the sample size is n, the number of observations expected to fall in the i-th interval is equal to n pi. Then

    Q = Σ_{i=1}^{m} (Yi − n pi)² / (n pi)

measures the closeness of the observed Yi to the expected numbers n pi. The distribution of Q is chi-square with m − 1 degrees of freedom. The derivation of this fact is quite involved and beyond the scope of this introductory level book. Although the distribution of Q for m > 2 is hard to derive, for m = 2 it is not very difficult.
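When no closed form is available, the probabilities pi of the discretization step can be obtained by numerical integration. Here is a minimal sketch (our own code, with a hypothetical partition of the standard normal density).

```python
import math

def interval_probs(f, cuts, steps=10_000):
    """Approximate p_i = integral of f over each interval [x_{i-1}, x_i]
    using the midpoint rule (a simple sketch, not production quadrature)."""
    probs = []
    for a, b in zip(cuts[:-1], cuts[1:]):
        h = (b - a) / steps
        probs.append(sum(f(a + (j + 0.5) * h) for j in range(steps)) * h)
    return probs

# Standard normal density over a hypothetical partition of [-4, 4]:
phi = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
print([round(p, 4) for p in interval_probs(phi, [-4, -1, 0, 1, 4])])
# approximately [0.1586, 0.3413, 0.3413, 0.1586]
```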
The chi-square test rejects Ho at significance level α when Q > χ²_{1−α}(m−1), where χ²_{1−α}(m−1) denotes the real number such that the area under the chi-square density with m−1 degrees of freedom from zero to this real number is 1−α. The following examples illustrate the chi-square goodness-of-fit test.

Example 21.3. A die was rolled 30 times and the frequency xi of each number of spots i = 1, ..., 6 was recorded. If a chi-square goodness of fit test is used to test the hypothesis that the die is fair at significance level α = 0.05, what is the value of the chi-square statistic and what decision is reached?

Answer: In this problem, the null hypothesis is

    Ho : p1 = p2 = ··· = p6 = 1/6.

The alternative hypothesis is that not all pi's are equal to 1/6. The test will be based on 30 trials, so n = 30. The test statistic is

    Q = Σ_{i=1}^{6} (xi − n pi)² / (n pi),

where p1 = p2 = ··· = p6 = 1/6. Thus

    n pi = 30 × (1/6) = 5

and

    Q = Σ_{i=1}^{6} (xi − 5)² / 5 = (1/5)[16 + 1 + 16 + 16 + 9] = 58/5 = 11.6.

The tabulated chi-square value is χ²_{0.95}(5) = 11.07. Since

    11.6 = Q > χ²_{0.95}(5) = 11.07,

the null hypothesis Ho : p1 = p2 = ··· = p6 = 1/6 should be rejected.
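A sketch of the chi-square computation in code (ours, not the book's); the frequency vector below is hypothetical, but it is one set of counts consistent with the arithmetic shown in Example 21.3 and reproduces Q = 11.6.

```python
def chi_square_statistic(observed, probabilities):
    """Q = sum over categories of (Y_i - n p_i)^2 / (n p_i),
    where n is the total number of observations."""
    n = sum(observed)
    return sum((y - n * p) ** 2 / (n * p)
               for y, p in zip(observed, probabilities))

# Hypothetical frequencies from 30 rolls of a die (illustration only)
freqs = [9, 4, 1, 9, 2, 5]
q = chi_square_statistic(freqs, [1 / 6] * 6)
print(round(q, 2))   # 11.6 > chi^2_{0.95}(5) = 11.07, so reject Ho
```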
Example 21.4. It is hypothesized that an experiment results in outcomes K, L, M and N with probabilities 1/5, 3/10, 1/10 and 2/5, respectively. Forty independent repetitions of the experiment gave the following results:

    Outcome      K    L    M    N
    Frequency   11   14    5   10

If a chi-square goodness of fit test is used to test the above hypothesis at significance level α = 0.01, what is the value of the chi-square statistic and what decision is reached?

Answer: Here the null hypothesis to be tested is

    Ho : p(K) = 1/5, p(L) = 3/10, p(M) = 1/10, p(N) = 2/5.

The test will be based on n = 40 trials. The test statistic is

    Q = Σ_{k=1}^{4} (xk − n pk)² / (n pk)
      = (11 − 8)²/8 + (14 − 12)²/12 + (5 − 4)²/4 + (10 − 16)²/16
      = 9/8 + 4/12 + 1/4 + 36/16
      = 95/24
      = 3.958.

From the chi-square table, we have χ²_{0.99}(3) = 11.35. Thus

    3.958 = Q < χ²_{0.99}(3) = 11.35.

Therefore we accept the null hypothesis.

Example 21.5. Test at the 10% significance level the hypothesis that the following data

    06.88 06.92 04.80 09.85 07.05 19.06 06.54 03.67 02.94 04.89
    69.82 06.97 04.34 13.45 05.74 10.07 16.91 07.47 05.04 07.97
    15.74 00.32 04.14 05.19 18.69 02.45 23.69 44.10 01.70 02.14
    05.79 03.02 09.87 02.44 18.99 18.90 05.42 01.54 01.55 20.99
    07.99 05.38 02.36 09.66 00.97 04.82 10.43 15.06 00.49 02.81

give the values of a random sample of size 50 from an exponential distribution with probability density function

    f(x; θ) = (1/θ) e^{−x/θ} if 0 < x < ∞, and 0 elsewhere,

where θ > 0.

Answer: From the data, x̄ = 9.74 and s = 11.71. Notice that Ho : X ∼ EXP(θ). Hence we have to partition the domain of the experimental distribution into m parts. There is no rule to determine what the value of m should be. We take m = 10 (an arbitrary choice for the
sake of convenience). We partition the domain of the given probability density function into 10 mutually disjoint sets of equal probability. This partition can be found as follows. Note that x̄ estimates θ; thus θ̂ = x̄ = 9.74. Now we compute the points x1, x2, ..., x10 which will be used to partition the domain of f(x). From

    1/10 = ∫_{x0}^{x1} (1/θ) e^{−x/θ} dx = 1 − e^{−x1/θ}

(with x0 = 0), we obtain

    x1 = θ ln(10/9) = 9.74 ln(10/9) = 1.026.

Using the value of x1, we can find the value of x2. That is,

    1/10 = ∫_{x1}^{x2} (1/θ) e^{−x/θ} dx = e^{−x1/θ} − e^{−x2/θ},

so

    x2 = −θ ln( e^{−x1/θ} − 1/10 ).

In general,

    xk = −θ ln( e^{−x_{k−1}/θ} − 1/10 )

for k = 1, 2, ..., 9, and x10 = ∞. Using these xk's we find the intervals Ak = [x_{k−1}, xk), which are tabulated below along with the number of data points in each interval:

    [0, 1.026), [1.026, 2.173), [2.173, 3.474), [3.474, 4.975), [4.975, 6.751),
    [6.751, 8.925), [8.925, 11.727), [11.727, 15.676), [15.676, 22.437), [22.437, ∞).

The observed frequencies oi total 50, and each interval has expected value ei = n pi = 50 × (1/10) = 5. From this table, we compute the statistic

    Q = Σ_{i=1}^{10} (oi − ei)² / ei = 6.4,

and from the chi-square table we obtain χ²_{0.90}(9) = 14.68. Since

    6.4 = Q < χ²_{0.90}(9) = 14.68,

we accept the null hypothesis that the sample was taken from a population with exponential distribution.
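The partition points in Example 21.5 follow from F(xk) = k/10 under EXP(θ̂); this sketch (our own code) reproduces them.

```python
import math

theta_hat = 9.74   # sample mean from Example 21.5

# Equiprobable cut points under EXP(theta_hat):
# F(x_k) = k/10  =>  x_k = -theta_hat * ln(1 - k/10)
cuts = [-theta_hat * math.log(1 - k / 10) for k in range(1, 10)]
print([round(c, 3) for c in cuts])
# [1.026, 2.173, 3.474, 4.975, 6.751, 8.925, 11.727, 15.676, 22.427]
# (the text tabulates 22.437 for the last cut point; a small rounding difference)

# With observed interval counts o_i and expected counts e_i = 5, the statistic
# Q = sum (o_i - e_i)^2 / e_i is then compared against chi^2_{0.90}(9) = 14.68.
```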
21.3. Review Exercises

1. The data on the heights of 4 infants are: 18.2, 21.4, 16.7 and 23.1. For significance level α = 0.1, use the Kolmogorov-Smirnov test to test the hypothesis that the data came from some uniform population on the interval (15, 25). (Use d4 = 0.56 at α = 0.1.)

2. A four-sided die was rolled 40 times with the following results:

    Number of spots   1   2    3    4
    Frequency         5   9   10   16

If a chi-square goodness of fit test is used to test the hypothesis that the die is fair at significance level α = 0.05, what is the value of the chi-square statistic?

3. A coin is tossed 500 times and k heads are observed. If the chi-square distribution is used to test the hypothesis that the coin is unbiased, this hypothesis will be accepted at the 5 percent level of significance if and only if k lies between what values? (Use χ²_{0.05}(1) = 3.84.)

4. It is hypothesized that an experiment results in outcomes A, C, T and G with probabilities 1/16, 3/16, 5/8 and 1/8, respectively. Eighty independent repetitions of the experiment gave the following results:

    Outcome     A    G    C    T
    Frequency   3   28   15   34

If a chi-square goodness of fit test is used to test the above hypothesis at significance level α = 0.1, what is the value of the chi-square statistic and what decision is reached?

5. A die was rolled 50 times with the results shown below:

    Number of spots    1   2    3    4   5   6
    Frequency (xi)     8   7   12   13   4   6

If a chi-square goodness of fit test is used to test the hypothesis that the die is fair at significance level α = 0.1, what is the value of the chi-square statistic and what decision is reached?

6. Test at the 10% significance level the hypothesis that the following data

    05.88 05.92 03.80 08.85 06.05 18.06 05.54 02.67 01.94 03.89
    70.82
          07.97 05.34 14.45 06.74 11.07 17.91 08.47 06.04 08.97
    16.74 01.32 03.14 06.19 19.69 03.45 24.69 45.10 02.70 03.14
    04.79 02.02 08.87 03.44 17.99 17.90 04.42 01.54 01.55 19.99
    06.99 05.38 03.36 08.66 01.97 03.82 11.43 14.06 01.49 01.81

give the values of a random sample of size 50 from an exponential distribution with probability density function

    f(x; θ) = (1/θ) e^{−x/θ} if 0 < x < ∞, and 0 elsewhere,

where θ > 0.

7. Test at the 10% significance level the hypothesis that the following data

    0.88 0.92 0.80 0.85 0.05 0.06 0.54 0.67 0.94 0.89
    0.82 0.97 0.34 0.45 0.74 0.07 0.91 0.47 0.04 0.97
    0.74 0.32 0.14 0.19 0.69 0.45 0.69 0.10 0.70 0.14
    0.79 0.02 0.87 0.44 0.99 0.90 0.42 0.54 0.55 0.99
    0.94 0.38 0.36 0.66 0.97 0.82 0.43 0.06 0.49 0.81

give the values of a random sample of size 50 from a distribution with probability density function

    f(x; θ) = (1 + θ) x^θ if 0 ≤ x ≤ 1, and 0 elsewhere,

where θ > 0.

8. Test at the 10% significance level the hypothesis that the following data

    06.88 06.92 04.80 09.85 07.05 19.06 06.54 03.67 02.94 04.89
    29.82 06.97 04.34 13.45 05.74 10.07 16.91 07.47 05.04 07.97
    15.74 00.32 04.14 05.19 18.69 02.45 23.69 24.10 01.70 02.14
    05.79 03.02 09.87
                      02.44 18.99 18.90 05.42 01.54 01.55 20.99
    07.99 05.38 02.36 09.66 00.97 04.82 10.43 15.06 00.49 02.81

give the values of a random sample of size 50 from a uniform distribution with probability density function

    f(x; θ) = 1/θ if 0 ≤ x ≤ θ, and 0 elsewhere.

9. Suppose that in 60 rolls of a die the outcomes 1, 2, 3, 4, 5, and 6 occur with frequencies n1, n2, 14, 8, 10, and 8, respectively. What is the least value of Σ_{i=1}^{2} (ni − 10)² for which the chi-square test rejects the hypothesis that the die is fair at the 1% level of significance?

10. It is hypothesized that of all marathon runners 70% are adult men, 25% are adult women, and 5% are youths. To test this hypothesis, the following data from a recent marathon are used:

    Adult Men   Adult Women   Youths   Total
    630         300           70       1000

A chi-square goodness-of-fit test is used. What is the value of the statistic?
ANSWERS TO SELECTED REVIEW EXERCISES

CHAPTER 1

1. 1912. 2. 244. 3. 7488. 4. (a) 7/24, (b) 6/24 and (c) 4/24. 5. 0.95. 6. 4/7. 7. 2/3. 8. 7560. 10. 43. 11. 2. 12. 0.3238. 13. S has a countable number of elements. 14. S has an uncountable number of elements. 15. 25/648. 16. (n−1)(n−2)(1/2)^{n+1}. 17. (5!)². 18. 7/10. 19. 1/3. 20. (n+1)/(3n). 21. 6/11. 22. 1/5.

CHAPTER 2

1. 1/3. 2. (6!)²/(21)⁶. 3. 0.941. 4. 4/5. 5. 6/11. 6. 255/256. 7. 0.2929. 8. 10/17. 9. 30/31. 10. 7/24. 11. [(4/10)(3/6)] / [(4/10)(3/6) + (6/10)(2/6)]. 12. [(0.01)(0.9)] / [(0.01)(0.9) + (0.99)(0.1)].
13. 1/5. 14. 2/9. 15. (a) (2/5)(4/52) + (3/5)(4/16) and (b) [(3/5)(4/16)] / [(2/5)(4/52) + (3/5)(4/16)]. 16. 1/4. 17. 3/8. 18. 5. 19. 5/42. 20. 1/4.

CHAPTER 3

1. 1/4. 2. (k+1)/(2k+1). 3. 1/∛2. 4. Mode of X = 0 and median of X = 0. 5. θ ln(10/9). 6. 2 ln 2. 7. 0.25. 8. f(2) = 0.5, f(3) = 0.2, f(π) = 0.3. 9. f(x) = (1/6) x³ e^{−x}. 10. 3/4. 11. a = 500, mode = 0.2, and P(X ≥ 0.2) = 0.6766. 12. 0.5. 13. 0.5. 14. 1 − F(−y). 15. 1/4. 16. R_X = {3, 4, 5, 6, 7, 8, 9}; f(3) = f(4) = 2/20, f(5) = f(6) = f(7) = 4/20, f(8) = f(9) = 2/20. 17. R_X = {2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}; f(2) = 1/36, f(3) = 2/36, f(4) = 3/36, f(5) = 4/36, f(6) = 5/36, f(7) = 6/36, f(8) = 5/36, f(9) = 4/36, f(10) = 3/36, f(11) = 2/36, f(12) = 1/36. 18. R_X = {0, 1, 2, 3, 4, 5}; f(0) = 59049/10⁵, f(1) = 32805/10⁵, f(2) = 7290/10⁵, f(3) = 810/10⁵,
f(4) = 45/10⁵, f(5) = 1/10⁵. 19. R_X = {1, 2, 3, 4, 5, 6, 7}; f(1) = 0.4, f(2) = 0.2666, f(3) = 0.1666, f(4) = 0.0952, f(5) = 0.0476, f(6) = 0.0190, f(7) = 0.0048. 20. c = 1 and P(X = even) = 1/4. 21. c = 1/2 and P(1 ≤ X ≤ 2) = 3/4. 22. c = 3/2 and P(X ≤ 1/2) = 3/16.

CHAPTER 4

1. −0.995. 2. (a) 1/33, (b) 12/33, (c) 65/33. 3. (c) 0.25, (d) 0.75, (e) 0.75, (f) 0. 4. (a) 3.75, (b) 2.6875, (c) 10.5, (d) 10.75, (e) 71.5. 5. (a) 0.5, (b) π, (c) (3/10)π. 6. 17/24. 7. (1/√θ)√(E(X²)). 8. 8/3. 9. 280. 10. 9/20. 11. 5.25. 12. a = 4h³/√π, E(X) = 2/(h√π), Var(X) = (1/h²)(3/2 − 4/π). 13. E(X) = 7/2, E(Y) = 7/8. 14. −38. 15. −38. 16. M(t) = 1 + 2t + 6t² + ···. 17. (1/4)[3e^{2t} + e^{3t}]. 18. ∏_{i=0}^{n−1} (k + i). 19. 1/4. 20. 120. 21. E(X) = 3, Var(X) = 2. 22. 11. 23. c = E(X). 24. F(c) = 0.5.
CHAPTER 6

g(y) = y³ if 0 < y < √2, 0 otherwise. 24. ln(X) ∼ N(µ, σ²). 25. e^µ. 26. e^µ. 27. 0.3669. 29. Y ∼ GBETA(α, β, a, b). 32. (i) (1/2)√π, (ii) 1/2, (iii) (1/4)√π, (iv) 1/2. 33. (i) 1/180, (ii) (100)¹³ 5! 7! / 13!, (iii) 1/360. 35. 2/α. 36. E(Xⁿ) = θⁿ Γ(n+α)/Γ(α).

CHAPTER 7

1. f1(x) = (2x+3)/21 and f2(y) = (3y+6)/21. 2. f(x, y) = 1/36 if 1 < x < y = 2x < 12; 2/36 if 1 < x < y < 2x < 12; 0 otherwise. 3. 1/18. 4. 1/(2e⁴). 5. 1/3. 6. f1(x) = 2(1−x) if 0 < x < 1, 0 otherwise. 7. (e²−1)(e−1)/e⁵. 8. 0.2922. 9. 5/7. 10. f1(x) = (5/48) x (8−x³) if 0 < x < 2, 0 otherwise. 11. f2(y) = 2y if 0 < y < 1, 0 otherwise. 12. f(y/x) = 1/(2√(1−(x−1)²)) if (x−1)² + (y−1)² ≤ 1, 0 otherwise. 13. 6/7. 14. f(y/x) = 1/(2x) if 0 < y < 2x < 1, 0 otherwise. 15. 4/9. 16. g(w) = 2e^{−2w}. 17. g(w) = (6w²/θ³)(1 − w³/θ³). 18. 11.
CHAPTER 10

1. g(y) = 1/2 + 1/(4√y) for 0 ≤ y ≤ 1, 0 otherwise. 2. g(y) = 3√y/(16 m√m) for 0 ≤ y ≤ 4m, 0 otherwise. 3. g(y) = 2y for 0 ≤ y ≤ 1, 0 otherwise. 4. g(z) = (1/16)(z+4) for −4 ≤ z ≤ 0, (1/16)(4−z) for 0 ≤ z ≤ 4, 0 otherwise. 5. g(z, x) = … e^{−x/2} for … . 6. g(y) = 4/y³ for … , 0 otherwise. 7. g(z) = z²/250 + z/25 + … for 0 ≤ z ≤ 10, and z²/250 − z³/15000 + … for 10 ≤ z ≤ 20, 0 otherwise. 8. g(u) = (4a²/u³) ln(…) + 2a(u−a)/(u²(u−2a)) for 2a ≤ u < ∞, 0 otherwise. 9. h(z) = (3z² − 3z + 1)/216, z = 1, 2, 3, 4, 5, 6. 10. g(z) = (4h³/(m√π)) √(2z/m) e^{−2h²z/m} for 0 ≤ z < ∞, 0 otherwise. 11. g(u, v) = 3u/350 + 9v/350 for 10 ≤ 3u + v ≤ 20, u ≥ 0, v ≥ 0. 12. g1(u) = 2u/(1+u)³ if 0 ≤ u < ∞, 0 otherwise. 13. g(u, v) = (u+v)/32 for … . 14. g(u, v) = 5[9v³ − 5u²v + 3uv² + u³]/32768 for 0 < 2v + 2u
< … .

CHAPTER 17

7. The pdf of Q is g(q) = n e^{−nq} if 0 < q < ∞, 0 otherwise. The confidence interval is [X(1) − (1/n) ln(2/α), X(1) − (1/n) ln(2/(2−α))]. 8. The pdf of Q is g(q) = (1/2) e^{−q/2} if 0 < q < ∞, 0 otherwise. The confidence interval is [X(1) − (1/n) ln(2/α), X(1) − (1/n) ln(2/(2−α))]. 9. The pdf of Q is g(q) = n q^{n−1} if 0 < q < 1, 0 otherwise. The confidence interval is [X(1) − (1/n) ln(2/α), X(1) − (1/n) ln(2/(2−α))]. 10. The pdf g(q) of Q is given by g(q) = n q^{n−1} if 0 ≤ q ≤ 1, 0 otherwise. The confidence interval is [(2/(2−α))^{1/n} X(n), (2/α)^{1/n} X(n)]. 11. The pdf of Q is given by g(q) = n(n−1) q^{n−2} (1−q) if 0 ≤ q ≤ 1, 0 otherwise. 12. [X(1) − z_{α/2} (1/√n), X(1) + z_{α/2} (1/√n)]. 13. [θ̂ − z_{α/2} (θ̂+1)/√n, θ̂ + z_{α/2} (θ̂+1)/√n], where θ̂ = −1 − n/(Σ_{i=1}^{n} ln xi). 14. … 15. … 16. … 17. [X(n) − z_{α/2} X(n)/((n+1)√(n+2)), X(n) + z_{α/2} X(n)/((n+1)√(n+2))].

CHAPTER 18
1. α = 0.03125 and β = 0.763. 2. Do not reject Ho. 3. α = 0.0511 and β(λ) = 1 − Σ_{x=0}^{7} (8)^x e^{−8}/x! = 0.5. 4. α = 0.08 and β = 0.46. 5. α = 0.19. 6. α = 0.0109. 7. α = 0.0668 and β = 0.0062. 8. C = {(x1, x2) | x2 ≥ 3.9395}. 9. C = {(x1, ..., x10) | x̄ ≥ 0.3}. 10. C = {x ∈ [0, 1] | x ≥ 0.829}. 11. C = {(x1, x2) | x1 + x2 ≥ 5}. 12. C = {(x1, ..., x8) | x̄ … ≤ a}. 13. C = {(x1, ..., xn) | … x̄ ln x̄ ≤ a}. 14. C = {(x1, ..., x5) | x̄ … ≤ a}. 15. C = {(x1, x2, x3) | |x̄ − 3| ≥ 1.96}. 16. C = {(x1, x2, x3) | x̄ e^{−(1/3)x̄} ≤ a}. 17. C = {(x1, x2, ..., xn) | 3^{x̄} e^{−10x̄} ≤ a}. 18. 1/3. 19. C = {(x1, x2, x3) | x(3) ≤ ∛117}. 20. C = {(x1, x2, x3) | x̄ ≥ 12.04}. 21. α = 1/16 and β = 255/256. 22. α = 0.05.

CHAPTER 21

9. Σ_{i=1}^{2} (ni − 10)² ≥ 63.43. 10. 25.