We define the generalized bivariate beta random variable $(X, Y)$ with positive parameters $\theta_1$, $\theta_2$ and $\theta_3$ by writing $(X, Y) \sim \mathrm{GBeta}(\theta_1, \theta_2, \theta_3, a_1, b_1, a_2, b_2)$. It can be shown that if $(Y_1, Y_2) \sim \mathrm{Beta}(\theta_1, \theta_2, \theta_3)$ and $X_k = (b_k - a_k) Y_k + a_k$ (for $k = 1, 2$), then $(X_1, X_2) \sim \mathrm{GBeta}(\theta_1, \theta_2, \theta_3, a_1, b_1, a_2, b_2)$. Therefore, by Theorem 12.11, we have the following theorem.

Theorem 12.13. Let $(X, Y) \sim \mathrm{GBeta}(\theta_1, \theta_2, \theta_3, a_1, b_1, a_2, b_2)$, where $\theta_1$, $\theta_2$ and $\theta_3$ are positive a priori chosen parameters. Then $\frac{X - a_1}{b_1 - a_1} \sim \mathrm{Beta}(\theta_1, \theta_2 + \theta_3)$ and $\frac{Y - a_2}{b_2 - a_2} \sim \mathrm{Beta}(\theta_2, \theta_1 + \theta_3)$, and
$$E(X) = (b_1 - a_1) \frac{\theta_1}{\theta} + a_1, \qquad Var(X) = (b_1 - a_1)^2 \, \frac{\theta_1 (\theta - \theta_1)}{\theta^2 (\theta + 1)},$$
$$E(Y) = (b_2 - a_2) \frac{\theta_2}{\theta} + a_2, \qquad Var(Y) = (b_2 - a_2)^2 \, \frac{\theta_2 (\theta - \theta_2)}{\theta^2 (\theta + 1)},$$
$$Cov(X, Y) = -(b_1 - a_1)(b_2 - a_2) \, \frac{\theta_1 \theta_2}{\theta^2 (\theta + 1)},$$
where $\theta = \theta_1 + \theta_2 + \theta_3$.

Another generalization of the bivariate beta distribution is the following:

Definition 12.7. A continuous bivariate random variable $(X_1, X_2)$ is said to have the generalized bivariate beta distribution if its joint probability density function is of the form
$$f(x_1, x_2) = \frac{1}{B(\alpha_1, \beta_1) \, B(\alpha_2, \beta_2)} \, x_1^{\alpha_1 + \alpha_2 - 1} \, x_2^{\alpha_2 - 1} \, (1 - x_1)^{\beta_1 - 1} \, (1 - x_1 x_2)^{\beta_2 - 1}, \qquad 0 < x_1 < 1, \ 0 < x_1 x_2 < 1.$$
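Returning to Theorem 12.13, the moment formulas can be checked by simulation. The following is a minimal sketch of mine (not from the text), which uses the standard fact that $(Y_1, Y_2) \sim \mathrm{Beta}(\theta_1, \theta_2, \theta_3)$ is the first two coordinates of a $\mathrm{Dirichlet}(\theta_1, \theta_2, \theta_3)$ vector; the parameter values are arbitrary test choices.

```python
# A minimal Monte Carlo check of Theorem 12.13 (not from the text): sample
# (Y1, Y2) as the first two coordinates of a Dirichlet(t1, t2, t3) vector,
# apply the affine map, and compare sample moments with the formulas.
import numpy as np

rng = np.random.default_rng(0)
t1, t2, t3 = 2.0, 3.0, 4.0               # theta_1, theta_2, theta_3
a1, b1, a2, b2 = 1.0, 5.0, -2.0, 2.0     # arbitrary ranges
t = t1 + t2 + t3                         # theta = theta_1 + theta_2 + theta_3

Y = rng.dirichlet([t1, t2, t3], size=200_000)[:, :2]
X1 = (b1 - a1) * Y[:, 0] + a1            # X = (b1 - a1) Y1 + a1
X2 = (b2 - a2) * Y[:, 1] + a2            # Y = (b2 - a2) Y2 + a2

print(X1.mean(), (b1 - a1) * t1 / t + a1)                          # E(X)
print(X1.var(), (b1 - a1)**2 * t1 * (t - t1) / (t**2 * (t + 1)))   # Var(X)
print(np.cov(X1, X2)[0, 1],
      -(b1 - a1) * (b2 - a2) * t1 * t2 / (t**2 * (t + 1)))         # Cov(X, Y)
```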
The parameter $\rho$ determines the shape and orientation on the $(x, y)$-plane of the mountain. The following figures show the graphs of the bivariate normal distributions with different values of the correlation coefficient $\rho$. The first two figures illustrate the graph of the bivariate normal distribution with $\rho = 0$, $\mu_1 = \mu_2 = 0$, and $\sigma_1 = \sigma_2 = 1$, and the equi-density plots. The next two figures illustrate the graph with $\rho = 0.5$, $\mu_1 = \mu_2 = 0$, and $\sigma_1 = \sigma_2 = 0.5$, and the equi-density plots. The last two figures illustrate the graph with $\rho = -0.5$, $\mu_1 = \mu_2 = 0$, and $\sigma_1 = \sigma_2 = 0.5$, and the equi-density plots.

One of the remarkable features of the bivariate normal distribution is that if we vertically slice the graph of $f(x, y)$ along any direction, we obtain a univariate normal distribution. In particular, if we vertically slice the graph of $f(x, y)$ along the $x$-axis, we obtain a univariate normal distribution. That is, the marginals of $f(x, y)$ are again normal. One can show that the marginals of $f(x, y)$ are given by
$$f_1(x) = \frac{1}{\sigma_1 \sqrt{2\pi}} \, e^{-\frac{1}{2} \left( \frac{x - \mu_1}{\sigma_1} \right)^2} \qquad \text{and} \qquad f_2(y) = \frac{1}{\sigma_2 \sqrt{2\pi}} \, e^{-\frac{1}{2} \left( \frac{y - \mu_2}{\sigma_2} \right)^2}.$$
In view of these, the following theorem is obvious.

Theorem 12.14. If $(X, Y) \sim N(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho)$, then
$$E(X) = \mu_1, \quad E(Y) = \mu_2, \quad Var(X) = \sigma_1^2, \quad Var(Y) = \sigma_2^2, \quad Corr(X, Y) = \rho,$$
$$M(s, t) = e^{\mu_1 s + \mu_2 t + \frac{1}{2} \left( \sigma_1^2 s^2 + 2 \rho \sigma_1 \sigma_2 s t + \sigma_2^2 t^2 \right)}.$$
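Theorem 12.14 is easy to verify numerically. Below is a small sketch of mine (not the book's): it maps the book's parametrization $N(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho)$ to a covariance matrix, samples from it, and compares sample moments and one empirical MGF value with the stated formulas; all parameter values are arbitrary.

```python
# A small numerical check of Theorem 12.14 (my sketch, not from the text).
import numpy as np

rng = np.random.default_rng(1)
mu1, mu2, s1, s2, rho = 1.0, -2.0, 2.0, 0.5, 0.7
cov = [[s1**2, rho*s1*s2], [rho*s1*s2, s2**2]]

xy = rng.multivariate_normal([mu1, mu2], cov, size=500_000)
print(xy.mean(axis=0))                  # ~ (mu1, mu2)
print(np.corrcoef(xy.T)[0, 1])          # ~ rho

s, t = 0.3, -0.2                        # arbitrary MGF arguments
emp = np.exp(xy @ [s, t]).mean()        # empirical E[e^{sX + tY}]
thy = np.exp(mu1*s + mu2*t + 0.5*(s1**2*s**2 + 2*rho*s1*s2*s*t + s2**2*t**2))
print(emp, thy)
```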
Theorem 12.15. If $(X, Y) \sim N(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho)$, then
$$E(Y/x) = \mu_2 + \rho \, \frac{\sigma_2}{\sigma_1} \, (x - \mu_1), \qquad Var(Y/x) = \sigma_2^2 \left( 1 - \rho^2 \right),$$
$$E(X/y) = \mu_1 + \rho \, \frac{\sigma_1}{\sigma_2} \, (y - \mu_2), \qquad Var(X/y) = \sigma_1^2 \left( 1 - \rho^2 \right).$$
We have seen that if $(X, Y)$ has a bivariate normal distribution, then the distributions of $X$ and $Y$ are also normal. However, the converse of this is not true. That is, if $X$ and $Y$ have normal distributions as their marginals, then their joint distribution is not necessarily bivariate normal.

Now we present some characterization theorems concerning the bivariate normal distribution. The first theorem is due to Cramer (1941).

Theorem 12.16. The random variables $X$ and $Y$ have a joint bivariate normal distribution if and only if every linear combination of $X$ and $Y$ has a univariate normal distribution.

Theorem 12.17. The random variables $X$ and $Y$ with unit variances and correlation coefficient $\rho$ have a joint bivariate normal distribution if and only if
$$\frac{\partial}{\partial \rho} \, E[g(X, Y)] = E\left[ \frac{\partial^2}{\partial X \, \partial Y} \, g(X, Y) \right]$$
holds for an arbitrary function $g(x, y)$ of two variables.

Many interesting characterizations of the bivariate normal distribution can be found in the survey paper of Hamedani (1992).

12.6. Bivariate Logistic Distributions

In this section, we study two bivariate logistic distributions. A univariate logistic distribution is often considered as an alternative to the univariate normal distribution. The univariate logistic distribution has a shape very close to that of a univariate normal distribution but has heavier tails than the normal. This distribution is also used as an alternative to the univariate Weibull distribution in life-testing. The univariate logistic distribution has the following probability density function
$$f(x) = \frac{\pi}{\sigma \sqrt{3}} \, \frac{e^{-\frac{\pi}{\sqrt{3}} \left( \frac{x - \mu}{\sigma} \right)}}{\left[ 1 + e^{-\frac{\pi}{\sqrt{3}} \left( \frac{x - \mu}{\sigma} \right)} \right]^2}, \qquad -\infty < x < \infty,$$
where $-\infty < \mu < \infty$ and $\sigma > 0$ are parameters.
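The factor $\pi/\sqrt{3}$ in this density is exactly what makes $\mu$ the mean and $\sigma^2$ the variance. A quick numerical-integration check of mine (not from the text), using a `cosh` form of the density for numerical stability; the values of $\mu$ and $\sigma$ are arbitrary:

```python
# Numerical check (not from the text) that the pi/sqrt(3) scaling in the
# logistic density makes mu the mean and sigma^2 the variance.
import numpy as np
from scipy.integrate import quad

mu, sigma = 2.0, 1.5                      # arbitrary test values
c = np.pi / np.sqrt(3.0)

def f(x):
    # e^{-u}/(1+e^{-u})^2 rewritten as 1/(2 cosh(u/2))^2 to avoid overflow
    return (c / sigma) / (2.0 * np.cosh(c * (x - mu) / (2.0 * sigma))) ** 2

mass, _ = quad(f, -np.inf, np.inf)
mean, _ = quad(lambda x: x * f(x), -np.inf, np.inf)
var,  _ = quad(lambda x: (x - mean) ** 2 * f(x), -np.inf, np.inf)
print(mass, mean, var)                    # ~ 1, mu, sigma**2
```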
A continuous bivariate random variable $(X, Y)$ is said to have the bivariate logistic distribution of second kind if its joint probability density function is of the form
$$f(x, y) = \frac{\left[ \phi_\alpha(x, y) \right]^{1 - 2\alpha}}{\left[ 1 + \phi_\alpha(x, y) \right]^2} \left( \frac{\phi_\alpha(x, y) - 1}{\phi_\alpha(x, y) + 1} + \alpha \right) e^{-\alpha (x + y)}, \qquad -\infty < x, y < \infty,$$
where $\alpha > 0$ is a parameter and $\phi_\alpha(x, y) := \left( e^{-\alpha x} + e^{-\alpha y} \right)^{\frac{1}{\alpha}}$. As before, we denote a bivariate logistic random variable of second kind $(X, Y)$ by writing $(X, Y) \sim \mathrm{LOGS}(\alpha)$.

The marginal densities of $X$ and $Y$ are again logistic and they are given by
$$f_1(x) = \frac{e^{-x}}{(1 + e^{-x})^2}, \qquad -\infty < x < \infty,$$
and
$$f_2(y) = \frac{e^{-y}}{(1 + e^{-y})^2}, \qquad -\infty < y < \infty.$$
It was shown by Oliveira (1961) that if $(X, Y) \sim \mathrm{LOGS}(\alpha)$, then the correlation between $X$ and $Y$ is
$$\rho(X, Y) = 1 - \frac{1}{2 \alpha^2}.$$

12.7. Review Exercises

1. If $(X, Y) \sim N(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho)$ with $Q(x, y) = x^2 + 2y^2 - 2xy + 2x - 2y + 1$, then what is the value of the conditional variance of $Y$ given the event $X = x$?

2. If $(X, Y) \sim N(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho)$ with $Q(x, y) = \frac{1}{102} \left[ (x + 3)^2 - 16 (x + 3)(y - 2) + 4 (y - 2)^2 \right]$, then what is the value of the conditional expectation of $Y$ given $X = x$?

3. If $(X, Y) \sim N(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho)$, then what is the correlation coefficient of the random variables $U$ and $V$, where $U = 2X + 3Y$ and $V = 2X - 3Y$?
4. Let the random variables $X$ and $Y$ denote the height and weight of wild turkeys. If the random variables $X$ and $Y$ have a bivariate normal distribution with $\mu_1 = 18$ inches, $\mu_2 = 15$ pounds, $\sigma_1 = 3$ inches, $\sigma_2 = 2$ pounds, and $\rho = 0.75$, then what is the expected weight of one of these wild turkeys that is 17 inches tall?

5. If $(X, Y) \sim N(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho)$, then what is the moment generating function of the random variables $U$ and $V$, where $U = 7X + 3Y$ and $V = 7X - 3Y$?

6. Let $(X, Y)$ have a bivariate normal distribution. The mean of $X$ is 10 and the variance of $X$ is 12. The mean of $Y$ is 5 and the variance of $Y$ is 5. If the covariance of $X$ and $Y$ is 4, then what is the probability that $X + Y$ is greater than 10?

7. Let $X$ and $Y$ have a bivariate normal distribution with means $\mu_X = 5$ and $\mu_Y = 6$, standard deviations $\sigma_X = 3$ and $\sigma_Y = 2$, and covariance $\sigma_{XY} = 2$. Let $\Phi$ denote the cumulative distribution function of a normal random variable with mean 0 and variance 1. What is $P(2 \le X - Y \le 5)$ in terms of $\Phi$?

8. If $(X, Y) \sim N(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho)$ with $Q(x, y) = x^2 + xy + 2y^2$, then what is the conditional distribution of $X$ given the event $Y = y$?

9. If $(X, Y) \sim \mathrm{GAMK}(\alpha, \theta)$, where $0 < \alpha < \infty$ and $0 \le \theta < 1$ are parameters, then show that the moment generating function is given by
$$M(s, t) = \left[ (1 - s)(1 - t) - \theta s t \right]^{-\alpha}.$$
10. Let $X$ and $Y$ have a bivariate gamma distribution of Kibble with parameters $\alpha = 1$ and $0 \le \theta < 1$. What is the probability that the random variable $7X$ is less than $\frac{1}{2}$?

11. If $(X, Y) \sim \mathrm{GAMC}(\alpha, \beta, \gamma)$, then what are the regression and scedastic curves of $Y$ on $X$?

12. The position of a random point $(X, Y)$ is equally probable anywhere on a circle of radius $R$ whose center is at the origin. What is the probability density function of each of the random variables $X$ and $Y$? Are the random variables $X$ and $Y$ independent?

13. If $(X, Y) \sim \mathrm{GAMC}(\alpha, \beta, \gamma)$, what is the correlation coefficient of the random variables $X$ and $Y$?

14. Let $X$ and $Y$ have a bivariate exponential distribution of Gumbel with parameter $\theta > 0$. What is the regression curve of $Y$ on $X$?

15. A screen of a navigational radar station represents a circle of radius 12 inches. As a result of noise, a spot may appear with its center at any point of the circle. Find the expected value and variance of the distance between the center of the spot and the center of the circle.

16. Let $X$ and $Y$ have a bivariate normal distribution. Which of the following statements must be true? (I) Any nonzero linear combination of $X$ and $Y$ has a normal distribution. (II) $E(Y/X = x)$ is a linear function of $x$. (III) $Var(Y/X = x) \le Var(Y)$.

17. If $(X, Y) \sim \mathrm{LOGS}(\alpha)$, then what is the correlation between $X$ and $Y$?

18. If $(X, Y) \sim \mathrm{LOGF}(\mu_1, \mu_2, \sigma_1, \sigma_2)$, then what is the correlation between the random variables $X$ and $Y$?

19. If $(X, Y) \sim \mathrm{LOGF}(\mu_1, \mu_2, \sigma_1, \sigma_2)$, then show that marginally $X$ and $Y$ are univariate logistic.

20. If $(X, Y) \sim \mathrm{LOGF}(\mu_1, \mu_2, \sigma_1, \sigma_2)$, then what is the scedastic curve of the random variable $Y$ on $X$?
Chapter 13

SEQUENCES OF RANDOM VARIABLES AND ORDER STATISTICS

In this chapter, we generalize some of the results we have studied in the previous chapters. We do these generalizations because they are needed in the subsequent chapters relating to mathematical statistics. In this chapter, we also examine the weak law of large numbers, Bernoulli's law of large numbers, the strong law of large numbers, and the central limit theorem. Further, in this chapter, we treat the order statistics and percentiles.

13.1. Distribution of sample mean and variance

Consider a random experiment. Let $X$ be the random variable associated with this experiment. Let $f(x)$ be the probability density function of $X$. Let us repeat this experiment $n$ times. Let $X_k$ be the random variable associated with the $k^{th}$ repetition. Then the collection of the random variables $\{ X_1, X_2, \ldots, X_n \}$ is a random sample of size $n$. From here after, we simply denote $X_1, X_2, \ldots, X_n$ as a random sample of size $n$. The random variables $X_1, X_2, \ldots, X_n$ are independent and identically distributed with the common probability density function $f(x)$.

For a random sample, functions such as the sample mean $\overline{X}$ and the sample variance $S^2$ are called statistics. In a particular sample, say $x_1, x_2, \ldots, x_n$, we observe $\overline{x}$ and $s^2$. We may consider
$$\overline{X} = \frac{1}{n} \sum_{i=1}^n X_i$$
and
$$S^2 = \frac{1}{n - 1} \sum_{i=1}^n \left( X_i - \overline{X} \right)^2$$
as random variables, and $\overline{x}$ and $s^2$ are the realizations from a particular sample.

In this section, we are mainly interested in finding the probability distributions of the sample mean $\overline{X}$ and sample variance $S^2$, that is, the distributions of the statistics of samples.

Example 13.1. Let $X_1$ and $X_2$ be a random sample of size 2 from a distribution with probability density function
$$f(x) = \begin{cases} 6x(1 - x) & \text{if } 0 < x < 1 \\ 0 & \text{otherwise.} \end{cases}$$
What are the mean and variance of the sample sum $Y = X_1 + X_2$?
Answer: The population mean is
$$\mu_X = E(X) = \int_0^1 x \, 6x(1 - x) \, dx = 6 \int_0^1 x^2 (1 - x) \, dx = 6 \, B(3, 2) \quad \text{(here } B \text{ denotes the beta function)}$$
$$= 6 \, \frac{\Gamma(3) \, \Gamma(2)}{\Gamma(5)} = 6 \left( \frac{1}{12} \right) = \frac{1}{2}.$$
Since $X_1$ and $X_2$ have the same distribution, we obtain $\mu_{X_1} = \frac{1}{2} = \mu_{X_2}$. Hence the mean of $Y$ is given by
$$E(Y) = E(X_1 + X_2) = E(X_1) + E(X_2) = \frac{1}{2} + \frac{1}{2} = 1.$$
Next, we compute the variance of the population $X$. The variance of $X$ is given by
$$Var(X) = E\left( X^2 \right) - E(X)^2 = \int_0^1 6x^3 (1 - x) \, dx - \left( \frac{1}{2} \right)^2 = 6 \, B(4, 2) - \frac{1}{4} = 6 \, \frac{\Gamma(4) \, \Gamma(2)}{\Gamma(6)} - \frac{1}{4} = \frac{6}{20} - \frac{5}{20} = \frac{1}{20}.$$
Since $X_1$ and $X_2$ have the same distribution as the population $X$, we get
$$Var(X_1) = \frac{1}{20} = Var(X_2).$$
Hence, the variance of the sample sum $Y$ is given by
$$Var(Y) = Var(X_1 + X_2) = Var(X_1) + Var(X_2) + 2 \, Cov(X_1, X_2) = Var(X_1) + Var(X_2) = \frac{1}{20} + \frac{1}{20} = \frac{1}{10}.$$
Example 13.2. Let $X_1$ and $X_2$ be a random sample of size 2 from a distribution with density
$$f(x) = \begin{cases} \frac{1}{4} & \text{for } x = 1, 2, 3, 4 \\ 0 & \text{otherwise.} \end{cases}$$
What is the distribution of the sample sum $Y = X_1 + X_2$?

Answer: Since the range space of $X_1$ as well as $X_2$ is $\{1, 2, 3, 4\}$, the range space of $Y = X_1 + X_2$ is $R_Y = \{2, 3, 4, 5, 6, 7, 8\}$. Let $g(y)$ be the density function of $Y$. We want to find this density function. First, we find $g(2)$, $g(3)$ and so on.
$$g(2) = P(Y = 2) = P(X_1 + X_2 = 2) = P(X_1 = 1 \text{ and } X_2 = 1) = P(X_1 = 1) \, P(X_2 = 1) = f(1) \, f(1) = \left( \frac{1}{4} \right) \left( \frac{1}{4} \right) = \frac{1}{16},$$
using the independence of $X_1$ and $X_2$. Similarly,
$$g(3) = P(X_1 = 1 \text{ and } X_2 = 2) + P(X_1 = 2 \text{ and } X_2 = 1) = f(1) \, f(2) + f(2) \, f(1) = \frac{2}{16},$$
$$g(4) = P(X_1 = 1 \text{ and } X_2 = 3) + P(X_1 = 3 \text{ and } X_2 = 1) + P(X_1 = 2 \text{ and } X_2 = 2) = f(1) f(3) + f(3) f(1) + f(2) f(2) = \frac{3}{16}.$$
Similarly, we get
$$g(5) = \frac{4}{16}, \quad g(6) = \frac{3}{16}, \quad g(7) = \frac{2}{16}, \quad g(8) = \frac{1}{16}.$$
Thus, putting these into one expression, we get
$$g(y) = P(Y = y) = \sum_{k=1}^{y-1} f(k) \, f(y - k) = \frac{4 - |y - 5|}{16}, \qquad y = 2, 3, 4, \ldots, 8.$$
Remark 13.1. Note that $g(y) = \sum_{k=1}^{y-1} f(k) f(y - k)$ is the discrete convolution of $f$ with itself. The concept of convolution was introduced in chapter 10.
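The discrete convolution in Remark 13.1 can also be computed mechanically. Here is a minimal sketch of mine (not from the text) reproducing the density $g(y)$ of Example 13.2 with `numpy.convolve`:

```python
# Compute the density of Y = X1 + X2 in Example 13.2 as the discrete
# convolution of f with itself (a sketch, not part of the text).
import numpy as np

f = np.array([0.25, 0.25, 0.25, 0.25])   # P(X = 1), ..., P(X = 4)
g = np.convolve(f, f)                     # P(Y = 2), ..., P(Y = 8)

for y, p in enumerate(g, start=2):
    print(y, p, (4 - abs(y - 5)) / 16)    # matches (4 - |y - 5|)/16
```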
Example 13.2 can also be done using the moment generating function method as follows:
$$M_Y(t) = M_{X_1 + X_2}(t) = M_{X_1}(t) \, M_{X_2}(t) = \left( \frac{e^t + e^{2t} + e^{3t} + e^{4t}}{4} \right)^2 = \frac{e^{2t} + 2e^{3t} + 3e^{4t} + 4e^{5t} + 3e^{6t} + 2e^{7t} + e^{8t}}{16}.$$
Hence, the density of $Y$ is given by
$$g(y) = \frac{4 - |y - 5|}{16}, \qquad y = 2, 3, 4, \ldots, 8.$$
Theorem 13.1. If $X_1, X_2, \ldots, X_n$ are mutually independent random variables with densities $f_1(x_1), f_2(x_2), \ldots, f_n(x_n)$, and $E[u_i(X_i)]$, $i = 1, 2, \ldots, n$ exist, then
$$E\left[ \prod_{i=1}^n u_i(X_i) \right] = \prod_{i=1}^n E[u_i(X_i)],$$
where the $u_i$ ($i = 1, 2, \ldots, n$) are arbitrary functions.

Proof: We prove the theorem assuming that the random variables $X_1, X_2, \ldots, X_n$ are continuous. If the random variables are not continuous, then the proof follows exactly in the same manner if one replaces the integrals by summations. Since
$$E\left( \prod_{i=1}^n u_i(X_i) \right) = E(u_1(X_1) \cdots u_n(X_n)) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} u_1(x_1) \cdots u_n(x_n) \, f(x_1, \ldots, x_n) \, dx_1 \cdots dx_n$$
$$= \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} u_1(x_1) \cdots u_n(x_n) \, f_1(x_1) \cdots f_n(x_n) \, dx_1 \cdots dx_n$$
$$= \int_{-\infty}^{\infty} u_1(x_1) f_1(x_1) \, dx_1 \cdots \int_{-\infty}^{\infty} u_n(x_n) f_n(x_n) \, dx_n = E(u_1(X_1)) \cdots E(u_n(X_n)) = \prod_{i=1}^n E(u_i(X_i)),$$
the proof of the theorem is now complete.

Example 13.3. Let $X$ and $Y$ be two random variables with the joint density
$$f(x, y) = \begin{cases} e^{-(x+y)} & \text{for } 0 < x, y < \infty \\ 0 & \text{otherwise.} \end{cases}$$
What is the expected value of the continuous random variable $Z = X^2 Y^2 + X Y^2 + X^2 + X$?

Answer: Since
$$f(x, y) = e^{-(x+y)} = e^{-x} \, e^{-y} = f_1(x) \, f_2(y),$$
the random variables $X$ and $Y$ are mutually independent. Hence, the expected value of $X$ is
$$E(X) = \int_0^{\infty} x \, f_1(x) \, dx = \int_0^{\infty} x e^{-x} \, dx = \Gamma(2) = 1.$$
Similarly, the expected value of $X^2$ is given by
$$E\left( X^2 \right) = \int_0^{\infty} x^2 \, f_1(x) \, dx = \int_0^{\infty} x^2 e^{-x} \, dx = \Gamma(3) = 2.$$
Since the marginals of $X$ and $Y$ are the same, we also get $E(Y) = 1$ and $E(Y^2) = 2$. Further, by Theorem 13.1, we get
$$E[Z] = E\left[ X^2 Y^2 + X Y^2 + X^2 + X \right] = E\left[ \left( X^2 + X \right) \left( Y^2 + 1 \right) \right] = E\left[ X^2 + X \right] E\left[ Y^2 + 1 \right] = (2 + 1)(2 + 1) = 9.$$
Theorem 13.2. If $X_1, X_2, \ldots, X_n$ are mutually independent random variables with respective means $\mu_1, \mu_2, \ldots, \mu_n$ and variances $\sigma_1^2, \sigma_2^2, \ldots, \sigma_n^2$, then the mean and variance of $Y = \sum_{i=1}^n a_i X_i$, where $a_1, a_2, \ldots, a_n$ are real constants, are given by
$$\mu_Y = \sum_{i=1}^n a_i \mu_i \qquad \text{and} \qquad \sigma_Y^2 = \sum_{i=1}^n a_i^2 \sigma_i^2.$$
Proof: First we show that $\mu_Y = \sum_{i=1}^n a_i \mu_i$. Since
$$\mu_Y = E(Y) = E\left( \sum_{i=1}^n a_i X_i \right) = \sum_{i=1}^n a_i E(X_i) = \sum_{i=1}^n a_i \mu_i.$$
Theorem 13.3. If $X_1, X_2, \ldots, X_n$ are independent random variables with respective moment generating functions $M_{X_i}(t)$, $i = 1, 2, \ldots, n$, then the moment generating function of $Y = \sum_{i=1}^n a_i X_i$ is given by
$$M_Y(t) = \prod_{i=1}^n M_{X_i}(a_i t).$$
Proof: Since
$$M_Y(t) = M_{\sum_{i=1}^n a_i X_i}(t) = \prod_{i=1}^n M_{a_i X_i}(t) = \prod_{i=1}^n M_{X_i}(a_i t),$$
we have the asserted result and the proof of the theorem is now complete.

Example 13.6. Let $X_1, X_2, \ldots, X_{10}$ be the observations from a random sample of size 10 from a distribution with density
$$f(x) = \frac{1}{\sqrt{2\pi}} \, e^{-\frac{1}{2} x^2}, \qquad -\infty < x < \infty.$$
What is the moment generating function of the sample mean?

Answer: The density of the population $X$ is a standard normal. Hence, the moment generating function of each $X_i$ is
$$M_{X_i}(t) = e^{\frac{1}{2} t^2}, \qquad i = 1, 2, \ldots, 10.$$
The moment generating function of the sample mean is
$$M_{\overline{X}}(t) = M_{\sum_{i=1}^{10} \frac{1}{10} X_i}(t) = \prod_{i=1}^{10} M_{X_i}\left( \frac{t}{10} \right) = \prod_{i=1}^{10} e^{\frac{t^2}{200}} = \left[ e^{\frac{t^2}{200}} \right]^{10} = e^{\frac{1}{2} \frac{t^2}{10}}.$$
Hence $\overline{X} \sim N\left( 0, \frac{1}{10} \right)$.

The last example tells us that if we take a sample of any size from a standard normal population, then the sample mean also has a normal distribution. The following theorem says that a linear combination of random variables with normal distributions is again normal.

Theorem 13.4. If $X_1, X_2, \ldots, X_n$ are mutually independent random variables such that $X_i \sim N\left( \mu_i, \sigma_i^2 \right)$, $i = 1, 2, \ldots, n$, then the random variable $Y = \sum_{i=1}^n a_i X_i$ is a normal random variable with mean
$$\mu_Y = \sum_{i=1}^n a_i \mu_i \qquad \text{and variance} \qquad \sigma_Y^2 = \sum_{i=1}^n a_i^2 \sigma_i^2,$$
that is $Y \sim N\left( \sum_{i=1}^n a_i \mu_i, \ \sum_{i=1}^n a_i^2 \sigma_i^2 \right)$.
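Both Example 13.6 and Theorem 13.4 are easy to illustrate by simulation. A small sketch of mine (not from the text); the coefficients and parameters are arbitrary:

```python
# Simulation check (not from the text) of Example 13.6 and Theorem 13.4:
# the mean of 10 standard normals is N(0, 1/10), and a linear combination
# of independent normals has mean sum a_i mu_i and variance sum a_i^2 s_i^2.
import numpy as np

rng = np.random.default_rng(2)
xbar = rng.normal(size=(100_000, 10)).mean(axis=1)
print(xbar.mean(), xbar.var())               # ~ 0 and ~ 1/10

a1, a2 = 2.0, -3.0                           # arbitrary coefficients
y = a1 * rng.normal(1.0, 2.0, 100_000) + a2 * rng.normal(-1.0, 0.5, 100_000)
print(y.mean(), a1 * 1.0 + a2 * (-1.0))      # mean: sum a_i mu_i
print(y.var(), a1**2 * 4.0 + a2**2 * 0.25)   # variance: sum a_i^2 sigma_i^2
```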
For our next theorem, we write
$$\overline{X}_n = \frac{1}{n} \sum_{i=1}^n X_i \qquad \text{and} \qquad S_n^2 = \frac{1}{n - 1} \sum_{i=1}^n \left( X_i - \overline{X}_n \right)^2.$$
Hence
$$\overline{X}_2 = \frac{1}{2} (X_1 + X_2)$$
and
$$S_2^2 = \left( X_1 - \overline{X}_2 \right)^2 + \left( X_2 - \overline{X}_2 \right)^2 = \frac{1}{4} \left( X_1 - X_2 \right)^2 + \frac{1}{4} \left( X_2 - X_1 \right)^2 = \frac{1}{2} \left( X_1 - X_2 \right)^2.$$
Further, it can be shown that
$$\overline{X}_{n+1} = \frac{n \overline{X}_n + X_{n+1}}{n + 1} \tag{13.1}$$
and
$$n \, S_{n+1}^2 = (n - 1) \, S_n^2 + \frac{n}{n + 1} \left( X_{n+1} - \overline{X}_n \right)^2. \tag{13.2}$$
The following theorem is very useful in mathematical statistics. In order to prove it, we need the following result, which can be established with some effort: two linear combinations of a pair of independent normally distributed random variables are themselves bivariate normal, and hence if they are uncorrelated, they are independent. The proof of the following theorem is based on the inductive proof by Stigler (1984).

Theorem 13.7. If $X_1, X_2, \ldots, X_n$ is a random sample of size $n$ from the normal distribution $N(\mu, \sigma^2)$, then the sample mean $\overline{X}_n = \frac{1}{n} \sum_{i=1}^n X_i$ and the sample variance $S_n^2 = \frac{1}{n-1} \sum_{i=1}^n \left( X_i - \overline{X}_n \right)^2$ have the following properties:
(a) $\frac{(n-1) \, S_n^2}{\sigma^2} \sim \chi^2(n - 1)$, and
(b) $\overline{X}_n$ and $S_n^2$ are independent.

Proof: We prove this theorem by induction. First, consider the case $n = 2$. Since each $X_i \sim N(\mu, \sigma^2)$, we have $X_1 + X_2 \sim N(2\mu, 2\sigma^2)$ and $X_1 - X_2 \sim N(0, 2\sigma^2)$. Hence
$$\frac{X_1 - X_2}{\sqrt{2} \, \sigma} \sim N(0, 1)$$
and therefore
$$\frac{1}{2} \, \frac{\left( X_1 - X_2 \right)^2}{\sigma^2} \sim \chi^2(1).$$
This proves (a), that is, $\frac{S_2^2}{\sigma^2} \sim \chi^2(1)$.

Since $X_1$ and $X_2$ are independent,
$$Cov(X_1 + X_2, \, X_1 - X_2) = Cov(X_1, X_1) - Cov(X_1, X_2) + Cov(X_2, X_1) - Cov(X_2, X_2) = \sigma^2 - 0 + 0 - \sigma^2 = 0.$$
Therefore $X_1 + X_2$ and $X_1 - X_2$ are uncorrelated bivariate normal random variables. Hence they are independent random variables. Thus $\frac{1}{2}(X_1 + X_2)$ and $\frac{1}{2} \left( X_1 - X_2 \right)^2$ are independent. This proves (b), that is, $\overline{X}_2$ and $S_2^2$ are independent.

Now assume the conclusion (that is, (a) and (b)) holds for a sample of size $n$. We prove that it holds for a sample of size $n + 1$.

Since $X_1, X_2, \ldots, X_{n+1}$ are independent and each $X_i \sim N(\mu, \sigma^2)$, we have $\overline{X}_n \sim N\left( \mu, \frac{\sigma^2}{n} \right)$. Moreover, $\overline{X}_n$ and $X_{n+1}$ are independent. Hence by (13.1), $\overline{X}_{n+1}$ is a linear combination of the independent random variables $\overline{X}_n$ and $X_{n+1}$.

The linear combination $X_{n+1} - \overline{X}_n$ of the random variables $X_{n+1}$ and $\overline{X}_n$ is a normal random variable with mean 0 and variance $\frac{n+1}{n} \sigma^2$. Hence
$$\frac{X_{n+1} - \overline{X}_n}{\sqrt{\frac{n+1}{n}} \, \sigma} \sim N(0, 1),$$
and therefore
$$\frac{n}{n + 1} \, \frac{\left( X_{n+1} - \overline{X}_n \right)^2}{\sigma^2} \sim \chi^2(1).$$
Since $X_{n+1}$ and $S_n^2$ are independent random variables, and by the induction hypothesis $\overline{X}_n$ and $S_n^2$ are independent, dividing (13.2) by $\sigma^2$ we get
$$\frac{n \, S_{n+1}^2}{\sigma^2} = \frac{(n - 1) \, S_n^2}{\sigma^2} + \frac{n}{n + 1} \, \frac{\left( X_{n+1} - \overline{X}_n \right)^2}{\sigma^2} = \chi^2(n - 1) + \chi^2(1) = \chi^2(n).$$
Hence (a) follows.

Finally, the induction hypothesis and the fact that
$$\overline{X}_{n+1} = \frac{n \overline{X}_n + X_{n+1}}{n + 1}$$
show that $\overline{X}_{n+1}$ is independent of $S_n^2$. Since
$$Cov\left( n \overline{X}_n + X_{n+1}, \, X_{n+1} - \overline{X}_n \right) = n \, Cov(\overline{X}_n, X_{n+1}) - n \, Cov(\overline{X}_n, \overline{X}_n) + Cov(X_{n+1}, X_{n+1}) - Cov(X_{n+1}, \overline{X}_n) = 0 - n \, \frac{\sigma^2}{n} + \sigma^2 - 0 = 0,$$
the random variables $n \overline{X}_n + X_{n+1}$ and $X_{n+1} - \overline{X}_n$ are uncorrelated. Since these two random variables are normal, they are therefore independent. Hence $\left( n \overline{X}_n + X_{n+1} \right)/(n+1)$ and $\left( X_{n+1} - \overline{X}_n \right)^2/(n+1)$ are also independent. Since $\overline{X}_{n+1}$ and $S_n^2$ are independent, it follows that $\overline{X}_{n+1}$ and
$$\frac{n - 1}{n} \, S_n^2 + \frac{1}{n + 1} \left( X_{n+1} - \overline{X}_n \right)^2$$
are independent, and hence $\overline{X}_{n+1}$ and $S_{n+1}^2$ are independent. This proves (b), and the proof of the theorem is now complete.

Remark 13.2. At first sight the statement (b) might seem odd, since the sample mean $\overline{X}_n$ occurs explicitly in the definition of the sample variance $S_n^2$. This remarkable independence of $\overline{X}_n$ and $S_n^2$ is a unique property that distinguishes the normal distribution from all other probability distributions.
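The independence in Theorem 13.7(b) can be glimpsed numerically. A minimal simulation of mine (not from the text): the correlation between $\overline{X}$ and $S^2$, which is only a proxy for independence, is essentially zero for normal samples but clearly positive for a skewed exponential population; all parameter values are arbitrary.

```python
# Simulation (not from the text): corr(Xbar, S^2) across many samples is
# ~0 for a normal population, visibly nonzero for an exponential one.
import numpy as np

rng = np.random.default_rng(3)
n, reps = 10, 100_000

for name, x in [("normal", rng.normal(5.0, 2.0, (reps, n))),
                ("exponential", rng.exponential(1.0, (reps, n)))]:
    xbar = x.mean(axis=1)
    s2 = x.var(axis=1, ddof=1)           # sample variance with n - 1
    print(name, np.corrcoef(xbar, s2)[0, 1])
```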
Example 13.9. Let $X_1, X_2, \ldots, X_n$ denote a random sample from a normal distribution with variance $\sigma^2 > 0$. If the first percentile of the statistic $W = \sum_{i=1}^n \frac{\left( X_i - \overline{X} \right)^2}{\sigma^2}$ is 1.24, where $\overline{X}$ denotes the sample mean, what is the sample size $n$?

Answer:
$$\frac{1}{100} = P(W \le 1.24) = P\left( \sum_{i=1}^n \frac{\left( X_i - \overline{X} \right)^2}{\sigma^2} \le 1.24 \right) = P\left( \frac{(n - 1) \, S^2}{\sigma^2} \le 1.24 \right) = P\left( \chi^2(n - 1) \le 1.24 \right).$$
Thus from the $\chi^2$-table, we get $n - 1 = 7$ and hence the sample size $n$ is 8.

Example 13.10. Let $X_1, X_2, \ldots, X_4$ be a random sample from a normal distribution with unknown mean and variance equal to 9. Let $S^2 = \frac{1}{3} \sum_{i=1}^4 \left( X_i - \overline{X} \right)^2$. If $P\left( S^2 \le k \right) = 0.05$, then what is $k$?

Answer:
$$0.05 = P\left( S^2 \le k \right) = P\left( \frac{3 S^2}{9} \le \frac{3k}{9} \right) = P\left( \chi^2(3) \le \frac{k}{3} \right).$$
From the $\chi^2$-table with 3 degrees of freedom, we get $\frac{k}{3} = 0.35$, and thus the constant $k$ is given by $k = 3 \, (0.35) = 1.05$.

13.2. Laws of Large Numbers

In this section, we mainly examine the weak law of large numbers. The weak law of large numbers states that if $X_1, X_2, \ldots, X_n$ is a random sample of size $n$ from a population $X$ with mean $\mu$, then the sample mean $\overline{X}$ rarely deviates from the population mean $\mu$ when the sample size $n$ is very large. In other words, the sample mean $\overline{X}$ converges in probability to the population mean $\mu$. We begin this section with a result known as the Markov inequality, which is needed to establish the weak law of large numbers.

Theorem 13.8 (Markov Inequality). Suppose $X$ is a nonnegative random variable with mean $E(X)$. Then
$$P(X \ge t) \le \frac{E(X)}{t}$$
for all $t > 0$.

Proof: We assume the random variable $X$ is continuous. If $X$ is not continuous, then a proof can be obtained for this case by replacing the integrals with summations in the following proof. Since
$$E(X) = \int_{-\infty}^{\infty} x f(x) \, dx = \int_{-\infty}^{t} x f(x) \, dx + \int_t^{\infty} x f(x) \, dx \ge \int_t^{\infty} x f(x) \, dx \ge \int_t^{\infty} t f(x) \, dx \quad \left( \text{because } x \in [t, \infty) \right)$$
$$= t \, P(X \ge t),$$
we see that
$$P(X \ge t) \le \frac{E(X)}{t}.$$
This completes the proof of the theorem.
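A tiny numerical illustration of the Markov inequality, of my own making (not from the text), using an $\mathrm{Exp}(1)$ population so that $E(X) = 1$:

```python
# Empirical P(X >= t) versus the Markov bound E(X)/t for Exp(1) draws
# (a sketch, not part of the text).
import numpy as np

rng = np.random.default_rng(4)
x = rng.exponential(1.0, 1_000_000)           # E(X) = 1

for t in [0.5, 1.0, 2.0, 4.0]:
    print(t, (x >= t).mean(), x.mean() / t)   # empirical vs bound
```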
$$P\left( \left| \overline{S}_n - \mu \right| \ge \varepsilon \right) \le \frac{\sigma^2}{n \varepsilon^2}.$$
Taking the limit as $n$ tends to infinity, we get
$$\lim_{n \to \infty} P\left( \left| \overline{S}_n - \mu \right| \ge \varepsilon \right) \le \lim_{n \to \infty} \frac{\sigma^2}{n \varepsilon^2},$$
which yields
$$\lim_{n \to \infty} P\left( \left| \overline{S}_n - \mu \right| \ge \varepsilon \right) = 0,$$
and the proof of the theorem is now complete.

It is possible to prove the weak law of large numbers assuming only $E(X)$ to exist and be finite, but the proof is more involved.

The weak law of large numbers says that the sequence of sample means $\left\{ \overline{S}_n \right\}_{n=1}^{\infty}$ from a population $X$ stays close to the population mean $E(X)$ most of the time. Let us consider an experiment that consists of tossing a coin infinitely many times. Let $X_i$ be 1 if the $i^{th}$ toss results in a Head, and 0 otherwise. The weak law of large numbers says that
$$\overline{S}_n = \frac{X_1 + X_2 + \cdots + X_n}{n} \to \frac{1}{2} \qquad \text{as } n \to \infty, \tag{13.3}$$
but it is easy to come up with sequences of tosses for which (13.3) is false (for instance, an infinite run of Heads). The strong law of large numbers (Theorem 13.11) states that the set of "bad sequences" like these has probability zero.

Note that the assertion of Theorem 13.9 for any $\varepsilon > 0$ can also be written as
$$\lim_{n \to \infty} P\left( \left| \overline{S}_n - \mu \right| < \varepsilon \right) = 1.$$
The type of convergence we saw in the weak law of large numbers is not the type of convergence discussed in calculus. This type of convergence is called convergence in probability and is defined as follows.

Definition 13.1. Suppose $X_1, X_2, \ldots$ is a sequence of random variables defined on a sample space $S$. The sequence converges in probability to the random variable $X$ if, for any $\varepsilon > 0$,
$$\lim_{n \to \infty} P\left( |X_n - X| < \varepsilon \right) = 1.$$
In view of the above definition, the weak law of large numbers states that the sample mean $\overline{X}$ converges in probability to the population mean $\mu$.
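A short simulation of mine (not from the text) of the weak law for the coin-tossing setup: the fraction of sample paths with $\left| \overline{S}_n - \mu \right| \ge \varepsilon$ shrinks as $n$ grows; the tolerance $\varepsilon$ and sample sizes are arbitrary.

```python
# Weak law of large numbers by simulation (a sketch, not from the text).
import numpy as np

rng = np.random.default_rng(5)
mu, eps, reps = 0.5, 0.05, 20_000         # fair coin: mu = 1/2

for n in [10, 100, 1000, 10_000]:
    sbar = rng.binomial(n, mu, reps) / n  # Sbar_n for `reps` independent runs
    print(n, (np.abs(sbar - mu) >= eps).mean())
```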
The following theorem is known as the Bernoulli law of large numbers and is a special case of the weak law of large numbers.

Theorem 13.10. Let $X_1, X_2, \ldots$ be a sequence of independent and identically distributed Bernoulli random variables with probability of success $p$. Then, for any $\varepsilon > 0$,
$$\lim_{n \to \infty} P\left( \left| \overline{S}_n - p \right| < \varepsilon \right) = 1,$$
where $\overline{S}_n$ denotes $\frac{X_1 + X_2 + \cdots + X_n}{n}$.

The fact that the relative frequency of occurrence of an event $E$ is very likely to be close to its probability $P(E)$ for large $n$ can be derived from the weak law of large numbers. Consider a repeatable random experiment repeated a large number of times independently. Let $X_i = 1$ if $E$ occurs on the $i^{th}$ repetition, and $X_i = 0$ if $E$ does not occur on the $i^{th}$ repetition. Then
$$\mu = E(X_i) = 1 \cdot P(E) + 0 \cdot P(E^c) = P(E) \qquad \text{for } i = 1, 2, 3, \ldots$$
and
$$X_1 + X_2 + \cdots + X_n = N(E),$$
where $N(E)$ denotes the number of times $E$ occurs. Hence by the weak law of large numbers, we have
$$\lim_{n \to \infty} P\left( \left| \frac{N(E)}{n} - P(E) \right| \ge \varepsilon \right) = \lim_{n \to \infty} P\left( \left| \frac{X_1 + X_2 + \cdots + X_n}{n} - \mu \right| \ge \varepsilon \right) = \lim_{n \to \infty} P\left( \left| \overline{S}_n - \mu \right| \ge \varepsilon \right) = 0.$$
Hence, for large $n$, the relative frequency of occurrence of the event $E$ is very likely to be close to its probability $P(E)$.

Now we present the strong law of large numbers without a proof.

Theorem 13.11. Let $X_1, X_2, \ldots$ be a sequence of independent and identically distributed random variables with $\mu = E(X_i)$ and $\sigma^2 = Var(X_i) < \infty$ for $i = 1, 2, \ldots$. Then
$$P\left( \lim_{n \to \infty} \overline{S}_n = \mu \right) = 1,$$
where $\overline{S}_n$ denotes $\frac{X_1 + X_2 + \cdots + X_n}{n}$.

The type of convergence in Theorem 13.11 is called almost sure convergence. The notion of almost sure convergence is defined as follows.
Definition 13.2. Suppose the random variable $X$ and the sequence $X_1, X_2, \ldots$ of random variables are defined on a sample space $S$. The sequence $X_n(w)$ converges almost surely to $X(w)$ if
$$P\left( \left\{ w \in S \ \Big| \ \lim_{n \to \infty} X_n(w) = X(w) \right\} \right) = 1.$$
It can be shown that almost sure convergence implies convergence in probability, but the converse is not true in general.

13.3. The Central Limit Theorem

Consider a random sample of measurements $\{ X_i \}_{i=1}^n$. The $X_i$'s are identically distributed and their common distribution is the distribution of the population. We have seen that if the population distribution is normal, then the sample mean $\overline{X}$ is also normal. More precisely, if $X_1, X_2, \ldots, X_n$ is a random sample from a normal distribution with density
$$f(x) = \frac{1}{\sigma \sqrt{2\pi}} \, e^{-\frac{1}{2} \left( \frac{x - \mu}{\sigma} \right)^2},$$
then
$$\overline{X} \sim N\left( \mu, \frac{\sigma^2}{n} \right).$$
The central limit theorem (also known as the Lindeberg-Levy Theorem) states that even though the population distribution may be far from being normal, for large sample size $n$ the distribution of the standardized sample mean is approximately standard normal, with better approximations obtained with larger sample sizes. Mathematically this can be stated as follows.

Theorem 13.12 (Central Limit Theorem). Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from a distribution with mean $\mu$ and variance $\sigma^2 < \infty$. Then the limiting distribution of
$$Z_n = \frac{\overline{X} - \mu}{\frac{\sigma}{\sqrt{n}}}$$
is standard normal, that is, $Z_n$ converges in distribution to $Z$, where $Z$ denotes a standard normal random variable.

The type of convergence used in the central limit theorem is called convergence in distribution and is defined as follows.

Definition 13.3. Suppose $X$ is a random variable with cumulative density function $F(x)$, and $X_1, X_2, \ldots$ is a sequence of random variables with cumulative density functions $F_1(x), F_2(x), \ldots$, respectively. The sequence $X_n$ converges in distribution to $X$ if
$$\lim_{n \to \infty} F_n(x) = F(x)$$
for all values $x$ at which $F(x)$ is continuous.
The distribution of $X$ is called the limiting distribution of $X_n$. Whenever a sequence of random variables $X_1, X_2, \ldots$ converges in distribution to the random variable $X$, it will be denoted by $X_n \xrightarrow{d} X$.

Example 13.11. Let $Y = X_1 + X_2 + \cdots + X_{15}$ be the sum of a random sample of size 15 from the distribution whose density function is
$$f(x) = \begin{cases} \frac{3}{2} x^2 & \text{if } -1 < x < 1 \\ 0 & \text{otherwise.} \end{cases}$$
What is the approximate value of $P(-0.3 \le Y \le 1.5)$ when one uses the central limit theorem?

Answer: First, we find the mean $\mu$ and variance $\sigma^2$ for the density function $f(x)$. The mean of this distribution is given by
$$\mu = \int_{-1}^{1} \frac{3}{2} x^3 \, dx = \frac{3}{2} \left[ \frac{x^4}{4} \right]_{-1}^{1} = 0.$$
Hence the variance of this distribution is given by
$$Var(X) = E\left( X^2 \right) - [E(X)]^2 = \int_{-1}^{1} \frac{3}{2} x^4 \, dx = \frac{3}{2} \left[ \frac{x^5}{5} \right]_{-1}^{1} = 0.6.$$
Therefore
$$P(-0.3 \le Y \le 1.5) = P\left( \frac{-0.3 - 0}{\sqrt{15 \, (0.6)}} \le \frac{Y - 0}{\sqrt{15 \, (0.6)}} \le \frac{1.5 - 0}{\sqrt{15 \, (0.6)}} \right) = P(-0.10 \le Z \le 0.50)$$
$$= P(Z \le 0.50) + P(Z \le 0.10) - 1 = 0.6915 + 0.5398 - 1 = 0.2313.$$
Example 13.12. Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n = 25$ from a population that has a mean $\mu = 71.43$ and variance $\sigma^2 = 56.25$. Let $\overline{X}$ be the sample mean. What is the probability that the sample mean is between 68.91 and 71.97?

Answer: The mean of $\overline{X}$ is given by $E\left( \overline{X} \right) = 71.43$. The variance of $\overline{X}$ is given by
$$Var\left( \overline{X} \right) = \frac{\sigma^2}{n} = \frac{56.25}{25} = 2.25.$$
By the central limit theorem, $\frac{\overline{X} - 71.43}{1.5}$ is approximately standard normal, and hence
$$P\left( 68.91 \le \overline{X} \le 71.97 \right) = P\left( \frac{68.91 - 71.43}{1.5} \le Z \le \frac{71.97 - 71.43}{1.5} \right) = P(-1.68 \le Z \le 0.36) = 0.6406 + 0.9535 - 1 = 0.5941.$$
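A quick Monte Carlo check of mine (not from the text) of Example 13.11, sampling from $f(x) = \frac{3}{2} x^2$ by inverting its cdf $F(x) = (x^3 + 1)/2$:

```python
# Monte Carlo check (not from the text) of Example 13.11: simulate
# Y = X1 + ... + X15 and compare with the CLT approximation 0.2313.
import numpy as np

rng = np.random.default_rng(6)
# Inverse-CDF sampling: F(x) = (x^3 + 1)/2 on (-1, 1), so x = (2u - 1)^(1/3).
u = rng.uniform(size=(200_000, 15))
x = np.cbrt(2.0 * u - 1.0)
y = x.sum(axis=1)
print(((-0.3 <= y) & (y <= 1.5)).mean())   # close to 0.2313
```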
$$Z_n \xrightarrow{d} Z \sim N(0, 1) \qquad \text{as } n \to \infty.$$
However, the above expression is not equivalent to
$$\overline{X} \xrightarrow{d} Z \sim N\left( \mu, \frac{\sigma^2}{n} \right) \qquad \text{as } n \to \infty,$$
as the following example shows.

Example 13.16. Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from a gamma distribution with parameters $\theta = 1$ and $\alpha = 1$. What is the distribution of the sample mean $\overline{X}$? Also, what is the limiting distribution of $\overline{X}$ as $n \to \infty$?

Answer: Since each $X_i \sim GAM(1, 1)$, the probability density function of each $X_i$ is given by
$$f(x) = \begin{cases} e^{-x} & \text{if } x \ge 0 \\ 0 & \text{otherwise,} \end{cases}$$
and hence the moment generating function of each $X_i$ is
$$M_{X_i}(t) = \frac{1}{1 - t}.$$
First we determine the moment generating function of the sample mean $\overline{X}$, and then examine this moment generating function to find the probability distribution of $\overline{X}$. Since
$$M_{\overline{X}}(t) = M_{\frac{1}{n} \sum_{i=1}^n X_i}(t) = \prod_{i=1}^n M_{X_i}\left( \frac{t}{n} \right) = \prod_{i=1}^n \frac{1}{1 - \frac{t}{n}} = \frac{1}{\left( 1 - \frac{t}{n} \right)^n},$$
therefore $\overline{X} \sim GAM\left( \frac{1}{n}, n \right)$.

Next, we find the limiting distribution of $\overline{X}$ as $n \to \infty$. This can be done again by finding the limiting moment generating function of $\overline{X}$ and identifying the distribution of $\overline{X}$. Consider
$$\lim_{n \to \infty} M_{\overline{X}}(t) = \lim_{n \to \infty} \frac{1}{\left( 1 - \frac{t}{n} \right)^n} = \frac{1}{e^{-t}} = e^t.$$
Thus, the sample mean $\overline{X}$ has a degenerate limiting distribution, that is, all the probability mass is concentrated at one point of the space of $\overline{X}$.

Example 13.17. Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from a gamma distribution with parameters $\theta = 1$ and $\alpha = 1$. What is the distribution of
$$\frac{\overline{X} - \mu}{\frac{\sigma}{\sqrt{n}}} \qquad \text{as } n \to \infty,$$
where $\mu$ and $\sigma$ are the population mean and standard deviation, respectively?

Answer: From Example 13.16, we know that
$$M_{\overline{X}}(t) = \frac{1}{\left( 1 - \frac{t}{n} \right)^n}.$$
Since the population distribution is gamma with $\theta = 1$ and $\alpha = 1$, the population mean $\mu$ is 1 and the population variance $\sigma^2$ is 1.
By the Lévy continuity theorem, we obtain
$$\lim_{n \to \infty} F_n(x) = \Phi(x),$$
where $\Phi(x)$ is the cumulative density function of the standard normal distribution. Thus $Z_n \xrightarrow{d} Z$, and the proof of the theorem is now complete.

Now we give another proof of the central limit theorem using the L'Hospital rule. This proof is essentially due to Tardiff (1981). As before, let
$$Z_n = \frac{\overline{X} - \mu}{\frac{\sigma}{\sqrt{n}}}.$$
Then
$$M_{Z_n}(t) = \left[ M\left( \frac{t}{\sigma \sqrt{n}} \right) \right]^n,$$
where $M(t)$ is the moment generating function of the random variable $X - \mu$. Hence from (13.5), we have $M(0) = 1$, $M'(0) = 0$, and $M''(0) = \sigma^2$. Letting $h = \frac{t}{\sigma \sqrt{n}}$, we see that $n = \frac{t^2}{\sigma^2 h^2}$, and hence if $n \to \infty$, then $h \to 0$. Using these and applying the L'Hospital rule twice, we compute
$$\lim_{n \to \infty} M_{Z_n}(t) = \lim_{n \to \infty} \exp\left( n \ln M\left( \frac{t}{\sigma \sqrt{n}} \right) \right) = \lim_{h \to 0} \exp\left( \frac{t^2}{\sigma^2} \, \frac{\ln M(h)}{h^2} \right)$$
$$= \lim_{h \to 0} \exp\left( \frac{t^2}{\sigma^2} \, \frac{M'(h)}{2 h \, M(h)} \right) \qquad \left( \tfrac{0}{0} \text{ form, L'Hospital rule} \right)$$
$$= \lim_{h \to 0} \exp\left( \frac{t^2}{\sigma^2} \, \frac{M''(h)}{2 \, M(h) + 2 h \, M'(h)} \right) \qquad \left( \tfrac{0}{0} \text{ form, L'Hospital rule} \right)$$
$$= \exp\left( \frac{t^2}{\sigma^2} \, \frac{M''(0)}{2 \, M(0)} \right) = \exp\left( \frac{1}{2} t^2 \right).$$
Hence by the Lévy continuity theorem, we obtain
$$\lim_{n \to \infty} F_n(x) = \Phi(x),$$
where $\Phi(x)$ is the cumulative density function of the standard normal distribution.
Thus as $n \to \infty$, the random variable $Z_n \xrightarrow{d} Z$, where $Z \sim N(0, 1)$.

Remark 13.3. In contrast to the moment generating function, since the characteristic function of a random variable always exists, the original proof of the central limit theorem involved the characteristic function (see, for example, An Introduction to Probability Theory and Its Applications, Volume II by Feller). In 1988, Brown gave an elementary proof using very clever Taylor series expansions, where the use of the characteristic function has been avoided.

13.4. Order Statistics

Often, sample values such as the smallest, largest, or middle observation from a random sample provide important information. For example, the highest flood water or lowest winter temperature recorded during the last 50 years might be useful when planning for future emergencies. The median price of houses sold during the previous month might be useful for estimating the cost of living. The statistics highest, lowest or median are examples of order statistics.

Definition 13.4. Let $X_1, X_2, \ldots, X_n$ be observations from a random sample of size $n$ from a distribution $f(x)$. Let $X_{(1)}$ denote the smallest of $\{ X_1, X_2, \ldots, X_n \}$, $X_{(2)}$ denote the second smallest of $\{ X_1, X_2, \ldots, X_n \}$, and similarly $X_{(r)}$ denote the $r^{th}$ smallest of $\{ X_1, X_2, \ldots, X_n \}$. Then the random variables $X_{(1)}, X_{(2)}, \ldots, X_{(n)}$ are called the order statistics of the sample $X_1, X_2, \ldots, X_n$. In particular, $X_{(r)}$ is called the $r^{th}$-order statistic of $X_1, X_2, \ldots, X_n$.

The sample range, $R$, is the distance between the smallest and the largest observation. That is,
$$R = X_{(n)} - X_{(1)}.$$
This is an important statistic which is defined using order statistics.

The distribution of the order statistics is very important when one uses them in any statistical investigation. The next theorem gives the distribution of an order statistic.

Theorem 13.14. Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from a distribution with density function $f(x)$. Then the probability density function of the $r^{th}$ order statistic, $X_{(r)}$,
is
$$g(x) = \frac{n!}{(r - 1)! \, (n - r)!} \, [F(x)]^{r-1} \, f(x) \, [1 - F(x)]^{n-r},$$
where $F(x)$ denotes the cdf of $f(x)$.

Proof: We prove the theorem assuming $f(x)$ continuous. In the case $f(x)$ is discrete, the proof has to be modified appropriately. Let $h$ be a positive real number and $x$ an arbitrary point in the domain of $f$. Let us divide the real line into three segments, namely
$$\mathbb{R} = (-\infty, x) \cup [x, x + h) \cup [x + h, \infty).$$
The probability, say $p_1$, that a sample value falls into the first interval $(-\infty, x)$ is given by
$$p_1 = \int_{-\infty}^{x} f(t) \, dt = F(x).$$
Similarly, the probability $p_2$ that a sample value falls into the second interval $[x, x + h)$ is
$$p_2 = \int_x^{x+h} f(t) \, dt = F(x + h) - F(x).$$
In the same token, we can compute the probability $p_3$ that a sample value falls into the third interval:
$$p_3 = \int_{x+h}^{\infty} f(t) \, dt = 1 - F(x + h).$$
Then the probability, $P_h(x)$, that $(r - 1)$ sample values fall in the first interval, one falls in the second interval, and $(n - r)$ fall in the third interval is
$$P_h(x) = \binom{n}{r - 1, \ 1, \ n - r} \, p_1^{r-1} \, p_2 \, p_3^{n-r} = \frac{n!}{(r - 1)! \, (n - r)!} \, p_1^{r-1} \, p_2 \, p_3^{n-r}.$$
Hence the probability density function $g(x)$ of the $r^{th}$ statistic is given by
$$g(x) = \lim_{h \to 0} \frac{P_h(x)}{h} = \lim_{h \to 0} \frac{n!}{(r - 1)! \, (n - r)!} \, p_1^{r-1} \, \frac{p_2}{h} \, p_3^{n-r}$$
$$= \frac{n!}{(r - 1)! \, (n - r)!} \, [F(x)]^{r-1} \, \lim_{h \to 0} \frac{F(x + h) - F(x)}{h} \, \lim_{h \to 0} \left[ 1 - F(x + h) \right]^{n-r}$$
$$= \frac{n!}{(r - 1)! \, (n - r)!} \, [F(x)]^{r-1} \, F'(x) \, [1 - F(x)]^{n-r} = \frac{n!}{(r - 1)! \, (n - r)!} \, [F(x)]^{r-1} \, f(x) \, [1 - F(x)]^{n-r}.$$
Example 13.18. Let $X_1, X_2$ be a random sample from a distribution with density function
$$f(x) = \begin{cases} e^{-x} & \text{for } 0 \le x < \infty \\ 0 & \text{otherwise.} \end{cases}$$
What is the density function of $Y = \min\{ X_1, X_2 \}$ where it is nonzero?

Answer: The cumulative distribution function of $f(x)$ is
$$F(x) = \int_0^x e^{-t} \, dt = 1 - e^{-x}.$$
In this example, $n = 2$ and $r = 1$. Hence, the density of $Y$ is
$$g(y) = \frac{2!}{0! \, 1!} \, [F(y)]^0 \, f(y) \, [1 - F(y)] = 2 f(y) \, [1 - F(y)] = 2 e^{-y} \left( 1 - 1 + e^{-y} \right) = 2 e^{-2y}.$$
Example 13.19. Let $Y_1 < Y_2 < \cdots < Y_6$ be the order statistics from a random sample of size 6 from a distribution with density function
$$f(x) = \begin{cases} 2x & \text{for } 0 < x < 1 \\ 0 & \text{otherwise.} \end{cases}$$
What is the expected value of $Y_6$?

Answer: Since $f(x) = 2x$,
$$F(x) = \int_0^x 2t \, dt = x^2.$$
The density function of $Y_6$ is given by
$$g(y) = \frac{6!}{5! \, 0!} \, [F(y)]^5 \, f(y) = 6 \left( y^2 \right)^5 \, 2y = 12 y^{11}.$$
Hence, the expected value of $Y_6$ is
$$E(Y_6) = \int_0^1 y \, g(y) \, dy = \int_0^1 12 y^{12} \, dy = \frac{12}{13} \left[ y^{13} \right]_0^1 = \frac{12}{13}.$$
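Both examples can be sanity-checked by simulation. A sketch of mine (not from the text): the minimum of two $\mathrm{Exp}(1)$ variables behaves like $\mathrm{Exp}(2)$, and the maximum of six draws from $f(x) = 2x$ has mean $\frac{12}{13}$.

```python
# Monte Carlo checks (not from the text) of Examples 13.18 and 13.19.
import numpy as np

rng = np.random.default_rng(7)

y = rng.exponential(1.0, (200_000, 2)).min(axis=1)
print(y.mean(), 1 / 2)                       # Exp(2) has mean 1/2

x = np.sqrt(rng.uniform(size=(200_000, 6)))  # F(x) = x^2, so X = sqrt(U)
print(x.max(axis=1).mean(), 12 / 13)
```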
Example 13.20. Let $X$, $Y$ and $Z$ be independent uniform random variables on the interval $(0, a)$. Let $W = \min\{ X, Y, Z \}$. What is the expected value of $\left( 1 - \frac{W}{a} \right)^2$?

Answer: The probability distribution of $X$ (or $Y$ or $Z$) is
$$f(x) = \begin{cases} \frac{1}{a} & \text{if } 0 < x < a \\ 0 & \text{otherwise.} \end{cases}$$
Thus the cumulative distribution function of $f(x)$ is given by
$$F(x) = \begin{cases} 0 & \text{if } x \le 0 \\ \frac{x}{a} & \text{if } 0 < x < a \\ 1 & \text{if } x \ge a. \end{cases}$$
Since $W = \min\{ X, Y, Z \}$, $W$ is the first order statistic of the random sample $X, Y, Z$. Thus, the density function of $W$ is given by
$$g(w) = \frac{3!}{0! \, 1! \, 2!} \, [F(w)]^0 \, f(w) \, [1 - F(w)]^2 = 3 \, f(w) \, [1 - F(w)]^2 = \frac{3}{a} \left( 1 - \frac{w}{a} \right)^2$$
for $0 < w < a$, and $g(w) = 0$ otherwise. The expected value is
$$E\left[ \left( 1 - \frac{W}{a} \right)^2 \right] = \int_0^a \left( 1 - \frac{w}{a} \right)^2 g(w) \, dw = \int_0^a \frac{3}{a} \left( 1 - \frac{w}{a} \right)^4 dw = 3 \int_0^1 (1 - u)^4 \, du = \frac{3}{5}.$$
Example 13.21. Let $X_1, X_2, \ldots, X_n$ be a random sample from a population $X$ with uniform distribution on the interval $[0, 1]$. What is the probability distribution of the sample range $W := X_{(n)} - X_{(1)}$?

Answer: To find the distribution of $W$, we need the joint distribution of $\left( X_{(1)}, X_{(n)} \right)$, which is given by
$$h(x_1, x_n) = n (n - 1) \, f(x_1) \, f(x_n) \left[ F(x_n) - F(x_1) \right]^{n-2},$$
where $x_n \ge x_1$ and $f(x)$ is the probability density function of $X$. To determine the probability distribution of the sample range $W$, we consider the transformation
$$U = X_{(1)}, \qquad W = X_{(n)} - X_{(1)},$$
which has the inverse
$$X_{(1)} = U, \qquad X_{(n)} = U + W.$$
The Jacobian of this transformation is
$$J = \det \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} = 1.$$
Hence the joint density of $(U, W)$ is given by
$$g(u, w) = |J| \, h(x_1, x_n) = n (n - 1) \, f(u) \, f(u + w) \left[ F(u + w) - F(u) \right]^{n-2},$$
where $w \ge 0$ and $u \ge 0$. The densities $f(u)$ and $f(u + w)$ are simultaneously nonzero if and only if $0 \le u \le 1$ and $0 \le u + w \le 1$, that is, if $0 \le u \le 1 - w$. Thus, the probability density of $W$ is given by
$$j(w) = \int_{-\infty}^{\infty} g(u, w) \, du = \int_0^{1-w} n (n - 1) \, f(u) \, f(u + w) \left[ F(u + w) - F(u) \right]^{n-2} du = n (n - 1) \, w^{n-2} \int_0^{1-w} du = n (n - 1) \, (1 - w) \, w^{n-2},$$
where $0 \le w \le 1$.
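The density of the sample range just derived can be checked against a histogram of simulated ranges. A small sketch of mine (not from the text), with $n = 5$ chosen arbitrarily:

```python
# Simulation check (not from the text) of the sample-range density
# j(w) = n (n-1) (1 - w) w^(n-2) for Uniform(0, 1) samples.
import numpy as np

rng = np.random.default_rng(8)
n = 5
x = rng.uniform(size=(200_000, n))
w = x.max(axis=1) - x.min(axis=1)

hist, edges = np.histogram(w, bins=20, range=(0.0, 1.0), density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
for m, h in zip(mids, hist):
    print(round(m, 3), round(h, 3), round(n*(n-1)*(1-m)*m**(n-2), 3))
```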
13.5. Sample Percentiles

The sample median, $M$, is a number such that approximately one-half of the observations are less than $M$ and one-half are greater than $M$.

Definition 13.5. Let $X_1, X_2, \ldots, X_n$ be a random sample. The sample median $M$ is defined as
$$M = \begin{cases} X_{\left( \frac{n+1}{2} \right)} & \text{if } n \text{ is odd} \\ \frac{1}{2} \left[ X_{\left( \frac{n}{2} \right)} + X_{\left( \frac{n+2}{2} \right)} \right] & \text{if } n \text{ is even.} \end{cases}$$
The median is a measure of location, like the sample mean.

Recall that for a continuous distribution, the $100 p^{th}$ percentile, $\pi_p$, is a number such that
$$p = \int_{-\infty}^{\pi_p} f(x) \, dx.$$
Definition 13.6. The $100 p^{th}$ sample percentile is defined as
$$\pi_p = \begin{cases} X_{([np])} & \text{if } p < 0.5 \\ M & \text{if } p = 0.5 \\ X_{(n + 1 - [n(1-p)])} & \text{if } p > 0.5, \end{cases}$$
where $[b]$ denotes the number $b$ rounded to the nearest integer.
Example 13.22. Let $X_1, X_2, \ldots, X_{12}$ be a random sample of size 12. What is the $65^{th}$ percentile of this sample?

Answer: Since $100p = 65$, we have $p = 0.65$ and
$$n(1 - p) = (12)(1 - 0.65) = 4.2, \qquad [n(1 - p)] = [4.2] = 4.$$
Hence by the definition of the $65^{th}$ percentile,
$$\pi_{0.65} = X_{(n + 1 - [n(1-p)])} = X_{(13 - 4)} = X_{(9)}.$$
Thus, the $65^{th}$ percentile of the random sample $X_1, X_2, \ldots, X_{12}$ is the $9^{th}$-order statistic.

For any number $p$ between 0 and 1, the $100 p^{th}$ sample percentile is an observation such that approximately $np$ observations are less than this observation and $n(1 - p)$ observations are greater than it.

Definition 13.7. The $25^{th}$ percentile is called the lower quartile, while the $75^{th}$ percentile is called the upper quartile. The distance between these two quartiles is called the interquartile range.

Example 13.23. If a sample of size 3 from a uniform distribution over $[0, 1]$ is observed, what is the probability that the sample median is between $\frac{1}{4}$ and $\frac{3}{4}$?

Answer: When a sample of $(2n + 1)$ random variables is observed, the $(n + 1)^{th}$ smallest random variable is called the sample median. For our problem, the sample median is given by
$$X_{(2)} = \text{2nd smallest of } \{ X_1, X_2, X_3 \}.$$
Let $Y = X_{(2)}$. The density function of each $X_i$ is given by
$$f(x) = \begin{cases} 1 & \text{if } 0 \le x \le 1 \\ 0 & \text{otherwise.} \end{cases}$$
Hence, the cumulative density function of $f(x)$ is $F(x) = x$. Thus the density function of $Y$ is given by
$$g(y) = \frac{3!}{1! \, 1! \, 1!} \, [F(y)]^{2-1} \, f(y) \, [1 - F(y)]^{3-2} = 6 \, F(y) \, f(y) \, [1 - F(y)] = 6 y (1 - y).$$
Therefore
$$P\left( \frac{1}{4} < Y < \frac{3}{4} \right) = \int_{1/4}^{3/4} 6 y (1 - y) \, dy = \left[ 3 y^2 - 2 y^3 \right]_{1/4}^{3/4} = \frac{11}{16}.$$
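Definition 13.6 translates directly into code. The following is my own sketch (not from the text), implementing the definition and checking it against Examples 13.22 and 13.23; note that Python's `round` performs banker's rounding, which matters only in tie cases.

```python
# A direct implementation of Definition 13.6 (my sketch, not the book's),
# checked against Examples 13.22 and 13.23.
import numpy as np

def sample_percentile(x, p):
    """100p-th sample percentile of the data in x, per Definition 13.6."""
    x = np.sort(np.asarray(x))
    n = len(x)
    if p == 0.5:                      # the sample median
        mid = n // 2
        return x[mid] if n % 2 == 1 else 0.5 * (x[mid - 1] + x[mid])
    if p < 0.5:
        k = int(round(n * p))         # [np], rounded to nearest integer
    else:
        k = n + 1 - int(round(n * (1 - p)))
    return x[k - 1]                   # order statistics are 1-indexed

rng = np.random.default_rng(9)
data = rng.uniform(size=12)
print(sample_percentile(data, 0.65) == np.sort(data)[8])   # 9th order stat

med = np.median(rng.uniform(size=(100_000, 3)), axis=1)
print(((0.25 < med) & (med < 0.75)).mean(), 11 / 16)       # Example 13.23
```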
13.6. Review Exercises

1. Suppose we roll a die 1000 times. What is the probability that the sum of the numbers obtained lies between 3000 and 4000?

2. Suppose Kathy flips a coin 1000 times. What is the probability she will get at least 600 heads?

3. At a certain large university, the weights of the male students and female students are approximately normally distributed with means and standard deviations of 180 and 20, and 130 and 15, respectively. If a male and a female are selected at random, what is the probability that the sum of their weights is less than 280?

4. Seven observations are drawn from a population with an unknown continuous distribution. What is the probability that the least and the greatest observations bracket the median?

5. If the random variable $X$ has the density function
$$f(x) = \begin{cases} 2 (1 - x) & \text{for } 0 \le x \le 1 \\ 0 & \text{otherwise,} \end{cases}$$
what is the probability that the larger of 2 independent observations of $X$ will exceed $\frac{1}{2}$?

6. Let $X_1, X_2, X_3$ be a random sample from the uniform distribution on the interval $(0, 1)$. What is the probability that the sample median is less than 0.4?

7. Let $X_1, X_2, X_3, X_4, X_5$ be a random sample from the uniform distribution on the interval $(0, \theta)$, where $\theta$ is unknown, and let $X_{\max}$ denote the largest observation. For what value of the constant $k$ is the expected value of the random variable $k X_{\max}$ equal to $\theta$?

8. A random sample of size 16 is to be taken from a normal population having mean 100 and variance 4. What is the $90^{th}$ percentile of the distribution of the sample mean?

9. If the density function of a random variable $X$ is given by
$$f(x) = \begin{cases} \frac{1}{2x} & \text{for } \frac{1}{e} < x < e \\ 0 & \text{otherwise,} \end{cases}$$
what is the probability that one of the two independent observations of $X$ is less than 2 and the other is greater than 1?

10. Five observations have been drawn independently and at random from a continuous distribution. What is the probability that the next observation will be less than all of the first 5?

11. Let the random variable $X$ denote the length of time it takes to complete a mathematics assignment. Suppose the density function of $X$ is given by
$$f(x) = \begin{cases} e^{-(x - \theta)} & \text{for } \theta < x < \infty \\ 0 & \text{otherwise,} \end{cases}$$
where $\theta$ is a positive constant that represents the minimum time to complete a mathematics assignment. If $X_1, X_2, \ldots, X_5$ is a random sample from this distribution, what is the expected value of $X_{(1)}$?

12. Let $X$ and $Y$ be two independent random variables with identical probability density function given by
$$f(x) = \begin{cases} e^{-x} & \text{for } x > 0 \\ 0 & \text{elsewhere.} \end{cases}$$
What is the probability density function of $W = \max\{ X, Y \}$?

13. Let $X$ and $Y$ be two independent random variables with identical probability density function given by
$$f(x) = \begin{cases} \frac{3 x^2}{\theta^3} & \text{for } 0 \le x \le \theta \\ 0 & \text{elsewhere,} \end{cases}$$
for some $\theta > 0$. What is the probability density function of $W = \min\{ X, Y \}$?

14. Let $X_1, X_2, \ldots, X_n$ be a random sample from a uniform distribution on the interval from 0 to 5. What is the limiting moment generating function of $\frac{\overline{X} - \mu}{\sigma / \sqrt{n}}$ as $n \to \infty$?

15. Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from a normal distribution with mean $\mu$ and variance 1. If the $75^{th}$ percentile of the statistic $W = \sum_{i=1}^n \left( X_i - \overline{X} \right)^2$ is 28.24, what is the sample size $n$?

16. Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from a Bernoulli distribution with probability of success $p = \frac{1}{2}$. What is the limiting distribution of the sample mean $\overline{X}$?

17. Let $X_1, X_2, \ldots, X_{1995}$ be a random sample of size 1995 from a distribution with probability density function
$$f(x) = \frac{e^{-\lambda} \lambda^x}{x!}, \qquad x = 0, 1, 2, 3, \ldots, \infty.$$
What is the distribution of $1995 \overline{X}$?

18. Suppose $X_1, X_2, \ldots, X_n$ is a random sample from the uniform distribution on $(0, 1)$ and $Z$ is the sample range. What is the probability that $Z$ is less than or equal to 0.5?

19. Let $X_1, X_2, \ldots, X_9$ be a random sample from a uniform distribution on the interval $[1, 12]$. Find the probability that the next to smallest is greater than or equal to 4.
20. A machine needs 4 out of its 6 independent components to operate. Let $X_1, X_2, \ldots, X_6$ be the lifetimes of the respective components. Suppose each is exponentially distributed with parameter $\theta$. What is the probability density function of the machine lifetime?

21. Suppose $X_1, X_2, \ldots, X_{2n+1}$ is a random sample from the uniform distribution on $(0, 1)$. What is the probability density function of the sample median $X_{(n+1)}$?

22. Let $X$ and $Y$ be two random variables with joint density
$$f(x, y) = \begin{cases} 12x & \text{if } 0 < y < 2x < 1 \\ 0 & \text{otherwise.} \end{cases}$$
What is the expected value of the random variable $Z = X^2 Y^3 + X^2 - X Y^3$?

23. Let $X_1, X_2, \ldots, X_{50}$ be a random sample of size 50 from a distribution with density
$$f(x) = \begin{cases} \frac{1}{\Gamma(\alpha) \, \theta^{\alpha}} \, x^{\alpha - 1} e^{-\frac{x}{\theta}} & \text{for } 0 < x < \infty \\ 0 & \text{otherwise.} \end{cases}$$
What are the mean and variance of the sample mean $\overline{X}$?

24. Let $X_1, X_2, \ldots, X_{100}$ be a random sample of size 100 from a distribution with density
$$f(x) = \begin{cases} \frac{e^{-\lambda} \lambda^x}{x!} & \text{for } x = 0, 1, 2, \ldots, \infty \\ 0 & \text{otherwise.} \end{cases}$$
What is the probability that $\overline{X}$ is greater than or equal to 1?

Chapter 14

SAMPLING DISTRIBUTIONS ASSOCIATED WITH THE NORMAL POPULATIONS

Given a random sample $X_1, X_2, \ldots, X_n$ from a population $X$ with probability distribution $f(x; \theta)$, where $\theta$ is a parameter, a statistic is a function $T$ of $X_1, X_2, \ldots, X_n$, that is
$$T = T(X_1, X_2, \ldots, X_n),$$
which is free of the parameter $\theta$. If the distribution of the population is known, then sometimes it is possible to find the probability distribution of the statistic $T$. The probability distribution of the statistic $T$ is called the sampling distribution of $T$. The joint distribution of the random variables $X_1, X_2, \ldots, X_n$ is called the distribution of the sample. The distribution of the sample is the joint density
$$f(x_1, x_2, \ldots, x_n; \theta) = f(x_1; \theta) \, f(x_2; \theta) \cdots f(x_n; \theta) = \prod_{i=1}^n f(x_i; \theta),$$
since the random variables $X_1, X_2, \ldots, X_n$ are independent and identically distributed.

Since the normal population is very important in statistics, the sampling distributions associated with the normal population are very important. The most important sampling distributions associated with the normal population are the following: the chi-square distribution, the Student's t-distribution, the F-distribution, and the beta distribution. In this chapter, we only consider the first three distributions, since the last distribution was considered earlier.

14.1. Chi-square distribution

In this section, we treat the chi-square distribution, which is one of the very useful sampling distributions.

Definition 14.1. A continuous random variable $X$ is said to have a chi-square distribution with $r$ degrees of freedom if its probability density function is of the form
$$f(x; r) = \begin{cases} \frac{1}{\Gamma\left( \frac{r}{2} \right) 2^{\frac{r}{2}}} \, x^{\frac{r}{2} - 1} \, e^{-\frac{x}{2}} & \text{if } 0 \le x < \infty \\ 0 & \text{otherwise,} \end{cases}$$
where $r > 0$. If $X$ has a chi-square distribution, then we denote it by writing $X \sim \chi^2(r)$. Recall that a gamma distribution reduces to a chi-square distribution if $\alpha = \frac{r}{2}$ and $\theta = 2$. The mean and variance of $X$ are $r$ and $2r$, respectively. Thus, the chi-square distribution is also a special case of the gamma distribution. Further, if $r \to \infty$, then the chi-square distribution tends to the normal distribution.

Example 14.1. If $X \sim GAM(1, 1)$, then what is the probability density function of the random variable $2X$?

Answer: We will use the moment generating function method to find the distribution of $2X$. The moment generating function of a gamma random variable is given by
$$M(t) = (1 - \theta t)^{-\alpha}, \qquad \text{if } t < \frac{1}{\theta}.$$
Since $X \sim GAM(1, 1)$, the moment generating function of $X$ is given by
$$M_X(t) = \frac{1}{1 - t}.$$
Hence, the moment generating function of $2X$ is
$$M_{2X}(t) = M_X(2t) = \frac{1}{1 - 2t} = \frac{1}{(1 - 2t)^{\frac{2}{2}}} = \text{MGF of } \chi^2(2).$$
Hence, if $X$ is $GAM(1, 1)$, or is an exponential with parameter 1, then $2X$ is chi-square with 2 degrees of freedom.

Example 14.2. If $X \sim \chi^2(5)$, then what is the probability that $X$ is between 1.145 and 12.83?

Answer: The probability of $X$ between 1.145 and 12.83 can be calculated from the following:
$$P(1.145 \le X \le 12.83) = P(X \le 12.83) - P(X \le 1.145)$$
$$= \int_0^{12.83} \frac{1}{\Gamma\left( \frac{5}{2} \right) 2^{\frac{5}{2}}} \, x^{\frac{5}{2} - 1} e^{-\frac{x}{2}} \, dx - \int_0^{1.145} \frac{1}{\Gamma\left( \frac{5}{2} \right) 2^{\frac{5}{2}}} \, x^{\frac{5}{2} - 1} e^{-\frac{x}{2}} \, dx$$
$$= 0.975 - 0.050 = 0.925 \qquad \text{(from the } \chi^2 \text{ table)}.$$
The above integrals are hard to evaluate, and thus their values are taken from the chi-square table.

Example 14.3. If $X \sim \chi^2(7)$, then what are the values of the constants $a$ and $b$ such that $P(a < X < b) = 0.95$?

Answer: Since
$$0.95 = P(a < X < b) = P(X < b) - P(X < a),$$
we get
$$P(X < b) = 0.95 + P(X < a).$$
We choose $a = 1.690$, so that $P(X < 1.690) = 0.025$. From this, we get
$$P(X < b) = 0.95 + 0.025 = 0.975.$$
Thus, from the chi-square table, we get $b = 16.01$.

The following theorems were studied earlier in Chapters 6 and 13, and they are very useful in finding the sampling distributions of many statistics. We state these theorems here for the convenience of the reader.

Theorem 14.1. If $X \sim N(\mu, \sigma^2)$, then $\left( \frac{X - \mu}{\sigma} \right)^2 \sim \chi^2(1)$.
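The chi-square table lookups in Examples 14.2 and 14.3 can be reproduced with scipy; this is a sketch of mine, not part of the text:

```python
# Reproducing the chi-square table values of Examples 14.2 and 14.3.
from scipy.stats import chi2

print(chi2.cdf(12.83, df=5) - chi2.cdf(1.145, df=5))   # ~ 0.925
print(chi2.cdf(1.690, df=7))                           # ~ 0.025
print(chi2.ppf(0.975, df=7))                           # b ~ 16.01
```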
$E\left( S^2 \right)$ is given by
$$E\left( S^2 \right) = E\left( \frac{25}{500} \cdot \frac{500 \, S^2}{25} \right) = \frac{25}{500} \, E\left[ \chi^2(500) \right] = \left( \frac{1}{20} \right) (500) = 25.$$

14.2. Student's t-distribution

Here we treat the Student's t-distribution, which is also one of the very useful sampling distributions.

Definition 14.2. A continuous random variable $X$ is said to have a t-distribution with $\nu$ degrees of freedom if its probability density function is of the form
$$f(x; \nu) = \frac{\Gamma\left( \frac{\nu + 1}{2} \right)}{\sqrt{\pi \nu} \ \Gamma\left( \frac{\nu}{2} \right) \left( 1 + \frac{x^2}{\nu} \right)^{\frac{\nu + 1}{2}}}, \qquad -\infty < x < \infty,$$
where $\nu > 0$. If $X$ has a t-distribution with $\nu$ degrees of freedom, then we denote it by writing $X \sim t(\nu)$.

The t-distribution was discovered by W.S. Gosset (1876-1936) of England, who published his work under the pseudonym of Student. Therefore, this distribution is known as Student's t-distribution. This distribution is a generalization of the Cauchy distribution and the normal distribution. That is, if $\nu = 1$, then the probability density function of $X$ becomes
$$f(x; 1) = \frac{1}{\pi \left( 1 + x^2 \right)}, \qquad -\infty < x < \infty,$$
which is the Cauchy distribution. Further, if $\nu \to \infty$, then
$$\lim_{\nu \to \infty} f(x; \nu) = \frac{1}{\sqrt{2\pi}} \, e^{-\frac{1}{2} x^2}, \qquad -\infty < x < \infty,$$
which is the probability density function of the standard normal distribution. The following figure shows the graph of t-distributions with various degrees of freedom.

Example 14.6. If $T \sim t(10)$, then what is the probability that $T$ is at least 2.228?

Answer: The probability that $T$ is at least 2.228 is given by
$$P(T \ge 2.228) = 1 - P(T < 2.228) = 1 - 0.975 = 0.025,$$
which is obtained from the t-table.
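The t-table values can likewise be reproduced with scipy (my sketch, not the book's):

```python
# Reproducing t-table lookups with scipy.
from scipy.stats import t

print(t.sf(2.228, df=10))     # P(T >= 2.228) ~ 0.025 (Example 14.6)
print(t.ppf(0.75, df=1))      # 75th percentile of t(1) = 1.0; see Example 14.9 below
```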
Thus
$$\frac{\overline{X} - \mu}{\frac{S}{\sqrt{n}}} \sim t(n - 1) \qquad \text{(by Theorem 14.6)}.$$
This completes the proof of the theorem.

Example 14.8. Let $X_1, X_2, X_3, X_4$ be a random sample of size 4 from a standard normal distribution. If the statistic $W$ is given by
$$W = \frac{X_1 - X_2 + X_3}{\sqrt{X_1^2 + X_2^2 + X_3^2 + X_4^2}},$$
then what is the expected value of $W$?

Answer: Since $X_i \sim N(0, 1)$, we get
$$X_1 - X_2 + X_3 \sim N(0, 3)$$
and
$$\frac{X_1 - X_2 + X_3}{\sqrt{3}} \sim N(0, 1).$$
Further, since $X_i \sim N(0, 1)$, we have $X_i^2 \sim \chi^2(1)$ and hence
$$X_1^2 + X_2^2 + X_3^2 + X_4^2 \sim \chi^2(4).$$
Thus
$$\frac{\frac{X_1 - X_2 + X_3}{\sqrt{3}}}{\sqrt{\frac{X_1^2 + X_2^2 + X_3^2 + X_4^2}{4}}} = \frac{2}{\sqrt{3}} \, W \sim t(4).$$
Now, using the distribution of $W$, we find its expected value:
$$E[W] = \frac{\sqrt{3}}{2} \, E\left[ \frac{2}{\sqrt{3}} \, W \right] = \frac{\sqrt{3}}{2} \, E[t(4)] = \frac{\sqrt{3}}{2} \cdot 0 = 0.$$
Example 14.9. If $X \sim N(0, 1)$ and $X_1, X_2$ is a random sample of size two from the population $X$, then what is the $75^{th}$ percentile of the statistic $W = \frac{X_1}{\sqrt{X_2^2}}$?

Answer: Since each $X_i \sim N(0, 1)$, we have
$$X_1 \sim N(0, 1) \qquad \text{and} \qquad X_2^2 \sim \chi^2(1).$$
Hence
$$W = \frac{X_1}{\sqrt{X_2^2}} \sim t(1).$$
The $75^{th}$ percentile $a$ of $W$ is then given by
$$0.75 = P(W \le a).$$
Hence, from the t-table, we get $a = 1.0$. Hence the $75^{th}$ percentile of $W$ is 1.0.

Example 14.10. Suppose $X_1, X_2, \ldots, X_n$ is a random sample from a normal distribution with mean $\mu$ and variance $\sigma^2$. If $\overline{X} = \frac{1}{n} \sum_{i=1}^n X_i$ and $V^2 = \frac{1}{n} \sum_{i=1}^n \left( X_i - \overline{X} \right)^2$, and $X_{n+1}$ is an additional observation, what is the value of the constant $c$ so that the statistic $c \, \frac{\overline{X} - X_{n+1}}{V}$ has a t-distribution?
A continuous random variable $X$ is said to have an F-distribution with $\nu_1$ and $\nu_2$ degrees of freedom if its probability density function is of the form
$$f(x; \nu_1, \nu_2) = \begin{cases} \dfrac{\Gamma\left( \frac{\nu_1 + \nu_2}{2} \right) \left( \frac{\nu_1}{\nu_2} \right)^{\frac{\nu_1}{2}} x^{\frac{\nu_1}{2} - 1}}{\Gamma\left( \frac{\nu_1}{2} \right) \Gamma\left( \frac{\nu_2}{2} \right) \left( 1 + \frac{\nu_1}{\nu_2} x \right)^{\frac{\nu_1 + \nu_2}{2}}} & \text{if } 0 \le x < \infty \\ 0 & \text{otherwise,} \end{cases}$$
where $\nu_1, \nu_2 > 0$. If $X$ has an F-distribution with $\nu_1$ and $\nu_2$ degrees of freedom, then we denote it by writing $X \sim F(\nu_1, \nu_2)$.

The F-distribution was named in honor of Sir Ronald Fisher by George Snedecor. The F-distribution arises as the distribution of a ratio of variances. Like the other two distributions, this distribution also tends to the normal distribution as $\nu_1$ and $\nu_2$ become very large. The following figure illustrates the shape of the graph of this distribution for various degrees of freedom.

The following theorem gives us the mean and variance of Snedecor's F-distribution.

Theorem 14.8. If the random variable $X \sim F(\nu_1, \nu_2)$, then
$$E[X] = \begin{cases} \frac{\nu_2}{\nu_2 - 2} & \text{if } \nu_2 \ge 3 \\ DNE & \text{if } \nu_2 = 1, 2 \end{cases}$$
and
$$Var[X] = \begin{cases} \frac{2 \, \nu_2^2 \, (\nu_1 + \nu_2 - 2)}{\nu_1 \, (\nu_2 - 2)^2 \, (\nu_2 - 4)} & \text{if } \nu_2 \ge 5 \\ DNE & \text{if } \nu_2 = 1, 2, 3, 4. \end{cases}$$
Here DNE means "does not exist".

Example 14.11. If $X \sim F(9, 10)$, what is $P(X \ge 3.02)$? Also, find the mean and variance of $X$.

Answer:
$$P(X \ge 3.02) = 1 - P(X \le 3.02) = 1 - P(F(9, 10) \le 3.02) = 1 - 0.95 = 0.05 \qquad \text{(from the F-table)}.$$
Next, we determine the mean and variance of $X$ using Theorem 14.8. Hence,
$$E(X) = \frac{\nu_2}{\nu_2 - 2} = \frac{10}{10 - 2} = \frac{10}{8} = 1.25
$$
and
$$Var(X) = \frac{2 \, \nu_2^2 \, (\nu_1 + \nu_2 - 2)}{\nu_1 \, (\nu_2 - 2)^2 \, (\nu_2 - 4)} = \frac{2 \, (10)^2 \, (17)}{9 \, (8)^2 \, (6)} = \frac{425}{432} = 0.9838.$$
Theorem 14.9. If $X \sim F(\nu_1, \nu_2)$, then the random variable $\frac{1}{X} \sim F(\nu_2, \nu_1)$.

This theorem is very useful for computing probabilities like $P(X \le 0.2439)$. If you look at an F-table, you will notice that the table starts with values bigger than 1. Our next example illustrates how to find such probabilities using Theorem 14.9.

Example 14.12. If $X \sim F(6, 9)$, what is the probability that $X$ is less than or equal to 0.2439?

Answer: We use the above theorem to compute
$$P(X \le 0.2439) = P\left( \frac{1}{X} \ge \frac{1}{0.2439} \right) = P\left( F(9, 6) \ge \frac{1}{0.2439} \right) \qquad \text{(by Theorem 14.9)}$$
$$= 1 - P\left( F(9, 6) \le 4.10 \right) = 1 - 0.95 = 0.05.$$
The following theorem says that the F-distribution arises as the distribution of a random variable which is the quotient of two independently distributed chi-square random variables, each of which is divided by its degrees of freedom.

Theorem 14.10. If $U \sim \chi^2(\nu_1)$ and $V \sim \chi^2(\nu_2)$, and the random variables $U$ and $V$ are independent, then
$$\frac{\frac{U}{\nu_1}}{\frac{V}{\nu_2}} \sim F(\nu_1, \nu_2).$$
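The F-table steps in Examples 14.11 and 14.12 can be reproduced with scipy; a sketch of mine, not part of the text. It also shows that Example 14.12 can be computed both directly and via the reciprocal trick of Theorem 14.9:

```python
# Reproducing the F-table lookups of Examples 14.11 and 14.12.
from scipy.stats import f

print(f.sf(3.02, 9, 10))          # P(X >= 3.02) for F(9, 10) ~ 0.05
print(f.cdf(0.2439, 6, 9))        # direct:    P(X <= 0.2439) ~ 0.05
print(f.sf(1 / 0.2439, 9, 6))     # via 1/X ~ F(9, 6): the same value
```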
Example 14.13. Let $X_1, X_2, \ldots, X_4$ and $Y_1, Y_2, \ldots, Y_5$ be two random samples of size 4 and 5, respectively, from a standard normal population. What is the variance of the statistic $T = \frac{\left( X_1^2 + X_2^2 + X_3^2 + X_4^2 \right)/4}{\left( Y_1^2 + Y_2^2 + Y_3^2 + Y_4^2 + Y_5^2 \right)/5}$?

Answer: Since the population is standard normal, we get
$$X_1^2 + X_2^2 + X_3^2 + X_4^2 \sim \chi^2(4)$$
and similarly
$$Y_1^2 + Y_2^2 + Y_3^2 + Y_4^2 + Y_5^2 \sim \chi^2(5).$$
Thus
$$T = \frac{\frac{X_1^2 + X_2^2 + X_3^2 + X_4^2}{4}}{\frac{Y_1^2 + Y_2^2 + Y_3^2 + Y_4^2 + Y_5^2}{5}} \sim F(4, 5).$$
Therefore
$$Var(T) = Var[F(4, 5)] = \frac{2 \, (5)^2 \, (7)}{4 \, (3)^2 \, (1)} = \frac{350}{36} = 9.72.$$
Theorem 14.11. Let $X \sim N\left( \mu_1, \sigma_1^2 \right)$ and let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from the population $X$. Let $Y \sim N\left( \mu_2, \sigma_2^2 \right)$ and let $Y_1, Y_2, \ldots, Y_m$ be a random sample of size $m$ from the population $Y$. Then the statistic
$$\frac{\frac{S_1^2}{\sigma_1^2}}{\frac{S_2^2}{\sigma_2^2}} \sim F(n - 1, m - 1),$$
where $S_1^2$ and $S_2^2$ denote the sample variances of the first and the second sample, respectively.

Proof: Since $X_i \sim N\left( \mu_1, \sigma_1^2 \right)$, we have by Theorem 14.3
$$\frac{(n - 1) \, S_1^2}{\sigma_1^2} \sim \chi^2(n - 1).$$
Similarly, since $Y_i \sim N\left( \mu_2, \sigma_2^2 \right)$, we have by Theorem 14.3
$$\frac{(m - 1) \, S_2^2}{\sigma_2^2} \sim \chi^2(m - 1).$$
Therefore
$$\frac{\frac{S_1^2}{\sigma_1^2}}{\frac{S_2^2}{\sigma_2^2}} = \frac{\frac{(n-1) \, S_1^2}{(n-1) \, \sigma_1^2}}{\frac{(m-1) \, S_2^2}{(m-1) \, \sigma_2^2}} \sim F(n - 1, m - 1).$$
This completes the proof of the theorem.

Because of this theorem, the F-distribution is also known as the variance-ratio distribution.
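A quick Monte Carlo illustration of mine (not from the text) of Theorem 14.11: the ratio of scaled sample variances from two normal populations has the variance that Theorem 14.8 predicts for $F(n-1, m-1)$; all population parameters are arbitrary.

```python
# Simulation check (not from the text) of Theorem 14.11.
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(10)
n, m, reps = 8, 12, 200_000
s1 = rng.normal(1.0, 2.0, (reps, n)).var(axis=1, ddof=1) / 4.0    # S1^2/sig1^2
s2 = rng.normal(-3.0, 0.5, (reps, m)).var(axis=1, ddof=1) / 0.25  # S2^2/sig2^2
ratio = s1 / s2

print(ratio.var(), f.var(n - 1, m - 1))   # empirical vs F(7, 11) variance
```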
14.4. Review Exercises

1. Let $X_1, X_2, \ldots, X_5$ be a random sample of size 5 from a normal distribution with mean zero and standard deviation 2. Find the sampling distribution of the statistic $X_1 + 2 X_2 - X_3 + X_4 + X_5$.

2. Let $X_1, X_2, X_3$ be a random sample of size 3 from a standard normal distribution. Find the distribution of $X_1^2 + X_2^2 + X_3^2$. If possible, find the sampling distribution of $X_1^2 - X_2^2$. If not, justify why you can not determine its distribution.

3. Let $X_1, X_2, \ldots, X_6$ be a random sample of size 6 from a standard normal distribution. Find the sampling distributions of the statistics
$$\frac{X_1 + X_2 + X_3}{\sqrt{X_4^2 + X_5^2 + X_6^2}} \qquad \text{and} \qquad \frac{X_1 - X_2 - X_3}{\sqrt{X_4^2 + X_5^2 + X_6^2}}.$$

4. Let $X_1, X_2, X_3$ be a random sample of size 3 from an exponential distribution with a parameter $\theta > 0$. Find the distribution of the sample (that is, the joint distribution of the random variables $X_1, X_2, X_3$).

5. Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from a normal population with mean $\mu$ and variance $\sigma^2 > 0$. What is the expected value of the sample variance $S^2 = \frac{1}{n - 1} \sum_{i=1}^n \left( X_i - \overline{X} \right)^2$?

6. Let $X_1, X_2, X_3, X_4$ be a random sample of size 4 from a standard normal population. Find the distribution of the statistic $\frac{X_1 + X_4}{\sqrt{X_2^2 + X_3^2}}$.

7. Let $X_1, X_2, X_3, X_4$ be a random sample of size 4 from a standard normal population. Find the sampling distribution (if possible) and the moment generating function of the statistic $2 X_1^2 + 3 X_2^2 + 4 X_3^2 + X_4^2$. What is the probability distribution of the sample?

8. Let $X$ equal the maximal oxygen intake of a human on a treadmill, where the measurements are in milliliters of oxygen per minute per kilogram of weight. Assume that for a particular population the mean of $X$ is $\mu = 54.03$ and the standard deviation is $\sigma = 5.8$. Let $\overline{X}$ be the sample mean of a random sample $X_1, X_2, \ldots, X_{47}$ of size 47 drawn from $X$. Find
the probability that the sample mean is between 52.761 and 54.453. 9. Let X1, X2,..., Xn be a random sample from a normal distribution with mean µ and variance 2. What is the variance of V 2 = 1? n Xi 10. If X is a random variable with mean µ and variance 2, then µ 2 is called the lower 2 point of X. Suppose a random sample X1, X2, X3, X4 is n i=1 P X 2 Sampling Distributions Associated with the Normal Population 410 drawn from a chi-square distribution with two degrees of freedom. What is the lower 2 point of X1 + X2 + X3 + X4? 11. Let X and Y be independent normal random variables such that the mean and variance of X are 2 and 4, respectively, while the mean and variance of Y are 6 and k, respectively. A sample of size 4 is taken from the X-distribution and a sample of size 9 is taken from the Y -distribution. If P = 0.0228, then what is the value of the constant k? X > 8 Y 12. Let X1, X2,..., Xn be a random sample of size n from a distribution with density function f (x; ) = x e if 0 < x < 1 ( 0 otherwise. What is the distribution of the statistic Y = 2 n i=1 Xi? 13. Suppose X has a normal distribution with mean 0 and variance 1, Y has a chi-square distribution with n degrees of freedom, W has a chi-square distribution with p degrees of freedom, and W, X, and Y are independent. What is the sampling distribution of the statistic V = X P? W +Y p+n 14. A random sample X1, X2,..., Xn of size n is selected from a normal population with mean µ and standard deviation 1. Later an additional independent observation Xn+1 is obtained from the same population. What X)2, where X is the distribution of the statistic (Xn+1 denote the sample mean? i=1(Xi µ)2 + p n P 15. Let T = k(X+Y ) pZ2+W 2, where X, Y, Z, and
W are independent normal random variables with mean 0 and variance 2 > 0. For exactly one value If r denotes the degrees of freedom of that of k, T has a t-distribution. distribution, then what is the value of the pair (k, r)? 16. Let X and Y be joint normal random variables with common mean 0, common variance 1, and covariance 1 2. What is the probability of the event p3, that is P X + Y X + Y p3?   17. Suppose Xj = Zj 1, where j = 1, 2,..., n and Z0, Z1,..., Zn are Zj independent and identically distributed with common variance 2. What is the variance of the random variable 1 n n j=1 Xj? 18. A random sample of size 5 is taken from a normal distribution with mean 0 and standard deviation 2. Find the constant k such that 0.05 is equal to the P Probability and Mathematical Statistics 411 probability that the sum of the squares of the sample observations exceeds the constant k. 19. Let X1, X2,..., Xn and Y1, Y2,..., Yn be two random sample from the independent normal distributions with V ar[Xi] = 2 and V ar[Yi] = 22, for 2 i = 1, 2,..., n and 2 > 0. If U =, and V = then what is the sampling distribution of the statistic 2U +V n i=1 n i=1 X Y 2 Xi Yi 22? P P 20. Suppose X1, X2,..., X6 and Y1, Y2,..., Y9 are independent, identically distributed normal random variables, each with mean zero and variance 2 > 0. What is the 95th percentile of the statistics W = 6 X 2 i " i=1 X / # 9 2 4 j=1 X Y 2? j 3 5 21. Let X1, X2,..., X6 and Y1, Y2,..., Y8 be independent random samples from a normal distribution with mean 0 and variance 1, and =1 X j=1 X 22. Give a proof of Theorem 14.9. 5 4 Sam
Chapter 15
SOME TECHNIQUES FOR FINDING POINT ESTIMATORS OF PARAMETERS

A statistical population consists of all the measurements of interest in a statistical investigation. Usually a population is described by a random variable $X$. If we can gain some knowledge about the probability density function $f(x;\theta)$ of $X$, then we also gain some knowledge about the population under investigation.

A sample is a portion of the population, usually chosen by the method of random sampling, and as such it is a set of random variables $X_1, X_2, \ldots, X_n$ with the same probability density function $f(x;\theta)$ as the population. Once the sampling is done, we get
$$X_1 = x_1, \quad X_2 = x_2, \quad \cdots, \quad X_n = x_n,$$
where $x_1, x_2, \ldots, x_n$ are the sample data.

Every statistical method employs a random sample to gain information about the population. Since the population is characterized by the probability density function $f(x;\theta)$, in statistics one makes statistical inferences about the population distribution $f(x;\theta)$ based on sample information. A statistical inference is a statement based on sample information about the population. There are three types of statistical inferences: (1) estimation, (2) hypothesis testing and (3) prediction. The goal of this chapter is to examine some well known point estimation methods.

In point estimation, we try to find the parameter $\theta$ of the population distribution $f(x;\theta)$ from the sample information. Thus, in parametric point estimation one assumes the functional form of the pdf $f(x;\theta)$ to be known and only estimates the unknown parameter $\theta$ of the population using information available from the sample.

Definition 15.1. Let $X$ be a population with the density function $f(x;\theta)$, where $\theta$ is an unknown parameter. The set of all admissible values of $\theta$ is called a parameter space and it is denoted by $\Omega$, that is
$$\Omega = \left\{\theta \in \mathbb{R}^m \;\middle|\; f(x;\theta) \text{ is a pdf}\right\}$$
for some natural number $m$.

Example 15.1. If $X \sim EXP(\theta)$, then what is the parameter space of $\theta$?

Answer: Since $X \sim EXP(\theta)$, the density function of $X$ is given by f (
x; ✓) = e x ✓. 1 ✓ If ✓ is zero or negative then f (x; ✓) is not a density function. Thus, the admissible values of ✓ are all the positive real numbers. Hence Ω = {✓ 2 = IR+. IR | 0 < ✓ < } 1 Example 15.2. If X N µ, 2, what is the parameter space? ⇠ Answer: The parameter space Ω is given by Ω = ✓ IR2 | f (x; ✓) IR2 | 2 (µ, ) ⇠ 2 1 N µ, = = IR IR+ ⇥ = upper half plane. In general, a parameter space is a subset of IRm. Statistics concerns with the estimation of the unknown parameter ✓ from a random sample X1, X2,..., Xn. Recall that a statistic is a function of X1, X2,..., Xn and free of the population parameter ✓. Probability and Mathematical Statistics 415 f (x; ✓) and X1, X2,..., Xn be a random sample Definition 15.2. Let X from the population X. Any statistic that can be used to guess the parameter ✓ is called an estimator of ✓. The numerical value of this statistic is called an estimate of ✓. The estimator of the parameter ✓ is denoted by ✓. ⇠ One of the basic problems is how to find an estimator of population parameter ✓. There are several methods for finding an estimator of ✓. Some of these methods are: b (1) Moment Method (2) Maximum Likelihood Method (3) Bayes Method (4) Least Squares Method (5) Minimum Chi-Squares Method (6) Minimum Distance Method In this chapter, we only discuss the first three methods of estimating a population parameter. 15.1. Moment Method Let X1, X2,..., Xn be a random sample from a population X with probability density function f (x; ✓1, ✓2,..., ✓m), where ✓1, ✓2,..., ✓m are m unknown parameters. Let E X k = 1 xk f (x; ✓1, ✓2,..., ✓m) dx be the kth population moment about 0.
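Before the worked examples, here is a minimal numerical sketch of the moment recipe: equate sample moments $M_k = \frac{1}{n}\sum_{i=1}^n X_i^k$ with the corresponding population moments and solve for the parameter. Python with numpy is assumed, and the exponential data below are simulated purely for illustration.

```python
# Sketch of the moment principle for one parameter, using X ~ EXP(theta),
# for which E(X) = theta. Equating the first sample moment M1 to E(X)
# gives theta_hat = sample mean. The data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(seed=1)
theta_true = 3.0
x = rng.exponential(scale=theta_true, size=5000)

m1 = x.mean()        # first sample moment  M_1 = (1/n) * sum(x_i)
theta_hat = m1       # solve E(X) = theta = M_1 for theta
print(theta_hat)     # close to 3.0
```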
Thus, the moment estimator of $\sigma^2$ is
$$\widehat{\sigma^2} = \frac{1}{n}\sum_{i=1}^{n} X_i^2 - \overline{X}^2 = \frac{1}{n}\sum_{i=1}^{n}\left(X_i - \overline{X}\right)^2.$$

Example 15.4. Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from a population $X$ with probability density function
$$f(x;\theta) = \begin{cases} \theta\, x^{\theta-1} & \text{if } 0 < x < 1\\ 0 & \text{otherwise,}\end{cases}$$
where $0 < \theta < \infty$ is an unknown parameter. Using the method of moment find an estimator of $\theta$. If $x_1 = 0.2$, $x_2 = 0.6$, $x_3 = 0.5$, $x_4 = 0.3$ is a random sample of size 4, then what is the estimate of $\theta$?

Answer: To find an estimator, we shall equate the population moment to the sample moment. The population moment $E(X)$ is given by
$$E(X) = \int_0^1 x\, f(x;\theta)\, dx = \int_0^1 x\,\theta\, x^{\theta-1}\, dx = \theta \int_0^1 x^{\theta}\, dx = \frac{\theta}{\theta+1}.$$
We know that $M_1 = \overline{X}$. Now setting $M_1$ equal to $E(X)$ and solving for $\theta$, we get
$$\overline{X} = \frac{\theta}{\theta+1}, \qquad \text{that is} \qquad \theta = \frac{\overline{X}}{1-\overline{X}},$$
where $\overline{X}$ is the sample mean. Thus, the statistic $\dfrac{\overline{X}}{1-\overline{X}}$ is an estimator of the parameter $\theta$. Hence
$$\widehat{\theta} = \frac{\overline{X}}{1-\overline{X}}.$$
Since $x_1 = 0.2$, $x_2 = 0.6$, $x_3 = 0.5$, $x_4 = 0.3$, we have $\overline{x} = 0.4$ and
$$\widehat{\theta} = \frac{0.4}{1-0.4} = \frac{2}{3}$$
is an estimate of $\theta$.

Example 15.5. What is the basic principle of the moment method?

Answer: To choose a value for the unknown population parameter for which the observed data have the same moments as the population.

Example 15.6. Suppose $X_1, X_2, \ldots, X_7$ is a random sample from a population $X$ with density function
$$f(x;\beta) = \begin{cases} \dfrac{x^{6}\, e^{-x/\beta}}{\Gamma(7)\,\beta^{7}} & \text{if } 0 < x < \infty\\[1ex] 0 & \text{otherwise.}\end{cases}$$
Find an estimator of $\beta$ by the moment method.

Answer: Since we have only one parameter, we need to compute only the first population moment $E(X)$ about 0.
Thus,
$$E(X) = \int_0^{\infty} x\, f(x;\beta)\, dx = \int_0^{\infty} x\, \frac{x^{6} e^{-x/\beta}}{\Gamma(7)\,\beta^{7}}\, dx = \frac{1}{\Gamma(7)} \int_0^{\infty} \left(\frac{x}{\beta}\right)^{7} e^{-x/\beta}\, dx = \frac{\beta}{\Gamma(7)} \int_0^{\infty} y^{7} e^{-y}\, dy = \frac{\beta\,\Gamma(8)}{\Gamma(7)} = 7\beta.$$
Since $M_1 = \overline{X}$, equating $E(X)$ to $M_1$ we get
$$7\beta = \overline{X}, \qquad \text{that is} \qquad \beta = \frac{1}{7}\,\overline{X}.$$
Therefore, the estimator of $\beta$ by the moment method is given by
$$\widehat{\beta} = \frac{1}{7}\,\overline{X}.$$

Example 15.7. Suppose $X_1, X_2, \ldots, X_n$ is a random sample from a population $X$ with density function
$$f(x;\theta) = \begin{cases} \dfrac{1}{\theta} & \text{if } 0 < x < \theta\\[1ex] 0 & \text{otherwise.}\end{cases}$$
Find an estimator of $\theta$ by the moment method.

Answer: Examining the density function of the population $X$, we see that $X \sim UNIF(0,\theta)$. Therefore
$$E(X) = \frac{\theta}{2}.$$
Now, equating this population moment to the sample moment, we obtain
$$\frac{\theta}{2} = E(X) = M_1 = \overline{X}.$$
Therefore, the estimator of $\theta$ is
$$\widehat{\theta} = 2\,\overline{X}.$$
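A quick simulation makes this last estimator concrete. The sketch below is an illustration only (Python with numpy assumed; the value $\theta = 10$ and the sample size are made-up choices): for uniform data on $(0,\theta)$, twice the sample mean should land near $\theta$.

```python
# Sketch of Example 15.7 numerically: for X ~ UNIF(0, theta), E(X) = theta/2,
# so the moment estimator is theta_hat = 2 * Xbar.  theta = 10 is illustrative.
import numpy as np

rng = np.random.default_rng(seed=2)
theta_true = 10.0
x = rng.uniform(0.0, theta_true, size=1000)

theta_hat = 2.0 * x.mean()
print(theta_hat)   # close to 10
```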
Example 15.8. Suppose $X_1, X_2, \ldots, X_n$ is a random sample from a population $X$ with density function
$$f(x;\alpha,\beta) = \begin{cases} \dfrac{1}{\beta-\alpha} & \text{if } \alpha < x < \beta\\[1ex] 0 & \text{otherwise.}\end{cases}$$
Find the estimators of $\alpha$ and $\beta$ by the moment method.

Answer: Examining the density function of the population $X$, we see that $X \sim UNIF(\alpha,\beta)$. Since the distribution has two unknown parameters, we need the first two population moments. Therefore
$$E(X) = \frac{\alpha+\beta}{2} \qquad \text{and} \qquad E(X^2) = \frac{(\beta-\alpha)^2}{12} + E(X)^2.$$
Equating these moments to the corresponding sample moments, we obtain
$$\frac{\alpha+\beta}{2} = E(X) = M_1 = \overline{X}, \qquad \text{that is} \qquad \alpha+\beta = 2\,\overline{X}, \tag{1}$$
and
$$\frac{(\beta-\alpha)^2}{12} + E(X)^2 = E(X^2) = M_2 = \frac{1}{n}\sum_{i=1}^{n} X_i^2,$$
that is
$$(\beta-\alpha)^2 = 12\left(\frac{1}{n}\sum_{i=1}^{n} X_i^2 - \overline{X}^2\right) = \frac{12}{n}\sum_{i=1}^{n}\left(X_i-\overline{X}\right)^2,$$
which gives
$$\beta-\alpha = 2\sqrt{\frac{3}{n}\sum_{i=1}^{n}\left(X_i-\overline{X}\right)^2}. \tag{2}$$
Adding equation (1) to equation (2), we obtain
$$2\beta = 2\,\overline{X} + 2\sqrt{\frac{3}{n}\sum_{i=1}^{n}\left(X_i-\overline{X}\right)^2}, \qquad \text{that is} \qquad \beta = \overline{X} + \sqrt{\frac{3}{n}\sum_{i=1}^{n}\left(X_i-\overline{X}\right)^2}.$$
Similarly, subtracting (2) from (1), we get
$$\alpha = \overline{X} - \sqrt{\frac{3}{n}\sum_{i=1}^{n}\left(X_i-\overline{X}\right)^2}.$$
Since $\alpha < \beta$, we see that the estimators of $\alpha$ and $\beta$ are
$$\widehat{\alpha} = \overline{X} - \sqrt{\frac{3}{n}\sum_{i=1}^{n}\left(X_i-\overline{X}\right)^2} \qquad \text{and} \qquad \widehat{\beta} = \overline{X} + \sqrt{\frac{3}{n}\sum_{i=1}^{n}\left(X_i-\overline{X}\right)^2}.$$

15.2. Maximum Likelihood Method

The maximum likelihood method was first used by Sir Ronald Fisher in 1922 (see Fisher (1922)) for finding an estimator of an unknown parameter. However, the method originated in the works of Gauss and Bernoulli. Next, we describe the method in detail.

Definition 15.3. Let $X_1, X_2, \ldots, X_n$ be a random sample from a population $X$ with probability density function $f(x;\theta)$, where $\theta$ is an unknown parameter. The likelihood function, $L(\theta)$, is the distribution of the sample. That is
$$L(\theta) = \prod_{i=1}^{n} f(x_i;\theta).$$
This definition says that the likelihood function of a random sample $X_1, X_2, \ldots, X_n$ is the joint density of the random variables $X_1, X_2, \ldots, X_n$.

The $\theta$ that maximizes the likelihood function $L(\theta)$ is called the maximum likelihood estimator of $\theta$, and it is denoted by $\widehat{\theta}$. Hence
$$\widehat{\theta} = \operatorname*{Arg\,sup}_{\theta\in\Omega} L(\theta),$$
where $\Omega$ is the parameter space of $\theta$, so that $L(\theta)$ is the joint density of the sample.
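In practice the maximization of $L(\theta)$, or of $\ln L(\theta)$, can also be done numerically. The sketch below is an illustration under stated assumptions (Python with numpy and scipy; exponential data simulated with mean 2.5); the text itself maximizes $L(\theta)$ by calculus, as the examples that follow show.

```python
# Sketch: maximizing a log-likelihood numerically. For X ~ EXP(theta) with
# density f(x; theta) = (1/theta) * exp(-x/theta), the log-likelihood is
# ln L(theta) = -n*ln(theta) - sum(x)/theta, whose maximizer is Xbar.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(seed=3)
x = rng.exponential(scale=2.5, size=200)

def neg_log_likelihood(theta):
    return len(x) * np.log(theta) + x.sum() / theta

result = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 100.0), method="bounded")
print(result.x, x.mean())   # the numerical maximizer agrees with Xbar
```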
The method of maximum likelihood in a sense picks out, of all the possible values of $\theta$, the one most likely to have produced the given observations $x_1, x_2, \ldots, x_n$. The method is summarized below:

(1) Obtain a random sample $x_1, x_2, \ldots, x_n$ from the distribution of a population $X$ with probability density function $f(x;\theta)$;
(2) define the likelihood function for the sample $x_1, x_2, \ldots, x_n$ by $L(\theta) = f(x_1;\theta)\,f(x_2;\theta)\cdots f(x_n;\theta)$;
(3) find the expression for $\theta$ that maximizes $L(\theta)$. This can be done directly or by maximizing $\ln L(\theta)$;
(4) replace $\theta$ by $\widehat{\theta}$ to obtain an expression for the maximum likelihood estimator for $\theta$;
(5) find the observed value of this estimator for a given sample.

Example 15.9. If $X_1, X_2, \ldots, X_n$ is a random sample from a distribution with density function
$$f(x;\theta) = \begin{cases} (1-\theta)\, x^{-\theta} & \text{if } 0 < x < 1\\ 0 & \text{elsewhere,}\end{cases}$$
what is the maximum likelihood estimator of $\theta$?

Answer: The likelihood function of the sample is given by
$$L(\theta) = \prod_{i=1}^{n} f(x_i;\theta).$$
Therefore
$$\ln L(\theta) = \sum_{i=1}^{n} \ln f(x_i;\theta) = \sum_{i=1}^{n} \ln\!\left[(1-\theta)\, x_i^{-\theta}\right] = n\ln(1-\theta) - \theta\sum_{i=1}^{n} \ln x_i.$$
Now we maximize $\ln L(\theta)$ with respect to $\theta$:
$$\frac{d\ln L(\theta)}{d\theta} = -\frac{n}{1-\theta} - \sum_{i=1}^{n} \ln x_i.$$
Setting this derivative $\frac{d\ln L(\theta)}{d\theta}$ to 0, we get
$$\frac{n}{1-\theta} = -\sum_{i=1}^{n} \ln x_i, \qquad \text{that is} \qquad 1-\theta = -\frac{1}{\frac{1}{n}\sum_{i=1}^{n}\ln x_i} = -\frac{1}{\overline{\ln x}},$$
where $\overline{\ln x} = \frac{1}{n}\sum_{i=1}^n \ln x_i$, or
$$\theta = 1 + \frac{1}{\overline{\ln x}}.$$
This $\widehat{\theta}$ can be shown to be a maximum by the second derivative test and we leave this verification to the reader. Therefore, the estimator of $\theta$ is
$$\widehat{\theta} = 1 + \frac{1}{\overline{\ln X}}.$$
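The closed form just obtained can be sanity-checked by simulation. The sketch below is an assumption-laden illustration (Python with numpy; $\theta = 0.5$ chosen arbitrarily); it samples from $f(x;\theta) = (1-\theta)x^{-\theta}$ by inverting the cdf $F(x) = x^{1-\theta}$ and then evaluates the estimator of Example 15.9.

```python
# Sketch: checking the closed form of Example 15.9. The MLE is
# theta_hat = 1 + n / sum(ln x_i); sampling uses the inverse cdf x = U**(1/(1-theta)).
import numpy as np

rng = np.random.default_rng(seed=4)
theta_true = 0.5
u = rng.uniform(size=5000)
x = u ** (1.0 / (1.0 - theta_true))   # inverse-cdf sampling from f(x; theta)

theta_hat = 1.0 + len(x) / np.log(x).sum()
print(theta_hat)   # close to 0.5
```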
Example 15.10. If $X_1, X_2, \ldots, X_n$ is a random sample from a distribution with density function
$$f(x;\beta) = \begin{cases} \dfrac{x^{6}\, e^{-x/\beta}}{\Gamma(7)\,\beta^{7}} & \text{if } 0 < x < \infty\\[1ex] 0 & \text{otherwise,}\end{cases}$$
then what is the maximum likelihood estimator of $\beta$?

Answer: The likelihood function of the sample is given by
$$L(\beta) = \prod_{i=1}^{n} f(x_i;\beta).$$
Thus,
$$\ln L(\beta) = \sum_{i=1}^{n} \ln f(x_i;\beta) = 6\sum_{i=1}^{n} \ln x_i - \frac{1}{\beta}\sum_{i=1}^{n} x_i - n\ln(6!) - 7n\ln(\beta).$$
Therefore
$$\frac{d}{d\beta}\ln L(\beta) = \frac{1}{\beta^{2}}\sum_{i=1}^{n} x_i - \frac{7n}{\beta}.$$
Setting this derivative $\frac{d}{d\beta}\ln L(\beta)$ to zero, we get
$$\frac{1}{\beta^{2}}\sum_{i=1}^{n} x_i - \frac{7n}{\beta} = 0,$$
which yields
$$\beta = \frac{1}{7n}\sum_{i=1}^{n} x_i.$$
This can be shown to be a maximum by the second derivative test and again we leave this verification to the reader. Hence, the estimator of $\beta$ is given by
$$\widehat{\beta} = \frac{1}{7}\,\overline{X}.$$

Remark 15.1. Note that this maximum likelihood estimator of $\beta$ is the same as the one found for $\beta$ using the moment method in Example 15.6. However, in general the estimators obtained by different methods are different, as the following example illustrates.

Example 15.11. If $X_1, X_2, \ldots, X_n$ is a random sample from a distribution with density function
$$f(x;\theta) = \begin{cases} \dfrac{1}{\theta} & \text{if } 0 < x < \theta\\[1ex] 0 & \text{otherwise,}\end{cases}$$
then what is the maximum likelihood estimator of $\theta$?

Answer: The likelihood function of the sample is given by L(θ
) = f (xi; ✓) i=1 Y n i= ◆ ✓ > xi (i = 1, 2, 3,..., n) ✓ > max{x1, x2,..., xn}. = = Hence the parameter space of ✓ with respect to L(✓) is given by Ω = {✓ 2 IR | xmax < ✓ < } = (xmax, ). 1 1 Now we maximize L(✓) on Ω. First, we compute ln L(✓) and then differentiate it to get and ln L(✓) = n ln(✓) d d✓ ln L(✓) = n ✓ < 0. Therefore ln L(✓) is a decreasing function of ✓ and as such the maximum of ln L(✓) occurs at the left end point of the interval (xmax, ). Therefore, at 1 Probability and Mathematical Statistics 425 ✓ = xmax the likelihood function achieve maximum. Hence the likelihood estimator of ✓ is given by ✓ = X(n) where X(n) denotes the nth order statistic of the given sample. b Thus, Example 15.7 and Example 15.11 say that the if we estimate the parameter ✓ of a distribution with uniform density on the interval (0, ✓), then the maximum likelihood estimator is given by where as ✓ = X(n) b ✓ = 2 X is the estimator obtained by the method of moment. Hence, in general these two methods do not provide the same estimator of an unknown parameter. b Example 15.12. Let X1, X2,..., Xn be a random sample from a distribution with density function f (x; ✓) = 2 ⇡ e 8 < q 0 1 2 (x ✓)2 if x ✓ elsewhere. What is the maximum likelihood estimator of ✓? : Answer: The likelihood function L(✓) is given by L(✓) = r 2 ⇡! n n i=1 Y e 1 2 (xi ✓)2 xi ✓ (i = 1, 2, 3,..., n). Hence the parameter space of ✓ is given by Ω = {✓ IR | 0 ✓ xmin} = [0, xmin],,  where xmin
= min{x1, x2,..., xn}. Now we evaluate the logarithm of the likelihood function.  2 ln L(✓) = ln n 2 n 2 ⇡ 1 2 ✓)2, (xi ◆ where ✓ is on the interval [0, xmin ]. Now we maximize ln L(✓) subject to the condition 0 xmin. Taking the derivative, we get i=1 X ✓ ✓   d d✓ ln L(✓) = 1 2 n (xi i=1 X ✓) 2( 1) = n ✓). (xi i=1 X Some Techniques for finding Point Estimators of Parameters 426 In this example, if we equate the derivative to zero, then we get ✓ = x. But this value of ✓ is not on the parameter space Ω. Thus, ✓ = x is not the solution. Hence to find the solution of this optimization process, we examine the behavior of the ln L(✓) on the interval [0, xmin ]. Note that d d✓ ln L(✓) = 1 2 n (xi i=1 X ✓) 2( 1) = n (xi i=1 X ✓) > 0 since each xi is bigger than ✓. Therefore, the function ln L(✓) is an increasing function on the interval [0, xmin ] and as such it will achieve maximum at the right end point of the interval [0, xmin ]. Therefore, the maximum likelihood estimator of ✓ is given by X = X(1) where X(1) denotes the smallest observation in the random sample X1, X2,..., Xn. b Example 15.13. Let X1, X2,..., Xn be a random sample from a normal population with mean µ and variance 2. What are the maximum likelihood estimators of µ and 2? Answer: Since X by ⇠ N (µ, 2), the probability density function of X is given f (x; µ, ) = 1 p2⇡ e 1 2 ( x µ )2 . The likelihood function of the sample is given by L(µ,
n i=1 Y 1 = ↵ n 1 ✓ ↵ ◆ for all ↵ the domain of the likelihood function is xi for (i = 1, 2,..., n) and for all  xi for (i = 1, 2,..., n). Hence, Ω = {(↵, ) | 0 < ↵ x(1)  and x(n)  < }. 1 Some Techniques for finding Point Estimators of Parameters 428 It is easy to see that L(↵, ) is maximum if ↵ = x(1) and = x(n). Therefore, the maximum likelihood estimator of ↵ and are ↵ = X(1) and = X(n). b b The maximum likelihood estimator ✓ is a maximum likelihood estimator of ✓, then g( ✓ of a parameter ✓ has a remarkable property known as the invariance property. This invariance property says ✓) is the maximum that if likelihood estimator of g(✓), where g is a function from IRk to a subset of IRm. This result was proved by Zehna in 1966. We state this result as a theorem without a proof. b b b ✓ be a maximum likelihood estimator of a parameter ✓ Theorem 15.1. Let and let g(✓) be a function of ✓. Then the maximum likelihood estimator of g(✓) is given by g b. ✓ Now we give two examples to illustrate the importance of this theorem. ⇣ ⌘ b Example 15.15. Let X1, X2,..., Xn be a random sample from a normal population with mean µ and variance 2. What are the maximum likelihood estimators of and µ ? Answer: From Example 15.13, we have the maximum likelihood estimator of µ and 2 to be µ = X and 2 = 1 n n b (Xi i=1 X X)2 =: Σ2 (say). Now using the invariance property of the maximum likelihood estimator we have c and = Σ b = X Σ. µ Example 15.16. Suppose X1, X2,..., Xn is a
random sample from a distribution with density function d 1 if ↵ < x < f (x; ↵, ) = ↵ ( 0 otherwise. Find the estimator of ↵2 + 2 by the method of maximum likelihood. p Probability and Mathematical Statistics 429 Answer: From Example 15.14, we have the maximum likelihood estimator of ↵ and to be ↵ = X(1) and = X(n), respectively. Now using the invariance property of the maximum likelihood ↵2 + 2 is estimator we see that the maximum likelihood estimator of b b X 2 (1) + X 2 (n). p q The concept of information in statistics was introduced by Sir Ronald Fisher, and it is known as Fisher information. Definition 15.4. Let X be an observation from a population with probability density function f (x; ✓). Suppose f (x; ✓) is continuous, twice differentiable and it’s support does not depend on ✓. Then the Fisher information, I(✓), in a single observation X about ✓ is given by I(✓) = 1 Z 1  d ln f (x; ✓) d✓ 2 f (x; ✓) dx. Thus I(✓) is the expected value of the square of the random variable d ln f (X;✓) d✓. That is, I(✓) = E d ln f (X; ✓) d✓ 2.!  In the following lemma, we give an alternative formula for the Fisher information. Lemma 15.1. The Fisher information contained in a single observation about the unknown parameter ✓ can be given alternatively as I(✓) = 1 Z 1  d2 ln f (x; ✓) d✓2 f (x; ✓) dx. Proof: Since f (x; ✓) is a probability density function, f (x; ✓) dx = 1. 1 Z 1 (3) Differentiating (3) with respect to ✓, we get d d✓ 1 Z 1 f (x; ✓) dx = 0. Some Techniques for finding Point Estimators of Parameters 430 Rewrit
ing the last equality, we obtain df (x; ✓) d✓ 1 f (x; ✓) 1 Z 1 f (x; ✓) dx = 0 which is 1 d ln f (x; ✓) d✓ 1 Now differentiating (4) with respect to ✓, we see that Z f (x; ✓) dx = 0. (4) d2 ln f (x; ✓) d✓2 1 Z 1  f (x; ✓) + d ln f (x; ✓) d✓ df (x; ✓) d✓ dx = 0. Rewriting the last equality, we have d2 ln f (x; ✓) d✓2 f (x; ✓) + d ln f (x; ✓) d✓ df (x; ✓) d✓ 1 f (x; ✓) f (x; ✓) dx = 0 1 1  Z which is 1 1 Z d2 ln f (x; ✓) d✓2 + d ln f (x; ✓) d✓  2! f (x; ✓) dx = 0. The last equality implies that d ln f (x; ✓) d✓ 1 Z 1  2 f (x; ✓) dx = 1 Z 1  d2 ln f (x; ✓) d✓2 f (x; ✓) dx. Hence using the definition of Fisher information, we have I(✓) = 1 Z 1  d2 ln f (x; ✓) d✓2 f (x; ✓) dx and the proof of the lemma is now complete. The following two examples illustrate how one can determine Fisher in- formation. Example 15.17. Let X be a single observation taken from a normal population with unknown mean µ and known variance 2. Find the Fisher information in a single observation X about µ. Answer: Since X ⇠ N (µ, 2), the probability density of X is given by f (x; µ) = 1 p2⇡2 e 1 22 (x µ)2.
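Before carrying out the calculation, it may help to see numerically what is being computed. The following sketch, a Monte Carlo illustration under stated assumptions (Python with numpy; $\mu = 1$, $\sigma = 2$ are arbitrary), approximates $I(\mu) = E\!\left[\left(\frac{d}{d\mu}\ln f(X;\mu)\right)^2\right]$ and can be compared with the closed form $1/\sigma^2$ derived next.

```python
# Sketch: Monte Carlo check of the Fisher information of a single observation.
# For X ~ N(mu, sigma^2), the score is d/dmu ln f(X; mu) = (X - mu)/sigma^2,
# and I(mu) = E[score^2] should equal 1/sigma^2.
import numpy as np

rng = np.random.default_rng(seed=5)
mu, sigma = 1.0, 2.0
x = rng.normal(mu, sigma, size=200_000)

score = (x - mu) / sigma**2
print(np.mean(score**2), 1 / sigma**2)   # both approximately 0.25
```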
Probability and Mathematical Statistics 431 Hence Therefore and Hence ln f (x; µ) = 1 2 ln(2⇡2) (x µ)2 22. d ln f (x; µ) dµ x = µ 2 d2 ln f (x; µ) dµ2 = 1 2. I(µ) = 1 Z 1 ✓ 1 2 ◆ f (x; µ) dx = 1 2. Example 15.18. Let X1, X2,..., Xn be a random sample from a normal population with unknown mean µ and known variance 2. Find the Fisher information in this sample of size n about µ. Answer: Let In(µ) be the required Fisher information. Then from the definition, we have In(µ) = = = E E E ✓ ✓ ✓ d2 ln f (X1, X2,..., Xn; µ dµ2 ◆ d2 dµ2 {ln f (X1; µ) + · · · + ln f (Xn; µ)} ◆ d2 ln f (X1; µ) dµ2 d2 ln f (Xn; µ) dµ2 · · · E ◆ ✓ ◆ = I(µ) + · · · + I(µ) = n I(µ) = n 1 2 (using Example 15.17). This example shows that if X1, X2,..., Xn is a random sample from a f (x; ✓), then the Fisher information, In(✓), in a sample of population X size n about the parameter ✓ is equal to n times the Fisher information in X about ✓. Thus ⇠ In(✓) = n I(✓). If X is a random variable with probability density function f (x; ✓), where ✓ = (✓1,..., ✓n) is an unknown parameter vector then the Fisher information, Some Techniques for finding Point Estimators of Parameters 432 I(✓), is a n ⇥ n matrix given by I(✓) = (Iij(✓))
= E ✓ @2 ln f (X; ✓) ✓ @✓i @✓j ◆◆. Example 15.19. Let X1, X2,..., Xn be a random sample from a normal population with mean µ and variance 2. What is the Fisher information matrix, In(µ, 2), of the sample of size n about the parameters µ and 2? Answer: Let us write ✓1 = µ and ✓2 = 2. The Fisher information, In(✓), in a sample of size n about the parameter (✓1, ✓2) is equal to n times the Fisher information in the population about (✓1, ✓2), that is In(✓1, ✓2) = n I(✓1, ✓2). (5) Since there are two parameters ✓1 and ✓2, the Fisher information matrix I(✓1, ✓2) is a 2 2 matrix given by ⇥ where I(✓1, ✓2) = I11(✓1, ✓2) I12(✓1, ✓2) I21(✓1, ✓2) 0 @ I22(✓1, ✓2) 1 A (6) Iij(✓1, ✓2) = E ✓ @2 ln f (X; ✓1, ✓2) @✓i @✓j ◆ for i = 1, 2 and j = 1, 2. Now we proceed to compute Iij. Since f (x; ✓1, ✓2) = 1 p2 ⇡ ✓2 e (x ✓1)2 2 ✓2 we have ln f (x; ✓1, ✓2) = 1 2 ln(2 ⇡ ✓2) (x ✓1)2 2 ✓2. Taking partials of ln f (x; ✓1, ✓2), we have @ ln f (x; ✓1, ✓2) @✓1 @ ln f (x; ✓1, ✓2) @✓2 @2 ln f (x; ✓1, ✓2) @✓2 1 @2 ln f (x; ✓1, ✓2) @✓2 2 @2 ln f (x; ✓1, ✓2) @
✓1 @✓2 = = = = = ✓1, x ✓2 1 2 ✓2 1 ✓2 1 2 ✓2 2 x, ✓1. ✓2 2 + (x ✓1)2 2 ✓2 2, (x ✓1)2 ✓3 2, Probability and Mathematical Statistics 433 Hence Similarly, I11(✓1, ✓2) = E 1 ✓2 ◆ ✓ = 1 ✓2 = 1 2. I21(✓1, ✓2) = I12(✓1, ✓2) = E ✓ X ✓1 ✓2 2 ◆ = E(X) ✓2 2 ✓1 ✓2 2 = ✓1 ✓2 2 ✓1 ✓2 2 = 0 and I22(✓1, ✓2) = = E E (X ✓1)2 ✓3 2 ✓1)2 ✓ (X ✓3 2 + 1 2✓2 2 ◆ ✓2 ✓3 2 = 1 2✓2 2 1 2✓2 2 = 1 2✓2 2 = 1 24. Thus from (5), (6) and the above calculations, the Fisher information matrix is given by In(✓1, ✓2) = n 1 2 0 0 @ 0 1 24 = 1 A n 2 0 0 @ 0 n 24. 1 A Now we present an important theorem about the maximum likelihood estimator without a proof. Theorem 15.2. Under certain regularity conditions on the f (x; ✓) the max✓ of ✓ based on a random sample of size n from imum likelihood estimator a population X with probability density f (x; ✓) is asymptotically normally distributed with mean ✓ and variance b 1 n I(✓). That is N ✓, ✓ 1 n I(✓) ◆ ✓M L ⇠ b as n.! 1 The following example shows that the maximum likelihood estimator of a parameter is not necessarily unique. Example 15.20. If X1, X2,..., Xn is a random sample from a distribution with density function f (x; ✓) = 1 2 8 < 0 if ✓ 1
 x  ✓ + 1 otherwise, then what is the maximum likelihood estimator of ✓? : Some Techniques for finding Point Estimators of Parameters 434 Answer: The likelihood function of this sample is given by L(✓) = n 1 2 ( 0 if max{x1,..., xn} 1  ✓  min{x1,..., xn} + 1 otherwise. Since the likelihood function is a constant, any value in the interval [max{x1,..., xn} 1, min{x1,..., xn} + 1] is a maximum likelihood estimate of ✓. Example 15.21. What is the basic principle of maximum likelihood estimation? Answer: To choose a value of the parameter for which the observed data have as high a probability or density as possible. In other words a maximum likelihood estimate is a parameter value under which the sample data have the highest probability. 15.3. Bayesian Method In the classical approach, the parameter ✓ is assumed to be an unknown, but fixed quantity. A random sample X1, X2,..., Xn is drawn from a population with probability density function f (x; ✓) and based on the observed values in the sample, knowledge about the value of ✓ is obtained. In Bayesian approach ✓ is considered to be a quantity whose variation can be described by a probability distribution (known as the prior distribution). This is a subjective distribution, based on the experimenter’s belief, and is formulated before the data are seen (and hence the name prior distribution). A sample is then taken from a population where ✓ is a parameter and the prior distribution is updated with this sample information. This updated prior is called the posterior distribution. The updating is done with the help of Bayes’ theorem and hence the name Bayesian method. In this section, we shall denote the population density f (x; ✓) as f (x/✓), that is the density of the population X given the parameter ✓. Definition 15.5. Let X1, X2,..., Xn be a random sample from a distribution with density f (x/✓), where ✓ is the unknown parameter to be estimated. The probability density function of the random variable ✓ is called the prior distribution of ✓ and usually denoted by h(�
�). Definition 15.6. Let X1, X2,..., Xn be a random sample from a distribution with density f (x/✓), where ✓ is the unknown parameter to be estimated. The Probability and Mathematical Statistics 435 conditional density, k(✓/x1, x2,..., xn), of ✓ given the sample x1, x2,..., xn is called the posterior distribution of ✓. Example 15.22. Let X1 = 1, X2 = 2 be a random sample of size 2 from a distribution with probability density function f (x/✓) = 3 x ✓ ◆ ✓x(1 ✓)3 x, x = 0, 1, 2, 3. If the prior density of ✓ is h(✓) = k 0 8 < if 1 2 < ✓ < 1 otherwise, what is the posterior distribution of ✓? : Answer: Since h(✓) is the probability density of ✓, we should get which implies 1 1 2 Z h(✓) d✓ = 1 1 1 2 Z k d✓ = 1. Therefore k = 2. The joint density of the sample and the parameter is given by = u(x1, x2, ✓) = f (x1/✓)f (x2/✓)h(✓) 3 x1◆ ✓ 3 = 2 x1◆✓ ✓ 3 x2◆ 3 x2◆ ✓ ✓x1+x2(1 ✓)6 ✓x1(1 ✓)3 x1 ✓x2(1 ✓)3 x2 2 x1 x2. Hence, u(1, 2, ✓) = 2 3 3 2 1 ◆ ◆✓ ✓ ✓)3. = 18 ✓3(1 ✓3(1 ✓)3 Some Techniques for finding Point Estimators of Parameters 436 The marginal distribution of the sample 1 g(1, 2) = u(1, 2, ✓) d✓ Z 1 2 1 1 2 Z = = 18 1 Z 1 2 1 = 18 1 2 Z 9 140. = 18 ✓3(1 ✓)3 d✓ ✓3 1 + 3✓2 3✓ ✓3 d�
� ✓3 + 3✓5 3✓4 ✓6 d✓ The conditional distribution of the parameter ✓ given the sample X1 = 1 and X2 = 2 is given by k(✓/x1 = 1, x2 = 2) = u(1, 2, ✓) g(1, 2) 18 ✓3 (1 9 140 = 280 ✓3 (1 = ✓)3 ✓)3. Therefore, the posterior distribution of ✓ is k(✓/x1 = 1, x2 = 2) = 280 ✓3 (1 ✓)3 ( 0 if 1 2 < ✓ < 1 otherwise. Remark 15.2. If X1, X2,..., Xn is a random sample from a population with density f (x/✓), then the joint density of the sample and the parameter is given by u(x1, x2,..., xn, ✓) = h(✓) f (xi/✓). n i=1 Y Given this joint density, the marginal density of the sample can be computed using the formula g(x1, x2,..., xn) = 1 h(✓) Z 1 n i=1 Y f (xi/✓) d✓. Probability and Mathematical Statistics 437 Now using the Bayes rule, the posterior distribution of ✓ can be computed as follows: k(✓/x1, x2,..., xn) = h(✓) h(✓) n i=1 f (xi/✓) n i=1 f (xi/✓) d✓. 1 1 R Q Q In Bayesian method, we use two types of loss functions. Definition 15.7. Let X1, X2,..., Xn be a random sample from a distribution with density f (x/✓), where ✓ is the unknown parameter to be estimated. Let ✓ be an estimator of ✓. The function b L2 ✓, ✓ = ✓ is called the squared error loss. The function ⌘ ⇣ b ⇣ b ✓ 2 ⌘ L1 ✓, ✓ = ✓ is called the absolute error loss. ⌘ ⇣ b b ✓
The loss function L represents the ‘loss’ incurred when ✓ is used in place of the parameter ✓. Definition 15.8. Let X1, X2,..., Xn be a random sample from a distribution with density f (x/✓), where ✓ is the unknown parameter to be estimated. Let ✓ be an estimator of ✓ and let L be a given loss function. The expected value of this loss function with respect to the population distribution f (x/✓), b that is ✓, ✓ ⌘ ⇣ b b is called the risk. RL(✓) = L ✓, ✓ f (x/✓) dx Z ⌘ ⇣ b The posterior density of the parameter ✓ given the sample x1, x2,..., xn, that is k(✓/x1, x2,..., xn) contains all information about ✓. In Bayesian estimation of parameter one chooses an estimate ✓ for ✓ such that b k( ✓/x1, x2,..., xn) is maximum subject to a loss function. Mathematically, this is equivalent to minimizing the integral b Ω Z L ✓, ✓ k(✓/x1, x2,..., xn) d✓ ⌘ ⇣ b Some Techniques for finding Point Estimators of Parameters 438 ✓, where Ω denotes the support of the prior density h(✓) of with respect to the parameter ✓. b Example 15.23. Suppose one observation was taken of a random variable X which yielded the value 2. The density function for X is f (x/✓) = 1 ✓ 8 < 0 if 0 < x < ✓ otherwise, and prior distribution for parameter ✓ is : 3 ✓4 h(✓) = 0 If the loss function is L(z, ✓) = (z ✓? ( if 1 < ✓ < 1 otherwise. ✓)2, then what is the Bayes’ estimate for Answer: The prior density of the random variable ✓ is h(✓) = 3 ✓4 ( 0 if 1 < ✓ < 1 otherwise. The probability density function of the population is f (x/✓) = 1 ✓ ( 0 if 0 < x < ✓ otherwise. Hence, the joint probability density function of the sample and the parameter is given by u(x, ✓) =
$h(\theta)\, f(x/\theta) = \dfrac{3}{\theta^{4}}\cdot\dfrac{1}{\theta} = \dfrac{3}{\theta^{5}}$ if $0 < x < \theta$ and $1 < \theta < \infty$, and $0$ otherwise.

The marginal density of the sample is given by
$$g(x) = \int_{x}^{\infty} u(x,\theta)\, d\theta = \int_{x}^{\infty} \frac{3}{\theta^{5}}\, d\theta = \frac{3}{4\,x^{4}}.$$
Thus, if $x = 2$, then $g(2) = \frac{3}{64}$. The posterior density of $\theta$ when $x = 2$ is given by
$$k(\theta/x=2) = \frac{u(2,\theta)}{g(2)} = \frac{3\,\theta^{-5}}{3/64} = \begin{cases} 64\,\theta^{-5} & \text{if } 2 < \theta < \infty\\ 0 & \text{otherwise.}\end{cases}$$
Now, we find the Bayes' estimator by minimizing the expression $E\left[L(\theta, z)/x=2\right]$. That is,
$$\widehat{\theta} = \operatorname*{Arg\,min}_{z\in\Omega} \int_{\Omega} L(\theta, z)\, k(\theta/x=2)\, d\theta.$$
Let us call this integral $\phi(z)$. Then
$$\phi(z) = \int_{\Omega} L(\theta, z)\, k(\theta/x=2)\, d\theta = \int_{2}^{\infty} (z-\theta)^2\, 64\,\theta^{-5}\, d\theta.$$
We want to find the value of $z$ which yields a minimum of $\phi(z)$. This can be done by taking the derivative of $\phi(z)$ and evaluating where the derivative is zero:
$$\frac{d}{dz}\phi(z) = 2\int_{2}^{\infty} (z-\theta)\, 64\,\theta^{-5}\, d\theta = 2z\int_{2}^{\infty} 64\,\theta^{-5}\, d\theta - 2\int_{2}^{\infty} \theta\, 64\,\theta^{-5}\, d\theta = 2z - \frac{16}{3}.$$
Setting this derivative of $\phi(z)$ to zero and solving for $z$, we get
$$2z - \frac{16}{3} = 0 \quad \Longrightarrow \quad z = \frac{8}{3}.$$
Since $\frac{d^2\phi(z)}{dz^2} = 2 > 0$, the function $\phi(z)$ has a minimum at $z = \frac{8}{3}$. Hence, the Bayes' estimate of $\theta$ is $\frac{8}{3}$.
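This minimization can also be reproduced numerically. The following sketch assumes Python with scipy (an illustration only, not part of the text's method); it evaluates the expected posterior loss $\phi(z)$ by numerical integration and minimizes it over $z$, recovering the value $8/3$.

```python
# Sketch: numerical check of Example 15.23. The expected posterior loss
# phi(z) = integral over (2, inf) of (z - theta)^2 * k(theta | x = 2) dtheta
# is minimized over z; the minimizer should be 8/3.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def posterior(t):
    return 64.0 / t**5          # k(theta | x = 2) on (2, infinity)

def phi(z):
    value, _ = quad(lambda t: (z - t) ** 2 * posterior(t), 2.0, np.inf)
    return value

result = minimize_scalar(phi, bounds=(2.0, 20.0), method="bounded")
print(result.x, 8 / 3)          # both approximately 2.667
```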
In Example 15.23, we have found the Bayes' estimate of $\theta$ by directly minimizing
$$\int_{\Omega} L\!\left(\widehat{\theta},\theta\right) k(\theta/x_1, x_2, \ldots, x_n)\, d\theta$$
with respect to $\widehat{\theta}$.

The next result is very useful while finding the Bayes' estimate using a quadratic loss function. Notice that if $L\!\left(\widehat{\theta},\theta\right) = \left(\theta-\widehat{\theta}\right)^2$, then $\int_{\Omega} L\!\left(\widehat{\theta},\theta\right) k(\theta/x_1, x_2, \ldots, x_n)\, d\theta$ is $E\!\left[\left(\theta-\widehat{\theta}\right)^2 / x_1, x_2, \ldots, x_n\right]$. The following theorem is based on the fact that the function $\phi$ defined by $\phi(c) = E\!\left[(X-c)^2\right]$ attains a minimum if $c = E[X]$.

Theorem 15.3. Let $X_1, X_2, \ldots, X_n$ be a random sample from a distribution with density $f(x/\theta)$, where $\theta$ is the unknown parameter to be estimated. If the loss function is squared error, then the Bayes' estimator $\widehat{\theta}$ of the parameter $\theta$ is given by
$$\widehat{\theta} = E(\theta/x_1, x_2, \ldots, x_n),$$
where the expectation is taken with respect to the density $k(\theta/x_1, x_2, \ldots, x_n)$.

Now we give several examples to illustrate the use of this theorem.

Example 15.24. Suppose the prior distribution of $\theta$ is uniform over the interval $(0,1)$. Given $\theta$, the population $X$ is uniform over the interval $(0,\theta)$. If the squared error loss function is used, find the Bayes' estimator of $\theta$ based on a sample of size one.

Answer: The prior density of $\theta$ is given by
$$h(\theta) = \begin{cases} 1 & \text{if } 0 < \theta < 1\\ 0 & \text{otherwise.}\end{cases}$$
The density of the population is given by
$$f(x/\theta) = \begin{cases} \dfrac{1}{\theta} & \text{if } 0 < x < \theta\\[1ex] 0 & \text{otherwise.}\end{cases}$$
The joint density of the sample and the parameter is given by
$$u(x,\theta) = h(\theta)\, f(x/\theta) = \begin{cases} \dfrac{1}{\theta} & \text{if } 0 < x < \theta < 1\\[1ex] 0 & \text{otherwise.}\end{cases}$$
The marginal density of the sample is
$$g(x) = \int_{x}^{1} u(x,\theta)\, d\theta = \int_{x}^{1} \frac{1}{\theta}\, d\theta = \begin{cases} -\ln x & \text{if } 0 < x < 1\\ 0 & \text{otherwise}\end{cases}$$
. The conditional density of ✓ given the sample is k(✓/x) = u(x, ✓) g(x) = 1 ✓ ln x ( 0 if 0 < x < ✓ < 1 elsewhere. Since the loss function is quadratic error, therefore the Bayes’ estimator of ✓ is ✓ = E[✓/x] = b 1 x Z 1 ✓ k(✓/x) d✓ 1 ✓ ✓ ln x 1 d✓ d ln x x 1 ln x Thus, the Bayes’ estimator of ✓ based on one observation X is ✓ = X 1 ln X. Example 15.25. Given ✓, the random variable X has a binomial distribution with n = 2 and probability of success ✓. If the prior density of ✓ is b h(✓) = k 0 8 < if 1 2 < ✓ < 1 otherwise, what is the Bayes’ estimate of ✓ for a squared error loss if X = 1? : Answer: Note that ✓ is uniform on the interval fore, the prior density of ✓ is 1 2, 1, hence k = 2. There- h(✓) = 2 ( 0 if 1 2 < ✓ < 1 otherwise. Some Techniques for finding Point Estimators of Parameters 442 The population density is given by f (x/✓) = n x ✓ ◆ ✓x (1 ✓)n x = 2 x ✓ ◆ ✓x (1 ✓)2 x, x = 0, 1, 2. The joint density of the sample and the parameter ✓ is u(x, ✓) = h(✓) f (x/✓) = 2 2 x ✓ ◆ ✓x (1 ✓)2 x 2 < ✓ < 1 and x = 0, 1, 2. The marginal density of the sample is given where 1 by 1 g(x) = u(x, ✓) d✓. 1 2 Z This integral is easy to evaluate if we substitute X = 1 now. Hence g(1✓ ✓ (1 ✓) d✓ 4✓2 d✓ 1 ✓2 2 ✓3 3  2✓3 1 2 1 1 2 3✓2
2) 3 4 2 8 ⇤ ✓ ◆ ⇥ ( Therefore, the posterior density of ✓ given x = 1, is k(✓/x = 1) = u(1, ✓) g(1) = 12 (✓ ✓2), where 1 2 < ✓ < 1. Since the loss function is quadratic error, therefore the Probability and Mathematical Statistics 443 Bayes’ estimate of ✓ is ✓ = E[✓/x = 1] 1 ✓ k(✓/x = 1) d✓ 12 ✓ (✓ ✓2) d✓ 3 ✓4 1 1 2 ⇤ 5 16 = 3 = = = ⇥ = 1 11 16. Hence, based on the sample of size one with X = 1, the Bayes’ estimate of ✓ is 11 16, that is ✓ = 11 16. The following theorem help us to evaluate the Bayes estimate of a sample if the loss function is absolute error loss. This theorem is based the fact that a function (c) = E [ |X c| ] is minimum if c is the median of X. b Theorem 15.4. Let X1, X2,..., Xn be a random sample from a distribution with density f (x/✓), where ✓ is the unknown parameter to be estimated. If ✓ of the paramthe loss function is absolute error, then the Bayes estimator eter ✓ is given by ✓ = median of k(✓/x1, x2,..., xn) b where k(✓/x1, x2,..., xn) is the posterior distribution of ✓. b The followings are some examples to illustrate the above theorem. Example 15.26. Given ✓, the random variable X has a binomial distribution with n = 3 and probability of success ✓. If the prior density of ✓ is h(✓) = k if 1 2 < ✓ < 1 8 < 0 otherwise, what is the Bayes’ estimate of ✓ for an absolute difference error loss if the sample consists of one observation x = 3? : Some Techniques for finding Point Estimators of Parameters 444 Answer: Since, the prior density of ✓ is h(✓) = and the population density is if 1 2 < ✓ < 1 otherwise
, 2 0 8 < : f (x/✓) = 3 x ✓ ◆ ✓x(1 ✓)3 x, the joint density of the sample and the parameter is given by u(3, ✓) = h(✓) f (3/✓) = 2 ✓3, where 1 2 < ✓ < 1. The marginal density of the sample (at x = 3) is given by 1 g(3) = u(3, ✓) d✓ Z 1 2 1 1 2 Z ✓4 2  15 32. = = = 2 ✓3 d✓ 1 1 2 Therefore, the conditional density of ✓ given X = 3 is k(✓/x = 3) = u(3, ✓) g(3) = 64 15 ✓3 if 1 2 < ✓ < 1 ( 0 elsewhere. Since, the loss function is absolute error, the Bayes’ estimator is the median of the probability density function k(✓/x = 3). That is 64 60 64 60 64 15 ✓3 d✓ ✓4 ✓ 1 2 4 ⇤b ✓ ⇣ ⇥ ⌘ b 1 16. Probability and Mathematical Statistics 445 Solving the above equation for ✓, we get b ✓ = 4 r 17 32 = 0.8537. b Example 15.27. Suppose the prior distribution of ✓ is uniform over the interval (2, 5). Given ✓, X is uniform over the interval (0, ✓). What is the Bayes’ estimator of ✓ for absolute error loss if X = 1? Answer: Since, the prior density of ✓ is h(✓) = and the population density is if 2 < ✓ < 5 otherwise, 1 3 0 8 < : f (x/✓) = 1 ✓ 8 < 0 if 0 < x < ✓ elsewhere, the joint density of the sample and the parameter is given by : u(x, ✓) = h(✓) f (x/✓) = 1 3✓, where 2 < ✓ < 5 and 0 < x < ✓. The marginal density of the sample (at x = 1) is given by 5 g(1) = u(1, ✓) d✓ u(1, ✓) d✓ + u(1, ✓) d✓ 5 2 Z 1 3✓ d ln
= = = 5 2. ✓ Therefore, the conditional density of ✓ given the sample x = 1, is ◆ k(✓/x = 1) = = u(1, ✓) g(1) 1 ✓ ln 5 2. Some Techniques for finding Point Estimators of Parameters 446 Since, the loss function is absolute error, the Bayes estimate of ✓ is the median of k(✓/x = 1). Hence 1 2 = ✓ 2 Z b 1 ✓ ln 5 2 d✓ ln = 1 ln 5 2 ✓ 2! b. ✓ = p10 = 3.16. Solving for ✓, we get b Example 15.28. What is the basic principle of Bayesian estimation? b Answer: The basic principle behind the Bayesian estimation method consists of choosing a value of the parameter ✓ for which the observed data have as high a posterior probability k(✓/x1, x2,..., xn) of ✓ as possible subject to a loss function. 15.4. Review Exercises 1. Let X1, X2,..., Xn be a random sample of size n from a distribution with a probability density function f (x; ✓) = 1 2✓ if ✓ < x < ✓ 8 < 0 otherwise, where 0 < ✓ is a parameter. Using the moment method find an estimator for the parameter ✓. : 2. Let X1, X2,..., Xn be a random sample of size n from a distribution with a probability density function ✓ (✓ + 1) x 2 if 1 < x < f (x; ✓) = 8 < 0 otherwise, 1 where 0 < ✓ is a parameter. Using the moment method find an estimator for the parameter ✓. : 3. Let X1, X2,..., Xn be a random sample of size n from a distribution with a probability density function f (x; ✓) = ✓2 x e ✓ x if 0 < x < 1 0 8 < : otherwise, Probability and Mathematical Statistics 447 where 0 < ✓ is a parameter. Using the moment method find an estimator for the parameter ✓. 4. Let X1, X2,..., Xn be a random sample of size n from a distribution with a probability density function ✓
x✓ 1 if 0 < x < 1 f (x; ✓) = 8 < 0 otherwise, where 0 < ✓ is a parameter. Using the maximum likelihood method find an estimator for the parameter ✓. : 5. Let X1, X2,..., Xn be a random sample of size n from a distribution with a probability density function ✓ (✓ + 1) x 2 if 1 < x < f (x; ✓) = 8 < 0 otherwise, 1 where 0 < ✓ is a parameter. Using the maximum likelihood method find an estimator for the parameter ✓. : 6. Let X1, X2,..., Xn be a random sample of size n from a distribution with a probability density function ✓2 x e ✓ x if 0 < x < f (x; ✓) = 8 < 0 otherwise, 1 where 0 < ✓ is a parameter. Using the maximum likelihood method find an : estimator for the parameter ✓. 7. Let X1, X2, X3, X4 be a random sample from a distribution with density function f (x; ) = 8 < 0 4) (x 1 e for x > 4 otherwise, where > 0. If the data from this random sample are 8.2, 9.1, 10.6 and 4.9, : respectively, what is the maximum likelihood estimate of ? 8. Given ✓, the random variable X has a binomial distribution with n = 2 and probability of success ✓. If the prior density of ✓ is k if 1 2 < ✓ < 1 0 otherwise, h(✓) = 8 < : Some Techniques for finding Point Estimators of Parameters 448 what is the Bayes’ estimate of ✓ for a squared error loss if the sample consists of x1 = 1 and x2 = 2. 9. Suppose two observations were taken of a random variable X which yielded the values 2 and 3. The density function for X is f (x/✓) = 1 ✓ 8 < 0 if 0 < x < ✓ otherwise, and prior distribution for the parameter ✓ is : h(✓) = 4 3 ✓ if ✓ > 1 ( 0 otherwise. If the loss function is quadratic, then what is the Bayes’ estimate for ✓? 10. The Pareto distribution is often
used in study of incomes and has the cumulative density function F (x; ↵, ✓) = ✓ ↵ x 1 0 8 < if ↵ x  otherwise, where 0 < ↵ < are parameters. Find the maximum likelihood estimates of ↵ and ✓ based on a sample of size 5 for value 3, 5, 2, 7, 8. and 1 < ✓ < : 1 1 11. The Pareto distribution is often used in study of incomes and has the cumulative density function F (x; ↵, ✓) = ✓ ↵ x 1 0 8 < if ↵ x  otherwise, where 0 < ↵ < are parameters. Using moment methods find estimates of ↵ and ✓ based on a sample of size 5 for value 3, 5, 2, 7, 8. and 1 < ✓ < : 1 1 12. Suppose one observation was taken of a random variable X which yielded the value 2. The density function for X is f (x/µ) = 1 p2⇡ e and prior distribution of µ is 1 2 (x µ)2 1 < x <, 1 h(µ) = 1 p2⇡ 1 2 µ2 e 1 < µ <. 1 Probability and Mathematical Statistics 449 If the loss function is quadratic, then what is the Bayes’ estimate for µ? 13. Let X1, X2,..., Xn be a random sample of size n from a distribution with probability density f (x) = 1 ✓ 8 < 0 if 2✓ x   3✓ otherwise, where ✓ > 0. What is the maximum likelihood estimator of ✓? : 14. Let X1, X2,..., Xn be a random sample of size n from a distribution with probability density f (x) = 1 0 8 < ✓2 if 0 x   otherwise, 1 ✓2 1 where ✓ > 0. What is the maximum likelihood estimator of ✓? : 15. Given ✓, the random variable X has a binomial distribution with n = 3 and probability of success ✓. If the prior density of ✓ is h(✓) = k if 1 2 < ✓ < 1 8 < 0 otherwise, what is the Bayes’ estimate of ✓ for
a absolute difference error loss if the sample consists of one observation x = 1? : 16. Suppose the random variable X has the cumulative density function c)2 is F (x). Show that the expected value of the random variable (X minimum if c equals the expected value of X. 17. Suppose the continuous random variable X has the cumulative density function F (x). Show that the expected value of the random variable |X c| is minimum if c equals the median of X (that is, F (c) = 0.5). 18. Eight independent trials are conducted of a given system with the following results: S, F, S, F, S, S, S, S where S denotes the success and F denotes the failure. What is the maximum likelihood estimate of the probability of successful operation p? 19. What is the maximum likelihood estimate of if the 5 values 4 2 (1 + )5 2, 5 3 4 were drawn from the population for which f (x; ) = 1 5, 2 3, 1, ? x 2 Some Techniques for finding Point Estimators of Parameters 450 20. If a sample of five values of X is taken from the population for which f (x; t) = 2(t 1)tx, what is the maximum likelihood estimator of t? 21. A sample of size n is drawn from a gamma distribution f (x; ) = x x3 e 64 8 < 0 if 0 < x < 1 otherwise. What is the maximum likelihood estimator of ? : 22. The probability density function of the random variable X is defined by f (x; ) = 1 2 3 + px ( 0 if 0 x 1   otherwise. What is the maximum likelihood estimate of the parameter based on two 4 and x2 = 9 independent observations x1 = 1 16? 23. Let X1, X2,..., Xn be a random sample from a distribution with density function f (x; ) = µ|. What is the maximum likelihood estimator of ? 2 e |x 24. Suppose X1, X2,... are independent random variables, each with probability of success p and probability of failure 1
1. Let N be the number of observation needed to obtain the first success. What is the maximum likelihood estimator of p in term of N? p, where 0   p 25. Let X1, X2, X3 and X4 be a random sample from the discrete distribution X such that P (X = x) = ✓2 ✓2x e x! 8 < 0 for x = 0, 1, 2,..., 1 otherwise, where ✓ > 0. If the data are 17, 10, 32, 5, what is the maximum likelihood estimate of ✓? : 26. Let X1, X2,..., Xn be a random sample of size n from a population with a probability density function f (x; ↵, ) = ↵ Γ(↵) x↵ 1e x 0 8 < : if 0 < x < 1 otherwise, Probability and Mathematical Statistics 451 where ↵ and are parameters. Using the moment method find the estimators for the parameters ↵ and . 27. Let X1, X2,..., Xn be a random sample of size n from a population distribution with the probability density function f (x; p) = 10 x px (1 p)10 x ◆ for x = 0, 1,..., 10, where p is a parameter. Find the Fisher information in the sample about the parameter p. ✓ 28. Let X1, X2,..., Xn be a random sample of size n from a population distribution with the probability density function ✓2 x e ✓ x if 0 < x < f (x; ✓) = 8 < 0 otherwise, 1 where 0 < ✓ is a parameter. Find the Fisher information in the sample about the parameter ✓. : 29. Let X1, X2,..., Xn be a random sample of size n from a population distribution with the probability density function f (x; µ, 2) = 1 x p2 ⇡ 1 2 e 8 < 0 ln(x) µ 2, if 0 < x < 1 otherwise, where Fisher information matrix in the sample about the parameters µ and 2. are unknown parameters. Find the < µ < and 0 < 2 < :
1 1 1 30. Let X1, X2,..., Xn be a random sample of size n from a population distribution with the probability density function 2⇡ x 3 2 e f (x; µ, ) = 8 >< q 0 (x µ)2 2µ2x, if 0 < x < 1 otherwise, where 0 < µ < 1 information matrix in the sample about the parameters µ and . are unknown parameters. Find the Fisher and 0 < < >: 1 31. Let X1, X2,..., Xn be a random sample of size n from a distribution with a probability density function f (x) = 1 Γ(↵) ✓↵ x↵ 1 e 0 8 < : x ✓ if 0 < x < 1 otherwise, Some Techniques for finding Point Estimators of Parameters 452 where ↵ > 0 and ✓ > 0 are parameters. Using the moment method find estimators for parameters ↵ and . 32. Let X1, X2,..., Xn be a random sample of sizen from a distribution with a probability density function f (x; ✓) = 1 ⇡ [1 + (x, ✓)2] < x <, 1 1 where 0 < ✓ is a parameter. Using the maximum likelihood method find an estimator for the parameter ✓. 33. Let X1, X2,..., Xn be a random sample of sizen from a distribution with a probability density function f (x; ✓) = 1 2 e |x ✓|, < x <, 1 1 where 0 < ✓ is a parameter. Using the maximum likelihood method find an estimator for the parameter ✓. 34. Let X1, X2,..., Xn be a random sample of size n from a population distribution with the probability density function f (x; ) = x e x! 8 < 0 if x = 0, 1,..., 1 otherwise, where > 0 is an unknown parameter. Find the Fisher information matrix in the sample about the parameter . : 35. Let X1, X2,..., Xn be a random sample of size n from a population distribution with the probability density function p)x 1p
f (x; p) = (1 8 < 0 if x = 1,..., 1 otherwise, where 0 < p < 1 is an unknown parameter. Find the Fisher information matrix in the sample about the parameter p. : 36. Let X1, X2,..., Xn be a random sample from a population X having the probability density function f (x; ✓) = ⇢ 2 ✓2 ✓ 0 x, x if 0   otherwise, ✓ Probability and Mathematical Statistics 453 where ✓ > 0 is a parameter. Find an estimator for ✓ using the moment method. 37. A box contains 50 red and blue balls out of which ✓ are red. A sample of 30 balls is to be selected without replacement. If X denotes the number of red balls in the sample, then find an estimator for ✓ using the moment method. Some Techniques for finding Point Estimators of Parameters 454 Probability and Mathematical Statistics 455 Chapter 16 CRITERIA FOR EVALUATING THE GOODNESS OF ESTIMATORS We have seen in Chapter 15 that, in general, different parameter estimation methods yield different estimators. For example, if X U N IF (0, ✓) and X1, X2,..., Xn is a random sample from the population X, then the estimator of ✓ obtained by moment method is ⇠ ✓MM = 2X where as the estimator obtained by the maximum likelihood method is b ✓M L = X(n) b where X and X(n) are the sample average and the nth order statistic, respectively. Now the question arises: which of the two estimators is better? Thus, we need some criteria to evaluate the goodness of an estimator. Some well known criteria for evaluating the goodness of an estimator are: (1) Unbiasedness, (2) Efficiency and Relative Efficiency, (3) Uniform Minimum Variance Unbiasedness, (4) Sufficiency, and (5) Consistency. In this chapter, we shall examine only the first four criteria in details. The concepts of unbiasedness, efficiency and sufficiency were introduced by Sir Ronald Fisher. Criteria for Evaluating the Goodness of Estimators 456 16.
1. The Unbiased Estimator

Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from a population with probability density function $f(x;\theta)$. An estimator $\widehat{\theta}$ of $\theta$ is a function of the random variables $X_1, X_2, \ldots, X_n$ which is free of the parameter $\theta$. An estimate is a realized value of an estimator that is obtained when a sample is actually taken.

Definition 16.1. An estimator $\widehat{\theta}$ of $\theta$ is said to be an unbiased estimator of $\theta$ if and only if
$$E\!\left(\widehat{\theta}\right) = \theta.$$
If $\widehat{\theta}$ is not unbiased, then it is called a biased estimator of $\theta$.

An estimator of a parameter may not equal the actual value of the parameter for every realization of the sample $X_1, X_2, \ldots, X_n$, but if it is unbiased then on average it will equal the parameter.

Example 16.1. Let $X_1, X_2, \ldots, X_n$ be a random sample from a normal population with mean $\mu$ and variance $\sigma^2 > 0$. Is the sample mean $\overline{X}$ an unbiased estimator of the parameter $\mu$?

Answer: Since each $X_i \sim N(\mu, \sigma^2)$, we have
$$\overline{X} \sim N\!\left(\mu, \frac{\sigma^2}{n}\right).$$
That is, the sample mean is normal with mean $\mu$ and variance $\frac{\sigma^2}{n}$. Thus $E\!\left(\overline{X}\right) = \mu$. Therefore, the sample mean $\overline{X}$ is an unbiased estimator of $\mu$.

Example 16.2. Let $X_1, X_2, \ldots, X_n$ be a random sample from a normal population with mean $\mu$ and variance $\sigma^2 > 0$. What is the maximum likelihood estimator of $\sigma^2$? Is this maximum likelihood estimator an unbiased estimator of the parameter $\sigma^2$?

Answer: In Example 15.13, we have shown that the maximum likelihood estimator of $\sigma^2$ is
$$\widehat{\sigma^2} = \frac{1}{n}\sum_{i=1}^{n}\left(X_i-\overline{X}\right)^2.$$
Now, we examine the unbiasedness of this estimator:
$$E\!\left(\widehat{\sigma^2}\right) = E\!\left[\frac{1}{n}\sum_{i=1}^{n}\left(X_i-\overline{X}\right)^2\right] = \frac{n-1}{n}\,E\!\left[\frac{1}{n-1}\sum_{i=1}^{n}\left(X_i-\overline{X}\right)^2\right] = \frac{n-1}{n}\,E\!\left(S^2\right) = \frac{n-1}{n}\,\sigma^2 \neq \sigma^2.$$
Hence, the maximum likelihood estimator $\widehat{\sigma^2}$ is a biased estimator of the parameter $\sigma^2$.
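The bias found in Example 16.2 is easy to see by simulation. The sketch below is illustrative only (Python with numpy assumed; the choices $\mu = 5$, $\sigma^2 = 9$, $n = 10$ are arbitrary): averaged over many samples, $\widehat{\sigma^2}$ concentrates near $\frac{n-1}{n}\sigma^2$ while $S^2$ stays near $\sigma^2$.

```python
# Sketch: simulating the bias of Example 16.2. For N(5, 9) samples of size
# n = 10, the average of sigma_hat^2 = (1/n) * sum (X_i - Xbar)^2 over many
# samples should be near ((n-1)/n) * 9 = 8.1, while S^2 stays near 9.
import numpy as np

rng = np.random.default_rng(seed=6)
n, mu, sigma2, reps = 10, 5.0, 9.0, 100_000
samples = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))

sigma_hat2 = samples.var(axis=1, ddof=0)   # the maximum likelihood estimator
s2 = samples.var(axis=1, ddof=1)           # the sample variance S^2
print(sigma_hat2.mean(), s2.mean())        # about 8.1 and about 9.0
```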
Example 16.4. Let $X$ be a random variable with mean 2. Let $\widehat{\theta}_1$ and $\widehat{\theta}_2$ be unbiased estimators of the second and third moments, respectively, of $X$ about the origin. Find an unbiased estimator of the third moment of $X$ about its mean in terms of $\widehat{\theta}_1$ and $\widehat{\theta}_2$.

Answer: Since $\widehat{\theta}_1$ and $\widehat{\theta}_2$ are the unbiased estimators of the second and third moments of $X$ about the origin, we get
$$E\!\left(\widehat{\theta}_1\right) = E\!\left(X^2\right) \qquad \text{and} \qquad E\!\left(\widehat{\theta}_2\right) = E\!\left(X^3\right).$$
The third moment of $X$ about its mean is
$$E\!\left[(X-2)^3\right] = E\!\left[X^3 - 6X^2 + 12X - 8\right] = E\!\left(X^3\right) - 6\,E\!\left(X^2\right) + 12\,E(X) - 8 = E\!\left(X^3\right) - 6\,E\!\left(X^2\right) + 24 - 8 = E\!\left(X^3\right) - 6\,E\!\left(X^2\right) + 16.$$
Thus, the unbiased estimator of the third moment of $X$ about its mean is $\widehat{\theta}_2 - 6\,\widehat{\theta}_1 + 16$.

Example 16.5. Let $X_1, X_2, \ldots, X_5$ be a sample of size 5 from the uniform distribution on the interval $(0,\theta)$, where $\theta$ is unknown. Let the estimator of $\theta$ be $k\,X_{\max}$, where $k$ is some constant and $X_{\max}$ is the largest observation. In order for $k\,X_{\max}$ to be an unbiased estimator, what should be the value of the constant $k$?

Answer: The probability density function of $X_{\max}$ is given by
$$g(x) = \frac{5!}{4!\,0!}\,[F(x)]^4\, f(x) = 5\left(\frac{x}{\theta}\right)^4 \frac{1}{\theta} = \frac{5}{\theta^5}\,x^4.$$
If $k\,X_{\max}$ is an unbiased estimator of $\theta$, then
$$\theta = E\,(k\,X_{\max}) = k\,E\,(X_{\max}) = k\int_0^{\theta} x\,g(x)\,dx = k\int_0^{\theta} \frac{5}{\theta^5}\,x^5\,dx = \frac{5}{6}\,k\,\theta.$$
Hence,
$$k = \frac{6}{5}.$$

Example 16.6. Let $X_1, X_2, \ldots, X_n$ be a sample of size $n$ from a distribution with unknown mean $-\infty < \mu < \infty$ and unknown variance $\sigma^2 > 0$. Show that the statistic $\overline{X}$ and
Y = X1+2X2+···+nXn are both unbiased estimators < µ < 1 n (n+1) 2 of µ. Further, show that V ar X < V ar(Y ). Answer: First, we show that X is an unbiased estimator of X1 + X2 + · · · + Xn n ✓ n ◆ E (Xi) µ = µ. i=1 X n i=1 X Hence, the sample mean X is an unbiased estimator of the population mean irrespective of the distribution of X. Next, we show that Y is also an unbiased estimator of µ. E (Y ) = E X1 + 2X2 + · · · + nXn n (n+1) 2! i E (Xi) n i=1 X n i µ i=1 X µ n (n + 1) 2 = = = 2 n (n + 1) 2 n (n + 1) 2 n (n + 1) = µ. Hence, X and Y are both unbiased estimator of the population mean irrespective of the distribution of the population. The variance of X is given by V ar X = V ar X1 + X2 + · · · + Xn n  1 n2 V ar [X1 + X2 + · · · + Xn] 1 n2 V ar [Xi] n ⇥ ⇤ = = = i=1 X 2 n. Probability and Mathematical Statistics 461 Similarly, the variance of Y can be calculated as follows: V ar [Y ] = V ar X1 + 2X2 + · · · + nXn n (n+1) 2 # " 4 n2 (n + 1)2 V ar [1 X1 + 2 X2 + · · · + n Xn] = = = = 4 n2 (n + 1)2 4 n2 (n + 1)2 4 n2 (n + 1)2 n i=1 X n V ar [i Xi] i2 V ar [Xi] i=1 X 2 n i2 = 2 = = 2 3 2 3 4 n2 (n + 1)2 2 2n + 1 n (n + 1) 2n + 1 (n + 1) V ar i=1 X n (n + 1) (2n + 1) 6 X
. 2n+1 (n+1) > 1 for n ⇥ Since 2 < V ar[ Y ]. This shows 2, we see that V ar 3 that although the estimators X and Y are both unbiased estimator of µ, yet the variance of the sample mean X is smaller than the variance of Y. X ⇥ ⇤ ⇤ In statistics, between two unbiased estimators one prefers the estimator which has the minimum variance. This leads to our next topic. However, before we move to the next topic we complete this section with some known disadvantages with the notion of unbiasedness. The first disadvantage is that an unbiased estimator for a parameter may not exist. The second disadvantage is that the property of unbiasedness is not invariant under functional ✓ is an unbiased estimator of ✓ and g is a function, transformation, that is, if then g( ✓) may not be an unbiased estimator of g(✓). 16.2. The Relatively Efficient Estimator b b We have seen that in Example 16.6 that the sample mean and the statistic X = X1 + X2 + · · · + Xn n Y = X1 + 2X2 + · · · + nXn 1 + 2 + · · · + n Criteria for Evaluating the Goodness of Estimators 462 are both unbiased estimators of the population mean. However, we also seen that V ar X < V ar(Y ). The following figure graphically illustrates the shape of the distributions of both the unbiased estimators. m m If an unbiased estimator has a smaller variance or dispersion, then it has a greater chance of being close to true parameter ✓. Therefore when two estimators of ✓ are both unbiased, then one should pick the one with the smaller variance. Definition 16.2. Let estimator ✓1 and ✓1 is said to be more efficient than ✓2 if ✓2 be two unbiased estimators of ✓. The b b b V ar ✓1 < V ar The ratio ⌘ given by ⌘ ⇣ b ⌘ ✓1, ✓2 = b ✓2. ⌘ ⇣ b V ar ✓2 V ar ⌘ ⇣ ✓1 b is called the relative efficiency of ⇣ b ⌘ b ✓1 with respect to �
�� ⇣ b ✓2. Example 16.7. Let X1, X2, X3 be a random sample of size 3 from a population with mean µ and variance 2 > 0. If the statistics X and Y given by b b Y = X1 + 2X2 + 3X3 6 are two unbiased estimators of the population mean µ, then which one is more efficient? Probability and Mathematical Statistics 463 Answer: Since E (Xi) = µ and V ar (Xi) = 2, we get X1 + X2 + X3 3 ✓ (E (X1) + E (X2) + E (X3)) ◆ 3µ = and E (Y ) = E X1 + 2X2 + 3X3 6 ✓ (E (X1) + 2E (X2) + 3E (X3)) ◆ = 1 6 1 6 = µ. = 6µ Therefore both X and Y are unbiased. Next we determine the variance of both the estimators. The variances of these estimators are given by V ar X = V ar X1 + X2 + X3 3 ✓ ◆ [V ar (X1) + V ar (X2) + V ar (X3)] = = = 1 9 1 9 12 36 32 2 and V ar (Y ) = V ar X1 + 2X2 + 3X3 6 ✓ [V ar (X1) + 4V ar (X2) + 9V ar (X3)] ◆ = = = 1 36 1 36 14 36 142 2. Therefore 12 36 2 = V ar X < V ar (Y ) = 14 36 2. Criteria for Evaluating the Goodness of Estimators 464 Hence, X is more efficient than the estimator Y. Further, the relative efficiency of X with respect to Y is given by ⌘ X, Y = 14 12 = 7 6. Example 16.8. Let X1, X2,..., Xn be a random sample of size n from a population with density f (x; ✓) = 1 ✓ e x ✓ 8 < 0 if 0  x < 1 otherwise, where ✓ > 0 is a parameter. Are the
estimators X1 and X unbiased? Given, X1 and X, which one is more efficient estimator of ✓? : Answer: Since the population X is exponential with parameter ✓, that is X EXP (✓), the mean and variance of it are given by ⇠ E(X) = ✓ and V ar(X) = ✓2. Since X1, X2,..., Xn is a random sample from X, we see that the statistic EXP (✓). Hence, the expected value of X1 is ✓ and thus it is an X1 ⇠ unbiased estimator of the parameter ✓. Also, the sample mean is an unbiased estimator of ✓ since n E (Xi) i=1 X n = ✓. Next, we compute the variances of the unbiased estimators X1 and X. It is easy to see that V ar (X1) = ✓2 and V ar X = V ar X1 + X2 + · · · + Xn n ◆ ✓ n V ar (Xi) = = = 1 n2 i=1 X 1 n2 n✓2 ✓2. n Probability and Mathematical Statistics 465 Hence ✓2 n = V ar X < V ar (X1) = ✓2. Thus X is more efficient than X1 and the relative efficiency of X with respect to X1 is ⌘(X, X1) = = n. ✓2 ✓2 n Example 16.9. Let X1, X2, X3 be a random sample of size 3 from a population with density f (x; ) = x e x! 8 < 0 if x = 0, 1, 2,..., 1 otherwise, where is a parameter. Are the estimators given by : 1 = 1 4 (X1 + 2X2 + X3) and 2 = 1 9 (4X1 + 3X2 + 2X3) c 2, which one is more efficient estimator of ? unbiased? Given, Find an unbiased estimator of whose variance is smaller than the variances of 1 and c c 1 and 2. c Answer: Since each Xi ⇠ c c P OI(), we get E (Xi) =
Example 16.9. Let $X_1, X_2, X_3$ be a random sample of size 3 from a population with density
$$f(x; \lambda) = \begin{cases} \frac{\lambda^x\, e^{-\lambda}}{x!} & \text{if } x = 0, 1, 2, \ldots \\[4pt] 0 & \text{otherwise,} \end{cases}$$
where $\lambda$ is a parameter. Are the estimators given by
$$\widehat{\lambda}_1 = \frac{1}{4}\,(X_1 + 2X_2 + X_3) \qquad \text{and} \qquad \widehat{\lambda}_2 = \frac{1}{9}\,(4X_1 + 3X_2 + 2X_3)$$
unbiased? Given $\widehat{\lambda}_1$ and $\widehat{\lambda}_2$, which one is the more efficient estimator of $\lambda$? Find an unbiased estimator of $\lambda$ whose variance is smaller than the variances of $\widehat{\lambda}_1$ and $\widehat{\lambda}_2$.

Answer: Since each $X_i \sim POI(\lambda)$, we get $\mathrm{E}(X_i) = \lambda$ and $\mathrm{Var}(X_i) = \lambda$. It is easy to see that
$$\mathrm{E}\left(\widehat{\lambda}_1\right) = \frac{1}{4}\left(\mathrm{E}(X_1) + 2\,\mathrm{E}(X_2) + \mathrm{E}(X_3)\right) = \frac{4\lambda}{4} = \lambda$$
and
$$\mathrm{E}\left(\widehat{\lambda}_2\right) = \frac{1}{9}\left(4\,\mathrm{E}(X_1) + 3\,\mathrm{E}(X_2) + 2\,\mathrm{E}(X_3)\right) = \frac{9\lambda}{9} = \lambda.$$
Thus, both $\widehat{\lambda}_1$ and $\widehat{\lambda}_2$ are unbiased estimators of $\lambda$. Now we compute their variances to find out which one is more efficient. It is easy to note that
$$\mathrm{Var}\left(\widehat{\lambda}_1\right) = \frac{1}{16}\left(\mathrm{Var}(X_1) + 4\,\mathrm{Var}(X_2) + \mathrm{Var}(X_3)\right) = \frac{6\,\lambda}{16} = \frac{486\,\lambda}{1296}$$
and
$$\mathrm{Var}\left(\widehat{\lambda}_2\right) = \frac{1}{81}\left(16\,\mathrm{Var}(X_1) + 9\,\mathrm{Var}(X_2) + 4\,\mathrm{Var}(X_3)\right) = \frac{29\,\lambda}{81} = \frac{464\,\lambda}{1296}.$$
Since
$$\mathrm{Var}\left(\widehat{\lambda}_2\right) < \mathrm{Var}\left(\widehat{\lambda}_1\right),$$
the estimator $\widehat{\lambda}_2$ is more efficient than the estimator $\widehat{\lambda}_1$. We have seen in Section 16.1 that the sample mean is always an unbiased estimator of the population mean irrespective of the population distribution, and that its variance always equals $\frac{1}{n}$ times the population variance, where $n$ denotes the sample size. Hence, here we get
$$\mathrm{Var}\left(\bar{X}\right) = \frac{\lambda}{3} = \frac{432\,\lambda}{1296}.$$
Therefore, we get
$$\mathrm{Var}\left(\bar{X}\right) < \mathrm{Var}\left(\widehat{\lambda}_2\right) < \mathrm{Var}\left(\widehat{\lambda}_1\right).$$
Thus, the sample mean has an even smaller variance than the two unbiased estimators given in this example.
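Before moving on, here is a quick exact-arithmetic check of the variance ranking in Example 16.9; it is a throwaway sketch, with the weight vectors taken directly from the example.

```python
from fractions import Fraction

# Var(sum c_i X_i) = lambda * sum(c_i^2) when the X_i are iid Poisson(lambda).
# Compare the coefficient of lambda for the three unbiased estimators above.
def var_coeff(weights):
    return sum(w ** 2 for w in weights)

lam1 = [Fraction(1, 4), Fraction(2, 4), Fraction(1, 4)]  # (X1 + 2X2 + X3)/4
lam2 = [Fraction(4, 9), Fraction(3, 9), Fraction(2, 9)]  # (4X1 + 3X2 + 2X3)/9
xbar = [Fraction(1, 3)] * 3                              # sample mean

for name, w in [("lambda1_hat", lam1), ("lambda2_hat", lam2), ("xbar", xbar)]:
    c = var_coeff(w)
    # unbiasedness <=> the weights sum to 1, since E(X_i) = lambda
    print(name, " unbiased:", sum(w) == 1, "  Var/lambda =", c, "=", c * 1296, "/1296")
```

This prints the coefficients $486/1296$, $464/1296$ and $432/1296$, confirming that the sample mean has the smallest variance.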
In view of this example, we have now encountered a new problem: how to find an unbiased estimator which has the smallest variance among all unbiased estimators of a given parameter. We resolve this issue in the next section.

16.3. The Uniform Minimum Variance Unbiased Estimator

Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from a population with probability density function $f(x; \theta)$. Recall that an estimator $\widehat{\theta}$ of $\theta$ is a function of the random variables $X_1, X_2, \ldots, X_n$ which does not depend on $\theta$.

Definition 16.3. An unbiased estimator $\widehat{\theta}$ of $\theta$ is said to be a uniform minimum variance unbiased estimator of $\theta$ if and only if
$$\mathrm{Var}\left(\widehat{\theta}\right) \leq \mathrm{Var}\left(\widehat{T}\right)$$
for any unbiased estimator $\widehat{T}$ of $\theta$.

If an estimator $\widehat{\theta}$ is unbiased, then its mean is equal to the parameter $\theta$, that is $\mathrm{E}\left(\widehat{\theta}\right) = \theta$, and the variance of $\widehat{\theta}$ is
$$\mathrm{Var}\left(\widehat{\theta}\right) = \mathrm{E}\left(\left(\widehat{\theta} - \mathrm{E}\left(\widehat{\theta}\right)\right)^2\right) = \mathrm{E}\left(\left(\widehat{\theta} - \theta\right)^2\right).$$
This variance, if it exists, is a function of the unbiased estimator $\widehat{\theta}$, and it has a minimum in the class of all unbiased estimators of $\theta$. Therefore we have an alternative definition of the uniform minimum variance unbiased estimator.

Definition 16.4. An unbiased estimator $\widehat{\theta}$ of $\theta$ is said to be a uniform minimum variance unbiased estimator of $\theta$ if it minimizes the variance $\mathrm{E}\left(\left(\widehat{\theta} - \theta\right)^2\right)$.

Example 16.10. Let $\widehat{\theta}_1$ and $\widehat{\theta}_2$ be unbiased estimators of $\theta$. Suppose $\mathrm{Var}\left(\widehat{\theta}_1\right) = 1$, $\mathrm{Var}\left(\widehat{\theta}_2\right) = 2$ and $\mathrm{Cov}\left(\widehat{\theta}_1, \widehat{\theta}_2\right) = \frac{1}{2}$. What are the values of $c_1$ and $c_2$ for which $c_1 \widehat{\theta}_1 + c_2 \widehat{\theta}_2$ is an unbiased estimator of $\theta$ with minimum variance among unbiased estimators of this type?

Answer: We want $c_1 \widehat{\theta}_1 + c_2 \widehat{\theta}_2$ to be a minimum variance unbiased estimator of $\theta$. For unbiasedness we need
$$\mathrm{E}\left(c_1 \widehat{\theta}_1 + c_2 \widehat{\theta}_2\right) = \theta \;\Longrightarrow\; c_1\theta + c_2\theta = \theta \;\Longrightarrow\; c_1 + c_2 = 1 \;\Longrightarrow\; c_2 = 1 - c_1.$$
Therefore
$$\begin{aligned} \mathrm{Var}\left(c_1 \widehat{\theta}_1 + c_2 \widehat{\theta}_2\right) &= c_1^2\, \mathrm{Var}\left(\widehat{\theta}_1\right) + c_2^2\, \mathrm{Var}\left(\widehat{\theta}_2\right) + 2\,c_1 c_2\, \mathrm{Cov}\left(\widehat{\theta}_1, \widehat{\theta}_2\right) \\ &= c_1^2 + 2\,c_2^2 + c_1 c_2 \\ &= c_1^2 + 2\,(1-c_1)^2 + c_1\,(1-c_1) \\ &= 2 + 2\,c_1^2 - 3\,c_1. \end{aligned}$$
Hence the variance $\mathrm{Var}\left(c_1 \widehat{\theta}_1 + c_2 \widehat{\theta}_2\right)$ is a function of $c_1$. Let us denote this function by $\phi(c_1)$, that is
$$\phi(c_1) := \mathrm{Var}\left(c_1 \widehat{\theta}_1 + c_2 \widehat{\theta}_2\right) = 2 + 2\,c_1^2 - 3\,c_1.$$
Taking the derivative of $\phi(c_1)$ with respect to $c_1$, we get
$$\frac{d}{dc_1}\,\phi(c_1) = 4\,c_1 - 3.$$
Setting this derivative to zero and solving for $c_1$, we obtain
$$4\,c_1 - 3 = 0 \;\Longrightarrow\; c_1 = \frac{3}{4}.$$
Therefore
$$c_2 = 1 - c_1 = 1 - \frac{3}{4} = \frac{1}{4}.$$
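As a sanity check on Example 16.10, the short sketch below evaluates $\phi(c_1) = 2 + 2c_1^2 - 3c_1$ on a fine grid and confirms that the minimum sits at $c_1 = 3/4$; the grid range and resolution are arbitrary choices.

```python
import numpy as np

# phi(c1) = Var(c1*theta1_hat + (1 - c1)*theta2_hat) from Example 16.10,
# with Var(theta1_hat) = 1, Var(theta2_hat) = 2 and Cov = 1/2.
def phi(c1):
    c2 = 1.0 - c1
    return c1**2 * 1.0 + c2**2 * 2.0 + 2.0 * c1 * c2 * 0.5

grid = np.linspace(-1.0, 2.0, 300001)
best = grid[np.argmin(phi(grid))]
print("minimizer ~", best)               # ~ 0.75, i.e. c1 = 3/4, c2 = 1/4
print("minimum variance ~", phi(best))   # 2 + 2*(3/4)^2 - 3*(3/4) = 7/8
```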
In Example 16.10, we saw that if $\widehat{\theta}_1$ and $\widehat{\theta}_2$ are any two unbiased estimators of $\theta$, then $c\,\widehat{\theta}_1 + (1-c)\,\widehat{\theta}_2$ is also an unbiased estimator of $\theta$ for any $c \in \mathbb{R}$. Hence given two unbiased estimators $\widehat{\theta}_1$ and $\widehat{\theta}_2$,
$$C = \left\{ c\,\widehat{\theta}_1 + (1-c)\,\widehat{\theta}_2 \;:\; c \in \mathbb{R} \right\}$$
forms an uncountable class of unbiased estimators of $\theta$. When the variances of $\widehat{\theta}_1$ and $\widehat{\theta}_2$ are known along with their covariance, then, as in Example 16.10, we are able to determine the minimum variance unbiased estimator in the class $C$. If the variances of the estimators $\widehat{\theta}_1$ and $\widehat{\theta}_2$ are not known, then it is very difficult to find the minimum variance estimator even in the class of estimators $C$. Notice that $C$ is a subset of the class of all unbiased estimators, and finding a minimum variance unbiased estimator in this larger class is a difficult task.

One way to find a uniform minimum variance unbiased estimator for a parameter is to use the Cramér-Rao lower bound, or Fisher information inequality.

Theorem 16.1. Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from a population $X$ with probability density $f(x; \theta)$, where $\theta$ is a scalar parameter. Let $\widehat{\theta}$ be any unbiased estimator of $\theta$. Suppose the likelihood function $L(\theta)$ is a differentiable function of $\theta$ and satisfies
$$\frac{d}{d\theta} \int_{-\infty}^{\infty} \!\!\cdots\! \int_{-\infty}^{\infty} h(x_1, \ldots, x_n)\, L(\theta)\, dx_1 \cdots dx_n = \int_{-\infty}^{\infty} \!\!\cdots\! \int_{-\infty}^{\infty} h(x_1, \ldots, x_n)\, \frac{d}{d\theta} L(\theta)\, dx_1 \cdots dx_n \tag{1}$$
for any $h(x_1, \ldots, x_n)$ with $\mathrm{E}\left(h(X_1, \ldots, X_n)\right) < \infty$. Then
$$\mathrm{Var}\left(\widehat{\theta}\right) \geq \frac{1}{\mathrm{E}\left(\left(\frac{\partial \ln L(\theta)}{\partial \theta}\right)^2\right)}. \tag{CR1}$$

Proof: Since $L(\theta)$ is the joint probability density function of the sample $X_1, X_2, \ldots, X_n$,
$$\int_{-\infty}^{\infty} \!\!\cdots\! \int_{-\infty}^{\infty} L(\theta)\, dx_1 \cdots dx_n = 1. \tag{2}$$
Differentiating (2) with respect to $\theta$, and using (1) with $h(x_1, \ldots, x_n) = 1$, yields
$$\int_{-\infty}^{\infty} \!\!\cdots\! \int_{-\infty}^{\infty} \frac{d}{d\theta} L(\theta)\, dx_1 \cdots dx_n = 0. \tag{3}$$
Rewriting (3) as
$$\int_{-\infty}^{\infty} \!\!\cdots\! \int_{-\infty}^{\infty} \frac{dL(\theta)}{d\theta}\, \frac{1}{L(\theta)}\, L(\theta)\, dx_1 \cdots dx_n = 0,$$
we see that
$$\int_{-\infty}^{\infty} \!\!\cdots\! \int_{-\infty}^{\infty} \frac{d \ln L(\theta)}{d\theta}\, L(\theta)\, dx_1 \cdots dx_n = 0.$$
Hence
$$\int_{-\infty}^{\infty} \!\!\cdots\! \int_{-\infty}^{\infty} \theta\, \frac{d \ln L(\theta)}{d\theta}\, L(\theta)\, dx_1 \cdots dx_n = 0. \tag{4}$$
Since $\widehat{\theta}$ is an unbiased estimator of $\theta$, we see that
$$\mathrm{E}\left(\widehat{\theta}\right) = \int_{-\infty}^{\infty} \!\!\cdots\! \int_{-\infty}^{\infty} \widehat{\theta}\, L(\theta)\, dx_1 \cdots dx_n = \theta. \tag{5}$$
Differentiating (5) with respect to $\theta$, and using (1) with $h(X_1, \ldots, X_n) = \widehat{\theta}$, we have
$$\int_{-\infty}^{\infty} \!\!\cdots\! \int_{-\infty}^{\infty} \widehat{\theta}\, \frac{d}{d\theta} L(\theta)\, dx_1 \cdots dx_n = 1. \tag{6}$$
Rewriting (6) as
$$\int_{-\infty}^{\infty} \!\!\cdots\! \int_{-\infty}^{\infty} \widehat{\theta}\, \frac{dL(\theta)}{d\theta}\, \frac{1}{L(\theta)}\, L(\theta)\, dx_1 \cdots dx_n = 1,$$
we have
$$\int_{-\infty}^{\infty} \!\!\cdots\! \int_{-\infty}^{\infty} \widehat{\theta}\, \frac{d \ln L(\theta)}{d\theta}\, L(\theta)\, dx_1 \cdots dx_n = 1. \tag{7}$$
From (4) and (7), we obtain
$$\int_{-\infty}^{\infty} \!\!\cdots\! \int_{-\infty}^{\infty} \left(\widehat{\theta} - \theta\right) \frac{d \ln L(\theta)}{d\theta}\, L(\theta)\, dx_1 \cdots dx_n = 1. \tag{8}$$
By the Cauchy-Schwarz inequality,
$$\begin{aligned} 1 &= \left( \int_{-\infty}^{\infty} \!\!\cdots\! \int_{-\infty}^{\infty} \left(\widehat{\theta} - \theta\right) \frac{d \ln L(\theta)}{d\theta}\, L(\theta)\, dx_1 \cdots dx_n \right)^{2} \\ &\leq \left( \int_{-\infty}^{\infty} \!\!\cdots\! \int_{-\infty}^{\infty} \left(\widehat{\theta} - \theta\right)^{2} L(\theta)\, dx_1 \cdots dx_n \right) \left( \int_{-\infty}^{\infty} \!\!\cdots\! \int_{-\infty}^{\infty} \left(\frac{d \ln L(\theta)}{d\theta}\right)^{2} L(\theta)\, dx_1 \cdots dx_n \right) \\ &= \mathrm{Var}\left(\widehat{\theta}\right)\, \mathrm{E}\left(\left(\frac{\partial \ln L(\theta)}{\partial \theta}\right)^{2}\right). \end{aligned}$$
Therefore
$$\mathrm{Var}\left(\widehat{\theta}\right) \geq \frac{1}{\mathrm{E}\left(\left(\frac{\partial \ln L(\theta)}{\partial \theta}\right)^2\right)},$$
and the proof of the theorem is now complete.

If $L(\theta)$ is twice differentiable with respect to $\theta$, the inequality (CR1) can be stated equivalently as
$$\mathrm{Var}\left(\widehat{\theta}\right) \geq \frac{1}{-\mathrm{E}\left(\frac{\partial^2 \ln L(\theta)}{\partial \theta^2}\right)}. \tag{CR2}$$
The inequalities (CR1) and (CR2) are known as the Cramér-Rao lower bound for the variance of $\widehat{\theta}$, or the Fisher information inequality. The condition (1) interchanges the order of integration and differentiation. Therefore any distribution whose range depends on the value of the parameter is not covered by this theorem; hence a distribution like the uniform distribution may not be analyzed using the Cramér-Rao lower bound.

If an unbiased estimator attains the Cramér-Rao lower bound, that is if equality holds in (CR1), then it is a minimum variance unbiased estimator. We state this as a theorem without giving a proof.

Theorem 16.2. Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from a population $X$ with probability density $f(x; \theta)$, where $\theta$ is a parameter. If $\widehat{\theta}$ is an unbiased estimator of $\theta$ and
$$\mathrm{Var}\left(\widehat{\theta}\right) = \frac{1}{\mathrm{E}\left(\left(\frac{\partial \ln L(\theta)}{\partial \theta}\right)^2\right)},$$
then $\widehat{\theta}$ is a uniform minimum variance unbiased estimator of $\theta$. The converse of this is not true.

Definition 16.5. An unbiased estimator $\widehat{\theta}$ is called an efficient estimator if it satisfies the Cramér-Rao lower bound with equality, that is
$$\mathrm{Var}\left(\widehat{\theta}\right) = \frac{1}{\mathrm{E}\left(\left(\frac{\partial \ln L(\theta)}{\partial \theta}\right)^2\right)}.$$
In view of the above theorem, it is easy to note that an efficient estimator of a parameter is always a uniform minimum variance unbiased estimator of the parameter. However, not every uniform minimum variance unbiased estimator of a parameter is efficient; in other words, not every uniform minimum variance unbiased estimator satisfies
$$\mathrm{Var}\left(\widehat{\theta}\right) = \frac{1}{\mathrm{E}\left(\left(\frac{\partial \ln L(\theta)}{\partial \theta}\right)^2\right)}.$$

Example 16.11. Let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from a distribution with density function
$$f(x; \theta) = \begin{cases} 3\,\theta\, x^2\, e^{-\theta x^3} & \text{if } 0 < x < \infty \\[4pt] 0 & \text{otherwise.} \end{cases}$$
What is the Cramér-Rao lower bound for the variance of an unbiased estimator of the parameter $\theta$?

Answer: Let $\widehat{\theta}$ be an unbiased estimator of $\theta$. The Cramér-Rao lower bound for the variance of $\widehat{\theta}$ is
$$\mathrm{Var}\left(\widehat{\theta}\right) \geq \frac{1}{-\mathrm{E}\left(\frac{\partial^2 \ln L(\theta)}{\partial \theta^2}\right)},$$
where $L(\theta)$ denotes the likelihood function of the given random sample $X_1, X_2, \ldots, X_n$. Since the likelihood function of the sample is
$$L(\theta) = \prod_{i=1}^{n} 3\,\theta\, x_i^2\, e^{-\theta x_i^3},$$
we get
$$\ln L(\theta) = n \ln \theta + \sum_{i=1}^{n} \ln\left(3\,x_i^2\right) - \theta \sum_{i=1}^{n} x_i^3,$$
and hence
$$\frac{\partial \ln L(\theta)}{\partial \theta} = \frac{n}{\theta} - \sum_{i=1}^{n} x_i^3, \qquad \frac{\partial^2 \ln L(\theta)}{\partial \theta^2} = -\frac{n}{\theta^2}.$$
Using this in the Cramér-Rao inequality, we get
$$\mathrm{Var}\left(\widehat{\theta}\right) \geq \frac{\theta^2}{n}.$$
Thus the Cramér-Rao lower bound for the variance of an unbiased estimator of $\theta$ is $\frac{\theta^2}{n}$.
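The bound in Example 16.11 can be probed numerically. If $Y = X^3$, then $Y$ is exponential with rate $\theta$, so samples from this density are easy to draw. The sketch below (parameter values arbitrary) compares the empirical variance of the estimator $n / \sum X_i^3$, which is the maximum likelihood estimator obtained by setting the score above to zero, with the lower bound $\theta^2/n$; since this estimator is slightly biased, it is not covered by the bound exactly, and the gap is visible.

```python
import numpy as np

# f(x; theta) = 3*theta*x^2*exp(-theta*x^3): if X has this density then
# Y = X^3 ~ Exp(rate theta), so it suffices to simulate Y directly.
rng = np.random.default_rng(1)
theta, n, reps = 2.0, 50, 100_000

y = rng.exponential(scale=1.0 / theta, size=(reps, n))  # y plays the role of x^3
theta_mle = n / y.sum(axis=1)                           # n / sum(x_i^3)

print("mean of estimator ~", theta_mle.mean())          # slightly above theta (small bias)
print("Var(theta_mle) ~", theta_mle.var())
print("Cramer-Rao bound theta^2/n =", theta**2 / n)     # empirical variance exceeds the bound
```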
Example 16.12. Let $X_1, X_2, \ldots, X_n$ be a random sample from a normal population with unknown mean $\mu$ and known variance $\sigma^2 > 0$. What is the maximum likelihood estimator of $\mu$? Is this maximum likelihood estimator an efficient estimator of $\mu$?

Answer: The probability density function of the population is
$$f(x; \mu) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{1}{2\sigma^2}(x-\mu)^2}.$$
Thus
$$\ln f(x; \mu) = -\frac{1}{2} \ln\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\,(x-\mu)^2$$
and hence
$$\ln L(\mu) = -\frac{n}{2} \ln\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2} \sum_{i=1}^{n} (x_i - \mu)^2.$$
Taking the derivative of $\ln L(\mu)$ with respect to $\mu$, we get
$$\frac{d \ln L(\mu)}{d\mu} = \frac{1}{\sigma^2} \sum_{i=1}^{n} (x_i - \mu).$$
Setting this derivative to zero and solving for $\mu$, we see that $\widehat{\mu} = \bar{X}$. The variance of $\bar{X}$ is given by
$$\mathrm{Var}\left(\bar{X}\right) = \mathrm{Var}\left(\frac{X_1 + X_2 + \cdots + X_n}{n}\right) = \frac{\sigma^2}{n}.$$
Next we determine the Cramér-Rao lower bound for the estimator $\bar{X}$. We already know that
$$\frac{d \ln L(\mu)}{d\mu} = \frac{1}{\sigma^2} \sum_{i=1}^{n} (x_i - \mu),$$
and hence
$$\frac{d^2 \ln L(\mu)}{d\mu^2} = -\frac{n}{\sigma^2}.$$
Therefore
$$\mathrm{E}\left(\frac{d^2 \ln L(\mu)}{d\mu^2}\right) = -\frac{n}{\sigma^2}$$
and
$$\frac{1}{-\mathrm{E}\left(\frac{d^2 \ln L(\mu)}{d\mu^2}\right)} = \frac{\sigma^2}{n}.$$
Thus
$$\mathrm{Var}\left(\bar{X}\right) = \frac{1}{-\mathrm{E}\left(\frac{d^2 \ln L(\mu)}{d\mu^2}\right)}$$
and $\bar{X}$ is an efficient estimator of $\mu$. Since every efficient estimator is a uniform minimum variance unbiased estimator, $\bar{X}$ is a uniform minimum variance unbiased estimator of $\mu$.

Example 16.13. Let $X_1, X_2, \ldots, X_n$ be a random sample from a normal population with known mean $\mu$ and unknown variance $\sigma^2 > 0$. What is the maximum likelihood estimator of
$\sigma^2$? Is this maximum likelihood estimator an efficient estimator of $\sigma^2$?

Answer: Writing $\theta = \sigma^2$, the log-likelihood function is
$$\ln L(\theta) = -\frac{n}{2} \ln(2\pi\theta) - \frac{1}{2\theta} \sum_{i=1}^{n} (x_i - \mu)^2,$$
and setting $\frac{d \ln L(\theta)}{d\theta} = 0$ yields the maximum likelihood estimator
$$\widehat{\theta} = \frac{1}{n} \sum_{i=1}^{n} (X_i - \mu)^2.$$
Since
$$\frac{\partial^2 \ln L(\theta)}{\partial \theta^2} = \frac{n}{2\theta^2} - \frac{1}{\theta^3} \sum_{i=1}^{n} (x_i - \mu)^2,$$
taking expectations and using $\mathrm{E}\left(\sum_{i=1}^{n} (X_i - \mu)^2\right) = n\theta$ gives
$$-\mathrm{E}\left(\frac{\partial^2 \ln L(\theta)}{\partial \theta^2}\right) = -\frac{n}{2\theta^2} + \frac{1}{\theta^3}\, n\theta = \frac{n}{2\theta^2}.$$
Hence the Cramér-Rao lower bound is
$$\frac{1}{-\mathrm{E}\left(\frac{\partial^2 \ln L(\theta)}{\partial \theta^2}\right)} = \frac{2\theta^2}{n} = \frac{2\sigma^4}{n}.$$
One can verify that $\mathrm{Var}\left(\widehat{\theta}\right) = \frac{2\sigma^4}{n}$, so the maximum likelihood estimator attains the bound and is an efficient estimator of $\sigma^2$.

Example 16.14. Let $X_1, X_2, \ldots, X_n$ be a random sample from a normal population with mean $\mu$ and variance $\sigma^2 > 0$. Does the unbiased estimator $S^2$ of $\sigma^2$ attain the Cramér-Rao lower bound?

Answer: It is known that $\mathrm{Var}\left(S^2\right) = \frac{2\sigma^4}{n-1}$. Since
$$\frac{2\sigma^4}{n-1} = \mathrm{Var}\left(S^2\right) > \frac{1}{-\mathrm{E}\left(\frac{\partial^2 \ln L(\theta)}{\partial \theta^2}\right)} = \frac{2\sigma^4}{n},$$
this shows that $S^2$ cannot attain the Cramér-Rao lower bound.
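A quick simulation makes the gap visible; this is a rough check with arbitrary $\mu$, $\sigma$ and $n$, comparing the empirical variance of $S^2$ with $2\sigma^4/(n-1)$ and with the bound $2\sigma^4/n$.

```python
import numpy as np

# Empirical variance of the unbiased sample variance S^2 versus the
# Cramer-Rao lower bound 2*sigma^4/n for normal data.
rng = np.random.default_rng(2)
mu, sigma, n, reps = 0.0, 1.5, 20, 200_000

x = rng.normal(mu, sigma, size=(reps, n))
s2 = x.var(axis=1, ddof=1)               # the unbiased S^2

print("Var(S^2) ~", s2.var())
print("theory 2*sigma^4/(n-1) =", 2 * sigma**4 / (n - 1))
print("CR bound 2*sigma^4/n   =", 2 * sigma**4 / n)   # strictly smaller
```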
The disadvantages of the Cramér-Rao lower bound approach are the following: (1) not every density function $f(x; \theta)$ satisfies the assumptions of the Cramér-Rao theorem, and (2) not every allowable estimator attains the Cramér-Rao lower bound. In either of these situations, one does not know whether an estimator is a uniform minimum variance unbiased estimator or not.

16.4. Sufficient Estimator

In many situations, we cannot easily find the distribution of the estimator $\widehat{\theta}$ of a parameter $\theta$, even though we know the distribution of the population. Therefore, we have no way to know whether our estimator $\widehat{\theta}$ is unbiased or biased. Hence, we need some other criterion for judging the quality of an estimator. Sufficiency is one such criterion.

Recall that an estimator of a population parameter is a function of the sample values that does not contain the parameter. An estimator summarizes the information found in the sample about the parameter. If an estimator summarizes just as much information about the parameter being estimated as the sample does, then the estimator is called a sufficient estimator.

Definition 16.6. Let $X \sim f(x; \theta)$ be a population and let $X_1, X_2, \ldots, X_n$ be a random sample of size $n$ from this population $X$. An estimator $\widehat{\theta}$ of the parameter $\theta$ is said to be a sufficient estimator of $\theta$ if the conditional distribution of the sample given the estimator $\widehat{\theta}$ does not depend on the parameter $\theta$.

Example 16.15. If $X_1, X_2, \ldots, X_n$ is a random sample from the distribution with probability density function
$$f(x; \theta) = \begin{cases} \theta^x\,(1-\theta)^{1-x} & \text{if } x = 0, 1 \\[4pt] 0 & \text{elsewhere,} \end{cases}$$
where $0 < \theta < 1$, show that $Y = \sum_{i=1}^{n} X_i$ is a sufficient statistic for $\theta$.

Answer: First, we find the distribution of the sample. This is given by
$$f(x_1, x_2, \ldots, x_n) = \prod_{i=1}^{n} \theta^{x_i}\,(1-\theta)^{1-x_i} = \theta^{y}\,(1-\theta)^{n-y},$$
where $y = \sum_{i=1}^{n} x_i$. Since each $X_i \sim BER(\theta)$, we have
$$Y = \sum_{i=1}^{n} X_i \sim BIN(n, \theta).$$
If $X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n$ and $Y = y$, then
$$f(x_1, x_2, \ldots, x_n, y) = \begin{cases} f(x_1, x_2, \ldots, x_n) & \text{if } y = \sum_{i=1}^{n} x_i \\[4pt] 0 & \text{if } y \neq \sum_{i=1}^{n} x_i. \end{cases}$$
Therefore, the probability density function of $Y$ is given by
$$g(y) = \binom{n}{y}\, \theta^{y}\,(1-\theta)^{n-y}.$$
Now, we find the conditional density of the sample given the estimator $Y$, that is
$$f(x_1, x_2, \ldots, x_n / Y = y) = \frac{f(x_1, x_2, \ldots, x_n, y)}{g(y)} = \frac{\theta^{y}\,(1-\theta)^{n-y}}{\binom{n}{y}\, \theta^{y}\,(1-\theta)^{n-y}} = \frac{1}{\binom{n}{y}}.$$
Hence, the conditional density of the sample given the statistic $Y$ is independent of the parameter $\theta$. Therefore, by definition, $Y$ is a sufficient statistic.
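Sufficiency can also be seen empirically: conditioned on $Y = y$, every 0-1 sequence with $y$ ones should be equally likely, with probability $1/\binom{n}{y}$, no matter what $\theta$ is. The sketch below (with arbitrary $n$, $y$ and two arbitrary values of $\theta$) estimates the conditional probability of one particular sequence under each $\theta$.

```python
import numpy as np
from math import comb

# The conditional distribution of a Bernoulli sample given Y = sum(X_i)
# is free of theta: estimate P(X = target | Y = y) for two theta values.
rng = np.random.default_rng(3)
n, y, reps = 5, 2, 1_000_000
target = np.array([1, 1, 0, 0, 0])      # one particular arrangement with y ones

for theta in (0.3, 0.8):
    x = (rng.random((reps, n)) < theta).astype(int)
    given_y = x[x.sum(axis=1) == y]                    # keep samples with Y = y
    hits = np.all(given_y == target, axis=1).mean()
    print(f"theta={theta}: P(target | Y={y}) ~ {hits:.4f}")

print("1 / C(n, y) =", 1 / comb(n, y))                 # = 0.1 for n = 5, y = 2
```

Both estimates come out near $1/\binom{5}{2} = 0.1$, independently of $\theta$.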
Example 16.16. If $X_1, X_2, \ldots, X_n$ is a random sample from the distribution with probability density function
$$f(x; \theta) = \begin{cases} e^{-(x-\theta)} & \text{if } \theta < x < \infty \\[4pt] 0 & \text{elsewhere,} \end{cases}$$
where $-\infty < \theta < \infty$, what is the maximum likelihood estimator of $\theta$? Is this maximum likelihood estimator a sufficient estimator of $\theta$?

Answer: We have seen in Chapter 15 that the maximum likelihood estimator of $\theta$ is $Y = X_{(1)}$, the first order statistic of the sample. Let us find the probability density of this statistic, which is given by
$$\begin{aligned} g(y) &= \frac{n!}{(n-1)!}\, [F(y)]^{0}\, f(y)\, [1 - F(y)]^{n-1} \\ &= n\, f(y)\, [1 - F(y)]^{n-1} \\ &= n\, e^{-(y-\theta)} \left( e^{-(y-\theta)} \right)^{n-1} \\ &= n\, e^{n\theta}\, e^{-ny}. \end{aligned}$$
The probability density of the random sample is
$$f(x_1, x_2, \ldots, x_n) = \prod_{i=1}^{n} e^{-(x_i-\theta)} = e^{n\theta}\, e^{-n\bar{x}},$$
where $n\bar{x} = \sum_{i=1}^{n} x_i$. Let $A$ denote the event $(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n)$ and let $B$ denote the event $(Y = y)$. Then $A \subset B$ and therefore $A \cap B = A$. Now, we find the conditional density of the sample given the estimator $Y$:
$$\begin{aligned} f(x_1, x_2, \ldots, x_n / Y = y) &= P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n / Y = y) \\ &= P(A/B) = \frac{P(A \cap B)}{P(B)} = \frac{P(A)}{P(B)} \\ &= \frac{f(x_1, x_2, \ldots, x_n)}{g(y)} \\ &= \frac{e^{n\theta}\, e^{-n\bar{x}}}{n\, e^{n\theta}\, e^{-ny}} = \frac{e^{-n\bar{x}}}{n\, e^{-ny}}. \end{aligned}$$
Hence, the conditional density of the sample given the statistic $Y$ is independent of the parameter $\theta$. Therefore, by definition, $Y$ is a sufficient statistic.
We have seen that to verify whether an estimator is sufficient or not, one has to examine the conditional density of the sample given the estimator. To compute this conditional density one has to use the density of the estimator, which is not always easy to find. Therefore, verifying the sufficiency of an estimator using the definition is not always easy. The following factorization theorem of Fisher and Neyman helps to decide when an estimator is sufficient.

Theorem 16.3. Let $X_1, X_2, \ldots, X_n$ denote a random sample with joint probability density function $f(x_1, x_2, \ldots, x_n; \theta)$, which depends on the population parameter $\theta$. The estimator $\widehat{\theta}$ is sufficient for $\theta$ if and only if
$$f(x_1, x_2, \ldots, x_n; \theta) = \phi\left(\widehat{\theta}, \theta\right)\, h(x_1, x_2, \ldots, x_n),$$
where $\phi$ depends on $x_1, x_2, \ldots, x_n$ only through $\widehat{\theta}$, and $h(x_1, x_2, \ldots, x_n)$ does not depend on $\theta$.

Now we give two examples to illustrate the factorization theorem.
Example 16.17. Let $X_1, X_2, \ldots, X_n$ be a random sample from a distribution with density function
$$f(x; \lambda) = \begin{cases} \frac{\lambda^x\, e^{-\lambda}}{x!} & \text{if } x = 0, 1, 2, \ldots \\[4pt] 0 & \text{elsewhere,} \end{cases}$$
where $\lambda > 0$ is a parameter. Find the maximum likelihood estimator of $\lambda$ and show that it is a sufficient estimator of the parameter $\lambda$.

Answer: First, we find the density of the sample, that is the likelihood function of the sample. It is given by
$$L(\lambda) = \prod_{i=1}^{n} f(x_i; \lambda) = \prod_{i=1}^{n} \frac{\lambda^{x_i}\, e^{-\lambda}}{x_i!} = \frac{\lambda^{n\bar{x}}\, e^{-n\lambda}}{\prod_{i=1}^{n} (x_i!)}.$$
Taking the logarithm of the likelihood function, we get
$$\ln L(\lambda) = n\bar{x} \ln \lambda - n\lambda - \ln \prod_{i=1}^{n} (x_i!).$$
Therefore
$$\frac{d}{d\lambda} \ln L(\lambda) = \frac{n\bar{x}}{\lambda} - n.$$
Setting this derivative to zero and solving for $\lambda$, we get $\lambda = \bar{x}$. The second derivative test assures us that this is a maximum. Hence, the maximum likelihood estimator of $\lambda$ is the sample mean $\bar{X}$.

Next, we show that $\bar{X}$ is sufficient, using the factorization theorem of Fisher and Neyman. We factor the joint density of the sample as
$$L(\lambda) = \frac{\lambda^{n\bar{x}}\, e^{-n\lambda}}{\prod_{i=1}^{n} (x_i!)} = \left[ \lambda^{n\bar{x}}\, e^{-n\lambda} \right] \cdot \frac{1}{\prod_{i=1}^{n} (x_i!)} = \phi\left(\bar{X}, \lambda\right)\, h(x_1, x_2, \ldots, x_n).$$
Therefore, the estimator $\bar{X}$ is a sufficient estimator of $\lambda$.
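The factorization above is easy to test numerically: for any counts $x$ and any $\lambda$, the joint Poisson pmf should equal $\phi(\bar{x}, \lambda)\, h(x)$ exactly. A minimal sketch, with arbitrary test values:

```python
from math import factorial, prod, exp, isclose

# Check the Fisher-Neyman factorization L(lambda) = phi(xbar, lambda) * h(x)
# for a Poisson sample (Example 16.17).
def joint_pmf(x, lam):
    return prod(lam**xi * exp(-lam) / factorial(xi) for xi in x)

def phi(xbar, lam, n):
    return lam**(n * xbar) * exp(-n * lam)            # depends on x only through xbar

def h(x):
    return 1.0 / prod(factorial(xi) for xi in x)      # free of lambda

x = [3, 0, 2, 5, 1]
xbar = sum(x) / len(x)
for lam in (0.7, 2.0, 4.5):
    print(lam, isclose(joint_pmf(x, lam), phi(xbar, lam, len(x)) * h(x)))  # True
```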
Example 16.18. Let $X_1, X_2, \ldots, X_n$ be a random sample from a normal distribution with density function
$$f(x; \mu) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}(x-\mu)^2}, \qquad -\infty < x < \infty,$$
where $-\infty < \mu < \infty$ is a parameter. Find the maximum likelihood estimator of $\mu$ and show that it is a sufficient estimator.

Answer: We know that the maximum likelihood estimator of $\mu$ is the sample mean $\bar{X}$. Next, we show that this maximum likelihood estimator $\bar{X}$ is a sufficient estimator of $\mu$. The joint density of the sample is given by
$$\begin{aligned} f(x_1, x_2, \ldots, x_n; \mu) &= \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}(x_i-\mu)^2} \\ &= \left(\frac{1}{\sqrt{2\pi}}\right)^{n} e^{-\frac{1}{2} \sum_{i=1}^{n} (x_i-\mu)^2} \\ &= \left(\frac{1}{\sqrt{2\pi}}\right)^{n} e^{-\frac{1}{2} \sum_{i=1}^{n} \left[(x_i-\bar{x}) + (\bar{x}-\mu)\right]^2} \\ &= \left(\frac{1}{\sqrt{2\pi}}\right)^{n} e^{-\frac{1}{2} \sum_{i=1}^{n} \left[(x_i-\bar{x})^2 + 2(x_i-\bar{x})(\bar{x}-\mu) + (\bar{x}-\mu)^2\right]} \\ &= \left(\frac{1}{\sqrt{2\pi}}\right)^{n} e^{-\frac{1}{2} \left[\sum_{i=1}^{n} (x_i-\bar{x})^2 + n(\bar{x}-\mu)^2\right]} \qquad \text{since } \sum_{i=1}^{n} (x_i - \bar{x}) = 0 \\ &= \left[ e^{-\frac{n}{2}(\bar{x}-\mu)^2} \right] \cdot \left[ \left(\frac{1}{\sqrt{2\pi}}\right)^{n} e^{-\frac{1}{2} \sum_{i=1}^{n} (x_i-\bar{x})^2} \right]. \end{aligned}$$
Hence, by the factorization theorem, $\bar{X}$ is a sufficient estimator of the population mean.

Note that the probability density function of Example 16.17, namely
$$f(x; \lambda) = \begin{cases} \frac{\lambda^x\, e^{-\lambda}}{x!} & \text{if } x = 0, 1, 2, \ldots \\[4pt] 0 & \text{elsewhere,} \end{cases}$$
can be written as
$$f(x; \lambda) = e^{\{x \ln \lambda - \lambda - \ln x!\}} \qquad \text{for } x = 0, 1, 2, \ldots.$$
This density function is of the form
$$f(x; \lambda) = e^{\{K(x)\,A(\lambda) + S(x) + B(\lambda)\}}.$$
Similarly, the probability density function of Example 16.18, namely
$$f(x; \mu) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}(x-\mu)^2},$$
can also be written as
$$f(x; \mu) = e^{\{x\mu - \frac{x^2}{2} - \frac{\mu^2}{2} - \frac{1}{2}\ln(2\pi)\}}.$$
This probability density function is of the form
$$f(x; \mu) = e^{\{K(x)\,A(\mu) + S(x) + B(\mu)\}}.$$
We have also seen that in both examples the sufficient estimator was the sample mean $\bar{X}$, which can be written as $\frac{1}{n} \sum_{i=1}^{n} X_i$.

Our next theorem gives a general result in this direction. The following theorem is known as the Pitman-Koopman theorem.

Theorem 16.4. Let $X_1, X_2, \ldots, X_n$ be a random sample from a distribution with probability density function of the exponential form
$$f(x; \theta) = e^{\{K(x)\,A(\theta) + S(x) + B(\theta)\}}$$
on a support free of $\theta$. Then the statistic $\sum_{i=1}^{n} K(X_i)$ is a sufficient statistic for the parameter $\theta$.

Proof: The joint density of the sample is
$$\begin{aligned} f(x_1, x_2, \ldots, x_n; \theta) &= \prod_{i=1}^{n} f(x_i; \theta) = \prod_{i=1}^{n} e^{\{K(x_i)\,A(\theta) + S(x_i) + B(\theta)\}} \\ &= e^{\left\{ A(\theta) \sum_{i=1}^{n} K(x_i) + \sum_{i=1}^{n} S(x_i) + n\,B(\theta) \right\}} \\ &= e^{\left\{ A(\theta) \sum_{i=1}^{n} K(x_i) + n\,B(\theta) \right\}}\; e^{\left\{ \sum_{i=1}^{n} S(x_i) \right\}}. \end{aligned}$$
Hence by the factorization theorem the statistic $\sum_{i=1}^{n} K(X_i)$ is a sufficient statistic for the parameter $\theta$. This completes the proof.

Example 16.19. Let $X_1, X_2, \ldots, X_n$ be a random sample from a distribution with density function
$$f(x; \theta) = \begin{cases} \theta\, x^{\theta - 1} & \text{for } 0 < x < 1 \\[4pt] 0 & \text{otherwise,} \end{cases}$$
where $\theta > 0$ is a parameter. Using the Pitman-Koopman theorem, find a sufficient estimator of $\theta$.

Answer: The Pitman-Koopman theorem says that if the probability density function can be expressed in the form
$$f(x; \theta) = e^{\{K(x)\,A(\theta) + S(x) + B(\theta)\}},$$
then $\sum_{i=1}^{n} K(X_i)$ is a sufficient statistic for $\theta$. The given population density can be written as
$$f(x; \theta) = \theta\, x^{\theta-1} = e^{\ln\left[\theta\, x^{\theta-1}\right]} = e^{\{\ln \theta + (\theta - 1) \ln x\}} = e^{\{\theta \ln x - \ln x + \ln \theta\}}.$$
Thus
$$K(x) = \ln x, \qquad A(\theta) = \theta, \qquad S(x) = -\ln x, \qquad B(\theta) = \ln \theta.$$
Hence, by the Pitman-Koopman theorem,
$$\sum_{i=1}^{n} K(X_i) = \sum_{i=1}^{n} \ln X_i = \ln \prod_{i=1}^{n} X_i.$$
Thus $\ln \prod_{i=1}^{n} X_i$ is a sufficient statistic for $\theta$.

Remark 16.1. Notice that $\prod_{i=1}^{n} X_i$ is also a sufficient statistic of $\theta$, since knowing $\ln \prod_{i=1}^{n} X_i$, we also know $\prod_{i=1}^{n} X_i$.
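As a concrete check of Theorem 16.4 for the density in Example 16.19, the sketch below verifies numerically that $\theta\, x^{\theta-1} = e^{K(x)A(\theta) + S(x) + B(\theta)}$ with the pieces identified above, and computes the sufficient statistic $\sum \ln X_i$ for a sample; the test points and sample values are arbitrary.

```python
import math

# Exponential-family pieces for f(x; theta) = theta * x^(theta - 1), 0 < x < 1.
K = lambda x: math.log(x)
A = lambda t: t
S = lambda x: -math.log(x)
B = lambda t: math.log(t)

def density(x, t):
    return t * x ** (t - 1)

# the exponential form reproduces the density at arbitrary test points
for x, t in [(0.3, 2.0), (0.8, 0.5), (0.05, 4.0)]:
    assert math.isclose(density(x, t), math.exp(K(x) * A(t) + S(x) + B(t)))

sample = [0.21, 0.74, 0.48, 0.93, 0.36]
print("sufficient statistic sum(ln X_i) =", sum(K(x) for x in sample))
```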
Example 16.20. Let $X_1, X_2, \ldots, X_n$ be a random sample from a distribution with density function
$$f(x; \theta) = \begin{cases} \frac{1}{\theta}\, e^{-\frac{x}{\theta}} & \text{for } 0 < x < \infty \\[4pt] 0 & \text{otherwise,} \end{cases}$$
where $0 < \theta < \infty$ is a parameter. Find a sufficient estimator of $\theta$.

Answer: First, we rewrite the population density in the exponential form. That is
$$f(x; \theta) = \frac{1}{\theta}\, e^{-\frac{x}{\theta}} = e^{\ln\left[\frac{1}{\theta}\, e^{-\frac{x}{\theta}}\right]} = e^{\left\{-\frac{x}{\theta} - \ln \theta\right\}}.$$
Hence
$$K(x) = x, \qquad A(\theta) = -\frac{1}{\theta}, \qquad S(x) = 0, \qquad B(\theta) = -\ln \theta.$$
Hence, by the Pitman-Koopman theorem,
$$\sum_{i=1}^{n} K(X_i) = \sum_{i=1}^{n} X_i = n\bar{X}.$$
Thus, $n\bar{X}$ is a sufficient statistic for $\theta$. Since knowing $n\bar{X}$ we also know $\bar{X}$, the estimator $\bar{X}$ is also a sufficient estimator of $\theta$.

Example 16.21. Let $X_1, X_2, \ldots, X_n$ be a random sample from a distribution with density function
$$f(x; \theta) = \begin{cases} e^{-(x-\theta)} & \text{for } \theta < x < \infty \\[4pt] 0 & \text{otherwise,} \end{cases}$$
where $-\infty < \theta < \infty$ is a parameter. Can the Pitman-Koopman theorem be used to find a sufficient statistic for $\theta$?

Answer: No. We cannot use the Pitman-Koopman theorem to find a sufficient statistic for $\theta$, since the domain where the population density is nonzero is not free of $\theta$.

Next, we present the connection between the maximum likelihood estimator and the sufficient estimator. If there is a sufficient estimator for the parameter $\theta$ and if the maximum likelihood estimator of this $\theta$ is unique, then the maximum likelihood estimator is a function of the sufficient estimator. That is
$$\widehat{\theta}_{ML} = \psi\left(\widehat{\theta}_{S}\right),$$
where $\psi$ is a real valued function, $\widehat{\theta}_{ML}$ is the maximum likelihood estimator of $\theta$, and $\widehat{\theta}_{S}$ is the sufficient estimator of $\theta$.
Similarly, a connection can be established between the uniform minimum variance unbiased estimator and the sufficient estimator of a parameter $\theta$. If there is a sufficient estimator for the parameter $\theta$ and if the uniform minimum variance unbiased estimator of this $\theta$ is unique, then the uniform minimum variance unbiased estimator is a function of the sufficient estimator. That is
$$\widehat{\theta}_{MVUE} = \eta\left(\widehat{\theta}_{S}\right),$$
where $\eta$ is a real valued function, $\widehat{\theta}_{MVUE}$ is the uniform minimum variance unbiased estimator of $\theta$, and $\widehat{\theta}_{S}$ is the sufficient estimator of $\theta$.

Finally, we may ask: "If there are sufficient estimators, why are there not necessary estimators?" In fact, there are. Dynkin (1951) gave the following definition.

Definition 16.7. An estimator is said to be a necessary estimator if it can be written as a function of every sufficient estimator.

16.5. Consistent Estimator

Let $X_1, X_2, \ldots, X_n$ be a random sample from a population $X$ with density $f(x; \theta)$. Let $\widehat{\theta}$ be an estimator of $\theta$ based on the sample of size $n$. Obviously the estimator depends on the sample size $n$. In order to reflect the dependency of $\widehat{\theta}$ on $n$, we denote $\widehat{\theta}$ as $\widehat{\theta}_n$.

Definition 16.8. Let $X_1, X_2, \ldots, X_n$ be a random sample from a population $X$ with density $f(x; \theta)$. A sequence of estimators $\{\widehat{\theta}_n\}$ of $\theta$ is said to be consistent for $\theta$ if and only if the sequence $\{\widehat{\theta}_n\}$ converges in probability to $\theta$; that is, for any $\epsilon > 0$,
$$\lim_{n \to \infty} P\left(\left|\widehat{\theta}_n - \theta\right| \geq \epsilon\right) = 0.$$
Note that consistency is actually a concept relating to a sequence of estimators $\{\widehat{\theta}_n\}_{n=n_0}^{\infty}$, but we usually speak of the "consistency of $\widehat{\theta}_n$" for simplicity. Further, consistency is a large sample property of an estimator.

The following theorem states that if the mean squared error goes to zero as $n$ goes to infinity, then $\{\widehat{\theta}_n\}$ converges in probability to $\theta$.

Theorem 16.5. Let $X_1, X_2, \ldots, X_n$ be a random sample from a population $X$ with density $f(x; \theta)$, and let $\{\widehat{\theta}_n\}$ be a sequence of estimators of $\theta$. If
$$\lim_{n \to \infty} \mathrm{E}\left(\left(\widehat{\theta}_n - \theta\right)^2\right) = 0,$$
then the sequence $\{\widehat{\theta}_n\}$ is consistent for $\theta$.
As an illustration, let $X_1, X_2, \ldots, X_n$ be a random sample from a distribution with density function
$$f(x; \theta) = \begin{cases} \frac{1}{\theta}\, e^{-\frac{x}{\theta}} & \text{for } 0 < x < \infty \\[4pt] 0 & \text{otherwise,} \end{cases}$$
where $0 < \theta < \infty$ is a parameter, and let us use the moment method to find a consistent estimator of $\theta$. Let $U(x) = x$. Then
$$\psi(\theta) = \mathrm{E}(U(X)) = \mathrm{E}(X) = \theta.$$
The function $\psi(\theta) = \theta$ is one-to-one and continuous, with inverse $\psi^{-1}(x) = x$. Thus the moment estimator is
$$\widehat{\theta}_n = \psi^{-1}\left(\frac{1}{n} \sum_{i=1}^{n} U(X_i)\right) = \frac{1}{n} \sum_{i=1}^{n} X_i = \bar{X}.$$
Since $\mathrm{E}\left(\left(\bar{X} - \theta\right)^2\right) = \mathrm{Var}\left(\bar{X}\right) = \frac{\theta^2}{n} \to 0$ as $n \to \infty$, Theorem 16.5 shows that
$$\widehat{\theta}_n = \bar{X}$$
is a consistent estimator of $\theta$.

Since consistency is a large sample property of an estimator, some statisticians suggest that consistency should not be used alone for judging the goodness of an estimator; rather, it should be used along with other criteria.
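The shrinking mean squared error that drives consistency here is easy to watch numerically; a minimal sketch with an arbitrary $\theta$:

```python
import numpy as np

# Empirical MSE of the sample mean as an estimator of theta for
# exponential data; it should shrink like theta^2 / n.
rng = np.random.default_rng(4)
theta, reps = 3.0, 10_000

for n in (10, 100, 1000):
    xbar = rng.exponential(scale=theta, size=(reps, n)).mean(axis=1)
    mse = np.mean((xbar - theta) ** 2)
    print(f"n={n:>5}  MSE ~ {mse:.5f}   theta^2/n = {theta**2 / n:.5f}")
```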
16.6. Review Exercises

1. Let $T_1$ and $T_2$ be estimators of a population parameter $\theta$ based upon the same random sample. If $T_i \sim N\left(\theta, \sigma_i^2\right)$ for $i = 1, 2$, and if $T = b\,T_1 + (1-b)\,T_2$, then for what value of $b$ is $T$ a minimum variance unbiased estimator of $\theta$?

2. Let $X_1, X_2, \ldots, X_n$ be a random sample from a distribution with density function
$$f(x; \theta) = \frac{1}{2\theta}\, e^{-\frac{|x|}{\theta}}, \qquad -\infty < x < \infty,$$
where $0 < \theta$ is a parameter. What is the expected value of the maximum likelihood estimator of $\theta$? Is this estimator unbiased?

3. Let $X_1, X_2, \ldots, X_n$ be a random sample from a distribution with density function
$$f(x; \theta) = \frac{1}{2\theta}\, e^{-\frac{|x|}{\theta}}, \qquad -\infty < x < \infty,$$
where $0 < \theta$ is a parameter. Is the maximum likelihood estimator an efficient estimator of $\theta$?

4. A random sample $X_1, X_2, \ldots, X_n$ of size $n$ is selected from a normal distribution with variance $\sigma^2$. Let $S^2$ be the unbiased estimator of $\sigma^2$, and let $T$ be the maximum likelihood estimator of $\sigma^2$. If $20\,T - 19\,S^2 = 0$, then what is the sample size?

5. Suppose $X$ and $Y$ are independent random variables, each with density function
$$f(x) = \begin{cases} 2\,\theta^2\, x & \text{for } 0 < x < \frac{1}{\theta} \\[4pt] 0 & \text{otherwise.} \end{cases}$$
If $k\,(X + 2Y)$ is an unbiased estimator of $\theta^{-1}$, then what is the value of $k$?

6. An object of length $c$ is measured by two persons using the same instrument. The instrument error has a normal distribution with mean 0 and variance 1. The first person measures the object 25 times, and the average of the measurements is $\bar{X} = 12$. The second person measures the object 36 times, and the average of the measurements is $\bar{Y} = 12.8$. To estimate $c$ we use the weighted average $a\bar{X} + b\bar{Y}$ as an estimator. Determine the constants $a$ and $b$ such that $a\bar{X} + b\bar{Y}$ is the minimum variance unbiased estimator of $c$, and then calculate the minimum variance unbiased estimate of $c$.

7. Let $X_1, X_2, \ldots, X_n$ be a random sample from a distribution with probability density function
$$f(x) = \begin{cases} 3\,\theta\, x^2\, e^{-\theta x^3} & \text{for } 0 < x < \infty \\[4pt] 0 & \text{otherwise,} \end{cases}$$
where $\theta > 0$ is an unknown parameter. Find a sufficient statistic for $\theta$.

8. Let $X_1, X_2, \ldots, X_n$ be a random sample from a Weibull distribution with probability density function
$$f(x) = \begin{cases} \frac{\beta}{\theta} \left(\frac{x}{\theta}\right)^{\beta - 1} e^{-\left(\frac{x}{\theta}\right)^{\beta}} & \text{if } x > 0 \\[4pt] 0 & \text{otherwise,} \end{cases}$$
where $\theta > 0$ and $\beta > 0$ are parameters. Find a sufficient statistic for $\theta$ with $\beta$ known, say $\beta = 2$. If $\beta$ is unknown, can you find a single sufficient statistic for $\theta$?

9. Let $X_1, X_2$ be a random sample of size 2 from a population with probability density
$$f(x; \theta) = \begin{cases} \frac{1}{\theta}\, e^{-\frac{x}{\theta}} & \text{if } 0 < x < \infty \\[4pt] 0 & \text{otherwise,} \end{cases}$$
where $\theta > 0$ is an unknown parameter. If $Y = \sqrt{X_1 X_2}$, then what should be the value of the constant $k$ such that $kY$ is an unbiased estimator of the parameter $\theta$?
10. Let $X_1, X_2, \ldots, X_n$ be a random sample from a population with probability density function
$$f(x; \theta) = \begin{cases} \frac{1}{\theta} & \text{if } 0 < x < \theta \\[4pt] 0 & \text{otherwise,} \end{cases}$$
where $\theta > 0$ is an unknown parameter. If $\bar{X}$ denotes the sample mean, then what should be the value of the constant $k$ such that $k\bar{X}$ is an unbiased estimator of $\theta$?

11. Let $X_1, X_2, \ldots, X_n$ be a random sample from a population with probability density function
$$f(x; \theta) = \begin{cases} \frac{1}{\theta} & \text{if } 0 < x < \theta \\[4pt] 0 & \text{otherwise,} \end{cases}$$
where $\theta > 0$ is an unknown parameter. If $X_{med}$ denotes the sample median, then what should be the value of the constant $k$ such that $k\,X_{med}$ is an unbiased estimator of $\theta$?

12. What do you understand by an unbiased estimator of a parameter $\theta$? What is the basic principle of the maximum likelihood estimation of a parameter $\theta$? What is the basic principle of the Bayesian estimation of a parameter $\theta$? What is the main difference between the Bayesian method and the likelihood method?

13. Let $X_1, X_2, \ldots, X_n$ be a random sample from a population $X$ with density function
$$f(x; \theta) = \begin{cases} \frac{\theta}{(1+x)^{\theta+1}} & \text{for } 0 \leq x < \infty \\[4pt] 0 & \text{otherwise,} \end{cases}$$
where $\theta > 0$ is an unknown parameter. What is a sufficient statistic for the parameter $\theta$?

14. Let $X_1, X_2, \ldots, X_n$ be a random sample from a population $X$ with density function
$$f(x; \theta) = \begin{cases} \frac{x}{\theta^2}\, e^{-\frac{x^2}{2\theta^2}} & \text{for } 0 \leq x < \infty \\[4pt] 0 & \text{otherwise,} \end{cases}$$
where $\theta$ is an unknown parameter. What is a sufficient statistic for the parameter $\theta$?

15. Let $X_1, X_2, \ldots, X_n$ be a random sample from a distribution with density function
$$f(x; \theta) = \begin{cases} e^{-(x-\theta)} & \text{for } \theta < x < \infty \\[4pt] 0 & \text{otherwise,} \end{cases}$$
where $-\infty < \theta < \infty$ is a parameter. What is the maximum likelihood estimator of $\theta$? Find a sufficient statistic of the parameter $\theta$.

16. Let $X_1, X_2, \ldots, X_n$ be a random sample from a distribution with density function
$$f(x; \theta) = \begin{cases} e^{-(x-\theta)} & \text{for } \theta < x < \infty \\[4pt] 0 & \text{otherwise,} \end{cases}$$
where $-\infty < \theta < \infty$ is a parameter. Are the estimators $X_{(1)}$ and $\bar{X} - 1$ unbiased estimators of $\theta$? Which one is more efficient than the other?

17. Let $X_1, X_2, \ldots, X_n$ be a random sample from a population $X$ with density function
$$f(x; \theta) = \begin{cases} \theta\, x^{\theta-1} & \text{for } 0 \leq x < 1 \\[4pt] 0 & \text{otherwise,} \end{cases}$$
where $\theta > 1$ is an unknown parameter. What is a sufficient statistic for the parameter $\theta$?

18. Let $X_1, X_2, \ldots, X_n$ be a random sample from a population $X$ with density function
$$f(x; \theta) = \begin{cases} \theta\, \alpha\, x^{\alpha-1}\, e^{-\theta x^{\alpha}} & \text{for } 0 \leq x < \infty \\[4pt] 0 & \text{otherwise,} \end{cases}$$
where $\theta > 0$ and $\alpha > 0$ are parameters. What is a sufficient statistic for the parameter $\theta$ for a fixed $\alpha$?

19. Let $X_1, X_2, \ldots, X_n$ be a random sample from a population $X$ with density function
$$f(x; \theta) = \begin{cases} \theta\, \alpha^{\theta}\, x^{-(\theta+1)} & \text{for } \alpha < x < \infty \\[4pt] 0 & \text{otherwise,} \end{cases}$$
where $\theta > 0$ and $\alpha > 0$ are parameters. What is a sufficient statistic for the parameter $\theta$ for a fixed $\alpha$?

20. Let $X_1, X_2, \ldots, X_n$ be a random sample from a population $X$ with density function
$$f(x; \theta) = \begin{cases} \binom{m}{x}\, \theta^x\, (1-\theta)^{m-x} & \text{for } x = 0, 1, 2, \ldots, m \\[4pt] 0 & \text{otherwise,} \end{cases}$$
where $0 < \theta < 1$ is a parameter. Show that $\frac{\bar{X}}{m}$ is a uniform minimum variance unbiased estimator of $\theta$ for a fixed $m$.

21. Let $X_1, X_2, \ldots, X_n$ be a random sample from a population $X$ with density function
$$f(x; \theta) = \begin{cases} \theta\, x^{\theta-1} & \text{for } 0 < x < 1 \\[4pt] 0 & \text{otherwise,} \end{cases}$$
where $\theta > 1$ is a parameter. Show that $-\frac{1}{n} \sum_{i=1}^{n} \ln(X_i)$ is a uniform minimum variance unbiased estimator of $\frac{1}{\theta}$.
22. Let $X_1, X_2, \ldots, X_n$ be a random sample from a uniform population $X$ on the interval $[0, \theta]$, where $\theta > 0$ is a parameter. Is the maximum likelihood estimator $\widehat{\theta} = X_{(n)}$ of $\theta$ a consistent estimator of $\theta$?

23. Let $X_1, X_2, \ldots, X_n$ be a random sample from a population $X \sim POI(\lambda)$, where $\lambda > 0$ is a parameter. Is the estimator $\bar{X}$ of $\lambda$ a consistent estimator of $\lambda$?

24. Let $X_1, X_2, \ldots, X_n$ be a random sample from a population $X$ having the probability density function
$$f(x; \theta) = \begin{cases} \theta\, x^{\theta-1} & \text{if } 0 < x < 1 \\[4pt] 0 & \text{otherwise,} \end{cases}$$
where $\theta > 0$ is a parameter. Is the estimator $\widehat{\theta} = \frac{\bar{X}}{1 - \bar{X}}$ of $\theta$, obtained by the moment method, a consistent estimator of $\theta$?

25. Let $X_1, X_2, \ldots, X_n$ be a random sample from a population $X$ having the probability density function
$$f(x; p) = \begin{cases} \binom{m}{x}\, p^x\, (1-p)^{m-x} & \text{if } x = 0, 1, 2, \ldots, m \\[4pt] 0 & \text{otherwise,} \end{cases}$$
where $0 < p < 1$ is a parameter and $m$ is a fixed positive integer. What is the maximum likelihood estimator for $p$? Is this maximum likelihood estimator for $p$ an efficient estimator?

26. Let $X_1, X_2, \ldots, X_n$ be a random sample from a population $X$ having the probability density function
$$f(x; \theta) = \begin{cases} \theta\, x^{\theta-1} & \text{if } 0 < x < 1 \\[4pt] 0 & \text{otherwise,} \end{cases}$$
where $\theta > 0$ is a parameter. Is the estimator $\widehat{\theta} = \frac{\bar{X}}{1 - \bar{X}}$ of $\theta$, obtained by the moment method, a consistent estimator of $\theta$? Justify your answer.

Chapter 17

SOME TECHNIQUES FOR FINDING INTERVAL ESTIMATORS FOR PARAMETERS

In point estimation we find a value for the parameter $\theta$ given the sample data. For example, if $X_1, X_2, \ldots, X_n$ is a random sample of size $n$ from a population with probability density function
$$f(x; \theta) = \sqrt{\frac{2}{\pi}}\; e