$Y$) with positive parameters $\theta_1$, $\theta_2$ and $\theta_3$ by writing $(X, Y) \sim \mathrm{GBeta}(\theta_1, \theta_2, \theta_3, a_1, b_1, a_2, b_2)$. It can be shown that if $(Y_1, Y_2) \sim \mathrm{Beta}(\theta_1, \theta_2, \theta_3)$ and $X_k = (b_k - a_k)\, Y_k + a_k$ (for $k = 1, 2$), then $(X_1, X_2) \sim \mathrm{GBeta}(\theta_1, \theta_2, \theta_3, a_1, b_1, a_2, b_2)$. Therefore, by Theorem 12.11, we have the following theorem.

Theorem 12.13. Let $(X, Y) \sim \mathrm{GBeta}(\theta_1, \theta_2, \theta_3, a_1, b_1, a_2, b_2)$, where $\theta_1$, $\theta_2$ and $\theta_3$ are positive a priori chosen parameters. Then $\frac{X - a_1}{b_1 - a_1} \sim \mathrm{Beta}(\theta_1, \theta_2 + \theta_3)$, $\frac{Y - a_2}{b_2 - a_2} \sim \mathrm{Beta}(\theta_2, \theta_1 + \theta_3)$, and
\[
E(X) = (b_1 - a_1)\,\frac{\theta_1}{\theta} + a_1, \qquad
Var(X) = (b_1 - a_1)^2\, \frac{\theta_1\,(\theta - \theta_1)}{\theta^2\,(\theta + 1)},
\]
\[
E(Y) = (b_2 - a_2)\,\frac{\theta_2}{\theta} + a_2, \qquad
Var(Y) = (b_2 - a_2)^2\, \frac{\theta_2\,(\theta - \theta_2)}{\theta^2\,(\theta + 1)},
\]
\[
Cov(X, Y) = -(b_1 - a_1)(b_2 - a_2)\, \frac{\theta_1\,\theta_2}{\theta^2\,(\theta + 1)},
\]
where $\theta = \theta_1 + \theta_2 + \theta_3$.

Another generalization of the bivariate beta distribution is the following:

Definition 12.7. A continuous bivariate random variable $(X_1, X_2)$ is said to have the generalized bivariate beta distribution if its joint probability density function is of the form
\[
f(x_1, x_2) = \frac{1}{B(\alpha_1, \beta_1)\, B(\alpha_2, \beta_2)}\;
x_1^{\alpha_1 + \alpha_2 - 1}\, x_2^{\alpha_2 - 1}\,
(1 - x_1)^{\beta_1 - 1}\, (1 - x_1 x_2)^{\beta_2 - 1}
\]
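The moment formulas of Theorem 12.13 can be sanity-checked by simulation. The sketch below is an illustration, not from the text, and the parameter values are arbitrary choices. It uses the fact that the first two coordinates of a Dirichlet$(\theta_1, \theta_2, \theta_3)$ vector have the bivariate beta distribution $\mathrm{Beta}(\theta_1, \theta_2, \theta_3)$, rescales them affinely to obtain $(X, Y) \sim \mathrm{GBeta}$, and compares Monte Carlo moments with the closed-form expressions. Note that the covariance is negative, since the underlying Dirichlet components compete for the same unit total.

```python
import numpy as np

# Arbitrary illustrative parameters (not from the text)
t1, t2, t3 = 2.0, 3.0, 4.0            # theta_1, theta_2, theta_3
a1, b1, a2, b2 = 0.0, 2.0, 1.0, 4.0
t = t1 + t2 + t3                      # theta = theta_1 + theta_2 + theta_3

# Closed-form moments from Theorem 12.13
EX = (b1 - a1) * t1 / t + a1
EY = (b2 - a2) * t2 / t + a2
VX = (b1 - a1) ** 2 * t1 * (t - t1) / (t ** 2 * (t + 1))
cov = -(b1 - a1) * (b2 - a2) * t1 * t2 / (t ** 2 * (t + 1))

# Monte Carlo check: the first two coordinates of a Dirichlet(t1, t2, t3)
# vector have the bivariate beta density; (X, Y) is their affine rescaling.
rng = np.random.default_rng(0)
Y12 = rng.dirichlet([t1, t2, t3], size=200_000)[:, :2]
X = (b1 - a1) * Y12[:, 0] + a1
Y = (b2 - a2) * Y12[:, 1] + a2

print(EX, X.mean())                   # closed form vs Monte Carlo
print(cov, np.cov(X, Y)[0, 1])        # covariance should come out negative
```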
. The parameter $\rho$ determines the shape and the orientation on the $(x, y)$-plane of this mountain-shaped surface. The following figures show the graphs of the bivariate normal distribution for different values of the correlation coefficient $\rho$. The first two figures illustrate the graph of the bivariate normal distribution with $\rho = 0$, $\mu_1 = \mu_2 = 0$, and $\sigma_1 = \sigma_2 = 1$, together with the equi-density plots. The next two figures illustrate the graph of the bivariate normal distribution with $\rho = 0.5$, $\mu_1 = \mu_2 = 0$, and $\sigma_1 = \sigma_2 = 0.5$, together with the equi-density plots. The last two figures illustrate the graph of the bivariate normal distribution with $\rho = -0.5$, $\mu_1 = \mu_2 = 0$, and $\sigma_1 = \sigma_2 = 0.5$, together with the equi-density plots.

Probability and Mathematical Statistics 343

One of the remarkable features of the bivariate normal distribution is that if we vertically slice the graph of $f(x, y)$ along any direction, we obtain a curve with the shape of a univariate normal density. In particular, if we vertically slice the graph of $f(x, y)$ along the $x$-axis, we obtain such a curve. Moreover, the marginals of $f(x, y)$ are again normal. One can show that the marginals of $f(x, y)$ are given by
\[
f_1(x) = \frac{1}{\sigma_1 \sqrt{2\pi}}\; e^{-\frac{1}{2}\left(\frac{x - \mu_1}{\sigma_1}\right)^2}
\qquad \text{and} \qquad
f_2(y) = \frac{1}{\sigma_2 \sqrt{2\pi}}\; e^{-\frac{1}{2}\left(\frac{y - \mu_2}{\sigma_2}\right)^2}.
\]
In view of these, the following theorem is obvious.

Some Special Continuous Bivariate Distributions 344

Theorem 12.14. If $(X, Y) \sim N(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho)$, then
\[
E(X) = \mu_1, \qquad E(Y) = \mu_2, \qquad
Var(X) = \sigma_1^2, \qquad Var(Y) = \sigma_2^2, \qquad
Corr(X, Y) = \rho,
\]
and the joint moment generating function is
\[
M(s, t) = e^{\mu_1 s + \mu_2 t + \frac{1}{2}\left(\sigma_1^2 s^2 + 2 \rho\, \sigma_1 \sigma_2\, s t + \sigma_2^2 t^2\right)}.
\]
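The moment results of Theorem 12.14 can be checked numerically. The following minimal sketch (with arbitrary illustrative parameter values, not from the text) samples from a bivariate normal via the standard construction $X = \mu_1 + \sigma_1 Z_1$, $Y = \mu_2 + \sigma_2(\rho Z_1 + \sqrt{1-\rho^2}\, Z_2)$ with independent standard normals $Z_1, Z_2$, and compares empirical moments with the theorem.

```python
import numpy as np

mu1, mu2, s1, s2, rho = 1.0, -2.0, 2.0, 0.5, 0.5   # illustrative values

rng = np.random.default_rng(1)
z1 = rng.standard_normal(200_000)
z2 = rng.standard_normal(200_000)

# Standard construction of correlated normals from independent ones
X = mu1 + s1 * z1
Y = mu2 + s2 * (rho * z1 + np.sqrt(1 - rho ** 2) * z2)

print(X.mean(), Y.mean())            # approx mu1, mu2
print(X.var(), Y.var())              # approx s1**2, s2**2
print(np.corrcoef(X, Y)[0, 1])       # approx rho
```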
Theorem 12.15. If $(X, Y) \sim N(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho)$, then
\[
E(Y/x) = \mu_2 + \rho\, \frac{\sigma_2}{\sigma_1}\, (x - \mu_1), \qquad
E(X/y) = \mu_1 + \rho\, \frac{\sigma_1}{\sigma_2}\, (y - \mu_2),
\]
\[
Var(Y/x) = \sigma_2^2\, (1 - \rho^2), \qquad
Var(X/y) = \sigma_1^2\, (1 - \rho^2).
\]

We have seen that if $(X, Y)$ has a bivariate normal distribution, then the distributions of $X$ and $Y$ are also normal. However, the converse of this is not true: if $X$ and $Y$ have normal distributions as their marginals, then their joint distribution is not necessarily bivariate normal. Now we present some characterization theorems concerning the bivariate normal distribution. The first theorem is due to Cramér (1941).

Theorem 12.16. The random variables $X$ and $Y$ have a joint bivariate normal distribution if and only if every linear combination of $X$ and $Y$ has a univariate normal distribution.

Theorem 12.17. The random variables $X$ and $Y$ with unit variances and correlation coefficient $\rho$ have a joint bivariate normal distribution if and only if
\[
\frac{\partial}{\partial \rho}\, E[g(X, Y)] = E\!\left[ \frac{\partial^2}{\partial X\, \partial Y}\, g(X, Y) \right]
\]
holds for an arbitrary function $g(x, y)$ of two variables.

Many interesting characterizations of the bivariate normal distribution can be found in the survey paper of Hamedani (1992).

12.6. Bivariate Logistic Distributions

In this section, we study two bivariate logistic distributions. A univariate logistic distribution is often considered as an alternative to the univariate normal distribution. The univariate logistic distribution has a shape very close to that of a univariate normal distribution but has heavier tails than the normal. This distribution is also used as an alternative to the univariate Weibull distribution in life-testing. The univariate logistic distribution has the following probability density function:
\[
f(x) = \frac{\pi}{\sigma \sqrt{3}}\;
\frac{e^{-\frac{\pi}{\sqrt{3}}\left(\frac{x - \mu}{\sigma}\right)}}
{\left[ 1 + e^{-\frac{\pi}{\sqrt{3}}\left(\frac{x - \mu}{\sigma}\right)} \right]^2},
\qquad -\infty < x < \infty,
\]
where $-\infty < \mu < \infty$ and $\sigma > 0$ are parameters.
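The $\pi/\sqrt{3}$ factor in this parameterization makes $\mu$ the mean and $\sigma^2$ the variance of the logistic distribution. A quick numerical sanity check of this claim (an illustrative sketch, not part of the text) integrates the density on a wide, fine grid:

```python
import numpy as np

mu, sigma = 1.0, 2.0                  # illustrative parameter values
c = np.pi / np.sqrt(3.0)

def f(x):
    """Logistic density with mean mu and variance sigma**2 (as claimed above)."""
    u = np.exp(-c * (x - mu) / sigma)
    return (c / sigma) * u / (1.0 + u) ** 2

# Riemann-sum check; the logistic tails decay exponentially, so a
# grid of +/- 50 standard deviations captures essentially all the mass.
x = np.linspace(mu - 50 * sigma, mu + 50 * sigma, 1_000_001)
dx = x[1] - x[0]
total = np.sum(f(x)) * dx                   # close to 1
mean = np.sum(x * f(x)) * dx                # close to mu
var = np.sum((x - mean) ** 2 * f(x)) * dx   # close to sigma**2
print(total, mean, var)
```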
bivariate random variable $(X, Y)$ is said to have the bivariate logistic distribution of second kind if its joint probability density function is of the form
\[
f(x, y) = \frac{[\phi_\alpha(x, y)]^{1 - 2\alpha}}{[1 + \phi_\alpha(x, y)]^2}
\left( \frac{\phi_\alpha(x, y) - 1}{\phi_\alpha(x, y) + 1} + \alpha \right)
e^{-\alpha (x + y)}, \qquad -\infty < x, y < \infty,
\]
where $\alpha > 0$ is a parameter and $\phi_\alpha(x, y) := \left( e^{-\alpha x} + e^{-\alpha y} \right)^{\frac{1}{\alpha}}$. As before, we denote a bivariate logistic random variable of second kind $(X, Y)$ by writing $(X, Y) \sim \mathrm{LOGS}(\alpha)$.

The marginal densities of $X$ and $Y$ are again logistic, and they are given by
\[
f_1(x) = \frac{e^{-x}}{(1 + e^{-x})^2}, \qquad -\infty < x < \infty,
\]
and
\[
f_2(y) = \frac{e^{-y}}{(1 + e^{-y})^2}, \qquad -\infty < y < \infty.
\]
It was shown by Oliveira (1961) that if $(X, Y) \sim \mathrm{LOGS}(\alpha)$, then the correlation between $X$ and $Y$ is
\[
\rho(X, Y) = 1 - \frac{1}{2 \alpha^2}.
\]

12.7. Review Exercises

1. If $(X, Y) \sim N(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho)$ with $Q(x, y) = x^2 + 2y^2 - 2xy + 2x - 2y + 1$, then what is the value of the conditional variance of $Y$ given the event $X = x$?

2. If $(X, Y) \sim N(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho)$ with $Q(x, y) = \frac{1}{102}\left[ (x + 3)^2 - 16 (x + 3)(y - 2) + 4 (y - 2)^2 \right]$, then what is the value of the conditional expectation of $Y$ given $X = x$?

3. If $(X, Y) \sim N(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho)$, then what is the correlation coefficient of the random variables $U$ and $V$, where $U = 2X + 3Y$ and $V = 2X - 3Y$?

4. Let the random variables $X$ and $Y$ denote the height and weight of wild turkeys. If the random variables $X$ and $Y$ have a bivariate normal distribution with $\mu_1 = 18$ inches, $\mu_2 = 15$ pounds, $\sigma_1 = 3$ inches, $\sigma_2 = 2$ pounds, and $\rho = 0.75$, then what is the expected weight of one of these wild turkeys that is 17 inches tall?

5. If $(X, Y) \sim N(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho)$, then what is the moment generating function of the random variables $U$ and $V$, where $U = 7X + 3Y$ and $V = 7X - 3Y$?

6. Let $(X, Y)$ have a bivariate normal distribution. The mean of $X$ is 10 and the variance of $X$ is 12. The mean of $Y$ is 5 and the variance of $Y$ is 5. If the covariance of $X$ and $Y$ is 4, then what is the probability that $X + Y$ is greater than 10?

7. Let $X$ and $Y$ have a bivariate normal distribution with means $\mu_X = 5$ and $\mu_Y = 6$, standard deviations $\sigma_X = 3$ and $\sigma_Y = 2$, and covariance $\sigma_{XY} = 2$. Let $\Phi$ denote the cumulative distribution function of a normal random variable with mean 0 and variance 1. What is $P(2 \le X - Y \le 5)$ in terms of $\Phi$?

8. If $(X, Y) \sim N(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho)$ with $Q(x, y) = x^2 + xy - 2y^2$, then what is the conditional distribution of $X$ given the event $Y = y$?

9. If $(X, Y) \sim \mathrm{GAMK}(\alpha, \theta)$, where $0 < \alpha < \infty$ and $0 \le \theta < 1$ are parameters, then show that the moment generating function is given by
\[
M(s, t) = \left[ (1 - s)(1 - t) - \theta\, s\, t \right]^{-\alpha}.
\]
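For exercises like 3 and 5, the key covariance algebra is $Cov(2X + 3Y,\, 2X - 3Y) = 4\,Var(X) - 9\,Var(Y)$, since the cross terms cancel. The sketch below (with arbitrary illustrative parameter values; it is not a solution from the text) compares the analytic correlation of $U = 2X + 3Y$ and $V = 2X - 3Y$ with an empirical estimate.

```python
import math
import numpy as np

mu1, mu2, s1, s2, rho = 0.0, 0.0, 1.0, 1.0, 0.5   # illustrative values

# Analytic moments of U = 2X + 3Y and V = 2X - 3Y
cov_uv = 4 * s1 ** 2 - 9 * s2 ** 2                   # cross terms cancel
var_u = 4 * s1 ** 2 + 9 * s2 ** 2 + 12 * rho * s1 * s2
var_v = 4 * s1 ** 2 + 9 * s2 ** 2 - 12 * rho * s1 * s2
corr_uv = cov_uv / math.sqrt(var_u * var_v)

# Empirical check via the standard correlated-normal construction
rng = np.random.default_rng(2)
z1 = rng.standard_normal(300_000)
z2 = rng.standard_normal(300_000)
X = mu1 + s1 * z1
Y = mu2 + s2 * (rho * z1 + math.sqrt(1 - rho ** 2) * z2)
U, V = 2 * X + 3 * Y, 2 * X - 3 * Y

print(corr_uv, np.corrcoef(U, V)[0, 1])   # analytic vs empirical
```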
10. Let $X$ and $Y$ have a bivariate gamma distribution of Kibble with parameters $\alpha = 1$ and $0 \le \theta < 1$. What is the probability that the random variable $7X$ is less than $\frac{1}{2}$?

11. If $(X, Y) \sim \mathrm{GAMC}(\alpha, \beta, \theta)$, then what are the regression and scedastic curves of $Y$ on $X$?

12. The position of a random point $(X, Y)$ is equally probable anywhere inside a circle of radius $R$ whose center is at the origin. What is the probability density function of each of the random variables $X$ and $Y$? Are the random variables $X$ and $Y$ independent?

13. If $(X, Y) \sim \mathrm{GAMC}(\alpha, \beta, \theta)$, what is the correlation coefficient of the random variables $X$ and $Y$?

14. Let $X$ and $Y$ have a bivariate exponential distribution of Gumbel with parameter $\theta > 0$. What is the regression curve of $Y$ on $X$?

15. A screen of a navigational radar station represents a circle of radius 12 inches. As a result of noise, a spot may appear with its center at any point of the circle. Find the expected value and variance of the distance between the center of the spot and the center of the circle.

16. Let $X$ and $Y$ have a bivariate normal distribution. Which of the following statements must be true? (I) Any nonzero linear combination of $X$ and $Y$ has a normal distribution. (II) $E(Y/X = x)$ is a linear function of $x$. (III) $Var(Y/X = x) \le Var(Y)$.

17. If $(X, Y) \sim \mathrm{LOGS}(\alpha)$, then what is the correlation between $X$ and $Y$?

18. If $(X, Y) \sim \mathrm{LOGF}(\mu_1, \mu_2, \sigma_1, \sigma_2)$, then what is the correlation between the random variables $X$ and $Y$?

19. If $(X, Y) \sim \mathrm{LOGF}(\mu_1, \mu_2, \sigma_1, \sigma_2)$, then show that marginally $X$ and $Y$ are univariate logistic.

20. If $(X, Y) \sim \mathrm{LOGF}(\mu_1, \mu_2, \sigma_1, \sigma_2)$, then what is the scedastic curve of the random variable $Y$ on $X$?

Chapter 13

SEQUENCES OF RANDOM VARIABLES AND ORDER STATISTICS

In this chapter, we generalize some of the results we have studied in the previous chapters. We do these generalizations because they are needed in the subsequent chapters relating to mathematical statistics. In this chapter, we also examine the weak law of large numbers, Bernoulli's law of large numbers, the strong law of large numbers, and the central limit theorem. Further, in this chapter, we treat the order statistics and percentiles.

13.1. Distribution of sample mean and variance

Consider a random experiment. Let $X$ be the random variable associated with this experiment, and let $f(x)$ be the probability density function of $X$. Let us repeat this experiment $n$ times, and let $X_k$ be the random variable associated with the $k^{\mathrm{th}}$ repetition. Then the collection of the random variables $\{ X_1, X_2, \ldots, X_n \}$ is a random sample of size $n$. From here on, we simply denote $X_1, X_2, \ldots, X_n$ as a random sample of size $n$. The random variables $X_1, X_2, \ldots, X_n$ are independent and identically distributed with the common probability density function $f(x)$.

For a random sample, functions such as the sample mean $\overline{X}$ and the sample variance $S^2$ are called statistics. In a particular sample, say $x_1, x_2, \ldots, x_n$, we observe $\overline{x}$ and $s^2$. We may consider
\[
\overline{X} = \frac{1}{n} \sum_{i=1}^{n} X_i
\]

Sequences of Random Variables and Order Statistics 354

and
\[
S^2 = \frac{1}{n - 1} \sum_{i=1}^{n} \left( X_i - \overline{X} \right)^2
\]
as random variables, and $\overline{x}$ and $s^2$ are the realizations from a particular sample. In this section, we are mainly interested in finding the probability distributions of the sample mean $\overline{X}$ and sample variance $S^2$, that is, the distributions of the statistics of samples.

Example 13.1. Let $X_1$ and $X_2$ be a random sample of size 2 from a distribution with probability density function
\[
f(x) = \begin{cases} 6x(1 - x) & \text{if } 0 < x < 1 \\ 0 & \text{otherwise.} \end{cases}
\]
What are the mean and variance of the sample sum $Y = X_1 + X_2$?

Answer: The population mean is
\[
\mu_X = E(X) = \int_0^1 x \cdot 6x(1 - x) \, dx
= 6 \int_0^1 x^2 (1 - x) \, dx
= 6 \, B(3, 2) \qquad \text{(here $B$ denotes the beta function)}
\]
\[
= 6 \, \frac{\Gamma(3)\, \Gamma(2)}{\Gamma(5)}
= 6 \left( \frac{1}{12} \right)
= \frac{1}{2}.
\]
Since $X_1$ and $X_2$ have the same distribution, we obtain $\mu_{X_1} = \frac{1}{2} = \mu_{X_2}$. Hence the mean of $Y$ is given by
\[
E(Y) = E(X_1 + X_2) = E(X_1) + E(X_2) = \frac{1}{2} + \frac{1}{2} = 1.
\]

Next, we compute the variance of the population $X$. The variance of $X$ is given by
\[
Var(X) = E(X^2) - E(X)^2
= \int_0^1 x^2 \cdot 6x(1 - x) \, dx - \left( \frac{1}{2} \right)^2
= 6 \int_0^1 x^3 (1 - x) \, dx - \frac{1}{4}
\]
\[
= 6 \, B(4, 2) - \frac{1}{4}
= 6 \, \frac{\Gamma(4)\, \Gamma(2)}{\Gamma(6)} - \frac{1}{4}
= 6 \left( \frac{1}{20} \right) - \frac{1}{4}
= \frac{6}{20} - \frac{5}{20}
= \frac{1}{20}.
\]
Since $X_1$ and $X_2$ have the same distribution as the population $X$, we get $Var(X_1) = \frac{1}{20} = Var(X_2)$. Hence, using the independence of $X_1$ and $X_2$, the variance of the sample sum $Y$ is given by
\[
Var(Y) = Var(X_1 + X_2) = Var(X_1) + Var(X_2) + 2\, Cov(X_1, X_2)
= Var(X_1) + Var(X_2)
= \frac{1}{20} + \frac{1}{20}
= \frac{1}{10}.
\]

Example 13.2. Let $X_1$ and $X_2$ be a random sample of size 2 from a distribution with density
\[
f(x) = \begin{cases} \frac{1}{4} & \text{for } x = 1, 2, 3, 4 \\ 0 & \text{otherwise.} \end{cases}
\]
What is the distribution of the sample sum $Y = X_1 + X_2$?

Answer: Since the range space of $X_1$ as well as of $X_2$ is $\{1, 2, 3, 4\}$, the range space of $Y = X_1 + X_2$ is $R_Y = \{2, 3, 4, 5, 6, 7, 8\}$. Let $g(y)$ be the density function of $Y$. We want to find this density function. First, we find $g(2)$, $g(3)$, and so on.
\[
g(2) = P(Y = 2) = P(X_1 + X_2 = 2) = P(X_1 = 1 \text{ and } X_2 = 1)
= P(X_1 = 1)\, P(X_2 = 1)
\]
(by independence of $X_1$ and $X_2$)
\[
= f(1)\, f(1) = \left( \frac{1}{4} \right) \left( \frac{1}{4} \right) = \frac{1}{16}.
\]
\[
g(3) = P(Y = 3) = P(X_1 + X_2 = 3)
= P(X_1 = 1 \text{ and } X_2 = 2) + P(X_1 = 2 \text{ and } X_2 = 1)
\]
\[
= P(X_1 = 1)\, P(X_2 = 2) + P(X_1 = 2)\, P(X_2 = 1)
= f(1)\, f(2) + f(2)\, f(1) = \frac{2}{16}.
\]
\[
g(4) = P(Y = 4) = P(X_1 + X_2 = 4)
= P(X_1 = 1 \text{ and } X_2 = 3) + P(X_1 = 3 \text{ and } X_2 = 1) + P(X_1 = 2 \text{ and } X_2 = 2)
\]
\[
= f(1)\, f(3) + f(3)\, f(1) + f(2)\, f(2) = \frac{3}{16}.
\]
Similarly, we get
\[
g(5) = \frac{4}{16}, \qquad g(6) = \frac{3}{16}, \qquad g(7) = \frac{2}{16}, \qquad g(8) = \frac{1}{16}.
\]
Thus, putting these into one expression, we get
\[
g(y) = P(Y = y) = \sum_{k=1}^{y-1} f(k)\, f(y - k) = \frac{4 - |y - 5|}{16}, \qquad y = 2, 3, 4, \ldots, 8.
\]

Remark 13.1. Note that $g(y) = \sum_{k=1}^{y-1} f(k)\, f(y - k)$ is the discrete convolution of $f$ with itself. The concept of convolution was introduced in Chapter 10.

The above example can also be done
using the moment generating function method as follows:
\[
M_Y(t) = M_{X_1 + X_2}(t) = M_{X_1}(t)\, M_{X_2}(t)
= \left( \frac{e^t + e^{2t} + e^{3t} + e^{4t}}{4} \right)
\left( \frac{e^t + e^{2t} + e^{3t} + e^{4t}}{4} \right)
\]
\[
= \left( \frac{e^t + e^{2t} + e^{3t} + e^{4t}}{4} \right)^2
= \frac{e^{2t} + 2e^{3t} + 3e^{4t} + 4e^{5t} + 3e^{6t} + 2e^{7t} + e^{8t}}{16}.
\]
Hence, the density of $Y$ is given by
\[
g(y) = \frac{4 - |y - 5|}{16}, \qquad y = 2, 3, 4, \ldots, 8.
\]

Theorem 13.1. If $X_1, X_2, \ldots, X_n$ are mutually independent random variables with densities $f_1(x_1), f_2(x_2), \ldots, f_n(x_n)$ and $E[u_i(X_i)]$, $i = 1, 2, \ldots, n$, exist, then
\[
E\!\left[ \prod_{i=1}^{n} u_i(X_i) \right] = \prod_{i=1}^{n} E[u_i(X_i)],
\]
where $u_i$ ($i = 1, 2, \ldots, n$) are arbitrary functions.

Proof: We prove the theorem assuming that the random variables $X_1, X_2, \ldots, X_n$ are continuous. If the random variables are not continuous, then the proof follows in exactly the same manner if one replaces the integrals by summations. We have
\[
E\!\left( \prod_{i=1}^{n} u_i(X_i) \right) = E\left( u_1(X_1) \cdots u_n(X_n) \right)
= \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} u_1(x_1) \cdots u_n(x_n)\, f(x_1, \ldots, x_n)\, dx_1 \cdots dx_n
\]
\[
= \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} u_1(x_1) \cdots u_n(x_n)\, f_1(x_1) \cdots f_n(x_n)\, dx_1 \cdots dx_n
\]
\[
= \int_{-\infty}^{\infty} u_1(x_1)\, f_1(x_1)\, dx_1 \cdots \int_{-\infty}^{\infty} u_n(x_n)\, f_n(x_n)\, dx_n
= E(u_1(X_1)) \cdots E(u_n(X_n))
\]
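The density $g(y) = (4 - |y - 5|)/16$ of Example 13.2, reconfirmed by the MGF expansion above, can also be produced directly as a discrete convolution, echoing Remark 13.1. A minimal sketch:

```python
import numpy as np

# Density of a single observation X on {1, 2, 3, 4}
f = np.array([0.25, 0.25, 0.25, 0.25])

# The discrete convolution of f with itself gives the density of
# Y = X1 + X2 on {2, 3, ..., 8}; np.convolve computes sum_k f[k] f[y-k].
g = np.convolve(f, f)

for y, p in zip(range(2, 9), g):
    print(y, p, (4 - abs(y - 5)) / 16)   # convolution vs closed form
```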