29. Let a, b, α, β be any four real numbers with a < b and α, β positive. If X ∼ BETA(α, β), then what is the probability density function of the random variable Y = (b − a)X + a?

30. A nonnegative continuous random variable X is said to be memoryless if P(X > s + t | X > t) = P(X > s) for all s, t ≥ 0. Show that the exponential random variable is memoryless.

31. Show that every nonnegative continuous memoryless random variable is an exponential random variable.

32. Using the gamma function, evaluate the following integrals:
(i) ∫₀^∞ e^(−x²) dx;  (ii) ∫₀^∞ x e^(−x²) dx;  (iii) ∫₀^∞ x² e^(−x²) dx;  (iv) ∫₀^∞ x³ e^(−x²) dx.

33. Using the beta function, evaluate the following integrals:
(i) ∫₀^1 x²(1 − x)² dx;  (ii) ∫₀^100 x⁵(100 − x)⁷ dx;  (iii) ∫₀^1 x¹¹(1 − x³)⁷ dx.

34. If Γ(z) denotes the gamma function, then prove that Γ(1 + t) Γ(1 − t) = πt cosec(πt).

35. Let α and β be given positive real numbers, with α < β. If two points are selected at random from a straight line segment of length β, what is the probability that the distance between them is at least α?

36. If the random variable X ∼ GAM(θ, α), then what is the nth moment of X about the origin?

Chapter 7
TWO RANDOM VARIABLES

There are many random experiments that involve more than one random variable. For example, an educator may study the joint behavior of grades and time devoted to study; a physician may study the joint behavior of blood pressure and weight. Similarly, an economist may study the joint behavior of business volume and profit. In fact, most real problems we come across will have more than one underlying random variable of interest.
7.1. Bivariate Discrete Random Variables

In this section, we develop all the necessary terminology for studying bivariate discrete random variables.

Definition 7.1. A discrete bivariate random variable (X, Y) is an ordered pair of discrete random variables.

Definition 7.2. Let (X, Y) be a bivariate random variable and let R_X and R_Y be the range spaces of X and Y, respectively. A real-valued function f : R_X × R_Y → ℝ is called a joint probability density function for X and Y if and only if

f(x, y) = P(X = x, Y = y)  for all (x, y) ∈ R_X × R_Y.

Here, the event (X = x, Y = y) means the intersection of the events (X = x) and (Y = y), that is, (X = x) ∩ (Y = y).

Example 7.1. Roll a pair of unbiased dice. If X denotes the smaller and Y denotes the larger outcome on the dice, then what is the joint probability density function of X and Y?

Answer: The sample space S of rolling two dice consists of the 36 equally likely ordered pairs

S = { (1,1) (2,1) (3,1) (4,1) (5,1) (6,1)
      (1,2) (2,2) (3,2) (4,2) (5,2) (6,2)
      (1,3) (2,3) (3,3) (4,3) (5,3) (6,3)
      (1,4) (2,4) (3,4) (4,4) (5,4) (6,4)
      (1,5) (2,5) (3,5) (4,5) (5,5) (6,5)
      (1,6) (2,6) (3,6) (4,6) (5,6) (6,6) }.

The probability density function f(x, y) can be computed for X = 2 and Y = 3 as follows. There are two outcomes, namely (2,3) and (3,2), in the sample space S of 36 outcomes which contribute to the joint event (X = 2, Y = 3). Hence

f(2, 3) = P(X = 2, Y = 3) = 2/36.
Similarly, we can compute the rest of the probabilities. The following table shows these probabilities:

y\x    1     2     3     4     5     6
 6    2/36  2/36  2/36  2/36  2/36  1/36
 5    2/36  2/36  2/36  2/36  1/36   0
 4    2/36  2/36  2/36  1/36   0     0
 3    2/36  2/36  1/36   0     0     0
 2    2/36  1/36   0     0     0     0
 1    1/36   0     0     0     0     0

These tabulated values can be written as

f(x, y) = 1/36 if 1 ≤ x = y ≤ 6;  2/36 if 1 ≤ x < y ≤ 6;  0 otherwise.

Example 7.2. A group of 9 executives of a certain firm includes 4 who are married, 3 who have never married, and 2 who are divorced. Three of the executives are to be selected for promotion. Let X denote the number of married executives and Y the number of never-married executives among the 3 selected for promotion. Assuming that the three are randomly selected from the nine available, what is the joint probability density function of the random variables X and Y?

Answer: The number of ways we can choose 3 out of 9 is C(9,3), which is 84. Thus

f(0, 0) = P(X = 0, Y = 0) = C(4,0) C(3,0) C(2,3) / 84 = 0,
f(1, 0) = P(X = 1, Y = 0) = C(4,1) C(3,0) C(2,2) / 84 = 4/84,
f(2, 0) = P(X = 2, Y = 0) = C(4,2) C(3,0) C(2,1) / 84 = 12/84,
f(3, 0) = P(X = 3, Y = 0) = C(4,3) C(3,0) C(2,0) / 84 = 4/84.

Similarly, we can find the rest of the probabilities. The following table gives the complete information about these probabilities:

y\x    0      1      2      3
 3    1/84    0      0      0
 2    6/84  12/84    0      0
 1    3/84  24/84  18/84    0
 0     0     4/84  12/84   4/84
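Tables like the one in Example 7.2 are easy to verify by brute-force enumeration. The Python sketch below is one illustrative way to do it; the labels "M", "N", "D" for married, never-married, and divorced executives are ours, not the book's, and the same idea reproduces the dice table of Example 7.1.

```python
from itertools import combinations
from fractions import Fraction
from collections import Counter

# Hypothetical labels: 4 married (M), 3 never married (N), 2 divorced (D).
group = ["M"] * 4 + ["N"] * 3 + ["D"] * 2

counts = Counter()
for chosen in combinations(range(9), 3):      # all C(9,3) = 84 equally likely selections
    picked = [group[i] for i in chosen]
    x = picked.count("M")                     # X = number of married executives chosen
    y = picked.count("N")                     # Y = number of never-married executives chosen
    counts[(x, y)] += 1

total = sum(counts.values())                  # 84
f = {xy: Fraction(c, total) for xy, c in counts.items()}

print(f[(2, 0)])        # 1/7  (= 12/84)
print(f[(1, 1)])        # 2/7  (= 24/84)
print(sum(f.values()))  # 1
```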
Definition 7.3. Let (X, Y) be a discrete bivariate random variable. Let R_X and R_Y be the range spaces of X and Y, respectively, and let f(x, y) be the joint probability density function of X and Y. The function

f₁(x) = Σ_{y ∈ R_Y} f(x, y)

is called the marginal probability density function of X. Similarly, the function

f₂(y) = Σ_{x ∈ R_X} f(x, y)

is called the marginal probability density function of Y.

The following diagram illustrates the concept of a marginal graphically. [Figure: the joint density of (X, Y) and the marginal density of X.]

Example 7.3. If the joint probability density function of the discrete random variables X and Y is given by

f(x, y) = 1/36 if 1 ≤ x = y ≤ 6;  2/36 if 1 ≤ x < y ≤ 6;  0 otherwise,

then what are the marginals of X and Y?

Answer: The marginal of X can be obtained by summing the joint probability density function f(x, y) over all y values in the range space R_Y of the random variable Y. That is,

f₁(x) = Σ_{y ∈ R_Y} f(x, y) = Σ_{y=1}^{6} f(x, y)
      = f(x, x) + Σ_{y > x} f(x, y) + Σ_{y < x} f(x, y)
      = 1/36 + (6 − x)(2/36) + 0
      = (1/36)[13 − 2x],  x = 1, 2, ..., 6.

Similarly, one can obtain the marginal probability density of Y by summing over all x values in the range space R_X of the random variable X. Hence

f₂(y) = Σ_{x ∈ R_X} f(x, y) = Σ_{x=1}^{6} f(x, y)
      = f(y, y) + Σ_{x < y} f(x, y) + Σ_{x > y} f(x, y)
      = 1/36 + (y − 1)(2/36) + 0
      = (1/36)[2y − 1],  y = 1, 2, ..., 6.
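As a quick sanity check on Example 7.3, the marginals can be obtained mechanically by summing the joint table; a small Python sketch of that computation (the function and variable names are ours) follows.

```python
from fractions import Fraction

def f(x, y):
    """Joint pmf of (smaller, larger) outcome when two fair dice are rolled (Examples 7.1 and 7.3)."""
    if 1 <= x == y <= 6:
        return Fraction(1, 36)
    if 1 <= x < y <= 6:
        return Fraction(2, 36)
    return Fraction(0)

# Marginal of X: sum the joint pmf over the range of Y; marginal of Y: sum over the range of X.
f1 = {x: sum(f(x, y) for y in range(1, 7)) for x in range(1, 7)}
f2 = {y: sum(f(x, y) for x in range(1, 7)) for y in range(1, 7)}

# Both agree with the closed forms (13 - 2x)/36 and (2y - 1)/36.
assert all(f1[x] == Fraction(13 - 2 * x, 36) for x in range(1, 7))
assert all(f2[y] == Fraction(2 * y - 1, 36) for y in range(1, 7))
print(f1[1], f2[6])   # 11/36 11/36
```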
Example 7.4. Let X and Y be discrete random variables with joint probability density function

f(x, y) = (1/21)(x + y) if x = 1, 2; y = 1, 2, 3;  and 0 otherwise.

What are the marginal probability density functions of X and Y?

Answer: The marginal of X is given by

f₁(x) = Σ_{y=1}^{3} (1/21)(x + y) = (1/21)[3x + 1 + 2 + 3] = (3x + 6)/21,  x = 1, 2.

Similarly, the marginal of Y is given by

f₂(y) = Σ_{x=1}^{2} (1/21)(x + y) = (3 + 2y)/21,  y = 1, 2, 3.

From the above examples, note that the marginal f₁(x) is obtained by summing across the columns. Similarly, the marginal f₂(y) is obtained by summing across the rows.

The following theorem follows from the definition of the joint probability density function.

Theorem 7.1. A real-valued function f of two variables is a joint probability density function of a pair of discrete random variables X and Y (with range spaces R_X and R_Y, respectively) if and only if
(a) f(x, y) ≥ 0 for all (x, y) ∈ R_X × R_Y;
(b) Σ_{x ∈ R_X} Σ_{y ∈ R_Y} f(x, y) = 1.

Example 7.5. For what value of the constant k is the function given by

f(x, y) = k x y if x = 1, 2, 3; y = 1, 2, 3;  and 0 otherwise

a joint probability density function of some random variables X and Y?

Answer: Since

1 = Σ_{x=1}^{3} Σ_{y=1}^{3} f(x, y) = k (Σ_{x=1}^{3} x)(Σ_{y=1}^{3} y) = k [6][6] = 36k,

we have k = 1/36, and the corresponding density function is given by

f(x, y) = (1/36) x y if x = 1, 2, 3; y = 1, 2, 3;  and 0 otherwise.

As in the case of one random variable, there are many situations where one wants to know the probability that the values of two random variables are less than or equal to some real numbers x and y.

Definition 7.4. Let X and Y be any two discrete random variables. The real-valued function F : ℝ² → ℝ is called the joint cumulative probability distribution function of X and Y if and only if

F(x, y) = P(X ≤ x, Y ≤ y)

for all (x, y) ∈ ℝ². Here, the event (X ≤ x, Y ≤ y) means (X ≤ x) ∩ (Y ≤ y).
From this definition it can be shown that for any real numbers a < b and c < d,

P(a < X ≤ b, c < Y ≤ d) = F(b, d) + F(a, c) − F(a, d) − F(b, c).

Further, one can also show that

F(x, y) = Σ_{s ≤ x} Σ_{t ≤ y} f(s, t),

where the sum is over all pairs (s, t) in the range space of (X, Y) with s ≤ x and t ≤ y.

7.2. Bivariate Continuous Random Variables

In this section, we shall extend the idea of probability density functions of one random variable to that of two random variables.

Definition 7.5. The joint probability density function of the random variables X and Y is an integrable function f(x, y) such that
(a) f(x, y) ≥ 0 for all (x, y) ∈ ℝ²; and
(b) ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dx dy = 1.

Example 7.6. Let the joint density function of X and Y be given by

f(x, y) = k x y² if 0 < x < y < 1;  and 0 otherwise.

What is the value of the constant k?

Answer: Since f is a joint probability density function, we have

1 = ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dx dy = k ∫₀^1 ∫₀^y x y² dx dy = (k/2) ∫₀^1 y⁴ dy = (k/2)[y⁵/5]₀^1 = k/10.

Hence k = 10.

If we know the joint probability density function f of the random variables X and Y, then we can compute the probability of an event A from

P(A) = ∫∫_A f(x, y) dx dy.

Example 7.7. Let the joint density of the continuous random variables X and Y be

f(x, y) = (6/5)(x² + 2xy) if 0 ≤ x ≤ 1; 0 ≤ y ≤ 1;  and 0 elsewhere.

What is the probability of the event (X ≤ Y)?
Answer: Let A = (X ≤ Y). We want to find

P(A) = ∫∫_A f(x, y) dx dy
     = ∫₀^1 ∫₀^y (6/5)(x² + 2xy) dx dy
     = (6/5) ∫₀^1 [x³/3 + x²y]_{x=0}^{x=y} dy
     = (6/5) ∫₀^1 (4/3) y³ dy
     = (6/5)(4/3)(1/4) = 2/5.

Definition 7.6. Let (X, Y) be a continuous bivariate random variable and let f(x, y) be the joint probability density function of X and Y. The function

f₁(x) = ∫_{−∞}^{∞} f(x, y) dy

is called the marginal probability density function of X. Similarly, the function

f₂(y) = ∫_{−∞}^{∞} f(x, y) dx

is called the marginal probability density function of Y.

Example 7.8. If the joint density function for X and Y is given by

f(x, y) = 3/4 for 0 < y² < x < 1;  and 0 otherwise,

then what is the marginal density function of X, for 0 < x < 1?

Answer: The domain of f consists of the region bounded by the curve x = y² and the vertical line x = 1. Hence

f₁(x) = ∫_{−√x}^{√x} (3/4) dy = (3/4)[y]_{−√x}^{√x} = (3/2)√x.

Example 7.9. Let X and Y have joint density function

f(x, y) = 2 e^(−(x+y)) for 0 < x ≤ y < ∞;  and 0 otherwise.

What is the marginal density of X where nonzero?

Answer: The marginal density of X is given by

f₁(x) = ∫_{−∞}^{∞} f(x, y) dy = ∫_x^∞ 2 e^(−x) e^(−y) dy = 2 e^(−x) [−e^(−y)]_x^∞ = 2 e^(−x) e^(−x) = 2 e^(−2x).

Example 7.10. Let (X, Y) be distributed uniformly on the circular disk centered at (0, 0) with radius 2/√π. What is the marginal density function of X where nonzero?

Answer: The equation of a circle with radius 2/√π and center at the origin is
x² + y² = 4/π.

Hence, solving this equation for y, we get y = ±√(4/π − x²). Thus, the marginal density of X is given by

f₁(x) = ∫_{−√(4/π − x²)}^{√(4/π − x²)} f(x, y) dy
      = ∫_{−√(4/π − x²)}^{√(4/π − x²)} (1 / area of the disk) dy
      = ∫_{−√(4/π − x²)}^{√(4/π − x²)} (1/4) dy
      = (1/2)√(4/π − x²).

Definition 7.7. Let X and Y be continuous random variables with joint probability density function f(x, y). The joint cumulative distribution function F(x, y) of X and Y is defined as

F(x, y) = P(X ≤ x, Y ≤ y) = ∫_{−∞}^{y} ∫_{−∞}^{x} f(u, v) du dv

for all (x, y) ∈ ℝ². From the fundamental theorem of calculus, we again obtain

f(x, y) = ∂²F / ∂x ∂y.

Example 7.11. If the joint cumulative distribution function of X and Y is given by

F(x, y) = (1/5)(2x³y + 3x²y²) for 0 < x, y < 1;  and 0 elsewhere,

then what is the joint density of X and Y?

Answer:

f(x, y) = ∂/∂x ∂/∂y [(1/5)(2x³y + 3x²y²)] = ∂/∂x [(1/5)(2x³ + 6x²y)] = (1/5)(6x² + 12xy) = (6/5)(x² + 2xy).

Hence, the joint density of X and Y is given by

f(x, y) = (6/5)(x² + 2xy) for 0 < x, y < 1;  and 0 elsewhere.

Example 7.12. Let X and Y have the joint density function

f(x, y) = 2x for 0 < x < 1; 0 < y < 1;  and 0 elsewhere.

What is P[(X + Y ≤ 1) ∪ (X ≤ 1/2)]?
Answer: [Figure: the event is the union of the region below the line x + y = 1 and the vertical strip x ≤ 1/2 in the unit square.]

P[(X + Y ≤ 1) ∪ (X ≤ 1/2)]
  = ∫₀^1 [∫₀^{1/2} 2x dx] dy + ∫₀^{1/2} [∫_{1/2}^{1−y} 2x dx] dy
  = 1/4 + ∫₀^{1/2} [(1 − y)² − 1/4] dy
  = 1/4 + 1/6 = 5/12.

Example 7.13. Let X and Y have the joint density function

f(x, y) = x + y for 0 ≤ x ≤ 1; 0 ≤ y ≤ 1;  and 0 elsewhere.

What is P(2X ≤ 1 | X + Y ≤ 1)?

Answer: We know that

P(2X ≤ 1 | X + Y ≤ 1) = P[(X ≤ 1/2) ∩ (X + Y ≤ 1)] / P(X + Y ≤ 1).

Now

P(X + Y ≤ 1) = ∫₀^1 ∫₀^{1−x} (x + y) dy dx = ∫₀^1 [x(1 − x) + (1 − x)²/2] dx = [x²/2 − x³/3 − (1 − x)³/6]₀^1 = 1/3.

Similarly,

P[(X ≤ 1/2) ∩ (X + Y ≤ 1)] = ∫₀^{1/2} ∫₀^{1−x} (x + y) dy dx = [x²/2 − x³/3 − (1 − x)³/6]₀^{1/2} = 11/48.

Thus,

P(2X ≤ 1 | X + Y ≤ 1) = (11/48)(3/1) = 11/16.
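Conditional probabilities like the one in Example 7.13 are easy to sanity-check by simulation. The sketch below is a minimal Monte Carlo check in Python, assuming a simple rejection sampler for the density f(x, y) = x + y on the unit square; the rejection bound M = 2, the sample size, and the seed are our choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_xy(n):
    """Rejection-sample n points from f(x, y) = x + y on the unit square (f <= 2 there)."""
    out = []
    while len(out) < n:
        x, y, u = rng.random(3)
        if u * 2.0 <= x + y:        # accept with probability f(x, y) / M, with M = 2
            out.append((x, y))
    return np.array(out)

pts = sample_xy(100_000)
x, y = pts[:, 0], pts[:, 1]

given = x + y <= 1                  # conditioning event X + Y <= 1
event = (2 * x <= 1) & given        # joint event (2X <= 1) and (X + Y <= 1)
print(event.sum() / given.sum())    # should be close to 11/16 = 0.6875
```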
7.3. Conditional Distributions

First, we motivate the definition of the conditional distribution using discrete random variables and then, based on this motivation, we give a general definition of the conditional distribution.

Let X and Y be two discrete random variables with joint probability density f(x, y). Then by definition of the joint probability density, we have f(x, y) = P(X = x, Y = y). If A = {X = x}, B = {Y = y} and f₂(y) = P(Y = y), then from the above equation we have

P({X = x} | {Y = y}) = P(A | B) = P(A ∩ B)/P(B) = P({X = x} and {Y = y})/P(Y = y) = f(x, y)/f₂(y).

If we write P({X = x} | {Y = y}) as g(x/y), then we have

g(x/y) = f(x, y)/f₂(y).

For discrete bivariate random variables, we can therefore write the conditional probability of the event {X = x} given the event {Y = y} as the ratio of the probability of the event {X = x} ∩ {Y = y} to the probability of the event {Y = y}, which is

g(x/y) = f(x, y)/f₂(y).

We use this fact to define the conditional probability density function given two random variables X and Y.

Definition 7.8. Let X and Y be any two random variables with joint density f(x, y) and marginals f₁(x) and f₂(y). The conditional probability density function g of X, given (the event) Y = y, is defined as

g(x/y) = f(x, y)/f₂(y),   f₂(y) > 0.

Similarly, the conditional probability density function h of Y, given (the event) X = x, is defined as

h(y/x) = f(x, y)/f₁(x),   f₁(x) > 0.

Example 7.14. Let X and Y be discrete random variables with joint probability function

f(x, y) = (1/21)(x + y) for x = 1, 2, 3; y = 1, 2;  and 0 elsewhere.

What is the conditional probability density function of X, given Y = 2?

Answer: We want to find g(x/2). Since g(x/2) = f(x, 2)/f₂(2), we should first compute the marginal of Y, that is f₂(2). The marginal of Y is given by

f₂(y) = Σ_{x=1}^{3} (1/21)(x + y) = (1/21)(6 + 3y).
Hence f₂(2) = 12/21. Thus, the conditional probability density function of X, given Y = 2, is

g(x/2) = f(x, 2)/f₂(2) = [(1/21)(x + 2)] / (12/21) = (x + 2)/12,   x = 1, 2, 3.

Example 7.15. Let X and Y be discrete random variables with joint probability density function

f(x, y) = (x + y)/32 for x = 1, 2; y = 1, 2, 3, 4;  and 0 otherwise.

What is the conditional probability of Y given X = x?

Answer:

f₁(x) = Σ_{y=1}^{4} f(x, y) = (1/32) Σ_{y=1}^{4} (x + y) = (1/32)(4x + 10).

Therefore

h(y/x) = f(x, y)/f₁(x) = [(1/32)(x + y)] / [(1/32)(4x + 10)] = (x + y)/(4x + 10).

Thus, the conditional probability of Y given X = x is

h(y/x) = (x + y)/(4x + 10) for x = 1, 2; y = 1, 2, 3, 4;  and 0 otherwise.

Example 7.16. Let X and Y be continuous random variables with joint pdf

f(x, y) = 12x for 0 < y < 2x < 1;  and 0 otherwise.

What is the conditional density function of Y given X = x?

Answer: First, we have to find the marginal of X:

f₁(x) = ∫_{−∞}^{∞} f(x, y) dy = ∫₀^{2x} 12x dy = 24x².

Thus, the conditional density of Y given X = x is

h(y/x) = f(x, y)/f₁(x) = 12x/(24x²) = 1/(2x),

for 0 < y < 2x < 1, and zero elsewhere.
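The conditional pmf in Example 7.15 can be checked by dividing the joint table by the marginal row sums; a small Python sketch (the variable names are ours) follows.

```python
from fractions import Fraction

# Joint pmf of Example 7.15: f(x, y) = (x + y)/32 for x = 1, 2 and y = 1, 2, 3, 4.
f = {(x, y): Fraction(x + y, 32) for x in (1, 2) for y in (1, 2, 3, 4)}

# Marginal of X and the conditional pmf h(y | x) = f(x, y) / f1(x).
f1 = {x: sum(f[(x, y)] for y in (1, 2, 3, 4)) for x in (1, 2)}
h = {(y, x): f[(x, y)] / f1[x] for x in (1, 2) for y in (1, 2, 3, 4)}

# Agrees with the closed form (x + y)/(4x + 10); each conditional pmf sums to 1.
assert all(h[(y, x)] == Fraction(x + y, 4 * x + 10) for x in (1, 2) for y in (1, 2, 3, 4))
print(h[(3, 2)])                              # 5/18
print(sum(h[(y, 1)] for y in (1, 2, 3, 4)))   # 1
```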
Example 7.17. Let X and Y be random variables such that X has density function

f₁(x) = 24x² for 0 < x < 1/2;  and 0 elsewhere,

and the conditional density of Y given X = x is

h(y/x) = y/(2x²) for 0 < y < 2x;  and 0 elsewhere.

What is the conditional density of X given Y = y over the appropriate domain?

Answer: The joint density f(x, y) of X and Y is given by

f(x, y) = h(y/x) f₁(x) = [y/(2x²)] 24x² = 12y for 0 < y < 2x < 1.

The marginal density of Y is given by

f₂(y) = ∫_{−∞}^{∞} f(x, y) dx = ∫_{y/2}^{1/2} 12y dx = 6y(1 − y), for 0 < y < 1.

Hence, the conditional density of X given Y = y is

g(x/y) = f(x, y)/f₂(y) = 12y/[6y(1 − y)] = 2/(1 − y).

Thus, the conditional density of X given Y = y is given by

g(x/y) = 2/(1 − y) for 0 < y < 2x < 1;  and 0 otherwise.

Note that for a specific x, the function f(x, y) (as a function of y) is the intersection (profile) of the surface z = f(x, y) with the plane x = constant. The conditional density h(y/x) is this profile of f(x, y) normalized by the factor 1/f₁(x).

7.4. Independence of Random Variables

In this section, we define the concept of stochastic independence of two random variables X and Y. The conditional probability density function g of X given Y = y usually depends on y. If g is independent of y, then the random variables X and Y are said to be independent. This motivates the following definition.

Definition 7.8. Let X and Y be any two random variables with joint density f(x, y) and marginals f₁(x) and f₂(y). The random variables X and Y are (stochastically) independent if and only if

f(x, y) = f₁(x) f₂(y) for all (x, y) ∈ R_X × R_Y.

Example 7.18. Let X and Y be discrete random variables with joint density

f(x, y) = 1/36 for 1 ≤ x = y ≤ 6;  2/36 for 1 ≤ x < y ≤ 6;  and 0 otherwise.

Are X and Y stochastically independent?
Answer: The marginals of X and Y are given by

f₁(x) = Σ_{y=1}^{6} f(x, y) = f(x, x) + Σ_{y > x} f(x, y) + Σ_{y < x} f(x, y) = 1/36 + (6 − x)(2/36) + 0 = (13 − 2x)/36, for x = 1, 2, ..., 6,

and

f₂(y) = Σ_{x=1}^{6} f(x, y) = f(y, y) + Σ_{x < y} f(x, y) + Σ_{x > y} f(x, y) = 1/36 + (y − 1)(2/36) + 0 = (2y − 1)/36, for y = 1, 2, ..., 6.

Since

f(1, 1) = 1/36 ≠ (11/36)(1/36) = f₁(1) f₂(1),

we conclude that f(x, y) ≠ f₁(x) f₂(y), and X and Y are not independent.

This example also illustrates that the marginals of X and Y can be determined if one knows the joint density f(x, y). However, if one only knows the marginals of X and Y, then it is not possible to find the joint density of X and Y unless the random variables are independent.

Example 7.19. Let X and Y have the joint density

f(x, y) = e^(−(x+y)) for 0 < x, y < ∞;  and 0 otherwise.

Are X and Y stochastically independent?

Answer: The marginals of X and Y are given by

f₁(x) = ∫₀^∞ e^(−(x+y)) dy = e^(−x)  and  f₂(y) = ∫₀^∞ e^(−(x+y)) dx = e^(−y).

Hence

f(x, y) = e^(−(x+y)) = e^(−x) e^(−y) = f₁(x) f₂(y).

Thus, X and Y are stochastically independent.

Notice that if the joint density f(x, y) of X and Y can be factored into two nonnegative functions, one depending solely on x and the other depending solely on y, then X and Y are independent. We can use this factorization approach to predict when X and Y are not independent.
Example 7.20. Let X and Y have the joint density

f(x, y) = x + y for 0 < x < 1; 0 < y < 1;  and 0 otherwise.

Are X and Y stochastically independent?

Answer: Notice that the joint density f(x, y) = x + y cannot be factored into two nonnegative functions, one depending only on x and the other depending only on y; therefore X and Y are not independent.

If X and Y are independent, then the random variables U = φ(X) and V = ψ(Y) are also independent, where φ, ψ : ℝ → ℝ are any real-valued functions. From this comment, one can conclude that if X and Y are independent, then the random variables e^X and Y³ + Y² + 1 are also independent.

Definition 7.9. The random variables X and Y are said to be independent and identically distributed (IID) if and only if they are independent and have the same distribution.

Example 7.21. Let X and Y be two independent random variables with identical probability density function given by

f(x) = e^(−x) for x > 0;  and 0 elsewhere.

What is the probability density function of W = min{X, Y}?

Answer: Let G(w) be the cumulative distribution function of W. Then

G(w) = P(W ≤ w)
     = 1 − P(W > w)
     = 1 − P(min{X, Y} > w)
     = 1 − P(X > w and Y > w)
     = 1 − P(X > w) P(Y > w)   (since X and Y are independent)
     = 1 − (∫_w^∞ e^(−x) dx)(∫_w^∞ e^(−y) dy)
     = 1 − e^(−2w).

Thus, the probability density function of W is

g(w) = dG/dw = d/dw [1 − e^(−2w)] = 2 e^(−2w).

Hence

g(w) = 2 e^(−2w) for w > 0;  and 0 elsewhere.
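A quick simulation makes the result of Example 7.21 concrete: the minimum of two independent exponential(1) variables behaves like an exponential with rate 2. A minimal NumPy sketch (sample size and seed are our choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

x = rng.exponential(scale=1.0, size=n)    # X ~ Exp(1)
y = rng.exponential(scale=1.0, size=n)    # Y ~ Exp(1), independent of X
w = np.minimum(x, y)                      # W = min{X, Y}

# For g(w) = 2 e^{-2w}: mean 1/2, variance 1/4, and P(W > w) = e^{-2w}.
print(w.mean(), w.var())                  # approximately 0.5 and 0.25
print((w > 1.0).mean(), np.exp(-2.0))     # both approximately 0.135
```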
7.5. Review Exercises

1. Let X and Y be discrete random variables with joint probability density function

f(x, y) = (1/21)(x + y) for x = 1, 2, 3; y = 1, 2;  and 0 otherwise.

What are the marginals of X and Y?

2. Roll a pair of unbiased dice. Let X be the maximum of the two faces and Y be the sum of the two faces. What is the joint density of X and Y?

3. For what value of c is the real-valued function

f(x, y) = c(x + 2y) for x = 1, 2; y = 1, 2;  and 0 otherwise

a joint density for some random variables X and Y?

4. Let X and Y have the joint density

f(x, y) = e^(−(x+y)) for 0 ≤ x, y < ∞;  and 0 otherwise.

What is P(X ≤ Y ≤ 2)?

5. If the random variable X is uniform on the interval from −1 to 1, and the random variable Y is uniform on the interval from 0 to 1, what is the probability that the quadratic equation t² + 2Xt + Y = 0 has real solutions? Assume X and Y are independent.

6. Let Y have a uniform distribution on the interval (0, 1), and let the conditional density of X given Y = y be uniform on the interval from 0 to √y. What is the marginal density of X for 0 < x < 1?

7. If the joint cumulative distribution of the random variables X and Y is

F(x, y) = (1 − e^(−x))(1 − e^(−y)) for x > 0, y > 0;  and 0 otherwise,

what is the joint probability density function of the random variables X and Y, and what is P(1 < X < 3, 1 < Y < 2)?

8. If the random variables X and Y have the joint density

f(x, y) = (6/7) x for 1 ≤ x + y ≤ 2, x ≥ 0, y ≥ 0;  and 0 otherwise,

what is the probability P(Y < X²)?

9. If the random variables X and Y have the joint density

f(x, y) = (6/7) x for 1 ≤ x + y ≤ 2, x ≥ 0, y ≥ 0;  and 0 otherwise,

what is the probability P[max(X, Y) > 1]?

10. Let X and Y have the joint probability density function

f(x, y) = (5/16) x y² for 0 < x < y < 2;  and 0 elsewhere.

What is the marginal density function of X where it is nonzero?

11. Let X and Y have the joint probability density function

f(x, y) = 4x for 0 < x < √y < 1;  and 0 elsewhere.

What is the marginal density function of Y, where nonzero?

12. A point (X, Y) is chosen at random from a uniform distribution on the circular disk of radius 1 centered at the point (1, 1). For a given value of X = x between 0 and 2 and for y in the appropriate domain, what is the conditional density function of Y?
13. Let X and Y be continuous random variables with joint density function

f(x, y) = (3/4)(2 − x − y) for 0 < x, 0 < y, x + y < 2;  and 0 otherwise.

What is the conditional probability P(X < 1 | Y < 1)?

14. Let X and Y be continuous random variables with joint density function

f(x, y) = 12x for 0 < y < 2x < 1;  and 0 otherwise.

What is the conditional density function of Y given X = x?

15. Let X and Y be continuous random variables with joint density function

f(x, y) = 24xy for x > 0, y > 0, x + y < 1;  and 0 otherwise.

What is the conditional probability density of Y given X = x?

16. Let X and Y be two independent random variables with identical probability density function given by

f(x) = e^(−x) for x > 0;  and 0 elsewhere.

What is the probability density function of W = max{X, Y}?

17. Let X and Y be two independent random variables with identical probability density function given by

f(x) = 3x²/θ³ for 0 ≤ x ≤ θ;  and 0 elsewhere,

for some θ > 0. What is the probability density function of W = min{X, Y}?

18. Ron and Glenna agree to meet between 5 P.M. and 6 P.M. Suppose that each of them arrives at a time distributed uniformly at random in this time interval, independently of the other. Each will wait for the other at most 10 minutes (and if the other does not show up, they will leave). What is the probability that they actually go out?

19. Let X and Y be two independent random variables distributed uniformly on the interval [0, 1]. What is the probability of the event Y ≥ 1/2 given that Y ≥ 2X?

20. Let X and Y have the joint density

f(x, y) = 8xy for 0 < y < x < 1;  and 0 otherwise.

What is P(X + Y > 1)?
21. Let X and Y be continuous random variables with joint density function

f(x, y) = 2 for 0 ≤ y ≤ x < 1;  and 0 otherwise.

Are X and Y stochastically independent?

22. Let X and Y be continuous random variables with joint density function

f(x, y) = 2x for 0 < x, y < 1;  and 0 otherwise.

Are X and Y stochastically independent?

23. A bus and a passenger arrive at a bus stop at a uniformly distributed time over the interval 0 to 1 hour. Assume the arrival times of the bus and passenger are independent of one another and that the passenger will wait up to 15 minutes for the bus to arrive. What is the probability that the passenger will catch the bus?

24. Let X and Y be continuous random variables with joint density function

f(x, y) = 4xy for 0 ≤ x, y ≤ 1;  and 0 otherwise.

What is the probability of the event X ≤ 1/2 given that Y ≥ 3/4?

25. Let X and Y be continuous random variables with joint density function

f(x, y) = 1/2 for 0 ≤ x ≤ y ≤ 2;  and 0 otherwise.

What is the probability of the event X ≤ 1/2 given that Y = 1?

26. If the joint density of the random variables X and Y is

f(x, y) = x if 0 ≤ x ≤ 1, 0 ≤ y ≤ 1;  2 − x if 1 ≤ x ≤ 2, 0 ≤ y ≤ 1;  and 0 otherwise,

what is the probability of the event X ≤ 3/2, Y ≤ 1/2?

27. If the joint density of the random variables X and Y is

f(x, y) = [e^(min{x,y}) − 1] e^(−(x+y)) if 0 < x, y < ∞;  and 0 otherwise,

then what is the marginal density function of X, where nonzero?
Chapter 8
PRODUCT MOMENTS OF BIVARIATE RANDOM VARIABLES

In this chapter, we define various product moments of a bivariate random variable. The main concept we introduce in this chapter is the notion of covariance between two random variables. Using this notion, we study the statistical dependence of two random variables.

8.1. Covariance of Bivariate Random Variables

First, we define the notion of the product moment of two random variables and then, using this product moment, we give the definition of covariance between two random variables.

Definition 8.1. Let X and Y be any two random variables with joint density function f(x, y). The product moment of X and Y, denoted by E(XY), is defined as

E(XY) = Σ_{x ∈ R_X} Σ_{y ∈ R_Y} x y f(x, y)   if X and Y are discrete,
E(XY) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x y f(x, y) dx dy   if X and Y are continuous.

Here, R_X and R_Y represent the range spaces of X and Y, respectively.

Definition 8.2. Let X and Y be any two random variables with joint density function f(x, y). The covariance between X and Y, denoted by Cov(X, Y) (or σ_XY), is defined as

Cov(X, Y) = E((X − µ_X)(Y − µ_Y)),

where µ_X and µ_Y are the means of X and Y, respectively.

Notice that the covariance of X and Y is really the product moment of X − µ_X and Y − µ_Y. Further, the mean of X is given by

µ_X = E(X) = ∫_{−∞}^{∞} x f₁(x) dx = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x f(x, y) dx dy,

and similarly the mean of Y is given by

µ_Y = E(Y) = ∫_{−∞}^{∞} y f₂(y) dy = ∫_{−∞}^{∞} ∫_{−∞}^{∞} y f(x, y) dy dx.

Theorem 8.1. Let X and Y be any two random variables. Then

Cov(X, Y) = E(XY) − E(X) E(Y).
Proof:

Cov(X, Y) = E((X − µ_X)(Y − µ_Y))
          = E(XY − µ_X Y − µ_Y X + µ_X µ_Y)
          = E(XY) − µ_X E(Y) − µ_Y E(X) + µ_X µ_Y
          = E(XY) − µ_X µ_Y − µ_Y µ_X + µ_X µ_Y
          = E(XY) − E(X) E(Y).

Corollary 8.1. Cov(X, X) = σ²_X.

Proof: Cov(X, X) = E(XX) − E(X) E(X) = E(X²) − µ²_X = Var(X) = σ²_X.

Example 8.1. Let X and Y be discrete random variables with joint density

f(x, y) = (x + 2y)/18 for x = 1, 2; y = 1, 2;  and 0 elsewhere.

What is the covariance σ_XY between X and Y?

Answer: The marginal of X is

f₁(x) = Σ_{y=1}^{2} (x + 2y)/18 = (2x + 6)/18.

Hence the expected value of X is

E(X) = Σ_{x=1}^{2} x f₁(x) = 1·f₁(1) + 2·f₁(2) = 8/18 + 2(10/18) = 28/18.

Similarly, the marginal of Y is

f₂(y) = Σ_{x=1}^{2} (x + 2y)/18 = (3 + 4y)/18.

Hence the expected value of Y is

E(Y) = Σ_{y=1}^{2} y f₂(y) = 1·f₂(1) + 2·f₂(2) = 7/18 + 2(11/18) = 29/18.

Further, the product moment of X and Y is given by

E(XY) = Σ_{x=1}^{2} Σ_{y=1}^{2} x y f(x, y)
      = f(1,1) + 2 f(1,2) + 2 f(2,1) + 4 f(2,2)
      = 3/18 + 2(5/18) + 2(4/18) + 4(6/18)
      = (3 + 10 + 8 + 24)/18 = 45/18.
Hence, the covariance between X and Y is given by

Cov(X, Y) = E(XY) − E(X) E(Y) = 45/18 − (28/18)(29/18) = [(45)(18) − (28)(29)]/324 = (810 − 812)/324 = −2/324 ≈ −0.0062.

Remark 8.1. For an arbitrary random variable, the product moment and covariance may or may not exist. Further, note that unlike variance, the covariance between two random variables may be negative.

Example 8.2. Let X and Y have the joint density function

f(x, y) = x + y if 0 < x, y < 1;  and 0 elsewhere.

What is the covariance between X and Y?

Answer: The marginal density of X is

f₁(x) = ∫₀^1 (x + y) dy = [xy + y²/2]_{y=0}^{y=1} = x + 1/2.

Thus, the expected value of X is given by

E(X) = ∫₀^1 x f₁(x) dx = ∫₀^1 x(x + 1/2) dx = [x³/3 + x²/4]₀^1 = 7/12.

Similarly (or using the fact that the density is symmetric in x and y), we get E(Y) = 7/12. Now we compute the product moment of X and Y:

E(XY) = ∫₀^1 ∫₀^1 x y (x + y) dx dy = ∫₀^1 ∫₀^1 (x²y + xy²) dx dy = ∫₀^1 [x³y/3 + x²y²/2]_{x=0}^{x=1} dy = ∫₀^1 (y/3 + y²/2) dy = 1/6 + 1/6 = 1/3.

Hence the covariance between X and Y is given by

Cov(X, Y) = E(XY) − E(X) E(Y) = 4/12 − (7/12)(7/12) = 48/144 − 49/144 = −1/144.
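The small negative covariance in Example 8.1 is easy to reproduce exactly; the Python sketch below (working with exact fractions, our choice) mirrors the hand computation.

```python
from fractions import Fraction

# Joint pmf of Example 8.1: f(x, y) = (x + 2y)/18 for x = 1, 2 and y = 1, 2.
f = {(x, y): Fraction(x + 2 * y, 18) for x in (1, 2) for y in (1, 2)}

EX  = sum(x * p for (x, _), p in f.items())        # E(X)  = 28/18
EY  = sum(y * p for (_, y), p in f.items())        # E(Y)  = 29/18
EXY = sum(x * y * p for (x, y), p in f.items())    # E(XY) = 45/18

cov = EXY - EX * EY
print(cov, float(cov))    # -1/162  ≈ -0.00617  (that is, -2/324)
```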
Example 8.3. Let X and Y be continuous random variables with joint density function

f(x, y) = 2 if 0 < y < 1 − x, 0 < x < 1;  and 0 elsewhere.

What is the covariance between X and Y?

Answer: The marginal density of X is given by

f₁(x) = ∫₀^{1−x} 2 dy = 2(1 − x).

Hence the expected value of X is

µ_X = E(X) = ∫₀^1 x f₁(x) dx = ∫₀^1 2x(1 − x) dx = 1/3.

Similarly, the marginal of Y is

f₂(y) = ∫₀^{1−y} 2 dx = 2(1 − y).

Hence the expected value of Y is

µ_Y = E(Y) = ∫₀^1 y f₂(y) dy = ∫₀^1 2y(1 − y) dy = 1/3.

The product moment of X and Y is given by

E(XY) = ∫₀^1 ∫₀^{1−x} 2xy dy dx = ∫₀^1 x[y²]₀^{1−x} dx = ∫₀^1 x(1 − x)² dx = ∫₀^1 (x − 2x² + x³) dx = 1/2 − 2/3 + 1/4 = 1/12.

Therefore, the covariance between X and Y is given by

Cov(X, Y) = E(XY) − E(X) E(Y) = 1/12 − 1/9 = 3/36 − 4/36 = −1/36.

Theorem 8.2. If X and Y are any two random variables and a, b, c, and d are real constants, then

Cov(aX + b, cY + d) = a c Cov(X, Y).

Proof:

Cov(aX + b, cY + d) = E((aX + b)(cY + d)) − E(aX + b) E(cY + d)
 = E(acXY + adX + bcY + bd) − (aE(X) + b)(cE(Y) + d)
 = ac E(XY) + ad E(X) + bc E(Y) + bd − [ac E(X) E(Y) + ad E(X) + bc E(Y) + bd]
 = ac [E(XY) − E(X) E(Y)]
 = ac Cov(X, Y).
Example 8.4. If the product moment of X and Y is 3 and the means of X and Y are both equal to 2, then what is the covariance of the random variables 2X + 10 and −(5/2)Y + 3?

Answer: Since E(XY) = 3 and E(X) = 2 = E(Y), the covariance of X and Y is given by

Cov(X, Y) = E(XY) − E(X) E(Y) = 3 − 4 = −1.

Then the covariance of 2X + 10 and −(5/2)Y + 3 is given by

Cov(2X + 10, −(5/2)Y + 3) = (2)(−5/2) Cov(X, Y) = (−5)(−1) = 5.

Remark 8.2. Notice that Theorem 8.2 can be improved further. That is, if X, Y, Z are three random variables, then

Cov(X + Y, Z) = Cov(X, Z) + Cov(Y, Z)  and  Cov(X, Y + Z) = Cov(X, Y) + Cov(X, Z).

The first formula can be established as follows. Consider

Cov(X + Y, Z) = E((X + Y)Z) − E(X + Y) E(Z)
 = E(XZ + YZ) − E(X)E(Z) − E(Y)E(Z)
 = E(XZ) − E(X)E(Z) + E(YZ) − E(Y)E(Z)
 = Cov(X, Z) + Cov(Y, Z).

8.2. Independence of Random Variables

In this section, we study the effect of independence on the product moment (and hence on the covariance). We begin with a simple theorem.

Theorem 8.3. If X and Y are independent random variables, then

E(XY) = E(X) E(Y).
Proof: Recall that X and Y are independent if and only if f(x, y) = f₁(x) f₂(y). Let us assume that X and Y are continuous. Therefore

E(XY) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x y f(x, y) dx dy
      = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x y f₁(x) f₂(y) dx dy
      = (∫_{−∞}^{∞} x f₁(x) dx)(∫_{−∞}^{∞} y f₂(y) dy)
      = E(X) E(Y).

If X and Y are discrete, then replace the integrals by appropriate sums to prove the same result.

Example 8.5. Let X and Y be two independent random variables with respective density functions

f(x) = 3x² if 0 < x < 1;  and 0 otherwise,
g(y) = 4y³ if 0 < y < 1;  and 0 otherwise.

What is E(X/Y)?

Answer: Since X and Y are independent, the joint density of X and Y is given by h(x, y) = f(x) g(y). Therefore

E(X/Y) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x/y) h(x, y) dx dy
       = ∫₀^1 ∫₀^1 (x/y) 3x² · 4y³ dx dy
       = (∫₀^1 3x³ dx)(∫₀^1 4y² dy)
       = (3/4)(4/3) = 1.

Remark 8.3. The independence of X and Y does not imply E(X/Y) = E(X)/E(Y); it only implies E(X/Y) = E(X) E(1/Y). Further, note that E(1/Y) is not equal to 1/E(Y).

Theorem 8.4. If X and Y are independent random variables, then the covariance between X and Y is always zero, that is,

Cov(X, Y) = 0.

Proof: Suppose X and Y are independent; then by Theorem 8.3, we have E(XY) = E(X) E(Y). Consider

Cov(X, Y) = E(XY) − E(X) E(Y) = E(X) E(Y) − E(X) E(Y) = 0.
Example 8.6. Let the random variables X and Y have the joint density

f(x, y) = 1/4 if (x, y) ∈ {(0, 1), (0, −1), (1, 0), (−1, 0)};  and 0 otherwise.

What is the covariance of X and Y? Are the random variables X and Y independent?

Answer: The joint density of X and Y is shown in the following table along with the marginals f₁(x) and f₂(y):

 y\x     −1     0     1   |  f₂(y)
  1       0    1/4    0   |   1/4
  0      1/4    0    1/4  |   2/4
 −1       0    1/4    0   |   1/4
 f₁(x)   1/4   2/4   1/4  |

From this table, we see that

0 = f(0, 0) ≠ f₁(0) f₂(0) = (2/4)(2/4) = 1/4,

and thus f(x, y) ≠ f₁(x) f₂(y) for some (x, y) in the range space of the joint variable (X, Y). Therefore X and Y are not independent.

Next, we compute the covariance between X and Y. For this we need E(X), E(Y) and E(XY). The expected value of X is

E(X) = Σ_{x=−1}^{1} x f₁(x) = (−1) f₁(−1) + (0) f₁(0) + (1) f₁(1) = −1/4 + 0 + 1/4 = 0.

Similarly, the expected value of Y is

E(Y) = Σ_{y=−1}^{1} y f₂(y) = (−1) f₂(−1) + (0) f₂(0) + (1) f₂(1) = −1/4 + 0 + 1/4 = 0.

The product moment of X and Y is given by

E(XY) = Σ_{x=−1}^{1} Σ_{y=−1}^{1} x y f(x, y) = 0,

since xy = 0 at each of the four points (0, 1), (0, −1), (1, 0), (−1, 0) where f is nonzero. Hence, the covariance between X and Y is given by

Cov(X, Y) = E(XY) − E(X) E(Y) = 0.

Remark 8.4. This example shows that if the covariance of X and Y is zero, that does not mean the random variables are independent. However, we know from Theorem 8.4 that if X and Y are independent, then Cov(X, Y) is always zero.
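Example 8.6 (zero covariance without independence) can be replayed directly from the four-point table; the following Python sketch does exactly that.

```python
from fractions import Fraction

# Joint pmf of Example 8.6: mass 1/4 on each of (0, 1), (0, -1), (1, 0), (-1, 0).
f = {(0, 1): Fraction(1, 4), (0, -1): Fraction(1, 4),
     (1, 0): Fraction(1, 4), (-1, 0): Fraction(1, 4)}

support = (-1, 0, 1)
f1 = {x: sum(f.get((x, y), Fraction(0)) for y in support) for x in support}
f2 = {y: sum(f.get((x, y), Fraction(0)) for x in support) for y in support}

EX  = sum(x * p for x, p in f1.items())
EY  = sum(y * p for y, p in f2.items())
EXY = sum(x * y * p for (x, y), p in f.items())

print(EXY - EX * EY)                      # 0 -> X and Y are uncorrelated
print(f.get((0, 0), 0), f1[0] * f2[0])    # 0 versus 1/4 -> X and Y are not independent
```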
8.3. Variance of the Linear Combination of Random Variables

Given two random variables X and Y, we determine the variance of their linear combination, that is, of aX + bY.

Theorem 8.5. Let X and Y be any two random variables and let a and b be any two real numbers. Then

Var(aX + bY) = a² Var(X) + b² Var(Y) + 2ab Cov(X, Y).

Proof:

Var(aX + bY) = E([aX + bY − E(aX + bY)]²)
 = E([aX + bY − aE(X) − bE(Y)]²)
 = E([a(X − µ_X) + b(Y − µ_Y)]²)
 = E(a²(X − µ_X)² + b²(Y − µ_Y)² + 2ab(X − µ_X)(Y − µ_Y))
 = a² E((X − µ_X)²) + b² E((Y − µ_Y)²) + 2ab E((X − µ_X)(Y − µ_Y))
 = a² Var(X) + b² Var(Y) + 2ab Cov(X, Y).

Example 8.7. If Var(X + Y) = 3, Var(X − Y) = 1, E(X) = 1 and E(Y) = 2, then what is E(XY)?

Answer:

Var(X + Y) = σ²_X + σ²_Y + 2 Cov(X, Y),
Var(X − Y) = σ²_X + σ²_Y − 2 Cov(X, Y).

Hence, we get

Cov(X, Y) = (1/4)[Var(X + Y) − Var(X − Y)] = (1/4)[3 − 1] = 1/2.

Therefore, the product moment of X and Y is given by

E(XY) = Cov(X, Y) + E(X) E(Y) = 1/2 + (1)(2) = 5/2.
Example 8.8. Let X and Y be random variables with Var(X) = 4, Var(Y) = 9 and Var(X − Y) = 16. What is Cov(X, Y)?

Answer:

Var(X − Y) = Var(X) + Var(Y) − 2 Cov(X, Y)
16 = 4 + 9 − 2 Cov(X, Y).

Hence Cov(X, Y) = −3/2.

Remark 8.5. Theorem 8.5 can be extended to three or more random variables. In the case of three random variables X, Y, Z, we have

Var(X + Y + Z) = Var(X) + Var(Y) + Var(Z) + 2Cov(X, Y) + 2Cov(Y, Z) + 2Cov(Z, X).

To see this, consider

Var(X + Y + Z) = Var((X + Y) + Z)
 = Var(X + Y) + Var(Z) + 2Cov(X + Y, Z)
 = Var(X + Y) + Var(Z) + 2Cov(X, Z) + 2Cov(Y, Z)
 = Var(X) + Var(Y) + 2Cov(X, Y) + Var(Z) + 2Cov(X, Z) + 2Cov(Y, Z)
 = Var(X) + Var(Y) + Var(Z) + 2Cov(X, Y) + 2Cov(Y, Z) + 2Cov(Z, X).

Theorem 8.6. If X and Y are independent random variables with E(X) = 0 = E(Y), then

Var(XY) = Var(X) Var(Y).

Proof:

Var(XY) = E((XY)²) − (E(XY))²
 = E((XY)²) − (E(X) E(Y))²
 = E(X² Y²)
 = E(X²) E(Y²)   (by independence of X and Y)
 = Var(X) Var(Y).
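Theorem 8.6 is easy to check numerically. The sketch below uses two independent Uniform(−1, 1) variables (our choice of distribution), for which Var(X) = Var(Y) = 1/3, so Var(XY) should come out near 1/9.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

x = rng.uniform(-1.0, 1.0, size=n)   # E(X) = 0, Var(X) = 1/3
y = rng.uniform(-1.0, 1.0, size=n)   # E(Y) = 0, Var(Y) = 1/3, independent of X

print(np.var(x * y))                 # approximately 1/9 ≈ 0.111
print(np.var(x) * np.var(y))         # approximately 1/9 as well
```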
Example 8.9. Let X and Y be independent random variables, each with density

f(x) = 1/(2θ) for −θ < x < θ;  and 0 otherwise.

If Var(XY) = 64/9, then what is the value of θ?

Answer:

E(X) = ∫_{−θ}^{θ} (1/(2θ)) x dx = (1/(2θ))[x²/2]_{−θ}^{θ} = 0.

Since Y has the same density, we conclude that E(Y) = 0. Hence

64/9 = Var(XY) = Var(X) Var(Y) = (∫_{−θ}^{θ} (1/(2θ)) x² dx)(∫_{−θ}^{θ} (1/(2θ)) y² dy) = (θ²/3)(θ²/3) = θ⁴/9.

Hence, we obtain θ⁴ = 64, or θ = 2√2.

8.4. Correlation and Independence

The functional dependency of the random variable Y on the random variable X can be gauged by examining the correlation coefficient. The definition of the correlation coefficient ρ between X and Y is given below.

Definition 8.3. Let X and Y be two random variables with variances σ²_X and σ²_Y, respectively, and let the covariance of X and Y be Cov(X, Y). Then the correlation coefficient ρ between X and Y is given by

ρ = Cov(X, Y) / (σ_X σ_Y).

Theorem 8.7. If X and Y are independent, the correlation coefficient between X and Y is zero.

Proof:

ρ = Cov(X, Y)/(σ_X σ_Y) = 0/(σ_X σ_Y) = 0.

Remark 8.4. The converse of this theorem is not true. If the correlation coefficient of X and Y is zero, then X and Y are said to be uncorrelated.

Lemma 8.1. If X* and Y* are the standardizations of the random variables X and Y, respectively, the correlation coefficient between X* and Y* is equal to the correlation coefficient between X and Y.

Proof: Let ρ* be the correlation coefficient between X* and Y*, and let ρ denote the correlation coefficient between X and Y. Since X* = (X − µ_X)/σ_X and Y* = (Y − µ_Y)/σ_Y each have variance 1, Theorem 8.2 gives Cov(X*, Y*) = Cov(X, Y)/(σ_X σ_Y) = ρ, and hence ρ* = ρ.
8.5. Moment Generating Function

The joint moment generating function of two random variables X and Y is defined as M(s, t) = E(e^{sX + tY}), whenever this expectation exists for all (s, t) in a neighborhood of the origin. As in the univariate case, the moments of X and Y can be recovered by differentiation:

E(X^k) = ∂^k M(s, t)/∂s^k evaluated at (0, 0),   E(Y^k) = ∂^k M(s, t)/∂t^k evaluated at (0, 0),   for k = 1, 2, 3, 4, ...,

and

E(XY) = ∂²M(s, t)/∂s ∂t evaluated at (0, 0).

Example 8.10. Let the random variables X and Y have the joint density

f(x, y) = e^(−y) for 0 < x < y < ∞;  and 0 otherwise.

What is the joint moment generating function for X and Y?

Answer: The joint moment generating function of X and Y is given by

M(s, t) = E(e^{sX + tY}) = ∫₀^∞ ∫_x^∞ e^{sx + ty} e^{−y} dy dx = ∫₀^∞ e^{sx} e^{−(1−t)x}/(1 − t) dx = 1/[(1 − t)(1 − s − t)],

provided s + t < 1 and t < 1.

Example 8.11. If the joint moment generating function of the random variables X and Y is

M(s, t) = e^{s + 3t + 2s² + 18t² + 12st},

what is the covariance of X and Y?

Answer:

∂M/∂s = (1 + 4s + 12t) M(s, t),  so  ∂M/∂s at (0, 0) = 1 · M(0, 0) = 1;
∂M/∂t = (3 + 36t + 12s) M(s, t),  so  ∂M/∂t at (0, 0) = 3 · M(0, 0) = 3.

Hence µ_X = 1 and µ_Y = 3. Now we compute the product moment of X and Y.
∂²M(s, t)/∂s ∂t = ∂/∂t [(1 + 4s + 12t) M(s, t)] = (1 + 4s + 12t) ∂M/∂t + 12 M(s, t).

Therefore

∂²M(s, t)/∂s ∂t at (0, 0) = (1)(3) + (1)(12) = 15.

Thus E(XY) = 15, and the covariance of X and Y is given by

Cov(X, Y) = E(XY) − E(X) E(Y) = 15 − (3)(1) = 12.
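Derivative bookkeeping like that in Example 8.11 is a natural job for a computer algebra system; the SymPy sketch below reproduces E(X), E(Y), E(XY) and the covariance from the given M(s, t).

```python
import sympy as sp

s, t = sp.symbols('s t')
M = sp.exp(s + 3*t + 2*s**2 + 18*t**2 + 12*s*t)   # joint mgf of Example 8.11

EX  = sp.diff(M, s).subs({s: 0, t: 0})            # E(X)  = 1
EY  = sp.diff(M, t).subs({s: 0, t: 0})            # E(Y)  = 3
EXY = sp.diff(M, s, t).subs({s: 0, t: 0})         # E(XY) = 15

print(EX, EY, EXY, sp.simplify(EXY - EX * EY))    # 1 3 15 12
```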
Theorem 8.9. If X and Y are independent, then

M_{aX+bY}(t) = M_X(at) M_Y(bt),

where a and b are real parameters.

Proof: Let W = aX + bY. Hence

M_{aX+bY}(t) = M_W(t) = E(e^{tW}) = E(e^{t(aX+bY)}) = E(e^{taX} e^{tbY}) = E(e^{taX}) E(e^{tbY})   (by Theorem 8.3)
            = M_X(at) M_Y(bt).

This theorem is very powerful. It helps us to find the distribution of a linear combination of independent random variables. The following examples illustrate how one can use this theorem to determine the distribution of a linear combination.

Example 8.12. Suppose the random variable X is normal with mean 2 and standard deviation 3, and the random variable Y is also normal with mean 0 and standard deviation 4. If X and Y are independent, then what is the probability distribution of the random variable X + Y?

Answer: Since X ∼ N(2, 9), the moment generating function of X is given by

M_X(t) = e^{µt + σ²t²/2} = e^{2t + (9/2)t²}.

Similarly, since Y ∼ N(0, 16),

M_Y(t) = e^{µt + σ²t²/2} = e^{(16/2)t²}.

Since X and Y are independent, the moment generating function of X + Y is given by

M_{X+Y}(t) = M_X(t) M_Y(t) = e^{2t + (9/2)t²} e^{(16/2)t²} = e^{2t + (25/2)t²}.

Hence X + Y ∼ N(2, 25). Thus, X + Y has a normal distribution with mean 2 and variance 25. From this information we can find the probability density function of W = X + Y as

f(w) = (1/√(50π)) e^{−(1/2)((w−2)/5)²},  −∞ < w < ∞.

Remark 8.6. In fact, if X and Y are independent normal random variables with means µ_X and µ_Y and variances σ²_X and σ²_Y, respectively, then aX + bY is also normal with mean aµ_X + bµ_Y and variance a²σ²_X + b²σ²_Y.

Example 8.13. Let X and Y be two independent and identically distributed random variables. If their common distribution is chi-square with one degree of freedom, then what is the distribution of X + Y? What is the moment generating function of X − Y?

Answer: Since X and Y are both χ²(1), the moment generating functions are

M_X(t) = 1/√(1 − 2t)  and  M_Y(t) = 1/√(1 − 2t).

Since the random variables X and Y are independent, the moment generating function of X + Y is given by

M_{X+Y}(t) = M_X(t) M_Y(t) = (1/√(1 − 2t))(1/√(1 − 2t)) = (1 − 2t)^{−2/2}.

Hence X + Y ∼ χ²(2). Thus, if X and Y are independent chi-square random variables, then their sum is also a chi-square random variable.

Next, we show that X − Y is not a chi-square random variable, even if X and Y are both chi-square:

M_{X−Y}(t) = M_X(t) M_Y(−t) = (1/√(1 − 2t))(1/√(1 + 2t)) = 1/√(1 − 4t²).

This moment generating function does not correspond to the moment generating function of a chi-square random variable with any degrees of freedom. Further, it is surprising that this moment generating function does not correspond to that of any known distribution.

Remark 8.7. If X and Y are chi-square and independent random variables, then their linear combination is not necessarily a chi-square random variable.
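Both claims of Example 8.13 can be probed by simulation: the sum of two independent χ²(1) variables should match the mean and variance of a χ²(2) (namely 2 and 4), while the difference, although it also has variance 4, is symmetric about 0 and so cannot be chi-square. A NumPy sketch (sample size and seed are our choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

x = rng.chisquare(df=1, size=n)
y = rng.chisquare(df=1, size=n)

s = x + y                            # should behave like a chi-square with 2 degrees of freedom
d = x - y                            # same variance, but symmetric about 0, hence not chi-square

print(s.mean(), s.var())             # approximately 2 and 4
print(d.mean(), (d < 0).mean())      # approximately 0 and 0.5 (a chi-square is never negative)
```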
Example 8.14. Let X and Y be two independent Bernoulli random variables with parameter p. What is the distribution of X + Y?

Answer: Since X and Y are Bernoulli with parameter p, their moment generating functions are

M_X(t) = (1 − p) + pe^t,  M_Y(t) = (1 − p) + pe^t.

Since X and Y are independent, the moment generating function of their sum is the product of their moment generating functions, that is,

M_{X+Y}(t) = M_X(t) M_Y(t) = [(1 − p) + pe^t][(1 − p) + pe^t] = [(1 − p) + pe^t]².

Hence X + Y ∼ BIN(2, p). Thus the sum of two independent Bernoulli random variables is a binomial random variable with parameters 2 and p.

8.6. Review Exercises

1. Suppose that X₁ and X₂ are random variables with zero mean and unit variance. If the correlation coefficient of X₁ and X₂ is 0.5, then what is the variance of Y = Σ_{k=1}^{2} k² X_k?

2. If the joint density of the random variables X and Y is

f(x, y) = 1/8 if (x, y) ∈ {(x, 0), (0, y) | x, y = −2, −1, 1, 2};  and 0 otherwise,

what is the covariance of X and Y? Are X and Y independent?

3. Suppose the random variables X and Y are independent and identically distributed. Let Z = aX + Y. If the correlation coefficient between X and Z is 1/3, then what is the value of the constant a?

4. Let X and Y be two independent random variables with chi-square distribution with 2 degrees of freedom. What is the moment generating function of the random variable 2X + 3Y? If possible, what is the distribution of 2X + 3Y?

5. Let X and Y be two independent random variables. If X ∼ BIN(n, p) and Y ∼ BIN(m, p), then what is the distribution of X + Y?
6. Let X and Y be two independent random variables. If X and Y are both standard normal, then what is the distribution of the random variable (1/2)(X² + Y²)?

7. If the joint probability density function of X and Y is

f(x, y) = 1 if 0 < x < 1; 0 < y < 1;  and 0 elsewhere,

then what is the joint moment generating function of X and Y?

8. Let the joint density function of X and Y be

f(x, y) = 1/36 if 1 ≤ x = y ≤ 6;  2/36 if 1 ≤ x < y ≤ 6;  and 0 otherwise.

What is the correlation coefficient of X and Y?

9. Suppose that X and Y are random variables with joint moment generating function

M(s, t) = [(1/4) e^s + (3/8) e^t + 3/8]^{10},

for all real s and t. What is the covariance of X and Y?

10. Suppose that X and Y are random variables with joint density function

f(x, y) = 1/(6π) for x²/9 + y²/4 ≤ 1;  and 0 for x²/9 + y²/4 > 1.

What is the covariance of X and Y? Are X and Y independent?

11. Let X and Y be two random variables. Suppose E(X) = 1, E(Y) = 2, Var(X) = 1, Var(Y) = 2, and Cov(X, Y) = 1/2. For what values of the constants a and b does the random variable aX + bY, whose expected value is 3, have minimum variance?

12. A box contains 5 white balls and 3 black balls. Draw 2 balls without replacement. If X represents the number of white balls and Y represents the number of black balls drawn, what is the covariance of X and Y?

13. If X represents the number of 1's and Y represents the number of 5's in three tosses of a fair six-sided die, what is the correlation between X and Y?

14. Let Y and Z be two random variables. If Var(Y) = 4, Var(Z) = 16, and Cov(Y, Z) = 2, then what is Var(3Z − 2Y)?

15. Three random variables X₁, X₂, X₃ have equal variances σ², a coefficient of correlation between X₁ and X₂ of ρ, and correlations between X₁ and X₃ and between X₂ and X₃ of zero. What is the correlation between Y and Z, where Y = X₁ + X₂ and Z = X₂ + X₃?
16. If X and Y are two independent Bernoulli random variables with parameter p, then what is the moment generating function of X − Y?

17. If X₁, X₂, ..., X_n are normal random variables with variance σ² and covariance ρσ² between any pair of the random variables, what is the variance of (1/n)(X₁ + X₂ + · · · + X_n)?

18. The coefficient of correlation between X and Y is 1/3, and σ²_X = a, σ²_Y = 4a, and σ²_Z = 114, where Z = 3X − 4Y. What is the value of the constant a?

19. Let X and Y be independent random variables with E(X) = 1, E(Y) = 2, and Var(X) = Var(Y) = 2. For what value of the constant k is the expected value of the random variable k(X² − Y²) + Y² equal to 2?

20. Let X be a random variable with finite variance. If Y = 15 − X, then what is the coefficient of correlation between the random variables X and (X + Y)X?

Chapter 9
CONDITIONAL EXPECTATION OF BIVARIATE RANDOM VARIABLES

This chapter examines the conditional mean and conditional variance associated with two random variables. The conditional mean is very useful in Bayesian estimation of parameters with a squared loss function. Further, the notion of the conditional mean sets the path for regression analysis in statistics.

9.1. Conditional Expected Values

Let X and Y be any two random variables with joint density f(x, y). Recall that the conditional probability density of X, given the event Y = y, is defined as

g(x/y) = f(x, y)/f₂(y),   f₂(y) > 0,

where f₂(y) is the marginal probability density of Y.
Similarly, the conditional probability density of Y, given the event X = x, is defined as

h(y/x) = f(x, y)/f₁(x),   f₁(x) > 0,

where f₁(x) is the marginal probability density of X.

Definition 9.1. The conditional mean of X given Y = y is defined as

µ_{X|y} = E(X | y),

where

E(X | y) = Σ_{x ∈ R_X} x g(x/y)   if X is discrete,
E(X | y) = ∫_{−∞}^{∞} x g(x/y) dx   if X is continuous.

Similarly, the conditional mean of Y given X = x is defined as µ_{Y|x} = E(Y | x), where

E(Y | x) = Σ_{y ∈ R_Y} y h(y/x)   if Y is discrete,
E(Y | x) = ∫_{−∞}^{∞} y h(y/x) dy   if Y is continuous.

Example 9.1. Let X and Y be discrete random variables with joint probability density function

f(x, y) = (1/21)(x + y) for x = 1, 2, 3; y = 1, 2;  and 0 otherwise.

What is the conditional mean of X given Y = y, that is, E(X|y)?

Answer: To compute the conditional mean of X given Y = y, we need the conditional density g(x/y) of X given Y = y. However, to find g(x/y), we need to know the marginal of Y, that is f₂(y). Thus, we begin with

f₂(y) = Σ_{x=1}^{3} (1/21)(x + y) = (1/21)(6 + 3y).

Therefore, the conditional density of X given Y = y is given by

g(x/y) = f(x, y)/f₂(y) = (x + y)/(6 + 3y),   x = 1, 2, 3.

The conditional expected value of X given the event Y = y is

E(X | y) = Σ_{x ∈ R_X} x g(x/y) = Σ_{x=1}^{3} x (x + y)/(6 + 3y) = [Σ_{x=1}^{3} x² + y Σ_{x=1}^{3} x]/(6 + 3y) = (14 + 6y)/(6 + 3y),   y = 1, 2.
Remark 9.1. Note that the conditional mean of X given Y = y depends only on y; that is, E(X|y) is a function of y. In the above example, this function is a rational function, namely φ(y) = (14 + 6y)/(6 + 3y).

Example 9.2. Let X and Y have the joint density function

f(x, y) = x + y for 0 < x, y < 1;  and 0 otherwise.

What is the conditional mean E(Y | X = 1/3)?

Answer: The marginal of X is

f₁(x) = ∫₀^1 (x + y) dy = [xy + y²/2]₀^1 = x + 1/2.

The conditional density of Y given X = x is

h(y/x) = f(x, y)/f₁(x) = (x + y)/(x + 1/2),

so

E(Y | X = x) = ∫₀^1 y (x + y)/(x + 1/2) dy = [x y²/2 + y³/3]₀^1 / (x + 1/2) = (x/2 + 1/3)/(x + 1/2).

Hence

E(Y | X = 1/3) = (1/6 + 1/3)/(1/3 + 1/2) = (1/2)/(5/6) = 3/5.

The mean of the random variable Y is a deterministic number. The conditional mean of Y given X = x, that is, E(Y|x), is a function φ(x) of the variable x. Using this function, we form φ(X). This function φ(X) is a random variable. Thus, starting from the deterministic function E(Y|x), we have formed the random variable E(Y|X) = φ(X). An important property of conditional expectation is given by the following theorem.

Theorem 9.1. The expected value of the random variable E(Y|X) is equal to the expected value of Y, that is,

E_x(E_{y|x}(Y|X)) = E_y(Y),

where E_x(X) stands for the expectation of X with respect to the distribution of X and E_{y|x}(Y|X) stands for the expected value of Y with respect to the conditional density h(y/X).

Proof: We prove this theorem for continuous variables and leave the discrete case to the reader.
E_x(E_{y|x}(Y|X)) = E_x(∫_{−∞}^{∞} y h(y/X) dy)
 = ∫_{−∞}^{∞} (∫_{−∞}^{∞} y h(y/x) dy) f₁(x) dx
 = ∫_{−∞}^{∞} ∫_{−∞}^{∞} y h(y/x) f₁(x) dy dx
 = ∫_{−∞}^{∞} y (∫_{−∞}^{∞} h(y/x) f₁(x) dx) dy
 = ∫_{−∞}^{∞} y (∫_{−∞}^{∞} f(x, y) dx) dy
 = ∫_{−∞}^{∞} y f₂(y) dy
 = E_y(Y).

Example 9.3. An insect lays Y eggs, where Y has a Poisson distribution with parameter λ. If the probability of each egg surviving is p, then on average how many eggs will survive?

Answer: Let X denote the number of surviving eggs. Then, given that Y = y (that is, given that the insect has laid y eggs), the random variable X has a binomial distribution with parameters y and p. Thus

X|Y ∼ BIN(Y, p),  Y ∼ POI(λ).

Therefore, the expected number of survivors is given by

E_x(X) = E_y(E_{x|y}(X|Y)) = E_y(pY)   (since X|Y ∼ BIN(Y, p))
       = p E_y(Y) = pλ   (since Y ∼ POI(λ)).

Definition 9.2. A random variable X is said to have a mixture distribution if the distribution of X depends on a quantity which also has a distribution.

Example 9.4. A fair coin is tossed. If a head occurs, 1 die is rolled; if a tail occurs, 2 dice are rolled. Let Y be the total on the die or dice. What is the expected value of Y?

Answer: Let X denote the outcome of tossing the coin. Then X ∼ BER(p), where the probability of success is p = 1/2, with X = 1 corresponding to a head (one die) and X = 0 to a tail (two dice). Hence

E_y(Y) = E_x(E_{y|x}(Y|X))
 = (1/2) E_{y|x}(Y|X = 1) + (1/2) E_{y|x}(Y|X = 0)
 = (1/2)(126/36) + (1/2)[(2 + 6 + 12 + 20 + 30 + 42 + 40 + 36 + 30 + 22 + 12)/36]
 = (1/2)(126/36 + 252/36)
 = 378/72 = 5.25.
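Example 9.4 is a convenient one to simulate: flip the coin, roll one or two dice accordingly, and average the totals. A NumPy sketch (seed and sample size are our choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000

heads = rng.random(n) < 0.5                     # fair coin: True -> roll one die
one_die = rng.integers(1, 7, size=n)            # total if one die is rolled
two_dice = rng.integers(1, 7, size=n) + rng.integers(1, 7, size=n)   # total if two dice are rolled

y = np.where(heads, one_die, two_dice)          # mixture: Y depends on the coin
print(y.mean())                                 # approximately 5.25 = (1/2)(3.5) + (1/2)(7)
```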
Note that the expected number of dots that show when 1 die is rolled is 126/36, and the expected number of dots that show when 2 dice are rolled is 252/36.

Theorem 9.2. Let X and Y be two random variables with means µ_X and µ_Y, and standard deviations σ_X and σ_Y, respectively. If the conditional expectation of Y given X = x is linear in x, then

E(Y | X = x) = µ_Y + ρ (σ_Y/σ_X)(x − µ_X),

where ρ denotes the correlation coefficient of X and Y.

Proof: We assume that the random variables X and Y are continuous. If they are discrete, the proof of the theorem follows exactly the same way by replacing the integrals with summations. We are given that E(Y|X = x) is linear in x, that is,

E(Y | X = x) = a x + b,   (9.0)

where a and b are two constants. Hence, from the above we get

∫_{−∞}^{∞} y h(y/x) dy = a x + b,
which implies

∫_{−∞}^{∞} y [f(x, y)/f₁(x)] dy = a x + b.

Multiplying both sides by f₁(x), we get

∫_{−∞}^{∞} y f(x, y) dy = (a x + b) f₁(x).   (9.1)

Now, integrating with respect to x, we get

∫_{−∞}^{∞} ∫_{−∞}^{∞} y f(x, y) dy dx = ∫_{−∞}^{∞} (a x + b) f₁(x) dx.

This yields

µ_Y = a µ_X + b.   (9.2)

Now, we multiply (9.1) by x and then integrate the resulting expression with respect to x to get

∫_{−∞}^{∞} ∫_{−∞}^{∞} x y f(x, y) dy dx = ∫_{−∞}^{∞} (a x² + b x) f₁(x) dx.

From this we get

E(XY) = a E(X²) + b µ_X.   (9.3)

Solving (9.2) and (9.3) for the unknowns a and b, we get

a = [E(XY) − µ_X µ_Y]/σ²_X = σ_XY/σ²_X = ρ σ_Y/σ_X.

Similarly, we get

b = µ_Y − ρ (σ_Y/σ_X) µ_X.

Substituting a and b into (9.0), we obtain the asserted result, and the proof of the theorem is now complete.

Example 9.5. Suppose X and Y are random variables with E(Y|X = x) = −x + 3 and E(X|Y = y) = −(1/4)y + 5. What is the correlation coefficient of X and Y?

Answer: From Theorem 9.2, we get

µ_Y + ρ (σ_Y/σ_X)(x − µ_X) = −x + 3.

Therefore, equating the coefficients of the x terms, we get

ρ σ_Y/σ_X = −1.   (9.4)

Similarly, since

µ_X + ρ (σ_X/σ_Y)(y − µ_Y) = −(1/4) y + 5,

we have

ρ σ_X/σ_Y = −1/4.   (9.5)

Multiplying (9.4) by (9.5), we get

ρ² = (−1)(−1/4) = 1/4.

Solving this, we get ρ = ±1/2. Since ρ σ_Y/σ_X = −1 and σ_Y/σ_X > 0, we get ρ = −1/2.

9.2. Conditional Variance

The variance of the conditional probability density function h(y/x) is called the conditional variance of Y given that X = x. This conditional variance is defined as follows:

Definition 9.3. Let X and Y be two random variables with joint density f(x, y), and let h(y/x) be the conditional density of Y given X = x. The conditional variance of Y given X = x, denoted by Var(Y|x), is defined as

Var(Y|x) = E(Y² | x) − (E(Y|x))²,

where E(Y|x) denotes the conditional mean of Y given X = x.
Example 9.6. Let X and Y be continuous random variables with joint probability density function

f(x, y) = e^(−y) for 0 < x < y < ∞;  and 0 otherwise.

What is the conditional variance of Y given the knowledge that X = x?

Answer: The marginal density f₁(x) is given by

f₁(x) = ∫_{−∞}^{∞} f(x, y) dy = ∫_x^∞ e^(−y) dy = [−e^(−y)]_x^∞ = e^(−x).

Thus, the conditional density of Y given X = x is

h(y/x) = f(x, y)/f₁(x) = e^(−y)/e^(−x) = e^(−(y−x)) for y > x.

Thus, given X = x, Y has an exponential distribution with parameter θ = 1 and location parameter x. The conditional mean of Y given X = x is

E(Y|x) = ∫_{−∞}^{∞} y h(y/x) dy = ∫_x^∞ y e^(−(y−x)) dy = ∫₀^∞ (z + x) e^(−z) dz   (where z = y − x)
       = x ∫₀^∞ e^(−z) dz + ∫₀^∞ z e^(−z) dz = x Γ(1) + Γ(2) = x + 1.

Similarly, we compute the second moment of the distribution h(y/x):

E(Y²|x) = ∫_{−∞}^{∞} y² h(y/x) dy = ∫_x^∞ y² e^(−(y−x)) dy = ∫₀^∞ (z + x)² e^(−z) dz   (where z = y − x)
        = x² ∫₀^∞ e^(−z) dz + ∫₀^∞ z² e^(−z) dz + 2x ∫₀^∞ z e^(−z) dz = x² Γ(1) + Γ(3) + 2x Γ(2) = (1 + x)² + 1.

Therefore

Var(Y|x) = E(Y²|x) − [E(Y|x)]² = (1 + x)² + 1 − (1 + x)² = 1.
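Because the conditional distribution in Example 9.6 is just a unit exponential shifted by x, a one-line simulation confirms E(Y|x) = x + 1 and Var(Y|x) = 1. A NumPy sketch, with x = 0.7 as an arbitrary illustrative value:

```python
import numpy as np

rng = np.random.default_rng(5)
x = 0.7                                    # arbitrary conditioning value
n = 1_000_000

# Given X = x, Y = x + Z with Z ~ Exp(1), i.e. h(y|x) = exp(-(y - x)) for y > x.
y = x + rng.exponential(scale=1.0, size=n)

print(y.mean(), y.var())                   # approximately x + 1 = 1.7 and 1.0
```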
Remark 9.2. The variance of Y is 2. This can be seen as follows: since the marginal of Y is given by f₂(y) = ∫₀^y e^(−y) dx = y e^(−y), the expected value of Y is E(Y) = ∫₀^∞ y² e^(−y) dy = Γ(3) = 2, and E(Y²) = ∫₀^∞ y³ e^(−y) dy = Γ(4) = 6. Thus, the variance of Y is Var(Y) = 6 − 4 = 2. However, given the knowledge that X = x, the variance of Y is 1. Thus, in a way, the prior knowledge reduces the variability (the variance) of a random variable.

Next, we simply state the following theorem concerning the conditional variance without proof.

Theorem 9.3. Let X and Y be two random variables with means µ_X and µ_Y, and standard deviations σ_X and σ_Y, respectively. If the conditional expectation of Y given X = x is linear in x, then

E_x(Var(Y|X)) = (1 − ρ²) Var(Y),

where ρ denotes the correlation coefficient of X and Y.

Example 9.7. Let E(Y|X = x) = 2x and Var(Y|X = x) = 4x², and let X have a uniform distribution on the interval from 0 to 1. What is the variance of Y?

Answer: If E(Y|X = x) is a linear function of x, then

E(Y|X = x) = µ_Y + ρ (σ_Y/σ_X)(x − µ_X)  and  E_x(Var(Y|X)) = σ²_Y (1 − ρ²).

We are given that

µ_Y + ρ (σ_Y/σ_X)(x − µ_X) = 2x.

Hence, equating the coefficients of the x terms, we get ρ σ_Y/σ_X = 2, which is ρ σ_Y = 2 σ_X. Further, we are given that

Var(Y|X = x) = 4x².   (9.6)

Since X ∼ UNIF(0, 1), its density is f(x) = 1 on the interval (0, 1), and σ²_X = 1/12. Therefore

E_x(Var(Y|X)) = ∫₀^1 4x² dx = 4/3.

By Theorem 9.3, σ²_Y (1 − ρ²) = 4/3, that is, σ²_Y − ρ² σ²_Y = 4/3. Since ρ² σ²_Y = 4 σ²_X = 4/12 = 1/3, we get

σ²_Y = 4/3 + 1/3 = 5/3.
Remark 9.3. Notice that, in Example 9.8, we calculated the variance of Y directly using the form of f(y). It is easy to note that f(y) has the form of an exponential density with parameter θ = 1, and therefore its variance is the square of the parameter. This straightforwardly gives σ²_Y = 1.

9.3. Regression Curve and Scedastic Curve

One of the major goals in most statistical studies is to establish relationships between two or more random variables. For example, a company would like to know the relationship between the potential sales of a new product and its price. Historically, regression analysis originated in the works of Sir Francis Galton (1822-1911), but most of the theory of regression analysis was developed by his student Sir Ronald Fisher (1890-1962).

Definition 9.4. Let X and Y be two random variables with joint probability density function f(x, y), and let h(y/x) be the conditional density of Y given X = x. Then the conditional mean

E(Y | X = x) = ∫_{−∞}^{∞} y h(y/x) dy

is called the regression function of Y on X. The graph of this regression function of Y on X is known as the regression curve of Y on X.

Example 9.9. Let X and Y be two random variables with joint density

f(x, y) = x e^(−x(1+y)) if x > 0, y > 0;  and 0 otherwise.

What is the regression function of Y on X?

Answer: The marginal density f₁(x) of X is

f₁(x) = ∫_{−∞}^{∞} f(x, y) dy = ∫₀^∞ x e^(−x(1+y)) dy = x e^(−x) ∫₀^∞ e^(−xy) dy = x e^(−x) [−(1/x) e^(−xy)]₀^∞ = e^(−x).

The conditional density of Y given X = x is

h(y/x) = f(x, y)/f₁(x) = x e^(−x(1+y))/e^(−x) = x e^(−xy).
E(Y |X = x) = = = = = y h(y/x) dy y x e xy dy 1 ze z dz (where z = xy) 1 Z (2) Thus, the regression function (or equation) of Y on X is given by E(Y |x) = 1 x for 0 < x <. 1 Definition 9.4. Let X and Y be two random variables with joint probability density function f (x, y) and let E(Y |X = x) be the regression function of Y on X. If this regression function is linear, then E(Y |X = x) is called a linear regression of Y on X. Otherwise, it is called nonlinear regression of Y on X. Example 9.10. Given the regression lines E(Y |X = x) = x + 2 and E(X|Y = y) = 1 + 1 2 y, what is the expected value of X? Answer: Since the conditional expectation E(Y |X = x) is linear in x, we get Y X Hence, equating the coefficients of x and constant terms, we get µX ) = x + 2. µY + ⇢ (x ⇢ Y X = 1 (9.8) Probability and Mathematical Statistics and µY ⇢ Y X µX = 2, respectively. Now, using (9.8) in (9.9), we get Similarly, since E(X|Y = y) is linear in y, we get µY µX = 2. and ⇢ X Y = 1 2 µX ⇢ X Y µY = 1, Hence, letting (9.10) into (9.11) and simplifying, we get 2µX µY = 2. Now adding (9.13) to (9.10), we see that µX = 4. 253 (9.9) (9.10) (9.11) (9.12) (9.13) Remark 9.4. In statistics, a linear regression usually means the conditional expectation E (Y /x) is linear in the parameters, but not in x. Therefore, E (Y /x) = ↵
+ ✓x2 will be a linear model, where as E (Y /x) = ↵ x✓ is not a linear regression model. Definition 9.5. Let X and Y be two random variables with joint probability density function f (x, y) and let h(y/x) is the conditional density of Y given X = x. Then the conditional variance V ar(Y |X = x) = 1 y2 h(y/x) dy Z 1 is called the scedastic function of Y on X. The graph of this scedastic function of Y on X is known as the scedastic curve of Y on X. Scedastic curves and regression curves are used for constructing families of bivariate probability density functions with specified marginals. Conditional Expectations of Bivariate Random Variables 254 9.4. Review Exercises 1. Given the regression lines E(Y |X = x) = x+2 and E(X|Y = y) = 1+ 1 what is expected value of Y? 2 y, 2. If the joint density of X and Y is f (x, y) = k 0 8 < if 1 < x < 1; x2 < y < 1 elsewhere, where k is a constant, what is E(Y |X = x)? : 3. Suppose the joint density of X and Y is defined by 10xy2 if 0 < x < y < 1 f (x, y) = ( 0 elsewhere. What is E X 2|Y = y? 4. Let X and Y joint density function f (x, y) = 2(x+y) 2e if 0 < x < y < 1 ( 0 elsewhere. What is the expected value of Y, given X = x, for x > 0? 5. Let X and Y joint density function 8xy if 0 < x < 1; 0 < y < x f (x, y) = ( 0 elsewhere. What is the regression curve y on x, that is, E (Y /X = x)? 6. Suppose X and Y are random variables with means µX and µY, respec3 4 y + 2. What are tively; and E(Y |X = x) = the values of µX and µY? 1 3 x + 10 and E(X|Y
= y) = 7. Let X and Y have joint density f (x, y) = 24 5 (x + y) for 0 2y x 1    ( 0 otherwise. What is the conditional expectation of X given Y = y? Probability and Mathematical Statistics 255 8. Let X and Y have joint density f (x, y) = c xy2 ( 0 for 0 y   2x; 1 x 5   otherwise. What is the conditional expectation of Y given X = x? 9. Let X and Y have joint density f (x, y) = y e ( 0 for y x 0 otherwise. What is the conditional expectation of X given Y = y? 10. Let X and Y have joint density f (x, y) = 2 xy ( 0 for 0 y   2x  2 otherwise. What is the conditional expectation of Y given X = x? 11. Let E(Y |X = x) = 2 + 5x, V ar(Y |X = x) = 3, and let X have the density function f (x) = 1 4 x e x 2 ( 0 if 0 < x < 1 otherwise. What is the mean and variance of random variable Y? 12. Let E(Y |X = x) = 2x and V ar(Y |X = x) = 4x2 + 3, and let X have the density function f (x) = 4 p⇡ x2 e x2 8 < 0 for 0 x <  1 elsewhere. : What is the variance of Y? 13. Let X and Y have joint density f (x, y) = 2 ( 0 for 0 < y < 1 x; and 0 < x < 1 otherwise. What is the conditional variance of Y given X = x? Conditional Expectations of Bivariate Random Variables 256 14. Let X and Y have joint density 4x for 0 < x < py < 1 f (x, y) = ( 0 elsewhere. What is the conditional variance of Y given X = x? 15. Let X and Y have joint density f (x, y) = 6 7 x ( 0 for 1 x + y 2; x 0, y 0  
elsewhere. What is the marginal density of Y? What is the conditional variance of X given Y = 3 2? 16. Let X and Y have joint density 12x for 0 < y < 2x < 1 f (x, y) = ( 0 elsewhere. What is the conditional variance of Y given X = 0.5? 17. Let the random variable W denote the number of students who take business calculus each semester at the University of Louisville. If the random variable W has a Poisson distribution with parameter equal to 300 and the probability of each student passing the course is 3 5, then on an average how many students will pass the business calculus? 18. If the conditional density of Y given X = x is given by f (y/x) = 5 y xy (1 8 < 0 and the marginal density of X is : x)5 y if y = 0, 1, 2,..., 5 otherwise, 4x3 if 0 < x < 1 f1(x) = 8 < 0 otherwise, then what is the conditional expectation of Y given the event X = x? : 19. If the joint density of the random variables X and Y is 2+(2x 1)(2y 2 f (x, y) = 0 8 < : 1) if 0 < x, y < 1 otherwise, Probability and Mathematical Statistics 257 then what is the regression function of Y on X? 20. If the joint density of the random variables X and Y is f (x, y) = emin{x,y} 1 (x+y) e if 0 < x, y < 1 8 < ⇥ 0 ⇤ otherwise, then what is the conditional expectation of Y given X = x? : Transformation of Random Variables and their Distributions 258 Chapter 10 TRANSFORMATION OF RANDOM VARIABLES AND THEIR DISTRIBUTIONS In many statistical applications, given the probability distribution of a univariate random variable X, one would like to know the probability distribution of another univariate random variable Y = (X), where is some known function. For example, if we know the probability distribution of the random variable X, we would like know the distribution of Y = ln(X). For univariate random variable X, some commonly used transformed random variable Y of X are: Y = X 2, Y = |X|, Y = |X|,
Y = ln(X), Y = 2 X. Similarly for a bivariate random variable (X, Y ), some of the most common transformations of X and Y are X + Y, XY, X Y, min{X, Y }, max{X, Y } or pX 2 + Y 2. In this chapter, we examine various methods for finding the distribution of a transformed univariate or bivariate random variable, when transformation and distribution of the variable are known. First, we treat the univariate case. Then we treat the bivariate case. We begin with an example for univariate discrete random variable. , and Example 10.1. The probability density function of the random variable X is shown in the table below. x f (x) 2 1 10 1 2 10 0 1 10 1 1 10 2 1 10 3 2 10 4 2 10 Probability and Mathematical Statistics 259 What is the probability density function of the random variable Y = X 2? Answer: The space of the random variable X is RX = { 1, 0, 1, 2, 3, 4}. 2, Then the space of the random variable Y is RY = {x2 | x RX }. Thus, RY = {0, 1, 4, 9, 16}. Now we compute the probability density function g(y) for y in RY. 2 g(0) = P (Y = 0) = P (X 2 = 0) = P (X = 0)) = 1 10 g(1) = P (Y = 1) = P (X 2 = 1) = P (X = g(4) = P (Y = 4) = P (X 2 = 4) = P (X = 1) + P (X = 1) = 2) + P (X = 2) = 3 10 2 10 g(9) = P (Y = 9) = P (X 2 = 9) = P (X = 3) = 2 10 g(16) = P (Y = 16) = P (X 2 = 16) = P (X = 4) = 2 10. We summarize the distribution of Y in the following table. y g(y) 0 1 10 1 3 10 9 2 10 16 2 10 4 2 10 3/10 2/10 1/10 2/10 1/10 -2 - 16 Density Function of X
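The computation in Example 10.1 can also be done mechanically: collect the probabilities of all x values that map to the same y = x². The short script below is just one way to organize that bookkeeping (exact fractions are used so the output can be compared directly with the table above).

```python
from fractions import Fraction

# pmf of X from Example 10.1
f = {-2: Fraction(1, 10), -1: Fraction(2, 10), 0: Fraction(1, 10),
      1: Fraction(1, 10),  2: Fraction(1, 10), 3: Fraction(2, 10),
      4: Fraction(2, 10)}

# pmf of Y = X^2: sum f(x) over every x that maps to the same value y = x^2
g = {}
for x, p in f.items():
    g[x * x] = g.get(x * x, Fraction(0)) + p

for y in sorted(g):
    print(y, g[y])   # 0: 1/10, 1: 3/10, 4: 2/10, 9: 2/10, 16: 2/10
```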
Density Function of Y = X 2 Example 10.2. The probability density function of the random variable X is shown in the table below. x f (x What is the probability density function of the random variable Y = 2X + 1? Transformation of Random Variables and their Distributions 260 Answer: The space of the random variable X is RX = {1, 2, 3, 4, 5, 6}. Then the space of the random variable Y is RY = {2x + 1 | x RX }. Thus, RY = {3, 5, 7, 9, 11, 13}. Next we compute the probability density function g(y) for y in RY. The pdf g(y) is given by 2 g(3) = P (Y = 3) = P (2X + 1 = 3) = P (X = 1)) = g(5) = P (Y = 5) = P (2X + 1 = 5) = P (X = 2)) = g(7) = P (Y = 7) = P (2X + 1 = 7) = P (X = 3)) = g(9) = P (Y = 9) = P (2X + 1 = 9) = P (X = 4)) = 1 6 1 6 1 6 1 6 g(11) = P (Y = 11) = P (2X + 1 = 11) = P (X = 5)) = g(13) = P (Y = 13) = P (2X + 1 = 13) = P (X = 6)) = 1 6 1 6. We summarize the distribution of Y in the following table. y g(y 11 13 1 6 1 6 1 6 The distribution of X and 2X + 1 are illustrated below. 1/6 1/ 11 13 Density Function of X Density Function of Y = 2X+1 In Example 10.1, we computed the distribution (that is, the probability density function) of transformed random variable Y = (X), where (x) = x2. This transformation is not either increasing or decreasing (that is, monotonic) in the space, RX, of the random variable X. Therefore, the distribution of Y turn out to be quite different from that of X. In Example 10.2, the form of distribution of the transform random variable Y = (X), where
(x) = 2x + 1, is essentially same. This is mainly due to the fact that (x) = 2x + 1 is monotonic in RX. Probability and Mathematical Statistics 261 In this chapter, we shall examine the probability density function of transformed random variables by knowing the density functions of the original random variables. There are several methods for finding the probability density function of a transformed random variable. Some of these methods are: (1) distribution function method (2) transformation method (3) convolution method, and (4) moment generating function method. Among these four methods, the transformation method is the most useful one. The convolution method is a special case of this method. The transformation method is derived using the distribution function method. 10.1. Distribution Function Method We have seen in chapter six that an easy way to find the probability density function of a transformation of continuous random variables is to determine its distribution function and then its density function by differentiation. Example 10.3. A box is to be constructed so that the height is 4 inches and its base is X inches by X inches. If X has a standard normal distribution, what is the distribution of the volume of the box? Answer: The volume of the box is a random variable, since X is a random variable. This random variable V is given by V = 4X 2. To find the density function of V, we first determine the form of the distribution function G(v) of V and then we differentiate G(v) to find the density function of V. The distribution function of V is given by G(v) = P (V v) v  pv X   1 2 pv ◆ 1 p2⇡ 1 2 x2 e dx  4X 2 1 2 ✓ 1 2 pv = P = P = 1 2 pv 1 2 pv Z = 2 0 Z 1 p2⇡ 1 2 x2 e dx (since the integrand is even). Transformation of Random Variables and their Distributions 262 Hence, by the Fundamental Theorem of Calculus, we get g(v) = dG(v) dv 1 2 pv 1 p2⇡ 1 2 x2 e dx! 1 2
✓ ◆ dpv dv = 2 = = = d 2 dv 1 p2⇡ 1 p2⇡ 1 1 2 Γ 0 Z e 1 2 ( 1 2 pv)2 e 1 8 v 1 2pv 1 2 1 e v 8 v p8 = V ⇠ GAM 8, ✓ 1 2. ◆ Example 10.4. If the density function of X is f (x) = 1 2 8 < 0 for 1 < x < 1 otherwise, what is the probability density function of Y = X 2? : Answer: We first find the cumulative distribution function of Y and then by differentiation, we obtain the density of Y. The distribution function G(y) of Y is given by G(y) = P (Y y)  X 2  py y 1 2 dx = P = P ( py = py Z = py. X  py)  Probability and Mathematical Statistics 263 Hence, the density function of Y is given by g(y) = = = dG(y) dy dpy dy 1 2 py for 0 < y < 1. 10.2. Transformation Method for Univariate Case The following theorem is the backbone of the transformation method. Theorem 10.1. Let X be a continuous random variable with probability density function f (x). Let y = T (x) be an increasing (or decreasing) function. Then the density function of the random variable Y = T (X) is given by g(y) = f (W (y)) dx dy where x = W (y) is the inverse function of T (x). Proof: Suppose y = T (x) is an increasing function. The distribution function G(y) of Y is given by G(y) = P (Y y)  = P (T (X) y)  W (y)) = P (X  W (y) = f (x) dx. Z 1 Transformation of Random Variables and their Distributions 264 Then, differentiating we get the density function of Y, which is g(y) = dG(y) dy f (x
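The distribution function method of Examples 10.3 and 10.4 lends itself to a quick numerical sanity check. The sketch below (assuming NumPy) compares the empirical distribution function of Y = X², with X uniform on (-1, 1), against the derived G(y) = √y at a few points.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=1_000_000)
y = x ** 2

# G(y) = P(Y <= y) = sqrt(y) for 0 < y < 1, as derived in Example 10.4.
for q in (0.04, 0.25, 0.49, 0.81):
    print(q, round(float((y <= q).mean()), 4), round(q ** 0.5, 4))
```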
. Theorem 10.2. Let X and Y be two continuous random variables with joint density f (x, y). Let U = P (X, Y ) and V = Q(X, Y ) be functions of X and Y. If the functions P (x, y) and Q(x, y) have single valued inverses, say X = R(U, V ) and Y = S(U, V ), then the joint density g(u, v) of U and V is given by g(u, v) = |J| f (R(u, v), S(u, v)), where J denotes the Jacobian and given by J = det @x @u @y @u ✓ @x @v @y @v ◆ = @x @u @y @v @x @v @y @u. Transformation of Random Variables and their Distributions 268 Example 10.8. Let X and Y have the joint probability density function 8 xy for 0 < x < y < 1 f (x, y) = ( 0 otherwise. What is the joint density of U = X Y and V = Y? Answer: Since we get by solving for X and. ) Hence, the Jacobian of the transformation is given by J = @x @u = v · 1 = v. @y @v @x @v @y @u u · 0 The joint density function of U and V is g(u, v) = |J| f (R(u, v), S(u, v)) Note that, since we have The last inequalities yield = |v| f (uv, v) = v 8 (uv) v = 8 uv3. 0 < x < y < 1 0 < uv < v < 1. 0 < uv < v 0 < v < 1. ) Probability and Mathematical Statistics 269 Therefore, we get. ) Thus, the joint density of U and V is given by 8 uv3 for 0 < u < 1; 0 < v < 1 g(u, v) = ( 0 otherwise. Example 10.9. Let each of the independent random variables X and Y have the density function f (x) = x e for 0 < x < 1 ( 0 otherwise. What is the joint density of U = X and V = 2X + 3Y and the domain on which this density is positive?
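Before turning to the answer to Example 10.9 below, here is a small Monte Carlo check of the Jacobian result obtained in Example 10.8. The sampling scheme used to generate points from f(x, y) = 8xy on 0 < x < y < 1 (inverse-CDF draws from the marginal of Y and then from the conditional of X given Y) is an assumption of this sketch, not something taken from the text.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Marginal of Y is 4y^3 on (0, 1); given Y = y, X has density 2x/y^2 on (0, y).
y = rng.random(n) ** 0.25          # inverse CDF of the density 4y^3
x = y * np.sqrt(rng.random(n))     # inverse CDF of the density 2x/y^2 on (0, y)

u, v = x / y, y

# g(u, v) = 8 u v^3 on the unit square means U and V are independent,
# with E[U] = 2/3 (density 2u) and E[V] = 4/5 (density 4v^3).
print("E[U]       ~", u.mean(), "   expected 2/3")
print("E[V]       ~", v.mean(), "   expected 4/5")
print("corr(U, V) ~", np.corrcoef(u, v)[0, 1], "   expected 0")
```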
Answer: Since U = X V = 2X + 3Y, ) we get by solving for X and. 9 = Hence, the Jacobian of the transformation is given by ; @y @v 1 3 ◆ ✓ @x @v @y @x @u = 1 · = 1 3. Transformation of Random Variables and their Distributions 270 The joint density function of U and V is g(u, v) = |J| f (R(u, v), S(u, v)), ✓ u e e e( u+v 3 ). Since we get Further, since v = 2u + 3y and 3y > 0, we have v > 2u. Hence, the domain of g(u, v) where nonzero is given by The joint density g(u, v) of the random variables U and V is given by 0 < 2u < v <. 1 g(u, v) = 1 3 e( u+v 3 ) 8 < 0 for 0 < 2u < v < 1 otherwise. Example 10.10. Let X and Y be independent random variables, each with density function : x e f (x) = 8 < 0 for 0 < x < 1 otherwise, where > 0. Let U = X + 2Y and V = 2X + Y. What is the joint density of U and V? : Answer: Since U = X + 2Y V = 2X + Y, ) Probability and Mathematical Statistics 271 we get by solving for X and. 9 >= Hence, the Jacobian of the transformation is given by >; @x @v 1 3 @y @y @x @ . The joint density function of U and V is g(u, v) = |J| f (R(u, v), S(u, v)) f (R(u, v)) f (S(u, v)) R(u,v) eS(u,v) 2 e[R(u,v)+S(u,v)] 2 e ( u+v 3 ). Hence, the joint density g(u, v) of the random variables U and V is given by g(u, v) = 1 3 2 e ( u
+v 3 ) 8 < 0 for otherwise. : Example 10.11. Let X and Y be independent random variables, each with density function f (x) = 1 p2⇡ e 1 2 x2, < x <. 1 1 Let U = X is the density of U? Y and V = Y. What is the joint density of U and V? Also, what Transformation of Random Variables and their Distributions 272 Answer: Since we get by solving for X and Y U = X Y V = Y. ) Hence, the Jacobian of the transformation is given by J = @y @v @x @u = v · (1) @y @u @x @v u · (0) = v. The joint density function of U and V is g(u, v) = |J| f (R(u, v), S(u, v)) = |v| f (R(u, v)) f (S(u, v)) = |v| 1 p2⇡ 1 2 R2(u,v) e 1 p2⇡ 1 2 S2(u,v) e = |v| = |v| = |v| 1 2⇡ 1 2⇡ 1 2⇡ 1 2 [R2(u,v)+S2(u,v)] e 1 2 [u2v2+v2] e e 1 2 v2(u2+1). Hence, the joint density g(u, v) of the random variables U and V is given by g(u, v) = |v| 1 2⇡ e 1 2 v2(u2+1), where < u < and < v <. 1 1 1 1 Probability and Mathematical Statistics 273 Next, we want to find the density of U. We can obtain this by finding the marginal of U from the joint density of U and V. Hence, the marginal g1(u) of U is given by g1(u) = 1 g(u, v) dv 1 2⇡ e 1 2 v2(u2+1) dv Z 1 1 Z 1 0 1 2⇡ 1 2⇡ |v| v Z 1 1
2⇡ 1 u2 + 1 1 ⇡ (u2 + 1) 1 2⇡. = = = = = e 1 2 v2(u2+1) dv e 1 2 v2(u2+1) dv + 1 v 0 Z 2 v2(u2+1) 1 0 e 2 u2 + 1 1 2 v2(u2+1) 1 1 0 e 2 u2 + 1 1 u2 + 1 ◆  1 + 2⇡ Thus U ⇠ CAU (1). Remark 10.1. If X and Y are independent and standard normal random variables, then the quotient X Y is always a Cauchy random variable. However, the converse of this is not true. For example, if X and Y are independent and each have the same density function f (x) = p2 ⇡ x2 1 + x4, < x <, 1 1 then it can be shown that the random variable X Y is a Cauchy random variable. Laha (1959) and Kotlarski (1960) have given a complete description of the family of all probability density function f such that the quotient X Y Transformation of Random Variables and their Distributions 274 follows the standard Cauchy distribution whenever X and Y are independent and identically distributed random variables with common density f. Example 10.12. Let X have a Poisson distribution with mean . Find a transformation T (x) so that V ar ( T (X) ) is free of , for large values of . Answer: We expand the function T (x) by Taylor’s series about . Then, neglecting the higher orders terms for large values of , we get T (x) = T () + (x ) T 0() + · · · · · · where T 0() represents derivative of T (x) at x = . Now, we compute the variance of T (X). V ar ( T (X) ) = V ar ( T () + (X ) T 0() + · · · ) = V ar ( T () ) + V ar ( (X = 0 + [T 0()]2 V ar(X =
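Carrying the expansion above one step further: since Var(X) = λ for a Poisson random variable, the approximation gives Var(T(X)) ≈ [T′(λ)]² λ for large λ, so any T whose derivative is proportional to 1/√λ stabilizes the variance; the familiar choice T(x) = √x makes the approximate variance 1/4, free of λ. A minimal numerical check (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(3)
for lam in (5, 20, 100, 1000):
    x = rng.poisson(lam, size=500_000)
    # Var(sqrt(X)) settles near 1/4 as lam grows, independently of lam.
    print(lam, round(float(np.sqrt(x).var()), 4))
```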
(u, v Z 1 u) du. Similarly, one can obtain the other two density functions. This completes the proof. In addition, if the random variables X and Y in Theorem 10.3 are independent and have the probability density functions f (x) and g(y) respectively, then we have hX+Y (z) = 1 g(y) f (z y) dy hXY (z) = h X Y (z) = Z 1 1 Z 1 1 Z 1 1 |y| g(y) f z y ✓ ◆ dy |y| g(y) f (zy) dy. Probability and Mathematical Statistics 277 Each of the following figures shows how the distribution of the random variable X + Y is obtained from the joint distribution of (X, Y ). Joint Density of (X, Y) Joint Density of (X, Y Marginal Density of 2 1 3 Marginal Density of 2 Example 10.14. Roll an unbiased die twice. If X denotes the outcome in the first roll and Y denotes the outcome in the second roll, what is the distribution of the random variable Z = max{X, Y }? Answer: The space of X is RX = {1, 2, 3, 4, 5, 6}. Similarly, the space of Y is RY = {1, 2, 3, 4, 5, 6}. Hence the space of the random variable (X, Y ) is RX ⇥ RY. The following table shows the distribution of (X, Y ). 1 2 3 4 5 6 1 36 1 36 1 36 1 36 1 36 1 36 1 1 36 1 36 1 36 1 36 1 36 1 36 2 1 36 1 36 1 36 1 36 1 36 1 36 3 1 36 1 36 1 36 1 36 1 36 1 36 4 1 36 1 36 1 36 1 36 1 36 1 36 5 1 36 1 36 1 36 1 36 1 36 1 36 6 The space of the random variable Z = max{X, Y } is RZ = {1, 2, 3, 4, 5, 6}. Thus Z = 1 only if (X, Y ) = (1, 1). Hence P (Z = 1) = 1 36. Similarly, Z = 2 only if (X, Y ) = (1, 2), (2, 2) or (2,
1). Hence, P(Z = 2) = 3/36. Proceeding in a similar manner, we get the distribution of Z, which is summarized in the table below.

z       1      2      3      4      5      6
h(z)   1/36   3/36   5/36   7/36   9/36   11/36

In this example, the random variable Z may be described as the best out of two rolls. Note that the probability density of Z can also be stated as

h(z) = (2z - 1)/36,   for z in {1, 2, 3, 4, 5, 6}.

10.4. Convolution Method for Sums of Random Variables

In this section, we illustrate how the convolution technique can be used to find the distribution of the sum of random variables when they are independent. This convolution technique does not work if the random variables are not independent.

Definition 10.1. Let f and g be two real valued functions. The convolution of f and g, denoted by f * g, is defined as

(f * g)(z) = ∫_{-∞}^{∞} f(z - y) g(y) dy = ∫_{-∞}^{∞} g(z - x) f(x) dx.

Hence from this definition it is clear that f * g = g * f.

Let X and Y be two independent random variables with probability density functions f(x) and g(y). Then by Theorem 10.3, we get

h(z) = ∫_{-∞}^{∞} f(z - y) g(y) dy.

Thus, this result shows that the density of the random variable Z = X + Y is the convolution of the density of X with the density of Y.

Example 10.15. What is the probability density of the sum of two independent random variables, each of which is uniformly distributed over the interval [0, 1]?

Answer: Let Z = X + Y, where X ~ UNIF(0, 1) and Y ~ UNIF(0, 1). Hence, the density function f(x) of the random variable X is given by

f(x) = 1 for 0 ≤ x ≤ 1, and f(x) = 0 otherwise.

Similarly, the density function g(y) of Y is given by g(y)
= 1 ( 0 for 0 y   1 otherwise. Since X and Y are independent, the density function of Z can be obtained by the method of convolution. Since, the sum z = x + y is between 0 and 2, we consider two cases. First, suppose 0 1, then z   h(z) = (f? g) (z) 1 f (z Z 1 1 x) g(x) dx = = = = = z f (z f (z dx x) g(x) dx x) g(x) dx + f (z 1 z Z x) g(x) dx + 0 x) g(x) dx (since f (z x) = 0 between z and 1) 0 Z = z. Similarly, if 1 z   2, then h(z) = (f? g) (z) = = = 1 f (z Z 1 1 x) g(x) dx f (z x) g(x) dx z 1 f (z 1 x) g(x) dx + 1 z Z 1 f (z x) g(x) dx 0 Z 0 Z f ( dx = x) g(x) dx (since f (z x) = 0 between 0 and z 1) z Z = 2 1 z. Transformation of Random Variables and their Distributions 280 Thus, the density function of Z = X + Y is given by h(z) = for < z 0  1 for 0 for for >>>>>>>< >>>>>>>: The graph of this density function looks like a tent and it is called a tent function. However, in literature, this density function is known as the Simpson’s distribution. Example 10.16. What is the probability density of the sum of two independent random variables, each of which is gamma with parameter ↵ = 1 and ✓ = 1? Answer: Let Z = X + Y, where X Hence, the density function f (x) of the random variable X is given by GAM(1, 1) and Y ⇠ ⇠ GAM(1, 1). f (x) = x e for 0 < x < ( 0 otherwise. Similarly, the density function g(y)
of Y is given by g(y) = y e for 0 < y < ( 0 otherwise. 1 1 Since X and Y are independent, the density function of Z can be obtained by the method of convolution. Notice that the sum z = x + y is between 0 Probability and Mathematical Statistics 281 and, and 0 < x < z. Hence, the density function of Z is given by 1 h(z) = (f? g) (z) = = = = = Z z x) g(x) dx f (z x) g(x) dx (z e x) e x dx e z+x e x dx e z dx 0 Z z = z e 1 Γ(2) 12 z2 = 1 e z 1. ⇠ GAM(1, 1) and Y GAM(1, 2). Thus, if X GAM(1, 1), Hence Z ⇠ GAM(1, 2), that X + Y is a gamma with ↵ = 2 and ✓ = 1. then X + Y Recall that a gamma random variable with ↵ = 1 is known as an exponential random variable with parameter ✓. Thus, in view of the above example, we see that the sum of two independent exponential random variables is not necessarily an exponential variable. ⇠ ⇠ Example 10.17. What is the probability density of the sum of two independent random variables, each of which is standard normal? Answer: Let Z = X + Y, where X density function f (x) of the random variable X is given by N (0, 1) and Y ⇠ ⇠ N (0, 1). Hence, the f (x) = 1 p2 ⇡ x2 2 e Similarly, the density function g(y) of Y is given by g(y) = 1 p2 ⇡ y2 2 e Since X and Y are independent, the density function of Z can be obtained by the method of convolution. Notice that the sum z = x + y is between 1 Transformation of Random Variables and their Distributions 282 and. Hence, the density function of Z is given by 1 h(z) = (f? g) (z) 1 f (z x) g(x) dx 1 (z e x)2 2 x2 2 dx e 1 e(
x z 2 )2 dx 1 Z z2 4 p⇡ 1 e Z e 1 z2 4 z2 4 e1 1 2⇡ 1 2⇡ 1 2⇡ 1 2⇡ 1 p4⇡ 1 p4⇡ 1 Z z2 4 e 1 2 e z 0 p2 2. 1 p⇡ e(x z 2 )2 dx w2 e dw, where w = x z 2 Z 1 1 1 p⇡ The integral in the brackets equals to one, since the integrand is the normal density function with mean µ = 0 and variance 2 = 1 2. Hence sum of two standard normal random variables is again a normal random variable with mean zero and variance 2. Example 10.18. What is the probability density of the sum of two independent random variables, each of which is Cauchy? Answer: Let Z = X + Y, where X density function f (x) of the random variable X and Y are is given by N (0, 1) and Y N (0, 1). Hence, the ⇠ ⇠ f (x) = 1 ⇡ (1 + x2) and g(y) = 1 ⇡ (1 + y2), respectively. Since X and Y are independent, the density function of Z can be obtained by the method of convolution. Notice that the sum z = x + y is between. Hence, the density function of Z is given by and 1 1 h(z) = (f? g) (z) = = = Z 1 1 Z 1 1 ⇡2 1 f (z x) g(x) dx 1 ⇡ (1 + (z 1 ⇡ (1 + x2) dx x)2) 1 1 + (z 1 Z 1 1 1 + x2 dx. x)2 Probability and Mathematical Statistics 283 To integrate the above integral, we decompose the integrand using partial fraction decomposition. Hence where 1 1 + (z 1 1 + x2 = 2 A x + B 1 + x2 + 2 C (z 1 + (z x)2 x) + D x)2
A = 1 z (4 + z2) = C and B = 1 4 + z2 = D. Now integration yields 1 ⇡2 = = = 1 1 1 + (z 1 1 + x2 dx x)2 Z 1 1 ⇡2 z2 (4 + z2) 1 ⇡2 z2 (4 + z2) 2 ⇡ (4 + z2).  ⇥ 1 + x2 z ln 1 + (z ✓ 0 + z2 ⇡ + z2 ⇡ ⇤ x)2 + z2 tan 1 x z2 tan 1(z ◆ 1 x) 1 Hence the sum of two independent Cauchy random variables is not a Cauchy random variable. If X CAU (0) and Y Example 10.18 that the random variable Z = X+Y Z CAU (0). This is a remarkable property of the Cauchy distribution. CAU (0), then it can be easily shown using is again Cauchy, that is ⇠ ⇠ 2 ⇠ So far we have considered the convolution of two continuous independent random variables. However, the concept can be modified to the case when the random variables are discrete. Let X and Y be two discrete random variables both taking on values that are integers. Let Z = X + Y be the sum of the two random variables. Hence Z takes values on the set of integers. Suppose that X = n where n is n. Thus the events (Z = z) some integer. Then Z = z if and only if Y = z n) where is the union of the pair wise disjoint events (X = n) and (Y = z n runs over the integers. The cdf H(z) of Z can be obtained as follows: P (Z = z) = 1 P (X = n) P (Y = z n= X 1 n) which is 1 h(z) = n= X 1 f (n) g(z n), Transformation of Random Variables and their Distributions 284 where F (x) and G(y) are the cdf of X and Y, respectively. Definition 10.2. Let X and Y be two independent integer-valued discrete random variables, with pdf
s f(x) and g(y), respectively. Then the convolution of f(x) and g(y) is the pdf h = f * g given by

h(m) = Σ_{n=-∞}^{∞} f(n) g(m - n),   for m = ..., -2, -1, 0, 1, 2, ....

The function h(z) is the pdf of the discrete random variable Z = X + Y.

Example 10.19. Let each of the random variables X and Y represent the outcome of rolling a six-sided die. What is the probability density function of the sum of X and Y?

Answer: Since the range of X as well as Y is {1, 2, 3, 4, 5, 6}, the range of Z = X + Y is RZ = {2, 3, 4, ..., 11, 12}. The pdf of Z is given by

h(2) = f(1) g(1) = (1/6)(1/6) = 1/36
h(3) = f(1) g(2) + f(2) g(1) = 2/36
h(4) = f(1) g(3) + f(2) g(2) + f(3) g(1) = 3/36.

Continuing in this manner we obtain h(5) = 4/36, h(6) = 5/36, h(7) = 6/36, h(8) = 5/36, h(9) = 4/36, h(10) = 3/36, h(11) = 2/36, and h(12) = 1/36. Putting these into one expression we have

h(z) = Σ_{n=1}^{z-1} f(n) g(z - n) = (6 - |z - 7|)/36,   z = 2, 3, 4, ..., 12.

It is easy to note that the convolution operation is commutative as well as associative. Using the associativity of the convolution operation one can compute the pdf of the random variable Sn = X1 + X2 + · · · + Xn, where X1, X2, ..., Xn are independent random variables each having the same pdf f(x). Then the pdf of S1 is f(x). Since Sn = Sn-1 + Xn and the pdf of X
n is f (x), the pdf of Sn can be obtained by induction. Probability and Mathematical Statistics 285 10.5. Moment Generating Function Method We know that if X and Y are independent random variables, then MX+Y (t) = MX (t) MY (t). This result can be used to find the distribution of the sum X + Y. Like the convolution method, this method can be used in finding the distribution of X + Y if X and Y are independent random variables. We briefly illustrate the method using the following example. Example 10.20. Let X probability density function of X + Y if X and Y are independent? P OI(1) and Y P OI(2). What is the ⇠ ⇠ Answer: Since, X P OI(1) and Y ⇠ ⇠ P OI(2), we get and MX (t) = e1 (et 1) MY (t) = e2 (et 1). Further, since X and Y are independent, we have MX+Y (t) = MX (t) MY (t) 1) e2 (et = e1 (et 1)+2 (et = e1 (et = e(1+2)(et 1), 1) 1) that is, X +Y is given by ⇠ P OI(1+2). Hence the density function h(z) of Z = X +Y h(z) = (1+2) e z! (1 + 2)z for z = 0, 1, 2, 3,... 8 < 0 otherwise. : Compare this example to Example 10.13. You will see that moment method has a definite advantage over the convolution method. However, if you use the moment method in Example 10.15, then you will have problem identifying the form of the density function of the random variable X + Y. Thus, it is difficult to say which method always works. Most of the time we pick a particular method based on the type of problem at hand. Transformation of Random Variables and their Distributions 286 Example 10.21. What is the probability density function of the sum of two independent random
variable, each of which is gamma with parameters ✓ and ↵? Answer: Let X and Y be two independent gamma random variables with parameters ✓ and ↵, that is X GAM(✓, ↵). From Theorem 6.3, the moment generating functions of X and Y are obtained as ↵, respectively. Since, X and Y MX (t) = (1 are independent, we have ↵ and MY (t) = (1 GAM(✓, ↵) and Y ✓) ✓) ⇠ ⇠ MX+Y (t) = MX (t) MY (t) ↵ (1 2↵. = (1 = (1 ✓) ✓) ↵ ✓) Thus X + Y has a moment generating function of a gamma random variable with parameters ✓ and 2↵. Therefore X + Y ⇠ GAM(✓, 2↵). 10.6. Review Exercises 1. Let X be a continuous random variable with density function e 2x + 1 x 2 e f (x) = ( 0 for 0 < x < 1 otherwise. If Y = e 2X, then what is the density function of Y where nonzero? 2. Suppose that X is a random variable with density function f (x) = 3 8 x2 ( 0 for 0 < x < 2 otherwise. Let Y = mX 2, where m is a fixed positive number. What is the density function of Y where nonzero? 3. Let X be a continuous random variable with density function f (x) = 2x 2 e for x > 0 ( 0 otherwise and let Y = e X. What is the density function g(y) of Y where nonzero? Probability and Mathematical Statistics 287 4. What is the probability density of the sum of two independent random variables, each of which is uniformly distributed over the interval [ 2, 2]? 5. Let X and Y be random variables with joint density function f (x, y) = x e ( 0 for elsewhere. If Z = X + 2Y, then what is the joint density of X and Z where nonzero? 6. Let X be a continuous random variable with density function f (x) = 2 x2 ( 0 for 1 < x < 2 elsewhere. If Y = pX, then what is the density function
of Y for 1 < y < p2? 7. What is the probability density of the sum of two independent random variables, each of which has the density function given by 10 x 50 f (x) = ( 0 for 0 < x < 10 elsewhere? 8. What is the probability density of the sum of two independent random variables, each of which has the density function given by f (x) = a x2 ( 0 for a x <  1 elsewhere? 9. Roll an unbiased die 3 times. If U denotes the outcome in the first roll, V denotes the outcome in the second roll, and W denotes the outcome of the third roll, what is the distribution of the random variable Z = max{U, V, W }? 10. The probability density of V, the velocity of a gas molecule, by MaxwellBoltzmann law is given by f (v) = 4 h3 p⇡ v2 e h2v2 8 < 0 for 0 v <  1 otherwise, where h is the Plank’s constant. If m represents the mass of a gas molecule, then what is the probability density of the kinetic energy Z = 1 : 2 mV 2? Transformation of Random Variables and their Distributions 288 11. If the random variables X and Y have the joint density f (x, y) = 6 7 x for 1 x + y 2, x 0 otherwise, what is the joint density of U = 2X + 3Y and V = 4X + Y? : 12. If the random variables X and Y have the joint density f (x, y) = 6 7 x for 1 x + y 2, x 0 otherwise, : what is the density of X Y? 13. Let X and Y have the joint probability density function f (x, y) = 5 16 xy2 for 0 < x < y < 2 ( 0 elsewhere. What is the joint density function of U = 3X it is nonzero? 2Y and V = X + 2Y where 14. Let X and Y have the joint probability density function 4x for 0 < x < py < 1 f (x, y) = ( 0 elsewhere. What is the joint density function of U = 5X it is nonzero? 2Y and V = 3X + 2Y where 15. Let X and Y have the joint probability density function 4x for 0 <
x < py < 1 f (x, y) = ( 0 elsewhere. What is the density function of X Y? 16. Let X and Y have the joint probability density function 4x for 0 < x < py < 1 f (x, y) = ( 0 elsewhere. Probability and Mathematical Statistics 289 What is the density function of X Y? 17. Let X and Y have the joint probability density function 4x for 0 < x < py < 1 f (x, y) = ( 0 elsewhere. What is the density function of XY? 18. Let X and Y have the joint probability density function f (x, y) = 5 16 xy2 for 0 < x < y < 2 ( 0 elsewhere. What is the density function of Y X? 19. If X an uniform random variable on the interval [0, 2] and Y is an uniform random variable on the interval [0, 3], then what is the joint probability density function of X + Y if they are independent? 20. What is the probability density function of the sum of two independent random variable, each of which is binomial with parameters n and p? 21. What is the probability density function of the sum of two independent random variable, each of which is exponential with mean ✓? 22. What is the probability density function of the average of two independent random variable, each of which is Cauchy with parameter ✓ = 0? 23. What is the probability density function of the average of two independent random variable, each of which is normal with mean µ and variance 2? 24. Both roots of the quadratic equation x2 + ↵x + = 0 can take all values from 1 to +1 with equal probabilities. What are the probability density functions of the coefficients ↵ and ? 25. If A, B, C are independent random variables uniformly distributed on the interval from zero to one, then what is the probability that the quadratic equation Ax2 + Bx + C = 0 has real solutions? 26. The price of a stock on a given trading day changes according to the 8. Find the distribution f ( distribution for the change in stock price after two (independent) trading days. 8, and f (2) = 1 2, f (1) = 1 4, f (0) = 1 1) = 1 Some Special Discrete Bivariate Distributions 290 Chapter 11 SOME SPECIAL DIS
CRETE BIVARIATE DISTRIBUTIONS In this chapter, we shall examine some bivariate discrete probability density functions. Ever since the first statistical use of the bivariate normal distribution (which will be treated in Chapter 12) by Galton and Dickson in 1886, attempts have been made to develop families of bivariate distributions to describe non-normal variations. In many textbooks, only the bivariate normal distribution is treated. This is partly due to the dominant role the bivariate normal distribution has played in statistical theory. Recently, however, other bivariate distributions have started appearing in probability models and statistical sampling problems. This chapter will focus on some well known bivariate discrete distributions whose marginal distributions are wellknown univariate distributions. The book of K.V. Mardia gives an excellent exposition on various bivariate distributions. 11.1. Bivariate Bernoulli Distribution We define a bivariate Bernoulli random variable by specifying the form of the joint probability distribution. Definition 11.1. A discrete bivariate random variable (X, Y ) is said to have the bivariate Bernoulli distribution if its joint probability density is of the form f (x, y) = 1 y)! px 1 py 2 (1 x! y! (1 x 0 8 < : p1 p2)1 x y, if x, y = 0, 1 otherwise, Probability and Mathematical Statistics 291 where 0 < p1, p2, p1 + p2 < 1 and x + y random variable by writing (X, Y )  BER (p1, p2). ⇠ 1. We denote a bivariate Bernoulli In the following theorem, we present the expected values and the variances of X and Y, the covariance between X and Y, and their joint moment generating function. Recall that the joint moment generating function of X esX+tY and Y is defined as M (s, t) := E. Theorem 11.1. Let (X, Y ) ters. Then ⇠ BER (p1, p2), where p1 and p2 are parame- E(X) = p1 E(Y ) = p2 V ar(X) = p1 (1 V ar(Y ) = p2 (1 p1) p
be shown that E(X 2) = p1 and E(Y 2) = p2. Thus, we have V ar(X) = E(X 2) E(X)2 = p1 p2 1 = p1 (1 p1) and V ar(Y ) = E(Y 2) E(Y )2 = p2 p2 2 = p2 (1 p2). This completes the proof of the theorem. The next theorem presents some information regarding the conditional distributions f (x/y) and f (y/x). Probability and Mathematical Statistics 293 Theorem 11.2. Let (X, Y ) BER (p1, p2), where p1 and p2 are parameters. Then the conditional distributions f (y/x) and f (x/y) are also Bernoulli and ⇠ E(Y /x) = E(X/y) = V ar(Y /x) = p2 (1 1 p1 (1 1 p2 (1 V ar(X/y) = p1 (1 x) p1 y) p2 p1 (1 p1 (1 p2) (1 p1)2 p2) (1 p2)2 x) y). Proof: Notice that f (y/x) = = = f (x, y) f1(x) f (x, y) 1 f (x, y) y=0 X f (x, y) f (x, 0) + f (x, 1) x = 0, 1; y = 0, 1; 0 x + y 1.   Hence and f (1/0) = = = f (0, 1) f (0, 0) + f (0, 1) p2 p1 p1 p2 + p2 p2 1 1 f (1/1) = f (1, 1) f (1, 0) + f (1, 1) = 0 p1 + 0 = 0. Now we compute the conditional expectation E(Y /x) for x = 0, 1. Hence 1 E(Y /x = 0) = y f (y/0) y
=0 X = f (1/0) p2 = p1 1 Some Special Discrete Bivariate Distributions 294 and E(Y /x = 1) = f (1/1) = 0. Merging these together, we have E(Y /x) = Similarly, we compute x) p2 (1 1 p1 x = 0, 1. E(Y 2/x = 0) = y2 f (y/0) 1 and Therefore y=0 X = f (1/0) p2 = p1 1 E(Y 2/x = 1) = f (1/1) = 0. V ar(Y /x = 0) = E(Y 2/x = 0) E(Y /x = 0)2 2 p2 = = = p2 1 p2(1 (1 p2 (1 (1 p1 1 ✓ p1) p1)2 p1 p1)2 p1 ◆ p2 2 p2) and V ar(Y /x = 1) = 0. Merging these together, we have V ar(Y /x) = p2 (1 p1 (1 p2) (1 p1)2 x) x = 0, 1. The conditional expectation E(X/y) and the conditional variance V ar(X/y) can be obtained in a similar manner. We leave their derivations to the reader. 11.2. Bivariate Binomial Distribution The bivariate binomial random variable is defined by specifying the form of the joint probability distribution. Probability and Mathematical Statistics 295 Definition 11.2. A discrete bivariate random variable (X, Y ) is said to have the bivariate binomial distribution with parameters n, p1, p2 if its joint probability density is of the form f (x, y) = n! y)! px 1 py 2 (1 x! y! (n x 8 < 0 p1 p2)n x y, if x, y = 0, 1,..., n otherwise, where 0 < p1, p2, p1+p2 < 1, x+y a bivariate bin
omial random variable by writing (X, Y ) :  n and n is a positive integer. We denote BIN (n, p1, p2). ⇠ Bivariate binomial distribution is also known as trinomial distribution. It will be shown in the proof of Theorem 11.4 that the marginal distributions of X and Y are BIN (n, p1) and BIN (n, p2), respectively. The following two examples illustrate the applicability of bivariate bino- mial distribution. Example 11.1. In the city of Louisville on a Friday night, radio station A has 50 percent listeners, radio station B has 30 percent listeners, and radio station C has 20 percent listeners. What is the probability that among 8 listeners in the city of Louisville, randomly chosen on a Friday night, 5 will be listening to station A, 2 will be listening to station B, and 1 will be listening to station C? Answer: Let X denote the number listeners that listen to station A, and Y denote the listeners that listen to station B. Then the joint distribution of X and Y is bivariate binomial with n = 8, p1 = 5 10. The probability that among 8 listeners in the city of Louisville, randomly chosen on a Friday night, 5 will be listening to station A, 2 will be listening to station B, and 1 will be listening to station C is given by 10, and p2 = 3 P (X = 5, Y = 2) = f (5, 2) n! ✓ p2)n x y 1 py px 2 (1 3 10 2 ◆ ✓ 2 10 y)! 5 ◆ ✓ p1 ◆ x 5 10 = = x! y! (n 8! 5! 2! 1! = 0.0945. Example 11.2. A certain game involves rolling a fair die and watching the numbers of rolls of 4 and 5. What is the probability that in 10 rolls of the die one 4 and three 5 will be observed? Some Special Discrete Bivariate Distributions 296 Answer: Let X denote the number of 4 and Y denote the number of 5. Then the joint distribution of X and Y is bivariate binomial with n = 10, p1 = 1 6. Hence the probability that in 10 rolls of the die one 4 and three 5 will be observed is 6, p2 = 1 6 and 1 p
2 = 4 p1 P (X = 5, Y = 2) = f (1, 3) = = = = x! y! (n n! x 10! y)! 3)! 3)! 1 1 1! 3! (10 10! 1! 3! (10 573440 10077696 = 0.0569. 1 py px 2 (1 p2)n x y p1 10 ◆ Using transformation method discussed in chapter 10, it can be shown that if X1, X2 and X3 are independent binomial random variables, then the joint distribution of the random variables X = X1 + X2 and Y = X1 + X3 is bivariate binomial. This approach is known as trivariate reduction technique for constructing bivariate distribution. To establish the next theorem, we need a generalization of the binomial theorem which was treated in Chapter 1. The following result generalizes the binomial theorem and can be called trinomial theorem. Similar to the proof of binomial theorem, one can establish (a + b + c)n = n n n x, y ax by cn x y, ◆ where 0 x + y   n and x=0 X y=0 ✓ X n x, y ✓ ◆ = x! y! (n n! x . y)! In the following theorem, we present the expected values of X and Y, their variances, the covariance between X and Y, and the joint moment generating function. Probability and Mathematical Statistics 297 Theorem 11.3. Let (X, Y ) parameters. Then ⇠ BIN (n, p1, p2), where n, p1 and p2 are E(X) = n p1 E(Y ) = n p2 V ar(X) = n p1 (1 V ar(Y ) = n p2 (1 p1) p2) Cov(X, Y ) = n p1 p2 p1 Proof: First, we find the joint moment generating function of X and Y. The moment generating function M (s, t) is given by p2 + p1es + p2et M (s, ts, t) = E n esX
,0) n 1 p1 es (0,0) ⌘ Therefore the covariance of X and Y is Cov(X, Y ) = E(XY ) E(X) E(Y ) = n(n 1)p1p2 n2p1p2 = np1p2. Similarly, it can be shown that E(X 2) = n(n 1)p2 1 + np1 and E(Y 2) = n(n 1)p2 2 + np2. Thus, we have and similarly V ar(X) = E(X 2) = n(n = n p1 (1 1)p2 E(X)2 2 + np2 p1) n2p2 1 V ar(Y ) = E(Y 2) E(Y )2 = n p2 (1 p2). This completes the proof of the theorem. The following results are needed for the next theorem and they can be established using binomial theorem discussed in chapter 1. For any real numbers a and b, we have m y y=0 X ay bm y = m a (a + b)m 1 m y ✓ ◆ and m y2 y=0 X ay bm y = m a (ma + b) (a + b)m 2 m y ✓ ◆ where m is a positive integer. (11.1) (11.2) Probability and Mathematical Statistics 299 Example 11.3. If X equals the number of ones and Y equals the number of twos and threes when a pair of fair dice are rolled, then what is the correlation coefficient of X and Y? Answer: The joint density of X and Y is bivariate binomial and is given by f (x, y) = x! y! (2 2! y)!,  where x and y are nonnegative integers. By Theorem 11.3, we have V ar(X) = n p1 (1 V ar(Y ) = n p2 (1 p1) = 2 p2 ◆ ◆ = 10 36, = 16 36, Cov(X, Y ) = n p1
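The covariance just obtained, Cov(X, Y) = -n p1 p2, finishes Example 11.3: with n = 2, p1 = 1/6 and p2 = 2/6 it gives -4/36, so the correlation coefficient is (-4/36) / √((10/36)(16/36)) = -1/√10 ≈ -0.316. A small simulation sketch (assuming NumPy) that rolls the pair of dice directly:

```python
import numpy as np

rng = np.random.default_rng(4)
rolls = rng.integers(1, 7, size=(1_000_000, 2))      # a pair of fair dice

x = (rolls == 1).sum(axis=1)                         # number of ones
y = ((rolls == 2) | (rolls == 3)).sum(axis=1)        # number of twos and threes

print("corr(X, Y) ~", np.corrcoef(x, y)[0, 1])       # close to -1/sqrt(10) ~ -0.316
```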
(X, Y ) is said to have the bivariate geometric distribution with parameters p1 and p2 if its joint probability density is of the form (x+y)! x! y! px 1 py 2 (1 f (x, y) = 8 < 0 p1 p2), if x, y = 0, 1,..., 1 otherwise, where 0 < p1, p2, p1 + p2 < 1. We denote a bivariate geometric random variable by writing (X, Y ) GEO (p1, p2). : ⇠ Example 11.5. Motor vehicles arriving at an intersection can turn right In a study of traffic patterns at this or left or continue straight ahead. intersection over a long period of time, engineers have noted that 40 percents of the motor vehicles turn left, 25 percents turn right, and the remainder continue straight ahead. For the next ten cars entering the intersection, what is the probability that 5 cars will turn left, 4 cars will turn right, and the last car will go straight ahead? Answer: Let X denote the number of cars turning left and Y denote the number of cars turning right. Since, the last car will go straight ahead, the joint distribution of X and Y is geometric with parameters p1 = 0.4, p2 = 0.35. For the next ten cars entering the p2 = 0.25 and p3 = 1 intersection, the probability that 5 cars will turn left, 4 cars will turn right, and the last car will go straight ahead is given by p1 P (X = 5, Y = 4) = f (5, 4) p2) 1 py px 2 (1 p1 (0.4)5 (0.25)4 (1 0.4 0.25) (0.4)5 (0.25)4 (0.35) = = (x + y)! x! y! (5 + 4)! 5! 4! 9! 5! 4! = 0.00677. = Some Special Discrete Bivariate Distributions 304 The following technical result is essential for proving the following theo- rem. If a and b are positive real numbers with 0 < a + b < 1, then 1 1 x=0 X y=0 X (x + y)! x! y! ax by = 1 a
. b 1 (11.3) In the following theorem, we present the expected values and the variances of X and Y, the covariance between X and Y, and the moment generating function. Theorem 11.5. Let (X, Y ) ters. Then ⇠ GEO (p1, p2), where p1 and p2 are parame- E(X) = E(Y ) = V ar(X) = V ar(Y ) = Cov(X, Y ) = M (s, t) = p2 p2) p2)2 p1) p2)2 p2 1 1 (1 p1 (1 p1 p1 p2 p1 p1 p1 p1 p2 p1 (1 p1 1 p1es p2 (1 (1 1 p2)2 p2 p2et. Proof: We only find the joint moment generating function M (s, t) of X and Y and leave proof of the rests to the reader of this book. The joint moment generating function M (s, t) is given by M (s, t) = E n = esX+tY n esx+tyf (x, y) x=0 X n y=0 X n = x=0 X = (1 = (1 1 y=0 X p1 p1 p1es esx+ty (x + y)! x! y! 1 py px 2 (1 p1 p2) n n p2) y=0 X x=0 X p2) p2et (x + y)! x! y! (p1es)x p2et y (by (11.3) ). Probability and Mathematical Statistics 305 The following results are needed for the next theorem. Let a be a positive real number less than one. Then 1 y=0 X 1 y=0 X (x + y)! x! y! ay = 1 a)x+1, (1 (x + y)! x! y! y ay = a(1 + x) (1 a)x+2,
and (x + y)! x! y! y2 ay = 1 y=0 X a(1 + x) (1 a)x+3 [a(x + 1) + 1]. (11.4) (11.5) (11.6) The next theorem presents some information regarding the conditional densities f (x/y) and f (y/x). Theorem 11.6. Let (X, Y ) GEO (p1, p2), where p1 and p2 are parameters. Then the conditional distributions f (y/x) and f (x/y) are also geometrical and ⇠ E(Y /x) = E(X/y) = V ar(Y /x) = V ar(X/y) = p2 (1 + x) p1 (1 + y) p2 p1 1 1 p2 (1 + x) p2)2 (1 p1 (1 + y) p1)2. (1 Proof: Again, as before, we first find the conditional probability density of Y given the event X = x. The marginal density f1(x) is given by f1(x) = = 1 y=0 X 1 y=0 X f (x, y) (x + y)! x! y! 1 py px 2 (1 p1 p2) 1 y=0 X p2) px 1 p2) px 1 (x + y)! x! y! py 2 (by (11.4) ). = (1 (1 = (1 p1 p1 p2)x+1 Some Special Discrete Bivariate Distributions 306 Therefore the conditional density of Y given the event X = x is f (y/x) = f (x, y) f1(x) = (x + y)! x! y! py 2 (1 p2)x+1. The conditional expectation of Y given X = x is E(Y /x) = = = 1 y=0 X 1 y f (y/x) y (x + y)! x! y! y=0 X p2 (1 + x) p2) (1 py 2 (1 p2)x+
1 (by (11.5) ). Similarly, one can show that E(X/y) = p1 (1 + y) p1) (1 . To compute the conditional variance of Y given the event that X = x, first we have to find E, which is given by Y 2/x 1 y=0 X 1 Y 2/x E = = = y2 f (y/x) y2 (x + y)! x! y! py 2 (1 p2)x+1 y=0 X p2 (1 + x) (1 p2)2 [p2 (1 + x) + 1] Therefore V ar Y 2/x = E Y 2/x E(Y /x)2 = = p2)2 [(p2 (1 + x) + 1] p2 (1 + x) (1 p2 (1 + x) p2)2. (1 (by (11.6) ). p2 (1 + x) 2 ✓ 1 p2 ◆ The rest of the moments can be determined in a similar manner. The proof of the theorem is now complete. Probability and Mathematical Statistics 307 11.4. Bivariate Negative Binomial Distribution The univariate negative binomial distribution can be generalized to the bivariate case. Guldberg (1934) introduced this distribution and Lundberg (1940) first used it in connection with problems of accident proneness. Arbous and Kerrich (1951) arrived at this distribution by mixing parameters of the bivariate Poisson distribution. Definition 11.4. A discrete bivariate random variable (X, Y ) is said to have the bivariate negative binomial distribution with parameters k, p1 and p2 if its joint probability density is of the form (x+y+k p1 x! y! (k if x, y = 0, 1,..., 1)! px p2)k, 1 py 2 (1 1 1)! f (x, y) = 8 < 0 otherwise, where 0 < p1, p2, p1 + p2 < 1 and k is a nonzero positive integer. We denote a b
ivariate negative binomial random variable by writing (X, Y ) N BIN (k, p1, p2). : ⇠ Example 11.6. An experiment consists of selecting a marble at random and with replacement from a box containing 10 white marbles, 15 black marbles and 5 green marbles. What is the probability that it takes exactly 11 trials to get 5 white, 3 black and the third green marbles at the 11th trial? Answer: Let X denote the number of white marbles and Y denote the number of black marbles. The joint distribution of X and Y is bivariate negative binomial with parameters p1 = 1 2, and k = 3. Hence the probability that it takes exactly 11 trials to get 5 white, 3 black and the third green marbles at the 11th trial is 3, p2 = 1 P (X = 5, Y = 3) = f (5, 3) 1)! 1)! 1)! 1)! 1 py px 2 (1 p1 p2)k (0.33)5 (0.5)3 (1 0.33 0.5)3 (0.33)5 (0.5)3 (0.17)3 = = (x + y + k x! y! (k (5 + 3 + 3 5! 3! (3 10! 5! 3! 2! = 0.0000503. = The negative binomial theorem which was treated in chapter 5 can be generalized to 1 1 x=0 X y=0 X (x + y + k x! y! (k 1)! 1)! 1 py px 2 = 1 p1 p2)k. (1 (11.7) Some Special Discrete Bivariate Distributions 308 In the following theorem, we present the expected values and the variances of X and Y, the covariance between X and Y, and the moment generating function. Theorem 11.7. Let (X, Y ) parameters. Then ⇠ N BIN (k, p1, p2), where k, p1 and p2 are E(X) = E(Y ) = V ar(X) = V ar(Y ) = Cov(X, Y ) = M (s, t) = 1 p2 k p1 p1 k p2 p
1 309 (11.9) (11.10) The next theorem presents some information regarding the conditional densities f (x/y) and f (y/x). Theorem 11.8. Let (X, Y ) N BIN (k, p1, p2), where p1 and p2 are parameters. Then the conditional densities f (y/x) and f (x/y) are also negative binomial and ⇠ E(Y /x) = E(X/y) = V ar(Y /x) = V ar(X/y) = p2 (k + x) p1 (k + y) p2 p1 1 1 p2 (k + x) p2)2 (1 p1 (k + y) p1)2. (1 Proof: First, we find the marginal density of X. The marginal density f1(x) is given by (x + y + k x! y! (k 1)! 1)! 1 py px 2 f (x, y) f1(x) = = 1 y=0 X 1 y=0 X p2)k px 1 = (1 p1 = (1 p1 p2)k px 1 1)! 1)! (x + y + k x! y! (k 1 p2)x+k (1 py 2 (by (11.8)). The conditional density of Y given the event X = x is f (y/x) = = f (x, y) f1(x) (x + y + k x! y! (k 1)! 1)! py 2 (1 p2)x+k. Some Special Discrete Bivariate Distributions 310 The conditional expectation E(Y /x) is given by E (Y /x) = 1 1 y (x + y + k x! y! (k 1)! 1)! py 2 (1 p2)x+k x=0 X = (1 y=0 X p2)x+k 1 x=0 X 1 y (x + y + k x! y! (k 1)! 1)! py 2 (by (11.
9)) y=0 X p2 (x + k) (1 p2)x+k+1 = (1 p2)x+k = p2 (x + k) p2) (1 . The conditional expectation E Y 2/x can be computed as follows E Y 2/x = 1 1 x=0 X = (1 1)! py 2 (1 1)! y2 (x + y + k x! y! (k y2 (x + y + k x! y! (k y=0 X p2)x+k 1 x=0 X 1 p2)x+k 1)! 1)! py 2 y=0 X p2 (x + k) p2)x+k+2 [1 + (x + k) p2] (1 = (1 p2)x+k p2 (x + k) (1 p2)2 [1 + (x + k) p2]. = (by (11.10)) The conditional variance of Y given X = x is V ar (Y /x) = E Y 2/x E (Y /x)2 = = p2)2 [1 + (x + k) p2] p2 (x + k) (1 p2 (x + k) p2)2. (1 p2 (x + k) p2) (1 2 ◆ ✓ The conditional expected value E(X/y) and conditional variance V ar(X/y) can be computed in a similar way. This completes the proof. Note that if k = 1, then bivariate negative binomial distribution reduces to bivariate geometric distribution. 11.5. Bivariate Hypergeometric Distribution The univariate hypergeometric distribution can be generalized to the bivariate case. Isserlis (1914) introduced this distribution and Pearson (1924) Probability and Mathematical Statistics 311 gave various properties of this distribution. Pearson also fitted this distribution to an observed data of the number of cards of a certain suit in two hands at whist. Definition 11.5. A discrete bivariate random variable (X, Y ) is said to have the bivariate hypergeometric
distribution with parameters r, n1, n2, n3 if its joint probability distribution is of the form

f(x, y) = (n1 choose x) (n2 choose y) (n3 choose r - x - y) / (n1 + n2 + n3 choose r),   if x, y = 0, 1, ..., r,
f(x, y) = 0,   otherwise,

where x ≤ n1, y ≤ n2, r - x - y ≤ n3, and r is a positive integer less than or equal to n1 + n2 + n3. We denote a bivariate hypergeometric random variable by writing (X, Y) ~ HYP(r, n1, n2, n3).

Example 11.7. A panel of prospective jurors includes 7 African Americans, 3 Asian Americans and 9 White Americans. If the selection is random, what is the probability that a jury will consist of 4 African Americans, 3 Asian Americans and 5 White Americans?

Answer: Here n1 = 7, n2 = 3 and n3 = 9, so that n1 + n2 + n3 = 19. A total of 12 jurors will be selected, so that r = 12. In this example x = 4, y = 3 and r - x - y = 5. Hence the probability that the jury will consist of 4 African Americans, 3 Asian Americans and 5 White Americans is

f(4, 3) = (7 choose 4) (3 choose 3) (9 choose 5) / (19 choose 12) = 4410 / 50388 = 0.0875.

Example 11.8. Among 25 silver dollars struck in 1903 there are 15 from the Philadelphia mint, 7 from the New Orleans mint, and 3 from the San Francisco mint. If 5 of these silver dollars are picked at random, what is the probability of getting 4 from the Philadelphia mint and 1 from the New Orleans mint?

Answer: Here n1 + n2 + n3 = 25, r = 5 and n1 = 15, n2 = 7, n3 = 3. The probability of getting 4 from the Philadelphia mint and 1 from the New Orleans mint is

f(4, 1) = (15 choose 4) (7 choose 1) (3 choose 0) / (25 choose 5) = 9555 / 53130 = 0.1798.

In the following theorem, we present the expected values and the vari
ces of X and Y, and the covariance between X and Y. Some Special Discrete Bivariate Distributions 312 Theorem 11.9. Let (X, Y ) are parameters. Then ⇠ HY P (r, n1, n2, n3), where r, n1, n2 and n3 E(X) = E(Y ) = V ar(X) = V ar(Y ) = r n1 n1 + n2 + n3 r n2 n1 + n2 + n3 r n1 (n2 + n3) (n1 + n2 + n3)2 r n2 (n1 + n3) (n1 + n2 + n3)2 Cov(X, Y ) = r n1 n2 (n1 + n2 + n3)2 ✓ ✓ n1 + n2 + n3 r n1 + n2 + n3 1 n1 + n2 + n3 r n1 + n2 + n3 1 n1 + n2 + n3 n1 + n2 + n3 ✓ ◆ ◆ r 1. ◆ Proof: We find only the mean and variance of X. The mean and variance of Y can be found in a similar manner. The covariance of X and Y will be left to the reader as an exercise. To find the expected value of X, we need the marginal density f1(x) of X. The marginal of X is given by n1 x n2 y n3 x n1+n2+n3 r r y f1(x) = = = = f (x, y) r x y=0 X x r y=0 X n1 x n1+n2+n3 r n1 x n1+n2+n3 r r x n2 y n3 x y ◆ r ◆ ✓ y=0 ✓ X n2 + n3 x r ✓ (by Theorem 1.3) ◆ This shows that X ⇠ HY P (n1, n2 + n3, r). Hence, by Theorem
5.7, we get E(X) = r n1 n1 + n2 + n3, and V ar(X) = r n1 (n2 + n3) (n1 + n2 + n3)2 n1 + n2 + n3 n1 + n2 + n3 r 1. ◆ ✓ Similarly, the random variable Y Theorem 5.7, we get HY P (n2, n1 + n3, r). Hence, again by ⇠ E(Y ) = r n2 n1 + n2 + n3, Probability and Mathematical Statistics 313 and V ar(Y ) = r n2 (n1 + n3) (n1 + n2 + n3)2 n1 + n2 + n3 n1 + n2 + n3 r 1. ◆ ✓ The next theorem presents some information regarding the conditional densities f (x/y) and f (y/x). Theorem 11.10. Let (X, Y ) HY P (r, n1, n2, n3), where r, n1, n2 and n3 are parameters. Then the conditional distributions f (y/x) and f (x/y) are also hypergeometric and ⇠ E(Y /x) = E(X/y) = V ar(Y /x) = V ar(X/y) = n2 (r x) n2 + n3 n1 (r y) n1 + n3 n2n3 n2 + n3 n1n3 n1 + n3 n1 + n2 + n3 n2 + n3 n1 + n2 + n3 n1 + n3 x y x n1 n2 + n3 ◆ y n2 n1 + n3 ◆. ◆ ✓ ◆ ✓ 1 1 ✓ ✓ Proof: To find E(Y /x), we need the conditional density f (y/x) of Y given the event X = x. The conditional density f (y/x) is given by f (y/x) = = f (x, y) f1(x) n2 y n3 x r n2+n3 r x
y . Hence, the random variable Y given X = x is a hypergeometric random variable with parameters n2, n3, and r x, that is Y /x ⇠ HY P (n2, n3, r x). Hence, by Theorem 5.7, we get E(Y /x) = n2 (r x) n2 + n3 and V ar(Y /x) = n2n3 n2 + n3 1 ✓ n1 + n2 + n3 n2 + n3 x x n1 n2 + n3 ◆. ◆ ✓ Some Special Discrete Bivariate Distributions 314 Similarly, one can find E(X/y) and V ar(X/y). The proof of the theorem is now complete. 11.6. Bivariate Poisson Distribution The univariate Poisson distribution can be generalized to the bivariate case. In 1934, Campbell, first derived this distribution. However, in 1944, Aitken gave the explicit formula for the bivariate Poisson distribution function. In 1964, Holgate also arrived at the bivariate Poisson distribution by deriving the joint distribution of X = X1 + X3 and Y = X2 + X3, where X1, X2, X3 are independent Poisson random variables. Unlike the previous bivariate distributions, the conditional distributions of bivariate Poisson distribution are not Poisson. In fact, Seshadri and Patil (1964), indicated that no bivariate distribution exists having both marginal and conditional distributions of Poisson form. Definition 11.6. A discrete bivariate random variable (X, Y ) is said to have the bivariate Poisson distribution with parameters 1, 2, 3 if its joint probability density is of the form e( 1 2+3) (1 x! y! 3)x (2 3)y (x, y) for x, y = 0, 1,..., 1 otherwise, 0 8 < : f (x, y) = where with (x, y) := min(x,y) r=0 X x(r) y(r) r 3 3)r (2
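The trivariate reduction mentioned above (Holgate's construction X = X1 + X3, Y = X2 + X3 from independent Poisson components) is convenient for simulation. In the sketch below (assuming NumPy) the component means a, b and c are illustrative values, not taken from the text; the point of the check is that the covariance of X and Y equals the mean of the shared component.

```python
import numpy as np

rng = np.random.default_rng(5)
a, b, c = 2.0, 3.0, 1.5        # illustrative means for X1, X2 and the shared X3
n = 1_000_000

x1, x2, x3 = (rng.poisson(m, size=n) for m in (a, b, c))
x, y = x1 + x3, x2 + x3        # Holgate's construction

print("E[X], E[Y] ~", x.mean(), y.mean())      # close to a + c and b + c
print("Cov(X, Y)  ~", np.cov(x, y)[0, 1])      # close to c
```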
2. An urn contains 3 red balls, 2 green balls and 1 yellow ball. Three balls are selected at random and without replacement from the urn. What is the probability that at least 1 color is not drawn?

3. An urn contains 4 red balls, 8 green balls and 2 yellow balls. Five balls are randomly selected, without replacement, from the urn. What is the probability that 1 red ball, 2 green balls, and 2 yellow balls will be selected?

4. From a group of three Republicans, two Democrats, and one Independent, a committee of two people is to be randomly selected. If X denotes the number of Republicans and Y the number of Democrats on the committee, then what is the variance of Y given that X = x?

5. If X equals the number of ones and Y the number of twos and threes when four fair dice are rolled, then what is the conditional variance of X given that Y = 1?

6. Motor vehicles arriving at an intersection can turn right or left or continue straight ahead. In a study of traffic patterns at this intersection over a long period of time, engineers have noted that 40 percent of the motor vehicles turn left, 25 percent turn right, and the remainder continue straight ahead. For the next five cars entering the intersection, what is the probability that at least one turns right?

7. Among a large number of applicants for a certain position, 60 percent have only a high school education, 30 percent have some college training, and 10 percent have completed a college degree. If 5 applicants are randomly selected to be interviewed, what is the probability that at least one will have completed a college degree?

8. In a population of 200 students who have just completed a first course in calculus, 50 have earned A's, 80 have earned B's, and the remaining students have earned F's. A sample of size 25 is taken at random and without replacement from this population. What is the probability that 10 students have A's, 12 students have B's and 3 students have F's?

9. If X equals the number of ones and Y the number of twos and threes when four fair dice are rolled, then what is the correlation coefficient of X and Y?
10. If the joint moment generating function of X and Y is $M(s,t) = \left(\dfrac{4\,e^s + 2\,e^t + k}{7}\right)^{5}$, then what is the value of the constant k? What is the correlation coefficient between X and Y?

11. A die with 1 painted on three sides, 2 painted on two sides, and 3 painted on one side is rolled 15 times. What is the probability that we will get eight 1's, six 2's and a 3 on the last roll?

12. The output of a machine is graded excellent 80 percent of the time, good 15 percent of the time, and defective 5 percent of the time. What is the probability that a random sample of size 15 has 10 excellent, 3 good, and 2 defective items?

13. An industrial product is graded by a machine excellent 80 percent of the time, good 15 percent of the time, and defective 5 percent of the time. A random sample of 15 items is graded. What is the probability that the machine will grade 10 excellent, 3 good, and 2 defective items, with one of the defectives being the last item graded?

14. If $(X, Y) \sim HYP(r, n_1, n_2, n_3)$, then what is the covariance of the random variables X and Y?

Chapter 12
SOME SPECIAL CONTINUOUS BIVARIATE DISTRIBUTIONS

In this chapter, we study some well known continuous bivariate probability density functions. First, we present the natural extensions of the univariate probability density functions that were treated in Chapter 6. Then we present some other bivariate distributions that have been reported in the literature. The bivariate normal distribution is treated in most textbooks because of its dominant role in statistical theory. The other continuous bivariate distributions are rarely treated in textbooks; it is in this textbook that these well known bivariate distributions are treated together for the first time. The monograph of K.V. Mardia gives an excellent exposition of various bivariate distributions. We begin this chapter with the bivariate uniform distribution.

12.1. Bivariate Uniform Distribution

In this section, we study the Morgenstern bivariate uniform distribution in detail. The marginals of the Morgenstern bivariate uniform distribution are uniform. In this sense, it is an extension of the univariate uniform distribution. Other bivariate uniform distributions will be pointed out without any in-depth treatment.
In 1956, Morgenstern introduced a one-parameter family of bivariate distributions whose univariate marginals are uniform distributions, by the formula

$$ f(x,y) = f_1(x)\,f_2(y)\,\bigl( 1 + \alpha\,[2F_1(x)-1]\,[2F_2(y)-1] \bigr), $$

where $\alpha \in [-1, 1]$ is a parameter. If one takes the cdf $F_i(x) = x$ and the pdf $f_i(x) = 1$ ($i = 1, 2$), then we arrive at the Morgenstern uniform distribution on the unit square. The joint probability density function f(x, y) of the Morgenstern uniform distribution on the unit square is given by

$$ f(x,y) = 1 + \alpha\,(2x-1)(2y-1), \qquad 0 \le x, y \le 1, \quad -1 \le \alpha \le 1. $$

Next, we define the Morgenstern uniform distribution on an arbitrary rectangle $[a,b] \times [c,d]$.

Definition 12.1. A continuous bivariate random variable (X, Y) is said to have the bivariate uniform distribution on the rectangle $[a,b] \times [c,d]$ if its joint probability density function is of the form

$$ f(x,y) = \begin{cases} \dfrac{1 + \alpha\left(\dfrac{2x-2a}{b-a}-1\right)\left(\dfrac{2y-2c}{d-c}-1\right)}{(b-a)(d-c)} & \text{for } x \in [a,b],\ y \in [c,d] \\[2.5ex] 0 & \text{otherwise}, \end{cases} $$

where $\alpha$ is an a priori chosen parameter in $[-1, 1]$. We denote a Morgenstern bivariate uniform random variable on a rectangle $[a,b] \times [c,d]$ by writing $(X, Y) \sim UNIF(a, b, c, d, \alpha)$.

The following figures show the graph and the equi-density curves of f(x, y) on the unit square with $\alpha = 0.5$.
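As a quick numerical illustration (not part of the text), one can verify on the unit square that this is a genuine density with uniform marginals; a minimal sketch assuming numpy and scipy are available:

import numpy as np
from scipy import integrate

alpha = 0.5

def f(x, y):
    # Morgenstern density on the unit square
    return 1.0 + alpha * (2*x - 1) * (2*y - 1)

# total mass should be 1
total, _ = integrate.dblquad(lambda y, x: f(x, y), 0, 1, lambda x: 0, lambda x: 1)
print(total)             # approx 1.0

# the marginal density of X should be identically 1 on (0, 1)
for x in (0.1, 0.5, 0.9):
    m, _ = integrate.quad(lambda y: f(x, y), 0, 1)
    print(x, m)          # approx 1.0 each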
In the following theorem, we present the expected values, the variances of the random variables X and Y, and the covariance between X and Y.

Theorem 12.1. Let $(X, Y) \sim UNIF(a, b, c, d, \alpha)$, where $a, b, c, d$ and $\alpha$ are parameters. Then

$$ E(X) = \frac{b+a}{2}, \qquad E(Y) = \frac{d+c}{2}, \qquad Var(X) = \frac{(b-a)^2}{12}, \qquad Var(Y) = \frac{(d-c)^2}{12}, \qquad Cov(X, Y) = \frac{1}{36}\,\alpha\,(b-a)(d-c). $$

Proof: First, we determine the marginal density of X, which is given by

$$ f_1(x) = \int_c^d f(x,y)\,dy = \int_c^d \frac{1 + \alpha\left(\frac{2x-2a}{b-a}-1\right)\left(\frac{2y-2c}{d-c}-1\right)}{(b-a)(d-c)}\,dy = \frac{1}{b-a}. $$

Thus, the marginal density of X is uniform on the interval from a to b, that is, $X \sim UNIF(a, b)$. Hence by Theorem 6.1, we have

$$ E(X) = \frac{b+a}{2} \qquad \text{and} \qquad Var(X) = \frac{(b-a)^2}{12}. $$

Similarly, one can show that $Y \sim UNIF(c, d)$ and therefore, by Theorem 6.1,

$$ E(Y) = \frac{d+c}{2} \qquad \text{and} \qquad Var(Y) = \frac{(d-c)^2}{12}. $$

The product moment of X and Y is

$$ E(XY) = \int_c^d \int_a^b xy\,f(x,y)\,dx\,dy = \int_c^d \int_a^b xy\,\frac{1 + \alpha\left(\frac{2x-2a}{b-a}-1\right)\left(\frac{2y-2c}{d-c}-1\right)}{(b-a)(d-c)}\,dx\,dy = \frac{1}{36}\,\alpha\,(b-a)(d-c) + \frac{1}{4}\,(b+a)(d+c). $$

Thus, the covariance of X and Y is

$$ Cov(X, Y) = E(XY) - E(X)\,E(Y) = \frac{1}{36}\,\alpha\,(b-a)(d-c) + \frac{1}{4}\,(b+a)(d+c) - \frac{1}{4}\,(b+a)(d+c) = \frac{1}{36}\,\alpha\,(b-a)(d-c). $$

This completes the proof of the theorem.
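The covariance formula can also be checked numerically on an arbitrary rectangle; the following sketch is illustrative only (the values of a, b, c, d and alpha are arbitrary, and scipy is assumed):

from scipy import integrate

a, b, c, d, alpha = 1.0, 3.0, -2.0, 2.0, 0.7

def f(x, y):
    # Morgenstern uniform density on [a, b] x [c, d]
    u = (2*x - 2*a) / (b - a) - 1
    v = (2*y - 2*c) / (d - c) - 1
    return (1 + alpha * u * v) / ((b - a) * (d - c))

def moment(g):
    val, _ = integrate.dblquad(lambda y, x: g(x, y) * f(x, y),
                               a, b, lambda x: c, lambda x: d)
    return val

EX  = moment(lambda x, y: x)
EY  = moment(lambda x, y: y)
EXY = moment(lambda x, y: x * y)
print(EXY - EX * EY)                      # numerical covariance
print(alpha * (b - a) * (d - c) / 36)     # Theorem 12.1 value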
Another bivariate uniform distribution arises from the family constructed by Plackett (1965). In Plackett's construction, the joint distribution function F(x, y) with marginals F1(x) and F2(y) is defined implicitly as the root of the quadratic equation

$$ (\alpha-1)\,F(x,y)^2 - \bigl\{1 + (\alpha-1)\,[F_1(x) + F_2(y)]\bigr\}\,F(x,y) + \alpha\,F_1(x)\,F_2(y) = 0 \qquad (\text{where } 0 < \alpha < \infty), $$

which satisfies the Frechet inequalities

$$ \max\{F_1(x) + F_2(y) - 1,\ 0\} \ \le\ F(x,y) \ \le\ \min\{F_1(x),\ F_2(y)\}. $$

The class of bivariate joint density functions constructed by Plackett is the following:

$$ f(x,y) = \frac{\alpha\,f_1(x)\,f_2(y)\,\bigl[(\alpha-1)\{F_1(x)+F_2(y)-2F_1(x)F_2(y)\} + 1\bigr]}{\bigl[S(x,y)^2 - 4\,\alpha\,(\alpha-1)\,F_1(x)\,F_2(y)\bigr]^{\frac{3}{2}}}, $$

where

$$ S(x,y) = 1 + (\alpha-1)\,\bigl(F_1(x) + F_2(y)\bigr). $$

If one takes $F_i(x) = x$ and $f_i(x) = 1$ (for $i = 1, 2$), then the joint density function constructed by Plackett reduces to

$$ f(x,y) = \frac{\alpha\,\bigl[(\alpha-1)\{x + y - 2xy\} + 1\bigr]}{\bigl[\{1 + (\alpha-1)(x+y)\}^2 - 4\,\alpha\,(\alpha-1)\,xy\bigr]^{\frac{3}{2}}}, $$

where $0 \le x, y \le 1$ and $\alpha > 0$. But unfortunately this is not a bivariate density function, since it does not integrate to one. This fact was missed by both Plackett (1965) and Mardia (1967).

12.2. Bivariate Cauchy Distribution

Recall that the univariate Cauchy probability distribution was defined in Chapter 3 as

$$ f(x) = \frac{\theta}{\pi\left[\theta^2 + (x-\alpha)^2\right]}, \qquad -\infty < x < \infty, $$

where $\theta > 0$ and $\alpha$ are real parameters. The parameter $\alpha$ is called the location parameter. In Chapter 4, we pointed out that any random variable whose
probability density function is Cauchy has no moments. Furthermore, this random variable has no moment generating function. The Cauchy distribution is widely used for instructional purposes besides its statistical use. The main purpose of this section is to generalize the univariate Cauchy distribution to the bivariate case and study its various intrinsic properties.

We define the bivariate Cauchy random variables by using the form of their joint probability density function.

Definition 12.3. A continuous bivariate random variable (X, Y) is said to have the bivariate Cauchy distribution if its joint probability density function is of the form

$$ f(x,y) = \frac{\theta}{2\pi\left[\theta^2 + (x-\alpha)^2 + (y-\beta)^2\right]^{\frac{3}{2}}}, \qquad -\infty < x, y < \infty, $$

where $\theta$ is a positive parameter and $\alpha$ and $\beta$ are location parameters. We denote a bivariate Cauchy random variable by writing $(X, Y) \sim CAU(\theta, \alpha, \beta)$.

The following figures show the graph and the equi-density curves of the Cauchy density function f(x, y) with parameters $\alpha = 0 = \beta$ and $\theta = 0.5$.

The bivariate Cauchy distribution can be derived by considering the distribution of radioactive particles emanating from a source that hit a two-dimensional screen. This distribution is a special case of the bivariate t-distribution, which was first constructed by Karl Pearson in 1923.

The following theorem shows that if a bivariate random variable (X, Y) is Cauchy, then, like the univariate Cauchy random variable, it has no moments. Further, for a bivariate Cauchy random variable (X, Y), the covariance (and hence the correlation) between X and Y does not exist.

Theorem 12.3. Let $(X, Y) \sim CAU(\theta, \alpha, \beta)$, where $\theta > 0$, $\alpha$ and $\beta$ are parameters. Then the moments E(X), E(Y), Var(X), Var(Y), and Cov(X, Y) do not exist.
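The proof below rests on the fact that the marginals are again Cauchy. This can be illustrated numerically first; a minimal sketch assuming numpy and scipy are available (the parameter values are arbitrary):

import numpy as np
from scipy import integrate

theta, alpha, beta = 0.5, 0.0, 0.0

def f(x, y):
    # bivariate Cauchy density
    return theta / (2 * np.pi * (theta**2 + (x - alpha)**2 + (y - beta)**2) ** 1.5)

for x in (-1.0, 0.0, 2.0):
    marginal, _ = integrate.quad(lambda y: f(x, y), -np.inf, np.inf)
    univariate = theta / (np.pi * (theta**2 + (x - alpha)**2))
    print(x, marginal, univariate)    # the last two columns should agree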
Proof: In order to find the moments of X and Y, we need their marginal distributions. First, we find the marginal of X, which is given by

$$ f_1(x) = \int_{-\infty}^{\infty} f(x,y)\,dy = \int_{-\infty}^{\infty} \frac{\theta}{2\pi\left[\theta^2 + (x-\alpha)^2 + (y-\beta)^2\right]^{\frac{3}{2}}}\,dy. $$

To evaluate the above integral, we make the trigonometric substitution

$$ y = \beta + \sqrt{\theta^2 + (x-\alpha)^2}\,\tan\phi. $$

Hence

$$ dy = \sqrt{\theta^2 + (x-\alpha)^2}\,\sec^2\phi\,d\phi $$

and

$$ \left[\theta^2 + (x-\alpha)^2 + (y-\beta)^2\right]^{\frac{3}{2}} = \left[\theta^2 + (x-\alpha)^2\right]^{\frac{3}{2}}\left[1 + \tan^2\phi\right]^{\frac{3}{2}} = \left[\theta^2 + (x-\alpha)^2\right]^{\frac{3}{2}}\sec^3\phi. $$

Using these in the above integral, we get

$$ \int_{-\infty}^{\infty} \frac{\theta}{2\pi\left[\theta^2 + (x-\alpha)^2 + (y-\beta)^2\right]^{\frac{3}{2}}}\,dy
 = \frac{\theta}{2\pi}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \frac{\sqrt{\theta^2 + (x-\alpha)^2}\,\sec^2\phi}{\left[\theta^2+(x-\alpha)^2\right]^{\frac{3}{2}}\sec^3\phi}\,d\phi
 = \frac{\theta}{2\pi\left[\theta^2+(x-\alpha)^2\right]}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\cos\phi\,d\phi
 = \frac{\theta}{\pi\left[\theta^2+(x-\alpha)^2\right]}. $$

Hence, the marginal of X is a Cauchy distribution with parameters $\theta$ and $\alpha$. Thus, for the random variable X, the expected value E(X) and the variance Var(X) do not exist (see Example 4.2). In a similar manner, it can be shown that the marginal distribution of Y is also Cauchy with parameters $\theta$ and $\beta$, and hence E(Y) and Var(Y) do not exist. Since $Cov(X,Y) = E(XY) - E(X)\,E(Y)$, it is easy to note that Cov(X, Y) also does not exist. This completes the proof of the theorem.

The conditional distribution of Y given the event X = x is given by

$$ f(y/x) = \frac{f(x,y)}{f_1(x)} = \frac{\theta^2 + (x-\alpha)^2}{2\left[\theta^2 + (x-\alpha)^2 + (y-\beta)^2\right]^{\frac{3}{2}}}. $$
12.3. Bivariate Gamma Distribution

A well known bivariate gamma distribution is the one due to Kibble, whose joint probability density function is of the form

$$ f(x,y) = \begin{cases} \dfrac{1}{\Gamma(\alpha)\,\theta^{\alpha-1}}\,e^{-\frac{x+y}{1-\theta}} \displaystyle\sum_{k=0}^{\infty} \frac{(\theta\,x\,y)^{\alpha+k-1}}{k!\,\Gamma(\alpha+k)\,(1-\theta)^{\alpha+2k}} & \text{for } 0 \le x, y < \infty \\[2.5ex] 0 & \text{otherwise}, \end{cases} $$

where $0 < \alpha < \infty$ and $0 \le \theta < 1$ are parameters. We denote such a bivariate gamma random variable by writing $(X, Y) \sim GAMK(\alpha, \theta)$.

The following figures show the graph of the joint density function f(x, y) of a bivariate gamma random variable with parameters $\alpha = 1$ and $\theta = 0.5$, and the equi-density curves of f(x, y).

In 1941, Kibble found this bivariate gamma density function. However, Wicksell in 1933 had constructed the characteristic function of this bivariate gamma density function without knowing the explicit form of the density. If $\{(X_i, Y_i)\ |\ i = 1, 2, \dots, n\}$ is a random sample from a bivariate normal distribution with zero means, then the bivariate random variable (X, Y), where

$$ X = \frac{1}{n}\sum_{i=1}^{n} X_i^2 \qquad \text{and} \qquad Y = \frac{1}{n}\sum_{i=1}^{n} Y_i^2, $$

has a bivariate gamma distribution. This fact was established by Wicksell by finding the characteristic function of (X, Y). This bivariate gamma distribution has found applications in noise theory (see Rice (1944, 1945)).

The following theorem provides some important characteristics of the bivariate gamma distribution of Kibble.

Theorem 12.5. Let the random variable $(X, Y) \sim GAMK(\alpha, \theta)$, where $0 < \alpha < \infty$ and $0 \le \theta < 1$ are parameters. Then the marginals of X and Y are univariate gamma and

$$ E(X) = \alpha, \qquad E(Y) = \alpha, \qquad Var(X) = \alpha, \qquad Var(Y) = \alpha, \qquad Cov(X, Y) = \alpha\,\theta, $$

$$ M(s,t) = \frac{1}{\left[(1-s)(1-t) - \theta\,s\,t\right]^{\alpha}}. $$
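Before the proof, the claim about the marginals can be checked numerically from the series form of the density above. The following sketch is illustrative only (the parameter values and truncation of the series are arbitrary choices, and numpy and scipy are assumed):

import numpy as np
from scipy import integrate
from scipy.special import gammaln, logsumexp
from scipy.stats import gamma

alpha, theta = 2.0, 0.5

def kibble_pdf(x, y, kmax=80):
    # series form of the Kibble bivariate gamma density, summed in log space
    k = np.arange(kmax)
    logterms = ((alpha + k - 1) * np.log(theta * x * y)
                - gammaln(k + 1) - gammaln(alpha + k)
                - (alpha + 2 * k) * np.log(1 - theta))
    logpref = -(x + y) / (1 - theta) - gammaln(alpha) - (alpha - 1) * np.log(theta)
    return np.exp(logpref + logsumexp(logterms))

# the marginal of X should be a unit-scale gamma(alpha) density
for x in (0.5, 1.0, 3.0):
    m, _ = integrate.quad(lambda y: kibble_pdf(x, y), 0, np.inf)
    print(x, m, gamma.pdf(x, alpha))    # the last two columns should agree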
Proof: First, we show that the marginal distribution of X is univariate gamma with parameter $\alpha$ (and $\theta = 1$). The marginal density of X is given by

$$ f_1(x) = \int_0^{\infty} f(x,y)\,dy
 = \frac{1}{\Gamma(\alpha)\,\theta^{\alpha-1}}\,e^{-\frac{x}{1-\theta}} \sum_{k=0}^{\infty} \frac{(\theta x)^{\alpha+k-1}}{k!\,\Gamma(\alpha+k)\,(1-\theta)^{\alpha+2k}} \int_0^{\infty} y^{\alpha+k-1}\,e^{-\frac{y}{1-\theta}}\,dy $$

$$ = \frac{1}{\Gamma(\alpha)\,\theta^{\alpha-1}}\,e^{-\frac{x}{1-\theta}} \sum_{k=0}^{\infty} \frac{(\theta x)^{\alpha+k-1}}{k!\,\Gamma(\alpha+k)\,(1-\theta)^{\alpha+2k}}\,\Gamma(\alpha+k)\,(1-\theta)^{\alpha+k}
 = \frac{1}{\Gamma(\alpha)}\,x^{\alpha-1}\,e^{-\frac{x}{1-\theta}} \sum_{k=0}^{\infty} \frac{1}{k!}\left(\frac{\theta x}{1-\theta}\right)^{k} $$

$$ = \frac{1}{\Gamma(\alpha)}\,x^{\alpha-1}\,e^{-\frac{x}{1-\theta}}\,e^{\frac{\theta x}{1-\theta}}
 = \frac{1}{\Gamma(\alpha)}\,x^{\alpha-1}\,e^{-x}. $$

Thus, the marginal distribution of X is gamma with parameters $\alpha$ and $\theta = 1$. Therefore, by Theorem 6.3, we obtain

$$ E(X) = \alpha, \qquad Var(X) = \alpha. $$

Similarly, we can show that the marginal density of Y is gamma with parameters $\alpha$ and $\theta = 1$. Hence, we have

$$ E(Y) = \alpha, \qquad Var(Y) = \alpha. $$

The moment generating function can be computed in a similar manner; we leave it to the reader. This completes the proof of the theorem.

The following results are needed for the next theorem. From calculus we know that

$$ e^z = \sum_{k=0}^{\infty} \frac{z^k}{k!}, \qquad (12.1) $$

and the infinite series on the right converges for all $z \in \mathbb{R}$. Differentiating both sides of (12.1) and then multiplying the resulting expression by z, one obtains

$$ z\,e^z = \sum_{k=0}^{\infty} k\,\frac{z^k}{k!}. \qquad (12.2) $$

If one differentiates (12.2) again with respect to z
and multiplies the resulting expression by z, then one obtains

$$ z\,e^z + z^2\,e^z = \sum_{k=0}^{\infty} k^2\,\frac{z^k}{k!}. \qquad (12.3) $$

Theorem 12.6. Let the random variable $(X, Y) \sim GAMK(\alpha, \theta)$, where $0 < \alpha < \infty$ and $0 \le \theta < 1$ are parameters. Then

$$ E(Y/x) = \theta\,x + (1-\theta)\,\alpha, \qquad E(X/y) = \theta\,y + (1-\theta)\,\alpha, $$
$$ Var(Y/x) = (1-\theta)\,\bigl[\,2\,\theta\,x + (1-\theta)\,\alpha\,\bigr], \qquad Var(X/y) = (1-\theta)\,\bigl[\,2\,\theta\,y + (1-\theta)\,\alpha\,\bigr]. $$

Proof: First, we will find the conditional probability density function of Y given X = x, which is given by

$$ f(y/x) = \frac{f(x,y)}{f_1(x)}
 = \frac{\Gamma(\alpha)\,e^{x}}{x^{\alpha-1}}\cdot\frac{1}{\Gamma(\alpha)\,\theta^{\alpha-1}}\,e^{-\frac{x+y}{1-\theta}} \sum_{k=0}^{\infty} \frac{(\theta x y)^{\alpha+k-1}}{k!\,\Gamma(\alpha+k)\,(1-\theta)^{\alpha+2k}}
 = e^{x-\frac{x}{1-\theta}} \sum_{k=0}^{\infty} \frac{(\theta x)^{k}}{k!\,\Gamma(\alpha+k)\,(1-\theta)^{\alpha+2k}}\; y^{\alpha+k-1}\,e^{-\frac{y}{1-\theta}}. $$

Next, we compute the conditional expectation of Y given the event X = x. The conditional expectation E(Y/x) is given by

$$ E(Y/x) = \int_0^{\infty} y\,f(y/x)\,dy
 = e^{x-\frac{x}{1-\theta}} \sum_{k=0}^{\infty} \frac{(\theta x)^{k}}{k!\,\Gamma(\alpha+k)\,(1-\theta)^{\alpha+2k}} \int_0^{\infty} y^{\alpha+k}\,e^{-\frac{y}{1-\theta}}\,dy $$

$$ = e^{-\frac{\theta x}{1-\theta}}\,(1-\theta) \sum_{k=0}^{\infty} \frac{\alpha+k}{k!}\left(\frac{\theta x}{1-\theta}\right)^{k}
 = (1-\theta)\,e^{-z}\left[\alpha\,e^{z} + z\,e^{z}\right] \qquad \left(\text{where } z = \frac{\theta x}{1-\theta},\ \text{using (12.1) and (12.2)}\right) $$

$$ = \alpha\,(1-\theta) + \theta\,x. $$
A similar but longer computation, using the series identities (12.1), (12.2) and (12.3) with $z = \frac{\theta x}{1-\theta}$, gives the second conditional moment

$$ E(Y^2/x) = (\alpha^2+\alpha)\,(1-\theta)^2 + 2\,(\alpha+1)\,\theta\,(1-\theta)\,x + \theta^2\,x^2. $$

The conditional variance of Y given X = x is therefore

$$ Var(Y/x) = E(Y^2/x) - E(Y/x)^2
 = (\alpha^2+\alpha)(1-\theta)^2 + 2(\alpha+1)\,\theta\,(1-\theta)\,x + \theta^2 x^2 - \bigl[\alpha\,(1-\theta) + \theta\,x\bigr]^2
 = (1-\theta)\,\bigl[\alpha\,(1-\theta) + 2\,\theta\,x\bigr]. $$

Since the density function f(x, y) is symmetric, that is, f(x, y) = f(y, x), the conditional expectation E(X/y) and the conditional variance Var(X/y) can be obtained by interchanging x with y in the formulas for E(Y/x) and Var(Y/x). This completes the proof of the theorem.

In 1941, Cherian constructed a bivariate gamma distribution whose probability density function is given by

$$ f(x,y) = \begin{cases} \dfrac{e^{-(x+y)}}{\prod_{i=1}^{3}\Gamma(\alpha_i)} \displaystyle\int_0^{\min\{x,y\}} z^{\alpha_3-1}\,(x-z)^{\alpha_1-1}\,(y-z)^{\alpha_2-1}\,e^{z}\,dz & \text{if } 0 < x, y < \infty \\[2.5ex] 0 & \text{otherwise}, \end{cases} $$

where $\alpha_1, \alpha_2, \alpha_3 \in (0, \infty)$ are parameters. If a bivariate random variable (X, Y) has a Cherian bivariate gamma probability density function with parameters $\alpha_1$, $\alpha_2$ and $\alpha_3$, then we denote this by writing $(X, Y) \sim GAMC(\alpha_1, \alpha_2, \alpha_3)$.

It can be shown that the marginals of f(x, y) are given by

$$ f_1(x) = \frac{1}{\Gamma(\alpha_1+\alpha_3)}\,x^{\alpha_1+\alpha_3-1}\,e^{-x} \qquad \text{and} \qquad f_2(y) = \frac{1}{\Gamma(\alpha_2+\alpha_3)}\,y^{\alpha_2+\alpha_3-1}\,e^{-y}, $$

so that each marginal is again a gamma density with unit scale.
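As a numerical illustration of this marginal claim (not from the text; the parameter values are arbitrary and numpy and scipy are assumed), one can evaluate Cherian's density by computing the inner integral with quadrature and then integrate out y:

import numpy as np
from scipy import integrate
from scipy.special import gamma as gammafn
from scipy.stats import gamma

a1, a2, a3 = 2.0, 1.5, 1.2

def cherian_pdf(x, y):
    # Cherian's bivariate gamma density: an integral over z in (0, min(x, y))
    integrand = lambda z: z**(a3 - 1) * (x - z)**(a1 - 1) * (y - z)**(a2 - 1) * np.exp(z)
    inner, _ = integrate.quad(integrand, 0, min(x, y))
    return np.exp(-(x + y)) * inner / (gammafn(a1) * gammafn(a2) * gammafn(a3))

# the marginal of X should be a unit-scale gamma(a1 + a3) density
for x in (0.5, 1.5, 4.0):
    m, _ = integrate.quad(lambda y: cherian_pdf(x, y), 0, np.inf)
    print(x, m, gamma.pdf(x, a1 + a3))    # the last two columns should agree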
If $(X, Y) \sim GAMM(\theta, \alpha, \beta)$, then the marginal $f_1(x)$ of X and the marginal $f_2(y)$ of Y are given by

$$ f_1(x) = \begin{cases} \dfrac{\theta^{\alpha}}{\Gamma(\alpha)}\,x^{\alpha-1}\,e^{-\theta x} & \text{if } 0 \le x < \infty \\[1.5ex] 0 & \text{otherwise} \end{cases} $$

and

$$ f_2(y) = \begin{cases} \dfrac{\theta^{\alpha+\beta}}{\Gamma(\alpha+\beta)}\,y^{\alpha+\beta-1}\,e^{-\theta y} & \text{if } 0 \le y < \infty \\[1.5ex] 0 & \text{otherwise}. \end{cases} $$

Hence $X \sim GAM\!\left(\frac{1}{\theta}, \alpha\right)$ and $Y \sim GAM\!\left(\frac{1}{\theta}, \alpha+\beta\right)$. Therefore, we have the following theorem.

Theorem 12.9. If $(X, Y) \sim GAMM(\theta, \alpha, \beta)$, then

$$ E(X) = \frac{\alpha}{\theta}, \qquad E(Y) = \frac{\alpha+\beta}{\theta}, \qquad Var(X) = \frac{\alpha}{\theta^2}, \qquad Var(Y) = \frac{\alpha+\beta}{\theta^2}, \qquad M(s,t) = \frac{\theta^{\alpha+\beta}}{(\theta-s-t)^{\alpha}\,(\theta-t)^{\beta}}. $$

We state the various properties of the conditional densities f(x/y) and f(y/x), without proof, in the following theorem.

Theorem 12.10. If $(X, Y) \sim GAMM(\theta, \alpha, \beta)$, then

$$ E(Y/x) = x + \frac{\beta}{\theta}, \qquad E(X/y) = \frac{\alpha}{\alpha+\beta}\,y, \qquad Var(Y/x) = \frac{\beta}{\theta^2}, \qquad Var(X/y) = \frac{\alpha\,\beta}{(\alpha+\beta)^2\,(\alpha+\beta+1)}\,y^2. $$

We know that the univariate exponential distribution is a special case of the univariate gamma distribution. Similarly, the bivariate exponential distribution is a special case of the bivariate gamma distribution. On taking the index parameters to be unity in the Kibble and Cherian bivariate gamma distributions given above, we obtain the corresponding bivariate exponential distributions.

The bivariate exponential probability density function corresponding to the bivariate gamma distribution of Kibble is given by

$$ f(x,y) = \begin{cases} e^{-\frac{x+y}{1-\theta}} \displaystyle\sum_{k=0}^{\infty} \frac{(\theta\,x\,y)^{k}}{k!\,\Gamma(k+1)\,(1-\theta)^{2k+1}} & \text{if } 0 < x, y < \infty \\[2.5ex] 0 & \text{otherwise}, \end{cases} $$

where $\theta \in (0, 1)$ is a parameter.
The bivariate exponential distribution corresponding to the Cherian bivariate distribution is the following:

$$ f(x,y) = \begin{cases} \left[e^{\min\{x,y\}} - 1\right] e^{-(x+y)} & \text{if } 0 < x, y < \infty \\[1.5ex] 0 & \text{otherwise}. \end{cases} $$

In 1960, Gumbel studied the following bivariate exponential distribution, whose density function is given by

$$ f(x,y) = \begin{cases} \left[(1+\theta x)(1+\theta y) - \theta\right] e^{-(x+y+\theta x y)} & \text{if } 0 < x, y < \infty \\[1.5ex] 0 & \text{otherwise}, \end{cases} $$

where $\theta > 0$ is a parameter.

In 1967, Marshall and Olkin introduced the following bivariate exponential distribution:

$$ F(x,y) = \begin{cases} 1 - e^{-(\alpha+\gamma)x} - e^{-(\beta+\gamma)y} + e^{-(\alpha x + \beta y + \gamma \max\{x,y\})} & \text{if } x, y > 0 \\[1.5ex] 0 & \text{otherwise}, \end{cases} $$

where $\alpha, \beta, \gamma > 0$ are parameters. The exponential distribution of Marshall and Olkin satisfies the bivariate lack of memory property

$$ P(X > x + t,\ Y > y + t) = P(X > x,\ Y > y)\,P(X > t,\ Y > t). $$

12.4. Bivariate Beta Distribution

The bivariate beta distribution (also known as the Dirichlet distribution) is one of the basic distributions in statistics. The bivariate beta distribution is used in geology, biology, and chemistry for handling compositional data, which are subject to nonnegativity and constant-sum constraints. It is also used nowadays with increasing frequency in statistical modeling, distribution theory and Bayesian statistics. For example, it is used to model the distribution of brand shares of certain consumer products, and to describe the joint distribution of two soil strength parameters. Further, it is used to model the proportions of the electorate who vote for the candidates in a two-candidate election. In Bayesian statistics, the beta distribution is very popular as a prior since it yields a beta distribution as the posterior. In this section, we give some basic facts about the bivariate beta distribution.

Definition 12.5. A continuous bivariate random variable (X, Y) is said to have the bivariate beta distribution if its joint probability density function is of the form
$$ f(x,y) = \begin{cases} \dfrac{\Gamma(\theta_1+\theta_2+\theta_3)}{\Gamma(\theta_1)\,\Gamma(\theta_2)\,\Gamma(\theta_3)}\; x^{\theta_1-1}\,y^{\theta_2-1}\,(1-x-y)^{\theta_3-1} & \text{if } 0 < x,\ y,\ x+y < 1 \\[2.5ex] 0 & \text{otherwise}, \end{cases} $$

where $\theta_1, \theta_2, \theta_3$ are positive parameters. We will denote a bivariate beta random variable (X, Y) with positive parameters $\theta_1$, $\theta_2$ and $\theta_3$ by writing $(X, Y) \sim Beta(\theta_1, \theta_2, \theta_3)$.

The following figures show the graph and the equi-density curves of f(x, y) on the domain of its definition.

In the following theorem, we present the expected values, the variances of the random variables X and Y, and the covariance between X and Y.

Theorem 12.11. Let $(X, Y) \sim Beta(\theta_1, \theta_2, \theta_3)$, where $\theta_1$, $\theta_2$ and $\theta_3$ are positive a priori chosen parameters. Then $X \sim Beta(\theta_1, \theta_2+\theta_3)$, $Y \sim Beta(\theta_2, \theta_1+\theta_3)$, and

$$ E(X) = \frac{\theta_1}{\theta}, \qquad E(Y) = \frac{\theta_2}{\theta}, \qquad Var(X) = \frac{\theta_1\,(\theta-\theta_1)}{\theta^2\,(\theta+1)}, \qquad Var(Y) = \frac{\theta_2\,(\theta-\theta_2)}{\theta^2\,(\theta+1)}, \qquad Cov(X, Y) = -\,\frac{\theta_1\,\theta_2}{\theta^2\,(\theta+1)}, $$

where $\theta = \theta_1 + \theta_2 + \theta_3$.

Proof: First, we show that $X \sim Beta(\theta_1, \theta_2+\theta_3)$ and $Y \sim Beta(\theta_2, \theta_1+\theta_3)$. Since $(X, Y) \sim Beta(\theta_1, \theta_2, \theta_3)$, the joint density of (X, Y) is given by

$$ f(x,y) = \frac{\Gamma(\theta)}{\Gamma(\theta_1)\,\Gamma(\theta_2)\,\Gamma(\theta_3)}\; x^{\theta_1-1}\,y^{\theta_2-1}\,(1-x-y)^{\theta_3-1}, $$

where $\theta = \theta_1 + \theta_2 + \theta_3$.
Thus the marginal density of X is given by

$$ f_1(x) = \int_0^{1-x} f(x,y)\,dy
 = \frac{\Gamma(\theta)}{\Gamma(\theta_1)\Gamma(\theta_2)\Gamma(\theta_3)}\; x^{\theta_1-1} \int_0^{1-x} y^{\theta_2-1}\,(1-x-y)^{\theta_3-1}\,dy
 = \frac{\Gamma(\theta)}{\Gamma(\theta_1)\Gamma(\theta_2)\Gamma(\theta_3)}\; x^{\theta_1-1}\,(1-x)^{\theta_3-1} \int_0^{1-x} y^{\theta_2-1}\left(1 - \frac{y}{1-x}\right)^{\theta_3-1}\,dy. $$

Now we substitute $u = \frac{y}{1-x}$ in the above integral. Then we have

$$ f_1(x) = \frac{\Gamma(\theta)}{\Gamma(\theta_1)\Gamma(\theta_2)\Gamma(\theta_3)}\; x^{\theta_1-1}\,(1-x)^{\theta_2+\theta_3-1} \int_0^1 u^{\theta_2-1}\,(1-u)^{\theta_3-1}\,du
 = \frac{\Gamma(\theta)}{\Gamma(\theta_1)\Gamma(\theta_2)\Gamma(\theta_3)}\; x^{\theta_1-1}\,(1-x)^{\theta_2+\theta_3-1}\,B(\theta_2, \theta_3)
 = \frac{\Gamma(\theta)}{\Gamma(\theta_1)\,\Gamma(\theta_2+\theta_3)}\; x^{\theta_1-1}\,(1-x)^{\theta_2+\theta_3-1}, $$

since

$$ \int_0^1 u^{\theta_2-1}\,(1-u)^{\theta_3-1}\,du = B(\theta_2, \theta_3) = \frac{\Gamma(\theta_2)\,\Gamma(\theta_3)}{\Gamma(\theta_2+\theta_3)}. $$

This proves that the random variable $X \sim Beta(\theta_1, \theta_2+\theta_3)$. Similarly, one can show that the random variable $Y \sim Beta(\theta_2, \theta_1+\theta_3)$. Now using Theorem 6.5, we see that
$$ E(X) = \frac{\theta_1}{\theta}, \qquad E(Y) = \frac{\theta_2}{\theta}, \qquad Var(X) = \frac{\theta_1\,(\theta-\theta_1)}{\theta^2\,(\theta+1)}, \qquad Var(Y) = \frac{\theta_2\,(\theta-\theta_2)}{\theta^2\,(\theta+1)}, $$

where $\theta = \theta_1 + \theta_2 + \theta_3$.

Next, we compute the product moment of X and Y. Consider

$$ E(XY) = \int_0^1 \int_0^{1-x} xy\,f(x,y)\,dy\,dx
 = \frac{\Gamma(\theta)}{\Gamma(\theta_1)\Gamma(\theta_2)\Gamma(\theta_3)} \int_0^1 \int_0^{1-x} x^{\theta_1}\,y^{\theta_2}\,(1-x-y)^{\theta_3-1}\,dy\,dx
 = \frac{\Gamma(\theta)}{\Gamma(\theta_1)\Gamma(\theta_2)\Gamma(\theta_3)} \int_0^1 x^{\theta_1}\,(1-x)^{\theta_3-1} \left[\int_0^{1-x} y^{\theta_2}\left(1-\frac{y}{1-x}\right)^{\theta_3-1}\,dy\right] dx. $$

Now we substitute $u = \frac{y}{1-x}$ in the inner integral to obtain

$$ E(XY) = \frac{\Gamma(\theta)}{\Gamma(\theta_1)\Gamma(\theta_2)\Gamma(\theta_3)} \int_0^1 x^{\theta_1}\,(1-x)^{\theta_2+\theta_3}\,dx \int_0^1 u^{\theta_2}\,(1-u)^{\theta_3-1}\,du. $$

Since

$$ \int_0^1 u^{\theta_2}\,(1-u)^{\theta_3-1}\,du = B(\theta_2+1, \theta_3) \qquad \text{and} \qquad \int_0^1 x^{\theta_1}\,(1-x)^{\theta_2+\theta_3}\,dx = B(\theta_1+1, \theta_2+\theta_3+1), $$

we have
$$ E(XY) = \frac{\Gamma(\theta)}{\Gamma(\theta_1)\Gamma(\theta_2)\Gamma(\theta_3)}\; B(\theta_2+1, \theta_3)\; B(\theta_1+1, \theta_2+\theta_3+1)
 = \frac{\Gamma(\theta)}{\Gamma(\theta_1)\Gamma(\theta_2)\Gamma(\theta_3)}\cdot \frac{\theta_2\,\Gamma(\theta_2)\,\Gamma(\theta_3)}{(\theta_2+\theta_3)\,\Gamma(\theta_2+\theta_3)}\cdot \frac{\theta_1\,\Gamma(\theta_1)\,(\theta_2+\theta_3)\,\Gamma(\theta_2+\theta_3)}{\theta\,(\theta+1)\,\Gamma(\theta)}
 = \frac{\theta_1\,\theta_2}{\theta\,(\theta+1)}, $$

where $\theta = \theta_1 + \theta_2 + \theta_3$. Now it is easy to compute the covariance of X and Y, since

$$ Cov(X, Y) = E(XY) - E(X)\,E(Y) = \frac{\theta_1\,\theta_2}{\theta\,(\theta+1)} - \frac{\theta_1}{\theta}\cdot\frac{\theta_2}{\theta} = -\,\frac{\theta_1\,\theta_2}{\theta^2\,(\theta+1)}. $$

The proof of the theorem is now complete.

The correlation coefficient of X and Y can be computed using the covariance as

$$ \rho = \frac{Cov(X,Y)}{\sqrt{Var(X)\,Var(Y)}} = -\,\sqrt{\frac{\theta_1\,\theta_2}{(\theta_1+\theta_3)(\theta_2+\theta_3)}}. $$
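These moment formulas are easy to check by simulation, since numpy can sample directly from the Dirichlet distribution. The following minimal sketch is illustrative only (the parameter values are arbitrary):

import numpy as np

t1, t2, t3 = 2.0, 3.0, 4.0
t = t1 + t2 + t3

rng = np.random.default_rng(1)
sample = rng.dirichlet([t1, t2, t3], size=500_000)
X, Y = sample[:, 0], sample[:, 1]         # (X, Y) ~ Beta(t1, t2, t3)

print(X.mean(), t1 / t)
print(X.var(),  t1 * (t - t1) / (t**2 * (t + 1)))
print(np.cov(X, Y)[0, 1], -t1 * t2 / (t**2 * (t + 1)))
print(np.corrcoef(X, Y)[0, 1], -np.sqrt(t1 * t2 / ((t1 + t3) * (t2 + t3))))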
The next theorem states some properties of the conditional density functions f(x/y) and f(y/x).

Theorem 12.12. Let $(X, Y) \sim Beta(\theta_1, \theta_2, \theta_3)$, where $\theta_1$, $\theta_2$ and $\theta_3$ are positive parameters. Then

$$ E(Y/x) = \frac{\theta_2\,(1-x)}{\theta_2+\theta_3}, \qquad E(X/y) = \frac{\theta_1\,(1-y)}{\theta_1+\theta_3}, $$
$$ Var(Y/x) = \frac{\theta_2\,\theta_3\,(1-x)^2}{(\theta_2+\theta_3)^2\,(\theta_2+\theta_3+1)}, \qquad Var(X/y) = \frac{\theta_1\,\theta_3\,(1-y)^2}{(\theta_1+\theta_3)^2\,(\theta_1+\theta_3+1)}. $$

Proof: We know that if $(X, Y) \sim Beta(\theta_1, \theta_2, \theta_3)$, then $X \sim Beta(\theta_1, \theta_2+\theta_3)$. Therefore

$$ f(y/x) = \frac{f(x,y)}{f_1(x)} = \frac{\Gamma(\theta_2+\theta_3)}{\Gamma(\theta_2)\,\Gamma(\theta_3)}\,\frac{1}{1-x}\left(\frac{y}{1-x}\right)^{\theta_2-1}\left(1 - \frac{y}{1-x}\right)^{\theta_3-1} $$

for all $0 < y < 1-x$. Thus, given X = x, the random variable $\frac{Y}{1-x}$ is a beta random variable with parameters $\theta_2$ and $\theta_3$.

Now we compute the conditional expectation of Y given the event X = x. Consider

$$ E(Y/x) = \int_0^{1-x} y\,f(y/x)\,dy = \frac{\Gamma(\theta_2+\theta_3)}{\Gamma(\theta_2)\,\Gamma(\theta_3)}\,\frac{1}{1-x}\int_0^{1-x} y\left(\frac{y}{1-x}\right)^{\theta_2-1}\left(1-\frac{y}{1-x}\right)^{\theta_3-1} dy. $$

Now we substitute $u = \frac{y}{1-x}$ in the above integral to obtain

$$ E(Y/x) = \frac{\Gamma(\theta_2+\theta_3)}{\Gamma(\theta_2)\,\Gamma(\theta_3)}\,(1-x)\int_0^1 u^{\theta_2}\,(1-u)^{\theta_3-1}\,du = \frac{\Gamma(\theta_2+\theta_3)}{\Gamma(\theta_2)\,\Gamma(\theta_3)}\,(1-x)\,B(\theta_2+1, \theta_3) = \frac{\theta_2}{\theta_2+\theta_3}\,(1-x). $$

Next, we compute $E(Y^2/x)$. Consider

$$ E(Y^2/x) = \int_0^{1-x} y^2\,f(y/x)\,dy = \frac{\Gamma(\theta_2+\theta_3)}{\Gamma(\theta_2)\,\Gamma(\theta_3)}\,(1-x)^2\int_0^1 u^{\theta_2+1}\,(1-u)^{\theta_3-1}\,du = \frac{\Gamma(\theta_2+\theta_3)}{\Gamma(\theta_2)\,\Gamma(\theta_3)}\,(1-x)^2\,B(\theta_2+2, \theta_3) = \frac{(\theta_2+1)\,\theta_2}{(\theta_2+\theta_3+1)\,(\theta_2+\theta_3)}\,(1-x)^2. $$
Therefore

$$ Var(Y/x) = E(Y^2/x) - E(Y/x)^2 = \frac{\theta_2\,\theta_3\,(1-x)^2}{(\theta_2+\theta_3)^2\,(\theta_2+\theta_3+1)}. $$

Similarly, one can compute E(X/y) and Var(X/y); we leave this computation to the reader. The proof of the theorem is now complete.

The Dirichlet distribution can be extended from the unit square $(0,1)^2$ to an arbitrary rectangle $(a_1, b_1) \times (a_2, b_2)$.

Definition 12.6. A continuous bivariate random variable $(X_1, X_2)$ is said to have the generalized bivariate beta distribution if its joint probability density function is of the form

$$ f(x_1, x_2) = \frac{\Gamma(\theta_1+\theta_2+\theta_3)}{\Gamma(\theta_1)\,\Gamma(\theta_2)\,\Gamma(\theta_3)} \prod_{k=1}^{2} \left(\frac{x_k - a_k}{b_k - a_k}\right)^{\theta_k - 1} \left(1 - \frac{x_k - a_k}{b_k - a_k}\right)^{\theta_3 - 1}, $$

where the rescaled variables $\frac{x_1-a_1}{b_1-a_1}$ and $\frac{x_2-a_2}{b_2-a_2}$, together with their sum, lie in $(0, 1)$, and $\theta_1, \theta_2, \theta_3, a_1, b_1, a_2, b_2$ are parameters. We will denote a bivariate generalized beta random variable $(X_1,$