We first show that the generating function \( {h}_{n}\left( z\right) \) of the distribution of \( {Z}_{n} \) can be obtained from \( h\left( z\right) \) for any branching process.
We recall that for any random variable \( X \) the value of its generating function at \( z \) can be written as

\[ h(z) = E(z^X) = p_0 + p_1 z + p_2 z^2 + \cdots . \]

That is, \( h(z) \) is the expected value of an experiment which has outcome \( z^j \) with probability \( p_j \).

Let \( S_n = X_1 + X_2 + \cdots + X_n \), where each \( X_j \) has the same integer-valued distribution \( (p_j) \) with generating function \( k(z) = p_0 + p_1 z + p_2 z^2 + \cdots \). Let \( k_n(z) \) be the generating function of \( S_n \). Then, using one of the properties of ordinary generating functions discussed in Section 10.1, we have

\[ k_n(z) = (k(z))^n , \]

since the \( X_j \)'s are independent and all have the same distribution.

Consider now the branching process \( Z_n \). Let \( h_n(z) \) be the generating function of \( Z_n \). Then

\[ h_{n+1}(z) = E(z^{Z_{n+1}}) = \sum_k E(z^{Z_{n+1}} \mid Z_n = k) P(Z_n = k) . \]

If \( Z_n = k \), then \( Z_{n+1} = X_1 + X_2 + \cdots + X_k \), where \( X_1, X_2, \ldots, X_k \) are independent random variables with common generating function \( h(z) \). Thus

\[ E(z^{Z_{n+1}} \mid Z_n = k) = E(z^{X_1 + X_2 + \cdots + X_k}) = (h(z))^k , \]

and

\[ h_{n+1}(z) = \sum_k (h(z))^k P(Z_n = k) . \]

But

\[ h_n(z) = \sum_k P(Z_n = k) z^k . \]

Thus,

\[ h_{n+1}(z) = h_n(h(z)) . \]

(10.5)

If we differentiate Equation 10.5 and use the chain rule, we have

\[ h_{n+1}'(z) = h_n'(h(z)) \, h'(z) . \]

Putting \( z = 1 \) and using the facts that \( h(1) = 1 \), \( h'(1) = m \), and \( h_n'(1) = m_n \), the mean number of offspring in the \( n \)th generation, we have

\[ m_{n+1} = m_n \cdot m . \]

Thus, \( m_2 = m \cdot m = m^2 \), \( m_3 = m^2 \cdot m = m^3 \), and in general

\[ m_n = m^n . \]

Thus, for a branching process with \( m > 1 \), the mean number of offspring grows exponentially at rate \( m \).
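The iteration \( h_{n+1}(z) = h_n(h(z)) \) and the conclusion \( m_n = m^n \) can be checked numerically. The sketch below (variable names are ours) uses the offspring distribution of Example 10.8, \( p_0 = 1/2 \), \( p_1 = 1/4 \), \( p_2 = 1/4 \), for which \( m = 3/4 \):

```python
from numpy.polynomial import Polynomial

# Offspring distribution of Example 10.8: p0 = 1/2, p1 = 1/4, p2 = 1/4
h = Polynomial([0.5, 0.25, 0.25])      # h(z) = 1/2 + (1/4)z + (1/4)z^2
m = h.deriv()(1.0)                     # mean number of offspring, m = h'(1) = 3/4

# Iterate h_{n+1}(z) = h_n(h(z)) (Equation 10.5) and record m_n = h_n'(1)
hn, means = h, []
for n in range(1, 6):
    means.append(hn.deriv()(1.0))
    hn = hn(h)                         # polynomial composition h_n(h(z))
```

Each recorded mean agrees with \( m^n \), as the derivation predicts.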
For the branching process of Example 10.8 we have

\[ h(z) = 1/2 + (1/4)z + (1/4)z^2 , \]

\[ h_2(z) = h(h(z)) = 1/2 + (1/4)\left[ 1/2 + (1/4)z + (1/4)z^2 \right] + (1/4)\left[ 1/2 + (1/4)z + (1/4)z^2 \right]^2 \]

\[ = 11/16 + (1/8)z + (9/64)z^2 + (1/32)z^3 + (1/64)z^4 . \]
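The coefficients of \( h_2 \) can be verified by composing the polynomial with itself; a minimal sketch (names ours):

```python
from numpy.polynomial import Polynomial

h = Polynomial([0.5, 0.25, 0.25])   # h(z) from Example 10.8
h2 = h(h)                           # h_2(z) = h(h(z))
coeffs = h2.coef                    # constant term first
```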
Assume that the probabilities \( p_1, p_2, \ldots \) form a geometric series: \( p_k = b c^{k-1} \), \( k = 1, 2, \ldots \), with \( 0 < b \leq 1 - c \) and \( 0 < c < 1 \). Then we have

\[ p_0 = 1 - p_1 - p_2 - \cdots = 1 - b - bc - bc^2 - \cdots = 1 - \frac{b}{1-c} . \]

The generating function \( h(z) \) for this distribution is

\[ h(z) = p_0 + p_1 z + p_2 z^2 + \cdots = 1 - \frac{b}{1-c} + bz + bcz^2 + bc^2 z^3 + \cdots = 1 - \frac{b}{1-c} + \frac{bz}{1-cz} . \]

From this we find

\[ h'(z) = \frac{bcz}{(1-cz)^2} + \frac{b}{1-cz} = \frac{b}{(1-cz)^2} \]

and

\[ m = h'(1) = \frac{b}{(1-c)^2} . \]

We know that if \( m \leq 1 \) the process will surely die out and \( d = 1 \). To find the probability \( d \) when \( m > 1 \) we must find a root \( d < 1 \) of the equation

\[ z = h(z) , \]

or

\[ z = 1 - \frac{b}{1-c} + \frac{bz}{1-cz} . \]

This leads us to a quadratic equation. We know that \( z = 1 \) is one solution. The other is found to be

\[ d = \frac{1 - b - c}{c(1-c)} . \]

It is easy to verify that \( d < 1 \) just when \( m > 1 \).
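The claimed root can be checked directly: for any admissible \( b, c \) with \( m > 1 \), the value \( d = (1-b-c)/(c(1-c)) \) should satisfy \( h(d) = d \). A short sketch with hypothetical parameter values of our choosing:

```python
# Hypothetical parameter values satisfying 0 < b <= 1 - c and m > 1
b, c = 0.2, 0.6
m = b / (1 - c)**2                      # mean number of offspring

def h(z):
    # Generating function of the geometric offspring distribution
    return 1 - b/(1 - c) + b*z/(1 - c*z)

d = (1 - b - c) / (c * (1 - c))         # claimed extinction probability
```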
Let us re-examine the Keyfitz data to see if a distribution of the type considered in Example 10.11 could reasonably be used as a model for this population. We would have to estimate from the data the parameters \( b \) and \( c \) for the formula \( {p}_{k} = b{c}^{k - 1} \) .
Solving Equations 10.6 and 10.7 for \( b \) and \( c \) gives

\[ c = \frac{m-1}{m-d} \]

and

\[ b = m \left( \frac{1-d}{m-d} \right)^2 . \]

We shall use the values 1.837 for \( m \) and .324 for \( d \) that we found in the Keyfitz example. Using these values, we obtain \( b = .3666 \) and \( c = .5533 \). Note that \( (1-c)^2 < b < 1 - c \), as required. In Table 10.3 we give for comparison the probabilities \( p_0 \) through \( p_8 \) as calculated by the geometric distribution versus the empirical values.
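The arithmetic above is easy to reproduce (a minimal sketch; names ours):

```python
m, d = 1.837, 0.324                 # estimates from the Keyfitz example
c = (m - 1) / (m - d)               # approximately .5533
b = m * ((1 - d) / (m - d))**2      # approximately .3666
```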
We now examine the random variable \( Z_n \) more closely for the case \( m > 1 \) (see Example 10.11). Fix a value \( t > 0 \); let \( [tm^n] \) be the integer part of \( tm^n \). Then
\[ P(Z_n = [tm^n]) = m^n \left( \frac{1-d}{m^n - d} \right)^2 \left( \frac{m^n - 1}{m^n - d} \right)^{[tm^n] - 1} = \frac{1}{m^n} \left( \frac{1-d}{1 - d/m^n} \right)^2 \left( \frac{1 - 1/m^n}{1 - d/m^n} \right)^{tm^n + a} , \]

where \( |a| \leq 2 \). Thus, as \( n \to \infty \),

\[ m^n P(Z_n = [tm^n]) \to (1-d)^2 \frac{e^{-t}}{e^{-td}} = (1-d)^2 e^{-t(1-d)} . \]

For \( t = 0 \),

\[ P(Z_n = 0) \to d . \]
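This limit can be checked numerically from the closed form \( P(Z_n = k) = m^n \left( \frac{1-d}{m^n-d} \right)^2 \left( \frac{m^n-1}{m^n-d} \right)^{k-1} \) displayed above. The sketch below uses hypothetical supercritical parameters of our choosing (\( m = 1.25 \), \( d = 5/6 \), which arise from \( b = 0.2 \), \( c = 0.6 \)):

```python
import math

# Hypothetical supercritical geometric case: b = 0.2, c = 0.6 gives m = 1.25, d = 5/6
m, d, t = 1.25, 5/6, 1.0

def scaled_prob(n):
    # m^n * P(Z_n = [t m^n]), using P(Z_n = k) = b_n * c_n**(k - 1) with
    # b_n = m^n ((1-d)/(m^n-d))^2 and c_n = (m^n-1)/(m^n-d)
    M = m**n
    k = int(t * M)                  # integer part [t m^n]
    bn = M * ((1 - d) / (M - d))**2
    cn = (M - 1) / (M - d)
    return M * bn * cn**(k - 1)

limit = (1 - d)**2 * math.exp(-t * (1 - d))
```

For growing \( n \), `scaled_prob(n)` approaches `limit`, as claimed.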
Let us first assume that the buyer may sell the letter only to a single person. If you buy the letter you will want to compute your expected winnings. (We are ignoring here the fact that the passing on of chain letters through the mail is a federal offense with certain obvious resulting penalties.) Assume that each person involved has a probability \( p \) of selling the letter. Then you will receive 50 dollars with probability \( p \) and another 50 dollars if the letter is sold to 12 people, since then your name would have risen to the top of the list. This occurs with probability \( {p}^{12} \) , and so your expected winnings are \( - {100} + {50p} + {50}{p}^{12} \) .
Since \( -100 + 50p + 50p^{12} \leq 0 \) for all \( p \leq 1 \), with equality only at \( p = 1 \), the chain letter in this situation is a highly unfavorable game.
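A quick sketch of the winnings function (names ours) confirms that it is never positive:

```python
def expected_winnings(p):
    # -100 to buy the letter, 50 with probability p (first sale),
    # 50 more with probability p**12 (name reaches the top of the list)
    return -100 + 50*p + 50*p**12
```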
Example 10.15 Let \( X \) be a continuous random variable with range \( [0,1] \) and density function \( f_X(x) = 1 \) for \( 0 \leq x \leq 1 \) (uniform density). Then

\[ \mu_n = \int_0^1 x^n \, dx = \frac{1}{n+1} , \]

and

\[ g(t) = \sum_{k=0}^{\infty} \frac{t^k}{(k+1)!} = \frac{e^t - 1}{t} . \]

Here the series converges for all \( t \). Alternatively, we have

\[ g(t) = \int_{-\infty}^{+\infty} e^{tx} f_X(x) \, dx = \int_0^1 e^{tx} \, dx = \frac{e^t - 1}{t} . \]

Then (by L'Hôpital's rule)

\[ \mu_0 = g(0) = \lim_{t \to 0} \frac{e^t - 1}{t} = 1 , \]

\[ \mu_1 = g'(0) = \lim_{t \to 0} \frac{t e^t - e^t + 1}{t^2} = \frac{1}{2} , \]

\[ \mu_2 = g''(0) = \lim_{t \to 0} \frac{t^3 e^t - 2t^2 e^t + 2t e^t - 2t}{t^4} = \frac{1}{3} . \]
Let \( X \) have range \( \lbrack 0,\infty ) \) and density function \( {f}_{X}\left( x\right) = \lambda {e}^{-{\lambda x}} \) (exponential density with parameter \( \lambda \) ). In this case
\[ \mu_n = \int_0^{\infty} x^n \lambda e^{-\lambda x} \, dx = \lambda (-1)^n \frac{d^n}{d\lambda^n} \int_0^{\infty} e^{-\lambda x} \, dx = \lambda (-1)^n \frac{d^n}{d\lambda^n} \left[ \frac{1}{\lambda} \right] = \frac{n!}{\lambda^n} , \]

and

\[ g(t) = \sum_{k=0}^{\infty} \frac{\mu_k t^k}{k!} = \sum_{k=0}^{\infty} \left[ \frac{t}{\lambda} \right]^k = \frac{\lambda}{\lambda - t} . \]

Here the series converges only for \( |t| < \lambda \). Alternatively, we have

\[ g(t) = \int_0^{\infty} e^{tx} \lambda e^{-\lambda x} \, dx = \left. \frac{\lambda e^{(t-\lambda)x}}{t - \lambda} \right|_0^{\infty} = \frac{\lambda}{\lambda - t} . \]

Now we can verify directly that

\[ \mu_n = g^{(n)}(0) = \left. \frac{\lambda \, n!}{(\lambda - t)^{n+1}} \right|_{t=0} = \frac{n!}{\lambda^n} . \]
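Since \( \mu_k = k!/\lambda^k \), the moment series is geometric, which makes the closed form easy to confirm numerically (a sketch with parameter values of our choosing):

```python
# Check g(t) = lam/(lam - t) against the moment series, for |t| < lam
lam, t = 2.0, 0.5
# mu_k = k!/lam^k, so mu_k t^k / k! = (t/lam)^k and the series is geometric
series = sum((t / lam)**k for k in range(200))
closed_form = lam / (lam - t)
```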
Let \( X \) have range \( (-\infty, +\infty) \) and density function

\[ f_X(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2} \]

(normal density). In this case we have

\[ \mu_n = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{+\infty} x^n e^{-x^2/2} \, dx = \left\{ \begin{array}{ll} \frac{(2m)!}{2^m m!}, & \text{if } n = 2m, \\ 0, & \text{if } n = 2m+1. \end{array} \right. \]

(These moments are calculated by integrating once by parts to show that \( \mu_n = (n-1)\mu_{n-2} \), and observing that \( \mu_0 = 1 \) and \( \mu_1 = 0 \).) Hence,

\[ g(t) = \sum_{n=0}^{\infty} \frac{\mu_n t^n}{n!} = \sum_{m=0}^{\infty} \frac{t^{2m}}{2^m m!} = e^{t^2/2} . \]

This series converges for all values of \( t \). Again we can verify that \( g^{(n)}(0) = \mu_n \).
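The recursion \( \mu_n = (n-1)\mu_{n-2} \) and the closed form \( (2m)!/(2^m m!) \) agree, as a short sketch shows (names ours):

```python
import math

def normal_moment(n):
    # Recursion mu_n = (n - 1) * mu_{n-2}, with mu_0 = 1 and mu_1 = 0
    if n == 0:
        return 1
    if n == 1:
        return 0
    return (n - 1) * normal_moment(n - 2)
```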
Theorem 10.3 Suppose \( X \) is a continuous random variable with range contained in the interval \( [-M, M] \). Then the series

\[ g(t) = \sum_{k=0}^{\infty} \frac{\mu_k t^k}{k!} \]

converges for all \( t \) to an infinitely differentiable function \( g(t) \), and \( g^{(n)}(0) = \mu_n \).
Proof. We have

\[ \mu_k = \int_{-M}^{+M} x^k f_X(x) \, dx , \]

so

\[ |\mu_k| \leq \int_{-M}^{+M} |x|^k f_X(x) \, dx \leq M^k \int_{-M}^{+M} f_X(x) \, dx = M^k . \]

Hence, for all \( N \) we have

\[ \sum_{k=0}^{N} \left| \frac{\mu_k t^k}{k!} \right| \leq \sum_{k=0}^{N} \frac{(M|t|)^k}{k!} \leq e^{M|t|} , \]

which shows that the power series converges for all \( t \). We know that the sum of a convergent power series is infinitely differentiable and can be differentiated term by term, which gives \( g^{(n)}(0) = \mu_n \).
Theorem 10.4 If \( X \) is a bounded random variable, then the moment generating function \( g_X(t) \) of \( X \) determines the density function \( f_X(x) \) uniquely.
Sketch of the Proof. We know that

\[ g_X(t) = \sum_{k=0}^{\infty} \frac{\mu_k t^k}{k!} = \int_{-\infty}^{+\infty} e^{tx} f_X(x) \, dx . \]

If we replace \( t \) by \( i\tau \), where \( \tau \) is real and \( i = \sqrt{-1} \), then the series converges for all \( \tau \), and we can define the function

\[ k_X(\tau) = g_X(i\tau) = \int_{-\infty}^{+\infty} e^{i\tau x} f_X(x) \, dx . \]

The function \( k_X(\tau) \) is called the characteristic function of \( X \), and is defined by the above equation even when the series for \( g_X \) does not converge. This equation says that \( k_X \) is the Fourier transform of \( f_X \). It is known that the Fourier transform has an inverse, given by the formula

\[ f_X(x) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} e^{-i\tau x} k_X(\tau) \, d\tau , \]

suitably interpreted. \( {}^{9} \) Here we see that the characteristic function \( k_X \), and hence the moment generating function \( g_X \), determines the density function \( f_X \) uniquely under our hypotheses.
We consider the question of determining the probability that, given the chain is in state \( i \) today, it will be in state \( j \) two days from now. We denote this probability by \( {p}_{ij}^{\left( 2\right) } \) .
In Example 11.1, we see that if it is rainy today then the event that it is snowy two days from now is the disjoint union of the following three events: 1) it is rainy tomorrow and snowy two days from now, 2) it is nice tomorrow and snowy two days from now, and 3) it is snowy tomorrow and snowy two days from now. The probability of the first of these events is the product of the conditional probability that it is rainy tomorrow, given that it is rainy today, and the conditional probability that it is snowy two days from now, given that it is rainy tomorrow. Using the transition matrix \( \mathbf{P} \), we can write this product as \( p_{11} p_{13} \). The other two events also have probabilities that can be written as products of entries of \( \mathbf{P} \). Thus, we have

\[ p_{13}^{(2)} = p_{11} p_{13} + p_{12} p_{23} + p_{13} p_{33} . \]

This equation should remind the reader of a dot product of two vectors; we are dotting the first row of \( \mathbf{P} \) with the third column of \( \mathbf{P} \). This is just what is done in obtaining the \( 1,3 \)-entry of the product of \( \mathbf{P} \) with itself. In general, if a Markov chain has \( r \) states, then

\[ p_{ij}^{(2)} = \sum_{k=1}^{r} p_{ik} p_{kj} . \]

The following general theorem is easy to prove by using the above observation and induction.
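The dot-product observation is exactly matrix multiplication, as a short sketch with the Land of Oz matrix of Example 11.1 shows (state order R, N, S):

```python
import numpy as np

# Land of Oz transition matrix (Example 11.1), states ordered R, N, S
P = np.array([[0.50, 0.25, 0.25],
              [0.50, 0.00, 0.50],
              [0.25, 0.25, 0.50]])

P2 = P @ P                          # two-step transition matrix
# p_13^(2): dot the first row of P with the third column of P
p13_2 = P[0, :] @ P[:, 2]
```

Here \( p_{13}^{(2)} = .375 \), and it equals the \( 1,3 \)-entry of \( \mathbf{P}^2 \).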
Theorem 11.1 Let \( \mathbf{P} \) be the transition matrix of a Markov chain. The \( {ij} \) th entry \( {p}_{ij}^{\left( n\right) } \) of the matrix \( {\mathbf{P}}^{n} \) gives the probability that the Markov chain, starting in state \( {s}_{i} \), will be in state \( {s}_{j} \) after \( n \) steps.
Proof. The proof of this theorem is left as an exercise (Exercise 17).
Consider again the weather in the Land of \( \mathrm{{Oz}} \). We know that the powers of the transition matrix give us interesting information about the process as it evolves. We shall be particularly interested in the state of the chain after a large number of steps.
The program MatrixPowers computes the powers of \( \mathbf{P} \). We have run the program MatrixPowers for the Land of Oz example to compute the successive powers of \( \mathbf{P} \) from 1 to 6. The results are shown in Table 11.1. We note that after six days our weather predictions are, to three-decimal-place accuracy, independent of today's weather. The probabilities for the three types of weather, R, N, and S, are .4, .2, and .4, no matter where the chain started. This is an example of a type of Markov chain called a regular Markov chain. For this type of chain, it is true that long-range predictions are independent of the starting state. Not all chains are regular, but this is an important class of chains that we shall study in detail later.
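The computation MatrixPowers performs can be sketched in a few lines (this is our stand-in for the program, not its actual source):

```python
import numpy as np

P = np.array([[0.50, 0.25, 0.25],   # Land of Oz transition matrix, states R, N, S
              [0.50, 0.00, 0.50],
              [0.25, 0.25, 0.50]])
P6 = np.linalg.matrix_power(P, 6)   # every row is (.4, .2, .4) to three decimal places
```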
Theorem 11.2 Let \( \mathbf{P} \) be the transition matrix of a Markov chain, and let \( \mathbf{u} \) be the probability vector which represents the starting distribution. Then the probability that the chain is in state \( s_i \) after \( n \) steps is the \( i \)th entry in the vector

\[ \mathbf{u}^{(n)} = \mathbf{u} \mathbf{P}^n . \]
Proof. The proof of this theorem is left as an exercise (Exercise 18).
In the Land of Oz example (Example 11.1) let the initial probability vector \( \mathbf{u} \) equal \( \left( {1/3,1/3,1/3}\right) \). Then we can calculate the distribution of the states after three days using Theorem 11.2 and our previous calculation of \( {\mathbf{P}}^{3} \).
\[ {\mathbf{u}}^{\left( 3\right) } = \mathbf{u}{\mathbf{P}}^{3} = \left( \begin{array}{lll} 1/3, & 1/3, & 1/3 \end{array}\right) \left( \begin{array}{lll} {.406} & {.203} & {.391} \\ {.406} & {.188} & {.406} \\ {.391} & {.203} & {.406} \end{array}\right) \] \[ = \left( \begin{array}{lll} {.401}, & {.198}, & {.401} \end{array}\right) . \]
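The same computation in a brief sketch (names ours):

```python
import numpy as np

P = np.array([[0.50, 0.25, 0.25],   # Land of Oz transition matrix
              [0.50, 0.00, 0.50],
              [0.25, 0.25, 0.50]])
u = np.array([1/3, 1/3, 1/3])       # initial distribution
u3 = u @ np.linalg.matrix_power(P, 3)   # distribution after three days
```

To three decimal places this gives \( (.401, .198, .401) \), as above.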
The President of the United States tells person A his or her intention to run or not to run in the next election. Then A relays the news to B, who in turn relays the message to \( \mathrm{C} \), and so forth, always to some new person. We assume that there is a probability \( a \) that a person will change the answer from yes to no when transmitting it to the next person and a probability \( b \) that he or she will change it from no to yes. We choose as states the message, either yes or no. The transition matrix is then
\[ \mathbf{P} = \begin{matrix} & \begin{matrix} \text{yes} & \text{no} \end{matrix} \\ \begin{matrix} \text{yes} \\ \text{no} \end{matrix} & \begin{pmatrix} 1-a & a \\ b & 1-b \end{pmatrix} \end{matrix} . \]
Each time a certain horse runs in a three-horse race, he has probability \( 1/2 \) of winning, \( 1/4 \) of coming in second, and \( 1/4 \) of coming in third, independent of the outcome of any previous race. We have an independent trials process,
but it can also be considered from the point of view of Markov chain theory. The transition matrix is

\[ \mathbf{P} = \begin{matrix} & \begin{matrix} \text{W} & \text{P} & \text{S} \end{matrix} \\ \begin{matrix} \text{W} \\ \text{P} \\ \text{S} \end{matrix} & \begin{pmatrix} .5 & .25 & .25 \\ .5 & .25 & .25 \\ .5 & .25 & .25 \end{pmatrix} \end{matrix} . \]
Example 11.6 In the Dark Ages, Harvard, Dartmouth, and Yale admitted only male students. Assume that, at that time, 80 percent of the sons of Harvard men went to Harvard and the rest went to Yale, 40 percent of the sons of Yale men went to Yale, and the rest split evenly between Harvard and Dartmouth; and of the sons of Dartmouth men, 70 percent went to Dartmouth, 20 percent to Harvard, and 10 percent to Yale. We form a Markov chain with transition matrix
\[ \mathbf{P} = \begin{matrix} & \begin{matrix} \text{H} & \text{Y} & \text{D} \end{matrix} \\ \begin{matrix} \text{H} \\ \text{Y} \\ \text{D} \end{matrix} & \begin{pmatrix} .8 & .2 & 0 \\ .3 & .4 & .3 \\ .2 & .1 & .7 \end{pmatrix} \end{matrix} . \]
Example 11.8 (Ehrenfest Model) The following is a special case of a model, called the Ehrenfest model, \( {}^{3} \) that has been used to explain diffusion of gases. The general model will be discussed in detail in Section 11.5. We have two urns that, between them, contain four balls. At each step, one of the four balls is chosen at random and moved from the urn that it is in into the other urn. We choose, as states, the number of balls in the first urn. The transition matrix is then
\[ \mathbf{P} = \begin{matrix} & \begin{matrix} 0 & 1 & 2 & 3 & 4 \end{matrix} \\ \begin{matrix} 0 \\ 1 \\ 2 \\ 3 \\ 4 \end{matrix} & \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 1/4 & 0 & 3/4 & 0 & 0 \\ 0 & 1/2 & 0 & 1/2 & 0 \\ 0 & 0 & 3/4 & 0 & 1/4 \\ 0 & 0 & 0 & 1 & 0 \end{pmatrix} \end{matrix} . \]
Consider a process of continued matings. We start with an individual of known genetic character and mate it with a hybrid. We assume that there is at least one offspring. An offspring is chosen at random and is mated with a hybrid, and this process is repeated through a number of generations. The genetic type of the chosen offspring in successive generations can be represented by a Markov chain. The states are dominant, hybrid, and recessive, and are indicated by GG, Gg, and gg respectively.
The transition probabilities are

\[ \mathbf{P} = \begin{matrix} & \begin{matrix} \text{GG} & \text{Gg} & \text{gg} \end{matrix} \\ \begin{matrix} \text{GG} \\ \text{Gg} \\ \text{gg} \end{matrix} & \begin{pmatrix} .5 & .5 & 0 \\ .25 & .5 & .25 \\ 0 & .5 & .5 \end{pmatrix} \end{matrix} . \]
Example 11.11 We start with two animals of opposite sex, mate them, select two of their offspring of opposite sex, and mate those, and so forth. To simplify the example, we will assume that the trait under consideration is independent of sex.
Here a state is determined by a pair of animals. Hence, the states of our process will be: \( s_1 = (\mathrm{GG}, \mathrm{GG}) \), \( s_2 = (\mathrm{GG}, \mathrm{Gg}) \), \( s_3 = (\mathrm{GG}, \mathrm{gg}) \), \( s_4 = (\mathrm{Gg}, \mathrm{Gg}) \), \( s_5 = (\mathrm{Gg}, \mathrm{gg}) \), and \( s_6 = (\mathrm{gg}, \mathrm{gg}) \). We illustrate the calculation of transition probabilities in terms of the state \( s_2 \). When the process is in this state, one parent has GG genes, the other Gg. Hence, the probability of a dominant offspring is \( 1/2 \). Then the probability of transition to \( s_1 \) (selection of two dominants) is \( 1/4 \), transition to \( s_2 \) is \( 1/2 \), and to \( s_4 \) is \( 1/4 \). The other states are treated the same way. The transition matrix of this chain is:

\[ \mathbf{P} = \begin{matrix} & \begin{matrix} \text{GG,GG} & \text{GG,Gg} & \text{GG,gg} & \text{Gg,Gg} & \text{Gg,gg} & \text{gg,gg} \end{matrix} \\ \begin{matrix} \text{GG,GG} \\ \text{GG,Gg} \\ \text{GG,gg} \\ \text{Gg,Gg} \\ \text{Gg,gg} \\ \text{gg,gg} \end{matrix} & \begin{pmatrix} 1.000 & .000 & .000 & .000 & .000 & .000 \\ .250 & .500 & .000 & .250 & .000 & .000 \\ .000 & .000 & .000 & 1.000 & .000 & .000 \\ .062 & .250 & .125 & .250 & .250 & .062 \\ .000 & .000 & .000 & .250 & .500 & .250 \\ .000 & .000 & .000 & .000 & .000 & 1.000 \end{pmatrix} \end{matrix} . \]
Example 11.12 (Stepping Stone Model) Our final example is another example that has been used in the study of genetics. It is called the stepping stone model. \( {}^{4} \) In this model we have an \( n \) -by- \( n \) array of squares, and each square is initially any one of \( k \) different colors. For each step, a square is chosen at random. This square then chooses one of its eight neighbors at random and assumes the color of that neighbor. To avoid boundary problems, we assume that if a square \( S \) is on the left-hand boundary, say, but not at a corner, it is adjacent to the square \( T \) on the right-hand boundary in the same row as \( S \), and \( S \) is also adjacent to the squares just above and below \( T \) . A similar assumption is made about squares on the upper and lower boundaries. The top left-hand corner square is adjacent to three obvious neighbors, namely the squares below it, to its right, and diagonally below and to the right. It has five other neighbors, which are as follows: the other three corner squares, the square below the upper right-hand corner, and the square to the right of the bottom left-hand corner. The other three corners also have, in a similar way, eight neighbors. (These adjacencies are much easier to understand if one imagines making the array into a cylinder by gluing the top and bottom edge together, and then making the cylinder into a doughnut by gluing the two circular boundaries together.) With these adjacencies, each square in the array is adjacent to exactly eight other squares.
This is an example of an absorbing Markov chain. This type of chain will be studied in Section 11.2. One of the theorems proved in that section, applied to the present example, implies that with probability 1 , the stones will eventually all be the same color. By watching the program run, you can see that territories are established and a battle develops to see which color survives. At any time the probability that a particular color will win out is equal to the proportion of the array of this color. You are asked to prove this in Exercise 11.2.32.
Example 11.13 A man walks along a four-block stretch of Park Avenue (see Figure 11.3). If he is at corner \( 1,2 \), or 3, then he walks to the left or right with equal probability. He continues until he reaches corner 4 , which is a bar, or corner 0 , which is his home. If he reaches either home or the bar, he stays there.
We form a Markov chain with states 0, 1, 2, 3, and 4. States 0 and 4 are absorbing states. The transition matrix is then

\[ \mathbf{P} = \begin{matrix} & \begin{matrix} 0 & 1 & 2 & 3 & 4 \end{matrix} \\ \begin{matrix} 0 \\ 1 \\ 2 \\ 3 \\ 4 \end{matrix} & \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 1/2 & 0 & 1/2 & 0 & 0 \\ 0 & 1/2 & 0 & 1/2 & 0 \\ 0 & 0 & 1/2 & 0 & 1/2 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix} \end{matrix} . \]

The states 1, 2, and 3 are transient states, and from any of these it is possible to reach the absorbing states 0 and 4. Hence the chain is an absorbing chain. When a process reaches an absorbing state, we shall say that it is absorbed.
Theorem 11.3 In an absorbing Markov chain, the probability that the process will be absorbed is 1 (i.e., \( {\mathbf{Q}}^{n} \rightarrow \mathbf{0} \) as \( n \rightarrow \infty \) ).
Proof. From each nonabsorbing state \( {s}_{j} \) it is possible to reach an absorbing state. Let \( {m}_{j} \) be the minimum number of steps required to reach an absorbing state, starting from \( {s}_{j} \). Let \( {p}_{j} \) be the probability that, starting from \( {s}_{j} \), the process will not reach an absorbing state in \( {m}_{j} \) steps. Then \( {p}_{j} < 1 \). Let \( m \) be the largest of the \( {m}_{j} \) and let \( p \) be the largest of \( {p}_{j} \). The probability of not being absorbed in \( m \) steps is less than or equal to \( p \), in \( {2m} \) steps less than or equal to \( {p}^{2} \), etc. Since \( p < 1 \) these probabilities tend to 0. Since the probability of not being absorbed in \( n \) steps is monotone decreasing, these probabilities also tend to 0, hence \( \mathop{\lim }\limits_{{n \rightarrow \infty }}{\mathbf{Q}}^{n} = 0 \). \( ▱ \)
Theorem 11.4 For an absorbing Markov chain the matrix \( \mathbf{I} - \mathbf{Q} \) has an inverse \( \mathbf{N} \) and \( \mathbf{N} = \mathbf{I} + \mathbf{Q} + {\mathbf{Q}}^{2} + \cdots \) . The \( {ij} \) -entry \( {n}_{ij} \) of the matrix \( \mathbf{N} \) is the expected number of times the chain is in state \( {s}_{j} \), given that it starts in state \( {s}_{i} \) . The initial state is counted if \( i = j \) .
Proof. Let \( (\mathbf{I} - \mathbf{Q})\mathbf{x} = 0 \); that is, \( \mathbf{x} = \mathbf{Q}\mathbf{x} \). Then, iterating this we see that \( \mathbf{x} = \mathbf{Q}^n \mathbf{x} \). Since \( \mathbf{Q}^n \to \mathbf{0} \), we have \( \mathbf{Q}^n \mathbf{x} \to \mathbf{0} \), so \( \mathbf{x} = \mathbf{0} \). Thus \( (\mathbf{I} - \mathbf{Q})^{-1} = \mathbf{N} \) exists. Note next that

\[ (\mathbf{I} - \mathbf{Q})(\mathbf{I} + \mathbf{Q} + \mathbf{Q}^2 + \cdots + \mathbf{Q}^n) = \mathbf{I} - \mathbf{Q}^{n+1} . \]

Thus multiplying both sides by \( \mathbf{N} \) gives

\[ \mathbf{I} + \mathbf{Q} + \mathbf{Q}^2 + \cdots + \mathbf{Q}^n = \mathbf{N} (\mathbf{I} - \mathbf{Q}^{n+1}) . \]

Letting \( n \) tend to infinity we have

\[ \mathbf{N} = \mathbf{I} + \mathbf{Q} + \mathbf{Q}^2 + \cdots . \]

Let \( s_i \) and \( s_j \) be two transient states, and assume throughout the remainder of the proof that \( i \) and \( j \) are fixed. Let \( X^{(k)} \) be a random variable which equals 1 if the chain is in state \( s_j \) after \( k \) steps, and equals 0 otherwise. For each \( k \), this random variable depends upon both \( i \) and \( j \); we choose not to show this dependence explicitly, in the interest of clarity. We have

\[ P(X^{(k)} = 1) = q_{ij}^{(k)} \]

and

\[ P(X^{(k)} = 0) = 1 - q_{ij}^{(k)} , \]

where \( q_{ij}^{(k)} \) is the \( ij \)th entry of \( \mathbf{Q}^k \). These equations hold for \( k = 0 \) since \( \mathbf{Q}^0 = \mathbf{I} \).
Therefore, since \( X^{(k)} \) is a 0-1 random variable, \( E(X^{(k)}) = q_{ij}^{(k)} \).

The expected number of times the chain is in state \( s_j \) in the first \( n \) steps, given that it starts in state \( s_i \), is clearly

\[ E(X^{(0)} + X^{(1)} + \cdots + X^{(n)}) = q_{ij}^{(0)} + q_{ij}^{(1)} + \cdots + q_{ij}^{(n)} . \]

Letting \( n \) tend to infinity we have

\[ E(X^{(0)} + X^{(1)} + \cdots) = q_{ij}^{(0)} + q_{ij}^{(1)} + \cdots = n_{ij} . \]
Theorem 11.5 Let \( t_i \) be the expected number of steps before the chain is absorbed, given that the chain starts in state \( s_i \), and let \( \mathbf{t} \) be the column vector whose \( i \)th entry is \( t_i \). Then

\[ \mathbf{t} = \mathbf{N}\mathbf{c} , \]

where \( \mathbf{c} \) is a column vector all of whose entries are 1.
Proof. If we add all the entries in the \( i \) th row of \( \mathbf{N} \), we will have the expected number of times in any of the transient states for a given starting state \( {s}_{i} \), that is, the expected time required before being absorbed. Thus, \( {t}_{i} \) is the sum of the entries in the \( i \) th row of \( \mathbf{N} \) . If we write this statement in matrix form, we obtain the theorem.
Theorem 11.6 Let \( b_{ij} \) be the probability that an absorbing chain will be absorbed in the absorbing state \( s_j \) if it starts in the transient state \( s_i \). Let \( \mathbf{B} \) be the matrix with entries \( b_{ij} \). Then \( \mathbf{B} \) is a \( t \)-by-\( r \) matrix, where \( t \) is the number of transient states and \( r \) the number of absorbing states, and

\[ \mathbf{B} = \mathbf{N}\mathbf{R} , \]

where \( \mathbf{N} \) is the fundamental matrix and \( \mathbf{R} \) is as in the canonical form.
Proof. We have

\[ b_{ij} = \sum_n \sum_k q_{ik}^{(n)} r_{kj} = \sum_k \sum_n q_{ik}^{(n)} r_{kj} = \sum_k n_{ik} r_{kj} = (\mathbf{N}\mathbf{R})_{ij} . \]

This completes the proof.
In the Drunkard's Walk example, we found that

\[ \mathbf{N} = \begin{matrix} & \begin{matrix} 1 & 2 & 3 \end{matrix} \\ \begin{matrix} 1 \\ 2 \\ 3 \end{matrix} & \begin{pmatrix} 3/2 & 1 & 1/2 \\ 1 & 2 & 1 \\ 1/2 & 1 & 3/2 \end{pmatrix} \end{matrix} . \]
Hence,

\[ \mathbf{t} = \mathbf{N}\mathbf{c} = \begin{pmatrix} 3/2 & 1 & 1/2 \\ 1 & 2 & 1 \\ 1/2 & 1 & 3/2 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 3 \\ 4 \\ 3 \end{pmatrix} . \]

Thus, starting in states 1, 2, and 3, the expected times to absorption are 3, 4, and 3, respectively.

From the canonical form,

\[ \mathbf{R} = \begin{matrix} & \begin{matrix} 0 & 4 \end{matrix} \\ \begin{matrix} 1 \\ 2 \\ 3 \end{matrix} & \begin{pmatrix} 1/2 & 0 \\ 0 & 0 \\ 0 & 1/2 \end{pmatrix} \end{matrix} . \]

Hence,

\[ \mathbf{B} = \mathbf{N}\mathbf{R} = \begin{pmatrix} 3/2 & 1 & 1/2 \\ 1 & 2 & 1 \\ 1/2 & 1 & 3/2 \end{pmatrix} \begin{pmatrix} 1/2 & 0 \\ 0 & 0 \\ 0 & 1/2 \end{pmatrix} = \begin{matrix} & \begin{matrix} 0 & 4 \end{matrix} \\ \begin{matrix} 1 \\ 2 \\ 3 \end{matrix} & \begin{pmatrix} 3/4 & 1/4 \\ 1/2 & 1/2 \\ 1/4 & 3/4 \end{pmatrix} \end{matrix} . \]

Here the first row tells us that, starting from state 1, there is probability \( 3/4 \) of absorption in state 0 and \( 1/4 \) of absorption in state 4.
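All three quantities, \( \mathbf{N} = (\mathbf{I} - \mathbf{Q})^{-1} \), \( \mathbf{t} = \mathbf{N}\mathbf{c} \), and \( \mathbf{B} = \mathbf{N}\mathbf{R} \), can be computed in a few lines (a sketch; names ours):

```python
import numpy as np

# Drunkard's walk: transient states 1, 2, 3; absorbing states 0, 4
Q = np.array([[0, 1/2, 0],
              [1/2, 0, 1/2],
              [0, 1/2, 0]])
R = np.array([[1/2, 0],
              [0, 0],
              [0, 1/2]])

N = np.linalg.inv(np.eye(3) - Q)    # fundamental matrix
t = N @ np.ones(3)                  # expected steps to absorption
B = N @ R                           # absorption probabilities
```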
Theorem 11.7 Let \( \mathbf{P} \) be the transition matrix for a regular chain. Then, as \( n \rightarrow \) \( \infty \), the powers \( {\mathbf{P}}^{n} \) approach a limiting matrix \( \mathbf{W} \) with all rows the same vector \( \mathbf{w} \) . The vector \( \mathbf{w} \) is a strictly positive probability vector (i.e., the components are all positive and they sum to one).
In the next section we give two proofs of this fundamental theorem. We give here the basic idea of the first proof.

We want to show that the powers \( {\mathbf{P}}^{n} \) of a regular transition matrix tend to a matrix with all rows the same. This is the same as showing that \( {\mathbf{P}}^{n} \) converges to a matrix with constant columns. Now the \( j \) th column of \( {\mathbf{P}}^{n} \) is \( {\mathbf{P}}^{n}\mathbf{y} \) where \( \mathbf{y} \) is a column vector with 1 in the \( j \) th entry and 0 in the other entries. Thus we need only prove that, for any column vector \( \mathbf{y} \), \( {\mathbf{P}}^{n}\mathbf{y} \) approaches a constant vector as \( n \) tends to infinity.

Since each row of \( \mathbf{P} \) is a probability vector, \( \mathbf{{Py}} \) replaces \( \mathbf{y} \) by averages of its components. Here is an example:

\[ \left( \begin{matrix} 1/2 & 1/4 & 1/4 \\ 1/3 & 1/3 & 1/3 \\ 1/2 & 1/2 & 0 \end{matrix}\right) \left( \begin{array}{l} 1 \\ 2 \\ 3 \end{array}\right) = \left( \begin{matrix} 1/2 \cdot 1 + 1/4 \cdot 2 + 1/4 \cdot 3 \\ 1/3 \cdot 1 + 1/3 \cdot 2 + 1/3 \cdot 3 \\ 1/2 \cdot 1 + 1/2 \cdot 2 + 0 \cdot 3 \end{matrix}\right) = \left( \begin{matrix} 7/4 \\ 2 \\ 3/2 \end{matrix}\right) . \]

The result of the averaging process is to make the components of \( \mathbf{{Py}} \) more similar than those of \( \mathbf{y} \) . In particular, the maximum component decreases (from 3 to 2) and the minimum component increases (from 1 to \( 3/2 \) ). Our proof will show that as we do more and more of this averaging to get \( {\mathbf{P}}^{n}\mathbf{y} \), the difference between the maximum and minimum component will tend to 0 as \( n \rightarrow \infty \) . This means \( {\mathbf{P}}^{n}\mathbf{y} \) tends to a constant vector. The \( {ij} \) th entry of \( {\mathbf{P}}^{n},{p}_{ij}^{\left( n\right) } \), is the probability that the process will be in state \( {s}_{j} \) after \( n \) steps if it starts in state \( {s}_{i} \) .
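The averaging argument can be watched numerically. The sketch below (plain Python, exact rationals) repeatedly applies the example matrix above to \( \mathbf{y} \) and records the spread between the largest and smallest components:

```python
from fractions import Fraction as F

# The 3-by-3 example matrix from the text
P = [[F(1, 2), F(1, 4), F(1, 4)],
     [F(1, 3), F(1, 3), F(1, 3)],
     [F(1, 2), F(1, 2), F(0)]]

def apply(P, y):
    """One averaging step: each new component is a weighted average of y."""
    return [sum(p * v for p, v in zip(row, y)) for row in P]

y = [F(1), F(2), F(3)]
ranges = []                    # spread max(y) - min(y) after each step
for _ in range(20):
    ranges.append(max(y) - min(y))
    y = apply(P, y)
```

After one step the spread drops from 2 to \( 1/2 \) (the vector \( (7/4, 2, 3/2) \) of the text), and it keeps shrinking toward 0.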
If we denote the common row of \( \mathbf{W} \) by \( \mathbf{w} \), then Theorem 11.7 states that the probability of being in \( {s}_{j} \) in the long run is approximately \( {w}_{j} \), the \( j \) th entry of \( \mathbf{w} \), and is independent of the starting state.
Recall that for the Land of Oz example of Section 11.1, the sixth power of the transition matrix \( \mathbf{P} \) is, to three decimal places,

\[ {\mathbf{P}}^{6} = \left( \begin{array}{lll} {.4} & {.2} & {.4} \\ {.4} & {.2} & {.4} \\ {.4} & {.2} & {.4} \end{array}\right) , \]

where the rows and columns are indexed by the states R, N, and S. Thus, to this degree of accuracy, the probability of rain six days after a rainy day is the same as the probability of rain six days after a nice day, or six days after a snowy day.
Theorem 11.7 predicts that, for large \( n \), the rows of \( {\mathbf{P}}^{n} \) approach a common vector. It is interesting that this occurs so soon in our example.
Theorem 11.8 Let \( \mathbf{P} \) be a regular transition matrix, let\n\n\[ \mathbf{W} = \mathop{\lim }\limits_{{n \rightarrow \infty }}{\mathbf{P}}^{n} \]\n\nlet \( \mathbf{w} \) be the common row of \( \mathbf{W} \), and let \( \mathbf{c} \) be the column vector all of whose components are 1 . Then\n\n(a) \( \mathbf{{wP}} = \mathbf{w} \), and any row vector \( \mathbf{v} \) such that \( \mathbf{{vP}} = \mathbf{v} \) is a constant multiple of \( \mathbf{w} \).\n\n(b) \( \mathbf{{Pc}} = \mathbf{c} \), and any column vector \( \mathbf{x} \) such that \( \mathbf{{Px}} = \mathbf{x} \) is a multiple of \( \mathbf{c} \).
Proof. To prove part (a), we note that from Theorem 11.7,\n\n\[ {\mathbf{P}}^{n} \rightarrow \mathbf{W}\text{.} \]\n\nThus,\n\n\[ {\mathbf{P}}^{n + 1} = {\mathbf{P}}^{n} \cdot \mathbf{P} \rightarrow \mathbf{W}\mathbf{P} \]\n\nBut \( {\mathbf{P}}^{n + 1} \rightarrow \mathbf{W} \), and so \( \mathbf{W} = \mathbf{{WP}} \), and \( \mathbf{w} = \mathbf{{wP}} \).\n\nLet \( \mathbf{v} \) be any vector with \( \mathbf{{vP}} = \mathbf{v} \). Then \( \mathbf{v} = \mathbf{v}{\mathbf{P}}^{n} \), and passing to the limit, \( \mathbf{v} = \mathbf{v}\mathbf{W} \). Let \( r \) be the sum of the components of \( \mathbf{v} \). Then it is easily checked that \( \mathbf{{vW}} = r\mathbf{w} \). So, \( \mathbf{v} = r\mathbf{w} \).\n\nTo prove part (b), assume that \( \mathbf{x} = \mathbf{P}\mathbf{x} \). Then \( \mathbf{x} = {\mathbf{P}}^{n}\mathbf{x} \), and again passing to the limit, \( \mathbf{x} = \mathbf{W}\mathbf{x} \). Since all rows of \( \mathbf{W} \) are the same, the components of \( \mathbf{W}\mathbf{x} \) are all equal, so \( \mathbf{x} \) is a multiple of \( \mathbf{c} \).
By Theorem 11.8 we can find the limiting vector \( \mathbf{w} \) for the Land of \( \mathrm{{Oz}} \) from the fact that

\[ 
{w}_{1} + {w}_{2} + {w}_{3} = 1 
\]

and

\[ 
\left( \begin{array}{lll} {w}_{1} & {w}_{2} & {w}_{3} \end{array}\right) \left( \begin{matrix} 1/2 & 1/4 & 1/4 \\ 1/2 & 0 & 1/2 \\ 1/4 & 1/4 & 1/2 \end{matrix}\right) = \left( \begin{array}{lll} {w}_{1} & {w}_{2} & {w}_{3} \end{array}\right) .
\]
These relations lead to the following four equations in three unknowns:\n\n\[ \n{w}_{1} + {w}_{2} + {w}_{3} = 1 \n\]\n\n\[ \n\left( {1/2}\right) {w}_{1} + \left( {1/2}\right) {w}_{2} + \left( {1/4}\right) {w}_{3} = {w}_{1}, \n\]\n\n\[ \n\left( {1/4}\right) {w}_{1} + \left( {1/4}\right) {w}_{3} = {w}_{2}, \n\]\n\n\[ \n\left( {1/4}\right) {w}_{1} + \left( {1/2}\right) {w}_{2} + \left( {1/2}\right) {w}_{3} = {w}_{3}. \n\]\n\nOur theorem guarantees that these equations have a unique solution. If the equations are solved, we obtain the solution\n\n\[ \n\mathbf{w} = \left( \begin{array}{lll} {.4} & {.2} & {.4} \end{array}\right) \n\]\n\nin agreement with that predicted from \( {\mathbf{P}}^{6} \), given in Example 11.2.
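A quick machine check, sketched below in plain Python, both verifies that \( \mathbf{w} = (.4, .2, .4) \) solves these equations exactly and watches powers of \( \mathbf{P} \) drive an arbitrary starting distribution to \( \mathbf{w} \):

```python
from fractions import Fraction as F

# Land of Oz transition matrix (states R, N, S)
P = [[F(1, 2), F(1, 4), F(1, 4)],
     [F(1, 2), F(0),    F(1, 2)],
     [F(1, 4), F(1, 4), F(1, 2)]]

# Check that w = (.4, .2, .4) satisfies wP = w and sums to one
w = [F(2, 5), F(1, 5), F(2, 5)]
wP = [sum(w[i] * P[i][j] for i in range(3)) for j in range(3)]

# Power iteration from an arbitrary starting distribution also converges to w
v = [1.0, 0.0, 0.0]
for _ in range(50):
    v = [sum(v[i] * float(P[i][j]) for i in range(3)) for j in range(3)]
```

The iterates \( \mathbf{v}{\mathbf{P}}^{n} \) settle down to \( (.4, .2, .4) \) to machine precision, as Theorem 11.9 predicts.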
Example 11.20 (Example 11.19 continued) We set \( {w}_{1} = 1 \), and then solve the first and second linear equations from \( \mathbf{w}\mathbf{P} = \mathbf{w} \) . We have\n\n\[ \left( {1/2}\right) + \left( {1/2}\right) {w}_{2} + \left( {1/4}\right) {w}_{3} = 1, \]\n\n\[ \left( {1/4}\right) + \left( {1/4}\right) {w}_{3} = {w}_{2}. \]
If we solve these, we obtain\n\n\[ \left( \begin{array}{lll} {w}_{1} & {w}_{2} & {w}_{3} \end{array}\right) = \left( \begin{array}{lll} 1 & 1/2 & 1 \end{array}\right) . \]\n\nNow we divide this vector by the sum of the components, to obtain the final answer:\n\n\[ \mathbf{w} = \left( \begin{array}{lll} {.4} & {.2} & {.4} \end{array}\right) \]
Theorem 11.9 Let \( \mathbf{P} \) be the transition matrix for a regular chain and \( \mathbf{v} \) an arbitrary probability vector. Then\n\n\[ \mathop{\lim }\limits_{{n \rightarrow \infty }}{\mathbf{{vP}}}^{n} = \mathbf{w} \]\n\nwhere \( \mathbf{w} \) is the unique fixed probability vector for \( \mathbf{P} \) .
Proof. By Theorem 11.7,\n\n\[ \mathop{\lim }\limits_{{n \rightarrow \infty }}{\mathbf{P}}^{n} = \mathbf{W} \]\n\nHence,\n\n\[ \mathop{\lim }\limits_{{n \rightarrow \infty }}{\mathbf{{vP}}}^{n} = \mathbf{{vW}} \]\n\nBut the entries in \( \mathbf{v} \) sum to 1, and each row of \( \mathbf{W} \) equals \( \mathbf{w} \) . From these statements, it is easy to check that\n\n\[ \mathbf{{vW}} = \mathbf{w}. \]
Theorem 11.10 For an ergodic Markov chain, there is a unique probability vector \( \mathbf{w} \) such that \( \mathbf{{wP}} = \mathbf{w} \) and \( \mathbf{w} \) is strictly positive. Any row vector such that \( \mathbf{{vP}} = \mathbf{v} \) is a multiple of \( \mathbf{w} \) . Any column vector \( \mathbf{x} \) such that \( \mathbf{{Px}} = \mathbf{x} \) is a constant vector.
Proof. This theorem states that Theorem 11.8 is true for ergodic chains. The result follows easily from the fact that, if \( \mathbf{P} \) is an ergodic transition matrix, then \( \overline{\mathbf{P}} = \left( {1/2}\right) \mathbf{I} + \left( {1/2}\right) \mathbf{P} \) is a regular transition matrix with the same fixed vectors (see Exercises 25-28).
In the Land of \( \mathrm{{Oz}} \), there are 525 days in a year. We have simulated the weather for one year in the Land of \( \mathrm{{Oz}} \), using the program SimulateChain. The results are shown in Table 11.2.
We note that the simulation gives a proportion of times in each of the states not too different from the long run predictions of \( {.4},{.2} \), and .4 assured by Theorem 11.7 . To get better results we have to simulate our chain for a longer time. We do this for 10,000 days without printing out each day's weather. The results are shown in Table 11.3. We see that the results are now quite close to the theoretical values of \( {.4},{.2} \), and .4 .
A white rat is put into the maze of Figure 11.4. There are nine compartments with connections between the compartments as indicated. The rat moves through the compartments at random. That is, if there are \( k \) ways to leave a compartment, it chooses each of these with equal probability. We can represent the travels of the rat by a Markov chain process with transition matrix given by

\[ \mathbf{P} = \left( \begin{matrix} 0 & 1/2 & 0 & 0 & 0 & 1/2 & 0 & 0 & 0 \\ 1/3 & 0 & 1/3 & 0 & 1/3 & 0 & 0 & 0 & 0 \\ 0 & 1/2 & 0 & 1/2 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1/3 & 0 & 1/3 & 0 & 0 & 0 & 1/3 \\ 0 & 1/4 & 0 & 1/4 & 0 & 1/4 & 0 & 1/4 & 0 \\ 1/3 & 0 & 0 & 0 & 1/3 & 0 & 1/3 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1/2 & 0 & 1/2 & 0 \\ 0 & 0 & 0 & 0 & 1/3 & 0 & 1/3 & 0 & 1/3 \\ 0 & 0 & 0 & 1/2 & 0 & 0 & 0 & 1/2 & 0 \end{matrix}\right) , \]

with rows and columns indexed by the compartments 1 through 9.
That this chain is not regular can be seen as follows: From an odd-numbered state the process can go only to an even-numbered state, and from an even-numbered state it can go only to an odd-numbered state. Hence, starting in state \( i \) the process will be alternately in even-numbered and odd-numbered states. Therefore, odd powers of \( \mathbf{P} \) will have 0’s for the odd-numbered entries in row 1 . On the other hand, a glance at the maze shows that it is possible to go from every state to every other state, so that the chain is ergodic.

To find the fixed probability vector for this matrix, we would have to solve ten equations in nine unknowns. However, it would seem reasonable that the times spent in each compartment should, in the long run, be proportional to the number of entries to each compartment. Thus, we try the vector whose \( j \)th component is the number of entries to the \( j \)th compartment:

\[ \mathbf{x} = \left( \begin{array}{lllllllll} 2 & 3 & 2 & 3 & 4 & 3 & 2 & 3 & 2 \end{array}\right) . \]

It is easy to check that this vector is indeed a fixed vector, so that the unique probability vector is this vector normalized to have sum 1:

\[ \mathbf{w} = \left( \begin{array}{lllllllll} \frac{1}{12} & \frac{1}{8} & \frac{1}{12} & \frac{1}{8} & \frac{1}{6} & \frac{1}{8} & \frac{1}{12} & \frac{1}{8} & \frac{1}{12} \end{array}\right) . \]
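The claim that \( \mathbf{x} \) is a fixed vector is easy to confirm by machine. The sketch below rebuilds \( \mathbf{P} \) from the maze's adjacency structure (as read off Figure 11.4) and checks \( \mathbf{x}\mathbf{P} = \mathbf{x} \):

```python
from fractions import Fraction as F

# Adjacency structure of the maze: compartment -> list of neighbors
adj = {1: [2, 6], 2: [1, 3, 5], 3: [2, 4], 4: [3, 5, 9],
       5: [2, 4, 6, 8], 6: [1, 5, 7], 7: [6, 8],
       8: [5, 7, 9], 9: [4, 8]}

# From each compartment the rat picks a neighboring compartment uniformly
P = [[F(1, len(adj[i])) if j in adj[i] else F(0) for j in range(1, 10)]
     for i in range(1, 10)]

# Candidate fixed vector: the number of entries (degree) of each compartment
x = [F(len(adj[i])) for i in range(1, 10)]
xP = [sum(x[i] * P[i][j] for i in range(9)) for j in range(9)]

# Normalize to obtain the fixed probability vector w
total = sum(x)
w = [xi / total for xi in x]
```

The check works because each neighbor \( i \) of compartment \( j \) contributes \( \deg(i) \cdot \frac{1}{\deg(i)} = 1 \) to \( (\mathbf{x}\mathbf{P})_j \), so the sum is just the number of neighbors of \( j \).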
Example 11.23 (Example 11.8 continued) We recall the Ehrenfest urn model of Example 11.8. The transition matrix for this chain is as follows:

\[ 
\mathbf{P} = \left( \begin{matrix} {.000} & {1.000} & {.000} & {.000} & {.000} \\ {.250} & {.000} & {.750} & {.000} & {.000} \\ {.000} & {.500} & {.000} & {.500} & {.000} \\ {.000} & {.000} & {.750} & {.000} & {.250} \\ {.000} & {.000} & {.000} & {1.000} & {.000} \end{matrix}\right) ,
\]

where the rows and columns are indexed by the states 0 through 4.
If we run the program FixedVector for this chain, we obtain the vector

\[ 
\mathbf{w} = \left( \begin{matrix} {.0625} & {.2500} & {.3750} & {.2500} & {.0625} \end{matrix}\right) ,
\]

with entries corresponding to the states 0 through 4. By Theorem 11.12, we can interpret these values for \( {w}_{i} \) as the proportion of times the process is in each of the states in the long run. For example, the proportion of times in state 0 is .0625 and the proportion of times in state 2 is .375 . The astute reader will note that these numbers are the binomial distribution \( 1/{16},4/{16},6/{16} \) , \( 4/{16},1/{16} \) . We could have guessed this answer as follows: If we consider a particular ball, it simply moves randomly back and forth between the two urns. This suggests that the equilibrium state should be just as if we randomly distributed the four balls in the two urns. If we did this, the probability that there would be exactly \( j \) balls in one urn would be given by the binomial distribution \( b\left( {n, p, j}\right) \) with \( n = 4 \) and \( p = 1/2 \) .
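The guess can be checked directly. The following sketch builds the Ehrenfest transition matrix for four balls and verifies that the binomial distribution is a fixed vector:

```python
from fractions import Fraction as F
from math import comb

n = 4  # four balls; states 0..4 count the balls in the first urn

# Ehrenfest transitions: move a randomly chosen ball to the other urn
P = [[F(0)] * (n + 1) for _ in range(n + 1)]
for i in range(n + 1):
    if i > 0:
        P[i][i - 1] = F(i, n)          # remove a ball from urn one
    if i < n:
        P[i][i + 1] = F(n - i, n)      # add a ball to urn one

# Conjectured fixed vector: the binomial distribution b(4, 1/2, j)
w = [F(comb(n, j), 2 ** n) for j in range(n + 1)]
wP = [sum(w[i] * P[i][j] for i in range(n + 1)) for j in range(n + 1)]
```

The computation confirms \( \mathbf{w}\mathbf{P} = \mathbf{w} \) exactly, with \( \mathbf{w} = (1/16, 4/16, 6/16, 4/16, 1/16) \).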
Lemma 11.1 Let \( \mathbf{P} \) be an \( r \) -by- \( r \) transition matrix with no zero entries. Let \( d \) be the smallest entry of the matrix. Let \( \mathbf{y} \) be a column vector with \( r \) components, the largest of which is \( {M}_{0} \) and the smallest \( {m}_{0} \) . Let \( {M}_{1} \) and \( {m}_{1} \) be the largest and smallest component, respectively, of the vector \( \mathbf{{Py}} \) . Then\n\n\[ \n{M}_{1} - {m}_{1} \leq \left( {1 - {2d}}\right) \left( {{M}_{0} - {m}_{0}}\right) .\n\]
Proof. In the discussion following Theorem 11.7, it was noted that each entry in the vector \( \mathbf{{Py}} \) is a weighted average of the entries in \( \mathbf{y} \) . The largest weighted average that could be obtained in the present case would occur if all but one of the entries of \( \mathbf{y} \) have value \( {M}_{0} \) and one entry has value \( {m}_{0} \), and this one small entry is weighted by the smallest possible weight, namely \( d \) . In this case, the weighted average would equal

\[ 
d{m}_{0} + \left( {1 - d}\right) {M}_{0} 
\]

Similarly, the smallest possible weighted average equals

\[ 
d{M}_{0} + \left( {1 - d}\right) {m}_{0}. 
\]

Thus,

\[ 
{M}_{1} - {m}_{1} \leq \left( {d{m}_{0} + \left( {1 - d}\right) {M}_{0}}\right) - \left( {d{M}_{0} + \left( {1 - d}\right) {m}_{0}}\right) 
\]

\[ 
= \left( {1 - {2d}}\right) \left( {{M}_{0} - {m}_{0}}\right) \text{.} 
\]

This completes the proof of the lemma.
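A numerical instance of the lemma may help. The matrix and vector below are hypothetical, chosen only for illustration (they do not appear in the text); the smallest entry is \( d = 1/8 \):

```python
from fractions import Fraction as F

# A transition matrix with no zero entries; smallest entry d = 1/8
P = [[F(1, 8), F(3, 8), F(1, 2)],
     [F(1, 4), F(1, 2), F(1, 4)],
     [F(3, 8), F(3, 8), F(1, 4)]]
d = min(min(row) for row in P)

y = [F(0), F(5), F(10)]
M0, m0 = max(y), min(y)

# One averaging step
Py = [sum(p * v for p, v in zip(row, y)) for row in P]
M1, m1 = max(Py), min(Py)
```

Here \( {M}_{1} - {m}_{1} = 5/2 \), comfortably below the lemma's bound \( (1 - 2d)({M}_{0} - {m}_{0}) = \frac{3}{4} \cdot 10 \).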
Example 11.24 Let us return to the maze example (Example 11.22). We shall make this ergodic chain into an absorbing chain by making state 5 an absorbing state. For example, we might assume that food is placed in the center of the maze and once the rat finds the food, he stays to enjoy it (see Figure 11.5).
The new transition matrix in canonical form, with rows and columns indexed by the states in the order \( 1,2,3,4,6,7,8,9,5 \), is

\[ \mathbf{P} = \left( \begin{matrix} 0 & 1/2 & 0 & 0 & 1/2 & 0 & 0 & 0 & 0 \\ 1/3 & 0 & 1/3 & 0 & 0 & 0 & 0 & 0 & 1/3 \\ 0 & 1/2 & 0 & 1/2 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1/3 & 0 & 0 & 0 & 0 & 1/3 & 1/3 \\ 1/3 & 0 & 0 & 0 & 0 & 1/3 & 0 & 0 & 1/3 \\ 0 & 0 & 0 & 0 & 1/2 & 0 & 1/2 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1/3 & 0 & 1/3 & 1/3 \\ 0 & 0 & 0 & 1/2 & 0 & 0 & 1/2 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{matrix}\right) . \]

If we compute the fundamental matrix \( \mathbf{N} \), we obtain

\[ \mathbf{N} = \frac{1}{8}\left( \begin{matrix} {14} & 9 & 4 & 3 & 9 & 4 & 3 & 2 \\ 6 & {14} & 6 & 4 & 4 & 2 & 2 & 2 \\ 4 & 9 & {14} & 9 & 3 & 2 & 3 & 4 \\ 2 & 4 & 6 & {14} & 2 & 2 & 4 & 6 \\ 6 & 4 & 2 & 2 & {14} & 6 & 4 & 2 \\ 4 & 3 & 2 & 3 & 9 & {14} & 9 & 4 \\ 2 & 2 & 2 & 4 & 4 & 6 & {14} & 6 \\ 2 & 3 & 4 & 9 & 3 & 4 & 9 & {14} \end{matrix}\right) \]

The expected time to absorption for different starting states is given by the vector \( \mathbf{N}\mathbf{c} \), where

\[ \mathbf{N}\mathbf{c} = \left( \begin{array}{l} 6 \\ 5 \\ 6 \\ 5 \\ 5 \\ 6 \\ 5 \\ 6 \end{array}\right) . \]

We see that, starting from compartment 1 , it will take on the average six steps to reach food. It is clear from symmetry that we should get the same answer for starting at state \( 3,7 \), or 9 . It is also clear that it should take one more step, starting at one of these states, than it would starting at \( 2,4,6 \), or 8 . Some of the results obtained from \( \mathbf{N} \) are not so obvious. For instance, we note that the expected number of times in the starting state is \( {14}/8 \) regardless of the state in which we start.
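The vector \( \mathbf{N}\mathbf{c} \) can also be recomputed without inverting \( \mathbf{I} - \mathbf{Q} \) explicitly, by solving the linear system \( (\mathbf{I} - \mathbf{Q})\mathbf{t} = \mathbf{c} \), where \( \mathbf{Q} \) is the transient-to-transient block of the canonical form. A sketch in plain Python with exact rationals and Gauss-Jordan elimination:

```python
from fractions import Fraction as F

# Maze adjacency: compartment -> neighbors (state 5 holds the food)
adj = {1: [2, 6], 2: [1, 3, 5], 3: [2, 4], 4: [3, 5, 9],
       5: [2, 4, 6, 8], 6: [1, 5, 7], 7: [6, 8],
       8: [5, 7, 9], 9: [4, 8]}
transient = [1, 2, 3, 4, 6, 7, 8, 9]  # state 5 is absorbing

# Q restricts the chain to transitions among the transient states
Q = [[F(1, len(adj[i])) if j in adj[i] else F(0)
      for j in transient] for i in transient]

# Solve (I - Q) t = c (c the all-ones vector) by Gauss-Jordan elimination
n = len(transient)
A = [[(F(1) if i == j else F(0)) - Q[i][j] for j in range(n)] + [F(1)]
     for i in range(n)]
for col in range(n):
    piv = next(r for r in range(col, n) if A[r][col] != 0)
    A[col], A[piv] = A[piv], A[col]
    A[col] = [a / A[col][col] for a in A[col]]
    for r in range(n):
        if r != col and A[r][col] != 0:
            A[r] = [a - A[r][col] * b for a, b in zip(A[r], A[col])]

t = [A[r][n] for r in range(n)]   # expected times to absorption
```

The solution is \( (6, 5, 6, 5, 5, 6, 5, 6) \), matching the column \( \mathbf{N}\mathbf{c} \) above.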
Theorem 11.15 For an ergodic Markov chain, the mean recurrence time for state \( {s}_{i} \) is \( {r}_{i} = 1/{w}_{i} \), where \( {w}_{i} \) is the \( i \) th component of the fixed probability vector for the transition matrix.
Proof. Multiplying both sides of Equation 11.6 by \( \mathbf{w} \) and using the fact that\n\n\[ \mathbf{w}\left( {\mathbf{I} - \mathbf{P}}\right) = \mathbf{0} \]\n\ngives\n\n\[ \mathbf{w}\mathbf{C} - \mathbf{w}\mathbf{D} = \mathbf{0}. \]\n\nHere \( \mathbf{{wC}} \) is a row vector with all entries 1 and \( \mathbf{{wD}} \) is a row vector with \( i \) th entry \( {w}_{i}{r}_{i} \) . Thus\n\n\[ \left( {1,1,\ldots ,1}\right) = \left( {{w}_{1}{r}_{1},{w}_{2}{r}_{2},\ldots ,{w}_{n}{r}_{n}}\right) \]\n\nand\n\n\[ {r}_{i} = 1/{w}_{i} \]\n\nas was to be proved.
Corollary 11.1 For an ergodic Markov chain, the components of the fixed probability vector \( \mathbf{w} \) are strictly positive.
Proof. We know that the values of \( {r}_{i} \) are finite and so \( {w}_{i} = 1/{r}_{i} \) cannot be 0.
Proposition 11.1 Let \( \mathbf{P} \) be the transition matrix of an ergodic chain, and let \( \mathbf{W} \) be the matrix all of whose rows are the fixed probability row vector for \( \mathbf{P} \). Then the matrix

\[ \mathbf{I} - \mathbf{P} + \mathbf{W} \]

has an inverse.
Proof. Let \( \mathbf{x} \) be a column vector such that

\[ \left( {\mathbf{I} - \mathbf{P} + \mathbf{W}}\right) \mathbf{x} = \mathbf{0}. \]

To prove the proposition, it is sufficient to show that \( \mathbf{x} \) must be the zero vector. Multiplying this equation by \( \mathbf{w} \) and using the fact that \( \mathbf{w}\left( {\mathbf{I} - \mathbf{P}}\right) = \mathbf{0} \) and \( \mathbf{w}\mathbf{W} = \mathbf{w} \), we have

\[ \mathbf{w}\left( {\mathbf{I} - \mathbf{P} + \mathbf{W}}\right) \mathbf{x} = \mathbf{w}\mathbf{x} = \mathbf{0}. \]

Since every row of \( \mathbf{W} \) equals \( \mathbf{w} \), this gives \( \mathbf{W}\mathbf{x} = \mathbf{0} \), and therefore

\[ \left( {\mathbf{I} - \mathbf{P}}\right) \mathbf{x} = \mathbf{0}. \]

But this means that \( \mathbf{x} = \mathbf{P}\mathbf{x} \) is a fixed column vector for \( \mathbf{P} \). By Theorem 11.10, this can only happen if \( \mathbf{x} \) is a constant vector. Since \( \mathbf{w}\mathbf{x} = 0 \), and \( \mathbf{w} \) has strictly positive entries, we see that \( \mathbf{x} = \mathbf{0} \). This completes the proof.
Example 11.26 Let \( \mathbf{P} \) be the transition matrix for the weather in the Land of \( \mathrm{Oz} \). Then
\[ \mathbf{I} - \mathbf{P} + \mathbf{W} = \left( \begin{array}{lll} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right) - \left( \begin{matrix} 1/2 & 1/4 & 1/4 \\ 1/2 & 0 & 1/2 \\ 1/4 & 1/4 & 1/2 \end{matrix}\right) + \left( \begin{matrix} 2/5 & 1/5 & 2/5 \\ 2/5 & 1/5 & 2/5 \\ 2/5 & 1/5 & 2/5 \end{matrix}\right) \] \[ = \left( \begin{matrix} 9/{10} & - 1/{20} & 3/{20} \\ - 1/{10} & 6/5 & - 1/{10} \\ 3/{20} & - 1/{20} & 9/{10} \end{matrix}\right) \] so \[ \mathbf{Z} = {\left( \mathbf{I} - \mathbf{P} + \mathbf{W}\right) }^{-1} = \left( \begin{matrix} {86}/{75} & 1/{25} & - {14}/{75} \\ 2/{25} & {21}/{25} & 2/{25} \\ - {14}/{75} & 1/{25} & {86}/{75} \end{matrix}\right) . \]
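That \( \mathbf{Z} \) really is the inverse can be confirmed by multiplying the two matrices, using exact rational arithmetic so there is no roundoff to worry about:

```python
from fractions import Fraction as F

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# I - P + W for the Land of Oz chain, as computed above
A = [[F(9, 10),  F(-1, 20), F(3, 20)],
     [F(-1, 10), F(6, 5),   F(-1, 10)],
     [F(3, 20),  F(-1, 20), F(9, 10)]]

# The claimed inverse Z
Z = [[F(86, 75),  F(1, 25),  F(-14, 75)],
     [F(2, 25),   F(21, 25), F(2, 25)],
     [F(-14, 75), F(1, 25),  F(86, 75)]]

product = matmul(A, Z)   # should be the 3-by-3 identity matrix
```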
Lemma 11.2 Let \( \mathbf{Z} = {\left( \mathbf{I} - \mathbf{P} + \mathbf{W}\right) }^{-1} \), and let \( \mathbf{c} \) be a column vector of all 1 ’s. Then\n\n\[ \mathbf{Z}\mathbf{c} = \mathbf{c} \]\n\n\[ \mathbf{{wZ}} = \mathbf{w} \]\n\nand\n\n\[ \mathbf{Z}\left( {\mathbf{I} - \mathbf{P}}\right) = \mathbf{I} - \mathbf{W} \]
Proof. Since \( \mathbf{{Pc}} = \mathbf{c} \) and \( \mathbf{{Wc}} = \mathbf{c} \) ,\n\n\[ \mathbf{c} = \left( {\mathbf{I} - \mathbf{P} + \mathbf{W}}\right) \mathbf{c} \]\n\nIf we multiply both sides of this equation on the left by \( \mathbf{Z} \), we obtain\n\n\[ \mathbf{Z}\mathbf{c} = \mathbf{c} \]\n\nSimilarly, since \( \mathbf{w}\mathbf{P} = \mathbf{w} \) and \( \mathbf{w}\mathbf{W} = \mathbf{w} \) ,\n\n\[ \mathbf{w} = \mathbf{w}\left( {\mathbf{I} - \mathbf{P} + \mathbf{W}}\right) \]\n\nIf we multiply both sides of this equation on the right by \( \mathbf{Z} \), we obtain\n\n\[ \mathbf{w}\mathbf{Z} = \mathbf{w}. \]\n\nFinally, we have\n\n\[ \left( {\mathbf{I} - \mathbf{P} + \mathbf{W}}\right) \left( {\mathbf{I} - \mathbf{W}}\right) = \mathbf{I} - \mathbf{W} - \mathbf{P} + \mathbf{W} + \mathbf{W} - \mathbf{W} \]\n\n\[ = \mathbf{I} - \mathbf{P}\text{.} \]\n\nMultiplying on the left by \( \mathbf{Z} \), we obtain\n\n\[ \mathbf{I} - \mathbf{W} = \mathbf{Z}\left( {\mathbf{I} - \mathbf{P}}\right) \]\n\nThis completes the proof.
Theorem 11.16 The mean first passage matrix \( \mathbf{M} \) for an ergodic chain is determined from the fundamental matrix \( \mathbf{Z} \) and the fixed row probability vector \( \mathbf{w} \) by\n\n\[ \n{m}_{ij} = \frac{{z}_{jj} - {z}_{ij}}{{w}_{j}}.\n\]
Proof. We showed in Equation 11.6 that\n\n\[ \n\left( {\mathbf{I} - \mathbf{P}}\right) \mathbf{M} = \mathbf{C} - \mathbf{D}\n\]\n\nThus,\n\n\[ \n\mathbf{Z}\left( {\mathbf{I} - \mathbf{P}}\right) \mathbf{M} = \mathbf{Z}\mathbf{C} - \mathbf{Z}\mathbf{D}\n\]\n\nand from Lemma 11.2,\n\n\[ \n\mathbf{Z}\left( {\mathbf{I} - \mathbf{P}}\right) \mathbf{M} = \mathbf{C} - \mathbf{Z}\mathbf{D}\n\]\n\nAgain using Lemma 11.2, we have\n\n\[ \n\mathbf{M} - \mathbf{{WM}} = \mathbf{C} - \mathbf{{ZD}}\n\]\n\nor\n\n\[ \n\mathbf{M} = \mathbf{C} - \mathbf{{ZD}} + \mathbf{{WM}}\n\]\n\nFrom this equation, we see that\n\n\[ \n{m}_{ij} = 1 - {z}_{ij}{r}_{j} + {\left( \mathbf{{wM}}\right) }_{j}.\n\]\n\n(11.8)\n\nBut \( {m}_{jj} = 0 \), and so\n\n\[ \n0 = 1 - {z}_{jj}{r}_{j} + {\left( \mathbf{{wM}}\right) }_{j},\n\]\n\nor\n\n\[ \n{\left( \mathbf{{wM}}\right) }_{j} = {z}_{jj}{r}_{j} - 1.\n\]\n\n(11.9)\n\nFrom Equations 11.8 and 11.9, we have\n\n\[ \n{m}_{ij} = \left( {{z}_{jj} - {z}_{ij}}\right) \cdot {r}_{j}.\n\]\n\nSince \( {r}_{j} = 1/{w}_{j} \),\n\n\[ \n{m}_{ij} = \frac{{z}_{jj} - {z}_{ij}}{{w}_{j}}.\n\]
In the Land of Oz example, we find that \[ \mathbf{Z} = {\left( \mathbf{I} - \mathbf{P} + \mathbf{W}\right) }^{-1} = \left( \begin{matrix} {86}/{75} & 1/{25} & - {14}/{75} \\ 2/{25} & {21}/{25} & 2/{25} \\ - {14}/{75} & 1/{25} & {86}/{75} \end{matrix}\right) . \]
We have also seen that \( \mathbf{w} = \left( {2/5,1/5,2/5}\right) \) . So, for example, \[ {m}_{12} = \frac{{z}_{22} - {z}_{12}}{{w}_{2}} \] \[ = \frac{{21}/{25} - 1/{25}}{1/5} \] \[ = 4\text{,} \] by Theorem 11.16. Carrying out the calculations for the other entries of \( \mathbf{M} \), we obtain \[ \mathbf{M} = \left( \begin{matrix} 0 & 4 & {10}/3 \\ 8/3 & 0 & 8/3 \\ {10}/3 & 4 & 0 \end{matrix}\right) \]
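The full matrix \( \mathbf{M} \) can be produced by applying the formula of Theorem 11.16 entrywise; the sketch below takes \( \mathbf{Z} \) and \( \mathbf{w} \) from the text:

```python
from fractions import Fraction as F

# Fundamental matrix Z and fixed vector w for the Land of Oz chain
Z = [[F(86, 75),  F(1, 25),  F(-14, 75)],
     [F(2, 25),   F(21, 25), F(2, 25)],
     [F(-14, 75), F(1, 25),  F(86, 75)]]
w = [F(2, 5), F(1, 5), F(2, 5)]

# Mean first passage times: m_ij = (z_jj - z_ij) / w_j
M = [[(Z[j][j] - Z[i][j]) / w[j] for j in range(3)] for i in range(3)]
```

The result reproduces the matrix displayed above, including \( {m}_{12} = 4 \) and a zero diagonal.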
Let us consider the Ehrenfest model (see Example 11.8) for gas diffusion for the general case of \( {2n} \) balls. Every second, one of the \( {2n} \) balls is chosen at random and moved from the urn it was in to the other urn. If there are \( i \) balls in the first urn, then with probability \( i/{2n} \) we take one of them out and put it in the second urn, and with probability \( \left( {{2n} - i}\right) /{2n} \) we take a ball from the second urn and put it in the first urn. At each second we let the number \( i \) of balls in the first urn be the state of the system. Then from state \( i \) we can pass only to states \( i - 1 \) and \( i + 1 \), and the transition probabilities are given by

\[ 
{p}_{ij} = \left\{ \begin{matrix} \frac{i}{2n}, & \text{ if }j = i - 1, \\ 1 - \frac{i}{2n}, & \text{ if }j = i + 1, \\ 0, & \text{ otherwise. } \end{matrix}\right. 
\]
This defines the transition matrix of an ergodic, non-regular Markov chain (see Exercise 15). Here the physicist is interested in long-term predictions about the state occupied. In Example 11.23, we gave an intuitive reason for expecting that the fixed vector \( \mathbf{w} \) is the binomial distribution with parameters \( {2n} \) and \( 1/2 \) . It is easy to check that this is correct. So,\n\n\[ \n{w}_{i} = \frac{\left( \begin{matrix} {2n} \\ i \end{matrix}\right) }{{2}^{2n}}. \n\]\n\nThus the mean recurrence time for state \( i \) is\n\n\[ \n{r}_{i} = \frac{{2}^{2n}}{\left( \begin{matrix} {2n} \\ i \end{matrix}\right) }. \n\]
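The mean recurrence times can be tabulated directly from this formula. A short sketch for the case \( 2n = 4 \) (the four-ball chain of Example 11.23):

```python
from fractions import Fraction as F
from math import comb

n = 2  # 2n = 4 balls, as in Example 11.23

# Fixed vector: binomial distribution with parameters 2n and 1/2
w = [F(comb(2 * n, i), 2 ** (2 * n)) for i in range(2 * n + 1)]

# Mean recurrence times r_i = 1/w_i = 2^{2n} / C(2n, i)
r = [1 / wi for wi in w]
```

For four balls this gives \( \mathbf{r} = (16, 4, 8/3, 4, 16) \): the extreme states are revisited far less often than the balanced one.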
Theorem 12.2 For \( n \geq 1 \), the probabilities \( \left\{ {u}_{2k}\right\} \) and \( \left\{ {f}_{2k}\right\} \) are related by the equation\n\n\[ \n{u}_{2n} = {f}_{0}{u}_{2n} + {f}_{2}{u}_{{2n} - 2} + \cdots + {f}_{2n}{u}_{0}.\n\]
Proof. There are \( {u}_{2n}{2}^{2n} \) paths of length \( {2n} \) which have endpoints \( \left( {0,0}\right) \) and \( \left( {{2n},0}\right) \) . The collection of such paths can be partitioned into \( n \) sets, depending upon the time of the first return to the origin. A path in this collection which has a first return to the origin at time \( {2k} \) consists of an initial segment from \( \left( {0,0}\right) \) to \( \left( {{2k},0}\right) \), in which no interior points are on the horizontal axis, and a terminal segment from \( \left( {{2k},0}\right) \) to \( \left( {{2n},0}\right) \), with no further restrictions on this segment. Thus, the number of paths in the collection which have a first return to the origin at time \( {2k} \) is given by\n\n\[ \n{f}_{2k}{2}^{2k}{u}_{{2n} - {2k}}{2}^{{2n} - {2k}} = {f}_{2k}{u}_{{2n} - {2k}}{2}^{2n}.\n\]\n\nIf we sum over \( k \), we obtain the equation\n\n\[ \n{u}_{2n}{2}^{2n} = {f}_{0}{u}_{2n}{2}^{2n} + {f}_{2}{u}_{{2n} - 2}{2}^{2n} + \cdots + {f}_{2n}{u}_{0}{2}^{2n}.\n\]\n\nDividing both sides of this equation by \( {2}^{2n} \) completes the proof.
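The renewal equation of Theorem 12.2 is easy to verify numerically from the explicit formulas \( {u}_{2m} = \binom{2m}{m}{2}^{-2m} \) and (anticipating Theorem 12.3) \( {f}_{2m} = {u}_{2m}/(2m-1) \):

```python
from fractions import Fraction as F
from math import comb

def u(m):
    """u_{2m}: probability the walk is at the origin at time 2m."""
    return F(comb(2 * m, m), 4 ** m)

def f(m):
    """f_{2m}: probability of a first return at time 2m."""
    return u(m) / (2 * m - 1)

# Check u_{2n} = f_2 u_{2n-2} + ... + f_{2n} u_0 for small n
# (the f_0 u_{2n} term vanishes since f_0 = 0)
checks = [u(n) == sum(f(k) * u(n - k) for k in range(1, n + 1))
          for n in range(1, 10)]
```

For instance, \( {u}_{4} = 3/8 \) decomposes as \( {f}_{2}{u}_{2} + {f}_{4}{u}_{0} = \frac{1}{2} \cdot \frac{1}{2} + \frac{1}{8} \cdot 1 \).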
Theorem 12.3 For \( m \geq 1 \), the probability of a first return to the origin at time \( {2m} \) is given by\n\n\[ \n{f}_{2m} = \frac{{u}_{2m}}{{2m} - 1} = \frac{\left( \begin{matrix} {2m} \\ m \end{matrix}\right) }{\left( {{2m} - 1}\right) {2}^{2m}}.\n\]
Proof. We begin by defining the generating functions\n\n\[ \nU\left( x\right) = \mathop{\sum }\limits_{{m = 0}}^{\infty }{u}_{2m}{x}^{m}\n\]\n\nand\n\n\[ \nF\left( x\right) = \mathop{\sum }\limits_{{m = 0}}^{\infty }{f}_{2m}{x}^{m}\n\]\n\nTheorem 12.2 says that\n\n\[ \nU\left( x\right) = 1 + U\left( x\right) F\left( x\right) .\n\]\n\n(12.1)\n\n(The presence of the 1 on the right-hand side is due to the fact that \( {u}_{0} \) is defined to be 1, but Theorem 12.2 only holds for \( m \geq 1 \) .) We note that both generating functions certainly converge on the interval \( \left( {-1,1}\right) \), since all of the coefficients are at most 1 in absolute value. Thus, we can solve the above equation for \( F\left( x\right) \), obtaining\n\n\[ \nF\left( x\right) = \frac{U\left( x\right) - 1}{U\left( x\right) }.\n\]\n\nNow, if we can find a closed-form expression for the function \( U\left( x\right) \), we will also have a closed-form expression for \( F\left( x\right) \) . From Theorem 12.1, we have\n\n\[ \nU\left( x\right) = \mathop{\sum }\limits_{{m = 0}}^{\infty }\left( \begin{matrix} {2m} \\ m \end{matrix}\right) {2}^{-{2m}}{x}^{m}.\n\]\n\nIn Wilf, \( {}^{1} \) we find that\n\n\[ \n\frac{1}{\sqrt{1 - {4x}}} = \mathop{\sum }\limits_{{m = 0}}^{\infty }\left( \begin{matrix} {2m} \\ m \end{matrix}\right) {x}^{m}.\n\]\n\nThe reader is asked to prove this statement in Exercise 1. If we replace \( x \) by \( x/4 \) in the last equation, we see that\n\n\[ \nU\left( x\right) = \frac{1}{\sqrt{1 - x}}.\n\]\n\n\( {}^{1} \) H. S. Wilf, Generatingfunctionology,(Boston: Academic Press,1990), p. 
50.\n\nTherefore, we have\n\n\[ \nF\left( x\right) = \frac{U\left( x\right) - 1}{U\left( x\right) }\n\]\n\n\[ \n= \frac{{\left( 1 - x\right) }^{-1/2} - 1}{{\left( 1 - x\right) }^{-1/2}}\n\]\n\n\[ \n= 1 - {\left( 1 - x\right) }^{1/2}.\n\]\n\nAlthough it is possible to compute the value of \( {f}_{2m} \) using the Binomial Theorem, it is easier to note that \( {F}^{\prime }\left( x\right) = U\left( x\right) /2 \), so that the coefficients \( {f}_{2m} \) can be found by integrating the series for \( U\left( x\right) \) . We obtain, for \( m \geq 1 \),\n\n\[ \n{f}_{2m} = \frac{{u}_{{2m} - 2}}{2m}\n\]\n\n\[ \n= \;\frac{\left( \begin{matrix} {2m} - 2 \\ m - 1 \end{matrix}\right) }{m{2}^{{2m} - 1}}\n\]\n\n\[ \n= \frac{\left( \begin{matrix} {2m} \\ m \end{matrix}\right) }{\left( {{2m} - 1}\right) {2}^{2m}}\n\]\n\n\[ \n= \frac{{u}_{2m}}{{2m} - 1}\n\]\n\nsince\n\n\[ \n\left( \begin{matrix} {2m} - 2 \\ m - 1 \end{matrix}\right) = \frac{m}{2\left( {{2m} - 1}\right) }\left( \begin{matrix} {2m} \\ m \end{matrix}\right) .\n\]\n\nThis completes the proof of the theorem.
Example 12.1 (Eventual Return in \( {\mathbf{R}}^{1} \) ) One has to approach the idea of eventual return with some care, since the sample space seems to be the set of all walks of infinite length, and this set is non-denumerable. To avoid difficulties, we will define \( {w}_{n} \) to be the probability that a first return has occurred no later than time \( n \) . Thus, \( {w}_{n} \) concerns the sample space of all walks of length \( n \), which is a finite set. In terms of the \( {w}_{n} \) ’s, it is reasonable to define the probability that the particle eventually returns to the origin to be\n\n\[ \n{w}_{ * } = \mathop{\lim }\limits_{{n \rightarrow \infty }}{w}_{n} \n\]\n\nThis limit clearly exists and is at most one, since the sequence \( {\left\{ {w}_{n}\right\} }_{n = 1}^{\infty } \) is an increasing sequence, and all of its terms are at most one.
In terms of the \( {f}_{n} \) probabilities, we see that\n\n\[ \n{w}_{2n} = \mathop{\sum }\limits_{{i = 1}}^{n}{f}_{2i} \n\]\n\nThus,\n\n\[ \n{w}_{ * } = \mathop{\sum }\limits_{{i = 1}}^{\infty }{f}_{2i} \n\]\n\nIn the proof of Theorem 12.3, the generating function\n\n\[ \nF\left( x\right) = \mathop{\sum }\limits_{{m = 0}}^{\infty }{f}_{2m}{x}^{m} \n\]\n\nwas introduced. There it was noted that this series converges for \( x \in \left( {-1,1}\right) \) . In fact, it is possible to show that this series also converges for \( x = \pm 1 \) by using Exercise 4, together with the fact that\n\n\[ \n{f}_{2m} = \frac{{u}_{2m}}{{2m} - 1}. \n\]\n\n(This fact was proved in the proof of Theorem 12.3.) Since we also know that\n\n\[ \nF\left( x\right) = 1 - {\left( 1 - x\right) }^{1/2}, \n\]\n\nwe see that\n\n\[ \n{w}_{ * } = F\left( 1\right) = 1. \n\]\n\nThus, with probability one, the particle returns to the origin.
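The convergence of the partial sums \( {w}_{2n} \) to 1 can be observed concretely. The sketch below uses the closed forms for \( {u}_{2m} \) and \( {f}_{2m} \) from Theorem 12.3; the telescoping identity \( {f}_{2m} = {u}_{2m-2} - {u}_{2m} \) (equivalent to \( {f}_{2m} = {u}_{2m-2}/{2m} \), shown in the proof) makes each partial sum equal \( 1 - {u}_{2n} \):

```python
from fractions import Fraction as F
from math import comb

def u(m):
    """u_{2m}: probability of being at the origin at time 2m."""
    return F(comb(2 * m, m), 4 ** m)

def f(m):
    """f_{2m}: probability of a first return at time 2m."""
    return u(m) / (2 * m - 1)

# Partial sums w_{2n} = f_2 + f_4 + ... + f_{2n}
partial = [sum(f(k) for k in range(1, n + 1)) for n in range(1, 30)]
```

Since \( {u}_{2n} \rightarrow 0 \) (by Stirling's formula, \( {u}_{2n} \approx 1/\sqrt{\pi n} \)), the partial sums indeed creep up to 1, though slowly.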
Example 12.2 (Eventual Return in \( {\mathbf{R}}^{m} \) ) We now turn our attention to the case that the random walk takes place in more than one dimension. We define \( {f}_{2n}^{\left( m\right) } \) to be the probability that the first return to the origin in \( {\mathbf{R}}^{m} \) occurs at time \( {2n} \) . The quantity \( {u}_{2n}^{\left( m\right) } \) is defined in a similar manner. Thus, \( {f}_{2n}^{\left( 1\right) } \) and \( {u}_{2n}^{\left( 1\right) } \) equal \( {f}_{2n} \) and \( {u}_{2n} \) , which were defined earlier. If, in addition, we define \( {u}_{0}^{\left( m\right) } = 1 \) and \( {f}_{0}^{\left( m\right) } = 0 \), then one can mimic the proof of Theorem 12.2, and show that for all \( m \geq 1 \) ,
\[ {u}_{2n}^{\left( m\right) } = {f}_{0}^{\left( m\right) }{u}_{2n}^{\left( m\right) } + {f}_{2}^{\left( m\right) }{u}_{{2n} - 2}^{\left( m\right) } + \cdots + {f}_{2n}^{\left( m\right) }{u}_{0}^{\left( m\right) }. \] (12.2)
Consider the function that maps each non-negative real number to its non-negative square root. This function is denoted by

\[ f\left( x\right) = \sqrt{x} \]
Note that, since this is a function whose range consists of the non-negative real numbers, we have

\[ \sqrt{{x}^{2}} = \left| x\right| \]
What does the inverse of this function tell you? What is the inverse of this function?
Solution While \( v\left( t\right) \) tells you how many gallons of water are in the pool after a period of time, the inverse of \( v\left( t\right) \) tells you how much time must be spent to obtain a given volume. To compute the inverse function, first set \( v = v\left( t\right) \) and write\n\n\[ v = {700t} + {200} \]\n\nNow solve for \( t \) :\n\n\[ t = v/{700} - 2/7 \]\n\nThis is a function that maps volumes to times, and \( t\left( v\right) = v/{700} - 2/7 \) .
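The algebra can be sanity-checked by confirming that the two functions undo each other; a minimal sketch (the function names are ours, not from the text):

```python
def v(t):
    """Volume of water (gallons) in the pool after t hours."""
    return 700 * t + 200

def t_of_v(vol):
    """Inverse function: time needed to reach a given volume."""
    return vol / 700 - 2 / 7

# Composing in either order should return the input (up to roundoff)
checks = [abs(t_of_v(v(t)) - t) < 1e-9 for t in [0, 1, 2.5, 10]]
```

For example, \( v(1) = 900 \) and \( t(900) = 900/700 - 2/7 = 1 \), as expected.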
Yes
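A quick numerical sanity check of the inverse pair (the names `v` and `t_of_v` are ours, for illustration): composing the function with its inverse in either order returns the starting value.

```python
def v(t):
    # Gallons in the pool after t units of time: v(t) = 700t + 200.
    return 700 * t + 200

def t_of_v(vol):
    # The inverse found above: t = v/700 - 2/7.
    return vol / 700 - 2 / 7

print(t_of_v(v(3)))    # approximately 3
print(v(t_of_v(900)))  # approximately 900
```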
Suppose you are standing on a bridge that is 60 meters above sea-level. You toss a ball up into the air with an initial velocity of 30 meters per second. If \( t \) is the time (in seconds) after we toss the ball, then the height at time \( t \) is approximately \( h\left( t\right) = - 5{t}^{2} + {30t} + {60} \) . What does the inverse of this function tell you? What is the inverse of this function?
While \( h\left( t\right) \) tells you the height of the ball above sea-level at an instant of time, the inverse of \( h\left( t\right) \) tells you what time it is when the ball is at a given height. There is only one problem: There is no function that is the inverse of \( h\left( t\right) \) . Considering Figure 7, we can see that for some heights, namely 60 meters, there are two times.\n\nWhile there is no inverse function for \( h\left( t\right) \), we can find one if we restrict the domain of \( h\left( t\right) \) . Take it as given that the maximum of \( h\left( t\right) \) is 105 meters, attained at \( t = 3 \) seconds; later on in this course you’ll know how to find points like this with ease. In this case, we may find an inverse of \( h\left( t\right) \) on the interval \( \lbrack 3,\infty ) \) . Write\n\n\[ h = - 5{t}^{2} + {30t} + {60} \]\n\n\[ 0 = - 5{t}^{2} + {30t} + \left( {{60} - h}\right) \]\n\nand solve for \( t \) using the quadratic formula\n\n\[ t = \frac{-{30} \pm \sqrt{{30}^{2} - 4\left( {-5}\right) \left( {{60} - h}\right) }}{2\left( {-5}\right) } \]\n\n\[ = \frac{-{30} \pm \sqrt{{30}^{2} + {20}\left( {{60} - h}\right) }}{-{10}} \]\n\n\[ = 3 \mp \sqrt{{3}^{2} + {.2}\left( {{60} - h}\right) } \]\n\n\[ = 3 \mp \sqrt{9 + {.2}\left( {{60} - h}\right) } \]\n\n\[ = 3 \mp \sqrt{{21} - {.2h}} \]\n\nNow we must think about what it means to restrict the domain of \( h\left( t\right) \) to values of \( t \) in \( \lbrack 3,\infty ) \) . Since \( h\left( t\right) \) has its maximum value of 105 when \( t = 3 \), the largest \( h \) could be is 105 . This means that \( {21} - {.2h} \geq 0 \) and so \( \sqrt{{21} - {.2h}} \) is a real number. We know something else too: \( t \geq 3 \) . This means that we must take the \( + \) sign in the formula above, and so the inverse is\n\n\[ t\left( h\right) = 3 + \sqrt{{21} - {.2h}}. \]
Yes
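The restricted-domain inverse can be checked numerically. In the sketch below (the names `h` and `t_of_h` are illustrative) we take the "+" branch of the quadratic formula, which is the branch consistent with \( t \geq 3 \).

```python
from math import sqrt

def h(t):
    # Height above sea-level t seconds after the toss.
    return -5 * t**2 + 30 * t + 60

def t_of_h(height):
    # Inverse on [3, infinity): the "+" branch of the quadratic formula.
    # 21 - .2h is written as 21 - h/5 to stay exact at h = 105.
    return 3 + sqrt(21 - height / 5)

# Round trips recover the height, and the time always lands in [3, oo).
for height in [60, 80, 100, 105]:
    t = t_of_h(height)
    print(height, t, h(t))
```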
Does \( f\left( x\right) \) have an inverse? If so what is it? If not, attempt to restrict the domain of \( f\left( x\right) \) and find an inverse on the restricted domain.
In this case \( f\left( x\right) \) is one-to-one and \( {f}^{-1}\left( x\right) = \sqrt[3]{x} \) . See Figure 9.
Yes
Does \( f\left( x\right) \) have an inverse? If so what is it? If not, attempt to restrict the domain of \( f\left( x\right) \) and find an inverse on the restricted domain.
Solution In this case \( f\left( x\right) \) is not one-to-one. However, it is one-to-one on the interval \( \lbrack 0,\infty ) \) . Hence we can find an inverse of \( f\left( x\right) = {x}^{2} \) on this interval, and it is our familiar function \( \sqrt{x} \) . See Figure 10.
Yes
Example 1.1.1 Let \( f\left( x\right) = \lfloor x\rfloor \) . Explain why the limit\n\n\[ \mathop{\lim }\limits_{{x \rightarrow 2}}f\left( x\right) \]\n\ndoes not exist.
Solution The function \( \lfloor x\rfloor \) is the function that returns the greatest integer less than or equal to \( x \) . Since \( f\left( x\right) \) is defined for all real numbers, one might be tempted to think that the limit above is simply \( f\left( 2\right) = 2 \) . However, this is not the case. If \( x \) is slightly less than 2, then \( f\left( x\right) = 1 \) . Hence if \( \varepsilon = {.5} \), then no matter what \( \delta > 0 \) is chosen, we can always find a value for \( x \) (just to the left of 2) such that\n\n\[ 0 < \left| {x - 2}\right| < \delta ,\;\text{ where }\;\varepsilon < \left| {f\left( x\right) - 2}\right| . \]\n\nOn the other hand, \( \mathop{\lim }\limits_{{x \rightarrow 2}}f\left( x\right) \neq 1 \), as in this case with \( \varepsilon = {.5} \), no matter what \( \delta > 0 \) is chosen, we can always find a value for \( x \) (just to the right of 2) such that\n\n\[ 0 < \left| {x - 2}\right| < \delta ,\;\text{ where }\;\varepsilon < \left| {f\left( x\right) - 1}\right| . \]\n\nWe’ve illustrated this in Figure 1.3. Moreover, no matter what value one chooses for \( \mathop{\lim }\limits_{{x \rightarrow 2}}f\left( x\right) \), we will always have a similar issue.
Yes
Let \( f\left( x\right) = \sin \left( \frac{1}{x}\right) \) . Explain why the limit\n\n\[ \mathop{\lim }\limits_{{x \rightarrow 0}}f\left( x\right) \]\ndoes not exist.
Solution In this case \( f\left( x\right) \) oscillates between \( - 1 \) and 1 infinitely often as \( x \) approaches 0 . Hence, for any proposed limit \( L \), if we take \( \varepsilon = {.5} \) we can find values of \( x \) arbitrarily close to 0 with \( \left| {f\left( x\right) - L}\right| > \varepsilon \), so the limit does not exist.
No
Example 1.1.3 Let \( f\left( x\right) = \lfloor x\rfloor \) . Discuss\n\n\[ \mathop{\lim }\limits_{{x \rightarrow 2 - }}f\left( x\right) ,\;\mathop{\lim }\limits_{{x \rightarrow 2 + }}f\left( x\right) ,\;\text{ and }\;\mathop{\lim }\limits_{{x \rightarrow 2}}f\left( x\right) .\n\]
Solution From the plot of \( f\left( x\right) \), see Figure 1.3, we see that\n\n\[ \mathop{\lim }\limits_{{x \rightarrow 2 - }}f\left( x\right) = 1,\;\text{ and }\;\mathop{\lim }\limits_{{x \rightarrow 2 + }}f\left( x\right) = 2.\n\]\n\nSince these limits are different, \( \mathop{\lim }\limits_{{x \rightarrow 2}}f\left( x\right) \) does not exist.
Yes
Show that \( \mathop{\lim }\limits_{{x \rightarrow 2}}{x}^{2} = 4 \) .
Solution We want to show that for any given \( \varepsilon > 0 \), we can find a \( \delta > 0 \) such that\n\n\[ \left| {{x}^{2} - 4}\right| < \varepsilon \]\n\nwhenever \( 0 < \left| {x - 2}\right| < \delta \) . Start by factoring the left-hand side of the inequality above\n\n\[ \left| {x + 2}\right| \left| {x - 2}\right| < \varepsilon \text{.} \]\n\nSince we are going to assume that \( 0 < \left| {x - 2}\right| < \delta \), we will focus on the factor \( \left| {x + 2}\right| \) . Since \( x \) is assumed to be close to 2, suppose that \( x \in \left\lbrack {1,3}\right\rbrack \) . In this case\n\n\[ \left| {x + 2}\right| \leq 3 + 2 = 5 \]\n\nand so we want\n\n\[ 5 \cdot \left| {x - 2}\right| < \varepsilon \]\n\n\[ \left| {x - 2}\right| < \frac{\varepsilon }{5} \]\n\nRecall, we assumed that \( x \in \left\lbrack {1,3}\right\rbrack \), which is equivalent to \( \left| {x - 2}\right| \leq 1 \) . Hence we must set \( \delta = \min \left( {\frac{\varepsilon }{5},1}\right) \) .
Yes
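The choice delta = min(epsilon/5, 1) can be spot-checked numerically; the sketch below (a minimal check, with the helper name `delta_for` ours) samples points inside the delta-window and confirms the epsilon-bound.

```python
def delta_for(eps):
    # The delta constructed in the solution: min(eps/5, 1).
    return min(eps / 5, 1)

for eps in [0.5, 0.1, 0.01]:
    d = delta_for(eps)
    for k in range(1, 1000):
        x = 2 - d + 2 * d * k / 1000  # sample points with |x - 2| < d
        assert abs(x**2 - 4) < eps    # the epsilon bound holds
print("all sampled points satisfy |x^2 - 4| < eps")
```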
Theorem 1.2.2 (Limit Product Law) Suppose \( \mathop{\lim }\limits_{{x \rightarrow a}}f\left( x\right) = L \) and \( \mathop{\lim }\limits_{{x \rightarrow a}}g\left( x\right) = M \) . Then \[ \mathop{\lim }\limits_{{x \rightarrow a}}f\left( x\right) g\left( x\right) = {LM} \]
Proof Given any \( \varepsilon \) we need to find a \( \delta \) such that \[ 0 < \left| {x - a}\right| < \delta \] implies \[ \left| {f\left( x\right) g\left( x\right) - {LM}}\right| < \varepsilon . \] Here we use an algebraic trick: add \( 0 = - f\left( x\right) M + f\left( x\right) M \) inside the absolute value, so that\n\n\[ \left| {f\left( x\right) g\left( x\right) - {LM}}\right| = \left| {f\left( x\right) g\left( x\right) - f\left( x\right) M + f\left( x\right) M - {LM}}\right| \leq \left| {f\left( x\right) }\right| \left| {g\left( x\right) - M}\right| + \left| M\right| \left| {f\left( x\right) - L}\right| . \]\n\nSince \( \left| {f\left( x\right) }\right| \) is bounded near \( a \) (because \( f\left( x\right) \rightarrow L \) ), each term on the right can be made smaller than \( \varepsilon /2 \) by taking \( x \) close enough to \( a \) . We will use this same trick of adding a convenient form of zero again.
No
Theorem 1.2.3 (Limit Composition Law) Suppose that \( \mathop{\lim }\limits_{{x \rightarrow a}}g\left( x\right) = M \) and \( \mathop{\lim }\limits_{{x \rightarrow M}}f\left( x\right) = f\left( M\right) \) . Then \[ \mathop{\lim }\limits_{{x \rightarrow a}}f\left( {g\left( x\right) }\right) = f\left( M\right) \] This is sometimes written as
\[ \mathop{\lim }\limits_{{x \rightarrow a}}f\left( {g\left( x\right) }\right) = \mathop{\lim }\limits_{{g\left( x\right) \rightarrow M}}f\left( {g\left( x\right) }\right) . \]
No
Theorem 1.2.4 (Limit Root Law) Suppose that \( n \) is a positive integer. Then\n\n\[ \mathop{\lim }\limits_{{x \rightarrow a}}\sqrt[n]{x} = \sqrt[n]{a} \]\n\nprovided that \( a \) is positive if \( n \) is even.
This theorem is not too difficult to prove from the definition of limit.
No
Theorem 1.3.1 (Limit Laws) Suppose that \( \mathop{\lim }\limits_{{x \rightarrow a}}f\left( x\right) = L,\mathop{\lim }\limits_{{x \rightarrow a}}g\left( x\right) = M, k \) is some constant, and \( n \) is a positive integer.
Constant Law \( \mathop{\lim }\limits_{{x \rightarrow a}}{kf}\left( x\right) = k\mathop{\lim }\limits_{{x \rightarrow a}}f\left( x\right) = {kL} \) .\n\nSum Law \( \mathop{\lim }\limits_{{x \rightarrow a}}\left( {f\left( x\right) + g\left( x\right) }\right) = \mathop{\lim }\limits_{{x \rightarrow a}}f\left( x\right) + \mathop{\lim }\limits_{{x \rightarrow a}}g\left( x\right) = L + M \) .\n\nProduct Law \( \mathop{\lim }\limits_{{x \rightarrow a}}\left( {f\left( x\right) g\left( x\right) }\right) = \mathop{\lim }\limits_{{x \rightarrow a}}f\left( x\right) \cdot \mathop{\lim }\limits_{{x \rightarrow a}}g\left( x\right) = {LM} \) .\n\nQuotient Law \( \mathop{\lim }\limits_{{x \rightarrow a}}\frac{f\left( x\right) }{g\left( x\right) } = \frac{\mathop{\lim }\limits_{{x \rightarrow a}}f\left( x\right) }{\mathop{\lim }\limits_{{x \rightarrow a}}g\left( x\right) } = \frac{L}{M} \), if \( M \neq 0 \) .\n\nPower Law \( \mathop{\lim }\limits_{{x \rightarrow a}}f{\left( x\right) }^{n} = {\left( \mathop{\lim }\limits_{{x \rightarrow a}}f\left( x\right) \right) }^{n} = {L}^{n} \) .\n\nRoot Law \( \mathop{\lim }\limits_{{x \rightarrow a}}\sqrt[n]{f\left( x\right) } = \sqrt[n]{\mathop{\lim }\limits_{{x \rightarrow a}}f\left( x\right) } = \sqrt[n]{L} \), provided that, if \( n \) is even, \( f\left( x\right) \geq 0 \) near \( a \) .\n\nComposition Law If \( \mathop{\lim }\limits_{{x \rightarrow a}}g\left( x\right) = M \) and \( \mathop{\lim }\limits_{{x \rightarrow M}}f\left( x\right) = f\left( M\right) \), then \( \mathop{\lim }\limits_{{x \rightarrow a}}f\left( {g\left( x\right) }\right) = f\left( M\right) \) .
Yes
Example 1.3.2 Compute \( \mathop{\lim }\limits_{{x \rightarrow 1}}\frac{{x}^{2} - {3x} + 5}{x - 2} \) .
Solution Using limit laws,\n\n\[ \mathop{\lim }\limits_{{x \rightarrow 1}}\frac{{x}^{2} - {3x} + 5}{x - 2} = \frac{\mathop{\lim }\limits_{{x \rightarrow 1}}\left( {{x}^{2} - {3x} + 5}\right) }{\mathop{\lim }\limits_{{x \rightarrow 1}}\left( {x - 2}\right) } \]\n\n\[ = \frac{\mathop{\lim }\limits_{{x \rightarrow 1}}{x}^{2} - \mathop{\lim }\limits_{{x \rightarrow 1}}{3x} + \mathop{\lim }\limits_{{x \rightarrow 1}}5}{\mathop{\lim }\limits_{{x \rightarrow 1}}x - \mathop{\lim }\limits_{{x \rightarrow 1}}2} \]\n\n\[ = \frac{{\left( \mathop{\lim }\limits_{{x \rightarrow 1}}x\right) }^{2} - 3\mathop{\lim }\limits_{{x \rightarrow 1}}x + 5}{\mathop{\lim }\limits_{{x \rightarrow 1}}x - 2} \]\n\n\[ = \frac{{1}^{2} - 3 \cdot 1 + 5}{1 - 2} \]\n\n\[ = \frac{1 - 3 + 5}{-1} = - 3\text{. } \]
Yes
Example 1.3.3 Compute \( \mathop{\lim }\limits_{{x \rightarrow 1}}\frac{{x}^{2} + {2x} - 3}{x - 1} \) .
Solution We can’t simply plug in \( x = 1 \) because that makes the denominator zero. Limits allow us to examine functions at points where they are not defined; when taking limits we assume \( x \neq 1 \) :\n\n\[ \mathop{\lim }\limits_{{x \rightarrow 1}}\frac{{x}^{2} + {2x} - 3}{x - 1} = \mathop{\lim }\limits_{{x \rightarrow 1}}\frac{\left( {x - 1}\right) \left( {x + 3}\right) }{x - 1} \]\n\n\[ = \mathop{\lim }\limits_{{x \rightarrow 1}}\left( {x + 3}\right) = 4 \]
Yes
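Numerically, the cancellation is visible: away from x = 1 the function equals x + 3, so its values close in on 4. A small check (the function name `f` is ours):

```python
def f(x):
    # (x^2 + 2x - 3)/(x - 1), undefined at x = 1.
    return (x**2 + 2 * x - 3) / (x - 1)

for x in [1.1, 1.01, 1.001, 0.999, 0.99]:
    print(x, f(x))   # values approach the limit 4
```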
Example 1.3.4 Compute \( \mathop{\lim }\limits_{{x \rightarrow - 1}}\frac{\sqrt{x + 5} - 2}{x + 1} \) .
Solution Using limit laws,\n\n\[ \mathop{\lim }\limits_{{x \rightarrow - 1}}\frac{\sqrt{x + 5} - 2}{x + 1} = \mathop{\lim }\limits_{{x \rightarrow - 1}}\frac{\sqrt{x + 5} - 2}{x + 1}\frac{\sqrt{x + 5} + 2}{\sqrt{x + 5} + 2} \]\n\n\[ = \mathop{\lim }\limits_{{x \rightarrow - 1}}\frac{x + 5 - 4}{\left( {x + 1}\right) \left( {\sqrt{x + 5} + 2}\right) } \]\n\n\[ = \mathop{\lim }\limits_{{x \rightarrow - 1}}\frac{x + 1}{\left( {x + 1}\right) \left( {\sqrt{x + 5} + 2}\right) } \]\n\n\[ = \mathop{\lim }\limits_{{x \rightarrow - 1}}\frac{1}{\sqrt{x + 5} + 2} = \frac{1}{4} \]
Yes
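The same limit can be sampled numerically (the function name `g` is ours); the values home in on 1/4.

```python
from math import sqrt

def g(x):
    # (sqrt(x + 5) - 2)/(x + 1), undefined at x = -1.
    return (sqrt(x + 5) - 2) / (x + 1)

for x in [-0.9, -0.99, -0.999, -1.001]:
    print(x, g(x))   # values approach 1/4
```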
Example 1.3.6 Compute\n\n\\[ \n\\mathop{\\lim }\\limits_{{x \\rightarrow 0}}\\frac{\\sin \\left( x\\right) }{x}\n\\]
Solution To compute this limit, use the Squeeze Theorem, Theorem 1.3.5. First note that we only need to examine \\( x \\in \\left( {\\frac{-\\pi }{2},\\frac{\\pi }{2}}\\right) \\) and for the present time, we’ll assume that \\( x \\) is positive. Consider the unit-circle diagrams: triangle \\( A \\) (with legs \\( \\cos \\left( x\\right) \\) and \\( \\sin \\left( x\\right) \\) ) sits inside the circular sector of angle \\( x \\), which in turn sits inside triangle \\( B \\) (with base 1 and height \\( \\tan \\left( x\\right) \\) ). From these diagrams we see that\n\nArea of Triangle \\( A \\leq \\) Area of Sector \\( \\leq \\) Area of Triangle \\( B \\)\n\nand computing these areas we find\n\n\\[ \n\\frac{\\cos \\left( x\\right) \\sin \\left( x\\right) }{2} \\leq \\left( \\frac{x}{2\\pi }\\right) \\cdot \\pi \\leq \\frac{\\tan \\left( x\\right) }{2}.\n\\]\n\nMultiplying through by 2, and recalling that \\( \\tan \\left( x\\right) = \\frac{\\sin \\left( x\\right) }{\\cos \\left( x\\right) } \\) we obtain\n\n\\[ \n\\cos \\left( x\\right) \\sin \\left( x\\right) \\leq x \\leq \\frac{\\sin \\left( x\\right) }{\\cos \\left( x\\right) }.\n\\]\n\nDividing through by \\( \\sin \\left( x\\right) \\) and taking the reciprocals, we find\n\n\\[ \n\\cos \\left( x\\right) \\leq \\frac{\\sin \\left( x\\right) }{x} \\leq \\frac{1}{\\cos \\left( x\\right) }.\n\\]\n\nNote, \\( \\cos \\left( {-x}\\right) = \\cos \\left( x\\right) \\) and \\( \\frac{\\sin \\left( {-x}\\right) }{-x} = \\frac{\\sin \\left( x\\right) }{x} \\), so these inequalities hold for all \\( x \\in \\left( {\\frac{-\\pi }{2},\\frac{\\pi }{2}}\\right) \\) . Additionally, we know\n\n\\[ \n\\mathop{\\lim }\\limits_{{x \\rightarrow 0}}\\cos \\left( x\\right) = 1 = \\mathop{\\lim }\\limits_{{x \\rightarrow 0}}\\frac{1}{\\cos \\left( x\\right) }\n\\]\n\nand so we conclude by the Squeeze Theorem, Theorem 1.3.5, \\( \\mathop{\\lim }\\limits_{{x \\rightarrow 0}}\\frac{\\sin \\left( x\\right) }{x} = 1 \\) .
Yes
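The squeeze is easy to see numerically: for small nonzero x the ratio sin(x)/x is pinned between cos(x) and 1/cos(x), and all three tend to 1.

```python
from math import cos, sin

for x in [0.5, 0.1, 0.01, -0.01]:
    ratio = sin(x) / x
    assert cos(x) <= ratio <= 1 / cos(x)  # the squeeze inequalities
    print(x, ratio)
```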
Example 2.1.1 Find the vertical asymptotes of\n\n\\[ \nf\\left( x\\right) = \\frac{{x}^{2} - {9x} + {14}}{{x}^{2} - {5x} + 6} \n\\]\n
Solution Start by factoring both the numerator and the denominator:\n\n\\[ \n\\frac{{x}^{2} - {9x} + {14}}{{x}^{2} - {5x} + 6} = \\frac{\\left( {x - 2}\\right) \\left( {x - 7}\\right) }{\\left( {x - 2}\\right) \\left( {x - 3}\\right) } \n\\]\n\nUsing limits, we must investigate when \\( x \\rightarrow 2 \\) and \\( x \\rightarrow 3 \\) . Write\n\n\\[ \n\\mathop{\\lim }\\limits_{{x \\rightarrow 2}}\\frac{\\left( {x - 2}\\right) \\left( {x - 7}\\right) }{\\left( {x - 2}\\right) \\left( {x - 3}\\right) } = \\mathop{\\lim }\\limits_{{x \\rightarrow 2}}\\frac{\\left( x - 7\\right) }{\\left( x - 3\\right) } \n\\]\n\n\\[ \n= \\frac{-5}{-1} \n\\]\n\n\\[ \n= 5\\text{.} \n\\]\n\nSince this limit exists, there is no vertical asymptote at \\( x = 2 \\) . Now write\n\n\\[ \n\\mathop{\\lim }\\limits_{{x \\rightarrow 3}}\\frac{\\left( {x - 2}\\right) \\left( {x - 7}\\right) }{\\left( {x - 2}\\right) \\left( {x - 3}\\right) } = \\mathop{\\lim }\\limits_{{x \\rightarrow 3}}\\frac{\\left( x - 7\\right) }{\\left( x - 3\\right) } \n\\]\n\n\\[ \n= \\mathop{\\lim }\\limits_{{x \\rightarrow 3}}\\frac{-4}{x - 3}\\text{.} \n\\]\n\nSince \\( x - 3 \\) approaches 0 from the right as \\( x \\rightarrow 3 + \\) and the numerator is negative, \\( \\mathop{\\lim }\\limits_{{x \\rightarrow 3 + }}f\\left( x\\right) = - \\infty \\) . Since \\( x - 3 \\) approaches 0 from the left as \\( x \\rightarrow 3 - \\) and the numerator is negative, \\( \\mathop{\\lim }\\limits_{{x \\rightarrow 3 - }}f\\left( x\\right) = \\infty \\) . Hence we have a vertical asymptote at \\( x = 3 \\), see Figure 2.3.
Yes
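A numerical look (the function name `f` is ours) distinguishes the hole at x = 2 from the genuine asymptote at x = 3.

```python
def f(x):
    # (x^2 - 9x + 14)/(x^2 - 5x + 6)
    return (x**2 - 9 * x + 14) / (x**2 - 5 * x + 6)

# Near x = 2 the values settle toward 5: a hole, not an asymptote.
print(f(1.999), f(2.001))
# Near x = 3 the values blow up, with opposite signs on each side.
print(f(2.999), f(3.001))
```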
Example 2.2.1 Compute\n\n\\[ \n\\mathop{\\lim }\\limits_{{x \\rightarrow \\infty }}\\frac{{6x} - 9}{x - 1} \n\\]
Solution Write\n\n\\[ \n\\mathop{\\lim }\\limits_{{x \\rightarrow \\infty }}\\frac{{6x} - 9}{x - 1} = \\mathop{\\lim }\\limits_{{x \\rightarrow \\infty }}\\frac{{6x} - 9}{x - 1}\\frac{1/x}{1/x} \n\\]\n\n\\[ \n= \\mathop{\\lim }\\limits_{{x \\rightarrow \\infty }}\\frac{\\frac{6x}{x} - \\frac{9}{x}}{\\frac{x}{x} - \\frac{1}{x}} \n\\]\n\n\\[ \n= \\mathop{\\lim }\\limits_{{x \\rightarrow \\infty }}\\frac{6 - \\frac{9}{x}}{1 - \\frac{1}{x}} \n\\]\n\n\\[ \n= \\frac{6}{1} \n\\]\n\n\\[ \n= 6\\text{.} \n\\]
Yes
Example 2.2.2 Compute\n\n\\[ \n\\mathop{\\lim }\\limits_{{x \\rightarrow - \\infty }}\\frac{x + 1}{\\sqrt{{x}^{2}}} \n\\]
Solution In this case we multiply the numerator and denominator by \\( - 1/x \\) , which is a positive number since, as \\( x \\rightarrow - \\infty \\) , \\( x \\) is negative.\n\n\\[ \n\\mathop{\\lim }\\limits_{{x \\rightarrow - \\infty }}\\frac{x + 1}{\\sqrt{{x}^{2}}} = \\mathop{\\lim }\\limits_{{x \\rightarrow - \\infty }}\\frac{x + 1}{\\sqrt{{x}^{2}}} \\cdot \\frac{-1/x}{-1/x} \n\\]\n\n\\[ \n= \\mathop{\\lim }\\limits_{{x \\rightarrow - \\infty }}\\frac{-1 - 1/x}{\\sqrt{{x}^{2}/{x}^{2}}} \n\\]\n\n\\[ \n= - 1\\text{.} \n\\]
Yes
Example 2.2.3 Compute\n\n\\[ \n\\mathop{\\lim }\\limits_{{x \\rightarrow \\infty }}\\frac{\\sin \\left( {7x}\\right) }{x} + 4 \n\\]
Solution We can bound our function\n\n\\[ \n- 1/x + 4 \\leq \\frac{\\sin \\left( {7x}\\right) }{x} + 4 \\leq 1/x + 4. \n\\]\n\nSince\n\n\\[ \n\\mathop{\\lim }\\limits_{{x \\rightarrow \\infty }} - 1/x + 4 = 4 = \\mathop{\\lim }\\limits_{{x \\rightarrow \\infty }}1/x + 4 \n\\]\n\nwe conclude by the Squeeze Theorem, Theorem 1.3.5, \\( \\mathop{\\lim }\\limits_{{x \\rightarrow \\infty }}\\frac{\\sin \\left( {7x}\\right) }{x} + 4 = 4 \\) .
Yes
Example 2.2.4 Give the horizontal asymptotes of\n\n\[ f\left( x\right) = \frac{{6x} - 9}{x - 1} \]
Solution From our previous work, we see that \( \mathop{\lim }\limits_{{x \rightarrow \infty }}f\left( x\right) = 6 \), and upon further inspection, we see that \( \mathop{\lim }\limits_{{x \rightarrow - \infty }}f\left( x\right) = 6 \) . Hence the horizontal asymptote of \( f\left( x\right) \) is the line \( y = 6 \) .
Yes
Example 2.2.5 Give a horizontal asymptote of\n\n\[ f\left( x\right) = \frac{\sin \left( {7x}\right) }{x} + 4 \]
Solution Again from previous work, we see that \( \mathop{\lim }\limits_{{x \rightarrow \infty }}f\left( x\right) = 4 \) . Hence \( y = 4 \) is a horizontal asymptote of \( f\left( x\right) \) .
Yes
\[ \mathop{\lim }\limits_{{x \rightarrow \infty }}\ln \left( x\right) \]
The function \( \ln \left( x\right) \) grows very slowly, and seems like it may have a horizontal asymptote, see Figure 2.6. However, if we consider the definition of the natural log,\n\n\[ \ln \left( x\right) = y\; \Leftrightarrow \;{e}^{y} = x, \]\n\nwe see that we need to raise \( e \) to higher and higher values to obtain larger and larger numbers \( x \) . Hence \( \ln \left( x\right) \) is unbounded, and so \( \mathop{\lim }\limits_{{x \rightarrow \infty }}\ln \left( x\right) = \infty \) .
Yes
Find the discontinuities (the values for \( x \) where a function is not continuous) for the function given in Figure 2.7.
Solution From Figure 2.7 we see that \( \mathop{\lim }\limits_{{x \rightarrow 4}}f\left( x\right) \) does not exist as\n\n\[ \mathop{\lim }\limits_{{x \rightarrow 4 - }}f\left( x\right) = 1\;\text{ and }\;\mathop{\lim }\limits_{{x \rightarrow 4 + }}f\left( x\right) \approx {3.5} \]\n\nHence \( \mathop{\lim }\limits_{{x \rightarrow 4}}f\left( x\right) \neq f\left( 4\right) \), and so \( f\left( x\right) \) is not continuous at \( x = 4 \).\n\nWe also see that \( \mathop{\lim }\limits_{{x \rightarrow 6}}f\left( x\right) \approx 3 \) while \( f\left( 6\right) = 2 \). Hence \( \mathop{\lim }\limits_{{x \rightarrow 6}}f\left( x\right) \neq f\left( 6\right) \), and so \( f\left( x\right) \) is not continuous at \( x = 6 \).
Yes
Example 2.3.2 Consider the function\n\n\[ f\left( x\right) = \left\{ \begin{array}{ll} \sqrt[5]{x}\sin \left( \frac{1}{x}\right) & \text{ if }x \neq 0 \\ 0 & \text{ if }x = 0 \end{array}\right. \]\n\nsee Figure 2.8. Is this function continuous?
Solution Considering \( f\left( x\right) \), the only issue is when \( x = 0 \) . We must show that \( \mathop{\lim }\limits_{{x \rightarrow 0}}f\left( x\right) = 0 \) . Note\n\n\[ - \left| \sqrt[5]{x}\right| \leq f\left( x\right) \leq \left| \sqrt[5]{x}\right| \]\n\nSince\n\n\[ \mathop{\lim }\limits_{{x \rightarrow 0}} - \left| \sqrt[5]{x}\right| = 0 = \mathop{\lim }\limits_{{x \rightarrow 0}}\left| \sqrt[5]{x}\right| \]\n\nwe see by the Squeeze Theorem, Theorem 1.3.5, that \( \mathop{\lim }\limits_{{x \rightarrow 0}}f\left( x\right) = 0 \) . Hence \( f\left( x\right) \) is continuous.
Yes
Example 2.3.4 Explain why the function \( f\left( x\right) = {x}^{3} + 3{x}^{2} + x - 2 \) has a root between 0 and 1 .
Solution By Theorem 1.3.1, \( \mathop{\lim }\limits_{{x \rightarrow a}}f\left( x\right) = f\left( a\right) \), for all real values of \( a \), and hence \( f \) is continuous. Since \( f\left( 0\right) = - 2 \) and \( f\left( 1\right) = 3 \), and 0 is between -2 and 3, by the Intermediate Value Theorem, Theorem 2.3.3, there is a \( c \in \left\lbrack {0,1}\right\rbrack \) such that \( f\left( c\right) = 0 \) .
Yes
Example 2.3.5 Approximate a root of \( f\left( x\right) = {x}^{3} + 3{x}^{2} + x - 2 \) to one decimal place.
Solution If we compute \( f\left( {0.1}\right), f\left( {0.2}\right) \), and so on, we find that \( f\left( {0.6}\right) < 0 \) and \( f\left( {0.7}\right) > 0 \), so by the Intermediate Value Theorem, \( f \) has a root between 0.6 and 0.7 . Repeating the process with \( f\left( {0.61}\right), f\left( {0.62}\right) \), and so on, we find that \( f\left( {0.61}\right) < 0 \) and \( f\left( {0.62}\right) > 0 \), so by the Intermediate Value Theorem, Theorem 2.3.3, \( f\left( x\right) \) has a root between 0.61 and 0.62, and the root is 0.6 rounded to one decimal place.
Yes
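The decimal search above is bisection in disguise. A minimal sketch (the helper name `bisect_root` is ours), which repeatedly halves an interval whose endpoints have opposite signs, as licensed by the Intermediate Value Theorem:

```python
def f(x):
    return x**3 + 3 * x**2 + x - 2

def bisect_root(lo, hi, tol=1e-6):
    # Keep f(lo) < 0 < f(hi) while halving the bracket; the
    # Intermediate Value Theorem guarantees a root stays inside.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

root = bisect_root(0, 1)
print(round(root, 1))  # 0.6
```

Each pass halves the bracket, so a handful of iterations already beats checking every hundredth by hand.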
Example 3.1.1 Compute\n\[ \frac{d}{dx}\left( {{x}^{3} + 1}\right) \]
Solution Using the definition of the derivative,\n\n\[ \frac{d}{dx}f\left( x\right) = \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{{\left( x + h\right) }^{3} + 1 - \left( {{x}^{3} + 1}\right) }{h} \]\n\n\[ = \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{{x}^{3} + 3{x}^{2}h + {3x}{h}^{2} + {h}^{3} + 1 - {x}^{3} - 1}{h} \]\n\n\[ = \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{3{x}^{2}h + {3x}{h}^{2} + {h}^{3}}{h} \]\n\n\[ = \mathop{\lim }\limits_{{h \rightarrow 0}}\left( {3{x}^{2} + {3xh} + {h}^{2}}\right) \]\n\n\[ = 3{x}^{2}\text{.} \]
Yes
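The limit in this computation can be watched numerically: the difference quotient of f(x) = x^3 + 1 at, say, x = 2 approaches 3 * 2^2 = 12 as h shrinks (the helper names below are ours).

```python
def f(x):
    return x**3 + 1

def diff_quotient(x, h):
    # The quantity inside the limit definition of the derivative.
    return (f(x + h) - f(x)) / h

for h in [0.1, 0.01, 0.001]:
    print(h, diff_quotient(2, h))   # approaches 3 * 2**2 = 12
```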
Compute \[ \frac{d}{dt}\frac{1}{t} \]
Solution Using the definition of the derivative, \[ \frac{d}{dt}\frac{1}{t} = \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{\frac{1}{t + h} - \frac{1}{t}}{h} \] \[ = \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{\frac{t}{t\left( {t + h}\right) } - \frac{t + h}{t\left( {t + h}\right) }}{h} \] \[ = \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{\frac{t - \left( {t + h}\right) }{t\left( {t + h}\right) }}{h} \] \[ = \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{t - t - h}{t\left( {t + h}\right) h} \] \[ = \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{-h}{t\left( {t + h}\right) h} \] \[ = \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{-1}{t\left( {t + h}\right) } \] \[ = \frac{-1}{{t}^{2}} \] This function is differentiable at all real numbers except for \( t = 0 \), see Figure 3.4.
Yes
Theorem 3.1.3 (Differentiability Implies Continuity) If \( f\left( x\right) \) is a differentiable function at \( x = a \), then \( f\left( x\right) \) is continuous at \( x = a \) .
Proof We want to show that \( f\left( x\right) \) is continuous at \( x = a \), hence we must show that\n\n\[ \mathop{\lim }\limits_{{x \rightarrow a}}f\left( x\right) = f\left( a\right) \]\n\nConsider\n\n\[ \mathop{\lim }\limits_{{x \rightarrow a}}\left( {f\left( x\right) - f\left( a\right) }\right) = \mathop{\lim }\limits_{{x \rightarrow a}}\left( {\left( {x - a}\right) \frac{f\left( x\right) - f\left( a\right) }{x - a}}\right) \]\n\nMultiply and divide by \( \left( {x - a}\right) \).\n\n\[ = \mathop{\lim }\limits_{{h \rightarrow 0}}h \cdot \frac{f\left( {a + h}\right) - f\left( a\right) }{h} \]\n\nSet \( x = a + h \).\n\n\[ = \left( {\mathop{\lim }\limits_{{h \rightarrow 0}}h}\right) \left( {\mathop{\lim }\limits_{{h \rightarrow 0}}\frac{f\left( {a + h}\right) - f\left( a\right) }{h}}\right) \]\n\nLimit Law.\n\n\[ = 0 \cdot {f}^{\prime }\left( a\right) = 0\text{.} \]\n\nSince\n\n\[ \mathop{\lim }\limits_{{x \rightarrow a}}\left( {f\left( x\right) - f\left( a\right) }\right) = 0 \]\n\nwe see that \( \mathop{\lim }\limits_{{x \rightarrow a}}f\left( x\right) = f\left( a\right) \), and so \( f\left( x\right) \) is continuous.
Yes
Example 3.1.4 Compute\n\\[ \n\\frac{d}{dx}\\left| x\\right| \n\\]
Solution Using the definition of the derivative,\n\n\\[ \n\\frac{d}{dx}\\left| x\\right| = \\mathop{\\lim }\\limits_{{h \\rightarrow 0}}\\frac{\\left| {x + h}\\right| - \\left| x\\right| }{h}. \n\\]\n\nIf \\( x \\) is positive we may assume that \\( x \\) is larger than \\( h \\), as we are taking the limit as \\( h \\) goes to 0,\n\n\\[ \n\\mathop{\\lim }\\limits_{{h \\rightarrow 0}}\\frac{\\left| {x + h}\\right| - \\left| x\\right| }{h} = \\mathop{\\lim }\\limits_{{h \\rightarrow 0}}\\frac{x + h - x}{h} \n\\]\n\n\\[ \n= \\mathop{\\lim }\\limits_{{h \\rightarrow 0}}\\frac{h}{h} \n\\]\n\n\\[ \n= 1\\text{.} \n\\]\n\nIf \\( x \\) is negative we may assume that \\( \\left| x\\right| \\) is larger than \\( h \\), as we are taking the\n\nlimit as \\( h \\) goes to 0,\n\n\\[ \n\\mathop{\\lim }\\limits_{{h \\rightarrow 0}}\\frac{\\left| {x + h}\\right| - \\left| x\\right| }{h} = \\mathop{\\lim }\\limits_{{h \\rightarrow 0}}\\frac{-x - h + x}{h} \n\\]\n\n\\[ \n= \\mathop{\\lim }\\limits_{{h \\rightarrow 0}}\\frac{-h}{h} \n\\]\n\n\\[ \n= - 1\\text{.} \n\\]\n\nHowever we still have one case left, when \\( x = 0 \\) . In this situation, we must consider the one-sided limits:\n\n\\[ \n\\mathop{\\lim }\\limits_{{h \\rightarrow 0 + }}\\frac{\\left| {x + h}\\right| - \\left| x\\right| }{h}\\;\\text{ and }\\;\\mathop{\\lim }\\limits_{{h \\rightarrow 0 - }}\\frac{\\left| {x + h}\\right| - \\left| x\\right| }{h}. 
\n\\]\n\nIn the first case,\n\n\\[ \n\\mathop{\\lim }\\limits_{{h \\rightarrow 0 + }}\\frac{\\left| {x + h}\\right| - \\left| x\\right| }{h} = \\mathop{\\lim }\\limits_{{h \\rightarrow 0 + }}\\frac{0 + h - 0}{h} \n\\]\n\n\\[ \n= \\mathop{\\lim }\\limits_{{h \\rightarrow 0 + }}\\frac{h}{h} \n\\]\n\n\\[ \n= 1\\text{.} \n\\]\n\nOn the other hand\n\n\\[ \n\\mathop{\\lim }\\limits_{{h \\rightarrow 0 - }}\\frac{\\left| {x + h}\\right| - \\left| x\\right| }{h} = \\mathop{\\lim }\\limits_{{h \\rightarrow 0 - }}\\frac{\\left| {0 + h}\\right| - 0}{h} \n\\]\n\n\\[ \n= \\mathop{\\lim }\\limits_{{h \\rightarrow 0 - }}\\frac{\\left| h\\right| }{h} \n\\]\n\n\\[ \n= - 1\\text{.} \n\\]\n\nHence we see that the derivative is\n\n\\[ \n{f}^{\\prime }\\left( x\\right) = \\left\\{ \\begin{array}{ll} 1 & \\text{ if }x > 0 \\\\ - 1 & \\text{ if }x < 0 \\end{array}\\right. \n\\]\n\nNote this function is undefined at 0 , see Figure 3.5.
Yes
Theorem 3.2.1 (The Constant Rule) Given a constant \( c \) ,\n\n\[ \frac{d}{dx}c = 0 \]
Proof From the limit definition of the derivative, write\n\n\[ \frac{d}{dx}c = \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{c - c}{h} \]\n\n\[ = \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{0}{h} \]\n\n\[ = \mathop{\lim }\limits_{{h \rightarrow 0}}0 = 0\text{.} \]
Yes
Theorem 3.2.2 (The Power Rule) For any real number \( n \) ,\n\n\[ \frac{d}{dx}{x}^{n} = n{x}^{n - 1} \]
Proof At this point we will only prove this theorem for n being a positive integer. Later in Section 6.3, we will give the complete proof. From the limit definition of the derivative, write\n\n\[ \frac{d}{dx}{x}^{n} = \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{{\left( x + h\right) }^{n} - {x}^{n}}{h}. \]\n\nStart by expanding the term \( {\left( x + h\right) }^{n} \)\n\n\[ \frac{d}{dx}{x}^{n} = \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{{x}^{n} + \left( \begin{array}{l} n \\ 1 \end{array}\right) {x}^{n - 1}h + \left( \begin{array}{l} n \\ 2 \end{array}\right) {x}^{n - 2}{h}^{2} + \cdots + \left( \begin{matrix} n \\ n - 1 \end{matrix}\right) x{h}^{n - 1} + {h}^{n} - {x}^{n}}{h} \]\n\nNote, by the Binomial Theorem, we write \( \left( \begin{array}{l} n \\ k \end{array}\right) \) for the coefficients. Canceling the\n\nterms \( {x}^{n} \) and \( - {x}^{n} \), and noting \( \left( \begin{array}{l} n \\ 1 \end{array}\right) = \left( \begin{matrix} n \\ n - 1 \end{matrix}\right) = n \), write\n\n\[ \frac{d}{dx}{x}^{n} = \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{n{x}^{n - 1}h + \left( \begin{array}{l} n \\ 2 \end{array}\right) {x}^{n - 2}{h}^{2} + \cdots + \left( \begin{matrix} n \\ n - 1 \end{matrix}\right) x{h}^{n - 1} + {h}^{n}}{h} \]\n\n\[ = \mathop{\lim }\limits_{{h \rightarrow 0}}n{x}^{n - 1} + \left( \begin{array}{l} n \\ 2 \end{array}\right) {x}^{n - 2}h + \cdots + \left( \begin{matrix} n \\ n - 1 \end{matrix}\right) x{h}^{n - 2} + {h}^{n - 1}. \]\n\nSince every term but the first has a factor of \( h \), we see\n\n\[ \frac{d}{dx}{x}^{n} = \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{{\left( x + h\right) }^{n} - {x}^{n}}{h} = n{x}^{n - 1}. \]
No
Example 3.2.3 Compute\n\n\[ \n\frac{d}{dx}{x}^{13} \n\]
Solution Applying the power rule, we write\n\n\[ \n\frac{d}{dx}{x}^{13} = {13}{x}^{12} \n\]
Yes
Example 3.2.4 Compute\n\n\[ \n\frac{d}{dx}\frac{1}{{x}^{4}} \n\]
Solution Applying the power rule, we write\n\n\[ \n\frac{d}{dx}\frac{1}{{x}^{4}} = \frac{d}{dx}{x}^{-4} = - 4{x}^{-5}. \n\]
Yes
Compute\n\n\[ \frac{d}{dx}\sqrt[5]{x}. \]
Solution Applying the power rule, we write\n\n\[ \frac{d}{dx}\sqrt[5]{x} = \frac{d}{dx}{x}^{1/5} = \frac{{x}^{-4/5}}{5} \]
Yes
Theorem 3.2.6 (The Sum Rule) If \( f\left( x\right) \) and \( g\left( x\right) \) are differentiable and \( c \) is a constant, then\n\n(a) \( \frac{d}{dx}\left( {f\left( x\right) + g\left( x\right) }\right) = {f}^{\prime }\left( x\right) + {g}^{\prime }\left( x\right) \)
Proof We will only prove part (a) above, the rest are similar. Write\n\n\[ \frac{d}{dx}\left( {f\left( x\right) + g\left( x\right) }\right) = \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{f\left( {x + h}\right) + g\left( {x + h}\right) - \left( {f\left( x\right) + g\left( x\right) }\right) }{h} \]\n\n\[ = \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{f\left( {x + h}\right) + g\left( {x + h}\right) - f\left( x\right) - g\left( x\right) }{h} \]\n\n\[ = \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{f\left( {x + h}\right) - f\left( x\right) + g\left( {x + h}\right) - g\left( x\right) }{h} \]\n\n\[ = \mathop{\lim }\limits_{{h \rightarrow 0}}\left( {\frac{f\left( {x + h}\right) - f\left( x\right) }{h} + \frac{g\left( {x + h}\right) - g\left( x\right) }{h}}\right) \]\n\n\[ = \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{f\left( {x + h}\right) - f\left( x\right) }{h} + \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{g\left( {x + h}\right) - g\left( x\right) }{h} \]\n\n\[ = {f}^{\prime }\left( x\right) + {g}^{\prime }\left( x\right) \]
Yes
Compute\n\n\[ \frac{d}{dx}\left( {{x}^{5} + \frac{1}{x}}\right) \]
Solution Write\n\n\[ \frac{d}{dx}\left( {{x}^{5} + \frac{1}{x}}\right) = \frac{d}{dx}{x}^{5} + \frac{d}{dx}{x}^{-1} \]\n\n\[ = 5{x}^{4} - {x}^{-2}\text{. } \]
Yes
Compute\n\n\[ \frac{d}{dx}\left( {\frac{3}{\sqrt[3]{x}} - 2\sqrt{x} + \frac{1}{{x}^{7}}}\right) \]
Solution Write\n\n\[ \frac{d}{dx}\left( {\frac{3}{\sqrt[3]{x}} - 2\sqrt{x} + \frac{1}{{x}^{7}}}\right) = 3\frac{d}{dx}{x}^{-1/3} - 2\frac{d}{dx}{x}^{1/2} + \frac{d}{dx}{x}^{-7} \]\n\n\[ = - {x}^{-4/3} - {x}^{-1/2} - 7{x}^{-8} \]
Yes
Theorem 3.2.9 (The Derivative of \( {e}^{x} \) )\n\n\[ \frac{d}{dx}{e}^{x} = {e}^{x} \]
Proof From the limit definition of the derivative, write\n\n\[ \frac{d}{dx}{e}^{x} = \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{{e}^{x + h} - {e}^{x}}{h} \]\n\n\[ = \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{{e}^{x}{e}^{h} - {e}^{x}}{h} \]\n\n\[ = \mathop{\lim }\limits_{{h \rightarrow 0}}{e}^{x}\frac{{e}^{h} - 1}{h} \]\n\n\[ = {e}^{x}\mathop{\lim }\limits_{{h \rightarrow 0}}\frac{{e}^{h} - 1}{h} \]\n\n\[ = {e}^{x}, \]\n\nwhere in the last step we used the fact that \( \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{{e}^{h} - 1}{h} = 1 \) ; this limit is one way of characterizing the number \( e \) . Hence \( {e}^{x} \) is its own derivative. In other words, the slope of the plot of \( {e}^{x} \) is the same as its height, or the same as its second coordinate: The function \( f\left( x\right) = {e}^{x} \) goes through the point \( \left( {a,{e}^{a}}\right) \) and has slope \( {e}^{a} \) there, no matter what \( a \) is.
Yes
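The key limit in the proof can be sampled numerically:

```python
from math import exp

# (e^h - 1)/h tends to 1 as h tends to 0, from either side.
for h in [0.1, 0.01, 0.001, -0.001]:
    print(h, (exp(h) - 1) / h)
```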
Example 3.2.10 Compute:\n\n\[ \n\frac{d}{dx}\left( {8\sqrt{x} + 7{e}^{x}}\right) \n\]
Solution Write:\n\n\[ \n\frac{d}{dx}\left( {8\sqrt{x} + 7{e}^{x}}\right) = 8\frac{d}{dx}{x}^{1/2} + 7\frac{d}{dx}{e}^{x} \n\]\n\n\[ \n= 4{x}^{-1/2} + 7{e}^{x}\text{.} \n\]
Example 4.1.2 Find all local maximum and minimum points for the function \( f\left( x\right) = {x}^{3} - x. \)
Solution Write

\[ \frac{d}{dx}f\left( x\right) = 3{x}^{2} - 1. \]

This is defined everywhere and is zero at \( x = \pm \sqrt{3}/3 \). Looking first at \( x = \sqrt{3}/3 \), we see that

\[ f\left( {\sqrt{3}/3}\right) = - 2\sqrt{3}/9. \]

Now we test one point on either side of \( x = \sqrt{3}/3 \), making sure that neither is farther away than the nearest other critical point; since \( \sqrt{3} < 3 \), we have \( \sqrt{3}/3 < 1 \), so we can use \( x = 0 \) and \( x = 1 \). Since

\[ f\left( 0\right) = 0 > - 2\sqrt{3}/9\;\text{ and }\;f\left( 1\right) = 0 > - 2\sqrt{3}/9, \]

there must be a local minimum at \( x = \sqrt{3}/3 \).

For \( x = - \sqrt{3}/3 \), we see that \( f\left( {-\sqrt{3}/3}\right) = 2\sqrt{3}/9 \). This time we can use \( x = 0 \) and \( x = - 1 \), and we find that \( f\left( {-1}\right) = f\left( 0\right) = 0 < 2\sqrt{3}/9 \), so there must be a local maximum at \( x = - \sqrt{3}/3 \), see Figure 4.4.
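The test-point comparisons above can be sketched in a few lines of Python (the points mirror the ones chosen in the solution; this is an illustration, not the book's method):

```python
import math

f = lambda x: x**3 - x
c = math.sqrt(3) / 3   # positive critical point, from f'(x) = 3x^2 - 1 = 0

# Local minimum at c: f is larger at test points on either side of c.
print(f(0) > f(c) and f(1) > f(c))      # True
# Local maximum at -c: f is smaller at test points on either side of -c.
print(f(-1) < f(-c) and f(0) < f(-c))   # True
```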
Consider the function\n\n\[ f\left( x\right) = \frac{{x}^{4}}{4} + \frac{{x}^{3}}{3} - {x}^{2} \]\n\nFind the intervals on which \( f\left( x\right) \) is increasing and decreasing and identify the local extrema of \( f\left( x\right) \).
Solution Start by computing

\[ \frac{d}{dx}f\left( x\right) = {x}^{3} + {x}^{2} - {2x}. \]

Now we need to find when this function is positive and when it is negative. To do this, solve

\[ {f}^{\prime }\left( x\right) = {x}^{3} + {x}^{2} - {2x} = 0. \]

Factor \( {f}^{\prime }\left( x\right) \):

\[ {f}^{\prime }\left( x\right) = {x}^{3} + {x}^{2} - {2x} \]

\[ = x\left( {{x}^{2} + x - 2}\right) \]

\[ = x\left( {x + 2}\right) \left( {x - 1}\right) \text{.} \]

So the critical points (where \( {f}^{\prime }\left( x\right) = 0 \)) are \( x = - 2, x = 0 \), and \( x = 1 \). Now we check one point in each interval determined by the critical points to see where \( {f}^{\prime }\left( x\right) \) is positive and where it is negative:

\[ {f}^{\prime }\left( {-3}\right) = - {12},\;{f}^{\prime }\left( {-1}\right) = 2,\;{f}^{\prime }\left( {0.5}\right) = - {0.625},\;{f}^{\prime }\left( 2\right) = 8. \]

From this we can make a sign table:

\[ \begin{array}{c|cccc} \text{interval} & \left( {-\infty , - 2}\right) & \left( {-2,0}\right) & \left( {0,1}\right) & \left( {1,\infty }\right) \\ \hline \text{sign of }{f}^{\prime }\left( x\right) & - & + & - & + \end{array} \]

Hence \( f\left( x\right) \) is increasing on \( \left( {-2,0}\right) \cup \left( {1,\infty }\right) \) and decreasing on \( \left( {-\infty , - 2}\right) \cup \left( {0,1}\right) \). Moreover, from the first derivative test, Theorem 4.2.1, there is a local maximum at \( x = 0 \) and local minima at \( x = - 2 \) and \( x = 1 \), see Figure 4.5.
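The sign analysis can be reproduced programmatically (an illustration, not the book's method): evaluate \( f' \) at one test point inside each interval cut out by the critical points.

```python
fprime = lambda x: x**3 + x**2 - 2 * x   # f'(x) from the solution

# One test point per interval determined by the critical points -2, 0, 1.
tests = [-3, -1, 0.5, 2]
signs = ['+' if fprime(t) > 0 else '-' for t in tests]
print(signs)  # ['-', '+', '-', '+']
```

The pattern \( -, +, -, + \) matches the conclusion: decreasing, increasing, decreasing, increasing.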
Example 4.3.2 Describe the concavity of \( f\left( x\right) = {x}^{3} - x \) .
Solution To start, compute the first and second derivative of \( f\left( x\right) \) with respect to \( x \):

\[ {f}^{\prime }\left( x\right) = 3{x}^{2} - 1\;\text{ and }\;{f}^{\prime \prime }\left( x\right) = {6x}. \]

Since \( {f}^{\prime \prime }\left( 0\right) = 0 \), there is potentially an inflection point at zero. Since \( {f}^{\prime \prime }\left( x\right) > 0 \) when \( x > 0 \) and \( {f}^{\prime \prime }\left( x\right) < 0 \) when \( x < 0 \), the concavity does change from down to up at zero, so there is an inflection point at \( x = 0 \). The curve is concave down for all \( x < 0 \) and concave up for all \( x > 0 \), see Figure 4.6.
Theorem 4.4.1 (Second Derivative Test) Suppose that \( {f}^{\prime \prime }\left( x\right) \) is continuous on an open interval and that \( {f}^{\prime }\left( a\right) = 0 \) for some value of \( a \) in that interval.
- If \( {f}^{\prime \prime }\left( a\right) < 0 \), then \( f\left( x\right) \) has a local maximum at \( a \) . \n- If \( {f}^{\prime \prime }\left( a\right) > 0 \), then \( f\left( x\right) \) has a local minimum at \( a \) . \n- If \( {f}^{\prime \prime }\left( a\right) = 0 \), then the test is inconclusive. In this case, \( f\left( x\right) \) may or may not have a local extremum at \( x = a \) .
Use the second derivative test, Theorem 4.4.1, to locate the local extrema of \( f\left( x\right) = \frac{{x}^{4}}{4} + \frac{{x}^{3}}{3} - {x}^{2} \).
Solution Start by computing\n\n\[ {f}^{\prime }\left( x\right) = {x}^{3} + {x}^{2} - {2x}\;\text{ and }\;{f}^{\prime \prime }\left( x\right) = 3{x}^{2} + {2x} - 2. \]\n\nUsing the same technique as used in the solution of Example 4.2.2, we find that\n\n\[ {f}^{\prime }\left( {-2}\right) = 0,\;{f}^{\prime }\left( 0\right) = 0,\;{f}^{\prime }\left( 1\right) = 0. \]\n\nNow we'll attempt to use the second derivative test, Theorem 4.4.1,\n\n\[ {f}^{\prime \prime }\left( {-2}\right) = 6,\;{f}^{\prime \prime }\left( 0\right) = - 2,\;{f}^{\prime \prime }\left( 1\right) = 3. \]\n\nHence we see that \( f\left( x\right) \) has a local minimum at \( x = - 2 \), a local maximum at \( x = 0 \), and a local minimum at \( x = 1 \), see Figure 4.7.
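The steps above can be sketched as a short program (illustrative only): confirm that each candidate is a critical point, then classify it by the sign of \( f'' \). Note the binary branch below is safe here only because \( f'' \) is nonzero at every critical point; when \( f''(a) = 0 \) the test is inconclusive.

```python
fprime = lambda x: x**3 + x**2 - 2 * x    # f'(x)
fsecond = lambda x: 3 * x**2 + 2 * x - 2  # f''(x)

for a in (-2, 0, 1):                      # the critical points
    assert fprime(a) == 0                 # confirm f'(a) = 0
    kind = 'local min' if fsecond(a) > 0 else 'local max'
    print(a, kind)
# -2 local min
# 0 local max
# 1 local min
```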