We use the trapezoid method to approximate the solution to

\[ y\left( 0\right) = 2,\quad {y}^{\prime }\left( t\right) = t - y,\quad 0 \leq t \leq 4 \]

using \( n = {16} \) intervals and compare the results with those obtained with Euler's method.
Step 0. \( {t}_{k} = 0 + k \times {0.25} \), for \( k = 0,1,\cdots ,{16} \), and \( {y}_{0} = 2 \).

Step 1. \( {\operatorname{slope}}_{1} = {t}_{0} - {y}_{0} = 0 - 2 = - 2 \).

Euler's projected \( {y}_{1} \) is \( {\widehat{y}}_{1} = {y}_{0} + h \times {\text{slope}}_{1} = 2 + {0.25} \times \left( {-2}\right) = {1.5} \).

Direction field slope at \( \left( {{t}_{1},{\widehat{y}}_{1}}\right) = \left( {{0.25},{1.5}}\right) \) is \( {t}_{1} - {\widehat{y}}_{1} = {0.25} - {1.5} = - {1.25} = {\text{slope}}_{2} \).

slope \( = \left( {{\text{slope}}_{1} + {\text{slope}}_{2}}\right) /2 = \left( {-2 + \left( { - {1.25}}\right) }\right) /2 = - {1.625} \).

Trapezoid projected \( {y}_{1} = {y}_{0} + h \times \) slope:

\[ {y}_{1} = {2.0} + {0.25} \times \left( {-{1.625}}\right) = {1.5938} \]
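The first step above can be sketched in code. A minimal Python sketch, with function names of our own choosing (not from the text):

```python
def f(t, y):
    # direction field: y' = t - y
    return t - y

def euler_step(t, y, h):
    # one Euler step
    return y + h * f(t, y)

def trapezoid_step(t, y, h):
    # one trapezoid (Heun) step: average the slope at (t, y)
    # with the slope at Euler's projected point
    slope1 = f(t, y)
    y_hat = y + h * slope1          # Euler's projected value
    slope2 = f(t + h, y_hat)        # slope at the projected point
    return y + h * (slope1 + slope2) / 2

h, y0 = 0.25, 2.0
print(euler_step(0.0, y0, h))       # 1.5
print(trapezoid_step(0.0, y0, h))   # 1.59375
```

Repeating either step 16 times advances the approximation across \( 0 \leq t \leq 4 \).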
Yes
Theorem 17.6.1 Existence and Uniqueness of Solutions. If \( f\left( {t, y}\right) \) and \( {f}_{2}\left( {t, y}\right) = \frac{\partial }{\partial y}f\left( {t, y}\right) \) are continuous on a rectangle

\[ a - d \leq t \leq a + d,\quad {y}_{a} - d \leq y \leq {y}_{a} + d,\quad d > 0, \]

then on an interval \( a - e \leq t \leq a + e \), \( 0 < e < d \), there is a unique solution to

\[ y\left( a\right) = {y}_{a},\quad {y}^{\prime }\left( t\right) = f\left( {t, y}\right) ,\quad a \leq t \leq b. \]
Under the hypothesis that \( {f}_{2} \) is continuous on the rectangle, it follows that \( {f}_{2} \) is bounded on the rectangle, and the proof of Theorem 17.6.1 hinges on this fact.
Yes
Example 17.6.1 Although the proof of Theorem 17.6.1 is beyond our scope, existence and uniqueness of solutions to a large number of differential equations follow rather easily from simple antiderivative formulas and the Parallel Graph Theorem. For example, we know by simple substitution that the growth equation \( \left( {k > 0}\right) \),

\[ P\left( 0\right) = {P}_{0} > 0,\;{P}^{\prime }\left( t\right) = {kP}\left( t\right) \;\text{ has a solution }\;P\left( t\right) = {P}_{0}{e}^{kt}. \]

Might there be another solution, say \( Q\left( t\right) \)? If so, then

\[ Q\left( 0\right) = {P}_{0},\;{Q}^{\prime }\left( t\right) = {kQ}\left( t\right) . \]
Then

\[ {Q}^{\prime }\left( t\right) = {kQ}\left( t\right) \]  Hypothesis

\[ {e}^{-{kt}}{Q}^{\prime }\left( t\right) - k{e}^{-{kt}}Q\left( t\right) = 0 \]  Blue Sky

\[ {\left\lbrack {e}^{-{kt}}Q\left( t\right) \right\rbrack }^{\prime } = {\left\lbrack 1\right\rbrack }^{\prime } \]  Derivative formulas

\[ {\int }_{0}^{t}{\left\lbrack {e}^{-{k\tau }}Q\left( \tau \right) \right\rbrack }^{\prime }{d\tau } = {\int }_{0}^{t}{\left\lbrack 1\right\rbrack }^{\prime }{d\tau } \]  Identity in symbols

\[ {e}^{-{kt}}Q\left( t\right) + {C}_{1} = {C}_{2} \]  Fundamental Theorem of Calculus II

\[ Q\left( t\right) = C{e}^{kt} \]  \( C = {C}_{2} - {C}_{1} \)

\[ {P}_{0} = C{e}^{k \times 0} = C \]  \( Q\left( 0\right) = {P}_{0} \)

\[ Q\left( t\right) = {P}_{0}{e}^{kt} \]

Thus \( Q\left( t\right) \equiv P\left( t\right) \) and there is one and only one solution to \( P\left( 0\right) = {P}_{0} > 0,\;{P}^{\prime }\left( t\right) = {kP}\left( t\right) ,\;k > 0 \).
Yes
Theorem 17.7.1

Step 1. Define \( u\left( t\right) = {\int }_{a}^{t}p\left( s\right) {ds} \).

Step 2. Define \( v\left( t\right) = {\int }_{a}^{t}{e}^{u\left( s\right) } \times q\left( s\right) {ds} \).

Then the solution to Equation 17.19 is
\[ y\left( t\right) = v\left( t\right) {e}^{-u\left( t\right) } + {y}_{a}{e}^{-u\left( t\right) } \]
Yes
Consider a case in which \( p\left( t\right) = 3 \) and \( q\left( t\right) = 2 \) are constant. Solve

\[ y\left( 0\right) = 5,\quad {y}^{\prime }\left( t\right) + {3y}\left( t\right) = 2,\quad a = 0,\quad {y}_{a} = 5,\quad p\left( t\right) = 3,\quad q\left( t\right) = 2 \]
Define

\[ u\left( t\right) = {\int }_{0}^{t}3\,{ds} = {3t} \]

\[ v\left( t\right) = {\int }_{0}^{t}{e}^{3s} \times 2\,{ds} = \frac{2}{3}\left( {{e}^{3t} - 1}\right) . \]

Then

\[ y\left( t\right) = \frac{2}{3}\left( {{e}^{3t} - 1}\right) \times {e}^{-{3t}} + 5{e}^{-{3t}} = \frac{2}{3} + \frac{13}{3}{e}^{-{3t}} \]

To check that \( y\left( t\right) \) solves \( y\left( 0\right) = 5,\;{y}^{\prime }\left( t\right) + {3y}\left( t\right) = 2 \), we compute

\[ y\left( 0\right) = \frac{2}{3} + \frac{13}{3}{e}^{-3 \times 0} = \frac{2}{3} + \frac{13}{3} = 5. \]

Checks. Also

\[ {y}^{\prime }\left( t\right) + {3y}\left( t\right) = {\left\lbrack \frac{2}{3} + \frac{13}{3}{e}^{-{3t}}\right\rbrack }^{\prime } + 3\left( {\frac{2}{3} + \frac{13}{3}{e}^{-{3t}}}\right) = 0 + \frac{13}{3}{e}^{-{3t}} \times \left( {-3}\right) + 2 + {13}{e}^{-{3t}} = 2 \]

Checks.
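The same check can be done numerically. A small Python sketch (the centered-difference derivative is our addition, not the text's method):

```python
import math

def y(t):
    # candidate solution of y(0) = 5, y'(t) + 3 y(t) = 2
    return 2/3 + (13/3) * math.exp(-3*t)

# initial condition
print(y(0.0))  # 5.0

# check y'(t) + 3 y(t) = 2 with a centered difference at an arbitrary t
t, dt = 0.7, 1e-6
yprime = (y(t + dt) - y(t - dt)) / (2 * dt)
residual = abs(yprime + 3 * y(t) - 2)
print(residual)  # close to 0
```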
Yes
The formulas for \( u\left( t\right) \) and \( v\left( t\right) \) are explicit, but the integrals may not be computable in terms of familiar functions. In the equation

\[ y\left( 0\right) = 5,\quad {y}^{\prime }\left( t\right) - {2ty}\left( t\right) = 1, \]

\[ p\left( t\right) = - {2t},\quad u\left( t\right) = {\int }_{0}^{t} - {2s}\,{ds} = - {t}^{2},\quad \text{ and }\quad v\left( t\right) = {\int }_{0}^{t}{e}^{-{s}^{2}}{ds}. \]
There is no formula\( {}^{9} \) for \( v\left( t\right) \) in familiar terms. It can be numerically approximated as you did in Chapter 11, The Integral, of Volume I, and it is an important integral in statistics, but there is no expression for \( {\int }_{0}^{t}{e}^{-{s}^{2}}{ds} \) in familiar terms.
No
Problem 1. Solve \( y\left( 1\right) = 1,\;{y}^{\prime }\left( t\right) + \frac{1}{t}y\left( t\right) = {e}^{t} \) .
\[ a = 1,\;{y}_{1} = 1,\;p\left( t\right) = \frac{1}{t},\;q\left( t\right) = {e}^{t} \]

\[ u\left( t\right) = {\int }_{1}^{t}p\left( s\right) {ds} = {\int }_{1}^{t}\frac{1}{s}{ds} = {\left. \ln s\right| }_{s = 1}^{t} = \ln t - 0 = \ln t \]

\[ v\left( t\right) = {\int }_{1}^{t}{e}^{u\left( s\right) }q\left( s\right) {ds} = {\int }_{1}^{t}{e}^{\ln s}{e}^{s}{ds} = {\int }_{1}^{t}s{e}^{s}{ds} = {\left. s{e}^{s} - {e}^{s}\right| }_{1}^{t} = t{e}^{t} - {e}^{t} \]

\[ y = v\left( t\right) {e}^{-u\left( t\right) } + {y}_{a}{e}^{-u\left( t\right) } = \left( {t{e}^{t} - {e}^{t}}\right) {e}^{-\ln t} + 1 \cdot {e}^{-\ln t} = {e}^{t} - \frac{1}{t}{e}^{t} + \frac{1}{t} \]
Yes
Problem 2. Solve \( y\left( 0\right) = 2,\;{y}^{\prime }\left( t\right) + {2y}\left( t\right) = \sin {3t} \) .
\[ a = 0,\;{y}_{0} = 2,\;p\left( t\right) = 2,\;q\left( t\right) = \sin {3t} \]

\[ u\left( t\right) = {\int }_{0}^{t}p\left( s\right) {ds} = {\int }_{0}^{t}{2ds} = {\left. 2s\right| }_{s = 0}^{t} = {2t} - 0 = {2t} \]

\[ v\left( t\right) = {\int }_{0}^{t}{e}^{u\left( s\right) }q\left( s\right) {ds} = {\int }_{0}^{t}{e}^{2s}\sin {3s}\,{ds} \]

\[ = {\left. \frac{2}{4 + 9}{e}^{2s}\sin 3s - \frac{3}{4 + 9}{e}^{2s}\cos 3s\right| }_{0}^{t} \]

\[ = \frac{2}{13}{e}^{2t}\sin {3t} - \frac{3}{13}{e}^{2t}\cos {3t} - \left( {\frac{2}{13}{e}^{0}\sin 0 - \frac{3}{13}{e}^{0}\cos 0}\right) \]

\[ = \frac{2}{13}{e}^{2t}\sin {3t} - \frac{3}{13}{e}^{2t}\cos {3t} + \frac{3}{13} \]

\[ y = v\left( t\right) {e}^{-u\left( t\right) } + {y}_{a}{e}^{-u\left( t\right) } \]

\[ = \left( {\frac{2}{13}{e}^{2t}\sin {3t} - \frac{3}{13}{e}^{2t}\cos {3t} + \frac{3}{13}}\right) {e}^{-{2t}} + 2{e}^{-{2t}} \]

\[ = \frac{2}{13}\sin {3t} - \frac{3}{13}\cos {3t} + \frac{3}{13}{e}^{-{2t}} + 2{e}^{-{2t}} \]
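As with Problem 1, the answer can be sanity-checked numerically. A Python sketch (the finite-difference check is ours):

```python
import math

def y(t):
    # candidate solution of y(0) = 2, y'(t) + 2 y(t) = sin 3t
    return ((2/13) * math.sin(3*t) - (3/13) * math.cos(3*t)
            + (3/13) * math.exp(-2*t) + 2 * math.exp(-2*t))

# initial condition
print(y(0.0))  # 2.0

# check y'(t) + 2 y(t) = sin 3t at an arbitrary t
t, dt = 0.9, 1e-6
yprime = (y(t + dt) - y(t - dt)) / (2 * dt)
residual = abs(yprime + 2 * y(t) - math.sin(3*t))
print(residual)  # close to 0
```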
Yes
Consider

\[ {y}^{\prime }\left( t\right) = {2t} \times y \]
Then

\[ \frac{1}{y}{y}^{\prime } = {2t} \]

SV Step 1, LHS. Find \( H\left( y\right) \) such that

\[ \frac{{dH}\left( y\right) }{dy} = \frac{1}{y}.\;\text{ Choose }\;H\left( y\right) = \ln y. \]

SV Step 2, RHS. Find \( G\left( t\right) \) such that \( \frac{d}{dt}G\left( t\right) = {2t} \). Choose \( G\left( t\right) = {t}^{2} \).

Then

\[ \frac{d}{dt}\ln y\left( t\right) = \frac{1}{y\left( t\right) }\frac{{dy}\left( t\right) }{dt} = \frac{1}{y}{y}^{\prime } = {2t} = \frac{d}{dt}G\left( t\right) = \frac{d}{dt}{t}^{2}. \]

By the Parallel Graph Theorem there is a number, \( {C}_{1} \), such that

\[ \ln y\left( t\right) = {t}^{2} + {C}_{1} \]

This implicit expression for \( y \) can be solved explicitly:

\[ y\left( t\right) = {e}^{{t}^{2} + {C}_{1}} = {e}^{{t}^{2}} \times {e}^{{C}_{1}} = C{e}^{{t}^{2}} \]
Yes
Of the following six equations, the variables can be separated in only two.

\[ {y}^{\prime }\left( t\right) = t + y\;\quad {y}^{\prime } = {e}^{t + y} \]

\[ {y}^{\prime }\left( t\right) = \ln \left( {t + y}\right) \;\quad {y}^{\prime } = {e}^{t \times y} \]

\[ {y}^{\prime }\left( t\right) = \ln \left( {t \times y}\right) \;\quad {y}^{\prime } = \ln \left( {t}^{y}\right) \]
The two equations in which variables are separable are shown below.

\[ {y}^{\prime } = {e}^{t + y} = {e}^{t} \times {e}^{y}\;\quad {y}^{\prime } = \ln {t}^{y} = y \times \ln t \]

\[ {e}^{-y} \times {y}^{\prime } = {e}^{t}\;\quad \frac{1}{y}{y}^{\prime } = \ln t \]

\[ {\left\lbrack -{e}^{-y}\right\rbrack }^{\prime } = {\left\lbrack {e}^{t}\right\rbrack }^{\prime }\;\quad {\left\lbrack \ln y\right\rbrack }^{\prime } = {\left\lbrack t\ln t - t\right\rbrack }^{\prime } \]

\[ - {e}^{-y} = {e}^{t} + C\;\quad \ln y = t\ln t - t + {C}_{1} \]

\[ y = - \ln \left( {-{e}^{t} - C}\right) \;\quad y = C \times {t}^{t} \times {e}^{-t} \]
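Both answers can be checked against their differential equations by finite differences. A Python sketch; the constants \( C = -10 \) and \( {C}_{1} \)-derived factor \( 3 \) are arbitrary choices of ours (any \( C \) with \( -{e}^{t} - C > 0 \) works for the first):

```python
import math

C = -10.0  # assumed constant; need -e^t - C > 0 near the test point
def y1(t):
    # candidate solution of y' = e^(t+y)
    return -math.log(-math.exp(t) - C)

def y2(t, C2=3.0):
    # candidate solution of y' = y * ln t  (t > 0)
    return C2 * t**t * math.exp(-t)

t, dt = 0.5, 1e-6
d1 = (y1(t + dt) - y1(t - dt)) / (2 * dt)
err1 = abs(d1 - math.exp(t + y1(t)))
print(err1)  # close to 0

d2 = (y2(t + dt) - y2(t - dt)) / (2 * dt)
err2 = abs(d2 - y2(t) * math.log(t))
print(err2)  # close to 0
```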
Yes
Example 17.8.3 The variables can be separated in every autonomous differential equation:

\[ {y}^{\prime } = f\left( y\right) \;\quad \frac{1}{f\left( y\right) }{y}^{\prime } = 1 \]
To find an implicit solution to any autonomous differential equation, only the problem

SV Step 1. Find \( F\left( y\right) \) such that \( {F}^{\prime }\left( y\right) = \frac{1}{f\left( y\right) } \)

requires attention.

SV Step 2 is easy: Find \( G\left( t\right) \) such that \( {G}^{\prime }\left( t\right) = 1 \). Answer: \( G\left( t\right) = t \).
No
To solve the autonomous equation, \( {y}^{\prime } = - {y}^{2} \)
\[ \frac{1}{{y}^{2}}{y}^{\prime } = - 1,\quad \frac{{dH}\left( y\right) }{dy} = \frac{1}{{y}^{2}},\;\text{ choose }\;H\left( y\right) = - \frac{1}{y} \]

\[ \frac{d}{dt}\left\lbrack {-\frac{1}{y\left( t\right) }}\right\rbrack = \frac{d}{dt}\left\lbrack {-t}\right\rbrack \]

\[ - \frac{1}{y\left( t\right) } = - t + C \]

\[ y\left( t\right) = \frac{1}{t - C} \]

If also an initial condition is given, for example \( y\left( 0\right) = {0.5} \), we write

\[ y\left( 0\right) = \frac{1}{0 - C},\;{0.5} = \frac{1}{-C},\;C = - 2,\;y\left( t\right) = \frac{1}{t + 2}. \]
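A quick numerical confirmation of this solution (the centered-difference check is our addition):

```python
def y(t):
    # candidate solution of y' = -y^2 with y(0) = 0.5
    return 1.0 / (t + 2.0)

# initial condition
print(y(0.0))  # 0.5

# check y'(t) = -y(t)^2 at an arbitrary t
t, dt = 1.0, 1e-6
yprime = (y(t + dt) - y(t - dt)) / (2 * dt)
residual = abs(yprime + y(t)**2)
print(residual)  # close to 0
```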
Yes
\[ \frac{{3x} + 4}{\left( {x + 3}\right) \times \left( {x - 2}\right) } = \frac{A}{x + 3} + \frac{B}{x - 2} \]
To find \( A \) and \( B \), multiply by \( \left( {x + 3}\right) \times \left( {x - 2}\right) \) and get

\[ {3x} + 4 = A \times \left( {x - 2}\right) + B \times \left( {x + 3}\right) \]

Then substitute

\[ x = - 3:\quad 3\left( {-3}\right) + 4 = A \times \left( {-3 - 2}\right) + B \times \left( {-3 + 3}\right) ,\quad - 5 = A \times \left( {-5}\right) ,\quad A = 1 \]

\[ x = 2:\quad 3\left( 2\right) + 4 = A \times \left( {2 - 2}\right) + B \times \left( {2 + 3}\right) ,\quad {10} = B \times \left( 5\right) ,\quad B = 2 \]

Thus \( \;\frac{{3x} + 4}{\left( {x + 3}\right) \times \left( {x - 2}\right) } = \frac{1}{x + 3} + \frac{2}{x - 2} \)
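The substitution trick above (the "cover-up" evaluations at the roots of the denominator) can be carried out with exact rational arithmetic. A Python sketch using the standard `fractions` module:

```python
from fractions import Fraction

# cover-up: A is (3x+4)/(x-2) at x = -3, B is (3x+4)/(x+3) at x = 2
A = Fraction(3*(-3) + 4, -3 - 2)
B = Fraction(3*2 + 4, 2 + 3)
print(A, B)  # 1 2

# verify the identity at a point other than -3 and 2
x = Fraction(7)
lhs = (3*x + 4) / ((x + 3) * (x - 2))
print(lhs == A/(x + 3) + B/(x - 2))  # True
```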
No
\[ \frac{3{x}^{2} - {5x} + 1}{\left( {x + 1}\right) \times {\left( x - 2\right) }^{2}} = \frac{A}{x + 1} + \frac{B}{x - 2} + \frac{C}{{\left( x - 2\right) }^{2}} \]
To find \( A, B \) and \( C \), multiply by \( \left( {x + 1}\right) \times {\left( x - 2\right) }^{2} \) and get

\[ 3{x}^{2} - {5x} + 1 = A \times {\left( x - 2\right) }^{2} + B \times \left( {x - 2}\right) \times \left( {x + 1}\right) + C \times \left( {x + 1}\right) \]

Then substitute

\[ x = - 1:\quad 3{\left( -1\right) }^{2} - 5\left( {-1}\right) + 1 = A \times {\left( -1 - 2\right) }^{2} + B \times \left( {-1 - 2}\right) \left( {-1 + 1}\right) + C \times \left( {-1 + 1}\right) ,\quad 9 = A \times \left( 9\right) ,\quad A = 1 \]

\[ x = 2:\quad 3{\left( 2\right) }^{2} - 5 \times 2 + 1 = A \times {\left( 2 - 2\right) }^{2} + B \times \left( {2 - 2}\right) \left( {2 + 1}\right) + C \times \left( {2 + 1}\right) ,\quad 3 = C \times \left( 3\right) ,\quad C = 1 \]

\[ x = 0:\quad 3{\left( 0\right) }^{2} - 5 \times 0 + 1 = A \times {\left( 0 - 2\right) }^{2} + B \times \left( {0 - 2}\right) \left( {0 + 1}\right) + C \times \left( {0 + 1}\right) ,\quad 1 = {4A} - {2B} + C,\quad 1 = 4 \times 1 - {2B} + 1,\quad B = 2 \]

\[ \text{Thus}\;\frac{3{x}^{2} - {5x} + 1}{\left( {x + 1}\right) \times {\left( x - 2\right) }^{2}} = \frac{1}{x + 1} + \frac{2}{x - 2} + \frac{1}{{\left( x - 2\right) }^{2}} \]
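The repeated-root case follows the same pattern in exact arithmetic. A Python sketch with the `fractions` module (the substitution at \( x = 0 \) recovers \( B \), just as in the text):

```python
from fractions import Fraction

A = Fraction(3*(-1)**2 - 5*(-1) + 1, (-1 - 2)**2)  # cover-up at x = -1
C = Fraction(3*2**2 - 5*2 + 1, 2 + 1)              # cover-up at x = 2
B = (4*A + C - 1) / 2                              # from substituting x = 0
print(A, B, C)  # 1 2 1

# verify the identity at a point other than -1 and 2
x = Fraction(5)
lhs = (3*x**2 - 5*x + 1) / ((x + 1) * (x - 2)**2)
print(lhs == A/(x + 1) + B/(x - 2) + C/(x - 2)**2)  # True
```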
Yes
For Candidate 3 the method of partial fractions converts

\[ {u}^{\prime } = u \times \left( {u - \epsilon }\right) \times \left( {1 - u}\right) ,\quad \frac{{u}^{\prime }}{u \times \left( {u - \epsilon }\right) \times \left( {1 - u}\right) } = 1 \]

into

\[ \left( {\frac{-1/\epsilon }{u} + \frac{1/\left( {\epsilon \left( {1 - \epsilon }\right) }\right) }{u - \epsilon } + \frac{1/\left( {1 - \epsilon }\right) }{1 - u}}\right) {u}^{\prime } = 1 \]
which can be integrated to obtain

\[ - \frac{1}{\epsilon }\ln \left| {u\left( t\right) }\right| + \frac{1}{\epsilon \left( {1 - \epsilon }\right) }\ln \left| {u\left( t\right) - \epsilon }\right| - \frac{1}{1 - \epsilon }\ln \left| {1 - u\left( t\right) }\right| = t + C \]
Yes
Suppose one gram of carbon from a deer bone recently found among American Indian artifacts emits \( 7\;{\beta }^{ - } \) particles per minute. How old is the bone?
Solution. Let \( {t}_{0} \) be the time at which the deer died. Assume that \( E\left( {t}_{0}\right) = E\left( 0\right) = {15.3} \). Then

\[ {\left. {E}_{{t}_{0}}\left( t\right) \right| }_{t = 0} = {E}_{{t}_{0}}\left( 0\right) = E\left( {t}_{0}\right) {e}^{-\frac{\ln 2}{5730}\left( {0 - {t}_{0}}\right) } \]

\[ 7 = {15.3}\,{e}^{-\frac{\ln 2}{5730}\left( {-{t}_{0}}\right) } \]

\[ \ln \left( \frac{7}{15.3}\right) = \frac{\ln 2}{5730}{t}_{0} \]

\[ {t}_{0} = - {6464} \]

Thus the bone is 6464 years old.
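The arithmetic in the last two lines can be reproduced directly. A Python sketch (variable names are ours):

```python
import math

E0 = 15.3         # beta emissions per minute per gram, living tissue
E = 7.0           # observed emissions per minute per gram
half_life = 5730  # half-life of carbon-14, in years

# solve ln(E/E0) = (ln 2 / 5730) * t0 for t0
t0 = half_life * math.log(E / E0) / math.log(2)
print(round(t0))  # -6464, i.e. the deer died about 6464 years ago
```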
Yes
Example 17.9.2 Data from Reaction 1 of Exercise Table 17.9.9 are plotted in Figure 17.20 and it is clear that the data are from a second order reaction, and \( m = 2 \). The reaction is thus

\[ {2A} + {nB} \rightarrow {A}_{2}{B}_{n} \]

for some \( n \) (found below).
Figure 17.20: Graphs of Reaction 1 data. A. \( \ln \)(Concentration) vs \( t \). B. 1/Concentration vs \( t \). C. \( 1/{\left( \text{Concentration}\right) }^{2} \) vs \( t \). B is linear and the line \( y = {99.96} + {4t} \) fits the data. Therefore, the reaction is second order, \( \widehat{K} = 4 \), \( \frac{1}{y\left( t\right) } = \widehat{K}t + \frac{1}{{y}_{0}} \), and \( y\left( t\right) = 1/\left( {{4t} + {99.96}}\right) \).
Yes
For Reaction 1 with \( \left\lbrack B\right\rbrack = {0.2}\mathrm{\;{mol}} \), we found that the reaction was second order, \( m = 2 \), and \( \widehat{K} = 4 \). The data for Reaction 1 with \( \left\lbrack B\right\rbrack = {0.4}\mathrm{\;{mol}} \) are plotted in Figure 17.21 as for a second order reaction, \( 1/ \)Concentration(A) vs time, and it is found that \( y = {104} + {31.9t} \) fits the data. Now \( {\widehat{K}}_{\left\lbrack B\right\rbrack = {0.4}} = {32} \), which is 8 times \( {\widehat{K}}_{\left\lbrack B\right\rbrack = {0.2}} \). Therefore, \( {2}^{n} = 8 \), \( n = 3 \), and the reaction is third order in \( B \). The reaction is thus
\[ {2A} + {3B} \rightarrow {A}_{2}{B}_{3} \]
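The order in \( B \) is recovered from the ratio of the fitted rate constants. A one-line Python sketch of that inference:

```python
import math

K_02 = 4.0   # rate constant fit with [B] = 0.2
K_04 = 32.0  # rate constant fit with [B] = 0.4

# doubling [B] multiplies the observed rate constant by 2**n
n = math.log2(K_04 / K_02)
print(n)  # 3.0
```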
Yes
Theorem 18.1.1 Superposition. Suppose \( {y}_{1}\left( t\right) \) and \( {y}_{2}\left( t\right) \) are two solutions to the homogeneous equation \( {y}^{\prime \prime }\left( t\right) + p{y}^{\prime }\left( t\right) + {qy}\left( t\right) = 0 \) and \( {C}_{1} \) and \( {C}_{2} \) are numbers. Then

\[ y\left( t\right) = {C}_{1}{y}_{1}\left( t\right) + {C}_{2}{y}_{2}\left( t\right) \]

is a solution to \( {y}^{\prime \prime }\left( t\right) + p{y}^{\prime }\left( t\right) + {qy}\left( t\right) = 0 \).
Proof. The proof of Theorem 18.1.1 is Exercise 18.1.1.
No
Theorem 18.1.2 If \( {y}_{p,1}\left( t\right) \) solves \( {y}^{\prime \prime }\left( t\right) + p{y}^{\prime }\left( t\right) + {qy}\left( t\right) = {f}_{1}\left( t\right) \) and \( {y}_{p,2}\left( t\right) \) solves \( {y}^{\prime \prime }\left( t\right) + p{y}^{\prime }\left( t\right) + {qy}\left( t\right) = {f}_{2}\left( t\right) \), then for any numbers \( A \) and \( B \),

\[ A{y}_{p,1}\left( t\right) + B{y}_{p,2}\left( t\right) \text{ solves } {y}^{\prime \prime }\left( t\right) + p{y}^{\prime }\left( t\right) + {qy}\left( t\right) = A{f}_{1}\left( t\right) + B{f}_{2}\left( t\right) . \]
Proof. Exercise 18.1.4.
No
Consider the equations

\[ x\left( 0\right) = {x}_{0},\quad {x}^{\prime }\left( t\right) = y \]

(18.25)

\[ y\left( 0\right) = {y}_{0},\quad {y}^{\prime }\left( t\right) = {2xy} \]
First observe that every point with \( y \)-coordinate \( 0 \), \( \left( {{x}_{e},0}\right) \), is an equilibrium point. Also observe the direction field in Figure 18.4A; the direction of motion in the four quarters of the plane is determined by whether \( {x}^{\prime } \) and \( {y}^{\prime } \) are positive or negative. Now use Leibniz notation; write

\[ \frac{{y}^{\prime }\left( t\right) }{{x}^{\prime }\left( t\right) } = \frac{\frac{dy}{dt}}{\frac{dx}{dt}}\;\overset{\text{ !! }}{ = }\;\frac{dy}{dx} = \frac{2xy}{y} = {2x} \]

(18.26)

The equation \( {dy}/{dx} = {2x} \) has an easy solution,

\[ y = {x}^{2} + C,\;y = {x}^{2} + {y}_{0} - {x}_{0}^{2}. \]

From this we conclude that every solution curve is part of a parabola, and we have drawn three such parabolas in Figure 18.4B.
Yes
It appears from the direction field in Figure 18.6 that the origin, (0,0), is an asymptotically stable equilibrium of

\[ x\left( 0\right) = {x}_{0},\quad {x}^{\prime } = - x \]

(18.27)

\[ y\left( 0\right) = {y}_{0},\quad {y}^{\prime } = - y. \]
The solution to Equations 18.27 is

\[ x\left( t\right) = {x}_{0}{e}^{-t},\;y\left( t\right) = {y}_{0}{e}^{-t} \]

The origin is stable: Suppose \( \epsilon > 0 \); choose \( \delta = \epsilon \). If for some \( {t}_{1} \)

\[ \sqrt{{\left( x\left( {t}_{1}\right) - 0\right) }^{2} + {\left( y\left( {t}_{1}\right) - 0\right) }^{2}} = \sqrt{{x}_{0}^{2} + {y}_{0}^{2}}\,{e}^{-{t}_{1}}\; < \;\delta , \]

then for all \( t > {t}_{1} \),

\[ \sqrt{{\left( x\left( t\right) - 0\right) }^{2} + {\left( y\left( t\right) - 0\right) }^{2}} = \sqrt{{x}_{0}^{2} + {y}_{0}^{2}}\,{e}^{-t}\; < \;\sqrt{{x}_{0}^{2} + {y}_{0}^{2}}\,{e}^{-{t}_{1}}\; < \delta . \]

Furthermore,

\[ \mathop{\lim }\limits_{{t \rightarrow \infty }}\sqrt{{\left( x\left( t\right) - 0\right) }^{2} + {\left( y\left( t\right) - 0\right) }^{2}} = \sqrt{{x}_{0}^{2} + {y}_{0}^{2}}\mathop{\lim }\limits_{{t \rightarrow \infty }}{e}^{-t} = 0, \]

so that \( \left( {0,0}\right) \) is asymptotically stable.
Yes
Theorem 18.3.1 Asymptotic Stability of a pair of constant coefficient homogeneous differential equations. The origin \( \left( {0,0}\right) \) is an asymptotically stable equilibrium point of

\[ {x}^{\prime }\left( t\right) = {a}_{1,1}x\left( t\right) + {a}_{1,2}y\left( t\right) \]

(18.37)

\[ {y}^{\prime }\left( t\right) = {a}_{2,1}x\left( t\right) + {a}_{2,2}y\left( t\right) \]

if the roots of the characteristic equation

\[ {r}^{2} - \left( {{a}_{1,1} + {a}_{2,2}}\right) r + \left( {{a}_{1,1}{a}_{2,2} - {a}_{1,2}{a}_{2,1}}\right) = 0 \]

satisfy one of the three conditions:

a. The roots are real, distinct, and negative.

b. The root is a repeated root and is negative.

c. The roots are complex, \( a + {bi} \) and \( a - {bi} \), and \( a \) is negative.

Under conditions a. and b. the origin is called an asymptotically stable node, and under condition c. the origin is called an asymptotically stable spiral point.
Proof. Suppose the roots are real, distinct, and negative. We first show that \( \left( {0,0}\right) \) is a stable equilibrium. From Equations 18.34 we observe that

\[ \left| {x\left( t\right) }\right| = \left| {{C}_{1}{e}^{{r}_{1}t} + {C}_{2}{e}^{{r}_{2}t}}\right| \]

\[ \leq \left| {C}_{1}\right| + \left| {C}_{2}\right| \]

\[ = \left| \frac{{a}_{1,1}{x}_{0} + {a}_{1,2}{y}_{0} - {r}_{2}{x}_{0}}{{r}_{1} - {r}_{2}}\right| + \left| \frac{{r}_{1}{x}_{0} - {a}_{1,1}{x}_{0} - {a}_{1,2}{y}_{0}}{{r}_{1} - {r}_{2}}\right| \]

\[ \leq \left( {\frac{\left| {a}_{1,1}\right| + \left| {a}_{1,2}\right| + \left| {r}_{2}\right| }{\left| {r}_{1} - {r}_{2}\right| } + \frac{\left| {r}_{1}\right| + \left| {a}_{1,1}\right| + \left| {a}_{1,2}\right| }{\left| {r}_{1} - {r}_{2}\right| }}\right) \times \max \left( {\left| {x}_{0}\right| ,\left| {y}_{0}\right| }\right) \]

\[ \leq {K}_{x}\sqrt{{x}_{0}^{2} + {y}_{0}^{2}} \]

Similarly there is a constant \( {K}_{y} \) that depends only on the coefficients \( {a}_{1,1},\cdots ,{a}_{2,2} \) such that \( \left| {y\left( t\right) }\right| \leq {K}_{y}\sqrt{{x}_{0}^{2} + {y}_{0}^{2}} \). Therefore

\[ \sqrt{{\left( x\left( t\right) \right) }^{2} + {\left( y\left( t\right) \right) }^{2}} \leq \sqrt{{K}_{x}^{2} + {K}_{y}^{2}}\sqrt{{x}_{0}^{2} + {y}_{0}^{2}} = K\sqrt{{x}_{0}^{2} + {y}_{0}^{2}} \]

Suppose \( \epsilon \) is to be a bound on \( \sqrt{{\left( x\left( t\right) \right) }^{2} + {\left( y\left( t\right) \right) }^{2}} \). Let \( \delta = \epsilon /K \). Then if \( \sqrt{{x}_{0}^{2} + {y}_{0}^{2}} < \delta \),

\[ \sqrt{{\left( x\left( t\right) \right) }^{2} + {\left( y\left( t\right) \right) }^{2}} \leq K\sqrt{{x}_{0}^{2} + {y}_{0}^{2}} < K \times \epsilon /K = \epsilon \]

Therefore, \( \left( {0,0}\right) \) is stable.

From \( x\left( t\right) = {C}_{1}{e}^{{r}_{1}t} + {C}_{2}{e}^{{r}_{2}t} \) it is immediate that \( \mathop{\lim }\limits_{{t \rightarrow \infty }}x\left( t\right) = 0 \) because \( {r}_{1} \) and \( {r}_{2} \) are negative.
Similarly \( \mathop{\lim }\limits_{{t \rightarrow \infty }}y\left( t\right) = 0 \), and it follows that \( \left( {0,0}\right) \) is an asymptotically stable equilibrium of Equations 18.34.

The arguments for a repeated root and complex roots are similar and are omitted. End of proof.
Yes
Explore 18.4.1 Show that \( x \equiv 5, y \equiv {12} \) is a solution to\n\n\[ {x}^{\prime } = \left( {{169} - {x}^{2} - {y}^{2}}\right) /{10},\;{y}^{\prime } = {17} - x - y. \]
Direction fields near \( {e}_{1} = \left( {5,{12}}\right) \) and near \( {e}_{2} = \left( {{12},5}\right) \) for \( {x}^{\prime } = \left( {{169} - {x}^{2} - {y}^{2}}\right) /{10},\;{y}^{\prime } = {17} - x - y \) are shown in Figure 18.11A and B, respectively. They are quite different. Near \( {e}_{2} = \left( {{12},5}\right) \) the arrows point toward \( {e}_{2} \); near \( {e}_{1} = \left( {5,{12}}\right) \) some of the arrows do not point toward \( {e}_{1} \). We will find that \( {e}_{2} \) is asymptotically stable, and that \( {e}_{1} \) does not meet the criterion that assures that it is asymptotically stable.
No
Example 18.4.1 (Continued) Equations 18.50,

\[ {x}^{\prime } = \left( {{169} - {x}^{2} - {y}^{2}}\right) /{10} \]

(18.55)

\[ {y}^{\prime } = {17} - x - y, \]

have two equilibrium points, \( {e}_{1} = \left( {5,{12}}\right) \) and \( {e}_{2} = \left( {{12},5}\right) \). The Jacobian of Equations 18.55 is

\[ \left\lbrack \begin{array}{ll} {f}_{1}\left( {x, y}\right) & {f}_{2}\left( {x, y}\right) \\ {g}_{1}\left( {x, y}\right) & {g}_{2}\left( {x, y}\right) \end{array}\right\rbrack = \left\lbrack \begin{array}{rr} - x/5 & - y/5 \\ - 1 & - 1 \end{array}\right\rbrack . \]
For \( {e}_{1} = \left( {5,{12}}\right) \) the Jacobian is

\[ {\left\lbrack \begin{array}{rr} - x/5 & - y/5 \\ - 1 & - 1 \end{array}\right\rbrack }_{\left( {x, y}\right) = \left( {5,{12}}\right) }\; = \;\left\lbrack \begin{array}{rr} - 1 & - {12}/5 \\ - 1 & - 1 \end{array}\right\rbrack . \]

The trace and determinant of the Jacobian are \( -2 \) and \( -7/5 \), respectively, and the characteristic roots of the local linear approximation to Equations 18.55 (the roots of \( {r}^{2} + {2r} - 7/5 = 0 \) ) are

\[ {r}_{1} = \frac{-2 + \sqrt{4 + {28}/5}}{2} \doteq {0.5492} \]

and

\[ {r}_{2} = \frac{-2 - \sqrt{4 + {28}/5}}{2} \doteq - {2.5492}. \]

Because one of the characteristic roots is positive we do not conclude that \( {e}_{1} \) is an asymptotically stable equilibrium of Equations 18.55.
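The roots of the characteristic equation \( {r}^{2} + {2r} - 7/5 = 0 \) can be checked by the quadratic formula in a few lines of Python (variable names are ours):

```python
import math

# Jacobian at e1 = (5, 12): [[-1, -12/5], [-1, -1]]
a11, a12, a21, a22 = -1.0, -12/5, -1.0, -1.0
trace = a11 + a22              # -2
det = a11*a22 - a12*a21        # 1 - 12/5 = -7/5
disc = trace**2 - 4*det        # 4 + 28/5

r1 = (trace + math.sqrt(disc)) / 2
r2 = (trace - math.sqrt(disc)) / 2
print(round(r1, 4), round(r2, 4))  # 0.5492 -2.5492
```

Since \( {r}_{1} > 0 \), the asymptotic-stability criterion of Theorem 18.3.1 fails at \( {e}_{1} \).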
Yes
Lemma 2.11 If \( {\mathcal{L}}_{\text{left }} \equiv {\mathcal{L}}_{\text{right }} \) then, for any library \( {\mathcal{L}}^{ * } \), we have \( {\mathcal{L}}^{ * }\diamond {\mathcal{L}}_{\text{left }} \equiv {\mathcal{L}}^{ * }\diamond {\mathcal{L}}_{\text{right }} \) .
Proof. Note that we are comparing \( {\mathcal{L}}^{ * }\diamond {\mathcal{L}}_{\text{left }} \) and \( {\mathcal{L}}^{ * }\diamond {\mathcal{L}}_{\text{right }} \) as compound libraries. Hence we consider a calling program \( \mathcal{A} \) that is linked to either \( {\mathcal{L}}^{ * }\diamond {\mathcal{L}}_{\text{left }} \) or \( {\mathcal{L}}^{ * }\diamond {\mathcal{L}}_{\text{right }} \).

Let \( \mathcal{A} \) be such an arbitrary calling program. We must show that \( \mathcal{A}\diamond \left( {{\mathcal{L}}^{ * }\diamond {\mathcal{L}}_{\text{left }}}\right) \) and \( \mathcal{A}\diamond \left( {{\mathcal{L}}^{ * }\diamond {\mathcal{L}}_{\text{right }}}\right) \) have identical output distributions. As mentioned above, we can interpret \( \mathcal{A}\diamond {\mathcal{L}}^{ * }\diamond {\mathcal{L}}_{\text{left }} \) as a calling program \( \mathcal{A} \) linked to the library \( {\mathcal{L}}^{ * }\diamond {\mathcal{L}}_{\text{left }} \), but also as a calling program \( \mathcal{A}\diamond {\mathcal{L}}^{ * } \) linked to the library \( {\mathcal{L}}_{\text{left }} \). Since \( {\mathcal{L}}_{\text{left }} \equiv {\mathcal{L}}_{\text{right }} \), swapping \( {\mathcal{L}}_{\text{left }} \) for \( {\mathcal{L}}_{\text{right }} \) has no effect on the output of any calling program. In particular, it has no effect when the calling program happens to be the compound program \( \mathcal{A}\diamond {\mathcal{L}}^{ * } \).
Hence we have:

\[ \Pr \left\lbrack {\mathcal{A}\diamond \left( {{\mathcal{L}}^{ * }\diamond {\mathcal{L}}_{\text{left }}}\right) \Rightarrow \text{true}}\right\rbrack = \Pr \left\lbrack {\left( {\mathcal{A}\diamond {\mathcal{L}}^{ * }}\right) \diamond {\mathcal{L}}_{\text{left }} \Rightarrow \text{true}}\right\rbrack \;\text{ (change of perspective) } \]

\[ = \Pr \left\lbrack {\left( {\mathcal{A}\diamond {\mathcal{L}}^{ * }}\right) \diamond {\mathcal{L}}_{\text{right }} \Rightarrow \text{true}}\right\rbrack \;\text{ (since }{\mathcal{L}}_{\text{left }} \equiv {\mathcal{L}}_{\text{right }}\text{) } \]

\[ = \Pr \left\lbrack {\mathcal{A}\diamond \left( {{\mathcal{L}}^{ * }\diamond {\mathcal{L}}_{\text{right }}}\right) \Rightarrow \text{true}}\right\rbrack .\;\text{ (change of perspective) } \]

Since \( \mathcal{A} \) was arbitrary, we have proved the lemma.
Yes
Theorem 2.16 There is an encryption scheme that satisfies one-time secrecy (Definition 2.6) but not one-time uniform ciphertexts (Definition 2.5). In other words, one-time secrecy does not necessarily imply one-time uniform ciphertexts.
Proof

One such encryption scheme is given below:

\[ \begin{array}{l} \mathcal{K} = \{ 0,1{\} }^{\lambda } \\ \mathcal{M} = \{ 0,1{\} }^{\lambda } \\ \mathcal{C} = \{ 0,1{\} }^{\lambda + 2} \end{array}\quad \begin{array}{l} \underline{\text{KeyGen:}} \\ k \leftarrow \{ 0,1{\} }^{\lambda } \\ \text{return }k \end{array}\quad \begin{array}{l} \underline{\operatorname{Enc}\left( {k, m \in \{ 0,1{\} }^{\lambda }}\right) :} \\ {c}^{\prime } \mathrel{\text{:=}} k \oplus m \\ \text{return }{c}^{\prime }\| {00} \end{array}\quad \begin{array}{l} \underline{\operatorname{Dec}\left( {k, c \in \{ 0,1{\} }^{\lambda + 2}}\right) :} \\ {c}^{\prime } \mathrel{\text{:=}} \text{first }\lambda \text{ bits of }c \\ \text{return }k \oplus {c}^{\prime } \end{array} \]

This scheme is just OTP with the bits \( {00} \) added to every ciphertext. The following facts about the scheme should be believable (and the exercises encourage you to prove them formally if you would like more practice at that sort of thing):

- This scheme satisfies one-time secrecy, meaning that encryptions of \( {m}_{L} \) are distributed identically to encryptions of \( {m}_{R} \), for any \( {m}_{L} \) and \( {m}_{R} \) of the attacker's choice. We can characterize the ciphertext distribution in both cases as a uniformly chosen \( \lambda \)-bit string followed by \( {00} \).
No
Let \( \left\{ {\left( {{x}_{1},{y}_{1}}\right) ,\ldots ,\left( {{x}_{d + 1},{y}_{d + 1}}\right) }\right\} \subseteq {\mathbb{R}}^{2} \) be a set of points whose \( {x}_{i} \) values are all distinct. Then there is a unique degree-\( d \) polynomial \( f \) with real coefficients that satisfies \( {y}_{i} = f\left( {x}_{i}\right) \) for all \( i \).
Proof To start, consider the following polynomial:

\[ {\ell }_{1}\left( \mathbf{x}\right) = \frac{\left( {\mathbf{x} - {x}_{2}}\right) \left( {\mathbf{x} - {x}_{3}}\right) \cdots \left( {\mathbf{x} - {x}_{d + 1}}\right) }{\left( {{x}_{1} - {x}_{2}}\right) \left( {{x}_{1} - {x}_{3}}\right) \cdots \left( {{x}_{1} - {x}_{d + 1}}\right) }. \]

The notation is potentially confusing. \( {\ell }_{1} \) is a polynomial with formal variable \( \mathbf{x} \) (written in bold). The non-bold \( {x}_{i} \) values are just plain numbers (scalars), given in the theorem statement. Therefore the numerator in \( {\ell }_{1} \) is a degree-\( d \) polynomial in \( \mathbf{x} \). The denominator is just a scalar, and since all of the \( {x}_{i} \)'s are distinct, we are not dividing by zero. Overall, \( {\ell }_{1} \) is a degree-\( d \) polynomial.

What happens when we evaluate \( {\ell }_{1} \) at one of the special \( {x}_{i} \) values?

- Evaluating \( {\ell }_{1}\left( {x}_{1}\right) \) makes the numerator and denominator the same, so \( {\ell }_{1}\left( {x}_{1}\right) = 1 \).

- Evaluating \( {\ell }_{1}\left( {x}_{i}\right) \) for \( i \neq 1 \) leads to a term \( \left( {{x}_{i} - {x}_{i}}\right) \) in the numerator, so \( {\ell }_{1}\left( {x}_{i}\right) = 0 \).

Of course, \( {\ell }_{1} \) can be evaluated at any point (not just the special points \( {x}_{1},\ldots ,{x}_{d + 1} \)), but we don't care about what happens in those cases.

We can similarly define other polynomials \( {\ell }_{j} \):

\[ {\ell }_{j}\left( \mathbf{x}\right) = \frac{\left( {\mathbf{x} - {x}_{1}}\right) \cdots \left( {\mathbf{x} - {x}_{j - 1}}\right) \left( {\mathbf{x} - {x}_{j + 1}}\right) \cdots \left( {\mathbf{x} - {x}_{d + 1}}\right) }{\left( {{x}_{j} - {x}_{1}}\right) \cdots \left( {{x}_{j} - {x}_{j - 1}}\right) \left( {{x}_{j} - {x}_{j + 1}}\right) \cdots \left( {{x}_{j} - {x}_{d + 1}}\right) }. \]

The pattern is that the numerator is the product of all terms \( \left( {\mathbf{x} - {x}_{i}}\right) \) with \( i \neq j \), and the denominator is the product of all terms \( \left( {{x}_{j} - {x}_{i}}\right) \) with \( i \neq j \), so that \( {\ell }_{j}\left( {x}_{j}\right) = 1 \) and \( {\ell }_{j}\left( {x}_{i}\right) = 0 \) for \( i \neq j \).
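The basis polynomials lead directly to Lagrange interpolation: \( f\left( \mathbf{x}\right) = \sum_{j} {y}_{j}{\ell }_{j}\left( \mathbf{x}\right) \). A self-contained Python sketch of that evaluation, using exact rationals (the function name is ours):

```python
from fractions import Fraction

def lagrange_interpolate(points, x):
    """Evaluate at x the unique degree-<=d polynomial
    through the d+1 given points (distinct x-coordinates)."""
    total = Fraction(0)
    for j, (xj, yj) in enumerate(points):
        # ell_j(x): 1 at xj, 0 at every other xi
        ell = Fraction(1)
        for i, (xi, _) in enumerate(points):
            if i != j:
                ell *= Fraction(x - xi, xj - xi)
        total += yj * ell
    return total

# the parabola y = x^2 through three of its points
pts = [(0, 0), (1, 1), (2, 4)]
print(lagrange_interpolate(pts, 3))  # 9
```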
Yes
Let \( p \) be a prime, and let \( \left\{ {\left( {{x}_{1},{y}_{1}}\right) ,\ldots ,\left( {{x}_{d + 1},{y}_{d + 1}}\right) }\right\} \subseteq {\left( {\mathbb{Z}}_{p}\right) }^{2} \) be a set of points whose \( {x}_{i} \) values are all distinct. Then there is a unique degree-d polynomial \( f \) with coefficients from \( {\mathbb{Z}}_{p} \) that satisfies \( {y}_{i}{ \equiv }_{p}f\left( {x}_{i}\right) \) for all \( i \) .
The proof is the same as the one for Theorem 3.8, if you interpret all arithmetic modulo \( p \). Addition, subtraction, and multiplication \( {\;\operatorname{mod}\;p} \) are straightforward; the only nontrivial question is how to interpret division modulo \( p \), which appears in the \( {\ell }_{j} \) polynomials: dividing by a nonzero element of \( {\mathbb{Z}}_{p} \) means multiplying by its multiplicative inverse, which exists because \( p \) is prime.
Corollary 3.10 Let \( \mathcal{P} = \left\{ {\left( {{x}_{1},{y}_{1}}\right) ,\ldots ,\left( {{x}_{k},{y}_{k}}\right) }\right\} \subseteq {\left( {\mathbb{Z}}_{p}\right) }^{2} \) be a set of points whose \( {x}_{i} \) values are distinct. Let \( d \) satisfy \( k \leq d + 1 \) and \( p > d \) . Then the number of degree-d polynomials \( f \) with coefficients in \( {\mathbb{Z}}_{p} \) that satisfy the condition \( {y}_{i}{ \equiv }_{p}f\left( {x}_{i}\right) \) for all \( i \) is exactly \( {p}^{d + 1 - k} \) .
Proof The proof is by induction on the value \( d + 1 - k \). The base case is when \( d + 1 - k = 0 \). Then we have \( k = d + 1 \) distinct points, and Theorem 3.9 says that there is a unique polynomial satisfying the condition. Since \( {p}^{d + 1 - k} = {p}^{0} = 1 \), the base case is true.

For the inductive case, we have \( k \leq d \) points in \( \mathcal{P} \). Let \( {x}^{ * } \in {\mathbb{Z}}_{p} \) be a value that does not appear as one of the \( {x}_{i} \)'s. Every polynomial must give some value when evaluated at \( {x}^{ * } \). So,

[# of degree-\( d \) polynomials passing through points in \( \mathcal{P} \)]

\[ = \mathop{\sum }\limits_{{{y}^{ * } \in {\mathbb{Z}}_{p}}}\left\lbrack {\# \text{ of degree-}d\text{ polynomials passing through points in }\mathcal{P} \cup \left\{ \left( {{x}^{ * },{y}^{ * }}\right) \right\} }\right\rbrack \]

\[ \overset{\left( \star \right) }{ = }\mathop{\sum }\limits_{{{y}^{ * } \in {\mathbb{Z}}_{p}}}{p}^{d + 1 - \left( {k + 1}\right) } \]

\[ = p \cdot \left( {p}^{d + 1 - k - 1}\right) = {p}^{d + 1 - k} \]

The equality marked \( \left( \star \right) \) follows from the inductive hypothesis, since each of the terms involves a polynomial passing through a specified set of \( k + 1 \) points with distinct \( x \)-coordinates.
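For small parameters, the count in this corollary can be confirmed by brute force over all coefficient vectors. A sketch, where the toy choices p = 5 and d = 2 are arbitrary:

```python
from itertools import product

def count_through(points, d, p):
    # brute-force: how many coefficient vectors in (Z_p)^(d+1) define a
    # polynomial passing through all the given points mod p?
    count = 0
    for coeffs in product(range(p), repeat=d + 1):
        if all(sum(c * x**e for e, c in enumerate(coeffs)) % p == y
               for x, y in points):
            count += 1
    return count

p, d = 5, 2                                   # toy parameters
print(count_through([(1, 3)], d, p))          # k=1: p**(d+1-1) = 25
print(count_through([(1, 3), (2, 0)], d, p))  # k=2: p**(d+1-2) = 5
```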
Lemma 3.12 Let \( p \) be a prime and define the following two libraries: ![3ad0c3a5-19c6-45e7-ade0-f1e1b88ffb4d_71_0.jpg](images/3ad0c3a5-19c6-45e7-ade0-f1e1b88ffb4d_71_0.jpg)

\( {\mathcal{L}}_{\text{shamir-real }} \) chooses a random degree-\( \left( {t - 1}\right) \) polynomial that passes through the point \( \left( {0, m}\right) \), then evaluates it at the given \( x \)-coordinates (specified by \( U \)). \( {\mathcal{L}}_{\text{shamir-rand }} \) simply gives uniformly chosen points, unrelated to any polynomial.

The claim is that these libraries are interchangeable: \( {\mathcal{L}}_{\text{shamir-real }} \equiv {\mathcal{L}}_{\text{shamir-rand }} \).
Proof Fix a message \( m \in {\mathbb{Z}}_{p} \), fix a set \( U \) of users with \( \left| U\right| < t \), and for each \( i \in U \) fix a value \( {y}_{i} \in {\mathbb{Z}}_{p} \). We wish to consider the probability that a call to \( \operatorname{POLY}\left( {m, t, U}\right) \) outputs \( \left\{ {\left( {i,{y}_{i}}\right) \mid i \in U}\right\} \), in each of the two libraries. \( {}^{2} \)

In library \( {\mathcal{L}}_{\text{shamir-real }} \), the subroutine chooses a random degree-\( \left( {t - 1}\right) \) polynomial \( f \) such that \( f\left( 0\right) { \equiv }_{p}m \). From Corollary 3.10, we know there are \( {p}^{t - 1} \) such polynomials.

In order for POLY to output points consistent with our chosen \( {y}_{i} \)'s, the library must have chosen one of the polynomials that passes through \( \left( {0, m}\right) \) and all of the \( \left\{ {\left( {i,{y}_{i}}\right) \mid i \in U}\right\} \) points. The library must have chosen one of the polynomials that passes through a specific choice of \( \left| U\right| + 1 \) points, and Corollary 3.10 tells us that there are \( {p}^{t - \left( {\left| U\right| + 1}\right) } \) such polynomials.

The only way for POLY to give our desired output is for it to choose one of these \( {p}^{t - \left( {\left| U\right| + 1}\right) } \) polynomials, out of the \( {p}^{t - 1} \) possibilities overall. So the probability of seeing our chosen output is \( {p}^{t - \left( {\left| U\right| + 1}\right) }/{p}^{t - 1} = {p}^{-\left| U\right| } \). In library \( {\mathcal{L}}_{\text{shamir-rand }} \), each of the \( \left| U\right| \) output values is chosen uniformly from \( {\mathbb{Z}}_{p} \), so the probability of seeing our chosen output is also \( {p}^{-\left| U\right| } \). Since these probabilities agree for every choice of \( m \), \( U \), and the \( {y}_{i} \)'s, the two libraries are interchangeable.
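The counting argument can be checked by brute force for toy parameters (the values of p, t, m, and U below are made up): enumerating every polynomial with \( f(0) = m \) shows each possible share vector for an unauthorized set occurring equally often.

```python
from itertools import product
from collections import Counter

p, t, m = 5, 2, 3                    # toy parameters (made up)
U = [1]                              # an unauthorized set, |U| < t

counts = Counter()
# enumerate every degree-(t-1) polynomial f with f(0) = m: the constant
# term is fixed and the other t-1 coefficients range over Z_p
for coeffs in product(range(p), repeat=t - 1):
    shares = tuple(
        (m + sum(c * i ** (e + 1) for e, c in enumerate(coeffs))) % p
        for i in U
    )
    counts[shares] += 1

# each of the p^|U| possible share vectors occurs p^(t-(|U|+1)) times
print(sorted(counts.values()))
```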
Theorem 3.13 Shamir's secret-sharing scheme (Construction 3.11) is secure according to Definition 3.3.
Proof Let \( \mathcal{S} \) denote the Shamir secret-sharing scheme. We prove that \( {\mathcal{L}}_{\text{tsss-L }}^{\mathcal{S}} \equiv {\mathcal{L}}_{\text{tsss-R }}^{\mathcal{S}} \) via a hybrid argument. Our starting point is \( {\mathcal{L}}_{\text{tsss-L }}^{\mathcal{S}} \), shown here with the details of Shamir secret-sharing filled in. ![3ad0c3a5-19c6-45e7-ade0-f1e1b88ffb4d_72_0.jpg](images/3ad0c3a5-19c6-45e7-ade0-f1e1b88ffb4d_72_0.jpg) ![3ad0c3a5-19c6-45e7-ade0-f1e1b88ffb4d_73_0.jpg](images/3ad0c3a5-19c6-45e7-ade0-f1e1b88ffb4d_73_0.jpg) Applying the same steps in reverse, we can replace \( {\mathcal{L}}_{\text{shamir-rand }} \) with \( {\mathcal{L}}_{\text{shamir-real }} \) and inline a subroutine, neither of which has any effect on the library's behavior. The result is \( {\mathcal{L}}_{\text{tsss-R }}^{\mathcal{S}} \). We showed that \( {\mathcal{L}}_{\text{tsss-L }}^{\mathcal{S}} \equiv {\mathcal{L}}_{\text{hyb-1 }} \equiv \cdots \equiv {\mathcal{L}}_{\text{hyb-4 }} \equiv {\mathcal{L}}_{\text{tsss-R }}^{\mathcal{S}} \), so Shamir's secret-sharing scheme is secure.
Lemma 4.9 BirthdayProb \( \left( {q, N}\right) = 1 - \mathop{\prod }\limits_{{i = 1}}^{{q - 1}}\left( {1 - \frac{i}{N}}\right) \) .
Proof Let us instead compute the probability that \( \mathcal{B} \) outputs 0, which will allow us to then solve for the probability that it outputs 1. In order for \( \mathcal{B} \) to output 0, it must avoid the early termination conditions in each iteration of the main loop. Therefore:

\[ \Pr \left\lbrack {\mathcal{B}\left( {q, N}\right) \text{ outputs }0}\right\rbrack = \Pr \left\lbrack {\mathcal{B}\left( {q, N}\right) \text{ doesn't terminate early in iteration }i = 1}\right\rbrack \]

\[ \cdot \Pr \left\lbrack {\mathcal{B}\left( {q, N}\right) \text{ doesn't terminate early in iteration }i = 2}\right\rbrack \]

\[ \vdots \]

\[ \cdot \Pr \left\lbrack {\mathcal{B}\left( {q, N}\right) \text{ doesn't terminate early in iteration }i = q}\right\rbrack \]

In iteration \( i \) of the main loop, there are \( i - 1 \) previously chosen values \( {s}_{1},\ldots ,{s}_{i - 1} \). The program terminates early if any of these are chosen again as \( {s}_{i} \); otherwise it continues to the next iteration. Put differently, there are \( i - 1 \) (out of \( N \)) ways to choose \( {s}_{i} \) that lead to early termination, and all other choices of \( {s}_{i} \) avoid early termination. Since the \( N \) possibilities for \( {s}_{i} \) happen with equal probability:

\[ \Pr \left\lbrack {\mathcal{B}\left( {q, N}\right) \text{ doesn't terminate early in iteration }i}\right\rbrack = 1 - \frac{i - 1}{N}. \]

Putting everything together:

\[ \operatorname{BirthdayProb}\left( {q, N}\right) = \Pr \left\lbrack {\mathcal{B}\left( {q, N}\right) \text{ outputs }1}\right\rbrack \]

\[ = 1 - \Pr \left\lbrack {\mathcal{B}\left( {q, N}\right) \text{ outputs }0}\right\rbrack \]

\[ = 1 - \left( {1 - \frac{1}{N}}\right) \left( {1 - \frac{2}{N}}\right) \cdots \left( {1 - \frac{q - 1}{N}}\right) \]

\[ = 1 - \mathop{\prod }\limits_{{i = 1}}^{{q - 1}}\left( {1 - \frac{i}{N}}\right) \]

This completes the proof.
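The closed-form expression is straightforward to evaluate directly; a minimal sketch:

```python
def birthday_prob(q, N):
    # BirthdayProb(q, N) = 1 - prod_{i=1}^{q-1} (1 - i/N)
    no_collision = 1.0
    for i in range(1, q):
        no_collision *= 1 - i / N
    return 1 - no_collision

# the classic birthday setting: 23 people, 365 possible birthdays
print(birthday_prob(23, 365))    # roughly 0.507
```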
If \( q \leq \sqrt{2N} \), then

\[ {0.632}\frac{q\left( {q - 1}\right) }{2N} \leq \operatorname{BirthdayProb}\left( {q, N}\right) \leq \frac{q\left( {q - 1}\right) }{2N}. \]
Proof We split the proof into two parts.

- To prove the upper bound, we use the fact that when \( x \) and \( y \) are positive,

\[ \left( {1 - x}\right) \left( {1 - y}\right) = 1 - \left( {x + y}\right) + {xy} \geq 1 - \left( {x + y}\right) . \]

More generally, when all terms \( {x}_{i} \) are positive, \( \mathop{\prod }\limits_{i}\left( {1 - {x}_{i}}\right) \geq 1 - \mathop{\sum }\limits_{i}{x}_{i} \). Hence,

\[ 1 - \mathop{\prod }\limits_{i}\left( {1 - {x}_{i}}\right) \leq 1 - \left( {1 - \mathop{\sum }\limits_{i}{x}_{i}}\right) = \mathop{\sum }\limits_{i}{x}_{i}. \]

Applying that fact,

\[ \operatorname{BirthdayProb}\left( {q, N}\right) \overset{\text{ def }}{ = }1 - \mathop{\prod }\limits_{{i = 1}}^{{q - 1}}\left( {1 - \frac{i}{N}}\right) \leq \mathop{\sum }\limits_{{i = 1}}^{{q - 1}}\frac{i}{N} = \frac{\mathop{\sum }\limits_{{i = 1}}^{{q - 1}}i}{N} = \frac{q\left( {q - 1}\right) }{2N}. \]

- To prove the lower bound, we use the fact that when \( 0 \leq x \leq 1 \),

\[ 1 - x \leq {e}^{-x} \leq 1 - {0.632x}. \]

This fact is illustrated below. The significance of 0.632 is that \( 1 - \frac{1}{e} = {0.63212}\ldots \)

We can use both of these upper and lower bounds on \( {e}^{-x} \) to show the following:

\[ \mathop{\prod }\limits_{{i = 1}}^{{q - 1}}\left( {1 - \frac{i}{N}}\right) \leq \mathop{\prod }\limits_{{i = 1}}^{{q - 1}}{e}^{-\frac{i}{N}} = {e}^{-\mathop{\sum }\limits_{{i = 1}}^{{q - 1}}\frac{i}{N}} = {e}^{-\frac{q\left( {q - 1}\right) }{2N}} \leq 1 - {0.632}\frac{q\left( {q - 1}\right) }{2N}. \]

With the last inequality we used the fact that \( q \leq \sqrt{2N} \), and therefore \( \frac{q\left( {q - 1}\right) }{2N} \leq 1 \) (this is necessary to apply the inequality \( {e}^{-x} \leq 1 - {0.632x} \)).
Hence:

\[ \operatorname{BirthdayProb}\left( {q, N}\right) \overset{\text{ def }}{ = }1 - \mathop{\prod }\limits_{{i = 1}}^{{q - 1}}\left( {1 - \frac{i}{N}}\right) \geq 1 - \left( {1 - {0.632}\frac{q\left( {q - 1}\right) }{2N}}\right) = {0.632}\frac{q\left( {q - 1}\right) }{2N}. \]

This completes the proof.
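The two bounds can be spot-checked numerically for a few \( (q, N) \) pairs satisfying \( q \leq \sqrt{2N} \) (the pairs below are arbitrary):

```python
import math

def birthday_prob(q, N):
    no_collision = 1.0
    for i in range(1, q):
        no_collision *= 1 - i / N
    return 1 - no_collision

# check the sandwich 0.632*q(q-1)/2N <= BirthdayProb(q,N) <= q(q-1)/2N
for q, N in [(10, 1000), (23, 365), (40, 4096)]:
    assert q <= math.sqrt(2 * N)      # hypothesis of the theorem
    lo = 0.632 * q * (q - 1) / (2 * N)
    hi = q * (q - 1) / (2 * N)
    assert lo <= birthday_prob(q, N) <= hi
print("bounds hold for all tested pairs")
```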
Let \( {\mathcal{L}}_{\text{samp-L }} \) and \( {\mathcal{L}}_{\text{samp-R }} \) be defined as above. Then for all calling programs \( \mathcal{A} \) that make \( q \) queries to the SAMP subroutine, the advantage of \( \mathcal{A} \) in distinguishing the libraries is at most BirthdayProb \( \left( {q,{2}^{\lambda }}\right) \) .
Proof Consider the following hybrid libraries:

![3ad0c3a5-19c6-45e7-ade0-f1e1b88ffb4d_91_0.jpg](images/3ad0c3a5-19c6-45e7-ade0-f1e1b88ffb4d_91_0.jpg)

First, let us prove some simple observations about these libraries:

\( {\mathcal{L}}_{\mathrm{{hyb}} - \mathrm{L}} \equiv {\mathcal{L}}_{\text{samp-L }} \) : Note that \( {\mathcal{L}}_{\mathrm{{hyb}} - \mathrm{L}} \) simply samples uniformly from \( \{ 0,1{\} }^{\lambda } \). The extra \( R \) and bad variables in \( {\mathcal{L}}_{\text{hyb-L }} \) don't actually have an effect on its external behavior (they are used only for convenience later in the proof).

\( {\mathcal{L}}_{\mathrm{{hyb}} - \mathrm{R}} \equiv {\mathcal{L}}_{\text{samp-R }} \) : Whereas \( {\mathcal{L}}_{\text{samp-R }} \) avoids repeats by simply sampling from \( \{ 0,1{\} }^{\lambda } \smallsetminus R \), this library \( {\mathcal{L}}_{\text{hyb-R }} \) samples \( r \) uniformly from \( \{ 0,1{\} }^{\lambda } \) and retries if the result happens to be in \( R \). This method is called rejection sampling, and it has the same effect \( {}^{10} \) as sampling \( r \) directly from \( \{ 0,1{\} }^{\lambda } \smallsetminus R \).

Conveniently, \( {\mathcal{L}}_{\mathrm{{hyb}} - \mathrm{L}} \) and \( {\mathcal{L}}_{\mathrm{{hyb}} - \mathrm{R}} \) differ only in code that is reachable when bad \( = 1 \) (highlighted).
So, using Lemma 4.8, we can bound the advantage of the calling program:

\[ \left| {\Pr \left\lbrack {\mathcal{A}\diamond {\mathcal{L}}_{\text{samp-L }} \Rightarrow 1}\right\rbrack - \Pr \left\lbrack {\mathcal{A}\diamond {\mathcal{L}}_{\text{samp-R }} \Rightarrow 1}\right\rbrack }\right| \]

\[ = \left| {\Pr \left\lbrack {\mathcal{A}\diamond {\mathcal{L}}_{\text{hyb-L}} \Rightarrow 1}\right\rbrack - \Pr \left\lbrack {\mathcal{A}\diamond {\mathcal{L}}_{\text{hyb-R}} \Rightarrow 1}\right\rbrack }\right| \]

\[ \leq \Pr \left\lbrack {\mathcal{A}\diamond {\mathcal{L}}_{\text{hyb-L }}\text{ sets bad } \mathrel{\text{:=}} 1}\right\rbrack . \]

Finally, we can observe that \( \mathcal{A}\diamond {\mathcal{L}}_{\mathrm{{hyb}} - \mathrm{L}} \) sets bad \( \mathrel{\text{:=}} 1 \) only in the event that it sees a repeated sample from \( \{ 0,1{\} }^{\lambda } \). This happens with probability \( \operatorname{BirthdayProb}\left( {q,{2}^{\lambda }}\right) \).
Lemma 4.12 The following two libraries are indistinguishable, provided that the argument \( \mathcal{R} \) to SAMP is passed as an explicit list of items.
Suppose the calling program makes \( q \) calls to SAMP, and in the \( i \)th call it uses an argument \( \mathcal{R} \) with \( {n}_{i} \) items. Then the advantage of the calling program is at most:

\[ 1 - \mathop{\prod }\limits_{{i = 1}}^{q}\left( {1 - \frac{{n}_{i}}{{2}^{\lambda }}}\right) \]

We can bound this advantage as before. If \( \mathop{\sum }\limits_{{i = 1}}^{q}{n}_{i} \leq {2}^{\lambda } \), then the advantage is between \( {0.632}\left( {\mathop{\sum }\limits_{{i = 1}}^{q}{n}_{i}}\right) /{2}^{\lambda } \) and \( \left( {\mathop{\sum }\limits_{{i = 1}}^{q}{n}_{i}}\right) /{2}^{\lambda } \). When the calling program runs in polynomial time and must pass \( \mathcal{R} \) as an explicit list (i.e., take the time to write down each of its elements), the total \( \mathop{\sum }\limits_{{i = 1}}^{q}{n}_{i} \) is bounded by the polynomial running time of the calling program, so the advantage is negligible.
Let \( {\mathcal{L}}_{\text{prf-rand }} \) and \( {\mathcal{L}}_{\text{prp-rand }} \) be defined as in Definitions 6.1 & 6.6, with parameters in \( = \) out \( = \) blen \( = \lambda \) (so that the interfaces match up). Then \( {\mathcal{L}}_{\text{prf-rand }} \approx {\mathcal{L}}_{\text{prp-rand }} \) .
Recall the replacement-sampling lemma, Lemma 4.11, which showed that the following libraries are indistinguishable:

![3ad0c3a5-19c6-45e7-ade0-f1e1b88ffb4d_132_0.jpg](images/3ad0c3a5-19c6-45e7-ade0-f1e1b88ffb4d_132_0.jpg)

\( {\mathcal{L}}_{\text{samp-L }} \) samples values with replacement, and \( {\mathcal{L}}_{\text{samp-R }} \) samples values without replacement. Now consider the following library \( {\mathcal{L}}^{ * } \):

![3ad0c3a5-19c6-45e7-ade0-f1e1b88ffb4d_132_1.jpg](images/3ad0c3a5-19c6-45e7-ade0-f1e1b88ffb4d_132_1.jpg)

When we link \( {\mathcal{L}}^{ * }\diamond {\mathcal{L}}_{\text{samp-L }} \) we obtain \( {\mathcal{L}}_{\text{prf-rand }} \), since the values in \( T\left\lbrack x\right\rbrack \) are sampled uniformly. When we link \( {\mathcal{L}}^{ * }\diamond {\mathcal{L}}_{\text{samp-R }} \) we obtain \( {\mathcal{L}}_{\text{prp-rand }} \), since the values in \( T\left\lbrack x\right\rbrack \) are sampled uniformly subject to having no repeats (consider \( R \) playing the role of \( T \).values in \( {\mathcal{L}}_{\text{prp-rand }} \)). Then from Lemma 4.11, we have:

\[ {\mathcal{L}}_{\text{prf-rand }} \equiv {\mathcal{L}}^{ * }\diamond {\mathcal{L}}_{\text{samp-L }} \approx {\mathcal{L}}^{ * }\diamond {\mathcal{L}}_{\text{samp-R }} \equiv {\mathcal{L}}_{\text{prp-rand }}, \]

which completes the proof.
Corollary 6.8 Let \( F : \{ 0,1{\} }^{\lambda } \times \{ 0,1{\} }^{\lambda } \rightarrow \{ 0,1{\} }^{\lambda } \) be a secure PRP (with blen \( = \lambda \) ). Then \( F \) is also a secure PRF.
Proof As we have observed above, \( {\mathcal{L}}_{\text{prf-real }}^{F} \) and \( {\mathcal{L}}_{\text{prp-real }}^{F} \) are literally the same library. Since \( F \) is a secure PRP, \( {\mathcal{L}}_{\text{prp-real }}^{F} \approx {\mathcal{L}}_{\text{prp-rand }}^{F} \). Finally, by the switching lemma, \( {\mathcal{L}}_{\text{prp-rand }}^{F} \approx {\mathcal{L}}_{\text{prf-rand }}^{F} \). Putting everything together:

\[ {\mathcal{L}}_{\text{prf-real }}^{F} \equiv {\mathcal{L}}_{\text{prp-real }}^{F} \approx {\mathcal{L}}_{\text{prp-rand }}^{F} \approx {\mathcal{L}}_{\text{prf-rand }}^{F}, \]

hence \( F \) is a secure PRF.
Claim 11.3 Suppose \( h \) is a compression function and \( M{D}_{h} \) is the Merkle-Damgård construction applied to \( h \) . Given a collision \( x,{x}^{\prime } \) in \( M{D}_{h} \), it is easy to find a collision in \( h \) . In other words, if it is hard to find a collision in \( h \), then it must also be hard to find a collision in \( M{D}_{h} \) .
Proof Suppose that \( x,{x}^{\prime } \) are a collision under \( {\mathrm{{MD}}}_{h} \). Define the values \( {x}_{1},\ldots ,{x}_{k + 1} \) and \( {y}_{1},\ldots ,{y}_{k + 1} \) as in the computation of \( {\mathrm{{MD}}}_{h}\left( x\right) \). Similarly, define \( {x}_{1}^{\prime },\ldots ,{x}_{{k}^{\prime } + 1}^{\prime } \) and \( {y}_{1}^{\prime },\ldots ,{y}_{{k}^{\prime } + 1}^{\prime } \) as in the computation of \( {\mathrm{{MD}}}_{h}\left( {x}^{\prime }\right) \). Note that, in general, \( k \) may not equal \( {k}^{\prime } \).

Recall that:

\[ {\mathrm{{MD}}}_{h}\left( x\right) = {y}_{k + 1} = h\left( {{y}_{k}\parallel {x}_{k + 1}}\right) \]

\[ {\mathrm{{MD}}}_{h}\left( {x}^{\prime }\right) = {y}_{{k}^{\prime } + 1}^{\prime } = h\left( {{y}_{{k}^{\prime }}^{\prime }\parallel {x}_{{k}^{\prime } + 1}^{\prime }}\right) \]

Since we are assuming \( {\mathrm{{MD}}}_{h}\left( x\right) = {\mathrm{{MD}}}_{h}\left( {x}^{\prime }\right) \), we have \( {y}_{k + 1} = {y}_{{k}^{\prime } + 1}^{\prime } \). We consider two cases:

Case 1: If \( \left| x\right| \neq \left| {x}^{\prime }\right| \), then the padding blocks \( {x}_{k + 1} \) and \( {x}_{{k}^{\prime } + 1}^{\prime } \), which encode \( \left| x\right| \) and \( \left| {x}^{\prime }\right| \), are not equal. Hence we have \( {y}_{k}\parallel {x}_{k + 1} \neq {y}_{{k}^{\prime }}^{\prime }\parallel {x}_{{k}^{\prime } + 1}^{\prime } \), so \( {y}_{k}\parallel {x}_{k + 1} \) and \( {y}_{{k}^{\prime }}^{\prime }\parallel {x}_{{k}^{\prime } + 1}^{\prime } \) are a collision under \( h \) and we are done.

Case 2: If \( \left| x\right| = \left| {x}^{\prime }\right| \), then \( x \) and \( {x}^{\prime } \) are broken into the same number of blocks, so \( k = {k}^{\prime } \). Let us work backwards from the final step in the computations of \( {\mathrm{{MD}}}_{h}\left( x\right) \) and \( {\mathrm{{MD}}}_{h}\left( {x}^{\prime }\right) \). If \( {y}_{k}\parallel {x}_{k + 1} \neq {y}_{k}^{\prime }\parallel {x}_{k + 1}^{\prime } \), then these two strings hash under \( h \) to the same value \( {y}_{k + 1} = {y}_{k + 1}^{\prime } \), so they are a collision under \( h \) and we are done. Otherwise \( {y}_{k} = {y}_{k}^{\prime } \) and \( {x}_{k + 1} = {x}_{k + 1}^{\prime } \), and we can repeat the same argument one step earlier in the chain, at \( h\left( {{y}_{k - 1}\parallel {x}_{k}}\right) = {y}_{k} = {y}_{k}^{\prime } = h\left( {{y}_{k - 1}^{\prime }\parallel {x}_{k}^{\prime }}\right) \). Continuing backwards, either we find a collision in \( h \) at some step, or else every block satisfies \( {x}_{i} = {x}_{i}^{\prime } \), meaning \( x = {x}^{\prime } \). The latter contradicts the assumption that \( x \neq {x}^{\prime } \), so a collision in \( h \) must appear at some step.
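The backwards-walking argument in Case 2 translates directly into code. Below is a sketch using a deliberately weak toy compression function (its constants, 1-byte state, and 1-byte length block are all made-up simplifications), so that an MD collision can be found by brute force and an h-collision extracted from it:

```python
# toy compression function: 1-byte chaining value y, 1-byte block x
# (the constants are arbitrary; real compression functions are nothing
# like this, but the extraction logic is identical)
def h(y, x):
    return (31 * y + 17 * x + 7) % 256

def md_blocks(msg):
    # split msg (a list of ints 0..255) into blocks x_1..x_{k+1}, with the
    # final block encoding the length; return blocks and chain y_0..y_{k+1}
    blocks = list(msg) + [len(msg) % 256]
    ys = [0]
    for x in blocks:
        ys.append(h(ys[-1], x))
    return blocks, ys

def extract_h_collision(m1, m2):
    # given m1 != m2 with equal MD digests, walk backwards until the
    # inputs to h differ; those inputs collide under h
    b1, y1 = md_blocks(m1)
    b2, y2 = md_blocks(m2)
    i, j = len(b1), len(b2)
    while True:
        in1, in2 = (y1[i - 1], b1[i - 1]), (y2[j - 1], b2[j - 1])
        if in1 != in2:
            return in1, in2
        i, j = i - 1, j - 1

def find_md_collision():
    # pigeonhole: 65536 two-byte messages, only 256 possible digests
    seen = {}
    for a in range(256):
        for b in range(256):
            digest = md_blocks([a, b])[1][-1]
            if digest in seen and seen[digest] != [a, b]:
                return seen[digest], [a, b]
            seen[digest] = [a, b]

m1, m2 = find_md_collision()
(y, x), (yy, xx) = extract_h_collision(m1, m2)
print("MD collision:", m1, m2, "-> h collision:", (y, x), (yy, xx))
```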
If \( x \in {\mathbb{Z}}_{n}^{ * } \) then \( {x}^{\phi \left( n\right) }{ \equiv }_{n}1 \) .
Using the formula for \( \phi \left( n\right) \), we can see that \( \phi \left( {15}\right) = \phi \left( {3 \cdot 5}\right) = \left( {3 - 1}\right) \left( {5 - 1}\right) = 8 \). Euler's theorem says that raising any element of \( {\mathbb{Z}}_{15}^{ * } \) to the 8th power results in 1. We can use Sage to verify this, for instance:

sage: for i in range(15):
....:     if gcd(i, 15) == 1:
....:         print(i, power_mod(i, 8, 15))
Suppose \( \gcd \left( {r, s}\right) = 1 \). Then for all integers \( u, v \), there is a solution for \( x \) in the following system of equations:

\[ x{ \equiv }_{r}u \]

\[ x{ \equiv }_{s}v \]

Furthermore, this solution is unique modulo \( {rs} \).
Proof Since \( \gcd \left( {r, s}\right) = 1 \), we have by Bezout's theorem that \( 1 = {ar} + {bs} \) for some integers \( a \) and \( b \). Furthermore, \( b \) and \( s \) are multiplicative inverses modulo \( r \). Now choose \( x = {var} + {ubs} \). Then, since \( {ar}{ \equiv }_{r}0 \) and \( {bs}{ \equiv }_{r}1 \),

\[ x = {var} + {ubs}{ \equiv }_{r}v \cdot 0 + u \cdot 1 = u. \]

So \( x{ \equiv }_{r}u \), as desired. Using similar reasoning \( {\;\operatorname{mod}\;s} \), we can see that \( x{ \equiv }_{s}v \), so \( x \) is a solution to both equations.

Now we argue that this solution is unique modulo \( {rs} \). Suppose \( x \) and \( {x}^{\prime } \) are two solutions to the system of equations, so we have:

\[ x{ \equiv }_{r}{x}^{\prime }{ \equiv }_{r}u \]

\[ x{ \equiv }_{s}{x}^{\prime }{ \equiv }_{s}v \]

Since \( x{ \equiv }_{r}{x}^{\prime } \) and \( x{ \equiv }_{s}{x}^{\prime } \), it must be that \( x - {x}^{\prime } \) is a multiple of \( r \) and a multiple of \( s \). Since \( r \) and \( s \) are relatively prime, their least common multiple is \( {rs} \), so \( x - {x}^{\prime } \) must be a multiple of \( {rs} \). Hence, \( x{ \equiv }_{rs}{x}^{\prime } \). So any two solutions to this system of equations are congruent mod \( {rs} \).
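The constructive first half of this proof is essentially an algorithm; here is a sketch in Python (the moduli and residues in the example are arbitrary):

```python
def crt(u, r, v, s):
    # returns the unique x mod r*s with x = u (mod r) and x = v (mod s),
    # assuming gcd(r, s) == 1, via Bezout coefficients 1 = a*r + b*s
    def ext_gcd(m, n):
        if n == 0:
            return m, 1, 0
        g, p, q = ext_gcd(n, m % n)
        return g, q, p - (m // n) * q
    g, a, b = ext_gcd(r, s)
    assert g == 1, "moduli must be coprime"
    return (v * a * r + u * b * s) % (r * s)

x = crt(2, 3, 4, 5)    # x = 2 (mod 3) and x = 4 (mod 5)
print(x)               # -> 14
```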
Theorem 2.1.1 (Superposition). Suppose \( {y}_{1} \) and \( {y}_{2} \) are two solutions of the homogeneous equation (2.2). Then

\[ y\left( x\right) = {C}_{1}{y}_{1}\left( x\right) + {C}_{2}{y}_{2}\left( x\right) \]

also solves (2.2) for arbitrary constants \( {C}_{1} \) and \( {C}_{2} \).
Proof: Let \( y = {C}_{1}{y}_{1} + {C}_{2}{y}_{2} \). Then

\[ {y}^{\prime \prime } + p{y}^{\prime } + {qy} = {\left( {C}_{1}{y}_{1} + {C}_{2}{y}_{2}\right) }^{\prime \prime } + p{\left( {C}_{1}{y}_{1} + {C}_{2}{y}_{2}\right) }^{\prime } + q\left( {{C}_{1}{y}_{1} + {C}_{2}{y}_{2}}\right) \]

\[ = {C}_{1}{y}_{1}^{\prime \prime } + {C}_{2}{y}_{2}^{\prime \prime } + {C}_{1}p{y}_{1}^{\prime } + {C}_{2}p{y}_{2}^{\prime } + {C}_{1}q{y}_{1} + {C}_{2}q{y}_{2} \]

\[ = {C}_{1}\left( {{y}_{1}^{\prime \prime } + p{y}_{1}^{\prime } + q{y}_{1}}\right) + {C}_{2}\left( {{y}_{2}^{\prime \prime } + p{y}_{2}^{\prime } + q{y}_{2}}\right) \]

\[ = {C}_{1} \cdot 0 + {C}_{2} \cdot 0 = 0. \]
Theorem 2.1.3. Let \( p, q \) be continuous functions. Let \( {y}_{1} \) and \( {y}_{2} \) be two linearly independent solutions to the homogeneous equation (2.2). Then every other solution is of the form

\[ y = {C}_{1}{y}_{1} + {C}_{2}{y}_{2}. \]

That is, \( y = {C}_{1}{y}_{1} + {C}_{2}{y}_{2} \) is the general solution.
For example, we found the solutions \( {y}_{1} = \sin x \) and \( {y}_{2} = \cos x \) for the equation \( {y}^{\prime \prime } + y = 0 \). It is not hard to see that sine and cosine are not constant multiples of each other. If \( \sin x = A\cos x \) for some constant \( A \), we let \( x = 0 \) and this would imply \( A = 0 \). But then \( \sin x = 0 \) for all \( x \), which is preposterous. So \( {y}_{1} \) and \( {y}_{2} \) are linearly independent. Hence,

\[ y = {C}_{1}\cos x + {C}_{2}\sin x \]

is the general solution to \( {y}^{\prime \prime } + y = 0 \).

For two functions, checking linear independence is rather simple. Let us see another example. Consider \( {y}^{\prime \prime } - 2{x}^{-2}y = 0 \). Then \( {y}_{1} = {x}^{2} \) and \( {y}_{2} = 1/x \) are solutions. To see that they are linearly independent, suppose one is a multiple of the other, \( {y}_{1} = A{y}_{2} \); we just have to show that \( A \) cannot be a constant. In this case \( A = {y}_{1}/{y}_{2} = {x}^{3} \), which is most decidedly not a constant. So \( y = {C}_{1}{x}^{2} + {C}_{2}1/x \) is the general solution.
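Both claims about this last example can be checked numerically; a small sketch using central differences (the grid points are chosen arbitrarily):

```python
def second_deriv(f, x, h=1e-4):
    # central-difference approximation of f''(x)
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

y1 = lambda x: x**2
y2 = lambda x: 1 / x
for f in (y1, y2):
    for x in (0.5, 1.0, 2.0, 3.0):
        residual = second_deriv(f, x) - (2 / x**2) * f(x)
        assert abs(residual) < 1e-3   # both satisfy y'' - 2x^{-2} y = 0
# the ratio y1/y2 = x^3 is visibly non-constant:
print(y1(1.0) / y2(1.0), y1(2.0) / y2(2.0))   # -> 1.0 8.0
```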
Theorem 2.2.1. Suppose that \( {r}_{1} \) and \( {r}_{2} \) are the roots of the characteristic equation.

(i) If \( {r}_{1} \) and \( {r}_{2} \) are distinct and real (when \( {b}^{2} - {4ac} > 0 \)), then (2.3) has the general solution

\[ y = {C}_{1}{e}^{{r}_{1}x} + {C}_{2}{e}^{{r}_{2}x}. \]

(ii) If \( {r}_{1} = {r}_{2} \) (happens when \( {b}^{2} - {4ac} = 0 \)), then (2.3) has the general solution

\[ y = \left( {{C}_{1} + {C}_{2}x}\right) {e}^{{r}_{1}x}. \]
Example 2.2.1: Solve

\[ {y}^{\prime \prime } - {k}^{2}y = 0. \]

The characteristic equation is \( {r}^{2} - {k}^{2} = 0 \) or \( \left( {r - k}\right) \left( {r + k}\right) = 0 \). Consequently, \( {e}^{-{kx}} \) and \( {e}^{kx} \) are the two linearly independent solutions, and the general solution is

\[ y = {C}_{1}{e}^{kx} + {C}_{2}{e}^{-{kx}}. \]

Since \( \cosh s = \frac{{e}^{s} + {e}^{-s}}{2} \) and \( \sinh s = \frac{{e}^{s} - {e}^{-s}}{2} \), we can also write the general solution as

\[ y = {D}_{1}\cosh \left( {kx}\right) + {D}_{2}\sinh \left( {kx}\right) . \]

Example 2.2.2: Find the general solution of

\[ {y}^{\prime \prime } - 8{y}^{\prime } + {16y} = 0. \]

The characteristic equation is \( {r}^{2} - {8r} + {16} = {\left( r - 4\right) }^{2} = 0 \). The equation has a double root \( {r}_{1} = {r}_{2} = 4 \). The general solution is, therefore,

\[ y = \left( {{C}_{1} + {C}_{2}x}\right) {e}^{4x} = {C}_{1}{e}^{4x} + {C}_{2}x{e}^{4x}. \]
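The double-root case is the less obvious one, so here is a numerical spot-check (finite differences at a few arbitrary points) that both \( e^{4x} \) and \( x e^{4x} \) solve the second example:

```python
import math

def residual(f, x, h=1e-4):
    # finite-difference residual of y'' - 8y' + 16y at x
    d1 = (f(x + h) - f(x - h)) / (2 * h)
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
    return d2 - 8 * d1 + 16 * f(x)

for f in (lambda x: math.exp(4 * x), lambda x: x * math.exp(4 * x)):
    for x in (0.0, 0.5, 1.0):
        assert abs(residual(f, x)) < 1e-3
print("both e^{4x} and x e^{4x} solve y'' - 8y' + 16y = 0")
```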
Theorem 2.2.3. Take the equation

\[ a{y}^{\prime \prime } + b{y}^{\prime } + {cy} = 0. \]

If the characteristic equation has the roots \( \alpha \pm {i\beta } \) (when \( {b}^{2} - {4ac} < 0 \)), then the general solution is

\[ y = {C}_{1}{e}^{\alpha x}\cos \left( {\beta x}\right) + {C}_{2}{e}^{\alpha x}\sin \left( {\beta x}\right) . \]
Example 2.2.3: Find the general solution of \( {y}^{\prime \prime } + {k}^{2}y = 0 \), for a constant \( k > 0 \).

The characteristic equation is \( {r}^{2} + {k}^{2} = 0 \). Therefore, the roots are \( r = \pm {ik} \), and by the theorem, we have the general solution

\[ y = {C}_{1}\cos \left( {kx}\right) + {C}_{2}\sin \left( {kx}\right) . \]
Theorem 2.5.1. Let \( {Ly} = f\left( x\right) \) be a linear ODE (not necessarily constant coefficient). Let \( {y}_{c} \) be the complementary solution (the general solution to the associated homogeneous equation \( {Ly} = 0 \)) and let \( {y}_{p} \) be any particular solution to \( {Ly} = f\left( x\right) \). Then the general solution to \( {Ly} = f\left( x\right) \) is

\[ y = {y}_{c} + {y}_{p}. \]
The moral of the story is that we can find the particular solution in any old way. If we find a different particular solution (by a different method, or simply by guessing), then we still get the same general solution. The formula may look different, and the constants we have to choose to satisfy the initial conditions may be different, but it is the same solution.
Theorem 3.2.1. An \( n \times n \) matrix \( A \) is invertible if and only if \( \det \left( A\right) \neq 0 \) .
In fact, \( \det \left( {A}^{-1}\right) \det \left( A\right) = 1 \) says that \( \det \left( {A}^{-1}\right) = \frac{1}{\det \left( A\right) } \) . So we even know what the determinant of \( {A}^{-1} \) is before we know how to compute \( {A}^{-1} \) .
Theorem 3.3.2. Let \( {\overrightarrow{x}}^{\prime } = P\overrightarrow{x} + \overrightarrow{f} \) be a linear system of ODEs. Suppose \( {\overrightarrow{x}}_{p} \) is one particular solution. Then every solution can be written as

\[ \overrightarrow{x} = {\overrightarrow{x}}_{c} + {\overrightarrow{x}}_{p}, \]

where \( {\overrightarrow{x}}_{c} \) is a solution to the associated homogeneous equation \( \left( {{\overrightarrow{x}}^{\prime } = P\overrightarrow{x}}\right) \).
The procedure for systems is the same as for single equations. We find a particular solution to the nonhomogeneous equation, then we find the general solution to the associated homogeneous equation, and finally we add the two together.
Theorem 3.4.1. Take \( {\overrightarrow{x}}^{\prime } = P\overrightarrow{x} \). If \( P \) is an \( n \times n \) constant matrix that has \( n \) distinct real eigenvalues \( {\lambda }_{1},{\lambda }_{2},\ldots ,{\lambda }_{n} \), then there exist \( n \) linearly independent corresponding eigenvectors \( {\overrightarrow{v}}_{1},{\overrightarrow{v}}_{2},\ldots ,{\overrightarrow{v}}_{n} \), and the general solution to \( {\overrightarrow{x}}^{\prime } = P\overrightarrow{x} \) can be written as

\[ \overrightarrow{x} = {c}_{1}{\overrightarrow{v}}_{1}{e}^{{\lambda }_{1}t} + {c}_{2}{\overrightarrow{v}}_{2}{e}^{{\lambda }_{2}t} + \cdots + {c}_{n}{\overrightarrow{v}}_{n}{e}^{{\lambda }_{n}t}. \]
The corresponding fundamental matrix solution is

\[ X\left( t\right) = \left\lbrack \begin{array}{llll} {\overrightarrow{v}}_{1}{e}^{{\lambda }_{1}t} & {\overrightarrow{v}}_{2}{e}^{{\lambda }_{2}t} & \cdots & {\overrightarrow{v}}_{n}{e}^{{\lambda }_{n}t} \end{array}\right\rbrack . \]

That is, \( X\left( t\right) \) is the matrix whose \( {j}^{\text{th }} \) column is \( {\overrightarrow{v}}_{j}{e}^{{\lambda }_{j}t} \).
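The reason each column solves the system is just the eigenvalue equation \( P\overrightarrow{v} = \lambda \overrightarrow{v} \). A tiny sketch with a made-up matrix:

```python
P = [[2, 1], [1, 2]]    # arbitrary example matrix

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

# eigenpairs of P: lambda = 3 with v = (1, 1), lambda = 1 with v = (1, -1)
for lam, v in [(3, [1, 1]), (1, [1, -1])]:
    # d/dt (v e^{lam t}) = lam v e^{lam t} = P v e^{lam t},
    # so each column of X(t) solves x' = P x
    assert matvec(P, v) == [lam * vi for vi in v]
print("P v = lambda v holds for both eigenpairs")
```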
Theorem 3.4.2. Let \( P \) be a real-valued constant matrix. If \( P \) has a complex eigenvalue \( a + {ib} \) and a corresponding eigenvector \( \overrightarrow{v} \), then \( P \) also has a complex eigenvalue \( a - {ib} \) with a corresponding eigenvector \( \overline{\overrightarrow{v}} \) . Furthermore, \( {\overrightarrow{x}}^{\prime } = P\overrightarrow{x} \) has two linearly independent real-valued solutions
\[ {\overrightarrow{x}}_{1} = \operatorname{Re}\overrightarrow{v}{e}^{\left( {a + {ib}}\right) t},\;\text{ and }\;{\overrightarrow{x}}_{2} = \operatorname{Im}\overrightarrow{v}{e}^{\left( {a + {ib}}\right) t}. \]
Let \( P \) be an \( n \times n \) matrix. Then the general solution to \( {\overrightarrow{x}}^{\prime } = P\overrightarrow{x} \) is

\[ \overrightarrow{x} = {e}^{tP}\overrightarrow{c}, \]

where \( \overrightarrow{c} \) is an arbitrary constant vector. In fact, \( \overrightarrow{x}\left( 0\right) = \overrightarrow{c} \).
Let us check:

\[ \frac{d}{dt}\overrightarrow{x} = \frac{d}{dt}\left( {{e}^{tP}\overrightarrow{c}}\right) = P{e}^{tP}\overrightarrow{c} = P\overrightarrow{x}. \]

Hence \( {e}^{tP} \) is a fundamental matrix solution of the homogeneous system. So if we can compute the matrix exponential, we have another method of solving constant coefficient homogeneous systems. It also makes it easy to solve for initial conditions. To solve \( {\overrightarrow{x}}^{\prime } = A\overrightarrow{x} \), \( \overrightarrow{x}\left( 0\right) = \overrightarrow{b} \), we take the solution

\[ \overrightarrow{x} = {e}^{tA}\overrightarrow{b}. \]

This equation follows because \( {e}^{0A} = I \), so \( \overrightarrow{x}\left( 0\right) = {e}^{0A}\overrightarrow{b} = \overrightarrow{b} \).
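A minimal sketch of this idea: compute \( e^{tP} \) by truncating the power series \( I + tP + \frac{(tP)^2}{2!} + \cdots \). The matrix below is an arbitrary example whose exponential happens to be a rotation matrix:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(P, t, terms=30):
    # truncated power series e^{tP} = I + tP + (tP)^2/2! + ...
    n = len(P)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        # term becomes (tP)^k / k!
        term = matmul(term, [[t * x / k for x in row] for row in P])
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

# P generates rotations: e^{tP} = [[cos t, sin t], [-sin t, cos t]]
P = [[0, 1], [-1, 0]]
print(expm(P, 0.0))    # the identity matrix, so x(0) = c as claimed
```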
Theorem 4.1.1. Suppose that \( {x}_{1}\left( t\right) \) and \( {x}_{2}\left( t\right) \) are two eigenfunctions of the problem (4.1), (4.2) or (4.3) for two different eigenvalues \( {\lambda }_{1} \) and \( {\lambda }_{2} \). Then they are orthogonal in the sense that

\[ {\int }_{a}^{b}{x}_{1}\left( t\right) {x}_{2}\left( t\right) {dt} = 0. \]
The terminology comes from the fact that the integral is a type of inner product. We will expand on this in the next section. The theorem has a very short, elegant, and illuminating proof so let us give it here. First, we have the following two equations.

\[ {x}_{1}^{\prime \prime } + {\lambda }_{1}{x}_{1} = 0\;\text{ and }\;{x}_{2}^{\prime \prime } + {\lambda }_{2}{x}_{2} = 0. \]

Multiply the first by \( {x}_{2} \) and the second by \( {x}_{1} \) and subtract to get

\[ \left( {{\lambda }_{1} - {\lambda }_{2}}\right) {x}_{1}{x}_{2} = {x}_{2}^{\prime \prime }{x}_{1} - {x}_{2}{x}_{1}^{\prime \prime }. \]

Now integrate both sides of the equation:

\[ \left( {{\lambda }_{1} - {\lambda }_{2}}\right) {\int }_{a}^{b}{x}_{1}{x}_{2}{dt} = {\int }_{a}^{b}{x}_{2}^{\prime \prime }{x}_{1} - {x}_{2}{x}_{1}^{\prime \prime }{dt} \]

\[ = {\int }_{a}^{b}\frac{d}{dt}\left( {{x}_{2}^{\prime }{x}_{1} - {x}_{2}{x}_{1}^{\prime }}\right) {dt} \]

\[ = {\left\lbrack {x}_{2}^{\prime }{x}_{1} - {x}_{2}{x}_{1}^{\prime }\right\rbrack }_{t = a}^{b} = 0. \]

The last equality holds because of the boundary conditions. For example, if we consider (4.1) we have \( {x}_{1}\left( a\right) = {x}_{1}\left( b\right) = {x}_{2}\left( a\right) = {x}_{2}\left( b\right) = 0 \) and so \( {x}_{2}^{\prime }{x}_{1} - {x}_{2}{x}_{1}^{\prime } \) is zero at both \( a \) and \( b \). As \( {\lambda }_{1} \neq {\lambda }_{2} \), the theorem follows.
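The orthogonality relation is easy to confirm numerically for the eigenfunctions \( \sin \left( {nt}\right) \) of (4.1) with \( a = 0 \), \( b = \pi \); a sketch using the midpoint rule:

```python
import math

def midpoint_integral(f, a, b, steps=20000):
    # midpoint-rule approximation of the integral of f over [a, b]
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

# eigenfunctions sin(n t) for distinct eigenvalues n^2 are orthogonal
for n, m in [(1, 2), (2, 3), (1, 3)]:
    inner = midpoint_integral(lambda t: math.sin(n * t) * math.sin(m * t),
                              0, math.pi)
    assert abs(inner) < 1e-6
# ... while the inner product of an eigenfunction with itself is not zero:
print(midpoint_integral(lambda t: math.sin(t) ** 2, 0, math.pi))  # ~ pi/2
```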
Theorem 4.1.2 (Fredholm alternative*). Exactly one of the following statements holds. Either

\[ {x}^{\prime \prime } + {\lambda x} = 0,\;x\left( a\right) = 0,\;x\left( b\right) = 0 \]

(4.4)

has a nonzero solution, or

\[ {x}^{\prime \prime } + {\lambda x} = f\left( t\right) ,\;x\left( a\right) = 0,\;x\left( b\right) = 0 \]

(4.5)

has a unique solution for every function \( f \) continuous on \( \left\lbrack {a, b}\right\rbrack \).
The theorem is also true for the other types of boundary conditions we considered. The theorem means that if \\( \\lambda \\) is not an eigenvalue, the nonhomogeneous equation (4.5) has a unique solution for every right-hand side. On the other hand if \\( \\lambda \\) is an eigenvalue, then (4.5) need not have a solution for every \\( f \\), and furthermore, even if it happens to have a solution, the solution is not unique.
Yes
Theorem 4.3.1. Suppose \( f\left( t\right) \) is a 2L-periodic piecewise smooth function. Let\n\n\[ \n\frac{{a}_{0}}{2} + \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}\cos \left( {\frac{n\pi }{L}t}\right) + {b}_{n}\sin \left( {\frac{n\pi }{L}t}\right)\n\]\n\nbe the Fourier series for \( f\left( t\right) \) . Then the series converges for all \( t \) . If \( f\left( t\right) \) is continuous at \( t \), then\n\n\[ \nf\left( t\right) = \frac{{a}_{0}}{2} + \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}\cos \left( {\frac{n\pi }{L}t}\right) + {b}_{n}\sin \left( {\frac{n\pi }{L}t}\right) .\n\]\n\nOtherwise,\n\n\[ \n\frac{f\left( {t - }\right) + f\left( {t + }\right) }{2} = \frac{{a}_{0}}{2} + \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}\cos \left( {\frac{n\pi }{L}t}\right) + {b}_{n}\sin \left( {\frac{n\pi }{L}t}\right) .\n\]
If we happen to have that \( f\left( t\right) = \frac{f\left( {t - }\right) + f\left( {t + }\right) }{2} \) at all the discontinuities, the Fourier series converges to \( f\left( t\right) \) everywhere. We can always just redefine \( f\left( t\right) \) by changing the value at each discontinuity appropriately. Then we can write an equals sign between \( f\left( t\right) \) and the series without any worry. We mentioned this fact briefly at the end of the last section.
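The behavior at a jump can be observed numerically. The Python sketch below (the square wave and the truncation levels are illustrative choices, not from the text) sums partial Fourier series of the \( {2\pi } \) -periodic square wave, whose series is \( \frac{4}{\pi }\mathop{\sum }\limits_{k}\frac{\sin \left( {\left( {{2k} - 1}\right) t}\right) }{{2k} - 1} \):

```python
import math

def square_partial(t, N):
    # Partial Fourier sum of the 2*pi-periodic square wave
    # f(t) = -1 on (-pi, 0) and f(t) = 1 on (0, pi).
    return (4 / math.pi) * sum(
        math.sin((2 * k - 1) * t) / (2 * k - 1) for k in range(1, N + 1))

# At a point of continuity the partial sums approach f(t) = 1.
assert abs(square_partial(1.0, 5000) - 1.0) < 1e-3

# At the jump t = 0, every partial sum equals (f(0-) + f(0+)) / 2 = 0.
assert square_partial(0.0, 100) == 0.0
```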
Yes
Theorem 4.3.2. Suppose\n\n\[ f\left( t\right) = \frac{{a}_{0}}{2} + \mathop{\sum }\limits_{{n = 1}}^{\infty }{a}_{n}\cos \left( {\frac{n\pi }{L}t}\right) + {b}_{n}\sin \left( {\frac{n\pi }{L}t}\right) \]\n\nis a piecewise smooth continuous function and the derivative \( {f}^{\prime }\left( t\right) \) is piecewise smooth. Then the derivative can be obtained by differentiating term by term,
\[ {f}^{\prime }\left( t\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }\frac{-{a}_{n}{n\pi }}{L}\sin \left( {\frac{n\pi }{L}t}\right) + \frac{{b}_{n}{n\pi }}{L}\cos \left( {\frac{n\pi }{L}t}\right) . \]
Yes
Theorem 4.3.3. Suppose\n\n\\[ \nf\\left( t\\right) = \\frac{{a}_{0}}{2} + \\mathop{\\sum }\\limits_{{n = 1}}^{\\infty }{a}_{n}\\cos \\left( {\\frac{n\\pi }{L}t}\\right) + {b}_{n}\\sin \\left( {\\frac{n\\pi }{L}t}\\right) \n\\]\n\nis a piecewise smooth function. Then the antiderivative is obtained by antidifferentiating term by term and so\n\n\\[ \nF\\left( t\\right) = \\frac{{a}_{0}t}{2} + C + \\mathop{\\sum }\\limits_{{n = 1}}^{\\infty }\\frac{{a}_{n}L}{n\\pi }\\sin \\left( {\\frac{n\\pi }{L}t}\\right) + \\frac{-{b}_{n}L}{n\\pi }\\cos \\left( {\\frac{n\\pi }{L}t}\\right) ,\n\\]\n\nwhere \\( {F}^{\\prime }\\left( t\\right) = f\\left( t\\right) \\) and \\( C \\) is an arbitrary constant.
Note that the series for \\( F\\left( t\\right) \\) is no longer a Fourier series as it contains the \\( \\frac{{a}_{0}t}{2} \\) term. The antiderivative of a periodic function need no longer be periodic and so we should not expect a Fourier series.
Yes
Theorem 4.7.1. Take the equation\n\n\[ \n{y}_{tt} = {a}^{2}{y}_{xx} \]\n\n\[ \ny\left( {0, t}\right) = y\left( {L, t}\right) = 0, \]\n\n(4.14)\n\n\[ \ny\left( {x,0}\right) = f\left( x\right) \;\text{for}\;0 < x < L, \]\n\n\[ \n{y}_{t}\left( {x,0}\right) = g\left( x\right) \;\text{ for }0 < x < L, \]\n\nwhere\n\n\[ \nf\left( x\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }{c}_{n}\sin \left( {\frac{n\pi }{L}x}\right) \;\text{ and }\;g\left( x\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }{b}_{n}\sin \left( {\frac{n\pi }{L}x}\right) . \]\n\nThen the solution \( y\left( {x, t}\right) \) can be written as a sum of the solutions of (4.11) and (4.12):
\[ \ny\left( {x, t}\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }{b}_{n}\frac{L}{n\pi a}\sin \left( {\frac{n\pi }{L}x}\right) \sin \left( {\frac{n\pi a}{L}t}\right) + {c}_{n}\sin \left( {\frac{n\pi }{L}x}\right) \cos \left( {\frac{n\pi a}{L}t}\right) \]\n\n\[ \n= \mathop{\sum }\limits_{{n = 1}}^{\infty }\sin \left( {\frac{n\pi }{L}x}\right) \left\lbrack {{b}_{n}\frac{L}{n\pi a}\sin \left( {\frac{n\pi a}{L}t}\right) + {c}_{n}\cos \left( {\frac{n\pi a}{L}t}\right) }\right\rbrack . \]\n
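One can check a single term of this series against the wave equation by finite differences. The Python sketch below (the values of \( L \), \( a \), \( n \), the sample point, and the step size are illustrative assumptions) verifies \( {y}_{tt} = {a}^{2}{y}_{xx} \) and the boundary conditions for one mode:

```python
import math

L, a, n = 1.0, 2.0, 3
# One term of the series: sin(n pi x / L) cos(n pi a t / L)  (taking c_n = 1).
y = lambda x, t: math.sin(n * math.pi * x / L) * math.cos(n * math.pi * a * t / L)

h = 1e-4
x0, t0 = 0.3, 0.7
# Central second differences approximate y_tt and y_xx.
y_tt = (y(x0, t0 + h) - 2 * y(x0, t0) + y(x0, t0 - h)) / h ** 2
y_xx = (y(x0 + h, t0) - 2 * y(x0, t0) + y(x0 - h, t0)) / h ** 2

assert abs(y_tt - a ** 2 * y_xx) < 1e-2   # satisfies y_tt = a^2 y_xx
assert abs(y(0.0, t0)) < 1e-12            # y(0, t) = 0
assert abs(y(L, t0)) < 1e-12              # y(L, t) = 0
```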
Yes
Put the following equation into the form (5.1):\n\n\[ \n{x}^{2}{y}^{\prime \prime } + x{y}^{\prime } + \left( {\lambda {x}^{2} - {n}^{2}}\right) y = 0.\n\]
Multiply both sides by \( \frac{1}{x} \) to obtain\n\n\[ \n\frac{1}{x}\left( {{x}^{2}{y}^{\prime \prime } + x{y}^{\prime } + \left( {\lambda {x}^{2} - {n}^{2}}\right) y}\right) = x{y}^{\prime \prime } + {y}^{\prime } + \left( {{\lambda x} - \frac{{n}^{2}}{x}}\right) y\n\]\n\n\[ \n= \frac{d}{dx}\left( {x\frac{dy}{dx}}\right) - \frac{{n}^{2}}{x}y + {\lambda xy} = 0.\n\]
Yes
Theorem 5.1.1. Suppose \( p\left( x\right) ,{p}^{\prime }\left( x\right), q\left( x\right) \) and \( r\left( x\right) \) are continuous on \( \left\lbrack {a, b}\right\rbrack \) and suppose \( p\left( x\right) > 0 \) and \( r\left( x\right) > 0 \) for all \( x \) in \( \left\lbrack {a, b}\right\rbrack \) . Then the Sturm-Liouville problem (5.2) has an increasing sequence of eigenvalues
\[ {\lambda }_{1} < {\lambda }_{2} < {\lambda }_{3} < \cdots \] such that \[ \mathop{\lim }\limits_{{n \rightarrow \infty }}{\lambda }_{n} = + \infty \] and such that to each \( {\lambda }_{n} \) there is (up to a constant multiple) a single eigenfunction \( {y}_{n}\left( x\right) \) . Moreover, if \( q\left( x\right) \geq 0 \) and \( {\alpha }_{1},{\alpha }_{2},{\beta }_{1},{\beta }_{2} \geq 0 \), then \( {\lambda }_{n} \geq 0 \) for all \( n \) .
Yes
Theorem 5.1.4. Suppose \( f \) is a piecewise smooth continuous function on \( \left\lbrack {a, b}\right\rbrack \) . If \( {y}_{1},{y}_{2},\ldots \) are eigenfunctions of a regular Sturm-Liouville problem, one for each eigenvalue, then there exist real constants \( {c}_{1},{c}_{2},\ldots \) given by (5.4) such that (5.3) converges and holds for \( a < x < b \) .
Example 5.1.4: Consider\n\n\[ \n{y}^{\prime \prime } + {\lambda y} = 0,\;0 < x < \pi /2, \]\n\n\[ \ny\left( 0\right) = 0,\;{y}^{\prime }\left( {\pi /2}\right) = 0. \]\n\nThe above is a regular Sturm-Liouville problem, and Theorem 5.1.1 on page 275 says that if \( \lambda \) is an eigenvalue then \( \lambda \geq 0 \) .\n\nSuppose \( \lambda = 0 \) . The general solution is \( y\left( x\right) = {Ax} + B \) . We plug in the boundary conditions to get \( 0 = y\left( 0\right) = B \), and \( 0 = {y}^{\prime }\left( {\pi /2}\right) = A \) . Hence \( \lambda = 0 \) is not an eigenvalue.\n\nSo let us consider \( \lambda > 0 \), where the general solution is\n\n\[ \ny\left( x\right) = A\cos \left( {\sqrt{\lambda }x}\right) + B\sin \left( {\sqrt{\lambda }x}\right) . \]\n\nPlugging in the boundary conditions we get \( 0 = y\left( 0\right) = A \) and \( 0 = {y}^{\prime }\left( {\pi /2}\right) = \sqrt{\lambda }\;B\cos \left( {\sqrt{\lambda }\;\frac{\pi }{2}}\right) . \) Since \( A \) is zero, \( B \) cannot be zero. Hence \( \cos \left( {\sqrt{\lambda }\frac{\pi }{2}}\right) = 0 \) . This means that \( \sqrt{\lambda }\frac{\pi }{2} \) is an odd integral multiple of \( \pi /2 \), i.e. \( \left( {{2n} - 1}\right) \frac{\pi }{2} = \sqrt{{\lambda }_{n}}\frac{\pi }{2} \) . Solving for \( {\lambda }_{n} \) we get\n\n\[ \n{\lambda }_{n} = {\left( 2n - 1\right) }^{2} \]\n\nWe can take \( B = 1 \) . Our eigenfunctions are\n\n\[ \n{y}_{n}\left( x\right) = \sin \left( {\left( {{2n} - 1}\right) x}\right) . 
\]\n\nA little bit of calculus shows\n\n\[ \n{\int }_{0}^{\frac{\pi }{2}}{\left( \sin \left( \left( 2n - 1\right) x\right) \right) }^{2}{dx} = \frac{\pi }{4} \]\n\nSo any piecewise smooth function \( f\left( x\right) \) on \( \left\lbrack {0,\pi /2}\right\rbrack \) can be written as\n\n\[ \nf\left( x\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }{c}_{n}\sin \left( {\left( {{2n} - 1}\right) x}\right) \]\n\nwhere\n\n\[ \n{c}_{n} = \frac{\left\langle f,{y}_{n}\right\rangle }{\left\langle {y}_{n},{y}_{n}\right\rangle } = \frac{{\int }_{0}^{\frac{\pi }{2}}f\left( x\right) \sin \left( {\left( {{2n} - 1}\right) x}\right) {dx}}{{\int }_{0}^{\frac{\pi }{2}}{\left( \sin \left( \left( 2n - 1\right) x\right) \right) }^{2}{dx}} = \frac{4}{\pi }{\int }_{0}^{\frac{\pi }{2}}f\left( x\right) \sin \left( {\left( {{2n} - 1}\right) x}\right) {dx}. \]\n\nNote that the series converges to an odd \( {2\pi } \) -periodic extension of \( f\left( x\right) \) . With the regular sine series we would expect a function with period \( 2\frac{\pi }{2} = \pi \) .
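The expansion can be tested numerically for a concrete \( f \) . In this Python sketch (the choice \( f\left( x\right) = x \), the quadrature resolution, the truncation at 100 terms, and the sample point are illustrative assumptions), the coefficients \( {c}_{n} \) are computed by quadrature and the partial sum is compared with \( f \):

```python
import math

def trapezoid(f, a, b, n=2000):
    # Composite trapezoid rule for the integral of f over [a, b].
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + k * h) for k in range(1, n)))

f = lambda x: x                              # a piecewise smooth test function
y = lambda n, x: math.sin((2 * n - 1) * x)   # eigenfunctions

# c_n = (4/pi) * integral_0^{pi/2} f(x) sin((2n-1)x) dx
c = [(4 / math.pi) * trapezoid(lambda x, n=n: f(x) * y(n, x), 0.0, math.pi / 2)
     for n in range(1, 101)]

x0 = 0.7
approx = sum(cn * y(n, x0) for n, cn in enumerate(c, start=1))
assert abs(approx - f(x0)) < 1e-2   # the truncated series approximates f
```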
Yes
Theorem 6.1.1 (Linearity of the Laplace transform). Suppose that \( A, B \), and \( C \) are constants, then\n\n\[ \mathcal{L}\{ {Af}\left( t\right) + {Bg}\left( t\right) \} = A\mathcal{L}\{ f\left( t\right) \} + B\mathcal{L}\{ g\left( t\right) \} \]\n\nand in particular\n\n\[ \mathcal{L}\{ {Cf}\left( t\right) \} = C\mathcal{L}\{ f\left( t\right) \} \]
Exercise 6.1.2: Verify the theorem. That is, show that \( \mathcal{L}\{ {Af}\left( t\right) + {Bg}\left( t\right) \} = A\mathcal{L}\{ f\left( t\right) \} + \) \( B\mathcal{L}\{ g\left( t\right) \} \) .
No
Theorem 6.1.2 (Existence). Let \( f\left( t\right) \) be continuous and of exponential order for a certain constant c. Then \( F\left( s\right) = \mathcal{L}\{ f\left( t\right) \} \) is defined for all \( s > c \) .
The existence is not difficult to see. Let \( f\left( t\right) \) be of exponential order, that is \( \left| {f\left( t\right) }\right| \leq M{e}^{ct} \) for all \( t > 0 \) (for simplicity \( {t}_{0} = 0 \) ). Let \( s > c \), or in other words \( \left( {c - s}\right) < 0 \) . By the comparison theorem from calculus, the improper integral defining \( \mathcal{L}\{ f\left( t\right) \} \) exists if the following integral exists\n\n\[ \n{\int }_{0}^{\infty }{e}^{-{st}}\left( {M{e}^{ct}}\right) {dt} = M{\int }_{0}^{\infty }{e}^{\left( {c - s}\right) t}{dt} = M{\left\lbrack \frac{{e}^{\left( {c - s}\right) t}}{c - s}\right\rbrack }_{t = 0}^{\infty } = \frac{M}{s - c}.\n\]
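The computation above can be mirrored numerically. This Python sketch (the values of \( c \), \( s \), the truncation point \( T \), and the step count are illustrative assumptions) approximates the Laplace transform of \( {e}^{ct} \) by a truncated trapezoid rule and compares it with \( \frac{1}{s - c} \):

```python
import math

def laplace(f, s, T=60.0, n=50000):
    # Truncated trapezoid approximation of the improper integral
    # int_0^infty e^{-s t} f(t) dt (the tail beyond T is negligible here).
    h = T / n
    g = lambda t: math.exp(-s * t) * f(t)
    return h * (0.5 * g(0.0) + 0.5 * g(T) + sum(g(k * h) for k in range(1, n)))

c, s = 1.0, 2.0                        # exponential order c, with s > c
F = laplace(lambda t: math.exp(c * t), s)
assert abs(F - 1.0 / (s - c)) < 1e-4   # matches M/(s - c) with M = 1
```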
Yes
Theorem 7.1.1. For a power series (7.1), there exists a number \( \rho \) (we allow \( \rho = \infty \) ) called the radius of convergence such that the series converges absolutely on the interval \( \left( {{x}_{0} - \rho ,{x}_{0} + \rho }\right) \) and diverges for \( x < {x}_{0} - \rho \) and \( x > {x}_{0} + \rho \) . We write \( \rho = \infty \) if the series converges for all \( x \) .
A useful test for convergence of a series is the ratio test. Suppose that\n\n\[ \mathop{\sum }\limits_{{k = 0}}^{\infty }{c}_{k} \]\n\nis a series and the limit\n\n\[ L = \mathop{\lim }\limits_{{k \rightarrow \infty }}\left| \frac{{c}_{k + 1}}{{c}_{k}}\right| \]\n\nexists. Then the series converges absolutely if \( L < 1 \) and diverges if \( L > 1 \) .\n\nWe apply this test to the series (7.1). Let \( {c}_{k} = {a}_{k}{\left( x - {x}_{0}\right) }^{k} \) in the test. Compute\n\n\[ L = \mathop{\lim }\limits_{{k \rightarrow \infty }}\left| \frac{{c}_{k + 1}}{{c}_{k}}\right| = \mathop{\lim }\limits_{{k \rightarrow \infty }}\left| \frac{{a}_{k + 1}{\left( x - {x}_{0}\right) }^{k + 1}}{{a}_{k}{\left( x - {x}_{0}\right) }^{k}}\right| = \mathop{\lim }\limits_{{k \rightarrow \infty }}\left| \frac{{a}_{k + 1}}{{a}_{k}}\right| \left| {x - {x}_{0}}\right| .\n\]\n\nDefine \( A \) by\n\n\[ A = \mathop{\lim }\limits_{{k \rightarrow \infty }}\left| \frac{{a}_{k + 1}}{{a}_{k}}\right| \]\n\nThen if \( 1 > L = A\left| {x - {x}_{0}}\right| \) the series (7.1) converges absolutely. If \( A = 0 \), then the series always converges. If \( A > 0 \), then the series converges absolutely if \( \left| {x - {x}_{0}}\right| < 1/A \), and diverges if \( \left| {x - {x}_{0}}\right| > 1/A \) . That is, the radius of convergence is \( 1/A \) .
Yes
Theorem 7.1.2 (Ratio and root tests for power series). Consider a power series\n\n\\[ \n\\mathop{\\sum }\\limits_{{k = 0}}^{\\infty }{a}_{k}{\\left( x - {x}_{0}\\right) }^{k}\n\\]\n\n such that\n\\[ \nA = \\mathop{\\lim }\\limits_{{k \\rightarrow \\infty }}\\left| \\frac{{a}_{k + 1}}{{a}_{k}}\\right| \\;\\text{ or }\\;A = \\mathop{\\lim }\\limits_{{k \\rightarrow \\infty }}\\sqrt[k]{\\left| {a}_{k}\\right| }\n\\]\n\n exists. If \\( A = 0 \\), then the radius of convergence of the series is \\( \\infty \\) . Otherwise, the radius of convergence is \\( 1/A \\) .
Example 7.1.3: Suppose we have the series\n\n\\[ \n\\mathop{\\sum }\\limits_{{k = 0}}^{\\infty }{2}^{-k}{\\left( x - 1\\right) }^{k}\n\\]\n\nFirst we compute the limit in the ratio test,\n\n\\[ \nA = \\mathop{\\lim }\\limits_{{k \\rightarrow \\infty }}\\left| \\frac{{a}_{k + 1}}{{a}_{k}}\\right| = \\mathop{\\lim }\\limits_{{k \\rightarrow \\infty }}\\left| \\frac{{2}^{-k - 1}}{{2}^{-k}}\\right| = \\mathop{\\lim }\\limits_{{k \\rightarrow \\infty }}{2}^{-1} = 1/2.\n\\]\n\nTherefore the radius of convergence is 2, and the series converges absolutely on the interval \\( \\left( {-1,3}\\right) \\) . And we could just as well have used the root test:\n\n\\[ \nA = \\mathop{\\lim }\\limits_{{k \\rightarrow \\infty }}\\sqrt[k]{\\left| {a}_{k}\\right| } = \\mathop{\\lim }\\limits_{{k \\rightarrow \\infty }}\\sqrt[k]{\\left| {2}^{-k}\\right| } = \\mathop{\\lim }\\limits_{{k \\rightarrow \\infty }}{2}^{-1} = 1/2.\n\\]
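Both limits are easy to probe numerically; for \( {a}_{k} = {2}^{-k} \) the ratio and root expressions are already constant in \( k \) . A small Python sketch (index 50 is an arbitrary large index):

```python
a = lambda k: 2.0 ** (-k)

A_ratio = abs(a(51) / a(50))      # |a_{k+1} / a_k| at a large index k
A_root = abs(a(50)) ** (1 / 50)   # |a_k|^(1/k) at the same index

assert abs(A_ratio - 0.5) < 1e-12
assert abs(A_root - 0.5) < 1e-12
assert abs(1 / A_ratio - 2.0) < 1e-9   # radius of convergence 1/A = 2
```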
Yes
Theorem 7.3.1 (Method of Frobenius). Suppose that\n\n\[ p\left( x\right) {y}^{\prime \prime } + q\left( x\right) {y}^{\prime } + r\left( x\right) y = 0 \]\n\nhas a regular singular point at \( x = 0 \), then there exists at least one solution of the form\n\n\[ y = {x}^{r}\mathop{\sum }\limits_{{k = 0}}^{\infty }{a}_{k}{x}^{k} \]
A solution of this form is called a Frobenius-type solution.\n\n---\n\n*Named after the German mathematician Ferdinand Georg Frobenius (1849-1917).\n\n---\n\nThe method usually breaks down like this:\n\n(i) We seek a Frobenius-type solution of the form\n\n\[ y = \mathop{\sum }\limits_{{k = 0}}^{\infty }{a}_{k}{x}^{k + r} \]\n\nWe plug this \( y \) into equation (7.3). We collect terms and write everything as a single series.\n\n(ii) The obtained series must be zero. Setting the first coefficient (usually the coefficient of \( {x}^{r} \) ) in the series to zero we obtain the indicial equation, which is a quadratic polynomial in \( r \).\n\n(iii) If the indicial equation has two real roots \( {r}_{1} \) and \( {r}_{2} \) such that \( {r}_{1} - {r}_{2} \) is not an integer, then we have two linearly independent Frobenius-type solutions. Using the first root, we plug in\n\n\[ {y}_{1} = {x}^{{r}_{1}}\mathop{\sum }\limits_{{k = 0}}^{\infty }{a}_{k}{x}^{k} \]\n\nand we solve for all \( {a}_{k} \) to obtain the first solution. 
Then using the second root, we plug in\n\n\[ {y}_{2} = {x}^{{r}_{2}}\mathop{\sum }\limits_{{k = 0}}^{\infty }{b}_{k}{x}^{k} \]\n\nand solve for all \( {b}_{k} \) to obtain the second solution.\n\n(iv) If the indicial equation has a doubled root \( r \), then we find one solution\n\n\[ {y}_{1} = {x}^{r}\mathop{\sum }\limits_{{k = 0}}^{\infty }{a}_{k}{x}^{k} \]\n\nand then we obtain a new solution by plugging\n\n\[ {y}_{2} = {x}^{r}\mathop{\sum }\limits_{{k = 0}}^{\infty }{b}_{k}{x}^{k} + \left( {\ln x}\right) {y}_{1} \]\n\ninto equation (7.3) and solving for the constants \( {b}_{k} \).\n\n(v) If the indicial equation has two real roots such that \( {r}_{1} - {r}_{2} \) is an integer, then one solution is\n\n\[ {y}_{1} = {x}^{{r}_{1}}\mathop{\sum }\limits_{{k = 0}}^{\infty }{a}_{k}{x}^{k} \]\n\nand the second linearly independent solution is of the form\n\n\[ {y}_{2} = {x}^{{r}_{2}}\mathop{\sum }\limits_{{k = 0}}^{\infty }{b}_{k}{x}^{k} + C\left( {\ln x}\right) {y}_{1} \]\n\nwhere we plug \( {y}_{2} \) into (7.3) and solve for the constants \( {b}_{k} \) and \( C \).\n\n(vi) Finally, if the indicial equation has complex roots, then solving for \( {a}_{k} \) in the solution\n\n\[ y = {x}^{{r}_{1}}\mathop{\sum }\limits_{{k = 0}}^{\infty }{a}_{k}{x}^{k} \]\n\nresults in a complex-valued function: all the \( {a}_{k} \) are complex numbers. We obtain our two linearly independent solutions* by taking the real and imaginary parts of \( y \).\n\nThe main idea is to find at least one Frobenius-type solution. If we are lucky and find two, we are done. If we only get one, we either use the ideas above or even a different method such as reduction of order (see \( §{2.1} \) ) to obtain a second solution.
Yes
Theorem 8.4.1 (Poincaré-Bendixson*). Suppose \( R \) is a closed bounded region (a region in the plane that includes its boundary and does not have points arbitrarily far from the origin). Suppose \( \left( {x\left( t\right), y\left( t\right) }\right) \) is a solution of (8.2) in \( R \) that exists for all \( t \geq {t}_{0} \) . Then either the solution is a periodic function, or the solution tends towards a periodic solution in R.
The main point of the theorem is that if you find one solution that exists for all \( t \) large enough (that is, as \( t \) goes to infinity) and stays within a bounded region, then you have found either a periodic orbit, or a solution that spirals towards a limit cycle or tends to a critical point. That is, in the long term, the behavior is very close to a periodic function. Note that a constant solution at a critical point is periodic (with any period). The theorem is more a qualitative statement rather than something to help us in computations. In practice it is hard to find analytic solutions and so hard to show rigorously that they exist for all time. But if we think the solution exists we numerically solve for a large time to approximate the limit cycle. Another caveat is that the theorem only works in two dimensions. In three dimensions and higher, there is simply too much room.
Yes
Theorem 8.4.2 (Bendixson-Dulac*). Suppose \( R \) is a simply connected region, and the expression \( {}^{ \dagger } \)\n\n\[ \frac{\partial f}{\partial x} + \frac{\partial g}{\partial y} \]\n\nis either always positive or always negative on \( R \) (except perhaps a small set such as on isolated points or curves) then the system (8.2) has no closed trajectory inside \( R \) .
Example 8.4.3: Let us look at \( {x}^{\prime } = y + {y}^{2}{e}^{x},{y}^{\prime } = x \) in the entire plane (see Example 8.2.2 on page 359). The entire plane is simply connected and so we can apply the theorem. We compute \( \frac{\partial f}{\partial x} + \frac{\partial g}{\partial y} = {y}^{2}{e}^{x} + 0 \) . The function \( {y}^{2}{e}^{x} \) is always positive except on the line \( y = 0 \) . Therefore, via the theorem, the system has no closed trajectories.
Yes
Proposition 2.19. Suppose that \( \sim \) is an equivalence relation on \( X \) . Let \( x, y \in X \) . If \( x \sim y \), then\n\n\[ \left\lbrack x\right\rbrack = \left\lbrack y\right\rbrack \text{.} \]\n\n\( \left( {2.20}\right) \)\n\nIf \( x \) is not equivalent to \( y \) \( \left( {x \nsim y}\right) \), then\n\n\[ \left\lbrack x\right\rbrack \cap \left\lbrack y\right\rbrack = \varnothing \text{.} \]
Proof. (i) Assume \( x \sim y \) . Let us show that \( \left\lbrack x\right\rbrack \subseteq \left\lbrack y\right\rbrack \) . Let \( z \in \left\lbrack x\right\rbrack \) . This means that \( x \sim z \) . Since \( \sim \) is symmetric, and \( x \sim y \), we have \( y \sim x \) . As \( y \sim x \) and \( x \sim z \), by transitivity of \( \sim \) we get that \( y \sim z \) . Therefore \( z \in \left\lbrack y\right\rbrack \) . Since \( z \) is an arbitrary element of \( \left\lbrack x\right\rbrack \), we have shown that \( \left\lbrack x\right\rbrack \subseteq \left\lbrack y\right\rbrack \) .\n\nAs \( y \sim x \), the same argument with \( x \) and \( y \) swapped gives \( \left\lbrack y\right\rbrack \subseteq \left\lbrack x\right\rbrack \) , and therefore \( \left\lbrack x\right\rbrack = \left\lbrack y\right\rbrack \) .\n\n(ii) Now assume that \( x \) and \( y \) are not equivalent. We must show that there is no \( z \) such that \( z \in \left\lbrack x\right\rbrack \) and \( z \in \left\lbrack y\right\rbrack \) . We will argue by contradiction. Suppose there were such a \( z \) . Then we would have\n\n\[ x \sim z\;\text{ and }\;y \sim z. \]\n\nBy symmetry, we have also that \( z \sim y \), and by transitivity, we then have that \( x \sim y \) . This contradicts the assumption that \( x \) is not equivalent to \( y \) . So if \( x \) and \( y \) are not equivalent, no \( z \) can exist that is simultaneously in both \( \left\lbrack x\right\rbrack \) and \( \left\lbrack y\right\rbrack \) . Therefore \( \left\lbrack x\right\rbrack \) and \( \left\lbrack y\right\rbrack \) are disjoint sets, as required.
Yes
Proposition 2.26. If \( a \equiv r{\;\operatorname{mod}\;n} \) and \( b \equiv s{\;\operatorname{mod}\;n} \), then\n\n\[ \n\text{(i)}\;a + b \equiv r + s\;{\;\operatorname{mod}\;n} \n\]\nand\n\n\[ \n\text{(ii)}{ab} \equiv {rs}{\;\operatorname{mod}\;n}\text{.} \n\]
Proof. (i) Assume that \( a \equiv r{\;\operatorname{mod}\;n} \) and \( b \equiv s{\;\operatorname{mod}\;n} \) . Then \( n \mid \left( {a - r}\right) \) and \( n \mid \left( {b - s}\right) \) . So\n\n\[ \nn \mid \left( {a + b - \left( {r + s}\right) }\right) .\n\]\n\nTherefore\n\n\[ \na + b \equiv r + s{\;\operatorname{mod}\;n},\n\]\n\nproving (i).\n\nTo prove (ii), note that there are \( i, j \in \mathbb{Z} \) such that\n\n\[ \na = {ni} + r\n\]\n\nand\n\n\[ \nb = {nj} + s.\n\]\n\nThen\n\n\[ \n{ab} = {n}^{2}{ji} + {rnj} + {sni} + {rs} = n\left( {{nji} + {rj} + {si}}\right) + {rs}.\n\]\n\nTherefore\n\n\[ \nn \mid \left( {{ab} - {rs}}\right)\n\]\n\nand\n\n\[ \n{ab} \equiv {rs}\;{\;\operatorname{mod}\;n}.\n\]\n\nHence the algebraic operations on \( {\mathbb{Z}}_{n} \) are well defined.
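The two congruences are easy to confirm exhaustively for a small modulus. A Python sketch (the modulus 7 and the range of integers are arbitrary choices):

```python
# Check Proposition 2.26 for n = 7 over a range of (possibly negative) integers:
# if a = r (mod n) and b = s (mod n), then a + b = r + s and ab = rs (mod n).
n = 7
for a in range(-20, 21):
    for b in range(-20, 21):
        r, s = a % n, b % n       # canonical representatives
        assert (a + b) % n == (r + s) % n
        assert (a * b) % n == (r * s) % n
```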
Yes
[P \Rightarrow Q] \equiv [(\neg Q) \Rightarrow (\neg P)].
This is a very important example of a propositional equivalence. We will show this by considering all possible assignments of truth values to \( P \) and \( Q \) . Let’s set this up in what is popularly called a truth table. We consider all possible assignments of truth values to \( P \) and \( Q \), and compare the truth values of the compound statements under consideration:\n\n\[ \begin{array}{cc|cc} T\left( P\right) & T\left( Q\right) & T\left( {P \Rightarrow Q}\right) & T\left( {\left( {\neg Q}\right) \Rightarrow \left( {\neg P}\right) }\right) \\ \hline 0 & 0 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 \end{array} \]\n\nEach row of the truth table represents a particular assignment of truth values to the atomic statements \( P \) and \( Q \) . The four possible assignments are exhausted by the rows of the truth table. The truth values of the compound statements agree in each row of the truth table so the statements are equivalent.
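The same truth table can be generated mechanically. A Python sketch enumerating all four assignments:

```python
from itertools import product

# Material conditional: P => Q is false only when P is true and Q is false.
implies = lambda p, q: (not p) or q

# Enumerate all four truth assignments and compare the two compound statements.
for P, Q in product([False, True], repeat=2):
    assert implies(P, Q) == implies(not Q, not P)
```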
Yes
Example 3.17. Prove (3.16) directly.
Let \( x \in \mathbb{N} \) (we treat \( x \) as a fixed but arbitrary element of the natural numbers). If \( x = {4n} \) for some \( n \in \mathbb{N} \), then\n\n\[ \n x = 2 \cdot \left( {2n}\right) \n\]\n\nand is therefore even.
No
Example 3.26. Suppose there are 30 students in a class. Show that at least two of them share the same last initial.
Proof. For each letter A, B,... group all the students with that letter as their last initial. As there are only 26 groups and \( {30} > {26} \) students, at least one group must have more than one student in it.
Yes
Proposition 4.5. Let \( N \in \mathbb{N} \) . Then\n\n\[ \mathop{\sum }\limits_{{n = 0}}^{N}n = \frac{N\left( {N + 1}\right) }{2} \]
Proof. Base case: \( N = 0 \) .\n\nDiscussion. Note that the base case is the statement \( P\left( 0\right) \) .\n\nSince\n\n\[ \mathop{\sum }\limits_{{n = 0}}^{0}n = 0 = \frac{\left( 0\right) \left( 1\right) }{2} \]\n\n\( P\left( 0\right) \) holds.\n\nInduction step:\n\nDiscussion. We prove the universal statement\n\n\[ \left( {\forall x \in \mathbb{N}}\right) P\left( x\right) \Rightarrow P\left( {x + 1}\right) . \]\n\nby showing that for an arbitrary natural number \( N \)\n\n\[ P\left( N\right) \Rightarrow P\left( {N + 1}\right) \]\n\nThus we reduce proving a universal statement to proving an abstract conditional statement. We prove the resulting conditional statement directly. That is, we assume \( P\left( N\right) \) and derive \( P\left( {N + 1}\right) \) . We remind the reader that we are not claiming the result holds at \( N \) - that is, we do not claim \( P\left( N\right) \) . Rather, we are proving the conditional statement by assuming the antecedent, the induction hypothesis, and deriving the consequent. If you do not use the induction hypothesis, you are not arguing by induction. Of course, in the body of the argument this is transparent, without reference to the underlying logical principles.\n\nLet \( N \in \mathbb{N} \) and assume that\n\n\[ \mathop{\sum }\limits_{{n = 0}}^{N}n = \frac{N\left( {N + 1}\right) }{2}. \]\n\nThen\n\n\[ \mathop{\sum }\limits_{{n = 0}}^{{N + 1}}n = \left( {\mathop{\sum }\limits_{{n = 0}}^{N}n}\right) + N + 1 \]\n\n\[ = {}_{IH}\;\frac{N\left( {N + 1}\right) }{2} + N + 1 \]\n\nby the induction hypothesis.\n\nDISCUSSION. It is a good habit, and a consideration for your reader, to identify when you are invoking the induction hypothesis. 
We will use the subscript \( {}_{IH} \) to indicate where we invoke the induction hypothesis.\n\nSo\n\n\[ \mathop{\sum }\limits_{{n = 0}}^{{N + 1}}n = \frac{N\left( {N + 1}\right) }{2} + N + 1 \]\n\n\[ = \frac{N\left( {N + 1}\right) }{2} + \frac{{2N} + 2}{2} \]\n\n\[ = \frac{{N}^{2} + {3N} + 2}{2} \]\n\n\[ = \frac{\left( {N + 1}\right) \left( {\left( {N + 1}\right) + 1}\right) }{2}\text{.} \]\n\nTherefore,\n\n\[ \left( {\forall N \in \mathbb{N}}\right) P\left( N\right) \Rightarrow P\left( {N + 1}\right) . \]\n\nBy the principle of induction, the proposition follows.
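The closed form is also easy to check directly for small \( N \) ; induction is what extends it to all of \( \mathbb{N} \) . A Python sketch:

```python
# Spot-check sum_{n=0}^{N} n = N(N+1)/2 for an initial range of N.
for N in range(200):
    assert sum(range(N + 1)) == N * (N + 1) // 2
```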
Yes
Proposition 4.6. Let \( N \in \mathbb{N} \) . Then\n\n\[ \mathop{\sum }\limits_{{n = 0}}^{N}{n}^{2} = \frac{N\left( {N + 1}\right) \left( {{2N} + 1}\right) }{6}. \]
Proof. The assertion \( P\left( N\right) \) is that the equation (4.7) holds. The base case, \( N = 0 \), is obvious:\n\n\[ \mathop{\sum }\limits_{{n = 0}}^{0}{n}^{2} = \frac{0\left( {0 + 1}\right) \left( {2 \cdot 0 + 1}\right) }{6}. \]\n\nInduction step:\n\nAssume that \( N \in \mathbb{N} \) and\n\n\[ \mathop{\sum }\limits_{{n = 0}}^{N}{n}^{2} = \frac{N\left( {N + 1}\right) \left( {{2N} + 1}\right) }{6}. \]\n\nWe prove that\n\n\[ \mathop{\sum }\limits_{{n = 0}}^{{N + 1}}{n}^{2} = \frac{\left( {N + 1}\right) \left( {N + 2}\right) \left( {{2N} + 3}\right) }{6}. \]\n\nIndeed\n\n\[ \mathop{\sum }\limits_{{n = 0}}^{{N + 1}}{n}^{2} = \left( {\mathop{\sum }\limits_{{n = 0}}^{N}{n}^{2}}\right) + {\left( N + 1\right) }^{2} \]\n\n\[ = {}_{IH}\frac{N\left( {N + 1}\right) \left( {{2N} + 1}\right) }{6} + {\left( N + 1\right) }^{2} \]\n\n\[ = \frac{N\left( {N + 1}\right) \left( {{2N} + 1}\right) + 6{\left( N + 1\right) }^{2}}{6} \]\n\n\[ = \frac{2{N}^{3} + 9{N}^{2} + {13N} + 6}{6} \]\n\n\[ = \frac{\left( {N + 1}\right) \left( {N + 2}\right) \left( {2\left( {N + 1}\right) + 1}\right) }{6}\text{.} \]\n\nThe proposition follows from the principle of induction.
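As with the previous proposition, the identity can be spot-checked directly (the induction proof is what covers every \( N \) ). A Python sketch:

```python
# Spot-check sum_{n=0}^{N} n^2 = N(N+1)(2N+1)/6 for an initial range of N.
for N in range(200):
    assert sum(n * n for n in range(N + 1)) == N * (N + 1) * (2 * N + 1) // 6
```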
Yes
Lemma 4.11. Let \( N \in {\mathbb{N}}^{ + } \) and, for \( 0 \leq n \leq N,{a}_{n} \in \mathbb{R} \) . If \( c \in \mathbb{R} \) , then\n\n\[ \mathop{\sum }\limits_{{n = 0}}^{N}c{a}_{n} = c\left( {\mathop{\sum }\limits_{{n = 0}}^{N}{a}_{n}}\right) . \]
Proof. We argue by induction on \( N \) .\n\nBase case: \( N = 1 \)\n\nLet \( c,{a}_{0},{a}_{1} \in \mathbb{R} \) . By the distributive property,\n\n\[ \mathop{\sum }\limits_{{n = 0}}^{1}c{a}_{n} = c{a}_{0} + c{a}_{1} \]\n\n\[ = c\left( {{a}_{0} + {a}_{1}}\right) \]\n\n\[ = c\left( {\mathop{\sum }\limits_{{n = 0}}^{1}{a}_{n}}\right) \]\n\nInduction step:\n\nLet \( c \in \mathbb{R} \) and \( {a}_{n} \in \mathbb{R} \), for \( 0 \leq n \leq N + 1 \) . We assume\n\n\[ \mathop{\sum }\limits_{{n = 0}}^{N}c{a}_{n} = c\left( {\mathop{\sum }\limits_{{n = 0}}^{N}{a}_{n}}\right) . \]\n\nWe have\n\n\[ \mathop{\sum }\limits_{{n = 0}}^{{N + 1}}c{a}_{n}\; = \;\left( {\mathop{\sum }\limits_{{n = 0}}^{N}c{a}_{n}}\right) + c{a}_{N + 1} \]\n\n\[ = {}_{IH}c\left( {\mathop{\sum }\limits_{{n = 0}}^{N}{a}_{n}}\right) + c{a}_{N + 1}. \]\n\nBy the distributive law (for two summands)\n\n\[ c\left( {\mathop{\sum }\limits_{{n = 0}}^{N}{a}_{n}}\right) + c{a}_{N + 1} = c\left( {\mathop{\sum }\limits_{{n = 0}}^{N}{a}_{n} + {a}_{N + 1}}\right) \]\n\n\[ = c\left( {\mathop{\sum }\limits_{{n = 0}}^{{N + 1}}{a}_{n}}\right) \]\n\nTherefore,\n\n\[ \mathop{\sum }\limits_{{n = 0}}^{{N + 1}}c{a}_{n} = c\left( {\mathop{\sum }\limits_{{n = 0}}^{{N + 1}}{a}_{n}}\right) . \]\n\nBy the induction principle the result holds for all \( N \in {\mathbb{N}}^{ + } \) .
Yes
Proposition 5.22. Suppose \( f : X \rightarrow \mathbb{R} \) and \( g : X \rightarrow \mathbb{R} \) are real functions that are continuous at \( a \in X \) . Let \( c \) and \( d \) be scalars \( {}^{2} \) . Then \( {cf} + {dg} \) and \( {fg} \) are both continuous at \( a \), and so is \( f/g \) if \( g\left( a\right) \neq 0 \) .
Proof. Exercise.
No
Proposition 5.23. Every polynomial is continuous on \( \mathbb{R} \) . Every rational function is continuous wherever the denominator is non-zero.
What about the exponential function\n\n\[ \n{e}^{x} \mathrel{\text{:=}} \mathop{\sum }\limits_{{n = 0}}^{\infty }\frac{{x}^{n}}{n!}?\n\]\n\nEach partial sum is a polynomial, and hence continuous; so if we knew that the limit of a sequence of continuous functions were continuous, we would be done. This turns out, however, to be a subtle problem, which we address in the next Section.
No
Proposition 5.31. The exponential function is continuous on \( \mathbb{R} \) .
Proof. Let\n\n\[ \n{p}_{n}\left( x\right) \mathrel{\text{:=}} \mathop{\sum }\limits_{{k = 0}}^{n}\frac{{x}^{k}}{k!} \n\] \n\nbe the \( {n}^{\text{th }} \) -order Taylor polynomial. We know each \( {p}_{n} \) is continuous, by Proposition 5.23. If we knew that \( {p}_{n}\left( x\right) \) converged uniformly to \( {e}^{x} \), we would be done by Theorem 5.29.\n\nIt is not true that \( {p}_{n} \) converges uniformly on \( \mathbb{R} \) (why?). However, the sequence does converge uniformly on every interval \( \left\lbrack {-R, R}\right\rbrack \), and this is good enough to conclude that \( {e}^{x} \) is continuous on \( \mathbb{R} \) (why?).\n\nTo see this latter assertion, fix \( R > 0 \) and \( \varepsilon > 0 \) . We must find \( N \) so that, for all \( n > N \) and all \( x \in \left\lbrack {-R, R}\right\rbrack \), we have \( \left| {{e}^{x} - {p}_{n}\left( x\right) }\right| < \varepsilon \) . Notice that\n\n\[ \n\left| {{e}^{x} - {p}_{n}\left( x\right) }\right| = \left| {\frac{{x}^{n + 1}}{\left( {n + 1}\right) !} + \frac{{x}^{n + 2}}{\left( {n + 2}\right) !} + \ldots }\right| . \n\] \n\nFor each \( n \), the right-hand side is maximized on \( \left\lbrack {-R, R}\right\rbrack \) by its value at \( R \) (why?); and as \( n \) increases, this remainder decreases monotonically (because you lose more and more positive terms). As we know the exponential series for \( {e}^{R} \) converges, choose an \( N \) so that \( {e}^{R} - {p}_{N}\left( R\right) \) is less than \( \varepsilon \) . Then for all \( x \) in \( \left\lbrack {-R, R}\right\rbrack \) and all \( n \geq N \), we have \( \left| {{e}^{x} - {p}_{n}\left( x\right) }\right| < \varepsilon \), as desired.
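The shape of this argument can be seen numerically: the worst-case error of \( {p}_{n} \) over \( \left\lbrack {-R, R}\right\rbrack \) shrinks as \( n \) grows. A Python sketch (the value \( R = 3 \), the sample grid, and the particular orders are illustrative assumptions):

```python
import math

def p(n, x):
    # n-th order Taylor polynomial of e^x about 0.
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

R = 3.0
xs = [-R + 2 * R * i / 100 for i in range(101)]   # sample grid on [-R, R]

def max_err(n):
    # Worst sampled error of p_n against e^x on [-R, R].
    return max(abs(math.exp(x) - p(n, x)) for x in xs)

assert max_err(10) > max_err(15) > max_err(20)   # uniform error decreases
assert max_err(25) < 1e-9                        # and becomes tiny
```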
No
Proposition 6.1. Let \( m, n \in \mathbb{N} \). Then \[ \left( {\left| {\ulcorner m\urcorner }\right| = \left| {\ulcorner n\urcorner }\right| }\right) \Leftrightarrow \left( {m = n}\right) . \]
DISCUSSION. We prove the non-trivial direction of this biconditional by induction on one of the integers in the statement.\n\nProof. \( \Leftarrow \)\n\nLet \( m = n \). Then it is obvious that \[ \left| {\ulcorner m\urcorner }\right| = \left| {\ulcorner n\urcorner }\right| \text{.} \]\n\n\( \Rightarrow \)\n\nWe argue by induction on \( m \).\n\nBase case:\n\nIf \( m = 0 \) and \( \left| {\ulcorner n\urcorner }\right| = \left| {\ulcorner m\urcorner }\right| \) then clearly \( n = 0 \).\n\nInduction step:\n\nLet \( m \in \mathbb{N} \) and assume that \[ \left( {\forall n \in \mathbb{N}}\right) \left\lbrack {\left| {\ulcorner m\urcorner }\right| = \left| {\ulcorner n\urcorner }\right| }\right\rbrack \Rightarrow \left\lbrack {m = n}\right\rbrack . \]\n\nWe show that \[ \left( {\forall n \in \mathbb{N}}\right) \left\lbrack {\left| {\ulcorner m + 1\urcorner }\right| = \left| {\ulcorner n\urcorner }\right| }\right\rbrack \Rightarrow \left\lbrack {m + 1 = n}\right\rbrack . \]\n\nAssume that \[ \left| {\ulcorner m + 1\urcorner }\right| = \left| {\ulcorner n\urcorner }\right| . \]\n\nLet \[ f : \ulcorner m + 1\urcorner \rightarrowtail \ulcorner n\urcorner . \]\n\nDiscussion. A natural way to proceed with this argument is to restrict the domain of \( f \) to \( \ulcorner m\urcorner \) and use the induction hypothesis. Unfortunately if \( f\left( m\right) \neq n - 1 \) then \( {\left. f\right| }_{\ulcorner m\urcorner } \) is not a bijection from \( \ulcorner m\urcorner \) to \( \ulcorner n - 1\urcorner \), and the induction hypothesis will not directly apply. 
To address this issue, we shall define a permutation \( g : \ulcorner m + 1\urcorner \rightarrow \ulcorner m + 1\urcorner \) that rearranges the elements of \( \ulcorner m + 1\urcorner \) so that \( f \circ g \) will be a bijection satisfying \[ \left( {f \circ g}\right) \left( m\right) = n - 1 \]\n\nWe define \( g : \ulcorner m + 1\urcorner \rightarrow \ulcorner m + 1\urcorner \) as follows: \[ g\left( x\right) = \left\{ \begin{matrix} {f}^{-1}\left( {n - 1}\right) & \text{ if } & x = m \\ m & \text{ if } & x = {f}^{-1}\left( {n - 1}\right) \\ x & & \text{ otherwise. } \end{matrix}\right. \]\n\nLet \( h = f \circ g \). Then \( h \) is a bijection and \[ h\left( m\right) = \left( {f \circ g}\right) \left( m\right) = n - 1. \]\n\nTherefore \[ {\left. h\right| }_{\ulcorner m\urcorner } : \ulcorner m\urcorner \rightarrowtail \ulcorner n - 1\urcorner . \]\n\nBy the induction hypothesis \[ m = n - 1\text{.} \]\n\nTherefore \[ m + 1 = n\text{.} \]\n\nBy the induction principle, \[ \left( {\forall m \in \mathbb{N}}\right) \left( {\forall n \in \mathbb{N}}\right) \left( {\left| {\ulcorner m\urcorner }\right| = \left| {\ulcorner n\urcorner }\right| }\right) \Rightarrow \left( {m = n}\right) . \]
Yes
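The swap trick in the induction step can be illustrated concretely. Below is a minimal sketch, with a hypothetical helper `swap_to_end` and a sample bijection; neither is from the text.

```python
# Hypothetical illustration of the swap trick: given a bijection
# f : {0,...,m} -> {0,...,n-1}, precompose with the transposition g
# that swaps m and f^{-1}(n-1), so that h = f o g sends m to n - 1.
def swap_to_end(f, m, n):
    finv = {v: k for k, v in f.items()}
    j = finv[n - 1]                    # the preimage of n - 1
    g = {x: x for x in range(m + 1)}
    g[m], g[j] = j, m                  # transpose m and j (a no-op if m = j)
    return {x: f[g[x]] for x in range(m + 1)}

m, n = 4, 5
f = {0: 2, 1: 0, 2: 4, 3: 1, 4: 3}    # a sample bijection {0..4} -> {0..4}
h = swap_to_end(f, m, n)
assert h[m] == n - 1                              # h(m) = n - 1
assert sorted(h.values()) == list(range(n))       # h is still a bijection
# restricting h to {0,...,m-1} gives a bijection onto {0,...,n-2},
# which is exactly where the induction hypothesis is applied
assert sorted(h[x] for x in range(m)) == list(range(n - 1))
```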
Proposition 6.8. Let \( X \) be a set, and define \( F : {\ulcorner 2\urcorner }^{X} \rightarrow P\left( X\right) \) by: for \( \chi \in {\ulcorner 2\urcorner }^{X} \) , \[ F\left( \chi \right) = {\chi }^{-1}\left( 1\right) \] That is, \( F\left( \chi \right) = \{ x \in X \mid \chi \left( x\right) = 1\} \) . Then \( F : {\ulcorner 2\urcorner }^{X} \rightarrow P\left( X\right) \) is a bijection.
Proof. The proof is left as an exercise.
No
Proposition 6.15. Let \( X \) and \( Y \) be sets. Then there is a surjection \( f : X \rightarrow Y \) iff \( \left| Y\right| \leq \left| X\right| \) .
Proof. \( \left( \Rightarrow \right) \)\n\nLet \( X, Y \) and \( f \) be as in the statement of the proposition. Let\n\n\[ \n\widehat{f} : X/f \rightarrow Y \n\]\n\nbe the canonical bijection associated with \( f \) that was defined in Section 2.3. We ask whether there is an injection \( g : X/f \rightarrow X \) where \( g\left( \left\lbrack x\right\rbrack \right) \in \left\lbrack x\right\rbrack \) . Recall that \( X/f \) is the collection of level subsets of \( X \), with respect to \( f \), and is a partition of \( X \) . Why not simply choose an element from each equivalence class and define \( g \) to be the function from \( X/f \) to \( X \) defined by these choices?\n\nDISCUSSION. The Axiom of Choice is the assertion that such a choice function exists.
No
Corollary 6.23. \( \mathbb{K} \neq \mathbb{R} \)
Since \( \mathbb{K} \) is countable and \( \mathbb{R} \) is uncountable, \( \mathbb{K} \) is a proper subset of \( \mathbb{R} \) .
Yes
Proposition 7.1. Let \( a, b \in \mathbb{Z} \) . If \( a \) and \( b \) are relatively prime, then \( a - b \) and \( b \) are relatively prime.
Proof. Let \( c > 1 \) be a common factor of \( b \) and \( a - b \) . So\n\n\[ \left( {\exists m \in \mathbb{Z}}\right) b = {cm} \]\n\nand\n\n\[ \left( {\exists n \in \mathbb{Z}}\right) a - b = {cn}. \]\n\nThen\n\n\[ c\left( {m + n}\right) = a \]\n\nand so \( c \mid a \) . Thus \( c \) is a common factor of \( a \) and \( b \), so \( a \) and \( b \) are not relatively prime. Therefore if \( a \) and \( b \) are relatively prime, then \( a - b \) and \( b \) are relatively prime.
Yes
Proposition 7.2. Let \( a \) and \( b \) be integers. If \( a \) and \( b \) are relatively prime, then\n\n\[ \left( {\exists m, n \in \mathbb{Z}}\right) {ma} + {nb} = 1. \]
Proof. We may assume that \( a > b > 0 \) . We argue by induction on \( a + b \) .\n\nBase case: \( a + b = 3 \) .\n\nThen \( a = 2 \) and \( b = 1 \) . So\n\n\[ a - b = 1, \]\n\nand \( m = 1, n = - 1 \) satisfy \( {ma} + {nb} = 1 \) .\n\nInduction step:\n\nAssume that the result holds for all pairs of relatively prime natural numbers with sum less than \( a + b \) .\n\nBy Proposition 7.1, \( b \) and \( a - b \) are relatively prime. By the induction hypothesis, there are \( i, j \in \mathbb{Z} \) such that\n\n\[ i\left( {a - b}\right) + {jb} = 1. \]\n\nDiscussion. If \( a - b = b \), we are not in the case where we have two distinct positive numbers. How do we handle this possibility?\n\nLet \( m = i \) and \( n = j - i \) . Then\n\n\[ {ma} + {nb} = 1. \]\n\nBy the induction principle the result holds for all relatively prime pairs of natural numbers.
No
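Although the text derives the witnesses \( m, n \) by induction, the conclusion can be spot-checked with the standard extended Euclidean algorithm; `bezout` below is an assumed helper name, not from the text.

```python
from math import gcd

def bezout(a, b):
    # standard extended Euclidean algorithm (a different route than the
    # induction in the text, but it produces witnesses m, n with
    # m*a + n*b == gcd(a, b))
    if b == 0:
        return a, 1, 0
    g, x, y = bezout(b, a % b)
    return g, y, x - (a // b) * y

for a, b in [(2, 1), (7, 5), (35, 12), (101, 100)]:
    assert gcd(a, b) == 1          # each pair is relatively prime
    g, m, n = bezout(a, b)
    assert m * a + n * b == 1      # the Bezout identity of Proposition 7.2
```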
Proposition 7.3. Let \( a, b, c \in \mathbb{Z} \), and assume that \( \gcd \left( {a, b}\right) = 1 \) . If \( a \mid {cb} \), then \( a \mid c \) .
Proof. By Proposition 7.2 there are \( m, n \in \mathbb{Z} \) such that\n\n\[ \n{ma} + {nb} = 1\text{.}\n\]\n\nTherefore\n\n\[ \n{cma} + {cnb} = c.\n\]\n\nClearly \( a \mid {cnb} \) (since \( a \mid {cb} \) ) and \( a \mid {cma} \) . So\n\n\[ \na \mid \left( {{cma} + {cnb}}\right) ,\n\]\n\nand therefore \( a \mid c \) .
Yes
Proposition 7.4. Let \( a, b, c \in \mathbb{Z} \) . If \( \gcd \left( {a, b}\right) = 1 \), \( a \mid c \) and \( b \mid c \), then \[ {ab} \mid c\text{.} \]
Proof. Let \( m, n \in \mathbb{Z} \) be such that \( {am} = c \) and \( {bn} = c \) . Then \[ a \mid {bn}\text{.} \] By Proposition 7.3, \( a \mid n \) . Hence there is \( k \in \mathbb{Z} \) such that \[ {ak} = n\text{.} \] Therefore \[ {akb} = c \] and \[ {ab} \mid c\text{.} \]
Yes
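A few numeric instances of Proposition 7.4, purely as a sanity check (the cases below are illustrative):

```python
from math import gcd

# If gcd(a, b) = 1, a | c and b | c, then ab | c.
cases = [(3, 4, 24), (5, 7, 70), (8, 9, 144)]
for a, b, c in cases:
    assert gcd(a, b) == 1 and c % a == 0 and c % b == 0   # hypotheses
    assert c % (a * b) == 0                               # conclusion
```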
Proposition 7.9. Let \( a, b, k \in \mathbb{Z} \) . Then\n\n\[ \gcd \left( {a, b}\right) = \gcd \left( {a - {kb}, b}\right) . \]
Proof. If \( c \in \mathbb{Z} \), \( c \mid a \) and \( c \mid b \), then \( c \mid a - {kb} \) . Therefore\n\n\[ \gcd \left( {a, b}\right) \leq \gcd \left( {a - {kb}, b}\right) . \]\n\nLikewise, if \( c \mid a - {kb} \) and \( c \mid b \), then \( c \mid a \), so we get the reverse inequality, and the two sides are equal.
Yes
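Proposition 7.9 is easy to spot-check numerically (the sample pairs and the range of \( k \) are arbitrary):

```python
from math import gcd

# gcd(a, b) = gcd(a - k*b, b) for every integer k
for a, b in [(48, 18), (35, 12), (100, 7)]:
    for k in range(-3, 4):
        assert gcd(a, b) == gcd(a - k * b, b)
```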
Proposition 7.15. Let \( a \in \mathbb{Z} \), and \( p \) be a prime number such that \( p \nmid a \) . Then \( {o}_{p}\left( a\right) < p \) .
Proof. Let \( p \) be a prime number and \( a \in \mathbb{Z} \) be such that \( a \) is not a multiple of \( p \) . By Lemma 7.5, as \( p \nmid a \), then \( p \nmid {a}^{n} \), and therefore \( \left\lbrack {a}^{n}\right\rbrack \in {\mathbb{Z}}_{p}^{ * } \) for any \( n \in \mathbb{N} \) . Since \( \left| {\mathbb{Z}}_{p}^{ * }\right| = p - 1 \), the finite sequence\n\n\[ \left\langle {\left\lbrack {a}^{n}\right\rbrack \mid 1 \leq n \leq p}\right\rangle \]\n\nmust have a repetition. Let \( 1 \leq n < k \leq p \) be such that\n\n\[ {a}^{n} \equiv {a}^{k}\;{\;\operatorname{mod}\;p}. \]\n\nThen\n\n\[ p \mid {a}^{k} - {a}^{n}. \]\n\nHence\n\n\[ p \mid {a}^{n}\left( {{a}^{k - n} - 1}\right) \text{.} \]\n\nHowever \( p \nmid {a}^{n} \) and thus by Proposition 7.3,\n\n\[ p \mid {a}^{k - n} - 1\text{.} \]\n\nThus\n\n\[ {a}^{k - n} \equiv 1{\;\operatorname{mod}\;p} \]\n\nTherefore\n\n\[ {o}_{p}\left( a\right) \leq k - n < p. \]
Yes
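The order \( {o}_{p}\left( a\right) \) can be computed directly; `order_mod` is a hypothetical helper name, and the loop spot-checks the bound \( {o}_{p}\left( a\right) < p \) for small primes.

```python
def order_mod(a, p):
    # smallest n >= 1 with a**n congruent to 1 mod p
    x, n = a % p, 1
    while x != 1:
        x = (x * a) % p
        n += 1
    return n

# Proposition 7.15: if p is prime and p does not divide a, then o_p(a) < p.
for p in (5, 7, 11, 13):
    for a in range(1, p):
        assert order_mod(a, p) < p
```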
Proposition 7.16. Let \( a \in \mathbb{Z} \) and \( p \) be a prime number such that \( a \) is not a multiple of \( p \) . Then the remainder classes \( \left\lbrack 1\right\rbrack ,\left\lbrack a\right\rbrack ,\left\lbrack {a}^{2}\right\rbrack ,\ldots ,\left\lbrack {a}^{{o}_{p}\left( a\right) - 1}\right\rbrack \) in \( {\mathbb{Z}}_{p} \) are distinct.
Proof. Exercise.
No
Proposition 8.16. \( \left| \left\lbrack {0,1}\right\rbrack \right| = \left| \mathbb{R}\right| \) .
Proof. Define \( f : \lbrack 0,\infty ) \rightarrow (1/2,1\rbrack \) by\n\n\[ f\left( x\right) = \frac{1}{x + 2} + 1/2. \]\n\nThen \( f \) is an injection. Let \( {\mathbb{R}}^{ - } \) be the negative real numbers, and define \( g : {\mathbb{R}}^{ - } \rightarrow \lbrack 0,1/2) \) by\n\n\[ g\left( x\right) = \frac{-1}{x - 2}. \]\n\nThen \( g \) is an injection. Let \( h : \mathbb{R} \rightarrow \left\lbrack {0,1}\right\rbrack \) be the union of the functions \( f \) and \( g \) . Then \( h \) is clearly an injection. The identity function on \( \left\lbrack {0,1}\right\rbrack \) is an injection into \( \mathbb{R} \) . By the Schröder-Bernstein Theorem,\n\n\[ \left| \left\lbrack {0,1}\right\rbrack \right| = \left| \mathbb{R}\right| . \]
Yes
Corollary 8.18. \( \left| \left\lbrack {0,1}\right\rbrack \right| = {2}^{{\aleph }_{0}} \) .
Proof. By Lemma 8.17, Proposition 6.15 and Theorem 6.11,\n\n\[ \n\left| \left\lbrack {0,1}\right\rbrack \right| \leq \left| {D}_{0}\right| = \left| {{\ulcorner {10}\urcorner }^{\mathbb{N}}}\right| = {2}^{{\aleph }_{0}}.\n\]\n\nLet \( g : {\ulcorner 2\urcorner }^{{\mathbb{N}}^{ + }} \rightarrow {D}_{0} \) be defined by\n\n\[ \ng\left( \left\langle {a}_{n}\right\rangle \right) = 0.{a}_{1}{a}_{2}\ldots\n\]\n\nand \( h : {D}_{0} \rightarrow \left\lbrack {0,1}\right\rbrack \) be defined as in the argument for Lemma 8.17. Then \( h \circ g : {\ulcorner 2\urcorner }^{\mathbb{N}} \rightarrow \left\lbrack {0,1}\right\rbrack \) is an injection, and so\n\n\[ \n{2}^{{\aleph }_{0}} \leq \left| \left\lbrack {0,1}\right\rbrack \right| .\n\]\n\nBy the Schröder-Bernstein Theorem,\n\n\[ \n\left| \left\lbrack {0,1}\right\rbrack \right| = {2}^{{\aleph }_{0}}.\n\]
Yes
Example 9.9. Let\n\n\[ p\left( x\right) = {x}^{3} - {3x} + 1. \tag{9.10} \]\n\nThen \( c = 1 \), and\n\n\[ {w}^{3} = \frac{-1 \pm \sqrt{-3}}{2}. \tag{9.11} \]\n\nNow we have a worse problem: \( {w}^{3} \) involves the square root of a negative number, and even if we make sense of that, we then have to extract a cube root. Is this analogous to trying to solve the quadratic equation\n\n\[ q\left( x\right) \mathrel{\text{:=}} {x}^{2} + x + 1 = 0? \]\n\nThe quadratic formula again gives the right-hand side of (9.11), and we explain this by saying that in fact \( q \) has no real roots. Indeed, graphing shows that \( q \) looks like Figure 9.12.
But this cannot be the case for \( p \) . Indeed,\n\n\[ p\left( {-2}\right) = - 1 < 0 \]\n\n\[ p\left( 0\right) = 1 > 0 \]\n\n\[ p\left( 1\right) = - 1 < 0 \]\n\n\[ p\left( 2\right) = 3 > 0. \]\n\nTherefore, by the Intermediate Value Theorem 8.10, \( p \) must have a root in each of the intervals \( \left( {-2,0}\right) ,\left( {0,1}\right) \) and \( \left( {1,2}\right) \) . As \( p \) can have at most 3 roots by Theorem 4.10, it must therefore have exactly three roots. A graph of \( p \) looks like Figure 9.13.
Yes
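The sign computations and the three roots can be verified numerically; `bisect` below is an illustrative helper, not part of the text.

```python
def p(x):
    return x**3 - 3*x + 1

# The sign changes that invoke the Intermediate Value Theorem:
assert p(-2) < 0 < p(0)
assert p(1) < 0 < p(2)

def bisect(f, lo, hi, tol=1e-12):
    # simple bisection, assuming f(lo) and f(hi) have opposite signs
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# One root in each of (-2, 0), (0, 1), (1, 2), so exactly three roots.
roots = [bisect(p, *ab) for ab in [(-2, 0), (0, 1), (1, 2)]]
assert all(abs(p(r)) < 1e-9 for r in roots)
```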
Proposition 9.19. Let \( {z}_{1} = {r}_{1}\operatorname{Cis}\left( {\theta }_{1}\right) \) and \( {z}_{2} = {r}_{2}\operatorname{Cis}\left( {\theta }_{2}\right) \) . Then\n\n\[ \n{z}_{1}{z}_{2} = {r}_{1}{r}_{2}\operatorname{Cis}\left( {{\theta }_{1} + {\theta }_{2}}\right) \n\]
Proof. Multiplying out, we get\n\n\[ \n{z}_{1}{z}_{2} = {r}_{1}{r}_{2}\left\lbrack {\cos {\theta }_{1}\cos {\theta }_{2} - \sin {\theta }_{1}\sin {\theta }_{2}}\right.\n\]\n\n\[ \n+ i\left( {\cos {\theta }_{1}\sin {\theta }_{2} + \cos {\theta }_{2}\sin {\theta }_{1}}\right) \rbrack \text{.} \n\]\n\nThe result follows by the trigonometric identities for the cosine and sine of the sum of two angles.
Yes
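A quick numeric spot-check of Proposition 9.19 (the sample radii and angles are arbitrary):

```python
import math

def cis(theta):
    # Cis(theta) = cos(theta) + i sin(theta)
    return complex(math.cos(theta), math.sin(theta))

# (r1 Cis t1)(r2 Cis t2) should equal r1 r2 Cis(t1 + t2)
r1, t1 = 2.0, 0.7
r2, t2 = 3.0, 1.9
lhs = (r1 * cis(t1)) * (r2 * cis(t2))
rhs = r1 * r2 * cis(t1 + t2)
assert abs(lhs - rhs) < 1e-12
```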
Proposition 9.34. Polynomials are continuous functions on \( \mathbb{C} \) .
Proof. Repeat the proof of Proposition 5.23 with complex numbers instead of real numbers.
No
If we have, as in the previous example,\n\n\[ \nx\left( t\right) = {100} - {4.9}{t}^{2}\text{ meters,}\n\]\n\nthen from time \( t = 1 \) to time \( t = 1 + {\Delta t} \) we would have\n\n\[ \n{\Delta x} = x\left( {1 + {\Delta t}}\right) - x\left( 1\right) \n\]
\[ \n= \left( {{100} - {4.9}{\left( 1 + \Delta t\right) }^{2}}\right) - {95.1}\n\]\n\n\[ \n= {4.9} - {4.9}\left( {1 + {2\Delta t} + {\left( \Delta t\right) }^{2}}\right)\n\]\n\n\[ \n= - {9.8\Delta t} - {4.9}{\left( \Delta t\right) }^{2}\text{meters.}\n\]\n\nHence the average velocity over the interval \( \left\lbrack {1,1 + {\Delta t}}\right\rbrack \) is\n\n\[ \n{v}_{\left\lbrack 1,1 + \Delta t\right\rbrack } = \frac{\Delta x}{\Delta t}\n\]\n\n\[ \n= \frac{-{9.8\Delta t} - {4.9}{\left( \Delta t\right) }^{2}}{\Delta t}\n\]\n\n\[ \n= - {9.8} - {4.9\Delta t}\text{meters/second.}\n\]\n\nNote that if, for example, \( {\Delta t} = 3 \), then we find\n\n\[ \n{v}_{\left\lbrack 1,4\right\rbrack } = - {9.8} - \left( {4.9}\right) \left( 3\right) = - {9.8} - {14.7} = - {24.5}\text{meters/second,}\n\]\n\nin agreement with our previous calculations.
Yes
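The algebra above can be replayed numerically; `avg_velocity` is an assumed helper name, not from the text.

```python
def x(t):
    return 100 - 4.9 * t**2   # position in meters

def avg_velocity(t0, dt):
    # average velocity over [t0, t0 + dt]
    return (x(t0 + dt) - x(t0)) / dt

# The algebra gives v_[1, 1+dt] = -9.8 - 4.9*dt meters/second.
for dt in (3.0, 1.0, 0.1, 0.001):
    assert abs(avg_velocity(1.0, dt) - (-9.8 - 4.9 * dt)) < 1e-9

# In particular, dt = 3 recovers v_[1,4] = -24.5 meters/second.
assert abs(avg_velocity(1.0, 3.0) - (-24.5)) < 1e-9
```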
To find the velocity of the object of the previous examples at time \( t = 3 \)
\[ \frac{dx}{dt} = - {29.4} - {4.9dt}\text{ meters }/\text{ second. } \] As above, we disregard the immeasurable -4.9dt to obtain the velocity of the object at time \( t = 3 \) : \[ v\left( 3\right) = - {29.4}\text{meters/second.} \]
Yes
For our previous example, we find\n\n\[ \n{dx} = x\left( {t + {dt}}\right) - x\left( t\right) \n\]
\n\[ \n= \left( {{100} - {4.9}{\left( t + dt\right) }^{2}}\right) - \left( {{100} - {4.9}{t}^{2}}\right) \n\]\n\n\[ \n= - {4.9}\left( {{t}^{2} + {2tdt} + {\left( dt\right) }^{2}}\right) + {4.9}{t}^{2} \n\]\n\n\[ \n= - {9.8tdt} - {4.9}{\left( dt\right) }^{2}\text{ meters} \n\]\n\n\[ \n= \left( {-{9.8t} - {4.9dt}}\right) {dt}. \n\]\n\nHence\n\n\[ \n\frac{dx}{dt} = - {9.8t} - {4.9dt}\text{ meters/second,} \n\]\n\nand so the velocity of the object at time \( t \) is\n\n\[ \nv\left( t\right) = - {9.8t}\text{ meters/second.} \n\]
Yes
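With a small real \( {dt} \) standing in for the infinitesimal, the difference quotient matches the formula \( - {9.8t} - {4.9dt} \) and approaches \( v\left( t\right) = - {9.8t} \); the helper names below are illustrative.

```python
def x(t):
    return 100 - 4.9 * t**2   # position in meters

def diff_quotient(t, dt):
    # dx/dt for the increment dt
    return (x(t + dt) - x(t)) / dt

# dx/dt = -9.8 t - 4.9 dt, so as dt shrinks the quotient approaches -9.8 t.
for t in (0.0, 1.0, 3.0):
    for dt in (1e-3, 1e-6):
        assert abs(diff_quotient(t, dt) - (-9.8 * t - 4.9 * dt)) < 1e-6

# At t = 3 the velocity is -29.4 meters/second.
assert abs(diff_quotient(3.0, 1e-7) - (-29.4)) < 1e-5
```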
Example 1.2.5. Suppose a spherical shaped balloon is being filled with water. If \( r \) is the radius of the balloon in centimeters and \( V \) is the volume of the balloon,\nthen\n\[ V = \frac{4}{3}\pi {r}^{3}{\text{ centimeters }}^{3}. \]\n\nSince a cubic centimeter of water has a mass of 1 gram, the mass of the water\nin the balloon is\n\[ M = \frac{4}{3}\pi {r}^{3}\text{ grams. } \]
To find the rate of change of the mass of the balloon with respect to the radius of the balloon, we first compute\n\[ {dM} = \frac{4}{3}\pi {\left( r + dr\right) }^{3} - \frac{4}{3}\pi {r}^{3} \]\n\[ = \frac{4}{3}\pi \left( {\left( {{r}^{3} + 3{r}^{2}{dr} + {3r}{\left( dr\right) }^{2} + {\left( dr\right) }^{3}}\right) - {r}^{3}}\right) \]\n\[ = \frac{4}{3}\pi \left( {3{r}^{2} + {3rdr} + {\left( dr\right) }^{2}}\right) {dr}\text{ grams,} \]\nfrom which it follows that\n\[ \frac{dM}{dr} = \frac{4}{3}\pi \left( {3{r}^{2} + {3rdr} + {\left( dr\right) }^{2}}\right) \text{ grams/centimeter. } \]\n\nSince both \( {3rdr} \) and \( {\left( dr\right) }^{2} \) are infinitesimal, the rate of change of mass of the balloon with respect to the radius of the balloon is\n\[ \frac{4}{3}\pi \left( {3{r}^{2}}\right) = {4\pi }{r}^{2}\text{ grams/centimeter. } \]\n\nFor example, when the balloon has a radius of 10 centimeters, the mass of the water in the balloon is increasing at a rate of\n\[ {4\pi }{\left( {10}\right) }^{2} = {400\pi }\text{ grams/centimeter. } \]
Yes
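A numeric replay of the computation, with a small real \( {dr} \) in place of the infinitesimal (helper names are illustrative):

```python
import math

def M(r):
    # mass of water in a balloon of radius r centimeters, in grams
    return (4.0 / 3.0) * math.pi * r**3

def rate(r, dr):
    return (M(r + dr) - M(r)) / dr

# The text's computation: dM/dr = (4/3) pi (3 r^2 + 3 r dr + (dr)^2),
# which is infinitesimally close to 4 pi r^2; at r = 10 that is 400 pi.
r, dr = 10.0, 1e-6
assert abs(rate(r, dr) - (4.0/3.0) * math.pi * (3*r**2 + 3*r*dr + dr**2)) < 1e-4
assert abs(rate(r, dr) - 400 * math.pi) < 1e-2
```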
If \( f\left( x\right) = {x}^{2} \), then, for example, for any infinitesimal \( \epsilon \) ,
\[ f\left( {3 + \epsilon }\right) = {\left( 3 + \epsilon \right) }^{2} = 9 + {6\epsilon } + {\epsilon }^{2} \simeq 9 = f\left( 3\right) . \] Hence \( f \) is continuous at \( x = 3 \) . More generally, for any real number \( x \) , \[ f\left( {x + \epsilon }\right) = {\left( x + \epsilon \right) }^{2} = {x}^{2} + {2x\epsilon } + {\epsilon }^{2} \simeq {x}^{2} = f\left( x\right) , \] from which it follows that \( f \) is continuous at every real number \( x \) .
Yes
We call the function\n\n\[ \nH\left( t\right) = \left\{ \begin{array}{ll} 0, & \text{ if }t < 0 \\ 1, & \text{ if }t \geq 0 \end{array}\right.\n\]\n\nthe Heaviside function (see Figure 1.4.1). If \( \epsilon \) is a positive infinitesimal, then\n\n\[ \nH\left( {0 + \epsilon }\right) = H\left( \epsilon \right) = 1 = H\left( 0\right) ,\n\]\n\nwhereas\n\n\[ \nH\left( {0 - \epsilon }\right) = H\left( {-\epsilon }\right) = 0.\n\]
Since 0 is not infinitesimally close to 1, it follows that \( H \) is not continuous at 0 . However, for any positive real number \( a \) and any infinitesimal \( \epsilon \) (positive or negative),\n\n\[ \nH\left( {a + \epsilon }\right) = 1 = H\left( a\right)\n\]\n\nsince \( a + \epsilon > 0 \), and for any negative real number \( a \) and any infinitesimal \( \epsilon \) ,\n\n\[ \nH\left( {a + \epsilon }\right) = 0 = H\left( a\right) ,\n\]\n\nsince \( a + \epsilon < 0 \) . Thus \( H \) is continuous on both \( \left( {0,\infty }\right) \) and \( \left( {-\infty ,0}\right) \) .
Yes
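The two-sided behavior at 0 and the local constancy away from 0 can be mimicked with a small real `eps` standing in for the infinitesimal:

```python
def H(t):
    # the Heaviside function
    return 1 if t >= 0 else 0

eps = 1e-12
# At 0 an infinitesimal step changes the value: H(eps) = 1 but H(-eps) = 0,
# mirroring the failure of continuity at 0.
assert H(eps) == 1 == H(0)
assert H(-eps) == 0 != H(0)
# Away from 0 the function is locally constant, hence continuous there.
assert H(2.0 + eps) == H(2.0 - eps) == H(2.0) == 1
assert H(-2.0 + eps) == H(-2.0 - eps) == H(-2.0) == 0
```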