Let \( z = 2\sqrt{3} + 2i \) and \( w = -1 + i\sqrt{3} \). Use Theorem 11.16 to find the following:

1. \( zw \)
2. \( w^{5} \)
3. \( \frac{z}{w} \)

Write your final answers in rectangular form.
Solution. In order to use Theorem 11.16, we need to write \( z \) and \( w \) in polar form. For \( z = 2\sqrt{3} + 2i \), we find \( \left| z\right| = \sqrt{\left( 2\sqrt{3}\right)^{2} + 2^{2}} = \sqrt{16} = 4 \). If \( \theta \in \arg\left( z\right) \), we know \( \tan\left( \theta\right) = \frac{\operatorname{Im}\left( z\right)}{\operatorname{Re}\left( z\right)} = \frac{2}{2\sqrt{3}} = \frac{\sqrt{3}}{3} \). Since \( z \) lies in Quadrant I, we have \( \theta = \frac{\pi}{6} + 2\pi k \) for integers \( k \). Hence, \( z = 4\operatorname{cis}\left( \frac{\pi}{6}\right) \). For \( w = -1 + i\sqrt{3} \), we have \( \left| w\right| = \sqrt{\left( -1\right)^{2} + \left( \sqrt{3}\right)^{2}} = 2 \). For an argument \( \theta \) of \( w \), we have \( \tan\left( \theta\right) = \frac{\sqrt{3}}{-1} = -\sqrt{3} \). Since \( w \) lies in Quadrant II, \( \theta = \frac{2\pi}{3} + 2\pi k \) for integers \( k \) and \( w = 2\operatorname{cis}\left( \frac{2\pi}{3}\right) \). We can now proceed.

1. We get \( zw = \left( 4\operatorname{cis}\left( \frac{\pi}{6}\right)\right)\left( 2\operatorname{cis}\left( \frac{2\pi}{3}\right)\right) = 8\operatorname{cis}\left( \frac{\pi}{6} + \frac{2\pi}{3}\right) = 8\operatorname{cis}\left( \frac{5\pi}{6}\right) = 8\left\lbrack \cos\left( \frac{5\pi}{6}\right) + i\sin\left( \frac{5\pi}{6}\right)\right\rbrack \). After simplifying, we get \( zw = -4\sqrt{3} + 4i \).

2. We use DeMoivre's Theorem, which yields \( w^{5} = \left\lbrack 2\operatorname{cis}\left( \frac{2\pi}{3}\right)\right\rbrack^{5} = 2^{5}\operatorname{cis}\left( 5 \cdot \frac{2\pi}{3}\right) = 32\operatorname{cis}\left( \frac{10\pi}{3}\right) \). Since \( \frac{10\pi}{3} \) is coterminal with \( \frac{4\pi}{3} \), we get \( w^{5} = 32\left\lbrack \cos\left( \frac{4\pi}{3}\right) + i\sin\left( \frac{4\pi}{3}\right)\right\rbrack = -16 - 16i\sqrt{3} \).

3. Last, but not least, we have \( \frac{z}{w} = \frac{4\operatorname{cis}\left( \frac{\pi}{6}\right)}{2\operatorname{cis}\left( \frac{2\pi}{3}\right)} = \frac{4}{2}\operatorname{cis}\left( \frac{\pi}{6} - \frac{2\pi}{3}\right) = 2\operatorname{cis}\left( -\frac{\pi}{2}\right) \). Since \( -\frac{\pi}{2} \) is a quadrantal angle, we can 'see' the rectangular form by moving out 2 units along the positive real axis, then rotating \( \frac{\pi}{2} \) radians clockwise to arrive at the point 2 units below 0 on the imaginary axis. The long and short of it is that \( \frac{z}{w} = -2i \).
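The arithmetic above is easy to sanity-check numerically. The sketch below (an illustration, not part of the original text) uses Python's built-in `cmath` module: `cmath.polar` recovers the modulus and argument, and ordinary complex arithmetic reproduces the rectangular answers.

```python
import cmath
import math

# z = 2*sqrt(3) + 2i and w = -1 + i*sqrt(3), as in the example
z = 2 * math.sqrt(3) + 2j
w = -1 + 1j * math.sqrt(3)

# polar() returns (modulus, argument); these match 4 cis(pi/6) and 2 cis(2pi/3)
rz, tz = cmath.polar(z)
rw, tw = cmath.polar(w)
print(rz, tz)   # approximately 4.0 and pi/6
print(rw, tw)   # approximately 2.0 and 2*pi/3

# products, powers, and quotients in rectangular form
print(z * w)    # approximately -4*sqrt(3) + 4i
print(w ** 5)   # approximately -16 - 16*sqrt(3) i
print(z / w)    # approximately -2i
```

Because the complex arithmetic is done in floating point, the answers agree with the exact values only up to rounding error.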
Theorem 11.17. The \( n^{\text{th}} \) Roots of a Complex Number: Let \( z \neq 0 \) be a complex number with polar form \( z = r\operatorname{cis}\left( \theta\right) \). For each natural number \( n \), \( z \) has \( n \) distinct \( n^{\text{th}} \) roots, which we denote by \( w_{0}, w_{1}, \ldots, w_{n-1} \), and they are given by the formula

\[
w_{k} = \sqrt[n]{r}\operatorname{cis}\left( \frac{\theta}{n} + \frac{2\pi}{n}k\right), \quad k = 0, 1, \ldots, n - 1
\]
The proof of Theorem 11.17 breaks into two parts: first, showing that each \( w_{k} \) is an \( n^{\text{th}} \) root, and second, showing that the set \( \left\{ w_{k} \mid k = 0, 1, \ldots, n - 1\right\} \) consists of \( n \) different complex numbers. To show \( w_{k} \) is an \( n^{\text{th}} \) root of \( z \), we use DeMoivre's Theorem to show \( \left( w_{k}\right)^{n} = z \).

\[
\left( w_{k}\right)^{n} = \left( \sqrt[n]{r}\operatorname{cis}\left( \frac{\theta}{n} + \frac{2\pi}{n}k\right)\right)^{n}
\]

\[
= \left( \sqrt[n]{r}\right)^{n}\operatorname{cis}\left( n \cdot \left\lbrack \frac{\theta}{n} + \frac{2\pi}{n}k\right\rbrack\right) \quad\text{DeMoivre's Theorem}
\]

\[
= r\operatorname{cis}\left( \theta + 2\pi k\right)
\]

Since \( k \) is a whole number, \( \cos\left( \theta + 2\pi k\right) = \cos\left( \theta\right) \) and \( \sin\left( \theta + 2\pi k\right) = \sin\left( \theta\right) \). Hence, it follows that \( \operatorname{cis}\left( \theta + 2\pi k\right) = \operatorname{cis}\left( \theta\right) \), so \( \left( w_{k}\right)^{n} = r\operatorname{cis}\left( \theta\right) = z \), as required. To show that the formula in Theorem 11.17 generates \( n \) distinct numbers, we assume \( n \geq 2 \) (or else there is nothing to prove) and note that the modulus of each of the \( w_{k} \) is the same, namely \( \sqrt[n]{r} \). Therefore, the only way any two of these polar forms correspond to the same number is if their arguments are coterminal, that is, if the arguments differ by an integer multiple of \( 2\pi \). Suppose \( k \) and \( j \) are whole numbers between 0 and \( n - 1 \), inclusive, with \( k \neq j \). Since \( k \) and \( j \) are different, let's assume for the sake of argument that \( k > j \).
Then \( \left( \frac{\theta}{n} + \frac{2\pi}{n}k\right) - \left( \frac{\theta}{n} + \frac{2\pi}{n}j\right) = 2\pi\left( \frac{k - j}{n}\right) \). For this to be an integer multiple of \( 2\pi \), \( k - j \) must be a multiple of \( n \). But because of the restrictions on \( k \) and \( j \), \( 0 < k - j \leq n - 1 \). (Think this through.) Hence, \( k - j \) is a positive number less than \( n \), so it cannot be a multiple of \( n \). As a result, \( w_{k} \) and \( w_{j} \) are different complex numbers, and we are done. By Theorem 3.14, we know there are at most \( n \) distinct solutions to \( w^{n} = z \), and we have just found all of them.
Find both square roots of \( z = -2 + 2i\sqrt{3} \).
We start by writing \( z = - 2 + {2i}\sqrt{3} = 4\operatorname{cis}\left( \frac{2\pi }{3}\right) \) . To use Theorem 11.17, we identify \( r = 4 \) , \( \theta = \frac{2\pi }{3} \) and \( n = 2 \) . We know that \( z \) has two square roots, and in keeping with the notation in Theorem 11.17, we’ll call them \( {w}_{0} \) and \( {w}_{1} \) . We get \( {w}_{0} = \sqrt{4}\operatorname{cis}\left( {\frac{\left( 2\pi /3\right) }{2} + \frac{2\pi }{2}\left( 0\right) }\right) = 2\operatorname{cis}\left( \frac{\pi }{3}\right) \) and \( {w}_{1} = \sqrt{4}\operatorname{cis}\left( {\frac{\left( 2\pi /3\right) }{2} + \frac{2\pi }{2}\left( 1\right) }\right) = 2\operatorname{cis}\left( \frac{4\pi }{3}\right) \) . In rectangular form, the two square roots of \( z \) are \( {w}_{0} = 1 + i\sqrt{3} \) and \( {w}_{1} = - 1 - i\sqrt{3} \) . We can check our answers by squaring them and showing that we get \( z = - 2 + {2i}\sqrt{3} \) .
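The formula from Theorem 11.17 translates directly into code. The Python sketch below (an illustration, not part of the original text) computes both square roots of \( z \) and squares them to recover \( z \), mirroring the check at the end of the solution.

```python
import cmath
import math

# z = -2 + 2i*sqrt(3) = 4 cis(2*pi/3); find its two square roots
z = -2 + 2j * math.sqrt(3)
r, theta = cmath.polar(z)
n = 2

# w_k = r^(1/n) cis(theta/n + 2*pi*k/n) for k = 0, 1, ..., n-1
roots = [r ** (1 / n) * cmath.exp(1j * (theta / n + 2 * math.pi * k / n))
         for k in range(n)]

for root in roots:
    print(root, root ** 2)   # each root squared recovers z (up to rounding)
```

The same loop computes the \( n \) distinct \( n^{\text{th}} \) roots of any nonzero complex number once `n` is changed.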
A plane leaves an airport with an airspeed of 175 miles per hour at a bearing of \( \mathrm{N}40^{\circ}\mathrm{E} \). A 35 mile per hour wind is blowing at a bearing of \( \mathrm{S}60^{\circ}\mathrm{E} \). Find the true speed of the plane, rounded to the nearest mile per hour, and the true bearing of the plane, rounded to the nearest degree.
Solution. For both the plane and the wind, we are given their speeds and their directions. Coupling speed (as a magnitude) with direction is the concept of velocity, which we've seen a few times before in this textbook. We let \( \overrightarrow{v} \) denote the plane's velocity and \( \overrightarrow{w} \) denote the wind's velocity in the diagram below. The 'true' speed and bearing are found by analyzing the resultant vector, \( \overrightarrow{v} + \overrightarrow{w} \). From the vector diagram, we get a triangle, the lengths of whose sides are the magnitude of \( \overrightarrow{v} \), which is 175, the magnitude of \( \overrightarrow{w} \), which is 35, and the magnitude of \( \overrightarrow{v} + \overrightarrow{w} \), which we'll call \( c \). From the given bearing information, we go through the usual geometry to determine that the angle between the sides of length 35 and 175 measures \( 100^{\circ} \).

![22c64c1a-3807-4372-b206-67aa5b3452be_330_0.jpg](images/22c64c1a-3807-4372-b206-67aa5b3452be_330_0.jpg)

From the Law of Cosines, we determine \( c = \sqrt{31850 - 12250\cos\left( 100^{\circ}\right)} \approx 184 \), which means the true speed of the plane is (approximately) 184 miles per hour. To determine the true bearing of the plane, we need to determine the angle \( \alpha \). Using the Law of Cosines once more, we find \( \cos\left( \alpha\right) = \frac{c^{2} + 29400}{350c} \) so that \( \alpha \approx 11^{\circ} \). Given the geometry of the situation, we add \( \alpha \) to the given \( 40^{\circ} \) and find the true bearing of the plane to be (approximately) \( \mathrm{N}51^{\circ}\mathrm{E} \).
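The Law of Cosines computation above can be reproduced in a few lines of Python (a numerical check, not part of the original solution):

```python
import math

# triangle sides 175 and 35 with a 100 degree included angle
a, b, gamma = 175.0, 35.0, math.radians(100)

# Law of Cosines: c^2 = a^2 + b^2 - 2ab cos(gamma)
c = math.sqrt(a * a + b * b - 2 * a * b * math.cos(gamma))

# Law of Cosines again for the angle opposite the side of length 35
alpha = math.degrees(math.acos((c * c + a * a - b * b) / (2 * a * c)))

print(round(c))           # true speed: about 184 mph
print(round(40 + alpha))  # true bearing: about N51E
```

Note that \( a^{2} + b^{2} = 31850 \) and \( 2ab = 12250 \), matching the constants in the worked solution.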
Let \( \overrightarrow{v} = \langle 3,4\rangle \) and suppose \( \overrightarrow{w} = \overrightarrow{PQ} \) where \( P\left( {-3,7}\right) \) and \( Q\left( {-2,5}\right) \) . Find \( \overrightarrow{v} + \overrightarrow{w} \) and interpret this sum geometrically.
Solution. Before we can add the vectors using Definition 11.6, we need to write \( \overrightarrow{w} \) in component form. Using Definition 11.5, we get \( \overrightarrow{w} = \langle -2 - \left( -3\right), 5 - 7\rangle = \langle 1, -2\rangle \). Thus

\[ \overrightarrow{v} + \overrightarrow{w} = \langle 3, 4\rangle + \langle 1, -2\rangle \]
\[ = \langle 3 + 1, 4 + \left( -2\right)\rangle \]
\[ = \langle 4, 2\rangle \]

To visualize this sum, we draw \( \overrightarrow{v} \) with its initial point at \( \left( 0, 0\right) \) (for convenience) so that its terminal point is \( \left( 3, 4\right) \). Next, we graph \( \overrightarrow{w} \) with its initial point at \( \left( 3, 4\right) \). Moving one unit to the right and two units down, we find the terminal point of \( \overrightarrow{w} \) to be \( \left( 4, 2\right) \). We see that the vector \( \overrightarrow{v} + \overrightarrow{w} \) has initial point \( \left( 0, 0\right) \) and terminal point \( \left( 4, 2\right) \), so its component form is \( \langle 4, 2\rangle \), as required.
Theorem 11.19. Properties of Scalar Multiplication

- Associative Property: For every vector \( \overrightarrow{v} \) and scalars \( k \) and \( r \), \( \left( kr\right)\overrightarrow{v} = k\left( r\overrightarrow{v}\right) \).
The proof of Theorem 11.19, like the proof of Theorem 11.18, ultimately boils down to the definition of scalar multiplication and properties of real numbers. For example, to prove the associative property, we let \( \overrightarrow{v} = \left\langle v_{1}, v_{2}\right\rangle \). If \( k \) and \( r \) are scalars, then

\[ \left( kr\right)\overrightarrow{v} = \left( kr\right)\left\langle v_{1}, v_{2}\right\rangle \]
\[ = \left\langle \left( kr\right) v_{1}, \left( kr\right) v_{2}\right\rangle \quad\text{Definition of Scalar Multiplication} \]
\[ = \left\langle k\left( r v_{1}\right), k\left( r v_{2}\right)\right\rangle \quad\text{Associative Property of Real Number Multiplication} \]
\[ = k\left\langle r v_{1}, r v_{2}\right\rangle \quad\text{Definition of Scalar Multiplication} \]
\[ = k\left( r\left\langle v_{1}, v_{2}\right\rangle\right) \quad\text{Definition of Scalar Multiplication} \]
\[ = k\left( r\overrightarrow{v}\right) \]
Solve \( 5\overrightarrow{v} - 2\left( {\overrightarrow{v}+\langle 1, - 2\rangle }\right) = \overrightarrow{0} \) for \( \overrightarrow{v} \) .
\[ 5\overrightarrow{v} - 2\left( \overrightarrow{v} + \langle 1, -2\rangle\right) = \overrightarrow{0} \]
\[ 5\overrightarrow{v} + \left( -1\right)\left\lbrack 2\left( \overrightarrow{v} + \langle 1, -2\rangle\right)\right\rbrack = \overrightarrow{0} \]
\[ 5\overrightarrow{v} + \left\lbrack \left( -1\right)\left( 2\right)\right\rbrack\left( \overrightarrow{v} + \langle 1, -2\rangle\right) = \overrightarrow{0} \]
\[ 5\overrightarrow{v} + \left( -2\right)\left( \overrightarrow{v} + \langle 1, -2\rangle\right) = \overrightarrow{0} \]
\[ 5\overrightarrow{v} + \left\lbrack \left( -2\right)\overrightarrow{v} + \left( -2\right)\langle 1, -2\rangle\right\rbrack = \overrightarrow{0} \]
\[ 5\overrightarrow{v} + \left\lbrack \left( -2\right)\overrightarrow{v} + \langle \left( -2\right)\left( 1\right), \left( -2\right)\left( -2\right)\rangle\right\rbrack = \overrightarrow{0} \]
\[ \left\lbrack 5\overrightarrow{v} + \left( -2\right)\overrightarrow{v}\right\rbrack + \langle -2, 4\rangle = \overrightarrow{0} \]
\[ \left( 5 + \left( -2\right)\right)\overrightarrow{v} + \langle -2, 4\rangle = \overrightarrow{0} \]
\[ 3\overrightarrow{v} + \langle -2, 4\rangle = \overrightarrow{0} \]
\[ \left( 3\overrightarrow{v} + \langle -2, 4\rangle\right) + \left( -\langle -2, 4\rangle\right) = \overrightarrow{0} + \left( -\langle -2, 4\rangle\right) \]
\[ 3\overrightarrow{v} + \left\lbrack \langle -2, 4\rangle + \left( -\langle -2, 4\rangle\right)\right\rbrack = \overrightarrow{0} + \left( -1\right)\langle -2, 4\rangle \]
\[ 3\overrightarrow{v} + \overrightarrow{0} = \overrightarrow{0} + \langle \left( -1\right)\left( -2\right), \left( -1\right)\left( 4\right)\rangle \]
\[ 3\overrightarrow{v} = \langle 2, -4\rangle \]
\[ \frac{1}{3}\left( 3\overrightarrow{v}\right) = \frac{1}{3}\langle 2, -4\rangle \]
\[ \left\lbrack \left( \frac{1}{3}\right)\left( 3\right)\right\rbrack\overrightarrow{v} = \left\langle \left( \frac{1}{3}\right)\left( 2\right), \left( \frac{1}{3}\right)\left( -4\right)\right\rangle \]
\[ 1\overrightarrow{v} = \left\langle \frac{2}{3}, -\frac{4}{3}\right\rangle \]
\[ \overrightarrow{v} = \left\langle \frac{2}{3}, -\frac{4}{3}\right\rangle \]
Theorem 11.20. Properties of Magnitude and Direction: Suppose \( \overrightarrow{v} \) is a vector.

- \( \parallel\overrightarrow{v}\parallel \geq 0 \) and \( \parallel\overrightarrow{v}\parallel = 0 \) if and only if \( \overrightarrow{v} = \overrightarrow{0} \)
The proof of the first property in Theorem 11.20 is a direct consequence of the definition of \( \parallel\overrightarrow{v}\parallel \). If \( \overrightarrow{v} = \left\langle v_{1}, v_{2}\right\rangle \), then \( \parallel\overrightarrow{v}\parallel = \sqrt{v_{1}^{2} + v_{2}^{2}} \), which is by definition greater than or equal to 0. Moreover, \( \sqrt{v_{1}^{2} + v_{2}^{2}} = 0 \) if and only if \( v_{1}^{2} + v_{2}^{2} = 0 \) if and only if \( v_{1} = v_{2} = 0 \). Hence, \( \parallel\overrightarrow{v}\parallel = 0 \) if and only if \( \overrightarrow{v} = \langle 0, 0\rangle = \overrightarrow{0} \), as required.
Find the component form of the vector \( \overrightarrow{v} \) with \( \parallel\overrightarrow{v}\parallel = 5 \) so that when \( \overrightarrow{v} \) is plotted in standard position, it lies in Quadrant II and makes a \( 60^{\circ} \) angle with the negative \( x \)-axis.
We are told that \( \parallel\overrightarrow{v}\parallel = 5 \) and are given information about its direction, so we can use the formula \( \overrightarrow{v} = \parallel\overrightarrow{v}\parallel\widehat{v} \) to get the component form of \( \overrightarrow{v} \). To determine \( \widehat{v} \), we appeal to Definition 11.8. We are told that \( \overrightarrow{v} \) lies in Quadrant II and makes a \( 60^{\circ} \) angle with the negative \( x \)-axis, so the polar form of the terminal point of \( \overrightarrow{v} \), when plotted in standard position, is \( \left( 5, 120^{\circ}\right) \). (See the diagram below.) Thus \( \widehat{v} = \left\langle \cos\left( 120^{\circ}\right), \sin\left( 120^{\circ}\right)\right\rangle = \left\langle -\frac{1}{2}, \frac{\sqrt{3}}{2}\right\rangle \), so \( \overrightarrow{v} = \parallel\overrightarrow{v}\parallel\widehat{v} = 5\left\langle -\frac{1}{2}, \frac{\sqrt{3}}{2}\right\rangle = \left\langle -\frac{5}{2}, \frac{5\sqrt{3}}{2}\right\rangle \).
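As a quick numerical check of this computation (not part of the original text), the components follow from the direction angle alone:

```python
import math

# v = ||v|| * v_hat with ||v|| = 5 and direction angle 120 degrees
mag = 5.0
theta = math.radians(120)
v = (mag * math.cos(theta), mag * math.sin(theta))
print(v)   # approximately (-5/2, 5*sqrt(3)/2)
```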
Theorem 11.21. Principal Vector Decomposition Theorem: Let \( \overrightarrow{v} \) be a vector with component form \( \overrightarrow{v} = \left\langle {{v}_{1},{v}_{2}}\right\rangle \) . Then \( \overrightarrow{v} = {v}_{1}\widehat{\imath } + {v}_{2}\widehat{\jmath } \) .
The proof of Theorem 11.21 is straightforward. Since \( \widehat{\imath} = \langle 1, 0\rangle \) and \( \widehat{\jmath} = \langle 0, 1\rangle \), we have from the definition of scalar multiplication and vector addition that

\[
v_{1}\widehat{\imath} + v_{2}\widehat{\jmath} = v_{1}\langle 1, 0\rangle + v_{2}\langle 0, 1\rangle = \left\langle v_{1}, 0\right\rangle + \left\langle 0, v_{2}\right\rangle = \left\langle v_{1}, v_{2}\right\rangle = \overrightarrow{v}
\]
Commutative Property: For all vectors \( \overrightarrow{v} \) and \( \overrightarrow{w},\overrightarrow{v} \cdot \overrightarrow{w} = \overrightarrow{w} \cdot \overrightarrow{v} \) .
To show the commutative property, for instance, let \( \overrightarrow{v} = \left\langle v_{1}, v_{2}\right\rangle \) and \( \overrightarrow{w} = \left\langle w_{1}, w_{2}\right\rangle \). Then

\[ \overrightarrow{v} \cdot \overrightarrow{w} = \left\langle v_{1}, v_{2}\right\rangle \cdot \left\langle w_{1}, w_{2}\right\rangle \]
\[ = v_{1}w_{1} + v_{2}w_{2} \quad\text{Definition of Dot Product} \]
\[ = w_{1}v_{1} + w_{2}v_{2} \quad\text{Commutativity of Real Number Multiplication} \]
\[ = \left\langle w_{1}, w_{2}\right\rangle \cdot \left\langle v_{1}, v_{2}\right\rangle \quad\text{Definition of Dot Product} \]
\[ = \overrightarrow{w} \cdot \overrightarrow{v} \]
Prove the identity: \( \parallel \overrightarrow{v} - \overrightarrow{w}{\parallel }^{2} = \parallel \overrightarrow{v}{\parallel }^{2} - 2\left( {\overrightarrow{v} \cdot \overrightarrow{w}}\right) + \parallel \overrightarrow{w}{\parallel }^{2} \) .
Solution. We begin by rewriting \( \parallel\overrightarrow{v} - \overrightarrow{w}\parallel^{2} \) in terms of the dot product using Theorem 11.22.

\[ \parallel\overrightarrow{v} - \overrightarrow{w}\parallel^{2} = \left( \overrightarrow{v} - \overrightarrow{w}\right) \cdot \left( \overrightarrow{v} - \overrightarrow{w}\right) \]
\[ = \left( \overrightarrow{v} + \left\lbrack -\overrightarrow{w}\right\rbrack\right) \cdot \left( \overrightarrow{v} + \left\lbrack -\overrightarrow{w}\right\rbrack\right) \]
\[ = \left( \overrightarrow{v} + \left\lbrack -\overrightarrow{w}\right\rbrack\right) \cdot \overrightarrow{v} + \left( \overrightarrow{v} + \left\lbrack -\overrightarrow{w}\right\rbrack\right) \cdot \left\lbrack -\overrightarrow{w}\right\rbrack \]
\[ = \overrightarrow{v} \cdot \left( \overrightarrow{v} + \left\lbrack -\overrightarrow{w}\right\rbrack\right) + \left\lbrack -\overrightarrow{w}\right\rbrack \cdot \left( \overrightarrow{v} + \left\lbrack -\overrightarrow{w}\right\rbrack\right) \]
\[ = \overrightarrow{v} \cdot \overrightarrow{v} + \overrightarrow{v} \cdot \left\lbrack -\overrightarrow{w}\right\rbrack + \left\lbrack -\overrightarrow{w}\right\rbrack \cdot \overrightarrow{v} + \left\lbrack -\overrightarrow{w}\right\rbrack \cdot \left\lbrack -\overrightarrow{w}\right\rbrack \]
\[ = \overrightarrow{v} \cdot \overrightarrow{v} + \overrightarrow{v} \cdot \left\lbrack \left( -1\right)\overrightarrow{w}\right\rbrack + \left\lbrack \left( -1\right)\overrightarrow{w}\right\rbrack \cdot \overrightarrow{v} + \left\lbrack \left( -1\right)\overrightarrow{w}\right\rbrack \cdot \left\lbrack \left( -1\right)\overrightarrow{w}\right\rbrack \]
\[ = \overrightarrow{v} \cdot \overrightarrow{v} + \left( -1\right)\left( \overrightarrow{v} \cdot \overrightarrow{w}\right) + \left( -1\right)\left( \overrightarrow{w} \cdot \overrightarrow{v}\right) + \left\lbrack \left( -1\right)\left( -1\right)\right\rbrack\left( \overrightarrow{w} \cdot \overrightarrow{w}\right) \]
\[ = \overrightarrow{v} \cdot \overrightarrow{v} + \left( -1\right)\left( \overrightarrow{v} \cdot \overrightarrow{w}\right) + \left( -1\right)\left( \overrightarrow{v} \cdot \overrightarrow{w}\right) + \overrightarrow{w} \cdot \overrightarrow{w} \]
\[ = \overrightarrow{v} \cdot \overrightarrow{v} - 2\left( \overrightarrow{v} \cdot \overrightarrow{w}\right) + \overrightarrow{w} \cdot \overrightarrow{w} \]
\[ = \parallel\overrightarrow{v}\parallel^{2} - 2\left( \overrightarrow{v} \cdot \overrightarrow{w}\right) + \parallel\overrightarrow{w}\parallel^{2} \]

Hence, \( \parallel\overrightarrow{v} - \overrightarrow{w}\parallel^{2} = \parallel\overrightarrow{v}\parallel^{2} - 2\left( \overrightarrow{v} \cdot \overrightarrow{w}\right) + \parallel\overrightarrow{w}\parallel^{2} \), as required.
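The identity can also be spot-checked numerically. The Python sketch below (an illustration, not part of the original text) verifies it on a batch of randomly chosen vectors.

```python
import math
import random

def dot(u, v):
    # dot product of two 2D vectors
    return u[0] * v[0] + u[1] * v[1]

def norm(u):
    # magnitude ||u|| = sqrt(u . u)
    return math.sqrt(dot(u, u))

# spot-check ||v - w||^2 = ||v||^2 - 2(v . w) + ||w||^2 on random vectors
random.seed(0)
for _ in range(100):
    v = (random.uniform(-10, 10), random.uniform(-10, 10))
    w = (random.uniform(-10, 10), random.uniform(-10, 10))
    lhs = norm((v[0] - w[0], v[1] - w[1])) ** 2
    rhs = norm(v) ** 2 - 2 * dot(v, w) + norm(w) ** 2
    assert abs(lhs - rhs) < 1e-9
print("identity holds on 100 random pairs")
```

A random check is no substitute for the proof above, but it is a useful guard against algebra slips.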
Theorem 11.24. Let \( \overrightarrow{v} \) and \( \overrightarrow{w} \) be nonzero vectors and let \( \theta \) be the angle between \( \overrightarrow{v} \) and \( \overrightarrow{w} \). Then

\[ \theta = \arccos\left( \frac{\overrightarrow{v} \cdot \overrightarrow{w}}{\parallel\overrightarrow{v}\parallel\parallel\overrightarrow{w}\parallel}\right) = \arccos\left( \widehat{v} \cdot \widehat{w}\right) \]
We obtain the formula in Theorem 11.24 by solving the equation given in Theorem 11.23 for \( \theta \) . Since \( \overrightarrow{v} \) and \( \overrightarrow{w} \) are nonzero, so are \( \parallel \overrightarrow{v}\parallel \) and \( \parallel \overrightarrow{w}\parallel \) . Hence, we may divide both sides of \( \overrightarrow{v} \cdot \overrightarrow{w} = \parallel \overrightarrow{v}\parallel \parallel \overrightarrow{w}\parallel \cos \left( \theta \right) \) by \( \parallel \overrightarrow{v}\parallel \parallel \overrightarrow{w}\parallel \) to get \( \cos \left( \theta \right) = \frac{\overrightarrow{v} \cdot \overrightarrow{w}}{\parallel \overrightarrow{v}\parallel \parallel \overrightarrow{w}\parallel } \) . Since \( 0 \leq \theta \leq \pi \) by definition, the values of \( \theta \) exactly match the range of the arccosine function. Hence, \( \theta = \arccos \left( \frac{\overrightarrow{v} \cdot \overrightarrow{w}}{\parallel \overrightarrow{v}\parallel \parallel \overrightarrow{w}\parallel }\right) \) . Using Theorem 11.22, we can rewrite \( \frac{\overrightarrow{v} \cdot \overrightarrow{w}}{\parallel \overrightarrow{v}\parallel \parallel \overrightarrow{w}\parallel } = \left( {\frac{1}{\parallel \overrightarrow{v}\parallel }\overrightarrow{v}}\right) \cdot \left( {\frac{1}{\parallel \overrightarrow{w}\parallel }\overrightarrow{w}}\right) = \widehat{v} \cdot \widehat{w} \), giving us the alternative formula \( \theta = \arccos \left( {\widehat{v} \cdot \widehat{w}}\right) \) .
Find the angle between the following pairs of vectors.
Solution. We use the formula \( \theta = \arccos\left( \frac{\overrightarrow{v} \cdot \overrightarrow{w}}{\parallel\overrightarrow{v}\parallel\parallel\overrightarrow{w}\parallel}\right) \) from Theorem 11.24 in each case below.

1. We have \( \overrightarrow{v} \cdot \overrightarrow{w} = \langle 3, -3\sqrt{3}\rangle \cdot \langle -\sqrt{3}, 1\rangle = -3\sqrt{3} - 3\sqrt{3} = -6\sqrt{3} \). Since \( \parallel\overrightarrow{v}\parallel = \sqrt{3^{2} + \left( -3\sqrt{3}\right)^{2}} = \sqrt{36} = 6 \) and \( \parallel\overrightarrow{w}\parallel = \sqrt{\left( -\sqrt{3}\right)^{2} + 1^{2}} = \sqrt{4} = 2 \), we find \( \theta = \arccos\left( \frac{-6\sqrt{3}}{12}\right) = \arccos\left( -\frac{\sqrt{3}}{2}\right) = \frac{5\pi}{6} \).
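The formula from Theorem 11.24 translates directly into a short function. This Python sketch (an illustration, not part of the original text) reproduces the angle found above.

```python
import math

def angle_between(v, w):
    # theta = arccos( (v . w) / (||v|| ||w||) )
    dot = v[0] * w[0] + v[1] * w[1]
    return math.acos(dot / (math.hypot(*v) * math.hypot(*w)))

v = (3, -3 * math.sqrt(3))
w = (-math.sqrt(3), 1)
print(angle_between(v, w), 5 * math.pi / 6)   # the two values agree
```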
Theorem 11.25. The Dot Product Detects Orthogonality: Let \( \overrightarrow{v} \) and \( \overrightarrow{w} \) be nonzero vectors. Then \( \overrightarrow{v} \bot \overrightarrow{w} \) if and only if \( \overrightarrow{v} \cdot \overrightarrow{w} = 0 \) .
To prove Theorem 11.25, we first assume \( \overrightarrow{v} \) and \( \overrightarrow{w} \) are nonzero vectors with \( \overrightarrow{v} \bot \overrightarrow{w} \) . By definition, the angle between \( \overrightarrow{v} \) and \( \overrightarrow{w} \) is \( \frac{\pi }{2} \) . By Theorem 11.23, \( \overrightarrow{v} \cdot \overrightarrow{w} = \parallel \overrightarrow{v}\parallel \parallel \overrightarrow{w}\parallel \cos \left( \frac{\pi }{2}\right) = 0 \) . Conversely, if \( \overrightarrow{v} \) and \( \overrightarrow{w} \) are nonzero vectors and \( \overrightarrow{v} \cdot \overrightarrow{w} = 0 \), then Theorem 11.24 gives \( \theta = \arccos \left( \frac{\overrightarrow{v} \cdot \overrightarrow{w}}{\parallel \overrightarrow{v}\parallel \parallel \overrightarrow{w}\parallel }\right) = \) \( \arccos \left( \frac{0}{\parallel \overrightarrow{v}\parallel \parallel \overrightarrow{w}\parallel }\right) = \arccos \left( 0\right) = \frac{\pi }{2} \), so \( \overrightarrow{v} \bot \overrightarrow{w} \) .
Let \( {L}_{1} \) be the line \( y = {m}_{1}x + {b}_{1} \) and let \( {L}_{2} \) be the line \( y = {m}_{2}x + {b}_{2} \) . Prove that \( {L}_{1} \) is perpendicular to \( {L}_{2} \) if and only if \( {m}_{1} \cdot {m}_{2} = - 1 \) .
Solution. Our strategy is to find two vectors: \( \overrightarrow{{v}_{1}} \), which has the same direction as \( {L}_{1} \), and \( \overrightarrow{{v}_{2}} \) , which has the same direction as \( {L}_{2} \) and show \( \overrightarrow{{v}_{1}} \bot \overrightarrow{{v}_{2}} \) if and only if \( {m}_{1}{m}_{2} = - 1 \) . To that end, we substitute \( x = 0 \) and \( x = 1 \) into \( y = {m}_{1}x + {b}_{1} \) to find two points which lie on \( {L}_{1} \), namely \( P\left( {0,{b}_{1}}\right) \) and \( Q\left( {1,{m}_{1} + {b}_{1}}\right) \) . We let \( \overrightarrow{{v}_{1}} = \overrightarrow{PQ} = \left\langle {1 - 0,\left( {{m}_{1} + {b}_{1}}\right) - {b}_{1}}\right\rangle = \left\langle {1,{m}_{1}}\right\rangle \), and note that since \( \overrightarrow{{v}_{1}} \) is determined by two points on \( {L}_{1} \), it may be viewed as lying on \( {L}_{1} \). Hence it has the same direction as \( {L}_{1} \). Similarly, we get the vector \( \overrightarrow{{v}_{2}} = \left\langle {1,{m}_{2}}\right\rangle \) which has the same direction as the line \( {L}_{2} \). Hence, \( {L}_{1} \) and \( {L}_{2} \) are perpendicular if and only if \( \overrightarrow{{v}_{1}} \bot \overrightarrow{{v}_{2}} \). According to Theorem 11.25, \( \overrightarrow{{v}_{1}} \bot \overrightarrow{{v}_{2}} \) if and only if \( \overrightarrow{{v}_{1}} \cdot \overrightarrow{{v}_{2}} = 0 \). Notice that \( \overrightarrow{{v}_{1}} \cdot \overrightarrow{{v}_{2}} = \left\langle {1,{m}_{1}}\right\rangle \cdot \left\langle {1,{m}_{2}}\right\rangle = 1 + {m}_{1}{m}_{2} \). Hence, \( \overrightarrow{{v}_{1}} \cdot \overrightarrow{{v}_{2}} = 0 \) if and only if \( 1 + {m}_{1}{m}_{2} = 0 \), which is true if and only if \( {m}_{1}{m}_{2} = - 1 \), as required.
Let \( \overrightarrow{v} = \langle 1,8\rangle \) and \( \overrightarrow{w} = \langle - 1,2\rangle \) . Find \( \overrightarrow{p} = {\operatorname{proj}}_{\overrightarrow{w}}\left( \overrightarrow{v}\right) \), and plot \( \overrightarrow{v},\overrightarrow{w} \) and \( \overrightarrow{p} \) in standard position.
Solution. We find \( \overrightarrow{v} \cdot \overrightarrow{w} = \langle 1,8\rangle \cdot \langle - 1,2\rangle = \left( {-1}\right) + {16} = {15} \) and \( \overrightarrow{w} \cdot \overrightarrow{w} = \langle - 1,2\rangle \cdot \langle - 1,2\rangle = 1 + 4 = 5 \) . Hence, \( \overrightarrow{p} = \frac{\overrightarrow{v} \cdot \overrightarrow{w}}{\overrightarrow{w} \cdot \overrightarrow{w}}\overrightarrow{w} = \frac{15}{5}\langle - 1,2\rangle = \langle - 3,6\rangle \) . We plot \( \overrightarrow{v},\overrightarrow{w} \) and \( \overrightarrow{p} \) below.
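The projection formula \( \operatorname{proj}_{\overrightarrow{w}}\left( \overrightarrow{v}\right) = \frac{\overrightarrow{v} \cdot \overrightarrow{w}}{\overrightarrow{w} \cdot \overrightarrow{w}}\overrightarrow{w} \) is a one-liner in code. This Python sketch (an illustration, not part of the original text) reproduces \( \overrightarrow{p} = \langle -3, 6\rangle \).

```python
def proj(v, w):
    # proj_w(v) = ((v . w) / (w . w)) w
    k = (v[0] * w[0] + v[1] * w[1]) / (w[0] * w[0] + w[1] * w[1])
    return (k * w[0], k * w[1])

print(proj((1, 8), (-1, 2)))   # (-3.0, 6.0)
```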
Theorem 11.27. Generalized Decomposition Theorem: Let \( \overrightarrow{v} \) and \( \overrightarrow{w} \) be nonzero vectors. There are unique vectors \( \overrightarrow{p} \) and \( \overrightarrow{q} \) such that \( \overrightarrow{v} = \overrightarrow{p} + \overrightarrow{q} \), where \( \overrightarrow{p} = k\overrightarrow{w} \) for some scalar \( k \), and \( \overrightarrow{q} \cdot \overrightarrow{w} = 0 \).
To prove Theorem 11.27, we take \( \overrightarrow{p} = \operatorname{proj}_{\overrightarrow{w}}\left( \overrightarrow{v}\right) \) and \( \overrightarrow{q} = \overrightarrow{v} - \overrightarrow{p} \). Then \( \overrightarrow{p} \) is, by definition, a scalar multiple of \( \overrightarrow{w} \). Next, we compute \( \overrightarrow{q} \cdot \overrightarrow{w} \).

\[ \overrightarrow{q} \cdot \overrightarrow{w} = \left( \overrightarrow{v} - \overrightarrow{p}\right) \cdot \overrightarrow{w} \quad\text{Definition of } \overrightarrow{q} \]
\[ = \overrightarrow{v} \cdot \overrightarrow{w} - \overrightarrow{p} \cdot \overrightarrow{w} \quad\text{Properties of Dot Product} \]
\[ = \overrightarrow{v} \cdot \overrightarrow{w} - \left( \frac{\overrightarrow{v} \cdot \overrightarrow{w}}{\overrightarrow{w} \cdot \overrightarrow{w}}\overrightarrow{w}\right) \cdot \overrightarrow{w} \quad\text{Since } \overrightarrow{p} = \operatorname{proj}_{\overrightarrow{w}}\left( \overrightarrow{v}\right) \]
\[ = \overrightarrow{v} \cdot \overrightarrow{w} - \left( \frac{\overrightarrow{v} \cdot \overrightarrow{w}}{\overrightarrow{w} \cdot \overrightarrow{w}}\right)\left( \overrightarrow{w} \cdot \overrightarrow{w}\right) \quad\text{Properties of Dot Product} \]
\[ = \overrightarrow{v} \cdot \overrightarrow{w} - \overrightarrow{v} \cdot \overrightarrow{w} \]
\[ = 0 \]

Hence, \( \overrightarrow{q} \cdot \overrightarrow{w} = 0 \), as required. At this point, we have shown that the vectors \( \overrightarrow{p} \) and \( \overrightarrow{q} \) guaranteed by Theorem 11.27 exist. Now we need to show that they are unique. Suppose \( \overrightarrow{v} = \overrightarrow{p} + \overrightarrow{q} = \overrightarrow{p}^{\prime} + \overrightarrow{q}^{\prime} \) where the vectors \( \overrightarrow{p}^{\prime} \) and \( \overrightarrow{q}^{\prime} \) satisfy the same properties described in Theorem 11.27 as \( \overrightarrow{p} \) and \( \overrightarrow{q} \).
Then \( \overrightarrow{p} - \overrightarrow{p}^{\prime} = \overrightarrow{q}^{\prime} - \overrightarrow{q} \), so \( \overrightarrow{w} \cdot \left( \overrightarrow{p} - \overrightarrow{p}^{\prime}\right) = \overrightarrow{w} \cdot \left( \overrightarrow{q}^{\prime} - \overrightarrow{q}\right) = \overrightarrow{w} \cdot \overrightarrow{q}^{\prime} - \overrightarrow{w} \cdot \overrightarrow{q} = 0 - 0 = 0 \). Hence, \( \overrightarrow{w} \cdot \left( \overrightarrow{p} - \overrightarrow{p}^{\prime}\right) = 0 \). Now there are scalars \( k \) and \( k^{\prime} \) so that \( \overrightarrow{p} = k\overrightarrow{w} \) and \( \overrightarrow{p}^{\prime} = k^{\prime}\overrightarrow{w} \). This means \( \overrightarrow{w} \cdot \left( \overrightarrow{p} - \overrightarrow{p}^{\prime}\right) = \overrightarrow{w} \cdot \left( k\overrightarrow{w} - k^{\prime}\overrightarrow{w}\right) = \overrightarrow{w} \cdot \left( \left( k - k^{\prime}\right)\overrightarrow{w}\right) = \left( k - k^{\prime}\right)\left( \overrightarrow{w} \cdot \overrightarrow{w}\right) = \left( k - k^{\prime}\right)\parallel\overrightarrow{w}\parallel^{2} \). Since \( \overrightarrow{w} \neq \overrightarrow{0} \), we have \( \parallel\overrightarrow{w}\parallel^{2} \neq 0 \), so \( k - k^{\prime} = 0 \). Hence \( \overrightarrow{p} = \overrightarrow{p}^{\prime} \), and consequently \( \overrightarrow{q} = \overrightarrow{q}^{\prime} \), which establishes uniqueness.
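The existence half of the proof is constructive, so it doubles as an algorithm. The Python sketch below (an illustration, not part of the original text) computes \( \overrightarrow{p} \) and \( \overrightarrow{q} \) for a sample pair and checks that \( \overrightarrow{q} \cdot \overrightarrow{w} = 0 \).

```python
def dot(u, v):
    # dot product of two 2D vectors
    return u[0] * v[0] + u[1] * v[1]

def decompose(v, w):
    # v = p + q with p parallel to w and q orthogonal to w
    k = dot(v, w) / dot(w, w)
    p = (k * w[0], k * w[1])
    q = (v[0] - p[0], v[1] - p[1])
    return p, q

# sample decomposition: v = <1, 8>, w = <-1, 2>
v, w = (1, 8), (-1, 2)
p, q = decompose(v, w)
print(p, q, dot(q, w))   # q . w is 0
```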
Taylor exerts a force of 10 pounds to pull her wagon a distance of 50 feet over level ground. If the handle of the wagon makes a \( {30}^{ \circ } \) angle with the horizontal, how much work did Taylor do pulling the wagon? Assume Taylor exerts the force of 10 pounds at a \( {30}^{ \circ } \) angle for the duration of the 50 feet.
Solution. There are two ways to attack this problem. One way is to find the vectors \( \overrightarrow{F} \) and \( \overrightarrow{PQ} \) mentioned in Theorem 11.28 and compute \( W = \overrightarrow{F} \cdot \overrightarrow{PQ} \). To do this, we assume the origin is at the point where the handle of the wagon meets the wagon and the positive \( x \)-axis lies along the dashed line in the figure above. Since the force applied is a constant 10 pounds, we have \( \parallel\overrightarrow{F}\parallel = 10 \). Since it is being applied at a constant angle of \( \theta = 30^{\circ} \) with respect to the positive \( x \)-axis, Definition 11.8 gives us \( \overrightarrow{F} = 10\left\langle \cos\left( 30^{\circ}\right), \sin\left( 30^{\circ}\right)\right\rangle = \left\langle 5\sqrt{3}, 5\right\rangle \). Since the wagon is being pulled along 50 feet in the positive direction, the displacement vector is \( \overrightarrow{PQ} = 50\widehat{\imath} = 50\langle 1, 0\rangle = \langle 50, 0\rangle \). We get \( W = \overrightarrow{F} \cdot \overrightarrow{PQ} = \langle 5\sqrt{3}, 5\rangle \cdot \langle 50, 0\rangle = 250\sqrt{3} \). Since force is measured in pounds and distance is measured in feet, we get \( W = 250\sqrt{3} \) foot-pounds. Alternatively, we can use the formulation \( W = \parallel\overrightarrow{F}\parallel\parallel\overrightarrow{PQ}\parallel\cos\left( \theta\right) \) to get \( W = \left( 10\text{ pounds}\right)\left( 50\text{ feet}\right)\cos\left( 30^{\circ}\right) = 250\sqrt{3} \) foot-pounds of work.
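Both formulations of the work computation agree numerically, as this short Python check (not part of the original solution) shows:

```python
import math

# W = F . PQ with F = 10<cos 30, sin 30> pounds and PQ = <50, 0> feet
F = (10 * math.cos(math.radians(30)), 10 * math.sin(math.radians(30)))
PQ = (50.0, 0.0)
W = F[0] * PQ[0] + F[1] * PQ[1]

print(W, 250 * math.sqrt(3))   # both are approximately 433.0 foot-pounds
```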
Yes
Sketch the curve described by \( \left\{ \begin{array}{l} x = {t}^{2} - 3 \\ y = {2t} - 1 \end{array}\right. \) for \( t \geq - 2 \) .
Solution. We follow the same procedure here as we have time and time again when asked to graph anything new - choose friendly values of \( t \), plot the corresponding points and connect the results in a pleasing fashion. Since we are told \( t \geq - 2 \), we start there and as we plot successive points, we draw an arrow to indicate the direction of the path for increasing values of \( t \) .\n\n<table><thead><tr><th>\( t \)</th><th>\( x\left( t\right) \)</th><th>\( y\left( t\right) \)</th><th>\( \left( {x\left( t\right), y\left( t\right) }\right) \)</th></tr></thead><tr><td>\( - 2 \)</td><td>1</td><td>\( - 5 \)</td><td>\( \left( {1, - 5}\right) \)</td></tr><tr><td>\( - 1 \)</td><td>\( - 2 \)</td><td>\( - 3 \)</td><td>\( \left( {-2, - 3}\right) \)</td></tr><tr><td>0</td><td>\( - 3 \)</td><td>\( - 1 \)</td><td>\( \left( {-3, - 1}\right) \)</td></tr><tr><td>1</td><td>\( - 2 \)</td><td>1</td><td>\( \left( {-2,1}\right) \)</td></tr><tr><td>2</td><td>1</td><td>3</td><td>\( \left( {1,3}\right) \)</td></tr><tr><td>3</td><td>6</td><td>5</td><td>\( \left( {6,5}\right) \)</td></tr></table>\n\n![22c64c1a-3807-4372-b206-67aa5b3452be_365_0.jpg](images/22c64c1a-3807-4372-b206-67aa5b3452be_365_0.jpg)
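The table of points can be generated programmatically. A short Python sketch reproducing it:

```python
def x(t):
    return t**2 - 3

def y(t):
    return 2 * t - 1

# Friendly values of t starting at t = -2, as in the table.
points = [(t, x(t), y(t)) for t in range(-2, 4)]
for t, xt, yt in points:
    print(t, (xt, yt))
```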
Yes
Sketch the curves described by the following parametric equations. 1. \( \left\{ \begin{array}{l} x = {t}^{3} \\ y = 2{t}^{2} \end{array}\right. \) for \( - 1 \leq t \leq 1 \)
To get a feel for the curve described by the system \( \left\{ {x = {t}^{3}, y = 2{t}^{2}}\right. \) we first sketch the graphs of \( x = {t}^{3} \) and \( y = 2{t}^{2} \) over the interval \( \left\lbrack {-1,1}\right\rbrack \) . We note that as \( t \) takes on values in the interval \( \left\lbrack {-1,1}\right\rbrack, x = {t}^{3} \) ranges between -1 and 1, and \( y = 2{t}^{2} \) ranges between 0 and 2. This means that all of the action is happening on a portion of the plane, namely \( \{ \left( {x, y}\right) \mid - 1 \leq x \leq 1,0 \leq y \leq 2\} \) . Next, we plot a few points to get a sense of the position and orientation of the curve. Certainly, \( t = - 1 \) and \( t = 1 \) are good values to pick since these are the extreme values of \( t \) . We also choose \( t = 0 \), since that corresponds to a relative minimum \( {}^{4} \) on the graph of \( y = 2{t}^{2} \) . Plugging in \( t = - 1 \) gives the point \( \left( {-1,2}\right), t = 0 \) gives \( \left( {0,0}\right) \) and \( t = 1 \) gives \( \left( {1,2}\right) \) . More generally, we see that \( x = {t}^{3} \) is increasing over the entire interval \( \left\lbrack {-1,1}\right\rbrack \) whereas \( y = 2{t}^{2} \) is decreasing over the interval \( \left\lbrack {-1,0}\right\rbrack \) and then increasing over \( \left\lbrack {0,1}\right\rbrack \) . Geometrically, this means that in order to trace out the path described by the parametric equations, we start at \( \left( {-1,2}\right) \) (where \( t = - 1 \) ), then move to the right (since \( x \) is increasing) and down (since \( y \) is decreasing) to \( \left( {0,0}\right) \) (where \( t = 0 \) ). We continue to move to the right (since \( x \) is still increasing) but now move upwards (since \( y \) is now increasing) until we reach \( \left( {1,2}\right) \) (where \( t = 1 \) ). Finally, to get a good sense of the shape of the curve, we eliminate the parameter.
Solving \( x = {t}^{3} \) for \( t \), we get \( t = \sqrt[3]{x} \) . Substituting this into \( y = 2{t}^{2} \) gives \( y = 2{\left( \sqrt[3]{x}\right) }^{2} = 2{x}^{2/3} \) . Our experience in Section 5.3 yields the graph of our final answer below. ![22c64c1a-3807-4372-b206-67aa5b3452be_366_0.jpg](images/22c64c1a-3807-4372-b206-67aa5b3452be_366_0.jpg)
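As a numerical check of the parameter elimination, every point \( \left( {{t}^{3},2{t}^{2}}\right) \) should satisfy \( y = 2{x}^{2/3} \) . In the Python sketch below, \( {x}^{2/3} \) is computed as \( {\left( {x}^{2}\right) }^{1/3} \) so that negative \( x \) stays real:

```python
# Check that (x, y) = (t^3, 2t^2) lies on y = 2*x^(2/3) for t in [-1, 1].
ts = [i / 10 for i in range(-10, 11)]
for t in ts:
    x, y = t**3, 2 * t**2
    # x^(2/3) written as (x^2)^(1/3) to stay real for negative x
    assert abs(y - 2 * (x * x) ** (1 / 3)) < 1e-12
print("all sampled points satisfy y = 2*x^(2/3)")
```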
Yes
Find a parametrization for each of the following curves and check your answers.\n\n1. \( y = {x}^{2} \) from \( x = - 3 \) to \( x = 2 \)
Since \( y = {x}^{2} \) is written in the form \( y = f\left( x\right) \), we let \( x = t \) and \( y = f\left( t\right) = {t}^{2} \) . Since \( x = t \), the bounds on \( t \) match precisely the bounds on \( x \) so we get \( \left\{ {x = t, y = {t}^{2}}\right. \) for \( - 3 \leq t \leq 2 \) . The check is almost trivial; with \( x = t \) we have \( y = {t}^{2} = {x}^{2} \) as \( t = x \) runs from -3 to 2 .
Yes
Find a parametrization for the following curves.\n\n1. The curve which starts at \( \left( {2,4}\right) \) and follows the parabola \( y = {x}^{2} \) to end at \( \left( {-1,1}\right) \) . Shift the parameter so that the path starts at \( t = 0 \) .
Solution.\n\n1. We can parametrize \( y = {x}^{2} \) from \( x = - 1 \) to \( x = 2 \) using the formula given on Page 1053 as \( \left\{ {x = t, y = {t}^{2}}\right. \) for \( - 1 \leq t \leq 2 \) . This parametrization, however, starts at \( \left( {-1,1}\right) \) and ends at \( \left( {2,4}\right) \) . Hence, we need to reverse the orientation. To do so, we replace every occurrence of \( t \) with \( - t \) to get \( \left\{ {x = - t, y = {\left( -t\right) }^{2}}\right. \) for \( - 1 \leq - t \leq 2 \) . After simplifying, we get \( \left\{ {x = - t, y = {t}^{2}}\right. \) for \( - 2 \leq t \leq 1 \) . We would like \( t \) to begin at \( t = 0 \) instead of \( t = - 2 \) . The problem here is that the parametrization we have starts 2 units 'too soon', so we need to introduce a 'time delay’ of 2. Replacing every occurrence of \( t \) with \( \left( {t - 2}\right) \) gives \( \{ x = - \left( {t - 2}\right), y = {\left( t - 2\right) }^{2} \) for \( - 2 \leq t - 2 \leq 1 \) . Simplifying yields \( \left\{ {x = 2 - t, y = {t}^{2} - {4t} + 4}\right. \) for \( 0 \leq t \leq 3 \) .
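The final parametrization is easy to check numerically (a Python sketch): it should start at \( \left( {2,4}\right) \) when \( t = 0 \), end at \( \left( {-1,1}\right) \) when \( t = 3 \), and stay on the parabola \( y = {x}^{2} \) throughout.

```python
# Check x = 2 - t, y = t^2 - 4t + 4 for 0 <= t <= 3.
def x(t):
    return 2 - t

def y(t):
    return t**2 - 4 * t + 4

assert (x(0), y(0)) == (2, 4)    # starts at (2, 4)
assert (x(3), y(3)) == (-1, 1)   # ends at (-1, 1)
for i in range(31):
    t = 3 * i / 30
    assert abs(y(t) - x(t) ** 2) < 1e-12   # stays on y = x^2
print("parametrization verified")
```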
Yes
Find the parametric equations of a cycloid which results from a circle of radius 3 rolling down the positive \( x \) -axis as described above.
Solution. We have \( r = 3 \) which gives the equations \( \{ x = 3\left( {t - \sin \left( t\right) }\right), y = 3\left( {1 - \cos \left( t\right) }\right) \) for \( t \geq 0 \) . (Here we have returned to the convention of using \( t \) as the parameter.) Sketching the cycloid by hand is a wonderful exercise in Calculus, but for the purposes of this book, we use a graphing utility. Using a calculator to graph parametric equations is very similar to graphing polar equations on a calculator. \( {}^{13} \) Ensuring that the calculator is in ’Parametric Mode’ and ’radian mode’ we enter the equations and advance to the 'Window' screen.
Yes
1. Find \( {P}_{3}\left( x\right) \) about \( {x}_{0} = \pi /2 \) for \( f\left( x\right) = x\cos x - x \), and use it to approximate \( f\left( {0.8}\right) \) .
Solution. 1. First note that \( f\left( {\pi /2}\right) = - \pi /2 \) . Differentiating \( f \) we get:\n\n\[ \n{f}^{\prime }\left( x\right) = \cos x - x\sin x - 1 \Rightarrow {f}^{\prime }\left( {\pi /2}\right) = - \pi /2 - 1 \n\]\n\n\[ \n{f}^{\prime \prime }\left( x\right) = - 2\sin x - x\cos x \Rightarrow {f}^{\prime \prime }\left( {\pi /2}\right) = - 2 \n\]\n\n\[ \n{f}^{\prime \prime \prime }\left( x\right) = - 3\cos x + x\sin x \Rightarrow {f}^{\prime \prime \prime }\left( {\pi /2}\right) = \pi /2. \n\]\n\nTherefore\n\n\[ \n{P}_{3}\left( x\right) = - \pi /2 - \left( {\pi /2 + 1}\right) \left( {x - \pi /2}\right) - {\left( x - \pi /2\right) }^{2} + \frac{\pi }{12}{\left( x - \pi /2\right) }^{3}. \n\]\n\nThen we approximate \( f\left( {0.8}\right) \) by \( {P}_{3}\left( {0.8}\right) = - {0.3033} \) (using 4-digits with rounding).
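The computation is easy to verify numerically; the derivatives above pin down \( f\left( x\right) = x\cos x - x \) . A Python sketch of the cubic Taylor polynomial:

```python
import math

# Cubic Taylor polynomial of f(x) = x*cos(x) - x about x0 = pi/2,
# using the coefficients computed above.
x0 = math.pi / 2

def P3(x):
    h = x - x0
    return -math.pi / 2 - (math.pi / 2 + 1) * h - h**2 + (math.pi / 12) * h**3

print(round(P3(0.8), 4))  # -0.3033 with 4-digit rounding
```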
Yes
Example 9. Find the floating-point representation of 10.375.
Solution. You can check that \( {10} = {\left( {1010}\right) }_{2} \) and \( {0.375} = {\left( {.011}\right) }_{2} \) by computing\n\n\[ \n{10} = 0 \times {2}^{0} + 1 \times {2}^{1} + 0 \times {2}^{2} + 1 \times {2}^{3} \n\] \n\n\[ \n{0.375} = 0 \times {2}^{-1} + 1 \times {2}^{-2} + 1 \times {2}^{-3}. \n\] \n\nThen \n\n\[ \n{10.375} = {\left( {1010.011}\right) }_{2} = {\left( {1.010011}\right) }_{2} \times {2}^{3} \n\] \n\nwhere \( {\left( {1.010011}\right) }_{2} \times {2}^{3} \) is the normalized floating-point representation of the number. Now we rewrite this in terms of the representation (1.2): \n\n\[ \n{10.375} = {\left( -1\right) }^{0}{\left( {1.010011}\right) }_{2} \times {2}^{{1026} - {1023}}. \n\] \n\nSince \( {1026} = {\left( {10000000010}\right) }_{2} \), the bit by bit representation is: \n\n<table><tr><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>1</td><td>0</td><td>0</td><td>1</td><td>0</td><td>0</td><td>1</td><td>1</td><td>0</td><td>...</td><td>0</td></tr></table>\n\nNotice the first sign bit is 0 since the number is positive. The next 11 bits (in blue) represent the exponent \( e = {1026} \), and the next group of red bits are the mantissa, filled with 0 ’s after the last digit of the mantissa. In Julia, we can get this bit by bit representation by typing bitstring(10.375): \n\nIn [1]: bitstring(10.375) ![789ddd7c-ef09-4d86-9656-bffbeedf0540_33_0.jpg](images/789ddd7c-ef09-4d86-9656-bffbeedf0540_33_0.jpg)
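The same bit pattern can be produced outside Julia. A Python sketch using the standard struct module:

```python
import struct

# 64-bit IEEE double of 10.375: 1 sign bit, 11 exponent bits, 52 mantissa bits.
bits = ''.join(f'{byte:08b}' for byte in struct.pack('>d', 10.375))

sign, exponent, mantissa = bits[0], bits[1:12], bits[12:]
print(sign, exponent, mantissa)
# sign = 0, exponent = (10000000010)_2 = 1026, mantissa = 010011 padded with 0's
```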
Yes
Find 5-digit \( \left( {k = 5}\right) \) chopping and rounding values of the numbers below:\n\n- \( \pi = {0.314159265}\ldots \times {10}^{1} \)
Chopping gives \( {fl}\left( \pi \right) = {0.31415} \times {10}^{1} \) and rounding gives \( {fl}\left( \pi \right) = {0.31416} \times {10}^{1} \) .
Yes
Find absolute and relative errors of\n\n1. \( x = {0.20} \times {10}^{1},{x}^{ * } = {0.21} \times {10}^{1} \)\n\n2. \( x = {0.20} \times {10}^{-2},{x}^{ * } = {0.21} \times {10}^{-2} \)\n\n3. \( x = {0.20} \times {10}^{5},{x}^{ * } = {0.21} \times {10}^{5} \)
Notice how the only difference in the three cases is the exponent of the numbers. The absolute errors are: \( {0.01} \times {10}^{1},{0.01} \times {10}^{-2},{0.01} \times {10}^{5} \) . The absolute errors are different since the exponents are different. However, the relative error in each case is the same: 0.05 .
Yes
Lemma 15. The relative error of approximating \( x \) by \( {fl}\left( x\right) \) in the \( k \) -digit normalized decimal floating-point representation satisfies
Proof. We will give the proof for chopping; the proof for rounding is similar but tedious. Let\n\n\[ x = 0.{d}_{1}{d}_{2}\ldots {d}_{k}{d}_{k + 1}\ldots \times {10}^{n}. \]\n\nThen\n\n\[ {fl}\left( x\right) = 0.{d}_{1}{d}_{2}\ldots {d}_{k} \times {10}^{n} \]\n\nif chopping is used. Observe that\n\n\[ \frac{\left| x - fl\left( x\right) \right| }{\left| x\right| } = \frac{0.{d}_{k + 1}{d}_{k + 2}\ldots \times {10}^{n - k}}{0.{d}_{1}{d}_{2}\ldots \times {10}^{n}} = \left( \frac{0.{d}_{k + 1}{d}_{k + 2}\ldots }{0.{d}_{1}{d}_{2}\ldots }\right) {10}^{-k}. \]\n\nWe have two simple bounds: \( 0.{d}_{k + 1}{d}_{k + 2}\ldots < 1 \) and \( 0.{d}_{1}{d}_{2}\ldots \geq {0.1} \), the latter true since the smallest \( {d}_{1} \) can be, is 1 . Using these bounds in the equation above we get\n\n\[ \frac{\left| x - fl\left( x\right) \right| }{\left| x\right| } \leq \frac{1}{0.1}{10}^{-k} = {10}^{-k + 1}. \]
Yes
Let's revisit the calculation of\n\n\[ \frac{1 - \cos x}{\sin x}. \]
Observe that using the algebraic identity\n\n\[ \frac{1 - \cos x}{\sin x} = \frac{\sin x}{1 + \cos x} \]\n\nremoves both difficulties encountered before: there is no cancellation of significant digits and division by a small number. Using five-digit rounding, we have\n\n\[ {fl}\left( \frac{\sin {0.1}}{1 + \cos {0.1}}\right) = {0.050042}. \]\n\nThe relative error is \( {5.8} \times {10}^{-6} \), about a factor of 100 smaller than the error in the original computation.
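Five-digit rounding can be simulated with Python's decimal module, which rounds every operation to the context precision (an illustrative sketch; the text's own examples use Julia):

```python
from decimal import Decimal, getcontext
import math

getcontext().prec = 5  # five significant digits with rounding

# Five-digit values of sin(0.1) and cos(0.1); unary + rounds to the context.
s = +Decimal(repr(math.sin(0.1)))  # 0.099833
c = +Decimal(repr(math.cos(0.1)))  # 0.99500

naive = (Decimal(1) - c) / s       # (1 - cos x)/sin x : cancellation in 1 - cos x
stable = s / (Decimal(1) + c)      # sin x/(1 + cos x) : no cancellation

print(naive, stable)  # 0.050084  0.050042
```

The stable form reproduces the text's value 0.050042, while the naive form loses digits.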
Yes
Consider the quadratic formula: the solution of \( a{x}^{2} + {bx} + c = 0 \) is\n\n\[ \n{r}_{1} = \frac{-b + \sqrt{{b}^{2} - {4ac}}}{2a},{r}_{2} = \frac{-b - \sqrt{{b}^{2} - {4ac}}}{2a}.\n\]\n\nIf \( \left| b\right| \approx \sqrt{{b}^{2} - {4ac}} \), then we have a potential loss of precision in computing one of the roots due to cancellation. Let’s consider a specific equation: \( {x}^{2} - {11x} + 1 = 0 \) . The roots from the quadratic formula are: \( {r}_{1} = \frac{{11} + \sqrt{117}}{2} \approx {10.90832691} \), and \( {r}_{2} = \frac{{11} - \sqrt{117}}{2} \approx {0.09167308680} \) .
Next we will use four-digit arithmetic with rounding to compute the roots:\n\n\[ \n{fl}\left( \sqrt{117}\right) = {10.82}\n\]\n\n\[ \n{fl}({r}_{1}) = {fl}\left( \frac{{fl}({fl}({11.0}) + {fl}(\sqrt{117}))}{{fl}({2.0})}\right) = {fl}\left( \frac{{fl}({11.0} + {10.82})}{2.0}\right) = {fl}\left( \frac{21.82}{2.0}\right) = {10.91}\n\]\n\n\[ \n{fl}\left( {r}_{2}\right) = {fl}\left( \frac{{fl}\left( {{fl}\left( {11.0}\right) - {fl}\left( \sqrt{117}\right) }\right) }{{fl}\left( {2.0}\right) }\right) = {fl}\left( \frac{{fl}\left( {{11.0} - {10.82}}\right) }{2.0}\right) = {fl}\left( \frac{0.18}{2.0}\right) = {0.09}.\n\]\n\nThe relative errors are:\n\n\[ \n\text{ rel error in }{r}_{1} = \left| \frac{{10.90832691} - {10.91}}{10.90832691}\right| = {1.5} \times {10}^{-4}\n\]\n\n\[ \n\text{ rel error in }{r}_{2} = \left| \frac{{0.09167308680} - {0.09}}{0.09167308680}\right| = {1.8} \times {10}^{-2}.\n\]\n\nNotice the larger relative error in \( {r}_{2} \) compared to that of \( {r}_{1} \), about a factor of 100, which is due to cancellation of leading digits when we compute \( {11.0} - {10.82} \) .\n\nOne way to fix this problem is to rewrite the offending expression by rationalizing the numerator:\n\n\[ \n{r}_{2} = \frac{{11.0} - \sqrt{117}}{2} = \frac{\left( {{11.0} - \sqrt{117}}\right) \left( {{11.0} + \sqrt{117}}\right) }{2\left( {{11.0} + \sqrt{117}}\right) } = \frac{{121} - {117}}{2\left( {{11.0} + \sqrt{117}}\right) } = \frac{4}{2\left( {{11.0} + \sqrt{117}}\right) } = \frac{2}{{11.0} + \sqrt{117}}.\n\]\n\nIf we use this formula to compute \( {r}_{2} \) we get:\n\n\[ \n{fl}\left( {r}_{2}\right) = {fl}\left( \frac{2.0}{{fl}\left( {{11.0} + {fl}\left( \sqrt{117}\right) }\right) }\right) = {fl}\left( \frac{2.0}{21.82}\right) = {0.09166}.\n\]\n\nThe new relative error in \( {r}_{2} \) is:\n\n\[ \n\text{ rel error in }{r}_{2} = \left| \frac{{0.09167308680} - {0.09166}}{0.09167308680}\right| = {1.4} \times {10}^{-4},\n\]\n\nan improvement about a factor of 100, even though in the new way of computing \( {r}_{2} \) there are two operations
where rounding error happens instead of one.
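The four-digit computation can be reproduced with Python's decimal module, which rounds every operation to the context precision (an illustrative sketch):

```python
from decimal import Decimal, getcontext

getcontext().prec = 4  # four-digit arithmetic with rounding

sqrt117 = Decimal(117).sqrt()                    # fl(sqrt(117)) = 10.82
r1 = (Decimal(11) + sqrt117) / 2                 # 10.91
r2_naive = (Decimal(11) - sqrt117) / 2           # 0.09 : cancellation in 11.0 - 10.82
r2_fixed = Decimal(2) / (Decimal(11) + sqrt117)  # 2/21.82 = 0.09166

print(r1, r2_naive, r2_fixed)
```

The rationalized formula recovers two of the digits that the naive subtraction destroyed.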
Yes
For a simple example, consider using four-digit arithmetic with rounding, and computing the average of two numbers, \( \frac{a + b}{2} \) . For \( a = {2.954} \) and \( b = {100.9} \), the true average is 51.927 . However, four-digit arithmetic with rounding yields:
\[ {fl}\left( \frac{{100.9} + {2.954}}{2}\right) = {fl}\left( \frac{{fl}\left( {103.854}\right) }{2}\right) = {fl}\left( \frac{103.9}{2}\right) = {51.95} \] which has a relative error of \( {4.43} \times {10}^{-4} \) . If we rewrite the averaging formula as \( a + \frac{b - a}{2} \), on the other hand, we obtain 51.93, which has a much smaller relative error of \( {5.78} \times {10}^{-5} \) . The following table displays the exact and 4-digit computations, with the corresponding relative error at each step.\n\n<table><thead><tr><th></th><th>\( a \)</th><th>\( b \)</th><th>\( a + b \)</th><th>\( \frac{a + b}{2} \)</th><th>\( b - a \)</th><th>\( \frac{b - a}{2} \)</th><th>\( a + \frac{b - a}{2} \)</th></tr></thead><tr><td>4-digit rounding</td><td>2.954</td><td>100.9</td><td>103.9</td><td>51.95</td><td>97.95</td><td>48.98</td><td>51.93</td></tr><tr><td>Exact</td><td></td><td></td><td>103.854</td><td>51.927</td><td>97.946</td><td>48.973</td><td>51.927</td></tr><tr><td>Relative error</td><td></td><td></td><td>\( {4.43}\mathrm{e} - 4 \)</td><td>\( {4.43}\mathrm{e} - 4 \)</td><td>\( {4.08}\mathrm{e} - 5 \)</td><td>\( {1.43}\mathrm{e} - 4 \)</td><td>\( {5.78}\mathrm{e} - 5 \)</td></tr></table>
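The table can be reproduced with Python's decimal module at four-digit precision (a sketch):

```python
from decimal import Decimal, getcontext

getcontext().prec = 4  # four-digit rounding arithmetic

a, b = Decimal('2.954'), Decimal('100.9')

avg1 = (a + b) / 2      # (a + b)/2 : a + b already rounds 103.854 -> 103.9
avg2 = a + (b - a) / 2  # a + (b - a)/2 : smaller relative error

print(avg1, avg2)  # 51.95  51.93
```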
Yes
There are two standard formulas given in textbooks to compute the sample variance \( {s}^{2} \) of the numbers \( {x}_{1},\ldots ,{x}_{n} \) :\n\n1. \( {s}^{2} = \frac{1}{n - 1}\left\lbrack {\mathop{\sum }\limits_{{i = 1}}^{n}{x}_{i}^{2} - \frac{1}{n}{\left( \mathop{\sum }\limits_{{i = 1}}^{n}{x}_{i}\right) }^{2}}\right\rbrack \) ,\n\n2. First compute \( \bar{x} = \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}{x}_{i} \), and then \( {s}^{2} = \frac{1}{n - 1}\mathop{\sum }\limits_{{i = 1}}^{n}{\left( {x}_{i} - \bar{x}\right) }^{2} \) .
For an example, consider four-digit rounding arithmetic, and let the data be \( {1.253},{2.411},{3.174} \) . The sample variances computed from formula 1 and formula 2 are 0.93 and 0.9355, respectively. The exact value, up to 6 digits, is 0.935562 . Formula 2 is a numerically more stable choice for computing the variance than the first one.
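The two formulas can be compared under simulated four-digit rounding with Python's decimal module (a sketch; every arithmetic operation is rounded to 4 significant digits):

```python
from decimal import Decimal, getcontext
from functools import reduce

getcontext().prec = 4  # four-digit rounding arithmetic
data = [Decimal('1.253'), Decimal('2.411'), Decimal('3.174')]
n = len(data)

# Formula 1: (sum of squares - (sum)^2/n)/(n - 1), cancellation-prone.
sum_sq = reduce(lambda s, x: s + x * x, data, Decimal(0))
total = reduce(lambda s, x: s + x, data, Decimal(0))
var1 = (sum_sq - total * total / n) / (n - 1)

# Formula 2: two-pass formula using the mean, numerically more stable.
mean = total / n
var2 = reduce(lambda s, x: s + (x - mean) ** 2, data, Decimal(0)) / (n - 1)

print(var1, var2)  # 0.93  0.9355 (exact value: 0.935562)
```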
No
Example 22. We have a sum to compute:\n\n\[ \n{e}^{-7} = 1 + \frac{-7}{1} + \frac{{\left( -7\right) }^{2}}{2!} + \frac{{\left( -7\right) }^{3}}{3!} + \ldots + \frac{{\left( -7\right) }^{n}}{n!}.\n\]
Julia reports an inaccurate value for this sum. The difficulty is cancellation: the terms alternate in sign and grow as large as \( {7}^{7}/7! \approx {163.4} \) in magnitude before they decay, while the true value \( {e}^{-7} \approx {0.000911882} \) is tiny, so the leading digits of the large intermediate terms cancel and their rounding errors dominate the final result.
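One remedy for the cancellation in the alternating series is to sum the all-positive series for \( {e}^{7} \) and take the reciprocal, \( {e}^{-7} = 1/{e}^{7} \) . A Python sketch of both routes (in double precision the effect is mild but measurable; math.exp serves as the reference value):

```python
import math

def exp_series(x, n_terms=60):
    """Partial sum 1 + x + x^2/2! + ..., computed term by term."""
    total, term = 1.0, 1.0
    for n in range(1, n_terms):
        term *= x / n
        total += term
    return total

direct = exp_series(-7.0)       # alternating series: heavy cancellation
recip = 1.0 / exp_series(7.0)   # all-positive series, then reciprocal

err_direct = abs(direct - math.exp(-7)) / math.exp(-7)
err_recip = abs(recip - math.exp(-7)) / math.exp(-7)
print(err_direct, err_recip)  # the reciprocal route is far more accurate
```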
No
A quantity and its \( 1/4 \) added together become 15 . What is the quantity?
Assume 4.\n\n\( \smallsetminus 1 \) 4\n\n\( \smallsetminus 1/4 \) 1\n\nTotal 5\n\nAs many times as 5 must be multiplied to give 15, so many times 4 must be multiplied to give the required number. Multiply 5 so as to get 15.\n\n\( \smallsetminus 1 \) 5\n\n\( \smallsetminus 2 \) 10\n\nTotal 3\n\nMultiply 3 by 4.\n\n1 3\n\n2 6\n\n\( \smallsetminus 4 \) 12\n\nThe quantity is\n\n1 12\n\n1/4 3\n\nTotal 15\n\nArya's instructor knows she has taken math classes and asks her if she could decipher this solution. Although Arya's initial reaction to this assignment can be best described using the word
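The scribe's method is linear scaling (false position): evaluate the left-hand side at a convenient guess, then rescale the guess proportionally. A minimal Python sketch of the same arithmetic:

```python
# Method of false position for the linear problem x + x/4 = 15:
# guess g, evaluate, then rescale the guess proportionally.
def false_position(target, guess):
    value = guess + guess / 4        # "a quantity and its 1/4"
    return guess * (target / value)  # scale the guess: here 4 * (15/5) = 12

x = false_position(15, 4)
print(x)  # 12.0
assert x + x / 4 == 15  # check: 12 + 3 = 15
```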
No
Consider the sequences defined by\n\n\[ \n{p}_{n + 1} = {0.7}{p}_{n}\text{ and }{p}_{1} = 1 \n\]\n\n\[ \n{p}_{n + 1} = {0.7}{p}_{n}^{2}\text{ and }{p}_{1} = 1 \n\]
The first sequence converges to 0 linearly, and the second quadratically. Here are a few iterations of the sequences:\n\n<table><thead><tr><th>\( n \)</th><th>Linear</th><th>Quadratic</th></tr></thead><tr><td>1</td><td>0.7</td><td>0.7</td></tr><tr><td>4</td><td>0.24</td><td>\( {4.75} \times {10}^{-3} \)</td></tr><tr><td>8</td><td>\( {5.76} \times {10}^{-2} \)</td><td>\( {3.16} \times {10}^{-{40}} \)</td></tr></table>\n\nObserve how fast quadratic convergence is compared to linear convergence.
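The table can be reproduced directly from the two recurrences (a Python sketch):

```python
# Linear vs. quadratic convergence to 0, as in the table above.
p_lin, p_quad = 1.0, 1.0
history = []
for n in range(1, 9):
    p_lin = 0.7 * p_lin        # linear:    p_{n+1} = 0.7 p_n
    p_quad = 0.7 * p_quad**2   # quadratic: p_{n+1} = 0.7 p_n^2
    history.append((n, p_lin, p_quad))

print(history[-1])  # n = 8: about 5.76e-2 vs 3.16e-40
```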
No
Compute the first three iterations by hand for the function plotted in Figure (2.2).
Step 1 : To start, we need to pick an interval \( \left\lbrack {a, b}\right\rbrack \) that contains the root, that is, \( f\left( a\right) f\left( b\right) < 0 \) . From the plot, it is clear that \( \left\lbrack {0,4}\right\rbrack \) is a possible choice. In the next few steps, we will be working with a sequence of intervals. For convenience, let's label them as \( \left\lbrack {a, b}\right\rbrack = \left\lbrack {{a}_{1},{b}_{1}}\right\rbrack ,\left\lbrack {{a}_{2},{b}_{2}}\right\rbrack ,\left\lbrack {{a}_{3},{b}_{3}}\right\rbrack \), etc. Our first interval is then \( \left\lbrack {{a}_{1},{b}_{1}}\right\rbrack = \left\lbrack {0,4}\right\rbrack \) . Next we find the midpoint of the interval, \( {p}_{1} = 4/2 = 2 \), and use it to obtain two subintervals \( \left\lbrack {0,2}\right\rbrack \) and \( \left\lbrack {2,4}\right\rbrack \) . Only one of them contains the root, and that is \( \left\lbrack {2,4}\right\rbrack \) .\n\nStep 2: From the previous step, our current interval is \( \left\lbrack {{a}_{2},{b}_{2}}\right\rbrack = \left\lbrack {2,4}\right\rbrack \) . We find the midpoint \( {}^{1} \) \( {p}_{2} = \frac{2 + 4}{2} = 3 \), and form the subintervals \( \left\lbrack {2,3}\right\rbrack ,\left\lbrack {3,4}\right\rbrack \) . The one that contains the root is \( \left\lbrack {3,4}\right\rbrack \) .\n\nStep 3: We have \( \left\lbrack {{a}_{3},{b}_{3}}\right\rbrack = \left\lbrack {3,4}\right\rbrack \) . The midpoint is \( {p}_{3} = {3.5} \) . We are now pretty close to the root visually, and we stop the calculations!
Yes
Theorem 28. Suppose that \( f \in {C}^{0}\left\lbrack {a, b}\right\rbrack \) and \( f\left( a\right) f\left( b\right) < 0 \) . The bisection method generates a sequence \( \left\{ {p}_{n}\right\} \) approximating a zero \( p \) of \( f\left( x\right) \) with\n\n\[ \left| {{p}_{n} - p}\right| \leq \frac{b - a}{{2}^{n}}, n \geq 1. \]
Proof. Let the sequences \( \left\{ {a}_{n}\right\} \) and \( \left\{ {b}_{n}\right\} \) denote the left-end and right-end points of the subintervals generated by the bisection method. Since at each step the interval is halved, we have\n\n\[ {b}_{n} - {a}_{n} = \frac{1}{2}\left( {{b}_{n - 1} - {a}_{n - 1}}\right) . \]\n\nBy mathematical induction, we get\n\n\[ {b}_{n} - {a}_{n} = \frac{1}{2}\left( {{b}_{n - 1} - {a}_{n - 1}}\right) = \frac{1}{{2}^{2}}\left( {{b}_{n - 2} - {a}_{n - 2}}\right) = \ldots = \frac{1}{{2}^{n - 1}}\left( {{b}_{1} - {a}_{1}}\right) . \]\n\nTherefore \( {b}_{n} - {a}_{n} = \frac{1}{{2}^{n - 1}}\left( {b - a}\right) \) . Observe that\n\n\[ \left| {{p}_{n} - p}\right| \leq \frac{1}{2}\left( {{b}_{n} - {a}_{n}}\right) = \frac{1}{{2}^{n}}\left( {b - a}\right) \]\n\nand thus \( \left| {{p}_{n} - p}\right| \rightarrow 0 \) as \( n \rightarrow \infty \) .
Yes
Corollary 29. The bisection method has linear convergence.
Proof. The bisection method does not satisfy (2.1) for any \( C < 1 \), but it satisfies a variant of (2.2) with \( C = 1/2 \) from the previous theorem.
Yes
Example 30. Determine the number of iterations necessary to solve \( f\left( x\right) = {x}^{5} + 2{x}^{3} - {5x} - 2 = 0 \) with accuracy \( {10}^{-4}, a = 0, b = 2 \).
Solution. Since \( n \geq {\log }_{2}\left( \frac{2}{{10}^{-4}}\right) = 4{\log }_{2}{10} + 1 \approx {14.3} \), the number of required iterations is 15.
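Both the iteration count and the error bound are easy to confirm by running the bisection method itself (a Python sketch; the book's own implementations are in Julia):

```python
import math

def f(x):
    return x**5 + 2 * x**3 - 5 * x - 2

a, b, tol = 0.0, 2.0, 1e-4

# Number of iterations guaranteed by |p_n - p| <= (b - a)/2^n
n = math.ceil(math.log2((b - a) / tol))
print(n)  # 15

# Run the bisection method for n steps, keeping the sign change bracketed.
lo, hi = a, b
for _ in range(n):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
p = (lo + hi) / 2  # midpoint of the final interval
```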
Yes
Corollary 33. Newton's method has quadratic convergence.
Proof. Recall that quadratic convergence means\n\n\[ \left| {{p}_{n + 1} - p}\right| \leq C{\left| {p}_{n} - p\right| }^{2} \]\n\nfor some constant \( C > 0 \) . Taking the absolute values of the limit established in the previous theorem, we obtain\n\n\[ \mathop{\lim }\limits_{{n \rightarrow \infty }}\left| \frac{p - {p}_{n + 1}}{{\left( p - {p}_{n}\right) }^{2}}\right| = \mathop{\lim }\limits_{{n \rightarrow \infty }}\frac{\left| {p}_{n + 1} - p\right| }{{\left| {p}_{n} - p\right| }^{2}} = \left| {\frac{1}{2}\frac{{f}^{\prime \prime }\left( p\right) }{{f}^{\prime }\left( p\right) }}\right| . \]\n\nLet \( {C}^{\prime } = \left| {\frac{1}{2}\frac{{f}^{\prime \prime }\left( p\right) }{{f}^{\prime }\left( p\right) }}\right| \) . From the definition of limit of a sequence, for any \( \epsilon > 0 \), there exists an integer \( N > 0 \) such that \( \frac{\left| {p}_{n + 1} - p\right| }{{\left| {p}_{n} - p\right| }^{2}} < {C}^{\prime } + \epsilon \) whenever \( n > N \) . Set \( C = {C}^{\prime } + \epsilon \) to obtain \( \left| {{p}_{n + 1} - p}\right| \leq C{\left| {p}_{n} - p\right| }^{2} \) for \( n > N \) .
Yes
Theorem 35. Let \( f \in {C}^{2}\left\lbrack {a, b}\right\rbrack \) and assume \( f\left( p\right) = 0,{f}^{\prime }\left( p\right) \neq 0 \), for \( p \in \left( {a, b}\right) \) . If the initial guesses \( {p}_{0},{p}_{1} \) are sufficiently close to \( p \), then the iterates of the secant method converge to \( p \)
with\n\[\n\mathop{\lim }\limits_{{n \rightarrow \infty }}\frac{\left| p - {p}_{n + 1}\right| }{{\left| p - {p}_{n}\right| }^{{r}_{0}}} = {\left| \frac{{f}^{\prime \prime }\left( p\right) }{2{f}^{\prime }\left( p\right) }\right| }^{{r}_{1}}\n\]\n\nwhere \( {r}_{0} = \frac{\sqrt{5} + 1}{2} \approx {1.62},{r}_{1} = \frac{\sqrt{5} - 1}{2} \approx {0.62} \)
Yes
If \( g \) is a continuous function on \( \left\lbrack {a, b}\right\rbrack \) and \( g\left( x\right) \in \left\lbrack {a, b}\right\rbrack \) for all \( x \in \left\lbrack {a, b}\right\rbrack \) , then \( g \) has at least one fixed-point in \( \left\lbrack {a, b}\right\rbrack \) .
Consider \( f\left( x\right) = g\left( x\right) - x \) . Assume \( g\left( a\right) \neq a \) and \( g\left( b\right) \neq b \) (otherwise the proof is over.) Then \( f\left( a\right) = g\left( a\right) - a > 0 \) since \( g\left( a\right) \) must be greater than \( a \) if it’s not equal to \( a \) . Similarly, \( f\left( b\right) = g\left( b\right) - b < 0 \) . Then from IVT, there exists \( p \in \left( {a, b}\right) \) such that \( f\left( p\right) = 0 \), or \( g\left( p\right) = p \) .
Yes
Theorem 40. If \( g \) is a continuous function on \( \left\lbrack {a, b}\right\rbrack \) satisfying the conditions\n\n1. \( g\left( x\right) \in \left\lbrack {a, b}\right\rbrack \) for all \( x \in \left\lbrack {a, b}\right\rbrack \) ,\n\n2. \( \left| {g\left( x\right) - g\left( y\right) }\right| \leq \lambda \left| {x - y}\right| \), for \( x, y \in \left\lbrack {a, b}\right\rbrack \) where \( 0 < \lambda < 1 \) ,\n\nthen the fixed-point iteration\n\n\[ \n{p}_{n} = g\left( {p}_{n - 1}\right), n \geq 1 \n\]\n\nconverges to \( p \), the unique fixed-point of \( g \) in \( \left\lbrack {a, b}\right\rbrack \), for any starting point \( {p}_{0} \in \left\lbrack {a, b}\right\rbrack \) .
Proof. Since \( {p}_{0} \in \left\lbrack {a, b}\right\rbrack \) and \( g\left( x\right) \in \left\lbrack {a, b}\right\rbrack \) for all \( x \in \left\lbrack {a, b}\right\rbrack \), all iterates \( {p}_{n} \in \left\lbrack {a, b}\right\rbrack \) . Observe that\n\n\[ \n\left| {p - {p}_{n}}\right| = \left| {g\left( p\right) - g\left( {p}_{n - 1}\right) }\right| \leq \lambda \left| {p - {p}_{n - 1}}\right| .\n\]\n\nThen by induction, \( \left| {p - {p}_{n}}\right| \leq {\lambda }^{n}\left| {p - {p}_{0}}\right| \) . Since \( 0 < \lambda < 1,{\lambda }^{n} \rightarrow 0 \) as \( n \rightarrow \infty \), and thus \( {p}_{n} \rightarrow p \) .
Yes
Consider the root-finding problem \( {x}^{3} - 2{x}^{2} - 1 = 0 \) on \( \left\lbrack {1,3}\right\rbrack \).
1. There are several ways we can write this problem as \( g\left( x\right) = x \):\n\n(a) Let \( f\left( x\right) = {x}^{3} - 2{x}^{2} - 1 \), and \( p \) be its root, that is, \( f\left( p\right) = 0 \). If we let \( g\left( x\right) = \) \( x - f\left( x\right) \), then \( g\left( p\right) = p - f\left( p\right) = p \), so \( p \) is a fixed-point of \( g \). However, this choice for \( g \) will not be helpful, since \( g \) does not satisfy the first condition of Theorem 40: \( g\left( x\right) \notin \left\lbrack {1,3}\right\rbrack \) for all \( x \in \left\lbrack {1,3}\right\rbrack \left( {g\left( 3\right) = - 5 \notin \left\lbrack {1,3}\right\rbrack }\right) \).\n\n(b) Since \( p \) is a root for \( f \), we have \( {p}^{3} = 2{p}^{2} + 1 \), or \( p = {\left( 2{p}^{2} + 1\right) }^{1/3} \). Therefore, \( p \) is the solution to the fixed-point problem \( g\left( x\right) = x \) where \( g\left( x\right) = {\left( 2{x}^{2} + 1\right) }^{1/3} \).\n\n- \( g \) is increasing on \( \left\lbrack {1,3}\right\rbrack \) and \( g\left( 1\right) = {1.44}, g\left( 3\right) = {2.67} \), thus \( g\left( x\right) \in \left\lbrack {1,3}\right\rbrack \) for all \( x \in \left\lbrack {1,3}\right\rbrack \). Therefore, \( g \) satisfies the first condition of Theorem 40 .\n\n- \( {g}^{\prime }\left( x\right) = \frac{4x}{3{\left( 2{x}^{2} + 1\right) }^{2/3}} \) and \( {g}^{\prime }\left( 1\right) = {0.64},{g}^{\prime }\left( 3\right) = {0.56} \) and \( {g}^{\prime } \) is decreasing on \( \left\lbrack {1,3}\right\rbrack \). Therefore \( g \) satisfies the condition in Remark 41 with \( \lambda = {0.64} \).\n\nThen, from Theorem 40 and Remark 41, the fixed-point iteration converges if \( g\left( x\right) = {\left( 2{x}^{2} + 1\right) }^{1/3}. \)
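Running the fixed-point iteration with this \( g \) confirms the convergence (a Python sketch; the limit is the root of \( {x}^{3} - 2{x}^{2} - 1 = 0 \), since \( g\left( p\right) = p \) implies \( 2{p}^{2} + 1 = {p}^{3} \)):

```python
# Fixed-point iteration p_n = g(p_{n-1}) with g(x) = (2x^2 + 1)^(1/3),
# started in [1, 3]; by Theorem 40 it converges for any start in [1, 3].
def g(x):
    return (2 * x**2 + 1) ** (1 / 3)

p = 1.0
for _ in range(100):
    p = g(p)

print(p)  # approximately 2.2056, the root of x^3 - 2x^2 - 1 on [1, 3]
```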
Yes
Theorem 44. Assume \( p \) is a solution of \( g\left( x\right) = x \), and suppose \( g\left( x\right) \) is continuously differentiable in some interval about \( p \) with \( \left| {{g}^{\prime }\left( p\right) }\right| < 1 \) . Then the fixed-point iteration converges to \( p \) , provided \( {p}_{0} \) is chosen sufficiently close to \( p \) . Moreover, the convergence is linear if \( {g}^{\prime }\left( p\right) \neq 0 \) .
Proof. Since \( {g}^{\prime } \) is continuous and \( \left| {{g}^{\prime }\left( p\right) }\right| < 1 \), there exists an interval \( I = \left\lbrack {p - \epsilon, p + \epsilon }\right\rbrack \) such that \( \left| {{g}^{\prime }\left( x\right) }\right| \leq k \) for all \( x \in I \), for some \( k < 1 \) . Then, from Remark 39, we know \( \left| {g\left( x\right) - g\left( y\right) }\right| \leq k\left| {x - y}\right| \) for all \( x, y \in I \) . Next, we argue that \( g\left( x\right) \in I \) if \( x \in I \) . Indeed, if \( \left| {x - p}\right| < \epsilon \), then\n\n\[ \left| {g\left( x\right) - p}\right| = \left| {g\left( x\right) - g\left( p\right) }\right| \leq \left| {{g}^{\prime }\left( \xi \right) }\right| \left| {x - p}\right| < {k\epsilon } < \epsilon \]\n\nhence \( g\left( x\right) \in I \) . Now use Theorem 40, setting \( \left\lbrack {a, b}\right\rbrack \) to \( \left\lbrack {p - \epsilon, p + \epsilon }\right\rbrack \), to conclude the fixed-point iteration converges.\n\nTo prove convergence is linear, we note\n\n\[ \left| {{p}_{n + 1} - p}\right| = \left| {g\left( {p}_{n}\right) - g\left( p\right) }\right| \leq \left| {{g}^{\prime }\left( {\xi }_{n}\right) }\right| \left| {{p}_{n} - p}\right| \leq k\left| {{p}_{n} - p}\right| \]\n\nwhich is the definition of linear convergence (with \( k \) being a positive constant less than 1).
Yes
Let \( g\left( x\right) = x + c\left( {{x}^{2} - 2}\right) \), which has the fixed-point \( p = \sqrt{2} \approx {1.4142} \). Pick a value for \( c \) to ensure the convergence of fixed-point iteration. For the picked value \( c \), determine the interval of convergence \( I = \left\lbrack {a, b}\right\rbrack \), that is, the interval for which any \( {p}_{0} \) from the interval gives rise to a converging fixed-point iteration.
Solution. Theorem 44 requires \( \left| {{g}^{\prime }\left( p\right) }\right| < 1 \). We have \( {g}^{\prime }\left( x\right) = 1 + {2xc} \), and thus \( {g}^{\prime }\left( \sqrt{2}\right) = 1 + 2\sqrt{2}c \). Therefore\n\n\[ \left| {{g}^{\prime }\left( \sqrt{2}\right) }\right| < 1 \Rightarrow - 1 < 1 + 2\sqrt{2}c < 1 \]\n\n\[ \Rightarrow - 2 < 2\sqrt{2}c < 0 \]\n\n\[ \Rightarrow \frac{-1}{\sqrt{2}} < c < 0 \]\n\nAny \( c \) from this interval works: let’s pick \( c = - 1/4 \).\n\nNow we need to find an interval \( I = \left\lbrack {\sqrt{2} - \epsilon ,\sqrt{2} + \epsilon }\right\rbrack \) such that\n\n\[ \left| {{g}^{\prime }\left( x\right) }\right| = \left| {1 + {2xc}}\right| = \left| {1 - \frac{x}{2}}\right| \leq k \]\n\nfor some \( k < 1 \), for all \( x \in I \). Plot \( {g}^{\prime }\left( x\right) \) and observe that one choice is \( \epsilon = {0.1} \), so that \( I = \left\lbrack {\sqrt{2} - {0.1},\sqrt{2} + {0.1}}\right\rbrack = \left\lbrack {{1.3142},{1.5142}}\right\rbrack \). Since \( {g}^{\prime }\left( x\right) \) is positive and decreasing on \( I = \left\lbrack {{1.3142},{1.5142}}\right\rbrack ,\left| {{g}^{\prime }\left( x\right) }\right| \leq 1 - \frac{1.3142}{2} = {0.3429} < 1 \), for any \( x \in I \). Then any starting value \( {x}_{0} \) from \( I \) gives convergence.
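Iterating \( g\left( x\right) = x - \left( {{x}^{2} - 2}\right) /4 \) from a starting point in \( I \) confirms the analysis (a Python sketch):

```python
import math

# Fixed-point iteration with g(x) = x + c(x^2 - 2), c = -1/4,
# started inside the interval of convergence I = [1.3142, 1.5142].
def g(x):
    return x - (x**2 - 2) / 4

p = 1.3142
for _ in range(60):
    p = g(p)

print(p)  # converges to sqrt(2) = 1.41421356...
```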
Yes
Theorem 46. Assume \( p \) is a solution of \( g\left( x\right) = x \) where \( g \in {C}^{\alpha }\left( I\right) \) for some interval \( I \) that contains \( p \), and for some \( \alpha \geq 2 \) . Furthermore assume\n\n\[ \n{g}^{\prime }\left( p\right) = {g}^{\prime \prime }\left( p\right) = \ldots = {g}^{\left( \alpha - 1\right) }\left( p\right) = 0,\text{ and }{g}^{\left( \alpha \right) }\left( p\right) \neq 0.\n\]\n\nThen if the initial guess \( {p}_{0} \) is sufficiently close to \( p \), the fixed-point iteration \( {p}_{n} = g\left( {p}_{n - 1}\right), n \geq \) 1, will have order of convergence of \( \alpha \), and\n\n\[ \n\mathop{\lim }\limits_{{n \rightarrow \infty }}\frac{{p}_{n + 1} - p}{{\left( {p}_{n} - p\right) }^{\alpha }} = \frac{{g}^{\left( \alpha \right) }\left( p\right) }{\alpha !}.\n\]
Proof. From Taylor's theorem,\n\n\[ \n{p}_{n + 1} = g\left( {p}_{n}\right) = g\left( p\right) + \left( {{p}_{n} - p}\right) {g}^{\prime }\left( p\right) + \ldots + \frac{{\left( {p}_{n} - p\right) }^{\alpha - 1}}{\left( {\alpha - 1}\right) !}{g}^{\left( \alpha - 1\right) }\left( p\right) + \frac{{\left( {p}_{n} - p\right) }^{\alpha }}{\alpha !}{g}^{\left( \alpha \right) }\left( {\xi }_{n}\right)\n\]\n\nwhere \( {\xi }_{n} \) is a number between \( {p}_{n} \) and \( p \), and all numbers are in \( I \) . From the hypothesis, this simplifies as\n\n\[ \n{p}_{n + 1} = p + \frac{{\left( {p}_{n} - p\right) }^{\alpha }}{\alpha !}{g}^{\left( \alpha \right) }\left( {\xi }_{n}\right) \Rightarrow \frac{{p}_{n + 1} - p}{{\left( {p}_{n} - p\right) }^{\alpha }} = \frac{{g}^{\left( \alpha \right) }\left( {\xi }_{n}\right) }{\alpha !}.\n\]\n\nFrom Theorem 44, if \( {p}_{0} \) is chosen sufficiently close to \( p \), then \( \mathop{\lim }\limits_{{n \rightarrow \infty }}{p}_{n} = p \) . The order of convergence is \( \alpha \) with\n\n\[ \n\mathop{\lim }\limits_{{n \rightarrow \infty }}\frac{\left| {p}_{n + 1} - p\right| }{{\left| {p}_{n} - p\right| }^{\alpha }} = \mathop{\lim }\limits_{{n \rightarrow \infty }}\frac{\left| {g}^{\left( \alpha \right) }\left( {\xi }_{n}\right) \right| }{\alpha !} = \frac{\left| {g}^{\left( \alpha \right) }\left( p\right) \right| }{\alpha !} \neq 0.\n\]
Find the interpolating polynomial using the monomial basis and Lagrange basis functions for the data: \( \left( {-1, - 6}\right) ,\left( {1,0}\right) ,\left( {2,6}\right) \) .
- Monomial basis: \( {p}_{2}\left( x\right) = {a}_{0} + {a}_{1}x + {a}_{2}{x}^{2} \). Requiring \( {p}_{2}\left( {x}_{i}\right) = {y}_{i} \) at the three data points gives the matrix equation

\[ \left\lbrack \begin{array}{rrr} 1 & - 1 & 1 \\ 1 & 1 & 1 \\ 1 & 2 & 4 \end{array}\right\rbrack \left\lbrack \begin{array}{l} {a}_{0} \\ {a}_{1} \\ {a}_{2} \end{array}\right\rbrack = \left\lbrack \begin{matrix} - 6 \\ 0 \\ 6 \end{matrix}\right\rbrack . \]

We can use Gaussian elimination to solve this matrix equation, or get help from Julia:

In [1]: A = [1 -1 1; 1 1 1; 1 2 4]

Out [1]: 3×3 Array{Int64,2}:
 1  -1  1
 1   1  1
 1   2  4

In [2]: y = [-6 0 6]'

Out [2]: 3×1 Array{Int64,2}:
 -6
  0
  6

In [3]: A\y

Out [3]: 3×1 Array{Float64,2}:
 -4.0
  3.0
  1.0

Since the solution is \( a = {\left\lbrack -4,3,1\right\rbrack }^{T} \), we obtain

\[ {p}_{2}\left( x\right) = - 4 + {3x} + {x}^{2}. \]

- Lagrange basis: \( {p}_{2}\left( x\right) = {y}_{0}{l}_{0}\left( x\right) + {y}_{1}{l}_{1}\left( x\right) + {y}_{2}{l}_{2}\left( x\right) = - 6{l}_{0}\left( x\right) + 0{l}_{1}\left( x\right) + 6{l}_{2}\left( x\right) \), so \( {l}_{1} \) is not needed, and

\[ {l}_{0}\left( x\right) = \frac{\left( {x - {x}_{1}}\right) \left( {x - {x}_{2}}\right) }{\left( {{x}_{0} - {x}_{1}}\right) \left( {{x}_{0} - {x}_{2}}\right) } = \frac{\left( {x - 1}\right) \left( {x - 2}\right) }{\left( {-1 - 1}\right) \left( {-1 - 2}\right) } = \frac{\left( {x - 1}\right) \left( {x - 2}\right) }{6} \]

\[ {l}_{2}\left( x\right) = \frac{\left( {x - {x}_{0}}\right) \left( {x - {x}_{1}}\right) }{\left( {{x}_{2} - {x}_{0}}\right) \left( {{x}_{2} - {x}_{1}}\right) } = \frac{\left( {x + 1}\right) \left( {x - 1}\right) }{\left( {2 + 1}\right) \left( {2 - 1}\right) } = \frac{\left( {x + 1}\right) \left( {x - 1}\right) }{3} \]

therefore

\[ {p}_{2}\left( x\right) = - 6\frac{\left( {x - 1}\right) \left( {x - 2}\right) }{6} + 6\frac{\left( {x + 1}\right) \left( {x - 1}\right) }{3} = - \left( {x - 1}\right) \left( {x - 2}\right) + 2\left( {x + 1}\right) \left( {x - 1}\right) . \]

If we multiply out and collect the like terms, we obtain \( {p}_{2}\left( x\right) = - 4 + {3x} + {x}^{2} \), which is the polynomial we obtained from the monomial basis earlier.
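The Lagrange form can also be evaluated programmatically. The following sketch (Python here, rather than the Julia used above) implements the general Lagrange formula and confirms that it reproduces the data and agrees with \( {p}_{2}\left( x\right) = - 4 + {3x} + {x}^{2} \):

```python
def lagrange(xs, ys, x):
    """Evaluate the interpolating polynomial in Lagrange form at x."""
    total = 0.0
    for i in range(len(xs)):
        li = 1.0  # Lagrange basis function l_i evaluated at x
        for j in range(len(xs)):
            if j != i:
                li *= (x - xs[j]) / (xs[i] - xs[j])
        total += ys[i] * li
    return total

xs, ys = [-1.0, 1.0, 2.0], [-6.0, 0.0, 6.0]
```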
Find the interpolating polynomial using Newton's basis for the data: \( \left( {-1, - 6}\right) ,\left( {1,0}\right) ,\left( {2,6}\right) \) .
Solution. We have \( {p}_{2}\left( x\right) = {a}_{0} + {a}_{1}{\pi }_{1}\left( x\right) + {a}_{2}{\pi }_{2}\left( x\right) = {a}_{0} + {a}_{1}\left( {x + 1}\right) + {a}_{2}\left( {x + 1}\right) \left( {x - 1}\right) \) . Find \( {a}_{0},{a}_{1},{a}_{2} \) from

\[ {p}_{2}\left( {-1}\right) = - 6 \Rightarrow {a}_{0} + {a}_{1}\left( {-1 + 1}\right) + {a}_{2}\left( {-1 + 1}\right) \left( {-1 - 1}\right) = {a}_{0} = - 6 \]

\[ {p}_{2}\left( 1\right) = 0 \Rightarrow {a}_{0} + {a}_{1}\left( {1 + 1}\right) + {a}_{2}\left( {1 + 1}\right) \left( {1 - 1}\right) = {a}_{0} + 2{a}_{1} = 0 \]

\[ {p}_{2}\left( 2\right) = 6 \Rightarrow {a}_{0} + {a}_{1}\left( {2 + 1}\right) + {a}_{2}\left( {2 + 1}\right) \left( {2 - 1}\right) = {a}_{0} + 3{a}_{1} + 3{a}_{2} = 6 \]

or, in matrix form

\[ \left\lbrack \begin{array}{lll} 1 & 0 & 0 \\ 1 & 2 & 0 \\ 1 & 3 & 3 \end{array}\right\rbrack \left\lbrack \begin{array}{l} {a}_{0} \\ {a}_{1} \\ {a}_{2} \end{array}\right\rbrack = \left\lbrack \begin{matrix} - 6 \\ 0 \\ 6 \end{matrix}\right\rbrack . \]

Forward substitution gives:

\[ {a}_{0} = - 6 \]

\[ {a}_{0} + 2{a}_{1} = 0 \Rightarrow - 6 + 2{a}_{1} = 0 \Rightarrow {a}_{1} = 3 \]

\[ {a}_{0} + 3{a}_{1} + 3{a}_{2} = 6 \Rightarrow - 6 + 9 + 3{a}_{2} = 6 \Rightarrow {a}_{2} = 1. \]

Therefore \( a = {\left\lbrack -6,3,1\right\rbrack }^{T} \) and

\[ {p}_{2}\left( x\right) = - 6 + 3\left( {x + 1}\right) + \left( {x + 1}\right) \left( {x - 1}\right) . \]

Factoring out and simplifying gives \( {p}_{2}\left( x\right) = - 4 + {3x} + {x}^{2} \), which is the polynomial discussed in Example 48.
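The lower-triangular system above is what makes Newton's basis computationally attractive. A minimal forward-substitution routine (a Python sketch, not from the text) recovers \( a = {\left\lbrack -6,3,1\right\rbrack }^{T} \):

```python
def forward_substitution(L, b):
    """Solve L a = b for a lower-triangular matrix L."""
    n = len(b)
    a = [0.0] * n
    for i in range(n):
        s = sum(L[i][j] * a[j] for j in range(i))
        a[i] = (b[i] - s) / L[i][i]
    return a

# The system from the Newton-basis interpolation conditions:
L = [[1, 0, 0], [1, 2, 0], [1, 3, 3]]
b = [-6, 0, 6]
```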
Example 50. Write \( {p}_{2}\left( x\right) = - 6 + 3\left( {x + 1}\right) + \left( {x + 1}\right) \left( {x - 1}\right) \) using the nested form.
Solution. \( - 6 + 3\left( {x + 1}\right) + \left( {x + 1}\right) \left( {x - 1}\right) = - 6 + \left( {x + 1}\right) \left( {x + 2}\right) \) ; note that the left-hand side requires two multiplications while the right-hand side requires only one.
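Nested evaluation of a general Newton-form polynomial works like Horner's method: one multiplication per term. A small sketch (Python; `newton_nested` is a name chosen here, not from the text):

```python
def newton_nested(centers, coeffs, x):
    """Evaluate a0 + a1(x-c0) + a2(x-c0)(x-c1) + ... using nested
    multiplication: one multiplication per coefficient after the first."""
    result = coeffs[-1]
    for a, c in zip(reversed(coeffs[:-1]), reversed(centers)):
        result = result * (x - c) + a
    return result

# p2(x) = -6 + 3(x+1) + (x+1)(x-1): centers -1, 1 and coefficients -6, 3, 1
```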
Lemma 52. Consider the partition of \( \left\lbrack {a, b}\right\rbrack \) as \( {x}_{0} = a,{x}_{1} = a + h,\ldots ,{x}_{n} = a + {nh} = b \) . More succinctly, \( {x}_{i} = a + {ih} \) for \( i = 0,1,\ldots, n \) and \( h = \frac{b - a}{n} \) . Then for any \( x \in \left\lbrack {a, b}\right\rbrack \)\n\n\[ \mathop{\prod }\limits_{{i = 0}}^{n}\left| {x - {x}_{i}}\right| \leq \frac{1}{4}{h}^{n + 1}n! \]
Proof. Since \( x \in \left\lbrack {a, b}\right\rbrack \), it falls into one of the subintervals: let \( x \in \left\lbrack {{x}_{j},{x}_{j + 1}}\right\rbrack \) . Consider the product \( \left| {x - {x}_{j}}\right| \left| {x - {x}_{j + 1}}\right| \) . Put \( s = \left| {x - {x}_{j}}\right| \) and \( t = \left| {x - {x}_{j + 1}}\right| \) . The maximum of \( {st} \) given \( s + t = h \), using Calculus, can be found to be \( {h}^{2}/4 \), which is attained when \( x \) is the midpoint, and thus \( s = t = h/2 \) . Then\n\n\[ \mathop{\prod }\limits_{{i = 0}}^{n}\left| {x - {x}_{i}}\right| = \left| {x - {x}_{0}}\right| \cdots \left| {x - {x}_{j - 1}}\right| \left| {x - {x}_{j}}\right| \left| {x - {x}_{j + 1}}\right| \left| {x - {x}_{j + 2}}\right| \cdots \left| {x - {x}_{n}}\right| \]\n\n\[ \leq \left| {x - {x}_{0}}\right| \cdots \left| {x - {x}_{j - 1}}\right| \frac{{h}^{2}}{4}\left| {x - {x}_{j + 2}}\right| \cdots \left| {x - {x}_{n}}\right| \]\n\n\[ \leq \left| {{x}_{j + 1} - {x}_{0}}\right| \cdots \left| {{x}_{j + 1} - {x}_{j - 1}}\right| \frac{{h}^{2}}{4}\left| {{x}_{j} - {x}_{j + 2}}\right| \cdots \left| {{x}_{j} - {x}_{n}}\right| \]\n\n\[ \leq \left( {j + 1}\right) h\cdots {2h}\left( \frac{{h}^{2}}{4}\right) \left( {2h}\right) \cdots \left( {n - j}\right) h \]\n\n\[ = {h}^{j}\left( {j + 1}\right) !\frac{{h}^{2}}{4}\left( {n - j}\right) !{h}^{n - j - 1} \]\n\n\[ \leq {h}^{n + 1}\frac{n!}{4}. \]
Find an upper bound for the absolute error when \( f\left( x\right) = \cos x \) is approximated by its interpolating polynomial \( {p}_{n}\left( x\right) \) on \( \left\lbrack {0,\pi /2}\right\rbrack \) . For the interpolating polynomial, use 5 equally spaced nodes \( \left( {n = 4}\right) \) in \( \left\lbrack {0,\pi /2}\right\rbrack \), including the endpoints.
Solution. From Theorem 51,\n\n\[ \left| {f\left( x\right) - {p}_{4}\left( x\right) }\right| = \frac{\left| {f}^{\left( 5\right) }\left( \xi \right) \right| }{5!}\left| {\left( {x - {x}_{0}}\right) \cdots \left( {x - {x}_{4}}\right) }\right| .\n\]\n\nWe have \( \left| {{f}^{\left( 5\right) }\left( \xi \right) }\right| \leq 1 \) . The nodes are equally spaced with \( h = \left( {\pi /2 - 0}\right) /4 = \pi /8 \) . Then from the previous lemma,\n\n\[ \left| {\left( {x - {x}_{0}}\right) \cdots \left( {x - {x}_{4}}\right) }\right| \leq \frac{1}{4}{\left( \frac{\pi }{8}\right) }^{5}4!\n\]\n\nand therefore\n\n\[ \left| {f\left( x\right) - {p}_{4}\left( x\right) }\right| \leq \frac{1}{5!}\frac{1}{4}{\left( \frac{\pi }{8}\right) }^{5}4! = {4.7} \times {10}^{-4}.\n\]
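The bound can be verified numerically: interpolate \( \cos x \) at the five nodes and sample the error over \( \left\lbrack {0,\pi /2}\right\rbrack \). The sketch below (Python; the 1000-point sampling grid is an arbitrary choice) confirms the maximum error stays below \( {4.7} \times {10}^{-4} \).

```python
import math

def interp(xs, ys, x):
    # Lagrange-form evaluation of the interpolating polynomial
    total = 0.0
    for i in range(len(xs)):
        li = 1.0
        for j in range(len(xs)):
            if j != i:
                li *= (x - xs[j]) / (xs[i] - xs[j])
        total += ys[i] * li
    return total

h = (math.pi / 2) / 4
xs = [i * h for i in range(5)]          # 5 equally spaced nodes on [0, pi/2]
ys = [math.cos(x) for x in xs]
max_err = max(abs(math.cos(x) - interp(xs, ys, x))
              for x in [k * (math.pi / 2) / 1000 for k in range(1001)])
# max_err should be positive but below the theoretical bound 4.7e-4
```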
Theorem 55. The ordering of the data in constructing divided differences is not important, that is, the divided difference \( f\left\lbrack {{x}_{0},\ldots ,{x}_{k}}\right\rbrack \) is invariant under all permutations of the arguments \( {x}_{0},\ldots ,{x}_{k} \) .
Proof. Consider the data \( \left( {{x}_{0},{y}_{0}}\right) ,\left( {{x}_{1},{y}_{1}}\right) ,\ldots ,\left( {{x}_{k},{y}_{k}}\right) \) and let \( {p}_{k}\left( x\right) \) be its interpolating polynomial:\n\n\[ \n{p}_{k}\left( x\right) = f\left\lbrack {x}_{0}\right\rbrack + f\left\lbrack {{x}_{0},{x}_{1}}\right\rbrack \left( {x - {x}_{0}}\right) + f\left\lbrack {{x}_{0},{x}_{1},{x}_{2}}\right\rbrack \left( {x - {x}_{0}}\right) \left( {x - {x}_{1}}\right) + \ldots \n\] \n\n\[ \n+ f\left\lbrack {{x}_{0},\ldots ,{x}_{k}}\right\rbrack \left( {x - {x}_{0}}\right) \cdots \left( {x - {x}_{k - 1}}\right) . \n\] \n\nNow let’s consider a permutation of the \( {x}_{i} \) ; let’s label them as \( {\widetilde{x}}_{0},{\widetilde{x}}_{1},\ldots ,{\widetilde{x}}_{k} \) . The interpolating polynomial for the permuted data does not change, since the data \( {x}_{0},{x}_{1},\ldots ,{x}_{k} \) (omitting the \( y \) -coordinates) is the same as \( {\widetilde{x}}_{0},{\widetilde{x}}_{1},\ldots ,{\widetilde{x}}_{k} \), just in different order. Therefore \n\n\[ \n{p}_{k}\left( x\right) = f\left\lbrack {\widetilde{x}}_{0}\right\rbrack + f\left\lbrack {{\widetilde{x}}_{0},{\widetilde{x}}_{1}}\right\rbrack \left( {x - {\widetilde{x}}_{0}}\right) + f\left\lbrack {{\widetilde{x}}_{0},{\widetilde{x}}_{1},{\widetilde{x}}_{2}}\right\rbrack \left( {x - {\widetilde{x}}_{0}}\right) \left( {x - {\widetilde{x}}_{1}}\right) + \ldots \n\] \n\n\[ \n+ f\left\lbrack {{\widetilde{x}}_{0},\ldots ,{\widetilde{x}}_{k}}\right\rbrack \left( {x - {\widetilde{x}}_{0}}\right) \cdots \left( {x - {\widetilde{x}}_{k - 1}}\right) . \n\] \n\nThe coefficient of the polynomial \( {p}_{k}\left( x\right) \) for the highest degree \( {x}^{k} \) is \( f\left\lbrack {{x}_{0},\ldots ,{x}_{k}}\right\rbrack \) in the first equation, and \( f\left\lbrack {{\widetilde{x}}_{0},\ldots ,{\widetilde{x}}_{k}}\right\rbrack \) in the second. Therefore they must be equal to each other.
Find the interpolating polynomial for the data \( \left( {-1, - 6}\right) ,\left( {1,0}\right) ,\left( {2,6}\right) \) using Newton's form and divided differences.
Solution. We want to compute

\[ {p}_{2}\left( x\right) = f\left\lbrack {x}_{0}\right\rbrack + f\left\lbrack {{x}_{0},{x}_{1}}\right\rbrack \left( {x - {x}_{0}}\right) + f\left\lbrack {{x}_{0},{x}_{1},{x}_{2}}\right\rbrack \left( {x - {x}_{0}}\right) \left( {x - {x}_{1}}\right) . \]

Here is the divided difference table:

\[ \begin{array}{lll} {x}_{0} = - 1\;f\left\lbrack {x}_{0}\right\rbrack = - 6 & & \\ & f\left\lbrack {{x}_{0},{x}_{1}}\right\rbrack = \frac{f\left\lbrack {x}_{1}\right\rbrack - f\left\lbrack {x}_{0}\right\rbrack }{{x}_{1} - {x}_{0}} = 3 & \\ {x}_{1} = 1\;f\left\lbrack {x}_{1}\right\rbrack = 0 & & f\left\lbrack {{x}_{0},{x}_{1},{x}_{2}}\right\rbrack = \frac{f\left\lbrack {{x}_{1},{x}_{2}}\right\rbrack - f\left\lbrack {{x}_{0},{x}_{1}}\right\rbrack }{{x}_{2} - {x}_{0}} = 1 \\ & f\left\lbrack {{x}_{1},{x}_{2}}\right\rbrack = \frac{f\left\lbrack {x}_{2}\right\rbrack - f\left\lbrack {x}_{1}\right\rbrack }{{x}_{2} - {x}_{1}} = 6 & \\ {x}_{2} = 2\;f\left\lbrack {x}_{2}\right\rbrack = 6 & & \end{array} \]

Therefore

\[ {p}_{2}\left( x\right) = - 6 + 3\left( {x + 1}\right) + 1\left( {x + 1}\right) \left( {x - 1}\right) , \]

which is the same polynomial we had in Example 49.
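The divided-difference table generalizes to any number of points; the following sketch (Python, not from the text) computes the Newton coefficients, i.e., the top edge of the table, in place:

```python
def divided_differences(xs, ys):
    """Return [f[x0], f[x0,x1], ..., f[x0,...,xn]], the Newton coefficients."""
    coeffs = list(ys)
    n = len(xs)
    for j in range(1, n):
        # overwrite from the bottom up so lower-order entries stay usable
        for i in range(n - 1, j - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - j])
    return coeffs

# data (-1,-6), (1,0), (2,6) -> Newton coefficients -6, 3, 1
```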
Use polynomial interpolation to estimate \( \Gamma \left( {1.761}\right) \) .
Solution. The divided differences, with five-digit rounding, are:

<table><thead><tr><th>\( i \)</th><th>\( {x}_{i} \)</th><th>\( f\left\lbrack {x}_{i}\right\rbrack \)</th><th>\( f\left\lbrack {{x}_{i},{x}_{i + 1}}\right\rbrack \)</th><th>\( f\left\lbrack {{x}_{i - 1},{x}_{i},{x}_{i + 1}}\right\rbrack \)</th><th>\( f\left\lbrack {{x}_{0},{x}_{1},{x}_{2},{x}_{3}}\right\rbrack \)</th></tr></thead><tr><td>0</td><td>1.750</td><td>0.91906</td><td></td><td></td><td></td></tr></table>
Theorem 58. Suppose \( f \in {C}^{n}\left\lbrack {a, b}\right\rbrack \) and \( {x}_{0},{x}_{1},\ldots ,{x}_{n} \) are distinct numbers in \( \left\lbrack {a, b}\right\rbrack \) . Then there exists \( \xi \in \left( {a, b}\right) \) such that\n\n\[ f\left\lbrack {{x}_{0},\ldots ,{x}_{n}}\right\rbrack = \frac{{f}^{\left( n\right) }\left( \xi \right) }{n!}. \]\n
To prove this theorem, we need the generalized Rolle's theorem.
Theorem 61. If \( f \in {C}^{1}\left\lbrack {a, b}\right\rbrack \) and \( {x}_{0},\ldots ,{x}_{n} \in \left\lbrack {a, b}\right\rbrack \) are distinct, then there is a unique polynomial \( {H}_{{2n} + 1}\left( x\right) \), of degree at most \( {2n} + 1 \), agreeing with \( f \) and \( {f}^{\prime } \) at \( {x}_{0},\ldots ,{x}_{n} \) .
The polynomial can be written as:\n\n\[ \n{H}_{{2n} + 1}\left( x\right) = \mathop{\sum }\limits_{{i = 0}}^{n}{y}_{i}{h}_{i}\left( x\right) + \mathop{\sum }\limits_{{i = 0}}^{n}{y}_{i}^{\prime }{\widetilde{h}}_{i}\left( x\right) \]\n\nwhere\n\n\[ \n{h}_{i}\left( x\right) = \left( {1 - 2\left( {x - {x}_{i}}\right) {l}_{i}^{\prime }\left( {x}_{i}\right) }\right) {\left( {l}_{i}\left( x\right) \right) }^{2} \]\n\n\[ \n{\widetilde{h}}_{i}\left( x\right) = \left( {x - {x}_{i}}\right) {\left( {l}_{i}\left( x\right) \right) }^{2}. \]\n\nHere \( {l}_{i}\left( x\right) \) is the ith Lagrange basis function for the nodes \( {x}_{0},\ldots ,{x}_{n} \), and \( {l}_{i}^{\prime }\left( x\right) \) is its derivative. \( {H}_{{2n} + 1}\left( x\right) \) is called the Hermite interpolating polynomial.
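A direct implementation of this basis is a useful check (a Python sketch written from the formulas above; it uses the identity \( {l}_{i}^{\prime }\left( {x}_{i}\right) = \mathop{\sum }\limits_{{j \neq i}}1/\left( {{x}_{i} - {x}_{j}}\right) \), and the data below is arbitrary test data, not from the text):

```python
def hermite(xs, ys, dys, x):
    """Evaluate the Hermite interpolant using the basis h_i, h~_i above."""
    n = len(xs)
    total = 0.0
    for i in range(n):
        li = 1.0           # l_i(x)
        dli_at_xi = 0.0    # l_i'(x_i) = sum over j != i of 1/(x_i - x_j)
        for j in range(n):
            if j != i:
                li *= (x - xs[j]) / (xs[i] - xs[j])
                dli_at_xi += 1.0 / (xs[i] - xs[j])
        hi = (1 - 2 * (x - xs[i]) * dli_at_xi) * li**2
        hti = (x - xs[i]) * li**2
        total += ys[i] * hi + dys[i] * hti
    return total

xs, ys, dys = [0.0, 1.0, 2.0], [1.0, 2.0, 0.0], [0.0, 1.0, -1.0]
```

At each node the interpolant matches both the prescribed value and the prescribed derivative.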
We want to interpolate the following data:

\[ x\text{-coordinates : } - {1.5},{1.6},{4.7} \]

\[ y\text{-coordinates : }{0.071}, - {0.029}, - {0.012}\text{.} \]

The underlying function the data comes from is \( \cos x \), but we pretend we do not know this. Figure (3.2) plots the underlying function, the data, and the polynomial interpolant for the data. Clearly, the polynomial interpolant does not come close to giving a good approximation to the underlying function \( \cos x \) .
Now let's assume we know the derivative of the underlying function at these nodes:

\[ x\text{-coordinates : } - {1.5},{1.6},{4.7} \]

\[ y\text{-coordinates : }{0.071}, - {0.029}, - {0.012} \]

\[ {y}^{\prime }\text{-values : }1, - 1,1\text{. } \]

We then construct the Hermite interpolating polynomial, incorporating the derivative information. Figure (3.3) plots the Hermite interpolating polynomial, together with the polynomial interpolant, and the underlying function.

It is visually difficult to separate the Hermite interpolating polynomial from the underlying function \( \cos x \) in Figure (3.3). Going from polynomial interpolation to Hermite interpolation results in a rather dramatic improvement in approximating the underlying function.
Let's compute the Hermite polynomial of Example 62. The data is:

\[ \begin{array}{llll} i & {x}_{i} & {y}_{i} & {y}_{i}^{\prime } \\ 0 & - {1.5} & {0.071} & 1 \\ 1 & {1.6} & - {0.029} & - 1 \\ 2 & {4.7} & - {0.012} & 1 \end{array} \]

Here \( n = 2 \), and \( {2n} + 1 = 5 \), so the Hermite polynomial is

\[ {H}_{5}\left( x\right) = f\left\lbrack {z}_{0}\right\rbrack + \mathop{\sum }\limits_{{i = 1}}^{5}f\left\lbrack {{z}_{0},\ldots ,{z}_{i}}\right\rbrack \left( {x - {z}_{0}}\right) \cdots \left( {x - {z}_{i - 1}}\right) . \]
The divided differences are:

<table><thead><tr><th>\( z \)</th><th>\( f\left( z\right) \)</th><th>1st diff</th><th>2nd diff</th><th>3rd diff</th><th>4th diff</th><th>5th diff</th></tr></thead><tr><td>\( {z}_{0} = - {1.5} \)</td><td>0.071</td><td>\( {f}^{\prime }\left( {z}_{0}\right) = 1 \)</td><td></td><td></td><td></td><td></td></tr></table>
Estimate \( {\int }_{0.5}^{1}{x}^{x}{dx} \) using the midpoint, trapezoidal, and Simpson’s rules.
Solution. Let \( f\left( x\right) = {x}^{x} \) . The midpoint estimate for the integral is \( {2hf}\left( {x}_{0}\right) \) where \( h = \) \( \left( {b - a}\right) /2 = 1/4 \) and \( {x}_{0} = {0.75} \) . Then the midpoint estimate, using 6-digits, is \( f\left( {0.75}\right) /2 = \) \( {0.805927}/2 = {0.402964} \) . The trapezoidal estimate is \( \frac{h}{2}\left\lbrack {f\left( {0.5}\right) + f\left( 1\right) }\right\rbrack \) where \( h = 1/2 \), which results in \( {1.707107}/4 = {0.426777} \) . Finally, for Simpson’s rule, \( h = \left( {b - a}\right) /2 = 1/4 \), and thus the estimate is\n\n\[ \frac{h}{3}\left\lbrack {f\left( {0.5}\right) + {4f}\left( {0.75}\right) + f\left( 1\right) }\right\rbrack = \frac{1}{12}\left\lbrack {{0.707107} + 4\left( {0.805927}\right) + 1}\right\rbrack = {0.410901}. \]
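These three estimates are easy to reproduce in code (a Python sketch, following the formulas above):

```python
def f(x):
    return x**x

a, b = 0.5, 1.0
h = (b - a) / 2
midpoint = (b - a) * f((a + b) / 2)         # 2h f(x0) with h = (b-a)/2
trapezoid = ((b - a) / 2) * (f(a) + f(b))   # (h/2)[f(a)+f(b)] with h = b-a
simpson = (h / 3) * (f(a) + 4 * f((a + b) / 2) + f(b))
```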
Find the constants \( {c}_{0},{c}_{1},{x}_{1} \) so that the quadrature formula\n\n\[ \n{\int }_{0}^{1}f\left( x\right) {dx} = {c}_{0}f\left( 0\right) + {c}_{1}f\left( {x}_{1}\right) \n\]\n\nhas the highest possible degree of accuracy.
Solution. We will find how many of the polynomials \( 1, x,{x}^{2},\ldots \) the rule can integrate exactly.\n\nIf \( p\left( x\right) = 1 \), then\n\n\[ \n{\int }_{0}^{1}p\left( x\right) {dx} = {c}_{0}p\left( 0\right) + {c}_{1}p\left( {x}_{1}\right) \Rightarrow 1 = {c}_{0} + {c}_{1}.\n\]\n\nIf \( p\left( x\right) = x \), we get\n\n\[ \n{\int }_{0}^{1}p\left( x\right) {dx} = {c}_{0}p\left( 0\right) + {c}_{1}p\left( {x}_{1}\right) \Rightarrow \frac{1}{2} = {c}_{1}{x}_{1} \n\]\nand \( p\left( x\right) = {x}^{2} \) implies\n\n\[ \n{\int }_{0}^{1}p\left( x\right) {dx} = {c}_{0}p\left( 0\right) + {c}_{1}p\left( {x}_{1}\right) \Rightarrow \frac{1}{3} = {c}_{1}{x}_{1}^{2}.\n\]\n\nWe have three unknowns and three equations, so we have to stop here. Solving the three equations we get: \( {c}_{0} = 1/4,{c}_{1} = 3/4,{x}_{1} = 2/3 \) . So the quadrature rule is of precision two and it is:\n\n\[ \n{\int }_{0}^{1}f\left( x\right) {dx} = \frac{1}{4}f\left( 0\right) + \frac{3}{4}f\left( \frac{2}{3}\right) .\n\]
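The derived rule can be checked against the monomials directly (a Python sketch): it integrates \( 1, x,{x}^{2} \) exactly but not \( {x}^{3} \), so the degree of accuracy is two.

```python
def rule(f):
    """Quadrature rule on [0,1] with c0 = 1/4, c1 = 3/4, x1 = 2/3."""
    return 0.25 * f(0.0) + 0.75 * f(2.0 / 3.0)

# exact values of the integrals of x^k over [0,1]
exact = {0: 1.0, 1: 0.5, 2: 1.0 / 3.0, 3: 0.25}
```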
Let’s compute \( {\int }_{0}^{2}{e}^{x}\sin {xdx} \) . The antiderivative can be computed using integration by parts, and the true value of the integral to 6 digits is 5.39689 .
If we apply the Simpson's rule we get:\n\n\[ \n{\int }_{0}^{2}{e}^{x}\sin {xdx} \approx \frac{1}{3}\left( {{e}^{0}\sin 0 + {4e}\sin 1 + {e}^{2}\sin 2}\right) = {5.28942}. \n\]\n\nIf we partition the integration domain \( \left( {0,2}\right) \) into \( \left( {0,1}\right) \) and \( \left( {1,2}\right) \), and apply Simpson’s rule to each domain separately, we get\n\n\[ \n{\int }_{0}^{2}{e}^{x}\sin {xdx} = {\int }_{0}^{1}{e}^{x}\sin {xdx} + {\int }_{1}^{2}{e}^{x}\sin {xdx} \n\]\n\n\[ \n\approx \frac{1}{6}\left( {{e}^{0}\sin 0 + 4{e}^{0.5}\sin \left( {0.5}\right) + e\sin 1}\right) + \frac{1}{6}\left( {e\sin 1 + 4{e}^{1.5}\sin \left( {1.5}\right) + {e}^{2}\sin 2}\right) \n\]\n\n\[ \n= {5.38953} \n\]\n\nimproving the accuracy significantly. Note that we have used five nodes, \( 0,{0.5},1,{1.5},2 \), which splits the domain \( \left( {0,2}\right) \) into four subintervals.
Determine \( n \) that ensures the composite Simpson’s rule approximates \( {\int }_{1}^{2}x\log {xdx} \) with an absolute error of at most \( {10}^{-6} \) .
The error term for the composite Simpson's rule is \( \frac{b - a}{180}{h}^{4}{f}^{\left( 4\right) }\left( \xi \right) \) where \( \xi \) is some number between \( a = 1 \) and \( b = 2 \), and \( h = \left( {b - a}\right) /n \) . Differentiating \( f\left( x\right) = x\log x \) four times (here \( \log \) denotes the natural logarithm) gives \( {f}^{\left( 4\right) }\left( x\right) = \frac{2}{{x}^{3}} \) . Then

\[ \frac{b - a}{180}{h}^{4}{f}^{\left( 4\right) }\left( \xi \right) = \frac{1}{180}{h}^{4}\frac{2}{{\xi }^{3}} \leq \frac{{h}^{4}}{90} \]

where we used the fact that \( \frac{2}{{\xi }^{3}} \leq \frac{2}{1} = 2 \) when \( \xi \in \left( {1,2}\right) \) . Now make the upper bound less than \( {10}^{-6} \), that is,

\[ \frac{{h}^{4}}{90} \leq {10}^{-6} \Rightarrow \frac{1}{{n}^{4}\left( {90}\right) } \leq {10}^{-6} \Rightarrow {n}^{4} \geq \frac{{10}^{6}}{90} \approx {11111.11}, \]

which implies \( n \geq {10.27} \) . Since \( n \) must be even for Simpson's rule, the smallest value of \( n \) that guarantees an error of at most \( {10}^{-6} \) is 12 .
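The choice \( n = {12} \) can be confirmed with a composite Simpson implementation (a Python sketch; the exact value \( {\int }_{1}^{2}x\ln {xdx} = 2\ln 2 - \frac{3}{4} \) follows from the antiderivative \( \frac{{x}^{2}}{2}\ln x - \frac{{x}^{2}}{4} \)):

```python
import math

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 == 1 else 2) * f(a + i * h)
    return s * h / 3

f = lambda x: x * math.log(x)
exact = 2 * math.log(2) - 3 / 4
err = abs(composite_simpson(f, 1, 2, 12) - exact)
# err should indeed be below 1e-6, as the bound predicts
```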
Approximate \( {\int }_{-1}^{1}\cos {xdx} \) using Gauss-Legendre quadrature with \( n = 3 \) nodes.
Solution. From Table 4.1, and using two-digit rounding, we have\n\n\[ \n{\int }_{-1}^{1}\cos {xdx} \approx {0.56}\cos \left( {-{0.77}}\right) + {0.89}\cos 0 + {0.56}\cos \left( {0.77}\right) = {1.69} \n\] \n\nand the true solution is \( \sin \left( 1\right) - \sin \left( {-1}\right) = {1.68} \) .
Approximate \( {\int }_{0.5}^{1}{x}^{x}{dx} \) using Gauss-Legendre quadrature with \( n = 2 \) nodes.
Transform the integral using \( x = \frac{1}{2}\left( {{0.5t} + {1.5}}\right) = \frac{1}{2}\left( {\frac{t}{2} + \frac{3}{2}}\right) = \frac{t + 3}{4},{dx} = \frac{dt}{4} \) to get:\n\n\[ \n{\int }_{0.5}^{1}{x}^{x}{dx} = \frac{1}{4}{\int }_{-1}^{1}{\left( \frac{t + 3}{4}\right) }^{\frac{t + 3}{4}}{dt} \n\]\n\nFor \( n = 2 \)\n\n\[ \n\frac{1}{4}{\int }_{-1}^{1}{\left( \frac{t + 3}{4}\right) }^{\frac{t + 3}{4}}{dt} \approx \frac{1}{4}\left\lbrack {{\left( \frac{1}{4\sqrt{3}} + \frac{3}{4}\right) }^{\left( \frac{1}{4\sqrt{3}} + \frac{3}{4}\right) } + {\left( -\frac{1}{4\sqrt{3}} + \frac{3}{4}\right) }^{\left( -\frac{1}{4\sqrt{3}} + \frac{3}{4}\right) }}\right\rbrack = {0.410759} \n\]\n\nusing six digits.
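The same estimate in code (a Python sketch; the two-point Gauss-Legendre rule has nodes \( \pm 1/\sqrt{3} \) and weights 1):

```python
import math

def g(t):
    x = (t + 3) / 4    # change of variables mapping (-1,1) onto (0.5,1)
    return x**x / 4    # the extra 1/4 is dx/dt

# two-point Gauss-Legendre: nodes +-1/sqrt(3), both weights equal to 1
estimate = g(-1 / math.sqrt(3)) + g(1 / math.sqrt(3))
```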
Theorem 80. Let \( f \in {C}^{2n}\left\lbrack {-1,1}\right\rbrack \) . The error of Gauss-Legendre rule satisfies\n\n\[ \n{\int }_{a}^{b}f\left( x\right) {dx} - \mathop{\sum }\limits_{{i = 1}}^{n}{w}_{i}f\left( {x}_{i}\right) = \frac{{2}^{{2n} + 1}{\left( n!\right) }^{4}}{\left( {{2n} + 1}\right) {\left\lbrack \left( 2n\right) !\right\rbrack }^{2}}\frac{{f}^{\left( 2n\right) }\left( \xi \right) }{\left( {2n}\right) !}\n\]\n\nfor some \( \xi \in \left( {-1,1}\right) \) .
Using Stirling’s formula \( n! \sim {e}^{-n}{n}^{n}{\left( 2\pi n\right) }^{1/2} \), where the symbol \( \sim \) means the ratio of the two sides converges to 1 as \( n \rightarrow \infty \), it can be shown that\n\n\[ \n\frac{{2}^{{2n} + 1}{\left( n!\right) }^{4}}{\left( {{2n} + 1}\right) {\left\lbrack \left( 2n\right) !\right\rbrack }^{2}} \sim \frac{\pi }{{4}^{n}}\n\]\n\nThis means the error of Gauss-Legendre rule decays at an exponential rate of \( 1/{4}^{n} \) as opposed to, for example, the polynomial rate of \( 1/{n}^{4} \) for composite Simpson’s rule.
One of the integrals with a known solution is

\[ {\int }_{{\mathbb{R}}^{s}}\cos \left( {\parallel t\parallel }\right) {e}^{-\parallel t{\parallel }^{2}}d{t}_{1}d{t}_{2}\cdots d{t}_{s} \]

where \( \parallel t\parallel = {\left( {t}_{1}^{2} + \ldots + {t}_{s}^{2}\right) }^{1/2} \).
This integral can be transformed to an integral over the \( s \) -dimensional unit cube as\n\n\[ \n{\pi }^{s/2}{\int }_{{\left( 0,1\right) }^{s}}\cos \left\lbrack {\left( \frac{{\left( {F}^{-1}\left( {x}_{1}\right) \right) }^{2} + \ldots + {\left( {F}^{-1}\left( {x}_{s}\right) \right) }^{2}}{2}\right) }^{1/2}\right\rbrack d{x}_{1}d{x}_{2}\cdots d{x}_{s} \n\] \n\n(4.9) \n\nwhere \( {F}^{-1} \) is the inverse of the cumulative distribution function of the standard normal \n\ndistribution:\n\[ \nF\left( x\right) = \frac{1}{{\left( 2\pi \right) }^{1/2}}{\int }_{-\infty }^{x}{e}^{-{s}^{2}/2}{ds}. \n\] \n\nWe will estimate the integral (4.9) by Monte Carlo as\n\n\[ \n\frac{{\pi }^{s/2}}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\cos \left\lbrack {\left( \frac{{\left( {F}^{-1}\left( {x}_{1}^{\left( i\right) }\right) \right) }^{2} + \ldots + {\left( {F}^{-1}\left( {x}_{s}^{\left( i\right) }\right) \right) }^{2}}{2}\right) }^{1/2}\right\rbrack \n\] \n\nwhere \( {x}^{\left( i\right) } = \left( {{x}_{1}^{\left( i\right) },\ldots ,{x}_{s}^{\left( i\right) }}\right) \) is an \( s \) -dimensional vector of uniform random numbers between 0 and 1 .
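A sketch of this estimator in Python (not in the text; the standard library's `statistics.NormalDist().inv_cdf` plays the role of \( {F}^{-1} \)). For \( s = 1 \) the integral has the closed form \( \sqrt{\pi }{e}^{-1/4} \approx {1.3804} \), since \( {\int }_{-\infty }^{\infty }\cos \left( t\right) {e}^{-{t}^{2}}{dt} = \sqrt{\pi }{e}^{-1/4} \), which gives a sanity check:

```python
import math
import random
from statistics import NormalDist

def mc_estimate(s, n, seed=1234):
    """Monte Carlo estimate of the transformed integral in s dimensions."""
    rng = random.Random(seed)
    inv = NormalDist().inv_cdf  # inverse standard normal CDF, F^{-1}
    total = 0.0
    for _ in range(n):
        r2 = sum(inv(rng.random()) ** 2 for _ in range(s))
        total += math.cos(math.sqrt(r2 / 2))
    return math.pi ** (s / 2) / n * total
```

The fixed seed makes the sketch reproducible; the statistical error decays like \( 1/\sqrt{n} \) regardless of \( s \), which is the appeal of Monte Carlo in high dimensions.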
Consider the previous integral \( {\int }_{-1}^{1}\frac{f\left( x\right) }{\sqrt{1 - {x}^{2}}}{dx} \) . Try the transformation \( \theta = \) \( {\cos }^{-1}x \) .
Then \( {d\theta } = - {dx}/\sqrt{1 - {x}^{2}} \) and\n\n\[ \n{\int }_{-1}^{1}\frac{f\left( x\right) }{\sqrt{1 - {x}^{2}}}{dx} = - {\int }_{\pi }^{0}f\left( {\cos \theta }\right) {d\theta } = {\int }_{0}^{\pi }f\left( {\cos \theta }\right) {d\theta }. \n\] \n\nThe latter integral can be evaluated using, for example, Simpson’s rule, provided \( f \) is smooth on \( \left\lbrack {0,\pi }\right\rbrack \) .
Consider the improper integral \( {\int }_{0}^{\infty }{e}^{-{x}^{2}}{dx} \) . Write the integral as
\[ {\int }_{0}^{\infty }{e}^{-{x}^{2}}{dx} = {\int }_{0}^{t}{e}^{-{x}^{2}}{dx} + {\int }_{t}^{\infty }{e}^{-{x}^{2}}{dx}, \] where \( t \) is a cutoff chosen large enough that the tail integral \( {\int }_{t}^{\infty }{e}^{-{x}^{2}}{dx} \) is negligibly small.
Example 84. The following table gives the values of \( f\left( x\right) = \sin x \) :

\[ \begin{array}{lllll} x & {0.1} & {0.2} & {0.3} & {0.4} \\ f\left( x\right) & {0.09983} & {0.19867} & {0.29552} & {0.38942} \end{array} \]

Estimate \( {f}^{\prime }\left( {0.1}\right) ,{f}^{\prime }\left( {0.3}\right) \) using an appropriate three-point formula.
Solution. To estimate \( {f}^{\prime }\left( {0.1}\right) \), we set \( {x}_{0} = {0.1} \), and \( h = {0.1} \) . Note that we can only use the three-point endpoint formula.\n\n\[ \n{f}^{\prime }\left( {0.1}\right) \approx \frac{1}{0.2}\left( {-3\left( {0.09983}\right) + 4\left( {0.19867}\right) - {0.29552}}\right) = {0.99835}. \n\] \n\nThe correct answer is \( \cos {0.1} = {0.995004} \) .\n\nTo estimate \( {f}^{\prime }\left( {0.3}\right) \) we can use the midpoint formula: \n\n\[ \n{f}^{\prime }\left( {0.3}\right) \approx \frac{1}{0.2}\left( {{0.38942} - {0.19867}}\right) = {0.95375}. \n\] \n\nThe correct answer is \( \cos {0.3} = {0.955336} \) and thus the absolute error is \( {1.59} \times {10}^{-3} \) . If we use the endpoint formula to estimate \( {f}^{\prime }\left( {0.3}\right) \) we set \( h = - {0.1} \) and compute \n\n\[ \n{f}^{\prime }\left( {0.3}\right) \approx \frac{1}{-{0.2}}\left( {-3\left( {0.29552}\right) + 4\left( {0.19867}\right) - {0.09983}}\right) = {0.95855} \n\] \n\nwith an absolute error of \( {3.2} \times {10}^{-3} \) .
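Both formulas are one-liners (a Python sketch using the tabulated five-digit values of \( \sin x \); the function names are chosen here, not from the text):

```python
def endpoint3(f0, f1, f2, h):
    """Three-point endpoint formula for f'(x0), nodes x0, x0+h, x0+2h."""
    return (-3 * f0 + 4 * f1 - f2) / (2 * h)

def midpoint3(fm, fp, h):
    """Three-point midpoint formula for f'(x0), nodes x0-h, x0+h."""
    return (fp - fm) / (2 * h)

s = [0.09983, 0.19867, 0.29552, 0.38942]  # sin x at x = 0.1, 0.2, 0.3, 0.4
d01 = endpoint3(s[0], s[1], s[2], 0.1)    # estimate of f'(0.1)
d03 = midpoint3(s[1], s[3], 0.1)          # estimate of f'(0.3)
```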
Find the least squares polynomial approximation of degree 2 to \( f\left( x\right) = {e}^{x} \) on \( \left( {0,2}\right) \) .
The normal equations are:\n\n\[ \mathop{\sum }\limits_{{j = 0}}^{2}{a}_{j}{\int }_{0}^{2}{x}^{j + k}{dx} = {\int }_{0}^{2}{e}^{x}{x}^{k}{dx} \]\n\( k = 0,1,2 \) . Here are the three equations:\n\n\[ {a}_{0}{\int }_{0}^{2}{dx} + {a}_{1}{\int }_{0}^{2}{xdx} + {a}_{2}{\int }_{0}^{2}{x}^{2}{dx} = {\int }_{0}^{2}{e}^{x}{dx} \]\n\n\[ {a}_{0}{\int }_{0}^{2}{xdx} + {a}_{1}{\int }_{0}^{2}{x}^{2}{dx} + {a}_{2}{\int }_{0}^{2}{x}^{3}{dx} = {\int }_{0}^{2}{e}^{x}{xdx} \]\n\n\[ {a}_{0}{\int }_{0}^{2}{x}^{2}{dx} + {a}_{1}{\int }_{0}^{2}{x}^{3}{dx} + {a}_{2}{\int }_{0}^{2}{x}^{4}{dx} = {\int }_{0}^{2}{e}^{x}{x}^{2}{dx} \]\n\nComputing the integrals we get\n\n\[ 2{a}_{0} + 2{a}_{1} + \frac{8}{3}{a}_{2} = {e}^{2} - 1 \]\n\n\[ 2{a}_{0} + \frac{8}{3}{a}_{1} + 4{a}_{2} = {e}^{2} + 1 \]\n\n\[ \frac{8}{3}{a}_{0} + 4{a}_{1} + \frac{32}{5}{a}_{2} = 2{e}^{2} - 2 \]\n\nwhose solution is \( {a}_{0} = 3\left( {-7 + {e}^{2}}\right) ,{a}_{1} = - \frac{3}{2}\left( {-{37} + 5{e}^{2}}\right) ,{a}_{2} = \frac{15}{4}\left( {-7 + {e}^{2}}\right) \) . Then\n\n\[ {P}_{2}\left( x\right) = {1.17} + {0.08x} + {1.46}{x}^{2}. \]\n
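The closed-form solution can be verified by substituting it back into the three normal equations (a Python sketch; the residuals should vanish up to rounding):

```python
import math

e2 = math.exp(2)
a0 = 3 * (e2 - 7)
a1 = -1.5 * (5 * e2 - 37)
a2 = 3.75 * (e2 - 7)

# residuals of the three normal equations
r1 = 2 * a0 + 2 * a1 + 8 / 3 * a2 - (e2 - 1)
r2 = 2 * a0 + 8 / 3 * a1 + 4 * a2 - (e2 + 1)
r3 = 8 / 3 * a0 + 4 * a1 + 32 / 5 * a2 - (2 * e2 - 2)
```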
Example 90 (Legendre Polynomials). If \( w\left( x\right) \equiv 1 \) and \( \left\lbrack {a, b}\right\rbrack = \left\lbrack {-1,1}\right\rbrack \), the first four polynomials obtained from the Gram-Schmidt process, when the process is applied to the monomials \( 1, x,{x}^{2},{x}^{3},\ldots \), are:
\[ {\phi }_{0}\left( x\right) = \sqrt{\frac{1}{2}},{\phi }_{1}\left( x\right) = \sqrt{\frac{3}{2}}x,{\phi }_{2}\left( x\right) = \frac{1}{2}\sqrt{\frac{5}{2}}\left( {3{x}^{2} - 1}\right) ,{\phi }_{3}\left( x\right) = \frac{1}{2}\sqrt{\frac{7}{2}}\left( {5{x}^{3} - {3x}}\right) . \] Often these polynomials are written in their orthogonal (rather than orthonormal) form; that is, we drop the requirement \( \left\langle {{\phi }_{j},{\phi }_{j}}\right\rangle = 1 \) in the Gram-Schmidt process, and we scale the polynomials so that the value of each polynomial at 1 equals 1 . The first four polynomials in that form are \[ {L}_{0}\left( x\right) = 1,{L}_{1}\left( x\right) = x,{L}_{2}\left( x\right) = \frac{3}{2}{x}^{2} - \frac{1}{2},{L}_{3}\left( x\right) = \frac{5}{2}{x}^{3} - \frac{3}{2}x. \] These are the Legendre polynomials, which we first discussed in Gaussian quadrature, Section 4.3. They can be obtained from the following recursion \[ {L}_{n + 1}\left( x\right) = \frac{{2n} + 1}{n + 1}x{L}_{n}\left( x\right) - \frac{n}{n + 1}{L}_{n - 1}\left( x\right) , \] \( n = 1,2,\ldots \), and they satisfy \[ \left\langle {{L}_{n},{L}_{n}}\right\rangle = \frac{2}{{2n} + 1}. \]
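The recursion is straightforward to implement (a Python sketch; `legendre` is a name chosen here, not from the text):

```python
def legendre(n, x):
    """Evaluate L_n(x) via the three-term recursion."""
    if n == 0:
        return 1.0
    prev, cur = 1.0, x  # L_0 and L_1
    for k in range(1, n):
        prev, cur = cur, ((2 * k + 1) * x * cur - k * prev) / (k + 1)
    return cur
```

A quick check: the values agree with the closed forms of \( {L}_{2} \) and \( {L}_{3} \) above, and \( {L}_{n}\left( 1\right) = 1 \) for every \( n \), reflecting the chosen scaling.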
Example 91 (Chebyshev polynomials). If we take \( w\left( x\right) = {\left( 1 - {x}^{2}\right) }^{-1/2} \) and \( \left\lbrack {a, b}\right\rbrack = \left\lbrack {-1,1}\right\rbrack \) , and again drop the orthonormal requirement in Gram-Schmidt, we obtain the following orthogonal polynomials:\n\n\[ \n{T}_{0}\left( x\right) = 1,{T}_{1}\left( x\right) = x,{T}_{2}\left( x\right) = 2{x}^{2} - 1,{T}_{3}\left( x\right) = 4{x}^{3} - {3x},\ldots \n\] \n\nThese polynomials are called Chebyshev polynomials and satisfy a curious identity:\n\n\[ \n{T}_{n}\left( x\right) = \cos \left( {n{\cos }^{-1}x}\right), n \geq 0. \n\]
Chebyshev polynomials also satisfy the following recursion:\n\n\[ \n{T}_{n + 1}\left( x\right) = {2x}{T}_{n}\left( x\right) - {T}_{n - 1}\left( x\right) \n\] \n\nfor \( n = 1,2,\ldots \), and\n\n\[ \n\left\langle {{T}_{j},{T}_{k}}\right\rangle = \left\{ \begin{array}{ll} 0 & \text{ if }j \neq k \\ \pi & \text{ if }j = k = 0 \\ \pi /2 & \text{ if }j = k > 0. \end{array}\right. \n\]
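Both the recursion and the identity \( {T}_{n}\left( x\right) = \cos \left( {n{\cos }^{-1}x}\right) \) are easy to cross-check numerically (a Python sketch):

```python
import math

def chebyshev(n, x):
    """Evaluate T_n(x) via the recursion T_{n+1} = 2x T_n - T_{n-1}."""
    if n == 0:
        return 1.0
    prev, cur = 1.0, x  # T_0 and T_1
    for _ in range(n - 1):
        prev, cur = cur, 2 * x * cur - prev
    return cur

# for x in [-1,1], chebyshev(n, x) should equal cos(n * arccos(x))
```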
Find the least squares polynomial approximation of degree three to \( f\left( x\right) = {e}^{x} \) on \( \left( {-1,1}\right) \) using Legendre polynomials.
Solution. Put \( n = 3 \) in Equation (5.13) and let \( {\phi }_{j} \) be \( {L}_{j} \) to get\n\n\[ \n{P}_{3}\left( x\right) = \frac{\left\langle f,{L}_{0}\right\rangle }{{\alpha }_{0}}{L}_{0}\left( x\right) + \frac{\left\langle f,{L}_{1}\right\rangle }{{\alpha }_{1}}{L}_{1}\left( x\right) + \frac{\left\langle f,{L}_{2}\right\rangle }{{\alpha }_{2}}{L}_{2}\left( x\right) + \frac{\left\langle f,{L}_{3}\right\rangle }{{\alpha }_{3}}{L}_{3}\left( x\right) \n\]\n\n\[ \n= \frac{\left\langle {e}^{x},1\right\rangle }{2} + \frac{\left\langle {e}^{x}, x\right\rangle }{2/3}x + \frac{\left\langle {e}^{x},\frac{3}{2}{x}^{2} - \frac{1}{2}\right\rangle }{2/5}\left( {\frac{3}{2}{x}^{2} - \frac{1}{2}}\right) + \frac{\left\langle {e}^{x},\frac{5}{2}{x}^{3} - \frac{3}{2}x\right\rangle }{2/7}\left( {\frac{5}{2}{x}^{3} - \frac{3}{2}x}\right) , \n\]\n\nwhere we used the fact that \( {\alpha }_{j} = \left\langle {{L}_{j},{L}_{j}}\right\rangle = \frac{2}{{2n} + 1} \) (see Example 90). We will compute the inner products, which are definite integrals on \( \left( {-1,1}\right) \), using the five-node Gauss-Legendre quadrature we discussed in the previous chapter. The results rounded to four digits are:\n\n\[ \n\left\langle {{e}^{x},1}\right\rangle = {\int }_{-1}^{1}{e}^{x}{dx} = {2.350} \n\]\n\n\[ \n\left\langle {{e}^{x}, x}\right\rangle = {\int }_{-1}^{1}{e}^{x}{xdx} = {0.7358} \n\]\n\n\[ \n\left\langle {{e}^{x},\frac{3}{2}{x}^{2} - \frac{1}{2}}\right\rangle = {\int }_{-1}^{1}{e}^{x}\left( {\frac{3}{2}{x}^{2} - \frac{1}{2}}\right) {dx} = {0.1431} \n\]\n\n\[ \n\left\langle {{e}^{x},\frac{5}{2}{x}^{3} - \frac{3}{2}x}\right\rangle = {\int }_{-1}^{1}{e}^{x}\left( {\frac{5}{2}{x}^{3} - \frac{3}{2}x}\right) {dx} = {0.02013}. 
\n\]\n\nTherefore\n\n\[ \n{P}_{3}\left( x\right) = \frac{2.35}{2} + \frac{3\left( {0.7358}\right) }{2}x + \frac{5\left( {0.1431}\right) }{2}\left( {\frac{3}{2}{x}^{2} - \frac{1}{2}}\right) + \frac{7\left( {0.02013}\right) }{2}\left( {\frac{5}{2}{x}^{3} - \frac{3}{2}x}\right) \n\]\n\n\[ \n= {0.1761}{x}^{3} + {0.5366}{x}^{2} + {0.9980x} + {0.9961}. \n\]
Find the least squares polynomial approximation of degree three to \( f\left( x\right) = {e}^{x} \) on \( \left( {-1,1}\right) \) using Chebyshev polynomials.
Solution. As in the previous example solution, we take \( n = 3 \) in Equation (5.13)\n\n\[ \n{P}_{3}\left( x\right) = \mathop{\sum }\limits_{{j = 0}}^{3}\frac{\left\langle f,{\phi }_{j}\right\rangle }{{\alpha }_{j}}{\phi }_{j}\left( x\right) \]\n\nbut now \( {\phi }_{j} \) and \( {\alpha }_{j} \) will be replaced by \( {T}_{j} \), the Chebyshev polynomials, and the corresponding constants; see Example 91. We have\n\n\[ \n{P}_{3}\left( x\right) = \frac{\left\langle {e}^{x},{T}_{0}\right\rangle }{\pi }{T}_{0}\left( x\right) + \frac{\left\langle {e}^{x},{T}_{1}\right\rangle }{\pi /2}{T}_{1}\left( x\right) + \frac{\left\langle {e}^{x},{T}_{2}\right\rangle }{\pi /2}{T}_{2}\left( x\right) + \frac{\left\langle {e}^{x},{T}_{3}\right\rangle }{\pi /2}{T}_{3}\left( x\right) . \]\n\nConsider one of the inner products,\n\n\[ \n\left\langle {{e}^{x},{T}_{j}}\right\rangle = {\int }_{-1}^{1}\frac{{e}^{x}{T}_{j}\left( x\right) }{\sqrt{1 - {x}^{2}}}{dx} \]\n\nwhich is an improper integral because the weight function \( 1/\sqrt{1 - {x}^{2}} \) is unbounded at the endpoints. However, we can use the substitution \( \theta = {\cos }^{-1}x \) to rewrite the integral as (see Section 4.5)\n\n\[ \n\left\langle {{e}^{x},{T}_{j}}\right\rangle = {\int }_{-1}^{1}\frac{{e}^{x}{T}_{j}\left( x\right) }{\sqrt{1 - {x}^{2}}}{dx} = {\int }_{0}^{\pi }{e}^{\cos \theta }\cos \left( {j\theta }\right) {d\theta }. \]\n\nThe transformed integral is proper and its integrand is smooth, so we can estimate it with the composite Simpson’s rule. 
The following estimates are obtained by taking \( n = {20} \) in the composite Simpson's rule:\n\n\[ \n\left\langle {{e}^{x},{T}_{0}}\right\rangle = {\int }_{0}^{\pi }{e}^{\cos \theta }{d\theta } = {3.977} \]\n\n\[ \n\left\langle {{e}^{x},{T}_{1}}\right\rangle = {\int }_{0}^{\pi }{e}^{\cos \theta }\cos {\theta d\theta } = {1.775} \]\n\n\[ \n\left\langle {{e}^{x},{T}_{2}}\right\rangle = {\int }_{0}^{\pi }{e}^{\cos \theta }\cos {2\theta d\theta } = {0.4265} \]\n\n\[ \n\left\langle {{e}^{x},{T}_{3}}\right\rangle = {\int }_{0}^{\pi }{e}^{\cos \theta }\cos {3\theta d\theta } = {0.06964} \]\n\nTherefore\n\n\[ \n{P}_{3}\left( x\right) = \frac{3.977}{\pi } + \frac{3.55}{\pi }x + \frac{0.853}{\pi }\left( {2{x}^{2} - 1}\right) + \frac{0.1393}{\pi }\left( {4{x}^{3} - {3x}}\right) \]\n\n\[ \n= {0.1774}{x}^{3} + {0.5430}{x}^{2} + {0.9970x} + {0.9944}. \]\n
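A short sketch of the computation, assuming only the Python standard library; `composite_simpson` is a helper written here for the illustration, not a library routine.

```python
import math

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

# <e^x, T_j> = integral of e^{cos(theta)} cos(j theta) over [0, pi], n = 20
inner = [
    composite_simpson(lambda t, j=j: math.exp(math.cos(t)) * math.cos(j * t),
                      0.0, math.pi, 20)
    for j in range(4)
]
print([round(v, 4) for v in inner])  # close to 3.977, 1.775, 0.4265, 0.06964
```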
Property 13.1.1 A property of tangents to functions of one variable. Suppose \( f \) is a function of one variable and at a number \( a \) in its domain, \( {f}^{\prime }\left( a\right) \) exists. The graph of \( L\left( x\right) = f\left( a\right) + {f}^{\prime }\left( a\right) \left( {x - a}\right) \) is the tangent to the graph of \( f \) at \( \left( {a, f\left( a\right) }\right) \) .
\[ \mathop{\lim }\limits_{{x \rightarrow a}}\frac{\left| f\left( x\right) - L\left( x\right) \right| }{\left| x - a\right| } = \mathop{\lim }\limits_{{x \rightarrow a}}\left| \frac{f\left( x\right) - f\left( a\right) - {f}^{\prime }\left( a\right) \left( {x - a}\right) }{x - a}\right| \] \[ = \left| {\mathop{\lim }\limits_{{x \rightarrow a}}\frac{f\left( x\right) - f\left( a\right) }{x - a} - {f}^{\prime }\left( a\right) }\right| \] (13.3) \[ = 0\text{.} \]
Property 13.1.2 A property of local linear approximations to functions of two variables. Suppose \( F \) is a function of two variables, \( \left( {a, b}\right) \) is a number pair in the domain of \( F \), and \( {F}_{1} \) and \( {F}_{2} \) exist and are continuous on the interior of a circle with center \( \left( {a, b}\right) \) . Then \( L\left( {x, y}\right) = F\left( {a, b}\right) + {F}_{1}\left( {a, b}\right) \left( {x - a}\right) + {F}_{2}\left( {a, b}\right) \left( {y - b}\right) \) is the local linear approximation to \( F \) at \( \left( {a, b}\right) \), and
\[ \mathop{\lim }\limits_{{\left( {x, y}\right) \rightarrow \left( {a, b}\right) }}\frac{\left| F\left( x, y\right) - L\left( x, y\right) \right| }{\sqrt{{\left( x - a\right) }^{2} + {\left( y - b\right) }^{2}}} = 0 \]
The graphs of \( F\left( {x, y}\right) = {x}^{2} + {y}^{2}, G\left( {x, y}\right) = {x}^{2} - {y}^{2} \) and \( H\left( {x, y}\right) = - {x}^{2} - {y}^{2} \) shown in Figure 13.10 illustrate three important options. The origin, \( \left( {0,0}\right) \), is a critical point of each of the graphs and the \( z = 0 \) plane is the tangent plane to each of the graphs at \( \left( {0,0}\right) \).
For \( F \), for example,\n\n\[ F\left( {x, y}\right) = {x}^{2} + {y}^{2}\;{F}_{1}\left( {x, y}\right) = {2x}\;{F}_{2}\left( {x, y}\right) = {2y} \]\n\n\[ F\left( {0,0}\right) = 0\;{F}_{1}\left( {0,0}\right) = 0\;{F}_{2}\left( {0,0}\right) = 0 \]\n\nThe origin, \( \left( {0,0}\right) \), is a critical point of \( F \), the linear approximation to \( F \) at \( \left( {0,0}\right) \) is \( L\left( {x, y}\right) = 0 \), and the tangent plane is \( z = 0 \), or the \( x, y \) plane. The same is true for \( G \) and \( H \) ; the tangent plane at \( \left( {0,0,0}\right) \) is \( z = 0 \) for all three examples.
Fit a line to the data in Example Figure 13.2.3.3.
\[ {S}_{x} = 1 + 2 + 4 + 6 + 8 = {21}, \] \[ {S}_{y} = {0.5} + {0.8} + {1.0} + {1.7} + {1.8} = {5.8} \] \[ {S}_{xx} = {1}^{2} + {2}^{2} + {4}^{2} + {6}^{2} + {8}^{2} = {121}, \] \[ {S}_{xy} = 1 \times {0.5} + 2 \times {0.8} + 4 \times {1.0} + 6 \times {1.7} + 8 \times {1.8} = {30.7} \] \[ \Delta = n{S}_{xx} - {\left( {S}_{x}\right) }^{2} = 5 \times {121} - {\left( {21}\right) }^{2} = {164} \] \[ a = \left( {{S}_{xx}{S}_{y} - {S}_{x}{S}_{xy}}\right) /\Delta = \left( {{121} \times {5.8} - {21} \times {30.7}}\right) /{164} = {0.348} \] \[ b = \left( {n{S}_{xy} - {S}_{x}{S}_{y}}\right) /\Delta = \left( {5 \times {30.7} - {21} \times {5.8}}\right) /{164} = {0.193} \] The line \( y = {0.348} + {0.193x} \) is the closest to the data in the sense of least squares and is drawn in Example Figure 13.2.3.3.
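The arithmetic can be verified with a few lines of Python (a spot-check of the formulas above, not part of the original example):

```python
# Least squares line through the five data points of Example Figure 13.2.3.3
xs = [1, 2, 4, 6, 8]
ys = [0.5, 0.8, 1.0, 1.7, 1.8]
n = len(xs)

Sx = sum(xs)
Sy = sum(ys)
Sxx = sum(x * x for x in xs)
Sxy = sum(x * y for x, y in zip(xs, ys))

delta = n * Sxx - Sx**2
a = (Sxx * Sy - Sx * Sxy) / delta   # intercept
b = (n * Sxy - Sx * Sy) / delta     # slope
print(round(a, 3), round(b, 3))  # 0.348 0.193
```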
Find the dimensions of the largest box (rectangular solid) that will fit in a hemisphere of radius \( R \) .
Solution. Assume the hemisphere is the graph of \( z = \sqrt{{R}^{2} - {x}^{2} - {y}^{2}} \) and that the optimum box has one face in the \( x, y \) -plane and the other four corners on the hemisphere (see Figure 13.2.4.4).\n\nThe volume, \( V \), of the box is\n\n\[ V\left( {x, y}\right) = {2x} \times {2y} \times z = {2x} \times {2y} \times \sqrt{{R}^{2} - {x}^{2} - {y}^{2}} \]\n\nBefore launching into partial differentiation, it is perhaps clever, and certainly useful, to observe that the values of \( x \) and \( y \) for which \( V \) is a maximum are also the values for which \( {V}^{2}/{16} \) is a maximum.\n\n\[ \frac{{V}^{2}\left( {x, y}\right) }{16} = W\left( {x, y}\right) = \frac{{16}{x}^{2}{y}^{2}\left( {{R}^{2} - {x}^{2} - {y}^{2}}\right) }{16} = {R}^{2}{x}^{2}{y}^{2} - {x}^{4}{y}^{2} - {x}^{2}{y}^{4}. \]\n\nIt is easier to analyze \( W\left( {x, y}\right) \) than it is to analyze \( V\left( {x, y}\right) \) .\n\n\[ {W}_{1}\left( {x, y}\right) = 2{R}^{2}x{y}^{2} - 4{x}^{3}{y}^{2} - {2x}{y}^{4} \]\n\n\[ = {2x}{y}^{2}\left( {{R}^{2} - 2{x}^{2} - {y}^{2}}\right) \]\n\n\[ {W}_{2}\left( {x, y}\right) = 2{R}^{2}{x}^{2}y - 2{x}^{4}y - 4{x}^{2}{y}^{3} \]\n\n\[ = 2{x}^{2}y\left( {{R}^{2} - {x}^{2} - 2{y}^{2}}\right) \]\n\nSolving \( {W}_{1}\left( {x, y}\right) = 0 \) and \( {W}_{2}\left( {x, y}\right) = 0 \) with \( x > 0 \) and \( y > 0 \) requires \( {R}^{2} - 2{x}^{2} - {y}^{2} = 0 \) and \( {R}^{2} - {x}^{2} - 2{y}^{2} = 0 \), which yields \( x = R/\sqrt{3} \) and \( y = R/\sqrt{3} \), for which \( z = R/\sqrt{3} \). The dimensions of the box are \( {2R}/\sqrt{3},{2R}/\sqrt{3} \), and \( R/\sqrt{3} \).
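A coarse numerical spot-check of the answer, with the assumed value \( R = 1 \); the grid search below is only an illustration, not the method of the example.

```python
import math

R = 1.0  # assumed radius for the check
best_v, best_x, best_y = 0.0, 0.0, 0.0
steps = 400
for i in range(1, steps):
    for j in range(1, steps):
        x, y = R * i / steps, R * j / steps
        s = R * R - x * x - y * y
        if s <= 0:
            continue                       # corner would lie outside the hemisphere
        v = 2 * x * 2 * y * math.sqrt(s)   # V = 2x * 2y * z
        if v > best_v:
            best_v, best_x, best_y = v, x, y

target = R / math.sqrt(3)
print(best_x, best_y)  # both near R/sqrt(3), about 0.5774
```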
Show that\n\n\[ \n u\left( {x, t}\right) = {30}\left( {1 - {e}^{-t}\cos {\pi x}}\right) \n\]\n\nsatisfies\n\n\[ \n{u}_{t}\left( {x, t}\right) = \frac{1}{{\pi }^{2}}{u}_{xx}\left( {x, t}\right) ,\;u\left( {x,0}\right) = {30}\left( {1 - \cos {\pi x}}\right) ,\;\text{ and }\;\begin{array}{l} {u}_{x}\left( {0, t}\right) = 0 \\ {u}_{x}\left( {2, t}\right) = 0. \end{array} \]\n\n(13.24)
Solution. First compute some partial derivatives.\n\n\[ \n u\left( {x, t}\right) = {30}\left( {1 - {e}^{-t}\cos {\pi x}}\right) \]\n\n(13.25)\n\n\[ \n{u}_{t}\left( {x, t}\right) = {30}\left( {0 - \left( {e}^{-t}\right) \left( {-1}\right) \cos {\pi x}}\right) = {30}{e}^{-t}\cos \left( {\pi x}\right) \]\n\n(13.26)\n\n\[ \n{u}_{x}\left( {x, t}\right) = {30}\left( {0 - {e}^{-t}\left( {-\sin {\pi x}}\right) \left( \pi \right) }\right) = {30\pi }{e}^{-t}\sin {\pi x} \]\n\n(13.27)\n\n\[ \n{u}_{xx}\left( {x, t}\right) = {30}{\pi }^{2}{e}^{-t}\cos {\pi x} \]\n\n(13.28)\n\nFrom Equations 13.26 and 13.28\n\n\[ \n{u}_{t}\left( {x, t}\right) = {30}{e}^{-t}\cos \left( {\pi x}\right) = \frac{1}{{\pi }^{2}}{30}{\pi }^{2}{e}^{-t}\cos \left( {\pi x}\right) = \frac{1}{{\pi }^{2}}{u}_{xx}. \]\n\nFrom Equation 13.25\n\n\[ \n u\left( {x,0}\right) = {\left. {30}\left( 1 - {e}^{-t}\cos {\pi x}\right) \right| }_{t = 0} = {30}\left( {1 - \cos {\pi x}}\right) . \]\n\nFrom Equation 13.27,\n\n\[ \n{u}_{x}\left( {0, t}\right) = {\left. {30}\pi {e}^{-t}\sin {\pi x}\right| }_{x = 0} = 0\;\text{ and }\;{u}_{x}\left( {2, t}\right) = {\left. {30}\pi {e}^{-t}\sin {\pi x}\right| }_{x = 2} = 0 \]\n\nThus all of Equations 13.24 are satisfied.
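The verification can also be done numerically with centered difference quotients (a spot-check of the stated formula for \( u \); the step size `h` is an arbitrary small choice):

```python
import math

def u(x, t):
    return 30.0 * (1.0 - math.exp(-t) * math.cos(math.pi * x))

h = 1e-4  # difference-quotient step

def u_t(x, t):
    return (u(x, t + h) - u(x, t - h)) / (2 * h)

def u_x(x, t):
    return (u(x + h, t) - u(x - h, t)) / (2 * h)

def u_xx(x, t):
    return (u(x + h, t) - 2.0 * u(x, t) + u(x - h, t)) / (h * h)

# Largest residual of u_t = (1/pi^2) u_xx over a few sample points
residual = max(abs(u_t(x, t) - u_xx(x, t) / math.pi**2)
               for x in (0.3, 1.2, 1.9) for t in (0.1, 0.5, 1.0))
print(residual < 1e-3, abs(u_x(0.0, 0.7)) < 1e-3, abs(u_x(2.0, 0.7)) < 1e-3)
# True True True
```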
Explore 13.3.5 What do you expect the 'eventual' salt concentration along the tube to be?
The diffusion equation is \( {u}_{t}\left( {x, t}\right) = k{u}_{xx}\left( {x, t}\right) \). Because initially there is no salt in the tube, \( g\left( x\right) = 0 \) for \( 0 < x < 1 \), and the reservoirs at the ends of the tube imply the fixed boundary conditions 13.22, \( u\left( {0, t}\right) = 1 \) and \( u\left( {1, t}\right) = 0,0 \leq t \). Partition the tube into 5 equal intervals. There is in Figure 13.15 an array of points horizontally distributed with position \( x \) along the tube and distributed vertically in time \( t \). In this example, \[ {v}_{i, j} \doteq u\left( {i \times 1/5, j \times \delta }\right) ,\;i = 0,1,\cdots ,5,\;j = 0,1,\cdots . \] and from Equation 13.30 \[ {v}_{i, j + 1} = {v}_{i, j} + \widehat{k}\left( {{v}_{i - 1, j} - 2{v}_{i, j} + {v}_{i + 1, j}}\right) \;\text{ where }\;\widehat{k} = \frac{\delta \times k}{{d}^{2}}. \] (13.32) The boundary conditions 13.22 with \( u\left( {0, t}\right) = 1 \) and \( u\left( {1, t}\right) = 0 \) lead to \[ {v}_{0, j} = 1\;{v}_{5, j} = 0 \] (13.33) The initial condition and equations 13.32 determine the \( {v}_{i, j} \) one horizontal layer at a time for the interior grid points, \( 0 < i < 5 \). Begin with the initial condition, Equation 13.20: \[ {v}_{i,0} = g\left( {x}_{i}\right) = 0,\;i = 1,\cdots ,4 \] (13.34) Then for the bottom layer of the grid in Figure 13.15 \[ {v}_{0,0} = 1\;\text{ and }\;{v}_{j,0} = 0\;j = 1,\cdots ,5. \] Then compute the next layer up for \( t = \delta \): \[ {v}_{1,1} = {v}_{1,0} + \widehat{k}\left( {{v}_{0,0} - 2{v}_{1,0} + {v}_{2,0}}\right) \] \[ {v}_{2,1} = {v}_{2,0} + \widehat{k}\left( {{v}_{1,0} - 2{v}_{2,0} + {v}_{3,0}}\right) \] \[ {v}_{3,1} = {v}_{3,0} + \widehat{k}\left( {{v}_{2,0} - 2{v}_{3,0} + {v}_{4,0}}\right) \] \[ {v}_{4,1} = {v}_{4,0} + \widehat{k}\left( {{v}_{3,0} - 2{v}_{4,0} + {v}_{5,0}}\right) \] and the boundary conditions 13.33 give \[ {v}_{0,1} = 1 \] \[ {v}_{5,1} = 0 \] In a similar way as many layers as necessary can be computed. 
The computations for \( \widehat{k} = {0.2} \) are shown below. Calculator and MATLAB programs that produce the computations appear in Table 13.1.
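A sketch of the scheme in Python (an independent illustration of Equations 13.32 and 13.33, not the calculator or MATLAB program of Table 13.1). Running it for many layers also answers Explore 13.3.5: the computed concentrations approach the linear profile \( 1 - x \).

```python
khat = 0.2             # delta * k / d^2, the value used in the text
v = [1.0] + [0.0] * 5  # v[0..5]: left reservoir at 1, tube and right end at 0
for layer in range(200):
    new = v[:]
    for i in range(1, 5):  # interior points, Equation 13.32
        new[i] = v[i] + khat * (v[i - 1] - 2.0 * v[i] + v[i + 1])
    new[0], new[5] = 1.0, 0.0  # fixed boundary values, Equation 13.33
    v = new
print([round(x, 3) for x in v])  # approaches [1.0, 0.8, 0.6, 0.4, 0.2, 0.0]
```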
Theorem 14.4.1 If \( F \) and \( {F}^{\prime } \) are continuous on \( \left\lbrack {0,1}\right\rbrack \) and for \( x \) in \( \left\lbrack {0,1}\right\rbrack, F\left( x\right) \) is in \( \left\lbrack {0,1}\right\rbrack \), and \( E \) is a number in \( \left\lbrack {0,1}\right\rbrack \) for which \( F\left( E\right) = E \) . Then the equilibrium point \( E \) is locally stable for the iteration \( {P}_{t + 1} = F\left( {P}_{t}\right) \) if \( \left| {{F}^{\prime }\left( E\right) }\right| < 1 \) .
Proof. The proof of Theorem 14.4.1 makes good use of the Mean Value Theorem, Theorem 9.1.1 in Volume I.\n\nSuppose \( F \) satisfies the hypothesis of Theorem 14.4.1. Let \( R = \left( {\left| {{F}^{\prime }\left( E\right) }\right| + 1}\right) /2 < 1 \) . Because \( {F}^{\prime } \) is continuous there is a subinterval \( \left( {a, b}\right) \) of \( \left( {0,1}\right) \) with midpoint \( E \) for which\n\n\[ \left| {{F}^{\prime }\left( x\right) }\right| < R\;\text{ for all }\;a < x < b\;\text{ (Similar to Theorem 4.1.4 in Volume I.) } \]\n\nSuppose \( {p}_{0} \) is in \( \left( {a, b}\right) \), and \( {p}_{0},{p}_{1},{p}_{2},\cdots \) is the iteration sequence defined by \( {p}_{t + 1} = F\left( {p}_{t}\right) \) . We will show that\n\n\[ \mathop{\lim }\limits_{{t \rightarrow \infty }}{p}_{t} = E \]\n\nWe first show by induction that every point of \( {p}_{0},{p}_{1},{p}_{2},\cdots \) is in \( \left( {a, b}\right) \) . By hypothesis \( {p}_{0} \) is in \( \left( {a, b}\right) \) . Suppose \( {p}_{t} \) is in \( \left( {a, b}\right) \) . Then\n\n\[ E = F\left( E\right) \]\n\( E \) is an equilibrium point.\n\n\[ {p}_{t + 1} = F\left( {p}_{t}\right) \]\nIteration equation.\n\n\[ {p}_{t + 1} - E = F\left( {p}_{t}\right) - F\left( E\right) \]\nSubtraction.\n\nThen there is a number \( c \) between \( E \) and \( {p}_{t} \) such that\n\n\[ {p}_{t + 1} - E = {F}^{\prime }\left( c\right) \left( {{p}_{t} - E}\right) \]\nMean Value Theorem.\n\n\[ \left| {{p}_{t + 1} - E}\right| = \left| {{F}^{\prime }\left( c\right) }\right| \left| {{p}_{t} - E}\right| \; < \;R\left| {{p}_{t} - E}\right| \;\left| {{F}^{\prime }\left( c\right) }\right| < R \]\n\nThe last inequality asserts that \( {p}_{t + 1} \) is closer to \( E \) than is \( {p}_{t} \) by a factor of \( R < 1 \) . Because \( E \) is the midpoint of \( \left( {a, b}\right) \) it follows that \( {p}_{t + 1} \) is also in \( \left( {a, b}\right) \) . 
By induction all of \( {p}_{0},{p}_{1},{p}_{2},\cdots \) are in \( \left( {a, b}\right) \).\n\nThe inequalities\n\n\[ \left| {{p}_{t + 1} - E}\right| \leq R\left| {{p}_{t} - E}\right| ,\;t = 0,1,\cdots \]\n\ncan be cascaded to find that\n\n\[ \left| {{p}_{t} - E}\right| < {R}^{t}\left| {{p}_{0} - E}\right| . \]\n\nBecause \( 0 < R < 1,{R}^{t} \rightarrow 0 \) as \( t \rightarrow \infty \), and \( {p}_{t} \rightarrow E \) as \( t \rightarrow \infty \) . It follows that \( E \) is locally stable. End of proof.
We show that the number \( 1/a \) is the only locally stable equilibrium point of the iteration function\n\n\[ \nF\left( x\right) = x \times \left( {2 - a \times x}\right) \n\]\n\nfor the iteration \( {x}_{n + 1} = {x}_{n} \times \left( {2 - a \times {x}_{n}}\right) \).
The equilibrium points, \( E \), are\n\n\[ E = F\left( E\right) ,\;E = E \times \left( {2 - a \times E}\right) ,\;E = 0\;\text{ or }\;E = \frac{1}{a} \]\n\nTo examine stability using Theorem 14.4.1 we compute\n\n\[ {F}^{\prime }\left( x\right) = {\left\lbrack x \times \left( 2 - a \times x\right) \right\rbrack }^{\prime } = {\left\lbrack 2x - a \times {x}^{2}\right\rbrack }^{\prime } = 2 - 2 \times a \times x \]\n\nFor \( E = 0,\;{F}^{\prime }\left( 0\right) = 2\; \) and \( E = 0 \) is a nonstable equilibrium.\n\nFor \( \;E = \frac{1}{a},\;{F}^{\prime }\left( \frac{1}{a}\right) = 2 - 2 \times a \times \frac{1}{a} = 0\; \) and \( E = \frac{1}{a} \) is a locally stable equilibrium.
Example 14.4.2 Cray computation of \( 1/\pi \) In 24 digit binary notation,
\[ \pi \doteq {110010010000111111011010} \times {2}^{2}. \] In decimal notation, \[ \pi \doteq {0.785398} \times {2}^{2},\;\text{ and }\;\frac{1}{\pi } \doteq \frac{1}{0.785398} \times {2}^{-2}. \] An iteration sequence to compute 1/0.785398 is illustrated in Figure 14.13. The sequence is \( {x}_{0} = {1.75},{x}_{n + 1} = {x}_{n}\left( {2 - {0.785398}{x}_{n}}\right) . \) Because \( {F}^{\prime }\left( \frac{1}{a}\right) = 0 \), convergence is very rapid as can be seen in Figure 14.13 and by \[ {x}_{0} = {1.75},\;{x}_{1} = {1.094719},\;{x}_{2} = {1.248209},\;{x}_{3} = {1.272748}, \] \[ {x}_{4} = {1.273240},\;{x}_{5} = {1.273240}. \] So \( 1/\pi \doteq {1.273240} \times {2}^{-2} = {0.318310} \)
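The iteration is easy to reproduce (a sketch; the starting value 1.75 and five steps follow the example):

```python
a = 0.785398  # pi/4 to six digits
x = 1.75
seq = [x]
for _ in range(5):
    x = x * (2.0 - a * x)  # iteration for 1/a, using no division
    seq.append(x)
print([round(v, 6) for v in seq])  # converges to 1/0.785398 = 1.273240...
```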
If \( B > 1 \) and \( n \) is a positive integer, then \( \;\mathop{\lim }\limits_{{t \rightarrow \infty }}\frac{{B}^{t}}{{t}^{n}} = \infty \)
We first show that\n\n\[ \mathop{\lim }\limits_{{t \rightarrow \infty }}\frac{{e}^{t}}{t} = \infty \]\n\n\( \left( {14.20}\right) \)\n\nWe assume it to be true that\n\n\[ \mathop{\lim }\limits_{{t \rightarrow \infty }}{e}^{t} = \infty \]\n\n(14.21)\n\nbut have included a proof for you to complete in Problem 14.5.8.\n\nNow\n\n\[ \frac{{e}^{t}}{t}\;\overset{a}{ > }\;\frac{{e}^{t} - {e}^{\frac{t}{2}}}{t}\; = \;\frac{1}{2}\frac{{e}^{t} - {e}^{\frac{t}{2}}}{t - \frac{t}{2}}\;\overset{b}{ = }\;\frac{1}{2}{e}^{{c}_{t}}\;\text{ where }\;\frac{t}{2} < {c}_{t} < t \]\n\n\( \left( {14.22}\right) \)\n\nYou are asked in Exercise 14.5.1 to give reasons for steps a. and b. in Equation 14.22.\n\n\[ \text{Because}\frac{t}{2} < {c}_{t}\;\text{as}\;t \rightarrow \infty ,\;{c}_{t} \rightarrow \infty \;\text{and}\;\frac{1}{2}{e}^{{c}_{t}} \rightarrow \infty \]\n\n\[ \text{Because}\frac{{e}^{t}}{t} > \frac{1}{2}{e}^{{c}_{t}}\text{it follows that}\;\mathop{\lim }\limits_{{t \rightarrow \infty }}\frac{{e}^{t}}{t} = \infty \]\n\nNext we show that\n\n\[ \text{If}n\text{is a positive integer, then}\;\mathop{\lim }\limits_{{t \rightarrow \infty }}\frac{{e}^{t}}{{t}^{n}} = \infty \]\n\n(14.23)\n\nObserve that\n\n\[ \frac{{e}^{t}}{{t}^{n}} = \frac{{\left( {e}^{\frac{t}{n}}\right) }^{n}}{{t}^{n}} = \frac{1}{{n}^{n}}\frac{{\left( {e}^{\frac{t}{n}}\right) }^{n}}{{\left( \frac{t}{n}\right) }^{n}} \]\n\n\[ = \frac{1}{{n}^{n}}\frac{{\left( {e}^{\frac{t}{n}}\right) }^{n}}{{\left( \frac{t}{n}\right) }^{n}} = \frac{1}{{n}^{n}}{\left( \frac{{e}^{\frac{t}{n}}}{\frac{t}{n}}\right) }^{n} \]\n\nSince \( \mathop{\lim }\limits_{{t \rightarrow \infty }}\frac{{e}^{\frac{t}{n}}}{\frac{t}{n}} = \infty \), it follows that \( \mathop{\lim }\limits_{{t \rightarrow \infty }}\frac{{e}^{t}}{{t}^{n}} = \infty \).
Theorem 14.5.3 L’Hospital’s Theorem on a bounded interval. Suppose \( \left\lbrack {a, b}\right\rbrack \) is a number interval and \( F \) and \( G \) are continuous functions defined on the half open interval \( (a, b\rbrack \) and \( {F}^{\prime } \) and \( {G}^{\prime } \) are continuous on \( \left( {a, b}\right) \) and \( {G}^{\prime } \) is nonzero on \( \left( {a, b}\right) \) . If\n\n\[ \n\mathop{\lim }\limits_{{t \rightarrow {a}^{ + }}}F\left( t\right) = 0,\;\mathop{\lim }\limits_{{t \rightarrow {a}^{ + }}}G\left( t\right) = 0\;\text{ and }\;\mathop{\lim }\limits_{{t \rightarrow {a}^{ + }}}\frac{{F}^{\prime }\left( t\right) }{{G}^{\prime }\left( t\right) } = L \n\]\n\nthen\n\[ \n\mathop{\lim }\limits_{{t \rightarrow {a}^{ + }}}\frac{F\left( t\right) }{G\left( t\right) } = L \n\]
Proof. Because \( \mathop{\lim }\limits_{{t \rightarrow {a}^{ + }}}F\left( t\right) = \mathop{\lim }\limits_{{t \rightarrow {a}^{ + }}}G\left( t\right) = 0 \), the domain of \( F \) and \( G \) can be extended to include \( a \) by defining \( F\left( a\right) = G\left( a\right) = 0 \) and the extended \( F \) and \( G \) are continuous on \( \left\lbrack {a, b}\right\rbrack \) . By the Extended Mean Value Theorem 14.5.2, for any \( t \) in \( \left( {a, b}\right) \) there is a number \( {c}_{t} \) between \( a \) and \( t \) such that\n\n\[ \n\frac{F\left( t\right) }{G\left( t\right) } = \frac{F\left( t\right) - F\left( a\right) }{G\left( t\right) - G\left( a\right) } = \frac{{F}^{\prime }\left( {c}_{t}\right) }{{G}^{\prime }\left( {c}_{t}\right) }. \n\]\n\nAs \( t \) approaches \( a,{c}_{t} \) also approaches \( a \) and\n\n\[ \n\mathop{\lim }\limits_{{t \rightarrow {a}^{ + }}}\frac{F\left( t\right) }{G\left( t\right) } = \mathop{\lim }\limits_{{t \rightarrow {a}^{ + }}}\frac{{F}^{\prime }\left( {c}_{t}\right) }{{G}^{\prime }\left( {c}_{t}\right) } = L \n\]\n\nEnd of proof.
We begin with the equation\n\n\[ \n{P}_{0} = {1000} \n\]\n\n\[ \n{P}_{t + 1} - {P}_{t} = {0.2} \times {P}_{t} \times \left( {1 - \frac{{P}_{t}}{1000}}\right) - h \times {P}_{t} \n\]\n\nof a population with an initial population of 1000 individuals, low density growth rate of 0.2 per time interval, carrying capacity 1000 individuals, and ask what constant fraction of individuals present, \( h \) , may be harvested and still retain the population?
The equilibrium population is important and we solve for \( {P}_{e} \) in\n\n\[ \n{P}_{e} - {P}_{e} = {0.2} \times {P}_{e} \times \left( {1 - \frac{{P}_{e}}{1000}}\right) - h \times {P}_{e} \n\]\n\n\[ \n0 = {0.2} \times {P}_{e} \times \left( {1 - \frac{{P}_{e}}{1000}}\right) - h \times {P}_{e} \n\]\n\n\[ \n0 = {P}_{e} \times \left( {{0.2} \times \left( {1 - \frac{{P}_{e}}{1000}}\right) - h}\right) \n\]\n\nThe choices are\n\n\[ \n{P}_{e} = 0\;\text{ and }\;{0.2} \times \left( {1 - \frac{{P}_{e}}{1000}}\right) - h = 0 \n\]\n\nfor which\n\n\[ \n{P}_{e} = 0\;\text{ and }\;{P}_{e} = {1000} \times \left( {1 - {5h}}\right) \n\]\n\nObserve that if \( 1 - {5h} \) is negative, the only realistic equilibrium is 0 . Therefore if we harvest more than \( {20}\% \left( {h > {0.2}}\right) \) of the population present at each time, we will lose the population. This makes sense.\n\nThe low density growth rate (births minus natural deaths) is \( {20}\% \) and if harvest exceeds that we will lose the population. In fact, if harvest \( h = {0.2} \) then \( {P}_{e} = {1000} \times \left( {1 - {5h}}\right) = 0 \), and we lose the population.
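A quick simulation of the harvested population illustrates the conclusion (the harvest rates 0.10 and 0.25 below are assumed test values):

```python
def population(h, p0=1000.0, steps=500):
    """Iterate P_{t+1} = P_t + 0.2 P_t (1 - P_t/1000) - h P_t."""
    p = p0
    for _ in range(steps):
        p = p + 0.2 * p * (1.0 - p / 1000.0) - h * p
    return p

print(round(population(0.10)))  # near the equilibrium 1000*(1 - 5*0.10) = 500
print(round(population(0.25)))  # harvest above 20%: the population is lost
```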
If \( {u}_{t} \) and \( {v}_{t} \) are solutions to the linear homogeneous difference equation\n\n\[ \n{w}_{t + n} + {p}_{n - 1}{w}_{t + n - 1} + \cdots + {p}_{1}{w}_{t + 1} + {p}_{0}{w}_{t} = 0 \n\]\n\n(15.21)\n\nwhere \( {p}_{n - 1},\cdots {p}_{1} \) and \( {p}_{0} \) are constants, then for any numbers \( {C}_{1} \) and \( {C}_{2} \) ,\n\n\[ \n{C}_{1}{u}_{t} + {C}_{2}{v}_{t}\text{is a solution to Equation 15.21} \n\]
Proof.\n\n\[ \n\left( {{C}_{1}{u}_{t + n} + {C}_{2}{v}_{t + n}}\right) + {p}_{n - 1}\left( {{C}_{1}{u}_{t + n - 1} + {C}_{2}{v}_{t + n - 1}}\right) + \cdots + {p}_{1}\left( {{C}_{1}{u}_{t + 1} + {C}_{2}{v}_{t + 1}}\right) + {p}_{0}\left( {{C}_{1}{u}_{t} + {C}_{2}{v}_{t}}\right) = \n\]\n\n\[ \n{C}_{1}\left( {{u}_{t + n} + {p}_{n - 1}{u}_{t + n - 1} + \cdots + {p}_{1}{u}_{t + 1} + {p}_{0}{u}_{t}}\right) \n\]\n\n\[ \n+ \;{C}_{2}\left( {{v}_{t + n} + {p}_{n - 1}{v}_{t + n - 1} + \cdots + {p}_{1}{v}_{t + 1} + {p}_{0}{v}_{t}}\right) = \n\]\n\n\[ \n{C}_{1} \times 0 + {C}_{2} \times 0 = 0. \n\]\n\nEnd of proof.
Find formulas for \( {A}_{t} \) and \( {B}_{t} \) if\n\n\[ \n{A}_{0} = 1,\;{A}_{t + 1} = {0.52}{A}_{t} + {0.04}{B}_{t} \]\n\n\[ \n{B}_{0} = 2,\;{B}_{t + 1} = {0.24}{A}_{t} + {0.4}{B}_{t} \]\n
Solution\n\n\[ \np = {0.52} + {0.4} = {0.92},\;q = {0.52} \cdot {0.4} - {0.24} \cdot {0.04} = {0.1984},\;{r}^{2} - {0.92r} + {0.1984} = 0 \]\n\n\[ \n{r}_{1} = \frac{{0.92} + \sqrt{{0.92}^{2} - 4 \cdot {0.1984}}}{2} = {0.46} + \sqrt{0.0132},\;{r}_{2} = {0.46} - \sqrt{0.0132} \]\n\n\[ \n{A}_{1} = {0.52} \cdot 1 + {0.04} \cdot 2 = {0.6},\;{B}_{1} = {0.24} \cdot 1 + {0.4} \cdot 2 = {1.04}. \]\n\n\[ \n{C}_{1} = \frac{{A}_{1} - {r}_{2}{A}_{0}}{{r}_{1} - {r}_{2}} = \frac{{0.6} - \left( {{0.46} - \sqrt{0.0132}}\right) \cdot 1}{2\sqrt{0.0132}} = \left( {{0.07}/\sqrt{0.0132}}\right) + {0.5} \]\n\n\[ \n{C}_{2} = \frac{{r}_{1}{A}_{0} - {A}_{1}}{{r}_{1} - {r}_{2}} = \frac{\left( {{0.46} + \sqrt{0.0132}}\right) \cdot 1 - {0.6}}{2\sqrt{0.0132}} = \left( {-{0.07}/\sqrt{0.0132}}\right) + {0.5} \]\n\n\[ \n{A}_{t} = {C}_{1}{r}_{1}^{t} + {C}_{2}{r}_{2}^{t} \]\n\n\[ \n= \left( {\left( {{0.07}/\sqrt{0.0132}}\right) + {0.5}}\right) {\left( {0.46} + \sqrt{0.0132}\right) }^{t} + \]\n\n\[ \n\left( {\left( {-{0.07}/\sqrt{0.0132}}\right) + {0.5}}\right) {\left( {0.46} - \sqrt{0.0132}\right) }^{t} \]\n\n\[ \n{D}_{1} = \frac{{B}_{1} - {r}_{2}{B}_{0}}{{r}_{1} - {r}_{2}} = \left( {{0.06}/\sqrt{0.0132}}\right) + 1\;\text{See Exercise 15.4.8} \]\n\n\[ \n{D}_{2} = \frac{{r}_{1}{B}_{0} - {B}_{1}}{{r}_{1} - {r}_{2}} = \left( {-{0.06}/\sqrt{0.0132}}\right) + 1 \]\n\n\[ \n{B}_{t} = \left( {\left( {{0.06}/\sqrt{0.0132}}\right) + 1}\right) {\left( {0.46} + \sqrt{0.0132}\right) }^{t} + \]\n\n\[ \n\left( {\left( {-{0.06}/\sqrt{0.0132}}\right) + 1}\right) {\left( {0.46} - \sqrt{0.0132}\right) }^{t} \]\n
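The closed forms can be checked against the recurrence itself (a numerical spot-check):

```python
import math

r = math.sqrt(0.0132)
r1, r2 = 0.46 + r, 0.46 - r                  # characteristic roots
C1, C2 = 0.07 / r + 0.5, -0.07 / r + 0.5     # coefficients for A_t
D1, D2 = 0.06 / r + 1.0, -0.06 / r + 1.0     # coefficients for B_t

# Iterate the coupled system directly
A_rec, B_rec = [1.0], [2.0]
for _ in range(10):
    A, B = A_rec[-1], B_rec[-1]
    A_rec.append(0.52 * A + 0.04 * B)
    B_rec.append(0.24 * A + 0.4 * B)

# Compare with the closed forms for t = 0, ..., 10
err = max(
    max(abs(C1 * r1**t + C2 * r2**t - A_rec[t]) for t in range(11)),
    max(abs(D1 * r1**t + D2 * r2**t - B_rec[t]) for t in range(11)),
)
print(err < 1e-9)  # True: the formulas reproduce the recurrence
```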
Theorem 15.5.1 If \( {r}_{1} \) and \( {r}_{2} \) are the roots to \( {x}^{2} - {px} + q = 0 \) then\n\n\[ \n{r}_{1} + {r}_{2} = p\;\text{ and }\;{r}_{1} \times {r}_{2} = q.\n\]
Proof.\n\n\[ \n{r}_{1} + {r}_{2} = \frac{p + \sqrt{{p}^{2} - {4q}}}{2} + \frac{p - \sqrt{{p}^{2} - {4q}}}{2} = \frac{p + \sqrt{{p}^{2} - {4q}} + p - \sqrt{{p}^{2} - {4q}}}{2} = p \n\]\n\n\[ \n{r}_{1} \times {r}_{2} = \frac{p + \sqrt{{p}^{2} - {4q}}}{2} \times \frac{p - \sqrt{{p}^{2} - {4q}}}{2} = \frac{{p}^{2} - \left( {{p}^{2} - {4q}}\right) }{4} = q \n\]\nEnd of proof.
Theorem 16.2.1 For the three forms of solutions in Equations 16.8, 16.9 and 16.10, \( {x}_{t} \rightarrow 0 \) and \( {y}_{t} \rightarrow 0 \) for all choices of \( {x}_{0} \) and \( {y}_{0} \) if and only if \[ \left| {r}_{1}\right| < 1\text{ and }\left| {r}_{2}\right| < 1;\;\text{ or }\;\left| {r}_{1}\right| < 1;\;\text{ or }\;\left| \rho \right| < 1, \] (16.11) depending on which of the three formulas describe \( {x}_{t} \). Thus the dynamical systems 16.3 are stable only if the corresponding conditions are met.
Proof. The results are valid because \( \mathop{\lim }\limits_{{t \rightarrow \infty }}{r}^{t} = 0 \) if \( \left| r\right| < 1 \) and \( \mathop{\lim }\limits_{{t \rightarrow \infty }}t \times {r}^{t} = 0 \) if \( \left| r\right| < 1 \) (for the latter limit, see Exercise 16.2.11). End of proof.
Theorem 16.2.2 The fate of \( {\mathbf{x}}_{\mathbf{t}} \) . The dynamical systems of Equations 16.3 and Equations 16.5, with characteristic equation\n\n\[ \n{z}^{2} - \left( {{m}_{1,1} + {m}_{2,2}}\right) \times z + {m}_{1,1}{m}_{2,2} - {m}_{2,1}{m}_{1,2} = {z}^{2} - {pz} + q = 0, \n\]\n\nare stable if and only if\n\n\[ \n0 \leq \left| p\right| < 1 + q < 2 \n\]
Proof. Danger: Obnubilation Zone. This argument is tedious and reading it can be delayed - a very long time.\n\nWe show that if \( 0 \leq \left| p\right| < 1 + q < 2 \) then \( \mathop{\lim }\limits_{{t \rightarrow \infty }}{x}_{t} = 0 \) . In the case of complex roots, \( {\rho }^{2} = q \)\n\n(Equation 15.26) and because \( 0 < 1 + q < 2,\left| q\right| < 1 \) so \( \left| \rho \right| < 1 \) . In the case of a repeated root, \( {r}_{1} \) ,\n\n\( {p}^{2} - {4q} = 0 \) and \( {r}_{1} = \frac{p}{2} \) . Because \( \left| p\right| < 2,\left| {r}_{1}\right| < 1 \) . Now suppose the roots \( {r}_{1} \) and \( {r}_{2} \) are real and distinct \( \left( {{p}^{2} - {4q} > 0}\right) \) and \( 0 \leq \left| p\right| < 1 + q < 2 \) . Then\n\n\[ \n\left| p\right| < 1 + q \n\]\n\n\[ \n1 + q\; > \; - p\;\text{ and }\;p\; < \;1 + q \n\]\n\n\[ \n1 + p\; > \; - q\;\text{ and }\; - q\; < \;1 - p \n\]\n\n\[ \n4 + {4p} + {p}^{2} > {p}^{2} - {4q}\text{ and }{p}^{2} - {4q} < 4 - {4p} + {p}^{2} \n\]\n\nBecause \( \left| p\right| < 2 \), both \( 2 + p \) and \( 2 - p \) are positive. Then\n\n\[ \n2 + p\; > \;\sqrt{{p}^{2} - {4q}}\;\text{ and }\;\sqrt{{p}^{2} - {4q}}\; < \;2 - p \n\]\n\n\[ \n- 2 - p < \; - \sqrt{{p}^{2} - {4q}}\; < \;\sqrt{{p}^{2} - {4q}}\; < \;2 - p \n\]\n\n\[ \n- 1 < \frac{p - \sqrt{{p}^{2} - {4q}}}{2} < \frac{p + \sqrt{{p}^{2} - {4q}}}{2} < 1 \n\]\n\nThus the roots, \( {r}_{1} \) and \( {r}_{2} \), are between -1 and 1 and \( \mathop{\lim }\limits_{{t \rightarrow \infty }}{x}_{t} = 0 \) .\n\nNow suppose \( \mathop{\lim }\limits_{{t \rightarrow \infty }}{x}_{t} = 0 \) . Then if the roots are real and distinct they must lie between -1 and 1 and the steps of the previous argument may be reversed to show that \( \left| p\right| < q + 1 < 2 \) .\n\nIf the root \( {r}_{1} \) is repeated, then \( \left| {r}_{1}\right| < 1 \) and \( {r}_{1}^{2} = q < 1 \) and \( q + 1 < 2 \) . Furthermore, \( {p}^{2} - {4q} = 0 \), so \( \left| p\right| = 2\sqrt{q} \) . 
Now,\n\n\[ \n{\left( 1 - \sqrt{q}\right) }^{2} > 0,\;1 - 2\sqrt{q} + q > 0,\;1 + q > 2\sqrt{q} = \left| p\right| \n\]\n\nIf the roots are complex, then \( \left| \rho \right| < 1 \) and \( {\rho }^{2} = q \) (Equation 15.26) and \( q < 1 \) and \( q + 1 < 2 \) . Furthermore, \( {p}^{2} - {4q} < 0 \), so \( \left| p\right| < 2\sqrt{q} < q + 1 \), as above.
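The criterion can be tested numerically by sampling \( \left( {p, q}\right) \) pairs at random and comparing the root condition with the inequality \( \left| p\right| < 1 + q < 2 \) (a sketch; the sample ranges and count are arbitrary choices):

```python
import cmath
import random

random.seed(1)  # deterministic sampling
for _ in range(10000):
    p = random.uniform(-3.0, 3.0)
    q = random.uniform(-2.0, 2.0)
    s = cmath.sqrt(p * p - 4.0 * q)  # works for real and complex root cases
    roots_inside = abs((p + s) / 2.0) < 1.0 and abs((p - s) / 2.0) < 1.0
    criterion = abs(p) < 1.0 + q < 2.0
    assert roots_inside == criterion
print("criterion agrees with the root condition on 10000 samples")
```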
Example 16.3.1 In the next section we consider two populations that have a symbiotic relationship, a special case of which is\n\n\[ \n{x}_{t + 1} = {x}_{t} + \frac{5}{98}{x}_{t}\left( {1 + \frac{4}{10}{y}_{t} - {x}_{t}}\right) = F\left( {{x}_{t},{y}_{t}}\right) \n\]\n\n(16.23)\n\n\[ \n{y}_{t + 1} = {y}_{t} + \frac{7}{120}{y}_{t}\left( {1 + \frac{5}{7}{x}_{t} - {y}_{t}}\right) = G\left( {{x}_{t},{y}_{t}}\right) . \n\]\n\nAn equilibrium point of the system is \( \left( {{1.96},{2.4}}\right) \) and the Jacobian matrix at \( \left( {{1.96},{2.4}}\right) \) is computed by\n\n\[ \nF\left( {x, y}\right) = \frac{103}{98}x + \frac{2}{98}x \times y - \frac{5}{98}{x}^{2}\;G\left( {x, y}\right) = \frac{127}{120}y + \frac{1}{24}{xy} - \frac{7}{120}{y}^{2} \n\]\n\n\[ \n{F}_{1}\left( {x, y}\right) = \frac{103}{98} + \frac{2}{98}y - \frac{10}{98}x\;{F}_{2}\left( {x, y}\right) = \frac{2}{98}x \n\]\n\n\[ \n{G}_{1}\left( {x, y}\right) = \frac{1}{24}y\;{G}_{2}\left( {x, y}\right) = \frac{127}{120} + \frac{1}{24}x - \frac{14}{120}y \n\]\n\n\[ \n{F}_{1}\left( {{1.96},{2.4}}\right) = {0.9}\;{F}_{2}\left( {{1.96},{2.4}}\right) = {0.04} \n\]\n\n\[ \n{G}_{1}\left( {{1.96},{2.4}}\right) = {0.1}\;{G}_{2}\left( {{1.96},{2.4}}\right) = {0.86} \n\]\n\nThen the Jacobian matrix and homogeneous local linear approximation to Equations 16.23 at the equilibrium point \( \left( {{1.96},{2.4}}\right) \) are\n\n\[ \n\left\lbrack \begin{array}{ll} {0.9} & {0.04} \\ {0.1} & {0.86} \end{array}\right\rbrack \;\begin{array}{l} {\xi }_{t + 1} = {0.9}{\xi }_{t} + {0.04}{\eta }_{t} \\ {\eta }_{t + 1} = {0.1}{\xi }_{t} + {0.86}{\eta }_{t} \end{array} \n\]\n\n(16.24)\n\nThe alert reader may recognize this linear dynamical system as being that of Equations 16.6A for which the characteristic roots are approximately 0.946 and 0.814 . 
The homogeneous linear dynamical system 16.24 is stable.\n\nBecause the local linear approximation 16.24 to the nonlinear dynamical system 16.23 at the equilibrium point \( \left( {{1.96},{2.4}}\right) \) is stable, the nonlinear dynamical system 16.23 is asymptotically stable at (1.96,2.4).
The basis for the previous paragraph is in Theorem 16.3.1. The idea of the theorem and of local linear approximation can be seen by an algebraic rearrangement of the nonlinear system 16.23\n\n\[ \n{x}_{t + 1} - {1.96} = {0.9}\left( {{x}_{t} - {1.96}}\right) + {0.04}\left( {{y}_{t} - {2.4}}\right) \n\]\n\n\[ \n+ \frac{2}{98}\left( {{x}_{t} - {1.96}}\right) \left( {{y}_{t} - {2.4}}\right) - \frac{5}{98}{\left( {x}_{t} - {1.96}\right) }^{2} \n\]\n\n\[ \n{y}_{t + 1} - {2.4} = {0.1}\left( {{x}_{t} - {1.96}}\right) + {0.86}\left( {{y}_{t} - {2.4}}\right) \n\]\n\n\[ \n+ \frac{5}{120}\left( {{x}_{t} - {1.96}}\right) \left( {{y}_{t} - {2.4}}\right) - \frac{7}{120}{\left( {y}_{t} - {2.4}\right) }^{2} \n\]\n\nThe linear terms are those of the local linear approximation. The idea of Theorem 16.3.1 is that if \( \left( {{x}_{t},{y}_{t}}\right) \) is close to the equilibrium \( \left( {{1.96},{2.4}}\right) \) so that \( {x}_{t} - {1.96} \) and \( {y}_{t} - {2.4} \) are ’small’ then the quadratic terms \( \left( {{x}_{t} - {1.96}}\right) \left( {{y}_{t} - {2.4}}\right) ,{\left( {x}_{t} - {1.96}\right) }^{2} \) and \( {\left( {y}_{t} - {2.4}\right) }^{2} \) are ’small’ squared, even smaller, and contribute very little in computing the trajectory.
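The Jacobian entries and characteristic roots quoted above can be recomputed directly (a spot-check):

```python
import math

# Jacobian of Equations 16.23 at the equilibrium (1.96, 2.4)
x, y = 1.96, 2.4
F1 = 103/98 + (2/98) * y - (10/98) * x
F2 = (2/98) * x
G1 = y / 24
G2 = 127/120 + x / 24 - (14/120) * y

p = F1 + G2            # trace
q = F1 * G2 - F2 * G1  # determinant
disc = math.sqrt(p * p - 4 * q)
r1, r2 = (p + disc) / 2, (p - disc) / 2
print(round(F1, 2), round(F2, 2), round(G1, 2), round(G2, 2))  # 0.9 0.04 0.1 0.86
print(round(r1, 3), round(r2, 3))  # 0.946 0.814
```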
Example 17.4.1 An extreme example. Shown in Figure 17.9 are the direction field and the phase plane graph of \[ {y}^{\prime } = \left( {y - 1}\right) \times \left( {y - 2}\right) \times \left( {y - 3}\right) \times \left( {y - 4}\right) \]
It is easy to solve \( f\left( y\right) = \left( {y - 1}\right) \left( {y - 2}\right) \left( {y - 3}\right) \left( {y - 4}\right) = 0 \) and see that \( 1,2,3 \) and 4 are equilibrium points, and equivalently that \( y = 1, y = 2, y = 3 \) and \( y = 4 \) are equilibrium solutions to the differential equation. Solutions, \( y\left( t\right) \), with \( y\left( 0\right) \) close to 1 will be asymptotic to \( y = 1 \) and solutions with \( y\left( 0\right) \) close to 3 will be asymptotic to \( y = 3 \) . Also, solutions starting close to 2 will not be asymptotic to \( y = 2 \) and solutions starting close to 4 will not be asymptotic to \( y = 4 \) .
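This behavior is predicted by the sign of \( f^{\prime} \) at each equilibrium, the criterion of Theorem 17.4.1 below. A quick numerical check (names are illustrative) estimates \( f^{\prime}(y) \) at each equilibrium by a central difference:

```python
# Estimate f'(y) at each equilibrium of y' = (y-1)(y-2)(y-3)(y-4)
# by a central difference; the sign of f' predicts stability.

def f(y):
    return (y - 1) * (y - 2) * (y - 3) * (y - 4)

def fprime(y, h=1e-6):
    # central-difference approximation of the derivative
    return (f(y + h) - f(y - h)) / (2 * h)

for ye in (1, 2, 3, 4):
    print(ye, fprime(ye))
# f' is negative at 1 and 3 (stable) and positive at 2 and 4 (unstable)
```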
Theorem 17.4.1 If \( f\left( y\right) \) and \( {f}^{\prime }\left( y\right) \) are continuous, an equilibrium point \( {y}_{e} \) of \( {y}^{\prime } = f\left( y\right) \) is asymptotically stable if \( {f}^{\prime }\left( {y}_{e}\right) \) is negative.
Proof: This proof is technical and your reading of it can be delayed, perhaps for a long time. Suppose the hypothesis of the theorem holds and let \( -m = f^{\prime}(y_e) < 0 \). By hypothesis, \( y_e \) is an equilibrium point, so that \( f(y_e) = 0 \); therefore the function \( \bar{y}(t) \equiv y_e \) is a solution to \( y^{\prime}(t) = f(y(t)) \). For convenience suppose that \( y_e = 0 \) (so that the equilibrium solution is \( \bar{y}(t) \equiv 0 \)); then \( f^{\prime}(0) = -m < 0 \). See Figure 17.10, where the direction field of the very simple case \( f(y) = -y \), the equilibrium solution \( \bar{y}(t) = 0 \), and a typical solution \( y = e^{-t} \) are drawn. We need to show that in the general case the solutions are similar to those of this simple case.
Example 17.4.2 Parameter Reduction. It is customary to divide the logistic differential equation

\[
p^{\prime}(t) = r \times p(t) \times \left( 1 - \frac{p(t)}{M} \right)
\]

by \( M \) to obtain

\[
\frac{p^{\prime}(t)}{M} = r \times \frac{p(t)}{M} \times \left( 1 - \frac{p(t)}{M} \right)
\]

Then let

\[
u(t) = \frac{p(t)}{M} \quad \text{and note that} \quad u^{\prime}(t) = \frac{p^{\prime}(t)}{M}
\]

to obtain

\[
u^{\prime}(t) = r \times u(t) \times \left( 1 - u(t) \right) \quad \text{Fraction Logistic.}
\]
The function, \( u \), is the fraction of the carrying capacity, \( M \), used by the population. Because \( u \) is the ratio of \( p \) to \( M \), both of which have units of population numbers, \( u \) is dimensionless.
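A short numerical check confirms that the substitution changes only the scale, not the dynamics: Euler-stepping both equations (with illustrative parameter values of my choosing) gives trajectories where \( p/M \) and \( u \) agree.

```python
# Euler-step the logistic equation p' = r p (1 - p/M) and the
# dimensionless version u' = r u (1 - u); p/M and u should agree.

r, M, h, steps = 0.5, 100.0, 0.1, 200   # illustrative values
p, u = 10.0, 10.0 / M                   # u(0) = p(0)/M

for _ in range(steps):
    p = p + h * r * p * (1 - p / M)
    u = u + h * r * u * (1 - u)

print(p / M, u)            # the two agree, up to rounding
```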
We compute points to approximate the solution to

\[
v(0) = 0.2 \qquad v^{\prime}(t) = v(t) \times e^{-v(t)} - 0.1 \times v(t) \qquad 0 \leq t \leq 10
\]
This is Ricker's model of fish populations with parameter reduction, Equation 17.15. First we divide the time axis \( \left\lbrack 0, 10 \right\rbrack \) into intervals of length 2 and let

\[
t_0 = 0 \quad t_1 = 2 \quad t_2 = 4 \quad t_3 = 6 \quad t_4 = 8 \quad \text{and} \quad t_5 = 10
\]

Our objective is to compute \( v_0, v_1, \cdots \), and \( v_5 \) so that the points \( \left( t_0, v_0 \right), \left( t_1, v_1 \right), \cdots \), and \( \left( t_5, v_5 \right) \) will lie close to the graph of the solution, \( v(t) \). The method we use is called Euler's method (see Definition 17.5.1).

Step 0. Let \( v_0 = 0.2 \). Then \( \left( t_0, v_0 \right) \) is a point of the graph of the solution.

Step 1. From the differential equation, the slope of the solution at \( \left( 0, 0.2 \right) \) is

\[
v^{\prime}(0) = v(0) \times e^{-v(0)} - 0.1 \times v(0) = 0.2 e^{-0.2} - 0.1 \times 0.2 = 0.1437
\]

We construct the line segment between \( t_0 = 0 \) and \( t_1 = 2 \) that starts at \( \left( 0, 0.2 \right) \) and has slope 0.1437. Its right end point is at \( t_1 = 2 \) and the ordinate there is

\[
v_1 = 0.2 + 2 \times 0.1437 = 0.4874.
\]

See Figure 17.11A. Pattern: Note that \( 0.1437 = v_0 \times e^{-v_0} - 0.1 \times v_0 \), so that

\[
v_1 = v_0 + 2 \times \left( v_0 \times e^{-v_0} - 0.1 \times v_0 \right)
\]
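The pattern above carries through all five steps. A minimal Python sketch of Euler's method for this problem (the names are my own):

```python
# Euler's method for v' = v e^{-v} - 0.1 v, v(0) = 0.2,
# on [0, 10] with step size h = 2.
import math

def slope(v):
    # right-hand side of the differential equation
    return v * math.exp(-v) - 0.1 * v

h, v = 2.0, [0.2]          # v[k] approximates v(t_k), t_k = 2k
for k in range(5):
    v.append(v[k] + h * slope(v[k]))

print(v)   # v[1] = 0.4874..., matching the hand computation
```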
We find an approximate solution to

\[
y(0) = 2 \qquad y^{\prime}(t) = t - y
\]

for \( 0 \leq t \leq 4 \); the graph of the solution is shown in Figure 17.13A. Both \( t \) and \( y \) appear in the RHS of this problem.
The basic pattern is the same.

\[
y_0 = y(0) = 2 \qquad y_{k+1} = y_k + h \times \text{slope}_k
\]

The computations are organized in Table 17.1 for time-interval size \( h = 1 \).

Our approximation is shown in Figure 17.13A and is not close enough to the solution to satisfy us. The approximation computed using time-interval size \( h = 0.25 \) is shown in Figure 17.13B and it is more acceptable. The initial and final computations for \( h = 0.25 \) are shown in Table 17.2.

Because the RHS of \( y^{\prime}(t) = t - y \) involves both \( t \) and \( y \), these numbers are not computed on a calculator using only ANS, the previous-answer key. A calculator program that will do the computations is included in Table 17.3.
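The table computations can be reproduced in a few lines. The sketch below (names mine) runs Euler's method for both step sizes and compares the values at \( t = 4 \) with the exact solution \( y(t) = t - 1 + 3 e^{-t} \), which solves this initial value problem:

```python
# Euler's method for y' = t - y, y(0) = 2, on [0, 4];
# smaller steps give a better approximation at t = 4.
import math

def euler(h):
    t, y = 0.0, 2.0
    while t < 4.0 - 1e-12:       # step until t reaches 4
        y = y + h * (t - y)
        t = t + h
    return y

exact = 4 - 1 + 3 * math.exp(-4.0)   # y(t) = t - 1 + 3 e^{-t}
for h in (1.0, 0.25):
    print(h, euler(h), abs(euler(h) - exact))
# the h = 0.25 run lands closer to the exact value than h = 1
```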