Theorem 5.9 Let \( A \) be an \( m \times n \) matrix and let \( \alpha > 0 \). Then for each \( y \in \mathbb{C}^m \) there exists a unique \( x_\alpha \in \mathbb{C}^n \) such that

\[
\|Ax_\alpha - y\|_2^2 + \alpha \|x_\alpha\|_2^2 = \inf_{x \in \mathbb{C}^n} \left\{ \|Ax - y\|_2^2 + \alpha \|x\|_2^2 \right\}.
\]

(5.20)

The minimizing vector \( x_\alpha \) is given by the unique solution of the linear system (5.18).
Proof. (Compare the proof of Theorem 3.51.) We first note the relation

\[
\|Ax - y\|_2^2 + \alpha \|x\|_2^2 = \|Ax_\alpha - y\|_2^2 + \alpha \|x_\alpha\|_2^2
+ 2\operatorname{Re}\left( x - x_\alpha,\, \alpha x_\alpha + A^* A x_\alpha - A^* y \right)
+ \|Ax - Ax_\alpha\|_2^2 + \alpha \|x - x_\alpha\|_2^2,
\]

(5.21)

which is valid for all \( x, x_\alpha \in \mathbb{C}^n \). From this it is obvious that the solution \( x_\alpha \) of (5.18) satisfies (5.20).

Conversely, let \( x_\alpha \) be a solution of (5.20) and assume that

\[
\alpha x_\alpha + A^* A x_\alpha \neq A^* y.
\]

Then, setting \( z := \alpha x_\alpha + A^* A x_\alpha - A^* y \), for \( x := x_\alpha - \varepsilon z \) with \( \varepsilon \in \mathbb{R} \) we obtain from (5.21) that

\[
\|Ax - y\|_2^2 + \alpha \|x\|_2^2 = \|Ax_\alpha - y\|_2^2 + \alpha \|x_\alpha\|_2^2 - 2\varepsilon a + \varepsilon^2 b,
\]

where

\[
a := \|z\|_2^2 \quad\text{and}\quad b := \|Az\|_2^2 + \alpha \|z\|_2^2
\]

are both positive. Choosing \( \varepsilon = a/b \) we obtain

\[
\|Ax - y\|_2^2 + \alpha \|x\|_2^2 < \|Ax_\alpha - y\|_2^2 + \alpha \|x_\alpha\|_2^2,
\]

which contradicts (5.20).
Yes
Theorem 5.10 Let \( A \) be an \( m \times n \) matrix and let \( y \in A(\mathbb{C}^n) \), \( y^\delta \in \mathbb{C}^m \) satisfy

\[
\|y^\delta - y\|_2 \leq \delta < \|y^\delta\|_2
\]

for \( \delta > 0 \). Then there exists a unique \( \alpha = \alpha(\delta) > 0 \) such that the unique solution \( x_\alpha \) of (5.23) satisfies

\[
\|Ax_\alpha - y^\delta\|_2 = \delta.
\]

(5.24)

This discrepancy principle for Tikhonov regularization is regular in the sense that if the error level \( \delta \) tends to zero, then

\[
x_\alpha \rightarrow A^\dagger y, \quad \delta \rightarrow 0.
\]

(5.25)
Proof. We have to show that the function \( F : (0,\infty) \rightarrow \mathbb{R} \) defined by

\[
F(\alpha) := \|Ax_\alpha - y^\delta\|_2^2 - \delta^2
\]

has a unique zero. In terms of a singular system, from the representation (5.19) we find that

\[
F(\alpha) = \sum_{j=1}^{m} \frac{\alpha^2}{(\alpha + \mu_j^2)^2} \left| \left( y^\delta, v_j \right) \right|^2 - \delta^2.
\]

Therefore, \( F \) is continuous and strictly monotonically increasing with the limits \( F(\alpha) \rightarrow -\delta^2 < 0 \) as \( \alpha \rightarrow 0 \) and \( F(\alpha) \rightarrow \|y^\delta\|_2^2 - \delta^2 > 0 \) as \( \alpha \rightarrow \infty \). Hence, \( F \) has exactly one zero \( \alpha = \alpha(\delta) \).

Note that the condition \( \|y^\delta - y\|_2 \leq \delta < \|y^\delta\|_2 \) implies that \( y \neq 0 \).
Using (5.23), (5.24), and the triangle inequality we can estimate

\[
\|y^\delta\|_2 - \delta = \|y^\delta\|_2 - \|Ax_\alpha - y^\delta\|_2 \leq \|Ax_\alpha\|_2
\]

and

\[
\alpha \|Ax_\alpha\|_2 = \|AA^*(y^\delta - Ax_\alpha)\|_2 \leq \|AA^*\|_2 \, \delta.
\]

Combining these two inequalities and using \( \|y^\delta\|_2 \geq \|y\|_2 - \delta \) yields

\[
\alpha \leq \frac{\|AA^*\|_2 \, \delta}{\|y\|_2 - 2\delta}.
\]

This implies that \( \alpha \rightarrow 0 \) as \( \delta \rightarrow 0 \). Now the convergence (5.25) follows from the representations (5.13) for \( A^\dagger y \) and (5.19) for \( x_\alpha \) (with \( y \) replaced by \( y^\delta \)) and the fact that \( \|y^\delta - y\|_2 \rightarrow 0 \) as \( \delta \rightarrow 0 \).
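For illustration (a sketch we add here, not part of the original text), the parameter \( \alpha = \alpha(\delta) \) of the discrepancy principle can be computed by solving the regularized normal equations (5.18) for trial values of \( \alpha \) and bisecting on \( F(\alpha) \), which the proof above shows to be continuous and strictly increasing. The function names are our own, and a small well-conditioned square test matrix is assumed.

```python
import numpy as np

def tikhonov(A, y, alpha):
    # solve the regularized normal equations (alpha*I + A^*A) x = A^* y
    n = A.shape[1]
    return np.linalg.solve(alpha * np.eye(n) + A.conj().T @ A, A.conj().T @ y)

def discrepancy_alpha(A, y_delta, delta, lo=1e-12, hi=1e12):
    # geometric bisection on F(alpha) = ||A x_alpha - y_delta||^2 - delta^2,
    # which is continuous and strictly increasing in alpha
    def F(a):
        return np.linalg.norm(A @ tikhonov(A, y_delta, a) - y_delta) ** 2 - delta ** 2
    for _ in range(200):
        mid = np.sqrt(lo * hi)
        if F(mid) < 0:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)
```

After 200 geometric bisection steps the bracket \([ \mathrm{lo}, \mathrm{hi} ]\) has shrunk far below floating-point resolution, so the returned \( \alpha \) satisfies (5.24) to working accuracy.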
Yes
Theorem 6.1 Let \( D \subset \mathbb{R} \) be a closed interval and let \( f : D \rightarrow D \) be a continuously differentiable function with the property

\[
q := \sup_{x \in D} |f'(x)| < 1.
\]

Then the equation \( f(x) = x \) has a unique solution \( x \in D \), and the successive approximations

\[
x_{\nu+1} := f(x_\nu), \quad \nu = 0, 1, 2, \ldots,
\]

with arbitrary \( x_0 \in D \) converge to this solution. We have the a priori error estimate

\[
|x_\nu - x| \leq \frac{q^\nu}{1 - q} |x_1 - x_0|
\]

and the a posteriori error estimate

\[
|x_\nu - x| \leq \frac{q}{1 - q} |x_\nu - x_{\nu-1}|
\]

for all \( \nu \in \mathbb{N} \).
Proof. Equipped with the norm \( \|\cdot\| = |\cdot| \) the space \( \mathbb{R} \) is complete. By the mean value theorem, for \( x, y \in D \) with \( x < y \), we have that

\[
f(x) - f(y) = f'(\xi)(x - y)
\]

for some intermediate point \( \xi \in (x, y) \). Hence

\[
|f(x) - f(y)| \leq \sup_{\xi \in D} |f'(\xi)| \, |x - y| = q \, |x - y|,
\]

which is also valid for \( x, y \in D \) with \( x \geq y \). Therefore, \( f \) is a contraction, and the assertion follows from the Banach fixed point Theorem 3.46.
Yes
Theorem 6.2 Let \( x \) be a fixed point of a continuously differentiable function \( f \) such that \( \left| {{f}^{\prime }\left( x\right) }\right| < 1 \) . Then the method of successive approximations \( {x}_{\nu + 1} \mathrel{\text{:=}} f\left( {x}_{\nu }\right) \) is locally convergent; i.e., there exists a neighborhood \( B \) of the fixed point \( x \) such that the successive approximations converge to \( x \) for all \( {x}_{0} \in B \) .
Proof. Since \( f' \) is continuous and \( |f'(x)| < 1 \), there exist constants \( 0 < q < 1 \) and \( \delta > 0 \) such that \( |f'(y)| \leq q \) for all \( y \in B := [x - \delta, x + \delta] \). Then we have that

\[
|f(y) - x| = |f(y) - f(x)| \leq q |y - x| \leq |y - x| \leq \delta
\]

for all \( y \in B \); i.e., \( f \) maps \( B \) into itself and is a contraction \( f : B \rightarrow B \). Now the statement of the theorem follows from Theorem 6.1.
Yes
In order to describe a division by iteration, for \( a > 0 \) we consider the function \( f : \mathbb{R} \rightarrow \mathbb{R} \) given by \( f\left( x\right) \mathrel{\text{:=}} {2x} - a{x}^{2} \) . The graph of this function is a parabola with maximum value \( 1/a \) attained at \( 1/a \) . By solving the quadratic equation \( f\left( x\right) = x \) it can be seen that \( f \) has the fixed points \( x = 0 \) and \( x = 1/a \) . Obviously, \( f \) maps the open interval \( \left( {0,2/a}\right) \) into \( \left( {0,1/a}\right) \) . Since \( {f}^{\prime }\left( x\right) = 2\left( {1 - {ax}}\right) \), we have \( {f}^{\prime }\left( 0\right) = 2 \) and \( {f}^{\prime }\left( {1/a}\right) = 0 \) .
From the property \( x < f(x) < 1/a \), which is valid for \( 0 < x < 1/a \), it follows that the sequence \( x_{\nu+1} := 2x_\nu - a x_\nu^2 \) is monotonically increasing and bounded above by \( 1/a \). Hence, the successive approximations converge to the fixed point \( x = 1/a \) for arbitrarily chosen \( x_0 \in (0, 2/a) \). Figure 6.2 illustrates the convergence. The numerical results are for \( a = 2 \) and two different starting points, \( x_0 = 0.3 \) and \( x_0 = 0.4 \).
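The division-free iteration is immediate to program. The following Python sketch (the function name is our own) reproduces the example; note that only multiplication and subtraction are used.

```python
def reciprocal(a, x0, tol=1e-15, max_iter=100):
    """Approximate 1/a by the iteration x_{nu+1} = 2*x_nu - a*x_nu**2,
    which requires no division; needs a starting value x0 in (0, 2/a)."""
    x = x0
    for _ in range(max_iter):
        x_next = 2 * x - a * x * x
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```

For \( a = 2 \) both starting points from the text, \( x_0 = 0.3 \) and \( x_0 = 0.4 \), lead to the fixed point \( 1/2 \) after a handful of iterations.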
Yes
For computing the square root of a positive real number \( a \) by an iterative method we consider the function \( f : (0,\infty) \rightarrow (0,\infty) \) given by

\[
f(x) := \frac{1}{2}\left( x + \frac{a}{x} \right).
\]
By solving the quadratic equation \( f(x) = x \) it can be seen that \( f \) has the fixed point \( x = \sqrt{a} \). By the arithmetic-geometric mean inequality we have that \( f(x) \geq \sqrt{a} \) for \( x > 0 \); i.e., \( f \) maps the open interval \( (0,\infty) \) into \( [\sqrt{a},\infty) \), and therefore it maps the closed interval \( [\sqrt{a},\infty) \) into itself. From

\[
f'(x) = \frac{1}{2}\left( 1 - \frac{a}{x^2} \right)
\]

it follows that

\[
q := \sup_{\sqrt{a} \leq x < \infty} |f'(x)| = \frac{1}{2}.
\]

Hence \( f : [\sqrt{a},\infty) \rightarrow [\sqrt{a},\infty) \) is a contraction. Therefore, by Theorem 6.1 the successive approximations

\[
x_{\nu+1} := \frac{1}{2}\left( x_\nu + \frac{a}{x_\nu} \right), \quad \nu = 0, 1, \ldots,
\]

converge to the square root \( \sqrt{a} \) for each \( x_0 > 0 \), and we have the a posteriori error estimate

\[
|\sqrt{a} - x_\nu| \leq |x_\nu - x_{\nu-1}|.
\]
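This iteration (Heron's method) can be sketched as follows; the a posteriori estimate above, with \( q/(1-q) = 1 \), justifies stopping as soon as consecutive iterates agree to the desired tolerance. The function name is our own.

```python
def sqrt_heron(a, x0=1.0, tol=1e-12, max_iter=100):
    """Heron's iteration x_{nu+1} = (x_nu + a/x_nu)/2; the stopping rule
    uses the a posteriori estimate |sqrt(a) - x_nu| <= |x_nu - x_{nu-1}|."""
    x = x0
    for _ in range(max_iter):
        x_next = 0.5 * (x + a / x)
        if abs(x_next - x) <= tol:
            return x_next
        x = x_next
    return x
```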
Yes
Example 6.5 Consider the function \( f : [0,1] \rightarrow [0,1] \) given by

\[
f(x) := \cos x.
\]

Here we have

\[
q = \sup_{0 \leq x \leq 1} |f'(x)| = \sin 1 < 1,
\]
and Theorem 6.1 implies that the successive approximations \( {x}_{\nu + 1} \mathrel{\text{:=}} \cos {x}_{\nu } \) converge to the unique solution \( x \) of \( \cos x = x \) for each \( {x}_{0} \in \left\lbrack {0,1}\right\rbrack \) . Table 6.1 illustrates the convergence, which is notably slower than in the two previous examples.
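A short Python sketch (our own, with an illustrative function name) reproduces this computation; with \( q = \sin 1 \approx 0.84 \) the linear convergence is indeed slow compared with the previous examples.

```python
import math

def cos_fixed_point(x0=1.0, tol=1e-10, max_iter=1000):
    # successive approximations x_{nu+1} = cos(x_nu); linear convergence
    # with contraction constant q = sin(1) on [0, 1]
    x = x0
    for _ in range(max_iter):
        x_next = math.cos(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```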
Yes
Example 6.6 The function \( h : (0,\infty) \rightarrow (-\infty,\infty) \) given by \( h(x) := x + \ln x \) is strictly monotonically increasing with limits \( \lim_{x \rightarrow 0} h(x) = -\infty \) and \( \lim_{x \rightarrow \infty} h(x) = \infty \). Therefore, the function \( f(x) := -\ln x \) has a unique fixed point \( x \). Since this fixed point must satisfy \( 0 < x < 1 \), the derivative
\[
|f'(x)| = \frac{1}{x} > 1
\]

implies that \( f \) is not contracting in a neighborhood of the fixed point. However, we can still design a convergent scheme, because \( x = -\ln x \) is equivalent to \( e^{-x} = x \). We consider the inverse function

\[
g(x) := e^{-x}
\]

of \( f \), which has derivative \( |g'(x)| = e^{-x} < 1 \) at the fixed point, so that we can apply Theorem 6.2. Obviously, for each \( 0 < a < 1/e \) the exponential function \( g \) maps the interval \( [a, 1] \) into itself. Since

\[
q = \sup_{a \leq x \leq 1} |g'(x)| = e^{-a} < 1,
\]

by Theorem 6.1 it follows that for arbitrary \( x_0 > 0 \) the successive approximations \( x_{\nu+1} = e^{-x_\nu} \) converge to the unique solution of \( x = e^{-x} \).
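The rescued iteration can be sketched as follows (our own code, illustrative names); it converges to the unique solution of \( x = e^{-x} \), whereas iterating \( f(x) = -\ln x \) would diverge.

```python
import math

def exp_fixed_point(x0=1.0, tol=1e-12, max_iter=1000):
    # iterate x_{nu+1} = exp(-x_nu); contraction with q = e^{-a} on [a, 1]
    x = x0
    for _ in range(max_iter):
        x_next = math.exp(-x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```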
Yes
Theorem 6.8 Let \( D \subset \mathbb{R}^n \) be closed and convex (with a nonempty interior) and let \( f : D \rightarrow D \) be a continuous mapping. Assume further that \( f \) is continuously differentiable in the interior of \( D \) and that its Jacobian can be continuously extended to all of \( D \) such that

\[
q := \sup_{x \in D} \|f'(x)\| < 1
\]

in some norm \( \|\cdot\| \) on \( \mathbb{R}^n \). Then the equation \( f(x) = x \) has a unique solution \( x \in D \), and the successive approximations

\[
x_{\nu+1} := f(x_\nu), \quad \nu = 0, 1, 2, \ldots,
\]

converge for each \( x_0 \in D \) to this fixed point. We have the a priori error estimate

\[
\|x_\nu - x\| \leq \frac{q^\nu}{1 - q} \|x_1 - x_0\|
\]

and the a posteriori error estimate

\[
\|x_\nu - x\| \leq \frac{q}{1 - q} \|x_\nu - x_{\nu-1}\|
\]

for all \( \nu \in \mathbb{N} \).
Proof. By the mean value Theorem 6.7 the mapping \( f : D \rightarrow D \) is a contraction.

By Theorem 3.26 each of the conditions

\[
\sup_{x \in D} \max_{j=1,\ldots,n} \sum_{k=1}^{n} \left| \frac{\partial f_j}{\partial x_k}(x) \right| < 1,
\]

\[
\sup_{x \in D} \max_{k=1,\ldots,n} \sum_{j=1}^{n} \left| \frac{\partial f_j}{\partial x_k}(x) \right| < 1,
\]

\[
\sup_{x \in D} \left[ \sum_{j,k=1}^{n} \left| \frac{\partial f_j}{\partial x_k}(x) \right|^2 \right]^{1/2} < 1
\]

ensures convergence of the successive approximations in Theorem 6.8.
Yes
Theorem 6.9 Let \( x \) be a fixed point of a continuously differentiable function \( f \) such that \( \begin{Vmatrix}{{f}^{\prime }\left( x\right) }\end{Vmatrix} < 1 \) in some norm \( \parallel \cdot \parallel \) on \( {\mathbb{R}}^{n} \) . Then the method of successive approximations \( {x}_{\nu + 1} \mathrel{\text{:=}} f\left( {x}_{\nu }\right) \) is locally convergent; i.e., there exists a neighborhood \( B \) of the fixed point \( x \) such that the successive approximations converge to \( x \) for all starting elements \( {x}_{0} \in B \) .
Null
No
Example 6.10 For the system

\[
x_1 = 0.5 \cos x_1 - 0.5 \sin x_2,
\]
\[
x_2 = 0.5 \sin x_1 + 0.5 \cos x_2,
\]

we have

\[
f'(x) = \begin{pmatrix} -0.5 \sin x_1 & -0.5 \cos x_2 \\ 0.5 \cos x_1 & -0.5 \sin x_2 \end{pmatrix},
\]

and therefore \( \|f'(x)\|_2 \leq \sqrt{0.5} \) for all \( x \in \mathbb{R}^2 \). Hence Theorem 6.8 is applicable.
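The successive approximations for this system can be sketched as follows (our own code, illustrative names); with \( q \leq \sqrt{0.5} \approx 0.71 \) the iterates settle quickly from any starting point.

```python
import math

def f(x):
    # right-hand side of the fixed point system of Example 6.10
    x1, x2 = x
    return (0.5 * math.cos(x1) - 0.5 * math.sin(x2),
            0.5 * math.sin(x1) + 0.5 * math.cos(x2))

def fixed_point_2d(x0=(0.0, 0.0), tol=1e-12, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if max(abs(x_next[0] - x[0]), abs(x_next[1] - x[1])) < tol:
            return x_next
        x = x_next
    return x
```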
Null
No
Example 6.12 For the function

\[
f(x) := a - \frac{1}{x},
\]

where \( a > 0 \), the Newton iteration is given by

\[
x_{\nu+1} := 2x_\nu - a x_\nu^2.
\]

By Example 6.3 we have convergence for all \( x_0 \in (0, 2/a) \).
Null
No
Theorem 6.14 Let \( D \subset \mathbb{R}^n \) be open and convex and let \( f : D \rightarrow \mathbb{R}^n \) be continuously differentiable. Assume that for some norm \( \|\cdot\| \) on \( \mathbb{R}^n \) and some \( x_0 \in D \) the following conditions hold:

(a) \( f \) satisfies

\[
\|f'(x) - f'(y)\| \leq \gamma \|x - y\|
\]

for all \( x, y \in D \) and some constant \( \gamma > 0 \).

(b) The Jacobian matrix \( f'(x) \) is nonsingular for all \( x \in D \), and there exists a constant \( \beta > 0 \) such that

\[
\left\| [f'(x)]^{-1} \right\| \leq \beta, \quad x \in D.
\]

(c) For the constants

\[
\alpha := \left\| [f'(x_0)]^{-1} f(x_0) \right\| \quad\text{and}\quad q := \alpha\beta\gamma
\]

the inequality

\[
q < \frac{1}{2}
\]

is satisfied.

(d) For \( r := 2\alpha \) the closed ball \( B[x_0, r] := \{ x : \|x - x_0\| \leq r \} \) is contained in \( D \).

Then \( f \) has a unique zero \( x^* \) in \( B[x_0, r] \). Starting with \( x_0 \) the Newton iteration

\[
x_{\nu+1} := x_\nu - [f'(x_\nu)]^{-1} f(x_\nu), \quad \nu = 0, 1, \ldots,
\]

(6.4)

is well-defined. The sequence \( (x_\nu) \) converges to the zero \( x^* \) of \( f \), and we have the error estimate

\[
\|x_\nu - x^*\| \leq 2\alpha \, q^{2^\nu - 1}, \quad \nu = 0, 1, \ldots.
\]
Proof. 1. Let \( x, y, z \in D \). From the proof of Theorem 6.7 we know that

\[
f(y) - f(x) = \int_0^1 f'[\lambda x + (1 - \lambda) y](y - x) \, d\lambda.
\]

Hence

\[
f(y) - f(x) - f'(z)(y - x) = \int_0^1 \left\{ f'[\lambda x + (1 - \lambda) y] - f'(z) \right\} (y - x) \, d\lambda,
\]

and estimating with the aid of (6.1) and condition (a) we find that

\[
\|f(y) - f(x) - f'(z)(y - x)\|
\leq \gamma \|y - x\| \int_0^1 \|\lambda(x - z) + (1 - \lambda)(y - z)\| \, d\lambda
\leq \frac{\gamma}{2} \|y - x\| \left\{ \|x - z\| + \|y - z\| \right\}.
\]

Choosing \( z = x \) shows that

\[
\|f(y) - f(x) - f'(x)(y - x)\| \leq \frac{\gamma}{2} \|y - x\|^2
\]

(6.5)

for all \( x, y \in D \), and choosing \( z = x_0 \) yields

\[
\|f(y) - f(x) - f'(x_0)(y - x)\| \leq r\gamma \|y - x\|
\]

(6.6)

for all \( x, y \in B[x_0, r] \).

2.
We prove by induction that

\[
\|x_\nu - x_0\| \leq r \quad\text{and}\quad \|x_\nu - x_{\nu-1}\| \leq \alpha \, q^{2^{\nu-1} - 1}, \quad \nu = 1, 2, \ldots.
\]

(6.7)

This is valid for \( \nu = 1 \), since

\[
\|x_1 - x_0\| = \left\| [f'(x_0)]^{-1} f(x_0) \right\| = \alpha = \frac{r}{2} < r
\]

as a consequence of conditions (c) and (d).
Yes
Corollary 6.15 Let \( D \subset {\mathbb{R}}^{n} \) be open and let \( f : D \rightarrow {\mathbb{R}}^{n} \) be twice continuously differentiable, and assume that \( {x}^{ * } \) is a zero of \( f \) such that the Jacobian \( {f}^{\prime }\left( {x}^{ * }\right) \) is nonsingular. Then Newton’s method is locally convergent; i.e., there exists a neighborhood \( B \) of the zero \( {x}^{ * } \) such that the Newton iterations converge to \( {x}^{ * } \) for all \( {x}_{0} \in B \) .
Proof. Since \( f \) is twice continuously differentiable, by the mean value Theorem 6.7 applied to the components of \( f' \) there exists \( \gamma > 0 \) such that

\[
\|f'(x) - f'(y)\| \leq \gamma \|x - y\|
\]

for all \( x, y \) in some closed ball \( B[x^*, \rho] \) centered at \( x^* \). We write

\[
f'(x) = f'(x^*) \left\{ I + [f'(x^*)]^{-1} \left[ f'(x) - f'(x^*) \right] \right\}
\]

and deduce from the above estimate and Theorem 3.48 that the radius \( \rho \) of \( B[x^*, \rho] \) can be chosen such that \( f'(x) \) is nonsingular on \( B[x^*, \rho] \) and \( \|[f'(x)]^{-1}\| \leq \beta \) for all \( x \in B[x^*, \rho] \) and some constant \( \beta > 0 \).

Since \( f \) is continuous, \( f(x^*) = 0 \) implies that there exists \( \delta < \rho/2 \) such that

\[
\|f(x_0)\| < \min \left\{ \frac{\rho}{4\beta}, \frac{1}{2\beta^2 \gamma} \right\}
\]

for all \( \|x_0 - x^*\| < \delta \). Then, setting \( \alpha := \|[f'(x_0)]^{-1} f(x_0)\| \), we have the inequalities

\[
\alpha\beta\gamma \leq \|f(x_0)\| \, \beta^2 \gamma < \frac{1}{2}
\]

and

\[
2\alpha \leq 2\beta \|f(x_0)\| < \frac{\rho}{2}.
\]

Hence for the open and convex ball \( B(x^*, \rho) \) and for each \( x_0 \) with \( \|x_0 - x^*\| < \delta \) the assumptions of Theorem 6.14 are satisfied.
Yes
Corollary 6.16 Let \( f : \left( {a, b}\right) \rightarrow \mathbb{R} \) be twice continuously differentiable and assume that \( {x}^{ * } \) is a simple zero of \( f \) . Then Newton’s method is locally convergent.
Proof. For simple zeros we have \( f'(x^*) \neq 0 \), so the assertion follows from Corollary 6.15.
No
For the function \( f(x) := x - \cos x \) the Newton iteration reads

\[
x_{\nu+1} := x_\nu - \frac{x_\nu - \cos x_\nu}{1 + \sin x_\nu}.
\]
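Coded as a short sketch (our own, illustrative names), this iteration reaches machine accuracy in a handful of steps, far fewer than the successive approximations of Example 6.5 for the same equation \( \cos x = x \).

```python
import math

def newton_cos(x0=1.0, tol=1e-14, max_iter=50):
    # Newton's method for f(x) = x - cos(x)
    x = x0
    for _ in range(max_iter):
        x_next = x - (x - math.cos(x)) / (1 + math.sin(x))
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```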
Null
No
For the function \( f(x) := x - e^{-x} \) the Newton iteration reads

\[
x_{\nu+1} := x_\nu - \frac{x_\nu - e^{-x_\nu}}{1 + e^{-x_\nu}}.
\]
Null
No
Theorem 6.20 Under the assumptions of Theorem 6.14 Newton's method converges quadratically.
Proof. Using condition (b) of Theorem 6.14 and the inequality (6.5) we can estimate

\[
\|x^* - x_{\nu+1}\| = \left\| x^* - x_\nu + [f'(x_\nu)]^{-1} f(x_\nu) \right\|
\leq \left\| [f'(x_\nu)]^{-1} \right\| \left\| f(x^*) - f(x_\nu) - f'(x_\nu)(x^* - x_\nu) \right\|
\leq \frac{\beta\gamma}{2} \|x^* - x_\nu\|^2,
\]

since \( f(x^*) = 0 \).
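The quadratic convergence is easy to observe numerically. The following sketch (our own) applies Newton's method to \( f(x) = x^2 - 2 \), whose zero is \( \sqrt{2} \); there the error recursion \( e_{\nu+1} = e_\nu^2 / (2 x_\nu) \leq e_\nu^2 \) makes the bound of the theorem directly visible.

```python
import math

# Newton's method for f(x) = x**2 - 2:
# x_{nu+1} = x_nu - f(x_nu)/f'(x_nu) = (x_nu + 2/x_nu)/2
x, root = 2.0, math.sqrt(2.0)
errors = []
for _ in range(5):
    errors.append(abs(x - root))
    x = 0.5 * (x + 2.0 / x)
# each error is bounded by the square of its predecessor, matching the
# factor beta*gamma/2 <= 1 in this example
```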
Yes
Theorem 6.21 Under the assumptions of Theorem 6.14 the simplified Newton method converges linearly to the unique zero of \( f \) in \( B\left\lbrack {{x}_{0}, r}\right\rbrack \) .
Proof. Recall that the function

\[
g(x) := x - [f'(x_0)]^{-1} f(x)
\]

defined in the proof of Theorem 6.14 is a contraction. We show that \( g \) maps \( B[x_0, r] \) into itself. For this we write

\[
x_0 - g(x) = [f'(x_0)]^{-1} \left\{ f(x) - f(x_0) - f'(x_0)(x - x_0) + f(x_0) \right\}.
\]

Then estimating with the help of conditions (b), (c), and (d) and the inequality (6.5) we obtain

\[
\|g(x) - x_0\| \leq \frac{\beta\gamma}{2} \|x - x_0\|^2 + \alpha \leq 2\alpha^2 \beta\gamma + \alpha = (2q + 1)\alpha < 2\alpha = r
\]

for all \( x \) with \( \|x - x_0\| \leq r \). Now the statement of the theorem follows from the Banach fixed point Theorem 3.46.
Yes
Theorem 6.22 Let

\[
p(x) = a_0 x^n + a_1 x^{n-1} + a_2 x^{n-2} + \cdots + a_{n-1} x + a_n
\]

be a polynomial of degree \( n \). For \( z \in \mathbb{C} \) the complete Horner scheme

<table><thead><tr><th></th><th>\( a_0 \)</th><th>\( a_1 \)</th><th>\( a_2 \)</th><th>\( \cdots \)</th><th>\( a_{n-1} \)</th><th>\( a_n \)</th></tr></thead><tr><td>\( z \)</td><td>\( b_0 \)</td><td>\( b_1 \)</td><td>\( b_2 \)</td><td>\( \cdots \)</td><td>\( b_{n-1} \)</td><td>\( b_n \)</td></tr><tr><td>\( z \)</td><td>\( b_0' \)</td><td>\( b_1' \)</td><td>\( b_2' \)</td><td>\( \cdots \)</td><td>\( b_{n-1}' \)</td><td></td></tr><tr><td>\( z \)</td><td>\( b_0'' \)</td><td>\( b_1'' \)</td><td>\( b_2'' \)</td><td>\( \cdots \)</td><td>\( b_{n-2}'' \)</td><td></td></tr><tr><td>\( \vdots \)</td><td>\( \vdots \)</td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>\( z \)</td><td>\( b_0^{(n-1)} \)</td><td>\( b_1^{(n-1)} \)</td><td></td><td></td><td></td><td></td></tr><tr><td>\( z \)</td><td>\( b_0^{(n)} \)</td><td></td><td></td><td></td><td></td><td></td></tr></table>

contains the derivatives

\[
b_{n-k}^{(k)} = \frac{p^{(k)}(z)}{k!}, \quad k = 0, 1, \ldots, n,
\]

of the polynomial \( p \) at the point \( z \). The scheme is defined recursively by \( b_m^{(-1)} := a_m \), \( m = 0, \ldots, n \), and

\[
b_0^{(k)} := b_0^{(k-1)}, \quad b_m^{(k)} := z b_{m-1}^{(k)} + b_m^{(k-1)}, \quad m = 1, \ldots, n - k,
\]

for \( k = 0, \ldots, n \).
Null
No
For the polynomial \( p\left( x\right) \mathrel{\text{:=}} {x}^{3} - {x}^{2} + {3x} - 5 \) the Horner scheme
<table><thead><tr><th></th><th>1</th><th>-1</th><th>3</th><th>-5</th></tr></thead><tr><td>2</td><td>1</td><td>1</td><td>5</td><td>5</td></tr><tr><td>2</td><td>1</td><td>3</td><td>11</td><td></td></tr><tr><td>2</td><td>1</td><td>5</td><td></td><td></td></tr><tr><td>2</td><td>1</td><td></td><td></td><td></td></tr></table>

for \( z = 2 \) leads to \( p(2) = 5 \), \( p'(2) = 11 \), \( p''(2) = 10 \), \( p'''(2) = 6 \).
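The complete Horner scheme of Theorem 6.22 translates directly into code. The following sketch (our own, with an illustrative function name) returns all derivatives of \( p \) at \( z \), scaling the entries \( b_{n-k}^{(k)} = p^{(k)}(z)/k! \) by \( k! \).

```python
def complete_horner(coeffs, z):
    """Complete Horner scheme for p(x) = a0*x^n + ... + an with
    coeffs = [a0, ..., an]; returns [p(z), p'(z), ..., p^{(n)}(z)]."""
    b = list(coeffs)
    n = len(b) - 1
    derivs, fact = [], 1
    for k in range(n + 1):
        for m in range(1, n - k + 1):   # one Horner pass of length n - k
            b[m] = z * b[m - 1] + b[m]
        derivs.append(fact * b[n - k])  # b_{n-k}^{(k)} = p^{(k)}(z) / k!
        fact *= k + 1
    return derivs
```

For the example above, `complete_horner([1, -1, 3, -5], 2)` reproduces the values \( 5, 11, 10, 6 \).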
Yes
Example 6.24 The polynomial

\[
p(x) := (x-1)(x-2)\cdots(x-10) = x^{10} - 55x^9 + \cdots + 10!
\]

has the zeros \( 1, 2, \ldots, 10 \), which are well separated from each other. We perturb the coefficient of \( x^9 \) by choosing \( q(x) := 55x^9 \). Since \( p'(10) = 9! \), by (6.13), the zero \( z_0 = 10 \) of the polynomial \( p \) is perturbed into

\[
10 - \frac{55 \cdot 10^9}{9!}\,\varepsilon \approx 10 - 1.5 \cdot 10^5 \, \varepsilon.
\]

This illustrates that finding the zeros of \( p \) is an ill-conditioned problem and that a reliable approximation of the zeros is impossible.
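The magnitude of the amplification factor is a one-line computation (added here as a quick check of the figure \( 1.5 \cdot 10^5 \) quoted above):

```python
import math

# first-order perturbation of the zero z0 = 10 per unit epsilon:
# q(10)/p'(10) with q(x) = 55*x**9 and p'(10) = 9!
shift = 55 * 10**9 / math.factorial(9)
```

So an \( \varepsilon \) of order machine precision in the coefficient already moves the zero by roughly \( 1.5 \cdot 10^5 \, \varepsilon \).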
Null
No
The vibrations of a string are modeled by the so-called wave equation

\[
\frac{\partial^2 w}{\partial x^2} = \frac{1}{c^2} \frac{\partial^2 w}{\partial t^2},
\]

where \( w = w(x,t) \) denotes the vertical elongation and \( c \) is the speed of sound in the string. Assuming that the string is clamped at \( x = 0 \) and \( x = 1 \), the boundary conditions \( w(0,t) = w(1,t) = 0 \) must be satisfied for all times \( t \). Obviously, the time-harmonic wave

\[
w(x,t) = v(x) e^{i\omega t}
\]

with frequency \( \omega \) solves the wave equation, provided that the space-dependent part \( v \) satisfies

\[
-v'' = \lambda v \quad\text{on } [0,1],
\]

where \( \lambda := \omega^2 / c^2 \). The boundary conditions \( w(0,t) = w(1,t) = 0 \) are satisfied if \( v \) satisfies the boundary conditions

\[
v(0) = v(1) = 0.
\]

Hence, introducing the linear space

\( U := \{ v \in C[0,1] : v \) is twice continuously differentiable, \( v(0) = v(1) = 0 \} \)

and defining the differential operator \( D : U \rightarrow C[0,1] \) by \( D : v \mapsto -v'' \), we are led to the eigenvalue problem \( Dv = \lambda v \). Elementary calculations show that the functions \( v_m(x) = \sin m\pi x \) are eigenfunctions of \( D \) with the eigenvalues \( \lambda_m = m^2 \pi^2 \) for \( m = 1, 2, \ldots \). It can be shown that these are the only eigenvalues and eigenfunctions of \( D \).
Null
No
Consider the eigenvalue problem

\[
\int_0^1 K(x,y) \varphi(y) \, dy = \lambda \varphi(x), \quad x \in [0,1],
\]

for a linear integral operator with continuous kernel \( K \). For the numerical approximation we proceed as in Example 2.3 and approximate the integral by the rectangular rule with equidistant quadrature points \( x_k = k/n \) for \( k = 1, \ldots, n \). If we require the approximated equation to be satisfied only at the grid points, we arrive at the approximating system of equations

\[
\frac{1}{n} \sum_{k=1}^{n} K(x_j, x_k) \varphi_k = \lambda \varphi_j, \quad j = 1, \ldots, n,
\]

for approximate values \( \varphi_j \) to the exact solution \( \varphi(x_j) \). Hence, we approximate the eigenvalues of the integral operator by the eigenvalues of the matrix with entries \( K(x_j, x_k)/n \). Of course, instead of the rectangular rule any other quadrature rule can be used. A discussion of the convergence of the matrix eigenvalues to the eigenvalues of the integral operator is again beyond the aim of this introduction.
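The discretization can be sketched in a few lines of Python/NumPy (our own code). As a test kernel, not taken from the text, we use \( K(x,y) = \min(x,y) \), for which the integral operator is known to have the eigenvalues \( 4/((2k-1)^2\pi^2) \); the kernel is symmetric, so the matrix is Hermitian.

```python
import numpy as np

def integral_operator_eigs(K, n):
    # rectangular rule at x_k = k/n: eigenvalues of the matrix (K(x_j, x_k)/n);
    # a symmetric kernel is assumed, so eigvalsh applies
    x = np.arange(1, n + 1) / n
    M = K(x[:, None], x[None, :]) / n
    return np.sort(np.linalg.eigvalsh(M))[::-1]

# largest exact eigenvalue for K(x, y) = min(x, y) is 4/pi^2
eigs = integral_operator_eigs(np.minimum, 500)
```

With \( n = 500 \) quadrature points the leading matrix eigenvalues agree with the exact ones to about the accuracy of the rectangular rule.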
Theorem 7.3 (Rayleigh) Let \( A \) be a Hermitian \( n \times n \) matrix with eigenvalues\n\n\[ \n{\lambda }_{1} \geq {\lambda }_{2} \geq \cdots \geq {\lambda }_{n}\n\]\n\n(where multiple eigenvalues occur according to their multiplicity) and corresponding orthonormal eigenvectors \( {x}_{1},{x}_{2},\ldots ,{x}_{n} \) . Then\n\n\[ \n{\lambda }_{j} = \mathop{\max }\limits_{\substack{{x \in {V}_{j}} \\ {x \neq 0} }}\frac{\left( Ax, x\right) }{\left( x, x\right) },\;j = 1,\ldots, n\n\]\n\nwhere the subspaces \( {V}_{1},\ldots ,{V}_{n} \) are defined by \( {V}_{1} \mathrel{\text{:=}} {\mathbb{C}}^{n} \) and\n\n\[ \n{V}_{j} \mathrel{\text{:=}} \left\{ {x \in {\mathbb{C}}^{n} : \left( {x,{x}_{k}}\right) = 0, k = 1,\ldots, j - 1}\right\} ,\;j = 2,\ldots, n.\n\]
Proof. Let \( x \in {V}_{j} \) with \( x \neq 0 \) . Then\n\n\[ \nx = \mathop{\sum }\limits_{{k = j}}^{n}\left( {x,{x}_{k}}\right) {x}_{k}\;\text{ and }\;\mathop{\sum }\limits_{{k = j}}^{n}{\left| \left( x,{x}_{k}\right) \right| }^{2} = \left( {x, x}\right) .\n\]\n\nHence\n\n\[ \n{Ax} = \mathop{\sum }\limits_{{k = j}}^{n}{\lambda }_{k}\left( {x,{x}_{k}}\right) {x}_{k}\n\]\n\nand\n\n\[ \n\left( {{Ax}, x}\right) = \mathop{\sum }\limits_{{k = j}}^{n}{\lambda }_{k}{\left| \left( x,{x}_{k}\right) \right| }^{2} \leq {\lambda }_{j}\mathop{\sum }\limits_{{k = j}}^{n}{\left| \left( x,{x}_{k}\right) \right| }^{2} = {\lambda }_{j}\left( {x, x}\right) .\n\]\n\nThis implies\n\n\[ \n\mathop{\sup }\limits_{\substack{{x \in {V}_{j}} \\ {x \neq 0} }}\frac{\left( Ax, x\right) }{\left( x, x\right) } \leq {\lambda }_{j}\n\]\n\nand the statement follows from \( \left( {A{x}_{j},{x}_{j}}\right) = {\lambda }_{j} \) and \( {x}_{j} \in {V}_{j} \) .
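A quick numerical illustration of the theorem, with a randomly generated Hermitian matrix (purely as an example; the helper name is ours):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = (M + M.conj().T) / 2                 # a random Hermitian matrix
lam, X = np.linalg.eigh(A)               # eigh returns ascending order
lam, X = lam[::-1], X[:, ::-1]           # reorder: lam[0] >= ... >= lam[4]

def rayleigh(A, x):
    return (x.conj() @ A @ x).real / (x.conj() @ x).real

# On V_j (vectors orthogonal to x_1, ..., x_{j-1}) the Rayleigh quotient
# never exceeds lambda_j, and the bound is attained at x_j itself.
j = 2  # 0-based: test the bound for the third eigenvalue lam[2]
for _ in range(100):
    x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
    x = x - X[:, :j] @ (X[:, :j].conj().T @ x)   # project x into V_3
    assert rayleigh(A, x) <= lam[j] + 1e-10
assert np.isclose(rayleigh(A, X[:, j]), lam[j])
```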
Theorem 7.4 (Courant) Let \( A \) be a Hermitian \( n \times n \) matrix with eigenvalues\n\n\[ \n{\lambda }_{1} \geq {\lambda }_{2} \geq \cdots \geq {\lambda }_{n} \n\]\n\n(where multiple eigenvalues occur according to their multiplicity). Then\n\n\[ \n{\lambda }_{j} = \mathop{\min }\limits_{{{U}_{j} \in {M}_{j}}}\mathop{\max }\limits_{\substack{{x \in {U}_{j}} \\ {x \neq 0} }}\frac{\left( Ax, x\right) }{\left( x, x\right) },\;j = 1,\ldots, n \n\]\n\nwhere \( {M}_{j} \) denotes the set of all subspaces \( {U}_{j} \subset {\mathbb{C}}^{n} \) of dimension \( n + 1 - j \) .
Proof. First we note that because of\n\n\[ \n\mathop{\sup }\limits_{\substack{{x \in {U}_{j}} \\ {x \neq 0} }}\frac{\left( Ax, x\right) }{\left( x, x\right) } = \mathop{\sup }\limits_{\substack{{x \in {U}_{j}} \\ {\left( {x, x}\right) = 1} }}\left( {{Ax}, x}\right) \n\]\n\nand the continuity of the function \( x \mapsto \left( {{Ax}, x}\right) \), the supremum is attained; i.e., the maximum exists.\n\nBy \( {x}_{1},{x}_{2},\ldots ,{x}_{n} \) we denote orthonormal eigenvectors corresponding to the eigenvalues \( {\lambda }_{1} \geq {\lambda }_{2} \geq \cdots \geq {\lambda }_{n} \) . First, we show that for a given subspace \( {U}_{j} \) of dimension \( n + 1 - j \) there exists a vector \( x \in {U}_{j} \) such that\n\n\[ \n\left( {x,{x}_{k}}\right) = 0,\;k = j + 1,\ldots, n. \n\]\n\n(7.1)\n\nLet \( {z}_{1},\ldots ,{z}_{n + 1 - j} \) be a basis of \( {U}_{j} \) . Then we can represent each \( x \in {U}_{j} \) by\n\n\[ \nx = \mathop{\sum }\limits_{{i = 1}}^{{n + 1 - j}}{a}_{i}{z}_{i} \n\]\n\n(7.2)\n\nIn order to guarantee (7.1), the \( n + 1 - j \) coefficients \( {a}_{1},\ldots ,{a}_{n + 1 - j} \) must satisfy the \( n - j \) linear equations\n\n\[ \n\mathop{\sum }\limits_{{i = 1}}^{{n + 1 - j}}{a}_{i}\left( {{z}_{i},{x}_{k}}\right) = 0,\;k = j + 1,\ldots, n. \n\]\n\nThis underdetermined system always has a nontrivial solution. 
For the corresponding \( x \) given by (7.2) we have \( x \neq 0 \), and from\n\n\[ \nx = \mathop{\sum }\limits_{{k = 1}}^{j}\left( {x,{x}_{k}}\right) {x}_{k} \n\]\n\nwe obtain that\n\n\[ \n\left( {{Ax}, x}\right) = \mathop{\sum }\limits_{{k = 1}}^{j}{\lambda }_{k}{\left| \left( x,{x}_{k}\right) \right| }^{2} \geq {\lambda }_{j}\mathop{\sum }\limits_{{k = 1}}^{j}{\left| \left( x,{x}_{k}\right) \right| }^{2} = {\lambda }_{j}\left( {x, x}\right) , \n\]\n\nwhence\n\n\[ \n\mathop{\max }\limits_{\substack{{x \in {U}_{j}} \\ {x \neq 0} }}\frac{\left( Ax, x\right) }{\left( x, x\right) } \geq {\lambda }_{j} \n\]\n\nfollows.\n\nOn the other hand, for the subspace\n\n\[ \n{U}_{j} = \left\{ {x \in {\mathbb{C}}^{n} : \left( {x,{x}_{k}}\right) = 0, k = 1,\ldots, j - 1}\right\} \n\]\n\nof dimension \( n + 1 - j \), by Theorem 7.3 we have the equality\n\n\[ \n\mathop{\max }\limits_{\substack{{x \in {U}_{j}} \\ {x \neq 0} }}\frac{\left( Ax, x\right) }{\left( x, x\right) } = {\lambda }_{j} \n\]\n\nand the proof is finished.
Corollary 7.5 Let \( A \) and \( B \) be two Hermitian \( n \times n \) matrices with eigenvalues \( {\lambda }_{1}\left( A\right) \geq {\lambda }_{2}\left( A\right) \geq \cdots \geq {\lambda }_{n}\left( A\right) \) and \( {\lambda }_{1}\left( B\right) \geq {\lambda }_{2}\left( B\right) \geq \cdots \geq {\lambda }_{n}\left( B\right) \) . Then\n\n\[ \n\left| {{\lambda }_{j}\left( A\right) - {\lambda }_{j}\left( B\right) }\right| \leq \parallel A - B\parallel ,\;j = 1,\ldots, n \n\] \n\nfor any norm \( \parallel \cdot \parallel \) on \( {\mathbb{C}}^{n \times n} \) .
Proof. From the Cauchy-Schwarz inequality we have that\n\n\[ \n\left( {{Ax} - {Bx}, x}\right) \leq \parallel \left( {A - B}\right) x{\parallel }_{2}\parallel x{\parallel }_{2} \leq \parallel A - B{\parallel }_{2}\parallel x{\parallel }_{2}^{2} \n\] \n\nand hence\n\n\[ \n\left( {{Ax}, x}\right) \leq \left( {{Bx}, x}\right) + \parallel A - B{\parallel }_{2}\parallel x{\parallel }_{2}^{2}. \n\] \n\nBy the Courant minimum maximum principle of Theorem 7.4 this implies\n\n\[ \n{\lambda }_{j}\left( A\right) \leq {\lambda }_{j}\left( B\right) + \parallel A - B{\parallel }_{2},\;j = 1,\ldots, n. \n\] \n\nInterchanging the roles of \( A \) and \( B \), we also have that\n\n\[ \n{\lambda }_{j}\left( B\right) \leq {\lambda }_{j}\left( A\right) + \parallel B - A{\parallel }_{2},\;j = 1,\ldots, n, \n\] \n\nand therefore\n\n\[ \n\left| {{\lambda }_{j}\left( A\right) - {\lambda }_{j}\left( B\right) }\right| \leq \parallel A - B{\parallel }_{2},\;j = 1,\ldots, n. \n\] \n\nNow the statement follows from\n\n\[ \n\parallel A - B{\parallel }_{2} = \rho \left( {A - B}\right) \leq \parallel A - B\parallel \n\] \n\nwhich is a consequence of Theorems 3.31 and 3.32.
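The stability statement is easy to test numerically; the sketch below uses random symmetric matrices purely as an illustration (the helper name is ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_symmetric(n, rng, scale=1.0):
    M = scale * rng.standard_normal((n, n))
    return (M + M.T) / 2

A = random_symmetric(6, rng)
B = A + random_symmetric(6, rng, scale=1e-3)   # small Hermitian perturbation
lamA = np.sort(np.linalg.eigvalsh(A))[::-1]    # descending eigenvalues
lamB = np.sort(np.linalg.eigvalsh(B))[::-1]
deviation = np.abs(lamA - lamB).max()
bound = np.linalg.norm(A - B, 2)               # spectral norm of A - B
```

Here `deviation` never exceeds `bound`, matching the corollary for the spectral norm.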
Corollary 7.6 For the eigenvalues \( {\lambda }_{1} \geq {\lambda }_{2} \geq \cdots \geq {\lambda }_{n} \) of a Hermitian \( n \times n \) matrix \( A = \left( {a}_{jk}\right) \) we have that\n\n\[ \n{\left| {\lambda }_{i} - {a}_{ii}^{\prime }\right| }^{2} \leq \mathop{\sum }\limits_{\substack{{j, k = 1} \\ {j \neq k} }}^{n}{\left| {a}_{jk}\right| }^{2},\;i = 1,\ldots, n \n\]\n\nwhere the elements \( {a}_{11}^{\prime },\ldots ,{a}_{nn}^{\prime } \) represent a permutation of the diagonal elements \( {a}_{11},\ldots ,{a}_{nn} \) of \( A \) such that \( {a}_{11}^{\prime } \geq {a}_{22}^{\prime } \geq \cdots \geq {a}_{nn}^{\prime } \) .
Proof. Use \( B = \operatorname{diag}\left( {a}_{jj}\right) \) and \( \parallel \cdot \parallel = \parallel \cdot {\parallel }_{F} \) in the preceding corollary: the eigenvalues of \( B \) in decreasing order are \( {a}_{11}^{\prime } \geq \cdots \geq {a}_{nn}^{\prime } \), and \( \parallel A - B{\parallel }_{F}^{2} = \mathop{\sum }\limits_{\substack{{j, k = 1} \\ {j \neq k} }}^{n}{\left| {a}_{jk}\right| }^{2} \) .
Theorem 7.7 (Gerschgorin) Let \( A = \left( {a}_{jk}\right) \) be a complex \( n \times n \) matrix and define the disks\n\n\[ \n{G}_{j} \mathrel{\text{:=}} \left\{ {\lambda \in \mathbb{C} : \left| {\lambda - {a}_{jj}}\right| \leq \mathop{\sum }\limits_{\substack{{k = 1} \\ {k \neq j} }}^{n}\left| {a}_{jk}\right| }\right\} ,\;j = 1,\ldots, n, \n\] \n\nand \n\n\[ \n{G}_{j}^{ * } \mathrel{\text{:=}} \left\{ {\lambda \in \mathbb{C} : \left| {\lambda - {a}_{jj}}\right| \leq \mathop{\sum }\limits_{\substack{{k = 1} \\ {k \neq j} }}^{n}\left| {a}_{kj}\right| }\right\} ,\;j = 1,\ldots, n. \n\] \n\nThen the eigenvalues \( \lambda \) of \( A \) satisfy \n\n\[ \n\lambda \in \mathop{\bigcup }\limits_{{j = 1}}^{n}{G}_{j} \cap \mathop{\bigcup }\limits_{{j = 1}}^{n}{G}_{j}^{ * } \n\]
Proof. Assume that \( {Ax} = {\lambda x} \) and \( \parallel x{\parallel }_{\infty } = 1 \), and for \( x = {\left( {x}_{1},\ldots ,{x}_{n}\right) }^{T} \) choose \( j \) such that \( \left| {x}_{j}\right| = \parallel x{\parallel }_{\infty } = 1 \) . Then \n\n\[ \n\left| {\lambda - {a}_{jj}}\right| = \left| {\left( {\lambda - {a}_{jj}}\right) {x}_{j}}\right| = \left| {\mathop{\sum }\limits_{\substack{{k = 1} \\ {k \neq j} }}^{n}{a}_{jk}{x}_{k}}\right| \leq \mathop{\sum }\limits_{\substack{{k = 1} \\ {k \neq j} }}^{n}\left| {a}_{jk}\right| \n\] \n\nand therefore \n\n\[ \n\lambda \in \mathop{\bigcup }\limits_{{j = 1}}^{n}{G}_{j} \n\] \n\nSince the eigenvalues of \( {A}^{ * } \) are the complex conjugates of the eigenvalues of \( A \) (see Problem 7.3) we also have that \n\n\[ \n\lambda \in \mathop{\bigcup }\limits_{{j = 1}}^{n}{G}_{j}^{ * } \n\] \n\nand the theorem is proven.
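A direct numerical check of the disk statement, with an arbitrary small example matrix (an illustration; the helper name is ours):

```python
import numpy as np

A = np.array([[ 4.0,  1.0, 0.2],
              [ 0.5, -2.0, 0.3],
              [ 0.1,  0.4, 1.0]])      # an arbitrary example matrix
eigs = np.linalg.eigvals(A)

def in_disk_union(lmbda, M):
    """True if lmbda lies in the union of the row disks of M."""
    centers = np.diag(M)
    radii = np.abs(M).sum(axis=1) - np.abs(centers)
    return any(abs(lmbda - c) <= r + 1e-12 for c, r in zip(centers, radii))

# Row disks G_j of A; the column disks G_j^* are the row disks of A^T.
row_ok = all(in_disk_union(l, A) for l in eigs)
col_ok = all(in_disk_union(l, A.T) for l in eigs)
```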
Lemma 7.8 The Frobenius norm\n\n\[ \n\parallel A{\parallel }_{F} \mathrel{\text{:=}} {\left( \mathop{\sum }\limits_{{j, k = 1}}^{n}{\left| {a}_{jk}\right| }^{2}\right) }^{1/2}\n\]\n\nof an \( n \times n \) matrix \( A = \left( {a}_{jk}\right) \) is invariant with respect to unitary transformations.
Proof. The trace\n\n\[ \n\operatorname{tr}A \mathrel{\text{:=}} \mathop{\sum }\limits_{{j = 1}}^{n}{a}_{jj}\n\]\n\nof a matrix \( A \) is commutative; i.e., \( \operatorname{tr}{AB} = \operatorname{tr}{BA} \) . This follows from\n\n\[ \n\mathop{\sum }\limits_{{j = 1}}^{n}{\left( AB\right) }_{jj} = \mathop{\sum }\limits_{{j = 1}}^{n}\mathop{\sum }\limits_{{k = 1}}^{n}{a}_{jk}{b}_{kj} = \mathop{\sum }\limits_{{k = 1}}^{n}\mathop{\sum }\limits_{{j = 1}}^{n}{b}_{kj}{a}_{jk} = \mathop{\sum }\limits_{{k = 1}}^{n}{\left( BA\right) }_{kk}.\n\]\n\nIn particular, we have that\n\n\[ \n\operatorname{tr}A{A}^{ * } = \mathop{\sum }\limits_{{j = 1}}^{n}\mathop{\sum }\limits_{{k = 1}}^{n}{a}_{jk}{a}_{kj}^{ * } = \mathop{\sum }\limits_{{j = 1}}^{n}\mathop{\sum }\limits_{{k = 1}}^{n}{\left| {a}_{jk}\right| }^{2}.\n\]\n\nTherefore, for each unitary matrix \( Q \) it follows that\n\n\[ \n\parallel {Q}^{ * }{AQ}{\parallel }_{F}^{2} = \mathrm{{tr}}\left( {{Q}^{ * }{AQ}{Q}^{ * }{A}^{ * }Q}\right) = \mathrm{{tr}}\left( {{Q}^{ * }A{A}^{ * }Q}\right) = \mathrm{{tr}}\left( {A{A}^{ * }Q{Q}^{ * }}\right) = \parallel A{\parallel }_{F}^{2},\n\]\n\nand the lemma is proven.
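For instance, with a random orthogonal \( Q \) obtained from a QR factorization (an illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # a random orthogonal matrix
frob_before = np.linalg.norm(A, 'fro')
frob_after = np.linalg.norm(Q.T @ A @ Q, 'fro')    # invariance under Q* A Q
```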
Corollary 7.9 The eigenvalues of an \( n \times n \) matrix \( A \) (counted repeatedly according to their algebraic multiplicity) satisfy Schur's inequality\n\n\[ \mathop{\sum }\limits_{{j = 1}}^{n}{\left| {\lambda }_{j}\right| }^{2} \leq \parallel A{\parallel }_{F}^{2} \]\n\nEquality holds if and only if the matrix \( A \) is normal, i.e., if \( A{A}^{ * } = {A}^{ * }A \) .
Proof. By Theorem 3.27 there exists a unitary matrix \( Q \) such that \( R \mathrel{\text{:=}} {Q}^{ * }{AQ} \) is an upper triangular matrix. Hence\n\n\[ \parallel A{\parallel }_{F}^{2} = \parallel R{\parallel }_{F}^{2} = \mathop{\sum }\limits_{{j = 1}}^{n}{\left| {\lambda }_{j}\right| }^{2} + \mathop{\sum }\limits_{{j = 1}}^{n}\mathop{\sum }\limits_{{k = j + 1}}^{n}{\left| {r}_{jk}\right| }^{2}, \]\n\n(7.3)\n\nsince the diagonal elements of \( R = \left( {r}_{jk}\right) \) coincide with the eigenvalues of the similar matrices \( R \) and \( A \) . Now Schur’s inequality follows immediately from (7.3).\n\nFor the discussion of the case of equality, we first note that any unitary transformation of a normal matrix is again normal. This is a consequence of the identity\n\n\[ {Q}^{ * }{AQ}{\left( {Q}^{ * }AQ\right) }^{ * } - {\left( {Q}^{ * }AQ\right) }^{ * }{Q}^{ * }{AQ} = {Q}^{ * }\left( {A{A}^{ * } - {A}^{ * }A}\right) Q. \]\n\nIf equality holds in Schur’s inequality, then (7.3) implies that \( R \) is a diagonal matrix. Hence \( R \), and therefore \( A \), is normal.\n\nConversely, if \( A \) is normal, then the upper triangular matrix \( R \) must also be normal. Now, from\n\n\[ {\left( R{R}^{ * }\right) }_{jj} = \mathop{\sum }\limits_{{k = 1}}^{n}{r}_{jk}{r}_{kj}^{ * } = \mathop{\sum }\limits_{{k = j}}^{n}{\left| {r}_{jk}\right| }^{2} \]\n\nand\n\n\[ {\left( {R}^{ * }R\right) }_{jj} = \mathop{\sum }\limits_{{k = 1}}^{n}{r}_{jk}^{ * }{r}_{kj} = \mathop{\sum }\limits_{{k = 1}}^{j}{\left| {r}_{kj}\right| }^{2} \]\n\nwe conclude that\n\n\[ \mathop{\sum }\limits_{{k = j}}^{n}{\left| {r}_{jk}\right| }^{2} = \mathop{\sum }\limits_{{k = 1}}^{j}{\left| {r}_{kj}\right| }^{2},\;j = 1,\ldots, n. \]\n\nThis implies \( {r}_{jk} = 0 \) for \( j < k \), i.e., \( R \) is a diagonal matrix, and from (7.3) we deduce that equality holds in Schur’s inequality if \( A \) is normal.
Lemma 7.10 Normal matrices \( A \) satisfy\n\n\[ \mathop{\sum }\limits_{{j = 1}}^{n}{\left| {\lambda }_{j}\right| }^{2} = \mathop{\sum }\limits_{{j = 1}}^{n}{\left| {a}_{jj}\right| }^{2} + {\left\lbrack N\left( A\right) \right\rbrack }^{2}. \]
Proof. For normal matrices equality holds in Schur’s inequality by Corollary 7.9, and the statement follows from \( \parallel A{\parallel }_{F}^{2} = \mathop{\sum }\limits_{{j = 1}}^{n}{\left| {a}_{jj}\right| }^{2} + {\left\lbrack N\left( A\right) \right\rbrack }^{2} \) .
Lemma 7.11 For each pair \( j < k \) and each \( \varphi \in \mathbb{R} \) the matrix\n\n\[ U = \left( \begin{matrix} 1 & & & & & \\ & \cdot & & & & \\ & & \cos \varphi & & - \sin \varphi & \\ & & & \cdot & & \\ & & \sin \varphi & & \cos \varphi & \\ & & & & & \cdot \\ & & & & & 1 \end{matrix}\right) ,\]\n\nwhich coincides with the identity matrix except for \( {u}_{jj} = {u}_{kk} = \cos \varphi \) and \( {u}_{kj} = - {u}_{jk} = \sin \varphi \) (and which describes a rotation in the \( {x}_{j}{x}_{k} \) -plane) is unitary.
Proof. This follows from\n\n\[ \left( \begin{matrix} \cos \varphi & - \sin \varphi \\ \sin \varphi & \cos \varphi \end{matrix}\right) \left( \begin{matrix} \cos \varphi & \sin \varphi \\ - \sin \varphi & \cos \varphi \end{matrix}\right) = \left( \begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right) \]\n\nand\n\n\[ \left( \begin{matrix} \cos \varphi & \sin \varphi \\ - \sin \varphi & \cos \varphi \end{matrix}\right) \left( \begin{matrix} \cos \varphi & - \sin \varphi \\ \sin \varphi & \cos \varphi \end{matrix}\right) = \left( \begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right) . \]
Lemma 7.12 Let \( A \) be a real symmetric matrix and let \( U \) be the unitary matrix of Lemma 7.11. Then \( B = {U}^{ * }{AU} \) is also real and symmetric and has the entries\n\n\[ \n{b}_{jj} = {a}_{jj}{\cos }^{2}\varphi + {a}_{jk}\sin {2\varphi } + {a}_{kk}{\sin }^{2}\varphi \]\n\n\[ \n{b}_{kk} = {a}_{jj}{\sin }^{2}\varphi - {a}_{jk}\sin {2\varphi } + {a}_{kk}{\cos }^{2}\varphi \]\n\n\[ \n{b}_{jk} = {b}_{kj} = {a}_{jk}\cos {2\varphi } + \frac{1}{2}\left( {{a}_{kk} - {a}_{jj}}\right) \sin {2\varphi } \]\n\n\[ \n{b}_{ij} = {b}_{ji} = {a}_{ij}\cos \varphi + {a}_{ik}\sin \varphi ,\;i \neq j, k, \]\n\n\[ \n{b}_{ik} = {b}_{ki} = - {a}_{ij}\sin \varphi + {a}_{ik}\cos \varphi ,\;i \neq j, k, \]\n\n\[ \n{b}_{il} = {a}_{il},\;i, l \neq j, k \]\n\ni.e., the matrix \( B \) differs from \( A \) only in the \( j \) th and \( k \) th rows and columns.
Proof. The matrix \( B \) is real, since \( A \) and \( U \) are real, and it is symmetric, since the unitary transformation of a Hermitian matrix is again Hermitian. Elementary calculations show that\n\n\[ \n\left( \begin{matrix} \cos \varphi & \sin \varphi \\ - \sin \varphi & \cos \varphi \end{matrix}\right) \left( \begin{matrix} {a}_{jj} & {a}_{jk} \\ {a}_{kj} & {a}_{kk} \end{matrix}\right) \left( \begin{matrix} \cos \varphi & - \sin \varphi \\ \sin \varphi & \cos \varphi \end{matrix}\right) = \left( \begin{matrix} {b}_{jj} & {b}_{jk} \\ {b}_{kj} & {b}_{kk} \end{matrix}\right) \]\n\nwith \( {b}_{jj},{b}_{jk},{b}_{kj} \), and \( {b}_{kk} \) as stated in the theorem. For \( i \neq j, k \) we have that\n\n\[ \n{b}_{ij} = \mathop{\sum }\limits_{{r, s = 1}}^{n}{u}_{is}^{ * }{a}_{sr}{u}_{rj} = {a}_{ij}{u}_{jj} + {a}_{ik}{u}_{kj} = {a}_{ij}\cos \varphi + {a}_{ik}\sin \varphi \]\n\nand\n\n\[ \n{b}_{ik} = \mathop{\sum }\limits_{{r, s = 1}}^{n}{u}_{is}^{ * }{a}_{sr}{u}_{rk} = {a}_{ij}{u}_{jk} + {a}_{ik}{u}_{kk} = - {a}_{ij}\sin \varphi + {a}_{ik}\cos \varphi . \]\n\nFinally, we have\n\n\[ \n{b}_{il} = \mathop{\sum }\limits_{{r, s = 1}}^{n}{u}_{is}^{ * }{a}_{sr}{u}_{rl} = {a}_{il} \]\n\nfor \( i, l \neq j, k \) .
Lemma 7.13 For\n\n\[ \tan {2\varphi } = \frac{2{a}_{jk}}{{a}_{jj} - {a}_{kk}},\;{a}_{jj} \neq {a}_{kk}, \]\n\n\[ \varphi = \frac{\pi }{4},\;{a}_{jj} = {a}_{kk}, \]\n\nthe transformation of Lemma 7.12 annihilates the elements\n\n\[ {b}_{jk} = {b}_{kj} = 0 \]\n\nand reduces the off-diagonal elements according to\n\n\[ {\left\lbrack N\left( B\right) \right\rbrack }^{2} = {\left\lbrack N\left( A\right) \right\rbrack }^{2} - 2{a}_{jk}^{2}. \]
Proof. \( {b}_{jk} = {b}_{kj} = 0 \) follows immediately from Lemma 7.12. Applying Lemma 7.8 to the matrices\n\n\[ \left( \begin{matrix} {a}_{jj} & {a}_{jk} \\ {a}_{kj} & {a}_{kk} \end{matrix}\right) \;\mathrm{{and}}\;\left( \begin{matrix} {b}_{jj} & {b}_{jk} \\ {b}_{kj} & {b}_{kk} \end{matrix}\right) \]\n\nyields\n\n\[ {a}_{jj}^{2} + 2{a}_{jk}^{2} + {a}_{kk}^{2} = {b}_{jj}^{2} + {b}_{kk}^{2}. \]\n\nFrom this, with the aid of Lemmas 7.8 and 7.12 we find that\n\n\[ {\left\lbrack N\left( B\right) \right\rbrack }^{2} = \parallel B{\parallel }_{F}^{2} - \mathop{\sum }\limits_{{i = 1}}^{n}{b}_{ii}^{2} = \parallel A{\parallel }_{F}^{2} - \mathop{\sum }\limits_{{i = 1}}^{n}{b}_{ii}^{2} \]\n\n\[ = {\left\lbrack N\left( A\right) \right\rbrack }^{2} + \mathop{\sum }\limits_{{i = 1}}^{n}\left( {{a}_{ii}^{2} - {b}_{ii}^{2}}\right) = {\left\lbrack N\left( A\right) \right\rbrack }^{2} - 2{a}_{jk}^{2}, \]\n\nwhich completes the proof.
Theorem 7.14 The classical Jacobi method converges; i.e., the sequence \( \left( {A}_{\nu }\right) \) converges to a diagonal matrix with the eigenvalues of \( A \) as diagonal elements.
Proof. For one step of the Jacobi method, from\n\n\[ \n{\left\lbrack N\left( A\right) \right\rbrack }^{2} \leq \left( {{n}^{2} - n}\right) \mathop{\max }\limits_{\substack{{i, l = 1,\ldots, n} \\ {i \neq l} }}{a}_{il}^{2} \n\]\n\nwe obtain that\n\n\[ \n{a}_{jk}^{2} \geq \frac{{\left\lbrack N\left( A\right) \right\rbrack }^{2}}{n\left( {n - 1}\right) } \n\]\n\nfor the nondiagonal element \( {a}_{jk} \) with largest modulus. Hence, from Lemma 7.13 we deduce that\n\n\[ \n{\left\lbrack N\left( B\right) \right\rbrack }^{2} = {\left\lbrack N\left( A\right) \right\rbrack }^{2} - 2{a}_{jk}^{2} \leq {q}^{2}{\left\lbrack N\left( A\right) \right\rbrack }^{2}, \n\]\n\nwhere\n\n\[ \nq \mathrel{\text{:=}} {\left( 1 - \frac{2}{n\left( {n - 1}\right) }\right) }^{1/2}. \n\]\n\nFor the sequence \( \left( {A}_{\nu }\right) \) this implies that\n\n\[ \nN\left( {A}_{\nu }\right) \leq {q}^{\nu }N\left( {A}_{0}\right) \n\]\n\nfor all \( \nu \in \mathbb{N} \), whence \( N\left( {A}_{\nu }\right) \rightarrow 0,\nu \rightarrow \infty \), since \( q < 1 \) .
For the matrix\n\n\[ A = \left( \begin{array}{rrr} 2 & - 1 & 0 \\ - 1 & 2 & - 1 \\ 0 & - 1 & 2 \end{array}\right) \]\n\nthe first six transformed matrices for the classical Jacobi method are given by
\[ {A}_{1} = \left( \begin{array}{rrr} {1.0000} & {0.0000} & - {0.7071} \\ {0.0000} & {3.0000} & - {0.7071} \\ - {0.7071} & - {0.7071} & {2.0000} \end{array}\right) \]\n\n\[ {A}_{2} = \left( \begin{array}{rrr} {0.6340} & - {0.3251} & {0.0000} \\ - {0.3251} & {3.0000} & - {0.6280} \\ {0.0000} & - {0.6280} & {2.3660} \end{array}\right) \]\n\n\[ {A}_{3} = \left( \begin{array}{rrr} {0.6340} & - {0.2768} & - {0.1704} \\ - {0.2768} & {3.3864} & {0.0000} \\ - {0.1704} & {0.0000} & {1.9796} \end{array}\right) \]\n\n\[ {A}_{4} = \left( \begin{array}{rrr} {0.6064} & {0.0000} & - {0.1695} \\ {0.0000} & {3.4140} & {0.0169} \\ - {0.1695} & {0.0169} & {1.9796} \end{array}\right) \]\n\n\[ {A}_{5} = \left( \begin{matrix} {0.5858} & {0.0020} & {0.0000} \\ {0.0020} & {3.4140} & {0.0168} \\ {0.0000} & {0.0168} & {2.0002} \end{matrix}\right) ,\]\n\n\[ {A}_{6} = \left( \begin{array}{rrr} {0.5858} & {0.0020} & - {0.0000} \\ {0.0020} & {3.4142} & {0.0000} \\ - {0.0000} & {0.0000} & {2.0000} \end{array}\right) \]\n\nThe exact eigenvalues of \( A \) are \( {\lambda }_{1} = 2 + \sqrt{2},{\lambda }_{2} = 2,{\lambda }_{3} = 2 - \sqrt{2} \) .
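The sequence above can be reproduced with a compact sketch of the classical Jacobi method, following Lemmas 7.11-7.13 (an illustrative, unoptimized implementation; the function name is ours):

```python
import numpy as np

def jacobi_eigenvalues(A, tol=1e-12, max_rot=1000):
    """Classical Jacobi method: rotate away the off-diagonal element of
    largest modulus until N(A) is negligible."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for _ in range(max_rot):
        off = np.abs(A - np.diag(np.diag(A)))
        j, k = divmod(off.argmax(), n)       # largest off-diagonal element
        if off[j, k] < tol:
            break
        # Rotation angle of Lemma 7.13; arctan2 also covers a_jj = a_kk.
        phi = 0.5 * np.arctan2(2.0 * A[j, k], A[j, j] - A[k, k])
        U = np.eye(n)                        # rotation of Lemma 7.11
        U[j, j] = U[k, k] = np.cos(phi)
        U[k, j] = np.sin(phi)
        U[j, k] = -np.sin(phi)
        A = U.T @ A @ U
    return np.sort(np.diag(A))[::-1]

eigs = jacobi_eigenvalues([[2.0, -1.0, 0.0],
                           [-1.0, 2.0, -1.0],
                           [0.0, -1.0, 2.0]])
```

For the example matrix this returns the eigenvalues \( 2 + \sqrt{2}, 2, 2 - \sqrt{2} \) to machine precision.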
An \( n \times n \) matrix \( A \) is diagonalizable if and only if it has \( n \) linearly independent eigenvectors.
Assume that \( {C}^{-1}{AC} = D \), where \( D = \operatorname{diag}\left( {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right) \), is diagonal. Then \( D{e}_{j} = {\lambda }_{j}{e}_{j}, j = 1,\ldots, n \), with the canonical orthonormal basis \( {e}_{1},\ldots ,{e}_{n} \) of \( {\mathbb{C}}^{n} \) . This implies that the vectors \( {x}_{j} \mathrel{\text{:=}} C{e}_{j}, j = 1,\ldots, n \), are eigenvectors of \( A \), since\n\n\[ A{x}_{j} = {AC}{e}_{j} = {CD}{e}_{j} = C{\lambda }_{j}{e}_{j} = {\lambda }_{j}{x}_{j}. \]\n\nThe vectors \( {x}_{1},\ldots ,{x}_{n} \) are linearly independent because \( C \) is nonsingular and the \( {e}_{1},\ldots ,{e}_{n} \) are linearly independent.\n\nConversely, assume that \( {x}_{1},\ldots ,{x}_{n} \) are \( n \) linearly independent eigenvectors of \( A \) for the eigenvalues \( {\lambda }_{1},\ldots ,{\lambda }_{n} \) . Then the matrix \( C = \left( {{x}_{1},\ldots ,{x}_{n}}\right) \) formed by the eigenvectors as columns is nonsingular, and we have that\n\n\[ {AC} = \left( {A{x}_{1},\ldots, A{x}_{n}}\right) = \left( {{\lambda }_{1}{x}_{1},\ldots ,{\lambda }_{n}{x}_{n}}\right) = {CD}, \]\n\nwhere \( D = \operatorname{diag}\left( {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right) \) . Hence \( {C}^{-1}{AC} = D \) .
Theorem 7.19 Assume that \( A \) is a diagonalizable \( n \times n \) matrix with eigenvalues\n\n\[ \n\left| {\lambda }_{1}\right| > \left| {\lambda }_{2}\right| > \cdots > \left| {\lambda }_{n}\right| \n\]\n\nand corresponding eigenvectors \( {x}_{1},{x}_{2},\ldots ,{x}_{n} \), and set\n\n\[ \n{T}_{m} \mathrel{\text{:=}} \mathrm{{span}}\{ {x}_{1},\ldots ,{x}_{m}\} \;\text{ and }\;{U}_{m} \mathrel{\text{:=}} \mathrm{{span}}\{ {x}_{m + 1},\ldots ,{x}_{n}\} \n\]\n\nfor \( m = 1,\ldots, n - 1 \) . Let \( {q}_{10},\ldots ,{q}_{n0} \) be an orthonormal basis of \( {\mathbb{C}}^{n} \) and let the subspaces\n\n\[ \n{S}_{m} \mathrel{\text{:=}} \operatorname{span}\left\{ {{q}_{10},\ldots ,{q}_{m0}}\right\} \n\]\n\nsatisfy\n\n\[ \n{S}_{m} \cap {U}_{m} = \{ 0\} ,\;m = 1,\ldots, n - 1.\n\]\n\nAssume that for each \( \nu \in \mathbb{N} \) we have constructed an orthonormal system \( {q}_{1\nu },\ldots ,{q}_{n\nu } \) with the property\n\n\[ \n{A}^{\nu }{S}_{m} = \operatorname{span}\left\{ {{q}_{1\nu },\ldots ,{q}_{m\nu }}\right\} ,\;m = 1,\ldots, n - 1,\n\]\n\nand define \( {\widetilde{Q}}_{\nu } = \left( {{q}_{1\nu },\ldots ,{q}_{n\nu }}\right) \) . Then for the sequence of matrices \( {A}_{\nu } = \left( {a}_{{jk},\nu }\right) \) given by\n\n\[ \n{A}_{\nu + 1} \mathrel{\text{:=}} {\widetilde{Q}}_{\nu }^{ * }A{\widetilde{Q}}_{\nu }\n\]\n\nwe have convergence:\n\n\[ \n\mathop{\lim }\limits_{{\nu \rightarrow \infty }}{a}_{{jk},\nu } = 0,\;1 \leq k < j \leq n\n\]\n\nand\n\n\[ \n\mathop{\lim }\limits_{{\nu \rightarrow \infty }}{a}_{{jj},\nu } = {\lambda }_{j},\;j = 1,\ldots, n.\n\]
Proof. 1. Without loss of generality we may assume that \( {\begin{Vmatrix}{x}_{j}\end{Vmatrix}}_{2} = 1 \) for \( j = 1,\ldots, n \) . From Lemma 7.18 it follows that\n\n\[ \n{\begin{Vmatrix}{P}_{{A}^{\nu }{S}_{m}} - {P}_{{T}_{m}}\end{Vmatrix}}_{2} \leq M{r}^{\nu },\;m = 1,\ldots, n - 1,\;\nu \in \mathbb{N},\n\]\n\nfor some constant \( M \) and\n\n\[ \nr \mathrel{\text{:=}} \mathop{\max }\limits_{{m = 1,\ldots, n - 1}}\left| \frac{{\lambda }_{m + 1}}{{\lambda }_{m}}\right| < 1\n\]\n\nFrom this, for the projections\n\n\[ \n{w}_{m\nu } \mathrel{\text{:=}} {P}_{{A}^{\nu }{S}_{m}}{x}_{m},\;m = 1,\ldots, n - 1,\n\]\n\nand \( {w}_{n\nu } \mathrel{\text{:=}} {x}_{n} \), we conclude that\n\n\[ \n{\begin{Vmatrix}{w}_{m\nu } - {x}_{m}\end{Vmatrix}}_{2} \leq M{r}^{\nu },\;m = 1,\ldots, n,\;\nu \in \mathbb{N}.\n\]\n\nFor sufficiently large \( \nu \) the vectors \( {w}_{1\nu },\ldots ,{w}_{n\nu } \) are linearly independent, and we have that\n\n\[ \n\operatorname{span}\left\{ {{w}_{1\nu },\ldots ,{w}_{m\nu }}\right\} = {A}^{\nu }{S}_{m},\;m = 1,\ldots, n - 1.\n\]\n\nTo prove this we assume to the contrary that the vectors \( {w}_{1\nu },\ldots ,{w}_{n\nu } \) are not linearly independent for all sufficiently large \( \nu \) . Then there exists a sequence \( {\nu }_{\ell } \) such that the vectors \( {w}_{1{\nu }_{\ell }},\ldots ,{w}_{n{\nu }_{\ell }} \) are linearly dependent for each \( \ell \in \mathbb{N} \) .
Hence there exist complex numbers \( {\alpha }_{1\ell },\ldots ,{\alpha }_{n\ell } \) such that\n\n\[ \n\mathop{\sum }\limits_{{k = 1}}^{n}{\alpha }_{k\ell }{w}_{k{\nu }_{\ell }} = 0\;\text{ and }\;\mathop{\sum }\limits_{{k = 1}}^{n}{\left| {\alpha }_{k\ell }\right| }^{2} = 1\n\]\n\nBy the Bolzano-Weierstrass theorem, without loss of generality, we may assume that\n\n\[ \n{\alpha }_{k\ell } \rightarrow {\alpha }_{k},\;\ell \rightarrow \infty ,\;k = 1,\ldots, n.\n\]\n\nPassing to the limit \( \ell \rightarrow \infty \) in (7.18) with the aid of (7.17) now leads to\n\n\[ \n\mathop{\sum }\limits_{{k = 1}}^{n}{\alpha }_{k}{x}_{k} = 0\;\text{ and }\;\mathop{\sum }\limits_{{k = 1}}^{n}{\left| {\alpha }_{k}\right| }^{2} = 1\n\]\n\nwhich contradicts the linear independence of the eigenvectors \( {x}_{1},\ldots ,{x}_{n} \) . 2. We orthonormalize by setting \( {\widetilde{p}}_{1} \mathrel{\text{:=}} {x}_{1} \) and\n\n\[ \n{\widetilde{p}}_{m} \mathrel{\text{:=}} {x}_{m} - {P}_{{T}_{m - 1}}{x}_{m},\;m = 2,\ldots, n,\n\]\n\n\[ \n{p}_{m} \mathrel{\text{:=}} \frac{{\widetilde{p}}_{m}}{{\begin{Vmatrix}{\widetilde{p}}_{m}\end{Vmatrix}}_{2}},\;m = 1,\ldots, n,\n\]
Theorem 7.20 (QR algorithm) Let \( A \) be a diagonalizable matrix with eigenvalues\n\n\[ \left| {\lambda }_{1}\right| > \left| {\lambda }_{2}\right| > \cdots > \left| {\lambda }_{n}\right| \]\n\nand corresponding eigenvectors \( {x}_{1},{x}_{2},\ldots ,{x}_{n} \), and assume that\n\n\[ \operatorname{span}\left\{ {{e}_{1},\ldots ,{e}_{m}}\right\} \cap \operatorname{span}\left\{ {{x}_{m + 1},\ldots ,{x}_{n}}\right\} = \{ 0\} \]\n\n(7.24)\n\nfor \( m = 1,\ldots, n - 1 \) . Starting with \( {A}_{1} = A \), construct a sequence \( \left( {A}_{\nu }\right) \) by determining a QR decomposition\n\n\[ {A}_{\nu } = {Q}_{\nu }{R}_{\nu } \]\n\nand setting\n\n\[ {A}_{\nu + 1} \mathrel{\text{:=}} {R}_{\nu }{Q}_{\nu } \]\n\nfor \( \nu = 1,2,\ldots \) Then for \( {A}_{\nu } = \left( {a}_{{jk},\nu }\right) \) we have convergence:\n\n\[ \mathop{\lim }\limits_{{\nu \rightarrow \infty }}{a}_{{jk},\nu } = 0,\;1 \leq k < j \leq n \]\n\nand\n\n\[ \mathop{\lim }\limits_{{\nu \rightarrow \infty }}{a}_{{jj},\nu } = {\lambda }_{j},\;j = 1,\ldots, n. \]
Proof. This is a special case of Theorem 7.19. Choosing \( {q}_{j0} = {e}_{j}, j = 1,\ldots, n \), gives \( {S}_{m} = \operatorname{span}\left\{ {{e}_{1},\ldots ,{e}_{m}}\right\} \), so that (7.24) is precisely the condition \( {S}_{m} \cap {U}_{m} = \{ 0\} \) . Moreover, the accumulated factors \( {\widetilde{Q}}_{\nu } \mathrel{\text{:=}} {Q}_{1}{Q}_{2}\cdots {Q}_{\nu } \) satisfy \( {A}^{\nu } = {\widetilde{Q}}_{\nu }{R}_{\nu }\cdots {R}_{1} \) and \( {A}_{\nu + 1} = {\widetilde{Q}}_{\nu }^{ * }A{\widetilde{Q}}_{\nu } \), so the columns of \( {\widetilde{Q}}_{\nu } \) provide the orthonormal systems required in Theorem 7.19.
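The iteration itself is only a few lines. The sketch below is a bare unshifted QR iteration (an illustration; practical codes first reduce to Hessenberg form and use shifts), applied to the tridiagonal example matrix used earlier in the chapter:

```python
import numpy as np

def qr_algorithm(A, sweeps=200):
    """Unshifted QR iteration: A_{nu+1} = R_nu Q_nu where A_nu = Q_nu R_nu."""
    A = np.array(A, dtype=float)
    for _ in range(sweeps):
        Q, R = np.linalg.qr(A)
        A = R @ Q
    return A

T = qr_algorithm([[2.0, -1.0, 0.0],
                  [-1.0, 2.0, -1.0],
                  [0.0, -1.0, 2.0]])
diag = np.sort(np.diag(T))[::-1]   # converges to 2+sqrt(2), 2, 2-sqrt(2)
```

The subdiagonal entries decay like \( {\left| {\lambda }_{j + 1}/{\lambda }_{j}\right| }^{\nu } \), in line with the theorem.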
Theorem 7.22 To each \( n \times n \) matrix \( A \) there exist \( n - 2 \) Householder matrices \( {H}_{1},\ldots ,{H}_{n - 2} \) such that for \( Q = {H}_{n - 2}\cdots {H}_{1} \) the matrix\n\n\[ B = {Q}^{ * }{AQ} \]\n\nis a Hessenberg matrix.
Example 7.23 Let\n\n\[ A = \left( \begin{matrix} {a}_{1} & {c}_{2} & & & & \\ {c}_{2} & {a}_{2} & {c}_{3} & & & \\ & {c}_{3} & {a}_{3} & {c}_{4} & & \\ & & \cdot & \cdot & \cdot & \\ & & & {c}_{n - 1} & {a}_{n - 1} & {c}_{n} \\ & & & & {c}_{n} & {a}_{n} \end{matrix}\right) \]\n\nbe a symmetric tridiagonal matrix. Denote by \( {A}_{k} \) the \( k \times k \) submatrix consisting of the first \( k \) rows and columns of \( A \), and let \( {p}_{k} \) denote the characteristic polynomial of \( {A}_{k} \). Then we have the recurrence relations\n\n\[ {p}_{k}\left( \lambda \right) = \left( {{a}_{k} - \lambda }\right) {p}_{k - 1}\left( \lambda \right) - {c}_{k}^{2}{p}_{k - 2}\left( \lambda \right) ,\;k = 2,\ldots, n, \]\n\n(7.25)\n\nand\n\n\[ {p}_{k}^{\prime }\left( \lambda \right) = \left( {{a}_{k} - \lambda }\right) {p}_{k - 1}^{\prime }\left( \lambda \right) - {c}_{k}^{2}{p}_{k - 2}^{\prime }\left( \lambda \right) - {p}_{k - 1}\left( \lambda \right) ,\;k = 2,\ldots, n, \]\n\n(7.26)\n\nstarting with \( {p}_{0}\left( \lambda \right) = 1 \) and \( {p}_{1}\left( \lambda \right) = {a}_{1} - \lambda \).
Proof. The recursion (7.25) follows by expanding \( \det \left( {{A}_{k} - {\lambda I}}\right) \) with respect to the last column, and (7.26) is obtained by differentiating (7.25).
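The recursions (7.25) and (7.26) translate directly into code (an illustrative sketch; the function name and list layout are ours):

```python
import numpy as np

def char_poly(a, c, lam):
    """Evaluate p_n(lam) and p_n'(lam) for the symmetric tridiagonal matrix
    with diagonal a[0..n-1] and off-diagonal c[1..n-1] via (7.25)/(7.26).
    (c[0] is a dummy entry so that c[k] plays the role of c_{k+1}.)"""
    p_prev, p = 1.0, a[0] - lam     # p_0 and p_1
    dp_prev, dp = 0.0, -1.0         # p_0' and p_1'
    for k in range(1, len(a)):
        p_new = (a[k] - lam) * p - c[k] ** 2 * p_prev
        dp_new = (a[k] - lam) * dp - c[k] ** 2 * dp_prev - p
        p_prev, p = p, p_new
        dp_prev, dp = dp, dp_new
    return p, dp

# At an exact eigenvalue of the 3x3 matrix tridiag(-1, 2, -1), p must vanish:
p, dp = char_poly([2.0, 2.0, 2.0], [0.0, -1.0, -1.0], 2.0 + np.sqrt(2.0))
```

The pair \( \left( {p,{p}^{\prime }}\right) \) is exactly what Newton's method for the eigenvalues needs.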
The \( n \times n \) tridiagonal matrix\n\n\[ A = \left( \begin{array}{rrrrrr} 2 & - 1 & & & & \\ - 1 & 2 & - 1 & & & \\ & - 1 & 2 & - 1 & & \\ \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\ & & & - 1 & 2 & - 1 \\ & & & & - 1 & 2 \end{array}\right) \]\n\nhas the eigenvalues\n\n\[ {\lambda }_{j} = 4{\sin }^{2}\frac{j\pi }{2\left( {n + 1}\right) },\;j = 1,\ldots, n \]
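The formula is easy to verify against a dense eigenvalue solver (an illustration):

```python
import numpy as np

n = 8
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # tridiag(-1, 2, -1)
computed = np.sort(np.linalg.eigvalsh(A))
# lambda_j = 4 sin^2( j*pi / (2(n+1)) ), j = 1, ..., n
exact = np.sort(4 * np.sin(np.arange(1, n + 1) * np.pi / (2 * (n + 1))) ** 2)
```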
Theorem 7.25 Let \( B = \left( {b}_{jk}\right) \) be an irreducible Hessenberg matrix and let \( \lambda \in \mathbb{C} \) . Starting from \( {\xi }_{n} = 1,{\eta }_{n} = 0 \), compute recursively\n\n\[ \n{\xi }_{n - k} = \frac{1}{{b}_{n - k + 1, n - k}}\left\{ {\lambda {\xi }_{n - k + 1} - \mathop{\sum }\limits_{{j = n - k + 1}}^{n}{b}_{n - k + 1, j}{\xi }_{j}}\right\} \n\]\n\n\[ \n{\eta }_{n - k} = \frac{1}{{b}_{n - k + 1, n - k}}\left\{ {{\xi }_{n - k + 1} + \lambda {\eta }_{n - k + 1} - \mathop{\sum }\limits_{{j = n - k + 1}}^{n}{b}_{n - k + 1, j}{\eta }_{j}}\right\} \n\]\n\nfor \( k = 1,\ldots, n - 1 \) and\n\n\[ \n\alpha = - \lambda {\xi }_{1} + \mathop{\sum }\limits_{{j = 1}}^{n}{b}_{1j}{\xi }_{j} \n\]\n\n\[ \n\beta = - {\xi }_{1} - \lambda {\eta }_{1} + \mathop{\sum }\limits_{{j = 1}}^{n}{b}_{1j}{\eta }_{j} \n\]\n\nThen for the characteristic polynomial of \( B \) we have\n\n\[ \n\frac{p\left( \lambda \right) }{{p}^{\prime }\left( \lambda \right) } = \frac{\alpha }{\beta } \n\]
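The recursion of Theorem 7.25 (often attributed to Hyman) can be sketched as follows; the function name is ours, and the tridiagonal example matrix from earlier in the chapter is reused, combined with Newton's method for one eigenvalue:

```python
import numpy as np

def newton_step(B, lam):
    """Newton correction p(lam)/p'(lam) for the characteristic polynomial
    of an irreducible Hessenberg matrix B, via the recursions above."""
    n = B.shape[0]
    xi = np.zeros(n)
    eta = np.zeros(n)
    xi[n - 1] = 1.0                      # xi_n = 1, eta_n = 0
    for i in range(n - 2, -1, -1):       # i is the 0-based index of xi_{i+1}
        row = B[i + 1, i + 1:]
        xi[i] = (lam * xi[i + 1] - row @ xi[i + 1:]) / B[i + 1, i]
        eta[i] = (xi[i + 1] + lam * eta[i + 1] - row @ eta[i + 1:]) / B[i + 1, i]
    alpha = -lam * xi[0] + B[0, :] @ xi
    beta = -xi[0] - lam * eta[0] + B[0, :] @ eta
    return alpha / beta                  # equals p(lam)/p'(lam)

B = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]])
lam = 3.5
for _ in range(50):
    lam = lam - newton_step(B, lam)      # converges to 2 + sqrt(2)
```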
Theorem 8.1 For \( n \in \mathbb{N} \cup \{ 0\} \), each polynomial in \( {P}_{n} \) that has more than \( n \) (complex) zeros, where each zero is counted repeatedly according to its multiplicity, must vanish identically; i.e., all its coefficients must be equal to zero.
Proof. Obviously, the statement is true for \( n = 0 \) . Assume that it has been proven for some \( n \geq 0 \) . By using the binomial formula for \( {x}^{k} = {\left\lbrack \left( x - z\right) + z\right\rbrack }^{k} \) we can rewrite the polynomial \( p \in {P}_{n + 1} \) in the form\n\n\[ p\left( x\right) = \mathop{\sum }\limits_{{k = 1}}^{{n + 1}}{b}_{k}{\left( x - z\right) }^{k} + {b}_{0} \]\n\nwith the coefficients \( {b}_{0},{b}_{1},\ldots ,{b}_{n + 1} \) depending on \( {a}_{0},{a}_{1},\ldots ,{a}_{n + 1} \) and \( z \) . If \( z \) is a zero of \( p \), then we must have \( {b}_{0} = 0 \), and this implies that \( p\left( x\right) = \left( {x - z}\right) q\left( x\right) \) with \( q \in {P}_{n} \) . Obviously, \( q \) has more than \( n \) zeros, since \( p \) has more than \( n + 1 \) zeros. Hence, by the induction assumption, \( q \) must vanish identically, and this implies that \( p \) vanishes identically.
Theorem 8.2 The monomials \( {u}_{k}\left( x\right) \mathrel{\text{:=}} {x}^{k}, k = 0,\ldots, n \), are linearly independent.
Proof. In order to prove this, assume that\n\n\[ \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}{u}_{k} = 0 \]\n\nthat is,\n\n\[ \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}{x}^{k} = 0,\;x \in \left\lbrack {a, b}\right\rbrack \]\n\nThen the polynomial with coefficients \( {a}_{0},{a}_{1},\ldots ,{a}_{n} \) has more than \( n \) distinct zeros, and from Theorem 8.1 it follows that all the coefficients must be zero.
Theorem 8.3 Given \( n + 1 \) distinct points \( {x}_{0},\ldots ,{x}_{n} \in \left\lbrack {a, b}\right\rbrack \) and \( n + 1 \) values \( {y}_{0},\ldots ,{y}_{n} \in \mathbb{R} \), there exists a unique polynomial \( {p}_{n} \in {P}_{n} \) with the property\n\n\[ \n{p}_{n}\left( {x}_{j}\right) = {y}_{j},\;j = 0,\ldots, n.\n\]
Proof. We note that \( {\ell }_{k} \in {P}_{n} \) for \( k = 0,\ldots, n \) and that the equations\n\n\[ \n{\ell }_{k}\left( {x}_{j}\right) = {\delta }_{jk},\;j, k = 0,\ldots, n\n\]\n\nhold, where \( {\delta }_{jk} = 1 \) for \( k = j \), and \( {\delta }_{jk} = 0 \) for \( k \neq j \) . It follows that \( {p}_{n} \) given by (8.2) is in \( {P}_{n} \), and it fulfills the required interpolation conditions \( {p}_{n}\left( {x}_{j}\right) = {y}_{j}, j = 0,\ldots, n \) .\n\nTo prove uniqueness of the interpolation polynomial we assume that \( {p}_{n,1},{p}_{n,2} \in {P}_{n} \) are two polynomials satisfying (8.1). Then the difference \( {p}_{n} \mathrel{\text{:=}} {p}_{n,1} - {p}_{n,2} \) satisfies \( {p}_{n}\left( {x}_{j}\right) = 0, j = 0,\ldots, n \) ; i.e., the polynomial \( {p}_{n} \in {P}_{n} \) has \( n + 1 \) zeros and therefore by Theorem 8.1 must be identically zero. This implies that \( {p}_{n,1} = {p}_{n,2} \) .
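The Lagrange representation can be evaluated directly from the factors \( {\ell }_{k} \). The following sketch (function names are ours, not part of the text) assumes the standard form \( {\ell }_{k}\left( x\right) = \mathop{\prod }\nolimits_{i \neq k}\left( {x - {x}_{i}}\right) /\left( {{x}_{k} - {x}_{i}}\right) \) referenced as (8.2):

```python
def lagrange_factor(x, nodes, k):
    """Evaluate the Lagrange factor l_k at x for distinct nodes."""
    val = 1.0
    for i, xi in enumerate(nodes):
        if i != k:
            val *= (x - xi) / (nodes[k] - xi)
    return val

def lagrange_interpolate(x, nodes, values):
    """Evaluate p_n(x) = sum_k y_k l_k(x)."""
    return sum(y * lagrange_factor(x, nodes, k) for k, y in enumerate(values))

# interpolate f(x) = x^2 at three nodes; since f is in P_2,
# the interpolant reproduces f exactly (uniqueness of Theorem 8.3)
nodes = [0.0, 1.0, 2.0]
values = [x * x for x in nodes]
print(lagrange_interpolate(1.5, nodes, values))  # 2.25
```

The delta property \( {\ell }_{k}\left( {x}_{j}\right) = {\delta }_{jk} \) used in the proof is visible numerically: each factor is one at its own node and zero at the others.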
Lemma 8.6 The divided differences satisfy the relation\n\n\[ \n{D}_{j}^{k} = \mathop{\sum }\limits_{{m = j}}^{{j + k}}{y}_{m}\mathop{\prod }\limits_{\substack{{i = j} \\ {i \neq m} }}^{{j + k}}\frac{1}{{x}_{m} - {x}_{i}},\;j = 0,\ldots, n - k,\;k = 1,\ldots, n. \n\]\n\n(8.4)
Proof. We proceed by induction with respect to the order \( k \) . Trivially, (8.4) holds for \( k = 1 \) . We assume that (8.4) has been proven for order \( k - 1 \) for some \( k \geq 2 \) . Then, using Definition 8.4, the induction assumption, and the identity\n\n\[ \n\frac{1}{{x}_{j + k} - {x}_{j}}\left\{ {\frac{1}{{x}_{m} - {x}_{j + k}} - \frac{1}{{x}_{m} - {x}_{j}}}\right\} = \frac{1}{\left( {{x}_{m} - {x}_{j + k}}\right) \left( {{x}_{m} - {x}_{j}}\right) }, \n\]\n\nwe obtain\n\n\[ \n{D}_{j}^{k} = \frac{1}{{x}_{j + k} - {x}_{j}}\left\{ {\mathop{\sum }\limits_{{m = j + 1}}^{{j + k}}{y}_{m}\mathop{\prod }\limits_{\substack{{i = j + 1} \\ {i \neq m} }}^{{j + k}}\frac{1}{{x}_{m} - {x}_{i}} - \mathop{\sum }\limits_{{m = j}}^{{j + k - 1}}{y}_{m}\mathop{\prod }\limits_{\substack{{i = j} \\ {i \neq m} }}^{{j + k - 1}}\frac{1}{{x}_{m} - {x}_{i}}}\right\} \n\]\n\n\[ \n= \frac{1}{{x}_{j + k} - {x}_{j}}\mathop{\sum }\limits_{{m = j + 1}}^{{j + k - 1}}{y}_{m}\left\{ {\frac{1}{{x}_{m} - {x}_{j + k}} - \frac{1}{{x}_{m} - {x}_{j}}}\right\} \mathop{\prod }\limits_{\substack{{i = j + 1} \\ {i \neq m} }}^{{j + k - 1}}\frac{1}{{x}_{m} - {x}_{i}} \n\]\n\n\[ \n+ {y}_{j + k}\mathop{\prod }\limits_{{i = j}}^{{j + k - 1}}\frac{1}{{x}_{j + k} - {x}_{i}} + {y}_{j}\mathop{\prod }\limits_{{i = j + 1}}^{{j + k}}\frac{1}{{x}_{j} - {x}_{i}} = \mathop{\sum }\limits_{{m = j}}^{{j + k}}{y}_{m}\mathop{\prod }\limits_{\substack{{i = j} \\ {i \neq m} }}^{{j + k}}\frac{1}{{x}_{m} - {x}_{i}} \n\]\n\ni.e., (8.4) also holds for order \( k \) .
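Both sides of (8.4) are straightforward to compute, so the lemma can be checked numerically. A sketch (function names are ours), with the recursive table built as in Definition 8.4 and the explicit sum taken from (8.4):

```python
def divided_differences(xs, ys):
    """Table D[k][j] of divided differences D_j^k via the recursion
    D_j^k = (D_{j+1}^{k-1} - D_j^{k-1}) / (x_{j+k} - x_j)."""
    n = len(xs) - 1
    D = [list(ys)]
    for k in range(1, n + 1):
        D.append([(D[k - 1][j + 1] - D[k - 1][j]) / (xs[j + k] - xs[j])
                  for j in range(n - k + 1)])
    return D

def divided_difference_explicit(xs, ys, j, k):
    """Explicit representation (8.4) as a sum over the nodes x_j, ..., x_{j+k}."""
    total = 0.0
    for m in range(j, j + k + 1):
        prod = 1.0
        for i in range(j, j + k + 1):
            if i != m:
                prod *= 1.0 / (xs[m] - xs[i])
        total += ys[m] * prod
    return total

xs = [0.0, 1.0, 3.0, 4.0]
ys = [1.0, 2.0, 0.0, 5.0]
D = divided_differences(xs, ys)
print(abs(D[2][0] - divided_difference_explicit(xs, ys, 0, 2)) < 1e-12)  # True
```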
Theorem 8.7 In the Newton representation, for \( n \geq 1 \) the uniquely determined interpolation polynomial \( {p}_{n} \) of Theorem 8.3 is given by\n\n\[ \n{p}_{n}\left( x\right) = {y}_{0} + \mathop{\sum }\limits_{{k = 1}}^{n}{D}_{0}^{k}\mathop{\prod }\limits_{{i = 0}}^{{k - 1}}\left( {x - {x}_{i}}\right) .\n\]\n\n(8.5)
Proof. We denote the right-hand side of (8.5) by \( {\widetilde{p}}_{n} \) and establish \( {p}_{n} = {\widetilde{p}}_{n} \) by induction with respect to the degree \( n \) . For \( n = 1 \) the representation (8.5) is correct. We assume that (8.5) has been proven for degree \( n - 1 \) for some \( n \geq 2 \) and consider the difference \( {d}_{n} \mathrel{\text{:=}} {p}_{n} - {\widetilde{p}}_{n} \) . Since\n\n\[ \n{d}_{n}\left( x\right) = {p}_{n}\left( x\right) - {\widetilde{p}}_{n - 1}\left( x\right) - {D}_{0}^{n}\mathop{\prod }\limits_{{i = 0}}^{{n - 1}}\left( {x - {x}_{i}}\right) ,\n\]\n\nas a consequence of Theorem 8.3 and Lemma 8.6 the coefficient of \( {x}^{n} \) in the polynomial \( {d}_{n} \) vanishes; i.e., \( {d}_{n} \in {P}_{n - 1} \) . Using the induction assumption, we have that\n\n\[ \n{\widetilde{p}}_{n - 1}\left( {x}_{j}\right) = {y}_{j} = {p}_{n}\left( {x}_{j}\right) ,\;j = 0,\ldots, n - 1,\n\]\n\nand therefore\n\n\[ \n{d}_{n}\left( {x}_{j}\right) = 0,\;j = 0,\ldots, n - 1.\n\]\n\nHence, by Theorem 8.1 it follows that \( {d}_{n} = 0 \), and therefore \( {p}_{n} = {\widetilde{p}}_{n} \) .
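The Newton representation (8.5) lends itself to an in-place computation of the coefficients \( {y}_{0},{D}_{0}^{1},\ldots ,{D}_{0}^{n} \) and a Horner-type nested evaluation. A sketch under these standard conventions (function names are ours):

```python
def newton_coefficients(xs, ys):
    """Coefficients y_0, D_0^1, ..., D_0^n of (8.5), computed in place
    from the divided-difference recursion (overwriting from the right)."""
    coeffs = list(ys)
    n = len(xs) - 1
    for k in range(1, n + 1):
        for j in range(n, k - 1, -1):
            coeffs[j] = (coeffs[j] - coeffs[j - 1]) / (xs[j] - xs[j - k])
    return coeffs

def newton_eval(x, xs, coeffs):
    """Evaluate (8.5) by a Horner-type nested scheme."""
    result = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[k]) + coeffs[k]
    return result

xs = [0.0, 1.0, 2.0, 3.0]
ys = [x ** 3 for x in xs]        # f(x) = x^3 lies in P_3, so it is reproduced
c = newton_coefficients(xs, ys)
print(newton_eval(2.5, xs, c))   # 15.625
```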
Theorem 8.9 Given \( n + 1 \) distinct points \( {x}_{0},\ldots ,{x}_{n} \in \left\lbrack {a, b}\right\rbrack \) and \( n + 1 \) values \( {y}_{0},\ldots ,{y}_{n} \in \mathbb{R} \), the uniquely determined interpolation polynomials \( {p}_{i}^{k} \in {P}_{k}, i = 0,\ldots, n - k, k = 0,\ldots, n \), with the interpolation property\n\n\[ \n{p}_{i}^{k}\left( {x}_{j}\right) = {y}_{j},\;j = i,\ldots, i + k \n\]\n\nsatisfy the recursive relation\n\n\[ \n{p}_{i}^{0}\left( x\right) = {y}_{i} \n\]\n\n\[ \n{p}_{i}^{k}\left( x\right) = \frac{\left( {x - {x}_{i}}\right) {p}_{i + 1}^{k - 1}\left( x\right) - \left( {x - {x}_{i + k}}\right) {p}_{i}^{k - 1}\left( x\right) }{{x}_{i + k} - {x}_{i}},\;k = 1,\ldots, n. \n\]\n\n(8.6)
Proof. We again proceed by induction with respect to the degree \( k \) . Obviously, the statement is true for \( k = 1 \) . Assume that the assertion has been proven for degree \( k - 1 \) for some \( k \geq 2 \) . Then the right-hand side of (8.6) describes a polynomial \( p \in {P}_{k} \), and by the induction assumption we find that the interpolation conditions\n\n\[ \np\left( {x}_{j}\right) = \frac{\left( {{x}_{j} - {x}_{i}}\right) {y}_{j} - \left( {{x}_{j} - {x}_{i + k}}\right) {y}_{j}}{{x}_{i + k} - {x}_{i}} = {y}_{j},\;j = i + 1,\ldots, i + k - 1, \n\]\n\nas well as \( p\left( {x}_{i}\right) = {y}_{i} \) and \( p\left( {x}_{i + k}\right) = {y}_{i + k} \) are fulfilled.
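The recursion (8.6) yields a convenient scheme (Neville's algorithm) that evaluates \( {p}_{0}^{n}\left( x\right) \) at a single point without forming any coefficients; a sketch (function names are ours):

```python
def neville(x, xs, ys):
    """Evaluate the interpolation polynomial at x via the recursion (8.6).
    p[i] initially holds p_i^0(x) = y_i and is overwritten column by column."""
    p = list(ys)
    n = len(xs) - 1
    for k in range(1, n + 1):
        for i in range(n - k + 1):
            p[i] = ((x - xs[i]) * p[i + 1]
                    - (x - xs[i + k]) * p[i]) / (xs[i + k] - xs[i])
    return p[0]

xs = [1.0, 2.0, 4.0]
ys = [1.0, 0.5, 0.25]   # samples of f(x) = 1/x
print(neville(3.0, xs, ys))  # 0.25
```

Evaluating at a node reproduces the corresponding value exactly, in accordance with the interpolation property.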
Theorem 8.10 Let \( f : \left\lbrack {a, b}\right\rbrack \rightarrow \mathbb{R} \) be \( \left( {n + 1}\right) \) -times continuously differentiable. Then the remainder \( {R}_{n}f \mathrel{\text{:=}} f - {L}_{n}f \) for polynomial interpolation with \( n + 1 \) distinct points \( {x}_{0},\ldots ,{x}_{n} \in \left\lbrack {a, b}\right\rbrack \) can be represented in the form\n\n\[ \left( {{R}_{n}f}\right) \left( x\right) = \frac{{f}^{\left( n + 1\right) }\left( \xi \right) }{\left( {n + 1}\right) !}\mathop{\prod }\limits_{{j = 0}}^{n}\left( {x - {x}_{j}}\right) ,\;x \in \left\lbrack {a, b}\right\rbrack ,\]\n\n(8.8)\n\nfor some \( \xi \in \left\lbrack {a, b}\right\rbrack \) depending on \( x \) .
Proof. Since (8.8) is trivially satisfied if \( x \) coincides with one of the interpolation points \( {x}_{0},\ldots ,{x}_{n} \), we need be concerned only with the case where \( x \) does not coincide with one of the interpolation points. We define\n\n\[ {q}_{n + 1}\left( x\right) \mathrel{\text{:=}} \mathop{\prod }\limits_{{j = 0}}^{n}\left( {x - {x}_{j}}\right) \]\n\nand, keeping \( x \) fixed, consider \( g : \left\lbrack {a, b}\right\rbrack \rightarrow \mathbb{R} \) given by\n\n\[ g\left( y\right) \mathrel{\text{:=}} f\left( y\right) - \left( {{L}_{n}f}\right) \left( y\right) - {q}_{n + 1}\left( y\right) \frac{f\left( x\right) - \left( {{L}_{n}f}\right) \left( x\right) }{{q}_{n + 1}\left( x\right) },\;y \in \left\lbrack {a, b}\right\rbrack . \]\n\nBy the assumption on \( f \), the function \( g \) is also \( \left( {n + 1}\right) \) -times continuously differentiable. Obviously, \( g \) has at least \( n + 2 \) zeros, namely \( x \) and \( {x}_{0},\ldots ,{x}_{n} \) . Then, by Rolle’s theorem the derivative \( {g}^{\prime } \) has at least \( n + 1 \) zeros. Repeating the argument, by induction we deduce that the derivative \( {g}^{\left( n + 1\right) } \) has at least one zero in \( \left\lbrack {a, b}\right\rbrack \), which we denote by \( \xi \) . For this zero we have that\n\n\[ 0 = {f}^{\left( n + 1\right) }\left( \xi \right) - \left( {n + 1}\right) !\frac{\left( {{R}_{n}f}\right) \left( x\right) }{{q}_{n + 1}\left( x\right) }, \]\n\nand from this we obtain (8.8).
Corollary 8.11 Under the assumptions of Theorem 8.10 we have the error estimate \[ {\begin{Vmatrix}{R}_{n}f\end{Vmatrix}}_{\infty } \leq \frac{1}{\left( {n + 1}\right) !}{\begin{Vmatrix}{q}_{n + 1}\end{Vmatrix}}_{\infty }{\begin{Vmatrix}{f}^{\left( n + 1\right) }\end{Vmatrix}}_{\infty }.\]
Proof. For each \( x \in \left\lbrack {a, b}\right\rbrack \), estimating the two factors in the representation (8.8) by \( \left| {{f}^{\left( n + 1\right) }\left( \xi \right) }\right| \leq {\begin{Vmatrix}{f}^{\left( n + 1\right) }\end{Vmatrix}}_{\infty } \) and \( \left| {{q}_{n + 1}\left( x\right) }\right| \leq {\begin{Vmatrix}{q}_{n + 1}\end{Vmatrix}}_{\infty } \) yields the assertion.
Example 8.12 The linear interpolation is given by\n\n\[ \n\left( {{L}_{1}f}\right) \left( x\right) = \frac{1}{h}\left\lbrack {f\left( {x}_{0}\right) \left( {{x}_{1} - x}\right) + f\left( {x}_{1}\right) \left( {x - {x}_{0}}\right) }\right\rbrack \n\]\n\nwith the step width \( h = {x}_{1} - {x}_{0} \) . For the polynomial \( {q}_{2}\left( x\right) = \left( {x - {x}_{0}}\right) \left( {x - {x}_{1}}\right) \) we have that\n\n\[ \n\mathop{\max }\limits_{{x \in \left\lbrack {{x}_{0},{x}_{1}}\right\rbrack }}\left| {{q}_{2}\left( x\right) }\right| = \frac{{h}^{2}}{4}.\n\]\n\nTherefore, by Corollary 8.11, the error occurring in linear interpolation of a twice continuously differentiable function \( f \) can be estimated by\n\n\[ \n\left| {\left( {{R}_{1}f}\right) \left( x\right) }\right| \leq \frac{{h}^{2}}{8}\mathop{\max }\limits_{{y \in \left\lbrack {{x}_{0},{x}_{1}}\right\rbrack }}\left| {{f}^{\prime \prime }\left( y\right) }\right| ,\;x \in \left\lbrack {{x}_{0},{x}_{1}}\right\rbrack .\n\]
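The estimate can be checked numerically by sampling the error of the linear interpolant; a sketch (function names and the sampling resolution are ours):

```python
import math

def linear_interp_error(f, x0, x1, samples=1000):
    """Maximum deviation of f from its linear interpolant L_1 f on [x0, x1],
    approximated on a fine sample grid."""
    h = x1 - x0
    worst = 0.0
    for i in range(samples + 1):
        x = x0 + h * i / samples
        lin = (f(x0) * (x1 - x) + f(x1) * (x - x0)) / h
        worst = max(worst, abs(f(x) - lin))
    return worst

h = 0.1
err = linear_interp_error(math.sin, 0.0, h)
bound = h * h / 8            # since |sin''| <= 1 on [0, h]
print(err <= bound)  # True
```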
Example 8.13 Let \( f\left( x\right) \mathrel{\text{:=}} \sin x \) and let \( {x}_{0},\ldots ,{x}_{n} \in \left\lbrack {0,\pi }\right\rbrack \) be \( n + 1 \) distinct points. Since
\[ \left| {{f}^{\left( n + 1\right) }\left( x\right) }\right| \leq 1,\;x \in \left\lbrack {0,\pi }\right\rbrack \] and \[ \left| {{q}_{n + 1}\left( x\right) }\right| \leq {\pi }^{n + 1},\;x \in \left\lbrack {0,\pi }\right\rbrack \] by Corollary 8.11, we have the estimate \[ \left| {\left( {{R}_{n}f}\right) \left( x\right) }\right| \leq \frac{{\pi }^{n + 1}}{\left( {n + 1}\right) !},\;x \in \left\lbrack {0,\pi }\right\rbrack \] Hence the sequence \( \left( {{L}_{n}f}\right) \) of interpolation polynomials converges to the interpolated function \( f \) uniformly on \( \left\lbrack {0,\pi }\right\rbrack \) as \( n \rightarrow \infty \) .
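The bound \( {\pi }^{n + 1}/\left( {n + 1}\right) ! \) is easy to tabulate; a quick numerical sketch (not part of the text) showing that the factorial eventually dominates:

```python
import math

# uniform error bound pi^(n+1)/(n+1)! for interpolating sin on [0, pi]
bounds = [math.pi ** (n + 1) / math.factorial(n + 1) for n in range(25)]

# the bound first grows (while n + 1 < pi) and then decays superexponentially
print(bounds[2] > bounds[0], bounds[24] < 1e-10)  # True True
```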
Example 8.14 A first detailed example of the insufficiency of polynomial interpolation even for analytic functions was investigated by Runge in 1901. He considered the simple function\n\n\[ f\left( x\right) = \frac{1}{1 + {25}{x}^{2}} \]\n\non the interval \( \left\lbrack {-1,1}\right\rbrack \) with equidistant interpolation points. He discovered that as the degree \( n \) tends to infinity, the interpolation polynomials diverge for \( {0.726} \leq \left| x\right| \leq 1 \), whereas the approximation works satisfactorily in the central portion of the interval (see Problem 8.6). Although \( f \) is analytic in all of \( \mathbb{R} \), its poles in the complex plane at \( \pm i/5 \) are responsible for this divergence.
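Runge's observation is easy to reproduce by evaluating the equidistant interpolation polynomials with the recursion (8.6); a sketch (function names, the sample grid, and the interval \( \left\lbrack {0.7, 1}\right\rbrack \) are our choices):

```python
# Reproducing Runge's example: interpolate f(x) = 1/(1 + 25 x^2) at
# equidistant points on [-1, 1] and measure the error near the endpoint.
def neville(x, xs, ys):
    """Evaluate the interpolation polynomial at x via the recursion (8.6)."""
    p = list(ys)
    n = len(xs) - 1
    for k in range(1, n + 1):
        for i in range(n - k + 1):
            p[i] = ((x - xs[i]) * p[i + 1]
                    - (x - xs[i + k]) * p[i]) / (xs[i + k] - xs[i])
    return p[0]

def f(x):
    return 1.0 / (1.0 + 25.0 * x * x)

def max_error(n, lo=0.7, hi=1.0, samples=200):
    """Maximum interpolation error on [lo, hi], sampled on a fine grid."""
    xs = [-1.0 + 2.0 * j / n for j in range(n + 1)]
    ys = [f(x) for x in xs]
    return max(abs(neville(lo + (hi - lo) * i / samples, xs, ys)
                   - f(lo + (hi - lo) * i / samples))
               for i in range(samples + 1))

print(max_error(5), max_error(10), max_error(20), max_error(40))  # grows rapidly
```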
Example 8.15 Consider the continuous function\n\n\[ \nf\left( x\right) \mathrel{\text{:=}} \left\{ \begin{array}{ll} x\sin \frac{\pi }{x}, & x \in (0,1\rbrack \\ 0, & x = 0 \end{array}\right. \n\]\n\nWith the interpolation points chosen as\n\n\[ \n{x}_{j} = \frac{1}{j + 1},\;j = 0,\ldots, n \n\]\n\nwe have that \( f\left( {x}_{j}\right) = 0, j = 0,\ldots, n \), and therefore \( {L}_{n}f = 0 \) for all \( n \) . Hence, in this case the sequence \( \left( {{L}_{n}f}\right) \) converges only at the points \( {x}_{j}, j \in \mathbb{N} \cup \{ 0\} \), to the interpolated function \( f \) .
Theorem 8.16 (Marcinkiewicz) For each function \( f \in C\left\lbrack {a, b}\right\rbrack \) there exists a sequence of interpolation points \( \left( {x}_{j}^{\left( n\right) }\right), j = 0,\ldots, n, n = 0,1,\ldots \) , such that the sequence \( \left( {{L}_{n}f}\right) \) of interpolation polynomials \( {L}_{n}f \in {P}_{n} \) with \( \left( {{L}_{n}f}\right) \left( {x}_{j}^{\left( n\right) }\right) = f\left( {x}_{j}^{\left( n\right) }\right), j = 0,\ldots, n \), converges to \( f \) uniformly on \( \left\lbrack {a, b}\right\rbrack \) .
Proof. The proof relies on the Weierstrass approximation theorem and the Chebyshev alternation theorem. The Weierstrass approximation theorem (see [16]) ensures that for each \( f \in C\left\lbrack {a, b}\right\rbrack \) there exists a sequence of polynomials \( {p}_{n} \in {P}_{n} \) such that \( {\begin{Vmatrix}{p}_{n} - f\end{Vmatrix}}_{\infty } \rightarrow 0 \) as \( n \rightarrow \infty \) . As a consequence of the Chebyshev alternation theorem from approximation theory (see [16]), for the uniquely determined best approximation \( {\widetilde{p}}_{n} \) to \( f \) in the maximum norm with respect to \( {P}_{n} \), the error \( {\widetilde{p}}_{n} - f \) has at least \( n + 1 \) zeros in \( \left\lbrack {a, b}\right\rbrack \) . Then taking the sequence of these zeros as the sequence of interpolation points implies the statement of the theorem.
Theorem 8.17 (Faber) For each sequence of interpolation points \( \left( {x}_{j}^{\left( n\right) }\right) \) there exists a function \( f \in C\left\lbrack {a, b}\right\rbrack \) such that the sequence \( \left( {{L}_{n}f}\right) \) of interpolation polynomials \( {L}_{n}f \in {P}_{n} \) does not converge to \( f \) uniformly on \( \left\lbrack {a, b}\right\rbrack \) .
Proof. This is a consequence of the uniform boundedness principle, Theorem 12.7. It implies that from the convergence of the sequence \( \left( {{L}_{n}f}\right) \) for all \( f \in C\left\lbrack {a, b}\right\rbrack \) it follows that there must exist a constant \( C > 0 \) such that \( {\begin{Vmatrix}{L}_{n}\end{Vmatrix}}_{\infty } \leq C \) for all \( n \in \mathbb{N} \) . Then the statement of the theorem is obtained by showing that the interpolation operator \( {L}_{n} \) satisfies \( {\begin{Vmatrix}{L}_{n}\end{Vmatrix}}_{\infty } \geq c\ln n \) for all \( n \in \mathbb{N} \) and some \( c > 0 \) (see [16]).
Theorem 8.18 Given \( n + 1 \) distinct points \( {x}_{0},\ldots ,{x}_{n} \in \left\lbrack {a, b}\right\rbrack \) and \( {2n} + 2 \) values \( {y}_{0},\ldots ,{y}_{n} \in \mathbb{R} \) and \( {y}_{0}^{\prime },\ldots ,{y}_{n}^{\prime } \in \mathbb{R} \), there exists a unique polynomial \( {p}_{{2n} + 1} \in {P}_{{2n} + 1} \) with the property\n\n\[ \n{p}_{{2n} + 1}\left( {x}_{j}\right) = {y}_{j},\;{p}_{{2n} + 1}^{\prime }\left( {x}_{j}\right) = {y}_{j}^{\prime },\;j = 0,\ldots, n.\n\]\n\n(8.10)
This Hermite interpolation polynomial is given by\n\n\[ \n{p}_{{2n} + 1} = \mathop{\sum }\limits_{{k = 0}}^{n}\left\lbrack {{y}_{k}{H}_{k}^{0} + {y}_{k}^{\prime }{H}_{k}^{1}}\right\rbrack\n\]\n\n(8.11)\n\nwith the Hermite factors\n\n\[ \n{H}_{k}^{0}\left( x\right) \mathrel{\text{:=}} \left\lbrack {1 - 2{\ell }_{k}^{\prime }\left( {x}_{k}\right) \left( {x - {x}_{k}}\right) }\right\rbrack {\left\lbrack {\ell }_{k}\left( x\right) \right\rbrack }^{2},\;{H}_{k}^{1}\left( x\right) \mathrel{\text{:=}} \left( {x - {x}_{k}}\right) {\left\lbrack {\ell }_{k}\left( x\right) \right\rbrack }^{2}\n\]\n\nexpressed in terms of the Lagrange factors from Theorem 8.3.\n\nProof. Obviously, the polynomial \( {p}_{{2n} + 1} \) belongs to \( {P}_{{2n} + 1} \), since the Hermite factors have degree \( {2n} + 1 \) . From (8.3), by elementary calculations it can be seen that (see Problem 8.7)\n\n\[ \n{H}_{k}^{0}\left( {x}_{j}\right) = {H}_{k}^{1\prime }\left( {x}_{j}\right) = {\delta }_{jk},\;{H}_{k}^{0\prime }\left( {x}_{j}\right) = {H}_{k}^{1}\left( {x}_{j}\right) = 0,\;j, k = 0,\ldots, n.\n\]\n\n(8.12)\n\nFrom this it follows that the polynomial (8.11) satisfies the Hermite interpolation property (8.10).\n\nTo prove uniqueness of the Hermite interpolation polynomial we assume that \( {p}_{{2n} + 1,1},{p}_{{2n} + 1,2} \in {P}_{{2n} + 1} \) are two polynomials having the interpolation property (8.10). Then the difference \( {p}_{{2n} + 1} \mathrel{\text{:=}} {p}_{{2n} + 1,1} - {p}_{{2n} + 1,2} \) satisfies\n\n\[ \n{p}_{{2n} + 1}\left( {x}_{j}\right) = {p}_{{2n} + 1}^{\prime }\left( {x}_{j}\right) = 0,\;j = 0,\ldots, n\n\]\n\ni.e., the polynomial \( {p}_{{2n} + 1} \in {P}_{{2n} + 1} \) has \( n + 1 \) zeros of order two and therefore, by Theorem 8.1, must be identically equal to zero. This implies that \( {p}_{{2n} + 1,1} = {p}_{{2n} + 1,2} \) .
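The representation (8.11) is directly computable, using the elementary fact \( {\ell }_{k}^{\prime }\left( {x}_{k}\right) = \mathop{\sum }\nolimits_{i \neq k}1/\left( {{x}_{k} - {x}_{i}}\right) \); a sketch (function names are ours):

```python
def hermite_interpolate(x, nodes, ys, dys):
    """Evaluate (8.11): p(x) = sum_k [y_k H_k^0(x) + y'_k H_k^1(x)]."""
    total = 0.0
    for k, xk in enumerate(nodes):
        lk = 1.0   # Lagrange factor l_k(x)
        dlk = 0.0  # l_k'(x_k) = sum_{i != k} 1 / (x_k - x_i)
        for i, xi in enumerate(nodes):
            if i != k:
                lk *= (x - xi) / (xk - xi)
                dlk += 1.0 / (xk - xi)
        H0 = (1.0 - 2.0 * dlk * (x - xk)) * lk * lk
        H1 = (x - xk) * lk * lk
        total += ys[k] * H0 + dys[k] * H1
    return total

# with n = 1 (two nodes) the Hermite interpolant lies in P_3,
# so f(x) = x^3 is reproduced exactly by uniqueness
nodes = [0.0, 1.0]
ys = [0.0, 1.0]          # f(x_j)
dys = [0.0, 3.0]         # f'(x_j) = 3 x_j^2
print(hermite_interpolate(0.5, nodes, ys, dys))  # 0.125
```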
Theorem 8.19 Let \( f : \left\lbrack {a, b}\right\rbrack \rightarrow \mathbb{R} \) be \( \left( {{2n} + 2}\right) \) -times continuously differentiable. Then the remainder \( {R}_{n}f \mathrel{\text{:=}} f - {H}_{n}f \) for Hermite interpolation with \( n + 1 \) distinct points \( {x}_{0},\ldots ,{x}_{n} \in \left\lbrack {a, b}\right\rbrack \) can be represented in the form\n\n\[ \left( {{R}_{n}f}\right) \left( x\right) = \frac{{f}^{\left( 2n + 2\right) }\left( \xi \right) }{\left( {{2n} + 2}\right) !}\mathop{\prod }\limits_{{j = 0}}^{n}{\left( x - {x}_{j}\right) }^{2},\;x \in \left\lbrack {a, b}\right\rbrack ,\]\n\nfor some \( \xi \in \left\lbrack {a, b}\right\rbrack \) depending on \( x \) .
Proof. The proof is analogous to that of Theorem 8.10: for \( x \) distinct from the interpolation points one considers the function \( g \) built with \( \mathop{\prod }\limits_{{j = 0}}^{n}{\left( y - {x}_{j}\right) }^{2} \) in place of \( {q}_{n + 1}\left( y\right) \) . Since each interpolation point is then a zero of order two of \( g \), repeated application of Rolle’s theorem shows that \( {g}^{\left( {2n} + 2\right) } \) has a zero \( \xi \in \left\lbrack {a, b}\right\rbrack \), from which the representation follows.
Theorem 8.21 A trigonometric polynomial in \( {T}_{n} \) that has more than \( {2n} \) distinct zeros in the periodicity interval \( \lbrack 0,{2\pi }) \) must vanish identically; i.e., all its coefficients must be equal to zero.
Proof. We consider a trigonometric polynomial \( q \in {T}_{n} \) of the form\n\n\[ q\left( t\right) = \frac{{a}_{0}}{2} + \mathop{\sum }\limits_{{k = 1}}^{n}\left\lbrack {{a}_{k}\cos {kt} + {b}_{k}\sin {kt}}\right\rbrack \]\n\n(8.14)\n\nSetting \( {b}_{0} = 0 \) ,\n\n\[ {\gamma }_{k} \mathrel{\text{:=}} \frac{1}{2}\left( {{a}_{k} - i{b}_{k}}\right) ,\;{\gamma }_{-k} \mathrel{\text{:=}} \frac{1}{2}\left( {{a}_{k} + i{b}_{k}}\right) ,\;k = 0,\ldots, n, \]\n\n(8.15)\n\nand using Euler's formula\n\n\[ {e}^{it} = \cos t + i\sin t \]\n\nwe can rewrite (8.14) in the complex form\n\n\[ q\left( t\right) = \mathop{\sum }\limits_{{k = - n}}^{n}{\gamma }_{k}{e}^{ikt} \]\n\n(8.16)\n\nTherefore, substituting \( z = {e}^{it} \) and setting\n\n\[ p\left( z\right) \mathrel{\text{:=}} \mathop{\sum }\limits_{{k = - n}}^{n}{\gamma }_{k}{z}^{n + k} \]\n\nwe have the relation\n\n\[ q\left( t\right) = {z}^{-n}p\left( z\right) \]\n\nNow assume that the trigonometric polynomial \( q \in {T}_{n} \) has more than \( {2n} \) distinct zeros in the interval \( \lbrack 0,{2\pi }) \) . Then the algebraic polynomial \( p \in {P}_{2n} \) has more than \( {2n} \) distinct zeros lying on the unit circle in the complex plane, since the function \( t \mapsto {e}^{it} \) maps \( \lbrack 0,{2\pi }) \) bijectively onto the unit circle. By Theorem 8.1, the algebraic polynomial \( p \) must be identically zero, and now (8.15) implies that also \( q \) must be identically zero.
Theorem 8.22 The cosine functions \( {c}_{k}\left( t\right) \mathrel{\text{:=}} \cos {kt}, k = 0,1,\ldots, n \), and the sine functions \( {s}_{k}\left( t\right) \mathrel{\text{:=}} \sin {kt}, k = 1,\ldots, n \), are linearly independent in the function space \( C\left\lbrack {0,{2\pi }}\right\rbrack \) .
Proof. To prove this, assume that\n\n\[ \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}{c}_{k} + \mathop{\sum }\limits_{{k = 1}}^{n}{b}_{k}{s}_{k} = 0 \]\n\nthat is,\n\n\[ \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}\cos {kt} + \mathop{\sum }\limits_{{k = 1}}^{n}{b}_{k}\sin {kt} = 0,\;t \in \left\lbrack {0,{2\pi }}\right\rbrack . \]\n\nThen the trigonometric polynomial with coefficients \( {a}_{0},\ldots ,{a}_{n} \) and \( {b}_{1},\ldots ,{b}_{n} \) has more than \( {2n} \) distinct zeros in \( \lbrack 0,{2\pi }) \), and from Theorem 8.21 it follows that all the coefficients must be zero. Note that this linear independence also can be deduced from Theorem 3.17.
Theorem 8.23 Given \( {2n} + 1 \) distinct points \( {t}_{0},\ldots ,{t}_{2n} \in \lbrack 0,{2\pi }) \) and \( {2n} + 1 \) values \( {y}_{0},\ldots ,{y}_{2n} \in \mathbb{R} \), there exists a uniquely determined trigonometric polynomial \( {q}_{n} \in {T}_{n} \) with the property\n\n\[ \n{q}_{n}\left( {t}_{j}\right) = {y}_{j},\;j = 0,\ldots ,{2n}.\n\]
Proof. The function \( {q}_{n} \) belongs to \( {T}_{n} \), since the Lagrange factors are trigonometric polynomials of degree \( n \) . The latter is a consequence of\n\n\[ \n\sin \frac{t - {t}_{0}}{2}\sin \frac{t - {t}_{1}}{2} = \frac{1}{2}\cos \frac{{t}_{1} - {t}_{0}}{2} - \frac{1}{2}\cos \left( {t - \frac{{t}_{1} + {t}_{0}}{2}}\right)\n\]\n\ni.e., each of the functions \( {\ell }_{k} \) is a product of \( n \) trigonometric polynomials of degree one. As in Theorem 8.3, we have \( {\ell }_{k}\left( {t}_{j}\right) = {\delta }_{jk} \) for \( j, k = 0,\ldots ,{2n} \) , which shows that \( {q}_{n} \) indeed solves the trigonometric interpolation problem.\n\nUniqueness of the trigonometric interpolation polynomial follows analogously to the proof of Theorem 8.3 with the aid of Theorem 8.21.
Theorem 8.24 There exists a unique trigonometric polynomial\n\n\[ \n{q}_{n}\left( t\right) = \frac{{a}_{0}}{2} + \mathop{\sum }\limits_{{k = 1}}^{n}\left\lbrack {{a}_{k}\cos {kt} + {b}_{k}\sin {kt}}\right\rbrack \n\]\n\nsatisfying the interpolation property\n\n\[ \n{q}_{n}\left( \frac{2\pi j}{{2n} + 1}\right) = {y}_{j},\;j = 0,\ldots ,{2n}. \n\]
Its coefficients are given by\n\n\[ \n{a}_{k} = \frac{2}{{2n} + 1}\mathop{\sum }\limits_{{j = 0}}^{{2n}}{y}_{j}\cos \frac{2\pi jk}{{2n} + 1},\;k = 0,\ldots, n, \n\]\n\n\[ \n{b}_{k} = \frac{2}{{2n} + 1}\mathop{\sum }\limits_{{j = 0}}^{{2n}}{y}_{j}\sin \frac{2\pi jk}{{2n} + 1},\;k = 1,\ldots, n. \n\]
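The coefficient sums of Theorem 8.24 can be evaluated and checked against the interpolation property at the equidistant points; a sketch (function names are ours):

```python
import math

def trig_coefficients(ys):
    """Coefficients a_k, b_k of Theorem 8.24 for an odd number 2n+1 of
    equidistant points t_j = 2 pi j / (2n+1)."""
    N = len(ys)                # N = 2n + 1
    n = (N - 1) // 2
    a = [2.0 / N * sum(y * math.cos(2 * math.pi * j * k / N)
                       for j, y in enumerate(ys)) for k in range(n + 1)]
    b = [0.0] + [2.0 / N * sum(y * math.sin(2 * math.pi * j * k / N)
                               for j, y in enumerate(ys))
                 for k in range(1, n + 1)]   # b[0] is an unused placeholder
    return a, b

def trig_eval(t, a, b):
    """Evaluate q_n(t) = a_0/2 + sum_k [a_k cos(kt) + b_k sin(kt)]."""
    return a[0] / 2 + sum(a[k] * math.cos(k * t) + b[k] * math.sin(k * t)
                          for k in range(1, len(a)))

ys = [1.0, -2.0, 0.5, 3.0, -1.0]     # 2n + 1 = 5 values, n = 2
a, b = trig_coefficients(ys)
t3 = 2 * math.pi * 3 / 5
print(abs(trig_eval(t3, a, b) - ys[3]) < 1e-10)  # True: interpolation at t_3
```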
Theorem 8.25 There exists a unique trigonometric polynomial\n\n\[ \n{q}_{n}\left( t\right) = \frac{{a}_{0}}{2} + \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}\left\lbrack {{a}_{k}\cos {kt} + {b}_{k}\sin {kt}}\right\rbrack + \frac{{a}_{n}}{2}\cos {nt} \n\]\n\nsatisfying the interpolation property\n\n\[ \n{q}_{n}\left( \frac{\pi j}{n}\right) = {y}_{j},\;j = 0,\ldots ,{2n} - 1. \n\]
Its coefficients are given by\n\n\[ \n{a}_{k} = \frac{1}{n}\mathop{\sum }\limits_{{j = 0}}^{{{2n} - 1}}{y}_{j}\cos \frac{\pi jk}{n},\;k = 0,\ldots, n \n\]\n\n\[ \n{b}_{k} = \frac{1}{n}\mathop{\sum }\limits_{{j = 0}}^{{{2n} - 1}}{y}_{j}\sin \frac{\pi jk}{n},\;k = 1,\ldots, n - 1. \n\]
Lemma 8.28 Let \( m = 2\ell - 1 \) with \( \ell \in \mathbb{N} \) and \( \ell \geq 2 \), and let \( f \in {C}^{\ell }\left\lbrack {a, b}\right\rbrack \) . Assume that the spline \( s \in {S}_{m}^{n} \) interpolates \( f \), i.e., \[ s\left( {x}_{j}\right) = f\left( {x}_{j}\right) ,\;j = 0,\ldots, n, \] (8.26) and that it satisfies the boundary conditions \[ {s}^{\left( j\right) }\left( a\right) = {f}^{\left( j\right) }\left( a\right) ,\;{s}^{\left( j\right) }\left( b\right) = {f}^{\left( j\right) }\left( b\right) ,\;j = 1,\ldots ,\ell - 1. \] (8.27) Then \[ {\int }_{a}^{b}{\left\lbrack {f}^{\left( \ell \right) }\left( x\right) - {s}^{\left( \ell \right) }\left( x\right) \right\rbrack }^{2}{dx} = {\int }_{a}^{b}{\left\lbrack {f}^{\left( \ell \right) }\left( x\right) \right\rbrack }^{2}{dx} - {\int }_{a}^{b}{\left\lbrack {s}^{\left( \ell \right) }\left( x\right) \right\rbrack }^{2}{dx}. \] (8.28)
Proof. We have that \[ {\int }_{a}^{b}{\left\lbrack {f}^{\left( \ell \right) }\left( x\right) - {s}^{\left( \ell \right) }\left( x\right) \right\rbrack }^{2}{dx} = {\int }_{a}^{b}{\left\lbrack {f}^{\left( \ell \right) }\left( x\right) \right\rbrack }^{2}{dx} - {\int }_{a}^{b}{\left\lbrack {s}^{\left( \ell \right) }\left( x\right) \right\rbrack }^{2}{dx} - {2R}, \] where \[ R \mathrel{\text{:=}} {\int }_{a}^{b}\left\lbrack {{f}^{\left( \ell \right) }\left( x\right) - {s}^{\left( \ell \right) }\left( x\right) }\right\rbrack {s}^{\left( \ell \right) }\left( x\right) {dx}. \] Since \( f \in {C}^{\ell }\left\lbrack {a, b}\right\rbrack \) and \( s \in {C}^{m - 1}\left\lbrack {a, b}\right\rbrack \) has piecewise continuous derivatives of order \( m \), by \( \ell - 1 \) repeated partial integrations and using the boundary conditions (8.27) we obtain that \[ R = {\left( -1\right) }^{\ell - 1}{\int }_{a}^{b}\left\lbrack {{f}^{\prime }\left( x\right) - {s}^{\prime }\left( x\right) }\right\rbrack {s}^{\left( m\right) }\left( x\right) {dx}. \] A further partial integration and the interpolation conditions now yield \[ R = {\left( -1\right) }^{\ell - 1}\mathop{\sum }\limits_{{j = 1}}^{n}{\int }_{{x}_{j - 1}}^{{x}_{j}}\left\lbrack {{f}^{\prime }\left( x\right) - {s}^{\prime }\left( x\right) }\right\rbrack {s}^{\left( m\right) }\left( x\right) {dx} \] \[ = {\left. {\left( -1\right) }^{\ell - 1}\mathop{\sum }\limits_{{j = 1}}^{n}\left\lbrack f\left( x\right) - s\left( x\right) \right\rbrack {s}^{\left( m\right) }\left( x\right) \right| }_{{x}_{j - 1}}^{{x}_{j}} = 0 \] since \( {s}^{\left( m + 1\right) } = 0 \) . This completes the proof.
Lemma 8.29 Under the assumptions of Lemma 8.28 let \( f = 0 \) . Then \( s = 0 \) .
Proof. For \( f = 0 \), from (8.28) it follows that\n\n\[ \n{\int }_{a}^{b}{\left\lbrack {s}^{\left( \ell \right) }\left( x\right) \right\rbrack }^{2}{dx} = 0 \n\]\n\nThis implies that \( {s}^{\left( \ell \right) } = 0 \), and therefore \( s \in {P}_{\ell - 1} \) on \( \left\lbrack {a, b}\right\rbrack \) . Now the boundary conditions \( {s}^{\left( j\right) }\left( a\right) = 0, j = 0,\ldots ,\ell - 1 \), yield \( s = 0 \) .
Theorem 8.30 Let \( m = 2\ell - 1 \) with \( \ell \in \mathbb{N} \) and \( \ell \geq 2 \) . Then, given \( n + 1 \) values \( {y}_{0},\ldots ,{y}_{n} \) and \( m - 1 \) boundary data \( {a}_{1},\ldots ,{a}_{\ell - 1} \) and \( {b}_{1},\ldots ,{b}_{\ell - 1} \) , there exists a unique spline \( s \in {S}_{m}^{n} \) satisfying the interpolation conditions\n\n\[ s\left( {x}_{j}\right) = {y}_{j},\;j = 0,\ldots, n \]\n\n(8.29)\n\nand the boundary conditions\n\n\[ {s}^{\left( j\right) }\left( a\right) = {a}_{j},\;{s}^{\left( j\right) }\left( b\right) = {b}_{j},\;j = 1,\ldots ,\ell - 1. \]\n\n(8.30)
Proof. Representing the spline in the form (8.25), i.e.,\n\n\[ s\left( x\right) = \mathop{\sum }\limits_{{k = 0}}^{m}{\alpha }_{k}{u}_{k} + \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}{\beta }_{k}{v}_{k} \]\n\n(8.31)\n\n\nit follows that the interpolation conditions (8.29) and boundary conditions (8.30) are satisfied if and only if the \( m + n \) coefficients \( {\alpha }_{0},\ldots ,{\alpha }_{m} \) and \( {\beta }_{1},\ldots ,{\beta }_{n - 1} \) solve the system\n\n\[ \mathop{\sum }\limits_{{k = 0}}^{m}{\alpha }_{k}{u}_{k}\left( {x}_{j}\right) + \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}{\beta }_{k}{v}_{k}\left( {x}_{j}\right) = {y}_{j},\;j = 0,\ldots, n \]\n\n\[ \mathop{\sum }\limits_{{k = 0}}^{m}{\alpha }_{k}{u}_{k}^{\left( j\right) }\left( a\right) + \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}{\beta }_{k}{v}_{k}^{\left( j\right) }\left( a\right) = {a}_{j},\;j = 1,\ldots ,\ell - 1, \]\n\n(8.32)\n\n\[ \mathop{\sum }\limits_{{k = 0}}^{m}{\alpha }_{k}{u}_{k}^{\left( j\right) }\left( b\right) + \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}{\beta }_{k}{v}_{k}^{\left( j\right) }\left( b\right) = {b}_{j},\;j = 1,\ldots ,\ell - 1, \]\n\nof \( m + n \) linear equations. By Lemma 8.29 the homogeneous form of the system (8.32) has only the trivial solution. Therefore, the inhomogeneous system (8.32) is uniquely solvable, and the proof is finished.
Theorem 8.31 For \( m \in \mathbb{N} \cup \{ 0\} \) the B-splines\n\n\[ \n{B}_{m}\left( {\cdot - k}\right) ,\;k = 0,\ldots, m \n\]\n\n(8.37)\n\nare linearly independent on the interval \( {I}_{m} \mathrel{\text{:=}} \left\lbrack {\frac{m - 1}{2},\frac{m + 1}{2}}\right\rbrack \) .
Proof. This is trivial for \( m = 0 \), and we assume that it has been proven for degree \( m - 1 \) for some \( m \geq 1 \) . Let\n\n\[ \n\mathop{\sum }\limits_{{k = 0}}^{m}{\alpha }_{k}{B}_{m}\left( {x - k}\right) = 0,\;x \in {I}_{m} \n\]\n\n(8.38)\n\nThen, with the aid of (8.33), differentiating (8.38) yields\n\n\[ \n\mathop{\sum }\limits_{{k = 0}}^{m}{\alpha }_{k}\left\lbrack {{B}_{m - 1}\left( {x - k + \frac{1}{2}}\right) - {B}_{m - 1}\left( {x - k - \frac{1}{2}}\right) }\right\rbrack = 0,\;x \in {I}_{m}. \n\]\n\nObserving that the supports of \( {B}_{m - 1}\left( {\cdot + \frac{1}{2}}\right) \) and \( {B}_{m - 1}\left( {\cdot - m - \frac{1}{2}}\right) \) do not intersect with \( {I}_{m} \), we can rewrite this as\n\n\[ \n\mathop{\sum }\limits_{{k = 1}}^{m}\left\lbrack {{\alpha }_{k} - {\alpha }_{k - 1}}\right\rbrack {B}_{m - 1}\left( {x - k + \frac{1}{2}}\right) = 0,\;x \in {I}_{m}, \n\]\n\nwhence \( {\alpha }_{k} = {\alpha }_{k - 1} \) for \( k = 1,\ldots, m \) follows by the induction assumption; i.e., \( {\alpha }_{k} = \alpha \) for \( k = 0,\ldots, m \) . Now (8.38) reads\n\n\[ \n\alpha \mathop{\sum }\limits_{{k = 0}}^{m}{B}_{m}\left( {x - k}\right) = 0,\;x \in {I}_{m} \n\]\n\nand integrating this equation over the interval \( {I}_{m} \) leads to\n\n\[ \n\alpha {\int }_{-\frac{m}{2} - \frac{1}{2}}^{\frac{m}{2} + \frac{1}{2}}{B}_{m}\left( x\right) {dx} = 0. \n\]\n\nThis finally implies \( \alpha = 0 \), since the \( {B}_{m} \) are nonnegative, and the proof is finished.
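The splines \( {B}_{m} \) can be evaluated pointwise by the standard recursion for centered cardinal B-splines; the recursion itself is not stated in this section, so its use here is an assumption, as is the known partition-of-unity property of the integer translates checked below (function names are ours):

```python
def bspline(m, x):
    """Centered cardinal B-spline B_m, evaluated by the standard recursion
    B_m(x) = [((m+1)/2 + x) B_{m-1}(x + 1/2)
              + ((m+1)/2 - x) B_{m-1}(x - 1/2)] / m."""
    if m == 0:
        return 1.0 if -0.5 <= x < 0.5 else 0.0
    return (((m + 1) / 2 + x) * bspline(m - 1, x + 0.5)
            + ((m + 1) / 2 - x) * bspline(m - 1, x - 0.5)) / m

# B_3 is the cubic B-spline with support [-2, 2]; its integer
# translates B_3(x - k) sum to one at every point
x = 0.3
s = sum(bspline(3, x - k) for k in range(-3, 4))
print(abs(s - 1.0) < 1e-12)
```

Nonnegativity of the \( {B}_{m} \), used at the end of the proof above, is immediate from the recursion, since all factors are nonnegative on the support.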
Corollary 8.32 Let \( {x}_{k} = a + {hk}, k = 0,\ldots, n \), be an equidistant subdivision of the interval \( \left\lbrack {a, b}\right\rbrack \) of step size \( h = \left( {b - a}\right) /n \) with \( n \geq 2 \), and let \( m = 2\ell - 1 \) with \( \ell \in \mathbb{N} \) . Then the B-splines\n\n\[ \n{B}_{m, k}\left( x\right) \mathrel{\text{:=}} {B}_{m}\left( \frac{x - a - {hk}}{h}\right) ,\;x \in \left\lbrack {a, b}\right\rbrack ,\n\]\n\n(8.39)\n\nfor \( k = - \ell + 1,\ldots, n + \ell - 1 \) form a basis for \( {S}_{m}^{n} \) .
Proof. The \( n + m \) splines (8.39) belong to \( {S}_{m}^{n} \), and by the preceding Theorem 8.31 they can be shown to be linearly independent on \( \left\lbrack {a, b}\right\rbrack \) . Hence, the statement follows from Theorem 8.27.
Theorem 8.33 Let \( f : \left\lbrack {a, b}\right\rbrack \rightarrow \mathbb{R} \) be twice continuously differentiable and let \( s \in {S}_{3}^{n} \) be the uniquely determined cubic spline satisfying the interpolation and boundary conditions of Lemma 8.28. Then\n\n\[ \parallel f - s{\parallel }_{\infty } \leq \frac{{h}^{3/2}}{2}{\begin{Vmatrix}{f}^{\prime \prime }\end{Vmatrix}}_{2}\;\text{ and }\;{\begin{Vmatrix}{f}^{\prime } - {s}^{\prime }\end{Vmatrix}}_{\infty } \leq {h}^{1/2}{\begin{Vmatrix}{f}^{\prime \prime }\end{Vmatrix}}_{2}, \]\n\nwhere \( h \mathrel{\text{:=}} \mathop{\max }\limits_{{j = 1,\ldots, n}}\left| {{x}_{j} - {x}_{j - 1}}\right| \) .
Proof. The error function \( r \mathrel{\text{:=}} f - s \) has \( n + 1 \) zeros \( {x}_{0},\ldots ,{x}_{n} \) . Hence, the distance between two consecutive zeros of \( r \) is less than or equal to \( h \) . By Rolle’s theorem, the derivative \( {r}^{\prime } \) has \( n \) zeros with distance less than or equal to \( {2h} \) . Choose \( z \in \left\lbrack {a, b}\right\rbrack \) such that \( \left| {{r}^{\prime }\left( z\right) }\right| = {\begin{Vmatrix}{r}^{\prime }\end{Vmatrix}}_{\infty } \) . Then the closest zero \( \zeta \) of \( {r}^{\prime } \) has distance \( \left| {\zeta - z}\right| \leq h \), and by the Cauchy-Schwarz inequality we can estimate\n\n\[ {\begin{Vmatrix}{r}^{\prime }\end{Vmatrix}}_{\infty }^{2} = {\left| {\int }_{\zeta }^{z}{r}^{\prime \prime }\left( y\right) dy\right| }^{2} \leq h\left| {{\int }_{\zeta }^{z}{\left\lbrack {r}^{\prime \prime }\left( y\right) \right\rbrack }^{2}{dy}}\right| \leq h{\int }_{a}^{b}{\left\lbrack {r}^{\prime \prime }\left( y\right) \right\rbrack }^{2}{dy}. \]\n\nFrom this, using Lemma 8.28 we obtain \( {\begin{Vmatrix}{r}^{\prime }\end{Vmatrix}}_{\infty } \leq \sqrt{h}{\begin{Vmatrix}{f}^{\prime \prime }\end{Vmatrix}}_{2} \) .\n\nChoose \( x \in \left\lbrack {a, b}\right\rbrack \) such that \( \left| {r\left( x\right) }\right| = \parallel r{\parallel }_{\infty } \) . Then the closest zero \( \xi \) of \( r \) has distance \( \left| {\xi - x}\right| \leq h/2 \), and we can estimate\n\n\[ \parallel r{\parallel }_{\infty } = \left| {{\int }_{\xi }^{x}{r}^{\prime }\left( y\right) {dy}}\right| \leq \frac{h}{2}{\begin{Vmatrix}{r}^{\prime }\end{Vmatrix}}_{\infty } \leq \frac{h\sqrt{h}}{2}{\begin{Vmatrix}{f}^{\prime \prime }\end{Vmatrix}}_{2} \]\n\nwhich concludes the proof.
Theorem 8.34 Let \( f : \left\lbrack {a, b}\right\rbrack \rightarrow \mathbb{R} \) be four-times continuously differentiable and let \( s \in {S}_{3}^{n} \) be the uniquely determined cubic spline satisfying the interpolation and boundary conditions of Lemma 8.28 for an equidistant subdivision with step width \( h \) . Then\n\n\[ \parallel f - s{\parallel }_{\infty } \leq \frac{{h}^{4}}{16}{\begin{Vmatrix}{f}^{\left( 4\right) }\end{Vmatrix}}_{\infty } \]
Proof. By \( {L}_{1} : C\left\lbrack {a, b}\right\rbrack \rightarrow {S}_{1}^{n} \) we denote the interpolation operator mapping \( g \in C\left\lbrack {a, b}\right\rbrack \) onto its uniquely determined piecewise linear interpolation. From Example 8.12 we obtain that\n\n\[ \parallel r{\parallel }_{\infty } = {\begin{Vmatrix}r - {L}_{1}r\end{Vmatrix}}_{\infty } \leq \frac{{h}^{2}}{8}{\begin{Vmatrix}{r}^{\prime \prime }\end{Vmatrix}}_{\infty } \]\n\nsince trivially \( {L}_{1}r = 0 \) .\n\nBy integration, we choose a function \( w \) such that \( {w}^{\prime \prime } = {L}_{1}{f}^{\prime \prime } \) . Applying the estimate (8.45) for the cubic spline \( s - w \) and using the estimate (8.9) for the piecewise linear interpolation of \( {f}^{\prime \prime } \), we obtain\n\n\[ {\begin{Vmatrix}{f}^{\prime \prime } - {s}^{\prime \prime }\end{Vmatrix}}_{\infty } \leq {\begin{Vmatrix}{f}^{\prime \prime } - {L}_{1}{f}^{\prime \prime }\end{Vmatrix}}_{\infty } + {\begin{Vmatrix}{L}_{1}{f}^{\prime \prime } - {s}^{\prime \prime }\end{Vmatrix}}_{\infty } \leq 4{\begin{Vmatrix}{f}^{\prime \prime } - {L}_{1}{f}^{\prime \prime }\end{Vmatrix}}_{\infty } \leq \frac{{h}^{2}}{2}{\begin{Vmatrix}{f}^{\left( 4\right) }\end{Vmatrix}}_{\infty }.\]\n\nBy piecing together the last two inequalities we obtain the assertion of the theorem.
Theorem 8.37 The Bernstein polynomials are nonnegative on \( \left\lbrack {0,1}\right\rbrack \) and provide a partition of unity; i.e., \[ {B}_{k}^{n}\left( t\right) \geq 0,\;t \in \left\lbrack {0,1}\right\rbrack \] and \[ \mathop{\sum }\limits_{{k = 0}}^{n}{B}_{k}^{n}\left( t\right) = 1,\;t \in \mathbb{R}. \]
Proof. The first five properties are obvious. The statement on the maximum of \( {B}_{k}^{n} \) is a consequence of \[ \frac{d}{dt}{B}_{k}^{n}\left( t\right) = \left( \begin{array}{l} n \\ k \end{array}\right) {t}^{k - 1}{\left( 1 - t\right) }^{n - k - 1}\left( {k - {nt}}\right) ,\;k = 0,\ldots, n. \] The recursion formula (8.52) follows from the definition (8.47) and the recursion formula \[ \left( \begin{array}{l} n \\ k \end{array}\right) = \left( \begin{array}{l} n - 1 \\ k - 1 \end{array}\right) + \left( \begin{matrix} n - 1 \\ k \end{matrix}\right) \] for the binomial coefficients. In order to show that the \( n + 1 \) polynomials \( {B}_{0}^{n},\ldots ,{B}_{n}^{n} \) of degree \( n \) provide a basis of \( {P}_{n} \), we prove that they are linearly independent. Let \[ \mathop{\sum }\limits_{{k = 0}}^{n}{b}_{k}{B}_{k}^{n}\left( t\right) = 0,\;t \in \left\lbrack {0,1}\right\rbrack . \] Then \[ \mathop{\sum }\limits_{{k = 0}}^{n}{b}_{k}\frac{{d}^{j}}{d{t}^{j}}{B}_{k}^{n}\left( t\right) = 0,\;t \in \left\lbrack {0,1}\right\rbrack , \] and therefore \[ \mathop{\sum }\limits_{{k = j}}^{n}{b}_{k}\frac{{d}^{j}}{d{t}^{j}}{B}_{k}^{n}\left( 0\right) = 0,\;j = 0,\ldots, n, \] since \( t = 0 \) is a zero of \( {B}_{k}^{n} \) of order \( k \). From this, by induction we find that \( {b}_{n} = \cdots = {b}_{0} = 0. \)
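As a numerical illustration (a Python sketch, not part of the text), the nonnegativity and the partition of unity can be checked directly from the definition \( {B}_{k}^{n}\left( t\right) = \binom{n}{k}{t}^{k}{\left( 1 - t\right) }^{n - k} \):

```python
from math import comb

def bernstein(n, k, t):
    # B_k^n(t) = C(n, k) t^k (1 - t)^(n - k)
    return comb(n, k) * t**k * (1 - t)**(n - k)

n = 5
for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
    values = [bernstein(n, k, t) for k in range(n + 1)]
    assert all(v >= 0 for v in values)      # nonnegativity on [0, 1]
    assert abs(sum(values) - 1) < 1e-12     # partition of unity
```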
Theorem 8.39 Let\n\n\[ \np\left( t\right) = \mathop{\sum }\limits_{{k = 0}}^{n}{b}_{k}{B}_{k}^{n}\left( t\right) ,\;t \in \left\lbrack {0,1}\right\rbrack \n\]\n\nbe a Bézier polynomial on \( \left\lbrack {0,1}\right\rbrack \) . Then\n\n\[ \n{p}^{\left( j\right) }\left( t\right) = \frac{n!}{\left( {n - j}\right) !}\mathop{\sum }\limits_{{k = 0}}^{{n - j}}{\bigtriangleup }^{j}{b}_{k}{B}_{k}^{n - j}\left( t\right) ,\;j = 1,\ldots, n, \n\]\n\nwith the forward differences \( {\bigtriangleup }^{j}{b}_{k} \) recursively defined by\n\n\[ \n{\bigtriangleup }^{0}{b}_{k} \mathrel{\text{:=}} {b}_{k},\;{\bigtriangleup }^{j}{b}_{k} \mathrel{\text{:=}} {\bigtriangleup }^{j - 1}{b}_{k + 1} - {\bigtriangleup }^{j - 1}{b}_{k},\;j = 1,\ldots, n. \n\]
Proof. Obviously, the statement is true for \( j = 0 \) . We assume that it has been proven for some \( 0 \leq j < n \) . Then with the aid of (8.54) we obtain\n\n\[ \n{p}^{\left( j + 1\right) }\left( t\right) = \frac{n!}{\left( {n - j}\right) !}\mathop{\sum }\limits_{{k = 0}}^{{n - j}}{\bigtriangleup }^{j}{b}_{k}\frac{d}{dt}{B}_{k}^{n - j}\left( t\right) \n\]\n\n\[ \n= \frac{n!}{\left( {n - j - 1}\right) !}\left\{ {\mathop{\sum }\limits_{{k = 1}}^{{n - j}}{\bigtriangleup }^{j}{b}_{k}{B}_{k - 1}^{n - j - 1}\left( t\right) - \mathop{\sum }\limits_{{k = 0}}^{{n - j - 1}}{\bigtriangleup }^{j}{b}_{k}{B}_{k}^{n - j - 1}\left( t\right) }\right\} \n\]\n\n\[ \n= \frac{n!}{\left( {n - j - 1}\right) !}\mathop{\sum }\limits_{{k = 0}}^{{n - j - 1}}\left\{ {{\bigtriangleup }^{j}{b}_{k + 1} - {\bigtriangleup }^{j}{b}_{k}}\right\} {B}_{k}^{n - j - 1}\left( t\right) \n\]\n\n\[ \n= \frac{n!}{\left\lbrack {n - \left( {j + 1}\right) }\right\rbrack !}\mathop{\sum }\limits_{{k = 0}}^{{n - \left( {j + 1}\right) }}{\bigtriangleup }^{j + 1}{b}_{k}{B}_{k}^{n - \left( {j + 1}\right) }\left( t\right) , \n\]\n\nwhich establishes the assertion for \( j + 1 \) .
Corollary 8.40 The polynomial from Theorem 8.39 has the derivatives\n\n\[ \n{p}^{\left( j\right) }\left( 0\right) = \frac{n!}{\left( {n - j}\right) !}{\bigtriangleup }^{j}{b}_{0},\;{p}^{\left( j\right) }\left( 1\right) = \frac{n!}{\left( {n - j}\right) !}{\bigtriangleup }^{j}{b}_{n - j} \n\]\n\nat the two endpoints.
From Corollary 8.40 we note that \( {p}^{\left( j\right) }\left( 0\right) \) depends only on \( {b}_{0},\ldots ,{b}_{j} \) and that \( {p}^{\left( j\right) }\left( 1\right) \) depends only on \( {b}_{n - j},\ldots ,{b}_{n} \) . In particular, we have that\n\n\[ \n{p}^{\prime }\left( 0\right) = n\left( {{b}_{1} - {b}_{0}}\right) ,\;{p}^{\prime }\left( 1\right) = n\left( {{b}_{n} - {b}_{n - 1}}\right) ; \n\]\n\n(8.55)\n\ni.e., at the two endpoints the Bézier curve has the same tangent lines as the Bézier polygon. Through the affine transformation (8.46) these results on the derivatives carry over to the general interval \( \left\lbrack {a, b}\right\rbrack \) .
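The endpoint formulas (8.55) can be checked against difference quotients; in the following Python sketch (an illustration, not part of the text) the coefficient vector is chosen arbitrarily:

```python
from math import comb

def bezier(b, t):
    # p(t) = sum_k b_k B_k^n(t) in the Bernstein basis
    n = len(b) - 1
    return sum(bk * comb(n, k) * t**k * (1 - t)**(n - k) for k, bk in enumerate(b))

b = [2.0, -1.0, 0.5, 3.0]          # arbitrary coefficients, n = 3
n = len(b) - 1
h = 1e-6
# one-sided difference quotients at the endpoints
d0 = (bezier(b, h) - bezier(b, 0.0)) / h
d1 = (bezier(b, 1.0) - bezier(b, 1.0 - h)) / h
assert abs(d0 - n * (b[1] - b[0])) < 1e-4      # p'(0) = n (b_1 - b_0)
assert abs(d1 - n * (b[-1] - b[-2])) < 1e-4    # p'(1) = n (b_n - b_{n-1})
```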
Theorem 8.41 The subpolynomials \( {b}_{i}^{k} \) of a Bézier polynomial \( p \) of degree \( n \) satisfy the recursion formulae\n\n\[ \n{b}_{i}^{k}\left( t\right) = \left( {1 - t}\right) {b}_{i}^{k - 1}\left( t\right) + t{b}_{i + 1}^{k - 1}\left( t\right) \n\]\n\n(8.57)\n\nfor \( i = 0,\ldots, n - k \) and \( k = 1,\ldots, n \) .
Proof. We insert the recursion formulae (8.51) and (8.52) for the Bernstein polynomials into the definition (8.56) for the subpolynomials and obtain\n\n\[ \n{b}_{i}^{k}\left( t\right) = {b}_{i}{B}_{0}^{k}\left( t\right) + \mathop{\sum }\limits_{{j = 1}}^{{k - 1}}{b}_{i + j}{B}_{j}^{k}\left( t\right) + {b}_{i + k}{B}_{k}^{k}\left( t\right) \n\]\n\n\[ \n= \mathop{\sum }\limits_{{j = 0}}^{{k - 1}}{b}_{i + j}\left( {1 - t}\right) {B}_{j}^{k - 1}\left( t\right) + \mathop{\sum }\limits_{{j = 1}}^{k}{b}_{i + j}t{B}_{j - 1}^{k - 1}\left( t\right) \n\]\n\n\[ \n= \left( {1 - t}\right) {b}_{i}^{k - 1}\left( t\right) + t{b}_{i + 1}^{k - 1}\left( t\right) \n\]\n\nwhich establishes (8.57).
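The recursion (8.57) is the core of the de Casteljau algorithm for evaluating a Bézier polynomial. A Python sketch (an illustration, not part of the text) comparing it with direct evaluation in the Bernstein basis:

```python
from math import comb

def de_casteljau(b, t):
    # apply b_i^k = (1 - t) b_i^{k-1} + t b_{i+1}^{k-1}; b_0^n(t) equals p(t)
    c = list(b)
    n = len(c) - 1
    for k in range(1, n + 1):
        for i in range(n - k + 1):
            c[i] = (1 - t) * c[i] + t * c[i + 1]
    return c[0]

def bezier(b, t):
    # direct evaluation p(t) = sum_k b_k B_k^n(t)
    n = len(b) - 1
    return sum(bk * comb(n, k) * t**k * (1 - t)**(n - k) for k, bk in enumerate(b))

b = [1.0, 3.0, -2.0, 4.0]          # arbitrary coefficients
for t in [0.0, 0.25, 0.5, 0.9, 1.0]:
    assert abs(de_casteljau(b, t) - bezier(b, t)) < 1e-12
```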
Theorem 8.42 The Bézier polynomials\n\n\[ \n{p}_{1}\left( x\right) \mathrel{\text{:=}} \mathop{\sum }\limits_{{k = 0}}^{n}{b}_{0}^{k}\left( t\right) {B}_{k}^{n}\left( {x;0, t}\right) \;\text{ and }\;{p}_{2}\left( x\right) \mathrel{\text{:=}} \mathop{\sum }\limits_{{k = 0}}^{n}{b}_{k}^{n - k}\left( t\right) {B}_{k}^{n}\left( {x;t,1}\right) \n\]\n\nwith the coefficients \( {b}_{0}^{k} \) and \( {b}_{k}^{n - k} \) for \( k = 0,\ldots, n \) defined by the recursion (8.57) satisfy\n\n\[ \np\left( x\right) = {p}_{1}\left( x\right) = {p}_{2}\left( x\right) ,\;x \in \mathbb{R}, \n\]\n\nfor arbitrary \( 0 < t < 1 \) .
Proof. Inserting the equivalent definition (8.56) of the subpolynomials and reordering the summation, we find that\n\n\[ \n{p}_{1}\left( x\right) = \mathop{\sum }\limits_{{k = 0}}^{n}\mathop{\sum }\limits_{{j = 0}}^{k}{b}_{j}{B}_{j}^{k}\left( t\right) {B}_{k}^{n}\left( {x;0, t}\right) = \mathop{\sum }\limits_{{j = 0}}^{n}{b}_{j}\mathop{\sum }\limits_{{k = j}}^{n}{B}_{j}^{k}\left( t\right) {B}_{k}^{n}\left( {x;0, t}\right) .\n\]\n\nHence the proof will be concluded by showing that\n\n\[ \n\mathop{\sum }\limits_{{k = j}}^{n}{B}_{j}^{k}\left( t\right) {B}_{k}^{n}\left( {x;0, t}\right) = {B}_{j}^{n}\left( x\right) ,\;x \in \mathbb{R}.\n\]\n\n(8.58)\n\nTo establish this identity we make use of Definition 8.36 and obtain with the aid of the binomial formula that\n\n\[ \n\mathop{\sum }\limits_{{k = j}}^{n}{B}_{j}^{k}\left( t\right) {B}_{k}^{n}\left( {x;0, t}\right) = \mathop{\sum }\limits_{{k = j}}^{n}\left( \begin{array}{l} k \\ j \end{array}\right) {\left( 1 - t\right) }^{k - j}{t}^{j - n}\left( \begin{array}{l} n \\ k \end{array}\right) {x}^{k}{\left( t - x\right) }^{n - k}\n\]\n\n\[ \n= \left( \begin{array}{l} n \\ j \end{array}\right) {t}^{j - n}\mathop{\sum }\limits_{{k = j}}^{n}\left( \begin{array}{l} n - j \\ k - j \end{array}\right) {\left( 1 - t\right) }^{k - j}{x}^{k}{\left( t - x\right) }^{n - k}\n\]\n\n\[ \n= \left( \begin{array}{l} n \\ j \end{array}\right) {t}^{j - n}{x}^{j}\mathop{\sum }\limits_{{k = 0}}^{{n - j}}\left( \begin{matrix} n - j \\ k \end{matrix}\right) {\left( 1 - t\right) }^{k}{x}^{k}{\left( t - x\right) }^{n - j - k}\n\]\n\n\[ \n= \left( \begin{array}{l} n \\ j \end{array}\right) {x}^{j}{\left( 1 - x\right) }^{n - j}.\n\]\n\nHence (8.58) is valid, and consequently \( {p}_{1} = p \) . The proof of \( {p}_{2} = p \) is completely analogous, and it can also be obtained by a symmetry argument from \( {p}_{1} = p \) .
Theorem 9.1 The polynomial interpolatory quadrature of order \( n \) defined by\n\n\[ \n{Q}_{n}\left( f\right) \mathrel{\text{:=}} {\int }_{a}^{b}\left( {{L}_{n}f}\right) \left( x\right) {dx} \n\]\n\n(9.3)\n\nis of the form (9.2) with the weights given by\n\n\[ \n{a}_{k} = \frac{1}{{q}_{n + 1}^{\prime }\left( {x}_{k}\right) }{\int }_{a}^{b}\frac{{q}_{n + 1}\left( x\right) }{x - {x}_{k}}{dx},\;k = 0,\ldots, n, \n\]\n\n(9.4)\n\nwhere \( {q}_{n + 1}\left( x\right) \mathrel{\text{:=}} \left( {x - {x}_{0}}\right) \cdots \left( {x - {x}_{n}}\right) \) .
Proof. From (8.2) we obtain\n\n\[ \n{\int }_{a}^{b}\left( {{L}_{n}f}\right) \left( x\right) {dx} = \mathop{\sum }\limits_{{k = 0}}^{n}f\left( {x}_{k}\right) {\int }_{a}^{b}{\ell }_{k}\left( x\right) {dx} \n\]\n\nwith\n\[ \n{a}_{k} = {\int }_{a}^{b}{\ell }_{k}\left( x\right) {dx} = {\int }_{a}^{b}\mathop{\prod }\limits_{\substack{{j = 0} \\ {j \neq k} }}^{n}\frac{x - {x}_{j}}{{x}_{k} - {x}_{j}}{dx} \n\]\n\nwhence (9.4) follows by rewriting the product.
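As an illustration (a Python sketch, not part of the text), the weights \( {a}_{k} = {\int }_{a}^{b}{\ell }_{k}\left( x\right) {dx} \) can be computed by expanding each Lagrange basis polynomial and integrating it exactly; for the points \( a,\left( {a + b}\right) /2, b \) this reproduces the Simpson weights:

```python
def interpolatory_weights(points, a, b):
    # a_k = int_a^b l_k(x) dx via exact integration of the expanded basis
    weights = []
    for k, xk in enumerate(points):
        coeffs = [1.0]                 # polynomial coefficients, lowest degree first
        denom = 1.0
        for j, xj in enumerate(points):
            if j == k:
                continue
            denom *= xk - xj
            new = [0.0] * (len(coeffs) + 1)
            for i, c in enumerate(coeffs):
                new[i + 1] += c        # multiply by x ...
                new[i] -= xj * c       # ... minus x_j
            coeffs = new
        integral = sum(c * (b**(i + 1) - a**(i + 1)) / (i + 1)
                       for i, c in enumerate(coeffs))
        weights.append(integral / denom)
    return weights

w = interpolatory_weights([0.0, 0.5, 1.0], 0.0, 1.0)
assert all(abs(wi - ei) < 1e-12 for wi, ei in zip(w, [1/6, 4/6, 1/6]))
```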
Theorem 9.2 Given \( n + 1 \) distinct quadrature points \( {x}_{0},\ldots ,{x}_{n} \in \left\lbrack {a, b}\right\rbrack \) , the interpolatory quadrature (9.3) of order \( n \) is uniquely determined by its property of integrating all polynomials \( p \in {P}_{n} \) exactly, i.e., by the property\n\n\[ \n\mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}p\left( {x}_{k}\right) = {\int }_{a}^{b}p\left( x\right) {dx} \n\]\n\n(9.5)\n\nfor all \( p \in {P}_{n} \) .
Proof. From (9.3) and \( {L}_{n}p = p \) for all \( p \in {P}_{n} \) it follows that\n\n\[ \n\mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}p\left( {x}_{k}\right) = {\int }_{a}^{b}\left( {{L}_{n}p}\right) \left( x\right) {dx} = {\int }_{a}^{b}p\left( x\right) {dx} \n\]\n\ni.e., the quadrature is exact for all \( p \in {P}_{n} \) . On the other hand, from (9.5) we obtain\n\n\[ \n\mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}f\left( {x}_{k}\right) = \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}\left( {{L}_{n}f}\right) \left( {x}_{k}\right) = {\int }_{a}^{b}\left( {{L}_{n}f}\right) \left( x\right) {dx} \n\]\n\nfor all \( f \in C\left\lbrack {a, b}\right\rbrack \) ; i.e., the quadrature is an interpolatory quadrature.
Theorem 9.3 The polynomial interpolatory quadrature of order \( n \) with equidistant quadrature points\n\n\[ \n{x}_{k} = a + {kh},\;k = 0,\ldots, n, \]\n\nand step width \( h = \left( {b - a}\right) /n \) is called the Newton-Cotes quadrature formula of order \( n \) . Its weights are given by\n\n\[ \n{a}_{k} = h\frac{{\left( -1\right) }^{n - k}}{k!\left( {n - k}\right) !}{\int }_{0}^{n}\mathop{\prod }\limits_{\substack{{j = 0} \\ {j \neq k} }}^{n}\left( {z - j}\right) {dz},\;k = 0,\ldots, n, \]\n\n(9.6)\n\nand have the symmetry property \( {a}_{k} = {a}_{n - k}, k = 0,\ldots, n \) .
Proof. The weights are obtained from (9.4) by substituting \( x = {x}_{0} + {hz} \) and observing that\n\n\[ \n{q}_{n + 1}\left( x\right) = {h}^{n + 1}\mathop{\prod }\limits_{{j = 0}}^{n}\left( {z - j}\right) \]\n\nand\n\n\[ \n{q}_{n + 1}^{\prime }\left( {x}_{k}\right) = {\left( -1\right) }^{n - k}k!\left( {n - k}\right) !{h}^{n}. \]\n\nThe symmetry \( {a}_{k} = {a}_{n - k} \) follows by substituting \( z = n - y \) .
Theorem 9.4 Let \( f : \left\lbrack {a, b}\right\rbrack \rightarrow \mathbb{R} \) be twice continuously differentiable. Then the error for the trapezoidal rule can be represented in the form\n\n\[{\int }_{a}^{b}f\left( x\right) {dx} - \frac{b - a}{2}\left\lbrack {f\left( a\right) + f\left( b\right) }\right\rbrack = - \frac{{h}^{3}}{12}{f}^{\prime \prime }\left( \xi \right)\]\n\n(9.7)\n\nwith some \( \xi \in \left\lbrack {a, b}\right\rbrack \) and \( h = b - a \) .
Proof. Let \( {L}_{1}f \) denote the linear interpolation of \( f \) at the interpolation points \( {x}_{0} = a \) and \( {x}_{1} = b \) . By construction of the trapezoidal rule we have that the error\n\n\[{E}_{1}\left( f\right) \mathrel{\text{:=}} {\int }_{a}^{b}f\left( x\right) {dx} - \frac{b - a}{2}\left\lbrack {f\left( a\right) + f\left( b\right) }\right\rbrack\]\n\nis given by\n\n\[{E}_{1}\left( f\right) = {\int }_{a}^{b}\left\lbrack {f\left( x\right) - \left( {{L}_{1}f}\right) \left( x\right) }\right\rbrack {dx} = {\int }_{a}^{b}\left( {x - a}\right) \left( {x - b}\right) \frac{f\left( x\right) - \left( {{L}_{1}f}\right) \left( x\right) }{\left( {x - a}\right) \left( {x - b}\right) }{dx}.\]\n\nSince the first factor of the integrand is nonpositive on \( \left\lbrack {a, b}\right\rbrack \) and since by l'Hôpital's rule the second factor is continuous, from the mean value theorem for integrals we obtain that\n\n\[{E}_{1}\left( f\right) = \frac{f\left( z\right) - \left( {{L}_{1}f}\right) \left( z\right) }{\left( {z - a}\right) \left( {z - b}\right) }{\int }_{a}^{b}\left( {x - a}\right) \left( {x - b}\right) {dx}\]\n\nfor some \( z \in \left\lbrack {a, b}\right\rbrack \) . From this, with the aid of the error representation for linear interpolation from Theorem 8.10 and the integral\n\n\[{\int }_{a}^{b}\left( {x - a}\right) \left( {x - b}\right) {dx} = - \frac{{h}^{3}}{6}\]\n\nthe assertion of the theorem follows.
Theorem 9.5 Let \( f : \left\lbrack {a, b}\right\rbrack \rightarrow \mathbb{R} \) be four-times continuously differentiable. Then the error for Simpson's rule can be represented in the form\n\n\[{\int }_{a}^{b}f\left( x\right) {dx} - \frac{b - a}{6}\left\lbrack {f\left( a\right) + {4f}\left( \frac{a + b}{2}\right) + f\left( b\right) }\right\rbrack = - \frac{{h}^{5}}{90}{f}^{\left( 4\right) }\left( \xi \right)\]\n\nfor some \( \xi \in \left\lbrack {a, b}\right\rbrack \) and \( h = \left( {b - a}\right) /2 \) .
Proof. Let \( {L}_{2}f \) denote the quadratic interpolation polynomial for \( f \) at the interpolation points \( {x}_{0} = a,{x}_{1} = \left( {a + b}\right) /2 \), and \( {x}_{2} = b \) . By construction of Simpson's rule we have that the error\n\n\[{E}_{2}\left( f\right) \mathrel{\text{:=}} {\int }_{a}^{b}f\left( x\right) {dx} - \frac{b - a}{6}\left\lbrack {f\left( a\right) + {4f}\left( \frac{a + b}{2}\right) + f\left( b\right) }\right\rbrack\]\n\nis given by\n\n\[{E}_{2}\left( f\right) = {\int }_{a}^{b}\left\lbrack {f\left( x\right) - \left( {{L}_{2}f}\right) \left( x\right) }\right\rbrack {dx}.\]\n\nConsider the cubic polynomial\n\n\[p\left( x\right) \mathrel{\text{:=}} \left( {{L}_{2}f}\right) \left( x\right) + \frac{4}{{\left( b - a\right) }^{2}}\left\lbrack {{\left( {L}_{2}f\right) }^{\prime }\left( {x}_{1}\right) - {f}^{\prime }\left( {x}_{1}\right) }\right\rbrack {q}_{3}\left( x\right) ,\]\n\nwhere \( {q}_{3}\left( x\right) = \left( {x - {x}_{0}}\right) \left( {x - {x}_{1}}\right) \left( {x - {x}_{2}}\right) \) . Obviously, \( p \) has the interpolation properties\n\n\[p\left( {x}_{k}\right) = f\left( {x}_{k}\right) ,\;k = 0,1,2,\;\text{ and }\;{p}^{\prime }\left( {x}_{1}\right) = {f}^{\prime }\left( {x}_{1}\right) .\]\n\nSince \( {\int }_{a}^{b}{q}_{3}\left( x\right) {dx} = 0 \), from (9.9) and (9.10) we can conclude that\n\n\[{E}_{2}\left( f\right) = {\int }_{a}^{b}\left\lbrack {f\left( x\right) - p\left( x\right) }\right\rbrack {dx}\]\n\nand consequently\n\n\[{E}_{2}\left( f\right) = {\int }_{a}^{b}\left( {x - {x}_{0}}\right) {\left( x - {x}_{1}\right) }^{2}\left( {x - {x}_{2}}\right) \frac{f\left( x\right) - p\left( x\right) }{\left( {x - {x}_{0}}\right) {\left( x - {x}_{1}\right) }^{2}\left( {x - {x}_{2}}\right) }{dx}.\]\n\nAs in the proof of Theorem 9.4, the first factor of the integrand is nonpositive on \( \left\lbrack {a, b}\right\rbrack \), and the second factor is continuous.
Hence, by the mean value theorem for integrals, we obtain that\n\n\[{E}_{2}\left( f\right) = \frac{f\left( z\right) - p\left( z\right) }{\left( {z - {x}_{0}}\right) {\left( z - {x}_{1}\right) }^{2}\left( {z - {x}_{2}}\right) }{\int }_{a}^{b}\left( {x - {x}_{0}}\right) {\left( x - {x}_{1}\right) }^{2}\left( {x - {x}_{2}}\right) {dx}\]\n\nfor some \( z \in \left\lbrack {a, b}\right\rbrack \) . Analogous to Theorem 8.10, it can be shown that\n\n\[f\left( z\right) - p\left( z\right) = \frac{{f}^{\left( 4\right) }\left( \xi \right) }{4!}\left( {z - {x}_{0}}\right) {\left( z - {x}_{1}\right) }^{2}\left( {z - {x}_{2}}\right)\]\n\nfor some \( \xi \in \left\lbrack {a, b}\right\rbrack \) . From this, with the aid of the integral\n\n\[{\int }_{a}^{b}\left( {x - {x}_{0}}\right) {\left( x - {x}_{1}\right) }^{2}\left( {x - {x}_{2}}\right) {dx} = - \frac{{\left( b - a\right) }^{5}}{120},\]\n\nwe conclude the statement of the theorem.
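For \( f\left( x\right) = {x}^{4} \) the fourth derivative is the constant \( {24} \), so the error representation holds with equality independently of \( \xi \); a quick Python check (an illustration, not part of the text):

```python
f = lambda x: x**4                      # f'''' = 24, a constant
a, b = 0.0, 1.0
h = (b - a) / 2
simpson = (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))
error = 1 / 5 - simpson                 # exact integral of x^4 over [0, 1] is 1/5
predicted = -h**5 / 90 * 24             # -(h^5 / 90) f''''(xi)
assert abs(error - predicted) < 1e-12   # both equal -1/120
```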
Example 9.6 The approximation of\n\n\[ \ln 2 = {\int }_{0}^{1}\frac{dx}{1 + x} \]\n\nby the trapezoidal rule yields\n\n\[ \ln 2 \approx \frac{1}{2}\left\lbrack {1 + \frac{1}{2}}\right\rbrack = {0.75}. \]
For \( f\left( x\right) \mathrel{\text{:=}} 1/\left( {1 + x}\right) \) we have\n\n\[ \frac{{h}^{3}}{12}{\begin{Vmatrix}{f}^{\prime \prime }\end{Vmatrix}}_{\infty } = \frac{1}{6} \]\n\nand hence, from Theorem 9.4, we obtain the estimate \( \left| {\ln 2 - {0.75}}\right| \leq {0.167} \) as compared to the true error \( \ln 2 - {0.75} = - {0.056}\ldots \) .
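The numbers in Example 9.6 can be reproduced directly (a Python check, an illustration only):

```python
from math import log

f = lambda x: 1 / (1 + x)
trapezoid = (1 - 0) / 2 * (f(0) + f(1))     # 0.75
error = log(2) - trapezoid                  # about -0.0569
bound = 1**3 / 12 * 2                       # h = 1 and max |f''| = 2 on [0, 1]
assert trapezoid == 0.75
assert abs(error) <= bound                  # bound is 1/6, about 0.167
assert abs(error + 0.0569) < 1e-3
```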
Theorem 9.7 Let \( f : \left\lbrack {a, b}\right\rbrack \rightarrow \mathbb{R} \) be twice continuously differentiable. Then the error for the composite trapezoidal rule is given by\n\n\[{\int }_{a}^{b}f\left( x\right) {dx} - {T}_{h}\left( f\right) = - \frac{b - a}{12}{h}^{2}{f}^{\prime \prime }\left( \xi \right)\]\n\nfor some \( \xi \in \left\lbrack {a, b}\right\rbrack \).
Proof. By Theorem 9.4 we have that\n\n\[{\int }_{a}^{b}f\left( x\right) {dx} - {T}_{h}\left( f\right) = - \frac{{h}^{3}}{12}\mathop{\sum }\limits_{{k = 1}}^{n}{f}^{\prime \prime }\left( {\xi }_{k}\right)\]\n\nwhere \( a \leq {\xi }_{1} \leq {\xi }_{2} \leq \cdots \leq {\xi }_{n} \leq b \). From\n\n\[n\mathop{\min }\limits_{{x \in \left\lbrack {a, b}\right\rbrack }}{f}^{\prime \prime }\left( x\right) \leq \mathop{\sum }\limits_{{k = 1}}^{n}{f}^{\prime \prime }\left( {\xi }_{k}\right) \leq n\mathop{\max }\limits_{{x \in \left\lbrack {a, b}\right\rbrack }}{f}^{\prime \prime }\left( x\right)\]\n\nand the continuity of \( {f}^{\prime \prime } \) we conclude that there exists \( \xi \in \left\lbrack {a, b}\right\rbrack \) such that\n\n\[\mathop{\sum }\limits_{{k = 1}}^{n}{f}^{\prime \prime }\left( {\xi }_{k}\right) = n{f}^{\prime \prime }\left( \xi \right)\]\n\nand the proof is finished.
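The \( O\left( {h}^{2}\right) \) behavior predicted by Theorem 9.7 can be observed numerically; in the following Python sketch (an illustration, not part of the text) halving \( h \) reduces the error by roughly the factor \( 4 \):

```python
from math import log

def composite_trapezoidal(f, a, b, n):
    # T_h(f) with n subintervals of width h = (b - a) / n
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n)))

f = lambda x: 1 / (1 + x)
exact = log(2)
e10 = abs(composite_trapezoidal(f, 0.0, 1.0, 10) - exact)
e20 = abs(composite_trapezoidal(f, 0.0, 1.0, 20) - exact)
assert 3.8 < e10 / e20 < 4.2      # error ratio close to 2^2 = 4
```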
Theorem 9.10 (Szegö) Let\n\n\[ \n{Q}_{n}\left( f\right) = \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}^{\left( n\right) }f\left( {x}_{k}^{\left( n\right) }\right) \n\]\n\nbe a sequence of quadrature formulae that converges for all polynomials, i.e.,\n\n\[ \n\mathop{\lim }\limits_{{n \rightarrow \infty }}{Q}_{n}\left( p\right) = Q\left( p\right) \n\]\n\n(9.11)\n\nfor all polynomials \( p \), and that is uniformly bounded, i.e., there exists a constant \( C > 0 \) such that\n\n\[ \n\mathop{\sum }\limits_{{k = 0}}^{n}\left| {a}_{k}^{\left( n\right) }\right| \leq C \n\]\n\n(9.12)\n\nfor all \( n \in \mathbb{N} \) . Then the sequence \( \left( {Q}_{n}\right) \) is convergent.
Proof. Let \( f \in C\left\lbrack {a, b}\right\rbrack \) and \( \varepsilon > 0 \) be arbitrary. By the Weierstrass approximation theorem (see [16]) there exists a polynomial \( p \) such that\n\n\[ \n\parallel f - p{\parallel }_{\infty } \leq \frac{\varepsilon }{2\left( {C + b - a}\right) }.\n\]\n\nThen, since by (9.11) we have \( {Q}_{n}\left( p\right) \rightarrow Q\left( p\right) \) as \( n \rightarrow \infty \), there exists \( N\left( \varepsilon \right) \in \mathbb{N} \) such that\n\n\[ \n\left| {{Q}_{n}\left( p\right) - Q\left( p\right) }\right| \leq \frac{\varepsilon }{2}\n\]\n\nfor all \( n \geq N\left( \varepsilon \right) \) . Now with the aid of the triangle inequality and using (9.12) we can estimate\n\n\[ \n\left| {{Q}_{n}\left( f\right) - Q\left( f\right) }\right| \leq \mathop{\sum }\limits_{{k = 0}}^{n}\left| {a}_{k}^{\left( n\right) }\right| \left| {f\left( {x}_{k}^{\left( n\right) }\right) - p\left( {x}_{k}^{\left( n\right) }\right) }\right| + \left| {{Q}_{n}\left( p\right) - Q\left( p\right) }\right|\n\]\n\n\[ \n+ {\int }_{a}^{b}\left| {p\left( x\right) - f\left( x\right) }\right| {dx}\n\]\n\n\[ \n\leq \frac{C\varepsilon }{2\left( {C + b - a}\right) } + \frac{\varepsilon }{2} + \frac{\left( {b - a}\right) \varepsilon }{2\left( {C + b - a}\right) } = \varepsilon\n\]\n\nfor all \( n \geq N\left( \varepsilon \right) \) ; i.e., \( {Q}_{n}\left( f\right) \rightarrow Q\left( f\right) \) for \( n \rightarrow \infty \) .
Corollary 9.11 (Steklov) Assume that the sequence \( \left( {Q}_{n}\right) \) of quadrature formulae converges for all polynomials and that all the weights are nonnegative. Then the sequence \( \left( {Q}_{n}\right) \) is convergent.
Proof. This follows from\n\n\[ \mathop{\sum }\limits_{{k = 0}}^{n}\left| {a}_{k}^{\left( n\right) }\right| = \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}^{\left( n\right) } = {Q}_{n}\left( 1\right) \rightarrow {\int }_{a}^{b}{dx} = b - a,\;n \rightarrow \infty ,\]\n\nand the preceding Theorem 9.10.
Lemma 9.13 Let \( {x}_{0},\ldots ,{x}_{n} \) be the \( n + 1 \) distinct quadrature points of a Gaussian quadrature formula. Then\n\n\[ \n{\int }_{a}^{b}w\left( x\right) {q}_{n + 1}\left( x\right) q\left( x\right) {dx} = 0 \n\]\n\n(9.16)\n\nfor \( {q}_{n + 1}\left( x\right) \mathrel{\text{:=}} \left( {x - {x}_{0}}\right) \cdots \left( {x - {x}_{n}}\right) \) and all \( q \in {P}_{n} \) .
Proof. Since \( {q}_{n + 1}q \in {P}_{{2n} + 1} \) and \( {q}_{n + 1}\left( {x}_{k}\right) = 0 \), we have that\n\n\[ \n{\int }_{a}^{b}w\left( x\right) {q}_{n + 1}\left( x\right) q\left( x\right) {dx} = \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}{q}_{n + 1}\left( {x}_{k}\right) q\left( {x}_{k}\right) = 0 \n\]\n\nfor all \( q \in {P}_{n} \) .
Lemma 9.14 Let \( {x}_{0},\ldots ,{x}_{n} \) be \( n + 1 \) distinct points satisfying the condition (9.16). Then the corresponding polynomial interpolatory quadrature is a Gaussian quadrature formula.
Proof. Let \( {L}_{n} \) denote the polynomial interpolation operator for the interpolation points \( {x}_{0},\ldots ,{x}_{n} \) . By construction, for the interpolatory quadrature we have\n\n\[ \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}f\left( {x}_{k}\right) = {\int }_{a}^{b}w\left( x\right) \left( {{L}_{n}f}\right) \left( x\right) {dx} \]\n\n(9.17)\n\nfor all \( f \in C\left\lbrack {a, b}\right\rbrack \) . Each \( p \in {P}_{{2n} + 1} \) can be represented in the form\n\n\[ p = {L}_{n}p + {q}_{n + 1}q \]\n\nfor some \( q \in {P}_{n} \), since the polynomial \( p - {L}_{n}p \) vanishes at the points \( {x}_{0},\ldots ,{x}_{n} \) . Then from (9.16) and (9.17) we obtain that\n\n\[ {\int }_{a}^{b}w\left( x\right) p\left( x\right) {dx} = {\int }_{a}^{b}w\left( x\right) \left( {{L}_{n}p}\right) \left( x\right) {dx} = \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}p\left( {x}_{k}\right) \]\n\nfor all \( p \in {P}_{{2n} + 1} \) .
Lemma 9.15 There exists a unique sequence \( \left( {q}_{n}\right) \) of polynomials of the form \( {q}_{0} = 1 \) and\n\n\[ \n{q}_{n}\left( x\right) = {x}^{n} + {r}_{n - 1}\left( x\right) ,\;n = 1,2,\ldots ,\n\]\n\nwith \( {r}_{n - 1} \in {P}_{n - 1} \) satisfying the orthogonality relation\n\n\[ \n{\int }_{a}^{b}w\left( x\right) {q}_{n}\left( x\right) {q}_{m}\left( x\right) {dx} = 0,\;n \neq m\n\]\n\n(9.18)\n\nand\n\n\[ \n{P}_{n} = \operatorname{span}\left\{ {{q}_{0},\ldots ,{q}_{n}}\right\} ,\;n = 0,1,\ldots \n\]\n\n(9.19)
Proof. This follows by the Gram-Schmidt orthogonalization procedure from Theorem 3.18 applied to the linearly independent functions \( {u}_{n}\left( x\right) \mathrel{\text{:=}} {x}^{n} \) for \( n = 0,1,\ldots \) and the scalar product\n\n\[ \n\left( {f, g}\right) \mathrel{\text{:=}} {\int }_{a}^{b}w\left( x\right) f\left( x\right) g\left( x\right) {dx}\n\]\n\nfor \( f, g \in C\left\lbrack {a, b}\right\rbrack \) . The positive definiteness of the scalar product is a consequence of \( w \) being positive in \( \left( {a, b}\right) \) .
Lemma 9.16 Each of the orthogonal polynomials \( {q}_{n} \) from Lemma 9.15 has \( n \) simple zeros in \( \left( {a, b}\right) \) .
Proof. For \( m = 0 \), from (9.18) we have that\n\n\[ \n{\int }_{a}^{b}w\left( x\right) {q}_{n}\left( x\right) {dx} = 0 \n\]\n\nfor \( n > 0 \) . Hence, since \( w \) is positive on \( \left( {a, b}\right) \), the polynomial \( {q}_{n} \) must have at least one zero in \( \left( {a, b}\right) \) where the sign of \( {q}_{n} \) changes. Denote by \( {x}_{1},\ldots ,{x}_{m} \) the zeros of \( {q}_{n} \) in \( \left( {a, b}\right) \) where \( {q}_{n} \) changes its sign. We assume that \( m < n \) and set \( {r}_{m}\left( x\right) \mathrel{\text{:=}} \left( {x - {x}_{1}}\right) \cdots \left( {x - {x}_{m}}\right) \) . Then \( {r}_{m} \in {P}_{n - 1} \) and therefore\n\n\[ \n{\int }_{a}^{b}w\left( x\right) {r}_{m}\left( x\right) {q}_{n}\left( x\right) {dx} = 0. \n\]\n\nHowever, this integral must be different from zero, since \( {r}_{m}{q}_{n} \) does not change its sign on \( \left( {a, b}\right) \) and does not vanish identically. Hence, we have arrived at a contradiction, and consequently \( m = n \) .
Theorem 9.17 For each \( n = 0,1,\ldots \) there exists a unique Gaussian quadrature formula of order \( n \) . Its quadrature points are given by the zeros of the orthogonal polynomial \( {q}_{n + 1} \) of degree \( n + 1 \) .
Proof. This is a consequence of Lemmas 9.13-9.16.
Theorem 9.18 The weights of the Gaussian quadrature formulae are all positive.
Proof. Define\n\n\[ \n{f}_{k}\left( x\right) \mathrel{\text{:=}} {\left\lbrack \frac{{q}_{n + 1}\left( x\right) }{x - {x}_{k}}\right\rbrack }^{2},\;k = 0,\ldots, n. \]\n\nThen\n\n\[ \n{a}_{k}{\left\lbrack {q}_{n + 1}^{\prime }\left( {x}_{k}\right) \right\rbrack }^{2} = \mathop{\sum }\limits_{{j = 0}}^{n}{a}_{j}{f}_{k}\left( {x}_{j}\right) = {\int }_{a}^{b}w\left( x\right) {f}_{k}\left( x\right) {dx} > 0, \]\n\nsince \( {f}_{k} \in {P}_{2n} \), and the theorem is proven.
Corollary 9.19 The sequence of Gaussian quadrature formulae is convergent.
Proof. For each polynomial \( p \) we have\n\n\[ \n{Q}_{n}\left( p\right) = {\int }_{a}^{b}w\left( x\right) p\left( x\right) {dx} \]\n\nprovided that \( {2n} + 1 \) is greater than or equal to the degree of \( p \) . From their proofs it is obvious that Theorem 9.10 and its Corollary 9.11 remain valid for the integral with the weight function \( w \) . Hence, the statement of the corollary follows from Theorem 9.18.
Theorem 9.20 Let \( f \in {C}^{{2n} + 2}\left\lbrack {a, b}\right\rbrack \) . Then the error for the Gaussian quadrature formula of order \( n \) is given by\n\n\[{\int }_{a}^{b}w\left( x\right) f\left( x\right) {dx} - \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}f\left( {x}_{k}\right) = \frac{{f}^{\left( 2n + 2\right) }\left( \xi \right) }{\left( {{2n} + 2}\right) !}{\int }_{a}^{b}w\left( x\right) {\left\lbrack {q}_{n + 1}\left( x\right) \right\rbrack }^{2}{dx}\]\n\nfor some \( \xi \in \left\lbrack {a, b}\right\rbrack \) .
Proof. Recall the Hermite interpolation polynomial \( {H}_{n}f \in {P}_{{2n} + 1} \) for \( f \) from Theorem 8.18. Since \( \left( {{H}_{n}f}\right) \left( {x}_{k}\right) = f\left( {x}_{k}\right), k = 0,\ldots, n \), for the error\n\n\[{E}_{n}\left( f\right) \mathrel{\text{:=}} {\int }_{a}^{b}w\left( x\right) f\left( x\right) {dx} - \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}f\left( {x}_{k}\right)\]\n\nwe can write\n\n\[{E}_{n}\left( f\right) = {\int }_{a}^{b}w\left( x\right) \left\lbrack {f\left( x\right) - \left( {{H}_{n}f}\right) \left( x\right) }\right\rbrack {dx}.\]\n\nThen as in the proofs of Theorems 9.4 and 9.5, using the mean value theorem we obtain\n\n\[{E}_{n}\left( f\right) = \frac{f\left( z\right) - \left( {{H}_{n}f}\right) \left( z\right) }{{\left\lbrack {q}_{n + 1}\left( z\right) \right\rbrack }^{2}}{\int }_{a}^{b}w\left( x\right) {\left\lbrack {q}_{n + 1}\left( x\right) \right\rbrack }^{2}{dx}\]\n\nfor some \( z \in \left\lbrack {a, b}\right\rbrack \) . Now the proof is finished with the aid of the error representation for Hermite interpolation from Theorem 8.19.
Example 9.21 We consider the Gaussian quadrature formulae for the weight function\n\n\[ \nw\left( x\right) = \frac{1}{\sqrt{1 - {x}^{2}}},\;x \in \left\lbrack {-1,1}\right\rbrack .\n\]
The Chebyshev polynomial \( {T}_{n} \) of degree \( n \) is defined by\n\n\[ \n{T}_{n}\left( x\right) \mathrel{\text{:=}} \cos \left( {n\arccos x}\right) ,\; - 1 \leq x \leq 1.\n\]\n\nObviously \( {T}_{0}\left( x\right) = 1 \) and \( {T}_{1}\left( x\right) = x \) . From the addition theorem for the cosine function, \( \cos \left( {n + 1}\right) t + \cos \left( {n - 1}\right) t = 2\cos t\cos {nt} \), we can deduce the recursion formula\n\n\[ \n{T}_{n + 1}\left( x\right) + {T}_{n - 1}\left( x\right) = {2x}{T}_{n}\left( x\right) ,\;n = 1,2,\ldots\n\]\n\nHence we have that \( {T}_{n} \in {P}_{n} \) with leading term\n\n\[ \n{T}_{n}\left( x\right) = {2}^{n - 1}{x}^{n} + \cdots ,\;n = 1,2,\ldots\n\]\n\nSubstituting \( x = \cos t \) we find that\n\n\[ \n{\int }_{-1}^{1}\frac{{T}_{n}\left( x\right) {T}_{m}\left( x\right) }{\sqrt{1 - {x}^{2}}}{dx} = {\int }_{0}^{\pi }\cos {nt}\cos {mt} \cdot {dt} = \left\{ \begin{array}{ll} \pi , & n = m = 0, \\ \frac{\pi }{2}, & n = m > 0, \\ 0, & n \neq m. \end{array}\right.\n\]\n\nHence, the orthogonal polynomials \( {q}_{n} \) of Lemma 9.15 are given by \( {q}_{n} = {2}^{1 - n}{T}_{n} \) . The zeros of \( {T}_{n} \) and hence the quadrature points are given by\n\n\[ \n{x}_{k} = \cos \left( {\frac{{2k} + 1}{2n}\pi }\right) ,\;k = 0,\ldots, n - 1.\n\]\n\nThe weights can be most easily derived from the exactness conditions\n\n\[ \n\mathop{\sum }\limits_{{k = 0}}^{{n - 1}}{a}_{k}{T}_{m}\left( {x}_{k}\right) = {\int }_{-1}^{1}\frac{{T}_{m}\left( x\right) }{\sqrt{1 - {x}^{2}}}{dx},\;m = 0,\ldots, n - 1,\n\]\n\nfor the interpolation quadrature, i.e., from\n\n\[ \n\mathop{\sum }\limits_{{k = 0}}^{{n - 1}}{a}_{k}\cos \frac{\left( {{2k} + 1}\right) m}{2n}\pi = \left\{ \begin{array}{ll} \pi , & m = 0, \\ 0, & m = 1,\ldots, n - 1. 
\end{array}\right.\n\]\n\nFrom our analysis of trigonometric interpolation, i.e., from (8.19), we see that the unique solution of this linear system is given by\n\n\[ \n{a}_{k} = \frac{\pi }{n},\;k = 0,\ldots, n - 1.\n\]\n\nHence, for \( n = 1,2,\ldots \) the Gauss-Chebyshev quadrature of order \( n - 1 \) is given by\n\n\[ \n{\int }_{-1}^{1}\frac{f\left( x\right) }{\sqrt{1 - {x}^{2}}}{dx} \approx \frac{\pi }{n}\mathop{\sum }\limits_{{k = 0}}^{{n - 1}}f\left( {\cos \frac{{2k} + 1}{2n}\pi }\right) .\n\]\n\nFrom Theorem 9.20 we have the error representation\n\n\[ \n{\int }_{-1}^{1}\frac{f\left( x\right) }{\sqrt{1 - {x}^{2}}}{dx} - \frac{\pi }{n}\mathop{\sum }\limits_{{k = 0}}^{{n - 1}}f\left( {\cos \frac{{2k} + 1}{2n}\pi }\right) = \frac{\pi {f}^{\left( 2n\right) }\left( \xi \right) }{{2}^{{2n} - 1}\left( {2n}\right) !}\n\]\n\nfor some \( \xi \in \left\lbrack {-1,1}\right\rbrack \) .
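The Gauss-Chebyshev rule is simple enough to verify directly. A minimal sketch (not part of the text), using the nodes and weights just derived:

```python
import math

# Gauss-Chebyshev quadrature:
#   int_{-1}^{1} f(x) / sqrt(1 - x^2) dx ~ (pi/n) * sum_k f(cos((2k+1)pi/(2n)))
def gauss_chebyshev(f, n):
    return (math.pi / n) * sum(
        f(math.cos((2 * k + 1) * math.pi / (2 * n))) for k in range(n)
    )

# f(x) = x^2 has exact value pi/2, and the rule is already exact for n = 2
# since deg f = 2 <= 2n - 1 = 3.
print(gauss_chebyshev(lambda x: x * x, 2))
```

The two nodes for \( n = 2 \) are \( \pm \sqrt{2}/2 \), each contributing \( 1/2 \), so the rule returns \( \pi /2 \) up to rounding.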
The Legendre polynomial \( {L}_{n} \) of degree \( n \) is defined by \[ {L}_{n}\left( x\right) \mathrel{\text{:=}} \frac{1}{{2}^{n}n!}\frac{{d}^{n}}{d{x}^{n}}{\left( {x}^{2} - 1\right) }^{n}. \]
Obviously, \( {L}_{n} \in {P}_{n} \) . If \( m < n \), by repeated partial integration we see that \[ {\int }_{-1}^{1}{x}^{m}\frac{{d}^{n}}{d{x}^{n}}{\left( {x}^{2} - 1\right) }^{n}{dx} = 0 \] since \( {\left( {x}^{2} - 1\right) }^{n} \) has zeros of order \( n \) at the endpoints -1 and 1 . Therefore, \[ {\int }_{-1}^{1}{L}_{n}\left( x\right) {L}_{m}\left( x\right) {dx} = 0,\;n \neq m. \]
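The orthogonality can be checked in exact rational arithmetic. The following sketch (not part of the text, standard library only) builds \( {L}_{n} \) from the Rodrigues-type formula above by repeated polynomial differentiation and integrates products over \( \left\lbrack {-1,1}\right\rbrack \) :

```python
from fractions import Fraction
from math import factorial, comb

def legendre(n):
    """Coefficients of L_n = (1/(2^n n!)) d^n/dx^n (x^2 - 1)^n, lowest degree first."""
    # (x^2 - 1)^n = sum_k C(n, k) (-1)^(n-k) x^(2k)
    p = [Fraction(0)] * (2 * n + 1)
    for k in range(n + 1):
        p[2 * k] = Fraction(comb(n, k) * (-1) ** (n - k))
    for _ in range(n):                       # differentiate n times
        p = [j * c for j, c in enumerate(p)][1:]
    return [c / (2**n * factorial(n)) for c in p]

def inner(p, q):
    """int_{-1}^{1} p(x) q(x) dx for coefficient lists (lowest degree first)."""
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return sum(c * (1 - (-1) ** (k + 1)) / (k + 1) for k, c in enumerate(r))

print(inner(legendre(2), legendre(3)))   # 0 for n != m
print(inner(legendre(3), legendre(3)))   # 2/(2n+1) = 2/7 for n = m
```

The value \( 2/\left( {{2n} + 1}\right) \) for \( n = m \) is the standard normalization of the Legendre polynomials, a fact not needed for the orthogonality statement itself.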
Lemma 9.24 The Bernoulli polynomials have the symmetry property\n\n\\[ \n{B}_{n}\\left( x\\right) = {\\left( -1\\right) }^{n}{B}_{n}\\left( {1 - x}\\right) ,\\;x \\in \\mathbb{R},\\;n = 0,1,\\ldots \n\\]\n\n(9.24)
Proof. Obviously (9.24) holds for \\( n = 0 \\) . Assume that (9.24) has been proven for some \\( n \\geq 0 \\) . Then, integrating (9.24), we obtain\n\n\\[ \n{B}_{n + 1}\\left( x\\right) = {\\left( -1\\right) }^{n + 1}{B}_{n + 1}\\left( {1 - x}\\right) + {\\beta }_{n + 1} \n\\]\n\nfor some constant \\( {\\beta }_{n + 1} \\) . The condition (9.22) implies that \\( {\\beta }_{n + 1} = 0 \\), and therefore (9.24) is also valid for \\( n + 1 \\) .
Lemma 9.25 The Bernoulli polynomials \( {B}_{{2m} + 1}, m = 1,2,\ldots \), of odd degree have exactly three zeros in \( \left\lbrack {0,1}\right\rbrack \), and these zeros are at the points \( 0,1/2 \), and 1 . The Bernoulli polynomials \( {B}_{2m}, m = 0,1,\ldots \), of even degree satisfy \( {B}_{2m}\left( 0\right) \neq 0 \) .
Proof. From (9.23) and (9.24) we conclude that \( {B}_{{2m} + 1} \) vanishes at the points \( 0,1/2 \), and 1 . We prove by induction that these are the only zeros of \( {B}_{{2m} + 1} \) in \( \left\lbrack {0,1}\right\rbrack \) . This is true for \( m = 1 \), since \( {B}_{3} \) is a polynomial of degree three. Assume that we have proven that \( {B}_{{2m} + 1} \) has only the three zeros \( 0,1/2 \), and \( 1 \) in \( \left\lbrack {0,1}\right\rbrack \), and assume that \( {B}_{{2m} + 3} \) has an additional zero \( \alpha \) in \( \left\lbrack {0,1}\right\rbrack \) . Because of the symmetry (9.24) we may assume that \( \alpha \in \left( {0,1/2}\right) \) . Then, by Rolle’s theorem, we conclude that \( {B}_{{2m} + 2} \) has at least one zero in \( \left( {0,\alpha }\right) \) and also at least one zero in \( \left( {\alpha ,1/2}\right) \) . Again by Rolle’s theorem this implies that \( {B}_{{2m} + 1} \) has a zero in \( \left( {0,1/2}\right) \), which contradicts the induction assumption.\n\nFrom the zeros of \( {B}_{{2m} + 1} \), by Rolle’s theorem it follows that \( {B}_{2m} \) has a zero in \( \left( {0,1/2}\right) \) . Assume that \( {B}_{2m}\left( 0\right) = 0 \) . Then, by Rolle’s theorem, \( {B}_{{2m} - 1} \) has a zero in \( \left( {0,1/2}\right) \), which contradicts the first part of the lemma.
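Both lemmas can be checked in exact arithmetic. The sketch below (not part of the text, standard library only) constructs the Bernoulli polynomials from the recursion \( {B}_{n}^{\prime } = n{B}_{n - 1} \) with \( {B}_{0} = 1 \) and the normalization (9.22), \( {\int }_{0}^{1}{B}_{n}\left( x\right) {dx} = 0 \) :

```python
from fractions import Fraction

def bernoulli_poly(n):
    """Coefficients of B_n, lowest degree first, as exact fractions."""
    coeffs = [Fraction(1)]                        # B_0 = 1
    for m in range(1, n + 1):
        # antiderivative of m * B_{m-1}; constant term still undetermined
        coeffs = [Fraction(0)] + [m * c / (k + 1) for k, c in enumerate(coeffs)]
        # fix the constant so that int_0^1 B_m(x) dx = 0
        coeffs[0] = -sum(c / (k + 1) for k, c in enumerate(coeffs))
    return coeffs

def beval(coeffs, x):
    return sum(c * x**k for k, c in enumerate(coeffs))

# Symmetry B_n(x) = (-1)^n B_n(1 - x) at a few rational sample points:
for n in range(7):
    B = bernoulli_poly(n)
    assert all(beval(B, x) == (-1) ** n * beval(B, 1 - x)
               for x in (Fraction(1, 3), Fraction(1, 4), Fraction(2, 5)))

# B_3 vanishes at 0, 1/2, 1, while B_2(0) = 1/6 != 0:
B3, B2 = bernoulli_poly(3), bernoulli_poly(2)
print([beval(B3, t) for t in (0, Fraction(1, 2), 1)], beval(B2, 0))
```

Since \( {B}_{3} \) has degree three, the three zeros \( 0,1/2,1 \) confirmed here are indeed all of them, which is the base case of the induction in the proof.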
Theorem 9.26 Let \( f : \left\lbrack {a, b}\right\rbrack \rightarrow \mathbb{R} \) be \( m \) times continuously differentiable for \( m \geq 2 \) . Then we have the Euler-Maclaurin expansion\n\n\[{\int }_{a}^{b}f\left( x\right) {dx} = {T}_{h}\left( f\right) - \mathop{\sum }\limits_{{j = 1}}^{\left\lbrack \frac{m}{2}\right\rbrack }\frac{{b}_{2j}{h}^{2j}}{\left( {2j}\right) !}\left\lbrack {{f}^{\left( 2j - 1\right) }\left( b\right) - {f}^{\left( 2j - 1\right) }\left( a\right) }\right\rbrack\n\]\n(9.27)\n\n\[+ {\left( -1\right) }^{m}{h}^{m}{\int }_{a}^{b}{\widetilde{B}}_{m}\left( \frac{x - a}{h}\right) {f}^{\left( m\right) }\left( x\right) {dx}\]\n\nwhere \( \left\lbrack \frac{m}{2}\right\rbrack \) denotes the largest integer smaller than or equal to \( \frac{m}{2} \) .
Proof. Let \( g \in {C}^{m}\left\lbrack {0,1}\right\rbrack \) . Then, by \( m - 1 \) partial integrations and using (9.23) we find that\n\n\[{\int }_{0}^{1}{B}_{1}\left( z\right) {g}^{\prime }\left( z\right) {dz} = \mathop{\sum }\limits_{{j = 2}}^{m}{\left( -1\right) }^{j}{B}_{j}\left( 0\right) \left\lbrack {{g}^{\left( j - 1\right) }\left( 1\right) - {g}^{\left( j - 1\right) }\left( 0\right) }\right\rbrack\n\]\n\[- {\left( -1\right) }^{m}{\int }_{0}^{1}{B}_{m}\left( z\right) {g}^{\left( m\right) }\left( z\right) {dz}\]\n\nCombining this with the partial integration\n\n\[{\int }_{0}^{1}{B}_{1}\left( z\right) {g}^{\prime }\left( z\right) {dz} = \frac{1}{2}\left\lbrack {g\left( 1\right) + g\left( 0\right) }\right\rbrack - {\int }_{0}^{1}g\left( z\right) {dz}\]\n\nand observing that the odd Bernoulli numbers vanish leads to\n\n\[{\int }_{0}^{1}g\left( z\right) {dz} = \frac{1}{2}\left\lbrack {g\left( 0\right) + g\left( 1\right) }\right\rbrack - \mathop{\sum }\limits_{{j = 1}}^{\left\lbrack \frac{m}{2}\right\rbrack }\frac{{b}_{2j}}{\left( {2j}\right) !}\left\lbrack {{g}^{\left( 2j - 1\right) }\left( 1\right) - {g}^{\left( 2j - 1\right) }\left( 0\right) }\right\rbrack\n\]\n\[+ {\left( -1\right) }^{m}{\int }_{0}^{1}{B}_{m}\left( z\right) {g}^{\left( m\right) }\left( z\right) {dz}\]\n\nNow we substitute \( x = {x}_{k} + {hz} \) and \( g\left( z\right) = f\left( {{x}_{k} + {hz}}\right) \) to obtain\n\n\[{\int }_{{x}_{k}}^{{x}_{k + 1}}f\left( x\right) {dx} = \frac{h}{2}\left\lbrack {f\left( {x}_{k}\right) + f\left( {x}_{k + 1}\right) }\right\rbrack\n\]\n\[- \mathop{\sum }\limits_{{j = 1}}^{\left\lbrack \frac{m}{2}\right\rbrack }\frac{{b}_{2j}{h}^{2j}}{\left( {2j}\right) !}\left\lbrack {{f}^{\left( 2j - 1\right) }\left( {x}_{k + 1}\right) - {f}^{\left( 2j - 1\right) }\left( {x}_{k}\right) }\right\rbrack\n\]\n\[+ {\left( -1\right) }^{m}{h}^{m}{\int }_{{x}_{k}}^{{x}_{k + 1}}{B}_{m}\left( \frac{x - a}{h}\right) {f}^{\left( m\right) }\left( x\right) {dx}.\n\]\nFinally, we sum the 
last equation for \( k = 0,\ldots, n - 1 \) to arrive at the Euler-Maclaurin expansion (9.27).
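The practical content of (9.27) is that subtracting the first correction term, \( \left( {{b}_{2}{h}^{2}/2!}\right) \left\lbrack {{f}^{\prime }\left( b\right) - {f}^{\prime }\left( a\right) }\right\rbrack \) with \( {b}_{2} = 1/6 \), lifts the composite trapezoidal rule from \( O\left( {h}^{2}\right) \) to \( O\left( {h}^{4}\right) \) accuracy. A minimal sketch (not part of the text), using \( {\int }_{0}^{1}{e}^{x}{dx} = e - 1 \) as test problem:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule T_h(f) with n subintervals."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + k * h) for k in range(1, n)) + 0.5 * f(b))

f, df = math.exp, math.exp            # f = f' = exp on [0, 1]
a, b, n = 0.0, 1.0, 10
h = (b - a) / n
exact = math.e - 1.0

t = trapezoid(f, a, b, n)
corrected = t - (h**2 / 12.0) * (df(b) - df(a))   # b_2/2! = 1/12

print(abs(t - exact))          # O(h^2) error
print(abs(corrected - exact))  # O(h^4) error, several orders smaller
```

Adding further correction terms with \( {b}_{4},{b}_{6},\ldots \) continues this gain as long as the required derivatives of \( f \) exist, which is the basis of Romberg-type extrapolation.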