| Q | A | Result |
|---|---|---|
Consider some (physical) quantity \( u \) depending on time \( t \) and a parameter vector \( a = {\left( {a}_{1},\ldots ,{a}_{n}\right) }^{T} \in {\mathbb{R}}^{n} \) in terms of a known function\n\n\[ u\left( t\right) = f\left( {t;a}\right) .\]\n\nIn order to determine the values of the parameter vector \( a \) (representing some physical constants), one can take \( m \) measurements of \( u \) at different times \( {t}_{1},\ldots ,{t}_{m} \) and then try to find \( a \) by solving the system of equations\n\n\[ u\left( {t}_{j}\right) = f\left( {{t}_{j};a}\right) ,\;j = 1,\ldots, m. \]\n\nIf \( m = n \), this system consists of \( n \) equations for the \( n \) unknowns \( {a}_{1},\ldots ,{a}_{n} \). However, in general, the measurements will be contaminated by errors. Therefore, usually one takes \( m > n \) measurements and then tries to determine \( a \) by requiring the deviations\n\n\[ u\left( {t}_{j}\right) - f\left( {{t}_{j};a}\right) ,\;j = 1,\ldots, m, \]\n\nto be as small as possible. Usually the latter requirement is posed in the least squares sense; i.e., the parameter \( a \) is chosen such that\n\n\[ g\left( a\right) \mathrel{\text{:=}} \mathop{\sum }\limits_{{k = 1}}^{m}{\left\lbrack u\left( {t}_{k}\right) - f\left( {{t}_{k};a}\right) \right\rbrack }^{2} \]\n\nattains a minimal value.
|
The necessary conditions for a minimum,\n\n\[ \frac{\partial g}{\partial {a}_{j}} = 0,\;j = 1,\ldots, n \]\n\nlead to the normal equations\n\n\[ \mathop{\sum }\limits_{{k = 1}}^{m}\left\lbrack {u\left( {t}_{k}\right) - f\left( {{t}_{k};a}\right) }\right\rbrack \frac{\partial f\left( {{t}_{k};a}\right) }{\partial {a}_{j}} = 0,\;j = 1,\ldots, n, \]\n\nfor the method of least squares. These constitute a system of \( n \), in general, nonlinear equations for the \( n \) unknowns \( {a}_{1},\ldots ,{a}_{n} \).
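As an illustration (not part of the original text), here is a minimal numerical sketch for the special case of a linear model \( f(t;a) = a_1 + a_2 t \) with synthetic noisy data; in this case the normal equations reduce to the linear system \( J^T J\, a = J^T u \) with the Jacobian \( J \), \( J_{kj} = \partial f(t_k;a)/\partial a_j \):

```python
import numpy as np

# Sketch: least-squares fit of the hypothetical linear model
# f(t; a) = a_1 + a_2 * t to m > n noisy measurements, by solving
# the normal equations, which for a linear model reduce to
# (J^T J) a = J^T u with the Jacobian J_{kj} = df(t_k; a)/da_j.
rng = np.random.default_rng(0)
a_true = np.array([2.0, -0.5])
t = np.linspace(0.0, 5.0, 20)                 # m = 20 measurement times
u = a_true[0] + a_true[1] * t + 0.01 * rng.standard_normal(t.size)

J = np.column_stack([np.ones_like(t), t])     # Jacobian of the linear model
a_hat = np.linalg.solve(J.T @ J, J.T @ u)     # normal equations

# Cross-check against numpy's least-squares routine.
a_lstsq, *_ = np.linalg.lstsq(J, u, rcond=None)
print(a_hat, a_lstsq)
```

The model, data, and variable names are ad hoc choices for the demonstration; for ill-conditioned \( J \) one would prefer a QR- or SVD-based solver over forming \( J^T J \) explicitly.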
|
Yes
|
We consider the system\n\n\[ \n{x}_{1} + {200}{x}_{2} = {100} \]\n\n\[ \n{x}_{1} + \;{x}_{2} = 1 \]\n\nwith the exact solution \( {x}_{1} = {100}/{199} = {0.502}\ldots ,{x}_{2} = {99}/{199} = {0.497}\ldots \) .
|
For the following computations we use two-decimal-digit floating-point arithmetic. Column pivoting leads to \( {a}_{11} \) as pivot element, and the elimination yields\n\n\[ \n{x}_{1} + {200}{x}_{2} = {100} \]\n\n\[ \n- {200}{x}_{2} = - {99} \]\n\nsince \( {199} = {200} \) in two-digit floating-point representation. From the second equation we then have \( {x}_{2} = {0.50}\left( {{0.495} = {0.50}\text{in two decimal digits}}\right) \), and from the first equation it finally follows that \( {x}_{1} = 0 \) .\n\nHowever, if by complete pivoting we choose \( {a}_{12} \) as pivot element, the elimination leads to\n\n\[ \n{x}_{1} + {200}{x}_{2} = {100} \]\n\n\[ \n{x}_{1}\; = {0.5} \]\n\n\( \left( {{0.995} = {1.00}\text{in two decimal digits}}\right) \), and from this we get the solution \( {x}_{1} = {0.5},{x}_{2} = {0.5}\left( {{0.4975} = {0.50}\text{in two decimal digits}}\right) \), which is correct to two decimal digits.
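The two-digit computation above can be reproduced with Python's `decimal` module, whose context rounds every intermediate result to a chosen number of significant digits (a sketch; `ROUND_HALF_UP` is assumed as the rounding mode, and the variable names are ad hoc):

```python
from decimal import Decimal, getcontext, ROUND_HALF_UP

# Two-decimal-digit floating-point arithmetic: every intermediate
# result is rounded to 2 significant digits.
getcontext().prec = 2
getcontext().rounding = ROUND_HALF_UP

# System: x1 + 200*x2 = 100,  x1 + x2 = 1  (exact: x1 = 100/199, x2 = 99/199).

# Column pivoting, pivot a11 = 1: eliminate x1 from the second row.
l21 = Decimal(1) / Decimal(1)             # multiplier a21/a11 = 1
a22 = Decimal(1) - l21 * Decimal(200)     # 1 - 200 = -199 -> -2.0E+2
b2 = Decimal(1) - l21 * Decimal(100)      # 1 - 100 = -99
x2 = b2 / a22                             # 0.495 -> 0.50
x1 = Decimal(100) - Decimal(200) * x2     # 100 - 100 = 0, completely wrong
print("column pivoting:  ", x1, x2)

# Complete pivoting, pivot a12 = 200: eliminate x2 from the second row.
l = Decimal(1) / Decimal(200)             # 0.005
c11 = Decimal(1) - l                      # 0.995 -> 1.0
c1 = Decimal(1) - l * Decimal(100)        # 1 - 0.50 = 0.50
x1c = c1 / c11                            # 0.50
x2c = (Decimal(100) - x1c) / Decimal(200)
print("complete pivoting:", x1c, x2c)     # both correct to two digits
```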
|
Yes
|
Theorem 2.9 For a nonsingular matrix \( A \), Gaussian elimination (without reordering rows and columns) yields an LR decomposition.
|
Proof. In the first elimination step we multiply the first equation by \( {a}_{j1}/{a}_{11} \) and subtract the result from the \( j \) th equation; i.e., the matrix \( {A}_{1} = A \) is multiplied from the left by the lower triangular matrix\n\n\[ \n{L}_{1} = \left( \begin{matrix} 1 & & & \\ - \frac{{a}_{21}}{{a}_{11}} & 1 & & \\ \vdots & & \ddots & \\ - \frac{{a}_{n1}}{{a}_{11}} & & & 1 \end{matrix}\right) .\n\]\n\nThe resulting matrix \( {A}_{2} = {L}_{1}{A}_{1} \) is of the form\n\n\[ \n{A}_{2} = \left( \begin{matrix} {a}_{11} & * \\ 0 & {\widetilde{A}}_{n - 1} \end{matrix}\right) ,\n\]\n\nwhere \( {\widetilde{A}}_{n - 1} \) is an \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) matrix. In the second step the same procedure is repeated for the \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) matrix \( {\widetilde{A}}_{n - 1} \) . The corresponding \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) elimination matrix is completed to an \( n \times n \) lower triangular matrix \( {L}_{2} \) by setting the diagonal element in the first row equal to one. In this way, \( n - 1 \) elimination steps lead to\n\n\[ \n{L}_{n - 1}\cdots {L}_{1}A = R\n\]\n\nwith nonsingular lower triangular matrices \( {L}_{1},\ldots ,{L}_{n - 1} \) and an upper triangular matrix \( R \) . From this we find\n\n\[ \nA = {LR},\n\]\n\nwhere \( L \) denotes the inverse of the product \( {L}_{n - 1}\cdots {L}_{1} \) .
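The construction in the proof can be sketched as code. The helper `lr_decomposition` below is a hypothetical illustration that accumulates the multipliers of the elimination matrices \( L_k \) directly into \( L \); it assumes all pivots encountered are nonzero:

```python
import numpy as np

def lr_decomposition(A):
    """Gaussian elimination without row or column reordering, returning
    L (unit lower triangular) and R (upper triangular) with A = L @ R.
    A sketch of the proof's construction; assumes nonzero pivots."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    R = A.copy()
    L = np.eye(n)
    for k in range(n - 1):
        # Multipliers a_jk / a_kk define the k-th elimination matrix L_k;
        # inverting L_k just flips their sign, so L collects them directly.
        m = R[k + 1:, k] / R[k, k]
        L[k + 1:, k] = m
        R[k + 1:, k:] -= np.outer(m, R[k, k:])
    return L, R

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
L, R = lr_decomposition(A)
print(L @ R)
```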
|
Yes
|
Theorem 3.5 The limit of a convergent sequence is uniquely determined.
|
Proof. Assume that \( {x}_{n} \rightarrow x \) and \( {x}_{n} \rightarrow y \) for \( n \rightarrow \infty \) . Then from the triangle inequality we obtain that\n\n\[ \parallel x - y\parallel = \begin{Vmatrix}{x - {x}_{n} + {x}_{n} - y}\end{Vmatrix} \leq \begin{Vmatrix}{x - {x}_{n}}\end{Vmatrix} + \begin{Vmatrix}{{x}_{n} - y}\end{Vmatrix} \rightarrow 0,\;n \rightarrow \infty .\n\]\n\nTherefore, \( \parallel x - y\parallel = 0 \) and \( x = y \) by (N2).
|
Yes
|
Theorem 3.7 Two norms \( \parallel \cdot {\parallel }_{a} \) and \( \parallel \cdot {\parallel }_{b} \) on a linear space \( X \) are equivalent if and only if there exist positive numbers \( c \) and \( C \) such that\n\n\[ c\parallel x{\parallel }_{a} \leq \parallel x{\parallel }_{b} \leq C\parallel x{\parallel }_{a} \]\n\nfor all \( x \in X \) . The limits with respect to the two norms coincide.
|
Proof. Provided that the conditions are satisfied, from \( {\begin{Vmatrix}{x}_{n} - x\end{Vmatrix}}_{a} \rightarrow 0 \) , \( n \rightarrow \infty \), it follows that \( {\begin{Vmatrix}{x}_{n} - x\end{Vmatrix}}_{b} \rightarrow 0, n \rightarrow \infty \), and vice versa.\n\nConversely, let the two norms be equivalent and assume that there is no \( C > 0 \) such that \( \parallel x{\parallel }_{b} \leq C\parallel x{\parallel }_{a} \) for all \( x \in X \) . Then there exists a sequence \( \left( {x}_{n}\right) \) with \( {\begin{Vmatrix}{x}_{n}\end{Vmatrix}}_{a} = 1 \) and \( {\begin{Vmatrix}{x}_{n}\end{Vmatrix}}_{b} \geq {n}^{2} \) . Now, the sequence \( \left( {y}_{n}\right) \) with \( {y}_{n} \mathrel{\text{:=}} {x}_{n}/n \) converges to zero with respect to \( \parallel \cdot {\parallel }_{a} \), whereas with respect to \( \parallel \cdot {\parallel }_{b} \) it is divergent because of \( {\begin{Vmatrix}{y}_{n}\end{Vmatrix}}_{b} \geq n \) . This contradicts the equivalence of the two norms. The existence of \( c > 0 \) follows by interchanging the roles of the two norms.
|
Yes
|
Theorem 3.8 On a finite-dimensional linear space all norms are equivalent.
|
Proof. In a linear space \( X \) with finite dimension \( n \) and basis \( {u}_{1},\ldots ,{u}_{n} \) every element can be expressed in the form\n\n\[ x = \mathop{\sum }\limits_{{j = 1}}^{n}{\alpha }_{j}{u}_{j} \]\n\nAs in Example 3.2,\n\n\[ \parallel x{\parallel }_{\infty } \mathrel{\text{:=}} \mathop{\max }\limits_{{j = 1,\ldots, n}}\left| {\alpha }_{j}\right| \]\n\n(3.2)\n\ndefines a norm on \( X \) . Let \( \parallel \cdot \parallel \) denote any other norm on \( X \) . Then, by the triangle inequality we have\n\n\[ \parallel x\parallel \leq \mathop{\sum }\limits_{{j = 1}}^{n}\left| {\alpha }_{j}\right| \begin{Vmatrix}{u}_{j}\end{Vmatrix} \leq C\parallel x{\parallel }_{\infty } \]\n\nfor all \( x \in X \), where\n\n\[ C \mathrel{\text{:=}} \mathop{\sum }\limits_{{j = 1}}^{n}\begin{Vmatrix}{u}_{j}\end{Vmatrix} \]\n\nAssume that there is no \( c > 0 \) such that \( c\parallel x{\parallel }_{\infty } \leq \parallel x\parallel \) for all \( x \in X \) . Then there exists a sequence \( \left( {x}_{\nu }\right) \) with \( \begin{Vmatrix}{x}_{\nu }\end{Vmatrix} = 1 \) such that \( {\begin{Vmatrix}{x}_{\nu }\end{Vmatrix}}_{\infty } \geq \nu \) . Consider the sequence \( \left( {y}_{\nu }\right) \) with \( {y}_{\nu } \mathrel{\text{:=}} {x}_{\nu }/{\begin{Vmatrix}{x}_{\nu }\end{Vmatrix}}_{\infty } \) and write\n\n\[ {y}_{\nu } = \mathop{\sum }\limits_{{j = 1}}^{n}{\alpha }_{j\nu }{u}_{j} \]\n\nBecause of \( {\begin{Vmatrix}{y}_{\nu }\end{Vmatrix}}_{\infty } = 1 \) each of the sequences \( \left( {\alpha }_{j\nu }\right), j = 1,\ldots, n \), is bounded in \( \mathbb{C} \) . Hence, by the Bolzano-Weierstrass theorem we can select convergent subsequences \( {\alpha }_{j,\nu \left( \ell \right) } \rightarrow {\alpha }_{j},\ell \rightarrow \infty \), for each \( j = 1,\ldots, n \) . 
This now implies \( {\begin{Vmatrix}{y}_{\nu \left( \ell \right) } - y\end{Vmatrix}}_{\infty } \rightarrow 0,\ell \rightarrow \infty \), where\n\n\[ y \mathrel{\text{:=}} \mathop{\sum }\limits_{{j = 1}}^{n}{\alpha }_{j}{u}_{j} \]\n\nand also \( \begin{Vmatrix}{{y}_{\nu \left( \ell \right) } - y}\end{Vmatrix} \leq C{\begin{Vmatrix}{y}_{\nu \left( \ell \right) } - y\end{Vmatrix}}_{\infty } \rightarrow 0,\ell \rightarrow \infty \) . But on the other hand we have \( \begin{Vmatrix}{y}_{\nu }\end{Vmatrix} = 1/{\begin{Vmatrix}{x}_{\nu }\end{Vmatrix}}_{\infty } \rightarrow 0,\nu \rightarrow \infty \) . Therefore, \( y = 0 \), and consequently \( {\begin{Vmatrix}{y}_{\nu \left( \ell \right) }\end{Vmatrix}}_{\infty } \rightarrow 0,\ell \rightarrow \infty \), which contradicts \( {\begin{Vmatrix}{y}_{\nu }\end{Vmatrix}}_{\infty } = 1 \) for all \( \nu \).
|
Yes
|
Theorem 3.11 Any bounded sequence in a finite-dimensional normed space \( X \) contains a convergent subsequence.
|
Proof. Let \( {u}_{1},\ldots ,{u}_{n} \) be a basis of \( X \) and let \( \left( {x}_{\nu }\right) \) be a bounded sequence. Then writing\n\n\[ \n{x}_{\nu } = \mathop{\sum }\limits_{{j = 1}}^{n}{\alpha }_{j\nu }{u}_{j} \n\] \n\nand using the norm (3.2), as in the proof of Theorem 3.8 we deduce that each of the sequences \( \left( {\alpha }_{j\nu }\right), j = 1,\ldots, n \), is bounded in \( \mathbb{C} \) . Hence, by the Bolzano-Weierstrass theorem we can select convergent subsequences \( {\alpha }_{j,\nu \left( \ell \right) } \rightarrow {\alpha }_{j},\ell \rightarrow \infty \), for each \( j = 1,\ldots, n \) . This now implies\n\n\[ \n{x}_{\nu \left( \ell \right) } \rightarrow \mathop{\sum }\limits_{{j = 1}}^{n}{\alpha }_{j}{u}_{j} \in X,\;\ell \rightarrow \infty , \n\] \n\nand the proof is finished.
|
Yes
|
Theorem 3.14 For a scalar product we have the Cauchy-Schwarz inequality\n\n\[ \n{\left| \left( x, y\right) \right| }^{2} \leq \left( {x, x}\right) \left( {y, y}\right) \n\]\n\nfor all \( x, y \in X \), with equality if and only if \( x \) and \( y \) are linearly dependent.
|
Proof. The inequality is trivial for \( x = 0 \) . For \( x \neq 0 \) it follows from\n\n\[ \n\left( {{\alpha x} + {\beta y},{\alpha x} + {\beta y}}\right) = {\left| \alpha \right| }^{2}\left( {x, x}\right) + 2\operatorname{Re}\{ \alpha \bar{\beta }\left( {x, y}\right) \} + {\left| \beta \right| }^{2}\left( {y, y}\right) \n\]\n\n\[ \n= \left( {x, x}\right) \left( {y, y}\right) - {\left| \left( x, y\right) \right| }^{2}, \n\]\n\nwhere we have set \( \alpha = - {\left( x, x\right) }^{-1/2}\overline{\left( x, y\right) } \) and \( \beta = {\left( x, x\right) }^{1/2} \) . Since \( \left( {\cdot , \cdot }\right) \) is positive definite, this expression is nonnegative, and it is equal to zero if and only if \( {\alpha x} + {\beta y} = 0 \) . In the latter case \( x \) and \( y \) are linearly dependent because \( \beta \neq 0 \) .
|
Yes
|
Theorem 3.15 A scalar product \( \left( {\cdot , \cdot }\right) \) on a linear space \( X \) defines a norm by\n\n\[ \parallel x\parallel \mathrel{\text{:=}} {\left( x, x\right) }^{1/2} \]\n\nfor all \( x \in X \) ; i.e., a pre-Hilbert space is always a normed space.
|
Proof. We leave it as an exercise for the reader to verify the norm axioms. The triangle inequality follows by\n\n\[ \parallel x + y{\parallel }^{2} = \left( {x + y, x + y}\right) \leq \parallel x{\parallel }^{2} + 2\parallel x\parallel \parallel y\parallel + \parallel y{\parallel }^{2} = {\left( \parallel x\parallel + \parallel y\parallel \right) }^{2} \]\n\nfrom the Cauchy-Schwarz inequality.
|
No
|
Theorem 3.17 The elements of an orthonormal system are linearly independent.
|
Proof. From\n\n\[ \mathop{\sum }\limits_{{k = 1}}^{n}{\alpha }_{k}{q}_{k} = 0 \]\n\nfor the orthonormal system \( \left\{ {{q}_{1},\ldots ,{q}_{n}}\right\} \), by taking the scalar product with \( {q}_{j} \), we immediately have that \( {\alpha }_{j} = 0 \) for \( j = 1,\ldots, n \) .
|
Yes
|
Theorem 3.18 Let \( \left\{ {{u}_{0},{u}_{1},\ldots }\right\} \) be a finite or countable number of linearly independent elements of a pre-Hilbert space. Then there exists a uniquely determined orthogonal system \( \left\{ {{q}_{0},{q}_{1},\ldots }\right\} \) of the form\n\n\[ \n{q}_{n} = {u}_{n} + {r}_{n},\;n = 0,1,\ldots ,\n\]\n\n(3.3)\n\nwith \( {r}_{0} = 0 \) and \( {r}_{n} \in \operatorname{span}\left\{ {{u}_{0},\ldots ,{u}_{n - 1}}\right\}, n = 1,2,\ldots \), satisfying\n\n\[ \n\operatorname{span}\left\{ {{u}_{0},\ldots ,{u}_{n}}\right\} = \operatorname{span}\left\{ {{q}_{0},\ldots ,{q}_{n}}\right\} ,\;n = 0,1,\ldots\n\]\n\n(3.4)
|
Proof. Assume that we have constructed orthogonal elements of the form (3.3) with the property (3.4) up to \( {q}_{n - 1} \) . By (3.4), the \( \left\{ {{q}_{0},\ldots ,{q}_{n - 1}}\right\} \) are linearly independent, and therefore \( \begin{Vmatrix}{q}_{k}\end{Vmatrix} \neq 0 \) for \( k = 0,1,\ldots, n - 1 \) . Hence,\n\n\[ \n{q}_{n} \mathrel{\text{:=}} {u}_{n} - \mathop{\sum }\limits_{{k = 0}}^{{n - 1}}\frac{\left( {u}_{n},{q}_{k}\right) }{\left( {q}_{k},{q}_{k}\right) }{q}_{k}\n\]\n\nis well-defined, and using the induction assumption, we obtain \( \left( {{q}_{n},{q}_{m}}\right) = 0 \) for \( m = 0,\ldots, n - 1 \) and\n\n\[ \n\operatorname{span}\left\{ {{u}_{0},\ldots ,{u}_{n - 1},{u}_{n}}\right\} = \operatorname{span}\left\{ {{q}_{0},\ldots ,{q}_{n - 1},{u}_{n}}\right\} = \operatorname{span}\left\{ {{q}_{0},\ldots ,{q}_{n - 1},{q}_{n}}\right\} .\n\]\n\nHence, the existence of \( {q}_{n} \) is established.\n\nAssume that \( \left\{ {{q}_{0},{q}_{1},\ldots }\right\} \) and \( \left\{ {{\widetilde{q}}_{0},{\widetilde{q}}_{1},\ldots }\right\} \) are two orthogonal sets of elements with the required properties. Then clearly \( {q}_{0} = {u}_{0} = {\widetilde{q}}_{0} \) . Assume that we have shown that equality holds up to \( {q}_{n - 1} = {\widetilde{q}}_{n - 1} \) . Then, since \( {q}_{n} - {\widetilde{q}}_{n} \in \operatorname{span}\left\{ {{u}_{0},\ldots ,{u}_{n - 1}}\right\} \), we can represent \( {q}_{n} - {\widetilde{q}}_{n} \) as a linear combination of \( {q}_{0},\ldots ,{q}_{n - 1} \) ; i.e.,\n\n\[ \n{q}_{n} - {\widetilde{q}}_{n} = \mathop{\sum }\limits_{{k = 0}}^{{n - 1}}{\alpha }_{k}{q}_{k}.\n\]\n\nNow the orthogonality yields\n\n\[ \n{\begin{Vmatrix}{q}_{n} - {\widetilde{q}}_{n}\end{Vmatrix}}^{2} = \left( {{q}_{n} - {\widetilde{q}}_{n},\mathop{\sum }\limits_{{k = 0}}^{{n - 1}}{\alpha }_{k}{q}_{k}}\right) = 0,\n\]\n\nwhence \( {q}_{n} = {\widetilde{q}}_{n} \) .
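The existence part of the proof is constructive. Below is a minimal sketch of the resulting classical Gram-Schmidt procedure (unnormalized, assuming linearly independent input vectors; the scalar product convention \( (x,y) = \sum_j x_j \bar{y}_j \) is an assumption of this illustration):

```python
import numpy as np

def gram_schmidt(us):
    """Classical Gram-Schmidt as in the proof:
    q_n = u_n - sum_k (u_n, q_k)/(q_k, q_k) * q_k.
    Returns the orthogonal (not normalized) system q_0, q_1, ...;
    assumes the u_j are linearly independent vectors."""
    qs = []
    for u in us:
        q = u.astype(complex)            # working copy of u_n
        for qk in qs:
            # np.vdot(qk, u) = sum_j conj(qk_j) u_j = (u, qk)
            q = q - (np.vdot(qk, u) / np.vdot(qk, qk)) * qk
        qs.append(q)
    return qs

us = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
qs = gram_schmidt(us)
# q_0 = u_0, and the q_n are pairwise orthogonal.
print(np.vdot(qs[0], qs[1]), np.vdot(qs[0], qs[2]), np.vdot(qs[1], qs[2]))
```

For numerical work, modified Gram-Schmidt or a Householder QR factorization is preferred for stability; the classical form above mirrors the proof.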
|
Yes
|
Theorem 3.21 A linear operator is continuous if it is continuous at one element.
|
Proof. Let \( A : X \rightarrow Y \) be continuous at \( {x}_{0} \in X \) . Then for every \( x \in X \) and every sequence \( \left( {x}_{n}\right) \) with \( {x}_{n} \rightarrow x, n \rightarrow \infty \), we have\n\n\[ A{x}_{n} = A\left( {{x}_{n} - x + {x}_{0}}\right) + A\left( {x - {x}_{0}}\right) \rightarrow A\left( {x}_{0}\right) + A\left( {x - {x}_{0}}\right) = A\left( x\right) ,\;n \rightarrow \infty ,\]\n\nsince \( {x}_{n} - x + {x}_{0} \rightarrow {x}_{0}, n \rightarrow \infty \) .
|
Yes
|
Theorem 3.23 A linear operator \( A : X \rightarrow Y \) is bounded if and only if\n\n\[ \parallel A\parallel \mathrel{\text{:=}} \mathop{\sup }\limits_{{\parallel x\parallel = 1}}\parallel {Ax}\parallel < \infty . \]\n\nThe number \( \parallel A\parallel \) is the smallest bound for \( A \) and is called the norm of \( A \) .
|
Proof. Assume that \( A \) is bounded with the bound \( C \) . Then\n\n\[ \mathop{\sup }\limits_{{\parallel x\parallel = 1}}\parallel {Ax}\parallel \leq C \]\n\nand, in particular, \( \parallel A\parallel \) is less than or equal to any bound for \( A \) . Conversely, if \( \parallel A\parallel < \infty \), then using the linearity of \( A \) and the homogeneity of the norm,\n\nwe find that\n\[ \parallel {Ax}\parallel = \begin{Vmatrix}{A\left( \frac{x}{\parallel x\parallel }\right) }\end{Vmatrix}\parallel x\parallel \leq \parallel A\parallel \parallel x\parallel \]\n\nfor all \( x \neq 0 \) . Therefore, \( A \) is bounded with the bound \( \parallel A\parallel \) .
|
Yes
|
Theorem 3.24 A linear operator is continuous if and only if it is bounded.
|
Proof. Let \( A : X \rightarrow Y \) be bounded and let \( \left( {x}_{n}\right) \) be a sequence in \( X \) with \( {x}_{n} \rightarrow 0, n \rightarrow \infty \) . Then from \( \begin{Vmatrix}{A{x}_{n}}\end{Vmatrix} \leq C\begin{Vmatrix}{x}_{n}\end{Vmatrix} \) it follows that \( A{x}_{n} \rightarrow 0 \) , \( n \rightarrow \infty \) . Thus, \( A \) is continuous at \( x = 0 \), and because of Theorem 3.21 it is continuous everywhere in \( X \) .\n\nConversely, let \( A \) be continuous and assume that there is no \( C > 0 \) such that \( \parallel {Ax}\parallel \leq C\parallel x\parallel \) for all \( x \in X \) . Then there exists a sequence \( \left( {x}_{n}\right) \) in \( X \) with \( \begin{Vmatrix}{x}_{n}\end{Vmatrix} = 1 \) and \( \begin{Vmatrix}{A{x}_{n}}\end{Vmatrix} \geq n \) . Consider the sequence \( {y}_{n} \mathrel{\text{:=}} {x}_{n}/\begin{Vmatrix}{A{x}_{n}}\end{Vmatrix} \) . Then \( {y}_{n} \rightarrow 0, n \rightarrow \infty \), and since \( A \) is continuous, \( A{y}_{n} \rightarrow A\left( 0\right) = 0, n \rightarrow \infty \) . This is a contradiction to \( \begin{Vmatrix}{A{y}_{n}}\end{Vmatrix} = 1 \) for all \( n \) . Hence, \( A \) is bounded.
|
Yes
|
Theorem 3.27 To each matrix \( A \) there exists a unitary matrix \( Q \) such that \( {Q}^{ * }{AQ} \) is an upper triangular matrix.
|
Proof. Assume that it has been shown that for each \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) matrix \( {A}_{n - 1} \) there exists a unitary \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) matrix \( {Q}_{n - 1} \) such that \( {Q}_{n - 1}^{ * }{A}_{n - 1}{Q}_{n - 1} \) is an upper triangular matrix. Let \( \lambda \) be an eigenvalue of the \( n \times n \) matrix \( {A}_{n} \) with eigenvector \( u \) . We may assume that \( \left( {u, u}\right) = 1 \), where \( \left( {\cdot , \cdot }\right) \) is the Euclidean scalar product. Using the Gram-Schmidt procedure of Theorem 3.18 we can construct an orthonormal basis of \( {\mathbb{C}}^{n} \) of the form \( u,{v}_{2},\ldots ,{v}_{n} \) . Then we define a unitary \( n \times n \) matrix by\n\n\[ \n{U}_{n} \mathrel{\text{:=}} \left( {u,{v}_{2},\ldots ,{v}_{n}}\right) \n\]\n\nWith the aid of \( \left( {u,{v}_{j}}\right) = 0, j = 2,\ldots, n \), we see that\n\n\[ \n{U}_{n}^{ * }{A}_{n}{U}_{n} = {U}_{n}^{ * }\left( {{\lambda u},{A}_{n}{v}_{2},\ldots ,{A}_{n}{v}_{n}}\right) = \left( \begin{matrix} \lambda & * \\ 0 & {A}_{n - 1} \end{matrix}\right) , \n\]\n\nwith some \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) matrix \( {A}_{n - 1} \) . By the induction assumption there exists a unitary \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) matrix \( {Q}_{n - 1} \) such that \( {Q}_{n - 1}^{ * }{A}_{n - 1}{Q}_{n - 1} \) is upper triangular. Then\n\n\[ \n{Q}_{n} \mathrel{\text{:=}} {U}_{n}\left( \begin{matrix} 1 & 0 \\ 0 & {Q}_{n - 1} \end{matrix}\right) \n\]\n\ndefines a unitary \( n \times n \) matrix, and \( {Q}_{n}^{ * }{A}_{n}{Q}_{n} \) is upper triangular.
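The induction in the proof translates directly into a recursive construction. The following is a rough numerical sketch (not a robust Schur algorithm), using numpy's `eig` for one eigenpair and a QR factorization to complete the eigenvector to an orthonormal basis:

```python
import numpy as np

def schur_unitary(A):
    """Recursive construction from the proof: returns a unitary Q such
    that Q* A Q is upper triangular.  Illustrative sketch only."""
    A = np.asarray(A, dtype=complex)
    n = A.shape[0]
    if n == 1:
        return np.eye(1, dtype=complex)
    lam, vecs = np.linalg.eig(A)
    u = vecs[:, 0] / np.linalg.norm(vecs[:, 0])   # normalized eigenvector
    # QR of [u | I] yields a unitary U_n whose first column spans u.
    U, _ = np.linalg.qr(np.column_stack([u, np.eye(n)]))
    B = U.conj().T @ A @ U        # block form ( lambda  * ; 0  A_{n-1} )
    Q_sub = schur_unitary(B[1:, 1:])
    Q = np.eye(n, dtype=complex)
    Q[1:, 1:] = Q_sub
    return U @ Q

A = np.array([[0.0, 1.0], [-1.0, 0.0]])           # eigenvalues +-i
Q = schur_unitary(A)
T = Q.conj().T @ A @ Q
print(np.round(T, 10))
```

Production code would use a library Schur routine; the point here is only that the inductive step of the proof is executable.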
|
Yes
|
Lemma 3.28 For an \( n \times n \) matrix \( A \) and its adjoint \( {A}^{ * } \) we have that\n\n\[ \left( {{Ax}, y}\right) = \left( {x,{A}^{ * }y}\right) \]\n\nfor all \( x, y \in {\mathbb{C}}^{n} \), where \( \left( {\cdot , \cdot }\right) \) denotes the Euclidean scalar product.
|
Proof. Simple calculations yield\n\n\[ \left( {{Ax}, y}\right) = \mathop{\sum }\limits_{{j = 1}}^{n}{\left( Ax\right) }_{j}{\bar{y}}_{j} = \mathop{\sum }\limits_{{j = 1}}^{n}\mathop{\sum }\limits_{{k = 1}}^{n}{a}_{jk}{x}_{k}{\bar{y}}_{j} \]\n\n\[ = \mathop{\sum }\limits_{{k = 1}}^{n}\mathop{\sum }\limits_{{j = 1}}^{n}{x}_{k}\overline{{a}_{kj}^{ * }{y}_{j}} = \mathop{\sum }\limits_{{k = 1}}^{n}{x}_{k}\overline{{\left( {{A}^{ * }y}\right) }_{k}} = \left( {x,{A}^{ * }y}\right) ,\]\n\nwhere we have used that \( {a}_{kj}^{ * } = \overline{{a}_{jk}} \) .
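A quick numerical spot check of the lemma for a random complex matrix (illustrative only; the scalar product is implemented directly from its definition):

```python
import numpy as np

# Check (Ax, y) = (x, A*y) for the Euclidean scalar product
# (u, v) = sum_j u_j * conj(v_j); A* is the conjugate transpose.
rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def scalar(u, v):
    return np.sum(u * np.conj(v))   # Euclidean scalar product

lhs = scalar(A @ x, y)
rhs = scalar(x, A.conj().T @ y)
print(lhs, rhs)
```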
|
Yes
|
The eigenvalues of a Hermitian \( n \times n \) matrix are real, and the eigenvectors form an orthogonal basis in \( {\mathbb{C}}^{n} \) .
|
Proof. If \( A \) is Hermitian, i.e., if \( A = {A}^{ * } \), then the matrix \( \widetilde{A} \mathrel{\text{:=}} {Q}^{ * }{AQ} \) from Theorem 3.27 is also Hermitian, since\n\n\[ \n{\widetilde{A}}^{ * } = {\left( {Q}^{ * }AQ\right) }^{ * } = {Q}^{ * }{A}^{ * }{Q}^{* * } = {Q}^{ * }{AQ} = \widetilde{A}.\n\]\n\nTherefore, in this case the upper triangular matrix \( \widetilde{A} \) must be diagonal; i.e.,\n\n\[ \n\widetilde{A} = D \mathrel{\text{:=}} \operatorname{diag}\left( {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right)\n\]\n\nSince from \( {Q}^{ * }{AQ} = D \) it follows that \( {AQ} = {QD} \), we can conclude that the columns of \( Q = \left( {{u}_{1},\ldots ,{u}_{n}}\right) \) satisfy \( A{u}_{j} = {\lambda }_{j}{u}_{j}, j = 1,\ldots, n \) . Hence the eigenvectors of a Hermitian matrix form an orthogonal basis in \( {\mathbb{C}}^{n} \) . Because of\n\n\[ \n{\lambda }_{j} = \left( {A{u}_{j},{u}_{j}}\right) = \left( {{u}_{j}, A{u}_{j}}\right) = \overline{\left( A{u}_{j},{u}_{j}\right) } = {\bar{\lambda }}_{j},\n\]\n\nthe eigenvalues of Hermitian matrices are real.
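A numerical illustration of the statement with a random Hermitian matrix (numpy's `eigh` is the eigensolver specialized to the Hermitian case and returns real eigenvalues and orthonormal eigenvectors):

```python
import numpy as np

# A Hermitian matrix has real eigenvalues and an orthonormal
# basis of eigenvectors.
rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B + B.conj().T                     # A = A*, Hermitian by construction

lam, U = np.linalg.eigh(A)             # columns of U are eigenvectors
print(lam)                             # real eigenvalues
print(np.round(U.conj().T @ U, 10))    # identity: orthonormal basis
```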
|
Yes
|
For an \( n \times n \) matrix \( A \) we have\n\n\[ \parallel A{\parallel }_{2} = \sqrt{\rho \left( {{A}^{ * }A}\right) } \]\n\nIf \( A \) is Hermitian, then\n\n\[ \parallel A{\parallel }_{2} = \rho \left( A\right) \]
|
Proof. From Lemma 3.28 we have that\n\n\[ \parallel {Ax}{\parallel }_{2}^{2} = \left( {{Ax},{Ax}}\right) = \left( {x,{A}^{ * }{Ax}}\right) \]\n\nfor all \( x \in {\mathbb{C}}^{n} \) . Hence the Hermitian matrix \( {A}^{ * }A \) is positive semidefinite and therefore has \( n \) orthonormal eigenvectors\n\n\[ {A}^{ * }A{u}_{j} = {\mu }_{j}^{2}{u}_{j},\;j = 1,\ldots, n \]\n\nwith real nonnegative eigenvalues. We use the orthonormal basis of eigenvectors and represent \( x \in {\mathbb{C}}^{n} \) by\n\n\[ x = \mathop{\sum }\limits_{{j = 1}}^{n}{\alpha }_{j}{u}_{j} \]\n\nand have\n\n\[ \parallel x{\parallel }_{2}^{2} = \left( {x, x}\right) = \left( {\mathop{\sum }\limits_{{j = 1}}^{n}{\alpha }_{j}{u}_{j},\mathop{\sum }\limits_{{k = 1}}^{n}{\alpha }_{k}{u}_{k}}\right) = \mathop{\sum }\limits_{{j = 1}}^{n}{\left| {\alpha }_{j}\right| }^{2} \]\n\nand\n\n\[ \parallel {Ax}{\parallel }_{2}^{2} = \left( {{Ax},{Ax}}\right) = \left( {x,{A}^{ * }{Ax}}\right) = \left( {\mathop{\sum }\limits_{{j = 1}}^{n}{\alpha }_{j}{u}_{j},\mathop{\sum }\limits_{{k = 1}}^{n}{\mu }_{k}^{2}{\alpha }_{k}{u}_{k}}\right) = \mathop{\sum }\limits_{{j = 1}}^{n}{\mu }_{j}^{2}{\left| {\alpha }_{j}\right| }^{2}. \]\n\nFrom this we obtain that\n\n\[ \parallel {Ax}{\parallel }_{2}^{2} \leq \rho \left( {{A}^{ * }A}\right) \parallel x{\parallel }_{2}^{2}, \]\n\nwhence\n\n\[ \parallel A{\parallel }_{2}^{2} \leq \rho \left( {{A}^{ * }A}\right) \]\n\nfollows. On the other hand, if we choose \( j \) such that \( {\mu }_{j}^{2} = \rho \left( {{A}^{ * }A}\right) \), then we\n\nhave that\n\n\[ \parallel A{\parallel }_{2}^{2} = {\left\lbrack \mathop{\sup }\limits_{{\parallel x{\parallel }_{2} = 1}}\parallel Ax{\parallel }_{2}\right\rbrack }^{2} \geq {\begin{Vmatrix}A{u}_{j}\end{Vmatrix}}_{2}^{2} = \left( {{u}_{j},{A}^{ * }A{u}_{j}}\right) = {\mu }_{j}^{2} = \rho \left( {{A}^{ * }A}\right) . \]\n\nThis concludes the proof of \( \parallel A{\parallel }_{2} = \sqrt{\rho \left( {{A}^{ * }A}\right) } \) . 
If \( A \) is Hermitian, then \( {A}^{ * }A = {A}^{2} \), whence \( \rho \left( {{A}^{ * }A}\right) = \rho \left( {A}^{2}\right) = {\left\lbrack \rho \left( A\right) \right\rbrack }^{2} \) follows.
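A numerical spot check of both identities, using numpy's SVD-based spectral norm as the reference (example matrices are arbitrary):

```python
import numpy as np

# Check ||A||_2 = sqrt(rho(A* A)), and ||A||_2 = rho(A) for a
# Hermitian (here: real symmetric) example.
rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))

rho_AstarA = max(abs(np.linalg.eigvals(A.conj().T @ A)))
norm2 = np.linalg.norm(A, 2)           # spectral norm via SVD
print(np.sqrt(rho_AstarA), norm2)

H = A + A.T                            # real symmetric, hence Hermitian
rho_H = max(abs(np.linalg.eigvals(H)))
print(rho_H, np.linalg.norm(H, 2))
```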
|
Yes
|
Theorem 3.32 For each norm on \( {\mathbb{C}}^{n} \) and each \( n \times n \) matrix \( A \) we have that\n\n\[ \rho \left( A\right) \leq \parallel A\parallel \]\n\nConversely, to each matrix \( A \) and each \( \varepsilon > 0 \) there exists a norm on \( {\mathbb{C}}^{n} \) such that\n\n\[ \parallel A\parallel \leq \rho \left( A\right) + \varepsilon \]
|
Proof. Let \( \lambda \) be an eigenvalue of \( A \) with eigenvector \( u \) . We may assume that \( \parallel u\parallel = 1 \) . Then the first part of the theorem follows from\n\n\[ \parallel A\parallel = \mathop{\sup }\limits_{{\parallel x\parallel = 1}}\parallel {Ax}\parallel \geq \parallel {Au}\parallel = \parallel {\lambda u}\parallel = \left| \lambda \right| . \]\n\nFor the second part, by Theorem 3.27 there exists a unitary matrix \( Q \) such that\n\n\[ B = {Q}^{ * }{AQ} = \left( \begin{matrix} {b}_{11} & {b}_{12} & \cdots & {b}_{1n} \\ & {b}_{22} & \cdots & {b}_{2n} \\ & & \ddots & \vdots \\ & & & {b}_{nn} \end{matrix}\right) \]\n\nis upper triangular. Because of \( \det \left( {{\lambda I} - A}\right) = \det \left( {{\lambda I} - B}\right) \), the eigenvalues of \( A \) are given by \( {\lambda }_{j} = {b}_{jj}, j = 1,\ldots, n \) . We set\n\n\[ b \mathrel{\text{:=}} \mathop{\max }\limits_{{1 \leq j \leq k \leq n}}\left| {b}_{jk}\right| \]\n\nand\n\n\[ \delta \mathrel{\text{:=}} \min \left( {1,\frac{\varepsilon }{\left( {n - 1}\right) b}}\right) \]\n\nand define the diagonal matrix\n\n\[ D \mathrel{\text{:=}} \operatorname{diag}\left( {1,\delta ,{\delta }^{2},\ldots ,{\delta }^{n - 1}}\right) \]\n\nwith the inverse\n\n\[ {D}^{-1} = \operatorname{diag}\left( {1,{\delta }^{-1},{\delta }^{-2},\ldots ,{\delta }^{-n + 1}}\right) . \]\n\nThen for \( C \mathrel{\text{:=}} {D}^{-1}{BD} \), with entries \( {c}_{jk} = {\delta }^{k - j}{b}_{jk} \) for \( k \geq j \), we have that\n\n\[ C = \left( \begin{matrix} {b}_{11} & \delta {b}_{12} & \cdots & {\delta }^{n - 1}{b}_{1n} \\ & {b}_{22} & \cdots & {\delta }^{n - 2}{b}_{2n} \\ & & \ddots & \vdots \\ & & & {b}_{nn} \end{matrix}\right) . \]\n\nSince \( \delta \leq 1 \), by Theorem 3.26 we can estimate\n\n\[ \parallel C{\parallel }_{\infty } \leq \mathop{\max }\limits_{{j = 1,\ldots, n}}\left| {b}_{jj}\right| + \left( {n - 1}\right) {\delta b} \leq \rho \left( A\right) + \varepsilon . \]\n\nAfter setting \( V \mathrel{\text{:=}} {QD} \) we define a norm on \( {\mathbb{C}}^{n} \) by \( \parallel x\parallel \mathrel{\text{:=}} {\begin{Vmatrix}{V}^{-1}x\end{Vmatrix}}_{\infty } \) . Using \( C = {V}^{-1}{AV} \) we now obtain\n\n\[ \parallel {Ax}\parallel = {\begin{Vmatrix}{V}^{-1}Ax\end{Vmatrix}}_{\infty } = {\begin{Vmatrix}C{V}^{-1}x\end{Vmatrix}}_{\infty } \leq \parallel C{\parallel }_{\infty }{\begin{Vmatrix}{V}^{-1}x\end{Vmatrix}}_{\infty } = \parallel C{\parallel }_{\infty }\parallel x\parallel \]\n\nfor all \( x \in {\mathbb{C}}^{n} \) . Hence\n\n\[ \parallel A\parallel \leq \parallel C{\parallel }_{\infty } \leq \rho \left( A\right) + \varepsilon ,\]\n\nand the proof is finished.
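The key scaling step of the proof can be checked numerically for an already upper triangular \( B \) (a sketch with an arbitrary example matrix and \( \varepsilon = 10^{-3} \)):

```python
import numpy as np

# Scaling step of the proof: with D = diag(1, delta, ..., delta^(n-1)),
# the matrix C = D^-1 B D has ||C||_inf <= rho(B) + eps.
B = np.array([[1.0, 5.0, 7.0],
              [0.0, 2.0, 9.0],
              [0.0, 0.0, 3.0]])
n = B.shape[0]
eps = 1e-3
b = abs(B[np.triu_indices(n)]).max()       # largest |b_jk| for j <= k
delta = min(1.0, eps / ((n - 1) * b))
D = np.diag(delta ** np.arange(n))
C = np.linalg.inv(D) @ B @ D               # entries delta^(k-j) * b_jk

rho = abs(np.diag(B)).max()                # spectral radius of B
row_sum_norm = abs(C).sum(axis=1).max()    # ||C||_inf (max row sum)
print(row_sum_norm, rho + eps)
```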
|
Yes
|
Theorem 3.34 Every convergent sequence is a Cauchy sequence.
|
Proof. Let \( {x}_{n} \rightarrow x, n \rightarrow \infty \) . Then, for \( \varepsilon > 0 \) there exists \( N\left( \varepsilon \right) \in \mathbb{N} \) such that \( \begin{Vmatrix}{{x}_{n} - x}\end{Vmatrix} < \varepsilon /2 \) for all \( n \geq N\left( \varepsilon \right) \) . Now the triangle inequality yields\n\n\[ \begin{Vmatrix}{{x}_{n} - {x}_{m}}\end{Vmatrix} = \begin{Vmatrix}{{x}_{n} - x + x - {x}_{m}}\end{Vmatrix} \leq \begin{Vmatrix}{{x}_{n} - x}\end{Vmatrix} + \begin{Vmatrix}{x - {x}_{m}}\end{Vmatrix} < \varepsilon \]\n\nfor all \( n, m \geq N\left( \varepsilon \right) \) .
|
Yes
|
Example 3.36 The linear space \( C\left\lbrack {a, b}\right\rbrack \) furnished with the maximum norm\n\n\[ \parallel f{\parallel }_{\infty } \mathrel{\text{:=}} \mathop{\max }\limits_{{x \in \left\lbrack {a, b}\right\rbrack }}\left| {f\left( x\right) }\right| \]\n\nis a Banach space.
|
Proof. The norm axioms (N1) and (N2) are trivially satisfied. The triangle inequality (N3) follows from\n\n\[ \parallel f + g{\parallel }_{\infty } = \mathop{\max }\limits_{{x \in \left\lbrack {a, b}\right\rbrack }}\left| {\left( {f + g}\right) \left( x\right) }\right| = \left| {\left( {f + g}\right) \left( {x}_{0}\right) }\right| \leq \left| {f\left( {x}_{0}\right) }\right| + \left| {g\left( {x}_{0}\right) }\right| \]\n\n\[ \leq \mathop{\max }\limits_{{x \in \left\lbrack {a, b}\right\rbrack }}\left| {f\left( x\right) }\right| + \mathop{\max }\limits_{{x \in \left\lbrack {a, b}\right\rbrack }}\left| {g\left( x\right) }\right| = \parallel f{\parallel }_{\infty } + \parallel g{\parallel }_{\infty } \]\n\nfor some \( {x}_{0} \in \left\lbrack {a, b}\right\rbrack \) at which the maximum of \( \left| {f + g}\right| \) is attained. Since the condition \( {\begin{Vmatrix}{f}_{n} - f\end{Vmatrix}}_{\infty } < \varepsilon \) is equivalent to \( \left| {{f}_{n}\left( x\right) - f\left( x\right) }\right| < \varepsilon \) for all \( x \in \left\lbrack {a, b}\right\rbrack \), convergence of a sequence of continuous functions in the maximum norm is equivalent to uniform convergence on \( \left\lbrack {a, b}\right\rbrack \) . Since the Cauchy criterion is sufficient for uniform convergence of a sequence of continuous functions to a continuous limit function, the space \( C\left\lbrack {a, b}\right\rbrack \) is complete with respect to the maximum norm.
|
Yes
|
Example 3.37 The linear space \( C\left\lbrack {a, b}\right\rbrack \) equipped with the \( {L}_{1} \) norm\n\n\[ \parallel f{\parallel }_{1} \mathrel{\text{:=}} {\int }_{a}^{b}\left| {f\left( x\right) }\right| {dx} \]\n\nis not complete.
|
Proof. The norm axioms are trivially satisfied. Without loss of generality we take \( \left\lbrack {a, b}\right\rbrack = \left\lbrack {0,2}\right\rbrack \) and choose\n\n\[ {f}_{n}\left( x\right) \mathrel{\text{:=}} \left\{ \begin{array}{ll} {x}^{n}, & 0 \leq x \leq 1 \\ 1, & 1 \leq x \leq 2 \end{array}\right. \]\n\nThen for \( m > n \) we have that\n\n\[ {\begin{Vmatrix}{f}_{n} - {f}_{m}\end{Vmatrix}}_{1} = {\int }_{0}^{1}\left( {{x}^{n} - {x}^{m}}\right) {dx} \leq \frac{1}{n + 1} \rightarrow 0,\;n \rightarrow \infty ,\]\n\nand therefore \( \left( {f}_{n}\right) \) is a Cauchy sequence. Now we assume that \( \left( {f}_{n}\right) \) converges with respect to the \( {L}_{1} \) norm to a continuous function \( f \) ; i.e.,\n\n\[ {\begin{Vmatrix}{f}_{n} - f\end{Vmatrix}}_{1} \rightarrow 0,\;n \rightarrow \infty . \]\n\nThen\n\n\[ {\int }_{0}^{1}\left| {f\left( x\right) }\right| {dx} \leq {\int }_{0}^{1}\left| {f\left( x\right) - {x}^{n}}\right| {dx} + {\int }_{0}^{1}{x}^{n}{dx} \leq {\begin{Vmatrix}f - {f}_{n}\end{Vmatrix}}_{1} + \frac{1}{n + 1} \rightarrow 0 \]\nfor \( n \rightarrow \infty \), whence \( f\left( x\right) = 0 \) follows for \( 0 \leq x \leq 1 \) . Furthermore, we have\n\n\[ {\int }_{1}^{2}\left| {f\left( x\right) - 1}\right| {dx} = {\int }_{1}^{2}\left| {f\left( x\right) - {f}_{n}\left( x\right) }\right| {dx} \leq {\begin{Vmatrix}f - {f}_{n}\end{Vmatrix}}_{1} \rightarrow 0,\;n \rightarrow \infty . \]\n\nThis implies that \( f\left( x\right) = 1 \) for \( 1 \leq x \leq 2 \), and we have a contradiction, since \( f \) is continuous.
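The Cauchy estimate above can be verified numerically with a simple Riemann sum (an illustrative sketch; the grid size and index pairs are ad hoc choices):

```python
import numpy as np

# On [0, 2]: f_n(x) = x^n on [0, 1] and f_n(x) = 1 on [1, 2].
# The L1 distances ||f_n - f_m||_1 equal 1/(n+1) - 1/(m+1) for m > n,
# so (f_n) is a Cauchy sequence in the L1 norm, although its pointwise
# limit is a discontinuous step function.
def f(n, x):
    return np.where(x <= 1.0, x**n, 1.0)

x = np.linspace(0.0, 2.0, 200001)
dx = x[1] - x[0]
errs = []
for n, m in [(10, 20), (100, 200)]:
    dist = np.sum(np.abs(f(n, x) - f(m, x))) * dx   # Riemann sum for the L1 norm
    exact = 1.0 / (n + 1) - 1.0 / (m + 1)
    errs.append(abs(dist - exact))
    print(n, m, dist, exact)
```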
|
Yes
|
Example 3.38 The linear space \( C\left\lbrack {a, b}\right\rbrack \) equipped with the \( {L}_{2} \) norm\n\n\[ \n\parallel f{\parallel }_{2} \mathrel{\text{:=}} {\left( {\int }_{a}^{b}{\left| f\left( x\right) \right| }^{2}dx\right) }^{1/2} \n\]\n\nis not complete.
|
Proof. The norm is generated by the scalar product\n\n\[ \n\left( {f, g}\right) \mathrel{\text{:=}} {\int }_{a}^{b}f\left( x\right) g\left( x\right) {dx}. \n\]\n\nConsidering the same sequence as in Example 3.37, it can be seen that \( C\left\lbrack {a, b}\right\rbrack \) also is not complete with respect to the \( {L}_{2} \) norm. Again note that the space \( {L}^{2}\left\lbrack {a, b}\right\rbrack \) of measurable and Lebesgue square-integrable real-valued functions is complete with respect to the \( {L}_{2} \) norm (see \( \left\lbrack {5,{51},{59}}\right\rbrack \) ).
|
Yes
|
Theorem 3.39 Each finite-dimensional normed space is a Banach space.
|
Proof. Let \( X \) be finite-dimensional with basis \( {u}_{1},\ldots ,{u}_{n} \) and assume that \( \left( {x}_{\nu }\right) \) is a Cauchy sequence in \( X \) . We represent\n\n\[ \n{x}_{\nu } = \mathop{\sum }\limits_{{j = 1}}^{n}{\alpha }_{j\nu }{u}_{j} \n\] \n\nand recall from Theorem 3.8 that there exists \( C > 0 \) such that \n\n\[ \n\mathop{\max }\limits_{{j = 1,\ldots, n}}\left| {{\alpha }_{j\nu } - {\alpha }_{j\mu }}\right| \leq C\begin{Vmatrix}{{x}_{\nu } - {x}_{\mu }}\end{Vmatrix} \n\] \n\nfor all \( \nu ,\mu \in \mathbb{N} \) . Hence for \( j = 1,\ldots, n \) the \( \left( {\alpha }_{j\nu }\right) \) are Cauchy sequences in \( \mathbb{C} \) . Therefore, there exist \( {\alpha }_{1},\ldots ,{\alpha }_{n} \) such that \( {\alpha }_{j\nu } \rightarrow {\alpha }_{j},\nu \rightarrow \infty \), for \( j = 1,\ldots, n \), since the Cauchy criterion is sufficient for convergence in \( \mathbb{C} \) . Then we have convergence, \n\n\[ \n{x}_{\nu } \rightarrow x \mathrel{\text{:=}} \mathop{\sum }\limits_{{j = 1}}^{n}{\alpha }_{j}{u}_{j} \in X,\;\nu \rightarrow \infty , \n\] \n\nand the proof is finished.
|
Yes
|
Theorem 3.44 Each contraction operator has at most one fixed point.
|
Proof. Assume that \( x \) and \( y \) are two different fixed points of the contraction operator \( A \) . Then\n\n\[ 0 \neq \parallel x - y\parallel = \parallel {Ax} - {Ay}\parallel \leq q\parallel x - y\parallel \]\n\nwhence \( 1 \leq q \) follows. This is a contradiction to the fact that \( A \) is a contraction operator.
|
Yes
|
Theorem 3.45 (Banach) Let \( U \) be a complete subset of a normed space \( X \) and let \( A : U \rightarrow U \) be a contraction operator. Then \( A \) has a unique fixed point.
|
Proof. Starting from an arbitrary element \( {x}_{0} \in U \) we define a sequence \( \left( {x}_{n}\right) \) in \( U \) by the recursion\n\n\[ \n{x}_{n + 1} \mathrel{\text{:=}} A{x}_{n},\;n = 0,1,2,\ldots \n\]\n\nThen we have\n\n\[ \n\begin{Vmatrix}{{x}_{n + 1} - {x}_{n}}\end{Vmatrix} = \begin{Vmatrix}{A{x}_{n} - A{x}_{n - 1}}\end{Vmatrix} \leq q\begin{Vmatrix}{{x}_{n} - {x}_{n - 1}}\end{Vmatrix}, \n\]\n\nand from this we deduce by induction that\n\n\[ \n\begin{Vmatrix}{{x}_{n + 1} - {x}_{n}}\end{Vmatrix} \leq {q}^{n}\begin{Vmatrix}{{x}_{1} - {x}_{0}}\end{Vmatrix},\;n = 1,2,\ldots \n\]\n\nHence, for \( m > n \), by the triangle inequality and the geometric series it follows that\n\n\[ \n\begin{Vmatrix}{{x}_{n} - {x}_{m}}\end{Vmatrix} \leq \begin{Vmatrix}{{x}_{n} - {x}_{n + 1}}\end{Vmatrix} + \begin{Vmatrix}{{x}_{n + 1} - {x}_{n + 2}}\end{Vmatrix} + \cdots + \begin{Vmatrix}{{x}_{m - 1} - {x}_{m}}\end{Vmatrix} \n\]\n\n(3.12)\n\n\[ \n\leq \left( {{q}^{n} + {q}^{n + 1} + \cdots + {q}^{m - 1}}\right) \begin{Vmatrix}{{x}_{1} - {x}_{0}}\end{Vmatrix} \leq \frac{{q}^{n}}{1 - q}\begin{Vmatrix}{{x}_{1} - {x}_{0}}\end{Vmatrix}. \n\]\n\nSince \( {q}^{n} \rightarrow 0, n \rightarrow \infty \), this implies that \( \left( {x}_{n}\right) \) is a Cauchy sequence, and therefore because \( U \) is complete there exists an element \( x \in U \) such that \( {x}_{n} \rightarrow x, n \rightarrow \infty \) . Finally, the continuity of \( A \) from Remark 3.42 yields\n\n\[ \nx = \mathop{\lim }\limits_{{n \rightarrow \infty }}{x}_{n + 1} = \mathop{\lim }\limits_{{n \rightarrow \infty }}A{x}_{n} = {Ax} \n\]\n\ni.e., \( x \) is a fixed point of \( A \) . That this fixed point is unique we have already settled by Theorem 3.44.
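The successive approximations of the proof can be sketched numerically. This is a minimal illustration, not from the text: the contraction (the cosine on \( \left\lbrack 0,1\right\rbrack \), with \( q = \sin 1 < 1 \)), the starting point, and the tolerance are all assumed choices.

```python
# Successive approximations x_{n+1} = A(x_n) for a contraction A.
# Illustrative assumption: A(x) = cos(x) on [0, 1] is a contraction,
# since |A'(x)| = |sin(x)| <= sin(1) < 1; its unique fixed point
# satisfies x = cos(x).
import math

def fixed_point(A, x0, tol=1e-14, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_new = A(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

x = fixed_point(math.cos, 0.5)
print(x, abs(x - math.cos(x)))  # fixed point and its residual
```

The iteration converges for every starting point in \( \left\lbrack 0,1\right\rbrack \), in accordance with the theorem.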
|
Yes
|
Theorem 3.46 Let \( A \) be a contraction operator with contraction constant \( q \) mapping a complete subset \( U \) of a normed space \( X \) into itself. Then the successive approximations\n\n\[ \n{x}_{n + 1} \mathrel{\text{:=}} A{x}_{n},\;n = 0,1,2,\ldots ,\n\]\n\nwith arbitrary \( {x}_{0} \in U \) converge to the unique fixed point \( x \) of \( A \) . We have the a priori error estimate\n\n\[ \n\begin{Vmatrix}{{x}_{n} - x}\end{Vmatrix} \leq \frac{{q}^{n}}{1 - q}\begin{Vmatrix}{{x}_{1} - {x}_{0}}\end{Vmatrix}\n\]\nand the a posteriori error estimate\n\n\[ \n\begin{Vmatrix}{{x}_{n} - x}\end{Vmatrix} \leq \frac{q}{1 - q}\begin{Vmatrix}{{x}_{n} - {x}_{n - 1}}\end{Vmatrix}\n\]\n\nfor all \( n \in \mathbb{N} \) .
|
Proof. The a priori error estimate follows from (3.12) by passing to the limit \( m \rightarrow \infty \) . The a posteriori estimate follows from the a priori estimate applied with starting element \( {x}_{0} = {x}_{n - 1} \) .
|
Yes
|
Theorem 3.48 Let \( B : X \rightarrow X \) be a bounded linear operator on a Banach space \( X \) with \( \parallel B\parallel < 1 \), and let \( I : X \rightarrow X \) denote the identity operator. Then \( I - B \) is bijective; i.e., for each \( z \in X \) the equation\n\n\[ x - {Bx} = z \]\n\nhas a unique solution \( x \in X \) . The successive approximations\n\n\[ {x}_{n + 1} \mathrel{\text{:=}} B{x}_{n} + z,\;n = 0,1,2,\ldots ,\]\n\nwith arbitrary \( {x}_{0} \in X \) converge to this solution, and we have the a priori\n\nestimate\n\[ \begin{Vmatrix}{{x}_{n} - x}\end{Vmatrix} \leq \frac{\parallel B{\parallel }^{n}}{1 - \parallel B\parallel }\begin{Vmatrix}{{x}_{1} - {x}_{0}}\end{Vmatrix} \]\n\nand the a posteriori estimate\n\n\[ \begin{Vmatrix}{{x}_{n} - x}\end{Vmatrix} \leq \frac{\parallel B\parallel }{1 - \parallel B\parallel }\begin{Vmatrix}{{x}_{n} - {x}_{n - 1}}\end{Vmatrix} \]\n\nfor all \( n \in \mathbb{N} \) . Furthermore, the inverse operator \( {\left( I - B\right) }^{-1} \) is bounded by\n\n\[ \begin{Vmatrix}{\left( I - B\right) }^{-1}\end{Vmatrix} \leq \frac{1}{1 - \parallel B\parallel }.\]
|
Proof. For fixed, but arbitrary, \( z \in X \) we define the operator \( A : X \rightarrow X \) by\n\n\[ {Ax} \mathrel{\text{:=}} {Bx} + z,\;x \in X. \]\n\nThen we have\n\n\[ \parallel {Ax} - {Ay}\parallel = \parallel B\left( {x - y}\right) \parallel \leq \parallel B\parallel \parallel x - y\parallel \]\n\nfor all \( x, y \in X \) ; i.e., \( A \) is a contraction with contraction number \( q = \parallel B\parallel \) . Now the statements of the theorem can be deduced from Theorem 3.46.\n\nWith the starting element \( {x}_{0} = z \) the successive approximations lead to\n\n\[ {x}_{n} = \mathop{\sum }\limits_{{k = 0}}^{n}{B}^{k}z \]\n\nwith the iterated operators \( {B}^{k} : X \rightarrow X \) defined recursively by \( {B}^{0} \mathrel{\text{:=}} I \) and \( {B}^{k} \mathrel{\text{:=}} B{B}^{k - 1} \) for \( k \in \mathbb{N} \) . Hence, in view of Remark 3.25, we have\n\n\[ \begin{Vmatrix}{x}_{n}\end{Vmatrix} \leq \mathop{\sum }\limits_{{k = 0}}^{n}\begin{Vmatrix}{{B}^{k}z}\end{Vmatrix} \leq \mathop{\sum }\limits_{{k = 0}}^{n}\parallel B{\parallel }^{k}\parallel z\parallel \leq \frac{\parallel z\parallel }{1 - \parallel B\parallel }, \]\n\nand therefore, since \( {x}_{n} \rightarrow {\left( I - B\right) }^{-1}z, n \rightarrow \infty \), it follows that\n\n\[ \begin{Vmatrix}{{\left( I - B\right) }^{-1}z}\end{Vmatrix} \leq \frac{\parallel z\parallel }{1 - \parallel B\parallel } \]\n\nfor all \( z \in X \) .
|
Yes
|
Theorem 3.50 Let \( U \) be a finite-dimensional subspace of a normed space \( X \) . Then for every element in \( X \) there exists a best approximation with respect to \( U \) .
|
Proof. Let \( w \in X \) and choose a minimizing sequence \( \left( {u}_{n}\right) \) for \( w \) ; i.e., \( {u}_{n} \in U \) satisfies\n\n\[\n\begin{Vmatrix}{w - {u}_{n}}\end{Vmatrix} \rightarrow d \mathrel{\text{:=}} \mathop{\inf }\limits_{{u \in U}}\parallel w - u\parallel ,\;n \rightarrow \infty .\n\]\n\nBecause of \( \begin{Vmatrix}{u}_{n}\end{Vmatrix} \leq \begin{Vmatrix}{w - {u}_{n}}\end{Vmatrix} + \parallel w\parallel \) the sequence \( \left( {u}_{n}\right) \) is bounded. By Theorem 3.11 the sequence \( \left( {u}_{n}\right) \) contains a convergent subsequence \( \left( {u}_{n\left( \ell \right) }\right) \) with limit \( v \in U \) . Then\n\n\[\n\parallel w - v\parallel = \mathop{\lim }\limits_{{\ell \rightarrow \infty }}\begin{Vmatrix}{w - {u}_{n\left( \ell \right) }}\end{Vmatrix} = d\n\]\n\ncompletes the proof.
|
Yes
|
Theorem 3.51 Let \( U \) be a linear subspace of a pre-Hilbert space \( X \) . An element \( v \) is a best approximation to \( w \in X \) with respect to \( U \) if and only if\n\n\[ \left( {w - v, u}\right) = 0 \]\n\n(3.13)\n\nfor all \( u \in U \), i.e., if and only if \( w - v \bot U \) . To each \( w \in X \) there exists at most one best approximation with respect to \( U \) .
|
Proof. We begin by noting the equality\n\n\[ \parallel w - u{\parallel }^{2} = \parallel w - v{\parallel }^{2} + 2\operatorname{Re}\left( {w - v, v - u}\right) + \parallel v - u{\parallel }^{2}, \]\n\n(3.14)\n\nwhich is valid for all \( u, v \in U \) . From this, sufficiency of the condition (3.13) is obvious, since \( U \) is a linear subspace.\n\nTo establish the necessity we assume that \( v \) is a best approximation and \( \left( {w - v,{u}_{0}}\right) \neq 0 \) for some \( {u}_{0} \in U \) . Then, since \( U \) is a linear subspace, we may assume that \( \left( {w - v,{u}_{0}}\right) \in \mathbb{R} \) . Choosing\n\n\[ u = v + \frac{\left( w - v,{u}_{0}\right) }{{\begin{Vmatrix}{u}_{0}\end{Vmatrix}}^{2}}{u}_{0} \]\n\nfrom (3.14) we arrive at\n\n\[ \parallel w - u{\parallel }^{2} = \parallel w - v{\parallel }^{2} - \frac{{\left( w - v,{u}_{0}\right) }^{2}}{{\begin{Vmatrix}{u}_{0}\end{Vmatrix}}^{2}} < \parallel w - v{\parallel }^{2}, \]\n\nwhich contradicts the fact that \( v \) is a best approximation of \( w \) .\n\nFinally, assume that \( {v}_{1} \) and \( {v}_{2} \) are best approximations. Then from (3.13) it follows that \( \left( {w - {v}_{1},{v}_{1} - {v}_{2}}\right) = 0 = \left( {w - {v}_{2},{v}_{1} - {v}_{2}}\right) \) . This implies \( \left( {{v}_{1} - {v}_{2},{v}_{1} - {v}_{2}}\right) = 0 \), whence \( {v}_{1} = {v}_{2} \) follows.
|
Yes
|
Theorem 3.52 Let \( U \) be a complete linear subspace of a pre-Hilbert space \( X \) . Then to each element \( w \in X \) there exists a unique best approximation with respect to \( U \) . The operator \( P : X \rightarrow U \) mapping \( w \in X \) onto its best approximation is a bounded linear operator with the properties\n\n\[ \n{P}^{2} = P\;\text{ and }\;\parallel P\parallel = 1.\n\]\n\nIt is called the orthogonal projection from \( X \) onto \( U \) .
|
Proof. Choose a sequence \( \left( {u}_{n}\right) \) with\n\n\[ \n{\begin{Vmatrix}w - {u}_{n}\end{Vmatrix}}^{2} \leq {d}^{2} + \frac{1}{n},\;n \in \mathbb{N},\n\]\n\n(3.15)\n\nwhere \( d \mathrel{\text{:=}} \mathop{\inf }\limits_{{u \in U}}\parallel w - u\parallel \) . Then\n\n\[ \n\parallel \left( {w - {u}_{n}}\right) + \left( {w - {u}_{m}}\right) {\parallel }^{2} + \parallel {u}_{n} - {u}_{m}{\parallel }^{2} = 2\parallel w - {u}_{n}{\parallel }^{2} + 2\parallel w - {u}_{m}{\parallel }^{2}\n\]\n\n\[ \n\leq 4{d}^{2} + \frac{2}{n} + \frac{2}{m}\n\]\n\nfor all \( n, m \in \mathbb{N} \), and since \( \frac{1}{2}\left( {{u}_{n} + {u}_{m}}\right) \in U \), it follows that\n\n\[ \n{\begin{Vmatrix}{u}_{n} - {u}_{m}\end{Vmatrix}}^{2} \leq 4{d}^{2} + \frac{2}{n} + \frac{2}{m} - 4{\begin{Vmatrix}w - \frac{1}{2}\left( {u}_{n} + {u}_{m}\right) \end{Vmatrix}}^{2} \leq \frac{2}{n} + \frac{2}{m}.\n\]\n\nHence, \( \left( {u}_{n}\right) \) is a Cauchy sequence, and since \( U \) is complete, there exists an element \( v \in U \) such that \( {u}_{n} \rightarrow v, n \rightarrow \infty \) . Passing to the limit \( n \rightarrow \infty \) in (3.15) shows that \( v \) is a best approximation of \( w \) with respect to \( U \) . Uniqueness of the best approximation follows from Theorem 3.51.\n\nTrivially, we have \( {Pu} = u \) for all \( u \in U \), and this implies \( {P}^{2} = P \) . From (3.13) it can be deduced that \( P \) is a linear operator and that\n\n\[ \n\parallel w{\parallel }^{2} = \parallel {Pw}{\parallel }^{2} + \parallel w - {Pw}{\parallel }^{2} \geq \parallel {Pw}{\parallel }^{2}\n\]\n\nfor all \( w \in X \) . Therefore, \( P \) is bounded with \( \parallel P\parallel \leq 1 \) . From Remark 3.25 and \( {P}^{2} = P \) it follows that \( \parallel P\parallel \geq 1 \), which concludes the proof.
|
Yes
|
Corollary 3.53 Let \( U \) be a finite-dimensional linear subspace of a pre-Hilbert space \( X \) with basis \( {u}_{1},\ldots ,{u}_{n} \) . The linear combination\n\n\[ v = \mathop{\sum }\limits_{{k = 1}}^{n}{\alpha }_{k}{u}_{k} \]\n\nis the best approximation to \( w \in X \) with respect to \( U \) if and only if the coefficients \( {\alpha }_{1},\ldots ,{\alpha }_{n} \) satisfy the normal equations\n\n\[ \mathop{\sum }\limits_{{k = 1}}^{n}{\alpha }_{k}\left( {{u}_{k},{u}_{j}}\right) = \left( {w,{u}_{j}}\right) ,\;j = 1,\ldots, n. \]\n\n(3.16)
|
Proof. The normal equations (3.16) obviously are equivalent to (3.13).
|
No
|
Corollary 3.54 Let \( U \) be a finite-dimensional linear subspace of a pre-Hilbert space \( X \) with orthonormal basis \( {u}_{1},\ldots ,{u}_{n} \) . Then the orthogonal projection operator is given by\n\n\[ \n{Pw} = \mathop{\sum }\limits_{{k = 1}}^{n}\left( {w,{u}_{k}}\right) {u}_{k},\;w \in X.\n\]
|
Proof. This is trivial from either the orthogonality condition of Theorem 3.51 or the normal equations of Corollary 3.53.
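The projection formula of Corollary 3.54 can be sketched in \( {\mathbb{R}}^{3} \); the orthonormal vectors and the element \( w \) below are assumed for illustration only.

```python
# Orthogonal projection onto a subspace with orthonormal basis u_1, u_2
# via Pw = (w, u_1) u_1 + (w, u_2) u_2 (Corollary 3.54).
import numpy as np

u1 = np.array([1.0, 0.0, 0.0])
u2 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)   # assumed orthonormal pair
w = np.array([2.0, 3.0, 5.0])

Pw = (w @ u1) * u1 + (w @ u2) * u2

# The residual w - Pw is orthogonal to the subspace (Theorem 3.51).
print(Pw, (w - Pw) @ u1, (w - Pw) @ u2)
```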
|
No
|
Theorem 4.1 Let \( B \) be an \( n \times n \) matrix. Then the successive approximations\n\n\[ \n{x}_{\nu + 1} \mathrel{\text{:=}} B{x}_{\nu } + z,\;\nu = 0,1,2,\ldots ,\n\]\n\nconverge for each \( z \in {\mathbb{C}}^{n} \) and each \( {x}_{0} \in {\mathbb{C}}^{n} \) if and only if\n\n\[ \n\rho \left( B\right) < 1\n\]\n\nfor the spectral radius of \( B \) .
|
Proof. If \( \rho \left( B\right) < 1 \), then by Theorem 3.32 there exists a norm \( \parallel \cdot \parallel \) on \( {\mathbb{C}}^{n} \) such that \( \parallel B\parallel < 1 \) . Now convergence follows from Theorem 3.48 together with the equivalence of all norms on \( {\mathbb{C}}^{n} \) according to Theorem 3.8.\n\nConversely, suppose that convergence holds. If we assume that \( \rho \left( B\right) \geq 1 \), then there exists an eigenvalue \( \lambda \) of \( B \) with \( \left| \lambda \right| \geq 1 \) . Let \( x \) denote an associated eigenvector. Then the successive approximations for the right-hand side \( z = x \) and the starting element \( {x}_{0} = x \) lead to the divergent sequence \( {x}_{\nu } = \left( {\mathop{\sum }\limits_{{k = 0}}^{\nu }{\lambda }^{k}}\right) x \) . This is a contradiction.
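A small numerical check of the criterion \( \rho \left( B\right) < 1 \); the \( 2 \times 2 \) matrix \( B \) and the right-hand side \( z \) are assumed for illustration.

```python
# Convergence of x_{v+1} = B x_v + z is governed by rho(B) < 1
# (Theorem 4.1); the limit solves (I - B) x = z.
import numpy as np

B = np.array([[0.5, 0.3],
              [0.2, 0.4]])        # assumed example with rho(B) = 0.7
z = np.array([1.0, 2.0])

rho = max(abs(np.linalg.eigvals(B)))
x = np.zeros(2)
for _ in range(200):
    x = B @ x + z

exact = np.linalg.solve(np.eye(2) - B, z)
print(rho, np.linalg.norm(x - exact))
```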
|
Yes
|
Theorem 4.2 Assume that the matrix \( A = \left( {a}_{jk}\right) \) satisfies\n\n\[ \n{q}_{\infty } \mathrel{\text{:=}} \mathop{\max }\limits_{{j = 1,\ldots, n}}\mathop{\sum }\limits_{\substack{{k = 1} \\ {k \neq j} }}^{n}\left| \frac{{a}_{jk}}{{a}_{jj}}\right| < 1 \n\]\n\n(4.1)\n\nor\n\n\[ \n{q}_{1} \mathrel{\text{:=}} \mathop{\max }\limits_{{k = 1,\ldots, n}}\mathop{\sum }\limits_{\substack{{j = 1} \\ {j \neq k} }}^{n}\left| \frac{{a}_{jk}}{{a}_{jj}}\right| < 1 \n\]\n\n(4.2)\n\nor\n\n\[ \n{q}_{2} \mathrel{\text{:=}} {\left( \mathop{\sum }\limits_{\substack{{j, k = 1} \\ {j \neq k} }}^{n}{\left| \frac{{a}_{jk}}{{a}_{jj}}\right| }^{2}\right) }^{1/2} < 1 \n\]\n\n(4.3)\n\nThen the Jacobi method, or method of simultaneous displacements,\n\n\[ \n{x}_{\nu + 1, j} = - \mathop{\sum }\limits_{\substack{{k = 1} \\ {k \neq j} }}^{n}\frac{{a}_{jk}}{{a}_{jj}}{x}_{\nu, k} + \frac{{y}_{j}}{{a}_{jj}},\;j = 1,\ldots, n,\;\nu = 0,1,2,\ldots , \n\]\n\nconverges for each \( y \in {\mathbb{C}}^{n} \) and each \( {x}_{0} \in {\mathbb{C}}^{n} \) to the unique solution of \( {Ax} = y \) (in any norm on \( {\mathbb{C}}^{n} \) ). For \( \mu = 1,2,\infty \), if \( {q}_{\mu } < 1 \), we have the a priori error estimate\n\n\[ \n{\begin{Vmatrix}{x}_{\nu } - x\end{Vmatrix}}_{\mu } \leq \frac{{q}_{\mu }^{\nu }}{1 - {q}_{\mu }}{\begin{Vmatrix}{x}_{1} - {x}_{0}\end{Vmatrix}}_{\mu } \n\]\n\nand the a posteriori error estimate\n\n\[ \n{\begin{Vmatrix}{x}_{\nu } - x\end{Vmatrix}}_{\mu } \leq \frac{{q}_{\mu }}{1 - {q}_{\mu }}{\begin{Vmatrix}{x}_{\nu } - {x}_{\nu - 1}\end{Vmatrix}}_{\mu } \n\]\n\nfor all \( \nu \in \mathbb{N} \) .
|
Proof. The Jacobi matrix \( - {D}^{-1}\left( {{A}_{L} + {A}_{R}}\right) \) has diagonal entries zero and off-diagonal entries \( - {a}_{jk}/{a}_{jj} \) . Hence by Theorem 3.26 we have\n\n\[ \n{\begin{Vmatrix}-{D}^{-1}\left( {A}_{L} + {A}_{R}\right) \end{Vmatrix}}_{\infty } = {q}_{\infty } \n\]\n\n\[ \n{\begin{Vmatrix}-{D}^{-1}\left( {A}_{L} + {A}_{R}\right) \end{Vmatrix}}_{1} = {q}_{1} \n\]\n\n\[ \n{\begin{Vmatrix}-{D}^{-1}\left( {A}_{L} + {A}_{R}\right) \end{Vmatrix}}_{2} \leq {q}_{2} \n\]\n\nNow the assertion follows from Theorem 3.48.
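A componentwise sketch of the Jacobi iteration of Theorem 4.2; the \( 3 \times 3 \) system below is an assumed strictly row-diagonally dominant example, so \( {q}_{\infty } < 1 \) and convergence is guaranteed.

```python
# Jacobi method (simultaneous displacements) for Ax = y: every
# component of the new iterate uses only the old iterate.
import numpy as np

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [0.0, 1.0, 3.0]])   # assumed example, q_infty = 0.6
y = np.array([6.0, 8.0, 4.0])

def jacobi(A, y, x0, num_iter):
    x = x0.copy()
    D = np.diag(A)
    for _ in range(num_iter):
        x = (y - (A @ x - D * x)) / D   # subtract off-diagonal part only
    return x

x = jacobi(A, y, np.zeros(3), 100)
print(np.linalg.norm(A @ x - y))
```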
|
Yes
|
Theorem 4.3 Assume that the matrix \( A = \left( {a}_{jk}\right) \) fulfills the Sassenfeld criterion\n\n\[ p \mathrel{\text{:=}} \mathop{\max }\limits_{{j = 1,\ldots, n}}{p}_{j} < 1 \]\n\nwhere the numbers \( {p}_{j} \) are recursively defined by\n\n\[ {p}_{1} \mathrel{\text{:=}} \mathop{\sum }\limits_{{k = 2}}^{n}\left| \frac{{a}_{1k}}{{a}_{11}}\right| ,\;{p}_{j} \mathrel{\text{:=}} \mathop{\sum }\limits_{{k = 1}}^{{j - 1}}\left| \frac{{a}_{jk}}{{a}_{jj}}\right| {p}_{k} + \mathop{\sum }\limits_{{k = j + 1}}^{n}\left| \frac{{a}_{jk}}{{a}_{jj}}\right| ,\;j = 2,\ldots, n. \]\n\nThen the Gauss-Seidel method, or method of successive displacements,\n\n\[ {x}_{\nu + 1, j} = - \mathop{\sum }\limits_{{k = 1}}^{{j - 1}}\frac{{a}_{jk}}{{a}_{jj}}{x}_{\nu + 1, k} - \mathop{\sum }\limits_{{k = j + 1}}^{n}\frac{{a}_{jk}}{{a}_{jj}}{x}_{\nu, k} + \frac{{y}_{j}}{{a}_{jj}}, j = 1,\ldots, n,\nu = 0,1,2,\ldots ,\]\n\nconverges for each \( y \in {\mathbb{C}}^{n} \) and each \( {x}_{0} \in {\mathbb{C}}^{n} \) to the unique solution of \( {Ax} = y \) (in any norm on \( {\mathbb{C}}^{n} \) ). We have the a priori error estimate\n\n\[ {\begin{Vmatrix}{x}_{\nu } - x\end{Vmatrix}}_{\infty } \leq \frac{{p}^{\nu }}{1 - p}{\begin{Vmatrix}{x}_{1} - {x}_{0}\end{Vmatrix}}_{\infty } \]\n\nand the a posteriori error estimate\n\n\[ {\begin{Vmatrix}{x}_{\nu } - x\end{Vmatrix}}_{\infty } \leq \frac{p}{1 - p}{\begin{Vmatrix}{x}_{\nu } - {x}_{\nu - 1}\end{Vmatrix}}_{\infty } \]\n\nfor all \( \nu \in \mathbb{N} \) .
|
Proof. Consider the equation\n\n\[ \left( {D + {A}_{L}}\right) x = - {A}_{R}z \]\n\nfor \( z \in {\mathbb{C}}^{n} \) with \( \parallel z{\parallel }_{\infty } = 1 \), that is,\n\n\[ {x}_{j} = - \mathop{\sum }\limits_{{k = 1}}^{{j - 1}}\frac{{a}_{jk}}{{a}_{jj}}{x}_{k} - \mathop{\sum }\limits_{{k = j + 1}}^{n}\frac{{a}_{jk}}{{a}_{jj}}{z}_{k},\;j = 1,\ldots, n. \]\n\nBy induction, this implies that \( \left| {x}_{j}\right| \leq {p}_{j} \) for \( j = 1,\ldots, n \), and therefore \( \parallel x{\parallel }_{\infty } \leq p \) . Hence we have\n\n\[ {\begin{Vmatrix}{\left( D + {A}_{L}\right) }^{-1}{A}_{R}\end{Vmatrix}}_{\infty } \leq p \]\n\nand the assertion of the theorem follows from Theorem 3.48.
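The Gauss-Seidel iteration differs from the Jacobi iteration only in that new components are used as soon as they are available. A sketch with the same assumed test matrix, which is strictly row-diagonally dominant and hence satisfies the Sassenfeld criterion:

```python
# Gauss-Seidel method (successive displacements) for Ax = y.
import numpy as np

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [0.0, 1.0, 3.0]])   # assumed example
y = np.array([6.0, 8.0, 4.0])

def gauss_seidel(A, y, x0, num_iter):
    x = x0.copy()
    n = len(y)
    for _ in range(num_iter):
        for j in range(n):
            # x[:j] already holds the new components x_{v+1,k}, k < j
            s = A[j, :j] @ x[:j] + A[j, j + 1:] @ x[j + 1:]
            x[j] = (y[j] - s) / A[j, j]
    return x

x = gauss_seidel(A, y, np.zeros(3), 100)
print(np.linalg.norm(A @ x - y))
```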
|
Yes
|
Example 4.5 The tridiagonal matrix\n\n\[ A = \left( \begin{array}{rrrrrr} 2 & - 1 & & & & \\ - 1 & 2 & - 1 & & & \\ & - 1 & 2 & - 1 & & \\ & \cdot & \cdot & \cdot & \cdot & \cdot \\ & & & - 1 & 2 & - 1 \\ & & & & - 1 & 2 \end{array}\right) \] from Example 2.1 is not strictly row-diagonally dominant, but it satisfies the Sassenfeld criterion.
|
Proof. Obviously, \( {q}_{\infty } = 1 \) ; i.e.,(4.1) is not fulfilled. We have the recursion\n\n\[ {p}_{1} = \frac{1}{2},\;{p}_{j} = \frac{1}{2}{p}_{j - 1} + \frac{1}{2},\;j = 2,\ldots, n - 1,\;{p}_{n} = \frac{1}{2}{p}_{n - 1}. \]\n\nFrom this, by induction, it follows that\n\n\[ {p}_{j} = 1 - \frac{1}{{2}^{j}},\;j = 1,\ldots, n - 1,\;{p}_{n} = \frac{1}{2} - \frac{1}{{2}^{n}}. \]\n\nTherefore,\n\n\[ p = 1 - \frac{1}{{2}^{n - 1}} < 1 \]\n\nand this implies convergence of the Gauss-Seidel iterations by Theorem 4.3.
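The closed form \( p = 1 - {2}^{-\left( {n - 1}\right) } \) derived above can be checked by evaluating the Sassenfeld recursion directly; the size \( n \) below is an assumed choice.

```python
# Sassenfeld numbers p_j for the tridiagonal matrix of Example 4.5.
import numpy as np

n = 8                                   # assumed size
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

p = np.zeros(n)
for j in range(n):
    p[j] = (sum(abs(A[j, k] / A[j, j]) * p[k] for k in range(j))
            + sum(abs(A[j, k] / A[j, j]) for k in range(j + 1, n)))

print(p.max(), 1.0 - 2.0 ** (-(n - 1)))   # both equal 1 - 2^{-(n-1)}
```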
|
Yes
|
Assume that the Jacobi matrix \( B \mathrel{\text{:=}} - {D}^{-1}\left( {{A}_{L} + {A}_{R}}\right) \) has real eigenvalues and spectral radius less than one. Then the spectral radius of the iteration matrix\n\n\[ I - \omega {D}^{-1}A = \left( {1 - \omega }\right) I - \omega {D}^{-1}\left( {{A}_{L} + {A}_{R}}\right) \]\n\nfor the Jacobi method with relaxation becomes minimal for the relaxation parameter\n\n\[ {\omega }_{\mathrm{{opt}}} = \frac{2}{2 - {\lambda }_{\max } - {\lambda }_{\min }} \]\n\nand has spectral radius\n\n\[ \rho \left( {I - {\omega }_{\text{opt }}{D}^{-1}A}\right) = \frac{{\lambda }_{\max } - {\lambda }_{\min }}{2 - {\lambda }_{\max } - {\lambda }_{\min }}, \]\n\nwhere \( {\lambda }_{\min } \) and \( {\lambda }_{\max } \) denote the smallest and the largest eigenvalue of \( B \) , respectively. In the case \( {\lambda }_{\min } \neq - {\lambda }_{\max } \) the convergence of the Jacobi method with optimal relaxation parameter is faster than the convergence of the Jacobi method without relaxation.
|
Proof. For \( \omega > 0 \) the equation \( {Bu} = {\lambda u} \) is equivalent to\n\n\[ \left\lbrack {\left( {1 - \omega }\right) I + {\omega B}}\right\rbrack u = \left\lbrack {1 - \omega + {\omega \lambda }}\right\rbrack u. \]\n\nHence the eigenvalues \( \lambda \) of \( B \) correspond to the eigenvalues \( 1 - \omega + {\omega \lambda } \) of \( \left( {1 - \omega }\right) I + {\omega B} \) . Therefore, the eigenvalues of \( \left( {1 - \omega }\right) I + {\omega B} \) are real, and\n\nthe smallest eigenvalue of \( \left( {1 - \omega }\right) I + {\omega B} \) is given by \( 1 - \omega + \omega {\lambda }_{\min } \) and the largest by \( 1 - \omega + \omega {\lambda }_{\max } \) . Obviously, the spectral radius becomes minimal if the smallest and the largest eigenvalue are of opposite sign and have the same absolute value, i.e., if\n\n\[ 1 - {\omega }_{\mathrm{{opt}}} + {\omega }_{\mathrm{{opt}}}{\lambda }_{\min } = - 1 + {\omega }_{\mathrm{{opt}}} - {\omega }_{\mathrm{{opt}}}{\lambda }_{\max } \]\n\nFrom this, elementary algebra yields the optimal parameter \( {\omega }_{\text{opt }} \) and the spectral radius \( \rho \left( {I - {\omega }_{\text{opt }}{D}^{-1}A}\right) \) as stated in the theorem.
|
Yes
|
Theorem 4.11 (Kahan) A necessary condition for the SOR method to be convergent is that \( 0 < \omega < 2 \) .
|
Proof. Since the eigenvalues \( {\mu }_{1},\ldots ,{\mu }_{n} \) of \( B\left( \omega \right) \) are the zeros of the characteristic polynomial, they satisfy\n\n\[ \mathop{\prod }\limits_{{j = 1}}^{n}{\mu }_{j} = \det B\left( \omega \right) \]\n\n(where multiple eigenvalues are repeated according to their algebraic multiplicity). From this, by the multiplication rules for determinants and since \( D + \omega {A}_{L} \) and \( \left( {1 - \omega }\right) D - \omega {A}_{R} \) are triangular matrices, it follows that\n\n\[ \mathop{\prod }\limits_{{j = 1}}^{n}{\mu }_{j} = \det {\left( D + \omega {A}_{L}\right) }^{-1}\det \left\lbrack {\left( {1 - \omega }\right) D - \omega {A}_{R}}\right\rbrack = {\left( 1 - \omega \right) }^{n}. \]\n\nThis now implies\n\n\[ \rho \left\lbrack {B\left( \omega \right) }\right\rbrack \geq \left| {1 - \omega }\right| \]\n\nand from Theorem 4.1 we conclude the necessity of \( 0 < \omega < 2 \) for convergence.
|
Yes
|
Theorem 4.12 (Ostrowski) If \( A \) is Hermitian and positive definite, then the SOR method converges for all \( {x}_{0} \in {\mathbb{C}}^{n} \), all \( y \in {\mathbb{C}}^{n} \), and all \( 0 < \omega < 2 \) to the unique solution of \( {Ax} = y \) .
|
Proof. Let \( \mu \) be an eigenvalue of \( B\left( \omega \right) \) with eigenvector \( x \) ; i.e., \[ \left\lbrack {\left( {1 - \omega }\right) D - \omega {A}_{R}}\right\rbrack x = \mu \left( {D + \omega {A}_{L}}\right) x. \] With the aid of \[ \left( {2 - \omega }\right) D - {\omega A} - \omega \left( {{A}_{R} - {A}_{L}}\right) = 2\left\lbrack {\left( {1 - \omega }\right) D - \omega {A}_{R}}\right\rbrack \] and \[ \left( {2 - \omega }\right) D + {\omega A} - \omega \left( {{A}_{R} - {A}_{L}}\right) = 2\left\lbrack {D + \omega {A}_{L}}\right\rbrack \] we deduce that \[ \left\lbrack {\left( {2 - \omega }\right) D - {\omega A} - \omega \left( {{A}_{R} - {A}_{L}}\right) }\right\rbrack x = \mu \left\lbrack {\left( {2 - \omega }\right) D + {\omega A} - \omega \left( {{A}_{R} - {A}_{L}}\right) }\right\rbrack x. \] Taking the Euclidean scalar product with \( x \), it now follows that \[ \mu = \frac{\left( {2 - \omega }\right) d - {\omega a} + {i\omega s}}{\left( {2 - \omega }\right) d + {\omega a} + {i\omega s}} \] where we have set \[ a \mathrel{\text{:=}} \left( {{Ax}, x}\right) ,\;d \mathrel{\text{:=}} \left( {{Dx}, x}\right) ,\;s \mathrel{\text{:=}} i\left( {{A}_{R}x - {A}_{L}x, x}\right) . \] Since \( A \) is positive definite, we have \( a > 0 \) and \( d > 0 \), and since \( A \) is Hermitian, \( s \) is real. From \[ \left| {\left( {2 - \omega }\right) d - {\omega a}}\right| < \left| {\left( {2 - \omega }\right) d + {\omega a}}\right| \] for \( 0 < \omega < 2 \) we now can conclude that \( \left| \mu \right| < 1 \) for \( 0 < \omega < 2 \) . Hence convergence of the SOR method for \( 0 < \omega < 2 \) follows from Theorem 4.1. \( ▱ \)
|
Yes
|
Corollary 4.16 Under the assumptions of Theorem 4.15 the Gauss-Seidel method converges twice as fast as the Jacobi method.
|
Proof. From (4.8) we observe that \( \mu = {\lambda }^{2} \) for \( \omega = 1 \) ; i.e., we have\n\n\[ \rho \left\lbrack {B\left( 1\right) }\right\rbrack = {\left\{ \rho \left\lbrack -{D}^{-1}\left( {A}_{L} + {A}_{R}\right) \right\rbrack \right\} }^{2} \]\n\nfor the spectral radii of the Gauss-Seidel matrix \( B\left( 1\right) \) and the Jacobi matrix \( - {D}^{-1}\left( {{A}_{L} + {A}_{R}}\right) \) . Now the statement follows from the observation that by the a priori estimate of Theorem 3.48 the number \( N \) of iterations required for a desired accuracy is inversely proportional to the modulus of the logarithm of the spectral radius; i.e.,\n\n\[ \frac{N\left( \text{ Gauss-Seidel }\right) }{N\left( \text{ Jacobi }\right) } \approx \frac{\ln \rho \left\lbrack {-{D}^{-1}\left( {{A}_{L} + {A}_{R}}\right) }\right\rbrack }{\ln \rho \left\lbrack {B\left( 1\right) }\right\rbrack } = \frac{1}{2}, \]\n\nand this proves the assertion.
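The relation \( \rho \left\lbrack {B\left( 1\right) }\right\rbrack = {\rho }^{2} \) between the Gauss-Seidel and Jacobi spectral radii can be verified numerically for the tridiagonal matrix of Example 4.5; the size \( n \) is an assumed choice.

```python
# Spectral radii of the Jacobi and Gauss-Seidel iteration matrices
# for the tridiagonal matrix of Example 4.5.
import numpy as np

n = 10                                  # assumed size
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
D, AL, AR = np.diag(np.diag(A)), np.tril(A, -1), np.triu(A, 1)

B_jacobi = np.linalg.solve(-D, AL + AR)
B_gs = np.linalg.solve(D + AL, -AR)     # Gauss-Seidel matrix B(1)

rho_j = max(abs(np.linalg.eigvals(B_jacobi)))
rho_gs = max(abs(np.linalg.eigvals(B_gs)))
print(rho_j ** 2, rho_gs)               # should agree
```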
|
Yes
|
For the tridiagonal matrix \( A \) from Example 4.5 we have\n\n\[ \frac{N\left( \mathrm{{SOR}}\right) }{N\left( \mathrm{{Jacobi}}\right) } \approx \frac{\pi }{4\left( {n + 1}\right) } \] \n\nfor the optimal relaxation parameter.
|
Proof. Using the trigonometric addition theorem\n\n\[ \frac{1}{2}\sin \frac{{\pi j}\left( {k - 1}\right) }{n + 1} + \frac{1}{2}\sin \frac{{\pi j}\left( {k + 1}\right) }{n + 1} = \cos \frac{\pi j}{n + 1}\sin \frac{\pi jk}{n + 1}, \] \n\nit can be seen that the Jacobi matrix\n\n\[ - {D}^{-1}\left( {{A}_{L} + {A}_{R}}\right) = \frac{1}{2}\left( \begin{matrix} 0 & 1 & & & & \\ 1 & 0 & 1 & & & \\ & 1 & 0 & 1 & & \\ & \cdot & \cdot & \cdot & \cdot & \cdot \\ & & & 1 & 0 & 1 \\ & & & & 1 & 0 \end{matrix}\right) \] \n\ncorresponding to Example 4.5 has the eigenvalues\n\n\[ {\lambda }_{j} = \cos \frac{\pi j}{n + 1},\;j = 1,\ldots, n \] \n\nand associated eigenvectors \( {v}_{j} \) with components\n\n\[ {v}_{j, k} = \sin \frac{\pi jk}{n + 1},\;k = 1,\ldots, n,\;j = 1,\ldots, n. \] \n\nHence,\n\n\[ \Lambda = \rho \left\lbrack {-{D}^{-1}\left( {{A}_{L} + {A}_{R}}\right) }\right\rbrack = \cos \frac{\pi }{n + 1} \approx 1 - \frac{{\pi }^{2}}{2{\left( n + 1\right) }^{2}} \] \n\nand\n\n\[ - \ln \rho \left\lbrack {-{D}^{-1}\left( {{A}_{L} + {A}_{R}}\right) }\right\rbrack \approx \frac{{\pi }^{2}}{2{\left( n + 1\right) }^{2}}. \] \n\nFrom Theorem 4.15 we obtain\n\n\[ {\omega }_{\text{opt }} = \frac{2}{1 + \sin \frac{\pi }{n + 1}} \] \n\nand\n\n\[ \rho \left\lbrack {B\left( {\omega }_{\text{opt }}\right) }\right\rbrack = \frac{1 - \sin \frac{\pi }{n + 1}}{1 + \sin \frac{\pi }{n + 1}} \approx 1 - \frac{2\pi }{n + 1}, \] \n\nwhence\n\n\[ - \ln \rho \left\lbrack {B\left( {\omega }_{\mathrm{{opt}}}\right) }\right\rbrack \approx \frac{2\pi }{n + 1} \] \n\nfollows. This concludes the proof.
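The gain from the optimal relaxation parameter can be observed directly by computing the spectral radius of the SOR iteration matrix \( B\left( \omega \right) = {\left( D + \omega {A}_{L}\right) }^{-1}\left\lbrack {\left( {1 - \omega }\right) D - \omega {A}_{R}}\right\rbrack \); the size \( n \) below is an assumed choice.

```python
# Optimal SOR parameter for the tridiagonal matrix of Example 4.5 and
# the resulting spectral radii, compared with Gauss-Seidel (omega = 1).
import numpy as np

n = 15                                  # assumed size
h = 1.0 / (n + 1)
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
D, AL, AR = np.diag(np.diag(A)), np.tril(A, -1), np.triu(A, 1)

def rho_sor(omega):
    B = np.linalg.solve(D + omega * AL, (1.0 - omega) * D - omega * AR)
    return max(abs(np.linalg.eigvals(B)))

omega_opt = 2.0 / (1.0 + np.sin(np.pi * h))
print(rho_sor(1.0), rho_sor(omega_opt))  # Gauss-Seidel vs. optimal SOR
```

At the optimal parameter the spectral radius drops from roughly \( 1 - {\pi }^{2}{h}^{2} \) to roughly \( 1 - {2\pi h} \), which is the source of the speedup stated above.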
|
Yes
|
Theorem 4.18 For the spectral radius of \( T \) we have that \( \rho \left( T\right) = {0.5} \) ; i.e., the two-grid iterations converge.
|
Proof. We note that from (4.18) and (4.19), with \( h \) replaced by \( {2h} \), we have\n\nthat\n\[ \n{A}^{\left( 2h\right) }{v}_{j}^{\left( 2h\right) } = \frac{1}{{h}^{2}}{\sin }^{2}\left( {\pi jh}\right) {v}_{j}^{\left( 2h\right) } = \frac{4}{{h}^{2}}{c}_{j}^{2}{s}_{j}^{2}{v}_{j}^{\left( 2h\right) }, \n\]\n\nwhence\n\n\[ \n{\left\lbrack {A}^{\left( 2h\right) }\right\rbrack }^{-1}{v}_{j}^{\left( 2h\right) } = \frac{{h}^{2}}{4{c}_{j}^{2}{s}_{j}^{2}}{v}_{j}^{\left( 2h\right) },\;j = 1,\ldots ,\frac{n - 1}{2}, \n\]\n\nfollows. From this, using (4.20)-(4.22) and \( {R}^{\left( h\right) }{v}_{\frac{n + 1}{2}}^{\left( h\right) } = 0 \), it can be derived\n\nthat\n\[ \n\left( \begin{matrix} T{v}_{j}^{\left( h\right) } \\ T{v}_{n + 1 - j}^{\left( h\right) } \end{matrix}\right) = {s}_{j}^{2}{c}_{j}^{2}\left( \begin{array}{ll} 1 & 1 \\ 1 & 1 \end{array}\right) \left( \begin{matrix} {v}_{j}^{\left( h\right) } \\ {v}_{n + 1 - j}^{\left( h\right) } \end{matrix}\right) \n\]\n\n(4.24)\n\nfor \( j = 1,\ldots ,\frac{n - 1}{2} \) and\n\n\[ \nT{v}_{\frac{n + 1}{2}}^{\left( h\right) } = \frac{1}{2}{v}_{\frac{n + 1}{2}}^{\left( h\right) }. \n\]\n\n(4.25)\n\nSince the matrix\n\n\[ \nQ = \left( \begin{array}{ll} 1 & 1 \\ 1 & 1 \end{array}\right) \n\]\n\nhas the eigenvalues 0 and 2, from (4.24) and (4.25) it can be seen that the matrix \( T \) has the eigenvalues\n\n\[ \n2{s}_{j}^{2}{c}_{j}^{2} = \frac{1}{2}{\sin }^{2}{\pi jh},\;j = 1,\ldots ,\frac{n + 1}{2}, \n\]\n\nand the eigenvalue zero of multiplicity \( \frac{n - 1}{2} \) . This implies the assertion on the spectral radius of \( T \) .
|
Yes
|
We consider the best approximation of a given continuous function \( f : \left\lbrack {0,1}\right\rbrack \rightarrow \mathbb{R} \) by a polynomial \[ p\left( x\right) = \mathop{\sum }\limits_{{k = 0}}^{n}{\alpha }_{k}{x}^{k} \] of degree \( n \) in the least squares sense, i.e., with respect to the \( {L}_{2} \) norm.
|
Using the monomials \( x \mapsto {x}^{k}, k = 0,1,\ldots, n \), as a basis of the subspace \( {P}_{n} \subset C\left\lbrack {0,1}\right\rbrack \) of polynomials of degree less than or equal to \( n \) (see Theorem 8.2), from Corollary 3.53 and the integrals \[ {\int }_{0}^{1}{x}^{j}{x}^{k}{dx} = \frac{1}{j + k + 1} \] it follows that the coefficients \( {\alpha }_{0},\ldots ,{\alpha }_{n} \) of the best approximation are uniquely determined by the normal equations \[ \mathop{\sum }\limits_{{k = 0}}^{n}\frac{1}{j + k + 1}{\alpha }_{k} = {\int }_{0}^{1}f\left( x\right) {x}^{j}{dx},\;j = 0,\ldots, n. \]
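The coefficient matrix of these normal equations is the Hilbert matrix \( {H}_{jk} = 1/\left( {j + k + 1}\right) \). A sketch with the assumed choice \( f\left( x\right) = {e}^{x} \) and degree \( n = 2 \), where the right-hand side integrals \( {\int }_{0}^{1}{e}^{x}{x}^{j}{dx} \) are evaluated by integration by parts:

```python
# Normal equations for least squares polynomial approximation on [0,1].
import numpy as np

n = 2                                   # assumed degree
H = np.array([[1.0 / (j + k + 1) for k in range(n + 1)]
              for j in range(n + 1)])   # Hilbert matrix

# b_j = int_0^1 exp(x) x^j dx:  b_0 = e - 1, b_1 = 1, b_2 = e - 2.
e = np.e
b = np.array([e - 1.0, 1.0, e - 2.0])

alpha = np.linalg.solve(H, b)           # coefficients alpha_0..alpha_n

xs = np.linspace(0.0, 1.0, 101)
p = alpha[0] + alpha[1] * xs + alpha[2] * xs ** 2
print(np.max(np.abs(p - np.exp(xs))))   # small uniform error on [0,1]
```

For larger degrees the Hilbert matrix becomes extremely ill-conditioned, which is why orthogonal polynomial bases are preferred in practice.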
|
Yes
|
Theorem 5.3 Let \( X \) and \( Y \) be Banach spaces, let \( A : X \rightarrow Y \) be a bounded linear operator with a bounded inverse \( {A}^{-1} : Y \rightarrow X \), and let \( {A}^{\delta } : X \rightarrow Y \) be a bounded linear operator such that \( \begin{Vmatrix}{A}^{-1}\end{Vmatrix}\begin{Vmatrix}{{A}^{\delta } - A}\end{Vmatrix} < 1 \) . Assume that \( x \) and \( {x}^{\delta } \) are solutions of the equations\n\n\[ \n{Ax} = y \n\]\n\n(5.2)\n\nand\n\n\[ \n{A}^{\delta }{x}^{\delta } = {y}^{\delta } \n\]\n\n(5.3)\n\nrespectively. Then\n\n\[ \n\frac{\begin{Vmatrix}{x}^{\delta } - x\end{Vmatrix}}{\parallel x\parallel } \leq \frac{\operatorname{cond}\left( A\right) }{1 - \operatorname{cond}\left( A\right) \frac{\begin{Vmatrix}{A}^{\delta } - A\end{Vmatrix}}{\parallel A\parallel }}\left\{ {\frac{\begin{Vmatrix}{y}^{\delta } - y\end{Vmatrix}}{\parallel y\parallel } + \frac{\begin{Vmatrix}{A}^{\delta } - A\end{Vmatrix}}{\parallel A\parallel }}\right\} . \n\]
|
Proof. Writing \( {A}^{\delta } = A\left\lbrack {I + {A}^{-1}\left( {{A}^{\delta } - A}\right) }\right\rbrack \), by Theorem 3.48 we observe that the inverse operator \( {\left\lbrack {A}^{\delta }\right\rbrack }^{-1} = {\left\lbrack I + {A}^{-1}\left( {A}^{\delta } - A\right) \right\rbrack }^{-1}{A}^{-1} \) exists and is bounded by\n\n\[ \n\begin{Vmatrix}{\left\lbrack {A}^{\delta }\right\rbrack }^{-1}\end{Vmatrix} \leq \frac{\begin{Vmatrix}{A}^{-1}\end{Vmatrix}}{1 - \begin{Vmatrix}{A}^{-1}\end{Vmatrix}\begin{Vmatrix}{{A}^{\delta } - A}\end{Vmatrix}}. \n\]\n\n(5.4)\n\nFrom (5.2) and (5.3) we find that\n\n\[ \n{A}^{\delta }\left( {{x}^{\delta } - x}\right) = {y}^{\delta } - y - \left( {{A}^{\delta } - A}\right) x \n\]\n\nwhence\n\n\[ \n{x}^{\delta } - x = {\left\lbrack {A}^{\delta }\right\rbrack }^{-1}\left\{ {{y}^{\delta } - y - \left( {{A}^{\delta } - A}\right) x}\right\} \n\]\n\nfollows. Now we can estimate\n\n\[ \n\begin{Vmatrix}{{x}^{\delta } - x}\end{Vmatrix} \leq \begin{Vmatrix}{\left\lbrack {A}^{\delta }\right\rbrack }^{-1}\end{Vmatrix}\left\{ {\begin{Vmatrix}{{y}^{\delta } - y}\end{Vmatrix} + \begin{Vmatrix}{{A}^{\delta } - A}\end{Vmatrix}\parallel x\parallel }\right\} \n\]\n\nand insert (5.4) to obtain\n\n\[ \n\frac{\begin{Vmatrix}{x}^{\delta } - x\end{Vmatrix}}{\parallel x\parallel } \leq \frac{\operatorname{cond}\left( A\right) }{1 - \begin{Vmatrix}{A}^{-1}\end{Vmatrix}\begin{Vmatrix}{{A}^{\delta } - A}\end{Vmatrix}}\left\{ {\frac{\begin{Vmatrix}{y}^{\delta } - y\end{Vmatrix}}{\parallel A\parallel \parallel x\parallel } + \frac{\begin{Vmatrix}{A}^{\delta } - A\end{Vmatrix}}{\parallel A\parallel }}\right\} . \n\]\n\nFrom this the assertion follows with the aid of \( \parallel A\parallel \parallel x\parallel \geq \parallel y\parallel \) .
|
Yes
|
Theorem 5.4 Let \( A \) be an \( m \times n \) matrix of rank \( r \) . Then there exist nonnegative numbers\n\n\[ \n{\mu }_{1} \geq {\mu }_{2} \geq \cdots \geq {\mu }_{r} > {\mu }_{r + 1} = \cdots = {\mu }_{n} = 0 \n\]\n\nand orthonormal vectors \( {u}_{1},\ldots ,{u}_{n} \in {\mathbb{C}}^{n} \) and \( {v}_{1},\ldots ,{v}_{m} \in {\mathbb{C}}^{m} \) such that\n\n\[ \nA{u}_{j} = {\mu }_{j}{v}_{j},\;{A}^{ * }{v}_{j} = {\mu }_{j}{u}_{j},\;j = 1,\ldots, r, \n\]\n\n\[ \nA{u}_{j} = 0,\;j = r + 1,\ldots, n \n\]\n\n(5.5)\n\n\[ \n{A}^{ * }{v}_{j} = 0,\;j = r + 1,\ldots, m. \n\]\n\nFor each \( x \in {\mathbb{C}}^{n} \) we have the singular value decomposition\n\n\[ \n{Ax} = \mathop{\sum }\limits_{{j = 1}}^{r}{\mu }_{j}\left( {x,{u}_{j}}\right) {v}_{j} \n\]\n\n(5.6)\n\nEach system \( \left( {{\mu }_{j},{u}_{j},{v}_{j}}\right) \) with these properties is called a singular system of the matrix \( A \) .
|
Proof. The Hermitian and semipositive definite matrix \( {A}^{ * }A \) of rank \( r \) has \( n \) orthonormal eigenvectors \( {u}_{1},\ldots ,{u}_{n} \) with nonnegative eigenvalues\n\n\[ \n{A}^{ * }A{u}_{j} = {\mu }_{j}^{2}{u}_{j},\;j = 1,\ldots, n \n\]\n\n(5.7)\n\nwhich we may assume to be ordered according to \( {\mu }_{1} \geq {\mu }_{2} \geq \cdots \geq {\mu }_{r} > 0 \) and \( {\mu }_{r + 1} = \cdots = {\mu }_{n} = 0 \) . We define\n\n\[ \n{v}_{j} \mathrel{\text{:=}} \frac{1}{{\mu }_{j}}A{u}_{j},\;j = 1,\ldots, r. \n\]\n\nThen, using (5.7) we have\n\n\[ \n\left( {{v}_{j},{v}_{k}}\right) = \frac{1}{{\mu }_{j}{\mu }_{k}}\left( {A{u}_{j}, A{u}_{k}}\right) = \frac{1}{{\mu }_{j}{\mu }_{k}}\left( {{u}_{j},{A}^{ * }A{u}_{k}}\right) = {\delta }_{jk},\;j, k = 1,\ldots, r, \n\]\n\nwhere \( {\delta }_{jk} = 1 \) for \( k = j \), and \( {\delta }_{jk} = 0 \) for \( k \neq j \) . Further, we compute that \( {A}^{ * }{v}_{j} = {\mu }_{j}{u}_{j}, j = 1,\ldots, r \), and hence the first line of (5.5) is proven. The second line of (5.5) is a consequence of \( N\left( A\right) = N\left( {{A}^{ * }A}\right) \) .\n\nIf \( r < m \), by the Gram-Schmidt orthogonalization procedure from Theorem 3.18 we can extend \( {v}_{1},\ldots ,{v}_{r} \) to an orthonormal basis \( {v}_{1},\ldots ,{v}_{m} \) of \( {\mathbb{C}}^{m} \) . Since \( {A}^{ * } \) has rank \( r \), we have \( \dim N\left( {A}^{ * }\right) = m - r \) . 
From this we can conclude the third line of (5.5).\n\nSince the \( {u}_{1},\ldots ,{u}_{n} \) form an orthonormal basis of \( {\mathbb{C}}^{n} \), we can represent\n\n\[ \nx = \mathop{\sum }\limits_{{j = 1}}^{n}\left( {x,{u}_{j}}\right) {u}_{j} \n\]\n\nand (5.6) follows by applying \( A \) and observing (5.5).\n\nClearly, we can rewrite the equations (5.5) in the form\n\n\[ \nA = {VD}{U}^{ * }, \n\]\n\n(5.8)\n\nwhere \( U = \left( {{u}_{1},\ldots ,{u}_{n}}\right) \) and \( V = \left( {{v}_{1},\ldots ,{v}_{m}}\right) \) are unitary \( n \times n \) and \( m \times m \) matrices, respectively, and where \( D \) is an \( m \times n \) diagonal matrix with entries \( {d}_{jj} = {\mu }_{j} \) for \( j = 1,\ldots, r \) and \( {d}_{jk} = 0 \) otherwise.
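As an illustrative sketch (not part of the text), the construction in the proof can be carried out by hand for a small real matrix: the singular values are the square roots of the eigenvalues of \( {A}^{ * }A \), and \( {v}_{j} = A{u}_{j}/{\mu }_{j} \). The 2-by-2 matrix below is a hypothetical example.

```python
import math

# hypothetical 2x2 example; singular system built as in the proof of Theorem 5.4
A = [[1.0, 1.0],
     [0.0, 1.0]]

# A^T A, computed entrywise
AtA = [[A[0][0]**2 + A[1][0]**2, A[0][0]*A[0][1] + A[1][0]*A[1][1]],
       [A[0][1]*A[0][0] + A[1][1]*A[1][0], A[0][1]**2 + A[1][1]**2]]

# eigenvalues of the symmetric 2x2 matrix AtA from its characteristic polynomial
tr  = AtA[0][0] + AtA[1][1]
det = AtA[0][0]*AtA[1][1] - AtA[0][1]*AtA[1][0]
disc = math.sqrt(tr*tr - 4.0*det)
lams = [(tr + disc)/2.0, (tr - disc)/2.0]        # lambda_1 >= lambda_2 >= 0
mus  = [math.sqrt(l) for l in lams]              # singular values mu_j

for lam, mu in zip(lams, mus):
    # eigenvector u of AtA for eigenvalue lam: (AtA - lam*I) u = 0
    u = [AtA[0][1], lam - AtA[0][0]]
    n = math.hypot(u[0], u[1])
    u = [u[0]/n, u[1]/n]
    # v_j := A u_j / mu_j as in the proof
    v = [(A[0][0]*u[0] + A[0][1]*u[1])/mu,
         (A[1][0]*u[0] + A[1][1]*u[1])/mu]
    # check the adjoint relation A^* v_j = mu_j u_j from (5.5)
    Atv = [A[0][0]*v[0] + A[1][0]*v[1],
           A[0][1]*v[0] + A[1][1]*v[1]]
    assert abs(Atv[0] - mu*u[0]) < 1e-12 and abs(Atv[1] - mu*u[1]) < 1e-12
```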
|
Yes
|
Theorem 5.5 Let \( A \) be an \( m \times n \) matrix of rank \( r \) with singular system \( \left( {{\mu }_{j},{u}_{j},{v}_{j}}\right) \). The linear system\n\n\[ \n{Ax} = y \n\]\n\n(5.9)\n\nis solvable if and only if\n\n\[ \n\left( {y, z}\right) = 0 \n\]\n\n\( \left( {5.10}\right) \)\n\nfor all \( z \in {\mathbb{C}}^{m} \) with \( {A}^{ * }z = 0 \). In this case a solution of (5.9) is given by\n\n\[ \n{x}_{0} = \mathop{\sum }\limits_{{j = 1}}^{r}\frac{1}{{\mu }_{j}}\left( {y,{v}_{j}}\right) {u}_{j} \n\]\n\n(5.11)
|
Proof. Let \( x \) be a solution of (5.9) and let \( {A}^{ * }z = 0 \). Then\n\n\[ \n\left( {y, z}\right) = \left( {{Ax}, z}\right) = \left( {x,{A}^{ * }z}\right) = 0. \n\]\n\nThis implies the necessity of condition (5.10) for the solvability of (5.9).\n\nConversely, assume that (5.10) is satisfied. In terms of the orthonormal basis \( {v}_{1},\ldots ,{v}_{m} \) of \( {\mathbb{C}}^{m} \) condition (5.10) implies that\n\n\[ \ny = \mathop{\sum }\limits_{{j = 1}}^{r}\left( {y,{v}_{j}}\right) {v}_{j} \n\]\n\n(5.12)\n\nsince \( {A}^{ * }{v}_{j} = 0 \) for \( j = r + 1,\ldots, m \). For the vector \( {x}_{0} \) defined by (5.11) we have that\n\n\[ \nA{x}_{0} = \mathop{\sum }\limits_{{j = 1}}^{r}\left( {y,{v}_{j}}\right) {v}_{j} \n\]\n\nIn view of (5.12) this implies that \( A{x}_{0} = y \), and the proof is complete.
|
Yes
|
Theorem 5.7 Let \( A \) be an \( m \times n \) matrix of rank \( r \) with singular system \( \left( {{\mu }_{j},{u}_{j},{v}_{j}}\right) \) and let \( \alpha > 0 \) . Then for each \( y \in {\mathbb{C}}^{m} \) the linear system\n\n\[ \n\alpha {x}_{\alpha } + {A}^{ * }A{x}_{\alpha } = {A}^{ * }y \n\]\n\n(5.18)\n\nis uniquely solvable, and the solution is given by\n\n\[ \n{x}_{\alpha } = \mathop{\sum }\limits_{{j = 1}}^{r}\frac{{\mu }_{j}}{\alpha + {\mu }_{j}^{2}}\left( {y,{v}_{j}}\right) {u}_{j} \n\]\n\n(5.19)
|
Proof. For \( \alpha > 0 \) the matrix \( {\alpha I} + {A}^{ * }A \) is positive definite and therefore nonsingular. Since\n\n\[ \n\alpha {u}_{j} + {A}^{ * }A{u}_{j} = \left( {\alpha + {\mu }_{j}^{2}}\right) {u}_{j} \n\]\n\na singular system for the matrix \( {\alpha I} + {A}^{ * }A \) is given by \( \left( {\alpha + {\mu }_{j}^{2},{u}_{j},{u}_{j}}\right) \) , \( j = 1,\ldots, n \) . Now the assertion follows from Theorem 5.5 with the aid of \( \left( {{A}^{ * }y,{u}_{j}}\right) = \left( {y, A{u}_{j}}\right) \) and using (5.5).
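For a matrix whose singular system is explicit, formula (5.19) can be checked directly against the regularized normal equations (5.18). The sketch below uses a hypothetical diagonal example, where \( \left( {y,{v}_{j}}\right) \) are just the components of \( y \).

```python
# hypothetical singular values and data for a diagonal A of rank r = 3
mus = [2.0, 1.0, 0.5]
y   = [1.0, -2.0, 4.0]

def tikhonov(alpha):
    # components of x_alpha from (5.19): mu_j / (alpha + mu_j^2) * (y, v_j)
    return [m / (alpha + m*m) * c for m, c in zip(mus, y)]

# components of A^dagger y: (y, v_j) / mu_j
x_dagger = [c / m for m, c in zip(mus, y)]

# x_alpha solves the regularized equations alpha*x + A^*A x = A^* y, i.e. (5.18)
alpha = 1e-3
for m, c, x in zip(mus, y, tikhonov(alpha)):
    assert abs(alpha*x + m*m*x - m*c) < 1e-12

# convergence x_alpha -> A^dagger y as alpha -> 0 (Corollary 5.8)
err = lambda a: max(abs(u - v) for u, v in zip(tikhonov(a), x_dagger))
assert err(1e-8) < err(1e-4) < err(1e-2)
```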
|
Yes
|
Corollary 5.8 Under the assumptions of Theorem 5.7 we have convergence:\n\n\[ \mathop{\lim }\limits_{{\alpha \rightarrow 0}}{\left( \alpha I + {A}^{ * }A\right) }^{-1}{A}^{ * }y = {A}^{ \dagger }y. \]
|
Proof. This is obvious from (5.13) and (5.19), since \( {\mu }_{j}/\left( {\alpha + {\mu }_{j}^{2}}\right) \rightarrow 1/{\mu }_{j} \) as \( \alpha \rightarrow 0 \) for each \( j = 1,\ldots, r \) .
|
No
|
Theorem 5.9 Let \( A \) be an \( m \times n \) matrix and let \( \alpha > 0 \) . Then for each \( y \in {\mathbb{C}}^{m} \) there exists a unique \( {x}_{\alpha } \in {\mathbb{C}}^{n} \) such that\n\n\[ \n{\begin{Vmatrix}A{x}_{\alpha } - y\end{Vmatrix}}_{2}^{2} + \alpha {\begin{Vmatrix}{x}_{\alpha }\end{Vmatrix}}_{2}^{2} = \mathop{\inf }\limits_{{x \in {\mathbb{C}}^{n}}}\left\{ {\parallel {Ax} - y{\parallel }_{2}^{2} + \alpha \parallel x{\parallel }_{2}^{2}}\right\} .\n\]\n\n\( \left( {5.20}\right) \)\n\nThe minimizing vector \( {x}_{\alpha } \) is given by the unique solution of the linear system (5.18).
|
Proof. (Compare to the proof of Theorem 3.51.) We first note the relation\n\n\[ \n\parallel {Ax} - y{\parallel }_{2}^{2} + \alpha \parallel x{\parallel }_{2}^{2} = \parallel A{x}_{\alpha } - y{\parallel }_{2}^{2} + \alpha \parallel {x}_{\alpha }{\parallel }_{2}^{2}\n\]\n\n\[ \n+ 2\operatorname{Re}\left( {x - {x}_{\alpha },\alpha {x}_{\alpha } + {A}^{ * }A{x}_{\alpha } - {A}^{ * }y}\right)\n\]\n\n(5.21)\n\n\[ \n+ {\begin{Vmatrix}Ax - A{x}_{\alpha }\end{Vmatrix}}_{2}^{2} + \alpha {\begin{Vmatrix}x - {x}_{\alpha }\end{Vmatrix}}_{2}^{2}\n\]\n\nwhich is valid for all \( x,{x}_{\alpha } \in {\mathbb{C}}^{n} \) . From this it is obvious that the solution \( {x}_{\alpha } \) of (5.18) satisfies (5.20).\n\nConversely, let \( {x}_{\alpha } \) be a solution of (5.20) and assume that\n\n\[ \n\alpha {x}_{\alpha } + {A}^{ * }A{x}_{\alpha } \neq {A}^{ * }y.\n\]\n\nThen, setting \( z \mathrel{\text{:=}} \alpha {x}_{\alpha } + {A}^{ * }A{x}_{\alpha } - {A}^{ * }y \), for \( x \mathrel{\text{:=}} {x}_{\alpha } - {\varepsilon z} \) with \( \varepsilon \in \mathbb{R} \) from (5.21) we have\n\n\[ \n\parallel {Ax} - y{\parallel }_{2}^{2} + \alpha \parallel x{\parallel }_{2}^{2} = {\begin{Vmatrix}A{x}_{\alpha } - y\end{Vmatrix}}_{2}^{2} + \alpha {\begin{Vmatrix}{x}_{\alpha }\end{Vmatrix}}_{2}^{2} - {2\varepsilon a} + {\varepsilon }^{2}b,\n\]\n\nwhere\n\n\[ \na \mathrel{\text{:=}} \parallel z{\parallel }_{2}^{2}\;\text{ and }\;b \mathrel{\text{:=}} \parallel {Az}{\parallel }_{2}^{2} + \alpha \parallel z{\parallel }_{2}^{2}\n\]\n\nare both positive. By choosing \( \varepsilon = a/b \) we obtain\n\n\[ \n\parallel {Ax} - y{\parallel }_{2}^{2} + \alpha \parallel x{\parallel }_{2}^{2} < {\begin{Vmatrix}A{x}_{\alpha } - y\end{Vmatrix}}_{2}^{2} + \alpha {\begin{Vmatrix}{x}_{\alpha }\end{Vmatrix}}_{2}^{2},\n\]\n\nwhich contradicts (5.20).
|
Yes
|
Theorem 5.10 Let \( A \) be an \( m \times n \) matrix and let \( y \in A\left( {\mathbb{C}}^{n}\right) ,{y}^{\delta } \in {\mathbb{C}}^{m} \) satisfy\n\n\[ \n{\begin{Vmatrix}{y}^{\delta } - y\end{Vmatrix}}_{2} \leq \delta < {\begin{Vmatrix}{y}^{\delta }\end{Vmatrix}}_{2}\n\]\n\nfor \( \delta > 0 \) . Then there exists a unique \( \alpha = \alpha \left( \delta \right) > 0 \) such that the unique solution \( {x}_{\alpha } \) of (5.23) satisfies\n\n\[ \n{\begin{Vmatrix}A{x}_{\alpha } - {y}^{\delta }\end{Vmatrix}}_{2} = \delta\n\]\n\n\( \left( {5.24}\right) \)\n\nThis discrepancy principle for Tikhonov regularization is regular in the sense that if the error level \( \delta \) tends to zero, then\n\n\[ \n{x}_{\alpha } \rightarrow {A}^{ \dagger }y,\;\delta \rightarrow 0.\n\]\n\n(5.25)
|
Proof. We have to show that the function \( F : \left( {0,\infty }\right) \rightarrow \mathbb{R} \) defined by\n\n\[ \nF\left( \alpha \right) \mathrel{\text{:=}} {\begin{Vmatrix}A{x}_{\alpha } - {y}^{\delta }\end{Vmatrix}}_{2}^{2} - {\delta }^{2}\n\]\n\nhas a unique zero. In terms of a singular system, from the representation (5.19) we find that\n\n\[ \nF\left( \alpha \right) = \mathop{\sum }\limits_{{j = 1}}^{m}\frac{{\alpha }^{2}}{{\left( \alpha + {\mu }_{j}^{2}\right) }^{2}}{\left| \left( {y}^{\delta },{v}_{j}\right) \right| }^{2} - {\delta }^{2}.\n\]\n\nTherefore, \( F \) is continuous and strictly monotonically increasing with the limits \( F\left( \alpha \right) \rightarrow - {\delta }^{2} < 0,\alpha \rightarrow 0 \), and \( F\left( \alpha \right) \rightarrow {\begin{Vmatrix}{y}^{\delta }\end{Vmatrix}}_{2}^{2} - {\delta }^{2} > 0,\alpha \rightarrow \infty \) . Hence, \( F \) has exactly one zero \( \alpha = \alpha \left( \delta \right) \) .\n\nNote that the condition \( {\begin{Vmatrix}{y}^{\delta } - y\end{Vmatrix}}_{2} \leq \delta < {\begin{Vmatrix}{y}^{\delta }\end{Vmatrix}}_{2} \) implies that \( y \neq 0 \) . 
Using (5.23), (5.24), and the triangle inequality we can estimate\n\n\[ \n{\begin{Vmatrix}{y}^{\delta }\end{Vmatrix}}_{2} - \delta = {\begin{Vmatrix}{y}^{\delta }\end{Vmatrix}}_{2} - {\begin{Vmatrix}A{x}_{\alpha } - {y}^{\delta }\end{Vmatrix}}_{2} \leq {\begin{Vmatrix}A{x}_{\alpha }\end{Vmatrix}}_{2}\n\]\nand\n\n\[ \n\alpha {\begin{Vmatrix}A{x}_{\alpha }\end{Vmatrix}}_{2} = {\begin{Vmatrix}A{A}^{ * }\left( {y}^{\delta } - A{x}_{\alpha }\right) \end{Vmatrix}}_{2} \leq {\begin{Vmatrix}A{A}^{ * }\end{Vmatrix}}_{2}\delta .\n\]\n\nCombining these two inequalities and using \( {\begin{Vmatrix}{y}^{\delta }\end{Vmatrix}}_{2} \geq \parallel y{\parallel }_{2} - \delta \) yields\n\n\[ \n\alpha \leq \frac{{\begin{Vmatrix}A{A}^{ * }\end{Vmatrix}}_{2}\delta }{\parallel y{\parallel }_{2} - {2\delta }}.\n\]\n\nThis implies that \( \alpha \rightarrow 0,\delta \rightarrow 0 \) . Now the convergence (5.25) follows from the representations (5.13) for \( {A}^{ \dagger }y \) and (5.19) for \( {x}_{\alpha } \) (with \( y \) replaced by \( \left. {y}^{\delta }\right) \) and the fact that \( {\begin{Vmatrix}{y}^{\delta } - y\end{Vmatrix}}_{2} \rightarrow 0,\delta \rightarrow 0 \) .
|
Yes
|
Theorem 6.1 Let \( D \subset \mathbb{R} \) be a closed interval and let \( f : D \rightarrow D \) be a continuously differentiable function with the property\n\n\[ q \mathrel{\text{:=}} \mathop{\sup }\limits_{{x \in D}}\left| {{f}^{\prime }\left( x\right) }\right| < 1 \]\n\nThen the equation \( f\left( x\right) = x \) has a unique solution \( x \in D \), and the successive approximations\n\n\[ {x}_{\nu + 1} \mathrel{\text{:=}} f\left( {x}_{\nu }\right) ,\;\nu = 0,1,2,\ldots ,\]\n\nwith arbitrary \( {x}_{0} \in D \) converge to this solution. We have the a priori error\nestimate\n\[ \left| {{x}_{\nu } - x}\right| \leq \frac{{q}^{\nu }}{1 - q}\left| {{x}_{1} - {x}_{0}}\right| \]\n\nand the a posteriori error estimate\n\n\[ \left| {{x}_{\nu } - x}\right| \leq \frac{q}{1 - q}\left| {{x}_{\nu } - {x}_{\nu - 1}}\right| \]\n\nfor all \( \nu \in \mathbb{N} \) .
|
Proof. Equipped with the norm \( \parallel \cdot \parallel = \left| \cdot \right| \) the space \( \mathbb{R} \) is complete. By the mean value theorem, for \( x, y \in D \) with \( x < y \), we have that\n\n\[ f\left( x\right) - f\left( y\right) = {f}^{\prime }\left( \xi \right) \left( {x - y}\right) \]\n\nfor some intermediate point \( \xi \in \left( {x, y}\right) \) . Hence\n\n\[ \left| {f\left( x\right) - f\left( y\right) }\right| \leq \mathop{\sup }\limits_{{\xi \in D}}\left| {{f}^{\prime }\left( \xi \right) }\right| \left| {x - y}\right| = q\left| {x - y}\right| \]\n\nwhich is also valid for \( x, y \in D \) with \( x \geq y \) . Therefore, \( f \) is a contraction, and the assertion follows from the Banach fixed point Theorem 3.46.
|
Yes
|
Theorem 6.2 Let \( x \) be a fixed point of a continuously differentiable function \( f \) such that \( \left| {{f}^{\prime }\left( x\right) }\right| < 1 \) . Then the method of successive approximations \( {x}_{\nu + 1} \mathrel{\text{:=}} f\left( {x}_{\nu }\right) \) is locally convergent; i.e., there exists a neighborhood \( B \) of the fixed point \( x \) such that the successive approximations converge to \( x \) for all \( {x}_{0} \in B \) .
|
Proof. Since \( {f}^{\prime } \) is continuous and \( \left| {{f}^{\prime }\left( x\right) }\right| < 1 \), there exist constants \( 0 < q < 1 \) and \( \delta > 0 \) such that \( \left| {{f}^{\prime }\left( y\right) }\right| \leq q \) for all \( y \in B \mathrel{\text{:=}} \left\lbrack {x - \delta, x + \delta }\right\rbrack \) . Then we have that\n\n\[ \left| {f\left( y\right) - x}\right| = \left| {f\left( y\right) - f\left( x\right) }\right| \leq q\left| {y - x}\right| \leq \left| {y - x}\right| \leq \delta \]\n\nfor all \( y \in B \) ; i.e., \( f \) maps \( B \) into itself and is a contraction \( f : B \rightarrow B \) . Now the statement of the theorem follows from Theorem 6.1.
|
Yes
|
In order to describe a division by iteration, for \( a > 0 \) we consider the function \( f : \mathbb{R} \rightarrow \mathbb{R} \) given by \( f\left( x\right) \mathrel{\text{:=}} {2x} - a{x}^{2} \) . The graph of this function is a parabola with maximum value \( 1/a \) attained at \( x = 1/a \) . By solving the quadratic equation \( f\left( x\right) = x \) it can be seen that \( f \) has the fixed points \( x = 0 \) and \( x = 1/a \) . Obviously, \( f \) maps the open interval \( \left( {0,2/a}\right) \) into \( \left( {0,1/a}\right\rbrack \) . Since \( {f}^{\prime }\left( x\right) = 2\left( {1 - {ax}}\right) \), we have \( {f}^{\prime }\left( 0\right) = 2 \) and \( {f}^{\prime }\left( {1/a}\right) = 0 \) .
|
From the property \( x < f\left( x\right) < 1/a \), which is valid for \( 0 < x < 1/a \) , it follows that the sequence \( {x}_{\nu + 1} \mathrel{\text{:=}} 2{x}_{\nu } - a{x}_{\nu }^{2} \) is monotonically increasing and bounded. Hence, the successive approximations converge to the fixed point \( x = 1/a \) for arbitrarily chosen \( {x}_{0} \in \left( {0,2/a}\right) \) . Figure 6.2 illustrates the convergence. The numerical results are for \( a = 2 \) and two different starting points, \( {x}_{0} = {0.3} \) and \( {x}_{0} = {0.4} \) .
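The iteration is easy to reproduce; a minimal sketch (the text itself reports the results in a figure and a table, which are not repeated here):

```python
def reciprocal(a, x0, iterations=60):
    """Fixed-point iteration x_{nu+1} = 2*x_nu - a*x_nu**2 converging to 1/a
    for any starting value x0 in (0, 2/a)."""
    x = x0
    for _ in range(iterations):
        x = 2.0*x - a*x*x
    return x

# the two starting points used in the text, for a = 2
assert abs(reciprocal(2.0, 0.3) - 0.5) < 1e-14
assert abs(reciprocal(2.0, 0.4) - 0.5) < 1e-14
```

Since \( 1/a - f\left( x\right) = a{\left( 1/a - x\right) }^{2} \), the error is squared in every step, which explains the very rapid convergence.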
|
Yes
|
For computing the square root of a positive real number \( a \) by an iterative method we consider the function \( f : \left( {0,\infty }\right) \rightarrow \left( {0,\infty }\right) \) given by\n\n\[ f\left( x\right) \mathrel{\text{:=}} \frac{1}{2}\left( {x + \frac{a}{x}}\right) . \]
|
By solving the quadratic equation \( f\left( x\right) = x \) it can be seen that \( f \) has the fixed point \( x = \sqrt{a} \) . By the arithmetic geometric mean inequality we have that \( f\left( x\right) \geq \sqrt{a} \) for \( x > 0 \) ; i.e., \( f \) maps the open interval \( \left( {0,\infty }\right) \) into \( \lbrack \sqrt{a},\infty ) \), and therefore it maps the closed interval \( \lbrack \sqrt{a},\infty ) \) into itself. From\n\n\[ {f}^{\prime }\left( x\right) = \frac{1}{2}\left( {1 - \frac{a}{{x}^{2}}}\right) \]\n\nit follows that\n\n\[ q \mathrel{\text{:=}} \mathop{\sup }\limits_{{\sqrt{a} \leq x < \infty }}\left| {{f}^{\prime }\left( x\right) }\right| = \frac{1}{2}. \]\n\nHence \( f : \lbrack \sqrt{a},\infty ) \rightarrow \lbrack \sqrt{a},\infty ) \) is a contraction. Therefore, by Theorem 6.1 the successive approximations\n\n\[ {x}_{\nu + 1} \mathrel{\text{:=}} \frac{1}{2}\left( {{x}_{\nu } + \frac{a}{{x}_{\nu }}}\right) ,\;\nu = 0,1,\ldots ,\]\n\nconverge to the square root \( \sqrt{a} \) for each \( {x}_{0} > 0 \), and we have the a posteriori error estimate\n\n\[ \left| {\sqrt{a} - {x}_{\nu }}\right| \leq \left| {{x}_{\nu } - {x}_{\nu - 1}}\right| \]
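A short sketch of this iteration, with the a posteriori estimate above used as the stopping rule (the tolerance and iteration cap are choices made here, not prescribed by the text):

```python
import math

def heron_sqrt(a, x0, tol=1e-14, max_iter=100):
    """Successive approximations x_{nu+1} = (x_nu + a/x_nu)/2 for sqrt(a).
    Stops when the a posteriori bound |sqrt(a) - x_nu| <= |x_nu - x_{nu-1}|
    (Theorem 6.1 with q = 1/2) falls below tol."""
    x = x0
    for _ in range(max_iter):
        x_new = 0.5*(x + a/x)
        if abs(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

assert abs(heron_sqrt(2.0, 1.0) - math.sqrt(2.0)) < 1e-12
```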
|
Yes
|
Example 6.5 Consider the function \( f : \left\lbrack {0,1}\right\rbrack \rightarrow \left\lbrack {0,1}\right\rbrack \) given by\n\n\[ f\left( x\right) \mathrel{\text{:=}} \cos x. \]\n\nHere we have\n\n\[ q = \mathop{\sup }\limits_{{0 \leq x \leq 1}}\left| {{f}^{\prime }\left( x\right) }\right| = \sin 1 < 1 \]
|
and Theorem 6.1 implies that the successive approximations \( {x}_{\nu + 1} \mathrel{\text{:=}} \cos {x}_{\nu } \) converge to the unique solution \( x \) of \( \cos x = x \) for each \( {x}_{0} \in \left\lbrack {0,1}\right\rbrack \) . Table 6.1 illustrates the convergence, which is notably slower than in the two previous examples.
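The slower, merely linear convergence can be observed directly; a minimal sketch (the stopping tolerance is a choice made here):

```python
import math

# successive approximations x_{nu+1} = cos(x_nu) from Example 6.5
x, steps = 1.0, 0
while abs(math.cos(x) - x) > 1e-10:
    x = math.cos(x)
    steps += 1

# linear convergence with q = sin(1) ~ 0.84 forces many iterations,
# in contrast to the quadratically convergent examples before
assert steps > 20
assert abs(math.cos(x) - x) <= 1e-10
```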
|
Yes
|
Example 6.6 The function \( h : \left( {0,\infty }\right) \rightarrow \left( {-\infty ,\infty }\right) \) given by \( h\left( x\right) \mathrel{\text{:=}} x + \ln x \) is strictly monotonically increasing with limits \( \mathop{\lim }\limits_{{x \rightarrow 0}}h\left( x\right) = - \infty \) and \( \mathop{\lim }\limits_{{x \rightarrow \infty }}h\left( x\right) = \infty \) . Therefore, the function \( f\left( x\right) \mathrel{\text{:=}} - \ln x \) has a unique fixed point \( x \) . Since this fixed point must satisfy \( 0 < x < 1 \), the derivative
|
\[ \left| {{f}^{\prime }\left( x\right) }\right| = \frac{1}{x} > 1 \] implies that \( f \) is not contracting in a neighborhood of the fixed point. However, we can still design a convergent scheme because \( x = - \ln x \) is equivalent to \( {e}^{-x} = x \) . We consider the inverse function \[ g\left( x\right) \mathrel{\text{:=}} {e}^{-x} \] of \( f \), which has derivative \( \left| {{g}^{\prime }\left( x\right) }\right| = {e}^{-x} < 1 \) at the fixed point, so that we can apply Theorem 6.2. Obviously, for each \( 0 < a < 1/e \) the exponential function \( g \) maps the interval \( \left\lbrack {a,1}\right\rbrack \) into itself. Since \[ q = \mathop{\sup }\limits_{{a \leq x \leq 1}}\left| {{g}^{\prime }\left( x\right) }\right| = {e}^{-a} < 1 \] by Theorem 6.1 it follows that for arbitrary \( {x}_{0} > 0 \) the successive approximations \( {x}_{\nu + 1} = {e}^{-{x}_{\nu }} \) converge to the unique solution of \( x = {e}^{-x} \) .
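A sketch of the convergent reformulation (the iteration count is a choice made here; the non-contracting form \( {x}_{\nu + 1} = - \ln {x}_{\nu } \) would quickly leave the domain):

```python
import math

# the equivalent contracting iteration x_{nu+1} = exp(-x_nu) from Example 6.6
x = 1.0
for _ in range(200):
    x = math.exp(-x)

# x now solves x = e^{-x}, which is equivalent to x + ln(x) = 0
assert abs(x - math.exp(-x)) < 1e-14
assert abs(x + math.log(x)) < 1e-13
```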
|
Yes
|
Theorem 6.8 Let \( D \subset {\mathbb{R}}^{n} \) be closed and convex (with a nonempty interior) and let \( f : D \rightarrow D \) be a continuous mapping. Assume further that \( f \) is continuously differentiable in the interior of \( D \) and that its Jacobian can be continuously extended to all of \( D \) such that\n\n\[ \mathop{\sup }\limits_{{x \in D}}\begin{Vmatrix}{{f}^{\prime }\left( x\right) }\end{Vmatrix} < 1 \]\n\nin some norm \( \parallel \cdot \parallel \) on \( {\mathbb{R}}^{n} \) . Then the equation \( f\left( x\right) = x \) has a unique solution \( x \in D \), and the successive approximations\n\n\[ {x}_{\nu + 1} \mathrel{\text{:=}} f\left( {x}_{\nu }\right) ,\;\nu = 0,1,2,\ldots ,\]\n\nconverge for each \( {x}_{0} \in D \) to this fixed point. We have the a priori error estimate\n\n\[ \begin{Vmatrix}{{x}_{\nu } - x}\end{Vmatrix} \leq \frac{{q}^{\nu }}{1 - q}\begin{Vmatrix}{{x}_{1} - {x}_{0}}\end{Vmatrix} \]\n\nand the a posteriori error estimate\n\n\[ \begin{Vmatrix}{{x}_{\nu } - x}\end{Vmatrix} \leq \frac{q}{1 - q}\begin{Vmatrix}{{x}_{\nu } - {x}_{\nu - 1}}\end{Vmatrix} \]\n\nfor all \( \nu \in \mathbb{N} \) .
|
Proof. By the mean value Theorem 6.7 the mapping \( f : D \rightarrow D \) is a contraction.\n\nBy Theorem 3.26 we have that each of the conditions\n\n\[ \mathop{\sup }\limits_{{x \in D}}\mathop{\max }\limits_{{j = 1,\ldots, n}}\mathop{\sum }\limits_{{k = 1}}^{n}\left| {\frac{\partial {f}_{j}}{\partial {x}_{k}}\left( x\right) }\right| < 1 \]\n\n\[ \mathop{\sup }\limits_{{x \in D}}\mathop{\max }\limits_{{k = 1,\ldots, n}}\mathop{\sum }\limits_{{j = 1}}^{n}\left| {\frac{\partial {f}_{j}}{\partial {x}_{k}}\left( x\right) }\right| < 1 \]\n\n\[ \mathop{\sup }\limits_{{x \in D}}{\left\lbrack \mathop{\sum }\limits_{{j, k = 1}}^{n}{\left| \frac{\partial {f}_{j}}{\partial {x}_{k}}\left( x\right) \right| }^{2}\right\rbrack }^{1/2} < 1 \]\n\nensures convergence of the successive approximations in Theorem 6.8.
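A hypothetical two-dimensional illustration (not from the text): for the map below every row sum of the Jacobian is bounded by \( 1/3 \), so the first of the three conditions applies with the maximum norm.

```python
import math

# f(x1, x2) = (cos(x2)/3, sin(x1)/3) maps R^2 into itself; its Jacobian is
# [[0, -sin(x2)/3], [cos(x1)/3, 0]], so each row sum is at most 1/3 < 1
def f(x):
    return (math.cos(x[1])/3.0, math.sin(x[0])/3.0)

x = (0.0, 0.0)
for _ in range(60):
    x = f(x)

# the iterates have reached a numerical fixed point in the maximum norm
assert max(abs(a - b) for a, b in zip(f(x), x)) < 1e-14
```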
|
Yes
|
Theorem 6.14 Let \( D \subset {\mathbb{R}}^{n} \) be open and convex and let \( f : D \rightarrow {\mathbb{R}}^{n} \) be continuously differentiable. Assume that for some norm \( \parallel \cdot \parallel \) on \( {\mathbb{R}}^{n} \) and some \( {x}_{0} \in D \) the following conditions hold:\n\n(a) \( f \) satisfies\n\n\[ \begin{Vmatrix}{{f}^{\prime }\left( x\right) - {f}^{\prime }\left( y\right) }\end{Vmatrix} \leq \gamma \parallel x - y\parallel \]\n\nfor all \( x, y \in D \) and some constant \( \gamma > 0 \).\n\n(b) The Jacobian matrix \( {f}^{\prime }\left( x\right) \) is nonsingular for all \( x \in D \), and there exists a constant \( \beta > 0 \) such that\n\n\[ \begin{Vmatrix}{\left\lbrack {f}^{\prime }\left( x\right) \right\rbrack }^{-1}\end{Vmatrix} \leq \beta ,\;x \in D. \]\n\n(c) For the constants\n\n\[ \alpha \mathrel{\text{:=}} \begin{Vmatrix}{{\left\lbrack {f}^{\prime }\left( {x}_{0}\right) \right\rbrack }^{-1}f\left( {x}_{0}\right) }\end{Vmatrix}\;\text{ and }\;q \mathrel{\text{:=}} {\alpha \beta \gamma } \]\n\nthe inequality\n\n\[ q < \frac{1}{2} \]\n\nis satisfied.\n\n(d) For \( r \mathrel{\text{:=}} {2\alpha } \) the closed ball \( B\left\lbrack {{x}_{0}, r}\right\rbrack \mathrel{\text{:=}} \left\{ {x : \begin{Vmatrix}{x - {x}_{0}}\end{Vmatrix} \leq r}\right\} \) is contained in \( D \).\n\nThen \( f \) has a unique zero \( {x}^{ * } \) in \( B\left\lbrack {{x}_{0}, r}\right\rbrack \). Starting with \( {x}_{0} \) the Newton iteration\n\n\[ {x}_{\nu + 1} \mathrel{\text{:=}} {x}_{\nu } - {\left\lbrack {f}^{\prime }\left( {x}_{\nu }\right) \right\rbrack }^{-1}f\left( {x}_{\nu }\right) ,\;\nu = 0,1,\ldots ,\]\n\n(6.4)\n\nis well-defined. The sequence \( \left( {x}_{\nu }\right) \) converges to the zero \( {x}^{ * } \) of \( f \), and we have the error estimate\n\n\[ \begin{Vmatrix}{{x}_{\nu } - {x}^{ * }}\end{Vmatrix} \leq {2\alpha }{q}^{{2}^{\nu } - 1},\;\nu = 0,1,\ldots \]\n
|
Proof. 1. Let \( x, y, z \in D \). From the proof of Theorem 6.7 we know that\n\n\[ f\left( y\right) - f\left( x\right) = {\int }_{0}^{1}{f}^{\prime }\left\lbrack {{\lambda x} + \left( {1 - \lambda }\right) y}\right\rbrack \left( {y - x}\right) {d\lambda }.\]\n\nHence\n\n\[ f\left( y\right) - f\left( x\right) - {f}^{\prime }\left( z\right) \left( {y - x}\right) = {\int }_{0}^{1}\left\{ {{f}^{\prime }\left\lbrack {{\lambda x} + \left( {1 - \lambda }\right) y}\right\rbrack - {f}^{\prime }\left( z\right) }\right\} \left( {y - x}\right) {d\lambda },\]\n\nand estimating with the aid of (6.1) and condition (a) we find that\n\n\[ \begin{Vmatrix}{f\left( y\right) - f\left( x\right) - {f}^{\prime }\left( z\right) \left( {y - x}\right) }\end{Vmatrix} \]\n\n\[ \leq \gamma \parallel y - x\parallel {\int }_{0}^{1}\parallel \lambda \left( {x - z}\right) + \left( {1 - \lambda }\right) \left( {y - z}\right) \parallel {d\lambda } \]\n\n\[ \leq \frac{\gamma }{2}\parallel y - x\parallel \{ \parallel x - z\parallel + \parallel y - z\parallel \} \]\n\nChoosing \( z = x \) shows that\n\n\[ \begin{Vmatrix}{f\left( y\right) - f\left( x\right) - {f}^{\prime }\left( x\right) \left( {y - x}\right) }\end{Vmatrix} \leq \frac{\gamma }{2}\parallel y - x{\parallel }^{2} \]\n\n(6.5)\n\nfor all \( x, y \in D \), and choosing \( z = {x}_{0} \) yields\n\n\[ \begin{Vmatrix}{f\left( y\right) - f\left( x\right) - {f}^{\prime }\left( {x}_{0}\right) \left( {y - x}\right) }\end{Vmatrix} \leq {r\gamma }\parallel y - x\parallel \]\n\n(6.6)\n\nfor all \( x, y \in B\left\lbrack {{x}_{0}, r}\right\rbrack \).\n\n2. 
We proceed by proving by induction that\n\n\[ \begin{Vmatrix}{{x}_{\nu } - {x}_{0}}\end{Vmatrix} \leq r\;\text{ and }\;\begin{Vmatrix}{{x}_{\nu } - {x}_{\nu - 1}}\end{Vmatrix} \leq \alpha {q}^{{2}^{\nu - 1} - 1},\;\nu = 1,2,\ldots .\]\n\n(6.7)\n\nThis is valid for \( \nu = 1 \), since\n\n\[ \begin{Vmatrix}{{x}_{1} - {x}_{0}}\end{Vmatrix} = \begin{Vmatrix}{{\left\lbrack {f}^{\prime }\left( {x}_{0}\right) \right\rbrack }^{-1}f\left( {x}_{0}\right) }\end{Vmatrix} = \alpha = \frac{r}{2} < r \]\n\nas a consequence of conditions (c) and (d).
|
Yes
|
Corollary 6.15 Let \( D \subset {\mathbb{R}}^{n} \) be open and let \( f : D \rightarrow {\mathbb{R}}^{n} \) be twice continuously differentiable, and assume that \( {x}^{ * } \) is a zero of \( f \) such that the Jacobian \( {f}^{\prime }\left( {x}^{ * }\right) \) is nonsingular. Then Newton’s method is locally convergent; i.e., there exists a neighborhood \( B \) of the zero \( {x}^{ * } \) such that the Newton iterations converge to \( {x}^{ * } \) for all \( {x}_{0} \in B \) .
|
Proof. Since \( f \) is twice continuously differentiable, by the mean value Theorem 6.7 applied to the components of \( {f}^{\prime } \) there exists \( \gamma > 0 \) such that\n\n\[ \begin{Vmatrix}{{f}^{\prime }\left( x\right) - {f}^{\prime }\left( y\right) }\end{Vmatrix} \leq \gamma \parallel x - y\parallel \]\n\nfor all \( x, y \) in some closed ball \( B\left\lbrack {{x}^{ * },\rho }\right\rbrack \) centered at \( {x}^{ * } \) . We write\n\n\[ {f}^{\prime }\left( x\right) = {f}^{\prime }\left( {x}^{ * }\right) \left\{ {I + {\left\lbrack {f}^{\prime }\left( {x}^{ * }\right) \right\rbrack }^{-1}\left\lbrack {{f}^{\prime }\left( x\right) - {f}^{\prime }\left( {x}^{ * }\right) }\right\rbrack }\right\} \]\n\nand deduce from the above estimate and Theorem 3.48 that the radius \( \rho \) of \( B\left\lbrack {{x}^{ * },\rho }\right\rbrack \) can be chosen such that \( {f}^{\prime }\left( x\right) \) is nonsingular on \( B\left\lbrack {{x}^{ * },\rho }\right\rbrack \) and \( \parallel {\left\lbrack {f}^{\prime }\left( x\right) \right\rbrack }^{-1}\parallel \leq \beta \) for all \( x \in B\left\lbrack {{x}^{ * },\rho }\right\rbrack \) and some constant \( \beta > 0. \)\n\nSince \( f \) is continuous, \( f\left( {x}^{ * }\right) = 0 \) implies that there exists \( \delta < \rho /2 \) such that\n\n\[ \begin{Vmatrix}{f\left( {x}_{0}\right) }\end{Vmatrix} < \min \left\{ {\frac{\rho }{4\beta },\frac{1}{2{\beta }^{2}\gamma }}\right\} \]\n\nfor all \( \begin{Vmatrix}{{x}_{0} - {x}^{ * }}\end{Vmatrix} < \delta \) . Then, after setting \( \alpha \mathrel{\text{:=}} \begin{Vmatrix}{{\left\lbrack {f}^{\prime }\left( {x}_{0}\right) \right\rbrack }^{-1}f\left( {x}_{0}\right) }\end{Vmatrix} \) we have the inequalities\n\n\[ {\alpha \beta \gamma } \leq \begin{Vmatrix}{f\left( {x}_{0}\right) }\end{Vmatrix}{\beta }^{2}\gamma < \frac{1}{2} \]\n\nand\n\n\[ {2\alpha } \leq {2\beta }\begin{Vmatrix}{f\left( {x}_{0}\right) }\end{Vmatrix} < \frac{\rho }{2}. 
\]\n\nHence for the open and convex ball \( B\left( {{x}^{ * },\rho }\right) \) and for each \( {x}_{0} \) with \( \begin{Vmatrix}{{x}_{0} - {x}^{ * }}\end{Vmatrix} < \delta \) the assumptions of Theorem 6.14 are satisfied.
|
Yes
|
Corollary 6.16 Let \( f : \left( {a, b}\right) \rightarrow \mathbb{R} \) be twice continuously differentiable and assume that \( {x}^{ * } \) is a simple zero of \( f \) . Then Newton’s method is locally convergent.
|
Proof. For simple zeros we have \( {f}^{\prime }\left( {x}^{ * }\right) \neq 0 \), so that the assertion follows from Corollary 6.15.
|
No
|
Theorem 6.20 Under the assumptions of Theorem 6.14 Newton's method converges quadratically.
|
Proof. Using condition (b) of Theorem 6.14 and the inequality (6.5) we can estimate\n\n\[ \n\begin{Vmatrix}{{x}^{ * } - {x}_{\nu + 1}}\end{Vmatrix} = \begin{Vmatrix}{{x}^{ * } - {x}_{\nu } + {\left\lbrack {f}^{\prime }\left( {x}_{\nu }\right) \right\rbrack }^{-1}f\left( {x}_{\nu }\right) }\end{Vmatrix}\n\]\n\n\[ \n\leq \begin{Vmatrix}{\left\lbrack {f}^{\prime }\left( {x}_{\nu }\right) \right\rbrack }^{-1}\end{Vmatrix}\begin{Vmatrix}{f\left( {x}^{ * }\right) - f\left( {x}_{\nu }\right) - {f}^{\prime }\left( {x}_{\nu }\right) \left( {{x}^{ * } - {x}_{\nu }}\right) }\end{Vmatrix}\n\]\n\n\[ \n\leq \frac{\beta \gamma }{2}{\begin{Vmatrix}{x}^{ * } - {x}_{\nu }\end{Vmatrix}}^{2}\n\]\n\nsince \( f\left( {x}^{ * }\right) = 0 \) .
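The error squaring can be observed numerically. The sketch below uses the classical cubic \( {x}^{3} - {2x} - 5 = 0 \) as a hypothetical example (it is not taken from the text); the reference root is obtained by simply iterating until convergence.

```python
def newton(f, df, x, steps):
    """Plain Newton iteration x <- x - f(x)/df(x)."""
    for _ in range(steps):
        x = x - f(x)/df(x)
    return x

f  = lambda t: t**3 - 2.0*t - 5.0
df = lambda t: 3.0*t**2 - 2.0

root = newton(f, df, 2.0, 50)        # fully converged reference value

x, errors = 2.0, []
for _ in range(5):
    errors.append(abs(x - root))
    x = x - f(x)/df(x)

# quadratic convergence: e_{nu+1} <= C * e_nu^2; here C ~ 0.56 < 1
for e_prev, e_next in zip(errors, errors[1:]):
    if e_prev > 1e-7:                # below that, rounding dominates
        assert e_next <= e_prev**2
```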
|
Yes
|
Theorem 6.21 Under the assumptions of Theorem 6.14 the simplified Newton method converges linearly to the unique zero of \( f \) in \( B\left\lbrack {{x}_{0}, r}\right\rbrack \) .
|
Proof. Recall that the function\n\n\[ g\left( x\right) \mathrel{\text{:=}} x - {\left\lbrack {f}^{\prime }\left( {x}_{0}\right) \right\rbrack }^{-1}f\left( x\right) \]\n\ndefined in the proof of Theorem 6.14 is a contraction. We show that \( g \) maps \( B\left\lbrack {{x}_{0}, r}\right\rbrack \) into itself. For this we write\n\n\[ {x}_{0} - g\left( x\right) = {\left\lbrack {f}^{\prime }\left( {x}_{0}\right) \right\rbrack }^{-1}\left\{ {f\left( x\right) - f\left( {x}_{0}\right) - {f}^{\prime }\left( {x}_{0}\right) \left( {x - {x}_{0}}\right) + f\left( {x}_{0}\right) }\right\} .\n\]\n\nThen estimating with the help of conditions (b), (c) and (d) and the inequality (6.5) we obtain\n\n\[ \begin{Vmatrix}{g\left( x\right) - {x}_{0}}\end{Vmatrix} \leq \frac{\beta \gamma }{2}{\begin{Vmatrix}x - {x}_{0}\end{Vmatrix}}^{2} + \alpha \leq 2{\alpha }^{2}{\beta \gamma } + \alpha = \left( {{2q} + 1}\right) \alpha < {2\alpha } = r \]\n\nfor all \( x \) with \( \begin{Vmatrix}{x - {x}_{0}}\end{Vmatrix} \leq r \) . Now the statement of the theorem follows from the Banach fixed point Theorem 3.46.
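A small sketch of the simplified Newton method for the hypothetical scalar example \( f\left( x\right) = {x}^{2} - 2 \): the derivative is evaluated once at \( {x}_{0} \) and then kept fixed.

```python
import math

# simplified Newton: g(x) = x - f(x)/f'(x0) with the derivative frozen at x0
f   = lambda t: t*t - 2.0
df0 = 2.0*1.5                  # f'(x0) for x0 = 1.5, computed once and reused

x = 1.5
for _ in range(60):
    x = x - f(x)/df0

# the iteration still converges to sqrt(2), but only linearly: the error is
# reduced per step by roughly |1 - f'(x*)/f'(x0)|, here about 0.06
assert abs(x - math.sqrt(2.0)) < 1e-13
```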
|
Yes
|
For the polynomial \( p\left( x\right) \mathrel{\text{:=}} {x}^{3} - {x}^{2} + {3x} - 5 \) the Horner scheme
|
<table><thead><tr><th>\( z \)</th><th>1</th><th>-1</th><th>3</th><th>- 5</th></tr></thead><tr><td>2</td><td>1</td><td>1</td><td>5</td><td>5</td></tr><tr><td>2</td><td>1</td><td>3</td><td>11</td><td></td></tr><tr><td>2</td><td>1</td><td>5</td><td></td><td></td></tr><tr><td>2</td><td>1</td><td></td><td></td><td></td></tr></table>\n\nfor \( z = 2 \) leads to \( p\left( 2\right) = 5,{p}^{\prime }\left( 2\right) = {11},{p}^{\prime \prime }\left( 2\right) = {10},{p}^{\prime \prime \prime }\left( 2\right) = 6 \).
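The scheme above can be automated: each Horner pass divides off a factor \( (x - z) \), and the remainder of the \( k \) th pass equals \( p^{(k)}(z)/k! \). A small pure-Python sketch (the function name is ours):

```python
def horner_all(coeffs, z):
    """Repeated Horner passes: returns [p(z), p'(z), p''(z), ...].

    coeffs lists the coefficients from the highest power downwards,
    exactly as in the rows of the scheme above; the remainder of the
    k-th pass equals p^(k)(z) / k!, so we multiply by k! as we go."""
    rows = list(coeffs)
    results, factorial = [], 1
    for k in range(len(coeffs)):
        for i in range(1, len(rows)):       # one Horner pass
            rows[i] += z * rows[i - 1]
        results.append(rows[-1] * factorial)
        rows = rows[:-1]                    # drop the remainder, repeat
        factorial *= k + 1
    return results
```

For \( p(x) = x^3 - x^2 + 3x - 5 \) and \( z = 2 \), `horner_all([1, -1, 3, -5], 2)` reproduces the values of the table: \( 5, 11, 10, 6 \).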
|
Yes
|
Theorem 7.3 (Rayleigh) Let \( A \) be a Hermitian \( n \times n \) matrix with eigenvalues\n\n\[ \n{\lambda }_{1} \geq {\lambda }_{2} \geq \cdots \geq {\lambda }_{n}\n\]\n\n(where multiple eigenvalues occur according to their multiplicity) and corresponding orthonormal eigenvectors \( {x}_{1},{x}_{2},\ldots ,{x}_{n} \) . Then\n\n\[ \n{\lambda }_{j} = \mathop{\max }\limits_{\substack{{x \in {V}_{j}} \\ {x \neq 0} }}\frac{\left( Ax, x\right) }{\left( x, x\right) },\;j = 1,\ldots, n\n\]\n\nwhere the subspaces \( {V}_{1},\ldots ,{V}_{n} \) are defined by \( {V}_{1} \mathrel{\text{:=}} {\mathbb{C}}^{n} \) and\n\n\[ \n{V}_{j} \mathrel{\text{:=}} \left\{ {x \in {\mathbb{C}}^{n} : \left( {x,{x}_{k}}\right) = 0, k = 1,\ldots, j - 1}\right\} ,\;j = 2,\ldots, n.\n\]
|
Proof. Let \( x \in {V}_{j} \) with \( x \neq 0 \) . Then\n\n\[ \nx = \mathop{\sum }\limits_{{k = j}}^{n}\left( {x,{x}_{k}}\right) {x}_{k}\;\text{ and }\;\mathop{\sum }\limits_{{k = j}}^{n}{\left| \left( x,{x}_{k}\right) \right| }^{2} = \left( {x, x}\right) .\n\]\n\nHence\n\n\[ \n{Ax} = \mathop{\sum }\limits_{{k = j}}^{n}{\lambda }_{k}\left( {x,{x}_{k}}\right) {x}_{k}\n\]\n\nand\n\n\[ \n\left( {{Ax}, x}\right) = \mathop{\sum }\limits_{{k = j}}^{n}{\lambda }_{k}{\left| \left( x,{x}_{k}\right) \right| }^{2} \leq {\lambda }_{j}\mathop{\sum }\limits_{{k = j}}^{n}{\left| \left( x,{x}_{k}\right) \right| }^{2} = {\lambda }_{j}\left( {x, x}\right) .\n\]\n\nThis implies\n\n\[ \n\mathop{\sup }\limits_{\substack{{x \in {V}_{j}} \\ {x \neq 0} }}\frac{\left( Ax, x\right) }{\left( x, x\right) } \leq {\lambda }_{j}\n\]\n\nand the statement follows from \( \left( {A{x}_{j},{x}_{j}}\right) = {\lambda }_{j} \) and \( {x}_{j} \in {V}_{j} \) .
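The case \( j = 1 \) of Rayleigh's theorem says that the Rayleigh quotient is maximized by an eigenvector of the largest eigenvalue. A minimal check (our own example matrix, with eigenvalues 3 and 1):

```python
def rayleigh(A, x):
    """Rayleigh quotient (Ax, x) / (x, x) for a real symmetric matrix
    given as nested lists."""
    n = len(x)
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return sum(Ax[i] * x[i] for i in range(n)) / sum(xi * xi for xi in x)

A = [[2.0, 1.0], [1.0, 2.0]]            # eigenvalues 3 (for (1,1)) and 1
r_max = rayleigh(A, [1.0, 1.0])         # the eigenvector attains lambda_1 = 3
samples = [rayleigh(A, [1.0, t]) for t in (-2.0, -0.5, 0.0, 0.3, 5.0)]
```

Every sampled quotient lies between \( \lambda_2 = 1 \) and \( \lambda_1 = 3 \), and the maximum is attained at the eigenvector, as the theorem asserts.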
|
Yes
|
Theorem 7.4 (Courant) Let \( A \) be a Hermitian \( n \times n \) matrix with eigenvalues\n\n\[ \n{\lambda }_{1} \geq {\lambda }_{2} \geq \cdots \geq {\lambda }_{n} \n\]\n\n(where multiple eigenvalues occur according to their multiplicity). Then\n\n\[ \n{\lambda }_{j} = \mathop{\min }\limits_{{{U}_{j} \in {M}_{j}}}\mathop{\max }\limits_{\substack{{x \in {U}_{j}} \\ {x \neq 0} }}\frac{\left( Ax, x\right) }{\left( x, x\right) },\;j = 1,\ldots, n \n\]\n\nwhere \( {M}_{j} \) denotes the set of all subspaces \( {U}_{j} \subset {\mathbb{C}}^{n} \) of dimension \( n + 1 - j \) .
|
Proof. First we note that because of\n\n\[ \n\mathop{\sup }\limits_{\substack{{x \in {U}_{j}} \\ {x \neq 0} }}\frac{\left( Ax, x\right) }{\left( x, x\right) } = \mathop{\sup }\limits_{\substack{{x \in {U}_{j}} \\ {\left( {x, x}\right) = 1} }}\left( {{Ax}, x}\right) \n\]\n\nand the continuity of the function \( x \mapsto \left( {{Ax}, x}\right) \), the supremum is attained; i.e., the maximum exists.\n\nBy \( {x}_{1},{x}_{2},\ldots ,{x}_{n} \) we denote orthonormal eigenvectors corresponding to the eigenvalues \( {\lambda }_{1} \geq {\lambda }_{2} \geq \cdots \geq {\lambda }_{n} \) . First, we show that for a given subspace \( {U}_{j} \) of dimension \( n + 1 - j \) there exists a vector \( x \in {U}_{j} \) such that\n\n\[ \n\left( {x,{x}_{k}}\right) = 0,\;k = j + 1,\ldots, n. \n\]\n\n(7.1)\n\nLet \( {z}_{1},\ldots ,{z}_{n + 1 - j} \) be a basis of \( {U}_{j} \) . Then we can represent each \( x \in {U}_{j} \) by\n\n\[ \nx = \mathop{\sum }\limits_{{i = 1}}^{{n + 1 - j}}{a}_{i}{z}_{i} \n\]\n\n(7.2)\n\nIn order to guarantee (7.1), the \( n + 1 - j \) coefficients \( {a}_{1},\ldots ,{a}_{n + 1 - j} \) must satisfy the \( n - j \) linear equations\n\n\[ \n\mathop{\sum }\limits_{{i = 1}}^{{n + 1 - j}}{a}_{i}\left( {{z}_{i},{x}_{k}}\right) = 0,\;k = j + 1,\ldots, n. \n\]\n\nThis underdetermined system always has a nontrivial solution. 
For the corresponding \( x \) given by (7.2) we have \( x \neq 0 \), and from\n\n\[ \nx = \mathop{\sum }\limits_{{k = 1}}^{j}\left( {x,{x}_{k}}\right) {x}_{k} \n\]\n\nwe obtain that\n\n\[ \n\left( {{Ax}, x}\right) = \mathop{\sum }\limits_{{k = 1}}^{j}{\lambda }_{k}{\left| \left( x,{x}_{k}\right) \right| }^{2} \geq {\lambda }_{j}\mathop{\sum }\limits_{{k = 1}}^{j}{\left| \left( x,{x}_{k}\right) \right| }^{2} = {\lambda }_{j}\left( {x, x}\right) , \n\]\n\nwhence\n\n\[ \n\mathop{\max }\limits_{\substack{{x \in {U}_{j}} \\ {x \neq 0} }}\frac{\left( Ax, x\right) }{\left( x, x\right) } \geq {\lambda }_{j} \n\]\n\nfollows.\n\nOn the other hand, for the subspace\n\n\[ \n{U}_{j} = \left\{ {x \in {\mathbb{C}}^{n} : \left( {x,{x}_{k}}\right) = 0, k = 1,\ldots, j - 1}\right\} \n\]\n\nof dimension \( n + 1 - j \), by Theorem 7.3 we have the equality\n\n\[ \n\mathop{\max }\limits_{\substack{{x \in {U}_{j}} \\ {x \neq 0} }}\frac{\left( Ax, x\right) }{\left( x, x\right) } = {\lambda }_{j} \n\]\n\nand the proof is finished.
|
Yes
|
Corollary 7.5 Let \( A \) and \( B \) be two Hermitian \( n \times n \) matrices with eigenvalues \( {\lambda }_{1}\left( A\right) \geq {\lambda }_{2}\left( A\right) \geq \cdots \geq {\lambda }_{n}\left( A\right) \) and \( {\lambda }_{1}\left( B\right) \geq {\lambda }_{2}\left( B\right) \geq \cdots \geq {\lambda }_{n}\left( B\right) \) . Then\n\n\[ \n\left| {{\lambda }_{j}\left( A\right) - {\lambda }_{j}\left( B\right) }\right| \leq \parallel A - B\parallel ,\;j = 1,\ldots, n, \n\] \n\nfor any natural matrix norm \( \parallel \cdot \parallel \) on \( {\mathbb{C}}^{n \times n} \) .
|
Proof. From the Cauchy-Schwarz inequality we have that\n\n\[ \n\left( {{Ax} - {Bx}, x}\right) \leq \parallel \left( {A - B}\right) x{\parallel }_{2}\parallel x{\parallel }_{2} \leq \parallel A - B{\parallel }_{2}\parallel x{\parallel }_{2}^{2} \n\] \n\nand hence\n\n\[ \n\left( {{Ax}, x}\right) \leq \left( {{Bx}, x}\right) + \parallel A - B{\parallel }_{2}\parallel x{\parallel }_{2}^{2}. \n\] \n\nBy the Courant minimum-maximum principle of Theorem 7.4 this implies\n\n\[ \n{\lambda }_{j}\left( A\right) \leq {\lambda }_{j}\left( B\right) + \parallel A - B{\parallel }_{2},\;j = 1,\ldots, n. \n\] \n\nInterchanging the roles of \( A \) and \( B \), we also have that\n\n\[ \n{\lambda }_{j}\left( B\right) \leq {\lambda }_{j}\left( A\right) + \parallel B - A{\parallel }_{2},\;j = 1,\ldots, n, \n\] \n\nand therefore\n\n\[ \n\left| {{\lambda }_{j}\left( A\right) - {\lambda }_{j}\left( B\right) }\right| \leq \parallel A - B{\parallel }_{2},\;j = 1,\ldots, n. \n\] \n\nNow the statement follows from\n\n\[ \n\parallel A - B{\parallel }_{2} = \rho \left( {A - B}\right) \leq \parallel A - B\parallel \n\] \n\nwhich is a consequence of Theorems 3.31 and 3.32.
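The perturbation bound is sharp enough to check by hand on \( 2 \times 2 \) matrices, where the eigenvalues are available in closed form. A small sketch (our own example; perturbing one diagonal entry by \( 0.1 \) gives \( \parallel A - B{\parallel }_{2} = 0.1 \)):

```python
import math

def eigs_sym2(a, b, d):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, d]], descending."""
    mean = (a + d) / 2.0
    rad = math.sqrt(((a - d) / 2.0) ** 2 + b * b)
    return [mean + rad, mean - rad]

lam_A = eigs_sym2(2.0, 1.0, 2.0)        # eigenvalues 3 and 1
lam_B = eigs_sym2(2.1, 1.0, 2.0)        # one diagonal entry perturbed by 0.1
# here ||A - B||_2 = 0.1, so no eigenvalue can move by more than 0.1
gap = max(abs(x - y) for x, y in zip(lam_A, lam_B))
```

In this example each eigenvalue actually moves by only about half the allowed amount, illustrating that the corollary gives a worst-case bound.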
|
Yes
|
Corollary 7.6 For the eigenvalues \( {\lambda }_{1} \geq {\lambda }_{2} \geq \cdots \geq {\lambda }_{n} \) of a Hermitian \( n \times n \) matrix \( A = \left( {a}_{jk}\right) \) we have that\n\n\[ \n{\left| {\lambda }_{i} - {a}_{ii}^{\prime }\right| }^{2} \leq \mathop{\sum }\limits_{\substack{{j, k = 1} \\ {j \neq k} }}^{n}{\left| {a}_{jk}\right| }^{2},\;i = 1,\ldots, n \n\]\n\nwhere the elements \( {a}_{11}^{\prime },\ldots ,{a}_{nn}^{\prime } \) represent a permutation of the diagonal elements \( {a}_{11},\ldots ,{a}_{nn} \) of \( A \) such that \( {a}_{11}^{\prime } \geq {a}_{22}^{\prime } \geq \cdots \geq {a}_{nn}^{\prime } \) .
|
Proof. Use \( B = \operatorname{diag}\left( {a}_{jj}^{\prime }\right) \) and \( \parallel \cdot \parallel = \parallel \cdot {\parallel }_{2} \) in the preceding corollary.
|
No
|
Theorem 7.7 (Gerschgorin) Let \( A = \left( {a}_{jk}\right) \) be a complex \( n \times n \) matrix and define the disks\n\n\[ \n{G}_{j} \mathrel{\text{:=}} \left\{ {\lambda \in \mathbb{C} : \left| {\lambda - {a}_{jj}}\right| \leq \mathop{\sum }\limits_{\substack{{k = 1} \\ {k \neq j} }}^{n}\left| {a}_{jk}\right| }\right\} ,\;j = 1,\ldots, n, \n\] \n\nand \n\n\[ \n{G}_{j}^{ * } \mathrel{\text{:=}} \left\{ {\lambda \in \mathbb{C} : \left| {\lambda - {a}_{jj}}\right| \leq \mathop{\sum }\limits_{\substack{{k = 1} \\ {k \neq j} }}^{n}\left| {a}_{kj}\right| }\right\} ,\;j = 1,\ldots, n. \n\] \n\nThen the eigenvalues \( \lambda \) of \( A \) satisfy \n\n\[ \n\lambda \in \mathop{\bigcup }\limits_{{j = 1}}^{n}{G}_{j} \cap \mathop{\bigcup }\limits_{{j = 1}}^{n}{G}_{j}^{ * } \n\]
|
Proof. Assume that \( {Ax} = {\lambda x} \) and \( \parallel x{\parallel }_{\infty } = 1 \), and for \( x = {\left( {x}_{1},\ldots ,{x}_{n}\right) }^{T} \) choose \( j \) such that \( \left| {x}_{j}\right| = \parallel x{\parallel }_{\infty } = 1 \) . Then \n\n\[ \n\left| {\lambda - {a}_{jj}}\right| = \left| {\left( {\lambda - {a}_{jj}}\right) {x}_{j}}\right| = \left| {\mathop{\sum }\limits_{\substack{{k = 1} \\ {k \neq j} }}^{n}{a}_{jk}{x}_{k}}\right| \leq \mathop{\sum }\limits_{\substack{{k = 1} \\ {k \neq j} }}^{n}\left| {a}_{jk}\right| \n\] \n\nand therefore \n\n\[ \n\lambda \in \mathop{\bigcup }\limits_{{j = 1}}^{n}{G}_{j} \n\] \n\nSince the eigenvalues of \( {A}^{ * } \) are the complex conjugate of the eigenvalues of \( A \) (see Problem 7.3) we also have that \n\n\[ \n\lambda \in \mathop{\bigcup }\limits_{{j = 1}}^{n}{G}_{j}^{ * } \n\] \n\nand the theorem is proven.
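Gerschgorin's disks are cheap to compute and give a first localization of the spectrum. A minimal sketch (the matrix is the tridiagonal example used later in this chapter, with eigenvalues \( 2 \pm \sqrt{2} \) and \( 2 \); since it is symmetric, the row and column disks coincide):

```python
import math

def gerschgorin_row_disks(A):
    """List of (center, radius) pairs for the row disks G_j."""
    n = len(A)
    return [(A[j][j], sum(abs(A[j][k]) for k in range(n) if k != j))
            for j in range(n)]

A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
disks = gerschgorin_row_disks(A)

# known eigenvalues of this matrix: 2 + sqrt(2), 2, 2 - sqrt(2)
eigenvalues = [2.0 + math.sqrt(2.0), 2.0, 2.0 - math.sqrt(2.0)]
covered = all(any(abs(lam - c) <= r for c, r in disks) for lam in eigenvalues)
```

All three disks are centered at 2 with radii 1, 2, 1, and indeed every eigenvalue lies in their union.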
|
Yes
|
Lemma 7.8 The Frobenius norm\n\n\[ \n\parallel A{\parallel }_{F} \mathrel{\text{:=}} {\left( \mathop{\sum }\limits_{{j, k = 1}}^{n}{\left| {a}_{jk}\right| }^{2}\right) }^{1/2}\n\]\n\nof an \( n \times n \) matrix \( A = \left( {a}_{jk}\right) \) is invariant with respect to unitary transformations.
|
Proof. The trace\n\n\[ \n\operatorname{tr}A \mathrel{\text{:=}} \mathop{\sum }\limits_{{j = 1}}^{n}{a}_{jj}\n\]\n\nof a matrix \( A \) is commutative; i.e., \( \operatorname{tr}{AB} = \operatorname{tr}{BA} \) . This follows from\n\n\[ \n\mathop{\sum }\limits_{{j = 1}}^{n}{\left( AB\right) }_{jj} = \mathop{\sum }\limits_{{j = 1}}^{n}\mathop{\sum }\limits_{{k = 1}}^{n}{a}_{jk}{b}_{kj} = \mathop{\sum }\limits_{{k = 1}}^{n}\mathop{\sum }\limits_{{j = 1}}^{n}{b}_{kj}{a}_{jk} = \mathop{\sum }\limits_{{k = 1}}^{n}{\left( BA\right) }_{kk}.\n\]\n\nIn particular, we have that\n\n\[ \n\operatorname{tr}A{A}^{ * } = \mathop{\sum }\limits_{{j = 1}}^{n}\mathop{\sum }\limits_{{k = 1}}^{n}{a}_{jk}{a}_{kj}^{ * } = \mathop{\sum }\limits_{{j = 1}}^{n}\mathop{\sum }\limits_{{k = 1}}^{n}{\left| {a}_{jk}\right| }^{2}.\n\]\n\nTherefore, for each unitary matrix \( Q \) it follows that\n\n\[ \n\parallel {Q}^{ * }{AQ}{\parallel }_{F}^{2} = \mathrm{{tr}}\left( {{Q}^{ * }{AQ}{Q}^{ * }{A}^{ * }Q}\right) = \mathrm{{tr}}\left( {{Q}^{ * }A{A}^{ * }Q}\right) = \mathrm{{tr}}\left( {A{A}^{ * }Q{Q}^{ * }}\right) = \parallel A{\parallel }_{F}^{2},\n\]\n\nand the lemma is proven.
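The invariance can be verified directly: conjugating any matrix by an orthogonal rotation leaves the Frobenius norm unchanged up to rounding. A short pure-Python sketch (helper names are ours):

```python
import math

def matmul(X, Y):
    """Product of two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def frob(X):
    """Frobenius norm ||X||_F."""
    return math.sqrt(sum(x * x for row in X for x in row))

c, s = math.cos(0.7), math.sin(0.7)
Q = [[c, -s], [s, c]]                   # real orthogonal (hence unitary)
Qt = [[c, s], [-s, c]]                  # Q^T = Q^{-1}

A = [[1.0, 2.0], [3.0, 4.0]]
B = matmul(Qt, matmul(A, Q))            # the transformation Q* A Q
```

Here \( \parallel B{\parallel }_{F} = \parallel A{\parallel }_{F} = \sqrt{30} \) to machine precision, as Lemma 7.8 guarantees.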
|
Yes
|
Corollary 7.9 The eigenvalues of an \( n \times n \) matrix \( A \) (counted repeatedly according to their algebraic multiplicity) satisfy Schur's inequality\n\n\[ \mathop{\sum }\limits_{{j = 1}}^{n}{\left| {\lambda }_{j}\right| }^{2} \leq \parallel A{\parallel }_{F}^{2} \]\n\nEquality holds if and only if the matrix \( A \) is normal, i.e., if \( A{A}^{ * } = {A}^{ * }A \) .
|
Proof. By Theorem 3.27 there exists a unitary matrix \( Q \) such that \( R \mathrel{\text{:=}} {Q}^{ * }{AQ} \) is an upper triangular matrix. Hence\n\n\[ \parallel A{\parallel }_{F}^{2} = \parallel R{\parallel }_{F}^{2} = \mathop{\sum }\limits_{{j = 1}}^{n}{\left| {\lambda }_{j}\right| }^{2} + \mathop{\sum }\limits_{{j = 1}}^{n}\mathop{\sum }\limits_{{k = j + 1}}^{n}{\left| {r}_{jk}\right| }^{2}, \]\n\n(7.3)\n\nsince the diagonal elements of \( R = \left( {r}_{jk}\right) \) coincide with the eigenvalues of the similar matrices \( R \) and \( A \) . Now Schur’s inequality follows immediately from (7.3).\n\nFor the discussion of the case of equality, we first note that any unitary transformation of a normal matrix is again normal. This is a consequence of the identity\n\n\[ {Q}^{ * }{AQ}{\left( {Q}^{ * }AQ\right) }^{ * } - {\left( {Q}^{ * }AQ\right) }^{ * }{Q}^{ * }{AQ} = {Q}^{ * }\left( {A{A}^{ * } - {A}^{ * }A}\right) Q. \]\n\nIf equality holds in Schur’s inequality, then (7.3) implies that \( R \) is a diagonal matrix. Hence \( R \), and therefore \( A \), is normal.\n\nConversely, if \( A \) is normal, then the upper triangular matrix \( R \) must also be normal. Now, from\n\n\[ {\left( R{R}^{ * }\right) }_{jj} = \mathop{\sum }\limits_{{k = 1}}^{n}{r}_{jk}{r}_{kj}^{ * } = \mathop{\sum }\limits_{{k = j}}^{n}{\left| {r}_{jk}\right| }^{2} \]\n\nand\n\n\[ {\left( {R}^{ * }R\right) }_{jj} = \mathop{\sum }\limits_{{k = 1}}^{n}{r}_{jk}^{ * }{r}_{kj} = \mathop{\sum }\limits_{{k = 1}}^{j}{\left| {r}_{kj}\right| }^{2} \]\n\nwe conclude that\n\n\[ \mathop{\sum }\limits_{{k = j}}^{n}{\left| {r}_{jk}\right| }^{2} = \mathop{\sum }\limits_{{k = 1}}^{j}{\left| {r}_{kj}\right| }^{2},\;j = 1,\ldots, n. \]\n\nThis implies \( {r}_{jk} = 0 \) for \( j < k \), i.e., \( R \) is a diagonal matrix, and from (7.3) we deduce that equality holds in Schur’s inequality if \( A \) is normal.
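Both cases of Schur's inequality are easy to exhibit on \( 2 \times 2 \) examples (our own choices): a non-normal upper triangular matrix gives a strict inequality, while a symmetric, hence normal, matrix attains equality.

```python
# R is upper triangular, so its eigenvalues are the diagonal entries 1 and 2;
# R is not normal (the entry r_12 = 1 spoils R R* = R* R).
R = [[1.0, 1.0], [0.0, 2.0]]
eig_sum = 1.0 ** 2 + 2.0 ** 2                      # sum of |lambda_j|^2 = 5
frob_sq = sum(x * x for row in R for x in row)     # ||R||_F^2 = 6
strict = eig_sum < frob_sq                         # strict inequality

# a symmetric (hence normal) matrix with eigenvalues 3 and 1 attains equality
S = [[2.0, 1.0], [1.0, 2.0]]
equality = 3.0 ** 2 + 1.0 ** 2 == sum(x * x for row in S for x in row)
```

The slack \( 6 - 5 = 1 \) in the first case is exactly the squared off-diagonal entry \( |r_{12}|^2 \) in (7.3).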
|
Yes
|
Lemma 7.10 Normal matrices \( A \) satisfy\n\n\[ \mathop{\sum }\limits_{{j = 1}}^{n}{\left| {\lambda }_{j}\right| }^{2} = \mathop{\sum }\limits_{{j = 1}}^{n}{\left| {a}_{jj}\right| }^{2} + {\left\lbrack N\left( A\right) \right\rbrack }^{2}. \]
|
Proof. This follows from Corollary 7.9.
|
No
|
Lemma 7.11 For each pair \( j < k \) and each \( \varphi \in \mathbb{R} \) the matrix\n\n\[ U = \left( \begin{matrix} 1 & & & & & \\ & \cdot & & & & \\ & & \cos \varphi & & - \sin \varphi & \\ & & & \cdot & & \\ & & \sin \varphi & & \cos \varphi & \\ & & & & & \cdot \\ & & & & & 1 \end{matrix}\right) ,\]\n\nwhich coincides with the identity matrix except for \( {u}_{jj} = {u}_{kk} = \cos \varphi \) and \( {u}_{kj} = - {u}_{jk} = \sin \varphi \) (and which describes a rotation in the \( {x}_{j}{x}_{k} \) -plane) is unitary.
|
Proof. This follows from\n\n\[ \left( \begin{matrix} \cos \varphi & - \sin \varphi \\ \sin \varphi & \cos \varphi \end{matrix}\right) \left( \begin{matrix} \cos \varphi & \sin \varphi \\ - \sin \varphi & \cos \varphi \end{matrix}\right) = \left( \begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right) \]\n\nand\n\n\[ \left( \begin{matrix} \cos \varphi & \sin \varphi \\ - \sin \varphi & \cos \varphi \end{matrix}\right) \left( \begin{matrix} \cos \varphi & - \sin \varphi \\ \sin \varphi & \cos \varphi \end{matrix}\right) = \left( \begin{array}{ll} 1 & 0 \\ 0 & 1 \end{array}\right) . \]
|
Yes
|
Lemma 7.12 Let \( A \) be a real symmetric matrix and let \( U \) be the unitary matrix of Lemma 7.11. Then \( B = {U}^{ * }{AU} \) is also real and symmetric and has the entries\n\n\[ \n{b}_{jj} = {a}_{jj}{\cos }^{2}\varphi + {a}_{jk}\sin {2\varphi } + {a}_{kk}{\sin }^{2}\varphi \]\n\n\[ \n{b}_{kk} = {a}_{jj}{\sin }^{2}\varphi - {a}_{jk}\sin {2\varphi } + {a}_{kk}{\cos }^{2}\varphi \]\n\n\[ \n{b}_{jk} = {b}_{kj} = {a}_{jk}\cos {2\varphi } + \frac{1}{2}\left( {{a}_{kk} - {a}_{jj}}\right) \sin {2\varphi } \]\n\n\[ \n{b}_{ij} = {b}_{ji} = {a}_{ij}\cos \varphi + {a}_{ik}\sin \varphi ,\;i \neq j, k, \]\n\n\[ \n{b}_{ik} = {b}_{ki} = - {a}_{ij}\sin \varphi + {a}_{ik}\cos \varphi ,\;i \neq j, k, \]\n\n\[ \n{b}_{il} = {a}_{il},\;i, l \neq j, k \]\n\ni.e., the matrix \( B \) differs from \( A \) only in the \( j \) th and \( k \) th rows and columns.
|
Proof. The matrix \( B \) is real, since \( A \) and \( U \) are real, and it is symmetric, since the unitary transformation of a Hermitian matrix is again Hermitian. Elementary calculations show that\n\n\[ \n\left( \begin{matrix} \cos \varphi & \sin \varphi \\ - \sin \varphi & \cos \varphi \end{matrix}\right) \left( \begin{matrix} {a}_{jj} & {a}_{jk} \\ {a}_{kj} & {a}_{kk} \end{matrix}\right) \left( \begin{matrix} \cos \varphi & - \sin \varphi \\ \sin \varphi & \cos \varphi \end{matrix}\right) = \left( \begin{matrix} {b}_{jj} & {b}_{jk} \\ {b}_{kj} & {b}_{kk} \end{matrix}\right) \]\n\nwith \( {b}_{jj},{b}_{jk},{b}_{kj} \), and \( {b}_{kk} \) as stated in the theorem. For \( i \neq j, k \) we have that\n\n\[ \n{b}_{ij} = \mathop{\sum }\limits_{{r, s = 1}}^{n}{u}_{is}^{ * }{a}_{sr}{u}_{rj} = {a}_{ij}{u}_{jj} + {a}_{ik}{u}_{kj} = {a}_{ij}\cos \varphi + {a}_{ik}\sin \varphi \]\n\nand\n\n\[ \n{b}_{ik} = \mathop{\sum }\limits_{{r, s = 1}}^{n}{u}_{is}^{ * }{a}_{sr}{u}_{rk} = {a}_{ij}{u}_{jk} + {a}_{ik}{u}_{kk} = - {a}_{ij}\sin \varphi + {a}_{ik}\cos \varphi . \]\n\nFinally, we have\n\n\[ \n{b}_{il} = \mathop{\sum }\limits_{{r, s = 1}}^{n}{u}_{is}^{ * }{a}_{sr}{u}_{rl} = {a}_{il} \]\n\nfor \( i, l \neq j, k \) .
|
Yes
|
Lemma 7.13 For\n\n\[ \tan {2\varphi } = \frac{2{a}_{jk}}{{a}_{jj} - {a}_{kk}},\;{a}_{jj} \neq {a}_{kk}, \]\n\n\[ \varphi = \frac{\pi }{4},\;{a}_{jj} = {a}_{kk}, \]\n\nthe transformation of Lemma 7.12 annihilates the elements\n\n\[ {b}_{jk} = {b}_{kj} = 0 \]\n\nand reduces the off-diagonal elements according to\n\n\[ {\left\lbrack N\left( B\right) \right\rbrack }^{2} = {\left\lbrack N\left( A\right) \right\rbrack }^{2} - 2{a}_{jk}^{2}. \]
|
Proof. \( {b}_{jk} = {b}_{kj} = 0 \) follows immediately from Lemma 7.12. Applying Lemma 7.8 to the matrices\n\n\[ \left( \begin{matrix} {a}_{jj} & {a}_{jk} \\ {a}_{kj} & {a}_{kk} \end{matrix}\right) \;\mathrm{{and}}\;\left( \begin{matrix} {b}_{jj} & {b}_{jk} \\ {b}_{kj} & {b}_{kk} \end{matrix}\right) \]\n\nyields\n\n\[ {a}_{jj}^{2} + 2{a}_{jk}^{2} + {a}_{kk}^{2} = {b}_{jj}^{2} + {b}_{kk}^{2}. \]\n\nFrom this, with the aid of Lemmas 7.8 and 7.12 we find that\n\n\[ {\left\lbrack N\left( B\right) \right\rbrack }^{2} = \parallel B{\parallel }_{F}^{2} - \mathop{\sum }\limits_{{i = 1}}^{n}{b}_{ii}^{2} = \parallel A{\parallel }_{F}^{2} - \mathop{\sum }\limits_{{i = 1}}^{n}{b}_{ii}^{2} \]\n\n\[ = {\left\lbrack N\left( A\right) \right\rbrack }^{2} + \mathop{\sum }\limits_{{i = 1}}^{n}\left( {{a}_{ii}^{2} - {b}_{ii}^{2}}\right) = {\left\lbrack N\left( A\right) \right\rbrack }^{2} - 2{a}_{jk}^{2}, \]\n\nwhich completes the proof.
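The angle formula and the entries of Lemma 7.12 can be checked numerically for a single \( 2 \times 2 \) block. In this sketch (our own example block \( \begin{pmatrix} 1 & 2 \\ 2 & 3 \end{pmatrix} \)) we use `atan2`, which picks one admissible branch of \( \tan 2\varphi \); any branch annihilates \( b_{jk} \).

```python
import math

def jacobi_angle(ajj, ajk, akk):
    """Rotation angle of Lemma 7.13 (atan2 picks one admissible branch)."""
    if ajj == akk:
        return math.pi / 4.0
    return 0.5 * math.atan2(2.0 * ajk, ajj - akk)

ajj, ajk, akk = 1.0, 2.0, 3.0            # the 2x2 block [[1, 2], [2, 3]]
phi = jacobi_angle(ajj, ajk, akk)
c2, s2 = math.cos(2.0 * phi), math.sin(2.0 * phi)

# entries after the rotation, taken from Lemma 7.12
bjk = ajk * c2 + 0.5 * (akk - ajj) * s2             # annihilated
bjj = ajj * math.cos(phi) ** 2 + ajk * s2 + akk * math.sin(phi) ** 2
bkk = ajj * math.sin(phi) ** 2 - ajk * s2 + akk * math.cos(phi) ** 2
```

As expected, \( b_{jk} \) vanishes, the trace \( b_{jj} + b_{kk} = a_{jj} + a_{kk} = 4 \) is preserved, and the new diagonal entries are the eigenvalues \( 2 \pm \sqrt{5} \) of the block.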
|
Yes
|
Theorem 7.14 The classical Jacobi method converges; i.e., the sequence \( \left( {A}_{\nu }\right) \) converges to a diagonal matrix with the eigenvalues of \( A \) as diagonal elements.
|
Proof. For one step of the Jacobi method, from\n\n\[ \n{\left\lbrack N\left( A\right) \right\rbrack }^{2} \leq \left( {{n}^{2} - n}\right) \mathop{\max }\limits_{\substack{{i, l = 1,\ldots, n} \\ {i \neq l} }}{a}_{il}^{2} \n\]\n\nwe obtain that\n\n\[ \n{a}_{jk}^{2} \geq \frac{{\left\lbrack N\left( A\right) \right\rbrack }^{2}}{n\left( {n - 1}\right) } \n\]\n\nfor the nondiagonal element \( {a}_{jk} \) with largest modulus. Hence, from Lemma 7.13 we deduce that\n\n\[ \n{\left\lbrack N\left( B\right) \right\rbrack }^{2} = {\left\lbrack N\left( A\right) \right\rbrack }^{2} - 2{a}_{jk}^{2} \leq {q}^{2}{\left\lbrack N\left( A\right) \right\rbrack }^{2}, \n\]\n\nwhere\n\n\[ \nq \mathrel{\text{:=}} {\left( 1 - \frac{2}{n\left( {n - 1}\right) }\right) }^{1/2}. \n\]\n\nFor the sequence \( \left( {A}_{\nu }\right) \) this implies that\n\n\[ \nN\left( {A}_{\nu }\right) \leq {q}^{\nu }N\left( {A}_{0}\right) \n\]\n\nfor all \( \nu \in \mathbb{N} \), whence \( N\left( {A}_{\nu }\right) \rightarrow 0,\nu \rightarrow \infty \), since \( q < 1 \) .
|
Yes
|
For the matrix\n\n\[ A = \left( \begin{array}{rrr} 2 & - 1 & 0 \\ - 1 & 2 & - 1 \\ 0 & - 1 & 2 \end{array}\right) \]\n\nthe first six transformed matrices for the classical Jacobi method are given by
|
\[ {A}_{1} = \left( \begin{array}{rrr} {1.0000} & {0.0000} & - {0.7071} \\ {0.0000} & {3.0000} & - {0.7071} \\ - {0.7071} & - {0.7071} & {2.0000} \end{array}\right) \]\n\n\[ {A}_{2} = \left( \begin{array}{rrr} {0.6340} & - {0.3251} & {0.0000} \\ - {0.3251} & {3.0000} & - {0.6280} \\ {0.0000} & - {0.6280} & {2.3660} \end{array}\right) \]\n\n\[ {A}_{3} = \left( \begin{array}{rrr} {0.6340} & - {0.2768} & - {0.1704} \\ - {0.2768} & {3.3864} & {0.0000} \\ - {0.1704} & {0.0000} & {1.9796} \end{array}\right) \]\n\n\[ {A}_{4} = \left( \begin{array}{rrr} {0.6064} & {0.0000} & - {0.1695} \\ {0.0000} & {3.4140} & {0.0169} \\ - {0.1695} & {0.0169} & {1.9796} \end{array}\right) \]\n\n\[ {A}_{5} = \left( \begin{matrix} {0.5858} & {0.0020} & {0.0000} \\ {0.0020} & {3.4140} & {0.0168} \\ {0.0000} & {0.0168} & {2.0002} \end{matrix}\right) ,\]\n\n\[ {A}_{6} = \left( \begin{array}{rrr} {0.5858} & {0.0020} & - {0.0000} \\ {0.0020} & {3.4142} & {0.0000} \\ - {0.0000} & {0.0000} & {2.0000} \end{array}\right) \]\n\nThe exact eigenvalues of \( A \) are \( {\lambda }_{1} = 2 + \sqrt{2},{\lambda }_{2} = 2,{\lambda }_{3} = 2 - \sqrt{2} \) .
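The whole classical Jacobi iteration fits in a few lines of Python. The sketch below applies the update formulas of Lemmas 7.12 and 7.13 in place; note that our tie-breaking for the pivot and our `atan2` branch may differ from the choices behind the tableau above, so the intermediate matrices can differ, but the diagonal converges to the same eigenvalues.

```python
import math

def classical_jacobi_step(A):
    """One step of the classical Jacobi method (Lemmas 7.12 and 7.13):
    rotate away the off-diagonal element of largest modulus, in place."""
    n = len(A)
    j, k = max(((p, q) for p in range(n) for q in range(p + 1, n)),
               key=lambda t: abs(A[t[0]][t[1]]))
    if A[j][j] == A[k][k]:
        phi = math.pi / 4.0
    else:
        phi = 0.5 * math.atan2(2.0 * A[j][k], A[j][j] - A[k][k])
    c, s = math.cos(phi), math.sin(phi)
    for i in range(n):                      # rows/columns i != j, k
        if i != j and i != k:
            aij, aik = A[i][j], A[i][k]
            A[i][j] = A[j][i] = aij * c + aik * s
            A[i][k] = A[k][i] = -aij * s + aik * c
    ajj, ajk, akk = A[j][j], A[j][k], A[k][k]
    A[j][j] = ajj * c * c + 2.0 * ajk * s * c + akk * s * s
    A[k][k] = ajj * s * s - 2.0 * ajk * s * c + akk * c * c
    A[j][k] = A[k][j] = 0.0                 # annihilated by the choice of phi

A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
for _ in range(30):
    classical_jacobi_step(A)

diag = sorted(A[i][i] for i in range(3))    # -> 2 - sqrt(2), 2, 2 + sqrt(2)
```

Thirty rotations are far more than needed here; by Theorem 7.14 the off-diagonal norm is then guaranteed to be below \( N({A}_{0}) \, q^{30} \approx 4.6 \cdot 10^{-3} \), and in practice the convergence is much faster.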
|
Yes
|
An \( n \times n \) matrix \( A \) is diagonalizable if and only if it has \( n \) linearly independent eigenvectors.
|
Assume that \( {C}^{-1}{AC} = D \), where \( D = \operatorname{diag}\left( {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right) \), is diagonal. Then \( D{e}_{j} = {\lambda }_{j}{e}_{j}, j = 1,\ldots, n \), with the canonical orthonormal basis \( {e}_{1},\ldots ,{e}_{n} \) of \( {\mathbb{C}}^{n} \) . This implies that the vectors \( {x}_{j} \mathrel{\text{:=}} C{e}_{j}, j = 1,\ldots, n \), are eigenvectors of \( A \), since\n\n\[ A{x}_{j} = {AC}{e}_{j} = {CD}{e}_{j} = C{\lambda }_{j}{e}_{j} = {\lambda }_{j}{x}_{j}. \]\n\nThe vectors \( {x}_{1},\ldots ,{x}_{n} \) are linearly independent because \( C \) is nonsingular and the \( {e}_{1},\ldots ,{e}_{n} \) are linearly independent.\n\nConversely, assume that \( {x}_{1},\ldots ,{x}_{n} \) are \( n \) linearly independent eigenvectors of \( A \) for the eigenvalues \( {\lambda }_{1},\ldots ,{\lambda }_{n} \) . Then the matrix \( C = \left( {{x}_{1},\ldots ,{x}_{n}}\right) \) formed by the eigenvectors as columns is nonsingular, and we have that\n\n\[ {AC} = \left( {A{x}_{1},\ldots, A{x}_{n}}\right) = \left( {{\lambda }_{1}{x}_{1},\ldots ,{\lambda }_{n}{x}_{n}}\right) = {CD}, \]\n\nwhere \( D = \operatorname{diag}\left( {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right) \) . Hence \( {C}^{-1}{AC} = D \) .
|
Yes
|
Theorem 7.19 Assume that \( A \) is a diagonalizable \( n \times n \) matrix with eigenvalues\n\n\[ \n\left| {\lambda }_{1}\right| > \left| {\lambda }_{2}\right| > \cdots > \left| {\lambda }_{n}\right| \n\]\n\nand corresponding eigenvectors \( {x}_{1},{x}_{2},\ldots ,{x}_{n} \), and set\n\n\[ \n{T}_{m} \mathrel{\text{:=}} \operatorname{span}\left\{ {{x}_{1},\ldots ,{x}_{m}}\right\} \;\text{ and }\;{U}_{m} \mathrel{\text{:=}} \operatorname{span}\left\{ {{x}_{m + 1},\ldots ,{x}_{n}}\right\} \n\]\n\nfor \( m = 1,\ldots, n - 1 \) . Let \( {q}_{10},\ldots ,{q}_{n0} \) be an orthonormal basis of \( {\mathbb{C}}^{n} \) and let the subspaces\n\n\[ \n{S}_{m} \mathrel{\text{:=}} \operatorname{span}\left\{ {{q}_{10},\ldots ,{q}_{m0}}\right\} \n\]\n\nsatisfy\n\n\[ \n{S}_{m} \cap {U}_{m} = \{ 0\} ,\;m = 1,\ldots, n - 1.\n\]\n\nAssume that for each \( \nu \in \mathbb{N} \) we have constructed an orthonormal system \( {q}_{1\nu },\ldots ,{q}_{n\nu } \) with the property\n\n\[ \n{A}^{\nu }{S}_{m} = \operatorname{span}\left\{ {{q}_{1\nu },\ldots ,{q}_{m\nu }}\right\} ,\;m = 1,\ldots, n - 1,\n\]\n\nand define \( {\widetilde{Q}}_{\nu } = \left( {{q}_{1\nu },\ldots ,{q}_{n\nu }}\right) \) . Then for the sequence of matrices \( {A}_{\nu } = \left( {a}_{{jk},\nu }\right) \) given by\n\n\[ \n{A}_{\nu + 1} \mathrel{\text{:=}} {\widetilde{Q}}_{\nu }^{ * }A{\widetilde{Q}}_{\nu }\n\]\n\nwe have convergence:\n\n\[ \n\mathop{\lim }\limits_{{\nu \rightarrow \infty }}{a}_{{jk},\nu } = 0,\;1 \leq k < j \leq n\n\]\n\nand\n\n\[ \n\mathop{\lim }\limits_{{\nu \rightarrow \infty }}{a}_{{jj},\nu } = {\lambda }_{j},\;j = 1,\ldots, n.\n\]
|
Proof. 1. Without loss of generality we may assume that \( {\begin{Vmatrix}{x}_{j}\end{Vmatrix}}_{2} = 1 \) for \( j = 1,\ldots, n \) . From Lemma 7.18 it follows that\n\n\[ \n{\begin{Vmatrix}{P}_{{A}^{\nu }{S}_{m}} - {P}_{{T}_{m}}\end{Vmatrix}}_{2} \leq M{r}^{\nu },\;m = 1,\ldots, n - 1,\;\nu \in \mathbb{N},\n\]\n\nfor some constant \( M \) and\n\n\[ \nr \mathrel{\text{:=}} \mathop{\max }\limits_{{m = 1,\ldots, n - 1}}\left| \frac{{\lambda }_{m + 1}}{{\lambda }_{m}}\right| < 1.\n\]\n\nFrom this, for the projections\n\n\[ \n{w}_{m\nu } \mathrel{\text{:=}} {P}_{{A}^{\nu }{S}_{m}}{x}_{m},\;m = 1,\ldots, n - 1,\n\]\n\nand \( {w}_{n\nu } \mathrel{\text{:=}} {x}_{n} \), we conclude that\n\n\[ \n{\begin{Vmatrix}{w}_{m\nu } - {x}_{m}\end{Vmatrix}}_{2} \leq M{r}^{\nu },\;m = 1,\ldots, n,\;\nu \in \mathbb{N}.\n\]\n\nFor sufficiently large \( \nu \) the vectors \( {w}_{1\nu },\ldots ,{w}_{n\nu } \) are linearly independent, and we have that\n\n\[ \n\operatorname{span}\left\{ {{w}_{1\nu },\ldots ,{w}_{m\nu }}\right\} = {A}^{\nu }{S}_{m},\;m = 1,\ldots, n - 1.\n\]\n\nTo prove this we assume to the contrary that the vectors \( {w}_{1\nu },\ldots ,{w}_{n\nu } \) are not linearly independent for all sufficiently large \( \nu \) . Then there exists a sequence \( {\nu }_{\ell } \) such that the vectors \( {w}_{1{\nu }_{\ell }},\ldots ,{w}_{n{\nu }_{\ell }} \) are linearly dependent for each \( \ell \in \mathbb{N} \) .
Hence there exist complex numbers \( {\alpha }_{1\ell },\ldots ,{\alpha }_{n\ell } \) such that\n\n\[ \n\mathop{\sum }\limits_{{k = 1}}^{n}{\alpha }_{k\ell }{w}_{k{\nu }_{\ell }} = 0\;\text{ and }\;\mathop{\sum }\limits_{{k = 1}}^{n}{\left| {\alpha }_{k\ell }\right| }^{2} = 1.\n\]\n\nBy the Bolzano-Weierstrass theorem, without loss of generality, we may assume that\n\n\[ \n{\alpha }_{k\ell } \rightarrow {\alpha }_{k},\;\ell \rightarrow \infty ,\;k = 1,\ldots, n.\n\]\n\nPassing to the limit \( \ell \rightarrow \infty \) in (7.18) with the aid of (7.17) now leads to\n\n\[ \n\mathop{\sum }\limits_{{k = 1}}^{n}{\alpha }_{k}{x}_{k} = 0\;\text{ and }\;\mathop{\sum }\limits_{{k = 1}}^{n}{\left| {\alpha }_{k}\right| }^{2} = 1,\n\]\n\nwhich contradicts the linear independence of the eigenvectors \( {x}_{1},\ldots ,{x}_{n} \) . 2. We orthonormalize by setting \( {\widetilde{p}}_{1} \mathrel{\text{:=}} {x}_{1} \) and\n\n\[ \n{\widetilde{p}}_{m} \mathrel{\text{:=}} {x}_{m} - {P}_{{T}_{m - 1}}{x}_{m},\;m = 2,\ldots, n,\n\]\n\n\[ \n{p}_{m} \mathrel{\text{:=}} \frac{{\widetilde{p}}_{m}}{{\begin{Vmatrix}{\widetilde{p}}_{m}\end{Vmatrix}}_{2}},\;m = 1,\ldots, n,\n\]
|
Yes
|
Theorem 7.20 (QR algorithm) Let \( A \) be a diagonalizable matrix with eigenvalues\n\n\[ \left| {\lambda }_{1}\right| > \left| {\lambda }_{2}\right| > \cdots > \left| {\lambda }_{n}\right| \]\n\nand corresponding eigenvectors \( {x}_{1},{x}_{2},\ldots ,{x}_{n} \), and assume that\n\n\[ \operatorname{span}\left\{ {{e}_{1},\ldots ,{e}_{m}}\right\} \cap \operatorname{span}\left\{ {{x}_{m + 1},\ldots ,{x}_{n}}\right\} = \{ 0\} \]\n\n(7.24)\n\nfor \( m = 1,\ldots, n - 1 \) . Starting with \( {A}_{1} = A \), construct a sequence \( \left( {A}_{\nu }\right) \) by determining a QR decomposition\n\n\[ {A}_{\nu } = {Q}_{\nu }{R}_{\nu } \]\n\nand setting\n\n\[ {A}_{\nu + 1} \mathrel{\text{:=}} {R}_{\nu }{Q}_{\nu } \]\n\nfor \( \nu = 1,2,\ldots \) Then for \( {A}_{\nu } = \left( {a}_{{jk},\nu }\right) \) we have convergence:\n\n\[ \mathop{\lim }\limits_{{\nu \rightarrow \infty }}{a}_{{jk},\nu } = 0,\;1 \leq k < j \leq n \]\n\nand\n\n\[ \mathop{\lim }\limits_{{\nu \rightarrow \infty }}{a}_{{jj},\nu } = {\lambda }_{j},\;j = 1,\ldots, n. \]
|
Proof. This is just a special case of Theorem 7.19.
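The iteration of Theorem 7.20 is short to implement. The sketch below factors each iterate by modified Gram-Schmidt, which is adequate for small well-conditioned examples like the tridiagonal matrix from the Jacobi example (production codes would use Householder reflections instead); the diagonal converges to the eigenvalues \( 2 + \sqrt{2}, 2, 2 - \sqrt{2} \).

```python
import math

def qr_decompose(A):
    """QR factorization by modified Gram-Schmidt; A is a list of rows,
    Q is returned as a list of orthonormal COLUMNS, R as a list of rows."""
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]
    Q, R = [], [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j]
        for i in range(j):
            R[i][j] = sum(Q[i][k] * v[k] for k in range(n))
            v = [v[k] - R[i][j] * Q[i][k] for k in range(n)]
        R[j][j] = math.sqrt(sum(x * x for x in v))
        Q.append([x / R[j][j] for x in v])
    return Q, R

def qr_step(A):
    """One QR step: A_{nu+1} = R_nu Q_nu for A_nu = Q_nu R_nu."""
    Q, R = qr_decompose(A)
    n = len(A)
    return [[sum(R[i][k] * Q[j][k] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
for _ in range(60):
    A = qr_step(A)

# diagonal in decreasing order of modulus, as the theorem predicts
diag = sorted((A[i][i] for i in range(3)), reverse=True)
```

The subdiagonal entries decay like \( \left| {\lambda }_{m+1}/{\lambda }_{m}\right| ^{\nu } \), so after 60 steps they are already at the level of machine precision for this matrix.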
|
Yes
|
Example 7.23 Let\n\n\[ A = \left( \begin{matrix} {a}_{1} & {c}_{2} & & & & \\ {c}_{2} & {a}_{2} & {c}_{3} & & & \\ & {c}_{3} & {a}_{3} & {c}_{4} & & \\ & & \cdot & \cdot & \cdot & \\ & & & {c}_{n - 1} & {a}_{n - 1} & {c}_{n} \\ & & & & {c}_{n} & {a}_{n} \end{matrix}\right) \]\n\nbe a symmetric tridiagonal matrix. Denote by \( {A}_{k} \) the \( k \times k \) submatrix consisting of the first \( k \) rows and columns of \( A \), and let \( {p}_{k} \) denote the characteristic polynomial of \( {A}_{k} \). Then we have the recurrence relations\n\n\[ {p}_{k}\left( \lambda \right) = \left( {{a}_{k} - \lambda }\right) {p}_{k - 1}\left( \lambda \right) - {c}_{k}^{2}{p}_{k - 2}\left( \lambda \right) ,\;k = 2,\ldots, n, \]\n\n(7.25)\n\nand\n\n\[ {p}_{k}^{\prime }\left( \lambda \right) = \left( {{a}_{k} - \lambda }\right) {p}_{k - 1}^{\prime }\left( \lambda \right) - {c}_{k}^{2}{p}_{k - 2}^{\prime }\left( \lambda \right) - {p}_{k - 1}\left( \lambda \right) ,\;k = 2,\ldots, n, \]\n\n(7.26)\n\nstarting with \( {p}_{0}\left( \lambda \right) = 1 \) and \( {p}_{1}\left( \lambda \right) = {a}_{1} - \lambda \).
|
Proof. The recursion (7.25) follows by expanding \( \det \left( {{A}_{k} - {\lambda I}}\right) \) with respect to the last column, and (7.26) is obtained by differentiating (7.25).
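The recursion (7.25) evaluates the whole sequence \( p_0(\lambda), \ldots, p_n(\lambda) \) in \( O(n) \) operations. A minimal sketch (the function name and the test matrix, the \( 3 \times 3 \) tridiagonal example from earlier in this chapter, are ours):

```python
def charpoly_sequence(a, c, lam):
    """p_0(lam), ..., p_n(lam) via the recursion (7.25); a holds the
    diagonal a_1..a_n and c the off-diagonal c_2..c_n of the matrix above."""
    p = [1.0, a[0] - lam]
    for k in range(2, len(a) + 1):
        p.append((a[k - 1] - lam) * p[k - 1] - c[k - 2] ** 2 * p[k - 2])
    return p

# diagonal (2, 2, 2), off-diagonal (-1, -1): eigenvalues 2 +- sqrt(2) and 2,
# so p_3(0) = det(A) = (2 + sqrt(2)) * 2 * (2 - sqrt(2)) = 4 and p_3(2) = 0
p_at_0 = charpoly_sequence([2.0, 2.0, 2.0], [-1.0, -1.0], 0.0)[-1]
p_at_2 = charpoly_sequence([2.0, 2.0, 2.0], [-1.0, -1.0], 2.0)[-1]
```

This cheap evaluation of \( p_n \) and, via (7.26), of \( p_n^{\prime} \) is what makes Newton-type root finding for the eigenvalues of tridiagonal matrices practical.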
|
Yes
|
Theorem 8.1 For \( n \in \mathbb{N} \cup \{ 0\} \), each polynomial in \( {P}_{n} \) that has more than \( n \) (complex) zeros, where each zero is counted repeatedly according to its multiplicity, must vanish identically; i.e., all its coefficients must be equal to zero.
|
Proof. Obviously, the statement is true for \( n = 0 \) . Assume that it has been proven for some \( n \geq 0 \) . By using the binomial formula for \( {x}^{k} = {\left\lbrack \left( x - z\right) + z\right\rbrack }^{k} \) we can rewrite the polynomial \( p \in {P}_{n + 1} \) in the form\n\n\[ p\left( x\right) = \mathop{\sum }\limits_{{k = 1}}^{{n + 1}}{b}_{k}{\left( x - z\right) }^{k} + {b}_{0} \]\n\nwith the coefficients \( {b}_{0},{b}_{1},\ldots ,{b}_{n + 1} \) depending on \( {a}_{0},{a}_{1},\ldots ,{a}_{n + 1} \) and \( z \) . If \( z \) is a zero of \( p \), then we must have \( {b}_{0} = 0 \), and this implies that \( p\left( x\right) = \left( {x - z}\right) q\left( x\right) \) with \( q \in {P}_{n} \) . Obviously, \( q \) has more than \( n \) zeros, since \( p \) has more than \( n + 1 \) zeros. Hence, by the induction assumption, \( q \) must vanish identically, and this implies that \( p \) vanishes identically.
|
Yes
|
Theorem 8.2 The monomials \( {u}_{k}\left( x\right) \mathrel{\text{:=}} {x}^{k}, k = 0,\ldots, n \), are linearly independent.
|
Proof. In order to prove this, assume that\n\n\[ \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}{u}_{k} = 0 \]\n\nthat is,\n\n\[ \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}{x}^{k} = 0,\;x \in \left\lbrack {a, b}\right\rbrack \]\n\nThen the polynomial with coefficients \( {a}_{0},{a}_{1},\ldots ,{a}_{n} \) has more than \( n \) distinct zeros, and from Theorem 8.1 it follows that all the coefficients must be zero.
|
Yes
|
Theorem 8.3 Given \( n + 1 \) distinct points \( {x}_{0},\ldots ,{x}_{n} \in \left\lbrack {a, b}\right\rbrack \) and \( n + 1 \) values \( {y}_{0},\ldots ,{y}_{n} \in \mathbb{R} \), there exists a unique polynomial \( {p}_{n} \in {P}_{n} \) with the property\n\n\[ \n{p}_{n}\left( {x}_{j}\right) = {y}_{j},\;j = 0,\ldots, n.\n\]
|
Proof. For \( k = 0,\ldots, n \) consider the Lagrange basis polynomials\n\n\[ \n{\ell }_{k}\left( x\right) \mathrel{\text{:=}} \mathop{\prod }\limits_{\substack{{i = 0} \\ {i \neq k} }}^{n}\frac{x - {x}_{i}}{{x}_{k} - {x}_{i}}.\n\]\n\nWe note that \( {\ell }_{k} \in {P}_{n} \) for \( k = 0,\ldots, n \) and that the equations\n\n\[ \n{\ell }_{k}\left( {x}_{j}\right) = {\delta }_{jk},\;j, k = 0,\ldots, n,\n\]\n\nhold, where \( {\delta }_{jk} = 1 \) for \( k = j \), and \( {\delta }_{jk} = 0 \) for \( k \neq j \) . It follows that \( {p}_{n} \) given by (8.2), i.e., by\n\n\[ \n{p}_{n}\left( x\right) = \mathop{\sum }\limits_{{k = 0}}^{n}{y}_{k}{\ell }_{k}\left( x\right) ,\n\]\n\nis in \( {P}_{n} \), and it fulfills the required interpolation conditions \( {p}_{n}\left( {x}_{j}\right) = {y}_{j}, j = 0,\ldots, n \) .\n\nTo prove uniqueness of the interpolation polynomial we assume that \( {p}_{n,1},{p}_{n,2} \in {P}_{n} \) are two polynomials satisfying (8.1). Then the difference \( {p}_{n} \mathrel{\text{:=}} {p}_{n,1} - {p}_{n,2} \) satisfies \( {p}_{n}\left( {x}_{j}\right) = 0, j = 0,\ldots, n \) ; i.e., the polynomial \( {p}_{n} \in {P}_{n} \) has \( n + 1 \) zeros and therefore by Theorem 8.1 must be identically zero. This implies that \( {p}_{n,1} = {p}_{n,2} \) .
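The existence part of the proof is constructive and translates directly into code. A short sketch (our own function name and data), using the Lagrange basis \( \ell_k(x) = \prod_{i \neq k} (x - x_i)/(x_k - x_i) \):

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the unique p_n in P_n with p_n(x_j) = y_j, using the
    Lagrange basis l_k(x) = prod_{i != k} (x - x_i) / (x_k - x_i)."""
    n = len(xs)
    total = 0.0
    for k in range(n):
        ell = 1.0
        for i in range(n):
            if i != k:
                ell *= (x - xs[i]) / (xs[k] - xs[i])
        total += ys[k] * ell
    return total

xs = [0.0, 1.0, 3.0]
ys = [xi ** 2 for xi in xs]                  # data sampled from x^2
value = lagrange_interpolate(xs, ys, 2.0)    # close to 2^2 = 4
```

Since the data come from a polynomial of degree 2 and three nodes are used, the uniqueness statement guarantees that the interpolant is \( x^2 \) itself, which the evaluation at \( x = 2 \) confirms up to rounding.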
|
Yes
|
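The Lagrange construction used in this proof can be sketched numerically. A minimal Python sketch, assuming the standard product form of the Lagrange factors, \( {\ell }_{k}\left( x\right) = \mathop{\prod }\limits_{{i \neq k}}\left( {x - {x}_{i}}\right) /\left( {{x}_{k} - {x}_{i}}\right) \), which is the form referenced as (8.2)/(8.3) in the text:

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the interpolation polynomial p_n at x via Lagrange factors."""
    total = 0.0
    for k in range(len(xs)):
        ell_k = 1.0  # ell_k(x) = prod_{i != k} (x - x_i) / (x_k - x_i)
        for i in range(len(xs)):
            if i != k:
                ell_k *= (x - xs[i]) / (xs[k] - xs[i])
        total += ys[k] * ell_k
    return total
```

Since \( {\ell }_{k}\left( {x}_{j}\right) = {\delta }_{jk} \), the sum reproduces each \( {y}_{j} \) at the interpolation points.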
Lemma 8.6 The divided differences satisfy the relation\n\n\[ \n{D}_{j}^{k} = \mathop{\sum }\limits_{{m = j}}^{{j + k}}{y}_{m}\mathop{\prod }\limits_{\substack{{i = j} \\ {i \neq m} }}^{{j + k}}\frac{1}{{x}_{m} - {x}_{i}},\;j = 0,\ldots, n - k,\;k = 1,\ldots, n. \n\]\n\n(8.4)
|
Proof. We proceed by induction with respect to the order \( k \) . Trivially, (8.4) holds for \( k = 1 \) . We assume that (8.4) has been proven for order \( k - 1 \) for some \( k \geq 2 \) . Then, using Definition 8.4, the induction assumption, and the identity\n\n\[ \n\frac{1}{{x}_{j + k} - {x}_{j}}\left\{ {\frac{1}{{x}_{m} - {x}_{j + k}} - \frac{1}{{x}_{m} - {x}_{j}}}\right\} = \frac{1}{\left( {{x}_{m} - {x}_{j + k}}\right) \left( {{x}_{m} - {x}_{j}}\right) }, \n\]\n\nwe obtain\n\n\[ \n{D}_{j}^{k} = \frac{1}{{x}_{j + k} - {x}_{j}}\left\{ {\mathop{\sum }\limits_{{m = j + 1}}^{{j + k}}{y}_{m}\mathop{\prod }\limits_{\substack{{i = j + 1} \\ {i \neq m} }}^{{j + k}}\frac{1}{{x}_{m} - {x}_{i}} - \mathop{\sum }\limits_{{m = j}}^{{j + k - 1}}{y}_{m}\mathop{\prod }\limits_{\substack{{i = j} \\ {i \neq m} }}^{{j + k - 1}}\frac{1}{{x}_{m} - {x}_{i}}}\right\} \n\]\n\n\[ \n= \frac{1}{{x}_{j + k} - {x}_{j}}\mathop{\sum }\limits_{{m = j + 1}}^{{j + k - 1}}{y}_{m}\left\{ {\frac{1}{{x}_{m} - {x}_{j + k}} - \frac{1}{{x}_{m} - {x}_{j}}}\right\} \mathop{\prod }\limits_{\substack{{i = j + 1} \\ {i \neq m} }}^{{j + k - 1}}\frac{1}{{x}_{m} - {x}_{i}} \n\]\n\n\[ \n+ {y}_{j + k}\mathop{\prod }\limits_{{i = j}}^{{j + k - 1}}\frac{1}{{x}_{j + k} - {x}_{i}} + {y}_{j}\mathop{\prod }\limits_{{i = j + 1}}^{{j + k}}\frac{1}{{x}_{j} - {x}_{i}} = \mathop{\sum }\limits_{{m = j}}^{{j + k}}{y}_{m}\mathop{\prod }\limits_{\substack{{i = j} \\ {i \neq m} }}^{{j + k}}\frac{1}{{x}_{m} - {x}_{i}} \n\]\n\ni.e., (8.4) also holds for order \( k \) .
|
Yes
|
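Definition 8.4 is not reproduced above; the sketch below assumes the usual divided-difference recursion \( {D}_{j}^{0} = {y}_{j} \), \( {D}_{j}^{k} = \left( {{D}_{j + 1}^{k - 1} - {D}_{j}^{k - 1}}\right) /\left( {{x}_{j + k} - {x}_{j}}\right) \), which is the identity used in the proof, and checks it against the closed form (8.4):

```python
def divided_differences(xs, ys):
    """Table D[k][j] = D_j^k built from the recursion (assumed Definition 8.4)."""
    n = len(xs) - 1
    D = [list(ys)]  # order k = 0: D_j^0 = y_j
    for k in range(1, n + 1):
        D.append([(D[k - 1][j + 1] - D[k - 1][j]) / (xs[j + k] - xs[j])
                  for j in range(n - k + 1)])
    return D

def closed_form(xs, ys, j, k):
    """Right-hand side of (8.4)."""
    total = 0.0
    for m in range(j, j + k + 1):
        prod = 1.0
        for i in range(j, j + k + 1):
            if i != m:
                prod /= xs[m] - xs[i]
        total += ys[m] * prod
    return total
```

The test data are arbitrary; by Lemma 8.6 the two computations must agree for any distinct points.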
Theorem 8.7 In the Newton representation, for \( n \geq 1 \) the uniquely determined interpolation polynomial \( {p}_{n} \) of Theorem 8.3 is given by\n\n\[ \n{p}_{n}\left( x\right) = {y}_{0} + \mathop{\sum }\limits_{{k = 1}}^{n}{D}_{0}^{k}\mathop{\prod }\limits_{{i = 0}}^{{k - 1}}\left( {x - {x}_{i}}\right) .\n\]\n\n(8.5)
|
Proof. We denote the right-hand side of (8.5) by \( {\widetilde{p}}_{n} \) and establish \( {p}_{n} = {\widetilde{p}}_{n} \) by induction with respect to the degree \( n \) . For \( n = 1 \) the representation (8.5) is correct. We assume that (8.5) has been proven for degree \( n - 1 \) for some \( n \geq 2 \) and consider the difference \( {d}_{n} \mathrel{\text{:=}} {p}_{n} - {\widetilde{p}}_{n} \) . Since\n\n\[ \n{d}_{n}\left( x\right) = {p}_{n}\left( x\right) - {\widetilde{p}}_{n - 1}\left( x\right) - {D}_{0}^{n}\mathop{\prod }\limits_{{i = 0}}^{{n - 1}}\left( {x - {x}_{i}}\right) ,\n\]\n\nas a consequence of Theorem 8.3 and Lemma 8.6 the coefficient of \( {x}^{n} \) in the polynomial \( {d}_{n} \) vanishes; i.e., \( {d}_{n} \in {P}_{n - 1} \) . Using the induction assumption, we have that\n\n\[ \n{\widetilde{p}}_{n - 1}\left( {x}_{j}\right) = {y}_{j} = {p}_{n}\left( {x}_{j}\right) ,\;j = 0,\ldots, n - 1,\n\]\n\nand therefore\n\n\[ \n{d}_{n}\left( {x}_{j}\right) = 0,\;j = 0,\ldots, n - 1.\n\]\n\nHence, by Theorem 8.1 it follows that \( {d}_{n} = 0 \), and therefore \( {p}_{n} = {\widetilde{p}}_{n} \) .
|
Yes
|
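The Newton representation (8.5) lends itself to evaluation by nested multiplication. A minimal sketch, again assuming the divided-difference recursion of Definition 8.4 for computing the \( {D}_{0}^{k} \):

```python
def newton_coefficients(xs, ys):
    """In-place divided differences; returns [D_0^0, D_0^1, ..., D_0^n]."""
    c = list(ys)
    n = len(xs) - 1
    for k in range(1, n + 1):
        # update from the right so lower-order differences are still available
        for j in range(n, k - 1, -1):
            c[j] = (c[j] - c[j - 1]) / (xs[j] - xs[j - k])
    return c

def newton_eval(xs, c, x):
    """Evaluate (8.5) by a Horner-like nested multiplication."""
    p = c[-1]
    for k in range(len(c) - 2, -1, -1):
        p = p * (x - xs[k]) + c[k]
    return p
```

For the data \( {y}_{j} = {x}_{j}^{2} + {x}_{j} + 1 \) the leading coefficient \( {D}_{0}^{n} \) equals 1, as expected.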
Theorem 8.9 Given \( n + 1 \) distinct points \( {x}_{0},\ldots ,{x}_{n} \in \left\lbrack {a, b}\right\rbrack \) and \( n + 1 \) values \( {y}_{0},\ldots ,{y}_{n} \in \mathbb{R} \), the uniquely determined interpolation polynomials \( {p}_{i}^{k} \in {P}_{k}, i = 0,\ldots, n - k, k = 0,\ldots, n \), with the interpolation property\n\n\[ \n{p}_{i}^{k}\left( {x}_{j}\right) = {y}_{j},\;j = i,\ldots, i + k \n\]\n\nsatisfy the recursive relation\n\n\[ \n{p}_{i}^{0}\left( x\right) = {y}_{i} \n\]\n\n\[ \n{p}_{i}^{k}\left( x\right) = \frac{\left( {x - {x}_{i}}\right) {p}_{i + 1}^{k - 1}\left( x\right) - \left( {x - {x}_{i + k}}\right) {p}_{i}^{k - 1}\left( x\right) }{{x}_{i + k} - {x}_{i}},\;k = 1,\ldots, n. \n\]\n\n(8.6)
|
Proof. We again proceed by induction with respect to the degree \( k \) . Obviously, the statement is true for \( k = 1 \) . Assume that the assertion has been proven for degree \( k - 1 \) for some \( k \geq 2 \) . Then the right-hand side of (8.6) describes a polynomial \( p \in {P}_{k} \), and by the induction assumption we find that the interpolation conditions\n\n\[ \np\left( {x}_{j}\right) = \frac{\left( {{x}_{j} - {x}_{i}}\right) {y}_{j} - \left( {{x}_{j} - {x}_{i + k}}\right) {y}_{j}}{{x}_{i + k} - {x}_{i}} = {y}_{j},\;j = i + 1,\ldots, i + k - 1, \n\]\n\nas well as \( p\left( {x}_{i}\right) = {y}_{i} \) and \( p\left( {x}_{i + k}\right) = {y}_{i + k} \) are fulfilled.
|
Yes
|
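The recursion (8.6) gives a direct evaluation scheme (the Neville scheme) when only the value of the interpolation polynomial at a single point \( x \) is needed. A minimal sketch:

```python
def neville(xs, ys, x):
    """Evaluate the interpolation polynomial at x via the recursion (8.6)."""
    p = list(ys)  # p[i] holds p_i^0(x) = y_i
    n = len(xs) - 1
    for k in range(1, n + 1):
        for i in range(n - k + 1):
            p[i] = ((x - xs[i]) * p[i + 1]
                    - (x - xs[i + k]) * p[i]) / (xs[i + k] - xs[i])
    return p[0]  # p_0^n(x)
```

Each pass overwrites the column of values \( {p}_{i}^{k - 1}\left( x\right) \) by \( {p}_{i}^{k}\left( x\right) \), so only one array of length \( n + 1 \) is needed.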
Theorem 8.10 Let \( f : \left\lbrack {a, b}\right\rbrack \rightarrow \mathbb{R} \) be \( \left( {n + 1}\right) \) -times continuously differentiable. Then the remainder \( {R}_{n}f \mathrel{\text{:=}} f - {L}_{n}f \) for polynomial interpolation with \( n + 1 \) distinct points \( {x}_{0},\ldots ,{x}_{n} \in \left\lbrack {a, b}\right\rbrack \) can be represented in the form\n\n\[ \left( {{R}_{n}f}\right) \left( x\right) = \frac{{f}^{\left( n + 1\right) }\left( \xi \right) }{\left( {n + 1}\right) !}\mathop{\prod }\limits_{{j = 0}}^{n}\left( {x - {x}_{j}}\right) ,\;x \in \left\lbrack {a, b}\right\rbrack ,\]\n\n(8.8)\n\nfor some \( \xi \in \left\lbrack {a, b}\right\rbrack \) depending on \( x \) .
|
Proof. Since (8.8) is trivially satisfied if \( x \) coincides with one of the interpolation points \( {x}_{0},\ldots ,{x}_{n} \), we need be concerned only with the case where \( x \) does not coincide with one of the interpolation points. We define\n\n\[ {q}_{n + 1}\left( x\right) \mathrel{\text{:=}} \mathop{\prod }\limits_{{j = 0}}^{n}\left( {x - {x}_{j}}\right) \]\n\nand, keeping \( x \) fixed, consider \( g : \left\lbrack {a, b}\right\rbrack \rightarrow \mathbb{R} \) given by\n\n\[ g\left( y\right) \mathrel{\text{:=}} f\left( y\right) - \left( {{L}_{n}f}\right) \left( y\right) - {q}_{n + 1}\left( y\right) \frac{f\left( x\right) - \left( {{L}_{n}f}\right) \left( x\right) }{{q}_{n + 1}\left( x\right) },\;y \in \left\lbrack {a, b}\right\rbrack . \]\n\nBy the assumption on \( f \), the function \( g \) is also \( \left( {n + 1}\right) \) -times continuously differentiable. Obviously, \( g \) has at least \( n + 2 \) zeros, namely \( x \) and \( {x}_{0},\ldots ,{x}_{n} \) . Then, by Rolle’s theorem the derivative \( {g}^{\prime } \) has at least \( n + 1 \) zeros. Repeating the argument, by induction we deduce that the derivative \( {g}^{\left( n + 1\right) } \) has at least one zero in \( \left\lbrack {a, b}\right\rbrack \), which we denote by \( \xi \) . For this zero we have that\n\n\[ 0 = {f}^{\left( n + 1\right) }\left( \xi \right) - \left( {n + 1}\right) !\frac{\left( {{R}_{n}f}\right) \left( x\right) }{{q}_{n + 1}\left( x\right) }, \]\n\nand from this we obtain (8.8).
|
Yes
|
The linear interpolation is given by\n\n\[ \n\left( {{L}_{1}f}\right) \left( x\right) = \frac{1}{h}\left\lbrack {f\left( {x}_{0}\right) \left( {{x}_{1} - x}\right) + f\left( {x}_{1}\right) \left( {x - {x}_{0}}\right) }\right\rbrack \n\]\n\nwith the step width \( h = {x}_{1} - {x}_{0} \) . For the polynomial \( {q}_{2}\left( x\right) = \left( {x - {x}_{0}}\right) \left( {x - {x}_{1}}\right) \)\n\nwe have that\n\n\[ \n\mathop{\max }\limits_{{x \in \left\lbrack {{x}_{0},{x}_{1}}\right\rbrack }}\left| {{q}_{2}\left( x\right) }\right| = \frac{{h}^{2}}{4}.\n\]
|
Therefore, by Corollary 8.11, the error occurring in linear interpolation of a twice continuously differentiable function \( f \) can be estimated by\n\n\[ \n\left| {\left( {{R}_{1}f}\right) \left( x\right) }\right| \leq \frac{{h}^{2}}{8}\mathop{\max }\limits_{{y \in \left\lbrack {{x}_{0},{x}_{1}}\right\rbrack }}\left| {{f}^{\prime \prime }\left( y\right) }\right| ,\;x \in \left\lbrack {{x}_{0},{x}_{1}}\right\rbrack .\n\]
|
Yes
|
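The bound \( {h}^{2}/8 \cdot \max \left| {f}^{\prime \prime }\right| \) can be checked numerically; the choice \( f = \sin \) on \( \left\lbrack {0,{0.5}}\right\rbrack \) below is an arbitrary test case, not from the text:

```python
import math

x0, x1 = 0.0, 0.5
h = x1 - x0

def L1(x):
    # linear interpolant from the formula above
    return (math.sin(x0) * (x1 - x) + math.sin(x1) * (x - x0)) / h

# max |f''| = max |sin| on [0, 0.5] is attained at x1
bound = h ** 2 / 8 * math.sin(x1)
max_err = max(abs(math.sin(x0 + i * h / 1000) - L1(x0 + i * h / 1000))
              for i in range(1001))
```

For this function the observed maximal error stays roughly a factor of two below the theoretical bound, since \( \left| {f}^{\prime \prime }\right| \) is not maximal where \( \left| {q}_{2}\right| \) is.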
Example 8.13 Let \( f\left( x\right) \mathrel{\text{:=}} \sin x \) and let \( {x}_{0},\ldots ,{x}_{n} \in \left\lbrack {0,\pi }\right\rbrack \) be \( n + 1 \) distinct points. Since
|
\[ \left| {{f}^{\left( n + 1\right) }\left( x\right) }\right| \leq 1,\;x \in \left\lbrack {0,\pi }\right\rbrack \] and \[ \left| {{q}_{n + 1}\left( x\right) }\right| \leq {\pi }^{n + 1},\;x \in \left\lbrack {0,\pi }\right\rbrack \] by Corollary 8.11, we have the estimate \[ \left| {\left( {{R}_{n}f}\right) \left( x\right) }\right| \leq \frac{{\pi }^{n + 1}}{\left( {n + 1}\right) !},\;x \in \left\lbrack {0,\pi }\right\rbrack \] Hence the sequence \( \left( {{L}_{n}f}\right) \) of interpolation polynomials converges to the interpolated function \( f \) uniformly on \( \left\lbrack {0,\pi }\right\rbrack \) as \( n \rightarrow \infty \) .
|
Yes
|
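The uniform convergence asserted in Example 8.13 can be observed numerically; equidistant nodes are used below purely for concreteness (the estimate holds for any distinct nodes in \( \left\lbrack {0,\pi }\right\rbrack \) ), and evaluation uses the recursion (8.6):

```python
import math

def neville(xs, ys, x):
    # evaluation of the interpolation polynomial via the recursion (8.6)
    p = list(ys)
    n = len(xs) - 1
    for k in range(1, n + 1):
        for i in range(n - k + 1):
            p[i] = ((x - xs[i]) * p[i + 1]
                    - (x - xs[i + k]) * p[i]) / (xs[i + k] - xs[i])
    return p[0]

def max_error(n, samples=200):
    """Max deviation of L_n(sin) from sin on [0, pi], equidistant nodes."""
    xs = [math.pi * j / n for j in range(n + 1)]
    ys = [math.sin(xj) for xj in xs]
    return max(abs(math.sin(t) - neville(xs, ys, t))
               for t in (math.pi * i / samples for i in range(samples + 1)))
```

The observed errors lie well below the bound \( {\pi }^{n + 1}/\left( {n + 1}\right) ! \) and shrink rapidly with \( n \).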
Theorem 8.16 (Marcinkiewicz) For each function \( f \in C\left\lbrack {a, b}\right\rbrack \) there exists a sequence of interpolation points \( \left( {x}_{j}^{\left( n\right) }\right), j = 0,\ldots, n, n = 0,1,\ldots \) , such that the sequence \( \left( {{L}_{n}f}\right) \) of interpolation polynomials \( {L}_{n}f \in {P}_{n} \) with \( \left( {{L}_{n}f}\right) \left( {x}_{j}^{\left( n\right) }\right) = f\left( {x}_{j}^{\left( n\right) }\right), j = 0,\ldots, n \), converges to \( f \) uniformly on \( \left\lbrack {a, b}\right\rbrack \) .
|
Proof. The proof relies on the Weierstrass approximation theorem and the Chebyshev alternation theorem. The Weierstrass approximation theorem (see [16]) ensures that for each \( f \in C\left\lbrack {a, b}\right\rbrack \) there exists a sequence of polynomials \( {p}_{n} \in {P}_{n} \) such that \( {\begin{Vmatrix}{p}_{n} - f\end{Vmatrix}}_{\infty } \rightarrow 0 \) as \( n \rightarrow \infty \) . As a consequence of the Chebyshev alternation theorem from approximation theory (see [16]), for the uniquely determined best approximation \( {\widetilde{p}}_{n} \) to \( f \) in the maximum norm with respect to \( {P}_{n} \), the error \( {\widetilde{p}}_{n} - f \) has at least \( n + 1 \) zeros in \( \left\lbrack {a, b}\right\rbrack \) . Then taking the sequence of these zeros as the sequence of interpolation points implies the statement of the theorem.
|
Yes
|
Theorem 8.17 (Faber) For each sequence of interpolation points \( \left( {x}_{j}^{\left( n\right) }\right) \) there exists a function \( f \in C\left\lbrack {a, b}\right\rbrack \) such that the sequence \( \left( {{L}_{n}f}\right) \) of interpolation polynomials \( {L}_{n}f \in {P}_{n} \) does not converge to \( f \) uniformly on \( \left\lbrack {a, b}\right\rbrack \) .
|
Proof. This is a consequence of the uniform boundedness principle, Theorem 12.7. It implies that from the convergence of the sequence \( \left( {{L}_{n}f}\right) \) for all \( f \in C\left\lbrack {a, b}\right\rbrack \) it follows that there must exist a constant \( C > 0 \) such that \( {\begin{Vmatrix}{L}_{n}\end{Vmatrix}}_{\infty } \leq C \) for all \( n \in \mathbb{N} \) . Then the statement of the theorem is obtained by showing that the interpolation operator \( {L}_{n} \) satisfies \( {\begin{Vmatrix}{L}_{n}\end{Vmatrix}}_{\infty } \geq c\ln n \) for all \( n \in \mathbb{N} \) and some \( c > 0 \) (see [16]).
|
Yes
|
Theorem 8.18 Given \( n + 1 \) distinct points \( {x}_{0},\ldots ,{x}_{n} \in \left\lbrack {a, b}\right\rbrack \) and \( {2n} + 2 \) values \( {y}_{0},\ldots ,{y}_{n} \in \mathbb{R} \) and \( {y}_{0}^{\prime },\ldots ,{y}_{n}^{\prime } \in \mathbb{R} \), there exists a unique polynomial \( {p}_{{2n} + 1} \in {P}_{{2n} + 1} \) with the property\n\n\[ \n{p}_{{2n} + 1}\left( {x}_{j}\right) = {y}_{j},\;{p}_{{2n} + 1}^{\prime }\left( {x}_{j}\right) = {y}_{j}^{\prime },\;j = 0,\ldots, n.\n\]\n\n(8.10)
|
This Hermite interpolation polynomial is given by\n\n\[ \n{p}_{{2n} + 1} = \mathop{\sum }\limits_{{k = 0}}^{n}\left\lbrack {{y}_{k}{H}_{k}^{0} + {y}_{k}^{\prime }{H}_{k}^{1}}\right\rbrack\n\]\n\n(8.11)\n\nwith the Hermite factors\n\n\[ \n{H}_{k}^{0}\left( x\right) \mathrel{\text{:=}} \left\lbrack {1 - 2{\ell }_{k}^{\prime }\left( {x}_{k}\right) \left( {x - {x}_{k}}\right) }\right\rbrack {\left\lbrack {\ell }_{k}\left( x\right) \right\rbrack }^{2},\;{H}_{k}^{1}\left( x\right) \mathrel{\text{:=}} \left( {x - {x}_{k}}\right) {\left\lbrack {\ell }_{k}\left( x\right) \right\rbrack }^{2}\n\]\n\nexpressed in terms of the Lagrange factors from Theorem 8.3.\n\nProof. Obviously, the polynomial \( {p}_{{2n} + 1} \) belongs to \( {P}_{{2n} + 1} \), since the Hermite factors have degree \( {2n} + 1 \) . From (8.3), by elementary calculations it can be seen that (see Problem 8.7)\n\n\[ \n{H}_{k}^{0}\left( {x}_{j}\right) = {H}_{k}^{1\prime }\left( {x}_{j}\right) = {\delta }_{jk},\;{H}_{k}^{0\prime }\left( {x}_{j}\right) = {H}_{k}^{1}\left( {x}_{j}\right) = 0,\;j, k = 0,\ldots, n.\n\]\n\n(8.12)\n\nFrom this it follows that the polynomial (8.11) satisfies the Hermite interpolation property (8.10).\n\nTo prove uniqueness of the Hermite interpolation polynomial we assume that \( {p}_{{2n} + 1,1},{p}_{{2n} + 1,2} \in {P}_{{2n} + 1} \) are two polynomials having the interpolation property (8.10). Then the difference \( {p}_{{2n} + 1} \mathrel{\text{:=}} {p}_{{2n} + 1,1} - {p}_{{2n} + 1,2} \) satisfies\n\n\[ \n{p}_{{2n} + 1}\left( {x}_{j}\right) = {p}_{{2n} + 1}^{\prime }\left( {x}_{j}\right) = 0,\;j = 0,\ldots, n\n\]\n\ni.e., the polynomial \( {p}_{{2n} + 1} \in {P}_{{2n} + 1} \) has \( n + 1 \) zeros of order two and therefore, by Theorem 8.1, must be identically equal to zero. This implies that \( {p}_{{2n} + 1,1} = {p}_{{2n} + 1,2} \) .
|
Yes
|
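Formula (8.11) can be sketched directly, using \( {\ell }_{k}^{\prime }\left( {x}_{k}\right) = \mathop{\sum }\limits_{{i \neq k}}1/\left( {{x}_{k} - {x}_{i}}\right) \), which follows from the product form of the Lagrange factors:

```python
def hermite_interpolate(xs, ys, yps, x):
    """Evaluate p_{2n+1} at x via (8.11) with the Hermite factors H_k^0, H_k^1."""
    total = 0.0
    for k in range(len(xs)):
        ell = 1.0   # ell_k(x)
        dell = 0.0  # ell_k'(x_k) = sum_{i != k} 1 / (x_k - x_i)
        for i in range(len(xs)):
            if i != k:
                ell *= (x - xs[i]) / (xs[k] - xs[i])
                dell += 1.0 / (xs[k] - xs[i])
        H0 = (1.0 - 2.0 * dell * (x - xs[k])) * ell ** 2
        H1 = (x - xs[k]) * ell ** 2
        total += ys[k] * H0 + yps[k] * H1
    return total
```

By the uniqueness part of Theorem 8.18, feeding values and derivatives of \( f\left( x\right) = {x}^{3} \) at two points reproduces \( {x}^{3} \) exactly.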
Theorem 8.21 A trigonometric polynomial in \( {T}_{n} \) that has more than \( {2n} \) distinct zeros in the periodicity interval \( \lbrack 0,{2\pi }) \) must vanish identically; i.e., all its coefficients must be equal to zero.
|
Proof. We consider a trigonometric polynomial \( q \in {T}_{n} \) of the form\n\n\[ q\left( t\right) = \frac{{a}_{0}}{2} + \mathop{\sum }\limits_{{k = 1}}^{n}\left\lbrack {{a}_{k}\cos {kt} + {b}_{k}\sin {kt}}\right\rbrack \]\n\n(8.14)\n\nSetting \( {b}_{0} = 0 \) ,\n\n\[ {\gamma }_{k} \mathrel{\text{:=}} \frac{1}{2}\left( {{a}_{k} - i{b}_{k}}\right) ,\;{\gamma }_{-k} \mathrel{\text{:=}} \frac{1}{2}\left( {{a}_{k} + i{b}_{k}}\right) ,\;k = 0,\ldots, n, \]\n\n(8.15)\n\nand using Euler's formula\n\n\[ {e}^{it} = \cos t + i\sin t \]\n\nwe can rewrite (8.14) in the complex form\n\n\[ q\left( t\right) = \mathop{\sum }\limits_{{k = - n}}^{n}{\gamma }_{k}{e}^{ikt} \]\n\n(8.16)\n\nTherefore, substituting \( z = {e}^{it} \) and setting\n\n\[ p\left( z\right) \mathrel{\text{:=}} \mathop{\sum }\limits_{{k = - n}}^{n}{\gamma }_{k}{z}^{n + k} \]\n\nwe have the relation\n\n\[ q\left( t\right) = {z}^{-n}p\left( z\right) \]\n\nNow assume that the trigonometric polynomial \( q \in {T}_{n} \) has more than \( {2n} \) distinct zeros in the interval \( \lbrack 0,{2\pi }) \) . Then the algebraic polynomial \( p \in {P}_{2n} \) has more than \( {2n} \) distinct zeros lying on the unit circle in the complex plane, since the function \( t \mapsto {e}^{it} \) maps \( \lbrack 0,{2\pi }) \) bijectively onto the unit circle. By Theorem 8.1, the algebraic polynomial \( p \) must be identically zero, and now (8.15) implies that also \( q \) must be identically zero.
|
Yes
|
Theorem 8.22 The cosine functions \( {c}_{k}\left( t\right) \mathrel{\text{:=}} \cos {kt}, k = 0,1,\ldots, n \), and the sine functions \( {s}_{k}\left( t\right) \mathrel{\text{:=}} \sin {kt}, k = 1,\ldots, n \), are linearly independent in the function space \( C\left\lbrack {0,{2\pi }}\right\rbrack \) .
|
Proof. To prove this, assume that\n\n\[ \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}{c}_{k} + \mathop{\sum }\limits_{{k = 1}}^{n}{b}_{k}{s}_{k} = 0 \]\n\nthat is,\n\n\[ \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}\cos {kt} + \mathop{\sum }\limits_{{k = 1}}^{n}{b}_{k}\sin {kt} = 0,\;t \in \left\lbrack {0,{2\pi }}\right\rbrack . \]\n\nThen the trigonometric polynomial with coefficients \( {a}_{0},\ldots ,{a}_{n} \) and \( {b}_{1},\ldots ,{b}_{n} \) has more than \( {2n} \) distinct zeros in \( \lbrack 0,{2\pi }) \), and from Theorem 8.21 it follows that all the coefficients must be zero. Note that this linear independence also can be deduced from Theorem 3.17.
|
Yes
|
Theorem 8.23 Given \( {2n} + 1 \) distinct points \( {t}_{0},\ldots ,{t}_{2n} \in \lbrack 0,{2\pi }) \) and \( {2n} + 1 \) values \( {y}_{0},\ldots ,{y}_{2n} \in \mathbb{R} \), there exists a uniquely determined trigonometric polynomial \( {q}_{n} \in {T}_{n} \) with the property\n\n\[ \n{q}_{n}\left( {t}_{j}\right) = {y}_{j},\;j = 0,\ldots ,{2n}.\n\]
|
Proof. The function \( {q}_{n} \) belongs to \( {T}_{n} \), since the Lagrange factors are trigonometric polynomials of degree \( n \) . The latter is a consequence of\n\n\[ \n\sin \frac{t - {t}_{0}}{2}\sin \frac{t - {t}_{1}}{2} = \frac{1}{2}\cos \frac{{t}_{1} - {t}_{0}}{2} - \frac{1}{2}\cos \left( {t - \frac{{t}_{1} + {t}_{0}}{2}}\right)\n\]\n\ni.e., each of the functions \( {\ell }_{k} \) is a product of \( n \) trigonometric polynomials of degree one. As in Theorem 8.3, we have \( {\ell }_{k}\left( {t}_{j}\right) = {\delta }_{jk} \) for \( j, k = 0,\ldots ,{2n} \) , which shows that \( {q}_{n} \) indeed solves the trigonometric interpolation problem.\n\nUniqueness of the trigonometric interpolation polynomial follows analogously to the proof of Theorem 8.3 with the aid of Theorem 8.21.
|
Yes
|
Theorem 8.24 There exists a unique trigonometric polynomial\n\n\[ \n{q}_{n}\left( t\right) = \frac{{a}_{0}}{2} + \mathop{\sum }\limits_{{k = 1}}^{n}\left\lbrack {{a}_{k}\cos {kt} + {b}_{k}\sin {kt}}\right\rbrack \n\]\n\nsatisfying the interpolation property\n\n\[ \n{q}_{n}\left( \frac{2\pi j}{{2n} + 1}\right) = {y}_{j},\;j = 0,\ldots ,{2n}. \n\]
|
Its coefficients are given by\n\n\[ \n{a}_{k} = \frac{2}{{2n} + 1}\mathop{\sum }\limits_{{j = 0}}^{{2n}}{y}_{j}\cos \frac{2\pi jk}{{2n} + 1},\;k = 0,\ldots, n, \n\]\n\n\[ \n{b}_{k} = \frac{2}{{2n} + 1}\mathop{\sum }\limits_{{j = 0}}^{{2n}}{y}_{j}\sin \frac{2\pi jk}{{2n} + 1},\;k = 1,\ldots, n. \n\]
|
Yes
|
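The coefficient formulas of Theorem 8.24 can be checked by verifying the interpolation property at the equidistant points \( {t}_{j} = {2\pi j}/\left( {{2n} + 1}\right) \) ; the sample data below are arbitrary:

```python
import math

def trig_coefficients(ys):
    """a_k, b_k from Theorem 8.24; len(ys) must be odd, len(ys) = 2n + 1."""
    m = len(ys)
    n = (m - 1) // 2
    a = [2.0 / m * sum(ys[j] * math.cos(2 * math.pi * j * k / m) for j in range(m))
         for k in range(n + 1)]
    b = [2.0 / m * sum(ys[j] * math.sin(2 * math.pi * j * k / m) for j in range(m))
         for k in range(1, n + 1)]
    return a, b

def q_n(a, b, t):
    """Evaluate the trigonometric interpolation polynomial of Theorem 8.24."""
    return a[0] / 2 + sum(a[k] * math.cos(k * t) + b[k - 1] * math.sin(k * t)
                          for k in range(1, len(a)))
```

These sums are the discrete Fourier coefficients of the data; for large \( n \) they would be computed with a fast Fourier transform rather than directly.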
Theorem 8.25 There exists a unique trigonometric polynomial\n\n\[ \n{q}_{n}\left( t\right) = \frac{{a}_{0}}{2} + \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}\left\lbrack {{a}_{k}\cos {kt} + {b}_{k}\sin {kt}}\right\rbrack + \frac{{a}_{n}}{2}\cos {nt} \n\]\n\nsatisfying the interpolation property\n\n\[ \n{q}_{n}\left( \frac{\pi j}{n}\right) = {y}_{j},\;j = 0,\ldots ,{2n} - 1. \n\]
|
Its coefficients are given by\n\n\[ \n{a}_{k} = \frac{1}{n}\mathop{\sum }\limits_{{j = 0}}^{{{2n} - 1}}{y}_{j}\cos \frac{\pi jk}{n},\;k = 0,\ldots, n \n\]\n\n\[ \n{b}_{k} = \frac{1}{n}\mathop{\sum }\limits_{{j = 0}}^{{{2n} - 1}}{y}_{j}\sin \frac{\pi jk}{n},\;k = 1,\ldots, n - 1. \n\]
|
Yes
|