| Q | A | Result |
|---|---|---|
Lemma 8.28 Let \( m = 2\ell - 1 \) with \( \ell \in \mathbb{N} \) and \( \ell \geq 2 \), and let \( f \in {C}^{\ell }\left\lbrack {a, b}\right\rbrack \) . Assume that the spline \( s \in {S}_{m}^{n} \) interpolates \( f \), i.e., \[ s\left( {x}_{j}\right) = f\left( {x}_{j}\right) ,\;j = 0,\ldots, n, \] (8.26) and that it satisfies the boundary conditions \[ {s}^{\left( j\right) }\left( a\right) = {f}^{\left( j\right) }\left( a\right) ,\;{s}^{\left( j\right) }\left( b\right) = {f}^{\left( j\right) }\left( b\right) ,\;j = 1,\ldots ,\ell - 1. \] (8.27) Then \[ {\int }_{a}^{b}{\left\lbrack {f}^{\left( \ell \right) }\left( x\right) - {s}^{\left( \ell \right) }\left( x\right) \right\rbrack }^{2}{dx} = {\int }_{a}^{b}{\left\lbrack {f}^{\left( \ell \right) }\left( x\right) \right\rbrack }^{2}{dx} - {\int }_{a}^{b}{\left\lbrack {s}^{\left( \ell \right) }\left( x\right) \right\rbrack }^{2}{dx}. \] (8.28)
|
Proof. We have that \[ {\int }_{a}^{b}{\left\lbrack {f}^{\left( \ell \right) }\left( x\right) - {s}^{\left( \ell \right) }\left( x\right) \right\rbrack }^{2}{dx} = {\int }_{a}^{b}{\left\lbrack {f}^{\left( \ell \right) }\left( x\right) \right\rbrack }^{2}{dx} - {\int }_{a}^{b}{\left\lbrack {s}^{\left( \ell \right) }\left( x\right) \right\rbrack }^{2}{dx} - {2R}, \] where \[ R \mathrel{\text{:=}} {\int }_{a}^{b}\left\lbrack {{f}^{\left( \ell \right) }\left( x\right) - {s}^{\left( \ell \right) }\left( x\right) }\right\rbrack {s}^{\left( \ell \right) }\left( x\right) {dx}. \] Since \( f \in {C}^{\ell }\left\lbrack {a, b}\right\rbrack \) and \( s \in {C}^{m - 1}\left\lbrack {a, b}\right\rbrack \) has piecewise continuous derivatives of order \( m \), by \( \ell - 1 \) repeated partial integrations and using the boundary conditions (8.27) we obtain that \[ R = {\left( -1\right) }^{\ell - 1}{\int }_{a}^{b}\left\lbrack {{f}^{\prime }\left( x\right) - {s}^{\prime }\left( x\right) }\right\rbrack {s}^{\left( m\right) }\left( x\right) {dx}. \] A further partial integration and the interpolation conditions now yield \[ R = {\left( -1\right) }^{\ell - 1}\mathop{\sum }\limits_{{j = 1}}^{n}{\int }_{{x}_{j - 1}}^{{x}_{j}}\left\lbrack {{f}^{\prime }\left( x\right) - {s}^{\prime }\left( x\right) }\right\rbrack {s}^{\left( m\right) }\left( x\right) {dx} \] \[ = {\left. {\left( -1\right) }^{\ell - 1}\mathop{\sum }\limits_{{j = 1}}^{n}\left\lbrack f\left( x\right) - s\left( x\right) \right\rbrack {s}^{\left( m\right) }\left( x\right) \right| }_{{x}_{j - 1}}^{{x}_{j}} = 0 \] since \( {s}^{\left( m + 1\right) } = 0 \) . This completes the proof.
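As a numerical illustration of the identity (8.28) in the cubic case \( \ell = 2 \) (an added sketch, not part of the original proof; it assumes NumPy and SciPy are available): SciPy's `CubicSpline` with clamped boundary conditions realizes the interpolation conditions (8.26) and the boundary conditions (8.27), and both sides of (8.28) can then be compared by numerical quadrature.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def trapz(y, t):
    # simple composite trapezoidal rule for the integrals below
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2)

a, b, n = 0.0, 1.0, 8
x = np.linspace(a, b, n + 1)
f, fp, fpp = np.sin, np.cos, lambda t: -np.sin(t)

# clamped cubic spline: s(x_j) = f(x_j) and s'(a) = f'(a), s'(b) = f'(b)
s = CubicSpline(x, f(x), bc_type=((1, fp(a)), (1, fp(b))))

t = np.linspace(a, b, 20001)
spp = s(t, 2)  # second derivative of the spline
lhs = trapz((fpp(t) - spp) ** 2, t)
rhs = trapz(fpp(t) ** 2, t) - trapz(spp ** 2, t)
# lhs and rhs agree up to quadrature error, in accordance with (8.28)
```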
|
Yes
|
Lemma 8.29 Under the assumptions of Lemma 8.28 let \( f = 0 \) . Then \( s = 0 \) .
|
Proof. For \( f = 0 \), from (8.28) it follows that

\[ 
{\int }_{a}^{b}{\left\lbrack {s}^{\left( \ell \right) }\left( x\right) \right\rbrack }^{2}{dx} = 0. 
\]

This implies that \( {s}^{\left( \ell \right) } = 0 \), and therefore \( s \in {P}_{\ell - 1} \) on \( \left\lbrack {a, b}\right\rbrack \) . Now the interpolation condition \( s\left( a\right) = 0 \) together with the boundary conditions (8.27) yields \( {s}^{\left( j\right) }\left( a\right) = 0 \) for \( j = 0,\ldots ,\ell - 1 \) ; i.e., the polynomial \( s \) of degree at most \( \ell - 1 \) has a zero of order \( \ell \) at \( a \), whence \( s = 0 \) .
|
Yes
|
Theorem 8.30 Let \( m = 2\ell - 1 \) with \( \ell \in \mathbb{N} \) and \( \ell \geq 2 \) . Then, given \( n + 1 \) values \( {y}_{0},\ldots ,{y}_{n} \) and \( m - 1 \) boundary data \( {a}_{1},\ldots ,{a}_{\ell - 1} \) and \( {b}_{1},\ldots ,{b}_{\ell - 1} \), there exists a unique spline \( s \in {S}_{m}^{n} \) satisfying the interpolation conditions

\[ s\left( {x}_{j}\right) = {y}_{j},\;j = 0,\ldots, n, \]

(8.29)

and the boundary conditions

\[ {s}^{\left( j\right) }\left( a\right) = {a}_{j},\;{s}^{\left( j\right) }\left( b\right) = {b}_{j},\;j = 1,\ldots ,\ell - 1. \]

(8.30)
|
Proof. Representing the spline in the form (8.25), i.e.,

\[ s\left( x\right) = \mathop{\sum }\limits_{{k = 0}}^{m}{\alpha }_{k}{u}_{k} + \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}{\beta }_{k}{v}_{k}, \]

(8.31)

it follows that the interpolation conditions (8.29) and boundary conditions (8.30) are satisfied if and only if the \( m + n \) coefficients \( {\alpha }_{0},\ldots ,{\alpha }_{m} \) and \( {\beta }_{1},\ldots ,{\beta }_{n - 1} \) solve the system

\[ \mathop{\sum }\limits_{{k = 0}}^{m}{\alpha }_{k}{u}_{k}\left( {x}_{j}\right) + \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}{\beta }_{k}{v}_{k}\left( {x}_{j}\right) = {y}_{j},\;j = 0,\ldots, n, \]

\[ \mathop{\sum }\limits_{{k = 0}}^{m}{\alpha }_{k}{u}_{k}^{\left( j\right) }\left( a\right) + \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}{\beta }_{k}{v}_{k}^{\left( j\right) }\left( a\right) = {a}_{j},\;j = 1,\ldots ,\ell - 1, \]

(8.32)

\[ \mathop{\sum }\limits_{{k = 0}}^{m}{\alpha }_{k}{u}_{k}^{\left( j\right) }\left( b\right) + \mathop{\sum }\limits_{{k = 1}}^{{n - 1}}{\beta }_{k}{v}_{k}^{\left( j\right) }\left( b\right) = {b}_{j},\;j = 1,\ldots ,\ell - 1, \]

of \( m + n \) linear equations. By Lemma 8.29 the homogeneous form of the system (8.32) has only the trivial solution. Therefore, the inhomogeneous system (8.32) is uniquely solvable, and the proof is finished.
|
Yes
|
Theorem 8.31 For \( m \in \mathbb{N} \cup \{ 0\} \) the B-splines

\[ 
{B}_{m}\left( {\cdot - k}\right) ,\;k = 0,\ldots, m 
\]

(8.37)

are linearly independent on the interval \( {I}_{m} \mathrel{\text{:=}} \left\lbrack {\frac{m - 1}{2},\frac{m + 1}{2}}\right\rbrack \) .
|
Proof. This is trivial for \( m = 0 \), and we assume that it has been proven for degree \( m - 1 \) for some \( m \geq 1 \) . Let

\[ 
\mathop{\sum }\limits_{{k = 0}}^{m}{\alpha }_{k}{B}_{m}\left( {x - k}\right) = 0,\;x \in {I}_{m}. 
\]

(8.38)

Then, with the aid of (8.33), differentiating (8.38) yields

\[ 
\mathop{\sum }\limits_{{k = 0}}^{m}{\alpha }_{k}\left\lbrack {{B}_{m - 1}\left( {x - k + \frac{1}{2}}\right) - {B}_{m - 1}\left( {x - k - \frac{1}{2}}\right) }\right\rbrack = 0,\;x \in {I}_{m}. 
\]

Observing that the supports of \( {B}_{m - 1}\left( {\cdot + \frac{1}{2}}\right) \) and \( {B}_{m - 1}\left( {\cdot - m - \frac{1}{2}}\right) \) do not intersect with \( {I}_{m} \), we can rewrite this as

\[ 
\mathop{\sum }\limits_{{k = 1}}^{m}\left\lbrack {{\alpha }_{k} - {\alpha }_{k - 1}}\right\rbrack {B}_{m - 1}\left( {x - k + \frac{1}{2}}\right) = 0,\;x \in {I}_{m}, 
\]

whence \( {\alpha }_{k} = {\alpha }_{k - 1} \) for \( k = 1,\ldots, m \) follows by the induction assumption; i.e., \( {\alpha }_{k} = \alpha \) for \( k = 0,\ldots, m \) . Now (8.38) reads

\[ 
\alpha \mathop{\sum }\limits_{{k = 0}}^{m}{B}_{m}\left( {x - k}\right) = 0,\;x \in {I}_{m}, 
\]

and integrating this equation over the interval \( {I}_{m} \) leads to

\[ 
\alpha {\int }_{-\frac{m}{2} - \frac{1}{2}}^{\frac{m}{2} + \frac{1}{2}}{B}_{m}\left( x\right) {dx} = 0. 
\]

This finally implies \( \alpha = 0 \), since the \( {B}_{m} \) are nonnegative, and the proof is finished.
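Theorem 8.31 can be illustrated numerically (an added sketch, not part of the source; it assumes SciPy is available, and the helper names are illustrative). The centered cardinal B-spline \( B_m \) is generated from its knots \( -\frac{m+1}{2},\ldots,\frac{m+1}{2} \) via `BSpline.basis_element`; linear independence of the \( m+1 \) shifts on \( I_m \) then appears as full rank of a collocation matrix. The rows of that matrix also sum to one, reflecting the partition-of-unity property of the cardinal B-splines (a known fact, not proven in this passage).

```python
import numpy as np
from scipy.interpolate import BSpline

def cardinal_bspline(m):
    # centered cardinal B-spline of degree m with knots -(m+1)/2, ..., (m+1)/2
    knots = np.arange(m + 2) - (m + 1) / 2.0
    return BSpline.basis_element(knots, extrapolate=False)

def collocation_matrix(m, samples=40):
    # A[i, k] = B_m(x_i - k) for sample points x_i in the interior of I_m
    Bm = cardinal_bspline(m)
    lo, hi = (m - 1) / 2.0, (m + 1) / 2.0
    x = np.linspace(lo, hi, samples + 2)[1:-1]
    return np.array([[float(np.nan_to_num(Bm(xi - k))) for k in range(m + 1)]
                     for xi in x])

# full rank m + 1 reflects linear independence of the m + 1 shifts on I_m
ranks = {m: np.linalg.matrix_rank(collocation_matrix(m)) for m in range(4)}
```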
|
Yes
|
Corollary 8.32 Let \( {x}_{k} = a + {hk}, k = 0,\ldots, n \), be an equidistant subdivision of the interval \( \left\lbrack {a, b}\right\rbrack \) of step size \( h = \left( {b - a}\right) /n \) with \( n \geq 2 \), and let \( m = 2\ell - 1 \) with \( \ell \in \mathbb{N} \) . Then the B-splines

\[ 
{B}_{m, k}\left( x\right) \mathrel{\text{:=}} {B}_{m}\left( \frac{x - a - {hk}}{h}\right) ,\;x \in \left\lbrack {a, b}\right\rbrack , 
\]

(8.39)

for \( k = - \ell + 1,\ldots, n + \ell - 1 \) form a basis for \( {S}_{m}^{n} \) .
|
Proof. The \( n + m \) splines (8.39) belong to \( {S}_{m}^{n} \), and by the preceding Theorem 8.31 they can be shown to be linearly independent on \( \left\lbrack {a, b}\right\rbrack \) . Hence, the statement follows from Theorem 8.27.
|
Yes
|
Theorem 8.33 Let \( f : \left\lbrack {a, b}\right\rbrack \rightarrow \mathbb{R} \) be twice continuously differentiable and let \( s \in {S}_{3}^{n} \) be the uniquely determined cubic spline satisfying the interpolation and boundary conditions of Lemma 8.28. Then

\[ \parallel f - s{\parallel }_{\infty } \leq \frac{{h}^{3/2}}{2}{\begin{Vmatrix}{f}^{\prime \prime }\end{Vmatrix}}_{2}\;\text{ and }\;{\begin{Vmatrix}{f}^{\prime } - {s}^{\prime }\end{Vmatrix}}_{\infty } \leq {h}^{1/2}{\begin{Vmatrix}{f}^{\prime \prime }\end{Vmatrix}}_{2}, \]

where \( h \mathrel{\text{:=}} \mathop{\max }\limits_{{j = 1,\ldots, n}}\left| {{x}_{j} - {x}_{j - 1}}\right| \) .
|
Proof. The error function \( r \mathrel{\text{:=}} f - s \) has \( n + 1 \) zeros \( {x}_{0},\ldots ,{x}_{n} \) . Hence, the distance between two consecutive zeros of \( r \) is less than or equal to \( h \) . By Rolle’s theorem, the derivative \( {r}^{\prime } \) has \( n \) zeros with distance less than or equal to \( {2h} \) . Choose \( z \in \left\lbrack {a, b}\right\rbrack \) such that \( \left| {{r}^{\prime }\left( z\right) }\right| = {\begin{Vmatrix}{r}^{\prime }\end{Vmatrix}}_{\infty } \) . Then the closest zero \( \zeta \) of \( {r}^{\prime } \) has distance \( \left| {\zeta - z}\right| \leq h \), and by the Cauchy-Schwarz inequality we can estimate

\[ {\begin{Vmatrix}{r}^{\prime }\end{Vmatrix}}_{\infty }^{2} = {\left| {\int }_{\zeta }^{z}{r}^{\prime \prime }\left( y\right) dy\right| }^{2} \leq h\left| {{\int }_{\zeta }^{z}{\left\lbrack {r}^{\prime \prime }\left( y\right) \right\rbrack }^{2}{dy}}\right| \leq h{\int }_{a}^{b}{\left\lbrack {r}^{\prime \prime }\left( y\right) \right\rbrack }^{2}{dy}. \]

From this, using Lemma 8.28 we obtain \( {\begin{Vmatrix}{r}^{\prime }\end{Vmatrix}}_{\infty } \leq \sqrt{h}{\begin{Vmatrix}{f}^{\prime \prime }\end{Vmatrix}}_{2} \) .

Choose \( x \in \left\lbrack {a, b}\right\rbrack \) such that \( \left| {r\left( x\right) }\right| = \parallel r{\parallel }_{\infty } \) . Then the closest zero \( \xi \) of \( r \) has distance \( \left| {\xi - x}\right| \leq h/2 \), and we can estimate

\[ \parallel r{\parallel }_{\infty } = \left| {{\int }_{\xi }^{x}{r}^{\prime }\left( y\right) {dy}}\right| \leq \frac{h}{2}{\begin{Vmatrix}{r}^{\prime }\end{Vmatrix}}_{\infty } \leq \frac{h\sqrt{h}}{2}{\begin{Vmatrix}{f}^{\prime \prime }\end{Vmatrix}}_{2}, \]

which concludes the proof.
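Both error bounds of Theorem 8.33 can be checked numerically (an added sketch with the illustrative choice \( f = \sin \) on \( [0,\pi] \); it assumes NumPy and SciPy are available):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def trapz(y, t):
    # composite trapezoidal rule, used here to approximate ||f''||_2
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2)

a, b, n = 0.0, np.pi, 10
x = np.linspace(a, b, n + 1)
h = (b - a) / n
f, fp, fpp = np.sin, np.cos, lambda t: -np.sin(t)

# clamped cubic spline as in Lemma 8.28 with ell = 2
s = CubicSpline(x, f(x), bc_type=((1, fp(a)), (1, fp(b))))

t = np.linspace(a, b, 20001)
norm_fpp_2 = np.sqrt(trapz(fpp(t) ** 2, t))   # ||f''||_2
err_f  = float(np.max(np.abs(f(t) - s(t))))   # ||f - s||_inf
err_fp = float(np.max(np.abs(fp(t) - s(t, 1))))  # ||f' - s'||_inf

bound_f  = h ** 1.5 / 2 * norm_fpp_2          # h^{3/2}/2 ||f''||_2
bound_fp = h ** 0.5 * norm_fpp_2              # h^{1/2}   ||f''||_2
```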
|
Yes
|
Theorem 8.34 Let \( f : \left\lbrack {a, b}\right\rbrack \rightarrow \mathbb{R} \) be four-times continuously differentiable and let \( s \in {S}_{3}^{n} \) be the uniquely determined cubic spline satisfying the interpolation and boundary conditions of Lemma 8.28 for an equidistant subdivision with step width \( h \) . Then

\[ \parallel f - s{\parallel }_{\infty } \leq \frac{{h}^{4}}{16}{\begin{Vmatrix}{f}^{\left( 4\right) }\end{Vmatrix}}_{\infty }. \]
|
Proof. By \( {L}_{1} : C\left\lbrack {a, b}\right\rbrack \rightarrow {S}_{1}^{n} \) we denote the interpolation operator mapping \( g \in C\left\lbrack {a, b}\right\rbrack \) onto its uniquely determined piecewise linear interpolation. From Example 8.12 we obtain that

\[ \parallel r{\parallel }_{\infty } = {\begin{Vmatrix}r - {L}_{1}r\end{Vmatrix}}_{\infty } \leq \frac{{h}^{2}}{8}{\begin{Vmatrix}{r}^{\prime \prime }\end{Vmatrix}}_{\infty }, \]

since trivially \( {L}_{1}r = 0 \) .

By integration, we choose a function \( w \) such that \( {w}^{\prime \prime } = {L}_{1}{f}^{\prime \prime } \) . Applying the estimate (8.45) for the cubic spline \( s - w \) and using the estimate (8.9) for the piecewise linear interpolation of \( {f}^{\prime \prime } \), we obtain

\[ {\begin{Vmatrix}{f}^{\prime \prime } - {s}^{\prime \prime }\end{Vmatrix}}_{\infty } \leq {\begin{Vmatrix}{f}^{\prime \prime } - {L}_{1}{f}^{\prime \prime }\end{Vmatrix}}_{\infty } + {\begin{Vmatrix}{L}_{1}{f}^{\prime \prime } - {s}^{\prime \prime }\end{Vmatrix}}_{\infty } \leq 4{\begin{Vmatrix}{f}^{\prime \prime } - {L}_{1}{f}^{\prime \prime }\end{Vmatrix}}_{\infty } \leq \frac{{h}^{2}}{2}{\begin{Vmatrix}{f}^{\left( 4\right) }\end{Vmatrix}}_{\infty }. \]

By piecing together the last two inequalities we obtain the assertion of the theorem.
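A numerical check of the \( O(h^4) \) bound (an added sketch, assuming NumPy and SciPy; halving \( h \) should reduce the maximum error by a factor of roughly \( 2^4 = 16 \)):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def max_err(n):
    # max |f - s| for the clamped cubic spline interpolating f = sin on [0, pi]
    a, b = 0.0, np.pi
    x = np.linspace(a, b, n + 1)
    s = CubicSpline(x, np.sin(x), bc_type=((1, np.cos(a)), (1, np.cos(b))))
    t = np.linspace(a, b, 10001)
    return float(np.max(np.abs(np.sin(t) - s(t))))

errors = {n: max_err(n) for n in (8, 16, 32)}
# bound h^4/16 ||f^(4)||_inf with ||f^(4)||_inf = 1 for f = sin
bounds = {n: (np.pi / n) ** 4 / 16 for n in (8, 16, 32)}
ratio = errors[8] / errors[16]   # close to 2^4 = 16 for an O(h^4) method
```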
|
Yes
|
Theorem 8.37 The Bernstein polynomials are nonnegative on \( \left\lbrack {0,1}\right\rbrack \) and provide a partition of unity; i.e.,

\[ {B}_{k}^{n}\left( t\right) \geq 0,\;t \in \left\lbrack {0,1}\right\rbrack , \]

and

\[ \mathop{\sum }\limits_{{k = 0}}^{n}{B}_{k}^{n}\left( t\right) = 1,\;t \in \mathbb{R}. \]
|
Proof. The first five properties are obvious. The statement on the maximum of \( {B}_{k}^{n} \) is a consequence of

\[ \frac{d}{dt}{B}_{k}^{n}\left( t\right) = \left( \begin{array}{l} n \\ k \end{array}\right) {t}^{k - 1}{\left( 1 - t\right) }^{n - k - 1}\left( {k - {nt}}\right) ,\;k = 0,\ldots, n. \]

The recursion formula (8.52) follows from the definition (8.47) and the recursion formula

\[ \left( \begin{array}{l} n \\ k \end{array}\right) = \left( \begin{array}{l} n - 1 \\ k - 1 \end{array}\right) + \left( \begin{matrix} n - 1 \\ k \end{matrix}\right) \]

for the binomial coefficients. In order to show that the \( n + 1 \) polynomials \( {B}_{0}^{n},\ldots ,{B}_{n}^{n} \) of degree \( n \) provide a basis of \( {P}_{n} \), we prove that they are linearly independent. Let

\[ \mathop{\sum }\limits_{{k = 0}}^{n}{b}_{k}{B}_{k}^{n}\left( t\right) = 0,\;t \in \left\lbrack {0,1}\right\rbrack . \]

Then

\[ \mathop{\sum }\limits_{{k = 0}}^{n}{b}_{k}\frac{{d}^{j}}{d{t}^{j}}{B}_{k}^{n}\left( t\right) = 0,\;t \in \left\lbrack {0,1}\right\rbrack , \]

and therefore

\[ \mathop{\sum }\limits_{{k = j}}^{n}{b}_{k}\frac{{d}^{j}}{d{t}^{j}}{B}_{k}^{n}\left( 0\right) = 0,\;j = 0,\ldots, n, \]

since \( t = 0 \) is a zero of \( {B}_{k}^{n} \) of order \( k \). From this, by induction we find that \( {b}_{0} = \cdots = {b}_{n} = 0. \)
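The nonnegativity, the partition of unity, and the location of the maximum of \( B_k^n \) at \( t = k/n \) can all be verified numerically (an added sketch, assuming NumPy; the degree \( n = 7 \) is an illustrative choice):

```python
import numpy as np
from math import comb

def bernstein(n, k, t):
    # B_k^n(t) = C(n, k) t^k (1 - t)^(n - k), defined for all real t
    t = np.asarray(t, dtype=float)
    return comb(n, k) * t ** k * (1.0 - t) ** (n - k)

n = 7
t01 = np.linspace(0.0, 1.0, 201)   # nonnegativity holds on [0, 1]
tR = np.linspace(-2.0, 3.0, 101)   # partition of unity holds on all of R

values = np.array([bernstein(n, k, t01) for k in range(n + 1)])
unity = sum(bernstein(n, k, tR) for k in range(n + 1))
```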
|
Yes
|
Theorem 8.39 Let

\[ 
p\left( t\right) = \mathop{\sum }\limits_{{k = 0}}^{n}{b}_{k}{B}_{k}^{n}\left( t\right) ,\;t \in \left\lbrack {0,1}\right\rbrack , 
\]

be a Bézier polynomial on \( \left\lbrack {0,1}\right\rbrack \) . Then

\[ 
{p}^{\left( j\right) }\left( t\right) = \frac{n!}{\left( {n - j}\right) !}\mathop{\sum }\limits_{{k = 0}}^{{n - j}}{\bigtriangleup }^{j}{b}_{k}{B}_{k}^{n - j}\left( t\right) ,\;j = 1,\ldots, n, 
\]

with the forward differences \( {\bigtriangleup }^{j}{b}_{k} \) recursively defined by

\[ 
{\bigtriangleup }^{0}{b}_{k} \mathrel{\text{:=}} {b}_{k},\;{\bigtriangleup }^{j}{b}_{k} \mathrel{\text{:=}} {\bigtriangleup }^{j - 1}{b}_{k + 1} - {\bigtriangleup }^{j - 1}{b}_{k},\;j = 1,\ldots, n. 
\]
|
Proof. Obviously, the statement is true for \( j = 0 \) . We assume that it has been proven for some \( 0 \leq j < n \) . Then with the aid of (8.54) we obtain

\[ 
{p}^{\left( j + 1\right) }\left( t\right) = \frac{n!}{\left( {n - j}\right) !}\mathop{\sum }\limits_{{k = 0}}^{{n - j}}{\bigtriangleup }^{j}{b}_{k}\frac{d}{dt}{B}_{k}^{n - j}\left( t\right) 
\]

\[ 
= \frac{n!}{\left( {n - j - 1}\right) !}\left\{ {\mathop{\sum }\limits_{{k = 1}}^{{n - j}}{\bigtriangleup }^{j}{b}_{k}{B}_{k - 1}^{n - j - 1}\left( t\right) - \mathop{\sum }\limits_{{k = 0}}^{{n - j - 1}}{\bigtriangleup }^{j}{b}_{k}{B}_{k}^{n - j - 1}\left( t\right) }\right\} 
\]

\[ 
= \frac{n!}{\left( {n - j - 1}\right) !}\mathop{\sum }\limits_{{k = 0}}^{{n - j - 1}}\left\{ {{\bigtriangleup }^{j}{b}_{k + 1} - {\bigtriangleup }^{j}{b}_{k}}\right\} {B}_{k}^{n - j - 1}\left( t\right) 
\]

\[ 
= \frac{n!}{\left\lbrack {n - \left( {j + 1}\right) }\right\rbrack !}\mathop{\sum }\limits_{{k = 0}}^{{n - \left( {j + 1}\right) }}{\bigtriangleup }^{j + 1}{b}_{k}{B}_{k}^{n - \left( {j + 1}\right) }\left( t\right) , 
\]

which establishes the assertion for \( j + 1 \) .
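Theorem 8.39 can be verified exactly in the power basis (an added sketch, assuming NumPy; the helper names are illustrative): both sides of the derivative formula are expanded as NumPy `Polynomial` objects and compared coefficientwise.

```python
import numpy as np
from math import comb, factorial
from numpy.polynomial import Polynomial as P

def bernstein_poly(n, k):
    # B_k^n(t) = C(n, k) t^k (1 - t)^(n - k) as a Polynomial in the power basis
    return comb(n, k) * P([0.0, 1.0]) ** k * P([1.0, -1.0]) ** (n - k)

def bezier_derivative(b, j):
    # j-th derivative via Theorem 8.39: forward differences of the coefficients
    d = np.array(b, dtype=float)
    for _ in range(j):
        d = d[1:] - d[:-1]
    n = len(b) - 1
    q = sum(dk * bernstein_poly(n - j, k) for k, dk in enumerate(d))
    return factorial(n) / factorial(n - j) * q

rng = np.random.default_rng(1)
b = rng.standard_normal(6)                  # Bezier coefficients, degree n = 5
p = sum(bk * bernstein_poly(5, k) for k, bk in enumerate(b))

checks = [np.allclose((bezier_derivative(b, j) - p.deriv(j)).coef, 0.0, atol=1e-9)
          for j in range(1, 6)]
```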
|
Yes
|
Corollary 8.40 The polynomial from Theorem 8.39 has the derivatives

\[ 
{p}^{\left( j\right) }\left( 0\right) = \frac{n!}{\left( {n - j}\right) !}{\bigtriangleup }^{j}{b}_{0},\;{p}^{\left( j\right) }\left( 1\right) = \frac{n!}{\left( {n - j}\right) !}{\bigtriangleup }^{j}{b}_{n - j} 
\]

at the two endpoints.
|
From Corollary 8.40 we note that \( {p}^{\left( j\right) }\left( 0\right) \) depends only on \( {b}_{0},\ldots ,{b}_{j} \) and that \( {p}^{\left( j\right) }\left( 1\right) \) depends only on \( {b}_{n - j},\ldots ,{b}_{n} \) . In particular, we have that

\[ 
{p}^{\prime }\left( 0\right) = n\left( {{b}_{1} - {b}_{0}}\right) ,\;{p}^{\prime }\left( 1\right) = n\left( {{b}_{n} - {b}_{n - 1}}\right) ; 
\]

(8.55)

i.e., at the two endpoints the Bézier curve has the same tangent lines as the Bézier polygon. Through the affine transformation (8.46) these results on the derivatives carry over to the general interval \( \left\lbrack {a, b}\right\rbrack \) .
|
Yes
|
Theorem 8.41 The subpolynomials \( {b}_{i}^{k} \) of a Bézier polynomial \( p \) of degree \( n \) satisfy the recursion formulae

\[ 
{b}_{i}^{k}\left( t\right) = \left( {1 - t}\right) {b}_{i}^{k - 1}\left( t\right) + t{b}_{i + 1}^{k - 1}\left( t\right) 
\]

(8.57)

for \( i = 0,\ldots, n - k \) and \( k = 1,\ldots, n \) .
|
Proof. We insert the recursion formulae (8.51) and (8.52) for the Bernstein polynomials into the definition (8.56) for the subpolynomials and obtain

\[ 
{b}_{i}^{k}\left( t\right) = {b}_{i}{B}_{0}^{k}\left( t\right) + \mathop{\sum }\limits_{{j = 1}}^{{k - 1}}{b}_{i + j}{B}_{j}^{k}\left( t\right) + {b}_{i + k}{B}_{k}^{k}\left( t\right) 
\]

\[ 
= \mathop{\sum }\limits_{{j = 0}}^{{k - 1}}{b}_{i + j}\left( {1 - t}\right) {B}_{j}^{k - 1}\left( t\right) + \mathop{\sum }\limits_{{j = 1}}^{k}{b}_{i + j}t{B}_{j - 1}^{k - 1}\left( t\right) 
\]

\[ 
= \left( {1 - t}\right) {b}_{i}^{k - 1}\left( t\right) + t{b}_{i + 1}^{k - 1}\left( t\right) , 
\]

which establishes (8.57).
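The recursion (8.57) is the basis of the de Casteljau algorithm for evaluating a Bézier polynomial: starting from \( b_i^0 = b_i \), each pass replaces the current coefficients by convex combinations, and \( p(t) = b_0^n(t) \). A minimal sketch (added here as an illustration, assuming NumPy; the coefficients are arbitrary):

```python
import numpy as np
from math import comb

def de_casteljau(b, t):
    # evaluate p(t) = sum_k b_k B_k^n(t) via the recursion (8.57):
    # b_i^k(t) = (1 - t) b_i^{k-1}(t) + t b_{i+1}^{k-1}(t),  p(t) = b_0^n(t)
    b = np.array(b, dtype=float)
    for _ in range(len(b) - 1):
        b = (1.0 - t) * b[:-1] + t * b[1:]
    return float(b[0])

def bezier_direct(b, t):
    # direct evaluation through the Bernstein basis, for comparison
    n = len(b) - 1
    return sum(bk * comb(n, k) * t ** k * (1 - t) ** (n - k)
               for k, bk in enumerate(b))

coeffs = [0.0, 2.0, -1.0, 3.0, 1.0]
```

The intermediate values \( b_0^k \) and \( b_k^{n-k} \) produced by the same passes are exactly the subdivision coefficients appearing in Theorem 8.42.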
|
Yes
|
Theorem 8.42 The Bézier polynomials

\[ 
{p}_{1}\left( x\right) \mathrel{\text{:=}} \mathop{\sum }\limits_{{k = 0}}^{n}{b}_{0}^{k}\left( t\right) {B}_{k}^{n}\left( {x;0, t}\right) \;\text{ and }\;{p}_{2}\left( x\right) \mathrel{\text{:=}} \mathop{\sum }\limits_{{k = 0}}^{n}{b}_{k}^{n - k}\left( t\right) {B}_{k}^{n}\left( {x;t,1}\right) 
\]

with the coefficients \( {b}_{0}^{k} \) and \( {b}_{k}^{n - k} \) for \( k = 0,\ldots, n \) defined by the recursion (8.57) satisfy

\[ 
p\left( x\right) = {p}_{1}\left( x\right) = {p}_{2}\left( x\right) ,\;x \in \mathbb{R}, 
\]

for arbitrary \( 0 < t < 1 \) .
|
Proof. Inserting the equivalent definition (8.56) of the subpolynomials and reordering the summation, we find that

\[ 
{p}_{1}\left( x\right) = \mathop{\sum }\limits_{{k = 0}}^{n}\mathop{\sum }\limits_{{j = 0}}^{k}{b}_{j}{B}_{j}^{k}\left( t\right) {B}_{k}^{n}\left( {x;0, t}\right) = \mathop{\sum }\limits_{{j = 0}}^{n}{b}_{j}\mathop{\sum }\limits_{{k = j}}^{n}{B}_{j}^{k}\left( t\right) {B}_{k}^{n}\left( {x;0, t}\right) .
\]

Hence the proof will be concluded by showing that

\[ 
\mathop{\sum }\limits_{{k = j}}^{n}{B}_{j}^{k}\left( t\right) {B}_{k}^{n}\left( {x;0, t}\right) = {B}_{j}^{n}\left( x\right) ,\;x \in \mathbb{R}.
\]

(8.58)

To establish this identity we make use of Definition 8.36 and obtain with the aid of the binomial formula that

\[ 
\mathop{\sum }\limits_{{k = j}}^{n}{B}_{j}^{k}\left( t\right) {B}_{k}^{n}\left( {x;0, t}\right) = \mathop{\sum }\limits_{{k = j}}^{n}\left( \begin{array}{l} k \\ j \end{array}\right) {\left( 1 - t\right) }^{k - j}{t}^{j - n}\left( \begin{array}{l} n \\ k \end{array}\right) {x}^{k}{\left( t - x\right) }^{n - k}
\]

\[ 
= \left( \begin{array}{l} n \\ j \end{array}\right) {t}^{j - n}\mathop{\sum }\limits_{{k = j}}^{n}\left( \begin{array}{l} n - j \\ k - j \end{array}\right) {\left( 1 - t\right) }^{k - j}{x}^{k}{\left( t - x\right) }^{n - k}
\]

\[ 
= \left( \begin{array}{l} n \\ j \end{array}\right) {t}^{j - n}{x}^{j}\mathop{\sum }\limits_{{k = 0}}^{{n - j}}\left( \begin{matrix} n - j \\ k \end{matrix}\right) {\left( 1 - t\right) }^{k}{x}^{k}{\left( t - x\right) }^{n - j - k}
\]

\[ 
= \left( \begin{array}{l} n \\ j \end{array}\right) {x}^{j}{\left( 1 - x\right) }^{n - j}.
\]

Hence (8.58) is valid, and consequently \( {p}_{1} = p \) . The proof of \( {p}_{2} = p \) is completely analogous, and it can also be obtained by a symmetry argument from \( {p}_{1} = p \) .
|
Yes
|
Theorem 9.1 The polynomial interpolatory quadrature of order \( n \) defined by

\[ 
{Q}_{n}\left( f\right) \mathrel{\text{:=}} {\int }_{a}^{b}\left( {{L}_{n}f}\right) \left( x\right) {dx} 
\]

(9.3)

is of the form (9.2) with the weights given by

\[ 
{a}_{k} = \frac{1}{{q}_{n + 1}^{\prime }\left( {x}_{k}\right) }{\int }_{a}^{b}\frac{{q}_{n + 1}\left( x\right) }{x - {x}_{k}}{dx},\;k = 0,\ldots, n, 
\]

(9.4)

where \( {q}_{n + 1}\left( x\right) \mathrel{\text{:=}} \left( {x - {x}_{0}}\right) \cdots \left( {x - {x}_{n}}\right) \) .
|
Proof. From (8.2) we obtain

\[ 
{\int }_{a}^{b}\left( {{L}_{n}f}\right) \left( x\right) {dx} = \mathop{\sum }\limits_{{k = 0}}^{n}f\left( {x}_{k}\right) {\int }_{a}^{b}{\ell }_{k}\left( x\right) {dx} 
\]

with

\[ 
{a}_{k} = {\int }_{a}^{b}{\ell }_{k}\left( x\right) {dx} = {\int }_{a}^{b}\mathop{\prod }\limits_{\substack{{j = 0} \\ {j \neq k} }}^{n}\frac{x - {x}_{j}}{{x}_{k} - {x}_{j}}{dx}, 
\]

whence (9.4) follows by rewriting the product.
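The weights \( a_k \) can be computed by integrating the Lagrange basis polynomials \( \ell_k \) exactly with polynomial arithmetic (an added sketch, assuming NumPy; the nodes are an illustrative choice), and then checked against the exactness property (9.5):

```python
import numpy as np
from numpy.polynomial import Polynomial as P

def interpolatory_weights(nodes, a, b):
    # a_k = int_a^b l_k(x) dx, with l_k the k-th Lagrange basis polynomial
    weights = []
    for k, xk in enumerate(nodes):
        lk = P([1.0])
        for j, xj in enumerate(nodes):
            if j != k:
                lk = lk * P([-xj, 1.0]) / (xk - xj)
        Lk = lk.integ()           # antiderivative of l_k
        weights.append(Lk(b) - Lk(a))
    return np.array(weights)

nodes = np.array([0.0, 0.3, 0.7, 1.0])
w = interpolatory_weights(nodes, 0.0, 1.0)
# exactness on P_n: w @ nodes**deg equals int_0^1 x^deg dx = 1/(deg + 1)
```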
|
Yes
|
Theorem 9.2 Given \( n + 1 \) distinct quadrature points \( {x}_{0},\ldots ,{x}_{n} \in \left\lbrack {a, b}\right\rbrack \), the interpolatory quadrature (9.3) of order \( n \) is uniquely determined by its property of integrating all polynomials \( p \in {P}_{n} \) exactly, i.e., by the property

\[ 
\mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}p\left( {x}_{k}\right) = {\int }_{a}^{b}p\left( x\right) {dx} 
\]

(9.5)

for all \( p \in {P}_{n} \) .
|
Proof. From (9.3) and \( {L}_{n}p = p \) for all \( p \in {P}_{n} \) it follows that

\[ 
\mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}p\left( {x}_{k}\right) = {\int }_{a}^{b}\left( {{L}_{n}p}\right) \left( x\right) {dx} = {\int }_{a}^{b}p\left( x\right) {dx}, 
\]

i.e., the quadrature is exact for all \( p \in {P}_{n} \) . On the other hand, from (9.5) we obtain

\[ 
\mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}f\left( {x}_{k}\right) = \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}\left( {{L}_{n}f}\right) \left( {x}_{k}\right) = {\int }_{a}^{b}\left( {{L}_{n}f}\right) \left( x\right) {dx} 
\]

for all \( f \in C\left\lbrack {a, b}\right\rbrack \) ; i.e., the quadrature is an interpolatory quadrature.
|
Yes
|
Theorem 9.3 The polynomial interpolatory quadrature of order \( n \) with equidistant quadrature points

\[ 
{x}_{k} = a + {kh},\;k = 0,\ldots, n, 
\]

and step width \( h = \left( {b - a}\right) /n \) is called the Newton-Cotes quadrature formula of order \( n \) . Its weights are given by

\[ 
{a}_{k} = h\frac{{\left( -1\right) }^{n - k}}{k!\left( {n - k}\right) !}{\int }_{0}^{n}\mathop{\prod }\limits_{\substack{{j = 0} \\ {j \neq k} }}^{n}\left( {z - j}\right) {dz},\;k = 0,\ldots, n, 
\]

(9.6)

and have the symmetry property \( {a}_{k} = {a}_{n - k}, k = 0,\ldots, n \) .
|
Proof. The weights are obtained from (9.4) by substituting \( x = {x}_{0} + {hz} \) and observing that

\[ 
{q}_{n + 1}\left( x\right) = {h}^{n + 1}\mathop{\prod }\limits_{{j = 0}}^{n}\left( {z - j}\right) 
\]

and

\[ 
{q}_{n + 1}^{\prime }\left( {x}_{k}\right) = {\left( -1\right) }^{n - k}k!\left( {n - k}\right) !{h}^{n}. 
\]

The symmetry \( {a}_{k} = {a}_{n - k} \) follows by substituting \( z = n - y \) .
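Formula (9.6) can be evaluated exactly with polynomial arithmetic (an added sketch, assuming NumPy): for \( n = 1 \) and \( n = 2 \) it reproduces the trapezoidal and Simpson weights, and the symmetry \( a_k = a_{n-k} \) is visible directly.

```python
import numpy as np
from math import factorial
from numpy.polynomial import Polynomial as P

def newton_cotes_weights(n, a, b):
    # weights (9.6): a_k = h (-1)^(n-k) / (k! (n-k)!) int_0^n prod_{j != k} (z - j) dz
    h = (b - a) / n
    w = np.empty(n + 1)
    for k in range(n + 1):
        q = P([1.0])
        for j in range(n + 1):
            if j != k:
                q = q * P([-float(j), 1.0])
        Q = q.integ()
        w[k] = h * (-1.0) ** (n - k) / (factorial(k) * factorial(n - k)) * (Q(n) - Q(0))
    return w

trapezoid = newton_cotes_weights(1, 0.0, 1.0)   # expected [1/2, 1/2]
simpson = newton_cotes_weights(2, 0.0, 1.0)     # expected [1/6, 4/6, 1/6]
w4 = newton_cotes_weights(4, 0.0, 1.0)
```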
|
Yes
|
Theorem 9.4 Let \( f : \left\lbrack {a, b}\right\rbrack \rightarrow \mathbb{R} \) be twice continuously differentiable. Then the error for the trapezoidal rule can be represented in the form

\[{\int }_{a}^{b}f\left( x\right) {dx} - \frac{b - a}{2}\left\lbrack {f\left( a\right) + f\left( b\right) }\right\rbrack = - \frac{{h}^{3}}{12}{f}^{\prime \prime }\left( \xi \right)\]

(9.7)

with some \( \xi \in \left\lbrack {a, b}\right\rbrack \) and \( h = b - a \) .
|
Proof. Let \( {L}_{1}f \) denote the linear interpolation of \( f \) at the interpolation points \( {x}_{0} = a \) and \( {x}_{1} = b \) . By construction of the trapezoidal rule we have that the error

\[{E}_{1}\left( f\right) \mathrel{\text{:=}} {\int }_{a}^{b}f\left( x\right) {dx} - \frac{b - a}{2}\left\lbrack {f\left( a\right) + f\left( b\right) }\right\rbrack\]

is given by

\[{E}_{1}\left( f\right) = {\int }_{a}^{b}\left\lbrack {f\left( x\right) - \left( {{L}_{1}f}\right) \left( x\right) }\right\rbrack {dx} = {\int }_{a}^{b}\left( {x - a}\right) \left( {x - b}\right) \frac{f\left( x\right) - \left( {{L}_{1}f}\right) \left( x\right) }{\left( {x - a}\right) \left( {x - b}\right) }{dx}.\]

Since the first factor of the integrand is nonpositive on \( \left\lbrack {a, b}\right\rbrack \) and since by l'Hôpital's rule the second factor is continuous, from the mean value theorem for integrals we obtain that

\[{E}_{1}\left( f\right) = \frac{f\left( z\right) - \left( {{L}_{1}f}\right) \left( z\right) }{\left( {z - a}\right) \left( {z - b}\right) }{\int }_{a}^{b}\left( {x - a}\right) \left( {x - b}\right) {dx}\]

for some \( z \in \left\lbrack {a, b}\right\rbrack \) . From this, with the aid of the error representation for linear interpolation from Theorem 8.10 and the integral

\[{\int }_{a}^{b}\left( {x - a}\right) \left( {x - b}\right) {dx} = - \frac{{h}^{3}}{6},\]

the assertion of the theorem follows.
|
Yes
|
Theorem 9.5 Let \( f : \left\lbrack {a, b}\right\rbrack \rightarrow \mathbb{R} \) be four-times continuously differentiable. Then the error for Simpson's rule can be represented in the form

\[{\int }_{a}^{b}f\left( x\right) {dx} - \frac{b - a}{6}\left\lbrack {f\left( a\right) + {4f}\left( \frac{a + b}{2}\right) + f\left( b\right) }\right\rbrack = - \frac{{h}^{5}}{90}{f}^{\left( 4\right) }\left( \xi \right)\]

for some \( \xi \in \left\lbrack {a, b}\right\rbrack \) and \( h = \left( {b - a}\right) /2 \) .
|
Proof. Let \( {L}_{2}f \) denote the quadratic interpolation polynomial for \( f \) at the interpolation points \( {x}_{0} = a,{x}_{1} = \left( {a + b}\right) /2 \), and \( {x}_{2} = b \) . By construction of Simpson's rule we have that the error

\[{E}_{2}\left( f\right) \mathrel{\text{:=}} {\int }_{a}^{b}f\left( x\right) {dx} - \frac{b - a}{6}\left\lbrack {f\left( a\right) + {4f}\left( \frac{a + b}{2}\right) + f\left( b\right) }\right\rbrack\]

is given by

\[{E}_{2}\left( f\right) = {\int }_{a}^{b}\left\lbrack {f\left( x\right) - \left( {{L}_{2}f}\right) \left( x\right) }\right\rbrack {dx}.\]

Consider the cubic polynomial

\[p\left( x\right) \mathrel{\text{:=}} \left( {{L}_{2}f}\right) \left( x\right) + \frac{4}{{\left( b - a\right) }^{2}}\left\lbrack {{\left( {L}_{2}f\right) }^{\prime }\left( {x}_{1}\right) - {f}^{\prime }\left( {x}_{1}\right) }\right\rbrack {q}_{3}\left( x\right) ,\]

where \( {q}_{3}\left( x\right) = \left( {x - {x}_{0}}\right) \left( {x - {x}_{1}}\right) \left( {x - {x}_{2}}\right) \) . Obviously, \( p \) has the interpolation properties

\[p\left( {x}_{k}\right) = f\left( {x}_{k}\right) ,\;k = 0,1,2,\;\text{ and }\;{p}^{\prime }\left( {x}_{1}\right) = {f}^{\prime }\left( {x}_{1}\right) .\]

Since \( {\int }_{a}^{b}{q}_{3}\left( x\right) {dx} = 0 \), from (9.9) and (9.10) we can conclude that

\[{E}_{2}\left( f\right) = {\int }_{a}^{b}\left\lbrack {f\left( x\right) - p\left( x\right) }\right\rbrack {dx}\]

and consequently

\[{E}_{2}\left( f\right) = {\int }_{a}^{b}\left( {x - {x}_{0}}\right) {\left( x - {x}_{1}\right) }^{2}\left( {x - {x}_{2}}\right) \frac{f\left( x\right) - p\left( x\right) }{\left( {x - {x}_{0}}\right) {\left( x - {x}_{1}\right) }^{2}\left( {x - {x}_{2}}\right) }{dx}.\]

As in the proof of Theorem 9.4, the first factor of the integrand is nonpositive on \( \left\lbrack {a, b}\right\rbrack \), and the second factor is continuous. Hence, by the mean value theorem for integrals, we obtain that

\[{E}_{2}\left( f\right) = \frac{f\left( z\right) - p\left( z\right) }{\left( {z - {x}_{0}}\right) {\left( z - {x}_{1}\right) }^{2}\left( {z - {x}_{2}}\right) }{\int }_{a}^{b}\left( {x - {x}_{0}}\right) {\left( x - {x}_{1}\right) }^{2}\left( {x - {x}_{2}}\right) {dx}\]

for some \( z \in \left\lbrack {a, b}\right\rbrack \) . Analogous to Theorem 8.10, it can be shown that

\[f\left( z\right) - p\left( z\right) = \frac{{f}^{\left( 4\right) }\left( \xi \right) }{4!}\left( {z - {x}_{0}}\right) {\left( z - {x}_{1}\right) }^{2}\left( {z - {x}_{2}}\right)\]

for some \( \xi \in \left\lbrack {a, b}\right\rbrack \) . From this, with the aid of the integral

\[{\int }_{a}^{b}\left( {x - {x}_{0}}\right) {\left( x - {x}_{1}\right) }^{2}\left( {x - {x}_{2}}\right) {dx} = - \frac{{\left( b - a\right) }^{5}}{120},\]

we conclude the statement of the theorem.
|
Yes
|
Example 9.6 The approximation of

\[ \ln 2 = {\int }_{0}^{1}\frac{dx}{1 + x} \]

by the trapezoidal rule yields

\[ \ln 2 \approx \frac{1}{2}\left\lbrack {1 + \frac{1}{2}}\right\rbrack = {0.75}. \]
|
For \( f\left( x\right) \mathrel{\text{:=}} 1/\left( {1 + x}\right) \) we have

\[ \frac{{h}^{3}}{12}{\begin{Vmatrix}{f}^{\prime \prime }\end{Vmatrix}}_{\infty } = \frac{1}{6}, \]

and hence, from Theorem 9.4, we obtain the estimate \( \left| {\ln 2 - {0.75}}\right| \leq {0.167} \) as compared to the true error \( \ln 2 - {0.75} = - {0.056}\ldots \) .
|
Yes
|
Theorem 9.7 Let \( f : \left\lbrack {a, b}\right\rbrack \rightarrow \mathbb{R} \) be twice continuously differentiable. Then the error for the composite trapezoidal rule is given by

\[{\int }_{a}^{b}f\left( x\right) {dx} - {T}_{h}\left( f\right) = - \frac{b - a}{12}{h}^{2}{f}^{\prime \prime }\left( \xi \right)\]

for some \( \xi \in \left\lbrack {a, b}\right\rbrack \).
|
Proof. By Theorem 9.4 we have that

\[{\int }_{a}^{b}f\left( x\right) {dx} - {T}_{h}\left( f\right) = - \frac{{h}^{3}}{12}\mathop{\sum }\limits_{{k = 1}}^{n}{f}^{\prime \prime }\left( {\xi }_{k}\right) ,\]

where \( a \leq {\xi }_{1} \leq {\xi }_{2} \leq \cdots \leq {\xi }_{n} \leq b \). From

\[n\mathop{\min }\limits_{{x \in \left\lbrack {a, b}\right\rbrack }}{f}^{\prime \prime }\left( x\right) \leq \mathop{\sum }\limits_{{k = 1}}^{n}{f}^{\prime \prime }\left( {\xi }_{k}\right) \leq n\mathop{\max }\limits_{{x \in \left\lbrack {a, b}\right\rbrack }}{f}^{\prime \prime }\left( x\right)\]

and the continuity of \( {f}^{\prime \prime } \) we conclude that there exists \( \xi \in \left\lbrack {a, b}\right\rbrack \) such that

\[\mathop{\sum }\limits_{{k = 1}}^{n}{f}^{\prime \prime }\left( {\xi }_{k}\right) = n{f}^{\prime \prime }\left( \xi \right) ,\]

and the proof is finished.
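A sketch of the composite trapezoidal rule and its \( O(h^2) \) error behavior (an added illustration, assuming NumPy; \( f = \exp \) on \( [0,1] \) is an illustrative choice):

```python
import numpy as np

def composite_trapezoid(f, a, b, n):
    # T_h(f) with step h = (b - a)/n
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

a, b = 0.0, 1.0
exact = np.e - 1.0                      # int_0^1 e^x dx
err = {n: abs(exact - composite_trapezoid(np.exp, a, b, n)) for n in (10, 20, 40)}
# error bound (b - a)/12 * h^2 * ||f''||_inf with ||f''||_inf = e on [0, 1]
bound = {n: (b - a) / 12 * ((b - a) / n) ** 2 * np.e for n in (10, 20, 40)}
ratio = err[10] / err[20]               # close to 4 for an O(h^2) method
```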
|
Yes
|
Theorem 9.10 (Szegö) Let

\[ 
{Q}_{n}\left( f\right) = \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}^{\left( n\right) }f\left( {x}_{k}^{\left( n\right) }\right) 
\]

be a sequence of quadrature formulae that converges for all polynomials, i.e.,

\[ 
\mathop{\lim }\limits_{{n \rightarrow \infty }}{Q}_{n}\left( p\right) = Q\left( p\right) 
\]

(9.11)

for all polynomials \( p \), and that is uniformly bounded, i.e., there exists a constant \( C > 0 \) such that

\[ 
\mathop{\sum }\limits_{{k = 0}}^{n}\left| {a}_{k}^{\left( n\right) }\right| \leq C 
\]

(9.12)

for all \( n \in \mathbb{N} \) . Then the sequence \( \left( {Q}_{n}\right) \) is convergent.
|
Proof. Let \( f \in C\left\lbrack {a, b}\right\rbrack \) and \( \varepsilon > 0 \) be arbitrary. By the Weierstrass approximation theorem (see [16]) there exists a polynomial \( p \) such that

\[ 
\parallel f - p{\parallel }_{\infty } \leq \frac{\varepsilon }{2\left( {C + b - a}\right) }.
\]

Then, since by (9.11) we have \( {Q}_{n}\left( p\right) \rightarrow Q\left( p\right) \) as \( n \rightarrow \infty \), there exists \( N\left( \varepsilon \right) \in \mathbb{N} \) such that

\[ 
\left| {{Q}_{n}\left( p\right) - Q\left( p\right) }\right| \leq \frac{\varepsilon }{2}
\]

for all \( n \geq N\left( \varepsilon \right) \) . Now with the aid of the triangle inequality and using (9.12) we can estimate

\[ 
\left| {{Q}_{n}\left( f\right) - Q\left( f\right) }\right| \leq \mathop{\sum }\limits_{{k = 0}}^{n}\left| {a}_{k}^{\left( n\right) }\right| \left| {f\left( {x}_{k}^{\left( n\right) }\right) - p\left( {x}_{k}^{\left( n\right) }\right) }\right| + \left| {{Q}_{n}\left( p\right) - Q\left( p\right) }\right|
\]

\[ 
+ {\int }_{a}^{b}\left| {p\left( x\right) - f\left( x\right) }\right| {dx}
\]

\[ 
\leq \frac{C\varepsilon }{2\left( {C + b - a}\right) } + \frac{\varepsilon }{2} + \frac{\left( {b - a}\right) \varepsilon }{2\left( {C + b - a}\right) } = \varepsilon
\]

for all \( n \geq N\left( \varepsilon \right) \) ; i.e., \( {Q}_{n}\left( f\right) \rightarrow Q\left( f\right) \) for \( n \rightarrow \infty \) .
|
Corollary 9.11 (Steklov) Assume that the sequence \( \left( {Q}_{n}\right) \) of quadrature formulae converges for all polynomials and that all the weights are nonnegative. Then the sequence \( \left( {Q}_{n}\right) \) is convergent.
|
Proof. This follows from\n\n\[ \mathop{\sum }\limits_{{k = 0}}^{n}\left| {a}_{k}^{\left( n\right) }\right| = \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}^{\left( n\right) } = {Q}_{n}\left( 1\right) \rightarrow {\int }_{a}^{b}{dx} = b - a,\;n \rightarrow \infty ,\]\n\nand the preceding Theorem 9.10.
|
Lemma 9.13 Let \( {x}_{0},\ldots ,{x}_{n} \) be the \( n + 1 \) distinct quadrature points of a Gaussian quadrature formula. Then\n\n\[ \n{\int }_{a}^{b}w\left( x\right) {q}_{n + 1}\left( x\right) q\left( x\right) {dx} = 0 \n\]\n\n(9.16)\n\nfor \( {q}_{n + 1}\left( x\right) \mathrel{\text{:=}} \left( {x - {x}_{0}}\right) \cdots \left( {x - {x}_{n}}\right) \) and all \( q \in {P}_{n} \) .
|
Proof. Since \( {q}_{n + 1}q \in {P}_{{2n} + 1} \) and \( {q}_{n + 1}\left( {x}_{k}\right) = 0 \), we have that\n\n\[ \n{\int }_{a}^{b}w\left( x\right) {q}_{n + 1}\left( x\right) q\left( x\right) {dx} = \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}{q}_{n + 1}\left( {x}_{k}\right) q\left( {x}_{k}\right) = 0 \n\]\n\nfor all \( q \in {P}_{n} \) .
|
Lemma 9.14 Let \( {x}_{0},\ldots ,{x}_{n} \) be \( n + 1 \) distinct points satisfying the condition (9.16). Then the corresponding polynomial interpolatory quadrature is a Gaussian quadrature formula.
|
Proof. Let \( {L}_{n} \) denote the polynomial interpolation operator for the interpolation points \( {x}_{0},\ldots ,{x}_{n} \) . By construction, for the interpolatory quadrature we have\n\n\[ \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}f\left( {x}_{k}\right) = {\int }_{a}^{b}w\left( x\right) \left( {{L}_{n}f}\right) \left( x\right) {dx} \]\n\n(9.17)\n\nfor all \( f \in C\left\lbrack {a, b}\right\rbrack \) . Each \( p \in {P}_{{2n} + 1} \) can be represented in the form\n\n\[ p = {L}_{n}p + {q}_{n + 1}q \]\n\nfor some \( q \in {P}_{n} \), since the polynomial \( p - {L}_{n}p \) vanishes at the points \( {x}_{0},\ldots ,{x}_{n} \) . Then from (9.16) and (9.17) we obtain that\n\n\[ {\int }_{a}^{b}w\left( x\right) p\left( x\right) {dx} = {\int }_{a}^{b}w\left( x\right) \left( {{L}_{n}p}\right) \left( x\right) {dx} = \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}p\left( {x}_{k}\right) \]\n\nfor all \( p \in {P}_{{2n} + 1} \) .
|
Lemma 9.15 There exists a unique sequence \( \left( {q}_{n}\right) \) of polynomials of the form \( {q}_{0} = 1 \) and\n\n\[ \n{q}_{n}\left( x\right) = {x}^{n} + {r}_{n - 1}\left( x\right) ,\;n = 1,2,\ldots ,\n\]\n\nwith \( {r}_{n - 1} \in {P}_{n - 1} \) satisfying the orthogonality relation\n\n\[ \n{\int }_{a}^{b}w\left( x\right) {q}_{n}\left( x\right) {q}_{m}\left( x\right) {dx} = 0,\;n \neq m\n\]\n\n(9.18)\n\nand\n\n\[ \n{P}_{n} = \operatorname{span}\left\{ {{q}_{0},\ldots ,{q}_{n}}\right\} ,\;n = 0,1,\ldots \n\]\n\n(9.19)
|
Proof. This follows by the Gram-Schmidt orthogonalization procedure from Theorem 3.18 applied to the linearly independent functions \( {u}_{n}\left( x\right) \mathrel{\text{:=}} {x}^{n} \) for \( n = 0,1,\ldots \) and the scalar product\n\n\[ \n\left( {f, g}\right) \mathrel{\text{:=}} {\int }_{a}^{b}w\left( x\right) f\left( x\right) g\left( x\right) {dx}\n\]\n\nfor \( f, g \in C\left\lbrack {a, b}\right\rbrack \) . The positive definiteness of the scalar product is a consequence of \( w \) being positive in \( \left( {a, b}\right) \) .
|
Lemma 9.16 Each of the orthogonal polynomials \( {q}_{n} \) from Lemma 9.15 has \( n \) simple zeros in \( \left( {a, b}\right) \) .
|
Proof. For \( m = 0 \), from (9.18) we have that\n\n\[ \n{\int }_{a}^{b}w\left( x\right) {q}_{n}\left( x\right) {dx} = 0 \n\]\n\nfor \( n > 0 \) . Hence, since \( w \) is positive on \( \left( {a, b}\right) \), the polynomial \( {q}_{n} \) must have at least one zero in \( \left( {a, b}\right) \) where the sign of \( {q}_{n} \) changes. Denote by \( {x}_{1},\ldots ,{x}_{m} \) the zeros of \( {q}_{n} \) in \( \left( {a, b}\right) \) where \( {q}_{n} \) changes its sign. We assume that \( m < n \) and set \( {r}_{m}\left( x\right) \mathrel{\text{:=}} \left( {x - {x}_{1}}\right) \cdots \left( {x - {x}_{m}}\right) \) . Then \( {r}_{m} \in {P}_{n - 1} \) and therefore\n\n\[ \n{\int }_{a}^{b}w\left( x\right) {r}_{m}\left( x\right) {q}_{n}\left( x\right) {dx} = 0. \n\]\n\nHowever, this integral must be different from zero, since \( {r}_{m}{q}_{n} \) does not change its sign on \( \left( {a, b}\right) \) and does not vanish identically. Hence, we have arrived at a contradiction, and consequently \( m = n \) .
|
Theorem 9.17 For each \( n = 0,1,\ldots \) there exists a unique Gaussian quadrature formula of order \( n \) . Its quadrature points are given by the zeros of the orthogonal polynomial \( {q}_{n + 1} \) of degree \( n + 1 \) .
|
Proof. This is a consequence of Lemmas 9.13-9.16.
|
Theorem 9.18 The weights of the Gaussian quadrature formulae are all positive.
|
Proof. Define\n\n\[ \n{f}_{k}\left( x\right) \mathrel{\text{:=}} {\left\lbrack \frac{{q}_{n + 1}\left( x\right) }{x - {x}_{k}}\right\rbrack }^{2},\;k = 0,\ldots, n. \]\n\nThen\n\n\[ \n{a}_{k}{\left\lbrack {q}_{n + 1}^{\prime }\left( {x}_{k}\right) \right\rbrack }^{2} = \mathop{\sum }\limits_{{j = 0}}^{n}{a}_{j}{f}_{k}\left( {x}_{j}\right) = {\int }_{a}^{b}w\left( x\right) {f}_{k}\left( x\right) {dx} > 0, \]\n\nsince \( {f}_{k} \in {P}_{2n} \), and the theorem is proven.
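For illustration, the positivity of the weights and the exactness on \( {P}_{{2n}+1} \) can be checked numerically. The following sketch is not part of the text; it assumes the classical case \( w = 1 \) on \( \left\lbrack {-1,1}\right\rbrack \) and NumPy's Gauss-Legendre routine for the nodes and weights.

```python
import numpy as np

# Gauss-Legendre rule of order n (n + 1 quadrature points) for w = 1 on [-1, 1].
n = 5
nodes, weights = np.polynomial.legendre.leggauss(n + 1)

# Theorem 9.18: all weights are positive.
assert np.all(weights > 0)

# The rule integrates every monomial up to degree 2n + 1 exactly.
for degree in range(2 * n + 2):
    exact = 0.0 if degree % 2 else 2.0 / (degree + 1)  # int_{-1}^{1} x^deg dx
    approx = np.sum(weights * nodes**degree)
    assert abs(approx - exact) < 1e-12
```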
|
Corollary 9.19 The sequence of Gaussian quadrature formulae is convergent.
|
Proof. For each polynomial \( p \) we have\n\n\[ \n{Q}_{n}\left( p\right) = {\int }_{a}^{b}w\left( x\right) p\left( x\right) {dx} \]\n\nprovided that \( {2n} + 1 \) is greater than or equal to the degree of \( p \) . From their proofs it is obvious that Theorem 9.10 and its Corollary 9.11 remain valid for the integral with the weight function \( w \) . Hence, the statement of the corollary follows from Theorem 9.18.
|
Theorem 9.20 Let \( f \in {C}^{{2n} + 2}\left\lbrack {a, b}\right\rbrack \) . Then the error for the Gaussian quadrature formula of order \( n \) is given by\n\n\[{\int }_{a}^{b}w\left( x\right) f\left( x\right) {dx} - \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}f\left( {x}_{k}\right) = \frac{{f}^{\left( 2n + 2\right) }\left( \xi \right) }{\left( {{2n} + 2}\right) !}{\int }_{a}^{b}w\left( x\right) {\left\lbrack {q}_{n + 1}\left( x\right) \right\rbrack }^{2}{dx}\]\n\nfor some \( \xi \in \left\lbrack {a, b}\right\rbrack \) .
|
Proof. Recall the Hermite interpolation polynomial \( {H}_{n}f \in {P}_{{2n} + 1} \) for \( f \) from Theorem 8.18. Since \( \left( {{H}_{n}f}\right) \left( {x}_{k}\right) = f\left( {x}_{k}\right), k = 0,\ldots, n \), for the error\n\n\[{E}_{n}\left( f\right) \mathrel{\text{:=}} {\int }_{a}^{b}w\left( x\right) f\left( x\right) {dx} - \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}f\left( {x}_{k}\right)\]\n\nwe can write\n\n\[{E}_{n}\left( f\right) = {\int }_{a}^{b}w\left( x\right) \left\lbrack {f\left( x\right) - \left( {{H}_{n}f}\right) \left( x\right) }\right\rbrack {dx}.\]\n\nThen as in the proofs of Theorems 9.7 and 9.8, using the mean value theorem we obtain\n\n\[{E}_{n}\left( f\right) = \frac{f\left( z\right) - \left( {{H}_{n}f}\right) \left( z\right) }{{\left\lbrack {q}_{n + 1}\left( z\right) \right\rbrack }^{2}}{\int }_{a}^{b}w\left( x\right) {\left\lbrack {q}_{n + 1}\left( x\right) \right\rbrack }^{2}{dx}\]\n\nfor some \( z \in \left\lbrack {a, b}\right\rbrack \) . Now the proof is finished with the aid of the error representation for Hermite interpolation from Theorem 8.19.
|
Example 9.21 We consider the Gaussian quadrature formulae for the weight function\n\n\[ \nw\left( x\right) = \frac{1}{\sqrt{1 - {x}^{2}}},\;x \in \left\lbrack {-1,1}\right\rbrack .\n\]
|
The Chebyshev polynomial \( {T}_{n} \) of degree \( n \) is defined by\n\n\[ \n{T}_{n}\left( x\right) \mathrel{\text{:=}} \cos \left( {n\arccos x}\right) ,\; - 1 \leq x \leq 1.\n\]\n\nObviously \( {T}_{0}\left( x\right) = 1 \) and \( {T}_{1}\left( x\right) = x \) . From the addition theorem for the cosine function, \( \cos \left( {n + 1}\right) t + \cos \left( {n - 1}\right) t = 2\cos t\cos {nt} \), we can deduce the recursion formula\n\n\[ \n{T}_{n + 1}\left( x\right) + {T}_{n - 1}\left( x\right) = {2x}{T}_{n}\left( x\right) ,\;n = 1,2,\ldots\n\]\n\nHence we have that \( {T}_{n} \in {P}_{n} \) with leading term\n\n\[ \n{T}_{n}\left( x\right) = {2}^{n - 1}{x}^{n} + \cdots ,\;n = 1,2,\ldots\n\]\n\nSubstituting \( x = \cos t \) we find that\n\n\[ \n{\int }_{-1}^{1}\frac{{T}_{n}\left( x\right) {T}_{m}\left( x\right) }{\sqrt{1 - {x}^{2}}}{dx} = {\int }_{0}^{\pi }\cos {nt}\cos {mt} \cdot {dt} = \left\{ \begin{array}{ll} \pi , & n = m = 0, \\ \frac{\pi }{2}, & n = m > 0, \\ 0, & n \neq m. \end{array}\right.\n\]\n\nHence, the orthogonal polynomials \( {q}_{n} \) of Lemma 9.15 are given by \( {q}_{n} = {2}^{1 - n}{T}_{n} \) . The zeros of \( {T}_{n} \) and hence the quadrature points are given by\n\n\[ \n{x}_{k} = \cos \left( {\frac{{2k} + 1}{2n}\pi }\right) ,\;k = 0,\ldots, n - 1.\n\]\n\nThe weights can be most easily derived from the exactness conditions\n\n\[ \n\mathop{\sum }\limits_{{k = 0}}^{{n - 1}}{a}_{k}{T}_{m}\left( {x}_{k}\right) = {\int }_{-1}^{1}\frac{{T}_{m}\left( x\right) }{\sqrt{1 - {x}^{2}}}{dx},\;m = 0,\ldots, n - 1,\n\]\n\nfor the interpolation quadrature, i.e., from\n\n\[ \n\mathop{\sum }\limits_{{k = 0}}^{{n - 1}}{a}_{k}\cos \frac{\left( {{2k} + 1}\right) m}{2n}\pi = \left\{ \begin{array}{ll} \pi , & m = 0, \\ 0, & m = 1,\ldots, n - 1. 
\end{array}\right.\n\]\n\nFrom our analysis of trigonometric interpolation, i.e., from (8.19), we see that the unique solution of this linear system is given by\n\n\[ \n{a}_{k} = \frac{\pi }{n},\;k = 0,\ldots, n - 1\n\]\n\n\nHence, for \( n = 1,2,\ldots \) the Gauss-Chebyshev quadrature of order \( n - 1 \) is given by\n\n\[ \n{\int }_{-1}^{1}\frac{f\left( x\right) }{\sqrt{1 - {x}^{2}}}{dx} \approx \frac{\pi }{n}\mathop{\sum }\limits_{{k = 0}}^{{n - 1}}f\left( {\cos \frac{{2k} + 1}{2n}\pi }\right) .\n\]\n\nFrom Theorem 9.20 we have the error representation\n\n\[ \n{\int }_{-1}^{1}\frac{f\left( x\right) }{\sqrt{1 - {x}^{2}}}{dx} - \frac{\pi }{n}\mathop{\sum }\limits_{{k = 0}}^{{n - 1}}f\left( {\cos \frac{{2k} + 1}{2n}\pi }\right) = \frac{\pi {f}^{\left( 2n\right) }\left( \xi \right) }{{2}^{{2n} - 1}\left( {2n}\right) !}\n\]\n\nfor some \( \xi \in \left\lbrack {-1,1}\right\rbrack \) .
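The Gauss-Chebyshev rule above is simple enough to state in a few lines of code. The following Python sketch (an illustration, not part of the text) applies it to \( f\left( x\right) = {x}^{2} \), for which \( {\int }_{-1}^{1}{x}^{2}/\sqrt{1 - {x}^{2}}\,{dx} = \pi /2 \).

```python
import math

def gauss_chebyshev(f, n):
    # Quadrature points x_k = cos((2k + 1) pi / (2n)), equal weights pi / n,
    # exact for polynomials of degree up to 2n - 1.
    return (math.pi / n) * sum(
        f(math.cos((2 * k + 1) * math.pi / (2 * n))) for k in range(n)
    )

approx = gauss_chebyshev(lambda x: x * x, 4)
assert abs(approx - math.pi / 2) < 1e-12  # exact up to rounding
```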
|
The Legendre polynomial \( {L}_{n} \) of degree \( n \) is defined by \[ {L}_{n}\left( x\right) \mathrel{\text{:=}} \frac{1}{{2}^{n}n!}\frac{{d}^{n}}{d{x}^{n}}{\left( {x}^{2} - 1\right) }^{n}. \]
|
Obviously, \( {L}_{n} \in {P}_{n} \) . If \( m < n \), by repeated partial integration we see that \[ {\int }_{-1}^{1}{x}^{m}\frac{{d}^{n}}{d{x}^{n}}{\left( {x}^{2} - 1\right) }^{n}{dx} = 0 \] since \( {\left( {x}^{2} - 1\right) }^{n} \) has zeros of order \( n \) at the endpoints -1 and 1 . Therefore, \[ {\int }_{-1}^{1}{L}_{n}\left( x\right) {L}_{m}\left( x\right) {dx} = 0,\;n \neq m. \]
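The orthogonality relation can be verified numerically. The sketch below is an illustration only; it assumes NumPy's `numpy.polynomial.Legendre` class, whose series arithmetic lets us form the product \( {L}_{n}{L}_{m} \) and integrate it exactly.

```python
from numpy.polynomial import Legendre

def inner(n, m):
    # Antiderivative of L_n * L_m, evaluated between -1 and 1.
    F = (Legendre.basis(n) * Legendre.basis(m)).integ()
    return F(1.0) - F(-1.0)

assert abs(inner(3, 5)) < 1e-12              # n != m: orthogonal
assert abs(inner(4, 4) - 2.0 / 9.0) < 1e-12  # n = m: 2 / (2n + 1)
```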
|
Lemma 9.24 The Bernoulli polynomials have the symmetry property\n\n\[ \n{B}_{n}\left( x\right) = {\left( -1\right) }^{n}{B}_{n}\left( {1 - x}\right) ,\;x \in \mathbb{R},\;n = 0,1,\ldots \n\]\n\n(9.24)
|
Proof. Obviously (9.24) holds for \( n = 0 \) . Assume that (9.24) has been proven for some \( n \geq 0 \) . Then, integrating (9.24), we obtain\n\n\[ \n{B}_{n + 1}\left( x\right) = {\left( -1\right) }^{n + 1}{B}_{n + 1}\left( {1 - x}\right) + {\beta }_{n + 1} \n\]\n\nfor some constant \( {\beta }_{n + 1} \) . The condition (9.22) implies that \( {\beta }_{n + 1} = 0 \), and therefore (9.24) is also valid for \( n + 1 \) .
|
Lemma 9.25 The Bernoulli polynomials \( {B}_{{2m} + 1}, m = 1,2,\ldots \), of odd degree have exactly three zeros in \( \left\lbrack {0,1}\right\rbrack \), and these zeros are at the points \( 0,1/2 \), and 1 . The Bernoulli polynomials \( {B}_{2m}, m = 0,1,\ldots \), of even degree satisfy \( {B}_{2m}\left( 0\right) \neq 0 \) .
|
Proof. From (9.23) and (9.24) we conclude that \( {B}_{{2m} + 1} \) vanishes at the points \( 0,1/2 \), and 1 . We prove by induction that these are the only zeros of \( {B}_{{2m} + 1} \) in \( \left\lbrack {0,1}\right\rbrack \) . This is true for \( m = 1 \), since \( {B}_{3} \) is a polynomial of degree three. Assume that we have proven that \( {B}_{{2m} + 1} \) has only the three zeros \( 0,1/2 \), and \( 1 \) in \( \left\lbrack {0,1}\right\rbrack \), and assume that \( {B}_{{2m} + 3} \) has an additional zero \( \alpha \) in \( \left\lbrack {0,1}\right\rbrack \) . Because of the symmetry (9.24) we may assume that \( \alpha \in \left( {0,1/2}\right) \) . Then, by Rolle’s theorem, we conclude that \( {B}_{{2m} + 2} \) has at least one zero in \( \left( {0,\alpha }\right) \) and also at least one zero in \( \left( {\alpha ,1/2}\right) \) . Again by Rolle’s theorem this implies that \( {B}_{{2m} + 1} \) has a zero in \( \left( {0,1/2}\right) \), which contradicts the induction assumption.\n\nFrom the zeros of \( {B}_{{2m} + 1} \), by Rolle’s theorem it follows that \( {B}_{2m} \) has a zero in \( \left( {0,1/2}\right) \) . Assume that \( {B}_{2m}\left( 0\right) = 0 \) . Then, by Rolle’s theorem, \( {B}_{{2m} - 1} \) has a zero in \( \left( {0,1/2}\right) \), which contradicts the first part of the lemma.
|
Theorem 9.26 Let \( f : \left\lbrack {a, b}\right\rbrack \rightarrow \mathbb{R} \) be \( m \) times continuously differentiable for \( m \geq 2 \) . Then we have the Euler-Maclaurin expansion\n\n\[{\int }_{a}^{b}f\left( x\right) {dx} = {T}_{h}\left( f\right) - \mathop{\sum }\limits_{{j = 1}}^{\left\lbrack \frac{m}{2}\right\rbrack }\frac{{b}_{2j}{h}^{2j}}{\left( {2j}\right) !}\left\lbrack {{f}^{\left( 2j - 1\right) }\left( b\right) - {f}^{\left( 2j - 1\right) }\left( a\right) }\right\rbrack\n\]\n(9.27)\n\n\[+ {\left( -1\right) }^{m}{h}^{m}{\int }_{a}^{b}{\widetilde{B}}_{m}\left( \frac{x - a}{h}\right) {f}^{\left( m\right) }\left( x\right) {dx}\]\n\nwhere \( \left\lbrack \frac{m}{2}\right\rbrack \) denotes the largest integer smaller than or equal to \( \frac{m}{2} \) .
|
Proof. Let \( g \in {C}^{m}\left\lbrack {0,1}\right\rbrack \) . Then, by \( m - 1 \) partial integrations and using (9.23) we find that\n\n\[{\int }_{0}^{1}{B}_{1}\left( z\right) {g}^{\prime }\left( z\right) {dz} = \mathop{\sum }\limits_{{j = 2}}^{m}{\left( -1\right) }^{j}{B}_{j}\left( 0\right) \left\lbrack {{g}^{\left( j - 1\right) }\left( 1\right) - {g}^{\left( j - 1\right) }\left( 0\right) }\right\rbrack\n\]\n\[- {\left( -1\right) }^{m}{\int }_{0}^{1}{B}_{m}\left( z\right) {g}^{\left( m\right) }\left( z\right) {dz}\]\n\nCombining this with the partial integration\n\n\[{\int }_{0}^{1}{B}_{1}\left( z\right) {g}^{\prime }\left( z\right) {dz} = \frac{1}{2}\left\lbrack {g\left( 1\right) + g\left( 0\right) }\right\rbrack - {\int }_{0}^{1}g\left( z\right) {dz}\]\n\nand observing that the odd Bernoulli numbers vanish leads to\n\n\[{\int }_{0}^{1}g\left( z\right) {dz} = \frac{1}{2}\left\lbrack {g\left( 0\right) + g\left( 1\right) }\right\rbrack - \mathop{\sum }\limits_{{j = 1}}^{\left\lbrack \frac{m}{2}\right\rbrack }\frac{{b}_{2j}}{\left( {2j}\right) !}\left\lbrack {{g}^{\left( 2j - 1\right) }\left( 1\right) - {g}^{\left( 2j - 1\right) }\left( 0\right) }\right\rbrack\n\]\n\[+ {\left( -1\right) }^{m}{\int }_{0}^{1}{B}_{m}\left( z\right) {g}^{\left( m\right) }\left( z\right) {dz}\]\n\nNow we substitute \( x = {x}_{k} + {hz} \) and \( g\left( z\right) = f\left( {{x}_{k} + {hz}}\right) \) to obtain\n\n\[{\int }_{{x}_{k}}^{{x}_{k + 1}}f\left( x\right) {dx} = \frac{h}{2}\left\lbrack {f\left( {x}_{k}\right) + f\left( {x}_{k + 1}\right) }\right\rbrack\n\]\n\[- \mathop{\sum }\limits_{{j = 1}}^{\left\lbrack \frac{m}{2}\right\rbrack }\frac{{b}_{2j}{h}^{2j}}{\left( {2j}\right) !}\left\lbrack {{f}^{\left( 2j - 1\right) }\left( {x}_{k + 1}\right) - {f}^{\left( 2j - 1\right) }\left( {x}_{k}\right) }\right\rbrack\n\]\n\[+ {\left( -1\right) }^{m}{h}^{m}{\int }_{{x}_{k}}^{{x}_{k + 1}}{B}_{m}\left( \frac{x - a}{h}\right) {f}^{\left( m\right) }\left( x\right) {dx}.\n\]\nFinally, we sum the 
last equation for \( k = 0,\ldots, n - 1 \) to arrive at the Euler-Maclaurin expansion (9.27).
|
Corollary 9.27 Let \( f : \mathbb{R} \rightarrow \mathbb{R} \) be \( \left( {{2m} + 1}\right) \) -times continuously differentiable and \( {2\pi } \) -periodic for \( m \in \mathbb{N} \) and let \( n \in \mathbb{N} \) . Then for the error of the rectangular rule we have\n\n\[ \left| {{E}_{n}\left( f\right) }\right| \leq \frac{C}{{n}^{{2m} + 1}}{\int }_{0}^{2\pi }\left| {{f}^{\left( 2m + 1\right) }\left( x\right) }\right| {dx} \]\n\nwhere\n\n\[ C \mathrel{\text{:=}} 2\mathop{\sum }\limits_{{k = 1}}^{\infty }\frac{1}{{k}^{{2m} + 1}}. \]
|
Proof. From Theorem 9.26 we have that\n\n\[ {E}_{n}\left( f\right) = - {\left( \frac{2\pi }{n}\right) }^{{2m} + 1}{\int }_{0}^{2\pi }{\widetilde{B}}_{{2m} + 1}\left( \frac{nx}{2\pi }\right) {f}^{\left( 2m + 1\right) }\left( x\right) {dx} \]\n\nand the estimate follows from the inequality\n\n\[ \left| {{\widetilde{B}}_{{2m} + 1}\left( x\right) }\right| \leq 2\mathop{\sum }\limits_{{k = 1}}^{\infty }\frac{1}{{\left( 2\pi k\right) }^{{2m} + 1}},\;x \in \mathbb{R}, \]\n\nwhich is a consequence of (9.26).
|
Theorem 9.28 Let \( f : \mathbb{R} \rightarrow \mathbb{R} \) be analytic and \( {2\pi } \) -periodic. Then there exists a strip \( D = \mathbb{R} \times \left( {-a, a}\right) \subset \mathbb{C} \) with \( a > 0 \) such that \( f \) can be extended to a holomorphic and \( {2\pi } \) -periodic bounded function \( f : D \rightarrow \mathbb{C} \) . The error for the rectangular rule can be estimated by\n\n\[ \left| {{E}_{n}\left( f\right) }\right| \leq \frac{4\pi M}{{e}^{na} - 1} \]\n\nwhere \( M \) denotes a bound for the holomorphic function \( f \) on \( D \) .
|
Proof. Since \( f : \mathbb{R} \rightarrow \mathbb{R} \) is analytic, at each point \( x \in \mathbb{R} \) the Taylor expansion provides a holomorphic extension of \( f \) into some open disk in the complex plane with radius \( r\left( x\right) > 0 \) and center \( x \) . The extended function again has period \( {2\pi } \), since the coefficients of the Taylor series at \( x \) and at \( x + {2\pi } \) coincide for the \( {2\pi } \) -periodic function \( f : \mathbb{R} \rightarrow \mathbb{R} \) . The disks corresponding to all points of the interval \( \left\lbrack {0,{2\pi }}\right\rbrack \) provide an open covering of \( \left\lbrack {0,{2\pi }}\right\rbrack \) . Since \( \left\lbrack {0,{2\pi }}\right\rbrack \) is compact, a finite number of these disks suffices to cover \( \left\lbrack {0,{2\pi }}\right\rbrack \) . Then we have an extension into a strip \( D \) with finite width \( {2a} \) contained in the union of the finite number of disks. Without loss of generality we may assume that \( f \) is bounded on \( D \) .\n\nFrom the residue theorem we have that\n\n\[ {\int }_{i\alpha }^{{i\alpha } + {2\pi }}\cot \frac{nz}{2}f\left( z\right) {dz} - {\int }_{-{i\alpha }}^{-{i\alpha } + {2\pi }}\cot \frac{nz}{2}f\left( z\right) {dz} = - \frac{4\pi i}{n}\mathop{\sum }\limits_{{k = 1}}^{n}f\left( \frac{2\pi k}{n}\right) \]\n\nfor each \( 0 < \alpha < a \) . This implies that\n\n\[ \operatorname{Re}{\int }_{i\alpha }^{{i\alpha } + {2\pi }}i\cot \frac{nz}{2}f\left( z\right) {dz} = \frac{2\pi }{n}\mathop{\sum }\limits_{{k = 1}}^{n}f\left( \frac{2\pi k}{n}\right) ,\]\n\nsince by the Schwarz reflection principle, \( f \) enjoys the symmetry property \( f\left( \bar{z}\right) = \overline{f\left( z\right) } \) . 
By Cauchy’s integral theorem we have\n\n\[ \operatorname{Re}{\int }_{i\alpha }^{{i\alpha } + {2\pi }}f\left( z\right) {dz} = {\int }_{0}^{2\pi }f\left( x\right) {dx} \]\n\nand combining the last two equations yields\n\n\[ {E}_{n}\left( f\right) = \operatorname{Re}{\int }_{i\alpha }^{{i\alpha } + {2\pi }}\left( {1 - i\cot \frac{nz}{2}}\right) f\left( z\right) {dz} \]\nfor all \( 0 < \alpha < a \) . Now the estimate follows from\n\n\[ \left| {1 - i\cot \frac{nz}{2}}\right| \leq \frac{2}{{e}^{n\alpha } - 1} \]\n\nfor \( \operatorname{Im}z = \alpha \) and then passing to the limit \( \alpha \rightarrow a \) .
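The exponential decay of the error predicted by Theorem 9.28 is easy to observe in practice. As an illustrative sketch (not from the text), we apply the rectangular rule to the \( {2\pi } \) -periodic analytic integrand \( f\left( x\right) = 1/\left( {2 + \cos x}\right) \), whose integral over \( \left\lbrack {0,{2\pi }}\right\rbrack \) has the known closed form \( 2\pi /\sqrt{3} \).

```python
import math

def rectangular(f, n):
    # Rectangular (equidistant trapezoidal) rule for a 2*pi-periodic f.
    return (2 * math.pi / n) * sum(f(2 * math.pi * k / n) for k in range(n))

f = lambda x: 1.0 / (2.0 + math.cos(x))
exact = 2 * math.pi / math.sqrt(3.0)

err_8 = abs(rectangular(f, 8) - exact)
err_16 = abs(rectangular(f, 16) - exact)
# Doubling n multiplies the error by roughly exp(-8a), not merely by 2^{-p}.
assert err_16 < err_8 * 1e-2
assert err_16 < 1e-7
```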
|
Theorem 9.29 Let \( f : \left\lbrack {a, b}\right\rbrack \rightarrow \mathbb{R} \) be \( {2m} \) -times continuously differentiable. Then for the Romberg quadratures we have the error estimate\n\n\[ \n\left| {{\int }_{a}^{b}f\left( x\right) {dx} - {T}_{k}^{m}\left( f\right) }\right| \leq {C}_{m}{\begin{Vmatrix}{f}^{\left( 2m\right) }\end{Vmatrix}}_{\infty }{\left( \frac{h}{{2}^{k}}\right) }^{2m},\;k = 0,1,\ldots ,\n\]\n\nfor some constant \( {C}_{m} \) depending on \( m \) .
|
Proof. By induction, we show that there exist constants \( {\gamma }_{j, i} \) such that\n\n\[ \n\left| {{\int }_{a}^{b}f\left( x\right) {dx} - {T}_{k}^{i}\left( f\right) - \mathop{\sum }\limits_{{j = i}}^{{m - 1}}{\gamma }_{j, i}\left\lbrack {{f}^{\left( 2j - 1\right) }\left( b\right) - {f}^{\left( 2j - 1\right) }\left( a\right) }\right\rbrack {\left( \frac{h}{{2}^{k}}\right) }^{2j}}\right| \n\]\n\n(9.29)\n\n\[ \n\leq {\gamma }_{m, i}{\begin{Vmatrix}{f}^{\left( 2m\right) }\end{Vmatrix}}_{\infty }{\left( \frac{h}{{2}^{k}}\right) }^{2m} \n\]\n\nfor \( i = 1,\ldots, m \) and \( k = 0,1,\ldots \) . Here the sum on the left-hand side is set equal to zero for \( i = m \) . By the Euler-Maclaurin expansion this is true for \( i = 1 \) with \( {\gamma }_{j,1} = {b}_{2j}/\left( {2j}\right) \) ! for \( j = 1,\ldots, m - 1 \) and\n\n\[ \n{\gamma }_{m,1} = \left( {b - a}\right) \mathop{\sup }\limits_{{x \in \left\lbrack {0,1}\right\rbrack }}\left| {{B}_{2m}\left( x\right) }\right| . \n\]\n\nAs an abbreviation we set\n\n\[ \n{F}_{j} \mathrel{\text{:=}} {f}^{\left( 2j - 1\right) }\left( b\right) - {f}^{\left( 2j - 1\right) }\left( a\right) ,\;j = 1,\ldots, m - 1. \n\]\n\nAssume that (9.29) has been shown for some \( 1 \leq i < m \) . 
Then, using (9.28), we obtain\n\n\[ \n\frac{{4}^{i}}{{4}^{i} - 1}\left\lbrack {{\int }_{a}^{b}f\left( x\right) {dx} - {T}_{k + 1}^{i}\left( f\right) - \mathop{\sum }\limits_{{j = i}}^{{m - 1}}{\left( \frac{h}{{2}^{k + 1}}\right) }^{2j}{\gamma }_{j, i}{F}_{j}}\right\rbrack \n\]\n\n\[ \n- \frac{1}{{4}^{i} - 1}\left\lbrack {{\int }_{a}^{b}f\left( x\right) {dx} - {T}_{k}^{i}\left( f\right) - \mathop{\sum }\limits_{{j = i}}^{{m - 1}}{\left( \frac{h}{{2}^{k}}\right) }^{2j}{\gamma }_{j, i}{F}_{j}}\right\rbrack \n\]\n\n\[ \n= {\int }_{a}^{b}f\left( x\right) {dx} - {T}_{k}^{i + 1}\left( f\right) - \mathop{\sum }\limits_{{j = i + 1}}^{{m - 1}}{\left( \frac{h}{{2}^{k}}\right) }^{2j}{\gamma }_{j, i + 1}{F}_{j}, \n\]\n\nwhere\n\n\[ \n{\gamma }_{j, i + 1} = \frac{{4}^{i - j} - 1}{{4}^{i} - 1}{\gamma }_{j, i},\;j = i + 1,\ldots, m - 1. \n\]\n\nNow with the aid of the induction assumption we can estimate\n\n\[ \n\left| {{\int }_{a}^{b}f\left( x\right) {dx} - {T}_{k}^{i + 1}\left( f\right) - \mathop{\sum }\limits_{{j = i + 1}}^{{m - 1}}{\left( \frac{h}{{2}^{k}}\right) }^{2j}{\gamma }_{j, i + 1}{F}_{j}}\right| \n\]\n\n\[ \n\leq {\gamma }_{m, i + 1}{\begin{Vmatrix}{f}^{\left( 2m\right) }\end{Vmatrix}}_{\infty }{\left( \frac{h}{{2}^{k}}\right) }^{2m} \n\]\n\nwhere\n\[ \n{\gamma }_{m, i + 1} = \frac{{4}^{i - m} + 1}{{4}^{i} - 1}\,{\gamma }_{m, i} \n\]\n\nand the proof is complete.
|
Theorem 9.30 The quadrature weights of the Romberg formulae are positive.
|
Proof. We define recursively \( {Q}_{k}^{1} \mathrel{\text{:=}} 4{T}_{k + 1}^{1} - 2{T}_{k}^{1} \) and\n\n\[ {Q}_{k}^{m + 1} \mathrel{\text{:=}} \frac{1}{{4}^{m} - 1}\left\lbrack {{2}^{{2m} + 1}{T}_{k + 1}^{m} + 2{T}_{k}^{m} + {4}^{m + 1}{Q}_{k + 1}^{m}}\right\rbrack \]\n\n(9.30)\n\nfor \( k = 1,2,\ldots \) and \( m = 1,2\ldots \) and show by induction that\n\n\[ {T}_{k}^{m + 1} = \frac{1}{{4}^{m} - 1}\left\lbrack {{T}_{k}^{m} + {Q}_{k}^{m}}\right\rbrack \]\n\n(9.31)\n\nBy the definition of \( {Q}_{k}^{1} \) this is true for \( m = 1 \) . We assume that (9.31) has been proven for some \( m \geq 1 \) . Then, using the recursive definitions of \( {T}_{k}^{m} \) and \( {Q}_{k}^{m} \) and the induction assumption, we derive\n\n\[ {T}_{k}^{m + 1} + {Q}_{k}^{m + 1} = \frac{{4}^{m + 1}}{{4}^{m} - 1}\left\lbrack {{T}_{k + 1}^{m} + {Q}_{k + 1}^{m}}\right\rbrack - \frac{1}{{4}^{m} - 1}\left\lbrack {{4}^{m}{T}_{k + 1}^{m} - {T}_{k}^{m}}\right\rbrack \]\n\n\[ = {4}^{m + 1}{T}_{k + 1}^{m + 1} - {T}_{k}^{m + 1} = \left( {{4}^{m + 1} - 1}\right) {T}_{k}^{m + 2}; \]\n\ni.e.,(9.31) also holds for \( m + 1 \) . Now, from (9.30) and (9.31), by induction with respect to \( m \), it can be deduced that the weights of \( {T}_{k}^{m} \) are positive and that the weights of \( {Q}_{k}^{m} \) are nonnegative.
|
Corollary 9.31 For the Romberg quadratures we have convergence:\n\n\[ \n\mathop{\lim }\limits_{{m \rightarrow \infty }}{T}_{k}^{m}\left( f\right) = {\int }_{a}^{b}f\left( x\right) {dx}\;\text{ and }\;\mathop{\lim }\limits_{{k \rightarrow \infty }}{T}_{k}^{m}\left( f\right) = {\int }_{a}^{b}f\left( x\right) {dx} \n\]\n\nfor all continuous functions \( f \) .
|
Proof. This follows from Theorems 9.29 and 9.30 and Corollary 9.11.
|
Theorem 9.32 Denote by \( {L}_{k}^{m} \) the uniquely determined polynomial in \( {h}^{2} \) of degree less than or equal to \( m \) with the interpolation property\n\n\[ \n{L}_{k}^{m}\left( {h}_{j}^{2}\right) = {T}_{j}^{1}\left( f\right) ,\;j = k,\ldots, k + m.\n\]\n\nThen the Romberg quadratures satisfy\n\n\[ \n{T}_{k}^{m + 1}\left( f\right) = {L}_{k}^{m}\left( 0\right)\n\]\n\n(9.32)
|
Proof. Obviously,(9.32) is true for \( m = 0 \) . Assume that it has been proven for \( m - 1 \) . Then, using the Neville scheme from Theorem 8.9, we obtain\n\n\[ \n{L}_{k}^{m}\left( 0\right) = \frac{1}{{h}_{k + m}^{2} - {h}_{k}^{2}}\left\lbrack {-{h}_{k}^{2}{L}_{k + 1}^{m - 1}\left( 0\right) + {h}_{k + m}^{2}{L}_{k}^{m - 1}\left( 0\right) }\right\rbrack\n\]\n\n\[ \n= \frac{1}{{h}_{k + m}^{2} - {h}_{k}^{2}}\left\lbrack {-{h}_{k}^{2}{T}_{k + 1}^{m} + {h}_{k + m}^{2}{T}_{k}^{m}}\right\rbrack\n\]\n\n\[ \n= \frac{1}{{4}^{m} - 1}\left\lbrack {{4}^{m}{T}_{k + 1}^{m} - {T}_{k}^{m}}\right\rbrack = {T}_{k}^{m + 1},\n\]\n\nestablishing (9.32) for \( m \) .
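The Romberg scheme is readily implemented from its two ingredients: the trapezoidal sums \( {T}_{k}^{1} \) with step \( h/{2}^{k} \) and the extrapolation \( {T}_{k}^{m + 1} = \left( {{4}^{m}{T}_{k + 1}^{m} - {T}_{k}^{m}}\right) /\left( {{4}^{m} - 1}\right) \). The following Python sketch is a minimal illustration of this recursion, not a reference implementation.

```python
import math

def romberg(f, a, b, levels):
    # T_k^1: composite trapezoidal rule with 2^k subintervals, k = 0, ..., levels - 1.
    T = []
    for k in range(levels):
        n = 2 ** k
        h = (b - a) / n
        T.append(h * (0.5 * f(a) + 0.5 * f(b)
                      + sum(f(a + j * h) for j in range(1, n))))
    # Richardson extrapolation: each pass eliminates the next h^{2m} error term.
    for m in range(1, levels):
        T = [(4**m * T[k + 1] - T[k]) / (4**m - 1) for k in range(len(T) - 1)]
    return T[0]

approx = romberg(math.exp, 0.0, 1.0, 6)
assert abs(approx - (math.e - 1.0)) < 1e-10
```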
|
Let \( p = p\left( t\right) \) describe the population of a species of animals or plants at time \( t \) . If \( r\left( {t, p}\right) \) denotes the growth rate given by the difference between the birth and death rate depending on the time \( t \) and the size \( p \) of the population, then an isolated population satisfies the differential equation\n\n\[ \frac{dp}{dt} = r\left( {t, p}\right) \]\n\nThe simplest model \( r\left( {t, p}\right) = {ap} \), where \( a \) is a positive constant, leads to\n\n\[ \frac{dp}{dt} = {ap} \]
|
with the explicit solution \( p\left( t\right) = {p}_{0}{e}^{a\left( {t - {t}_{0}}\right) } \) . Such an exponential growth is realistic only if the population is not too large.
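A first numerical illustration (a sketch added here, not part of the text): the forward Euler method applied to \( {p}^{\prime } = {ap} \) approaches the explicit solution \( p\left( t\right) = {p}_{0}{e}^{a\left( {t - {t}_{0}}\right) } \) as the step size decreases.

```python
import math

def euler(a, p0, t0, t1, n):
    # Forward Euler for p' = a p with n steps of size h = (t1 - t0) / n.
    h = (t1 - t0) / n
    p = p0
    for _ in range(n):
        p += h * a * p
    return p

exact = 2.0 * math.exp(0.5)  # a = 0.5, p0 = 2, t1 - t0 = 1
err_100 = abs(euler(0.5, 2.0, 0.0, 1.0, 100) - exact)
err_200 = abs(euler(0.5, 2.0, 0.0, 1.0, 200) - exact)
assert err_200 < err_100  # halving h reduces the first-order error
```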
|
Corollary 10.6 Under the assumptions of Theorem 10.5, the sequence \( \left( {u}_{\nu }\right) \) defined by \( {u}_{0}\left( x\right) = {u}_{0} \) and\n\n\[ \n{u}_{\nu + 1}\left( x\right) \mathrel{\text{:=}} {u}_{0} + {\int }_{{x}_{0}}^{x}f\left( {\xi ,{u}_{\nu }\left( \xi \right) }\right) {d\xi },\;\left| {x - {x}_{0}}\right| \leq a,\;\nu = 0,1,\ldots ,\n\]\n\n(10.6)\n\nconverges as \( \nu \rightarrow \infty \) uniformly on \( \left\lbrack {{x}_{0} - a,{x}_{0} + a}\right\rbrack \) to the unique solution \( u \) of the initial value problem. We have the a posteriori error estimate\n\n\[ \n{\begin{Vmatrix}u - {u}_{\nu }\end{Vmatrix}}_{\infty } \leq \frac{La}{1 - {La}}{\begin{Vmatrix}{u}_{\nu } - {u}_{\nu - 1}\end{Vmatrix}}_{\infty },\;\nu = 1,2,\ldots \n\]
|
Proof. This follows from Theorem 3.46.
|
Example 10.7 Consider the initial value problem\n\n\[ \n{u}^{\prime } = {x}^{2} + {u}^{2},\;u\left( 0\right) = 0 \n\] \n\non \( G = \left( {-{0.5},{0.5}}\right) \times \left( {-{0.5},{0.5}}\right) \) . For \( f\left( {x, u}\right) \mathrel{\text{:=}} {x}^{2} + {u}^{2} \) we have\n\n\[ \n\left| {f\left( {x, u}\right) }\right| \leq {0.5} \n\] \n\non \( G \) . Hence for any \( a < {0.5} \) and \( M = {0.5} \) the rectangle \( B \) from the proof of Theorem 10.5 satisfies \( B \subset G \) . Furthermore, we can estimate\n\n\[ \n\left| {f\left( {x, u}\right) - f\left( {x, v}\right) }\right| = \left| {{u}^{2} - {v}^{2}}\right| = \left| {\left( {u + v}\right) \left( {u - v}\right) }\right| \leq \left| {u - v}\right| \n\] \n\nfor all \( \left( {x, u}\right) ,\left( {x, v}\right) \in G \) ; i.e., \( f \) satisfies a Lipschitz condition with Lipschitz constant \( L = 1 \) . Thus in this case the contraction number in the Picard-Lindelöf theorem is given by \( {La} < {0.5} \) .
|
Here, the iteration (10.6) reads\n\n\[ \n{u}_{\nu + 1}\left( x\right) = {\int }_{0}^{x}\left\lbrack {{\xi }^{2} + {u}_{\nu }^{2}\left( \xi \right) }\right\rbrack {d\xi }.\n\] \n\nStarting with \( {u}_{0}\left( x\right) = 0 \) we first compute\n\n\[ \n{u}_{1}\left( x\right) = {\int }_{0}^{x}{\xi }^{2}{d\xi } = \frac{{x}^{3}}{3} \n\] \n\nand from Corollary 10.6 we have the error estimate\n\n\[ \n{\begin{Vmatrix}u - {u}_{1}\end{Vmatrix}}_{\infty } \leq {\begin{Vmatrix}{u}_{1} - {u}_{0}\end{Vmatrix}}_{\infty } = \frac{1}{24} = {0.041}\ldots \n\] \n\nThe second iteration yields\n\n\[ \n{u}_{2}\left( x\right) = {\int }_{0}^{x}\left\lbrack {{\xi }^{2} + \frac{{\xi }^{6}}{9}}\right\rbrack {d\xi } = \frac{{x}^{3}}{3} + \frac{{x}^{7}}{63} \n\] \n\nwith the error estimate\n\n\[ \n{\begin{Vmatrix}u - {u}_{2}\end{Vmatrix}}_{\infty } \leq {\begin{Vmatrix}{u}_{2} - {u}_{1}\end{Vmatrix}}_{\infty } = \frac{1}{{63} \cdot {2}^{7}} = {0.00012}\ldots , \n\] \n\nand the third iteration yields\n\n\[ \n{u}_{3}\left( x\right) = {\int }_{0}^{x}\left\lbrack {{\xi }^{2} + \frac{{\xi }^{6}}{9} + \frac{2{\xi }^{10}}{189} + \frac{{\xi }^{14}}{3969}}\right\rbrack {d\xi } = \frac{{x}^{3}}{3} + \frac{{x}^{7}}{63} + \frac{2{x}^{11}}{2079} + \frac{{x}^{15}}{59535} \n\] \n\nwith the error estimate\n\n\[ \n{\begin{Vmatrix}u - {u}_{3}\end{Vmatrix}}_{\infty } \leq {\begin{Vmatrix}{u}_{3} - {u}_{2}\end{Vmatrix}}_{\infty } = \frac{1}{{2079} \cdot {2}^{10}} + \frac{1}{{59535} \cdot {2}^{15}} = {0.00000047}\ldots \n\]
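Because every Picard iterate here is a polynomial with rational coefficients, the computation can be reproduced exactly in software. The sketch below (an illustration, not part of the text) represents a polynomial as a coefficient list and carries out two iterations with `fractions.Fraction` arithmetic.

```python
from fractions import Fraction

# A polynomial is a coefficient list: p[i] is the coefficient of x^i.
def poly_mul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_add(p, q):
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_int(p):
    # Antiderivative with vanishing constant term, i.e. int_0^x p.
    return [Fraction(0)] + [c / (i + 1) for i, c in enumerate(p)]

x_squared = [Fraction(0), Fraction(0), Fraction(1)]
u = [Fraction(0)]  # u_0 = 0
for _ in range(2):
    u = poly_int(poly_add(x_squared, poly_mul(u, u)))

# u_2(x) = x^3/3 + x^7/63, matching the second iterate computed above.
assert u[3] == Fraction(1, 3) and u[7] == Fraction(1, 63)
```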
|
Theorem 10.17 A single-step method is consistent if and only if\n\n\[ \mathop{\lim }\limits_{{h \rightarrow 0}}\varphi \left( {x, u;h}\right) = f\left( {x, u}\right) \]\n\nuniformly for all \( \left( {x, u}\right) \in G \) .
|
Proof. Since we assume \( f \) to be bounded, we have\n\n\[ \eta \left( {x + t}\right) - \eta \left( x\right) = {\int }_{0}^{t}{\eta }^{\prime }\left( {x + s}\right) {ds} = {\int }_{0}^{t}f\left( {x + s,\eta \left( {x + s}\right) }\right) {ds} \rightarrow 0,\;t \rightarrow 0, \]\n\nuniformly for all \( \left( {x, u}\right) \in G \) . Therefore, since we also assume that \( f \) is uniformly continuous, it follows that\n\n\[ \frac{1}{h}\left| {{\int }_{0}^{h}\left\lbrack {{\eta }^{\prime }\left( {x + t}\right) - {\eta }^{\prime }\left( x\right) }\right\rbrack {dt}}\right| \leq \mathop{\max }\limits_{{0 \leq t \leq h}}\left| {{\eta }^{\prime }\left( {x + t}\right) - {\eta }^{\prime }\left( x\right) }\right| \]\n\n\[ = \mathop{\max }\limits_{{0 \leq t \leq h}}\left| {f\left( {x + t,\eta \left( {x + t}\right) }\right) - f\left( {x,\eta \left( x\right) }\right) }\right| \rightarrow 0,\;h \rightarrow 0, \]\n\nuniformly for all \( \left( {x, u}\right) \in G \) . From this we obtain that\n\n\[ \Delta \left( {x, u;h}\right) + \varphi \left( {x, u;h}\right) - f\left( {x, u}\right) = \frac{1}{h}\left\lbrack {\eta \left( {x + h}\right) - \eta \left( x\right) }\right\rbrack - {\eta }^{\prime }\left( x\right) \]\n\n\[ = \frac{1}{h}{\int }_{0}^{h}\left\lbrack {{\eta }^{\prime }\left( {x + t}\right) - {\eta }^{\prime }\left( x\right) }\right\rbrack {dt} \rightarrow 0,\;h \rightarrow 0, \]\n\nuniformly for all \( \left( {x, u}\right) \in G \) . This now implies that the two conditions \( \Delta \rightarrow 0, h \rightarrow 0 \), and \( \varphi \rightarrow f, h \rightarrow 0 \), are equivalent.
|
Yes
|
Theorem 10.17 A single-step method is consistent if and only if\n\n\\[ \n\\mathop{\\lim }\\limits_{{h \\rightarrow 0}}\\varphi \\left( {x, u;h}\\right) = f\\left( {x, u}\\right) \n\\]\n\nuniformly for all \\( \\left( {x, u}\\right) \\in G \\) .
|
Proof. Since we assume \\( f \\) to be bounded, we have\n\n\\[ \n\\eta \\left( {x + t}\\right) - \\eta \\left( x\\right) = {\\int }_{0}^{t}{\\eta }^{\\prime }\\left( {x + s}\\right) {ds} = {\\int }_{0}^{t}f\\left( {x + s,\\eta \\left( {x + s}\\right) }\\right) {ds} \\rightarrow 0,\\;t \\rightarrow 0, \n\\]\n\nuniformly for all \\( \\left( {x, u}\\right) \\in G \\) . Therefore, since we also assume that \\( f \\) is uniformly continuous, it follows that\n\n\\[ \n\\frac{1}{h}\\left| {{\\int }_{0}^{h}\\left\\lbrack {{\\eta }^{\\prime }\\left( {x + t}\\right) - {\\eta }^{\\prime }\\left( x\\right) }\\right\\rbrack {dt}}\\right| \\leq \\mathop{\\max }\\limits_{{0 \\leq t \\leq h}}\\left| {{\\eta }^{\\prime }\\left( {x + t}\\right) - {\\eta }^{\\prime }\\left( x\\right) }\\right| \n\\]\n\n\\[ \n= \\mathop{\\max }\\limits_{{0 \\leq t \\leq h}}\\left| {f\\left( {x + t,\\eta \\left( {x + t}\\right) }\\right) - f\\left( {x,\\eta \\left( x\\right) }\\right) }\\right| \\rightarrow 0,\\;h \\rightarrow 0, \n\\]\n\nuniformly for all \\( \\left( {x, u}\\right) \\in G \\) . From this we obtain that\n\n\\[ \n\\Delta \\left( {x, u;h}\\right) + \\varphi \\left( {x, u;h}\\right) - f\\left( {x, u}\\right) = \\frac{1}{h}\\left\\lbrack {\\eta \\left( {x + h}\\right) - \\eta \\left( x\\right) }\\right\\rbrack - {\\eta }^{\\prime }\\left( x\\right) \n\\]\n\n\\[ \n= \\frac{1}{h}{\\int }_{0}^{h}\\left\\lbrack {{\\eta }^{\\prime }\\left( {x + t}\\right) - {\\eta }^{\\prime }\\left( x\\right) }\\right\\rbrack {dt} \\rightarrow 0,\\;h \\rightarrow 0, \n\\]\n\nuniformly for all \\( \\left( {x, u}\\right) \\in G \\) . This now implies that the two conditions \\( \\Delta \\rightarrow 0, h \\rightarrow 0 \\), and \\( \\varphi \\rightarrow f, h \\rightarrow 0 \\), are equivalent.
|
Yes
|
Theorem 10.18 The Euler method is consistent. If \( f \) is continuously differentiable in \( G \), then the Euler method has consistency order one.
|
Proof. Consistency is a consequence of Theorem 10.17 and the fact that \( \varphi \left( {x, u;h}\right) = f\left( {x, u}\right) \) for Euler’s method. If \( f \) is continuously differentiable, then from the differential equation \( {\eta }^{\prime } = f\left( {\xi ,\eta }\right) \) it follows that \( \eta \) is twice continuously differentiable with\n\n\[ \n{\eta }^{\prime \prime } = {f}_{x}\left( {\xi ,\eta }\right) + {f}_{u}\left( {\xi ,\eta }\right) f\left( {\xi ,\eta }\right) .\n\]\n\nTherefore, Taylor's formula yields\n\n\[ \n\left| {\Delta \left( {x, u;h}\right) }\right| = \left| {\frac{1}{h}\left\lbrack {\eta \left( {x + h}\right) - \eta \left( x\right) }\right\rbrack - {\eta }^{\prime }\left( x\right) }\right| = \frac{h}{2}\left| {{\eta }^{\prime \prime }\left( {x + {\theta h}}\right) }\right| \leq {Kh} \n\]\n\nfor some \( 0 < \theta < 1 \) and some bound \( K \) for the function \( \frac{1}{2}\left( {{f}_{x} + {f}_{u}f}\right) \) .
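The consistency order one shows up numerically as convergence order one (see Theorem 10.23 below): halving the step size halves the error. A minimal sketch for the model problem \( u' = u \), \( u(0) = 1 \) (our choice of test problem):

```python
import math

def euler(f, x0, u0, X, n):
    # Euler's method with n steps of size h = (X - x0)/n
    h = (X - x0) / n
    x, u = x0, u0
    for _ in range(n):
        u = u + h * f(x, u)
        x = x + h
    return u

# model problem u' = u, u(0) = 1, exact solution e^x
errors = [abs(euler(lambda x, u: u, 0.0, 1.0, 1.0, n) - math.e)
          for n in (100, 200, 400)]
ratios = [errors[i] / errors[i + 1] for i in range(2)]
```

The observed error ratios are close to 2, as expected for order one.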
|
Yes
|
Theorem 10.19 The improved Euler method is consistent. If \( f \) is twice continuously differentiable in \( G \), then the improved Euler method has consistency order two.
|
Proof. Consistency follows from Theorem 10.17 and\n\n\[ \varphi \left( {x, u;h}\right) = \frac{1}{2}\left\lbrack {f\left( {x, u}\right) + f\left( {x + h, u + {hf}\left( {x, u}\right) }\right) }\right\rbrack \rightarrow f\left( {x, u}\right) ,\;h \rightarrow 0. \]\n\nIf \( f \) is twice continuously differentiable, then (10.12) implies that \( \eta \) is three times continuously differentiable with\n\n\[ {\eta }^{\prime \prime \prime } = {f}_{xx}\left( {\xi ,\eta }\right) + 2{f}_{xu}\left( {\xi ,\eta }\right) f\left( {\xi ,\eta }\right) + {f}_{uu}\left( {\xi ,\eta }\right) {f}^{2}\left( {\xi ,\eta }\right) \]\n\n\[ + {f}_{u}\left( {\xi ,\eta }\right) {f}_{x}\left( {\xi ,\eta }\right) + {f}_{u}^{2}\left( {\xi ,\eta }\right) f\left( {\xi ,\eta }\right) . \]\n\nHence Taylor's formula yields\n\n\[ \left| {\eta \left( {x + h}\right) - \eta \left( x\right) - h{\eta }^{\prime }\left( x\right) - \frac{{h}^{2}}{2}\;{\eta }^{\prime \prime }\left( x\right) }\right| = \frac{{h}^{3}}{6}\;\left| {{\eta }^{\prime \prime \prime }\left( {x + {\theta h}}\right) }\right| \leq {K}_{1}{h}^{3} \]\n\n(10.13)\n\nfor some \( 0 < \theta < 1 \) and a bound \( {K}_{1} \) for \( \frac{1}{6}\left( {{f}_{xx} + 2{f}_{xu}f + {f}_{uu}{f}^{2} + {f}_{u}{f}_{x} + {f}_{u}^{2}f}\right) \). From Taylor's formula for functions of two variables we have the estimate\n\n\[ \left| {f\left( {x + h, u + k}\right) - f\left( {x, u}\right) - h{f}_{x}\left( {x, u}\right) - k{f}_{u}\left( {x, u}\right) }\right| \leq \frac{1}{2}{K}_{2}{\left( \left| h\right| + \left| k\right| \right) }^{2} \]\n\nwith a bound \( {K}_{2} \) for the second derivatives \( {f}_{xx},{f}_{xu} \), and \( {f}_{uu} \). 
From this, setting \( k = {hf}\left( {x, u}\right) \), in view of (10.12) we obtain\n\n\[ \left| {f\left( {x + h, u + {hf}\left( {x, u}\right) }\right) - f\left( {x, u}\right) - h{\eta }^{\prime \prime }\left( x\right) }\right| \leq \frac{1}{2}{K}_{2}{\left( 1 + {K}_{0}\right) }^{2}{h}^{2} \]\n\nwith some bound \( {K}_{0} \) for \( f \), whence\n\n\[ \left| {\varphi \left( {x, u;h}\right) - f\left( {x, u}\right) - \frac{h}{2}{\eta }^{\prime \prime }\left( x\right) }\right| \leq \frac{1}{4}{K}_{2}{\left( 1 + {K}_{0}\right) }^{2}{h}^{2} \]\n\n(10.14)\n\nfollows. Now combining (10.13) and (10.14), with the aid of the triangle inequality and using the differential equation, we can establish consistency order two.
|
Yes
|
Lemma 10.21 Let \( \\left( {\\xi }_{j}\\right) \) be a sequence in \( \\mathbb{R} \) with the property\n\n\[\\left| {\\xi }_{j + 1}\\right| \\leq \\left( {1 + A}\\right) \\left| {\\xi }_{j}\\right| + B,\\;j = 0,1,\\ldots ,\]\n\nfor some constants \( A > 0 \) and \( B \\geq 0 \) . Then the estimate\n\n\[\\left| {\\xi }_{j}\\right| \\leq \\left| {\\xi }_{0}\\right| {e}^{jA} + \\frac{B}{A}\\left( {{e}^{jA} - 1}\\right) ,\\;j = 0,1,\\ldots ,\]\n\nholds.
|
Proof. We prove this by induction. The estimate is true for \( j = 0 \) . Assume that it has been proven for some \( j \\geq 0 \) . Then, with the aid of the inequality \( 1 + A < {e}^{A} \), which follows from the power series for the exponential function, we obtain\n\n\[\\left| {\\xi }_{j + 1}\\right| \\leq \\left( {1 + A}\\right) \\left| {\\xi }_{0}\\right| {e}^{jA} + \\left( {1 + A}\\right) \\frac{B}{A}\\left( {{e}^{jA} - 1}\\right) + B\]\n\n\[\\leq \\left| {\\xi }_{0}\\right| {e}^{\\left( {j + 1}\\right) A} + \\frac{B}{A}\\left( {{e}^{\\left( {j + 1}\\right) A} - 1}\\right)\]\n\ni.e., the estimate also holds for \( j + 1 \) .
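The bound of Lemma 10.21 can be checked against arbitrary sequences generated to satisfy the hypothesis. A small sketch (the sequence construction with random signs is ours; any sequence below the cap \( (1+A)|\xi_j| + B \) would do):

```python
import math
import random

def gronwall_bound(xi0, A, B, j):
    # bound of Lemma 10.21: |xi_j| <= |xi_0| e^{jA} + (B/A)(e^{jA} - 1)
    return abs(xi0) * math.exp(j * A) + (B / A) * (math.exp(j * A) - 1.0)

random.seed(0)
A, B = 0.05, 0.1
xi = [1.0]
for j in range(50):
    # any sequence with |xi_{j+1}| <= (1+A)|xi_j| + B; the random factor
    # in [-1, 1] keeps the next term below the cap
    xi.append(random.uniform(-1.0, 1.0) * ((1 + A) * abs(xi[j]) + B))

ok = all(abs(xi[j]) <= gronwall_bound(xi[0], A, B, j) + 1e-12
         for j in range(len(xi)))
```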
|
Yes
|
Theorem 10.23 Assume that the single-step method satisfies the assumptions of the previous Theorem 10.22 and that it has consistency order \( p \) ; i.e., \( \left| {\Delta \left( {x, u;h}\right) }\right| \leq K{h}^{p} \) . Then\n\n\[ \left| {e}_{j}\right| \leq \frac{K}{M}\left( {{e}^{M\left( {{x}_{j} - {x}_{0}}\right) } - 1}\right) {h}^{p},\;j = 0,1,\ldots, n; \]\n\ni.e., the convergence also has order \( p \) .
|
Proof. This follows from (10.16) with the aid of \( c\left( h\right) \leq K{h}^{p} \) .
|
No
|
Corollary 10.24 The Euler method and the improved Euler method are convergent. For continuously differentiable \( f \) the Euler method has convergence order one. For twice continuously differentiable \( f \) the improved Euler method has convergence order two.
|
Proof. By Theorems 10.18, 10.19, 10.22, and 10.23 it remains only to verify the Lipschitz condition of the function \( \varphi \) for the improved Euler method given by (10.11). From the Lipschitz condition for \( f \) we obtain\n\n\[ \left| {\varphi \left( {x, u;h}\right) - \varphi \left( {x, v;h}\right) }\right| \]\n\n\[ \leq \frac{1}{2}\left| {f\left( {x, u}\right) - f\left( {x, v}\right) }\right| + \frac{1}{2}\left| {f\left( {x + h, u + {hf}\left( {x, u}\right) }\right) - f\left( {x + h, v + {hf}\left( {x, v}\right) }\right) }\right| \]\n\n\[ \leq \frac{L}{2}\left| {u - v}\right| + \frac{L}{2}\left| {\left\lbrack {u + {hf}\left( {x, u}\right) }\right\rbrack - \left\lbrack {v + {hf}\left( {x, v}\right) }\right\rbrack }\right| \leq L\left( {1 + \frac{hL}{2}}\right) \left| {u - v}\right| ; \]\n\ni.e., \( \varphi \) also satisfies a Lipschitz condition.
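The convergence order two of the improved Euler method can be observed numerically: halving \( h \) divides the error by roughly four. A sketch on the model problem \( u' = u \), \( u(0) = 1 \) (our choice of test problem):

```python
import math

def improved_euler(f, x0, u0, X, n):
    # improved Euler (Heun): average the slope at x and at the
    # Euler-predicted point x + h
    h = (X - x0) / n
    x, u = x0, u0
    for _ in range(n):
        k1 = f(x, u)
        k2 = f(x + h, u + h * k1)
        u = u + h * (k1 + k2) / 2.0
        x = x + h
    return u

errors = [abs(improved_euler(lambda x, u: u, 0.0, 1.0, 1.0, n) - math.e)
          for n in (50, 100, 200)]
ratios = [errors[i] / errors[i + 1] for i in range(2)]
```

The observed error ratios are close to 4, as expected for order two.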
|
Yes
|
Theorem 10.26 The Runge-Kutta method is consistent. If \( f \) is four-times continuously differentiable, then it has consistency order four and hence convergence order four.
|
Proof. The function \( \varphi \) describing the Runge-Kutta method is given recursively by\n\n\[ \varphi = \frac{1}{6}\left( {{\varphi }_{1} + 2{\varphi }_{2} + 2{\varphi }_{3} + {\varphi }_{4}}\right) \]\n\nwhere\n\n\[ {\varphi }_{1}\left( {x, u;h}\right) = f\left( {x, u}\right) \]\n\n\[ {\varphi }_{2}\left( {x, u;h}\right) = f\left( {x + \frac{h}{2}, u + \frac{h}{2}{\varphi }_{1}\left( {x, u;h}\right) }\right) ,\]\n\n\[ {\varphi }_{3}\left( {x, u;h}\right) = f\left( {x + \frac{h}{2}, u + \frac{h}{2}{\varphi }_{2}\left( {x, u;h}\right) }\right) ,\]\n\n\[ {\varphi }_{4}\left( {x, u;h}\right) = f\left( {x + h, u + h{\varphi }_{3}\left( {x, u;h}\right) }\right) . \]\n\nFrom this, consistency follows immediately by Theorem 10.17.\n\nAnalogously to the proof of Theorem 10.19 for the improved Euler method, the consistency order four can be established by a Taylor expansion of \( \varphi \left( {x, u;h}\right) \) with respect to powers of \( h \) up to order \( {h}^{4} \) and expressing the derivatives of \( \eta \) on the right-hand side of\n\n\[ \frac{1}{h}\left\lbrack {\eta \left( {x + h}\right) - \eta \left( x\right) }\right\rbrack = {\eta }^{\prime }\left( x\right) + \frac{h}{2}{\eta }^{\prime \prime }\left( x\right) + \frac{{h}^{2}}{6}{\eta }^{\prime \prime \prime }\left( x\right) + \frac{{h}^{3}}{24}{\eta }^{\prime \prime \prime \prime }\left( x\right) + O\left( {h}^{4}\right) \]\n\nthrough \( f \) and its derivatives by using the differential equation. We leave the details as an exercise for the reader (see Problem 10.9).
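The recursion for \( \varphi_1, \ldots, \varphi_4 \) translates directly into code, and the order four can be observed numerically: halving \( h \) divides the error by roughly sixteen. A sketch on the model problem \( u' = u \), \( u(0) = 1 \) (our choice of test problem):

```python
import math

def rk4_step(f, x, u, h):
    # classical Runge-Kutta step: phi = (phi1 + 2 phi2 + 2 phi3 + phi4)/6
    p1 = f(x, u)
    p2 = f(x + h / 2, u + h / 2 * p1)
    p3 = f(x + h / 2, u + h / 2 * p2)
    p4 = f(x + h, u + h * p3)
    return u + h * (p1 + 2 * p2 + 2 * p3 + p4) / 6.0

def rk4(f, x0, u0, X, n):
    h = (X - x0) / n
    x, u = x0, u0
    for _ in range(n):
        u = rk4_step(f, x, u, h)
        x += h
    return u

errors = [abs(rk4(lambda x, u: u, 0.0, 1.0, 1.0, n) - math.e)
          for n in (10, 20, 40)]
ratios = [errors[i] / errors[i + 1] for i in range(2)]
```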
|
No
|
Theorem 10.29 If \( f \) is \( \left( {s + 1}\right) \) -times continuously differentiable, then the multistep methods (10.21) are consistent of order \( s + 1 \) .
|
Proof. By construction we have that\n\n\[ \Delta \left( {x, u;h}\right) = \frac{1}{h}{\int }_{x + \left( {r - k}\right) h}^{x + {rh}}\left\lbrack {f\left( {\xi ,\eta \left( \xi \right) }\right) - p\left( \xi \right) }\right\rbrack {d\xi } \]\n\nwhere \( p \) denotes the polynomial satisfying the interpolation condition\n\n\[ p\left( {x + {mh}}\right) = f\left( {x + {mh},\eta \left( {x + {mh}}\right) }\right) ,\;m = 0,\ldots, s. \]\n\nBy Theorem 8.10 on the remainder in polynomial interpolation, we can estimate\n\n\[ \left| {f\left( {\xi ,\eta \left( \xi \right) }\right) - p\left( \xi \right) }\right| \leq K{h}^{s + 1} \]\n\nfor all \( \xi \) in the interval \( x + \left( {r - k}\right) h \leq \xi \leq x + {rh} \) and some constant \( K \) depending on \( f \) and its derivatives up to order \( s + 1 \) . Since the interval of integration has length \( {kh} \), it follows that \( \left| {\Delta \left( {x, u;h}\right) }\right| \leq {kK}{h}^{s + 1} \) ; i.e., the method is consistent of order \( s + 1 \) .
|
Yes
|
Let \( p \) be the quadratic interpolation polynomial satisfying\n\n\[ p\left( {x}_{j}\right) = u\left( {x}_{j}\right) ,\;j = 0,1,2, \]\n\nand approximate\n\n\[ {u}^{\prime }\left( {x}_{0}\right) \approx {p}^{\prime }\left( {x}_{0}\right) \]
|
Using the fact that the approximation for the derivative is exact for polynomials of degree less than or equal to two, simple calculations show that (see Problem 10.15)\n\n\[ {p}^{\prime }\left( {x}_{0}\right) = \frac{1}{2h}\left\lbrack {-u\left( {x}_{2}\right) + {4u}\left( {x}_{1}\right) - {3u}\left( {x}_{0}\right) }\right\rbrack . \]
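With equidistant nodes \( x_j = x_0 + jh \), this one-sided formula is exact for quadratics and second-order accurate in general, which a short sketch confirms (the test functions are our choices):

```python
import math

def forward_d1(u, x0, h):
    # second-order one-sided approximation p'(x0) from the text,
    # with x1 = x0 + h and x2 = x0 + 2h
    return (-u(x0 + 2 * h) + 4 * u(x0 + h) - 3 * u(x0)) / (2 * h)

# exact for polynomials of degree <= 2
q = lambda x: 7 - 3 * x + 2 * x * x       # q'(x) = -3 + 4x, so q'(1) = 1
exact_on_quadratic = abs(forward_d1(q, 1.0, 0.1) - 1.0) < 1e-12

# O(h^2) on a general smooth function: halving h quarters the error
e1 = abs(forward_d1(math.sin, 0.3, 0.02) - math.cos(0.3))
e2 = abs(forward_d1(math.sin, 0.3, 0.01) - math.cos(0.3))
ratio = e1 / e2
```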
|
No
|
For \( k = 0,1,\ldots, r - 1 \), let \( {u}_{j, k} \) denote the unique solutions to the homogeneous difference equation (10.31) with initial values\n\n\[ \n{u}_{j, k} = {\delta }_{j, k},\;j = 0,1,\ldots, r - 1.\n\]\n\nThen for a given right-hand side \( {c}_{r},{c}_{r + 1},\ldots \), the unique solution to the inhomogeneous difference equation\n\n\[ \n{z}_{j + r} + \mathop{\sum }\limits_{{m = 0}}^{{r - 1}}{a}_{m}{z}_{j + m} = {c}_{j + r},\;j = 0,1,\ldots ,\n\]\n\n(10.36)\n\nwith initial values \( {z}_{0},{z}_{1},\ldots ,{z}_{r - 1} \) is given by\n\n\[ \n{z}_{j + r} = \mathop{\sum }\limits_{{k = 0}}^{{r - 1}}{z}_{k}{u}_{j + r, k} + \mathop{\sum }\limits_{{k = 0}}^{j}{c}_{k + r}{u}_{j + r - k - 1, r - 1},\;j = 0,1,\ldots \n\]\n\n(10.37)
|
Proof. Setting \( {u}_{m, r - 1} = 0 \) for \( m = - 1, - 2,\ldots \), we can rewrite (10.37) in the form\n\n\[ \n{z}_{j} = \mathop{\sum }\limits_{{k = 0}}^{{r - 1}}{z}_{k}{u}_{j, k} + {w}_{j},\;j = 0,1,\ldots ,\n\]\n\nwhere\n\n\[ \n{w}_{j} \mathrel{\text{:=}} \mathop{\sum }\limits_{{k = 0}}^{\infty }{c}_{k + r}{u}_{j - k - 1, r - 1},\;j = 0,1,\ldots \n\]\n\nObviously, \( {w}_{j} = 0 \) for \( j = 0,\ldots, r - 1 \), and therefore it remains to show that \( {w}_{j} \) satisfies the inhomogeneous difference equation (10.36).\n\nAs in the proof of Theorem 10.33 we set \( {a}_{r} = 1 \) . Then, using \( {u}_{m, r - 1} = 0 \) for \( m < r - 1,{u}_{r - 1, r - 1} = 1 \), and the homogeneous difference equation for \( {u}_{m, r - 1} \), we compute\n\n\[ \n\mathop{\sum }\limits_{{m = 0}}^{r}{a}_{m}{w}_{j + m} = \mathop{\sum }\limits_{{m = 0}}^{r}{a}_{m}\mathop{\sum }\limits_{{k = 0}}^{\infty }{c}_{k + r}{u}_{j + m - k - 1, r - 1}\n\]\n\n\[ \n= \mathop{\sum }\limits_{{m = 0}}^{r}{a}_{m}\mathop{\sum }\limits_{{k = 0}}^{j}{c}_{k + r}{u}_{j + m - k - 1, r - 1}\n\]\n\n\[ \n= \mathop{\sum }\limits_{{k = 0}}^{j}{c}_{k + r}\mathop{\sum }\limits_{{m = 0}}^{r}{a}_{m}{u}_{j + m - k - 1, r - 1} = {c}_{j + r}.\n\]\n\nNow the proof is completed by noting that each solution to the inhomogeneous difference equation (10.36) is uniquely determined by its \( r \) initial values \( {z}_{0},{z}_{1},\ldots ,{z}_{r - 1} \) .
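The representation (10.37) can be verified against direct iteration of the recursion. A sketch for a sample three-term recurrence (the coefficients and data below are arbitrary choices of ours):

```python
def solve_direct(a, z_init, c):
    # iterate z_{j+r} = c_{j+r} - sum_m a_m z_{j+m} directly, cf. (10.36)
    r = len(a)
    z = list(z_init)
    for j in range(len(c)):
        z.append(c[j] - sum(a[m] * z[j + m] for m in range(r)))
    return z

def fundamental(a, k, N):
    # u_{j,k}: homogeneous solution with u_{j,k} = delta_{j,k}, j = 0..r-1
    r = len(a)
    u = [1.0 if j == k else 0.0 for j in range(r)]
    for j in range(N - r):
        u.append(-sum(a[m] * u[j + m] for m in range(r)))
    return u

a = [0.25, -0.5, 0.75]           # a_0, a_1, a_2 with a_r = 1, so r = 3
z0 = [1.0, -2.0, 0.5]            # initial values z_0, z_1, z_2
c = [0.3, -0.1, 0.2, 0.4, -0.5]  # right-hand side c_r, c_{r+1}, ...
r, N = len(a), len(a) + len(c)

direct = solve_direct(a, z0, c)
u_cols = [fundamental(a, k, N) for k in range(r)]
# formula (10.37) for z_{j+r}
formula = [sum(z0[k] * u_cols[k][j + r] for k in range(r))
           + sum(c[k] * u_cols[r - 1][j + r - k - 1] for k in range(j + 1))
           for j in range(len(c))]
max_diff = max(abs(direct[j + r] - formula[j]) for j in range(len(c)))
```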
|
Yes
|
Lemma 10.37 Let \( \\left( {\\xi }_{j}\\right) \) be a sequence in \( \\mathbb{R} \) with the property\n\n\[ \n\\left| {\\xi }_{j}\\right| \\leq A\\mathop{\\sum }\\limits_{{m = 0}}^{{j - 1}}\\left| {\\xi }_{m}\\right| + B,\\;j = 1,2,\\ldots ,\n\]\n\nfor some constants \( A > 0 \) and \( B \\geq 0 \) . Then the estimate\n\n\[ \n\\left| {\\xi }_{j}\\right| \\leq \\left( {A\\left| {\\xi }_{0}\\right| + B}\\right) {e}^{\\left( {j - 1}\\right) A},\\;j = 1,2,\\ldots ,\n\]\n\nholds.
|
Proof. We prove by induction that\n\n\[ \n\\left| {\\xi }_{j}\\right| \\leq \\left( {A\\left| {\\xi }_{0}\\right| + B}\\right) {\\left( 1 + A\\right) }^{j - 1},\\;j = 1,2,\\ldots \n\]\n\n(10.38)\n\nThen the assertion follows by using the estimate \( 1 + A \\leq {e}^{A} \) . The inequality (10.38) is true for \( j = 1 \) . Assume that it has been proven up to some \( j \\geq 1 \) . Then we have\n\n\[ \n\\left| {\\xi }_{j + 1}\\right| \\leq A\\mathop{\\sum }\\limits_{{m = 0}}^{j}\\left| {\\xi }_{m}\\right| + B \\leq \\left( {A\\left| {\\xi }_{0}\\right| + B}\\right) + A\\mathop{\\sum }\\limits_{{m = 1}}^{j}\\left( {A\\left| {\\xi }_{0}\\right| + B}\\right) {\\left( 1 + A\\right) }^{m - 1}\n\]\n\n\[ \n= \\left( {A\\left| {\\xi }_{0}\\right| + B}\\right) {\\left( 1 + A\\right) }^{j}\n\]\n\ni.e., the estimate is also true for \( j + 1 \) .
|
Yes
|
Consider the boundary value problem\n\n\[ \n{u}^{\prime \prime } = {u}^{3},\;u\left( 1\right) = \sqrt{2},\;u\left( 2\right) = \frac{1}{2}\sqrt{2}, \n\]\n\nwith the exact solution \( u\left( x\right) = \sqrt{2}/x \) .
|
We solve numerically the associated initial value problem\n\n\[ \n{u}^{\prime \prime } = {u}^{3},\;u\left( 1\right) = \sqrt{2},\;{u}^{\prime }\left( 1\right) = s, \n\]\n\nby the improved Euler method of Section 10.2 with step sizes \( h = {0.1} \), \( h = {0.01} \), and \( h = {0.001} \). For this we transform the initial value problem for the equation of second order into the initial value problem for the system\n\n\[ \n{u}^{\prime } = w,\;{w}^{\prime } = {u}^{3},\;u\left( 1\right) = \sqrt{2},\;w\left( 1\right) = s. \n\]\n\nAs starting value for the Newton iteration we choose \( s = 0 \). The exact initial condition is \( s = - \sqrt{2} = - {1.414214} \). The numerical results represented in Table 11.1 illustrate the feasibility of the shooting method with Newton iterations.
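The shooting idea can be sketched in a few lines. For simplicity we replace the Newton iteration of the text by a bisection on the shooting parameter \( s \) (enough to illustrate the principle, and derivative-free); the integrator is the improved Euler method applied to the system above, and the bracket \( [-1.5, -1.3] \) around the exact slope \( -\sqrt{2} \) is our choice:

```python
import math

def heun_system(f, x0, y0, X, n):
    # improved Euler (Heun) for a first-order system; y is a list of components
    h = (X - x0) / n
    x, y = x0, list(y0)
    for _ in range(n):
        k1 = f(x, y)
        y_pred = [yi + h * ki for yi, ki in zip(y, k1)]
        k2 = f(x + h, y_pred)
        y = [yi + h * (p + q) / 2.0 for yi, p, q in zip(y, k1, k2)]
        x += h
    return y

def F(s, n=1000):
    # shoot: u' = w, w' = u^3, u(1) = sqrt(2), w(1) = s; return u(2) - sqrt(2)/2
    rhs = lambda x, y: [y[1], y[0] ** 3]
    u, w = heun_system(rhs, 1.0, [math.sqrt(2.0), s], 2.0, n)
    return u - math.sqrt(2.0) / 2.0

lo, hi = -1.5, -1.3   # F(lo) < 0 < F(hi) for this problem
for _ in range(45):
    mid = (lo + hi) / 2.0
    if F(mid) > 0:
        hi = mid
    else:
        lo = mid
s_star = (lo + hi) / 2.0
```

The computed slope `s_star` approaches the exact value \( -\sqrt{2} \) as the step size decreases.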
|
Yes
|
The linear boundary value problem\n\n\[ \n{u}^{\prime \prime } - {u}^{\prime } - {110u} = 0,\;u\left( 0\right) = u\left( {10}\right) = 1, \n\]
|
has the unique solution\n\n\[ \nu\left( x\right) = \frac{1}{{e}^{110} - {e}^{-{100}}}\left\{ {\left( {{e}^{110} - 1}\right) {e}^{-{10x}} + \left( {1 - {e}^{-{100}}}\right) {e}^{11x}}\right\} .\n\]
|
Yes
|
Theorem 11.4 Assume that \( q, r \in C\left\lbrack {a, b}\right\rbrack \) and \( q \geq 0 \) . Then the boundary value problem for the linear differential equation\n\n\[ - {u}^{\prime \prime } + {qu} = r\;\text{ on }\left\lbrack {a, b}\right\rbrack \]\n\n(11.7)\n\nwith homogeneous boundary conditions\n\n\[ u\left( a\right) = u\left( b\right) = 0 \]\n\n(11.8)\n\nhas a unique solution.
|
Proof. Assume that \( {u}_{1} \) and \( {u}_{2} \) are two solutions to the boundary value problem. Then the difference \( u = {u}_{1} - {u}_{2} \) solves the homogeneous boundary value problem\n\n\[ - {u}^{\prime \prime } + {qu} = 0,\;u\left( a\right) = u\left( b\right) = 0.\]\n\nBy partial integration we obtain\n\n\[ {\int }_{a}^{b}\left( {{\left\lbrack {u}^{\prime }\right\rbrack }^{2} + q{u}^{2}}\right) {dx} = {\int }_{a}^{b}\left( {-{u}^{\prime \prime } + {qu}}\right) {udx} = 0.\]\n\nThis implies \( {u}^{\prime } = 0 \) on \( \left\lbrack {a, b}\right\rbrack \), since \( q \geq 0 \) . Hence \( u \) is constant on \( \left\lbrack {a, b}\right\rbrack \) , and the boundary conditions finally yield \( u = 0 \) on \( \left\lbrack {a, b}\right\rbrack \) . Therefore, the boundary value problem (11.7)-(11.8) has at most one solution.\n\nThe general solution of the linear differential equation (11.7) is given by\n\n\[ u = {C}_{1}{u}_{1} + {C}_{2}{u}_{2} + {u}^{ * } \]\n\nwhere \( {u}_{1},{u}_{2} \) denotes a fundamental system of two linearly independent solutions to the homogeneous differential equation, \( {u}^{ * } \) is a solution to the inhomogeneous differential equation, and \( {C}_{1} \) and \( {C}_{2} \) are arbitrary constants. This can be seen with the help of the Picard-Lindelöf Theorem 10.1 (see Problem 11.4). The boundary condition (11.8) is satisfied, provided that the constants \( {C}_{1} \) and \( {C}_{2} \) solve the linear system\n\n\[ {C}_{1}{u}_{1}\left( a\right) + {C}_{2}{u}_{2}\left( a\right) = - {u}^{ * }\left( a\right) \]\n\n\[ {C}_{1}{u}_{1}\left( b\right) + {C}_{2}{u}_{2}\left( b\right) = - {u}^{ * }\left( b\right) \]\n\nThis system is uniquely solvable. Assume that \( {C}_{1} \) and \( {C}_{2} \) solve the homogeneous system. Then \( u = {C}_{1}{u}_{1} + {C}_{2}{u}_{2} \) yields a solution to the homogeneous boundary value problem. Hence \( u = 0 \), since we have already established uniqueness for the boundary value problem. 
From this we conclude that \( {C}_{1} = {C}_{2} = 0 \) because \( {u}_{1} \) and \( {u}_{2} \) are linearly independent, and the existence proof is complete.
|
Yes
|
Theorem 11.5 For each \( h > 0 \) the difference equations (11.10)-(11.11) have a unique solution.
|
Proof. The tridiagonal matrix \( A \) is irreducible and weakly row-diagonally dominant. Hence, by Theorem 4.7, the matrix \( A \) is invertible, and the Jacobi iterations converge.
|
No
|
Lemma 11.6 Denote by \( A \) the matrix of the finite difference method for \( q \geq 0 \) and by \( {A}_{0} \) the corresponding matrix for \( q = 0 \) . Then\n\n\[ 0 \leq {A}^{-1} \leq {A}_{0}^{-1} \]\n\ni.e., all components of \( {A}^{-1} \) are nonnegative and smaller than or equal to the corresponding components of \( {A}_{0}^{-1} \) .
|
Proof. The columns of the inverse \( {A}^{-1} = \left( {{a}_{1},\ldots ,{a}_{n}}\right) \) satisfy \( A{a}_{j} = {e}_{j} \) for \( j = 1,\ldots, n \) with the canonical unit vectors \( {e}_{1},\ldots ,{e}_{n} \) in \( {\mathbb{R}}^{n} \) . The Jacobi iterations for the solution of \( {Az} = {e}_{j} \) starting with \( {z}_{0} = 0 \) are given by\n\n\[ {z}_{\nu + 1} = - {D}^{-1}\left( {{A}_{L} + {A}_{R}}\right) {z}_{\nu } + {D}^{-1}{e}_{j},\;\nu = 0,1,\ldots ,\]\n\nwith the usual splitting \( A = D + {A}_{L} + {A}_{R} \) of \( A \) into its diagonal, lower, and upper triangular parts. Since the entries of \( {D}^{-1} \) and of \( - {D}^{-1}\left( {{A}_{L} + {A}_{R}}\right) \) are all nonnegative, it follows that \( {A}^{-1} \geq 0 \) . Analogously, the iterations\n\n\[ {z}_{\nu + 1} = - {D}_{0}^{-1}\left( {{A}_{L} + {A}_{R}}\right) {z}_{\nu } + {D}_{0}^{-1}{e}_{j},\;\nu = 0,1,\ldots ,\]\n\nyield the columns of \( {A}_{0}^{-1} \) . Therefore, from \( {D}_{0}^{-1} \geq {D}^{-1} \) we conclude that \( {A}_{0}^{-1} \geq {A}^{-1} \) .
|
Yes
|
Lemma 11.7 Assume that \( u \in {C}^{4}\left\lbrack {a, b}\right\rbrack \) . Then\n\n\[ \left| {{u}^{\prime \prime }\left( x\right) - \frac{1}{{h}^{2}}\left\lbrack {u\left( {x + h}\right) - {2u}\left( x\right) + u\left( {x - h}\right) }\right\rbrack }\right| \leq \frac{{h}^{2}}{12}{\begin{Vmatrix}{u}^{\left( 4\right) }\end{Vmatrix}}_{\infty } \]\n\nfor all \( x \in \left\lbrack {a + h, b - h}\right\rbrack \) .
|
Proof. By Taylor's formula we have that\n\n\[ u\left( {x \pm h}\right) = u\left( x\right) \pm h{u}^{\prime }\left( x\right) + \frac{{h}^{2}}{2}{u}^{\prime \prime }\left( x\right) \pm \frac{{h}^{3}}{6}{u}^{\prime \prime \prime }\left( x\right) + \frac{{h}^{4}}{24}{u}^{\left( 4\right) }\left( {x \pm {\theta }_{ \pm }h}\right) \]\n\nfor some \( {\theta }_{ \pm } \in \left( {0,1}\right) \) . Adding these two equations gives\n\n\[ u\left( {x + h}\right) - {2u}\left( x\right) + u\left( {x - h}\right) = {h}^{2}{u}^{\prime \prime }\left( x\right) + \frac{{h}^{4}}{24}{u}^{\left( 4\right) }\left( {x + {\theta }_{ + }h}\right) + \frac{{h}^{4}}{24}{u}^{\left( 4\right) }\left( {x - {\theta }_{ - }h}\right) ,\]\n\nwhence the statement of the lemma follows.
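The sharpness of the \( h^2/12 \) bound can be seen numerically: for \( u = \sin \) we have \( \| u^{(4)} \|_\infty = 1 \), and the observed error approaches the bound near \( x = \pi/2 \). A sketch (the sample points are our choice):

```python
import math

def second_difference(u, x, h):
    # central second difference [u(x+h) - 2u(x) + u(x-h)] / h^2
    return (u(x + h) - 2 * u(x) + u(x - h)) / (h * h)

# u = sin, u'' = -sin, |u^(4)| <= 1, so Lemma 11.7 gives |error| <= h^2/12
h = 0.01
worst = max(abs(second_difference(math.sin, x, h) + math.sin(x))
            for x in [0.1 * k for k in range(1, 30)])
bound = h * h / 12
```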
|
Yes
|
Theorem 11.8 Assume that the solution to the boundary value problem (11.7)-(11.8) is four-times continuously differentiable. Then the error of the finite difference approximation can be estimated by\n\n\[ \mathop{\max }\limits_{{j = 1,\ldots, n}}\left| {u\left( {x}_{j}\right) - {u}_{j}}\right| \leq \frac{{h}^{2}}{96}{\left( b - a\right) }^{2}{\begin{Vmatrix}{u}^{\left( 4\right) }\end{Vmatrix}}_{\infty }. \]
|
Proof. By Lemma 11.7, for\n\n\[ {z}_{j} \mathrel{\text{:=}} {u}^{\prime \prime }\left( {x}_{j}\right) - \frac{1}{{h}^{2}}\left\lbrack {u\left( {x}_{j + 1}\right) - {2u}\left( {x}_{j}\right) + u\left( {x}_{j - 1}\right) }\right\rbrack \]\n\nwe have the estimate\n\n\[ \left| {z}_{j}\right| \leq \frac{{h}^{2}}{12}{\begin{Vmatrix}{u}^{\left( 4\right) }\end{Vmatrix}}_{\infty },\;j = 1,\ldots, n. \]\n\n(11.13)\n\nSince\n\n\[ - \frac{1}{{h}^{2}}\left\lbrack {u\left( {x}_{j + 1}\right) - \left( {2 + {h}^{2}{q}_{j}}\right) u\left( {x}_{j}\right) + u\left( {x}_{j - 1}\right) }\right\rbrack = - {u}^{\prime \prime }\left( {x}_{j}\right) + {q}_{j}u\left( {x}_{j}\right) + {z}_{j} = {r}_{j} + {z}_{j}, \]\n\nthe vector \( \widetilde{U} = {\left( u\left( {x}_{1}\right) ,\ldots, u\left( {x}_{n}\right) \right) }^{T} \) given by the exact solution solves the\n\nlinear system\n\n\[ A\widetilde{U} = R + Z \]\n\nwhere \( Z = {\left( {z}_{1},\ldots ,{z}_{n}\right) }^{T} \) . Therefore,\n\n\[ A\left( {\widetilde{U} - U}\right) = Z \]\n\nand from this, using Lemma 11.6 and the estimate (11.13), we obtain\n\n\[ \left| {u\left( {x}_{j}\right) - {u}_{j}}\right| \leq {\begin{Vmatrix}{A}^{-1}Z\end{Vmatrix}}_{\infty } \leq \frac{{h}^{2}}{12}{\begin{Vmatrix}{u}^{\left( 4\right) }\end{Vmatrix}}_{\infty }{\begin{Vmatrix}{A}_{0}^{-1}e\end{Vmatrix}}_{\infty },\;j = 1,\ldots, n \]\n\n(11.14)\n\nwhere \( e = {\left( 1,\ldots ,1\right) }^{T} \) . The boundary value problem\n\n\[ - {u}_{0}^{\prime \prime } = 1,\;{u}_{0}\left( a\right) = {u}_{0}\left( b\right) = 0, \]\n\nhas the solution\n\n\[ {u}_{0}\left( x\right) = \frac{1}{2}\left( {x - a}\right) \left( {b - x}\right) \]\n\nSince \( {u}_{0}^{\left( 4\right) } = 0 \), in this case, as a consequence of (11.14) the finite difference approximation coincides with the exact solution; i.e., \( e = {A}_{0}U = {A}_{0}\widetilde{U} \) . 
Hence,\n\n\[ {\begin{Vmatrix}{A}_{0}^{-1}e\end{Vmatrix}}_{\infty } \leq {\begin{Vmatrix}{u}_{0}\end{Vmatrix}}_{\infty } = \frac{1}{8}{\left( b - a\right) }^{2}. \]\n\nInserting this into (11.14) completes the proof.
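The \( O(h^2) \) convergence of the finite difference method can be observed on a manufactured solution. The following sketch assembles and solves the tridiagonal system for \( -u'' + qu = r \), \( u(a) = u(b) = 0 \), by the standard elimination for tridiagonal matrices; the test problem \( u(x) = \sin(\pi x) \), \( q(x) = 1 + x \) on \( [0,1] \) is our choice:

```python
import math

def solve_fd(q, r, a, b, n):
    # central differences on x_j = a + j h, h = (b - a)/(n + 1);
    # returns the interior grid points and approximate values there
    h = (b - a) / (n + 1)
    xs = [a + (j + 1) * h for j in range(n)]
    # tridiagonal system: -u_{j-1} + (2 + h^2 q_j) u_j - u_{j+1} = h^2 r_j
    diag = [2 + h * h * q(x) for x in xs]
    rhs = [h * h * r(x) for x in xs]
    # forward elimination (sub- and super-diagonal entries are -1)
    for j in range(1, n):
        m = -1.0 / diag[j - 1]
        diag[j] -= m * (-1.0)
        rhs[j] -= m * rhs[j - 1]
    # back substitution (boundary values are zero)
    u = [0.0] * n
    u[-1] = rhs[-1] / diag[-1]
    for j in range(n - 2, -1, -1):
        u[j] = (rhs[j] + u[j + 1]) / diag[j]
    return xs, u

# manufactured solution u(x) = sin(pi x) on [0,1] with q(x) = 1 + x
exact = lambda x: math.sin(math.pi * x)
q = lambda x: 1.0 + x
r = lambda x: (math.pi ** 2 + 1.0 + x) * math.sin(math.pi * x)

errs = []
for n in (19, 39, 79):   # h = 1/20, 1/40, 1/80
    xs, u = solve_fd(q, r, 0.0, 1.0, n)
    errs.append(max(abs(ui - exact(x)) for x, ui in zip(xs, u)))
ratio = errs[0] / errs[1]
```

Halving \( h \) reduces the maximum error by a factor close to four.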
|
Yes
|
Theorem 11.9 For each \( h > 0 \) the difference equations (11.19)-(11.20) have a unique solution.
|
From the proof of Lemma 11.6 it can be seen that its statement also holds for the corresponding matrices of the system (11.19)-(11.20). Lemma 11.7 implies that\n\n\[ \n\left| {{\Delta u}\left( {{x}_{1},{x}_{2}}\right) - \frac{1}{{h}^{2}}\left\lbrack {u\left( {{x}_{1} + h,{x}_{2}}\right) + u\left( {{x}_{1} - h,{x}_{2}}\right) + u\left( {{x}_{1},{x}_{2} + h}\right) + u\left( {{x}_{1},{x}_{2} - h}\right) - {4u}\left( {{x}_{1},{x}_{2}}\right) }\right\rbrack }\right| \leq \frac{{h}^{2}}{12}\left\lbrack {{\begin{Vmatrix}\frac{{\partial }^{4}u}{\partial {x}_{1}^{4}}\end{Vmatrix}}_{\infty } + {\begin{Vmatrix}\frac{{\partial }^{4}u}{\partial {x}_{2}^{4}}\end{Vmatrix}}_{\infty }}\right\rbrack ,\n\]\n\nprovided that \( u \in {C}^{4}\left( {\left\lbrack {0,1}\right\rbrack \times \left\lbrack {0,1}\right\rbrack }\right) \) . Then we can proceed as in the proof of Theorem 11.8 to derive an error estimate. For this we need to have an estimate on the solution of\n\n\[ \n- \Delta {u}_{0} = 1\;\text{ in }D,\;{u}_{0} = 0\;\text{ on }\partial D.\n\]\n\n(11.21)\n\nEither from an explicit form of the solution obtained by separation of variables or by writing\n\n\[ \n{u}_{0}\left( x\right) = \frac{1}{4}\left( {1 - {x}_{1}}\right) {x}_{1} + \frac{1}{4}\left( {1 - {x}_{2}}\right) {x}_{2} + {v}_{0}\left( x\right)\n\]\n\nwhere \( {v}_{0} \) is a harmonic function, i.e., a solution of \( \Delta {v}_{0} = 0 \), and employing the maximum-minimum principle for harmonic functions (see [39]), it can be seen that \( {\begin{Vmatrix}{u}_{0}\end{Vmatrix}}_{\infty } \leq 1/8 \) (see Problem 11.10). Hence we can state the following theorem.
|
Yes
|
Theorem 11.13 (Lax-Milgram) In a Hilbert space \( X \) a bounded and strictly coercive linear operator \( A : X \rightarrow X \) has a bounded inverse \( {A}^{-1} : X \rightarrow X \) .
|
Proof. Using the Cauchy-Schwarz inequality, we can estimate\n\n\[ \parallel {Au}\parallel \parallel u\parallel \geq \operatorname{Re}\left( {{Au}, u}\right) \geq c\parallel u{\parallel }^{2}. \]\n\nHence\n\n\[ \parallel {Au}\parallel \geq c\parallel u\parallel \]\n\n(11.25)\n\nfor all \( u \in X \) . From (11.25) we observe that \( {Au} = 0 \) implies \( u = 0 \) ; i.e., \( A \) is injective.\n\nNext we show that the range \( A\left( X\right) \) is closed. Let \( v \) be an element of the closure \( \overline{A\left( X\right) } \) and let \( \left( {v}_{n}\right) \) be a sequence from \( A\left( X\right) \) with \( {v}_{n} \rightarrow v, n \rightarrow \infty \) . Then we can write \( {v}_{n} = A{u}_{n} \) with some \( {u}_{n} \in X \), and from (11.25) we find that\n\n\[ c\begin{Vmatrix}{{u}_{n} - {u}_{m}}\end{Vmatrix} \leq \begin{Vmatrix}{{v}_{n} - {v}_{m}}\end{Vmatrix} \]\n\nfor all \( n, m \in \mathbb{N} \) . Therefore, \( \left( {u}_{n}\right) \) is a Cauchy sequence in \( X \) and converges: \( {u}_{n} \rightarrow u, n \rightarrow \infty \), with some \( u \in X \) . Then \( v = {Au} \), since \( A \) is continuous, and \( A\left( X\right) = \overline{A\left( X\right) } \) is proven.\n\nFrom Remark 3.40 we now have that \( A\left( X\right) \) is complete. Let \( w \in X \) be arbitrary and denote by \( v \) its best approximation with respect to \( A\left( X\right) \) , which uniquely exists by Theorem 3.52. Then, by Theorem 3.51, we have \( \left( {w - v, u}\right) = 0 \) for all \( u \in A\left( X\right) \) . In particular, \( \left( {w - v, A\left( {w - v}\right) }\right) = 0 \) . Hence, from (11.24) we see that \( w = v \in A\left( X\right) \) . Therefore, \( A \) is surjective. Finally, the boundedness of the inverse\n\n\[ \begin{Vmatrix}{A}^{-1}\end{Vmatrix} \leq \frac{1}{c} \]\n\n(11.26)\n\nis a consequence of (11.25).
|
Yes
|
Theorem 11.15 Let \( S \) be a bounded and strictly coercive sesquilinear function on a Hilbert space \( X \) . Then there exists a uniquely determined bounded and strictly coercive linear operator \( A : X \rightarrow X \) such that\n\n\[ S\left( {u, v}\right) = \left( {u,{Av}}\right) \] for all \( u, v \in X \) .
|
Proof. For each \( v \in X \) the mapping \( u \mapsto S\left( {u, v}\right) \) clearly defines a bounded linear function on \( X \), since \( \left| {S\left( {u, v}\right) }\right| \leq C\parallel u\parallel \parallel v\parallel \) . By the Riesz Theorem 11.11 we can write \( S\left( {u, v}\right) = \left( {u, f}\right) \) for all \( u \in X \) and some \( f \in X \) . Therefore, setting \( {Av} \mathrel{\text{:=}} f \) we define an operator \( A : X \rightarrow X \) such that \( S\left( {u, v}\right) = \left( {u,{Av}}\right) \) for all \( u, v \in X \) .\n\nTo show that \( A \) is linear we observe that\n\n\[ \left( {u,{\alpha Av} + {\beta Aw}}\right) = \bar{\alpha }\left( {u,{Av}}\right) + \bar{\beta }\left( {u,{Aw}}\right) = \bar{\alpha }S\left( {u, v}\right) + \bar{\beta }S\left( {u, w}\right) \]\n\n\[ = S\left( {u,{\alpha v} + {\beta w}}\right) = \left( {u, A\left\lbrack {{\alpha v} + {\beta w}}\right\rbrack }\right) \]\n\nfor all \( u, v, w \in X \) and all \( \alpha ,\beta \in \mathbb{C} \) . The boundedness of \( A \) follows from\n\n\[ \parallel {Au}{\parallel }^{2} = \left( {{Au},{Au}}\right) = S\left( {{Au}, u}\right) \leq C\parallel {Au}\parallel \parallel u\parallel \]\n\nand the strict coercivity of \( A \) is a consequence of the strict coercivity of \( S \) .\n\nTo show uniqueness of the operator \( A \) we suppose that there exist two operators \( {A}_{1} \) and \( {A}_{2} \) with the property\n\n\[ S\left( {u, v}\right) = \left( {u,{A}_{1}v}\right) = \left( {u,{A}_{2}v}\right) \]\n\nfor all \( u, v \in X \) . Then we have \( \left( {u,{A}_{1}v - {A}_{2}v}\right) = 0 \) for all \( u, v \in X \), which implies \( {A}_{1}v = {A}_{2}v \) for all \( v \in X \) by setting \( u = {A}_{1}v - {A}_{2}v \) .
|
Yes
|
Corollary 11.16 Let \( S \) be a bounded and strictly coercive sesquilinear function and \( F \) a bounded linear function on a Hilbert space \( X \) . Then there exists a unique \( u \in X \) such that\n\n\[ S\left( {v, u}\right) = F\left( v\right) \]\n\nfor all \( v \in X \) .
|
Proof. By Theorem 11.15 there exists a uniquely determined bounded and strictly coercive linear operator \( A \) such that\n\n\[ S\left( {v, u}\right) = \left( {v,{Au}}\right) \]\n\nfor all \( u, v \in X \), and by Theorem 11.11 there exists a uniquely determined element \( f \) such that\n\n\[ F\left( v\right) = \left( {v, f}\right) \]\n\nfor all \( v \in X \) . Hence, the equation (11.27) is equivalent to the equation\n\n\[ {Au} = f\text{.} \]\n\nHowever, the latter equation is uniquely solvable as a consequence of the Lax-Milgram Theorem 11.13.\n\nSince the coercivity constants for \( A \) and \( S \) coincide, from (11.23) and (11.26) we conclude that\n\n\[ \parallel u\parallel \leq \frac{1}{c}\parallel F\parallel \]\n\nfor the unique solution \( u \) of (11.27).
|
Yes
|
Theorem 11.17 For a bounded and strictly coercive linear operator \( A \) the Galerkin equations (11.30) have a unique solution. It satisfies the error estimate\n\n\[ \begin{Vmatrix}{{u}_{n} - u}\end{Vmatrix} \leq M\mathop{\inf }\limits_{{v \in {X}_{n}}}\parallel v - u\parallel \] \n\n(11.32) \n\nwhere \( M \) is some constant depending on \( A \) (and not on \( {X}_{n} \) ).
|
Proof. Since \( {A}_{n} : {X}_{n} \rightarrow {X}_{n} \) is strictly coercive with coercivity constant \( c \), by the Lax-Milgram Theorem 11.13 we conclude that \( {A}_{n} \) is bijective; i.e., the Galerkin equations (11.30) have a unique solution \( {u}_{n} \in {X}_{n} \) . The estimate (11.26) applied to the operator \( {A}_{n} \) implies that \n\n\[ \begin{Vmatrix}{A}_{n}^{-1}\end{Vmatrix} \leq \frac{1}{c} \] \n\n(11.33) \n\nFor the error \( {u}_{n} - u \) between the Galerkin approximation \( {u}_{n} \) and the exact solution \( u \) we can write \n\n\[ {u}_{n} - u = \left( {{A}_{n}^{-1}{P}_{n}A - I}\right) u = \left( {{A}_{n}^{-1}{P}_{n}A - I}\right) \left( {u - v}\right) \] \n\nfor all \( v \in {X}_{n} \), since, trivially, we have \( {A}_{n}^{-1}{P}_{n}{Av} = v \) for \( v \in {X}_{n} \) . By Theorem 3.52 we have \( \begin{Vmatrix}{P}_{n}\end{Vmatrix} = 1 \), and therefore, using Remark 3.25 and (11.33) we can estimate \n\n\[ \begin{Vmatrix}{{A}_{n}^{-1}{P}_{n}A}\end{Vmatrix} \leq \frac{1}{c}\parallel A\parallel \] \n\nwhence (11.32) follows.
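The quasi-optimality estimate (11.32) can be illustrated in the simplest possible setting, a symmetric positive definite matrix on \( \mathbb{R}^3 \) with the subspace \( X_n = \operatorname{span}\{e_1, e_2\} \). The matrix, data, and subspace below are our choices; by Gershgorin's theorem the coercivity constant satisfies \( c \geq 1 \), and \( \|A\| \leq 6 \), so the proof above gives \( M \leq 1 + \|A\|/c \leq 7 \) for this example:

```python
import math

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(v):
    return math.sqrt(dot(v, v))

def solve2(M, b):
    # 2x2 solve by Cramer's rule for the Galerkin system
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - b[1] * M[0][1]) / det,
            (b[1] * M[0][0] - b[0] * M[1][0]) / det]

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 5.0]]    # symmetric positive definite
f = [1.0, 2.0, 3.0]
u1 = 1.4 / 10.2
u = [u1, 1.0 - 4.0 * u1, (2.0 + 4.0 * u1) / 5.0]   # solves A u = f (by hand)

b1, b2 = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]          # orthonormal basis of X_n
An = [[dot(b1, matvec(A, b1)), dot(b1, matvec(A, b2))],
      [dot(b2, matvec(A, b1)), dot(b2, matvec(A, b2))]]
c = solve2(An, [dot(b1, f), dot(b2, f)])
un = [c[0] * p + c[1] * q for p, q in zip(b1, b2)]  # Galerkin solution

err = norm([p - q for p, q in zip(un, u)])
proj = [dot(u, b1) * p + dot(u, b2) * q for p, q in zip(b1, b2)]
best = norm([p - q for p, q in zip(proj, u)])       # best approximation error
```

The Galerkin error `err` exceeds the best-approximation error `best`, but by no more than the factor \( M \).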
Theorem 11.19 The linear space

\[ {H}^{1}\left\lbrack {a, b}\right\rbrack \mathrel{\text{:=}} \left\{ {u \in {L}^{2}\left\lbrack {a, b}\right\rbrack : {u}^{\prime } \in {L}^{2}\left\lbrack {a, b}\right\rbrack }\right\} \]

endowed with the scalar product

\[ {\left( u, v\right) }_{{H}^{1}} \mathrel{\text{:=}} {\int }_{a}^{b}\left( {{uv} + {u}^{\prime }{v}^{\prime }}\right) {dx} \]

(11.44)

is a Hilbert space.
Proof. It is readily checked that \( {H}^{1}\left\lbrack {a, b}\right\rbrack \) is a linear space and that (11.44) defines a scalar product. Let \( \left( {u}_{n}\right) \) denote an \( {H}^{1} \) Cauchy sequence. Then \( \left( {u}_{n}\right) \) and \( \left( {u}_{n}^{\prime }\right) \) are both \( {L}^{2} \) Cauchy sequences. From the completeness of \( {L}^{2}\left\lbrack {a, b}\right\rbrack \) we obtain the existence of \( u \in {L}^{2}\left\lbrack {a, b}\right\rbrack \) and \( w \in {L}^{2}\left\lbrack {a, b}\right\rbrack \) such that \( {\begin{Vmatrix}{u}_{n} - u\end{Vmatrix}}_{2} \rightarrow 0 \) and \( {\begin{Vmatrix}{u}_{n}^{\prime } - w\end{Vmatrix}}_{2} \rightarrow 0 \) as \( n \rightarrow \infty \). Then for all \( v \in {C}^{1}\left\lbrack {a, b}\right\rbrack \) with \( v\left( a\right) = v\left( b\right) = 0 \) we can estimate

\[ {\int }_{a}^{b}\left( {u{v}^{\prime } + {wv}}\right) {dx} = {\int }_{a}^{b}\left\{ {\left( {u - {u}_{n}}\right) {v}^{\prime } + \left( {w - {u}_{n}^{\prime }}\right) v}\right\} {dx} \]

\[ \leq {\begin{Vmatrix}u - {u}_{n}\end{Vmatrix}}_{{L}^{2}}{\begin{Vmatrix}{v}^{\prime }\end{Vmatrix}}_{{L}^{2}} + {\begin{Vmatrix}w - {u}_{n}^{\prime }\end{Vmatrix}}_{{L}^{2}}\parallel v{\parallel }_{{L}^{2}} \rightarrow 0,\;n \rightarrow \infty . \]

Therefore, \( u \in {H}^{1}\left\lbrack {a, b}\right\rbrack \) with \( {u}^{\prime } = w \), and \( {\begin{Vmatrix}u - {u}_{n}\end{Vmatrix}}_{{H}^{1}} \rightarrow 0, n \rightarrow \infty \), which completes the proof.
Theorem 11.20 \( {C}^{1}\left\lbrack {a, b}\right\rbrack \) is dense in \( {H}^{1}\left\lbrack {a, b}\right\rbrack \) .
Proof. Since \( C\left\lbrack {a, b}\right\rbrack \) is dense in \( {L}^{2}\left\lbrack {a, b}\right\rbrack \), for each \( u \in {H}^{1}\left\lbrack {a, b}\right\rbrack \) and \( \varepsilon > 0 \) there exists \( w \in C\left\lbrack {a, b}\right\rbrack \) such that \( {\begin{Vmatrix}{u}^{\prime } - w\end{Vmatrix}}_{2} < \varepsilon \). Then we define \( v \in {C}^{1}\left\lbrack {a, b}\right\rbrack \) by

\[ v\left( x\right) \mathrel{\text{:=}} u\left( a\right) + {\int }_{a}^{x}w\left( \xi \right) {d\xi }, \]

and using (11.43), we have

\[ u\left( x\right) - v\left( x\right) = {\int }_{a}^{x}\left\{ {{u}^{\prime }\left( \xi \right) - w\left( \xi \right) }\right\} {d\xi }. \]

By the Cauchy-Schwarz inequality this implies \( \parallel u - v{\parallel }_{2} < \left( {b - a}\right) \varepsilon \), and the proof is complete.
Theorem 11.21 \( {H}^{1}\left\lbrack {a, b}\right\rbrack \) is contained in \( C\left\lbrack {a, b}\right\rbrack \) .
Proof. From (11.43) we have

\[ u\left( x\right) - u\left( y\right) = {\int }_{y}^{x}{u}^{\prime }\left( \xi \right) {d\xi }, \]

(11.45)

whence by the Cauchy-Schwarz inequality,

\[ \left| {u\left( x\right) - u\left( y\right) }\right| \leq {\left| x - y\right| }^{1/2}{\begin{Vmatrix}{u}^{\prime }\end{Vmatrix}}_{2} \]

follows for all \( x, y \in \left\lbrack {a, b}\right\rbrack \). Therefore, every function \( u \in {H}^{1}\left\lbrack {a, b}\right\rbrack \) belongs to \( C\left\lbrack {a, b}\right\rbrack \), or more precisely, it coincides almost everywhere with a continuous function.
Theorem 11.22 The space

\[ {H}_{0}^{1}\left\lbrack {a, b}\right\rbrack \mathrel{\text{:=}} \left\{ {u \in {H}^{1}\left\lbrack {a, b}\right\rbrack : u\left( a\right) = u\left( b\right) = 0}\right\} \]

is a complete subspace of \( {H}^{1}\left\lbrack {a, b}\right\rbrack \).
Proof. Since the \( {H}^{1} \) norm is stronger than the maximum norm, each \( {H}^{1} \) convergent sequence of elements of \( {H}_{0}^{1}\left\lbrack {a, b}\right\rbrack \) has its limit in \( {H}_{0}^{1}\left\lbrack {a, b}\right\rbrack \) . Therefore \( {H}_{0}^{1}\left\lbrack {a, b}\right\rbrack \) is a closed subspace of \( {H}^{1}\left\lbrack {a, b}\right\rbrack \), and the statement follows from Remark 3.40.
Theorem 11.24 Assume that \( p > 0 \) and \( q \geq 0 \) . Then there exists a unique weak solution to the boundary value problem (11.36)-(11.37).
Proof. The sesquilinear function \( S : {H}_{0}^{1}\left\lbrack {a, b}\right\rbrack \times {H}_{0}^{1}\left\lbrack {a, b}\right\rbrack \rightarrow \mathbb{R} \) is bounded, since

\[ \left| {S\left( {u, v}\right) }\right| \leq \max \left\{ {\parallel p{\parallel }_{\infty },\parallel q{\parallel }_{\infty }}\right\} \parallel u{\parallel }_{{H}^{1}}\parallel v{\parallel }_{{H}^{1}} \]

by the Cauchy-Schwarz inequality. For \( u \in {H}_{0}^{1}\left\lbrack {a, b}\right\rbrack \), from (11.45) and the Cauchy-Schwarz inequality we obtain that

\[ \parallel u{\parallel }_{{L}^{2}}^{2} = {\int }_{a}^{b}{\left| {\int }_{a}^{x}{u}^{\prime }\left( \xi \right) d\xi \right| }^{2}{dx} \leq {\left( b - a\right) }^{2}{\begin{Vmatrix}{u}^{\prime }\end{Vmatrix}}_{{L}^{2}}^{2}. \]

Hence we can estimate

\[ S\left( {u, u}\right) \geq \mathop{\min }\limits_{{a \leq x \leq b}}p\left( x\right) {\int }_{a}^{b}{\left| {u}^{\prime }\right| }^{2}{dx} \geq c\parallel u{\parallel }_{{H}^{1}}^{2} \]

for all \( u \in {H}_{0}^{1}\left\lbrack {a, b}\right\rbrack \) and some positive constant \( c \); i.e., \( S \) is strictly coercive. Finally, by the Cauchy-Schwarz inequality we have

\[ \left| {F\left( v\right) }\right| \leq \parallel r{\parallel }_{{L}^{2}}\parallel v{\parallel }_{{L}^{2}} \leq \parallel r{\parallel }_{{L}^{2}}\parallel v{\parallel }_{{H}^{1}}; \]

i.e., the linear function \( F : {H}_{0}^{1}\left\lbrack {a, b}\right\rbrack \rightarrow \mathbb{R} \) is bounded. Now the statement of the theorem follows from Corollary 11.16.

We note that from (11.28) and the previous inequality it follows that

\[ \parallel u{\parallel }_{{H}^{1}} \leq \frac{1}{c}\parallel r{\parallel }_{{L}^{2}} \]

(11.46)

for the weak solution \( u \) to the boundary value problem (11.36)-(11.37).
Theorem 11.25 Each weak solution to the boundary value problem (11.36)-(11.37) is also a classical solution; i.e., it is twice continuously differentiable.
Proof. Define

\[ f\left( x\right) \mathrel{\text{:=}} {\int }_{a}^{x}\left\lbrack {q\left( \xi \right) u\left( \xi \right) - r\left( \xi \right) }\right\rbrack {d\xi },\;x \in \left\lbrack {a, b}\right\rbrack . \]

Then \( f \in {C}^{1}\left\lbrack {a, b}\right\rbrack \). From (11.38), by partial integration we obtain

\[ {\int }_{a}^{b}\left\lbrack {p{u}^{\prime } - f}\right\rbrack {v}^{\prime }{dx} = 0 \]

for all \( v \in {H}_{0}^{1}\left\lbrack {a, b}\right\rbrack \). Now we set

\[ c \mathrel{\text{:=}} \frac{1}{b - a}{\int }_{a}^{b}\left\lbrack {p{u}^{\prime } - f}\right\rbrack {d\xi } \]

and

\[ {v}_{0}\left( x\right) \mathrel{\text{:=}} {\int }_{a}^{x}\left\lbrack {p\left( \xi \right) {u}^{\prime }\left( \xi \right) - f\left( \xi \right) - c}\right\rbrack {d\xi },\;x \in \left\lbrack {a, b}\right\rbrack . \]

Then \( {v}_{0} \in {H}_{0}^{1}\left\lbrack {a, b}\right\rbrack \) and

\[ {\int }_{a}^{b}{\left\lbrack p{u}^{\prime } - f - c\right\rbrack }^{2}{dx} = {\int }_{a}^{b}\left\lbrack {p{u}^{\prime } - f - c}\right\rbrack {v}_{0}^{\prime }{dx} \]

\[ = {\int }_{a}^{b}\left\lbrack {p{u}^{\prime } - f}\right\rbrack {v}_{0}^{\prime }{dx} - c{\int }_{a}^{b}{v}_{0}^{\prime }{dx} = 0. \]

Hence

\[ p{u}^{\prime } = f + c, \]

and since \( f \) and \( p \) are in \( {C}^{1}\left\lbrack {a, b}\right\rbrack \) with \( p\left( x\right) > 0 \) for all \( x \in \left\lbrack {a, b}\right\rbrack \), we can conclude that \( {u}^{\prime } \in {C}^{1}\left\lbrack {a, b}\right\rbrack \) and

\[ {\left( p{u}^{\prime }\right) }^{\prime } = {f}^{\prime } = {qu} - r. \]

This completes the proof.
Lemma 11.26 Let \( f \in {C}^{2}\left\lbrack {a, b}\right\rbrack \). Then the remainder \( {R}_{1}f \mathrel{\text{:=}} f - {L}_{1}f \) for the linear interpolation at the two endpoints \( a \) and \( b \) can be estimated by

\[ {\begin{Vmatrix}{R}_{1}f\end{Vmatrix}}_{{L}^{2}} \leq {\left( b - a\right) }^{2}{\begin{Vmatrix}{f}^{\prime \prime }\end{Vmatrix}}_{{L}^{2}}, \]

(11.49)

\[ {\begin{Vmatrix}{\left( {R}_{1}f\right) }^{\prime }\end{Vmatrix}}_{{L}^{2}} \leq \left( {b - a}\right) {\begin{Vmatrix}{f}^{\prime \prime }\end{Vmatrix}}_{{L}^{2}}. \]
Proof. For each function \( g \in {C}^{1}\left\lbrack {a, b}\right\rbrack \) satisfying \( g\left( a\right) = 0 \), from

\[ g\left( x\right) = {\int }_{a}^{x}{g}^{\prime }\left( \xi \right) {d\xi }, \]

by using the Cauchy-Schwarz inequality we obtain

\[ {\left| g\left( x\right) \right| }^{2} \leq \left( {b - a}\right) {\begin{Vmatrix}{g}^{\prime }\end{Vmatrix}}_{{L}^{2}}^{2},\;x \in \left\lbrack {a, b}\right\rbrack . \]

From this, by integration we derive the Friedrich inequality

\[ \parallel g{\parallel }_{{L}^{2}} \leq \left( {b - a}\right) {\begin{Vmatrix}{g}^{\prime }\end{Vmatrix}}_{{L}^{2}} \]

(11.50)

for functions \( g \in {C}^{1}\left\lbrack {a, b}\right\rbrack \) with \( g\left( a\right) = 0 \) (or \( g\left( b\right) = 0 \)). Using the interpolation property \( \left( {{R}_{1}f}\right) \left( a\right) = \left( {{R}_{1}f}\right) \left( b\right) = 0 \), by partial integration we obtain

\[ {\int }_{a}^{b}{\left\lbrack {f}^{\prime } - {\left( {L}_{1}f\right) }^{\prime }\right\rbrack }^{2}{dx} = {\int }_{a}^{b}{f}^{\prime \prime }\left( {{L}_{1}f - f}\right) {dx}. \]

From this, again applying the Cauchy-Schwarz inequality, we have

\[ {\begin{Vmatrix}{\left( {R}_{1}f\right) }^{\prime }\end{Vmatrix}}_{{L}^{2}}^{2} \leq {\begin{Vmatrix}{f}^{\prime \prime }\end{Vmatrix}}_{{L}^{2}}{\begin{Vmatrix}{R}_{1}f\end{Vmatrix}}_{{L}^{2}}, \]

whence (11.49) follows with the aid of Friedrich's inequality (11.50) for \( g = {R}_{1}f \).
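Both estimates are easy to check numerically. The following sketch (NumPy and the example \( f(x) = e^x \) on \( \left\lbrack 0,1\right\rbrack \) are illustrative assumptions, not from the text) compares \( R_1 f \) and \( (R_1 f)' \) with the bounds of Lemma 11.26:

```python
import numpy as np

# Linear interpolation of f(x) = exp(x) at the endpoints of [0, 1].
x = np.linspace(0.0, 1.0, 10001)
h = x[1] - x[0]
f = np.exp(x)
L1f = f[0] + (f[-1] - f[0]) * x      # chord through (0, f(0)), (1, f(1))
R = f - L1f                          # remainder R_1 f

def l2norm(g):
    # composite trapezoidal approximation of the L^2 norm on [0, 1]
    w = np.full(g.size, h)
    w[0] = w[-1] = 0.5 * h
    return np.sqrt(np.sum(w * g * g))

fpp = np.exp(x)                      # f'' = exp(x); here b - a = 1
Rp = np.gradient(R, x)               # (R_1 f)' by finite differences
print(l2norm(R) <= l2norm(fpp))      # True: first bound of (11.49)
print(l2norm(Rp) <= l2norm(fpp))     # True: second bound
```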
Theorem 11.27 The error in the finite element approximation by linear splines for the boundary value problem (11.36)-(11.37) can be estimated by

\[ {\begin{Vmatrix}{u}_{n} - u\end{Vmatrix}}_{{H}^{1}} \leq C{\begin{Vmatrix}{u}^{\prime \prime }\end{Vmatrix}}_{{L}^{2}}h \]

(11.51)

for some positive constant \( C \).
Proof. By summing up the inequalities (11.49), applied to each of the subintervals of length \( h \), for the interpolating linear spline \( {w}_{n} \in {X}_{n} \) with \( {w}_{n}\left( {x}_{j}\right) = u\left( {x}_{j}\right) \) for \( j = 0,\ldots, n \) we find that

\[ {\begin{Vmatrix}{w}_{n}^{\prime } - {u}^{\prime }\end{Vmatrix}}_{{L}^{2}} \leq {\begin{Vmatrix}{u}^{\prime \prime }\end{Vmatrix}}_{{L}^{2}}h \]

and

\[ {\begin{Vmatrix}{w}_{n} - u\end{Vmatrix}}_{{L}^{2}} \leq {\begin{Vmatrix}{u}^{\prime \prime }\end{Vmatrix}}_{{L}^{2}}{h}^{2}, \]

whence

\[ \mathop{\inf }\limits_{{v \in {X}_{n}}}\parallel v - u{\parallel }_{{H}^{1}} \leq {\begin{Vmatrix}{w}_{n} - u\end{Vmatrix}}_{{H}^{1}} \leq \left( {1 + b - a}\right) {\begin{Vmatrix}{u}^{\prime \prime }\end{Vmatrix}}_{{L}^{2}}h \]

follows. Now (11.51) is a consequence of the error estimate for the Galerkin method of Theorem 11.17.
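A minimal numerical sketch of the finite element method with linear splines (NumPy and the model problem are illustrative assumptions, not from the text): we take (11.36)-(11.37) with \( p = 1, q = 0 \), \( r(x) = \pi^2 \sin \pi x \) on \( \left\lbrack 0,1\right\rbrack \), whose exact solution is \( u(x) = \sin \pi x \).

```python
import numpy as np

# Galerkin method with hat functions on a uniform grid. The load
# integrals int r*phi_j dx are approximated by h*r(x_j), a common
# simplification that preserves the convergence behavior.
n = 100
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

# Stiffness matrix: S(phi_k, phi_j) = int phi_j' phi_k' dx
A = (1.0 / h) * (2.0 * np.eye(n - 1)
                 - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1))
b = h * np.pi**2 * np.sin(np.pi * x[1:-1])   # approximate load vector

u_h = np.linalg.solve(A, b)                  # nodal values of u_n
err = np.max(np.abs(u_h - np.sin(np.pi * x[1:-1])))
print(err)                                   # of order 1e-4 for n = 100
```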
Theorem 11.28 The error in the finite element approximation by linear splines for the boundary value problem (11.36)-(11.37) can be estimated by

\[ {\begin{Vmatrix}{u}_{n} - u\end{Vmatrix}}_{{L}^{2}} \leq C{\begin{Vmatrix}{u}^{\prime \prime }\end{Vmatrix}}_{{L}^{2}}{h}^{2} \]

with some positive constant \( C \).
Proof. Denote by \( {z}_{n} \) the weak solution to the boundary value problem with the right-hand side \( u - {u}_{n} \); i.e.,

\[ S\left( {v,{z}_{n}}\right) = {\left( v, u - {u}_{n}\right) }_{{L}^{2}} \]

for all \( v \in {H}_{0}^{1}\left\lbrack {a, b}\right\rbrack \). In particular, inserting \( v = u - {u}_{n} \), it follows that

\[ S\left( {u - {u}_{n},{z}_{n}}\right) = {\begin{Vmatrix}u - {u}_{n}\end{Vmatrix}}_{{L}^{2}}^{2}. \]

(11.52)

Since \( S\left( {v, u}\right) = F\left( v\right) \) and \( S\left( {v,{u}_{n}}\right) = F\left( v\right) \) for all \( v \in {X}_{n} \), using the symmetry of \( S \) we have

\[ S\left( {u - {u}_{n}, v}\right) = 0 \]

for all \( v \in {X}_{n} \). Inserting the Galerkin approximation to \( {z}_{n} \), which we denote by \( {\widetilde{z}}_{n} \), into the last equation and subtracting from (11.52), we obtain

\[ {\begin{Vmatrix}u - {u}_{n}\end{Vmatrix}}_{{L}^{2}}^{2} = S\left( {u - {u}_{n},{z}_{n} - {\widetilde{z}}_{n}}\right) . \]

(11.53)

Since \( S \) is bounded, from (11.53) and (11.51), applied to \( u - {u}_{n} \) and \( {z}_{n} - {\widetilde{z}}_{n} \), we can conclude that

\[ {\begin{Vmatrix}u - {u}_{n}\end{Vmatrix}}_{{L}^{2}}^{2} \leq {C}_{1}{\begin{Vmatrix}{u}^{\prime \prime }\end{Vmatrix}}_{{L}^{2}}{\begin{Vmatrix}{z}_{n}^{\prime \prime }\end{Vmatrix}}_{{L}^{2}}{h}^{2} \]

for some constant \( {C}_{1} \). However, from (11.47) we also have that

\[ {\begin{Vmatrix}{z}_{n}^{\prime \prime }\end{Vmatrix}}_{{L}^{2}} \leq {C}_{2}{\begin{Vmatrix}u - {u}_{n}\end{Vmatrix}}_{{L}^{2}} \]

for some constant \( {C}_{2} \). Now the assertion of the theorem follows from the last two inequalities.
Theorem 12.4 The integral operator (12.3) with continuous kernel is a compact operator on \( C\left\lbrack {a, b}\right\rbrack \) .
Proof. For all \( \varphi \in C\left\lbrack {a, b}\right\rbrack \) with \( \parallel \varphi {\parallel }_{\infty } \leq 1 \) and all \( x \in \left\lbrack {a, b}\right\rbrack \), we have that

\[ \left| {\left( {A\varphi }\right) \left( x\right) }\right| \leq \left( {b - a}\right) \mathop{\max }\limits_{{x, y \in \left\lbrack {a, b}\right\rbrack }}\left| {K\left( {x, y}\right) }\right| ; \]

i.e., the set \( U \mathrel{\text{:=}} \{ {A\varphi } : \varphi \in C\left\lbrack {a, b}\right\rbrack ,\parallel \varphi {\parallel }_{\infty } \leq 1\} \subset C\left\lbrack {a, b}\right\rbrack \) is bounded. Since \( K \) is uniformly continuous on the square \( \left\lbrack {a, b}\right\rbrack \times \left\lbrack {a, b}\right\rbrack \), for every \( \varepsilon > 0 \) there exists \( \delta > 0 \) such that

\[ \left| {K\left( {x, z}\right) - K\left( {y, z}\right) }\right| < \frac{\varepsilon }{b - a} \]

for all \( x, y, z \in \left\lbrack {a, b}\right\rbrack \) with \( \left| {x - y}\right| < \delta \). Then

\[ \left| {\left( {A\varphi }\right) \left( x\right) - \left( {A\varphi }\right) \left( y\right) }\right| = \left| {{\int }_{a}^{b}\left\lbrack {K\left( {x, z}\right) - K\left( {y, z}\right) }\right\rbrack \varphi \left( z\right) {dz}}\right| < \varepsilon \]

for all \( x, y \in \left\lbrack {a, b}\right\rbrack \) with \( \left| {x - y}\right| < \delta \) and all \( \varphi \in C\left\lbrack {a, b}\right\rbrack \) with \( \parallel \varphi {\parallel }_{\infty } \leq 1 \); i.e., \( U \) is equicontinuous. Hence \( A \) is compact by the Arzelà-Ascoli Theorem 12.3.
Theorem 12.5 The norm of the integral operator \( A : C\left\lbrack {a, b}\right\rbrack \rightarrow C\left\lbrack {a, b}\right\rbrack \) with continuous kernel \( K \) is given by

\[ \parallel A{\parallel }_{\infty } = \mathop{\max }\limits_{{a \leq x \leq b}}{\int }_{a}^{b}\left| {K\left( {x, y}\right) }\right| {dy}. \]

(12.4)
Proof. For each \( \varphi \in C\left\lbrack {a, b}\right\rbrack \) with \( \parallel \varphi {\parallel }_{\infty } \leq 1 \) we have

\[ \left| {\left( {A\varphi }\right) \left( x\right) }\right| \leq {\int }_{a}^{b}\left| {K\left( {x, y}\right) }\right| {dy},\;x \in \left\lbrack {a, b}\right\rbrack , \]

and thus

\[ \parallel A{\parallel }_{\infty } = \mathop{\sup }\limits_{{\parallel \varphi {\parallel }_{\infty } \leq 1}}\parallel {A\varphi }{\parallel }_{\infty } \leq \mathop{\max }\limits_{{a \leq x \leq b}}{\int }_{a}^{b}\left| {K\left( {x, y}\right) }\right| {dy}. \]

Since \( K \) is continuous, there exists \( {x}_{0} \in \left\lbrack {a, b}\right\rbrack \) such that

\[ {\int }_{a}^{b}\left| {K\left( {{x}_{0}, y}\right) }\right| {dy} = \mathop{\max }\limits_{{a \leq x \leq b}}{\int }_{a}^{b}\left| {K\left( {x, y}\right) }\right| {dy}. \]

For \( \varepsilon > 0 \) choose \( \psi \in C\left\lbrack {a, b}\right\rbrack \) by setting

\[ \psi \left( y\right) \mathrel{\text{:=}} \frac{K\left( {{x}_{0}, y}\right) }{\left| {K\left( {{x}_{0}, y}\right) }\right| + \varepsilon },\;y \in \left\lbrack {a, b}\right\rbrack . \]

Then \( \parallel \psi {\parallel }_{\infty } \leq 1 \) and

\[ \parallel {A\psi }{\parallel }_{\infty } \geq \left| {\left( {A\psi }\right) \left( {x}_{0}\right) }\right| = {\int }_{a}^{b}\frac{{\left\lbrack K\left( {x}_{0}, y\right) \right\rbrack }^{2}}{\left| {K\left( {{x}_{0}, y}\right) }\right| + \varepsilon }{dy} \geq {\int }_{a}^{b}\frac{{\left\lbrack K\left( {x}_{0}, y\right) \right\rbrack }^{2} - {\varepsilon }^{2}}{\left| {K\left( {{x}_{0}, y}\right) }\right| + \varepsilon }{dy} \]

\[ = {\int }_{a}^{b}\left| {K\left( {{x}_{0}, y}\right) }\right| {dy} - \varepsilon \left( {b - a}\right) . \]

Hence

\[ \parallel A{\parallel }_{\infty } = \mathop{\sup }\limits_{{\parallel \varphi {\parallel }_{\infty } \leq 1}}\parallel {A\varphi }{\parallel }_{\infty } \geq \parallel {A\psi }{\parallel }_{\infty } \geq {\int }_{a}^{b}\left| {K\left( {{x}_{0}, y}\right) }\right| {dy} - \varepsilon \left( {b - a}\right) , \]

and since this holds for all \( \varepsilon > 0 \), we have

\[ \parallel A{\parallel }_{\infty } \geq {\int }_{a}^{b}\left| {K\left( {{x}_{0}, y}\right) }\right| {dy} = \mathop{\max }\limits_{{a \leq x \leq b}}{\int }_{a}^{b}\left| {K\left( {x, y}\right) }\right| {dy}. \]

This concludes the proof.
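As a quick numerical illustration of (12.4) (the kernel \( K(x, y) = xy \) on \( \left\lbrack 0,1\right\rbrack \) is an assumed example, not from the text), the row integrals can be discretized directly; here \( \max_x \int_0^1 \left| xy\right| dy = 1/2 \):

```python
import numpy as np

# Approximate ||A||_inf = max_x int_a^b |K(x,y)| dy for K(x,y) = x*y.
xs = np.linspace(0.0, 1.0, 201)
ys = np.linspace(0.0, 1.0, 1001)
h = ys[1] - ys[0]
w = np.full(ys.size, h)              # composite trapezoidal weights
w[0] = w[-1] = 0.5 * h
norm = max(np.sum(w * np.abs(x * ys)) for x in xs)
print(norm)                          # 0.5 (trapezoidal rule exact here)
```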
Theorem 12.6 Let \( A : X \rightarrow X \) be a compact linear operator on a Banach space \( X \) such that \( I - A \) is injective. Assume that the sequence \( {A}_{n} : X \rightarrow X \) of bounded linear operators is norm convergent, i.e., \( \begin{Vmatrix}{{A}_{n} - A}\end{Vmatrix} \rightarrow 0, n \rightarrow \infty \) . Then for sufficiently large \( n \) the inverse operators \( {\left( I - {A}_{n}\right) }^{-1} : X \rightarrow X \) exist and are uniformly bounded. For the solutions of the equations \[ \varphi - {A\varphi } = f\;\text{ and }\;{\varphi }_{n} - {A}_{n}{\varphi }_{n} = {f}_{n} \] we have an error estimate \[ \begin{Vmatrix}{{\varphi }_{n} - \varphi }\end{Vmatrix} \leq C\left\{ {\begin{Vmatrix}{\left( {{A}_{n} - A}\right) \varphi }\end{Vmatrix} + \begin{Vmatrix}{{f}_{n} - f}\end{Vmatrix}}\right\} \] for some constant \( C \) .
Proof. By the Riesz Theorem 12.2, the inverse \( {\left( I - A\right) }^{-1} : X \rightarrow X \) exists and is bounded. Since \( \begin{Vmatrix}{{A}_{n} - A}\end{Vmatrix} \rightarrow 0, n \rightarrow \infty \), by Remark 3.25 we have \( \begin{Vmatrix}{{\left( I - A\right) }^{-1}\left( {{A}_{n} - A}\right) }\end{Vmatrix} \leq q < 1 \) for sufficiently large \( n \). For these \( n \), by the Neumann series Theorem 3.48, the inverse operators of

\[ I - {\left( I - A\right) }^{-1}\left( {{A}_{n} - A}\right) = {\left( I - A\right) }^{-1}\left( {I - {A}_{n}}\right) \]

exist and are uniformly bounded by

\[ \begin{Vmatrix}{\left\lbrack I - {\left( I - A\right) }^{-1}\left( {A}_{n} - A\right) \right\rbrack }^{-1}\end{Vmatrix} \leq \frac{1}{1 - q}. \]

But then \( {\left\lbrack I - {\left( I - A\right) }^{-1}\left( {A}_{n} - A\right) \right\rbrack }^{-1}{\left( I - A\right) }^{-1} \) are the inverse operators of \( I - {A}_{n} \), and they are uniformly bounded. The error estimate follows from

\[ \left( {I - {A}_{n}}\right) \left( {{\varphi }_{n} - \varphi }\right) = \left( {{A}_{n} - A}\right) \varphi + {f}_{n} - f \]

by the uniform boundedness of the inverse operators \( {\left( I - {A}_{n}\right) }^{-1} \).
Lemma 12.9 Let \( X \) be a Banach space, let \( {A}_{n} : X \rightarrow X \) be a collectively compact sequence, and let \( {B}_{n} : X \rightarrow X \) be a pointwise convergent sequence with limit operator \( B : X \rightarrow X \). Then

\[ \begin{Vmatrix}{\left( {{B}_{n} - B}\right) {A}_{n}}\end{Vmatrix} \rightarrow 0,\;n \rightarrow \infty . \]

(12.7)
Proof. Assume that (12.7) is not valid. Then there exist \( {\varepsilon }_{0} > 0 \), a sequence \( \left( {n}_{k}\right) \) in \( \mathbb{N} \) with \( {n}_{k} \rightarrow \infty, k \rightarrow \infty \), and a sequence \( \left( {\varphi }_{k}\right) \) in \( X \) with \( \begin{Vmatrix}{\varphi }_{k}\end{Vmatrix} \leq 1 \) such that

\[ \begin{Vmatrix}{\left( {{B}_{{n}_{k}} - B}\right) {A}_{{n}_{k}}{\varphi }_{k}}\end{Vmatrix} \geq {\varepsilon }_{0},\;k = 1,2,\ldots \]

(12.8)

Since the sequence \( \left( {A}_{n}\right) \) is collectively compact, there exists a subsequence such that

\[ {A}_{{n}_{k\left( j\right) }}{\varphi }_{k\left( j\right) } \rightarrow \psi \in X,\;j \rightarrow \infty . \]

(12.9)

Then we can estimate with the aid of the triangle inequality and Remark 3.25 to obtain

\[ \begin{Vmatrix}{\left( {{B}_{{n}_{k\left( j\right) }} - B}\right) {A}_{{n}_{k\left( j\right) }}{\varphi }_{k\left( j\right) }}\end{Vmatrix} \leq \begin{Vmatrix}{\left( {{B}_{{n}_{k\left( j\right) }} - B}\right) \psi }\end{Vmatrix} + \begin{Vmatrix}{{B}_{{n}_{k\left( j\right) }} - B}\end{Vmatrix}\begin{Vmatrix}{{A}_{{n}_{k\left( j\right) }}{\varphi }_{k\left( j\right) } - \psi }\end{Vmatrix}. \]

(12.10)

The first term on the right-hand side of (12.10) tends to zero as \( j \rightarrow \infty \), since the operator sequence \( \left( {B}_{n}\right) \) is pointwise convergent. The second term tends to zero as \( j \rightarrow \infty \), since the operator sequence \( \left( {B}_{n}\right) \) is uniformly bounded by Theorem 12.7 and since we have the convergence (12.9). Therefore, passing to the limit \( j \rightarrow \infty \) in (12.10) yields a contradiction to (12.8), and the proof is complete.
Theorem 12.10 Let \( A : X \rightarrow X \) be a compact linear operator on a Banach space \( X \) such that \( I - A \) is injective, and assume that the sequence \( {A}_{n} : X \rightarrow X \) of linear operators is collectively compact and pointwise convergent; i.e., \( {A}_{n}\varphi \rightarrow {A\varphi }, n \rightarrow \infty \), for all \( \varphi \in X \). Then for sufficiently large \( n \) the inverse operators \( {\left( I - {A}_{n}\right) }^{-1} : X \rightarrow X \) exist and are uniformly bounded. For the solutions of the equations

\[ \varphi - {A\varphi } = f\;\text{ and }\;{\varphi }_{n} - {A}_{n}{\varphi }_{n} = {f}_{n} \]

we have an error estimate

\[ \begin{Vmatrix}{{\varphi }_{n} - \varphi }\end{Vmatrix} \leq C\left\{ {\begin{Vmatrix}{\left( {{A}_{n} - A}\right) \varphi }\end{Vmatrix} + \begin{Vmatrix}{{f}_{n} - f}\end{Vmatrix}}\right\} \]

(12.11)

for some constant \( C \).
Proof. By the Riesz Theorem 12.2, the inverse \( {\left( I - A\right) }^{-1} : X \rightarrow X \) exists and is bounded. The identity

\[ {\left( I - A\right) }^{-1} = I + {\left( I - A\right) }^{-1}A \]

suggests

\[ {M}_{n} \mathrel{\text{:=}} I + {\left( I - A\right) }^{-1}{A}_{n} \]

as an approximate inverse for \( I - {A}_{n} \). Elementary calculations yield

\[ {M}_{n}\left( {I - {A}_{n}}\right) = I - {S}_{n}, \]

(12.12)

where

\[ {S}_{n} \mathrel{\text{:=}} {\left( I - A\right) }^{-1}\left( {{A}_{n} - A}\right) {A}_{n}. \]

From Lemma 12.9 we conclude that \( \begin{Vmatrix}{S}_{n}\end{Vmatrix} \rightarrow 0, n \rightarrow \infty \). Hence for sufficiently large \( n \) we have \( \begin{Vmatrix}{S}_{n}\end{Vmatrix} \leq q < 1 \). For these \( n \), by the Neumann series Theorem 3.48, the inverse operators \( {\left( I - {S}_{n}\right) }^{-1} \) exist and are uniformly bounded by

\[ \begin{Vmatrix}{\left( I - {S}_{n}\right) }^{-1}\end{Vmatrix} \leq \frac{1}{1 - q}. \]

Now (12.12) implies first that \( I - {A}_{n} \) is injective, and therefore, since \( {A}_{n} \) is compact, by Theorem 12.1 the inverse \( {\left( I - {A}_{n}\right) }^{-1} \) exists. Then (12.12) also yields \( {\left( I - {A}_{n}\right) }^{-1} = {\left( I - {S}_{n}\right) }^{-1}{M}_{n} \), whence uniform boundedness follows, since the operators \( {M}_{n} \) are uniformly bounded by Theorem 12.7. The error estimate (12.11) is proven as in Theorem 12.6.
Theorem 12.11 Let \( {\varphi }_{n} \) be a solution of

\[ {\varphi }_{n}\left( x\right) - \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}K\left( {x,{x}_{k}}\right) {\varphi }_{n}\left( {x}_{k}\right) = f\left( x\right) ,\;x \in \left\lbrack {a, b}\right\rbrack . \]

(12.13)

Then the values \( {\varphi }_{j}^{\left( n\right) } \mathrel{\text{:=}} {\varphi }_{n}\left( {x}_{j}\right), j = 0,\ldots, n \), at the quadrature points satisfy the linear system

\[ {\varphi }_{j}^{\left( n\right) } - \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}K\left( {{x}_{j},{x}_{k}}\right) {\varphi }_{k}^{\left( n\right) } = f\left( {x}_{j}\right) ,\;j = 0,\ldots, n. \]

(12.14)

Conversely, let \( {\varphi }_{j}^{\left( n\right) }, j = 0,\ldots, n \), be a solution of the system (12.14). Then the function \( {\varphi }_{n} \) defined by

\[ {\varphi }_{n}\left( x\right) \mathrel{\text{:=}} f\left( x\right) + \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}K\left( {x,{x}_{k}}\right) {\varphi }_{k}^{\left( n\right) },\;x \in \left\lbrack {a, b}\right\rbrack , \]

(12.15)

solves equation (12.13).
Proof. The first statement is trivial. For a solution \( {\varphi }_{j}^{\left( n\right) }, j = 0,\ldots, n \), of the system (12.14) the function \( {\varphi }_{n} \) defined by (12.15) has values

\[ {\varphi }_{n}\left( {x}_{j}\right) = f\left( {x}_{j}\right) + \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}K\left( {{x}_{j},{x}_{k}}\right) {\varphi }_{k}^{\left( n\right) } = {\varphi }_{j}^{\left( n\right) },\;j = 0,\ldots, n. \]

Inserting this into (12.15), we see that \( {\varphi }_{n} \) satisfies (12.13).

The formula (12.15) may be viewed as a natural interpolation of the values \( {\varphi }_{j}^{\left( n\right) }, j = 0,\ldots, n \), at the quadrature points to obtain the approximating function \( {\varphi }_{n} \). It was introduced by Nyström in 1930.
Theorem 12.12 The norm of the quadrature operators \( {A}_{n} \) is given by

\[ {\begin{Vmatrix}{A}_{n}\end{Vmatrix}}_{\infty } = \mathop{\max }\limits_{{a \leq x \leq b}}\mathop{\sum }\limits_{{k = 0}}^{n}\left| {{a}_{k}K\left( {x,{x}_{k}}\right) }\right| . \]

(12.16)
Proof. For each \( \varphi \in C\left\lbrack {a, b}\right\rbrack \) with \( \parallel \varphi {\parallel }_{\infty } \leq 1 \) we have

\[ {\begin{Vmatrix}{A}_{n}\varphi \end{Vmatrix}}_{\infty } \leq \mathop{\max }\limits_{{a \leq x \leq b}}\mathop{\sum }\limits_{{k = 0}}^{n}\left| {{a}_{k}K\left( {x,{x}_{k}}\right) }\right| , \]

and therefore \( {\begin{Vmatrix}{A}_{n}\end{Vmatrix}}_{\infty } \) is smaller than or equal to the right-hand side of (12.16). Let \( z \in \left\lbrack {a, b}\right\rbrack \) be such that

\[ \mathop{\sum }\limits_{{k = 0}}^{n}\left| {{a}_{k}K\left( {z,{x}_{k}}\right) }\right| = \mathop{\max }\limits_{{a \leq x \leq b}}\mathop{\sum }\limits_{{k = 0}}^{n}\left| {{a}_{k}K\left( {x,{x}_{k}}\right) }\right| , \]

and choose \( \psi \in C\left\lbrack {a, b}\right\rbrack \) with \( \parallel \psi {\parallel }_{\infty } = 1 \) and

\[ {a}_{k}K\left( {z,{x}_{k}}\right) \psi \left( {x}_{k}\right) = \left| {{a}_{k}K\left( {z,{x}_{k}}\right) }\right| ,\;k = 0,\ldots, n. \]

Then

\[ {\begin{Vmatrix}{A}_{n}\end{Vmatrix}}_{\infty } \geq {\begin{Vmatrix}{A}_{n}\psi \end{Vmatrix}}_{\infty } \geq \left| {\left( {{A}_{n}\psi }\right) \left( z\right) }\right| = \mathop{\sum }\limits_{{k = 0}}^{n}\left| {{a}_{k}K\left( {z,{x}_{k}}\right) }\right| , \]

and (12.16) is proven.
Consider the integral equation

\[ \varphi \left( x\right) - \frac{1}{2}{\int }_{0}^{1}\left( {x + 1}\right) {e}^{-{xy}}\varphi \left( y\right) {dy} = {e}^{-x} - \frac{1}{2} + \frac{1}{2}{e}^{-\left( {x + 1}\right) },\;0 \leq x \leq 1, \]

(12.19)
with exact solution \( \varphi \left( x\right) = {e}^{-x} \). For its kernel we have

\[ \mathop{\max }\limits_{{0 \leq x \leq 1}}{\int }_{0}^{1}\frac{1}{2}\left( {x + 1}\right) {e}^{-{xy}}{dy} = \mathop{\sup }\limits_{{0 < x \leq 1}}\frac{x + 1}{2x}\left( {1 - {e}^{-x}}\right) < 1. \]

Therefore, by the Neumann series Theorem 3.48 and the operator norm (12.4), equation (12.19) is uniquely solvable.
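A minimal sketch of the Nyström method applied to equation (12.19), using the composite trapezoidal rule (NumPy and the choice \( n = 64 \) are illustrative assumptions, not from the text); the computed nodal values can be compared with the exact solution \( \varphi(x) = e^{-x} \):

```python
import numpy as np

# Nystroem method: solve the linear system (12.14) for the nodal values
# phi(x_j), then compare with the exact solution phi(x) = exp(-x).
n = 64
x = np.linspace(0.0, 1.0, n + 1)
w = np.full(n + 1, 1.0 / n)          # trapezoidal weights a_k
w[0] = w[-1] = 0.5 / n

K = 0.5 * (x[:, None] + 1.0) * np.exp(-x[:, None] * x[None, :])
f = np.exp(-x) - 0.5 + 0.5 * np.exp(-(x + 1.0))

phi = np.linalg.solve(np.eye(n + 1) - K * w[None, :], f)
err = np.max(np.abs(phi - np.exp(-x)))
print(err)                           # O(h^2): of order 1e-4 for n = 64
```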
Consider the integral equation

\[ \varphi \left( t\right) + \frac{ab}{\pi }{\int }_{0}^{2\pi }\frac{\varphi \left( \tau \right) {d\tau }}{{a}^{2} + {b}^{2} - \left( {{a}^{2} - {b}^{2}}\right) \cos \left( {t + \tau }\right) } = f\left( t\right) ,\;0 \leq t \leq {2\pi }, \]

(12.20)

where \( a \geq b > 0 \).
Any solution \( \varphi \) to the homogeneous form of equation (12.20) clearly must be a \( {2\pi } \)-periodic analytic function, since the kernel is a \( {2\pi } \)-periodic analytic function with respect to the variable \( t \). Hence, we can expand \( \varphi \) into a uniformly convergent Fourier series

\[ \varphi \left( t\right) = \mathop{\sum }\limits_{{n = 0}}^{\infty }{\alpha }_{n}\cos {nt} + \mathop{\sum }\limits_{{n = 1}}^{\infty }{\beta }_{n}\sin {nt}. \]

Inserting this into the homogeneous integral equation and using the integrals (see Problem 12.10)

\[ \frac{ab}{\pi }{\int }_{0}^{2\pi }\frac{{e}^{in\tau }{d\tau }}{\left( {{a}^{2} + {b}^{2}}\right) - \left( {{a}^{2} - {b}^{2}}\right) \cos \left( {t + \tau }\right) } = {\left( \frac{a - b}{a + b}\right) }^{n}{e}^{-{int}} \]

(12.21)

for \( n = 0,1,2,\ldots \), it follows that

\[ {\alpha }_{n}\left\lbrack {1 + {\left( \frac{a - b}{a + b}\right) }^{n}}\right\rbrack = {\beta }_{n}\left\lbrack {1 - {\left( \frac{a - b}{a + b}\right) }^{n}}\right\rbrack = 0 \]

for \( n = 0,1,2,\ldots \). Hence, \( {\alpha }_{n} = {\beta }_{n} = 0 \) for \( n = 0,1,2,\ldots \), and therefore \( \varphi = 0 \). Now the Riesz Theorem 12.2 implies that the integral equation (12.20) is uniquely solvable for each right-hand side \( f \).
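The integrals (12.21) are easy to confirm numerically. The following sketch (NumPy and the parameter values \( a = 2, b = 1, n = 2, t = 0.7 \) are illustrative assumptions) uses the periodic trapezoidal rule, which converges rapidly for analytic periodic integrands:

```python
import numpy as np

# Check (12.21): the integral equals ((a-b)/(a+b))^n * exp(-i*n*t).
a, b, n, t = 2.0, 1.0, 2, 0.7
N = 400
tau = 2.0 * np.pi * np.arange(N) / N          # periodic trapezoidal nodes
integrand = np.exp(1j * n * tau) / (
    (a * a + b * b) - (a * a - b * b) * np.cos(t + tau))
lhs = (a * b / np.pi) * (2.0 * np.pi / N) * np.sum(integrand)
rhs = ((a - b) / (a + b)) ** n * np.exp(-1j * n * t)
print(abs(lhs - rhs))                         # essentially machine precision
```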
Theorem 12.16 Let \( A : C\left\lbrack {a, b}\right\rbrack \rightarrow C\left\lbrack {a, b}\right\rbrack \) be a compact linear operator such that \( I - A \) is injective, and assume that the interpolation operators \( {L}_{n} : C\left\lbrack {a, b}\right\rbrack \rightarrow {X}_{n} \) satisfy \( {\begin{Vmatrix}{L}_{n}A - A\end{Vmatrix}}_{\infty } \rightarrow 0, n \rightarrow \infty \). Then, for sufficiently large \( n \), the approximate equation (12.27) is uniquely solvable for all \( f \in C\left\lbrack {a, b}\right\rbrack \), and we have the error estimate

\[ {\begin{Vmatrix}{\varphi }_{n} - \varphi \end{Vmatrix}}_{\infty } \leq C{\begin{Vmatrix}{L}_{n}\varphi - \varphi \end{Vmatrix}}_{\infty } \]

(12.28)

for some constant \( C \).
Proof. From Theorem 12.6 applied to \( {A}_{n} = {L}_{n}A \), we conclude that for all sufficiently large \( n \) the inverse operators \( {\left( I - {L}_{n}A\right) }^{-1} \) exist and are uniformly bounded. To verify the error bound, we apply the interpolation operator \( {L}_{n} \) to (12.22) and get

\[ \varphi - {L}_{n}{A\varphi } = {L}_{n}f + \varphi - {L}_{n}\varphi . \]

Subtracting this from (12.27) we find

\[ \left( {I - {L}_{n}A}\right) \left( {{\varphi }_{n} - \varphi }\right) = {L}_{n}\varphi - \varphi , \]

whence the estimate (12.28) follows.
Corollary 12.17 Let \( A : C\left\lbrack {a, b}\right\rbrack \rightarrow C\left\lbrack {a, b}\right\rbrack \) be a compact linear operator such that \( I - A \) is injective, and assume that the interpolation operators \( {L}_{n} : C\left\lbrack {a, b}\right\rbrack \rightarrow {X}_{n} \) are pointwise convergent; i.e., \( {L}_{n}\varphi \rightarrow \varphi, n \rightarrow \infty \), for all \( \varphi \in C\left\lbrack {a, b}\right\rbrack \) . Then, for sufficiently large \( n \) , the approximate equation (12.27) is uniquely solvable for all \( f \in C\left\lbrack {a, b}\right\rbrack \), and the estimate (12.28) holds.
Proof. By Lemma 12.9 the pointwise convergence of the interpolation operators \( {L}_{n} \) and the compactness of \( A \) imply that \( {\begin{Vmatrix}{L}_{n}A - A\end{Vmatrix}}_{\infty } \rightarrow 0, n \rightarrow \infty \) . Now the statement follows from the preceding theorem.
Lemma 12.20 Let \( f \in {C}^{1}\left\lbrack {0,{2\pi }}\right\rbrack \). Then for the remainder in trigonometric interpolation we have

\[ {\begin{Vmatrix}{L}_{n}f - f\end{Vmatrix}}_{\infty } \leq {c}_{n}{\begin{Vmatrix}{f}^{\prime }\end{Vmatrix}}_{2} \]

(12.34)

where \( {c}_{n} \rightarrow 0, n \rightarrow \infty \).
Proof. Consider the trigonometric monomials \( {f}_{m}\left( t\right) = {e}^{imt} \) and write \( m = \left( {{2k} + 1}\right) n + q \) with \( k \in \mathbf{Z} \) and \( 0 \leq q < {2n} \). Since \( {f}_{m}\left( {t}_{j}\right) = {f}_{q - n}\left( {t}_{j}\right) \) for \( j = 0,\ldots ,{2n} - 1 \), the trigonometric interpolation polynomials for \( {f}_{m} \) and \( {f}_{q - n} \) coincide. Therefore, we have

\[ {\begin{Vmatrix}{L}_{n}{f}_{m} - {f}_{m}\end{Vmatrix}}_{\infty } \leq 2 \]

for all \( \left| m\right| \geq n \). Since \( f \) is continuously differentiable, we can expand it into a uniformly convergent Fourier series (see Problem 12.14)

\[ f = \mathop{\sum }\limits_{{m = - \infty }}^{\infty }{a}_{m}{f}_{m}. \]

From the relation

\[ {\int }_{0}^{2\pi }{f}^{\prime }\left( t\right) {e}^{-{imt}}{dt} = {im}{\int }_{0}^{2\pi }f\left( t\right) {e}^{-{imt}}{dt} = {2\pi im}{a}_{m} \]

for the Fourier coefficients it follows that

\[ {\begin{Vmatrix}{f}^{\prime }\end{Vmatrix}}_{2}^{2} = {\int }_{0}^{2\pi }{\left| {f}^{\prime }\left( t\right) \right| }^{2}{dt} = {2\pi }\mathop{\sum }\limits_{{m = - \infty }}^{\infty }{m}^{2}{\left| {a}_{m}\right| }^{2}. \]

Using this identity and the Cauchy–Schwarz inequality, we derive

\[ {\begin{Vmatrix}{L}_{n}f - f\end{Vmatrix}}_{\infty }^{2} \leq 4{\left\{ \mathop{\sum }\limits_{{\left| m\right| = n}}^{\infty }\left| {a}_{m}\right| \right\} }^{2} \leq \frac{4}{\pi }{\begin{Vmatrix}{f}^{\prime }\end{Vmatrix}}_{2}^{2}\mathop{\sum }\limits_{{m = n}}^{\infty }\frac{1}{{m}^{2}}. \]

This implies (12.34).
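The convergence asserted in Lemma 12.20 can be observed numerically. The sketch below (all names are our own) evaluates \( L_n f \) through the Lagrange basis (12.35) with nodes \( t_j = \pi j/n \), for the smooth \( 2\pi \)-periodic function \( f(t) = e^{\sin t} \), and checks that the uniform error decreases with \( n \):

```python
import numpy as np

def trig_interp(f, n, t):
    # Evaluate the trigonometric interpolation polynomial L_n f at the
    # points t, using the Lagrange basis (12.35) with nodes t_j = pi*j/n.
    tj = np.pi * np.arange(2 * n) / n
    vals = np.zeros_like(np.atleast_1d(t), dtype=float)
    for tk in tj:
        d = np.atleast_1d(t) - tk
        ell = (1.0 + 2.0 * sum(np.cos(m * d) for m in range(1, n))
               + np.cos(n * d)) / (2.0 * n)
        vals += f(tk) * ell
    return vals

f = lambda t: np.exp(np.sin(t))      # smooth 2*pi-periodic test function
tt = np.linspace(0.0, 2.0 * np.pi, 1000)
errs = [np.max(np.abs(trig_interp(f, n, tt) - f(tt))) for n in (4, 8, 16)]
assert errs[0] > errs[1] > errs[2]   # uniform error decreases with n
```

For this analytic function the decay is in fact exponential, much faster than the \( O(c_n) \) rate guaranteed for general \( C^1 \) functions.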
Theorem 12.21 The collocation method with trigonometric polynomials converges for integral equations of the second kind with continuously differentiable periodic kernels and right-hand sides.
One possibility for the implementation of the collocation method is to use the trigonometric monomials as basis functions. Then the integrals \( {\int }_{0}^{2\pi }K\left( {{t}_{j},\tau }\right) {e}^{ik\tau }{d\tau } \) have to be integrated numerically. Replacing the kernel by its trigonometric interpolation leads to the quadrature formula

\[ {\int }_{0}^{2\pi }K\left( {{t}_{j},\tau }\right) {e}^{ik\tau }{d\tau } \approx \frac{\pi }{n}\mathop{\sum }\limits_{{m = 0}}^{{{2n} - 1}}K\left( {{t}_{j},{t}_{m}}\right) {e}^{{ik}{t}_{m}} \]

for \( j = 0,\ldots ,{2n} - 1 \). Using fast Fourier transform techniques (see Section 8.2) these quadratures can be carried out very rapidly. A second, even more efficient, possibility is to use the Lagrange basis

\[ {\ell }_{k}\left( t\right) = \frac{1}{2n}\left\{ {1 + 2\mathop{\sum }\limits_{{m = 1}}^{{n - 1}}\cos m\left( {t - {t}_{k}}\right) + \cos n\left( {t - {t}_{k}}\right) }\right\} \]

(12.35)

for \( k = 0,\ldots ,{2n} - 1 \), which can be derived from Theorem 8.25 (see Problem 12.13).

For the evaluation of the matrix coefficients \( {\int }_{0}^{2\pi }K\left( {{t}_{j},\tau }\right) {\ell }_{k}\left( \tau \right) {d\tau } \) we proceed analogously to the preceding case of linear splines. We approximate these integrals by replacing \( K\left( {{t}_{j}, \cdot }\right) \) by its trigonometric interpolation polynomial, i.e., we approximate

\[ {\int }_{0}^{2\pi }K\left( {{t}_{j},\tau }\right) {\ell }_{k}\left( \tau \right) {d\tau } \approx \mathop{\sum }\limits_{{m = 0}}^{{{2n} - 1}}K\left( {{t}_{j},{t}_{m}}\right) {\int }_{0}^{2\pi }{\ell }_{m}\left( \tau \right) {\ell }_{k}\left( \tau \right) {d\tau } \]

for \( j, k = 0,\ldots ,{2n} - 1 \). Using (12.35), elementary integrations yield (see Problem 12.13)

\[ {\int }_{0}^{2\pi }{\ell }_{m}\left( \tau \right) {\ell }_{k}\left( \tau \right) {d\tau } = \frac{\pi }{n}{\delta }_{mk} - {\left( -1\right) }^{m - k}\frac{\pi }{4{n}^{2}} \]

(12.36)

for \( m, k = 0,\ldots ,{2n} - 1 \). Note that despite the global nature of the trigonometric interpolation and its Lagrange basis, due to the simple structure of the weights (12.36) in the quadrature rule, the computation of the matrix elements is not too costly. The only additional computational effort besides the kernel evaluation is the computation of the row sums

\[ \mathop{\sum }\limits_{{m = 0}}^{{{2n} - 1}}{\left( -1\right) }^{m}K\left( {{t}_{j},{t}_{m}}\right) \]

for \( j = 0,\ldots ,{2n} - 1 \). We omit the analysis of the additional error in the fully discrete method caused by the numerical quadrature.
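The weight identity (12.36) is easy to confirm numerically. In the sketch below (illustrative names), the products \( \ell_m \ell_k \) are trigonometric polynomials of degree at most \( 2n \), so the rectangle rule with sufficiently many points integrates them exactly up to roundoff:

```python
import numpy as np

def lagrange_basis(n, k, t):
    # Lagrange basis (12.35) for trigonometric interpolation, t_j = pi*j/n.
    d = t - np.pi * k / n
    return (1.0 + 2.0 * sum(np.cos(j * d) for j in range(1, n))
            + np.cos(n * d)) / (2.0 * n)

n, M = 4, 4096
t = np.linspace(0.0, 2.0 * np.pi, M, endpoint=False)
w = 2.0 * np.pi / M   # rectangle rule: exact for trig polys of degree < M
for m in range(2 * n):
    for k in range(2 * n):
        num = w * np.sum(lagrange_basis(n, m, t) * lagrange_basis(n, k, t))
        ref = (np.pi / n) * (m == k) - (-1.0) ** (m - k) * np.pi / (4.0 * n * n)
        assert abs(num - ref) < 1e-12
```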
For the integral equation (12.20) from Example 12.15, Table 12.5 gives the error between the exact solution and the collocation approximation.
TABLE 12.5. Collocation method for equation (12.20)

<table><thead><tr><th></th><th>\( n \)</th><th>\( t = 0 \)</th><th>\( t = \pi /2 \)</th><th>\( t = \pi \)</th></tr></thead><tr><td></td><td>4</td><td>-0.10752855</td><td>-0.03243176</td><td>0.03961310</td></tr><tr><td>\( a = 1 \)</td><td>8</td><td>-0.00231537</td><td>0.00059809</td><td>0.00045961</td></tr><tr><td>\( b = {0.5} \)</td><td>16</td><td>-0.00000044</td><td>0.00000002</td><td>-0.00000000</td></tr><tr><td></td><td>4</td><td>-0.56984945</td><td>-0.18357135</td><td>0.06022598</td></tr><tr><td>\( a = 1 \)</td><td>8</td><td>-0.14414257</td><td>-0.00368787</td><td>-0.00571394</td></tr><tr><td>\( b = {0.2} \)</td><td>16</td><td>-0.00602543</td><td>-0.00035953</td><td>-0.00045408</td></tr><tr><td></td><td>32</td><td>-0.00000919</td><td>-0.00000055</td><td>-0.00000069</td></tr></table>
Theorem 12.23 For the Nyström method the condition numbers for the linear system are uniformly bounded.
This theorem states that the Nyström method essentially preserves the stability of the original integral equation.
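This stability can be observed numerically. As an illustration — assuming equation (12.20) has the second-kind form \( \varphi + A\varphi = f \) with the kernel from Example 12.15 (an assumption about its concrete form, consistent with the homogeneous analysis above), and with illustrative names throughout — the sketch below sets up the Nyström systems with the composite trapezoidal rule and checks that their condition numbers stay bounded:

```python
import numpy as np

def nystrom_matrix(n, a=1.0, b=0.5):
    # Nystrom discretization of phi(t) + int_0^{2pi} k(t,s) phi(s) ds = f(t)
    # with the kernel of Example 12.15, using the composite trapezoidal rule
    # at the equidistant points t_j = pi*j/n (all weights equal pi/n).
    t = np.pi * np.arange(2 * n) / n
    T, S = np.meshgrid(t, t, indexing="ij")
    k = (a * b / np.pi) / ((a * a + b * b) - (a * a - b * b) * np.cos(T + S))
    return np.eye(2 * n) + (np.pi / n) * k

# the condition numbers of the linear systems stay uniformly bounded
conds = [np.linalg.cond(nystrom_matrix(n)) for n in (4, 8, 16, 32)]
assert max(conds) < 10.0

# solving with f(t) = 1 + cos(t): by (12.21) with r = (a-b)/(a+b) = 1/3,
# the exact solution is phi(t) = 1/2 + (3/4) cos(t)
n = 16
t = np.pi * np.arange(2 * n) / n
phi = np.linalg.solve(nystrom_matrix(n), 1.0 + np.cos(t))
assert np.max(np.abs(phi - (0.5 + 0.75 * np.cos(t)))) < 1e-8
```

The trapezoidal rule converges exponentially for this analytic periodic kernel, which is why the \( n = 16 \) solution is already accurate to roundoff level.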
Theorem 12.25 Let \( X \) and \( Y \) be normed spaces and let \( A : X \rightarrow Y \) be a compact linear operator. If \( A \) has a bounded inverse \( {A}^{-1} : Y \rightarrow X \), then \( X \) is finite-dimensional.
Proof. Assume that \( A \) has a bounded inverse \( {A}^{-1} : Y \rightarrow X \) . Then we have \( {A}^{-1}A = I \), and therefore the identity operator must be compact, since the product of a bounded and a compact operator is compact (see Problem 12.2). However, the identity operator on \( X \) is compact if and only if \( X \) has finite dimension.
\[ {y}^{\prime } = - {2y} \]
Here \( D = {\mathbb{R}}^{2} \). Using the procedure in (5) one obtains

\[ \frac{dy}{y} = - {2dx} \Leftrightarrow \ln \left| y\right| = - {2x} + C \Leftrightarrow \left| y\right| = {\mathrm{e}}^{C - {2x}}. \]

The general solution (with \( \pm {\mathrm{e}}^{C} \) replaced with \( C \)) is

\[ y\left( {x;C}\right) = C{\mathrm{e}}^{-{2x}}\;\left( {C \in \mathbb{R}}\right) . \]

The proof that every solution is of this form is elementary: If \( \phi \left( x\right) \) is any solution of the differential equation, then

\[ {\left( \phi {\mathrm{e}}^{2x}\right) }^{\prime } = {\phi }^{\prime }{\mathrm{e}}^{2x} + {2\phi }{\mathrm{e}}^{2x} = 0, \]

i.e., \( \phi {\mathrm{e}}^{2x} \) is a constant. (One could also appeal to the uniqueness statement proved in VII.) It follows that exactly one solution passes through each point \( \left( {\xi ,\eta }\right) \), namely,

\[ y\left( {x;\eta {\mathrm{e}}^{2\xi }}\right) = \eta {\mathrm{e}}^{2\left( {\xi - x}\right) }. \]

Thus we have shown that the initial value problem is uniquely solvable, with a solution that exists in \( \mathbb{R} \).
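A quick numerical cross-check of this formula (a sketch; `rk4` is our own helper): integrating the initial value problem through \( (\xi ,\eta ) \) with the classical fourth-order Runge–Kutta scheme reproduces \( \eta {\mathrm{e}}^{2\left( {\xi - x}\right) } \):

```python
import math

def rk4(f, x0, y0, x1, steps=200):
    # classical fourth-order Runge-Kutta integration of y' = f(x, y)
    h = (x1 - x0) / steps
    x, y = x0, y0
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

xi, eta = 0.5, 2.0                              # initial point (xi, eta)
exact = lambda x: eta * math.exp(2.0 * (xi - x))
y_num = rk4(lambda x, y: -2.0 * y, xi, eta, 2.0)
assert abs(y_num - exact(2.0)) < 1e-8
```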
\[ {y}^{\prime } = \sqrt{\left| y\right| } \]
Again \( D = {\mathbb{R}}^{2} \). Since the direction field is symmetric, it follows that if \( y\left( x\right) \) is a solution, then \( z\left( x\right) = - y\left( {-x}\right) \) is also a solution. Indeed, we have

\[ {z}^{\prime }\left( x\right) = {y}^{\prime }\left( {-x}\right) = \sqrt{\left| y\left( -x\right) \right| } = \sqrt{\left| z\left( x\right) \right| }. \]

Thus it is sufficient to consider only positive solutions. From (5) it follows that

\[ \int \frac{dy}{\sqrt{y}} = 2\sqrt{y} = x + C, \]

hence

\[ y\left( {x;C}\right) = \frac{{\left( x + C\right) }^{2}}{4}\;\text{ in }\;\left( {-C,\infty }\right) \;\left( {C \in \mathbb{R}}\right) \]

(note that \( \sqrt{y} \) is positive, whence \( x > - C \), and that for \( x < - C \) this formula does not give a solution to the differential equation). This function gives all of the positive solutions (this also follows from the uniqueness statement in VII).
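The formula can be verified directly (a sketch with illustrative names): for fixed \( C \) the function \( {\left( x + C\right) }^{2}/4 \) satisfies the equation for \( x > -C \), while \( y \equiv 0 \) is a second solution through \( \left( {-C,0}\right) \), so uniqueness fails exactly at points where \( y = 0 \):

```python
import math

C = 1.5
y = lambda x: (x + C) ** 2 / 4.0     # candidate solution, valid for x > -C

# check y' = sqrt(|y|) at sample points via a centered difference
# (exact up to roundoff here, since y is a quadratic polynomial)
for x in (-1.0, 0.0, 2.0, 5.0):
    h = 1e-6
    dy = (y(x + h) - y(x - h)) / (2.0 * h)
    assert abs(dy - math.sqrt(abs(y(x)))) < 1e-6

# y == 0 also solves the equation, and both solutions pass through (-C, 0),
# so the initial value problem with eta = 0 is not uniquely solvable
assert y(-C) == 0.0
```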
\[ {y}^{\prime } = - x\left( {\operatorname{sgn}y}\right) \sqrt{\left| y\right| } = \left\{ \begin{array}{lll} - x\sqrt{y} & \text{ for } & y \geq 0, \\ x\sqrt{-y} & \text{ for } & y < 0. \end{array}\right. \]
The direction field is symmetric to the \( x \)-axis; i.e., if \( y\left( x\right) \) is a solution, then so is \( - y\left( x\right) \). Thus it is sufficient to calculate the positive solutions. From

\[ \int \frac{dy}{\sqrt{y}} = 2\sqrt{y} = - \int {x\,dx} = \frac{1}{2}\left( {C - {x}^{2}}\right) \]

it follows that

\[ y\left( {x;C}\right) = \frac{1}{16}{\left( C - {x}^{2}\right) }^{2}\;\text{ in }\;\left( {-\sqrt{C},\sqrt{C}}\right) \;\left( {C > 0}\right) \]

(note that \( \sqrt{y} > 0 \)). If this function is extended by setting \( y\left( {x;C}\right) = 0 \) for \( \left| x\right| \geq \sqrt{C} \), then one clearly has a solution defined in \( \mathbb{R} \). Thus we have the solutions \( \pm y\left( {x;C}\right) \) for \( C > 0 \) and \( y \equiv 0 \). There are no other solutions: on the one hand, they (that is, their graphs) cover the whole plane; on the other hand, \( g\left( y\right) = \sqrt{\left| y\right| } \) vanishes only for \( y = 0 \). Thus each initial value problem with \( \eta \neq 0 \) is locally uniquely solvable.
\[ {y}^{\prime } = {\mathrm{e}}^{y}\sin x. \]
The direction field is symmetric with respect to the \( y \)-axis and periodic in \( x \) of period \( {2\pi } \); i.e., if \( y\left( x\right) \) is a solution, then so are \( u\left( x\right) = y\left( {-x}\right) \) and \( v\left( x\right) = y\left( {x + {2k\pi }}\right) \). By separation of variables (7) one obtains

\[ \int {\mathrm{e}}^{-y}{dy} = - {\mathrm{e}}^{-y} = \int \sin {x\,dx} = - \cos x - C; \]

i.e.,

\[ y\left( {x;C}\right) = - \log \left( {\cos x + C}\right) \;\left( {C + \cos x > 0}\right) . \]
Theorem 1. Let \( \mathop{\lim }\limits_{{t \rightarrow \infty }}B\left( t\right) = \infty \). If \( u \) is a positive solution, then

\[ \mathop{\lim }\limits_{{t \rightarrow \infty }}u\left( t\right) = \mathop{\lim }\limits_{{t \rightarrow \infty }}\frac{b\left( t\right) }{c\left( t\right) }, \]

provided that the limit on the right side exists.
Proof. This theorem is a substantial generalization of 1.XIII.(a). It can be proved by writing \( y = 1/u \) as the quotient \( Z\left( t\right) /N\left( t\right) \) with \( N\left( t\right) = {\mathrm{e}}^{B\left( t\right) } \). The result then follows using l'Hospital's rule; since both \( B\left( t\right) \) and \( N\left( t\right) \) tend to \( \infty \), the rule applies. One gets \( {Z}^{\prime }\left( t\right) /{N}^{\prime }\left( t\right) = c\left( t\right) /b\left( t\right) \), so \( y\left( t\right) \rightarrow \lim c/b \), and hence \( u = 1/y \rightarrow \lim b/c \), which gives the conclusion.
Theorem 2. If the coefficients \( b \) and \( c \) are \( T \) -periodic, then there exists exactly one positive \( T \) -periodic solution of (14).
Proof. It is sufficient to show that there is exactly one solution with \( u\left( 0\right) = u\left( T\right) > 0 \). Under this assumption \( v\left( t\right) \mathrel{\text{:=}} u\left( {t + T}\right) \) is a solution of (14) with \( v\left( 0\right) = u\left( 0\right) \). Then \( y = 1/u \) and \( z = 1/v \) both satisfy the same linear differential equation and have the same initial values. It follows that \( y = z \) and hence \( u = v \); i.e., \( u \) is \( T \)-periodic. If we set \( \tau = 0 \) in (15), then the relation \( u\left( 0\right) = u\left( T\right) \) leads to the equation

\[ {y}_{0}\left( {{\mathrm{e}}^{B\left( T\right) } - 1}\right) = {\int }_{0}^{T}c\left( s\right) {\mathrm{e}}^{B\left( s\right) }{ds} > 0, \]

which can be solved uniquely for \( {y}_{0} \) because \( {\mathrm{e}}^{B\left( T\right) } > 1 \).
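The periodicity condition can be tested numerically. The sketch below assumes (15) has the standard linear form \( y' = -b(t)y + c(t) \) with \( B(t) = \int_0^t b(s)\,ds \) (an assumption about the notation, consistent with the displayed equation), computes \( y_0 \) from the relation above, and checks that the resulting solution is positive and returns to its initial value after one period:

```python
import numpy as np

# Assumed form of (15): y' = -b(t) y + c(t), with B(t) = int_0^t b(s) ds,
# so that (16) reads y(t) = e^{-B(t)} (y_0 + int_0^t c(s) e^{B(s)} ds).
b = lambda s: 2.0 + np.sin(s)        # T-periodic coefficients, T = 2*pi
c = lambda s: 1.0 + 0.5 * np.cos(s)
T, m = 2.0 * np.pi, 20000
t = np.linspace(0.0, T, m + 1)
h = t[1] - t[0]

# cumulative trapezoidal rule for B(t) and for int_0^t c(s) e^{B(s)} ds
B = np.concatenate(([0.0], np.cumsum((b(t[1:]) + b(t[:-1])) / 2.0 * h)))
g = c(t) * np.exp(B)
G = np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) / 2.0 * h)))

y0 = G[-1] / np.expm1(B[-1])         # the unique periodic initial value
y = np.exp(-B) * (y0 + G)
assert abs(y[-1] - y[0]) < 1e-8 and y.min() > 0.0

# spot-check that y actually solves the assumed equation (15)
i = m // 2
dy = (y[i + 1] - y[i - 1]) / (2.0 * h)
assert abs(dy - (-b(t[i]) * y[i] + c(t[i]))) < 1e-4
```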
Theorem 3. Let the coefficients \( b, c \) be positively bounded. Then equation (13) has exactly one positively bounded solution \( {u}^{ * } \) on \( \mathbb{R} \) ; and if \( u \) is any positive solution, then \( u\left( t\right) - {u}^{ * }\left( t\right) \rightarrow 0 \) as \( t \rightarrow \infty \) .
Proof. Let \( \alpha ,\beta ,\gamma ,\delta \) be positive constants with \( \alpha < b < \beta ,\gamma < c/b < \delta \) in \( \mathbb{R} \). The first set of these inequalities leads to the estimates

\[ {\alpha t} < B\left( t\right) < {\beta t}\;\text{ for }t > 0,\;\;{\alpha t} > B\left( t\right) > {\beta t}\;\text{ for }t < 0; \]

and the second leads to

\[ I\left( t\right) \mathrel{\text{:=}} {\int }_{-\infty }^{t}c\left( s\right) {\mathrm{e}}^{B\left( s\right) }{ds} < \delta {\int }_{-\infty }^{t}b\left( s\right) {\mathrm{e}}^{B\left( s\right) }{ds} = {\left. \delta {\mathrm{e}}^{B\left( s\right) }\right| }_{-\infty }^{t} = \delta {\mathrm{e}}^{B\left( t\right) }, \]

and similarly \( I\left( t\right) > \gamma {\mathrm{e}}^{B\left( t\right) } \).

We have to show that the linear equation (15) for \( y = 1/u \) has one and only one positively bounded solution. Let \( {y}^{ * } \) be the solution (16) with \( {y}_{0} = I\left( 0\right) \) and \( \tau = 0 \), that is,

\[ {y}^{ * }\left( t\right) = {\mathrm{e}}^{-B\left( t\right) }{\int }_{-\infty }^{t}c\left( s\right) {\mathrm{e}}^{B\left( s\right) }{ds}. \]

(This, by the way, is the smallest positive solution that exists in all of \( \mathbb{R} \); cf. (a).) From the previous estimates it follows that \( \gamma < {y}^{ * } < \delta \). Since the solution \( z\left( t\right) = {\mathrm{e}}^{-B\left( t\right) } \) of the homogeneous equation is unbounded and all solutions of the nonhomogeneous equation are given by \( y = {y}^{ * } + {\lambda z} \), it follows that \( {y}^{ * } \) is the only positively bounded solution.
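The bounds \( \gamma < y^* < \delta \) can be illustrated numerically (a sketch; all names are our own, and the improper integral is truncated at \( s = -L \), which is harmless because \( e^{B(s)} \) decays like \( e^{\alpha s} \) as \( s \rightarrow -\infty \)):

```python
import numpy as np

# Assumed linear form (15): y' = -b(t) y + c(t); the bounded solution is
# y*(t) = e^{-B(t)} * int_{-inf}^t c(s) e^{B(s)} ds, truncated at s = -L.
b = lambda s: 2.0 + np.sin(s)
c = lambda s: b(s) * (1.0 + 0.3 * np.cos(s))   # gamma = 0.7 < c/b < 1.3 = delta
L, m = 30.0, 60000

for tt in (-2.0, 0.0, 3.0):
    s = np.linspace(-L, tt, m + 1)
    h = s[1] - s[0]
    # cumulative trapezoidal rule for B, normalized so that B(tt) = 0
    B = np.concatenate(([0.0], np.cumsum((b(s[1:]) + b(s[:-1])) / 2.0 * h)))
    B -= B[-1]
    g = c(s) * np.exp(B)
    ystar = h * (g[0] / 2.0 + g[1:-1].sum() + g[-1] / 2.0)
    assert 0.7 < ystar < 1.3                   # gamma < y*(tt) < delta
```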