| Q | A | Result |
|---|---|---|
Corollary 9.27 Let \( f : \mathbb{R} \rightarrow \mathbb{R} \) be \( \left( {{2m} + 1}\right) \) -times continuously differentiable and \( {2\pi } \) -periodic for \( m \in \mathbb{N} \) and let \( n \in \mathbb{N} \) . Then for the error of the rectangular rule we have\n\n\[ \left| {{E}_{n}\left( f\right) }\right| \leq \frac{C}{{n}^{{2m} + 1}}{\int }_{0}^{2\pi }\left| {{f}^{\left( 2m + 1\right) }\left( x\right) }\right| {dx} \]\n\nwhere\n\n\[ C \mathrel{\text{:=}} 2\mathop{\sum }\limits_{{k = 1}}^{\infty }\frac{1}{{k}^{{2m} + 1}}. \]
|
Proof. From Theorem 9.26 we have that\n\n\[ {E}_{n}\left( f\right) = - {\left( \frac{2\pi }{n}\right) }^{{2m} + 1}{\int }_{0}^{2\pi }{\widetilde{B}}_{{2m} + 1}\left( \frac{2\pi x}{n}\right) {f}^{\left( 2m + 1\right) }\left( x\right) {dx} \]\n\nand the estimate follows from the inequality\n\n\[ \left| {{\widetilde{B}}_{{2m} + 1}\left( x\right) }\right| \leq 2\mathop{\sum }\limits_{{k = 1}}^{\infty }\frac{1}{{\left( 2\pi k\right) }^{{2m} + 1}},\;x \in \mathbb{R}, \]\n\nwhich is a consequence of (9.26).
|
Yes
|
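The bound of Corollary 9.27 can be checked numerically. The following Python sketch (our own test case, not from the text) uses \( f(x) = e^{\sin x} \) with \( m = 1 \), so that \( C = 2\sum_{k \geq 1} k^{-3} \); the third derivative \( f''' = -e^{\sin x}\cos x \sin x\,(\sin x + 3) \) was worked out by hand:

```python
import math

def rect_rule(f, n):
    # composite rectangular rule with n equidistant nodes on [0, 2*pi]
    h = 2.0 * math.pi / n
    return h * sum(f(k * h) for k in range(n))

f = lambda x: math.exp(math.sin(x))
# third derivative of exp(sin x), computed by hand
f3 = lambda x: -math.exp(math.sin(x)) * math.cos(x) * math.sin(x) * (math.sin(x) + 3.0)

C = 2.0 * sum(1.0 / k**3 for k in range(1, 10**5))     # C for m = 1
exact = rect_rule(f, 4096)                             # fine-grid reference value
int_abs_f3 = rect_rule(lambda x: abs(f3(x)), 4096)     # integral of |f'''|

errs = [abs(rect_rule(f, n) - exact) for n in (4, 8, 16)]
bounds = [C / n**3 * int_abs_f3 for n in (4, 8, 16)]
```

All three errors stay well below the bound; for this particular function the bound is far from sharp, since \( e^{\sin x} \) is even analytic.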
Theorem 9.28 Let \( f : \mathbb{R} \rightarrow \mathbb{R} \) be analytic and \( {2\pi } \) -periodic. Then there exists a strip \( D = \mathbb{R} \times \left( {-a, a}\right) \subset \mathbb{C} \) with \( a > 0 \) such that \( f \) can be extended to a holomorphic and \( {2\pi } \) -periodic bounded function \( f : D \rightarrow \mathbb{C} \) . The error for the rectangular rule can be estimated by\n\n\[ \left| {{E}_{n}\left( f\right) }\right| \leq \frac{4\pi M}{{e}^{na} - 1} \]\n\nwhere \( M \) denotes a bound for the holomorphic function \( f \) on \( D \) .
|
Proof. Since \( f : \mathbb{R} \rightarrow \mathbb{R} \) is analytic, at each point \( x \in \mathbb{R} \) the Taylor expansion provides a holomorphic extension of \( f \) into some open disk in the complex plane with radius \( r\left( x\right) > 0 \) and center \( x \) . The extended function again has period \( {2\pi } \), since the coefficients of the Taylor series at \( x \) and at \( x + {2\pi } \) coincide for the \( {2\pi } \) -periodic function \( f : \mathbb{R} \rightarrow \mathbb{R} \) . The disks corresponding to all points of the interval \( \left\lbrack {0,{2\pi }}\right\rbrack \) provide an open covering of \( \left\lbrack {0,{2\pi }}\right\rbrack \) . Since \( \left\lbrack {0,{2\pi }}\right\rbrack \) is compact, a finite number of these disks suffices to cover \( \left\lbrack {0,{2\pi }}\right\rbrack \) . Then we have an extension into a strip \( D \) with finite width \( {2a} \) contained in the union of the finite number of disks. Without loss of generality we may assume that \( f \) is bounded on \( D \) .\n\nFrom the residue theorem we have that\n\n\[ {\int }_{i\alpha }^{{i\alpha } + {2\pi }}\cot \frac{nz}{2}f\left( z\right) {dz} - {\int }_{-{i\alpha }}^{-{i\alpha } + {2\pi }}\cot \frac{nz}{2}f\left( z\right) {dz} = - \frac{4\pi i}{n}\mathop{\sum }\limits_{{k = 1}}^{n}f\left( \frac{2\pi k}{n}\right) \]\n\nfor each \( 0 < \alpha < a \) . This implies that\n\n\[ \operatorname{Re}{\int }_{i\alpha }^{{i\alpha } + {2\pi }}i\cot \frac{nz}{2}f\left( z\right) {dz} = \frac{2\pi }{n}\mathop{\sum }\limits_{{k = 1}}^{n}f\left( \frac{2\pi k}{n}\right) ,\]\n\nsince by the Schwarz reflection principle, \( f \) enjoys the symmetry property \( f\left( \bar{z}\right) = \overline{f\left( z\right) } \) . 
By Cauchy’s integral theorem we have\n\n\[ \operatorname{Re}{\int }_{i\alpha }^{{i\alpha } + {2\pi }}f\left( z\right) {dz} = {\int }_{0}^{2\pi }f\left( x\right) {dx} \]\n\nand combining the last two equations yields\n\n\[ {E}_{n}\left( f\right) = \operatorname{Re}{\int }_{i\alpha }^{{i\alpha } + {2\pi }}\left( {1 - i\cot \frac{nz}{2}}\right) f\left( z\right) {dz} \]\nfor all \( 0 < \alpha < a \) . Now the estimate follows from\n\n\[ \left| {1 - i\cot \frac{nz}{2}}\right| \leq \frac{2}{{e}^{n\alpha } - 1} \]\n\nfor \( \operatorname{Im}z = \alpha \) and then passing to the limit \( \alpha \rightarrow a \) .
|
Yes
|
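The exponential rate of Theorem 9.28 is easy to observe. The sketch below (our own example) takes \( f(x) = 1/(2+\cos x) \), which extends holomorphically to the strip of half-width \( a = \log(2+\sqrt{3}) \), and uses the classical closed form \( \int_0^{2\pi} dx/(2+\cos x) = 2\pi/\sqrt{3} \):

```python
import math

def rect_rule(f, n):
    # composite rectangular rule with n equidistant nodes on [0, 2*pi]
    h = 2.0 * math.pi / n
    return h * sum(f(k * h) for k in range(n))

f = lambda x: 1.0 / (2.0 + math.cos(x))
exact = 2.0 * math.pi / math.sqrt(3.0)      # classical closed form

errs = [abs(rect_rule(f, n) - exact) for n in (4, 8, 16, 32)]
```

Each doubling of \( n \) roughly squares the error, as predicted by the bound \( 4\pi M / (e^{na} - 1) \); at \( n = 32 \) the error is already at rounding level.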
Theorem 9.29 Let \( f : \left\lbrack {a, b}\right\rbrack \rightarrow \mathbb{R} \) be \( {2m} \) -times continuously differentiable. Then for the Romberg quadratures we have the error estimate\n\n\[ \n\left| {{\int }_{a}^{b}f\left( x\right) {dx} - {T}_{k}^{m}\left( f\right) }\right| \leq {C}_{m}{\begin{Vmatrix}{f}^{\left( 2m\right) }\end{Vmatrix}}_{\infty }{\left( \frac{h}{{2}^{k}}\right) }^{2m},\;k = 0,1,\ldots ,\n\]\n\nfor some constant \( {C}_{m} \) depending on \( m \) .
|
Proof. By induction, we show that there exist constants \( {\gamma }_{j, i} \) such that\n\n\[ \n\left| {{\int }_{a}^{b}f\left( x\right) {dx} - {T}_{k}^{i}\left( f\right) - \mathop{\sum }\limits_{{j = i}}^{{m - 1}}{\gamma }_{j, i}\left\lbrack {{f}^{\left( 2j - 1\right) }\left( b\right) - {f}^{\left( 2j - 1\right) }\left( a\right) }\right\rbrack {\left( \frac{h}{{2}^{k}}\right) }^{2j}}\right| \n\]\n\n(9.29)\n\n\[ \n\leq {\gamma }_{m, i}{\begin{Vmatrix}{f}^{\left( 2m\right) }\end{Vmatrix}}_{\infty }{\left( \frac{h}{{2}^{k}}\right) }^{2m} \n\]\n\nfor \( i = 1,\ldots, m \) and \( k = 0,1,\ldots \) . Here the sum on the left-hand side is set equal to zero for \( i = m \) . By the Euler-Maclaurin expansion this is true for \( i = 1 \) with \( {\gamma }_{j,1} = {b}_{2j}/\left( {2j}\right) \) ! for \( j = 1,\ldots, m - 1 \) and\n\n\[ \n{\gamma }_{m,1} = \left( {b - a}\right) \mathop{\sup }\limits_{{x \in \left\lbrack {0,1}\right\rbrack }}\left| {{B}_{2m}\left( x\right) }\right| . \n\]\n\nAs an abbreviation we set\n\n\[ \n{F}_{j} \mathrel{\text{:=}} {f}^{\left( 2j - 1\right) }\left( b\right) - {f}^{\left( 2j - 1\right) }\left( a\right) ,\;j = 1,\ldots, m - 1. \n\]\n\nAssume that (9.29) has been shown for some \( 1 \leq i < m \) . 
Then, using (9.28), we obtain\n\n\[ \n\frac{{4}^{i}}{{4}^{i} - 1}\left\lbrack {{\int }_{a}^{b}f\left( x\right) {dx} - {T}_{k + 1}^{i}\left( f\right) - \mathop{\sum }\limits_{{j = i}}^{{m - 1}}{\left( \frac{h}{{2}^{k + 1}}\right) }^{2j}{\gamma }_{j, i}{F}_{j}}\right\rbrack \n\]\n\n\[ \n- \frac{1}{{4}^{i} - 1}\left\lbrack {{\int }_{a}^{b}f\left( x\right) {dx} - {T}_{k}^{i}\left( f\right) - \mathop{\sum }\limits_{{j = i}}^{{m - 1}}{\left( \frac{h}{{2}^{k}}\right) }^{2j}{\gamma }_{j, i}{F}_{j}}\right\rbrack \n\]\n\n\[ \n= {\int }_{a}^{b}f\left( x\right) {dx} - {T}_{k}^{i + 1}\left( f\right) - \mathop{\sum }\limits_{{j = i + 1}}^{{m - 1}}{\left( \frac{h}{{2}^{k}}\right) }^{2j}{\gamma }_{j, i + 1}{F}_{j}, \n\]\n\nwhere\n\n\[ \n{\gamma }_{j, i + 1} = \frac{{4}^{i - j} - 1}{{4}^{i} - 1}{\gamma }_{j, i},\;j = i + 1,\ldots, m - 1. \n\]\n\nNow with the aid of the induction assumption we can estimate\n\n\[ \n\left| {{\int }_{a}^{b}f\left( x\right) {dx} - {T}_{k}^{i + 1}\left( f\right) - \mathop{\sum }\limits_{{j = i + 1}}^{{m - 1}}{\gamma }_{j, i + 1}{\left( \frac{h}{{2}^{k}}\right) }^{2j}{F}_{j}}\right| \n\]\n\n\[ \n\leq {\gamma }_{m, i + 1}{\begin{Vmatrix}{f}^{\left( 2m\right) }\end{Vmatrix}}_{\infty }{\left( \frac{h}{{2}^{k}}\right) }^{2m} \n\]\n\nwhere\n\[ \n{\gamma }_{m, i + 1} = \frac{{4}^{i - m} + 1}{{4}^{i} - 1}{\gamma }_{m, i} \n\]\n\nand the proof is complete.
|
Yes
|
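A compact implementation of the Romberg scheme (a sketch; zero-based indices, with `T[k][0]` the trapezoidal rule on \( 2^k \) subintervals and the Richardson recursion applied column by column; the test integral \( \int_0^1 e^x\,dx = e - 1 \) is our own choice):

```python
import math

def romberg(f, a, b, kmax):
    """Romberg table T[k][m]; T[k][0] is the trapezoidal rule with 2**k
    subintervals, T[k][m] = (4**m * T[k][m-1] - T[k-1][m-1]) / (4**m - 1)."""
    T = [[0.0] * (kmax + 1) for _ in range(kmax + 1)]
    for k in range(kmax + 1):
        n = 2**k
        h = (b - a) / n
        T[k][0] = h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))
        for m in range(1, k + 1):
            T[k][m] = (4**m * T[k][m - 1] - T[k - 1][m - 1]) / (4**m - 1)
    return T

T = romberg(math.exp, 0.0, 1.0, 4)
err = abs(T[4][4] - (math.e - 1.0))
```

Using only the 17 nodes of the finest grid, the extrapolated value `T[4][4]` is accurate to roughly machine-precision level, while the underlying trapezoidal value `T[4][0]` is only accurate to about \( 5 \cdot 10^{-4} \).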
Theorem 9.30 The quadrature weights of the Romberg formulae are positive.
|
Proof. We define recursively \( {Q}_{k}^{1} \mathrel{\text{:=}} 4{T}_{k + 1}^{1} - 2{T}_{k}^{1} \) and\n\n\[ {Q}_{k}^{m + 1} \mathrel{\text{:=}} \frac{1}{{4}^{m} - 1}\left\lbrack {{2}^{{2m} + 1}{T}_{k + 1}^{m} + 2{T}_{k}^{m} + {4}^{m + 1}{Q}_{k + 1}^{m}}\right\rbrack \]\n\n(9.30)\n\nfor \( k = 1,2,\ldots \) and \( m = 1,2,\ldots \) and show by induction that\n\n\[ {T}_{k}^{m + 1} = \frac{1}{{4}^{m} - 1}\left\lbrack {{T}_{k}^{m} + {Q}_{k}^{m}}\right\rbrack \]\n\n(9.31)\n\nBy the definition of \( {Q}_{k}^{1} \) this is true for \( m = 1 \) . We assume that (9.31) has been proven for some \( m \geq 1 \) . Then, using the recursive definitions of \( {T}_{k}^{m} \) and \( {Q}_{k}^{m} \) and the induction assumption, we derive\n\n\[ {T}_{k}^{m + 1} + {Q}_{k}^{m + 1} = \frac{{4}^{m + 1}}{{4}^{m} - 1}\left\lbrack {{T}_{k + 1}^{m} + {Q}_{k + 1}^{m}}\right\rbrack - \frac{1}{{4}^{m} - 1}\left\lbrack {{4}^{m}{T}_{k + 1}^{m} - {T}_{k}^{m}}\right\rbrack \]\n\n\[ = {4}^{m + 1}{T}_{k + 1}^{m + 1} - {T}_{k}^{m + 1} = \left( {{4}^{m + 1} - 1}\right) {T}_{k}^{m + 2}; \]\n\ni.e., (9.31) also holds for \( m + 1 \) . Now, from (9.30) and (9.31), by induction with respect to \( m \), it can be deduced that the weights of \( {T}_{k}^{m} \) are positive and that the weights of \( {Q}_{k}^{m} \) are nonnegative.
|
Yes
|
Corollary 9.31 For the Romberg quadratures we have convergence:\n\n\[ \n\mathop{\lim }\limits_{{m \rightarrow \infty }}{T}_{k}^{m}\left( f\right) = {\int }_{a}^{b}f\left( x\right) {dx}\;\text{ and }\;\mathop{\lim }\limits_{{k \rightarrow \infty }}{T}_{k}^{m}\left( f\right) = {\int }_{a}^{b}f\left( x\right) {dx} \n\]\n\nfor all continuous functions \( f \) .
|
Proof. This follows from Theorems 9.29 and 9.30 and Corollary 9.11.
|
Yes
|
Theorem 9.32 Denote by \( {L}_{k}^{m} \) the uniquely determined polynomial in \( {h}^{2} \) of degree less than or equal to \( m \) with the interpolation property\n\n\[ \n{L}_{k}^{m}\left( {h}_{j}^{2}\right) = {T}_{j}^{1}\left( f\right) ,\;j = k,\ldots, k + m.\n\]\n\nThen the Romberg quadratures satisfy\n\n\[ \n{T}_{k}^{m + 1}\left( f\right) = {L}_{k}^{m}\left( 0\right)\n\]\n\n(9.32)
|
Proof. Obviously, (9.32) is true for \( m = 0 \) . Assume that it has been proven for \( m - 1 \) . Then, using the Neville scheme from Theorem 8.9, we obtain\n\n\[ \n{L}_{k}^{m}\left( 0\right) = \frac{1}{{h}_{k + m}^{2} - {h}_{k}^{2}}\left\lbrack {-{h}_{k}^{2}{L}_{k + 1}^{m - 1}\left( 0\right) + {h}_{k + m}^{2}{L}_{k}^{m - 1}\left( 0\right) }\right\rbrack\n\]\n\n\[ \n= \frac{1}{{h}_{k + m}^{2} - {h}_{k}^{2}}\left\lbrack {-{h}_{k}^{2}{T}_{k + 1}^{m} + {h}_{k + m}^{2}{T}_{k}^{m}}\right\rbrack\n\]\n\n\[ \n= \frac{1}{{4}^{m} - 1}\left\lbrack {{4}^{m}{T}_{k + 1}^{m} - {T}_{k}^{m}}\right\rbrack = {T}_{k}^{m + 1},\n\]\n\nestablishing (9.32) for \( m \) .
|
Yes
|
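The identification (9.32) can be verified numerically: extrapolating the trapezoidal values to \( h^2 = 0 \) with the Neville scheme reproduces the Richardson recursion. A sketch with \( h_j = h/2^j \) and \( f = \exp \) on \( [0,1] \) as our own test case:

```python
import math

def trap(f, a, b, n):
    # composite trapezoidal rule with n subintervals
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def neville_at_zero(xs, ys):
    # value at 0 of the polynomial interpolating the points (xs[i], ys[i])
    p = list(ys)
    for m in range(1, len(xs)):
        for k in range(len(xs) - m):
            p[k] = (-xs[k] * p[k + 1] + xs[k + m] * p[k]) / (xs[k + m] - xs[k])
    return p[0]

h, levels = 1.0, 5
hs2 = [(h / 2**j) ** 2 for j in range(levels)]
T1 = [trap(math.exp, 0.0, 1.0, 2**j) for j in range(levels)]

extrapolated = neville_at_zero(hs2, T1)

# Richardson recursion T_k^{m+1} = (4^m T_{k+1}^m - T_k^m) / (4^m - 1)
col = list(T1)
for m in range(1, levels):
    col = [(4**m * col[k + 1] - col[k]) / (4**m - 1) for k in range(len(col) - 1)]
romberg_value = col[0]
```

Both values agree to rounding accuracy and approximate \( e - 1 \) to about thirteen digits.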
Example 10.2 By Newton's law, the differential equation of the second order\n\n\[ \nm{u}^{\prime \prime } = f\left( {t, u}\right) \n\]\n\ndescribes the motion of an object of mass \( m \) subject to the external force \( f\left( {t, u}\right) \) depending on the location \( u \) of the object and the time \( t \) . Given an initial location \( {u}_{0} \) and an initial velocity \( {u}_{0}^{\prime } \) at the initial time \( t = 0 \), one wants to find the position \( u\left( t\right) \) of the object for all times \( t \geq 0 \) .
|
Null
|
No
|
Let \( p = p\left( t\right) \) describe the population of a species of animals or plants at time \( t \) . If \( r\left( {t, p}\right) \) denotes the growth rate given by the difference between the birth and death rate depending on the time \( t \) and the size \( p \) of the population, then an isolated population satisfies the differential equation\n\n\[ \frac{dp}{dt} = r\left( {t, p}\right) \]\n\nThe simplest model \( r\left( {t, p}\right) = {ap} \), where \( a \) is a positive constant, leads to\n\n\[ \frac{dp}{dt} = {ap} \]
|
with the explicit solution \( p\left( t\right) = {p}_{0}{e}^{a\left( {t - {t}_{0}}\right) } \) . Such an exponential growth is realistic only if the population is not too large.
|
Yes
|
Corollary 10.6 Under the assumptions of Theorem 10.5, the sequence \( \left( {u}_{\nu }\right) \) defined by \( {u}_{0}\left( x\right) = {u}_{0} \) and\n\n\[ \n{u}_{\nu + 1}\left( x\right) \mathrel{\text{:=}} {u}_{0} + {\int }_{{x}_{0}}^{x}f\left( {\xi ,{u}_{\nu }\left( \xi \right) }\right) {d\xi },\;\left| {x - {x}_{0}}\right| \leq a,\;\nu = 0,1,\ldots ,\n\]\n\n(10.6)\n\nconverges as \( \nu \rightarrow \infty \) uniformly on \( \left\lbrack {{x}_{0} - a,{x}_{0} + a}\right\rbrack \) to the unique solution \( u \) of the initial value problem. We have the a posteriori error estimate\n\n\[ \n{\begin{Vmatrix}u - {u}_{\nu }\end{Vmatrix}}_{\infty } \leq \frac{La}{1 - {La}}{\begin{Vmatrix}{u}_{\nu } - {u}_{\nu - 1}\end{Vmatrix}}_{\infty },\;\nu = 1,2,\ldots \n\]
|
Proof. This follows from Theorem 3.46.
|
No
|
Example 10.7 Consider the initial value problem\n\n\[ \n{u}^{\prime } = {x}^{2} + {u}^{2},\;u\left( 0\right) = 0 \n\] \n\non \( G = \left( {-{0.5},{0.5}}\right) \times \left( {-{0.5},{0.5}}\right) \) . For \( f\left( {x, u}\right) \mathrel{\text{:=}} {x}^{2} + {u}^{2} \) we have\n\n\[ \n\left| {f\left( {x, u}\right) }\right| \leq {0.5} \n\] \n\non \( G \) . Hence for any \( a < {0.5} \) and \( M = {0.5} \) the rectangle \( B \) from the proof of Theorem 10.5 satisfies \( B \subset G \) . Furthermore, we can estimate\n\n\[ \n\left| {f\left( {x, u}\right) - f\left( {x, v}\right) }\right| = \left| {{u}^{2} - {v}^{2}}\right| = \left| {\left( {u + v}\right) \left( {u - v}\right) }\right| \leq \left| {u - v}\right| \n\] \n\nfor all \( \left( {x, u}\right) ,\left( {x, v}\right) \in G \) ; i.e., \( f \) satisfies a Lipschitz condition with Lipschitz constant \( L = 1 \) . Thus in this case the contraction number in the Picard-Lindelöf theorem is given by \( {La} < {0.5} \) .
|
Here, the iteration (10.6) reads\n\n\[ \n{u}_{\nu + 1}\left( x\right) = {\int }_{0}^{x}\left\lbrack {{\xi }^{2} + {u}_{\nu }^{2}\left( \xi \right) }\right\rbrack {d\xi }.\n\] \n\nStarting with \( {u}_{0}\left( x\right) = 0 \) we first compute\n\n\[ \n{u}_{1}\left( x\right) = {\int }_{0}^{x}{\xi }^{2}{d\xi } = \frac{{x}^{3}}{3} \n\] \n\nand from Corollary 10.6 we have the error estimate\n\n\[ \n{\begin{Vmatrix}u - {u}_{1}\end{Vmatrix}}_{\infty } \leq {\begin{Vmatrix}{u}_{1} - {u}_{0}\end{Vmatrix}}_{\infty } = \frac{1}{24} = {0.041}\ldots \n\] \n\nThe second iteration yields\n\n\[ \n{u}_{2}\left( x\right) = {\int }_{0}^{x}\left\lbrack {{\xi }^{2} + \frac{{\xi }^{6}}{9}}\right\rbrack {d\xi } = \frac{{x}^{3}}{3} + \frac{{x}^{7}}{63} \n\] \n\nwith the error estimate\n\n\[ \n{\begin{Vmatrix}u - {u}_{2}\end{Vmatrix}}_{\infty } \leq {\begin{Vmatrix}{u}_{2} - {u}_{1}\end{Vmatrix}}_{\infty } = \frac{1}{{63} \cdot {2}^{7}} = {0.00012}\ldots , \n\] \n\nand the third iteration yields\n\n\[ \n{u}_{3}\left( x\right) = {\int }_{0}^{x}\left\lbrack {{\xi }^{2} + \frac{{\xi }^{6}}{9} + \frac{2{\xi }^{10}}{189} + \frac{{\xi }^{14}}{3969}}\right\rbrack {d\xi } = \frac{{x}^{3}}{3} + \frac{{x}^{7}}{63} + \frac{2{x}^{11}}{2079} + \frac{{x}^{15}}{59535} \n\] \n\nwith the error estimate\n\n\[ \n{\begin{Vmatrix}u - {u}_{3}\end{Vmatrix}}_{\infty } \leq {\begin{Vmatrix}{u}_{3} - {u}_{2}\end{Vmatrix}}_{\infty } = \frac{1}{{2079} \cdot {2}^{10}} + \frac{1}{{59535} \cdot {2}^{15}} = {0.00000047}\ldots \n\]
|
Yes
|
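The Picard iterates can be reproduced with exact rational arithmetic; a short sketch using Python's `fractions` module and a dict-of-coefficients polynomial representation (the helper names are our own):

```python
from fractions import Fraction

def p_mul(p, q):
    # product of two polynomials stored as {degree: coefficient}
    r = {}
    for i, a in p.items():
        for j, b in q.items():
            r[i + j] = r.get(i + j, 0) + a * b
    return r

def p_int0(p):
    # integral from 0 to x, term by term
    return {i + 1: Fraction(a) / (i + 1) for i, a in p.items()}

u = {}                                  # u_0 = 0
for _ in range(3):
    rhs = p_mul(u, u)                   # u_nu(xi)^2
    rhs[2] = rhs.get(2, 0) + 1          # + xi^2
    u = p_int0(rhs)                     # u_{nu+1}

expected = {3: Fraction(1, 3), 7: Fraction(1, 63),
            11: Fraction(2, 2079), 15: Fraction(1, 59535)}
```

This confirms the coefficients \( 1/3 \), \( 1/63 \), \( 2/2079 \), and \( 1/59535 \) of \( u_3 \).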
Consider the initial value problem\n\n\[ \n{u}^{\prime } = {x}^{2} + {u}^{2},\;u\left( 0\right) = 0, \n\]\n\nfrom Example 10.7. Table 10.1 gives the difference between the exact solution as computed by the Picard-Lindelöf iterations in Example 10.7 and the approximate solution obtained by Euler's method for various step sizes \( h \) . We observe linear convergence as \( h \rightarrow 0 \) .
|
Null
|
No
|
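The linear convergence can be reproduced with a short sketch; here the Picard iterate \( u_3 \) from Example 10.7 (accurate to about \( 5 \cdot 10^{-7} \)) serves as the reference value at \( x = 0.5 \):

```python
def euler(f, x0, u0, h, n):
    # n steps of the explicit Euler method
    x, u = x0, u0
    for _ in range(n):
        u += h * f(x, u)
        x += h
    return u

f = lambda x, u: x * x + u * u
x_end = 0.5
# reference value u(0.5) from the Picard iterate u_3 of Example 10.7
ref = x_end**3 / 3 + x_end**7 / 63 + 2 * x_end**11 / 2079 + x_end**15 / 59535

errs = [abs(euler(f, 0.0, 0.0, x_end / n, n) - ref) for n in (5, 50, 500)]
ratios = [errs[i] / errs[i + 1] for i in range(2)]
```

Reducing \( h \) by a factor of ten reduces the error by roughly a factor of ten, in agreement with convergence order one.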
Example 10.13 Consider again the initial value problem from Example 10.7. Table 10.2 gives the difference between the exact solution as computed by the Picard-Lindelöf iterations and the approximate solution obtained by the improved Euler method for various step sizes \( h \) . We observe quadratic convergence as \( h \rightarrow 0 \) .
|
Null
|
No
|
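The order-two behaviour can be checked in the same way; since the Picard reference from Example 10.7 is only accurate to about \( 5 \cdot 10^{-7} \), the sketch below compares against a fine-step solution instead:

```python
def improved_euler(f, x0, u0, h, n):
    # Euler predictor followed by the trapezoidal corrector
    x, u = x0, u0
    for _ in range(n):
        k = f(x, u)
        u += 0.5 * h * (k + f(x + h, u + h * k))
        x += h
    return u

f = lambda x, u: x * x + u * u
x_end = 0.5
ref = improved_euler(f, 0.0, 0.0, x_end / 20000, 20000)   # fine-step reference

errs = [abs(improved_euler(f, 0.0, 0.0, x_end / n, n) - ref) for n in (5, 50)]
```

Reducing \( h \) by a factor of ten now reduces the error by roughly a factor of one hundred.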
Theorem 10.17 A single-step method is consistent if and only if\n\n\[ \mathop{\lim }\limits_{{h \rightarrow 0}}\varphi \left( {x, u;h}\right) = f\left( {x, u}\right) \]\n\nuniformly for all \( \left( {x, u}\right) \in G \) .
|
Proof. Since we assume \( f \) to be bounded, we have\n\n\[ \eta \left( {x + t}\right) - \eta \left( x\right) = {\int }_{0}^{t}{\eta }^{\prime }\left( {x + s}\right) {ds} = {\int }_{0}^{t}f\left( {x + s,\eta \left( {x + s}\right) }\right) {ds} \rightarrow 0,\;t \rightarrow 0, \]\n\nuniformly for all \( \left( {x, u}\right) \in G \) . Therefore, since we also assume that \( f \) is uniformly continuous, it follows that\n\n\[ \frac{1}{h}\left| {{\int }_{0}^{h}\left\lbrack {{\eta }^{\prime }\left( {x + t}\right) - {\eta }^{\prime }\left( x\right) }\right\rbrack {dt}}\right| \leq \mathop{\max }\limits_{{0 \leq t \leq h}}\left| {{\eta }^{\prime }\left( {x + t}\right) - {\eta }^{\prime }\left( x\right) }\right| \]\n\n\[ = \mathop{\max }\limits_{{0 \leq t \leq h}}\left| {f\left( {x + t,\eta \left( {x + t}\right) }\right) - f\left( {x,\eta \left( x\right) }\right) }\right| \rightarrow 0,\;h \rightarrow 0, \]\n\nuniformly for all \( \left( {x, u}\right) \in G \) . From this we obtain that\n\n\[ \Delta \left( {x, u;h}\right) + \varphi \left( {x, u;h}\right) - f\left( {x, u}\right) = \frac{1}{h}\left\lbrack {\eta \left( {x + h}\right) - \eta \left( x\right) }\right\rbrack - {\eta }^{\prime }\left( x\right) \]\n\n\[ = \frac{1}{h}{\int }_{0}^{h}\left\lbrack {{\eta }^{\prime }\left( {x + t}\right) - {\eta }^{\prime }\left( x\right) }\right\rbrack {dt} \rightarrow 0,\;h \rightarrow 0, \]\n\nuniformly for all \( \left( {x, u}\right) \in G \) . This now implies that the two conditions \( \Delta \rightarrow 0, h \rightarrow 0 \), and \( \varphi \rightarrow f, h \rightarrow 0 \), are equivalent.
|
Yes
|
Theorem 10.18 The Euler method is consistent. If \( f \) is continuously differentiable in \( G \), then the Euler method has consistency order one.
|
Proof. Consistency is a consequence of Theorem 10.17 and the fact that \( \varphi \left( {x, u;h}\right) = f\left( {x, u}\right) \) for Euler’s method. If \( f \) is continuously differentiable, then from the differential equation \( {\eta }^{\prime } = f\left( {\xi ,\eta }\right) \) it follows that \( \eta \) is twice continuously differentiable with\n\n\[ \n{\eta }^{\prime \prime } = {f}_{x}\left( {\xi ,\eta }\right) + {f}_{u}\left( {\xi ,\eta }\right) f\left( {\xi ,\eta }\right) .\n\]\n\nTherefore, Taylor's formula yields\n\n\[ \n\left| {\Delta \left( {x, u;h}\right) }\right| = \left| {\frac{1}{h}\left\lbrack {\eta \left( {x + h}\right) - \eta \left( x\right) }\right\rbrack - {\eta }^{\prime }\left( x\right) }\right| = \frac{h}{2}\left| {{\eta }^{\prime \prime }\left( {x + {\theta h}}\right) }\right| \leq {Kh} \n\]\n\nfor some \( 0 < \theta < 1 \) and some bound \( K \) for the function \( 2\left( {{f}_{x} + {f}_{u}f}\right) \) .
|
Yes
|
Theorem 10.19 The improved Euler method is consistent. If \( f \) is twice continuously differentiable in \( G \), then the improved Euler method has consistency order two.
|
Proof. Consistency follows from Theorem 10.17 and\n\n\[ \varphi \left( {x, u;h}\right) = \frac{1}{2}\left\lbrack {f\left( {x, u}\right) + f\left( {x + h, u + {hf}\left( {x, u}\right) }\right) }\right\rbrack \rightarrow f\left( {x, u}\right) ,\;h \rightarrow 0. \]\n\nIf \( f \) is twice continuously differentiable, then (10.12) implies that \( \eta \) is three times continuously differentiable with\n\n\[ {\eta }^{\prime \prime \prime } = {f}_{xx}\left( {\xi ,\eta }\right) + 2{f}_{xu}\left( {\xi ,\eta }\right) f\left( {\xi ,\eta }\right) + {f}_{uu}\left( {\xi ,\eta }\right) {f}^{2}\left( {\xi ,\eta }\right) \]\n\n\[ + {f}_{u}\left( {\xi ,\eta }\right) {f}_{x}\left( {\xi ,\eta }\right) + {f}_{u}^{2}\left( {\xi ,\eta }\right) f\left( {\xi ,\eta }\right) . \]\n\nHence Taylor's formula yields\n\n\[ \left| {\eta \left( {x + h}\right) - \eta \left( x\right) - h{\eta }^{\prime }\left( x\right) - \frac{{h}^{2}}{2}\;{\eta }^{\prime \prime }\left( x\right) }\right| = \frac{{h}^{3}}{6}\;\left| {{\eta }^{\prime \prime \prime }\left( {x + {\theta h}}\right) }\right| \leq {K}_{1}{h}^{3} \]\n\n(10.13)\n\nfor some \( 0 < \theta < 1 \) and a bound \( {K}_{1} \) for \( 6\left( {{f}_{xx} + 2{f}_{xu}f + {f}_{uu}{f}^{2} + {f}_{u}{f}_{x} + {f}_{u}^{2}f}\right) \). From Taylor's formula for functions of two variables we have the estimate\n\n\[ \left| {f\left( {x + h, u + k}\right) - f\left( {x, u}\right) - h{f}_{x}\left( {x, u}\right) - k{f}_{u}\left( {x, u}\right) }\right| \leq \frac{1}{2}{K}_{2}{\left( \left| h\right| + \left| k\right| \right) }^{2} \]\n\nwith a bound \( {K}_{2} \) for the second derivatives \( {f}_{xx},{f}_{xu} \), and \( {f}_{uu} \). 
From this, setting \( k = {hf}\left( {x, u}\right) \), in view of (10.12) we obtain\n\n\[ \left| {f\left( {x + h, u + {hf}\left( {x, u}\right) }\right) - f\left( {x, u}\right) - h{\eta }^{\prime \prime }\left( x\right) }\right| \leq \frac{1}{2}{K}_{2}{\left( 1 + {K}_{0}\right) }^{2}{h}^{2} \]\n\nwith some bound \( {K}_{0} \) for \( f \), whence\n\n\[ \left| {\varphi \left( {x, u;h}\right) - f\left( {x, u}\right) - \frac{h}{2}{\eta }^{\prime \prime }\left( x\right) }\right| \leq \frac{1}{4}{K}_{2}{\left( 1 + {K}_{0}\right) }^{2}{h}^{2} \]\n\n(10.14)\n\nfollows. Now combining (10.13) and (10.14), with the aid of the triangle inequality and using the differential equation, we can establish consistency order two.
|
Yes
|
Lemma 10.21 Let \( \left( {\xi }_{j}\right) \) be a sequence in \( \mathbb{R} \) with the property\n\n\[\left| {\xi }_{j + 1}\right| \leq \left( {1 + A}\right) \left| {\xi }_{j}\right| + B,\;j = 0,1,\ldots ,\]\n\nfor some constants \( A > 0 \) and \( B \geq 0 \) . Then the estimate\n\n\[\left| {\xi }_{j}\right| \leq \left| {\xi }_{0}\right| {e}^{jA} + \frac{B}{A}\left( {{e}^{jA} - 1}\right) ,\;j = 0,1,\ldots ,\]\n\nholds.
|
Proof. We prove this by induction. The estimate is true for \( j = 0 \) . Assume that it has been proven for some \( j \geq 0 \) . Then, with the aid of the inequality \( 1 + A < {e}^{A} \), which follows from the power series for the exponential function, we obtain\n\n\[\left| {\xi }_{j + 1}\right| \leq \left( {1 + A}\right) \left| {\xi }_{0}\right| {e}^{jA} + \left( {1 + A}\right) \frac{B}{A}\left( {{e}^{jA} - 1}\right) + B\]\n\n\[\leq \left| {\xi }_{0}\right| {e}^{\left( {j + 1}\right) A} + \frac{B}{A}\left( {{e}^{\left( {j + 1}\right) A} - 1}\right)\]\n\ni.e., the estimate also holds for \( j + 1 \) .
|
Yes
|
Theorem 10.23 Assume that the single-step method satisfies the assumptions of the previous Theorem 10.22 and that it has consistency order \( p \) ; i.e., \( \left| {\Delta \left( {x, u;h}\right) }\right| \leq K{h}^{p} \) . Then\n\n\[ \left| {e}_{j}\right| \leq \frac{K}{M}\left( {{e}^{M\left( {{x}_{j} - {x}_{0}}\right) } - 1}\right) {h}^{p},\;j = 0,1,\ldots, n; \]\n\ni.e., the convergence also has order \( p \) .
|
Proof. This follows from (10.16) with the aid of \( c\left( h\right) \leq K{h}^{p} \) .
|
No
|
Corollary 10.24 The Euler method and the improved Euler method are convergent. For continuously differentiable \( f \) the Euler method has convergence order one. For twice continuously differentiable \( f \) the improved Euler method has convergence order two.
|
Proof. By Theorems 10.18, 10.19, 10.22, and 10.23 it remains only to verify the Lipschitz condition of the function \( \varphi \) for the improved Euler method given by (10.11). From the Lipschitz condition for \( f \) we obtain\n\n\[ \left| {\varphi \left( {x, u;h}\right) - \varphi \left( {x, v;h}\right) }\right| \]\n\n\[ \leq \frac{1}{2}\left| {f\left( {x, u}\right) - f\left( {x, v}\right) }\right| + \frac{1}{2}\left| {f\left( {x + h, u + {hf}\left( {x, u}\right) }\right) - f\left( {x + h, v + {hf}\left( {x, v}\right) }\right) }\right| \]\n\n\[ \leq \frac{L}{2}\left| {u - v}\right| + \frac{L}{2}\left| {\left\lbrack {u + {hf}\left( {x, u}\right) }\right\rbrack - \left\lbrack {v + {hf}\left( {x, v}\right) }\right\rbrack }\right| \leq L\left( {1 + \frac{hL}{2}}\right) \left| {u - v}\right| ; \]\n\ni.e., \( \varphi \) also satisfies a Lipschitz condition.
|
Yes
|
Theorem 10.26 The Runge-Kutta method is consistent. If \( f \) is four-times continuously differentiable, then it has consistency order four and hence convergence order four.
|
Proof. The function \( \varphi \) describing the Runge-Kutta method is given recursively by\n\n\[ \varphi = \frac{1}{6}\left( {{\varphi }_{1} + 2{\varphi }_{2} + 2{\varphi }_{3} + {\varphi }_{4}}\right) \]\n\nwhere\n\n\[ {\varphi }_{1}\left( {x, u;h}\right) = f\left( {x, u}\right) \]\n\n\[ {\varphi }_{2}\left( {x, u;h}\right) = f\left( {x + \frac{h}{2}, u + \frac{h}{2}{\varphi }_{1}\left( {x, u;h}\right) }\right) ,\]\n\n\[ {\varphi }_{3}\left( {x, u;h}\right) = f\left( {x + \frac{h}{2}, u + \frac{h}{2}{\varphi }_{2}\left( {x, u;h}\right) }\right) ,\]\n\n\[ {\varphi }_{4}\left( {x, u;h}\right) = f\left( {x + h, u + h{\varphi }_{3}\left( {x, u;h}\right) }\right) .\]\n\nFrom this, consistency follows immediately by Theorem 10.17.\n\nAnalogously to the proof of Theorem 10.19 for the improved Euler method, the consistency order four can be established by a Taylor expansion of \( \varphi \left( {x, u;h}\right) \) with respect to powers of \( h \) up to order \( {h}^{4} \) and expressing the derivatives of \( \eta \) on the right-hand side of\n\n\[ \frac{1}{h}\left\lbrack {\eta \left( {x + h}\right) - \eta \left( x\right) }\right\rbrack = {\eta }^{\prime }\left( x\right) + \frac{h}{2}{\eta }^{\prime \prime }\left( x\right) + \frac{{h}^{2}}{6}{\eta }^{\prime \prime \prime }\left( x\right) + \frac{{h}^{3}}{24}{\eta }^{\prime \prime \prime \prime }\left( x\right) + O\left( {h}^{4}\right) \]\n\nthrough \( f \) and its derivatives by using the differential equation. We leave the details as an exercise for the reader (see Problem 10.9).
|
No
|
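The classical Runge-Kutta method in code (a sketch; the order-four behaviour is checked on \( u' = u \), \( u(0) = 1 \), our own test problem, against the exact value \( e \)):

```python
import math

def rk4(f, x0, u0, h, n):
    # n steps of the classical fourth-order Runge-Kutta method
    x, u = x0, u0
    for _ in range(n):
        k1 = f(x, u)
        k2 = f(x + h / 2, u + h / 2 * k1)
        k3 = f(x + h / 2, u + h / 2 * k2)
        k4 = f(x + h, u + h * k3)
        u += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return u

f = lambda x, u: u
errs = [abs(rk4(f, 0.0, 1.0, 1.0 / n, n) - math.e) for n in (10, 20, 40)]
ratios = [errs[i] / errs[i + 1] for i in range(2)]
```

Halving \( h \) divides the error by about \( 2^4 = 16 \), consistent with convergence order four.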
Theorem 10.29 If \( f \) is \( \left( {s + 1}\right) \) -times continuously differentiable, then the multistep methods (10.21) are consistent of order \( s + 1 \) .
|
Proof. By construction we have that\n\n\[ \Delta \left( {x, u;h}\right) = \frac{1}{h}{\int }_{x + \left( {r - k}\right) h}^{x + {rh}}\left\lbrack {f\left( {\xi ,\eta \left( \xi \right) }\right) - p\left( \xi \right) }\right\rbrack {d\xi } \]\n\nwhere \( p \) denotes the polynomial satisfying the interpolation condition\n\n\[ p\left( {x + {mh}}\right) = f\left( {x + {mh},\eta \left( {x + {mh}}\right) }\right) ,\;m = 0,\ldots, s. \]\n\nBy Theorem 8.10 on the remainder in polynomial interpolation, we can estimate\n\n\[ \left| {f\left( {\xi ,\eta \left( \xi \right) }\right) - p\left( \xi \right) }\right| \leq K{h}^{s + 1} \]\n\nfor all \( \xi \) in the interval \( x + \left( {r - k}\right) h \leq \xi \leq x + {rh} \) and some constant \( K \) depending on \( f \) and its derivatives up to order \( s + 1 \) . Since the interval of integration has length \( {kh} \), this yields \( \left| {\Delta \left( {x, u;h}\right) }\right| \leq {kK}{h}^{s + 1} \), and the proof is complete.
|
Yes
|
Let \( p \) be the quadratic interpolation polynomial satisfying\n\n\[ p\left( {x}_{j}\right) = u\left( {x}_{j}\right) ,\;j = 0,1,2, \]\n\nand approximate\n\n\[ {u}^{\prime }\left( {x}_{0}\right) \approx {p}^{\prime }\left( {x}_{0}\right) \]
|
Using the fact that the approximation for the derivative is exact for polynomials of degree less than or equal to two, simple calculations show that (see Problem 10.15)\n\n\[ {p}^{\prime }\left( {x}_{0}\right) = \frac{1}{2h}\left\lbrack {-u\left( {x}_{2}\right) + {4u}\left( {x}_{1}\right) - {3u}\left( {x}_{0}\right) }\right\rbrack . \]
|
Yes
|
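A quick check of the one-sided formula (the test functions are our own choices):

```python
import math

def d1_forward(u, x0, h):
    # (-u(x0 + 2h) + 4 u(x0 + h) - 3 u(x0)) / (2h)
    return (-u(x0 + 2 * h) + 4.0 * u(x0 + h) - 3.0 * u(x0)) / (2.0 * h)

# exact for polynomials of degree <= 2
q = lambda x: 3.0 * x * x - 2.0 * x + 1.0          # q'(1) = 4
err_quadratic = abs(d1_forward(q, 1.0, 0.1) - 4.0)

# second-order accurate for general smooth u
e1 = abs(d1_forward(math.sin, 0.3, 0.1) - math.cos(0.3))
e2 = abs(d1_forward(math.sin, 0.3, 0.05) - math.cos(0.3))
```

The quadratic is differentiated exactly up to rounding, and halving \( h \) divides the error for \( \sin \) by about four, as expected for a second-order formula.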
For \( k = 0,1,\ldots, r - 1 \), let \( {u}_{j, k} \) denote the unique solutions to the homogeneous difference equation (10.31) with initial values\n\n\[ \n{u}_{j, k} = {\delta }_{j, k},\;j = 0,1,\ldots, r - 1.\n\]\n\nThen for a given right-hand side \( {c}_{r},{c}_{r + 1},\ldots \), the unique solution to the inhomogeneous difference equation\n\n\[ \n{z}_{j + r} + \mathop{\sum }\limits_{{m = 0}}^{{r - 1}}{a}_{m}{z}_{j + m} = {c}_{j + r},\;j = 0,1,\ldots ,\n\]\n\n(10.36)\n\nwith initial values \( {z}_{0},{z}_{1},\ldots ,{z}_{r - 1} \) is given by\n\n\[ \n{z}_{j + r} = \mathop{\sum }\limits_{{k = 0}}^{{r - 1}}{z}_{k}{u}_{j + r, k} + \mathop{\sum }\limits_{{k = 0}}^{j}{c}_{k + r}{u}_{j + r - k - 1, r - 1},\;j = 0,1,\ldots \n\]\n\n(10.37)
|
Proof. Setting \( {u}_{m, r - 1} = 0 \) for \( m = - 1, - 2,\ldots \), we can rewrite (10.37) in the form\n\n\[ \n{z}_{j} = \mathop{\sum }\limits_{{k = 0}}^{{r - 1}}{z}_{k}{u}_{j, k} + {w}_{j},\;j = 0,1,\ldots ,\n\]\n\nwhere\n\n\[ \n{w}_{j} \mathrel{\text{:=}} \mathop{\sum }\limits_{{k = 0}}^{\infty }{c}_{k + r}{u}_{j - k - 1, r - 1},\;j = 0,1,\ldots \n\]\n\nObviously, \( {w}_{j} = 0 \) for \( j = 0,\ldots, r - 1 \), and therefore it remains to show that \( {w}_{j} \) satisfies the inhomogeneous difference equation (10.36).\n\nAs in the proof of Theorem 10.33 we set \( {a}_{r} = 1 \) . Then, using \( {u}_{m, r - 1} = 0 \) for \( m < r - 1,{u}_{r - 1, r - 1} = 1 \), and the homogeneous difference equation for \( {u}_{m, r - 1} \), we compute\n\n\[ \n\mathop{\sum }\limits_{{m = 0}}^{r}{a}_{m}{w}_{j + m} = \mathop{\sum }\limits_{{m = 0}}^{r}{a}_{m}\mathop{\sum }\limits_{{k = 0}}^{\infty }{c}_{k + r}{u}_{j + m - k - 1, r - 1}\n\]\n\n\[ \n= \mathop{\sum }\limits_{{m = 0}}^{r}{a}_{m}\mathop{\sum }\limits_{{k = 0}}^{j}{c}_{k + r}{u}_{j + m - k - 1, r - 1}\n\]\n\n\[ \n= \mathop{\sum }\limits_{{k = 0}}^{j}{c}_{k + r}\mathop{\sum }\limits_{{m = 0}}^{r}{a}_{m}{u}_{j + m - k - 1, r - 1} = {c}_{j + r}.\n\]\n\nNow the proof is completed by noting that each solution to the inhomogeneous difference equation (10.36) is uniquely determined by its \( r \) initial values \( {z}_{0},{z}_{1},\ldots ,{z}_{r - 1} \) .
|
Yes
|
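Formula (10.37) can be verified against direct recursion for a concrete example; in the sketch below \( r = 2 \), and the coefficients, initial values, and right-hand side are arbitrary choices of our own:

```python
def solve_hom(a, init, length):
    # z_{j+r} = -sum_m a_m z_{j+m}, with r = len(a) initial values
    z = list(init)
    for j in range(length - len(a)):
        z.append(-sum(a[m] * z[j + m] for m in range(len(a))))
    return z

a = [0.5, -1.2]                                 # r = 2
r, N = len(a), 12
c = [0.3 * j - 1.0 for j in range(N)]           # c[j] used for j >= r
z_init = [2.0, -1.0]

# direct recursion for the inhomogeneous equation (10.36)
z = list(z_init)
for j in range(N - r):
    z.append(c[j + r] - sum(a[m] * z[j + m] for m in range(r)))

# fundamental solutions u_{j,k} with initial values delta_{j,k}
u = [solve_hom(a, [1.0 if j == k else 0.0 for j in range(r)], N)
     for k in range(r)]

def u_last(m):
    # u_{m, r-1}, extended by zero for negative m
    return u[r - 1][m] if m >= 0 else 0.0

# superposition formula (10.37)
max_diff = max(
    abs(sum(z_init[k] * u[k][j + r] for k in range(r))
        + sum(c[k + r] * u_last(j + r - k - 1) for k in range(j + 1))
        - z[j + r])
    for j in range(N - r))
```

The two computations agree to rounding accuracy for every index.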
Lemma 10.37 Let \( \left( {\xi }_{j}\right) \) be a sequence in \( \mathbb{R} \) with the property\n\n\[ \n\left| {\xi }_{j}\right| \leq A\mathop{\sum }\limits_{{m = 0}}^{{j - 1}}\left| {\xi }_{m}\right| + B,\;j = 1,2,\ldots ,\n\]\n\nfor some constants \( A > 0 \) and \( B \geq 0 \) . Then the estimate\n\n\[ \n\left| {\xi }_{j}\right| \leq \left( {A\left| {\xi }_{0}\right| + B}\right) {e}^{\left( {j - 1}\right) A},\;j = 1,2,\ldots ,\n\]\n\nholds.
|
Proof. We prove by induction that\n\n\[ \n\left| {\xi }_{j}\right| \leq \left( {A\left| {\xi }_{0}\right| + B}\right) {\left( 1 + A\right) }^{j - 1},\;j = 1,2,\ldots \n\]\n\n(10.38)\n\nThen the assertion follows by using the estimate \( 1 + A \leq {e}^{A} \) . The inequality (10.38) is true for \( j = 1 \) . Assume that it has been proven up to some \( j \geq 1 \) . Then we have\n\n\[ \n\left| {\xi }_{j + 1}\right| \leq A\mathop{\sum }\limits_{{m = 0}}^{j}\left| {\xi }_{m}\right| + B \leq \left( {A\left| {\xi }_{0}\right| + B}\right) + A\mathop{\sum }\limits_{{m = 1}}^{j}\left( {A\left| {\xi }_{0}\right| + B}\right) {\left( 1 + A\right) }^{m - 1}\n\]\n\n\[ \n= \left( {A\left| {\xi }_{0}\right| + B}\right) {\left( 1 + A\right) }^{j}\n\]\n\ni.e., the estimate is also true for \( j + 1 \) .
|
Yes
|
Consider the boundary value problem\n\n\[ \n{u}^{\prime \prime } = {u}^{3},\;u\left( 1\right) = \sqrt{2},\;u\left( 2\right) = \frac{1}{2}\sqrt{2}, \n\]\n\nwith the exact solution \( u\left( x\right) = \sqrt{2}/x \) .
|
We solve numerically the associated initial value problem\n\n\[ \n{u}^{\prime \prime } = {u}^{3},\;u\left( 1\right) = \sqrt{2},\;{u}^{\prime }\left( 1\right) = s, \n\]\n\nby the improved Euler method of Section 10.2 with step sizes \( h = {0.1} \), \( h = {0.01} \), and \( h = {0.001} \). For this we transform the initial value problem for the equation of second order into the initial value problem for the system\n\n\[ \n{u}^{\prime } = w,\;{w}^{\prime } = {u}^{3},\;u\left( 1\right) = \sqrt{2},\;w\left( 1\right) = s. \n\]\n\nAs starting value for the Newton iteration we choose \( s = 0 \). The exact initial condition is \( s = - \sqrt{2} = - {1.414214} \). The numerical results represented in Table 11.1 illustrate the feasibility of the shooting method with Newton iterations.
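The shooting idea above can be sketched in a few lines of Python, integrating the first-order system with the improved Euler method; for simplicity the sketch uses a bisection on \( s \) in place of the Newton iteration described in the text, and the bracket and tolerances are illustrative choices:

```python
import math

def improved_euler_u2(s, h):
    """Integrate u' = w, w' = u^3 from x = 1 to x = 2 with u(1) = sqrt(2),
    w(1) = s by the improved Euler method; return the approximation to u(2)."""
    u, w = math.sqrt(2.0), s
    for _ in range(round(1.0 / h)):
        ku, kw = w, u ** 3                  # Euler predictor slopes
        up, wp = u + h * ku, w + h * kw     # predictor step
        u, w = u + 0.5 * h * (ku + wp), w + 0.5 * h * (kw + up ** 3)
    return u

def shoot(h, target=0.5 * math.sqrt(2.0)):
    lo, hi = -2.0, -1.0                     # u(2; lo) < target < u(2; hi)
    for _ in range(60):                     # bisection on the slope s
        mid = 0.5 * (lo + hi)
        if improved_euler_u2(mid, h) > target:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

s_approx = shoot(0.001)   # close to the exact slope -sqrt(2)
```

With \( h = 0.001 \) the computed slope agrees with \( -\sqrt{2} \) up to the discretization error of the improved Euler method.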
|
Yes
|
The linear boundary value problem\n\n\[ \n{u}^{\prime \prime } - {u}^{\prime } - {110u} = 0,\;u\left( 0\right) = u\left( {10}\right) = 1, \n\]
|
has the unique solution\n\n\[ \n{u}\left( x\right) = \frac{1}{{e}^{110} - {e}^{-{100}}}\left\{ {\left( {{e}^{110} - 1}\right) {e}^{-{10x}} + \left( {1 - {e}^{-{100}}}\right) {e}^{11x}}\right\} .\n\]
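The formula can be verified directly: the characteristic equation \( \lambda^2 - \lambda - 110 = 0 \) has the roots \( -10 \) and \( 11 \), and the coefficients are chosen to match the boundary values. A quick numerical check (the variable names are illustrative):

```python
import math

e110, em100 = math.exp(110.0), math.exp(-100.0)
A = (e110 - 1.0) / (e110 - em100)    # coefficient of e^{-10x}
B = (1.0 - em100) / (e110 - em100)   # coefficient of e^{11x}

def u(x):
    return A * math.exp(-10.0 * x) + B * math.exp(11.0 * x)

# residuals of the characteristic polynomial at the two exponents
res = [lam ** 2 - lam - 110.0 for lam in (-10.0, 11.0)]
bc = (u(0.0), u(10.0))   # boundary values, both should equal 1
```

The huge factor \( e^{11x} \) in the second term is what makes a naive shooting approach ill-conditioned for this problem.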
|
Yes
|
Theorem 11.4 Assume that \( q, r \in C\left\lbrack {a, b}\right\rbrack \) and \( q \geq 0 \) . Then the boundary value problem for the linear differential equation\n\n\[ - {u}^{\prime \prime } + {qu} = r\;\text{ on }\left\lbrack {a, b}\right\rbrack \]\n\nwith homogeneous boundary conditions\n\n\[ u\left( a\right) = u\left( b\right) = 0 \]
|
Proof. Assume that \( {u}_{1} \) and \( {u}_{2} \) are two solutions to the boundary value problem. Then the difference \( u = {u}_{1} - {u}_{2} \) solves the homogeneous boundary value problem\n\n\[ - {u}^{\prime \prime } + {qu} = 0,\;u\left( a\right) = u\left( b\right) = 0.\]\n\nBy partial integration we obtain\n\n\[ {\int }_{a}^{b}\left( {{\left\lbrack {u}^{\prime }\right\rbrack }^{2} + q{u}^{2}}\right) {dx} = {\int }_{a}^{b}\left( {-{u}^{\prime \prime } + {qu}}\right) {udx} = 0.\]\n\nThis implies \( {u}^{\prime } = 0 \) on \( \left\lbrack {a, b}\right\rbrack \), since \( q \geq 0 \) . Hence \( u \) is constant on \( \left\lbrack {a, b}\right\rbrack \) , and the boundary conditions finally yield \( u = 0 \) on \( \left\lbrack {a, b}\right\rbrack \) . Therefore, the boundary value problem (11.7)-(11.8) has at most one solution.\n\nThe general solution of the linear differential equation (11.7) is given by\n\n\[ u = {C}_{1}{u}_{1} + {C}_{2}{u}_{2} + {u}^{ * } \]\n\nwhere \( {u}_{1},{u}_{2} \) denotes a fundamental system of two linearly independent solutions to the homogeneous differential equation, \( {u}^{ * } \) is a solution to the inhomogeneous differential equation, and \( {C}_{1} \) and \( {C}_{2} \) are arbitrary constants. This can be seen with the help of the Picard-Lindelöf Theorem 10.1 (see Problem 11.4). The boundary condition (11.8) is satisfied, provided that the constants \( {C}_{1} \) and \( {C}_{2} \) solve the linear system\n\n\[ {C}_{1}{u}_{1}\left( a\right) + {C}_{2}{u}_{2}\left( a\right) = - {u}^{ * }\left( a\right) \]\n\n\[ {C}_{1}{u}_{1}\left( b\right) + {C}_{2}{u}_{2}\left( b\right) = - {u}^{ * }\left( b\right) \]\n\nThis system is uniquely solvable. Assume that \( {C}_{1} \) and \( {C}_{2} \) solve the homogeneous system. Then \( u = {C}_{1}{u}_{1} + {C}_{2}{u}_{2} \) yields a solution to the homogeneous boundary value problem. Hence \( u = 0 \), since we have already established uniqueness for the boundary value problem. 
From this we conclude that \( {C}_{1} = {C}_{2} = 0 \) because \( {u}_{1} \) and \( {u}_{2} \) are linearly independent, and the existence proof is complete.
|
Yes
|
Theorem 11.5 For each \( h > 0 \) the difference equations (11.10)-(11.11) have a unique solution.
|
Proof. The tridiagonal matrix \( A \) is irreducible and weakly row-diagonally dominant. Hence, by Theorem 4.7, the matrix \( A \) is invertible, and the Jacobi iterations converge.
|
Yes
|
Lemma 11.6 Denote by \( A \) the matrix of the finite difference method for \( q \geq 0 \) and by \( {A}_{0} \) the corresponding matrix for \( q = 0 \) . Then\n\n\[ 0 \leq {A}^{-1} \leq {A}_{0}^{-1} \]\n\ni.e., all components of \( {A}^{-1} \) are nonnegative and smaller than or equal to the corresponding components of \( {A}_{0}^{-1} \) .
|
Proof. The columns of the inverse \( {A}^{-1} = \left( {{a}_{1},\ldots ,{a}_{n}}\right) \) satisfy \( A{a}_{j} = {e}_{j} \) for \( j = 1,\ldots, n \) with the canonical unit vectors \( {e}_{1},\ldots ,{e}_{n} \) in \( {\mathbb{R}}^{n} \) . The Jacobi iterations for the solution of \( {Az} = {e}_{j} \) starting with \( {z}_{0} = 0 \) are given by\n\n\[ {z}_{\nu + 1} = - {D}^{-1}\left( {{A}_{L} + {A}_{R}}\right) {z}_{\nu } + {D}^{-1}{e}_{j},\;\nu = 0,1,\ldots ,\]\n\nwith the usual splitting \( A = D + {A}_{L} + {A}_{R} \) of \( A \) into its diagonal, lower, and upper triangular parts. Since the entries of \( {D}^{-1} \) and of \( - {D}^{-1}\left( {{A}_{L} + {A}_{R}}\right) \) are all nonnegative, it follows that \( {A}^{-1} \geq 0 \) . Analogously, the iterations\n\n\[ {z}_{\nu + 1} = - {D}_{0}^{-1}\left( {{A}_{L} + {A}_{R}}\right) {z}_{\nu } + {D}_{0}^{-1}{e}_{j},\;\nu = 0,1,\ldots ,\]\n\nyield the columns of \( {A}_{0}^{-1} \) . Therefore, from \( {D}_{0}^{-1} \geq {D}^{-1} \) we conclude that \( {A}_{0}^{-1} \geq {A}^{-1} \)
|
Yes
|
Lemma 11.7 Assume that \( u \in {C}^{4}\left\lbrack {a, b}\right\rbrack \) . Then\n\n\[ \left| {{u}^{\prime \prime }\left( x\right) - \frac{1}{{h}^{2}}\left\lbrack {u\left( {x + h}\right) - {2u}\left( x\right) + u\left( {x - h}\right) }\right\rbrack }\right| \leq \frac{{h}^{2}}{12}{\begin{Vmatrix}{u}^{\left( 4\right) }\end{Vmatrix}}_{\infty } \]\n\nfor all \( x \in \left\lbrack {a + h, b - h}\right\rbrack \) .
|
Proof. By Taylor's formula we have that\n\n\[ u\left( {x \pm h}\right) = u\left( x\right) \pm h{u}^{\prime }\left( x\right) + \frac{{h}^{2}}{2}{u}^{\prime \prime }\left( x\right) \pm \frac{{h}^{3}}{6}{u}^{\prime \prime \prime }\left( x\right) + \frac{{h}^{4}}{24}{u}^{\left( 4\right) }\left( {x \pm {\theta }_{ \pm }h}\right) \]\n\nfor some \( {\theta }_{ \pm } \in \left( {0,1}\right) \) . Adding these two equations gives\n\n\[ u\left( {x + h}\right) - {2u}\left( x\right) + u\left( {x - h}\right) = {h}^{2}{u}^{\prime \prime }\left( x\right) + \frac{{h}^{4}}{24}{u}^{\left( 4\right) }\left( {x + {\theta }_{ + }h}\right) + \frac{{h}^{4}}{24}{u}^{\left( 4\right) }\left( {x - {\theta }_{ - }h}\right) ,\]\n\nwhence the statement of the lemma follows.
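The bound of Lemma 11.7 and the second-order behavior it expresses can be observed numerically; a sketch for \( f = \sin \), where \( \| f^{(4)} \|_\infty = 1 \) (the evaluation point and step sizes are illustrative choices):

```python
import math

def d2(f, x, h):
    """Central second-difference quotient of Lemma 11.7."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

x = 1.0
hs = (0.1, 0.05, 0.025)
errs = [abs(d2(math.sin, x, h) - (-math.sin(x))) for h in hs]
# each error should respect the bound h^2/12 * ||f''''||_inf with ||sin||_inf = 1
within_bound = all(e <= h * h / 12.0 for e, h in zip(errs, hs))
ratio = errs[0] / errs[1]   # halving h should roughly quarter the error
```

Halving the step size quarters the error, confirming the \( O(h^2) \) consistency of the central difference quotient.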
|
Yes
|
Theorem 11.8 Assume that the solution to the boundary value problem (11.7)-(11.8) is four-times continuously differentiable. Then the error of the finite difference approximation can be estimated by
|
Proof. By Lemma 11.7, for\n\n\[ {z}_{j} \mathrel{\text{:=}} {u}^{\prime \prime }\left( {x}_{j}\right) - \frac{1}{{h}^{2}}\left\lbrack {u\left( {x}_{j + 1}\right) - {2u}\left( {x}_{j}\right) + u\left( {x}_{j - 1}\right) }\right\rbrack \]\n\nwe have the estimate\n\n\[ \left| {z}_{j}\right| \leq \frac{{h}^{2}}{12}{\begin{Vmatrix}{u}^{\left( 4\right) }\end{Vmatrix}}_{\infty },\;j = 1,\ldots, n. \]\n\n(11.13)\n\nSince\n\n\[ - \frac{1}{{h}^{2}}\left\lbrack {u\left( {x}_{j + 1}\right) - \left( {2 + {h}^{2}{q}_{j}}\right) u\left( {x}_{j}\right) + u\left( {x}_{j - 1}\right) }\right\rbrack = - {u}^{\prime \prime }\left( {x}_{j}\right) + {q}_{j}u\left( {x}_{j}\right) + {z}_{j} = {r}_{j} + {z}_{j}, \]\n\nthe vector \( \widetilde{U} = {\left( u\left( {x}_{1}\right) ,\ldots, u\left( {x}_{n}\right) \right) }^{T} \) given by the exact solution solves the\n\nlinear system\n\n\[ A\widetilde{U} = R + Z \]\n\nwhere \( Z = {\left( {z}_{1},\ldots ,{z}_{n}\right) }^{T} \) . Therefore,\n\n\[ A\left( {\widetilde{U} - U}\right) = Z \]\n\nand from this, using Lemma 11.6 and the estimate (11.13), we obtain\n\n\[ \left| {u\left( {x}_{j}\right) - {u}_{j}}\right| \leq {\begin{Vmatrix}{A}^{-1}Z\end{Vmatrix}}_{\infty } \leq \frac{{h}^{2}}{12}{\begin{Vmatrix}{u}^{\left( 4\right) }\end{Vmatrix}}_{\infty }{\begin{Vmatrix}{A}_{0}^{-1}e\end{Vmatrix}}_{\infty },\;j = 1,\ldots, n \]\n\n(11.14)\n\nwhere \( e = {\left( 1,\ldots ,1\right) }^{T} \) . The boundary value problem\n\n\[ - {u}_{0}^{\prime \prime } = 1,\;{u}_{0}\left( a\right) = {u}_{0}\left( b\right) = 0, \]\n\nhas the solution\n\n\[ {u}_{0}\left( x\right) = \frac{1}{2}\left( {x - a}\right) \left( {b - x}\right) \]\n\nSince \( {u}_{0}^{\left( 4\right) } = 0 \), in this case, as a consequence of (11.14) the finite difference approximation coincides with the exact solution; i.e., \( e = {A}_{0}U = {A}_{0}\widetilde{U} \) . 
Hence,\n\n\[ {\begin{Vmatrix}{A}_{0}^{-1}e\end{Vmatrix}}_{\infty } \leq {\begin{Vmatrix}{u}_{0}\end{Vmatrix}}_{\infty } = \frac{1}{8}{\left( b - a\right) }^{2}. \]\n\nInserting this into (11.14) completes the proof.
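The \( O(h^2) \) convergence asserted by Theorem 11.8 can be observed with a short solver; a sketch for the model problem \( -u'' + u = (1+\pi^2)\sin(\pi x) \) on \( [0,1] \) with exact solution \( u = \sin(\pi x) \) (the test problem and the Thomas-algorithm implementation are illustrative choices, not from the text):

```python
import math

def fd_solve(n, q, r):
    """Solve -u'' + q(x)u = r(x) on [0,1], u(0) = u(1) = 0, by the central
    difference scheme on n interior points; Thomas algorithm for the
    tridiagonal system with off-diagonal entries -1."""
    h = 1.0 / (n + 1)
    d = [2.0 + h * h * q((j + 1) * h) for j in range(n)]   # diagonal of h^2 A
    b = [h * h * r((j + 1) * h) for j in range(n)]         # scaled right-hand side
    c, y = [0.0] * n, [0.0] * n
    c[0], y[0] = -1.0 / d[0], b[0] / d[0]
    for j in range(1, n):                                  # forward elimination
        den = d[j] + c[j - 1]
        c[j], y[j] = -1.0 / den, (b[j] + y[j - 1]) / den
    u = [0.0] * n
    u[-1] = y[-1]
    for j in range(n - 2, -1, -1):                         # back substitution
        u[j] = y[j] - c[j] * u[j + 1]
    return u

def max_err(n):
    h = 1.0 / (n + 1)
    u = fd_solve(n, lambda x: 1.0,
                 lambda x: (1.0 + math.pi ** 2) * math.sin(math.pi * x))
    return max(abs(u[j] - math.sin(math.pi * (j + 1) * h)) for j in range(n))

e1, e2 = max_err(9), max_err(19)   # h = 0.1 and h = 0.05
```

Here \( \| u^{(4)} \|_\infty = \pi^4 \) and \( b - a = 1 \), so the theorem predicts an error at most \( h^2 \pi^4 / 96 \), and halving \( h \) should quarter the error.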
|
Yes
|
Theorem 11.9 For each \( h > 0 \) the difference equations (11.19)-(11.20) have a unique solution.
|
From the proof of Lemma 11.6 it can be seen that its statement also holds for the corresponding matrices of the system (11.19)-(11.20). Lemma 11.7 implies that\n\n\[ \n\left| {{\Delta u}\left( {{x}_{1},{x}_{2}}\right) - \frac{1}{{h}^{2}}\left\lbrack {u\left( {{x}_{1} + h,{x}_{2}}\right) + u\left( {{x}_{1} - h,{x}_{2}}\right) + u\left( {{x}_{1},{x}_{2} + h}\right) + u\left( {{x}_{1},{x}_{2} - h}\right) - {4u}\left( {{x}_{1},{x}_{2}}\right) }\right\rbrack }\right| \leq \frac{{h}^{2}}{12}\left\lbrack {{\begin{Vmatrix}\frac{{\partial }^{4}u}{\partial {x}_{1}^{4}}\end{Vmatrix}}_{\infty } + {\begin{Vmatrix}\frac{{\partial }^{4}u}{\partial {x}_{2}^{4}}\end{Vmatrix}}_{\infty }}\right\rbrack ,\n\]\n\nprovided that \( u \in {C}^{4}\left( {\left\lbrack {0,1}\right\rbrack \times \left\lbrack {0,1}\right\rbrack }\right) \) . Then we can proceed as in the proof of Theorem 11.8 to derive an error estimate. For this we need to have an estimate on the solution of\n\n\[ \n- \Delta {u}_{0} = 1\;\text{ in }D,\;{u}_{0} = 0\;\text{ on }\partial D.\n\]\n\n(11.21)\n\nEither from an explicit form of the solution obtained by separation of variables or by writing\n\n\[ \n{u}_{0}\left( x\right) = \frac{1}{4}\left( {1 - {x}_{1}}\right) {x}_{1} + \frac{1}{4}\left( {1 - {x}_{2}}\right) {x}_{2} + {v}_{0}\left( x\right)\n\]\n\nwhere \( {v}_{0} \) is a harmonic function, i.e., a solution of \( \Delta {v}_{0} = 0 \), and employing the maximum-minimum principle for harmonic functions (see [39]), it can be seen that \( {\begin{Vmatrix}{u}_{0}\end{Vmatrix}}_{\infty } \leq 1/8 \) (see Problem 11.10). Hence we can state the following theorem.
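The bound \( \| u_0 \|_\infty \le 1/8 \) for problem (11.21) can also be observed on the discrete level; a minimal sketch using Jacobi iteration for the five-point scheme (grid size and sweep count are illustrative choices):

```python
def poisson_max(n, sweeps):
    """Jacobi iteration for the five-point scheme for -Laplace(u) = 1 on the
    unit square with u = 0 on the boundary; returns the maximal grid value."""
    h = 1.0 / (n + 1)
    u = [[0.0] * (n + 2) for _ in range(n + 2)]   # boundary rows/columns stay 0
    for _ in range(sweeps):
        v = [row[:] for row in u]
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                u[i][j] = 0.25 * (v[i - 1][j] + v[i + 1][j]
                                  + v[i][j - 1] + v[i][j + 1] + h * h)
    return max(max(row) for row in u)

m = poisson_max(15, 2000)   # approximates the maximum of u0, attained at the center
```

The maximum is attained at the center of the square, where the continuous solution has the value \( \approx 0.0737 \), comfortably below the bound \( 1/8 \).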
|
No
|
Theorem 11.10 Assume that the solution to the boundary value problem (11.17)-(11.18) is four-times continuously differentiable. Then the error of the finite difference approximation can be estimated by\n\n\[ \n\left| {u\left( {x}_{ij}\right) - {u}_{ij}}\right| \leq \frac{{h}^{2}}{96}\left\lbrack {{\begin{Vmatrix}\frac{{\partial }^{4}u}{\partial {x}_{1}^{4}}\end{Vmatrix}}_{\infty } + {\begin{Vmatrix}\frac{{\partial }^{4}u}{\partial {x}_{2}^{4}}\end{Vmatrix}}_{\infty }}\right\rbrack ,\;i, j = 1,\ldots, n.\n\]
|
Null
|
No
|
Theorem 11.11 (Riesz) Let \( X \) be a Hilbert space. Then for each bounded linear function \( F : X \rightarrow \mathbb{C} \) there exists a unique element \( f \in X \) such that\n\n\[ F\left( u\right) = \left( {u, f}\right) \]\n\nfor all \( u \in X \) . The norms of the element \( f \) and the linear function \( F \) coincide; i.e.,\n\n\[ \parallel f\parallel = \parallel F\parallel \text{.} \]
|
Proof. Uniqueness follows from the observation that because of the positive definiteness of the scalar product, \( f = 0 \) is the only element representing the zero function \( F = 0 \) in the sense of (11.22). For \( F \neq 0 \) choose \( w \in X \) with \( F\left( w\right) \neq 0 \) . Since \( F \) is continuous, the nullspace\n\n\[ N\left( F\right) = \{ u \in X : F\left( u\right) = 0\} \]\n\ncan be seen to be a closed, and consequently, by Remark 3.40, a complete, subspace of the Hilbert space \( X \) . By the approximation Theorem 3.52 there exists the best approximation \( v \) to \( w \) with respect to \( N\left( F\right) \) . By Theorem 3.51 it satisfies \( w - v \bot N\left( F\right) \) . Then for \( g \mathrel{\text{:=}} w - v \) we have that\n\n\[ \left( {F\left( g\right) u - F\left( u\right) g, g}\right) = 0,\;u \in X, \]\n\nsince \( F\left( g\right) u - F\left( u\right) g \in N\left( F\right) \) for all \( u \in X \) . Hence,\n\n\[ F\left( u\right) = \left( {u,\frac{\overline{F\left( g\right) }g}{\parallel g{\parallel }^{2}}}\right) \]\n\nfor all \( u \in X \), which completes the proof of (11.22).\n\nFrom (11.22) and the Cauchy-Schwarz inequality we have that\n\n\[ \left| {F\left( u\right) }\right| \leq \parallel f\parallel \parallel u\parallel ,\;u \in X \]\n\nwhence \( \parallel F\parallel \leq \parallel f\parallel \) follows. On the other hand, inserting \( f \) into (11.22) yields\n\n\[ \parallel f{\parallel }^{2} = F\left( f\right) \leq \parallel F\parallel \parallel f\parallel \]\n\nand therefore \( \parallel f\parallel \leq \parallel F\parallel \) . This concludes the proof of the norm equality (11.23).
|
Yes
|
Theorem 11.13 (Lax-Milgram) In a Hilbert space \( X \) a bounded and strictly coercive linear operator \( A : X \rightarrow X \) has a bounded inverse \( {A}^{-1} : X \rightarrow X \) .
|
Proof. Using the Cauchy-Schwarz inequality, we can estimate\n\n\[ \parallel {Au}\parallel \parallel u\parallel \geq \operatorname{Re}\left( {{Au}, u}\right) \geq c\parallel u{\parallel }^{2}. \]\n\nHence\n\n\[ \parallel {Au}\parallel \geq c\parallel u\parallel \]\n\n(11.25)\n\nfor all \( u \in X \) . From (11.25) we observe that \( {Au} = 0 \) implies \( u = 0 \) ; i.e., \( A \) is injective.\n\nNext we show that the range \( A\left( X\right) \) is closed. Let \( v \) be an element of the closure \( \overline{A\left( X\right) } \) and let \( \left( {v}_{n}\right) \) be a sequence from \( A\left( X\right) \) with \( {v}_{n} \rightarrow v, n \rightarrow \infty \) . Then we can write \( {v}_{n} = A{u}_{n} \) with some \( {u}_{n} \in X \), and from (11.25) we find that\n\n\[ c\begin{Vmatrix}{{u}_{n} - {u}_{m}}\end{Vmatrix} \leq \begin{Vmatrix}{{v}_{n} - {v}_{m}}\end{Vmatrix} \]\n\nfor all \( n, m \in \mathbb{N} \) . Therefore, \( \left( {u}_{n}\right) \) is a Cauchy sequence in \( X \) and converges: \( {u}_{n} \rightarrow u, n \rightarrow \infty \), with some \( u \in X \) . Then \( v = {Au} \), since \( A \) is continuous, and \( A\left( X\right) = \overline{A\left( X\right) } \) is proven.\n\nFrom Remark 3.40 we now have that \( A\left( X\right) \) is complete. Let \( w \in X \) be arbitrary and denote by \( v \) its best approximation with respect to \( A\left( X\right) \) , which uniquely exists by Theorem 3.52. Then, by Theorem 3.51, we have \( \left( {w - v, u}\right) = 0 \) for all \( u \in A\left( X\right) \) . In particular, \( \left( {w - v, A\left( {w - v}\right) }\right) = 0 \) . Hence, from (11.24) we see that \( w = v \in A\left( X\right) \) . Therefore, \( A \) is surjective. Finally, the boundedness of the inverse\n\n\[ \begin{Vmatrix}{A}^{-1}\end{Vmatrix} \leq \frac{1}{c} \]\n\n(11.26)\n\nis a consequence of (11.25).
|
Yes
|
Theorem 11.15 Let \( S \) be a bounded and strictly coercive sesquilinear function on a Hilbert space \( X \) . Then there exists a uniquely determined bounded and strictly coercive linear operator \( A : X \rightarrow X \) such that\n\n\[ S\left( {u, v}\right) = \left( {u,{Av}}\right) \] for all \( u, v \in X \) .
|
Proof. For each \( v \in X \) the mapping \( u \mapsto S\left( {u, v}\right) \) clearly defines a bounded linear function on \( X \), since \( \left| {S\left( {u, v}\right) }\right| \leq C\parallel u\parallel \parallel v\parallel \) . By the Riesz Theorem 11.11 we can write \( S\left( {u, v}\right) = \left( {u, f}\right) \) for all \( u \in X \) and some \( f \in X \) . Therefore, setting \( {Av} \mathrel{\text{:=}} f \) we define an operator \( A : X \rightarrow X \) such that \( S\left( {u, v}\right) = \left( {u,{Av}}\right) \) for all \( u, v \in X \) .\n\nTo show that \( A \) is linear we observe that\n\n\[ \left( {u,{\alpha Av} + {\beta Aw}}\right) = \bar{\alpha }\left( {u,{Av}}\right) + \bar{\beta }\left( {u,{Aw}}\right) = \bar{\alpha }S\left( {u, v}\right) + \bar{\beta }S\left( {u, w}\right) \]\n\n\[ = S\left( {u,{\alpha v} + {\beta w}}\right) = \left( {u, A\left\lbrack {{\alpha v} + {\beta w}}\right\rbrack }\right) \]\n\nfor all \( u, v, w \in X \) and all \( \alpha ,\beta \in \mathbb{C} \) . The boundedness of \( A \) follows from\n\n\[ \parallel {Au}{\parallel }^{2} = \left( {{Au},{Au}}\right) = S\left( {{Au}, u}\right) \leq C\parallel {Au}\parallel \parallel u\parallel \]\n\nand the strict coercivity of \( A \) is a consequence of the strict coercivity of \( S \) .\n\nTo show uniqueness of the operator \( A \) we suppose that there exist two operators \( {A}_{1} \) and \( {A}_{2} \) with the property\n\n\[ S\left( {u, v}\right) = \left( {u,{A}_{1}v}\right) = \left( {u,{A}_{2}v}\right) \]\n\nfor all \( u, v \in X \) . Then we have \( \left( {u,{A}_{1}v - {A}_{2}v}\right) = 0 \) for all \( u, v \in X \), which implies \( {A}_{1}v = {A}_{2}v \) for all \( v \in X \) by setting \( u = {A}_{1}v - {A}_{2}v \) .
|
Yes
|
Corollary 11.16 Let \( S \) be a bounded and strictly coercive sesquilinear function and \( F \) a bounded linear function on a Hilbert space \( X \) . Then there exists a unique \( u \in X \) such that\n\n\[ S\left( {v, u}\right) = F\left( v\right) \]\n\nfor all \( v \in X \) .
|
Proof. By Theorem 11.15 there exists a uniquely determined bounded and strictly coercive linear operator \( A \) such that\n\n\[ S\left( {v, u}\right) = \left( {v,{Au}}\right) \]\n\nfor all \( u, v \in X \), and by Theorem 11.11 there exists a uniquely determined element \( f \) such that\n\n\[ F\left( v\right) = \left( {v, f}\right) \]\n\nfor all \( v \in X \) . Hence, the equation (11.27) is equivalent to the equation\n\n\[ {Au} = f\text{.} \]\n\nHowever, the latter equation is uniquely solvable as a consequence of the Lax-Milgram Theorem 11.13.\n\nSince the coercivity constants for \( A \) and \( S \) coincide, from (11.23) and (11.26) we conclude that\n\n\[ \parallel u\parallel \leq \frac{1}{c}\parallel F\parallel \]\n\nfor the unique solution \( u \) of (11.27).
|
Yes
|
Theorem 11.17 For a bounded and strictly coercive linear operator \( A \) the Galerkin equations (11.30) have a unique solution. It satisfies the error estimate\n\n\[ \begin{Vmatrix}{{u}_{n} - u}\end{Vmatrix} \leq M\mathop{\inf }\limits_{{v \in {X}_{n}}}\parallel v - u\parallel \] \n\n(11.32) \n\nwhere \( M \) is some constant depending on \( A \) (and not on \( {X}_{n} \) ).
|
Proof. Since \( {A}_{n} : {X}_{n} \rightarrow {X}_{n} \) is strictly coercive with coercivity constant \( c \), by the Lax-Milgram Theorem 11.13 we conclude that \( {A}_{n} \) is bijective; i.e., the Galerkin equations (11.30) have a unique solution \( {u}_{n} \in {X}_{n} \) . The estimate (11.26) applied to the operator \( {A}_{n} \) implies that \n\n\[ \begin{Vmatrix}{A}_{n}^{-1}\end{Vmatrix} \leq \frac{1}{c} \] \n\n(11.33) \n\nFor the error \( {u}_{n} - u \) between the Galerkin approximation \( {u}_{n} \) and the exact solution \( u \) we can write \n\n\[ {u}_{n} - u = \left( {{A}_{n}^{-1}{P}_{n}A - I}\right) u = \left( {{A}_{n}^{-1}{P}_{n}A - I}\right) \left( {u - v}\right) \] \n\nfor all \( v \in {X}_{n} \), since, trivially, we have \( {A}_{n}^{-1}{P}_{n}{Av} = v \) for \( v \in {X}_{n} \) . By Theorem 3.52 we have \( \begin{Vmatrix}{P}_{n}\end{Vmatrix} = 1 \), and therefore, using Remark 3.25 and (11.33) we can estimate \n\n\[ \begin{Vmatrix}{{A}_{n}^{-1}{P}_{n}A}\end{Vmatrix} \leq \frac{1}{c}\parallel A\parallel \] \n\nwhence (11.32) follows.
|
Yes
|
Theorem 11.19 The linear space\n\n\[ \n{H}^{1}\left\lbrack {a, b}\right\rbrack \mathrel{\text{:=}} \left\{ {u \in {L}^{2}\left\lbrack {a, b}\right\rbrack : {u}^{\prime } \in {L}^{2}\left\lbrack {a, b}\right\rbrack }\right\} \n\]\n\nendowed with the scalar product\n\n\[ \n{\left( u, v\right) }_{{H}^{1}} \mathrel{\text{:=}} {\int }_{a}^{b}\left( {{uv} + {u}^{\prime }{v}^{\prime }}\right) {dx} \n\]\n\n(11.44)\n\nis a Hilbert space.
|
Proof. It is readily checked that \( {H}^{1}\left\lbrack {a, b}\right\rbrack \) is a linear space and that (11.44) defines a scalar product. Let \( \left( {u}_{n}\right) \) denote an \( {H}^{1} \) Cauchy sequence. Then \( \left( {u}_{n}\right) \) and \( \left( {u}_{n}^{\prime }\right) \) are both \( {L}^{2} \) Cauchy sequences. From the completeness of \( {L}^{2}\left\lbrack {a, b}\right\rbrack \) we obtain the existence of \( u \in {L}^{2}\left\lbrack {a, b}\right\rbrack \) and \( w \in {L}^{2}\left\lbrack {a, b}\right\rbrack \) such that \( {\begin{Vmatrix}{u}_{n} - u\end{Vmatrix}}_{2} \rightarrow 0 \) and \( {\begin{Vmatrix}{u}_{n}^{\prime } - w\end{Vmatrix}}_{2} \rightarrow 0 \) as \( n \rightarrow \infty \). Then for all \( v \in {C}^{1}\left\lbrack {a, b}\right\rbrack \) with \( v\left( a\right) = v\left( b\right) = 0 \) we can estimate\n\n\[ \n{\int }_{a}^{b}\left( {u{v}^{\prime } + {wv}}\right) {dx} = {\int }_{a}^{b}\left\{ {\left( {u - {u}_{n}}\right) {v}^{\prime } + \left( {w - {u}_{n}^{\prime }}\right) v}\right\} {dx} \n\]\n\n\[ \n\leq {\begin{Vmatrix}u - {u}_{n}\end{Vmatrix}}_{{L}^{2}}{\begin{Vmatrix}{v}^{\prime }\end{Vmatrix}}_{{L}^{2}} + {\begin{Vmatrix}w - {u}_{n}^{\prime }\end{Vmatrix}}_{{L}^{2}}\parallel v{\parallel }_{{L}^{2}} \rightarrow 0,\;n \rightarrow \infty .\n\]\n\nTherefore, \( u \in {H}^{1}\left\lbrack {a, b}\right\rbrack \) with \( {u}^{\prime } = w \), and \( {\begin{Vmatrix}u - {u}_{n}\end{Vmatrix}}_{{H}^{1}} \rightarrow 0, n \rightarrow \infty \), which completes the proof.
|
Yes
|
Theorem 11.20 \( {C}^{1}\left\lbrack {a, b}\right\rbrack \) is dense in \( {H}^{1}\left\lbrack {a, b}\right\rbrack \) .
|
Proof. Since \( C\left\lbrack {a, b}\right\rbrack \) is dense in \( {L}^{2}\left\lbrack {a, b}\right\rbrack \), for each \( u \in {H}^{1}\left\lbrack {a, b}\right\rbrack \) and \( \varepsilon > 0 \) there exists \( w \in C\left\lbrack {a, b}\right\rbrack \) such that \( {\begin{Vmatrix}{u}^{\prime } - w\end{Vmatrix}}_{2} < \varepsilon \) . Then we define \( v \in {C}^{1}\left\lbrack {a, b}\right\rbrack \) by\n\n\[ v\left( x\right) \mathrel{\text{:=}} u\left( a\right) + {\int }_{a}^{x}w\left( \xi \right) {d\xi } \]\n\nand using (11.43), we have\n\n\[ u\left( x\right) - v\left( x\right) = {\int }_{a}^{x}\left\{ {{u}^{\prime }\left( \xi \right) - w\left( \xi \right) }\right\} {d\xi }. \]\n\nBy the Cauchy-Schwarz inequality this implies \( \parallel u - v{\parallel }_{2} < \left( {b - a}\right) \varepsilon \) . Since also \( {\begin{Vmatrix}{u}^{\prime } - {v}^{\prime }\end{Vmatrix}}_{2} = {\begin{Vmatrix}{u}^{\prime } - w\end{Vmatrix}}_{2} < \varepsilon \), the function \( v \) approximates \( u \) in the \( {H}^{1} \) norm, and the proof is complete.
|
Yes
|
Theorem 11.21 \( {H}^{1}\left\lbrack {a, b}\right\rbrack \) is contained in \( C\left\lbrack {a, b}\right\rbrack \) .
|
Proof. From (11.43) we have\n\n\[ u\left( x\right) - u\left( y\right) = {\int }_{y}^{x}{u}^{\prime }\left( \xi \right) {d\xi } \]\n\n(11.45)\n\nwhence by the Cauchy-Schwarz inequality,\n\n\[ \left| {u\left( x\right) - u\left( y\right) }\right| \leq {\left| x - y\right| }^{1/2}{\begin{Vmatrix}{u}^{\prime }\end{Vmatrix}}_{2} \]\n\nfollows for all \( x, y \in \left\lbrack {a, b}\right\rbrack \) . Therefore, every function \( u \in {H}^{1}\left\lbrack {a, b}\right\rbrack \) belongs to \( C\left\lbrack {a, b}\right\rbrack \), or more precisely, it coincides almost everywhere with a continuous function.
|
Yes
|
Theorem 11.22 The space\n\n\[ \n{H}_{0}^{1}\left\lbrack {a, b}\right\rbrack \mathrel{\text{:=}} \left\{ {u \in {H}^{1}\left\lbrack {a, b}\right\rbrack : u\left( a\right) = u\left( b\right) = 0}\right\} \n\]\n\nis a complete subspace of \( {H}^{1}\left\lbrack {a, b}\right\rbrack \) .
|
Proof. Since the \( {H}^{1} \) norm is stronger than the maximum norm, each \( {H}^{1} \) convergent sequence of elements of \( {H}_{0}^{1}\left\lbrack {a, b}\right\rbrack \) has its limit in \( {H}_{0}^{1}\left\lbrack {a, b}\right\rbrack \) . Therefore \( {H}_{0}^{1}\left\lbrack {a, b}\right\rbrack \) is a closed subspace of \( {H}^{1}\left\lbrack {a, b}\right\rbrack \), and the statement follows from Remark 3.40.
|
Yes
|
Theorem 11.24 Assume that \( p > 0 \) and \( q \geq 0 \) . Then there exists a unique weak solution to the boundary value problem (11.36)-(11.37).
|
Proof. The sesquilinear function \( S : {H}_{0}^{1}\left\lbrack {a, b}\right\rbrack \times {H}_{0}^{1}\left\lbrack {a, b}\right\rbrack \rightarrow \mathbb{R} \) is bounded, since\n\n\[ \left| {S\left( {u, v}\right) }\right| \leq \max \left\{ {\parallel p{\parallel }_{\infty },\parallel q{\parallel }_{\infty }}\right\} \parallel u{\parallel }_{{H}^{1}}\parallel v{\parallel }_{{H}^{1}} \]\n\nby the Cauchy-Schwarz inequality. For \( u \in {H}_{0}^{1}\left\lbrack {a, b}\right\rbrack \), from (11.45) and the Cauchy-Schwarz inequality we obtain that\n\n\[ \parallel u{\parallel }_{{L}^{2}}^{2} = {\int }_{a}^{b}{\left| {\int }_{a}^{x}{u}^{\prime }\left( \xi \right) d\xi \right| }^{2}{dx} \leq {\left( b - a\right) }^{2}{\begin{Vmatrix}{u}^{\prime }\end{Vmatrix}}_{{L}^{2}}^{2}. \]\n\nHence we can estimate\n\n\[ S\left( {u, u}\right) \geq \mathop{\min }\limits_{{a \leq x \leq b}}p\left( x\right) {\int }_{a}^{b}{\left| {u}^{\prime }\right| }^{2}{dx} \geq c\parallel u{\parallel }_{{H}^{1}}^{2} \]\n\nfor all \( u \in {H}_{0}^{1}\left\lbrack {a, b}\right\rbrack \) and some positive constant \( c \) ; i.e., \( S \) is strictly coercive. Finally, by the Cauchy-Schwarz inequality we have\n\n\[ \left| {F\left( v\right) }\right| \leq \parallel r{\parallel }_{{L}^{2}}\parallel v{\parallel }_{{L}^{2}} \leq \parallel r{\parallel }_{{L}^{2}}\parallel v{\parallel }_{{H}^{1}} \]\n\ni.e., the linear function \( F : {H}_{0}^{1}\left\lbrack {a, b}\right\rbrack \rightarrow \mathbb{R} \) is bounded. Now the statement of the theorem follows from Corollary 11.16.\n\nWe note that from (11.28) and the previous inequality it follows that\n\n\[ \parallel u{\parallel }_{{H}^{1}} \leq \frac{1}{c}\parallel r{\parallel }_{{L}^{2}} \]\n\n(11.46)\n\nfor the weak solution \( u \) to the boundary value problem (11.36)-(11.37).
|
Yes
|
Theorem 11.25 Each weak solution to the boundary value problem (11.36)- (11.37) is also a classical solution; i.e., it is twice continuously differentiable.
|
Proof. Define\n\n\[ f\left( x\right) \mathrel{\text{:=}} {\int }_{a}^{x}\left\lbrack {q\left( \xi \right) u\left( \xi \right) - r\left( \xi \right) }\right\rbrack {d\xi },\;x \in \left\lbrack {a, b}\right\rbrack . \]\n\nThen \( f \in {C}^{1}\left\lbrack {a, b}\right\rbrack \) . From (11.38), by partial integration we obtain\n\n\[ {\int }_{a}^{b}\left\lbrack {p{u}^{\prime } - f}\right\rbrack {v}^{\prime }{dx} = 0 \]\n\nfor all \( v \in {H}_{0}^{1}\left\lbrack {a, b}\right\rbrack \) . Now we set\n\n\[ c \mathrel{\text{:=}} \frac{1}{b - a}{\int }_{a}^{b}\left\lbrack {p{u}^{\prime } - f}\right\rbrack {d\xi } \]\n\nand\n\n\[ {v}_{0}\left( x\right) \mathrel{\text{:=}} {\int }_{a}^{x}\left\lbrack {p\left( \xi \right) {u}^{\prime }\left( \xi \right) - f\left( \xi \right) - c}\right\rbrack {d\xi },\;x \in \left\lbrack {a, b}\right\rbrack . \]\n\nThen \( {v}_{0} \in {H}_{0}^{1}\left\lbrack {a, b}\right\rbrack \) and\n\n\[ {\int }_{a}^{b}{\left\lbrack p{u}^{\prime } - f - c\right\rbrack }^{2}{dx} = {\int }_{a}^{b}\left\lbrack {p{u}^{\prime } - f - c}\right\rbrack {v}_{0}^{\prime }{dx} \]\n\n\[ = {\int }_{a}^{b}\left\lbrack {p{u}^{\prime } - f}\right\rbrack {v}_{0}^{\prime }{dx} - c{\int }_{a}^{b}{v}_{0}^{\prime }{dx} = 0. \]\n\nHence\n\n\[ p{u}^{\prime } = f + c \]\n\nand since \( f \) and \( p \) are in \( {C}^{1}\left\lbrack {a, b}\right\rbrack \) with \( p\left( x\right) > 0 \) for all \( x \in \left\lbrack {a, b}\right\rbrack \), we can conclude that \( {u}^{\prime } \in {C}^{1}\left\lbrack {a, b}\right\rbrack \) and\n\n\[ {\left( p{u}^{\prime }\right) }^{\prime } = {f}^{\prime } = {qu} - r. \]\n\nThis completes the proof.
|
Yes
|
Lemma 11.26 Let \( f \in {C}^{2}\left\lbrack {a, b}\right\rbrack \) . Then the remainder \( {R}_{1}f \mathrel{\text{:=}} f - {L}_{1}f \) for the linear interpolation at the two endpoints \( a \) and \( b \) can be estimated by\n\n\[ \n{\begin{Vmatrix}{R}_{1}f\end{Vmatrix}}_{{L}^{2}} \leq {\left( b - a\right) }^{2}{\begin{Vmatrix}{f}^{\prime \prime }\end{Vmatrix}}_{{L}^{2}} \]\n\n(11.49)\n\n\[ \n{\begin{Vmatrix}{\left( {R}_{1}f\right) }^{\prime }\end{Vmatrix}}_{{L}^{2}} \leq \left( {b - a}\right) {\begin{Vmatrix}{f}^{\prime \prime }\end{Vmatrix}}_{{L}^{2}}. \]\n
|
Proof. For each function \( g \in {C}^{1}\left\lbrack {a, b}\right\rbrack \) satisfying \( g\left( a\right) = 0 \), from\n\n\[ \ng\left( x\right) = {\int }_{a}^{x}{g}^{\prime }\left( \xi \right) {d\xi } \]\n\nby using the Cauchy-Schwarz inequality we obtain\n\n\[ \n{\left| g\left( x\right) \right| }^{2} \leq \left( {b - a}\right) {\begin{Vmatrix}{g}^{\prime }\end{Vmatrix}}_{{L}^{2}}^{2},\;x \in \left\lbrack {a, b}\right\rbrack \]\n\nFrom this, by integration we derive the Friedrich inequality\n\n\[ \n\parallel g{\parallel }_{{L}^{2}} \leq \left( {b - a}\right) {\begin{Vmatrix}{g}^{\prime }\end{Vmatrix}}_{{L}^{2}} \]\n\n(11.50)\n\nfor functions \( g \in {C}^{1}\left\lbrack {a, b}\right\rbrack \) with \( g\left( a\right) = 0 \) (or \( g\left( b\right) = 0 \) ). Using the interpolation property \( \left( {{R}_{1}f}\right) \left( a\right) = \left( {{R}_{1}f}\right) \left( b\right) = 0 \), by partial integration we obtain\n\n\[ \n{\int }_{a}^{b}{\left\lbrack {f}^{\prime } - {\left( {L}_{1}f\right) }^{\prime }\right\rbrack }^{2}{dx} = {\int }_{a}^{b}{f}^{\prime \prime }\left( {{L}_{1}f - f}\right) {dx}. \]\n\nFrom this, again applying the Cauchy-Schwarz inequality, we have\n\n\[ \n{\begin{Vmatrix}{\left( {R}_{1}f\right) }^{\prime }\end{Vmatrix}}_{{L}^{2}}^{2} \leq {\begin{Vmatrix}{f}^{\prime \prime }\end{Vmatrix}}_{{L}^{2}}{\begin{Vmatrix}{R}_{1}f\end{Vmatrix}}_{{L}^{2}} \]\n\nwhence (11.49) follows with the aid of Friedrich's inequality (11.50) for \( g = {R}_{1}f \) .
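Both estimates of the lemma can be checked numerically; a sketch for \( f = \sin \) on a sample interval, with \( L^2 \) norms approximated by the midpoint rule (the interval, test function, and quadrature resolution are illustrative choices):

```python
import math

a, b = 0.0, 0.5
N = 20000
h = (b - a) / N
slope = (math.sin(b) - math.sin(a)) / (b - a)   # slope of the interpolant L1 f

def R1(x):    # remainder f - L1 f of linear interpolation at a and b
    return math.sin(x) - (math.sin(a) + slope * (x - a))

def dR1(x):   # its derivative
    return math.cos(x) - slope

def l2(g):    # L2 norm on [a, b] by the midpoint rule
    return math.sqrt(sum(g(a + (k + 0.5) * h) ** 2 for k in range(N)) * h)

f2 = l2(math.sin)                        # ||f''||_{L2}, since f'' = -sin
ok0 = l2(R1) <= (b - a) ** 2 * f2        # estimate (11.49)
ok1 = l2(dR1) <= (b - a) * f2            # estimate for the derivative
```

Both inequalities hold with room to spare, reflecting that the constants in the lemma are not sharp.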
|
Yes
|
Theorem 11.27 The error in the finite element approximation by linear splines for the boundary value problem (11.36)-(11.37) can be estimated by\n\n\[ \n{\begin{Vmatrix}{u}_{n} - u\end{Vmatrix}}_{{H}^{1}} \leq C{\begin{Vmatrix}{u}^{\prime \prime }\end{Vmatrix}}_{{L}^{2}}h\n\]\n\n(11.51)\n\nfor some positive constant \( C \) .
|
Proof. By summing up the inequalities (11.49), applied to each of the subintervals of length \( h \), for the interpolating linear spline \( {w}_{n} \in {X}_{n} \) with \( {w}_{n}\left( {x}_{j}\right) = u\left( {x}_{j}\right) \) for \( j = 0,\ldots, n \) we find that\n\n\[ \n{\begin{Vmatrix}{w}_{n}^{\prime } - {u}^{\prime }\end{Vmatrix}}_{{L}^{2}} \leq {\begin{Vmatrix}{u}^{\prime \prime }\end{Vmatrix}}_{{L}^{2}}h\n\]\n\nand\n\n\[ \n{\begin{Vmatrix}{w}_{n} - u\end{Vmatrix}}_{{L}^{2}} \leq {\begin{Vmatrix}{u}^{\prime \prime }\end{Vmatrix}}_{{L}^{2}}{h}^{2}\n\]\n\nwhence\n\n\[ \n\mathop{\inf }\limits_{{v \in {X}_{n}}}\parallel v - u{\parallel }_{{H}^{1}} \leq {\begin{Vmatrix}{w}_{n} - u\end{Vmatrix}}_{{H}^{1}} \leq \left( {1 + b - a}\right) {\begin{Vmatrix}{u}^{\prime \prime }\end{Vmatrix}}_{{L}^{2}}h\n\]\n\nfollows. Now (11.51) is a consequence of the error estimate for the Galerkin method of Theorem 11.17.
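The two interpolation estimates driving this proof predict \( O(h^2) \) behavior in \( L^2 \) and \( O(h) \) for the derivative, which can be observed numerically; a sketch for \( u = \sin(\pi x) \) on \( [0,1] \) (grid sizes and the midpoint-rule quadrature are illustrative choices):

```python
import math

def spline_errs(n, m=40):
    """L2 errors of u - w_n and u' - w_n' for the linear spline interpolant
    of u = sin(pi x) on n subintervals of [0, 1], computed with m midpoint
    quadrature points per subinterval."""
    h = 1.0 / n
    e0sq = e1sq = 0.0
    for j in range(n):
        xj = j * h
        u0, u1 = math.sin(math.pi * xj), math.sin(math.pi * (xj + h))
        slope = (u1 - u0) / h
        for k in range(m):
            x = xj + (k + 0.5) * h / m
            e0sq += (math.sin(math.pi * x) - u0 - slope * (x - xj)) ** 2 * h / m
            e1sq += (math.pi * math.cos(math.pi * x) - slope) ** 2 * h / m
    return math.sqrt(e0sq), math.sqrt(e1sq)

a0, a1 = spline_errs(8)
b0, b1 = spline_errs(16)
r0, r1 = a0 / b0, a1 / b1   # expect roughly 4 and roughly 2
```

Doubling \( n \) quarters the \( L^2 \) error and halves the \( H^1 \) seminorm error, matching the two inequalities in the proof.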
|
Yes
|
Theorem 11.28 The error in the finite element approximation by linear splines for the boundary value problem (11.36)-(11.37) can be estimated by\n\n\[ \n{\begin{Vmatrix}{u}_{n} - u\end{Vmatrix}}_{{L}^{2}} \leq C{\begin{Vmatrix}{u}^{\prime \prime }\end{Vmatrix}}_{{L}^{2}}{h}^{2}\n\]\n\nwith some positive constant \( C \) .
|
Proof. Denote by \( {z}_{n} \) the weak solution to the boundary value problem with the right-hand side \( u - {u}_{n} \) ; i.e.,\n\n\[ \nS\left( {v,{z}_{n}}\right) = {\left( v, u - {u}_{n}\right) }_{{L}^{2}}\n\]\n\nfor all \( v \in {H}_{0}^{1}\left\lbrack {a, b}\right\rbrack \) . In particular, inserting \( v = u - {u}_{n} \), it follows that\n\n\[ \nS\left( {u - {u}_{n},{z}_{n}}\right) = {\begin{Vmatrix}u - {u}_{n}\end{Vmatrix}}_{{L}^{2}}^{2}\n\]\n\n(11.52)\n\nSince \( S\left( {v, u}\right) = F\left( v\right) \) and \( S\left( {v,{u}_{n}}\right) = F\left( v\right) \) for all \( v \in {X}_{n} \), using the symmetry of \( S \) we have\n\n\[ \nS\left( {u - {u}_{n}, v}\right) = 0\n\]\n\nfor all \( v \in {X}_{n} \) . Inserting the Galerkin approximation to \( {z}_{n} \), which we denote by \( {\widetilde{z}}_{n} \), into the last equation and subtracting from (11.52), we obtain\n\n\[ \n{\begin{Vmatrix}u - {u}_{n}\end{Vmatrix}}_{{L}^{2}}^{2} = S\left( {u - {u}_{n},{z}_{n} - {\widetilde{z}}_{n}}\right) .\n\]\n\n(11.53)\n\nSince \( S \) is bounded, from (11.53) and (11.51), applied to \( u - {u}_{n} \) and \( {z}_{n} - {\widetilde{z}}_{n} \) , we can conclude that\n\n\[ \n{\begin{Vmatrix}u - {u}_{n}\end{Vmatrix}}_{{L}^{2}}^{2} \leq {C}_{1}{\begin{Vmatrix}{u}^{\prime \prime }\end{Vmatrix}}_{{L}^{2}}{\begin{Vmatrix}{z}_{n}^{\prime \prime }\end{Vmatrix}}_{{L}^{2}}{h}^{2}\n\]\n\nfor some constant \( {C}_{1} \) . However, from (11.47) we also have that\n\n\[ \n{\begin{Vmatrix}{z}_{n}^{\prime \prime }\end{Vmatrix}}_{{L}^{2}} \leq {C}_{2}{\begin{Vmatrix}u - {u}_{n}\end{Vmatrix}}_{{L}^{2}}\n\]\n\nfor some constant \( {C}_{2} \) . Now the assertion of the theorem follows from the last two inequalities.
|
Yes
|
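The convergence rates of Theorems 11.27 and 11.28 are easy to observe numerically. The sketch below is a minimal illustration under stated assumptions: since the boundary value problem (11.36)-(11.37) is not reproduced here, it assumes the model problem \( -u'' = f \) on \( \left\lbrack 0,1 \right\rbrack \) with \( u(0) = u(1) = 0 \), \( f(x) = \pi^2 \sin \pi x \), and exact solution \( u(x) = \sin \pi x \); the linear-spline stiffness matrix is tridiagonal, and the load vector is approximated by mass lumping.

```python
import math

def fem_max_error(n):
    # Linear-spline FEM for the assumed model problem -u'' = f on [0,1],
    # u(0) = u(1) = 0, with f(x) = pi^2 sin(pi x); exact solution sin(pi x).
    h = 1.0 / n
    # stiffness matrix: tridiagonal with 2/h on the diagonal and -1/h off it;
    # load vector approximated by mass lumping: b_j ~ h * f(x_j)
    sub = [-1.0 / h] * (n - 2)
    diag = [2.0 / h] * (n - 1)
    sup = [-1.0 / h] * (n - 2)
    b = [h * math.pi ** 2 * math.sin(math.pi * (j + 1) * h) for j in range(n - 1)]
    # Thomas algorithm for the tridiagonal system
    for i in range(1, n - 1):
        m = sub[i - 1] / diag[i - 1]
        diag[i] -= m * sup[i - 1]
        b[i] -= m * b[i - 1]
    u = [0.0] * (n - 1)
    u[-1] = b[-1] / diag[-1]
    for i in range(n - 3, -1, -1):
        u[i] = (b[i] - sup[i] * u[i + 1]) / diag[i]
    # maximum nodal error against the exact solution
    return max(abs(u[j] - math.sin(math.pi * (j + 1) * h)) for j in range(n - 1))

e1, e2 = fem_max_error(16), fem_max_error(32)
print(e1 / e2)  # close to 4, matching the O(h^2) rate
```

Halving \( h \) reduces the nodal error by roughly a factor of four, consistent with the second-order estimate; measuring the error in the \( {H}^{1} \) norm instead would exhibit the first-order rate of Theorem 11.27.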
Theorem 12.2 Let \( A : X \rightarrow X \) be a compact operator in a normed space \( X \) . Then \( I - A \) is surjective if and only if it is injective. If the inverse operator \( {\left( I - A\right) }^{-1} : X \rightarrow X \) exists, it is bounded.
|
Null
|
No
|
Theorem 12.3 (Arzelà-Ascoli) Each sequence from a subset \( U \subset C\left\lbrack {a, b}\right\rbrack \) contains a uniformly convergent subsequence; i.e., \( U \) is relatively sequentially compact, if and only if it is bounded and equicontinuous, i.e., if there exists a constant \( C \) such that\n\n\[ \left| {\varphi \left( x\right) }\right| \leq C \]\n\nfor all \( x \in \left\lbrack {a, b}\right\rbrack \) and all \( \varphi \in U \), and for every \( \varepsilon > 0 \) there exists \( \delta > 0 \) such\n\nthat\n\n\[ \left| {\varphi \left( x\right) - \varphi \left( y\right) }\right| < \varepsilon \]\n\nfor all \( x, y \in \left\lbrack {a, b}\right\rbrack \) with \( \left| {x - y}\right| < \delta \) and all \( \varphi \in U \) .
|
Null
|
No
|
Theorem 12.4 The integral operator (12.3) with continuous kernel is a compact operator on \( C\left\lbrack {a, b}\right\rbrack \) .
|
Proof. For all \( \varphi \in C\left\lbrack {a, b}\right\rbrack \) with \( \parallel \varphi {\parallel }_{\infty } \leq 1 \) and all \( x \in \left\lbrack {a, b}\right\rbrack \), we have that\n\n\[ \left| {\left( {A\varphi }\right) \left( x\right) }\right| \leq \left( {b - a}\right) \mathop{\max }\limits_{{x, y \in \left\lbrack {a, b}\right\rbrack }}\left| {K\left( {x, y}\right) }\right| \]\n\ni.e., the set \( U \mathrel{\text{:=}} \{ {A\varphi } : \varphi \in C\left\lbrack {a, b}\right\rbrack ,\parallel \varphi {\parallel }_{\infty } \leq 1\} \subset C\left\lbrack {a, b}\right\rbrack \) is bounded. Since \( K \) is uniformly continuous on the square \( \left\lbrack {a, b}\right\rbrack \times \left\lbrack {a, b}\right\rbrack \), for every \( \varepsilon > 0 \) there exists \( \delta > 0 \) such that\n\n\[ \left| {K\left( {x, z}\right) - K\left( {y, z}\right) }\right| < \frac{\varepsilon }{b - a} \]\n\nfor all \( x, y, z \in \left\lbrack {a, b}\right\rbrack \) with \( \left| {x - y}\right| < \delta \) . Then\n\n\[ \left| {\left( {A\varphi }\right) \left( x\right) - \left( {A\varphi }\right) \left( y\right) }\right| = \left| {{\int }_{a}^{b}\left\lbrack {K\left( {x, z}\right) - K\left( {y, z}\right) }\right\rbrack \varphi \left( z\right) {dz}}\right| < \varepsilon \]\n\nfor all \( x, y \in \left\lbrack {a, b}\right\rbrack \) with \( \left| {x - y}\right| < \delta \) and all \( \varphi \in C\left\lbrack {a, b}\right\rbrack \) with \( \parallel \varphi {\parallel }_{\infty } \leq 1 \) ; i.e., \( U \) is equicontinuous. Hence \( A \) is compact by the Arzelà-Ascoli Theorem 12.3.
|
Yes
|
Theorem 12.5 The norm of the integral operator \( A : C\left\lbrack {a, b}\right\rbrack \rightarrow C\left\lbrack {a, b}\right\rbrack \) with continuous kernel \( K \) is given by\n\n\[ \parallel A{\parallel }_{\infty } = \mathop{\max }\limits_{{a \leq x \leq b}}{\int }_{a}^{b}\left| {K\left( {x, y}\right) }\right| {dy}. \]\n\n(12.4)
|
Proof. For each \( \varphi \in C\left\lbrack {a, b}\right\rbrack \) with \( \parallel \varphi {\parallel }_{\infty } \leq 1 \) we have\n\n\[ \left| {\left( {A\varphi }\right) \left( x\right) }\right| \leq {\int }_{a}^{b}\left| {K\left( {x, y}\right) }\right| {dy},\;x \in \left\lbrack {a, b}\right\rbrack ,\]\n\nand thus\n\n\[ \parallel A{\parallel }_{\infty } = \mathop{\sup }\limits_{{\parallel \varphi {\parallel }_{\infty } \leq 1}}\parallel {A\varphi }{\parallel }_{\infty } \leq \mathop{\max }\limits_{{a \leq x \leq b}}{\int }_{a}^{b}\left| {K\left( {x, y}\right) }\right| {dy}. \]\n\nSince \( K \) is continuous, there exists \( {x}_{0} \in \left\lbrack {a, b}\right\rbrack \) such that\n\n\[ {\int }_{a}^{b}\left| {K\left( {{x}_{0}, y}\right) }\right| {dy} = \mathop{\max }\limits_{{a \leq x \leq b}}{\int }_{a}^{b}\left| {K\left( {x, y}\right) }\right| {dy}. \]\n\nFor \( \varepsilon > 0 \) choose \( \psi \in C\left\lbrack {a, b}\right\rbrack \) by setting\n\n\[ \psi \left( y\right) \mathrel{\text{:=}} \frac{K\left( {{x}_{0}, y}\right) }{\left| {K\left( {{x}_{0}, y}\right) }\right| + \varepsilon },\;y \in \left\lbrack {a, b}\right\rbrack . \]\n\nThen \( \parallel \psi {\parallel }_{\infty } \leq 1 \) and\n\n\[ \parallel {A\psi }{\parallel }_{\infty } \geq \left| {\left( {A\psi }\right) \left( {x}_{0}\right) }\right| = {\int }_{a}^{b}\frac{{\left\lbrack K\left( {x}_{0}, y\right) \right\rbrack }^{2}}{\left| {K\left( {{x}_{0}, y}\right) }\right| + \varepsilon }{dy} \geq {\int }_{a}^{b}\frac{{\left\lbrack K\left( {x}_{0}, y\right) \right\rbrack }^{2} - {\varepsilon }^{2}}{\left| {K\left( {{x}_{0}, y}\right) }\right| + \varepsilon }{dy}\]\n\n\[ = {\int }_{a}^{b}\left| {K\left( {{x}_{0}, y}\right) }\right| {dy} - \varepsilon \left( {b - a}\right) . \]\n\nHence\n\n\[ \parallel A{\parallel }_{\infty } = \mathop{\sup }\limits_{{\parallel \varphi {\parallel }_{\infty } \leq 1}}\parallel {A\varphi }{\parallel }_{\infty } \geq \parallel {A\psi }{\parallel }_{\infty } \geq {\int }_{a}^{b}\left| {K\left( {{x}_{0}, y}\right) }\right| {dy} - \varepsilon \left( {b - a}\right) ,\]\n\nand since this holds for all \( \varepsilon > 0 \), we have\n\n\[ \parallel A{\parallel }_{\infty } \geq {\int }_{a}^{b}\left| {K\left( {{x}_{0}, y}\right) }\right| {dy} = \mathop{\max }\limits_{{a \leq x \leq b}}{\int }_{a}^{b}\left| {K\left( {x, y}\right) }\right| {dy}. \]\n\nThis concludes the proof.
|
Yes
|
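Formula (12.4) can be probed numerically. The sketch below uses the assumed illustrative kernel \( K(x, y) = e^{-xy} \) on \( \left\lbrack 0,1 \right\rbrack \) (our choice, not from the text); since this kernel is positive, the sign-matching test function \( \psi \) from the proof reduces to \( \psi = 1 \), and \( \left( A\psi \right)\left( x_0 \right) \) attains the norm exactly.

```python
import math

# illustrative kernel (an assumption, not from the text): K(x, y) = exp(-x*y) on [0, 1]
K = lambda x, y: math.exp(-x * y)

def quad(f, m=1000):
    # composite midpoint rule on [0, 1]
    h = 1.0 / m
    return h * sum(f((k + 0.5) * h) for k in range(m))

grid = [j / 100 for j in range(101)]
# right-hand side of (12.4): max_x of the integral of |K(x, y)| dy over a grid in x
norm_formula = max(quad(lambda y: abs(K(x, y))) for x in grid)
x0 = max(grid, key=lambda x: quad(lambda y: abs(K(x, y))))
# K > 0, so the sign-matching test function of the proof is psi = 1
attained = quad(lambda y: K(x0, y))
print(norm_formula, attained)  # the two values coincide
```

For this kernel the maximum is taken at \( x = 0 \), where the integral equals \( 1 \).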
Theorem 12.6 Let \( A : X \rightarrow X \) be a compact linear operator on a Banach space \( X \) such that \( I - A \) is injective. Assume that the sequence \( {A}_{n} : X \rightarrow X \) of bounded linear operators is norm convergent, i.e., \( \begin{Vmatrix}{{A}_{n} - A}\end{Vmatrix} \rightarrow 0, n \rightarrow \infty \) . Then for sufficiently large \( n \) the inverse operators \( {\left( I - {A}_{n}\right) }^{-1} : X \rightarrow X \) exist and are uniformly bounded. For the solutions of the equations \[ \varphi - {A\varphi } = f\;\text{ and }\;{\varphi }_{n} - {A}_{n}{\varphi }_{n} = {f}_{n} \] we have an error estimate \[ \begin{Vmatrix}{{\varphi }_{n} - \varphi }\end{Vmatrix} \leq C\left\{ {\begin{Vmatrix}{\left( {{A}_{n} - A}\right) \varphi }\end{Vmatrix} + \begin{Vmatrix}{{f}_{n} - f}\end{Vmatrix}}\right\} \] for some constant \( C \) .
|
Proof. By the Riesz Theorem 12.2, the inverse \( {\left( I - A\right) }^{-1} : X \rightarrow X \) exists and is bounded. Since \( \begin{Vmatrix}{{A}_{n} - A}\end{Vmatrix} \rightarrow 0, n \rightarrow \infty \), by Remark 3.25 we have \( \begin{Vmatrix}{{\left( I - A\right) }^{-1}\left( {{A}_{n} - A}\right) }\end{Vmatrix} \leq q < 1 \) for sufficiently large \( n \) . For these \( n \), by the Neumann series Theorem 3.48, the inverse operators of \[ I - {\left( I - A\right) }^{-1}\left( {{A}_{n} - A}\right) = {\left( I - A\right) }^{-1}\left( {I - {A}_{n}}\right) \] exist and are uniformly bounded by \[ \begin{Vmatrix}{\left\lbrack I - {\left( I - A\right) }^{-1}\left( {A}_{n} - A\right) \right\rbrack }^{-1}\end{Vmatrix} \leq \frac{1}{1 - q}. \] But then \( {\left\lbrack I - {\left( I - A\right) }^{-1}\left( {A}_{n} - A\right) \right\rbrack }^{-1}{\left( I - A\right) }^{-1} \) are the inverse operators of \( I - {A}_{n} \) and they are uniformly bounded. The error estimate follows from \[ \left( {I - {A}_{n}}\right) \left( {{\varphi }_{n} - \varphi }\right) = \left( {A - {A}_{n}}\right) \varphi + {f}_{n} - f \] by the uniform boundedness of the inverse operators \( {\left( I - {A}_{n}\right) }^{-1} \) .
|
Yes
|
Lemma 12.9 Let \( X \) be a Banach space, let \( {A}_{n} : X \rightarrow X \) be a collectively compact sequence, and let \( {B}_{n} : X \rightarrow X \) be a pointwise convergent sequence with limit operator \( B : X \rightarrow X \) . Then\n\n\[ \n\begin{Vmatrix}{\left( {{B}_{n} - B}\right) {A}_{n}}\end{Vmatrix} \rightarrow 0,\;n \rightarrow \infty .\n\]
|
Proof. Assume that (12.7) is not valid. Then there exist \( {\varepsilon }_{0} > 0 \), a sequence \( \left( {n}_{k}\right) \) in \( \mathbb{N} \) with \( {n}_{k} \rightarrow \infty, k \rightarrow \infty \), and a sequence \( \left( {\varphi }_{k}\right) \) in \( X \) with \( \begin{Vmatrix}{\varphi }_{k}\end{Vmatrix} \leq 1 \) such that\n\n\[ \n\begin{Vmatrix}{\left( {{B}_{{n}_{k}} - B}\right) {A}_{{n}_{k}}{\varphi }_{k}}\end{Vmatrix} \geq {\varepsilon }_{0},\;k = 1,2,\ldots\n\]\n\n(12.8)\n\nSince the sequence \( \left( {A}_{n}\right) \) is collectively compact, there exists a subsequence such that\n\n\[ \n{A}_{{n}_{k\left( j\right) }}{\varphi }_{k\left( j\right) } \rightarrow \psi \in X,\;j \rightarrow \infty .\n\]\n\n(12.9)\n\nThen we can estimate with the aid of the triangle inequality and Remark 3.25 to obtain\n\n\[ \n\begin{Vmatrix}{\left( {{B}_{{n}_{k\left( j\right) }} - B}\right) {A}_{{n}_{k\left( j\right) }}{\varphi }_{k\left( j\right) }}\end{Vmatrix}\n\]\n\n(12.10)\n\n\[ \n\leq \begin{Vmatrix}{\left( {{B}_{{n}_{k\left( j\right) }} - B}\right) \psi }\end{Vmatrix} + \begin{Vmatrix}{{B}_{{n}_{k\left( j\right) }} - B}\end{Vmatrix}\begin{Vmatrix}{{A}_{{n}_{k\left( j\right) }}{\varphi }_{k\left( j\right) } - \psi }\end{Vmatrix}.\n\]\n\nThe first term on the right-hand side of (12.10) tends to zero as \( j \rightarrow \infty \) , since the operator sequence \( \left( {B}_{n}\right) \) is pointwise convergent. The second term tends to zero as \( j \rightarrow \infty \), since the operator sequence \( \left( {B}_{n}\right) \) is uniformly bounded by Theorem 12.7 and since we have the convergence (12.9). Therefore, passing to the limit \( j \rightarrow \infty \) in (12.10) yields a contradiction to (12.8), and the proof is complete.
|
Yes
|
Theorem 12.10 Let \( A : X \rightarrow X \) be a compact linear operator on a Banach space \( X \) such that \( I - A \) is injective, and assume that the sequence \( {A}_{n} : X \rightarrow X \) of linear operators is collectively compact and pointwise convergent; i.e., \( {A}_{n}\varphi \rightarrow {A\varphi }, n \rightarrow \infty \), for all \( \varphi \in X \) . Then for sufficiently large \( n \) the inverse operators \( {\left( I - {A}_{n}\right) }^{-1} : X \rightarrow X \) exist and are uniformly bounded. For the solutions of the equations\n\n\[ \varphi - {A\varphi } = f\;\text{ and }\;{\varphi }_{n} - {A}_{n}{\varphi }_{n} = {f}_{n} \]\n\nwe have an error estimate\n\n\[ \begin{Vmatrix}{{\varphi }_{n} - \varphi }\end{Vmatrix} \leq C\left\{ {\begin{Vmatrix}{\left( {{A}_{n} - A}\right) \varphi }\end{Vmatrix} + \begin{Vmatrix}{{f}_{n} - f}\end{Vmatrix}}\right\} \]\n\n(12.11)\n\nfor some constant \( C \) .
|
Proof. By the Riesz Theorem 12.2, the inverse \( {\left( I - A\right) }^{-1} : X \rightarrow X \) exists and is bounded. The identity\n\n\[ {\left( I - A\right) }^{-1} = I + {\left( I - A\right) }^{-1}A \]\n\nsuggests\n\n\[ {M}_{n} \mathrel{\text{:=}} I + {\left( I - A\right) }^{-1}{A}_{n} \]\n\nas an approximate inverse for \( I - {A}_{n} \) . Elementary calculations yield\n\n\[ {M}_{n}\left( {I - {A}_{n}}\right) = I - {S}_{n} \]\n\n(12.12)\n\nwhere\n\n\[ {S}_{n} \mathrel{\text{:=}} {\left( I - A\right) }^{-1}\left( {{A}_{n} - A}\right) {A}_{n} \]\n\nFrom Lemma 12.9 we conclude that \( \begin{Vmatrix}{S}_{n}\end{Vmatrix} \rightarrow 0, n \rightarrow \infty \) . Hence for sufficiently large \( n \) we have \( \begin{Vmatrix}{S}_{n}\end{Vmatrix} \leq q < 1 \) . For these \( n \), by the Neumann series Theorem 3.48, the inverse operators \( {\left( I - {S}_{n}\right) }^{-1} \) exist and are uniformly bounded by\n\n\[ \begin{Vmatrix}{\left( I - {S}_{n}\right) }^{-1}\end{Vmatrix} \leq \frac{1}{1 - q}. \]\n\nNow (12.12) implies first that \( I - {A}_{n} \) is injective, and therefore, since \( {A}_{n} \) is compact, by Theorem 12.1 the inverse \( {\left( I - {A}_{n}\right) }^{-1} \) exists. Then (12.12) also yields \( {\left( I - {A}_{n}\right) }^{-1} = {\left( I - {S}_{n}\right) }^{-1}{M}_{n} \), whence uniform boundedness follows, since the operators \( {M}_{n} \) are uniformly bounded by Theorem 12.7. The error estimate (12.11) is proven as in Theorem 12.6.
|
Yes
|
Theorem 12.11 Let \( {\varphi }_{n} \) be a solution of\n\n\[ \n{\varphi }_{n}\left( x\right) - \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}K\left( {x,{x}_{k}}\right) {\varphi }_{n}\left( {x}_{k}\right) = f\left( x\right) ,\;x \in \left\lbrack {a, b}\right\rbrack .\n\]\n\n(12.13)\n\nThen the values \( {\varphi }_{j}^{\left( n\right) } \mathrel{\text{:=}} {\varphi }_{n}\left( {x}_{j}\right), j = 0,\ldots, n \), at the quadrature points satisfy the linear system\n\n\[ \n{\varphi }_{j}^{\left( n\right) } - \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}K\left( {{x}_{j},{x}_{k}}\right) {\varphi }_{k}^{\left( n\right) } = f\left( {x}_{j}\right) ,\;j = 0,\ldots, n.\n\]\n\n(12.14)\n\n\n\nConversely, let \( {\varphi }_{j}^{\left( n\right) }, j = 0,\ldots, n \), be a solution of the system (12.14). Then the function \( {\varphi }_{n} \) defined by\n\n\[ \n{\varphi }_{n}\left( x\right) \mathrel{\text{:=}} f\left( x\right) + \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}K\left( {x,{x}_{k}}\right) {\varphi }_{k}^{\left( n\right) },\;x \in \left\lbrack {a, b}\right\rbrack ,\n\]\n\n(12.15)\n\nsolves equation (12.13).
|
Proof. The first statement is trivial. For a solution \( {\varphi }_{j}^{\left( n\right) }, j = 0,\ldots, n \), of the system (12.14) the function \( {\varphi }_{n} \) defined by (12.15) has values\n\n\[ \n{\varphi }_{n}\left( {x}_{j}\right) = f\left( {x}_{j}\right) + \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}K\left( {{x}_{j},{x}_{k}}\right) {\varphi }_{k}^{\left( n\right) } = {\varphi }_{j}^{\left( n\right) },\;j = 0,\ldots, n.\n\]\n\nInserting this into (12.15), we see that \( {\varphi }_{n} \) satisfies (12.13).\n\nThe formula (12.15) may be viewed as a natural interpolation of the values \( {\varphi }_{j}^{\left( n\right) }, j = 0,\ldots, n \), at the quadrature points to obtain the approximating function \( {\varphi }_{n} \) . It was introduced by Nyström in 1930.
|
Yes
|
Theorem 12.12 The norm of the quadrature operators \( {A}_{n} \) is given by\n\n\[ \n{\begin{Vmatrix}{A}_{n}\end{Vmatrix}}_{\infty } = \mathop{\max }\limits_{{a \leq x \leq b}}\mathop{\sum }\limits_{{k = 0}}^{n}\left| {{a}_{k}K\left( {x,{x}_{k}}\right) }\right| .\n\]
|
Proof. For each \( \varphi \in C\left\lbrack {a, b}\right\rbrack \) with \( \parallel \varphi {\parallel }_{\infty } \leq 1 \) we have\n\n\[ \n{\begin{Vmatrix}{A}_{n}\varphi \end{Vmatrix}}_{\infty } \leq \mathop{\max }\limits_{{a \leq x \leq b}}\mathop{\sum }\limits_{{k = 0}}^{n}\left| {{a}_{k}K\left( {x,{x}_{k}}\right) }\right| \n\]\n\nand therefore \( {\begin{Vmatrix}{A}_{n}\end{Vmatrix}}_{\infty } \) is smaller than or equal to the right-hand side of (12.16). Let \( z \in \left\lbrack {a, b}\right\rbrack \) be such that\n\n\[ \n\mathop{\sum }\limits_{{k = 0}}^{n}\left| {{a}_{k}K\left( {z,{x}_{k}}\right) }\right| = \mathop{\max }\limits_{{a \leq x \leq b}}\mathop{\sum }\limits_{{k = 0}}^{n}\left| {{a}_{k}K\left( {x,{x}_{k}}\right) }\right| \n\]\n\nand choose \( \psi \in C\left\lbrack {a, b}\right\rbrack \) with \( \parallel \psi {\parallel }_{\infty } = 1 \) and\n\n\[ \n{a}_{k}K\left( {z,{x}_{k}}\right) \psi \left( {x}_{k}\right) = \left| {{a}_{k}K\left( {z,{x}_{k}}\right) }\right| ,\;k = 0,\ldots, n.\n\]\n\nThen\n\n\[ \n{\begin{Vmatrix}{A}_{n}\end{Vmatrix}}_{\infty } \geq {\begin{Vmatrix}{A}_{n}\psi \end{Vmatrix}}_{\infty } \geq \left| {\left( {{A}_{n}\psi }\right) \left( z\right) }\right| = \mathop{\sum }\limits_{{k = 0}}^{n}\left| {{a}_{k}K\left( {z,{x}_{k}}\right) }\right| \n\]\n\nand (12.16) is proven.
|
Yes
|
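A numerical check of (12.16), with assumed trapezoidal weights on \( \left\lbrack 0,1 \right\rbrack \) and the sign-changing illustrative kernel \( K(x, y) = \cos 5(x + y) \) (both our choices): the operator \( {A}_{n} \) only sees the nodal values of its argument, so any \( \psi \) with \( \psi\left( {x}_{k} \right) = \operatorname{sgn} {a}_{k} K\left( z, {x}_{k} \right) \) and \( \parallel \psi {\parallel }_{\infty } = 1 \) attains the maximum, exactly as in the proof.

```python
import math

n = 8
h = 1.0 / n
xs = [k * h for k in range(n + 1)]
a = [h / 2 if k in (0, n) else h for k in range(n + 1)]  # trapezoidal weights
K = lambda x, y: math.cos(5.0 * (x + y))  # illustrative sign-changing kernel

grid = [j / 1000 for j in range(1001)]
row_sum = lambda x: sum(abs(a[k] * K(x, xs[k])) for k in range(n + 1))
norm_formula = max(row_sum(x) for x in grid)   # right-hand side of (12.16)
z = max(grid, key=row_sum)                     # maximising point

# nodal values of the extremal psi: sgn(a_k K(z, x_k)); the weights a_k are positive
psi = [math.copysign(1.0, K(z, x)) for x in xs]
attained = sum(a[k] * K(z, xs[k]) * psi[k] for k in range(n + 1))
print(attained, norm_formula)  # equal up to rounding
```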
Consider the integral equation\n\n\[ \varphi \left( x\right) - \frac{1}{2}{\int }_{0}^{1}\left( {x + 1}\right) {e}^{-{xy}}\varphi \left( y\right) {dy} = {e}^{-x} - \frac{1}{2} + \frac{1}{2}{e}^{-\left( {x + 1}\right) },\;0 \leq x \leq 1, \]
|
with exact solution \( \varphi \left( x\right) = {e}^{-x} \) . For its kernel we have\n\n\[ \mathop{\max }\limits_{{0 \leq x \leq 1}}{\int }_{0}^{1}\frac{1}{2}\left( {x + 1}\right) {e}^{-{xy}}{dy} = \mathop{\sup }\limits_{{0 < x \leq 1}}\frac{x + 1}{2x}\left( {1 - {e}^{-x}}\right) < 1. \]\n\nTherefore, by the Neumann series Theorem 3.48 and the operator norm (12.4), equation (12.19) is uniquely solvable.
|
Yes
|
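Since the exact solution \( \varphi \left( x\right) = {e}^{-x} \) of this equation is known, it makes a convenient test case for the Nyström method (12.13)-(12.15). The sketch below assumes the composite trapezoidal rule with \( n = 16 \) (our choice of quadrature) and solves the linear system (12.14) by Gaussian elimination with partial pivoting.

```python
import math

# Nystroem method (12.13)-(12.15) for the equation above, whose exact solution
# is phi(x) = exp(-x); the composite trapezoidal rule with n = 16 is our choice.
n = 16
h = 1.0 / n
xs = [k * h for k in range(n + 1)]
w = [h / 2 if k in (0, n) else h for k in range(n + 1)]   # trapezoidal weights

K = lambda x, y: 0.5 * (x + 1.0) * math.exp(-x * y)
f = lambda x: math.exp(-x) - 0.5 + 0.5 * math.exp(-(x + 1.0))

# assemble the linear system (12.14) and solve it by Gaussian elimination
A = [[(1.0 if j == k else 0.0) - w[k] * K(xs[j], xs[k]) for k in range(n + 1)]
     for j in range(n + 1)]
b = [f(x) for x in xs]
for i in range(n + 1):
    p = max(range(i, n + 1), key=lambda r: abs(A[r][i]))   # partial pivoting
    A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
    for r in range(i + 1, n + 1):
        m = A[r][i] / A[i][i]
        b[r] -= m * b[i]
        for c in range(i, n + 1):
            A[r][c] -= m * A[i][c]
phi = [0.0] * (n + 1)
for i in range(n, -1, -1):
    phi[i] = (b[i] - sum(A[i][c] * phi[c] for c in range(i + 1, n + 1))) / A[i][i]

# Nystroem interpolation (12.15) extends the nodal values to arbitrary x
phi_n = lambda x: f(x) + sum(w[k] * K(x, xs[k]) * phi[k] for k in range(n + 1))
print(abs(phi_n(0.3) - math.exp(-0.3)))  # small, of quadrature-rule accuracy
```

The error at the off-grid point reflects the accuracy of the underlying quadrature rule, as Theorem 12.10 predicts.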
Consider the integral equation\n\n\[ \n\varphi \left( t\right) + \frac{ab}{\pi }{\int }_{0}^{2\pi }\frac{\varphi \left( \tau \right) {d\tau }}{{a}^{2} + {b}^{2} - \left( {{a}^{2} - {b}^{2}}\right) \cos \left( {t + \tau }\right) } = f\left( t\right) ,\;0 \leq t \leq {2\pi },\n\]\n\nwhere \( a \geq b > 0 \).
|
Any solution \( \varphi \) to the homogeneous form of equation (12.20) clearly must be a \( {2\pi } \) -periodic analytic function, since the kernel is a \( {2\pi } \) -periodic analytic function with respect to the variable \( t \) . Hence, we can expand \( \varphi \) into a uniformly convergent Fourier series\n\n\[ \n\varphi \left( t\right) = \mathop{\sum }\limits_{{n = 0}}^{\infty }{\alpha }_{n}\cos {nt} + \mathop{\sum }\limits_{{n = 1}}^{\infty }{\beta }_{n}\sin {nt}.\n\]\n\nInserting this into the homogeneous integral equation and using the integrals (see Problem 12.10)\n\n\[ \n\frac{ab}{\pi }{\int }_{0}^{2\pi }\frac{{e}^{in\tau }{d\tau }}{\left( {{a}^{2} + {b}^{2}}\right) - \left( {{a}^{2} - {b}^{2}}\right) \cos \left( {t + \tau }\right) } = {\left( \frac{a - b}{a + b}\right) }^{n}{e}^{-{int}}\n\]\n\n(12.21)\n\nfor \( n = 0,1,2,\ldots \), it follows that\n\n\[ \n{\alpha }_{n}\left\lbrack {1 + {\left( \frac{a - b}{a + b}\right) }^{n}}\right\rbrack = {\beta }_{n}\left\lbrack {1 - {\left( \frac{a - b}{a + b}\right) }^{n}}\right\rbrack = 0\n\]\n\nfor \( n = 0,1,2,\ldots \) . Hence, \( {\alpha }_{n} = {\beta }_{n} = 0 \) for \( n = 0,1,2,\ldots \), and therefore \( \varphi = 0 \) . Now the Riesz Theorem 12.2 implies that the integral equation (12.20) is uniquely solvable for each right-hand side \( f \) .
|
Yes
|
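The integral identity (12.21) underlying this argument can be verified numerically: the composite midpoint rule converges very rapidly here because the integrand is \( 2\pi \)-periodic and analytic. The parameter values \( a = 1, b = 0.5, n = 2, t = 0.7 \) below are arbitrary test choices.

```python
import math

a, b, n, t = 1.0, 0.5, 2, 0.7   # arbitrary test values with a >= b > 0
m = 1000
h = 2.0 * math.pi / m
s = 0j
for k in range(m):
    tau = (k + 0.5) * h   # composite midpoint rule on [0, 2*pi]
    s += h * complex(math.cos(n * tau), math.sin(n * tau)) / (
        a * a + b * b - (a * a - b * b) * math.cos(t + tau))
lhs = a * b / math.pi * s
rhs = ((a - b) / (a + b)) ** n * complex(math.cos(n * t), -math.sin(n * t))
err = abs(lhs - rhs)
print(err)  # negligible: both sides of (12.21) agree
```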
Theorem 12.16 Let \( A : C\left\lbrack {a, b}\right\rbrack \rightarrow C\left\lbrack {a, b}\right\rbrack \) be a compact linear operator such that \( I - A \) is injective, and assume that the interpolation operators \( {L}_{n} : C\left\lbrack {a, b}\right\rbrack \rightarrow {X}_{n} \) satisfy \( {\begin{Vmatrix}{L}_{n}A - A\end{Vmatrix}}_{\infty } \rightarrow 0, n \rightarrow \infty \) . Then, for sufficiently large \( n \), the approximate equation (12.27) is uniquely solvable for all \( f \in C\left\lbrack {a, b}\right\rbrack \), and we have the error estimate
|
Proof. From Theorem 12.6 applied to \( {A}_{n} = {L}_{n}A \), we conclude that for all sufficiently large \( n \) the inverse operators \( {\left( I - {L}_{n}A\right) }^{-1} \) exist and are uniformly bounded. To verify the error bound, we apply the interpolation operator \( {L}_{n} \) to (12.22) and get\n\n\[ \varphi - {L}_{n}{A\varphi } = {L}_{n}f + \varphi - {L}_{n}\varphi . \]\n\nSubtracting this from (12.27) we find\n\n\[ \left( {I - {L}_{n}A}\right) \left( {{\varphi }_{n} - \varphi }\right) = {L}_{n}\varphi - \varphi \]\n\nwhence the estimate (12.28) follows.
|
Yes
|
Corollary 12.17 Let \( A : C\left\lbrack {a, b}\right\rbrack \rightarrow C\left\lbrack {a, b}\right\rbrack \) be a compact linear operator such that \( I - A \) is injective, and assume that the interpolation operators \( {L}_{n} : C\left\lbrack {a, b}\right\rbrack \rightarrow {X}_{n} \) are pointwise convergent; i.e., \( {L}_{n}\varphi \rightarrow \varphi, n \rightarrow \infty \), for all \( \varphi \in C\left\lbrack {a, b}\right\rbrack \) . Then, for sufficiently large \( n \) , the approximate equation (12.27) is uniquely solvable for all \( f \in C\left\lbrack {a, b}\right\rbrack \), and the estimate (12.28) holds.
|
Proof. By Lemma 12.9 the pointwise convergence of the interpolation operators \( {L}_{n} \) and the compactness of \( A \) imply that \( {\begin{Vmatrix}{L}_{n}A - A\end{Vmatrix}}_{\infty } \rightarrow 0, n \rightarrow \infty \) . Now the statement follows from the preceding theorem.
|
Yes
|
Lemma 12.20 Let \( f \in {C}^{1}\left\lbrack {0,{2\pi }}\right\rbrack \) . Then for the remainder in trigonometric interpolation we have\n\n\[ \n{\begin{Vmatrix}{L}_{n}f - f\end{Vmatrix}}_{\infty } \leq {c}_{n}{\begin{Vmatrix}{f}^{\prime }\end{Vmatrix}}_{2} \n\]\n\n(12.34)\n\nwhere \( {c}_{n} \rightarrow 0, n \rightarrow \infty \) .
|
Proof. Consider the trigonometric monomials \( {f}_{m}\left( t\right) = {e}^{imt} \) and write \( m = \) \( \left( {{2k} + 1}\right) n + q \) with \( k \in \mathbf{Z} \) and \( 0 \leq q < {2n} \) . Since \( {f}_{m}\left( {t}_{j}\right) = {f}_{q - n}\left( {t}_{j}\right) \) for \( j = 0,\ldots ,{2n} - 1 \), the trigonometric interpolation polynomials for \( {f}_{m} \) and \( {f}_{q - n} \) coincide. Therefore, we have\n\n\[ \n{\begin{Vmatrix}{L}_{n}{f}_{m} - {f}_{m}\end{Vmatrix}}_{\infty } \leq 2 \n\]\n\nfor all \( \left| m\right| \geq n \) . Since \( f \) is continuously differentiable, we can expand it into a uniformly convergent Fourier series (see Problem 12.14)\n\n\[ \nf = \mathop{\sum }\limits_{{m = - \infty }}^{\infty }{a}_{m}{f}_{m} \n\]\n\nFrom the relation\n\n\[ \n{\int }_{0}^{2\pi }{f}^{\prime }\left( t\right) {e}^{-{imt}}{dt} = {im}{\int }_{0}^{2\pi }f\left( t\right) {e}^{-{imt}}{dt} = {2\pi im}{a}_{m} \n\]\n\nfor the Fourier coefficients it follows that\n\n\[ \n{\begin{Vmatrix}{f}^{\prime }\end{Vmatrix}}_{2}^{2} = {\int }_{0}^{2\pi }{\left| {f}^{\prime }\left( t\right) \right| }^{2}{dt} = {2\pi }\mathop{\sum }\limits_{{m = - \infty }}^{\infty }{m}^{2}{\left| {a}_{m}\right| }^{2}. \n\]\n\nUsing this identity and the Cauchy-Schwarz inequality, we derive\n\n\[ \n{\begin{Vmatrix}{L}_{n}f - f\end{Vmatrix}}_{\infty }^{2} \leq 4{\left\{ \mathop{\sum }\limits_{{\left| m\right| = n}}^{\infty }\left| {a}_{m}\right| \right\} }^{2} \leq \frac{4}{\pi }{\begin{Vmatrix}{f}^{\prime }\end{Vmatrix}}_{2}^{2}\mathop{\sum }\limits_{{m = n}}^{\infty }\frac{1}{{m}^{2}}. \n\]\n\nThis implies (12.34).
|
Yes
|
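The convergence stated in Lemma 12.20 can be observed directly. The sketch below builds \( {L}_{n}f \) from the Lagrange basis (12.35) at the equidistant nodes \( {t}_{j} = j\pi / n \) and measures the error for the assumed smooth \( 2\pi \)-periodic test function \( f(t) = e^{\cos t} \) (our choice).

```python
import math

def interp_error(n, f):
    # maximum error of trigonometric interpolation at 2n equidistant nodes
    t = [math.pi * j / n for j in range(2 * n)]
    def ell(k, x):
        # Lagrange basis (12.35)
        s = 1.0 + math.cos(n * (x - t[k]))
        for m in range(1, n):
            s += 2.0 * math.cos(m * (x - t[k]))
        return s / (2 * n)
    def Lnf(x):
        return sum(f(t[k]) * ell(k, x) for k in range(2 * n))
    xs = [2.0 * math.pi * i / 400 for i in range(400)]
    return max(abs(Lnf(x) - f(x)) for x in xs)

f = lambda t: math.exp(math.cos(t))   # smooth periodic test function
e4, e8 = interp_error(4, f), interp_error(8, f)
print(e4, e8)  # the error decreases rapidly with n
```

For an analytic periodic function the error in fact decays exponentially in \( n \), much faster than the \( O\left( {c}_{n} \right) \) rate guaranteed by the lemma for \( {C}^{1} \) functions.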
Theorem 12.21 The collocation method with trigonometric polynomials converges for integral equations of the second kind with continuously differentiable periodic kernels and right-hand sides.
|
One possibility for the implementation of the collocation method is to use the trigonometric monomials as basis functions. Then the integrals \( {\int }_{0}^{2\pi }K\left( {{t}_{j},\tau }\right) {e}^{ik\tau }{d\tau } \) have to be evaluated numerically. Replacing the kernel by its trigonometric interpolation leads to the quadrature formula\n\n\[ \n{\int }_{0}^{2\pi }K\left( {{t}_{j},\tau }\right) {e}^{ik\tau }{d\tau } \approx \frac{\pi }{n}\mathop{\sum }\limits_{{m = 0}}^{{{2n} - 1}}K\left( {{t}_{j},{t}_{m}}\right) {e}^{{ik}{t}_{m}} \n\] \n\nfor \( j = 0,\ldots ,{2n} - 1 \) . Using fast Fourier transform techniques (see Section 8.2) these quadratures can be carried out very rapidly. A second, even more efficient, possibility is to use the Lagrange basis \n\n\[ \n{\ell }_{k}\left( t\right) = \frac{1}{2n}\left\{ {1 + 2\mathop{\sum }\limits_{{m = 1}}^{{n - 1}}\cos m\left( {t - {t}_{k}}\right) + \cos n\left( {t - {t}_{k}}\right) }\right\} \n\] \n\n(12.35) \n\nfor \( k = 0,\ldots ,{2n} - 1 \), which can be derived from Theorem 8.25 (see Problem 12.13). \n\nFor the evaluation of the matrix coefficients \( {\int }_{0}^{2\pi }K\left( {{t}_{j},\tau }\right) {\ell }_{k}\left( \tau \right) {d\tau } \) we proceed analogously to the preceding case of linear splines. We approximate these integrals by replacing \( K\left( {{t}_{j}, \cdot }\right) \) by its trigonometric interpolation polynomial, i.e., we approximate \n\n\[ \n{\int }_{0}^{2\pi }K\left( {{t}_{j},\tau }\right) {\ell }_{k}\left( \tau \right) {d\tau } \approx \mathop{\sum }\limits_{{m = 0}}^{{{2n} - 1}}K\left( {{t}_{j},{t}_{m}}\right) {\int }_{0}^{2\pi }{\ell }_{m}\left( \tau \right) {\ell }_{k}\left( \tau \right) {d\tau } \n\] \n\nfor \( j, k = 0,\ldots ,{2n} - 1 \) . Using (12.35), elementary integrations yield (see Problem 12.13) \n\n\[ \n{\int }_{0}^{2\pi }{\ell }_{m}\left( \tau \right) {\ell }_{k}\left( \tau \right) {d\tau } = \frac{\pi }{n}{\delta }_{mk} - {\left( -1\right) }^{m - k}\frac{\pi }{4{n}^{2}}, \n\] \n\n(12.36) \n\nfor \( m, k = 0,\ldots ,{2n} - 1 \) . Note that despite the global nature of the trigonometric interpolation and its Lagrange basis, due to the simple structure of the weights (12.36) in the quadrature rule, the computation of the matrix elements is not too costly. The only additional computational effort besides the kernel evaluation is the computation of the row sums \n\n\[ \n\mathop{\sum }\limits_{{m = 0}}^{{{2n} - 1}}{\left( -1\right) }^{m}K\left( {{t}_{j},{t}_{m}}\right) \n\] \n\nfor \( j = 0,\ldots ,{2n} - 1 \) . We omit the analysis of the additional error in the fully discrete method caused by the numerical quadrature.
|
No
|
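The weight identity (12.36) can be checked numerically against the Lagrange basis (12.35); the midpoint rule with sufficiently many points integrates these trigonometric polynomials essentially exactly. The value \( n = 3 \) and the index pairs below are arbitrary test choices.

```python
import math

n = 3
t = [math.pi * j / n for j in range(2 * n)]   # nodes t_j = j*pi/n

def ell(k, x):
    # Lagrange basis (12.35)
    s = 1.0 + math.cos(n * (x - t[k]))
    for m in range(1, n):
        s += 2.0 * math.cos(m * (x - t[k]))
    return s / (2 * n)

def quad(f, m=2000):
    # composite midpoint rule on [0, 2*pi]
    h = 2.0 * math.pi / m
    return h * sum(f((i + 0.5) * h) for i in range(m))

errs = []
for mm, kk in [(0, 0), (1, 2), (2, 5)]:   # arbitrary index pairs
    num = quad(lambda x: ell(mm, x) * ell(kk, x))
    ref = (math.pi / n) * (1.0 if mm == kk else 0.0) \
        - (-1) ** (mm - kk) * math.pi / (4 * n * n)
    errs.append(abs(num - ref))
print(max(errs))  # negligible: the quadrature reproduces (12.36)
```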
For the integral equation (12.20) from Example 12.15, Table 12.5 gives the error between the exact solution and the collocation approximation.
|
TABLE 12.5. Collocation method for equation (12.20)\n\n<table><thead><tr><th></th><th>\( n \)</th><th>\( t = 0 \)</th><th>\( t = \pi /2 \)</th><th>\( t = \pi \)</th></tr></thead><tr><td></td><td>4</td><td>-0.10752855</td><td>-0.03243176</td><td>0.03961310</td></tr><tr><td>\( a = 1 \)</td><td>8</td><td>-0.00231537</td><td>0.00059809</td><td>0.00045961</td></tr><tr><td>\( b = {0.5} \)</td><td>16</td><td>-0.00000044</td><td>0.00000002</td><td>-0.00000000</td></tr><tr><td></td><td>4</td><td>-0.56984945</td><td>-0.18357135</td><td>0.06022598</td></tr><tr><td>\( a = 1 \)</td><td>8</td><td>-0.14414257</td><td>-0.00368787</td><td>-0.00571394</td></tr><tr><td>\( b = {0.2} \)</td><td>16</td><td>-0.00602543</td><td>-0.00035953</td><td>-0.00045408</td></tr><tr><td></td><td>32</td><td>-0.00000919</td><td>-0.00000055</td><td>-0.00000069</td></tr></table>
|
Yes
|
Theorem 12.23 For the Nyström method the condition numbers for the linear system are uniformly bounded.
|
This theorem states that the Nyström method essentially preserves the stability of the original integral equation.
|
No
|
Theorem 12.24 Under the assumptions of Theorem 12.16, for the collocation method the condition number of the linear system satisfies\n\n\[ \n\operatorname{cond}\left( {{E}_{n} - {\widetilde{A}}_{n}}\right) \leq C{\begin{Vmatrix}{L}_{n}\end{Vmatrix}}_{\infty }^{2}\operatorname{cond}{E}_{n} \n\]\n\nfor all sufficiently large \( n \) and some constant \( C \) .
|
Null
|
No
|
Theorem 12.25 Let \( X \) and \( Y \) be normed spaces and let \( A : X \rightarrow Y \) be a compact linear operator. Then \( A \) has a bounded inverse if and only if \( X \) is finite-dimensional.
|
Proof. Assume that \( A \) has a bounded inverse \( {A}^{-1} : Y \rightarrow X \) . Then we have \( {A}^{-1}A = I \), and therefore the identity operator must be compact, since the product of a bounded and a compact operator is compact (see Problem 12.2). However, the identity operator on \( X \) is compact if and only if \( X \) has finite dimension.
|
Yes
|
\[ {y}^{\prime } = - {2y} \]
|
Here \( D = {\mathbb{R}}^{2} \) . Using the procedure in (5) one obtains\n\n\[ \frac{dy}{y} = - {2dx} \Leftrightarrow \ln \left| y\right| = - {2x} + C \Leftrightarrow \left| y\right| = {\mathrm{e}}^{C - {2x}}.\]\n\nThe general solution (with \( \pm {\mathrm{e}}^{C} \) replaced with \( C \) ) is\n\n\[ y\left( {x;C}\right) = C{\mathrm{e}}^{-{2x}}\;\left( {C \in \mathbb{R}}\right) . \]\n\nThe proof that every solution is of this form is elementary: If \( \phi \left( x\right) \) is any solution of the differential equation, then\n\n\[ {\left( \phi {\mathrm{e}}^{2x}\right) }^{\prime } = {\phi }^{\prime }{\mathrm{e}}^{2x} + {2\phi }{\mathrm{e}}^{2x} = 0, \]\n\ni.e., \( \phi {\mathrm{e}}^{2x} \) is a constant. (One could also appeal to the uniqueness statement proved in VII.) It follows that exactly one solution passes through each point \( \left( {\xi ,\eta }\right) \), namely,\n\n\[ y\left( {x;\eta {\mathrm{e}}^{2\xi }}\right) = \eta {\mathrm{e}}^{2\left( {\xi - x}\right) }.\]\n\nThus we have shown that the initial value problem is uniquely solvable, with a solution that exists in \( \mathbb{R} \) .
|
Yes
|
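As a numerical cross-check of the closed-form solution, the following sketch integrates the equation with forward Euler and compares against \( y\left( x\right) = {\mathrm{e}}^{-{2x}} \); the step size and interval are my own choices, not from the source.

```python
import math

def euler_step(f, x, y, h):
    """One forward-Euler step for y' = f(x, y)."""
    return x + h, y + h * f(x, y)

# y' = -2y with y(0) = 1; the claimed general solution gives y(x) = e^{-2x}
f = lambda x, y: -2.0 * y
x, y, h = 0.0, 1.0, 1e-4
for _ in range(10_000):          # integrate out to x = 1
    x, y = euler_step(f, x, y, h)
assert abs(y - math.exp(-2.0 * x)) < 1e-3   # O(h) global error
```

The tight agreement reflects the uniqueness argument in the text: Euler's polygon can only track the one solution through \( \left( {0,1}\right) \).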
\[ {y}^{\prime } = \sqrt{\left| y\right| } \]
|
Again \( D = {\mathbb{R}}^{2} \) . Since the direction field is symmetric, it follows that if \( y\left( x\right) \) is a solution, then \( z\left( x\right) = - y\left( {-x}\right) \) is also a solution. Indeed, we have\n\n\[ {z}^{\prime }\left( x\right) = {y}^{\prime }\left( {-x}\right) = \sqrt{\left| y\left( -x\right) \right| } = \sqrt{\left| z\left( x\right) \right| }.\]\n\nThus it is sufficient to consider only positive solutions. From (5) it follows that\n\n\[ \int \frac{dy}{\sqrt{y}} = 2\sqrt{y} = x + C \]\n\nhence\n\n\[ y\left( {x;C}\right) = \frac{{\left( x + C\right) }^{2}}{4}\;\text{ in }\;\left( {-C,\infty }\right) \;\left( {C \in \mathbb{R}}\right) \]\n\n(note that \( \sqrt{y} \) is positive, whence \( x > - C \), and that for \( x < - C \) this formula does not give a solution to the differential equation). This function gives all of the positive solutions (this also follows from the uniqueness statement in VII).
|
Yes
|
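A finite-difference check that the claimed family solves \( {y}^{\prime } = \sqrt{y} \) on \( \left( {-C,\infty }\right) \); the sample points and step size are my own choices.

```python
# y(x; C) = (x + C)^2 / 4 should satisfy y' = sqrt(y) for x > -C
def y(x, C):
    return (x + C) ** 2 / 4.0

h, C = 1e-6, 1.0
for x in (0.0, 0.5, 2.0):        # all inside (-C, infinity)
    deriv = (y(x + h, C) - y(x - h, C)) / (2 * h)   # central difference
    assert abs(deriv - y(x, C) ** 0.5) < 1e-6
```

The central difference is exact for a quadratic, so the check passes to floating-point accuracy.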
\[ {y}^{\prime } = - x\left( {\operatorname{sgn}y}\right) \sqrt{\left| y\right| } = \left\{ \begin{array}{lll} - x\sqrt{y} & \text{ for } & y \geq 0, \\ x\sqrt{-y} & \text{ for } & y < 0. \end{array}\right. \]
|
The direction field is symmetric with respect to the \( x \) -axis; i.e., if \( y\left( x\right) \) is a solution, then so is \( - y\left( x\right) \) . Thus it is sufficient to calculate the positive solutions. From \[ \int \frac{dy}{\sqrt{y}} = 2\sqrt{y} = - \int {xdx} = \frac{1}{2}\left( {C - {x}^{2}}\right) \] it follows that \[ y\left( {x;C}\right) = \frac{1}{16}{\left( C - {x}^{2}\right) }^{2}\;\text{ in }\;\left( {-\sqrt{C},\sqrt{C}}\right) \;\left( {C > 0}\right) \] (note that \( \sqrt{y} > 0 \) ). If this function is extended by setting \( y\left( {x;C}\right) = 0 \) for \( \left| x\right| \geq \sqrt{C} \), then one clearly has a solution defined in \( \mathbb{R} \) . Thus we have the solutions \( \pm y\left( {x;C}\right) \) for \( C > 0 \) and \( y \equiv 0 \) . There are no other solutions. On the one hand, they (that is, their graphs) cover the whole plane; on the other hand, \( g\left( y\right) = \sqrt{\left| y\right| } \) vanishes only for \( y = 0 \) . Thus each initial value problem with \( \eta \neq 0 \) is locally uniquely solvable.
|
Yes
|
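The glued solution, zero outside \( \left( {-\sqrt{C},\sqrt{C}}\right) \), can be checked the same way by finite differences; the parameters below are my own choices.

```python
# y(x; C) = (C - x^2)^2 / 16 on (-sqrt(C), sqrt(C)), extended by 0,
# should satisfy y' = -x * sgn(y) * sqrt(|y|); here y >= 0 throughout
def y(x, C):
    return (C - x * x) ** 2 / 16.0 if x * x < C else 0.0

h, C = 1e-6, 4.0
for x in (-1.5, 0.0, 1.0, 3.0):   # inside and outside (-2, 2)
    deriv = (y(x + h, C) - y(x - h, C)) / (2 * h)
    assert abs(deriv - (-x * y(x, C) ** 0.5)) < 1e-5
```

Note the derivative matches from both sides of \( x = \pm \sqrt{C} \), which is what makes the extension a genuine global solution.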
\[ {y}^{\prime } = {\mathrm{e}}^{y}\sin x. \]
|
The direction field is symmetric with respect to the \( y \) -axis and periodic in \( x \) of period \( {2\pi } \), i.e., if \( y\left( x\right) \) is a solution, then so are \( u\left( x\right) = y\left( {-x}\right) \) and \( v\left( x\right) = y\left( {x + {2k\pi }}\right) \) . By separation of variables (7) one obtains\n\n\[ \int {\mathrm{e}}^{-y}{dy} = - {\mathrm{e}}^{-y} = \int \sin {xdx} = - \cos x - C; \]\n\ni.e.,\n\n\[ y\left( {x;C}\right) = - \log \left( {\cos x + C}\right) \;\left( {C + \cos x > 0}\right) . \]
|
Yes
|
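A finite-difference check of the resulting family, taking \( C > 1 \) so that \( \cos x + C > 0 \) for all \( x \); this is a sketch with my own sample points.

```python
import math

# y(x; C) = -log(cos x + C) should satisfy y' = e^y * sin x when cos x + C > 0
def y(x, C):
    return -math.log(math.cos(x) + C)

h, C = 1e-6, 2.0
for x in (0.3, 1.0, 2.5):
    deriv = (y(x + h, C) - y(x - h, C)) / (2 * h)
    assert abs(deriv - math.exp(y(x, C)) * math.sin(x)) < 1e-6
```

Indeed \( {y}^{\prime } = \sin x/\left( {\cos x + C}\right) \) and \( {\mathrm{e}}^{y} = 1/\left( {\cos x + C}\right) \), so the identity holds wherever the solution is defined.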
Theorem 1. Let \( \mathop{\lim }\limits_{{t \rightarrow \infty }}B\left( t\right) = \infty \) . If \( u \) is a positive solution, then\n\n\[ \mathop{\lim }\limits_{{t \rightarrow \infty }}u\left( t\right) = \mathop{\lim }\limits_{{t \rightarrow \infty }}\frac{b\left( t\right) }{c\left( t\right) } \]\n\nprovided that the limit on the right side exists.
|
Proof. This theorem is a substantial generalization of 1.XIII.(a). It can be proved by writing \( y = 1/u \) as the quotient \( Z\left( t\right) /N\left( t\right) \) with \( N\left( t\right) = {\mathrm{e}}^{B\left( t\right) } \) . The result then follows using l’Hospital’s rule; since both \( B\left( t\right) \) and \( N\left( t\right) \) tend to \( \infty \) , the rule applies. One gets \( {Z}^{\prime }\left( t\right) /{N}^{\prime }\left( t\right) = c\left( t\right) /b\left( t\right) \), hence \( \mathop{\lim }\limits_{{t \rightarrow \infty }}y\left( t\right) = \mathop{\lim }\limits_{{t \rightarrow \infty }}c\left( t\right) /b\left( t\right) \), which gives the conclusion immediately.
|
Yes
|
Theorem 2. If the coefficients \( b \) and \( c \) are \( T \) -periodic, then there exists exactly one positive \( T \) -periodic solution of (14).
|
Proof. It is sufficient to show that there is exactly one solution with \( u\left( 0\right) = \) \( u\left( T\right) > 0 \) . Under this assumption \( v\left( t\right) \mathrel{\text{:=}} u\left( {t + T}\right) \) is a solution of (14) with \( v\left( 0\right) = u\left( 0\right) \) . Then \( y = 1/u \) and \( z = 1/v \) both satisfy the same linear differential equation and have the same initial values. It follows that \( y = z \) and hence \( u = v \) ; i.e., \( u \) is \( T \) -periodic. If we set \( \tau = 0 \) in (15), then the relation \( u\left( 0\right) = u\left( T\right) \) leads to the equation\n\n\[ \n{y}_{0}\left( {{\mathrm{e}}^{B\left( T\right) } - 1}\right) = {\int }_{0}^{T}c\left( s\right) {\mathrm{e}}^{B\left( s\right) }{ds} > 0,\n\]\n\nwhich can be solved uniquely for \( {y}_{0} \) because \( {\mathrm{e}}^{B\left( T\right) } > 1 \) .
|
Yes
|
Theorem 3. Let the coefficients \( b, c \) be positively bounded. Then equation (13) has exactly one positively bounded solution \( {u}^{ * } \) on \( \mathbb{R} \) ; and if \( u \) is any positive solution, then \( u\left( t\right) - {u}^{ * }\left( t\right) \rightarrow 0 \) as \( t \rightarrow \infty \) .
|
Proof. Let \( \alpha ,\beta ,\gamma ,\delta \) be positive constants with \( \alpha < b < \beta ,\gamma < c/b < \delta \) in \( \mathbb{R} \) . The first set of these inequalities leads to the estimates\n\n\[ \n{\alpha t} < B\left( t\right) < {\beta t}\;\text{ for }\;t > 0,\;{\alpha t} > B\left( t\right) > {\beta t}\;\text{ for }\;t < 0; \n\]\n\nand the second leads to\n\n\[ \nI\left( t\right) \mathrel{\text{:=}} {\int }_{-\infty }^{t}c\left( s\right) {\mathrm{e}}^{B\left( s\right) }{ds} < {\left. \delta {\int }_{-\infty }^{t}b\left( s\right) {\mathrm{e}}^{B\left( s\right) }ds = \delta {\mathrm{e}}^{B\left( s\right) }\right| }_{-\infty }^{t} = \delta {\mathrm{e}}^{B\left( t\right) }, \n\]\n\nand similarly \( I\left( t\right) > \gamma {\mathrm{e}}^{B\left( t\right) } \) .\n\nWe have to show that the linear equation (15) for \( y = 1/u \) has one and only one positively bounded solution. Let \( {y}^{ * } \) be the solution (16) with \( {y}_{0} = I\left( 0\right) \) and \( \tau = 0 \), that is,\n\n\[ \n{y}^{ * }\left( t\right) = {\mathrm{e}}^{-B\left( t\right) }{\int }_{-\infty }^{t}c\left( s\right) {\mathrm{e}}^{B\left( s\right) }{ds}. \n\]\n\n(This, by the way, is the smallest positive solution that exists in all of \( \mathbb{R} \) ; cf. (a).) From the previous estimates it follows that \( \gamma < {y}^{ * } < \delta \) . Since the solution \( z\left( t\right) = {\mathrm{e}}^{-B\left( t\right) } \) of the homogeneous equation is unbounded and all solutions of the nonhomogeneous equation are given by \( y = {y}^{ * } + {\lambda z} \), it follows that \( {y}^{ * } \) is the only positively bounded solution.
|
Yes
|
Example 1. \( {y}^{\prime } = x - 1/y\;\left( {y > 0}\right) \) .
|
In the first example, the special solution \( \phi \) is the only bounded global solution. The solutions above \( \phi \) tend to \( \infty \) as \( x \rightarrow \infty \), while every positive solution beneath \( \phi \) exists only in a finite interval \( \lbrack 0, b) \) and tends to 0 as \( x \rightarrow b - \) .
|
Yes
|
Example 2. \( {y}^{\prime } = {x}^{3} + {y}^{3} \) .
|
Null
|
No
|
The inequality \( {Pv} \geq 0 \) holds for \( v = {\mathrm{e}}^{-x} \), and the inequality \( {Pw} \leq 0 \) is satisfied by the function\n\n\[ w\left( x\right) = \left\{ \begin{array}{lll} 2 - x & \text{ for } & 0 \leq x \leq 1 \\ 1/x & \text{ for } & x > 1 \end{array}\right. \]\n\nThus there exists a global solution \( \phi \) with \( {\mathrm{e}}^{-x} < \phi \left( x\right) < 1/x \) . The reader should show that \( {v}_{1} = 1/\left( {x + {x}^{-2}}\right) \) is also a lower bound.
|
Null
|
No
|
One can choose \( v = - \left( {x + 1}\right), w = - x \), as can be easily seen. Thus there exists a global solution \( \phi \) that satisfies the inequality \( - \left( {x + 1}\right) < \) \( \phi \left( x\right) < - x \) . The reader should show that \( {v}_{1} = - x - 1/\left( {3{x}^{2}}\right) \) is a better lower bound.
|
Null
|
No
|
Example 1. Here the global solutions \( \phi ,\psi \) are bounded; that is, there exists \( L > 0 \) such that \( 0 < \phi < \psi < L \) holds in \( \lbrack 0,\infty ) \) and hence \( {f}_{y}\left( {x, y}\right) = 1/{y}^{2} > \) \( 1/{L}^{2} \mathrel{\text{:=}} \alpha \) .
|
\[ {u}^{\prime } \geq {\alpha u},\;\text{ which implies that }\;u\left( x\right) \geq u\left( 0\right) {\mathrm{e}}^{\alpha x}. \] But \( u \) is bounded by assumption. This contradiction proves the assertion made at the beginning that there is only one bounded global solution. The estimate \( {v}_{1} < \phi < 1/x \) (see XIII) shows that \( \phi \) behaves like \( 1/x \) as \( x \rightarrow \infty \) and that \[ \frac{1}{x} - \frac{1}{{x}^{4}} < \phi \left( x\right) < \frac{1}{x} \]
|
Yes
|
Let \( y \) be a solution and \( y\left( a\right) \geq - a \) for some \( a \geq 0 \). It is easy to see from the differential equation that there exists \( b > a \) with \( y\left( b\right) > 0 \). Since the solution of the initial value problem \( {v}^{\prime } = {v}^{3}, v\left( b\right) = y\left( b\right) \) is a lower solution to \( y \) and since there exists \( c < \infty \) such that \( v\left( x\right) \rightarrow \infty \) as \( x \rightarrow c \), it follows that \( y\left( x\right) \) is not a global solution.
|
If \( \phi \) and \( \psi \) are global solutions with \( \phi < \psi \), then accordingly, \( \psi \left( x\right) < - x \). Thus in (14) we have \( {f}_{y}\left( {x,{y}^{ * }}\right) = 3{y}^{*2} > 3{x}^{2} \), and hence \( u = \psi - \phi \geq \delta \exp \left( {x}^{3}\right) \), where \( \delta = u\left( 0\right) > 0 \). In particular, \( z = - \phi \geq \delta \exp \left( {x}^{3}\right) \). Since \( z \) satisfies \( {z}^{\prime } = {z}^{3} - {x}^{3} \), we have \( {z}^{\prime } > \frac{1}{2}{z}^{3} \) for large \( x \). This implies in a manner similar to the case above that \( z = - \phi \) exists only in a finite interval \( \lbrack a, b) \) and tends to \( \infty \) as \( x \rightarrow b \). Therefore, there exists only one global solution. This proves the assertion made at the beginning for the second example.
|
Yes
|
Corollary 1. If the sets \( A \) and \( B \) are homeomorphic and if \( A \) has the fixed point property, then \( B \) also has the fixed point property.
|
The proof is very simple. Let \( h : A \rightarrow B \) be a homeomorphism and \( f \) : \( B \rightarrow B \) a continuous mapping. Then \( F = {h}^{-1} \circ f \circ h \) is a continuous mapping of \( A \) to itself. If \( x \) is a fixed point of \( F \), then the image point \( \xi = h\left( x\right) \) is a fixed point of \( f \), as one easily verifies.
|
Yes
|
Corollary 2. Let the set \( A \subset {\mathbb{R}}^{n} \) be compact, and let there exist a continuous mapping \( P : {\mathbb{R}}^{n} \rightarrow A \) with \( {\left. P\right| }_{A} = {\operatorname{id}}_{A} \), i.e., \( P\left( x\right) = x \) for \( x \in A \) . Then \( A \) has the fixed point property.
|
For the proof let \( B \supset A \) be a closed ball and \( f : A \rightarrow A \) continuous. Then \( F = f \circ P \) is a continuous mapping of \( B \) into itself. By the Brouwer fixed point theorem, \( F \) has a fixed point \( \xi \), and because \( F\left( B\right) \subset A \), this fixed point belongs to \( A \), whence \( \xi = f\left( \xi \right) \), i.e., \( \xi \) is a fixed point of \( f \) .
|
Yes
|
Corollary 3. A nonempty, convex, and compact set \( A \subset {\mathbb{R}}^{n} \) has the fixed point property.
|
Proof. For every \( x \in {\mathbb{R}}^{n} \) there exists, since \( A \) is convex and compact, exactly one \
|
No
|
Theorem 13. Given a separable Banach space \( E \) such that every sequence of elements of \( E \) that is bounded in norm has a subsequence weakly convergent to an element of \( E \), the space \( E \) is isometrically isomorphic to the space \( {E}^{* * } \) (the dual of \( {E}^{ * } \) ).
|
Though some of these terms have yet to be defined here, their meanings are not important at the moment. An examination of Banach's proof of this theorem shows that the isometric isomorphism he had in mind is the natural map from \( E \) into \( {E}^{* * } \), so the conclusion of Banach’s theorem is that \( E \) must be reflexive. In a note on that theorem given on page 243 of Banach's book, he made the following statement.
|
No
|
Theorem 1 The edge set of a graph can be partitioned into cycles if, and only if, every vertex has even degree.
|
Proof. The condition is clearly necessary, since if a graph is the union of some edge-disjoint cycles and isolated vertices, then a vertex contained in \( k \) cycles has degree \( {2k} \) .\n\nSuppose that every vertex of a graph \( G \) has even degree and \( e\left( G\right) > 0 \) . How can we find a single cycle in \( G \) ? Let \( {x}_{0}{x}_{1}\cdots {x}_{\ell } \) be a path of maximal length \( \ell \) in \( G \) . Since \( {x}_{0}{x}_{1} \in E\left( G\right) \), we have \( d\left( {x}_{0}\right) \geq 2 \) . But then \( {x}_{0} \) has another neighbour \( y \) in addition to \( {x}_{1} \) ; furthermore, we must have \( y = {x}_{i} \) for some \( i,2 \leq i \leq \ell \), since otherwise \( y{x}_{0}{x}_{1}\cdots {x}_{\ell } \) would be a path of length \( \ell + 1 \) . Therefore, we have found our cycle: \( {x}_{0}{x}_{1}\cdots {x}_{i} \) .\n\nHaving found one cycle, \( {C}_{1} \), say, all we have to do is to repeat the procedure over and over again. To formalize this, set \( {G}_{1} = G \), so that \( {C}_{1} \) is a cycle in \( {G}_{1} \) , and define \( {G}_{2} = {G}_{1} - E\left( {C}_{1}\right) \) . Every vertex of \( {G}_{2} \) has even degree, so either \( E\left( {G}_{2}\right) = \varnothing \) or else \( {G}_{2} \) contains a cycle \( {C}_{2} \) . Continuing in this way, we find edge-disjoint cycles \( {C}_{1},{C}_{2},\ldots ,{C}_{s} \) such that \( E\left( G\right) = \mathop{\bigcup }\limits_{{i = 1}}^{s}E\left( {C}_{i}\right) \) .
|
Yes
|
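The extraction procedure in this proof can be sketched in Python; the function names and adjacency-dict representation are my own, and a simple graph is assumed.

```python
def find_cycle(g):
    """Return one cycle (as a vertex list) in g, assuming every
    non-isolated vertex has degree >= 2 (true when all degrees are even)."""
    start = next(v for v in g if g[v])
    walk, pos, prev = [start], {start: 0}, None
    v = start
    while True:
        w = next(u for u in g[v] if u != prev)  # never backtrack
        if w in pos:                            # walk closed on itself
            return walk[pos[w]:]
        pos[w] = len(walk)
        walk.append(w)
        prev, v = v, w

def cycle_partition(adj):
    """Split the edge set of an even-degree graph into edge-disjoint cycles."""
    g = {v: set(ns) for v, ns in adj.items()}
    cycles = []
    while any(g.values()):
        c = find_cycle(g)
        for a, b in zip(c, c[1:] + c[:1]):      # delete the cycle's edges
            g[a].discard(b)
            g[b].discard(a)
        cycles.append(c)
    return cycles

# bow-tie: two triangles sharing vertex 0 -- every degree is even
g0 = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1], 3: [0, 4], 4: [0, 3]}
cycles = cycle_partition(g0)
assert sum(len(c) for c in cycles) == 6 and all(len(c) >= 3 for c in cycles)
```

Deleting a cycle's edges preserves the even-degree property, which is exactly why the loop in `cycle_partition` can always call `find_cycle` again.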
Theorem 2 Every graph of order \( n \) and size greater than \( \left\lfloor {{n}^{2}/4}\right\rfloor \) contains a triangle.
|
Proof. Let \( G \) be a triangle-free graph of order \( n \) . Then \( \Gamma \left( x\right) \cap \Gamma \left( y\right) = \varnothing \) for every edge \( {xy} \in E\left( G\right) \), so\n\n\[ d\left( x\right) + d\left( y\right) \leq n. \]\n\nSumming these inequalities for all \( e\left( G\right) \) edges \( {xy} \), we find that\n\n\[ \mathop{\sum }\limits_{{x \in G}}d{\left( x\right) }^{2} \leq {ne}\left( G\right) \]\n\n(3)\n\nNow by (1) and Cauchy's inequality,\n\n\[ {\left( 2e\left( G\right) \right) }^{2} = {\left( \mathop{\sum }\limits_{{x \in G}}d\left( x\right) \right) }^{2} \leq n\left( {\mathop{\sum }\limits_{{x \in G}}d{\left( x\right) }^{2}}\right) . \]\n\nHence, by (3),\n\n\[ {\left( 2e\left( G\right) \right) }^{2} \leq {n}^{2}e\left( G\right) \]\n\nimplying that \( e\left( G\right) \leq {n}^{2}/4 \) .
|
Yes
|
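The bound is sharp: the complete bipartite graph with parts of sizes \( \lfloor n/2\rfloor \) and \( \lceil n/2\rceil \) is triangle-free with exactly \( \left\lfloor {{n}^{2}/4}\right\rfloor \) edges. A quick arithmetic check (my own sketch):

```python
def max_triangle_free_edges(n):
    """Size of the complete bipartite graph with parts n//2 and n - n//2."""
    return (n // 2) * (n - n // 2)

# this product equals floor(n^2 / 4), the threshold in the theorem
for n in range(3, 50):
    assert max_triangle_free_edges(n) == n * n // 4
```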
Theorem 3 Let \( x \) be a vertex of a graph \( G \) and let \( W \) be the vertex set of the component containing \( x \) . Then the following assertions hold.\ni. \( W = \{ y \in G : G \) contains an \( x - y \) path \( \} \).\nii. \( W = \{ y \in G : G \) contains an \( x - y \) trail \( \} \).\niii. \( W = \{ y \in G : d\left( {x, y}\right) < \infty \} \).\niv. For \( u, v \in V = V\left( G\right) \) put \( {uRv} \) iff \( {uv} \in E\left( G\right) \), and let \( \widetilde{R} \) be the smallest equivalence relation on \( V \) containing \( R \) . Then \( W \) is the equivalence class of \( x \) .
|
Null
|
No
|
Theorem 4 A graph is bipartite iff it does not contain an odd cycle.
|
Proof. Suppose \( G \) is bipartite with vertex classes \( {V}_{1} \) and \( {V}_{2} \) . Let \( {x}_{1}{x}_{2}\cdots {x}_{l} \) be a cycle in \( G \) . We may assume that \( {x}_{1} \in {V}_{1} \) . Then \( {x}_{2} \in {V}_{2},{x}_{3} \in {V}_{1} \), and so on: \( {x}_{i} \in {V}_{1} \) iff \( i \) is odd. Since \( {x}_{l} \in {V}_{2} \), we find that \( l \) is even.\n\nSuppose now that \( G \) does not contain an odd cycle. Since a graph is bipartite iff each component of it is, we may assume that \( G \) is connected. Pick a vertex \( x \in V\left( G\right) \) and put \( {V}_{1} = \{ y : d\left( {x, y}\right) \) is odd \( \} ,{V}_{2} = V \smallsetminus {V}_{1} \) . There is no edge joining two vertices of the same class \( {V}_{i} \), since otherwise \( G \) would contain an odd cycle. Hence \( G \) is bipartite.
|
Yes
|
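The second half of this proof is constructive and is essentially breadth-first 2-colouring; a sketch under my own naming:

```python
from collections import deque

def is_bipartite(adj):
    """2-colour each component by BFS; an edge inside a colour class
    witnesses an odd cycle."""
    colour = {}
    for s in adj:
        if s in colour:
            continue
        colour[s] = 0
        q = deque([s])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in colour:
                    colour[w] = 1 - colour[v]
                    q.append(w)
                elif colour[w] == colour[v]:
                    return False
    return True

# C4 (even cycle) is bipartite, C5 (odd cycle) is not
c4 = {i: [(i - 1) % 4, (i + 1) % 4] for i in range(4)}
c5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
assert is_bipartite(c4) and not is_bipartite(c5)
```

BFS colours by parity of distance from the start vertex, which is exactly the partition \( {V}_{1},{V}_{2} \) used in the proof.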
Theorem 5 A graph is a forest iff for every pair \( \{ x, y\} \) of distinct vertices it contains at most one \( x - y \) path.
|
Proof. If \( {x}_{1}{x}_{2}\cdots {x}_{l} \) is a cycle in a graph \( G \), then \( {x}_{1}{x}_{2}\cdots {x}_{l} \) and \( {x}_{1}{x}_{l} \) are two \( {x}_{1} - {x}_{l} \) paths in \( G \) .\n\nConversely, let \( {P}_{1} = {x}_{0}{x}_{1}\cdots {x}_{l} \) and \( {P}_{2} = {x}_{0}{y}_{1}{y}_{2}\cdots {y}_{k}{x}_{l} \) be two distinct \( {x}_{0} - {x}_{l} \) paths in a graph \( G \) . Let \( i + 1 \) be the minimal index for which \( {x}_{i + 1} \neq {y}_{i + 1} \) and let \( j \) be the minimal index for which \( j \geq i \) and \( {y}_{j + 1} \) is a vertex of \( {P}_{1} \), say \( {y}_{j + 1} = {x}_{h} \) . Then \( {x}_{i}{x}_{i + 1}\cdots {x}_{h}{y}_{j}{y}_{j - 1}\cdots {y}_{i + 1} \) is a cycle in \( G \) .
|
Yes
|
Theorem 6 The following assertions are equivalent for a graph \( G \) .\ni. \( G \) is a tree.\nii. \( G \) is a minimal connected graph, that is, \( G \) is connected and if \( {xy} \in E\left( G\right) \), then \( G - {xy} \) is disconnected. [In other words, \( G \) is connected and every edge is a bridge.]\niii. \( G \) is a maximal acyclic graph; that is, \( G \) is acyclic and if \( x \) and \( y \) are nonadjacent vertices of \( G \), then \( G + {xy} \) contains a cycle.
|
Proof. Suppose \( G \) is a tree. For an edge \( {xy} \in E\left( G\right) \), the graph \( G - {xy} \) cannot contain an \( x - y \) path \( x{z}_{1}{z}_{2}\cdots {z}_{k}y \), since otherwise \( G \) contains the cycle \( x{z}_{1}{z}_{2}\cdots {z}_{k}y \) . Hence \( G - {xy} \) is disconnected, and so \( G \) is a minimal connected graph. Similarly, if \( x \) and \( y \) are nonadjacent vertices of the tree \( G \), then \( G \) contains a path \( x{z}_{1}{z}_{2}\cdots {z}_{k}y \), and so \( G + {xy} \) contains the cycle \( x{z}_{1}{z}_{2}\cdots {z}_{k}y \) . Hence \( G \) is a maximal acyclic graph.\n\nSuppose next that \( G \) is a minimal connected graph. If \( G \) contains a cycle \( x{z}_{1}{z}_{2}\cdots {z}_{k}y \), then \( G - {xy} \) is still connected, since in any \( u - v \) walk in \( G \) the edge \( {xy} \) can be replaced by the path \( x{z}_{1}{z}_{2}\cdots {z}_{k}y \) . As this contradicts the minimality of \( G \), we conclude that \( G \) is acyclic and so it is a tree.\n\nSuppose, finally, that \( G \) is a maximal acyclic graph. Is \( G \) connected? Yes, since if \( x \) and \( y \) belonged to different components, the addition of \( {xy} \) to \( G \) could not create a cycle \( x{z}_{1}{z}_{2}\cdots {z}_{k}y \), since otherwise the path \( x{z}_{1}{z}_{2}\cdots {z}_{k}y \) would be in \( G \), contradicting the maximality of \( G \) . Thus \( G \) is connected and acyclic, i.e., a tree.
|
Yes
|
Corollary 7 Every connected graph contains a spanning tree, that is, a tree containing every vertex of the graph.
|
Proof. Take a minimal connected spanning subgraph.
|
No
|
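One concrete way to "take a minimal connected spanning subgraph" is to keep only the discovery edges of a breadth-first search; the sketch below uses my own names.

```python
from collections import deque

def spanning_tree_edges(adj, root):
    """BFS from root; the discovery edges form a spanning tree of the
    component of root."""
    parent = {root: None}
    q = deque([root])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in parent:
                parent[w] = v
                q.append(w)
    return [(parent[v], v) for v in parent if parent[v] is not None]

# a connected graph on 5 vertices: a 5-cycle plus a chord
g = {0: [1, 4, 2], 1: [0, 2], 2: [1, 3, 0], 3: [2, 4], 4: [3, 0]}
t = spanning_tree_edges(g, 0)
assert len(t) == 4  # |V| - 1 edges, as Corollary 8 predicts
```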
Corollary 8 A tree of order \( n \) has size \( n - 1 \) ; a forest of order \( n \) with \( k \) components has size \( n - k \) .
|
Null
|
No
|
Corollary 9 A tree of order at least 2 contains at least 2 vertices of degree 1.
|
Proof. Let \( {d}_{1} \leq {d}_{2} \leq \cdots \leq {d}_{n} \) be the degree sequence of a tree \( T \) of order \( n \geq 2 \) . Since \( T \) is connected, \( \delta \left( T\right) = {d}_{1} \geq 1 \) . Hence if \( T \) had at most one vertex of degree 1, by (1) and Corollary 8 we would have\n\n\[ \n{2n} - 2 = {2e}\left( T\right) = \mathop{\sum }\limits_{1}^{n}{d}_{i} \geq 1 + 2\left( {n - 1}\right) = {2n} - 1,\n\]\n\na contradiction.
|
Yes
|
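The degree-sum identity can be exercised numerically on random trees grown by attaching each new vertex to a uniformly chosen earlier one (this growth model is an assumption of the sketch, not a construction from the text):

```python
import random

def random_tree_degrees(n, seed=0):
    """Degrees of a random labelled tree on n vertices, built by joining
    each new vertex to a uniformly chosen earlier vertex."""
    rng = random.Random(seed)
    deg = [0] * n
    for v in range(1, n):
        u = rng.randrange(v)
        deg[u] += 1
        deg[v] += 1
    return deg

for n in range(2, 40):
    deg = random_tree_degrees(n, seed=n)
    assert sum(deg) == 2 * (n - 1)          # size n - 1 (Corollary 8)
    assert sum(d == 1 for d in deg) >= 2    # at least two leaves
```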
Theorem 10 Each of the four methods described above produces an economical spanning tree. If no two edges have the same cost, then there is a unique economical spanning tree.
|
Proof. Choose an economical spanning tree \( T \) of \( G \) that has as many edges in common with \( {T}_{1} \) as possible, where \( {T}_{1} \) is a spanning tree constructed by the first method.\n\nSuppose that \( E\left( {T}_{1}\right) \neq E\left( T\right) \) . The edges of \( {T}_{1} \) have been selected one by one: let \( {xy} \) be the first edge of \( {T}_{1} \) that is not an edge of \( T \) . Then \( T \) contains a unique \( x - y \) path, say \( P \) . This path \( P \) has at least one edge, say \( {uv} \), that does not belong to \( {T}_{1} \), since otherwise \( {T}_{1} \) would contain a cycle. When \( {xy} \) was selected as an edge of \( {T}_{1} \), the edge \( {uv} \) was also a candidate. As \( {xy} \) was chosen and not \( {uv} \), the edge \( {xy} \) cannot be costlier than \( {uv} \) ; that is, \( f\left( {xy}\right) \leq f\left( {uv}\right) \) . Then \( {T}^{\prime } = T - {uv} + {xy} \) is a spanning tree, and since \( f\left( {T}^{\prime }\right) = f\left( T\right) - f\left( {uv}\right) + f\left( {xy}\right) \leq f\left( T\right) \), the new tree \( {T}^{\prime } \) is an economical spanning tree of \( G \) . (Of course, this inequality implies that \( f\left( {T}^{\prime }\right) = f\left( T\right) \) and \( f\left( {xy}\right) = f\left( {uv}\right) \) .) This tree \( {T}^{\prime } \) has more edges in common with \( {T}_{1} \) than \( T \), contradicting the choice of \( T \) . Hence \( T = {T}_{1} \), so \( {T}_{1} \) is indeed an economical spanning tree.\n\nSlight variants of the proof above show that the spanning trees \( {T}_{2} \) and \( {T}_{3} \) , constructed by the second and third methods, are also economical. We invite the reader to furnish the details (Exercise 44).\n\nSuppose now that no two edges have the same cost; that is, \( f\left( {xy}\right) \neq f\left( {uv}\right) \) whenever \( {xy} \neq {uv} \) .
Let \( {T}_{4} \) be the spanning tree constructed by the fourth method and let \( T \) be an economical spanning tree. Suppose that \( T \neq {T}_{4} \), and let \( {xy} \) be the first edge not in \( T \) that we select for \( {T}_{4} \) . The edge \( {xy} \) was selected, since it is the least costly edge of \( G \) joining a vertex of a subtree \( F \) of \( {T}_{4} \) to a vertex outside \( F \) . The \( x - y \) path in \( T \) has an edge \( {uv} \) joining a vertex of \( F \) to a vertex outside \( F \), so \( f\left( {xy}\right) < f\left( {uv}\right) \) . However, this is impossible, since \( {T}^{\prime } = T - {uv} + {xy} \) is a spanning tree of \( G \) and \( f\left( {T}^{\prime }\right) < f\left( T\right) \) . Hence \( T = {T}_{4} \) . This shows that \( {T}_{4} \) is indeed an economical spanning tree. Furthermore, since the spanning tree constructed by the fourth method is unique, the economical spanning tree is unique if no two edges have the same cost.
|
Yes
|
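The text leaves the four methods to the surrounding section; the greedy variant usually attributed to Kruskal — scan edges by increasing cost and keep those joining distinct components — can be sketched as follows (the union-find details are my own):

```python
def kruskal(n, edges):
    """Economical (minimum-cost) spanning tree: scan edges by increasing
    cost, keeping an edge iff it joins two different components."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    tree = []
    for u, v, cost in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:                        # no cycle is created
            parent[ru] = rv
            tree.append((u, v, cost))
    return tree

edges = [(0, 1, 4), (0, 2, 1), (1, 2, 2), (1, 3, 5), (2, 3, 8)]
t = kruskal(4, edges)
assert len(t) == 3 and sum(c for _, _, c in t) == 8
```

Since all five costs here are distinct, Theorem 10 guarantees this tree is the unique economical spanning tree.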
Theorem 11 For \( n \geq 3 \) the complete graph \( {K}_{n} \) is decomposable into edge disjoint Hamilton cycles iff \( n \) is odd. For \( n \geq 2 \) the complete graph \( {K}_{n} \) is decomposable into edge-disjoint Hamilton paths iff \( n \) is even.
|
Null
|
No
|
Theorem 12 A non-trivial connected graph has an Euler circuit iff each vertex has even degree.
|
Proof. The conditions are clearly necessary. For example, if \( G \) has an Euler circuit \( {x}_{1}{x}_{2}\cdots {x}_{m} \), and \( x \) occurs \( k \) times in the sequence \( {x}_{1},{x}_{2},\ldots ,{x}_{m} \), then \( d\left( x\right) = {2k} \) .\n\nWe prove the sufficiency of the first condition by induction on the number of edges. If there are no edges, there is nothing to prove, so we proceed to the induction step.\n\nLet \( G \) be a non-trivial connected graph in which each vertex has even degree. Since \( e\left( G\right) \geq 1 \), we find that \( \delta \left( G\right) \geq 2 \), so by Corollary 9, \( G \) contains a cycle. Let \( C \) be a circuit in \( G \) with the maximal number of edges. Suppose \( C \) is not Eulerian. As \( G \) is connected, \( C \) contains a vertex \( x \) that is in a non-trivial component \( H \) of \( G - E\left( C\right) \) . Every vertex of \( H \) has even degree in \( H \), so by the induction hypothesis, \( H \) contains an Euler circuit \( D \) . The circuits \( C \) and \( D \) (see Fig. I.16) are edge-disjoint and have a vertex in common, so they can be concatenated to form a circuit with more edges than \( C \) . As this contradicts the maximality of \( e\left( C\right) \), the circuit \( C \) is Eulerian.\n\nSuppose now that \( G \) is connected and \( x \) and \( y \) are the only vertices of odd degree. Let \( {G}^{ * } \) be obtained from \( G \) by adding to it a vertex \( u \) together with the edges \( {ux} \) and \( {uy} \) . Then, by the first part, \( {G}^{ * } \) has an Euler circuit \( {C}^{ * } \) . Clearly, \( {C}^{ * } - u \) is an Euler trail from \( x \) to \( y \) .
|
Yes
|
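An Euler circuit in a connected even-degree graph can be produced by Hierholzer's algorithm, which mirrors the proof's splicing of edge-disjoint circuits; a sketch assuming a simple connected graph:

```python
def euler_circuit(adj, start):
    """Hierholzer's algorithm: greedily walk until stuck (necessarily back
    at the current subtour's start, since all degrees are even), then
    splice in detours via the explicit stack."""
    g = {v: list(ns) for v, ns in adj.items()}
    stack, circuit = [start], []
    while stack:
        v = stack[-1]
        if g[v]:
            w = g[v].pop()
            g[w].remove(v)      # consume the edge vw in both directions
            stack.append(w)
        else:
            circuit.append(stack.pop())
    return circuit[::-1]

# K5: every vertex has degree 4, so an Euler circuit exists
k5 = {v: [u for u in range(5) if u != v] for v in range(5)}
c = euler_circuit(k5, 0)
assert c[0] == c[-1] == 0 and len(c) == 11  # 10 edges + closing vertex
```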
Theorem 13 Let \( G \) be a directed multigraph with vertex set \( V\left( G\right) = \{ {v}_{1},\ldots ,{v}_{n}\} \) , such that \( {d}^{ + }\left( {v}_{i}\right) = {d}^{ - }\left( {v}_{i}\right) \) for every \( i \) . Denote by \( s\left( G\right) \) the number of Euler circuits of \( G \), and by \( {t}_{i}\left( G\right) \) the number of spanning trees oriented towards \( i \) . Then\n\n\[ s\left( G\right) = {t}_{i}\left( G\right) \mathop{\prod }\limits_{{j = 1}}^{n}\left( {{d}^{ + }\left( {v}_{j}\right) - 1}\right) ! \]\n\nfor every \( i,1 \leq i \leq n \) . In particular, \( {t}_{1}\left( G\right) = \cdots = {t}_{n}\left( G\right) \) .
|
Null
|
No
|
Theorem 14 Let \( G = \left( {V, E}\right) \) be a connected multigraph with \( E \) infinite. Then \( G \) has a two-way infinite Euler trail if and only if the following conditions are satisfied:\n\n(i) \( E \) is countable,\n\n(ii) every degree is even or infinite,\n\n(iii) for every subgraph \( {G}^{\prime } \subset G,{G}^{\prime } = \left( {V,{E}^{\prime }}\right) \), with \( {E}^{\prime } \) finite, the graph \( G - {E}^{\prime } \) has at most two infinite components; furthermore, if \( {d}_{{G}^{\prime }}\left( x\right) \) is even for every \( x \in V \), then \( G - {E}^{\prime } \) has precisely one infinite component.
|
Null
|
No
|
Theorem 15 If a connected plane graph \( G \) has \( n \) vertices, \( m \) edges, and \( f \) faces, then\n\n\[ n - m + f = 2\text{.} \]
|
Proof. Let us apply induction on the number of faces. If \( f = 1 \), then \( G \) does not contain a cycle, so it is a tree, and the result holds by Corollary 8.\n\nSuppose now that \( f > 1 \) and the result holds for smaller values of \( f \) . Let \( {ab} \) be an edge in a cycle of \( G \) . Since a cycle separates the plane, the edge \( {ab} \) is in the boundary of two faces, say \( S \) and \( T \) . Omitting \( {ab} \), in the new plane graph \( {G}^{\prime } \) the faces \( S \) and \( T \) join up to form a new face, while all other faces of \( G \) remain unchanged. Thus if \( {n}^{\prime },{m}^{\prime } \) and \( {f}^{\prime } \) are the parameters of \( {G}^{\prime } \), then \( {n}^{\prime } = n,{m}^{\prime } = m - 1 \) , and \( {f}^{\prime } = f - 1 \) . Hence \( n - m + f = {n}^{\prime } - {m}^{\prime } + {f}^{\prime } = 2 \) .
|
Yes
|
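Euler's formula is easy to sanity-check on standard plane graphs; the Platonic-solid parameters below are well known and are not taken from this text.

```python
# (n, m, f) for the tetrahedron, cube, octahedron, dodecahedron, icosahedron
solids = [(4, 6, 4), (8, 12, 6), (6, 12, 8), (20, 30, 12), (12, 30, 20)]
for n, m, f in solids:
    assert n - m + f == 2   # Euler's formula for connected plane graphs
```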
Theorem 16 A planar graph of order \( n \geq 3 \) has at most \( {3n} - 6 \) edges. Furthermore, a planar graph of order \( n \) and girth at least \( g,3 \leq g < \infty \), has size at most\n\n\[ \max \left\{ {\frac{g}{g - 2}\left( {n - 2}\right), n - 1}\right\} . \]
|
Proof. The first assertion is the case \( g = 3 \) of the second, so it suffices to prove the second assertion. Let \( G \) be a planar graph of order \( n \), size \( m \), and girth at least \( g \) . If \( n \leq g - 1 \), then \( G \) is acyclic, so \( m \leq n - 1 \) . Assume now that \( n \geq g \) and the assertion holds for smaller values of \( n \) . We may assume without loss of generality that \( G \) is connected. If \( {ab} \) is a bridge, then \( G - {ab} \) is the union of two vertex-disjoint subgraphs, say \( {G}_{1} \) and \( {G}_{2} \) . Putting \( {n}_{i} = \left| {G}_{i}\right| ,{m}_{i} = e\left( {G}_{i}\right), i = 1,2 \), by induction we find that\n\n\[ m = {m}_{1} + {m}_{2} + 1 \leq \max \left\{ {\frac{g}{g - 2}\left( {{n}_{1} - 2}\right) ,{n}_{1} - 1}\right\} + \max \left\{ {\frac{g}{g - 2}\left( {{n}_{2} - 2}\right) ,{n}_{2} - 1}\right\} + 1 \leq \max \left\{ {\frac{g}{g - 2}\left( {n - 2}\right), n - 1}\right\} . \]\n\nOn the other hand, if \( G \) is bridgeless, (4) and (5) imply that\n\n\[ {2m} = \mathop{\sum }\limits_{i}i{f}_{i} = \mathop{\sum }\limits_{{i \geq g}}i{f}_{i} \geq g\mathop{\sum }\limits_{i}{f}_{i} = {gf}. \]\n\nHence, by Euler's formula,\n\n\[ m + 2 = n + f \leq n + \frac{2}{g}m, \]\n\nand so\n\n\[ m \leq \frac{g}{g - 2}\left( {n - 2}\right) . \]
|
Yes
|
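The bound gives quick non-planarity proofs: \( {K}_{5} \) has 10 edges against the bound \( {3n} - 6 = 9 \), and the Petersen graph (order 10, girth 5) has 15 edges against \( \frac{5}{3} \cdot 8 \approx {13.3} \). A sketch of the arithmetic:

```python
def planar_size_bound(n, g):
    """Upper bound from Theorem 16 on the size of a planar graph of
    order n >= 3 and girth at least g."""
    return max(g * (n - 2) / (g - 2), n - 1)

# K5: 10 edges > 9, so K5 is not planar
assert 10 > planar_size_bound(5, 3)
# Petersen graph: 15 edges > (5/3) * 8, so it is not planar either
assert 15 > planar_size_bound(10, 5)
```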