| Q | A | Result |
|---|---|---|
Example 3. Find the minimum distance from a point to a line in \( {\mathbb{R}}^{3} \) . Let the line be given as the intersection of two planes whose equations are \( \langle a, x\rangle = k \) and \( \langle b, x\rangle = \ell \) . (Here, \( x, a \), and \( b \) belong to \( {\mathbb{R}}^{3} \) .) Let the point be \( c \) . Then \( H \) should be\n\n\[ \parallel x - c{\parallel }^{2} + \lambda \left\lbrack {\langle a, x\rangle - k}\right\rbrack + \mu \left\lbrack {\langle b, x\rangle - \ell }\right\rbrack \]
|
This \( H \) is a function of \( \left( {{x}_{1},{x}_{2},{x}_{3},\lambda ,\mu }\right) \) . The five equations to solve are\n\n\[ 2\left( {{x}_{1} - {c}_{1}}\right) + \lambda {a}_{1} + \mu {b}_{1} = 2\left( {{x}_{2} - {c}_{2}}\right) + \lambda {a}_{2} + \mu {b}_{2} = 2\left( {{x}_{3} - {c}_{3}}\right) + \lambda {a}_{3} + \mu {b}_{3} = 0 \]\n\n\[ \langle a, x\rangle - k = \langle b, x\rangle - \ell = 0 \]\n\nWe see that \( x \) is of the form \( x = c + {\alpha a} + {\beta b} \) . When this is substituted in the second set of equations, we obtain two linear equations for determining \( \alpha \) and \( \beta \) :\n\n\[ \langle a, a\rangle \alpha + \langle a, b\rangle \beta = k - \langle a, c\rangle \;\text{ and }\;\langle a, b\rangle \alpha + \langle b, b\rangle \beta = \ell - \langle b, c\rangle \]
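A numerical check of Example 3 (numpy assumed available; the data \( a, k, b, \ell, c \) are illustrative, chosen so the answer is obvious):

```python
import numpy as np

# Illustrative data (assumptions, not from the text): the line is
# {x : <a,x> = k, <b,x> = ell}, i.e. x1 = 1, x2 = 2, x3 free.
a = np.array([1.0, 0.0, 0.0]); k = 1.0
b = np.array([0.0, 1.0, 0.0]); ell = 2.0
c = np.array([0.0, 0.0, 3.0])

# The 2x2 linear system from the text determines alpha and beta
# in x = c + alpha*a + beta*b.
M = np.array([[a @ a, a @ b],
              [a @ b, b @ b]])
rhs = np.array([k - a @ c, ell - b @ c])
alpha, beta = np.linalg.solve(M, rhs)

x = c + alpha * a + beta * b          # nearest point of the line to c
dist = np.linalg.norm(x - c)
print(x, dist)                        # here: [1. 2. 3.] and sqrt(5)
```
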
|
Yes
|
Theorem 2. Lagrange Multiplier. Let \( f \) and \( g \) be continuously differentiable real-valued functions on an open set \( \Omega \) in a Banach space. Let \( M = \{ x \in \Omega : g\left( x\right) = 0\} \) . If \( {x}_{0} \) is a local minimum point of \( f \mid M \) and if \( {g}^{\prime }\left( {x}_{0}\right) \neq 0 \), then \( {f}^{\prime }\left( {x}_{0}\right) = \lambda {g}^{\prime }\left( {x}_{0}\right) \) for some \( \lambda \in \mathbb{R} \) .
|
Proof. Let \( X \) be the Banach space in question. Select a neighborhood \( U \) of \( {x}_{0} \) such that\n\n\[ x \in U \cap M \Rightarrow f\left( {x}_{0}\right) \leq f\left( x\right) \]\n\nWe can assume \( U \subset \Omega \) . Define \( F : U \rightarrow {\mathbb{R}}^{2} \) by \( F\left( x\right) = \left( {f\left( x\right), g\left( x\right) }\right) \) . Then \( F\left( {x}_{0}\right) = \left( {f\left( {x}_{0}\right) ,0}\right) \) and \( {F}^{\prime }\left( x\right) v = \left( {{f}^{\prime }\left( x\right) v,{g}^{\prime }\left( x\right) v}\right) \) for all \( v \in X \) . Observe that if \( r < f\left( {x}_{0}\right) \), then \( \left( {r,0}\right) \) is not in \( F\left( U\right) \) . Hence \( F\left( U\right) \) is not a neighborhood of \( F\left( {x}_{0}\right) \) . By the Corollary in Section 4.4, \( {F}^{\prime }\left( {x}_{0}\right) \) is not surjective (as a linear map from \( X \) to \( {\mathbb{R}}^{2} \) ). Its range is therefore contained in a one-dimensional subspace of \( {\mathbb{R}}^{2} \) : \( {F}^{\prime }\left( {x}_{0}\right) v = \alpha \left( v\right) \left( {\theta ,\mu }\right) \) for some fixed pair \( \left( {\theta ,\mu }\right) \) and some continuous linear functional \( \alpha \) . (Thus \( \alpha \in {X}^{ * } \) .) It follows that \( {f}^{\prime }\left( {x}_{0}\right) v = \alpha \left( v\right) \theta \) and \( {g}^{\prime }\left( {x}_{0}\right) v = \alpha \left( v\right) \mu \) . Since \( {g}^{\prime }\left( {x}_{0}\right) \neq 0 \), we have \( \mu \neq 0 \) . Therefore,\n\n\[ {f}^{\prime }\left( {x}_{0}\right) v = \left( {\theta /\mu }\right) \alpha \left( v\right) \mu = \left( {\theta /\mu }\right) {g}^{\prime }\left( {x}_{0}\right) v \]
|
Yes
|
Theorem 3. Lagrange Multipliers. Let \( f,{g}_{1},\ldots ,{g}_{n} \) be continuously differentiable real-valued functions defined on an open set \( \Omega \) in a Banach space \( X \) . Let \( M = \left\{ {x \in \Omega : {g}_{1}\left( x\right) = \cdots = {g}_{n}\left( x\right) = 0}\right\} \) . If \( {x}_{0} \) is a local minimum point of \( f \mid M \) (the restriction of \( f \) to \( M \) ), then there is a nontrivial linear relation of the form\n\n\[ \mu {f}^{\prime }\left( {x}_{0}\right) + {\lambda }_{1}{g}_{1}^{\prime }\left( {x}_{0}\right) + {\lambda }_{2}{g}_{2}^{\prime }\left( {x}_{0}\right) + \cdots + {\lambda }_{n}{g}_{n}^{\prime }\left( {x}_{0}\right) = 0 \]
|
Proof. Select a neighborhood \( U \) of \( {x}_{0} \) such that \( U \subset \Omega \) and such that \( f\left( {x}_{0}\right) \leq f\left( x\right) \) for all \( x \in U \cap M \) . Define \( F : U \rightarrow {\mathbb{R}}^{n + 1} \) by the equation\n\n\[ F\left( x\right) = \left( {f\left( x\right) ,{g}_{1}\left( x\right) ,{g}_{2}\left( x\right) ,\ldots ,{g}_{n}\left( x\right) }\right) \]\n\nIf \( r < f\left( {x}_{0}\right) \), then the point \( \left( {r,0,0,\ldots ,0}\right) \) is not in \( F\left( U\right) \) . Thus \( F\left( U\right) \) does not contain a neighborhood of the point \( \left( {f\left( {x}_{0}\right) ,{g}_{1}\left( {x}_{0}\right) ,\ldots ,{g}_{n}\left( {x}_{0}\right) }\right) = \left( {f\left( {x}_{0}\right) ,0,0,\ldots ,0}\right) \) . By the Corollary in Section 3.4, page 143, \( {F}^{\prime }\left( {x}_{0}\right) \) is not surjective. Since the range of \( {F}^{\prime }\left( {x}_{0}\right) \) is a linear subspace of \( {\mathbb{R}}^{n + 1} \), we now know that it is a proper subspace of \( {\mathbb{R}}^{n + 1} \) . Hence it is contained in a hyperplane through the origin. This means that for some \( \mu ,{\lambda }_{1},\ldots ,{\lambda }_{n} \) (not all zero) we have\n\n\[ \mu {f}^{\prime }\left( {x}_{0}\right) v + {\lambda }_{1}{g}_{1}^{\prime }\left( {x}_{0}\right) v + \cdots + {\lambda }_{n}{g}_{n}^{\prime }\left( {x}_{0}\right) v = 0 \]\n\nfor all \( v \in X \) . This implies the equation in the statement of the theorem.
|
Yes
|
Example 4. Let \( A \) be a compact Hermitian operator on a Hilbert space \( X \) . Then \( \parallel A\parallel = \max \{ \left| \lambda \right| : \lambda \in \Lambda \left( A\right) \} \), where \( \Lambda \left( A\right) \) is the set of eigenvalues of \( A \) .
|
This is proved by Lemma 2, page 92, together with Problem 22, page 101. Then by Lemma 2 in Section 2.3, page 85, we have \( \parallel A\parallel = \sup \{ \left| {\langle {Ax}, x\rangle }\right| : \parallel x\parallel = 1\} \) . Hence we can find an eigenvalue of \( A \) by determining an extremum of \( \langle {Ax}, x\rangle \) on the set defined by \( \parallel x\parallel = 1 \) .
|
No
|
Theorem 4. If \( A \) is a Hermitian operator on a Hilbert space, then each local constrained minimum or maximum point of \( \langle {Ax}, x\rangle \) on the unit sphere is an eigenvector of \( A \). The value of \( \langle {Ax}, x\rangle \) is the corresponding eigenvalue.
|
Proof. Use \( F\left( x\right) = \langle {Ax}, x\rangle \) and \( G\left( x\right) = \parallel x{\parallel }^{2} - 1 \). Then\n\n\[ \n{F}^{\prime }\left( x\right) h = 2\langle {Ax}, h\rangle \;{G}^{\prime }\left( x\right) h = 2\langle x, h\rangle \n\]\n\nOur theorem about Lagrange multipliers gives a necessary condition in order that \( x \) be a local extremum, namely that \( \mu {F}^{\prime }\left( x\right) + \lambda {G}^{\prime }\left( x\right) = 0 \) in a nontrivial manner. Since \( \parallel x\parallel = 1,{G}^{\prime }\left( x\right) \neq 0 \). Hence \( \mu \neq 0 \), and by the homogeneity we can set \( \mu = - 1 \). This leads to\n\n\[ \n- 2\langle {Ax}, h\rangle + {2\lambda }\langle x, h\rangle = 0\;\left( {h \in X}\right) \n\]\n\nwhence \( {Ax} = {\lambda x} \).
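In finite dimensions Theorem 4 is easy to check numerically (numpy assumed available): for a real symmetric matrix, the constrained maximizer of \( \langle Ax, x\rangle \) on the unit sphere is an eigenvector, and the maximum value is the largest eigenvalue.

```python
import numpy as np

# Illustrative check of Theorem 4 for a small real symmetric (Hermitian) A.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                      # symmetric matrix

w, V = np.linalg.eigh(A)               # eigenvalues in ascending order
x = V[:, -1]                           # unit eigenvector for the largest one
val = x @ A @ x                        # <Ax, x> at the constrained maximizer

assert np.allclose(A @ x, val * x)     # Ax = lambda*x with lambda = <Ax, x>

# Random unit vectors never beat the eigenvector:
for u in rng.standard_normal((100, 4)):
    u /= np.linalg.norm(u)
    assert u @ A @ u <= val + 1e-12
print("constrained maximizer is an eigenvector")
```
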
|
Yes
|
Find a function \( y \) in \( {C}^{1}\left\lbrack {a, b}\right\rbrack \), satisfying \( y\left( a\right) = \alpha \) and \( y\left( b\right) = \beta \) , such that the surface of revolution obtained by rotating the graph of \( y \) about the \( x \) -axis has minimum area.
|
\[ {\int }_{a}^{b}{2\pi y}\left( x\right) {ds} = {2\pi }{\int }_{a}^{b}y\left( x\right) \sqrt{1 + {y}^{\prime }{\left( x\right) }^{2}}{dx} \]
|
No
|
Theorem 1. The Euler Equation. Let \( F \) be a mapping from \( {\mathbb{R}}^{3} \) to \( {\mathbb{R}}^{1} \), possessing piecewise continuous partial derivatives of the second order. In order that a function \( y \) in \( {C}^{1}\left\lbrack {a, b}\right\rbrack \) minimize \( {\int }_{a}^{b}F\left( {x, y\left( x\right) ,{y}^{\prime }\left( x\right) }\right) {dx} \) subject to the constraints \( y\left( a\right) = \alpha, y\left( b\right) = \beta \), it is necessary that Euler's equation hold:\n\n(4)\n\n\[ \frac{d}{dx}{F}_{3}\left( {x, y\left( x\right) ,{y}^{\prime }\left( x\right) }\right) = {F}_{2}\left( {x, y\left( x\right) ,{y}^{\prime }\left( x\right) }\right) \]\n\n(Here \( {F}_{2} \) and \( {F}_{3} \) are partial derivatives.)
|
Proof. Let \( u \in {C}^{1}\left\lbrack {a, b}\right\rbrack \) and \( u\left( a\right) = u\left( b\right) = 0 \) . Assume that \( y \) is a solution of the problem. For all real \( \theta, y + {\theta u} \) is a competing function. Hence\n\n\[ {\left. \frac{d}{d\theta }{\int }_{a}^{b}F\left( x, y\left( x\right) + \theta u\left( x\right) ,{y}^{\prime }\left( x\right) + \theta {u}^{\prime }\left( x\right) \right) dx\right| }_{\theta = 0} = 0 \]\n\nThis leads to \( {\int }_{a}^{b}\left( {{F}_{2}u + {F}_{3}{u}^{\prime }}\right) = 0 \) . The second term can be integrated by parts.\n\nThe result is\n\n\[ {\int }_{a}^{b}\left\lbrack {{F}_{2}\left( {x, y\left( x\right) ,{y}^{\prime }\left( x\right) }\right) - \frac{d}{dx}{F}_{3}\left( {x, y\left( x\right) ,{y}^{\prime }\left( x\right) }\right) }\right\rbrack u\left( x\right) {dx} = 0 \]\n\nBy invoking the following lemma, we obtain the Euler equation.\n\nLemma. If \( v \) is piecewise continuous on \( \left\lbrack {a, b}\right\rbrack \) and if \( {\int }_{a}^{b}u\left( x\right) v\left( x\right) {dx} = \) 0 for every \( u \) in \( {C}^{1}\left\lbrack {a, b}\right\rbrack \) that vanishes at the endpoints \( a \) and \( b \), then \( v = 0 \) .\n\nProof. Assume the hypotheses, and suppose that \( v \neq 0 \) . Then there is a nonempty open interval \( \left( {\alpha ,\beta }\right) \) contained in \( \left\lbrack {a, b}\right\rbrack \) in which \( v \) is continuous and has no zero. We may assume that \( v\left( x\right) > 0 \) on \( \left( {\alpha ,\beta }\right) \) . There is a function \( u \) in \( {C}^{1}\left\lbrack {a, b}\right\rbrack \) such that \( u\left( x\right) > 0 \) on \( \left( {\alpha ,\beta }\right) \) and \( u\left( x\right) = 0 \) elsewhere in \( \left\lbrack {a, b}\right\rbrack \) . Since \( {\int }_{a}^{b}{uv} = {\int }_{\alpha }^{\beta }{uv} > 0 \), we have a contradiction, and \( v = 0 \) .
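Equation (4) can be checked on a concrete integrand. For the surface-of-revolution problem stated earlier, \( F(x, y, y') = y\sqrt{1 + y'^2} \), the classical extremal is the catenary \( y = \cosh x \) (a known fact assumed here, not derived above). A small pure-Python sketch verifies \( \frac{d}{dx}F_3 = F_2 \) along it by finite differences:

```python
import math

# Integrand F(x, y, y') = y * sqrt(1 + y'^2); its partials are
# F_2 = dF/dy = sqrt(1 + y'^2) and F_3 = dF/dy' = y*y' / sqrt(1 + y'^2).
def F2(y, yp):
    return math.sqrt(1 + yp ** 2)

def F3(y, yp):
    return y * yp / math.sqrt(1 + yp ** 2)

def F3_along(x):                       # F_3(x, y(x), y'(x)) with y = cosh
    return F3(math.cosh(x), math.sinh(x))

h = 1e-5
for xv in (0.3, 1.0, 2.5):
    dF3 = (F3_along(xv + h) - F3_along(xv - h)) / (2 * h)   # d/dx F_3
    assert abs(dF3 - F2(math.cosh(xv), math.sinh(xv))) < 1e-6
print("Euler equation holds along y = cosh(x)")
```
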
|
Yes
|
Theorem 2. In Theorem 1, if \( {F}_{1} = 0 \), then the Euler equation\nimplies that\n\n(5)\n\n\[ \n{y}^{\prime }\left( x\right) {F}_{3}\left( {x, y\left( x\right) ,{y}^{\prime }\left( x\right) }\right) - F\left( {x, y\left( x\right) ,{y}^{\prime }\left( x\right) }\right) = \text{ constant } \n\]
|
Proof.\n\n\[ \n\frac{d}{dx}\left\lbrack {{y}^{\prime }{F}_{3} - F}\right\rbrack = {y}^{\prime \prime }{F}_{3} + {y}^{\prime }\frac{d}{dx}{F}_{3} - {F}_{1} - {F}_{2}{y}^{\prime } - {F}_{3}{y}^{\prime \prime } = {y}^{\prime }\left\lbrack {\frac{d}{dx}{F}_{3} - {F}_{2}}\right\rbrack = 0, \n\]\n\nsince \( {F}_{1} = 0 \) by hypothesis and \( \frac{d}{dx}{F}_{3} = {F}_{2} \) by the Euler equation.
|
Yes
|
Example 4. This is the Brachistochrone Problem, except that the terminal point is allowed to be anywhere on a given vertical line. Following the previous discussion, we are led to minimize the expression\n\n\[ \n{\int }_{0}^{b}\sqrt{\frac{1 + {y}^{\prime }{\left( x\right) }^{2}}{y\left( x\right) }}{dx} \n\]\n\nsubject to \( y \in {C}^{2}\left\lbrack {0, b}\right\rbrack \) and \( y\left( 0\right) = 0 \) . Notice that the value \( y\left( b\right) \) is not prescribed.
|
Returning now to Example 4, we conclude that \( {F}_{3}\left( {b, y\left( b\right) ,{y}^{\prime }\left( b\right) }\right) = 0 \) . This entails \( {y}^{\prime }\left( b\right) /\sqrt{y\left( b\right) \left\lbrack {1 + {y}^{\prime }{\left( b\right) }^{2}}\right\rbrack } = 0 \), or \( {y}^{\prime }\left( b\right) = 0 \) . Thus the slope of our cycloid must be zero at \( x = b \) . The cycloids going through the initial point are given parametrically by\n\n\[ \n\left\{ \begin{array}{l} x = k\left( {\phi - \sin \phi }\right) \\ y = k\left( {1 - \cos \phi }\right) \end{array}\right. \n\]\n\nThe slope is\n\n\[ \n\frac{dy}{dx} = \frac{dy}{d\phi } \div \frac{dx}{d\phi } = \frac{\sin \phi }{1 - \cos \phi } \n\]\n\nThis is 0 at \( \phi = \pi \) . The value \( x = b \) corresponds to \( \phi = \pi \), and \( k = b/\pi \) . The solution is given by \( x = \left( {b/\pi }\right) \left( {\phi - \sin \phi }\right), y = \left( {b/\pi }\right) \left( {1 - \cos \phi }\right) ,0 \leq \phi \leq \pi \) .
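A numerical sketch of this solution (with an illustrative value \( b = 2 \), not from the text): the cycloid reaches \( x = b \) at \( \phi = \pi \) with horizontal tangent.

```python
import math

# Cycloid x = k(phi - sin phi), y = k(1 - cos phi) with k = b/pi,
# for an illustrative b.
b = 2.0
k = b / math.pi

def point(phi):
    return k * (phi - math.sin(phi)), k * (1 - math.cos(phi))

# At phi = pi the curve reaches x = b ...
x_end, y_end = point(math.pi)
assert abs(x_end - b) < 1e-12

# ... and the slope sin(phi)/(1 - cos(phi)) tends to 0 there.
phi = math.pi - 1e-4
slope = math.sin(phi) / (1 - math.cos(phi))
assert abs(slope) < 1e-3
print(x_end, y_end, slope)
```
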
|
Yes
|
Theorem 3. Any function \( y \) in \( {C}^{2}\left\lbrack {a, b}\right\rbrack \) that minimizes\n\n\[ {\int }_{a}^{b}F\left( {x, y\left( x\right) ,{y}^{\prime }\left( x\right) }\right) {dx} \]\n\nsubject to the constraint \( y\left( a\right) = \alpha \) must satisfy the two conditions\n\n(7)\n\n\[ \frac{d}{dx}{F}_{3}\left( {x, y\left( x\right) ,{y}^{\prime }\left( x\right) }\right) = {F}_{2}\left( {x, y\left( x\right) ,{y}^{\prime }\left( x\right) }\right) \;\text{ and }\;{F}_{3}\left( {b, y\left( b\right) ,{y}^{\prime }\left( b\right) }\right) = 0 \]
|
Proof. This is left as a problem.
|
No
|
Find the function \( y \) that minimizes an integral\n\n\[ \n{\int }_{a}^{b}F\left( {x, y\left( x\right) ,{y}^{\prime }\left( x\right) }\right) {dx} \n\]\n\nsubject to constraints that \( y \) belong to \( {C}^{1}\left\lbrack {a, b}\right\rbrack \) and\n\n\[ \n{\int }_{a}^{b}G\left( {x, y\left( x\right) ,{y}^{\prime }\left( x\right) }\right) {dx} = 0\;y\left( a\right) = \alpha \;y\left( b\right) = \beta \n\]
|
Theorem 4. If \( F \) and \( G \) map \( {\mathbb{R}}^{3} \) to \( \mathbb{R} \) and have continuous partial derivatives of the second order, and if \( y \) is an element of \( {C}^{2}\left\lbrack {a, b}\right\rbrack \) that minimizes \( {\int }_{a}^{b}F\left( {x, y\left( x\right) ,{y}^{\prime }\left( x\right) }\right) {dx} \) subject to endpoint constraints and \( {\int }_{a}^{b}G\left( {x, y\left( x\right) ,{y}^{\prime }\left( x\right) }\right) {dx} = 0 \), then there is a nontrivial linear combination \( H = {\mu F} + {\lambda G} \) such that\n\n(8)\n\n\[ \n{H}_{2}\left( {x, y\left( x\right) ,{y}^{\prime }\left( x\right) }\right) = \frac{d}{dx}{H}_{3}\left( {x, y\left( x\right) ,{y}^{\prime }\left( x\right) }\right) \n\]\n\nProof. As in previous problems of this section, we try to obtain a necessary condition for a solution by perturbing the solution in such a way that the constraints are not violated. Suppose that \( y \) is a solution in \( {C}^{1}\left\lbrack {a, b}\right\rbrack \) . Let \( {\eta }_{1} \) and \( {\eta }_{2} \) be two functions in \( {C}^{1}\left\lbrack {a, b}\right\rbrack \) that vanish at the endpoints. Consider the function \( z = y + {\theta }_{1}{\eta }_{1} + {\theta }_{2}{\eta }_{2} \) . It belongs to \( {C}^{1}\left\lbrack {a, b}\right\rbrack \) and takes correct values at the endpoints: \( z\left( a\right) = \alpha, z\left( b\right) = \beta \) . We require two perturbing functions, \( {\eta }_{1} \) and \( {\eta }_{2} \) because the constraint \( {\int }_{a}^{b}G\left( {x, z\left( x\right) ,{z}^{\prime }\left( x\right) }\right) {dx} = 0 \) will be true only if we allow a relationship between the two parameters \( {\theta }_{1} \) and \( {\theta }_{2} \) . 
Let \( I\left( {{\theta }_{1},{\theta }_{2}}\right) = {\int }_{a}^{b}F\left( {x, z\left( x\right) ,{z}^{\prime }\left( x\right) }\right) {dx} \) and \( J\left( {{\theta }_{1},{\theta }_{2}}\right) = {\int }_{a}^{b}G\left( {x, z\left( x\right) ,{z}^{\prime }\left( x\right) }\right) {dx} \) . The minimum of \( I\left( {{\theta }_{1},{\theta }_{2}}\right) \) under the constraint \( J\left( {{\theta }_{1},{\theta }_{2}}\right) = 0 \) occurs at \( {\theta }_{1} = {\theta }_{2} = 0 \) , because \( y \) is a solution of the original problem. By the Theorem on Lagrange Multipliers (Theorem 3, page 148), there is a nontrivial linear relation of the form \( \mu {I}^{\prime }\left( {0,0}\right) + \lambda {J}^{\prime }\left( {0,0}\right) = 0 \) . Thus\n\n\[ \n\mu \frac{\partial I}{\partial {\theta }_{1}} + \lambda \frac{\partial J}{\partial {\theta }_{1}} = 0\text{ at }\left( {{\theta }_{1},{\theta }_{2}}\right) = \left( {0,0}\right) ,\text{ and }\mu \frac{\partial I}{\partial {\theta }_{2}} + \lambda \frac{\partial J}{\partial {\theta }_{2}} = 0\text{ at }\left( {0,0}\right) \n\]\n\nFollowing the usual procedure, including an integration by parts, we eventually obtain Equation (8).
|
Yes
|
Theorem 4. If \( F \) and \( G \) map \( {\mathbb{R}}^{3} \) to \( \mathbb{R} \) and have continuous partial derivatives of the second order, and if \( y \) is an element of \( {C}^{2}\left\lbrack {a, b}\right\rbrack \) that minimizes \( {\int }_{a}^{b}F\left( {x, y\left( x\right) ,{y}^{\prime }\left( x\right) }\right) {dx} \) subject to endpoint constraints and \( {\int }_{a}^{b}G\left( {x, y\left( x\right) ,{y}^{\prime }\left( x\right) }\right) {dx} = 0 \), then there is a nontrivial linear combination \( H = {\mu F} + {\lambda G} \) such that\n\n(8)\n\n\[ \n{H}_{2}\left( {x, y\left( x\right) ,{y}^{\prime }\left( x\right) }\right) = \frac{d}{dx}{H}_{3}\left( {x, y\left( x\right) ,{y}^{\prime }\left( x\right) }\right) \n\]
|
Proof. As in previous problems of this section, we try to obtain a necessary condition for a solution by perturbing the solution in such a way that the constraints are not violated. Suppose that \( y \) is a solution in \( {C}^{1}\left\lbrack {a, b}\right\rbrack \) . Let \( {\eta }_{1} \) and \( {\eta }_{2} \) be two functions in \( {C}^{1}\left\lbrack {a, b}\right\rbrack \) that vanish at the endpoints. Consider the function \( z = y + {\theta }_{1}{\eta }_{1} + {\theta }_{2}{\eta }_{2} \) . It belongs to \( {C}^{1}\left\lbrack {a, b}\right\rbrack \) and takes correct values at the endpoints: \( z\left( a\right) = \alpha, z\left( b\right) = \beta \) . We require two perturbing functions, \( {\eta }_{1} \) and \( {\eta }_{2} \) because the constraint \( {\int }_{a}^{b}G\left( {x, z\left( x\right) ,{z}^{\prime }\left( x\right) }\right) {dx} = 0 \) will be true only if we allow a relationship between the two parameters \( {\theta }_{1} \) and \( {\theta }_{2} \) . Let \( I\left( {{\theta }_{1},{\theta }_{2}}\right) = {\int }_{a}^{b}F\left( {x, z\left( x\right) ,{z}^{\prime }\left( x\right) }\right) {dx} \) and \( J\left( {{\theta }_{1},{\theta }_{2}}\right) = {\int }_{a}^{b}G\left( {x, z\left( x\right) ,{z}^{\prime }\left( x\right) }\right) {dx} \) . The minimum of \( I\left( {{\theta }_{1},{\theta }_{2}}\right) \) under the constraint \( J\left( {{\theta }_{1},{\theta }_{2}}\right) = 0 \) occurs at \( {\theta }_{1} = {\theta }_{2} = 0 \) , because \( y \) is a solution of the original problem. By the Theorem on Lagrange Multipliers (Theorem 3, page 148), there is a nontrivial linear relation of the form \( \mu {I}^{\prime }\left( {0,0}\right) + \lambda {J}^{\prime }\left( {0,0}\right) = 0 \) . 
Thus\n\n\[ \n\mu \frac{\partial I}{\partial {\theta }_{1}} + \lambda \frac{\partial J}{\partial {\theta }_{1}} = 0\text{ at }\left( {{\theta }_{1},{\theta }_{2}}\right) = \left( {0,0}\right) ,\text{ and }\mu \frac{\partial I}{\partial {\theta }_{2}} + \lambda \frac{\partial J}{\partial {\theta }_{2}} = 0\text{ at }\left( {0,0}\right) \n\]\n\nFollowing the usual procedure, including an integration by parts, we eventually obtain Equation (8).
|
Yes
|
It is required to find the curve of given length \( \ell \) joining the point \( \left( {-1,0}\right) \) to the point \( \left( {1,0}\right) \) that, together with the interval \( \left\lbrack {-1,1}\right\rbrack \) on the horizontal axis, encloses the greatest possible area.
|
We assume that \( 2 < \ell < \pi \) . Let the curve be given by \( y = y\left( x\right) \), where \( y \) belongs to \( {C}^{1}\left\lbrack {-1,1}\right\rbrack \) . The area to be maximized is then\n\n\[ \n{\int }_{-1}^{1}y\left( x\right) {dx} \n\]\n\nand the constraints are\n\n\[ \n{\int }_{-1}^{1}\sqrt{1 + {y}^{\prime }{\left( x\right) }^{2}}{dx} = \ell \;y\left( {-1}\right) = y\left( 1\right) = 0 \n\]\n\nThis problem can be treated with Theorem 4, taking\n\n\[ \nF\left( {x, y,{y}^{\prime }}\right) = y\;\text{ and }\;G\left( {x, y,{y}^{\prime }}\right) = \sqrt{1 + {\left( {y}^{\prime }\right) }^{2}} - \ell /2 \n\]\n\nThe necessary condition of Theorem 4 is that for a suitable nontrivial pair of coefficients \( \mu \) and \( \lambda \)\n\n\[ \n\left\lbrack {{\left( \mu F + \lambda G\right) }_{2} - \frac{d}{dx}{\left( \mu F + \lambda G\right) }_{3}}\right\rbrack \left( {x, y\left( x\right) ,{y}^{\prime }\left( x\right) }\right) = 0 \n\]\n\n(In these equations, subscript 2 means a partial derivative with respect to the second argument of the function, and so on.) In the case being considered, we have \( {F}_{2} = 1,{F}_{3} = 0,{G}_{2} = 0 \), and \( {G}_{3} = {y}^{\prime }\left( x\right) {\left\lbrack 1 + {y}^{\prime }{\left( x\right) }^{2}\right\rbrack }^{-1/2} \) . The necessary condition then reads\n\n\[ \n\mu - \lambda \frac{d}{dx}\frac{{y}^{\prime }\left( x\right) }{\sqrt{1 + {y}^{\prime }{\left( x\right) }^{2}}} = 0 \n\]\n\nIf \( \mu = 0 \), then \( \lambda \) must be 0 as well. 
Hence we are free to set \( \mu = 1 \) and integrate the previous equation, arriving at\n\n\[ \nx - \frac{\lambda {y}^{\prime }\left( x\right) }{\sqrt{1 + {y}^{\prime }{\left( x\right) }^{2}}} = {c}_{1} \n\]\n\nThis can be solved for \( {y}^{\prime }\left( x\right) \) :\n\n\[ \n{y}^{\prime }\left( x\right) = \frac{x - {c}_{1}}{\sqrt{{\lambda }^{2} - {\left( x - {c}_{1}\right) }^{2}}} \n\]\n\nAnother integration leads to\n\n\[ \ny\left( x\right) = - \sqrt{{\lambda }^{2} - {\left( x - {c}_{1}\right) }^{2}} + {c}_{2} \n\]\n\nWe see that the curve is a circle by writing this last equation in the form\n\n\[ \n{\left( x - {c}_{1}\right) }^{2} + {\left( y - {c}_{2}\right) }^{2} = {\lambda }^{2} \n\]\nSince the circle must pass through the points \( \left( {-1,0}\right) \) and \( \left( {1,0}\right) \), we find that \( {c}_{1} = 0 \) and that \( 1 + {c}_{2}^{2} = {\lambda }^{2} \) . When the condition on the length of the arc is imposed, we obtain \( \ell = {2\lambda }\operatorname{Arcsin}\frac{1}{\lambda } \), from which \( \lambda \) can be computed.
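The final equation \( \ell = 2\lambda \operatorname{Arcsin}(1/\lambda) \) determines \( \lambda \) only implicitly; since the left side decreases from \( \pi \) to 2 as \( \lambda \) runs from 1 to \( \infty \), bisection works. A pure-Python sketch with an illustrative \( \ell = 2.5 \):

```python
import math

# Solve ell = 2*lambda*arcsin(1/lambda) by bisection; recall 2 < ell < pi
# and lambda >= 1. The value ell = 2.5 is illustrative.
ell = 2.5

def arc_length(lam):
    return 2 * lam * math.asin(1 / lam)

lo, hi = 1.0 + 1e-12, 1e6     # arc_length decreases from ~pi to ~2 here
for _ in range(200):
    mid = (lo + hi) / 2
    if arc_length(mid) > ell:
        lo = mid               # arc still too long: flatten it (bigger lambda)
    else:
        hi = mid
lam = (lo + hi) / 2
print(lam, arc_length(lam))
assert abs(arc_length(lam) - ell) < 1e-9
```
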
|
Yes
|
Among all the plane curves having a prescribed length, find one enclosing the greatest area. We assume a parametric representation \( x = x\left( t\right) \) and \( y = y\left( t\right) \) with continuously differentiable functions. We can also assume that \( 0 \leq t \leq b \) and that \( x\left( 0\right) = x\left( b\right) \) , \( y\left( 0\right) = y\left( b\right) \) so the curve is closed. Let us assume further that as \( t \) increases from 0 to \( b \), the curve is described in the counterclockwise direction. The region enclosed is then always on the left. Recall Green's Theorem, [Wid1], page 223:
|
\[ \n{\int }_{\Gamma }\left( {{Pdx} + {Qdy}}\right) = {\iint }_{R}\left\lbrack {{Q}_{1}\left( {x, y}\right) - {P}_{2}\left( {x, y}\right) }\right\rbrack {dxdy} \]\n\nwhere \( R \) is the region enclosed by the curve \( \Gamma \) and the subscripts denote partial derivatives. A special case of Green's Theorem is\n\n\[ \n\frac{1}{2}{\int }_{\Gamma }\left( {-y\;{dx} + x\;{dy}}\right) = \frac{1}{2}{\iint }_{R}\left( {1 + 1}\right) \;{dx}\;{dy} = \text{Area of}\;R \]\n\nThus our isoperimetric problem is to maximize the integral\n\n\[ \n{\int }_{0}^{b}\left( {-y\frac{dx}{dt} + x\frac{dy}{dt}}\right) {dt} \]\n\nsubject to\n\n\[ \n{\int }_{0}^{b}\sqrt{{\left( \frac{dx}{dt}\right) }^{2} + {\left( \frac{dy}{dt}\right) }^{2}}{dt} = \text{ constant } \]\n\nThis isoperimetric problem involves again the minimization of an integral subject to a constraint expressed as an integral. But now we have two unknown functions to be determined. A straightforward extension of Theorem 4 applies in this situation. 
Suppose that we wish to minimize\n\n\[ \n{\int }_{a}^{b}F\left( {t, x\left( t\right) ,{x}^{\prime }\left( t\right), y\left( t\right) ,{y}^{\prime }\left( t\right) }\right) {dt} \]\n\nsubject to the usual endpoint constraints and a constraint\n\n\[ \n{\int }_{a}^{b}G\left( {t, x\left( t\right) ,{x}^{\prime }\left( t\right), y\left( t\right) ,{y}^{\prime }\left( t\right) }\right) {dt} = 0 \]\n\nThe Euler necessary condition is that for a suitable nontrivial linear combination \( H = {\mu F} + {\lambda G} \)\n\n\[ \n{H}_{2}\left( {t, x,{x}^{\prime }, y,{y}^{\prime }}\right) = \frac{d}{dt}{H}_{3}\left( {t, x,{x}^{\prime }, y,{y}^{\prime }}\right) \]\n\n\[ \n{H}_{4}\left( {t, x,{x}^{\prime }, y,{y}^{\prime }}\right) = \frac{d}{dt}{H}_{5}\left( {t, x,{x}^{\prime }, y,{y}^{\prime }}\right) \]\nIf we apply this result to Example 7, we will use\n\n\[ \nH\left( {t, x,{x}^{\prime }, y,{y}^{\prime }}\right) = \mu \left( {x{y}^{\prime } - y{x}^{\prime }}\right) + \lambda \sqrt{{x}^{\prime 2} + {y}^{\prime 2}} \]\n\nThe Euler equations are\n\n\[ \n\begin{cases} \mu {y}^{\prime } & = \frac{d}{dt}\left\lbrack {-{\mu y} + \lambda {x}^{\prime }{\left( {x}^{\prime 2} + {y}^{\prime 2}\right) }^{-1/2}}\right\rbrack \\ - \mu {x}^{\prime } & = \frac{d}{dt}\left\lbrack {{\mu x} + \lambda {y}^{\prime }{\left( {x}^{\prime 2} + {y}^{\prime 2}\right) }^{-1/2}}\right\rbrack \end{cases} \]\n\nUpon integrating these with respect to \( t \), we obtain\n\n\[ \n{2\mu y} = \lambda {x}^{\prime }{\left( {x}^{\prime 2} + {y}^{\prime 2}\right) }^{-1/2} + A \]\n\n\[ \n- {2\mu x} = \lambda {y}^{\prime }{\left( {x}^{\prime 2} + {y}^{\prime 2}\right) }^{-1/2} - B \]\n\nIf \( \mu = 0 \), we infer that \( {x}^{\prime }{\left( {x}^{\prime 2} + {y}^{\prime 2}\right) }^{-1/2} \) and \( {y}^{\prime }{\left( {x}^{\prime 2} + {y}^{\prime 2}\right) }^{-1/2} \) are constant, so the curve is a straight line and cannot be closed. Hence \( \mu \neq 0 \), and the two integrated equations give \( {\left( {2\mu x} - B\right) }^{2} + {\left( {2\mu y} - A\right) }^{2} = {\lambda }^{2} \) : the curve is a circle.
|
Yes
|
Example 8. What is the path of a light beam if the velocity of light in the medium is \( c = {\alpha y} \) (where \( \alpha \) is a constant)?
|
Solution. The path is the graph of a function \( y \) such that\n\n\[ x = \int \frac{k\alpha y}{\sqrt{1 - {k}^{2}{\alpha }^{2}{y}^{2}}}{dy} \]\n\nThe integration produces\n\n\[ x = \frac{-1}{k\alpha }\sqrt{1 - {k}^{2}{\alpha }^{2}{y}^{2}} + A \]\n\nHere \( A \) and \( k \) are constants that can be adjusted so that the path passes through two given points. The equation can be written in the form\n\n\[ {\left( x - A\right) }^{2} = \frac{1}{{k}^{2}{\alpha }^{2}}\left( {1 - {k}^{2}{\alpha }^{2}{y}^{2}}\right) \]\n\nor in the standard form of a circle:\n\n\[ {\left( x - A\right) }^{2} + {y}^{2} = \frac{1}{{k}^{2}{\alpha }^{2}} \]
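A numerical check of Example 8's conclusion (constants \( k \), \( \alpha \), \( A \) are illustrative): the circular arc satisfies the differential relation \( dx/dy = k\alpha y/\sqrt{1 - k^2\alpha^2 y^2} \) that defines the path.

```python
import math

# Illustrative constants; radius of the circle is R = 1/(k*alpha).
k, alpha, A = 0.5, 2.0, 3.0
R = 1 / (k * alpha)

def x_of_y(y):                         # the branch integrated in the text
    return A - (1 / (k * alpha)) * math.sqrt(1 - (k * alpha * y) ** 2)

h = 1e-6
for y in (0.1, 0.4, 0.8):
    dxdy = (x_of_y(y + h) - x_of_y(y - h)) / (2 * h)
    expected = k * alpha * y / math.sqrt(1 - (k * alpha * y) ** 2)
    assert abs(dxdy - expected) < 1e-6
    # every point lies on the circle of radius R centered at (A, 0)
    assert abs((x_of_y(y) - A) ** 2 + y ** 2 - R ** 2) < 1e-12
print("path is a circular arc")
```
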
|
Yes
|
Theorem 5. Suppose that \( {y}_{1},\ldots ,{y}_{n} \) are functions (of \( t \) ) in \( {C}^{2}\left\lbrack {a, b}\right\rbrack \)\n\nthat minimize the integral\n\n\[ \n{\int }_{a}^{b}F\left( {{y}_{1},\ldots ,{y}_{n},{y}_{1}^{\prime },\ldots ,{y}_{n}^{\prime }}\right) {dt} \n\]\n\nsubject to endpoint constraints that prescribe values for all \( {y}_{i}\left( a\right) ,{y}_{i}\left( b\right) \) .\n\nThen the Euler Equations hold:\n\n(9)\n\n\[ \n\frac{d}{dt}\frac{\partial F}{\partial {y}_{i}^{\prime }} = \frac{\partial F}{\partial {y}_{i}}\;\left( {1 \leq i \leq n}\right) \n\]
|
Proof. Take functions \( {\eta }_{1},\ldots ,{\eta }_{n} \) in \( {C}^{2}\left\lbrack {a, b}\right\rbrack \) that vanish at the endpoints. The expression\n\n\[ \n{\int }_{a}^{b}F\left( {{y}_{1} + {\theta }_{1}{\eta }_{1},\ldots ,{y}_{n} + {\theta }_{n}{\eta }_{n},{y}_{1}^{\prime } + {\theta }_{1}{\eta }_{1}^{\prime },\ldots ,{y}_{n}^{\prime } + {\theta }_{n}{\eta }_{n}^{\prime }}\right) {dt} \n\]\n\nwill have a minimum when \( \left( {{\theta }_{1},\ldots ,{\theta }_{n}}\right) = \left( {0,0,\ldots ,0}\right) \) . Proceeding as in previous proofs, one arrives at the given equations.
|
Yes
|
We search for geodesics on a cylinder. Let the surface be the cylinder \( {x}^{2} + {z}^{2} = 1 \), or \( z = {\left( 1 - {x}^{2}\right) }^{1/2} \) (upper-half cylinder). In the general theory, \( F\left( {x, y,{x}^{\prime },{y}^{\prime }}\right) = \sqrt{{x}^{\prime 2} + {y}^{\prime 2} + {\left( {z}_{x}{x}^{\prime } + {z}_{y}{y}^{\prime }\right) }^{2}} \) . In this particular case this is \[ F = {\left\lbrack {x}^{\prime 2} + {y}^{\prime 2} + {z}_{x}^{2}{x}^{\prime 2}\right\rbrack }^{1/2} = {\left\lbrack {\left( 1 - {x}^{2}\right) }^{-1}{x}^{\prime 2} + {y}^{\prime 2}\right\rbrack }^{1/2} \]
|
Then computations show that \[ \frac{\partial F}{\partial x} = \frac{x{x}^{\prime 2}}{{\left( 1 - {x}^{2}\right) }^{2}F}\;\frac{\partial F}{\partial {x}^{\prime }} = \frac{{x}^{\prime }}{\left( {1 - {x}^{2}}\right) F}\;\frac{\partial F}{\partial y} = 0\;\frac{\partial F}{\partial {y}^{\prime }} = \frac{{y}^{\prime }}{F} \] To simplify the work we take \( t \) to be arc length and drop the requirement that \( 0 \leq t \leq 1 \) . Since \( {dt} = {ds} = \sqrt{{x}^{\prime 2} + {y}^{\prime 2} + {z}^{\prime 2}}{dt} \), we have \( {x}^{\prime 2} + {y}^{\prime 2} + {z}^{\prime 2} = 1 \) and \( F\left( {x, y,{x}^{\prime },{y}^{\prime }}\right) = 1 \) along the minimizing curve. The Euler equations yield \[ \frac{\left( {1 - {x}^{2}}\right) {x}^{\prime \prime } + {2x}{x}^{\prime 2}}{{\left( 1 - {x}^{2}\right) }^{2}} = \frac{x{x}^{\prime 2}}{{\left( 1 - {x}^{2}\right) }^{2}}\;\text{ and }\;{y}^{\prime \prime } = 0 \] The first of these can be written \( {x}^{\prime \prime } = x{x}^{\prime 2}/\left( {{x}^{2} - 1}\right) \) . The second one gives \( y = {at} + b \), for appropriate constants \( a \) and \( b \) that depend on the boundary conditions. The condition \( 1 = {F}^{2} \) leads to \( {x}^{\prime 2}/\left( {1 - {x}^{2}}\right) + {y}^{\prime 2} = 1 \) and then to \( {x}^{\prime 2}/\left( {1 - {x}^{2}}\right) = 1 - {a}^{2} \) . The Euler equation for \( x \) then simplifies to \( {x}^{\prime \prime } = \left( {{a}^{2} - 1}\right) x \) . There are three interesting cases: Case 1: \( a = 1 \) . Then \( {x}^{\prime \prime } = 0 \), and thus both \( x\left( t\right) \) and \( y\left( t\right) \) are linear expressions in \( t \) . The path is a straight line on the surface (necessarily parallel to the \( y \) -axis). Case 2: \( a = 0 \) . Then \( {x}^{\prime \prime } = - x \), and \( x = c\cos \left( {t + d}\right) \) for suitable constants \( c \) and \( d \) . The condition \( {x}^{\prime 2}/\left( {1 - {x}^{2}}\right) = 1 \) gives us \( c = 1 \) . 
It follows that \( x = \cos \left( {t + d}\right), y = b \), and \( z = \sqrt{1 - {x}^{2}} = \sin \left( {t + d}\right) \) . The curve is a circle parallel to the \( {xz} \) -plane. Case 3: \( 0 < a < 1 \) . Then \( x = c\cos \left( {\sqrt{1 - {a}^{2}}t + d}\right) \), and as before, \( c = 1 \) . Again \( z = \sin \left( {\sqrt{1 - {a}^{2}}t + d}\right) \), and \( y = {at} + b \) . The curve is a spiral.
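Case 3 can be checked numerically (pure Python; the constants \( a, b, d \) are illustrative): the spiral has unit speed and its \( x \)-coordinate satisfies the reduced Euler equation \( x'' = (a^2 - 1)x \).

```python
import math

# Spiral geodesic x = cos(w t + d), y = a t + b, z = sin(w t + d),
# with w = sqrt(1 - a^2). Illustrative constants:
a, b, d = 0.6, 0.0, 0.2
w = math.sqrt(1 - a * a)

def x(t): return math.cos(w * t + d)
def z(t): return math.sin(w * t + d)

h = 1e-4
for t in (0.0, 1.0, 3.0):
    xp = (x(t + h) - x(t - h)) / (2 * h)        # x'
    zp = (z(t + h) - z(t - h)) / (2 * h)        # z'
    yp = a                                      # y' is constant
    assert abs(xp * xp + yp * yp + zp * zp - 1.0) < 1e-6   # unit speed
    xpp = (x(t + h) - 2 * x(t) + x(t - h)) / (h * h)       # x''
    assert abs(xpp - (a * a - 1) * x(t)) < 1e-6            # Euler equation
print("spiral is a unit-speed geodesic")
```
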
|
Yes
|
We wish to minimize the expression \( {\int }_{0}^{b}{\int }_{0}^{a}\left( {{\phi }_{x}^{2} + {\phi }_{y}^{2}}\right) {dxdy} \) subject to the constraints that \( \phi \) be a continuously differentiable function on the rectangle \( R = \{ \left( {x, y}\right) : 0 \leq x \leq a,0 \leq y \leq b\} \), that \( \phi = 0 \) on the perimeter of \( R \), and that \( {\iint }_{R}{\phi }^{2}{dxdy} = 1 \).
|
A suitable set of base functions for this problem is the doubly indexed sequence\n\n\[ \n{u}_{nm}\left( {x, y}\right) = \frac{2}{\sqrt{ab}}\sin \frac{n\pi x}{a}\sin \frac{m\pi y}{b}\;\left( {n, m \geq 1}\right) \n\]\n\nIt turns out that this is an orthonormal set with respect to the inner product \( \langle u, v\rangle = {\iint }_{R}u\left( {x, y}\right) v\left( {x, y}\right) {dxdy} \). We are looking for a function \( \phi = \) \( \mathop{\sum }\limits_{{n, m = 1}}^{\infty }{c}_{nm}{u}_{nm} \) that will solve the problem. Clearly, the function \( \phi \) vanishes on the perimeter of \( R \). The condition \( {\iint }_{R}{\phi }^{2} = 1 \) means \( \mathop{\sum }\limits_{{n, m = 1}}^{\infty }{c}_{nm}^{2} = 1 \) by the Parseval identity (page 73). Now we compute\n\n\[ \n\frac{\partial }{\partial x}{u}_{nm}\left( {x, y}\right) = \frac{2}{\sqrt{ab}}\frac{n\pi }{a}\cos \frac{n\pi x}{a}\sin \frac{m\pi y}{b} \n\]\n\nThe system of functions\n\n\[ \n\frac{2}{\sqrt{ab}}\cos \frac{n\pi x}{a}\sin \frac{m\pi y}{b} \n\]\n\nis also orthonormal. Thus \( {\iint }_{R}{\phi }_{x}^{2} = \mathop{\sum }\limits_{{n, m}}{\left( \frac{n\pi }{a}{c}_{nm}\right) }^{2} \). Similarly,\n\n\[ \n{\iint }_{R}{\phi }_{y}^{2} = \mathop{\sum }\limits_{{n, m}}{\left( \frac{m\pi }{b}{c}_{nm}\right) }^{2} \n\]\n\nHence we are trying to minimize the expression\n\n\[ \n{\iint }_{R}({\phi }_{x}^{2} + {\phi }_{y}^{2}) = \mathop{\sum }\limits_{{n, m}}\left\lbrack {{\left( \frac{n\pi }{a}{c}_{nm}\right) }^{2} + {\left( \frac{m\pi }{b}{c}_{nm}\right) }^{2}}\right\rbrack = {\pi }^{2}\mathop{\sum }\limits_{{n, m}}\left( {\frac{{n}^{2}}{{a}^{2}} + \frac{{m}^{2}}{{b}^{2}}}\right) {c}_{nm}^{2} \n\]\n\nsubject to the constraint \( \mathop{\sum }\limits_{{n, m}}{c}_{nm}^{2} = 1 \). Since the coefficient \( \frac{{n}^{2}}{{a}^{2}} + \frac{{m}^{2}}{{b}^{2}} \) is smallest when \( n = m = 1 \), we obtain a solution by letting \( {c}_{11} = 1 \) and the remaining coefficients all be zero. Hence \( \phi = {u}_{11} = \frac{2}{\sqrt{ab}}\sin \frac{\pi x}{a}\sin \frac{\pi y}{b} \).
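As a sanity check (pure Python; the rectangle dimensions \( a, b \) are illustrative), a midpoint-rule quadrature confirms that \( \phi = u_{11} \) has unit norm and that its Dirichlet integral equals \( \pi^2(1/a^2 + 1/b^2) \), the minimum value found above.

```python
import math

# Illustrative rectangle; phi = u_11 and its exact partial derivatives.
a, b = 1.5, 2.0
n = 400
hx, hy = a / n, b / n

def phi(x, y):
    return 2 / math.sqrt(a * b) * math.sin(math.pi * x / a) * math.sin(math.pi * y / b)

def phi_x(x, y):
    return 2 / math.sqrt(a * b) * (math.pi / a) * math.cos(math.pi * x / a) * math.sin(math.pi * y / b)

def phi_y(x, y):
    return 2 / math.sqrt(a * b) * (math.pi / b) * math.sin(math.pi * x / a) * math.cos(math.pi * y / b)

norm2 = dirichlet = 0.0
for i in range(n):                     # midpoint rule on an n x n grid
    x = (i + 0.5) * hx
    for j in range(n):
        y = (j + 0.5) * hy
        norm2 += phi(x, y) ** 2 * hx * hy
        dirichlet += (phi_x(x, y) ** 2 + phi_y(x, y) ** 2) * hx * hy

exact = math.pi ** 2 * (1 / a ** 2 + 1 / b ** 2)
print(norm2, dirichlet, exact)
```
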
|
Yes
|
\[ \left\{ \begin{array}{ll} {u}^{\prime \prime } + a{u}^{\prime } + {bu} = c & 0 < t < 1 \\ u\left( 0\right) = 0\;u\left( 1\right) = 0 & \end{array}\right. \]
|
\[ \left\{ \begin{array}{l} \frac{{v}_{i + 1} - 2{v}_{i} + {v}_{i - 1}}{{h}^{2}} + {a}_{i}\frac{{v}_{i + 1} - {v}_{i - 1}}{2h} + {b}_{i}{v}_{i} = {c}_{i}\;\left( {1 \leq i \leq n}\right) \\ {v}_{0} = {v}_{n + 1} = 0 \end{array}\right. \]
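The tridiagonal system above can be solved by Gaussian elimination without pivoting (the Thomas algorithm). A sketch, assuming the mesh \( {t}_{i} = {ih} \) with \( h = 1/\left( {n + 1}\right) \); the test problem \( {u}^{\prime \prime } = 2 \), whose solution \( {t}^{2} - t \) is quadratic and hence reproduced exactly by the centered differences, is our own choice:

```python
def solve_bvp(a, b, c, n):
    """Solve the finite-difference system for u'' + a(t)u' + b(t)u = c(t),
    u(0) = u(1) = 0, at n interior mesh points t_i = i*h, h = 1/(n+1),
    using the Thomas algorithm for the tridiagonal matrix."""
    h = 1.0 / (n + 1)
    t = [(i + 1) * h for i in range(n)]
    sub = [1.0 / h**2 - a(ti) / (2 * h) for ti in t]     # coefficient of v_{i-1}
    diag = [-2.0 / h**2 + b(ti) for ti in t]             # coefficient of v_i
    sup = [1.0 / h**2 + a(ti) / (2 * h) for ti in t]     # coefficient of v_{i+1}
    rhs = [c(ti) for ti in t]
    for i in range(1, n):                                # forward elimination
        m = sub[i] / diag[i - 1]
        diag[i] -= m * sup[i - 1]
        rhs[i] -= m * rhs[i - 1]
    v = [0.0] * n                                        # back substitution
    v[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        v[i] = (rhs[i] - sup[i] * v[i + 1]) / diag[i]
    return t, v

# Test problem u'' = 2: exact solution t^2 - t.
t, v = solve_bvp(lambda s: 0.0, lambda s: 0.0, lambda s: 2.0, 50)
err = max(abs(vi - (ti * ti - ti)) for ti, vi in zip(t, v))
```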
|
No
|
Lemma 1. If an \( n \times n \) matrix \( A \) is diagonally dominant, then it is nonsingular, and\n\n\[ \n{\begin{Vmatrix}{A}^{-1}\end{Vmatrix}}_{\infty } \leq \mathop{\max }\limits_{i}{\left\{ \left| {a}_{ii}\right| - \mathop{\sum }\limits_{\substack{{j = 1} \\ {j \neq i} }}^{n}\left| {a}_{ij}\right| \right\} }^{-1} \n\]
|
Proof. Let \( x \) be any nonzero vector, and let \( y = {Ax} \) . Select \( i \) so that \( \left| {x}_{i}\right| = \) \( \parallel x{\parallel }_{\infty } \) . Then\n\n\[ \n{a}_{ii}{x}_{i} + \mathop{\sum }\limits_{\substack{{j = 1} \\ {j \neq i} }}^{n}{a}_{ij}{x}_{j} = {y}_{i} \n\]\n\n\[ \n\left| {{a}_{ii}{x}_{i}}\right| \leq \left| {y}_{i}\right| + \mathop{\sum }\limits_{\substack{{j = 1} \\ {j \neq i} }}^{n}\left| {a}_{ij}\right| \left| {x}_{j}\right| \n\]\n\n\[ \n\left| {a}_{ii}\right| \parallel x{\parallel }_{\infty } \leq \left| {y}_{i}\right| + \left| {x}_{i}\right| \mathop{\sum }\limits_{\substack{{j = 1} \\ {j \neq i} }}^{n}\left| {a}_{ij}\right| \leq \parallel y{\parallel }_{\infty } + \parallel x{\parallel }_{\infty }\mathop{\sum }\limits_{\substack{{j = 1} \\ {j \neq i} }}^{n}\left| {a}_{ij}\right| \n\]\n\nHence\n\n\[ \n\parallel x{\parallel }_{\infty }\left( {\left| {a}_{ii}\right| - \mathop{\sum }\limits_{\substack{{j = 1} \\ {j \neq i} }}^{n}\left| {a}_{ij}\right| }\right) \leq \parallel y{\parallel }_{\infty } \n\]\n\nThis shows that \( y \neq 0 \) . Thus \( A \) maps no nonzero vector into 0, and \( A \) is nonsingular. If we write \( x = {A}^{-1}y \) in the above inequality, we obtain\n\n\[ \n{\begin{Vmatrix}{A}^{-1}y\end{Vmatrix}}_{\infty } \leq \parallel y{\parallel }_{\infty }{\left( \left| {a}_{ii}\right| - \mathop{\sum }\limits_{\substack{{j = 1} \\ {j \neq i} }}^{n}\left| {a}_{ij}\right| \right) }^{-1} \n\]\n\nand this implies the upper bound in the lemma.
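A small numerical illustration of the lemma (the matrix is our own example; the bound is computed in the equivalent form \( {\left\lbrack \mathop{\min }\limits_{i}\left( \left| {a}_{ii}\right| - \mathop{\sum }\limits_{j \neq i}\left| {a}_{ij}\right| \right) \right\rbrack }^{-1} \)):

```python
def inv(A):
    """Invert a small matrix by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        piv = M[col][col]
        M[col] = [x / piv for x in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]

A = [[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [0.0, 1.0, 3.0]]   # diagonally dominant
# The lemma's bound: [min_i (|a_ii| - sum_{j != i} |a_ij|)]^{-1} = 1/2 here.
bound = 1.0 / min(abs(A[i][i]) - sum(abs(A[i][j]) for j in range(3) if j != i)
                  for i in range(3))
norm_inv = max(sum(abs(x) for x in row) for row in inv(A))  # infinity norm of A^{-1}
```

For this matrix the true norm is \( {0.48} \), comfortably below the bound \( {0.5} \).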
|
Yes
|
Lemma 2. If \( {f}^{\left( 4\right) } \) is continuous on \( \left( {t - h, t + h}\right) \), then\n\n\[ \n{f}^{\prime \prime }\left( t\right) = {h}^{-2}\left\lbrack {f\left( {t + h}\right) - {2f}\left( t\right) + f\left( {t - h}\right) }\right\rbrack - \frac{1}{12}{h}^{2}{f}^{\left( 4\right) }\left( \xi \right) \n\]
|
Proof. We derive the formula as follows. By Taylor's Theorem we have\n\n\[ \nf\left( {t + h}\right) = f\left( t\right) + h{f}^{\prime }\left( t\right) + \frac{1}{2}{h}^{2}{f}^{\prime \prime }\left( t\right) + \frac{1}{6}{h}^{3}{f}^{\prime \prime \prime }\left( t\right) + \frac{1}{24}{h}^{4}{f}^{\left( 4\right) }\left( {\xi }_{1}\right) \n\]\n\n\[ \nf\left( {t - h}\right) = f\left( t\right) - h{f}^{\prime }\left( t\right) + \frac{1}{2}{h}^{2}{f}^{\prime \prime }\left( t\right) - \frac{1}{6}{h}^{3}{f}^{\prime \prime \prime }\left( t\right) + \frac{1}{24}{h}^{4}{f}^{\left( 4\right) }\left( {\xi }_{2}\right) \n\]\n\nUpon adding these two equations, we get\n\n\[ \nf\left( {t + h}\right) + f\left( {t - h}\right) = {2f}\left( t\right) + {h}^{2}{f}^{\prime \prime }\left( t\right) + \frac{1}{24}{h}^{4}\left\lbrack {{f}^{\left( 4\right) }\left( {\xi }_{1}\right) + {f}^{\left( 4\right) }\left( {\xi }_{2}\right) }\right\rbrack \n\]\n\nUpon rearranging this, we obtain\n\n\[ \n{f}^{\prime \prime }\left( t\right) = {h}^{-2}\left\lbrack {f\left( {t + h}\right) - {2f}\left( t\right) + f\left( {t - h}\right) }\right\rbrack - \frac{1}{24}{h}^{2}\left\lbrack {{f}^{\left( 4\right) }\left( {\xi }_{1}\right) + {f}^{\left( 4\right) }\left( {\xi }_{2}\right) }\right\rbrack \n\]\n\nObserve that the expression \( \frac{1}{2}\left\lbrack {{f}^{\left( 4\right) }\left( {\xi }_{1}\right) + {f}^{\left( 4\right) }\left( {\xi }_{2}\right) }\right\rbrack \) is the average of two values of \( {f}^{\left( 4\right) } \) on the interval \( \left\lbrack {t - h, t + h}\right\rbrack \) . Its value therefore lies between the maximum and minimum of \( {f}^{\left( 4\right) } \) on this interval. If \( {f}^{\left( 4\right) } \) is continuous, this value is assumed at some point \( \xi \) in the same interval. Hence the error term can be written as \( - {h}^{2}{f}^{\left( 4\right) }\left( \xi \right) /{12} \) .
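The \( O\left( {h}^{2}\right) \) error term can be observed numerically (a sketch with \( f = \sin \) and \( t = 1 \), our own choices): halving \( h \) should divide the error by about 4, and the error should be close to \( {h}^{2}{f}^{\left( 4\right) }\left( \xi \right) /{12} \) with \( \xi \approx 1 \).

```python
import math

t = 1.0          # test point; f = sin, so f'' = -sin and f'''' = sin

def second_diff(h):
    """Centered second-difference approximation to f''(t)."""
    return (math.sin(t + h) - 2 * math.sin(t) + math.sin(t - h)) / h**2

err1 = abs(second_diff(0.10) + math.sin(t))   # |approximation - f''(1)|
err2 = abs(second_diff(0.05) + math.sin(t))
ratio = err1 / err2                           # should be close to 4
predicted = 0.10**2 * math.sin(t) / 12        # h^2 f''''(xi)/12 with xi ~ 1
```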
|
No
|
Consider a linear integral equation, such as\n\n\\[ \n{\\int }_{a}^{b}k\\left( {s, t}\\right) x\\left( s\\right) {ds} = v\\left( t\\right) \\;\\left( {a \\leq t \\leq b}\\right) \n\\]\n\nIn this equation, the kernel \\( k \\) and the function \\( v \\) are prescribed. We seek the unknown function \\( x \\).
|
Suppose that a quadrature formula of the type\n\n\\[ \n{\\int }_{a}^{b}f\\left( s\\right) {ds} \\approx \\mathop{\\sum }\\limits_{{j = 1}}^{n}{c}_{j}f\\left( {s}_{j}\\right) \n\\]\n\nis available. (The points \\( {s}_{j} \\) need not be equally spaced.) Taking \\( t = {s}_{i} \\) in the integral equation, we have\n\n\\[ \n{\\int }_{a}^{b}k\\left( {s,{s}_{i}}\\right) x\\left( s\\right) {ds} = v\\left( {s}_{i}\\right) \\;\\left( {1 \\leq i \\leq n}\\right) \n\\]\n\nApplying the quadrature formula leads to a discrete version of the integral equation:\n\n\\[ \n\\mathop{\\sum }\\limits_{{j = 1}}^{n}{c}_{j}k\\left( {{s}_{j},{s}_{i}}\\right) x\\left( {s}_{j}\\right) = v\\left( {s}_{i}\\right) \\;\\left( {1 \\leq i \\leq n}\\right) \n\\]\n\nThis is a system of \\( n \\) linear equations in the unknowns \\( x\\left( {s}_{j}\\right) \\); it can be solved by standard methods. Then an interpolation method can be used to reconstruct \\( x\\left( t\\right) \\) on the interval \\( a \\leq t \\leq b \\). Approximations have been made at two stages, and the resulting function \\( x \\) is not the solution of the original problem. This strategy is considered later in more detail (Section 4.7).
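A sketch of the discrete stage of this strategy (the kernel \( {e}^{st} \), the node count, and the nodal "solution" are our own choices, not from the text). To isolate the linear-algebra step from quadrature error, the right-hand side is manufactured from the discrete sum itself, so that solving the system recovers the chosen nodal values up to roundoff:

```python
import math

n = 4
s = [j / (n - 1) for j in range(n)]                      # trapezoid nodes on [0, 1]
c = [0.5 / (n - 1) if j in (0, n - 1) else 1.0 / (n - 1) for j in range(n)]
k = lambda s_, t_: math.exp(s_ * t_)                     # sample kernel (our choice)

x_true = [1.0 + sj for sj in s]                          # chosen nodal values
# Manufacture v_i from the discrete sum itself, so no quadrature error enters.
v = [sum(c[j] * k(s[j], s[i]) * x_true[j] for j in range(n)) for i in range(n)]

# Solve sum_j c_j k(s_j, s_i) x_j = v_i by Gaussian elimination with pivoting.
M = [[c[j] * k(s[j], s[i]) for j in range(n)] + [v[i]] for i in range(n)]
for col in range(n):
    p = max(range(col, n), key=lambda r: abs(M[r][col]))
    M[col], M[p] = M[p], M[col]
    for r in range(col + 1, n):
        f = M[r][col] / M[col][col]
        M[r] = [a - f * b for a, b in zip(M[r], M[col])]
x = [0.0] * n
for i in range(n - 1, -1, -1):
    x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
recovery_err = max(abs(a - b) for a, b in zip(x, x_true))
```

In practice the right-hand side comes from \( v \), both approximation stages contribute error, and first-kind equations of this type can be badly conditioned as \( n \) grows.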
|
Yes
|
Theorem 1. Contraction Mapping Theorem. If \( F \) is a contraction on a complete metric space \( X \), then \( F \) has a unique fixed point \( \xi \) . The point \( \xi \) is the limit of every sequence generated from an arbitrary point \( x \) by iteration\n\n\[ \left\lbrack {x,{Fx},{F}^{2}x,\ldots }\right\rbrack \;\left( {x \in X}\right) \]
|
Proof. Reverting to the previous notation, we select \( {x}_{0} \) arbitrarily in \( X \) and define \( {x}_{n + 1} = F{x}_{n} \) for \( n = 0,1,2,\ldots \) . We have\n\n\[ d\left( {{x}_{n},{x}_{n - 1}}\right) = d\left( {F{x}_{n - 1}, F{x}_{n - 2}}\right) \leq {\theta d}\left( {{x}_{n - 1},{x}_{n - 2}}\right) \]\n\nThis argument can be repeated, and we conclude that\n\n(4)\n\n\[ d\left( {{x}_{n},{x}_{n - 1}}\right) \leq {\theta }^{n - 1}d\left( {{x}_{1},{x}_{0}}\right) \]\n\nIn order to establish the Cauchy property of the sequence \( \left\lbrack {x}_{n}\right\rbrack \), let \( n > N \) and \( m > N \) . There is no loss of generality in supposing that \( m \geq n \) . Then from Equation (4),\n\n\[ d\left( {{x}_{m},{x}_{n}}\right) \leq d\left( {{x}_{m},{x}_{m - 1}}\right) + d\left( {{x}_{m - 1},{x}_{m - 2}}\right) + \cdots + d\left( {{x}_{n + 1},{x}_{n}}\right) \]\n\n\[ \leq \left\lbrack {{\theta }^{m - 1} + {\theta }^{m - 2} + \cdots + {\theta }^{n}}\right\rbrack d\left( {{x}_{1},{x}_{0}}\right) \]\n\n\[ \leq \left\lbrack {{\theta }^{N} + {\theta }^{N + 1} + \cdots }\right\rbrack d\left( {{x}_{1},{x}_{0}}\right) \]\n\n\[ = {\theta }^{N}{\left( 1 - \theta \right) }^{-1}d\left( {{x}_{1},{x}_{0}}\right) \]\n\nSince \( 0 \leq \theta < 1,\mathop{\lim }\limits_{{N \rightarrow \infty }}{\theta }^{N} = 0 \) . This proves the Cauchy property. Since the space \( X \) is complete, the sequence converges to a point \( \xi \) . Since the contractive property implies directly that \( F \) is continuous, the argument in Equation (2) shows that \( \xi \) is a fixed point of \( F \) .\n\nIf \( \eta \) is also a fixed point of \( F \), then we have\n\n(5)\n\n\[ d\left( {\xi ,\eta }\right) = d\left( {{F\xi },{F\eta }}\right) \leq {\theta d}\left( {\xi ,\eta }\right) \]\n\nIf \( \xi \neq \eta \), then \( d\left( {\xi ,\eta }\right) > 0 \), and Inequality (5) leads to the contradiction \( \theta \geq 1 \) . This proves the uniqueness of the fixed point.
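A one-dimensional illustration (the map is our own choice): \( F\left( x\right) = \cos x \) maps \( \left\lbrack {0,1}\right\rbrack \) into itself with contraction constant \( \theta = \sin 1 < 1 \), and the bound \( {\theta }^{n}{\left( 1 - \theta \right) }^{-1}d\left( {{x}_{1},{x}_{0}}\right) \) obtained in the proof indeed dominates the true error:

```python
import math

theta = math.sin(1.0)       # Lipschitz constant of cos on [0, 1]; theta < 1
x = 0.0
for _ in range(150):        # enough iterations to reach the fixed point
    x = math.cos(x)
residual = abs(x - math.cos(x))

# A priori bound from the proof: d(x_n, xi) <= theta^n (1 - theta)^{-1} d(x_1, x_0)
n = 25
xn = 0.0
for _ in range(n):
    xn = math.cos(xn)
bound = theta**n / (1 - theta) * abs(math.cos(0.0) - 0.0)
err = abs(xn - x)           # x serves as a proxy for the exact fixed point
```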
|
Yes
|
Consider the nonlinear Fredholm equation\n\n\[ \nx\left( t\right) = \frac{1}{2}{\int }_{0}^{1}\cos \left( {{stx}\left( s\right) }\right) {ds} \]\n
|
By the mean value theorem,\n\n\[ \n\left| {\cos \left( {st\xi }\right) - \cos \left( {st\eta }\right) }\right| = \left| {\sin \left( {st\zeta }\right) }\right| \;\left| {{st\xi } - {st\eta }}\right| \leq \left| {\xi - \eta }\right| \]\n\nThus the preceding theory is applicable with \( \theta = \frac{1}{2} \) . If the iteration is begun with \( {x}_{0} = 0 \), the next two steps are \( {x}_{1}\left( t\right) = \frac{1}{2} \) and \( {x}_{2}\left( t\right) = {t}^{-1}\sin \left( {t/2}\right) \) . The next element in the sequence is given by\n\n\[ \n{x}_{3}\left( t\right) = \frac{1}{2}{\int }_{0}^{1}\cos \left( {t\sin \frac{s}{2}}\right) {ds} \]\n\nThis integration cannot be effected in elementary functions. In fact, it is analogous to the Bessel function \( {J}_{0} \), whose definition is\n\n\[ \n{J}_{0}\left( z\right) = \frac{1}{\pi }{\int }_{0}^{\pi }\cos \left( {z\sin \theta }\right) {d\theta } \]\n\nIf the iteration method is to be continued in this example, numerical procedures for indefinite integration will be needed.
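The iteration can also be carried out numerically by discretizing the integral (midpoint rule; the grid size is our own choice). The same estimate shows the discrete map is a contraction with \( \theta = \frac{1}{2} \), so successive differences should shrink by at least that factor:

```python
import math

n = 50
s = [(j + 0.5) / n for j in range(n)]     # midpoint nodes on [0, 1]

def T(x):
    """Discretized operator (Tx)(t) = (1/2) * integral_0^1 cos(s t x(s)) ds."""
    return [0.5 * sum(math.cos(s[j] * t * x[j]) for j in range(n)) / n for t in s]

x = [0.0] * n
diffs = []                                # sup-norm distances between iterates
for _ in range(50):
    x_new = T(x)
    diffs.append(max(abs(a - b) for a, b in zip(x_new, x)))
    x = x_new
```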
|
Yes
|
Theorem 3. Let \( S \) be an interval of the form \( S = \left\lbrack {0, b}\right\rbrack \) . Let \( f \) be a continuous map of \( S \times \mathbb{R} \) to \( \mathbb{R} \) . Assume a Lipschitz condition in the second argument:\n\n\[ \left| {f\left( {s,{t}_{1}}\right) - f\left( {s,{t}_{2}}\right) }\right| \leq \lambda \left| {{t}_{1} - {t}_{2}}\right| \]\n\nwhere \( \lambda \) is a constant depending only on \( f \) . Then the initial-value problem\n\n\[ {x}^{\prime }\left( s\right) = f\left( {s, x\left( s\right) }\right) \;x\left( 0\right) = \beta \]\n\nhas a unique solution in \( C\left( S\right) \) .
|
Proof. We introduce a new norm in \( C\left( S\right) \) by defining\n\n\[ \parallel x{\parallel }_{w} = \mathop{\sup }\limits_{{s \in S}}\left| {x\left( s\right) }\right| {e}^{-{2\lambda s}} \]\n\nThe space \( C\left( S\right) \), accompanied by this norm, is complete. Since the initial-value problem is equivalent to the integral equation\n\n\[ x = {Ax}\;\left( {Ax}\right) \left( s\right) = \beta + {\int }_{0}^{s}f\left( {t, x\left( t\right) }\right) {dt}\;x \in C\left( S\right) \]\n\nall we have to do is show that the mapping \( A \) has a fixed point. In order for the Contraction Mapping Theorem to be used, it suffices to establish that \( A \) is a contraction. Let \( u, v \in C\left( S\right) \) . Then we have, for \( 0 \leq s \leq b \) ,\n\n\[ \left| {\left( {{Au} - {Av}}\right) \left( s\right) }\right| \leq {\int }_{0}^{s}\left| {f\left( {t, u\left( t\right) }\right) - f\left( {t, v\left( t\right) }\right) }\right| {dt} \]\n\n\[ \leq {\int }_{0}^{s}\lambda \left| {u\left( t\right) - v\left( t\right) }\right| {dt} \]\n\n\[ = \lambda {\int }_{0}^{s}{e}^{2\lambda t}{e}^{-{2\lambda t}}\left| {u\left( t\right) - v\left( t\right) }\right| {dt} \]\n\n\[ \leq \lambda \parallel u - v{\parallel }_{w}{\int }_{0}^{s}{e}^{2\lambda t}{dt} \]\n\n\[ \leq \lambda \parallel u - v{\parallel }_{w}{\left( 2\lambda \right) }^{-1}{e}^{2\lambda s} \]\n\nFrom this we conclude that\n\n\[ {e}^{-{2\lambda s}}\left| {\left( {{Au} - {Av}}\right) \left( s\right) }\right| \leq \frac{1}{2}\parallel u - v{\parallel }_{w} \]\n\nand that\n\n\[ \parallel {Au} - {Av}{\parallel }_{w} \leq \frac{1}{2}\parallel u - v{\parallel }_{w} \]
|
Yes
|
Does the following initial value problem have a solution in the space \( C\left\lbrack {0,{10}}\right\rbrack \) ?\n\n\[ \n{x}^{\prime } = \cos \left( {x{e}^{s}}\right) \;x\left( 0\right) = 0 \n\]
|
This is an illustration of the general theory in which \( f\left( {s, t}\right) = \cos \left( {t{e}^{s}}\right) \) . By the mean value theorem,\n\n\[ \n\left| {f\left( {s,{t}_{1}}\right) - f\left( {s,{t}_{2}}\right) }\right| = \left| {\frac{\partial f}{\partial t}\left( {s,\tau }\right) }\right| \left| {{t}_{1} - {t}_{2}}\right| \n\]\n\nFor \( 0 \leq s \leq {10} \) and \( t \in \mathbb{R} \) ,\n\n\[ \n\left| \frac{\partial f}{\partial t}\right| = \left| {-\sin \left( {t{e}^{s}}\right) {e}^{s}}\right| \leq {e}^{10} \n\]\n\nHence, the hypothesis of Theorem 3 is satisfied, and our problem has a unique solution in \( C\left\lbrack {0,{10}}\right\rbrack \) .
|
Yes
|
If \( f \) is continuous but does not satisfy the Lipschitz condition in Theorem 3, the conclusions of the theorem may fail. For example, the problem \( {x}^{\prime } = {x}^{2/3}, x\left( 0\right) = 0 \) has two solutions, \( x\left( s\right) = 0 \) and \( x\left( s\right) = {s}^{3}/{27} \) . There is no Lipschitz condition of the form
|
\[ \left| {{t}_{1}^{2/3} - {t}_{2}^{2/3}}\right| \leq \lambda \left| {{t}_{1} - {t}_{2}}\right| \] (Consider the implications of this inequality when \( {t}_{2} = 0 \) .)
|
Yes
|
\[ {x}^{\prime } = {2t}\left( {1 + x}\right) \;x\left( 0\right) = 0 \]
|
The formula for the Picard iteration in this example is\n\n\[ {x}_{n + 1}\left( t\right) = {\int }_{0}^{t}{2s}\left( {1 + {x}_{n}\left( s\right) }\right) {ds} = {t}^{2} + {\int }_{0}^{t}{2s}{x}_{n}\left( s\right) {ds} \]\n\nIf \( {x}_{0} = 0 \), then successive computations yield\n\n\[ {x}_{1}\left( t\right) = {t}^{2}\;{x}_{2}\left( t\right) = {t}^{2} + \frac{1}{2}{t}^{4}\;{x}_{3}\left( t\right) = {t}^{2} + \frac{1}{2}{t}^{4} + \frac{1}{6}{t}^{6} \]\n\nIt appears that we are producing the partial sums in the Taylor series for \( {e}^{{t}^{2}} - 1 \) , and one verifies readily that this is indeed the solution.
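Since each Picard iterate here is a polynomial, the iteration can be carried out exactly on coefficient lists (polynomial integration is exact); this reproduces the iterates displayed above:

```python
def picard_step(c):
    """One Picard step for x' = 2t(1 + x), x(0) = 0, acting on coefficient
    lists (c[k] multiplies t^k): x_new(t) = t^2 + integral_0^t 2s x(s) ds."""
    out = [0.0] * (len(c) + 2)
    out[2] += 1.0                         # the t^2 term
    for k, ck in enumerate(c):
        if ck:
            out[k + 2] += 2.0 * ck / (k + 2)   # integral of 2s * ck s^k
    return out

x = [0.0]                                 # x_0 = 0
for _ in range(3):
    x = picard_step(x)
# x now represents x_3(t) = t^2 + (1/2)t^4 + (1/6)t^6
```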
|
Yes
|
Theorem 4. Let \( F \) be a mapping of a complete metric space into itself such that for some \( m,{F}^{m} \) is contractive. Then \( F \) has a unique fixed point. It is the limit of every sequence \( \left\lbrack {F}^{k}x\right\rbrack \), for arbitrary \( x \) .
|
Proof. Since \( {F}^{m} \) is contractive, it has a unique fixed point \( \xi \) by Theorem 1.\n\nThen\n\n\[ \n{F\xi } = F\left( {{F}^{m}\xi }\right) = {F}^{m + 1}\xi = {F}^{m}\left( {F\xi }\right) \n\]\n\nThis shows that \( {F\xi } \) is also a fixed point of \( {F}^{m} \). By the uniqueness of \( \xi ,{F\xi } = \xi \). Thus \( F \) has at least one fixed point (namely \( \xi \) ), and \( \xi \) can be obtained by iteration using the function \( {F}^{m} \). If \( x \) is any fixed point of \( F \), then\n\n\[ \n{Fx} = x\;{F}^{2}x = x\ldots \;{F}^{m}x = x \n\]\n\nThus \( x \) is a fixed point of \( {F}^{m} \), and \( x = \xi \).\n\nIt remains to be proved that the sequence \( {F}^{n}x \) converges to \( \xi \) as \( n \rightarrow \infty \). Observe that for \( i \in \{ 1,2,\ldots, m\} \) we have\n\n\[ \n{F}^{{nm} + i}x = {F}^{nm}\left( {{F}^{i}x}\right) \rightarrow \xi \text{ as }n \rightarrow \infty \n\]\n\nby the first part of the proof. If \( \varepsilon > 0 \), we can select an integer \( N \) having the property\n\n\[ \nn \geq N\; \Rightarrow \;d\left( {{F}^{{nm} + i}x,\xi }\right) < \varepsilon \;\left( {1 \leq i \leq m}\right) \n\]\n\nSince each integer \( j \) greater than \( {Nm} \) can be written as \( j = {nm} + i \), where \( n \geq N \) and \( 1 \leq i \leq m \), we have\n\n\[ \nj > {Nm}\; \Rightarrow \;d\left( {{F}^{j}x,\xi }\right) < \varepsilon \n\]\n\nThis proves that \( \lim {F}^{j}x = \xi \).
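A standard illustration of the theorem (our own example, not from the text): the Picard operator \( \left( {Fx}\right) \left( t\right) = 1 + 3{\int }_{0}^{t}x\left( s\right) {ds} \) on \( C\left\lbrack {0,1}\right\rbrack \) has Lipschitz constant 3 in the sup norm, so it is not a contraction, but \( {F}^{m} \) has constant \( {3}^{m}/m! < 1 \) for \( m \geq 7 \). Iteration nevertheless converges to the solution \( {e}^{3t} \) of \( {x}^{\prime } = {3x}, x\left( 0\right) = 1 \). A discrete sketch with the trapezoid rule:

```python
import math

n = 400
h = 1.0 / n

def F(x):
    """(Fx)(t) = 1 + 3 * integral_0^t x(s) ds, via the cumulative trapezoid rule.
    Not a contraction (Lipschitz constant 3), but a power of it is."""
    out = [1.0] * (n + 1)
    acc = 0.0
    for i in range(1, n + 1):
        acc += 0.5 * h * (x[i - 1] + x[i])
        out[i] = 1.0 + 3.0 * acc
    return out

x = [0.0] * (n + 1)
for _ in range(100):
    x = F(x)
# The fixed point approximates x(t) = e^{3t}; compare at t = 1.
rel_err = abs(x[-1] - math.exp(3.0)) / math.exp(3.0)
```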
|
Yes
|
Theorem 6. Let \( F \) be a mapping of a Hilbert space into itself such that (a) \( \langle {Fx} - {Fy}, x - y\rangle \geq \alpha \parallel x - y{\parallel }^{2}\;\left( {\alpha > 0}\right) \) (b) \( \parallel {Fx} - {Fy}\parallel \leq \beta \parallel x - y\parallel \) Then \( F \) is surjective and injective. Consequently, \( {F}^{-1} \) exists.
|
Proof. The injectivity follows at once from (a): If \( x \neq y \), then \( {Fx} \neq {Fy} \) . For the surjectivity, let \( w \) be any point in the Hilbert space. It is to be proved that, for some \( x,{Fx} = w \) . It is equivalent to prove, for any \( \lambda > 0 \), that an \( x \) exists satisfying \( x - \lambda \left( {{Fx} - w}\right) = x \) . Define \( {Gx} = x - \lambda \left( {{Fx} - w}\right) \), so that our task is to prove the existence of a fixed point for \( G \) . The Contraction Mapping Theorem will be applied. To prove that \( G \) is a contraction, we let \( \lambda = \alpha /{\beta }^{2} \) in the following calculation: \[ {\begin{Vmatrix}Gx - Gy\end{Vmatrix}}^{2} = {\begin{Vmatrix}x - \lambda \left( Fx - w\right) - y + \lambda \left( Fy - w\right) \end{Vmatrix}}^{2} \] \[ = {\begin{Vmatrix}x - y - \lambda \left( Fx - Fy\right) \end{Vmatrix}}^{2} \] \[ = {\begin{Vmatrix}x - y\end{Vmatrix}}^{2} - {2\lambda }\langle {Fx} - {Fy}, x - y\rangle + {\lambda }^{2}{\begin{Vmatrix}Fx - Fy\end{Vmatrix}}^{2} \] \[ \leq {\begin{Vmatrix}x - y\end{Vmatrix}}^{2} - {2\lambda \alpha }{\begin{Vmatrix}x - y\end{Vmatrix}}^{2} + {\lambda }^{2}{\beta }^{2}{\begin{Vmatrix}x - y\end{Vmatrix}}^{2} \] \[ = \parallel x - y{\parallel }^{2}\left( {1 - {2\lambda \alpha } + {\lambda }^{2}{\beta }^{2}}\right) \] \[ = \parallel x - y{\parallel }^{2}\left( {1 - 2{\alpha }^{2}/{\beta }^{2} + {\alpha }^{2}/{\beta }^{2}}\right) \] \[ = \parallel x - y{\parallel }^{2}\left( {1 - {\alpha }^{2}/{\beta }^{2}}\right) \] Finally, (a), the Cauchy-Schwarz inequality, and (b) give \( \alpha \parallel x - y{\parallel }^{2} \leq \langle {Fx} - {Fy}, x - y\rangle \leq \beta \parallel x - y{\parallel }^{2} \), whence \( \alpha \leq \beta \) . Thus \( 0 \leq 1 - {\alpha }^{2}/{\beta }^{2} < 1 \), and \( G \) is a contraction.
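The proof is constructive, and the iteration \( x \leftarrow x - \lambda \left( {{Fx} - w}\right) \) with \( \lambda = \alpha /{\beta }^{2} \) can be run directly. A sketch with \( F\left( x\right) = {Ax} + {0.1}\sin x \) componentwise and \( A = \operatorname{diag}\left( {2,3}\right) \) (our own example, for which \( \alpha = {1.9} \) and \( \beta = {3.1} \)):

```python
import math

alpha, beta = 1.9, 3.1
lam = alpha / beta**2                  # the step used in the proof

def F(x):
    """F(x) = Ax + 0.1*sin(x) componentwise, A = diag(2, 3): condition (a)
    holds with alpha = 2 - 0.1 = 1.9 and (b) with beta = 3 + 0.1 = 3.1."""
    return [2.0 * x[0] + 0.1 * math.sin(x[0]),
            3.0 * x[1] + 0.1 * math.sin(x[1])]

w = [1.0, 1.0]                         # solve Fx = w
x = [0.0, 0.0]
for _ in range(300):                   # iterate the contraction G
    Fx = F(x)
    x = [xi - lam * (fi - wi) for xi, fi, wi in zip(x, Fx, w)]
residual = max(abs(fi - wi) for fi, wi in zip(F(x), w))
```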
|
Yes
|
First, consider an integral equation of the form\n\n\[ x\left( t\right) - \lambda {\int }_{0}^{1}K\left( {s, t}\right) x\left( s\right) {ds} = v\left( t\right) \;\left( {0 \leq t \leq 1}\right) \]
|
Write the equation in the form\n\n\[ \left( {I - {\lambda A}}\right) x = v \] \nin which \( A \) is the integral operator in Equation (4). If we have chosen a suitable Banach space and if \( \parallel {\lambda A}\parallel < 1 \), then the Neumann series gives a formula for the solution:\n\n\[ x = {\left( I - \lambda A\right) }^{-1}v = \mathop{\sum }\limits_{{n = 0}}^{\infty }{\left( \lambda A\right) }^{n}v = v + {\lambda Av} + {\lambda }^{2}{A}^{2}v + \cdots \]
|
Yes
|
Example 2. For a concrete example of this, consider\n\n\[ x\left( t\right) = \lambda {\int }_{0}^{1}{e}^{t - s}x\left( s\right) {ds} + v\left( t\right) \]\n\nHere, we use an operator \( A \) defined by\n\n\[ \left( {Ax}\right) \left( t\right) = {\int }_{0}^{1}{e}^{t - s}x\left( s\right) {ds} \]
|
If we compute \( {A}^{2}x \), the result is\n\n\[ \left( {{A}^{2}x}\right) \left( t\right) = {\int }_{0}^{1}{e}^{t - s}\left( {Ax}\right) \left( s\right) {ds} \]\n\n\[ = {\int }_{0}^{1}{e}^{t - s}{\int }_{0}^{1}{e}^{s - \sigma }x\left( \sigma \right) {d\sigma ds} \]\n\n\[ = {\int }_{0}^{1}\left\lbrack {{\int }_{0}^{1}{e}^{t - s}{e}^{s - \sigma }{ds}}\right\rbrack x\left( \sigma \right) {d\sigma } \]\n\n\[ = {\int }_{0}^{1}{e}^{t - \sigma }x\left( \sigma \right) {d\sigma } = \left( {Ax}\right) \left( t\right) \]\n\nThis shows that \( {A}^{2} = A \) . Consequently, the solution by the Neumann series in Equation (6) simplifies to\n\n\[ x = v + {\lambda Av} + {\lambda }^{2}{Av} + {\lambda }^{3}{Av} + \cdots \]\n\n\[ = v + \left( {\frac{1}{1 - \lambda } - 1}\right) {Av} = v + \frac{\lambda }{1 - \lambda }{Av} \]
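The identity \( {A}^{2} = A \) survives discretization exactly, because the midpoint-rule matrix for this kernel inherits the factorization \( {e}^{t - s}{e}^{s - \sigma } = {e}^{t - \sigma } \). A numerical check of this and of the closed-form solution (the grid size, \( \lambda \), and \( v \) are our own choices):

```python
import math

n = 40
s = [(j + 0.5) / n for j in range(n)]            # midpoint nodes on [0, 1]
# Discrete operator: (Ax)_i = (1/n) * sum_j exp(t_i - s_j) x_j
A = [[math.exp(ti - sj) / n for sj in s] for ti in s]

A2 = [[sum(A[i][k] * A[k][j] for k in range(n)) for j in range(n)]
      for i in range(n)]
idempotency_gap = max(abs(A2[i][j] - A[i][j]) for i in range(n) for j in range(n))

lam = 0.5
v = [math.sin(ti) for ti in s]                   # an arbitrary right-hand side
Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
# Closed-form solution x = v + lambda/(1 - lambda) * Av of x - lambda*Ax = v:
x = [vi + lam / (1 - lam) * avi for vi, avi in zip(v, Av)]
Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
residual = max(abs(xi - lam * axi - vi) for xi, axi, vi in zip(x, Ax, v))
```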
|
Yes
|
Can we use an operator \( B \) such that \( \parallel I - {AB}\parallel < 1 \) to solve the problem \( {Ax} = v \)?
|
Obviously, \( {x}_{0} = {Bv} \) is a first approximation to \( x \) . By the Neumann theorem, we know that \( {AB} \) is invertible and that\n\n\[ {\left( AB\right) }^{-1} = \mathop{\sum }\limits_{{k = 0}}^{\infty }{\left( I - AB\right) }^{k} \]\n\nIt is clear that the vector \( x = B{\left( AB\right) }^{-1}v \) is a solution, because \( {Ax} = {AB}{\left( AB\right) }^{-1}v = v \) . Hence\n\n(8)\n\n\[ x = B{\left( AB\right) }^{-1}v = B\mathop{\sum }\limits_{{k = 0}}^{\infty }{\left( I - AB\right) }^{k}v = {Bv} + B\left( {I - {AB}}\right) v + B{\left( I - AB\right) }^{2}v + \cdots \]
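A finite-dimensional sketch (our own \( 2 \times 2 \) example): \( B \) is a perturbed inverse of \( A \) with \( \parallel I - {AB}{\parallel }_{\infty } = {0.12} < 1 \), and the partial sums of (8) converge to the solution of \( {Ax} = v \):

```python
def matvec(M, x):
    return [sum(mij * xj for mij, xj in zip(row, x)) for row in M]

A = [[2.0, 1.0], [1.0, 3.0]]
B = [[0.55, -0.2], [-0.2, 0.38]]          # a perturbed approximate inverse of A
AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
E = [[(1.0 if i == j else 0.0) - AB[i][j] for j in range(2)] for i in range(2)]
norm_E = max(abs(E[i][0]) + abs(E[i][1]) for i in range(2))   # 0.12 < 1

v = [1.0, 2.0]
x = [0.0, 0.0]
r = v[:]                                   # r = (I - AB)^k v
for _ in range(200):
    Br = matvec(B, r)
    x = [xi + bi for xi, bi in zip(x, Br)] # accumulate B (I - AB)^k v
    r = matvec(E, r)
# The exact solution of Ax = v is (0.2, 0.6).
```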
|
Yes
|
Theorem 1. The range of a projection is identical with the set of its fixed points.
|
Proof. Let \( P : X \rightarrow X \) be a projection and \( V \) its range. If \( v \in V \), then \( v = {Px} \) for some \( x \), and consequently,\n\n\[ \n{Pv} = {P}^{2}x = {Px} = v \n\]\n\nThus \( v \) is a fixed point of \( P \) . The reverse inclusion is obvious: If \( x = {Px} \), then \( x \) is in the range of \( P \) .
|
Yes
|
Theorem 2. If \( P \) is a projection, then so is \( I - P \) . The range of each is the null space of the other.
|
Proof. That \( I - P \) is a projection follows from\n\n\[ {\left( I - P\right) }^{2} = \left( {I - P}\right) \left( {I - P}\right) = I - {2P} + {P}^{2} = I - P \]\n\nUse \( \mathcal{R} \) and \( \mathcal{N} \) for the range and null space, respectively. By Theorem 1, \( \mathcal{R}\left( P\right) \) is the set of fixed points of \( P \) . If \( x \in \mathcal{R}\left( P\right) \), then \( \left( {I - P}\right) x = x - {Px} = 0 \), so that \( x \in \mathcal{N}\left( {I - P}\right) \) . Conversely, if \( \left( {I - P}\right) x = 0 \), then \( x = {Px} \in \mathcal{R}\left( P\right) \) . Thus \( \mathcal{R}\left( P\right) = \mathcal{N}\left( {I - P}\right) \) . Interchanging the roles of \( P \) and \( I - P \) gives \( \mathcal{R}\left( {I - P}\right) = \mathcal{N}\left( P\right) \) .
|
No
|
Theorem 3. The adjoint of a projection is also a projection.
|
Proof. Let \( P \) be a projection defined on the normed space \( X \) . Recall that its adjoint \( {P}^{ * } \) maps \( {X}^{ * } \) to \( {X}^{ * } \) and is defined by the equation\n\n\[ \n{P}^{ * }\phi = \phi \circ P\;\phi \in {X}^{ * } \n\]\n\nThus it follows that\n\n\[ \n{\left( {P}^{ * }\right) }^{2}\phi = {P}^{ * }\left( {{P}^{ * }\phi }\right) = \left( {\phi \circ P}\right) \circ P = \phi \circ {P}^{2} = \phi \circ P = {P}^{ * }\phi \n\]
|
Yes
|
Theorem 4. Let \( P \) be a projection of a normed space \( X \) onto a finite-dimensional subspace \( V \) . If \( \left\{ {{v}_{1},\ldots ,{v}_{n}}\right\} \) is any basis for \( V \) then there exist functionals \( {\phi }_{i} \) in \( {X}^{ * } \) such that\n\n(4)\n\n\[{\phi }_{i}\left( {v}_{j}\right) = {\delta }_{ij}\;\left( {1 \leq i, j \leq n}\right)\]\n\n(5)\n\n\[{Px} = \mathop{\sum }\limits_{{i = 1}}^{n}{\phi }_{i}\left( x\right) {v}_{i}\;\left( {x \in X}\right)\]
|
Proof. Select \( {\psi }_{i} \in {V}^{ * } \) such that for any \( v \in V \),\n\n\[v = \mathop{\sum }\limits_{{i = 1}}^{n}{\psi }_{i}\left( v\right) {v}_{i}\]\n\nThe functionals \( {\psi }_{i} \) are linear and continuous, by Corollary 1 in Section 1.5, page 26. (See also the proof of Corollary 2 in the same section.) For each \( x \in X \) , \( {Px} \in V \), and so \( {Px} = \mathop{\sum }\limits_{{i = 1}}^{n}{\psi }_{i}\left( {Px}\right) {v}_{i} \) . Hence we can let \( {\phi }_{i}\left( x\right) = {\psi }_{i}\left( {Px}\right) \) . Being a composition of continuous linear maps, \( {\phi }_{i} \) is also linear and continuous. The equation \( P{v}_{j} = {v}_{j} \) implies that \( {\phi }_{i}\left( {v}_{j}\right) = {\delta }_{ij} \) by the uniqueness of the representation of \( {Px} \) as a linear combination of the basis vectors \( {v}_{i} \) .
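A concrete instance of this representation (our own construction, not from the text) is interpolation: take \( {\phi }_{i}\left( x\right) = x\left( {t}_{i}\right) \) and let \( {v}_{i} \) be the Lagrange basis polynomials for nodes \( {t}_{i} \); then \( {\phi }_{i}\left( {v}_{j}\right) = {\delta }_{ij} \) and \( {P}^{2} = P \):

```python
nodes = [0.0, 0.5, 1.0]

def v_basis(i, t):
    """v_i: the Lagrange polynomial equal to 1 at nodes[i] and 0 at the others."""
    val = 1.0
    for j, tj in enumerate(nodes):
        if j != i:
            val *= (t - tj) / (nodes[i] - tj)
    return val

def P(f):
    """Px = sum_i phi_i(x) v_i with phi_i(x) = x(t_i): interpolation at the nodes."""
    coeffs = [f(ti) for ti in nodes]
    return lambda t: sum(c * v_basis(i, t) for i, c in enumerate(coeffs))

# phi_i(v_j) = delta_ij:
delta_gap = max(abs(v_basis(j, nodes[i]) - (1.0 if i == j else 0.0))
                for i in range(3) for j in range(3))
# P^2 = P: interpolating an interpolant changes nothing.
f = lambda t: t**4
Pf, PPf = P(f), P(P(f))
idem_gap = max(abs(Pf(t) - PPf(t)) for t in [0.1 * m for m in range(11)])
```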
|
Yes
|
Theorem 5. If \( P \) is a projection of \( X \) onto a subspace \( V \), then for all \( x \in X \) ,\n\n\[ \parallel x - {Px}\parallel \leq \parallel I - P\parallel \operatorname{dist}\left( {x, V}\right) \]
|
Proof. For any \( v \in V,{Pv} = v \) . Hence\n\n\[ \parallel x - {Px}\parallel = \parallel \left( {x - v}\right) - P\left( {x - v}\right) \parallel = \parallel \left( {I - P}\right) \left( {x - v}\right) \parallel \leq \parallel I - P\parallel \parallel x - v\parallel \]\n\nNow take an infimum as \( v \) ranges over \( V \) .
|
Yes
|
Theorem 6. Let \( \left\lbrack {P}_{n}\right\rbrack \) be a sequence of projections on a Banach space \( X \), and assume that \( {P}_{n}y \rightarrow y \) for each \( y \) in \( X \) . Let \( b \in X \) and \( A \in \mathcal{L}\left( {X, X}\right) \) . For each \( n \) let \( {x}_{n} \) be a point such that \( {P}_{n}\left( {A{x}_{n} - b}\right) = 0 \) . If \( {x}_{n} \rightarrow x \), then \( {Ax} = b \) .
|
Proof. Since \( {P}_{n}y \rightarrow y \), we also have \( \begin{Vmatrix}{{P}_{n}y}\end{Vmatrix} \rightarrow \parallel y\parallel \), and therefore \( \mathop{\sup }\limits_{n}\begin{Vmatrix}{{P}_{n}y}\end{Vmatrix} < \) \( \infty \) for each \( y \) . Since \( X \) is complete, we may apply the Uniform Boundedness Principle (Section 1.7, page 42) and conclude that \( \mathop{\sup }\limits_{n}\begin{Vmatrix}{P}_{n}\end{Vmatrix} < \infty \) . By the continuity of \( A, A{x}_{n} \rightarrow {Ax} \) . By the boundedness of \( \begin{Vmatrix}{P}_{n}\end{Vmatrix} \), we have \( {P}_{n}\left( {A{x}_{n} - {Ax}}\right) \rightarrow 0 \) . By the choice of \( {x}_{n},{P}_{n}A{x}_{n} = {P}_{n}b \) . Hence \( {P}_{n}b - {P}_{n}{Ax} \rightarrow 0 \) . In the limit, this yields \( b = {Ax} \) .
|
Yes
|
Theorem 7. Let \( P \) be a projection of the normed space \( X \) onto a subspace \( V \). Suppose that \( x \in X, x - {Ax} - b = 0,\widetilde{x} \in V \), and \( P\left( {\widetilde{x} - A\widetilde{x} - b}\right) = 0 \). If \( I - {PA} \) is invertible, then\n\n\[ \parallel x - \widetilde{x}\parallel \leq \begin{Vmatrix}{\left( I - PA\right) }^{-1}\end{Vmatrix}\parallel x - {Px}\parallel \]
|
Proof. From Equation (15), \( {PAx} = {Px} - {Pb} \). Hence\n\n\[ x - {PAx} = x - {Px} + {Pb} \]\n\nSince \( \widetilde{x} \in V \), it follows that \( P\widetilde{x} = \widetilde{x} \). Consequently,\n\n\[ \widetilde{x} - {PA}\widetilde{x} = {Pb} \]\n\nSubtracting Equation (18) from Equation (17) gives\n\n\[ x - \widetilde{x} - {PA}\left( {x - \widetilde{x}}\right) = x - {Px} \]\n\nor\n\n\[ \left( {I - {PA}}\right) \left( {x - \widetilde{x}}\right) = x - {Px} \]\n\nThus we have\n\n\[ x - \widetilde{x} = {\left( I - PA\right) }^{-1}\left( {x - {Px}}\right) \]\n\nThis leads to Inequality (16).
|
Yes
|
Theorem 8. Let \( \\left\\lbrack {{v}_{1},{v}_{2},\\ldots }\\right\\rbrack \) be an orthonormal basis in a Hilbert space \( X \) . Let \( A \\in \\mathcal{L}\\left( {X, X}\\right) \) and \( \\parallel A\\parallel < 1 \) . For each \( n \), let \( {x}_{n} \) be a linear combination of \( {v}_{1},\\ldots ,{v}_{n} \) chosen so that\n\n(23)\n\n\[ \n\\left\\langle {{x}_{n} - A{x}_{n},{v}_{i}}\\right\\rangle = \\left\\langle {b,{v}_{i}}\\right\\rangle \\;\\left( {1 \\leq i \\leq n}\\right) \n\]\n\nThen \( \\left\\lbrack {x}_{n}\\right\\rbrack \) converges to the solution of the equation \( x - {Ax} = b \) .
|
Of course, the Neumann Theorem can be used to solve the equation \( \\left( {I - A}\\right) x = b \) . It gives \( x = {\\left( I - A\\right) }^{-1}b = \\mathop{\\sum }\\limits_{{n = 0}}^{\\infty }{A}^{n}b \) . There seems to be no obvious connection between this solution and the one provided by Theorem 8.
|
No
|
Theorem 1. If \( \Omega \) is a region to which Green’s Theorem applies, then the Laplacian is self-adjoint with respect to the inner product (9) when applied to functions vanishing on \( \partial \Omega \) .
|
Proof. Using subscripts to denote partial derivatives, we write Green's Theorem in the form\n\[ \n{\int }_{\Omega }\left( {{Q}_{x} - {P}_{y}}\right) = {\int }_{\partial \Omega }\left( {{Pdx} + {Qdy}}\right) \]\n\nApplying this to the functions \( Q = u{v}_{x} - v{u}_{x} \) and \( P = v{u}_{y} - u{v}_{y} \), we obtain\n\n\[ \n{\int }_{\Omega }\left( {u{\nabla }^{2}v - v{\nabla }^{2}u}\right) = {\int }_{\partial \Omega }\left\lbrack {\left( {v{u}_{y} - u{v}_{y}}\right) {dx} + \left( {u{v}_{x} - v{u}_{x}}\right) {dy}}\right\rbrack \]\n\nSince \( v \) and \( u \) vanish on \( \partial \Omega \), we conclude that \( \left\langle {u,{\nabla }^{2}v}\right\rangle = \left\langle {{\nabla }^{2}u, v}\right\rangle \) .
|
Yes
|
Under the hypotheses listed above we have the following: For each \( z \) in \( V \) there is a unique \( w \) in \( U \) such that \[ B\left( {w, v}\right) = \langle z, v\rangle \;\text{ for all }v\text{ in }V \] Furthermore, \( w \) depends linearly and continuously on \( z \) .
|
Proof. As usual, we define the \( u \) -sections of \( B \) by \( {B}_{u}\left( v\right) = B\left( {u, v}\right) \) . Then each \( {B}_{u} \) is a continuous linear functional on \( V \) . Indeed, \[ \begin{Vmatrix}{B}_{u}\end{Vmatrix} = \sup \{ \left| {B\left( {u, v}\right) }\right| : v \in V,\parallel v\parallel = 1\} \leq \alpha \parallel u\parallel \] By the Riesz Representation Theorem (Section 2.3, Theorem 1, page 81), there corresponds to each \( u \) in \( U \) a unique point \( {Au} \) in \( V \) such that \( {B}_{u}\left( v\right) = \langle {Au}, v\rangle \) . Elementary arguments show that \( A \) is a linear map of \( U \) into \( V \) . Thus, \[ \langle {Au}, v\rangle = {B}_{u}\left( v\right) = B\left( {u, v}\right) \] The continuity of \( A \) follows from the inequality \( \parallel {Au}\parallel = \begin{Vmatrix}{B}_{u}\end{Vmatrix} \leq \alpha \parallel u\parallel \) . The operator \( A \) is also bounded from below: \[ \parallel {Au}\parallel = \mathop{\sup }\limits_{{\parallel v\parallel = 1}}\langle {Au}, v\rangle = \mathop{\sup }\limits_{{\parallel v\parallel = 1}}\left| {B\left( {u, v}\right) }\right| \geq \beta \parallel u\parallel \] In order to prove that the range of \( A \) is closed, let \( \left\lbrack {v}_{n}\right\rbrack \) be a convergent sequence in the range of \( A \) . Write \( {v}_{n} = A{u}_{n} \), and note that by the Cauchy property \[ 0 = \mathop{\lim }\limits_{{n, m \rightarrow \infty }}\begin{Vmatrix}{{v}_{n} - {v}_{m}}\end{Vmatrix} = \mathop{\lim }\limits_{{n, m}}\begin{Vmatrix}{A{u}_{n} - A{u}_{m}}\end{Vmatrix} \geq \beta \mathop{\lim }\limits_{{n, m}}\begin{Vmatrix}{{u}_{n} - {u}_{m}}\end{Vmatrix} \] Consequently, \( \left\lbrack {u}_{n}\right\rbrack \) is a Cauchy sequence. Let \( u = \mathop{\lim }\limits_{n}{u}_{n} \) . By the continuity of \( A,{v}_{n} = A{u}_{n} \rightarrow {Au} \), showing that \( \mathop{\lim }\limits_{n}{v}_{n} \) is in the range of \( A \) . 
Next, we wish to establish that the range of \( A \) is dense in \( V \) . If it is not, then the closure of the range is a proper subspace of \( V \) . Select a nonzero vector \( p \) orthogonal to the range of \( A \) . Then \( \langle {Au}, p\rangle = 0 \) for all \( u \) . Equivalently, \( B\left( {u, p}\right) = 0 \), contrary to the hypothesis (3) on \( B \) . At this juncture, we know that \( {A}^{-1} \) exists as a linear map. Its continuity follows from the fact that \( A \) is bounded below: If \( u = {A}^{-1}v \) , then the inequality \( \parallel {Au}\parallel \geq \beta \parallel u\parallel \) implies \( \parallel v\parallel \geq \beta \begin{Vmatrix}{{A}^{-1}v}\end{Vmatrix} \) . The equation we seek to solve is \( B\left( {w, v}\right) = \langle z, v\rangle \) for all \( v \) . Equivalently, \( \langle {Aw}, v\rangle = \langle z, v\rangle \) . Hence \( {Aw} = z \) and \( w = {A}^{-1}z \) . Since there is no other choice for \( w \), we conclude that it is unique and depends continuously and linearly on \( z \) .
|
Yes
|
Consider the two-point boundary-value problem\n\n\[ \n{\left( p{u}^{\prime }\right) }^{\prime } - {qu} = f\;u\left( a\right) = 0\;u\left( b\right) = 0 \n\]
|
This is a Sturm-Liouville problem, the subject of Theorem 1 in the next section (page 206) as well as Section 2.5, pages 105ff. In order to apply Theorem 3, one requires the bilinear form and linear functional appropriate to the problem. They are revealed by a standard procedure: Multiply the differential equation by a function \( v \) that vanishes at the endpoints \( a \) and \( b \), and then use integration by parts:\n\n\[ \n{\int }_{a}^{b}\left\lbrack {v{\left( p{u}^{\prime }\right) }^{\prime } - {vqu}}\right\rbrack = {\int }_{a}^{b}{vf} \n\]\n\n\[ \n{\left. vp{u}^{\prime }\right| }_{a}^{b} - {\int }_{a}^{b}{v}^{\prime }p{u}^{\prime } - {\int }_{a}^{b}{vqu} = {\int }_{a}^{b}{vf} \n\]\n\n\[ \n{\int }_{a}^{b}\left\lbrack {p{u}^{\prime }{v}^{\prime } + {quv}}\right\rbrack = {\int }_{a}^{b} - {fv} \n\]\n\n\[ \nB\left( {u, v}\right) = {\int }_{a}^{b}\left( {p{u}^{\prime }{v}^{\prime } + {quv}}\right) \;\langle f, v\rangle = - {\int }_{a}^{b}{fv} \n\]
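The weak formulation can be checked numerically on a model problem (our own data: \( p = 1, q = 0, f = - 2 \) on \( \left\lbrack {0,1}\right\rbrack \), for which \( u\left( t\right) = t\left( {1 - t}\right) \) solves \( {u}^{\prime \prime } = - 2 \) with zero endpoint values). Then \( B\left( {u, v}\right) \) and \( - \int {fv} \) should agree for each test function \( v \) vanishing at the endpoints:

```python
import math

n = 4000
h = 1.0 / n
mid = [(i + 0.5) * h for i in range(n)]    # midpoint quadrature nodes

# Model data: p = 1, q = 0, f = -2; solution u(t) = t(1 - t), u' = 1 - 2t.
u_prime = lambda t: 1.0 - 2.0 * t
f = lambda t: -2.0

def gap(k):
    """|B(u, v) - <f, v>| for the test function v = sin(k*pi*t)."""
    v = lambda t: math.sin(k * math.pi * t)
    v_prime = lambda t: k * math.pi * math.cos(k * math.pi * t)
    B = sum(u_prime(t) * v_prime(t) for t in mid) * h       # p u'v' + q u v
    rhs = -sum(f(t) * v(t) for t in mid) * h                # -integral of f v
    return abs(B - rhs)

worst = max(gap(k) for k in (1, 2, 3))
```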
|
Yes
|
The steady-state distribution of heat in a two-dimensional domain \( \Omega \) is governed by Poisson’s Equation:\n\n\[ \n{\nabla }^{2}u = f\;\text{ in }\Omega \n\] \n\nHere, \( u\left( {x, y}\right) \) is the temperature at the location \( \left( {x, y}\right) \) in \( {\mathbb{R}}^{2} \), and \( f \) is the heat-source function. If the temperature on the boundary \( \partial \Omega \) is held constant, then, with suitable units for the measurement of temperature, we may take \( u\left( {x, y}\right) = \) 0 on \( \partial \Omega \) . This simple case leads to the problem of discovering \( u \) such that \( B\left( {u, v}\right) = \langle f, v\rangle \) for all \( v \), where \n\n\[ \nB\left( {u, v}\right) = - {\int }_{\Omega }\left( {{u}_{x}{v}_{x} + {u}_{y}{v}_{y}}\right) \n\]
|
To arrive at this form of the problem, first write the equivalent equation\n\n\[ \n\left\langle {{\nabla }^{2}u, v}\right\rangle = \langle f, v\rangle \;\text{ for all }v \n\] \n\nThe integral form of this is\n\n\[ \n{\int }_{\Omega }v{\nabla }^{2}u = {\int }_{\Omega }{vf}\;\text{ for all }v \n\] \n\nThe integral on the left is treated by using Green's Theorem (also known as Gauss's Theorem). (This theorem plays the role of integration by parts for multivariate functions.) It states that\n\n\[ \n{\int }_{\Omega }\left( {{P}_{x} + {Q}_{y}}\right) = {\int }_{\partial \Omega }\left( {{Pdy} - {Qdx}}\right) \n\] \n\nThis equation holds true under mild assumptions on \( P, Q,\Omega \), and \( \partial \Omega \) . (See [Wid].) Applying it with \( P = {u}_{x}v \) and \( Q = {u}_{y}v \), and exploiting the hypothesis of zero boundary values, we have\n\n\[ \n{\int }_{\Omega }v{\nabla }^{2}u = {\int }_{\Omega }\left( {{u}_{xx} + {u}_{yy}}\right) v \n\] \n\n\[ \n= {\int }_{\Omega }\left\lbrack {{\left( {u}_{x}v\right) }_{x} + {\left( {u}_{y}v\right) }_{y} - {u}_{x}{v}_{x} - {u}_{y}{v}_{y}}\right\rbrack \n\] \n\n\[ \n= {\int }_{\partial \Omega }\left( {{u}_{x}v\,{dy} - {u}_{y}v\,{dx}}\right) - {\int }_{\Omega }\left( {{u}_{x}{v}_{x} + {u}_{y}{v}_{y}}\right) \n\] \n\n\[ \n= - {\int }_{\Omega }\left( {{u}_{x}{v}_{x} + {u}_{y}{v}_{y}}\right) = B\left( {u, v}\right) \n\]
|
Yes
|
Theorem 1. If \( x \) is a function in \( {C}^{2}\left\lbrack {a, b}\right\rbrack \) that minimizes \( \Phi \left( x\right) \) locally subject to the constraints \( x\left( a\right) = \alpha \) and \( x\left( b\right) = \beta \), then \( x \) solves the problem in (5).
|
Proof. Assume the hypotheses, and let \( y \) be any element of \( {C}^{2}\left\lbrack {a, b}\right\rbrack \) such that \( y\left( a\right) = y\left( b\right) = 0 \) . We use what is known as a variational argument. For each real number \( \lambda, x + {\lambda y} \) is a competitor of \( x \) in the minimization of \( \Phi \) . Hence the function \( \lambda \mapsto \Phi \left( {x + {\lambda y}}\right) \) has a local minimum at \( \lambda = 0 \) . We compute\n\n\[ \n\frac{d}{d\lambda }\Phi \left( {x + {\lambda y}}\right) = \frac{d}{d\lambda }{\int }_{a}^{b}\left\lbrack {{\left( {x}^{\prime } + \lambda {y}^{\prime }\right) }^{2}p + {\left( x + \lambda y\right) }^{2}q + 2\left( {x + {\lambda y}}\right) f}\right\rbrack {dt} \n\]\n\n\[ \n= 2{\int }_{a}^{b}\left\lbrack {\left( {{x}^{\prime } + \lambda {y}^{\prime }}\right) {y}^{\prime }p + \left( {x + {\lambda y}}\right) {yq} + {yf}}\right\rbrack {dt} \n\]\n\nEvaluating this derivative at \( \lambda = 0 \) and setting the result equal to 0 yields the necessary condition\n\n(7)\n\n\[ \n{\int }_{a}^{b}\left( {p{x}^{\prime }{y}^{\prime } + {qxy} + {fy}}\right) {dt} = 0 \n\]\n\nWe use integration by parts on the first term, like this:\n\n\[ \n{\int }_{a}^{b}p{x}^{\prime }{y}^{\prime } = {\left. p{x}^{\prime }y\right| }_{a}^{b} - {\int }_{a}^{b}{\left( p{x}^{\prime }\right) }^{\prime }y = - {\int }_{a}^{b}{\left( p{x}^{\prime }\right) }^{\prime }y \n\]\n\nHere the fact that \( y\left( a\right) = y\left( b\right) = 0 \) has been exploited. Equation (7) now reads\n\n(8)\n\n\[ \n{\int }_{a}^{b}\left\lbrack {-{\left( p{x}^{\prime }\right) }^{\prime } + {qx} + f}\right\rbrack {ydt} = 0 \n\]\n\n(The steps just described are the same as those in Example 1, page 203.)
Since \( y \) is an arbitrary element of \( {C}^{2}\left\lbrack {a, b}\right\rbrack \) vanishing at \( a \) and \( b \), we conclude from Equation (8) that\n\n\[ \n- {\left( p{x}^{\prime }\right) }^{\prime } + {qx} + f = 0 \n\]\n\nThe details of this last argument are as follows. Let \( z = - {\left( p{x}^{\prime }\right) }^{\prime } + {qx} + f \) . Then \( {\int }_{a}^{b}z\left( t\right) y\left( t\right) {dt} = 0 \) for all functions \( y \) of the type described above. Suppose that \( z \neq 0 \) . Then for some \( \tau, z\left( \tau \right) \neq 0 \) . For definiteness, let us assume that \( z\left( \tau \right) = \varepsilon > 0 \) . By continuity, there is a closed interval \( J \subset \left( {a, b}\right) \) containing \( \tau \) in which \( z\left( t\right) \geq \varepsilon /2 \), and an open interval \( I \supset J \) in which \( z\left( t\right) > 0 \) . Let \( y \) be a \( {C}^{2} \) function that is constantly equal to 1 on \( J \) and constantly equal to 0 on the complement of \( I \) . Then \( {\int }_{a}^{b}z\left( t\right) y\left( t\right) {dt} > 0 \), a contradiction.
|
Yes
|
Theorem 2. Assume that \( p\left( t\right) > 0 \) and \( q\left( t\right) \geq 0 \) on \( \left\lbrack {a, b}\right\rbrack \) . If \( x \) is a function in \( {C}^{2}\left\lbrack {a, b}\right\rbrack \) that solves the boundary-value problem (5) then \( x \) is the unique local minimizer of \( \Phi \) subject to the boundary conditions as constraints.
|
Proof. Let \( z \in {C}^{2}\left\lbrack {a, b}\right\rbrack, z \neq x, z\left( a\right) = \alpha \), and \( z\left( b\right) = \beta \) . Then the function \( y = z - x \) satisfies zero boundary conditions but is not 0 . By calculations like those in the preceding proof,\n\n(9)\n\n\[ \Phi \left( {x + y}\right) = \Phi \left( x\right) + 2{\int }_{a}^{b}\left( {{x}^{\prime }{y}^{\prime }p + {xyq} + {yf}}\right) + {\int }_{a}^{b}\left\lbrack {{\left( {y}^{\prime }\right) }^{2}p + {y}^{2}q}\right\rbrack \]\n\nUsing integration by parts on the middle term, we find that it equals \( 2{\int }_{a}^{b}\left\lbrack {-{\left( p{x}^{\prime }\right) }^{\prime } + {qx} + f}\right\rbrack y \), which is zero because \( x \) solves the boundary-value problem (5). Since \( y \) is not identically zero and vanishes at \( a \), its derivative \( {y}^{\prime } \) cannot vanish identically; hence the last term in Equation (9) is positive, and \( \Phi \left( z\right) > \Phi \left( x\right) \) .
|
Yes
|
Theorem 3. Let \( x \) denote the solution of the boundary-value problem \( \left( {{11},{12}}\right) \), and let \( {x}_{n} \) be a point in \( {U}_{n} \) that minimizes \( \Phi \) on \( {U}_{n} \) . Assume \( q \geq 0 \) and \( p > 0 \) on \( \left\lbrack {0,1}\right\rbrack \) . Then \( {x}_{n}\left( t\right) \rightarrow x\left( t\right) \) uniformly.
|
Proof. Since we have assumed that \( \mathop{\bigcup }\limits_{{n = 1}}^{\infty }{U}_{n} \) is dense in \( X \), the preceding lemma and Theorem 2 imply that \( \Phi \left( {x}_{n}\right) \downarrow \Phi \left( x\right) \) . Notice that our choice of norm on \( X \) guarantees the continuity of \( \Phi \) . In the following, we use the standard inner-product notation\n\n\[ \langle u, v\rangle = {\int }_{0}^{1}u\left( t\right) v\left( t\right) {dt} \]\n\nand \( \parallel {\parallel }_{2} \) is the accompanying quadratic norm. From Equation (9) in the proof of Theorem 2, we have\n\n\[ \Phi \left( {x}_{n}\right) - \Phi \left( x\right) = {\int }_{0}^{1}\left\lbrack {p \cdot {\left( {x}_{n}^{\prime } - {x}^{\prime }\right) }^{2} + q \cdot {\left( {x}_{n} - x\right) }^{2}}\right\rbrack {dt} \]\n\n\[ \geq {\int }_{0}^{1}p \cdot {\left( {x}_{n}^{\prime } - {x}^{\prime }\right) }^{2}{dt} \geq \mathop{\min }\limits_{{0 \leq t \leq 1}}p\left( t\right) {\begin{Vmatrix}{x}_{n}^{\prime } - {x}^{\prime }\end{Vmatrix}}_{2}^{2} \]\n\nThis shows that \( {\begin{Vmatrix}{x}_{n}^{\prime } - {x}^{\prime }\end{Vmatrix}}_{2} \rightarrow 0 \) . By obvious manipulations, including the Cauchy-Schwarz inequality, we have now\n\n\[ \left| {{x}_{n}\left( s\right) - x\left( s\right) }\right| = \left| {{\int }_{0}^{s}\left\lbrack {{x}_{n}^{\prime }\left( t\right) - {x}^{\prime }\left( t\right) }\right\rbrack {dt}}\right| \leq {\int }_{0}^{s}\left| {{x}_{n}^{\prime }\left( t\right) - {x}^{\prime }\left( t\right) }\right| {dt} \]\n\n\[ \leq {\int }_{0}^{1}\left| {{x}_{n}^{\prime }\left( t\right) - {x}^{\prime }\left( t\right) }\right| {dt} = \left\langle {\left| {{x}_{n}^{\prime } - {x}^{\prime }}\right| ,1}\right\rangle \leq {\begin{Vmatrix}{x}_{n}^{\prime } - {x}^{\prime }\end{Vmatrix}}_{2} \cdot \parallel 1{\parallel }_{2} \]\n\nThis shows that\n\n\[ {\begin{Vmatrix}{x}_{n} - x\end{Vmatrix}}_{\infty } \leq {\begin{Vmatrix}{x}_{n}^{\prime } - {x}^{\prime }\end{Vmatrix}}_{2} \rightarrow 0 \]
|
Yes
|
Theorem 1. Each solution of the boundary-value problem (4) solves the integral equation (6) and conversely.
|
Proof. Let \( x \) be any function in \( C\left\lbrack {0,1}\right\rbrack \), and define \( y \) by writing\n\n\[ y\left( t\right) = - {\int }_{0}^{1}G\left( {t, s}\right) f\left( {s, x\left( s\right) }\right) {ds} \]\n\n\[ = - {\int }_{0}^{t}G\left( {t, s}\right) f\left( {s, x\left( s\right) }\right) {ds} + {\int }_{1}^{t}G\left( {t, s}\right) f\left( {s, x\left( s\right) }\right) {ds} \]\n\nWe intend to differentiate in this equation, and the reader should recall the general rule:\n\n\[ \frac{d}{dt}{\int }_{a}^{t}h\left( s\right) {ds} = h\left( t\right) \]\n\nThen the chain rule gives us\n\n\[ \frac{d}{dt}{\int }_{a}^{k\left( t\right) }h\left( s\right) {ds} = h\left( {k\left( t\right) }\right) {k}^{\prime }\left( t\right) \]\n\nNow, for \( {y}^{\prime }\left( t\right) \) we have\n\n\[ {y}^{\prime }\left( t\right) = - G\left( {t, t}\right) f\left( {t, x\left( t\right) }\right) - {\int }_{0}^{t}{G}_{t}\left( {t, s}\right) f\left( {s, x\left( s\right) }\right) {ds} \]\n\n\[ + G\left( {t, t}\right) f\left( {t, x\left( t\right) }\right) + {\int }_{1}^{t}{G}_{t}\left( {t, s}\right) f\left( {s, x\left( s\right) }\right) {ds} \]\n\n\[ = {\int }_{0}^{t}{sf}\left( {s, x\left( s\right) }\right) {ds} + {\int }_{1}^{t}\left( {1 - s}\right) f\left( {s, x\left( s\right) }\right) {ds} \]\n\nA second differentiation yields\n\n(7)\n\n\[ {y}^{\prime \prime }\left( t\right) = {tf}\left( {t, x\left( t\right) }\right) + \left( {1 - t}\right) f\left( {t, x\left( t\right) }\right) = f\left( {t, x\left( t\right) }\right) \]\n\nIf \( x \) is a solution of the integral equation, then \( y = x \), and our calculation (7) shows that \( {x}^{\prime \prime } = {y}^{\prime \prime } = f\left( {t, x}\right) \). Since \( G\left( {t, s}\right) = 0 \) on the boundary of the square, \( y\left( 0\right) = y\left( 1\right) = 0 \). Hence \( x\left( 0\right) = x\left( 1\right) = 0 \). This proves half of the theorem.\n\nNow suppose that \( x \) is a solution of the boundary-value problem. 
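The differentiation above can be tested numerically. A minimal sketch, assuming the standard Green's function \( G(t,s) = s(1-t) \) for \( s \leq t \) and \( G(t,s) = t(1-s) \) for \( s \geq t \) (consistent with the derivatives \( G_t = -s \) and \( G_t = 1-s \) used in the calculation): for the constant function \( f = 1 \), the representation \( y(t) = -\int_0^1 G(t,s)\,ds \) should reproduce the solution \( t(t-1)/2 \) of \( x'' = 1 \), \( x(0) = x(1) = 0 \).

```python
import numpy as np

def G(t, s):
    # Green's function for x'' with zero boundary values at 0 and 1
    return np.where(s <= t, s * (1.0 - t), t * (1.0 - s))

s = np.linspace(0.0, 1.0, 10001)
w = np.diff(s)

def y(t):
    # y(t) = -integral of G(t, s) f(s) ds with f = 1, by the trapezoid rule
    g = G(t, s)
    return -float(np.sum((g[1:] + g[:-1]) * 0.5 * w))

ts = [0.1, 0.25, 0.5, 0.75, 0.9]
err = max(abs(y(t) - t * (t - 1.0) / 2.0) for t in ts)
```

Note that \( \int_0^1 G(t,s)\,ds = t(1-t)/2 \leq 1/8 \), a bound used in the next theorem.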
The above calculation (7) shows that\n\n\[ {y}^{\prime \prime }\left( t\right) = f\left( {t, x\left( t\right) }\right) = {x}^{\prime \prime }\left( t\right) \]\n\nIt follows that the two functions \( x \) and \( y \) can differ only by a linear function of \( t \) (because \( {x}^{\prime \prime } - {y}^{\prime \prime } = 0 \)). Since \( x\left( t\right) \) and \( y\left( t\right) \) take the same values at \( t = 0 \) and \( t = 1 \), we conclude that \( x = y \). Thus \( x \) solves the integral equation.
|
Yes
|
Theorem 2. Let \( f\left( {s, t}\right) \) be continuous in the domain defined by the inequalities \( 0 \leq s \leq 1, - \infty < t < \infty \) . Assume also that \( f \) satisfies a Lipschitz condition in this domain:\n\n(8)\n\n\[ \left| {f\left( {s,{t}_{1}}\right) - f\left( {s,{t}_{2}}\right) }\right| \leq k\left| {{t}_{1} - {t}_{2}}\right| \;\left( {k < 8}\right) \]\n\nThen the integral equation (6) has a unique solution in \( C\left\lbrack {0,1}\right\rbrack \) .
|
Proof. Consider the nonlinear mapping \( F : C\left\lbrack {0,1}\right\rbrack \rightarrow C\left\lbrack {0,1}\right\rbrack \) defined by\n\n\[ \left( {Fx}\right) \left( t\right) = - {\int }_{0}^{1}G\left( {t, s}\right) f\left( {s, x\left( s\right) }\right) {ds}\;x \in C\left\lbrack {0,1}\right\rbrack \]\n\nWe shall prove that \( F \) is a contraction. We have\n\n\[ \left| {\left( {Fu}\right) \left( t\right) - \left( {Fv}\right) \left( t\right) }\right| \leq {\int }_{0}^{1}G\left( {t, s}\right) \left| {f\left( {s, u\left( s\right) }\right) - f\left( {s, v\left( s\right) }\right) }\right| {ds} \]\n\n\[ \leq k{\int }_{0}^{1}G\left( {t, s}\right) \left| {u\left( s\right) - v\left( s\right) }\right| {ds} \]\n\n\[ \leq k\parallel u - v{\parallel }_{\infty }{\int }_{0}^{1}G\left( {t, s}\right) {ds} \]\n\n\[ = \left( {k/8}\right) \parallel u - v{\parallel }_{\infty } \]\n\nIt follows that\n\n\[ \parallel {Fu} - {Fv}{\parallel }_{\infty } \leq \left( {k/8}\right) \parallel u - v{\parallel }_{\infty } \]\n\nand that \( F \) is a contraction. Now apply Banach’s Theorem, page 177, taking note of the fact that \( C\left\lbrack {0,1}\right\rbrack \), with the supremum norm, is complete.
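Banach's theorem also yields a practical algorithm: iterate \( F \) from any starting function. A sketch of a discretized version, assuming the Green's function \( G(t,s) = s(1-t) \) for \( s \leq t \), \( t(1-s) \) for \( s \geq t \), and taking the illustrative choice \( f(s,t) = \cos t \), whose Lipschitz constant is \( k = 1 < 8 \):

```python
import numpy as np

n = 401
s = np.linspace(0.0, 1.0, n)
h = s[1] - s[0]

S, T = np.meshgrid(s, s)                 # T[i, j] = t_i,  S[i, j] = s_j
G = np.where(S <= T, S * (1.0 - T), T * (1.0 - S))

w = np.full(n, h)                        # trapezoid-rule weights
w[0] = w[-1] = h / 2.0

def F(x):
    # (Fx)(t) = -integral of G(t, s) f(s, x(s)) ds, with f(s, x) = cos x
    return -G @ (w * np.cos(x))

x = np.zeros(n)
for _ in range(60):                      # contraction factor is at most k/8 = 1/8
    x_new = F(x)
    diff = float(np.max(np.abs(x_new - x)))
    x = x_new
```

After convergence, `x` is a discrete approximation to the solution of \( x'' = \cos x \), \( x(0) = x(1) = 0 \), up to grid and quadrature error.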
|
Yes
|
Consider the two-point boundary-value problem\n\n\[ \left\{ \begin{array}{l} {x}^{\prime \prime }\left( t\right) = \frac{1}{2}\exp \left\{ {\frac{1}{2}\left( {t + 1}\right) \cos \left\lbrack {x\left( t\right) + 7 - {3t}}\right\rbrack }\right\} \; - 1 \leq t \leq 1 \\ x\left( {-1}\right) = - {10}\;x\left( 1\right) = - 4 \end{array}\right. \]
|
Our existence theorem does not apply to this directly, and some changes of variables are called for. We set\n\n\[ z\left( t\right) = x\left( t\right) - {3t} + 7 \]\n\nand find that \( z \) should solve this problem:\n\n\[ \left\{ \begin{array}{l} {z}^{\prime \prime }\left( t\right) = \frac{1}{2}\exp \left\{ {\frac{1}{2}\left( {t + 1}\right) \cos z\left( t\right) }\right\} \\ z\left( {-1}\right) = z\left( {+1}\right) = 0 \end{array}\right. \]\n\nNext we set\n\n\[ t = - 1 + {2s}\;y\left( s\right) = z\left( t\right) \]\n\nand find that \( y \) should solve this problem:\n\n\[ \left\{ \begin{array}{l} {y}^{\prime \prime }\left( s\right) = 2\exp \{ s\cos y\left( s\right) \} \\ y\left( 0\right) = y\left( 1\right) = 0 \end{array}\right. \]\n\nTo this problem we can apply the preceding corollary. The function \( f\left( {s, r}\right) = 2{e}^{s\cos r} \) satisfies a Lipschitz condition, as we see by applying the mean value theorem:\n\n\[ \left| {f\left( {s,{r}_{1}}\right) - f\left( {s,{r}_{2}}\right) }\right| = \left| {\frac{\partial f}{\partial r}\left( {s,{r}_{3}}\right) }\right| \left| {{r}_{1} - {r}_{2}}\right| \]\n\nThe derivative here is bounded as follows:\n\n\[ \left| {2{e}^{s\cos r}\left( {-s\sin r}\right) }\right| \leq {2e} \approx {5.436} \]\n\nSince the Lipschitz constant \( {2e} \) is less than 8, the boundary-value problem (11) has a solution \( y \) . Hence (9) has a solution \( x \), and it is given by\n\n\[ x\left( t\right) = y\left( {\left( {t + 1}\right) /2}\right) + {3t} - 7 \]
|
Yes
|
Theorem 3. Problems (14) and (15) are equivalent.
|
Proof. We proceed as in the proof of Theorem 1, which concerns the \
|
No
|
Theorem 4. If \( k\left( {1 + 8\parallel v{\parallel }_{\infty }}\right) < 8 \), and if the weights \( {A}_{j} \) in Equation (19) are all positive, then for \( i = 1,2,\ldots, n \), \[ \left| {x\left( {s}_{i}\right) - {y}_{i}}\right| \leq \lambda \parallel u{\parallel }_{\infty }\;\text{ where }\lambda = {\left\lbrack 1 - k\left( \frac{1}{8} + \parallel v{\parallel }_{\infty }\right) \right\rbrack }^{-1} \]
|
Proof. Let \( {\varepsilon }_{i} = \left| {x\left( {s}_{i}\right) - {y}_{i}}\right| \) and \( \varepsilon = \max {\varepsilon }_{i} \). Then for each \( i \) we have \[ {\varepsilon }_{i} = \left| {{\int }_{0}^{1}G\left( {{s}_{i}, s}\right) f\left( {s, x\left( s\right) }\right) {ds} - \mathop{\sum }\limits_{{j = 1}}^{n}{A}_{j}G\left( {{s}_{i},{s}_{j}}\right) f\left( {{s}_{j},{y}_{j}}\right) }\right| \] \[ = \left| {u\left( {s}_{i}\right) + \mathop{\sum }\limits_{{j = 1}}^{n}{A}_{j}G\left( {{s}_{i},{s}_{j}}\right) f\left( {{s}_{j}, x\left( {s}_{j}\right) }\right) - \mathop{\sum }\limits_{{j = 1}}^{n}{A}_{j}G\left( {{s}_{i},{s}_{j}}\right) f\left( {{s}_{j},{y}_{j}}\right) }\right| \] \[ \leq \parallel u{\parallel }_{\infty } + \mathop{\sum }\limits_{{j = 1}}^{n}{A}_{j}G\left( {{s}_{i},{s}_{j}}\right) \left| {f\left( {{s}_{j}, x\left( {s}_{j}\right) }\right) - f\left( {{s}_{j},{y}_{j}}\right) }\right| \] \[ \leq \parallel u{\parallel }_{\infty } + {k\varepsilon }\mathop{\sum }\limits_{{j = 1}}^{n}{A}_{j}G\left( {{s}_{i},{s}_{j}}\right) = \parallel u{\parallel }_{\infty } + {k\varepsilon }\left\lbrack {{\int }_{0}^{1}G\left( {{s}_{i}, s}\right) {ds} - v\left( {s}_{i}\right) }\right\rbrack \] \[ \leq \parallel u{\parallel }_{\infty } + {k\varepsilon }\left( {\frac{1}{8} + \parallel v{\parallel }_{\infty }}\right) \] It follows that \( \varepsilon \leq \parallel u{\parallel }_{\infty } + {k\varepsilon }\left( {\frac{1}{8} + \parallel v{\parallel }_{\infty }}\right) \). When this inequality is solved for \( \varepsilon \), the result is the one stated in the theorem.
|
Yes
|
Theorem 6. If the nodes and the function \( w \) are prescribed, then there exists a formula of the type displayed in Equation (26) that is exact for all polynomials of degree at most \( n - 1 \) .
|
Proof. Recall the Lagrange interpolation operator described in Example 1 of Section 4.4, page 193. Its formula is\n\n(27)\n\n\[ \n{Lx} = \mathop{\sum }\limits_{{i = 1}}^{n}x\left( {t}_{i}\right) {\ell }_{i} \n\]\n\nSince \( L \) is a projection of \( C\left\lbrack {a, b}\right\rbrack \) onto \( \mathop{\prod }\limits_{{n - 1}} \), we have \( {Lx} = x \) for all \( x \in \mathop{\prod }\limits_{{n - 1}} \) , and consequently, for such an \( x \) ,\n\n(28)\n\n\[ \n{\int }_{a}^{b}x\left( t\right) w\left( t\right) {dt} = {\int }_{a}^{b}\mathop{\sum }\limits_{{i = 1}}^{n}x\left( {t}_{i}\right) {\ell }_{i}\left( t\right) w\left( t\right) {dt} = \mathop{\sum }\limits_{{i = 1}}^{n}x\left( {t}_{i}\right) {\int }_{a}^{b}{\ell }_{i}\left( t\right) w\left( t\right) {dt} \n\]
|
Yes
|
Example 2. If \( \left( {{t}_{1},{t}_{2},{t}_{3}}\right) = \left( {-1,0, + 1}\right) \) and \( \left\lbrack {a, b}\right\rbrack = \left\lbrack {-1,1}\right\rbrack \), what is the quadrature formula produced by the preceding method when \( w\left( t\right) = 1 \) ?
|
We follow the prescription and begin with the functions \( {\ell }_{i} \) :\n\n\[ \n{\ell }_{1}\left( t\right) = \left( {t - {t}_{2}}\right) \left( {t - {t}_{3}}\right) {\left( {t}_{1} - {t}_{2}\right) }^{-1}{\left( {t}_{1} - {t}_{3}\right) }^{-1} = t\left( {t - 1}\right) /2 \n\]\n\n\[ \n{\ell }_{2}\left( t\right) = \left( {t - {t}_{1}}\right) \left( {t - {t}_{3}}\right) {\left( {t}_{2} - {t}_{1}\right) }^{-1}{\left( {t}_{2} - {t}_{3}\right) }^{-1} = 1 - {t}^{2} \n\]\n\n\[ \n{\ell }_{3}\left( t\right) = t\left( {t + 1}\right) /2\;\text{ (by symmetry) } \n\]\n\nThe integrals \( {\int }_{-1}^{1}{\ell }_{i}\left( t\right) {dt} \) are \( \frac{1}{3},\frac{4}{3} \), and \( \frac{1}{3} \), and the quadrature formula is therefore\n\n(29)\n\n\[ \n{\int }_{-1}^{1}x\left( t\right) {dt} \approx \frac{1}{3}x\left( {-1}\right) + \frac{4}{3}x\left( 0\right) + \frac{1}{3}x\left( 1\right) \n\]\n\nThe formula gives correct values for each \( x \in \mathop{\prod }\limits_{2} \), by the analysis in the proof of Theorem 6. As a matter of fact, the formula is correct on \( \mathop{\prod }\limits_{3} \) because of symmetries in Equation (29). This formula is Simpson's Rule.
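The computation of the weights \( \int \ell_i \) is easily mechanized. A minimal sketch in Python reproducing the weights \( \frac{1}{3}, \frac{4}{3}, \frac{1}{3} \) and checking exactness on a cubic:

```python
import numpy as np
from numpy.polynomial import Polynomial

nodes = [-1.0, 0.0, 1.0]

weights = []
for i, ti in enumerate(nodes):
    li = Polynomial([1.0])               # build the Lagrange basis polynomial l_i
    for j, tj in enumerate(nodes):
        if j != i:
            li = li * Polynomial([-tj, 1.0]) * (1.0 / (ti - tj))
    integral = li.integ()
    weights.append(integral(1.0) - integral(-1.0))   # A_i = integral of l_i over [-1, 1]

# Exactness on a cubic: the integral of t^3 + t^2 over [-1, 1] is 2/3
x = lambda t: t**3 + t**2
approx = sum(A * x(t) for A, t in zip(weights, nodes))
```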
|
Yes
|
Theorem 7. Gaussian Quadrature. For appropriate nodes and weights, the quadrature formula (26) is exact on \( \mathop{\prod }\limits_{{{2n} - 1}} \) .
|
Proof. Define an inner product on \( C\left\lbrack {a, b}\right\rbrack \) by putting\n\n(30)\n\n\[ \langle x, y\rangle = {\int }_{a}^{b}x\left( t\right) y\left( t\right) w\left( t\right) {dt} \]\n\nLet \( p \) be the unique monic polynomial in \( \mathop{\prod }\limits_{n} \) that is orthogonal to \( \mathop{\prod }\limits_{{n - 1}} \), orthogonality being defined by the inner product (30). Let the nodes \( {t}_{1},\ldots ,{t}_{n} \) be the zeros of \( p \) . These are known to be simple zeros and lie in \( \left( {a, b}\right) \), although we do not stop to prove this. (See [Ch], page 111.) By Theorem 6, there is a set of weights \( {A}_{i} \) for which the quadrature formula (26) is exact on \( \mathop{\prod }\limits_{{n - 1}} \) . We now show that it is exact on \( \mathop{\prod }\limits_{{{2n} - 1}} \) . Let \( x \in \mathop{\prod }\limits_{{{2n} - 1}} \) . By the division algorithm, we can write \( x = {qp} + r \), where \( q \) (the quotient) and \( r \) (the remainder) belong to \( \mathop{\prod }\limits_{{n - 1}} \) . Now write\n\n\[ {\int }_{a}^{b}{xw} = {\int }_{a}^{b}{qpw} + {\int }_{a}^{b}{rw} \]\n\nSince \( p \bot \mathop{\prod }\limits_{{n - 1}} \) and \( q \in \mathop{\prod }\limits_{{n - 1}} \) the integral \( \int {qpw} \) is zero. Since \( p\left( {t}_{i}\right) = 0 \), we have \( x\left( {t}_{i}\right) = r\left( {t}_{i}\right) \) . Finally, since \( r \in \mathop{\prod }\limits_{{n - 1}} \), the quadrature formula (26) is exact for \( r \) . Putting these facts together yields\n\n\[ {\int }_{a}^{b}x\left( t\right) w\left( t\right) {dt} = {\int }_{a}^{b}r\left( t\right) w\left( t\right) {dt} = \mathop{\sum }\limits_{{i = 1}}^{n}{A}_{i}r\left( {t}_{i}\right) = \mathop{\sum }\limits_{{i = 1}}^{n}{A}_{i}x\left( {t}_{i}\right) \]\n\nFormulas that conform to Theorem 7 are known as Gaussian quadrature formulas.
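For \( w(t) = 1 \) on \( [-1,1] \), the orthogonal polynomials are the Legendre polynomials, and library routines supply the nodes and weights. A sketch verifying exactness on \( \prod_{2n-1} \) with \( n = 4 \) (the particular degree-7 polynomial is illustrative):

```python
import numpy as np

n = 4                                     # n nodes: exact on degrees up to 2n - 1 = 7
nodes, weights = np.polynomial.legendre.leggauss(n)

# A fixed degree-7 polynomial and its exact integral over [-1, 1]
p = np.polynomial.Polynomial([3.0, -1.0, 2.0, 0.5, -4.0, 1.0, 0.25, 2.0])
ip = p.integ()
exact = ip(1.0) - ip(-1.0)

gauss = float(np.sum(weights * p(nodes)))
```

Incidentally, the computed weights are all positive, as Theorem 8 below asserts, and they sum to \( \int_{-1}^{1} 1\,dt = 2 \).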
|
Yes
|
Theorem 8. The weights in a Gaussian quadrature formula are positive.
|
Proof. Suppose that Formula (26) is exact on \( {\Pi }_{{2n} - 1} \) . Then it will integrate \( {\ell }_{j}^{2} \) exactly:\n\n\[ 0 < {\int }_{a}^{b}{\ell }_{j}^{2}\left( t\right) w\left( t\right) {dt} = \mathop{\sum }\limits_{{i = 1}}^{n}{A}_{i}{\ell }_{j}^{2}\left( {t}_{i}\right) = {A}_{j} \]\n
|
Yes
|
Theorem 1. A lower semicontinuous functional on a compact set attains its infimum.
|
Proof. Recall that lower semicontinuity of \( \Phi \) means that each set of the form\n\n\[ \n{K}_{\lambda } = \{ x \in K : \Phi \left( x\right) \leq \lambda \} \n\]\n\nis closed. If \( \lambda > \rho \), then \( {K}_{\lambda } \) is nonempty. The family of closed sets \( \left\{ {K}_{\lambda }\right. \) : \( \lambda > \rho \} \) has the finite-intersection property (i.e., the intersection of any finite subcollection is nonempty). Since the space is compact,\n\n\[ \n\bigcap \left\{ {{K}_{\lambda } : \lambda > \rho }\right\} \neq \varnothing \n\]\n\n(see [Kel]). Any point in this intersection satisfies the inequality \( \Phi \left( x\right) \leq \rho \) .
|
Yes
|
Theorem 2. Under the hypotheses above, a point \( y \) satisfies the equation \( {Ay} = b \) if and only if \( y \) is a global minimum point of \( \Phi \) .
|
Proof. Let \( x \) be an arbitrary point and \( v \) any nonzero vector. Then\n\n\[ \Phi \left( {x + {tv}}\right) = \langle {Ax} + {tAv} - {2b}, x + {tv}\rangle \]\n\n(10)\n\n\[ = \langle {Ax} - {2b}, x\rangle + t\langle {Ax} - {2b}, v\rangle + t\langle {Av}, x\rangle + {t}^{2}\langle {Av}, v\rangle \]\n\n\[ = \Phi \left( x\right) + {2t}\langle {Ax} - b, v\rangle + {t}^{2}\langle {Av}, v\rangle \]\n\nThe derivative of this expression, as a function of \( t \), is\n\n(11)\n\n\[ \frac{d}{dt}\Phi \left( {x + {tv}}\right) = 2\langle {Ax} - b, v\rangle + {2t}\langle {Av}, v\rangle \]\n\nThe minimum of \( \Phi \left( {x + {tv}}\right) \) occurs when this derivative is zero. The value of \( t \) for which this happens is\n\n(12)\n\n\[ \widetilde{t} = \langle b - {Ax}, v\rangle \langle {Av}, v{\rangle }^{-1} \]\n\nWhen this value is substituted in Equation (10) the result is\n\n(13)\n\n\[ \Phi \left( {x + \widetilde{t}v}\right) = \Phi \left( x\right) - \langle b - {Ax}, v{\rangle }^{2}\langle {Av}, v{\rangle }^{-1} \]\n\nThis shows that we can cause \( \Phi \left( x\right) \) to decrease by passing to the point \( x + \widetilde{t}v \) , except when \( b - {Ax} \bot v \) . If \( b - {Ax} \neq 0 \), then many directions \( v \) can be chosen for our purpose, but if \( {Ax} = b \), we cannot decrease \( \Phi \left( x\right) \) .\n\nIn the problem under consideration, the directional derivative of \( \Phi \) is obtained by putting \( t = 0 \) in Equation (11):\n\n(14)\n\n\[ {\left. \frac{d}{dt}\Phi \left( x + tv\right) \right| }_{t = 0} = 2\langle {Ax} - b, v\rangle \]\n\nIt follows that the direction of steepest descent is the residual vector \( r = b - {Ax} \) . (Positive scalar factors can be ignored in specifying a direction vector.) 
The algorithm for steepest descent in this problem is therefore described by these formulas:\n\n(15)\n\n\[ {r}_{n} = b - A{x}_{n}\;{t}_{n} = \langle {r}_{n},{r}_{n}\rangle /\langle A{r}_{n},{r}_{n}\rangle \;{x}_{n + 1} = {x}_{n} + {t}_{n}{r}_{n} \]
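Formulas (15) translate directly into code. A minimal sketch for a small symmetric positive definite system (the matrix and right-hand side are illustrative):

```python
import numpy as np

# An illustrative symmetric positive definite system
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, -2.0, 3.0])

x = np.zeros(3)
for _ in range(300):
    r = b - A @ x                         # residual = direction of steepest descent
    if np.linalg.norm(r) < 1e-14:
        break
    t = (r @ r) / (r @ (A @ r))           # optimal step length t_n
    x = x + t * r

residual = float(np.linalg.norm(b - A @ x))
```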
|
Yes
|
Theorem 3. If \( A \) is self-adjoint and satisfies\n\n\[ \mathop{\inf }\limits_{{\parallel x\parallel = 1}}\langle {Ax}, x\rangle > 0 \]\n\nthen the steepest-descent sequence in Equation (15) converges to the solution of the equation \( {Ax} = b \) .
|
There is more to this theorem than meets the eye, because the hypotheses on \( A \) imply its invertibility, and consequently the equation \( {Ax} = b \) has a unique solution for each \( b \) in the Hilbert space. See the lemma in Section 4.9, page 234, for the appropriate formal result.
|
No
|
Theorem 1. In the conjugate direction algorithm (3), using an A-orthonormal set of direction vectors, each residual \( {r}_{n} = b - A{x}_{n} \) is orthogonal to all the previous search directions \( {v}_{1},\ldots ,{v}_{n - 1} \) .
|
Proof. Let \( {r}_{n} = b - A{x}_{n} \) . We wish to prove that\n\n(5)\n\n\[ {r}_{n} \bot {v}_{1},{v}_{2},\ldots ,{v}_{n - 1}\;\left( {n = 2,3,\ldots }\right) \]\n\nFirst we observe that\n\n(6)\n\n\[ {r}_{n + 1} = b - A{x}_{n + 1} = b - A\left( {{x}_{n} + {\alpha }_{n}{v}_{n}}\right) = {r}_{n} - {\alpha }_{n}A{v}_{n} \]\n\nConsequently, by the definition of \( {\alpha }_{1} \), we have\n\n(7)\n\n\[ \left\langle {{r}_{2},{v}_{1}}\right\rangle = \left\langle {{r}_{1},{v}_{1}}\right\rangle - {\alpha }_{1}\left\langle {A{v}_{1},{v}_{1}}\right\rangle = \left\langle {{r}_{1},{v}_{1}}\right\rangle - \left\langle {{r}_{1},{v}_{1}}\right\rangle = 0 \]\n\nNow assume that Equation (5) is true for a certain index \( n \) . In order to prove Equation (5) for \( n + 1 \), let \( 1 \leq i \leq n \) and use Equation (6) to write\n\n(8)\n\n\[ \left\langle {{r}_{n + 1},{v}_{i}}\right\rangle = \left\langle {{r}_{n},{v}_{i}}\right\rangle - {\alpha }_{n}\left\langle {A{v}_{n},{v}_{i}}\right\rangle \]\n\nFor \( i < n \), both terms on the right side of Equation (8) are zero. For \( i = n \) the definition of \( {\alpha }_{n} \) shows that the right side is zero, as in Equation (7).
|
Yes
|
Theorem 2. Let \( A \) be a self-adjoint operator on a Hilbert space \( X \) . Assume that\n\n\[ m\parallel x{\parallel }^{2} \leq \langle x,{Ax}\rangle \leq M\parallel x{\parallel }^{2}\;\left( {m > 0}\right) \]\n\nLet \( {v}_{1},{v}_{2},\ldots \) be an \( A \) -orthonormal sequence whose linear span is dense in \( X \) . Then the conjugate direction algorithm\n\n\[ {x}_{n + 1} = {x}_{n} + \left\langle {{v}_{n}, b - A{x}_{n}}\right\rangle {v}_{n} \]\n\nproduces a sequence that converges to \( {A}^{-1}b \) from any starting point \( {x}_{1} \) .
|
Proof. (After [Lue2]) Putting \( {\alpha }_{n} = \left\langle {{v}_{n}, b - A{x}_{n}}\right\rangle \), we have, from the algorithm,\n\n\[ {x}_{2} = {x}_{1} + {\alpha }_{1}{v}_{1} \]\n\n\[ {x}_{3} = {x}_{2} + {\alpha }_{2}{v}_{2} = {x}_{1} + {\alpha }_{1}{v}_{1} + {\alpha }_{2}{v}_{2} \]\n\nand so on. Thus, in general,\n\n(12)\n\n\[ {x}_{n} - {x}_{1} = {\alpha }_{1}{v}_{1} + {\alpha }_{2}{v}_{2} + \cdots + {\alpha }_{n - 1}{v}_{n - 1} \]\n\nFrom Equation (12) and the \( A \) -orthogonal property,\n\n(13)\n\n\[ \left\langle {{x}_{n} - {x}_{1}, A{v}_{n}}\right\rangle = 0 = \left\langle {A{x}_{n} - A{x}_{1},{v}_{n}}\right\rangle \]\n\nFrom the definition of \( {\alpha }_{n} \) and Equation (13), we get\n\n\[ {\alpha }_{n} = \left\langle {{v}_{n}, b - A{x}_{n}}\right\rangle = \left\langle {{v}_{n}, b - A{x}_{1} - A{x}_{n} + A{x}_{1}}\right\rangle \]\n\n\[ = \left\langle {{v}_{n}, b - A{x}_{1}}\right\rangle = \left\langle {{v}_{n}, A\left( {{A}^{-1}b - {x}_{1}}\right) }\right\rangle \]\n\nThis shows that the right side of Equation (12) represents the partial sum of the Fourier series of \( {A}^{-1}b - {x}_{1} \), if we use for this expansion the inner product\n\n\[ \left\lbrack {x, y}\right\rbrack = \langle x,{Ay}\rangle \]\n\nThese two inner products lead to the same topology on \( X \) because of Equation (10). Hence \( {x}_{n} - {x}_{1} \rightarrow {A}^{-1}b - {x}_{1} \) .
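In finite dimensions the Fourier series terminates, so the algorithm reaches \( A^{-1}b \) in \( n \) steps. A sketch that builds an \( A \)-orthonormal basis by Gram-Schmidt in the inner product \( [x,y] = \langle x, Ay\rangle \) and runs the iteration (the matrix is illustrative); it also records the orthogonality \( r_n \perp v_1, \ldots, v_{n-1} \) asserted in Theorem 1:

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])            # illustrative SPD matrix
b = np.array([1.0, -2.0, 3.0])
n = 3

# Gram-Schmidt in [x, y] = <x, Ay> yields an A-orthonormal basis
vs = []
for e in np.eye(n):
    u = e - sum((vj @ (A @ e)) * vj for vj in vs)
    vs.append(u / np.sqrt(u @ (A @ u)))

V = np.column_stack(vs)
gram_err = float(np.max(np.abs(V.T @ A @ V - np.eye(n))))

ortho = []                                 # records |<r_n, v_j>| for j < n (Theorem 1)
x = np.array([10.0, -5.0, 7.0])            # arbitrary starting point
for k, v in enumerate(vs):
    r = b - A @ x
    ortho.extend(abs(r @ vs[j]) for j in range(k))
    x = x + (v @ r) * v                    # x_{n+1} = x_n + <v_n, b - A x_n> v_n

err = float(np.linalg.norm(x - np.linalg.solve(A, b)))
maxortho = max(ortho)
```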
|
Yes
|
Example 1. Let \( X = Y = {\mathbb{R}}^{2} \), and define\n\n\[ f\left( x\right) = \left\lbrack \begin{matrix} \sin {\xi }_{1} + {e}^{{\xi }_{2}} - 3 \\ {\left( {\xi }_{2} + 3\right) }^{2} - {\xi }_{1} - 4 \end{matrix}\right\rbrack \;x = \left( {{\xi }_{1},{\xi }_{2}}\right) \in {\mathbb{R}}^{2} \]
|
A convenient homotopy is defined by Equation (4), and we select the starting point \( {x}_{0} = \left( {5,3}\right) \) . The derivatives on the right side of Equation (8) are computed to be\n\n\[ {h}_{2} = {f}^{\prime }\left( x\right) = \left\lbrack \begin{matrix} \cos {\xi }_{1} & {e}^{{\xi }_{2}} \\ - 1 & 2{\xi }_{2} + 6 \end{matrix}\right\rbrack \]\n\n\[ {h}_{1} = f\left( {x}_{0}\right) = \left\lbrack \begin{array}{l} a \\ b \end{array}\right\rbrack \]\n\nwhere \( a = \sin 5 + {e}^{3} - 3 \) and \( b = {27} \) . The inverse of \( {f}^{\prime }\left( x\right) \) is\n\n\[ {\left\lbrack {f}^{\prime }\left( x\right) \right\rbrack }^{-1} = \frac{1}{\Delta }\left\lbrack \begin{matrix} 2{\xi }_{2} + 6 & - {e}^{{\xi }_{2}} \\ 1 & \cos {\xi }_{1} \end{matrix}\right\rbrack \;\Delta = 2\left( {{\xi }_{2} + 3}\right) \cos {\xi }_{1} + {e}^{{\xi }_{2}} \]\n\nThe differential equation that controls the path leading away from the point \( {x}_{0} \) is Equation (8). In this concrete case it is a pair of ordinary differential equations:\n\n\[ \left\lbrack \begin{array}{l} {\xi }_{1}^{\prime } \\ {\xi }_{2}^{\prime } \end{array}\right\rbrack = \frac{-1}{\Delta }\left\lbrack \begin{matrix} 2{\xi }_{2} + 6 & - {e}^{{\xi }_{2}} \\ 1 & \cos {\xi }_{1} \end{matrix}\right\rbrack \left\lbrack \begin{array}{l} a \\ b \end{array}\right\rbrack = \frac{-1}{\Delta }\left\lbrack \begin{matrix} {2a}\left( {{\xi }_{2} + 3}\right) - b{e}^{{\xi }_{2}} \\ a + b\cos {\xi }_{1} \end{matrix}\right\rbrack \]\n\nWhen this system was integrated numerically on the interval \( 0 \leq t \leq 1 \), the terminal value of \( x \) (at \( t = 1 \) ) was close to \( \left( {{12},1}\right) \) . In order to find a more accurate solution, we can use Newton's iteration starting at the point produced by the homotopy method. 
The Newton iteration replaces any approximate root \( x \) by \( x - \delta \), the correction \( \delta \) being defined by\n\n\[ \delta = {\left\lbrack {f}^{\prime }\left( x\right) \right\rbrack }^{-1}f\left( x\right) \]\n\n(These matters are the subject of Section 3.3, beginning at page 125.) In the current example, the vector \( \delta \) is\n\n\[ \left\lbrack \begin{array}{l} {\delta }_{1} \\ {\delta }_{2} \end{array}\right\rbrack = \frac{1}{\Delta }\left\lbrack \begin{matrix} 2{\xi }_{2} + 6 & - {e}^{{\xi }_{2}} \\ 1 & \cos {\xi }_{1} \end{matrix}\right\rbrack \left\lbrack \begin{matrix} \sin {\xi }_{1} + {e}^{{\xi }_{2}} - 3 \\ {\left( {\xi }_{2} + 3\right) }^{2} - {\xi }_{1} - 4 \end{matrix}\right\rbrack \]\n\nFive steps of the Newton iteration produced these results: \n\nThe curve \( \{ x\left( t\right) : 0 \leq t \leq 1\} \) is shown in Figure 4.2.
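The Newton polish is easy to reproduce. A sketch starting from the homotopy terminal value \( (12, 1) \) reported above (the text does not state the root itself, so we check only that the residual \( f(x) \) is driven to zero):

```python
import numpy as np

def f(x):
    x1, x2 = x
    return np.array([np.sin(x1) + np.exp(x2) - 3.0,
                     (x2 + 3.0) ** 2 - x1 - 4.0])

def fprime(x):
    x1, x2 = x
    return np.array([[np.cos(x1), np.exp(x2)],
                     [-1.0, 2.0 * x2 + 6.0]])

x = np.array([12.0, 1.0])                 # terminal value of the homotopy path
for _ in range(8):
    # Newton step: x <- x - [f'(x)]^{-1} f(x)
    x = x - np.linalg.solve(fprime(x), f(x))

residual = float(np.linalg.norm(f(x)))
```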
|
Yes
|
Example 2. Let \( f \) be the mapping\n\n\[ f\left( x\right) = \left\lbrack \begin{matrix} {\xi }_{1}^{2} - 3{\xi }_{2}^{2} + 3 \\ {\xi }_{1}{\xi }_{2} + 6 \end{matrix}\right\rbrack \;x = \left( {{\xi }_{1},{\xi }_{2}}\right) \in {\mathbb{R}}^{2} \]\n\nWe take the starting point \( {x}_{0} = \left( {1,1}\right) \) and use the homotopy of Equation (4).
|
Then\n\n\[ h\left( {t, x}\right) = \left\lbrack \begin{matrix} {\xi }_{1}^{2} - 3{\xi }_{2}^{2} + 2 + t \\ {\xi }_{1}{\xi }_{2} - 1 + {7t} \end{matrix}\right\rbrack \]\n\nThe differential equation (9) is given by\n\n(11)\n\n\[ \left\lbrack \begin{matrix} 1 & 2{\xi }_{1} & - 6{\xi }_{2} \\ 7 & {\xi }_{2} & {\xi }_{1} \end{matrix}\right\rbrack \left\lbrack \begin{matrix} {t}^{\prime } \\ {\xi }_{1}^{\prime } \\ {\xi }_{2}^{\prime } \end{matrix}\right\rbrack = \left\lbrack \begin{array}{l} 0 \\ 0 \end{array}\right\rbrack \]\n\nIt is preferable to use Equation (10), however, and to write the differential equations in the form\n\n(12)\n\n\[ \left\{ \begin{array}{ll} {t}^{\prime } = - \left( {2{\xi }_{1}^{2} + 6{\xi }_{2}^{2}}\right) & t\left( 0\right) = 0 \\ {\xi }_{1}^{\prime } = {\xi }_{1} + {42}{\xi }_{2} & {\xi }_{1}\left( 0\right) = 1 \\ {\xi }_{2}^{\prime } = - \left( {{\xi }_{2} - {14}{\xi }_{1}}\right) & {\xi }_{2}\left( 0\right) = 1 \end{array}\right. \]\n\nThe derivatives in this system are with respect to \( s \) . Since we want \( t \) to run from 0 to 1, it is clear (from the equation governing \( t \) ) that we must let \( s \) proceed to the left. Alternatively, we can appeal to the homogeneity in the system, and simply change the signs on the right side of (12). Following the latter course, and performing a numerical integration, we arrive at these two points:\n\n\[ s = {.087},\;t = {.969},\;{\xi }_{1} = - {2.94},\;{\xi }_{2} = {1.97} \]\n\n\[ s = {.088},\;t = {1.010},\;{\xi }_{1} = - {3.02},\;{\xi }_{2} = {2.01} \]\n\nEither of these can be used to start a Newton iteration, as was done in Example 1. The path generated by this homotopy is shown in Figure 4.3.
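A sketch of the whole procedure for this example: follow the sign-reversed system (12) with a Runge-Kutta integrator until \( t \) reaches 1, then polish with Newton's iteration. (One can check by hand that \( (-3, 2) \) is an exact root: \( 9 - 12 + 3 = 0 \) and \( (-3)(2) + 6 = 0 \).)

```python
import numpy as np

def f(x):
    x1, x2 = x
    return np.array([x1**2 - 3.0 * x2**2 + 3.0, x1 * x2 + 6.0])

def tangent(v):
    # right side of the system (12) with the signs reversed
    t, x1, x2 = v
    return np.array([2.0 * x1**2 + 6.0 * x2**2,
                     -(x1 + 42.0 * x2),
                     x2 - 14.0 * x1])

v = np.array([0.0, 1.0, 1.0])             # (t, xi_1, xi_2) at s = 0
ds = 1.0e-5
for _ in range(200000):                   # t' > 0, so t increases until it reaches 1
    if v[0] >= 1.0:
        break
    k1 = tangent(v)                       # classical fourth-order Runge-Kutta step
    k2 = tangent(v + 0.5 * ds * k1)
    k3 = tangent(v + 0.5 * ds * k2)
    k4 = tangent(v + ds * k3)
    v = v + (ds / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

x = v[1:]                                 # should be near (-3, 2)
for _ in range(10):                       # Newton polish
    x1, x2 = x
    J = np.array([[2.0 * x1, -6.0 * x2], [x2, x1]])
    x = x - np.linalg.solve(J, f(x))
```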
|
Yes
|
For any polynomial \( P \), the function \( f : \mathbb{R} \rightarrow \mathbb{R} \) defined by \[ f\left( x\right) = \left\{ \begin{array}{ll} P\left( {1/x}\right) {e}^{-1/x} & x > 0 \\ 0 & x \leq 0 \end{array}\right. \] is in \( {C}^{\infty }\left( \mathbb{R}\right) \) .
|
First we show that \( f \) is continuous. The only questionable point is \( x = 0 \) . We have \[ \mathop{\lim }\limits_{{x \downarrow 0}}f\left( x\right) = \mathop{\lim }\limits_{{x \downarrow 0}}\frac{P\left( {1/x}\right) }{\exp \left( {1/x}\right) } = \mathop{\lim }\limits_{{t \uparrow \infty }}\frac{P\left( t\right) }{\exp \left( t\right) } \] By using L’Hôpital’s rule repeatedly on this last limit, we see that its value is 0 . Hence \( f \) is continuous. Differentiation of \( f \) gives \[ {f}^{\prime }\left( x\right) = \left\{ \begin{array}{ll} Q\left( {1/x}\right) {e}^{-1/x} & x > 0 \\ 0 & x < 0 \end{array}\right. \] where \( Q\left( x\right) = {x}^{2}\left\lbrack {P\left( x\right) - {P}^{\prime }\left( x\right) }\right\rbrack \) . By the first part of the proof, \( \mathop{\lim }\limits_{{x \downarrow 0}}{f}^{\prime }\left( x\right) = 0 \) . It remains only to be proved that \( {f}^{\prime }\left( 0\right) = 0 \) . We have, by the mean value theorem, \[ {f}^{\prime }\left( 0\right) = \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{f\left( h\right) - f\left( 0\right) }{h} = \mathop{\lim }\limits_{{h \rightarrow 0}}{f}^{\prime }\left( {\xi \left( h\right) }\right) = 0 \] where \( \xi \left( h\right) \) is strictly between 0 and \( h \) . (Note that \( h \) can be positive or negative in this argument.) We have shown that \[ {f}^{\prime }\left( x\right) = \left\{ \begin{array}{ll} Q\left( {1/x}\right) {e}^{-1/x} & x > 0 \\ 0 & x \leq 0 \end{array}\right. \] This has the same form as \( f \), and therefore \( {f}^{\prime } \) is continuous. The argument can be repeated indefinitely. 
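The formula for \( Q \) can be checked numerically. A minimal sketch with the particular choice \( P = 1 \), so that \( Q\left( t\right) = {t}^{2} \):

```python
import math

def f(x):
    # f(x) = P(1/x) e^{-1/x} for x > 0, with P = 1; f(x) = 0 for x <= 0
    return math.exp(-1.0/x) if x > 0 else 0.0

def fprime(x):
    # claimed derivative: Q(1/x) e^{-1/x} with Q(t) = t^2 [P(t) - P'(t)] = t^2
    return math.exp(-1.0/x)/x**2 if x > 0 else 0.0

# central differences confirm the formula away from 0
for x in (0.5, 1.0, 2.0):
    h = 1e-6
    fd = (f(x + h) - f(x - h))/(2*h)
    assert abs(fd - fprime(x)) < 1e-6

# near 0 the function is extremely flat, consistent with f'(0) = 0
assert f(1e-3) == 0.0  # e^{-1000} underflows to zero
```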
The reader should observe that our argument requires the following version of the mean value theorem: If \( g \) is continuous on \( \left\lbrack {a, b}\right\rbrack \) and differentiable on \( \left( {a, b}\right) \), then for some \( \xi \) in \( \left( {a, b}\right) \),\n\n\[ {g}^{\prime }\left( \xi \right) = \left\lbrack {g\left( b\right) - g\left( a\right) }\right\rbrack /\left( {b - a}\right) \]
|
Yes
|
Lemma 2. The function \( \rho \) defined in Equation (1) belongs to \( \mathbf{D} \) .
|
Proof. The function \( f \) in the preceding lemma (with \( P\left( x\right) = 1 \) ) has the property that \( \rho \left( x\right) = {cf}\left( {1 - {\left| x\right| }^{2}}\right) \) . Thus \( \rho = {cf} \circ g \), where \( g\left( x\right) = 1 - {\left| x\right| }^{2} \) ; clearly \( g \in {C}^{\infty }\left( {\mathbb{R}}^{n}\right) \) . By the chain rule, \( {D}^{\alpha }\rho \) can be expressed as a sum of products of ordinary derivatives of \( f \) with various partial derivatives of \( g \) . Since these are all continuous, \( {D}^{\alpha }\rho \in C\left( {\mathbb{R}}^{n}\right) \) for all multi-indices \( \alpha \), and \( \rho \in {C}^{\infty }\left( {\mathbb{R}}^{n}\right) \) . Since \( \rho \) vanishes outside the closed unit ball, it has compact support, and so \( \rho \in \mathbf{D} \) .
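For concreteness, here is a sketch of \( \rho \) under the assumption that Equation (1) is the standard bump \( \rho \left( x\right) = c\exp \left\lbrack {-1/\left( {1 - {\left| x\right| }^{2}}\right) }\right\rbrack \) for \( \left| x\right| < 1 \) and \( \rho \left( x\right) = 0 \) otherwise:

```python
import math

def rho(x, c=1.0):
    # assumed form of Equation (1): c*exp(-1/(1-|x|^2)) on the open
    # unit ball of R^n and 0 outside it; x is a tuple of coordinates
    r2 = sum(t*t for t in x)
    return c*math.exp(-1.0/(1.0 - r2)) if r2 < 1.0 else 0.0

print(rho((0.0, 0.0)))   # e^{-1} at the center
print(rho((1.0, 0.0)))   # 0 on the boundary sphere
print(rho((0.99, 0.0)))  # already tiny: every derivative vanishes at |x| = 1
```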
|
Yes
|
Theorem 1. For every multi-index \( \alpha ,{D}^{\alpha } \) is a continuous linear transformation of \( \mathbf{D} \) into \( \mathbf{D} \) .
|
Proof. The linearity is a familiar feature of differentiation. For the continuity, it suffices to prove continuity at 0 because \( {D}^{\alpha } \) is linear. Thus, suppose that \( {\phi }_{j} \in \mathcal{D} \) and \( {\phi }_{j} \rightarrow 0 \) . Let \( K \) be a compact set containing the supports of all the functions \( {\phi }_{j} \) . Then \( {D}^{\beta }{\phi }_{j}\left( x\right) \rightarrow 0 \) uniformly on \( K \) for every multi-index \( \beta \) . Consequently, \( {D}^{\beta }{D}^{\alpha }{\phi }_{j}\left( x\right) \rightarrow 0 \) uniformly for each \( \beta \), and so \( {D}^{\alpha }{\phi }_{j} \rightarrow 0 \), by the definition of convergence in \( \mathcal{D} \) .
|
Yes
|
Example 1. A Dirac distribution \( {\delta }_{\xi } \) is defined by selecting \( \xi \in {\mathbb{R}}^{n} \) and writing\n\n(3)\n\n\[ \n{\delta }_{\xi }\left( \phi \right) = \phi \left( \xi \right) \;\left( {\phi \in \mathfrak{D}}\right) \n\]
|
It is a distribution, because firstly, it is linear:\n\n\[ \n{\delta }_{\xi }\left( {{\lambda }_{1}{\phi }_{1} + {\lambda }_{2}{\phi }_{2}}\right) = \left( {{\lambda }_{1}{\phi }_{1} + {\lambda }_{2}{\phi }_{2}}\right) \left( \xi \right) = {\lambda }_{1}{\phi }_{1}\left( \xi \right) + {\lambda }_{2}{\phi }_{2}\left( \xi \right) = {\lambda }_{1}{\delta }_{\xi }\left( {\phi }_{1}\right) + {\lambda }_{2}{\delta }_{\xi }\left( {\phi }_{2}\right) \n\]\n\nSecondly, it is continuous because the condition \( {\phi }_{j} \rightarrow 0 \) implies that \( {\phi }_{j}\left( \xi \right) \rightarrow 0 \) . If we write \( \delta \) without a subscript it refers to evaluation at 0 ; i.e., \( \delta = {\delta }_{0} \) .
|
Yes
|
Example 3. Let \( f : {\mathbb{R}}^{n} \rightarrow \mathbb{R} \) be continuous. With \( f \) we associate a distribution \( \widetilde{f} \) by means of the definition\n\n(5)\n\n\[ \widetilde{f}\left( \phi \right) = \int f\left( x\right) \phi \left( x\right) {dx}\;\left( {\phi \in \mathcal{D}}\right) \]
|
The linearity of \( \widetilde{f} \) is obvious. For the continuity, we observe that if \( {\phi }_{j} \rightarrow 0 \), then there is a compact \( K \) containing the supports of the \( {\phi }_{j} \) . Then we have\n\n\[ \left| {\widetilde{f}\left( {\phi }_{j}\right) }\right| = \left| {{\int }_{K}f\left( x\right) {\phi }_{j}\left( x\right) {dx}}\right| \leq \mathop{\sup }\limits_{x}\left| {{\phi }_{j}\left( x\right) }\right| {\int }_{K}\left| {f\left( y\right) }\right| {dy} \rightarrow 0 \]\n\nbecause \( {\phi }_{j} \rightarrow 0 \) entails \( \mathop{\sup }\limits_{x}\left| {{\phi }_{j}\left( x\right) }\right| \rightarrow 0 \) .
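A quick numerical illustration (`bump` and `f_tilde` are our own illustrative names, and a midpoint sum stands in for the exact integral):

```python
import math

def bump(x):
    # a test function supported on [-1, 1]
    return math.exp(-1.0/(1.0 - x*x)) if abs(x) < 1 else 0.0

def f_tilde(f, phi, n=4000):
    # midpoint approximation of the integral of f(x) phi(x) over supp(phi) = [-1, 1]
    h = 2.0/n
    return h*sum(f(-1 + (k + 0.5)*h)*phi(-1 + (k + 0.5)*h) for k in range(n))

a = f_tilde(math.cos, bump)
b = f_tilde(math.cos, lambda x: 2.0*bump(x))
print(abs(b - 2.0*a))  # linearity holds up to rounding
```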
|
Yes
|
Theorem 2. If \( f \in C\left( {\mathbb{R}}^{n}\right) \), then \( \widetilde{f} \), as defined in Equation (5), is a distribution. The map \( f \mapsto \widetilde{f} \) is linear and injective from \( C\left( {\mathbb{R}}^{n}\right) \) into \( {\mathcal{D}}^{\prime } \) .
|
Proof. We have already seen that \( \widetilde{f} \) is a distribution. The linearity of the mapping \( f \mapsto \widetilde{f} \) follows from the equation\n\n\[ {\left( {\alpha }_{1}{f}_{1} + {\alpha }_{2}{f}_{2}\right) }^{ \sim }\left( \phi \right) = \int \left( {{\alpha }_{1}{f}_{1} + {\alpha }_{2}{f}_{2}}\right) \phi = {\alpha }_{1}\int {f}_{1}\phi + {\alpha }_{2}\int {f}_{2}\phi \]\n\n\[ = {\alpha }_{1}{\widetilde{f}}_{1}\left( \phi \right) + {\alpha }_{2}{\widetilde{f}}_{2}\left( \phi \right) = \left( {{\alpha }_{1}{\widetilde{f}}_{1} + {\alpha }_{2}{\widetilde{f}}_{2}}\right) \left( \phi \right) \]\n\nFor the injective property it suffices to prove that if \( f \neq 0 \), then \( \widetilde{f} \neq 0 \) . Supposing that \( f \neq 0 \), let \( \xi \) be a point where \( f\left( \xi \right) \neq 0 \) . Select \( j \) such that \( f\left( x\right) \) is of one sign in the ball around \( \xi \) of radius \( 1/j \) . Then \( {\rho }_{j}\left( {x - \xi }\right) \), as defined in Equation (2), is positive in this same ball about \( \xi \) and vanishes elsewhere. Hence \( \int f\left( x\right) {\rho }_{j}\left( {x - \xi }\right) {dx} \neq 0 \) . This means that \( \widetilde{f}\left( \phi \right) \neq 0 \) if \( \phi \left( x\right) = {\rho }_{j}\left( {x - \xi }\right) \) .
|
Yes
|
Theorem 4. Let \( \mu \) be a positive Borel measure on \( {\mathbb{R}}^{n} \) such that \( \mu \left( K\right) < \infty \) for each compact set \( K \) in \( {\mathbb{R}}^{n} \) . Then \( \mu \) induces a distribution \( T \) by the formula\n\n\[ T\left( \phi \right) = {\int }_{{\mathbb{R}}^{n}}\phi \left( x\right) {d\mu }\left( x\right) \;\left( {\phi \in \mathcal{D}}\right) \]
|
Proof. The linearity is obvious. For the continuity of \( T \), let \( {\phi }_{j} \in \mathcal{D} \) and \( {\phi }_{j} \rightarrow 0 \) . Then there is a compact set \( K \) containing the supports of all \( {\phi }_{j} \) . Consequently,\n\n\[ \left| {T\left( {\phi }_{j}\right) }\right| \leq {\int }_{K}\left| {{\phi }_{j}\left( x\right) }\right| {d\mu }\left( x\right) \leq \mathop{\sup }\limits_{{y \in K}}\left| {{\phi }_{j}\left( y\right) }\right| {\int }_{K}{d\mu }\left( x\right) \]\n\n\[ = \mu \left( K\right) \mathop{\sup }\limits_{{y \in K}}\left| {{\phi }_{j}\left( y\right) }\right| \rightarrow 0 \]
|
Yes
|
Let \( \widetilde{H} \) be the Heaviside distribution (Example 2, page 250), and let \( \delta \) be the Dirac distribution at 0 (Example 1, page 250). Then with \( n = 1 \) and \( \alpha = \left( 1\right) \), we have \( \partial \widetilde{H} = \delta \) .
|
Indeed, for any test function \( \phi \) ,\n\n\[ \left( {\partial \widetilde{H}}\right) \left( \phi \right) = - \widetilde{H}\left( {D\phi }\right) = - {\int }_{0}^{\infty }{\phi }^{\prime } = \phi \left( 0\right) - \mathop{\lim }\limits_{{x \rightarrow \infty }}\phi \left( x\right) = \phi \left( 0\right) = \delta \left( \phi \right) \]\n\nThe limit vanishes because \( \phi \) has compact support.
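A numerical check of this computation, with a concrete bump as the test function (the bump and the grid are our choices):

```python
import math

def phi(x):
    # a test function supported on [-1, 1]
    return math.exp(-1.0/(1.0 - x*x)) if abs(x) < 1 else 0.0

def dphi(x):
    # its derivative, in closed form
    return phi(x)*(-2.0*x/(1.0 - x*x)**2) if abs(x) < 1 else 0.0

# -(integral of phi' over [0, infinity)) should equal phi(0) = e^{-1}
n, h = 4000, 1.0/4000
val = -h*sum(dphi((k + 0.5)*h) for k in range(n))
print(val)  # approximately 0.3678794... = e^{-1}
```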
|
Yes
|
Example 3. What is the distribution derivative of the function \( f\left( x\right) = \left| x\right| \) ?
|
It is a distribution \( \widetilde{g} \), where \( g \) is a function such that for all test functions \( \phi \) ,\n\n\[ \n\int {g\phi } = - \int f{\phi }^{\prime } = - {\int }_{-\infty }^{0}\left( {-x}\right) {\phi }^{\prime }\left( x\right) {dx} - {\int }_{0}^{\infty }x{\phi }^{\prime }\left( x\right) {dx} \n\]\n\n\[ \n= {\left. x\phi \left( x\right) \right| }_{-\infty }^{0} - {\int }_{-\infty }^{0}\phi \left( x\right) {dx} - {\left. x\phi \left( x\right) \right| }_{0}^{\infty } + {\int }_{0}^{\infty }\phi \left( x\right) {dx} \n\]\n\n\[ \n= {\int }_{-\infty }^{0}\left( {-1}\right) \phi \left( x\right) {dx} + {\int }_{0}^{\infty }\left( {+1}\right) \phi \left( x\right) {dx} \n\]\n\nThus\n\n\[ \ng\left( x\right) = \left\{ \begin{array}{ll} - 1 & x < 0 \\ + 1 & x \geq 0 \end{array}\right\} = {2H}\left( x\right) - 1 \n\]\n\nWe say that \( {f}^{\prime } = g \) in the sense of distributions, or \( \partial \widetilde{f} = \widetilde{g} \) . Note particularly that \( f \) does not have a derivative at \( 0 \) in the classical sense.
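The identity \( -\int f{\phi }^{\prime } = \int {g\phi } \) can be confirmed numerically; the sketch uses an off-center bump so that neither integral vanishes by symmetry:

```python
import math

def phi(x):
    # a test function supported on [-0.7, 1.3] (a shifted bump)
    u = x - 0.3
    return math.exp(-1.0/(1.0 - u*u)) if abs(u) < 1 else 0.0

def dphi(x):
    u = x - 0.3
    return phi(x)*(-2.0*u/(1.0 - u*u)**2) if abs(u) < 1 else 0.0

def sgn(x):
    return -1.0 if x < 0 else 1.0

# midpoint sums over the support [-0.7, 1.3]
n, h = 8000, 2.0/8000
xs = [-0.7 + (k + 0.5)*h for k in range(n)]
lhs = -h*sum(abs(x)*dphi(x) for x in xs)   # -integral of |x| phi'(x)
rhs = h*sum(sgn(x)*phi(x) for x in xs)     # integral of g(x) phi(x)
print(lhs, rhs)  # the two agree
```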
|
Yes
|
What is the distribution derivative \( {f}^{\prime \prime } \) when \( f\left( x\right) = \left| x\right| \) ?
|
If we blindly use the techniques of classical calculus, we have from Examples 1 and \( 3,{f}^{\prime } = {2H} - 1 \) and \( {f}^{\prime \prime } = {2\delta } \) . This procedure is justified by the next theorem.
|
No
|
Theorem 1. The operators \( {\partial }^{\alpha } \) are linear from \( {\mathcal{D}}^{\prime } \) into \( {\mathcal{D}}^{\prime } \) . Furthermore, \( {\partial }^{\alpha }{\partial }^{\beta } = {\partial }^{\beta }{\partial }^{\alpha } = {\partial }^{\alpha + \beta } \) for any pair of multi-indices.
|
Proof. The linearity of \( {\partial }^{\alpha } \) is obvious from the definition, Equation (1). The commutative property rests upon a theorem of classical calculus that states that for any function \( f \) of two variables, if \( \frac{{\partial }^{2}f}{\partial x\partial y} \) and \( \frac{{\partial }^{2}f}{\partial y\partial x} \) exist and are continuous, then they are equal. Therefore, for any \( \phi \in \mathcal{D} \), we have \( {D}^{\alpha }{D}^{\beta }\phi = {D}^{\beta }{D}^{\alpha }\phi \) . Consequently, for an arbitrary distribution \( T \) we have\n\n\[ \n{\partial }^{\alpha }\left( {{\partial }^{\beta }T}\right) = {\left( -1\right) }^{\left| \alpha \right| }\left( {{\partial }^{\beta }T}\right) \circ {D}^{\alpha } = {\left( -1\right) }^{\left| \alpha \right| }{\left( -1\right) }^{\left| \beta \right| }T \circ {D}^{\beta } \circ {D}^{\alpha }\n\]\n\n\[ \n= {\left( -1\right) }^{\left| \beta \right| }{\left( -1\right) }^{\left| \alpha \right| }T \circ {D}^{\alpha } \circ {D}^{\beta }\n\]\n\n\[ \n= {\left( -1\right) }^{\left| \beta \right| }\left( {{\partial }^{\alpha }T}\right) \circ {D}^{\beta }\n\]\n\n\[ \n= {\partial }^{\beta }{\partial }^{\alpha }T\n\]
|
Yes
|
Theorem 3. Let \( n = 1 \), and let \( T \) be a distribution for which \( \partial T = 0 \). Then \( T \) is \( \widetilde{c} \) for some constant \( c \).
|
Proof. Adopt the notation of the preceding proof. The familiar equation\n\n\[ \phi \left( x\right) = \frac{d}{dx}{\int }_{-\infty }^{x}\phi \left( y\right) {dy} \]\n\nsays that \( \phi = {DB\phi } \), and this is valid for all \( \phi \in M \). Since \( {A\phi } \in M \) for all \( \phi \in \mathfrak{D} \), we have \( {A\phi } = {DBA\phi } \) for all \( \phi \in \mathfrak{D} \). Consequently, if \( \partial T = 0 \), then for all test functions\n\n\[ T\left( \phi \right) = T\left( {{A\phi } + \widetilde{1}\left( \phi \right) \psi }\right) = T\left( {DBA\phi }\right) + \widetilde{1}\left( \phi \right) T\left( \psi \right) \]\n\n\[ = - \left( {\partial T}\right) \left( {BA\phi }\right) + T\left( \psi \right) \widetilde{1}\left( \phi \right) \]\n\n\[ = T\left( \psi \right) \widetilde{1}\left( \phi \right) \]\n\nThus \( T = \widetilde{c} \), with \( c = T\left( \psi \right) \).
|
Yes
|
Theorem 4. If \( T \) is a distribution and \( K \) is a compact set in \( {\mathbb{R}}^{n} \) , then there exists an \( f \in C\left( {\mathbb{R}}^{n}\right) \) and a multi-index \( \alpha \) such that for all \( \phi \in \mathcal{D} \) whose supports are in \( K \) ,\n\n\[ T\left( \phi \right) = \left( {{\partial }^{\alpha }\widetilde{f}}\right) \left( \phi \right) \]
|
Proof. For the proof, consult [Ru1], page 152.
|
No
|
Theorem 1. For every multi-index \( \alpha ,{\partial }^{\alpha } \) is a continuous linear map of \( {\mathcal{D}}^{\prime } \) into \( {\mathcal{D}}^{\prime } \) .
|
Proof. Let \( {T}_{j} \rightarrow 0 \) . In order to prove that \( {\partial }^{\alpha }{T}_{j} \rightarrow 0 \), we select an arbitrary test function \( \phi \), and attempt to prove that \( \left( {{\partial }^{\alpha }{T}_{j}}\right) \left( \phi \right) \rightarrow 0 \) . This means that \( {\left( -1\right) }^{\left| \alpha \right| }{T}_{j}\left( {{D}^{\alpha }\phi }\right) \rightarrow 0 \), which is certainly true, because \( {D}^{\alpha }\phi \) is a test function. See Theorem 1 in Section 5.2.
|
No
|
Theorem 2. If a sequence of distributions \( \left\lbrack {T}_{j}\right\rbrack \) has the property that \( \left\lbrack {{T}_{j}\left( \phi \right) }\right\rbrack \) is convergent for each test function \( \phi \), then the equation \( T\left( \phi \right) = \mathop{\lim }\limits_{j}{T}_{j}\left( \phi \right) \) defines a distribution \( T \), and \( {T}_{j} \rightarrow T \) .
|
The theorem asserts that for a sequence of distributions \( {T}_{j} \), if \( \mathop{\lim }\limits_{j}{T}_{j}\left( \phi \right) \) exists in \( \mathbb{R} \) for every test function \( \phi \), then the equation \[ T\left( \phi \right) = \mathop{\lim }\limits_{j}{T}_{j}\left( \phi \right) \] defines a distribution. There is no question about \( T \) being well-defined. Its linearity is also straightforward, since the limit laws give \[ T\left( {{\phi }_{1} + {\phi }_{2}}\right) = \mathop{\lim }\limits_{j}{T}_{j}\left( {{\phi }_{1} + {\phi }_{2}}\right) = \mathop{\lim }\limits_{j}{T}_{j}\left( {\phi }_{1}\right) + \mathop{\lim }\limits_{j}{T}_{j}\left( {\phi }_{2}\right) = T\left( {\phi }_{1}\right) + T\left( {\phi }_{2}\right) \] The only real issue is whether \( T \) is continuous, and the proof of this requires some topological vector space theory beyond the scope of this chapter.
|
No
|
Corollary 2. If \( \sum {T}_{j} \) is a convergent series of distributions, then for any multi-index \( \alpha ,{\partial }^{\alpha }\sum {T}_{j} = \sum {\partial }^{\alpha }{T}_{j} \) .
|
Proof. By Theorem \( 1,{\partial }^{\alpha } \) is continuous. Hence\n\n\[ \n{\partial }^{\alpha }\left( {\mathop{\sum }\limits_{{j = 1}}^{\infty }{T}_{j}}\right) = {\partial }^{\alpha }\left( {\mathop{\lim }\limits_{{m \rightarrow \infty }}\mathop{\sum }\limits_{{j = 1}}^{m}{T}_{j}}\right) = \mathop{\lim }\limits_{{m \rightarrow \infty }}\left( {{\partial }^{\alpha }\mathop{\sum }\limits_{{j = 1}}^{m}{T}_{j}}\right) \n\]\n\n\[ \n= \mathop{\lim }\limits_{{m \rightarrow \infty }}\mathop{\sum }\limits_{{j = 1}}^{m}{\partial }^{\alpha }{T}_{j} = \mathop{\sum }\limits_{{j = 1}}^{\infty }{\partial }^{\alpha }{T}_{j} \n\]
|
Yes
|
Theorem 3. Let \( f,{f}_{1},{f}_{2},\ldots \) belong to \( {L}_{\text{loc }}^{1}\left( {\mathbb{R}}^{n}\right) \), and suppose that \( {f}_{j} \rightarrow f \) pointwise almost everywhere. If there is an element \( g \in {L}_{\text{loc }}^{1}\left( {\mathbb{R}}^{n}\right) \) such that \( \left| {f}_{j}\right| \leq g \), then \( {\widetilde{f}}_{j} \rightarrow \widetilde{f} \) in \( {\mathcal{D}}^{\prime } \) .
|
Proof. The question is whether\n\n(1)\n\n\[\n\int {f}_{j}\phi \rightarrow \int {f\phi }\n\]\n\nfor all test functions \( \phi \) . We have \( {f}_{j}\phi \in {L}^{1}\left( K\right) \) if \( K \) is the support of \( \phi \) . Furthermore, \( \left| {{f}_{j}\phi }\right| \leq g\left| \phi \right| \) and \( \left( {{f}_{j}\phi }\right) \left( x\right) \rightarrow \left( {f\phi }\right) \left( x\right) \) almost everywhere. Hence by the Lebesgue Dominated Convergence Theorem (Section 8.6, page 406), Equation (1) is valid.
|
Yes
|
Theorem 4. Let \( \left\lbrack {f}_{j}\right\rbrack \) be a sequence of nonnegative functions in \( {L}_{\text{loc }}^{1}\left( {\mathbb{R}}^{n}\right) \) such that \( \int {f}_{j} = 1 \) for each \( j \) and such that\n\n\[ \mathop{\lim }\limits_{{j \rightarrow \infty }}{\int }_{\left| x\right| \geq r}{f}_{j} = 0 \]\n\nfor all positive \( r \) . Then \( {\widetilde{f}}_{j} \rightarrow \delta \) (the Dirac distribution).
|
Proof. Let \( \phi \in \mathcal{D} \) and put \( \psi = \phi - \phi \left( 0\right) \) . Let \( \varepsilon > 0 \), and select \( r > 0 \) so that \( \left| {\psi \left( x\right) }\right| < \varepsilon \) when \( \left| x\right| < r \) . Then\n\n\[ \left| {\int {f}_{j}\phi - \phi \left( 0\right) }\right| = \left| {\int {f}_{j}\left\lbrack {\phi - \phi \left( 0\right) }\right\rbrack }\right| = \left| {\int {f}_{j}\psi }\right| \leq \int \left| {{f}_{j}\psi }\right| \]\n\n\[ \leq {\int }_{\left| x\right| < r}\left| {{f}_{j}\psi }\right| + {\int }_{\left| x\right| \geq r}\left| {{f}_{j}\psi }\right| \]\n\n\[ \leq \varepsilon \int \left| {f}_{j}\right| + \max \left| {\psi \left( x\right) }\right| {\int }_{\left| x\right| \geq r}{f}_{j} \]\n\nTaking the limit as \( j \rightarrow \infty \), we obtain\n\n\[ \left| {\mathop{\lim }\limits_{j}{\widetilde{f}}_{j}\left( \phi \right) - \delta \left( \phi \right) }\right| \leq \varepsilon \]\n\nSince \( \varepsilon \) was arbitrary, \( \mathop{\lim }\limits_{j}{\widetilde{f}}_{j}\left( \phi \right) = \delta \left( \phi \right) \).
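The theorem is easy to test numerically. A sketch with the Gaussians \( {f}_{j}\left( x\right) = \sqrt{j/\pi }{e}^{-j{x}^{2}} \), which are nonnegative, integrate to 1, and concentrate at 0 as \( j \rightarrow \infty \):

```python
import math

def phi(x):
    # a fixed test function supported on [-1, 1]
    return math.exp(-1.0/(1.0 - x*x)) if abs(x) < 1 else 0.0

def pair(j, n=40000):
    # midpoint approximation of the integral of f_j(x) phi(x) over [-1, 1]
    h = 2.0/n
    s = 0.0
    for k in range(n):
        x = -1.0 + (k + 0.5)*h
        s += math.sqrt(j/math.pi)*math.exp(-j*x*x)*phi(x)
    return h*s

errs = [abs(pair(j) - phi(0.0)) for j in (10, 100, 1000)]
print(errs)  # decreasing toward 0
```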
|
Yes
|
Theorem 1 The set of monomials \( x \mapsto {x}^{\alpha } \) on \( {\mathbb{R}}^{n} \) is linearly independent.
|
Proof. If \( n = 1 \), the monomials are the elementary functions \( {x}_{1} \mapsto {x}_{1}^{j} \) for \( j = 0,1,2,\ldots \) They form a linearly independent set, because a nontrivial linear combination \( \mathop{\sum }\limits_{{j = 0}}^{m}{c}_{j}{x}_{1}^{j} \) cannot vanish as a function. (Indeed, it can have at most \( m \) zeros.)\n\nSuppose now that our assertion has been proved for dimension \( n - 1 \) . Let \( x = \left( {{x}_{1},\ldots ,{x}_{n}}\right) \) and \( \alpha = \left( {{\alpha }_{1},\ldots ,{\alpha }_{n}}\right) \) . Suppose that \( \mathop{\sum }\limits_{{\alpha \in J}}{c}_{\alpha }{x}^{\alpha } = 0 \), where the sum is over a finite set \( J \subset {\mathbb{Z}}_{ + }^{n} \) . Put\n\n\[{J}_{k} = \left\{ {\alpha \in J : {\alpha }_{1} = k}\right\}\]\n\nThen for some \( m, J = {J}_{0} \cup \cdots \cup {J}_{m} \), and we can write\n\n\[0 = \mathop{\sum }\limits_{{k = 0}}^{m}\mathop{\sum }\limits_{{\alpha \in {J}_{k}}}{c}_{\alpha }{x}_{1}^{{\alpha }_{1}}\cdots {x}_{n}^{{\alpha }_{n}} = \mathop{\sum }\limits_{{k = 0}}^{m}{x}_{1}^{k}\mathop{\sum }\limits_{{\alpha \in {J}_{k}}}{c}_{\alpha }{x}_{2}^{{\alpha }_{2}}\cdots {x}_{n}^{{\alpha }_{n}}\]\n\nBy the one-variable case, we infer that for \( k = 0,\ldots, m \),\n\n\[ \mathop{\sum }\limits_{{\alpha \in {J}_{k}}}{c}_{\alpha }{x}_{2}^{{\alpha }_{2}}\cdots {x}_{n}^{{\alpha }_{n}} = 0 \]\n\nNote that as \( \alpha \) runs over \( {J}_{k} \), the multi-indices \( \left( {{\alpha }_{2},\ldots ,{\alpha }_{n}}\right) \) are all distinct. By the induction hypothesis, we then infer that for all \( \alpha \in {J}_{k},{c}_{\alpha } = 0 \) . Since \( k \) runs from 0 to \( m \), all \( {c}_{\alpha } \) are 0 .
|
Yes
|
Theorem 2 The dimension of \( {\Pi }_{m}\left( {\mathbb{R}}^{n}\right) \) is \( \left( \begin{matrix} m + n \\ n \end{matrix}\right) \) .
|
Proof. The preceding theorem implies that a basis for \( {\Pi }_{m}\left( {\mathbb{R}}^{n}\right) \) is \( \left\{ {x \mapsto {x}^{\alpha } : \left| \alpha \right| \leq m}\right\} \) . Here \( x \in {\mathbb{R}}^{n} \) . Using \( \# \) to denote the number of elements in a set, we have only to prove\n\n\[ \n\# \left\{ {\alpha \in {\mathbb{Z}}_{ + }^{n} : \left| \alpha \right| \leq m}\right\} = \left( \begin{matrix} m + n \\ n \end{matrix}\right)\n\]\n\nWe use induction on \( n \) . For \( n = 1 \), the formula is correct, since\n\n\[ \n\# \left\{ {\alpha \in {\mathbb{Z}}_{ + } : \alpha \leq m}\right\} = m + 1 = \left( \begin{matrix} m + 1 \\ 1 \end{matrix}\right)\n\]\n\nAssume that the formula is correct for a particular \( n \) . For the next case we write\n\n\[ \n\# \left\{ {\alpha \in {\mathbb{Z}}_{ + }^{n + 1} : \left| \alpha \right| \leq m}\right\} = \# \mathop{\bigcup }\limits_{{k = 0}}^{m}\left\{ {\alpha \in {\mathbb{Z}}_{ + }^{n + 1} : {\alpha }_{n + 1} = k,\mathop{\sum }\limits_{{i = 1}}^{n}{\alpha }_{i} \leq m - k}\right\}\n\]\n\n\[ \n= \mathop{\sum }\limits_{{k = 0}}^{m}\# \left\{ {\alpha \in {\mathbb{Z}}_{ + }^{n} : \left| \alpha \right| \leq m - k}\right\} = \mathop{\sum }\limits_{{k = 0}}^{m}\left( \begin{matrix} m - k + n \\ n \end{matrix}\right)\n\]\n\n\[ \n= \mathop{\sum }\limits_{{k = 0}}^{m}\left( \begin{matrix} k + n \\ n \end{matrix}\right) = \left( \begin{matrix} m + n + 1 \\ n + 1 \end{matrix}\right)\n\]\n\nIn the last step, we applied Lemma 1.
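The count is easy to confirm by brute-force enumeration (`dim_Pi` is our own name):

```python
from math import comb
from itertools import product

def dim_Pi(m, n):
    # dimension of Pi_m(R^n): the number of multi-indices with |alpha| <= m
    return sum(1 for a in product(range(m + 1), repeat=n) if sum(a) <= m)

for m in range(6):
    for n in range(1, 4):
        assert dim_Pi(m, n) == comb(m + n, n)
print(dim_Pi(3, 2))  # 10 = C(5, 2)
```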
|
Yes
|
Theorem 4. Let \( D \) be a simple partial derivative, say \( D = \frac{\partial }{\partial {x}_{i}} \) , and let \( \partial \) be the corresponding distribution derivative. If \( T \in {\mathcal{D}}^{\prime } \) and \( f \in {C}^{\infty }\left( {\mathbb{R}}^{n}\right) \), then \[ \partial \left( {fT}\right) = {Df} \cdot T + f \cdot \partial T \]
|
Proof. \[ \left( {{Df} \cdot T + f \cdot \partial T}\right) \left( \phi \right) = T\left( {{Df} \cdot \phi }\right) + \partial T\left( {f \cdot \phi }\right) = T\left( {{Df} \cdot \phi }\right) - T\left( {D\left( {f\phi }\right) }\right) \] \[ = T\left( {{Df} \cdot \phi }\right) - T\left( {{Df} \cdot \phi + f \cdot {D\phi }}\right) \] \[ = - T\left( {f \cdot {D\phi }}\right) = - \left( {fT}\right) \left( {D\phi }\right) = \left( {\partial \left( {fT}\right) }\right) \left( \phi \right) \]
|
Yes
|
Theorem 5. Let \( n = 1 \), let \( T \) be a distribution, and let \( u \) be an element of \( {C}^{\infty }\left( \mathbb{R}\right) \) . If \( \partial T + {uT} = \widetilde{f} \), for some \( f \) in \( C\left( \mathbb{R}\right) \), then \( T = \widetilde{g} \) for some \( g \) in \( {C}^{1}\left( \mathbb{R}\right) \), and \( {g}^{\prime } + {ug} = f \) .
|
Proof. If \( u = 0 \), then \( \partial T = \widetilde{f} \) . Write \( f = {h}^{\prime } \), where\n\n\[ h\left( x\right) = {\int }_{a}^{x}f\left( y\right) {dy} \]\n\nThen \( h \in {C}^{1}\left( \mathbb{R}\right) \) . From the equation\n\n\[ \partial \left( {T - \widetilde{h}}\right) = \partial T - \widetilde{f} = 0 \]\n\nwe conclude that \( T - \widetilde{h} = \widetilde{c} \) for some constant \( c \) . (See Theorem 3 in Section 5.2, page 256.) Hence \( T = \widetilde{h + c} \) .\n\nIf \( u \) is not zero, let \( v = \exp \int {udx} \) . Then \( {v}^{\prime } = {vu} \) and \( v \in {C}^{\infty }\left( \mathbb{R}\right) \) . Then \( {vT} \) is well-defined, and by Theorem 4,\n\n\[ \partial \left( {vT}\right) = {v}^{\prime }T + v\partial T = v\left( {{uT} + \partial T}\right) = v\widetilde{f} = \widetilde{vf} \]\n\nBy the first part of the proof, we have \( {vT} = \widetilde{g} \) for some \( g \in {C}^{1}\left( \mathbb{R}\right) \) . Hence \( T = \widetilde{g/v} \) . It is easily verified that \( {\left( g/v\right) }^{\prime } + u\left( {g/v}\right) = f \) .
|
Yes
|
Theorem 6. If \( \phi \in \mathcal{D} \), then\n\n\[ \n{\int }_{{\mathbb{R}}^{n}}\phi \left( x\right) {dx} = \mathop{\lim }\limits_{{h \downarrow 0}}{h}^{n}\mathop{\sum }\limits_{{\alpha \in {\mathbb{Z}}^{n}}}\phi \left( {h\alpha }\right)\n\]
|
Proof. The right side is just the limit of Riemann sums for the integral. In the case \( n = 2 \), we set up a lattice of points in \( {\mathbb{R}}^{2} \) . These points are of the form \( \left( {{ih},{jh}}\right) = h\left( {i, j}\right) = {h\alpha } \), where \( \alpha \) runs over the set of all multi-integers, having positive or negative entries. Each square created by four adjacent lattice points has area \( {h}^{2} \) .
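For a test function the lattice sums converge very rapidly, as a one-dimensional sketch shows (the bump and the step sizes are our choices):

```python
import math

def phi(x):
    # a test function supported on [-1, 1]
    return math.exp(-1.0/(1.0 - x*x)) if abs(x) < 1 else 0.0

def lattice_sum(h):
    # h * sum over all integers alpha of phi(h*alpha); the compact
    # support restricts the sum to |alpha| <= 1/h
    m = int(1.0/h) + 1
    return h*sum(phi(h*a) for a in range(-m, m + 1))

s1, s2 = lattice_sum(0.01), lattice_sum(0.001)
print(s1, s2)  # both approximate the integral of phi; they agree closely
```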
|
No
|
Lemma 1. For \( T \in {\mathcal{D}}^{\prime } \) and \( \phi \in \mathcal{D} \), \[ {E}_{x}\left( {T * \phi }\right) = \left( {{E}_{x}T}\right) * \phi = T * {E}_{x}\phi \]
|
Proof. Straightforward calculation, using some results in Problem 1, gives us \[ \left\lbrack {{E}_{x}\left( {T * \phi }\right) }\right\rbrack \left( y\right) = \left( {T * \phi }\right) \left( {y - x}\right) = T\left( {{E}_{y - x}{B\phi }}\right) \] \[ \left\lbrack {\left( {{E}_{x}T}\right) * \phi }\right\rbrack \left( y\right) = \left( {{E}_{x}T}\right) \left( {{E}_{y}{B\phi }}\right) = T\left( {{E}_{-x}{E}_{y}{B\phi }}\right) = T\left( {{E}_{y - x}{B\phi }}\right) \] \[ \left\lbrack {T * {E}_{x}\phi }\right\rbrack \left( y\right) = T\left( {{E}_{y}B{E}_{x}\phi }\right) = T\left( {{E}_{y}{E}_{-x}{B\phi }}\right) = T\left( {{E}_{y - x}{B\phi }}\right) \]
|
Yes
|
Lemma 2. If \( T \) is a distribution and if \( {\phi }_{j} \rightarrow \phi \) in \( \mathcal{D} \), then \( T * {\phi }_{j} \rightarrow T * \phi \) pointwise.
|
Proof. By linearity (see Problem 3), it suffices to consider the case when \( \phi = 0 \) . If \( {\phi }_{j} \rightarrow 0 \) in \( \mathcal{D} \), then for all \( x \) ,\n\n\[ \n\left( {T * {\phi }_{j}}\right) \left( x\right) = T\left( {{E}_{x}B{\phi }_{j}}\right) \rightarrow 0 \n\]\n\nby the continuity of \( B,{E}_{x} \), and \( T \) (Problem 8).
|
No
|
Lemma 3. Let \( \left\lbrack {x}_{j}\right\rbrack \) be a sequence of points in \( {\mathbb{R}}^{n} \) converging to \( x \) . For each \( \phi \in \mathfrak{D} \), (7) \[ {E}_{{x}_{j}}\phi \rightarrow {E}_{x}\phi \;\text{ (convergence in }\;\mathfrak{D}\text{ ) } \]
|
Proof. If \( {K}_{1} = \left\{ {x,{x}_{1},{x}_{2},\ldots }\right\} \) and if \( {K}_{2} \) is the support of \( \phi \), then (as is easily verified) the supports of \( {E}_{{x}_{j}}\phi \) are contained in the compact set \[ {K}_{1} + {K}_{2} = \left\{ {u + v : u \in {K}_{1}, v \in {K}_{2}}\right\} \] Now we observe that (8) \[ \left( {{E}_{{x}_{j}}\phi }\right) \left( y\right) \rightarrow \left( {{E}_{x}\phi }\right) \left( y\right) \;\text{ uniformly for }\;y \in {K}_{1} + {K}_{2} \] Indeed, for a given \( \varepsilon > 0 \) there is a \( \delta > 0 \) such that \[ \left| {u - v}\right| < \delta \Rightarrow \left| {\phi \left( u\right) - \phi \left( v\right) }\right| < \varepsilon \;\left( {u, v \in {K}_{1} + {K}_{2} - {K}_{1}}\right) \] (This is uniform continuity of the continuous function \( \phi \) on a compact set.) Hence if \( \left| {{x}_{j} - x}\right| < \delta \), then \( \left| {\phi \left( {y - {x}_{j}}\right) - \phi \left( {y - x}\right) }\right| < \varepsilon \) . It now follows that \[ \left( {{D}^{\alpha }{E}_{{x}_{j}}\phi }\right) \left( y\right) \rightarrow \left( {{D}^{\alpha }{E}_{x}\phi }\right) \left( y\right) \;\text{ uniformly for }\;y \in {K}_{1} + {K}_{2} \] because \( {D}^{\alpha }{E}_{{x}_{j}}\phi = {E}_{{x}_{j}}{D}^{\alpha }\phi \), and (8) can be applied to \( {D}^{\alpha }\phi \) .
|
Yes
|
Lemma 4. Let \( e = \left( {1,0,\ldots ,0}\right) ,0 < \left| t\right| < 1 \), and \( {F}_{t} = {t}^{-1}\left( {{E}_{0} - {E}_{te}}\right) \). Then for each test function \( \phi ,{F}_{t}\phi \rightarrow \frac{\partial \phi }{\partial {x}_{1}} \) as \( t \rightarrow 0 \) . (This convergence is in the topology of \( \mathbf{D} \).)
|
Proof. Since \( \left| t\right| < 1 \), there is a single compact set \( K \) containing the supports of \( {F}_{t}\phi \) and \( \frac{\partial \phi }{\partial {x}_{1}} \). By the mean value theorem (used twice) we have (for \( 0 < \theta ,{\theta }^{\prime } < 1 \) )\n\n\[ \left| {\frac{\partial \phi }{\partial {x}_{1}}\left( x\right) - \left( {{F}_{t}\phi }\right) \left( x\right) }\right| = \left| {\frac{\partial \phi }{\partial {x}_{1}}\left( x\right) - {t}^{-1}\left\lbrack {\phi \left( x\right) - \phi \left( {x - {te}}\right) }\right\rbrack }\right| \]\n\n\[ = \left| {\frac{\partial \phi }{\partial {x}_{1}}\left( x\right) - \frac{\partial \phi }{\partial {x}_{1}}\left( {x - {\theta te}}\right) }\right| \]\n\n\[ = \left| {\frac{{\partial }^{2}\phi }{\partial {x}_{1}^{2}}\left( {x - {\theta }^{\prime }{te}}\right) }\right| \left| {\theta t}\right| \]\n\n\[ \leq {\begin{Vmatrix}\frac{{\partial }^{2}\phi }{\partial {x}_{1}^{2}}\end{Vmatrix}}_{K}\left| t\right| \]\n\nThe norm used here is the supremum norm on \( K \). Our inequality shows that as \( t \rightarrow 0,\left( {{F}_{t}\phi }\right) \left( x\right) \rightarrow \left( \frac{\partial \phi }{\partial {x}_{1}}\right) \left( x\right) \) uniformly in \( x \) on \( K \). Since \( \phi \) can be any test function, we can apply our conclusion to \( {D}^{\alpha }\phi \), inferring that \( {F}_{t}{D}^{\alpha }\phi \) converges uniformly to \( \frac{\partial }{\partial {x}_{1}}{D}^{\alpha }\phi \) on \( K \). Since \( {D}^{\alpha } \) commutes with \( {F}_{t} \) (Problem 9) and with other derivatives, we conclude that \( {D}^{\alpha }{F}_{t}\phi \) converges uniformly on \( K \) to \( {D}^{\alpha }\frac{\partial \phi }{\partial {x}_{1}} \). This proves that the convergence of \( {F}_{t}\phi \) is in accordance with the notion of convergence adopted in \( \mathbf{D} \).
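The uniform convergence can be observed numerically in one dimension (the bump, the grid, and the step sizes are our choices):

```python
import math

def phi(x):
    # a test function supported on [-1, 1]
    return math.exp(-1.0/(1.0 - x*x)) if abs(x) < 1 else 0.0

def dphi(x):
    return phi(x)*(-2.0*x/(1.0 - x*x)**2) if abs(x) < 1 else 0.0

def sup_error(t, n=2001):
    # sup over a grid of |(F_t phi)(x) - phi'(x)|, where
    # (F_t phi)(x) = (phi(x) - phi(x - t))/t
    xs = [-1.5 + 3.0*k/(n - 1) for k in range(n)]
    return max(abs((phi(x) - phi(x - t))/t - dphi(x)) for x in xs)

errs = [sup_error(t) for t in (0.1, 0.01, 0.001)]
print(errs)  # shrinking roughly in proportion to t, as the lemma's bound predicts
```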
Example 1. What are the fundamental solutions of the operator \( D = \frac{d}{dx} \) in the case \( n = 1 \)? We seek all the distributions \( T \) that satisfy \( \partial T = \delta \).
We saw in Example 1 of Section 5.2 (page 254) that \( \partial \widetilde{H} = \delta \), where \( H \) is the Heaviside function. Thus \( \widetilde{H} \) is one of the fundamental solutions. Since the distributions sought are exactly those for which \( \partial T = \partial \widetilde{H} \), we see by Theorem 3 in Section 5.2 (page 256) that \( T = \widetilde{H} + \widetilde{c} \) for some constant \( c \) .
Theorem 1. The Malgrange-Ehrenpreis Theorem. Every operator \( \sum {c}_{\alpha }{D}^{\alpha } \) has a fundamental solution.
For the proof of this basic theorem, consult [Ho] page 189, or [Ru1] page 195.
Theorem 2. Let \( A \) be a linear differential operator with constant coefficients, and let \( T \) be a distribution that is a fundamental solution of \( A \) . Then for each test function \( \phi, A\left( {T * \phi }\right) = \phi \) .
Proof. Let \( A = \sum {c}_{\alpha }{D}^{\alpha } \) . Then \( \sum {c}_{\alpha }{\partial }^{\alpha }T = \delta \) . The basic formula (the theorem of Section 5, page 271) states that\n\n\[ \n{D}^{\alpha }\left( {T * \phi }\right) = {\partial }^{\alpha }T * \phi \n\]\n\nFrom this we conclude that\n\n\[ \nA\left( {T * \phi }\right) = \sum {c}_{\alpha }{D}^{\alpha }\left( {T * \phi }\right) = \left( {\sum {c}_{\alpha }{\partial }^{\alpha }T}\right) * \phi = \delta * \phi = \phi \n\]\n\nIn the last step we use the calculation\n\n\[ \n\left( {\delta * \phi }\right) \left( x\right) = \delta \left( {{E}_{x}{B\phi }}\right) = \left( {{E}_{x}{B\phi }}\right) \left( 0\right) = \left( {B\phi }\right) \left( {0 - x}\right) = \phi \left( x\right) \n\]
Example 2. We use the theory of distributions to find a solution of the differential equation \( \frac{du}{dx} = \phi \), where \( \phi \) is a test function.
By Example 1, one fundamental solution is the distribution \( \widetilde{H} \) . By the preceding theorem, \( \widetilde{H} * \phi \) will solve the differential equation. We have, with a simple change of variable,\n\n\[ u\left( x\right) = \left( {\widetilde{H} * \phi }\right) \left( x\right) = {\int }_{-\infty }^{\infty }H\left( y\right) \phi \left( {x - y}\right) {dy} = {\int }_{-\infty }^{x}\phi \left( z\right) {dz} \]
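As an illustrative check (our own sketch; the bump function below is a hypothetical stand-in for the test function \( \phi \)), one can compute \( u = \widetilde{H} * \phi \) by quadrature and confirm numerically that \( \frac{du}{dx} = \phi \):

```python
import math

def phi(x):
    # A bump test function supported on [-1, 1], standing in for phi
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

def u(x, n=4000):
    # u(x) = (H-tilde * phi)(x) = integral of phi(z) dz from -infinity to x;
    # phi vanishes below -1, so integrate from -1 by the trapezoid rule
    a = -1.0
    if x <= a:
        return 0.0
    step = (x - a) / n
    s = 0.5 * (phi(a) + phi(x))
    for i in range(1, n):
        s += phi(a + i * step)
    return s * step

# u should be an antiderivative of phi: check u' = phi by centered differences
for x0 in (-0.5, 0.0, 0.5):
    h = 1e-3
    du = (u(x0 + h) - u(x0 - h)) / (2 * h)
    print(x0, du, phi(x0))
```

At each sample point the difference quotient of \( u \) agrees with \( \phi \) to within the quadrature and differencing error.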
Let us search for a solution of the differential equation\n\n\[ {u}^{\prime } + {au} = \phi \]\n\nusing distribution theory. First, we try to discover a fundamental solution, i.e., a distribution \( T \) such that \( \partial T + {aT} = \delta \) .
If \( T \) is such a distribution and if \( v\left( x\right) = {e}^{ax} \), then\n\n\[ \partial \left( {v \cdot T}\right) = {Dv} \cdot T + v \cdot \partial T = {av} \cdot T + v \cdot \left( {\delta - {aT}}\right) = v \cdot \delta = \delta \]\n\nConsequently, by Example 1,\n\n\[ v \cdot T = \widetilde{H} + \widetilde{c}\;\text{ and }\;T = \frac{1}{v}\left( {\widetilde{H} + \widetilde{c}}\right) \]\n\nThus \( T \) is a regular distribution \( \widetilde{f} \), and since \( c \) is arbitrary, we use \( c = 0 \), arriving at\n\n\[ f\left( x\right) = {e}^{-{ax}}H\left( x\right) \]\n\nA solution to the differential equation is then given by\n\n\[ u\left( x\right) = \left( {f * \phi }\right) \left( x\right) = {\int }_{-\infty }^{\infty }{e}^{-{ay}}H\left( y\right) \phi \left( {x - y}\right) {dy} = {\int }_{0}^{\infty }{e}^{-{ay}}\phi \left( {x - y}\right) {dy} \]\n\nThis formula produces a solution if \( \phi \) is bounded and of class \( {C}^{1} \).
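A numerical sketch of this conclusion (our own illustration; the value \( a = 2 \) and the bump test function are arbitrary choices made for the check) confirms pointwise that the convolution formula satisfies \( {u}^{\prime } + {au} = \phi \):

```python
import math

A = 2.0  # the constant a in u' + a u = phi (an arbitrary illustrative value)

def phi(x):
    # A bump test function supported on [-1, 1]
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1 else 0.0

def u(x, n=4000):
    # u(x) = integral_0^infty e^{-A y} phi(x - y) dy; the integrand vanishes
    # unless y lies in (x - 1, x + 1), so the integration range is finite
    lo, hi = max(0.0, x - 1.0), x + 1.0
    if hi <= lo:
        return 0.0
    step = (hi - lo) / n
    f = lambda y: math.exp(-A * y) * phi(x - y)
    s = 0.5 * (f(lo) + f(hi))
    for i in range(1, n):
        s += f(lo + i * step)
    return s * step

# Verify the distributional computation pointwise: u' + A u = phi
for x0 in (0.0, 0.5, 1.5):
    h = 1e-3
    du = (u(x0 + h) - u(x0 - h)) / (2 * h)
    print(x0, du + A * u(x0), phi(x0))
```

Note that at \( {x}_{0} = {1.5} \), outside the support of \( \phi \), the residual \( {u}^{\prime } + {Au} \) is (numerically) zero, as it must be.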
Lemma 1. For \( x \neq 0,\frac{\partial }{\partial {x}_{j}}\left| x\right| = {x}_{j}{\left| x\right| }^{-1} \) .
Proof.\n\n\[ \frac{\partial }{\partial {x}_{j}}\left| x\right| = \frac{\partial }{\partial {x}_{j}}{\left( {x}_{1}^{2} + \cdots + {x}_{n}^{2}\right) }^{1/2} = \frac{1}{2}{\left( {x}_{1}^{2} + \cdots + {x}_{n}^{2}\right) }^{-1/2}\left( {2{x}_{j}}\right) = {x}_{j}{\left| x\right| }^{-1} \]
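The formula is easy to check against finite differences. The sketch below (our own, at an arbitrary nonzero sample point in \( {\mathbb{R}}^{3} \)) compares the claimed partial derivatives with centered difference quotients:

```python
import math

def norm(x):
    # Euclidean norm |x|
    return math.sqrt(sum(t * t for t in x))

def grad_norm(x):
    # The formula of Lemma 1: d|x|/dx_j = x_j / |x|  (valid for x != 0)
    r = norm(x)
    return [t / r for t in x]

# Compare against centered finite differences at a sample nonzero point
x = [1.0, -2.0, 0.5]
h = 1e-6
for j in range(3):
    xp, xm = list(x), list(x)
    xp[j] += h
    xm[j] -= h
    fd = (norm(xp) - norm(xm)) / (2 * h)
    print(j, fd, grad_norm(x)[j])
```

A byproduct worth noticing: the gradient \( {x/}\left| x\right| \) is always a unit vector, since \( \sum {x}_{j}^{2}/{\left| x\right| }^{2} = 1 \).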