| Q | A | Result |
|---|---|---|
Theorem 3. The Closed Graph Theorem. A closed linear map from one Banach space into another is continuous.
|
Proof. Let \( L : X \rightarrow Y \) be closed and linear. In \( X \), define a new norm \( N\left( x\right) = \) \( \parallel x\parallel + \parallel {Lx}\parallel \) . Then \( \left( {X, N}\right) \) is complete. Indeed, if \( \left\lbrack {x}_{n}\right\rbrack \) is a Cauchy sequence with the norm \( N \), then \( \left\lbrack {x}_{n}\right\rbrack \) and \( \left\lbrack {L{x}_{n}}\right\rbrack \) are Cauchy sequences with the given norms in \( X \) and \( Y \) . Hence \( {x}_{n} \rightarrow x \) and \( L{x}_{n} \rightarrow y \), since \( X \) and \( Y \) are complete. Since \( L \) is closed, \( {Lx} = y \) and so

\[ N\left( {x - {x}_{n}}\right) = \begin{Vmatrix}{x - {x}_{n}}\end{Vmatrix} + \begin{Vmatrix}{{Lx} - L{x}_{n}}\end{Vmatrix} \rightarrow 0 \]

By the preceding corollary, \( N\left( x\right) \leq \alpha \parallel x\parallel \) for some \( \alpha \) . Hence \( \parallel {Lx}\parallel \leq \alpha \parallel x\parallel \) . ∎
|
Yes
|
Theorem 4. A normed linear space that is the image of a Banach space by a bounded, linear, interior map is also a Banach space.
|
Proof. Let \( L : X \rightarrow Y \) be the bounded, linear, interior map. Assume that \( X \) is a Banach space. By Problem 1.2.38 (page 14), it suffices to prove that each absolutely convergent series in \( Y \) is convergent. Let \( {y}_{n} \in Y \) and \( \sum \begin{Vmatrix}{y}_{n}\end{Vmatrix} < \infty \) . By Problem 2 (of this section), there exist \( {x}_{n} \in X \) such that \( L{x}_{n} = {y}_{n} \) and (for some \( c > 0)\begin{Vmatrix}{x}_{n}\end{Vmatrix} \leq c\begin{Vmatrix}{y}_{n}\end{Vmatrix} \) . Then \( \sum \begin{Vmatrix}{x}_{n}\end{Vmatrix} \leq c\sum \begin{Vmatrix}{y}_{n}\end{Vmatrix} < \infty \) . By Problem 1.2.3, page 12, the series \( \sum {x}_{n} \) converges. Since \( L \) is continuous and linear, \( L\left( {\sum {x}_{n}}\right) = \) \( \sum L{x}_{n} = \sum {y}_{n} \), and the latter series is convergent.
|
Yes
|
Theorem 5. Let \( L \) be a continuous linear transformation from one normed linear space to another. The range of \( L \) is dense if and only if \( {L}^{ * } \) is injective.
|
Proof. Let \( L : X \rightarrow Y \) . By Theorem 3 in Section 1.6 (page 37), applied to \( L\left( X\right) \), we have these equivalent assertions: (1) \( L\left( X\right) \) is dense in \( Y \) . (2) \( L{\left( X\right) }^{ \bot } = 0 \) . (3) If \( \phi \in L{\left( X\right) }^{ \bot } \), then \( \phi = 0 \) . (4) If \( \phi \left( {Lx}\right) = 0 \) for all \( x \), then \( \phi = 0 \) . (5) If \( {L}^{ * }\phi = 0 \), then \( \phi = 0 \) . (6) \( {L}^{ * } \) is injective.
|
Yes
|
Theorem 6. The Closed Range Theorem. Let \( L \) be a bounded linear transformation defined on a normed linear space and taking values in another normed linear space. The range of \( L \) and the null space of \( {L}^{ * } \), denoted by \( \mathcal{N}\left( {L}^{ * }\right) \), are related by the fact that \( {\left\lbrack \mathcal{N}\left( {L}^{ * }\right) \right\rbrack }_{ \bot } \) is the closure of the range of \( L \) .
|
Proof. Recall the notation \( {U}_{ \bot } \) for the set \( \{ x \in X : \phi \left( x\right) = 0 \) for all \( \phi \in U\} \), where \( X \) is a normed linear space and \( U \) is a subset of \( {X}^{ * } \) . (See Problems 1.6.20 and 1.6.21, on page 38, as well as Problem 13 in this section, page 52.) We denote by \( \mathcal{R}\left( L\right) \) the range of \( L \) . To prove [closure \( \mathcal{R}\left( L\right) \rbrack \subset {\left\lbrack \mathcal{N}\left( {L}^{ * }\right) \right\rbrack }_{ \bot } \), let \( y \) be an element of the set on the left. Then \( y = \lim {y}_{n} \) for some sequence \( \left\lbrack {y}_{n}\right\rbrack \) in \( \mathcal{R}\left( L\right) \) . Write \( {y}_{n} = L{x}_{n} \) for appropriate \( {x}_{n} \) . To show that \( y \in {\left\lbrack \mathcal{N}\left( {L}^{ * }\right) \right\rbrack }_{ \bot } \) we must prove that \( \phi \left( y\right) = 0 \) for all \( \phi \in \mathcal{N}\left( {L}^{ * }\right) \) . We have

\[ \phi \left( y\right) = \phi \left( {\lim {y}_{n}}\right) = \lim \phi \left( {y}_{n}\right) = \lim \phi \left( {L{x}_{n}}\right) \]
\[ = \lim \left( {\phi \circ L}\right) \left( {x}_{n}\right) = \lim \left( {{L}^{ * }\phi }\right) \left( {x}_{n}\right) = \lim 0 = 0 \]

To prove the reverse inclusion, suppose that \( y \) is not in [closure \( \mathcal{R}\left( L\right) \) ]. We shall show that \( y \) is not in \( {\left\lbrack \mathcal{N}\left( {L}^{ * }\right) \right\rbrack }_{ \bot } \) . By Corollary 2 of the Hahn-Banach Theorem (page 34), there is a continuous linear functional \( \phi \) such that \( \phi \left( y\right) \neq 0 \) and \( \phi \) annihilates each member of [closure \( \mathcal{R}\left( L\right) \) ]. It follows that for all \( x \), \( \left( {{L}^{ * }\phi }\right) \left( x\right) = \left( {\phi \circ L}\right) \left( x\right) = \phi \left( {Lx}\right) = 0 \) . Consequently, \( \phi \in \mathcal{N}\left( {L}^{ * }\right) \) . Since \( \phi \left( y\right) \neq 0 \), we conclude that \( y \notin {\left\lbrack \mathcal{N}\left( {L}^{ * }\right) \right\rbrack }_{ \bot } \) .
|
No
|
Theorem 7. Let \( L \) be a continuous, linear, injective map from one Banach space into another. The range of \( L \) is closed if and only if \( L \) is bounded below: \( \mathop{\inf }\limits_{{\parallel x\parallel = 1}}\parallel {Lx}\parallel > 0 \) .
|
Proof. Assume first that \( \parallel {Lx}\parallel \geq c > 0 \) when \( \parallel x\parallel = 1 \) . By homogeneity, \( \parallel {Lx}\parallel \geq c\parallel x\parallel \) for all \( x \) . To prove that the range, \( \mathcal{R}\left( L\right) \), is closed, let \( {y}_{n} \in \mathcal{R}\left( L\right) \) and \( {y}_{n} \rightarrow y \) . It is to be shown that \( y \in \mathcal{R}\left( L\right) \) . Let \( {y}_{n} = L{x}_{n} \) . The inequality

\[ \begin{Vmatrix}{{y}_{n} - {y}_{m}}\end{Vmatrix} = \begin{Vmatrix}{L\left( {{x}_{n} - {x}_{m}}\right) }\end{Vmatrix} \geq c\begin{Vmatrix}{{x}_{n} - {x}_{m}}\end{Vmatrix} \]

reveals that \( \left\lbrack {x}_{n}\right\rbrack \) is a Cauchy sequence. By the completeness of the domain space, \( {x}_{n} \rightarrow x \) for some \( x \) . Then, by continuity,

\[ {Lx} = L\left( {\lim {x}_{n}}\right) = \lim L{x}_{n} = \lim {y}_{n} = y \]

Hence \( y \in \mathcal{R}\left( L\right) \) .

Now assume that \( \mathcal{R}\left( L\right) \) is closed. Then \( L \) maps the domain space \( X \) injectively onto the Banach space \( \mathcal{R}\left( L\right) \) . By Corollary 1 of the Interior Mapping Theorem (page 49), \( L \) has a continuous inverse. The inequality \( \begin{Vmatrix}{{L}^{-1}y}\end{Vmatrix} \leq \begin{Vmatrix}{L}^{-1}\end{Vmatrix}\parallel y\parallel \) is equivalent to \( \parallel x\parallel \leq \begin{Vmatrix}{L}^{-1}\end{Vmatrix}\parallel {Lx}\parallel \), showing that \( L \) is bounded below.
|
Yes
|
Theorem 1. In a finite-dimensional normed linear space, weak and strong convergence coincide.
|
Proof. Let \( X \) be a \( k \) -dimensional space. Select a basis \( \left\{ {{b}_{1},\ldots ,{b}_{k}}\right\} \) for \( X \) and let \( {\phi }_{1},\ldots ,{\phi }_{k} \) be the linear functionals such that for each \( x \),

\[ x = \mathop{\sum }\limits_{{i = 1}}^{k}{\phi }_{i}\left( x\right) {b}_{i} \]

By Corollary 1 on page 26, each functional \( {\phi }_{i} \) is continuous. Now if \( {x}_{n} \rightharpoonup x \), then we have \( {\phi }_{i}\left( {x}_{n}\right) \rightarrow {\phi }_{i}\left( x\right) \), and consequently,

\[ \begin{Vmatrix}{x - {x}_{n}}\end{Vmatrix} = \begin{Vmatrix}{\mathop{\sum }\limits_{{i = 1}}^{k}{\phi }_{i}\left( {x - {x}_{n}}\right) {b}_{i}}\end{Vmatrix} \leq \mathop{\sum }\limits_{{i = 1}}^{k}\left| {{\phi }_{i}\left( {x - {x}_{n}}\right) }\right| \begin{Vmatrix}{b}_{i}\end{Vmatrix} \rightarrow 0 \]
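The final estimate can be checked numerically. The sketch below is illustrative only: the basis norms \( \begin{Vmatrix}{b}_{i}\end{Vmatrix} \) and the coordinate differences \( {\phi }_{i}\left( {x - {x}_{n}}\right) \) are hypothetical values chosen so the coordinates converge; the code verifies that the right-hand bound then tends to 0.

```python
# Hypothetical basis norms ||b_i|| for a 3-dimensional space.
basis_norms = [1.0, 2.0, 0.5]

def bound(coord_diffs):
    # Right-hand side of the displayed estimate:
    # sum_i |phi_i(x - x_n)| * ||b_i||
    return sum(abs(d) * bn for d, bn in zip(coord_diffs, basis_norms))

# Coordinates of x - x_n decaying like 1/n (so x_n -> x coordinatewise).
diffs = [[1.0 / n, -2.0 / n, 0.5 / n] for n in range(1, 200)]
bounds = [bound(d) for d in diffs]

# The bound on ||x - x_n|| decreases to 0 as n grows.
assert all(b2 < b1 for b1, b2 in zip(bounds, bounds[1:]))
assert bounds[-1] < 0.05
```

This mirrors the proof's mechanism: in finite dimensions, coordinatewise (weak) convergence already controls the norm.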
|
Yes
|
Theorem 2. If a sequence \( \left\lbrack {x}_{n}\right\rbrack \) in a normed linear space converges weakly to an element \( x \), then a sequence of linear combinations of the elements \( {x}_{n} \) converges strongly to \( x \) .
|
Proof. Another way of stating the conclusion is that \( x \) belongs to the closed subspace

\[ Y = \operatorname{closure}\left( {\operatorname{span}\left\{ {{x}_{1},{x}_{2},\ldots }\right\} }\right) \]

If \( x \notin Y \), then by Corollary 2 of the Hahn-Banach Theorem (page 34), there is a continuous linear functional \( \phi \) such that \( \phi \in {Y}^{ \bot } \) and \( \phi \left( x\right) = 1 \) . This clearly contradicts the assumption that \( {x}_{n} \rightharpoonup x \) .
|
Yes
|
Theorem 3. If the sequence \( \left\lbrack {{x}_{0},{x}_{1},{x}_{2},\ldots }\right\rbrack \) is bounded in a normed linear space \( X \) and if \( \phi \left( {x}_{n}\right) \rightarrow \phi \left( {x}_{0}\right) \) for all \( \phi \) in a fundamental subset of \( {X}^{ * } \), then \( {x}_{n} \rightharpoonup {x}_{0} \) .
|
Proof. (The term "fundamental" describes a subset whose closed linear span is the whole space.)
|
No
|
Minkowski Inequality. If \( x \) and \( y \) are two members of \( {\ell }_{p} \), then

\[ \parallel x + y{\parallel }_{p} \leq \parallel x{\parallel }_{p} + \parallel y{\parallel }_{p} \]
|
Proof. For \( p = 1 \) an elementary proof goes as follows:

\[ \parallel x + y{\parallel }_{1} = \sum \left| {x\left( n\right) + y\left( n\right) }\right| \leq \sum \left| {x\left( n\right) }\right| + \sum \left| {y\left( n\right) }\right| = \parallel x{\parallel }_{1} + \parallel y{\parallel }_{1} \]

Now assume \( 1 < p < \infty \) . Then

\[ \sum {\left| x\left( n\right) + y\left( n\right) \right| }^{p} \leq \sum \{ \left| {x\left( n\right) }\right| + \left| {y\left( n\right) }\right| {\} }^{p} \]
\[ \leq \sum \{ 2\max \left\lbrack {\left| {x\left( n\right) }\right| ,\left| {y\left( n\right) }\right| }\right\rbrack {\} }^{p} \]
\[ = \sum {2}^{p}\max \left\{ {{\left| x\left( n\right) \right| }^{p},{\left| y\left( n\right) \right| }^{p}}\right\} \]
\[ \leq {2}^{p}\sum \left\{ {{\left| x\left( n\right) \right| }^{p} + {\left| y\left( n\right) \right| }^{p}}\right\} < \infty \]

This proves that \( x + y \in {\ell }_{p} \) . Now let \( 1/p + 1/q = 1 \) and observe that

\[ x \in {\ell }_{p} \Rightarrow {\left| x\right| }^{p - 1} \in {\ell }_{q} \]

because

\[ \sum {\left\{ {\left| x\left( n\right) \right| }^{p - 1}\right\} }^{q} = \sum {\left| x\left( n\right) \right| }^{p} < \infty \]

Therefore, by the Hölder inequality,

\[ \parallel x + y{\parallel }_{p}^{p} = \sum {\left| x\left( n\right) + y\left( n\right) \right| }^{p} \]
\[ \leq \sum {\left| x\left( n\right) + y\left( n\right) \right| }^{p - 1}\left| {x\left( n\right) }\right| + \sum {\left| x\left( n\right) + y\left( n\right) \right| }^{p - 1}\left| {y\left( n\right) }\right| \]
\[ \leq {\begin{Vmatrix}{\left| x + y\right| }^{p - 1}\end{Vmatrix}}_{q}\left\{ {\parallel x{\parallel }_{p} + \parallel y{\parallel }_{p}}\right\} \]
\[ = \parallel x + y{\parallel }_{p}^{p/q}\left\{ {\parallel x{\parallel }_{p} + \parallel y{\parallel }_{p}}\right\} \]

Thus, finally,

\[ \parallel x + y{\parallel }_{p} \leq \parallel x{\parallel }_{p} + \parallel y{\parallel }_{p} \]
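As a numeric sanity check (not part of the proof), the inequality can be tested for finite sequences and several exponents. The vectors `x` and `y` below are arbitrary hypothetical examples.

```python
def p_norm(x, p):
    # Finite-sequence p-norm: (sum |x(n)|^p)^(1/p)
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

x = [3.0, -1.0, 2.0, 0.5]
y = [-2.0, 4.0, 1.0, 1.5]

for p in (1, 1.5, 2, 3, 10):
    lhs = p_norm([a + b for a, b in zip(x, y)], p)
    rhs = p_norm(x, p) + p_norm(y, p)
    # Minkowski: ||x + y||_p <= ||x||_p + ||y||_p  (tolerance for rounding)
    assert lhs <= rhs + 1e-12
```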
|
Yes
|
Theorem 8. A subspace of a normed linear space is closed if and only if it is weakly sequentially closed.
|
Proof. Let \( Y \) be a weakly sequentially closed subspace in the normed space \( X \) . If \( {y}_{n} \in Y \) and \( {y}_{n} \rightarrow y \), then \( {y}_{n} \rightharpoonup y \) and \( y \in Y \) . Hence \( Y \) is norm-closed.

For the converse, suppose that \( Y \) is norm-closed, and let \( {y}_{n} \in Y,{y}_{n} \rightharpoonup y \) . If \( y \notin Y \), then (because \( Y \) is closed) we have \( \operatorname{dist}\left( {y, Y}\right) > 0 \) . By Corollary 2 of the Hahn-Banach Theorem (page 34) there is a functional \( \phi \in {Y}^{ \bot } \) such that \( \phi \left( y\right) = 1 \) . Hence \( \phi \left( {y}_{n}\right) \) does not converge to \( \phi \left( y\right) \), contradicting the assumed weak convergence.
|
Yes
|
Theorem 9. A linear continuous mapping between normed spaces is weakly sequentially continuous.
|
Proof. Let \( A : X \rightarrow Y \) be linear and norm-continuous. In order to prove that \( A \) is weakly sequentially continuous, let \( {x}_{n} \rightharpoonup x \) . For all \( \phi \in {Y}^{ * } \) we have \( \phi \circ A \in {X}^{ * } \) . Hence \( \phi \left( {A{x}_{n} - {Ax}}\right) \rightarrow 0 \) for all \( \phi \in {Y}^{ * } \) .
|
Yes
|
Theorem 10. Let \( X \) be a separable normed linear space, and \( \left\lbrack {\phi }_{n}\right\rbrack \) a bounded sequence in \( {X}^{ * } \) . Then there is a subsequence \( \left\lbrack {\phi }_{{n}_{i}}\right\rbrack \) that converges in the weak* sense to an element of \( {X}^{ * } \) .
|
Proof. Since \( X \) is separable, it contains a countable dense set, \( \left\{ {{x}_{1},{x}_{2},\ldots }\right\} \) . Since \( \left\lbrack {\phi }_{n}\right\rbrack \) is bounded, so is the sequence \( \left\lbrack {{\phi }_{n}\left( {x}_{1}\right) }\right\rbrack \) . We can therefore find an increasing sequence \( {\mathbb{N}}_{1} \subset \mathbb{N} \) such that \( \mathop{\lim }\limits_{{n \in {\mathbb{N}}_{1}}}{\phi }_{n}\left( {x}_{1}\right) \) exists. By the same reasoning there is an increasing sequence \( {\mathbb{N}}_{2} \subset {\mathbb{N}}_{1} \) such that \( \mathop{\lim }\limits_{{n \in {\mathbb{N}}_{2}}}{\phi }_{n}\left( {x}_{2}\right) \) exists. Continuing in this way, we generate sequences

\[ \mathbb{N} \supset {\mathbb{N}}_{1} \supset {\mathbb{N}}_{2} \supset \cdots \]

Now use the Cantor diagonalization process: Define \( {n}_{i} \) to be the \( i \) th element of \( {\mathbb{N}}_{i} \) . We claim that \( \mathop{\lim }\limits_{{i \rightarrow \infty }}{\phi }_{{n}_{i}}\left( {x}_{k}\right) \) exists for each \( k \) . This is true because \( \mathop{\lim }\limits_{{n \in {\mathbb{N}}_{k}}}{\phi }_{n}\left( {x}_{k}\right) \) exists by construction, and if \( i \geq k \), then \( {n}_{i} \in {\mathbb{N}}_{i} \subset {\mathbb{N}}_{k} \) . For any \( x \in X \) we write

\[ \left| {{\phi }_{{n}_{i}}\left( x\right) - {\phi }_{{n}_{j}}\left( x\right) }\right| \leq \left| {{\phi }_{{n}_{i}}\left( x\right) - {\phi }_{{n}_{i}}\left( {x}_{k}\right) }\right| + \left| {{\phi }_{{n}_{i}}\left( {x}_{k}\right) - {\phi }_{{n}_{j}}\left( {x}_{k}\right) }\right| + \left| {{\phi }_{{n}_{j}}\left( {x}_{k}\right) - {\phi }_{{n}_{j}}\left( x\right) }\right| \]

Given \( \varepsilon > 0 \), choose \( k \) so that \( \begin{Vmatrix}{x - {x}_{k}}\end{Vmatrix} < \varepsilon \) ; since \( \mathop{\sup }\limits_{n}\begin{Vmatrix}{\phi }_{n}\end{Vmatrix} < \infty \), the first and third terms on the right are at most \( \varepsilon \mathop{\sup }\limits_{n}\begin{Vmatrix}{\phi }_{n}\end{Vmatrix} \), while the middle term tends to 0 as \( i, j \rightarrow \infty \) . This shows that \( \left\lbrack {{\phi }_{{n}_{i}}\left( x\right) }\right\rbrack \) has the Cauchy property in \( \mathbb{R} \) for each \( x \in X \) . Hence it converges to something that we may denote by \( \phi \left( x\right) \) . Standard arguments show that \( \phi \in {X}^{ * } \) .
|
Yes
|
Theorem 1. Each space \( {\ell }_{p} \), where \( 1 < p < \infty \), is reflexive.
|
Proof. If \( {p}^{-1} + {q}^{-1} = 1 \), then \( {\ell }_{p}^{ * } = {\ell }_{q} \) and \( {\ell }_{q}^{ * } = {\ell }_{p} \) by Theorem 4 of Section 1.9, page 56. Hence \( {\ell }_{p}^{* * } = {\ell }_{p} \) . But we must be sure that the isometry involved in this statement is the natural one, \( J \) . Let \( A : {\ell }_{p} \rightarrow {\ell }_{q}^{ * } \) and \( B : {\ell }_{q} \rightarrow {\ell }_{p}^{ * } \) be the isometries that have already been discussed in a previous section. Thus, for example, if \( x \in {\ell }_{p} \) then \( {Ax} \) is the functional on \( {\ell }_{q} \) defined by

\[ \left( {Ax}\right) \left( y\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }x\left( n\right) y\left( n\right) \;y \in {\ell }_{q} \]

Define \( {B}^{ * } : {\ell }_{p}^{* * } \rightarrow {\ell }_{q}^{ * } \) by the equation

\[ {B}^{ * }\phi = \phi \circ B\;\phi \in {\ell }_{p}^{* * } \]

One of the problems asks for a proof of the fact that \( {B}^{ * } \) is an isometric isomorphism of \( {\ell }_{p}^{* * } \) onto \( {\ell }_{q}^{ * } \) . Thus \( {B}^{* - 1}A \) is an isometric isomorphism of \( {\ell }_{p} \) onto \( {\ell }_{p}^{* * } \) . Now we wonder whether \( {B}^{* - 1}A = J \) . Equivalent questions are these:

\[ {B}^{* - 1}{Ax} = {Jx}\;\left( {x \in {\ell }_{p}}\right) \]
\[ {Ax} = {B}^{ * }{Jx}\;\left( {x \in {\ell }_{p}}\right) \]
\[ \left( {Ax}\right) \left( y\right) = \left( {{B}^{ * }{Jx}}\right) \left( y\right) \;\left( {x \in {\ell }_{p},\;y \in {\ell }_{q}}\right) \]
\[ \left( {Ax}\right) \left( y\right) = \left( {Jx}\right) \left( {By}\right) \;\left( {x \in {\ell }_{p},\;y \in {\ell }_{q}}\right) \]
\[ \left( {Ax}\right) \left( y\right) = \left( {By}\right) \left( x\right) \;\left( {x \in {\ell }_{p},\;y \in {\ell }_{q}}\right) \]

The final assertion is true because both sides of the equation are by definition \( \mathop{\sum }\limits_{{n = 1}}^{\infty }x\left( n\right) y\left( n\right) . \)
|
Yes
|
Theorem 2. A closed linear subspace in a reflexive Banach space is reflexive.
|
Proof. Let \( Y \) be a closed subspace in a reflexive Banach space \( X \) . Let \( J : X \rightarrow \) \( {X}^{* * } \) be the natural map. Define \( R : {X}^{ * } \rightarrow {Y}^{ * } \) by the equation \( {R\phi } = \phi \mid Y \) . (This is the restriction map.) Let \( f \in {Y}^{* * } \) . Define \( y = {J}^{-1}\left( {f \circ R}\right) \) . We claim that \( y \in Y \) . Suppose that \( y \notin Y \) . By a corollary of the Hahn-Banach Theorem, there exists \( \phi \in {X}^{ * } \) such that \( \phi \left( y\right) \neq 0 \) and \( \phi \left( Y\right) = 0 \) . Then it will follow that \( {R\phi } = 0 \) and that \( \phi \left( y\right) = \phi \left( {{J}^{-1}\left( {f \circ R}\right) }\right) = \left( {f \circ R}\right) \left( \phi \right) = 0 \), a contradiction. Next we claim that for all \( \psi \in {Y}^{ * }, f\left( \psi \right) = \psi \left( y\right) \) . Let \( \widetilde{\psi } \) be a Hahn-Banach extension of \( \psi \) in \( {X}^{ * } \) . Then \( \psi = R\widetilde{\psi } \) and \( f\left( \psi \right) = f\left( {R\widetilde{\psi }}\right) = \left( {f \circ R}\right) \left( \widetilde{\psi }\right) = \left( {Jy}\right) \left( \widetilde{\psi }\right) = \widetilde{\psi }\left( y\right) = \psi \left( y\right) \) .
|
Yes
|
Theorem 3. A Banach space is reflexive if and only if its conjugate space is reflexive.
|
Proof. Let \( X \) be reflexive. Then the natural embedding \( J : X \rightarrow {X}^{* * } \) is surjective. Let \( \Phi \in {X}^{* * * } \), and define \( \phi \in {X}^{ * } \) by the equation \( \phi = \Phi \circ J \) . Then for arbitrary \( f \in {X}^{* * } \) we have \( f = {Jx} \) for some \( x \), and consequently,

\[ f\left( \phi \right) = \left( {Jx}\right) \left( \phi \right) = \phi \left( x\right) = \left( {\Phi \circ J}\right) \left( x\right) = \Phi \left( {Jx}\right) = \Phi \left( f\right) \]

Thus \( \Phi \) is the image of \( \phi \) under the natural map of \( {X}^{ * } \) into \( {X}^{* * * } \) . This natural map is therefore surjective, and \( {X}^{ * } \) is reflexive.

For the converse, suppose that \( {X}^{ * } \) is reflexive. By what we just proved, \( {X}^{* * } \) is reflexive. But \( J\left( X\right) \) is a closed subspace in \( {X}^{* * } \), and by the preceding theorem, \( J\left( X\right) \) is reflexive. Hence \( X \) is reflexive (being isometrically isomorphic to \( J\left( X\right) ) \) .
|
Yes
|
Theorem 1. The norm has these properties:

a. \( \parallel x\parallel > 0 \) if \( x \neq 0 \)

b. \( \parallel {\alpha x}\parallel = \left| \alpha \right| \parallel x\parallel \;\left( {\alpha \in \mathbb{C}}\right) \)

c. \( \left| {\langle x, y\rangle }\right| \leq \parallel x\parallel \parallel y\parallel \;\) Cauchy-Schwarz Inequality

d. \( \parallel x + y\parallel \leq \parallel x\parallel + \parallel y\parallel \;\) Triangle Inequality

e. \( \parallel x + y{\parallel }^{2} + \parallel x - y{\parallel }^{2} = 2\parallel x{\parallel }^{2} + 2\parallel y{\parallel }^{2}\;\) Parallelogram Equality

f. If \( \langle x, y\rangle = 0 \), then \( \parallel x + y{\parallel }^{2} = \parallel x{\parallel }^{2} + \parallel y{\parallel }^{2}\;\) Pythagorean Law.
|
Proof. Only \( \mathbf{c} \) and \( \mathbf{d} \) offer any difficulty. For \( \mathbf{c} \), let \( \parallel y\parallel = 1 \) and write

\[ 0 \leq \langle x - {\lambda y}, x - {\lambda y}\rangle = \langle x, x\rangle - \bar{\lambda }\langle x, y\rangle - \lambda \langle y, x\rangle + {\left| \lambda \right| }^{2}\langle y, y\rangle . \]

Now let \( \lambda = \langle x, y\rangle \) to get \( 0 \leq \parallel x{\parallel }^{2} - {\left| \langle x, y\rangle \right| }^{2} \) . This establishes \( \mathbf{c} \) in the case \( \parallel y\parallel = 1 \) . By homogeneity, this suffices. To prove \( \mathbf{d} \), we use \( \mathbf{c} \) as follows:

\[ {\left\| x + y\right\| }^{2} = \left\langle {x + y, x + y}\right\rangle = \left\langle {x, x}\right\rangle + \left\langle {y, x}\right\rangle + \left\langle {x, y}\right\rangle + \left\langle {y, y}\right\rangle \]
\[ = {\begin{Vmatrix}x\end{Vmatrix}}^{2} + 2\mathcal{R}\langle x, y\rangle + {\begin{Vmatrix}y\end{Vmatrix}}^{2} \leq {\begin{Vmatrix}x\end{Vmatrix}}^{2} + 2\left| {\langle x, y\rangle }\right| + {\begin{Vmatrix}y\end{Vmatrix}}^{2} \]
\[ \leq \parallel x{\parallel }^{2} + 2\parallel x\parallel \parallel y\parallel + \parallel y{\parallel }^{2} = {\left( \parallel x\parallel + \parallel y\parallel \right) }^{2} \]
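Properties c, d, and e can be spot-checked numerically in \( {\mathbb{C}}^{3} \). The vectors below are arbitrary hypothetical examples; the inner product is conjugate-linear in the second argument, matching the proof's identity \( \langle x,{\lambda y}\rangle = \bar{\lambda }\langle x, y\rangle \).

```python
import math

def inner(x, y):
    # Hermitian inner product on C^n, conjugate-linear in the second slot
    return sum(a * b.conjugate() for a, b in zip(x, y))

def norm(x):
    return math.sqrt(inner(x, x).real)

x = [1 + 2j, -0.5j, 3.0]
y = [2 - 1j, 1.0, -1j]
s = [a + b for a, b in zip(x, y)]
d = [a - b for a, b in zip(x, y)]

# c: Cauchy-Schwarz
assert abs(inner(x, y)) <= norm(x) * norm(y) + 1e-12
# d: triangle inequality
assert norm(s) <= norm(x) + norm(y) + 1e-12
# e: parallelogram equality (exact up to rounding)
assert abs(norm(s)**2 + norm(d)**2 - 2*norm(x)**2 - 2*norm(y)**2) < 1e-9
```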
|
Yes
|
We write \( {L}^{2}\left\lbrack {a, b}\right\rbrack \) for the set of all complex-valued Lebesgue measurable functions on \( \left\lbrack {a, b}\right\rbrack \) such that

\[ {\int }_{a}^{b}{\left| x\left( t\right) \right| }^{2}{dt} < \infty \]

(The concept of measurability is explained in Chapter 8, Section 4, page 394.) In \( {L}^{2}\left\lbrack {a, b}\right\rbrack \), put \( \langle x, y\rangle = {\int }_{a}^{b}x\left( t\right) \overline{y\left( t\right) }{dt} \) . This space is a Hilbert space, a fact known as the Riesz-Fischer Theorem (1906).
|
See Chapter 8, Section 7, page 411 for the proof.
|
No
|
Example 5. Let \( \left( {S,\mathcal{A},\mu }\right) \) be any measure space. The notation \( {L}^{2}\left( S\right) \) then denotes the space of measurable complex functions on \( S \) such that \( \int {\left| f\left( s\right) \right| }^{2}{d\mu } < \infty \) . In \( {L}^{2}\left( S\right) \), define \( \langle f, g\rangle = \int f\left( s\right) \overline{g\left( s\right) }{d\mu } \) . Then \( {L}^{2}\left( S\right) \) is a Hilbert space.
|
See Theorem 3 in Section 8.7, page 411.
|
No
|
Theorem 2. If \( K \) is a closed, convex, nonvoid set in a Hilbert space \( X \), then to each \( x \) in \( X \) there corresponds a unique point \( y \) in \( K \) closest to \( x \) ; that is,

\[ \parallel x - y\parallel = \operatorname{dist}\left( {x, K}\right) \mathrel{\text{:=}} \inf \{ \parallel x - v\parallel : v \in K\} \]
|
Proof. Put \( \alpha = \operatorname{dist}\left( {x, K}\right) \), and select \( {y}_{n} \in K \) so that \( \begin{Vmatrix}{x - {y}_{n}}\end{Vmatrix} \rightarrow \alpha \) . Notice that \( \frac{1}{2}\left( {{y}_{n} + {y}_{m}}\right) \in K \) by the convexity of \( K \) . Hence \( \begin{Vmatrix}{\frac{1}{2}\left( {{y}_{n} + {y}_{m}}\right) - x}\end{Vmatrix} \geq \alpha \) . By the Parallelogram Law,

\[ {\begin{Vmatrix}{y}_{n} - {y}_{m}\end{Vmatrix}}^{2} = {\begin{Vmatrix}\left( {y}_{m} - x\right) - \left( {y}_{n} - x\right) \end{Vmatrix}}^{2} \]
\[ = 2{\begin{Vmatrix}{y}_{n} - x\end{Vmatrix}}^{2} + 2{\begin{Vmatrix}{y}_{m} - x\end{Vmatrix}}^{2} - {\begin{Vmatrix}{y}_{n} + {y}_{m} - 2x\end{Vmatrix}}^{2} \]
\[ = 2{\begin{Vmatrix}{y}_{n} - x\end{Vmatrix}}^{2} + 2{\begin{Vmatrix}{y}_{m} - x\end{Vmatrix}}^{2} - 4{\begin{Vmatrix}\frac{1}{2}\left( {y}_{n} + {y}_{m}\right) - x\end{Vmatrix}}^{2} \]
\[ \leq 2{\begin{Vmatrix}{y}_{n} - x\end{Vmatrix}}^{2} + 2{\begin{Vmatrix}{y}_{m} - x\end{Vmatrix}}^{2} - 4{\alpha }^{2} \rightarrow 0 \]

This shows that \( \left\lbrack {y}_{n}\right\rbrack \) is a Cauchy sequence. Hence \( {y}_{n} \rightarrow y \) for some \( y \in X \) . Since \( K \) is closed, \( y \in K \) . By continuity,

\[ \parallel x - y\parallel = \begin{Vmatrix}{x - \mathop{\lim }\limits_{n}{y}_{n}}\end{Vmatrix} = \mathop{\lim }\limits_{n}\begin{Vmatrix}{x - {y}_{n}}\end{Vmatrix} = \alpha \]

For the uniqueness of the point \( y \), suppose that \( {y}_{1} \) and \( {y}_{2} \) are points in \( K \) of distance \( \alpha \) from \( x \) . By the previous calculation we have

\[ {\begin{Vmatrix}{{y}_{1} - {y}_{2}}\end{Vmatrix}}^{2} \leq 2{\begin{Vmatrix}{y}_{1} - x\end{Vmatrix}}^{2} + 2{\begin{Vmatrix}{y}_{2} - x\end{Vmatrix}}^{2} - 4{\alpha }^{2} = 0 \]

Hence \( {y}_{1} = {y}_{2} \) .
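A concrete instance of the theorem (illustrative only, with a hypothetical choice of set and point): take \( K \) to be the closed unit ball of \( {\mathbb{R}}^{3} \) and project a point outside it. The nearest point is \( x/\parallel x\parallel \), and random sampling confirms no point of \( K \) is closer.

```python
import math
import random

def proj_ball(x):
    # Nearest point of the closed unit ball K = {v : ||v|| <= 1} to x
    n = math.sqrt(sum(t * t for t in x))
    return x if n <= 1 else [t / n for t in x]

def dist(a, b):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

x = [2.0, -1.0, 2.0]          # ||x|| = 3, so dist(x, K) = 2
y = proj_ball(x)
alpha = dist(x, y)
assert abs(alpha - 2.0) < 1e-12

# No randomly sampled point of K is closer to x than y.
random.seed(0)
for _ in range(1000):
    v = [random.uniform(-1, 1) for _ in range(3)]
    n = math.sqrt(sum(t * t for t in v))
    if n > 1:
        v = [t / n for t in v]
    assert dist(x, v) >= alpha - 1e-12
```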
|
Yes
|
Theorem 3. Let \( Y \) be a subspace in an inner-product space \( X \). Let \( x \in X \) and \( y \in Y \). These are equivalent assertions:

a. \( x - y \bot Y \), i.e., \( \langle x - y, v\rangle = 0 \) for all \( v \in Y \).

b. \( y \) is the unique point of \( Y \) closest to \( x \).
|
Proof. If \( \mathbf{a} \) is true, then for any \( u \in Y \) we have

\[{\begin{Vmatrix}x - u\end{Vmatrix}}^{2} = {\begin{Vmatrix}\left( x - y\right) + \left( y - u\right) \end{Vmatrix}}^{2} = {\begin{Vmatrix}x - y\end{Vmatrix}}^{2} + {\begin{Vmatrix}y - u\end{Vmatrix}}^{2} \geq {\begin{Vmatrix}x - y\end{Vmatrix}}^{2}\]

Here we used the Pythagorean Law (part f of Theorem 1).

Now suppose that \( \mathbf{b} \) is true. Let \( u \) be any point of \( Y \) and let \( \lambda \) be any scalar. Then (because \( y \) is the point closest to \( x \) )

\[0 \leq {\begin{Vmatrix}x - \left( y + \lambda u\right) \end{Vmatrix}}^{2} - {\begin{Vmatrix}x - y\end{Vmatrix}}^{2} = - 2\mathcal{R}\langle x - y,{\lambda u}\rangle + {\left| \lambda \right| }^{2}{\begin{Vmatrix}u\end{Vmatrix}}^{2}\]

Hence

\[2\mathcal{R}\{ \bar{\lambda }\langle x - y, u\rangle \} \leq {\left| \lambda \right| }^{2}\parallel u{\parallel }^{2}\]

If \( \langle x - y, u\rangle \neq 0 \), then \( u \neq 0 \) and we can put \( \lambda = \langle x - y, u\rangle /\parallel u{\parallel }^{2} \) to get a contradiction:

\[2\mathcal{R}\left\{ {\bar{\lambda }\lambda \parallel u{\parallel }^{2}}\right\} \leq {\left| \lambda \right| }^{2}\parallel u{\parallel }^{2}\]
|
Yes
|
Theorem 4. If \( Y \) is a closed subspace of a Hilbert space \( X \), then\n\n\( X = Y \oplus {Y}^{ \bot } \) .
|
Proof. We have to prove that \( {Y}^{ \bot } \) is a subspace, that \( Y \cap {Y}^{ \bot } = 0 \), and that \( X \subset Y + {Y}^{ \bot } \) . If \( {v}_{1} \) and \( {v}_{2} \) belong to \( {Y}^{ \bot } \), then so does \( {\alpha }_{1}{v}_{1} + {\alpha }_{2}{v}_{2} \), since for \( y \in Y \)\n\n\[ \langle y,{\alpha }_{1}{v}_{1} + {\alpha }_{2}{v}_{2}\rangle = {\overline{\alpha }}_{1}\langle y,{v}_{1}\rangle + {\overline{\alpha }}_{2}\langle y,{v}_{2}\rangle = 0 \]\n\nIf \( x \in Y \cap {Y}^{ \bot } \), then \( \langle x, x\rangle = 0 \), so \( x = 0 \) . If \( x \) is any element of \( X \), let \( y \) be the element of \( Y \) closest to \( x \) . By the preceding theorem, \( x - y \bot Y \) . Hence the equation \( x = y + \left( {x - y}\right) \) shows that \( X \subset Y + {Y}^{ \bot } \) .
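The decomposition can be illustrated in a concrete finite-dimensional setting. A small sketch (the subspace and data are my own example): with \( Y \) spanned by two orthonormal vectors of \( {\mathbb{R}}^{3} \), writing \( x = y + \left( {x - y}\right) \) puts \( y \in Y \) and \( x - y \in {Y}^{ \bot } \).

```python
# Orthogonal decomposition x = y + w in R^3, with Y spanned by the
# orthonormal vectors u1, u2 (an assumed example subspace).
def dot(a, b):
    return sum(s * t for s, t in zip(a, b))

u1 = [1.0, 0.0, 0.0]
u2 = [0.0, 0.6, 0.8]                  # unit vector orthogonal to u1
x = [2.0, 1.0, 1.0]

# y = projection of x onto Y; w = the component in Y-perp
y = [dot(x, u1) * a + dot(x, u2) * b for a, b in zip(u1, u2)]
w = [s - t for s, t in zip(x, y)]

assert abs(dot(w, u1)) < 1e-12 and abs(dot(w, u2)) < 1e-12  # w is in Y-perp
assert all(abs(s - (a + b)) < 1e-12 for s, a, b in zip(x, y, w))
```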
|
Yes
|
Theorem 5. If the Parallelogram Law is valid in a normed linear space, then that space is an inner-product space. In other words, an inner product can be defined in such a way that \( \langle x, x\rangle = \parallel x{\parallel }^{2} \) .
|
Proof. We define the inner product by the equation\n\n\[ 4\langle x, y\rangle = \parallel x + y{\parallel }^{2} - \parallel x - y{\parallel }^{2} + i\parallel x + {iy}{\parallel }^{2} - i\parallel x - {iy}{\parallel }^{2} \]\n\nFrom the definition, it follows that\n\n\[ 4\mathcal{R}\langle x, y\rangle = \parallel x + y{\parallel }^{2} - \parallel x - y{\parallel }^{2} \]\n\nFrom this equation and the Parallelogram Law we obtain\n\n\[ 4\mathcal{R}\langle u + v, y\rangle = \parallel u + v + y{\parallel }^{2} - \parallel u + v - y{\parallel }^{2} \]\n\n\[ = \left\{ {2\parallel u + y{\parallel }^{2} + 2\parallel v{\parallel }^{2} - \parallel u + y - v{\parallel }^{2}}\right\} \]\n\n\[ - \left\{ {2\parallel u{\parallel }^{2} + 2\parallel v - y{\parallel }^{2} - \parallel u - v + y{\parallel }^{2}}\right\} \]\n\n\[ = \{ {\left\| u + y\right\| }^{2} - {\left\| u - y\right\| }^{2}\} + \{ {\left\| v + y\right\| }^{2} - {\left\| v - y\right\| }^{2}\} \]\n\n\[ \; + \{ {\begin{Vmatrix}u + y\end{Vmatrix}}^{2} + {\begin{Vmatrix}u - y\end{Vmatrix}}^{2} - 2{\begin{Vmatrix}u\end{Vmatrix}}^{2} - 2{\begin{Vmatrix}y\end{Vmatrix}}^{2}\} \]\n\n\[ \; + \{ 2{\begin{Vmatrix}y\end{Vmatrix}}^{2} + 2{\begin{Vmatrix}v\end{Vmatrix}}^{2} - {\begin{Vmatrix}v + y\end{Vmatrix}}^{2} - {\begin{Vmatrix}v - y\end{Vmatrix}}^{2}\} \]\n\n\[ = 4\mathcal{R}\langle u, y\rangle + 4\mathcal{R}\langle v, y\rangle \]\n\nThis proves that \( \mathcal{R}\langle u + v, y\rangle = \mathcal{R}\langle u, y\rangle + \mathcal{R}\langle v, y\rangle \) . Now by putting \( {iy} \) in place of \( y \) in the definition of \( \langle x, y\rangle \) we obtain \( \langle x,{iy}\rangle = - i\langle x, y\rangle \) . 
Hence the imaginary parts of these complex numbers satisfy\n\n\[ \mathcal{I}\langle u + v, y\rangle = - \mathcal{R}i\langle u + v, y\rangle = \mathcal{R}\langle u + v,{iy}\rangle \]\n\n\[ = \mathcal{R}\langle u,{iy}\rangle + \mathcal{R}\langle v,{iy}\rangle = - \mathcal{R}i\langle u, y\rangle - \mathcal{R}i\langle v, y\rangle \]\n\n\[ = \mathcal{I}\langle u, y\rangle + \mathcal{I}\langle v, y\rangle \]\n\n(In this equation, \( \mathcal{I} \) denotes the imaginary part of a complex number.)
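The defining (polarization) identity can be sanity-checked numerically. The sketch below (vectors chosen arbitrarily by me) uses the convention \( \langle x, y\rangle = \sum {x}_{i}{\bar{y}}_{i} \) on \( {\mathbb{C}}^{3} \), which is conjugate-linear in the second argument, as in the text.

```python
# Check of 4<x,y> = ||x+y||^2 - ||x-y||^2 + i||x+iy||^2 - i||x-iy||^2
# in C^3, with <x,y> = sum x_i * conj(y_i).
def inner(x, y):
    return sum(a * b.conjugate() for a, b in zip(x, y))

def nsq(x):  # squared norm ||x||^2
    return inner(x, x).real

add = lambda a, b: [s + t for s, t in zip(a, b)]
sub = lambda a, b: [s - t for s, t in zip(a, b)]
itimes = lambda a: [1j * s for s in a]

x = [1 + 2j, -1j, 0.5 + 0j]
y = [2 - 1j, 3 + 0j, 1 + 1j]

lhs = 4 * inner(x, y)
rhs = (nsq(add(x, y)) - nsq(sub(x, y))
       + 1j * nsq(add(x, itimes(y))) - 1j * nsq(sub(x, itimes(y))))
assert abs(lhs - rhs) < 1e-12
```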
|
Yes
|
Theorem 1. Pythagorean Law. If \( \left\{ {{x}_{1},{x}_{2},\ldots ,{x}_{n}}\right\} \) is a finite orthogonal set of \( n \) distinct elements in an inner-product space, then\n\n\[ \n{\begin{Vmatrix}\mathop{\sum }\limits_{{j = 1}}^{n}{x}_{j}\end{Vmatrix}}^{2} = \mathop{\sum }\limits_{{j = 1}}^{n}{\begin{Vmatrix}{x}_{j}\end{Vmatrix}}^{2} \n\]
|
Proof. By our assumptions, \( {x}_{i} \neq {x}_{j} \) when \( i \neq j \), so that \( \left\langle {{x}_{j},{x}_{i}}\right\rangle = 0 \) for \( i \neq j \) ; consequently,\n\n\[ \n{\begin{Vmatrix}\mathop{\sum }\limits_{{j = 1}}^{n}{x}_{j}\end{Vmatrix}}^{2} = \left\langle {\mathop{\sum }\limits_{{j = 1}}^{n}{x}_{j},\mathop{\sum }\limits_{{i = 1}}^{n}{x}_{i}}\right\rangle = \mathop{\sum }\limits_{{j = 1}}^{n}\mathop{\sum }\limits_{{i = 1}}^{n}\left\langle {{x}_{j},{x}_{i}}\right\rangle = \mathop{\sum }\limits_{{j = 1}}^{n}\left\langle {{x}_{j},{x}_{j}}\right\rangle = \mathop{\sum }\limits_{{j = 1}}^{n}{\begin{Vmatrix}{x}_{j}\end{Vmatrix}}^{2} \n\]
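A quick numeric instance (example vectors mine): for three mutually orthogonal vectors in \( {\mathbb{R}}^{3} \), the squared norm of the sum equals the sum of the squared norms.

```python
# Pythagorean Law for three mutually orthogonal vectors in R^3:
# here ||x1 + x2 + x3||^2 = 169 = 9 + 16 + 144.
xs = [[3.0, 0.0, 0.0], [0.0, 4.0, 0.0], [0.0, 0.0, 12.0]]
s = [sum(col) for col in zip(*xs)]   # the vector x1 + x2 + x3

nsq = lambda v: sum(t * t for t in v)
assert abs(nsq(s) - sum(nsq(v) for v in xs)) < 1e-12
assert abs(nsq(s) - 169.0) < 1e-12
```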
|
Yes
|
Theorem 2. The General Pythagorean Law. Let \( \left\lbrack {x}_{j}\right\rbrack \) be an orthogonal sequence in a Hilbert space. The series \( \sum {x}_{j} \) converges if and only if \( \sum {\begin{Vmatrix}{x}_{j}\end{Vmatrix}}^{2} < \infty \) . If \( \sum {\begin{Vmatrix}{x}_{j}\end{Vmatrix}}^{2} = \lambda < \infty \), then \( {\begin{Vmatrix}\sum {x}_{j}\end{Vmatrix}}^{2} = \lambda \) , and the sum \( \sum {x}_{j} \) is independent of the ordering of the terms.
|
Proof. Put \( {S}_{n} = \mathop{\sum }\limits_{1}^{n}{x}_{j} \) and \( {s}_{n} = \mathop{\sum }\limits_{1}^{n}{\begin{Vmatrix}{x}_{j}\end{Vmatrix}}^{2} \) . \n\nBy the finite version of the Pythagorean Law, we have (for \( m > n \) ) \n\n\[ \n{\begin{Vmatrix}{S}_{m} - {S}_{n}\end{Vmatrix}}^{2} = {\begin{Vmatrix}\mathop{\sum }\limits_{{n + 1}}^{m}{x}_{j}\end{Vmatrix}}^{2} = \mathop{\sum }\limits_{{n + 1}}^{m}{\begin{Vmatrix}{x}_{j}\end{Vmatrix}}^{2} = \left| {{s}_{m} - {s}_{n}}\right| \n\] \n\nHence \( \left\lbrack {S}_{n}\right\rbrack \) is a Cauchy sequence in \( X \) if and only if \( \left\lbrack {s}_{n}\right\rbrack \) is a Cauchy sequence in \( \mathbb{R} \) . This establishes the first assertion in the theorem. \n\nNow assume that \( \lambda < \infty \) . By the Pythagorean Law, \( {\begin{Vmatrix}{S}_{n}\end{Vmatrix}}^{2} = {s}_{n} \), and hence in the limit we have \( {\begin{Vmatrix}\sum {x}_{j}\end{Vmatrix}}^{2} = \lambda \) . Let \( u \) be a rearrangement of the original series, say \( u = \sum {x}_{{k}_{j}} \) . Let \( {U}_{n} = \mathop{\sum }\limits_{1}^{n}{x}_{{k}_{j}} \) . By the theory of absolutely convergent series in \( \mathbb{R} \), we have \( \sum {\begin{Vmatrix}{x}_{{k}_{j}}\end{Vmatrix}}^{2} = \lambda \) . Hence, by our previous analysis, \( {U}_{n} \rightarrow u \) and \( \parallel u{\parallel }^{2} = \lambda \) . Now compute \n\n\[ \n\left\langle {{U}_{n},{S}_{m}}\right\rangle = \left\langle {\mathop{\sum }\limits_{{j = 1}}^{n}{x}_{{k}_{j}},\mathop{\sum }\limits_{{i = 1}}^{m}{x}_{i}}\right\rangle = \mathop{\sum }\limits_{{j = 1}}^{n}\mathop{\sum }\limits_{{i = 1}}^{m}{\begin{Vmatrix}{x}_{i}\end{Vmatrix}}^{2}{\delta }_{i{k}_{j}} \n\] \n\nWe let \( n \rightarrow \infty \) to get \( \left\langle {u,{S}_{m}}\right\rangle = \mathop{\sum }\limits_{{i = 1}}^{m}{\begin{Vmatrix}{x}_{i}\end{Vmatrix}}^{2} \) . Then let \( m \rightarrow \infty \) to get \( \langle u, x\rangle = \lambda \) , where \( x = \lim {S}_{m} \) . 
It follows that \( x = u \), because \n\n\[ \n{\begin{Vmatrix}x - u\end{Vmatrix}}^{2} = {\begin{Vmatrix}x\end{Vmatrix}}^{2} - 2\mathcal{R}\langle x, u\rangle + {\begin{Vmatrix}u\end{Vmatrix}}^{2} = \lambda - {2\lambda } + \lambda = 0 \n\]
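A truncated finite-dimensional sketch of the theorem (dimensions and coefficients are my own choices): take orthogonal terms \( {x}_{j} = \left( {1/j}\right) {e}_{j} \) in \( {\mathbb{R}}^{n} \) standing in for \( {\ell }^{2} \); the sum has squared norm \( \sum 1/{j}^{2} \), and rearranging the terms changes nothing.

```python
# Orthogonal terms x_j = (1/j) e_j summed in order and in shuffled order.
import random

n = 200
S = [0.0] * n
for j in range(1, n + 1):
    S[j - 1] += 1.0 / j              # add the term x_j = (1/j) e_j

lam = sum((1.0 / j) ** 2 for j in range(1, n + 1))
assert abs(sum(t * t for t in S) - lam) < 1e-12   # ||sum x_j||^2 = lam

# summing the terms in a shuffled order gives the same vector
order = list(range(1, n + 1))
random.seed(1)
random.shuffle(order)
U = [0.0] * n
for j in order:
    U[j - 1] += 1.0 / j
assert all(abs(a - b) < 1e-12 for a, b in zip(S, U))
```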
|
Yes
|
Theorem 3. If \( \left\lbrack {{y}_{1},{y}_{2},\ldots ,{y}_{n}}\right\rbrack \) is an orthonormal set in an inner-product space, and if \( Y \) is the linear span of \( \left\{ {{y}_{i} : 1 \leq i \leq n}\right\} \), then for any \( x \), the point in \( Y \) closest to \( x \) is \( \mathop{\sum }\limits_{{i = 1}}^{n}\left\langle {x,{y}_{i}}\right\rangle {y}_{i} \) .
|
Proof. Let \( y = \mathop{\sum }\limits_{{i = 1}}^{n}\left\langle {x,{y}_{i}}\right\rangle {y}_{i} \) . By Theorem 3 in Section 2.1, page 65, it suffices to verify that \( x - y \bot Y \) . For this it is enough to verify that \( x - y \) is orthogonal to each basis vector \( {y}_{k} \) . We have\n\n\[ \langle x - y,{y}_{k}\rangle = \langle x,{y}_{k}\rangle - \left\langle {\mathop{\sum }\limits_{i}\langle x,{y}_{i}\rangle {y}_{i},{y}_{k}}\right\rangle = \left\langle {x,{y}_{k}}\right\rangle - \mathop{\sum }\limits_{i}\left\langle {x,{y}_{i}}\right\rangle \left\langle {{y}_{i},{y}_{k}}\right\rangle \]\n\n\[ = \left\langle {x,{y}_{k}}\right\rangle - \mathop{\sum }\limits_{i}\left\langle {x,{y}_{i}}\right\rangle {\delta }_{ik} = \left\langle {x,{y}_{k}}\right\rangle - \left\langle {x,{y}_{k}}\right\rangle = 0. \]
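The closest-point formula can be verified empirically. In the sketch below (example data mine), the Fourier-type sum \( \left\langle {x,{y}_{1}}\right\rangle {y}_{1} + \left\langle {x,{y}_{2}}\right\rangle {y}_{2} \) beats a sample of other points of \( Y = \operatorname{span}\left\{ {{y}_{1},{y}_{2}}\right\} \) in \( {\mathbb{R}}^{4} \).

```python
# The point of Y = span{y1, y2} closest to x is sum <x,yi> yi.
import math
import random

def dot(a, b):
    return sum(s * t for s, t in zip(a, b))

r = 1.0 / math.sqrt(2.0)
y1 = [1.0, 0.0, 0.0, 0.0]
y2 = [0.0, r, r, 0.0]              # orthonormal with y1
x = [1.0, 2.0, 0.0, 5.0]

y = [dot(x, y1) * a + dot(x, y2) * b for a, b in zip(y1, y2)]
res = [s - t for s, t in zip(x, y)]
best = math.sqrt(dot(res, res))    # = sqrt(27) for this data

random.seed(2)
for _ in range(500):
    a, b = random.uniform(-3, 3), random.uniform(-3, 3)
    u = [a * p + b * q for p, q in zip(y1, y2)]   # arbitrary point of Y
    d = [s - t for s, t in zip(x, u)]
    assert math.sqrt(dot(d, d)) >= best - 1e-12
```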
|
Yes
|
Theorem 4. Bessel’s Inequality. If \( \left\lbrack {{u}_{i} : i \in I}\right\rbrack \) is an orthonormal system in an inner-product space, then for every \( x \) ,\n\n\[ \sum {\left| \left\langle x,{u}_{i}\right\rangle \right| }^{2} \leq \parallel x{\parallel }^{2} \]
|
Proof. For \( j \) ranging over a finite subset \( J \) of \( I \), let \( y = \sum \left\langle {x,{u}_{j}}\right\rangle {u}_{j} \) . This vector \( y \) is the orthogonal projection of \( x \) onto the subspace \( U = \operatorname{span}\left\lbrack {{u}_{j} : j \in J}\right\rbrack \) . By Theorem \( 3, x - y \bot U \) . Hence by the Pythagorean Law\n\n\[ {\begin{Vmatrix}x\end{Vmatrix}}^{2} = {\begin{Vmatrix}\left( x - y\right) + y\end{Vmatrix}}^{2} = {\begin{Vmatrix}x - y\end{Vmatrix}}^{2} + {\begin{Vmatrix}y\end{Vmatrix}}^{2} \geq {\begin{Vmatrix}y\end{Vmatrix}}^{2} = \sum {\begin{Vmatrix}\left\langle x,{u}_{j}\right\rangle {u}_{j}\end{Vmatrix}}^{2} = \sum {\left| \left\langle x,{u}_{j}\right\rangle \right| }^{2} \]\n\nThis proves our result for any finite set of indices. The result for \( I \) itself now follows from Problem 4.
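A numeric instance of the inequality (example vectors mine): for a two-element orthonormal system in \( {\mathbb{R}}^{3} \) that is not a full basis, the coefficient sum stays strictly below \( \parallel x{\parallel }^{2} \).

```python
# Bessel's inequality: sum of squared Fourier coefficients <= ||x||^2.
def dot(a, b):
    return sum(s * t for s, t in zip(a, b))

u1 = [1.0, 0.0, 0.0]
u2 = [0.0, 0.6, 0.8]          # orthonormal with u1, but not a full basis
x = [1.0, 2.0, 3.0]

bessel = dot(x, u1) ** 2 + dot(x, u2) ** 2   # 1 + 3.6^2 = 13.96
assert bessel <= dot(x, x) + 1e-12           # ||x||^2 = 14
```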
|
Yes
|
Corollary 3. If \( \left\lbrack {{u}_{i} : i \in I}\right\rbrack \) is an orthonormal system, then for each \( x \) at most a countable number of the Fourier coefficients \( \left\langle {x,{u}_{i}}\right\rangle \) are nonzero.
|
Proof. Fixing \( x \), put \( {J}_{n} = \left\{ {i \in I : \left| \left\langle {x,{u}_{i}}\right\rangle \right| > 1/n}\right\} \) . By the Bessel Inequality,\n\n\[ \parallel x{\parallel }^{2} \geq \mathop{\sum }\limits_{{j \in {J}_{n}}}{\left| \left\langle x,{u}_{j}\right\rangle \right| }^{2} \geq \mathop{\sum }\limits_{{j \in {J}_{n}}}1/{n}^{2} = \left( {\# {J}_{n}}\right) /{n}^{2} \]\n\nHence \( {J}_{n} \) is a finite set. Since\n\n\[ \left\{ {i : \left\langle {x,{u}_{i}}\right\rangle \neq 0}\right\} = \mathop{\bigcup }\limits_{{n = 1}}^{\infty }{J}_{n} \]\n\nwe see that this set must be countable, it being a union of countably many finite sets.
|
Yes
|
Theorem 5. Every nontrivial inner-product space has an orthonormal basis.
|
Proof. Call the space \( X \) . Since it is not the zero space, it contains a nonzero vector \( x \) . The set consisting solely of \( x/\parallel x\parallel \) is orthonormal. Now order the family of all orthonormal subsets of \( X \) in the natural way (by inclusion). In order to use Zorn's Lemma, one must verify that each chain of orthonormal sets has an upper bound. Let \( \mathcal{C} \) be such a chain, and put \( {A}^{ * } = \bigcup \{ A : A \in \mathcal{C}\} \) . It is obvious that \( {A}^{ * } \) is an upper bound for \( \mathcal{C} \), but is \( {A}^{ * } \) orthonormal? Take \( x \) and \( y \) in \( {A}^{ * } \) such that \( x \neq y \) . Say \( x \in {A}_{1} \in \mathcal{C} \) and \( y \in {A}_{2} \in \mathcal{C} \) . Since \( \mathcal{C} \) is a chain, either \( {A}_{1} \subset {A}_{2} \) or \( {A}_{2} \subset {A}_{1} \) . Suppose the latter. Then \( x, y \in {A}_{1} \) . Since \( {A}_{1} \) is orthonormal, \( \langle x, y\rangle = 0 \) . Obviously, \( \parallel x\parallel = 1 \) . Hence \( {A}^{ * } \) is orthonormal. By Zorn's Lemma, the family of orthonormal sets therefore contains a maximal element, and a maximal orthonormal set is, by definition, an orthonormal basis.
|
Yes
|
Theorem 6. The Orthonormal Basis Theorem. For an orthonormal family \( \left\lbrack {u}_{i}\right\rbrack \) (not necessarily finite or countable) in a Hilbert space \( X \), the following properties are equivalent:\n\na. \( \left\lbrack {u}_{i}\right\rbrack \) is an orthonormal basis for \( X \).\n\nb. If \( x \in X \) and \( x \bot {u}_{i} \) for all \( i \), then \( x = 0 \).\n\nc. For each \( x \in X, x = \sum \left\langle {x,{u}_{i}}\right\rangle {u}_{i} \).\n\nd. For each \( x \) and \( y \) in \( X,\langle x, y\rangle = \sum \left\langle {x,{u}_{i}}\right\rangle \overline{\left\langle y,{u}_{i}\right\rangle } \).\n\ne. For each \( x \) in \( X,\parallel x{\parallel }^{2} = \sum {\left| \left\langle x,{u}_{i}\right\rangle \right| }^{2} \) . (Parseval Identity)
|
Proof. To prove that a implies \( \mathbf{b} \), suppose that \( \mathbf{b} \) is false. Let \( x \neq 0 \) and \( x \bot {u}_{i} \) for all \( i \). Adjoin \( x/\parallel x\parallel \) to the family \( \left\lbrack {u}_{i}\right\rbrack \) to get a larger orthonormal family. Thus the original family is not maximal and is not a basis.\n\nTo prove that \( \mathbf{b} \) implies \( \mathbf{c} \), assume \( \mathbf{b} \) and let \( x \) be any point in \( X \). Let \( y = \sum \left\langle {x,{u}_{i}}\right\rangle {u}_{i} \). By Bessel’s inequality (Theorem 4), we have\n\n\[ \sum {\begin{Vmatrix}\left\langle x,{u}_{i}\right\rangle {u}_{i}\end{Vmatrix}}^{2} = \sum {\left| \left\langle x,{u}_{i}\right\rangle \right| }^{2} \leq \parallel x{\parallel }^{2} \]\n\nBy Theorem 2, the series defining \( y \) converges. (Here the completeness of \( X \) is needed.) Then straightforward calculation (as in the proof of Theorem 3) shows that \( x - y \bot {u}_{i} \) for all \( i \). By \( \mathbf{b}, x - y = 0 \).\n\nTo prove that \( \mathbf{c} \) implies \( \mathbf{d} \), assume \( \mathbf{c} \) and write\n\n\[ x = \sum \left\langle {x,{u}_{i}}\right\rangle {u}_{i}\;y = \sum \left\langle {y,{u}_{i}}\right\rangle {u}_{i} \]\n\nStraightforward calculation then yields \( \langle x, y\rangle = \sum \left\langle {x,{u}_{i}}\right\rangle \overline{\left\langle y,{u}_{i}\right\rangle } \).\n\nTo prove that \( \mathbf{d} \) implies \( \mathbf{e} \), assume \( \mathbf{d} \) and let \( y = x \) in \( \mathbf{d} \). The result is the assertion in \( \mathbf{e} \).\n\nTo prove that e implies \( \mathbf{a} \), suppose that \( \mathbf{a} \) is false. Then \( \left\lbrack {u}_{i}\right\rbrack \) is not a maximal orthonormal set. Adjoin a new element, \( x \), to obtain a larger orthonormal set. Then \( 1 = \parallel x{\parallel }^{2} \neq \sum {\left| \left\langle x,{u}_{i}\right\rangle \right| }^{2} = 0 \), showing that \( \mathbf{e} \) is false.
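Properties \( \mathbf{c} \), \( \mathbf{d} \), and \( \mathbf{e} \) can be checked directly in a small case. The sketch below (basis and data are my own example) uses the orthonormal basis \( {u}_{1} = \left( {1,1}\right) /\sqrt{2},\;{u}_{2} = \left( {1, - 1}\right) /\sqrt{2} \) of \( {\mathbb{C}}^{2} \).

```python
# Expansion (c), the inner-product formula (d), and Parseval (e) for an
# orthonormal basis of C^2, with <x,y> = sum x_i conj(y_i).
import math

def inner(a, b):
    return sum(s * t.conjugate() for s, t in zip(a, b))

r = 1.0 / math.sqrt(2.0)
u1, u2 = [r + 0j, r + 0j], [r + 0j, -r + 0j]
x, y = [3 + 1j, 1 - 2j], [-1 + 0j, 2 + 1j]

cx = [inner(x, u) for u in (u1, u2)]
cy = [inner(y, u) for u in (u1, u2)]

# c: x = sum <x,ui> ui
rec = [cx[0] * u1[k] + cx[1] * u2[k] for k in range(2)]
assert all(abs(a - b) < 1e-12 for a, b in zip(rec, x))
# d: <x,y> = sum <x,ui> conj(<y,ui>)
assert abs(inner(x, y) - sum(a * b.conjugate() for a, b in zip(cx, cy))) < 1e-12
# e: Parseval identity
assert abs(inner(x, x).real - sum(abs(c) ** 2 for c in cx)) < 1e-12
```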
|
Yes
|
One orthonormal basis in \( {\ell }^{2} \) is obtained by defining \( {u}_{n}\left( j\right) = {\delta }_{nj} \) . Thus\n\n\[ \n{u}_{1} = \left\lbrack {1,0,0,\ldots }\right\rbrack ,\;{u}_{2} = \left\lbrack {0,1,0,\ldots }\right\rbrack ,\text{ etc. } \n\]
|
To see that this is actually an orthonormal base, use the preceding theorem, in particular the equivalence of \( \mathbf{a} \) and \( \mathbf{b} \) . Suppose \( x \in {\ell }^{2} \) and \( \left\langle {x,{u}_{n}}\right\rangle = 0 \) for all \( n \) . Then \( x\left( n\right) = 0 \) for all \( n \), and \( x = 0 \) .
|
No
|
An orthonormal basis for \( {L}^{2}\left\lbrack {0,1}\right\rbrack \) is provided by the functions \( {u}_{n}\left( t\right) = {e}^{2\pi int} \), where \( n \in \mathbb{Z} \). One verifies the orthonormality by computing the appropriate integrals. To show that \( \left\lbrack {u}_{n}\right\rbrack \) is a base, we use Part \( \mathbf{b} \) of Theorem 6 . Let \( x \in {L}^{2}\left\lbrack {0,1}\right\rbrack \) and \( x \neq 0 \). It is to be shown that \( \left\langle {x,{u}_{n}}\right\rangle \neq 0 \) for some \( n \). Since the set of continuous functions is dense in \( {L}^{2} \), there is a continuous \( y \) such that \( \parallel x - y\parallel < \parallel x\parallel /5 \). Then \( \parallel y\parallel \geq \parallel x\parallel - \parallel x - y\parallel > \frac{4}{5}\parallel x\parallel \). By the Weierstrass Approximation Theorem, the linear span of \( \left\lbrack {u}_{n}\right\rbrack \) is dense in the space \( C\left\lbrack {0,1}\right\rbrack \), furnished with the supremum norm. Select a linear combination \( p \) of \( \left\lbrack {u}_{n}\right\rbrack \) such that \( \parallel p - y{\parallel }_{\infty } < \parallel x\parallel /5 \). Then \( \parallel p - y\parallel < \parallel x\parallel /5 \). Hence \( \parallel p\parallel > \parallel y\parallel - \parallel y - p\parallel > \frac{3}{5} \parallel x \parallel \). Then
|
\[ \left| {\langle x, p\rangle }\right| \geq \left| {\langle p, p\rangle }\right| - \left| {\langle y - p, p\rangle }\right| - \left| {\langle x - y, p\rangle }\right| \] \[ \geq \parallel p{\parallel }^{2} - \parallel y - p\parallel \parallel p\parallel - \parallel x - y\parallel \parallel p\parallel > 0 \] Thus it is not possible to have \( \left\langle {x,{u}_{n}}\right\rangle = 0 \) for all \( n \).
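The orthonormality computations mentioned above can be sketched with a simple midpoint-rule quadrature (the quadrature scheme and tolerances are my own choices; this checks the integrals, not the density argument):

```python
# Orthonormality of u_n(t) = e^{2 pi i n t} on [0,1] under the L^2 inner
# product, approximated by a midpoint rule.
import cmath

def ip(f, g, N=2000):
    h = 1.0 / N
    return h * sum(f((k + 0.5) * h) * g((k + 0.5) * h).conjugate()
                   for k in range(N))

def u(n):
    return lambda t: cmath.exp(2j * cmath.pi * n * t)

assert abs(ip(u(3), u(3)) - 1.0) < 1e-9   # unit norm
assert abs(ip(u(3), u(5))) < 1e-9         # orthogonality
assert abs(ip(u(-2), u(4))) < 1e-9
```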
|
Yes
|
Theorem 7. The Orthogonal Projection Theorem. The orthogonal projection \( P \) of a Hilbert space \( X \) onto a closed subspace \( Y \) has these properties:\na. It is well-defined; i.e., \( {Px} \) exists and is unique in \( Y \).\nb. It is surjective, i.e., \( P\left( X\right) = Y \).\nc. It is linear.\nd. If \( Y \) is not 0 (the zero subspace), then \( \parallel P\parallel = 1 \).\ne. \( x - {Px} \bot Y \) for all \( x \).\nf. \( P \) is Hermitian; i.e., \( \langle {Px}, w\rangle = \langle x,{Pw}\rangle \) for all \( x \) and \( w \).\ng. If \( \left\lbrack {y}_{i}\right\rbrack \) is an orthonormal basis for \( Y \), then \( {Px} = \sum \left\langle {x,{y}_{i}}\right\rangle {y}_{i} \).\nh. \( P \) is idempotent; i.e., \( {P}^{2} = P \).\ni. \( {Py} = y \) for all \( y \in Y \) . Thus \( P \mid Y = {I}_{Y} \).\nj. \( \parallel x{\parallel }^{2} = \parallel {Px}{\parallel }^{2} + \parallel x - {Px}{\parallel }^{2} \).
|
Proof. This is left to the problems.
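Although the proof is deferred to the problems, several of the listed properties can be spot-checked numerically using the formula in part g (example subspace and data are mine):

```python
# Spot-check of properties e, f, h, j for the projection onto
# Y = span{u1, u2} in R^3, computed via Px = <x,u1> u1 + <x,u2> u2.
def dot(a, b):
    return sum(s * t for s, t in zip(a, b))

u1 = [1.0, 0.0, 0.0]
u2 = [0.0, 0.6, 0.8]

def P(v):  # part g
    return [dot(v, u1) * a + dot(v, u2) * b for a, b in zip(u1, u2)]

x, w = [1.0, -2.0, 2.0], [0.5, 1.0, -1.0]
Px, Pw = P(x), P(w)
res = [s - t for s, t in zip(x, Px)]

assert abs(dot(res, u1)) < 1e-12 and abs(dot(res, u2)) < 1e-12  # e
assert abs(dot(Px, w) - dot(x, Pw)) < 1e-12                     # f: Hermitian
assert all(abs(a - b) < 1e-12 for a, b in zip(P(Px), Px))       # h: P^2 = P
assert abs(dot(x, x) - dot(Px, Px) - dot(res, res)) < 1e-12     # j
```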
|
No
|
Theorem 8. The Gram-Schmidt Construction. Let\n\n\( \left\lbrack {{v}_{1},{v}_{2},{v}_{3},\ldots }\right\rbrack \) be a linearly independent sequence in an inner product\n\nspace. Having set \( {u}_{1} = {v}_{1}/\begin{Vmatrix}{v}_{1}\end{Vmatrix} \), define recursively\n\n\[ \n{u}_{n} = \frac{{v}_{n} - \mathop{\sum }\limits_{{i = 1}}^{{n - 1}}\left\langle {{v}_{n},{u}_{i}}\right\rangle {u}_{i}}{\begin{Vmatrix}{{v}_{n} - \mathop{\sum }\limits_{{i = 1}}^{{n - 1}}\left\langle {{v}_{n},{u}_{i}}\right\rangle {u}_{i}}\end{Vmatrix}}\;n = 2,3,\ldots \n\]\n\nThen \( \left\lbrack {{u}_{1},{u}_{2},{u}_{3},\ldots }\right\rbrack \) is an orthonormal sequence, and for each \( n \) ,\n\n\( \operatorname{span}\left\{ {{u}_{1},{u}_{2},\ldots ,{u}_{n}}\right\} = \operatorname{span}\left\{ {{v}_{1},{v}_{2},\ldots ,{v}_{n}}\right\} \) .
|
Notice that in the equation describing this algorithm there is a normalization process: the dividing of a vector by its norm to produce a new vector pointing in the same direction but having unit length. The other action being carried out is the subtraction from the vector \( {v}_{n} \) of its projection on the linear span of the orthonormal set presently available, \( {u}_{1},{u}_{2},\ldots ,{u}_{n - 1} \) . This action is obeying the equation in Theorem 3, and it produces a vector that is orthogonal to the linear span just described. These remarks should make the formulas easy to derive or remember.
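The recursion translates directly into a few lines of code. The sketch below (helper names mine) applies it to three vectors in \( {\mathbb{R}}^{3} \) and checks that the output is orthonormal.

```python
# Direct transcription of the Gram-Schmidt recursion of Theorem 8,
# followed by an orthonormality check.
import math

def dot(a, b):
    return sum(s * t for s, t in zip(a, b))

def gram_schmidt(vs):
    us = []
    for v in vs:
        # subtract the projection of v onto span{u_1, ..., u_{n-1}}
        w = [t - sum(dot(v, u) * u[k] for u in us) for k, t in enumerate(v)]
        nrm = math.sqrt(dot(w, w))   # nonzero, by linear independence
        us.append([t / nrm for t in w])
    return us

us = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
for i, u in enumerate(us):
    for j, v in enumerate(us):
        assert abs(dot(u, v) - (1.0 if i == j else 0.0)) < 1e-12
```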
|
Yes
|
A nonseparable inner-product space cannot have a countable orthonormal base.
|
For an example, we consider the uncountable family of functions \( {u}_{\lambda }\left( t\right) = {e}^{i\lambda t} \), where \( t \in \mathbb{R} \) and \( \lambda \in \mathbb{R} \) . This family of functions is linearly independent (Problem 5), and is therefore a Hamel basis for a linear space \( X \) . We introduce an inner product in \( X \) by defining the inner product of two elements in the Hamel base:\n\n\[ \left\langle {{u}_{\lambda },{u}_{\sigma }}\right\rangle = {\delta }_{\lambda \sigma } = \left\{ \begin{array}{ll} 1 & \lambda = \sigma \\ 0 & \lambda \neq \sigma \end{array}\right. \]\n\nThis is the value that arises in the following integration:\n\n\[ \mathop{\lim }\limits_{{T \rightarrow \infty }}\frac{1}{2T}{\int }_{-T}^{T}{u}_{\lambda }\left( t\right) \overline{{u}_{\sigma }\left( t\right) }{dt} = \mathop{\lim }\limits_{{T \rightarrow \infty }}\frac{1}{2T}{\int }_{-T}^{T}{e}^{i\left( {\lambda - \sigma }\right) t}{dt} \]\n\nIf \( \lambda = \sigma \), this calculation produces the result 1 . If \( \lambda \neq \sigma \), we get 0 . Elements of \( X \) have the property of almost periodicity. (See Problem 1.)
|
No
|
An important example of an orthonormal basis is provided by the Legendre polynomials. We consider the space \( C\left\lbrack {-1,1}\right\rbrack \) and use the simple inner product\n\n\[ \langle f, g\rangle = {\int }_{-1}^{1}f\left( t\right) g\left( t\right) {dt} \]\n\nNow apply the Gram-Schmidt process to the monomials \( t \mapsto 1, t,{t}^{2},{t}^{3},\ldots \) The un-normalized polynomials that result can be described recursively, using the classical notation \( {P}_{n} \) :\n\n\[ {P}_{0}\left( t\right) = 1\;{P}_{1}\left( t\right) = t \]\n\n\[ {P}_{n}\left( t\right) = \frac{{2n} - 1}{n}t{P}_{n - 1}\left( t\right) - \frac{n - 1}{n}{P}_{n - 2}\left( t\right) \;\left( {n = 2,3,\ldots }\right) \]
|
The orthonormal system is, of course, \( {p}_{n} = {P}_{n}/\begin{Vmatrix}{P}_{n}\end{Vmatrix} \). The completion of the space \( C\left\lbrack {-1,1}\right\rbrack \) with respect to the norm induced by the inner product is the space \( {L}^{2}\left\lbrack {-1,1}\right\rbrack \). Every function \( f \) in this space is represented in the \( {L}^{2} \)-sense by the series\n\n\[ f = \mathop{\sum }\limits_{{k = 0}}^{\infty }\left\langle {f,{p}_{k}}\right\rangle {p}_{k} \]
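The recursion above is easy to run, and the orthogonality can be sanity-checked by quadrature. In the sketch below (quadrature scheme and tolerances mine), we also check \( {\begin{Vmatrix}{P}_{2}\end{Vmatrix}}^{2} = 2/5 \), an instance of the standard fact \( {\begin{Vmatrix}{P}_{n}\end{Vmatrix}}^{2} = 2/\left( {{2n} + 1}\right) \).

```python
# The three-term recursion for the Legendre polynomials P_n, with a
# midpoint-rule check of orthogonality on [-1, 1].
def P(n, t):
    if n == 0:
        return 1.0
    if n == 1:
        return t
    return ((2 * n - 1) / n) * t * P(n - 1, t) - ((n - 1) / n) * P(n - 2, t)

def ip(f, g, N=4000):  # midpoint rule for the inner product on [-1, 1]
    h = 2.0 / N
    return h * sum(f(-1.0 + (k + 0.5) * h) * g(-1.0 + (k + 0.5) * h)
                   for k in range(N))

assert abs(ip(lambda t: P(2, t), lambda t: P(3, t))) < 1e-6        # orthogonal
assert abs(ip(lambda t: P(2, t), lambda t: P(2, t)) - 2.0 / 5.0) < 1e-4
```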
|
Yes
|
Theorem 2. Existence of Adjoints. If \( A \) is a bounded linear operator on a Hilbert space \( X \) (thus \( A : X \rightarrow X \) ), then there is a uniquely defined bounded linear operator \( {A}^{ * } \) such that\n\n\[ \langle {Ax}, y\rangle = \left\langle {x,{A}^{ * }y}\right\rangle \;\left( {x, y \in X}\right) \]\n\nFurthermore, \( \begin{Vmatrix}{A}^{ * }\end{Vmatrix} = \parallel A\parallel \) .
|
Proof. For each fixed \( y \), the mapping \( x \mapsto \langle {Ax}, y\rangle \) is a bounded linear functional on \( X \) :\n\n\[ \langle A\left( {{\lambda x} + {\mu z}}\right), y\rangle = \langle {\lambda Ax} + {\mu Az}, y\rangle = \lambda \langle {Ax}, y\rangle + \mu \langle {Az}, y\rangle \]\n\n\[ \left| {\langle {Ax}, y\rangle }\right| \leq \begin{Vmatrix}{Ax}\end{Vmatrix}\begin{Vmatrix}y\end{Vmatrix} \leq \begin{Vmatrix}A\end{Vmatrix}\begin{Vmatrix}x\end{Vmatrix}\begin{Vmatrix}y\end{Vmatrix} \]\n\nHence by the Riesz Representation Theorem (Theorem 1 above) there is a unique vector \( v \) such that \( \langle {Ax}, y\rangle = \langle x, v\rangle \) . Since \( v \) depends on \( A \) and \( y \), we are at liberty to denote it by \( {A}^{ * }y \) . It remains to be seen whether the mapping \( {A}^{ * } \) thus defined is linear and bounded. We ask whether\n\n\[ {A}^{ * }\left( {{\lambda y} + {\mu z}}\right) = \lambda {A}^{ * }y + \mu {A}^{ * }z \]\n\nBy the Lemma in Section 2.1, page 63, it would suffice to prove that for all \( x \) ,\n\n\[ \left\langle {x,{A}^{ * }\left( {{\lambda y} + {\mu z}}\right) }\right\rangle = \left\langle {x,\lambda {A}^{ * }y + \mu {A}^{ * }z}\right\rangle \]\n\nFor this it will be sufficient to prove\n\n\[ \left\langle {x,{A}^{ * }\left( {{\lambda y} + {\mu z}}\right) }\right\rangle = \bar{\lambda }\left\langle {x,{A}^{ * }y}\right\rangle + \bar{\mu }\left\langle {x,{A}^{ * }z}\right\rangle \]\n\nBy the definition of \( {A}^{ * } \), this equation can be transformed to
|
No
|
Theorem 2. Existence of Adjoints. If \( A \) is a bounded linear operator on a Hilbert space \( X \) (thus \( A : X \rightarrow X \) ), then there is a uniquely defined bounded linear operator \( {A}^{ * } \) such that\n\n\[ \langle {Ax}, y\rangle = \left\langle {x,{A}^{ * }y}\right\rangle \;\left( {x, y \in X}\right) \]\n\nFurthermore, \( \begin{Vmatrix}{A}^{ * }\end{Vmatrix} = \parallel A\parallel \) .
|
Proof. For each fixed \( y \), the mapping \( x \mapsto \langle {Ax}, y\rangle \) is a bounded linear functional on \( X \) :\n\n\[ \langle A\left( {{\lambda x} + {\mu z}}\right), y\rangle = \langle {\lambda Ax} + {\mu Az}, y\rangle = \lambda \langle {Ax}, y\rangle + \mu \langle {Az}, y\rangle \]\n\n\[ \left| {\langle {Ax}, y\rangle }\right| \leq \begin{Vmatrix}{Ax}\end{Vmatrix}\begin{Vmatrix}y\end{Vmatrix} \leq \begin{Vmatrix}A\end{Vmatrix}\begin{Vmatrix}x\end{Vmatrix}\begin{Vmatrix}y\end{Vmatrix} \]\n\nHence by the Riesz Representation Theorem (Theorem 1 above) there is a unique vector \( v \) such that \( \langle {Ax}, y\rangle = \langle x, v\rangle \) . Since \( v \) depends on \( A \) and \( y \), we are at liberty to denote it by \( {A}^{ * }y \) . It remains to be seen whether the mapping \( {A}^{ * } \) thus defined is linear and bounded. We ask whether\n\n\[ {A}^{ * }\left( {{\lambda y} + {\mu z}}\right) = \lambda {A}^{ * }y + \mu {A}^{ * }z \]\n\nBy the Lemma in Section 2.1, page 63, it would suffice to prove that for all \( x \) ,\n\n\[ \left\langle {x,{A}^{ * }\left( {{\lambda y} + {\mu z}}\right) }\right\rangle = \left\langle {x,\lambda {A}^{ * }y + \mu {A}^{ * }z}\right\rangle \]\n\nFor this it will be sufficient to prove\n\n\[ \left\langle {x,{A}^{ * }\left( {{\lambda y} + {\mu z}}\right) }\right\rangle = \bar{\lambda }\left\langle {x,{A}^{ * }y}\right\rangle + \bar{\mu }\left\langle {x,{A}^{ * }z}\right\rangle \]\n\nBy the definition of \( {A}^{ * } \), this equation can be transformed to\n\n\[ \langle {Ax},{\lambda y} + {\mu z}\rangle = \bar{\lambda }\langle {Ax}, y\rangle + \bar{\mu }\langle {Ax}, z\rangle \]\n\nThis we recognize as a correct equation, and the steps we took can be reversed.\n\nFor the boundedness of \( {A}^{ * } \) we use the lemma in Section 2.1 (page 63) and Problem 15 of this section (page 90) to write\n\n\[ \begin{Vmatrix}{A}^{ * }\end{Vmatrix} = \mathop{\sup }\limits_{{\parallel y\parallel = 1}}\begin{Vmatrix}{{A}^{ * 
}y}\end{Vmatrix} = \mathop{\sup }\limits_{{\parallel y\parallel = 1}}\mathop{\sup }\limits_{{\parallel x\parallel = 1}}\left| \left\langle {x,{A}^{ * }y}\right\rangle \right| \]\n\n\[ = \mathop{\sup }\limits_{{\parallel x\parallel = 1}}\mathop{\sup }\limits_{{\parallel y\parallel = 1}}\left| {\langle {Ax}, y\rangle }\right| = \mathop{\sup }\limits_{{\parallel x\parallel = 1}}\parallel {Ax}\parallel = \parallel A\parallel \]\n\nThe uniqueness of \( {A}^{ * } \) is left as a problem. (Problem 11, page 89)\n\nThe operator \( {A}^{ * } \) described in Theorem 2 is called the adjoint of \( A \) . For an operator \( A \) on a Banach space \( X,{A}^{ * } \) is defined on \( {X}^{ * } \) by the equation \( {A}^{ * }\phi = \phi \circ A \) . If \( X \) is a Hilbert space, \( {X}^{ * } \) can be identified with \( X \), and the two definitions of the adjoint are then consistent with each other.
|
No
|
Theorem 2. Existence of Adjoints. If \( A \) is a bounded linear operator on a Hilbert space \( X \) (thus \( A : X \rightarrow X \) ), then there is a uniquely defined bounded linear operator \( {A}^{ * } \) such that\n\n\[ \langle {Ax}, y\rangle = \left\langle {x,{A}^{ * }y}\right\rangle \;\left( {x, y \in X}\right) \]\n\nFurthermore, \( \begin{Vmatrix}{A}^{ * }\end{Vmatrix} = \parallel A\parallel \) .
|
Proof. For each fixed \( y \), the mapping \( x \mapsto \langle {Ax}, y\rangle \) is a bounded linear functional on \( X \) :\n\n\[ \langle A\left( {{\lambda x} + {\mu z}}\right), y\rangle = \langle {\lambda Ax} + {\mu Az}, y\rangle = \lambda \langle {Ax}, y\rangle + \mu \langle {Az}, y\rangle \]\n\n\[ \left| {\langle {Ax}, y\rangle }\right| \leq \begin{Vmatrix}{Ax}\end{Vmatrix}\begin{Vmatrix}y\end{Vmatrix} \leq \begin{Vmatrix}A\end{Vmatrix}\begin{Vmatrix}x\end{Vmatrix}\begin{Vmatrix}y\end{Vmatrix} \]\n\nHence by the Riesz Representation Theorem (Theorem 1 above) there is a unique vector \( v \) such that \( \langle {Ax}, y\rangle = \langle x, v\rangle \) . Since \( v \) depends on \( A \) and \( y \), we are at liberty to denote it by \( {A}^{ * }y \) . It remains to be seen whether the mapping \( {A}^{ * } \) thus defined is linear and bounded. We ask whether\n\n\[ {A}^{ * }\left( {{\lambda y} + {\mu z}}\right) = \lambda {A}^{ * }y + \mu {A}^{ * }z \]\n\nBy the Lemma in Section 2.1, page 63, it would suffice to prove that for all \( x \) ,\n\n\[ \left\langle {x,{A}^{ * }\left( {{\lambda y} + {\mu z}}\right) }\right\rangle = \left\langle {x,\lambda {A}^{ * }y + \mu {A}^{ * }z}\right\rangle \]\n\nFor this it will be sufficient to prove\n\n\[ \left\langle {x,{A}^{ * }\left( {{\lambda y} + {\mu z}}\right) }\right\rangle = \bar{\lambda }\left\langle {x,{A}^{ * }y}\right\rangle + \bar{\mu }\left\langle {x,{A}^{ * }z}\right\rangle \]\n\nBy the definition of \( {A}^{ * } \), this equation can be transformed to\n\n\[ \langle {Ax},{\lambda y} + {\mu z}\rangle = \bar{\lambda }\langle {Ax}, y\rangle + \bar{\mu }\langle {Ax}, z\rangle \]\n\nThis we recognize as a correct equation, and the steps we took can be reversed.\n\nFor the boundedness of \( {A}^{ * } \) we use the lemma in Section 2.1 (page 63) and Problem 15 of this section (page 90) to write\n\n\[ \begin{Vmatrix}{A}^{ * }\end{Vmatrix} = \mathop{\sup }\limits_{{\parallel y\parallel = 1}}\begin{Vmatrix}{{A}^{ * 
}y}\end{Vmatrix} = \mathop{\sup }\limits_{{\parallel y\parallel = 1}}\mathop{\sup }\limits_{{\parallel x\parallel = 1}}\left| \left\langle {x,{A}^{ * }y}\right\rangle \right| \]\n\n\[ = \mathop{\sup }\limits_{{\parallel x\parallel = 1}}\mathop{\sup }\limits_{{\parallel y\parallel = 1}}\left| {\langle {Ax}, y\rangle }\right| = \mathop{\sup }\limits_{{\parallel x\parallel = 1}}\parallel {Ax}\parallel = \parallel A\parallel \]\n\nThe uniqueness of \( {A}^{ * } \) is left as a problem. (Problem 11, page 89)
|
No
|
If \( A \) is a bounded linear operator on a Hilbert space \( X \) (thus \( A : X \rightarrow X \) ), then there is a uniquely defined bounded linear operator \( {A}^{ * } \) such that\n\n\[ \langle {Ax}, y\rangle = \left\langle {x,{A}^{ * }y}\right\rangle \;\left( {x, y \in X}\right) \]\n\nFurthermore, \( \begin{Vmatrix}{A}^{ * }\end{Vmatrix} = \parallel A\parallel \) .
|
Proof. For each fixed \( y \), the mapping \( x \mapsto \langle {Ax}, y\rangle \) is a bounded linear functional on \( X \) :\n\n\[ \langle A\left( {{\lambda x} + {\mu z}}\right), y\rangle = \langle {\lambda Ax} + {\mu Az}, y\rangle = \lambda \langle {Ax}, y\rangle + \mu \langle {Az}, y\rangle \]\n\n\[ \left| {\langle {Ax}, y\rangle }\right| \leq \begin{Vmatrix}{Ax}\end{Vmatrix}\begin{Vmatrix}y\end{Vmatrix} \leq \begin{Vmatrix}A\end{Vmatrix}\begin{Vmatrix}x\end{Vmatrix}\begin{Vmatrix}y\end{Vmatrix} \]\n\nHence by the Riesz Representation Theorem (Theorem 1 above) there is a unique vector \( v \) such that \( \langle {Ax}, y\rangle = \langle x, v\rangle \) . Since \( v \) depends on \( A \) and \( y \), we are at liberty to denote it by \( {A}^{ * }y \) . It remains to be seen whether the mapping \( {A}^{ * } \) thus defined is linear and bounded. We ask whether\n\n\[ {A}^{ * }\left( {{\lambda y} + {\mu z}}\right) = \lambda {A}^{ * }y + \mu {A}^{ * }z \]\n\nBy the Lemma in Section 2.1, page 63, it would suffice to prove that for all \( x \) ,\n\n\[ \left\langle {x,{A}^{ * }\left( {{\lambda y} + {\mu z}}\right) }\right\rangle = \left\langle {x,\lambda {A}^{ * }y + \mu {A}^{ * }z}\right\rangle \]\n\nFor this it will be sufficient to prove\n\n\[ \left\langle {x,{A}^{ * }\left( {{\lambda y} + {\mu z}}\right) }\right\rangle = \bar{\lambda }\left\langle {x,{A}^{ * }y}\right\rangle + \bar{\mu }\left\langle {x,{A}^{ * }z}\right\rangle \]\n\nBy the definition of \( {A}^{ * } \), this equation can be transformed to\n\n\[ \langle {Ax},{\lambda y} + {\mu z}\rangle = \bar{\lambda }\langle {Ax}, y\rangle + \bar{\mu }\langle {Ax}, z\rangle \]\n\nThis we recognize as a correct equation, and the steps we took can be reversed.\n\nFor the boundedness of \( {A}^{ * } \) we use the lemma in Section 2.1 (page 63) and Problem 15 of this section (page 90) to write\n\n\[ \begin{Vmatrix}{A}^{ * }\end{Vmatrix} = \mathop{\sup }\limits_{{\parallel y\parallel = 1}}\begin{Vmatrix}{{A}^{ * 
}y}\end{Vmatrix} = \mathop{\sup }\limits_{{\parallel y\parallel = 1}}\mathop{\sup }\limits_{{\parallel x\parallel = 1}}\left| \left\langle {x,{A}^{ * }y}\right\rangle \right| \]\n\n\[ = \mathop{\sup }\limits_{{\parallel x\parallel = 1}}\mathop{\sup }\limits_{{\parallel y\parallel = 1}}\left| {\langle {Ax}, y\rangle }\right| = \mathop{\sup }\limits_{{\parallel x\parallel = 1}}\parallel {Ax}\parallel = \parallel A\parallel \]\n\nThe uniqueness of \( {A}^{ * } \) is left as a problem. (Problem 11, page 89)
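In the finite-dimensional case the adjoint is the conjugate transpose, and the defining identity can be checked directly. A small sketch (matrix and vectors are my own example data):

```python
# Verify <Ax, y> = <x, A*y> in C^2, where A* is the conjugate transpose
# and <x,y> = sum x_i conj(y_i).
def inner(a, b):
    return sum(s * t.conjugate() for s, t in zip(a, b))

def mat(M, v):  # matrix-vector product
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def adjoint(M):  # conjugate transpose
    return [[M[j][i].conjugate() for j in range(len(M))]
            for i in range(len(M[0]))]

A = [[1 + 1j, 2 + 0j], [0 + 0j, 3 - 1j]]
x, y = [1 + 0j, 2j], [3 + 0j, -1j]
assert abs(inner(mat(A, x), y) - inner(x, mat(adjoint(A), y))) < 1e-12
```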
|
No
|
Theorem 3. If a linear map \( A \) on a Hilbert space satisfies \( \langle {Ax}, y\rangle = \langle x,{Ay}\rangle \) for all \( x \) and \( y \), then \( A \) is bounded and self-adjoint.
|
Proof. For each \( y \) in the unit ball, define a functional \( {\phi }_{y} \) by writing \( {\phi }_{y}\left( x\right) = \langle {Ax}, y\rangle \). It is obvious that \( {\phi }_{y} \) is linear, and we see also that it is bounded, since by the Cauchy-Schwarz inequality\n\n\[ \left| {{\phi }_{y}\left( x\right) }\right| = \left| {\langle {Ax}, y\rangle }\right| = \left| {\langle x,{Ay}\rangle }\right| \leq \parallel x\parallel \parallel {Ay}\parallel \]\n\nNotice also that by the Lemma in Section 2.1, page 63,\n\n\[ \mathop{\sup }\limits_{{\parallel y\parallel \leq 1}}\left| {{\phi }_{y}\left( x\right) }\right| = \mathop{\sup }\limits_{{\parallel y\parallel \leq 1}}\left| {\langle {Ax}, y\rangle }\right| = \parallel {Ax}\parallel \]\n\nBy the Uniform Boundedness Principle (Section 1.7, page 42),\n\n\[ \infty > \mathop{\sup }\limits_{{\parallel y\parallel \leq 1}}\begin{Vmatrix}{\phi }_{y}\end{Vmatrix} = \mathop{\sup }\limits_{{\parallel y\parallel \leq 1}}\mathop{\sup }\limits_{{\parallel x\parallel \leq 1}}\left| {{\phi }_{y}\left( x\right) }\right| \]\n\n\[ = \mathop{\sup }\limits_{{\parallel x\parallel \leq 1}}\mathop{\sup }\limits_{{\parallel y\parallel \leq 1}}\left| {\langle {Ax}, y\rangle }\right| = \mathop{\sup }\limits_{{\parallel x\parallel \leq 1}}\parallel {Ax}\parallel = \parallel A\parallel \]\n\nThe equation \( \langle {Ax}, y\rangle = \langle x,{Ay}\rangle = \left\langle {x,{A}^{ * }y}\right\rangle \), together with the uniqueness of the adjoint, shows that \( A = {A}^{ * } \).
|
Yes
|
Lemma 1. Generalized Cauchy-Schwarz Inequality. If \( A \) is a Hermitian operator, then \[ \left| {\langle {Ax}, y\rangle }\right| \leq N\left( A\right) \parallel x\parallel \parallel y\parallel \] where \( N\left( A\right) = \mathop{\sup }\limits_{{\parallel u\parallel = 1}}\left| {\langle {Au}, u\rangle }\right| \) .
|
Proof. Write \( N\left( A\right) = \mathop{\sup }\limits_{{\parallel u\parallel = 1}}\left| {\langle {Au}, u\rangle }\right| \) . Consider these two elementary equations: \[ \langle A\left( {x + y}\right), x + y\rangle = \langle {Ax}, x\rangle + \langle {Ax}, y\rangle + \langle {Ay}, x\rangle + \langle {Ay}, y\rangle \] \[ - \langle A\left( {x - y}\right), x - y\rangle = - \langle {Ax}, x\rangle + \langle {Ax}, y\rangle + \langle {Ay}, x\rangle - \langle {Ay}, y\rangle \] By adding these equations and using the Hermitian property of \( A \), we get (1) \[ \langle A\left( {x + y}\right), x + y\rangle - \langle A\left( {x - y}\right), x - y\rangle = 4\mathcal{R}\langle {Ax}, y\rangle \] From the definition of \( N\left( A\right) \) and a homogeneity argument, we obtain (2) \[ \left| {\langle {Ax}, x\rangle }\right| \leq N\left( A\right) \parallel x{\parallel }^{2}\;\left( {x \in X}\right) \] Using Equation (1), then (2), and finally the Parallelogram Law, we obtain \[ \left| {4\mathcal{R}\langle {Ax}, y\rangle }\right| = \left| {\langle A\left( {x + y}\right), x + y\rangle -\langle A\left( {x - y}\right), x - y\rangle }\right| \] \[ \leq \left| {\langle A\left( {x + y}\right), x + y\rangle }\right| + \left| {\langle A\left( {x - y}\right), x - y\rangle }\right| \] \[ \leq N\left( A\right) {\begin{Vmatrix}x + y\end{Vmatrix}}^{2} + N\left( A\right) {\begin{Vmatrix}x - y\end{Vmatrix}}^{2} \] \[ = N\left( A\right) \left( {2\parallel x{\parallel }^{2} + 2\parallel y{\parallel }^{2}}\right) \] Letting \( \parallel x\parallel = \parallel y\parallel = 1 \) in the preceding inequality establishes that \[ \left| {\mathcal{R}\langle {Ax}, y\rangle }\right| \leq N\left( A\right) \;\left( {\parallel x\parallel = \parallel y\parallel = 1}\right) \] For a fixed pair \( x, y \) we can select a complex number \( \theta \) such that \( \left| \theta \right| = 1 \) and \( \theta \langle {Ax}, y\rangle = \left| {\langle {Ax}, y\rangle }\right| \) . Then \[ \left| {\langle {Ax}, y\rangle }\right| = \left| {\langle A\left( {\theta x}\right), y\rangle }\right| = \left| {\mathcal{R}\langle A\left( {\theta x}\right), y\rangle }\right| \leq N\left( A\right) \] By homogeneity, this suffices to prove the lemma.
|
Yes
|
Lemma 2. If \( A \) is Hermitian, then \( \mathop{\sup }\limits_{{\parallel u\parallel = 1}}\left| {\langle {Au}, u\rangle }\right| = \parallel A\parallel \) .
|
Proof. Write \( N\left( A\right) = \mathop{\sup }\limits_{{\parallel u\parallel = 1}}\left| {\langle {Au}, u\rangle }\right| \) . By the Cauchy-Schwarz inequality,\n\n\[ N\left( A\right) = \mathop{\sup }\limits_{{\parallel u\parallel = 1}}\left| {\langle {Au}, u\rangle }\right| \leq \mathop{\sup }\limits_{{\parallel u\parallel = 1}}\parallel {Au}\parallel \parallel u\parallel = \mathop{\sup }\limits_{{\parallel u\parallel = 1}}\parallel {Au}\parallel = \parallel A\parallel \]\n\nFor the reverse inequality, use the preceding lemma to write\n\n\[ \parallel A\parallel = \mathop{\sup }\limits_{{\parallel x\parallel = 1}}\parallel {Ax}\parallel = \mathop{\sup }\limits_{{\parallel x\parallel = 1}}\mathop{\sup }\limits_{{\parallel y\parallel = 1}}\left| {\langle {Ax}, y\rangle }\right| \]\n\n\[ \leq \mathop{\sup }\limits_{{\parallel x\parallel = 1}}\mathop{\sup }\limits_{{\parallel y\parallel = 1}}N\left( A\right) \parallel x\parallel \parallel y\parallel = N\left( A\right) \]
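A numerical sketch (my own, for a hypothetical \( 2 \times 2 \) Hermitian matrix): the supremum of \( \left| {\langle {Au}, u\rangle }\right| \) over unit vectors equals the operator norm, which for a Hermitian matrix is the largest eigenvalue in absolute value. The supremum is attained at the dominant eigenvector, and random unit vectors never exceed it.

```python
# Sketch of Lemma 2 on C^2 for a concrete Hermitian matrix.

import math
import random

A = [[2 + 0j, 1 - 1j],
     [1 + 1j, -1 + 0j]]   # equals its conjugate transpose

def inner(u, v):
    return sum(a * b.conjugate() for a, b in zip(u, v))

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(2)) for i in range(2)]

# Eigenvalues from the characteristic polynomial (both real; trace and
# determinant of a Hermitian matrix are real).
tr = (A[0][0] + A[1][1]).real
det = (A[0][0] * A[1][1] - A[0][1] * A[1][0]).real
disc = math.sqrt(tr * tr - 4 * det)
eigs = [(tr + disc) / 2, (tr - disc) / 2]
norm_A = max(abs(e) for e in eigs)   # operator norm of a Hermitian matrix

# The unit eigenvector for the dominant eigenvalue attains the supremum.
lam = max(eigs, key=abs)
v = [A[0][1], lam - A[0][0]]         # solves (A - lam I)v = 0
s = math.sqrt(inner(v, v).real)
u = [c / s for c in v]
assert abs(abs(inner(matvec(A, u), u)) - norm_A) < 1e-9

# Random unit vectors give the "easy" inequality |<Aw,w>| <= ||A||.
random.seed(0)
for _ in range(200):
    w = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(2)]
    t = math.sqrt(inner(w, w).real)
    w = [c / t for c in w]
    assert abs(inner(matvec(A, w), w)) <= norm_A + 1e-9
```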
|
Yes
|
Lemma 3. Every continuous linear operator (from one normed linear space into another) having finite-dimensional range is compact.
|
Proof. Let \( A \) be such an operator, and let \( \sum \) be the unit ball. Since \( A \) is continuous, \( A\left( \sum \right) \) is a bounded set in a finite-dimensional subspace, and its closure is compact, by Theorem 1 in Section 1.4, page 20.
|
Yes
|
Theorem 4. If \( X \) and \( Y \) are Banach spaces, then the set of compact operators in \( \mathcal{L}\left( {X, Y}\right) \) is closed.
|
Proof. Let \( \left\lbrack {A}_{n}\right\rbrack \) be a sequence of compact operators from \( X \) to \( Y \) . Suppose that \( \begin{Vmatrix}{{A}_{n} - A}\end{Vmatrix} \rightarrow 0 \) . To prove that \( A \) is compact, let \( \left\lbrack {x}_{i}\right\rbrack \) be a sequence in the unit ball of \( X \) . We wish to find a convergent subsequence in \( \left\lbrack {A{x}_{i}}\right\rbrack \) . Since \( {A}_{1} \) is compact, there is an increasing sequence \( {I}_{1} \subset \mathbb{N} \) such that \( \left\lbrack {{A}_{1}{x}_{i} : i \in {I}_{1}}\right\rbrack \) converges. Since \( {A}_{2} \) is compact, there is an increasing sequence \( {I}_{2} \subset {I}_{1} \) such that \( \left\lbrack {{A}_{2}{x}_{i} : i \in {I}_{2}}\right\rbrack \) converges. Note that \( \left\lbrack {{A}_{1}{x}_{i} : i \in {I}_{2}}\right\rbrack \) converges. Continue this process, and use Cantor’s diagonal process. Thus we let \( I \) be the sequence whose \( i \) th member is the \( i \) th member of \( {I}_{i} \), for \( i = 1,2,\ldots \) By the construction, \( \left\lbrack {{A}_{n}{x}_{i} : i \in I}\right\rbrack \) converges. To prove that \( \left\lbrack {A{x}_{i} : i \in I}\right\rbrack \) converges, it suffices to show that it is a Cauchy sequence. This follows from the inequality\n\n\[ \begin{Vmatrix}{A{x}_{i} - A{x}_{j}}\end{Vmatrix} \leq \begin{Vmatrix}{A{x}_{i} - {A}_{n}{x}_{i}}\end{Vmatrix} + \begin{Vmatrix}{{A}_{n}{x}_{i} - {A}_{n}{x}_{j}}\end{Vmatrix} + \begin{Vmatrix}{{A}_{n}{x}_{j} - A{x}_{j}}\end{Vmatrix} \]\n\n\[ \leq \begin{Vmatrix}{A - {A}_{n}}\end{Vmatrix}\begin{Vmatrix}{x}_{i}\end{Vmatrix} + \begin{Vmatrix}{{A}_{n}{x}_{i} - {A}_{n}{x}_{j}}\end{Vmatrix} + \begin{Vmatrix}{{A}_{n} - A}\end{Vmatrix}\begin{Vmatrix}{x}_{j}\end{Vmatrix} \]\n\nGiven \( \varepsilon > 0 \), fix \( n \) so that \( \begin{Vmatrix}{A - {A}_{n}}\end{Vmatrix} < \varepsilon /3 \) . The first and third terms on the right are then at most \( \varepsilon /3 \), and the middle term is less than \( \varepsilon /3 \) when \( i \) and \( j \) are sufficiently far along \( I \) .
|
Yes
|
Theorem 5. Let \( S \) be any measure space. In the space \( {L}^{2}\left( S\right) \) , consider the integral operator \( T \) defined by the equation\n\n\[ \n\left( {Tx}\right) \left( s\right) = {\int }_{S}k\left( {s, t}\right) x\left( t\right) {dt} \n\]\n\nIf the kernel \( k \) belongs to the space \( {L}^{2}\left( {S \times S}\right) \), then \( T \) is a compact operator from \( {L}^{2}\left( S\right) \) into \( {L}^{2}\left( S\right) \) .
|
Proof. Select an orthonormal basis \( \left\lbrack {u}_{n}\right\rbrack \) for \( {L}^{2}\left( S\right) \), and define \( {a}_{nm} = \) \( \left\langle {T{u}_{m},{u}_{n}}\right\rangle \) . This is the "matrix" of \( T \) relative to the chosen basis. The functions \( {w}_{nm}\left( {s, t}\right) = {u}_{n}\left( s\right) \overline{{u}_{m}\left( t\right) } \) form an orthonormal basis for \( {L}^{2}\left( {S \times S}\right) \), and a direct computation gives \( {a}_{nm} = \left\langle {k,{w}_{nm}}\right\rangle \) . By Parseval's identity, \( \sum {\left| {a}_{nm}\right| }^{2} = \parallel k{\parallel }^{2} < \infty \) . Let \( {T}_{N} \) be the integral operator whose kernel is \( {k}_{N} = \mathop{\sum }\limits_{{n, m \leq N}}{a}_{nm}{w}_{nm} \) . The range of \( {T}_{N} \) lies in the span of \( {u}_{1},\ldots ,{u}_{N} \), and so \( {T}_{N} \) is compact, by Lemma 3. Since the norm of an integral operator does not exceed the \( {L}^{2} \) -norm of its kernel, \( \begin{Vmatrix}{T - {T}_{N}}\end{Vmatrix} \leq \begin{Vmatrix}{k - {k}_{N}}\end{Vmatrix} \rightarrow 0 \) . By Theorem 4, \( T \) is compact.
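A numerical sketch (my own; the kernel \( k\left( {s, t}\right) = \min \left( {s, t}\right) \) on \( \left\lbrack {0,1}\right\rbrack \) and the midpoint discretization are hypothetical choices): after discretizing, the operator 2-norm of the matrix is bounded by its Hilbert-Schmidt (Frobenius) norm, mirroring \( \parallel T\parallel \leq \parallel k\parallel \) .

```python
# Discretize an integral operator and compare operator vs HS norm.

import math

n = 60
h = 1.0 / n
grid = [(i + 0.5) * h for i in range(n)]
K = [[min(s, t) * h for t in grid] for s in grid]  # row i ~ integral in t

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]

def norm(x):
    return math.sqrt(sum(c * c for c in x))

# Operator 2-norm by power iteration on K^T K (K is symmetric here).
x = [1.0] * n
for _ in range(200):
    y = matvec(K, matvec(K, x))
    s = norm(y)
    x = [c / s for c in y]
op_norm = math.sqrt(norm(matvec(K, matvec(K, x))))

hs_norm = math.sqrt(sum(K[i][j] ** 2 for i in range(n) for j in range(n)))
assert op_norm <= hs_norm + 1e-9
```

The bound holds exactly at the matrix level: the largest singular value never exceeds the Frobenius norm, which is the discrete analogue of the inequality used in the proof.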
|
No
|
If \( \left\lbrack {u}_{n}\right\rbrack \) is an orthonormal sequence, then \( {u}_{n} \rightharpoonup 0 \).
|
This follows from Bessel's inequality, which shows that \( \left\langle {{u}_{n}, y}\right\rangle \rightarrow 0 \) for all \( y \) .
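A numerical sketch (my own, with the hypothetical choices \( y\left( t\right) = t \) and \( {u}_{n}\left( t\right) = \sin \left( {nt}\right) /\sqrt{\pi } \), an orthonormal sequence in \( {L}^{2}\left\lbrack {0,{2\pi }}\right\rbrack \) ): the coefficients \( \left\langle {y,{u}_{n}}\right\rangle \) shrink toward 0 as \( n \) grows, as Bessel's inequality forces.

```python
# Coefficients <y, u_n> for y(t) = t against u_n(t) = sin(n t)/sqrt(pi)
# on [0, 2*pi], computed by the composite trapezoid rule.

import math

def coeff(n, N=20000):
    a, b = 0.0, 2 * math.pi
    h = (b - a) / N
    total = 0.0
    for i in range(N + 1):
        t = a + i * h
        w = 0.5 if i in (0, N) else 1.0
        total += w * t * math.sin(n * t) / math.sqrt(math.pi)
    return total * h

# The exact value is -2*pi/(n*sqrt(pi)), so |<y, u_n>| decays like 1/n.
c = [coeff(n) for n in (1, 2, 4, 8, 16)]
assert all(abs(c[k + 1]) < abs(c[k]) for k in range(len(c) - 1))
assert abs(c[-1]) < 0.3
```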
|
Yes
|
Lemma 4. A weakly Cauchy sequence in a Hilbert space is weakly convergent to a point in the Hilbert space.
|
Proof. Let \( \left\lbrack {x}_{n}\right\rbrack \) be such a sequence. For each \( y \), the sequence \( \left\lbrack \left\langle {y,{x}_{n}}\right\rangle \right\rbrack \) has the Cauchy property, and is therefore bounded in \( \mathbb{C} \) . The linear functionals \( {\phi }_{n} \) defined by \( {\phi }_{n}\left( y\right) = \left\langle {y,{x}_{n}}\right\rangle \) have the property\n\n\[ \mathop{\sup }\limits_{n}\left| {{\phi }_{n}\left( y\right) }\right| < \infty \;\left( {y \in X}\right) \]\n\nBy the Uniform Boundedness Principle (Section 1.7, page 42), we infer that \( \begin{Vmatrix}{\phi }_{n}\end{Vmatrix} \leq M \) for some constant \( M \) . Since\n\n\[ \begin{Vmatrix}{x}_{n}\end{Vmatrix} = \mathop{\sup }\limits_{{\parallel y\parallel = 1}}\left| \left\langle {y,{x}_{n}}\right\rangle \right| = \begin{Vmatrix}{\phi }_{n}\end{Vmatrix} \leq M \]\n\nwe conclude that \( \left\lbrack {x}_{n}\right\rbrack \) is bounded. Put \( \phi \left( y\right) = \mathop{\lim }\limits_{n}\left\langle {y,{x}_{n}}\right\rangle \) . Then \( \phi \) is a bounded linear functional on \( X \) . By the Riesz Representation Theorem, there is an \( x \) for which \( \phi \left( y\right) = \langle y, x\rangle \) . Hence \( \mathop{\lim }\limits_{n}\left\langle {y,{x}_{n}}\right\rangle = \langle y, x\rangle \) and \( {x}_{n} \rightharpoonup x \) .
|
Yes
|
Theorem 7. Let \( A \) be a continuous linear operator on a Hilbert space. If the range of \( A \) is closed, then it is the orthogonal complement of the null space of \( {A}^{ * } \) ; in symbols,\n\n\[\n\mathcal{R}\left( A\right) = {\left\lbrack \mathcal{N}\left( {A}^{ * }\right) \right\rbrack }^{ \bot }\n\]
|
Proof. This is similar to Theorem 6, and is therefore left to the problems. (Half of the theorem does not require the closed range.)
|
No
|
Lemma 1. If \( A \) is a Hermitian operator on an inner-product space, then:\n(1) All eigenvalues of \( A \) are real.\n(2) Any two eigenvectors of \( A \) belonging to different eigenvalues are orthogonal to each other.\n(3) The quadratic form \( x \mapsto \langle {Ax}, x\rangle \) is real-valued.
|
Proof. Let \( {Ax} = {\lambda x},{Ay} = {\mu y}, x \neq 0, y \neq 0,\lambda \neq \mu \) . Then\n\n\[ \lambda \langle x, x\rangle = \langle {\lambda x}, x\rangle = \langle {Ax}, x\rangle = \langle x,{Ax}\rangle = \langle x,{\lambda x}\rangle = \bar{\lambda }\langle x, x\rangle \]\n\nSince \( \langle x, x\rangle \neq 0 \), it follows that \( \lambda \) is real. To see that \( \langle x, y\rangle = 0 \), use the fact that \( \lambda \) and \( \mu \) are real and write\n\n\[ \left( {\lambda - \mu }\right) \langle x, y\rangle = \langle {\lambda x}, y\rangle - \langle x,{\mu y}\rangle = \langle {Ax}, y\rangle - \langle x,{Ay}\rangle = 0 \]\n\nFor (3), note that \( \langle {Ax}, x\rangle = \overline{\langle x,{Ax}\rangle } = \overline{\langle {Ax}, x\rangle } \).
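A sketch of (1)-(3) on \( {\mathbb{C}}^{2} \) for a concrete Hermitian matrix (my own hypothetical example): the eigenvalues come out real, the two eigenvectors are orthogonal, and the quadratic form takes real values.

```python
# Verify the three claims of Lemma 1 for a 2x2 Hermitian matrix.

import math
import random

A = [[1 + 0j, 2 + 1j],
     [2 - 1j, 3 + 0j]]   # Hermitian

def inner(u, v):
    return sum(a * b.conjugate() for a, b in zip(u, v))

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(2)) for i in range(2)]

# The characteristic polynomial has real coefficients and real roots.
tr = (A[0][0] + A[1][1]).real
det = (A[0][0] * A[1][1] - A[0][1] * A[1][0]).real
disc = tr * tr - 4 * det
assert disc >= 0                       # (1): both eigenvalues are real
lams = [(tr + math.sqrt(disc)) / 2, (tr - math.sqrt(disc)) / 2]

# An eigenvector for lam solves the first row of (A - lam I)v = 0.
vecs = [[A[0][1], lam - A[0][0]] for lam in lams]
assert abs(inner(vecs[0], vecs[1])) < 1e-9   # (2): orthogonality

# (3): the quadratic form <Ax, x> is real for random x.
random.seed(1)
for _ in range(100):
    x = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(2)]
    q = inner(matvec(A, x), x)
    assert abs(q.imag) < 1e-9
```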
|
Yes
|
Theorem 2. Let \( A \) be a compact operator (on an inner-product space) having spectral decomposition \( {Ax} = \sum {\lambda }_{n}\left\langle {x,{e}_{n}}\right\rangle {e}_{n} \) . (We allow \( {\lambda }_{n} \) to be complex.) If \( 0 \neq \lambda \notin \left\{ {\lambda }_{n}\right\} \), then \( A - {\lambda I} \) is invertible, and\n\n\[{\left( A - \lambda I\right) }^{-1}x = - {\lambda }^{-1}x + {\lambda }^{-1}\sum {\lambda }_{n}\frac{\left\langle x,{e}_{n}\right\rangle }{{\lambda }_{n} - \lambda }{e}_{n}\]
|
Proof. If the series converges, then our formula is correct. Indeed, by the continuity of \( A - {\lambda I} \) we have by straightforward calculation\n\n\[ \left( {A - {\lambda I}}\right) {Bx} = B\left( {A - {\lambda I}}\right) x = x \]\n\nwhere \( {Bx} \) is defined by the right side of the equation in the statement of the theorem. In order to prove that the series converges, define the partial sums\n\n\[{v}_{n} = \mathop{\sum }\limits_{{k = 1}}^{n}\frac{\left\langle x,{e}_{k}\right\rangle }{{\lambda }_{k} - \lambda }{e}_{k}\]\n\nThe sequence \( \left\lbrack {v}_{n}\right\rbrack \) is bounded, because with an application of the Pythagorean law and Bessel's inequality we have\n\n\[{\begin{Vmatrix}{v}_{n}\end{Vmatrix}}^{2} = \mathop{\sum }\limits_{{k = 1}}^{n}{\left| \frac{\left\langle x,{e}_{k}\right\rangle }{{\lambda }_{k} - \lambda }\right| }^{2} \leq \mathop{\sup }\limits_{j}{\left| \frac{1}{{\lambda }_{j} - \lambda }\right| }^{2}\mathop{\sum }\limits_{{k = 1}}^{\infty }{\left| \left\langle x,{e}_{k}\right\rangle \right| }^{2} \leq \beta \cdot \parallel x{\parallel }^{2}\]\n\nSince \( A \) is compact, \( {\lambda }_{n} \rightarrow 0 \), by Problem 15. Thus \( \beta < \infty \) . Also, the sequence \( \left\lbrack {A{v}_{n}}\right\rbrack \) contains a convergent subsequence. But \( \left\lbrack {A{v}_{n}}\right\rbrack \) is a Cauchy sequence, and a Cauchy sequence having a convergent subsequence is convergent (Problem 1.2.26, page 13). 
To see that \( \left\lbrack {A{v}_{n}}\right\rbrack \) is a Cauchy sequence, write\n\n\[A{v}_{n} = \mathop{\sum }\limits_{{k = 1}}^{n}{\lambda }_{k}\frac{\left\langle x,{e}_{k}\right\rangle }{{\lambda }_{k} - \lambda }{e}_{k}\]\n\nand\n\n\[{\begin{Vmatrix}A{v}_{n} - A{v}_{m}\end{Vmatrix}}^{2} = \mathop{\sum }\limits_{{k = n + 1}}^{m}{\left| {\lambda }_{k}\frac{\left\langle x,{e}_{k}\right\rangle }{{\lambda }_{k} - \lambda }\right| }^{2} \leq \mathop{\sup }\limits_{{1 \leq j < \infty }}{\left| \frac{{\lambda }_{j}}{{\lambda }_{j} - \lambda }\right| }^{2} \cdot \mathop{\sum }\limits_{{k = n + 1}}^{m}{\left| \left\langle x,{e}_{k}\right\rangle \right| }^{2}\]\n\nThe supremum here is finite, and the last sum tends to 0 as \( n, m \rightarrow \infty \), since \( \mathop{\sum }\limits_{{k = 1}}^{\infty }{\left| \left\langle x,{e}_{k}\right\rangle \right| }^{2} \) converges. Hence \( \left\lbrack {A{v}_{n}}\right\rbrack \) is a Cauchy sequence.
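The resolvent formula can be checked coordinatewise for a diagonal model of the theorem (my own hypothetical finite truncation, with eigenvalues \( {\lambda }_{n} = 1/n \) and the standard basis as the \( {e}_{n} \) ); exact rational arithmetic makes the verification exact.

```python
# Check (A - lam I) B x = x for a diagonal operator on C^N.

from fractions import Fraction

N = 6
eigs = [Fraction(1, n) for n in range(1, N + 1)]  # lam_n = 1/n
lam = Fraction(3, 7)          # nonzero, and not one of the eigenvalues
x = [Fraction(k + 2, 3) for k in range(N)]        # arbitrary vector

# B x as defined in the theorem, computed coordinatewise:
# (Bx)_n = -x_n/lam + (lam_n * x_n / (lam_n - lam)) / lam
Bx = [-x[n] / lam + (eigs[n] * x[n] / (eigs[n] - lam)) / lam
      for n in range(N)]

# (A - lam I) acts coordinatewise as multiplication by (lam_n - lam).
recovered = [(eigs[n] - lam) * Bx[n] for n in range(N)]
assert recovered == x
```

Algebraically, \( \left( {{\lambda }_{n} - \lambda }\right) \left\lbrack {-\frac{1}{\lambda } + \frac{{\lambda }_{n}}{\lambda \left( {{\lambda }_{n} - \lambda }\right) }}\right\rbrack = 1 \), which is what the coordinatewise check confirms.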
|
Yes
|
Theorem 3. Let \( A \) be an operator on an inner-product space having the form \( {Ax} = \mathop{\sum }\limits_{{n = 1}}^{\infty }{\lambda }_{n}\left\langle {x,{e}_{n}}\right\rangle {e}_{n} \), where \( \left\{ {e}_{n}\right\} \) is an orthonormal sequence and \( \left\lbrack {\lambda }_{n}\right\rbrack \) is a bounded sequence of nonzero complex numbers. Let \( M \) be the linear span of \( \left\{ {{e}_{n} : n \in \mathbb{N}}\right\} \) . Then \( {M}^{ \bot } = \ker \left( A\right) \) .
|
Proof. The following are equivalent properties of a vector \( x \) :\n\n(a) \( x \in \ker \left( A\right) \)\n\n(b) \( \parallel {Ax}{\parallel }^{2} = 0 \)\n\n(c) \( \sum {\left| {\lambda }_{n}\left\langle x,{e}_{n}\right\rangle \right| }^{2} = 0 \)\n\n(d) \( \left\langle {x,{e}_{n}}\right\rangle = 0 \) for all \( n \) .\n\nCondition (d) says precisely that \( x \in {M}^{ \bot } \) .
|
Yes
|
Theorem 4. Adopt the hypotheses of Theorem 3. The orthonormal set \( \left\{ {e}_{n}\right\} \) is maximal if and only if \( \ker \left( A\right) = 0 \) .
|
Proof. By Theorem 3, \( \ker \left( A\right) = 0 \) if and only if \( {M}^{ \bot } = 0 \) . (In these equations,0 denotes the 0 subspace.) The condition \( {M}^{ \bot } = 0 \) is equivalent to the maximality of \( \left\{ {e}_{n}\right\} \) . Here refer to Theorem 6 in Section 2.2, page 73, and observe that the equivalence of (a) and (b) in that theorem does not require the completeness of the space.
|
Yes
|
Theorem 5. Let \( A \) be an operator on a Hilbert space such that \( {Ax} = \) \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{\lambda }_{n}\left\langle {x,{e}_{n}}\right\rangle {e}_{n} \), where \( \left\lbrack {e}_{n}\right\rbrack \) is an orthonormal sequence and \( \left\lbrack {\lambda }_{n}\right\rbrack \) is a bounded sequence of nonzero complex numbers. If \( v \) is in the range of \( A \), then one solution of the equation \( {Ax} = v \) is \( x = \mathop{\sum }\limits_{{n = 1}}^{\infty }{\lambda }_{n}^{-1}\left\langle {v,{e}_{n}}\right\rangle {e}_{n} \) .
|
Proof. Since \( v \) is in the range of \( A, v = {Az} \) for some \( z \) . Hence\n\n\[ \left\langle {v,{e}_{m}}\right\rangle = \left\langle {{Az},{e}_{m}}\right\rangle = \left\langle {\mathop{\sum }\limits_{{n = 1}}^{\infty }{\lambda }_{n}\left\langle {z,{e}_{n}}\right\rangle {e}_{n},{e}_{m}}\right\rangle = {\lambda }_{m}\left\langle {z,{e}_{m}}\right\rangle \]\n\nFrom this we have\n\n\[ \mathop{\sum }\limits_{{n = 1}}^{\infty }{\left| {\lambda }_{n}^{-1}\left\langle v,{e}_{n}\right\rangle \right| }^{2} = \mathop{\sum }\limits_{{n = 1}}^{\infty }{\left| \left\langle z,{e}_{n}\right\rangle \right| }^{2} \leq \parallel z{\parallel }^{2} \]\n\nThis implies the convergence of the series \( x = \mathop{\sum }\limits_{{n = 1}}^{\infty }{\lambda }_{n}^{-1}\left\langle {v,{e}_{n}}\right\rangle {e}_{n} \), by Theorem 2 in Section 2.2, page 71. It follows that\n\n\[ {Ax} = \sum {\lambda }_{n}^{-1}\left\langle {v,{e}_{n}}\right\rangle A{e}_{n} = \sum \left\langle {v,{e}_{n}}\right\rangle {e}_{n} = \sum {\lambda }_{n}\left\langle {z,{e}_{n}}\right\rangle {e}_{n} = {Az} = v \]
|
Yes
|
Theorem 6. Singular-Value Decomposition for Compact Operators. Every compact operator on a separable Hilbert space is expressible in the form\n\n\[ \n{Ax} = \mathop{\sum }\limits_{{n = 1}}^{\infty }\left\langle {x,{u}_{n}}\right\rangle {v}_{n} \]\n\nin which \( \left\lbrack {u}_{n}\right\rbrack \) is an orthonormal basis for the space and \( \left\lbrack {v}_{n}\right\rbrack \) is an orthogonal sequence tending to zero. (The sequences \( \left\lbrack {u}_{n}\right\rbrack \) and \( \left\lbrack {v}_{n}\right\rbrack \) depend on A.)
|
Proof. The operator \( {A}^{ * }A \) is compact and Hermitian. Its eigenvalues are nonnegative, because if \( {A}^{ * }{Ax} = {\beta x} \), then\n\n\[ \n0 \leq \langle {Ax},{Ax}\rangle = \left\langle {x,{A}^{ * }{Ax}}\right\rangle = \langle x,{\beta x}\rangle = \beta \langle x, x\rangle \]\n\nNow apply the spectral theorem to \( {A}^{ * }A \), obtaining\n\n\[ \n{A}^{ * }{Ax} = \mathop{\sum }\limits_{{n = 1}}^{\infty }{\lambda }_{n}^{2}\left\langle {x,{u}_{n}}\right\rangle {u}_{n} \]\n\nwhere \( \left\lbrack {u}_{n}\right\rbrack \) is an orthonormal basis for the space and \( {\lambda }_{n}^{2} \rightarrow 0 \) . Since we are assuming that \( \left\lbrack {u}_{n}\right\rbrack \) is a base, we permit some (possibly an infinite number) of the \( {\lambda }_{n} \) to be zero. In the spectral representation above, each nonzero eigenvalue \( {\lambda }_{n}^{2} \) is repeated a number of times equal to its geometric multiplicity. Define \( {v}_{n} = A{u}_{n} \) . Then we have\n\n\[ \n\langle {v}_{m},{v}_{n}\rangle = \langle A{u}_{m}, A{u}_{n}\rangle = \langle {u}_{m},{A}^{ * }A{u}_{n}\rangle = \langle {u}_{m},{\lambda }_{n}^{2}{u}_{n}\rangle = {\lambda }_{n}^{2}{\delta }_{nm} \]\n\nHence \( \left\lbrack {v}_{n}\right\rbrack \) is orthogonal, and \( \begin{Vmatrix}{v}_{n}\end{Vmatrix} = {\lambda }_{n} \rightarrow 0 \) . Since \( \left\lbrack {u}_{n}\right\rbrack \) is a base, we have for arbitrary \( x \) ,\n\n\[ \nx = \mathop{\sum }\limits_{{n = 1}}^{\infty }\left\langle {x,{u}_{n}}\right\rangle {u}_{n} \]\n\nConsequently,\n\n\[ \n{Ax} = \mathop{\sum }\limits_{{n = 1}}^{\infty }\left\langle {x,{u}_{n}}\right\rangle A{u}_{n} = \mathop{\sum }\limits_{{n = 1}}^{\infty }\left\langle {x,{u}_{n}}\right\rangle {v}_{n} \]\n
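The construction in the proof can be carried out by hand for a real \( 2 \times 2 \) matrix (my own hypothetical example): diagonalize \( {A}^{\mathrm{T}}A \), set \( {v}_{n} = A{u}_{n} \), and check the decomposition \( {Ax} = \sum \left\langle {x,{u}_{n}}\right\rangle {v}_{n} \) together with the orthogonality of the \( {v}_{n} \) .

```python
# Singular-value decomposition of a 2x2 real matrix, following the proof.

import math

A = [[2.0, 1.0],
     [0.0, 1.0]]

def matvec(M, x):
    return [M[0][0] * x[0] + M[0][1] * x[1],
            M[1][0] * x[0] + M[1][1] * x[1]]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

# B = A^T A is symmetric positive semidefinite.
B = [[A[0][0] ** 2 + A[1][0] ** 2, A[0][0] * A[0][1] + A[1][0] * A[1][1]],
     [A[0][0] * A[0][1] + A[1][0] * A[1][1], A[0][1] ** 2 + A[1][1] ** 2]]

tr = B[0][0] + B[1][1]
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
disc = math.sqrt(tr * tr - 4 * det)
mus = [(tr + disc) / 2, (tr - disc) / 2]   # eigenvalues lam_n^2 of A^T A

def unit(v):
    s = math.sqrt(dot(v, v))
    return [v[0] / s, v[1] / s]

us = [unit([B[0][1], mu - B[0][0]]) for mu in mus]  # orthonormal u_n
vs = [matvec(A, u) for u in us]                     # v_n = A u_n

assert abs(dot(us[0], us[1])) < 1e-9            # u_n orthonormal
assert abs(dot(vs[0], vs[1])) < 1e-9            # v_n orthogonal
assert abs(dot(vs[0], vs[0]) - mus[0]) < 1e-9   # ||v_n||^2 = lam_n^2

x = [0.7, -1.3]
Ax = matvec(A, x)
recon = [dot(x, us[0]) * vs[0][k] + dot(x, us[1]) * vs[1][k] for k in range(2)]
assert all(abs(Ax[k] - recon[k]) < 1e-9 for k in range(2))
```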
|
Yes
|
Theorem 7. Let \( \left\lbrack {u}_{\alpha }\right\rbrack \) and \( \left\lbrack {v}_{\beta }\right\rbrack \) be two orthonormal bases for a Hilbert space. Every linear operator \( A \) on the space satisfies\n\n\[ \mathop{\sum }\limits_{\alpha }{\begin{Vmatrix}A{u}_{\alpha }\end{Vmatrix}}^{2} = \mathop{\sum }\limits_{\beta }{\begin{Vmatrix}A{v}_{\beta }\end{Vmatrix}}^{2} \]\n
|
Proof. By the Orthonormal Basis Theorem, Section 2.2 (page 73), we have\n\n\[ \mathop{\sum }\limits_{\alpha }{\begin{Vmatrix}A{u}_{\alpha }\end{Vmatrix}}^{2} = \mathop{\sum }\limits_{\alpha }\mathop{\sum }\limits_{\beta }{\left| \left\langle A{u}_{\alpha },{v}_{\beta }\right\rangle \right| }^{2} = \mathop{\sum }\limits_{\beta }\mathop{\sum }\limits_{\alpha }{\left| \left\langle A{u}_{\alpha },{v}_{\beta }\right\rangle \right| }^{2} \]\n\n\[ = \mathop{\sum }\limits_{\beta }\mathop{\sum }\limits_{\alpha }{\left| \left\langle {u}_{\alpha },{A}^{ * }{v}_{\beta }\right\rangle \right| }^{2} = \mathop{\sum }\limits_{\beta }{\begin{Vmatrix}{A}^{ * }{v}_{\beta }\end{Vmatrix}}^{2} \]\n\nLetting \( \left\{ {u}_{\alpha }\right\} = \left\{ {v}_{\beta }\right\} \) in this calculation, we obtain \( \mathop{\sum }\limits_{\beta }{\begin{Vmatrix}A{v}_{\beta }\end{Vmatrix}}^{2} = \mathop{\sum }\limits_{\beta }{\begin{Vmatrix}{A}^{ * }{v}_{\beta }\end{Vmatrix}}^{2} \) . By combining these equations, we obtain the required result.
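On \( {\mathbb{R}}^{2} \) the quantity in Theorem 7 is the squared Frobenius norm of the matrix, and its basis-independence can be checked directly (my own sketch; the matrix and rotation angle are arbitrary choices).

```python
# sum ||A u||^2 over an orthonormal basis is basis-independent.

import math

A = [[1.0, 2.0],
     [3.0, 4.0]]

def matvec(M, x):
    return [M[0][0] * x[0] + M[0][1] * x[1],
            M[1][0] * x[0] + M[1][1] * x[1]]

def sq_norm(v):
    return v[0] ** 2 + v[1] ** 2

def frob_sum(basis):
    return sum(sq_norm(matvec(A, u)) for u in basis)

std = [[1.0, 0.0], [0.0, 1.0]]               # standard basis
th = 0.9
rot = [[math.cos(th), math.sin(th)],          # a rotated orthonormal basis
       [-math.sin(th), math.cos(th)]]

assert abs(frob_sum(std) - frob_sum(rot)) < 1e-9
assert abs(frob_sum(std) - 30.0) < 1e-9       # 1^2+2^2+3^2+4^2 = 30
```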
|
Yes
|
Theorem 1. Under the preceding hypotheses, \( A \) is a Hermitian operator on \( X \) .
|
Proof. Let \( x, y \in X \) . We want to prove that \( \langle {Ax}, y\rangle = \langle x,{Ay}\rangle \) . We compute\n\n\[ \langle {Ax}, y\rangle - \langle x,{Ay}\rangle = {\int }_{a}^{b}\left\lbrack {\bar{y}{Ax} - {xA}\bar{y}}\right\rbrack = {\int }_{a}^{b}\left\lbrack {\bar{y}{\left( p{x}^{\prime }\right) }^{\prime } + \bar{y}{qx} - x{\left( p{\bar{y}}^{\prime }\right) }^{\prime } - {xq}\bar{y}}\right\rbrack \]\n\n\[ = {\int }_{a}^{b}\left\lbrack {\bar{y}{\left( p{x}^{\prime }\right) }^{\prime } - x{\left( p{\bar{y}}^{\prime }\right) }^{\prime }}\right\rbrack \]\n\n\[ = {\int }_{a}^{b}\left\lbrack {\bar{y}{\left( p{x}^{\prime }\right) }^{\prime } + {\bar{y}}^{\prime }p{x}^{\prime } - x{\left( p{\bar{y}}^{\prime }\right) }^{\prime } - {x}^{\prime }p{\bar{y}}^{\prime }}\right\rbrack \]\n\n\[ = {\int }_{a}^{b}{\left\lbrack p{x}^{\prime }\bar{y} - px{\bar{y}}^{\prime }\right\rbrack }^{\prime } = {\left\lbrack p{x}^{\prime }\bar{y} - px{\bar{y}}^{\prime }\right\rbrack }_{a}^{b} \]\n\n\[ = p\left( b\right) \left\lbrack {{x}^{\prime }\left( b\right) \bar{y}\left( b\right) - x\left( b\right) {\bar{y}}^{\prime }\left( b\right) }\right\rbrack - p\left( a\right) \left\lbrack {{x}^{\prime }\left( a\right) \bar{y}\left( a\right) - x\left( a\right) {\bar{y}}^{\prime }\left( a\right) }\right\rbrack \]\n\n\[ = - p\left( b\right) \left\lbrack {\det w\left( b\right) }\right\rbrack + p\left( a\right) \left\lbrack {\det w\left( a\right) }\right\rbrack \]\n\nwhere \( w\left( t\right) \) is the Wroński matrix\n\n\[ w\left( t\right) = \left\lbrack \begin{matrix} x\left( t\right) & \bar{y}\left( t\right) \\ {x}^{\prime }\left( t\right) & {\bar{y}}^{\prime }\left( t\right) \end{matrix}\right\rbrack \]\n\nPut also\n\n\[ \alpha = \left\lbrack \begin{array}{ll} {\alpha }_{11} & {\alpha }_{12} \\ {\alpha }_{21} & {\alpha }_{22} \end{array}\right\rbrack \;\beta = \left\lbrack \begin{array}{ll} {\beta }_{11} & {\beta }_{12} \\ {\beta }_{21} & {\beta }_{22} \end{array}\right\rbrack \]\n\nOur 
hypothesis on \( p \) is that \( p\left( a\right) \det \beta = p\left( b\right) \det \alpha \) . The fact that \( x, y \in X \) gives us \( {\alpha w}\left( a\right) + {\beta w}\left( b\right) = 0 \) . This yields \( \left( {\det \alpha }\right) \left\lbrack {\det w\left( a\right) }\right\rbrack = \left( {\det \beta }\right) \left\lbrack {\det w\left( b\right) }\right\rbrack \) . Note that \( \det \left( {-\beta }\right) = \det \left( \beta \right) \) because \( \beta \) is of even order. Multiplying this by \( p\left( b\right) \) gives us \( p\left( b\right) \det \alpha \det w\left( a\right) = p\left( b\right) \det \beta \det w\left( b\right) \) . By a previous equation, this is \( p\left( a\right) \det \beta \det w\left( a\right) = p\left( b\right) \det \beta \det w\left( b\right) \) . If \( \det \beta \neq 0 \), we have \( p\left( b\right) \det w\left( b\right) = \) \( p\left( a\right) \det w\left( a\right) \) . If \( \det \alpha \neq 0 \), a similar calculation can be used.
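A discrete analogue (my own sketch, using a standard finite-difference scheme with hypothetical coefficient data): discretizing \( {Ax} = {\left( p{x}^{\prime }\right) }^{\prime } + {qx} \) with zero (Dirichlet) boundary values yields a symmetric matrix, so \( \langle {Ax}, y\rangle = \langle x,{Ay}\rangle \) holds exactly at the discrete level.

```python
# Symmetry of the discretized Sturm-Liouville operator.

import random

n = 8
random.seed(2)
p_half = [random.uniform(1.0, 2.0) for _ in range(n + 1)]  # p at midpoints
q = [random.uniform(-1.0, 1.0) for _ in range(n)]

def apply_A(x):
    # (Ax)_i = p_{i+1/2}(x_{i+1}-x_i) - p_{i-1/2}(x_i-x_{i-1}) + q_i x_i,
    # with zero boundary values outside the grid.
    out = []
    for i in range(n):
        left = x[i - 1] if i > 0 else 0.0
        right = x[i + 1] if i < n - 1 else 0.0
        out.append(p_half[i + 1] * (right - x[i])
                   - p_half[i] * (x[i] - left)
                   + q[i] * x[i])
    return out

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x = [random.gauss(0, 1) for _ in range(n)]
y = [random.gauss(0, 1) for _ in range(n)]
assert abs(dot(apply_A(x), y) - dot(x, apply_A(y))) < 1e-9
```

The coupling between nodes \( i \) and \( i + 1 \) is \( {p}_{i + 1/2} \) from either side, which is the discrete counterpart of the boundary-term cancellation in the proof.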
|
Yes
|
Example 1. If \( {Ax} = - {x}^{\prime \prime } \) (i.e., \( p\left( t\right) = - 1 \) and \( q\left( t\right) = 0 \) ), what are the eigenvalues and eigenfunctions?
|
The solutions to \( - {x}^{\prime \prime } = {\lambda x} \) are of the form \( {c}_{1}\sin \sqrt{\lambda }t + {c}_{2}\cos \sqrt{\lambda }t \) (for \( \lambda = 0 \), the linear functions \( {c}_{1} + {c}_{2}t \) ). Hence every complex number \( \lambda \) is an eigenvalue, and each eigenspace is of dimension 2.
|
Yes
|
Theorem 2. A right inverse of \( A \) in Equation (4) is the operator \( B \) defined by (5) \[ \left( {By}\right) \left( s\right) = {\int }_{a}^{b}g\left( {s, t}\right) y\left( t\right) {dt} \]
|
Proof. It is to be proved that \( {AB} = I \) . Let \( y \in C\left\lbrack {a, b}\right\rbrack \) and put \( x = {By} \) . We show first that \( {Ax} = y \) . From the equation \[ x\left( s\right) = {\int }_{a}^{b}g\left( {s, t}\right) y\left( t\right) {dt} \] \[ = {\int }_{a}^{s}u\left( s\right) v\left( t\right) y\left( t\right) {dt} + {\int }_{s}^{b}v\left( s\right) u\left( t\right) y\left( t\right) {dt} \] \[ = u\left( s\right) {\int }_{a}^{s}v\left( t\right) y\left( t\right) {dt} + v\left( s\right) {\int }_{s}^{b}u\left( t\right) y\left( t\right) {dt} \] we have \[ {x}^{\prime }\left( s\right) = {u}^{\prime }\left( s\right) {\int }_{a}^{s}v\left( t\right) y\left( t\right) {dt} + u\left( s\right) v\left( s\right) y\left( s\right) \] \[ + {v}^{\prime }\left( s\right) {\int }_{s}^{b}u\left( t\right) y\left( t\right) {dt} - v\left( s\right) u\left( s\right) y\left( s\right) \] \[ = {u}^{\prime }\left( s\right) {\int }_{a}^{s}v\left( t\right) y\left( t\right) {dt} + {v}^{\prime }\left( s\right) {\int }_{s}^{b}u\left( t\right) y\left( t\right) {dt} \] Another differentiation gives us \[ {x}^{\prime \prime }\left( s\right) = {u}^{\prime \prime }\left( s\right) {\int }_{a}^{s}v\left( t\right) y\left( t\right) {dt} + {u}^{\prime }\left( s\right) v\left( s\right) y\left( s\right) + {v}^{\prime \prime }\left( s\right) {\int }_{s}^{b}u\left( t\right) y\left( t\right) {dt} - {v}^{\prime }\left( s\right) u\left( s\right) y\left( s\right) \] \[ = q\left( s\right) u\left( s\right) {\int }_{a}^{s}v\left( t\right) y\left( t\right) {dt} + q\left( s\right) v\left( s\right) {\int }_{s}^{b}u\left( t\right) y\left( t\right) {dt} + y\left( s\right) \left\lbrack {{u}^{\prime }\left( s\right) v\left( s\right) - u\left( s\right) {v}^{\prime }\left( s\right) }\right\rbrack \] \[ = q\left( s\right) x\left( s\right) + y\left( s\right) \] In the last step, the constant value of the Wrońskian was substituted. 
Our calculation shows that \( {x}^{\prime \prime } - {qx} = y \) or \( {Ax} = y \), as asserted. Hence \( {AB} = I \) .
|
Yes
|
Consider the boundary-value problem\n\n\[ {Ax} \equiv {x}^{\prime \prime } + x = y\;{x}^{\prime }\left( 0\right) = x\left( \pi \right) = 0 \]
|
We shall solve it by means of a Green’s function. For the functions \( u \) and \( v \) we can take \( u\left( t\right) = \sin t \) and \( v\left( t\right) = \cos t \) . In this case the Green’s function is\n\n\[ g\left( {s, t}\right) = \left\{ \begin{array}{ll} \sin s\cos t & 0 \leq t \leq s \leq \pi \\ \cos s\sin t & 0 \leq s \leq t \leq \pi \end{array}\right. \]\n\nThe compact Hermitian integral operator \( B \) is given by\n\n\[ \left( {By}\right) \left( s\right) = \sin s{\int }_{0}^{s}\cos {ty}\left( t\right) {dt} + \cos s{\int }_{s}^{\pi }\sin {ty}\left( t\right) {dt} \]
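A numeric check (my own, with the hypothetical choice \( y \equiv 1 \) ): one verifies directly that \( x\left( s\right) = 1 + \cos s \) solves \( {x}^{\prime \prime } + x = 1 \) with \( {x}^{\prime }\left( 0\right) = x\left( \pi \right) = 0 \), and applying the operator \( B \) by quadrature reproduces it.

```python
# Apply B to y = 1 by quadrature and compare with 1 + cos(s).

import math

def quad(f, a, b, N=4000):
    # composite trapezoid rule
    h = (b - a) / N
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, N))
    return s * h

def By(s):
    y = lambda t: 1.0
    return (math.sin(s) * quad(lambda t: math.cos(t) * y(t), 0.0, s)
            + math.cos(s) * quad(lambda t: math.sin(t) * y(t), s, math.pi))

for s in (0.3, 1.0, 2.0, 2.8):
    assert abs(By(s) - (1.0 + math.cos(s))) < 1e-5
```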
|
No
|
Let us solve the problem in Example 3 by using the Spectral Theorem. The eigenvalues and eigenvectors of the differential operator \( A \) are obtained by solving \( {x}^{\prime \prime } + x = {\mu x} \) .
|
The general solution of the differential equation is\n\n\[ x\left( t\right) = {c}_{1}\sin \sqrt{1 - \mu }t + {c}_{2}\cos \sqrt{1 - \mu }t \]\n\nImposing the conditions \( {x}^{\prime }\left( 0\right) = x\left( \pi \right) = 0 \), we find that the eigenvalues are \( {\mu }_{n} = 1 - {\left( n - \frac{1}{2}\right) }^{2} \) and the eigenfunctions are \( {v}_{n}\left( t\right) = \cos \frac{\left( {{2n} - 1}\right) t}{2} \) . The \( {v}_{n} \) are also eigenfunctions of \( B \), corresponding to eigenvalues \( {\lambda }_{n} = 1/{\mu }_{n} = {\left( n - {n}^{2} + \frac{3}{4}\right) }^{-1} \) .\n\nObserve that the eigenfunctions \( {v}_{n} \) are not of unit norm. If \( {\alpha }_{n} = 1/\begin{Vmatrix}{v}_{n}\end{Vmatrix} \), then \( \left\lbrack {{\alpha }_{n}{v}_{n}}\right\rbrack \) is an orthonormal system, and the spectral resolution of \( B \) is\n\n\[ {By} = \mathop{\sum }\limits_{{n = 1}}^{\infty }{\lambda }_{n}\left\langle {y,{\alpha }_{n}{v}_{n}}\right\rangle \left( {{\alpha }_{n}{v}_{n}}\right) \]\n\nA computation reveals that \( {\alpha }_{n} = {\left( 2/\pi \right) }^{1/2} \) . Hence we can write\n\n\[ {By} = \left( {2/\pi }\right) \mathop{\sum }\limits_{{n = 1}}^{\infty }{\lambda }_{n}\left\langle {y,{v}_{n}}\right\rangle {v}_{n} \]\n\nUse of this formula is equivalent to the traditional method for solving the boundary-value problem\n\n\[ {x}^{\prime \prime } + x = y\;{x}^{\prime }\left( 0\right) = x\left( \pi \right) = 0 \]\n\nThe traditional method starts with the functions \( {v}_{n}\left( t\right) = \cos \frac{{2n} - 1}{2}t \), which satisfy the boundary conditions. Then we build a function of the form \( x = \mathop{\sum }\limits_{{n = 1}}^{\infty }{c}_{n}{v}_{n} \) . This also satisfies the boundary conditions. We hope that with a correct choice of the coefficients we will have \( {Ax} = y \) . Since \( A{v}_{n} = {\mu }_{n}{v}_{n} \), this equation reduces to \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{c}_{n}{\mu }_{n}{v}_{n} = y \) . 
To discover the values of the coefficients, take the inner product of both sides with \( {v}_{m} \) :\n\n\[ \sum {c}_{n}{\mu }_{n}\left\langle {{v}_{n},{v}_{m}}\right\rangle = \left\langle {y,{v}_{m}}\right\rangle \]\n\nBy orthogonality, we get \( {c}_{m}{\mu }_{m}{\alpha }_{m}^{-2} = \left\langle {y,{v}_{m}}\right\rangle \) and \( {c}_{m} = \left\langle {y,{v}_{m}}\right\rangle {\mu }_{m}^{-1}{\alpha }_{m}^{2} \) .
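The eigenpairs found above are easy to confirm numerically. The following sketch (not part of the text) checks, by finite differences and midpoint-rule quadrature, that \( {v}_{n}\left( t\right) = \cos \left( {2n} - 1\right) t/2 \) satisfies the eigenvalue equation with \( {\mu }_{n} = 1 - {\left( n - \frac{1}{2}\right) }^{2} \), the boundary condition at \( \pi \), and \( \begin{Vmatrix}{v}_{n}\end{Vmatrix}^{2} = \pi /2 \); the sample index \( n = 3 \) and test point are arbitrary choices.

```python
import math

# Check that v_n(t) = cos((2n-1)t/2) satisfies v'' + v = mu_n v with
# mu_n = 1 - (n - 1/2)^2, that v_n(pi) = 0, and that ||v_n||^2 = pi/2
# (so alpha_n = sqrt(2/pi)).  Sample index n = 3 chosen arbitrarily.

def v(n, t):
    return math.cos((2 * n - 1) * t / 2)

def mu(n):
    return 1.0 - (n - 0.5) ** 2

def second_derivative(f, t, h=1e-4):
    # central finite difference for f''(t)
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / h ** 2

n, t = 3, 0.7
lhs = second_derivative(lambda s: v(n, s), t) + v(n, t)
assert abs(lhs - mu(n) * v(n, t)) < 1e-5        # eigenvalue equation v'' + v = mu_n v
assert abs(v(n, math.pi)) < 1e-12               # boundary condition v_n(pi) = 0

m = 20000
norm2 = sum(v(n, (k + 0.5) * math.pi / m) ** 2 for k in range(m)) * math.pi / m
assert abs(norm2 - math.pi / 2) < 1e-6          # ||v_n||^2 = pi/2
```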
Example 5. Find the Green's function for this Sturm-Liouville problem:\n\n\[ \n{x}^{\prime \prime } = y\;x\left( 0\right) = {x}^{\prime }\left( 0\right) = 0\;x \in {C}^{2}\left\lbrack {0,1}\right\rbrack \n\]
The preceding theorem asserts that \( {g}^{t} \) should solve the homogeneous differential equation in the intervals \( 0 < s < t < 1 \) and \( 0 < t < s < 1 \) . Furthermore, \( {g}^{t} \) should be continuous, and it should satisfy the boundary conditions. Lastly, \( {g}^{\prime }\left( {s, t}\right) \) should have a jump discontinuity of magnitude -1 as \( t \) passes through the value \( s \) . One can guess that \( g \) is given by\n\n\[ \ng\left( {s, t}\right) = \left\{ \begin{array}{ll} 0 & 0 \leq s \leq t \leq 1 \\ s - t & 0 \leq t \leq s \leq 1 \end{array}\right. \n\]\n\nIf we proceed systematically, it will be seen that this is the only solution. In the triangle \( 0 < s < t < 1, A{g}^{t} = 0 \), and therefore \( {g}^{t} \) must be a linear function of \( s \) . We write \( g\left( {s, t}\right) = a\left( t\right) + b\left( t\right) s \) . Since \( {g}^{t} \) must satisfy the boundary conditions, we have \( g\left( {0, t}\right) = \left( {\partial g/\partial s}\right) \left( {0, t}\right) = 0 \) . Thus \( a\left( t\right) = b\left( t\right) = 0 \) and \( g\left( {s, t}\right) = 0 \) in this triangle. In the second triangle, \( 0 < t < s < 1 \) . Again \( {g}^{t} \) must be linear, and we write \( g\left( {s, t}\right) = \alpha \left( t\right) + \beta \left( t\right) s \) . Continuity of \( g \) on the diagonal implies that \( \alpha \left( t\right) + \beta \left( t\right) t = 0 \), and we therefore have \( g\left( {s, t}\right) = - \beta \left( t\right) t + \beta \left( t\right) s = \beta \left( t\right) \left( {s - t}\right) \) . The condition \( \left( {\partial g/\partial s}\right) \left( {s, s + }\right) - \left( {\partial g/\partial s}\right) \left( {s, s - }\right) = - 1/p \) leads to the equation \( 0 - \beta \left( t\right) = - 1 \) . Hence \( g\left( {s, t}\right) = s - t \) in this triangle. 
The solution to the inhomogeneous boundary-value problem \( {x}^{\prime \prime } = y \) is therefore given by\n\n\[ \nx\left( s\right) = {\int }_{0}^{s}\left( {s - t}\right) y\left( t\right) {dt} \n\]
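As a sanity check (not in the text), take the test data \( y\left( t\right) = \cos t \) : integrating by hand gives \( x\left( s\right) = 1 - \cos s \), which indeed satisfies \( {x}^{\prime \prime } = y \) and \( x\left( 0\right) = {x}^{\prime }\left( 0\right) = 0 \) . A midpoint-rule evaluation of the Green's-function formula reproduces this:

```python
import math

# For y(t) = cos t, the formula x(s) = integral_0^s (s - t) y(t) dt
# should give x(s) = 1 - cos s.

def solve(s, m=20000):
    # midpoint-rule approximation of the integral
    h = s / m
    return sum((s - (k + 0.5) * h) * math.cos((k + 0.5) * h) for k in range(m)) * h

s = 1.3
assert abs(solve(s) - (1.0 - math.cos(s))) < 1e-7
```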
Example 6. Find the Green's function for the problem\n\n\[ \n{x}^{\prime \prime } - {x}^{\prime } - {2x} = y\;x\left( 0\right) = 0 = x\left( 1\right) \n\]
We tentatively set\n\n(7)\n\n\[ \ng\left( {s, t}\right) = \left\{ \begin{array}{ll} u\left( s\right) v\left( t\right) & 0 \leq s \leq t \leq 1 \\ v\left( s\right) u\left( t\right) & 0 \leq t \leq s \leq 1 \end{array}\right. \n\]\n\nand try to determine the functions \( u \) and \( v \) . The homogeneous differential equation has as its general solution the function\n\n\[ \nx\left( s\right) = \alpha {e}^{-s} + \beta {e}^{2s} \n\]\n\nThe solution satisfying the condition \( x\left( 0\right) = 0 \) is\n\n\[ \nu\left( s\right) = \alpha {e}^{-s} - \alpha {e}^{2s} \n\]\n\nThe solution satisfying the condition \( x\left( 1\right) = 0 \) is\n\n\[ \nv\left( s\right) = - \beta {e}^{3}{e}^{-s} + \beta {e}^{2s} \n\]\n\nWith these choices, the function \( g \) in Equation (7) satisfies the first four requirements in Theorem 3. With a suitable choice of the parameters \( \alpha \) and \( \beta \), the fifth requirement can be met as well. The calculation produces the following equation involving the Wronskian of \( u \) and \( v \) :\n\n\[ \n{g}^{\prime }\left( {s, s + }\right) - {g}^{\prime }\left( {s, s - }\right) = {u}^{\prime }\left( s\right) v\left( s\right) - {v}^{\prime }\left( s\right) u\left( s\right) \n\]\n\n\[ \n= {\alpha \beta }\left( {3{e}^{3} - 3}\right) {e}^{s} \n\]\n\nIn this problem, the function \( p \) is \( p\left( s\right) = {e}^{-s} \), because\n\n\[ \n{e}^{-s}\left\lbrack {{x}^{\prime \prime }\left( s\right) - {x}^{\prime }\left( s\right) }\right\rbrack = {\left( {e}^{-s}{x}^{\prime }\left( s\right) \right) }^{\prime } \n\]\n\nHence condition 5 in Theorem 3 requires us to choose \( \alpha \) and \( \beta \) such that \( {\alpha \beta } = \) \( - {\left( 3{e}^{3} - 3\right) }^{-1} \approx -{0.017465} \) . 
Then\n\n\[ \ng\left( {s, t}\right) = \left\{ \begin{array}{ll} {\alpha \beta }\left( {{e}^{-s} - {e}^{2s}}\right) \left( {{e}^{2t} - {e}^{3 - t}}\right) & 0 \leq s \leq t \leq 1 \\ {\alpha \beta }\left( {{e}^{2s} - {e}^{3 - s}}\right) \left( {{e}^{-t} - {e}^{2t}}\right) & 0 \leq t \leq s \leq 1 \end{array}\right. \n\]
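The Wronskian computation can be checked numerically. In the sketch below (not part of the text) the derivatives of \( u \) and \( v \) are coded by hand with \( \alpha = \beta = 1 \); the check confirms that \( {u}^{\prime }v - {v}^{\prime }u = \left( {3{e}^{3} - 3}\right) {e}^{s} \), so that the jump condition \( - 1/p\left( s\right) = - {e}^{s} \) forces \( {\alpha \beta } = - {\left( 3{e}^{3} - 3\right) }^{-1} \) .

```python
import math

# Wronskian check for u(s) = e^{-s} - e^{2s}, v(s) = e^{2s} - e^{3-s}
# (alpha = beta = 1).  The jump condition g'(s,s+) - g'(s,s-) = -e^s
# then determines alpha*beta.

E3 = math.exp(3.0)

def u(s):  return math.exp(-s) - math.exp(2 * s)
def up(s): return -math.exp(-s) - 2 * math.exp(2 * s)   # u'(s)
def v(s):  return math.exp(2 * s) - math.exp(3 - s)
def vp(s): return 2 * math.exp(2 * s) + math.exp(3 - s)  # v'(s)

s = 0.4
wronskian = up(s) * v(s) - vp(s) * u(s)
assert abs(wronskian - (3 * E3 - 3) * math.exp(s)) < 1e-9

ab = -1.0 / (3 * E3 - 3)                            # the required alpha*beta
assert abs(ab * wronskian + math.exp(s)) < 1e-12    # jump equals -e^s = -1/p(s)
```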
Example 7. Find the Green's function for this Sturm-Liouville problem:\n\n\[ \n{x}^{\prime \prime } + {9x} = y\;x\left( 0\right) = x\left( {\pi /2}\right) = 0 \n\]
According to the preceding theorem, \( g \) should be a continuous function on the square \( 0 \leq s, t \leq \pi /2 \), and \( {g}^{t} \) should solve the homogeneous problem in the intervals \( 0 \leq s \leq t \) and \( t \leq s \leq \pi /2 \) . Finally, \( \partial g/\partial s \) should have a jump of magnitude -1 as \( t \) increases through the value \( s \) . These considerations lead us\n\nto define\n\[ \ng\left( {s, t}\right) = \left\{ \begin{array}{ll} - \frac{1}{3}\sin {3s}\cos {3t} & 0 \leq s \leq t \leq \pi /2 \\ - \frac{1}{3}\cos {3s}\sin {3t} & 0 \leq t \leq s \leq \pi /2 \end{array}\right. \n\]
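A quick check (not in the text) that this \( g \) has the two defining properties: the branches agree on the diagonal, and \( \partial g/\partial s \) jumps by \( - 1 \) there (here \( p = 1 \) ).

```python
import math

# Branches of g for x'' + 9x = y with x(0) = x(pi/2) = 0,
# and their hand-coded s-derivatives.

def g_upper(s, t):   # branch for 0 <= s <= t
    return -math.sin(3 * s) * math.cos(3 * t) / 3

def g_lower(s, t):   # branch for 0 <= t <= s
    return -math.cos(3 * s) * math.sin(3 * t) / 3

def dgds_upper(s, t):
    return -math.cos(3 * s) * math.cos(3 * t)

def dgds_lower(s, t):
    return math.sin(3 * s) * math.sin(3 * t)

s = 0.9
assert abs(g_upper(s, s) - g_lower(s, s)) < 1e-12   # continuity on the diagonal
jump = dgds_upper(s, s) - dgds_lower(s, s)          # = -cos^2(3s) - sin^2(3s)
assert abs(jump + 1.0) < 1e-12                      # jump = -1 = -1/p
```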
Theorem 1. If \( f \) is differentiable at \( x \), then the mapping \( A \) in the definition is uniquely defined. (It depends on \( x \) as well as \( f \) .)
Proof. Suppose that \( {A}_{1} \) and \( {A}_{2} \) are two linear maps having the required property, expressed in Equation (1). Then to each \( \varepsilon > 0 \) there corresponds a \( \delta > 0 \) such that \[ \parallel f\left( {x + h}\right) - f\left( x\right) - {A}_{i}h\parallel < \varepsilon \parallel h\parallel \;\left( {i = 1,2}\right) \] whenever \( \parallel h\parallel < \delta \) . By the triangle inequality, \( \begin{Vmatrix}{{A}_{1}h - {A}_{2}h}\end{Vmatrix} < {2\varepsilon }\parallel h\parallel \) whenever \( \parallel h\parallel < \delta \) . Since \( {A}_{1} - {A}_{2} \) is homogeneous, the preceding inequality is true for all \( h \) . Hence \( \begin{Vmatrix}{{A}_{1} - {A}_{2}}\end{Vmatrix} \leq {2\varepsilon } \) . Since \( \varepsilon \) was arbitrary, \( \begin{Vmatrix}{{A}_{1} - {A}_{2}}\end{Vmatrix} = 0 \) .
Theorem 2. If \( f \) is bounded in a neighborhood of \( x \) and if a linear map \( A \) has the property in Equation (1), then \( A \) is a bounded linear map; in other words, \( A \) is the Fréchet derivative of \( f \) at \( x \) .
Proof. Choose \( \delta > 0 \) so that whenever \( \parallel h\parallel \leq \delta \) we will have\n\n\[ \parallel f\left( {x + h}\right) \parallel \leq M\;\text{ and }\;\parallel f\left( {x + h}\right) - f\left( x\right) - {Ah}\parallel \leq \parallel h\parallel \]\n\nThen for \( \parallel h\parallel \leq \delta \) we have \( \parallel {Ah}\parallel \leq {2M} + \parallel h\parallel \leq {2M} + \delta \) . For \( \parallel u\parallel \leq 1,\parallel {\delta u}\parallel \leq \delta \) , whence \( \parallel A\left( {\delta u}\right) \parallel \leq {2M} + \delta \) . Thus \( \parallel A\parallel \leq \left( {{2M} + \delta }\right) /\delta \) .
Let \( X = Y = \mathbb{R} \) . Let \( f \) be a function whose derivative (in the elementary sense) at \( x \) is \( \lambda \) . Then the Fréchet derivative of \( f \) at \( x \) is the linear map \( h \mapsto {\lambda h} \), because
\[ \mathop{\lim }\limits_{{h \rightarrow 0}}\frac{\left| f\left( x + h\right) - f\left( x\right) - \lambda h\right| }{\left| h\right| } = \mathop{\lim }\limits_{{h \rightarrow 0}}\left| {\frac{f\left( {x + h}\right) - f\left( x\right) }{h} - \lambda }\right| = 0 \]
Theorem 3. If \( f \) is differentiable at \( x \), then it is continuous at \( x \) .
Proof. Let \( A = {f}^{\prime }\left( x\right) \) . Then \( A \in \mathcal{L}\left( {X, Y}\right) \) . Given \( \varepsilon > 0 \), select \( \delta > 0 \) so that \( \delta < \varepsilon /\left( {1 + \parallel A\parallel }\right) \) and so that the following implication is valid:\n\n\[ \parallel h\parallel < \delta \; \Rightarrow \;\parallel f\left( {x + h}\right) - f\left( x\right) - {Ah}\parallel /\parallel h\parallel < 1 \]\n\nThen for \( \parallel h\parallel < \delta \), we have by the triangle inequality\n\n\[ \parallel f\left( {x + h}\right) - f\left( x\right) \parallel \leq \parallel f\left( {x + h}\right) - f\left( x\right) - {Ah}\parallel + \parallel {Ah}\parallel \]\n\n\[ < \parallel h\parallel + \parallel {Ah}\parallel \leq \parallel h\parallel + \parallel A\parallel \parallel h\parallel \]\n\n\[ < \delta \left( {1 + \parallel A\parallel }\right) < \varepsilon \]
Example 4. Let \( X = Y = C\left\lbrack {0,1}\right\rbrack \) and let \( \phi : \mathbb{R} \rightarrow \mathbb{R} \) be continuously differentiable. Define \( f : X \rightarrow Y \) by the equation \( f\left( x\right) = \phi \circ x \), where \( x \) is any element of \( C\left\lbrack {0,1}\right\rbrack \) . What is \( {f}^{\prime }\left( x\right) \) ?
To answer this, we undertake a calculation of \( f\left( {x + h}\right) - f\left( x\right) \), using the classical mean value theorem:\n\n\[ \left\lbrack {f\left( {x + h}\right) - f\left( x\right) }\right\rbrack \left( t\right) = \phi \left( {x\left( t\right) + h\left( t\right) }\right) - \phi \left( {x\left( t\right) }\right) = {\phi }^{\prime }\left( {x\left( t\right) + \theta \left( t\right) h\left( t\right) }\right) h\left( t\right) \]\n\nwhere \( 0 < \theta \left( t\right) < 1 \) . This suggests that we define \( A \) by\n\n\[ {Ah} = \left( {{\phi }^{\prime } \circ x}\right) h \]\n\nWith this definition, we shall have at every point \( t \),\n\n\[ \left\lbrack {f\left( {x + h}\right) - f\left( x\right) - {Ah}}\right\rbrack \left( t\right) = {\phi }^{\prime }\left( {x\left( t\right) + \theta \left( t\right) h\left( t\right) }\right) h\left( t\right) - {\phi }^{\prime }\left( {x\left( t\right) }\right) h\left( t\right) \]\n\nHence, upon taking the supremum norm, we have\n\n\[ \begin{Vmatrix}{f\left( {x + h}\right) - f\left( x\right) - {Ah}}\end{Vmatrix} \leq \begin{Vmatrix}{{\phi }^{\prime } \circ \left( {x + {\theta h}}\right) - {\phi }^{\prime } \circ x}\end{Vmatrix}\parallel h\parallel \]\n\nBy comparing this to Equation (1) and invoking the uniform continuity of \( {\phi }^{\prime } \) on compact intervals, we see that \( A \) is indeed the derivative of \( f \) at \( x \) . Thus \( {f}^{\prime }\left( x\right) \) is the operator of multiplication by \( {\phi }^{\prime } \circ x \) ; that is, \( {f}^{\prime }\left( x\right) h = \left( {{\phi }^{\prime } \circ x}\right) h \) .
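A small finite illustration of this example (not in the text): with \( \phi = \sin \), the point \( x\left( t\right) = t \), and the perturbation \( h\left( t\right) = \varepsilon \cos t \) (all sample choices), the remainder in Equation (1) is \( O\left( \parallel h{\parallel }^{2}\right) \), hence \( o\left( h\right) \). The sup norm is approximated on a finite grid.

```python
import math

# Check that f(x+h) - f(x) - (phi' o x)h is O(||h||^2) in the sup norm,
# for the composition operator f(x) = phi o x with phi = sin.

eps = 1e-4
grid = [k / 200.0 for k in range(201)]   # grid on [0,1]

phi, dphi = math.sin, math.cos
x = lambda t: t                          # sample element of C[0,1]
h = lambda t: eps * math.cos(t)          # sample perturbation

# sup-norm (over the grid) of the remainder f(x+h) - f(x) - (phi' o x)h
err = max(abs(phi(x(t) + h(t)) - phi(x(t)) - dphi(x(t)) * h(t)) for t in grid)
hnorm = max(abs(h(t)) for t in grid)

assert err <= hnorm ** 2   # remainder bounded by ||h||^2, hence o(||h||)
```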
Theorem 4. Let \( f : {\mathbb{R}}^{n} \rightarrow \mathbb{R} \) . If each of the partial derivatives \( {D}_{i}f\left( { = \partial f/\partial {x}_{i}}\right) \) exists in a neighborhood of \( x \) and is continuous at \( x \) then \( {f}^{\prime }\left( x\right) \) exists, and a formula for it is\n\n\[ \n{f}^{\prime }\left( x\right) h = \mathop{\sum }\limits_{{i = 1}}^{n}{D}_{i}f\left( x\right) \cdot {h}_{i}\;h = \left( {{h}_{1},{h}_{2},\ldots ,{h}_{n}}\right) \in {\mathbb{R}}^{n} \n\]
Proof. We must prove that\n\n\[ \n\mathop{\lim }\limits_{{h \rightarrow 0}}\frac{1}{\parallel h\parallel }\left\lbrack {f\left( {x + h}\right) - f\left( x\right) - \mathop{\sum }\limits_{{i = 1}}^{n}{h}_{i}{D}_{i}f\left( x\right) }\right\rbrack = 0 \n\]\nWe begin by writing\n\n\[ \nf\left( {x + h}\right) - f\left( x\right) = f\left( {v}^{n}\right) - f\left( {v}^{0}\right) = \mathop{\sum }\limits_{{i = 1}}^{n}\left\lbrack {f\left( {v}^{i}\right) - f\left( {v}^{i - 1}\right) }\right\rbrack \n\]\n\nwhere the vectors \( {v}^{i} \) and \( {v}^{i - 1} \) differ in only one coordinate. Thus we put \( {v}^{0} = x \) and \( {v}^{i} = {v}^{i - 1} + {h}_{i}{e}^{i} \), where \( {e}^{i} \) is the \( i \) th standard unit vector. By the mean value theorem for functions of one variable,\n\n\[ \nf\left( {v}^{i}\right) - f\left( {v}^{i - 1}\right) = f\left( {{v}^{i - 1} + {h}_{i}{e}^{i}}\right) - f\left( {v}^{i - 1}\right) = {h}_{i}{D}_{i}f\left( {{v}^{i - 1} + {\theta }_{i}{h}_{i}{e}^{i}}\right) \n\]\n\nwhere \( 0 < {\theta }_{i} < 1 \) . Putting this together, and using the Cauchy-Schwarz inequality, we have\n\n\[ \n\parallel h{\parallel }^{-1}\left| {f\left( {x + h}\right) - f\left( x\right) -\sum {h}_{i}{D}_{i}f\left( x\right) }\right| \n\]\n\n\[ \n= {\begin{Vmatrix}h\end{Vmatrix}}^{-1}\left| {\sum {h}_{i}\left\lbrack {{D}_{i}f\left( {{v}^{i - 1} + {\theta }_{i}{h}_{i}{e}^{i}}\right) - {D}_{i}f\left( x\right) }\right\rbrack }\right| \n\]\n\n\[ \n\leq {\begin{Vmatrix}h\end{Vmatrix}}^{-1}\begin{Vmatrix}h\end{Vmatrix}\sqrt{\sum {\left\lbrack {D}_{i}f\left( {v}^{i - 1} + {\theta }_{i}{h}_{i}{e}^{i}\right) - {D}_{i}f\left( x\right) \right\rbrack }^{2}} \rightarrow 0 \n\]\n\nas \( \parallel h\parallel \rightarrow 0 \), by the continuity of \( {D}_{i}f \) at \( x \) . 
Note that\n\n\[ \n\begin{Vmatrix}{{v}^{i - 1} + {\theta }_{i}{h}_{i}{e}^{i} - x}\end{Vmatrix} = \begin{Vmatrix}\left( {{h}_{1},\ldots ,{h}_{i - 1},{\theta }_{i}{h}_{i},0,0,\ldots ,0}\right) \end{Vmatrix} \leq \parallel h\parallel \n\]
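The theorem is easy to test numerically. The sketch below (not in the text) uses the sample function \( f\left( {{x}_{1},{x}_{2}}\right) = {x}_{1}^{2}\sin {x}_{2} \) : the remainder in the definition of the derivative decays like \( \parallel h{\parallel }^{2} \) when \( {f}^{\prime }\left( x\right) h = \sum {D}_{i}f\left( x\right) {h}_{i} \) is used.

```python
import math

# Verify that f(x+h) - f(x) - sum_i D_i f(x) h_i is O(||h||^2)
# for f(x1, x2) = x1^2 sin(x2).

def f(x1, x2):
    return x1 ** 2 * math.sin(x2)

def partials(x1, x2):
    # (D_1 f, D_2 f)
    return (2.0 * x1 * math.sin(x2), x1 ** 2 * math.cos(x2))

x = (1.2, 0.7)
g1, g2 = partials(*x)

for eps in (1e-3, 1e-4, 1e-5):
    h = (eps, -eps)
    rem = abs(f(x[0] + h[0], x[1] + h[1]) - f(*x) - (g1 * h[0] + g2 * h[1]))
    hnorm = math.hypot(*h)
    assert rem <= 2.0 * hnorm ** 2   # O(||h||^2), hence o(||h||)
```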
Theorem 5. Let \( f : {\mathbb{R}}^{n} \rightarrow {\mathbb{R}}^{m} \), and let \( {f}_{1},\ldots ,{f}_{m} \) be the component functions of \( f \) . If all partial derivatives \( {D}_{j}{f}_{i} \) exist in a neighborhood of \( x \) and are continuous at \( x \), then \( {f}^{\prime }\left( x\right) \) exists, and\n\n\[ \n{\left( {f}^{\prime }\left( x\right) h\right) }_{i} = \mathop{\sum }\limits_{{j = 1}}^{n}{D}_{j}{f}_{i}\left( x\right) \cdot {h}_{j}\;\text{ for all }\;h \in {\mathbb{R}}^{n} \n\]
Proof. Let \( J \) denote the \( m \times n \) Jacobian matrix with entries \( {J}_{ij} = {D}_{j}{f}_{i}\left( x\right) \), so that \( {Jh} \) is the candidate for \( {f}^{\prime }\left( x\right) h \) . By the definition of the Euclidean norm,\n\n\[ \n\frac{1}{{\left\| h\right\| }^{2}}{\left\| f\left( x + h\right) - f\left( x\right) - Jh\right\| }^{2} = \frac{1}{{\left\| h\right\| }^{2}}\mathop{\sum }\limits_{{i = 1}}^{m}{\left\lbrack {f}_{i}\left( x + h\right) - {f}_{i}\left( x\right) - \mathop{\sum }\limits_{{j = 1}}^{n}{D}_{j}{f}_{i}\left( x\right) \cdot {h}_{j}\right\rbrack }^{2} \n\]\n\nEach of the \( m \) terms in the sum (including the divisor \( \parallel h{\parallel }^{2} \) ) converges to 0 as \( h \rightarrow 0 \) . This is exactly the content of the preceding theorem.
Example 5. Let \( f\left( x\right) = \sqrt{\left| {x}_{1}{x}_{2}\right| } \) . Then the two partial derivatives of \( f \) exist at \( \left( {0,0}\right) \), but \( {f}^{\prime }\left( {0,0}\right) \) does not exist.
Details are left to Problem 16.
Let \( L \) be a bounded linear operator on a real Hilbert space \( X \) . Define \( F : X \rightarrow \mathbb{R} \) by the equation \( F\left( x\right) = \langle x,{Lx}\rangle \) . In order to discover whether \( F \) is differentiable at \( x \), we write
\[ F\left( {x + h}\right) - F\left( x\right) = \langle x + h,{Lx} + {Lh}\rangle - \langle x,{Lx}\rangle \] \[ = \langle x,{Lh}\rangle + \langle h,{Lx}\rangle + \langle h,{Lh}\rangle \] Since the derivative is a linear map, we guess that \( A \) should be \( {Ah} = \langle x,{Lh}\rangle + \langle h,{Lx}\rangle \) . With that choice, \( \left| {Ah}\right| \leq 2\parallel x\parallel \parallel L\parallel \parallel h\parallel \), showing that \( \parallel A\parallel \leq 2\parallel x\parallel \parallel L\parallel \) . Thus \( A \) is a bounded linear functional. Furthermore, \[ \left| {F\left( {x + h}\right) - F\left( x\right) - {Ah}}\right| = \left| {\langle h,{Lh}\rangle }\right| \leq \parallel L\parallel \parallel h{\parallel }^{2} = o\left( h\right) \] (The notation \( o\left( h\right) \) is explained in Problem 6.) This establishes that \( A = {F}^{\prime }\left( x\right) \) . Notice that \[ {Ah} = \left\langle {{L}^{ * }x + {Lx}, h}\right\rangle \]
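In finite dimensions this formula is the familiar fact that the gradient of \( {x}^{T}{Lx} \) is \( \left( {{L}^{T} + L}\right) x \) . The check below (not in the text) uses a sample \( 2 \times 2 \) matrix and compares the claimed derivative against central differences, which are exact for quadratics up to rounding.

```python
# On R^2 with a sample matrix L, F(x) = <x, Lx> should have
# F'(x)h = <L*x + Lx, h>, i.e. gradient (L^T + L)x.

L = [[1.0, 2.0],
     [0.5, 3.0]]

def F(x):
    return sum(x[i] * sum(L[i][j] * x[j] for j in range(2)) for i in range(2))

def gradF(x):
    # the vector (L^T + L)x
    return [sum((L[j][i] + L[i][j]) * x[j] for j in range(2)) for i in range(2)]

x = [0.3, -0.8]
g = gradF(x)
eps = 1e-6
for k in range(2):
    xp, xm = list(x), list(x)
    xp[k] += eps
    xm[k] -= eps
    fd = (F(xp) - F(xm)) / (2.0 * eps)   # central difference, exact for quadratics
    assert abs(fd - g[k]) < 1e-8
```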
Theorem 1. The Chain Rule. If \( f \) is differentiable at \( x \) and if \( g \) is differentiable at \( f\left( x\right) \), then \( g \circ f \) is differentiable at \( x \), and\n\n\[{\left( g \circ f\right) }^{\prime }\left( x\right) = {g}^{\prime }\left( {f\left( x\right) }\right) \circ {f}^{\prime }\left( x\right)\]
Proof. Define \( F = g \circ f, A = {f}^{\prime }\left( x\right), y = f\left( x\right), B = {g}^{\prime }\left( y\right) \), and\n\n\[{o}_{1}\left( h\right) = f\left( {x + h}\right) - f\left( x\right) - {Ah}\;\left( {h \in X}\right)\]\n\n\[{o}_{2}\left( k\right) = g\left( {y + k}\right) - g\left( y\right) - {Bk}\;\left( {k \in Y}\right)\]\n\n\[\phi \left( h\right) = {Ah} + {o}_{1}\left( h\right)\]\n\nIt is to be shown that \( {F}^{\prime }\left( x\right) = {BA} \) . This requires a calculation as follows:\n\n\[F\left( {x + h}\right) - F\left( x\right) - {BAh} = g\left( {f\left( {x + h}\right) }\right) - g\left( {f\left( x\right) }\right) - {BAh}\]\n\n\[= g\left\lbrack {f\left( x\right) + {Ah} + {o}_{1}\left( h\right) }\right\rbrack - g\left( y\right) - {BAh}\]\n\n\[= g\left\lbrack {y + \phi \left( h\right) }\right\rbrack - g\left( y\right) - {BAh}\]\n\n\[= g\left( y\right) + {B\phi }\left( h\right) + {o}_{2}\left( {\phi \left( h\right) }\right) - g\left( y\right) - {BAh}\]\n\n\[= B\left\lbrack {{Ah} + {o}_{1}\left( h\right) }\right\rbrack + {o}_{2}\left( {\phi \left( h\right) }\right) - {BAh}\]\n\n\[= B{o}_{1}\left( h\right) + {o}_{2}\left( {\phi \left( h\right) }\right)\]\n\nIn order to see that this last expression is \( o\left( h\right) \), notice first that \( \begin{Vmatrix}{B{o}_{1}\left( h\right) }\end{Vmatrix} \leq \) \( \parallel B\parallel \begin{Vmatrix}{{o}_{1}\left( h\right) }\end{Vmatrix} \) . Hence this term is \( o\left( h\right) \) . Now let \( \varepsilon > 0 \) . 
Select \( {\delta }_{1} > 0 \) so that\n\n\[ \parallel k\parallel < {\delta }_{1}\; \Rightarrow \;\begin{Vmatrix}{{o}_{2}\left( k\right) }\end{Vmatrix} < \varepsilon \parallel k\parallel /\left( {\parallel A\parallel + 1}\right)\]\n\nSelect \( \delta > 0 \) so that \( \delta < {\delta }_{1}/\left( {\parallel A\parallel + 1}\right) \) and so that\n\n\[ \parallel h\parallel < \delta \; \Rightarrow \;\begin{Vmatrix}{{o}_{1}\left( h\right) }\end{Vmatrix} < \parallel h\parallel \]\n\nNow let \( \parallel h\parallel < \delta \) . Then we have\n\n\[ \parallel \phi \left( h\right) \parallel = \begin{Vmatrix}{{Ah} + {o}_{1}\left( h\right) }\end{Vmatrix} \leq \parallel A\parallel \parallel h\parallel + \begin{Vmatrix}{{o}_{1}\left( h\right) }\end{Vmatrix}\]\n\n\[< \left( {\parallel A\parallel + 1}\right) \parallel h\parallel < \left( {\parallel A\parallel + 1}\right) \delta < {\delta }_{1}\]\n\nConsequently, using \( k = \phi \left( h\right) \), we conclude that\n\n\[ \begin{Vmatrix}{{o}_{2}\left( {\phi \left( h\right) }\right) }\end{Vmatrix} < \varepsilon \parallel \phi \left( h\right) \parallel /\left( {\parallel A\parallel + 1}\right) < \varepsilon \parallel h\parallel \]
Theorem 2. Mean Value Theorem I. Let \( f \) be a real-valued mapping defined on an open set \( D \) in a normed linear space. Let \( a, b \in D \) . Assume that the line segment\n\n\[ \left\lbrack {a, b}\right\rbrack = \{ a + t\left( {b - a}\right) : 0 \leq t \leq 1\} \]\n\nlies in \( D \) . If \( f \) is continuous on \( \left\lbrack {a, b}\right\rbrack \) and differentiable on the open line segment \( \left( {a, b}\right) \), then for some \( \xi \) in \( \left( {a, b}\right) \),\n\n\[ f\left( b\right) - f\left( a\right) = {f}^{\prime }\left( \xi \right) \left( {b - a}\right) \]
Proof. Put \( g\left( t\right) = f\left( {a + t\left( {b - a}\right) }\right) \) . Then \( g \) is continuous on the interval \( \left\lbrack {0,1}\right\rbrack \) and differentiable on \( \left( {0,1}\right) \) . By the chain rule,\n\n\[ {g}^{\prime }\left( t\right) = {f}^{\prime }\left( {a + t\left( {b - a}\right) }\right) \left( {b - a}\right) \]\n\nBy the mean value theorem of elementary calculus, there is a \( \tau \) in \( \left( {0,1}\right) \) such that\n\n\[ f\left( b\right) - f\left( a\right) = g\left( 1\right) - g\left( 0\right) = {g}^{\prime }\left( \tau \right) = {f}^{\prime }\left( {a + \tau \left( {b - a}\right) }\right) \left( {b - a}\right) \]\n\n\[ = {f}^{\prime }\left( \xi \right) \left( {b - a}\right) \]\n\nwhere \( \xi = a + \tau \left( {b - a}\right) \in \left( {a, b}\right) \) .
Theorem 3. Mean Value Theorem II. Let \( f \) be a continuous map of a compact interval \( \left\lbrack {a, b}\right\rbrack \) of the real line into a normed linear space \( Y \) . If, for each \( x \) in \( \left( {a, b}\right) ,{f}^{\prime }\left( x\right) \) exists and satisfies \( \begin{Vmatrix}{{f}^{\prime }\left( x\right) }\end{Vmatrix} \leq M \) , then \( \parallel f\left( b\right) - f\left( a\right) \parallel \leq M\left( {b - a}\right) \) .
Proof. It suffices to prove that if \( a < \alpha < \beta < b \), then \( \parallel f\left( \beta \right) - f\left( \alpha \right) \parallel \leq M\left( {b - a}\right) \), because the desired result then follows by continuity. Also, it suffices to prove \( \parallel f\left( \beta \right) - f\left( \alpha \right) \parallel \leq \left( {M + \varepsilon }\right) \left( {b - a}\right) \) for an arbitrary positive \( \varepsilon \) . Let \( S \) be the set of all \( x \) in \( \left\lbrack {\alpha ,\beta }\right\rbrack \) such that\n\n\[ \parallel f\left( x\right) - f\left( \alpha \right) \parallel \leq \left( {M + \varepsilon }\right) \left( {x - a}\right) \]\n\nBy continuity, \( S \) is a closed set, and it is nonempty because \( \alpha \in S \) . Let \( {x}_{0} = \sup S \) . Since \( S \) is compact, \( {x}_{0} \in S \) . To complete the proof, the main task is to show that \( {x}_{0} = \beta \) . Suppose that \( {x}_{0} < \beta \) and look for a contradiction. Since \( f \) is differentiable at \( {x}_{0} \), there is a positive \( \delta \) such that \( \delta < \beta - {x}_{0} \) and\n\n\[ \left| h\right| < \delta \Rightarrow \begin{Vmatrix}{f\left( {{x}_{0} + h}\right) - f\left( {x}_{0}\right) - {f}^{\prime }\left( {x}_{0}\right) h}\end{Vmatrix} < \varepsilon \left| h\right| \]\n\nPut \( h = \delta /2 \) and \( u = {x}_{0} + \delta /2 \) . 
Then\n\n\[ \begin{Vmatrix}{f\left( u\right) - f\left( {x}_{0}\right) - {f}^{\prime }\left( {x}_{0}\right) \left( {u - {x}_{0}}\right) }\end{Vmatrix} < \varepsilon \left( {u - {x}_{0}}\right) \]\n\nHence\n\n\[ \begin{Vmatrix}{f\left( u\right) - f\left( {x}_{0}\right) }\end{Vmatrix} < \begin{Vmatrix}{{f}^{\prime }\left( {x}_{0}\right) \left( {u - {x}_{0}}\right) }\end{Vmatrix} + \varepsilon \left( {u - {x}_{0}}\right) \leq \left( {M + \varepsilon }\right) \left( {u - {x}_{0}}\right) \]\n\nSince \( {x}_{0} \in S \), we have also\n\n\[ \begin{Vmatrix}{f\left( {x}_{0}\right) - f\left( \alpha \right) }\end{Vmatrix} \leq \left( {M + \varepsilon }\right) \left( {{x}_{0} - a}\right) \]\n\nHence\n\n\[ \parallel f\left( u\right) - f\left( \alpha \right) \parallel \leq \begin{Vmatrix}{f\left( u\right) - f\left( {x}_{0}\right) }\end{Vmatrix} + \begin{Vmatrix}{f\left( {x}_{0}\right) - f\left( \alpha \right) }\end{Vmatrix} \leq \left( {M + \varepsilon }\right) \left( {u - a}\right) \]\n\nThis proves that \( u \in S \) . Since \( u > {x}_{0} \), we have a contradiction. Thus \( {x}_{0} = \beta \) , \( \beta \in S \), and\n\n\[ \parallel f\left( \beta \right) - f\left( \alpha \right) \parallel \leq \left( {M + \varepsilon }\right) \left( {\beta - a}\right) < \left( {M + \varepsilon }\right) \left( {b - a}\right) \]
Theorem 4. Mean Value Theorem III. Let \( f \) be a map from an open set \( D \) in one normed linear space into another normed linear space. If the line segment\n\n\[ S = \{ {ta} + \left( {1 - t}\right) b : 0 \leq t \leq 1\} \]\n\nlies in \( D \) and if \( {f}^{\prime }\left( x\right) \) exists at each point of \( S \), then\n\n\[ \parallel f\left( b\right) - f\left( a\right) \parallel \leq \parallel b - a\parallel \mathop{\sup }\limits_{{x \in S}}\begin{Vmatrix}{{f}^{\prime }\left( x\right) }\end{Vmatrix} \]\n
Proof. Define \( g\left( t\right) = f\left( {{ta} + \left( {1 - t}\right) b}\right) \) for \( 0 \leq t \leq 1 \) . By the chain rule, \( {g}^{\prime } \) exists and \( {g}^{\prime }\left( t\right) = {f}^{\prime }\left( {{ta} + \left( {1 - t}\right) b}\right) \left( {a - b}\right) \) . By the second Mean Value Theorem\n\n\[ \parallel f\left( b\right) - f\left( a\right) \parallel = \parallel g\left( 1\right) - g\left( 0\right) \parallel \leq \mathop{\sup }\limits_{{0 \leq t \leq 1}}\begin{Vmatrix}{{g}^{\prime }\left( t\right) }\end{Vmatrix} \leq \parallel b - a\parallel \mathop{\sup }\limits_{{x \in S}}\begin{Vmatrix}{{f}^{\prime }\left( x\right) }\end{Vmatrix} \]\n\nNotice that \( g = f \circ \ell \), where \( \ell \left( t\right) = {ta} + \left( {1 - t}\right) b \) . Thus \( {\ell }^{\prime }\left( t\right) \in \mathcal{L}\left( {\mathbb{R}, X}\right) \) . Hence in the formula for \( {g}^{\prime } \), the term \( \left( {a - b}\right) \) is interpreted as a mapping from \( \mathbb{R} \) to \( X \) defined by \( t \mapsto t \cdot \left( {a - b}\right) \) .
Theorem 5. Let \( X \) and \( Y \) be normed spaces, \( D \) a connected open set in \( X \), and \( f \) a differentiable map of \( D \) into \( Y \) . If \( {f}^{\prime }\left( x\right) = 0 \) for all \( x \in D \), then \( f \) is a constant function.
Proof. Since \( {f}^{\prime }\left( x\right) \) exists for all \( x \in D, f \) is continuous on \( D \) (by Theorem 3 of Section 3.1, page 117). Select \( {x}_{0} \in D \) and define \( A = \left\{ {x \in D : f\left( x\right) = f\left( {x}_{0}\right) }\right\} \) . This is a closed subset of \( D \) (i.e., the intersection of \( D \) with a closed set in \( X \) ). But we can prove that \( A \) is also open. Indeed, if \( x \in A \), then there is a ball \( B\left( {x, r}\right) \subset D \), because \( D \) is open. If \( y \in B\left( {x, r}\right) \), then the line segment from \( x \) to \( y \) lies in \( B\left( {x, r}\right) \) . By the Mean Value Theorem III,\n\n\[ \parallel f\left( x\right) - f\left( y\right) \parallel \leq \parallel x - y\parallel \mathop{\sup }\limits_{{0 \leq t \leq 1}}\begin{Vmatrix}{{f}^{\prime }\left( {{tx} + \left( {1 - t}\right) y}\right) }\end{Vmatrix} = 0 \]\n\nSo \( f\left( y\right) = f\left( x\right) = f\left( {x}_{0}\right) \) . This means that \( y \in A \) . Hence \( B\left( {x, r}\right) \subset A \) . Thus \( A \) is open (it contains a neighborhood of each of its points). A connected set contains no nonempty proper subset that is simultaneously open and closed in it. Since \( A \) is open and closed in \( D \) and nonempty, \( A = D \), and \( f \) is constant on \( D \) .
Theorem 1. Let \( f \) be a function from \( \mathbb{R} \) to \( \mathbb{R} \). Assume that \( {f}^{\prime \prime } \) is bounded, that \( f\left( r\right) = 0 \), and that \( {f}^{\prime }\left( r\right) \neq 0 \). Let \( \delta \) be a positive number such that\n\n\[ \rho \equiv \frac{\delta \mathop{\max }\limits_{{\left| {x - r}\right| \leq \delta }}\left| {{f}^{\prime \prime }\left( x\right) }\right| }{2\mathop{\min }\limits_{{\left| {x - r}\right| \leq \delta }}\left| {{f}^{\prime }\left( x\right) }\right| } < 1 \]\n\nIf Newton’s method is started with \( {x}_{0} \in \left\lbrack {r - \delta, r + \delta }\right\rbrack \), then for all \( n \),\n\n\[ \left| {{x}_{n + 1} - r}\right| \leq \frac{\rho }{\delta }{\left| {{x}_{n} - r}\right| }^{2} \leq \rho \left| {{x}_{n} - r}\right| \]
Proof. Define \( {e}_{n} = {x}_{n} - r \). Then\n\n\[ 0 = f\left( r\right) = f\left( {{x}_{n} - {e}_{n}}\right) = f\left( {x}_{n}\right) - {e}_{n}{f}^{\prime }\left( {x}_{n}\right) + \frac{1}{2}{e}_{n}^{2}{f}^{\prime \prime }\left( {\xi }_{n}\right) \]\n\nIn this equation, the point \( {\xi }_{n} \) is between \( {x}_{n} \) and \( r \). Hence \( \left| {{\xi }_{n} - r}\right| \leq \left| {{x}_{n} - r}\right| = \left| {e}_{n}\right| \). Using this we have\n\n\[ {e}_{n + 1} = {x}_{n + 1} - r = {x}_{n} - \frac{f\left( {x}_{n}\right) }{{f}^{\prime }\left( {x}_{n}\right) } - r = {e}_{n} - \frac{f\left( {x}_{n}\right) }{{f}^{\prime }\left( {x}_{n}\right) } \]\n\n\[ = \frac{{e}_{n}{f}^{\prime }\left( {x}_{n}\right) - f\left( {x}_{n}\right) }{{f}^{\prime }\left( {x}_{n}\right) } = \frac{1}{2}{e}_{n}^{2}\frac{{f}^{\prime \prime }\left( {\xi }_{n}\right) }{{f}^{\prime }\left( {x}_{n}\right) } \]\n\nSince \( \left| {{x}_{0} - r}\right| \leq \delta \) by hypothesis, we have \( \left| {e}_{0}\right| \leq \delta \) and \( \left| {{\xi }_{0} - r}\right| \leq \delta \). Hence \( \left| {e}_{1}\right| \leq \frac{1}{2}{e}_{0}^{2}\left| {{f}^{\prime \prime }\left( {\xi }_{0}\right) }\right| /\left| {{f}^{\prime }\left( {x}_{0}\right) }\right| \leq \frac{1}{2}{e}_{0}^{2} \cdot {2\rho }/\delta \leq \rho \left| {e}_{0}\right| \). By repeating this argument we establish that \( \left| {{x}_{n + 1} - r}\right| \leq \rho \left| {{x}_{n} - r}\right| \) (convergence). Similarly, we have \( \left| {e}_{1}\right| \leq \left( {\rho /\delta }\right) {e}_{0}^{2} \) and \( \left| {e}_{n + 1}\right| \leq \left( {\rho /\delta }\right) {e}_{n}^{2} \) (quadratic convergence).
For finding the square root of a given positive number \( a \), one can solve the equation \( {x}^{2} - a = 0 \) by Newton’s method. The iteration formula turns out to be\n\n\[ \n{x}_{n + 1} = \frac{1}{2}\left( {{x}_{n} + \frac{a}{{x}_{n}}}\right)\n\]\n\nThis formula was known to the ancient Greeks and is called Heron's formula.
In order to see how well it performs, we can use a computer system such as Mathematica, Maple, or Matlab to obtain the Newton approximations to \( \sqrt{2} \) . The iteration function is \( g\left( x\right) = \left( {x + 2/x}\right) /2 \), and a reasonable starting point is \( {x}_{0} = 1 \) . Mathematica is capable of displaying \( {x}_{n} \) with any number of significant figures; we chose 60. The input commands to Mathematica are shown here. (Each one should be separated from the following one by a semicolon, as shown.) The output, not shown, indicates that the seventh iterate has at least 60 correct digits!\n\n\[ \ng\left\lbrack {x}_{ - }\right\rbrack \mathrel{\text{:=}} \left( {x + \left( {2/x}\right) }\right) /2;\;g\left\lbrack 1\right\rbrack ;\;N\left\lbrack {\% ,{60}}\right\rbrack ;\;g\left\lbrack \% \right\rbrack ;\;g\left\lbrack \% \right\rbrack ;\;\ldots\n\]
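The same experiment can be run in Python (the text uses Mathematica). With ordinary double precision we cannot see 60 digits, but the quadratic convergence is already visible in the error sequence:

```python
import math

# Heron's iteration x_{n+1} = (x_n + 2/x_n)/2 for sqrt(2), from x_0 = 1.

x = 1.0
errors = []
for _ in range(8):
    x = (x + 2.0 / x) / 2.0
    errors.append(abs(x - math.sqrt(2.0)))

assert errors[1] < errors[0] ** 2   # the error is roughly squared at each step
assert errors[-1] < 1e-15           # machine precision after a few iterations
```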
We illustrate the mechanics of Newton's method in higher dimensions with the following problem:\n\n\[ \left\{ \begin{array}{l} x - y + 1 = 0 \\ {x}^{2} + {y}^{2} - 4 = 0 \end{array}\right. \]\n\nwhere \( x \) and \( y \) are real variables. We have here a mapping \( f : {\mathbb{R}}^{2} \rightarrow {\mathbb{R}}^{2} \), and we seek one or more zeros of \( f \) . The Newton iteration is \( {u}_{n + 1} = {u}_{n} - {\left\lbrack {f}^{\prime }\left( {u}_{n}\right) \right\rbrack }^{-1}f\left( {u}_{n}\right) \), where \( {u}_{n} = \left( {{x}_{n},{y}_{n}}\right) \in {\mathbb{R}}^{2} \) . The derivative \( {f}^{\prime }\left( u\right) \) is given by the Jacobian matrix \( J \) . We find that\n\n\[ J = \left\lbrack \begin{array}{rr} 1 & - 1 \\ {2x} & {2y} \end{array}\right\rbrack \;{J}^{-1} = \frac{1}{{2x} + {2y}}\left\lbrack \begin{array}{rr} {2y} & 1 \\ - {2x} & 1 \end{array}\right\rbrack \]
Hence the iteration formula, in detail, is this:\n\n\[ \left\lbrack \begin{array}{l} {x}_{n + 1} \\ {y}_{n + 1} \end{array}\right\rbrack = \left\lbrack \begin{array}{l} {x}_{n} \\ {y}_{n} \end{array}\right\rbrack - \frac{1}{2{x}_{n} + 2{y}_{n}}\left\lbrack \begin{array}{rr} 2{y}_{n} & 1 \\ - 2{x}_{n} & 1 \end{array}\right\rbrack \left\lbrack \begin{array}{l} {x}_{n} - {y}_{n} + 1 \\ {x}_{n}^{2} + {y}_{n}^{2} - 4 \end{array}\right\rbrack \]\n\nIf we start at \( {u}_{0} = {\left( 0,2\right) }^{T} \), the next vectors are \( {u}_{1} = {\left( 1,2\right) }^{T} \) and \( {u}_{2} = {\left( 5/6,{11}/6\right) }^{T} \) . A symbolic computation system such as those mentioned above can be used here, too. The problem is chosen intentionally as one easily visualized: One seeks the points where a line intersects a circle. See Figure 3.1.
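The iteration above is easily programmed; the following Python sketch (not from the text) writes out the inverse Jacobian explicitly and confirms convergence to an intersection of the line and the circle.

```python
import math

# Two-dimensional Newton iteration for x - y + 1 = 0, x^2 + y^2 - 4 = 0.

def newton_step(x, y):
    f1 = x - y + 1.0               # residual of the line equation
    f2 = x * x + y * y - 4.0       # residual of the circle equation
    d = 2.0 * x + 2.0 * y          # det J = 2x + 2y
    dx = (2.0 * y * f1 + f2) / d   # first component of J^{-1} f
    dy = (-2.0 * x * f1 + f2) / d  # second component of J^{-1} f
    return x - dx, y - dy

u = (0.0, 2.0)                     # the starting point u_0 used above
for _ in range(6):
    u = newton_step(*u)
x, y = u

# the iterates converge to the intersection ((-1+sqrt(7))/2, (1+sqrt(7))/2)
assert abs(x - (-1.0 + math.sqrt(7.0)) / 2.0) < 1e-12
assert abs(x - y + 1.0) < 1e-12 and abs(x * x + y * y - 4.0) < 1e-12
```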
|
Yes
|
Theorem 4. There is a neighborhood of \( {x}^{ * } \) such that the iteration sequence defined in Equation (7) converges to \( {x}^{ * } \) for arbitrary starting points in that neighborhood.
|
Proof. Select \( \varepsilon > 0 \) such that\n\n(10)\n\n\[ \theta \equiv \lambda + {M\varepsilon } < 1 \]\n\nBy the definition of the Fréchet derivative \( {F}^{\prime }\left( {x}^{ * }\right) \), we can write\n\n(11)\n\n\[ F\left( x\right) = F\left( {x}^{ * }\right) + {F}^{\prime }\left( {x}^{ * }\right) \left( {x - {x}^{ * }}\right) + \eta \left( x\right) \]\n\nwhere \( \eta \left( x\right) \) is \( o\left( \begin{Vmatrix}{x - {x}^{ * }}\end{Vmatrix}\right) \) . In particular, we can select \( \delta > 0 \) so that\n\n(12)\n\n\[ \begin{Vmatrix}{x - {x}^{ * }}\end{Vmatrix} \leq \delta \Rightarrow \left\lbrack {x \in \Omega \text{ and }\parallel \eta \left( x\right) \parallel \leq \varepsilon \begin{Vmatrix}{x - {x}^{ * }}\end{Vmatrix}}\right\rbrack \]\n\nFrom (11), using the fact that \( F\left( {x}^{ * }\right) = 0 \) and the definition of \( G \), we have\n\n\[ G\left( x\right) - {x}^{ * } = x - {x}^{ * } - A\left( x\right) F\left( x\right) \]\n\n\[ = x - {x}^{ * } - A\left( x\right) \left\lbrack {{F}^{\prime }\left( {x}^{ * }\right) \left( {x - {x}^{ * }}\right) + \eta \left( x\right) }\right\rbrack \]\n\n\[ = x - {x}^{ * } - A\left( x\right) {F}^{\prime }\left( {x}^{ * }\right) \left( {x - {x}^{ * }}\right) - A\left( x\right) \eta \left( x\right) \]\n\n\[ = \left\lbrack {I - A\left( x\right) {F}^{\prime }\left( {x}^{ * }\right) }\right\rbrack \left( {x - {x}^{ * }}\right) - A\left( x\right) \eta \left( x\right) \]\n\nIf we assume further that \( \begin{Vmatrix}{x - {x}^{ * }}\end{Vmatrix} \leq \delta \), then\n\n\[ \begin{Vmatrix}{G\left( x\right) - {x}^{ * }}\end{Vmatrix} \leq \lambda \begin{Vmatrix}{x - {x}^{ * }}\end{Vmatrix} + M\parallel \eta \left( x\right) \parallel \]\n\n\[ \leq \lambda \begin{Vmatrix}{x - {x}^{ * }}\end{Vmatrix} + {M\varepsilon }\begin{Vmatrix}{x - {x}^{ * }}\end{Vmatrix} \]\n\n\[ = \left( {\lambda + {M\varepsilon }}\right) \begin{Vmatrix}{x - {x}^{ * }}\end{Vmatrix} = \theta \begin{Vmatrix}{x - {x}^{ * }}\end{Vmatrix} \]\n\nIf the 
starting point \( {x}_{0} \) for the iteration is within distance \( \delta \) of \( {x}^{ * } \), then\n\n\[ \begin{Vmatrix}{{x}_{1} - {x}^{ * }}\end{Vmatrix} = \begin{Vmatrix}{G\left( {x}_{0}\right) - {x}^{ * }}\end{Vmatrix} \leq \theta \begin{Vmatrix}{{x}_{0} - {x}^{ * }}\end{Vmatrix} \leq {\theta \delta } \]\n\nContinuing, we have\n\n\[ \begin{Vmatrix}{{x}_{2} - {x}^{ * }}\end{Vmatrix} = \begin{Vmatrix}{G\left( {x}_{1}\right) - {x}^{ * }}\end{Vmatrix} \leq \theta \begin{Vmatrix}{{x}_{1} - {x}^{ * }}\end{Vmatrix} \leq {\theta }^{2}\delta \]\n\nIn general, \( \begin{Vmatrix}{{x}_{n} - {x}^{ * }}\end{Vmatrix} \leq {\theta }^{n}\delta \), and hence \( {x}_{n} \rightarrow {x}^{ * } \) .
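The linear convergence rate \( \theta \) predicted by the theorem can be observed numerically. The sketch below uses a hypothetical scalar instance of the scheme (not taken from the text): the chord method, in which \( A\left( x\right) \) is frozen at \( 1/{F}^{\prime }\left( {x}_{0}\right) \), so successive error ratios settle near a constant \( \theta < 1 \):

```python
import math

F = lambda x: x**2 - 2.0        # root x* = sqrt(2)
A = 1.0 / (2.0 * 1.5)           # frozen A = 1/F'(x0), with x0 = 1.5

x, root = 1.5, math.sqrt(2.0)
errors = []
for _ in range(8):
    x = x - A * F(x)            # x_{n+1} = x_n - A F(x_n)
    errors.append(abs(x - root))

# Consecutive error ratios approach a constant theta < 1 (linear convergence).
ratios = [e2 / e1 for e1, e2 in zip(errors, errors[1:])]
print(ratios)
```

With genuine Newton's method, where \( A\left( x\right) = {F}^{\prime }{\left( x\right) }^{-1} \) is updated every step, the ratios would instead collapse toward zero (superlinear convergence).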
|
Yes
|
Theorem 5. If \( \left| \lambda \right| k < 1 \), then the integral equation (13) above has a unique solution.
|
Proof. Apply the Contraction Mapping Theorem (Chapter 4, Section 2, page 177) to the mapping \( F \) defined on \( C\left\lbrack {0,1}\right\rbrack \) by \( \left( {Fx}\right) \left( s\right) = v\left( s\right) + \lambda {\int }_{0}^{1}g\left( {s, t, x\left( t\right) }\right) {dt} \) . We see easily that\n\n\[ \begin{Vmatrix}{F{x}_{1} - F{x}_{2}}\end{Vmatrix} = \mathop{\sup }\limits_{s}\left| {\left( {F{x}_{1}}\right) \left( s\right) - \left( {F{x}_{2}}\right) \left( s\right) }\right| \]\n\n\[ \leq \left| \lambda \right| \mathop{\sup }\limits_{s}{\int }_{0}^{1}\left| {g\left( {s, t,{x}_{1}\left( t\right) }\right) - g\left( {s, t,{x}_{2}\left( t\right) }\right) }\right| {dt} \]\n\n\[ \leq \left| \lambda \right| {\int }_{0}^{1}k\left| {{x}_{1}\left( t\right) - {x}_{2}\left( t\right) }\right| {dt} \]\n\n\[ \leq \left| \lambda \right| k\begin{Vmatrix}{{x}_{1} - {x}_{2}}\end{Vmatrix} \]\n\nIf \( \left| \lambda \right| k < 1 \), then the sequence \( {x}_{n + 1} = F\left( {x}_{n}\right) \) will converge, in the space \( C\left\lbrack {0,1}\right\rbrack \), to a solution of the integral equation. In this process, \( {x}_{0} \) can be an arbitrary starting point in \( C\left\lbrack {0,1}\right\rbrack \) .
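The Picard iteration \( {x}_{n + 1} = F\left( {x}_{n}\right) \) in this proof is easy to simulate. The sketch below uses a hypothetical kernel \( g\left( {s, t, u}\right) = {st}\cos u \) (Lipschitz in \( u \) with \( k = 1 \)), \( v\left( s\right) = 1 \), and \( \lambda = 1/2 \), so \( \left| \lambda \right| k = 1/2 < 1 \); the integral is discretized with the trapezoid rule:

```python
import numpy as np

n = 201
s = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))
w[0] = w[-1] = 0.5 / (n - 1)        # trapezoid-rule weights on [0, 1]
lam = 0.5
v = np.ones(n)                      # v(s) = 1

def F(x):
    # (Fx)(s) = v(s) + lam * integral_0^1 s t cos(x(t)) dt
    return v + lam * s * (w @ (s * np.cos(x)))

x = np.zeros(n)                     # arbitrary starting point
for _ in range(60):
    x = F(x)

residual = np.max(np.abs(x - F(x)))
print(residual)  # essentially zero: x is a fixed point of F
```

Each sweep contracts the distance between iterates by at least the factor \( \left| \lambda \right| k = 1/2 \), as the proof predicts.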
|
Yes
|
Theorem 6. The operator \( B \), as just defined, is also an integral operator, having the form\n\n(15)\n\n\[ \left( {Bh}\right) \left( s\right) = {\int }_{0}^{1}r\left( {s, t}\right) h\left( t\right) {dt} \]\n\nThe kernel satisfies these two integral equations\n\n(16)\n\n\[ \left\{ \begin{array}{l} r\left( {s, t}\right) = k\left( {s, t}\right) + \lambda {\int }_{0}^{1}k\left( {s, u}\right) r\left( {u, t}\right) {du} \\ r\left( {s, t}\right) = k\left( {s, t}\right) + \lambda {\int }_{0}^{1}k\left( {u, t}\right) r\left( {s, u}\right) {du} \end{array}\right. \]
|
Proof. From the definition of \( B \) we have \( {\lambda B} = {\left( I - \lambda A\right) }^{-1} - I \) or \( I + {\lambda B} = \) \( {\left( I - \lambda A\right) }^{-1} \) . Consequently, we have\n\n\[ \left( {I + {\lambda B}}\right) \left( {I - {\lambda A}}\right) = \left( {I - {\lambda A}}\right) \left( {I + {\lambda B}}\right) = I \]\n\n\[ I + {\lambda B} - {\lambda A} - {\lambda }^{2}{BA} = I - {\lambda A} + {\lambda B} - {\lambda AB} = I \]\n\n\[ {\lambda B} - {\lambda A} - {\lambda }^{2}{BA} = {\lambda B} - {\lambda A} - {\lambda }^{2}{AB} = 0 \]\n\nand\n\n(17)\n\n\[ B = \left( {I + {\lambda B}}\right) A = A\left( {I + {\lambda B}}\right) \]\n\nConversely, from Equation (17) we can prove Equation (14). Thus Equation (17) serves to characterize \( B \) . Now assume that \( r \) satisfies Equations (16) and that \( B \) is defined by Equation (15). We will show that \( B \) must satisfy Equation (17), and hence Equation (14). We have\n\n\[ \left\lbrack {\left( {A + {\lambda BA}}\right) h}\right\rbrack \left( s\right) = {\int }_{0}^{1}k\left( {s, t}\right) h\left( t\right) {dt} + \lambda \int r\left( {s, u}\right) \left( {Ah}\right) \left( u\right) {du} \]\n\n\[ = {\int }_{0}^{1}k\left( {s, t}\right) h\left( t\right) {dt} + \lambda {\int }_{0}^{1}r\left( {s, u}\right) {\int }_{0}^{1}k\left( {u, t}\right) h\left( t\right) {dtdu} \]\n\n\[ = {\int }_{0}^{1}\left\{ {k\left( {s, t}\right) + \lambda {\int }_{0}^{1}r\left( {s, u}\right) k\left( {u, t}\right) {du}}\right\} h\left( t\right) {dt} \]\n\n\[ = {\int }_{0}^{1}r\left( {s, t}\right) h\left( t\right) {dt} = \left( {Bh}\right) \left( s\right) \]\n\nThis proves that \( B = A + {\lambda BA} \) . Similarly, \( B = A + {\lambda AB} \) .
|
Yes
|
Solve the integral equation\n\n\[ x\left( s\right) - {\int }_{0}^{1}{st}\operatorname{Arctan}x\left( t\right) {dt} = 1 + {s}^{2} - {0.485s} \]
|
This conforms to the general theory outlined above. We have as kernel \( g\left( {s, t, u}\right) = {st}\operatorname{Arctan}u \), and \( {g}_{3}\left( {s, t, u}\right) = {st}/\left( {1 + {u}^{2}}\right) \) . We take as starting point for the Newton iteration the constant function \( {x}_{0}\left( t\right) = 3/2 \) . Then\n\n\[ {g}_{3}\left( {s, t,{x}_{0}\left( t\right) }\right) = {st}/\left( {1 + \frac{9}{4}}\right) = \frac{4}{13}{st} = {\alpha st} \]\n\nThen \( {f}^{\prime }\left( {x}_{0}\right) = I - A \), where \( \left( {Ax}\right) \left( s\right) = {\int }_{0}^{1}{\alpha stx}\left( t\right) {dt} \) . Also we can express \( {f}^{\prime }{\left( {x}_{0}\right) }^{-1} = {\left( I - A\right) }^{-1} = I + B \), as in the preceding proof. We know that \( B \) is an integral operator whose kernel, \( r \), satisfies the equations\n\n\[ \left\{ \begin{array}{l} r\left( {s, t}\right) = {\alpha st} + {\int }_{0}^{1}{\alpha sur}\left( {u, t}\right) {du} \\ r\left( {s, t}\right) = {\alpha st} + {\int }_{0}^{1}{\alpha tur}\left( {s, u}\right) {du} \end{array}\right. \]\n\nFrom these equations it is evident that \( r\left( {s, t}\right) /{st} \) is on the one hand a function of \( t \) only, and on the other hand a function of \( s \) only. Thus \( r\left( {s, t}\right) /{st} \) is constant, say \( \beta \), and \( r\left( {s, t}\right) = {\beta st} \) . Substituting in the integral equation for \( r \) and solving gives us \( \beta = {12}/{35} \) . One step in the Newton algorithm will be \( {x}_{1} = {x}_{0} - {f}^{\prime }{\left( {x}_{0}\right) }^{-1}f\left( {x}_{0}\right) \) . 
We compute \( y = f\left( {x}_{0}\right) \) as follows:\n\n\[ y\left( s\right) = {x}_{0}\left( s\right) - {\int }_{0}^{1}g\left( {s, t,{x}_{0}\left( t\right) }\right) {dt} - v\left( s\right) \]\n\n\[ = \frac{3}{2} - {\int }_{0}^{1}{st}\operatorname{Arctan}\frac{3}{2}{dt} - 1 - {s}^{2} + {.485s} \]\n\n\[ = \frac{1}{2} - {.0063968616s} - {s}^{2} \]\n\nThen \( {x}_{1} = {x}_{0} - {f}^{\prime }{\left( {x}_{0}\right) }^{-1}y = {x}_{0} - \left( {I + B}\right) y = {x}_{0} - y - {By} \) . Hence\n\n\[ {x}_{1}\left( s\right) = \frac{3}{2} - \left\lbrack {\frac{1}{2} - {\gamma s} - {s}^{2}}\right\rbrack - {\int }_{0}^{1}{\beta st}\left\lbrack {\frac{1}{2} - {\gamma t} - {t}^{2}}\right\rbrack {dt}\;\left( {\gamma \approx {.0063968616}}\right) \]\n\n\[ = 1 + {s}^{2} + \left( {.0071279315}\right) s \]
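The step just computed can be checked numerically. The sketch below rebuilds \( {x}_{1} = {x}_{0} - y - {By} \) on a grid, with the operator \( B \) applied by trapezoid quadrature, and compares against the stated closed form:

```python
import numpy as np

beta = 12.0 / 35.0
gamma = np.arctan(1.5) / 2.0 - 0.485     # = 0.0063968616...
n = 2001
s = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))
w[0] = w[-1] = 0.5 / (n - 1)             # trapezoid-rule weights on [0, 1]

y = 0.5 - gamma * s - s**2               # y = f(x0), as computed above
By = beta * s * (w @ (s * y))            # (By)(s) = int_0^1 beta s t y(t) dt
x1 = 1.5 - y - By                        # x1 = x0 - y - By

x1_claimed = 1.0 + s**2 + 0.0071279315 * s
print(np.max(np.abs(x1 - x1_claimed)))   # only quadrature-level discrepancy
```

The agreement confirms both the resolvent constant \( \beta = {12}/{35} \) and the coefficient \( {.0071279315} = \gamma \cdot {39}/{35} \).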
|
Yes
|
Theorem 2. Implicit Function Theorem for Many Variables.\n\nLet \( F : {\mathbb{R}}^{n} \times \mathbb{R} \rightarrow \mathbb{R} \), and suppose that \( F\left( {{x}_{0},{y}_{0}}\right) = 0 \) for some \( {x}_{0} \in {\mathbb{R}}^{n} \) and \( {y}_{0} \in \mathbb{R} \) . If all \( n + 1 \) partial derivatives \( {D}_{i}F \) exist and are continuous in a neighborhood of \( \left( {{x}_{0},{y}_{0}}\right) \) and if \( {D}_{n + 1}F\left( {{x}_{0},{y}_{0}}\right) \neq 0 \), then there is a continuously differentiable function \( f \) defined on a neighborhood of \( {x}_{0} \) such that \( F\left( {x, f\left( x\right) }\right) = 0, f\left( {x}_{0}\right) = {y}_{0} \), and\n\n\[ \n{D}_{i}f\left( x\right) = - {D}_{i}F\left( {x, f\left( x\right) }\right) /{D}_{n + 1}F\left( {x, f\left( x\right) }\right) \;\left( {1 \leq i \leq n}\right) \n\]
|
Proof. This is left as a problem (Problem 3.4.4).
|
No
|
Theorem 3. General Implicit Function Theorem. Let \( X, Y \), and \( Z \) be normed linear spaces, \( Y \) being assumed complete. Let \( \Omega \) be an open set in \( X \times Y \) . Let \( F : \Omega \rightarrow Z \) . Let \( \left( {{x}_{0},{y}_{0}}\right) \in \Omega \) . Assume that \( F \) is continuous at \( \left( {{x}_{0},{y}_{0}}\right) \), that \( F\left( {{x}_{0},{y}_{0}}\right) = 0 \), that \( {D}_{2}F \) exists in \( \Omega \), that \( {D}_{2}F \) is continuous at \( \left( {{x}_{0},{y}_{0}}\right) \), and that \( {D}_{2}F\left( {{x}_{0},{y}_{0}}\right) \) is invertible. Then there is a function \( f \) defined on a neighborhood of \( {x}_{0} \) such that \( F\left( {x, f\left( x\right) }\right) = 0, f\left( {x}_{0}\right) = {y}_{0}, f \) is continuous at \( {x}_{0} \), and \( f \) is unique in the sense that any other such function must agree with \( f \) on some neighborhood of \( {x}_{0} \) .
|
Proof. We can assume that \( \left( {{x}_{0},{y}_{0}}\right) = \left( {0,0}\right) \) . Select \( \delta > 0 \) so that\n\n\[ \{ \left( {x, y}\right) : \parallel x\parallel \leq \delta ,\parallel y\parallel \leq \delta \} \subset \Omega \]\n\nPut \( A = {D}_{2}F\left( {0,0}\right) \) . Then \( A \in \mathcal{L}\left( {Y, Z}\right) \) and \( {A}^{-1} \in \mathcal{L}\left( {Z, Y}\right) \) . For each \( x \) satisfying \( \parallel x\parallel \leq \delta \) we define \( {G}_{x}\left( y\right) = y - {A}^{-1}F\left( {x, y}\right) \) . Here \( \parallel y\parallel \leq \delta \) . Observe that if \( {G}_{x} \) has a fixed point \( {y}^{ * } \), then\n\n\[ {y}^{ * } = {G}_{x}\left( {y}^{ * }\right) = {y}^{ * } - {A}^{-1}F\left( {x,{y}^{ * }}\right) \]\n\nfrom which we conclude that \( F\left( {x,{y}^{ * }}\right) = 0 \) . Let us therefore set about proving that \( {G}_{x} \) has a fixed point. We shall employ the Contraction Mapping Theorem (Chapter 4, Section 2, page 177). We have\n\n\[ {G}_{x}^{\prime }\left( y\right) = I - {A}^{-1}{D}_{2}F\left( {x, y}\right) = {A}^{-1}\left\{ {{D}_{2}F\left( {0,0}\right) - {D}_{2}F\left( {x, y}\right) }\right\} \]\n\nBy the continuity of \( {D}_{2}F \) at \( \left( {0,0}\right) \), the norm of this operator can be made less than 1 by taking \( \parallel x\parallel \) and \( \parallel y\parallel \) sufficiently small, so that \( {G}_{x} \) is a contraction.
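The contraction \( {G}_{x} \) at the heart of this proof is easy to watch in a scalar example. The sketch below uses the hypothetical data \( F\left( {x, y}\right) = {x}^{2} + {y}^{2} - 1 \) with \( \left( {{x}_{0},{y}_{0}}\right) = \left( {0,1}\right) \), so \( A = {D}_{2}F\left( {0,1}\right) = 2 \), and the fixed point of \( {G}_{x} \) is the implicit function \( y\left( x\right) = \sqrt{1 - {x}^{2}} \):

```python
import math

def G(x, y):
    # G_x(y) = y - A^{-1} F(x, y), with F(x,y) = x^2 + y^2 - 1 and A = 2
    return y - (x**2 + y**2 - 1.0) / 2.0

x = 0.1            # a point near x0 = 0
y = 1.0            # start the iteration at y0
for _ in range(60):
    y = G(x, y)

print(y, math.sqrt(1 - x**2))  # the fixed point is the implicit function y(x)
```

Near the fixed point, \( \frac{\partial }{\partial y}{G}_{x}\left( y\right) = 1 - y \approx {0.005} \), so the iteration contracts very rapidly.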
|
Yes
|
Theorem 5. Inverse Function Theorem I. Let \( f \) be a continuously differentiable map from an open set \( \Omega \) in a Banach space into a normed linear space. If \( {x}_{0} \in \Omega \) and if \( {f}^{\prime }\left( {x}_{0}\right) \) is invertible, then there is a continuously differentiable function \( g \) defined on a neighborhood \( \mathcal{N} \) of \( f\left( {x}_{0}\right) \) such that \( f\left( {g\left( y\right) }\right) = y \) for all \( y \in \mathcal{N} \) .
|
Proof. For \( x \) in \( \Omega \) and \( y \) in the second space, define \( F\left( {x, y}\right) = f\left( x\right) - y \) . Put \( {y}_{0} = f\left( {x}_{0}\right) \) so that \( F\left( {{x}_{0},{y}_{0}}\right) = 0 \) . Note that \( {D}_{1}F\left( {x, y}\right) = {f}^{\prime }\left( x\right) \), and thus \( {D}_{1}F\left( {{x}_{0},{y}_{0}}\right) \) is invertible. By Theorem 4, there is a neighborhood \( \mathcal{N} \) of \( {y}_{0} \) and a continuously differentiable function \( g \) defined on \( \mathcal{N} \) such that \( F\left( {g\left( y\right), y}\right) = 0 \) , or \( f\left( {g\left( y\right) }\right) - y = 0 \) for all \( y \in \mathcal{N} \) .
|
Yes
|
Theorem 6. Surjective Mapping Theorem I. Let \( X \) and \( Y \) be Banach spaces, \( \Omega \) an open set in \( X \) . Let \( f : \Omega \rightarrow Y \) be a continuously differentiable map. Let \( {x}_{0} \in \Omega \) and \( {y}_{0} = f\left( {x}_{0}\right) \) . If \( {f}^{\prime }\left( {x}_{0}\right) \) is invertible, as an element of \( \mathcal{L}\left( {X, Y}\right) \), then \( f\left( \Omega \right) \) is a neighborhood of \( {y}_{0} \) .
|
Proof. Define \( F : \Omega \times Y \rightarrow Y \) by putting \( F\left( {x, y}\right) = f\left( x\right) - y \) . Then \( F\left( {{x}_{0},{y}_{0}}\right) = \) 0 and \( {D}_{1}F\left( {{x}_{0},{y}_{0}}\right) = {f}^{\prime }\left( {x}_{0}\right) \) . ( \( {D}_{1} \) is a partial derivative, as defined previously.) By hypothesis, \( {D}_{1}F\left( {{x}_{0},{y}_{0}}\right) \) is invertible. By the Implicit Function Theorem (with the rôles of \( x \) and \( y \) reversed!), there exist a neighborhood \( \mathcal{N} \) of \( {y}_{0} \) and a function \( g : \mathcal{N} \rightarrow \Omega \) such that \( g\left( {y}_{0}\right) = {x}_{0} \) and \( F\left( {g\left( y\right), y}\right) = 0 \) for all \( y \in \mathcal{N} \) . From the definition of \( F \) we have \( f\left( {g\left( y\right) }\right) - y = 0 \) for all \( y \in \mathcal{N} \) . In other words, each element \( y \) of \( \mathcal{N} \) is the image under \( f \) of some point in \( \Omega \), namely, \( g\left( y\right) \) .
|
Yes
|
Theorem 7. A Fixed Point Theorem. Let \( \Omega \) be an open set in a Banach space \( X \), and let \( G \) be a differentiable map from \( \Omega \) to \( X \) . Suppose that there is a closed ball \( B \equiv B\left( {{x}_{0}, r}\right) \) in \( \Omega \) such that\n\n(i) \( k \equiv \mathop{\sup }\limits_{{x \in B}}\begin{Vmatrix}{{G}^{\prime }\left( x\right) }\end{Vmatrix} < 1 \)\n\n(ii) \( \begin{Vmatrix}{G\left( {x}_{0}\right) - {x}_{0}}\end{Vmatrix} < r\left( {1 - k}\right) \)\n\nThen \( G \) has a unique fixed point in \( B \) .
|
Proof. First, we show that \( G \mid B \) is a contraction. If \( {x}_{1} \) and \( {x}_{2} \) are in \( B \), then by the Mean Value Theorem (Theorem 4 in Section 3.2, page 123)\n\n\[ \begin{Vmatrix}{G\left( {x}_{1}\right) - G\left( {x}_{2}\right) }\end{Vmatrix} \leq \mathop{\sup }\limits_{{0 \leq \lambda \leq 1}}\begin{Vmatrix}{{G}^{\prime }\left( {{x}_{1} + \lambda \left( {{x}_{2} - {x}_{1}}\right) }\right) }\end{Vmatrix}\begin{Vmatrix}{{x}_{1} - {x}_{2}}\end{Vmatrix} \]\n\n\[ \leq k\begin{Vmatrix}{{x}_{1} - {x}_{2}}\end{Vmatrix} \]\n\nSecond, we show that \( G \) maps \( B \) into \( B \) . If \( x \in B \), then\n\n\[ \begin{Vmatrix}{G\left( x\right) - {x}_{0}}\end{Vmatrix} \leq \begin{Vmatrix}{G\left( x\right) - G\left( {x}_{0}\right) }\end{Vmatrix} + \begin{Vmatrix}{G\left( {x}_{0}\right) - {x}_{0}}\end{Vmatrix} \]\n\n\[ \leq k\begin{Vmatrix}{x - {x}_{0}}\end{Vmatrix} + r\left( {1 - k}\right) \]\n\n\[ \leq {kr} + \left( {1 - k}\right) r = r \]\n\nSince \( X \) is complete, \( B \) is a complete metric space. By the Contraction Mapping Theorem (page 177), \( G \) has a unique fixed point in \( B \) .
|
Yes
|
Theorem 8. Inverse Function Theorem II. Let \( \Omega \) be an open set in a Banach space \( X \) . Let \( f \) be a differentiable map from \( \Omega \) to a normed space \( Y \) . Assume that \( \Omega \) contains a closed ball \( B \equiv B\left( {{x}_{0}, r}\right) \) such that\n\n(i) The linear transformation \( A \equiv {f}^{\prime }\left( {x}_{0}\right) \) is invertible.\n\n(ii) \( k \equiv \mathop{\sup }\limits_{{x \in B}}\begin{Vmatrix}{I - {A}^{-1}{f}^{\prime }\left( x\right) }\end{Vmatrix} < 1 \)\n\nThen for each \( y \) in \( Y \) satisfying \( \begin{Vmatrix}{y - f\left( {x}_{0}\right) }\end{Vmatrix} < \left( {1 - k}\right) r{\begin{Vmatrix}{A}^{-1}\end{Vmatrix}}^{-1} \) the\nequation \( f\left( x\right) = y \) has a unique solution in \( B \) .
|
Proof. Let \( y \) be as hypothesized, and define \( G\left( x\right) = x - {A}^{-1}\left\lbrack {f\left( x\right) - y}\right\rbrack \) . It is clear that \( f\left( x\right) = y \) if and only if \( x \) is a fixed point of \( G \) . The map \( G \) is differentiable in \( \Omega \), and \( {G}^{\prime }\left( x\right) = I - {A}^{-1}{f}^{\prime }\left( x\right) \) . To verify the hypothesis (i) in the preceding theorem, write\n\n\[ \begin{Vmatrix}{{G}^{\prime }\left( x\right) }\end{Vmatrix} = \begin{Vmatrix}{I - {A}^{-1}{f}^{\prime }\left( x\right) }\end{Vmatrix} \leq k\;\left( {x \in B}\right) \]\n\nBy the assumptions made about \( y \), we can verify hypothesis (ii) of the preceding theorem by writing\n\n\[ \begin{Vmatrix}{G\left( {x}_{0}\right) - {x}_{0}}\end{Vmatrix} = \begin{Vmatrix}{{x}_{0} - {A}^{-1}\left\lbrack {f\left( {x}_{0}\right) - y}\right\rbrack - {x}_{0}}\end{Vmatrix} \]\n\n\[ \leq \begin{Vmatrix}{A}^{-1}\end{Vmatrix}\begin{Vmatrix}{f\left( {x}_{0}\right) - y}\end{Vmatrix} \]\n\n\[ < \begin{Vmatrix}{A}^{-1}\end{Vmatrix}\left( {1 - k}\right) r{\begin{Vmatrix}{A}^{-1}\end{Vmatrix}}^{-1} = \left( {1 - k}\right) r \]\n\nBy the preceding theorem, \( G \) has a unique fixed point in \( B \), which is the unique solution of \( f\left( x\right) = y \) in \( B \) .
|
Yes
|
Consider a nonlinear Volterra integral equation\n\n\[ x\left( t\right) - {2x}\left( 0\right) + \frac{1}{2}{\int }_{0}^{t}\cos \left( {st}\right) {\left\lbrack x\left( s\right) \right\rbrack }^{2}{ds} = y\left( t\right) \;\left( {0 \leq t \leq 1}\right) \]\n\nin which \( y \in C\left\lbrack {0,1}\right\rbrack \) . Notice that when \( y = 0 \) the integral equation has the solution \( x = 0 \) . We ask: Does the equation have solutions when \( \parallel y\parallel \) is small? Here, we use the usual sup-norm on \( C\left\lbrack {0,1}\right\rbrack \), as this makes the space complete. (Weighted sup-norms would have this property, too.) Write the integral equation as \( f\left( x\right) = y \), where \( f \) has the obvious interpretation. Then \( {f}^{\prime }\left( x\right) \) is given by\n\n\[ \left\lbrack {{f}^{\prime }\left( x\right) h}\right\rbrack \left( t\right) = h\left( t\right) - {2h}\left( 0\right) + {\int }_{0}^{t}\cos \left( {st}\right) x\left( s\right) h\left( s\right) {ds} \]
|
Let \( A = {f}^{\prime }\left( 0\right) \), so that \( {Ah} = h - {2h}\left( 0\right) \) . One verifies easily that \( {A}^{2}h = h \), from which it follows that \( {A}^{-1} = A \) . In order to use the preceding theorem, with \( {x}_{0} = 0 \), we must verify its hypotheses. We have just seen that \( A \) is invertible. Let \( \parallel x\parallel \leq r \), where \( r \) is to be chosen later so that \( \begin{Vmatrix}{I - {A}^{-1}{f}^{\prime }\left( x\right) }\end{Vmatrix} \leq k < 1 \) . From an equation above,\n\n\[ \left| {\left\lbrack {{f}^{\prime }\left( x\right) h}\right\rbrack \left( t\right) - \left( {Ah}\right) \left( t\right) }\right| = \left| {{\int }_{0}^{t}\cos \left( {st}\right) x\left( s\right) h\left( s\right) {ds}}\right| \]\n\n\[ \leq \parallel h\parallel \parallel x\parallel \]\n\nIt follows that\n\n\[ \begin{Vmatrix}{{f}^{\prime }\left( x\right) h - {Ah}}\end{Vmatrix} \leq \parallel h\parallel \parallel x\parallel \]\n\nand that\n\n\[ \begin{Vmatrix}{{f}^{\prime }\left( x\right) - A}\end{Vmatrix} \leq \parallel x\parallel \leq r \]\n\nSince \( \parallel A\parallel = \begin{Vmatrix}{A}^{-1}\end{Vmatrix} = 3 \), we have\n\n\[ \begin{Vmatrix}{I - {A}^{-1}{f}^{\prime }\left( x\right) }\end{Vmatrix} = \begin{Vmatrix}{{A}^{-1}\left( {A - {f}^{\prime }\left( x\right) }\right) }\end{Vmatrix} \leq \begin{Vmatrix}{A}^{-1}\end{Vmatrix}r = {3r} \]\n\nThe hypothesis of the preceding theorem requires that \( {3r} \leq k < 1 \), where \( k \) is to be chosen later. 
By the preceding theorem, the equation \( f\left( x\right) = y \) will have a unique solution if\n\n\[ \parallel y\parallel < \left( {1 - k}\right) r{\begin{Vmatrix}{A}^{-1}\end{Vmatrix}}^{-1} \leq \frac{1}{3}\left( {1 - k}\right) \frac{k}{3} \]\n\nIn order for this bound to be as generous as possible, we let \( k = \frac{1}{2} \), arriving at the restriction \( \parallel y\parallel < \frac{1}{36} \) .
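This conclusion can be tested numerically. The sketch below discretizes the Volterra operator on a grid, takes the constant right-hand side \( y\left( t\right) = {0.02} \) (so \( \parallel y\parallel < 1/{36} \)), and runs the fixed-point iteration \( G\left( x\right) = x - {A}^{-1}\left\lbrack {f\left( x\right) - y}\right\rbrack \) from \( x = 0 \):

```python
import numpy as np

n = 201
t = np.linspace(0.0, 1.0, n)
dt = t[1] - t[0]

def f(x):
    # (f x)(t) = x(t) - 2 x(0) + (1/2) int_0^t cos(s t) x(s)^2 ds  (trapezoid)
    out = x - 2.0 * x[0]
    for i in range(1, n):
        integrand = np.cos(t[:i + 1] * t[i]) * x[:i + 1]**2
        out[i] += 0.5 * dt * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
    return out

A = lambda v: v - 2.0 * v[0]    # A = f'(0); A^{-1} = A because A^2 = I

y = 0.02 * np.ones(n)           # ||y|| = 0.02 < 1/36
x = np.zeros(n)
for _ in range(30):
    x = x - A(f(x) - y)         # x_{n+1} = G(x_n)

print(np.max(np.abs(f(x) - y)))  # tiny residual: the discrete equation is solved
```

The iterates stay small, so the contraction factor \( {3r} \) remains well below 1 and the residual collapses geometrically.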
|
Yes
|
Let \( f : {\mathbb{R}}^{3} \rightarrow {\mathbb{R}}^{3} \) be given by\n\n\[ f\left( x\right) = y\;x = \left( {{\xi }_{1},{\xi }_{2},{\xi }_{3}}\right) \;y = \left( {{\eta }_{1},{\eta }_{2},{\eta }_{3}}\right) \]\n\n\[ {\eta }_{1} = 2{\xi }_{1}^{4} + {\xi }_{3}\cos {\xi }_{2} - {\xi }_{1}{\xi }_{3} \]\n\n\[ {\eta }_{2} = {\left( {\xi }_{1} + {\xi }_{3}\right) }^{3} - 4\sin {\xi }_{2} \]\n\n\[ {\eta }_{3} = \log \left( {{\xi }_{2} + 1}\right) + 5{\xi }_{1} + \cos {\xi }_{3} - 1 \]\n\nNotice that \( f\left( 0\right) = 0 \) . We ask: For \( y \) close to zero is there an \( x \) for which \( f\left( x\right) = y \) ?
|
To answer this, one can use the Inverse Function Theorem. We compute the Fréchet derivative or Jacobian:\n\n\[ {f}^{\prime }\left( x\right) = \left\lbrack \begin{matrix} 8{\xi }_{1}^{3} - {\xi }_{3} & - {\xi }_{3}\sin {\xi }_{2} & \cos {\xi }_{2} - {\xi }_{1} \\ 3{\left( {\xi }_{1} + {\xi }_{3}\right) }^{2} & - 4\cos {\xi }_{2} & 3{\left( {\xi }_{1} + {\xi }_{3}\right) }^{2} \\ 5 & {\left( {\xi }_{2} + 1\right) }^{-1} & - \sin {\xi }_{3} \end{matrix}\right\rbrack \]\n\nAt \( x = 0 \) we have\n\n\[ {f}^{\prime }\left( 0\right) = \left\lbrack \begin{array}{rrr} 0 & 0 & 1 \\ 0 & - 4 & 0 \\ 5 & 1 & 0 \end{array}\right\rbrack \]\n\nObviously, \( {f}^{\prime }\left( 0\right) \) is invertible, and so we can conclude that in some neighborhood of \( y = 0 \) there is defined a function \( g \) such that \( f\left( {g\left( y\right) }\right) = y \) .
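A numerical check (sketch): for a small \( y \) near zero, Newton's method with the Jacobian above locates the preimage \( x = g\left( y\right) \). The particular \( y \) chosen here is an arbitrary small vector, not from the text:

```python
import numpy as np

def f(x):
    a, b, c = x
    return np.array([2*a**4 + c*np.cos(b) - a*c,
                     (a + c)**3 - 4*np.sin(b),
                     np.log(b + 1) + 5*a + np.cos(c) - 1])

def fprime(x):
    # The Jacobian computed above
    a, b, c = x
    return np.array([[8*a**3 - c, -c*np.sin(b),  np.cos(b) - a],
                     [3*(a+c)**2, -4*np.cos(b),  3*(a+c)**2],
                     [5.0,         1.0/(b + 1), -np.sin(c)]])

y = np.array([0.01, -0.02, 0.015])   # a small point near y = 0
x = np.zeros(3)                      # start Newton at x = 0
for _ in range(20):
    x = x - np.linalg.solve(fprime(x), f(x) - y)

print(np.max(np.abs(f(x) - y)))  # residual near machine precision
```

Since \( {f}^{\prime }\left( 0\right) \) is invertible and \( y \) is small, the iterates remain near zero and converge quadratically.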
|
Yes
|
Theorem 10. Let \( f \) be defined on an open set \( \Omega \) in the direct-sum space \( X = \mathop{\sum }\limits_{{i = 1}}^{n} \oplus {X}_{i} \) and take values in a normed space \( Y \) . Assume that all the partial derivatives \( {D}_{i}f \) exist in \( \Omega \) and are continuous at a point \( x \) in \( \Omega \) . Then \( f \) is Fréchet differentiable at \( x \), and its Fréchet derivative is given by\n\n(1)\n\n\[ \n{f}^{\prime }\left( x\right) h = \mathop{\sum }\limits_{{i = 1}}^{n}{D}_{i}f\left( x\right) {h}_{i}\;\left( {h \in X}\right) \n\]
|
Proof. Equation (1) defines a linear transformation from \( X \) to \( Y \), and\n\n\[ \n\begin{Vmatrix}{{f}^{\prime }\left( x\right) h}\end{Vmatrix} \leq \mathop{\sum }\limits_{{i = 1}}^{n}\begin{Vmatrix}{{D}_{i}f\left( x\right) {h}_{i}}\end{Vmatrix} \leq \mathop{\sum }\limits_{{i = 1}}^{n}\begin{Vmatrix}{{D}_{i}f\left( x\right) }\end{Vmatrix}\begin{Vmatrix}{h}_{i}\end{Vmatrix} \n\]\n\n\[ \n\leq \mathop{\max }\limits_{{1 \leq j \leq n}}\begin{Vmatrix}{{D}_{j}f\left( x\right) }\end{Vmatrix}\mathop{\sum }\limits_{{i = 1}}^{n}\begin{Vmatrix}{h}_{i}\end{Vmatrix} \n\]\n\n\[ \n= \mathop{\max }\limits_{{1 \leq j \leq n}}\begin{Vmatrix}{{D}_{j}f\left( x\right) }\end{Vmatrix}\parallel h\parallel \n\]\n\nThus Equation (1) defines a bounded linear transformation. Let\n\n\[ \nG\left( h\right) = f\left( {x + h}\right) - f\left( x\right) - \mathop{\sum }\limits_{{i = 1}}^{n}{D}_{i}f\left( x\right) {h}_{i} \n\]\n\nWe want to prove that \( \parallel G\left( h\right) \parallel = o\left( {\parallel h\parallel }\right) \) . For sufficiently small \( h, x + h \) is in \( \Omega \) , and the partial derivatives of \( G \) exist at \( x + h \) . They are\n\n\[ \n{D}_{i}G\left( h\right) = {D}_{i}f\left( {x + h}\right) - {D}_{i}f\left( x\right) \n\]\n\nIf \( \varepsilon \) is a given positive number, we use the assumed continuity of \( {D}_{i}f \) at \( x \) to find a positive \( \delta \) such that for \( \parallel h\parallel < \delta \) we have \( \begin{Vmatrix}{{D}_{i}G\left( h\right) }\end{Vmatrix} < \varepsilon \), for \( 1 \leq i \leq n \) . 
Then, by the mean value theorem,\n\n\[ \n\parallel G\left( h\right) \parallel \leq \begin{Vmatrix}{G\left( {{h}_{1},{h}_{2},\ldots ,{h}_{n}}\right) - G\left( {0,{h}_{2},\ldots ,{h}_{n}}\right) }\end{Vmatrix} \n\]\n\n\[ \n+ \begin{Vmatrix}{G\left( {0,{h}_{2},\ldots ,{h}_{n}}\right) - G\left( {0,0,{h}_{3},\ldots ,{h}_{n}}\right) }\end{Vmatrix} \n\]\n\n\[ \n+ \cdots + \begin{Vmatrix}{G\left( {0,0,\ldots ,{h}_{n}}\right) - G\left( {0,0,\ldots ,0}\right) }\end{Vmatrix} \n\]\n\n\[ \n\leq \mathop{\sum }\limits_{{i = 1}}^{n}\varepsilon \begin{Vmatrix}{h}_{i}\end{Vmatrix} = \varepsilon \parallel h\parallel \n\]\n\nSince \( \varepsilon \) was arbitrary, this shows that \( \parallel G\left( h\right) \parallel = o\left( {\parallel h\parallel }\right) \) .
|
Yes
|
Theorem 1. Necessary Condition for Extremum. Let \( \Omega \) be an open set in a normed linear space, and let \( f : \Omega \rightarrow \mathbb{R} \) . If \( {x}_{0} \) is a minimum point of \( f \) and if \( {f}^{\prime }\left( {x}_{0}\right) \) exists, then \( {f}^{\prime }\left( {x}_{0}\right) = 0 \) .
|
Proof. Let \( X \) be the normed linear space, and assume \( {f}^{\prime }\left( {x}_{0}\right) \neq 0 \) . Then there exists \( v \in X \) such that \( {f}^{\prime }\left( {x}_{0}\right) v = - 1 \) . By the definition of \( {f}^{\prime }\left( {x}_{0}\right) \) we can take \( \lambda > 0 \) so small that \( {x}_{0} + {\lambda v} \) is in \( \Omega \) and\n\n\[ \left| {f\left( {{x}_{0} + {\lambda v}}\right) - f\left( {x}_{0}\right) - \lambda {f}^{\prime }\left( {x}_{0}\right) v}\right| /\lambda \parallel v\parallel < {\left( 2\parallel v\parallel \right) }^{-1} \]\n\nThis means that \( \frac{1}{\lambda }\left\lbrack {f\left( {{x}_{0} + {\lambda v}}\right) - f\left( {x}_{0}\right) }\right\rbrack \) is within distance \( \frac{1}{2} \) of -1, and so is negative. This implies \( f\left( {{x}_{0} + {\lambda v}}\right) < f\left( {x}_{0}\right) \), contradicting the assumption that \( {x}_{0} \) is a minimum point.
|
Yes
|
Let \( f \) and \( g \) be functions from \( {\mathbb{R}}^{2} \) to \( \mathbb{R} \) defined by \( f\left( {x, y}\right) = \) \( {x}^{2} + {y}^{2}, g\left( {x, y}\right) = x - y + 1 \) . The set \( M = \{ \left( {x, y}\right) : g\left( {x, y}\right) = 0\} \) is the straight line shown in Figure 3.3. Also shown are some level sets of \( f \), i.e., sets of the type \( \{ \left( {x, y}\right) : f\left( {x, y}\right) = c\} \) . At the solution, the gradient of \( f \) is parallel to the gradient of \( g \) . The function \( H \) is \( H\left( {x, y,\lambda }\right) = {x}^{2} + {y}^{2} + \lambda \left( {x - y + 1}\right) \), and the three equations to be solved are \( {2x} + \lambda = {2y} - \lambda = x - y + 1 = 0 \) .
|
The solution is \( \left( {-\frac{1}{2},\frac{1}{2}}\right) \).
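Since the three stationarity equations are linear in \( \left( {x, y,\lambda }\right) \), a direct solve confirms this (sketch):

```python
import numpy as np

# Unknowns (x, y, lambda):  2x + lambda = 0,  2y - lambda = 0,  x - y = -1
M = np.array([[2.0,  0.0,  1.0],
              [0.0,  2.0, -1.0],
              [1.0, -1.0,  0.0]])
b = np.array([0.0, 0.0, -1.0])

x, y, lam = np.linalg.solve(M, b)
print(x, y, lam)  # (-0.5, 0.5, 1.0)
```

The multiplier comes out as \( \lambda = 1 \), and \( \left( {-\frac{1}{2},\frac{1}{2}}\right) \) is indeed the point of \( M \) closest to the origin.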
|
Yes
|
Example 2. Let \( f\left( {x, y}\right) = {x}^{2} - {y}^{2} \) and \( g\left( {x, y}\right) = {x}^{2} + {y}^{2} - 1 \) . Again we show \( M \) and some level sets of \( f \), which are hyperbolas and straight lines. There are four extrema; some are maxima and some are minima. Which are which? The \( H \) -function is \( H = {x}^{2} - {y}^{2} + \lambda \left( {{x}^{2} + {y}^{2} - 1}\right) \), and the three equations to solve are \( {2x} + {2\lambda x} = - {2y} + {2\lambda y} = {x}^{2} + {y}^{2} - 1 = 0 \) .
|
The \( \left( {x, y,\lambda }\right) \) solutions are \( \left( {0,1,1}\right) ,\left( {0, - 1,1}\right) ,\left( {1,0, - 1}\right) ,\left( {-1,0, - 1}\right) \) .
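Evaluating \( f\left( {x, y}\right) = {x}^{2} - {y}^{2} \) at the four stationary points answers the question of which are maxima and which are minima on the circle (sketch):

```python
# The four stationary (x, y) points found by the Lagrange conditions
points = [(0.0, 1.0), (0.0, -1.0), (1.0, 0.0), (-1.0, 0.0)]
values = {p: p[0]**2 - p[1]**2 for p in points}
print(values)  # f = -1 at (0, +-1): minima;  f = +1 at (+-1, 0): maxima
```

So the constrained minima sit where the circle meets the \( y \) -axis and the constrained maxima where it meets the \( x \) -axis.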
|
Yes
|