Lemma 2. For \( x \neq 0 \), \( \frac{{\partial }^{2}}{\partial {x}_{j}^{2}}\left| x\right| = {\left| x\right| }^{-1} - {x}_{j}^{2}{\left| x\right| }^{-3} \).
Proof.

\[ \frac{{\partial }^{2}}{\partial {x}_{j}^{2}}\left| x\right| = \frac{\partial }{\partial {x}_{j}}\left\lbrack {x}_{j}{\left| x\right| }^{-1}\right\rbrack = {\left| x\right| }^{-1} + {x}_{j}\left( -1\right) {\left| x\right| }^{-2}{x}_{j}{\left| x\right| }^{-1} \]

\[ = {\left| x\right| }^{-1} - {x}_{j}^{2}{\left| x\right| }^{-3} \]
Yes
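The identity is easy to test numerically. Below is a quick finite-difference sanity check, not part of the text; NumPy, the sample point, and the step size \( h \) are my own choices:

```python
import numpy as np

def norm(x):
    return np.sqrt(np.sum(x * x))

def second_partial(F, x, j, h=1e-4):
    # central second difference of F in coordinate j at the point x
    e = np.zeros(len(x)); e[j] = h
    return (F(x + e) - 2.0 * F(x) + F(x - e)) / h**2

x = np.array([1.0, -2.0, 0.5])
for j in range(len(x)):
    exact = 1.0 / norm(x) - x[j]**2 / norm(x)**3    # Lemma 2
    assert abs(second_partial(norm, x, j) - exact) < 1e-5
```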
Lemma 3. For \( x \neq 0 \) and \( g \in {C}^{2}\left( 0,\infty \right) \),

\[ \Delta g\left( \left| x\right| \right) = {g}^{\prime \prime }\left( \left| x\right| \right) + \left( n - 1\right) {\left| x\right| }^{-1}{g}^{\prime }\left( \left| x\right| \right) \]
Proof.

\[ \frac{\partial }{\partial {x}_{j}}g\left( \left| x\right| \right) = {g}^{\prime }\left( \left| x\right| \right) \frac{\partial }{\partial {x}_{j}}\left| x\right| = {g}^{\prime }\left( \left| x\right| \right) {x}_{j}{\left| x\right| }^{-1} \]

\[ \frac{{\partial }^{2}}{\partial {x}_{j}^{2}}g\left( \left| x\right| \right) = {g}^{\prime \prime }\left( \left| x\right| \right) {x}_{j}^{2}{\left| x\right| }^{-2} + {g}^{\prime }\left( \left| x\right| \right) \frac{{\partial }^{2}}{\partial {x}_{j}^{2}}\left| x\right| = {g}^{\prime \prime }\left( \left| x\right| \right) {x}_{j}^{2}{\left| x\right| }^{-2} + {g}^{\prime }\left( \left| x\right| \right) \left( {\left| x\right| }^{-1} - {x}_{j}^{2}{\left| x\right| }^{-3}\right) \]

\[ \Delta g\left( \left| x\right| \right) = \mathop{\sum }\limits_{{j = 1}}^{n}\frac{{\partial }^{2}}{\partial {x}_{j}^{2}}g\left( \left| x\right| \right) = {g}^{\prime \prime }\left( \left| x\right| \right) {\left| x\right| }^{2}{\left| x\right| }^{-2} + {g}^{\prime }\left( \left| x\right| \right) \left( n{\left| x\right| }^{-1} - {\left| x\right| }^{2}{\left| x\right| }^{-3}\right) \]

\[ = {g}^{\prime \prime }\left( \left| x\right| \right) + \left( n - 1\right) {\left| x\right| }^{-1}{g}^{\prime }\left( \left| x\right| \right) \]
Yes
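The radial Laplacian formula can likewise be checked by finite differences; the choices \( g = \sin \), \( n = 3 \), the sample point, and the step size are illustrative assumptions, not part of the text:

```python
import numpy as np

def second_partial(F, x, j, h=1e-4):
    # central second difference of F in coordinate j
    e = np.zeros(len(x)); e[j] = h
    return (F(x + e) - 2.0 * F(x) + F(x - e)) / h**2

g, dg, d2g = np.sin, np.cos, lambda r: -np.sin(r)   # g(r) = sin r
x = np.array([0.6, -1.1, 0.8])
r = np.linalg.norm(x)
n = len(x)

G = lambda t: g(np.linalg.norm(t))                  # x -> g(|x|)
laplacian = sum(second_partial(G, x, j) for j in range(n))
formula = d2g(r) + (n - 1) / r * dg(r)              # Lemma 3
assert abs(laplacian - formula) < 1e-4
```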
Find a fundamental solution of the operator \( A \) defined (for \( n = 1 \)) by the equation

\[ A\phi = {\phi }^{\prime \prime } + 2a{\phi }^{\prime } + b\phi \;\left( \phi \in \mathcal{D}\right) \]
We seek a distribution \( T \) such that \( AT = \delta \). Let us look for a regular distribution, \( T = \widetilde{f} \). Using the definition of derivatives of distributions, we have

\[ \left( A\widetilde{f}\right) \left( \phi \right) = \widetilde{f}\left( {\phi }^{\prime \prime } - 2a{\phi }^{\prime } + b\phi \right) = {\int }_{-\infty }^{\infty }f\left( x\right) \left\lbrack {\phi }^{\prime \prime }\left( x\right) - 2a{\phi }^{\prime }\left( x\right) + b\phi \left( x\right) \right\rbrack \,dx \]

Guided by previous examples, we guess that \( f \) should have as its support the interval \( \lbrack 0,\infty ) \). The integral above then is restricted to the same interval. Using integration by parts, we obtain

\[ {\left. f{\phi }^{\prime }\right| }_{0}^{\infty } - {\int }_{0}^{\infty }{f}^{\prime }{\phi }^{\prime } - {\left. 2af\phi \right| }_{0}^{\infty } + 2a{\int }_{0}^{\infty }{f}^{\prime }\phi + b{\int }_{0}^{\infty }f\phi \]

\[ = -f\left( 0\right) {\phi }^{\prime }\left( 0\right) - {\left. {f}^{\prime }\phi \right| }_{0}^{\infty } + {\int }_{0}^{\infty }{f}^{\prime \prime }\phi + 2af\left( 0\right) \phi \left( 0\right) + {\int }_{0}^{\infty }\left( 2a{f}^{\prime } + bf\right) \phi \]

\[ = -f\left( 0\right) {\phi }^{\prime }\left( 0\right) + {f}^{\prime }\left( 0\right) \phi \left( 0\right) + 2af\left( 0\right) \phi \left( 0\right) + {\int }_{0}^{\infty }\left( {f}^{\prime \prime } + 2a{f}^{\prime } + bf\right) \phi \]

The easiest way to make this last expression simplify to \( \phi \left( 0\right) \) is to define \( f \) on \( \lbrack 0,\infty ) \) in such a way that

(i) \( \;{f}^{\prime \prime } + 2a{f}^{\prime } + bf = 0 \)

(ii) \( \;f\left( 0\right) = 0 \)

(iii) \( \;{f}^{\prime }\left( 0\right) = 1 \)

This is an initial-value problem, which can be solved by writing down the general solution of the equation in (i) and adjusting the coefficients in it to achieve (ii) and (iii). The characteristic equation of the differential equation in (i) is

\[ {\lambda }^{2} + 2a\lambda + b = 0 \]

Its roots are \( -a \pm \sqrt{{a}^{2} - b} \). Let \( d = \sqrt{{a}^{2} - b} \). If \( d \neq 0 \), then the general solution of (i) is

\[ {c}_{1}{e}^{-ax}{e}^{dx} + {c}_{2}{e}^{-ax}{e}^{-dx} \]

Upon imposing the conditions (ii) and (iii) we find that

\[ f\left( x\right) = \left\{ \begin{array}{ll} {d}^{-1}{e}^{-ax}\sinh dx & x \geq 0 \\ 0 & x < 0 \end{array}\right. \]
Yes
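As a check on the closed form, the candidate \( f \) can be verified numerically against conditions (i)–(iii); the coefficients below are sample values with \( {a}^{2} - b > 0 \), and the finite-difference step is my own choice:

```python
import numpy as np

a, b = 1.5, 0.25                      # sample coefficients with a^2 - b > 0
d = np.sqrt(a * a - b)

def f(x):
    # candidate fundamental solution from the example
    return np.exp(-a * x) * np.sinh(d * x) / d if x >= 0 else 0.0

# f solves f'' + 2 a f' + b f = 0 on (0, infinity) ...
h = 1e-4
for x in [0.3, 1.0, 2.7]:
    f2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
    f1 = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(f2 + 2 * a * f1 + b * f(x)) < 1e-5

# ... with f(0) = 0 and f'(0+) = 1
assert f(0.0) == 0.0
assert abs((f(h) - f(0.0)) / h - 1.0) < 1e-3
```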
Theorem 5. Consider the operator

\[ A = \mathop{\sum }\limits_{{j = 0}}^{m}{c}_{j}\left( x\right) \frac{{d}^{j}}{d{x}^{j}} \]

in which \( {c}_{j} \in {C}^{\infty }\left( \mathbb{R}\right) \) and \( {c}_{m}\left( x\right) \neq 0 \) for all \( x \). This operator has a fundamental solution that is a regular distribution.
Proof. We find a function \( f \) defined on \( \lbrack 0,\infty ) \) such that

(i) \( \;\mathop{\sum }\limits_{{j = 0}}^{m}{c}_{j}\left( x\right) {f}^{\left( j\right) }\left( x\right) = 0 \)

(ii) \( \;{c}_{m}\left( 0\right) {f}^{\left( m - 1\right) }\left( 0\right) = 1 \)

(iii) \( \;{f}^{\left( j\right) }\left( 0\right) = 0\;\left( 0 \leq j \leq m - 2\right) \)

Such a function exists by the theory of ordinary differential equations. In particular, an initial-value problem has a unique solution that is defined on any interval \( \left\lbrack 0, b\right\rbrack \), provided that the coefficient functions are continuous there and the leading coefficient does not have a zero in \( \left\lbrack 0, b\right\rbrack \). We also extend \( f \) to all of \( \mathbb{R} \) by setting \( f\left( x\right) = 0 \) on the interval \( \left( -\infty ,0\right) \). With the function \( f \) in hand, we must verify that \( A\widetilde{f} = \delta \). This is done as in Example 4.
Yes
Lemma 1. There is a function \( f \in {C}^{\infty }\left( \mathbb{R}\right) \) such that \( 0 \leq f \leq 1 \) , \( f\left( x\right) = 0 \) on \( ( - \infty ,0\rbrack \), and \( f\left( x\right) = 1 \) on \( \lbrack 1,\infty ) \) .
Proof. Define

\[ g\left( x\right) = \left\{ \begin{array}{ll} \exp \left\lbrack {x}^{2}/\left( {x}^{2} - 1\right) \right\rbrack & \left| x\right| < 1 \\ 0 & \left| x\right| \geq 1 \end{array}\right. \]

and

\[ f\left( x\right) = \left\{ \begin{array}{ll} g\left( x - 1\right) & x \leq 1 \\ 1 & \text{ otherwise } \end{array}\right. \]

The graphs of \( f \) and \( g \) are shown in Figure 5.3.
Yes
Lemma 2. If \( {x}_{0} \in {\mathbb{R}}^{n} \) and \( \rho > r > 0 \), then there is a test function \( \phi \) such that

(i) \( 0 \leq \phi \leq 1 \)

(ii) \( \phi \left( x\right) = 1 \) if \( \left| {x - {x}_{0}}\right| \leq r \)

(iii) \( \phi \left( x\right) = 0 \) if \( \left| {x - {x}_{0}}\right| \geq \rho \).
Proof. Use the function \( f \) from the preceding lemma, and define

\[ \phi \left( x\right) = 1 - f\left( a{\left| x - {x}_{0}\right| }^{2} - b\right) \]

with \( a = {\left( {\rho }^{2} - {r}^{2}\right) }^{-1} \) and \( b = {r}^{2}a \). If \( \left| {x - {x}_{0}}\right| \leq r \), then \( a{\left| x - {x}_{0}\right| }^{2} - b \leq a{r}^{2} - b = 0 \), so \( \phi \left( x\right) = 1 \). If \( \left| {x - {x}_{0}}\right| \geq \rho \), then \( a{\left| x - {x}_{0}\right| }^{2} - b \geq a{\rho }^{2} - b = a\left( {\rho }^{2} - {r}^{2}\right) = 1 \), so \( \phi \left( x\right) = 0 \).
Yes
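The two lemmas translate directly into a computable cutoff function. The sketch below (the center, the radii, and the two test points are my own choices) builds \( \phi \) from the bump of Lemma 1 and checks properties (ii) and (iii):

```python
import numpy as np

def g(t):
    # the bump of Lemma 1: exp[t^2/(t^2 - 1)] on |t| < 1, zero elsewhere
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    inside = np.abs(t) < 1
    out[inside] = np.exp(t[inside]**2 / (t[inside]**2 - 1.0))
    return out

def f(x):
    # Lemma 1: f = 0 on (-inf, 0], f = 1 on [1, inf)
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return np.where(x <= 1, g(x - 1.0), 1.0)

def phi(x, x0, r, rho):
    # Lemma 2: phi(x) = 1 - f(a |x - x0|^2 - b)
    a = 1.0 / (rho**2 - r**2)
    b = r**2 * a
    return (1.0 - f(a * np.sum((x - x0)**2) - b)).item()

x0 = np.zeros(2)
assert phi(np.array([0.3, 0.0]), x0, 1.0, 2.0) == 1.0   # |x - x0| <= r
assert phi(np.array([2.0, 1.5]), x0, 1.0, 2.0) == 0.0   # |x - x0| >= rho
```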
Theorem 2. Let \( \operatorname{supp}\left( T\right) \) denote the intersection of all closed sets having property (3). Then \( \operatorname{supp}\left( T\right) \) is the smallest closed set having property (3).
Proof. Let \( \mathcal{F} \) be the family of all closed sets \( F \) having property (3). Then

\[ \operatorname{supp}\left( T\right) = \bigcap \{ F : F \in \mathcal{F}\} \]

Being an intersection of closed sets, \( \operatorname{supp}\left( T\right) \) is itself closed. The only question is whether it has property (3). To verify this, let \( \phi \) be a test function such that \( \operatorname{supp}\left( \phi \right) \subset {\mathbb{R}}^{n} \smallsetminus \operatorname{supp}\left( T\right) \). It is to be shown that \( T\left( \phi \right) = 0 \). By De Morgan’s Law,

\[ \operatorname{supp}\left( \phi \right) \subset {\mathbb{R}}^{n} \smallsetminus \bigcap \{ F : F \in \mathcal{F}\} = \bigcup \left\{ {\mathbb{R}}^{n} \smallsetminus F : F \in \mathcal{F}\right\} \]

By the preceding theorem, there is a partition of unity \( \left\{ {\psi }_{j}\right\} \) subordinate to the family of open sets \( \left\{ {\mathbb{R}}^{n} \smallsetminus F : F \in \mathcal{F}\right\} \). Since \( \operatorname{supp}\left( \phi \right) \) is compact, there exists (by Theorem 1) an index \( m \) such that

\[ \mathop{\sum }\limits_{{i = 1}}^{m}{\psi }_{i}\left( x\right) = 1\text{ on a neighborhood of }\operatorname{supp}\left( \phi \right) \]

Notice that \( \phi = \phi \mathop{\sum }\limits_{{i = 1}}^{m}{\psi }_{i} \), because if \( \phi \left( x\right) = 0 \), the equation is obviously true, while if \( \phi \left( x\right) \neq 0 \), then \( x \in \operatorname{supp}\left( \phi \right) \) and \( \mathop{\sum }\limits_{{i = 1}}^{m}{\psi }_{i}\left( x\right) = 1 \).
Hence, by the linearity of \( T \),

(4)

\[ T\left( \phi \right) = T\left( \mathop{\sum }\limits_{{i = 1}}^{m}\phi {\psi }_{i}\right) = \mathop{\sum }\limits_{{i = 1}}^{m}T\left( \phi {\psi }_{i}\right) \]

Again by Theorem 1, there exists for each \( i \) an \( {F}_{i} \in \mathcal{F} \) such that

\[ \operatorname{supp}\left( \phi {\psi }_{i}\right) \subset \operatorname{supp}\left( {\psi }_{i}\right) \subset {\mathbb{R}}^{n} \smallsetminus {F}_{i} \]

Since \( {F}_{i} \in \mathcal{F} \), \( {F}_{i} \) has property (3), and we conclude that \( T\left( \phi {\psi }_{i}\right) = 0 \) for \( 1 \leq i \leq m \). By Equation (4), \( T\left( \phi \right) = 0 \).
Yes
Theorem 3. Each distribution having a compact support has an extension to \( \mathcal{E} \) that is continuous.
Proof. Let \( T \) be a distribution for which \( \operatorname{supp}\left( T\right) \) is compact. By the theorem on partitions of unity, there is a test function \( \psi \) such that \( \psi \left( x\right) = 1 \) on a neighborhood of \( \operatorname{supp}\left( T\right) \). Define \( \bar{T} \) on \( \mathcal{E} \) by the equation \( \bar{T}\left( \phi \right) = T\left( \phi \psi \right) \). This is meaningful because \( \phi \psi \in \mathcal{D} \). Now we wish to establish that \( \bar{T} \) is continuous on \( \mathcal{E} \). To this end, let \( {\phi }_{j} \in \mathcal{E} \) and suppose that \( {\phi }_{j} \rightarrow 0 \), the convergence being as prescribed in \( \mathcal{E} \). All the functions \( {\phi }_{j}\psi \) vanish outside of \( \operatorname{supp}\left( \psi \right) \), and for each multi-index \( \alpha \), \( {D}^{\alpha }\left( {\phi }_{j}\psi \right) \) converges uniformly to 0 by the Leibniz formula. Hence \( {\phi }_{j}\psi \rightarrow 0 \) in \( \mathcal{D} \). By the continuity of \( T \) and the definition of \( \bar{T} \),

\[ \bar{T}\left( {\phi }_{j}\right) = T\left( {\phi }_{j}\psi \right) \rightarrow 0 \]

Finally, we must prove that \( \bar{T} \) is an extension of \( T \). Let \( \phi \) be any test function; we want to show that \( \bar{T}\left( \phi \right) = T\left( \phi \right) \). Equivalent equations are \( T\left( \phi \psi \right) = T\left( \phi \right) \) and \( T\left( \phi \psi - \phi \right) = 0 \). To establish the latter, it suffices to show that

\[ \operatorname{supp}\left( \phi \psi - \phi \right) \subset {\mathbb{R}}^{n} \smallsetminus \operatorname{supp}\left( T\right) \]

(Here we have used Theorem 2.)
Since \( \phi \psi - \phi = \phi \cdot \left( \psi - 1\right) \), it is enough to prove that

\[ \operatorname{supp}\left( \psi - 1\right) \subset {\mathbb{R}}^{n} \smallsetminus \operatorname{supp}\left( T\right) \]

To this end, let \( x \in \operatorname{supp}\left( \psi - 1\right) \). By definition of a support, we can write \( x = \lim {x}_{j} \), where \( \left( \psi - 1\right) \left( {x}_{j}\right) \neq 0 \). Since \( \psi \left( {x}_{j}\right) \neq 1 \), we have \( {x}_{j} \notin \mathcal{N} \), where \( \mathcal{N} \) is an open neighborhood of \( \operatorname{supp}\left( T\right) \) on which \( \psi \) is identically 1. Since \( {x}_{j} \in {\mathbb{R}}^{n} \smallsetminus \mathcal{N} \), we have \( x \in {\mathbb{R}}^{n} \smallsetminus \mathcal{N} \) because the latter is closed. Hence \( x \in {\mathbb{R}}^{n} \smallsetminus \operatorname{supp}\left( T\right) \).
Yes
Theorem 4. Each continuous linear functional on \( \mathcal{E} \) is an extension of some distribution having compact support.
Proof. Let \( L \) be a continuous linear functional on \( \mathcal{E} \). Let \( T = L \mid \mathcal{D} \), the restriction of \( L \) to \( \mathcal{D} \). It is easily seen that \( T \) is a distribution. In order to prove that the support of \( T \) is compact, suppose otherwise. Then for each \( k \) there is a test function \( {\phi }_{k} \) whose support is contained in \( \{ x : \left| x\right| > k\} \) such that \( T\left( {\phi }_{k}\right) = 1 \). It follows that \( {\phi }_{k} \rightarrow 0 \) in \( \mathcal{E} \), whereas \( L\left( {\phi }_{k}\right) = 1 \), contradicting the continuity of \( L \). In order to prove that \( L = \bar{T} \), as in the preceding proof select \( {\gamma }_{j} \in \mathcal{D} \) so that \( {\gamma }_{j}\left( x\right) = 1 \) if \( \left| x\right| \leq j \) and \( {\gamma }_{j}\left( x\right) = 0 \) if \( \left| x\right| \geq {2j} \). If \( \phi \in \mathcal{E} \), then \( {\gamma }_{j}\phi \rightarrow \phi \) in \( \mathcal{E} \). Hence

\[ L\left( \phi \right) = \lim L\left( {\gamma }_{j}\phi \right) = \lim T\left( {\gamma }_{j}\phi \right) = \bar{T}\left( \phi \right) \]

because \( \operatorname{supp}\left( T\right) \subset \operatorname{supp}\left( {\gamma }_{j}\right) \) for all sufficiently large \( j \).
Yes
Theorem 5. If \( T \) is a distribution with compact support and if \( \phi \in \mathcal{E} \), then \( T * \phi \in \mathcal{E} \).
Proof. See [Ru1], Theorem 6.35, page 159.
No
Theorem 2. Let \( {E}_{y} \) denote the translation operator, defined by \( \left( {E}_{y}f\right) \left( x\right) = f\left( x - y\right) \). Then we have \( \widehat{{E}_{y}f} = {e}_{-y}\widehat{f} \) and \( \widehat{{e}_{y}f} = {E}_{y}\widehat{f} \).
Proof. We verify the first equation and leave the second to the problems. We have

\[ \widehat{{E}_{y}f}\left( x\right) = \int f\left( u - y\right) {e}^{-2\pi ixu}\,du = \int f\left( v\right) {e}^{-2\pi ix\left( v + y\right) }\,dv \]

\[ = {e}^{-2\pi ixy}\int f\left( v\right) {e}^{-2\pi ixv}\,dv = {e}_{-y}\left( x\right) \widehat{f}\left( x\right) \]
No
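The first identity is easy to test numerically for \( n = 1 \) on a Gaussian, using a Riemann sum for the Fourier integral; the shift \( {y}_{0} \), the grid, and the sample points are illustrative choices, not part of the text:

```python
import numpy as np

xs = np.linspace(-12.0, 12.0, 4001)
dx = xs[1] - xs[0]
y0 = 0.75                                   # translation amount

def hat(vals, y):
    # hat f(y) = ∫ f(x) e^{-2 pi i x y} dx, approximated by a Riemann sum
    return np.sum(vals * np.exp(-2j * np.pi * xs * y)) * dx

f = np.exp(-np.pi * xs**2)                  # Gaussian test function
Ef = np.exp(-np.pi * (xs - y0)**2)          # (E_{y0} f)(x) = f(x - y0)

for y in [0.0, 0.4, 1.1]:
    # theorem: (E_{y0} f)^(y) = e_{-y0}(y) fhat(y) = e^{-2 pi i y y0} fhat(y)
    assert abs(hat(Ef, y) - np.exp(-2j * np.pi * y * y0) * hat(f, y)) < 1e-8
```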
Theorem 3. If \( f \) and \( g \) belong to \( {L}^{1}\left( {\mathbb{R}}^{n}\right) \), then the same is true of \( f * g \), and \[ \parallel f * g{\parallel }_{1} \leq \parallel f{\parallel }_{1} \cdot \parallel g{\parallel }_{1} \]
Proof. ([Smi]) Define a function \( h \) on \( {\mathbb{R}}^{n} \times {\mathbb{R}}^{n} \) by the equation \[ h\left( {x, y}\right) = g\left( {x - y}\right) \] Let us prove that \( h \) is measurable. It is not enough to observe that the map \( \left( {x, y}\right) \mapsto x - y \) is continuous and that \( g \) is measurable, because the composition of a measurable function with a continuous function need not be measurable. For any open set \( \mathcal{O} \) we must show that \( {h}^{-1}\left( \mathcal{O}\right) \) is measurable. Define a linear transformation \( A \) by \( A\left( {x, y}\right) = \left( {x - y, x + y}\right) \) . The following equivalences are obvious: \[ \left( {x, y}\right) \in {h}^{-1}\left( \mathcal{O}\right) \; \Leftrightarrow \;h\left( {x, y}\right) \in \mathcal{O} \] \[ \Leftrightarrow g\left( {x - y}\right) \in \mathcal{O} \] \[ \Leftrightarrow x - y \in {g}^{-1}\left( \mathcal{O}\right) \] \[ \Leftrightarrow \;\left( {x - y, x + y}\right) \in {g}^{-1}\left( \mathcal{O}\right) \times {\mathbb{R}}^{n} \] \[ \Leftrightarrow \;A\left( {x, y}\right) \in {g}^{-1}\left( \mathcal{O}\right) \times {\mathbb{R}}^{n} \] \[ \Leftrightarrow \;\left( {x, y}\right) \in {A}^{-1}\left\lbrack {{g}^{-1}\left( \mathcal{O}\right) \times {\mathbb{R}}^{n}}\right\rbrack \] This shows that \[ {h}^{-1}\left( \mathcal{O}\right) = {A}^{-1}\left\lbrack {{g}^{-1}\left( \mathcal{O}\right) \times {\mathbb{R}}^{n}}\right\rbrack \] Since \( g \) is measurable, \( {g}^{-1}\left( \mathcal{O}\right) \) and \( {g}^{-1}\left( \mathcal{O}\right) \times {\mathbb{R}}^{n} \) are measurable sets. Since \( A \) is invertible, \( {A}^{-1} \) is a linear transformation; it carries each measurable set to another measurable set. Hence \( {h}^{-1}\left( \mathcal{O}\right) \) is measurable. 
Here we use the theorem that a function of class \( {C}^{1} \) from \( {\mathbb{R}}^{n} \) to \( {\mathbb{R}}^{n} \) maps measurable sets into measurable sets, and apply that theorem to \( {A}^{-1} \) . The function \( F\left( {x, y}\right) = f\left( y\right) g\left( {x - y}\right) \) is measurable, and \[ \iint \left| {F\left( {x, y}\right) }\right| {dxdy} = \int \left| {f\left( y\right) }\right| \int \left| {g\left( {x - y}\right) }\right| {dxdy} \] \[ = \int \left| {f\left( y\right) }\right| \parallel g{\parallel }_{1}{dy} = \parallel f{\parallel }_{1}\parallel g{\parallel }_{1} \] By Fubini’s Theorem (See Chapter 8, page 426), \( F \) is integrable (i.e., \( F \in \) \( \left. {{L}^{1}\left( {{\mathbb{R}}^{n} \times {\mathbb{R}}^{n}}\right) }\right) \) . By the Fubini Theorem again, \[ \parallel f * g{\parallel }_{1} = \int \left| {\left( {f * g}\right) \left( x\right) }\right| {dx} \leq \iint \left| {F\left( {x, y}\right) }\right| {dydx} = \parallel f{\parallel }_{1}\parallel g{\parallel }_{1} \]
Yes
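The inequality can be illustrated on a grid, with Riemann sums standing in for the integrals and a discrete convolution standing in for \( f * g \); the sample functions, the truncation to \( \left\lbrack -10,10\right\rbrack \), and the mesh are arbitrary choices:

```python
import numpy as np

h = 0.01
x = np.arange(-10.0, 10.0, h)
f = np.exp(-np.abs(x))                       # sample L^1 function, ||f||_1 = 2
g = np.where(np.abs(x) < 1.0, 1.0, 0.0)      # another, ||g||_1 = 2

conv = np.convolve(f, g, mode="same") * h    # Riemann-sum stand-in for (f*g)(x)
norm_f = np.sum(np.abs(f)) * h
norm_g = np.sum(np.abs(g)) * h
norm_conv = np.sum(np.abs(conv)) * h
assert norm_conv <= norm_f * norm_g + 1e-8   # || f*g ||_1 <= ||f||_1 ||g||_1
```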
Theorem 4. If \( f \) and \( g \) belong to \( {L}^{1}\left( {\mathbb{R}}^{n}\right) \), then

\[ \widehat{f * g} = \widehat{f}\,\widehat{g} \]
Proof. We use the Fubini Theorem again:

\[ \widehat{f * g}\left( x\right) = \int {e}_{-x}\left( y\right) \left( f * g\right) \left( y\right) \,dy = \int {e}_{-x}\left( y\right) \int f\left( u\right) g\left( y - u\right) \,du\,dy \]

\[ = \iint {e}_{-x}\left( u + y - u\right) f\left( u\right) g\left( y - u\right) \,du\,dy = \iint {e}_{-x}\left( u\right) {e}_{-x}\left( y - u\right) f\left( u\right) g\left( y - u\right) \,du\,dy \]

\[ = \int {e}_{-x}\left( u\right) f\left( u\right) \int {e}_{-x}\left( y - u\right) g\left( y - u\right) \,dy\,du \]

\[ = \int {e}_{-x}\left( u\right) f\left( u\right) \,du\int {e}_{-x}\left( z\right) g\left( z\right) \,dz = \widehat{f}\left( x\right) \widehat{g}\left( x\right) \]
Yes
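A discrete analogue of this theorem holds exactly: the DFT of a circular convolution is the pointwise product of the DFTs. That gives a self-contained check (the random vectors and the length 64 are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
f = rng.standard_normal(N)
g = rng.standard_normal(N)

# circular convolution: (f*g)[k] = sum_j f[j] g[(k - j) mod N]
conv = np.array([sum(f[j] * g[(k - j) % N] for j in range(N)) for k in range(N)])

# discrete convolution theorem: DFT(f*g) = DFT(f) · DFT(g)
assert np.allclose(np.fft.fft(conv), np.fft.fft(f) * np.fft.fft(g))
```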
Example 1. The Gaussian function \( \phi \) defined by

\[ \phi \left( x\right) = {e}^{-{\left| x\right| }^{2}} \]

belongs to \( \mathcal{S} \).
It is easily seen, with the aid of Leibniz’s formula, that if \( \phi \in \mathcal{S} \), then \( P \cdot \phi \in \mathcal{S} \) for any polynomial \( P \), and \( {D}^{\alpha }\phi \in \mathcal{S} \) for any multi-index \( \alpha \) .
No
Lemma 1. If \( P \) is a polynomial, then the mapping \( \phi \mapsto P \cdot \phi \) is linear and continuous from \( \mathcal{S} \) into \( \mathcal{S} \).
Proof. Let \( {\phi }_{j} \rightarrow 0 \) . We ask whether \( Q \cdot {D}^{\beta }\left( {P \cdot {\phi }_{j}}\right) \rightarrow 0 \) uniformly for each polynomial \( Q \) and multi-index \( \beta \) . By using the Leibniz formula, this expression can be exhibited as a sum of terms \( {Q}_{\gamma } \cdot {D}^{\alpha }{\phi }_{j} \), where the \( {Q}_{\gamma } \) are polynomials and \( \alpha \) is a multi-index such that \( \alpha \leq \beta \) . Each of these terms individually converges uniformly to zero, because that is a consequence of \( {\phi }_{j} \rightarrow 0 \) in \( \mathcal{S} \) . Therefore, their sum also converges to 0 .
No
Lemma 2. If \( g \in \mathcal{S} \), then the mapping \( \phi \mapsto g\phi \) is linear and continuous from \( \mathcal{S} \) into \( \mathcal{S} \).
Proof. This is left to the problems.
No
Lemma 3. For any multi-index \( \alpha \), the mapping \( \phi \mapsto {D}^{\alpha }\phi \) is linear and continuous from \( \mathcal{S} \) into \( \mathcal{S} \).
Proof. This is left to the problems.
No
Lemma 4. The function \( {e}_{y} \) defined by \( {e}_{y}\left( x\right) = {e}^{2\pi ixy} \) obeys the equation \( P\left( D\right) {e}_{y} = P\left( {2\pi iy}\right) {e}_{y} \) for any polynomial \( P \) .
Proof. It suffices to deal with the case of one monomial and establish that \( {D}^{\alpha }{e}_{y} = {\left( 2\pi iy\right) }^{\alpha }{e}_{y} \). We have

\[ \frac{\partial }{\partial {x}_{j}}{e}_{y}\left( x\right) = \frac{\partial }{\partial {x}_{j}}{e}^{2\pi i\left( {y}_{1}{x}_{1} + \cdots + {y}_{n}{x}_{n}\right) } = {e}^{2\pi i\left( {y}_{1}{x}_{1} + \cdots + {y}_{n}{x}_{n}\right) }\left( 2\pi i{y}_{j}\right) = \left( 2\pi i{y}_{j}\right) {e}_{y}\left( x\right) \]

Thus, by induction, we have

\[ {\left( \frac{\partial }{\partial {x}_{j}}\right) }^{{\alpha }_{j}}{e}_{y} = {\left( 2\pi i{y}_{j}\right) }^{{\alpha }_{j}}{e}_{y} \]

Consequently, \( {D}^{\alpha }{e}_{y} = {\left( 2\pi i{y}_{1}\right) }^{{\alpha }_{1}}{\left( 2\pi i{y}_{2}\right) }^{{\alpha }_{2}}\cdots {\left( 2\pi i{y}_{n}\right) }^{{\alpha }_{n}}{e}_{y} = {\left( 2\pi iy\right) }^{\alpha }{e}_{y} \).
Yes
Theorem 1. If \( \phi \in \mathcal{S} \), and if \( P \) is a polynomial, then \( {\left\lbrack P\left( D/\left( 2\pi i\right) \right) \phi \right\rbrack }^{ \land } = P \cdot \widehat{\phi } \) . Equivalently, \( {\left\lbrack P\left( D\right) \phi \right\rbrack }^{ \land } = {P}^{ + }\widehat{\phi } \), where \( {P}^{ + }\left( x\right) = P\left( {2\pi ix}\right) \) .
Proof. We have to show that

\[ {\left\lbrack \sum {c}_{\alpha }{\left( \frac{D}{2\pi i}\right) }^{\alpha }\phi \right\rbrack }^{ \land }\left( y\right) = \sum {c}_{\alpha }{y}^{\alpha }\widehat{\phi }\left( y\right) \]

Since the Fourier map \( f \mapsto \widehat{f} \) is linear, it suffices to prove that

\[ {\left\lbrack {\left( \frac{D}{2\pi i}\right) }^{\alpha }\phi \right\rbrack }^{ \land }\left( y\right) = {y}^{\alpha }\widehat{\phi }\left( y\right) \]

Equivalently, we must prove that

\[ {\left( \frac{1}{2\pi i}\right) }^{\left| \alpha \right| }\widehat{\left\lbrack {D}^{\alpha }\phi \right\rbrack }\left( y\right) = {y}^{\alpha }\widehat{\phi }\left( y\right) \]

Thus we must prove that

\[ {\left( \frac{1}{2\pi i}\right) }^{\left| \alpha \right| }{\int }_{{\mathbb{R}}^{n}}\left( {D}^{\alpha }\phi \right) \left( x\right) {e}_{-y}\left( x\right) \,dx = {y}^{\alpha }\widehat{\phi }\left( y\right) \]

In this integral we can use integration by parts repeatedly to transfer all derivatives from \( \phi \) to the kernel function \( {e}_{-y} \). Each use of integration by parts will introduce a factor of \( -1 \). Observe that no boundary values enter during the integration by parts, since \( \phi \in \mathcal{S} \). Using also the preceding lemma, we find that the integral becomes successively

\[ {\left( \frac{1}{2\pi i}\right) }^{\left| \alpha \right| }{\left( -1\right) }^{\left| \alpha \right| }\int \phi \left( x\right) \left\lbrack {D}^{\alpha }{e}_{-y}\right\rbrack \left( x\right) \,dx = {\left( -1\right) }^{\left| \alpha \right| }{\left( \frac{1}{2\pi i}\right) }^{\left| \alpha \right| }\int \phi \left( x\right) {\left( -2\pi iy\right) }^{\alpha }{e}_{-y}\left( x\right) \,dx \]

\[ = {y}^{\alpha }\int \phi \left( x\right) {e}_{-y}\left( x\right) \,dx = {y}^{\alpha }\widehat{\phi }\left( y\right) \]
Yes
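For the monomial \( P\left( x\right) = x \) in one variable, the theorem says \( {\left( D\phi \right) }^{ \land }\left( y\right) = 2\pi iy\,\widehat{\phi }\left( y\right) \), which can be checked by quadrature on the Gaussian \( \phi \left( x\right) = {e}^{-\pi {x}^{2}} \); the grid and sample points below are my own choices:

```python
import numpy as np

xs = np.linspace(-12.0, 12.0, 4001)
dx = xs[1] - xs[0]
phi = np.exp(-np.pi * xs**2)             # Gaussian test function
dphi = -2.0 * np.pi * xs * phi           # its derivative D phi

def fourier(vals, y):
    # hat f(y) = ∫ f(x) e^{-2 pi i x y} dx, approximated by a Riemann sum
    return np.sum(vals * np.exp(-2j * np.pi * xs * y)) * dx

for y in [0.0, 0.5, 1.3]:
    lhs = fourier(dphi, y)                     # (D phi)^
    rhs = 2j * np.pi * y * fourier(phi, y)     # P^+(y) phihat(y), P(x) = x
    assert abs(lhs - rhs) < 1e-8
```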
Theorem 2. If \( \phi \in \mathcal{S} \) and \( P \) is a polynomial, then \( P\left( -D/\left( 2\pi i\right) \right) \widehat{\phi } = \widehat{P\phi } \). Equivalently, \( P\left( D\right) \widehat{\phi } = \widehat{{P}^{ * }\phi } \), where \( {P}^{ * }\left( y\right) = P\left( -2\pi iy\right) \).
Proof. We insert the variables, and interpret \( P\left( D\right) \) as differentiating with respect to the variable \( x \). Thus, with the help of Lemma 4, we have

\[ \left\lbrack P\left( D\right) \widehat{\phi }\right\rbrack \left( x\right) = P\left( D\right) \int {e}_{-x}\left( y\right) \phi \left( y\right) \,dy = P\left( D\right) \int {e}_{-y}\left( x\right) \phi \left( y\right) \,dy \]

\[ = \int \left\lbrack P\left( D\right) {e}_{-y}\right\rbrack \left( x\right) \phi \left( y\right) \,dy = \int P\left( -2\pi iy\right) {e}_{-y}\left( x\right) \phi \left( y\right) \,dy \]

\[ = \int {P}^{ * }\left( y\right) {e}_{-x}\left( y\right) \phi \left( y\right) \,dy = \widehat{{P}^{ * }\phi }\left( x\right) \]
Yes
Theorem 4. The mapping \( \phi \mapsto \widehat{\phi } \) is continuous and linear from \( \mathcal{S} \) into \( \mathcal{S} \).
Proof. First we must prove that \( \widehat{\phi } \in \mathcal{S} \) when \( \phi \in \mathcal{S} \). It is to be shown that \( \widehat{\phi } \) is a \( {C}^{\infty } \)-function and that \( P \cdot {D}^{\alpha }\widehat{\phi } \) is bounded for each polynomial \( P \) and for each multi-index \( \alpha \). In Theorem 5 of Section 6.1 (page 291), we noted that \( \widehat{\phi } \) is continuous. By Theorem 2 above, \( {D}^{\alpha }\widehat{\phi } = \widehat{Q \cdot \phi } \) for an appropriate polynomial \( Q \). Since \( Q \cdot \phi \in \mathcal{S} \) (by Lemma 1), we know that \( \widehat{Q \cdot \phi } \) is continuous, and can therefore conclude that \( {D}^{\alpha }\widehat{\phi } \) is continuous. Hence \( \widehat{\phi } \in {C}^{\infty } \). Now we ask whether \( P \cdot {D}^{\alpha }\widehat{\phi } \) is bounded. By the preceding remarks and Theorem 1,

(1)

\[ P \cdot {D}^{\alpha }\widehat{\phi } = P \cdot \widehat{Q \cdot \phi } = {\left\lbrack P\left( \frac{D}{2\pi i}\right) \left( Q \cdot \phi \right) \right\rbrack }^{ \land } \]

Since \( P\left( D/\left( 2\pi i\right) \right) \left( Q \cdot \phi \right) \in \mathcal{S} \), its Fourier transform is bounded, as indicated in Equation (3) of Section 6.1, page 289.

For the continuity of the map, let \( {\phi }_{j} \rightarrow 0 \) in \( \mathcal{S} \). We want to prove that \( {\widehat{\phi }}_{j} \rightarrow 0 \) in \( \mathcal{S} \). That means that \( P \cdot {D}^{\alpha }{\widehat{\phi }}_{j} \rightarrow 0 \) uniformly for any polynomial \( P \) and any multi-index \( \alpha \). By Equation (1) above, the question to be addressed is whether \( {\left\lbrack P\left( D/\left( 2\pi i\right) \right) \left( Q \cdot {\phi }_{j}\right) \right\rbrack }^{ \land } \rightarrow 0 \) uniformly. If we put \( {\psi }_{j} = P\left( D/\left( 2\pi i\right) \right) \left( Q \cdot {\phi }_{j}\right) \), we ask whether \( {\widehat{\psi }}_{j}\left( t\right) \rightarrow 0 \) uniformly.
Now, \( {\psi }_{j} \in \mathcal{S} \), and \( {\psi }_{j} \rightarrow 0 \) in \( \mathcal{S} \) by Lemmas 1 and 2. Hence \( {\left( 1 + {\left| x\right| }^{2}\right) }^{n}{\psi }_{j}\left( x\right) \rightarrow 0 \) uniformly. It follows that for a given \( \varepsilon > 0 \) there is an integer \( m \) such that \( {\left( 1 + {\left| x\right| }^{2}\right) }^{n}\left| {{\psi }_{j}\left( x\right) }\right| < \varepsilon \) whenever \( j > m \) . For such \( j \) ,\n\n\[ \int \left| {{\psi }_{j}\left( x\right) }\right| {dx} < \varepsilon \int {\left( 1 + {\left| x\right| }^{2}\right) }^{-n}{dx} = {c\varepsilon } \]\n\nand this shows that \( \int \left| {\psi }_{j}\right| \rightarrow 0 \) . From the inequality\n\n\[ \left| {{\widehat{\psi }}_{j}\left( x\right) }\right| = \left| {\int {\psi }_{j}\left( y\right) {e}_{x}\left( y\right) {dy}}\right| \leq \int \left| {{\psi }_{j}\left( y\right) }\right| {dy} \]\n\nwe infer that \( {\widehat{\psi }}_{j}\left( x\right) \rightarrow 0 \) uniformly.
Theorem 5. Poisson Summation Formula. If \( f \in C\left( {\mathbb{R}}^{n}\right) \) and if\n\n\[ \mathop{\sup }\limits_{x}\left( {\left| {f\left( x\right) }\right| + \left| {\widehat{f}\left( x\right) }\right| }\right) {\left( 1 + \left| x\right| \right) }^{n + \varepsilon } < \infty \]\n\nfor some \( \varepsilon > 0 \), then \( \mathop{\sum }\limits_{{\nu \in {\mathbb{Z}}^{n}}}f\left( \nu \right) = \mathop{\sum }\limits_{{\nu \in {\mathbb{Z}}^{n}}}\widehat{f}\left( \nu \right) \) .
Proof. Let \( c \) equal the supremum in the hypotheses. Then for \( \parallel x{\parallel }_{\infty } \leq 1 \) and \( \nu \neq 0 \) we have

\[ \left| {f\left( {x + \nu }\right) }\right| \leq c{\left( 1 + \left| x + \nu \right| \right) }^{-n - \varepsilon } \leq c{\left( 1 + \parallel x + \nu {\parallel }_{\infty }\right) }^{-n - \varepsilon } \]

\[ \leq c{\left( 1 + \parallel \nu {\parallel }_{\infty } - \parallel x{\parallel }_{\infty }\right) }^{-n - \varepsilon } \leq c\parallel \nu {\parallel }_{\infty }^{-n - \varepsilon } \]

(In verifying these calculations, notice that the exponents are negative.) Then we have

\[ \mathop{\sum }\limits_{{\nu \neq 0}}\left| {f\left( {x + \nu }\right) }\right| \leq c\mathop{\sum }\limits_{{\nu \neq 0}}\parallel \nu {\parallel }_{\infty }^{-n - \varepsilon } = c\mathop{\sum }\limits_{{j = 1}}^{\infty }\mathop{\sum }\limits_{{\parallel \nu {\parallel }_{\infty } = j}}\parallel \nu {\parallel }_{\infty }^{-n - \varepsilon } \]

\[ = c\mathop{\sum }\limits_{{j = 1}}^{\infty }{j}^{-n - \varepsilon }\# \{ \nu : \parallel \nu {\parallel }_{\infty } = j\} \]

\[ = c\mathop{\sum }\limits_{{j = 1}}^{\infty }{j}^{-n - \varepsilon }\left( {{c}_{1}{j}^{n - 1}}\right) = {c}_{2}\mathop{\sum }\limits_{{j = 1}}^{\infty }{j}^{-1 - \varepsilon } < \infty \]

By a theorem of Weierstrass (the "M-test"), the series \( \mathop{\sum }\limits_{\nu }f\left( {x + \nu }\right) \) converges uniformly on \( \parallel x{\parallel }_{\infty } \leq 1 \).
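The formula can be illustrated numerically in one dimension. The sketch below (plain Python; the dilation parameter and the truncation point \( \left| \nu \right| \leq 20 \) are ad hoc choices, not part of the theorem) uses the dilated Gaussian \( f\left( x\right) = {e}^{-{2\pi }{x}^{2}} \), whose transform is \( \widehat{f}\left( t\right) = {2}^{-1/2}{e}^{-\pi {t}^{2}/2} \) by the standard Gaussian transform pair and the scaling rule.

```python
import math

A = 2.0  # an arbitrary dilation parameter

def f(x):
    # f(x) = exp(-pi*A*x^2); the Gaussian transform pair gives
    # f^(t) = A**-0.5 * exp(-pi*t^2/A)
    return math.exp(-math.pi * A * x * x)

def f_hat(t):
    return math.exp(-math.pi * t * t / A) / math.sqrt(A)

# both sides of the Poisson summation formula, truncated at |nu| <= 20;
# the discarded tails are far below machine precision
lhs = sum(f(nu) for nu in range(-20, 21))
rhs = sum(f_hat(nu) for nu in range(-20, 21))
print(lhs, rhs)
```

The two sums agree to machine precision, even though neither side is obvious by direct inspection.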
Theorem 1. The function \( \theta \) defined on \( {\mathbb{R}}^{n} \) by \( \theta \left( x\right) = {e}^{-\pi {x}^{2}} \) is a fixed point of the Fourier transform. Thus, \( \widehat{\theta } = \theta \) .
Proof. First observe that the notation is

\[ {x}^{2} = {xx} = x \cdot x = \langle x, x\rangle = \mathop{\sum }\limits_{{j = 1}}^{n}{x}_{j}^{2} = {\left| x\right| }^{2} \]

We prove our result first when \( n = 1 \) and then derive the general case. Define, for \( x \in \mathbb{R} \), the analogous function \( \psi \left( x\right) = {e}^{-\pi {x}^{2}} \) . Since \( {\psi }^{\prime }\left( x\right) = {e}^{-\pi {x}^{2}}\left( {-{2\pi x}}\right) = - {2\pi x\psi }\left( x\right) \), we see that \( \psi \) is the unique solution of the initial-value problem

(1)

\[ {\psi }^{\prime }\left( x\right) + {2\pi x\psi }\left( x\right) = 0\;\psi \left( 0\right) = 1 \]

By Problem 7 in Section 6.2, or the direct use of Theorems 1 and 2 (pages 296-297), we obtain, by taking Fourier transforms in Equation (1),

\[ {\left( \widehat{\psi }\right) }^{\prime }\left( x\right) + {2\pi x}\widehat{\psi }\left( x\right) = 0 \]

The initial value of \( \widehat{\psi } \) is

\[ \widehat{\psi }\left( 0\right) = {\int }_{-\infty }^{\infty }\psi \left( x\right) {dx} = {\int }_{-\infty }^{\infty }{e}^{-\pi {x}^{2}}{dx} = 1 \]

(See Problem 10 for this.) We have seen that \( \psi \) and \( \widehat{\psi } \) are two solutions of the initial-value problem (1). By the theory of ordinary differential equations, \( \psi = \widehat{\psi } \) . This proves the theorem for \( n = 1 \) . Now we notice that

\[ \theta \left( x\right) = \exp \left\lbrack {-\pi \left( {{x}_{1}^{2} + \cdots + {x}_{n}^{2}}\right) }\right\rbrack \]

\[ = {e}^{-\pi {x}_{1}^{2}}{e}^{-\pi {x}_{2}^{2}}\cdots {e}^{-\pi {x}_{n}^{2}} \]

\[ = \psi \left( {x}_{1}\right) \psi \left( {x}_{2}\right) \cdots \psi \left( {x}_{n}\right) \]

By Problem 9 of Section 6.2, page 300,

\[ \widehat{\theta }\left( x\right) = \mathop{\prod }\limits_{{j = 1}}^{n}\widehat{\psi }\left( {x}_{j}\right) = \mathop{\prod }\limits_{{j = 1}}^{n}\psi \left( {x}_{j}\right) = \theta \left( x\right) \]
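The fixed-point property is easy to confirm numerically in one dimension. The sketch below (the integration cutoff and grid size are ad hoc assumptions, ample for a Gaussian integrand) approximates \( \widehat{\theta }\left( t\right) = \int \theta \left( x\right) {e}^{-{2\pi ixt}}{dx} \) by a midpoint Riemann sum and compares it with \( \theta \left( t\right) \).

```python
import math

def fourier_transform(phi, t, lo=-10.0, hi=10.0, n=4000):
    # midpoint Riemann sum for  phi^(t) = integral of phi(x) e^{-2 pi i x t} dx;
    # cutoff and grid are ad hoc, but far more than enough here
    h = (hi - lo) / n
    re = im = 0.0
    for j in range(n):
        x = lo + (j + 0.5) * h
        w = -2.0 * math.pi * x * t
        re += phi(x) * math.cos(w) * h
        im += phi(x) * math.sin(w) * h
    return complex(re, im)

theta = lambda x: math.exp(-math.pi * x * x)
val = fourier_transform(theta, 0.5)
print(val, theta(0.5))
```

The computed transform matches \( \theta \) itself to roughly machine precision at every sample point.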
Theorem 2. First Inversion Theorem. If \( \phi \in \mathcal{S}\left( {\mathbb{R}}^{n}\right) \), then\n\n\[ \phi \left( x\right) = {\int }_{{\mathbb{R}}^{n}}\widehat{\phi } \cdot {e}_{x} = {\int }_{{\mathbb{R}}^{n}}\widehat{\phi }\left( y\right) {e}^{2\pi iyx}{dy} \]
Proof. We use the conjurer’s tricks of smoke and mirrors. Let \( \theta \) be the function in the preceding theorem, and put \( g\left( x\right) = \theta \left( {x/\lambda }\right) \) . Then \( \widehat{g}\left( y\right) = {\lambda }^{n}\widehat{\theta }\left( {\lambda y}\right) \) . (Problem 8 in Section 6.1, page 293.) By Problem 13 in Section 6.2, page 300,\n\n\[ \int \widehat{\phi }\left( y\right) \theta \left( \frac{y}{\lambda }\right) {dy} = \int \widehat{\phi }\left( y\right) g\left( y\right) {dy} = \int \phi \left( y\right) \widehat{g}\left( y\right) {dy} = {\lambda }^{n}\int \phi \left( y\right) \widehat{\theta }\left( {\lambda y}\right) {dy} \]\n\n\[ = \int \phi \left( \frac{u}{\lambda }\right) \widehat{\theta }\left( u\right) {du} \]\n\nIn the preceding calculation, let \( \lambda = k \), where \( k \in \mathbb{N} \), and contemplate letting \( k \rightarrow \infty \) . In order to use the Dominated Convergence Theorem (Section 8.6, page 406), we must establish \( {L}^{1} \) -bounds on the integrands. 
Here they are:

\[ \left| {\widehat{\phi }\left( y\right) \theta \left( \frac{y}{k}\right) }\right| \leq \left| {\widehat{\phi }\left( y\right) }\right| \parallel \theta {\parallel }_{\infty }\;\widehat{\phi } \in {L}^{1}\left( {\mathbb{R}}^{n}\right) \]

\[ \left| {\phi \left( \frac{u}{k}\right) \widehat{\theta }\left( u\right) }\right| \leq \parallel \phi {\parallel }_{\infty }\left| {\widehat{\theta }\left( u\right) }\right| \;\widehat{\theta } \in {L}^{1}\left( {\mathbb{R}}^{n}\right) \]

Then by the Dominated Convergence Theorem,

(2)

\[ \theta \left( 0\right) \int \widehat{\phi }\left( y\right) {dy} = \phi \left( 0\right) \int \widehat{\theta }\left( u\right) {du} \]

But we have, by the special properties of \( \theta \),

\[ 1 = \theta \left( 0\right) = \widehat{\theta }\left( 0\right) = \int \theta \left( x\right) {dx} = \int \widehat{\theta }\left( x\right) {dx} \]

Thus Equation (2) becomes

(3)

\[ \int \widehat{\phi }\left( y\right) {dy} = \phi \left( 0\right) \]

This result is now applied to the shifted function \( {E}_{-x}\phi \) :

\[ \int \widehat{{E}_{-x}\phi }\left( y\right) {dy} = \left( {{E}_{-x}\phi }\right) \left( 0\right) \]

By Theorem 2 in Section 6.1, page 289, this is equivalent to

\[ \int \widehat{\phi }\left( y\right) \cdot {e}_{x}\left( y\right) {dy} = \phi \left( x\right) \]
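The inversion formula itself admits a direct numerical check in one dimension. The sketch below (grids and cutoffs are ad hoc assumptions) takes the Schwartz function \( \phi \left( x\right) = {e}^{-{x}^{2}} \), which is not a fixed point of the transform, computes \( \widehat{\phi } \) by a Riemann sum, and then applies the inversion integral \( \int \widehat{\phi }\left( y\right) {e}^{{2\pi ixy}}{dy} \).

```python
import cmath, math

def ft(g, t, lo, hi, n):
    # midpoint Riemann sum for the transform integral of g at frequency t
    h = (hi - lo) / n
    total = 0j
    for j in range(n):
        x = lo + (j + 0.5) * h
        total += g(x) * cmath.exp(-2j * math.pi * x * t)
    return total * h

phi = lambda x: math.exp(-x * x)          # a Schwartz function, not a fixed point
phi_hat = lambda y: ft(phi, y, -8.0, 8.0, 800)

def invert(x, lo=-3.0, hi=3.0, n=300):
    # phi(x) should be recovered as the integral of phi_hat(y) e^{+2 pi i x y} dy
    h = (hi - lo) / n
    total = 0j
    for j in range(n):
        y = lo + (j + 0.5) * h
        total += phi_hat(y) * cmath.exp(2j * math.pi * y * x)
    return total * h

val = invert(0.7)
print(val.real, phi(0.7))
```

The recovered value agrees with \( \phi \left( {0.7}\right) = {e}^{-{0.49}} \), and the imaginary part is at the level of rounding error.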
Theorem 3. The Fourier transform operator \( \mathcal{F} \) from \( \mathcal{S}\left( {\mathbb{R}}^{n}\right) \) to \( \mathcal{S}\left( {\mathbb{R}}^{n}\right) \) is a continuous linear bijection, and \( {\mathcal{F}}^{-1} = {\mathcal{F}}^{3} \) .
Proof. The continuity and linearity of \( \mathcal{F} \) were established by Theorem 4 in Section 6.2, page 297. The fact that \( \mathcal{F} \) is surjective is established by writing the basic inversion formula from the preceding theorem as \[ \phi \left( x\right) = \int \widehat{\phi }\left( y\right) {e}_{x}\left( y\right) \;{dy} = \int \widehat{\phi }\left( y\right) {e}_{-x}\left( {-y}\right) \;{dy} = \int \widehat{\phi }\left( {-u}\right) {e}_{-x}\left( u\right) \;{du} = \left\lbrack {\mathcal{F}\left( {B\widehat{\phi }}\right) }\right\rbrack \left( x\right) \] Here \( B \) is the operator such that \( \left( {B\phi }\right) \left( x\right) = \phi \left( {-x}\right) \) . The inversion formula also shows that \( \mathcal{F} \) is injective, for if \( \widehat{\phi } = 0 \), then obviously \( \phi = 0 \) . Again by the inversion formula, \[ \left( \widehat{\widehat{\phi }}\right) \left( y\right) = \int \widehat{\phi }\left( x\right) \cdot {e}_{-y}\left( x\right) {dx} = \phi \left( {-y}\right) = \left( {B\phi }\right) \left( y\right) \] Thus \( {\mathcal{F}}^{2} = B \) . It follows that \( {\mathcal{F}}^{4} = I \) and \( {\mathcal{F}}^{3}\mathcal{F} = I \) .
Theorem 4. Second Inversion Theorem. If \( f \) and \( \widehat{f} \) belong to \( {L}^{1}\left( {\mathbb{R}}^{n}\right) \), then for almost all \( x \) , \[ f\left( x\right) = {\int }_{{\mathbb{R}}^{n}}\widehat{f}\left( y\right) {e}^{2\pi ixy}{dy} \]
Proof. Assume that \( f \) and \( \widehat{f} \) are in \( {L}^{1}\left( {\mathbb{R}}^{n}\right) \). Let \( \phi \in \mathcal{S} \). Then by Theorem 2, \( \phi \left( x\right) = \int {e}_{x}\widehat{\phi } \). By Problem 13 in Section 6.2, page 300, \( \int \widehat{f}\phi = \int f\widehat{\phi } \). Hence if we put \( F\left( y\right) = \int \widehat{f}{e}_{y} \), then we have (with the help of the Fubini theorem)

\[ \int \widehat{\phi }\left( x\right) f\left( x\right) {dx} = \int \phi \left( x\right) \widehat{f}\left( x\right) {dx} = \iint {e}_{x}\left( y\right) \widehat{\phi }\left( y\right) {dy}\widehat{f}\left( x\right) {dx} \]

\[ = \int \widehat{\phi }\left( y\right) \left\lbrack {\int \widehat{f}\left( x\right) {e}_{y}\left( x\right) {dx}}\right\rbrack {dy} = \int \widehat{\phi }\left( y\right) F\left( y\right) {dy} \]

Thus \( \int \psi \left( x\right) \left( {f - F}\right) \left( x\right) {dx} = 0 \) for all \( \psi \in \mathcal{S} \), because \( \widehat{\phi } \) can be any element of \( \mathcal{S} \). The same equation is true for all \( \psi \in \mathcal{D} \), since \( \mathcal{D} \) is a subset of \( \mathcal{S} \). Now apply Theorem 2 of Section 5.1, page 251, according to which \( g = 0 \) when \( \widetilde{g} = 0 \). The conclusion is that \( f\left( x\right) = F\left( x\right) \) almost everywhere.
Lemma 1. If \( f \) and \( g \) belong to the Schwartz space \( \mathcal{S}\left( {\mathbb{R}}^{n}\right) \), then \( f * g \) also belongs to \( \mathcal{S}\left( {\mathbb{R}}^{n}\right) \), and furthermore, \( \widehat{fg} = \widehat{f} * \widehat{g} \) .
Proof. Since \( f \) and \( g \) belong to \( \mathcal{S} \), so does \( {fg} \) by Lemma 2 in Section 6.2, page 295. By Theorem 4 of Section 6.2, page 297, \( \widehat{f},\widehat{g} \), and \( \widehat{fg} \) belong to \( \mathcal{S} \). Consequently, \( \widehat{f}\widehat{g} \) belongs to \( \mathcal{S} \) . By Theorem 4 of Section 6.1, page 290, \( \widehat{f * g} = \widehat{f}\widehat{g} \) . Hence \( \widehat{f * g} \in \mathcal{S} \), and by the inversion theorem (Theorem 2 in the preceding section), \( f * g \in \mathcal{S} \) . Using the operator \( \mathcal{F} \) such that \( \mathcal{F}\left( f\right) = \widehat{f} \) and the operator \( B \) such that \( \left( {Bf}\right) \left( x\right) = f\left( {-x}\right) \), we have

\[ \widehat{f} * \widehat{g} = {\mathcal{F}}^{-1}\mathcal{F}\left( {\widehat{f} * \widehat{g}}\right) = {\mathcal{F}}^{-1}\left( {{\mathcal{F}}^{2}f \cdot {\mathcal{F}}^{2}g}\right) \]

\[ = {\mathcal{F}}^{-1}\left( {{Bf} \cdot {Bg}}\right) = {\mathcal{F}}^{-1}B\left( {fg}\right) = {\mathcal{F}}^{-1}{\mathcal{F}}^{2}\left( {fg}\right) = \widehat{fg} \]
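For a pair of Gaussians both sides of the lemma are computable in closed form, which gives a concrete check. Completing the square in the convolution integral yields \( \left( {\theta * \theta }\right) \left( x\right) = {2}^{-1/2}{e}^{-\pi {x}^{2}/2} \), consistent with \( \widehat{\theta * \theta } = {\widehat{\theta }}^{2} \). The sketch below (cutoff and grid are ad hoc assumptions) compares a numerical convolution against that closed form.

```python
import math

theta = lambda x: math.exp(-math.pi * x * x)

def convolve(f, g, x, lo=-10.0, hi=10.0, n=4000):
    # midpoint Riemann sum for (f*g)(x) = integral of f(y) g(x-y) dy
    h = (hi - lo) / n
    total = 0.0
    for j in range(n):
        y = lo + (j + 0.5) * h
        total += f(y) * g(x - y)
    return total * h

# closed form for this pair, obtained by completing the square:
# (theta*theta)(x) = 2**-0.5 * exp(-pi*x^2/2)
x0 = 0.5
exact = math.exp(-math.pi * x0 * x0 / 2) / math.sqrt(2)
approx = convolve(theta, theta, x0)
print(approx, exact)
```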
Lemma 2. If \( f \in {L}^{1}\left( {\mathbb{R}}^{n}\right) \) and \( \phi \in \mathcal{D}\left( {\mathbb{R}}^{n}\right) \), then \( f * \phi \in {C}^{\infty }\left( {\mathbb{R}}^{n}\right) \) .
Proof. By the theorem in Section 5.5, page 271,

\[ {D}^{\alpha }\left( {T * \phi }\right) = T * {D}^{\alpha }\phi \;\left( {T \in {\mathcal{D}}^{\prime },\phi \in \mathcal{D}}\right) \]

In particular, for \( f \in {L}^{1}\left( {\mathbb{R}}^{n}\right) \),

(1)

\[ {D}^{\alpha }\left( {f * \phi }\right) = f * {D}^{\alpha }\phi \]

(Recall that the definition of convolution involving distributions was made to conform to the ordinary convolution if the distribution arises from a function.) Now \( f * g \) is continuous for any continuous \( g \) with compact support, as is easily seen from writing

\[ \left( {f * g}\right) \left( x\right) - \left( {f * g}\right) \left( y\right) = \int f\left( u\right) \left\lbrack {g\left( {x - u}\right) - g\left( {y - u}\right) }\right\rbrack {du} \]

Applying this with \( g = {D}^{\alpha }\phi \) to the right side of Equation (1), we see that \( {D}^{\alpha }\left( {f * \phi }\right) \) is continuous for every multi-index \( \alpha \) .
Lemma 3. The translation operator \( {E}_{x} \) has the following continuity property: If \( 1 \leq p < \infty \) and \( f \in {L}^{p}\left( {\mathbb{R}}^{n}\right) \), then the mapping \( x \mapsto {E}_{x}f \) is continuous from \( {\mathbb{R}}^{n} \) to \( {L}^{p}\left( {\mathbb{R}}^{n}\right) \) .
Proof. The continuous functions with compact support form a dense set in \( {L}^{p} \), if \( 1 \leq p < \infty \) . Hence, if \( \varepsilon > 0 \), then there exists such a continuous function \( h \) for which \( \parallel f - h{\parallel }_{p} \leq \varepsilon \) . Let the support of \( h \) be contained in the ball \( {B}_{r} \) of radius \( r \) centered at \( 0 \) . By the uniform continuity of \( h \) there is a \( \delta > 0 \) such that\n\n\[ \left| {x - y}\right| < \delta \; \Rightarrow \;\left| {h\left( x\right) - h\left( y\right) }\right| < \varepsilon \]\n\nThere is no loss of generality in supposing that \( \delta < r \) . If \( \left| {x - y}\right| < \delta \), then\n\n\[ {\begin{Vmatrix}{E}_{x}h - {E}_{y}h\end{Vmatrix}}_{p}^{p} = \int {\left| h\left( z - x\right) - h\left( z - y\right) \right| }^{p}{dz} \leq {\varepsilon }^{p}\operatorname{vol}\left( {B}_{2r}\right) \leq {\varepsilon }^{p}{\left( 4r\right) }^{n} \]\n\nFrom the triangle inequality it follows that\n\n\[ {\begin{Vmatrix}{E}_{x}f - {E}_{y}f\end{Vmatrix}}_{p} \leq {\begin{Vmatrix}{E}_{x}f - {E}_{x}h\end{Vmatrix}}_{p} + {\begin{Vmatrix}{E}_{x}h - {E}_{y}h\end{Vmatrix}}_{p} + {\begin{Vmatrix}{E}_{y}h - {E}_{y}f\end{Vmatrix}}_{p} \]\n\n\[ = {\begin{Vmatrix}{E}_{x}\left( f - h\right) \end{Vmatrix}}_{p} + {\begin{Vmatrix}{E}_{x}h - {E}_{y}h\end{Vmatrix}}_{p} + {\begin{Vmatrix}{E}_{y}\left( h - f\right) \end{Vmatrix}}_{p} \]\n\n\[ \leq \parallel f - h{\parallel }_{p} + \varepsilon {\left( 4r\right) }^{n/p} + \parallel h - f{\parallel }_{p} \]\n\n\[ \leq {2\varepsilon } + \varepsilon {\left( 4r\right) }^{n/p} \]
Theorem 1. If \( f \in {L}^{1}\left( {\mathbb{R}}^{n}\right) \), then \( f * {\rho }_{k} \rightarrow f \) in the metric of \( {L}^{1}\left( {\mathbb{R}}^{n}\right) \) .
Proof. Since \( \int {\rho }_{k} = 1 \), \[ \left( {f * {\rho }_{k}}\right) \left( x\right) - f\left( x\right) = \int \left\lbrack {f\left( {x - z}\right) - f\left( x\right) }\right\rbrack {\rho }_{k}\left( z\right) {dz} \] Hence by Fubini's Theorem (Chapter 8, page 426) \[ \int \left| {f * {\rho }_{k} - f}\right| \leq \iint \left| {f\left( {x - z}\right) - f\left( x\right) }\right| {\rho }_{k}\left( z\right) {dzdx} \] \[ = \iint \left| {f\left( {x - z}\right) - f\left( x\right) }\right| {dx}{\rho }_{k}\left( z\right) {dz} \] \[ = \int {\begin{Vmatrix}{E}_{z}f - f\end{Vmatrix}}_{1}{\rho }_{k}\left( z\right) {dz} \] Here we need Lemma 3: If \( f \in {L}^{1}\left( {\mathbb{R}}^{n}\right) \) and \( \varepsilon > 0 \), then there is a \( \delta > 0 \) such that \( {\begin{Vmatrix}{E}_{z}f - f\end{Vmatrix}}_{1} \leq \varepsilon \) whenever \( \left| z\right| \leq \delta \) . If \( \rho \left( x\right) = 0 \) when \( \left| x\right| > r \), then \( {\rho }_{k}\left( x\right) = 0 \) when \( \left| x\right| > r/k \) . Hence when \( r/k \leq \delta \) we will have \( {\begin{Vmatrix}f * {\rho }_{k} - f\end{Vmatrix}}_{1} \leq \varepsilon \) . Lemma 2 shows that \( f * {\rho }_{k} \in {C}^{\infty }\left( {\mathbb{R}}^{n}\right) \) .
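The \( {L}^{1} \) convergence can be observed numerically. The sketch below (a one-dimensional experiment; the step function \( f \), the grids, and the error tolerances are all ad hoc assumptions) builds the usual \( {C}^{\infty } \) bump mollifier \( {\rho }_{k}\left( x\right) = {k\rho }\left( {kx}\right) \) and estimates \( {\begin{Vmatrix}f * {\rho }_{k} - f\end{Vmatrix}}_{1} \) for two values of \( k \); the error should shrink roughly like \( 1/k \).

```python
import math

def rho(x):
    # the standard C-infinity bump supported on (-1,1), unnormalized
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

# normalize numerically so that the mollifier has integral 1
hz0 = 0.001
Z = sum(rho(-1.0 + (j + 0.5) * hz0) for j in range(2000)) * hz0

f = lambda x: 1.0 if 0.0 <= x <= 1.0 else 0.0   # a step function in L^1

def l1_error(k, grid_h=0.005):
    # approximate the integral of |(f * rho_k)(x) - f(x)| over [-1, 2],
    # where rho_k(x) = k * rho(k*x) / Z is supported on |x| <= 1/k
    total = 0.0
    nz = 400
    hz = 2.0 / (k * nz)
    for i in range(int(3.0 / grid_h)):
        x = -1.0 + (i + 0.5) * grid_h
        conv = 0.0
        for j in range(nz):
            z = -1.0 / k + (j + 0.5) * hz
            conv += f(x - z) * k * rho(k * z) / Z
        total += abs(conv * hz - f(x)) * grid_h
    return total

e2, e8 = l1_error(2), l1_error(8)
print(e2, e8)
```

Quadrupling \( k \) cuts the \( {L}^{1} \) error by roughly a factor of four, in line with the expected \( O\left( {1/k}\right) \) rate for a jump discontinuity.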
Theorem 2. The space of test functions \( \mathcal{D}\left( {\mathbb{R}}^{n}\right) \) is a dense subspace of \( {L}^{1}\left( {\mathbb{R}}^{n}\right) \).
Proof. Let \( f \in {L}^{1}\left( {\mathbb{R}}^{n}\right) \), and let \( \varepsilon > 0 \) . We wish to find an element of \( \mathcal{D}\left( {\mathbb{R}}^{n}\right) \) within distance \( \varepsilon \) of \( f \) . The function \( f * {\rho }_{k} \) from the preceding theorem would be a candidate, but it need not have compact support. So, we do the natural thing, which is to define

\[ {f}_{m}\left( x\right) = \left\{ \begin{array}{ll} f\left( x\right) & \text{ if }\left| x\right| \leq m \\ 0 & \text{ elsewhere } \end{array}\right. \]

Then \( {f}_{m}\left( x\right) \rightarrow f\left( x\right) \) pointwise, and the Dominated Convergence Theorem (page 406) gives us \( \int \left| {f}_{m}\right| \rightarrow \int \left| f\right| \) . Consequently, we can select an integer \( m \) such that \( \parallel f{\parallel }_{1} - {\begin{Vmatrix}{f}_{m}\end{Vmatrix}}_{1} < \varepsilon /2 \) . Then

\[ {\int }_{\left| x\right| > m}\left| {f\left( x\right) }\right| {dx} < \varepsilon /2 \]

Now select a
Let \( n = 1 \) and \( D = \frac{d}{dx} \). If \( P \) is a polynomial, say \( P\left( \lambda \right) = \mathop{\sum }\limits_{{j = 0}}^{m}{c}_{j}{\lambda }^{j} \), then \( P\left( D\right) \) is a linear differential operator with constant coefficients:
(1)

\[ P\left( D\right) = \mathop{\sum }\limits_{{j = 0}}^{m}{c}_{j}{D}^{j} = \mathop{\sum }\limits_{{j = 0}}^{m}{c}_{j}{\left( 2\pi i\right) }^{j}{\left( \frac{D}{2\pi i}\right) }^{j} \]

Consider the ordinary differential equation

(2)

\[ P\left( D\right) u = g\; - \infty < x < \infty \]

in which \( g \) is given and is assumed to be an element of \( {L}^{1}\left( \mathbb{R}\right) \). Apply the Fourier transform \( \mathcal{F} \) to both sides of Equation (2). Then use Theorem 1 of Section 6.2 (page 296), which asserts that if \( u \in \mathcal{S} \), then

(3)

\[ \mathcal{F}\left\lbrack {P\left( D\right) u}\right\rbrack = {P}^{ + }\mathcal{F}\left( u\right) \]

where \( {P}^{ + }\left( x\right) = P\left( {2\pi ix}\right) \). The transformed version of Equation (2) is therefore

(4)

\[ {P}^{ + }\mathcal{F}\left( u\right) = \mathcal{F}\left( g\right) \]

The solution of Equation (4) is

(5)

\[ \mathcal{F}\left( u\right) = \mathcal{F}\left( g\right) /{P}^{ + } \]

The function \( u \) is recovered by taking the inverse transformation, if it exists:

(6)

\[ u = {\mathcal{F}}^{-1}\left\lbrack {\mathcal{F}\left( g\right) /{P}^{ + }}\right\rbrack \]

Theorem 4 in Section 6.1, page 291, states that

(7)

\[ \mathcal{F}\left( {\phi * \psi }\right) = \mathcal{F}\left( \phi \right) \cdot \mathcal{F}\left( \psi \right) \]

An equivalent formulation, in terms of \( {\mathcal{F}}^{-1} \), is

(8)

\[ \phi * \psi = {\mathcal{F}}^{-1}\left\lbrack {\mathcal{F}\left( \phi \right) \cdot \mathcal{F}\left( \psi \right) }\right\rbrack \]

If \( h \) is a function such that \( \widehat{h} = 1/{P}^{ + } \), then Equations (6) and (8) yield

(9)

\[ u = {\mathcal{F}}^{-1}\left\lbrack \frac{\widehat{g}}{{P}^{ + }}\right\rbrack = {\mathcal{F}}^{-1}\left\lbrack {\widehat{g}\widehat{h}}\right\rbrack = g * h \]

In detail,

(10)

\[ u\left( x\right) = {\int }_{-\infty }^{\infty }g\left( y\right) h\left( {x - y}\right) {dy} \]

The function \( h \) must be obtained by the equation \( h = {\mathcal{F}}^{-1}\left( {1/{P}^{ + }}\right) \).
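The key ingredient of this recipe, \( \widehat{h} = 1/{P}^{ + } \), can be checked numerically for the first-order operator \( P\left( D\right) = D + b \) with \( b > 0 \) . A standard transform pair gives \( h\left( x\right) = {e}^{-{bx}} \) for \( x \geq 0 \) and \( h\left( x\right) = 0 \) for \( x < 0 \) ; the sketch below (the cutoff and grid size are ad hoc assumptions) confirms that its transform agrees with \( 1/{P}^{ + }\left( t\right) = 1/\left( {b + {2\pi it}}\right) \).

```python
import cmath, math

b = 1.0  # any b > 0 will do

def h_hat_numeric(t, hi=30.0, n=30000):
    # midpoint Riemann sum for the transform of h(x) = exp(-b*x) on [0, inf);
    # the cutoff hi and grid size n are ad hoc but ample here
    step = hi / n
    total = 0j
    for j in range(n):
        x = (j + 0.5) * step
        total += math.exp(-b * x) * cmath.exp(-2j * math.pi * x * t)
    return total * step

for t in (0.0, 0.3, 1.0):
    print(t, h_hat_numeric(t), 1.0 / (b + 2j * math.pi * t))
```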
(11)

\[ {u}^{\prime }\left( x\right) + {bu}\left( x\right) = {e}^{-\left| x\right| }\;\left( {b > 0,\;b \neq 1}\right) \]
The Fourier transform of the function \( g\left( x\right) = {e}^{-\left| x\right| } \) is \( \widehat{g}\left( t\right) = 2/\left( {1 + 4{\pi }^{2}{t}^{2}}\right) \) (Problem 5 of Section 6.3, page 304). Hence the Fourier transform of Equation (11) is \[ {2\pi it}\;\widehat{u}\left( t\right) + b\;\widehat{u}\left( t\right) = 2/\left( {1 + 4{\pi }^{2}{t}^{2}}\right) \] Solving for \( \widehat{u} \), we have \[ \widehat{u}\left( t\right) = \frac{2}{\left( {1 + 4{\pi }^{2}{t}^{2}}\right) \left( {b + {2\pi it}}\right) } \] By the Inversion Theorem, \[ u\left( t\right) = {\int }_{-\infty }^{\infty }\frac{2{e}^{2\pi ixt}{dx}}{\left( {1 + 4{\pi }^{2}{x}^{2}}\right) \left( {b + {2\pi ix}}\right) } \] To simplify this, substitute \( z = {2\pi x} \), to obtain \[ u\left( t\right) = \frac{1}{\pi }{\int }_{-\infty }^{\infty }\frac{{e}^{itz}{dz}}{\left( {1 + {z}^{2}}\right) \left( {b + {iz}}\right) } \] The integrand, call it \( f\left( z\right) \), has poles at \( z = + i, - i \), and \( {ib} \) . In order to evaluate this integral, we use the residue calculus, as outlined at the end of this section. Let the complex variable be expressed as \( z = x + {iy} \) . 
Then

\[ \left| {e}^{itz}\right| = \left| {e}^{{it}\left( {x + {iy}}\right) }\right| = \left| {e}^{-{ty} + {itx}}\right| = {e}^{-{ty}} \]

For \( t > 0 \) we see that

\[ \mathop{\lim }\limits_{{r \rightarrow \infty }}\sup \{ \left| {{zf}\left( z\right) }\right| : \left| z\right| = r,\mathcal{I}m\left( z\right) \geq 0\} = 0 \]

Hence by Theorem 5 at the end of this section,

\[ {\int }_{-\infty }^{\infty }f\left( z\right) {dz} = {2\pi i} \times \left( {\text{ residue at }i + \text{ residue at }{ib}}\right) \]

By partial fraction decomposition we obtain

\[ f\left( z\right) = {e}^{itz}\left\lbrack {\frac{{\left( 2ib - 2i\right) }^{-1}}{z - i} - \frac{{\left( 2ib + 2i\right) }^{-1}}{z + i} + \frac{{\left( i - i{b}^{2}\right) }^{-1}}{z - {ib}}}\right\rbrack \]

Hence the residues at \( i, - i \), and \( {ib} \) are respectively

\[ \frac{{e}^{-t}}{{2i}\left( {b - 1}\right) }\;\frac{-{e}^{t}}{{2i}\left( {b + 1}\right) }\;\frac{{e}^{-{bt}}}{i\left( {1 - {b}^{2}}\right) } \]

Thus for \( t > 0 \) ,

\[ u\left( t\right) = {\pi }^{-1}{2\pi i}\left\lbrack {\frac{{e}^{-t}}{{2i}\left( {b - 1}\right) } + \frac{{e}^{-{bt}}}{i\left( {1 - {b}^{2}}\right) }}\right\rbrack = \frac{{e}^{-t}}{b - 1} + \frac{2{e}^{-{bt}}}{1 - {b}^{2}} \]

Similarly, for \( t < 0 \) we close the contour in the lower half-plane; its clockwise orientation contributes \( -{2\pi i} \) times the residue at \( -i \), so that

\[ u\left( t\right) = {\pi }^{-1}\left( {-{2\pi i}}\right) \frac{-{e}^{t}}{{2i}\left( {b + 1}\right) } = \frac{{e}^{t}}{1 + b} \]

(One may check that the two branches agree at \( t = 0 \), both giving \( 1/\left( {1 + b}\right) \) .)
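The branch for \( t > 0 \) can be verified directly against the differential equation. The sketch below (the value \( b = 2 \) and the finite-difference step are arbitrary assumptions) checks that \( {u}^{\prime } + {bu} = {e}^{-t} \) at a few sample points.

```python
import math

b = 2.0  # an arbitrary choice with b > 0, b != 1

def u(t):
    # the residue-calculus solution for t > 0
    return math.exp(-t) / (b - 1.0) + 2.0 * math.exp(-b * t) / (1.0 - b * b)

# residual of u' + b*u - e^{-t}, with u' approximated by a central difference
d = 1e-5
max_err = max(
    abs((u(t + d) - u(t - d)) / (2 * d) + b * u(t) - math.exp(-t))
    for t in (0.5, 1.0, 2.0)
)
print(max_err)
```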
Consider the integral equation

\[ {\int }_{-\infty }^{\infty }k\left( {x - s}\right) u\left( s\right) {ds} = g\left( x\right) \]

in which \( k \) and \( g \) are given, and \( u \) is an unknown function. We can write

\[ u * k = g \]
After taking Fourier transforms and using Theorem 4 in Section 6.1 (page 290) we have

\[ \widehat{u}\widehat{k} = \widehat{g} \]

whence \( \widehat{u} = \widehat{g}/\widehat{k} \) and \( u = {\mathcal{F}}^{-1}\left( {\widehat{g}/\widehat{k}}\right) \) .
Theorem 1. If \( f \) is the Fourier transform of a positive function in \( {L}^{1}\left( {\mathbb{R}}^{n}\right) \), then for any finite set of points \( {x}_{1},{x}_{2},\ldots ,{x}_{m} \) in \( {\mathbb{R}}^{n} \) the matrix having elements \( f\left( {{x}_{i} - {x}_{j}}\right) \) will be positive definite (and hence nonsingular).
Proof. Let \( f = \widehat{g} \), where \( g \in {L}^{1}\left( {\mathbb{R}}^{n}\right) \) and \( g\left( x\right) > 0 \) everywhere. The interpolation matrix in question must be shown to be positive definite. This means that \( {u}^{ * }{Au} > 0 \) for all nonzero vectors \( u \) in \( {\mathbb{C}}^{m} \) . We undertake a calculation of this quadratic form:\n\n\[ \n{u}^{ * }{Au} = \mathop{\sum }\limits_{{k = 1}}^{m}\mathop{\sum }\limits_{{j = 1}}^{m}{\bar{u}}_{k}{A}_{kj}{u}_{j} = \sum \sum {\bar{u}}_{k}{u}_{j}f\left( {{x}_{k} - {x}_{j}}\right) \n\]\n\n\[ \n= \sum \sum {\bar{u}}_{k}{u}_{j}{\int }_{{\mathbb{R}}^{n}}g\left( y\right) {e}^{-{2\pi iy}\left( {{x}_{k} - {x}_{j}}\right) }{dy} \n\]\n\n\[ \n= {\int }_{{\mathbb{R}}^{n}}g\left( y\right) \sum {\bar{u}}_{k}{e}^{-{2\pi iy}{x}_{k}}\sum {u}_{j}{e}^{{2\pi iy}{x}_{j}}{dy} \n\]\n\n\[ \n= {\int }_{{\mathbb{R}}^{n}}g\left( y\right) {\left| h\left( y\right) \right| }^{2}{dy} \geq 0 \n\]\n\nHere we have written\n\n\[ \nh\left( y\right) = \mathop{\sum }\limits_{{j = 1}}^{m}{u}_{j}{e}^{{2\pi iy}{x}_{j}}\;\left( {y \in {\mathbb{R}}^{n}}\right) \n\]\n\nSo far, we have proved only that the interpolation matrix \( A \) is nonnegative definite. How can we conclude that the final integral above is positive? It will suffice to establish that the functions \( y \mapsto {e}^{{2\pi iy}{x}_{j}} \) form a linearly independent set, for in our computation, the vector \( u \) was not zero. Once we have the linear independence, it will follow that \( {\left| h\left( y\right) \right| }^{2} \) is positive somewhere in \( {\mathbb{R}}^{n} \), and by continuity will be positive on an open set. Since \( g \) is positive everywhere, the final integral above would have to be positive. The linear independence is proved separately in two lemmas.
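A concrete instance is easy to test: with \( g\left( y\right) = {e}^{-\pi {y}^{2}} > 0 \) in \( {L}^{1}\left( \mathbb{R}\right) \) we may take \( f = \widehat{g} = g \) (the Gaussian is its own transform), so \( f \) should generate positive definite matrices. The sketch below (the three points are an arbitrary choice) checks Sylvester's criterion, namely that all leading principal minors are positive, for a small interpolation matrix.

```python
import math

# g(y) = exp(-pi*y^2) > 0 lies in L^1(R) and equals its own Fourier
# transform, so f = g qualifies; the interpolation points are arbitrary
f = lambda d: math.exp(-math.pi * d * d)
pts = [0.0, 0.5, 1.2]
A = [[f(xi - xj) for xj in pts] for xi in pts]

# Sylvester's criterion: all leading principal minors must be positive
m1 = A[0][0]
m2 = A[0][0] * A[1][1] - A[0][1] * A[1][0]
m3 = (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
      - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
      + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))
print(m1, m2, m3)
```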
Lemma 1. Let \( {\lambda }_{1},\ldots ,{\lambda }_{m} \) be \( m \) distinct complex numbers, and let \( {c}_{1},\ldots ,{c}_{m} \) be complex numbers. If \( \mathop{\sum }\limits_{{j = 1}}^{m}{c}_{j}{e}^{{\lambda }_{j}z} = 0 \) for all \( z \) in a subset of \( \mathbb{C} \) that has an accumulation point, then \( \mathop{\sum }\limits_{{j = 1}}^{m}\left| {c}_{j}\right| = 0 \) .
Proof. Use induction on \( m \) . If \( m = 1 \), the result is obvious, because \( {e}^{{\lambda }_{1}z} \) is not zero for any \( z \in \mathbb{C} \) . If the lemma has been established for a certain integer \( m - 1 \), then we can prove it for \( m \) as follows. Let \( f\left( z\right) = \mathop{\sum }\limits_{1}^{m}{c}_{j}{e}^{{\lambda }_{j}z} \), and suppose that \( f\left( {z}_{k}\right) = 0 \) for some convergent sequence \( \left( {z}_{k}\right) \) of distinct points. Since \( f \) is an entire function, we infer that \( f\left( z\right) = 0 \) for all \( z \) in \( \mathbb{C} \) . (See, for example, [Ti2] page 88, or [Ru3] page 226.) Consider now the function

\[ F\left( z\right) = \frac{d}{dz}\left\lbrack {{e}^{-{\lambda }_{m}z}f\left( z\right) }\right\rbrack = \frac{d}{dz}\mathop{\sum }\limits_{{j = 1}}^{m}{c}_{j}{e}^{\left( {{\lambda }_{j} - {\lambda }_{m}}\right) z} = \mathop{\sum }\limits_{{j = 1}}^{{m - 1}}{c}_{j}\left( {{\lambda }_{j} - {\lambda }_{m}}\right) {e}^{\left( {{\lambda }_{j} - {\lambda }_{m}}\right) z} \]

Since \( f = 0 \), we have \( F = 0 \) . By the induction hypothesis, \( {c}_{j}\left( {{\lambda }_{j} - {\lambda }_{m}}\right) = 0 \) for \( 1 \leq j \leq m - 1 \) . Since the \( {\lambda }_{j} \) are distinct, we infer that \( {c}_{1} = \cdots = {c}_{m - 1} = 0 \) . The function \( f \) then reduces to \( f\left( z\right) = {c}_{m}{e}^{{\lambda }_{m}z} \) . Since \( f = 0 \), \( {c}_{m} = 0 \) .
Lemma 2. Let \( {w}_{1},\ldots ,{w}_{m} \) be \( m \) distinct points in \( {\mathbb{C}}^{n} \). Let \( {c}_{1},\ldots ,{c}_{m} \) be complex numbers. If \( \mathop{\sum }\limits_{{j = 1}}^{m}{c}_{j}{e}^{{w}_{j}x} = 0 \) for all \( x \) in a nonempty open subset of \( {\mathbb{R}}^{n} \), then \( \mathop{\sum }\limits_{{j = 1}}^{m}\left| {c}_{j}\right| = 0 \).
Proof. Let \( \mathcal{O} \) be an open set in \( {\mathbb{R}}^{n} \) having the stated property. Select \( \xi \in \mathcal{O} \) such that the complex inner products \( {w}_{j}\xi \) are all different. This is possible by the following reasoning. The condition on \( \xi \) can be expressed in the form \( {w}_{j}\xi \neq {w}_{k}\xi \) for \( 1 \leq j < k \leq m \). This, in turn, means that \( \xi \) does not lie in any of the sets\n\n\[ \n{H}_{jk} = \left\{ {x \in {\mathbb{R}}^{n} : \left( {{w}_{j} - {w}_{k}}\right) x = 0}\right\} \;\left( {1 \leq j < k \leq m}\right) \n\]\n\nEach set \( {H}_{jk} \) is the intersection of two hyperplanes in \( {\mathbb{R}}^{n} \). (See Problem 4.) Hence each \( {H}_{jk} \) is a set of Lebesgue measure 0 in \( {\mathbb{R}}^{n} \), and the same is true of any countable union of such sets. The finite family of sets \( {H}_{jk} \) therefore cannot cover the open set \( \mathcal{O} \), which must have positive measure. Now define, for \( t \in \mathbb{C} \), the function \( f\left( t\right) = \mathop{\sum }\limits_{1}^{m}{c}_{j}{e}^{\left( {{w}_{j}\xi }\right) t} \). Since \( \xi \in \mathcal{O} \), our hypothesis gives us \( f\left( 1\right) = 0 \). Let \( U \) be a neighborhood of 1 in \( \mathbb{C} \) such that \( {t\xi } \in \mathcal{O} \) when \( t \in U \). Since \( f\left( t\right) = 0 \) on \( U \), Lemma 1 shows that \( \mathop{\sum }\limits_{{j = 1}}^{m}\left| {c}_{j}\right| = 0 \).
Theorem 2. Laurent’s Theorem. Let \( f \) be a function that is analytic inside and on a circle \( C \) in the complex plane, except for having an isolated singularity at the center \( \zeta \) . Then at each point inside \( C \) with the exception of \( \zeta \) we have\n\n\[ f\left( z\right) = \mathop{\sum }\limits_{{n = - \infty }}^{\infty }{c}_{n}{\left( z - \zeta \right) }^{n}\;{c}_{n} = \frac{1}{2\pi i}{\int }_{C}\frac{f\left( z\right) {dz}}{{\left( z - \zeta \right) }^{n + 1}} \]
The coefficient \( {c}_{-1} \) is called the residue of \( f \) at \( \zeta \) . By Laurent’s theorem, the residue is also given by\n\n(12)\n\n\[ {c}_{-1} = \frac{1}{2\pi i}{\int }_{C}f\left( z\right) {dz} \]
The integral \( {\int }_{C}{e}^{z}/{z}^{4}{dz} \), where \( C \) is the unit circle, can be computed with the principle in Equation (12). Indeed, the given integral is \( {2\pi i} \) times the residue of \( {e}^{z}/{z}^{4} \) at 0 .
Since\n\n\[ \n{e}^{z}/{z}^{4} = \left( {1 + z + \frac{{z}^{2}}{2!} + \frac{{z}^{3}}{3!} + \cdots }\right) /{z}^{4} \n\]\n\n\[ \n= {z}^{-4} + {z}^{-3} + \frac{1}{2}{z}^{-2} + \frac{1}{6}{z}^{-1} + \cdots \n\]\n\nwe see that the residue is \( \frac{1}{6} \) and the integral is \( \frac{1}{3}{\pi i} \) .
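The residue computation can be confirmed by discretizing the contour integral directly. The sketch below (the number of nodes is an ad hoc choice) applies the trapezoid rule on the unit circle, which is extremely accurate for integrands smooth on the contour, and reproduces \( \frac{1}{3}{\pi i} \).

```python
import cmath, math

def circle_integral(f, n=2000):
    # trapezoid rule for the integral of f(z) dz over the unit circle,
    # parametrized by z = e^{i theta}, dz = i z d(theta)
    total = 0j
    for k in range(n):
        z = cmath.exp(2j * math.pi * k / n)
        total += f(z) * 1j * z
    return total * (2 * math.pi / n)

val = circle_integral(lambda z: cmath.exp(z) / z ** 4)
print(val, 1j * math.pi / 3)
```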
Theorem 3 The Residue Theorem. Let \( C \) be a simple closed curve inside of which \( f \) is analytic with the exception of isolated singularities at the points \( {\zeta }_{1},\ldots ,{\zeta }_{m} \) . Then \( \frac{1}{2\pi i}{\int }_{C}f\left( z\right) {dz} \) is the sum of the residues of \( f \) at \( {\zeta }_{1},\ldots ,{\zeta }_{m} \) .
Proof. Draw mutually disjoint circles \( {C}_{1},\ldots ,{C}_{m} \) around the singularities and contained within \( C \) . The integral around the path shown in the figure is zero, by Cauchy’s integral theorem. (Figure 6.1a depicts the case \( m = 2 \) .) Therefore,\n\n\[ 0 = {\int }_{C}f\left( z\right) {dz} - {\int }_{{C}_{1}}f\left( z\right) {dz} - \cdots - {\int }_{{C}_{m}}f\left( z\right) {dz} \]\n\nIn this equation, divide by \( {2\pi i} \) and note that the negative terms are the residues of \( f \) at \( {\zeta }_{1},\ldots ,{\zeta }_{m} \) .
Example 5. Let us compute \( {\int }_{C}\frac{dz}{{z}^{2} + 1} \), where \( C \) is the circle described by \( \left| {z - i}\right| = 1 \).
By the preceding theorem, the integral is \( {2\pi i} \) times the sum of the residues inside \( C \) . We have\n\n\[ f\left( z\right) = \frac{1}{{z}^{2} + 1} = \frac{1}{\left( {z + i}\right) \left( {z - i}\right) } = \frac{i/2}{z + i} - \frac{i/2}{z - i} \]\n\nThe residue at \( i \) is therefore \( - i/2 \), and the value of the integral is \( \pi \) .
Theorem 4. If \( f \) is a proper rational function and if the curve \( C \) encloses all the poles of \( f \), then \( {\int }_{C}f\left( z\right) {dz} = 0 \) .
Proof. Write \( f = p/q \), where \( p \) and \( q \) are polynomials. Since \( f \) is proper, the degree of \( p \) is less than that of \( q \) . Hence the point at \( \infty \) is not a singularity of \( f \) . Now, \( C \) is the boundary of one region containing the poles, and it is also the boundary of the complementary region in which \( f \) is analytic. Hence \( {\int }_{C}f\left( z\right) {dz} = 0. \)
Yes
Theorem 5. Let \( f \) be analytic in the closed upper half-plane with the exception of a finite number of poles, none of which are on the real axis. Define\n\n\[ \n{M}_{r} \equiv \sup \{ \left| {{zf}\left( z\right) }\right| : \left| z\right| = r,\mathcal{I}\left( z\right) \geq 0\}\n\]\n\nIf \( {M}_{r} \) converges to 0 as \( r \rightarrow \infty \), then \( \frac{1}{2\pi i}{\int }_{-\infty }^{\infty }f\left( z\right) {dz} \) is the sum of the\n\nresidues at the poles in the upper half-plane.
Proof. Consider the region shown in Figure 6.1b, where \( C \) is the semicircular arc and \( r \) is chosen so large that all the poles of \( f \) lying in the upper half-plane are contained in the semicircular region. On \( C \) we have \( z = r{e}^{i\theta } \) and \( {dz} = {ir}{e}^{i\theta }{d\theta } \) . Hence\n\n\[ \n\left| {{\int }_{C}f\left( z\right) {dz}}\right| \leq {\int }_{0}^{\pi }\left| {f\left( {r{e}^{i\theta }}\right) \cdot r}\right| {d\theta } \leq \pi {M}_{r} \rightarrow 0\n\]\n\nBy Theorem 3,\n\n\[ \n{\int }_{-r}^{r}f\left( z\right) {dz} + {\int }_{C}f\left( z\right) {dz} = {2\pi i} \times \text{ (sum of residues) }\n\]\n\nBy taking the limit as \( r \rightarrow \infty \), we obtain the desired result.
Yes
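As a numerical sanity check of Theorem 5 (an illustration added here, with the test function chosen by us), take \( f(z) = 1/(1+z^2) \). Then \( M_r \sim 1/r \rightarrow 0 \), the only pole in the upper half-plane is \( i \) with residue \( 1/(2i) \), and the theorem predicts \( \int_{-\infty}^{\infty} f(x)\,dx = 2\pi i \cdot \frac{1}{2i} = \pi \).

```python
import numpy as np

# Trapezoidal approximation of the real-line integral of 1/(1+x^2);
# the tail beyond |x| = 1000 contributes only about 0.002.
x, dx = np.linspace(-1000.0, 1000.0, 2_000_001, retstep=True)
vals = 1.0 / (1.0 + x**2)
integral = dx * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

predicted = (2j * np.pi * (1 / 2j)).real   # 2*pi*i times the residue at i = pi
assert abs(integral - predicted) < 0.01
```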
The simplest case of the heat equation is\n\n(1)\n\n\[ \n{u}_{xx} = {u}_{t} \n\]\nin which the subscripts denote partial derivatives. The distribution of heat in an infinite bar would obey this equation for \( -\infty < x < \infty \) and \( t \geq 0 \) . A fully defined practical problem would consist of the differential equation (1) and some auxiliary conditions. To illustrate, we consider (1) with the initial condition\n\n(2)\n\n\[ \nu\left( {x,0}\right) = f\left( x\right) \; - \infty < x < \infty \]\n\nThe function \( f \) gives the initial temperature distribution in the rod.
We define \( \widehat{u}\left( {y, t}\right) \) to be the Fourier transform of \( u \) in the space variable. Thus\n\n\[ \widehat{u}\left( {y, t}\right) = {\int }_{-\infty }^{\infty }u\left( {x, t}\right) {e}^{-{2\pi ixy}}{dx} \]\n\nTaking the Fourier transform in Equations (1) and (2) with respect to the space variable, we obtain\n\n(3)\n\n\[ \left\{ \begin{matrix} - 4{\pi }^{2}{y}^{2}\widehat{u}\left( {y, t}\right) = {\widehat{u}}_{t}\left( {y, t}\right) \\ \widehat{u}\left( {y,0}\right) = \widehat{f}\left( y\right) \end{matrix}\right. \]\n\nHere, again, we use the principle of Theorem 1 in Section 6.2, page 296: \( \widehat{P\left( D\right) u} = \) \( {P}^{ + }\widehat{u} \), where \( {P}^{ + }\left( x\right) = P\left( {2\pi ix}\right) \).\n\nEquation (3) defines an initial-value problem involving a first-order linear ordinary differential equation for the function \( \widehat{u}\left( {y, \cdot }\right) \) . (The variable \( y \) can be regarded simply as a parameter.) We note that \( {\left( \widehat{u}\right) }_{t} = \widehat{\left( {u}_{t}\right) } \). The phenomenon just observed is typical: Often, a Fourier transform will lead us from a partial differential equation to an ordinary differential equation. The solution of (3) is\n\n(4)\n\n\[ \widehat{u}\left( {y, t}\right) = \widehat{f}\left( y\right) {e}^{-4{\pi }^{2}{y}^{2}t} \]\n\nNow let us think of \( t \) as a parameter, and ignore it. Write Equation (4) as \( \widehat{u}\left( {y, t}\right) = \widehat{f}\left( y\right) \widehat{G}\left( {y, t}\right) \), where \( \widehat{G}\left( {y, t}\right) = {e}^{-4{\pi }^{2}{y}^{2}t} \). Using the principle that \( \widehat{\phi * \psi } = \widehat{\phi }\,\widehat{\psi } \) (Theorem 4 in Section 6.1, page 291), we have\n\n\[ u\left( {\cdot, t}\right) = f\left( \cdot \right) * G\left( {\cdot, t}\right) \]\n\nwhere \( G\left( {\cdot, t}\right) \) is the inverse transform of \( y \mapsto {e}^{-4{\pi }^{2}{y}^{2}t} \). 
This inverse is \( G\left( {x, t}\right) = {\left( 4\pi t\right) }^{-1/2}{e}^{-{x}^{2}/\left( {4t}\right) } \), by Problem 8 of Section 6.3, page 304. Consequently,\n\n(6)\n\n\[ u\left( {x, t}\right) = {\left( 4\pi t\right) }^{-1/2}{\int }_{-\infty }^{\infty }f\left( {x - z}\right) {e}^{-{z}^{2}/\left( {4t}\right) }{dz} \]\n
Yes
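The solution formula can be tested numerically (an added sketch; the Gaussian initial data is our choice, not the text's). For \( f(x) = e^{-x^2} \) the convolution of Gaussians is again Gaussian, which gives the closed form \( u(x,t) = (1+4t)^{-1/2} e^{-x^2/(1+4t)} \).

```python
import numpy as np

# Direct quadrature of the convolution formula versus the closed form
# for Gaussian initial data f(x) = exp(-x^2).
def u_numeric(x, t):
    z, dz = np.linspace(-20.0, 20.0, 200_001, retstep=True)
    integrand = np.exp(-(x - z)**2) * np.exp(-z**2 / (4.0 * t))
    return (4.0 * np.pi * t) ** -0.5 * integrand.sum() * dz

def u_exact(x, t):
    return (1.0 + 4.0 * t) ** -0.5 * np.exp(-x**2 / (1.0 + 4.0 * t))

for xx in (0.0, 0.5, 2.0):
    for tt in (0.1, 1.0):
        assert abs(u_numeric(xx, tt) - u_exact(xx, tt)) < 1e-10
```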
(7)\n\n\[ \left\{ \begin{array}{ll} {u}_{xx} = {u}_{t} & x \geq 0, t \geq 0 \\ u\left( {x,0}\right) = f\left( x\right), u\left( {0, t}\right) = 0 & x \geq 0, t \geq 0 \end{array}\right. \]
The easiest way to ensure that this will be zero (and thus satisfy the boundary condition in our problem) is to extend \( f \) to be an odd function. Then the integrand in Equation (8) is odd, and \( u\left( {0, t}\right) = 0 \) automatically. So we define \( f\left( {-x}\right) = - f\left( x\right) \) for \( x > 0 \), and then Equation (6) gives the solution for Equation (7).
No
Example 3. Again, we consider the heat equation with boundary conditions:\n\n(9)\n\n\[ \left\{ \begin{array}{ll} {u}_{xx} = {u}_{t} & x \geq 0, t \geq 0 \\ u\left( {x,0}\right) = f\left( x\right) & u\left( {0, t}\right) = g\left( t\right) \end{array}\right. \]
Because the differential equation is linear and homogeneous, the method of superposition can be applied. We solve two related problems, viz.,\n\n(10)\n\n\[ {v}_{xx} = {v}_{t}\;v\left( {x,0}\right) = f\left( x\right) \;v\left( {0, t}\right) = 0 \]\n\n(11)\n\n\[ {w}_{xx} = {w}_{t}\;w\left( {x,0}\right) = 0\;w\left( {0, t}\right) = g\left( t\right) \]\n\nThe solution of (9) will then be \( u = v + w \) . The problem in (10) is solved in Example 2. In (11), we take the sine transform in the space variable, using \( {w}^{S} \) to denote the transformed function. With the aid of Problem 1, we have\n\n\[ {2\pi yg}\left( t\right) - 4{\pi }^{2}{y}^{2}{w}^{S}\left( {y, t}\right) = {w}_{t}^{S}\left( {y, t}\right) \;{w}^{S}\left( {y,0}\right) = 0 \]\n\nAgain this is an ordinary differential equation, linear and of the first order. Its solution is easily found to be\n\n\[ {w}^{S}\left( {y, t}\right) = {2\pi y}{e}^{-4{\pi }^{2}{y}^{2}t}{\int }_{0}^{t}{e}^{4{\pi }^{2}{y}^{2}\sigma }g\left( \sigma \right) {d\sigma } \]\n\nIf \( w \) is made into an odd function by setting \( w\left( {x, t}\right) = - w\left( {-x, t}\right) \) when \( x < 0 \) , then we know from Problem 9 in Section 6.3 (page 304) that\n\n\[ \widehat{w}\left( {y, t}\right) = - {2i}{w}^{S}\left( {y, t}\right) \]\n\nTherefore by the Inversion Theorem (Section 6.3, page 303)\n\n\[ w\left( {x, t}\right) = {\int }_{-\infty }^{\infty }\widehat{w}\left( {y, t}\right) {e}^{2\pi ixy}{dy} \]\n\nor\n\[ w\left( {x, t}\right) = - {4\pi i}{\int }_{-\infty }^{\infty }{e}^{2\pi ixy}y{e}^{-4{\pi }^{2}{y}^{2}t}{\int }_{0}^{t}{e}^{4{\pi }^{2}{y}^{2}\sigma }g\left( \sigma \right) {d\sigma dy} \]\n\nTo simplify this, let \( z = {2\pi y} \) . Then\n\n\[ w\left( {x, t}\right) = \frac{-i}{\pi }{\int }_{-\infty }^{\infty }z{e}^{ixz}{\int }_{0}^{t}{e}^{-{z}^{2}\left( {t - \sigma }\right) }g\left( \sigma \right) {d\sigma dz} \]
Yes
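The solution of the transformed ordinary differential equation can be verified numerically (an added illustration; the choices \( g(\sigma) = e^{-\sigma} \) and \( y = 0.3 \) are assumptions made here). For this \( g \), the quadrature has the closed form \( 2\pi y (e^{-t} - e^{-at})/(a-1) \) with \( a = 4\pi^2 y^2 \), and that closed form satisfies \( w_t^S = 2\pi y\, g(t) - 4\pi^2 y^2 w^S \) with \( w^S(y,0) = 0 \).

```python
import math

y = 0.3
a = 4.0 * math.pi**2 * y**2          # 4 pi^2 y^2

def w_closed(t):
    # closed form of the integral formula when g(sigma) = e^{-sigma}
    return 2.0 * math.pi * y * (math.exp(-t) - math.exp(-a * t)) / (a - 1.0)

def w_quad(t, n=20000):
    # trapezoidal evaluation of the integral formula given in the text
    ds = t / n
    vals = [math.exp(a * k * ds) * math.exp(-k * ds) for k in range(n + 1)]
    integral = ds * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return 2.0 * math.pi * y * math.exp(-a * t) * integral

for t in (0.5, 1.0, 2.0):
    assert abs(w_quad(t) - w_closed(t)) < 1e-6
    # the closed form satisfies the transformed ODE exactly
    lhs = 2.0 * math.pi * y * (-math.exp(-t) + a * math.exp(-a * t)) / (a - 1.0)
    rhs = 2.0 * math.pi * y * math.exp(-t) - a * w_closed(t)
    assert abs(lhs - rhs) < 1e-12
```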
The Helmholtz Equation is\n\n\[ \n{\Delta u} - {gu} = f \n\]\n\nin which \( \Delta \) is the Laplacian, \( \mathop{\sum }\limits_{{k = 1}}^{n}{\partial }^{2}/\partial {x}_{k}^{2} \) . The functions \( f \) and \( g \) are prescribed on \( {\mathbb{R}}^{n} \), and \( u \) is the unknown function of \( n \) variables. We shall look at the special case when \( g \) is the constant 1 . To illustrate some variety in approaching such problems, let us simply try the hypothesis that the problem can be solved with an appropriate convolution: \( u = f * h \) . Substitution of this form for \( u \) in the differential equation leads to\n\n\[ \n\Delta \left( {f * h}\right) - f * h = f \n\]
Carrying out the differentiation under the integral that defines the convolution, we obtain\n\n\[ \nf * {\Delta h} - f * h = f \n\]\n\nIs there a way to cancel the three occurrences of \( f \) in this equation? After all, \( {L}^{1} \) is a Banach algebra, with multiplication defined by convolution. But there are pitfalls here, since there is no unit element, and therefore there are no inverses. However, the Fourier transform converts the convolutions into ordinary products, according to Theorem 4 in Section 6.1 (page 291):\n\n\[ \n\left( \widehat{f}\right) {\left( \Delta h\right) }^{ \land } - \widehat{f}\widehat{h} = \widehat{f} \n\]\n\nFrom this equation cancel the factor \( \widehat{f} \), and then express \( {\left( \Delta h\right) }^{ \land } \) as in Example 2 in Section 6.2 (page 297):\n\n\[ \n- 4{\pi }^{2}{\left| x\right| }^{2}\widehat{h}\left( x\right) - \widehat{h}\left( x\right) = 1 \n\]\n\n\[ \n\widehat{h}\left( x\right) = \frac{-1}{1 + 4{\pi }^{2}{\left| x\right| }^{2}} \n\]\n\nThe formula for \( h \) itself is obtained by use of the inverse Fourier transform, which leads to\n\n\[ \nh\left( x\right) = {\pi }^{n/2}{\int }_{0}^{\infty }{t}^{-n/2}\exp \left( {-t - {\left| \pi x\right| }^{2}/t}\right) {dt} \n\]\n\nThe calculation leading to this is given in \( \left\lbrack \mathrm{{Ev}}\right\rbrack \), page 187. In that reference, a different definition of the Fourier transform is used, and Problem 6.1.24, page 293, can be helpful in transferring results among different systems.
Yes
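A one-dimensional sanity check (added here; the choices \( n = 1 \), the test function \( f(x) = e^{-x^2} \), and this section's transform convention are assumptions). Under that convention the inverse transform of \( -1/(1 + 4\pi^2 y^2) \) is \( h(x) = -\frac{1}{2} e^{-|x|} \), so \( u = f * h \) should solve \( u'' - u = f \).

```python
import math

def h(z):
    # 1-D kernel: inverse transform of -1/(1 + 4 pi^2 y^2)
    return -0.5 * math.exp(-abs(z))

def f(x):
    return math.exp(-x * x)

def u(x, zmax=15.0, n=30000):
    # trapezoidal evaluation of the convolution (f * h)(x)
    dz = 2.0 * zmax / n
    total = 0.0
    for k in range(n + 1):
        z = -zmax + k * dz
        w = 0.5 if k in (0, n) else 1.0
        total += w * f(x - z) * h(z)
    return total * dz

d = 1e-3
for x in (0.0, 0.7, 1.5):
    upp = (u(x + d) - 2.0 * u(x) + u(x - d)) / d**2   # second difference ~ u''(x)
    assert abs(upp - u(x) - f(x)) < 1e-3              # u'' - u = f holds numerically
```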
Theorem 1. Every distribution having compact support is tempered.
Proof. Let \( T \) be a distribution with compact support \( K \) . Select \( \psi \in \mathfrak{D} \) so that \( \psi \left( x\right) = 1 \) for all \( x \) in an open neighborhood of \( K \) . We extend \( T \) by defining \( \bar{T}\left( \phi \right) = T\left( {\phi \psi }\right) \) when \( \phi \in \mathcal{S} \) . Is \( \bar{T} \) an extension of \( T \) ? In other words, do we have \( \bar{T}\left( \phi \right) = T\left( \phi \right) \) for \( \phi \in \mathfrak{D} \) ? An equivalent question is whether \( T\left( {{\psi \phi } - \phi }\right) = 0 \) for \( \phi \in \mathfrak{D} \) . We use the definition of the support of \( T \) to answer this. We must verify only that the support of \( \left( {1 - \psi }\right) \phi \) is contained in \( {\mathbb{R}}^{n} \smallsetminus K \) . This is true because \( 1 - \psi \) is zero on a neighborhood of \( K \) . The linearity of \( \bar{T} \) is trivial. For the continuity, suppose that \( {\phi }_{j} \rightarrow 0 \) in \( \mathcal{S} \) . Then for any \( \alpha ,{D}^{\alpha }{\phi }_{j} \) tends uniformly to 0, and \( {D}^{\alpha }\left( {{\phi }_{j}\psi }\right) \) tends uniformly to 0 by Leibniz’s Rule. Since there is one compact set containing the supports of all \( \psi {\phi }_{j} \), we can conclude that \( \psi {\phi }_{j} \rightarrow 0 \) in \( \mathfrak{D} \) . By the continuity of \( T, T\left( {\psi {\phi }_{j}}\right) \rightarrow 0 \) and \( \bar{T}\left( {\phi }_{j}\right) \rightarrow 0 \) .
Yes
Theorem 2. Let \( f \) be a measurable function such that \( f/P \in {L}^{1}\left( {\mathbb{R}}^{n}\right) \) for some polynomial \( P \) . Then \( \widetilde{f} \) is a tempered distribution.
Proof. For \( \phi \in \mathcal{S} \), we have\n\n\[ \widetilde{f}\left( \phi \right) = {\int }_{{\mathbb{R}}^{n}}f\left( x\right) \phi \left( x\right) {dx} \]\n\nSuppose that \( P \) is a polynomial such that \( f/P \in {L}^{1} \) . Write\n\n\[ \widetilde{f}\left( \phi \right) = \int \left( {f/P}\right) \left( {P \cdot \phi }\right) \]\n\nSince \( \phi \in \mathcal{S},{P\phi } \) is bounded, and the integral exists. If \( {\phi }_{j} \rightarrow 0 \) in \( \mathcal{S} \), then \( P\left( x\right) {\phi }_{j}\left( x\right) \rightarrow 0 \) uniformly on \( {\mathbb{R}}^{n} \), and consequently,\n\n\[ \left| {\widetilde{f}\left( {\phi }_{j}\right) }\right| \leq \mathop{\sup }\limits_{x}\left| {P\left( x\right) {\phi }_{j}\left( x\right) }\right| \int \left| {f/P}\right| \rightarrow 0 \]
Yes
Theorem 3. If \( T \) is a tempered distribution, then so is \( \widehat{T} \) . Moreover, the map \( T \mapsto \widehat{T} \) is linear, injective, surjective, and continuous from \( {\mathcal{S}}^{\prime } \) to \( {\mathcal{S}}^{\prime } \) .
Proof. The Fourier operator \( \mathcal{F} \) is a continuous linear bijection from \( \mathcal{S} \) onto \( \mathcal{S} \) by Theorem 3 in Section 6.3, page 303. Also, \( {\mathcal{F}}^{-1} = {\mathcal{F}}^{3} \) . Since \( \widehat{T} = T \circ \mathcal{F} \), we see that \( \widehat{T} \) is the composition of two continuous linear maps, and is therefore itself continuous and linear. Hence \( \widehat{T} \) is a member of \( {\mathcal{S}}^{\prime } \) .\n\nFor the linearity of the map in question we write\n\n\[ \n{\left( aT + bU\right) }^{ \land } = \left( {{aT} + {bU}}\right) \circ \mathcal{F} = {aT} \circ \mathcal{F} + {bU} \circ \mathcal{F} = a\widehat{T} + b\widehat{U} \n\]\n\nFor the injectivity, suppose \( \widehat{T} = 0 \) . Then \( T \circ \mathcal{F} = 0 \) and \( T\left( \phi \right) = 0 \) for all \( \phi \) in the range of \( \mathcal{F} \) . Since \( \mathcal{F} \) is surjective from \( \mathcal{S} \) to \( \mathcal{S} \), the range of \( \mathcal{F} \) is \( \mathcal{S} \) . Hence \( T\left( \phi \right) = 0 \) for all \( \phi \) in \( \mathcal{S} \) ; i.e., \( T = 0 \) .\n\nFor the surjectivity, let \( T \) be any element of \( {\mathcal{S}}^{\prime } \) . Then \( T = T \circ {\mathcal{F}}^{4} = \) \( \left( {T \circ {\mathcal{F}}^{3}}\right) \circ \mathcal{F} \) . Note that \( T \circ {\mathcal{F}}^{3} \) is in \( {\mathcal{S}}^{\prime } \) by the first part of this proof.\n\nFor the continuity, let \( {T}_{j} \in {\mathcal{S}}^{\prime } \) and \( {T}_{j} \rightarrow 0 \) . This means that \( {T}_{j}\left( \phi \right) \rightarrow 0 \) for all \( \phi \) in \( \mathcal{S} \) . Consequently, \( \widehat{{T}_{j}}\left( \phi \right) = {T}_{j}\left( \widehat{\phi }\right) \rightarrow 0 \) and \( {\widehat{T}}_{j} \rightarrow 0 \) .
Yes
Theorem 4. If \( T \) is a tempered distribution and \( P \) is a polynomial, then \[ \widehat{PT} = P\left( \frac{-\partial }{2\pi i}\right) \widehat{T}\;\text{ and }\;P \cdot \widehat{T} = {\left\lbrack P\left( \frac{\partial }{2\pi i}\right) T\right\rbrack }^{ \land } \]
Proof. For \( \phi \) in \( \mathcal{S} \) we have \[ \widehat{PT}\left( \phi \right) = \left( {PT}\right) \left( \widehat{\phi }\right) = T\left( {P\widehat{\phi }}\right) = T\left\lbrack {\left( P\left( \frac{D}{2\pi i}\right) \phi \right) }^{ \land }\right\rbrack = \widehat{T}\left( {P\left( \frac{D}{2\pi i}\right) \phi }\right) = \left\lbrack {P\left( \frac{-\partial }{2\pi i}\right) \widehat{T}}\right\rbrack \left( \phi \right) \] We used Theorem 1 in Section 6.2, page 296, in this calculation. The other equation is left as Problem 6.
No
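For functions, the first identity can be checked numerically (an added illustration; the choices \( P(x) = x \) and \( f(x) = e^{-\pi x^2} \) are ours). Since \( \widehat{f}(y) = e^{-\pi y^2} \), the identity predicts \( \widehat{xf}(y) = \frac{-1}{2\pi i}\frac{d}{dy} e^{-\pi y^2} = -iy\,e^{-\pi y^2} \).

```python
import cmath
import math

# Trapezoidal approximation of the Fourier transform of x * exp(-pi x^2).
def ft_xf(y, xmax=8.0, n=16000):
    dx = 2.0 * xmax / n
    total = 0j
    for k in range(n + 1):
        x = -xmax + k * dx
        w = 0.5 if k in (0, n) else 1.0
        total += w * x * math.exp(-math.pi * x * x) * cmath.exp(-2j * math.pi * x * y)
    return total * dx

for y in (0.0, 0.5, 1.3):
    predicted = -1j * y * math.exp(-math.pi * y * y)
    assert abs(ft_xf(y) - predicted) < 1e-8
```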
Lemma 1. If \( f \in {L}^{p}\left( {\mathbb{R}}^{n}\right) \), and if \( {\psi }_{j} \) is as described above, then \( f * {\psi }_{j} \rightarrow f \) in \( {L}^{p}\left( {\mathbb{R}}^{n}\right) \), as \( j \rightarrow \infty \) .
Proof. The case \( p = 1 \) is contained in the proof of Theorem 1 in Section 6.4, page 306. Let \( {B}_{j} \) be the support of \( {\psi }_{j} \) (i.e., the ball at 0 of radius \( 1/j \) ). By familiar calculations and Hölder's inequality (Section 8.7, page 409) we have\n\n\[ \left| {\left( {f * {\psi }_{j}}\right) \left( x\right) - f\left( x\right) }\right| = \left| {{\int }_{{B}_{j}}\left\lbrack {f\left( {x - y}\right) - f\left( x\right) }\right\rbrack {\psi }_{j}\left( y\right) {dy}}\right| \]\n\n\[ \leq {\left\{ {\int }_{{B}_{j}}{\left| f\left( x - y\right) - f\left( x\right) \right| }^{p}dy\right\} }^{1/p}{\begin{Vmatrix}{\psi }_{j}\end{Vmatrix}}_{q} \]\n\n(Here \( q \) is the index conjugate to \( p : {pq} = p + q \) .) Hence,\n\n\[ {\left| \left( f * {\psi }_{j}\right) \left( x\right) - f\left( x\right) \right| }^{p} \leq {\begin{Vmatrix}{\psi }_{j}\end{Vmatrix}}_{q}^{p}{\int }_{{B}_{j}}{\left| f\left( x - y\right) - f\left( x\right) \right| }^{p}{dy} \]\n\nThus, using the Fubini theorem (page 426), we have\n\n\[ {\int }_{{\mathbb{R}}^{n}}{\left| \left( f * {\psi }_{j}\right) \left( x\right) - f\left( x\right) \right| }^{p}{dx} \leq {\begin{Vmatrix}{\psi }_{j}\end{Vmatrix}}_{q}^{p}{\int }_{{B}_{j}}{\int }_{{\mathbb{R}}^{n}}{\left| f\left( x - y\right) - f\left( x\right) \right| }^{p}{dxdy} \]\n\nWe can write this in the form\n\n\[ {\begin{Vmatrix}f * {\psi }_{j} - f\end{Vmatrix}}_{p}^{p} \leq {\begin{Vmatrix}{\psi }_{j}\end{Vmatrix}}_{q}^{p}{\int }_{{B}_{j}}{\begin{Vmatrix}{E}_{y}f - f\end{Vmatrix}}_{p}^{p}{dy} \]\n\nwhere \( {E}_{y} \) denotes the translation operator defined by \( \left( {{E}_{y}\phi }\right) \left( x\right) = \phi \left( {x - y}\right) \) . Recall, from Lemma 3 in Section 6.4 (page 306), that for a fixed element \( f \) in \( {L}^{p}\left( {\mathbb{R}}^{n}\right) \), the map \( y \mapsto {E}_{y}f \) is continuous from \( {\mathbb{R}}^{n} \) to \( {L}^{p}\left( {\mathbb{R}}^{n}\right) \) . 
Hence there corresponds to any positive \( \varepsilon \) a positive \( \delta \) such that\n\n\[ \left| y\right| \leq \delta \; \Rightarrow \;{\begin{Vmatrix}{E}_{y}f - f\end{Vmatrix}}_{p} < \varepsilon \]\n\nThus if \( 1/j \leq \delta \), we shall have, from the above inequalities,\n\n\[ {\begin{Vmatrix}f * {\psi }_{j} - f\end{Vmatrix}}_{p}^{p} \leq {\varepsilon }^{p}\mu \left( {B}_{j}\right) {\begin{Vmatrix}{\psi }_{j}\end{Vmatrix}}_{q}^{p} \]\n\nwhere \( \mu \left( {B}_{j}\right) \) is the Lebesgue measure of the ball of radius \( 1/j \) . By enclosing that ball in a
Yes
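A discretized illustration of the lemma (added here; the kernel and test function are our choices, and a grid computation is only a sketch, not a proof): with the standard mollifier \( \psi(x) = e^{-1/(1-x^2)} \) on \( (-1,1) \), normalized to have integral 1, and \( \psi_j(x) = j\,\psi(jx) \), the distance \( \| f * \psi_j - f \|_1 \) should shrink as \( j \) grows. We take \( f \) to be the indicator of \( [0,1] \).

```python
import numpy as np

dx = 1e-3
x = np.arange(-3.0, 4.0, dx)
f = ((x >= 0.0) & (x <= 1.0)).astype(float)   # indicator of [0,1]

def psi_j(j):
    # discretized psi_j(x) = j * psi(j x), supported on (-1/j, 1/j)
    t = np.arange(-1.0 / j + dx / 2.0, 1.0 / j, dx)
    u = j * t
    v = np.exp(-1.0 / (1.0 - u * u))           # |u| < 1 on this grid
    return v / (v.sum() * dx)                  # normalize: discrete integral = 1

errors = []
for j in (2, 4, 8, 16):
    conv = np.convolve(f, psi_j(j), mode="same") * dx
    errors.append(float(np.abs(conv - f).sum() * dx))   # discrete L^1 distance

assert errors[0] > errors[1] > errors[2] > errors[3]    # error decreases with j
assert errors[3] < errors[0] / 4.0
```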
Theorem 3. The set of functions in \( {W}^{k, p}\left( \Omega \right) \) that are of class \( {C}^{\infty } \) is dense in \( {W}^{k, p}\left( \Omega \right) \) .
Proof. Let \( {B}_{1},{B}_{2},\ldots \) be a sequence of open balls such that \( \overline{{B}_{i}} \subset \Omega \) for all \( i \) and \( \bigcup {B}_{i} = \Omega \) . The center and radius of \( {B}_{i} \) are indicated by writing \( {B}_{i} = B\left( {{x}_{i},{r}_{i}}\right) \) . Appealing to Theorem 1 in Section 5.7 (page 282), we obtain a partition of unity subordinate to the collection of open balls. Thus, we have test functions \( {\phi }_{i} \) satisfying \( 0 \leq {\phi }_{i} \leq 1 \) . Further, \( \operatorname{supp}\left( {\phi }_{i}\right) \subset {B}_{i} \), and for any compact set \( K \) in \( \Omega \), there exists an integer \( m \) such that \( \mathop{\sum }\limits_{1}^{m}{\phi }_{i} = 1 \) on a neighborhood of \( K \) . Now suppose that \( f \in {W}^{k, p}\left( \Omega \right) \) . Let \( 0 < \epsilon < 1/2 \) . Eventually, we shall find a \( {C}^{\infty } \) -function \( g \) in \( {W}^{k, p}\left( \Omega \right) \) such that \( \parallel f - g\parallel < {2\epsilon } \) .\n\nSelect a sequence \( {\delta }_{i} \downarrow 0 \) such that \( \overline{B\left( {{x}_{i},\left( {1 + {\delta }_{i}}\right) {r}_{i}}\right) } \subset \Omega \) for each \( i \) . Define \( {f}_{i} = {\phi }_{i}f \) . Let \( {g}_{i} \) be a mollification of \( f \) with radius \( {\delta }_{i}{r}_{i} \) . At the same time, we decrease \( {\delta }_{i} \) if necessary to obtain the inequality \( {\begin{Vmatrix}{g}_{i} - {f}_{i}\end{Vmatrix}}_{{W}^{k, p}\left( \Omega \right) } < \epsilon /{2}^{i} \) . (This step requires the preceding lemma.) Define \( g = \sum {g}_{i} \) . If \( \mathcal{O} \) is a bounded open set in \( \Omega \), then \( \overline{\mathcal{O}} \) is compact, and for some integer \( m,\mathop{\sum }\limits_{{i = 1}}^{m}{\phi }_{i} = 1 \) on a neighborhood of \( \overline{\mathcal{O}} \) . 
On \( \mathcal{O} \), we have\n\n\[ \mathop{\sum }\limits_{{i = 1}}^{m}{f}_{i} = \mathop{\sum }\limits_{{i = 1}}^{m}{\phi }_{i}f = f\mathop{\sum }\limits_{{i = 1}}^{m}{\phi }_{i} = f \]\n\nThen we can perform the following calculation, in which the norm in the space \( {W}^{k, p}\left( \mathcal{O}\right) \) is employed (until the last step, where the domain \( \Omega \) enters):\n\n\[ \parallel f - g\parallel = \begin{Vmatrix}{\mathop{\sum }\limits_{{i = 1}}^{m}{f}_{i} - \mathop{\sum }\limits_{{i = 1}}^{\infty }{g}_{i}}\end{Vmatrix} = \begin{Vmatrix}{\mathop{\sum }\limits_{{i = 1}}^{\infty }\left( {{f}_{i} - {g}_{i}}\right) }\end{Vmatrix} \]\n\n\[ \leq \mathop{\sum }\limits_{{i = 1}}^{\infty }\begin{Vmatrix}{{f}_{i} - {g}_{i}}\end{Vmatrix} \leq \mathop{\sum }\limits_{{i = 1}}^{\infty }{\begin{Vmatrix}{f}_{i} - {g}_{i}\end{Vmatrix}}_{{W}^{k, p}\left( \Omega \right) } \]\n\n\[ \leq \epsilon /2 + \epsilon /4 + \cdots = \epsilon \]
Yes
Every continuous function on the interval \( \left\lbrack {a, b}\right\rbrack \) is integrable. Hence, this simple containment relation is valid: \( C\left\lbrack {a, b}\right\rbrack \subset {L}^{1}\left\lbrack {a, b}\right\rbrack \) . Is this an embedding? We seek a constant \( c \) such that\n\n\[ \parallel f{\parallel }_{1} \leq c\parallel f{\parallel }_{\infty }\;\left( {f \in C\left\lbrack {a, b}\right\rbrack }\right) \]
The constant \( c = b - a \) obviously serves:\n\n\[ \parallel f{\parallel }_{1} = {\int }_{a}^{b}\left| {f\left( x\right) }\right| {dx} \leq {\int }_{a}^{b}\parallel f{\parallel }_{\infty }\,{dx} = \left( {b - a}\right) \parallel f{\parallel }_{\infty } \]
Yes
If \( 1 \leq s < r < \infty \) and if the domain \( \Omega \) has finite Lebesgue measure, then \( {L}^{r}\left( \Omega \right) \hookrightarrow {L}^{s}\left( \Omega \right) \).
To prove this, start with an \( f \) in \( {L}^{r}\left( \Omega \right) \) and write \( r = {ps} \). We may assume that \( f \geq 0 \). Then \( {f}^{s} \) is in \( {L}^{p}\left( \Omega \right) \) because \( \int {f}^{sp} = \int {f}^{r} \). Use the Hölder Inequality (page 409) with conjugate indices \( p \) and \( q = p/\left( {p - 1}\right) \):\n\n\[ \int {f}^{s} \cdot 1 \leq {\begin{Vmatrix}{f}^{s}\end{Vmatrix}}_{p} \cdot \parallel 1{\parallel }_{q} \]\n\nTaking the \( 1/s \) power in this inequality gives us\n\n\[ \parallel f{\parallel }_{s} \leq {\begin{Vmatrix}{f}^{s}\end{Vmatrix}}_{p}^{1/s}\parallel 1{\parallel }_{q}^{1/s} = \parallel f{\parallel }_{r}\mu {\left( \Omega \right) }^{\left( {1/s}\right) - \left( {1/r}\right) } \]
Yes
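A numerical illustration of the embedding inequality (added here; the test function and exponents are our choices): on \( \Omega = [0,2] \) with \( r = 4 \) and \( s = 2 \), the inequality reads \( \| f \|_2 \leq \| f \|_4 \, \mu(\Omega)^{1/2 - 1/4} \).

```python
import math

# Trapezoidal computation of an L^p norm on [a, b].
def lp_norm(g, a, b, p, n=20000):
    dx = (b - a) / n
    total = 0.0
    for k in range(n + 1):
        w = 0.5 if k in (0, n) else 1.0
        total += w * abs(g(a + k * dx)) ** p
    return (total * dx) ** (1.0 / p)

f = lambda x: math.sin(3.0 * x) + 2.0
ns = lp_norm(f, 0.0, 2.0, 2)          # ||f||_s with s = 2
nr = lp_norm(f, 0.0, 2.0, 4)          # ||f||_r with r = 4
mu = 2.0                              # Lebesgue measure of Omega = [0, 2]
assert ns <= nr * mu ** (0.5 - 0.25) + 1e-9
```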
Theorem 5. \( \;{W}^{1,2}\left( \mathbb{R}\right) \hookrightarrow {W}^{0,\infty }\left( \mathbb{R}\right) \) .
Proof. (In outline. For details, see [LL], Chapter 8.) Let \( f \) be an element of \( {W}^{1,2}\left( \mathbb{R}\right) \) . Since \( \mathcal{D}\left( \mathbb{R}\right) \) is dense in \( {W}^{1,2}\left( \mathbb{R}\right) \), there exists a sequence \( \left\lbrack {f}_{i}\right\rbrack \) in \( \mathcal{D}\left( \mathbb{R}\right) \) converging to \( f \) in the norm of \( {W}^{1,2} \) . Each \( {f}_{i} \) has compact support and therefore satisfies \( {f}_{i}\left( {\pm \infty }\right) = 0 \) . Since \( {f}_{i}{f}_{i}^{\prime } = {\left( {f}_{i}^{2}\right) }^{\prime }/2 \), we have\n\n\[ \n{f}_{i}^{2}\left( x\right) = \frac{1}{2}\left\lbrack {{f}_{i}^{2}\left( x\right) - {f}_{i}^{2}\left( {-\infty }\right) }\right\rbrack - \frac{1}{2}\left\lbrack {{f}_{i}^{2}\left( \infty \right) - {f}_{i}^{2}\left( x\right) }\right\rbrack = {\int }_{-\infty }^{x}{f}_{i}{f}_{i}^{\prime } - {\int }_{x}^{\infty }{f}_{i}{f}_{i}^{\prime } \n\]\n\nBy taking the limit of a suitable subsequence, we obtain the same equation for \( f \), at almost all points \( x \) . Then, with the aid of the Cauchy-Schwarz inequality and the inequality between the geometric and arithmetic means, we have\n\n\[ \n{f}^{2}\left( x\right) \leq {\int }_{-\infty }^{x}\left| {f{f}^{\prime }}\right| + {\int }_{x}^{\infty }\left| {f{f}^{\prime }}\right| = {\int }_{-\infty }^{\infty }\left| {f{f}^{\prime }}\right| \leq \parallel f{\parallel }_{2}{\begin{Vmatrix}{f}^{\prime }\end{Vmatrix}}_{2} \leq \frac{1}{2}\parallel f{\parallel }_{2}^{2} + \frac{1}{2}{\begin{Vmatrix}{f}^{\prime }\end{Vmatrix}}_{2}^{2} \n\]\n\nConsequently,\n\n\[ \n\left| {f\left( x\right) }\right| \leq \frac{1}{\sqrt{2}}\sqrt{\parallel f{\parallel }_{2}^{2} + {\begin{Vmatrix}{f}^{\prime }\end{Vmatrix}}_{2}^{2}} \n\]\n\nThis establishes the embedding inequality:\n\n\[ \n\parallel f{\parallel }_{0,\infty } \leq \frac{1}{\sqrt{2}}\parallel f{\parallel }_{1,2} \n\]
No
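The embedding inequality at the end of the proof can be checked numerically (an added sketch; the test function \( f(x) = e^{-x^2} \) is our choice). Here \( \| f \|_2^2 \) and \( \| f' \|_2^2 \) both equal \( \sqrt{\pi/2} \), so the right-hand side is \( (\pi/2)^{1/4} \approx 1.12 \), while \( \sup |f| = 1 \).

```python
import math

# Trapezoidal computation of the squared L^2 norm over the real line.
def sq_l2(g, xmax=8.0, n=40000):
    dx = 2.0 * xmax / n
    total = 0.0
    for k in range(n + 1):
        xk = -xmax + k * dx
        w = 0.5 if k in (0, n) else 1.0
        total += w * g(xk) ** 2
    return total * dx

f = lambda x: math.exp(-x * x)
df = lambda x: -2.0 * x * math.exp(-x * x)   # f'

bound = math.sqrt((sq_l2(f) + sq_l2(df)) / 2.0)   # (1/sqrt(2)) * sqrt(||f||^2 + ||f'||^2)
sup_f = 1.0                                        # attained at x = 0
assert sup_f <= bound
assert abs(bound - (math.pi / 2.0) ** 0.25) < 1e-9
```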
Theorem 2. If a topological space has the fixed-point property, then the same is true of every space homeomorphic to it.
Proof. Let spaces \( X \) and \( Y \) be homeomorphic. This means that there is a homeomorphism \( h : X \rightarrow Y \) (a continuous map having a continuous inverse). Suppose that \( X \) has the fixed-point property. To prove that \( Y \) has the fixed-point property, let \( f \) be a continuous map of \( Y \) into \( Y \) . Then the map \( {h}^{-1} \circ \) \( f \circ h \) is continuous from \( X \) to \( X \), and thus has a fixed point \( x \) . The equation \( {h}^{-1}\left( {f\left( {h\left( x\right) }\right) }\right) = x \) leads immediately to \( f\left( {h\left( x\right) }\right) = h\left( x\right) \), and \( h\left( x\right) \) is a fixed point of \( f \) .
Yes
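A concrete illustration of this proof (the maps below are hypothetical choices made for the example, not from the text): \( X = [0,1] \) has the fixed-point property. Take the homeomorphism \( h(x) = 2 + 3x \) of \( X \) onto \( Y = [2,5] \) and the continuous map \( f(y) = 3 + \cos(y - 3.5) \), which sends \( Y \) into \( [2,4] \subset Y \). A fixed point \( x \) of \( h^{-1} \circ f \circ h \) then yields the fixed point \( h(x) \) of \( f \), exactly as in the proof.

```python
import math

h = lambda x: 2.0 + 3.0 * x
h_inv = lambda y: (y - 2.0) / 3.0
f = lambda y: 3.0 + math.cos(y - 3.5)     # continuous self-map of Y = [2, 5]

g = lambda x: h_inv(f(h(x)))              # continuous self-map of X = [0, 1]

# locate a fixed point of g by bisection on g(x) - x  (g(0) > 0, g(1) < 1)
lo, hi = 0.0, 1.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if g(mid) > mid:
        lo = mid
    else:
        hi = mid
x = 0.5 * (lo + hi)

assert abs(g(x) - x) < 1e-12              # x is fixed by h^{-1} o f o h
assert abs(f(h(x)) - h(x)) < 1e-11        # hence h(x) is fixed by f
```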
Theorem 3. The Schauder-Tychonoff Fixed-Point Theorem. Every compact convex set in a locally convex linear topological Hausdorff space has the fixed-point property.
Proof. ([Day],[Sma]) Let \( K \) be such a set, and let \( f \) be a continuous map of \( K \) into \( K \) . We denote the family of all convex, symmetric, open neighborhoods of 0 by \( \left\{ {{U}_{\alpha } : \alpha \in A}\right\} \) . The set \( A \) is simply an index set, which we partially order by writing \( \alpha \geq \beta \) when \( {U}_{\alpha } \subset {U}_{\beta } \) . Thus ordered, \( A \) becomes a directed set, suitable as the domain of a net. Since \( K \) is compact, the map \( f \) is uniformly continuous, and there corresponds to any \( \alpha \in A \) an \( {\alpha }^{\prime } \in A \) such that \( {U}_{{\alpha }^{\prime }} \subset {U}_{\alpha } \) and \( f\left( x\right) - f\left( y\right) \in {U}_{\alpha } \) whenever \( x - y \in {U}_{{\alpha }^{\prime }} \) .\n\nFor any \( \alpha \in A \), the preceding lemma provides a continuous map \( {P}_{\alpha } \) such that \( {P}_{\alpha }\left( K\right) \) is a compact, convex, finite-dimensional subset of \( K \) . This map has the further property that \( x - {P}_{\alpha }x \in {U}_{\alpha } \) for each \( x \) in \( K \) . The composition \( {P}_{\alpha } \circ f \) maps \( {P}_{\alpha }\left( K\right) \) into itself. Hence, by the Brouwer Fixed-Point Theorem (Theorem 1 above), \( {P}_{\alpha } \circ f \) has a fixed point \( {z}_{\alpha } \) in \( {P}_{\alpha }\left( K\right) \) . By the compactness of \( K \), the net \( \left\lbrack {{z}_{\alpha } : \alpha \in A}\right\rbrack \) has a cluster point \( z \) in \( K \) . 
In order to see that \( z \) is a fixed point of \( f \), write\n\n(1)\n\n\[ f\left( z\right) - z = \left\lbrack {f\left( z\right) - f\left( {z}_{\alpha }\right) }\right\rbrack + \left\lbrack {f\left( {z}_{\alpha }\right) - {P}_{\alpha }f\left( {z}_{\alpha }\right) }\right\rbrack + \left\lbrack {{z}_{\alpha } - z}\right\rbrack \]\n\nFor any \( \beta \in A \), we can select \( \alpha \in A \) such that \( \alpha \geq \beta \) and \( z - {z}_{\alpha } \in {U}_{{\beta }^{\prime }} \) . Then \( f\left( z\right) - f\left( {z}_{\alpha }\right) \in {U}_{\beta } \) . Also, \( f\left( {z}_{\alpha }\right) - {P}_{\alpha }f\left( {z}_{\alpha }\right) \in {U}_{\alpha } \subset {U}_{\beta } \) . Finally, \( z - {z}_{\alpha } \in {U}_{{\beta }^{\prime }} \subset {U}_{\beta } \) . Equation (1) now shows that \( f\left( z\right) - z \in 3{U}_{\beta } \) . Since \( \beta \) is any element of \( A \) , \( f\left( z\right) = z \) . Theorem 1 in Section 7.7 (page 368) justifies this last conclusion.
Yes
Theorem 6. Let \( D \) be a convex set in a locally convex linear topological Hausdorff space. If \( f \) maps \( D \) continuously into a compact subset of \( D \), then \( f \) has a fixed point.
Proof. As in the proof of Theorem 3, we use the family of neighborhoods \( {U}_{\alpha } \) . Let \( K \) be a compact subset of \( D \) that contains \( f\left( D\right) \) . Proceed as in the proof of Theorem 3, using the same set of neighborhoods \( {U}_{\alpha } \) . By the lemma, for each \( \alpha \) there is a finite set \( {F}_{\alpha } \) in \( K \) and a continuous map \( {P}_{\alpha } : K \rightarrow \operatorname{co}\left( {F}_{\alpha }\right) \) such that \( x - {P}_{\alpha }x \in {U}_{\alpha } \) for each \( x \in K \) . If \( x \in \operatorname{co}\left( {F}_{\alpha }\right) \), then \( x \in D, f\left( x\right) \in K \), and \( {P}_{\alpha }\left( {f\left( x\right) }\right) \in \operatorname{co}\left( {F}_{\alpha }\right) \) . Thus \( {P}_{\alpha } \circ f \) maps the compact, convex, finite-dimensional set \( \operatorname{co}\left( {F}_{\alpha }\right) \) into itself. By the Brouwer Theorem, \( {P}_{\alpha } \circ f \) has a fixed point \( {z}_{\alpha } \) in \( \operatorname{co}\left( {F}_{\alpha }\right) \) . Then \( f\left( {z}_{\alpha }\right) \) lies in the compact set \( K \), and the net \( \left\lbrack {f\left( {z}_{\alpha }\right) : \alpha \in A}\right\rbrack \) has a cluster point \( y \) in \( K \) . We will show that \( f\left( y\right) = y \) by establishing that \( f\left( y\right) - y \in {U}_{\alpha } \) for all \( \alpha \) . Theorem 1 in Section 7.7, page 368, applies here.\n\nLet \( \alpha \) be given. Select \( \beta \geq \alpha \) so that \( {U}_{\beta } + {U}_{\beta } \subset {U}_{\alpha } \) . By the continuity of \( f \) at \( y \), select \( \gamma \geq \beta \) so that \( f\left( y\right) - f\left( x\right) \in {U}_{\beta } \) whenever \( x \in K \) and \( y - x \in {U}_{\gamma } \) . Select \( \delta \geq \gamma \) so that \( {U}_{\delta } + {U}_{\delta } \subset {U}_{\gamma } \) . Select \( \varepsilon \geq \delta \) so that \( f\left( {z}_{\varepsilon }\right) \in y + {U}_{\delta } \) . 
Then we have\n\n\[ y - {z}_{\varepsilon } = \left\lbrack {y - f\left( {z}_{\varepsilon }\right) }\right\rbrack + \left\lbrack {f\left( {z}_{\varepsilon }\right) - {P}_{\varepsilon }f\left( {z}_{\varepsilon }\right) }\right\rbrack \in {U}_{\delta } + {U}_{\varepsilon } \subset {U}_{\delta } + {U}_{\delta } \subset {U}_{\gamma } \]\n\nHence \( f\left( y\right) - f\left( {z}_{\varepsilon }\right) \in {U}_{\beta } \) . Furthermore,\n\n\[ f\left( y\right) - y = \left\lbrack {f\left( y\right) - f\left( {z}_{\varepsilon }\right) }\right\rbrack + \left\lbrack {f\left( {z}_{\varepsilon }\right) - y}\right\rbrack \in {U}_{\beta } + {U}_{\delta } \subset {U}_{\beta } + {U}_{\beta } \subset {U}_{\alpha } \]
Yes
Theorem 7. Rothe's Theorem. Let \( B \) denote the closed unit ball of a normed linear space \( X \) . If \( f \) maps \( B \) continuously into a compact subset of \( X \) and if \( f\left( {\partial B}\right) \subset B \), then \( f \) has a fixed point.
Proof. Let \( r \) denote the radial projection into \( B \) defined by \( r\left( x\right) = x \) if \( \parallel x\parallel \leq 1 \) and \( r\left( x\right) = x/\parallel x\parallel \) if \( \parallel x\parallel > 1 \) . This map is continuous (Problem 1). Hence \( r \circ f \) maps \( B \) into a compact subset of \( B \) . By Theorem \( 6, r \circ f \) has a fixed point \( x \) in \( B \) . If \( \parallel x\parallel = 1 \), then \( \parallel f\left( x\right) \parallel \leq 1 \) by hypothesis, and we have \( x = r\left( {f\left( x\right) }\right) = f\left( x\right) \) by the definition of \( r \) . If \( \parallel x\parallel < 1 \), then \( \parallel r\left( {f\left( x\right) }\right) \parallel < 1 \) and \( x = r\left( {f\left( x\right) }\right) = f\left( x\right) \) , again by the definition of \( r \) .
Yes
Theorem 8. Let \( B \) denote the closed unit ball in a normed space \( X \) . Let \( \left\{ {{f}_{t} : 0 \leq t \leq 1}\right\} \) be a family of continuous maps from \( B \) into one compact subset of \( X \) . Assume that\n\n(i) \( {f}_{0}\left( {\partial B}\right) \subset B \) .\n\n(ii) The map \( \left( {t, x}\right) \mapsto {f}_{t}\left( x\right) \) is continuous on \( \left\lbrack {0,1}\right\rbrack \times B \) .\n\n(iii) No \( {f}_{t} \) has a fixed point in \( \partial B \) .\n\nThen \( {f}_{1} \) has a fixed point in \( B \) .
Proof. (From [Sma]) If \( 0 < \varepsilon < 1 \), define

\[ 
{g}_{\varepsilon }\left( x\right) = \left\{ \begin{array}{ll} {f}_{1}\left( \frac{x}{1 - \varepsilon }\right) & \parallel x\parallel \leq 1 - \varepsilon \\ {f}_{\left( {1 - \parallel x\parallel }\right) /\varepsilon }\left( \frac{x}{\parallel x\parallel }\right) & 1 - \varepsilon \leq \parallel x\parallel \leq 1 \end{array}\right. 
\]

Notice that \( {g}_{\varepsilon } \) is continuous, since the two formulas agree when \( \parallel x\parallel = 1 - \varepsilon \) . If \( x \in \partial B \), then \( \parallel x\parallel = 1 \) and \( {g}_{\varepsilon }\left( x\right) = {f}_{0}\left( x\right) \in B \) . Thus \( {g}_{\varepsilon } \) maps \( \partial B \) into \( B \) . If \( K \) is a compact set containing all the images \( {f}_{t}\left( B\right) \), then \( {g}_{\varepsilon }\left( B\right) \subset K \), by the definition of \( {g}_{\varepsilon } \) . The map \( {g}_{\varepsilon } \) satisfies the hypotheses of Theorem 7, and \( {g}_{\varepsilon } \) has a fixed point \( {x}_{\varepsilon } \) in \( B \) .

We now shall prove that for all sufficiently small \( \varepsilon \), \( \begin{Vmatrix}{x}_{\varepsilon }\end{Vmatrix} \leq 1 - \varepsilon \) . If this is not true, then we can let \( \varepsilon \) converge to zero through a suitable sequence of values and have, for each \( \varepsilon \) in the sequence, \( \begin{Vmatrix}{x}_{\varepsilon }\end{Vmatrix} > 1 - \varepsilon \) . Since \( {g}_{\varepsilon }\left( {x}_{\varepsilon }\right) = {x}_{\varepsilon } \), we see that \( {x}_{\varepsilon } \) is in \( K \) . By compactness, we can assume that the sequence of \( \varepsilon \)'s has the further properties \( {x}_{\varepsilon } \rightarrow {x}_{o} \) and \( \left( {1 - \begin{Vmatrix}{x}_{\varepsilon }\end{Vmatrix}}\right) /\varepsilon \rightarrow t \), where \( {x}_{o} \in K \) and \( t \in \left\lbrack {0,1}\right\rbrack \) .
By the definition of \( {g}_{\varepsilon } \),\n\n\[ \n{f}_{\left( {1 - \begin{Vmatrix}{x}_{\varepsilon }\end{Vmatrix}}\right) /\varepsilon }\left( \frac{{x}_{\varepsilon }}{\begin{Vmatrix}{x}_{\varepsilon }\end{Vmatrix}}\right) = {x}_{\varepsilon }\n\]\n\nIn the limit, we have \( {f}_{t}\left( {x}_{o}\right) = {x}_{o} \) and \( \begin{Vmatrix}{x}_{o}\end{Vmatrix} = 1 \), in contradiction of hypothesis (iii).\n\nWe now know that \( \begin{Vmatrix}{x}_{\varepsilon }\end{Vmatrix} \leq 1 - \varepsilon \) for all sufficiently small \( \varepsilon \) . Thus, for such values of \( \varepsilon \),\n\n\[ \n{x}_{\varepsilon } = {g}_{\varepsilon }\left( {x}_{\varepsilon }\right) = {f}_{1}\left( \frac{{x}_{\varepsilon }}{1 - \varepsilon }\right)\n\]\nThe points \( {x}_{\varepsilon } \) belong to \( K \), and for any cluster point we will have \( x = {f}_{1}\left( x\right) \) . ∎
Theorem 2 Let \( X \) be a paracompact space, \( Y \) a Banach space, and \( H \) a closed subspace in \( Y \) . Suppose that \( f : X \rightarrow Y \) is continuous and \( g : X \rightarrow H \) is bounded. Then for each \( \varepsilon > 0 \) there is a continuous map \( \bar{g} : X \rightarrow H \) that satisfies\n\n(1)\n\n\[ \mathop{\sup }\limits_{{x \in X}}\parallel f\left( x\right) - \bar{g}\left( x\right) \parallel \leq \mathop{\sup }\limits_{{x \in X}}\parallel f\left( x\right) - g\left( x\right) \parallel + \varepsilon \]
Proof. Let \( \lambda \) denote the number on the right in Inequality (1). For each \( x \in X \), define

\[ \Phi \left( x\right) = \{ h \in H : \parallel f\left( x\right) - h\parallel \leq \lambda \} \]

This set is nonempty because \( g\left( x\right) \in \Phi \left( x\right) \) . (Notice that \( g \) is a selection for \( \Phi \) but not necessarily a continuous selection.) The set \( \Phi \left( x\right) \) is closed and convex in the Banach space \( H \) .

We shall prove that \( \Phi \) is lower semicontinuous. Let \( \mathcal{U} \) be open in \( H \) . It is to be shown that \( {\Phi }^{ - }\left( \mathcal{U}\right) \) is open in \( X \) . Let \( x \in {\Phi }^{ - }\left( \mathcal{U}\right) \) . Then \( \Phi \left( x\right) \cap \mathcal{U} \) is nonempty. Select \( h \) in this set. Then \( h \in \mathcal{U} \) and \( \parallel f\left( x\right) - h\parallel \leq \lambda \) . Also \( \parallel f\left( x\right) - g\left( x\right) \parallel < \lambda \) . So, by considering the line segment from \( h \) to \( g\left( x\right) \), we conclude that there is an \( {h}^{\prime } \in \mathcal{U} \) such that \( \begin{Vmatrix}{f\left( x\right) - {h}^{\prime }}\end{Vmatrix} < \lambda \) . Since \( f \) is continuous at \( x \), there is a neighborhood \( \mathcal{N} \) of \( x \) such that

\[ \parallel f\left( s\right) - f\left( x\right) \parallel < \lambda - \begin{Vmatrix}{f\left( x\right) - {h}^{\prime }}\end{Vmatrix}\;\left( {s \in \mathcal{N}}\right) \]

By the triangle inequality, \( \begin{Vmatrix}{f\left( s\right) - {h}^{\prime }}\end{Vmatrix} < \lambda \) when \( s \in \mathcal{N} \) .
This proves that \( {h}^{\prime } \in \Phi \left( s\right) \), that \( \Phi \left( s\right) \cap \mathcal{U} \) is nonempty, that \( s \in {\Phi }^{ - }\left( \mathcal{U}\right) \), that \( \mathcal{N} \subset {\Phi }^{ - }\left( \mathcal{U}\right) \), that \( {\Phi }^{ - }\left( \mathcal{U}\right) \) is open, and that \( \Phi \) is lower semicontinuous.\n\nNow apply Michael’s theorem to obtain a continuous selection \( \bar{g} \) for \( \Phi \) . Then \( \bar{g} \) is a continuous map of \( X \) into \( H \) and satisfies \( \bar{g}\left( x\right) \in \Phi \left( x\right) \) for all \( x \) . Hence \( \bar{g} \) satisfies (1).
Theorem 3. The Bartle-Graves Theorem. A continuous linear map of one Banach space onto another must have a continuous (but not necessarily linear) right inverse.
Proof. Let \( A : X \rightarrow Y \), as in the hypotheses. Since \( A \) is surjective, the equation \( {Ax} = y \) has solutions \( x \) for each \( y \in Y \) . At issue, then, is whether a continuous choice of \( x \) can be made. It is clear that we should set\n\n\[ \Phi \left( y\right) = \{ x \in X : {Ax} = y\} \]\n\nObviously, each set \( \Phi \left( y\right) \) is closed, convex, and nonempty. Is \( \Phi \) lower semicontinuous? Let \( \mathcal{O} \) be open in \( X \) . We must show that the set \( {\Phi }^{ - }\left( \mathcal{O}\right) \) is open in \( Y \) . But \( {\Phi }^{ - }\left( \mathcal{O}\right) = A\left( \mathcal{O}\right) \) by a short calculation. By the Interior Mapping Theorem (Section 1.8, page 48), \( A\left( \mathcal{O}\right) \) is open. Thus \( \Phi \) is lower semicontinuous, and by Michael’s theorem, a continuous selection \( f \) exists. Thus \( f\left( y\right) \in \Phi \left( y\right) \), or \( A\left( {f\left( y\right) }\right) = y. \)
Theorem 1. Let \( X \) be a normed linear space and let \( K \) be a convex subset of \( X \) that contains 0 as an interior point. If \( z \in X \smallsetminus K \) , then there is a continuous linear functional \( \phi \) defined on \( X \) such that for all \( x \in K,\phi \left( x\right) \leq 1 \leq \phi \left( z\right) \) .
Proof. Again, we need the Minkowski functional of \( K \) . It is\n\n\[ p\left( x\right) = \inf \{ \lambda : \lambda > 0\text{ and }x/\lambda \in K\} \]\n\nWe prove now that \( p\left( {x + y}\right) \leq p\left( x\right) + p\left( y\right) \) for all \( x \) and \( y \) . Select \( \lambda ,\mu > 0 \) so that \( x/\lambda \) and \( y/\mu \) are in \( K \) . By the convexity of \( K \),\n\n\[ \frac{x + y}{\lambda + \mu } \equiv \frac{\lambda }{\lambda + \mu }\frac{x}{\lambda } + \frac{\mu }{\lambda + \mu }\frac{y}{\mu }\; \in \;K \]\n\nHence \( p\left( {x + y}\right) \leq \lambda + \mu \) . Taking the infima of \( \lambda \) and \( \mu \), we obtain \( p\left( {x + y}\right) \leq \) \( p\left( x\right) + p\left( y\right) \) . Next we prove that for \( \lambda \geq 0 \) the equation \( p\left( {\lambda x}\right) = {\lambda p}\left( x\right) \) is true. Select \( \mu > 0 \) so that \( x/\mu \in K \) . Then \( {\lambda x}/{\lambda \mu } \in K \) and \( p\left( {\lambda x}\right) \leq {\lambda \mu } \) . Taking the infimum of \( \mu \), we conclude that \( p\left( {\lambda x}\right) \leq {\lambda p}\left( x\right) \) . From this we obtain the reverse inequality by writing \( {\lambda p}\left( x\right) = {\lambda p}\left( {{\lambda }^{-1}{\lambda x}}\right) \leq \lambda {\lambda }^{-1}p\left( {\lambda x}\right) = p\left( {\lambda x}\right) \).\n\nNow define a linear functional \( \phi \) on the one-dimensional subspace generated by \( z \) by writing\n\n\[ \phi \left( {\lambda z}\right) = {\lambda p}\left( z\right) \;\left( {\lambda \in \mathbb{R}}\right) \]\n\nIf \( \lambda \geq 0 \), then \( \phi \left( {\lambda z}\right) = p\left( {\lambda z}\right) \) . If \( \lambda < 0 \), then \( \phi \left( {\lambda z}\right) = {\lambda p}\left( z\right) \leq 0 \leq p\left( {\lambda z}\right) \) . Hence \( \phi \leq p \) . By the Hahn-Banach Theorem (Section 1.6, page 32), \( \phi \) has a linear extension (denoted also by \( \phi \) ) that is dominated by \( p \) . 
For each \( x \in K \) we have \( \phi \left( x\right) \leq p\left( x\right) \leq 1 \) . As for \( z \), we have \( \phi \left( z\right) = p\left( z\right) \geq 1 \), because if \( p\left( z\right) < 1 \), then \( z/\lambda \in K \) for some \( \lambda \in \left( {0,1}\right) \), and by convexity the point\n\n\[ z \equiv \lambda \left( {z/\lambda }\right) + \left( {1 - \lambda }\right) 0 \]\n\nwould belong to \( K \) .\n\nLastly, we prove that \( \phi \) is continuous. Select a positive \( r \) such that the ball \( B\left( {0, r}\right) \) is contained in \( K \) . For \( \parallel x\parallel < 1 \) we have \( {rx} \in B\left( {0, r}\right) \) and \( {rx} \in K \) . Hence \( p\left( {rx}\right) \leq 1,\phi \left( {rx}\right) \leq 1 \), and \( \phi \left( x\right) \leq 1/r \) . Thus \( \parallel \phi \parallel \leq 1/r \) .
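The Minkowski functional is easy to compute for a concrete convex set. The Python sketch below (the set \( K \), an elliptical region in \( {\mathbb{R}}^{2} \), is my choice) evaluates \( p\left( x\right) = \inf \{ \lambda > 0 : x/\lambda \in K\} \) by bisection on \( \lambda \), then spot-checks the subadditivity and positive homogeneity just proved.

```python
def in_K(x):
    # K: a convex set containing 0 as an interior point (a chosen example)
    return (x[0] / 2.0) ** 2 + x[1] ** 2 <= 1.0

def minkowski(x, hi=100.0, iters=60):
    # p(x) = inf{ lam > 0 : x/lam in K }, located by bisection on lam;
    # x/lam in K holds exactly when lam >= p(x), so the test is monotone
    lo = 1e-12
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if in_K((x[0] / mid, x[1] / mid)):
            hi = mid
        else:
            lo = mid
    return hi

x, y = (1.0, 0.5), (-0.3, 2.0)
px, py = minkowski(x), minkowski(y)
pxy = minkowski((x[0] + y[0], x[1] + y[1]))

# subadditivity and positive homogeneity, as proved above
assert pxy <= px + py + 1e-9
assert abs(minkowski((3 * x[0], 3 * x[1])) - 3 * px) < 1e-6
```

For this \( K \) the functional has the closed form \( p\left( x\right) = \sqrt{{\left( {x}_{1}/2\right) }^{2} + {x}_{2}^{2}} \), which the bisection reproduces to high accuracy.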
Theorem 2. Let \( {K}_{1},{K}_{2} \) be a disjoint pair of convex sets in a normed linear space \( X \) . If one of them has an interior point, then there is a nonzero functional \( \phi \in {X}^{ * } \) such that\n\n\[ \mathop{\sup }\limits_{{x \in {K}_{1}}}\phi \left( x\right) \leq \mathop{\inf }\limits_{{x \in {K}_{2}}}\phi \left( x\right) \]
Proof. By performing a translation and by relabeling the two sets, we can assume that 0 is an interior point of \( {K}_{1} \) . Fix a point \( z \) in \( {K}_{2} \) and consider the set \( {K}_{1} - {K}_{2} + z \) . This set is convex and contains 0 as an interior point. Also, \( z \notin {K}_{1} - {K}_{2} + z \) because \( {K}_{1} \) is disjoint from \( {K}_{2} \) . By the preceding theorem, there is a \( \phi \in {X}^{ * } \) such that for \( u \in {K}_{1} \) and \( v \in {K}_{2} \) we have \( \phi \left( {u - v + z}\right) \leq 1 \leq \phi \left( z\right) \) . Hence \( \phi \left( u\right) \leq \phi \left( v\right) \).
Theorem 3. Let \( {K}_{1},{K}_{2} \) be a disjoint pair of closed convex sets in a normed linear space \( X \) . Assume that at least one of the sets is compact. Then there is a \( \phi \in {X}^{ * } \) such that\n\n\[ \mathop{\sup }\limits_{{x \in {K}_{2}}}\phi \left( x\right) < \mathop{\inf }\limits_{{x \in {K}_{1}}}\phi \left( x\right) \]
Proof. The set \( {K}_{1} - {K}_{2} \) is closed and convex. (See Problems 1.2.19 on page 12 and 1.4.17 on page 23.) Also, \( 0 \notin {K}_{1} - {K}_{2} \), and consequently there is a ball \( B\left( {0, r}\right) \) that is disjoint from \( {K}_{1} - {K}_{2} \) . By the preceding theorem, there is a nonzero continuous functional \( \phi \) such that

\[ \mathop{\sup }\limits_{{\parallel x\parallel \leq r}}\phi \left( x\right) \leq \mathop{\inf }\limits_{{x \in {K}_{1} - {K}_{2}}}\phi \left( x\right) \]

Since \( \phi \) is not zero, the number \( \varepsilon \equiv \mathop{\sup }\limits_{{\parallel x\parallel \leq r}}\phi \left( x\right) = r\parallel \phi \parallel \) is positive, and for \( u \in {K}_{1} \) and \( v \in {K}_{2} \) we have \( \varepsilon \leq \phi \left( u\right) - \phi \left( v\right) \) .
Theorem 4. Let \( U \) be a compact set in a real Hilbert space. In order that the system of linear inequalities\n\n(1)\n\n\[ \langle u, x\rangle > 0\;\left( {u \in U}\right) \]\n\nbe consistent (i.e., have a solution, \( x \) ) it is necessary and sufficient that 0 not be in the closed convex hull of \( U \) .
Proof. For the sufficiency of the condition, assume the condition to be true. Thus, \( 0 \notin \overline{\operatorname{co}}\left( U\right) \) . By Theorem 3, there is a vector \( x \) and a real number \( \lambda \) such that \( \overline{\mathrm{{co}}}\left( U\right) \) and 0 are on opposite sides of the hyperplane

\[ \{ y : \langle y, x\rangle = \lambda \} \]

We can suppose that \( \langle y, x\rangle > \lambda \) for \( y \in \overline{\operatorname{co}}\left( U\right) \) and that \( \langle 0, x\rangle < \lambda \) . Obviously, \( \lambda > 0 \) and \( x \) solves the system (1).

Now assume that system (1) is consistent and that \( x \) is a solution of it. By continuity and compactness, there exists a positive \( \varepsilon \) such that \( \langle u, x\rangle \geq \varepsilon \) for all \( u \in U \) . For any \( v \in \operatorname{co}\left( U\right) \) we can write a convex combination \( v = \sum {\theta }_{i}{u}_{i} \) and then compute

\[ \langle v, x\rangle = \left\langle {\sum {\theta }_{i}{u}_{i}, x}\right\rangle = \sum {\theta }_{i}\left\langle {{u}_{i}, x}\right\rangle \geq \sum {\theta }_{i}\varepsilon = \varepsilon \]

Then, by continuity, \( \langle w, x\rangle \geq \varepsilon \) for all \( w \in \overline{\operatorname{co}}\left( U\right) \) . Obviously, \( 0 \notin \overline{\operatorname{co}}\left( U\right) \) .
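For a finite set \( U \) in \( {\mathbb{R}}^{2} \) the theorem can be checked directly. The Python sketch below (the two sets are my choices) searches unit directions for a solution \( x \); since scaling \( x \) does not change the signs \( \langle u, x\rangle > 0 \), a direction search suffices in the plane. This is a spot check for these particular sets, not a general algorithm.

```python
import math

def consistent(U, samples=720):
    # search unit directions x = (cos t, sin t); for these small examples a
    # coarse angular grid is enough to find a solution when one exists
    for k in range(samples):
        t = 2 * math.pi * k / samples
        x = (math.cos(t), math.sin(t))
        if all(u[0] * x[0] + u[1] * x[1] > 1e-9 for u in U):
            return True
    return False

U1 = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]   # 0 is not in co(U1)
U2 = [(1.0, 0.0), (-1.0, 0.0)]              # 0 = midpoint, so 0 is in co(U2)

assert consistent(U1)      # e.g. x = (1, 1) satisfies all the inequalities
assert not consistent(U2)  # no x has <u, x> > 0 for both u
```

The outcomes match the theorem: the system is consistent exactly when 0 lies outside the (closed) convex hull of \( U \).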
Theorem 5. For an \( m \times n \) matrix \( A \), either \( {Ax} \geq 0 \) for some \( x \in {S}_{n} \) , or \( {y}^{T}A < 0 \) for some \( y \in {S}_{m} \) .
Proof. Suppose that there is no \( x \) in the simplex \( {S}_{n} \) for which \( {Ax} \geq 0 \) . Then \( A\left( {S}_{n}\right) \) contains no point in the nonnegative orthant,

\[ {P}_{m} = \left\{ {y \in {\mathbb{R}}^{m} : y \geq 0}\right\} \]

Consequently, the convex sets \( A\left( {S}_{n}\right) \) and \( {P}_{m} \) can be separated by a hyperplane. Suppose, then, that

\[ {P}_{m} \subset \left\{ {y \in {\mathbb{R}}^{m} : \langle u, y\rangle > \lambda }\right\} \]

\[ A\left( {S}_{n}\right) \subset \left\{ {y \in {\mathbb{R}}^{m} : \langle u, y\rangle < \lambda }\right\} \]

Since \( 0 \in {P}_{m} \), we have \( \lambda < 0 \) . Let \( {e}_{i} \) denote the \( i \) -th standard unit vector in \( {\mathbb{R}}^{m} \) . For positive \( t \), \( t{e}_{i} \in {P}_{m} \) . Hence \( \left\langle {u, t{e}_{i}}\right\rangle > \lambda \), so \( {u}_{i} > \lambda /t \), and letting \( t \rightarrow \infty \) gives \( {u}_{i} \geq 0 \) . Thus \( u \in {P}_{m} \) and \( \langle u,{Ax}\rangle < \lambda < 0 \) for all \( x \in {S}_{n} \) . Obviously, \( u \neq 0 \), so we can assume \( u \in {S}_{m} \) . Since \( {u}^{T}{Ax} < 0 \) for all \( x \in {S}_{n} \), we have \( {u}^{T}A{e}_{i} < 0 \) for \( 1 \leq i \leq n \), or, in other terms, \( {u}^{T}A < 0 \) . (In this last argument, \( {e}_{i} \) was a standard unit vector in \( {\mathbb{R}}^{n} \) .)
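The alternative can be verified for a small concrete matrix. In the Python sketch below (the \( 2 \times 2 \) matrix and the sampling of the simplex are my choices), no \( x \) in \( {S}_{2} \) gives \( {Ax} \geq 0 \), and the vector \( y = \left( {1/2,1/2}\right) \in {S}_{2} \) exhibits the other branch, \( {y}^{T}A < 0 \) .

```python
def matvec(A, x):
    # A x for a matrix stored as a list of rows
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def vecmat(y, A):
    # y^T A, returned as a list of column sums
    return [sum(y[i] * A[i][j] for i in range(len(A))) for j in range(len(A[0]))]

A = [[-1.0, -2.0], [-3.0, -1.0]]

# sample the simplex S_2 = {(t, 1 - t) : 0 <= t <= 1} for x with Ax >= 0;
# for this A the first coordinate of Ax is t - 2 < 0, so none exists
found_x = any(all(v >= 0 for v in matvec(A, (t / 100, 1 - t / 100)))
              for t in range(101))
assert not found_x

# the theorem then promises y in S_2 with y^T A < 0; y = (1/2, 1/2) works
y = (0.5, 0.5)
assert all(v < 0 for v in vecmat(y, A))
```

Here the grid search is only illustrative; the inline comment gives the exact reason no \( x \) can work for this matrix.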
Theorem 2. Arzelà-Ascoli Theorem II. Let \( X \) be a compact metric space. A subset of \( C\left( X\right) \) is compact if and only if it is closed, bounded, and equicontinuous.
Proof. Suppose that \( K \) is a compact set in \( C\left( X\right) \) . Then it is closed. It is also totally bounded, and can be covered by a finite number of balls of radius 1 :\n\n\[ K \subset \mathop{\bigcup }\limits_{{i = 1}}^{n}B\left( {{f}_{i},1}\right) \]\n\nFor any \( g \in K \) there is an index \( i \) for which \( g \in B\left( {{f}_{i},1}\right) \) . Then\n\n\[ \parallel g\parallel \leq \begin{Vmatrix}{g - {f}_{i}}\end{Vmatrix} + \begin{Vmatrix}{f}_{i}\end{Vmatrix} \leq 1 + \mathop{\max }\limits_{i}\begin{Vmatrix}{f}_{i}\end{Vmatrix} \equiv M \]\n\nThus \( K \) is bounded. Let \( Y = \left\lbrack {-M, M}\right\rbrack \) . Then\n\n\[ K \subset C\left( {X, Y}\right) \]\n\nThe preceding theorem now is applicable, and \( K \) is equicontinuous.\n\nFor the other half of the proof let \( K \) be a closed, bounded, and equicontinuous set. Since \( K \) is bounded, we have again \( K \subset C\left( {X, Y}\right) \), where \( Y \) is a suitable compact interval. The preceding theorem now shows that \( K \) is compact.
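Boundedness alone is not enough for compactness: the family \( {f}_{n}\left( x\right) = \sin {nx} \) on \( \left\lbrack {0,\pi }\right\rbrack \) is bounded by 1 but not equicontinuous. The Python sketch below (a numerical spot check on a grid, not a proof) shows that even the first few members stay far apart in the supremum norm, so no subsequence of them is Cauchy in \( C\left( X\right) \) .

```python
import math

def sup_dist(n, m, samples=2000):
    # grid approximation of the sup-norm distance between sin(nx) and sin(mx)
    # on [0, pi]; a lower bound on the true supremum
    return max(abs(math.sin(n * t) - math.sin(m * t))
               for t in (math.pi * k / samples for k in range(samples + 1)))

pairs = [(1, 2), (1, 3), (2, 3)]
dists = {p: sup_dist(*p) for p in pairs}

# every f_n has sup-norm at most 1 (a bounded family), yet the members are
# mutually separated by more than 1 in C[0, pi]
assert all(d > 1.0 for d in dists.values())
```

By the theorem, the failure of precompactness here must be traced to a failure of equicontinuity, since the family is bounded and the interval is compact.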
Theorem 3. Dini’s Theorem. Let \( {f}_{1},{f}_{2},\ldots \) be continuous real-valued functions on a compact topological space. For each \( x \) assume that \( \left| {{f}_{n}\left( x\right) }\right| \downarrow 0 \) . Then this convergence is uniform.
Proof. Given \( \varepsilon > 0 \), put \( {S}_{k} = \left\{ {x : \left| {{f}_{k}\left( x\right) }\right| \geq \varepsilon }\right\} \) . Then each \( {S}_{k} \) is closed, and \( {S}_{k + 1} \subset {S}_{k} \) . For each \( x \) there is an index \( k \) such that \( x \notin {S}_{k} \) . Hence \( \mathop{\bigcap }\limits_{{k = 1}}^{\infty }{S}_{k} \) is empty. By compactness and the finite intersection property, we conclude that \( \mathop{\bigcap }\limits_{{k = 1}}^{n}{S}_{k} \) is empty for some \( n \) . This means that \( {S}_{n} \) is empty, and that \( \left| {{f}_{n}\left( x\right) }\right| < \varepsilon \) for all \( x \) . Thus \( \left| {{f}_{k}\left( x\right) }\right| < \varepsilon \) for all \( k \geq n \) . This is uniform convergence.
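Dini's theorem is easy to test numerically. In the Python sketch below (the interval and grids are my choices), \( {f}_{n}\left( x\right) = {x}^{n} \) on the compact interval \( \left\lbrack {0,{0.9}}\right\rbrack \) decreases pointwise to zero, and the sampled suprema \( {0.9}^{n} \) decrease to zero, exhibiting the uniform convergence. On \( \left\lbrack {0,1}\right\rbrack \) the hypothesis of pointwise convergence to zero fails at \( x = 1 \), and uniform convergence fails with it.

```python
def sup_fn(n, a=0.9, samples=1000):
    # grid approximation of sup |f_n| on [0, a] for f_n(x) = x**n;
    # the maximum is attained at the right endpoint, giving a**n
    return max((a * k / samples) ** n for k in range(samples + 1))

sups = [sup_fn(n) for n in range(1, 40)]
assert sups == sorted(sups, reverse=True)  # the suprema decrease with n
assert sups[-1] < 0.1                      # 0.9**39 is about 0.016

# on [0, 1], f_n(1) = 1 for every n: pointwise convergence to 0 fails there,
# so Dini's theorem does not apply, and the supremum stays at 1
assert max((k / 1000) ** 5 for k in range(1001)) == 1.0
```

The monotone decrease of the suprema mirrors the nesting \( {S}_{k + 1} \subset {S}_{k} \) used in the proof.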
Lemma 1. Let \( A \) be a compact operator on a normed linear space.\n\nIf \( I + A \) is surjective, then it is injective.
Proof. Let \( B = I + A \) and \( {X}_{n} = \ker \left( {B}^{n}\right) \) . Suppose that \( B \) is surjective but not injective. We shall be looking for a contradiction. Note that \( \{ 0\} \subset {X}_{1} \subset {X}_{2} \subset \cdots \) It is now to be proved that these inclusions are proper. Select a nonzero element \( {y}_{1} \) in \( {X}_{1} \) . Since \( B \) is surjective, there exist points \( {y}_{2},{y}_{3},\ldots \) such that \( B{y}_{n + 1} = {y}_{n} \) for \( n = 1,2,\ldots \) We have

\[ 
{B}^{n}{y}_{n} = {B}^{n - 1}B{y}_{n} = {B}^{n - 1}{y}_{n - 1} = \cdots = {B}^{2}{y}_{2} = B{y}_{1} = 0 
\]

Furthermore,

\[ 
{B}^{n - 1}{y}_{n} = {B}^{n - 2}B{y}_{n} = {B}^{n - 2}{y}_{n - 1} = \cdots = {B}^{2}{y}_{3} = B{y}_{2} = {y}_{1} \neq 0 
\]

These two equations prove that \( {y}_{n} \in {X}_{n} \smallsetminus {X}_{n - 1} \) and that those inclusions mentioned above are proper.

By the Riesz Lemma (Section 1.4, page 22), there exist points \( {x}_{n} \) such that \( {x}_{n} \in {X}_{n},\begin{Vmatrix}{x}_{n}\end{Vmatrix} = 1 \), and \( \operatorname{dist}\left( {{x}_{n},{X}_{n - 1}}\right) \geq 1/2 \) . If \( m > n \), then we have \( {B}^{m}{x}_{m} = 0 \) because \( {x}_{m} \in {X}_{m} = \ker \left( {B}^{m}\right) \) . Also, \( {B}^{m - 1}{x}_{n} = 0 \) because \( {x}_{n} \in {X}_{n} \subset {X}_{m - 1} \) . Finally, \( {B}^{m}{x}_{n} = 0 \) because \( {x}_{n} \in {X}_{n} \subset {X}_{m} \) .
These observations show that\n\n\[ \n{B}^{m - 1}\left( {B{x}_{m} - {x}_{n} - B{x}_{n}}\right) = {B}^{m}{x}_{m} - {B}^{m - 1}{x}_{n} - {B}^{m}{x}_{n} = 0 \n\]\n\nNow we can write\n\n\[ \n\begin{Vmatrix}{A{x}_{n} - A{x}_{m}}\end{Vmatrix} = \begin{Vmatrix}{\left( {B - I}\right) {x}_{n} - \left( {B - I}\right) {x}_{m}}\end{Vmatrix} = \begin{Vmatrix}{B{x}_{n} - {x}_{n} - B{x}_{m} + {x}_{m}}\end{Vmatrix} \n\]\n\n\[ \n= \begin{Vmatrix}{{x}_{m} - \left( {B{x}_{m} + {x}_{n} - B{x}_{n}}\right) }\end{Vmatrix} \geq \mathrm{{dist}}\left( {{x}_{m},{X}_{m - 1}}\right) \geq 1/2 \n\]\n\nThe sequence \( \left\lbrack {A{x}_{n}}\right\rbrack \) therefore can have no Cauchy subsequence, contradicting the compactness property of \( A \) .
Lemma 3. Let \( A \) be a compact operator on a Banach space. If \( I + A \) is injective, then it is surjective.
Proof. Let \( B = I + A \) and let \( {X}_{n} \) denote the range of \( {B}^{n} \) . We have

\[ {B}^{n} = {\left( I + A\right) }^{n} = \mathop{\sum }\limits_{{k = 0}}^{n}\left( \begin{array}{l} n \\ k \end{array}\right) {A}^{k} = I + \mathop{\sum }\limits_{{k = 1}}^{n}\left( \begin{array}{l} n \\ k \end{array}\right) {A}^{k} \]

Since each \( {A}^{k} \) is compact (for \( k \geq 1 \) ), \( {B}^{n} \) is the identity plus a compact operator. Thus \( {X}_{n} \) is closed by Lemma 2. If \( x \in {X}_{n} \) for some \( n \), then for an appropriate \( u \) we have

\[ x = {B}^{n}u = {B}^{n - 1}{Bu} \in {X}_{n - 1} \]

Thus

(2)

\[ X = {X}_{0} \supset {X}_{1} \supset {X}_{2} \supset \cdots \]

Now our objective is to establish that \( {X}_{1} = {X}_{0} \) . If all the inclusions in the list (2) are proper, we can use Riesz's Lemma to select \( {x}_{n} \in {X}_{n} \) such that \( \begin{Vmatrix}{x}_{n}\end{Vmatrix} = 1 \) and \( \operatorname{dist}\left( {{x}_{n},{X}_{n + 1}}\right) \geq 1/2 \) . Then, for \( n < m \), we have

\[ \begin{Vmatrix}{A{x}_{m} - A{x}_{n}}\end{Vmatrix} = \begin{Vmatrix}{\left( {B - I}\right) {x}_{m} - \left( {B - I}\right) {x}_{n}}\end{Vmatrix} = \begin{Vmatrix}{{x}_{n} - \left( {{x}_{m} + B{x}_{n} - B{x}_{m}}\right) }\end{Vmatrix} \]

\[ \geq \operatorname{dist}\left( {{x}_{n},{X}_{n + 1}}\right) \geq 1/2 \]

because \( {x}_{m} \in {X}_{m} \subset {X}_{n + 1}, B{x}_{m} \in {X}_{m + 1} \subset {X}_{n + 1} \), and \( B{x}_{n} \in {X}_{n + 1} \) . This argument shows that \( \left\lbrack {A{x}_{n}}\right\rbrack \) can contain no Cauchy subsequence, contradicting the compactness of \( A \) . Thus, not all the inclusions in the list (2) are proper, and for some \( n \), \( {X}_{n} = {X}_{n + 1} \) . We define \( n \) to be the first integer having this property. All we have to do now is prove that \( n = 0 \) . If \( n > 0 \), let \( x \) be any point in \( {X}_{n - 1} \) .
Then \( x = {B}^{n - 1}y \) for some \( y \), and \[ {Bx} = {B}^{n}y \in {X}_{n} = {X}_{n + 1} \] It follows that \( {Bx} = {B}^{n + 1}z \) for some \( z \) . Since \( B \) is injective by hypothesis, \( x = \) \( {B}^{n}z \in {X}_{n} \) . Since \( x \) was an arbitrary point in \( {X}_{n - 1} \), this shows that \( {X}_{n - 1} \subset {X}_{n} \) . But the inclusion \( {X}_{n - 1} \supset {X}_{n} \) also holds. Hence \( {X}_{n - 1} = {X}_{n} \), contrary to our choice of \( n \) . Hence \( n = 0 \) .
Theorem 1. The Fredholm Alternative. Let \( A \) be a compact linear operator on a Banach space. The operator \( I + A \) is surjective if and only if it is injective.
Proof. This is the result of putting Lemmas 1 and 3 together.
Theorem 3. Let \( B \) be a bounded linear invertible operator, and let \( A \) be a compact operator, both defined on one Banach space and taking values in another. Then \( B + A \) is surjective if and only if it is injective.
Proof. Suppose that \( B + A \) is injective. Then so are \( {B}^{-1}\left( {B + A}\right) \) and \( I + {B}^{-1}A \) . Now, the product of a compact operator with a bounded operator is compact. (See Problem 7.) Thus, Theorem 1 is applicable, and \( I + {B}^{-1}A \) is surjective. Hence so are \( B\left( {I + {B}^{-1}A}\right) \) and \( B + A \) . The proof of the reverse implication is similar.
Theorem 4. A compact linear transformation operating from one normed linear space to another maps weakly convergent sequences into strongly convergent sequences.
Proof. Let \( A \) be such an operator, \( A : X \rightarrow Y \) . Let \( {x}_{n} \rightharpoonup x \) (weak convergence) in \( X \) . It suffices to consider only the case when \( x = 0 \) . Thus we want to prove that \( A{x}_{n} \rightarrow 0 \) . By the weak convergence, \( \phi \left( {x}_{n}\right) \rightarrow 0 \) for all \( \phi \in {X}^{ * } \) . Interpret \( \phi \left( {x}_{n}\right) \) as a sequence of linear maps \( {x}_{n} \) acting on an element \( \phi \in {X}^{ * } \) . Since \( {X}^{ * } \) is complete even if \( X \) is not, the Uniform Boundedness Theorem (Section 1.7, page 42) is applicable in \( {X}^{ * } \) . One concludes that \( \begin{Vmatrix}{x}_{n}\end{Vmatrix} \) is bounded. For any \( \psi \in {Y}^{ * } \) ,\n\n\[ \psi \left( {A{x}_{n}}\right) = \left( {\psi \circ A}\right) {x}_{n} \rightarrow 0 \]\n\nbecause \( \psi \circ A \in {X}^{ * } \) . Thus \( A{x}_{n} \rightharpoonup 0 \) . If \( A{x}_{n} \) does not converge strongly to 0, there will exist a subsequence such that \( \begin{Vmatrix}{A{x}_{{n}_{i}}}\end{Vmatrix} \geq \varepsilon > 0 \) . By the compactness of \( A \), and by taking a further subsequence, we may assume that \( A{x}_{{n}_{i}} \rightarrow y \) for some \( y \) . Obviously, \( \parallel y\parallel \geq \varepsilon \) . Now we have the contradiction \( A{x}_{{n}_{i}} \rightharpoonup y \) and \( A{x}_{{n}_{i}} \rightharpoonup 0 \) .
Lemma 4. Let \( \left\lbrack {A}_{n}\right\rbrack \) be a bounded sequence of continuous linear transformations from one normed linear space to another. If \( {A}_{n}x \rightarrow 0 \) for each \( x \) in a compact set \( K \), then this convergence is uniform on \( K \) .
Proof. Suppose that the convergence in question is not uniform. Then there exist a positive \( \varepsilon \), a sequence of integers \( {n}_{i} \), and points \( {x}_{{n}_{i}} \in K \) such that \( \begin{Vmatrix}{{A}_{{n}_{i}}{x}_{{n}_{i}}}\end{Vmatrix} \geq \varepsilon \) . Since \( K \) is compact, we can assume at the same time that \( {x}_{{n}_{i}} \) converges to a point \( x \) in \( K \) . Then we have a contradiction of pointwise convergence from this inequality:

\[ \begin{Vmatrix}{{A}_{{n}_{i}}x}\end{Vmatrix} = \begin{Vmatrix}{{A}_{{n}_{i}}{x}_{{n}_{i}} + \left( {{A}_{{n}_{i}}x - {A}_{{n}_{i}}{x}_{{n}_{i}}}\right) }\end{Vmatrix} \]

\[ \geq \begin{Vmatrix}{{A}_{{n}_{i}}{x}_{{n}_{i}}}\end{Vmatrix} - \begin{Vmatrix}{{A}_{{n}_{i}}x - {A}_{{n}_{i}}{x}_{{n}_{i}}}\end{Vmatrix} \]

\[ \geq \varepsilon - \begin{Vmatrix}{A}_{{n}_{i}}\end{Vmatrix}\begin{Vmatrix}{x - {x}_{{n}_{i}}}\end{Vmatrix} \]

Since the sequence \( \left\lbrack {A}_{n}\right\rbrack \) is bounded and \( {x}_{{n}_{i}} \rightarrow x \), the right side converges to \( \varepsilon \), whereas pointwise convergence requires \( {A}_{{n}_{i}}x \rightarrow 0 \) .
Theorem 5. Let \( X \) and \( Y \) be Banach spaces. If \( Y \) has a (Schauder) basis, then every compact operator from \( X \) to \( Y \) is a limit of finite-rank operators.
Proof. If \( \left\lbrack {v}_{n}\right\rbrack \) is a basis for \( Y \), then each \( y \) in \( Y \) has a unique representation of the form\n\n\[ y = \mathop{\sum }\limits_{{k = 1}}^{\infty }{\lambda }_{k}\left( y\right) {v}_{k} \]\n\n(See Problems 24-26 in Section 1.6, pages 38-39.) The functionals \( {\lambda }_{k} \) are continuous, linear, and satisfy \( \mathop{\sup }\limits_{k}\begin{Vmatrix}{\lambda }_{k}\end{Vmatrix} < \infty \) . By taking the partial sum of the first \( n \) terms, we define a projection \( {P}_{n} \) of \( Y \) onto the linear span of the first \( n \) vectors \( {v}_{k} \) . Now let \( A \) be a compact linear transformation from \( X \) to \( Y \), and let \( S \) denote the unit ball in \( X \) . The closure of \( A\left( S\right) \) is compact in \( Y \), and \( {P}_{n} - I \) converges pointwise to 0 in \( Y \) . By the preceding lemma, this convergence is uniform on \( A\left( S\right) \) . This implies that \( \left( {{P}_{n}A - A}\right) \left( x\right) \) converges uniformly to 0 on \( S \) . Since each \( {P}_{n}A \) has finite-dimensional range, this completes the proof.
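The mechanism of the proof can be imitated in a finite model. The Python sketch below is hedged: the operator, a diagonal scaling \( {\left( Ax\right) }_{k} = {x}_{k}/\left( {k + 1}\right) \) truncated to \( m \) coordinates with the sup norm, is my choice. Here \( {P}_{n} \) keeps the first \( n \) basis coefficients, so \( {P}_{n}A \) has rank \( n \), and the operator norm of the tail \( A - {P}_{n}A \) is the largest surviving diagonal entry, \( 1/\left( {n + 1}\right) \), which tends to zero.

```python
m = 50  # finite truncation of the sequence space (an illustrative model)

def tail_apply(n, x):
    # (A - P_n A)x: A scales coordinate k by 1/(k + 1); P_n keeps the first n
    return [x[k] / (k + 1) if k >= n else 0.0 for k in range(m)]

def tail_norm(n):
    # sup-norm operator norm of A - P_n A: the largest surviving diagonal
    # entry, namely 1/(n + 1)
    return max(1.0 / (k + 1) for k in range(n, m))

errors = [tail_norm(n) for n in range(20)]
assert errors == [1.0 / (n + 1) for n in range(20)]  # -> 0 as n grows

# the norm is attained: on x = (1, 1, ..., 1), a sup-norm unit vector, the
# largest output entry of the tail operator is 1/(n + 1)
x = [1.0] * m
assert abs(max(tail_apply(5, x)) - 1.0 / 6.0) < 1e-15
```

The vanishing tail norms are the finite-model analogue of \( {P}_{n}A \rightarrow A \) in operator norm.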
Theorem 6. Let \( A \) be a compact operator acting between two Banach spaces. If the range of \( A \) is closed, then it is finite dimensional.
Proof. Since \( A \) is compact, it is continuous and has a closed graph. Assume that \( A : X \rightarrow Y \) and that \( A\left( X\right) \) is closed in \( Y \) . Then \( A\left( X\right) \) is a Banach space. Let \( S \) denote the unit ball in \( X \) . By the Interior Mapping Theorem (Section 1.8, page 48), \( A\left( S\right) \) is a neighborhood of 0 in \( A\left( X\right) \) . On the other hand, by its compactness, \( A \) maps \( S \) into a compact subset of \( A\left( X\right) \) . Since \( A\left( X\right) \) has a compact neighborhood of \( 0, A\left( X\right) \) is finite dimensional, by Theorem 2 in Section 1.4, page 22.
Lemma 5. In the definition of the degenerate kernel \( k = \mathop{\sum }\limits_{{i = 1}}^{n}{u}_{i}{v}_{i}, \) there is no loss of generality in supposing that \( \left\{ {{u}_{1},\ldots ,{u}_{n}}\right\} \) and \( \left\{ {{v}_{1},\ldots ,{v}_{n}}\right\} \) are linearly independent sets.
Proof. Suppose that \( \left\{ {{v}_{1},\ldots ,{v}_{n}}\right\} \) is linearly dependent. Then one vector is a linear combination of the others, say \( {v}_{n} = \mathop{\sum }\limits_{{i = 1}}^{{n - 1}}{a}_{i}{v}_{i} \) . Then we can write the kernel with a sum of fewer terms as follows:\n\n\[ \n{Kx} = \mathop{\sum }\limits_{{i = 1}}^{n}\left\langle {x,{u}_{i}}\right\rangle {v}_{i} = \mathop{\sum }\limits_{{i = 1}}^{{n - 1}}\left\langle {x,{u}_{i}}\right\rangle {v}_{i} + \left\langle {x,{u}_{n}}\right\rangle {v}_{n} \n\]\n\n\[ \n= \mathop{\sum }\limits_{{i = 1}}^{{n - 1}}\left\langle {x,{u}_{i}}\right\rangle {v}_{i} + \left\langle {x,{u}_{n}}\right\rangle \mathop{\sum }\limits_{{i = 1}}^{{n - 1}}{a}_{i}{v}_{i} \n\]\n\n\[ \n= \mathop{\sum }\limits_{{i = 1}}^{{n - 1}}\left\lbrack {\left\langle {x,{u}_{i}}\right\rangle + {a}_{i}\left\langle {x,{u}_{n}}\right\rangle }\right\rbrack {v}_{i} = \mathop{\sum }\limits_{{i = 1}}^{{n - 1}}\left\langle {x,{u}_{i} + {a}_{i}{u}_{n}}\right\rangle {v}_{i} \n\]\n\nA similar argument applies if \( \left\{ {{u}_{1},\ldots ,{u}_{n}}\right\} \) is dependent.
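The reduction in the proof is easy to verify numerically. The Python sketch below (vectors in \( {\mathbb{R}}^{3} \) and the coefficients \( {a}_{i} \) are my choices, with the real inner product) checks that absorbing a dependent \( {v}_{n} \) into the \( {u}_{i} \) leaves \( {Kx} \) unchanged.

```python
def dot(a, b):
    # real inner product
    return sum(s * t for s, t in zip(a, b))

# degenerate kernel Kx = sum_i <x, u_i> v_i in R^3, with v_3 = v_1 + 2 v_2
u = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 1.0)]
v = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 2.0, 0.0)]
a = (1.0, 2.0)  # coefficients of v_3 in terms of v_1 and v_2

def K(x):
    return tuple(sum(dot(x, u[i]) * v[i][j] for i in range(3)) for j in range(3))

def K_reduced(x):
    # absorb the dependent v_3: replace u_i by u_i + a_i u_3, as in the lemma
    u_new = [tuple(u[i][j] + a[i] * u[2][j] for j in range(3)) for i in range(2)]
    return tuple(sum(dot(x, u_new[i]) * v[i][j] for i in range(2)) for j in range(3))

x = (0.3, -1.2, 2.5)
assert all(abs(p - q) < 1e-12 for p, q in zip(K(x), K_reduced(x)))
```

The two-term form agrees with the three-term form on the sample vector, as the algebra in the proof predicts for every \( x \).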
Theorem 7. Let \( A \) be a compact operator on a Banach space. Each nonzero element of the spectrum of \( A \) is an eigenvalue of \( A \) .
Proof. Let \( \lambda \neq 0 \) and suppose that \( \lambda \) is not an eigenvalue of \( A \) . We want to show that \( \lambda \) is not in the spectrum, or equivalently, that \( A - {\lambda I} \) is invertible. Since \( \lambda \) is not an eigenvalue, the equation \( \left( {A - {\lambda I}}\right) x = 0 \) has only the solution \( x = 0 \) . Hence \( A - {\lambda I} \) is injective. By the Fredholm Alternative, \( A - {\lambda I} \) is surjective. Hence \( {\left( A - \lambda I\right) }^{-1} \) exists as a linear map. The only question is whether it is a bounded linear map. The affirmative answer comes immediately from the Interior Mapping Theorem and its corollaries in Section 1.8, page 48ff. That would complete the proof. There is an alternative that avoids use of the Interior Mapping Theorem but uses again the compactness of \( A \) . To follow this path, assume that \( {\left( A - \lambda I\right) }^{-1} \) is not bounded. We can find \( {x}_{n} \) such that \( \begin{Vmatrix}{x}_{n}\end{Vmatrix} = 1 \) and \( \begin{Vmatrix}{{\left( A - \lambda I\right) }^{-1}{x}_{n}}\end{Vmatrix} \rightarrow \infty \) . Put \( {y}_{n} = {\left( A - \lambda I\right) }^{-1}{x}_{n} \) . Then \( \begin{Vmatrix}{y}_{n}\end{Vmatrix}/\begin{Vmatrix}{\left( {A - {\lambda I}}\right) {y}_{n}}\end{Vmatrix} \rightarrow \infty \) .\n\nPut \( {z}_{n} = {y}_{n}/\begin{Vmatrix}{y}_{n}\end{Vmatrix} \), so that \( \begin{Vmatrix}{z}_{n}\end{Vmatrix} = 1 \) and \( \begin{Vmatrix}{\left( {A - {\lambda I}}\right) {z}_{n}}\end{Vmatrix} \rightarrow 0 \) . Since \( A \) is compact, there is a convergent subsequence \( A{z}_{{n}_{k}} \rightarrow w \) . Then\n\n\[ \n{z}_{{n}_{k}} = {\lambda }^{-1}\left\lbrack {A{z}_{{n}_{k}} - \left( {A - {\lambda I}}\right) {z}_{{n}_{k}}}\right\rbrack \rightarrow {\lambda }^{-1}w \n\]\n\nHence \( A\left( {{\lambda }^{-1}w}\right) = w \) or \( \left( {A - {\lambda I}}\right) w = 0 \) . 
Since \( \parallel w\parallel = \left| \lambda \right| \neq 0 \), we have contradicted the injective property of \( A - {\lambda I} \) .
Let \( T = \left\lbrack {a, b}\right\rbrack \subset \mathbb{R} \). Let \( \mathcal{A} \) be the algebra of all polynomials in \( C\left( T\right) \). Then \( \mathcal{A} \) is dense in \( C\left( T\right) \), by the Stone-Weierstrass Theorem. This implies that for any continuous function \( f \) defined on \( \left\lbrack {a, b}\right\rbrack \) and for any \( \epsilon > 0 \) there is a polynomial \( p \) such that
\[ \parallel f - p\parallel \equiv \max \{ \left| {f\left( t\right) - p\left( t\right) }\right| : a \leq t \leq b\} < \epsilon \]
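One explicit family of approximating polynomials on \( \left\lbrack {0,1}\right\rbrack \) comes from the Bernstein construction (a standard constructive proof of the Weierstrass theorem, distinct from the Stone-Weierstrass argument above). The Python sketch below approximates a non-smooth continuous function and checks that the sampled sup-norm error shrinks as the degree grows; the threshold used is illustrative only.

```python
from math import comb

def bernstein(f, n, t):
    # B_n(f)(t) = sum_k f(k/n) C(n, k) t^k (1 - t)^(n - k), a polynomial of
    # degree at most n built from sampled values of f
    return sum(f(k / n) * comb(n, k) * t ** k * (1 - t) ** (n - k)
               for k in range(n + 1))

f = lambda t: abs(t - 0.5)  # continuous on [0, 1] but not smooth

def sup_error(n, samples=200):
    # grid approximation of max |f - B_n(f)| on [0, 1]
    return max(abs(f(k / samples) - bernstein(f, n, k / samples))
               for k in range(samples + 1))

# the error decreases as the degree grows
assert sup_error(64) < sup_error(4)
assert sup_error(64) < 0.06
```

For this kinked function the error near \( t = 1/2 \) decays only on the order of \( {n}^{-1/2} \), so the density statement is qualitative: some polynomial works for each \( \epsilon \), but the required degree depends on \( f \).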
Yes
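For a concrete illustration of this density assertion (a worked example added here), take \( T = [0,1] \) and \( f(t) = e^t \); the Taylor polynomials already achieve the required approximation:

```latex
% Taylor polynomials of e^t on [0,1], with the Lagrange remainder bound:
\[
p_n(t) = \sum_{k=0}^{n} \frac{t^k}{k!},
\qquad
\| f - p_n \| = \max_{0 \le t \le 1} \bigl| e^t - p_n(t) \bigr|
\le \frac{e}{(n+1)!} \longrightarrow 0 .
\]
% Hence for any \epsilon > 0 there is an n with \| f - p_n \| < \epsilon.
```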
Theorem 9. Let \( {A}_{0},{A}_{1},\ldots \) be compact operators on a Banach space, and suppose \( \mathop{\lim }\limits_{n}{A}_{n} = {A}_{0} \) . If \( \lambda \) is not an eigenvalue of \( {A}_{0} \) and if for each \( n \) there is a point \( {x}_{n} \) such that \( {A}_{n}{x}_{n} - \lambda {x}_{n} = b \), then for all sufficiently large \( n \) ,\n\n\[ \begin{Vmatrix}{{x}_{0} - {x}_{n}}\end{Vmatrix} \leq \begin{Vmatrix}{\left( {A}_{n} - \lambda I\right) }^{-1}\end{Vmatrix}\begin{Vmatrix}{{A}_{0} - {A}_{n}}\end{Vmatrix}\begin{Vmatrix}{x}_{0}\end{Vmatrix} \n\]
Proof. Since \( \lambda \) is not an eigenvalue of \( {A}_{0} \), it is not in the spectrum of \( {A}_{0} \), by Theorem 7. Hence \( {A}_{0} - {\lambda I} \) is invertible. Select \( m \) such that for \( n \geq m \)\n\n\[ \begin{Vmatrix}{\left( {{A}_{n} - {\lambda I}}\right) - \left( {{A}_{0} - {\lambda I}}\right) }\end{Vmatrix} = \begin{Vmatrix}{{A}_{n} - {A}_{0}}\end{Vmatrix} < {\begin{Vmatrix}{\left( {A}_{0} - \lambda I\right) }^{-1}\end{Vmatrix}}^{-1} \]\n\nBy Problem 2 in Section 4.3 (page 189), \( {\left( {A}_{n} - \lambda I\right) }^{-1} \) exists (when \( n \geq m \) ). Now write\n\n\[ {x}_{n} - {x}_{0} = \left\lbrack {{\left( {A}_{n} - \lambda I\right) }^{-1} - {\left( {A}_{0} - \lambda I\right) }^{-1}}\right\rbrack b \]\n\n\[ = {\left( {A}_{n} - \lambda I\right) }^{-1}\left\lbrack {I - \left( {{A}_{n} - {\lambda I}}\right) {\left( {A}_{0} - \lambda I\right) }^{-1}}\right\rbrack b \]\n\n\[ = {\left( {A}_{n} - \lambda I\right) }^{-1}\left\lbrack {I - \left\{ {{A}_{0} - {\lambda I} - \left( {{A}_{0} - {A}_{n}}\right) }\right\} {\left( {A}_{0} - \lambda I\right) }^{-1}}\right\rbrack b \]\n\n\[ = {\left( {A}_{n} - \lambda I\right) }^{-1}\left( {{A}_{0} - {A}_{n}}\right) {\left( {A}_{0} - \lambda I\right) }^{-1}b \]\n\n\[ = {\left( {A}_{n} - \lambda I\right) }^{-1}\left( {{A}_{0} - {A}_{n}}\right) {x}_{0} \]
Yes
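As a sanity check on the error bound (a hypothetical one-dimensional instance, added here), take \( X = \mathbb{R} \), \( A_n x = a_n x \) with \( a_n \to a_0 \), and \( \lambda \neq 0 \), \( \lambda \neq a_0 \):

```latex
% One-dimensional instance: (A_n - \lambda I)x_n = b gives x_n = b/(a_n - \lambda), so
\[
x_0 - x_n = \frac{b}{a_0 - \lambda} - \frac{b}{a_n - \lambda}
          = \frac{(a_n - a_0)\, b}{(a_n - \lambda)(a_0 - \lambda)}
          = -(a_n - \lambda)^{-1}(a_0 - a_n)\, x_0 .
\]
% Taking absolute values, |x_0 - x_n| = |a_n - \lambda|^{-1}\,|a_0 - a_n|\,|x_0|:
% the inequality of the theorem holds here with equality.
```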
Theorem 1. A point \( y \) is in the closure of a set \( S \) in a topological space if and only if some net in \( S \) converges to \( y \) .
Proof. If the net \( \left\lbrack {x}_{\alpha }\right\rbrack \) is in \( S \) and converges to \( y \), then to each neighborhood \( U \) of \( y \) there corresponds an index \( \beta \) such that \( {x}_{\alpha } \in U \) whenever \( \beta \prec \alpha \) . In particular, \( {x}_{\beta } \in U \) . Thus each neighborhood of \( y \) contains a point of \( S \), and \( y \) is in the closure of \( S \) . Conversely, suppose that \( y \) is in the closure of \( S \) . Let \( D \) be the family of all neighborhoods of \( y \), ordered by inclusion: \( \alpha \prec \beta \) means \( \beta \subset \alpha \) . Since \( y \) is in the closure of \( S \), there exists for each \( \alpha \in D \) a point \( {x}_{\alpha } \in \alpha \cap S \) . The net \( \left\lbrack {x}_{\alpha }\right\rbrack \) thus defined (with the aid of the Axiom of Choice) is in \( S \) and converges to \( y \) .
Yes
Lemma 1 In a linear topological space, a set \( V \) is a neighborhood of a point \( z \) if and only if \( - z + V \) is a neighborhood of 0 .
Proof. Hold \( z \) fixed, and define \( f\left( x\right) = x + z \) . This mapping sends 0 to \( z \) . Let \( V \) be a neighborhood of \( z \) . Since \( f \) is continuous, \( {f}^{-1}\left( V\right) \) is a neighborhood of 0. Observe, now, that \( {f}^{-1}\left( V\right) = \{ x : f\left( x\right) \in V\} = \{ x : x + z \in V\} = - z + V \) . Conversely, assume that \( - z + V \) is a neighborhood of 0 . We have \( {f}^{-1}\left( x\right) = x - z \) , and \( {f}^{-1} \) is also continuous. It maps \( z \) to 0 . Hence \( {\left( {f}^{-1}\right) }^{-1} \) carries \( - z + V \) to a neighborhood of \( z \) . But\n\n\[{\left( {f}^{-1}\right) }^{-1}\left( {-z + V}\right) = \left\{ {x : {f}^{-1}\left( x\right) \in - z + V}\right\} = \{ x : x - z \in - z + V\} = V\]
Yes
Theorem 1. A linear topological space is a Hausdorff space if and only if 0 is the only element common to all neighborhoods of 0.
Proof. The Hausdorff property is that for any pair of points \( x \neq y \) there must exist neighborhoods \( U \) and \( V \) of \( x \) and \( y \) respectively such that the pair \( U, V \) is disjoint. Suppose that 0 is the only element common to all neighborhoods of 0, and let \( x \neq y \) . Since \( x - y \neq 0 \), we may select a neighborhood \( W \) of 0 such that \( x - y \notin W \) . Then (using the continuity of subtraction) select another neighborhood \( {W}^{\prime } \) of zero such that \( {W}^{\prime } - {W}^{\prime } \subset W \) . Then \( x + {W}^{\prime } \) is disjoint from \( y + {W}^{\prime } \), for if \( z \) were a point in their intersection, we could write \( z = x + {w}_{1} = y + {w}_{2} \), with \( {w}_{i} \in {W}^{\prime } \) . Then \( x - y = {w}_{2} - {w}_{1} \in {W}^{\prime } - {W}^{\prime } \subset W \), contradicting the choice of \( W \) . The other half of the proof is even easier: just separate any nonzero point from 0 by selecting a neighborhood of zero that excludes the nonzero point.
Yes
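An added illustration of the criterion: a space topologized by a single seminorm \( p \) that vanishes at some nonzero point fails to be Hausdorff.

```latex
% If p(x_0) = 0 for some x_0 \neq 0, then
\[
x_0 \in \{\, x : p(x) < \varepsilon \,\} \quad \text{for every } \varepsilon > 0,
\]
% so x_0 lies in every neighborhood of 0.  By the theorem, such a space is
% Hausdorff precisely when p(x) = 0 forces x = 0, i.e. when p is a norm.
```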
Theorem 2. Let \( X \) be a linear topological space, and \( U \) a neighborhood of 0 . Then the polar set\n\n\[ {U}^{ \circ } = \left\{ {\phi \in {X}^{ * } : \left| {\phi \left( x\right) }\right| \leq 1\text{ for all }x \in U}\right\} \]\n\nis compact in the weak* topology of \( {X}^{ * } \) .
Proof. The linear space \( {X}^{ * } \) (whose elements are continuous linear functionals) is a subspace of \( {X}^{\prime } \) (whose elements are linear functionals). The weak* topology in \( {X}^{ * } \) is the relative topology in \( {X}^{ * } \) derived from the weak* topology on \( {X}^{\prime } \) . By the preceding theorem, we need only prove that \( {U}^{ \circ } \) is closed and bounded in the weak* sense in \( {X}^{\prime } \) . If we have a net \( \left\lbrack {\phi }_{\alpha }\right\rbrack \) in \( {U}^{ \circ } \) and \( {\phi }_{\alpha } \rightharpoonup \phi \), then \( {\phi }_{\alpha }\left( x\right) \rightarrow \phi \left( x\right) \) for all \( x \in U \) . Consequently, \( \left| {\phi \left( x\right) }\right| \leq 1 \) for all \( x \in U \), and \( \phi \in {U}^{ \circ } \) . Thus \( {U}^{ \circ } \) is closed in the weak* topology of \( {X}^{\prime } \) . If \( W \) is any neighborhood of 0 in \( {X}^{\prime } \), then \( W \) contains a set of the form\n\n\[ V\left( {\varepsilon ;{x}_{1},\ldots ,{x}_{n}}\right) = \left\{ {\phi \in {X}^{\prime } : \left| {\phi \left( {x}_{i}\right) }\right| < \varepsilon ,\;1 \leq i \leq n}\right\} \]\n\nSelect \( r > 0 \) so that \( r{x}_{i} \in U \) for each \( i \) . Then for \( \phi \in {U}^{ \circ } \) we have \( \left| {\phi \left( {r{x}_{i}}\right) }\right| \leq 1 \), whence \( \left| {\phi \left( {x}_{i}\right) }\right| \leq 1/r \) . Consequently, \( \left| {\left( {{r\varepsilon }/2}\right) \phi \left( {x}_{i}\right) }\right| \leq \varepsilon /2 < \varepsilon \), so that \( \left( {{r\varepsilon }/2}\right) \phi \in V\left( {\varepsilon ;{x}_{1},\ldots ,{x}_{n}}\right) \subset W \) and \( \phi \in \left( {2/{r\varepsilon }}\right) W \) . Thus \( {U}^{ \circ } \) is bounded.
Yes
Theorem 3. (The Banach-Alaoglu Theorem) The unit ball in the conjugate space of a normed linear space is compact in the weak* topology.
Proof. In the preceding theorem, take \( U \) to be the unit ball of \( X \) . The polar of \( U \) will then be the unit ball in \( {X}^{ * } \) .
No
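A standard concrete instance of the Banach-Alaoglu Theorem (added as an illustration): take \( X = {c}_{0} \), so that \( {X}^{*} = {\ell}^{1} \).

```latex
% The unit ball B of \ell^1 is weak* compact.  The unit vectors e_n \in B
% satisfy, for every x = (x_1, x_2, \ldots) \in c_0,
\[
e_n(x) = x_n \longrightarrow 0,
\qquad\text{so}\qquad e_n \overset{*}{\rightharpoonup} 0,
\]
% although \|e_n - e_m\|_1 = 2 for n \neq m.  Thus B is compact in the weak*
% topology but not in the norm topology.
```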
Theorem 4. For any locally convex linear topological space there is a family of continuous seminorms that induces the topology.
Proof. Let \( P \) be the family of all continuous seminorms defined on the given space. Let \( U \) be a neighborhood of 0 in the original topology. First we must prove that \( U \) contains one of the sets \( V\left( {\varepsilon ;{p}_{1},\ldots ,{p}_{n}}\right) \) . Since the space is locally convex, \( U \) contains a convex neighborhood \( {U}_{1} \) of 0 . By the continuity of scalar multiplication, there exists a convex neighborhood \( {U}_{2} \) of 0 and a number \( \delta > 0 \) such that \( {cx} \in {U}_{1} \) whenever \( x \in {U}_{2} \) and \( \left| c\right| < \delta \) . The set \( {U}_{3} = \bigcup \left\{ {\lambda {U}_{2} : \left| \lambda \right| < \delta }\right\} \) is a convex neighborhood of 0 contained in \( U \) . Its Minkowski functional \( p \) is continuous because for any \( r > 0 \) and any \( x \in r{U}_{3} \), we have \( p\left( x\right) \leq r \) . Thus, \( V\left( {\frac{1}{2};p}\right) \subset {U}_{3} \subset U \) . (Minkowski functionals were defined in the proof of Theorem 1 in Section 7.3, page 343.) Now let \( V \) be any set of the form \( V\left( {\varepsilon ;{p}_{1},\ldots ,{p}_{n}}\right) \) with each \( {p}_{i} \in P \) . Since each \( {p}_{i} \) is continuous, \( V \) is an open set containing 0 in the original topology, and hence a neighborhood of 0 . Thus the family \( P \) induces the original topology.
Yes
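A degenerate illustration (added here): in a normed space the whole family \( P \) may be replaced by the single seminorm \( p(x) = \|x\| \).

```latex
% The basic neighborhoods V(\varepsilon; p) are then just the open balls:
\[
V(\varepsilon; p) = \{\, x : \|x\| < \varepsilon \,\},
\]
% and the topology they induce is the usual norm topology.
```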
Lemma 2. In a locally convex linear topological space, the convex hull of a totally bounded set is totally bounded.
Proof. Let \( Y \) be such a set and let \( U \) be any neighborhood of 0 . Select a convex neighborhood \( V \) of 0 such that \( V + V \subset U \) . Since \( Y \) is totally bounded, there is a finite set \( F \) such that \( Y \subset F + V \) . Let \( Z = \operatorname{co}\left( F\right) \) . The set \( Z \) is compact, being the image of a compact set under a continuous map of the form \( \left( {{\theta }_{1},\ldots ,{\theta }_{n}}\right) \mapsto \mathop{\sum }\limits_{{i = 1}}^{n}{\theta }_{i}{z}_{i} \), where \( \left\{ {{z}_{1},\ldots ,{z}_{n}}\right\} = F \) . It follows that \( Z \) is totally bounded, and that \( Z \subset {F}^{\prime } + V \) for another finite set \( {F}^{\prime } \) . By the convexity of \( V \) we have\n\n\[ \operatorname{co}\left( Y\right) \subset \operatorname{co}\left( {F + V}\right) = \operatorname{co}\left( F\right) + V = Z + V \subset {F}^{\prime } + V + V \subset {F}^{\prime } + U \]
Yes
Theorem 7. Mazur's Theorem. The closed convex hull of a totally bounded set in a complete locally convex linear topological space is compact.
Proof. Let \( K \) be such a set in such a space. By the preceding lemma, \( \operatorname{co}\left( K\right) \) is totally bounded. Hence \( \overline{\mathrm{{co}}}\left( K\right) \) is closed and totally bounded. Since the ambient space is complete, \( \overline{\mathrm{{co}}}\left( K\right) \) is complete and totally bounded. Hence, by Theorem 6, it is compact.
Yes
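An added illustration of Mazur's Theorem in a Banach space \( X \) (which is complete and locally convex): a convergent sequence together with its limit is compact, hence totally bounded.

```latex
% If x_n \to x in X, then K = \{x\} \cup \{x_1, x_2, \ldots\} is compact,
% hence totally bounded, and Mazur's Theorem gives that
\[
\overline{\operatorname{co}}\,(K) \ \text{is compact,}
\]
% although K itself is usually not convex.
```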
Theorem 3. Let \( f : \mathbb{N} \times \mathbb{N} \rightarrow \mathbb{R} \) . Assume that \( \mathop{\lim }\limits_{{n \rightarrow \infty }}f\left( {n, m}\right) \) exists for each \( m \) and that \( \mathop{\lim }\limits_{{m \rightarrow \infty }}f\left( {n, m}\right) \) exists for each \( n \), uniformly in \( n \) . Then the two limits \( \mathop{\lim }\limits_{n}\mathop{\lim }\limits_{m}f\left( {n, m}\right) \) and \( \mathop{\lim }\limits_{m}\mathop{\lim }\limits_{n}f\left( {n, m}\right) \) exist and are equal.
Proof. Define \( g\left( m\right) = \mathop{\lim }\limits_{n}f\left( {n, m}\right) \) and \( h\left( n\right) = \mathop{\lim }\limits_{m}f\left( {n, m}\right) \) . Let \( \varepsilon > 0 \) . Find a positive integer \( M \) such that\n\n\[ m \geq M \Rightarrow \left| {f\left( {n, m}\right) - h\left( n\right) }\right| < \varepsilon \;\text{ for all }n \]\n\nNotice that the uniformity hypothesis is being used at this step. A consequence is that \( \left| {f\left( {n, M}\right) - h\left( n\right) }\right| < \varepsilon \), and by the triangle inequality \( \left| {f\left( {n, m}\right) - f\left( {n, M}\right) }\right| < {2\varepsilon } \) when \( m \geq M \) . Find \( N \) such that\n\n\[ n \geq N \Rightarrow \left| {f\left( {n, M}\right) - g\left( M\right) }\right| < \varepsilon \]\n\nNo uniformity of the limit in \( m \) is needed here, as \( M \) has been fixed. Now we have \( \left| {f\left( {N, M}\right) - g\left( M\right) }\right| < \varepsilon \) and \( \left| {f\left( {N, M}\right) - f\left( {n, M}\right) }\right| < {2\varepsilon } \) when \( n \geq N \) . We next conclude that \( \left| {f\left( {n, m}\right) - f\left( {N, M}\right) }\right| < {4\varepsilon } \) when \( n \geq N \) and \( m \geq M \) . This establishes that the doubly indexed sequence \( f\left( {n, m}\right) \) has the Cauchy property. By the completeness of \( \mathbb{R} \), the limit \( \mathop{\lim }\limits_{{\left( {n, m}\right) \rightarrow \left( {\infty ,\infty }\right) }}f\left( {n, m}\right) \) exists. Call it \( L \) . Then,\n\nby letting \( \left( {n, m}\right) \) go to its limit, we conclude that \( \left| {L - f\left( {N, M}\right) }\right| \leq {4\varepsilon } \) . Also, \( \left| {L - f\left( {n, m}\right) }\right| < {8\varepsilon } \) if \( n \geq N \) and \( m \geq M \) . Letting \( n \) go to its limit, we get \( \left| {L - g\left( m\right) }\right| \leq {8\varepsilon } \) if \( m \geq M \) . 
By letting \( m \) go to its limit, we get \( \left| {L - h\left( n\right) }\right| \leq {8\varepsilon } \) if \( n \geq N \) . Hence \( h\left( n\right) \rightarrow L \) and \( g\left( m\right) \rightarrow L \) .
Yes
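The uniformity hypothesis in Theorem 3 cannot be dropped, as the following counterexample (added here) shows:

```latex
% Let f(n, m) = m/(n + m).  Both iterated limits exist, but they disagree:
\[
\lim_{m\to\infty}\lim_{n\to\infty} \frac{m}{n+m} = \lim_{m\to\infty} 0 = 0,
\qquad
\lim_{n\to\infty}\lim_{m\to\infty} \frac{m}{n+m} = \lim_{n\to\infty} 1 = 1 .
\]
% The inner limit \lim_m f(n,m) = 1 exists for each n, but the convergence
% is not uniform in n, so the theorem does not apply.
```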
Theorem 4. The Kharshiladze-Lozinski Theorem. For each \( n = 0,1,2,\ldots \) let \( {P}_{n} \) be a projection of the space \( C\left\lbrack {-1,1}\right\rbrack \) onto the subspace \( {\Pi }_{n} \) of polynomials of degree at most \( n \) . Then \( \begin{Vmatrix}{P}_{n}\end{Vmatrix} \rightarrow \infty \) .
It is readily seen that the equation\n\n\[ \n{P}_{n}\left( f\right) = \mathop{\sum }\limits_{{k = 0}}^{n}{a}_{k}\left( f\right) {p}_{k} \n\]\n\nwhere the coefficients \( {a}_{k} \) are as above, defines a projection of the type appearing in Theorem 4. That is, \( {P}_{n} \) is a continuous linear idempotent map from \( C\left\lbrack {-1,1}\right\rbrack \) onto \( {\Pi }_{n} \) . Hence, by Theorem 4, \( \begin{Vmatrix}{P}_{n}\end{Vmatrix} \rightarrow \infty \) . By the Banach-Steinhaus Theorem (Chapter 1, Section 7, page 41) the set of \( f \) in \( C\left\lbrack {-1,1}\right\rbrack \) for which the series above converges uniformly to \( f \) is of the first category (relatively small) in \( C\left\lbrack {-1,1}\right\rbrack \) .
Yes
Theorem 7. Under the hypotheses given above, Equation (3) is true for the point \( x = {x}_{0} \) .
Proof. By Hypothesis (A) we are allowed to define\n\n\[ f\left( x\right) = {\int }_{T}g\left( {x, t}\right) {d\mu }\left( t\right) \]\n\nThe derivative \( {f}^{\prime }\left( {x}_{0}\right) \) exists if and only if for each sequence \( \left\lbrack {x}_{n}\right\rbrack \) of points distinct from \( {x}_{0} \) converging to \( {x}_{0} \) we have\n\n\[ {f}^{\prime }\left( {x}_{0}\right) = \mathop{\lim }\limits_{{n \rightarrow \infty }}\frac{f\left( {x}_{n}\right) - f\left( {x}_{0}\right) }{{x}_{n} - {x}_{0}} = \mathop{\lim }\limits_{{n \rightarrow \infty }}{\int }_{T}\frac{g\left( {{x}_{n}, t}\right) - g\left( {{x}_{0}, t}\right) }{{x}_{n} - {x}_{0}}{d\mu }\left( t\right) \]\n\nBy Hypothesis (B), the integrands in the preceding equation are bounded in magnitude by the single \( {L}^{1} \) -function \( G \) . The Lebesgue Dominated Convergence Theorem (see Chapter 8, page 406) allows an interchange of limit and integral. Hence\n\n\[ {f}^{\prime }\left( {x}_{0}\right) = {\int }_{T}\mathop{\lim }\limits_{{n \rightarrow \infty }}\frac{g\left( {{x}_{n}, t}\right) - g\left( {{x}_{0}, t}\right) }{{x}_{n} - {x}_{0}}{d\mu }\left( t\right) = {\int }_{T}\frac{\partial g}{\partial x}\left( {{x}_{0}, t}\right) {d\mu }\left( t\right) \]\n\nThis proof is given by Bartle [Bart1]. A related theorem can be found in McShane’s book [McS].
Yes
Theorem 8. Let \( \left( {T,\mathcal{A},\mu }\right) \) be a measure space such that \( \mu \left( T\right) < \infty \). Let \( g : \left( {a, b}\right) \times T \rightarrow \mathbb{R} \). Assume that for each \( n,\left( {{\partial }^{n}g/\partial {x}^{n}}\right) \left( {x, t}\right) \) exists, is measurable, and is bounded on \( \left( {a, b}\right) \times T \). Then \[ \frac{{d}^{n}}{d{x}^{n}}{\int }_{T}g\left( {x, t}\right) {d\mu }\left( t\right) = {\int }_{T}\frac{{\partial }^{n}g}{\partial {x}^{n}}\left( {x, t}\right) {d\mu }\left( t\right) \;\left( {n = 1,2,\ldots }\right) \]
Proof. Since \( \mu \left( T\right) < \infty \), any bounded measurable function on \( T \) is integrable. To see that Hypothesis (B) of the preceding theorem is true, use the mean value theorem: \[ \left| \frac{g\left( {x, t}\right) - g\left( {{x}_{0}, t}\right) }{x - {x}_{0}}\right| = \left| {\frac{\partial g}{\partial x}\left( {\xi, t}\right) }\right| \leq M \] where \( M \) is a bound for \( \left| {\partial g/\partial x}\right| \) on \( \left( {a, b}\right) \times T \). By the preceding theorem, Equation (4) is valid for \( n = 1 \). The same argument can be repeated to give an inductive proof for all \( n \).
Yes
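The conclusion of Theorem 8 can also be checked numerically. The sketch below (an added illustration under assumed data, not part of the text) takes \( g(x,t) = e^{-xt} \) on \( T = [0,1] \) with Lebesgue measure and compares the two sides of the displayed equation for \( n = 1 \):

```python
# Numerical check of the interchange in Theorem 8 (illustration only):
# g(x, t) = exp(-x t) on T = [0, 1] with Lebesgue measure.  Every partial
# derivative (d^n g / dx^n)(x, t) = (-t)^n exp(-x t) is bounded on bounded
# x-intervals, so the hypotheses hold.
import math

def integrate(h, n=10_000):
    # Composite midpoint rule for the integral of h over [0, 1].
    dt = 1.0 / n
    return sum(h((k + 0.5) * dt) for k in range(n)) * dt

def f(x):
    # f(x) = integral of exp(-x t) dt over [0, 1]
    return integrate(lambda t: math.exp(-x * t))

x0, eps = 0.7, 1e-5

# Left-hand side: differentiate the integral, via a central difference.
lhs = (f(x0 + eps) - f(x0 - eps)) / (2 * eps)

# Right-hand side: integrate the partial derivative -t exp(-x t).
rhs = integrate(lambda t: -t * math.exp(-x0 * t))

assert abs(lhs - rhs) < 1e-6  # the two sides agree to quadrature accuracy
```

Here the exact value of both sides is \( (e^{-x_0}(1+x_0)-1)/x_0^2 \), so the agreement can be verified in closed form as well.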
Theorem 1. Let \( X \) be an arbitrary set, and \( \mathcal{C} \) a collection of subsets of \( X \), countably many of which cover \( X \) . Let \( \beta \) be a function from \( \mathcal{C} \) to \( {\mathbb{R}}^{ * } \) such that\n\n\[ \inf \{ \beta \left( C\right) : C \in \mathcal{C}\} = 0 \]\n\nThen the equation\n\n\[ \mu \left( A\right) = \inf \left\{ {\mathop{\sum }\limits_{{i = 1}}^{\infty }\beta \left( {C}_{i}\right) : A \subset \mathop{\bigcup }\limits_{{i = 1}}^{\infty }{C}_{i},{C}_{i} \in \mathcal{C}}\right\} \]\n\ndefines an outer measure on \( X \) .
Proof. Assume all the hypotheses. There are now three postulates for an outer measure to be verified. Our assumption about \( \beta \) implies that \( \beta \left( C\right) \geq 0 \) for all \( C \in \mathcal{C} \) . Therefore, \( \mu \left( A\right) \geq 0 \) for all \( A \) . Since \( \varnothing \subset C \) for all \( C \in \mathcal{C},\mu \left( \varnothing \right) \leq \beta \left( C\right) \) for all \( C \) . Taking an infimum yields \( \mu \left( \varnothing \right) \leq 0 \), and since \( \mu \left( \varnothing \right) \geq 0 \), we conclude that \( \mu \left( \varnothing \right) = 0 \).\n\nIf \( A \subset B \) and \( B \subset \mathop{\bigcup }\limits_{{i = 1}}^{\infty }{C}_{i} \), then \( A \subset \mathop{\bigcup }\limits_{{i = 1}}^{\infty }{C}_{i} \) and \( \mu \left( A\right) \leq \mathop{\sum }\limits_{{i = 1}}^{\infty }\beta \left( {C}_{i}\right) \) . Taking an infimum over all countable covers of \( B \), we have \( \mu \left( A\right) \leq \mu \left( B\right) \).\n\nLet \( {A}_{i} \subset X\left( {i \in \mathbb{N}}\right) \) and let \( \varepsilon > 0 \) . By the definition of \( \mu \left( {A}_{i}\right) \) there exist \( {C}_{ij} \in \mathcal{C} \) such that \( {A}_{i} \subset \mathop{\bigcup }\limits_{{j = 1}}^{\infty }{C}_{ij} \) and \( \mathop{\sum }\limits_{{j = 1}}^{\infty }\beta \left( {C}_{ij}\right) \leq \mu \left( {A}_{i}\right) + \varepsilon /{2}^{i} \) . 
Since \( \mathop{\bigcup }\limits_{{i = 1}}^{\infty }{A}_{i} \subset \mathop{\bigcup }\limits_{{i, j = 1}}^{\infty }{C}_{ij} \), we obtain\n\n\[ \mu \left( {\mathop{\bigcup }\limits_{{i = 1}}^{\infty }{A}_{i}}\right) \leq \mathop{\sum }\limits_{{i, j}}\beta \left( {C}_{ij}\right) \leq \mathop{\sum }\limits_{{i = 1}}^{\infty }\left\lbrack {\mu \left( {A}_{i}\right) + \varepsilon /{2}^{i}}\right\rbrack = \varepsilon + \mathop{\sum }\limits_{{i = 1}}^{\infty }\mu \left( {A}_{i}\right) \]\n\nSince this is true for each positive \( \varepsilon \), we obtain \( \mu \left( {\mathop{\bigcup }\limits_{{i = 1}}^{\infty }{A}_{i}}\right) \leq \mathop{\sum }\limits_{{i = 1}}^{\infty }\mu \left( {A}_{i}\right) \).
Yes
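The standard instance of the theorem above (added as an illustration) is Lebesgue outer measure on the line:

```latex
% Take X = \mathbb{R}, \mathcal{C} the collection of open intervals, and
% \beta\bigl((a,b)\bigr) = b - a.  Countably many intervals (-k, k) cover
% \mathbb{R}, and \inf \beta = 0, so the theorem yields the outer measure
\[
\lambda^*(A) = \inf \Bigl\{ \sum_{i=1}^{\infty} (b_i - a_i) :
  A \subset \bigcup_{i=1}^{\infty} (a_i, b_i) \Bigr\},
\]
% which is Lebesgue outer measure.
```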