| Q | A | Result |
|---|---|---|
Theorem 3.2 \( {s}^{-1/2}\vartheta \left( {1/s}\right) = \vartheta \left( s\right) \) whenever \( s > 0 \) .
|
The proof of this identity consists of a simple application of the Poisson summation formula to the pair\n\n\[ f\left( x\right) = {e}^{-{\pi s}{x}^{2}}\;\text{ and }\;\widehat{f}\left( \xi \right) = {s}^{-1/2}{e}^{-\pi {\xi }^{2}/s}. \]
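The functional equation can be spot-checked numerically (an added illustration, not part of the proof) by truncating the series \( \vartheta \left( s\right) = \mathop{\sum }\nolimits_{{n \in \mathbb{Z}}}{e}^{-\pi {n}^{2}s} \) and comparing both sides at an arbitrary \( s > 0 \):

```python
import math

def theta(s, terms=50):
    # theta(s) = sum over n in Z of exp(-pi * n^2 * s), truncated symmetrically;
    # the terms decay like exp(-pi * s * n^2), so 50 terms is far more than enough
    return sum(math.exp(-math.pi * n * n * s) for n in range(-terms, terms + 1))

s = 0.37                              # arbitrary positive value
lhs = s ** (-0.5) * theta(1.0 / s)    # s^{-1/2} * theta(1/s)
rhs = theta(s)
```

The two quantities agree to machine precision.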
|
Yes
|
Theorem 3.3 The heat kernel on the circle is the periodization of the heat kernel on the real line:
|
\[ {H}_{t}\left( x\right) = \mathop{\sum }\limits_{{n = - \infty }}^{\infty }{\mathcal{H}}_{t}\left( {x + n}\right) \]
|
Yes
|
Corollary 3.4 The kernel \( {H}_{t}\left( x\right) \) is a good kernel as \( t \rightarrow 0 \) .
|
Proof. We already observed that \( {\int }_{\left| x\right| \leq 1/2}{H}_{t}\left( x\right) {dx} = 1 \) . Now note that \( {H}_{t} \geq 0 \), which is immediate from the above formula since \( {\mathcal{H}}_{t} \geq 0 \) . Finally, we claim that when \( \left| x\right| \leq 1/2 \) ,\n\n\[ \n{H}_{t}\left( x\right) = {\mathcal{H}}_{t}\left( x\right) + {\mathcal{E}}_{t}\left( x\right) \n\]\n\nwhere the error satisfies \( \left| {{\mathcal{E}}_{t}\left( x\right) }\right| \leq {c}_{1}{e}^{-{c}_{2}/t} \) with \( {c}_{1},{c}_{2} > 0 \) and \( 0 < t \leq 1 \) . To see this, note again that the formula in the theorem gives\n\n\[ \n{H}_{t}\left( x\right) = {\mathcal{H}}_{t}\left( x\right) + \mathop{\sum }\limits_{{\left| n\right| \geq 1}}{\mathcal{H}}_{t}\left( {x + n}\right) \n\]\n\ntherefore, since \( \left| x\right| \leq 1/2 \) ,\n\n\[ \n{\mathcal{E}}_{t}\left( x\right) = \frac{1}{\sqrt{4\pi t}}\mathop{\sum }\limits_{{\left| n\right| \geq 1}}{e}^{-{\left( x + n\right) }^{2}/{4t}} \leq C{t}^{-1/2}\mathop{\sum }\limits_{{n \geq 1}}{e}^{-c{n}^{2}/t}. \n\]\n\nNote that \( {n}^{2}/t \geq {n}^{2} \) and \( {n}^{2}/t \geq 1/t \) whenever \( 0 < t \leq 1 \), so \( {e}^{-c{n}^{2}/t} \leq \) \( {e}^{-\frac{c}{2}{n}^{2}}{e}^{-\frac{c}{2}\frac{1}{t}} \) . Hence\n\n\[ \n\left| {{\mathcal{E}}_{t}\left( x\right) }\right| \leq C{t}^{-1/2}{e}^{-\frac{c}{2}\frac{1}{t}}\mathop{\sum }\limits_{{n \geq 1}}{e}^{-\frac{c}{2}{n}^{2}} \leq {c}_{1}{e}^{-{c}_{2}/t}. \n\]\n\nThe proof of the claim is complete, and as a result \( {\int }_{\left| x\right| \leq 1/2}\left| {{\mathcal{E}}_{t}\left( x\right) }\right| {dx} \rightarrow 0 \) as \( t \rightarrow 0 \) . It is now clear that \( {H}_{t} \) satisfies\n\n\[ \n{\int }_{\eta < \left| x\right| \leq 1/2}\left| {{H}_{t}\left( x\right) }\right| {dx} \rightarrow 0\;\text{ as }t \rightarrow 0, \n\]\n\nbecause \( {\mathcal{H}}_{t} \) does.
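The exponential smallness of the error term can be made concrete numerically (an added sketch, not part of the proof), summing the tail \( \mathop{\sum }\nolimits_{{\left| n\right| \geq 1}}{\mathcal{H}}_{t}\left( {x + n}\right) \) directly for a small \( t \):

```python
import math

def heat_kernel(x, t):
    # heat kernel on the real line: (4 pi t)^(-1/2) * exp(-x^2 / (4t))
    return math.exp(-x * x / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

t = 0.002
x = 0.25   # a point with |x| <= 1/2
main = heat_kernel(x, t)
# the error term E_t(x): the tail of the periodization over |n| >= 1
err = sum(heat_kernel(x + n, t) for n in range(-30, 31) if n != 0)
```

With the proof's bound in mind, `err` is already far below \( {e}^{-1/\left( {16t}\right) } \), while the main term \( {\mathcal{H}}_{t}\left( x\right) \) is of a visible size.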
|
Yes
|
Theorem 3.5 \( {P}_{r}\left( {2\pi x}\right) = \mathop{\sum }\limits_{{n \in \mathbb{Z}}}{\mathcal{P}}_{y}\left( {x + n}\right) \) where \( r = {e}^{-{2\pi y}} \) .
|
This is again an immediate corollary of the Poisson summation formula applied to \( f\left( x\right) = {\mathcal{P}}_{y}\left( x\right) \) and \( \widehat{f}\left( \xi \right) = {e}^{-{2\pi }\left| \xi \right| y} \) . Of course, here we use the Poisson summation formula under the assumptions that \( f \) and \( \widehat{f} \) are of moderate decrease.
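A numerical sketch of the periodization identity (an added illustration), using the standard closed forms \( {\mathcal{P}}_{y}\left( x\right) = y/\left( {\pi \left( {{x}^{2} + {y}^{2}}\right) }\right) \) for the Poisson kernel on the upper half-plane and \( {P}_{r}\left( \theta \right) = \left( {1 - {r}^{2}}\right) /\left( {1 - {2r}\cos \theta + {r}^{2}}\right) \) for the Poisson kernel on the circle:

```python
import math

y = 0.2
r = math.exp(-2.0 * math.pi * y)
x = 0.3

# periodization of the half-plane Poisson kernel P_y(x) = y / (pi (x^2 + y^2));
# the terms decay like 1/n^2, hence the wide (but fast) truncation range
lhs = sum(y / (math.pi * ((x + n) ** 2 + y * y)) for n in range(-100000, 100001))

# circle Poisson kernel P_r(theta) = (1 - r^2) / (1 - 2 r cos(theta) + r^2)
theta = 2.0 * math.pi * x
rhs = (1.0 - r * r) / (1.0 - 2.0 * r * math.cos(theta) + r * r)
```

The truncation error of the slowly decaying sum is of order \( {10}^{-6} \), which sets the achievable agreement.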
|
Yes
|
Theorem 4.1 Suppose \( \psi \) is a function in \( \mathcal{S}\left( \mathbb{R}\right) \) which satisfies the normalizing condition \( {\int }_{-\infty }^{\infty }{\left| \psi \left( x\right) \right| }^{2}{dx} = 1 \) . Then\n\n\[ \left( {{\int }_{-\infty }^{\infty }{x}^{2}{\left| \psi \left( x\right) \right| }^{2}{dx}}\right) \left( {{\int }_{-\infty }^{\infty }{\xi }^{2}{\left| \widehat{\psi }\left( \xi \right) \right| }^{2}{d\xi }}\right) \geq \frac{1}{{16}{\pi }^{2}} \]\n\nand equality holds if and only if \( \psi \left( x\right) = A{e}^{-B{x}^{2}} \) where \( B > 0 \) and \( {\left| A\right| }^{2} = \) \( \sqrt{{2B}/\pi } \) .
|
Proof. The second inequality actually follows from the first by replacing \( \psi \left( x\right) \) by \( {e}^{-{2\pi ix}{\xi }_{0}}\psi \left( {x + {x}_{0}}\right) \) and changing variables. To prove the first inequality, we argue as follows. Beginning with our normalizing assumption \( \int {\left| \psi \right| }^{2} = 1 \), and recalling that \( \psi \) and \( {\psi }^{\prime } \) are rapidly decreasing, an integration by parts gives\n\n\[ 1 = {\int }_{-\infty }^{\infty }{\left| \psi \left( x\right) \right| }^{2}{dx} \]\n\n\[ = - {\int }_{-\infty }^{\infty }x\frac{d}{dx}{\left| \psi \left( x\right) \right| }^{2}{dx} \]\n\n\[ = - {\int }_{-\infty }^{\infty }\left( {x{\psi }^{\prime }\left( x\right) \overline{\psi \left( x\right) } + x\overline{{\psi }^{\prime }\left( x\right) }\psi \left( x\right) }\right) {dx}. \]\n\nThe last identity follows because \( {\left| \psi \right| }^{2} = \psi \bar{\psi } \). Therefore\n\n\[ 1 \leq 2{\int }_{-\infty }^{\infty }\left| x\right| \left| {\psi \left( x\right) }\right| \left| {{\psi }^{\prime }\left( x\right) }\right| {dx} \]\n\n\[ \leq 2{\left( {\int }_{-\infty }^{\infty }{x}^{2}{\left| \psi \left( x\right) \right| }^{2}dx\right) }^{1/2}{\left( {\int }_{-\infty }^{\infty }{\left| {\psi }^{\prime }\left( x\right) \right| }^{2}dx\right) }^{1/2}, \]\n\nwhere we have used the Cauchy-Schwarz inequality. The identity\n\n\[ {\int }_{-\infty }^{\infty }{\left| {\psi }^{\prime }\left( x\right) \right| }^{2}{dx} = 4{\pi }^{2}{\int }_{-\infty }^{\infty }{\xi }^{2}{\left| \widehat{\psi }\left( \xi \right) \right| }^{2}{d\xi } \]\n\nwhich holds because of the properties of the Fourier transform and the Plancherel formula, concludes the proof of the inequality in the theorem.\n\nIf equality holds, then we must also have equality where we applied the Cauchy-Schwarz inequality, and as a result we find that \( {\psi }^{\prime }\left( x\right) = {\beta x\psi }\left( x\right) \) for some constant \( \beta \). 
The solutions to this equation are \( \psi \left( x\right) = A{e}^{\beta {x}^{2}/2} \), where \( A \) is constant. Since we want \( \psi \) to be a Schwartz function, we must take \( \beta = - {2B} < 0 \), and since we impose the condition \( {\int }_{-\infty }^{\infty }{\left| \psi \left( x\right) \right| }^{2}{dx} = 1 \) we find that \( {\left| A\right| }^{2} = \sqrt{{2B}/\pi } \), as was to be shown.
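The equality case can be verified numerically for one particular value of \( B \) (an added sketch; \( B = 1.3 \) is an arbitrary choice), using the classical transform \( \widehat{\psi }\left( \xi \right) = A\sqrt{\pi /B}{e}^{-{\pi }^{2}{\xi }^{2}/B} \) of the Gaussian:

```python
import math

def midpoint(f, a, b, n=200000):
    # simple composite midpoint-rule quadrature
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

B = 1.3
A2 = math.sqrt(2.0 * B / math.pi)   # |A|^2 fixed by the normalization
# |psi(x)|^2 = A2 exp(-2 B x^2);  |psi_hat(xi)|^2 = A2 (pi/B) exp(-2 pi^2 xi^2 / B)
norm = midpoint(lambda x: A2 * math.exp(-2.0 * B * x * x), -20.0, 20.0)
I_x = midpoint(lambda x: x * x * A2 * math.exp(-2.0 * B * x * x), -20.0, 20.0)
I_xi = midpoint(lambda t: t * t * A2 * (math.pi / B)
                * math.exp(-2.0 * math.pi ** 2 * t * t / B), -20.0, 20.0)
```

The product \( {I}_{x}{I}_{\xi } \) lands on the extremal value \( 1/\left( {16{\pi }^{2}}\right) \).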
|
Yes
|
Proposition 2.1 Let \( f \in \mathcal{S}\left( {\mathbb{R}}^{d}\right) \). (i) \( f\left( {x + h}\right) \rightarrow \widehat{f}\left( \xi \right) {e}^{{2\pi i\xi } \cdot h} \) whenever \( h \in {\mathbb{R}}^{d} \). (ii) \( f\left( x\right) {e}^{-{2\pi ix} \cdot h} \rightarrow \widehat{f}\left( {\xi + h}\right) \) whenever \( h \in {\mathbb{R}}^{d} \). (iii) \( f\left( {\delta x}\right) \rightarrow {\delta }^{-d}\widehat{f}\left( {{\delta }^{-1}\xi }\right) \) whenever \( \delta > 0 \). (iv) \( {\left( \frac{\partial }{\partial x}\right) }^{\alpha }f\left( x\right) \rightarrow {\left( 2\pi i\xi \right) }^{\alpha }\widehat{f}\left( \xi \right) \). (v) \( {\left( -2\pi ix\right) }^{\alpha }f\left( x\right) \rightarrow {\left( \frac{\partial }{\partial \xi }\right) }^{\alpha }\widehat{f}\left( \xi \right) \). (vi) \( f\left( {Rx}\right) \rightarrow \widehat{f}\left( {R\xi }\right) \) whenever \( R \) is a rotation.
|
The first five properties are proved in the same way as in the one-dimensional case. To verify the last property, simply change variables \( y = {Rx} \) in the integral. Then, recall that \( \left| {\det \left( R\right) }\right| = 1 \), and \( {R}^{-1}y \cdot \xi = y \cdot {R\xi } \), because \( R \) is a rotation.
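Property (iii) can be illustrated numerically in dimension one (an added sketch, not part of the proof) with the Gaussian \( f\left( x\right) = {e}^{-\pi {x}^{2}} \), whose Fourier transform is itself:

```python
import cmath
import math

def fourier(f, xi, a=-30.0, b=30.0, n=400000):
    # midpoint-rule quadrature for f_hat(xi) = integral of f(x) exp(-2 pi i x xi) dx
    h = (b - a) / n
    total = 0j
    for k in range(n):
        x = a + (k + 0.5) * h
        total += f(x) * cmath.exp(-2j * math.pi * x * xi)
    return h * total

delta, xi = 1.7, 0.4   # arbitrary dilation factor and frequency
lhs = fourier(lambda x: math.exp(-math.pi * (delta * x) ** 2), xi)
# property (iii) with d = 1: delta^{-1} * f_hat(xi / delta), where f_hat(xi) = exp(-pi xi^2)
rhs = math.exp(-math.pi * (xi / delta) ** 2) / delta
```

The direct quadrature matches the predicted dilated transform.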
|
Yes
|
Corollary 2.3 The Fourier transform of a radial function is radial.
|
This follows at once from property (vi) in the last proposition. Indeed, the condition \( f\left( {Rx}\right) = f\left( x\right) \) for all \( R \) implies that \( \widehat{f}\left( {R\xi }\right) = \widehat{f}\left( \xi \right) \) for all \( R \), thus \( \widehat{f} \) is radial whenever \( f \) is.
|
Yes
|
Theorem 3.1 A solution of the Cauchy problem for the wave equation, with initial data \( u\left( {x,0}\right) = f\left( x\right) \) and \( \frac{\partial u}{\partial t}\left( {x,0}\right) = g\left( x\right) \) for \( f, g \in \mathcal{S}\left( {\mathbb{R}}^{d}\right) \), is\n\n\[ u\left( {x, t}\right) = {\int }_{{\mathbb{R}}^{d}}\left\lbrack {\widehat{f}\left( \xi \right) \cos \left( {{2\pi }\left| \xi \right| t}\right) + \widehat{g}\left( \xi \right) \frac{\sin \left( {{2\pi }\left| \xi \right| t}\right) }{{2\pi }\left| \xi \right| }}\right\rbrack {e}^{{2\pi ix} \cdot \xi }{d\xi }.\]
|
Proof. We first verify that \( u \) solves the wave equation. This is straightforward once we note that we can differentiate in \( x \) and \( t \) under the integral sign (because \( f \) and \( g \) are both Schwartz functions) and therefore \( u \) is at least \( {C}^{2} \). On the one hand we differentiate the exponential with respect to the \( x \) variables to get\n\n\[ \bigtriangleup u\left( {x, t}\right) = {\int }_{{\mathbb{R}}^{d}}\left\lbrack {\widehat{f}\left( \xi \right) \cos \left( {{2\pi }\left| \xi \right| t}\right) + \widehat{g}\left( \xi \right) \frac{\sin \left( {{2\pi }\left| \xi \right| t}\right) }{{2\pi }\left| \xi \right| }}\right\rbrack \left( {-4{\pi }^{2}{\left| \xi \right| }^{2}}\right) {e}^{{2\pi ix} \cdot \xi }{d\xi } \]\n\nwhile on the other hand we differentiate the terms in brackets with respect to \( t \) twice to get\n\n\[ \frac{{\partial }^{2}u}{\partial {t}^{2}}\left( {x, t}\right) = \]\n\n\[ {\int }_{{\mathbb{R}}^{d}}\left\lbrack {-4{\pi }^{2}{\left| \xi \right| }^{2}\widehat{f}\left( \xi \right) \cos \left( {{2\pi }\left| \xi \right| t}\right) - 4{\pi }^{2}{\left| \xi \right| }^{2}\widehat{g}\left( \xi \right) \frac{\sin \left( {{2\pi }\left| \xi \right| t}\right) }{{2\pi }\left| \xi \right| }}\right\rbrack {e}^{{2\pi ix} \cdot \xi }{d\xi }. \]\n\nThis shows that \( u \) solves equation (2). Setting \( t = 0 \) we get\n\n\[ u\left( {x,0}\right) = {\int }_{{\mathbb{R}}^{d}}\widehat{f}\left( \xi \right) {e}^{{2\pi ix} \cdot \xi }{d\xi } = f\left( x\right) \]\n\nby the Fourier inversion theorem. Finally, differentiating once with respect to \( t \), setting \( t = 0 \), and using the Fourier inversion shows that\n\n\[ \frac{\partial u}{\partial t}\left( {x,0}\right) = g\left( x\right) \]\n\nThus \( u \) also verifies the initial conditions, and the proof of the theorem is complete.
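In dimension one with \( g = 0 \), the formula reduces to d'Alembert's solution \( \frac{1}{2}\left( {f\left( {x + t}\right) + f\left( {x - t}\right) }\right) \), which gives a convenient numerical check (an added illustration with a Gaussian initial datum, not part of the proof):

```python
import cmath
import math

def f(x):
    # initial datum f(x) = exp(-pi x^2); its Fourier transform is exp(-pi xi^2)
    return math.exp(-math.pi * x * x)

def u(x, t, L=8.0, n=200000):
    # the solution formula with d = 1 and g = 0:
    # u(x, t) = integral of f_hat(xi) * cos(2 pi |xi| t) * exp(2 pi i x xi) d(xi)
    h = 2.0 * L / n
    total = 0j
    for k in range(n):
        xi = -L + (k + 0.5) * h
        total += math.exp(-math.pi * xi * xi) * math.cos(2.0 * math.pi * abs(xi) * t) \
                 * cmath.exp(2j * math.pi * x * xi)
    return (h * total).real

x0, t0 = 0.3, 0.7
numeric = u(x0, t0)
dalembert = 0.5 * (f(x0 + t0) + f(x0 - t0))
```

The same quadrature at \( t = 0 \) also recovers the initial condition \( u\left( {x,0}\right) = f\left( x\right) \).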
|
Yes
|
Theorem 3.2 If \( u \) is the solution of the wave equation given by formula (3), then \( E\left( t\right) \) is conserved, that is,\n\n\[ E\left( t\right) = E\left( 0\right) ,\;\text{ for all }t \in \mathbb{R}. \]
|
The proof requires the following lemma.\n\nLemma 3.3 Suppose \( a \) and \( b \) are complex numbers and \( \alpha \) is real. Then\n\n\[ {\left| a\cos \alpha + b\sin \alpha \right| }^{2} + {\left| -a\sin \alpha + b\cos \alpha \right| }^{2} = {\left| a\right| }^{2} + {\left| b\right| }^{2}. \]\n\nThis follows directly because \( {e}_{1} = \left( {\cos \alpha ,\sin \alpha }\right) \) and \( {e}_{2} = \left( {-\sin \alpha ,\cos \alpha }\right) \) are a pair of orthonormal vectors, hence with \( Z = \left( {a, b}\right) \in {\mathbb{C}}^{2} \), we have\n\n\[ {\left| Z\right| }^{2} = {\left| Z \cdot {e}_{1}\right| }^{2} + {\left| Z \cdot {e}_{2}\right| }^{2} \]\n\nwhere \( \cdot \) represents the inner product in \( {\mathbb{C}}^{2} \).\n\nNow by Plancherel's theorem,\n\n\[ {\int }_{{\mathbb{R}}^{d}}{\left| \frac{\partial u}{\partial t}\right| }^{2}{dx} = {\int }_{{\mathbb{R}}^{d}}{\left| -2\pi \left| \xi \right| \widehat{f}\left( \xi \right) \sin \left( {2\pi \left| \xi \right| t}\right) + \widehat{g}\left( \xi \right) \cos \left( {2\pi \left| \xi \right| t}\right) \right| }^{2}{d\xi }. \]\n\nSimilarly,\n\n\[ {\int }_{{\mathbb{R}}^{d}}\mathop{\sum }\limits_{{j = 1}}^{d}{\left| \frac{\partial u}{\partial {x}_{j}}\right| }^{2}{dx} = {\int }_{{\mathbb{R}}^{d}}{\left| 2\pi \left| \xi \right| \widehat{f}\left( \xi \right) \cos \left( {2\pi \left| \xi \right| t}\right) + \widehat{g}\left( \xi \right) \sin \left( {2\pi \left| \xi \right| t}\right) \right| }^{2}{d\xi }. \]\n\nWe now apply the lemma with\n\n\[ a = {2\pi }\left| \xi \right| \widehat{f}\left( \xi \right) ,\;b = \widehat{g}\left( \xi \right) \;\text{ and }\;\alpha = {2\pi }\left| \xi \right| t. 
\]\n\nThe result is that\n\n\[ E\left( t\right) = {\int }_{{\mathbb{R}}^{d}}{\left| \frac{\partial u}{\partial t}\right| }^{2} + {\left| \frac{\partial u}{\partial {x}_{1}}\right| }^{2} + \cdots + {\left| \frac{\partial u}{\partial {x}_{d}}\right| }^{2}{dx} \]\n\n\[ = {\int }_{{\mathbb{R}}^{d}}\left( {4{\pi }^{2}{\left| \xi \right| }^{2}{\left| \widehat{f}\left( \xi \right) \right| }^{2} + {\left| \widehat{g}\left( \xi \right) \right| }^{2}}\right) {d\xi } \]\n\nwhich is clearly independent of \( t \) . Thus Theorem 3.2 is proved.
|
Yes
|
Lemma 3.3 Suppose \( a \) and \( b \) are complex numbers and \( \alpha \) is real. Then\n\n\[ \n{\left| a\cos \alpha + b\sin \alpha \right| }^{2} + {\left| -a\sin \alpha + b\cos \alpha \right| }^{2} = {\left| a\right| }^{2} + {\left| b\right| }^{2}.\n\]
|
This follows directly because \( {e}_{1} = \left( {\cos \alpha ,\sin \alpha }\right) \) and \( {e}_{2} = \left( {-\sin \alpha ,\cos \alpha }\right) \) are a pair of orthonormal vectors, hence with \( Z = \left( {a, b}\right) \in {\mathbb{C}}^{2} \), we have\n\n\[ \n{\left| Z\right| }^{2} = {\left| Z \cdot {e}_{1}\right| }^{2} + {\left| Z \cdot {e}_{2}\right| }^{2}\n\]\n\nwhere \( \cdot \) represents the inner product in \( {\mathbb{C}}^{2} \) .
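The identity is easy to spot-check numerically (an added illustration with arbitrary values):

```python
import math

a, b = 0.7 - 1.2j, -0.3 + 2.5j   # arbitrary complex numbers
alpha = 0.9                       # arbitrary real angle
lhs = abs(a * math.cos(alpha) + b * math.sin(alpha)) ** 2 \
    + abs(-a * math.sin(alpha) + b * math.cos(alpha)) ** 2
rhs = abs(a) ** 2 + abs(b) ** 2
```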
|
Yes
|
Lemma 3.4 If \( f \in \mathcal{S}\left( {\mathbb{R}}^{3}\right) \) and \( t \) is fixed, then \( {M}_{t}\left( f\right) \in \mathcal{S}\left( {\mathbb{R}}^{3}\right) \) . Moreover, \( {M}_{t}\left( f\right) \) is indefinitely differentiable in \( t \), and each \( t \) -derivative also belongs to \( \mathcal{S}\left( {\mathbb{R}}^{3}\right) \) .
|
Proof. Let \( F\left( x\right) = {M}_{t}\left( f\right) \left( x\right) \) . To show that \( F \) is rapidly decreasing, start with the inequality \( \left| {f\left( x\right) }\right| \leq {A}_{N}/\left( {1 + {\left| x\right| }^{N}}\right) \) which holds for every fixed \( N \geq 0 \) . As a simple consequence, whenever \( t \) is fixed, we have\n\n\[ \left| {f\left( {x - {\gamma t}}\right) }\right| \leq {A}_{N}^{\prime }/\left( {1 + {\left| x\right| }^{N}}\right) \;\text{ for all }\gamma \in {S}^{2}. \]\n\nTo see this consider separately the cases when \( \left| x\right| \leq 2\left| t\right| \), and \( \left| x\right| > 2\left| t\right| \) . Therefore, by integration\n\n\[ \left| {F\left( x\right) }\right| \leq {A}_{N}^{\prime }/\left( {1 + {\left| x\right| }^{N}}\right) \]\n\nand since this holds for every \( N \), the function \( F \) is rapidly decreasing. One next observes that \( F \) is indefinitely differentiable, and\n\n(6)\n\n\[ {\left( \frac{\partial }{\partial x}\right) }^{\alpha }F\left( x\right) = {M}_{t}\left( {f}^{\left( \alpha \right) }\right) \left( x\right) \]\n\nwhere \( {f}^{\left( \alpha \right) }\left( x\right) = {\left( \partial /\partial x\right) }^{\alpha }f \) . It suffices to prove this when \( {\left( \partial /\partial x\right) }^{\alpha } = \) \( \partial /\partial {x}_{k} \), and then proceed by induction to get the general case. Furthermore, it is enough to take \( k = 1 \) . Now\n\n\[ \frac{F\left( {{x}_{1} + h,{x}_{2},{x}_{3}}\right) - F\left( {{x}_{1},{x}_{2},{x}_{3}}\right) }{h} = \frac{1}{4\pi }{\int }_{{S}^{2}}{g}_{h}\left( \gamma \right) {d\sigma }\left( \gamma \right) \]\n\nwhere\n\n\[ {g}_{h}\left( \gamma \right) = \frac{f\left( {x + {e}_{1}h - {\gamma t}}\right) - f\left( {x - {\gamma t}}\right) }{h}, \]\n\nand \( {e}_{1} = \left( {1,0,0}\right) \) . 
Now, it suffices to observe that \( {g}_{h} \rightarrow \frac{\partial }{\partial {x}_{1}}f\left( {x - {\gamma t}}\right) \) as \( h \rightarrow 0 \) uniformly in \( \gamma \) . As a result, we find that (6) holds, and by the first argument, it follows that \( {\left( \frac{\partial }{\partial x}\right) }^{\alpha }F\left( x\right) \) is also rapidly decreasing, hence \( F \in \mathcal{S} \) . The same argument applies to each \( t \) -derivative of \( {M}_{t}\left( f\right) \) .
|
Yes
|
Lemma 3.5 \( \frac{1}{4\pi }{\int }_{{S}^{2}}{e}^{-{2\pi i\xi } \cdot \gamma }{d\sigma }\left( \gamma \right) = \frac{\sin \left( {{2\pi }\left| \xi \right| }\right) }{{2\pi }\left| \xi \right| } \)
|
Proof. Note that the integral on the left is radial in \( \xi \) . Indeed, if \( R \) is a rotation then\n\n\[ \n{\int }_{{S}^{2}}{e}^{-{2\pi iR}\left( \xi \right) \cdot \gamma }{d\sigma }\left( \gamma \right) = {\int }_{{S}^{2}}{e}^{-{2\pi i\xi } \cdot {R}^{-1}\left( \gamma \right) }{d\sigma }\left( \gamma \right) = {\int }_{{S}^{2}}{e}^{-{2\pi i\xi } \cdot \gamma }{d\sigma }\left( \gamma \right) \n\]\n\nbecause we may change variables \( \gamma \rightarrow {R}^{-1}\left( \gamma \right) \) . (For this, see formula (4) in the appendix.) So if \( \left| \xi \right| = \rho \), it suffices to prove the lemma with\n\n\( \xi = \left( {0,0,\rho }\right) \) . If \( \rho = 0 \), the lemma is obvious. If \( \rho > 0 \), we choose spherical coordinates to find that the left-hand side is equal to\n\n\[ \n\frac{1}{4\pi }{\int }_{0}^{2\pi }{\int }_{0}^{\pi }{e}^{-{2\pi i\rho }\cos \theta }\sin {\theta d\theta d\varphi } \n\]\n\nThe change of variables \( u = - \cos \theta \) gives\n\n\[ \n\frac{1}{4\pi }{\int }_{0}^{2\pi }{\int }_{0}^{\pi }{e}^{-{2\pi i\rho }\cos \theta }\sin {\theta d\theta d\varphi } = \frac{1}{2}{\int }_{0}^{\pi }{e}^{-{2\pi i\rho }\cos \theta }\sin {\theta d\theta } \n\]\n\n\[ \n= \frac{1}{2}{\int }_{-1}^{1}{e}^{2\pi i\rho u}{du} \n\]\n\n\[ \n= \frac{1}{4\pi i\rho }{\left\lbrack {e}^{2\pi i\rho u}\right\rbrack }_{-1}^{1} \n\]\n\n\[ \n= \frac{\sin \left( {2\pi \rho }\right) }{2\pi \rho } \n\]\n\nand the formula is proved.
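The key computational step, \( \frac{1}{2}{\int }_{-1}^{1}{e}^{{2\pi i\rho u}}{du} = \sin \left( {2\pi \rho }\right) /\left( {2\pi \rho }\right) \), can be checked by direct quadrature (an added sketch, not part of the proof):

```python
import cmath
import math

rho = 0.8   # arbitrary positive radius
n = 200000
h = 2.0 / n
# (1/2) * integral over [-1, 1] of exp(2 pi i rho u) du, by the midpoint rule
val = 0.5 * h * sum(cmath.exp(2j * math.pi * rho * (-1.0 + (k + 0.5) * h))
                    for k in range(n))
expected = math.sin(2.0 * math.pi * rho) / (2.0 * math.pi * rho)
```

The imaginary part vanishes (the exact integral is real), and the real part matches the sinc value.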
|
Yes
|
Theorem 3.6 The solution when \( d = 3 \) of the Cauchy problem for the wave equation\n\n\[ \bigtriangleup u = \frac{{\partial }^{2}u}{\partial {t}^{2}}\;\text{ subject to }\;u\left( {x,0}\right) = f\left( x\right) \;\text{ and }\;\frac{\partial u}{\partial t}\left( {x,0}\right) = g\left( x\right) \]\n\nis given by\n\n\[ u\left( {x, t}\right) = \frac{\partial }{\partial t}\left( {t{M}_{t}\left( f\right) \left( x\right) }\right) + t{M}_{t}\left( g\right) \left( x\right) . \]
|
Proof. Consider first the problem\n\n\[ \bigtriangleup u = \frac{{\partial }^{2}u}{\partial {t}^{2}}\;\text{ subject to }\;u\left( {x,0}\right) = 0\;\text{ and }\;\frac{\partial u}{\partial t}\left( {x,0}\right) = g\left( x\right) . \]\n\nThen by Theorem 3.1, we know that its solution \( {u}_{1} \) is given by\n\n\[ {u}_{1}\left( {x, t}\right) = {\int }_{{\mathbb{R}}^{3}}\left\lbrack {\widehat{g}\left( \xi \right) \frac{\sin \left( {{2\pi }\left| \xi \right| t}\right) }{{2\pi }\left| \xi \right| }}\right\rbrack {e}^{{2\pi ix} \cdot \xi }{d\xi } \]\n\n\[ = t{\int }_{{\mathbb{R}}^{3}}\left\lbrack {\widehat{g}\left( \xi \right) \frac{\sin \left( {{2\pi }\left| \xi \right| t}\right) }{{2\pi }\left| \xi \right| t}}\right\rbrack {e}^{{2\pi ix} \cdot \xi }{d\xi } \]\n\n\[ = t{M}_{t}\left( g\right) \left( x\right) \]\n\nwhere we have used (7) applied to \( g \), and the Fourier inversion formula.\n\nAccording to Theorem 3.1 again, the solution to the problem\n\n\[ \bigtriangleup u = \frac{{\partial }^{2}u}{\partial {t}^{2}}\;\text{ subject to }\;u\left( {x,0}\right) = f\left( x\right) \;\text{ and }\;\frac{\partial u}{\partial t}\left( {x,0}\right) = 0 \]\n\nis given by\n\n\[ {u}_{2}\left( {x, t}\right) = {\int }_{{\mathbb{R}}^{3}}\left\lbrack {\widehat{f}\left( \xi \right) \cos \left( {{2\pi }\left| \xi \right| t}\right) }\right\rbrack {e}^{{2\pi ix} \cdot \xi }{d\xi } \]\n\n\[ = \frac{\partial }{\partial t}\left( {t{\int }_{{\mathbb{R}}^{3}}\left\lbrack {\widehat{f}\left( \xi \right) \frac{\sin \left( {{2\pi }\left| \xi \right| t}\right) }{{2\pi }\left| \xi \right| t}}\right\rbrack {e}^{{2\pi ix} \cdot \xi }{d\xi }}\right) \]\n\n\[ = \frac{\partial }{\partial t}\left( {t{M}_{t}\left( f\right) \left( x\right) }\right) \]\n\nWe may now superpose these two solutions to obtain \( u = {u}_{1} + {u}_{2} \) as the solution of our original problem.
|
Yes
|
Theorem 3.7 A solution of the Cauchy problem for the wave equation in two dimensions with initial data \( f, g \in \mathcal{S}\left( {\mathbb{R}}^{2}\right) \) is given by\n\n\[ u\left( {x, t}\right) = \frac{\partial }{\partial t}\left( {t{\widetilde{M}}_{t}\left( f\right) \left( x\right) }\right) + t{\widetilde{M}}_{t}\left( g\right) \left( x\right) . \]
|
Formally, the identity in the theorem arises as follows. If we start with an initial pair of functions \( f \) and \( g \) in \( \mathcal{S}\left( {\mathbb{R}}^{2}\right) \), we may consider the corresponding functions \( \widetilde{f} \) and \( \widetilde{g} \) on \( {\mathbb{R}}^{3} \) that are merely extensions of \( f \) and \( g \) that are constant in the \( {x}_{3} \) variable, that is,\n\n\[ \widetilde{f}\left( {{x}_{1},{x}_{2},{x}_{3}}\right) = f\left( {{x}_{1},{x}_{2}}\right) \;\text{ and }\;\widetilde{g}\left( {{x}_{1},{x}_{2},{x}_{3}}\right) = g\left( {{x}_{1},{x}_{2}}\right) . \]\n\nNow, if \( \widetilde{u} \) is the solution (given in the previous section) of the 3-dimensional wave equation with initial data \( \widetilde{f} \) and \( \widetilde{g} \), then one can expect that \( \widetilde{u} \) is also constant in \( {x}_{3} \) so that \( \widetilde{u} \) satisfies the 2-dimensional wave equation. A difficulty with this argument is that \( \widetilde{f} \) and \( \widetilde{g} \) are not rapidly decreasing since they are constant in \( {x}_{3} \), so that our previous methods do not apply. However, it is easy to modify the argument so as to obtain a proof of Theorem 3.7.\n\nWe fix \( T > 0 \) and consider a function \( \eta \left( {x}_{3}\right) \) that is in \( \mathcal{S}\left( \mathbb{R}\right) \), such that \( \eta \left( {x}_{3}\right) = 1 \) if \( \left| {x}_{3}\right| \leq {3T} \) . The trick is to truncate \( \widetilde{f} \) and \( \widetilde{g} \) in the \( {x}_{3} \) -variable, and consider instead\n\n\[ {\widetilde{f}}^{b}\left( {{x}_{1},{x}_{2},{x}_{3}}\right) = f\left( {{x}_{1},{x}_{2}}\right) \eta \left( {x}_{3}\right) \;\text{ and }\;{\widetilde{g}}^{b}\left( {{x}_{1},{x}_{2},{x}_{3}}\right) = g\left( {{x}_{1},{x}_{2}}\right) \eta \left( {x}_{3}\right) . 
\]\n\nNow both \( {\widetilde{f}}^{b} \) and \( {\widetilde{g}}^{b} \) are in \( \mathcal{S}\left( {\mathbb{R}}^{3}\right) \), so Theorem 3.6 provides a solution \( {\widetilde{u}}^{b} \) of the wave equation with initial data \( {\widetilde{f}}^{b} \) and \( {\widetilde{g}}^{b} \) . It is easy to see from the formula that \( {\widetilde{u}}^{b}\left( {x, t}\right) \) is independent of \( {x}_{3} \), whenever \( \left| {x}_{3}\right| \leq T \) and \( \left| t\right| \leq T \) . In particular, if we define \( u\left( {{x}_{1},{x}_{2}, t}\right) = {\widetilde{u}}^{b}\left( {{x}_{1},{x}_{2},0, t}\right) \), then \( u \) satisfies the 2-dimensional wave equation when \( \left| t\right| \leq T \) . Since \( T \) is arbitrary, \( u \) is a solution to our problem, and it remains to see why \( u \) has the desired form.
|
Yes
|
Proposition 5.1 If \( f \in \mathcal{S}\left( {\mathbb{R}}^{3}\right) \), then for each \( \gamma \) the definition of \( {\int }_{{\mathcal{P}}_{t,\gamma }}f \) is independent of the choice of \( {e}_{1} \) and \( {e}_{2} \) . Moreover\n\n\[{\int }_{-\infty }^{\infty }\left( {{\int }_{{\mathcal{P}}_{t,\gamma }}f}\right) {dt} = {\int }_{{\mathbb{R}}^{3}}f\left( x\right) {dx}\]
|
Proof. If \( {e}_{1}^{\prime },{e}_{2}^{\prime } \) is another choice of basis vectors so that \( \gamma ,{e}_{1}^{\prime },{e}_{2}^{\prime } \) is orthonormal, consider the rotation \( R \) in \( {\mathbb{R}}^{2} \) which takes \( {e}_{1} \) to \( {e}_{1}^{\prime } \) and \( {e}_{2} \) to \( {e}_{2}^{\prime } \) . Changing variables \( {u}^{\prime } = R\left( u\right) \) in the integral proves that our definition (12) is independent of the choice of basis.\n\nTo prove the formula, let \( R \) denote the rotation which takes the standard basis of unit vectors \( {}^{4} \) in \( {\mathbb{R}}^{3} \) to \( \gamma ,{e}_{1} \), and \( {e}_{2} \) . Then\n\n\[{\int }_{{\mathbb{R}}^{3}}f\left( x\right) {dx} = {\int }_{{\mathbb{R}}^{3}}f\left( {Rx}\right) {dx}\]\n\n\[= {\int }_{{\mathbb{R}}^{3}}f\left( {{x}_{1}\gamma + {x}_{2}{e}_{1} + {x}_{3}{e}_{2}}\right) d{x}_{1}d{x}_{2}d{x}_{3}\]\n\n\[= {\int }_{-\infty }^{\infty }\left( {{\int }_{{\mathcal{P}}_{t,\gamma }}f}\right) {dt}\]
|
Yes
|
Lemma 5.2 If \( f \in \mathcal{S}\left( {\mathbb{R}}^{3}\right) \), then \( \mathcal{R}\left( f\right) \left( {t,\gamma }\right) \in \mathcal{S}\left( \mathbb{R}\right) \) for each fixed \( \gamma \) . Moreover, \[ \widehat{\mathcal{R}}\left( f\right) \left( {s,\gamma }\right) = \widehat{f}\left( {s\gamma }\right) \]
|
Proof. Since \( f \in \mathcal{S}\left( {\mathbb{R}}^{3}\right) \), for every positive integer \( N \) there is a constant \( {A}_{N} < \infty \) so that \[ {\left( 1 + \left| t\right| \right) }^{N}{\left( 1 + \left| u\right| \right) }^{N}\left| {f\left( {{t\gamma } + u}\right) }\right| \leq {A}_{N} \] if we recall that \( x = {t\gamma } + u \), where \( \gamma \) is orthogonal to \( u \) . Therefore, as soon as \( N \geq 3 \), we find \[ {\left( 1 + \left| t\right| \right) }^{N}\mathcal{R}\left( f\right) \left( {t,\gamma }\right) \leq {A}_{N}{\int }_{{\mathbb{R}}^{2}}\frac{du}{{\left( 1 + \left| u\right| \right) }^{N}} < \infty . \] A similar argument for the derivatives shows that \( \mathcal{R}\left( f\right) \left( {t,\gamma }\right) \in \mathcal{S}\left( \mathbb{R}\right) \) for each fixed \( \gamma \) . To establish the identity, we first note that \[ \widehat{\mathcal{R}}\left( f\right) \left( {s,\gamma }\right) = {\int }_{-\infty }^{\infty }\left( {{\int }_{{\mathcal{P}}_{t,\gamma }}f}\right) {e}^{-{2\pi ist}}{dt} \] \[ = {\int }_{-\infty }^{\infty }{\int }_{{\mathbb{R}}^{2}}f\left( {{t\gamma } + {u}_{1}{e}_{1} + {u}_{2}{e}_{2}}\right) d{u}_{1}d{u}_{2}{e}^{-{2\pi ist}}{dt}. \] However, since \( \gamma \cdot u = 0 \) and \( \left| \gamma \right| = 1 \), we may write \[ {e}^{-{2\pi ist}} = {e}^{-{2\pi is\gamma } \cdot \left( {{t\gamma } + u}\right) }. \] As a result, we find that \[ \widehat{\mathcal{R}}\left( f\right) \left( {s,\gamma }\right) = {\int }_{-\infty }^{\infty }{\int }_{{\mathbb{R}}^{2}}f\left( {{t\gamma } + {u}_{1}{e}_{1} + {u}_{2}{e}_{2}}\right) {e}^{-{2\pi is\gamma } \cdot \left( {{t\gamma } + u}\right) }d{u}_{1}d{u}_{2}{dt} \] \[ = {\int }_{-\infty }^{\infty }{\int }_{{\mathbb{R}}^{2}}f\left( {{t\gamma } + u}\right) {e}^{-{2\pi is\gamma } \cdot \left( {{t\gamma } + u}\right) }{dudt}. 
\] A final rotation from \( \gamma ,{e}_{1},{e}_{2} \) to the standard basis in \( {\mathbb{R}}^{3} \) proves that \( \widehat{\mathcal{R}}\left( f\right) \left( {s,\gamma }\right) = \widehat{f}\left( {s\gamma }\right) \), as desired.
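As a concrete instance of the lemma (an added illustration, not from the text), take the Gaussian \( f\left( x\right) = {e}^{-\pi {\left| x\right| }^{2}} \), for which \( \widehat{f}\left( \xi \right) = {e}^{-\pi {\left| \xi \right| }^{2}} \). Writing \( x = {t\gamma } + u \) with \( \gamma \perp u \), so that \( {\left| x\right| }^{2} = {t}^{2} + {\left| u\right| }^{2} \),

\[ \mathcal{R}\left( f\right) \left( {t,\gamma }\right) = {\int }_{{\mathbb{R}}^{2}}{e}^{-\pi \left( {{t}^{2} + {\left| u\right| }^{2}}\right) }{du} = {e}^{-\pi {t}^{2}}, \]

and the one-dimensional Fourier transform of \( {e}^{-\pi {t}^{2}} \) is \( {e}^{-\pi {s}^{2}} \), which indeed equals \( \widehat{f}\left( {s\gamma }\right) \) since \( \left| {s\gamma }\right| = \left| s\right| \), for every \( \gamma \).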
|
Yes
|
Corollary 5.3 If \( f, g \in \mathcal{S}\left( {\mathbb{R}}^{3}\right) \) and \( \mathcal{R}\left( f\right) = \mathcal{R}\left( g\right) \), then \( f = g \) .
|
The proof of the corollary follows from an application of the lemma to the difference \( f - g \) and use of the Fourier inversion theorem.
|
No
|
Theorem 5.4 If \( f \in \mathcal{S}\left( {\mathbb{R}}^{3}\right) \), then\n\n\[ \bigtriangleup \left( {{\mathcal{R}}^{ * }\mathcal{R}\left( f\right) }\right) = - 8{\pi }^{2}f \]
|
We recall that \( \bigtriangleup = \frac{{\partial }^{2}}{\partial {x}_{1}^{2}} + \frac{{\partial }^{2}}{\partial {x}_{2}^{2}} + \frac{{\partial }^{2}}{\partial {x}_{3}^{2}} \) is the Laplacian.\n\nProof. By our previous lemma, we have\n\n\[ \mathcal{R}\left( f\right) \left( {t,\gamma }\right) = {\int }_{-\infty }^{\infty }\widehat{f}\left( {s\gamma }\right) {e}^{2\pi its}{ds}. \]\n\nTherefore\n\n\[ {\mathcal{R}}^{ * }\mathcal{R}\left( f\right) \left( x\right) = {\int }_{{S}^{2}}{\int }_{-\infty }^{\infty }\widehat{f}\left( {s\gamma }\right) {e}^{{2\pi ix} \cdot {\gamma s}}{dsd\sigma }\left( \gamma \right) , \]\n\nhence\n\n\[ \bigtriangleup \left( {{\mathcal{R}}^{ * }\mathcal{R}\left( f\right) }\right) \left( x\right) = {\int }_{{S}^{2}}{\int }_{-\infty }^{\infty }\widehat{f}\left( {s\gamma }\right) \left( {-4{\pi }^{2}{s}^{2}}\right) {e}^{{2\pi ix} \cdot {\gamma s}}{dsd\sigma }\left( \gamma \right) \]\n\n\[ = - 4{\pi }^{2}{\int }_{{S}^{2}}{\int }_{-\infty }^{\infty }\widehat{f}\left( {s\gamma }\right) {e}^{{2\pi ix} \cdot {\gamma s}}{s}^{2}{dsd\sigma }\left( \gamma \right) \]\n\n\[ = - 4{\pi }^{2}{\int }_{{S}^{2}}{\int }_{-\infty }^{0}\widehat{f}\left( {s\gamma }\right) {e}^{{2\pi ix} \cdot {\gamma s}}{s}^{2}{dsd\sigma }\left( \gamma \right) \]\n\n\[ - 4{\pi }^{2}{\int }_{{S}^{2}}{\int }_{0}^{\infty }\widehat{f}\left( {s\gamma }\right) {e}^{{2\pi ix} \cdot {\gamma s}}{s}^{2}{dsd\sigma }\left( \gamma \right) \]\n\n\[ = - 8{\pi }^{2}{\int }_{{S}^{2}}{\int }_{0}^{\infty }\widehat{f}\left( {s\gamma }\right) {e}^{{2\pi ix} \cdot {\gamma s}}{s}^{2}{dsd\sigma }\left( \gamma \right) \]\n\n\[ = - 8{\pi }^{2}f\left( x\right) \text{.} \]\n\nIn the first line, we have differentiated under the integral sign and used the fact \( \bigtriangleup \left( {e}^{{2\pi ix} \cdot {\gamma s}}\right) = \left( {-4{\pi }^{2}{s}^{2}}\right) {e}^{{2\pi ix} \cdot {\gamma s}} \), since \( \left| \gamma \right| = 1 \). 
In the penultimate step, the change of variables \( s \rightarrow -s \), \( \gamma \rightarrow -\gamma \) shows that the integral over \( \left( {-\infty ,0}\right) \) equals the integral over \( \left( {0,\infty }\right) \). The last step follows from the formula for polar coordinates in \( {\mathbb{R}}^{3} \) and the Fourier inversion theorem.
|
Yes
|
Lemma 1.1 The family \( \left\{ {{e}_{0},\ldots ,{e}_{N - 1}}\right\} \) is orthogonal. In fact,\n\n\[ \left( {{e}_{m},{e}_{\ell }}\right) = \left\{ \begin{array}{ll} N & \text{ if }m = \ell \\ 0 & \text{ if }m \neq \ell \end{array}\right. \]
|
Proof. We have\n\n\[ \left( {{e}_{m},{e}_{\ell }}\right) = \mathop{\sum }\limits_{{k = 0}}^{{N - 1}}{\zeta }^{mk}{\zeta }^{-\ell k} = \mathop{\sum }\limits_{{k = 0}}^{{N - 1}}{\zeta }^{\left( {m - \ell }\right) k}. \]\n\nIf \( m = \ell \), each term in the sum is equal to 1, and the sum equals \( N \) . If \( m \neq \ell \), then \( q = {\zeta }^{m - \ell } \) is not equal to 1, and the usual formula\n\n\[ 1 + q + {q}^{2} + \cdots + {q}^{N - 1} = \frac{1 - {q}^{N}}{1 - q} \]\n\nshows that \( \left( {{e}_{m},{e}_{\ell }}\right) = 0 \), because \( {q}^{N} = 1 \) .
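The orthogonality relations are easy to verify numerically (an added illustration, with \( N = 8 \)):

```python
import cmath
import math

N = 8
zeta = cmath.exp(2j * math.pi / N)   # zeta = e^{2 pi i / N}

def inner(m, ell):
    # (e_m, e_ell) = sum over k of zeta^{m k} * conj(zeta^{ell k})
    return sum(zeta ** (m * k) * (zeta ** (ell * k)).conjugate() for k in range(N))

equal_case = inner(3, 3)      # should equal N
distinct_case = inner(3, 5)   # should vanish
```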
|
Yes
|
Theorem 1.2 If \( F \) is a function on \( \mathbb{Z}\left( N\right) \), then\n\n\[ F\left( k\right) = \mathop{\sum }\limits_{{n = 0}}^{{N - 1}}{a}_{n}{e}^{{2\pi ink}/N}. \]\n\nMoreover,\n\n\[ \mathop{\sum }\limits_{{n = 0}}^{{N - 1}}{\left| {a}_{n}\right| }^{2} = \frac{1}{N}\mathop{\sum }\limits_{{k = 0}}^{{N - 1}}{\left| F\left( k\right) \right| }^{2} \]
|
The proof follows directly from (1) once we observe that\n\n\[ {a}_{n} = \frac{1}{N}\left( {F,{e}_{n}}\right) = \frac{1}{\sqrt{N}}\left( {F,{e}_{n}^{ * }}\right) . \]
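Both the inversion formula and the Parseval identity can be spot-checked numerically (an added illustration on \( \mathbb{Z}\left( 6\right) \) with arbitrary values):

```python
import cmath
import math

N = 6
F = [2.0, -1.0, 0.5, 3.0, -2.5, 1.0]   # an arbitrary function on Z(6)

# a_n = (1/N) (F, e_n) = (1/N) sum_k F(k) e^{-2 pi i n k / N}
a = [sum(F[k] * cmath.exp(-2j * math.pi * n * k / N) for k in range(N)) / N
     for n in range(N)]

# inversion: F(k) = sum_n a_n e^{2 pi i n k / N}
recon = [sum(a[n] * cmath.exp(2j * math.pi * n * k / N) for n in range(N))
         for k in range(N)]

# Parseval: sum |a_n|^2 = (1/N) sum |F(k)|^2
parseval_gap = abs(sum(abs(z) ** 2 for z in a) - sum(v * v for v in F) / N)
```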
|
Yes
|
Lemma 1.4 If we are given \( {\omega }_{2M} = {e}^{-{2\pi i}/\left( {2M}\right) } \), then\n\n\[ \n\# \left( {2M}\right) \leq 2\# \left( M\right) + {8M}.\n\]
|
Proof. The calculation of \( {\omega }_{2M},\ldots ,{\omega }_{2M}^{2M} \) requires no more than \( {2M} \) operations. Note that in particular we get \( {\omega }_{M} = {e}^{-{2\pi i}/M} = {\omega }_{2M}^{2} \). The main idea is that for any given function \( F \) on \( \mathbb{Z}\left( {2M}\right) \), we consider two functions \( {F}_{0} \) and \( {F}_{1} \) on \( \mathbb{Z}\left( M\right) \) defined by\n\n\[ \n{F}_{0}\left( r\right) = F\left( {2r}\right) \;\text{ and }\;{F}_{1}\left( r\right) = F\left( {{2r} + 1}\right) .\n\]\n\nWe assume that it is possible to calculate the Fourier coefficients of \( {F}_{0} \) and \( {F}_{1} \) in no more than \( \# \left( M\right) \) operations each. If we denote the Fourier coefficients corresponding to the groups \( \mathbb{Z}\left( {2M}\right) \) and \( \mathbb{Z}\left( M\right) \) by \( {a}_{k}^{2M} \) and \( {a}_{k}^{M} \), respectively, then we have\n\n\[ \n{a}_{k}^{2M}\left( F\right) = \frac{1}{2}\left( {{a}_{k}^{M}\left( {F}_{0}\right) + {a}_{k}^{M}\left( {F}_{1}\right) {\omega }_{2M}^{k}}\right) .\n\]\n\nTo prove this, we sum over odd and even integers in the definition of the Fourier coefficient \( {a}_{k}^{2M}\left( F\right) \), and find\n\n\[ \n{a}_{k}^{2M}\left( F\right) = \frac{1}{2M}\mathop{\sum }\limits_{{r = 0}}^{{{2M} - 1}}F\left( r\right) {\omega }_{2M}^{kr}\n\]\n\n\[ \n= \frac{1}{2}\left( {\frac{1}{M}\mathop{\sum }\limits_{{\ell = 0}}^{{M - 1}}F\left( {2\ell }\right) {\omega }_{2M}^{k\left( {2\ell }\right) } + \frac{1}{M}\mathop{\sum }\limits_{{m = 0}}^{{M - 1}}F\left( {{2m} + 1}\right) {\omega }_{2M}^{k\left( {{2m} + 1}\right) }}\right)\n\]\n\n\[ \n= \frac{1}{2}\left( {\frac{1}{M}\mathop{\sum }\limits_{{\ell = 0}}^{{M - 1}}{F}_{0}\left( \ell \right) {\omega }_{M}^{k\ell } + \frac{1}{M}\mathop{\sum }\limits_{{m = 0}}^{{M - 1}}{F}_{1}\left( m\right) {\omega }_{M}^{km}{\omega }_{2M}^{k}}\right) ,\n\]\n\nwhich establishes our assertion.\n\nAs a result, knowing \( {a}_{k}^{M}\left( {F}_{0}\right) 
,{a}_{k}^{M}\left( {F}_{1}\right) \), and \( {\omega }_{2M}^{k} \), we see that each \( {a}_{k}^{2M}\left( F\right) \) can be computed using no more than three operations (one addition and two multiplications). So\n\n\[ \n\# \left( {2M}\right) \leq {2M} + 2\# \left( M\right) + 3 \times {2M} = 2\# \left( M\right) + {8M},\n\]\n\nand the proof of the lemma is complete.
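The recursion in the lemma is the heart of the fast Fourier transform, and it can be run directly. Below is a minimal Python sketch (the function name and the restriction to power-of-two lengths are choices made here for illustration), using the convention \( {a}_{k} = \frac{1}{N}\mathop{\sum }\limits_{r}F\left( r\right) {\omega }_{N}^{kr} \) with \( {\omega }_{N} = {e}^{-{2\pi i}/N} \) as in the lemma:

```python
import cmath

def fourier_coeffs(F):
    """Coefficients a_k = (1/N) * sum_r F(r) * w^(k*r), w = exp(-2*pi*i/N),
    computed by the divide-and-conquer recursion of Lemma 1.4.
    len(F) must be a power of two."""
    N = len(F)
    if N == 1:
        return [F[0]]
    A0 = fourier_coeffs(F[0::2])   # coefficients of F0(r) = F(2r)   on Z(N/2)
    A1 = fourier_coeffs(F[1::2])   # coefficients of F1(r) = F(2r+1) on Z(N/2)
    w = cmath.exp(-2j * cmath.pi / N)
    # a_k^N(F) = (1/2) * (a_{k mod N/2}(F0) + a_{k mod N/2}(F1) * w^k)
    return [0.5 * (A0[k % (N // 2)] + A1[k % (N // 2)] * w**k)
            for k in range(N)]
```

Each level of the recursion performs \( O\left( N\right) \) work, matching the bound \( \# \left( {2M}\right) \leq 2\# \left( M\right) + {8M} \) of the lemma.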
|
Yes
|
Lemma 2.1 The set \( \widehat{G} \) is an abelian group under multiplication defined \( {by} \)\n\n\[ \left( {{e}_{1} \cdot {e}_{2}}\right) \left( a\right) = {e}_{1}\left( a\right) {e}_{2}\left( a\right) \;\text{ for all }a \in G. \]
|
The proof of this assertion is straightforward if one observes that the trivial character plays the role of the unit, and that the inverse of a character \( e \) is its complex conjugate \( \bar{e} \) . We call \( \widehat{G} \) the dual group of \( G \) .
|
No
|
Lemma 2.2 Let \( G \) be a finite abelian group, and \( e : G \rightarrow \mathbb{C} - \{ 0\} \) a multiplicative function, namely \( e\left( {a \cdot b}\right) = e\left( a\right) e\left( b\right) \) for all \( a, b \in G \) . Then \( e \) is a character.
|
Proof. The group \( G \) being finite, the absolute value of \( e\left( a\right) \) is bounded above and below as \( a \) ranges over \( G \) . Since \( \left| {e\left( {b}^{n}\right) }\right| = {\left| e\left( b\right) \right| }^{n} \) for every positive integer \( n \), the powers \( {\left| e\left( b\right) \right| }^{n} \) remain bounded above and below as \( n \rightarrow \infty \), and this forces \( \left| {e\left( b\right) }\right| = 1 \) for all \( b \in G \) .
|
Yes
|
Theorem 2.3 The characters of \( G \) form an orthonormal family with respect to the inner product defined above.
|
Since \( \left| {e\left( a\right) }\right| = 1 \) for any character, we find that\n\n\[ \left( {e, e}\right) = \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{a \in G}}e\left( a\right) \overline{e\left( a\right) } = \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{a \in G}}{\left| e\left( a\right) \right| }^{2} = 1. \]\n\nIf \( e \neq {e}^{\prime } \) and both are characters, we must prove that \( \left( {e,{e}^{\prime }}\right) = 0 \) ; we isolate the key step in a lemma.\n\nLemma 2.4 If \( e \) is a non-trivial character of the group \( G \), then \( \mathop{\sum }\limits_{{a \in G}}e\left( a\right) = 0. \)\n\nProof. Choose \( b \in G \) such that \( e\left( b\right) \neq 1 \) . Then we have\n\n\[ e\left( b\right) \mathop{\sum }\limits_{{a \in G}}e\left( a\right) = \mathop{\sum }\limits_{{a \in G}}e\left( b\right) e\left( a\right) = \mathop{\sum }\limits_{{a \in G}}e\left( {ab}\right) = \mathop{\sum }\limits_{{a \in G}}e\left( a\right) . \]\n\nThe last equality follows because as \( a \) ranges over the group, \( {ab} \) ranges over \( G \) as well. Therefore \( \mathop{\sum }\limits_{{a \in G}}e\left( a\right) = 0 \) .\n\nWe can now conclude the proof of the theorem. Suppose \( {e}^{\prime } \) is a character distinct from \( e \) . Because \( e{\left( {e}^{\prime }\right) }^{-1} \) is non-trivial, the lemma implies that\n\n\[ \mathop{\sum }\limits_{{a \in G}}e\left( a\right) {\left( {e}^{\prime }\left( a\right) \right) }^{-1} = 0. \]\n\nSince \( {\left( {e}^{\prime }\left( a\right) \right) }^{-1} = \overline{{e}^{\prime }\left( a\right) } \), the theorem is proved.
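For the cyclic group \( \mathbb{Z}\left( N\right) \), whose characters are \( {e}_{\ell }\left( k\right) = {e}^{{2\pi i}\ell k/N} \), the orthonormality relations can be checked numerically. A small Python sketch (the helper names are illustrative):

```python
import cmath

def character(l, N):
    # the character e_l of Z(N): e_l(k) = exp(2*pi*i*l*k/N)
    return lambda k: cmath.exp(2j * cmath.pi * l * k / N)

def inner(f, g, N):
    # (f, g) = (1/|G|) * sum over a in G of f(a) * conj(g(a))
    return sum(f(a) * g(a).conjugate() for a in range(N)) / N
```

With \( N = 6 \), for instance, `inner(character(2, 6), character(2, 6), 6)` is 1 while `inner(character(2, 6), character(5, 6), 6)` vanishes, as the theorem predicts.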
|
Yes
|
Lemma 2.4 If \( e \) is a non-trivial character of the group \( G \), then \( \mathop{\sum }\limits_{{a \in G}}e\left( a\right) = 0. \)
|
Proof. Choose \( b \in G \) such that \( e\left( b\right) \neq 1 \) . Then we have\n\n\[ e\left( b\right) \mathop{\sum }\limits_{{a \in G}}e\left( a\right) = \mathop{\sum }\limits_{{a \in G}}e\left( b\right) e\left( a\right) = \mathop{\sum }\limits_{{a \in G}}e\left( {ab}\right) = \mathop{\sum }\limits_{{a \in G}}e\left( a\right) .\n\]\n\nThe last equality follows because as \( a \) ranges over the group, \( {ab} \) ranges over \( G \) as well. Therefore \( \mathop{\sum }\limits_{{a \in G}}e\left( a\right) = 0 \) .
|
Yes
|
Lemma 2.6 Suppose \( \left\{ {{T}_{1},\ldots ,{T}_{k}}\right\} \) is a commuting family of unitary transformations on the finite-dimensional inner product space \( V \) ; that is,\n\n\[ \n{T}_{i}{T}_{j} = {T}_{j}{T}_{i}\;\text{ for all }i, j.\n\]\n\nThen \( {T}_{1},\ldots ,{T}_{k} \) are simultaneously diagonalizable. In other words, there exists a basis for \( V \) which consists of eigenvectors for every \( {T}_{i}, i = 1,\ldots, k \) .
|
Proof. We use induction on \( k \) . The case \( k = 1 \) is simply the spectral theorem. Suppose that the lemma is true for any family of \( k - 1 \) commuting unitary transformations. The spectral theorem applied to \( {T}_{k} \) says that \( V \) is the direct sum of its eigenspaces\n\n\[ \nV = {V}_{{\lambda }_{1}} \oplus \cdots \oplus {V}_{{\lambda }_{s}}\n\]\n\nwhere \( {V}_{{\lambda }_{i}} \) denotes the subspace of all eigenvectors with eigenvalue \( {\lambda }_{i} \) . We claim that each one of the \( {T}_{1},\ldots ,{T}_{k - 1} \) maps each eigenspace \( {V}_{{\lambda }_{i}} \) to itself. Indeed, if \( v \in {V}_{{\lambda }_{i}} \) and \( 1 \leq j \leq k - 1 \), then\n\n\[ \n{T}_{k}{T}_{j}\left( v\right) = {T}_{j}{T}_{k}\left( v\right) = {T}_{j}\left( {{\lambda }_{i}v}\right) = {\lambda }_{i}{T}_{j}\left( v\right)\n\]\n\nso \( {T}_{j}\left( v\right) \in {V}_{{\lambda }_{i}} \), and the claim is proved.\n\nSince the restrictions to \( {V}_{{\lambda }_{i}} \) of \( {T}_{1},\ldots ,{T}_{k - 1} \) form a family of commuting unitary linear transformations, the induction hypothesis guarantees that these are simultaneously diagonalizable on each subspace \( {V}_{{\lambda }_{i}} \) . This diagonalization provides us with the desired basis for each \( {V}_{{\lambda }_{i}} \), and thus for \( V \) .
|
Yes
|
Theorem 2.7 Let \( G \) be a finite abelian group. The characters of \( G \) form an orthonormal basis for the vector space \( V \) of functions on \( G \) equipped with the inner product\n\n\[ \n\left( {f, g}\right) = \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{a \in G}}f\left( a\right) \overline{g\left( a\right) }.\n\]\n\nIn particular, any function \( f \) on \( G \) is equal to its Fourier series\n\n\[ \nf = \mathop{\sum }\limits_{{e \in \widehat{G}}}\widehat{f}\left( e\right) e\n\]
|
Null
|
No
|
Theorem 2.8 If \( f \) is a function on \( G \), then \( \parallel f{\parallel }^{2} = \mathop{\sum }\limits_{{e \in \widehat{G}}}{\left| \widehat{f}\left( e\right) \right| }^{2} \) .
|
Proof. Since the characters of \( G \) form an orthonormal basis for the vector space \( V \), and \( \left( {f, e}\right) = \widehat{f}\left( e\right) \), we have that\n\n\[ \parallel f{\parallel }^{2} = \left( {f, f}\right) = \mathop{\sum }\limits_{{e \in \widehat{G}}}\left( {f, e}\right) \overline{\widehat{f}\left( e\right) } = \mathop{\sum }\limits_{{e \in \widehat{G}}}{\left| \widehat{f}\left( e\right) \right| }^{2}. \]\n\nThe apparent difference of this statement with that of Theorem 1.2 is due to the different normalizations of the Fourier coefficients that are used.
|
Yes
|
Theorem 1.1 (Euclid’s algorithm) For any integers \( a \) and \( b \) with \( b > 0 \), there exist unique integers \( q \) and \( r \) with \( 0 \leq r < b \) such that\n\n\[ a = {qb} + r. \]
|
Proof. First we prove the existence of \( q \) and \( r \) . Let \( S \) denote the set of all non-negative integers of the form \( a - {qb} \) with \( q \in \mathbb{Z} \) . This set is non-empty and in fact \( S \) contains arbitrarily large positive integers since \( b \neq 0 \) . Let \( r \) denote the smallest element in \( S \), so that\n\n\[ r = a - {qb} \]\n\nfor some integer \( q \) . By construction \( 0 \leq r \), and we claim that \( r < b \) . If not, we may write \( r = b + s \) with \( 0 \leq s < r \), so \( b + s = a - {qb} \), which then implies\n\n\[ s = a - \left( {q + 1}\right) b. \]\n\nHence \( s \in S \) with \( s < r \), and this contradicts the minimality of \( r \) . So \( r < b \), hence \( q \) and \( r \) satisfy the conditions of the theorem.\n\nTo prove uniqueness, suppose we also had \( a = {q}_{1}b + {r}_{1} \) where \( 0 \leq {r}_{1} < b \) . By subtraction we find\n\n\[ \left( {q - {q}_{1}}\right) b = {r}_{1} - r \]\n\nThe left-hand side has absolute value 0 or \( \geq b \), while the right-hand side has absolute value \( < b \) . Hence both sides of the equation must be 0, which gives \( q = {q}_{1} \) and \( r = {r}_{1} \) .
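The existence half of the proof is effectively an algorithm: shrink \( a - {qb} \) until the remainder lands in \( \left\lbrack {0, b}\right) \). A direct Python sketch of that argument (the function name is a choice made here; Python's built-in `divmod` gives the same pair):

```python
def euclid_division(a, b):
    """Return (q, r) with a = q*b + r and 0 <= r < b, for b > 0 (Theorem 1.1).
    Mirrors the proof: adjust a - q*b until it lies in [0, b)."""
    q, r = 0, a
    while r >= b:      # r too large: increase q
        r -= b
        q += 1
    while r < 0:       # r negative: decrease q
        r += b
        q -= 1
    return q, r
```

For example, `euclid_division(17, 5)` gives \( \left( {3,2}\right) \), and the algorithm also handles negative \( a \), as the theorem requires.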
|
Yes
|
Theorem 1.2 If \( \gcd \left( {a, b}\right) = d \), then there exist integers \( x \) and \( y \) such that\n\n\[ \n{ax} + {by} = d\text{.} \n\]
|
Proof. Consider the set \( S \) of all positive integers of the form \( {ax} + {by} \) where \( x, y \in \mathbb{Z} \), and let \( s \) be the smallest element in \( S \) . We claim that \( s = \) \( d \) . By construction, there exist integers \( x \) and \( y \) such that\n\n\[ \n{ax} + {by} = s.\n\]\n\nClearly, any divisor of \( a \) and \( b \) divides \( s \), so we must have \( d \leq s \) . The proof will be complete if we can show that \( s \mid a \) and \( s \mid b \) . By Euclid’s algorithm, we can write \( a = {qs} + r \) with \( 0 \leq r < s \) . Multiplying the equation above by \( q \) we find \( {qax} + {qby} = {qs} \), and therefore\n\n\[ \n{qax} + {qby} = a - r.\n\]\n\nHence \( r = a\left( {1 - {qx}}\right) + b\left( {-{qy}}\right) \) . Since \( s \) was minimal in \( S \) and \( 0 \leq r < s \) , we conclude that \( r = 0 \), therefore \( s \) divides \( a \) . A similar argument shows that \( s \) divides \( b \) ; thus \( s \) is a common divisor of \( a \) and \( b \), so \( s \leq d \), and therefore \( s = d \) as desired.
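In computational practice the integers \( x \) and \( y \) are produced by the extended Euclidean algorithm rather than by the minimality argument of the proof. A standard iterative sketch in Python (the function name is a choice made here):

```python
def bezout(a, b):
    """Return (d, x, y) with d = gcd(a, b) and a*x + b*y = d (Theorem 1.2),
    for non-negative a, b. Iterative extended Euclidean algorithm."""
    x0, y0, x1, y1 = 1, 0, 0, 1
    while b != 0:
        q, r = divmod(a, b)      # a = q*b + r, as in Euclid's algorithm
        a, b = b, r
        x0, x1 = x1, x0 - q * x1  # maintain a = a0*x0 + b0*y0 throughout
        y0, y1 = y1, y0 - q * y1
    return a, x0, y0
```

For instance, `bezout(240, 46)` returns \( \left( {2, - 9,{47}}\right) \), and indeed \( {240} \cdot \left( {-9}\right) + {46} \cdot {47} = 2 \).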
|
Yes
|
Corollary 1.3 Two positive integers \( a \) and \( b \) are relatively prime if and only if there exist integers \( x \) and \( y \) such that \( {ax} + {by} = 1 \) .
|
Proof. If \( a \) and \( b \) are relatively prime, two integers \( x \) and \( y \) with the desired property exist by Theorem 1.2. Conversely, if \( {ax} + {by} = 1 \) holds and \( d \) is positive and divides both \( a \) and \( b \), then \( d \) divides 1, hence \( d = 1 \) .
|
Yes
|
Corollary 1.4 If a and \( c \) are relatively prime and \( c \) divides \( {ab} \), then \( c \) divides \( b \) . In particular, if \( p \) is a prime that does not divide \( a \) and \( p \) divides \( {ab} \), then \( p \) divides \( b \) .
|
Proof. We can write \( 1 = {ax} + {cy} \), so multiplying by \( b \) we find \( b = \) \( {abx} + {cby} \) . Hence \( c \mid b \) .
|
Yes
|
Corollary 1.5 If \( p \) is prime and \( p \) divides the product \( {a}_{1}\cdots {a}_{r} \), then \( p \) divides \( {a}_{i} \) for some \( i \) .
|
Proof. By the previous corollary, if \( p \) does not divide \( {a}_{1} \), then \( p \) divides \( {a}_{2}\cdots {a}_{r} \) . Repeating this argument, we eventually find some \( i \) with \( p \mid {a}_{i} \) .
|
Yes
|
Theorem 1.6 Every positive integer greater than 1 can be factored uniquely into a product of primes.
|
Proof. First, we show that such a factorization is possible. We do so by proving that the set \( S \) of positive integers \( > 1 \) which do not have a factorization into primes is empty. Arguing by contradiction, we assume that \( S \neq \varnothing \) . Let \( n \) be the smallest element of \( S \) . Since \( n \) cannot be a prime, there exist integers \( a > 1 \) and \( b > 1 \) such that \( {ab} = n \) . But then \( a < n \) and \( b < n \), so \( a \notin S \) as well as \( b \notin S \) . Hence both \( a \) and \( b \) have prime factorizations and so does their product \( n \) . This implies \( n \notin S \), therefore \( S \) is empty, as desired.\n\nWe now turn our attention to the uniqueness of the factorization. Suppose that \( n \) has two factorizations into primes\n\n\[ n = {p}_{1}{p}_{2}\cdots {p}_{r} \]\n\n\[ = {q}_{1}{q}_{2}\cdots {q}_{s} \]\n\nSo \( {p}_{1} \) divides \( {q}_{1}{q}_{2}\cdots {q}_{s} \), and we can apply Corollary 1.5 to conclude that \( {p}_{1} \mid {q}_{i} \) for some \( i \) . Since \( {q}_{i} \) is prime, we must have \( {p}_{1} = {q}_{i} \) . Continuing with this argument we find that the two factorizations of \( n \) are equal up to a permutation of the factors.
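The existence half of the argument can also be carried out constructively by trial division. A short Python illustration (this is not the proof's method, which argues by minimal counterexample):

```python
def factor(n):
    """Prime factorization of n > 1, as a list of primes with multiplicity
    (Theorem 1.6). Trial division; fine for small n."""
    factors = []
    p = 2
    while p * p <= n:
        while n % p == 0:   # strip out every copy of p
            factors.append(p)
            n //= p
        p += 1
    if n > 1:               # whatever remains is prime
        factors.append(n)
    return factors
```

For example, `factor(360)` gives \( \left\lbrack {2,2,2,3,3,5}\right\rbrack \); the uniqueness assertion of the theorem is what guarantees that any other correct procedure would return the same multiset of primes.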
|
Yes
|
Theorem 1.7 There are infinitely many primes.
|
Proof. Suppose not, and denote by \( {p}_{1},\ldots ,{p}_{n} \) the complete set of primes. Define\n\n\[ N = {p}_{1}{p}_{2}\cdots {p}_{n} + 1 \]\n\nSince \( N \) is larger than any \( {p}_{i} \), the integer \( N \) cannot be prime. Therefore, \( N \) is divisible by a prime, which must belong to our list. But this is absurd: each \( {p}_{i} \) divides the product \( {p}_{1}{p}_{2}\cdots {p}_{n} \), so if \( {p}_{i} \) also divided \( N \) it would divide the difference \( N - {p}_{1}{p}_{2}\cdots {p}_{n} = 1 \), which is impossible. This contradiction concludes the proof.
|
Yes
|
Lemma 1.8 The exponential and logarithm functions satisfy the following properties:\n\n(i) \( {e}^{\log x} = x \) .\n\n(ii) \( \log \left( {1 + x}\right) = x + E\left( x\right) \) where \( \left| {E\left( x\right) }\right| \leq {x}^{2} \) if \( \left| x\right| < 1/2 \) .\n\n(iii) If \( \log \left( {1 + x}\right) = y \) and \( \left| x\right| < 1/2 \), then \( \left| y\right| \leq 2\left| x\right| \) .
|
Proof. Property (i) is standard. To prove property (ii) we use the power series expansion of \( \log \left( {1 + x}\right) \) for \( \left| x\right| < 1 \), that is,\n\n(2)\n\n\[ \log \left( {1 + x}\right) = \mathop{\sum }\limits_{{n = 1}}^{\infty }\frac{{\left( -1\right) }^{n + 1}}{n}{x}^{n}. \]\n\nThen we have\n\n\[ E\left( x\right) = \log \left( {1 + x}\right) - x = - \frac{{x}^{2}}{2} + \frac{{x}^{3}}{3} - \frac{{x}^{4}}{4} + \cdots ,\]\n\nand the triangle inequality implies\n\n\[ \left| {E\left( x\right) }\right| \leq \frac{{x}^{2}}{2}\left( {1 + \left| x\right| + {\left| x\right| }^{2} + \cdots }\right) . \]\n\nTherefore, if \( \left| x\right| \leq 1/2 \) we can sum the geometric series on the right-hand side to find that\n\n\[ \left| {E\left( x\right) }\right| \leq \frac{{x}^{2}}{2}\left( {1 + \frac{1}{2} + \frac{1}{{2}^{2}} + \cdots }\right) \]\n\n\[ \leq \frac{{x}^{2}}{2}\left( \frac{1}{1 - 1/2}\right) \]\n\n\[ \leq {x}^{2}\text{.} \]\n\nThe proof of property (iii) is now immediate; if \( x \neq 0 \) and \( \left| x\right| \leq 1/2 \), then\n\n\[ \left| \frac{\log \left( {1 + x}\right) }{x}\right| \leq 1 + \left| \frac{E\left( x\right) }{x}\right| \]\n\n\[ \leq 1 + \left| x\right| \]\n\n\[ \leq 2\text{,} \]\n\nand if \( x = 0 \) ,(iii) is clearly also true.
|
Yes
|
Proposition 1.9 If \( {A}_{n} = 1 + {a}_{n} \) and \( \sum \left| {a}_{n}\right| \) converges, then the product \( \mathop{\prod }\limits_{n}{A}_{n} \) converges, and this product vanishes if and only if one of its factors \( {A}_{n} \) vanishes. Also, if \( {a}_{n} \neq 1 \) for all \( n \), then \( \mathop{\prod }\limits_{n}1/\left( {1 - {a}_{n}}\right) \) converges.
|
Proof. If \( \sum \left| {a}_{n}\right| \) converges, then for all large \( n \) we must have \( \left| {a}_{n}\right| < \) \( 1/2 \) . Disregarding finitely many terms if necessary, we may assume that this inequality holds for all \( n \) . Then we may write the partial products as follows:\n\n\[ \mathop{\prod }\limits_{{n = 1}}^{N}{A}_{n} = \mathop{\prod }\limits_{{n = 1}}^{N}{e}^{\log \left( {1 + {a}_{n}}\right) } = {e}^{{B}_{N}} \]\n\nwhere \( {B}_{N} = \mathop{\sum }\limits_{{n = 1}}^{N}{b}_{n} \) with \( {b}_{n} = \log \left( {1 + {a}_{n}}\right) \) . By the lemma, we know that \( \left| {b}_{n}\right| \leq 2\left| {a}_{n}\right| \), so that \( {B}_{N} \) converges to a real number, say \( B \) . Since the exponential function is continuous, we conclude that \( {e}^{{B}_{N}} \) converges to \( {e}^{B} \) as \( N \) goes to infinity, proving the first assertion of the proposition. Observe also that if \( 1 + {a}_{n} \neq 0 \) for all \( n \), the product converges to a non-zero limit since it is expressed as \( {e}^{B} \) .\n\nFinally observe that the partial products of \( \mathop{\prod }\limits_{n}1/\left( {1 - {a}_{n}}\right) \) are \( 1/\mathop{\prod }\limits_{{n = 1}}^{N}\left( {1 - {a}_{n}}\right) \), so the same argument as above proves that the product in the denominator converges to a non-zero limit.
|
Yes
|
Theorem 1.10 For every \( s > 1 \), we have\n\n\[ \zeta \left( s\right) = \mathop{\prod }\limits_{p}\frac{1}{1 - 1/{p}^{s}} \]\n\nwhere the product is taken over all primes.
|
Proof. Suppose \( M \) and \( N \) are positive integers with \( M > N \). Observe now that any positive integer \( n \leq N \) can be written uniquely as a product of primes, and that each prime must be less than or equal to \( N \) and repeated less than \( M \) times. Therefore\n\n\[ \mathop{\sum }\limits_{{n = 1}}^{N}\frac{1}{{n}^{s}} \leq \mathop{\prod }\limits_{{p \leq N}}\left( {1 + \frac{1}{{p}^{s}} + \frac{1}{{p}^{2s}} + \cdots + \frac{1}{{p}^{Ms}}}\right) \]\n\n\[ \leq \mathop{\prod }\limits_{{p \leq N}}\left( \frac{1}{1 - {p}^{-s}}\right) \]\n\n\[ \leq \mathop{\prod }\limits_{p}\left( \frac{1}{1 - {p}^{-s}}\right) \]\n\nLetting \( N \) tend to infinity now yields\n\n\[ \mathop{\sum }\limits_{{n = 1}}^{\infty }\frac{1}{{n}^{s}} \leq \mathop{\prod }\limits_{p}\left( \frac{1}{1 - {p}^{-s}}\right) \]\n\nFor the reverse inequality, we argue as follows. Again, by the fundamental theorem of arithmetic, we find that\n\n\[ \mathop{\prod }\limits_{{p \leq N}}\left( {1 + \frac{1}{{p}^{s}} + \frac{1}{{p}^{2s}} + \cdots + \frac{1}{{p}^{Ms}}}\right) \leq \mathop{\sum }\limits_{{n = 1}}^{\infty }\frac{1}{{n}^{s}}. \]\n\nLetting \( M \) tend to infinity gives\n\n\[ \mathop{\prod }\limits_{{p \leq N}}\left( \frac{1}{1 - {p}^{-s}}\right) \leq \mathop{\sum }\limits_{{n = 1}}^{\infty }\frac{1}{{n}^{s}} \]\n\nHence\n\n\[ \mathop{\prod }\limits_{p}\left( \frac{1}{1 - {p}^{-s}}\right) \leq \mathop{\sum }\limits_{{n = 1}}^{\infty }\frac{1}{{n}^{s}} \]\n\nand the proof of the product formula is complete.
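Numerically, the two sides of the product formula converge to the same value; at \( s = 2 \), for instance, both approach \( {\pi }^{2}/6 \). A rough Python check (the trial-division primality test is an illustrative shortcut, not part of the theorem):

```python
def zeta_partial(s, N):
    # partial sum of the zeta series: sum over 1 <= n <= N of 1/n^s
    return sum(1 / n**s for n in range(1, N + 1))

def euler_product(s, N):
    # product over primes p <= N of 1/(1 - p^{-s})
    prod = 1.0
    for p in range(2, N + 1):
        if all(p % q for q in range(2, int(p**0.5) + 1)):  # p is prime
            prod *= 1 / (1 - p**(-s))
    return prod
```

With \( s = 2 \) and \( N = {2000} \), both quantities agree with \( {\pi }^{2}/6 \approx {1.6449} \) to about three decimal places.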
|
Yes
|
Proposition 1.11 The series\n\n\[ \n\mathop{\sum }\limits_{p}1/p \n\]\n\ndiverges when the sum is taken over all primes \( p \) .
|
Proof. We take logarithms of both sides of the Euler formula. Since \( \log x \) is continuous, we may write the logarithm of the infinite product as the sum of the logarithms. Therefore, we obtain for \( s > 1 \)\n\n\[ \n- \mathop{\sum }\limits_{p}\log \left( {1 - 1/{p}^{s}}\right) = \log \zeta \left( s\right) \n\]\n\nSince \( \log \left( {1 + x}\right) = x + O\left( {\left| x\right| }^{2}\right) \) whenever \( \left| x\right| \leq 1/2 \), we get\n\n\[ \n- \mathop{\sum }\limits_{p}\left\lbrack {-1/{p}^{s} + O\left( {1/{p}^{2s}}\right) }\right\rbrack = \log \zeta \left( s\right) \n\]\n\nwhich gives\n\n\[ \n\mathop{\sum }\limits_{p}1/{p}^{s} + O\left( 1\right) = \log \zeta \left( s\right) \n\]\n\nThe term \( O\left( 1\right) \) appears because \( \mathop{\sum }\limits_{p}1/{p}^{2s} \leq \mathop{\sum }\limits_{{n = 1}}^{\infty }1/{n}^{2} \) . Now we let \( s \) tend to 1 from above, namely \( s \rightarrow {1}^{ + } \), and note that \( \zeta \left( s\right) \rightarrow \infty \) since \( \mathop{\sum }\limits_{{n = 1}}^{\infty }1/{n}^{s} \geq \mathop{\sum }\limits_{{n = 1}}^{M}1/{n}^{s} \), and therefore\n\n\[ \n\mathop{\liminf }\limits_{{s \rightarrow {1}^{ + }}}\mathop{\sum }\limits_{{n = 1}}^{\infty }1/{n}^{s} \geq \mathop{\sum }\limits_{{n = 1}}^{M}1/n\;\text{ for every }M. \n\]\n\nWe conclude that \( \mathop{\sum }\limits_{p}1/{p}^{s} \rightarrow \infty \) as \( s \rightarrow {1}^{ + } \), and since \( 1/p > 1/{p}^{s} \) for all \( s > 1 \), we finally have that\n\n\[ \n\mathop{\sum }\limits_{p}1/p = \infty \n\]
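The divergence is very slow: by Mertens' theorem the partial sums grow like \( \log \log N \). A quick Python illustration using a sieve of Eratosthenes (the sieve helper is an addition made here, not part of the text):

```python
def prime_harmonic(N):
    # sum of 1/p over primes p <= N, primes found by a sieve of Eratosthenes
    sieve = [True] * (N + 1)
    total = 0.0
    for p in range(2, N + 1):
        if sieve[p]:
            total += 1.0 / p
            for m in range(p * p, N + 1, p):
                sieve[m] = False
    return total
```

Even at \( N = {10}^{5} \) the sum is only about \( {2.7} \), consistent with \( \log \log N \) growth, yet the proposition guarantees it eventually exceeds any bound.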
|
Yes
|
Lemma 2.2 The Dirichlet characters are multiplicative. Moreover,
|
\[ {\delta }_{\ell }\left( m\right) = \frac{1}{\varphi \left( q\right) }\mathop{\sum }\limits_{\chi }\overline{\chi \left( \ell \right) }\chi \left( m\right) \]\n\nwhere the sum is over all Dirichlet characters. With the above lemma we have taken our first step towards a proof of the theorem, since this lemma shows that\n\n\[ \mathop{\sum }\limits_{{p \equiv \ell }}\frac{1}{{p}^{s}} = \mathop{\sum }\limits_{p}\frac{{\delta }_{\ell }\left( p\right) }{{p}^{s}} \]\n\n\[ = \frac{1}{\varphi \left( q\right) }\mathop{\sum }\limits_{\chi }\overline{\chi \left( \ell \right) }\mathop{\sum }\limits_{p}\frac{\chi \left( p\right) }{{p}^{s}}. \]\n\nThus it suffices to understand the behavior of \( \mathop{\sum }\limits_{p}\chi \left( p\right) {p}^{-s} \) as \( s \rightarrow {1}^{ + } \) . In fact, we divide the above sum into two parts depending on whether or not \( \chi \) is trivial. So we have\n\n(4)\n\n\[ \mathop{\sum }\limits_{{p \equiv \ell }}\frac{1}{{p}^{s}} = \frac{1}{\varphi \left( q\right) }\mathop{\sum }\limits_{p}\frac{{\chi }_{0}\left( p\right) }{{p}^{s}} + \frac{1}{\varphi \left( q\right) }\mathop{\sum }\limits_{{\chi \neq {\chi }_{0}}}\overline{\chi \left( \ell \right) }\mathop{\sum }\limits_{p}\frac{\chi \left( p\right) }{{p}^{s}} \]\n\n\[ = \frac{1}{\varphi \left( q\right) }\mathop{\sum }\limits_{{p\text{ not dividing }q}}\frac{1}{{p}^{s}} + \frac{1}{\varphi \left( q\right) }\mathop{\sum }\limits_{{\chi \neq {\chi }_{0}}}\overline{\chi \left( \ell \right) }\mathop{\sum }\limits_{p}\frac{\chi \left( p\right) }{{p}^{s}}. \]\n\nSince there are only finitely many primes dividing \( q \), Euler’s theorem (Proposition 1.11) implies that the first sum on the right-hand side diverges when \( s \) tends to 1 . These observations show that Dirichlet’s theorem is a consequence of the following assertion.
|
No
|
Theorem 2.3 If \( \chi \) is a nontrivial Dirichlet character, then the sum\n\n\[ \mathop{\sum }\limits_{p}\frac{\chi \left( p\right) }{{p}^{s}} \]\n\nremains bounded as \( s \rightarrow {1}^{ + } \) .
|
The proof of Theorem 2.3 requires the introduction of the \( L \) -functions, to which we now turn.
|
No
|
Theorem 2.4 If \( s > 1 \), then\n\n\[ \mathop{\sum }\limits_{{n = 1}}^{\infty }\frac{\chi \left( n\right) }{{n}^{s}} = \mathop{\prod }\limits_{p}\frac{1}{\left( 1 - \chi \left( p\right) {p}^{-s}\right) } \]\n\nwhere the product is over all primes.
|
Null
|
No
|
Proposition 3.1 The logarithm function \( {\log }_{1} \) satisfies the following properties:\n\n(i) If \( \left| z\right| < 1 \), then\n\n\[ \n{e}^{{\log }_{1}\left( \frac{1}{1 - z}\right) } = \frac{1}{1 - z}.\n\]\n\n(ii) If \( \left| z\right| < 1 \), then\n\n\[ \n{\log }_{1}\left( \frac{1}{1 - z}\right) = z + {E}_{1}\left( z\right)\n\]\n\nwhere the error \( {E}_{1} \) satisfies \( \left| {{E}_{1}\left( z\right) }\right| \leq {\left| z\right| }^{2} \) if \( \left| z\right| < 1/2 \) .\n\n(iii) If \( \left| z\right| < 1/2 \), then\n\n\[ \n\left| {{\log }_{1}\left( \frac{1}{1 - z}\right) }\right| \leq 2\left| z\right|\n\]
|
Proof. To establish the first property, let \( z = r{e}^{i\theta } \) with \( 0 \leq r < 1 \) , and observe that it suffices to show that\n\n(5)\n\n\[ \n\left( {1 - r{e}^{i\theta }}\right) {e}^{\mathop{\sum }\limits_{{k = 1}}^{\infty }{\left( r{e}^{i\theta }\right) }^{k}/k} = 1.\n\]\n\nTo do so, we differentiate the left-hand side with respect to \( r \), and this gives\n\n\[ \n\left\lbrack {-{e}^{i\theta } + \left( {1 - r{e}^{i\theta }}\right) {\left( \mathop{\sum }\limits_{{k = 1}}^{\infty }{\left( r{e}^{i\theta }\right) }^{k}/k\right) }^{\prime }}\right\rbrack {e}^{\mathop{\sum }\limits_{{k = 1}}^{\infty }{\left( r{e}^{i\theta }\right) }^{k}/k}.\n\]\n\nThe term in brackets equals\n\n\[ \n- {e}^{i\theta } + \left( {1 - r{e}^{i\theta }}\right) {e}^{i\theta }\left( {\mathop{\sum }\limits_{{k = 1}}^{\infty }{\left( r{e}^{i\theta }\right) }^{k - 1}}\right) = - {e}^{i\theta } + \left( {1 - r{e}^{i\theta }}\right) {e}^{i\theta }\frac{1}{1 - r{e}^{i\theta }} = 0.\n\]\n\nHaving found that the left-hand side of the equation (5) is constant, we set \( r = 0 \) and get the desired result.\n\nThe proofs of the second and third properties are the same as their real counterparts given in Lemma 1.8.
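Property (i) can be checked numerically by summing the defining series \( {\log }_{1}\left( \frac{1}{1 - z}\right) = \mathop{\sum }\limits_{{k = 1}}^{\infty }{z}^{k}/k \). A Python sketch (the truncation at 200 terms is an arbitrary choice, adequate for \( \left| z\right| \) well below 1):

```python
import cmath

def log1_inv(z, terms=200):
    # log_1(1/(1-z)) = sum over k >= 1 of z^k / k, valid for |z| < 1
    return sum(z**k / k for k in range(1, terms + 1))
```

With \( z = {0.3} + {0.2i} \), exponentiating the truncated series recovers \( 1/\left( {1 - z}\right) \) to machine precision (property (i)), and the bounds of properties (ii) and (iii) are also visible numerically.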
|
Yes
|
Proposition 3.2 If \( \sum \left| {a}_{n}\right| \) converges, and \( {a}_{n} \neq 1 \) for all \( n \), then\n\n\[ \mathop{\prod }\limits_{{n = 1}}^{\infty }\left( \frac{1}{1 - {a}_{n}}\right) \]\n\nconverges. Moreover, this product is non-zero.
|
Proof. For \( n \) large enough, \( \left| {a}_{n}\right| < 1/2 \), so we may assume without loss of generality that this inequality holds for all \( n \geq 1 \) . Then\n\n\[ \mathop{\prod }\limits_{{n = 1}}^{N}\left( \frac{1}{1 - {a}_{n}}\right) = \mathop{\prod }\limits_{{n = 1}}^{N}{e}^{{\log }_{1}\left( \frac{1}{1 - {a}_{n}}\right) } = {e}^{\mathop{\sum }\limits_{{n = 1}}^{N}{\log }_{1}\left( \frac{1}{1 - {a}_{n}}\right) }.\]\n\nBut we know from the previous proposition that\n\n\[ \left| {{\log }_{1}\left( \frac{1}{1 - z}\right) }\right| \leq 2\left| z\right| \]\n\nso the fact that the series \( \sum \left| {a}_{n}\right| \) converges, immediately implies that the limit\n\n\[ \mathop{\lim }\limits_{{N \rightarrow \infty }}\mathop{\sum }\limits_{{n = 1}}^{N}{\log }_{1}\left( \frac{1}{1 - {a}_{n}}\right) = A \]\n\nexists. Since the exponential function is continuous, we conclude that the product converges to \( {e}^{A} \), which is clearly non-zero.
|
Yes
|
Proposition 3.3 Suppose \( {\chi }_{0} \) is the trivial Dirichlet character,\n\n\[ \n{\chi }_{0}\left( n\right) = \left\{ \begin{array}{ll} 1 & \text{ if }n\text{ and }q\text{ are relatively prime,} \\ 0 & \text{ otherwise,} \end{array}\right.\n\]\n\nand \( q = {p}_{1}^{{a}_{1}}\cdots {p}_{N}^{{a}_{N}} \) is the prime factorization of \( q \) . Then\n\n\[ \nL\left( {s,{\chi }_{0}}\right) = \left( {1 - {p}_{1}^{-s}}\right) \left( {1 - {p}_{2}^{-s}}\right) \cdots \left( {1 - {p}_{N}^{-s}}\right) \zeta \left( s\right) .\n\]\n\nTherefore \( L\left( {s,{\chi }_{0}}\right) \rightarrow \infty \) as \( s \rightarrow {1}^{ + } \) .
|
Proof. The identity follows at once on comparing the Dirichlet and Euler product formulas. The final statement holds because \( \zeta \left( s\right) \rightarrow \infty \) as \( s \rightarrow {1}^{ + } \) .
|
Yes
|
Lemma 3.5 If \( \chi \) is a non-trivial Dirichlet character, then\n\n\[ \left| {\mathop{\sum }\limits_{{n = 1}}^{k}\chi \left( n\right) }\right| \leq q,\;\text{ for any }k. \]
|
Proof. First, we recall that\n\n\[ \mathop{\sum }\limits_{{n = 1}}^{q}\chi \left( n\right) = 0 \]\n\nIn fact, if \( S \) denotes the sum and \( a \in {\mathbb{Z}}^{ * }\left( q\right) \), then the multiplicative property of the Dirichlet character \( \chi \) gives\n\n\[ \chi \left( a\right) S = \sum \chi \left( a\right) \chi \left( n\right) = \sum \chi \left( {an}\right) = \sum \chi \left( n\right) = S. \]\n\nSince \( \chi \) is non-trivial, \( \chi \left( a\right) \neq 1 \) for some \( a \), hence \( S = 0 \) . We now write \( k = {aq} + b \) with \( 0 \leq b < q \), and note that\n\n\[ \mathop{\sum }\limits_{{n = 1}}^{k}\chi \left( n\right) = \mathop{\sum }\limits_{{n = 1}}^{{aq}}\chi \left( n\right) + \mathop{\sum }\limits_{{{aq} < n \leq {aq} + b}}\chi \left( n\right) = \mathop{\sum }\limits_{{{aq} < n \leq {aq} + b}}\chi \left( n\right) ,\]\n\nand there are no more than \( q \) terms in the last sum. The proof is complete once we recall that \( \left| {\chi \left( n\right) }\right| \leq 1 \) .
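For a concrete instance, take the non-trivial character modulo \( q = 4 \) (chosen here purely for illustration): its partial sums cycle through \( 1,1,0,0 \) and so never exceed \( q \) in absolute value, exactly as the lemma asserts.

```python
def chi_mod4(n):
    # the non-trivial Dirichlet character modulo 4
    if n % 2 == 0:
        return 0
    return 1 if n % 4 == 1 else -1

# partial sums sum_{n=1}^{k} chi(n) for k = 1, ..., 200
partial_sums = [sum(chi_mod4(n) for n in range(1, k + 1)) for k in range(1, 201)]
```

In this example the partial sums are in fact bounded by 1, well inside the lemma's bound of \( q = 4 \).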
|
Yes
|
Proposition 3.6 If \( s > 1 \), then\n\n\[ \n{e}^{{\log }_{2}L\left( {s,\chi }\right) } = L\left( {s,\chi }\right) \n\]\n\nMoreover\n\n\[ \n{\log }_{2}L\left( {s,\chi }\right) = \mathop{\sum }\limits_{p}{\log }_{1}\left( \frac{1}{1 - \chi \left( p\right) /{p}^{s}}\right) .\n\]
|
Proof. Differentiating \( {e}^{-{\log }_{2}L\left( {s,\chi }\right) }L\left( {s,\chi }\right) \) with respect to \( s \) gives\n\n\[ \n- \frac{{L}^{\prime }\left( {s,\chi }\right) }{L\left( {s,\chi }\right) }{e}^{-{\log }_{2}L\left( {s,\chi }\right) }L\left( {s,\chi }\right) + {e}^{-{\log }_{2}L\left( {s,\chi }\right) }{L}^{\prime }\left( {s,\chi }\right) = 0.\n\]\n\nSo \( {e}^{-{\log }_{2}L\left( {s,\chi }\right) }L\left( {s,\chi }\right) \) is constant, and this constant can be seen to be 1 by letting \( s \) tend to infinity. This proves the first conclusion.\n\nTo prove the equality between the logarithms, we fix \( s \) and take the exponential of both sides. The left-hand side becomes \( {e}^{{\log }_{2}L\left( {s,\chi }\right) } = L\left( {s,\chi }\right) \), and the right-hand side becomes\n\n\[ \n{e}^{\mathop{\sum }\limits_{p}{\log }_{1}\left( \frac{1}{1 - \chi \left( p\right) /{p}^{s}}\right) } = \mathop{\prod }\limits_{p}{e}^{{\log }_{1}\left( \frac{1}{1 - \chi \left( p\right) /{p}^{s}}\right) } = \mathop{\prod }\limits_{p}\left( \frac{1}{1 - \chi \left( p\right) /{p}^{s}}\right) = L\left( {s,\chi }\right) ,\n\]\n\nby (i) in Proposition 3.1 and the Dirichlet product formula. Therefore, for each \( s \) there exists an integer \( M\left( s\right) \) so that\n\n\[ \n{\log }_{2}L\left( {s,\chi }\right) - \mathop{\sum }\limits_{p}{\log }_{1}\left( \frac{1}{1 - \chi \left( p\right) /{p}^{s}}\right) = {2\pi iM}\left( s\right) .\n\]\n\nAs the reader may verify, the left-hand side is continuous in \( s \), and this implies the continuity of the function \( M\left( s\right) \). But \( M\left( s\right) \) is integer-valued so we conclude that \( M\left( s\right) \) is constant, and this constant can be seen to be 0 by letting \( s \) go to infinity.
|
Yes
|
Lemma 3.8 If \( s > 1 \), then\n\n\[ \mathop{\prod }\limits_{\chi }L\left( {s,\chi }\right) \geq 1 \]\n\nwhere the product is taken over all Dirichlet characters. In particular the product is real-valued.
|
Proof. We have shown earlier that for \( s > 1 \)\n\n\[ L\left( {s,\chi }\right) = \exp \left( {\mathop{\sum }\limits_{p}{\log }_{1}\left( \frac{1}{1 - \chi \left( p\right) {p}^{-s}}\right) }\right) .\n\nHence,\n\n\[ \mathop{\prod }\limits_{\chi }L\left( {s,\chi }\right) = \exp \left( {\mathop{\sum }\limits_{\chi }\mathop{\sum }\limits_{p}{\log }_{1}\left( \frac{1}{1 - \chi \left( p\right) {p}^{-s}}\right) }\right)\n\n\[ = \exp \left( {\mathop{\sum }\limits_{\chi }\mathop{\sum }\limits_{p}\mathop{\sum }\limits_{{k = 1}}^{\infty }\frac{1}{k}\frac{\chi \left( {p}^{k}\right) }{{p}^{ks}}}\right)\n\n\[ = \exp \left( {\mathop{\sum }\limits_{p}\mathop{\sum }\limits_{{k = 1}}^{\infty }\mathop{\sum }\limits_{\chi }\frac{1}{k}\frac{\chi \left( {p}^{k}\right) }{{p}^{ks}}}\right) .\n\nBecause of Lemma 2.2 (with \( \ell = 1 \) ) we have \( \mathop{\sum }\limits_{\chi }\chi \left( {p}^{k}\right) = \varphi \left( q\right) {\delta }_{1}\left( {p}^{k}\right) \), and hence\n\n\[ \mathop{\prod }\limits_{\chi }L\left( {s,\chi }\right) = \exp \left( {\varphi \left( q\right) \mathop{\sum }\limits_{p}\mathop{\sum }\limits_{{k = 1}}^{\infty }\frac{1}{k}\frac{{\delta }_{1}\left( {p}^{k}\right) }{{p}^{ks}}}\right) \geq 1,\n\nsince the term in the exponential is non-negative.
|
Yes
|
Proposition 3.10 If \( N \) is a positive integer, then:\n\n(i) \( \mathop{\sum }\limits_{{1 \leq n \leq N}}\frac{1}{n} = {\int }_{1}^{N}\frac{dx}{x} + O\left( 1\right) = \log N + O\left( 1\right) \).\n\n(ii) More precisely, there exists a real number \( \gamma \), called Euler’s constant, so that\n\n\[ \mathop{\sum }\limits_{{1 \leq n \leq N}}\frac{1}{n} = \log N + \gamma + O\left( {1/N}\right) \]
|
Proof. It suffices to establish the more refined estimate given in part (ii). Let\n\n\[ {\gamma }_{n} = \frac{1}{n} - {\int }_{n}^{n + 1}\frac{dx}{x} \]\n\nSince \( 1/x \) is decreasing, we clearly have\n\n\[ 0 \leq {\gamma }_{n} \leq \frac{1}{n} - \frac{1}{n + 1} \leq \frac{1}{{n}^{2}} \]\n\nso the series \( \mathop{\sum }\limits_{{n = 1}}^{\infty }{\gamma }_{n} \) converges to a limit which we denote by \( \gamma \) . Moreover, if we estimate \( \sum f\left( n\right) \) by \( \int f\left( x\right) {dx} \), where \( f\left( x\right) = 1/{x}^{2} \), we find\n\n\[ \mathop{\sum }\limits_{{n = N + 1}}^{\infty }{\gamma }_{n} \leq \mathop{\sum }\limits_{{n = N + 1}}^{\infty }\frac{1}{{n}^{2}} \leq {\int }_{N}^{\infty }\frac{dx}{{x}^{2}} = O\left( {1/N}\right) . \]\n\nTherefore\n\n\[ \mathop{\sum }\limits_{{n = 1}}^{N}\frac{1}{n} - {\int }_{1}^{N}\frac{dx}{x} = \gamma - \mathop{\sum }\limits_{{n = N + 1}}^{\infty }{\gamma }_{n} + {\int }_{N}^{N + 1}\frac{dx}{x}, \]\n\nand this last integral is \( O\left( {1/N}\right) \) as \( N \rightarrow \infty \) .
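The proposition gives a practical way to compute \( \gamma \approx {0.5772} \): the difference \( \mathop{\sum }\limits_{{n \leq N}}1/n - \log N \) converges at rate \( O\left( {1/N}\right) \). In Python:

```python
import math

def gamma_estimate(N):
    # sum_{1 <= n <= N} 1/n - log N  ->  Euler's constant, with error O(1/N)
    return sum(1.0 / n for n in range(1, N + 1)) - math.log(N)
```

Already at \( N = {1000} \) the estimate agrees with \( \gamma = {0.57721}\ldots \) to three decimal places, and at \( N = {10}^{5} \) to about five.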
|
Yes
|
Proposition 3.11 If \( N \) is a positive integer, then\n\n\[ \mathop{\sum }\limits_{{1 \leq n \leq N}}\frac{1}{{n}^{1/2}} = {\int }_{1}^{N}\frac{dx}{{x}^{1/2}} + {c}^{\prime } + O\left( {1/{N}^{1/2}}\right) \]\n\n\[ = 2{N}^{1/2} + c + O\left( {1/{N}^{1/2}}\right) \text{.} \]
|
The proof is essentially a repetition of the proof of the previous proposition, this time using the fact that\n\n\[ \left| {\frac{1}{{n}^{1/2}} - \frac{1}{{\left( n + 1\right) }^{1/2}}}\right| \leq \frac{C}{{n}^{3/2}} \]\n\nThis last inequality follows from the mean-value theorem applied to \( f\left( x\right) = {x}^{-1/2} \), between \( x = n \) and \( x = n + 1 \) .
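Numerically, the difference in Proposition 3.11 does settle down to a constant; the cutoffs below are arbitrary choices for this illustration:

```python
# sum_{n<=N} n^{-1/2} - 2 sqrt(N) tends to a constant c, with error O(1/sqrt(N)).
def diff(N):
    return sum(n ** -0.5 for n in range(1, N + 1)) - 2.0 * N ** 0.5

d1, d2 = diff(10 ** 4), diff(10 ** 6)
print(d1, d2)  # both near -1.46, and they agree to about 0.005
```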
|
Yes
|
Theorem 3.12 If \( N \) is a positive integer, then\n\n\[ \frac{1}{N}\mathop{\sum }\limits_{{k = 1}}^{N}d\left( k\right) = \log N + O\left( 1\right) \]\n\nMore precisely,\n\n\[ \frac{1}{N}\mathop{\sum }\limits_{{k = 1}}^{N}d\left( k\right) = \log N + \left( {{2\gamma } - 1}\right) + O\left( {1/{N}^{1/2}}\right) ,\]\n\nwhere \( \gamma \) is Euler’s constant.
|
Proof. Let \( {S}_{N} = \mathop{\sum }\limits_{{k = 1}}^{N}d\left( k\right) \) . We observed that summing \( F = 1 \) along hyperbolas gives \( {S}_{N} \) . Summing vertically, we find\n\n\[ {S}_{N} = \mathop{\sum }\limits_{{1 \leq m \leq N}}\mathop{\sum }\limits_{{1 \leq n \leq N/m}}1 \]\n\nBut \( \mathop{\sum }\limits_{{1 \leq n \leq N/m}}1 = \left\lbrack {N/m}\right\rbrack = N/m + O\left( 1\right) \), where \( \left\lbrack x\right\rbrack \) denotes the greatest integer \( \leq x \) . Therefore\n\n\[ {S}_{N} = \mathop{\sum }\limits_{{1 \leq m \leq N}}\left( {N/m + O\left( 1\right) }\right) = N\left( {\mathop{\sum }\limits_{{1 \leq m \leq N}}1/m}\right) + O\left( N\right) . \]\n\nHence, by part (i) of Proposition 3.10,\n\n\[ \frac{{S}_{N}}{N} = \log N + O\left( 1\right) \]\n\nwhich gives the first conclusion.
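A direct check of the refined asymptotic, using a divisor-counting sieve (illustration only; the cutoff \( N = {10}^{5} \) is an arbitrary choice):

```python
import math

# Average of d(k) for k <= N versus log N + (2*gamma - 1).
def divisor_average(N):
    d = [0] * (N + 1)
    for m in range(1, N + 1):          # sieve: m divides m, 2m, 3m, ...
        for k in range(m, N + 1, m):
            d[k] += 1
    return sum(d[1:]) / N

gamma = 0.5772156649
N = 10 ** 5
avg = divisor_average(N)
predicted = math.log(N) + 2 * gamma - 1
print(avg, predicted)  # the two agree to within 0.01
```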
|
Yes
|
Proposition 3.13 The following statements are true:\n\n(i) \( {S}_{N} \geq c\log N \) for some constant \( c > 0 \) .\n\n(ii) \( {S}_{N} = 2{N}^{1/2}L\left( {1,\chi }\right) + O\left( 1\right) \) .
|
It suffices to prove the proposition, since the assumption \( L\left( {1,\chi }\right) = 0 \) would give an immediate contradiction.\n\nWe first sum along hyperbolas. Observe that\n\n\[ \mathop{\sum }\limits_{{{nm} = k}}\frac{\chi \left( n\right) }{{\left( nm\right) }^{1/2}} = \frac{1}{{k}^{1/2}}\mathop{\sum }\limits_{{n \mid k}}\chi \left( n\right) . \]\n\nFor conclusion (i) it will be enough to show the following lemma.\n\nLemma 3.14 \( \mathop{\sum }\limits_{{n \mid k}}\chi \left( n\right) \geq \left\{ \begin{array}{ll} 0 & \text{ for all }k \\ 1 & \text{ if }k = {\ell }^{2}\text{ for some }\ell \in \mathbb{Z}. \end{array}\right. \)\n\nFrom the lemma, we then get\n\n\[ {S}_{N} \geq \mathop{\sum }\limits_{{k = {\ell }^{2},\ell \leq {N}^{1/2}}}\frac{1}{{k}^{1/2}} \geq c\log N \]\n\nwhere the last inequality follows from (i) in Proposition 3.10.\n\nThe proof of the lemma is simple. If \( k \) is a power of a prime, say \( k = {p}^{a} \), then the divisors of \( k \) are \( 1, p,{p}^{2},\ldots ,{p}^{a} \) and\n\n\[ \mathop{\sum }\limits_{{n \mid k}}\chi \left( n\right) = \chi \left( 1\right) + \chi \left( p\right) + \chi \left( {p}^{2}\right) + \cdots + \chi \left( {p}^{a}\right) \]\n\n\[ = 1 + \chi \left( p\right) + \chi {\left( p\right) }^{2} + \cdots + \chi {\left( p\right) }^{a}. \]\n\nSo this sum is equal to\n\n\[ \left\{ \begin{matrix} a + 1 & \text{ if }\chi \left( p\right) = 1, \\ 1 & \text{ if }\chi \left( p\right) = - 1\text{ and }a\text{ is even,} \\ 0 & \text{ if }\chi \left( p\right) = - 1\text{ and }a\text{ is odd,} \\ 1 & \text{ if }\chi \left( p\right) = 0,\text{ that is }p \mid q. \end{matrix}\right. \]\n\nIn general, if \( k = {p}_{1}^{{a}_{1}}\cdots {p}_{N}^{{a}_{N}} \), then any divisor of \( k \) is of the form \( {p}_{1}^{{b}_{1}}\cdots {p}_{N}^{{b}_{N}} \) where \( 0 \leq {b}_{j} \leq {a}_{j} \) for all \( j \) . 
Therefore, the multiplicative property of \( \chi \) gives\n\n\[ \mathop{\sum }\limits_{{n \mid k}}\chi \left( n\right) = \mathop{\prod }\limits_{{j = 1}}^{N}\left( {\chi \left( 1\right) + \chi \left( {p}_{j}\right) + \chi \left( {p}_{j}^{2}\right) + \cdots + \chi \left( {p}_{j}^{{a}_{j}}\right) }\right) ,\]\n\nand the proof is complete.
|
Yes
|
Lemma 3.14 \( \mathop{\sum }\limits_{{n \mid k}}\chi \left( n\right) \geq \left\{ \begin{array}{ll} 0 & \text{ for all }k \\ 1 & \text{ if }k = {\ell }^{2}\text{ for some }\ell \in \mathbb{Z}. \end{array}\right. \)
|
The proof of the lemma is simple. If \( k \) is a power of a prime, say \( k = {p}^{a} \), then the divisors of \( k \) are \( 1, p,{p}^{2},\ldots ,{p}^{a} \) and\n\n\[ \mathop{\sum }\limits_{{n \mid k}}\chi \left( n\right) = \chi \left( 1\right) + \chi \left( p\right) + \chi \left( {p}^{2}\right) + \cdots + \chi \left( {p}^{a}\right) \]\n\n\[ = 1 + \chi \left( p\right) + \chi {\left( p\right) }^{2} + \cdots + \chi {\left( p\right) }^{a}. \]\n\nSo this sum is equal to\n\n\[ \left\{ \begin{matrix} a + 1 & \text{ if }\chi \left( p\right) = 1, \\ 1 & \text{ if }\chi \left( p\right) = - 1\text{ and }a\text{ is even,} \\ 0 & \text{ if }\chi \left( p\right) = - 1\text{ and }a\text{ is odd,} \\ 1 & \text{ if }\chi \left( p\right) = 0,\text{ that is }p \mid q. \end{matrix}\right. \]\n\nIn general, if \( k = {p}_{1}^{{a}_{1}}\cdots {p}_{N}^{{a}_{N}} \), then any divisor of \( k \) is of the form \( {p}_{1}^{{b}_{1}}\cdots {p}_{N}^{{b}_{N}} \) where \( 0 \leq {b}_{j} \leq {a}_{j} \) for all \( j \) . Therefore, the multiplicative property of \( \chi \) gives\n\n\[ \mathop{\sum }\limits_{{n \mid k}}\chi \left( n\right) = \mathop{\prod }\limits_{{j = 1}}^{N}\left( {\chi \left( 1\right) + \chi \left( {p}_{j}\right) + \chi \left( {p}_{j}^{2}\right) + \cdots + \chi \left( {p}_{j}^{{a}_{j}}\right) }\right) ,\]\n\nand the proof is complete.
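The lemma is easy to test for a concrete character. The sketch below uses the nontrivial character modulo 4 as a sample choice (the text works with a general real character \( \chi \) mod \( q \)):

```python
import math

# Sample character mod 4: chi(n) = 0, 1, 0, -1 according as n = 0, 1, 2, 3 (mod 4).
def chi(n):
    return (0, 1, 0, -1)[n % 4]

def S(k):  # sum of chi(n) over the divisors n of k
    return sum(chi(n) for n in range(1, k + 1) if k % n == 0)

violations = [k for k in range(1, 2001)
              if S(k) < 0 or (math.isqrt(k) ** 2 == k and S(k) < 1)]
print(violations)  # [] : both inequalities of the lemma hold for every k tested
```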
|
Yes
|
Lemma 3.15 For all integers \( 0 < a < b \) we have\n\n(i) \( \mathop{\sum }\limits_{{n = a}}^{b}\frac{\chi \left( n\right) }{{n}^{1/2}} = O\left( {a}^{-1/2}\right) \), \n\n(ii) \( \mathop{\sum }\limits_{{n = a}}^{b}\frac{\chi \left( n\right) }{n} = O\left( {a}^{-1}\right) \).
|
Proof. This argument is similar to the proof of Proposition 3.4; we use summation by parts. Let \( {s}_{n} = \mathop{\sum }\limits_{{1 \leq k \leq n}}\chi \left( k\right) \), and remember that \( \left| {s}_{n}\right| \leq q \) for all \( n \) . Then\n\n\[ \mathop{\sum }\limits_{{n = a}}^{b}\frac{\chi \left( n\right) }{{n}^{1/2}} = \mathop{\sum }\limits_{{n = a}}^{{b - 1}}{s}_{n}\left\lbrack {{n}^{-1/2} - {\left( n + 1\right) }^{-1/2}}\right\rbrack + O\left( {a}^{-1/2}\right) \]\n\n\[ = O\left( {\mathop{\sum }\limits_{{n = a}}^{\infty }{n}^{-3/2}}\right) + O\left( {a}^{-1/2}\right) . \]\n\nBy comparing the sum \( \mathop{\sum }\limits_{{n = a}}^{\infty }{n}^{-3/2} \) with the integral of \( f\left( x\right) = {x}^{-3/2} \), we find that the former is also \( O\left( {a}^{-1/2}\right) \). \n\nA similar argument establishes (ii).
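The \( O\left( {a}^{-1/2}\right) \) bound of part (i) can be observed numerically. The character mod 4 and the constant 4 in the bound below are sample choices for this illustration, not data from the text:

```python
def chi(n):
    # sample nontrivial character mod 4 (an illustrative choice of chi)
    return (0, 1, 0, -1)[n % 4]

def worst_tail(a, window=500):
    # largest |sum_{n=a}^{b} chi(n)/sqrt(n)| over b in [a, a + window]
    partial, worst = 0.0, 0.0
    for n in range(a, a + window + 1):
        partial += chi(n) / n ** 0.5
        worst = max(worst, abs(partial))
    return worst

rows = [(a, worst_tail(a)) for a in (10, 100, 1000, 10000)]
for a, w in rows:
    print(a, w, 4.0 / a ** 0.5)  # the observed maximum stays under the bound
```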
|
Yes
|
Lemma 1.2 If \( f \) is real-valued integrable on \( \left\lbrack {a, b}\right\rbrack \) and \( \varphi \) is a real-valued continuous function on \( \mathbb{R} \), then \( \varphi \circ f \) is also integrable on \( \left\lbrack {a, b}\right\rbrack \) .
|
Proof. Let \( \epsilon > 0 \) and remember that \( f \) is bounded, say \( \left| f\right| \leq M \) . Since \( \varphi \) is uniformly continuous on \( \left\lbrack {-M, M}\right\rbrack \) we may choose \( \delta > 0 \) so that if \( s, t \in \left\lbrack {-M, M}\right\rbrack \) and \( \left| {s - t}\right| < \delta \), then \( \left| {\varphi \left( s\right) - \varphi \left( t\right) }\right| < \epsilon \) . Now choose a partition \( P = \left\{ {{x}_{0},\ldots ,{x}_{N}}\right\} \) of \( \left\lbrack {a, b}\right\rbrack \) with \( \mathcal{U}\left( {P, f}\right) - \mathcal{L}\left( {P, f}\right) < {\delta }^{2} \) . Let \( {I}_{j} = \) \( \left\lbrack {{x}_{j - 1},{x}_{j}}\right\rbrack \) and distinguish two classes: we write \( j \in \Lambda \) if \( \mathop{\sup }\limits_{{x \in {I}_{j}}}f\left( x\right) - \) \( \mathop{\inf }\limits_{{x \in {I}_{j}}}f\left( x\right) < \delta \) so that by construction\n\n\[ \mathop{\sup }\limits_{{x \in {I}_{j}}}\varphi \circ f\left( x\right) - \mathop{\inf }\limits_{{x \in {I}_{j}}}\varphi \circ f\left( x\right) < \epsilon . \]\n\nOtherwise, we write \( j \in {\Lambda }^{\prime } \) and note that\n\n\[ \delta \mathop{\sum }\limits_{{j \in {\Lambda }^{\prime }}}\left| {I}_{j}\right| \leq \mathop{\sum }\limits_{{j \in {\Lambda }^{\prime }}}\left\lbrack {\mathop{\sup }\limits_{{x \in {I}_{j}}}f\left( x\right) - \mathop{\inf }\limits_{{x \in {I}_{j}}}f\left( x\right) }\right\rbrack \left| {I}_{j}\right| \leq {\delta }^{2} \]\n\nso \( \mathop{\sum }\limits_{{j \in {\Lambda }^{\prime }}}\left| {I}_{j}\right| < \delta \) . Therefore, separating the cases \( j \in \Lambda \) and \( j \in {\Lambda }^{\prime } \) we find that\n\n\[ \mathcal{U}\left( {P,\varphi \circ f}\right) - \mathcal{L}\left( {P,\varphi \circ f}\right) \leq \epsilon \left( {b - a}\right) + 2\mathcal{B}\delta , \]\n\nwhere \( \mathcal{B} \) is a bound for \( \varphi \) on \( \left\lbrack {-M, M}\right\rbrack \) . 
Since we can also choose \( \delta < \epsilon \) , we see that the proposition is proved.
|
Yes
|
Proposition 1.3 A bounded monotonic function \( f \) on an interval \( \left\lbrack {a, b}\right\rbrack \) is integrable.
|
Proof. We may assume without loss of generality that \( a = 0, b = 1 \) , and \( f \) is monotonically increasing. Then, for each \( N \), we choose the uniform partition \( {P}_{N} \) given by \( {x}_{j} = j/N \) for all \( j = 0,\ldots, N \) . If \( {\alpha }_{j} = \) \( f\left( {x}_{j}\right) \), then we have\n\n\[ \mathcal{U}\left( {{P}_{N}, f}\right) = \frac{1}{N}\mathop{\sum }\limits_{{j = 1}}^{N}{\alpha }_{j}\;\text{ and }\;\mathcal{L}\left( {{P}_{N}, f}\right) = \frac{1}{N}\mathop{\sum }\limits_{{j = 1}}^{N}{\alpha }_{j - 1}. \]\n\nTherefore, if \( \left| {f\left( x\right) }\right| \leq B \) for all \( x \) we have\n\n\[ \mathcal{U}\left( {{P}_{N}, f}\right) - \mathcal{L}\left( {{P}_{N}, f}\right) = \frac{{\alpha }_{N} - {\alpha }_{0}}{N} \leq \frac{2B}{N}, \]\n\nand the proposition is proved.
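The telescoping identity at the heart of the proof can be seen numerically; \( f\left( x\right) = {x}^{2} \) is an arbitrary increasing sample function:

```python
# For monotone f on [0,1] and the uniform partition with N pieces,
# U(P_N, f) - L(P_N, f) = (f(1) - f(0)) / N exactly, by telescoping.
def gap(f, N):
    xs = [j / N for j in range(N + 1)]
    upper = sum(f(xs[j]) for j in range(1, N + 1)) / N
    lower = sum(f(xs[j - 1]) for j in range(1, N + 1)) / N
    return upper - lower

f = lambda x: x * x   # a sample increasing function on [0, 1]
for N in (10, 100, 1000):
    print(N, gap(f, N))  # 0.1, 0.01, 0.001 up to rounding
```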
|
Yes
|
Proposition 1.4 Let \( f \) be a bounded function on the compact interval \( \left\lbrack {a, b}\right\rbrack \) . If \( c \in \left( {a, b}\right) \), and if for all small \( \delta > 0 \) the function \( f \) is integrable on the intervals \( \left\lbrack {a, c - \delta }\right\rbrack \) and \( \left\lbrack {c + \delta, b}\right\rbrack \), then \( f \) is integrable on \( \left\lbrack {a, b}\right\rbrack \) .
|
Proof. Suppose \( \left| f\right| \leq M \) and let \( \epsilon > 0 \) . Choose \( \delta > 0 \) (small) so that \( {4\delta M} \leq \epsilon /3 \) . Now let \( {P}_{1} \) and \( {P}_{2} \) be partitions of \( \left\lbrack {a, c - \delta }\right\rbrack \) and \( \left\lbrack {c + \delta, b}\right\rbrack \) so that for each \( i = 1,2 \) we have \( \mathcal{U}\left( {{P}_{i}, f}\right) - \mathcal{L}\left( {{P}_{i}, f}\right) < \epsilon /3 \) . This is possible since \( f \) is integrable on each one of the intervals. Then by taking as a partition \( P = {P}_{1} \cup \{ c - \delta \} \cup \{ c + \delta \} \cup {P}_{2} \) we immediately see that \( \mathcal{U}\left( {P, f}\right) - \mathcal{L}\left( {P, f}\right) < \epsilon \) .
|
Yes
|
Lemma 1.6 The union of countably many sets of measure 0 has measure 0.
|
Proof. Say \( {E}_{1},{E}_{2},\ldots \) are sets of measure 0, and let \( E = { \cup }_{i = 1}^{\infty }{E}_{i} \) . Let \( \epsilon > 0 \), and for each \( i \) choose open intervals \( {I}_{i,1},{I}_{i,2},\ldots \) so that\n\n\[ \n{E}_{i} \subset \mathop{\bigcup }\limits_{{k = 1}}^{\infty }{I}_{i, k}\;\text{ and }\;\mathop{\sum }\limits_{{k = 1}}^{\infty }\left| {I}_{i, k}\right| < \epsilon /{2}^{i}.\n\]\n\nNow clearly we have \( E \subset \mathop{\bigcup }\limits_{{i, k = 1}}^{\infty }{I}_{i, k} \), and\n\n\[ \n\mathop{\sum }\limits_{{i = 1}}^{\infty }\mathop{\sum }\limits_{{k = 1}}^{\infty }\left| {I}_{i, k}\right| \leq \mathop{\sum }\limits_{{i = 1}}^{\infty }\frac{\epsilon }{{2}^{i}} \leq \epsilon\n\]\n\nas was to be shown.
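The geometric-series bookkeeping can be made concrete for a familiar countable set, the rationals in \( \left\lbrack {0,1}\right\rbrack \) (the denominator cutoff and \( \epsilon = 1/10 \) are arbitrary choices for this sketch):

```python
from fractions import Fraction

# Cover the i-th listed rational by an interval of length eps/2^(i+1);
# the total length stays strictly below eps no matter how many points appear.
rationals = []
for q in range(1, 26):
    for p in range(q + 1):
        r = Fraction(p, q)
        if r not in rationals:
            rationals.append(r)

eps = Fraction(1, 10)
total_length = sum(eps / 2 ** (i + 1) for i in range(len(rationals)))
print(len(rationals), float(total_length))  # total length is just under 0.1
```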
|
Yes
|
Lemma 1.8 If \( \epsilon > 0 \), then the set \( {A}_{\epsilon } \) is closed and therefore compact.
|
Proof. The argument is simple. Suppose \( {c}_{n} \in {A}_{\epsilon } \) converges to \( c \) and assume that \( c \notin {A}_{\epsilon } \) . Write \( \operatorname{osc}\left( {f, c}\right) = \epsilon - \delta \) where \( \delta > 0 \) . Select \( r \) so that \( \operatorname{osc}\left( {f, c, r}\right) < \epsilon - \delta /2 \), and choose \( n \) with \( \left| {{c}_{n} - c}\right| < r/2 \) . Then \( \operatorname{osc}\left( {f,{c}_{n}, r/2}\right) < \epsilon \) which implies \( \operatorname{osc}\left( {f,{c}_{n}}\right) < \epsilon \), a contradiction.
|
Yes
|
Theorem 2.1 Let \( f \) be a continuous function defined on a closed rectangle \( R \subset {\mathbb{R}}^{d} \). Suppose \( R = {R}_{1} \times {R}_{2} \) where \( {R}_{1} \subset {\mathbb{R}}^{{d}_{1}} \) and \( {R}_{2} \subset {\mathbb{R}}^{{d}_{2}} \) with \( d = {d}_{1} + {d}_{2} \). If we write \( x = \left( {{x}_{1},{x}_{2}}\right) \) with \( {x}_{i} \in {\mathbb{R}}^{{d}_{i}} \), then \( F\left( {x}_{1}\right) = {\int }_{{R}_{2}}f\left( {{x}_{1},{x}_{2}}\right) d{x}_{2} \) is continuous on \( {R}_{1} \), and we have \[ {\int }_{R}f\left( x\right) {dx} = {\int }_{{R}_{1}}\left( {{\int }_{{R}_{2}}f\left( {{x}_{1},{x}_{2}}\right) d{x}_{2}}\right) d{x}_{1}. \]
|
Proof. The continuity of \( F \) follows from the uniform continuity of \( f \) on \( R \) and the fact that \[ \left| {F\left( {x}_{1}\right) - F\left( {x}_{1}^{\prime }\right) }\right| \leq {\int }_{{R}_{2}}\left| {f\left( {{x}_{1},{x}_{2}}\right) - f\left( {{x}_{1}^{\prime },{x}_{2}}\right) }\right| d{x}_{2}. \] To prove the identity, let \( {P}_{1} \) and \( {P}_{2} \) be partitions of \( {R}_{1} \) and \( {R}_{2} \), respectively. If \( S \) and \( T \) are subrectangles in \( {P}_{1} \) and \( {P}_{2} \), respectively, then the key observation is that \[ \mathop{\sup }\limits_{{S \times T}}f\left( {{x}_{1},{x}_{2}}\right) \geq \mathop{\sup }\limits_{{{x}_{1} \in S}}\left( {\mathop{\sup }\limits_{{{x}_{2} \in T}}f\left( {{x}_{1},{x}_{2}}\right) }\right) \] and \[ \mathop{\inf }\limits_{{S \times T}}f\left( {{x}_{1},{x}_{2}}\right) \leq \mathop{\inf }\limits_{{{x}_{1} \in S}}\left( {\mathop{\inf }\limits_{{{x}_{2} \in T}}f\left( {{x}_{1},{x}_{2}}\right) }\right) . \] Then, \[ \mathcal{U}\left( {P, f}\right) = \mathop{\sum }\limits_{{S, T}}\left\lbrack {\mathop{\sup }\limits_{{S \times T}}f\left( {{x}_{1},{x}_{2}}\right) }\right\rbrack \left| {S \times T}\right| \] \[ \geq \mathop{\sum }\limits_{S}\mathop{\sum }\limits_{T}\mathop{\sup }\limits_{{{x}_{1} \in S}}\left\lbrack {\mathop{\sup }\limits_{{{x}_{2} \in T}}f\left( {{x}_{1},{x}_{2}}\right) }\right\rbrack \left| T\right| \times \left| S\right| \] \[ \geq \mathop{\sum }\limits_{S}\mathop{\sup }\limits_{{{x}_{1} \in S}}\left( {{\int }_{{R}_{2}}f\left( {{x}_{1},{x}_{2}}\right) d{x}_{2}}\right) \left| S\right| \] \[ \geq \mathcal{U}\left( {{P}_{1},{\int }_{{R}_{2}}f\left( {{x}_{1},{x}_{2}}\right) d{x}_{2}}\right) . 
\] Arguing similarly for the lower sums, we find that \[ \mathcal{L}\left( {P, f}\right) \leq \mathcal{L}\left( {{P}_{1},{\int }_{{R}_{2}}f\left( {{x}_{1},{x}_{2}}\right) d{x}_{2}}\right) \leq \mathcal{U}\left( {{P}_{1},{\int }_{{R}_{2}}f\left( {{x}_{1},{x}_{2}}\right) d{x}_{2}}\right) \leq \mathcal{U}\left( {P, f}\right) , \] and the theorem follows from these inequalities.
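A Riemann-sum check of the theorem for a sample integrand (the function \( f\left( {{x}_{1},{x}_{2}}\right) = {x}_{1}{x}_{2} + 1 \), the grid size, and the midpoint rule are illustrative choices; the exact value of the integral over \( \left\lbrack {0,1}\right\rbrack \times \left\lbrack {0,1}\right\rbrack \) is \( 5/4 \)):

```python
def double_riemann(f, N):
    # midpoint Riemann sum over the N x N grid on [0,1] x [0,1]
    h = 1.0 / N
    return sum(f((i + 0.5) * h, (j + 0.5) * h)
               for i in range(N) for j in range(N)) * h * h

def iterated_riemann(f, N):
    h = 1.0 / N
    def inner(x1):  # Riemann sum for the inner integral over x2
        return sum(f(x1, (j + 0.5) * h) for j in range(N)) * h
    return sum(inner((i + 0.5) * h) for i in range(N)) * h

f = lambda x1, x2: x1 * x2 + 1.0
d = double_riemann(f, 200)
it = iterated_riemann(f, 200)
print(d, it)  # both approximately 1.25, and they agree with each other
```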
|
Yes
|
Theorem 2.2 Suppose \( A \) and \( B \) are compact subsets of \( {\mathbb{R}}^{d} \) and \( g : A \rightarrow B \) is a diffeomorphism of class \( {C}^{1} \) . If \( f \) is continuous on \( B \) , then\n\n\[ \n{\int }_{g\left( A\right) }f\left( x\right) {dx} = {\int }_{A}f\left( {g\left( y\right) }\right) \left| {\det \left( {Dg}\right) \left( y\right) }\right| {dy}.\n\]
|
The proof of this theorem consists first of an analysis of the special situation when \( g \) is a linear transformation \( L \) . In this case, if \( R \) is a rectangle, then\n\n\[ \n\left| {g\left( R\right) }\right| = \left| {\det \left( L\right) }\right| \left| R\right|\n\]\nwhich explains the term \( \left| {\det \left( {Dg}\right) }\right| \) . Indeed, this term corresponds to the new infinitesimal element of volume after the change of variables.
|
No
|
Theorem 1.5. (Plane Curve Classification Theorem) Suppose \( \gamma \) and \( \widetilde{\gamma } : \left\lbrack {a, b}\right\rbrack \rightarrow {\mathbf{R}}^{2} \) are smooth, unit speed plane curves with unit normal vector fields \( N \) and \( \widetilde{N} \), and \( {\kappa }_{N}\left( t\right) ,{\kappa }_{\widetilde{N}}\left( t\right) \) represent the signed curvatures at \( \gamma \left( t\right) \) and \( \widetilde{\gamma }\left( t\right) \), respectively. Then \( \gamma \) and \( \widetilde{\gamma } \) are congruent (by a direction-preserving congruence) if and only if \( {\kappa }_{N}\left( t\right) = {\kappa }_{\widetilde{N}}\left( t\right) \) for all \( t \in \left\lbrack {a, b}\right\rbrack \) .
|
Null
|
No
|
Theorem 1.6. (Total Curvature Theorem) If \( \gamma : \left\lbrack {a, b}\right\rbrack \rightarrow {\mathbf{R}}^{2} \) is a unit speed simple closed curve such that \( \dot{\gamma }\left( a\right) = \dot{\gamma }\left( b\right) \), and \( N \) is the inward-pointing normal, then\n\n\[{\int }_{a}^{b}{\kappa }_{N}\left( t\right) {dt} = {2\pi }\]
|
The second will be derived as a consequence of a more general result in Chapter 9; the proof of the first is left to Problem 9-6.
|
No
|
Theorem 1.7. (Uniformization Theorem) Every connected 2-manifold is diffeomorphic to a quotient of one of the three constant curvature model surfaces listed above by a discrete group of isometries acting freely and properly discontinuously. Therefore, every connected 2-manifold has a complete Riemannian metric with constant Gaussian curvature.
|
Null
|
No
|
Theorem 1.8. (Gauss-Bonnet Theorem) Let \( S \) be an oriented compact 2-manifold with a Riemannian metric. Then\n\n\[{\int }_{S}{KdA} = {2\pi \chi }\left( S\right)\]\n\nwhere \( \chi \left( S\right) \) is the Euler characteristic of \( S \) (which is equal to 2 if \( S \) is the sphere, 0 if it is the torus, and 2 -2g if it is an orientable surface of genus \( g) \).
|
Null
|
No
|
Theorem 1.9. (Classification of Constant Curvature Metrics) \( A \) complete, connected Riemannian manifold \( M \) with constant sectional curvature is isometric to \( \widetilde{M}/\Gamma \), where \( \widetilde{M} \) is one of the constant curvature model spaces \( {\mathbf{R}}^{n},{\mathbf{S}}_{R}^{n} \), or \( {\mathbf{H}}_{R}^{n} \), and \( \Gamma \) is a discrete group of isometries of \( \widetilde{M} \), isomorphic to \( {\pi }_{1}\left( M\right) \), and acting freely and properly discontinuously on \( \widetilde{M} \) .
|
Null
|
No
|
Theorem 1.11. (Bonnet) Suppose \( M \) is a complete, connected Riemannian manifold with all sectional curvatures bounded below by a positive constant. Then \( M \) is compact and has a finite fundamental group.
|
Null
|
No
|
Lemma 2.1. Let \( V \) be a finite-dimensional vector space. There is a natural (basis-independent) isomorphism between \( {T}_{l + 1}^{k}\left( V\right) \) and the space of multilinear maps\n\n\[ \underset{l}{\underbrace{{V}^{ * } \times \cdots \times {V}^{ * }}} \times \underset{k}{\underbrace{V \times \cdots \times V}} \rightarrow V. \]
|
Exercise 2.1. Prove Lemma 2.1. [Hint: In the special case \( k = 1, l = 0 \) , consider the map \( \Phi : \operatorname{End}\left( V\right) \rightarrow {T}_{1}^{1}\left( V\right) \) by letting \( {\Phi A} \) be the \( \left( \begin{array}{l} 1 \\ 1 \end{array}\right) \) -tensor defined by \( {\Phi A}\left( {\omega, X}\right) = \omega \left( {AX}\right) \) . The general case is similar.]
|
No
|
Lemma 2.2. Let \( M \) be a smooth manifold, \( E \) a set, and \( \pi : E \rightarrow M \) a surjective map. Suppose we are given an open covering \( \left\{ {U}_{\alpha }\right\} \) of \( M \) together with bijective maps \( {\varphi }_{\alpha } : {\pi }^{-1}\left( {U}_{\alpha }\right) \rightarrow {U}_{\alpha } \times {\mathbf{R}}^{k} \) satisfying \( {\pi }_{1} \circ {\varphi }_{\alpha } = \pi \), such that whenever \( {U}_{\alpha } \cap {U}_{\beta } \neq \varnothing \), the composite map \[ {\varphi }_{\alpha } \circ {\varphi }_{\beta }^{-1} : {U}_{\alpha } \cap {U}_{\beta } \times {\mathbf{R}}^{k} \rightarrow {U}_{\alpha } \cap {U}_{\beta } \times {\mathbf{R}}^{k} \] is of the form \[ {\varphi }_{\alpha } \circ {\varphi }_{\beta }^{-1}\left( {p, V}\right) = \left( {p,\tau \left( p\right) V}\right) \] for some smooth map \( \tau : {U}_{\alpha } \cap {U}_{\beta } \rightarrow {GL}\left( {k,\mathbf{R}}\right) \). Then \( E \) has a unique structure as a smooth \( k \) -dimensional vector bundle over \( M \) for which the maps \( {\varphi }_{\alpha } \) are local trivializations.
|
Proof. For each \( p \in M \), let \( {E}_{p} = {\pi }^{-1}\left( p\right) \). If \( p \in {U}_{\alpha } \), observe that the map \( {\left( {\varphi }_{\alpha }\right) }_{p} : {E}_{p} \rightarrow \{ p\} \times {\mathbf{R}}^{k} \) obtained by restricting \( {\varphi }_{\alpha } \) is a bijection. We can define a vector space structure on \( {E}_{p} \) by declaring this map to be a linear isomorphism. This structure is well defined, since for any other set \( {U}_{\beta } \) containing \( p,\left( {2.4}\right) \) guarantees that \( {\left( {\varphi }_{\alpha }\right) }_{p} \circ {\left( {\varphi }_{\beta }\right) }_{p}^{-1} = \tau \left( p\right) \) is an isomorphism. Shrinking the sets \( {U}_{\alpha } \) and taking more of them if necessary, we may assume each of them is diffeomorphic to some open set \( {\widetilde{U}}_{\alpha } \subset {\mathbf{R}}^{n} \). Following \( {\varphi }_{\alpha } \) with such a diffeomorphism, we get a bijection \( {\pi }^{-1}\left( {U}_{\alpha }\right) \rightarrow {\widetilde{U}}_{\alpha } \times {\mathbf{R}}^{k} \), which we can use as a coordinate chart for \( E \). Because (2.4) shows that the \( {\varphi }_{\alpha } \) s overlap smoothly, these charts determine a locally Euclidean topology and a smooth manifold structure on \( E \). It is immediate that each map \( {\varphi }_{\alpha } \) is a diffeomorphism with respect to this smooth structure, and the rest of the conditions for a vector bundle follow automatically.
|
Yes
|
Lemma 2.3. Let \( F : M \rightarrow E \) be a section of a vector bundle. \( F \) is smooth if and only if the components \( {F}_{{i}_{1}\ldots {i}_{k}}^{{j}_{1}\ldots {j}_{l}} \) of \( F \) in terms of any smooth local frame \( \left\{ {E}_{i}\right\} \) on an open set \( U \subset M \) depend smoothly on \( p \in U \) .
|
Null
|
No
|
Lemma 2.4. (Tensor Characterization Lemma) A map\n\n\[ \tau : {\mathcal{T}}^{1}\left( M\right) \times \cdots \times {\mathcal{T}}^{1}\left( M\right) \times \mathcal{T}\left( M\right) \times \cdots \times \mathcal{T}\left( M\right) \rightarrow {C}^{\infty }\left( M\right) \]\n\nis induced by a \( \left( \begin{array}{l} k \\ l \end{array}\right) \) -tensor field as above if and only if it is multilinear over \( {C}^{\infty }\left( M\right) \) . Similarly, a map\n\n\[ \tau : {\mathcal{T}}^{1}\left( M\right) \times \cdots \times {\mathcal{T}}^{1}\left( M\right) \times \mathcal{T}\left( M\right) \times \cdots \times \mathcal{T}\left( M\right) \rightarrow \mathcal{T}\left( M\right) \]\n\nis induced by a \( \left( \begin{matrix} k \\ l + 1 \end{matrix}\right) \) -tensor field as in Lemma 2.1 if and only if it is multilinear over \( {C}^{\infty }\left( M\right) \) .
|
Null
|
No
|
Lemma 3.1. Let \( g \) be a Riemannian metric on a manifold \( M \) . There is a unique fiber metric on each tensor bundle \( {T}_{l}^{k}M \) with the property that if \( \left( {{E}_{1},\ldots ,{E}_{n}}\right) \) is an orthonormal basis for \( {T}_{p}M \) and \( \left( {{\varphi }^{1},\ldots ,{\varphi }^{n}}\right) \) is the corresponding dual basis, then the collection of tensors given by (2.1) forms an orthonormal basis for \( {T}_{l}^{k}\left( {{T}_{p}M}\right) \) .
|
Exercise 3.8. Prove Lemma 3.1 by showing that in any local coordinate system, the required inner product is given by\n\n\[ \langle F, G\rangle = {g}^{{i}_{1}{r}_{1}}\cdots {g}^{{i}_{k}{r}_{k}}{g}_{{j}_{1}{s}_{1}}\cdots {g}_{{j}_{l}{s}_{l}}{F}_{{i}_{1}\ldots {i}_{k}}^{{j}_{1}\ldots {j}_{l}}{G}_{{r}_{1}\ldots {r}_{k}}^{{s}_{1}\ldots {s}_{l}}. \]
|
No
|
Lemma 3.2. On any oriented Riemannian n-manifold \( \left( {M, g}\right) \), there is a unique \( n \) -form \( {dV} \) satisfying the property that \( {dV}\left( {{E}_{1},\ldots ,{E}_{n}}\right) = 1 \) whenever \( \left( {{E}_{1},\ldots ,{E}_{n}}\right) \) is an oriented orthonormal basis for some tangent space \( {T}_{p}M \) .
|
Exercise 3.9. Prove Lemma 3.2, and show that the expression for \( {dV} \) with respect to any oriented local frame \( \left\{ {E}_{i}\right\} \) is\n\n\[ \n{dV} = \sqrt{\det \left( {g}_{ij}\right) }{\varphi }^{1} \land \cdots \land {\varphi }^{n}, \n\] \n\nwhere \( {g}_{ij} = \left\langle {{E}_{i},{E}_{j}}\right\rangle \) are the coefficients of \( g \) and \( \left\{ {\varphi }^{i}\right\} \) is the dual coframe.
|
No
|
Proposition 3.3. \( O\left( {n + 1}\right) \) acts transitively on orthonormal bases on \( {\mathbf{S}}_{R}^{n} \) . More precisely, given any two points \( p,\widetilde{p} \in {\mathbf{S}}_{R}^{n} \), and orthonormal bases \( \left\{ {E}_{i}\right\} \) for \( {T}_{p}{\mathbf{S}}_{R}^{n} \) and \( \left\{ {\widetilde{E}}_{i}\right\} \) for \( {T}_{\widetilde{p}}{\mathbf{S}}_{R}^{n} \), there exists \( \varphi \in O\left( {n + 1}\right) \) such that \( \varphi \left( p\right) = \widetilde{p} \) and \( {\varphi }_{ * }{E}_{i} = {\widetilde{E}}_{i} \) . In particular, \( {\mathbf{S}}_{R}^{n} \) is homogeneous and isotropic.
|
Proof. It suffices to show that given any \( p \in {\mathbf{S}}_{R}^{n} \) and any orthonormal basis \( \left\{ {E}_{i}\right\} \) for \( {T}_{p}{\mathbf{S}}_{R}^{n} \), there is an orthogonal map that takes the …
|
No
|
Lemma 3.4. Stereographic projection is a conformal equivalence between \( {\mathbf{S}}_{R}^{n} - \{ N\} \) and \( {\mathbf{R}}^{n} \) .
|
Proof. The inverse map \( {\sigma }^{-1} \) is a local parametrization, so we will use it to compute the pullback metric. Consider an arbitrary point \( q \in {\mathbf{R}}^{n} \) and a vector \( V \in {T}_{q}{\mathbf{R}}^{n} \), and compute\n\n\[ \n{\left( {\sigma }^{-1}\right) }^{ * }{\overset{ \circ }{g}}_{R}\left( {V, V}\right) = {\overset{ \circ }{g}}_{R}\left( {{\sigma }_{ * }^{-1}V,{\sigma }_{ * }^{-1}V}\right) = \bar{g}\left( {{\sigma }_{ * }^{-1}V,{\sigma }_{ * }^{-1}V}\right) , \n\]\n\nwhere \( \bar{g} \) denotes the Euclidean metric on \( {\mathbf{R}}^{n + 1} \) . Writing \( V = {V}^{i}{\partial }_{i} \) and \( {\sigma }^{-1}\left( u\right) = \left( {\xi \left( u\right) ,\tau \left( u\right) }\right) \), the usual formula for the push-forward of a vector\ncan be written\n\n\[ \n{\sigma }_{ * }^{-1}V = {V}^{i}\frac{\partial {\xi }^{j}}{\partial {u}^{i}}\frac{\partial }{\partial {\xi }^{j}} + {V}^{i}\frac{\partial \tau }{\partial {u}^{i}}\frac{\partial }{\partial \tau } \n\]\n\n\[ \n= V{\xi }^{j}\frac{\partial }{\partial {\xi }^{j}} + {V\tau }\frac{\partial }{\partial \tau } \n\]\n\nNow\n\n\[ \nV{\xi }^{j} = V\left( \frac{2{R}^{2}{u}^{j}}{{\left| u\right| }^{2} + {R}^{2}}\right) \n\]\n\n\[ \n= \frac{2{R}^{2}{V}^{j}}{{\left| u\right| }^{2} + {R}^{2}} - \frac{4{R}^{2}{u}^{j}\langle V, u\rangle }{{\left( {\left| u\right| }^{2} + {R}^{2}\right) }^{2}} \n\]\n\n\[ \n{V\tau } = V\left( {R\frac{{\left| u\right| }^{2} - {R}^{2}}{{\left| u\right| }^{2} + {R}^{2}}}\right) \n\]\n\n\[ \n= \frac{{2R}\langle V, u\rangle }{{\left| u\right| }^{2} + {R}^{2}} - \frac{{2R}\left( {{\left| u\right| }^{2} - {R}^{2}}\right) \langle V, u\rangle }{{\left( {\left| u\right| }^{2} + {R}^{2}\right) }^{2}} \n\]\n\n\[ \n= \frac{4{R}^{3}\langle V, u\rangle }{{\left( {\left| u\right| }^{2} + {R}^{2}\right) }^{2}}, \n\]\n\nwhere we have used the notation \( V\left( {\left| u\right| }^{2}\right) = 2\mathop{\sum }\limits_{k}{V}^{k}{u}^{k} = 2\langle V, u\rangle \) . 
Therefore,\n\n\[ \n\bar{g}\left( {{\sigma }_{ * }^{-1}V,{\sigma }_{ * }^{-1}V}\right) = \mathop{\sum }\limits_{{j = 1}}^{n}{\left( V{\xi }^{j}\right) }^{2} + {\left( V\tau \right) }^{2} \n\]\n\n\[ \n= \frac{4{R}^{4}{\left| V\right| }^{2}}{{\left( {\left| u\right| }^{2} + {R}^{2}\right) }^{2}} - \frac{{16}{R}^{4}{\langle V, u\rangle }^{2}}{{\left( {\left| u\right| }^{2} + {R}^{2}\right) }^{3}} + \frac{{16}{R}^{4}{\left| u\right| }^{2}{\langle V, u\rangle }^{2}}{{\left( {\left| u\right| }^{2} + {R}^{2}\right) }^{4}} + \frac{{16}{R}^{6}{\langle V, u\rangle }^{2}}{{\left( {\left| u\right| }^{2} + {R}^{2}\right) }^{4}} \n\]\n\n\[ \n= \frac{4{R}^{4}{\left| V\right| }^{2}}{{\left( {\left| u\right| }^{2} + {R}^{2}\right) }^{2}}. \n\]\n\nIn other words,\n\n\[ \n{\left( {\sigma }^{-1}\right) }^{ * }{\overset{ \circ }{g}}_{R} = \frac{4{R}^{4}}{{\left( {\left| u\right| }^{2} + {R}^{2}\right) }^{2}}\bar{g} \n\]\n\n(3.10)\n\nwhere now \( \bar{g} \) represents the Euclidean metric on \( {\mathbf{R}}^{n} \), and so \( \sigma \) is a conformal equivalence.
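The conformal factor in (3.10) can be verified numerically by pushing a vector forward with finite differences. The dimension \( n = 2 \), the radius \( R = 1 \), and the sample point and vector below are arbitrary choices for this sketch:

```python
R = 1.0  # sphere radius (sample value); n = 2 for concreteness

def sigma_inv(u):
    # inverse stereographic projection from the proof, for u in R^2
    s = u[0] ** 2 + u[1] ** 2
    d = s + R * R
    return (2 * R * R * u[0] / d, 2 * R * R * u[1] / d, R * (s - R * R) / d)

def pushforward_norm_sq(u, v, h=1e-6):
    # |D(sigma^{-1})_u v|^2 via central finite differences
    p = sigma_inv((u[0] + h * v[0], u[1] + h * v[1]))
    m = sigma_inv((u[0] - h * v[0], u[1] - h * v[1]))
    return sum(((a - b) / (2 * h)) ** 2 for a, b in zip(p, m))

u, v = (0.7, -0.3), (1.0, 2.0)        # arbitrary sample point and vector
lhs = pushforward_norm_sq(u, v)
rhs = 4 * R ** 4 / (u[0] ** 2 + u[1] ** 2 + R ** 2) ** 2 * (v[0] ** 2 + v[1] ** 2)
print(lhs, rhs)  # the two agree, as the conformal factor predicts
```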
|
Yes
|
Proposition 3.6. \( {O}_{ + }\left( {n,1}\right) \) acts transitively on the set of orthonormal bases on \( {\mathbf{H}}_{R}^{n} \), and therefore \( {\mathbf{H}}_{R}^{n} \) is homogeneous and isotropic.
|
Proof. The argument is entirely analogous to the proof of Proposition 3.3, so we give only a sketch. If \( p \in {\mathbf{H}}_{R}^{n} \) and \( \left\{ {E}_{i}\right\} \) is an orthonormal basis for \( {T}_{p}{\mathbf{H}}_{R}^{n} \), an easy computation shows that \( \left\{ {{E}_{1},\ldots ,{E}_{n},{E}_{n + 1} = p/R}\right\} \) is a basis for \( {\mathbf{R}}^{n + 1} \) such that \( m \) has the following expression in terms of the dual basis:\n\n\[ m = {\left( {\varphi }^{1}\right) }^{2} + \cdots + {\left( {\varphi }^{n}\right) }^{2} - {\left( {\varphi }^{n + 1}\right) }^{2}. \]\n\nIt follows easily that the matrix whose columns are the \( {E}_{i}\mathrm{\;s} \) is an element of \( {O}_{ + }\left( {n,1}\right) \) sending \( N = \left( {0,\ldots ,0, R}\right) \) to \( p \) and \( {\partial }_{i} \) to \( {E}_{i} \) (Figure 3.5).
|
Yes
|
Lemma 4.1. If \( \nabla \) is a connection in a bundle \( E, X \in \mathcal{T}\left( M\right), Y \in \mathcal{E}\left( M\right) \) , and \( p \in M \), then \( {\left. {\nabla }_{X}Y\right| }_{p} \) depends only on the values of \( X \) and \( Y \) in an arbitrarily small neighborhood of \( p \) . More precisely, if \( X = \widetilde{X} \) and \( Y = \widetilde{Y} \) on a neighborhood of \( p \), then \( {\left. {\nabla }_{X}Y\right| }_{p} = {\left. {\nabla }_{\widetilde{X}}\widetilde{Y}\right| }_{p} \) .
|
Proof. First consider \( Y \) . Replacing \( Y \) by \( Y - \widetilde{Y} \), it clearly suffices to show that \( {\left. {\nabla }_{X}Y\right| }_{p} = 0 \) if \( Y \) vanishes on a neighborhood \( U \) of \( p \) .\n\nChoose a bump function \( \varphi \in {C}^{\infty }\left( M\right) \) with support in \( U \) such that \( \varphi \left( p\right) = \) 1. The hypothesis that \( Y \) vanishes on \( U \) implies that \( {\varphi Y} \equiv 0 \) on all of \( M \) , so \( {\nabla }_{X}\left( {\varphi Y}\right) = {\nabla }_{X}\left( {0 \cdot {\varphi Y}}\right) = 0{\nabla }_{X}\left( {\varphi Y}\right) = 0 \) . Thus for any \( X \in \mathcal{T}\left( M\right) \), the product rule gives\n\n\[ 0 = {\nabla }_{X}\left( {\varphi Y}\right) = \left( {X\varphi }\right) Y + \varphi \left( {{\nabla }_{X}Y}\right) \]\n\n(4.1)\n\nNow \( Y \equiv 0 \) on the support of \( \varphi \), so the first term on the right is identically zero. Evaluating (4.1) at \( p \) shows that \( {\left. {\nabla }_{X}Y\right| }_{p} = 0 \) . The argument for \( X \) is similar but easier.
|
Yes
|
Lemma 4.2. With notation as in Lemma 4.1, \( {\left. {\nabla }_{X}Y\right| }_{p} \) depends only on the values of \( Y \) in a neighborhood of \( p \) and the value of \( X \) at \( p \) .
|
Proof. By linearity, it suffices to show that \( {\left. {\nabla }_{X}Y\right| }_{p} = 0 \) whenever \( {X}_{p} = \) 0 . Choose a coordinate neighborhood \( U \) of \( p \), and write \( X = {X}^{i}{\partial }_{i} \) in coordinates on \( U \), with \( {X}^{i}\left( p\right) = 0 \) . Then, for any \( Y \in \mathcal{E}\left( M\right) \) ,\n\n\[ \n{\left. {\nabla }_{X}Y\right| }_{p} = {\left. {\nabla }_{{X}^{i}{\partial }_{i}}Y\right| }_{p} = {\left. {X}^{i}\left( p\right) {\nabla }_{{\partial }_{i}}Y\right| }_{p} = 0.\n\]\n\nIn the first equality, we used Lemma 4.1, which allows us to evaluate \( {\left. {\nabla }_{X}Y\right| }_{p} \) by computing locally in \( U \) ; in the second, we used linearity of \( {\nabla }_{X}Y \) over \( {C}^{\infty }\left( M\right) \) in \( X \) .
|
Yes
|
Lemma 4.3. Let \( \nabla \) be a linear connection, and let \( X, Y \in \mathcal{T}\left( U\right) \) be expressed in terms of a local frame by \( X = {X}^{i}{E}_{i}, Y = {Y}^{j}{E}_{j} \) . Then\n\n\[ \n{\nabla }_{X}Y = \left( {X{Y}^{k} + {X}^{i}{Y}^{j}{\Gamma }_{ij}^{k}}\right) {E}_{k} \n\]\n\n(4.3)
|
Proof. Just use the defining rules for a connection and compute:\n\n\[ \n{\nabla }_{X}Y = {\nabla }_{X}\left( {{Y}^{j}{E}_{j}}\right) \n\]\n\n\[ \n= \left( {X{Y}^{j}}\right) {E}_{j} + {Y}^{j}{\nabla }_{{X}^{i}{E}_{i}}{E}_{j} \n\]\n\n\[ \n= \left( {X{Y}^{j}}\right) {E}_{j} + {X}^{i}{Y}^{j}{\nabla }_{{E}_{i}}{E}_{j} \n\]\n\n\[ \n= X{Y}^{j}{E}_{j} + {X}^{i}{Y}^{j}{\Gamma }_{ij}^{k}{E}_{k}. \n\]\n\nRenaming the dummy index in the first term yields (4.3).
|
Yes
|
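Formula (4.3) can be checked by direct computation. The sketch below (the polar-coordinate example, function names, and numbers are illustrative assumptions, not from the text) evaluates \( {\left( {\nabla }_{X}Y\right) }^{k} = X{Y}^{k} + {X}^{i}{Y}^{j}{\Gamma }_{ij}^{k} \) for the Euclidean metric on the punctured plane in polar coordinates \( \left( r,\varphi \right) \), whose only nonzero Christoffel symbols are \( {\Gamma }_{\varphi \varphi }^{r} = -r \) and \( {\Gamma }_{r\varphi }^{\varphi } = {\Gamma }_{\varphi r}^{\varphi } = 1/r \):

```python
# Components of ∇_X Y in a local frame, per (4.3):
#   (∇_X Y)^k = X(Y^k) + X^i Y^j Γ^k_{ij}.
def nabla(X, dY, Y, Gamma, n=2):
    """X[i]: components of X; dY[k]: the derivative X(Y^k);
    Y[j]: components of Y; Gamma[k][i][j]: Christoffel symbols."""
    return [dY[k] + sum(X[i] * Y[j] * Gamma[k][i][j]
                        for i in range(n) for j in range(n))
            for k in range(n)]

# Euclidean plane in polar coordinates at the point r = 2, φ = 0:
# Γ^r_{φφ} = -r, Γ^φ_{rφ} = Γ^φ_{φr} = 1/r, all others zero.
r = 2.0
Gamma = [[[0.0, 0.0], [0.0, -r]],            # Γ^r_{ij}
         [[0.0, 1.0 / r], [1.0 / r, 0.0]]]   # Γ^φ_{ij}
X = [0.0, 1.0]    # X = ∂_φ
Y = [0.0, 1.0]    # Y = ∂_φ, so X(Y^k) = 0
dY = [0.0, 0.0]
print(nabla(X, dY, Y, Gamma))  # ∇_{∂φ}∂φ = -r ∂_r, i.e. [-2.0, 0.0]
```

The output reproduces the familiar fact that \( {\nabla }_{{\partial }_{\varphi }}{\partial }_{\varphi } = -r{\partial }_{r} \) for this connection.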
Lemma 4.4. Suppose \( M \) is a manifold covered by a single coordinate chart. There is a one-to-one correspondence between linear connections on \( M \) and choices of \( {n}^{3} \) smooth functions \( \left\{ {\Gamma }_{ij}^{k}\right\} \) on \( M \), by the rule\n\n\[{\nabla }_{X}Y = \left( {{X}^{i}{\partial }_{i}{Y}^{k} + {X}^{i}{Y}^{j}{\Gamma }_{ij}^{k}}\right) {\partial }_{k}\]\n\n(4.5)
|
Proof. Observe that (4.5) is equivalent to (4.3) when \( {E}_{i} = {\partial }_{i} \) is a coordinate frame, so for every connection the functions \( \left\{ {\Gamma }_{ij}^{k}\right\} \) defined by (4.2) satisfy (4.5). On the other hand, given \( \left\{ {\Gamma }_{ij}^{k}\right\} \), it is easy to see by inspection that (4.5) is smooth if \( X \) and \( Y \) are, linear over \( \mathbf{R} \) in \( Y \), and linear over \( {C}^{\infty }\left( M\right) \) in \( X \), so only the product rule requires checking; this is a straightforward computation left to the reader.
|
No
|
Proposition 4.5. Every manifold admits a linear connection.
|
Proof. Cover \( M \) with coordinate charts \( \left\{ {U}_{\alpha }\right\} \) ; the preceding lemma guarantees the existence of a connection \( {\nabla }^{\alpha } \) on each \( {U}_{\alpha } \) . Choosing a partition of unity \( \left\{ {\varphi }_{\alpha }\right\} \) subordinate to \( \left\{ {U}_{\alpha }\right\} \), we’d like to patch the \( {\nabla }^{\alpha }\mathrm{s} \) together by the formula\n\n\[ \n{\nabla }_{X}Y = \mathop{\sum }\limits_{\alpha }{\varphi }_{\alpha }{\nabla }_{X}^{\alpha }Y \n\]\n\n(4.6)\n\nAgain, it is obvious by inspection that this expression is smooth, linear over \( \mathbf{R} \) in \( Y \), and linear over \( {C}^{\infty }\left( M\right) \) in \( X \) . We have to be a bit careful with the product rule, though, since a linear combination of connections is not necessarily a connection. (You can check, for example, that if \( {\nabla }^{1} \) and \( {\nabla }^{2} \) are connections, neither \( \frac{1}{2}{\nabla }^{1} \) nor \( {\nabla }^{1} + {\nabla }^{2} \) satisfies the product rule.) By direct computation,\n\n\[ \n{\nabla }_{X}\left( {fY}\right) = \mathop{\sum }\limits_{\alpha }{\varphi }_{\alpha }{\nabla }_{X}^{\alpha }\left( {fY}\right) \n\]\n\n\[ \n= \mathop{\sum }\limits_{\alpha }{\varphi }_{\alpha }\left( {\left( {Xf}\right) Y + f{\nabla }_{X}^{\alpha }Y}\right) \n\]\n\n\[ \n= \left( {Xf}\right) Y + f\mathop{\sum }\limits_{\alpha }{\varphi }_{\alpha }{\nabla }_{X}^{\alpha }Y \n\]\n\n\[ \n= \left( {Xf}\right) Y + f{\nabla }_{X}Y. \n\]
|
Yes
|
Lemma 4.7. If \( \nabla \) is a linear connection on \( M \), and \( F \in {\mathcal{T}}_{l}^{k}\left( M\right) \), the map \( \nabla F : {\mathcal{T}}^{1}\left( M\right) \times \cdots \times {\mathcal{T}}^{1}\left( M\right) \times \mathcal{T}\left( M\right) \times \cdots \times \mathcal{T}\left( M\right) \rightarrow {C}^{\infty }\left( M\right) \), given by\n\n\[ \nabla F\left( {{\omega }^{1},\ldots ,{\omega }^{l},{Y}_{1},\ldots ,{Y}_{k}, X}\right) = {\nabla }_{X}F\left( {{\omega }^{1},\ldots ,{\omega }^{l},{Y}_{1},\ldots ,{Y}_{k}}\right) ,\]\ndefines a \( \left( \begin{matrix} k + 1 \\ l \end{matrix}\right) \) -tensor field.
|
Proof. This follows immediately from the tensor characterization lemma: \( {\nabla }_{X}F \) is a tensor field, so it is multilinear over \( {C}^{\infty }\left( M\right) \) in its \( k + l \) arguments; and it is linear over \( {C}^{\infty }\left( M\right) \) in \( X \) by definition of a connection.
|
Yes
|
Lemma 4.8. Let \( \nabla \) be a linear connection. The components of the total covariant derivative of a \( \left( \begin{array}{l} k \\ l \end{array}\right) \) -tensor field \( F \) with respect to a coordinate system are given by \[ {F}_{{i}_{1}\ldots {i}_{k};m}^{{j}_{1}\ldots {j}_{l}} = {\partial }_{m}{F}_{{i}_{1}\ldots {i}_{k}}^{{j}_{1}\ldots {j}_{l}} + \mathop{\sum }\limits_{{s = 1}}^{l}{F}_{{i}_{1}\ldots {i}_{k}}^{{j}_{1}\ldots p\ldots {j}_{l}}{\Gamma }_{mp}^{{j}_{s}} - \mathop{\sum }\limits_{{s = 1}}^{k}{F}_{{i}_{1}\ldots p\ldots {i}_{k}}^{{j}_{1}\ldots {j}_{l}}{\Gamma }_{m{i}_{s}}^{p}. \]
|
Exercise 4.6. Prove Lemma 4.8.
|
No
|
Lemma 4.9. Let \( \nabla \) be a linear connection on \( M \) . For each curve \( \gamma : I \rightarrow \) \( M,\nabla \) determines a unique operator\n\n\[ \n{D}_{t} : \mathcal{T}\left( \gamma \right) \rightarrow \mathcal{T}\left( \gamma \right)\n\]\n\nsatisfying the following properties:\n\n(a) Linearity over \( \mathbf{R} \) :\n\n\[ \n{D}_{t}\left( {{aV} + {bW}}\right) = a{D}_{t}V + b{D}_{t}W\;\text{ for }a, b \in \mathbf{R}.\n\]\n\n(b) Product rule:\n\n\[ \n{D}_{t}\left( {fV}\right) = \dot{f}V + f{D}_{t}V\;\text{ for }f \in {C}^{\infty }\left( I\right) .\n\]\n\n(c) If \( V \) is extendible, then for any extension \( \widetilde{V} \) of \( V \) ,\n\n\[ \n{D}_{t}V\left( t\right) = {\nabla }_{\dot{\gamma }\left( t\right) }\widetilde{V}\n\]\n\nFor any \( V \in \mathcal{T}\left( \gamma \right) ,{D}_{t}V \) is called the covariant derivative of \( V \) along \( \gamma \) .
|
Proof. First we show uniqueness. Suppose \( {D}_{t} \) is such an operator, and let \( {t}_{0} \in I \) be arbitrary. An argument similar to that of Lemma 4.1 shows that the value of \( {D}_{t}V \) at \( {t}_{0} \) depends only on the values of \( V \) in any interval \( \left( {{t}_{0} - \varepsilon ,{t}_{0} + \varepsilon }\right) \) containing \( {t}_{0} \) . (If \( I \) has an endpoint, extend \( \gamma \) to a slightly bigger open interval, prove the lemma there, and then restrict back to \( I \) .)\n\nChoose coordinates near \( \gamma \left( {t}_{0}\right) \), and write\n\n\[ \nV\left( t\right) = {V}^{j}\left( t\right) {\partial }_{j}\n\]\n\nnear \( {t}_{0} \) . Then by the properties of \( {D}_{t} \), since \( {\partial }_{j} \) is extendible,\n\n\[ \n{D}_{t}V\left( {t}_{0}\right) = {\dot{V}}^{j}\left( {t}_{0}\right) {\partial }_{j} + {V}^{j}\left( {t}_{0}\right) {\nabla }_{\dot{\gamma }\left( {t}_{0}\right) }{\partial }_{j}\n\]\n\n\[ \n= \left( {{\dot{V}}^{k}\left( {t}_{0}\right) + {V}^{j}\left( {t}_{0}\right) {\dot{\gamma }}^{i}\left( {t}_{0}\right) {\Gamma }_{ij}^{k}\left( {\gamma \left( {t}_{0}\right) }\right) }\right) {\partial }_{k}.\n\]\n\n(4.10)\n\nThis shows that such an operator is unique if it exists.\n\nFor existence, if \( \gamma \left( I\right) \) is contained in a single chart, we can define \( {D}_{t}V \) by (4.10); the easy verification that it satisfies the requisite properties is left to the reader. In the general case, we can cover \( \gamma \left( I\right) \) with coordinate charts and define \( {D}_{t}V \) by this formula in each chart, and uniqueness implies the various definitions agree whenever two or more charts overlap.
|
Yes
|
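The coordinate formula (4.10) is easy to evaluate numerically. In the sketch below (the unit-sphere example is my assumption, not the text's), the round metric \( g = d{\theta }^{2} + {\sin }^{2}\theta \, d{\varphi }^{2} \) has nonzero symbols \( {\Gamma }_{\varphi \varphi }^{\theta } = -\sin \theta \cos \theta \) and \( {\Gamma }_{\theta \varphi }^{\varphi } = {\Gamma }_{\varphi \theta }^{\varphi } = \cot \theta \), and we compute \( {D}_{t}\dot{\gamma } \) along a circle of latitude:

```python
import math

# Formula (4.10): (D_t V)^k = V̇^k + V^j γ̇^i Γ^k_{ij}(γ(t)).
def Dt(Vdot, V, gammadot, Gamma, n=2):
    return [Vdot[k] + sum(gammadot[i] * V[j] * Gamma[k][i][j]
                          for i in range(n) for j in range(n))
            for k in range(n)]

# Unit sphere, coordinates (θ, φ), along the latitude circle γ(t) = (θ0, t):
theta0 = math.pi / 3
cot = math.cos(theta0) / math.sin(theta0)
Gamma = [[[0.0, 0.0], [0.0, -math.sin(theta0) * math.cos(theta0)]],  # Γ^θ_{ij}
         [[0.0, cot], [cot, 0.0]]]                                   # Γ^φ_{ij}

# Take V = γ̇ = (0, 1), which is constant in coordinates, so V̇ = 0:
result = Dt([0.0, 0.0], [0.0, 1.0], [0.0, 1.0], Gamma)
print(result)  # [-sinθ0 cosθ0, 0]: nonzero, so this circle is not a geodesic
```

Only the equator \( {\theta }_{0} = \pi /2 \) makes this covariant acceleration vanish, matching the fact that latitude circles other than great circles are not geodesics.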
Theorem 4.10. (Existence and Uniqueness of Geodesics) Let \( M \) be a manifold with a linear connection. For any \( p \in M \), any \( V \in {T}_{p}M \), and any \( {t}_{0} \in \mathbf{R} \), there exist an open interval \( I \subset \mathbf{R} \) containing \( {t}_{0} \) and a geodesic \( \gamma : I \rightarrow M \) satisfying \( \gamma \left( {t}_{0}\right) = p,\dot{\gamma }\left( {t}_{0}\right) = V \). Any two such geodesics agree on their common domain.
|
Proof. Choose coordinates \( \left( {x}^{i}\right) \) on some neighborhood \( U \) of \( p \). From (4.10), a curve \( \gamma : I \rightarrow U \) is a geodesic if and only if its component functions \( \gamma \left( t\right) = \left( {{x}^{1}\left( t\right) ,\ldots ,{x}^{n}\left( t\right) }\right) \) satisfy the geodesic equation\n\n\[{\ddot{x}}^{k}\left( t\right) + {\dot{x}}^{i}\left( t\right) {\dot{x}}^{j}\left( t\right) {\Gamma }_{ij}^{k}\left( {x\left( t\right) }\right) = 0.\]\n\n(4.11)\n\nThis is a second-order system of ordinary differential equations for the functions \( {x}^{i}\left( t\right) \). The usual trick for proving existence and uniqueness for a second-order system is to introduce auxiliary variables \( {v}^{i} = {\dot{x}}^{i} \) to convert it to the following equivalent first-order system in twice the number of variables:\n\n\[{\dot{x}}^{k}\left( t\right) = {v}^{k}\left( t\right)\]\n\n\[{\dot{v}}^{k}\left( t\right) = - {v}^{i}\left( t\right) {v}^{j}\left( t\right) {\Gamma }_{ij}^{k}\left( {x\left( t\right) }\right).\]\n\nBy the existence and uniqueness theorem for first-order ODEs (see, for example,[Boo86, Theorem IV.4.1]), for any \( \left( {p, V}\right) \in U \times {\mathbf{R}}^{n} \), there exist \( \varepsilon > 0 \) and a unique solution \( \eta : \left( {{t}_{0} - \varepsilon ,{t}_{0} + \varepsilon }\right) \rightarrow U \times {\mathbf{R}}^{n} \) to this system satisfying the initial condition \( \eta \left( {t}_{0}\right) = \left( {p, V}\right) \). 
If we write the component functions of \( \eta \) as \( \eta \left( t\right) = \left( {{x}^{i}\left( t\right) ,{v}^{i}\left( t\right) }\right) \), then we can easily check that the curve \( \gamma \left( t\right) = \left( {{x}^{1}\left( t\right) ,\ldots ,{x}^{n}\left( t\right) }\right) \) in \( U \) satisfies the existence claim of the theorem.\n\nTo prove the uniqueness claim, suppose \( \gamma ,\sigma : I \rightarrow M \) are geodesics defined on an open interval with \( \gamma \left( {t}_{0}\right) = \sigma \left( {t}_{0}\right) \) and \( \dot{\gamma }\left( {t}_{0}\right) = \dot{\sigma }\left( {t}_{0}\right) \). By the uniqueness part of the ODE theorem, they agree on some neighborhood of \( {t}_{0} \). Let \( \beta \) be the supremum of numbers \( b \) such that they agree on \( \left\lbrack {{t}_{0}, b}\right\rbrack \) . If \( \beta \in I \), then by continuity \( \gamma \left( \beta \right) = \sigma \left( \beta \right) \) and \( \dot{\gamma }\left( \beta \right) = \dot{\sigma }\left( \beta \right) \), and applying local uniqueness in a neighborhood of \( \beta \), we conclude that they agree on a slightly larger interval (Figure 4.6), which is a contradiction. Arguing similarly to the left of \( {t}_{0} \), we conclude that they agree on all of \( I \).
|
Yes
|
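The reduction to a first-order system is exactly what a numerical integrator consumes. Here is a hedged sketch (the classical RK4 scheme and the unit-sphere Christoffel symbols are my choices, not the text's) that integrates the geodesic equation (4.11) and confirms that the equator, traversed at unit speed, solves it:

```python
import math

# Geodesic equation (4.11) on the unit sphere, coordinates x = (θ, φ);
# nonzero symbols: Γ^θ_{φφ} = -sinθ cosθ, Γ^φ_{θφ} = Γ^φ_{φθ} = cotθ.
def acc(x, v):
    th = x[0]
    return [math.sin(th) * math.cos(th) * v[1] ** 2,            # -Γ^θ_{φφ} (v^φ)²
            -2.0 * (math.cos(th) / math.sin(th)) * v[0] * v[1]]  # -2 Γ^φ_{θφ} v^θ v^φ

def step(x, v, h):
    """One classical RK4 step for the first-order system (ẋ, v̇) = (v, acc(x, v))."""
    k1x, k1v = v, acc(x, v)
    x2 = [x[i] + 0.5 * h * k1x[i] for i in range(2)]
    v2 = [v[i] + 0.5 * h * k1v[i] for i in range(2)]
    k2x, k2v = v2, acc(x2, v2)
    x3 = [x[i] + 0.5 * h * k2x[i] for i in range(2)]
    v3 = [v[i] + 0.5 * h * k2v[i] for i in range(2)]
    k3x, k3v = v3, acc(x3, v3)
    x4 = [x[i] + h * k3x[i] for i in range(2)]
    v4 = [v[i] + h * k3v[i] for i in range(2)]
    k4x, k4v = v4, acc(x4, v4)
    x = [x[i] + h / 6 * (k1x[i] + 2 * k2x[i] + 2 * k3x[i] + k4x[i]) for i in range(2)]
    v = [v[i] + h / 6 * (k1v[i] + 2 * k2v[i] + 2 * k3v[i] + k4v[i]) for i in range(2)]
    return x, v

def geodesic(x, v, t1, steps=2000):
    h = t1 / steps
    for _ in range(steps):
        x, v = step(x, v, h)
    return x, v

# Start on the equator moving in the φ-direction: the solution should be
# the equator itself, traversed at unit speed.
x, v = geodesic([math.pi / 2, 0.0], [0.0, 1.0], 1.0)
print(x)  # ≈ [π/2, 1.0]
```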
Theorem 4.11. (Parallel Translation) Given a curve \( \gamma : I \rightarrow M,{t}_{0} \in \) \( I \), and a vector \( {V}_{0} \in {T}_{\gamma \left( {t}_{0}\right) }M \), there exists a unique parallel vector field \( V \) along \( \gamma \) such that \( V\left( {t}_{0}\right) = {V}_{0} \) .
|
Proof of Theorem 4.11. First suppose \( \gamma \left( I\right) \) is contained in a single coordinate chart. Then, using formula (4.10), \( V \) is parallel along \( \gamma \) if and only if\n\n\[{\dot{V}}^{k}\left( t\right) = - {V}^{j}\left( t\right) {\dot{\gamma }}^{i}\left( t\right) {\Gamma }_{ij}^{k}\left( {\gamma \left( t\right) }\right) ,\;k = 1,\ldots, n.\]\n\n(4.13)\n\nThis is a linear system of ODEs for \( \left( {{V}^{1}\left( t\right) ,\ldots ,{V}^{n}\left( t\right) }\right) \) . Thus Theorem 4.12 guarantees the existence and uniqueness of a solution on all of \( I \) with any initial condition \( V\left( {t}_{0}\right) = {V}_{0} \) .\n\nNow suppose \( \gamma \left( I\right) \) is not covered by a single chart. Let \( \beta \) denote the supremum of all \( b > {t}_{0} \) for which there is a unique parallel translate on \( \left\lbrack {{t}_{0}, b}\right\rbrack \) . Clearly \( \beta > {t}_{0} \), since for \( b \) close enough to \( {t}_{0},\gamma \left\lbrack {{t}_{0}, b}\right\rbrack \) is contained in a single chart and the above argument applies. Then a unique parallel translate \( V \) exists on \( \left\lbrack {{t}_{0},\beta }\right) \) (Figure 4.8). If \( \beta \in I \), choose coordinates on an open set containing \( \gamma \left( {\beta - \delta ,\beta + \delta }\right) \).
|
Yes
|
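Equation (4.13) is a linear ODE and integrates readily. In the sketch below (the sphere example and RK4 scheme are my assumptions), we transport a vector once around a circle of latitude and observe that its \( g \)-norm is preserved, as Lemma 5.2(e) will require of a metric-compatible connection:

```python
import math

theta0 = math.pi / 3   # latitude of the circle γ(t) = (θ0, t) on the unit sphere

# Parallel transport ODE (4.13) along γ, using the round metric's symbols
# Γ^θ_{φφ} = -sinθ cosθ and Γ^φ_{θφ} = cotθ:
#   V̇^θ = sinθ0 cosθ0 · V^φ,    V̇^φ = -cotθ0 · V^θ.
def rhs(V):
    return [math.sin(theta0) * math.cos(theta0) * V[1],
            -(math.cos(theta0) / math.sin(theta0)) * V[0]]

def transport(V, t1, steps=4000):
    h = t1 / steps
    for _ in range(steps):   # classical RK4 for this linear system
        k1 = rhs(V)
        k2 = rhs([V[i] + 0.5 * h * k1[i] for i in range(2)])
        k3 = rhs([V[i] + 0.5 * h * k2[i] for i in range(2)])
        k4 = rhs([V[i] + h * k3[i] for i in range(2)])
        V = [V[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]
    return V

def norm(V):   # |V|_g for g = dθ² + sin²θ dφ²
    return math.sqrt(V[0] ** 2 + math.sin(theta0) ** 2 * V[1] ** 2)

V0 = [1.0, 0.0]
V1 = transport(V0, 2 * math.pi)   # once around the latitude circle
print(norm(V0), norm(V1))          # the g-norm is preserved
```

A classical computation shows that one circuit rotates the vector by the holonomy angle \( 2\pi \cos {\theta }_{0} \); here \( \cos {\theta }_{0} = 1/2 \), so the transported vector returns with its sign flipped, \( {V}_{1} \approx \left( -1, 0\right) \).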
Theorem 4.12. (Existence and Uniqueness for Linear ODEs) Let \( I \subset \mathbf{R} \) be an interval, and for \( 1 \leq j, k \leq n \) let \( {A}_{j}^{k} : I \rightarrow \mathbf{R} \) be arbitrary smooth functions. The linear initial-value problem\n\n\[ \n{\dot{V}}^{k}\left( t\right) = {A}_{j}^{k}\left( t\right) {V}^{j}\left( t\right) \n\]\n\n(4.12)\n\n\[ \n{V}^{k}\left( {t}_{0}\right) = {B}^{k} \n\]\n\nhas a unique solution on all of \( I \) for any \( {t}_{0} \in I \) and any initial vector \( \left( {{B}^{1},\ldots ,{B}^{n}}\right) \in {\mathbf{R}}^{n} \) .
|
Exercise 4.11. Prove Theorem 4.12, as follows. Consider the vector field \( Y \) on \( I \times {\mathbf{R}}^{n} \) given by\n\n\[ \n{Y}^{0}\left( {{x}^{0},\ldots ,{x}^{n}}\right) = 1 \n\]\n\n\[ \n{Y}^{k}\left( {{x}^{0},\ldots ,{x}^{n}}\right) = {A}_{j}^{k}\left( {x}^{0}\right) {x}^{j},\;k = 1,\ldots, n. \n\]\n\n(a) Show that any solution to (4.12) is the projection to \( {\mathbf{R}}^{n} \) of an integral curve of \( Y \) .\n\n(b) For any compact subinterval \( K \subset I \), show there exists a positive constant \( C \) such that every solution \( V\left( t\right) = \left( {{V}^{1}\left( t\right) ,\ldots ,{V}^{n}\left( t\right) }\right) \) to (4.12) on \( K \) satisfies\n\n\[ \n\frac{d}{dt}\left( {{e}^{-{Ct}}{\left| V\left( t\right) \right| }^{2}}\right) \leq 0 \n\]\n\n(Here \( \left| {V\left( t\right) }\right| \) is just the Euclidean norm.)\n\n(c) If an integral curve of \( Y \) is defined only on some proper subinterval of \( I \), use Exercise 4.10 above to derive a contradiction.
|
No
|
Lemma 5.1. The operator \( {\nabla }^{\top } \) is well defined, and is a connection on \( M \) .
|
Proof. Since the value of \( {\bar{\nabla }}_{X}Y \) at a point \( p \in M \) depends only on \( {X}_{p} \) , \( {\nabla }_{X}^{\top }Y \) is clearly independent of the choice of vector field extending \( X \) . On the other hand, because of the result of Exercise 4.7, the value of \( {\bar{\nabla }}_{X}Y \) at \( p \) depends only on the values of \( Y \) along a curve whose initial tangent vector is \( {X}_{p} \) ; taking the curve to lie entirely in \( M \) shows that \( {\nabla }_{X}^{\top }Y \) depends only on the original vector field \( Y \in \mathcal{T}\left( M\right) \) . Thus \( {\nabla }^{\top } \) is well defined. Smoothness follows easily by expressing \( {\bar{\nabla }}_{X}Y \) in terms of an adapted orthonormal frame as in Problem 3-1.\n\nIt is obvious from the definition that \( {\nabla }_{X}^{\top }Y \) is linear over \( {C}^{\infty }\left( M\right) \) in \( X \) and over \( \mathbf{R} \) in \( Y \), so to show that it is a connection, only the product rule needs checking. Let \( f \in {C}^{\infty }\left( M\right) \) be extended arbitrarily to \( {\mathbf{R}}^{n} \) . Evaluating along \( M \), we get\n\n\[{\nabla }_{X}^{\top }\left( {fY}\right) = {\pi }^{\top }\left( {{\bar{\nabla }}_{X}\left( {fY}\right) }\right)\]\n\n\[= \left( {Xf}\right) {\pi }^{\top }Y + f{\pi }^{\top }\left( {{\bar{\nabla }}_{X}Y}\right)\]\n\n\[= \left( {Xf}\right) Y + f{\nabla }_{X}^{\top }Y\]\n\nThus \( {\nabla }^{\top } \) is a connection.
|
Yes
|
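The definition \( {\nabla }_{X}^{\top }Y = {\pi }^{\top }\left( {\bar{\nabla }}_{X}Y\right) \) can be exercised numerically. In this sketch (the sphere, the finite-difference scheme, and all names are my assumptions, not the text's), we differentiate a vector field along a curve in \( {S}^{2} \subset {\mathbf{R}}^{3} \) with the ambient (Euclidean) connection and then project onto the tangent plane:

```python
import math

# Tangential connection on S² ⊂ R³: along a curve c(t) in S²,
# ∇⊤_{ċ} Y = π⊤(dY/dt), where π⊤(w) = w - (w·c) c projects onto T_c S².
def proj(w, c):
    dot = sum(w[i] * c[i] for i in range(3))
    return [w[i] - dot * c[i] for i in range(3)]

def tangential_derivative(Y, c, t, h=1e-6):
    """Finite-difference sketch of π⊤(d/dt Y(c(t)))."""
    Ydot = [(Y(t + h)[i] - Y(t - h)[i]) / (2 * h) for i in range(3)]
    return proj(Ydot, c(t))

c = lambda t: [math.cos(t), math.sin(t), 0.0]    # the equator
Y = lambda t: [-math.sin(t), math.cos(t), 0.0]   # Y = ċ, tangent to S²
print(tangential_derivative(Y, c, 0.3))  # ≈ [0, 0, 0]: the equator is a geodesic
```

The ambient acceleration \( \ddot{c} = -c \) is purely normal, so its tangential projection vanishes: the equator is a geodesic of \( {\nabla }^{\top } \).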
Lemma 5.2. The following conditions are equivalent for a linear connection \( \nabla \) on a Riemannian manifold:\n\n(a) \( \nabla \) is compatible with \( g \) .\n\n(b) \( \nabla g \equiv 0 \) .\n\n(c) If \( V, W \) are vector fields along any curve \( \gamma \) ,\n\n\[ \n\frac{d}{dt}\langle V, W\rangle = \left\langle {{D}_{t}V, W}\right\rangle + \left\langle {V,{D}_{t}W}\right\rangle \n\]\n\n(d) If \( V, W \) are parallel vector fields along a curve \( \gamma \), then \( \langle V, W\rangle \) is constant.\n\n(e) Parallel translation \( {P}_{{t}_{0}{t}_{1}} : {T}_{\gamma \left( {t}_{0}\right) }M \rightarrow {T}_{\gamma \left( {t}_{1}\right) }M \) is an isometry for each \( {t}_{0},{t}_{1} \) (Figure 5.1).
|
Null
|
No
|
Lemma 5.3. The tangential connection on an embedded submanifold \( M \subset \) \( {\mathbf{R}}^{n} \) is symmetric.
|
Exercise 5.3. Prove Lemma 5.3. [Hint: If \( X \) and \( Y \) are vector fields on \( {\mathbf{R}}^{n} \) that are tangent to \( M \) at points of \( M \), so is \( \left\lbrack {X, Y}\right\rbrack \) by Exercise 2.3.]
|
No
|
Theorem 5.4. (Fundamental Lemma of Riemannian Geometry) Let \( \left( {M, g}\right) \) be a Riemannian (or pseudo-Riemannian) manifold. There exists a unique linear connection \( \nabla \) on \( M \) that is compatible with \( g \) and symmetric.
|
Proof. We prove uniqueness first, by deriving a formula for \( \nabla \) . Suppose, therefore, that \( \nabla \) is such a connection, and let \( X, Y, Z \in \mathcal{T}\left( M\right) \) be arbitrary vector fields. Writing the compatibility equation three times with \( X, Y, Z \) cyclically permuted, we obtain\n\n\[ X\langle Y, Z\rangle = \left\langle {{\nabla }_{X}Y, Z}\right\rangle + \left\langle {Y,{\nabla }_{X}Z}\right\rangle \]\n\n\[ Y\langle Z, X\rangle = \left\langle {{\nabla }_{Y}Z, X}\right\rangle + \left\langle {Z,{\nabla }_{Y}X}\right\rangle \]\n\n\[ Z\langle X, Y\rangle = \left\langle {{\nabla }_{Z}X, Y}\right\rangle + \left\langle {X,{\nabla }_{Z}Y}\right\rangle . \]\n\nUsing the symmetry condition on the last term in each line, this can be rewritten as\n\n\[ X\langle Y, Z\rangle = \left\langle {{\nabla }_{X}Y, Z}\right\rangle + \left\langle {Y,{\nabla }_{Z}X}\right\rangle + \langle Y,\left\lbrack {X, Z}\right\rbrack \rangle \]\n\n\[ Y\langle Z, X\rangle = \left\langle {{\nabla }_{Y}Z, X}\right\rangle + \left\langle {Z,{\nabla }_{X}Y}\right\rangle + \langle Z,\left\lbrack {Y, X}\right\rbrack \rangle \]\n\n\[ Z\langle X, Y\rangle = \left\langle {{\nabla }_{Z}X, Y}\right\rangle + \left\langle {X,{\nabla }_{Y}Z}\right\rangle + \langle X,\left\lbrack {Z, Y}\right\rbrack \rangle . \]\n\nAdding the first two of these equations and subtracting the third, we obtain\n\n\[ X\langle Y, Z\rangle + Y\langle Z, X\rangle - Z\langle X, Y\rangle = 2\left\langle {{\nabla }_{X}Y, Z}\right\rangle + \langle Y,\left\lbrack {X, Z}\right\rbrack \rangle + \langle Z,\left\lbrack {Y, X}\right\rbrack \rangle - \langle X,\left\lbrack {Z, Y}\right\rbrack \rangle . \]\n\nFinally, solving for \( \left\langle {{\nabla }_{X}Y, Z}\right\rangle \), we get\n\n\[ \left\langle {{\nabla }_{X}Y, Z}\right\rangle = \frac{1}{2}(X\langle Y, Z\rangle + Y\langle Z, X\rangle - Z\langle X, Y\rangle - \langle Y,\left\lbrack {X, Z}\right\rbrack \rangle - \langle Z,\left\lbrack {Y, X}\right\rbrack \rangle + \langle X,\left\lbrack {Z, Y}\right\rbrack \rangle ). \]\n\n(5.1)\n\nNow suppose \( {\nabla }^{1} \) and \( {\nabla }^{2} \) are two connections that are symmetric and compatible with \( g \) . Since the right-hand side of (5.1) does not depend on the connection, it follows that \( \left\langle {{\nabla }_{X}^{1}Y - {\nabla }_{X}^{2}Y, Z}\right\rangle = 0 \) for all \( X, Y, Z \) . This can only happen if \( {\nabla }_{X}^{1}Y = {\nabla }_{X}^{2}Y \) for all \( X \) and \( Y \), so \( {\nabla }^{1} = {\nabla }^{2} \) .
|
Yes
|
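Taking \( X = {\partial }_{i}, Y = {\partial }_{j}, Z = {\partial }_{l} \) in (5.1), all brackets vanish and one obtains the standard coordinate expression \( {\Gamma }_{ij}^{k} = \frac{1}{2}{g}^{kl}\left( {\partial }_{i}{g}_{jl} + {\partial }_{j}{g}_{il} - {\partial }_{l}{g}_{ij}\right) \). The sketch below (the sphere example and the numerical differentiation are my choices, not the text's) recovers \( {\Gamma }_{\varphi \varphi }^{\theta } = -\sin \theta \cos \theta \) from the metric alone:

```python
import math

# Christoffel symbols of the Riemannian connection from the metric:
#   Γ^k_{ij} = ½ g^{kl} (∂_i g_{jl} + ∂_j g_{il} - ∂_l g_{ij}).
# Example: the unit sphere, coordinates x = (θ, φ), g = diag(1, sin²θ).
def g(x):
    return [[1.0, 0.0], [0.0, math.sin(x[0]) ** 2]]

def dg(x, l, h=1e-6):          # ∂_l g by a centered difference
    xp, xm = list(x), list(x)
    xp[l] += h
    xm[l] -= h
    gp, gm = g(xp), g(xm)
    return [[(gp[i][j] - gm[i][j]) / (2 * h) for j in range(2)] for i in range(2)]

def christoffel(x, k, i, j):
    ginv = [[1.0, 0.0], [0.0, 1.0 / math.sin(x[0]) ** 2]]   # g is diagonal here
    return 0.5 * sum(ginv[k][l] * (dg(x, i)[j][l] + dg(x, j)[i][l] - dg(x, l)[i][j])
                     for l in range(2))

x = [1.0, 0.0]   # the point θ = 1, φ = 0
print(christoffel(x, 0, 1, 1))   # ≈ -sin(1)cos(1)
```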
Lemma 5.5. All Riemannian geodesics are constant speed curves.
|
Proof. Let \( \gamma \) be a Riemannian geodesic. Since \( \dot{\gamma } \) is parallel along \( \gamma \), its length \( \left| \dot{\gamma }\right| = \langle \dot{\gamma },\dot{\gamma }{\rangle }^{1/2} \) is constant by Lemma 5.2(d).
|
Yes
|
Proposition 5.6. (Naturality of the Riemannian Connection) Suppose \( \varphi : \left( {M, g}\right) \rightarrow \left( {\widetilde{M},\widetilde{g}}\right) \) is an isometry.\n\n(a) \( \varphi \) takes the Riemannian connection \( \nabla \) of \( g \) to the Riemannian connection \( \widetilde{\nabla } \) of \( \widetilde{g} \), in the sense that\n\n\[{\varphi }_{ * }\left( {{\nabla }_{X}Y}\right) = {\widetilde{\nabla }}_{{\varphi }_{ * }X}\left( {{\varphi }_{ * }Y}\right)\]\n\n(b) If \( \gamma \) is a curve in \( M \) and \( V \) is a vector field along \( \gamma \), then\n\n\[{\varphi }_{ * }{D}_{t}V = {\widetilde{D}}_{t}\left( {{\varphi }_{ * }V}\right)\]\n\n(c) \( \varphi \) takes geodesics to geodesics: if \( \gamma \) is the geodesic in \( M \) with initial point \( p \) and initial velocity \( V \), then \( \varphi \circ \gamma \) is the geodesic in \( \widetilde{M} \) with initial point \( \varphi \left( p\right) \) and initial velocity \( {\varphi }_{ * }V \) .
|
Exercise 5.4. Prove Proposition 5.6 as follows. For part (a), define a map\n\n\[{\varphi }^{ * }\widetilde{\nabla } : \mathcal{T}\left( M\right) \times \mathcal{T}\left( M\right) \rightarrow \mathcal{T}\left( M\right)\]\n\nby\n\n\[{\left( {\varphi }^{ * }\widetilde{\nabla }\right) }_{X}Y = {\varphi }_{ * }^{-1}\left( {{\widetilde{\nabla }}_{{\varphi }_{ * }X}\left( {{\varphi }_{ * }Y}\right) }\right)\]\n\nShow that \( {\varphi }^{ * }\widetilde{\nabla } \) is a connection on \( M \) (called the pullback connection), and that it is symmetric and compatible with \( g \) ; therefore \( {\varphi }^{ * }\widetilde{\nabla } = \nabla \) by uniqueness of the Riemannian connection. You will have to unwind the definition of the push-forward of a vector field very carefully. For part (b), define an operator \( {\varphi }^{ * }{\widetilde{D}}_{t} : \mathcal{T}\left( \gamma \right) \rightarrow \mathcal{T}\left( \gamma \right) \) by a similar formula and show that it is equal to \( {D}_{t} \) .
|
No
|
Proposition 5.7. (Properties of the Exponential Map)\n\n(a) \( \mathcal{E} \) is an open subset of \( TM \) containing the zero section, and each set \( {\mathcal{E}}_{p} \) is star-shaped with respect to 0 .\n\n(b) For each \( V \in {TM} \), the geodesic \( {\gamma }_{V} \) is given by\n\n\[ \n{\gamma }_{V}\left( t\right) = \exp \left( {tV}\right) \n\]\n\nfor all \( t \) such that either side is defined.\n\n(c) The exponential map is smooth.
|
Proof of Proposition 5.7. The rescaling lemma with \( t = 1 \) says precisely that \( \exp \left( {cV}\right) = {\gamma }_{cV}\left( 1\right) = {\gamma }_{V}\left( c\right) \) whenever either side is defined; this is (b). Moreover, if \( V \in {\mathcal{E}}_{p} \), by definition \( {\gamma }_{V} \) is defined at least on \( \left\lbrack {0,1}\right\rbrack \) . Thus for \( 0 \leq t \leq 1 \), the rescaling lemma says that\n\n\[ \n\exp \left( {tV}\right) = {\gamma }_{tV}\left( 1\right) = {\gamma }_{V}\left( t\right) \n\]\n\nis defined. This shows that \( {\mathcal{E}}_{p} \) is star-shaped.\n\nIt remains to show that \( \mathcal{E} \) is open and exp is smooth. To do so, we revisit the proof of the existence and uniqueness theorem for geodesics (Theorem 4.10) and reformulate it in a more invariant way.
|
No
|
Lemma 5.8. (Rescaling Lemma) For any \( V \in {TM} \) and \( c, t \in \mathbf{R} \) , \[ {\gamma }_{cV}\left( t\right) = {\gamma }_{V}\left( {ct}\right) \]\n\n(5.5)\n\nwhenever either side is defined.
|
Proof. It suffices to show that \( {\gamma }_{cV}\left( t\right) \) exists and (5.5) holds whenever the right-hand side is defined, for then the converse statement follows by replacing \( V \) by \( {cV}, t \) by \( {ct} \), and \( c \) by \( 1/c \) . Suppose the domain of \( {\gamma }_{V} \) is the open interval \( I \subset \mathbf{R} \) . For simplicity, write \( \gamma = {\gamma }_{V} \), and define a new curve \( \widetilde{\gamma } \) by \( \widetilde{\gamma }\left( t\right) = \gamma \left( {ct}\right) \), defined on \( {c}^{-1}I \mathrel{\text{:=}} \) \( \{ t : {ct} \in I\} \) . We will show that \( \widetilde{\gamma } \) is a geodesic with initial point \( p \) and initial velocity \( {cV} \) ; it then follows by uniqueness that it must be equal to \( {\gamma }_{cV} \) . It is immediate from the definition that \( \widetilde{\gamma }\left( 0\right) = \gamma \left( 0\right) = p \) . Writing \( \gamma \left( t\right) = \) \( \left( {{\gamma }^{1}\left( t\right) ,\ldots ,{\gamma }^{n}\left( t\right) }\right) \) in any local coordinates, the chain rule gives \[ {\dot{\widetilde{\gamma }}}^{i}\left( t\right) = \frac{d}{dt}{\gamma }^{i}\left( {ct}\right) \] \[ = c{\dot{\gamma }}^{i}\left( {ct}\right) \] In particular, it follows that \( \dot{\widetilde{\gamma }}\left( 0\right) = c\dot{\gamma }\left( 0\right) = {cV} \) . Now let \( {D}_{t} \) and \( {\widetilde{D}}_{t} \) denote the covariant differentiation operators along \( \gamma \) and \( \widetilde{\gamma } \), respectively. 
Using the chain rule again in coordinates, \[ {\widetilde{D}}_{t}\dot{\widetilde{\gamma }}\left( t\right) = \left( {\frac{d}{dt}{\dot{\widetilde{\gamma }}}^{k}\left( t\right) + {\Gamma }_{ij}^{k}\left( {\widetilde{\gamma }\left( t\right) }\right) {\dot{\widetilde{\gamma }}}^{i}\left( t\right) {\dot{\widetilde{\gamma }}}^{j}\left( t\right) }\right) {\partial }_{k} \] \[ = \left( {{c}^{2}{\ddot{\gamma }}^{k}\left( {ct}\right) + {c}^{2}{\Gamma }_{ij}^{k}\left( {\gamma \left( {ct}\right) }\right) {\dot{\gamma }}^{i}\left( {ct}\right) {\dot{\gamma }}^{j}\left( {ct}\right) }\right) {\partial }_{k} \] \[ = {c}^{2}{D}_{t}\dot{\gamma }\left( {ct}\right) = 0. \] Thus \( \widetilde{\gamma } \) is a geodesic, and so \( \widetilde{\gamma } = {\gamma }_{cV} \) as claimed.
|
Yes
|
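On the unit sphere the geodesics are known in closed form, which makes (5.5) directly testable. The parametrization used below is a standard fact I am assuming, not derived in the text: for \( p \in {S}^{2} \) and a tangent vector \( V \perp p \), the geodesic is the great circle \( {\gamma }_{V}\left( t\right) = \cos \left( {\left| V\right| t}\right) p + \sin \left( {\left| V\right| t}\right) V/\left| V\right| \).

```python
import math

def geodesic(p, V, t):
    """Unit-sphere geodesic through p with initial velocity V ⟂ p in R³:
    γ_V(t) = cos(|V|t) p + sin(|V|t) V/|V|  (a great circle)."""
    s = math.sqrt(sum(v * v for v in V))
    if s == 0.0:
        return list(p)
    return [math.cos(s * t) * p[i] + math.sin(s * t) * V[i] / s for i in range(3)]

p = [1.0, 0.0, 0.0]
V = [0.0, 0.7, 0.3]   # tangent: p·V = 0
c, t = 2.5, 0.4

# Rescaling lemma (5.5): γ_{cV}(t) = γ_V(ct).
left = geodesic(p, [c * v for v in V], t)
right = geodesic(p, V, c * t)
print(max(abs(left[i] - right[i]) for i in range(3)))  # ≈ 0
```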
Proposition 5.9. (Naturality of the Exponential Map) Suppose that \( \varphi : \left( {M, g}\right) \rightarrow \left( {\widetilde{M},\widetilde{g}}\right) \) is an isometry. Then, for any \( p \in M \), the following diagram commutes: 
|
Null
|
No
|
Lemma 5.10. (Normal Neighborhood Lemma) For any \( p \in M \), there is a neighborhood \( \mathcal{V} \) of the origin in \( {T}_{p}M \) and a neighborhood \( \mathcal{U} \) of \( p \) in \( M \) such that \( {\exp }_{p} : \mathcal{V} \rightarrow \mathcal{U} \) is a diffeomorphism.
|
Proof. This follows immediately from the inverse function theorem, once we show that \( {\left( {\exp }_{p}\right) }_{ * } \) is invertible at 0 . Since \( {T}_{p}M \) is a vector space, there is a natural identification \( {T}_{0}\left( {{T}_{p}M}\right) = {T}_{p}M \) . Under this identification, we will show that \( {\left( {\exp }_{p}\right) }_{ * } : {T}_{0}\left( {{T}_{p}M}\right) = {T}_{p}M \rightarrow {T}_{p}M \) has a particularly simple expression: it is the identity map!\n\nTo compute \( {\left( {\exp }_{p}\right) }_{ * }V \) for an arbitrary vector \( V \in {T}_{p}M \), we just need to choose a curve \( \tau \) in \( {T}_{p}M \) starting at 0 whose initial tangent vector is \( V \) , and compute the initial tangent vector of the composite curve \( {\exp }_{p} \circ \tau \left( t\right) \) . An obvious such curve is \( \tau \left( t\right) = {tV} \) . Thus\n\n\[ \n{\left( {\exp }_{p}\right) }_{ * }V = {\left. \frac{d}{dt}\right| }_{t = 0}\left( {{\exp }_{p} \circ \tau }\right) \left( t\right) = {\left. \frac{d}{dt}\right| }_{t = 0}{\exp }_{p}\left( {tV}\right) = {\left. \frac{d}{dt}\right| }_{t = 0}{\gamma }_{V}\left( t\right) = V. \n\]
|
Yes
|
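The computation \( {\left( {\exp }_{p}\right) }_{ * } = \mathrm{id} \) at 0 can also be seen numerically: a difference quotient of \( t \mapsto {\exp }_{p}\left( {tV}\right) \) at \( t = 0 \) should return \( V \). The closed-form sphere exponential used in this sketch is my assumption, not a formula from the text:

```python
import math

def exp_p(p, W):
    """exp_p on the unit sphere in R³ (closed form for the round metric)."""
    s = math.sqrt(sum(w * w for w in W))
    if s == 0.0:
        return list(p)
    return [math.cos(s) * p[i] + math.sin(s) * W[i] / s for i in range(3)]

p = [0.0, 0.0, 1.0]
V = [0.6, -0.2, 0.0]    # tangent vector at p (p·V = 0)
h = 1e-6
q = exp_p(p, [h * v for v in V])
deriv = [(q[i] - p[i]) / h for i in range(3)]  # difference quotient at t = 0
print(deriv)  # ≈ V, since (exp_p)_* is the identity at 0
```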
Proposition 5.11. (Properties of Normal Coordinates) Let \( \left( {\mathcal{U},\left( {x}^{i}\right) }\right) \) be any normal coordinate chart centered at \( p \) .\n\n(a) For any \( V = {V}^{i}{\partial }_{i} \in {T}_{p}M \), the geodesic \( {\gamma }_{V} \) starting at \( p \) with initial velocity vector \( V \) is represented in normal coordinates by the radial line segment\n\n\[{\gamma }_{V}\left( t\right) = \left( {t{V}^{1},\ldots, t{V}^{n}}\right)\]\n\nas long as \( {\gamma }_{V} \) stays within \( \mathcal{U} \) .\n\n(b) The coordinates of \( p \) are \( \left( {0,\ldots ,0}\right) \) .\n\n(c) The components of the metric at \( p \) are \( {g}_{ij} = {\delta }_{ij} \) .\n\n(d) Any Euclidean ball \( \{ x : r\left( x\right) < \varepsilon \} \) contained in \( \mathcal{U} \) is a geodesic ball in M.\n\n(e) At any point \( q \in \mathcal{U} - p,\partial /\partial r \) is the velocity vector of the unit speed geodesic from \( p \) to \( q \), and therefore has unit length with respect to \( g \) .\n\n(f) The first partial derivatives of \( {g}_{ij} \) and the Christoffel symbols vanish at \( p \) .
|
Null
|
No
|
Lemma 6.1. For any curve segment \( \gamma : \left\lbrack {a, b}\right\rbrack \rightarrow M \), and any reparametrization \( \widetilde{\gamma } \) of \( \gamma, L\left( \gamma \right) = L\left( \widetilde{\gamma }\right) \) .
|
Exercise 6.1. Prove Lemma 6.1.
|
No
|
Lemma 6.3. (Symmetry Lemma) Let \( \Gamma : \left( {-\varepsilon ,\varepsilon }\right) \times \left\lbrack {a, b}\right\rbrack \rightarrow M \) be an admissible family of curves in a Riemannian (or pseudo-Riemannian) manifold. On any rectangle \( \left( {-\varepsilon ,\varepsilon }\right) \times \left\lbrack {{a}_{i - 1},{a}_{i}}\right\rbrack \) where \( \Gamma \) is smooth,\n\n\[ \n{D}_{s}{\partial }_{t}\Gamma = {D}_{t}{\partial }_{s}\Gamma \n\]
|
Proof. This is a local question, so we may compute in coordinates \( \left( {x}^{i}\right) \) around any point \( \Gamma \left( {{s}_{0},{t}_{0}}\right) \) . Writing the components of \( \Gamma \) as \( \Gamma \left( {s, t}\right) = \) \( \left( {{x}^{1}\left( {s, t}\right) ,\ldots ,{x}^{n}\left( {s, t}\right) }\right) \), we have\n\n\[ \n{\partial }_{t}\Gamma = \frac{\partial {x}^{k}}{\partial t}{\partial }_{k};\;{\partial }_{s}\Gamma = \frac{\partial {x}^{k}}{\partial s}{\partial }_{k}.\n\]\n\nThen, using the coordinate formula (4.10) for covariant derivatives along curves,\n\n\[ \n{D}_{s}{\partial }_{t}\Gamma = \left( {\frac{{\partial }^{2}{x}^{k}}{\partial s\partial t} + \frac{\partial {x}^{i}}{\partial t}\frac{\partial {x}^{j}}{\partial s}{\Gamma }_{ji}^{k}}\right) {\partial }_{k}\n\]\n\n\[ \n{D}_{t}{\partial }_{s}\Gamma = \left( {\frac{{\partial }^{2}{x}^{k}}{\partial t\partial s} + \frac{\partial {x}^{i}}{\partial s}\frac{\partial {x}^{j}}{\partial t}{\Gamma }_{ji}^{k}}\right) {\partial }_{k}.\n\]\n\nReversing the roles of \( i \) and \( j \) in the second line above, and using the symmetry condition \( {\Gamma }_{ji}^{k} = {\Gamma }_{ij}^{k} \), we see immediately that these two expressions are equal.
|
Yes
|