| Q | A | Result |
|---|---|---|
Corollary 4.3. Let \( F \) be a spray on \( X \), and let \( {x}_{0} \in X \) . There exists an open neighborhood \( U \) of \( {x}_{0} \), and an open neighborhood \( V \) of \( {0}_{{x}_{0}} \) in \( {TX} \) satisfying the following condition. For every \( x \in U \) and \( v \in V \cap {T}_{x}X \) , there exists a unique geodesic
|
\[ {\alpha }_{v} : \left( {-2,2}\right) \rightarrow X \]

such that

\[ {\alpha }_{v}\left( 0\right) = x\;\text{ and }\;{\alpha }_{v}^{\prime }\left( 0\right) = v. \]

Observe that in a chart, we may pick \( V \) as a product

\[ V = U \times {V}_{2}\left( 0\right) \subset U \times \mathbf{E} \]

where \( {V}_{2}\left( 0\right) \) is a neighborhood of 0 in \( \mathbf{E} \). Then the geodesic flow is defined on \( U \times {V}_{2}\left( 0\right) \times J \), where \( J = \left( {-2,2}\right) \). We picked \( \left( {-2,2}\right) \) for concreteness. What we really want is that 0 and 1 lie in the interval. Any bounded interval \( J \) containing 0 and 1 could have been selected in the statement of the corollary. Then of course, \( U \) and \( V \) (or \( {V}_{2}\left( 0\right) \)) depend on \( J \).
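As an illustrative aside (not part of the text): in a chart, a spray reduces the geodesic equation to a second-order ODE \( x'' = F(x, x') \), which can be integrated numerically over any bounded interval containing 0 and 1. The sketch below uses a hand-rolled RK4 step and the trivial (flat) spray \( F = 0 \), whose geodesics are the straight lines \( \alpha_v(t) = x + tv \); both the spray and the step count are illustrative assumptions.

```python
import numpy as np

def geodesic(spray, x, v, t1, steps=1000):
    """Integrate the second-order geodesic ODE x'' = spray(x, x') from the
    initial condition (x, v) up to time t1, via RK4 on the first-order
    system (x, v)' = (v, spray(x, v))."""
    h = t1 / steps
    def rhs(state):
        x_, v_ = state
        return np.array([v_, spray(x_, v_)])
    state = np.array([x, v], dtype=float)
    for _ in range(steps):
        k1 = rhs(state)
        k2 = rhs(state + 0.5 * h * k1)
        k3 = rhs(state + 0.5 * h * k2)
        k4 = rhs(state + h * k3)
        state = state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return state[0]

# Flat spray: geodesics are straight lines alpha_v(t) = x + t v.
flat = lambda x, v: np.zeros_like(v)
x0, v0 = np.array([1.0, 2.0]), np.array([0.5, -1.0])
end = geodesic(flat, x0, v0, 1.0)
print(end)  # close to x0 + v0
```

A nontrivial spray (e.g. one coming from a Riemannian metric) would be plugged in the same way; only `spray` changes.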
|
Yes
|
Theorem 5.1. Let \( Y \) be of class \( {C}^{p}\left( {p \geqq 3}\right) \) and admit partitions of unity. Let \( X \) be a closed submanifold. Then there exists a tubular neighborhood of \( X \) in \( Y \), of class \( {C}^{p - 2} \) .
|
Proof. Consider the exact sequence of tangent bundles:

\[ 0 \rightarrow T\left( X\right) \rightarrow T\left( Y\right) \mid X \rightarrow N\left( X\right) \rightarrow 0. \]

We know that this sequence splits, and thus there exists some splitting

\[ T\left( Y\right) \mid X = T\left( X\right) \oplus N\left( X\right) \]

where \( N\left( X\right) \) may be identified with a subbundle of \( T\left( Y\right) \mid X \). Following Palais, we construct a spray \( \xi \) on \( T\left( Y\right) \) using Theorem 3.1 and obtain the corresponding exponential map. We shall use its restriction to \( N\left( X\right) \), denoted by \( \exp \mid N \). Thus

\[ \exp \mid N : \mathfrak{D} \cap N\left( X\right) \rightarrow Y. \]

We contend that this map is a local isomorphism. To prove this, we may work locally. Corresponding to the submanifold, we have a product decomposition \( U = {U}_{1} \times {U}_{2} \), with \( X = {U}_{1} \times 0 \). If \( U \) is open in \( \mathbf{E} \), then we may take \( {U}_{1},{U}_{2} \) open in \( {\mathbf{F}}_{1},{\mathbf{F}}_{2} \) respectively. Then the injection of \( N\left( X\right) \) in \( T\left( Y\right) \mid X \) may be represented locally by an exact sequence

\[ 0 \rightarrow {U}_{1} \times {\mathbf{F}}_{2}\overset{\varphi }{ \rightarrow }{U}_{1} \times {\mathbf{F}}_{1} \times {\mathbf{F}}_{2} \]

and the inclusion of \( T\left( Y\right) \mid X \) in \( T\left( Y\right) \) is simply the inclusion

\[ {U}_{1} \times {\mathbf{F}}_{1} \times {\mathbf{F}}_{2} \rightarrow {U}_{1} \times {U}_{2} \times {\mathbf{F}}_{1} \times {\mathbf{F}}_{2}. \]

We work at the point \( \left( {{x}_{1},0}\right) \) in \( {U}_{1} \times {\mathbf{F}}_{2} \). We must compute the derivative of the composite map

\[ {U}_{1} \times {\mathbf{F}}_{2}\overset{\varphi }{ \rightarrow }{U}_{1} \times {U}_{2} \times {\mathbf{F}}_{1} \times {\mathbf{F}}_{2}\overset{\exp }{ \rightarrow }Y \]

at \( \left( {{x}_{1},0}\right) \). We can do this by the formula for the partial derivatives. Since the exponential map coincides with the projection on the zero cross section, its
|
Yes
|
Proposition 6.1. Let \( X \) be a manifold. Let \( \pi : E \rightarrow X \) and \( {\pi }_{1} : {E}_{1} \rightarrow X \) be two vector bundles over \( X \). Let

\[ f : E \rightarrow {E}_{1} \]

be a tubular neighborhood of \( X \) in \( {E}_{1} \) (identifying \( X \) with its zero section in \( {E}_{1} \)). Then there exists an isotopy

\[ {f}_{t} : E \rightarrow {E}_{1} \]

with proper domain \( \left\lbrack {0,1}\right\rbrack \) such that \( {f}_{1} = f \) and \( {f}_{0} \) is a VB-isomorphism. (If \( f,\pi ,{\pi }_{1} \) are of class \( {C}^{p} \) then \( {f}_{t} \) can be chosen of class \( {C}^{p - 1} \).)
|
Proof. We define \( F \) by the formula

\[ {F}_{t}\left( e\right) = {t}^{-1}f\left( {te}\right) \]

for \( t \neq 0 \) and \( e \in E \). Then \( {F}_{t} \) is an embedding since it is composed of embeddings (the scalar multiplications by \( t,{t}^{-1} \) are in fact VB-isomorphisms).

We must investigate what happens at \( t = 0 \).

Given \( e \in E \), we find an open neighborhood \( {U}_{1} \) of \( {\pi e} \) over which \( {E}_{1} \) admits a trivialization \( {U}_{1} \times {\mathbf{E}}_{1} \). We then find a still smaller open neighborhood \( U \) of \( {\pi e} \) and an open ball \( B \) around 0 in the typical fiber \( \mathbf{E} \) of \( E \) such that \( E \) admits a trivialization \( U \times \mathbf{E} \) over \( U \), and such that the representation \( \bar{f} \) of \( f \) on \( U \times B \) (contained in \( U \times \mathbf{E} \)) maps \( U \times B \) into \( {U}_{1} \times {\mathbf{E}}_{1} \). This is possible by continuity. On \( U \times B \) we can represent \( \bar{f} \) by two morphisms,

\[ \bar{f}\left( {x, v}\right) = \left( {\varphi \left( {x, v}\right) ,\psi \left( {x, v}\right) }\right) \]

and \( \varphi \left( {x,0}\right) = x \) while \( \psi \left( {x,0}\right) = 0 \). Observe that for all \( t \) sufficiently small, \( {te} \) is contained in \( U \times B \) (in the local representation).

We can represent \( {F}_{t} \) locally on \( U \times B \) as the mapping

\[ {\bar{F}}_{t}\left( {x, v}\right) = \left( {\varphi \left( {x,{tv}}\right) ,{t}^{-1}\psi \left( {x,{tv}}\right) }\right) . \]

The map \( \varphi \) is then a morphism in the three variables \( x, v \), and \( t \) even at \( t = 0 \).
The second component of \( {\bar{F}}_{t} \) can be written

\[ {t}^{-1}\psi \left( {x,{tv}}\right) = {t}^{-1}{\int }_{0}^{1}{D}_{2}\psi \left( {x,{stv}}\right) \cdot \left( {tv}\right) {ds} \]

and thus \( {t}^{-1} \) cancels \( t \) to yield simply

\[ {\int }_{0}^{1}{D}_{2}\psi \left( {x,{stv}}\right) \cdot v\,{ds}. \]

This is a morphism in \( t \), even at \( t = 0 \). Furthermore, for \( t = 0 \), we obtain

\[ {\bar{F}}_{0}\left( {x, v}\right) = \left( {x,{D}_{2}\psi \left( {x,0}\right) v}\right) . \]

Since \( f \) was originally assumed to be an embedding, it follows that \( {D}_{2}\psi \left( {x,0}\right) \) is a toplinear isomorphism, and therefore \( {F}_{0} \) is a VB-isomorphism. To get our isotopy in standard form, we can use a function \( \sigma : \mathbf{R} \rightarrow \mathbf{R} \) such that \( \sigma \left( t\right) = 0 \) for \( t \leqq 0 \) and \( \sigma \left( t\right) = 1 \) for \( t \geqq 1 \), and \( \sigma \) is monotone increasing. This proves our proposition.
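As a numerical aside (not in the original text), the cancellation of \( t^{-1} \) against \( t \) can be watched on any concrete \( \psi \) with \( \psi(x,0) = 0 \). Below, \( \psi(x,v) = \sin v + x v^2 \) is a made-up scalar example, so \( D_2\psi(x,0) = 1 \) and \( t^{-1}\psi(x,tv) \) should converge to \( v \) as \( t \to 0 \):

```python
import math

# A hypothetical second component psi with psi(x, 0) = 0, standing in for
# the local representation of the tubular neighborhood.
def psi(x, v):
    return math.sin(v) + x * v**2

x, v = 0.7, 1.3
# D_2 psi(x, 0) = cos(0) = 1, so F_0 should send v to v.
for t in [1.0, 0.1, 0.001]:
    print(t, psi(x, t * v) / t)   # second column approaches v = 1.3
```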
|
Yes
|
Theorem 6.2. Let \( X \) be a submanifold of \( Y \). Let

\[ \pi : E \rightarrow X\;\text{ and }\;{\pi }_{1} : {E}_{1} \rightarrow X \]

be two vector bundles, and assume that \( E \) is compressible. Let \( f : E \rightarrow Y \) and \( g : {E}_{1} \rightarrow Y \) be two tubular neighborhoods of \( X \) in \( Y \). Then there exists a \( {C}^{p - 1} \)-isotopy

\[ {f}_{t} : E \rightarrow Y \]

of tubular neighborhoods with proper domain \( \left\lbrack {0,1}\right\rbrack \) and a VB-isomorphism \( \lambda : E \rightarrow {E}_{1} \) such that \( {f}_{1} = f \) and \( {f}_{0} = {g\lambda } \).
|
Proof. We observe that \( f\left( E\right) \) and \( g\left( {E}_{1}\right) \) are open neighborhoods of \( X \) in \( Y \). Let \( U = {f}^{-1}\left( {f\left( E\right) \cap g\left( {E}_{1}\right) }\right) \) and let \( \varphi : E \rightarrow U \) be a compression. Let \( \psi \) be the composite map

\[ E\overset{\varphi }{ \rightarrow }U\overset{f \mid U}{ \rightarrow }Y, \]

\( \psi = \left( {f \mid U}\right) \circ \varphi \). Then \( \psi \) is a tubular neighborhood, and \( \psi \left( E\right) \) is contained in \( g\left( {E}_{1}\right) \). Therefore \( {g}^{-1}\psi : E \rightarrow {E}_{1} \) is a tubular neighborhood of the same type considered in the previous proposition. There exists an isotopy of tubular neighborhoods of \( X \):

\[ {G}_{t} : E \rightarrow {E}_{1} \]

such that \( {G}_{1} = {g}^{-1}\psi \) and \( {G}_{0} \) is a VB-isomorphism. Considering the isotopy \( g{G}_{t} \), we find an isotopy of tubular neighborhoods

\[ {\psi }_{t} : E \rightarrow Y \]

such that \( {\psi }_{1} = \psi \) and \( {\psi }_{0} = {g\omega } \) where \( \omega : E \rightarrow {E}_{1} \) is a VB-isomorphism. We have thus shown that \( \psi \) and \( {g\omega } \) are isotopic (by an isotopy of tubular neighborhoods). Similarly, we see that \( \psi \) and \( {f\mu } \) are isotopic for some VB-isomorphism

\[ \mu : E \rightarrow E. \]

Consequently, adjusting the proper domains of our isotopies suitably, we get an isotopy of tubular neighborhoods going from \( {g\omega } \) to \( {f\mu } \), say \( {F}_{t} \). Then \( {F}_{t}{\mu }^{-1} \) will give us the desired isotopy from \( {g\omega }{\mu }^{-1} \) to \( f \), and we can put \( \lambda = \omega {\mu }^{-1} \) to conclude the proof.
|
Yes
|
Proposition 1.1. There exists a unique function \( {\xi \varphi } \) on \( U \) of class \( {C}^{p - 1} \) such that

\[ \left( {\xi \varphi }\right) \left( x\right) = \left( {{T}_{x}\varphi }\right) \xi \left( x\right) . \]

If \( U \) is open in the Banach space \( \mathbf{E} \) and \( \xi \) denotes the local representation of the vector field on \( U \), then

\[ \left( {\xi \varphi }\right) \left( x\right) = {\varphi }^{\prime }\left( x\right) \xi \left( x\right) . \]
|
Proof. The first formula certainly defines a mapping of \( U \) into \( \mathbf{R} \) . The local formula defines a \( {C}^{p - 1} \) -morphism on \( U \) . It follows at once from the definitions that the first formula expresses invariantly in terms of the tangent bundle the same mapping as the second. Thus it allows us to define \( {\xi \varphi } \) as a morphism globally, as desired.
|
Yes
|
Proposition 1.2. Let \( X \) be a manifold and \( U \) open in \( X \) . Let \( \xi \) be a vector field over \( X \) . If \( {\partial }_{\xi } = 0 \), then \( \xi \left( x\right) = 0 \) for all \( x \in U \) . Each \( {\partial }_{\xi } \) is a derivation of \( {\mathrm{{Fu}}}^{p}\left( U\right) \) into \( {\mathrm{{Fu}}}^{p - 1}\left( U\right) \) .
|
Proof. Suppose \( \xi \left( x\right) \neq 0 \) for some \( x \) . We work with the local representations, and take \( \varphi \) to be a continuous linear map of \( \mathbf{E} \) into \( \mathbf{R} \) such that \( \varphi \left( {\xi \left( x\right) }\right) \neq 0 \), by Hahn-Banach. Then \( {\varphi }^{\prime }\left( y\right) = \varphi \) for all \( y \in U \), and we see that \( {\varphi }^{\prime }\left( x\right) \xi \left( x\right) \neq 0 \), thus proving the first assertion. The second is obvious from the local formula.
|
No
|
Proposition 1.3. Let \( \xi ,\eta \) be two vector fields of class \( {C}^{p - 1} \) on \( X \). Then there exists a unique vector field \( \left\lbrack {\xi ,\eta }\right\rbrack \) of class \( {C}^{p - 2} \) such that for each open set \( U \) and function \( \varphi \) on \( U \) we have

\[ \left\lbrack {\xi ,\eta }\right\rbrack \varphi = \xi \left( {\eta \left( \varphi \right) }\right) - \eta \left( {\xi \left( \varphi \right) }\right) . \]

If \( U \) is open in \( \mathbf{E} \) and \( \xi ,\eta \) are the local representations of the vector fields, then \( \left\lbrack {\xi ,\eta }\right\rbrack \) is given by the local formula

\[ \left\lbrack {\xi ,\eta }\right\rbrack \varphi \left( x\right) = {\varphi }^{\prime }\left( x\right) \left( {{\eta }^{\prime }\left( x\right) \xi \left( x\right) - {\xi }^{\prime }\left( x\right) \eta \left( x\right) }\right) . \]

Thus the local representation of \( \left\lbrack {\xi ,\eta }\right\rbrack \) is given by

\[ \left\lbrack {\xi ,\eta }\right\rbrack \left( x\right) = {\eta }^{\prime }\left( x\right) \xi \left( x\right) - {\xi }^{\prime }\left( x\right) \eta \left( x\right) . \]
|
Proof. By Proposition 1.2, any vector field having the desired effect on functions is uniquely determined. We check that the local formula gives us this effect locally. Differentiating formally, we have (using the law for the derivative of a product):

\[ {\left( \eta \varphi \right) }^{\prime }\xi - {\left( \xi \varphi \right) }^{\prime }\eta = {\left( {\varphi }^{\prime }\eta \right) }^{\prime }\xi - {\left( {\varphi }^{\prime }\xi \right) }^{\prime }\eta \]

\[ = {\varphi }^{\prime }{\eta }^{\prime }\xi + {\varphi }^{\prime \prime }{\eta \xi } - {\varphi }^{\prime }{\xi }^{\prime }\eta - {\varphi }^{\prime \prime }{\xi \eta }. \]

The terms involving \( {\varphi }^{\prime \prime } \) must be understood correctly. For instance, the first such term at a point \( x \) is simply \( {\varphi }^{\prime \prime }\left( x\right) \left( {\eta \left( x\right) ,\xi \left( x\right) }\right) \), remembering that \( {\varphi }^{\prime \prime }\left( x\right) \) is a bilinear map, and can thus be evaluated at the two vectors \( \eta \left( x\right) \) and \( \xi \left( x\right) \). However, we know that \( {\varphi }^{\prime \prime }\left( x\right) \) is symmetric. Hence the two terms involving the second derivative of \( \varphi \) cancel, and give us our formula.
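As an illustrative check (not in the original text), the local formula can be verified symbolically for polynomial vector fields on an open set of \( \mathbf{R}^2 \); the particular fields and test function below are arbitrary choices:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = sp.Matrix([x1, x2])

# Two arbitrary vector fields in local representation.
xi  = sp.Matrix([x2, x1 * x2])
eta = sp.Matrix([x1**2, x2])

def bracket(a, b):
    # Local formula: [a, b](x) = b'(x) a(x) - a'(x) b(x).
    return b.jacobian(X) * a - a.jacobian(X) * b

def apply_field(zeta, f):
    # (zeta f)(x) = f'(x) zeta(x), as in Proposition 1.1.
    return sp.Matrix([f]).jacobian(X).dot(zeta)

phi = x1**2 * x2   # a test function

lhs = apply_field(bracket(xi, eta), phi)
rhs = apply_field(xi, apply_field(eta, phi)) - apply_field(eta, apply_field(xi, phi))
print(sp.simplify(lhs - rhs))  # 0
```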
|
Yes
|
Corollary 1.4. The bracket \( \left\lbrack {\xi ,\eta }\right\rbrack \) is bilinear in both arguments, we have \( \left\lbrack {\xi ,\eta }\right\rbrack = - \left\lbrack {\eta ,\xi }\right\rbrack \), and Jacobi's identity

\[ \left\lbrack {\xi ,\left\lbrack {\eta ,\zeta }\right\rbrack }\right\rbrack = \left\lbrack {\left\lbrack {\xi ,\eta }\right\rbrack ,\zeta }\right\rbrack + \left\lbrack {\eta ,\left\lbrack {\xi ,\zeta }\right\rbrack }\right\rbrack . \]
|
Proof. The first two assertions are obvious. The third comes from the definition of the bracket. We apply the vector field on the left of the equality to a function \( \varphi \) . All the terms cancel out (the reader will write it out as well or better than the author). The last two formulas are immediate.
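The cancellation the reader is asked to write out can also be confirmed symbolically (an aside, not part of the text); the three polynomial fields below are arbitrary:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = sp.Matrix([x1, x2])

def bracket(a, b):
    # Local formula: [a, b](x) = b'(x) a(x) - a'(x) b(x).
    return b.jacobian(X) * a - a.jacobian(X) * b

xi   = sp.Matrix([x2, x1])
eta  = sp.Matrix([x1 * x2, x2**2])
zeta = sp.Matrix([x1**2, x1 + x2])

# Jacobi's identity in the form [xi,[eta,zeta]] = [[xi,eta],zeta] + [eta,[xi,zeta]].
jac = (bracket(xi, bracket(eta, zeta))
       - bracket(bracket(xi, eta), zeta)
       - bracket(eta, bracket(xi, zeta)))
print(sp.simplify(jac))  # zero vector
```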
|
No
|
Theorem 1.5. Let \( \xi ,\eta \) be vector fields on \( X \), and assume that \( \left\lbrack {\xi ,\eta }\right\rbrack = 0 \). Let \( \alpha \) and \( \beta \) be the flows for \( \xi \) and \( \eta \) respectively. Then for real values \( t, s \) we have

\[ {\alpha }_{t} \circ {\beta }_{s} = {\beta }_{s} \circ {\alpha }_{t}. \]

Or in other words, for any \( x \in X \) we have

\[ \alpha \left( {t,\beta \left( {s, x}\right) }\right) = \beta \left( {s,\alpha \left( {t, x}\right) }\right) , \]

in the sense that if for some value of \( t \) a value of \( s \) is in the domain of one of these expressions, then it is in the domain of the other and the two expressions are equal.
|
Proof. For a fixed value of \( t \), the two curves in \( s \) given by the right- and left-hand side of the last formula have the same initial condition, namely \( {\alpha }_{t}\left( x\right) \). The curve on the right

\[ s \mapsto \beta \left( {s,\alpha \left( {t, x}\right) }\right) \]

is by definition the integral curve of \( \eta \). The curve on the left

\[ s \mapsto \alpha \left( {t,\beta \left( {s, x}\right) }\right) \]

is the image under \( {\alpha }_{t} \) of the integral curve for \( \eta \) having initial condition \( x \). Since \( x \) is fixed, let us denote \( \beta \left( {s, x}\right) \) simply by \( \beta \left( s\right) \). What we must show is that the two curves on the right and on the left satisfy the same differential equation.

[Figure omitted: the flow \( {\alpha }_{t} \) carries the integral curve \( s \mapsto \beta \left( s\right) \) to the curve \( s \mapsto {\alpha }_{t}\left( {\beta \left( s\right) }\right) \).]

In the above figure, we see that the flow \( {\alpha }_{t} \) shoves the curve on the left to the curve on the right. We must compute the tangent vectors to the curve on the right. We have

\[ \frac{d}{ds}\left( {{\alpha }_{t}\left( {\beta \left( s\right) }\right) }\right) = {D}_{2}\alpha \left( {t,\beta \left( s\right) }\right) {\beta }^{\prime }\left( s\right) = {D}_{2}\alpha \left( {t,\beta \left( s\right) }\right) \eta \left( {\beta \left( s\right) }\right) . \]

Now fix \( s \), and denote this last expression by \( F\left( t\right) \). We must show that if

\[ G\left( t\right) = \eta \left( {\alpha \left( {t,\beta \left( s\right) }\right) }\right) \]

then

\[ F\left( t\right) = G\left( t\right) . \]

We have trivially \( F\left( 0\right) = G\left( 0\right) \); in other words, the curves \( F \) and \( G \) have the same initial condition.
On the other hand,

\[ {F}^{\prime }\left( t\right) = {\xi }^{\prime }\left( {\alpha \left( {t,\beta \left( s\right) }\right) }\right) {D}_{2}\alpha \left( {t,\beta \left( s\right) }\right) \eta \left( {\beta \left( s\right) }\right) \]

and

\[ {G}^{\prime }\left( t\right) = {\eta }^{\prime }\left( {\alpha \left( {t,\beta \left( s\right) }\right) }\right) \xi \left( {\alpha \left( {t,\beta \left( s\right) }\right) }\right) = {\xi }^{\prime }\left( {\alpha \left( {t,\beta \left( s\right) }\right) }\right) \eta \left( {\alpha \left( {t,\beta \left( s\right) }\right) }\right) \;\left( {\text{ because }\left\lbrack {\xi ,\eta }\right\rbrack = 0}\right) . \]

Hence we see that our two curves \( F \) and \( G \) satisfy the same differential equation, whence they are equal. This proves our theorem.
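A concrete numerical illustration (an aside, not from the text): for linear fields \( \xi(x) = Ax \) and \( \eta(x) = Bx \) the local bracket is \( (BA - AB)x \), so commuting matrices give commuting flows. Below \( A \) generates rotations and \( B = cI \) generates radial scaling, an example chosen so both flows are known in closed form:

```python
import numpy as np

# xi(x) = A x with A a rotation generator; eta(x) = c x (radial field).
# Then [xi, eta](x) = (BA - AB) x = 0, so the flows should commute.
A = np.array([[0.0, -1.0], [1.0, 0.0]])
c = 0.3

def alpha(t, x):
    # Flow of xi: rotation by angle t, i.e. exp(tA).
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return R @ x

def beta(s, x):
    # Flow of eta: radial scaling by e^{c s}.
    return np.exp(c * s) * x

x = np.array([1.0, 2.0])
t, s = 0.7, -1.2
print(alpha(t, beta(s, x)))
print(beta(s, alpha(t, x)))   # the two outputs agree
```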
|
Yes
|
Proposition 3.1. Let \( {x}_{0} \) be a point of \( X \) and \( \omega \) an r-form on \( X \). If

\[ \left\langle {\omega ,{\xi }_{1} \times \cdots \times {\xi }_{r}}\right\rangle \left( {x}_{0}\right) \]

is equal to 0 for all vector fields \( {\xi }_{1},\ldots ,{\xi }_{r} \) at \( {x}_{0} \) (i.e. defined on some neighborhood of \( {x}_{0} \)), then \( \omega \left( {x}_{0}\right) = 0 \).
|
Proof. Considering things locally in terms of their local representations, we see that if \( \omega \left( {x}_{0}\right) \) is not 0, then it does not vanish at some \( r \)-tuple of vectors \( \left( {{v}_{1},\ldots ,{v}_{r}}\right) \). We can take vector fields at \( {x}_{0} \) which take on these values at \( {x}_{0} \), and from this our assertion is obvious.
|
Yes
|
Proposition 3.2. Let \( \omega \) be an r-form of class \( {C}^{p - 1} \) on \( X \) . Then there exists a unique \( \left( {r + 1}\right) \) -form \( {d\omega } \) on \( X \) of class \( {C}^{p - 2} \) such that, for any open set \( U \) of \( X \) and vector fields \( {\xi }_{0},\ldots ,{\xi }_{r} \) on \( U \) we have \[ \left\langle {{d\omega },{\xi }_{0} \times \cdots \times {\xi }_{r}}\right\rangle = \mathop{\sum }\limits_{{i = 0}}^{r}{\left( -1\right) }^{i}{\xi }_{i}\left\langle {\omega ,{\xi }_{0} \times \cdots \times {\widehat{\xi }}_{i} \times \cdots \times {\xi }_{r}}\right\rangle + \mathop{\sum }\limits_{{i < j}}{\left( -1\right) }^{i + j}\left\langle {\omega ,\left\lbrack {{\xi }_{i},{\xi }_{j}}\right\rbrack \times {\xi }_{0} \times \cdots \times {\widehat{\xi }}_{i} \times \cdots \times {\widehat{\xi }}_{j} \times \cdots \times {\xi }_{r}}\right\rangle . \]
|
Proof. As before, we observe that the local formula defines a differential form. If we can prove that it gives the same thing as the first formula, which is expressed invariantly, then we can globalize it, and we are done. Let us denote by \( {S}_{1} \) and \( {S}_{2} \) the two sums occurring in the invariant expression, and let \( L \) be the local expression. We must show that \( {S}_{1} + {S}_{2} = L \). We consider \( {S}_{1} \), and apply the definition of \( {\xi }_{i} \) operating on a function locally, as in Proposition 1.1, at a point \( x \). We obtain

\[ {S}_{1} = \mathop{\sum }\limits_{{i = 0}}^{r}{\left( -1\right) }^{i}{\left\langle \omega ,{\xi }_{0} \times \cdots \times {\widehat{\xi }}_{i} \times \cdots \times {\xi }_{r}\right\rangle }^{\prime }\left( x\right) {\xi }_{i}\left( x\right) . \]

The derivative is perhaps best computed by going back to the definition. Applying this definition directly, and discarding second order terms, we find that \( {S}_{1} \) is equal to

\[ \sum {\left( -1\right) }^{i}\left\langle {{\omega }^{\prime }\left( x\right) {\xi }_{i}\left( x\right) ,{\xi }_{0}\left( x\right) \times \cdots \times \widehat{{\xi }_{i}\left( x\right) } \times \cdots \times {\xi }_{r}\left( x\right) }\right\rangle + \mathop{\sum }\limits_{i}\mathop{\sum }\limits_{{j < i}}{\left( -1\right) }^{i}\left\langle {\omega \left( x\right) ,{\xi }_{0}\left( x\right) \times \cdots \times {\xi }_{j}^{\prime }\left( x\right) {\xi }_{i}\left( x\right) \times \cdots \times \widehat{{\xi }_{i}\left( x\right) } \times \cdots \times {\xi }_{r}\left( x\right) }\right\rangle + \mathop{\sum }\limits_{i}\mathop{\sum }\limits_{{j > i}}{\left( -1\right) }^{i}\left\langle {\omega \left( x\right) ,{\xi }_{0}\left( x\right) \times \cdots \times \widehat{{\xi }_{i}\left( x\right) } \times \cdots \times {\xi }_{j}^{\prime }\left( x\right) {\xi }_{i}\left( x\right) \times \cdots \times {\xi }_{r}\left( x\right) }\right\rangle . \]

Of these three sums, the first one is the local formula \( L \).
As for the other two, permuting \( j \) and \( i \) in the first, and moving the term \( {\xi }_{j}^{\prime }\left( x\right) {\xi }_{i}\left( x\right) \) to the first position, we see that they combine to give (symbolically)

\[ - \mathop{\sum }\limits_{{i < j}}{\left( -1\right) }^{i + j}\left\langle {\omega ,\left( {{\xi }_{j}^{\prime }{\xi }_{i} - {\xi }_{i}^{\prime }{\xi }_{j}}\right) \times {\xi }_{0} \times \cdots \times {\widehat{\xi }}_{i} \times \cdots \times {\widehat{\xi }}_{j} \times \cdots \times {\xi }_{r}}\right\rangle . \]
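For \( r = 1 \) the identity \( {S}_{1} + {S}_{2} = L \) reads \( \langle d\omega, \xi \times \eta \rangle = \xi\langle\omega,\eta\rangle - \eta\langle\omega,\xi\rangle - \langle\omega,[\xi,\eta]\rangle \), and it can be checked symbolically (an illustrative aside, with arbitrary polynomial data):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = sp.Matrix([x1, x2])

w   = sp.Matrix([x1 * x2, x1 + x2**2])   # components of a 1-form omega
xi  = sp.Matrix([x2, x1**2])
eta = sp.Matrix([x1, x2 * x1])

def D(f):
    return f.jacobian(X)        # derivative of a column of functions

def field(zeta, f):
    # zeta acting on a function: f'(x) zeta(x)
    return sp.Matrix([f]).jacobian(X).dot(zeta)

bracket = D(eta) * xi - D(xi) * eta      # [xi, eta] locally

# Invariant formula (r = 1) versus the local formula
# <omega'(x) xi, eta> - <omega'(x) eta, xi>.
invariant = field(xi, w.dot(eta)) - field(eta, w.dot(xi)) - w.dot(bracket)
local     = (D(w) * xi).dot(eta) - (D(w) * eta).dot(xi)
print(sp.simplify(invariant - local))  # 0
```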
|
Yes
|
EXD 1. \( d\left( {\omega \land \psi }\right) = {d\omega } \land \psi + {\left( -1\right) }^{\deg \left( \omega \right) }\omega \land {d\psi } \) .
|
Proof. This is a simple formal exercise in the use of the local formula for the local representation of the exterior derivative. We leave it to the reader.
|
No
|
The map \( d \) is linear, and satisfies

\[ d\left( {\omega \land \psi }\right) = {d\omega } \land \psi + {\left( -1\right) }^{r}\omega \land {d\psi } \]

if \( r = \deg \omega \). The map \( d \) is uniquely determined by these properties, and by the fact that for a function \( f \), we have \( {df} = {f}^{\prime } \).
|
The linearity of \( d \) is obvious. Hence it suffices to prove the formula for decomposable forms. We note that for any function \( f \) we have

\[ d\left( {f\omega }\right) = {df} \land \omega + {fd\omega }. \]

Indeed, if \( \omega \) is a function \( g \), then from the derivative of a product we get \( d\left( {fg}\right) = {fdg} + {gdf} \). If

\[ \omega = {gd}{\lambda }_{{i}_{1}} \land \cdots \land d{\lambda }_{{i}_{r}} \]

where \( g \) is a function, then

\[ d\left( {f\omega }\right) = d\left( {{fgd}{\lambda }_{{i}_{1}} \land \cdots \land d{\lambda }_{{i}_{r}}}\right) = d\left( {fg}\right) \land d{\lambda }_{{i}_{1}} \land \cdots \land d{\lambda }_{{i}_{r}} \]

\[ = \left( {{fdg} + {gdf}}\right) \land d{\lambda }_{{i}_{1}} \land \cdots \land d{\lambda }_{{i}_{r}} \]

\[ = {fd\omega } + {df} \land \omega \]

as desired. Now suppose that

\[ \omega = {fd}{\lambda }_{{i}_{1}} \land \cdots \land d{\lambda }_{{i}_{r}} = f\widetilde{\omega }\;\text{ and }\;\psi = {gd}{\lambda }_{{j}_{1}} \land \cdots \land d{\lambda }_{{j}_{s}} = g\widetilde{\psi } \]

with \( {i}_{1} < \cdots < {i}_{r} \) and \( {j}_{1} < \cdots < {j}_{s} \) as usual. If some \( {i}_{\nu } = {j}_{\mu } \), then from the definitions we see that the expressions on both sides of the equality in the theorem are equal to 0. Hence we may assume that the sets of indices \( {i}_{1},\ldots ,{i}_{r} \) and \( {j}_{1},\ldots ,{j}_{s} \) have no element in common.
Then \( d\left( {\widetilde{\omega } \land \widetilde{\psi }}\right) = 0 \) by definition, and

\[ d\left( {\omega \land \psi }\right) = d\left( {{fg}\widetilde{\omega } \land \widetilde{\psi }}\right) = d\left( {fg}\right) \land \widetilde{\omega } \land \widetilde{\psi } \]

\[ = \left( {{gdf} + {fdg}}\right) \land \widetilde{\omega } \land \widetilde{\psi } \]

\[ = {d\omega } \land \psi + f\,{dg} \land \widetilde{\omega } \land \widetilde{\psi } \]

\[ = {d\omega } \land \psi + {\left( -1\right) }^{r}f\widetilde{\omega } \land {dg} \land \widetilde{\psi } \]

\[ = {d\omega } \land \psi + {\left( -1\right) }^{r}\omega \land {d\psi }, \]

thus proving the desired formula, in the present case. (We used the fact that \( {dg} \land \widetilde{\omega } = {\left( -1\right) }^{r}\widetilde{\omega } \land {dg} \), whose proof is left to the reader.) The formula in the general case follows because any differential form can be expressed as a sum of forms of the type just considered, and one can then use the bilinearity of the product. Finally, \( d \) is uniquely determined by the formula, and its effect on functions, because any differential form is a sum of forms of type \( {fd}{\lambda }_{{i}_{1}} \land \cdots \land d{\lambda }_{{i}_{r}} \), and the formula gives an expression of \( d \) in terms of its effect on forms of lower degree. By induction, if the value of \( d \) on functions is known, its value can then be determined on forms of degree \( \geqq 1 \). This proves our assertion.
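The Leibniz formula can be checked mechanically by coding forms in coordinates. The sketch below (an aside, not part of the text) represents a form on an open set of \( \mathbf{R}^3 \) as a dictionary from sorted index tuples to coefficient functions, and verifies \( d(\omega\land\psi) = d\omega\land\psi + (-1)^r\,\omega\land d\psi \) on an arbitrary pair of 1-forms:

```python
import sympy as sp

x = sp.symbols('x1:4')   # coordinates x1, x2, x3
n = 3

def sign(K):
    # Sign of the permutation sorting the distinct indices in K.
    s = 1
    for i in range(len(K)):
        for j in range(i + 1, len(K)):
            if K[i] > K[j]:
                s = -s
    return s

def wedge(a, b):
    out = {}
    for I, f in a.items():
        for J, g in b.items():
            if set(I) & set(J):
                continue                      # repeated dx_i gives 0
            K = I + J
            key = tuple(sorted(K))
            out[key] = out.get(key, 0) + sign(K) * f * g
    return out

def d(a):
    # Exterior derivative: d(f dx_I) = sum_i D_i f  dx_i ^ dx_I.
    out = {}
    for I, f in a.items():
        for i in range(n):
            if i in I:
                continue
            K = (i,) + I
            key = tuple(sorted(K))
            out[key] = out.get(key, 0) + sign(K) * sp.diff(f, x[i])
    return out

def equal(a, b):
    keys = set(a) | set(b)
    return all(sp.simplify(a.get(k, 0) - b.get(k, 0)) == 0 for k in keys)

# omega = x2 x3 dx1 (degree r = 1), psi = x1 x3 dx2.
omega = {(0,): x[1] * x[2]}
psi   = {(1,): x[0] * x[2]}
r = 1

lhs = d(wedge(omega, psi))
rhs = wedge(d(omega), psi)
for k, v in wedge(omega, d(psi)).items():
    rhs[k] = rhs.get(k, 0) + (-1) ** r * v
print(equal(lhs, rhs))  # True
```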
|
Yes
|
Proposition 3.5. Let \( \omega \) be a form of class \( {C}^{2} \) . Then \( {dd\omega } = 0 \) .
|
Proof. If \( f \) is a function, then

\[ {df}\left( x\right) = \mathop{\sum }\limits_{{j = 1}}^{n}\frac{\partial f}{\partial {x}_{j}}d{x}_{j} \]

and

\[ {ddf}\left( x\right) = \mathop{\sum }\limits_{{j = 1}}^{n}\mathop{\sum }\limits_{{k = 1}}^{n}\frac{{\partial }^{2}f}{\partial {x}_{k}\partial {x}_{j}}d{x}_{k} \land d{x}_{j}. \]

Using the fact that the partials commute, and the fact that for any two positive integers \( r, s \) we have \( d{x}_{r} \land d{x}_{s} = - d{x}_{s} \land d{x}_{r} \), we see that the preceding double sum is equal to 0. A similar argument shows that the theorem is true for 1-forms, of type \( g\left( x\right) d{x}_{i} \) where \( g \) is a function, and thus for all 1-forms by linearity. We proceed by induction. It suffices to prove the formula in general for decomposable forms. Let \( \omega \) be decomposable of degree \( r \), and write

\[ \omega = \eta \land \psi \]

where \( \deg \psi = 1 \). Using the formula for the derivative of an alternating product twice, and the fact that \( {dd\psi } = 0 \) and \( {dd\eta } = 0 \) by induction, we see at once that \( {dd\omega } = 0 \), as was to be shown.
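The base case rests entirely on the symmetry of mixed partials: the coefficient of \( d{x}_{k} \land d{x}_{j} \) (for \( k < j \)) in \( {ddf} \) is the antisymmetrized second derivative, which vanishes. A symbolic spot-check on an arbitrary function (an aside, not from the text):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = sp.exp(x1) * sp.sin(x2 * x3) + x1**2 * x2   # an arbitrary test function

# Coefficient of dx_k ^ dx_j (k < j) in ddf is d2f/dxk dxj - d2f/dxj dxk.
coeffs = [sp.diff(f, xk, xj) - sp.diff(f, xj, xk)
          for xk, xj in [(x1, x2), (x1, x3), (x2, x3)]]
print([sp.simplify(c) for c in coeffs])  # [0, 0, 0]
```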
|
Yes
|
Property 2. If \( \omega \) is a differential form on \( Y \), then

\[ d{f}^{ * }\left( \omega \right) = {f}^{ * }\left( {d\omega }\right) . \]
|
We shall give the proof of Property 2 in the finite dimensional case and leave the general case to the reader.

For a form of degree 1, say

\[ \omega \left( y\right) = g\left( y\right) d{y}_{1} \]

with \( {y}_{1} = {f}_{1}\left( x\right) \), we find

\[ \left( {{f}^{ * }{d\omega }}\right) \left( x\right) = \left( {{g}^{\prime }\left( {f\left( x\right) }\right) \circ {f}^{\prime }\left( x\right) }\right) \land d{f}_{1}\left( x\right) . \]

Using the fact that \( {dd}{f}_{1} = 0 \), together with Proposition 3.4 we get

\[ \left( {d{f}^{ * }\omega }\right) \left( x\right) = \left( {d\left( {g \circ f}\right) }\right) \left( x\right) \land d{f}_{1}\left( x\right) , \]

which is equal to the preceding expression. Any 1-form can be expressed as a linear combination of forms \( {g}_{i}d{y}_{i} \), so that our assertion is proved for forms of degree 1.

The general formula can now be proved by induction. Using the linearity of \( {f}^{ * } \), we may assume that \( \omega \) is expressed as \( \omega = \psi \land \eta \) where \( \psi ,\eta \) have lower degree. We apply Proposition 3.3 and Property 1 to

\[ {f}^{ * }{d\omega } = {f}^{ * }\left( {{d\psi } \land \eta }\right) + {\left( -1\right) }^{r}{f}^{ * }\left( {\psi \land {d\eta }}\right) \]

and we see at once that this is equal to \( d{f}^{ * }\omega \), because by induction, \( {f}^{ * }{d\psi } = d{f}^{ * }\psi \) and \( {f}^{ * }{d\eta } = d{f}^{ * }\eta \). This proves Property 2.
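The degree-1 case can be verified in coordinates (an illustrative aside; the map \( f \) and coefficient \( g \) below are arbitrary choices on \( \mathbf{R}^2 \)):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
y1, y2 = sp.symbols('y1 y2')

# An arbitrary map f : R^2 -> R^2 and the 1-form omega = g dy1 on the target.
f1, f2 = x1**2 * x2, x1 + sp.sin(x2)
g = y1 * y2**2
sub = {y1: f1, y2: f2}

# f* omega = (g o f) df1, written in components against dx1, dx2.
pull = [g.subs(sub) * sp.diff(f1, v) for v in (x1, x2)]

# d of the pulled-back 1-form: coefficient of dx1 ^ dx2.
d_pull = sp.diff(pull[1], x1) - sp.diff(pull[0], x2)

# d omega = dg ^ dy1 = -(dg/dy2) dy1 ^ dy2; pulling back the 2-form gives
# f*(h dy1 ^ dy2) = (h o f) det(Jf) dx1 ^ dx2.
h = -sp.diff(g, y2)
Jdet = sp.diff(f1, x1) * sp.diff(f2, x2) - sp.diff(f1, x2) * sp.diff(f2, x1)
pull_d = h.subs(sub) * Jdet

print(sp.simplify(d_pull - pull_d))  # 0, i.e. d f*omega = f* d omega
```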
|
No
|
Property 2. If \( \omega \) is a differential form on \( Y \), then

\[ d{f}^{ * }\left( \omega \right) = {f}^{ * }\left( {d\omega }\right) . \]
|
The verifications are all easy, and even trivial, except possibly for Property 2. We shall give the proof of Property 2 in the finite dimensional case and leave the general case to the reader.

For a form of degree 1, say

\[ \omega \left( y\right) = g\left( y\right) d{y}_{1} \]

with \( {y}_{1} = {f}_{1}\left( x\right) \), we find

\[ \left( {{f}^{ * }{d\omega }}\right) \left( x\right) = \left( {{g}^{\prime }\left( {f\left( x\right) }\right) \circ {f}^{\prime }\left( x\right) }\right) \land d{f}_{1}\left( x\right) . \]

Using the fact that \( {dd}{f}_{1} = 0 \), together with Proposition 3.4 we get

\[ \left( {d{f}^{ * }\omega }\right) \left( x\right) = \left( {d\left( {g \circ f}\right) }\right) \left( x\right) \land d{f}_{1}\left( x\right) , \]

which is equal to the preceding expression. Any 1-form can be expressed as a linear combination of forms \( {g}_{i}d{y}_{i} \), so that our assertion is proved for forms of degree 1.

The general formula can now be proved by induction. Using the linearity of \( {f}^{ * } \), we may assume that \( \omega \) is expressed as \( \omega = \psi \land \eta \) where \( \psi ,\eta \) have lower degree. We apply Proposition 3.3 and Property 1 to

\[ {f}^{ * }{d\omega } = {f}^{ * }\left( {{d\psi } \land \eta }\right) + {\left( -1\right) }^{r}{f}^{ * }\left( {\psi \land {d\eta }}\right) \]

and we see at once that this is equal to \( d{f}^{ * }\omega \), because by induction, \( {f}^{ * }{d\psi } = d{f}^{ * }\psi \) and \( {f}^{ * }{d\eta } = d{f}^{ * }\eta \). This proves Property 2.
|
No
|
Property 4. If \( f : X \rightarrow Y \) is a morphism, and \( g \) is a function on \( Y \), then

\[ d\left( {g \circ f}\right) = {f}^{ * }\left( {dg}\right) \]

and at a point \( x \in X \), the value of this 1-form is given by

\[ {T}_{f\left( x\right) }g \circ {T}_{x}f = {\left( dg\right) }_{f\left( x\right) } \circ {T}_{x}f. \]
|
The verifications are all easy, and even trivial, except possibly for Property 2. We shall give the proof of Property 2 in the finite dimensional case and leave the general case to the reader.

For a form of degree 1, say

\[ \omega \left( y\right) = g\left( y\right) d{y}_{1} \]

with \( {y}_{1} = {f}_{1}\left( x\right) \), we find

\[ \left( {{f}^{ * }{d\omega }}\right) \left( x\right) = \left( {{g}^{\prime }\left( {f\left( x\right) }\right) \circ {f}^{\prime }\left( x\right) }\right) \land d{f}_{1}\left( x\right) . \]

Using the fact that \( {dd}{f}_{1} = 0 \), together with Proposition 3.4 we get

\[ \left( {d{f}^{ * }\omega }\right) \left( x\right) = \left( {d\left( {g \circ f}\right) }\right) \left( x\right) \land d{f}_{1}\left( x\right) , \]

which is equal to the preceding expression. Any 1-form can be expressed as a linear combination of forms \( {g}_{i}d{y}_{i} \), so that our assertion is proved for forms of degree 1.

The general formula can now be proved by induction. Using the linearity of \( {f}^{ * } \), we may assume that \( \omega \) is expressed as \( \omega = \psi \land \eta \) where \( \psi ,\eta \) have lower degree. We apply Proposition 3.3 and Property 1 to

\[ {f}^{ * }{d\omega } = {f}^{ * }\left( {{d\psi } \land \eta }\right) + {\left( -1\right) }^{r}{f}^{ * }\left( {\psi \land {d\eta }}\right) \]

and we see at once that this is equal to \( d{f}^{ * }\omega \), because by induction, \( {f}^{ * }{d\psi } = d{f}^{ * }\psi \) and \( {f}^{ * }{d\eta } = d{f}^{ * }\eta \). This proves Property 2.
|
No
|
If \( \omega \left( y\right) = g\left( y\right) d{y}_{{j}_{1}} \land \cdots \land d{y}_{{j}_{s}} \) is a differential form on \( V \), then \( {f}^{ * }\omega = \left( {g \circ f}\right) d{f}_{{j}_{1}} \land \cdots \land d{f}_{{j}_{s}}. \)
|
Indeed, we have for \( x \in U \) : \( \left( {{f}^{ * }\omega }\right) \left( x\right) = g\left( {f\left( x\right) }\right) \left( {{\mu }_{{j}_{1}} \circ {f}^{\prime }\left( x\right) }\right) \land \cdots \land \left( {{\mu }_{{j}_{s}} \circ {f}^{\prime }\left( x\right) }\right) \) and \( {f}_{j}^{\prime }\left( x\right) = {\left( {\mu }_{j} \circ f\right) }^{\prime }\left( x\right) = {\mu }_{j} \circ {f}^{\prime }\left( x\right) = d{f}_{j}\left( x\right) . \)
|
Yes
|
Example 3. Let \( U, V \) be both open sets in \( n \) -space, and let \( f : U \rightarrow V \) be a \( {C}^{p} \) map. If\n\n\[ \n\omega \left( y\right) = g\left( y\right) d{y}_{1} \land \cdots \land d{y}_{n} \n\]\n\nwhere \( {y}_{j} = {f}_{j}\left( x\right) \) is the \( j \) -th coordinate of \( y \), then\n\n\[ \nd{y}_{j} = {D}_{1}{f}_{j}\left( x\right) d{x}_{1} + \cdots + {D}_{n}{f}_{j}\left( x\right) d{x}_{n} \n\]\n\n\[ \n= \frac{\partial {y}_{j}}{\partial {x}_{1}}d{x}_{1} + \cdots + \frac{\partial {y}_{j}}{\partial {x}_{n}}d{x}_{n} \n\]
|
and consequently, expanding out the alternating product according to the usual multilinear and alternating rules, we find that\n\n\[ \n{f}^{ * }\omega \left( x\right) = g\left( {f\left( x\right) }\right) {\Delta }_{f}\left( x\right) d{x}_{1} \land \cdots \land d{x}_{n}, \n\]\n\nwhere \( {\Delta }_{f} \) is the determinant of the Jacobian matrix of \( f \) .
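Example 3 can be checked numerically in dimension 2. The sketch below (Python with numpy; the particular map \( f \), the function \( g \), and the sample vectors are hypothetical choices, not from the text) represents the 2-form at a point as an alternating bilinear map and verifies that the pullback is multiplication by the Jacobian determinant \( {\Delta }_{f} \).

```python
import numpy as np

# A C^1 map f : R^2 -> R^2 and a 2-form omega(y) = g(y) dy1 ^ dy2,
# represented pointwise as the alternating bilinear map (v, w) -> g(y) det[v w].
def f(x):
    return np.array([x[0]**2 - x[1], np.sin(x[0]) + x[1]**3])

def jacobian_f(x):
    return np.array([[2 * x[0], -1.0],
                     [np.cos(x[0]), 3 * x[1]**2]])

def g(y):
    return np.exp(-y[0]) + y[1]**2

def omega(y, v, w):
    return g(y) * np.linalg.det(np.column_stack([v, w]))

# Pullback: (f* omega)(x)(v, w) = omega(f(x))(Jf(x) v, Jf(x) w).
def pullback_omega(x, v, w):
    J = jacobian_f(x)
    return omega(f(x), J @ v, J @ w)

x = np.array([0.7, -0.3])
v = np.array([1.0, 2.0])
w = np.array([-1.5, 0.5])

# Example 3 predicts (f* omega)(x) = g(f(x)) Delta_f(x) dx1 ^ dx2.
predicted = g(f(x)) * np.linalg.det(jacobian_f(x)) * np.linalg.det(np.column_stack([v, w]))
assert np.isclose(pullback_omega(x, v, w), predicted)
```

The determinant identity \( \det \left( {{Jv},{Jw}}\right) = \det J \cdot \det \left( {v, w}\right) \) is what makes the top-degree pullback a single scalar factor.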
|
Yes
|
Proposition 5.1. Let \( \xi \) be a vector field and \( \omega \) a differential form of degree \( r \geqq 1 \) . The Lie derivative \( {\mathcal{L}}_{\xi } \) is a derivation, in the sense that\n\n\[ {\mathcal{L}}_{\xi }\left( {\omega \left( {{\xi }_{1},\ldots ,{\xi }_{r}}\right) }\right) = \left( {{\mathcal{L}}_{\xi }\omega }\right) \left( {{\xi }_{1},\ldots ,{\xi }_{r}}\right) + \mathop{\sum }\limits_{{i = 1}}^{r}\omega \left( {{\xi }_{1},\ldots ,{\mathcal{L}}_{\xi }{\xi }_{i},\ldots ,{\xi }_{r}}\right) \]\n\nwhere of course \( {\mathcal{L}}_{\xi }{\xi }_{i} = \left\lbrack {\xi ,{\xi }_{i}}\right\rbrack \) .
|
Proof. The proof is routine using the definitions. The first assertion is obvious by the definition of the pullback of a form. For the local expression we actually derive more, namely a local expression for \( {\alpha }_{t}^{ * }\omega \) and \( \frac{d}{dt}{\alpha }_{t}^{ * }\omega \), which are characterized by their values at \( \left( {{\xi }_{1},\ldots ,{\xi }_{r}}\right) \) . So we let\n\n(1)\n\n\[ F\left( t\right) = \left\langle {\left( {{\alpha }_{t}^{ * }\omega }\right) \left( x\right) ,{\xi }_{1}\left( x\right) \times \cdots \times {\xi }_{r}\left( x\right) }\right\rangle \]\n\n\[ = \left\langle {\omega \left( {\alpha \left( {t, x}\right) }\right) ,{D}_{2}\alpha \left( {t, x}\right) {\xi }_{1}\left( x\right) \times \cdots \times {D}_{2}\alpha \left( {t, x}\right) {\xi }_{r}\left( x\right) }\right\rangle . \]\n\nThen the Lie derivative \( \left( {{\mathcal{L}}_{\xi }\omega }\right) \left( x\right) \) is precisely \( {F}^{\prime }\left( 0\right) \), but we obtain also the local representation for \( \frac{d}{dt}{\alpha }_{t}^{ * }\omega \) :\n\n(2)\n\n\[ {F}^{\prime }\left( t\right) = \left\langle {\frac{d}{dt}{\alpha }_{t}^{ * }\omega \left( x\right) ,{\xi }_{1}\left( x\right) \times \cdots \times {\xi }_{r}\left( x\right) }\right\rangle \]\n\n(3)\n\n\[ = \left\langle {{\omega }^{\prime }\left( {\alpha \left( {t, x}\right) }\right) {D}_{1}\alpha \left( {t, x}\right) ,{D}_{2}\alpha \left( {t, x}\right) {\xi }_{1}\left( x\right) \times \cdots \times {D}_{2}\alpha \left( {t, x}\right) {\xi }_{r}\left( x\right) }\right\rangle \]\n\n\[ + \mathop{\sum }\limits_{{i = 1}}^{r}\left\langle {\omega \left( {\alpha \left( {t, x}\right) }\right) ,{D}_{2}\alpha \left( {t, x}\right) {\xi }_{1}\left( x\right) \times \cdots \times {D}_{1}{D}_{2}\alpha \left( {t, x}\right) {\xi }_{i}\left( x\right) \times \cdots \times {D}_{2}\alpha \left( {t, x}\right) {\xi }_{r}\left( x\right) }\right\rangle \]\n\nby the rule for the derivative of a product. Putting \( t = 0 \) and using the differential equation satisfied by \( {D}_{2}\alpha \left( {t, x}\right) \), we get precisely the local expression as stated in the proposition. Remember the initial condition \( {D}_{2}\alpha \left( {0, x}\right) = \mathrm{id} \) .
|
Yes
|
Proposition 5.2. Let \( {\xi }_{t} \) be a time-dependent vector field, \( \alpha \) its flow, and let \( \omega \) be a differential form. Then\n\n\[ \frac{d}{dt}\left( {{\alpha }_{t}^{ * }\omega }\right) = {\alpha }_{t}^{ * }\left( {{\mathcal{L}}_{{\xi }_{t}}\omega }\right) \;\text{ or }\;\frac{d}{dt}\left( {{\alpha }_{t}^{ * }\omega }\right) = {\alpha }_{t}^{ * }\left( {{\mathcal{L}}_{\xi }\omega }\right) \]\n\nfor a time-independent vector field.
|
Proof. Proposition 5.1 gives us a local expression for \( \left( {{\mathcal{L}}_{{\xi }_{t}}\omega }\right) \left( y\right) \), replacing \( x \) by \( y \) because we shall now put \( y = \alpha \left( {t, x}\right) \) . On the other hand, from (1) in the proof of Proposition 5.1, we obtain\n\n\[ {\alpha }_{t}^{ * }\left( {{\mathcal{L}}_{{\xi }_{t}}\omega }\right) \left( x\right) = \left\langle {\left( {{\mathcal{L}}_{{\xi }_{t}}\omega }\right) \left( y\right) ,{D}_{2}\alpha \left( {t, x}\right) {\xi }_{1}\left( x\right) \times \cdots \times {D}_{2}\alpha \left( {t, x}\right) {\xi }_{r}\left( x\right) }\right\rangle . \]\n\nSubstituting the local expression for \( \left( {{\mathcal{L}}_{{\xi }_{t}}\omega }\right) \left( y\right) \), we get expression (3) from the proof of Proposition 5.1, thereby proving Proposition 5.2.
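For a linear vector field \( \xi \left( x\right) = {Ax} \) the flow is \( \alpha \left( {t, x}\right) = {e}^{tA}x \), and both sides of Proposition 5.2 can be computed in coordinates. The sketch below (numpy; the matrices, the 1-form \( \omega \left( y\right) \cdot v = \langle {My}, v\rangle \), and the truncated-series exponential are illustrative assumptions, not from the text) compares a finite-difference derivative of \( t \mapsto {\alpha }_{t}^{ * }\omega \) with \( {\alpha }_{t}^{ * }\left( {{\mathcal{L}}_{\xi }\omega }\right) \).

```python
import numpy as np

def expm(M, terms=40):
    # Truncated exponential series; adequate for the small matrices used here.
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[0.2, -0.5], [0.3, 0.1]])   # xi(x) = A x, flow alpha_t(x) = e^{tA} x
M = np.array([[1.0, 2.0], [-1.0, 0.5]])   # omega(y) . v = <M y, v>

x = np.array([0.4, -0.7])
v = np.array([1.0, 0.5])

def pullback(t):
    # (alpha_t^* omega)(x) . v = <M e^{tA} x, e^{tA} v>
    E = expm(t * A)
    return (M @ E @ x) @ (E @ v)

def lie_derivative_pullback(t):
    # alpha_t^* (L_xi omega)(x) . v, with (L_xi omega)(y) . v = <MAy, v> + <My, Av>
    E = expm(t * A)
    y, u = E @ x, E @ v
    return (M @ A @ y) @ u + (M @ y) @ (A @ u)

t, h = 0.6, 1e-6
numeric = (pullback(t + h) - pullback(t - h)) / (2 * h)
assert np.isclose(numeric, lie_derivative_pullback(t), atol=1e-5)
```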
|
No
|
Proposition 6.2. Let \( \omega \) be such that \( {d\omega } = 0 \) . Let \( \alpha \) be the flow of \( {\xi }_{\omega } \) . Then \( {\alpha }_{t}^{ * }\Omega = \Omega \) for all \( t \) (in the domain of the flow).
|
Proof. By Proposition 5.2,\n\n\[ \frac{d}{dt}{\alpha }_{t}^{ * }\Omega = {\alpha }_{t}^{ * }{\mathcal{L}}_{{\xi }_{\omega }}\Omega = 0\;\text{ by }\Omega \mathbf{2}. \]\n\nHence \( {\alpha }_{t}^{ * }\Omega \) is constant, equal to \( {\alpha }_{0}^{ * }\Omega = \Omega \), as was to be shown.
|
Yes
|
Proposition 6.3. If \( {\xi }_{df} \cdot h = 0 \) then \( {\xi }_{dh} \cdot f = 0 \) .
|
This is immediate from the antisymmetry of the Poisson bracket. It is interpreted as conservation of momentum in the physical theory of Hamiltonian mechanics, when one deals with the canonical 2-form on the cotangent bundle, to be defined in the next section.
|
No
|
Proposition 7.1. This map defines a 1-form on \( {T}^{ \vee }\left( X\right) \) . Let \( X = U \) be open in \( \mathbf{E} \) and\n\n\[ {T}^{ \vee }\left( U\right) = U \times {\mathbf{E}}^{ \vee },\;T\left( {{T}^{ \vee }\left( U\right) }\right) = \left( {U \times {\mathbf{E}}^{ \vee }}\right) \times \left( {\mathbf{E} \times {\mathbf{E}}^{ \vee }}\right) . \]\n\nIf \( \left( {x,\lambda }\right) \in U \times {\mathbf{E}}^{ \vee } \) and \( \left( {u,\omega }\right) \in \mathbf{E} \times {\mathbf{E}}^{ \vee } \), then the local representation \( {\theta }_{\left( x,\lambda \right) } \) is given by\n\n\[ \left\langle {{\theta }_{\left( x,\lambda \right) },\left( {u,\omega }\right) }\right\rangle = \lambda \left( u\right) . \]
|
Proof. We observe that the projection \( \pi : U \times {\mathbf{E}}^{ \vee } \rightarrow U \) is linear, and hence that its derivative at each point is constant, equal to the projection on the first factor. Our formula is then an immediate consequence of the definition. The local formula shows that \( \theta \) is in fact a 1 -form locally, and therefore globally since it has an invariant description.\n\nOur 1-form is called the canonical 1-form on the cotangent bundle. We define the canonical 2-form \( \Omega \) on the cotangent bundle \( {T}^{ \vee }X \) to be\n\n\[ \n\Omega = - {d\theta }\n\]
|
Yes
|
Proposition 7.2. Let \( U \) be open in \( \mathbf{E} \), and let \( \Omega \) be the local representation of the canonical 2-form on \( {T}^{ \vee }U = U \times {\mathbf{E}}^{ \vee } \) . Let \( \left( {x,\lambda }\right) \in U \times {\mathbf{E}}^{ \vee } \) . Let \( \left( {{u}_{1},{\omega }_{1}}\right) \) and \( \left( {{u}_{2},{\omega }_{2}}\right) \) be elements of \( \mathbf{E} \times {\mathbf{E}}^{ \vee } \) . Then\n\n\[ \left\langle {{\Omega }_{\left( x,\lambda \right) },\left( {{u}_{1},{\omega }_{1}}\right) \times \left( {{u}_{2},{\omega }_{2}}\right) }\right\rangle = \left\langle {{u}_{1},{\omega }_{2}}\right\rangle - \left\langle {{u}_{2},{\omega }_{1}}\right\rangle \]\n\n\[ = {\omega }_{2}\left( {u}_{1}\right) - {\omega }_{1}\left( {u}_{2}\right) \]
|
Proof. We observe that \( \theta \) is linear, and thus that \( {\theta }^{\prime } \) is constant. We then apply the local formula for the exterior derivative, given in Proposition 3.2. Our assertion becomes obvious.
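The local formula for \( {d\theta } \) can be tested numerically. With \( \mathbf{E} = {\mathbf{R}}^{n} \) and \( \left\langle {{\theta }_{\left( x,\lambda \right) },\left( {u,\omega }\right) }\right\rangle = \lambda \cdot u \), finite differences reproduce the stated value of \( \Omega = - {d\theta } \). (Python/numpy sketch; the sample point and tangent vectors are arbitrary choices.)

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)

def theta(z, xi):
    # Canonical 1-form on U x E^v with E = R^n: <theta_(x,lam), (u,omega)> = lam . u
    lam = z[n:]
    u = xi[:n]
    return lam @ u

def d_theta(z, xi1, xi2, h=1e-6):
    # Local formula: d theta(z)(xi1, xi2) = (theta'(z) xi1) . xi2 - (theta'(z) xi2) . xi1
    d1 = (theta(z + h * xi1, xi2) - theta(z - h * xi1, xi2)) / (2 * h)
    d2 = (theta(z + h * xi2, xi1) - theta(z - h * xi2, xi1)) / (2 * h)
    return d1 - d2

z = rng.normal(size=2 * n)       # a point (x, lam)
xi1 = rng.normal(size=2 * n)     # (u1, omega1)
xi2 = rng.normal(size=2 * n)     # (u2, omega2)

Omega = -d_theta(z, xi1, xi2)
u1, w1 = xi1[:n], xi1[n:]
u2, w2 = xi2[:n], xi2[n:]

# Proposition 7.2: Omega((u1,w1),(u2,w2)) = w2(u1) - w1(u2).
assert np.isclose(Omega, w2 @ u1 - w1 @ u2, atol=1e-6)
```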
|
No
|
Theorem 8.1 (Darboux Theorem). Let \( \mathbf{E} \) be a self-dual Banach space. Let\n\n\[ \omega : U \rightarrow {L}_{a}^{2}\left( \mathbf{E}\right) \]\n\nbe a non-singular closed 2 -form on an open set of \( \mathbf{E} \), and let \( {x}_{0} \in U \) . Then \( \omega \) is locally isomorphic at \( {x}_{0} \) to the constant form \( \omega \left( {x}_{0}\right) \) .
|
Proof. Let \( {\omega }_{0} = \omega \left( {x}_{0}\right) \), and let\n\n\[ {\omega }_{t} = {\omega }_{0} + t\left( {\omega - {\omega }_{0}}\right) ,\;0 \leqq t \leqq 1. \]\n\nWe wish to find a time-dependent vector field \( {\xi }_{t} \) locally at \( {x}_{0} \) such that if \( \alpha \) denotes its flow, then\n\n\[ {\alpha }_{t}^{ * }{\omega }_{t} = {\omega }_{0}. \]\n\nThen the local isomorphism \( {\alpha }_{1} \) satisfies the requirements of the theorem. By the Poincaré lemma, there exists a 1-form \( \theta \) locally at \( {x}_{0} \) such that\n\n\[ \omega - {\omega }_{0} = {d\theta } \]\n\nand without loss of generality, we may assume that \( \theta \left( {x}_{0}\right) = 0 \) . We contend that the time-dependent vector field \( {\xi }_{t} \), such that\n\n\[ {\omega }_{t} \circ {\xi }_{t} = - \theta \]\n\nhas the desired property. Let \( \alpha \) be its flow. If we shrink the domain of the vector field near \( {x}_{0} \) sufficiently, and use the fact that \( \theta \left( {x}_{0}\right) = 0 \), then we can use the local existence theorem (Proposition 1.1 of Chapter IV) to see that the flow can be integrated at least to \( t = 1 \) for all points \( x \) in this small domain. We shall now verify that\n\n\[ \frac{d}{dt}\left( {{\alpha }_{t}^{ * }{\omega }_{t}}\right) = 0. \]\n\nThis will prove that \( {\alpha }_{t}^{ * }{\omega }_{t} \) is constant. Since we have \( {\alpha }_{0}^{ * }{\omega }_{0} = {\omega }_{0} \) because\n\n\[ \alpha \left( {0, x}\right) = x\;\text{ and }\;{D}_{2}\alpha \left( {0, x}\right) = \mathrm{{id}}, \]\n\nit will conclude the proof of the theorem.\n\nWe compute locally. We use the local formula of Proposition 5.2, and formula LIE 1, which reduces to\n\n\[ {\mathcal{L}}_{{\xi }_{t}}{\omega }_{t} = d\left( {{\omega }_{t} \circ {\xi }_{t}}\right) \]\n\nbecause \( d{\omega }_{t} = 0 \) .
We find\n\n\[ \frac{d}{dt}\left( {{\alpha }_{t}^{ * }{\omega }_{t}}\right) = {\alpha }_{t}^{ * }\left( {\frac{d}{dt}{\omega }_{t}}\right) + {\alpha }_{t}^{ * }\left( {{\mathcal{L}}_{{\xi }_{t}}{\omega }_{t}}\right) \]\n\n\[ = {\alpha }_{t}^{ * }\left( {\frac{d}{dt}{\omega }_{t} + d\left( {{\omega }_{t} \circ {\xi }_{t}}\right) }\right) \]\n\n\[ = {\alpha }_{t}^{ * }\left( {\omega - {\omega }_{0} - {d\theta }}\right) \]\n\n\[ = 0\text{.} \]\n\nThis proves Darboux's theorem.
|
Yes
|
Theorem 1.2. Let \( U, V \) be open subsets of Banach spaces \( \mathbf{E},\mathbf{F} \) respectively. Let\n\n\[ f : U \times V \rightarrow L\left( {\mathbf{E},\mathbf{F}}\right) \]\n\nbe a \( {C}^{r} \) -morphism \( \left( {r \geqq 1}\right) \). Assume that if\n\n\[ {\xi }_{1},{\eta }_{1} : U \times V \rightarrow \mathbf{E} \]\n\nare two morphisms, and if we let\n\n\[ \xi = \left( {{\xi }_{1}, f \cdot {\xi }_{1}}\right) \;\text{ and }\;\eta = \left( {{\eta }_{1}, f \cdot {\eta }_{1}}\right) \]\n\nthen relation (2) above is satisfied. Let \( \left( {{x}_{0},{y}_{0}}\right) \) be a point of \( U \times V \). Then there exist open neighborhoods \( {U}_{0},{V}_{0} \) of \( {x}_{0},{y}_{0} \) respectively, contained in \( U, V \), and a unique morphism \( \alpha : {U}_{0} \times {V}_{0} \rightarrow V \) such that\n\n\[ {D}_{1}\alpha \left( {x, y}\right) = f\left( {x,\alpha \left( {x, y}\right) }\right) \]\n\nand \( \alpha \left( {{x}_{0}, y}\right) = y \) for all \( \left( {x, y}\right) \) in \( {U}_{0} \times {V}_{0} \).
|
We shall prove Theorem 1.2 in \( §3 \) . We now indicate how Theorem 1.1 follows from it. We denote by \( {\alpha }_{y} \) the map \( {\alpha }_{y}\left( x\right) = \alpha \left( {x, y}\right) \), viewed as a map of \( {U}_{0} \) into \( V \). Then our differential equation can be written\n\n\[ D{\alpha }_{y}\left( x\right) = f\left( {x,{\alpha }_{y}\left( x\right) }\right) . \]\n\nWe let\n\n\[ \varphi : {U}_{0} \times {V}_{0} \rightarrow U \times V \]\n\nbe the map \( \varphi \left( {x, y}\right) = \left( {x,{\alpha }_{y}\left( x\right) }\right) \). It is obvious that \( {D\varphi }\left( {{x}_{0},{y}_{0}}\right) \) is a toplinear isomorphism, so that \( \varphi \) is a local isomorphism at \( \left( {{x}_{0},{y}_{0}}\right) \). Furthermore, for \( \left( {u, v}\right) \in \mathbf{E} \times \mathbf{F} \) we have\n\n\[ {D}_{1}\varphi \left( {x, y}\right) \cdot \left( {u, v}\right) = \left( {u, D{\alpha }_{y}\left( x\right) \cdot u}\right) = \left( {u, f\left( {x,{\alpha }_{y}\left( x\right) }\right) \cdot u}\right) \]\n\nwhich shows that our subbundle is integrable.
|
No
|
Proposition 2.1. Let \( U, V \) be open sets in Banach spaces \( \mathbf{E},\mathbf{F} \) respectively. Let \( J \) be an open interval of \( \mathbf{R} \) containing 0, and let\n\n\[ g : J \times U \times V \rightarrow \mathbf{F} \]\n\nbe a morphism of class \( {C}^{r}\left( {r \geqq 1}\right) \). Let \( \left( {{x}_{0},{y}_{0}}\right) \) be a point in \( U \times V \). Then there exist open balls \( {J}_{0},{U}_{0},{V}_{0} \) centered at \( 0,{x}_{0},{y}_{0} \) and contained in \( J, U, V \) respectively, and a unique morphism of class \( {C}^{r} \)\n\n\[ \beta : {J}_{0} \times {U}_{0} \times {V}_{0} \rightarrow V \]\n\nsuch that \( \beta \left( {0, x, y}\right) = y \) and\n\n\[ {D}_{1}\beta \left( {t, x, y}\right) = g\left( {t, x,\beta \left( {t, x, y}\right) }\right) \]\n\nfor all \( \left( {t, x, y}\right) \in {J}_{0} \times {U}_{0} \times {V}_{0} \).
|
Proof. This follows from the existence and uniqueness of local flows, by considering the ordinary vector field on \( U \times V \)\n\n\[ G : J \times U \times V \rightarrow \mathbf{E} \times \mathbf{F} \]\n\ngiven by \( G\left( {t, x, y}\right) = \left( {0, g\left( {t, x, y}\right) }\right) \). If \( B\left( {t, x, y}\right) \) is the local flow for \( G \), then we let \( \beta \left( {t, x, y}\right) \) be the projection on the second factor of \( B\left( {t, x, y}\right) \). The reader will verify at once that \( \beta \) satisfies the desired conditions. The uniqueness is clear.
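Proposition 2.1 is a statement about ordinary differential equations depending on a parameter, and a concrete instance can be integrated numerically. The sketch below (numpy; the right-hand side \( g\left( {t, x, y}\right) = {xy} \) and the RK4 integrator are illustrative assumptions, not from the text) checks \( \beta \left( {0, x, y}\right) = y \) and compares with the explicit solution \( \beta \left( {t, x, y}\right) = y{e}^{xt} \).

```python
import numpy as np

def g(t, x, y):
    # Hypothetical right-hand side; x is the parameter, y the state.
    return x * y

def beta(t, x, y, steps=2000):
    # Integrate D_1 beta = g(t, x, beta), beta(0, x, y) = y, by classical RK4.
    h = t / steps
    s, b = 0.0, y
    for _ in range(steps):
        k1 = g(s, x, b)
        k2 = g(s + h / 2, x, b + h / 2 * k1)
        k3 = g(s + h / 2, x, b + h / 2 * k2)
        k4 = g(s + h, x, b + h * k3)
        b += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        s += h
    return b

# Initial condition: beta(0, x, y) = y.
assert np.isclose(beta(0.0, 0.5, 2.0, steps=1), 2.0)

# For g(t, x, y) = x*y the flow is explicit: beta(t, x, y) = y * exp(x*t).
assert np.isclose(beta(1.2, 0.5, 2.0), 2.0 * np.exp(0.5 * 1.2), atol=1e-8)
```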
|
Yes
|
Proposition 2.2. Let notation be as in Proposition 2.1, and with \( y \) fixed, let \( \beta \left( {t, x}\right) = \beta \left( {t, x, y}\right) \) . Then \( {D}_{2}\beta \left( {t, x}\right) \) satisfies the differential equation\n\n\[ \n{D}_{1}{D}_{2}\beta \left( {t, x}\right) \cdot v = {D}_{2}g\left( {t, x,\beta \left( {t, x}\right) }\right) \cdot v + {D}_{3}g\left( {t, x,\beta \left( {t, x}\right) }\right) \cdot {D}_{2}\beta \left( {t, x}\right) \cdot v, \n\]\n\nfor every \( v \in \mathbf{E} \) .
|
Proof. Here again, we consider the vector field as in the proof of Proposition 2.1, and apply the formula for the differential equation satisfied by \( {D}_{2}\beta \) as in Chapter IV,§1.
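The variational equation of Proposition 2.2 can be integrated alongside the flow and compared against a finite-difference derivative in the parameter. (Python/numpy sketch; the scalar right-hand side and the step counts are hypothetical choices.)

```python
import numpy as np

def g(t, x, y):
    # Hypothetical right-hand side, scalar parameter x and state y.
    return x * y + np.sin(t)

def D2g(t, x, y):   # partial derivative with respect to the parameter x
    return y

def D3g(t, x, y):   # partial derivative with respect to the state y
    return x

def flow_and_sensitivity(t, x, y, steps=4000):
    # Integrate the state b together with m = D_2 beta, using the
    # variational equation D_1 m = D_2 g + D_3 g * m, with m(0) = 0.
    h = t / steps

    def rhs(s, state):
        b, m = state
        return np.array([g(s, x, b), D2g(s, x, b) + D3g(s, x, b) * m])

    s, state = 0.0, np.array([y, 0.0])
    for _ in range(steps):
        k1 = rhs(s, state)
        k2 = rhs(s + h / 2, state + h / 2 * k1)
        k3 = rhs(s + h / 2, state + h / 2 * k2)
        k4 = rhs(s + h, state + h * k3)
        state = state + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        s += h
    return state  # (beta, D_2 beta)

t, x, y, dx = 1.0, 0.3, 1.5, 1e-5
_, m = flow_and_sensitivity(t, x, y)
b_plus, _ = flow_and_sensitivity(t, x + dx, y)
b_minus, _ = flow_and_sensitivity(t, x - dx, y)

# The integrated sensitivity matches the finite-difference derivative in x.
assert np.isclose(m, (b_plus - b_minus) / (2 * dx), atol=1e-6)
```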
|
No
|
Theorem 4.1. Let \( Y, Z \) be integral submanifolds of \( X \) for the subbundle \( F \) of \( {TX} \), passing through a point \( {x}_{0} \) . Then there exists an open neighborhood \( U \) of \( {x}_{0} \) in \( X \), such that\n\n\[ Y \cap U = Z \cap U \]
|
Proof. Let \( U \) be an open neighborhood of \( {x}_{0} \) in \( X \) such that we have a chart\n\n\[ U \rightarrow V \times W \]\n\nwith\n\n\[ {x}_{0} \mapsto \left( {{y}_{0},{w}_{0}}\right) \]\n\nand \( Y \) corresponds to all points \( \left( {y,{w}_{0}}\right), y \in V \) . In other words, \( Y \) corresponds to a factor in the product in the chart. If \( V \) is open in \( {\mathbf{F}}_{1} \) and \( W \) open in \( {\mathbf{F}}_{2} \), with \( {\mathbf{F}}_{1} \times {\mathbf{F}}_{2} = \mathbf{E} \), then the subbundle \( F \) is represented by the projection on the first factor, that is by \( \left( {V \times W}\right) \times {\mathbf{F}}_{1} \) in the chart.\n\nShrinking \( Z \), we may assume that \( Z \subset U \) . Let \( h : Z \rightarrow V \times W \) be the restriction of the chart to \( Z \), and let \( h = \left( {{h}_{1},{h}_{2}}\right) \) be represented by its two components. By assumption, \( {h}^{\prime }\left( x\right) \) maps \( \mathbf{E} \) into \( {\mathbf{F}}_{1} \) for every \( x \in Z \) . Hence \( {h}_{2} \) is constant, so that \( h\left( Z\right) \) is contained in the factor \( V \times \left\{ {w}_{0}\right\} \) . It follows at once that \( h\left( Z\right) = {V}_{1} \times \left\{ {w}_{0}\right\} \) for some open \( {V}_{1} \) in \( V \), and we can shrink \( U \) to a product \( {V}_{1} \times {W}_{1} \) (where \( {W}_{1} \) is a small open set in \( W \) containing \( {w}_{0} \)) to conclude the proof.
|
Yes
|
Theorem 4.2. Let \( F \) be an integrable tangent subbundle over \( X \) . If\n\n\[ f : Y \rightarrow X \]\n\nis a morphism such that \( {Tf} : {TY} \rightarrow {TX} \) maps \( {TY} \) into \( F \), then the\n\ninduced map\n\[ {f}_{F} : Y \rightarrow {X}_{F} \]\n\n(same values as \( f \) but viewed as a map into the new manifold \( {X}_{F} \) ) is also a morphism. Furthermore, if \( f \) is an injective immersion, then \( {f}_{F} \) induces an isomorphism of \( Y \) onto an open subset of \( {X}_{F} \) .
|
Proof. Using the local product structure as in the proof of the local uniqueness Theorem 4.1, we see at once that \( {f}_{F} \) is a morphism. In other words, locally, \( f \) maps a neighborhood of each point of \( Y \) into a submanifold of \( X \) which is tangent to \( F \) . If in addition \( f \) is an injective immersion, then from the definition of the charts on \( {X}_{F} \), we see that \( {f}_{F} \) maps \( Y \) bijectively onto an open subset of \( {X}_{F} \), and is a local isomorphism at each point. Hence \( {f}_{F} \) induces an isomorphism of \( Y \) with an open subset of \( {X}_{F} \), as was to be shown.
|
Yes
|
Let \( {X}_{F}\left( {x}_{0}\right) \) be the connected component of \( {X}_{F} \) containing a point \( {x}_{0} \) . If \( f : Y \rightarrow X \) is an integral manifold for \( F \) passing through \( {x}_{0} \), and \( Y \) is connected, then there exists a unique morphism\n\n\[ h : Y \rightarrow {X}_{F}\left( {x}_{0}\right) \]\n\nmaking the following diagram commutative:\n\n\[ \begin{matrix} Y & \overset{h}{ \rightarrow } & {X}_{F}\left( {x}_{0}\right) \\  & f \searrow & \downarrow \\  &  & X \end{matrix} \]\n\nand \( h \) induces an isomorphism of \( Y \) onto an open subset of \( {X}_{F}\left( {x}_{0}\right) \) .
|
Proof. Clear from the preceding discussion.
|
No
|
Proposition 5.1. Let \( \xi ,\eta \) be left invariant vector fields on \( G \) . Then \( \left\lbrack {\xi ,\eta }\right\rbrack \) is also left invariant.
|
Proof. This follows from the general functorial formula\n\n\[ \n{\tau }_{ * }^{x}\left\lbrack {\xi ,\eta }\right\rbrack = \left\lbrack {{\tau }_{ * }^{x}\xi ,{\tau }_{ * }^{x}\eta }\right\rbrack = \left\lbrack {\xi ,\eta }\right\rbrack \n\]
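On a matrix group the computation is concrete: left-invariant fields have the form \( {\xi }_{X}\left( g\right) = {gX} \), and since \( g \mapsto {gX} \) is linear, the local bracket formula gives \( \left\lbrack {{\xi }_{X},{\xi }_{Y}}\right\rbrack \left( g\right) = g\left( {{XY} - {YX}}\right) \), again left invariant. (numpy sketch with \( G = {GL}\left( {3,\mathbf{R}}\right) \); the sample matrices are arbitrary choices.)

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(3, 3))
Y = rng.normal(size=(3, 3))
g = np.eye(3) + 0.1 * rng.normal(size=(3, 3))   # a point of GL(3, R) near I

# Left-invariant fields: xi_X(g) = g X.  The map g -> gX is linear, so its
# derivative at g in the direction H is H X; the local bracket formula
# [xi, eta](g) = eta'(g) xi(g) - xi'(g) eta(g) then gives:
bracket = (g @ X) @ Y - (g @ Y) @ X

# The result is xi_{[X,Y]}(g) = g (XY - YX): again a left-invariant field.
assert np.allclose(bracket, g @ (X @ Y - Y @ X))
```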
|
Yes
|
Theorem 5.2. Let \( G \) be a Lie group, \( \mathfrak{h} \) a Lie subalgebra of \( \mathfrak{l}\left( G\right) \), and let \( F \) be the corresponding left invariant subbundle of \( {TG} \). Then \( F \) is integrable.
|
Proof. I owe the proof to Alan Weinstein. It is based on the following lemma.\n\nLemma 5.3. Let \( X \) be a manifold, let \( \xi ,\eta \) be vector fields at a point \( {x}_{0} \), and let \( F \) be a subbundle of \( {TX} \). If \( \xi \left( {x}_{0}\right) = 0 \) and \( \xi \) is contained in \( F \), then \( \left\lbrack {\xi ,\eta }\right\rbrack \left( {x}_{0}\right) \in F \).\n\nProof. We can deal with the local representations, such that \( X = U \) is open in \( \mathbf{E} \), and \( F \) corresponds to a factor, that is\n\n\[ {TX} = U \times {\mathbf{F}}_{1} \times {\mathbf{F}}_{2}\;\text{ and }\;F = U \times {\mathbf{F}}_{1}. \]\n\nWe may also assume without loss of generality that \( {x}_{0} = 0 \). Then \( \xi \left( 0\right) = 0 \), and \( \xi : U \rightarrow {\mathbf{F}}_{1} \) may be viewed as a map into \( {\mathbf{F}}_{1} \). We may write\n\n\[ \xi \left( x\right) = A\left( x\right) x \]\n\nwith a morphism \( A : U \rightarrow L\left( {\mathbf{E},{\mathbf{F}}_{1}}\right) \). Indeed,\n\n\[ \xi \left( x\right) = {\int }_{0}^{1}{\xi }^{\prime }\left( {tx}\right) {dt} \cdot x \]\n\nand \( A\left( x\right) = {\operatorname{pr}}_{1} \circ {\int }_{0}^{1}{\xi }^{\prime }\left( {tx}\right) {dt} \), where \( {\operatorname{pr}}_{1} \) is the projection on \( {\mathbf{F}}_{1} \). Then\n\n\[ \left\lbrack {\xi ,\eta }\right\rbrack \left( x\right) = {\eta }^{\prime }\left( x\right) \xi \left( x\right) - {\xi }^{\prime }\left( x\right) \eta \left( x\right) \]\n\n\[ = {\eta }^{\prime }\left( x\right) A\left( x\right) x - \left( {{A}^{\prime }\left( x\right) \eta \left( x\right) }\right) x - A\left( x\right) \eta \left( x\right) , \]\n\nwhence\n\n\[ \left\lbrack {\xi ,\eta }\right\rbrack \left( 0\right) =  - A\left( 0\right) \eta \left( 0\right) . \]\n\nSince \( A\left( 0\right) \) maps \( \mathbf{E} \) into \( {\mathbf{F}}_{1} \), we have proved our lemma.\n\nBack to the proof of the theorem. 
Let \( \xi ,\eta \) be vector fields at a point \( {x}_{0} \) in \( G \), both contained in the invariant subbundle \( F \). There exist invariant vector fields \( {\xi }_{0} \) and \( {\eta }_{0} \) at \( {x}_{0} \) such that\n\n\[ \xi \left( {x}_{0}\right) = {\xi }_{0}\left( {x}_{0}\right) \;\text{ and }\;\eta \left( {x}_{0}\right) = {\eta }_{0}\left( {x}_{0}\right) . \]\n\nLet\n\n\[ {\xi }_{1} = \xi - {\xi }_{0}\;\text{ and }\;{\eta }_{1} = \eta - {\eta }_{0}. \]\n\nThen \( {\xi }_{1},{\eta }_{1} \) vanish at \( {x}_{0} \) and lie in \( F \). We get\n\n\[ \left\lbrack {\xi ,\eta }\right\rbrack = \mathop{\sum }\limits_{{i, j = 0}}^{1}\left\lbrack {{\xi }_{i},{\eta }_{j}}\right\rbrack . \]\n\nThe theorem now follows at once from the lemma.
|
Yes
|
Lemma 5.3. Let \( X \) be a manifold, let \( \xi ,\eta \) be vector fields at a point \( {x}_{0} \) , and let \( F \) be a subbundle of \( {TX} \) . If \( \xi \left( {x}_{0}\right) = 0 \) and \( \xi \) is contained in \( F \) , then \( \left\lbrack {\xi ,\eta }\right\rbrack \left( {x}_{0}\right) \in F \) .
|
Proof. We can deal with the local representations, such that \( X = U \) is open in \( \mathbf{E} \), and \( F \) corresponds to a factor, that is\n\n\[ {TX} = U \times {\mathbf{F}}_{1} \times {\mathbf{F}}_{2}\;\text{ and }\;F = U \times {\mathbf{F}}_{1}. \]\n\nWe may also assume without loss of generality that \( {x}_{0} = 0 \) . Then \( \xi \left( 0\right) = 0 \), and \( \xi : U \rightarrow {\mathbf{F}}_{1} \) may be viewed as a map into \( {\mathbf{F}}_{1} \) . We may write\n\n\[ \xi \left( x\right) = A\left( x\right) x \]\n\nwith a morphism \( A : U \rightarrow L\left( {\mathbf{E},{\mathbf{F}}_{1}}\right) \) . Indeed,\n\n\[ \xi \left( x\right) = {\int }_{0}^{1}{\xi }^{\prime }\left( {tx}\right) {dt} \cdot x \]\n\nand \( A\left( x\right) = {\operatorname{pr}}_{1} \circ {\int }_{0}^{1}{\xi }^{\prime }\left( {tx}\right) {dt} \), where \( {\operatorname{pr}}_{1} \) is the projection on \( {\mathbf{F}}_{1} \) . Then\n\n\[ \left\lbrack {\xi ,\eta }\right\rbrack \left( x\right) = {\eta }^{\prime }\left( x\right) \xi \left( x\right) - {\xi }^{\prime }\left( x\right) \eta \left( x\right) \]\n\n\[ = {\eta }^{\prime }\left( x\right) A\left( x\right) x - \left( {{A}^{\prime }\left( x\right) \eta \left( x\right) }\right) x - A\left( x\right) \eta \left( x\right) , \]\n\nwhence\n\n\[ \left\lbrack {\xi ,\eta }\right\rbrack \left( 0\right) =  - A\left( 0\right) \eta \left( 0\right) . \]\n\nSince \( A\left( 0\right) \) maps \( \mathbf{E} \) into \( {\mathbf{F}}_{1} \), we have proved our lemma.
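A finite-dimensional instance of the lemma can be checked with finite differences: take \( X = {\mathbf{R}}^{3} \), \( F \) the factor \( {\mathbf{F}}_{1} = {\mathbf{R}}^{2} \times \{ 0\} \), a field \( \xi \) vanishing at 0 with values in \( {\mathbf{F}}_{1} \), and an arbitrary \( \eta \). (Python/numpy sketch; the particular fields are hypothetical choices.)

```python
import numpy as np

# X = R^3, subbundle F = R^2 x {0} (the factor F1).
# xi vanishes at 0 and takes values in F1; eta is arbitrary.
def xi(x):
    return np.array([x[0] * x[2], np.sin(x[1]), 0.0])

def eta(x):
    return np.array([np.cos(x[0]), x[1] - x[2] + 2.0, x[0] + 1.0])

def jac(f, x, h=1e-6):
    # Finite-difference Jacobian f'(x).
    J = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3)
        e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J

x0 = np.zeros(3)
bracket_at_0 = jac(eta, x0) @ xi(x0) - jac(xi, x0) @ eta(x0)

# Lemma 5.3: the bracket at x0 lies in F1, i.e. its last component vanishes.
assert abs(bracket_at_0[2]) < 1e-6
```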
|
Yes
|
Theorem 5.4. Let \( G \) be a Lie group, let \( \mathfrak{h} \) be a Lie subalgebra of \( \mathfrak{l}\left( G\right) \), and let \( F \) be its associated invariant subbundle. Let\n\n\[ j : H \rightarrow G \]\n\nbe the maximal connected integral manifold of \( F \) passing through \( e \) . Then \( H \) is a subgroup of \( G \), and \( j : H \rightarrow G \) is a Lie subgroup of \( G \) . The association between \( \mathfrak{h} \) and \( j : H \rightarrow G \) establishes a bijection between Lie subalgebras of \( \mathfrak{l}\left( G\right) \) and Lie subgroups of \( G \) .
|
Proof. Let \( x \in H \) . The M-isomorphism \( {\tau }^{x} \) induces a VB-isomorphism of \( F \) onto itself, in other words, \( F \) is invariant under \( {\tau }_{ * }^{x} \) . Furthermore, since \( H \) passes through \( e \), and \( {xe} \) lies in \( H \), it follows that \( j : H \rightarrow G \) is also the maximal connected integral manifold of \( F \) passing through \( x \) . Hence left translation by \( x \) maps \( H \) onto itself. From this we conclude that if \( y \in H \), then \( {xy} \in H \), and there exists some \( y \in H \) such that \( {xy} = e \), whence \( {x}^{-1} \in H \) . Hence \( H \) is a subgroup. The other assertions are then clear.
|
Yes
|
Proposition 1.1. Let \( X \) be a manifold admitting partitions of unity. Let \( \pi : E \rightarrow X \) be a vector bundle whose fibers are Hilbertable vector spaces. Then \( \pi \) admits a Riemannian metric.
|
Proof. Find a partition of unity \( \left\{ {{U}_{i},{\varphi }_{i}}\right\} \) such that \( \pi \mid {U}_{i} \) is trivial, that is such that we have a trivialization\n\n\[{\pi }_{i} : {\pi }^{-1}\left( {U}_{i}\right) \rightarrow {U}_{i} \times \mathbf{E}\]\n\n(working over a connected component of \( X \), so that we may assume the fibers toplinearly isomorphic to a fixed Hilbert space \( \mathbf{E} \) ). We can then find a Riemannian metric on \( {U}_{i} \times \mathbf{E} \) in a trivial way. By transport of structure, there exists a Riemannian metric \( {g}_{i} \) on \( \pi \mid {U}_{i} \) and we let\n\n\[g = \sum {\varphi }_{i}{g}_{i}\]\n\nThen \( g \) is a Riemannian metric on \( \pi \) .
|
Yes
|
For all operators \( A \), the series\n\n\[ \exp \left( A\right) = I + A + \frac{{A}^{2}}{2!} + \cdots \]\n\nconverges. If \( A \) commutes with \( B \), then\n\n\[ \exp \left( {A + B}\right) = \exp \left( A\right) \exp \left( B\right) \]
|
Proof. Standard.
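Since the proof is omitted, here is a numerical sanity check (numpy; the truncated series and the sample matrices are illustrative assumptions): a polynomial in \( A \) commutes with \( A \), and for such a pair the addition formula holds to machine precision.

```python
import numpy as np

def expm(M, terms=40):
    # Truncated series I + M + M^2/2! + ...; adequate for the norms used here.
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

rng = np.random.default_rng(2)
A = 0.5 * rng.normal(size=(3, 3))
B_commuting = 2.0 * A + 0.7 * np.eye(3)      # polynomials in A commute with A

assert np.allclose(A @ B_commuting, B_commuting @ A)
assert np.allclose(expm(A + B_commuting), expm(A) @ expm(B_commuting), atol=1e-8)

# For generic non-commuting A, B the addition formula fails:
B = 0.5 * rng.normal(size=(3, 3))
gap = np.abs(expm(A + B) - expm(A) @ expm(B)).max()   # typically of the order of ||[A, B]||
```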
|
No
|
If \( A \) is symmetric (resp. skew-symmetric), then \( \exp \left( A\right) \) is symmetric positive definite (resp. Hilbertian). If \( A \) is a toplinear automorphism sufficiently close to \( I \) and is positive definite symmetric (resp. Hilbertian), then \( \log \left( A\right) \) is symmetric (resp. skew-symmetric).
|
The proofs are straightforward. As an example, let us carry out the proof of the last statement. Suppose \( A \) is Hilbertian and sufficiently close to \( I \) . Then \( {A}^{ * }A = I \), so that \( {A}^{ * } = {A}^{-1} \) . Applying the star operator to the logarithm series term by term, we get\n\n\[ \log {\left( A\right) }^{ * } = \left( {{A}^{ * } - I}\right) - \frac{{\left( {A}^{ * } - I\right) }^{2}}{2} + \cdots = \log \left( {A}^{ * }\right) = \log \left( {A}^{-1}\right) . \]\n\nIf \( A \) is close to \( I \), so is \( {A}^{-1} \), so that these statements make sense. We now conclude by noting that \( \log \left( {A}^{-1}\right) = - \log \left( A\right) \), whence \( \log \left( A\right) \) is skew-symmetric. All the other proofs are carried out in a similar fashion, applying the star operator to the series term by term, under conditions which insure convergence.
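Both statements can be observed numerically in finite dimension, where Hilbertian means orthogonal. The sketch below (numpy; the truncated exponential and logarithm series are illustrative assumptions, valid only near \( I \)) exponentiates a skew-symmetric \( K \) and a symmetric \( S \), then recovers a skew-symmetric logarithm.

```python
import numpy as np

def expm(M, terms=40):
    # Truncated exponential series; adequate for the small norms used here.
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def logm_near_I(A, terms=60):
    # log(A) = (A-I) - (A-I)^2/2 + (A-I)^3/3 - ...; requires A close to I.
    X = A - np.eye(len(A))
    out, power = np.zeros_like(X), np.eye(len(A))
    for k in range(1, terms):
        power = power @ X
        out = out + ((-1) ** (k + 1)) * power / k
    return out

rng = np.random.default_rng(3)
M = 0.1 * rng.normal(size=(3, 3))
K = M - M.T                       # skew-symmetric
S = M + M.T                       # symmetric

A = expm(K)                       # Hilbertian (orthogonal): A* A = I
assert np.allclose(A.T @ A, np.eye(3), atol=1e-10)

P = expm(S)                       # symmetric positive definite
assert np.allclose(P, P.T) and np.all(np.linalg.eigvalsh(P) > 0)

# The log of a Hilbertian automorphism near I is skew-symmetric:
L = logm_near_I(A)
assert np.allclose(L, -L.T, atol=1e-8)
assert np.allclose(L, K, atol=1e-8)
```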
|
No
|
Proposition 2.4. The exponential map gives a \( {C}^{\infty } \) -isomorphism between the space \( \operatorname{Sym}\left( \mathbf{E}\right) \) of symmetric endomorphisms of \( \mathbf{E} \) and the space \( \operatorname{Pos}\left( \mathbf{E}\right) \) of symmetric positive definite automorphisms of \( \mathbf{E} \) .
|
Proof. We must construct its inverse, and for this we use the spectral theorem. Given \( A \), symmetric positive definite, the analytic function \( \log t \) is defined on the spectrum of \( A \), and thus \( \log A \) is symmetric. One verifies immediately that it is the inverse of the exponential function (which can be viewed in the same way). We can expand \( \log t \) around a large positive number \( c \), in a power series uniformly and absolutely convergent in an interval \( 0 < \epsilon \leqq t \leqq {2c} - \epsilon \), to achieve our purposes.
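In finite dimension the spectral theorem is an eigendecomposition, and the inverse pair \( \exp ,\log \) can be computed explicitly. (numpy sketch; the sample symmetric matrix is an arbitrary choice.)

```python
import numpy as np

def sym_exp(S):
    # Spectral calculus: S = Q diag(d) Q^T  ->  exp(S) = Q diag(e^d) Q^T.
    d, Q = np.linalg.eigh(S)
    return Q @ np.diag(np.exp(d)) @ Q.T

def pos_log(P):
    # The inverse on Pos(E): log t applied on the spectrum of P.
    d, Q = np.linalg.eigh(P)
    return Q @ np.diag(np.log(d)) @ Q.T

rng = np.random.default_rng(4)
M = 0.5 * rng.normal(size=(4, 4))
S = M + M.T                      # an arbitrary symmetric endomorphism

P = sym_exp(S)
assert np.allclose(P, P.T) and np.all(np.linalg.eigvalsh(P) > 0)
assert np.allclose(pos_log(P), S, atol=1e-8)     # log inverts exp on Sym(E)
```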
|
No
|
The manifold of toplinear automorphisms of the Hilbert space \( \mathbf{E} \) is \( {C}^{\infty } \) -isomorphic to the product of the Hilbert automorphisms and the positive definite symmetric automorphisms, under the mapping\n\n\[ \operatorname{Hilb}\left( \mathbf{E}\right) \times \operatorname{Pos}\left( \mathbf{E}\right) \rightarrow \operatorname{Laut}\left( \mathbf{E}\right) \]\n\ngiven by\n\n\[ \left( {H, P}\right) \rightarrow {HP}\text{.} \]
|
Proof. Our map is induced by a continuous bilinear map of\n\n\[ L\left( {\mathbf{E},\mathbf{E}}\right) \times L\left( {\mathbf{E},\mathbf{E}}\right) \]\ninto \( L\left( {\mathbf{E},\mathbf{E}}\right) \) and so is \( {C}^{\infty } \) . We must construct an inverse, or in other words express any given toplinear automorphism \( A \) in a unique way as a product \( A = {HP} \) where \( H \) is Hilbertian, \( P \) is symmetric positive definite, and both \( H, P \) depend \( {C}^{\infty } \) on \( A \) . This is done as follows. First we note that \( {A}^{ * }A \) is symmetric positive definite (because \( \left\langle {{A}^{ * }{Av}, v}\right\rangle = \langle {Av},{Av}\rangle \) , and furthermore, \( {A}^{ * }A \) is a toplinear automorphism, so that 0 cannot be in its spectrum, and hence \( {A}^{ * }A \geqq {\epsilon I} > O \) since the spectrum is closed). We let\n\n\[ P = {\left( {A}^{ * }A\right) }^{1/2} \]\n\nand let \( H = A{P}^{-1} \) . Then \( H \) is Hilbertian, because\n\n\[ {H}^{ * }H = {\left( {P}^{-1}\right) }^{ * }{A}^{ * }A{P}^{-1} = I. \]\n\nBoth \( P \) and \( H \) depend differentiably on \( A \) since all constructions involved are differentiable.\n\nThere remains to be shown that the expression as a product is unique. If \( A = {H}_{1}{P}_{1} \) where \( {H}_{1},{P}_{1} \) are Hilbertian and symmetric positive definite respectively, then\n\n\[ {H}^{-1}{H}_{1} = P{P}_{1}^{-1} \]\n\nand we get \( {H}_{2} = P{P}_{1}^{-1} \) for some Hilbertian automorphism \( {H}_{2} \) . By definition,\n\n\[ I = {H}_{2}^{ * }{H}_{2} = {\left( P{P}_{1}^{-1}\right) }^{ * }P{P}_{1}^{-1} \]\n\nand from the fact that \( {P}^{ * } = P \) and \( {P}_{1}^{ * } = {P}_{1} \), we find\n\n\[ {P}^{2} = {P}_{1}^{2} \]\n\nTaking the log, we find \( 2\log P = 2\log {P}_{1} \) . We now divide by 2 and take the exponential, thus giving \( P = {P}_{1} \) and finally \( H = {H}_{1} \) . This proves our proposition.
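The construction \( P = {\left( {A}^{ * }A\right) }^{1/2} \), \( H = A{P}^{-1} \) is the polar decomposition, and in finite dimension it can be carried out with an eigendecomposition. (numpy sketch; the sample automorphism is a hypothetical choice kept close to \( I \) so that it is well conditioned.)

```python
import numpy as np

def sym_sqrt(S):
    # Square root of a symmetric positive definite matrix via eigendecomposition.
    d, Q = np.linalg.eigh(S)
    return Q @ np.diag(np.sqrt(d)) @ Q.T

def polar(A):
    # P = (A* A)^{1/2}, H = A P^{-1}, as in the proof.
    P = sym_sqrt(A.T @ A)
    H = A @ np.linalg.inv(P)
    return H, P

rng = np.random.default_rng(5)
A = np.eye(4) + 0.2 * rng.normal(size=(4, 4))   # a well-conditioned automorphism of R^4

H, P = polar(A)
assert np.allclose(H @ P, A)
assert np.allclose(H.T @ H, np.eye(4), atol=1e-8)   # H is Hilbertian (orthogonal)
assert np.allclose(P, P.T) and np.all(np.linalg.eigvalsh(P) > 0)
```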
|
Yes
|
Theorem 3.1. Let \( \pi \) be a vector bundle over a manifold \( X \), and assume that the fibers of \( \pi \) are all toplinearly isomorphic to a Hilbert space \( \mathbf{E} \). Then the above map, from reductions of \( \pi \) to the Hilbert group, into the Riemannian metrics, is a bijection.
|
Proof. Suppose that we are given an ordinary VB-trivialization \( \left\{ \left( {{U}_{i},{\tau }_{i}}\right) \right\} \) of \( \pi \). We must construct an HB-trivialization. For each \( i \), let \( {g}_{i} \) be the Riemannian metric on \( {U}_{i} \times \mathbf{E} \) transported from \( {\pi }^{-1}\left( {U}_{i}\right) \) by means of \( {\tau }_{i} \). Then for each \( x \in {U}_{i} \), we have a positive definite symmetric operator \( {A}_{ix} \) such that\n\n\[ \n{g}_{ix}\left( {v, w}\right) = \left\langle {{A}_{ix}v, w}\right\rangle \n\]\n\nfor all \( v, w \in \mathbf{E} \). Let \( {B}_{ix} \) be the square root of \( {A}_{ix} \). We define the trivialization \( {\sigma }_{i} \) by the formula\n\n\[ \n{\sigma }_{ix} = {B}_{ix}{\tau }_{ix} \n\]\n\nand contend that \( \left\{ \left( {{U}_{i},{\sigma }_{i}}\right) \right\} \) is a Hilbert trivialization. Indeed, from the definition of \( {g}_{ix} \), it suffices to verify that the VB-isomorphism\n\n\[ \n{B}_{i} : {U}_{i} \times \mathbf{E} \rightarrow {U}_{i} \times \mathbf{E} \n\]\n\ngiven by \( {B}_{ix} \) on each fiber, carries \( {g}_{i} \) on the usual metric. But we have, for \( v, w \in E \) :\n\n\[ \n\left\langle {{B}_{ix}v,{B}_{ix}w}\right\rangle = \left\langle {{A}_{ix}v, w}\right\rangle \n\]\n\nsince \( {B}_{ix} \) is symmetric, and equal to the square root of \( {A}_{ix} \). This proves what we want.
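The key identity \( \left\langle {{B}_{ix}v,{B}_{ix}w}\right\rangle = \left\langle {{A}_{ix}v, w}\right\rangle \) for the symmetric square root is easy to verify numerically on a single fiber. (numpy sketch; the metric matrix \( A \) and the vectors are arbitrary choices.)

```python
import numpy as np

rng = np.random.default_rng(6)
M = rng.normal(size=(3, 3))
A = M.T @ M + np.eye(3)          # positive definite symmetric: g(v, w) = <A v, w>

d, Q = np.linalg.eigh(A)
B = Q @ np.diag(np.sqrt(d)) @ Q.T    # the symmetric square root of A

v, w = rng.normal(size=3), rng.normal(size=3)

# B carries g to the standard metric: <Bv, Bw> = <Av, w>.
assert np.isclose((B @ v) @ (B @ w), (A @ v) @ w)
```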
|
Yes
|
Proposition 4.1. Let \( X \) be a manifold and \( \pi : E \rightarrow X \) a Hilbert bundle. Let \( \sigma : X \rightarrow \mathbf{R} \) be a morphism such that \( \sigma \left( x\right) > 0 \) for all \( x \) . Then the mapping \[ w \rightarrow \frac{\sigma \left( {\pi w}\right) w}{{\left( 1 + {\left| w\right| }^{2}\right) }^{1/2}} \] gives an isomorphism of \( E \) onto \( E\left( \sigma \right) \) .
|
Proof. The map clearly sends each fiber of \( E \) into the open ball of radius \( \sigma \) in that fiber, and one checks directly that it has the inverse\n\n\[ v \mapsto \frac{v}{{\left( \sigma {\left( \pi v\right) }^{2} - {\left| v\right| }^{2}\right) }^{1/2}}, \]\n\nwhich is a morphism \( E\left( \sigma \right) \rightarrow E \) . Hence the map is an isomorphism of \( E \) onto \( E\left( \sigma \right) \) .
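The fiberwise formulas can be sanity-checked in a single fiber, taking \( \mathbf{E} = {\mathbf{R}}^{n} \) and treating \( \sigma \) as a constant on that fiber (a toy stand-in; the function names are illustrative). One verifies that \( w \mapsto \sigma w/{\left( 1 + {\left| w\right| }^{2}\right) }^{1/2} \) lands in the ball of radius \( \sigma \) and that \( v \mapsto v/{\left( {\sigma }^{2} - {\left| v\right| }^{2}\right) }^{1/2} \) inverts it.

```python
import math

def into_ball(w, sigma):
    """Fiberwise map of Proposition 4.1: carries the fiber R^n onto the
    open ball of radius sigma (sigma = sigma(pi(w)), constant on a fixed fiber)."""
    norm2 = sum(c * c for c in w)
    s = sigma / math.sqrt(1.0 + norm2)
    return [s * c for c in w]

def out_of_ball(v, sigma):
    """Candidate inverse: v -> v / (sigma^2 - |v|^2)^{1/2}, defined for |v| < sigma."""
    norm2 = sum(c * c for c in v)
    s = 1.0 / math.sqrt(sigma * sigma - norm2)
    return [s * c for c in v]

w = [3.0, -4.0]
v = into_ball(w, 2.0)
assert sum(c * c for c in v) < 4.0      # lands inside the ball of radius 2
back = out_of_ball(v, 2.0)
assert all(abs(a - b) < 1e-12 for a, b in zip(back, w))
```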
|
No
|
Corollary 4.2. Let \( X \) be a manifold admitting partitions of unity, and let \( \pi : E \rightarrow X \) be a Hilbert bundle over \( X \) . Then \( E \) is compressible.
|
Proof. Let \( Z \) be an open neighborhood of the zero section. For each \( x \in X \), there exists an open neighborhood \( {V}_{x} \) and a number \( {a}_{x} > 0 \) such that the vectors in \( {\pi }^{-1}\left( {V}_{x}\right) \) which are of length \( < {a}_{x} \) lie in \( Z \) . We can find a partition of unity \( \left\{ \left( {{U}_{i},{\varphi }_{i}}\right) \right\} \) on \( X \) such that each \( {U}_{i} \) is contained in some \( {V}_{x\left( i\right) } \) . We let \( \sigma \) be the function \[ \sum {a}_{x\left( i\right) }{\varphi }_{i} \] Then \( E\left( \sigma \right) \) is contained in \( Z \), and our assertion follows from the proposition.
|
Yes
|
Proposition 4.3. Let \( X \) be a manifold. Let \( \pi : E \rightarrow X \) and \( {\pi }_{1} : {E}_{1} \rightarrow X \) be two Hilbert bundles over \( X \) . Let\n\n\[ \lambda : E \rightarrow {E}_{1} \]\n\nbe a VB-isomorphism. Then there exists an isotopy of VB-isomorphisms\n\n\[ {\lambda }_{t} : E \rightarrow {E}_{1} \]\n\nwith proper domain \( \left\lbrack {0,1}\right\rbrack \) such that \( {\lambda }_{1} = \lambda \) and \( {\lambda }_{0} \) is an HB-isomorphism.
|
Proof. We find reductions of \( E \) and \( {E}_{1} \) to the Hilbert group, with Hilbert trivializations \( \left\{ \left( {{U}_{i},{\tau }_{i}}\right) \right\} \) for \( E \) and \( \left\{ \left( {{U}_{i},{\rho }_{i}}\right) \right\} \) for \( {E}_{1} \) . We can then factor \( {\rho }_{i}\lambda {\tau }_{i}^{-1} \) as in Proposition 2.5, applied to each fiber map, and obtain a factorization of \( \lambda \) into \( \lambda = {\lambda }_{H}{\lambda }_{P} \) where \( {\lambda }_{H} \) is an HB-isomorphism and \( {\lambda }_{P} \) is a positive definite symmetric VB-automorphism. The latter form a convex set, and our isotopy is simply\n\n\[ {\lambda }_{t} = {\lambda }_{H} \circ \left( {\left( {1 - t}\right) I + t{\lambda }_{P}}\right) .\]\n\n(Smooth out the end points if you wish.)
|
Yes
|
Theorem 4.4. Let \( X \) be a submanifold of \( Y \) . Let \( \pi : E \rightarrow X \) and \( {\pi }_{1} : {E}_{1} \rightarrow X \) be two Hilbert bundles. Assume that \( E \) is compressible. Let \( f : E \rightarrow Y \) and \( g : {E}_{1} \rightarrow Y \) be two tubular neighborhoods of \( X \) in \( Y \) . Then there exists an isotopy\n\n\[ \n{f}_{t} : E \rightarrow Y \n\]\n\nof tubular neighborhoods with proper domain \( \left\lbrack {0,1}\right\rbrack \) and there exists an HB-isomorphism \( \mu : E \rightarrow {E}_{1} \) such that \( {f}_{1} = f \) and \( {f}_{0} = {g\mu } \) .
|
Proof. From Theorem 6.2 of Chapter IV, we know already that there exists a VB-isomorphism \( \lambda \) such that \( f \approx {g\lambda } \) . Using the preceding proposition, we know that \( \lambda \approx \mu \) where \( \mu \) is an HB-isomorphism. Thus \( {g\lambda } \approx {g\mu } \) and by transitivity, \( f \approx {g\mu } \), as was to be shown.
|
No
|
Theorem 5.1. Let \( f \) be a \( {C}^{p + 2} \) function defined on an open neighborhood of 0 in the Hilbert space \( \mathbf{E} \), with \( p \geqq 1 \) . Assume that \( f\left( 0\right) = 0 \), and that 0 is a non-degenerate critical point of \( f \) . Then there exists a local \( {C}^{p} \) - isomorphism at 0, say \( \varphi \), and an invertible symmetric operator \( A \) such that\n\n\[ f\left( x\right) = \langle {A\varphi }\left( x\right) ,\varphi \left( x\right) \rangle . \]
|
Proof. We may assume that \( U \) is a ball around 0 . We have\n\n\[ f\left( x\right) = f\left( x\right) - f\left( 0\right) = {\int }_{0}^{1}{Df}\left( {tx}\right) {xdt} \]\n\nand applying the same formula to \( {Df} \) instead of \( f \), we get\n\n\[ f\left( x\right) = {\int }_{0}^{1}{\int }_{0}^{1}{D}^{2}f\left( {stx}\right) {tx} \cdot {xdsdt} = g\left( x\right) \left( {x, x}\right) \]\n\nwhere\n\n\[ g\left( x\right) = {\int }_{0}^{1}{\int }_{0}^{1}{D}^{2}f\left( {stx}\right) {tdsdt} \]\n\nThen \( g \) is a \( {C}^{p} \) map into the Banach space of continuous bilinear maps on \( \mathbf{E} \), and even the space of symmetric such maps. We know that this\n\nBanach space is toplinearly isomorphic to the space of symmetric operators on \( \mathbf{E} \), and thus we can write\n\n\[ f\left( x\right) = \langle A\left( x\right) x, x\rangle \]\n\nwhere \( A : U \rightarrow \operatorname{Sym}\left( \mathbf{E}\right) \) is a \( {C}^{p} \) map of \( U \) into the space of symmetric operators on \( E \) . A straightforward computation shows that\n\n\[ {D}^{2}f\left( 0\right) \left( {v, w}\right) = \langle A\left( 0\right) v, w\rangle . \]\n\nSince we assumed that \( {D}^{2}f\left( 0\right) \) is non-singular, this means that \( A\left( 0\right) \) is invertible, and hence \( A\left( x\right) \) is invertible for all \( x \) sufficiently near 0 .\n\nTheorem 5.1 is then a consequence of the following result, which expresses locally the uniqueness of a non-singular symmetric form.
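The double-integral formula for \( g\left( x\right) \) can be tested in one variable, where it reads \( A\left( x\right) = {\int }_{0}^{1}{\int }_{0}^{1}{f}^{\prime \prime }\left( {stx}\right) t\,ds\,dt \) and the conclusion is \( f\left( x\right) = A\left( x\right) {x}^{2} \) . The sketch below (illustrative names, midpoint-rule quadrature) uses \( f\left( x\right) = {x}^{2} + {x}^{3} \), for which one computes \( A\left( x\right) = 1 + x \) exactly.

```python
def g_weight(f2, x, n=400):
    """Approximate A(x) = int_0^1 int_0^1 f''(s*t*x) * t ds dt
    by a midpoint rule, in one variable (E = R)."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        for j in range(n):
            s = (j + 0.5) * h
            total += f2(s * t * x) * t
    return total * h * h

f = lambda x: x**2 + x**3      # f(0) = 0, f'(0) = 0: critical point at 0
f2 = lambda u: 2.0 + 6.0 * u   # second derivative of f

x = 0.7
A = g_weight(f2, x)            # exact value is 1 + x
assert abs(A - (1.0 + x)) < 1e-4
assert abs(A * x * x - f(x)) < 1e-4   # f(x) = <A(x)x, x>
```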
|
Yes
|
Theorem 5.2. Let \( A : U \rightarrow \operatorname{Sym}\left( \mathbf{E}\right) \) be a \( {C}^{p} \) map of \( U \) into the open set of invertible symmetric operators on \( \mathbf{E} \) . Then there exists a \( {C}^{p} \) isomorphism of an open subset \( {U}_{1} \) containing 0, of the form\n\n\[ \varphi \left( x\right) = C\left( x\right) x,\;\text{ with a }{C}^{p}\text{ map }\;C : {U}_{1} \rightarrow \operatorname{Laut}\left( \mathbf{E}\right) \]\n\nsuch that\n\n\[ \langle A\left( x\right) x, x\rangle = \langle A\left( 0\right) \varphi \left( x\right) ,\varphi \left( x\right) \rangle = \langle A\left( 0\right) C\left( x\right) x, C\left( x\right) x\rangle . \]
|
Proof. We seek a map \( C \) such that\n\n\[ C{\left( x\right) }^{ * }A\left( 0\right) C\left( x\right) = A\left( x\right) . \]\n\nIf we let \( B\left( x\right) = A{\left( 0\right) }^{-1}A\left( x\right) \), then \( B\left( x\right) \) is close to the identity \( I \) for small \( x \) . The square root function has a power series expansion near 1, which is a uniform limit of polynomials, and is \( {C}^{\infty } \) on a neighborhood of \( I \), and we can therefore take the square root of \( B\left( x\right) \), so that we let\n\n\[ C\left( x\right) = B{\left( x\right) }^{1/2}. \]\n\nWe contend that this \( C\left( x\right) \) does what we want. Indeed, since both \( A\left( 0\right) \) and \( A\left( x\right) \) (or \( A{\left( x\right) }^{-1} \) ) are self-adjoint, we find that\n\n\[ B{\left( x\right) }^{ * } = A\left( x\right) A{\left( 0\right) }^{-1}, \]\n\nwhence\n\n\[ B{\left( x\right) }^{ * }A\left( 0\right) = A\left( 0\right) B\left( x\right) .\n\]\n\nBut \( C\left( x\right) \) is a power series in \( I - B\left( x\right) \), and \( C{\left( x\right) }^{ * } \) is the same power series in \( I - B{\left( x\right) }^{ * } \) . The preceding relation holds if we replace \( B\left( x\right) \) by any power of \( B\left( x\right) \) (by induction), hence it holds if we replace \( B\left( x\right) \) by any polynomial in \( I - B\left( x\right) \), and hence finally it holds if we replace \( B\left( x\right) \) by \( C\left( x\right) \) . Thus\n\n\[ C{\left( x\right) }^{ * }A\left( 0\right) C\left( x\right) = A\left( 0\right) C\left( x\right) C\left( x\right) = A\left( 0\right) B\left( x\right) = A\left( x\right) , \]\n\nwhich is the desired relation.\n\nAll that remains to be shown is that \( \varphi \) is a local \( {C}^{p} \) -isomorphism at 0 . But one verifies that in fact \( {D\varphi }\left( 0\right) = C\left( 0\right) \), so that what we need follows from the inverse mapping theorem. This concludes the proof of Theorems 5.1 and 5.2.
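The congruence identity \( C{\left( x\right) }^{ * }A\left( 0\right) C\left( x\right) = A\left( x\right) \) can be checked numerically for symmetric matrices. As a stand-in for the power-series square root of \( B\left( x\right) = A{\left( 0\right) }^{-1}A\left( x\right) \) (valid since \( B\left( x\right) \) is near \( I \)), the sketch below uses the Denman-Beavers iteration, a substitution of my choosing; the sample matrices are illustrative.

```python
import numpy as np

def mat_sqrt(B, iters=60):
    """Square root of a matrix near the identity, via the Denman-Beavers
    iteration (a stand-in for the power-series square root in the proof)."""
    Y, Z = B.copy(), np.eye(len(B))
    for _ in range(iters):
        # simultaneous update: Y -> (Y + Z^{-1})/2, Z -> (Z + Y^{-1})/2
        Y, Z = (Y + np.linalg.inv(Z)) / 2.0, (Z + np.linalg.inv(Y)) / 2.0
    return Y

A0 = np.array([[2.0, 0.3], [0.3, -1.0]])    # invertible symmetric, indefinite
Ax = np.array([[2.1, 0.25], [0.25, -0.9]])  # a nearby symmetric operator
B = np.linalg.inv(A0) @ Ax                  # B(x) = A(0)^{-1} A(x), near I
C = mat_sqrt(B)                             # C(x) = B(x)^{1/2}
assert np.allclose(C @ C, B)
assert np.allclose(C.T @ A0 @ C, Ax)        # C(x)^* A(0) C(x) = A(x)
```

The second assertion works for exactly the reason given in the proof: \( C \) is a limit of rational functions of \( B \), so the relation \( B{\left( x\right) }^{ * }A\left( 0\right) = A\left( 0\right) B\left( x\right) \) transfers to \( C \).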
|
Yes
|
Corollary 5.3. Let \( f \) be a \( {C}^{p + 2} \) function near 0 on the Hilbert space \( \mathbf{E} \) , such that 0 is a non-degenerate critical point. Then there exists a local \( {C}^{p} \) -isomorphism \( \psi \) at 0, and an orthogonal decomposition \( \mathbf{E} = \mathbf{F} + {\mathbf{F}}^{ \bot } \) , such that if we write \( \psi \left( x\right) = y + z \) with \( y \in \mathbf{F} \) and \( z \in {\mathbf{F}}^{ \bot } \), then\n\n\[ f\left( {\psi \left( x\right) }\right) = \langle y, y\rangle - \langle z, z\rangle . \]
|
Proof. On a space where \( A \) is positive definite, we can always make the toplinear isomorphism \( x \mapsto {A}^{1/2}x \) to get the quadratic form to become the given hermitian product \( \langle \rangle \), and similarly on a space where \( A \) is negative definite. In general, we use the spectral theorem to decompose \( \mathbf{E} \) into a direct orthogonal sum such that the restriction of \( A \) to the factors is positive definite and negative definite respectively.
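In finite dimensions the corollary is the Morse lemma normal form, and the spectral-theorem step can be made concrete: with \( u = {\left| A\right| }^{1/2}x \) split into the positive and negative spectral subspaces of \( A \), one has \( \langle {Ax}, x\rangle = \langle y, y\rangle - \langle z, z\rangle \) . The sketch below (illustrative matrix and names) checks this identity.

```python
import numpy as np

# invertible symmetric, indefinite: eigenvalues 2 and -3
A = np.array([[1.0, 2.0], [2.0, -2.0]])
w, V = np.linalg.eigh(A)                       # spectral theorem
sqrt_absA = V @ np.diag(np.sqrt(np.abs(w))) @ V.T

# orthogonal projection onto F, the positive spectral subspace of A
pos = V[:, w > 0] @ V[:, w > 0].T

x = np.array([0.4, -1.1])
u = sqrt_absA @ x                              # linear change of variable
y, z = pos @ u, u - pos @ u                    # u = y + z, y in F, z in F-perp
# <Ax, x> = <y, y> - <z, z>
assert abs((y @ y - z @ z) - x @ A @ x) < 1e-10
```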
|
No
|
Proposition 7.2. In the chart \( U \), let \( f = \left( {{f}_{1},{f}_{2}}\right) : U \times \mathbf{E} \rightarrow \mathbf{E} \times \mathbf{E} \) represent \( F \) . Then \( {f}_{2}\left( {x, v}\right) \) is the unique vector such that for all \( {w}_{1} \in \mathbf{E} \) we have:\n\n\[ \left\langle {{f}_{2}\left( {x, v}\right), g\left( x\right) {w}_{1}}\right\rangle = \frac{1}{2}\left\langle {{g}^{\prime }\left( x\right) {w}_{1} \cdot v, v}\right\rangle - \left\langle {{g}^{\prime }\left( x\right) \cdot v \cdot v,{w}_{1}}\right\rangle . \]
|
From this one sees that \( {f}_{2} \) is homogeneous of degree 2 in the second variable \( v \), in other words that it represents a spray. This concludes the proof of Theorem 7.1.
|
No
|
Proposition 1.1. Let \( \left\{ {{\xi }_{1},\ldots ,{\xi }_{n}}\right\} \) be a frame of vector fields. Let \( \left\{ {{\lambda }_{1},\ldots ,{\lambda }_{n}}\right\} \) be the dual frame of 1 -forms \( \left( {\text{so}{\lambda }_{i}\left( {\xi }_{j}\right) = {\delta }_{ij}}\right) \) . For any form \( \omega \in {\mathcal{A}}^{r}\left( X\right) \) we have\n\n\[ \n{d\omega } = \mathop{\sum }\limits_{{i = 1}}^{n}{\lambda }_{i} \land {D}_{{\xi }_{i}}\omega \n\]
|
Proof. Let \( {d}^{\prime }\omega = \sum {\lambda }_{i} \land {D}_{{\xi }_{i}}\omega \) . Then \( {d}^{\prime } \) defines an anti-derivation of the alternating algebra of forms, that is if \( \psi \in {\mathcal{A}}^{q}\left( X\right) \) for any \( q \), then\n\n\[ \n{d}^{\prime }\left( {\omega \land \psi }\right) = \left( {{d}^{\prime }\omega }\right) \land \psi + {\left( -1\right) }^{r}\omega \land {d}^{\prime }\psi .\n\]\n\nFurthermore, \( {d}^{\prime } = d \) on functions (as is immediately verified), and we verify that \( {d}^{\prime } = d \) on \( {\mathcal{A}}^{1}\left( X\right) \) as follows:\n\n\[ \n\left( {{d}^{\prime }\omega }\right) \left( {\xi ,\eta }\right) = \sum \left( {{\lambda }_{i} \land {D}_{{\xi }_{i}}\omega }\right) \left( {\xi ,\eta }\right)\n\]\n\n\[ \n= \sum \left\lbrack {{\lambda }_{i}\left( \xi \right) \left\langle {{D}_{{\xi }_{i}}\omega ,\eta }\right\rangle - {\lambda }_{i}\left( \eta \right) \left\langle {{D}_{{\xi }_{i}}\omega ,\xi }\right\rangle }\right\rbrack\n\]\n\n\[ \n= \sum \left\lbrack {\left\langle {{D}_{{\lambda }_{i}\left( \xi \right) {\xi }_{i}}\omega ,\eta }\right\rangle - \left\langle {{D}_{{\lambda }_{i}\left( \eta \right) {\xi }_{i}}\omega ,\xi }\right\rangle }\right\rbrack\n\]\n\n\[ \n= \left\langle {{D}_{\xi }\omega ,\eta }\right\rangle - \left\langle {{D}_{\eta }\omega ,\xi }\right\rangle\n\]\n\n\[ \n= \left( {d\omega }\right) \left( {\xi ,\eta }\right) \;\text{ by COVD }\mathbf{6},\n\]\nwhich concludes the proof for 1-forms. Since 1-forms generate the algebra of forms in the finite dimensional case, the proposition is proved in general.
|
Yes
|
Proposition 2.2. Let \( \omega \in \Gamma {L}^{r}\left( {{TX},\mathbf{R}}\right) \) or \( \Gamma {L}^{r}\left( {{TX},{TX}}\right) \) . Let \( \xi \) , \( {\eta }_{1},\ldots ,{\eta }_{r} \) be vector fields over \( X \) . If \( \omega \in \Gamma {L}^{r}\left( {{TX},\mathbf{R}}\right) \), then in a chart \( U \) we have the formula\n\n\[ \n{\left( {D}_{\xi }\omega \right) }_{U}\left( {{\eta }_{1U},\ldots ,{\eta }_{rU}}\right) \n\]\n\n\[ \n= {\omega }_{U}^{\prime }\left( {\xi }_{U}\right) \left( {{\eta }_{1U},\ldots ,{\eta }_{rU}}\right) + \mathop{\sum }\limits_{{j = 1}}^{r}{\omega }_{U}\left( {{\eta }_{1U},\ldots ,{B}_{U}\left( {{\xi }_{U},{\eta }_{jU}}\right) ,\ldots ,{\eta }_{rU}}\right) .\n\]\n\nIf \( \omega \in \Gamma {L}^{r}\left( {{TX},{TX}}\right) \), then\n\n\[ \n{\left( {D}_{\xi }\omega \right) }_{U}\left( {{\eta }_{1U},\ldots ,{\eta }_{rU}}\right) = \text{ same expression } - {B}_{U}\left( {{\xi }_{U},{\omega }_{U}\left( {{\eta }_{1U},\ldots ,{\eta }_{rU}}\right) }\right) .\n\]
|
Proof. This comes directly from the definitions in \( §1 \) . Observe that in applying the definitions, the sum\n\n\[ \n\mathop{\sum }\limits_{{j = 1}}^{r}{\omega }_{U}\left( {{\eta }_{1U},\ldots ,{\eta }_{jU}^{\prime } \cdot {\xi }_{U},\ldots ,{\eta }_{rU}}\right) \n\]\n\noccurs twice, once with a \( + \) sign and once with a \( - \) sign, so it cancels in the end.
|
Yes
|
Lemma 2.3. Let \( E, F \) be vector bundles over \( X \), with \( E \) finite dimensional and \( X \) admitting cut off functions. Let\n\n\[ H : {\Gamma E} \rightarrow {\Gamma F} \]\n\nbe a linear map which is \( \mathrm{{Fu}}\left( X\right) \) -linear, that is \( H\left( {\varphi \xi }\right) = {\varphi H}\left( \xi \right) \) for \( \varphi \in \mathrm{{Fu}} \) . Given a point \( x \in X \), the value \( H\left( \xi \right) \left( x\right) \) depends only on the value \( \xi \left( x\right) \) .
|
Proof. It suffices to prove that if \( \xi \left( {x}_{0}\right) = 0 \) then \( H\left( \xi \right) \left( {x}_{0}\right) = 0 \) . There exists a cut off function \( \varphi \) near \( {x}_{0} \) by assumption, so we may give the proof locally. By assumption, there exists a finite number of sections \( {e}_{1},\ldots ,{e}_{r} \) of \( E \) which form a basis for the sections locally, so there exist functions \( {\varphi }_{1},\ldots ,{\varphi }_{r} \) such that\n\n\[ \xi = {\varphi }_{1}{e}_{1} + \cdots + {\varphi }_{r}{e}_{r} \]\n\nlocally. Then\n\n\[ H\left( \xi \right) = {\varphi }_{1}H\left( {e}_{1}\right) + \cdots + {\varphi }_{r}H\left( {e}_{r}\right) . \]\n\nThe condition \( \xi \left( {x}_{0}\right) = 0 \) is equivalent with the conditions \( {\varphi }_{i}\left( {x}_{0}\right) = 0 \) for all \( i \) . Hence \( H\left( \xi \right) \left( {x}_{0}\right) = 0 \), thus proving the lemma.
|
Yes
|
Theorem 3.1. There exists a unique linear map\n\n\[ \n{D}_{{\alpha }^{\prime }} : \operatorname{Lift}\left( \alpha \right) \rightarrow \operatorname{Lift}\left( \alpha \right) \n\]\nwhich in a chart \( U \) has the expression\n\n\[ \n{\left( {D}_{{\alpha }^{\prime }}\gamma \right) }_{U}\left( t\right) = {\gamma }_{U}^{\prime }\left( t\right) - {B}_{U}\left( {\alpha \left( t\right) ;{\alpha }_{U}^{\prime }\left( t\right) ,{\gamma }_{U}\left( t\right) }\right) .\n\]\n\nThe map \( {D}_{{\alpha }^{\prime }} \) satisfies the derivation property for a \( {C}^{1} \) function \( \varphi \) on \( J \) :\n\n\[ \n\left( {{D}_{{\alpha }^{\prime }}\left( {\varphi \gamma }\right) }\right) \left( t\right) = {\varphi }^{\prime }\left( t\right) \gamma \left( t\right) + \varphi \left( t\right) \left( {{D}_{{\alpha }^{\prime }}\gamma }\right) \left( t\right) .\n\]
|
Proof of Theorem 3.1. The proof is entirely analogous to the proof of Theorem 2.1, using the local representation of the bilinear map \( {B}_{U} \) associated with a spray in charts. We have to verify that the formula of Theorem 3.1 transforms in the proper way under a change of charts, i.e. under an isomorphism \( h : U \rightarrow V \) . Note that the local representation \( {\gamma }_{V} \) of the curve by definition is given by\n\n\[ \n{\gamma }_{V}\left( t\right) = {h}^{\prime }\left( {{\alpha }_{U}\left( t\right) }\right) {\gamma }_{U}\left( t\right) .\n\]\n\nTherefore by the rule for the derivative of a product, we find:\n\n\[ \n{\gamma }_{V}^{\prime }\left( t\right) = {h}^{\prime \prime }\left( {{\alpha }_{U}\left( t\right) }\right) \left( {{\alpha }_{U}^{\prime }\left( t\right) ,{\gamma }_{U}\left( t\right) }\right) + {h}^{\prime }\left( {{\alpha }_{U}\left( t\right) }\right) {\gamma }_{U}^{\prime }\left( t\right) .\n\]\n\nHence using the transformation rule from \( {B}_{U} \) to \( {B}_{V} \), Proposition 3.3 of Chapter IV, we get\n\n\[ \n{\left( {D}_{{\alpha }^{\prime }}\gamma \right) }_{V}\left( t\right) = {\gamma }_{V}^{\prime }\left( t\right) - {B}_{V}\left( {\alpha \left( t\right) ;{\alpha }_{V}^{\prime }\left( t\right) ,{\gamma }_{V}\left( t\right) }\right) \n\]\n\n\[ \n= {h}^{\prime \prime }\left( {{\alpha }_{U}\left( t\right) }\right) \left( {{\alpha }_{U}^{\prime }\left( t\right) ,{\gamma }_{U}\left( t\right) }\right) + {h}^{\prime }\left( {{\alpha }_{U}\left( t\right) }\right) {\gamma }_{U}^{\prime }\left( t\right) \n\]\n\n\[ \n- {h}^{\prime \prime }\left( {{\alpha }_{U}\left( t\right) }\right) \left( {{\alpha }_{U}^{\prime }\left( t\right) ,{\gamma }_{U}\left( t\right) }\right) \n\]\n\n\[ \n- {h}^{\prime }\left( {{\alpha }_{U}\left( t\right) }\right) {B}_{U}\left( {\alpha \left( t\right) ;{\alpha }_{U}^{\prime }\left( t\right) ,{\gamma }_{U}\left( t\right) }\right) \n\]\n\n\[ \n= {h}^{\prime }\left( {{\alpha }_{U}\left( t\right) }\right) {\left( {D}_{{\alpha }^{\prime }}\gamma \right) }_{U}\left( t\right) \;\text{ (because the }{h}^{\prime \prime }\text{ terms cancel),} \n\]\n\nwhich proves the desired transformation formula for \( {\left( {D}_{{\alpha }^{\prime }}\gamma \right) }_{U} \) in charts. Thus we have proved the existence of \( {D}_{{\alpha }^{\prime }}\gamma \) as asserted. Its being a derivation is immediate from the local representation in charts. This concludes the proof of Theorem 3.1.
|
Yes
|
Corollary 3.2. Let \( \eta \) be a vector field and suppose \( \gamma \left( t\right) = \eta \left( {\alpha \left( t\right) }\right), t \in J \) . Let \( \xi \) be a vector field on \( X \) such that \( {\alpha }^{\prime }\left( {t}_{0}\right) = \xi \left( {\alpha \left( {t}_{0}\right) }\right) \) for some \( {t}_{0} \in J \) . Then\n\n\[ \left( {{D}_{{\alpha }^{\prime }}\gamma }\right) \left( {t}_{0}\right) = \left( {{D}_{\xi }\eta }\right) \left( {\alpha \left( {t}_{0}\right) }\right) \]
|
Proof. Immediate from the chain rule and the local representation of Theorem 3.1.
|
No
|
Theorem 3.3. Let \( \alpha : J \rightarrow X \) be a \( {C}^{2} \) curve in \( X \) . Let \( {t}_{0} \in J \) . Given \( v \in {T}_{\alpha \left( {t}_{0}\right) }X \), there exists a unique lift \( {\gamma }_{v} : J \rightarrow {TX} \) which is \( \alpha \) -parallel and such that \( {\gamma }_{v}\left( {t}_{0}\right) = v \) . Let \( \operatorname{Par}\left( \alpha \right) \) denote the set of \( \alpha \) -parallel lifts of \( \alpha \) . The map \( v \mapsto {\gamma }_{v} \) is a linear isomorphism of \( {T}_{\alpha \left( {t}_{0}\right) }X \) with \( \operatorname{Par}\left( \alpha \right) \) .
|
Proof. The existence and uniqueness come simply from the existence and uniqueness of solutions of differential equations. Note that from the linearity of the equation, the integral curve \( \gamma \) is defined on the whole interval of definition \( J \) by Proposition 1.9 of Chapter IV.
|
Yes
|
Theorem 3.4. Fix \( {t}_{0} \in J \) . For \( t \in J \) define the map\n\n\[ \n{P}_{{t}_{0},\alpha }^{t} = {P}^{t} : {T}_{\alpha \left( {t}_{0}\right) }X \rightarrow {T}_{\alpha \left( t\right) }X\;\text{ by }\;{P}^{t}\left( v\right) = \gamma \left( {t, v}\right) ,\n\]\n\nwhere \( t \mapsto \gamma \left( {t, v}\right) \) is the unique curve in \( {TX} \) which is \( \alpha \) -parallel and \( \gamma \left( {{t}_{0}, v}\right) = v \) . Then \( {P}^{t} \) is a linear isomorphism.
|
Proof. We must verify that\n\n\( {P}^{t}\left( {sv}\right) = s{P}^{t}\left( v\right) \) and \( {P}^{t}\left( {v + w}\right) = {P}^{t}\left( v\right) + {P}^{t}\left( w\right) \; \) for \( s \in \mathbf{R} \) and \( v, w \in {T}_{\alpha \left( {t}_{0}\right) }X.\)\n\nBut these properties follow at once from the linearity of the differential equation satisfied by \( \gamma \), and the uniqueness theorem for its solutions with given initial conditions.\n\nThe map \( {P}^{t} \) is called parallel translation along \( \alpha \) .
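In a chart, an \( \alpha \) -parallel lift solves a linear equation \( {\gamma }^{\prime }\left( t\right) = M\left( t\right) \gamma \left( t\right) \), where \( M\left( t\right) \) packages the bilinear map \( {B}_{U}\left( {\alpha \left( t\right) ;{\alpha }^{\prime }\left( t\right) , \cdot }\right) \) . The linearity of \( {P}^{t} \) is then visible even in a crude discretization, since each integration step is linear in the transported vector. The sketch below (toy \( M\left( t\right) \), Euler steps, illustrative names) is only a numerical illustration of this point.

```python
import numpy as np

def transport(M, v, t0=0.0, t1=1.0, steps=2000):
    """Solve the linear ODE gamma'(t) = M(t) @ gamma(t), gamma(t0) = v,
    by explicit Euler steps; models parallel translation P^t in a chart."""
    h = (t1 - t0) / steps
    g = np.array(v, dtype=float)
    for i in range(steps):
        t = t0 + i * h
        g = g + h * (M(t) @ g)   # each step is linear in g
    return g

M = lambda t: np.array([[0.0, -t], [t, 0.0]])   # a sample time-dependent matrix
v, w = np.array([1.0, 0.0]), np.array([0.0, 2.0])
# each step is linear, so the transport map is linear:
assert np.allclose(transport(M, v + w), transport(M, v) + transport(M, w))
assert np.allclose(transport(M, 3.0 * v), 3.0 * transport(M, v))
```

The chosen \( M\left( t\right) \) happens to be skew-symmetric, which foreshadows Theorem 4.3: for a metric derivative, parallel translation preserves the scalar product.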
|
Yes
|
Proposition 3.5 (Local Expression). Let \( \omega = {\omega }_{U},{\eta }_{j} = {\eta }_{jU} \) etc. represent the respective objects in a chart \( U \), omitting the subscript \( U \) to simplify the notation. Then\n\n\[ \n\left( {{D}_{{\alpha }^{\prime }}\omega }\right) \left( {{\eta }_{1},\ldots ,{\eta }_{r}}\right) = {\omega }^{\prime }\left( {{\eta }_{1},\ldots ,{\eta }_{r}}\right) - B\left( {\alpha ;{\alpha }^{\prime },\omega \left( {{\eta }_{1},\ldots ,{\eta }_{r}}\right) }\right) {\delta }_{E,{TX}} \n\]\n\n\[ \n+ \mathop{\sum }\limits_{{j = 1}}^{r}\omega \left( {{\eta }_{1},\ldots, B\left( {\alpha ;{\alpha }^{\prime },{\eta }_{j}}\right) ,\ldots ,{\eta }_{r}}\right) \n\]\n\nor also\n\n\[ \n{D}_{{\alpha }^{\prime }}\omega = {\omega }^{\prime } - B\left( {\alpha ;{\alpha }^{\prime },\omega }\right) {\delta }_{E,{TX}} + \mathop{\sum }\limits_{{j = 1}}^{r}{C}_{j, B,\alpha }\omega , \n\]\n\nwhere \( {\delta }_{E,{TX}} = 1 \) if \( E = {TX} \) and 0 if \( E = \mathbf{R} \) .
|
This comes from the definition at the end of \( §1 \), and the fact that the ordinary derivative\n\n\[ \n{\left( {\omega }_{U}\left( {\eta }_{1U},\ldots ,{\eta }_{rU}\right) \right) }^{\prime } \n\]\n\nin the chart is obtained by the Leibniz rule (suppressing the index \( U \) )\n\n\[ \n{\left( \omega \left( {\eta }_{1},\ldots ,{\eta }_{r}\right) \right) }^{\prime } = {\omega }^{\prime }\left( {{\eta }_{1},\ldots ,{\eta }_{r}}\right) + \sum \omega \left( {{\eta }_{1},\ldots ,{\eta }_{j}^{\prime },\ldots ,{\eta }_{r}}\right) . \n\]
|
Yes
|
Let \( E = {TX} \) or \( \mathbf{R} \) as above. Let \( \Omega : X \rightarrow {L}^{r}\left( {{TX}, E}\right) \) be a section (so a tensor field), and let \( \omega \left( t\right) = \Omega \left( {\alpha \left( t\right) }\right), t \in J \) . Let \( {t}_{0} \in J \) . Let \( \xi \) be a vector field such that \( {\alpha }^{\prime }\left( {t}_{0}\right) = \xi \left( {\alpha \left( {t}_{0}\right) }\right) \) . Then
|
\[ \left( {{D}_{{\alpha }^{\prime }}\omega }\right) \left( {t}_{0}\right) = \left( {{D}_{\xi }\Omega }\right) \left( {\alpha \left( {t}_{0}\right) }\right) \] Proof. Immediate from the chain rule and the local representation formula.
|
No
|
Theorem 3.8. Let the notation be as in Theorem 3.7. For \( t \in J \) define the map\n\n\[ \n{P}_{{t}_{0},\alpha }^{t} = {P}_{\alpha }^{t} : {L}^{r}\left( {{T}_{\alpha \left( {t}_{0}\right) }X,{E}_{\alpha \left( {t}_{0}\right) }}\right) \rightarrow {L}^{r}\left( {{T}_{\alpha \left( t\right) }X,{E}_{\alpha \left( t\right) }}\right)\n\]\n\nby\n\n\[ \n{P}_{\alpha }^{t}\left( {\omega }_{0}\right) = \gamma \left( {t,{\omega }_{0}}\right)\n\]\n\nwhere \( t \mapsto \gamma \left( {t,{\omega }_{0}}\right) \) is the unique \( \alpha \) -parallel lift of \( \alpha \) with \( \gamma \left( {{t}_{0},{\omega }_{0}}\right) = {\omega }_{0} \) . Then \( {P}_{\alpha }^{t} \) is a linear isomorphism.
|
Proof. This follows at once from the linearity of the differential equation satisfied by \( \gamma \), and the uniqueness theorem for its solutions with given initial conditions.
|
Yes
|
Theorem 4.1. Let \( \left( {X, g}\right) \) be a pseudo Riemannian manifold. There exists a unique covariant derivative \( D \) such that for all vector fields \( \xi ,\eta ,\zeta \) we have\n\nMD 1.\n\[ \n{D}_{\xi }\langle \eta ,\zeta {\rangle }_{g} = {\left\langle {D}_{\xi }\eta ,\zeta \right\rangle }_{g} + {\left\langle \eta ,{D}_{\xi }\zeta \right\rangle }_{g}.\n\]\n\nThis covariant derivative is called the pseudo Riemannian derivative, or metric derivative, or Levi-Civita derivative.
|
Proof. For the uniqueness, we shall express \( {\left\langle {D}_{\xi }\eta ,\zeta \right\rangle }_{g} \) entirely in terms of operations which do not involve the derivative \( D \) . To do this, we write down the first defining property of a connection for a cyclic permutation of the three variables:\n\n\[ \n\xi \langle \eta ,\zeta {\rangle }_{g} = {\left\langle {D}_{\xi }\eta ,\zeta \right\rangle }_{g} + {\left\langle \eta ,{D}_{\xi }\zeta \right\rangle }_{g} \n\]\n\n\[ \n\eta \langle \zeta ,\xi {\rangle }_{g} = {\left\langle {D}_{\eta }\zeta ,\xi \right\rangle }_{g} + {\left\langle \zeta ,{D}_{\eta }\xi \right\rangle }_{g} \n\]\n\n\[ \n\zeta \langle \xi ,\eta {\rangle }_{g} = {\left\langle {D}_{\zeta }\xi ,\eta \right\rangle }_{g} + {\left\langle \xi ,{D}_{\zeta }\eta \right\rangle }_{g} \n\]\n\nWe add the first two relations and subtract the third. Using the second defining property of a covariant derivative, the following property drops out:\n\nMD 2. \( \;2{\left\langle {D}_{\xi }\eta ,\zeta \right\rangle }_{g} = \xi \langle \eta ,\zeta {\rangle }_{g} + \eta \langle \zeta ,\xi {\rangle }_{g} - \zeta \langle \xi ,\eta {\rangle }_{g} \)\n\n\[ \n+ \langle \left\lbrack {\xi ,\eta }\right\rbrack ,\zeta {\rangle }_{g} - \langle \left\lbrack {\xi ,\zeta }\right\rbrack ,\eta {\rangle }_{g} - \langle \left\lbrack {\eta ,\zeta }\right\rbrack ,\xi {\rangle }_{g}.\n\]\n\nThis proves the uniqueness.\n\nAs to existence, define \( {\left\langle {D}_{\xi }\eta ,\zeta \right\rangle }_{g} \) to be \( \frac{1}{2} \) of the right side of MD 2 . If we view \( \xi ,\eta \) as fixed, and \( \zeta \) as variable, then this right side can be checked in a chart to give a continuous linear functional on vector fields. By Proposition 6.1 of Chapter V, such a functional can be represented by a vector, and this vector defines \( {D}_{\xi }\eta \) at each point of the manifold. Thus \( {D}_{\xi }\eta \) is itself a vector field. 
Using the basic property of the bracket product with a function \( \varphi \) :\n\n\[ \n\left\lbrack {\xi ,{\varphi \eta }}\right\rbrack = \varphi \left\lbrack {\xi ,\eta }\right\rbrack + \left( {\xi \varphi }\right) \eta \;\text{ and }\;\left\lbrack {{\varphi \xi },\eta }\right\rbrack = \varphi \left\lbrack {\xi ,\eta }\right\rbrack - \left( {\eta \varphi }\right) \xi \n\]\n\nit is routinely verified that \( {\left\langle {D}_{\xi }\eta ,\zeta \right\rangle }_{g} \) is Fu-linear in its first variable \( \xi \), and also Fu-linear in the third variable \( \zeta \) . One also verifies routinely that COVD 2 is also satisfied, whence existence follows and the theorem is proved.
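For commuting coordinate fields in a chart, the bracket terms of MD 2 vanish and the formula reduces to the classical Christoffel symbols \( {\Gamma }_{ij}^{k} = \frac{1}{2}{g}^{kl}\left( {{\partial }_{i}{g}_{jl} + {\partial }_{j}{g}_{il} - {\partial }_{l}{g}_{ij}}\right) \) . The sketch below (a numerical illustration of my own, not part of the text) computes them by central differences and checks the flat plane in polar coordinates, where \( {\Gamma }_{\theta \theta }^{r} = - r \) and \( {\Gamma }_{r\theta }^{\theta } = 1/r \) .

```python
import numpy as np

def christoffel(g, x, h=1e-6):
    """Christoffel symbols Gamma[k, i, j] from the chart form of MD 2 for
    commuting coordinate fields:
        Gamma^k_ij = (1/2) g^{kl} (d_i g_jl + d_j g_il - d_l g_ij),
    with partial derivatives of the metric taken by central differences."""
    n = len(x)
    dg = np.zeros((n, n, n))                 # dg[l, i, j] = d_l g_ij
    for l in range(n):
        e = np.zeros(n); e[l] = h
        dg[l] = (g(x + e) - g(x - e)) / (2 * h)
    ginv = np.linalg.inv(g(x))
    Gamma = np.zeros((n, n, n))
    for k in range(n):
        for i in range(n):
            for j in range(n):
                Gamma[k, i, j] = 0.5 * sum(
                    ginv[k, l] * (dg[i, j, l] + dg[j, i, l] - dg[l, i, j])
                    for l in range(n))
    return Gamma

# flat metric in polar coordinates (r, theta): g = diag(1, r^2)
g_polar = lambda x: np.diag([1.0, x[0] ** 2])
G = christoffel(g_polar, np.array([2.0, 0.7]))
assert abs(G[0, 1, 1] - (-2.0)) < 1e-5   # Gamma^r_{theta theta} = -r
assert abs(G[1, 0, 1] - 0.5) < 1e-5      # Gamma^theta_{r theta} = 1/r
```

Note the symmetry \( {\Gamma }_{ij}^{k} = {\Gamma }_{ji}^{k} \) in the lower indices, reflecting the symmetry of the spray's bilinear map.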
|
Yes
|
Theorem 4.2. Let \( \left( {X, g}\right) \) be a pseudo Riemannian manifold. There exists a unique spray on \( X \) satisfying the following two equivalent conditions.\n\nMS 1. In a chart \( U \), the associated bilinear map \( {B}_{U} \) satisfies the following formula for all \( v, w, z \in \mathbf{E} \) :\n\n\[ - 2\left\langle {{B}_{U}\left( {x;v, w}\right), g\left( x\right) z}\right\rangle = \left\langle {{g}^{\prime }\left( x\right) \cdot v \cdot z, w}\right\rangle + \left\langle {{g}^{\prime }\left( x\right) \cdot w \cdot z, v}\right\rangle - \left\langle {{g}^{\prime }\left( x\right) \cdot z \cdot w, v}\right\rangle . \]\n\nThus if we let\n\n\[ {f}_{U,2}\left( {x, v}\right) = {B}_{U}\left( {x;v, v}\right) \;\text{ and }\;{f}_{U}\left( {x, v}\right) = \left( {v,{f}_{U,2}\left( {x, v}\right) }\right) ,\]\n\nthen \( {f}_{U} \) represents the spray on \( {TU} = U \times \mathbf{E} \) .\n\nMS 2. The covariant derivative associated to the spray is the metric derivative satisfying Theorem 4.1.\n\nThis spray is the same as the canonical spray of Chapter VII, Theorem 7.1.
|
Proof. First observe that \( {B}_{U} \) as defined by the formula is symmetric in \( \left( {v, w}\right) \) . The symmetry is built into the sum of the first two terms, and to see that the third term is symmetric, one differentiates with respect to \( x \) the formula\n\n\[ \langle g\left( x\right) z, v\rangle = \langle g\left( x\right) v, z\rangle , \]\n\nwhich merely expresses the symmetry of \( g\left( x\right) \) itself. Thus we may form the quadratic map \( {f}_{U,2}\left( {x, v}\right) = {B}_{U}\left( {x;v, v}\right) \) from the symmetric bilinear map \( {B}_{U}\left( {x;v, w}\right) \) . It follows that \( {f}_{U} \) as defined represents a spray \( {F}_{U} \) over \( {TU} \) . At this point, one may argue in two ways to globalize.\n\nComparing MD 3 with MS 1 we see that the covariant derivative on \( U \) determined by the spray \( {F}_{U} \) is precisely the metric derivative. Theorem 2.1 shows that if two sprays determine the same covariant derivative on \( U \), then they are equal. If \( U, V \) are two charts, then \( {f}_{U} \) and \( {f}_{V} \) are the local representatives of sprays \( {F}_{U} \) and \( {F}_{V} \) on \( U \) and \( V \) respectively, which must therefore coincide on \( U \cap V \) . Hence the family \( \left\{ {F}_{U}\right\} \) defines a spray \( F \) on \( X \) . 
Once again, Theorem 2.1 and MD 3 show that the covariant derivative determined by \( F \) is the metric derivative.\n\nFurthermore, if we substitute \( v = w \) (and \( z = {w}_{1} \) ) in the chart formula of MS 1, thus giving the quadratic expression \( {f}_{U,2}\left( {x, v}\right) \), then one sees that this expression coincides with the chart expression of Proposition 7.2 of Chapter VII, and hence that the spray obtained in a natural way from the metric derivative is equal to the canonical spray of Chapter VII, Theorem 7.1.\n\nAnother possibility is to admit Theorems 7.1 and 7.2 of Chapter VII, which already proved the existence of a spray whose quadratic map \( {f}_{U,2} \) is obtained from the symmetric bilinear map \( {B}_{U} \) as defined in MS 1. This gives immediately the existence of a unique spray on \( X \) having the representation of MS 1 in a chart \( U \), and this spray is the canonical spray. That MS 2 is equivalent to MS 1 then follows from MD 3. This concludes the proof.
|
Yes
|
Theorem 4.3. Let \( \alpha : J \rightarrow X \) be a \( {C}^{2} \) curve in a Riemannian manifold \( \left( {X, g}\right) \) . For the metric derivative, and curves \( \gamma ,\zeta \in \operatorname{Lift}\left( {\alpha ,{TX}}\right) \), we have the formula\n\n\[ \langle \gamma ,\zeta {\rangle }_{g}^{\prime } = {\left\langle {D}_{{\alpha }^{\prime }}\gamma ,\zeta \right\rangle }_{g} + {\left\langle \gamma ,{D}_{{\alpha }^{\prime }}\zeta \right\rangle }_{g}. \]\n\nFurthermore, parallel translation is a metric isomorphism. In particular, let \( {t}_{0} \in J \) . If \( {\gamma }_{v},{\gamma }_{w} \) are the unique \( \alpha \) -parallel lifts of \( \alpha \) with \( {\gamma }_{v}\left( {t}_{0}\right) = v \) and \( {\gamma }_{w}\left( {t}_{0}\right) = w \), then for all \( t \) ,\n\n\[ {\left\langle {\gamma }_{v}\left( t\right) ,{\gamma }_{w}\left( t\right) \right\rangle }_{g} = \langle v, w{\rangle }_{g} \]
|
Proof. The formula is proved in the same way that the computation proving Theorem 3.1 was parallel to the computation proving Theorem 2.1 (giving the behavior under changes of charts). From the formula, if \( {D}_{{\alpha }^{\prime }}\gamma = {D}_{{\alpha }^{\prime }}\zeta = 0 \), it follows that \( \langle \gamma ,\zeta {\rangle }_{g} \) is constant, whence the second assertion follows.
|
Yes
|
Corollary 4.4. Let \( \varphi \) be a \( {C}^{2} \) function on \( X \) . Let \( \alpha \) be a geodesic for the metric spray. Then\n\n\[ \n{\left( \varphi \circ \alpha \right) }^{\prime \prime } = {\left\langle {D}_{{\alpha }^{\prime }}\left( \operatorname{grad}\varphi \right) \circ \alpha ,{\alpha }^{\prime }\right\rangle }_{g}.\n\]
|
Proof. Taking the first derivative of \( \varphi \circ \alpha \) yields\n\n\[ \n{\left( \varphi \circ \alpha \right) }^{\prime }\left( t\right) = \left( {d\varphi }\right) \left( {\alpha \left( t\right) }\right) {\alpha }^{\prime }\left( t\right) = {\left\langle \left( \operatorname{grad}\varphi \right) \left( \alpha \left( t\right) \right) ,{\alpha }^{\prime }\left( t\right) \right\rangle }_{g}.\n\]\n\nNow take the next derivative using Theorem 4.3 and the fact that \( {D}_{{\alpha }^{\prime }}{\alpha }^{\prime } = 0 \) . The desired formula drops out.
|
No
|
Proposition 5.1. The map \( G \) is a local isomorphism at \( \left( {{x}_{0},0}\right) \) .
|
Proof. The Jacobian matrix of \( G \) in a chart is given immediately from Chapter IV, Theorem 4.1 by\n\n\[ \left( \begin{matrix} \mathrm{{id}} & \mathrm{{id}} \\ 0 & \mathrm{{id}} \end{matrix}\right) \]\n\nwhich is invertible. The inverse mapping theorem concludes the proof.
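In a finite-dimensional chart the invertibility can be checked concretely: the Jacobian is unipotent upper triangular, with explicit inverse having \( -\mathrm{id} \) in the upper right corner. A small numerical sketch (the dimension \( n = 3 \) is a hypothetical choice, not from the text):

```python
import numpy as np

n = 3  # hypothetical chart dimension
I = np.eye(n)
Z = np.zeros((n, n))

# Jacobian of G in a chart: blocks [[id, id], [0, id]]
J = np.block([[I, I], [Z, I]])

# Candidate inverse: [[id, -id], [0, id]]
J_inv = np.block([[I, -I], [Z, I]])

# J is invertible with the explicit inverse above; being unipotent, det J = 1.
assert np.allclose(J @ J_inv, np.eye(2 * n))
assert np.isclose(np.linalg.det(J), 1.0)
```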
|
Yes
|
Corollary 5.2. Let \( {x}_{0} \in X \) . Let \( V \) be an open neighborhood of \( \left( {{x}_{0},0}\right) \) in \( {TX} \) such that \( G \) induces an isomorphism of \( V \) with its image, and in a chart, for some \( \epsilon > 0 \) ,\n\n\[ V = {U}_{0} \times \mathbf{E}\left( \epsilon \right) \]\n\nLet \( W \) be a neighborhood of \( {x}_{0} \) in \( X \) such that \( G\left( V\right) \supset W \times W \) . Then:\n\n(1) Any two points \( x, y \in W \) are joined by a unique geodesic in \( X \) lying in \( {U}_{0} \), and this geodesic depends \( {C}^{\infty } \) on the pair \( \left( {x, y}\right) \) . In other words, if \( t \mapsto {\exp }_{x}\left( {tv}\right) \left( {0 \leqq t \leqq 1}\right) \) is the geodesic joining \( x \) and \( y \) , with \( y = {\exp }_{x}\left( v\right) \), then the correspondence\n\n\[ \left( {x, v}\right) \leftrightarrow \left( {x, y}\right) \]\n\nis \( {C}^{\infty } \) .\n\n(2) For each \( x \in W \) the exponential \( {\exp }_{x} \) maps the open set in \( {T}_{x}X \) represented by \( \left( {x,\mathbf{E}\left( \epsilon \right) }\right) \) isomorphically onto an open set \( U\left( x\right) \) containing \( W \) .
|
The properties are merely an application of the definitions and Proposition 5.1.
|
No
|
Lemma 5.3. We have the rules on lifts of \( \sigma \) to \( {TX} \) : (a) \( {D}_{1}{\partial }_{2} = {D}_{2}{\partial }_{1} \) ; and (b) \( {\partial }_{2}{\left\langle {\partial }_{1}\sigma ,{\partial }_{1}\sigma \right\rangle }_{g} = 2{\left\langle {D}_{1}{\partial }_{2}\sigma ,{\partial }_{1}\sigma \right\rangle }_{g} \) .
|
Proof. Let \( {\sigma }_{U} \) represent \( \sigma \) in a chart. Then from Theorem 3.1, \[ {D}_{1}{\partial }_{2}{\sigma }_{U} = {\partial }_{1}{\partial }_{2}{\sigma }_{U} - {B}_{U}\left( {{\sigma }_{U};{\partial }_{1}{\sigma }_{U},{\partial }_{2}{\sigma }_{U}}\right) . \] Since \( {B}_{U} \) is symmetric in the last two arguments, this proves (a). As to (b), we use the metric derivative to yield \[ {\partial }_{2}{\left\langle {\partial }_{1}\sigma ,{\partial }_{1}\sigma \right\rangle }_{g} = 2{\left\langle {D}_{2}{\partial }_{1}\sigma ,{\partial }_{1}\sigma \right\rangle }_{g} \] and we use (a) to permute the partial derivatives on the right, to conclude the proof of (b), and therefore the proof of the lemma.
|
Yes
|
Theorem 5.4. Let \( t \mapsto u\left( t\right) \) be a curve in \( {\mathbf{S}}_{g}\left( 1\right) \) . Let \( 0 \leqq r \leqq b \) where \( b \) is such that the points \( {ru}\left( t\right) \) are in the domain of the exponential \( {\exp }_{x} \) .\n\nDefine\n\n\[\n\sigma \left( {r, t}\right) = {\exp }_{x}\left( {{ru}\left( t\right) }\right) \;\text{ for }\;0 \leqq r \leqq b.\n\]\n\nThen\n\n\[\n{\left\langle {\partial }_{1}\sigma ,{\partial }_{1}\sigma \right\rangle }_{g} = \langle u, u{\rangle }_{g} = 1\n\]
|
Proof. This is immediate since parallel translation is an isometry by Theorem 4.3.
|
No
|
Corollary 5.5. Assume \( \left( {X, g}\right) \) Riemannian. Let \( v \in {T}_{x}X \) . Suppose \( \parallel v{\parallel }_{g} = r \), with \( r > 0 \) . Also suppose the segment \( \{ {tv}\} \left( {0 \leqq t \leqq 1}\right) \) is contained in the domain of the exponential. Let \( \alpha \left( t\right) = {\exp }_{x}\left( {tv}\right) \) . Then \( L\left( \alpha \right) = r \) .
|
Proof. Special case of the length formula in Theorem 5.4, followed by an integration to get the length.
|
No
|
Lemma 5.6. Let \( X \) be pseudo Riemannian. Let \( \sigma : {J}_{1} \times {J}_{2} \rightarrow X \) be a \( {C}^{2} \) map. For each \( t \in {J}_{2} \) let \( {\alpha }_{t}\left( s\right) = \sigma \left( {s, t}\right) \) . Assume that each \( {\alpha }_{t} \) is a geodesic, and that \( {\alpha }_{t}^{\prime 2} \) is independent of \( t \) . Then for each \( t \in {J}_{2} \), the map \( s \mapsto {\left\langle {\partial }_{1}\sigma ,{\partial }_{2}\sigma \right\rangle }_{g}\left( {s, t}\right) \) is constant.
|
Proof. Let \( D \) be the metric derivative. Then \( {D}_{1}{\partial }_{1}\sigma = 0 \) because for a geodesic \( \alpha \), we know that the metric derivative has the property that \( {D}_{{\alpha }^{\prime }}{\alpha }^{\prime } = 0 \) . Thus we get\n\n\[ \n{\partial }_{1}{\left\langle {\partial }_{1}\sigma ,{\partial }_{2}\sigma \right\rangle }_{g} = {\left\langle {D}_{1}{\partial }_{1}\sigma ,{\partial }_{2}\sigma \right\rangle }_{g} + {\left\langle {\partial }_{1}\sigma ,{D}_{1}{\partial }_{2}\sigma \right\rangle }_{g} \n\]\n\n\[ \n= \frac{1}{2}{\partial }_{2}{\left\langle {\partial }_{1}\sigma ,{\partial }_{1}\sigma \right\rangle }_{g}\;\text{by the above and Lemma 5.3} \n\]\n\n\[ \n= 0\;\text{by hypothesis.} \n\]\n\nThis concludes the proof.
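The lemma can be checked symbolically in the flat model, where geodesics are straight lines and the metric derivative is the ordinary derivative; the base curve \( p(t) \) and the unit velocities \( v(t) \) below are hypothetical choices for illustration:

```python
import sympy as sp

s, t = sp.symbols('s t')

# Flat sketch of the lemma: geodesics are straight lines,
# sigma(s, t) = p(t) + s*v(t), with |v(t)| = 1 so alpha_t'^2 is independent of t.
p = sp.Matrix([t, t**2])                  # hypothetical curve of base points
v = sp.Matrix([sp.cos(t), sp.sin(t)])     # unit initial velocities
sigma = p + s * v

d1 = sigma.diff(s)   # partial_1 sigma = v(t), the geodesic velocity
d2 = sigma.diff(t)   # partial_2 sigma

# The lemma asserts s |-> <d1, d2> is constant, i.e. its s-derivative vanishes.
assert sp.simplify(d1.dot(d2).diff(s)) == 0
```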
|
Yes
|
Theorem 5.7. Let \( \left( {X, g}\right) \) be pseudo Riemannian. Let \( {x}_{0} \in X \) and let \( W \) be a small open neighborhood of \( {x}_{0} \), selected as in Corollary 5.2, with \( \epsilon \) sufficiently small. Let \( x \in W \) . Then the geodesics through \( x \) are orthogonal to the image of \( {\mathbf{S}}_{g}\left( c\right) \) under \( {\exp }_{x} \), for \( c \) sufficiently small positive.
|
Proof. For \( \epsilon \) sufficiently small positive, the exponential map is defined on \( {\mathbf{S}}_{g}\left( r\right) \) for \( 0 < r \leqq \epsilon \), and as we have seen, the images of the level sets \( {\mathbf{S}}_{g}\left( r\right) \) under \( {\exp }_{x} \) are submanifolds of \( X \) . Then our assertion amounts to proving that for every curve \( u : J \rightarrow {\mathbf{S}}_{g}\left( 1\right) \) and \( 0 < r < c \), if we define\n\n\[ \n\sigma \left( {r, t}\right) = {\exp }_{x}\left( {{ru}\left( t\right) }\right) \n\]\n\nthen the two curves\n\n\[ \nt \mapsto {\exp }_{x}\left( {{r}_{0}u\left( t\right) }\right) \;\text{ and }\;r \mapsto {\exp }_{x}\left( {{ru}\left( {t}_{0}\right) }\right) \n\]\n\nare orthogonal for any given value \( \left( {{r}_{0},{t}_{0}}\right) \), which amounts to proving that\n\n\[ \n{\left\langle {\partial }_{1}\sigma ,{\partial }_{2}\sigma \right\rangle }_{g} = 0. \n\]\n\nBut for \( r = 0 \), we have \( \sigma \left( {0, t}\right) = {\exp }_{x}\left( 0\right) = x \), independent of \( t \) . Hence \( {\partial }_{2}\sigma \left( {0, t}\right) = 0 \) . We can apply Lemma 5.6 to conclude the proof.
|
Yes
|
Lemma 5.9. Given \( x \in X \), there exists \( c > 0 \) such that if \( r < c \), and if \( \alpha \) is a geodesic in \( X \), tangent to \( {S}_{g}\left( {x, r}\right) \) at \( y = \alpha \left( {t}_{0}\right) \), then \( \alpha \left( t\right) \) lies outside \( {S}_{g}\left( {x, r}\right) \) for \( t \neq {t}_{0} \) in some neighborhood of \( {t}_{0} \) .
|
Proof. We pick \( c \) such that the exponential map \( {\exp }_{x} \) is a differential isomorphism on \( {\mathbf{B}}_{g}\left( {{0}_{x}, r}\right) \) for all \( r < c \) and preserves distances on rays from \( {0}_{x} \) to \( v \in {T}_{x}X \) with \( \parallel v{\parallel }_{g} = r \) . Without loss of generality, we can suppose \( {t}_{0} = 0 \), so \( \alpha \left( 0\right) = y \) . We shall view \( y \) as variable, so we index \( \alpha \) by \( y \) . Also we have to look at the other initial condition \( {\alpha }^{\prime }\left( 0\right) = u \in {T}_{y}X \), so we write \( {\alpha }_{y, u} \) for the geodesic. Now let\n\n\[ \n{\eta }_{y, u}\left( t\right) = {\exp }_{x}^{-1}{\alpha }_{y, u}\left( t\right) \;\text{ and }\;{f}_{y, u}\left( t\right) = {\eta }_{y, u}{\left( t\right) }^{2}.\n\]\n\nThen \( {\eta }_{y, u} \) is a curve in the fixed Hilbert space \( {T}_{x}X \), so\n\n\[ \n{f}_{y, u}^{\prime }\left( t\right) = 2{\left\langle {\eta }_{y, u}^{\prime }\left( t\right) ,{\eta }_{y, u}\left( t\right) \right\rangle }_{g\left( x\right) },\n\]\n\n\[ \n{f}_{y, u}^{\prime \prime }\left( t\right) = 2{\eta }_{y, u}^{\prime }{\left( t\right) }^{2} + 2{\left\langle {\eta }_{y, u}^{\prime \prime }\left( t\right) ,{\eta }_{y, u}\left( t\right) \right\rangle }_{g\left( x\right) }.\n\]\n\nLet \( h\left( {y, u}\right) = {f}_{y, u}^{\prime \prime }\left( 0\right) \) . Then \( h\left( {x, u}\right) = 2{u}^{2} \), so \( {h}_{x} \) as a function on \( {T}_{x}X \) is positive definite. Therefore there exists \( c > 0 \) such that for \( 0 < r < c \) and \( \parallel {\exp }_{x}^{-1}\left( y\right) {\parallel }_{g} = r \) the function \( {h}_{y} \) is positive definite on \( {T}_{y}X \), and in particular \( h\left( {y, u}\right) > 0 \) for \( {u}^{2} \neq 0 \) . 
Under the assumption that \( {\alpha }_{y, u} \) is tangent to \( {S}_{g}\left( {x, r}\right) \) at \( y \), we must have\n\n\[ \n{f}_{y, u}^{\prime }\left( 0\right) = 0\;\text{ and }\;{f}_{y, u}^{\prime \prime }\left( 0\right) = h\left( {y, u}\right) > 0,\n\]\n\nwhence for sufficiently small \( \left| t\right| \) with \( t \neq 0 \), we get\n\n\[ \n{f}_{y, u}\left( t\right) > {f}_{y, u}\left( 0\right) = {\left( {\exp }_{x}^{-1}{\alpha }_{y, u}\left( 0\right) \right) }^{2} = {\left( {\exp }_{x}^{-1}\left( y\right) \right) }^{2} = {r}^{2},\n\]\n\nwhich proves the lemma.
|
Yes
|
Lemma 6.1. For a piecewise \( {C}^{1} \) curve \( \gamma : \left\lbrack {a, b}\right\rbrack \rightarrow U\left( x\right) - \{ x\} \) as above, we have the inequality\n\n\[ L\left( \gamma \right) \geqq \left| {r\left( b\right) - r\left( a\right) }\right| . \]\n\nEquality holds only if the function \( t \mapsto r\left( t\right) \) is monotone and the map \( t \mapsto u\left( t\right) \) is constant.
|
Proof. Let \( \sigma \left( {r, t}\right) = {\exp }_{x}\left( {{ru}\left( t\right) }\right) \) . Then \( \gamma \left( t\right) = \sigma \left( {r\left( t\right), t}\right) \) . We have\n\n\[ {\gamma }^{\prime }\left( t\right) = \frac{d\gamma }{dt} = \frac{\partial \sigma }{\partial r}{r}^{\prime }\left( t\right) + \frac{\partial \sigma }{\partial t}. \]\n\nBy the Gauss lemma (Theorem 5.7), we know that \( \partial \sigma /\partial r \) and \( \partial \sigma /\partial t \) are orthogonal. Since \( \parallel \partial \sigma /\partial r{\parallel }_{g} = 1 \) by Theorem 5.4, it follows that\n\n\[ {\begin{Vmatrix}{\gamma }^{\prime }\left( t\right) \end{Vmatrix}}_{g}^{2} = {\left| {r}^{\prime }\left( t\right) \right| }^{2} + {\begin{Vmatrix}\frac{\partial \sigma }{\partial t}\end{Vmatrix}}_{g}^{2} \geqq {\left| {r}^{\prime }\left( t\right) \right| }^{2}, \]\n\nwith equality holding only if \( \partial \sigma /\partial t = 0 \), or equivalently, \( {du}/{dt} = 0 \) . Hence\n\n\[ L\left( \gamma \right) = {\int }_{a}^{b}{\begin{Vmatrix}{\gamma }^{\prime }\left( t\right) \end{Vmatrix}}_{g}{dt} \geqq {\int }_{a}^{b}\left| {{r}^{\prime }\left( t\right) }\right| {dt} \geqq \left| {r\left( b\right) - r\left( a\right) }\right| ; \]\n\nand equality holds only if \( t \mapsto r\left( t\right) \) is monotone and \( t \mapsto u\left( t\right) \) is constant. This completes the proof.
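A flat-plane instance of the inequality can be computed directly: take \( X = \mathbf{R}^2 \), \( x = 0 \), so \( \exp_x \) is the identity, and write \( \gamma(t) = r(t)u(t) \) in polar form. The radial function \( r(t) = 1 + t/2 \) and the window \( [0,2] \) are hypothetical choices for illustration:

```python
import numpy as np

# Flat sketch: X = R^2, x = 0, exp_x = identity, u(t) = (cos t, sin t).
# Orthogonality of the polar parts (the Gauss lemma, trivial here) gives
# |gamma'(t)|^2 = r'(t)^2 + r(t)^2.
t = np.linspace(0.0, 2.0, 20001)
r = 1.0 + 0.5 * t                 # hypothetical radial component, r' = 0.5
speed = np.sqrt(0.5**2 + r**2)    # |gamma'(t)| in polar form
L = np.sum((speed[1:] + speed[:-1]) / 2 * np.diff(t))  # trapezoidal length

# L(gamma) >= |r(b) - r(a)|; strict here, since u(t) is not constant.
assert L > abs(r[-1] - r[0])
```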
|
Yes
|
Theorem 6.2. Let \( \left( {V, W}\right) \) constitute a normal neighborhood of a point \( {x}_{0} \in X \) . Let \( \alpha : \left\lbrack {0,1}\right\rbrack \rightarrow V \) be the geodesic (up to reparametrization) in \( V \) joining two points of \( W \) (namely \( \alpha \left( 0\right) \) and \( \alpha \left( 1\right) \) ). Let \( \gamma : \left\lbrack {0,1}\right\rbrack \rightarrow X \) be any other piecewise \( {C}^{1} \) path in \( X \) joining these two points. Then\n\n\[ L\left( \alpha \right) \leqq L\left( \gamma \right) . \]\n\nIf equality holds, then the polar component \( t \mapsto u\left( t\right) \) for \( \gamma \) is constant, the function \( t \mapsto r\left( t\right) \) is monotone, and a reparametrization of \( \gamma \) is equal to \( \alpha \) .
|
Proof. Let \( x, y \in W \) and let \( y = {\exp }_{x}\left( {ru}\right) \) with \( 0 < r < \epsilon \), and \( \parallel u{\parallel }_{g} = 1 \) . Then for \( 0 < \delta < r \) the path \( \gamma \) contains a segment joining the shell \( {\mathrm{{Sh}}}_{g}\left( {x,\delta }\right) \) with the shell \( {\mathrm{{Sh}}}_{g}\left( {x, r}\right) \) and lying between the two shells. By Lemma 6.1, the length of this segment is \( \geqq r - \delta \) . Letting \( \delta \) tend to 0 shows that \( L\left( \gamma \right) \geqq r \) . The same lemma proves the conditions on the polar functions as asserted.
|
Yes
|
Corollary 6.3. Let \( \alpha : \left\lbrack {0,1}\right\rbrack \rightarrow X \) be a piecewise \( {C}^{1} \) path, parametrized by arc length. If \( L\left( \alpha \right) \leqq L\left( \gamma \right) \) for all paths \( \gamma \) from \( \alpha \left( 0\right) \) to \( \alpha \left( 1\right) \) in \( X \), then \( \alpha \) is a geodesic.
|
Proof. We can find a partition of \( \left\lbrack {0,1}\right\rbrack \) such that the image under \( \alpha \) of each small interval in the partition is contained in some neighborhood \( W \) as in the theorem, and its length is small so the image of the segment is contained in a normal neighborhood. By Theorem 6.2, the path restricted to this segment must be a geodesic. Hence the entire path is a geodesic, as was to be shown.
|
No
|
Theorem 6.4. Let \( \left( {X, g}\right) \) be a Riemannian manifold and let \( x \in X \) . There exists \( c > 0 \) such that for all \( r < c \) the map \( {\exp }_{x} \) is defined on \( {\mathbf{B}}_{g}\left( {{0}_{x}, c}\right) \), gives a differential isomorphism\n\n\[ \n{\exp }_{x} : {\mathbf{B}}_{g}\left( {{0}_{x}, r}\right) \rightarrow {B}_{g}\left( {x, r}\right) \;\text{ for all }r\text{ with }\;0 < r < c, \n\] \n\nand also a differential isomorphism\n\n\[ \n{\exp }_{x} : {\mathbf{S}}_{g}\left( {{0}_{x}, r}\right) \rightarrow {S}_{g}\left( {x, r}\right) \;\text{ for }0 < r < c. \n\]
|
Proof. Immediate from Corollary 5.5 and Theorem 6.2.
|
No
|
Proposition 6.5. Each condition implies the next, i.e.\n\n\(\text{COM 1} \Rightarrow \text{COM 2} \Rightarrow \text{COM 3} \Rightarrow \text{COM 4}.\)
|
Proof. Assume COM 1. Let \( \alpha : J \rightarrow X \) be a geodesic parametrized by arc length on some interval, and take \( J \) to be maximal in \( \mathbf{R} \) . By the existence and uniqueness theorem for differential equations, \( J \) is open in \( \mathbf{R} \), and it will suffice to prove that \( J \) is closed, or in other words, that \( J \) contains its end points. For \( {t}_{1},{t}_{2} \in J \) we have\n\n\[ \operatorname{dist}\left( {\alpha \left( {t}_{1}\right) ,\alpha \left( {t}_{2}\right) }\right) \leqq \left| {{t}_{2} - {t}_{1}}\right| . \]\n\nSuppose for instance that \( J \) is bounded above, and let \( \left\{ {t}_{n}\right\} \) be a sequence in \( J \) converging to the right end point of \( J \) . Then the sequence \( \left\{ {\alpha \left( {t}_{n}\right) }\right\} \) is Cauchy by the above inequality, so \( \left\{ {\alpha \left( {t}_{n}\right) }\right\} \) converges to a point \( {x}_{0} \) by COM 1. Then for all \( n \) sufficiently large, \( \alpha \left( {t}_{n}\right) \) lies in a small normal neighborhood of \( {x}_{0} \), and there is some \( \epsilon > 0 \), independent of \( n \), such that the geodesic can be extended to an interval of length at least \( \epsilon \) beyond \( {t}_{n} \), thus contradicting the maximality of \( J \), and proving COM 2. The subsequent implications are trivial, so the proposition is proved.
|
Yes
|
Theorem 6.6 (Hopf-Rinow). Assume that \( \left( {X, g}\right) \) is finite dimensional, connected, and geodesically complete at a point \( p \), that is, \( {\exp }_{p} \) is defined on \( {T}_{p}X \) . Then any point in \( X \) can be joined to \( p \) by a minimal geodesic.
|
Proof. I follow here the variation of the proof given in [Mi 63]. Let \( y \) be a point with \( p \neq y \) . Let \( W \) be a normal neighborhood of \( p \) containing the image of a small ball under the exponential map \( {\exp }_{p} \) . Let \( r = \operatorname{dist}\left( {p, y}\right) \), and let \( \delta \) be small, with \( 0 < \delta < r \) . Then the shell \( {\operatorname{Sh}}_{g}\left( {p,\delta }\right) = \operatorname{Sh}\left( {p,\delta }\right) \) is contained in \( W \) . Since \( \operatorname{Sh}\left( {p,\delta }\right) \) is the image of the sphere of radius \( \delta \) in \( {T}_{p}X \), it follows that \( \operatorname{Sh}\left( {p,\delta }\right) \) is compact. Hence there exists a point \( {x}_{0} \) on \( \operatorname{Sh}\left( {p,\delta }\right) \) which is at minimal \( g \) -distance from \( y \), that is\n\n\[ \operatorname{dist}\left( {{x}_{0}, y}\right) \leqq \operatorname{dist}\left( {x, y}\right) \;\text{ for all }\;x \in \operatorname{Sh}\left( {p,\delta }\right) .\n\]\n\nWe can write \( {x}_{0} = {\exp }_{p}\left( {\delta u}\right) \) for some \( u \in {T}_{p}X \) with \( \parallel u{\parallel }_{g} = 1 \) . Let \( \alpha \left( t\right) = {\exp }_{p}\left( {tu}\right) \) . We shall prove that \( {\exp }_{p}\left( {ru}\right) = y \) . We prove this by
|
No
|
Corollary 6.7. In the finite dimensional case the four completeness conditions COM 1 through COM 4 are equivalent to a fifth:\n\nCOM 5. A closed \( {\operatorname{dist}}_{g} \) -bounded subset of \( X \) is compact.
|
Proof. Assume COM 4 with \( {\exp }_{{x}_{0}} \) defined on \( {T}_{{x}_{0}}X \) . Let \( S \) be closed and bounded in \( X \) . Without loss of generality, we may assume \( {x}_{0} \in S \) . Let \( b \) be a bound for the diameter of \( S \) . Then by Theorem 6.6 (Hopf-Rinow), every point of \( S \) can be joined to \( {x}_{0} \) by a geodesic of length \( \leqq b \), so \( S \) is contained in the image under \( {\exp }_{{x}_{0}} \) of the closed ball of radius \( b \) in \( {T}_{{x}_{0}}X \) , so \( S \) is compact, thus proving COM 5.\n\nAssume COM 5. Let \( \left\{ {x}_{n}\right\} \) be a Cauchy sequence in \( X \) . Then \( \left\{ {x}_{n}\right\} \) lies in a bounded set, whose closure is compact by assumption, so \( \left\{ {x}_{n}\right\} \) has a point of accumulation which is actually a limit in \( X \) . This proves COM 1, and concludes the proof of the corollary.
|
Yes
|
Lemma 6.8. Let \( f : Y \rightarrow X \) be a \( {C}^{1} \) map between Riemannian manifolds \( \left( {Y, h}\right) \) and \( \left( {X, g}\right) \) . Assume that there is a constant \( C > 0 \) such that for all \( y \in Y \) and \( w \in {T}_{y}Y \) we have\n\n\[ \parallel {Tf}\left( y\right) w{\parallel }_{g} \geqq C\parallel w{\parallel }_{h}. \]\n\nIf \( \gamma : \left\lbrack {a, b}\right\rbrack \rightarrow Y \) is a piecewise \( {C}^{1} \) path in \( Y \), then\n\n\[ L\left( {f \circ \gamma }\right) \geqq {CL}\left( \gamma \right) . \]
|
Proof. We have\n\n\[ {L}_{g}\left( {f \circ \gamma }\right) = {\int }_{a}^{b}{\begin{Vmatrix}{\left( f \circ \gamma \right) }^{\prime }\left( t\right) \end{Vmatrix}}_{g}{dt} = {\int }_{a}^{b}{\begin{Vmatrix}Tf\left( \gamma \left( t\right) \right) {\gamma }^{\prime }\left( t\right) \end{Vmatrix}}_{g}{dt} \]\n\n\[ \geqq {\int }_{a}^{b}C{\begin{Vmatrix}{\gamma }^{\prime }\left( t\right) \end{Vmatrix}}_{h}{dt} \]\n\n\[ = C{L}_{h}\left( \gamma \right) \]\n\nas was to be shown.
|
Yes
|
Proposition 1.1. There exists a unique tensor field \( R \), section of \( {L}^{3}\left( {{TX},{TX}}\right) \), i.e. arising from the functor \( \mathbf{E} \mapsto {L}^{3}\left( {\mathbf{E},\mathbf{E}}\right) \) (continuous trilinear maps of \( \mathbf{E} \) into itself) such that for all vector fields \( \xi ,\eta ,\zeta \) we have\n\n\[ R\left( {\xi ,\eta ,\zeta }\right) = {D}_{\xi }{D}_{\eta }\zeta - {D}_{\eta }{D}_{\xi }\zeta - {D}_{\left\lbrack \xi ,\eta \right\rbrack }\zeta \]
|
Proof. The expression on the right-hand side gives a well-defined vector field on \( X \) . To show that this association comes from a tensor field, we can compute in a chart. To do this, we use the local expression for the covariant derivative given in Theorem 2.1 of Chapter VIII. So for the rest of the argument, \( \xi ,\eta ,\zeta \) stand for \( {\xi }_{U},{\eta }_{U},{\zeta }_{U} \) in a chart \( U \) . Then, for example, we have\n\n(1)\n\n\[ {D}_{\eta }\zeta = {\zeta }^{\prime } \cdot \eta - B\left( {\eta ,\zeta }\right) \]\n\nWe determine \( {D}_{\xi }\left( {{D}_{\eta }\zeta }\right) \) by substitution in this formula. As a first step, we have to write down the derivative\n\n\[ {\left( {D}_{\eta }\zeta \right) }^{\prime } \cdot \xi = {\zeta }^{\prime \prime } \cdot \xi \cdot \eta + {\zeta }^{\prime } \cdot {\eta }^{\prime } \cdot \xi - B\left( {{\eta }^{\prime }\xi ,\zeta }\right) - B\left( {\eta ,{\zeta }^{\prime } \cdot \xi }\right) - \left( {{B}^{\prime } \cdot \xi }\right) \left( {\eta ,\zeta }\right) . \]\n\nThen it follows that\n\n\[ {D}_{\xi }\left( {{D}_{\eta }\zeta }\right) = {\zeta }^{\prime \prime } \cdot \xi \cdot \eta + {\zeta }^{\prime } \cdot {\eta }^{\prime } \cdot \xi - B\left( {{\eta }^{\prime } \cdot \xi ,\zeta }\right) - B\left( {\eta ,{\zeta }^{\prime } \cdot \xi }\right) - B\left( {{\zeta }^{\prime } \cdot \eta ,\xi }\right) \]\n\n\[ - \left( {{B}^{\prime } \cdot \xi }\right) \left( {\eta ,\zeta }\right) + B\left( {B\left( {\eta ,\zeta }\right) ,\xi }\right) . \]\n\nPermuting \( \xi \) and \( \eta \) gives us the second term. Using the local expression for the bracket\n\n\[ \left\lbrack {\xi ,\eta }\right\rbrack = {\eta }^{\prime } \cdot \xi - {\xi }^{\prime } \cdot \eta \]\n\nas well as (1) will give us the third term. The reader will then verify that all the expressions containing a derivative cancel, leaving only trilinear expressions involving \( \xi ,\eta \), and \( \zeta \) . This proves Proposition 1.1.
|
Yes
|
\[ R\left( {v, w}\right) = - R\left( {w, v}\right) \text{ (skew-symmetry). } \]
|
Proof. The first relation is obvious from the definition.
|
No
|
Proposition 1.4. On a pseudo Riemannian manifold, the Riemann tensor satisfies all the above four properties. Furthermore, RIEM 4 follows from RIEM 1, 2, 3.
|
Proof. Properties RIEM 1 and RIEM 3 have been proved in Proposition 1.3. Property RIEM 2 amounts to proving that \( R\left( {v, w, z, z}\right) = 0 \) for all \( v, w, z \) ; or in terms of vector fields, \( R\left( {\xi ,\eta ,\zeta ,\zeta }\right) = 0 \) . We will need to differentiate. Since all the terms with derivatives vanish in the local formula of Proposition 1.2, we may assume without loss of generality that \( \left\lbrack {\xi ,\eta }\right\rbrack = 0 \) . Then\n\n\[ \langle R\left( {\xi ,\eta }\right) \zeta ,\zeta {\rangle }_{g} = {\left\langle {D}_{\xi }{D}_{\eta }\zeta - {D}_{\eta }{D}_{\xi }\zeta ,\zeta \right\rangle }_{g}, \]\n\nand we must show that the right side is symmetric in \( \xi ,\eta \) . But \( \left\lbrack {\xi ,\eta }\right\rbrack = 0 \) implies that\n\n\[ {\mathcal{L}}_{\xi }{\mathcal{L}}_{\eta }\langle \zeta ,\zeta {\rangle }_{g} \]\n\nis symmetric in \( \xi ,\eta \) . Since we are dealing with the metric covariant derivative, it follows that\n\n\[ {\mathcal{L}}_{\eta }\langle \zeta ,\zeta {\rangle }_{g} = 2{\left\langle {D}_{\eta }\zeta ,\zeta \right\rangle }_{g} \]\n\nand therefore\n\n\[ {\mathcal{L}}_{\xi }{\mathcal{L}}_{\eta }\langle \zeta ,\zeta {\rangle }_{g} = 2{\left\langle {D}_{\xi }{D}_{\eta }\zeta ,\zeta \right\rangle }_{g} + 2{\left\langle {D}_{\xi }\zeta ,{D}_{\eta }\zeta \right\rangle }_{g}, \]\n\nfrom which it follows at once that \( {\left\langle {D}_{\xi }{D}_{\eta }\zeta ,\zeta \right\rangle }_{g} \) is symmetric in \( \xi ,\eta \), thus proving RIEM 2.\n\nThe formula RIEM 4 is a formal consequence of the preceding three formulas. It is basically an exercise in algebra, which we carry out. In the cyclic identity RIEM 3, interchange \( u \) with \( z, v, w \) successively, and add the resulting three relations. One gets, using RIEM 1 and RIEM 3:\n\n\( \left( *\right) \)\n\n\[ R\left( {u, v, w, z}\right) + R\left( {u, w, z, v}\right) + R\left( {u, z, v, w}\right) = 0. 
\]\n\nFrom cyclicity and RIEM 1, one gets\n\n\[ R\left( {z, v, u, w}\right) = R\left( {u, v, z, w}\right) - R\left( {u, z, v, w}\right) \;\text{ or } \]\n\n\[ R\left( {u, z, v, w}\right) = R\left( {u, v, z, w}\right) - R\left( {z, v, u, w}\right) . \]\n\nWe substitute the value on the left in \( \left( *\right) \), and use RIEM 1 to conclude the proof of RIEM 4.
|
Yes
|
Proposition 1.5. The canonical 2-tensor determines the Riemann tensor. Equivalently, if the canonical tensor \( R \) satisfies\n\n\[ R\left( {v, w, v, w}\right) = 0\;\text{ for all }\;v, w, \]\nthen \( R = 0 \) .
|
Proof. We prove the second assertion first. From RIEM 4, which implies that \( R\left( {v, w, v, z}\right) \) is symmetric in \( \left( {w, z}\right) \), if \( R\left( {v, w, v, w}\right) = 0 \) for all \( v, w \) then \( R\left( {v, w, v, z}\right) = 0 \) for all \( v, w, z \) . From the alternating properties of RIEM 1 and RIEM 2, it follows that \( R = 0 \) identically.\n\nTo show that the canonical 2-tensor determines the Riemann tensor, we note that the problem is essentially equivalent to the other statement, but one may argue directly as when one recovers a symmetric bilinear form from a quadratic form, namely\n\n\[ {\left\lbrack \frac{{\partial }^{2}}{\partial t\partial s}R\left( {v + {tz}, w + {su}, v + {tz}, w + {su}}\right) - \frac{{\partial }^{2}}{\partial t\partial s}R\left( {v + {tu}, w + {sz}, v + {tu}, w + {sz}}\right) \right\rbrack }_{s = t = 0} = {6R}\left( {v, w, z, u}\right) . \]\n\nThis proves the proposition.
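The polarization identity can be tested numerically on a model tensor having the symmetries RIEM 1 through RIEM 4, for instance the constant-curvature tensor \( R(a,b,c,d) = \langle a,c\rangle \langle b,d\rangle - \langle a,d\rangle \langle b,c\rangle \); the tensor and the random vectors below are illustrative choices, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

def R(a, b, c, d):
    # Constant-curvature model tensor; it satisfies RIEM 1 through RIEM 4.
    return (a @ c) * (b @ d) - (a @ d) * (b @ c)

v, w, z, u = (rng.standard_normal(4) for _ in range(4))

def mixed_partial(p, q):
    # F(t, s) = R(v + t*p, w + s*q, v + t*p, w + s*q) has degree <= 2 in each
    # variable, so this 4-point formula extracts d^2 F / dt ds at (0, 0) exactly.
    F = lambda t, s: R(v + t * p, w + s * q, v + t * p, w + s * q)
    return (F(1, 1) - F(1, -1) - F(-1, 1) + F(-1, -1)) / 4.0

lhs = mixed_partial(z, u) - mixed_partial(u, z)
assert np.isclose(lhs, 6.0 * R(v, w, z, u))
```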
|
Yes
|
Proposition 1.6. Let \( \left\{ {{\xi }_{1},\ldots ,{\xi }_{n}}\right\} \) be an orthonormal frame on an open set. Then for vector fields \( \xi ,\eta \) we have\n\n\[ \n{\operatorname{Sc}}_{R}\left( {\xi ,\eta }\right) = \mathop{\sum }\limits_{{i = 1}}^{n}R\left( {\xi ,{\xi }_{i},\eta ,{\xi }_{i}}\right) \n\]
|
Proof. This is immediate from the definition of the trace of an endomorphism of a finite dimensional vector space.
|
No
|
Theorem 2.1. Let \( \left( {X, g}\right) \) be pseudo Riemannian, let \( \alpha : \left\lbrack {a, b}\right\rbrack \rightarrow X \) be a geodesic. Given vectors \( z, w \in {T}_{\alpha \left( a\right) }X \), there exists a unique Jacobi lift \( \eta = {\eta }_{z, w} \) of \( \alpha \) to \( {TX} \) such that\n\n\[ \eta \left( a\right) = z\;\text{ and }\;{D}_{{\alpha }^{\prime }}\eta \left( a\right) = w. \]\n\nIn particular, the set of Jacobi lifts of \( \alpha \) is a vector space linearly isomorphic to \( {T}_{\alpha \left( a\right) }X \times {T}_{\alpha \left( a\right) }X \) under the map \( \left( {z, w}\right) \mapsto {\eta }_{z, w} \) .
|
Proof. One verifies at once that \( {\eta }_{v}\left( 0\right) = 0 \), and since \( {D}_{{\alpha }^{\prime }}{\alpha }^{\prime } = 0 \), we also\n\nhave\n\n\[ {D}_{{\alpha }^{\prime }}{\eta }_{v}\left( t\right) = {\alpha }^{\prime }\left( t\right) \;\text{ and }\;{D}_{{\alpha }^{\prime }}^{2}{\eta }_{v} = 0 = R\left( {{\alpha }^{\prime },{\alpha }^{\prime }}\right) {\alpha }^{\prime }. \]
|
No
|
Proposition 2.2. Let \( \left( {X, g}\right) \) be pseudo Riemannian. Let \( \alpha : \left\lbrack {a, b}\right\rbrack \rightarrow X \) be a geodesic, and let \( \eta \) be a Jacobi lift of \( \alpha \) . Then there are numbers \( c \) , \( d \) such that\n\n\[{\left\langle \eta ,{\alpha }^{\prime }\right\rangle }_{g}\left( t\right) = c\left( {t - a}\right) + d.\]\n\nIn fact, \( d = {\left\langle \eta ,{\alpha }^{\prime }\right\rangle }_{g}\left( a\right) \) and \( c = {\left\langle {D}_{{\alpha }^{\prime }}\eta ,{\alpha }^{\prime }\right\rangle }_{g}\left( a\right) \) . If \( \eta \left( a\right) \) and \( {D}_{{\alpha }^{\prime }}\eta \left( a\right) \) are orthogonal to \( {\alpha }^{\prime }\left( a\right) \), then \( \eta \left( t\right) \) is orthogonal to \( {\alpha }^{\prime }\left( t\right) \) for all \( t \) .
|
Proof. Using the metric derivative, and \( {D}_{{\alpha }^{\prime }}{\alpha }^{\prime } = 0 \) since \( \alpha \) is a geodesic, we find that \( \partial {\left\langle \eta ,{\alpha }^{\prime }\right\rangle }_{g} = {\left\langle {D}_{{\alpha }^{\prime }}\eta ,{\alpha }^{\prime }\right\rangle }_{g} \), and then\n\n\[{\partial }^{2}{\left\langle \eta ,{\alpha }^{\prime }\right\rangle }_{g} = {\left\langle {D}_{{\alpha }^{\prime }}^{2}\eta ,{\alpha }^{\prime }\right\rangle }_{g} = R\left( {{\alpha }^{\prime },\eta ,{\alpha }^{\prime },{\alpha }^{\prime }}\right) = 0.\]\n\nHence \( {\left\langle \eta ,{\alpha }^{\prime }\right\rangle }_{g} \) is a linear function, whose coefficients are immediately determined to be those written down in the proposition.
|
Yes
|
Proposition 2.3. As above, let \( {\alpha }^{\prime }\left( 0\right) = v \) . Write \( w = {cv} + {w}_{1} \) with \( {\left\langle {w}_{1}, v\right\rangle }_{g} = 0 \) . Then \( {\eta }_{w} \) has the decomposition\n\n\[ \n{\eta }_{w} = c{\eta }_{v} + {\eta }_{{w}_{1}},\;\text{ also written }\;{\eta }_{w}\left( t\right) = {ct}{\alpha }^{\prime }\left( t\right) + {\eta }_{{w}_{1}}\left( t\right) .\n\]\n\nFurthermore \( {\eta }_{{w}_{1}} \) is orthogonal to \( {\alpha }^{\prime } \), that is \( {\left\langle {\eta }_{{w}_{1}},{\alpha }^{\prime }\right\rangle }_{g} = 0 \) .
|
Proof. Immediate from Proposition 2.2.
|
No
|
Proposition 2.4. Notation as in Proposition 2.3, we have an orthogonal decomposition\n\n\[ \n{D}_{{\alpha }^{\prime }}{\eta }_{w} = c{D}_{{\alpha }^{\prime }}{\eta }_{v} + {D}_{{\alpha }^{\prime }}{\eta }_{{w}_{1}}\;\text{ also written }\;{D}_{{\alpha }^{\prime }}{\eta }_{w}\left( t\right) = c{\alpha }^{\prime }\left( t\right) + {D}_{{\alpha }^{\prime }}{\eta }_{{w}_{1}}\left( t\right) .\n\]\n\nIn other words, if \( {w}_{1} \bot {\alpha }^{\prime }\left( 0\right) \), then \( {D}_{{\alpha }^{\prime }}{\eta }_{{w}_{1}} \bot {\alpha }^{\prime } \) . Furthermore \( {\left( {D}_{{\alpha }^{\prime }}{\eta }_{v}\right) }^{2} \) is constant.
|
Proof. For the first assertion, we take the derivative and use Proposition 2.3 to get\n\n\[ \n0 = \partial {\left\langle {\eta }_{{w}_{1}},{\alpha }^{\prime }\right\rangle }_{g} = {\left\langle {D}_{{\alpha }^{\prime }}{\eta }_{{w}_{1}},{\alpha }^{\prime }\right\rangle }_{g}.\n\]\n\nFor the second, we then obtain for \( \eta = {\eta }_{w} \) :\n\n\[ \n\partial {\left\langle {D}_{{\alpha }^{\prime }}\eta ,{D}_{{\alpha }^{\prime }}\eta \right\rangle }_{g} = 2{\left\langle {D}_{{\alpha }^{\prime }}^{2}\eta ,{D}_{{\alpha }^{\prime }}\eta \right\rangle }_{g}\n\]\n\n\[ \n= 2{\left\langle R\left( {{\alpha }^{\prime },\eta }\right) {\alpha }^{\prime },{D}_{{\alpha }^{\prime }}\eta \right\rangle }_{g}.\n\]\n\nIf \( \eta = {\eta }_{v} \) so \( {\eta }_{v}\left( t\right) = t{\alpha }^{\prime }\left( t\right) \), then the right side is 0 because \( {R}_{4} \) is alternating in its last two variables. This concludes the proof.
|
Yes
|
Lemma 2.5. Assume \( (X, g) \) Riemannian. Let \( \eta \) be a Jacobi lift of \( \alpha \). Let \( f(t) = \lVert \eta(t)\rVert \). Then at those values of \( t > 0 \) such that \( \eta(t) \neq 0 \), we have
\[
f'' = \frac{1}{\lVert\eta\rVert^3}\Bigl( (D_{\alpha'}\eta)^2\,\eta^2 - \langle D_{\alpha'}\eta, \eta\rangle_g^2 \Bigr) + \frac{1}{\lVert\eta\rVert}\,R_2(\alpha', \eta).
\]

Proof. Straightforward calculus, using the covariant derivative. The first derivative \( f' \) is given by
\[
f' = (\eta^2)^{-1/2}\langle \eta, D_{\alpha'}\eta\rangle_g = \frac{1}{\lVert\eta\rVert}\langle \eta, D_{\alpha'}\eta\rangle_g.
\]
Then \( f'' \) is computed by using the rule for the derivative of a product. In the term containing \( \langle D_{\alpha'}^2\eta, \eta\rangle_g \), we replace \( D_{\alpha'}^2\eta \) by \( R(\alpha',\eta)\alpha' \) (using the definition of a Jacobi lift) to conclude the proof of the lemma.
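The calculus left implicit here can be written out; the following sketch of the computation writes \( D = D_{\alpha'} \) and assumes the normalization \( R_2(\alpha',\eta) = \langle R(\alpha',\eta)\alpha', \eta\rangle_g \) implicit in the statement of the lemma. Differentiating \( f' \) by the product rule,
\[
f' = \frac{\langle \eta, D\eta\rangle_g}{\lVert\eta\rVert},
\qquad
f'' = \frac{\langle D\eta, D\eta\rangle_g + \langle \eta, D^2\eta\rangle_g}{\lVert\eta\rVert}
      - \frac{\langle \eta, D\eta\rangle_g^2}{\lVert\eta\rVert^3}.
\]
Substituting \( D^2\eta = R(\alpha',\eta)\alpha' \) gives \( \langle \eta, D^2\eta\rangle_g = R_2(\alpha',\eta) \), and collecting the two remaining terms over \( \lVert\eta\rVert^3 \) yields
\[
f'' = \frac{(D\eta)^2\,\eta^2 - \langle D\eta, \eta\rangle_g^2}{\lVert\eta\rVert^3} + \frac{R_2(\alpha',\eta)}{\lVert\eta\rVert},
\]
which is the formula of the lemma.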
Proposition 2.6. Let \( \alpha: [0,b] \to X \) be a geodesic. Let \( w \in T_{\alpha(0)}X \), \( w \neq 0 \). Let \( \eta_w = \eta_{0,w} = \eta \) be the unique Jacobi lift satisfying
\[
\eta_w(0) = 0 \quad\text{and}\quad D_{\alpha'}\eta_w(0) = w.
\]
If \( (X,g) \) is Riemannian and \( R_2 \geqq 0 \) (so \( (X,g) \) has seminegative curvature), then for \( t \in [0,b] \) we have
\[
\lVert \eta(t)\rVert \geqq \lVert w\rVert\,t, \quad\text{and in particular}\quad \lVert \eta(1)\rVert \geqq \lVert w\rVert \text{ if } b = 1.
\]

Proof. Let \( h(t) = \lVert \eta(t)\rVert - \lVert w\rVert t \) for \( 0 \leqq t \leqq b \). Then \( h \) is continuous, \( h(0) = 0 \), and by Lemma 2.5, \( h'' = f'' \geqq 0 \) whenever \( \eta(t) \neq 0 \). One cannot have \( \eta(t) = 0 \) for arbitrarily small values of \( t \neq 0 \), otherwise \( D_{\alpha'}\eta(0) \) would be 0 (because in a chart \( U \), \( \eta_U'(0) = D_{\alpha'}\eta(0) \)). In fact, we shall prove that there is no value of \( t \neq 0 \) such that \( \eta(t) = 0 \). Suppose there is such a value, and let \( t_0 \) be the smallest value \( > 0 \). In the interval \( (0, t_0) \) we have \( h'' \geqq 0 \) by Lemma 2.5, so \( h' \) is increasing. But the beginning of the Taylor expansion of \( \eta \) in a chart is
\[
\eta_U(t) = wt + O(t^2), \quad\text{so}\quad \lim_{t \to 0} f'(t) = \lVert w\rVert.
\]
Furthermore, \( h'(0) \) exists and is equal to 0, so \( h' \geqq 0 \) on \( [0, t_0) \), so \( h \) is increasing, and there cannot be a value \( t_0 > 0 \) with \( \eta(t_0) = 0 \). Then the above argument applies on the whole interval \( [0,b] \) to prove the desired inequality on the whole interval. This concludes the proof of Proposition 2.6.
Lemma 2.7. Let \( \sigma: J_1 \times J_2 \to X \) be a \( C^2 \) map. Then on lifts of \( \sigma \) to the tangent bundle, we have the equality of operators
\[
D_1 D_2 - D_2 D_1 = R(\partial_1\sigma, \partial_2\sigma).
\]

Proof. The formula can be verified in a chart. It follows directly from the definitions, especially using the local expression of Proposition 1.2.
Proposition 2.8. Let \( \sigma: [a,b] \times J \to X \) be a variation of a geodesic \( \alpha \) through geodesics. Let
\[
\eta(s) = \partial_2\sigma(s,0).
\]
Then \( \eta \) is a Jacobi lift of \( \alpha \), said to come from \( \sigma \) or associated with \( \sigma \).

Proof. Given \( \sigma \), we have
\[
D_1^2\partial_2\sigma = D_1 D_1\partial_2\sigma = D_1 D_2\partial_1\sigma \quad\text{by Lemma 5.3 of Chapter VIII}
\]
\[
= D_2 D_1\partial_1\sigma + R(\partial_1\sigma, \partial_2\sigma)\partial_1\sigma \quad\text{by Lemma 2.7.}
\]
But \( D_1\partial_1\sigma(s,t) = 0 \) because \( \alpha_t \) is a geodesic, whence \( D_{\alpha'}^2\eta = R(\alpha',\eta)\alpha' \), so \( \eta \) is a Jacobi lift of \( \alpha \), as was to be shown.
Theorem 2.9 (Variation at the Beginning Point). Let \( \alpha \) be a geodesic in \( X \) with initial value \( \alpha(0) = x \). Let \( z, w \in T_xX \). Let \( \beta \) be a curve such that
\[
\beta(0) = \alpha(0) \quad\text{and}\quad \beta'(0) = z.
\]
Let
\[
\zeta(t) = P^t_{0,\beta}\bigl(\alpha'(0) + tw\bigr) = P^t_{0,\beta}\bigl(\alpha'(0)\bigr) + t\,P^t_{0,\beta}(w),
\]
\[
\sigma(s,t) = \exp_{\beta(t)} s\zeta(t).
\]
Let \( \alpha_t(s) = \sigma(s,t) \). Then \( \alpha_0 = \alpha \), \( \sigma \) is a variation of \( \alpha \) by geodesics \( \{\alpha_t\} \), and \( \alpha_t \) is the unique geodesic such that
\[
\alpha_t(0) = \beta(t) \quad\text{and}\quad \alpha_t'(0) = \zeta(t).
\]
In particular, if \( w = 0 \), then \( \alpha_t'(0) = P^t_{0,\beta}\bigl(\alpha'(0)\bigr) \). Furthermore, let
\[
\eta(s) = \partial_2\sigma(s,0).
\]
Then \( \eta = \eta_{z,w} \) is the unique Jacobi lift of \( \alpha \) with initial conditions
\[
\eta(0) = z \quad\text{and}\quad D_{\alpha'}\eta(0) = w.
\]

Proof. The stated values for \( \alpha_t(0) \) and \( \alpha_t'(0) \) are immediate. Then from the definition of parallel translation,
\[
(*) \qquad \zeta(0) = \alpha'(0) \quad\text{and}\quad D_{\beta'}\zeta(0) = w,
\]
because if \( \gamma_v(t) = P^t_{0,\beta}(v) \), then \( D_{\beta'}\gamma_v = 0 \), and we can use the standard rule for the derivative of the product \( t\,P^t_{0,\beta}(w) \).

Then \( \sigma(0,t) = \beta(t) \), so we obtain the initial conditions:
\[
\eta(0) = \partial_2\sigma(0,0) = \beta'(0) = z,
\]
\[
D_{\alpha'}\eta(0) = D_1\partial_2\sigma(0,0) = D_2\partial_1\sigma(0,0) \quad\text{by Chapter VIII, Lemma 5.3}
\]
\[
= D_{\beta'}\zeta(0) = w \quad\text{by }(*),
\]
using \( \partial_1\sigma(0,t) = T\exp_{\beta(t)}(0)\,\zeta(t) = \zeta(t) \), thus concluding the proof.
Proposition 2.10. Assume that the curvature is 0, or equivalently that the Riemann tensor \( R \) is identically 0. Then for all \( w \in T_xX \) we have
\[
\eta_w(t) = t\gamma_w(t),
\]
where \( \gamma_w(t) = P^t_{0,\alpha}(w) \) is the parallel translation of \( w \) along \( \alpha \).

Proof. The two curves \( t \mapsto \eta_w(t) \) and \( t \mapsto t\gamma_w(t) \) have the same initial conditions. Also they satisfy the same differential equation, namely
\[
D_{\alpha'}^2\eta_w = 0 \quad\text{and}\quad D_{\alpha'}^2\bigl(t\gamma_w(t)\bigr) = 0.
\]
Hence they are equal, thereby proving the proposition.
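The second equation can be checked in one line; a short verification, writing \( D = D_{\alpha'} \) and using \( D\gamma_w = 0 \) (the defining property of parallel translation):
\[
D(t\gamma_w) = \gamma_w + t\,D\gamma_w = \gamma_w, \qquad D^2(t\gamma_w) = D\gamma_w = 0.
\]
At \( t = 0 \) the curve \( t\gamma_w(t) \) has value 0 and first covariant derivative \( \gamma_w(0) = w \), which are exactly the initial conditions of \( \eta_w \).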
Proposition 2.11. Assume that \( (X,g) \) has constant curvature \( -1 \). Then the Jacobi differential equation has the form

(1)
\[
D_{\alpha'}^2\eta_w = \eta_w - \langle \eta_w, \alpha'\rangle_g\,\alpha'.
\]
Furthermore, if we orthogonalize \( w \) with respect to \( v \), so write
\[
w = c_0 v + c_1 u \quad\text{with } c_0, c_1 \in \mathbf{R} \text{ and a unit vector } u \perp v,
\]
then

(2)
\[
\eta_w(t) = c_0\,t\,\alpha'(t) + (\sinh t)\,c_1\gamma_u(t).
\]

Proof. The orthogonalization of Jacobi lifts comes from Proposition 2.3, so we want to identify the orthogonal components of the Jacobi lift of \( \alpha_v \) with scalar multiples of parallel translation. It suffices to do so when \( w = v \) and \( w = u \perp v \) separately. The example following Theorem 2.1 already gives us the \( v \)-component, so we may assume \( w = u \). In this case, the reader will verify that the two curves
\[
t \mapsto \eta_u(t) \quad\text{and}\quad t \mapsto (\sinh t)\,\gamma_u(t)
\]
have the same initial conditions at 0 (for their value, and the value of their first covariant derivative). They also satisfy the same differential equation, namely
\[
D_{\alpha'}^2\eta_u = \eta_u,
\]
and similarly for the other curve, since \( D_{\alpha'}\gamma_u = 0 \). Hence the two curves are equal, as was to be shown.
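The verification left to the reader amounts to the following, writing \( D = D_{\alpha'} \) and using \( D\gamma_u = 0 \):
\[
D\bigl((\sinh t)\gamma_u\bigr) = (\cosh t)\gamma_u, \qquad D^2\bigl((\sinh t)\gamma_u\bigr) = (\sinh t)\gamma_u,
\]
so the curve satisfies the equation \( D^2\eta_u = \eta_u \); at \( t = 0 \) its value is \( (\sinh 0)\gamma_u(0) = 0 \) and its first covariant derivative is \( (\cosh 0)\gamma_u(0) = u \), matching the initial conditions of \( \eta_u \).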
Proposition 2.12. Assume \( X \) has constant curvature \( +1 \). Let \( x \in X \). Then the same formulas hold as in Proposition 2.11, except for a minus sign on one side in formula (1), and with \( \sinh t \) replaced by \( \sin t \) in formula (2).

Proof. The arguments are the same. Using \( \sin t \) instead of \( \sinh t \) just guarantees that the differential equation
\[
D_{\alpha'}^2\eta_u = -\eta_u
\]
is satisfied, with the minus sign.
Theorem 3.1. Let \( x \in X \) and \( v \in T_x \). Let \( \alpha \) (defined on an open interval containing 0) be the geodesic such that \( \alpha(0) = x \) and \( \alpha'(0) = v \). Let \( w \in T_x \) and let \( \eta_w = \eta_{0,w} \) be the Jacobi lift of \( \alpha \) such that
\[
\eta_w(0) = 0 \quad\text{and}\quad D_{\alpha'}\eta_w(0) = w.
\]
Then for \( r > 0 \) in the interval of definition of \( \alpha \), we have the formula
\[
T\exp_x(rv)\,w = \frac{1}{r}\eta_w(r).
\]
In particular, \( w \) lies in the kernel of \( T\exp_x(rv) \) if and only if \( \eta_w(r) = 0 \). Furthermore, if we let
\[
\sigma(s,t) = \exp_x\bigl(s(v + tw)\bigr),
\]
then \( \eta_w(s) = \partial_2\sigma(s,0) \).

Proof. The curve \( \sigma_t \) is a geodesic for each \( t \), and
\[
\sigma_0(s) = \exp_x(sv) = \alpha(s),
\]
so \( \sigma \) is a variation of \( \alpha \) through geodesics. Let \( \eta(s) = \partial_2\sigma(s,0) \). Then \( \eta \) is a Jacobi lift of \( \alpha \) by Proposition 2.8. Let \( f(s,t) = s(v + tw) \). Then
\[
\partial_2\sigma(s,t) = \bigl(T\exp_x\bigr)\bigl(f(s,t)\bigr)(\partial f/\partial t) = \bigl(T\exp_x\bigr)\bigl(f(s,t)\bigr)(sw).
\]
Hence \( \eta(0) = 0 \). Furthermore this same expression yields the formula of the theorem,
\[
\eta(r) = \bigl(T\exp_x\bigr)\bigl(f(r,0)\bigr)\,rw = \bigl(T\exp_x\bigr)(rv)\,rw.
\]
Taking the limit as \( r \to 0 \) in the formula, noting that in a chart \( D_{\alpha'}\eta(0) = \eta'(0) \), and using \( T\exp_x(0) = \mathrm{id} \) proves that \( D_{\alpha'}\eta(0) = w \) and concludes the proof of Theorem 3.1.
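The formula of Theorem 3.1 can be checked numerically in a concrete case. The sketch below (an illustration, not from the text) takes \( X = S^2 \) with the round metric, whose exponential map is explicit. By Proposition 2.12 the Jacobi lift orthogonal to the geodesic is \( \eta_w(t) = (\sin t)\gamma_w(t) \), so for a unit vector \( w \perp v \) the theorem predicts \( \lVert T\exp_x(rv)(rw)\rVert = \sin r \); we compare this against a finite-difference derivative of the exponential map.

```python
import numpy as np

def sphere_exp(x, v):
    """Exponential map of the unit sphere S^2 in R^3 at x, applied to a
    tangent vector v: exp_x(v) = cos(|v|) x + sin(|v|) v/|v|."""
    n = np.linalg.norm(v)
    if n < 1e-15:
        return x
    return np.cos(n) * x + np.sin(n) * v / n

x = np.array([0.0, 0.0, 1.0])   # base point on S^2
v = np.array([1.0, 0.0, 0.0])   # unit tangent vector at x
w = np.array([0.0, 1.0, 0.0])   # unit tangent vector at x, orthogonal to v
r = 1.3

# Finite-difference approximation of T exp_x(rv) applied to rw,
# i.e. d/dt exp_x(r(v + t w)) at t = 0.
eps = 1e-6
deriv = (sphere_exp(x, r * (v + eps * w))
         - sphere_exp(x, r * (v - eps * w))) / (2 * eps)

# Theorem 3.1: T exp_x(rv)(rw) = eta_w(r); on S^2 (curvature +1) we have
# eta_w(r) = (sin r) * (parallel translate of w), whose norm is sin r.
print(np.linalg.norm(deriv), np.sin(r))   # the two values agree
```

The agreement of the two printed values illustrates both the theorem and the constant-curvature formula of Proposition 2.12 at once.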
Proposition 3.2 (Gauss Lemma, Global). Let \( (X,g) \) be pseudo Riemannian. Let \( x \in X \) and \( v \in T_xX \). Let the exponential map \( r \mapsto \exp_x(rv) \) be defined on an open interval \( J \). Then for all \( w \in T_xX \) we have
\[
\bigl\langle T\exp_x(rv)\,v,\; T\exp_x(rv)\,w \bigr\rangle_g = \langle v, w\rangle_g.
\]

Proof. Immediate from Theorem 3.1 and the orthogonalization of Proposition 2.3.
Proposition 3.3. Let \( y = \exp_x(ru) \) be in a normal chart at \( x \) as above, with the unit vector \( u \). Let \( \alpha(s) = \exp_x(su) \), and let \( \{\alpha_t\} \) be the variation of \( \alpha \) at its end point \( y \) in the direction of the unit vector \( e \in T_yX \). Also denote this variation by \( \sigma \), and let \( \eta(s) = \partial_2\sigma(s,0) \). Assume that \( e \) is orthogonal to \( \alpha'(r) \). Then \( D_{\alpha'}\eta \) is orthogonal to \( \alpha' \), and \( \eta \) is the unique Jacobi lift of \( \alpha \) such that
\[
\eta(0) = 0 \quad\text{and}\quad \eta(r) = e.
\]

Proof. First note the uniqueness. If there is another Jacobi lift having the last stated property, then the difference vanishes at 0 and \( r \), and by Theorem 3.1 this difference must be 0, since the exponential map is assumed to be an isomorphism from a ball to its image, which contains \( y = \exp_x(ru) \).

Next, the variation \( \sigma \) is given by the formula
\[
\sigma(s,t) = \alpha_t(s) = \exp_x\bigl(su(t)\bigr) \quad\text{such that}\quad \exp_x\bigl(s(t)u(t)\bigr) = \beta(t),
\]
where \( \beta \) is the curve defining the variation at the end point, with \( \beta(0) = y \) and \( \beta'(0) = e \); here \( u(t) \) is a unit vector, and \( s(t)u(t) \) is the vector whose exponential is \( \beta(t) \). The polar coordinates \( s(t) \) and \( u(t) \) depend as smoothly on \( t \) as the exponential map, or its inverse. Then
\[
\partial_2\sigma(s,t) = T\exp_x\bigl(su(t)\bigr)\,su'(t),
\]
so that (since \( u = u(0) \)),
\[
\eta(s) = T\exp_x(su)\,su'(0) = \eta_{u'(0)}(s),
\]
because from Theorem 3.1, we see that \( D_{\alpha'}\eta(0) = u'(0) \). Since \( u(t)^2 = 1 \), it follows that \( u'(0) \) is perpendicular to \( \alpha'(0) = u \), so \( D_{\alpha'}\eta \) is orthogonal to \( \alpha' \).

Furthermore
\[
\beta'(t) = T\exp_x\bigl(s(t)u(t)\bigr)\bigl(s(t)u'(t) + s'(t)u(t)\bigr),
\]
and since \( s(0) = r \), we find
\[
e = \beta'(0) = T\exp_x(ru)\bigl(ru'(0) + s'(0)u\bigr)
= T\exp_x(ru)\,ru'(0) + T\exp_x(ru)\,s'(0)u.
\]
Since \( e \) is assumed orthogonal to \( \alpha'(r) = T\exp_x(ru)\,u \), and \( u'(0) \) is also orthogonal to \( u \), we must have \( s'(0) = 0 \), whence the relation
\[
e = T\exp_x(ru)\,ru'(0), \quad\text{or}\quad \eta_{u'(0)}(r) = e.
\]
This proves the proposition.