Lemma 3.4. Let \( (X, g) \) be pseudo-Riemannian. Let \( \eta, \zeta \) be Jacobi lifts of a geodesic \( \alpha \). Then

\[
\left\langle D_{\alpha'}\eta, \zeta\right\rangle_g - \left\langle \eta, D_{\alpha'}\zeta\right\rangle_g \quad\text{is constant.}
\]
Proof. We differentiate the above expression and expect to get 0. From the defining property of the covariant derivative, the derivative of the above expression is equal to

\[
\left\langle D_{\alpha'}^2\eta, \zeta\right\rangle + \left\langle D_{\alpha'}\eta, D_{\alpha'}\zeta\right\rangle - \left\langle D_{\alpha'}\eta, D_{\alpha'}\zeta\right\rangle - \left\langle \eta, D_{\alpha'}^2\zeta\right\rangle
\]

\[
= \left\langle D_{\alpha'}^2\eta, \zeta\right\rangle - \left\langle D_{\alpha'}^2\zeta, \eta\right\rangle
\]

\[
= R(\alpha', \eta, \alpha', \zeta) - R(\alpha', \zeta, \alpha', \eta)
\]

\[
= 0
\]

by the symmetry property of \( R \). This proves the lemma.
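The constancy in Lemma 3.4 can be checked on an explicit example. The following sketch (an added illustration, not from the text) uses the unit sphere, where Jacobi fields along a great circle are explicit: with \( E(s) \) a parallel unit normal field along a unit-speed great circle \( \alpha \), the lifts \( \eta(s) = \sin(s)E(s) \) and \( \zeta(s) = \cos(s)E(s) \) are Jacobi, and \( D_{\alpha'} \) acts on \( f(s)E(s) \) as \( f'(s)E(s) \).

```python
import math

# Sanity check of Lemma 3.4 on the unit sphere (added illustration, not from
# the text).  Along a unit-speed great circle alpha with parallel unit normal
# field E(s), eta(s) = sin(s) E(s) and zeta(s) = cos(s) E(s) are Jacobi lifts,
# and D_{alpha'} acts on f(s) E(s) simply as f'(s) E(s).
def wronskian(s):
    eta, deta = math.sin(s), math.cos(s)      # coefficient of E(s) and its derivative
    zeta, dzeta = math.cos(s), -math.sin(s)
    return deta * zeta - eta * dzeta          # <D eta, zeta> - <eta, D zeta>

values = [wronskian(s) for s in (0.0, 0.3, 1.1, 2.5)]
assert all(abs(v - 1.0) < 1e-12 for v in values)  # constant (equal to 1) along alpha
```

Here the constant of the lemma is 1, independently of the point on the geodesic.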
Lemma 3.5. Let \( (X, g) \) be pseudo-Riemannian. Let \( \alpha \) (defined at least on \( [0,1] \)) be the geodesic such that \( \alpha(0) = x \) and \( \alpha'(0) = v \). Let

\[
z \in T_{\alpha(1)}, \qquad w \in T_{\alpha(0)},
\]

and let

\[
v^* = -\alpha'(1) = -Pv,
\]

where \( P \) is the parallel translation along \( \alpha \). Then

\[
\left\langle T\exp_{\alpha(0)}(v)w, z\right\rangle_{\alpha(1)} = \left\langle w, T\exp_{\alpha(1)}(v^*)z\right\rangle_{\alpha(0)}.
\]
Proof. Let \( \zeta \) be the Jacobi lift of \( \alpha \) such that \( \zeta(1) = 0 \) and \( D_{\alpha'}\zeta(1) = z \). Let \( \eta \) be the Jacobi lift as in Theorem 3.1. Then

\[
\left\langle T\exp_x(v)w, z\right\rangle = \left\langle \eta(1), D_{\alpha'}\zeta(1)\right\rangle = \left\langle D_{\alpha'}\eta(1), \zeta(1)\right\rangle + C = C,
\]

where \( C \) is the constant of Lemma 3.4. We compute \( C \) to be

\[
C = -\left\langle D_{\alpha'}\eta(0), \zeta(0)\right\rangle = -\langle w, \zeta(0)\rangle.
\]

Let \( \operatorname{rev}(\alpha) \) be the reverse curve, so that \( \operatorname{rev}(\alpha)(t) = \alpha(1-t) \), and let \( \xi \) be the unique Jacobi lift of \( \operatorname{rev}(\alpha) \) such that

\[
\xi(0) = 0 \quad\text{and}\quad D_{\operatorname{rev}(\alpha)'}\xi(0) = z.
\]

Then in fact \( \xi(t) = \zeta(1-t) \), and applying Theorem 3.1 concludes the proof.
Theorem 3.6. Let \( (X, g) \) be Riemannian. Assume \( (X, g) \) has seminegative curvature. Then for all \( x \in X \) and \( v \in T_x \), \( v \neq 0 \), such that \( \exp_x \) is defined on the segment \( [0, v] \) in \( T_x \), we have

\[
\left\| T\exp_x(v)w \right\|_g \geqq \|w\|_g \quad\text{for all } w \in T_xX.
\]

In particular,

\[
\operatorname{Ker} T\exp_x(v) = 0.
\]
Proof. Let \( \eta_w \) be the Jacobi lift as in Proposition 3.1, so that

\[
T\exp_x(v)w = \eta_w(1).
\]

The asserted inequality is then a special case of the inequality found in Proposition 2.6. This inequality implies that \( \operatorname{Ker} T\exp_x(v) = 0 \), which concludes the proof.
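The norm-increasing property of Theorem 3.6 can be made concrete in the hyperbolic plane (an added illustration, not from the text): there, for \( w \) orthogonal to \( v \) with \( r = \|v\| \), the explicit Jacobi field computation gives \( \|T\exp_x(v)w\| = \frac{\sinh r}{r}\|w\| \), and \( \sinh(r)/r \geqq 1 \).

```python
import math

# Illustration of Theorem 3.6 in the hyperbolic plane (constant curvature -1):
# for w orthogonal to v with r = |v|, the Jacobi field along the geodesic
# exp_x(sv) gives |T exp_x(v) w| = (sinh r / r) |w|, and sinh(r)/r >= 1,
# so the exponential map does not decrease norms.
def expansion_factor(r):
    return math.sinh(r) / r

for r in (0.01, 0.5, 1.0, 3.0):
    assert expansion_factor(r) >= 1.0

print(expansion_factor(1.0))  # about 1.1752, strictly greater than 1
```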
Theorem 3.7 (McAlpin [McA 65]). Let \( (X, g) \) be a Riemannian-Hilbertian manifold with seminegative curvature, and let \( x \in X \). Assume that \( \exp_x \) is defined on all of \( T_x \) (what we called geodesically complete at \( x \)). Then for all \( v \in T_x \) the map \( T\exp_x(v) \) is a topological linear isomorphism, and in particular, \( \exp_x \) is a local isomorphism.
Proof. We have already proved that \( T\exp_x(v) \) is injective and has a continuous inverse on its image. Lemma 3.5 shows that we can apply the same reasoning to the adjoint \( (T\exp_x(v))^* = T\exp_y(v^*) \) for \( y = \exp_x(v) \), so this adjoint also has kernel 0. Hence \( T\exp_x(v) \) is surjective, thereby concluding the proof of the theorem. (See also Chapter X, §2.)
Theorem 3.8 (Cartan-Hadamard). Let \( (X, g) \) be a connected Riemannian manifold such that \( \exp_x \) is defined on all of \( T_x \) for some \( x \in X \) (so geodesically complete). If \( R_2 \geqq 0 \) (i.e. \( X \) has seminegative curvature), then the exponential map \( \exp_x : T_xX \rightarrow X \) is a covering. In particular, if \( X \) is simply connected, then \( \exp_x \) is an isomorphism.
Proof. We have already proved that \( \exp_x \) is a local isomorphism. It remains to prove that \( \exp_x \) is surjective and that it is a covering. But all the work has been done, because we simply apply Theorem 6.9 of Chapter VIII with \( Y = T_x \) having the given metric \( h = g(x) \), for which \( Y \) is certainly complete. Theorem 3.6 guarantees that the essential estimate hypothesis is satisfied, so the proof is complete.
Corollary 3.9. Let \( \left( {X, g}\right) \) be a connected Riemannian manifold with seminegative curvature. Then \( \left( {X, g}\right) \) is complete if and only if the exponential map \( {\exp }_{x} \) is defined on all of \( {T}_{x} \) for some \( x \in X \), and therefore for every \( x \in X \) .
Proof. That \( \left( {X, g}\right) \) complete implies \( {\exp }_{x} \) defined on all of \( {T}_{x} \) was proved under all circumstances in Proposition 6.5 of Chapter VIII. The converse is now immediate from Theorem 2.10 and Theorem 6.9 of Chapter VIII.
Corollary 3.10. Let \( (X, g) \) be a Cartan-Hadamard manifold. Let \( x \in X \). Then for all \( v, w \in T_xX \) we have the inequality

\[
\operatorname{dist}_g\left(\exp_x(v), \exp_x(w)\right) \geqq \|v - w\|_g.
\]
Proof. By Theorem 3.8 the exponential map has an inverse

\[
\varphi : X \rightarrow T_xX,
\]

and by Theorem 3.6 this inverse satisfies

\[
\|T\varphi(z)\|_g \leqq 1
\]

for all \( z \in X \), where the norm is that of a continuous linear map from \( T_zX \) to \( T_{\varphi(z)}X \), with their structures of Hilbert spaces due to \( g \). The inequality of the corollary is then immediate from the definition of the length of curves.
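Corollary 3.10 can be tested numerically in the hyperbolic plane (an added illustration, not from the text). Writing \( v, w \in T_x \) in polar form as \( (r_1, 0) \) and \( (r_2, \varphi) \), the hyperbolic law of cosines gives the geodesic distance between the image points, while \( \|v - w\| \) is the Euclidean chord in the tangent space.

```python
import math

# Numerical illustration of Corollary 3.10 in the hyperbolic plane.
# For tangent vectors v, w at x with lengths r1, r2 and angle phi between
# them, dist(exp_x v, exp_x w) = d satisfies the hyperbolic law of cosines
#   cosh d = cosh r1 cosh r2 - sinh r1 sinh r2 cos phi,
# while |v - w| is the Euclidean chord length in T_x.
def hyp_dist(r1, r2, phi):
    c = math.cosh(r1) * math.cosh(r2) - math.sinh(r1) * math.sinh(r2) * math.cos(phi)
    return math.acosh(max(c, 1.0))

def chord(r1, r2, phi):
    return math.sqrt(r1**2 + r2**2 - 2 * r1 * r2 * math.cos(phi))

for (r1, r2, phi) in [(1.0, 1.0, 1.0), (0.3, 2.0, 2.5), (1.5, 0.5, 0.1)]:
    assert hyp_dist(r1, r2, phi) >= chord(r1, r2, phi) - 1e-12

print("distance-increasing property verified on sample points")
```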
Corollary 3.11. Suppose that \( (X, g) \) is a Cartan-Hadamard manifold. Then any two points can be joined by a unique geodesic, whose length is the \( g \)-distance between the two points.
Proof. Immediate from Corollary 3.10: if \( x, y \) are the two points, then \( y = \exp_x(v) \) for some \( v \in T_xX \), and the geodesic \( \alpha \) such that \( \alpha(t) = \exp_x(tv) \) joins the two points, is unique by the Cartan-Hadamard theorem, and has length \( \|v\|_g \).
Let \( X \) be Riemannian, complete, and simply connected. Let \( x_0 \in X \).

(a) If \( R = 0 \), i.e. if \( X \) has curvature 0, then the exponential map

\[
\exp_{x_0} : T_{x_0}X \rightarrow X
\]

is an isometry.
For (a), we use Theorem 3.1 and Proposition 2.10, which show that the exponential map amounts to parallel translation, and hence is an isometry.
Theorem 3.13. Let \( X \) be Riemannian, complete, simply connected, with sectional curvature +1 . Then \( X \) is isometric to the ordinary sphere of the same dimension in Hilbert space.
Proof. The proof is similar, except that one cannot deal with the exponential map defined on the whole tangent space \( T_{x_0}X \). For convenience, we let \( X \) be the unit sphere in Hilbert space of a given dimension, and we let \( Y \) be Riemannian, complete, simply connected, with sectional curvature +1. We can then define the map \( f \) on the open ball of radius \( \pi \). The same argument as before, replacing \( \sinh r \) by \( \sin r \), shows that \( f \) is a local isometry. We then pick another point \( x_1 \neq \pm x_0 \). We let

\[
Tf(x_1) = L_1 : T_{x_1}X \rightarrow T_{f(x_1)}Y.
\]

Just as we defined \( f = f_{x_0} \) from \( x_0 \), we can define \( f_1 = f_{x_1} \) from \( x_1 \). Then \( f \) and \( f_1 \) coincide on the intersection of their domains, and thus define a local isometry \( X \rightarrow Y \). By Theorem 6.9 of Chapter VIII, this local isometry is a covering map, and since \( Y \) is assumed simply connected it follows that \( f \) is a differential isomorphism, and hence a global isometry, thus proving the theorem.
Theorem 4.1. We have

(1)

\[
\partial_2 h = \frac{1}{\|\partial_1\sigma\|}\left\langle D_2\partial_1\sigma, \partial_1\sigma\right\rangle_g,
\]

(2)

\[
\partial_2^2 h = \frac{1}{\|\partial_1\sigma\|^3}\left( \left(D_2\partial_1\sigma\right)^2\left(\partial_1\sigma\right)^2 - \left\langle D_2\partial_1\sigma, \partial_1\sigma\right\rangle_g^2 \right) + \frac{1}{\|\partial_1\sigma\|} R_2\left(\partial_2\sigma, \partial_1\sigma\right).
\]
Proof. The first formula comes directly from the definition of the metric (Levi-Civita) derivative. The second is obtained at once by using the rule for the derivative of a product, and setting

\[
D_2^2\partial_1\sigma = R\left(\partial_2\sigma, \partial_1\sigma\right)\partial_2\sigma,
\]

which is the Jacobi equation satisfied by the variation of geodesics. Then we take the scalar product with \( \partial_1\sigma \) to obtain the term on the far right, with the Riemann tensor

\[
R_2\left(\partial_2\sigma, \partial_1\sigma\right) = \left\langle R\left(\partial_2\sigma, \partial_1\sigma\right)\partial_2\sigma, \partial_1\sigma\right\rangle_g.
\]

This concludes the proof. It is essentially the same as Lemma 2.5.
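Formula (1) is the covariant version of the rule for differentiating a norm. In the flat case, where the covariant derivative \( D_2 \) reduces to the ordinary partial derivative, it can be checked by finite differences on any smooth test variation (the map `sigma` below is an invented example, not from the text).

```python
import math

# Finite-difference check of formula (1) of Theorem 4.1 in the flat
# (Euclidean) case, where D_2 is the ordinary partial derivative in t.
def sigma(s, t):
    return (s + t * t, math.sin(s * t), t * math.exp(s))  # arbitrary smooth test map

def d1(s, t, h=1e-6):
    """partial_1 sigma by central differences."""
    return tuple((a - b) / (2 * h) for a, b in zip(sigma(s + h, t), sigma(s - h, t)))

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

s0, t0, dt = 0.7, 0.4, 1e-5
# left side of (1): partial_2 of h(s, t) = |partial_1 sigma|
lhs = (norm(d1(s0, t0 + dt)) - norm(d1(s0, t0 - dt))) / (2 * dt)
# right side of (1): <D_2 partial_1 sigma, partial_1 sigma> / |partial_1 sigma|
v = d1(s0, t0)
d2d1 = tuple((a - b) / (2 * dt) for a, b in zip(d1(s0, t0 + dt), d1(s0, t0 - dt)))
rhs = dot(d2d1, v) / norm(v)
assert abs(lhs - rhs) < 1e-4
```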
Theorem 4.2. Let \( X \) be a Riemannian manifold, and let \( \sigma = \sigma(s, t) \) be a variation of geodesics \( \{\alpha_t\} \). Let \( u \) be the (varying) unit vector tangent to these geodesics, namely

\[
u = \partial_1\sigma / \|\partial_1\sigma\|_g.
\]

Let \( \widetilde{v} \) be the orthogonalization

\[
\widetilde{v} = D_2\partial_1\sigma - \left\langle D_2\partial_1\sigma, u\right\rangle_g u.
\]

Then

\[
\widetilde{v}^2 = \left(D_2\partial_1\sigma\right)^2 - \left\langle D_2\partial_1\sigma, u\right\rangle_g^2 \geqq 0,
\]

and for the length \( \ell(t) = L(\alpha_t) \), we have

\[
\ell''(t) = \int_a^b \frac{1}{\|\partial_1\sigma\|}\left( \widetilde{v}^2 + R_2\left(\partial_1\sigma, \partial_2\sigma\right)\right)(s, t)\, ds.
\]
Proof. Immediate from Theorem 4.1 and the definitions.
Theorem 4.3. Let \( X \) be a Riemannian manifold with seminegative curvature \( \left( {{R}_{2} \geqq 0}\right) \), and \( U \) a convex open set. Let \( {\beta }_{1},{\beta }_{2} \) be disjoint geodesics in \( U \), defined on the same interval. Let \( {\alpha }_{t} : \left\lbrack {a, b}\right\rbrack \rightarrow U \) be the geodesic joining \( {\beta }_{1}\left( t\right) \) with \( {\beta }_{2}\left( t\right) \), and let \( \ell \left( t\right) = L\left( {\alpha }_{t}\right) \) . Then \( {\ell }^{\prime \prime }\left( t\right) \geqq 0 \) for all \( t \) .
Proof. Immediate from Theorem 4.2 and the hypothesis that \( {R}_{2} \geqq 0 \) .
Theorem 4.4. Let \( X \) have seminegative curvature. Let \( U \) be a convex open subset. Let \( \gamma \) be a geodesic in \( U \) not containing a point \( x \in U \) . For each \( t \) at which \( \gamma \) is defined, let \( {\alpha }_{t} : \left\lbrack {0,1}\right\rbrack \rightarrow U \) be the geodesic joining \( x \) with \( \gamma \left( t\right) \) . Let \( \ell \left( t\right) = L\left( {\alpha }_{t}\right) \) . Then \( {\ell }^{\prime \prime }\left( t\right) > 0 \) for all \( t \) . In particular, on an interval \( \left\lbrack {{t}_{1},{t}_{2}}\right\rbrack \) where \( \gamma \) is defined, the maximum of \( L\left( {\alpha }_{t}\right) \) for \( t \in \left\lbrack {{t}_{1},{t}_{2}}\right\rbrack \) occurs only at the end points, with \( t = {t}_{1} \) or \( t = {t}_{2} \) .
Proof. We suppose there is a point \( c \) such that \( \ell''(c) = 0 \). As in Theorem 4.2, let \( \sigma(s, t) = \alpha_t(s) \), put \( \alpha(s) = \sigma(s, c) \), and let

\[
\eta(s) = \partial_2\sigma(s, c),
\]

so \( \eta \) is a Jacobi lift of \( \alpha \). From the integral expression for \( \ell''(c) \), using the variation formula (2), we conclude from the Schwarz inequality that \( D_2\partial_1\sigma \) is proportional to \( \partial_1\sigma \) at \( t = c \). Using the standard fact \( D_2\partial_1 = D_1\partial_2 \) (Chapter VIII, Lemma 5.3), we conclude that \( D_{\alpha'}\eta \) is proportional to \( \alpha' \), i.e. there exists a function \( \varphi \) such that

\[
D_1\partial_2\sigma(s, c) = \varphi(s)\partial_1\sigma(s, c), \quad\text{that is}\quad D_{\alpha'}\eta = \varphi\alpha'.
\]

We finish the proof using an argument shown to me by Quian. By Proposition 2.3, we can orthogonalize

\[
\eta = \psi\alpha' + \xi,
\]

where \( \xi \) is a lift of \( \alpha \) orthogonal to \( \alpha' \), and \( \psi \) is some function. By Proposition 2.4, we also have an orthogonal decomposition after applying \( D_{\alpha'} \), that is

\[
D_{\alpha'}\eta = \psi'\alpha' + D_{\alpha'}\xi.
\]

Since \( D_{\alpha'}\eta \) has been shown to be proportional to \( \alpha' \), we conclude that \( D_{\alpha'}\xi = 0 \).
Since \( \eta(0) = 0 \) it follows that \( \xi(0) = 0 \), and \( \xi \) being a Jacobi lift, it follows that \( \xi = 0 \), because a Jacobi lift is determined by its initial conditions at a given point. Thus finally we obtain

\[
\eta(1) = \psi(1)\alpha'(1), \quad\text{that is}\quad \gamma'(c) = \partial_2\sigma(1, c) = \psi(1)\alpha'(1).
\]

This means that the geodesic \( \gamma \) is tangent to the geodesic \( \alpha \) at the point \( \gamma(c) \), and hence these two geodesics coincide, since a geodesic is determined by its initial conditions at a given point. However, we assumed that \( x \) does not lie on \( \gamma \), so we get a contradiction, which concludes the proof.
Corollary 4.5. Let \( X \) be a Cartan-Hadamard manifold. Then every ball in \( X \) is convex.
Proof. Let \( x \) be the center of the ball, and let \( {x}_{0},{x}_{1} \) be points in the ball. If \( x \) lies on the geodesic between \( {x}_{0} \) and \( {x}_{1} \) then the Cartan-Hadamard theorem shows that this geodesic is the ray passing through the origin of the ball, so lies in the ball. If not, then we can apply Theorem 4.4.
Theorem 4.6. Let \( X \) be a Riemannian manifold and let \( x \in X \). Let \( U \) be a convex open set in \( X \) such that

\[
\exp_x : V \rightarrow U
\]

is an isomorphism of some convex open set \( V \) in \( T_x \) containing \( 0_x \) with \( U \). Let \( \gamma \) be a curve in \( U \) not containing \( x \), and let \( \alpha_t \) be the geodesic segment from \( x \) to \( \gamma(t) \). Let \( \theta(t) \) be the angle between \( \gamma \) and \( \alpha_t \), and let

\[
\ell(t) = L(\alpha_t)
\]

be the length of \( \alpha_t \). Then \( \ell'(t) = \|\gamma'(t)\|\cos\theta(t) \).
Proof. Let us first prove the result in euclidean space. Let \( t \mapsto v(t) \) be a curve in a euclidean space, and let \( F(t) = \|v(t)\| \), with the euclidean norm [IX, §4] denoted by the double bar. Then

\[
\lim_{h \to 0+} \frac{\|v(t+h)\| - \|v(t)\|}{h} = \lim_{h \to 0+} \frac{1}{2\|v(t)\|h}\left( \|v(t+h)\|^2 - \|v(t)\|^2 \right)
\]

\[
= \lim_{h \to 0+} \frac{1}{2\|v(t)\|h}\left( \|v(t+h) - v(t)\|^2 + 2\|v(t)\|\,\|v(t+h) - v(t)\|\cos\Theta_t \right),
\]

where \( \Theta_t \) is the euclidean angle between \( v(t) \) and \( v(t+h) - v(t) \); this comes from the law of cosines, namely for vectors \( u, z \) one has

\[
\|u + z\|^2 = \|u\|^2 + \|z\|^2 + 2\|u\|\,\|z\|\cos\Theta,
\]

where \( \Theta \) is the angle between \( u \) and \( z \) (the supplement of the triangle angle opposite the side \( u + z \)). But

\[
\|v(t+h) - v(t)\|^2 = O(h^2) \quad\text{for}\quad h \to 0,
\]

so

\[
F'(t) = \lim_{h \to 0+} \frac{\|v(t+h)\| - \|v(t)\|}{h} = \|v'(t)\|\cos\Theta_t,
\]

where in the limit \( \Theta_t \) is the angle between \( v(t) \) and \( v'(t) \). This proves the formula in the euclidean case.

For the general case, let \( t \mapsto v(t) \) be a curve in \( V \) such that \( \exp_x v(t) = \gamma(t) \).
Let

\[
\alpha_t(s) = \exp_x(sv(t)), \quad 0 \leqq s \leqq 1,
\]

so that \( \alpha_t \) is the geodesic between \( x \) and \( \gamma(t) = \alpha_t(1) \). Then

\[
\alpha_t'(1) = T\exp_x(v(t))v(t) \quad\text{and}\quad \gamma'(t) = T\exp_x(v(t))v'(t).
\]

By the global Gauss lemma, Proposition 3.2, we have

\[
\left\langle \alpha_t'(1), \gamma'(t)\right\rangle_g = \left\langle v(t), v'(t)\right\rangle_{g(x)},
\]

where the scalar product on the left is taken in the tangent space at \( \gamma(t) \), and the scalar product on the right is taken in the tangent space at \( x \). By the usual formula for scalar products, we obtain

\[
\|\alpha_t'(1)\|_g\,\|\gamma'(t)\|_g\cos\theta(t) = \|v(t)\|\,\|v'(t)\|\cos\Theta_t.
\]

Since \( \|\alpha_t'(1)\|_g = \|v(t)\| \) (geodesics have constant speed), this yields \( \|\gamma'(t)\|_g\cos\theta(t) = \|v'(t)\|\cos\Theta_t \). But \( \ell(t) = \|v(t)\| \), so the euclidean case gives

\[
\ell'(t) = \|v'(t)\|\cos\Theta_t = \|\gamma'(t)\|_g\cos\theta(t),
\]

which concludes the proof.
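Theorem 4.6 can be illustrated numerically in the Poincaré disk model of the hyperbolic plane (an added example, not from the text). Take \( x = 0 \); the geodesic distance from \( 0 \) to \( z \) is \( 2\operatorname{artanh}|z| \), the hyperbolic speed of a curve is \( 2|z'|/(1 - |z|^2) \), and since the model is conformal, the angle \( \theta \) between \( \gamma \) and the radial geodesic is the Euclidean angle between \( z'(t) \) and the radial direction \( z/|z| \). The sample curve below is an invented test case.

```python
import math

# Illustration of Theorem 4.6 in the Poincare disk model, with x = 0:
#   ell(t)    = 2 artanh|z(t)|          (geodesic distance from 0)
#   |gamma'|  = 2|z'| / (1 - |z|^2)     (hyperbolic speed)
#   cos theta = Euclidean cosine between z' and the radial direction z/|z|.
def z(t):
    return complex(0.3 + 0.1 * t, 0.2 * math.sin(t))  # sample curve avoiding 0

h, t0 = 1e-6, 0.5
z0 = z(t0)
zp = (z(t0 + h) - z(t0 - h)) / (2 * h)                 # z'(t0) numerically

ell_prime = (2 * math.atanh(abs(z(t0 + h))) - 2 * math.atanh(abs(z(t0 - h)))) / (2 * h)
speed = 2 * abs(zp) / (1 - abs(z0)**2)
cos_theta = (z0.conjugate() * zp).real / (abs(z0) * abs(zp))
assert abs(ell_prime - speed * cos_theta) < 1e-6       # ell' = |gamma'| cos theta
```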
Corollary 4.7. Let \( X \) be a Cartan-Hadamard manifold. Let \( x \in X \) and let \( \gamma \) be a geodesic which does not contain \( x \) . Then the distance \( d\left( {x,\gamma \left( t\right) }\right) \) has a unique minimum for some value \( {t}_{0} \) . The geodesic from \( x \) to \( \gamma \left( {t}_{0}\right) \) is perpendicular to \( \gamma \) at \( \gamma \left( {t}_{0}\right) \) .
Proof. That the distance has a minimum comes from the fact that the geodesic distance goes to infinity as \( t \rightarrow \pm \infty \) . Because the line is locally compact, there is some minimum, and the convexity Theorem 4.4 shows that this is the only minimum, with the distance being strictly decreasing for \( t \leqq {t}_{0} \) and strictly increasing for \( t \geqq {t}_{0} \) . Theorem 4.6 concludes the proof.
Theorem 4.8. Let \( X \) be a Cartan-Hadamard manifold. Let \( ABC \) be a geodesic triangle whose angles are \( A, B, C \) and whose sides are geodesics of lengths \( a, b \), and \( c \). Then:

(i) \( a^2 + b^2 \leqq c^2 + 2ab\cos C \);

(ii) \( A + B + C \leqq \pi \).
Proof. Let \( x \) be the vertex of angle \( C \). Let \( \exp_x(v) \) and \( \exp_x(w) \) with \( v, w \in T_x \) be the vertices with angles \( A, B \) respectively. Then the geodesic sides of angle \( C \) are \( \alpha, \beta \) respectively, with

\[
\alpha(s) = \exp_x(sv) \quad\text{and}\quad \beta(s) = \exp_x(sw),
\]

and \( 0 \leqq s \leqq 1 \). We let \( \Theta \) be the angle between \( v \) and \( w \) in \( T_x \), and \( \theta \) the angle between the sides \( \alpha \) and \( \beta \) at \( x \), so \( \cos\theta = \cos C \) by definition. Actually, we have

(4)

\[
\cos\theta = \cos\Theta.
\]

Indeed,

\[
\left\langle \alpha'(0), \beta'(0)\right\rangle_g = \left\langle T\exp_x(0)v, T\exp_x(0)w\right\rangle_{g(x)} = \langle v, w\rangle_{g(x)}.
\]

The left side is equal to \( \|\alpha'(0)\|_g\|\beta'(0)\|_g\cos\theta \), and the right side is equal to \( \|v\|_x\|w\|_x\cos\Theta \). Trivially \( \alpha'(0) = v \) and \( \beta'(0) = w \), so (4) follows. So far, we have not used seminegative curvature. It comes next.

We have \( a^2 + b^2 = \operatorname{dist}(v, w)^2 + 2ab\cos\Theta \) in \( T_x \). By the distance-increasing property of the exponential map, inequality (i) follows.

As for (ii), since each geodesic side of the geodesic triangle has length at most equal to the sum of the other two sides, it follows that there exists a euclidean triangle with sides of lengths \( a, b, c \).
Let \( \Theta_C \) be the angle of this euclidean triangle corresponding to \( C \). Then

\[
a^2 + b^2 = c^2 + 2ab\cos\Theta_C.
\]

By (i) it follows that \( \cos C \geqq \cos\Theta_C \), and hence \( \Theta_C \geqq C \). Similarly, \( \Theta_A \geqq A \) and \( \Theta_B \geqq B \). But

\[
\Theta_A + \Theta_B + \Theta_C = \pi.
\]

This proves (ii) and concludes the proof of the theorem.
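Part (ii) can be checked numerically on the hyperbolic plane, the model Cartan-Hadamard surface (an added example, not from the text): given side lengths, each angle is recovered from the hyperbolic law of cosines, and the three angles always sum to at most \( \pi \).

```python
import math

# Numerical check of Theorem 4.8(ii) on the hyperbolic plane.  Given the three
# side lengths of a hyperbolic triangle, the angle opposite side c satisfies
#   cosh c = cosh a cosh b - sinh a sinh b cos C.
def angle(a, b, c):
    """Angle opposite side c, adjacent to sides a and b."""
    num = math.cosh(a) * math.cosh(b) - math.cosh(c)
    den = math.sinh(a) * math.sinh(b)
    return math.acos(max(-1.0, min(1.0, num / den)))

def angle_sum(a, b, c):
    return angle(a, b, c) + angle(b, c, a) + angle(c, a, b)

for (a, b, c) in [(1.0, 1.0, 1.0), (0.5, 0.7, 1.0), (2.0, 3.0, 4.0)]:
    assert angle_sum(a, b, c) <= math.pi

print(angle_sum(1.0, 1.0, 1.0))  # noticeably less than pi
```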
Proposition 5.1. Let \( \eta : J \rightarrow TX \) be a lift of \( \alpha \) in \( TX \). Then

\[
\eta(t) = P^t \sum_{k=0}^{m} D_{\alpha'}^k\eta(0)\frac{t^k}{k!} + O(t^{m+1}) \quad\text{for } t \to 0;
\]

or alternatively,

\[
\eta(t) = \sum_{k=0}^{m} \gamma\left(t, D_{\alpha'}^k\eta(0)\right)\frac{t^k}{k!} + O(t^{m+1}) \quad\text{for } t \to 0.
\]
Proof. The second expression is merely a reformulation of the first, taking into account the definition of parallel translation. Since the formula concerns the limit \( t \to 0 \), it is local, and we may prove it in a chart, so we use \( \eta, \gamma \) to denote the vector components \( \eta_U, \gamma_U \) in a chart \( U \), suppressing the index \( U \). Let

\[
\beta(t) = \eta(t) - \sum_{k=0}^{m}\gamma\left(t, D_{\alpha'}^k\eta(0)\right)\frac{t^k}{k!}.
\]

From the existence and uniqueness of the ordinary Taylor formula, it will suffice to prove that for the ordinary derivatives of \( \beta \), we have

\[
\partial^k\beta(0) = \beta^{(k)}(0) = 0 \quad\text{for}\quad k = 0, \ldots, m.
\]

By definition, note that \( \beta(0) = 0 \). Let \( w_k = D_{\alpha'}^k\eta(0) \). Since \( D_{\alpha'}\gamma = 0 \), we have

\[
D_{\alpha'}^j\beta(t) = D_{\alpha'}^j\eta(t) - \sum_{k \geqq j}\gamma(t, w_k)\frac{t^{k-j}}{(k-j)!}.
\]

Therefore

\[
D_{\alpha'}^j\beta(0) = D_{\alpha'}^j\eta(0) - \gamma(0, w_j) = w_j - w_j = 0.
\]

The vanishing of the ordinary derivatives \( \partial^j\beta(0) \) then follows from Lemma 5.2 below.
Lemma 5.2. Let \( \beta : J \rightarrow \mathbf{E} \) be the vector component of a lift of \( \alpha \) . If \( {D}_{{\alpha }^{\prime }}^{j}\beta \left( 0\right) = 0 \) for \( 0 \leqq j \leqq m \) then \( {\partial }^{j}\beta \left( 0\right) = 0 \) for \( 0 \leqq j \leqq m \) .
Proof. By definition,

\[
D_{\alpha'}\beta = \beta' - B(\alpha; \alpha', \beta).
\]

Hence \( D_{\alpha'}\beta(0) = \beta'(0) \). We can proceed by induction. Let us carry out the case of the second derivative so the reader sees what is going on. Suppose in addition that \( D_{\alpha'}^2\beta(0) = 0 \). From the definitions, we get

\[
D_{\alpha'}^2\beta = \beta'' - \left[ \partial_1 B(\alpha; \alpha', \beta)\alpha' + B(\alpha; \alpha'', \beta) + B(\alpha; \alpha', \beta') \right] - B\left(\alpha; \alpha', \beta' - B(\alpha; \alpha', \beta)\right).
\]

Since \( \beta(0) = \beta'(0) = D_{\alpha'}\beta(0) = 0 \), we find that

\[
0 = D_{\alpha'}^2\beta(0) = \beta''(0),
\]

thus proving the assertion for \( m = 2 \). The inductive proof is the same in general.
Proposition 5.3. Suppose that \( \alpha \) is a geodesic. Let \( w \in T_{\alpha(0)}X \) and let \( \eta_w \) be the Jacobi lift of \( \alpha \) such that \( \eta_w(0) = 0 \) and \( D_{\alpha'}\eta_w(0) = w \). Then

\[
\eta_w(t) = P^t\left[ tw + R\left(\alpha'(0), w, \alpha'(0)\right)\frac{t^3}{3!}\right] + O(t^4) \quad\text{for } t \to 0.
\]
Proof. We plug into Proposition 5.1. Since \( D_{\alpha'}^2\eta_w = R(\alpha', \eta_w, \alpha') \) contains \( \eta_w \) linearly, the evaluation of the second-order term of the Taylor expansion at 0 is 0. As for the third-order term, we have to use the chain rule. To be sure we don't forget anything, we should write more precisely

\[
R(\alpha', \eta_w, \alpha') = R(\alpha; \alpha', \eta_w, \alpha')
\]

to make explicit the dependence on the extra position variable. But it turns out that this does not matter in the end, because no matter what, the chain rule gives

\[
D_{\alpha'}^3\eta_w = R(\alpha', D_{\alpha'}\eta_w, \alpha') + \text{terms containing } \eta_w \text{ linearly},
\]

so \( D_{\alpha'}^3\eta_w(0) = R(\alpha'(0), w, \alpha'(0)) \), which proves the proposition.
Proposition 5.4. Let \( (X, g) \) be a pseudo-Riemannian manifold, and let \( x \in X \). Fix \( v, w \in T_xX \). Then

\[
\exp_x^*(g)(tv)(w, w) = w^2 + \frac{1}{3}R_2(v, w)t^2 + O(t^3) \quad\text{for } t \to 0,
\]

where we recall that \( R_2(v, w) = R(v, w, v, w) \).
Proof. From the theory of Jacobi lifts, applied to \( \alpha(t) = \exp_x(tv) \), we have the formula

\[
\frac{1}{t}\eta_w(t) = T\exp_x(tv)w.
\]

Therefore, modulo functions which are \( O(t^3) \) for \( t \to 0 \), we get from Proposition 5.3

\[
\exp_x^*(g)(tv)(w, w) \equiv \left\langle \frac{1}{t}\eta_w(t), \frac{1}{t}\eta_w(t)\right\rangle_{g(\alpha(t))}
\]

\[
\equiv \left\langle P^t\left[w + R(v, w, v)\frac{t^2}{3!}\right], P^t\left[w + R(v, w, v)\frac{t^2}{3!}\right]\right\rangle_{g(\alpha(t))}
\]

\[
\equiv \left\langle w + R(v, w, v)\frac{t^2}{3!}, w + R(v, w, v)\frac{t^2}{3!}\right\rangle_{g(x)}
\]

\[
\equiv w^2 + 2R_2(v, w)\frac{t^2}{3!},
\]

which proves the proposition, since \( 2/3! = 1/3 \).
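Proposition 5.4 can be checked in a constant-curvature example (an added illustration, not from the text). In the hyperbolic plane, for orthonormal \( v, w \), the pullback metric in geodesic polar coordinates is \( \exp_x^*(g)(tv)(w, w) = \sinh^2(t)/t^2 \), and with this book's sign convention (\( R_2 \geqq 0 \) meaning seminegative curvature) one has \( R_2(v, w) = 1 \) there, so the proposition predicts the expansion \( 1 + \tfrac{1}{3}t^2 \) plus higher-order terms.

```python
import math

# Numerical check of Proposition 5.4 in the hyperbolic plane: for orthonormal
# v, w the pullback metric is sinh(t)^2 / t^2, and with R_2(v, w) = 1 the
# proposition predicts sinh(t)^2 / t^2 = 1 + t^2/3 + (higher order).
def pullback(t):
    return math.sinh(t)**2 / t**2

for t in (0.1, 0.05, 0.01):
    remainder = pullback(t) - (1.0 + t**2 / 3.0)
    assert abs(remainder) < t**4   # the remainder is of higher order in t
```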
Lemma 1.1. Let \( (X, g) \) be a pseudo-Riemannian manifold and let \( \alpha \) be a geodesic. Let \( \eta \) be a Jacobi lift, and

\[
f(s) = \eta(s)^2 = \langle \eta(s), \eta(s)\rangle_g.
\]

Then

\[
f' = 2\left\langle D_{\alpha'}\eta, \eta\right\rangle_g \quad\text{and}\quad f'' = 2R_2(\alpha', \eta) + 2\left(D_{\alpha'}\eta\right)^2.
\]

If \( X \) is Riemannian and \( R_2 \geqq 0 \) (seminegative curvature), then \( f'' \geqq 0 \).
Proof. The first derivative comes from the defining property of the Levi-Civita (metric) derivative along curves, as in Chapter VIII, Theorem 4.3. The same reference then also yields the second derivative

\[
f'' = 2\left\langle D_{\alpha'}^2\eta, \eta\right\rangle_g + 2\left\langle D_{\alpha'}\eta, D_{\alpha'}\eta\right\rangle_g = 2\left\langle R(\alpha', \eta)\alpha', \eta\right\rangle_g + 2\left(D_{\alpha'}\eta\right)^2
\]

by the Jacobi differential equation. This proves the formulas. The final statement is then immediate, thus concluding the proof.
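The formula for \( f'' \) can be verified on an explicit Jacobi lift (an added example, not from the text): in the hyperbolic plane, along a unit-speed geodesic \( \alpha \) with parallel unit normal \( E(s) \), the lift \( \eta(s) = \sinh(s)E(s) \) is Jacobi, and with this book's sign convention \( R_2(\alpha', E) = 1 \), so the lemma asserts \( f'' = 2\sinh^2(s) + 2\cosh^2(s) \) for \( f(s) = \sinh^2(s) \).

```python
import math

# Check of Lemma 1.1 on an explicit Jacobi lift in the hyperbolic plane:
# eta(s) = sinh(s) E(s) with E parallel and unit normal, R_2(alpha', E) = 1,
# so f = sinh^2 and f'' = 2 R_2(alpha', eta) + 2 (D eta)^2
#                      = 2 sinh(s)^2 + 2 cosh(s)^2.
def f(s):
    return math.sinh(s)**2

h = 1e-4
for s in (0.0, 0.5, 1.3):
    f2 = (f(s + h) - 2 * f(s) + f(s - h)) / h**2       # numerical f''
    predicted = 2 * math.sinh(s)**2 + 2 * math.cosh(s)**2
    assert abs(f2 - predicted) < 1e-5
```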
Theorem 1.2. Let \( X \) be a Riemannian manifold with \( {R}_{2} \geqq 0 \) (semi-negative curvature). Let \( \alpha \) be a geodesic and \( \eta \) a Jacobi lift with \( \eta \left( 0\right) = 0 \) but \( {D}_{{\alpha }^{\prime }}\eta \left( 0\right) \neq 0 \) . Let\n\n\[ f\left( s\right) = \eta {\left( s\right) }^{2}. \]\n\nThen \( f\left( 0\right) = {f}^{\prime }\left( 0\right) = 0 \) . Furthermore, we have convexity:\n\n\[ {f}^{\prime \prime }\left( s\right) \geqq 0\;\text{ for all }s. \]\n\nThus \( {f}^{\prime }\left( s\right) \leqq 0 \) for \( s < 0 \) and \( {f}^{\prime }\left( s\right) \geqq 0 \) for \( s > 0 \), with the corresponding semi-decreasing and semi-increasing properties of \( f \) for \( s \leqq 0 \) and \( s \geqq 0 \) respectively.
Proof. Immediate from the definitions and assumption on \( {R}_{2} \), taking Lemma 1.1 into account.
Lemma 1.3. Let \( \eta \) be the Jacobi lift of \( \alpha \) coming from its \( \left( {\beta ,\zeta }\right) \)-variation at the beginning point. Let \( f = {\eta }^{2} \). Then

\[
{f}^{\prime }\left( 0\right) = 2{\left\langle {D}_{\eta \left( 0\right) }\zeta ,\eta \left( 0\right) \right\rangle }_{g}.
\]
Proof. Starting with the expression in Lemma 1.1, we get\n\n\[ \n{f}^{\prime }\left( 0\right) = 2{\left\langle {D}_{{\alpha }^{\prime }\left( 0\right) }\eta ,\eta \right\rangle }_{g}\left( 0\right)\n\]\n\n\[ \n= 2{\left\langle {D}_{\zeta \left( 0\right) }\eta ,\eta \left( 0\right) \right\rangle }_{g}\n\]\n\n\[ \n= 2{\left\langle {D}_{\eta \left( 0\right) }\zeta ,\eta \left( 0\right) \right\rangle }_{g}.\n\]\nIn this step, we need for the covariant derivatives of curves the analogue of the formula for the covariant derivative of vector fields, with the difference being formally equal to \( \left\lbrack {\eta ,\zeta }\right\rbrack \left( 0\right) \). Furthermore, from (1) and (2), \( \eta \) and \( \zeta \) are obtained as the images under \( \sigma \) of the commuting vertical and horizontal unit vector fields in the \( \left( {s, t}\right) \) -plane, so the bracket is equal to 0 . We let the reader fill in the details of the above arguments, to conclude the proof.
Proposition 1.4. Let \( X \) be a Riemannian manifold and let \( Y \) be a totally geodesic submanifold. Let \( \alpha \) be a geodesic in \( X \), \( \alpha \left( 0\right) = y \in Y \). Let \( \sigma \) be the \( \left( {\beta ,\zeta }\right) \)-variation of \( \alpha \) defined above. We suppose that \( \beta \) is a geodesic in \( Y \), so in particular, \( {\beta }^{\prime }\left( 0\right) = z \in {T}_{y}Y \). Let \( \eta \) be the corresponding Jacobi lift of \( \alpha \), and let \( f = {\eta }^{2} \).

(i) If \( \zeta \) is orthogonal to \( Y \), i.e. its values are in \( {NY} \), then \( {f}^{\prime }\left( 0\right) = 0 \).

(ii) If in addition \( X \) has \( {R}_{2} \geqq 0 \) (seminegative curvature), then \( f\left( s\right) \) is weakly decreasing for \( s \leqq 0 \), weakly increasing for \( s \geqq 0 \), and

\[ f\left( s\right) \geqq f\left( 0\right) \;\text{ for all }s, \]

so \( \parallel \eta \left( s\right) \parallel \geqq \parallel \eta \left( 0\right) \parallel \) for all \( s \).
Proof. Since \( Y \) is totally geodesic, the second fundamental form satisfies \( {h}_{12}\left( {\eta ,\zeta }\right) \left( 0\right) = 0 \) by Theorem 1.4 of Chapter XIV. Then combining Theorem 1.5 of Chapter XIV and Lemma 1.3 which was just proved, we obtain \( {f}^{\prime }\left( 0\right) = 0 \). The other assertions are immediate from the convexity \( {f}^{\prime \prime }\left( s\right) \geqq 0 \) of Lemma 1.1. This concludes the proof.
Lemma 2.1. Let \( \beta \) be a geodesic in \( Y \) with \( \beta \left( 0\right) = y \) and \( {\beta }^{\prime }\left( 0\right) = z \). For \( v \in {N}_{{y}_{0}}Y \), let \( {\beta }_{v}\left( t\right) = \left( {\beta \left( t\right), v}\right) \). Let

\[
{\varphi }_{v}\left( t\right) = E\left( {{\beta }_{v}\left( t\right) }\right) = {\exp }_{\beta \left( t\right) }{P}_{{y}_{0}}^{\beta \left( t\right) }\left( v\right) .
\]

Then

\[
{\varphi }_{v}^{\prime }\left( 0\right) = {TE}\left( {y, v}\right) \left( {z,0}\right) .
\]
Proof. This is just the chain rule.
Proposition 2.2. Let \( X \) be convex and let \( Y \) be a totally geodesic submanifold. Let \( {y}_{0}, y \in Y \) . Let \( v \in {N}_{{y}_{0}}Y \) . Let \( \beta ,\zeta \) be the curves defined in (1) and (2) above, and let \( \eta \) be the Jacobi lift associated with the variation \( \sigma \) defined in (3). Then\n\n\[ \eta \left( 0\right) = z\;\text{ and }\;\eta \left( 1\right) = {TE}\left( {y, v}\right) \left( {z,0}\right) . \]
Proof. Putting \( s = 0 \) in the definition of \( \sigma \), we obtain\n\n\[ \sigma \left( {0, t}\right) = {\exp }_{\beta \left( t\right) }\left( 0\right) = \beta \left( t\right) \]\n\nso the value \( \eta \left( 0\right) = z \) drops out. For \( \eta \left( 1\right) \), we just apply Lemma 2.1 to conclude the proof.
Theorem 2.3. Let \( X \) be a convex complete Riemannian manifold, and let \( Y \) be a totally geodesic submanifold. Then \( Y \) is also convex complete. Let \( {y}_{0} \in Y \) and let\n\n\[ \n{P}_{{y}_{0}} : Y \times {N}_{{y}_{0}}Y \rightarrow {NY} \n\]\n\nbe the map such that for each \( y \in Y \) and \( v \in {N}_{{y}_{0}}Y \) we have\n\n\[ \n{P}_{{y}_{0}}\left( {y, v}\right) = {P}_{{y}_{0}}^{y}\left( v\right) \n\]\n\nThen \( {P}_{{y}_{0}} \) is a vector bundle isomorphism, trivializing the normal bundle.
Proof. This simply amounts to the fact that flows of differential equations depend smoothly on parameters, and that parallel translation is invertible by parallel translation along the reverse geodesic.

Given a chart \( U \) of \( Y \) at \( {y}_{0} \), it follows that \( U \times {N}_{{y}_{0}}Y \) is a chart at the corresponding point in \( {NY} \). Of course, \( Y \) itself admits a global chart, given for instance by its own exponential mapping at \( {y}_{0} \). So once the point \( {y}_{0} \) is selected, there is a canonical way of constructing a global chart for the normal bundle. The next application will be global.

We shall always take \( Y \times {N}_{{y}_{0}}Y \) with its Riemannian product structure. Thus \( Y \) has the Riemann metric restricted from \( X \), and \( {N}_{{y}_{0}} \) has its positive definite scalar product restricted from \( {T}_{{y}_{0}}X \), so the product metric on \( Y \times {N}_{{y}_{0}}Y \) is well defined.
Theorem 2.4 (Wu). Let \( X \) be a Cartan-Hadamard manifold. Let \( Y \) be a totally geodesic submanifold. Fix a point \( {y}_{0} \in Y \) . Let\n\n\[ E : Y \times {N}_{{y}_{0}}Y \rightarrow X \]\n\nbe defined by \( E\left( {y, v}\right) = {\exp }_{y}{P}_{{y}_{0}}^{y}\left( v\right) \) for \( v \in {N}_{{y}_{0}}Y \) . Then \( E \) is metric semi-increasing.
Proof. For \( z \in {T}_{y}Y \) and \( v, w \in {N}_{{y}_{0}}Y \) we have to show that\n\n\[ \parallel {TE}\left( {y, v}\right) \left( {z, w}\right) \parallel \geqq \parallel \left( {z, w}\right) \parallel . \]\n\nThe product Hilbert space metric by definition gives\n\n\[ \parallel \left( {z, w}\right) {\parallel }^{2} = \parallel \left( {z,0}\right) {\parallel }^{2} + \parallel \left( {0, w}\right) {\parallel }^{2} = \parallel z{\parallel }^{2} + \parallel w{\parallel }^{2}. \]\n\nThe Gauss Lemma 5.6 of Chapter VIII, §5 implies that\n\n\[ \parallel {TE}\left( {y, v}\right) \left( {z, w}\right) {\parallel }^{2} = \parallel {TE}\left( {y, v}\right) \left( {z,0}\right) {\parallel }^{2} + \parallel {TE}\left( {y, v}\right) \left( {0, w}\right) {\parallel }^{2}. \]\n\nHence we need only prove separately that\n\n\[ \parallel {TE}\left( {y, v}\right) \left( {z,0}\right) \parallel \geqq \parallel \left( {z,0}\right) \parallel = \parallel z\parallel \]\n\nand\n\n\[ \parallel {TE}\left( {y, v}\right) \left( {0, w}\right) \parallel \geqq \parallel \left( {0, w}\right) \parallel = \parallel w\parallel . \]\n\nThe second inequality is simply the metric semi-increasing property of Chapter IX, Theorem 3.6. As to the first inequality, we may now quote Proposition 1.4 (ii) and Lemma 2.1 to conclude the proof.
Lemma 2.6. Let \( \mathbf{E},\mathbf{F} \) be Banach spaces. Let \( \{ A\left( s\right) \} \left( {0 \leqq s \leqq r}\right) \) be a continuous family of bounded operators, such that \( A\left( s\right) : \mathbf{E} \rightarrow \mathbf{F} \) is invertible for \( 0 \leqq s < r \), and there is a uniform bound \( c > 0 \) such that

\[ \left| {A{\left( s\right) }^{-1}}\right| \leqq c\;\text{ for }\;0 \leqq s < r. \]

Then \( \mathop{\lim }\limits_{{s \rightarrow r}}A{\left( s\right) }^{-1} \) exists and is a bounded operator inverse of \( A\left( r\right) \).
Proof. We write

\[ A{\left( s\right) }^{-1} - A{\left( {s}^{\prime }\right) }^{-1} = A{\left( {s}^{\prime }\right) }^{-1}\left( {A\left( {s}^{\prime }\right) - A\left( s\right) }\right) A{\left( s\right) }^{-1}. \]

Taking norms gives

\[ \left| {A{\left( s\right) }^{-1} - A{\left( {s}^{\prime }\right) }^{-1}}\right| \leqq {c}^{2}\left| {A\left( {s}^{\prime }\right) - A\left( s\right) }\right| . \]

Since \( A \) is uniformly continuous on \( \left\lbrack {0, r}\right\rbrack \), the family \( \left\{ {A{\left( s\right) }^{-1}}\right\} \) is Cauchy as \( s \rightarrow r \), so it has a limit, which is the desired inverse of \( A\left( r\right) \) by continuity.
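A finite-dimensional sketch of Lemma 2.6, with an illustrative \( 2 \times 2 \) family chosen for this note (not from the text): the inverses stay uniformly bounded on \( \left\lbrack {0, r}\right) \), and the limit of \( A{\left( s\right) }^{-1} \) as \( s \rightarrow r \) is the inverse at the endpoint.

```python
# Sample family A(s) = [[1, s], [0, 1]] on [0, r]: det = 1, so the inverses
# [[1, -s], [0, 1]] are uniformly bounded, and they converge as s -> r.
r = 1.0

def A(s):
    return [[1.0, s], [0.0, 1.0]]

def inv2(m):
    # explicit inverse of a 2x2 matrix
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

limit = inv2(A(r))              # inverse at the endpoint
approx = inv2(A(r - 1e-9))      # inverse just before the endpoint
err = max(abs(approx[i][j] - limit[i][j]) for i in range(2) for j in range(2))
print(err)
```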
Lemma 3.1. Let the notation be as above, with \( \varphi \left( s\right) = f\left( s\right) /{s}^{2} \) . Then\n\n\[ \n{\varphi }^{\prime }\left( s\right) = \frac{2}{{s}^{2}}{\eta }^{2}\left( s\right) \left( {h\left( s\right) - \frac{1}{s}}\right) \;\text{ so }\;{\varphi }^{\prime }/\varphi \left( s\right) = 2\left( {h\left( s\right) - \frac{1}{s}}\right) \n\]\n\nand\n\n\[ \n{\varphi }^{\prime \prime }\left( s\right) = \frac{{f}^{\prime \prime }\left( s\right) }{{s}^{2}} - \frac{4{f}^{\prime }\left( s\right) }{{s}^{3}} + \frac{{6f}\left( s\right) }{{s}^{4}}. \n\]
Proof. Ordinary differentiation.
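The "ordinary differentiation" can be checked by finite differences. The sample \( f\left( s\right) = {\sinh }^{2}s \) below (the squared norm of a hyperbolic Jacobi lift) is an illustrative assumption; the identities of Lemma 3.1 hold for any smooth \( f \) with \( h = {f}^{\prime }/{2f} \).

```python
import math

# Finite-difference check of Lemma 3.1 for the sample f(s) = sinh(s)^2,
# with h = <D eta, eta>/eta^2 = f'/(2f) and phi(s) = f(s)/s^2.
f   = lambda s: math.sinh(s) ** 2
fp  = lambda s: math.sinh(2 * s)        # f'  = 2 sinh s cosh s = sinh 2s
fpp = lambda s: 2 * math.cosh(2 * s)    # f'' = 2 cosh 2s
phi = lambda s: f(s) / s**2
h   = lambda s: fp(s) / (2 * f(s))

s, d = 0.8, 1e-4
phi_p  = (phi(s + d) - phi(s - d)) / (2 * d)
phi_pp = (phi(s + d) - 2 * phi(s) + phi(s - d)) / d**2

# phi' = (2/s^2) f(s) (h(s) - 1/s)   and   phi'' = f''/s^2 - 4f'/s^3 + 6f/s^4
assert abs(phi_p - (2 / s**2) * f(s) * (h(s) - 1 / s)) < 1e-6
assert abs(phi_pp - (fpp(s) / s**2 - 4 * fp(s) / s**3 + 6 * f(s) / s**4)) < 1e-5
print("Lemma 3.1 formulas verified numerically")
```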
Lemma 3.2. Let \( \alpha = {\pi \eta } \). Then\n\n\[ \n{h}^{\prime } = \frac{{\left( {D}_{ * }\eta \right) }^{2}}{{\eta }^{2}} + \frac{\left\langle {D}_{ * }^{2}\eta ,\eta \right\rangle }{{\eta }^{2}} - 2\frac{{\left\langle {D}_{ * }\eta ,\eta \right\rangle }^{2}}{{\left( {\eta }^{2}\right) }^{2}}\n\]\n\n\[ \n= \frac{{\mu }^{2}}{{\eta }^{2}} + \frac{{R}_{2}\left( {{\alpha }^{\prime },\eta }\right) }{{\eta }^{2}} - {h}^{2}.\n\]
Proof. The first equation for \( {h}^{\prime } \) is immediate from the definition of the Levi-Civita metric derivative. The second comes from the definition of \( {R}_{2} \) and the Jacobi equation for \( \eta \), as well as the definition of the orthogonal term. This concludes the proof.
Lemma 3.3. Let \( {h}_{1}, h \) be a pair of functions on some interval, satisfying\n\n\[ \n{h}_{1}^{\prime } \leqq - {h}_{1}^{2}\;\text{ and }\;{h}^{\prime } \geqq - {h}^{2}.\n\]\n\nThen\n\n\[ \n{\left( \left( {h}_{1} - h\right) {e}^{\int \left( {{h}_{1} + h}\right) }\right) }^{\prime } \leqq 0.\n\]\n\nSo if \( {h}_{1}\left( {s}_{1}\right) \geqq h\left( {s}_{1}\right) \) for some \( {s}_{1} \) in the interval, then\n\n\[ \n{h}_{1}\left( s\right) \geqq h\left( s\right) \;\text{ for }\;s \leqq {s}_{1}.\n\]
Proof. First note that a constant of integration added to the indefinite integral in the inequality would not affect the truth of the inequality. Next, routine differentiation yields\n\n\[ \n{\left( \left( {h}_{1} - h\right) {e}^{\int \left( {{h}_{1} + h}\right) }\right) }^{\prime } = \left( {{h}_{1}^{\prime } - {h}^{\prime } + {h}_{1}^{2} - {h}^{2}}\right) {e}^{\int \left( {{h}_{1} + h}\right) }.\n\]\n\nThe exponential term on the right is \( > 0 \), and its coefficient is \( \leqq 0 \) by hypothesis, thus concluding the proof of the first inequality. It follows that the function \( \left( {{h}_{1} - h}\right) \exp \left( {\int \left( {{h}_{1} + h}\right) }\right) \) is semi-decreasing. If \( {h}_{1}\left( {s}_{1}\right) \geqq h\left( {s}_{1}\right) \) at some point \( {s}_{1} \), then this function is \( \geqq 0 \) for \( s \leqq {s}_{1} \), thus concluding the proof.
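Lemma 3.3 can be illustrated with a concrete pair of functions, chosen for this sketch and not from the text: \( {h}_{1}\left( s\right) = 1/s \) satisfies \( {h}_{1}^{\prime } = - {h}_{1}^{2} \), and \( h\left( s\right) = 1/s + 1 \) satisfies \( {h}^{\prime } = - 1/{s}^{2} \geqq - {\left( 1/s + 1\right) }^{2} = - {h}^{2} \).

```python
import math

# With int (h1 + h) = 2 log s + s, the lemma's function is
#   g(s) = (h1 - h) * exp(2 log s + s) = -s^2 * e^s,
# which the lemma predicts is (weakly) decreasing.
def g(s):
    h1, h = 1 / s, 1 / s + 1
    return (h1 - h) * math.exp(2 * math.log(s) + s)

xs = [0.1 * k for k in range(1, 30)]
vals = [g(x) for x in xs]
decreasing = all(a >= b for a, b in zip(vals, vals[1:]))
print(decreasing)
```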
Theorem 3.4. Let \( X \) be a Riemannian manifold and let \( \eta \) be a Jacobi lift of a geodesic \( \alpha \) in \( X \). Assume \( \eta \left( 0\right) = 0 \) but \( {D}_{ * }\eta \left( 0\right) \neq 0 \). Suppose \( {R}_{2} \geqq 0 \) (seminegative curvature). Let \( h \) be as in (1), defined on an interval \( J = \left( {0, b}\right) \) such that \( \eta \left( s\right) \neq 0 \) for \( s \in J \). Then

\[
\frac{1}{s} \leqq h\left( s\right) \;\text{ for }s \in J.
\]

In other words, the function \( \varphi \left( s\right) = {\eta }^{2}\left( s\right) /{s}^{2} \) is semi-increasing on \( J \).
Proof. Suppose \( 1/{s}_{1} > h\left( {s}_{1}\right) \) for some \( {s}_{1} \in J \). Then for some \( \delta > 0 \),

\[
\frac{1}{{s}_{1} + \delta } \geqq h\left( {s}_{1}\right) .
\]

Let \( {h}_{1}\left( s\right) = 1/\left( {s + \delta }\right) \). Then \( {h}_{1}^{\prime } = - {h}_{1}^{2} \) and \( {h}_{1}\left( {s}_{1}\right) \geqq h\left( {s}_{1}\right) \). We apply Lemma 3.3 and let \( s \rightarrow 0 \) (so \( s \leqq {s}_{1} \)). Then \( {h}_{1}\left( s\right) \) stays bounded, but \( h\left( s\right) \rightarrow \infty \) because \( \eta \left( 0\right) = 0 \), a contradiction which proves the inequality \( 1/s \leqq h\left( s\right) \). By Lemma 3.1 we then have \( {\varphi }^{\prime } \geqq 0 \), so \( \varphi \) is semi-increasing. This proves the theorem.
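Theorem 3.4 can be illustrated in constant curvature \( - 1 \) (so \( {R}_{2} \geqq 0 \) in the convention of the text), an assumption made only for this sketch: there \( \eta \left( s\right) = \sinh \left( s\right) w \) along a geodesic, so \( h\left( s\right) = {f}^{\prime }/{2f} = \cosh \left( s\right) /\sinh \left( s\right) \).

```python
import math

# Hyperbolic sample: h(s) = coth(s) and phi(s) = sinh(s)^2 / s^2.
# Theorem 3.4 predicts 1/s <= h(s) and that phi is semi-increasing.
h   = lambda s: math.cosh(s) / math.sinh(s)
phi = lambda s: math.sinh(s) ** 2 / s**2

xs = [0.05 * k for k in range(1, 100)]
assert all(1 / s <= h(s) for s in xs)
assert all(phi(a) <= phi(b) for a, b in zip(xs, xs[1:]))
print("1/s <= h(s) and phi is semi-increasing on the sample grid")
```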
Proposition 4.1. Let \( X \) be a differential manifold modeled on a Banach space \( \mathbf{E} \) . Suppose that we are given a covering of \( X \) by open sets corresponding to charts \( U, V,\ldots \), and for each \( U \) we are given a morphism\n\n\[ \n{B}_{U} : U \rightarrow {L}_{\mathrm{{sym}}}^{2}\left( {\mathbf{E},\mathbf{E}}\right) \n\]\n\nsatisfying the transformation rule of Chapter IV, Proposition 3.3. In other words, for each change of chart by a differential isomorphism\n\n\[ \nh : U \rightarrow V \n\]\n\nwe have for \( v, w \in \mathbf{E} \) representing tangent vectors:\n\n\[ \n{B}_{V}\left( {h\left( x\right) ;{h}^{\prime }\left( x\right) v,{h}^{\prime }\left( x\right) w}\right) = {h}^{\prime \prime }\left( x\right) \left( {v, w}\right) + {h}^{\prime }\left( x\right) {B}_{U}\left( {x;v, w}\right) . \n\]\n\nThen there exists a unique covariant derivative \( D \) such that in a chart \( U \) for vector fields \( \eta ,\xi \) we have\n\n\[ \n{\left( {D}_{\xi }\eta \right) }_{U}\left( x\right) = {\eta }_{U}^{\prime }\left( x\right) {\xi }_{U}\left( x\right) - {B}_{U}\left( {x;{\xi }_{U}\left( x\right) ,{\eta }_{U}\left( x\right) }\right) . \n\]
The proof is routine, just like Proposition 3.4 of Chapter IV.
Lemma 4.2. Given a spray or covariant derivative on \( X \), there is a unique vector bundle morphism over \( {TX} \) , \n\n\[ \n{\kappa }_{2} : {TTX} \rightarrow {\pi }^{ * }{TX} \n\] \n\nsuch that over a chart \( U \), we have \n\n\( \left( {5}_{U}\right) \) \n\n\[ \n{\kappa }_{2, U}\left( {x, v, z, w}\right) = \left( {x, v, w - {B}_{U}\left( {x;v, z}\right) }\right) . \n\]
Proof. Let \( h : U \rightarrow V \) be a change of charts, i.e. a differential isomorphism. In Chapter IV, §3 we gave the change of chart \( \left( {2}_{U}\right) \) of \( {TTX} \). Let \( H = \left( {h,{h}^{\prime }}\right) \). Then the change of chart for \( {\left( TTX\right) }_{U} \) is given by the map

\[
\left( {U \times \mathbf{E}}\right) \times \mathbf{E} \times \mathbf{E}\xrightarrow[]{\left( {H,{H}^{\prime }}\right) }\left( {V \times \mathbf{E}}\right) \times \mathbf{E} \times \mathbf{E}
\]

such that

\[
\left( {H,{H}^{\prime }}\right) \left( {x, v, z, w}\right) = \left( {h\left( x\right) ,{h}^{\prime }\left( x\right) v,{h}^{\prime }\left( x\right) z,{h}^{\prime \prime }\left( x\right) \left( {v, z}\right) + {h}^{\prime }\left( x\right) w}\right) .
\]

Then

\[
{\kappa }_{2, V} \circ \left( {H,{H}^{\prime }}\right) \left( {x, v, z, w}\right) = \left( {h\left( x\right) ,{h}^{\prime }\left( x\right) v,{h}^{\prime }\left( x\right) \left( {w - {B}_{U}\left( {x;v, z}\right) }\right) }\right)
\]

because the term \( {h}^{\prime \prime }\left( x\right) \left( {v, z}\right) \) cancels in the last coordinate on the right, by the transformation rule for \( {B}_{U} \). Thus the local maps \( {\kappa }_{2, U} \) glue to a vector bundle morphism over \( {TX} \). This proves the lemma.
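The cancellation of the \( {h}^{\prime \prime } \) term can be checked numerically in dimension one. The chart change \( h\left( x\right) = x + {x}^{3} \) and the symmetric bilinear map \( {B}_{U}\left( {x;v, z}\right) = {xvz} \) below are arbitrary illustrative choices, with \( {B}_{V} \) built from the transformation rule of Proposition 4.1.

```python
# 1D sketch of the cancellation in Lemma 4.2.
h   = lambda x: x + x**3        # sample chart change (increasing, invertible)
hp  = lambda x: 1 + 3 * x**2    # h'
hpp = lambda x: 6 * x           # h''

B_U = lambda x, v, z: x * v * z  # sample symmetric B_U

def B_V(y, a, b):
    # B_V is determined by the transformation rule; recover x = h^{-1}(y)
    # by bisection, since h is strictly increasing.
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if h(mid) < y:
            lo = mid
        else:
            hi = mid
    x = (lo + hi) / 2
    v, z = a / hp(x), b / hp(x)
    return hpp(x) * v * z + hp(x) * B_U(x, v, z)

x, v, z, w = 0.4, 1.3, -0.7, 2.1
# Coordinates of (H, H')(x, v, z, w) that kappa_{2,V} acts on:
a, b, c = hp(x) * v, hp(x) * z, hpp(x) * v * z + hp(x) * w
last = c - B_V(h(x), a, b)           # last coordinate of kappa_{2,V}
expected = hp(x) * (w - B_U(x, v, z))
assert abs(last - expected) < 1e-9
print("h'' term cancels as claimed")
```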
Theorem 4.3 (Tensorial Splitting Theorem). Given a spray, or covariant derivative on a differential manifold \( X \), the map\n\n\[ \kappa = \left( {{\kappa }_{1},{\kappa }_{2}}\right) : {TTX} \rightarrow {\pi }^{ * }{TX}{ \oplus }_{TX}{\pi }^{ * }{TX} \]\n\n is a vector bundle isomorphism over \( {TX} \) . In the chart\n\n\[ {\left( TTX\right) }_{U} = \left( {U \times \mathbf{E}}\right) \times \mathbf{E} \times \mathbf{E} \]\n\n this map is given by\n\n \( \left( {6}_{U}\right) \)\n\n\[ {\kappa }_{U}\left( {x, v, z, w}\right) = \left( {x, v, z, w - {B}_{U}\left( {x;v, z}\right) }\right) . \]
Proof. With the notation \( h, H,\left( {H,{H}^{\prime }}\right) \) as in Lemma 4.2, we conclude that

\[ {\kappa }_{V} \circ \left( {H,{H}^{\prime }}\right) \left( {x, v, z, w}\right) = \left( {h\left( x\right) ,{h}^{\prime }\left( x\right) v,{h}^{\prime }\left( x\right) z,{h}^{\prime }\left( x\right) \left( {w - {B}_{U}\left( {x;v, z}\right) }\right) }\right) , \]

so the family \( \left\{ {\kappa }_{U}\right\} \) defines a VB morphism over \( {TX} \). The expression of the map in a chart shows that over \( U \) it is a VB isomorphism, which concludes the proof. Note that the map \( {\kappa }_{U} \) is represented by a \( 2 \times 2 \) matrix acting on the last two coordinates, having the identity on the diagonal.
Lemma 4.4. Let \( X \) be a manifold with a spray or covariant derivative \( D \) . There exists a unique vector bundle morphism (over \( \pi \) )\n\n\[ K : {TTX} \rightarrow {TX} \]\n\nsuch that for all vector fields \( \xi ,\zeta \) on \( X \), we have\n\n(8)\n\n\[ {D}_{\xi }\zeta = K \circ {T\zeta } \circ \xi ,\;\text{ in other words,}\;D = K \circ T \] \nas operators on vector fields, so the following diagram is commutative:\n\n![8a5ee639-42a3-45bc-9bf4-072c37808879_300_1.jpg](images/8a5ee639-42a3-45bc-9bf4-072c37808879_300_1.jpg)\n\nIn fact, \( K = {S}_{2} \) .
Proof. In a chart \( U \), we let the local representation\n\n\[ {K}_{U,\left( {x, v}\right) } : \mathbf{E} \times \mathbf{E} \rightarrow \mathbf{E} \]\n\nbe given by\n\n\( \left( {8}_{U}\right) \)\n\n\[ {K}_{U,\left( {x, v}\right) }\left( {z, w}\right) = w - {B}_{U}\left( {x;v, z}\right) ,\]\n\nso \( K = {S}_{2} \) satisfies the requirements of the lemma.
Theorem 4.5 (Dombrowski Splitting Theorem). Let \( X \) be a manifold with a spray or a covariant derivative. Then the map \[ \left( {{\pi }_{TX},{S}_{1},{S}_{2}}\right) : {TTX} \rightarrow {TX} \oplus {TX} \oplus {TX} \] is an isomorphism of fiber bundles over \( X \) .
Proof. The map is well defined, and the previous chart formulas show that it is both a bijection and a local differential isomorphism. We let readers check this out in the charts to conclude the proof.
Lemma 5.1. Let \( X \) be a manifold with a spray, or equivalently a covariant derivative. Let \( \beta \) be a curve in \( X \), and let \( \zeta \) be a lift of \( \beta \) in TX. Let\n\n\[ \varphi \left( t\right) = {\exp }_{\beta \left( t\right) }\zeta \left( t\right) \]\n\nso \( \varphi \) is a curve in \( X \) . Then in a chart \( U,{\varphi }^{\prime }\left( t\right) \) has the representation\n\n\( \left( {3}_{U}\right) \)\n\n\[ {\varphi }_{U}^{\prime }\left( t\right) = {\exp }_{U}^{\prime }\left( {{\beta }_{U}\left( t\right) ,{\zeta }_{U}\left( t\right) }\right) \left( {{\beta }_{U}^{\prime }\left( t\right) ,{\left( {D}_{{\beta }^{\prime }}\zeta \right) }_{U}\left( t\right) }\right) ,\]\n\nor suppressing \( t \),\n\n\( \left( {4}_{U}\right) \)\n\n\[ {\varphi }_{U}^{\prime } = {\exp }_{U}^{\prime }\left( {{\beta }_{U},{\zeta }_{U}}\right) \left( {{\beta }_{U}^{\prime },{\left( {D}_{{\beta }^{\prime }}\zeta \right) }_{U}}\right) . \]
Proof. This is immediate from Theorem 4.3, the local expression (2) for the covariant derivative, and formula (1).
There exists a unique vector bundle morphism over \( X \) , \[ {\mathbf{T}}_{S}\exp : {TX} \oplus {TX} \rightarrow {TX} \] such that the following diagram commutes: ![8a5ee639-42a3-45bc-9bf4-072c37808879_305_0.jpg](images/8a5ee639-42a3-45bc-9bf4-072c37808879_305_0.jpg) The two vertical maps are vector bundle morphisms, the top vector bundle being over \( {TX} \) and the bottom one over \( X \) . The composite map is \[ \text{Texp:}{TTX} \rightarrow {TX}\text{,} \] so both Texp and \( {\mathbf{T}}_{S}\exp \) represent \( T\exp \) under the splitting maps.
Proof. Routine verification that everything makes sense.
Proposition 6.2. Let \( X \) be a Riemannian manifold, and let \( \Omega \) be the canonical 2-form on the tangent bundle. Let \( v \in {TX}, Z, W \in {T}_{v}{TX} \) . Write\n\n\[ \n{SZ} = \left( {{A}_{1},{B}_{1}}\right) \;\text{ and }\;{SW} = \left( {{A}_{2},{B}_{2}}\right) .\n\]\n\nThen the canonical 2-form can be expressed in the form\n\n\[ \n\Omega \left( {Z, W}\right) = {\Omega }_{S}\left( {{SZ},{SW}}\right) = {\left\langle {A}_{1},{B}_{2}\right\rangle }_{g} - {\left\langle {A}_{2},{B}_{1}\right\rangle }_{g}.\n\]
Proof. This is a routine verification, which nevertheless has to be taken seriously. We use a chart. Write \( Z = \left( {{z}_{1},{z}_{2}}\right) \) and \( W = \left( {{w}_{1},{w}_{2}}\right) \) in the chart, i.e. in \( \mathbf{E} \times \mathbf{E} \) . Put together Chapter VII, \( §7 \), formula (1) for the canonical 2-form on the tangent bundle, and Theorem 4.2 of Chapter VIII, formula MS 1, giving the chart expression for the bilinear map \( {B}_{U} \) , depending on the metric. Keep cool, calm, and collected; there will be cancellations, due to the symmetry\n\n\[ \n\left\langle {{g}^{\prime }\left( x\right) u \cdot w, v}\right\rangle = \left\langle {{g}^{\prime }\left( x\right) u \cdot v, w}\right\rangle\n\]\n\nnoted in Chapter VII, \( §7 \) ; you will use \( \frac{1}{2} + \frac{1}{2} = 1 \) ; and the formula of Proposition 6.2 will drop out to conclude the proof.
Theorem 6.3. Let \( X \) be a Riemannian manifold. Let \( \Omega \) be the canonical 2-form on \( {TX} \) . Let \( v \in {TX} \) (so \( v \in {T}_{x}X \) for some \( x \) ), and let \( Z \) , \( W \in {T}_{v}{TX} \) . Let \( \Phi \) be the flow of the spray on \( {TX} \) . Let\n\n\[ \psi \left( s\right) = \Omega \left( {T{\Phi }_{s}\left( v\right) Z, T{\Phi }_{s}\left( v\right) W}\right) . \]\n\nThen \( \psi \) is constant. In other words, \( \Omega \) is invariant under the flow.
Proof. We use Proposition 6.2. Let \( {\eta }_{1},{\eta }_{2} \) be the Jacobi lifts of the curve \( s \mapsto {\alpha }_{v}\left( s\right) = \exp \left( {sv}\right) \) with initial conditions

\[ {\eta }_{i}\left( 0\right) = {A}_{i}\;\text{ and }\;{D}_{ * }{\eta }_{i}\left( 0\right) = {B}_{i}. \]

Then using Theorem 6.1 and Proposition 6.2, we get

\[ \psi = \left\langle {{\eta }_{1},{D}_{ * }{\eta }_{2}}\right\rangle - \left\langle {{\eta }_{2},{D}_{ * }{\eta }_{1}}\right\rangle . \]

Hence using the basic property of the Levi-Civita metric derivative (the cross terms \( \left\langle {{D}_{ * }{\eta }_{1},{D}_{ * }{\eta }_{2}}\right\rangle \) cancel),

\[ {\psi }^{\prime } = \left\langle {{\eta }_{1},{D}_{ * }^{2}{\eta }_{2}}\right\rangle - \left\langle {{\eta }_{2},{D}_{ * }^{2}{\eta }_{1}}\right\rangle \]

\[ = \left\langle {{\eta }_{1}, R\left( {v,{\eta }_{2}}\right) v}\right\rangle - \left\langle {{\eta }_{2}, R\left( {v,{\eta }_{1}}\right) v}\right\rangle \;\text{ by the Jacobi equation} \]

\[ = 0 \]

by one of the fundamental identities of the Riemann tensor, Chapter IX, §1, RIEM 4. This concludes the proof.
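The invariance of \( \Omega \) can be seen explicitly in the flat special case, an illustrative assumption for this sketch: on \( {\mathbf{R}}^{n} \) the geodesic flow is \( {\Phi }_{s}\left( {x, v}\right) = \left( {x + {sv}, v}\right) \), its tangent map sends \( \left( {z, w}\right) \) to \( \left( {z + {sw}, w}\right) \), and the chart formula of Proposition 6.2 reads \( \Omega \left( {Z, W}\right) = \left\langle {{z}_{1},{w}_{2}}\right\rangle - \left\langle {{z}_{2},{w}_{1}}\right\rangle \).

```python
# Flat-space sketch of Theorem 6.3: Omega is preserved by the geodesic flow.
def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def omega(Z, W):
    (z1, w1), (z2, w2) = Z, W
    return dot(z1, w2) - dot(z2, w1)

def t_flow(s, Z):
    # tangent map of Phi_s(x, v) = (x + s v, v): (z, w) -> (z + s w, w)
    z, w = Z
    return ([zi + s * wi for zi, wi in zip(z, w)], w)

Z = ([1.0, 0.0], [0.5, -2.0])    # sample tangent vectors (illustrative)
W = ([0.0, 3.0], [1.5, 0.25])
before = omega(Z, W)
after = omega(t_flow(2.7, Z), t_flow(2.7, W))
assert abs(before - after) < 1e-12
print("Omega is invariant under the flat geodesic flow")
```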
Proposition 1.1. Let \( \alpha : \left\lbrack {a, b}\right\rbrack \rightarrow X \) be a geodesic. The index form I on \( \operatorname{Lift}\left( \alpha \right) \) also has the expression\n\n\[ I\left( {\eta ,\gamma }\right) = - {\int }_{a}^{b}\left\lbrack {{\left\langle {D}_{{\alpha }^{\prime }}^{2}\eta ,\gamma \right\rangle }_{g} - R\left( {{\alpha }^{\prime },\eta ,{\alpha }^{\prime },\gamma }\right) }\right\rbrack \left( s\right) {ds} \]\n\n\[ + {\left\langle {D}_{{\alpha }^{\prime }}\eta ,\gamma \right\rangle }_{g}\left( b\right) - {\left\langle {D}_{{\alpha }^{\prime }}\eta ,\gamma \right\rangle }_{g}\left( a\right) . \]\n\nIn particular, if \( \eta \) is a Jacobi lift, then\n\n\[ I\left( {\eta ,\gamma }\right) = {\left\langle {D}_{{\alpha }^{\prime }}\eta ,\gamma \right\rangle }_{g}\left( b\right) - {\left\langle {D}_{{\alpha }^{\prime }}\eta ,\gamma \right\rangle }_{g}\left( a\right) \]\n\nand if in addition \( \gamma \in {\operatorname{Lift}}_{0}\left( \alpha \right) \), then \( I\left( {\eta ,\gamma }\right) = 0 \) .
Proof. From the defining property of the metric derivative, we know that\n\n\[ \partial {\left\langle {D}_{{\alpha }^{\prime }}\eta ,\gamma \right\rangle }_{g} = {\left\langle {D}_{{\alpha }^{\prime }}^{2}\eta ,\gamma \right\rangle }_{g} + {\left\langle {D}_{{\alpha }^{\prime }}\eta ,{D}_{{\alpha }^{\prime }}\gamma \right\rangle }_{g}. \]\n\nThen the first formula is clear. If in addition \( \eta \) is a Jacobi lift, then the expression under the integral is 0 by definition, so the second formula follows; and if \( \gamma \in {\operatorname{Lift}}_{0}\left( \alpha \right) \) then the expressions belonging to the end points are equal to 0 , so the proposition is proved.
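The integration by parts behind Proposition 1.1 can be checked numerically in the flat special case, where \( I\left( {\eta ,\gamma }\right) = {\int }_{a}^{b}{\eta }^{\prime }{\gamma }^{\prime }\,{ds} \) and the curvature term vanishes. The lifts \( \eta \left( s\right) = \sin s \), \( \gamma \left( s\right) = {s}^{2} \) on \( \left\lbrack {0,1}\right\rbrack \) are illustrative sample choices.

```python
import math

# Check: int eta' gamma' = -int eta'' gamma + [eta' gamma]_a^b
# for eta(s) = sin s, gamma(s) = s^2 on [0, 1]; gamma(0) = 0 kills the a-term.
a, b, n = 0.0, 1.0, 2000  # n even for Simpson's rule

def simpson(f):
    hh = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * hh)
    return s * hh / 3

lhs = simpson(lambda s: math.cos(s) * 2 * s)                        # int eta' gamma'
rhs = -simpson(lambda s: -math.sin(s) * s**2) + math.cos(b) * b**2  # -int eta'' gamma + boundary
assert abs(lhs - rhs) < 1e-9
print(lhs, rhs)
```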
Theorem 1.2. Let \( \eta \in \operatorname{Lift}\left( \alpha \right) \) . Then \( I\left( {\eta ,\gamma }\right) = 0 \) for all \( \gamma \in {\operatorname{Lift}}_{0}\left( \alpha \right) \) if and only if\n\n\[{\left( {D}_{{\alpha }^{\prime }}^{2}\eta - R\left( {\alpha }^{\prime },\eta \right) {\alpha }^{\prime }\right) }^{2} = 0.\]\n\nIn the Riemannian case, this happens if and only if \( \eta \) is a Jacobi lift.
Proof. If \( \eta \) is a Jacobi lift, then by definition

\[ {D}_{{\alpha }^{\prime }}^{2}\eta = R\left( {{\alpha }^{\prime },\eta }\right) {\alpha }^{\prime }, \]

so \( I\left( {\eta ,\gamma }\right) = 0 \) for all \( \gamma \in {\operatorname{Lift}}_{0}\left( \alpha \right) \). Conversely, assume this is the case. Let \( \varphi \) be a \( {C}^{\infty } \) function on \( \left\lbrack {a, b}\right\rbrack \) such that \( \varphi \left( a\right) = \varphi \left( b\right) = 0 \). Let

\[ {\gamma }_{1} = {D}_{{\alpha }^{\prime }}^{2}\eta - R\left( {{\alpha }^{\prime },\eta }\right) {\alpha }^{\prime }\;\text{ and }\;\gamma = \varphi {\gamma }_{1}. \]

Then \( \gamma \in {\operatorname{Lift}}_{0}\left( \alpha \right) \) and by Proposition 1.1,

\[ 0 = I\left( {\eta ,\gamma }\right) = {\int }_{a}^{b}\varphi \left( s\right) {\gamma }_{1}{\left( s\right) }^{2}{ds}. \]

This being true for all \( \varphi \) as above, it follows that \( {\gamma }_{1}^{2} = 0 \), whence the theorem follows.
Lemma 1.4. Let \( \left( {X, g}\right) \) be a pseudo-Riemannian manifold. Let \( \alpha \) be a geodesic (not necessarily parametrized by arc length), and let \( \sigma = \sigma \left( {s, t}\right) \) be a variation of \( \alpha \) (not necessarily by geodesics), so \( \alpha = {\alpha }_{0} \), and \( {\alpha }_{t}\left( s\right) = \sigma \left( {s, t}\right) \) . Put\n\n\[ e\left( {s, t}\right) = {\left\langle {\partial }_{1}\sigma ,{\partial }_{1}\sigma \right\rangle }_{g}\left( {s, t}\right) = {\alpha }_{t}^{\prime }{\left( s\right) }^{2}. \]\n\nDefine \( \eta \left( s\right) = {\partial }_{2}\sigma \left( {s,0}\right) \) and\n\n\[ {\gamma }_{2}\left( s\right) = {\left\langle {D}_{2}{\partial }_{2}\sigma ,{\partial }_{1}\sigma \right\rangle }_{g}\left( {s,0}\right) = {\left\langle {D}_{2}{\partial }_{2}\sigma \left( s,0\right) ,{\alpha }^{\prime }\left( s\right) \right\rangle }_{g}. \]\n\nThen\n\n(1)\n\[ {\partial }_{2}e\left( {s,0}\right) = 2{\left\langle {D}_{{\alpha }^{\prime }}\eta \left( s\right) ,{\alpha }^{\prime }\left( s\right) \right\rangle }_{g}, \]\n\n(2)\n\[ {\partial }_{2}^{2}e\left( {s,0}\right) = 2{\gamma }_{2}^{\prime }\left( s\right) + 2{R}_{2}\left( {{\alpha }^{\prime }\left( s\right) ,\eta \left( s\right) }\right) + 2{\left( {D}_{{\alpha }^{\prime }}\eta \left( s\right) \right) }^{2}. \]
Proof. We shall keep in mind that from the definitions,

\[ {D}_{{\alpha }^{\prime }}\eta \left( s\right) = {D}_{1}{\partial }_{2}\sigma \left( {s,0}\right) . \]

For the first derivative, we have

\[ {\partial }_{2}e = {\partial }_{2}{\left\langle {\partial }_{1}\sigma ,{\partial }_{1}\sigma \right\rangle }_{g} \]

\( = 2{\left\langle {D}_{2}{\partial }_{1}\sigma ,{\partial }_{1}\sigma \right\rangle }_{g}\; \) because \( D \) is the metric derivative

\( = 2{\left\langle {D}_{1}{\partial }_{2}\sigma ,{\partial }_{1}\sigma \right\rangle }_{g} \) by Lemma 5.3 of Chapter VIII.

This proves the first formula. For the second, we continue to differentiate, and obtain

(3)

\[ {\partial }_{2}^{2}e = {\partial }_{2}{\left\langle {D}_{1}{\partial }_{2}\sigma ,{\partial }_{1}\sigma \right\rangle }_{g} = 2{\left\langle {D}_{2}{D}_{1}{\partial }_{2}\sigma ,{\partial }_{1}\sigma \right\rangle }_{g} + 2{\left\langle {D}_{1}{\partial }_{2}\sigma ,{D}_{2}{\partial }_{1}\sigma \right\rangle }_{g}. \]

In the first term on the right, we use Lemma 2.7 of Chapter IX to write

\[ {D}_{2}{D}_{1} = {D}_{1}{D}_{2} - R\left( {{\partial }_{1}\sigma ,{\partial }_{2}\sigma }\right) . \]

In the second term on the right, we use Lemma 5.3 of Chapter VIII to write \( {D}_{2}{\partial }_{1} = {D}_{1}{\partial }_{2} \). Then we find

(4)

\[ {\partial }_{2}^{2}e = 2{\left\langle {D}_{1}{D}_{2}{\partial }_{2}\sigma ,{\partial }_{1}\sigma \right\rangle }_{g} - 2{\left\langle R\left( {{\partial }_{1}\sigma ,{\partial }_{2}\sigma }\right) {\partial }_{2}\sigma ,{\partial }_{1}\sigma \right\rangle }_{g} + 2{\left( {D}_{1}{\partial }_{2}\sigma \right) }^{2} \]

\[ = 2{\left\langle {D}_{1}{D}_{2}{\partial }_{2}\sigma ,{\partial }_{1}\sigma \right\rangle }_{g} + 2{R}_{2}\left( {{\partial }_{1}\sigma ,{\partial }_{2}\sigma }\right) + 2{\left( {D}_{1}{\partial }_{2}\sigma \right) }^{2}. \]

Finally, we use the metric derivative again to compute:

(5)

\[ {\partial }_{1}{\left\langle {D}_{2}{\partial }_{2}\sigma ,{\partial }_{1}\sigma \right\rangle }_{g} = {\left\langle {D}_{1}{D}_{2}{\partial }_{2}\sigma ,{\partial }_{1}\sigma \right\rangle }_{g} + {\left\langle {D}_{2}{\partial }_{2}\sigma ,{D}_{1}{\partial }_{1}\sigma \right\rangle }_{g}. \]

However, \( {D}_{1}{\partial }_{1}\sigma \left( {s,0}\right) = {D}_{{\alpha }^{\prime }}{\alpha }^{\prime }\left( s\right) = 0 \), because \( \alpha \) is assumed to be a geodesic. Hence from (4) and (5) we find

(6)

\[ {\partial }_{2}^{2}e\left( {s,0}\right) = 2{\gamma }_{2}^{\prime }\left( s\right) + 2{R}_{2}\left( {{\partial }_{1}\sigma ,{\partial }_{2}\sigma }\right) \left( {s,0}\right) + 2{\left( {D}_{1}{\partial }_{2}\sigma \right) }^{2}\left( {s,0}\right) , \]

which is formula (2). This proves the lemma.
Corollary 1.5. Let \( \eta \) be a Jacobi lift of \( \alpha \), and \( \sigma \) a variation of \( \alpha \) such that \( \eta \left( s\right) = {\partial }_{2}\sigma \left( {s,0}\right) \) . Assume that \( t \mapsto \sigma \left( {a, t}\right) \) and \( t \mapsto \sigma \left( {b, t}\right) \) are geodesics. Then\n\n\[{\left. \frac{{d}^{2}}{d{t}^{2}}E\left( {\alpha }_{t}\right) \right| }_{t = 0} = {\left\langle {D}_{{\alpha }^{\prime }}\eta \left( b\right) ,\eta \left( b\right) \right\rangle }_{g} - {\left\langle {D}_{{\alpha }^{\prime }}\eta \left( a\right) ,\eta \left( a\right) \right\rangle }_{g}.\]\n\nIn particular, if \( {D}_{{\alpha }^{\prime }}\eta \) is perpendicular to \( {\alpha }^{\prime } \) then this equality also holds if \( E \) is replaced by the length \( L \) .
Proof. Immediate from Theorem 1.3 and the alternative expressions of Proposition 1.1.
Proposition 1.6. Assumptions being as in Theorem 1.3, suppose that \( {D}_{{\alpha }^{\prime }}\eta \) is orthogonal to \( {\alpha }^{\prime } \) . Then\n\n\[ \n{\left. \frac{d}{dt}L\left( {\alpha }_{t}\right) \right| }_{t = 0} = 0 \n\]
Proof. This is immediate from Lemma 1.4 (1) and I 1.
Theorem 1.7. Suppose that \( \alpha \) is a geodesic whose length is the distance between its end points. Let \( \zeta \in {\operatorname{Lift}}_{0}\left( \alpha \right) \) be orthogonal to \( {\alpha }^{\prime } \). Then

\[ I\left( {\zeta ,\zeta }\right) \geqq 0. \]
Proof. I owe the proof to Wu. Define

\[ \sigma \left( {s, t}\right) = {\exp }_{\alpha \left( s\right) }\left( {{t\zeta }\left( s\right) }\right) \]

with \( a \leqq s \leqq b \) and \( 0 \leqq t \leqq \epsilon \). For each \( t \), \( {\sigma }_{t} \) is a curve, not necessarily a geodesic, joining the endpoints of \( \alpha \), that is

\[ {\sigma }_{t}\left( a\right) = \alpha \left( a\right) \;\text{ and }\;{\sigma }_{t}\left( b\right) = \alpha \left( b\right) , \]

because of the assumption \( \zeta \in {\operatorname{Lift}}_{0}\left( \alpha \right) \). Furthermore, \( \sigma \left( {s,0}\right) = \alpha \left( s\right) \), so \( \left\{ {\sigma }_{t}\right\} = \left\{ {\alpha }_{t}\right\} \) is a variation of \( \alpha \), leaving the end points fixed. Note that

\[ {\partial }_{2}\sigma \left( {s,0}\right) = \zeta \left( s\right) . \]

Finally, the curves \( t \mapsto \sigma \left( {a, t}\right) \) and \( t \mapsto \sigma \left( {b, t}\right) \) are geodesics, and \( {D}_{{\alpha }^{\prime }}\zeta \bot {\alpha }^{\prime } \) (differentiating \( \left\langle {\zeta ,{\alpha }^{\prime }}\right\rangle = 0 \)). Therefore, if we define the function

\[ \ell \left( t\right) = L\left( {\alpha }_{t}\right) = L\left( {\sigma }_{t}\right) , \]

then by Theorem 1.3 we get

\[ {\ell }^{\prime \prime }\left( 0\right) = I\left( {\zeta ,\zeta }\right) . \]

Since by assumption \( L\left( {\alpha }_{0}\right) \leqq L\left( {\alpha }_{t}\right) \) (because \( L\left( \alpha \right) \) is the distance between the end points), the function \( \ell \) has a minimum at \( t = 0 \), so \( {\ell }^{\prime \prime }\left( 0\right) \geqq 0 \), which proves the theorem.
Corollary 1.8. Let \( \eta \) be a Jacobi lift of \( \alpha \), and let \( \xi \) be any lift of \( \alpha \), with the same end points as \( \eta \), that is\n\n\[ \n\eta \left( a\right) = \xi \left( a\right) \;\text{ and }\;\eta \left( b\right) = \xi \left( b\right) .\n\]\n\nSuppose that \( \eta - \xi \) is orthogonal to \( {\alpha }^{\prime } \) . Then\n\n\[ \nI\left( {\eta ,\eta }\right) \leqq I\left( {\xi ,\xi }\right) .\n\]
Proof. Let \( \zeta = \xi - \eta \) . By Theorem 1.7 we have \( I\left( {\zeta ,\zeta }\right) \geqq 0 \), so by the bilinearity of the index,\n\n(7)\n\n\[ \nI\left( {\xi ,\xi }\right) - {2I}\left( {\eta ,\xi }\right) + I\left( {\eta ,\eta }\right) \geqq 0.\n\]\n\nBut\n\n\[ \nI\left( {\eta ,\xi }\right) = {\left. \left\langle {D}_{{\alpha }^{\prime }}\eta ,\xi \right\rangle \right| }_{a}^{b} - {\int }_{a}^{b}\left\langle {{D}_{{\alpha }^{\prime }}^{2}\eta ,\xi }\right\rangle - \left\langle {R\left( {{\alpha }^{\prime },\eta }\right) {\alpha }^{\prime },\xi }\right\rangle \n\]\n\n\[ \n= {\left. \left\langle {D}_{{\alpha }^{\prime }}\eta ,\xi \right\rangle \right| }_{a}^{b}\;\text{because }\eta \text{ is a Jacobi lift (Proposition 1.1)} \n\]\n\n\[ \n= {\left. \left\langle {D}_{{\alpha }^{\prime }}\eta ,\eta \right\rangle \right| }_{a}^{b}\;\text{because }\xi \text{ and }\eta \text{ have the same end points} \n\]\n\n\[ \n= I\left( {\eta ,\eta }\right) \;\text{because }\eta \text{ is a Jacobi lift.} \n\]\n\nHence inequality (7) becomes the inequality asserted in the corollary.
Proposition 1.9. Let \( f \) be a \( {C}^{2} \) function of a real variable. As in Theorem 1.3, let \( \sigma \) be a variation of \( \alpha \), and let \( \eta \left( s\right) = {\partial }_{2}\sigma \left( {s,0}\right) \) . Assume \( {D}_{{\alpha }^{\prime }}\eta \) orthogonal to \( {\alpha }^{\prime } \) . Then\n\n\[ \n{\left. \frac{{d}^{2}}{d{t}^{2}}f\left( L\left( {\alpha }_{t}\right) \right) \right| }_{t = 0} = {f}^{\prime }\left( {L\left( {\alpha }_{0}\right) }\right) \left\lbrack {{\left\langle {D}_{{\alpha }^{\prime }}\eta \left( b\right) ,\eta \left( b\right) \right\rangle }_{g} - {\left\langle {D}_{{\alpha }^{\prime }}\eta \left( a\right) ,\eta \left( a\right) \right\rangle }_{g}}\right\rbrack .\n\]
Proof. Let \( F\left( t\right) = f\left( {L\left( {\alpha }_{t}\right) }\right) \) . Then\n\n\[ \n{F}^{\prime }\left( t\right) = {f}^{\prime }\left( {L\left( {\alpha }_{t}\right) }\right) \frac{d}{dt}L\left( {\alpha }_{t}\right)\n\]\n\nand\n\n\[ \n{F}^{\prime \prime }\left( t\right) = {f}^{\prime \prime }\left( {L\left( {\alpha }_{t}\right) }\right) {\left( \frac{d}{dt}L\left( {\alpha }_{t}\right) \right) }^{2} + {f}^{\prime }\left( {L\left( {\alpha }_{t}\right) }\right) \frac{{d}^{2}}{d{t}^{2}}L\left( {\alpha }_{t}\right) .\n\]\n\nThen at \( t = 0 \) the first term on the right is 0 because of Proposition 1.6. The second term at \( t = 0 \) is the asserted one by Corollary 1.5 and the orthogonality assumption. This concludes the proof.
Proposition 2.1. The exponential map is metric preserving on rays from the origin.
Proof. Let \( \alpha \left( t\right) = {\exp }_{x}\left( {tv}\right) \) be the geodesic ray through \( v \in {T}_{x} \) . Then\n\n\[ T{\exp }_{x}\left( {tv}\right) v = {\alpha }^{\prime }\left( t\right) ,\]\n\nand since \( \alpha \) is a geodesic, \( \begin{Vmatrix}{{\alpha }^{\prime }\left( t\right) }\end{Vmatrix} = \parallel v\parallel \) is constant. Hence the exponential map preserves the metric in the direction of the ray, as was to be shown.
Lemma 2.2. Assume that \( w \bot u \) and that \( \alpha \) is contained in a convex open set. Given \( r \) as above, there exists a lift \( \xi \) of \( \alpha \) such that on \( \left\lbrack {0, r}\right\rbrack ,\xi \neq 0 \) , \( \xi \bot {\alpha }^{\prime } \), and\n\n\[ \frac{1}{2}\frac{{f}^{\prime }}{f}\left( r\right) \leqq \frac{1}{r} + {\int }_{0}^{r}{R}_{2}\left( {{\alpha }^{\prime },\xi }\right) .\]
Proof. We have directly from the definitions\n\n\[ \frac{1}{2}\frac{{f}^{\prime }}{f}\left( r\right) = \left\langle {{D}_{{\alpha }^{\prime }}\zeta \left( r\right) ,\zeta \left( r\right) }\right\rangle = {\left. \left\langle {D}_{{\alpha }^{\prime }}\zeta ,\zeta \right\rangle \right| }_{0}^{r} \]\n\n\[ = {I}_{0}^{r}\left( {\zeta ,\zeta }\right) \]\n\nbecause \( \zeta \) is a Jacobi lift of \( \alpha \), and we use Proposition 1.1.\n\nFor the second inequality, let \( {P}_{0}^{s} = {P}_{0,\alpha }^{s} \) be parallel translation along \( \alpha \) , with \( {P}_{0}^{0} = \mathrm{{id}} \) . Let \( v \) be the vector such that\n\n\[ {P}_{0}^{r}\left( v\right) = \zeta \left( r\right) \]\n\nDefine the lift \( \xi \) by\n\n(4)\n\n\[ \xi \left( s\right) = {P}_{0}^{s}\left( {\frac{s}{r}v}\right) \]\n\nNote that:\n\n(5)\n\n\[ \xi \left( 0\right) = \zeta \left( 0\right) ,\;\xi \left( r\right) = \zeta \left( r\right) ,\;{D}_{{\alpha }^{\prime }}\xi \left( s\right) = {P}_{0}^{s}\left( {\frac{1}{r}v}\right) \;\text{ (see Lemma 2.3). } \]\n\nThus \( {\left( {D}_{{\alpha }^{\prime }}\xi \right) }^{2} = 1/{r}^{2} \) . By Corollary 1.8, we obtain\n\n\[ {I}_{0}^{r}\left( {\zeta ,\zeta }\right) \leqq {I}_{0}^{r}\left( {\xi ,\xi }\right) \]\n\n\[ = {\int }_{0}^{r}\left\lbrack {{\left( {D}_{{\alpha }^{\prime }}\xi \right) }^{2} + {R}_{2}\left( {{\alpha }^{\prime },\xi }\right) }\right\rbrack \]\n\n\[ = \frac{1}{r} + {\int }_{0}^{r}{R}_{2}\left( {{\alpha }^{\prime },\xi }\right) \]\n\nthereby proving the lemma.
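In the flat model case (\( {R}_{2} = 0 \) and parallel translation the identity) the lift of formula (4) is just \( \xi \left( s\right) = \left( {s/r}\right) v \), and the value \( {I}_{0}^{r}\left( {\xi ,\xi }\right) = 1/r \) can be checked directly. The sketch below assumes \( \parallel v\parallel = 1 \), as in the computation above:

```python
import numpy as np

# Flat model case: parallel translation is the identity and R_2 = 0,
# so xi(s) = (s/r) v with |v| = 1, and D xi = v / r (Lemma 2.3).
r = 2.5
v = np.array([0.6, 0.8])                 # a unit vector
s = np.linspace(0.0, r, 1001)
xi = np.outer(s / r, v)                  # the lift of formula (4)
dxi = np.gradient(xi, s, axis=0)         # numerical derivative of xi
speed2 = np.sum(dxi**2, axis=1)          # |D xi|^2, constant = 1/r^2
I = np.sum((speed2[1:] + speed2[:-1]) / 2 * np.diff(s))  # trapezoid rule
# I approximates int_0^r |D xi|^2 ds = 1/r
```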
Lemma 2.3. Let \( X \) be any manifold with a spray, and let \( \alpha : \left\lbrack {a, b}\right\rbrack \rightarrow X \) be a curve in \( X \) . Let \( \beta : \left\lbrack {a, b}\right\rbrack \rightarrow {T}_{\alpha \left( a\right) } \) be a curve in \( {T}_{\alpha \left( a\right) } \), let \( P \) be parallel translation along \( \alpha \), and let \( \xi \left( t\right) = {P}_{a}^{t}\left( {\beta \left( t\right) }\right) \) . Then\n\n\[ \n{D}_{{\alpha }^{\prime }}\xi \left( t\right) = {P}_{a}^{t}\left( {{\beta }^{\prime }\left( t\right) }\right) \n\]
Proof. We prove the relation in a chart, where we have the formula\n\n\[ \n{D}_{{\alpha }^{\prime }}\xi = {\xi }^{\prime } - B\left( {\alpha ;{\alpha }^{\prime },\xi }\right) .\n\]\n\nLet \( \gamma \left( {t, v}\right) \) be parallel translation of \( v \in {T}_{\alpha \left( a\right) } \) . Then\n\n\[ \n{\xi }^{\prime }\left( t\right) = {\partial }_{1}\gamma \left( {t,\beta \left( t\right) }\right) + {\partial }_{2}\gamma \left( {t,\beta \left( t\right) }\right) {\beta }^{\prime }\left( t\right) \n\]\n\n\[ \n= {\partial }_{1}\gamma \left( {t,\beta \left( t\right) }\right) + \gamma \left( {t,{\beta }^{\prime }\left( t\right) }\right) \n\]\n\nbecause \( v \mapsto {\gamma }_{t}\left( v\right) \) is linear, and the derivative of a linear map is the map itself. The lemma follows from the local definition of the covariant derivative, and the definition of parallel translation (Theorem 3.3 of Chapter VIII).
Lemma 2.4. Let \( h\left( s\right) = {s}^{2}{w}^{2} \) . Then\n\n\[ \mathop{\lim }\limits_{{s \rightarrow 0}}f\left( s\right) /h\left( s\right) = 1 \]
Proof. This is immediate from the first term of the Taylor expansion given in Chapter IX, Proposition 5.1.
Theorem 2.5. Under the basic assumptions, assume that \( w \bot u \) . Let \( {U}_{x} \) be an open convex neighborhood of \( x \), and \( {V}_{x} \) an open neighborhood of \( {0}_{x} \) such that \( {\exp }_{x} : {V}_{x} \rightarrow {U}_{x} \) is an isomorphism. We suppose \( \alpha \) is contained in \( {U}_{x} \) . If the curvature is \( \geqq 0 \) (resp. \( > 0 \) ) on \( {U}_{x} \) then\n\n\[ \n\begin{Vmatrix}{{\eta }_{w}\left( r\right) }\end{Vmatrix} \leqq r\parallel w\parallel \;\text{ (resp. } < r\parallel w\parallel \text{ ) }\;\text{ for }\;0 < r \leqq b.\n\]
Proof. By Lemma 2.2, for \( \epsilon > 0 \) we find\n\n\[ \n{\int }_{\epsilon }^{r}{f}^{\prime }/f \leqq {\int }_{\epsilon }^{r}{h}^{\prime }/h + \text{ the Riemann tensor integral. }\n\]\n\nSince by hypothesis the Riemann tensor integrand is \( \leqq 0 \), we obtain\n\n\[ \n\log \left( {f\left( r\right) /h\left( r\right) }\right) \leqq \log \left( {f\left( \epsilon \right) /h\left( \epsilon \right) }\right)\n\]\n\nand therefore\n\n\[ \n\frac{f\left( r\right) }{h\left( r\right) } \leqq \frac{f\left( \epsilon \right) }{h\left( \epsilon \right) } \rightarrow 1\;\text{ as }\;\epsilon \rightarrow 0\;\text{ by Lemma 2.4. }\n\]\n\nThen \( f\left( r\right) \leqq h\left( r\right) \), which proves the theorem with the weak inequality sign. For the strict inequality case, one takes into account the Riemann tensor integral, and the fact that the integrand is \( < 0 \), so all inequalities are strict. This concludes the proof.
Theorem 2.6. Let \( \left( {X, g}\right) \) be a Riemannian manifold. Let \( x \in X \) and let \( {\exp }_{x} : {V}_{x} \rightarrow {U}_{x} \) be an isomorphism of a neighborhood of \( {0}_{x} \) with an open convex neighborhood of \( x \) . Suppose \( g \) has curvature \( \geqq 0 \) on \( {U}_{x} \) . Then \( {\exp }_{x} \) is metric semidecreasing from \( {V}_{x} \) to \( {U}_{x} \) . If the curvature is \( > 0 \) on \( {U}_{x} \), then for \( v \in {V}_{x}, v \neq 0 \) and \( w \in {T}_{x}, w \) unequal to a scalar multiple of \( v \), we have\n\n\[ \begin{Vmatrix}{T{\exp }_{x}\left( v\right) w}\end{Vmatrix} < \parallel w\parallel .\]\n\nThus \( {\exp }_{x} \) is metric strictly decreasing on \( {V}_{x} \), except in the direction of rays from the origin.
Proof. We let \( u \) be the unit vector in the direction of \( v \), so \( v = {ru} \) with \( r = \parallel v\parallel \) . If \( w \) is orthogonal to \( u \), then the inequality of Theorem 2.5 together with (1) shows that\n\n\[ {\begin{Vmatrix}T{\exp }_{x}\left( ru\right) w\end{Vmatrix}}^{2} \leqq \parallel w{\parallel }^{2}\;\left( {\text{resp.} < \parallel w{\parallel }^{2}}\right) . \]\n\nFor arbitrary \( w \), we write \( w = {w}_{0} + {w}_{1} \) with \( {w}_{0} = {cu} \) (some \( c \in \mathbf{R} \) ), and \( {w}_{1} \bot u \) . Then by the Gauss lemma, \( T{\exp }_{x}\left( {ru}\right) {w}_{0} \bot T{\exp }_{x}\left( {ru}\right) {w}_{1} \), so\n\n\[ {\begin{Vmatrix}T{\exp }_{x}\left( ru\right) w\end{Vmatrix}}^{2} = {\begin{Vmatrix}T{\exp }_{x}\left( ru\right) {w}_{0}\end{Vmatrix}}^{2} + {\begin{Vmatrix}T{\exp }_{x}\left( ru\right) {w}_{1}\end{Vmatrix}}^{2}, \]\n\nwhich proves the theorem, in light of Proposition 2.1 and the inequality in Theorem 2.5.
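For the standard sphere this shrinking is completely explicit. On the unit sphere \( {S}^{2} \subset {\mathbf{R}}^{3} \) (curvature \( > 0 \) ) the exponential map is \( {\exp }_{x}\left( v\right) = \cos \parallel v\parallel x + \sin \parallel v\parallel v/\parallel v\parallel \), and for \( w \bot u \) one has \( \begin{Vmatrix}{T{\exp }_{x}\left( {ru}\right) w}\end{Vmatrix} = \left( {\sin r/r}\right) \parallel w\parallel < \parallel w\parallel \) . The explicit shrink factor is a standard constant-curvature fact, not proved in the text above; the finite-difference sketch below just confirms it numerically:

```python
import numpy as np

def sphere_exp(x, v):
    """Exponential map of the unit sphere S^2 embedded in R^3."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x.copy()
    return np.cos(nv) * x + np.sin(nv) * v / nv

x = np.array([0.0, 0.0, 1.0])       # base point
u = np.array([1.0, 0.0, 0.0])       # unit tangent vector at x
w = np.array([0.0, 1.0, 0.0])       # w orthogonal to u, |w| = 1
r, eps = 1.2, 1e-6

# central finite difference approximating T exp_x(ru) w
Tw = (sphere_exp(x, r * u + eps * w) - sphere_exp(x, r * u - eps * w)) / (2 * eps)
shrink = np.linalg.norm(Tw)         # expect sin(r)/r < 1 = |w|
```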
Theorem 3.1 (Serre). Let \( X \) be a Bruhat-Tits space. Let \( S \) be a bounded subset of \( X \) . Then there exists a unique closed ball \( {\overline{\mathbf{B}}}_{r}\left( {x}_{1}\right) \) in \( X \) of minimal radius containing \( S \) .
Proof. We first prove uniqueness. Suppose there are two balls \( {\overline{\mathbf{B}}}_{r}\left( {x}_{1}\right) \) and \( {\overline{\mathbf{B}}}_{r}\left( {x}_{2}\right) \) of minimal radius containing \( S \), but \( {x}_{2} \neq {x}_{1} \) . Let \( x \) be any point of \( S \), so \( d\left( {x,{x}_{1}}\right) \leqq r \) and \( d\left( {x,{x}_{2}}\right) \leqq r \) . Let \( z \) be the midpoint between \( {x}_{1} \) and \( {x}_{2} \) . By the semi parallelogram law, we have\n\n\[ d{\left( {x}_{1},{x}_{2}\right) }^{2} \leqq 4{r}^{2} - {4d}{\left( x, z\right) }^{2}. \]\n\nBy the minimality of \( r \), for each \( \epsilon > 0 \) there is a point \( x \in S \) such that \( d\left( {x, z}\right) \geqq r - \epsilon \), whence\n\n\[ d{\left( {x}_{1},{x}_{2}\right) }^{2} \leqq 4{r}^{2} - 4{\left( r - \epsilon \right) }^{2} = {8r\epsilon } - 4{\epsilon }^{2}. \]\n\nLetting \( \epsilon \rightarrow 0 \), it follows that \( d\left( {{x}_{1},{x}_{2}}\right) = 0 \), that is \( {x}_{1} = {x}_{2} \) .\n\nAs to existence, let \( \left\{ {x}_{n}\right\} \) be a sequence of points which are centers of balls of radius \( {r}_{n} \) approaching the inf of all such radii such that \( {\overline{\mathbf{B}}}_{{r}_{n}}\left( {x}_{n}\right) \) contains \( S \) . Let \( r \) be this inf. If the sequence \( \left\{ {x}_{n}\right\} \) is a Cauchy sequence, then it converges to some point which is the center of a closed ball of the minimal radius containing \( S \), and we are done. We show this must always happen. Let \( {z}_{mn} \) be the midpoint between \( {x}_{n} \) and \( {x}_{m} \) . By the minimality of \( r \), given \( \epsilon \) there exists a point \( x \in S \) such that\n\n\[ d{\left( x,{z}_{mn}\right) }^{2} \geqq {r}^{2} - \epsilon . \]\n\nWe apply the semi parallelogram law with \( z = {z}_{mn} \) . 
Then\n\n\[ d{\left( {x}_{m},{x}_{n}\right) }^{2} \leqq {2d}{\left( x,{x}_{m}\right) }^{2} + {2d}{\left( x,{x}_{n}\right) }^{2} - {4d}{\left( x,{z}_{mn}\right) }^{2} \]\n\n\[ \leqq \epsilon \left( {m, n}\right) + {4\epsilon } \]\n\nwhere \( \epsilon \left( {m, n}\right) = 2{r}_{m}^{2} + 2{r}_{n}^{2} - 4{r}^{2} \rightarrow 0 \) as \( m, n \rightarrow \infty \), thus proving that \( \left\{ {x}_{n}\right\} \) is Cauchy, and concluding the proof of the theorem.
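The uniqueness argument can be watched numerically in a Euclidean space, where the parallelogram law holds with equality (so \( {\mathbf{R}}^{n} \) is in particular a Bruhat-Tits space): the midpoint of two distinct centers of covering balls of radius \( r \) covers \( S \) with the strictly smaller radius \( \sqrt{{r}^{2} - d{\left( {x}_{1},{x}_{2}\right) }^{2}/4} \) . The point set and centers below are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.normal(size=(200, 3))        # a bounded subset of R^3
x1 = np.array([0.10, 0.00, 0.00])    # two distinct candidate centers
x2 = np.array([-0.10, 0.05, 0.00])
z = (x1 + x2) / 2                    # their midpoint

# radius r making both balls B_r(x1), B_r(x2) contain S
r = max(np.linalg.norm(S - x1, axis=1).max(),
        np.linalg.norm(S - x2, axis=1).max())
rad_z = np.linalg.norm(S - z, axis=1).max()

# parallelogram law gives d(x, z)^2 <= r^2 - d(x1, x2)^2 / 4 for all x in S
bound = np.sqrt(r**2 - np.linalg.norm(x1 - x2)**2 / 4)
```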
Theorem 3.2 (Bruhat-Tits). Let \( X \) be a Bruhat-Tits metric space. Let \( G \) be a group of isometries of \( X \), with the action of \( G \) denoted by \( \left( {g, x}\right) \mapsto g \cdot x \) . Suppose \( G \) has a bounded orbit (this occurs if, for instance, \( G \) is compact). Then \( G \) has a fixed point, for instance the circumcenter of the orbit.
Proof. Let \( p \in X \) and let \( G \cdot p \) be the orbit. Let \( {\overline{\mathbf{B}}}_{r}\left( {x}_{1}\right) \) be the unique closed ball of minimal radius containing this orbit. For any \( g \in G \), the image \( g \cdot {\overline{\mathbf{B}}}_{r}\left( {x}_{1}\right) = {\overline{\mathbf{B}}}_{r}\left( {x}_{2}\right) \) is a closed ball of the same radius containing the orbit, and \( {x}_{2} = g \cdot {x}_{1} \), so by the uniqueness of Theorem 3.1, it follows that \( {x}_{1} \) is a fixed point, thus concluding the proof.
Corollary 3.3. Let \( G \) be a topological group, \( H \) a closed subgroup. Let \( K \) be a subgroup of \( G \), so that \( K \) acts by translation on the coset space \( G/H \) . Suppose \( G/H \) has a metric (distance function) such that translations by elements of \( K \) are isometries, \( G/H \) is a Bruhat-Tits space, and one orbit is bounded. Then a conjugate of \( K \) is contained in \( H \) .
Proof. By Theorem 3.2, the action of \( K \) has a fixed point, i.e. there exists a coset \( {xH} \) such that \( {kxH} = {xH} \) for all \( k \in K \) . Then \( {x}^{-1}{KxH} \subset H \) , whence \( {x}^{-1}{Kx} \subset H \), as was to be shown.
Proposition 3.4. A complete Riemannian manifold satisfying EMI is a Bruhat-Tits space. A Cartan-Hadamard manifold is a Bruhat-Tits space.
Proof. On a Hilbert space, we have equality in the parallelogram law. Using the hypothesis in EMI with \( z \) as the midpoint, we see that the left side in the parallelogram law remains the same under the exponential map, the right side only increases, and hence the semi parallelogram law falls out.
Theorem 3.5. Let \( X \) be a Riemannian manifold. The following three conditions are equivalent:\n\n(a) The curvature is seminegative.\n\n(b) The exponential map is locally metric semi-increasing at every point.\n\n(c) The semi parallelogram law holds locally on \( X \) .
Proof. This is merely putting together results which have been proved individually. Theorem 3.6 of Chapter IX shows that (a) implies (b). That (b) implies (c) is a local version of Proposition 3.4. Indeed, the parallelogram law holds in the tangent space \( {T}_{z} \), and if the exponential map at \( z \) is metric semi-increasing, then the semi parallelogram law holds locally by applying the exponential map. Specifically, given \( {x}_{1},{x}_{2} \) in some convex open set, we let \( z \) be the midpoint on the geodesic joining \( {x}_{1} \) and \( {x}_{2} \), so that there is some \( {v}_{1} \in {T}_{z} \) such that, putting \( {v}_{2} = - {v}_{1} \) ,\n\n\[ \n{x}_{1} = {\exp }_{z}\left( {v}_{1}\right) ,\;{x}_{2} = {\exp }_{z}\left( {v}_{2}\right) ,\;z = {\exp }_{z}\left( {0}_{z}\right) .\n\]\n\nGiven \( x = {\exp }_{z}\left( v\right) \) with \( v \in {T}_{z} \) the parallelogram law in \( {T}_{z} \) reads\n\n\[ \nd{\left( {v}_{1},{v}_{2}\right) }^{2} + {4d}{\left( v,0\right) }^{2} = {2d}{\left( v,{v}_{1}\right) }^{2} + {2d}{\left( v,{v}_{2}\right) }^{2},\n\]\n\nwhere \( d\left( {v, w}\right) = \left| {v - w}\right| \) for \( v, w \in {T}_{z} \) . Under the exponential map, the distances on the left side are preserved, and the distances on the right side are expanded if condition (b) is satisfied, so under the exponential map, we get\n\n\[ \nd{\left( {x}_{1},{x}_{2}\right) }^{2} + {4d}{\left( x, z\right) }^{2} \leqq {2d}{\left( x,{x}_{1}\right) }^{2} + {2d}{\left( x,{x}_{2}\right) }^{2},\n\]\n\nwhich is the semi parallelogram law. Finally, to show that (c) implies (a), we merely follow the same argument with the reverse inequality. So assume (c). Suppose the curvature is positive at some point, and hence is positive on a convex open neighborhood of the point, which we denote by \( z \) . We pick a vector \( v \in {T}_{z} \), and let \( {v}_{1} = - v,{v}_{2} = v \) . We let \( w \bot v, w \neq 0 \) . 
Then\n\n\[ \nd{\left( {v}_{1},{v}_{2}\right) }^{2} + {4d}{\left( v + w,0\right) }^{2} = {2d}{\left( v + w,{v}_{1}\right) }^{2} + {2d}{\left( v + w,{v}_{2}\right) }^{2},\n\]\n\nbecause this relation is one with the norm in the Hilbert space \( {T}_{z} \) . Now we apply the exponential map, that is, we let\n\n\[ \n{x}_{1} = {\exp }_{z}\left( {v}_{1}\right) ,\;{x}_{2} = {\exp }_{z}\left( {v}_{2}\right) ,\;x = {\exp }_{z}\left( {v + w}\right) ,\;z = {\exp }_{z}\left( {0}_{z}\right) .\n\]\n\nThe distances on the left side of the equation are preserved under the exponential map (taking the norms of \( v, w \) sufficiently small). By Theorem 2.6 , the distances on the right are strictly decreased, contradicting the semi parallelogram law (actually giving an anti semi parallelogram inequality). This concludes the proof.
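The Hilbert space identity used twice in this proof (with \( {v}_{1} = - v \) , \( {v}_{2} = v \), and the point \( v + w \) ) is easy to confirm numerically; the vectors below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.normal(size=4)
w = rng.normal(size=4)
v1, v2 = -v, v                       # as in the proof, v2 = -v1

# parallelogram law at the point v + w:
lhs = np.linalg.norm(v1 - v2)**2 + 4 * np.linalg.norm(v + w)**2
rhs = 2 * np.linalg.norm(v + w - v1)**2 + 2 * np.linalg.norm(v + w - v2)**2
```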
Theorem 3.6. Let \( \mathbf{E} \) be a Hilbert space and \( X \) a Riemannian manifold. Let \( h : \mathbf{E} \rightarrow X \) be a differential isomorphism which is metric semi-increasing, that is\n\n\[{\left| Th\left( v\right) w\right| }_{h\left( v\right) }^{2} \geqq {\left| w\right| }_{\mathbf{E}}^{2}\;\text{ for all }\;w \in \mathbf{E},\]\n\nand also such that \( h \) is metric preserving on rays from the origin. Then \( X \) is complete. Let \( v \in \mathbf{E}, v \neq 0 \) . Then\n\n\[t \mapsto h\left( {tv}\right)\]\n\nis a geodesic passing through \( h\left( 0\right) \) and \( h\left( v\right) \), and is the unique such geodesic. If the group of isometries of \( X \) operates transitively, then there is a unique geodesic through two distinct points of \( X \) .
Proof. The map \( {h}^{-1} : X \rightarrow \mathbf{E} \) is distance semi-decreasing. If \( \left\{ {x}_{n}\right\} \) is Cauchy in \( X \), then \( \left\{ {{h}^{-1}\left( {x}_{n}\right) }\right\} \) is Cauchy in \( \mathbf{E} \), converging to some point \( v \) , and by continuity of \( h \), it follows that \( \left\{ {x}_{n}\right\} \) converges to \( h\left( v\right) \), so \( X \) is complete. If \( \alpha \) is a geodesic in \( X \) between two points \( x \) and \( y \), then \( {h}^{-1} \circ \alpha \) is a curve in \( \mathbf{E} \) between \( {h}^{-1}\left( x\right) \) and \( {h}^{-1}\left( y\right) \) . In \( \mathbf{E} \), the geodesics with respect to the Hilbert space norm are just the lines, which minimize distance. From the property that \( h \) preserves distances on rays from the origin, it follows at once that if \( x = h\left( v\right) \) and \( \xi \) is the line segment from 0 (in \( \mathbf{E} \) ) to \( v \), then \( h \circ \xi \) minimizes the distance between \( h\left( 0\right) \) and \( h\left( v\right) \), and so \( h \circ \xi \) is the unique geodesic between \( h\left( 0\right) \) and \( h\left( v\right) \) . If the group of isometries of \( X \) operates transitively, then the last statement is clear, thus concluding the proof.
Theorem 4.1. Let \( X \) be a Cartan-Hadamard manifold. Let \( Y \) be a totally geodesic submanifold. Then:\n\n(i) \( Y \) is a Cartan-Hadamard manifold.\n\n(ii) Given two distinct points of \( Y \), the unique geodesic in \( X \) passing through these points actually lies in \( Y \) .
Proof. Note that from the definition of a totally geodesic submanifold, it follows that the exponential map on \( X \), restricted to \( {TY} \), is equal to the\nexponential map on \( Y \), or in a formula, for \( y \in Y \) ,\n\n\[{\exp }_{y, Y} = {\exp }_{y, X}\;\text{ restricted to }{T}_{y}Y.\]\n\nBy hypothesis and the definitions, it follows that \( {\exp }_{y, Y} \) is metric semi-increasing, so \( Y \) has seminegative curvature by Theorem 3.5. By hypothesis, \( Y \) is geodesically complete, and hence complete by Corollary 3.9 of Chapter IX. By Theorem 3.8 of Chapter IX, given \( y \in Y \), the exponential\n\n\[{\exp }_{y} : {T}_{y}Y \rightarrow Y\]\n\nis a covering, and since it is injective because \( {\exp }_{y} : {T}_{y}X \rightarrow X \) is injective, it follows that \( {\exp }_{y} : {T}_{y}Y \rightarrow Y \) is an isomorphism, so \( Y \) is simply connected. Thus we have shown that \( Y \) is Cartan-Hadamard. Then (ii) is trivial from (i), because the unique geodesic in \( Y \) passing through two distinct points is the same as the unique geodesic in \( X \) passing through these points. This concludes the proof.
Proposition 4.2. Let \( X \) be a complete Riemannian manifold, such that given two distinct points of \( X \), there is a unique geodesic passing through these two points. Let \( Y \) be a closed submanifold. Suppose that locally, given two distinct points in \( Y \), the unique geodesic segment in \( X \) joining these points actually lies in \( Y \). Then \( Y \) is totally geodesic.
Proof. I owe the following simple argument to Wu. One has mostly to prove that a \( Y \) -geodesic is an \( X \) -geodesic. Let \( \alpha : \lbrack 0, c) \rightarrow X \) be a geodesic in \( X \) having initial conditions in \( Y \), that is\n\n\[ \alpha \left( 0\right) = y \in Y\;\text{ and }\;{\alpha }^{\prime }\left( 0\right) \in {T}_{y}Y. \]\n\nSuppose \( \alpha \) does not lie in \( Y \). Then there is a largest number \( b \) such that \( \alpha \left( \left\lbrack {0, b}\right\rbrack \right) \subset Y \) but \( \alpha \left( {b + \epsilon }\right) \notin Y \) for all small \( \epsilon > 0 \). Note that \( b \) could be 0 . Since \( \alpha \left( \left\lbrack {0, b}\right\rbrack \right) \subset Y \), it follows that \( {\alpha }^{\prime }\left( b\right) \in {T}_{\alpha \left( b\right) }Y \). This is true even if \( b = 0 \), by assumption. Let\n\n\[ \beta : \left\lbrack {b, b + \epsilon }\right\rbrack \rightarrow Y \]\n\nbe the geodesic in \( Y \) such that \( {\beta }^{\prime }\left( b\right) = {\alpha }^{\prime }\left( b\right) \), with sufficiently small \( \epsilon \), so that \( \beta \left( {b + \epsilon }\right) \) lies in a convex \( X \) -ball centered at \( \alpha \left( b\right) \), and also in a convex \( Y \) -ball centered at this same point \( \alpha \left( b\right) \). Let\n\n\[ \gamma : \left\lbrack {b, b + \epsilon }\right\rbrack \rightarrow X \]\n\nbe the geodesic segment in \( X \) joining \( \beta \left( b\right) \) and \( \beta \left( {b + \epsilon }\right) \). By hypothesis, we have \( \gamma \left( \left\lbrack {b, b + \epsilon }\right\rbrack \right) \subset Y \). But a geodesic of \( X \) lying in \( Y \) is necessarily a geodesic of \( Y \), say by the minimizing characterization of geodesics. By uniqueness, we have \( \gamma = \beta \) on \( \left\lbrack {b, b + \epsilon }\right\rbrack \). 
But then\n\n\[ {\gamma }^{\prime }\left( b\right) = {\beta }^{\prime }\left( b\right) = {\alpha }^{\prime }\left( b\right) ,\]\n\nand so \( \gamma \) is in fact the continuation of the restriction of \( \alpha \) to \( \left\lbrack {0, b}\right\rbrack \). Hence \( \alpha \left( \left\lbrack {b, b + \epsilon }\right\rbrack \right) \) is contained in \( Y \), a contradiction which concludes the proof.
Lemma 4.3. Let \( X \) be a Cartan-Hadamard manifold. Let \( Y \) be a totally geodesic submanifold. Then the map\n\n\[ \n{\exp }_{NY} : {NY} \rightarrow X \n\]\n\nis a bijection.
Proof. The argument will follow the same pattern that is used routinely to show that given a point not in a closed subspace of a Hilbert space, there is a line through the point perpendicular to the subspace. We first prove that given \( x \in X \) but \( x \notin Y \), there exists a point \( {y}_{0} \in Y \) such that\n\n\[ \nd\left( {x,{y}_{0}}\right) = d\left( {x, Y}\right) = \mathop{\inf }\limits_{{y \in Y}}d\left( {x, y}\right) . \n\]\n\nLet \( \left\{ {y}_{n}\right\} \) be a sequence in \( Y \) such that \( d\left( {x,{y}_{n}}\right) \) approaches \( r = d\left( {x, Y}\right) \) as \( n \) goes to infinity. We can apply the semi parallelogram law in \( X \) exactly as in the proof of Theorem 3.1. The midpoint in \( X \) is on the geodesic between the two points, and lies in \( Y \) because of the assumption that \( Y \) is totally geodesic. Then the semi parallelogram law shows at once that \( \left\{ {y}_{n}\right\} \) is Cauchy, and therefore converges to the desired point \( {y}_{0} \) . The unique geodesic through \( x \) and \( {y}_{0} \) is perpendicular to \( Y \) at \( {y}_{0} \) by Corollary 4.7 of Chapter IX. Furthermore, this geodesic cannot intersect \( Y \) in another point \( {y}_{1} \), otherwise the existence of this geodesic and the geodesic in \( Y \) between \( {y}_{0} \) and \( {y}_{1} \) would contradict Corollary 3.11 of Chapter IX. Thus we conclude that the map \( {\exp }_{NY} : {NY} \rightarrow X \) is bijective.
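The Hilbert space pattern the proof refers to can be sketched in the Euclidean model case, where a linear subspace \( Y \) of \( {\mathbf{R}}^{3} \) is closed and totally geodesic: the nearest point \( {y}_{0} \) exists, and \( x - {y}_{0} \) is perpendicular to \( Y \) . The matrix and point below are arbitrary illustrative data:

```python
import numpy as np

# Euclidean model case: R^3 is Cartan-Hadamard, and the column space of A
# is a closed, totally geodesic subspace Y.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x = np.array([1.0, 2.0, 7.0])        # a point not in Y

# y0 = A c minimizing |x - A c|: the nearest point of Y to x
c, *_ = np.linalg.lstsq(A, x, rcond=None)
y0 = A @ c

perp = A.T @ (x - y0)                # x - y0 should be orthogonal to Y
```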
Theorem 5.1 (Rauch Comparison Theorem). Let \( \left( {X,{g}_{X}}\right) \) and \( \left( {Y,{g}_{Y}}\right) \) be Riemannian manifolds of the same dimension, which may be infinite. Let \( {\alpha }_{X} \) (resp. \( {\alpha }_{Y} \) ) be geodesics in \( X \) (resp. \( Y \) ), parametrized by arc length, and defined on the same interval \( \left\lbrack {a, b}\right\rbrack \) . Let \( {\eta }_{X} \) (resp. \( {\eta }_{Y} \) ) be Jacobi lifts of these geodesics, orthogonal to \( {\alpha }_{X}^{\prime } \) (resp. \( {\alpha }_{Y}^{\prime } \) ). Assume:\n\n(i) \( {\eta }_{X}\left( a\right) = {\eta }_{Y}\left( a\right) = 0 \), and \( {\eta }_{X}\left( r\right) ,{\eta }_{Y}\left( r\right) \neq 0 \) for \( a < r \leqq b \) .\n\n(ii) \( \begin{Vmatrix}{{D}_{{\alpha }_{X}^{\prime }}{\eta }_{X}\left( a\right) }\end{Vmatrix} = \begin{Vmatrix}{{D}_{{\alpha }_{Y}^{\prime }}{\eta }_{Y}\left( a\right) }\end{Vmatrix} \) .\n\n(iii) The length of \( {\alpha }_{X} \) is the distance between its end points.\n\n(iv) We have \( {R}_{2, X} \leqq {R}_{2, Y} \) along \( \left( {{\alpha }_{X},{\alpha }_{Y}}\right) \) .\n\nThen\n\n\[ \n{\begin{Vmatrix}{\eta }_{X}\left( s\right) \end{Vmatrix}}^{2} \leqq {\begin{Vmatrix}{\eta }_{Y}\left( s\right) \end{Vmatrix}}^{2}\;\text{ for all }\;s \in \left\lbrack {a, b}\right\rbrack .\n\]
Proof. We shall use the definition of the index and Proposition 1.1, that is, for a Jacobi lift \( \eta \) of \( \alpha \) such that \( \eta \left( a\right) = 0 \) we have\n\n(1)\n\n\[ \n{I}_{a}^{s}\left( {\eta ,\eta }\right) = {\int }_{a}^{s}{\left( {D}_{{\alpha }^{\prime }}\eta \right) }^{2} + {R}_{2}\left( {{\alpha }^{\prime },\eta }\right) = \left\langle {{D}_{{\alpha }^{\prime }}\eta ,\eta }\right\rangle \left( s\right) .\n\]\n\nWe may index \( \eta \) by \( X \) and \( Y \) as well. We define\n\n\[ \nf\left( s\right) = \parallel \eta \left( s\right) {\parallel }^{2} = \eta {\left( s\right) }^{2},\;\text{ also written }\;{\eta }^{2}\left( s\right) ,\n\]\n\nand again we may index \( f \) and \( \eta \) by \( X \), and also by \( Y \) . Define\n\n\[ \nh\left( s\right) = {I}_{a}^{s}\left( {\eta ,\eta }\right) /{\eta }^{2}\left( s\right) \;\text{ for }\;0 < s \leqq b.\n\]\n\nThus we have \( {h}_{X} \) and \( {h}_{Y} \) . Note that by (1),\n\n\[ \n{f}^{\prime }\left( s\right) = 2{I}_{a}^{s}\left( {\eta ,\eta }\right) \;\text{ and }\;{f}^{\prime }/f = {2h}.\n\]\n\nFor \( a < c < b \), we get\n\n\[ \n\log {\eta }^{2}\left( s\right) = \log {\eta }^{2}\left( c\right) + 2{\int }_{c}^{s}h\n\]\n\nwhence\n\n\[ \n\log \left( {{\eta }_{X}^{2}\left( s\right) /{\eta }_{Y}^{2}\left( s\right) }\right) = \log \left( {{\eta }_{X}^{2}\left( c\right) /{\eta }_{Y}^{2}\left( c\right) }\right) + 2{\int }_{c}^{s}\left( {{h}_{X} - {h}_{Y}}\right) .\n\]\n\nBy assumptions (i) and (ii), and the first term of the Taylor expansion of a Jacobi lift (Chapter IX, Proposition 5.1), we get\n\n\[ \n\mathop{\lim }\limits_{{c \rightarrow a}}\log {\eta }_{X}^{2}\left( c\right) /{\eta }_{Y}^{2}\left( c\right) = 0\n\]\n\nHence\n\n\[ \n\log {\eta }_{X}^{2}\left( s\right) /{\eta }_{Y}^{2}\left( s\right) = \mathop{\lim }\limits_{{c \rightarrow a}}2{\int }_{c}^{s}\left( {{h}_{X} - {h}_{Y}}\right) .\n\]\n\nIt will therefore suffice to prove that \( {h}_{X}\left( s\right) \leqq {h}_{Y}\left( s\right) \) for 
\( a < s \leqq b \) . Fix \( r \) with \( a < r < b \) . It will suffice to prove \( {h}_{X}\left( r\right) \leqq {h}_{Y}\left( r\right) \) . Define\n\n\[ \n\zeta \left( s\right) = \frac{1}{\parallel \eta \left( r\right) \parallel }\eta \left( s\right)\n\]\n\nso we may index \( \zeta \) by \( X \) (resp. \( Y \) ) to get \( {\zeta }_{X} \) and \( {\zeta }_{Y} \) . Let \( W\left( s\right) = {\alpha }^{\prime }{\left( s\right) }^{ \bot } \) be the orthogonal complement of \( {\alpha }^{\prime }\left( s\right) \), so we have \( {W}_{X}\left( s\right) \) and \( {W}_{Y}\left( s\right) \) in the tangent spaces at \( {\alpha }_{X}\left( s\right) \) and \( {\alpha }_{Y}\left( s\right) \), respectively. Let\n\n\[ \n{L}_{r} : {W}_{Y}\left( r\right) \rightarrow
Theorem 1.1. The association \( g \mapsto \left\lbrack g\right\rbrack \) is a representation of \( G \) in the group of isometries of \( {\operatorname{Pos}}_{n} \), that is each \( \left\lbrack g\right\rbrack \) is an isometry.
Proof. First we note that \( \left\lbrack g\right\rbrack \) can also be viewed as a map on the whole vector space \( {\operatorname{Sym}}_{n} \), and this map is linear as a function of such matrices.\n\nHence its derivative is given by\n\n\[ \n{\left\lbrack g\right\rbrack }^{\prime }\left( p\right) w = g{w}^{t}g\;\text{ for all }\;w \in {\operatorname{Sym}}_{n}.\n\]\n\nNow we verify that \( \left\lbrack g\right\rbrack \) preserves the scalar product, or the norm. We have:\n\n\[ \n{\left| {\left\lbrack g\right\rbrack }^{\prime }\left( p\right) w\right| }_{\left\lbrack g\right\rbrack p}^{2} = \operatorname{tr}\left( {\left( {\left( \left\lbrack g\right\rbrack p\right) }^{-1}g{w}^{t}g\right) }^{2}\right)\n\]\n\n\[ \n= \operatorname{tr}\left( {\left( {\left( g{p}^{t}g\right) }^{-1}g{w}^{t}g\right) }^{2}\right)\n\]\n\n\[ \n= \operatorname{tr}\left( {{}^{t}{g}^{-1}{p}^{-1}{g}^{-1}g{w}^{t}g\,{}^{t}{g}^{-1}{p}^{-1}{g}^{-1}g{w}^{t}g}\right)\n\]\n\n\[ \n= \operatorname{tr}\left( {{}^{t}{g}^{-1}{p}^{-1}w{p}^{-1}w\,{}^{t}g}\right)\n\]\n\n\[ \n= \operatorname{tr}\left( {\left( {p}^{-1}w\right) }^{2}\right)\n\]\n\n\[ \n= {\left| w\right| }_{p}^{2},\n\]\n\nwhich proves the theorem.
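The trace computation above can be spot-checked with numpy; the random \( g \) below is generically invertible (an assumption of this sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
g = rng.normal(size=(n, n))                  # assumed invertible (generic)
a = rng.normal(size=(n, n))
p = a @ a.T + n * np.eye(n)                  # a point of Pos_n
w = rng.normal(size=(n, n))
w = (w + w.T) / 2                            # a tangent vector in Sym_n

def norm2(p, w):
    """|w|_p^2 = tr((p^{-1} w)^2), the trace metric at p."""
    m = np.linalg.solve(p, w)
    return np.trace(m @ m)

lhs = norm2(g @ p @ g.T, g @ w @ g.T)        # |[g]'(p) w|^2 at [g]p
rhs = norm2(p, w)                            # |w|_p^2
```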
Theorem 1.3. The exponential map \( \exp : {\operatorname{Sym}}_{n} \rightarrow {\operatorname{Pos}}_{n} \) is metric preserving on a line through the origin.
Proof. Such a line has the form \( t \mapsto {tv} \) with some \( v \in {\operatorname{Sym}}_{n}, v \neq 0 \) . We need to prove\n\n\[ \n{\left| v\right| }_{\mathrm{{tr}}}^{2} = {\left| {\exp }^{\prime }\left( tv\right) v\right| }_{\exp {tv}}^{2} \n\]\n\nNote that\n\n\[ \n\frac{d}{dt}\exp \left( {tv}\right) = {\exp }^{\prime }\left( {tv}\right) v \n\]\n\n\[ \n= \frac{d}{dt}\sum \frac{{t}^{n}{v}^{n}}{n!} \n\]\n\n\[ \n= \sum \frac{{t}^{n - 1}}{\left( {n - 1}\right) !}{v}^{n} \n\]\n\n\[ \n= \exp \left( {tv}\right) v \n\]\n\nHence\n\n\[ \n{\left| {\exp }^{\prime }\left( tv\right) v\right| }_{\exp {tv}}^{2} = \operatorname{tr}\left( {\left( {\left( \exp tv\right) }^{-1}\left( \exp tv\right) v\right) }^{2}\right) \n\]\n\n\[ \n= \operatorname{tr}\left( {v}^{2}\right) \n\]\n\n\[ \n= {\left| v\right| }_{\mathrm{{tr}}}^{2} \n\]\n\nwhich proves the theorem.
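A numerical confirmation, computing the exponential of a symmetric matrix through its spectral decomposition so that no external dependencies are needed: the velocity of \( t \mapsto \exp \left( {tv}\right) \) is \( \exp \left( {tv}\right) v \), and its trace-metric norm is the constant \( {\left| v\right| }_{\mathrm{{tr}}} \) :

```python
import numpy as np

def sym_exp(v):
    """exp of a symmetric matrix via its spectral decomposition."""
    lam, Q = np.linalg.eigh(v)
    return (Q * np.exp(lam)) @ Q.T   # Q diag(e^lam) Q^T

rng = np.random.default_rng(3)
v = rng.normal(size=(3, 3))
v = (v + v.T) / 2                    # v in Sym_n
t, eps = 0.7, 1e-6

# d/dt exp(tv) = exp(tv) v, since tv commutes with v
deriv_fd = (sym_exp((t + eps) * v) - sym_exp((t - eps) * v)) / (2 * eps)
deriv = sym_exp(t * v) @ v

# squared speed in the trace metric at p = exp(tv):
p = sym_exp(t * v)
m = np.linalg.solve(p, deriv)        # p^{-1} exp'(tv) v, which equals v
speed2 = np.trace(m @ m)             # should equal tr(v^2) = |v|_tr^2
```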
Yes
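Theorem 1.3 admits a quick numerical check. This sketch is ours (not from the text): it computes exp by its power series for a sample 2×2 symmetric \( v \), approximates \( {\exp }^{\prime }\left( {tv}\right) v \) by a central finite difference, and compares the two norms.

```python
# Our own sketch (not from the text): check exp'(tv)v = exp(tv)v and
# |exp'(tv)v|_{exp(tv)}^2 = |v|_tr^2 for a sample 2x2 symmetric v.

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def scal(c, a):
    return [[c * a[i][j] for j in range(2)] for i in range(2)]

def expm(a, terms=40):            # power series sum a^n / n!
    out = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = scal(1.0 / n, mul(term, a))
        out = add(out, term)
    return out

def inv(a):
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [[a[1][1] / det, -a[0][1] / det],
            [-a[1][0] / det, a[0][0] / det]]

def tr(a):
    return a[0][0] + a[1][1]

v = [[0.6, 0.2], [0.2, -0.4]]     # sample symmetric matrix
t, h = 0.7, 1e-5

# central difference for d/dt exp(tv), i.e. exp'(tv)v
d = scal(1.0 / (2 * h), add(expm(scal(t + h, v)),
                            scal(-1.0, expm(scal(t - h, v)))))
lhs = mul(expm(scal(t, v)), v)    # exp(tv) v
err = max(abs(d[i][j] - lhs[i][j]) for i in range(2) for j in range(2))

p = expm(scal(t, v))
x = mul(inv(p), d)                # p^{-1} exp'(tv)v
print(err < 1e-8, abs(tr(mul(x, x)) - tr(mul(v, v))) < 1e-8)
```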
Theorem 1.4. Let \( p, q \in {\operatorname{Pos}}_{n} \) . Let \( {a}_{1},\ldots ,{a}_{n} \) be the roots of \( \det \left( {{tp} - q}\right) \) .\n\nThen\n\n\[ \operatorname{dist}{\left( {p, q}\right) }^{2} = \sum {\left( \log {a}_{i}\right) }^{2}. \]
Proof. Suppose first \( p = e \) and \( q \) is the diagonal matrix of \( {a}_{1},\ldots ,{a}_{n} \) . Let \( v = \log q \), so \( v \) is diagonal with components \( \log {a}_{1},\ldots ,\log {a}_{n} \) . The theorem is then a consequence of Theorem 1.3, since \( {v}^{2} \) has components \( {\left( \log {a}_{i}\right) }^{2} \) . We reduce the general case to the above special case. First we claim that there exists \( g \in G \) such that \( \left\lbrack g\right\rbrack p = e \) and \( \left\lbrack g\right\rbrack q = d \) is diagonal. Indeed, we first translate \( p \) to \( e \), so without loss of generality we may assume \( p = e \) . There exists an orthonormal basis of \( {\mathbf{R}}^{n} \) diagonalizing \( q \), so there exists a diagonal matrix \( d \) and \( k \in K \) such that \( q = k{d}^{t}k = {kd}{k}^{-1} \) .\n\nSince \( k \in K \), we have \( \left\lbrack {k}^{-1}\right\rbrack e = e \) and \( \left\lbrack {k}^{-1}\right\rbrack q = d \), so applying \( \left\lbrack {k}^{-1}\right\rbrack \) proves our claim. Finally, from the equations \( g{p}^{t}g = e \) and \( g{q}^{t}g = d \) we get \( p = {g}^{-{1t}}{g}^{-1} \) and \( q = {g}^{-1}{d}^{t}{g}^{-1} \), so\n\n\[ \det \left( {{tp} - q}\right) = \det \left( {t{g}^{-{1t}}{g}^{-1} - {g}^{-1}{d}^{t}{g}^{-1}}\right) \]\n\n\[ = \det {\left( g\right) }^{-2}\det \left( {{te} - d}\right) \]\n\nSince \( \operatorname{dist}\left( {p, q}\right) = \operatorname{dist}\left( {e, d}\right) \), the theorem follows.
Yes
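Theorem 1.4 can be illustrated concretely. The sketch below is ours (not from the text): for sample 2×2 positive matrices, it compares \( \sum {\left( \log {a}_{i}\right) }^{2} \), with \( {a}_{i} \) the roots of \( \det \left( {{tp} - q}\right) \), against \( \operatorname{tr}\left( {\left( \log s\right) }^{2}\right) \) for \( s = {p}^{-1/2}q{p}^{-1/2} \), the squared length of the geodesic from \( p \) to \( q \).

```python
import math

# Our own sketch (not from the text): for sample 2x2 positive matrices,
# the roots a_i of det(tp - q) satisfy
#   sum (log a_i)^2 = tr((log s)^2),   s = p^{-1/2} q p^{-1/2}.

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(a):
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [[a[1][1] / det, -a[0][1] / det],
            [-a[1][0] / det, a[0][0] / det]]

def sqrtm(a):                     # sqrt of a 2x2 positive matrix (Cayley-Hamilton)
    s = math.sqrt(a[0][0] * a[1][1] - a[0][1] * a[1][0])
    t = math.sqrt(a[0][0] + a[1][1] + 2 * s)
    return [[(a[0][0] + s) / t, a[0][1] / t],
            [a[1][0] / t, (a[1][1] + s) / t]]

p = [[2.0, 1.0], [1.0, 3.0]]      # sample positive matrices
q = [[1.0, 0.5], [0.5, 2.0]]

# roots of det(tp - q): a quadratic A t^2 + B t + C
A = p[0][0] * p[1][1] - p[0][1] * p[1][0]
B = -(p[0][0] * q[1][1] + q[0][0] * p[1][1]
      - p[0][1] * q[1][0] - q[0][1] * p[1][0])
C = q[0][0] * q[1][1] - q[0][1] * q[1][0]
disc = math.sqrt(B * B - 4 * A * C)
roots = [(-B + disc) / (2 * A), (-B - disc) / (2 * A)]
lhs = sum(math.log(a) ** 2 for a in roots)

irp = inv(sqrtm(p))
s = mul(mul(irp, q), irp)         # p^{-1/2} q p^{-1/2}, symmetric positive
m = (s[0][0] + s[1][1]) / 2       # its eigenvalues m +/- r
r = math.sqrt(((s[0][0] - s[1][1]) / 2) ** 2 + s[0][1] * s[1][0])
rhs = math.log(m + r) ** 2 + math.log(m - r) ** 2

print(abs(lhs - rhs) < 1e-10)
```

The agreement reflects the fact that the roots of \( \det \left( {{tp} - q}\right) \) are the eigenvalues of \( {p}^{-1}q \), which is similar to \( s \).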
Lemma 2.1. The maps \( {F}_{v} \) and \( {\exp }^{\prime }\left( v\right) \) are hermitian with respect to the tr-scalar product on \( \mathcal{A} \) . If \( v \in \operatorname{Sym} \), then \( {F}_{v} \) and \( {\exp }^{\prime }\left( v\right) \) map Sym into itself.
Proof. A routine verification gives for \( u, v, w \in \mathcal{A} \) :\n\n\[ \operatorname{tr}\left( {{F}_{v}\left( w\right) u}\right) = \mathop{\sum }\limits_{{n = 0}}^{\infty }\frac{1}{n!}\mathop{\sum }\limits_{{r + s = n - 1}}\operatorname{tr}\left( {\exp \left( {-v/2}\right) {v}^{r}w{v}^{s}\exp \left( {-v/2}\right) u}\right) \]\n\n\[ = \mathop{\sum }\limits_{{n = 0}}^{\infty }\frac{1}{n!}\mathop{\sum }\limits_{{r + s = n - 1}}\operatorname{tr}\left( {w{v}^{s}\exp \left( {-v/2}\right) u\exp \left( {-v/2}\right) {v}^{r}}\right) \]\n\n\[ = \operatorname{tr}\left( {w{F}_{v}\left( u\right) }\right) \]\n\nby the cyclicity of tr, because \( \exp \left( {-v/2}\right) \) commutes with \( {v}^{r} \) and \( {v}^{s} \), and after interchanging the roles of \( r \) and \( s \) in the inner sum. This concludes the proof that \( {F}_{v} \) is hermitian with respect to the tr-scalar product. If \( v \in \operatorname{Sym} \), then formula (1) shows that \( {F}_{v} \) maps Sym into itself. The statements about \( {\exp }^{\prime }\left( v\right) \) follow the same pattern of proof.
Yes
Lemma 2.2. Let \( v \in \operatorname{Sym} \) . Then \( {D}_{v}^{2} \) is hermitian on Sym.
Proof. Again this is routine, namely:\n\n\[ \n{D}_{v}\left( w\right) = {vw} - {wv} \n\]\n\n\[ \n{D}_{v}^{2}\left( w\right) = {v}^{2}w - {2vwv} + w{v}^{2} \n\]\n\n\[ \n\left( {{D}_{v}^{2}w}\right) u = {v}^{2}{wu} - {2vwvu} + w{v}^{2}u \n\]\n\n\[ \nw{D}_{v}^{2}u = w{v}^{2}u - {2wvuv} + {wu}{v}^{2}. \n\]\n\nApplying tr to these last two expressions and using its basic property \( \operatorname{tr}\left( {xy}\right) = \operatorname{tr}\left( {yx}\right) \) yields the proof of the lemma.
Yes
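The hermitian property of \( {D}_{v}^{2} \) can be confirmed at once on sample matrices; the following sketch is ours (not from the text).

```python
# Our own sketch (not from the text): tr((D_v^2 w) u) = tr(w (D_v^2 u))
# for sample 2x2 symmetric matrices, i.e. D_v^2 is tr-hermitian.

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(a, b):
    return [[a[i][j] - b[i][j] for j in range(2)] for i in range(2)]

def tr(a):
    return a[0][0] + a[1][1]

def D(v, x):                      # D_v(x) = vx - xv
    return sub(mul(v, x), mul(x, v))

v = [[1.0, 0.3], [0.3, -0.5]]     # sample symmetric matrices
w = [[0.2, 0.7], [0.7, 0.1]]
u = [[-0.4, 0.1], [0.1, 0.9]]

val1 = tr(mul(D(v, D(v, w)), u))  # tr((D_v^2 w) u)
val2 = tr(mul(w, D(v, D(v, u))))  # tr(w (D_v^2 u))
print(abs(val1 - val2) < 1e-12)
```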
Lemma 2.3. For any \( v \in \mathcal{A} \), we have \( {D}_{v}{F}_{v} = {D}_{v}f\left( {D}_{v}\right) \) .
Proof. Let \( t \mapsto x\left( t\right) \) be a smooth curve in \( \mathcal{A} \). Then\n\n\[ x\left( {\exp x}\right) = \left( {\exp x}\right) x. \]\n\nDifferentiating both sides gives\n\n\[ {x}^{\prime }\exp x + x{\left( \exp x\right) }^{\prime } = {\left( \exp x\right) }^{\prime }x + \left( {\exp x}\right) {x}^{\prime }, \]\n\nand therefore\n\n\[ {x}^{\prime }\exp x - \left( {\exp x}\right) {x}^{\prime } = {\left( \exp x\right) }^{\prime }x - x{\left( \exp x\right) }^{\prime }. \]\n\nMultiplying on the left and right by \( \exp \left( {-x/2}\right) \), and using the fact that \( x \) commutes with \( \exp \left( {-x/2}\right) \) yields\n\n(2)\n\n\[ \exp \left( {-x/2}\right) {x}^{\prime }\exp \left( {x/2}\right) - \exp \left( {x/2}\right) {x}^{\prime }\exp \left( {-x/2}\right) \]\n\n\[ = \exp \left( {-x/2}\right) {\left( \exp x\right) }^{\prime }\exp \left( {-x/2}\right) x \]\n\n\[ - x\exp \left( {-x/2}\right) {\left( \exp x\right) }^{\prime }\exp \left( {-x/2}\right) \text{.} \]\n\nSince \( {L}_{x} \) and \( {R}_{x} \) commute, we have\n\n\[ \exp \left( {{D}_{x}/2}\right) = \exp \left( {{L}_{x}/2}\right) \exp \left( {-{R}_{x}/2}\right) \]\n\nso (2) can be written in the form\n\n(3)\n\n\[ \left( {\exp \left( {{D}_{x}/2}\right) - \exp \left( {-{D}_{x}/2}\right) }\right) {x}^{\prime } = {D}_{x}{F}_{x}{x}^{\prime } \]\n\nWe now take the curve \( x\left( t\right) = v + {tw} \), and evaluate the preceding identity at \( t = 0 \), so \( {x}^{\prime }\left( 0\right) = w \), to conclude the proof of the lemma.
Yes
Theorem 2.4. Let \( v \in \operatorname{Sym} \). Then \( {F}_{v} = f\left( {D}_{v}\right) \) on Sym. Hence for \( w \in \) Sym, we have\n\n\[{\exp }^{\prime }\left( v\right) w = \exp \left( {v/2}\right) \cdot f\left( {D}_{v}\right) w \cdot \exp \left( {v/2}\right) .\]
Proof. Let \( {h}_{v} = {F}_{v} - f\left( {D}_{v}\right) \). Then \( {h}_{v} : \operatorname{Sym} \rightarrow \) Sym is hermitian, and its image is contained in the subspace \( E = \operatorname{Ker}{D}_{v} \cap \operatorname{Sym} \). Since Sym is assumed finite dimensional, it is the direct sum of \( E \) and its orthogonal complement \( {E}^{ \bot } \) in Sym. Since \( {h}_{v} \) is hermitian, it maps \( {E}^{ \bot } \) into \( {E}^{ \bot } \), but \( {h}_{v} \) also maps \( {E}^{ \bot } \) into \( E \), so \( {h}_{v} = 0 \) on \( {E}^{ \bot } \). In addition, \( E \) is the commutant of \( v \) in Sym, and hence \( f\left( {D}_{v}\right) = \mathrm{{id}} = {F}_{v} \) on \( E \), so \( {h}_{v} = 0 \) on \( E \). Hence \( {h}_{v} = 0 \) on Sym, thus concluding the proof of the theorem.
Yes
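Theorem 2.4 can be tested numerically. The sketch below is ours (not from the text); it assumes the spectral description of \( {D}_{v} \), namely that in an eigenbasis of \( v \) with eigenvalues \( {\lambda }_{i} \), the operator \( {D}_{v} \) multiplies the \( \left( {i, j}\right) \) entry by \( {\lambda }_{i} - {\lambda }_{j} \), and compares \( \exp \left( {v/2}\right) f\left( {D}_{v}\right) w\exp \left( {v/2}\right) \), with \( f\left( x\right) = \sinh \left( {x/2}\right) /\left( {x/2}\right) \), against a finite-difference value of \( {\exp }^{\prime }\left( v\right) w \).

```python
import math

# Our own sketch (not from the text): numerical check of
#   exp'(v)w = exp(v/2) . f(D_v)w . exp(v/2),   f(x) = sinh(x/2)/(x/2),
# for a sample 2x2 symmetric v, using the assumed spectral action of D_v.

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def scal(c, a):
    return [[c * a[i][j] for j in range(2)] for i in range(2)]

def expm(a, terms=40):
    out = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = scal(1.0 / n, mul(term, a))
        out = add(out, term)
    return out

v = [[0.8, 0.3], [0.3, -0.2]]     # sample symmetric matrix
w = [[0.5, -0.1], [-0.1, 0.4]]    # sample symmetric direction

# eigendecomposition v = Q diag(l) Q^T (2x2 closed form)
mean = (v[0][0] + v[1][1]) / 2
rad = math.sqrt(((v[0][0] - v[1][1]) / 2) ** 2 + v[0][1] ** 2)
l = [mean + rad, mean - rad]
c = [v[0][1], l[0] - v[0][0]]     # eigenvector for l[0]
nrm = math.hypot(c[0], c[1])
q = [[c[0] / nrm, -c[1] / nrm], [c[1] / nrm, c[0] / nrm]]   # columns = eigenvectors
qt = [[q[j][i] for j in range(2)] for i in range(2)]

def f(x):
    return 1.0 if abs(x) < 1e-12 else math.sinh(x / 2) / (x / 2)

wt = mul(qt, mul(w, q))           # w in the eigenbasis of v
rt = [[math.exp((l[i] + l[j]) / 2) * f(l[i] - l[j]) * wt[i][j]
       for j in range(2)] for i in range(2)]
rhs = mul(q, mul(rt, qt))         # exp(v/2) f(D_v)w exp(v/2)

h = 1e-5                          # finite-difference value of exp'(v)w
lhs = scal(1.0 / (2 * h), add(expm(add(v, scal(h, w))),
                              scal(-1.0, expm(add(v, scal(-h, w))))))
err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(err < 1e-7)
```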
Theorem 2.6. The exponential map exp is tr-norm semi-increasing on Sym, that is for all \( v, w \in \operatorname{Sym} \), putting \( p = \exp \left( v\right) \), we have\n\n\[ \n{\left| w\right| }_{\mathrm{{tr}}}^{2} = \operatorname{tr}\left( {w}^{2}\right) \leqq \operatorname{tr}\left( {\left( {p}^{-1}{\exp }^{\prime }\left( v\right) w\right) }^{2}\right) = {\left| {\exp }^{\prime }\left( v\right) w\right| }_{p,\mathrm{{tr}}}^{2}.\n\]
Proof. The right side of the above inequality is equal to\n\n\[ \n\operatorname{tr}\left( {\left( {p}^{-1}{\exp }^{\prime }\left( v\right) w\right) }^{2}\right) = \operatorname{tr}\left( {\left( \exp \left( -v/2\right) \cdot {\exp }^{\prime }\left( v\right) w \cdot \exp \left( -v/2\right) \right) }^{2}\right)\n\]\n\n\[ \n= \operatorname{tr}\left( {{F}_{v}{\left( w\right) }^{2}}\right)\n\]\n\n\[ \n= {\left| f\left( {D}_{v}\right) w\right| }_{\mathrm{{tr}}}^{2}\;\text{ by Theorem 2.4. }\n\]\n\nApplying Theorem 2.5 now concludes the proof.
Yes
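The semi-increasing property of Theorem 2.6 can be observed numerically; the sketch below is ours (not from the text), with \( {\exp }^{\prime }\left( v\right) w \) approximated by a central finite difference on sample 2×2 symmetric matrices.

```python
# Our own sketch (not from the text): check tr(w^2) <= tr((p^{-1} exp'(v)w)^2)
# with p = exp(v), using a finite difference for exp'(v)w.

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def scal(c, a):
    return [[c * a[i][j] for j in range(2)] for i in range(2)]

def expm(a, terms=40):
    out = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        term = scal(1.0 / n, mul(term, a))
        out = add(out, term)
    return out

def inv(a):
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [[a[1][1] / det, -a[0][1] / det],
            [-a[1][0] / det, a[0][0] / det]]

def tr(a):
    return a[0][0] + a[1][1]

v = [[0.9, 0.4], [0.4, -0.6]]     # sample symmetric matrices
w = [[0.3, 0.8], [0.8, -0.2]]
h = 1e-5

dw = scal(1.0 / (2 * h), add(expm(add(v, scal(h, w))),
                             scal(-1.0, expm(add(v, scal(-h, w))))))
p = expm(v)
x = mul(inv(p), dw)               # p^{-1} exp'(v)w
lhs_n = tr(mul(w, w))             # |w|_tr^2
rhs_n = tr(mul(x, x))             # |exp'(v)w|_{p,tr}^2
print(lhs_n <= rhs_n + 1e-9)
```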
For each \( v \in \operatorname{Sym} \), the maps\n\n\[ \n{F}_{v}\;\text{ and }\;{\exp }^{\prime }\left( v\right) : \operatorname{Sym} \rightarrow \operatorname{Sym} \n\]\n\nare linear automorphisms.
Proof. Theorem 2.6 shows that \( \operatorname{Ker}{\exp }^{\prime }\left( v\right) = 0 \), so \( {\exp }^{\prime }\left( v\right) \) is a linear isomorphism, Sym being finite dimensional. The statement for \( {F}_{v} \) then follows because \( {F}_{v} \) is the composite of \( {\exp }^{\prime }\left( v\right) \) with multiplicative translations by invertible elements in Sym. This concludes the proof.
Yes
Theorem 3.1. The Hermitian operator \( {A}_{v} \) is invertible on Sym. Furthermore, we have the formula\n\n\[ \n{A}_{v} = {\exp }^{\prime }\left( v\right) {g}_{0}\left( {D}_{v}^{2}\right) \;\text{ on Sym. } \n\]
Proof. From the definitions and Theorem 2.4, we know that\n\n\[ \n{J}_{v} = {\exp }^{\prime }\left( v\right) = \exp \left( {{L}_{v}/2}\right) \exp \left( {{R}_{v}/2}\right) f\left( {D}_{v}\right) \;\text{ on Sym. } \n\]\n\nNote that \( \exp {L}_{v} = {L}_{p} \) and \( \exp {R}_{v} = {R}_{p} \) . Abbreviate \( L = {L}_{v}, R = {R}_{v} \) , \( D = L - R \) . By Corollary 2.7, we find\n\n\[ \n{J}_{v}^{-1}{A}_{v} = {J}_{v}^{-1}\left( {{e}^{R} + {e}^{L}}\right) = f{\left( D\right) }^{-1}{e}^{-L/2}{e}^{-R/2}\left( {{e}^{L} + {e}^{R}}\right) \n\]\n\n\[ \n= f{\left( D\right) }^{-1}\left( {{e}^{\left( {L - R}\right) /2} + {e}^{\left( {R - L}\right) /2}}\right) \n\]\n\n\[ \n= f{\left( D\right) }^{-1}\left( {{e}^{D/2} + {e}^{-D/2}}\right) \n\]\n\n\[ \n= {2f}{\left( D\right) }^{-1}\cosh \left( {D/2}\right) = g\left( D\right) , \n\]\n\nwhich proves the formula. Now \( {g}_{0} \) is strictly positive, hence bounded away from 0 on an interval \( \left\lbrack {0, c}\right\rbrack \) chosen so that \( 0 \leqq {D}_{v}^{2} \leqq {cI} \) ; from this we deduce the invertibility, and conclude the proof of the theorem.
Yes
Lemma 3.2. Suppose \( X = \exp \left( V\right) \) is symmetric. Given \( p, q \in X \) there exists \( y \in X \) such that \( {ypy} = q \) . In other words, \( X \) acts transitively on itself.
Proof. The condition \( {ypy} = q \) is equivalent with\n\n\[ \n{p}^{1/2}y{p}^{1/2}{p}^{1/2}y{p}^{1/2} = {p}^{1/2}q{p}^{1/2}\; \Leftrightarrow \;{\left( {p}^{1/2}y{p}^{1/2}\right) }^{2} = {p}^{1/2}q{p}^{1/2}, \n\]\n\n\[ \n\Leftrightarrow \;{p}^{1/2}y{p}^{1/2} = {\left( {p}^{1/2}q{p}^{1/2}\right) }^{1/2}, \n\]\n\n\[ \n\Leftrightarrow \;y = {p}^{-1/2}{\left( {p}^{1/2}q{p}^{1/2}\right) }^{1/2}{p}^{-1/2}, \n\]\n\nwhich concludes the proof.
Yes
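The closed-form solution in the proof can be verified directly. The sketch below is ours (not from the text): it computes \( y = {p}^{-1/2}{\left( {p}^{1/2}q{p}^{1/2}\right) }^{1/2}{p}^{-1/2} \) for sample 2×2 positive matrices and checks \( {ypy} = q \).

```python
import math

# Our own sketch (not from the text): for sample 2x2 positive matrices,
# y = p^{-1/2} (p^{1/2} q p^{1/2})^{1/2} p^{-1/2} satisfies y p y = q.

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(a):
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [[a[1][1] / det, -a[0][1] / det],
            [-a[1][0] / det, a[0][0] / det]]

def sqrtm(a):                     # sqrt of a 2x2 positive matrix (Cayley-Hamilton)
    s = math.sqrt(a[0][0] * a[1][1] - a[0][1] * a[1][0])
    t = math.sqrt(a[0][0] + a[1][1] + 2 * s)
    return [[(a[0][0] + s) / t, a[0][1] / t],
            [a[1][0] / t, (a[1][1] + s) / t]]

p = [[2.0, 0.5], [0.5, 1.0]]      # sample positive matrices
q = [[3.0, 1.0], [1.0, 2.0]]

rp = sqrtm(p)
irp = inv(rp)
y = mul(irp, mul(sqrtm(mul(rp, mul(q, rp))), irp))
ypy = mul(y, mul(p, y))
err = max(abs(ypy[i][j] - q[i][j]) for i in range(2) for j in range(2))
print(err < 1e-12)
```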
Theorem 3.3. Let \( V \) be a vector subspace of \( \operatorname{Sym} \), and let \( X = \exp \left( V\right) \) . Then \( X \) is a symmetric submanifold of Pos if and only if: SYM 2. The map \( {D}_{v}^{2} \) maps \( V \) into itself for all \( v \in V \) .
Proof. Suppose that \( {D}_{u}^{2} \) maps \( V \) into itself for all \( u \in V \) . Note that \( {g}_{0} \) is actually real analytic, and the above equation is an ordinary differential equation for \( \xi \left( t\right) \) in \( V \) . It has a unique solution with initial condition \( \xi \left( 0\right) = \log p \), and of course, this solution lies in \( V \), that is \( \xi \left( t\right) \in V \) for all \( t \) . Taking \( t = 1 \) shows that \( {xpx} \in \exp \left( V\right) \), thus proving one implication. Conversely, assume that \( x, p \in \exp \left( V\right) \) implies \( {xpx} \in \exp \left( V\right) \) . Let \( w \) be as before, and also \( \xi \left( t\right) \) as before, with say \( \xi \left( 0\right) = v = \log p \) . We have to show \( {D}_{v}^{2}\left( w\right) \in V \) . By assumption, \( \xi \) is a curve in \( V \), and hence so is \( {\xi }^{\prime } \) , which we computed above, with the power series \( f \) of \( §2 \) . Thus \( {\xi }^{\prime }\left( t\right) \) is a power series in \( t \), whose coefficients lie in \( V \) . The coefficient of \( {t}^{2} \) is directly computed to be \[ \frac{1}{12}{D}_{\xi \left( 0\right) }^{2}\left( w\right) \in V \] thus completing the proof of the theorem.
Yes
Lemma 3.5. Let \( L \) be a Lie algebra and \( V \) a linear subspace. Then \( V \) is stable under \( {D}_{v}^{2} \) for all \( v \in V \) if and only if \( V \) is stable under all operators \( {D}_{u}{D}_{v} \) with \( u, v \in V \) .
Proof. Applying the hypothesis that \( {D}_{v}^{2} \) leaves \( V \) stable to \( u + v \) (polarization) shows that \( {D}_{u}{D}_{v} + {D}_{v}{D}_{u} \) leaves \( V \) stable, or in other words,\n\n\( \left( *\right) \)\n\n\[ \left\lbrack {u,\left\lbrack {v, w}\right\rbrack }\right\rbrack + \left\lbrack {v,\left\lbrack {u, w}\right\rbrack }\right\rbrack \in V\;\text{ for all }u, v, w \in V. \]\n\nFrom \( {D}_{\left\lbrack u, v\right\rbrack } = {D}_{u}{D}_{v} - {D}_{v}{D}_{u} \), we see that \( {D}_{\left\lbrack u, v\right\rbrack } + 2{D}_{v}{D}_{u} \) leaves \( V \) stable, that is\n\n\( \left( {* * }\right) \)\n\n\[ \left\lbrack {\left\lbrack {u, v}\right\rbrack, w}\right\rbrack + 2\left\lbrack {v,\left\lbrack {u, w}\right\rbrack }\right\rbrack \in V. \]\n\nInterchanging \( u \) and \( w \) in \( \left( *\right) \) shows that \( \left\lbrack {\left\lbrack {u, v}\right\rbrack, w}\right\rbrack - \left\lbrack {v,\left\lbrack {u, w}\right\rbrack }\right\rbrack \in V \) . Subtracting this from \( \left( {* * }\right) \) gives \( 3\left\lbrack {v,\left\lbrack {u, w}\right\rbrack }\right\rbrack \in V \), hence \( \left\lbrack {v,\left\lbrack {u, w}\right\rbrack }\right\rbrack \in V \) for all \( u, v, w \in V \), which proves the lemma. Note that the proof is valid for a Lie algebra over any commutative ring in which 3 is invertible.
Yes
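The two operator identities the proof relies on can be checked with concrete matrices; the sketch below is ours (not from the text), using \( \left\lbrack {x, y}\right\rbrack = {xy} - {yx} \) on sample 2×2 matrices.

```python
# Our own sketch (not from the text): with [x, y] = xy - yx on 2x2 matrices,
# check, applied to w, the operator identities used in the proof:
#   D_{u+v}^2 = D_u^2 + D_u D_v + D_v D_u + D_v^2   (polarization), and
#   D_{[u,v]} = D_u D_v - D_v D_u.

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(a, b):
    return [[a[i][j] - b[i][j] for j in range(2)] for i in range(2)]

def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def br(a, b):                     # the bracket [a, b] = ab - ba
    return sub(mul(a, b), mul(b, a))

u = [[1.0, 2.0], [0.0, -1.0]]     # sample matrices
v = [[0.5, -1.0], [1.0, 0.0]]
w = [[2.0, 0.0], [1.0, 1.0]]

uv = add(u, v)
lhs1 = br(uv, br(uv, w))          # D_{u+v}^2 w
rhs1 = add(add(br(u, br(u, w)), br(u, br(v, w))),
           add(br(v, br(u, w)), br(v, br(v, w))))
err1 = max(abs(lhs1[i][j] - rhs1[i][j]) for i in range(2) for j in range(2))

lhs2 = br(br(u, v), w)            # D_{[u,v]} w
rhs2 = sub(br(u, br(v, w)), br(v, br(u, w)))
err2 = max(abs(lhs2[i][j] - rhs2[i][j]) for i in range(2) for j in range(2))

print(err1 < 1e-12 and err2 < 1e-12)
```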
Theorem 3.7. Let \( X = \exp \left( V\right) \) . Then \( X \) is a geodesic submanifold if and only if \( X \) satisfies the (equivalent) conditions of Theorem 3.3, e.g. \( X \) is a symmetric submanifold.
Proof. Assume \( X \) is symmetric. The image of the line through 0 and an element \( v \in V, v \neq 0 \) is a geodesic which is contained in \( X \) . Since the maps \( x \mapsto {yxy} \) (for \( y \in X \) ) leave \( X \) stable, and act transitively on \( X \), it follows that \( X \) contains the geodesic between any two of its points. Conversely, assume \( X \) is a geodesic submanifold. Let \( x \in X, v \in V \) . Then \( {S}_{x} \) maps the geodesic \( {x}^{1/2}\exp \left( {tv}\right) {x}^{1/2} \) to \( {x}^{1/2}\exp \left( {-{tv}}\right) {x}^{1/2} \), and so this geodesic is stable under \( {S}_{x} \) (as a submanifold). Hence \( {S}_{x} \) maps \( X \) into itself, so \( X \) is symmetric, thus concluding the proof.
Yes
Theorem 3.9. Let \( R \) be the Riemann tensor. Then at the unit element \( e \) , with \( u, v, w \in {T}_{e}\left( \mathrm{{Pos}}\right) = \) Sym, we have\n\n\[ R\left( {v, w}\right) u = - \left\lbrack {\left\lbrack {v, w}\right\rbrack, u}\right\rbrack \]\n\nand \( {R}_{2}\left( {v, w}\right) = \langle R\left( {v, w}\right) v, w{\rangle }_{\mathrm{{tr}}} \geqq 0 \) .
Proof. Assume the formula for the 4-tensor \( R \) . Substituting \( u = v \) and taking the tr-scalar product immediately shows that\n\n\[ \langle R\left( {v, w}\right) v, w{\rangle }_{\mathrm{{tr}}} = - 2\operatorname{tr}\left( {{\left( vw\right) }^{2} - {v}^{2}{w}^{2}}\right) . \]\n\nHence the semipositivity of \( {R}_{2} \) comes from the Schwarzian property. So there remains to prove the formula for \( R \) . But this is a special case of a formula which holds much more generally for Killing fields, since for symmetric spaces, we know that \( {m}_{e} = {T}_{e} \), see Chapter XIII, Theorem 5.8 and Theorem 4.6.
No
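The formula for \( {R}_{2} \) and its semipositivity can be checked on samples; the sketch below is ours (not from the text), using \( R\left( {v, w}\right) v = - \left\lbrack {\left\lbrack {v, w}\right\rbrack, v}\right\rbrack \) at \( e \) with sample 2×2 symmetric matrices.

```python
# Our own sketch (not from the text): at e, with R(v,w)v = -[[v,w],v],
# check <R(v,w)v, w>_tr = -2 tr((vw)^2 - v^2 w^2) and its nonnegativity.

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(a, b):
    return [[a[i][j] - b[i][j] for j in range(2)] for i in range(2)]

def tr(a):
    return a[0][0] + a[1][1]

def br(a, b):                     # [a, b] = ab - ba
    return sub(mul(a, b), mul(b, a))

v = [[1.0, 0.2], [0.2, -0.7]]     # sample symmetric matrices
w = [[0.4, 0.9], [0.9, 0.3]]

zero = [[0.0, 0.0], [0.0, 0.0]]
rv = sub(zero, br(br(v, w), v))   # R(v,w)v = -[[v,w],v]
lhs = tr(mul(rv, w))              # <R(v,w)v, w>_tr
vw = mul(v, w)
rhs = -2.0 * (tr(mul(vw, vw)) - tr(mul(mul(v, v), mul(w, w))))
print(abs(lhs - rhs) < 1e-12, lhs >= 0.0)
```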
Theorem 1.1. Suppose \( X \) is pseudo Riemannian and \( D \) is the metric derivative. Let \( \operatorname{gr}\left( \varphi \right) \) denote the gradient of a function \( \varphi \) . Then\n\n\[ \n{D}^{2}\varphi \left( {\eta ,\zeta }\right) = \left\langle {{D}_{\eta }\operatorname{gr}\left( \varphi \right) ,\zeta }\right\rangle \n\]\n\nIn particular, \( \left\langle {{D}_{\eta }\operatorname{gr}\left( \varphi \right) ,\zeta }\right\rangle \) is symmetric in \( \left( {\eta ,\zeta }\right) \) .
Proof. Let \( f = \langle \operatorname{gr}\left( \varphi \right) ,\zeta \rangle = {D}_{\zeta }\varphi \) . By the definition of the metric derivative,\n\n\[ \n{D}_{\eta }f = \left\langle {{D}_{\eta }\operatorname{gr}\left( \varphi \right) ,\zeta }\right\rangle + \left\langle {\operatorname{gr}\left( \varphi \right) ,{D}_{\eta }\zeta }\right\rangle \n\]\n\n\[ \n= \left\langle {{D}_{\eta }\operatorname{gr}\left( \varphi \right) ,\zeta }\right\rangle + \left( {{D}_{\eta }\zeta }\right) \cdot \varphi .\n\]\n\nOn the other hand, by (1),\n\n\[ \n{D}_{\eta }f = {D}_{\eta }\left( {{D}_{\zeta }\varphi }\right) = {D}^{2}\varphi \left( {\eta ,\zeta }\right) + \left( {{D}_{\eta }\zeta }\right) \cdot \varphi ,\n\]\n\nwhich proves the theorem.
Yes
Proposition 1.2. For each vector field \( \xi, Q\left( {\eta ,\zeta }\right) \xi \) defines a bilinear tensor as a function of \( \left( {\eta ,\zeta }\right) \) . Furthermore, just as with functions, we have\n\n\[ \left( {{D}^{2}\xi }\right) \left( {\eta ,\zeta }\right) = Q\left( {\eta ,\zeta }\right) \xi \]
Proof. The expression \( Q\left( {\eta ,\zeta }\right) \) is well defined at each point of \( X \), and the local expression shows that it is a section of the vector bundle of bilinear maps of \( {TX} \) into \( {TX} \) . The formula relating it to \( {D}^{2} \) is proved by exactly the same argument as (1). Note that by definition, \( \left( {D\xi }\right) \left( \zeta \right) = {D}_{\zeta }\xi \) , so:\n\n\[ \left( {{D}^{2}\xi }\right) \left( {\eta ,\zeta }\right) = D\left( {D\xi }\right) \left( {\eta ,\zeta }\right) \]\n\n\[ = \left( {{D}_{\eta }\left( {D\xi }\right) }\right) \left( \zeta \right) \]\n\n\[ = {D}_{\eta }{D}_{\zeta }\xi - \left( {D\xi }\right) \left( {{D}_{\eta }\zeta }\right) \]\n\nwhich proves the formula.
Yes
Proposition 1.3. For all vector fields \( \eta ,\zeta \) we have\n\n\[ Q\left( {\eta ,\zeta }\right) - Q\left( {\zeta ,\eta }\right) = R\left( {\eta ,\zeta }\right) \]
Proof. This is a short computation, namely:\n\n\[ Q\left( {\eta ,\zeta }\right) - Q\left( {\zeta ,\eta }\right) = {D}_{\eta }{D}_{\zeta } - {D}_{\zeta }{D}_{\eta } - {D}_{{D}_{\eta }\zeta } + {D}_{{D}_{\zeta }\eta } \]\n\n\[ = {D}_{\eta }{D}_{\zeta } - {D}_{\zeta }{D}_{\eta } - {D}_{\left\lbrack \eta ,\zeta \right\rbrack } \]\n\n\[ = R\left( {\eta ,\zeta }\right) \]\n\nThis concludes the proof.
Yes
Proposition 1.4.\n\n\[ \left( {{D}^{2}\omega }\right) \left( {\eta ,\zeta }\right) = Q\left( {\eta ,\zeta }\right) \omega \]
Proof. As before,\n\n\[ \left( {{D}^{2}\omega }\right) \left( {\eta ,\zeta }\right) = \left( {{D}_{\eta }\left( {D\omega }\right) }\right) \left( \zeta \right) \]\n\n\[ = {D}_{\eta }{D}_{\zeta }\omega - \left( {D\omega }\right) \left( {{D}_{\eta }\zeta }\right) \]\n\n\[ = {D}_{\eta }{D}_{\zeta }\omega - {D}_{{D}_{\eta }\zeta }\omega \]\n\nwhich proves the proposition.
Yes
Proposition 1.5. Let \( A \) be a tensor field of endomorphisms of \( {TX} \), i.e. a section of \( L\left( {{TX},{TX}}\right) \) . As a function of its \( \left( {\eta ,\zeta }\right) \) variables, \( Q\left( {\eta ,\zeta }\right) A \) is tensorial. Furthermore, \( R\left( {\eta ,\zeta }\right) \) is a derivation in the sense that for all vector fields \( \xi \) , \[ R\left( {\eta ,\zeta }\right) \left( {A\xi }\right) = \left( {R\left( {\eta ,\zeta }\right) A}\right) \xi + {AR}\left( {\eta ,\zeta }\right) \xi . \]
Proof. This follows directly from Proposition 1.3 and the fact that \( {D}_{\zeta }\left( {A\eta }\right) = \left( {{D}_{\zeta }A}\right) \eta + A{D}_{\zeta }\eta \), i.e. \( {D}_{\zeta } \) is a derivation.
Yes
For all vector fields \( \xi ,\eta ,\zeta \) we have\n\n\[ \left\lbrack {\xi ,{D}_{\zeta }\eta }\right\rbrack = Q\left( {\zeta ,\eta }\right) \xi - R\left( {\zeta ,\xi }\right) \eta + {D}_{\left\lbrack \xi ,\zeta \right\rbrack }\eta + {D}_{\zeta }\left\lbrack {\xi ,\eta }\right\rbrack \]
Proof. This is a short computation as follows:\n\n\[ \left\lbrack {\xi ,{D}_{\zeta }\eta }\right\rbrack = {D}_{\xi }{D}_{\zeta }\eta - {D}_{{D}_{\zeta }\eta }\xi \]\n\n\[ = {D}_{\zeta }{D}_{\xi }\eta + {D}_{\left\lbrack \xi ,\zeta \right\rbrack }\eta + R\left( {\xi ,\zeta }\right) \eta - {D}_{{D}_{\zeta }\eta }\xi \]\n\n\[ = {D}_{\zeta }{D}_{\eta }\xi + {D}_{\zeta }\left\lbrack {\xi ,\eta }\right\rbrack + {D}_{\left\lbrack \xi ,\zeta \right\rbrack }\eta + R\left( {\xi ,\zeta }\right) \eta - {D}_{{D}_{\zeta }\eta }\xi \]\n\nthe first step by the definition of the covariant derivative, the second step by the definition of the Riemann tensor, and the third step again by the definition of the covariant derivative. This proves the lemma.
Yes
A vector field is a Killing field if and only if its restriction to every geodesic is a Jacobi lift of the geodesic.
First let \( \alpha \) be a geodesic and let \( \xi \) be Killing. We shall give two proofs that the restriction of \( \xi \) to \( \alpha \) is a Jacobi lift of \( \alpha \) . We take \( \rho = {\rho }_{s} \) to be the flow of \( \xi \), and use Kill 1. Put\n\n\[ \sigma \left( {s, t}\right) = \rho \left( {s,\alpha \left( t\right) }\right) \]\n\nThen \( \sigma \left( {s, t}\right) \) is a variation of \( \alpha \) through geodesics, and\n\n\[ \xi \left( {\alpha \left( t\right) }\right) = {\partial }_{1}\sigma \left( {0, t}\right) \]\n\nThus \( \xi \circ \alpha \) is a Jacobi lift of \( \alpha \) by Chapter IX, Proposition 2.8. This gives one proof. For the second proof, recall the Jacobi equation\n\n\[ {D}_{{\alpha }^{\prime }}^{2}\left( {\xi \circ \alpha }\right) = R\left( {{\alpha }^{\prime },\xi \circ \alpha }\right) {\alpha }^{\prime }. \]\n\nThis equation comes out directly from condition Kill 3 by setting \( \eta = \zeta = {\alpha }^{\prime } \) over \( \alpha \), and restricting \( \xi \) to \( \alpha \) . Since \( {D}_{{\alpha }^{\prime }}{\alpha }^{\prime } = 0 \), the term not involving \( {D}_{{\alpha }^{\prime }}^{2} \) becomes 0, and the Jacobi equation drops out from the Killing equation. These proofs are essentially those in [KoN 69] (Vol. II), p. 66, Proposition 1.3.
Yes
Proposition 3.1. Suppose \( X \) is pseudo Riemannian. The following conditions are equivalent to a vector field \( \xi \) being g-Killing.
Proof (Cf. [O’N 83]). Assume that \( \xi \) is \( g \) -Killing. The property Kill \( {}_{g}\mathbf{1} \) then follows essentially directly from the definition of Lie derivative, because for all \( t,{\rho }_{t}^{ * }\left( g\right) = g \), so the Lie derivative of \( g \) is 0 . The converse is also immediate, because in general\n\n\[ \frac{d}{ds}{\rho }_{s}^{ * }\left( g\right) = {\rho }_{s}^{ * }\left( {{\mathcal{L}}_{\xi }g}\right) \]\n\nHence assuming \( {\operatorname{Kill}}_{g}\mathbf{1} \), we conclude that the left side is 0, whence \( {\rho }_{s}^{ * }\left( g\right) \) is constant, whence equal to \( g \), thus proving that \( \xi \) is \( g \) -Killing.\n\nThe other two equivalences come from general formulas exhibiting the obstruction to being a derivation.
No
For all vector fields \( \xi ,\eta ,\zeta \) we have\n\n\[{\mathcal{L}}_{\xi }\langle \eta ,\zeta \rangle = \left\langle {{\mathcal{L}}_{\xi }\eta ,\zeta }\right\rangle + \left\langle {\eta ,{\mathcal{L}}_{\xi }\zeta }\right\rangle + {\mathcal{L}}_{\xi }\left( g\right) \langle \eta ,\zeta \rangle\n\]\n\[= \left\langle {{\mathcal{L}}_{\xi }\eta ,\zeta }\right\rangle + \left\langle {\eta ,{\mathcal{L}}_{\xi }\zeta }\right\rangle + \left\langle {{D}_{\eta }\xi ,\zeta }\right\rangle + \left\langle {\eta ,{D}_{\zeta }\xi }\right\rangle\]
Proof. The first identity exhibits the fact that \( {\mathcal{L}}_{\xi }\left( {g\left( {\eta ,\zeta }\right) }\right) \) satisfies the Leibniz derivation product rule, relative to the triple \( \left( {g,\eta ,\zeta }\right) \), cf. Chapter V, Proposition 5.1, which applies to all multilinear forms, not just alternating forms. Thus Kill \( {}_{g}2 \) is immediately equivalent to Kill \( {}_{g}1 \) . The second formula follows from the metric derivative property\n\n\[{\mathcal{L}}_{\xi }\langle \eta ,\zeta \rangle = \left\langle {{D}_{\xi }\eta ,\zeta }\right\rangle + \left\langle {\eta ,{D}_{\xi }\zeta }\right\rangle\]\n\nafter using \( {D}_{\xi }\eta = {D}_{\eta }\xi + \left\lbrack {\xi ,\eta }\right\rbrack \) and similarly for \( {D}_{\xi }\zeta \) .
Yes
Proposition 3.3. Let \( \xi \) be a g-Killing field.\n\n(i) For any curve \( \alpha \), we have\n\n\[ \left\langle {\xi \left( {{\rho }_{s} \circ \alpha }\right) ,{\left( {\rho }_{s} \circ \alpha \right) }^{\prime }}\right\rangle = \left\langle {\xi \circ \alpha ,{\alpha }^{\prime }}\right\rangle . \]\n\nEquivalently, the left side is independent of \( s \).\n\n(ii) If \( \alpha \) is a geodesic, then \( \left\langle {\xi \circ \alpha ,{\alpha }^{\prime }}\right\rangle \) is constant.
Proof. The proof of (i) is immediate from (1) and (2). As for (ii), we take the derivative of the function \( \left\langle {\xi \circ \alpha ,{\alpha }^{\prime }}\right\rangle \), and find\n\n\[ \left\langle {{D}_{{\alpha }^{\prime }}\left( {\xi \circ \alpha }\right) ,{\alpha }^{\prime }}\right\rangle \]\n\nbecause \( {D}_{{\alpha }^{\prime }}{\alpha }^{\prime } = 0 \) by definition of a geodesic. By Kill 3 it follows that the above expression is 0, thus proving the proposition.
No
Proposition 3.4. Let \( \xi \) be a g-Killing field. As usual let \( {\xi }^{2} = \langle \xi ,\xi \rangle \). Then \[ \operatorname{grad}{\xi }^{2} = - 2{D}_{\xi }\xi \]
Proof. Again let \( \alpha \) be a curve with \( \alpha \left( 0\right) = x,{\alpha }^{\prime }\left( 0\right) = v \). Consider the derivative \[ h\left( {s, t}\right) = {\partial }_{t}\left\langle {{\partial }_{s}\rho \left( {s,\alpha \left( t\right) }\right) ,{\partial }_{s}\rho \left( {s,\alpha \left( t\right) }\right) }\right\rangle . \] Since \( {\partial }_{s}\rho \left( {s,\alpha \left( t\right) }\right) = \xi \left( {{\rho }_{s}\left( {\alpha \left( t\right) }\right) }\right) \), putting \( f = {\xi }^{2} \) we find \[ h\left( {0,0}\right) = {df}\left( x\right) v = \left\langle {\operatorname{grad}{\xi }^{2}, v}\right\rangle . \] On the other hand, from a basic property of the Riemannian covariant derivative, we also have \[ h\left( {s, t}\right) = 2\left\langle {{\partial }_{s}\rho \left( {s,\alpha \left( t\right) }\right) ,{D}_{t}{\partial }_{s}\rho \left( {s,\alpha \left( t\right) }\right) }\right\rangle \] \[ = 2\left\langle {{\partial }_{s}\rho \left( {s,\alpha \left( t\right) }\right) ,{D}_{s}{\partial }_{t}\rho \left( {s,\alpha \left( t\right) }\right) }\right\rangle \] by the usual commutation rule of Chapter VIII, Lemma 5.3, \[ = 2\left\langle {\xi \left( {{\rho }_{s} \circ \alpha \left( t\right) }\right) ,{D}_{s}{\left( {\rho }_{s} \circ \alpha \right) }^{\prime }\left( t\right) }\right\rangle \] \[ = 2{\partial }_{s}\left\langle {\xi \left( {{\rho }_{s} \circ \alpha \left( t\right) }\right) ,{\left( {\rho }_{s} \circ \alpha \right) }^{\prime }\left( t\right) }\right\rangle \] \[ - 2\left\langle {{D}_{s}\xi \left( {{\rho }_{s} \circ \alpha \left( t\right) }\right) ,{\left( {\rho }_{s} \circ \alpha \right) }^{\prime }\left( t\right) }\right\rangle \] \[ = - 2\left\langle {{D}_{s}\xi \left( {{\rho }_{s} \circ \alpha \left( t\right) }\right) ,{\left( {\rho }_{s} \circ \alpha \right) }^{\prime }\left( t\right) }\right\rangle \] because by Proposition 3.3(i) the expression \( \left\langle {\xi \left( {{\rho }_{s} \circ \alpha \left( t\right) }\right) ,{\left( {\rho }_{s} \circ \alpha \right) }^{\prime }\left( t\right) }\right\rangle \) is independent of \( s \) . 
We then evaluate at \( t = 0, s = 0 \) to get \[ h\left( {0,0}\right) = - 2\left\langle {{D}_{\xi }\xi \left( x\right), v}\right\rangle \] The two expressions for \( h\left( {0,0}\right) \) are valid for all \( v \), and hence the proposition follows.
Yes
Corollary 3.5. Let \( \xi \) be a \( g \) -Killing field and \( \rho = \rho \left( {s, x}\right) \) its flow. For fixed \( x \), the curve \( s \mapsto \rho \left( {s, x}\right) \) is a non-constant geodesic if and only if \( \xi \left( x\right) \neq 0 \) and \( d{\xi }^{2}\left( x\right) = 0 \) .
Proof. A curve \( s \mapsto \beta \left( s\right) \) is a geodesic if and only if \( {D}_{{\beta }^{\prime }}{\beta }^{\prime } = 0 \) . In our context, with \( \beta \left( s\right) = \rho \left( {s, x}\right) \), this means \( {D}_{s}{\partial }_{s}\rho \left( {s, x}\right) = 0 \), and so the equivalence is clear from the proposition.
No
Proposition 4.1. Killing fields form a Lie subalgebra of all vector fields.
Proof. It suffices to prove that if \( \xi ,\eta \) satisfy Kill 2, then so does \( \left\lbrack {\xi ,\eta }\right\rbrack \) . This is a special case of the following lemma, formulated in an abstract context because at this point I want to emphasize the extent to which the present arguments depend only on Lie algebras over rings.\n\nLemma 4.2. Let \( V \) be a Lie algebra (over a commutative ring). Suppose given a bilinear map \( V \times V \rightarrow V \), which we denote\n\n\[ \left( {y, z}\right) \mapsto {yz} \]\n\nand call the bilinear product. Let \( W \) be the submodule of \( V \) consisting of all elements \( w \in V \) such that the map\n\n\[ y \mapsto \left\lbrack {w, y}\right\rbrack \]\n\nis a derivation for this bilinear product, namely\n\n\[ \left\lbrack {w,{yz}}\right\rbrack = \left\lbrack {w, y}\right\rbrack z + y\left\lbrack {w, z}\right\rbrack \]\n\nThen \( W \) is a Lie subalgebra of \( V \) .\n\nProof. We carry out the short computation in full, but note that having formulated the result, the computation is forced, and no surprise occurs. For \( v, w \in W \) we have to show that \( \left\lbrack {v, w}\right\rbrack \) acts as a derivation with respect to the bilinear product. We shall use the defining property of the bracket product of a Lie algebra, which says that bracketing with an element is a derivation with respect to the bracket product. Let \( y, z \in V \) . 
Then\n\n\[ \left\lbrack {\left\lbrack {v, w}\right\rbrack ,{yz}}\right\rbrack = \left\lbrack {\left\lbrack {v,{yz}}\right\rbrack, w}\right\rbrack + \left\lbrack {v,\left\lbrack {w,{yz}}\right\rbrack }\right\rbrack \]\n\n\[ = \left\lbrack {\left\lbrack {v, y}\right\rbrack z + y\left\lbrack {v, z}\right\rbrack, w}\right\rbrack + \left\lbrack {v,\left\lbrack {w, y}\right\rbrack z + y\left\lbrack {w, z}\right\rbrack }\right\rbrack \]\n\n\[ = \left\lbrack {\left\lbrack {v, y}\right\rbrack, w}\right\rbrack z + \left\lbrack {v, y}\right\rbrack \left\lbrack {z, w}\right\rbrack + \left\lbrack {y, w}\right\rbrack \left\lbrack {v, z}\right\rbrack + y\left\lbrack {\left\lbrack {v, z}\right\rbrack, w}\right\rbrack \]\n\n\[ + \left\lbrack {v,\left\lbrack {w, y}\right\rbrack }\right\rbrack z + \left\lbrack {w, y}\right\rbrack \left\lbrack {v, z}\right\rbrack + \left\lbrack {v, y}\right\rbrack \left\lbrack {w, z}\right\rbrack + y\left\lbrack {v,\left\lbrack {w, z}\right\rbrack }\right\rbrack . \]\n\nThe middle terms cancel, and using the bracket derivation property, what is left is\n\n\[ = \left\lbrack {\left\lbrack {v, w}\right\rbrack, y}\right\rbrack z + y\left\lbrack {\left\lbrack {v, w}\right\rbrack, z}\right\rbrack \]\n\nwhich proves the lemma.\n\nAs noted at the beginning of \( §2 \), we apply the lemma to the bilinear map\n\n\[ \left( {\zeta ,\eta }\right) \mapsto {D}_{\zeta }\eta \]\n\nWe take the real numbers as the ring of coefficients. This concludes the proof of Proposition 4.1.
Yes
Lemma 4.2. Let \( V \) be a Lie algebra (over a commutative ring). Suppose given a bilinear map \( V \times V \rightarrow V \), which we denote\n\n\[ \left( {y, z}\right) \mapsto {yz} \]\n\nand call the bilinear product. Let \( W \) be the submodule of \( V \) consisting of all elements \( w \in V \) such that the map\n\n\[ y \mapsto \left\lbrack {w, y}\right\rbrack \]\n\nis a derivation for this bilinear product, namely\n\n\[ \left\lbrack {w,{yz}}\right\rbrack = \left\lbrack {w, y}\right\rbrack z + y\left\lbrack {w, z}\right\rbrack \]\n\nThen \( W \) is a Lie subalgebra of \( V \) .
Proof. We carry out the short computation in full, but note that having formulated the result, the computation is forced, and no surprise occurs. For \( v, w \in W \) we have to show that \( \left\lbrack {v, w}\right\rbrack \) acts as a derivation with respect to the bilinear product. We shall use the defining property of the bracket product of a Lie algebra, which says that bracketing with an element is a derivation with respect to the bracket product. Let \( y, z \in V \) . Then\n\n\[ \left\lbrack {\left\lbrack {v, w}\right\rbrack ,{yz}}\right\rbrack = \left\lbrack {\left\lbrack {v,{yz}}\right\rbrack, w}\right\rbrack + \left\lbrack {v,\left\lbrack {w,{yz}}\right\rbrack }\right\rbrack \]\n\n\[ = \left\lbrack {\left\lbrack {v, y}\right\rbrack z + y\left\lbrack {v, z}\right\rbrack, w}\right\rbrack + \left\lbrack {v,\left\lbrack {w, y}\right\rbrack z + y\left\lbrack {w, z}\right\rbrack }\right\rbrack \]\n\n\[ = \left\lbrack {\left\lbrack {v, y}\right\rbrack, w}\right\rbrack z + \left\lbrack {v, y}\right\rbrack \left\lbrack {z, w}\right\rbrack + \left\lbrack {y, w}\right\rbrack \left\lbrack {v, z}\right\rbrack + y\left\lbrack {\left\lbrack {v, z}\right\rbrack, w}\right\rbrack \]\n\n\[ + \left\lbrack {v,\left\lbrack {w, y}\right\rbrack }\right\rbrack z + \left\lbrack {w, y}\right\rbrack \left\lbrack {v, z}\right\rbrack + \left\lbrack {v, y}\right\rbrack \left\lbrack {w, z}\right\rbrack + y\left\lbrack {v,\left\lbrack {w, z}\right\rbrack }\right\rbrack . \]\n\nThe middle terms cancel, and using the bracket derivation property, what is left is\n\n\[ = \left\lbrack {\left\lbrack {v, w}\right\rbrack, y}\right\rbrack z + y\left\lbrack {\left\lbrack {v, w}\right\rbrack, z}\right\rbrack \]\n\nwhich proves the lemma.
Proposition 4.3. Suppose \( D \) is the metric derivative in the pseudo Riemannian case. Then the metric Killing fields form a Lie subalgebra of the Killing fields.
Proof. Property \( {\operatorname{Kill}}_{g}\mathbf{2} \) states that \( \xi \) is Killing if and only if the Lie derivative \( {\mathcal{L}}_{\xi } \) is a derivation with respect to the metric product. As in Lemma 4.2, one proves that the set of vector fields which act as derivations with respect to such a product is a Lie subalgebra. One uses the fact that on the space of functions, one has

\[ {\mathcal{L}}_{\left\lbrack \xi ,\eta \right\rbrack } = {\mathcal{L}}_{\xi } \circ {\mathcal{L}}_{\eta } - {\mathcal{L}}_{\eta } \circ {\mathcal{L}}_{\xi }. \]

The steps essentially follow those of Lemma 4.2 and are left to the reader, as is the possible formulation of an abstract lemma to cover the situation. On the other hand, one can also argue from \( {\operatorname{Kill}}_{g}\mathbf{1} \), since the same formula for \( {\mathcal{L}}_{\left\lbrack \xi ,\eta \right\rbrack } \) holds acting on metrics, showing at once that if \( {\mathcal{L}}_{\xi }\left( g\right) = {\mathcal{L}}_{\eta }\left( g\right) = 0 \), then also \( {\mathcal{L}}_{\left\lbrack \xi ,\eta \right\rbrack }\left( g\right) = 0 \). Take your pick.
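A concrete check (an illustration not in the text, in flat \( \mathbf{R}^3 \)): for the Euclidean metric, \( \xi \) is Killing if and only if \( \partial_i \xi_j + \partial_j \xi_i = 0 \), and one can verify symbolically that the bracket of two Killing fields is again Killing:

```python
import sympy as sp

# Illustration of Proposition 4.3 in flat R^3 (not part of the text):
# for the Euclidean metric, (L_xi g)_{ij} = d_i xi_j + d_j xi_i, and
# the bracket of two Killing fields is again Killing.

x = sp.symbols('x0:3')

def lie_metric(xi):
    """(L_xi g)_{ij} = d_i xi_j + d_j xi_i for the flat metric."""
    return sp.Matrix(3, 3, lambda i, j:
                     sp.diff(xi[j], x[i]) + sp.diff(xi[i], x[j]))

def bracket(xi, eta):
    """Lie bracket of vector fields on R^3 in coordinates."""
    return [sum(xi[j]*sp.diff(eta[i], x[j]) - eta[j]*sp.diff(xi[i], x[j])
                for j in range(3)) for i in range(3)]

rot_z = [-x[1], x[0], 0]   # rotation field about the x2-axis: Killing
rot_x = [0, -x[2], x[1]]   # rotation field about the x0-axis: Killing

assert lie_metric(rot_z) == sp.zeros(3)
assert lie_metric(rot_x) == sp.zeros(3)
# Their bracket is again Killing (it is another rotation field).
assert lie_metric(bracket(rot_z, rot_x)) == sp.zeros(3)
```

Here the bracket of the two rotation fields is a rotation field about the remaining axis, in accordance with the proposition.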
Proposition 4.4. (a) \( \;\left\lbrack {{\mathfrak{m}}_{p},{\mathfrak{m}}_{p}}\right\rbrack \subset {\mathfrak{h}}_{p} \). (b) \( \;\left\lbrack {{\mathfrak{h}}_{p},{\mathfrak{h}}_{p}}\right\rbrack \subset {\mathfrak{h}}_{p} \). (c) \( \;\left\lbrack {{\mathfrak{h}}_{p},{\mathfrak{m}}_{p}}\right\rbrack \subset {\mathfrak{m}}_{p} \).
Proof. For (a), we let \( \xi ,\eta \in {\mathfrak{m}}_{p} \). Evaluating at \( p \) gives

\[ \left\lbrack {\xi ,\eta }\right\rbrack \left( p\right) = {D}_{\xi }\eta \left( p\right) - {D}_{\eta }\xi \left( p\right) = 0 \]

because \( {D\xi }\left( p\right) = {D\eta }\left( p\right) = 0 \) by the definition of \( {\mathfrak{m}}_{p} \), so \( \left\lbrack {\xi ,\eta }\right\rbrack \in {\mathfrak{h}}_{p} \). For (b), let \( \eta ,\zeta \in {\mathfrak{h}}_{p} \). Then

\[ \left\lbrack {\eta ,\zeta }\right\rbrack \left( p\right) = {D}_{\eta }\zeta \left( p\right) - {D}_{\zeta }\eta \left( p\right) = 0 \]

because \( \eta \left( p\right) = 0 \) and \( \zeta \left( p\right) = 0 \): the covariant derivative is tensorial in its lower index, and \( {D}_{0} = 0 \). For (c), let \( \eta \in {\mathfrak{h}}_{p} \) and \( \xi \in {\mathfrak{m}}_{p} \). We use the relation

\[ \left\lbrack {\eta ,\xi }\right\rbrack = {D}_{\eta }\xi - {D}_{\xi }\eta . \]

We have to show that \( {D}_{\zeta }\left\lbrack {\eta ,\xi }\right\rbrack \left( p\right) = 0 \) for all \( \zeta \). It suffices to show that

\[ {D}_{\zeta }{D}_{\eta }\xi \left( p\right) = 0\;\text{ and }\;{D}_{\zeta }{D}_{\xi }\eta \left( p\right) = 0. \]

We use an elegant argument of Klingenberg. We have by Kill 3:

\[ \left( {{D}_{\zeta }{D}_{\eta }\xi }\right) \left( p\right) = {D}_{{D}_{\zeta }\eta }\xi \left( p\right) + R\left( {\zeta ,\xi }\right) \eta \left( p\right) = 0, \]

the first term vanishing because \( \xi \in {\mathfrak{m}}_{p} \), and the second because \( \eta \left( p\right) = 0 \). The second equation \( {D}_{\zeta }{D}_{\xi }\eta \left( p\right) = 0 \) follows the same way. This concludes the proof of Proposition 4.4.
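The three inclusions can be illustrated at \( p = 0 \) in flat \( \mathbf{R}^3 \) (an illustration not in the text): there \( {\mathfrak{h}}_{p} \) contains the linear rotation fields \( x \mapsto Ax \) (they vanish at \( p \)), and \( {\mathfrak{m}}_{p} \) the constant translation fields (their covariant derivative vanishes identically).

```python
import sympy as sp

# Illustration of Proposition 4.4 at p = 0 in flat R^3 (not part of the
# text): h_p contains the rotation fields A x (vanishing at p), and m_p
# the constant translation fields (covariant derivative identically 0).

x = sp.symbols('x0:3')

def bracket(xi, eta):
    """Lie bracket of vector fields on R^3 in coordinates."""
    return [sum(xi[j]*sp.diff(eta[i], x[j]) - eta[j]*sp.diff(xi[i], x[j])
                for j in range(3)) for i in range(3)]

def at_p(xi):
    """Evaluate a vector field at p = 0."""
    return [sp.sympify(c).subs({v: 0 for v in x}) for c in xi]

h1 = [-x[1], x[0], 0]   # in h_p: vanishes at 0
h2 = [0, -x[2], x[1]]   # in h_p
m1 = [1, 0, 0]          # in m_p: constant field
m2 = [0, 1, 0]          # in m_p

# (a) [m_p, m_p] subset h_p: the bracket of constant fields is 0.
assert bracket(m1, m2) == [0, 0, 0]
# (b) [h_p, h_p] subset h_p: the bracket of linear fields vanishes at p.
assert at_p(bracket(h1, h2)) == [0, 0, 0]
# (c) [h_p, m_p] subset m_p: the bracket is again a constant field.
assert all(sp.diff(c, v) == 0 for c in bracket(h1, m1) for v in x)
```

In this flat model the curvature term of Kill 3 vanishes, so the brackets can be computed directly in coordinates.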
Proposition 4.5. Assume that the exponential map \( {\exp }_{p} : {T}_{p} \rightarrow X \) is surjective. Then \( {\mathfrak{h}}_{p} \cap {\mathfrak{m}}_{p} = \{ 0\} \), so \( {\mathfrak{h}}_{p} + {\mathfrak{m}}_{p} \) is a direct sum. More generally, the map \[ \operatorname{Kill}\left( X\right) \rightarrow {T}_{p} \times \operatorname{End}\left( {T}_{p}\right) \;\text{ given by }\;\xi \mapsto \left( {\xi \left( p\right) ,{D\xi }\left( p\right) }\right) \] is injective. (By definition, \( {D\xi }\left( p\right) \left( v\right) = \left( {{D}_{v}\xi }\right) \left( p\right) \) for \( v \in {T}_{p} \).)
Proof. The first assertion is a consequence of the second, so suppose that \( \xi \left( p\right) = 0 \) and \( {D}_{\zeta }\xi \left( p\right) = 0 \) for all vector fields \( \zeta \) . We restrict \( \xi \) to a geodesic \( \alpha \) with \( \alpha \left( 0\right) = p \) . Then by Proposition 2.2, \( \xi \circ \alpha \) is the unique Jacobi lift of \( \alpha \) with \( \left( {0,0}\right) \) initial conditions, so \( \xi \circ \alpha = 0 \) . By the assumption that the exponential map is surjective, there exists a geodesic from \( p \) to any point of \( X \), so \( \xi = 0 \), concluding the proof of the proposition.
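For instance (an illustration not in the text), on Euclidean \( \mathbf{R}^{n} \) every Killing field has the form

\[ \xi \left( x\right) = b + A\left( {x - p}\right) ,\;{A}^{t} = - A, \]

and then \( \xi \left( p\right) = b \) and \( {D\xi }\left( p\right) = A \). The map \( \xi \mapsto \left( {\xi \left( p\right) ,{D\xi }\left( p\right) }\right) = \left( {b, A}\right) \) is visibly injective, in accordance with the proposition, and gives \( \dim \operatorname{Kill}\left( {\mathbf{R}}^{n}\right) = n + n\left( {n - 1}\right) /2 \).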