Theorem 4.6. Fix a point \( p \in X \). For all vector fields \( \xi ,\eta ,\zeta \in {\mathfrak{m}}_{p} \), we have

\[ R\left( {\xi ,\eta }\right) \zeta \left( p\right) = {D}_{\zeta }\left\lbrack {\xi ,\eta }\right\rbrack \left( p\right) = \left\lbrack {\zeta ,\left\lbrack {\xi ,\eta }\right\rbrack }\right\rbrack \left( p\right) . \]
Proof. By KILL 3, using \( {D}_{\eta }\zeta \left( p\right) = 0 \) and \( {D}_{\zeta }\xi \left( p\right) = 0 \), we get

\[ {D}_{\eta }{D}_{\zeta }\xi \left( p\right) + R\left( {\xi ,\eta }\right) \zeta \left( p\right) = 0, \]

\[ {D}_{\zeta }{D}_{\xi }\eta \left( p\right) + R\left( {\eta ,\zeta }\right) \xi \left( p\right) = 0. \]

But \( R\left( {\eta ,\zeta }\right) \xi = {D}_{\eta }{D}_{\zeta }\xi - {D}_{\zeta }{D}_{\eta }\xi - {D}_{\left\lbrack \eta ,\zeta \right\rbrack }\xi \), and by definition, \( {D}_{\left\lbrack \eta ,\zeta \right\rbrack }\xi \left( p\right) = 0 \). Using this and subtracting the above two relations yields

\[ R\left( {\xi ,\eta }\right) \zeta \left( p\right) = \left( {{D}_{\zeta }{D}_{\xi }\eta - {D}_{\zeta }{D}_{\eta }\xi }\right) \left( p\right) \]

\[ = {D}_{\zeta }\left( {{D}_{\xi }\eta - {D}_{\eta }\xi }\right) \left( p\right) \]

\[ = {D}_{\zeta }\left\lbrack {\xi ,\eta }\right\rbrack \left( p\right) \]

\[ = \left\lbrack {\zeta ,\left\lbrack {\xi ,\eta }\right\rbrack }\right\rbrack \left( p\right) , \]

because putting \( \lambda = \left\lbrack {\xi ,\eta }\right\rbrack \) we know from Proposition 4.4(a) that \( \lambda \in {\mathfrak{h}}_{p} \), and

\[ \left\lbrack {\zeta ,\lambda }\right\rbrack \left( p\right) = {D}_{\zeta }\lambda \left( p\right) - {D}_{\lambda }\zeta \left( p\right) = {D}_{\zeta }\lambda \left( p\right) , \]

thus concluding the proof of the theorem.
Yes
Lemma 5.1. Let \( x, y \in X \). Assume that \( {\exp }_{x} : {T}_{x} \rightarrow X \) is surjective. Given a linear isomorphism \( L : {T}_{x} \rightarrow {T}_{y} \), there is at most one \( D \)-automorphism \( f : X \rightarrow X \) such that \( f\left( x\right) = y \) and \( {T}_{x}f = L \).
Proof. A D-automorphism \( f \) maps geodesics to geodesics, and a geodesic is uniquely determined by its initial conditions, namely the value at 0 and the derivative at 0 . Thus the condition that \( {\exp }_{x} \) is surjective is just what is needed to determine \( f \) globally on \( X \) from its initial conditions at \( x \) .
Yes
Proposition 5.2. Suppose \( X \) has a symmetry at every point \( x \in X \). Then \( X \) is geodesically complete, that is, \( {\exp }_{x} \) is defined on \( {T}_{x} \) for all \( x \).
Proof. Let \( \alpha : \left\lbrack {0, c}\right\rbrack \rightarrow X \) be a geodesic, defined on a finite interval. Let \( x = \alpha \left( c\right) \) . Then \( {T}_{x}{\sigma }_{x} \) maps \( - {\alpha }^{\prime }\left( c\right) \) to \( {\alpha }^{\prime }\left( c\right) \) . But \( {\sigma }_{x} \) being a \( D \) -isomorphism maps geodesics to geodesics, and by the uniqueness of geodesics satisfying initial conditions, it follows that \( {\sigma }_{x} \) maps \( \alpha \left( t\right) \) with \( t \in \left\lbrack {0, c}\right\rbrack \) to \( \alpha \left( {{2c} - t}\right) \), in other words, \( \alpha \) is defined on the interval \( \left\lbrack {0,{2c}}\right\rbrack \), whence on \( \mathbf{R} \) by symmetry, thus concluding the proof.
Yes
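The extension argument of Proposition 5.2 can be checked concretely on the unit circle, where the geodesic symmetry at a point \( x \) is realized by the Euclidean reflection \( 2x{x}^{t} - I \) across the line through \( 0 \) and \( x \). This is an illustrative sketch (the names `sigma` and `alpha` are not from the text): the symmetry at \( \alpha \left( c\right) \) carries \( \alpha \left( t\right) \) to \( \alpha \left( {{2c} - t}\right) \), extending a geodesic known on \( \left\lbrack {0, c}\right\rbrack \) to \( \left\lbrack {0,{2c}}\right\rbrack \).

```python
import math

def sigma(c, p):
    """Geodesic symmetry of the unit circle at x = (cos c, sin c):
    the Euclidean reflection 2 x x^t - I across the line through 0 and x."""
    x = (math.cos(c), math.sin(c))
    d = x[0] * p[0] + x[1] * p[1]            # <x, p>
    return (2 * d * x[0] - p[0], 2 * d * x[1] - p[1])

def alpha(t):
    """A unit-speed geodesic on the circle."""
    return (math.cos(t), math.sin(t))

# sigma_{alpha(c)} maps alpha(t) to alpha(2c - t): a geodesic known on
# [0, c] is thereby extended to [0, 2c], as in the proof above.
c = 0.7
for t in (0.0, 0.3, 0.6):
    q = sigma(c, alpha(t))
    r = alpha(2 * c - t)
    assert abs(q[0] - r[0]) < 1e-12 and abs(q[1] - r[1]) < 1e-12
```

The same picture also exhibits \( {T}_{x}{\sigma }_{x} = - \mathrm{id} \): the reflection fixes \( x \) and reverses the tangent direction of the circle there.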
Proposition 5.3. Let \( x, y \in X \). Let \( \alpha \) be a non-constant geodesic such that \( \alpha \left( c\right) = x \) and \( \alpha \left( b\right) = y \). Then

\[ {T}_{y}{\sigma }_{x} = - {P}_{b,\alpha }^{{2c} - b}\;\text{ on }\;{T}_{y}X. \]
Proof. Let \( v \in {T}_{\alpha \left( c\right) }X \) be a tangent vector as above. By the remarks at the beginning of this section, \( \left( {T{\sigma }_{\alpha \left( c\right) }}\right) \left( {\gamma \left( {t, v}\right) }\right) \) is parallel translation of \( \left( {T\sigma }\right) \left( v\right) \) along \( {\sigma }_{\alpha \left( c\right) } \circ \alpha \), and we may apply PAR 2 with \( \beta \left( t\right) = \alpha \left( {{2c} - t}\right) \). Note that \( \left( {T\sigma }\right) \left( v\right) = - v \). Hence

\[ \left( {{T}_{\alpha \left( b\right) }{\sigma }_{x}}\right) \left( {{P}_{c,\alpha }^{b}\left( v\right) }\right) = - {P}_{c,\alpha }^{{2c} - b}\left( v\right) . \]

Putting \( w = {P}_{c,\alpha }^{b}\left( v\right) \), so that \( v = {P}_{b,\alpha }^{c}\left( w\right) \), we get

\[ \left( {{T}_{\alpha \left( b\right) }{\sigma }_{x}}\right) \left( w\right) = - {P}_{c,\alpha }^{{2c} - b} \circ {P}_{b,\alpha }^{c}\left( w\right) = - {P}_{b,\alpha }^{{2c} - b}\left( w\right) , \]

which yields the proposition.
Yes
Proposition 5.4. Let \( {P}_{t,\alpha }^{t + s} : {T}_{\alpha \left( t\right) } \rightarrow {T}_{\alpha \left( {t + s}\right) } \) be parallel translation. Then

\[ {T}_{\alpha \left( t\right) }{\tau }_{\alpha, s} = {P}_{t,\alpha }^{t + s}. \]

In particular, for \( v \in {T}_{\alpha \left( 0\right) } \), we have

\[ {T}_{\alpha \left( 0\right) }{\tau }_{\alpha, s}\left( v\right) = {P}_{0,\alpha }^{s}\left( v\right) . \]
Proof. This is immediate from Proposition 5.3, using the chain rule for the tangent map of a composite mapping, and PAR 1.
Yes
Proposition 5.5. Let \( \alpha \) be a non-constant geodesic.

(i) Then \( \left\{ {\tau }_{\alpha, s}\right\} \) is the flow of a Killing field, i.e. it is a one-parameter group of \( D \)-automorphisms. In other words, if for \( x \in X \) we define

\[ {\xi }_{\alpha }\left( x\right) = {\partial }_{1}{\tau }_{\alpha }\left( {0, x}\right) , \]

then \( {\xi }_{\alpha } \) is Killing, and \( {\tau }_{\alpha } \) is its flow.

(ii) The geodesic \( \alpha \) is an integral curve of \( {\xi }_{\alpha } \), that is, for all \( t \),

\[ {\alpha }^{\prime }\left( t\right) = {\xi }_{\alpha }\left( {\alpha \left( t\right) }\right) . \]

If the symmetries are metric symmetries, then \( {\tau }_{\alpha, s} \) is the flow of a metric Killing field, and \( {\xi }_{\alpha } \) is metric Killing.
Proof. We first show that \( {\tau }_{\alpha, s + t} = {\tau }_{\alpha, s} \circ {\tau }_{\alpha, t} \) for all \( s, t \in \mathbf{R} \). Both sides are \( D \)-automorphisms. By Lemma 5.1 it suffices to show that they coincide at one point and that their tangent maps coincide at this point. We can select the point to be, say, \( \alpha \left( 0\right) \), in which case the equality of both sides at \( x = \alpha \left( 0\right) \) is given by (2). Then the equality of the tangent maps at \( \alpha \left( 0\right) \) is given by Proposition 5.4, which concludes the proof that \( \left\{ {\tau }_{\alpha, s}\right\} \) is a one-parameter group of \( D \)-automorphisms. It is then a property of all one-parameter groups of differential automorphisms that if one defines \( {\xi }_{\alpha }\left( x\right) \) as in the formula given in (i), then \( \left\{ {\tau }_{\alpha }\right\} \) is the flow of \( {\xi }_{\alpha } \). The proof is in any case immediate by differentiating \( {\tau }_{\alpha }\left( {s + t, x}\right) \).

For (ii), we differentiate the equation in (2) with respect to \( s \), and then set \( s = 0 \) to obtain the fact that \( \alpha \) is an integral curve of \( {\xi }_{\alpha } \).

The remark about metric symmetries is immediate, due to the fact that parallel translation in the metric case is an isometry. This concludes the proof of the proposition.
Yes
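The group law of Proposition 5.5 can be seen numerically on the unit circle. Here we assume the standard description of the translation as a composite of two geodesic symmetries, \( {\tau }_{\alpha, s} = {\sigma }_{\alpha \left( s/2\right) } \circ {\sigma }_{\alpha \left( 0\right) } \) (formula (2) is not reproduced above, so this is an assumption of the sketch); on the circle the composite of the two reflections is the rotation by \( s \).

```python
import math

def sigma(c, p):
    # geodesic symmetry of the unit circle at alpha(c) = (cos c, sin c)
    x = (math.cos(c), math.sin(c))
    d = x[0] * p[0] + x[1] * p[1]
    return (2 * d * x[0] - p[0], 2 * d * x[1] - p[1])

def tau(s, p):
    # translation along alpha: symmetry at alpha(s/2) after symmetry at alpha(0)
    return sigma(s / 2, sigma(0.0, p))

# tau_s acts as rotation by s, so the group law tau_{s+t} = tau_s o tau_t
# holds at every point, as the proof of Proposition 5.5 asserts.
p = (math.cos(1.1), math.sin(1.1))
s, t = 0.4, 0.9
lhs = tau(s + t, p)
rhs = tau(s, tau(t, p))
assert abs(lhs[0] - rhs[0]) < 1e-12 and abs(lhs[1] - rhs[1]) < 1e-12
```

The vector field \( {\xi }_{\alpha } \) generated by this flow is the rotation field of the circle, consistent with (ii): the geodesic itself is one of its integral curves.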
Proposition 5.6. Let \( \alpha ,\beta \) be non-constant geodesics with

\[ \alpha \left( 0\right) = \beta \left( 0\right) = p. \]

Let \( {\alpha }^{\prime }\left( 0\right) = w \). Let \( {\tau }_{\alpha } \) be translation along \( \alpha \) as above, and let

\[ \eta \left( t\right) = {\partial }_{1}{\tau }_{\alpha }\left( {0,\beta \left( t\right) }\right) = {\xi }_{\alpha }\left( {\beta \left( t\right) }\right) . \]

Then

\[ \eta \left( 0\right) = w\;\text{ and }\;{D}_{{\beta }^{\prime }}\eta \left( 0\right) = 0. \]

Thus \( \eta \) is the unique Jacobi lift of \( \beta \) satisfying these initial conditions.
Proof. That \( \eta \left( 0\right) = {\alpha }^{\prime }\left( 0\right) = w \) comes from Proposition 5.5(ii), so we next have to show that \( {D}_{{\beta }^{\prime }}\eta \left( 0\right) = 0 \). Let \( v = {\beta }^{\prime }\left( 0\right) \). By Proposition 5.4, we know that \( {T}_{p}{\tau }_{\alpha, s} = {P}_{0,\alpha }^{s} \). Essentially from the definition of parallel translation, it follows that \( {D}_{s}{T}_{p}{\tau }_{\alpha, s}\left( v\right) = 0 \). (Cf. Chapter VIII, Theorems 3.3 and 3.4.) Let \( \varphi \left( {s, t}\right) = {\tau }_{\alpha }\left( {s,\beta \left( t\right) }\right) \). Since \( {\partial }_{2}\varphi \left( {s, 0}\right) = {T}_{p}{\tau }_{\alpha, s}\left( v\right) \), we get:

\[ 0 = {D}_{1}{\partial }_{2}\varphi \left( {0,0}\right) = {D}_{2}{\partial }_{1}\varphi \left( {0,0}\right) = {D}_{{\beta }^{\prime }}\eta \left( 0\right) . \]

The assertion about Jacobi lifts is merely a reminder of standard properties of Jacobi lifts, cf. Chapter IX, Theorem 2.1 and Proposition 2.8. This concludes the proof of Proposition 5.6.
Yes
Corollary 5.7. Let \( \alpha \) be a non-constant geodesic, and put \( \alpha \left( 0\right) = p \). Then \( {\xi }_{\alpha } \in {\mathfrak{m}}_{p} \).
Proof. Special case of Proposition 5.6, because given \( v \in {T}_{p}X \) we can find a geodesic \( \beta \) such that \( \beta \left( 0\right) = p \) and \( {\beta }^{\prime }\left( 0\right) = v \) .
Yes
Theorem 5.8. The Killing sequence is exact, and is split by the map \( v \mapsto {\xi }_{v} \). The map \( \xi \mapsto \xi \left( p\right) \) thus induces an isomorphism

\[ {\mathfrak{m}}_{p}\overset{ \approx }{ \rightarrow }{T}_{p}X \]

of \( {\mathfrak{m}}_{p} \) with the tangent space at \( p \). We have a direct sum decomposition

\[ \operatorname{Kill}\left( X\right) = {\mathfrak{h}}_{p} \oplus {\mathfrak{m}}_{p}. \]

If \( \xi \in {\mathfrak{m}}_{p} \), \( \xi \neq 0 \), then \( \xi = {\xi }_{\alpha } = {\xi }_{v} \), where \( \alpha \) is the geodesic such that \( \alpha \left( 0\right) = p \) and \( {\alpha }^{\prime }\left( 0\right) = \xi \left( p\right) = v \).
Proof. That \( {\mathfrak{h}}_{p} \) is the kernel of \( \xi \mapsto \xi \left( p\right) \) comes from the definition of \( {\mathfrak{h}}_{p} \) . The map is surjective, because at the given point \( p \) we can find a geodesic \( \alpha \) such that \( \alpha \left( 0\right) = p \) and \( {\alpha }^{\prime }\left( 0\right) \) is equal to a given tangent vector at \( p \) . We can then apply Proposition 5.5. The direct sum decomposition follows from Proposition 4.5. The last statement is merely a rephrasing of these results, in light of Proposition 5.5. This concludes the proof.
Yes
Proposition 6.1. Let \( \Omega : X \rightarrow {L}^{3}\left( {{TX},{TX}}\right) \) be a trilinear tensor field on a D-manifold \( X \). Then \( {D}_{\xi }\Omega = 0 \) for all \( \xi \) if and only if parallel translation commutes with \( \Omega \), that is, for every geodesic \( \alpha \),

\[ {P}_{a,\alpha }^{b} \circ {\Omega }_{{\alpha }^{\prime }\left( a\right) } = {\Omega }_{{\alpha }^{\prime }\left( b\right) } \circ {P}_{a,\alpha }^{b}. \]
Proof. If \( {D}_{\xi }\Omega = 0 \) for all vector fields \( \xi \), then the commutation comes directly from the definition of \( {D}_{\xi }\Omega = 0 \), and, say, the local expression as in Chapter VIII, 3.5, 3.6, and 3.7. Conversely, for a trilinear tensor field \( \Omega \) and a geodesic \( \alpha \), we have

\[ \left( {{D}_{{\alpha }^{\prime }}\Omega }\right) \left( {\alpha \left( 0\right) }\right) = \mathop{\lim }\limits_{{t \rightarrow 0}}\frac{{P}_{t}{\Omega }_{\alpha \left( t\right) } - {\Omega }_{\alpha \left( 0\right) }}{t}, \]

where \( {P}_{t}{\Omega }_{\alpha \left( t\right) } \) denotes \( {\Omega }_{\alpha \left( t\right) } \) pulled back to \( {T}_{\alpha \left( 0\right) } \) by parallel translation along \( \alpha \). The converse (actually the equivalence) follows immediately. The proposition could have been given in Chapter VIII.
Yes
Proposition 6.2. Let \( \left( {X, D}\right) \) be a symmetric space. Then for all vector fields \( \xi \) we have

\[ {D}_{\xi }R = 0. \]

In other words, the Riemann tensor is parallel.
Proof. At a given point \( x \), we compute \( {T}_{x}{\sigma }_{x} \) applied to \( \left( {{D}_{u}R}\right) \left( {v, w, z}\right) \) in two ways, with vectors \( u, v, w, z \in {T}_{x} \). First,

\[ {T}_{x}{\sigma }_{x} \cdot \left( {{D}_{u}R}\right) \left( {v, w, z}\right) = - \left( {{D}_{u}R}\right) \left( {v, w, z}\right) \;\text{ because }\;{T}_{x}{\sigma }_{x} = - \mathrm{id}. \]

On the other hand, by functoriality, and the fact that \( {\sigma }_{x} \) is a \( D \)-automorphism,

\[ {T}_{x}{\sigma }_{x} \cdot \left( {{D}_{u}R}\right) \left( {v, w, z}\right) = \left( {{D}_{{T}_{x}{\sigma }_{x} \cdot u}R}\right) \left( {{T}_{x}{\sigma }_{x} \cdot v,{T}_{x}{\sigma }_{x} \cdot w,{T}_{x}{\sigma }_{x} \cdot z}\right) \]

\[ = \left( {{D}_{-u}R}\right) \left( {-v, - w, - z}\right) = \left( {{D}_{u}R}\right) \left( {v, w, z}\right) . \]

This proves the proposition.
Yes
Proposition 6.3. Let \( \left( {X, D}\right) \) be a D-manifold. Let \( \alpha \) be a geodesic, \( \alpha \left( 0\right) = x \), \( {\alpha }^{\prime }\left( 0\right) = u \neq 0 \). Let \( \eta \) be the Jacobi lift of \( \alpha \) with initial conditions

\[ \eta \left( 0\right) = {v}_{0}\;\text{ and }\;{D}_{{\alpha }^{\prime }}\eta \left( 0\right) = {v}_{1}. \]

Let

\[ A\left( t\right) = \mathop{\sum }\limits_{{k = 0}}^{\infty }\frac{{t}^{2k}}{\left( {2k}\right) !}{R}_{u}^{k}\left( {v}_{0}\right) + \mathop{\sum }\limits_{{k = 0}}^{\infty }\frac{{t}^{{2k} + 1}}{\left( {{2k} + 1}\right) !}{R}_{u}^{k}\left( {v}_{1}\right) . \]

Let \( {P}_{0,\alpha }^{t} \) be parallel translation along \( \alpha \), and assume that parallel translation commutes with the Riemann tensor. Then

\[ \eta \left( t\right) = {P}_{0,\alpha }^{t}A\left( t\right) . \]
Proof. Let \( {\eta }_{1}\left( t\right) = {P}_{0,\alpha }^{t}A\left( t\right) \). Trivially, \( {\eta }_{1}\left( 0\right) = {v}_{0} \). By Chapter IX, Proposition 5.1, we also see that \( {D}_{{\alpha }^{\prime }}{\eta }_{1}\left( 0\right) = {v}_{1} \) because \( {D}_{{\alpha }^{\prime }}{P}_{0,\alpha } = 0 \). There remains to prove that \( {\eta }_{1} \) satisfies the Jacobi differential equation. Because of the absolute convergence of the series, it suffices to check what happens to each term. Let \( \gamma \) denote parallel translation along \( \alpha \). Then for \( v \in {T}_{\alpha \left( 0\right) }X \), since \( {D}_{{\alpha }^{\prime }}\gamma = 0 \), we find:

\[ {D}_{{\alpha }^{\prime }}^{2}\gamma \left( {t,\frac{{t}^{m}}{m!}{R}_{u}^{k}\left( v\right) }\right) = \frac{{t}^{m - 2}}{\left( {m - 2}\right) !}\gamma \left( {t,{R}_{u}^{k}\left( v\right) }\right) = \frac{{t}^{m - 2}}{\left( {m - 2}\right) !}{P}_{0,\alpha }^{t} \circ {R}_{u}^{k}\left( v\right) . \]

By hypothesis,

\[ {P}_{0,\alpha }^{t} \circ {R}_{u} \circ {R}_{u}^{k - 1} = {R}_{{\alpha }^{\prime }\left( t\right) } \circ {P}_{0,\alpha }^{t} \circ {R}_{u}^{k - 1}. \]

Applying the definition \( {R}_{w}\left( {w}_{1}\right) = R\left( {w,{w}_{1}}\right) w \), and the definition of the power series \( A \), the assertion of the proposition drops out.
Yes
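In a space of constant curvature \( K \), with the normalization \( R\left( {X, Y}\right) Z = K\left( {\langle Y, Z\rangle X - \langle X, Z\rangle Y}\right) \) (an assumed convention, not fixed by the text above), the operator \( {R}_{u} \) of Proposition 6.3 acts as multiplication by \( -K \) on vectors orthogonal to the unit vector \( u \), and the series \( A\left( t\right) \) then sums to a familiar closed form. A quick numeric check of the scalar series for \( K = 1 \):

```python
import math

def A(t, v0, v1, K=1.0, terms=40):
    """Partial sum of the series in Proposition 6.3 when R_u acts as
    multiplication by -K on vectors orthogonal to the unit vector u
    (constant curvature K, convention R(X,Y)Z = K(<Y,Z>X - <X,Z>Y))."""
    s = 0.0
    for k in range(terms):
        s += (-K) ** k * t ** (2 * k) / math.factorial(2 * k) * v0
        s += (-K) ** k * t ** (2 * k + 1) / math.factorial(2 * k + 1) * v1
    return s

# For K = 1 the series sums to cos(t) v0 + sin(t) v1, the Jacobi field of
# the unit sphere with these initial conditions.
t, v0, v1 = 0.8, 1.3, -0.5
assert abs(A(t, v0, v1) - (math.cos(t) * v0 + math.sin(t) * v1)) < 1e-12
```

Since parallel translation preserves the curvature operator in this homogeneous situation, the hypothesis of Proposition 6.3 holds, and \( \eta \left( t\right) = {P}_{0,\alpha }^{t}A\left( t\right) \) reproduces the classical constant-curvature Jacobi fields.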
Theorem 1.1. Let \( {\zeta }_{X},{v}_{X} \) be extensions of \( \zeta, v \) to \( X \). The covariant derivatives of \( {\zeta }_{X} \) and \( {v}_{X} \) on \( Y \) can be expressed in the form

\[ {D}_{\eta }^{X}{\zeta }_{X} = {D}_{\eta }^{Y}\zeta + {h}_{12}\left( {\eta ,\zeta }\right) \]

\[ {D}_{\eta }^{X}{v}_{X} = {h}_{21}\left( {\eta, v}\right) + {\nabla }_{\eta }v \]

where:

\( {h}_{12} \) is a symmetric bilinear bundle map \( {TY} \times {TY} \rightarrow {N}_{X}Y \).

\( {h}_{21} \) is a bilinear bundle map \( {TY} \times {NY} \rightarrow {TY} \).

\( {\nabla }_{\eta }v = {\operatorname{pr}}_{NY}{D}_{\eta }^{X}{v}_{X} \) is independent of the extension \( {v}_{X} \) of \( v \), and \( \nabla \) is a metric derivative on \( {NY} \) (to be defined in Proposition 1.6).
We may then define an operator

(1)

\[ {H}_{\eta } : {\Gamma TY} \rightarrow {\Gamma NY}\;\text{ by the condition }\;{H}_{\eta }\left( \zeta \right) = {h}_{12}\left( {\eta ,\zeta }\right) , \]

and then

(2)

\[ {h}_{21}\left( {\eta, v}\right) = - {}^{t}{H}_{\eta }\left( v\right) . \]

As usual, the transpose is defined by the condition that for all vector fields \( \xi \) on \( Y \), and normal fields \( \mu \) on \( Y \), we have

\[ \left\langle {{H}_{\eta }\left( \xi \right) ,\mu }\right\rangle = \left\langle {\xi ,{}^{t}{H}_{\eta }\left( \mu \right) }\right\rangle . \]

Formula (2) will be proved in Theorem 1.5. Thus we give precise information on the four components \( {h}_{ij} \) with \( i, j = 1,2 \). In particular, we see from (1) and (2) that \( {D}_{\eta }^{X} \) is represented by the matrix

\[ \left( \begin{matrix} {D}_{\eta }^{Y} & - {}^{t}{H}_{\eta } \\ {H}_{\eta } & {\nabla }_{\eta } \end{matrix}\right) \;\text{ acting on }\;\left( \begin{array}{l} \zeta \\ v \end{array}\right) . \]
Yes
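The decomposition of Theorem 1.1 can be checked exactly for the unit sphere \( Y = {S}^{2} \) in \( X = {\mathbf{R}}^{3} \), where \( {D}^{X} \) is the flat directional derivative and the position vector \( p \) serves as unit normal. Rotation fields \( \xi \left( p\right) = a \times p \) are tangent to the sphere, and a classical computation (sketched below as an illustration, not quoted from the text) gives \( {h}_{12}\left( {\xi ,\eta }\right) \left( p\right) = - \langle \xi \left( p\right) ,\eta \left( p\right) \rangle \, p \).

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Rotation fields eta(p) = a x p are linear in p, so the flat derivative of
# eta in a direction v is simply a x v; taking v = xi(p) gives D^X_xi eta.
a_xi, a_eta = (0.3, -1.0, 0.7), (1.2, 0.4, -0.2)
p = (2 / 3, 1 / 3, 2 / 3)                 # a point on the unit sphere
assert abs(dot(p, p) - 1.0) < 1e-12

xi_p = cross(a_xi, p)
eta_p = cross(a_eta, p)
D = cross(a_eta, xi_p)                    # ambient derivative D^X_xi eta at p

# The normal component of D is <D, p> p, and for the unit sphere it equals
# -<xi, eta>(p) p, i.e. h_12(xi, eta) = -<xi, eta> nu with nu(p) = p.
assert abs(dot(D, p) + dot(xi_p, eta_p)) < 1e-12
```

The formula \( - \langle \xi ,\eta \rangle \, p \) is visibly symmetric in \( \xi ,\eta \), matching the symmetry of \( {h}_{12} \) asserted in the theorem.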
Proposition 1.2. Let \( \eta ,\zeta \) be vector fields on \( Y \). Let \( {\zeta }_{X} \) be a vector field on \( X \) extending \( \zeta \) locally on some open set. Then on \( Y \),

\[ {\operatorname{pr}}_{TY}{D}_{\eta }^{X}{\zeta }_{X} = {D}_{\eta }^{Y}\zeta . \]
Proof. Let \( {\nabla }_{\eta }{\zeta }_{X} = {\operatorname{pr}}_{TY}{D}_{\eta }^{X}{\zeta }_{X} \). Let \( {\eta }_{X} \) be an extension of \( \eta \) to a vector field on an open set in \( X \). Then for \( x \in Y \) in this open set we have

\[ {\left\lbrack {\eta }_{X},{\zeta }_{X}\right\rbrack }_{X}\left( x\right) = {\left\lbrack \eta ,\zeta \right\rbrack }_{Y}\left( x\right) , \]

so

\[ \left( {{D}_{\eta }^{X}{\zeta }_{X}}\right) \left( x\right) - {D}_{\zeta }^{X}{\eta }_{X}\left( x\right) = \left\lbrack {{\eta }_{X},{\zeta }_{X}}\right\rbrack \left( x\right) = \left\lbrack {\eta ,\zeta }\right\rbrack \left( x\right) . \]

Hence

\[ {\nabla }_{\eta }{\zeta }_{X} - {\nabla }_{\zeta }{\eta }_{X} = \left\lbrack {\eta ,\zeta }\right\rbrack . \]

Fix \( \eta \) and \( {\eta }_{X} \). At \( x \), \( {D}_{\zeta }^{X}{\eta }_{X} \) depends only on \( \zeta \left( x\right) \), and the bracket \( \left\lbrack {\eta ,\zeta }\right\rbrack \) does not involve the extensions. Hence this formula shows that \( {\nabla }_{\eta }{\zeta }_{X} \) is independent of the choice of extension \( {\zeta }_{X} \) of \( \zeta \). Thus we may omit the subscript \( X \), and write simply \( {\nabla }_{\eta }\zeta \). Furthermore, we have proved one of the defining properties of the covariant derivative. By the uniqueness in Theorem 4.1 of Chapter VII, it will suffice to show that \( \nabla \) is a metric covariant derivative. Note that \( {\nabla }_{\eta } \) is \( \mathrm{Fu}\left( Y\right) \)-linear in the variable \( \eta \), and satisfies the product rule of a derivative because it is satisfied by \( {D}_{\eta }^{X} \). Finally, we verify the metric property. Let \( \xi \) be another vector field on \( Y \).
Then on \( Y \),

\[ \xi \cdot \langle \eta ,\zeta \rangle = {\xi }_{X} \cdot \left\langle {{\eta }_{X},{\zeta }_{X}}\right\rangle \]

\[ = \left\langle {{D}_{\xi }^{X}{\eta }_{X},{\zeta }_{X}}\right\rangle + \left\langle {{\eta }_{X},{D}_{\xi }^{X}{\zeta }_{X}}\right\rangle \]

\[ = \left\langle {{\operatorname{pr}}_{TY}{D}_{\xi }^{X}{\eta }_{X},\zeta }\right\rangle + \left\langle {\eta ,{\operatorname{pr}}_{TY}{D}_{\xi }^{X}{\zeta }_{X}}\right\rangle , \]

because for \( x \in Y \), the vectors \( \zeta \left( x\right) \) and \( \eta \left( x\right) \) lie in \( {T}_{x}Y \), so the normal component is annihilated in the scalar product. This proves the metric property, and concludes the proof of the proposition.
Yes
Proposition 1.3. Let \( x \in Y \). Let \( v, w \in {T}_{x}Y \). Let \( \eta ,\zeta \) be sections of \( {TY} \) on a neighborhood of \( x \) such that \( \eta \left( x\right) = v \) and \( \zeta \left( x\right) = w \). Let \( {\eta }_{X} \) and \( {\zeta }_{X} \) be extensions of \( \eta ,\zeta \) to local vector fields on \( X \) near \( x \). Then we have the symmetric relation

\[ {\operatorname{pr}}_{NY}{D}_{\eta }^{X}{\zeta }_{X}\left( x\right) = {\operatorname{pr}}_{NY}{D}_{\zeta }^{X}{\eta }_{X}\left( x\right) . \]

In particular, \( {\operatorname{pr}}_{NY}{D}_{\eta }^{X}{\zeta }_{X}\left( x\right) \) is independent of the choice of sections \( \eta ,\zeta \) having the given values \( v, w \) at \( x \).
Proof. By definition of the covariant derivative,

\[ {D}_{\eta }^{X}{\zeta }_{X} - {D}_{\zeta }^{X}{\eta }_{X} = \left\lbrack {{\eta }_{X},{\zeta }_{X}}\right\rbrack = \left\lbrack {\eta ,\zeta }\right\rbrack \;\text{ at }x. \]

But \( \eta ,\zeta \) being sections of \( {TY} \), so is \( \left\lbrack {\eta ,\zeta }\right\rbrack \). Hence the normal bundle components of \( \left( {{D}_{\eta }^{X}{\zeta }_{X}}\right) \left( x\right) \) and \( \left( {{D}_{\zeta }^{X}{\eta }_{X}}\right) \left( x\right) \) are the same, thus proving the formula. We know from basic definitions that \( \left( {{D}_{\eta }^{X}\zeta }\right) \left( x\right) \) is independent of the choice of \( \eta \), and \( \left( {{D}_{\zeta }^{X}\eta }\right) \left( x\right) \) is independent of the choice of \( \zeta \). Then the last assertion follows, thus proving the proposition.
Yes
Corollary 1.4. The submanifold \( Y \) is totally geodesic if and only if its second fundamental form is 0 at every point.
Proof. The condition that a curve \( \alpha \) is a geodesic is that \( {D}_{{\alpha }^{\prime }}{\alpha }^{\prime } = 0 \). Suppose \( Y \) is totally geodesic. Let \( \alpha \) be a geodesic in \( Y \) with \( \alpha \left( 0\right) = x \) and \( {\alpha }^{\prime }\left( 0\right) = v \in {T}_{x}Y \). Then by assumption, \( \alpha \) is also a geodesic in \( X \), so taking the covariant derivatives along \( \alpha \), we get

\[ {D}_{{\alpha }^{\prime }}^{X}{\alpha }^{\prime } = {D}_{{\alpha }^{\prime }}^{Y}{\alpha }^{\prime }\;\text{ at }0, \]

whence \( {h}_{12}\left( {v, v}\right) = 0 \) for all \( v \in {T}_{x}Y \). Since \( {h}_{12} \) is symmetric, it follows that \( {h}_{12} = 0 \). Conversely, suppose \( {h}_{12} = 0 \). Let \( \alpha \) be a geodesic in \( X \), with, say, \( \alpha \left( a\right) = y \in Y \) and \( {\alpha }^{\prime }\left( a\right) \in {T}_{y}Y \). Let \( \beta \) be the geodesic in \( Y \) with the same initial conditions at \( y \). By SFF 2, for any \( x \) on \( \beta \) in a small neighborhood of \( y \), we have

\[ {D}_{{\beta }^{\prime }}^{X}{\beta }^{\prime }\left( x\right) = {D}_{{\beta }^{\prime }}^{Y}{\beta }^{\prime }\left( x\right) = 0. \]

Hence \( \beta \) is also a geodesic in \( X \). Since \( \alpha \) and \( \beta \) have the same initial conditions, they are equal, thus concluding the proof of the first statement. The fact that the covariant derivatives and parallel translations are equal then follows at once from the definition of \( {h}_{12} \) in Theorem 1.1. This concludes the proof of Corollary 1.4.
Yes
Theorem 1.5. Let \( \eta ,\xi \) be vector fields on \( Y \) and let \( \mu \) be a normal field on \( Y \). Then on \( Y \),

\[ \left\langle {{D}_{\eta }^{X}{\mu }_{X},\xi }\right\rangle = \left\langle {{h}_{21}\left( {\eta ,\mu }\right) ,\xi }\right\rangle = \left\langle {\mu , - {h}_{12}\left( {\eta ,\xi }\right) }\right\rangle . \]
Proof. We take \( {D}_{\eta }^{X} \) (Lie derivative) of \( \left\langle {{\xi }_{X},{\mu }_{X}}\right\rangle \), evaluated at a point of \( Y \). The scalar product is taken in \( {TX} \), of course. To find the derivative at a point \( x \in Y \), one may differentiate along any curve passing through that point such that the derivative of the curve is \( \eta \left( x\right) \), and such a curve may be taken in \( Y \). Therefore at such a point \( x \in Y \), we have

\[ 0 = {D}_{\eta }^{X}\left\langle {{\mu }_{X},{\xi }_{X}}\right\rangle = \left\langle {{D}_{\eta }^{X}{\mu }_{X},{\xi }_{X}}\right\rangle + \left\langle {{\mu }_{X},{D}_{\eta }^{X}{\xi }_{X}}\right\rangle \]

\[ = \left\langle {{\operatorname{pr}}_{TY}{D}_{\eta }^{X}{\mu }_{X},{\xi }_{X}}\right\rangle + \left\langle {{\mu }_{X},{\operatorname{pr}}_{NY}{D}_{\eta }^{X}{\xi }_{X}}\right\rangle \]

\[ = \left\langle {{h}_{21}\left( {{\eta }_{X},{\mu }_{X}}\right) ,\xi }\right\rangle + \left\langle {\mu ,{h}_{12}\left( {\eta ,\xi }\right) }\right\rangle . \]

We omit the subscript \( X \) in \( {h}_{12} \) because we know the independence from the extension to \( X \) by Proposition 1.3. This proves the formula of the theorem, and at the same time it shows that \( {h}_{21}\left( {{\eta }_{X},{\mu }_{X}}\right) \) is independent of the extension of \( \eta ,\mu \) to \( X \). Similarly, the relation shows that at the given point \( x \), \( {h}_{21}\left( {\eta ,\mu }\right) \left( x\right) \) depends only on \( \eta \left( x\right) \) and \( \mu \left( x\right) \). This concludes the proof.
Yes
Proposition 1.6. Let \( {\mu }_{X} \) be an extension of a normal field to \( X \). Then \( {\operatorname{pr}}_{NY}{D}_{\eta }^{X}{\mu }_{X} \) is independent of this extension, so we may denote

\[ {\nabla }_{\eta }\mu = {\operatorname{pr}}_{NY}{D}_{\eta }^{X}{\mu }_{X}. \]

Furthermore, \( \nabla \) is a metric derivative on \( {NY} \).
Proof. We prove the metric formula first. By definition of the covariant derivative on \( X \), we know that on \( Y \), for any normal field \( v \),

\[ \eta \cdot \left\langle {{\mu }_{X},{v}_{X}}\right\rangle = \left\langle {{D}_{\eta }^{X}{\mu }_{X},{v}_{X}}\right\rangle + \left\langle {{\mu }_{X},{D}_{\eta }^{X}{v}_{X}}\right\rangle . \]

For \( x \in Y \), the values \( {\mu }_{X}\left( x\right) \) and \( {v}_{X}\left( x\right) \) lie in \( {N}_{x}Y \), so the covariant derivatives in the above relation can be replaced by their projections on the normal bundle \( {NY} \). The Lie derivative on the left can be computed at \( x \) along a curve whose derivative at \( x \) is \( \eta \left( x\right) \), and this curve can be taken to lie entirely in \( Y \). Therefore the left side is independent of the extensions \( {\mu }_{X},{v}_{X} \) of \( \mu, v \) locally near \( x \), so we may write it as \( \eta \cdot \langle \mu, v\rangle \). Then we write

\[ \left\langle {\left( {{D}_{\eta }^{X}{\mu }_{X}}\right) \left( x\right), v\left( x\right) }\right\rangle = \left( {\eta \cdot \langle \mu, v\rangle }\right) \left( x\right) - \left\langle {\mu \left( x\right) ,\left( {{D}_{\eta }^{X}{v}_{X}}\right) \left( x\right) }\right\rangle .\]

The right side is independent of the extension \( {\mu }_{X} \) of \( \mu \), and therefore so is the left side. Similarly for \( {v}_{X} \). Thus we have proved simultaneously the metric formula and the independence which allows us to define \( {\nabla }_{\eta }\mu \). Note that the \( \mathrm{Fu}\left( Y\right) \)-linearity in \( \eta \) is then immediate from the metric formula. The derivation property in \( \mu \) follows from that of \( {D}_{\eta }^{X} \). This concludes the proof.
Yes
Proposition 2.1. Let \( {f}_{X} \) be an extension of \( f \) to \( X \). Let \( \xi ,\eta \) be vector fields on \( Y \). Then on \( Y \), we have

\[ {D}_{X}^{2}{f}_{X}\left( {\xi ,\eta }\right) = {D}_{Y}^{2}f\left( {\xi ,\eta }\right) - {h}_{12}\left( {\xi ,\eta }\right) \cdot {f}_{X}, \]

where \( {h}_{12}\left( {\xi ,\eta }\right) = {\operatorname{pr}}_{NY}{D}_{\xi }^{X}{\eta }_{X} \) as in §1.
Proof. We have

\[ {D}_{X}^{2}{f}_{X}\left( {{\xi }_{X},{\eta }_{X}}\right) = \xi \cdot \eta \cdot f - \left( {{D}_{\xi }^{X}{\eta }_{X}}\right) \cdot {f}_{X}. \]

By Theorem 1.1, at points of \( Y \) we have

\[ {D}_{\xi }^{X}{\eta }_{X} = \left( {{D}_{\xi }^{Y}\eta }\right) + {h}_{12}\left( {\xi ,\eta }\right) , \]

which concludes the proof by definition of \( {D}_{Y}^{2} \).
No
Proposition 2.2. Let \( {f}_{X} \) be the normal extension of \( f \) to a tubular neighborhood of \( Y \). Then for vector fields \( \xi ,\eta \) on \( Y \), we have

\[ {D}_{X}^{2}{f}_{X}\left( {\xi ,\eta }\right) = {D}_{Y}^{2}f\left( {\xi ,\eta }\right) . \]
Proof. This is immediate, because if \( v \) is a normal vector field on \( Y \) , then \( \left( {v \cdot {f}_{X}}\right) \left( x\right) = 0 \) for \( x \in Y \), immediately from the definitions. Indeed, the Lie derivative may be taken along a geodesic from \( x \), along which \( f \) is constant, so its Lie derivative is 0 . We can apply Proposition 2.1 with \( v = {h}_{12}\left( {\xi ,\eta }\right) \) to conclude the proof.
No
Proposition 2.3. Let \( v \) be a normal field on \( Y \). Let \( f \) be a function on \( Y \) and \( {f}_{X} \) its normal extension to a tubular neighborhood of \( Y \). Then on \( Y \),

\[ {D}_{X}^{2}{f}_{X}\left( {v, v}\right) = 0. \]
Proof. Let \( {v}_{X} \) be any extension of \( v \) to a neighborhood of a point \( {x}_{0} \) in \( Y \). Then at \( {x}_{0} \),

(1)

\[ {D}_{X}^{2}{f}_{X}\left( {v, v}\right) = {v}_{X} \cdot {v}_{X} \cdot {f}_{X} - \left( {{D}_{v\left( {x}_{0}\right) }{v}_{X}}\right) \cdot {f}_{X}. \]

We select a suitable extension of \( v \). For \( y \in Y \) near \( {x}_{0} \), \( w \in {N}_{y}Y \) and \( \left| w\right| \) sufficiently small, let \( {\alpha }_{y, w} \) be the geodesic in \( X \) with \( {\alpha }_{y, w}\left( 0\right) = y \) and \( {\alpha }_{y, w}^{\prime }\left( 0\right) = w \). Thus \( {\exp }_{y}\left( w\right) = {\alpha }_{y}\left( {1, w}\right) = {\alpha }_{y, w}\left( 1\right) \). Define the normal extension \( {v}_{X} \) by the formula

\[ {v}_{X}\left( {{\exp }_{y}\left( w\right) }\right) = {P}_{0,{\alpha }_{y, w}}^{1}\left( {v\left( y\right) }\right) , \]

where \( P \) is parallel translation as in Chapter VIII, Theorems 3.3 and 3.4. Then \( {v}_{X} \cdot {f}_{X} = 0 \), so the first term on the right of (1) is 0. As for the second term, letting \( \alpha = {\alpha }_{{x}_{0}, w} \) with \( w = v\left( {x}_{0}\right) \), \( \alpha \) is a geodesic so \( {D}_{{\alpha }^{\prime }}{\alpha }^{\prime } = 0 \), and we get

\[ \left( {{D}_{v\left( {x}_{0}\right) }{v}_{X}}\right) \left( {x}_{0}\right) = \left( {{D}_{{\alpha }^{\prime }}{\alpha }^{\prime }}\right) \left( {x}_{0}\right) = 0. \]

So having chosen \( {v}_{X} \) suitably, we conclude that both terms are 0, which proves the proposition.
Yes
Theorem 2.4. Let \( X \) be a finite dimensional Riemannian manifold, and let \( Y \) be a submanifold. Let \( f \) be a function on \( Y \) and let \( {f}_{X} \) be its normal extension to \( X \). Then on \( Y \),

\[ {\Delta }_{Y}f = {\Delta }_{X}{f}_{X}. \]
Proof. Let \( \left\{ {{\xi }_{1},\ldots ,{\xi }_{p}}\right\} \) be an orthonormal frame of vector fields locally on \( Y \), and let \( \left\{ {{v}_{1},\ldots ,{v}_{q}}\right\} \) be an orthonormal frame of normal fields. Together they form an orthonormal frame of sections of \( {TX} \) restricted to \( Y \). Then at a point \( x \in Y \), we have

\[ {\Delta }_{X}{f}_{X}\left( x\right) = - \sum {D}_{X}^{2}{f}_{X}\left( {{\xi }_{i},{\xi }_{i}}\right) - \sum {D}_{X}^{2}{f}_{X}\left( {{v}_{j},{v}_{j}}\right) . \]

We may now apply Propositions 2.2 and 2.3 to conclude the proof.
Yes
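Theorem 2.4 can be tested numerically for \( Y = {S}^{2} \subset {\mathbf{R}}^{3} \) and \( f = z \) restricted to the sphere: the normal extension is \( {f}_{X}\left( x\right) = z/\left| x\right| \), constant along the radial geodesics normal to the sphere. The point and finite-difference scheme below are illustrative choices, not from the text; only the standard fact that \( z \) restricted to \( {S}^{2} \) is a degree-one spherical harmonic is used.

```python
import math

def F(p):
    """Normal extension of f = z restricted to the unit sphere:
    constant along the radial (normal) geodesics."""
    r = math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2)
    return p[2] / r

def sum_second_partials(F, p, h=1e-4):
    """Flat trace sum_i d^2 F / dx_i^2, by central differences."""
    total = 0.0
    for i in range(3):
        q_plus, q_minus = list(p), list(p)
        q_plus[i] += h
        q_minus[i] -= h
        total += (F(q_plus) - 2 * F(p) + F(q_minus)) / h ** 2
    return total

# By hand, the sum of second partials of z/r is -2z/r^3, hence -2z on r = 1;
# with the sign convention Delta = -trace used in the proof above, this gives
# Delta_X F = 2z on the sphere, matching Delta_{S^2} z = 2z for the
# degree-one spherical harmonic, as Theorem 2.4 predicts.
p = (2 / 3, 1 / 3, 2 / 3)
assert abs(sum_second_partials(F, p) + 2 * p[2]) < 1e-5
```

The normal extension is exactly what makes the normal second derivatives drop out (Proposition 2.3), so the ambient trace reduces to the tangential one.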
Proposition 2.6. Let \( X \) be a Riemannian manifold, \( Y \) a submanifold, and \( {x}_{0} \in Y \) . Let \( {W}_{0} \) be the normal submanifold to \( Y \) at \( {x}_{0} \) . Let \( f \) be a function on \( X \), and let \( w \in {N}_{{x}_{0}}Y \) . Then \( {D}_{X}^{2}f\left( {w, w}\right) \) depends only on the restriction \( {f}_{{W}_{0}} \) . More precisely, if \( \alpha \) is the geodesic defined by \( \alpha \left( t\right) = \) \( {\exp }_{{x}_{0}}\left( {tw}\right) \), then
Proof. By the Killing definition of the second tensorial derivative (Chapter XIII, §1) and Corollary 3.2 of Chapter VIII, §3, we may compute this derivative along the geodesic, that is,

\[ {D}_{X}^{2}f\left( {w, w}\right) = \left( {{D}_{{\alpha }^{\prime }}{D}_{{\alpha }^{\prime }}f}\right) \left( {x}_{0}\right) - \left( {{D}_{{\alpha }^{\prime }}{\alpha }^{\prime }}\right) \cdot f\left( {x}_{0}\right) . \]

Since \( \alpha \) is a geodesic, the second term on the right vanishes, and the first term depends only on \( f \) along the geodesic \( t \mapsto {\exp }_{{x}_{0}}\left( {tw}\right) \), and so depends only on the transversal part \( {f}_{{W}_{0}} \). This concludes the proof of the first part. The second statement is even simpler, because the derivative \( {h}_{12}\left( {v, v}\right) \cdot f \) at \( {x}_{0} \) may be computed by using the same geodesic

\[ \alpha \left( t\right) = {\exp }_{{x}_{0}}\left( {tw}\right) ,\;\text{ with }\;w = {h}_{12}\left( {v, v}\right) . \]

This concludes the proof.
Proposition 2.7. Suppose \( X \) finite dimensional. Let \( Y \) be a submanifold, and \( {x}_{0} \in Y \) . Let \( {W}_{0} \) be the normal submanifold of \( Y \) at \( {x}_{0} \) . Let \( f \) be a function on \( X \) . Then\n\n\[ \n\left( {\left( {\operatorname{tr}{h}_{12}}\right) \cdot f}\right) \left( {x}_{0}\right) = \left( {\operatorname{tr}{h}_{12}}\right) \left( {x}_{0}\right) \cdot {f}_{{W}_{0}} \n\]\n\nand thus finally\n\n\[ \n{\Delta }_{X}f\left( {x}_{0}\right) = {\Delta }_{Y}{f}_{Y}\left( {x}_{0}\right) + \left( {\operatorname{tr}{h}_{12}}\right) \cdot f\left( {x}_{0}\right) - {\operatorname{tr}}_{N, Y}{D}_{X}^{2}f\left( {x}_{0}\right) . \n\]
Proof. Immediate from (4) and Proposition 2.6, using \( w = {w}_{j} = {v}_{j}\left( {x}_{0}\right) \) and \( v = {v}_{i} = {\xi }_{i}\left( {x}_{0}\right) \) .
Proposition 2.8. Suppose \( X \) finite dimensional. Let \( \pi : X \rightarrow Z \) be a submersion. Then\n\n\[{\Delta }_{X} = {\Delta }_{V,\pi } + {\Delta }_{T,\pi }\]
Proof. This is just a reformulation of Proposition 2.7, taking the previous definitions into account.
Lemma 2.9. The above map \( \varphi \) is a local isomorphism at \( \left( {{x}_{0},0}\right) \) . Its differential at this point is in fact the identity.
Proof. This is a routine verification left to the reader. Note that the tangent space of \( {V}_{0} \times {W}_{0}^{\prime } \) at the point is precisely \( {T}_{{x}_{0}}Y \times {N}_{{x}_{0}}Y \), which we identify with \( {T}_{{x}_{0}}X \) . The second statement about the differential implies the first about the local isomorphism by the inverse mapping theorem.
Lemma 2.10. Under a regular action by \( H \), the map \[ \varphi : {V}_{0} \times {W}_{0} \rightarrow X\;\text{given by}\;\left( {h, x}\right) \mapsto {hx} \] is a local isomorphism at the origin \( \left( {e,{x}_{0}}\right) \).
Proof. This is a simple exercise in computing the differential of the map at the origin, and showing that it is the identity.
Lemma 3.1. Let \( x \in {Y}_{\pi \left( x\right) } \) be a point in a fiber. Let \( f \) be a function on Z. Then for \( w \in {N}_{x}{Y}_{\pi \left( x\right) } \) we have\n\n\[ \left( {{D}_{w}{\pi }^{ * }f}\right) \left( x\right) = \left( {D{\pi }_{*w}f}\right) \left( {\pi \left( x\right) }\right) \]\n\nor in different notation, if \( v \) is a normal field at \( x \), then\n\n\[ \left( {v \cdot {\pi }^{ * }f}\right) \left( x\right) = \left( {{\pi }_{ * }v\left( x\right) \cdot f}\right) \left( {\pi \left( x\right) }\right) . \]\n\nOn the other hand, if \( v \in {T}_{x}{Y}_{\pi \left( x\right) } \), then\n\n\[ \left( {{D}_{v}{\pi }^{ * }f}\right) \left( x\right) = 0. \]
Proof. One may prove the formulas in a chart, in which case both merely come from the chain rule\n\n\[ {\left( f \circ \pi \right) }^{\prime }\left( x\right) = {f}^{\prime }\left( {\pi \left( x\right) }\right) {T\pi }\left( x\right) \]\n\napplied to any vector in \( {T}_{x}X = {T}_{x}{Y}_{\pi \left( x\right) } + {N}_{x}{Y}_{\pi \left( x\right) } \) . So the lemma is clear.
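As a concrete check of the chain-rule computation (ours, a sketch assuming SymPy): take the submersion \( \pi : {\mathbf{R}}^{3} \rightarrow {\mathbf{R}}^{2} \), \( \pi \left( {x, y, z}\right) = \left( {x, y}\right) \) with the standard metric, so the fibers are the vertical \( z \)-lines and the normal directions are \( {e}_{x},{e}_{y} \).

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
u, v = sp.symbols('u v')
f = u**2 * sp.sin(v)              # a function on Z = R^2
pullback = f.subs({u: x, v: y})   # pi*(f) for pi(x, y, z) = (x, y)

# Vertical vector (tangent to the fiber {x, y const}): the derivative vanishes
assert sp.diff(pullback, z) == 0
# Normal vector w = e_x: D_w(pi* f) equals (D_{pi_* w} f) at pi(x, y, z)
assert sp.simplify(sp.diff(pullback, x) - sp.diff(f, u).subs({u: x, v: y})) == 0
```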
Proposition 3.2. Let \( \mu, v \) be vector fields on \( Z \), and \( {\mu }_{X},{v}_{X} \) their horizontal liftings to \( X \) . Then\n\n\[ \n{\operatorname{pr}}_{E}\left( {{D}_{{\mu }_{X}}{v}_{X}}\right) = {\left( {D}_{\mu }v\right) }_{X} \n\]\n\nor equivalently, for every horizontal field \( {\lambda }_{X} \), \n\n\[ \n\left\langle {{D}_{{\mu }_{X}}{v}_{X},{\lambda }_{X}}\right\rangle = \left\langle {{D}_{\mu }v,\lambda }\right\rangle \n\]
Proof. The expression \( \left\langle {{D}_{{\mu }_{X}}{v}_{X},{\lambda }_{X}}\right\rangle \) coming from (3) involves only the Lie derivative, scalar product of vector fields and brackets. The scalar product is preserved under lifting, by definition of a Riemannian submersion. Formula (1) gives the preservation of the bracket. The Lie derivative is also preserved under lifting by Lemma 3.1. This concludes the proof.
Proposition 3.3. Let \( \mu ,v,\lambda ,\zeta \) be vector fields on \( Z \). Then

\[ {\mu }_{X} \cdot \left\langle {{D}_{{v}_{X}}{\lambda }_{X},{\zeta }_{X}}\right\rangle = {\pi }^{ * }\left( {\mu \cdot \left\langle {{D}_{v}\lambda ,\zeta }\right\rangle }\right) . \]
Proof. Again, direct consequence of (3) and Proposition 3.2.
Proposition 3.4. Let \( \mu, v \) be vector fields on \( Z \) . Then\n\n\[ \n{D}_{{\mu }_{X}}{v}_{X} = \frac{1}{2}{\left\lbrack {\mu }_{X},{v}_{X}\right\rbrack }^{V} + {\left( {D}_{\mu }v\right) }_{X}.\n\]
Proof. The horizontal component was already determined in Proposition 3.2, which gives the second term on the right of the equation. As for the vertical component, we use (3) with a vertical field \( \xi \) . Since \( \left\langle {{\mu }_{X},{v}_{X}}\right\rangle = \langle \mu, v\rangle \), if \( \xi \) is vertical, we have \( \xi \cdot \left\langle {{\mu }_{X},{v}_{X}}\right\rangle = 0 \) . The first two terms and the last two terms of (3) on the right vanish by (1). The value for the vertical component then drops out, thus proving the proposition.
Proposition 3.5. Let \( \alpha : \left\lbrack {a, b}\right\rbrack \rightarrow Z \) be a curve such that \( {\alpha }^{\prime }\left( t\right) \neq 0 \) for all \( t \) . (i) Let \( y \in {Y}_{\alpha \left( a\right) } \) . There exists a unique lifting \( A = {A}_{y} \) of \( \alpha \) in \( X \) which is horizontal, i.e. such that \( {A}^{\prime }\left( t\right) \) lies in the horizontal subbundle for all \( t \), and with the given initial condition \( A\left( a\right) = y \) .
Proof. The existence and uniqueness of the lifting are elementary, at the level of the existence and uniqueness of solutions of a differential equation. We give the details. The global assertion is a consequence of local existence and uniqueness, so we may suppose that there is a vector field \( v \) locally on \( Z \) such that \( v\left( {\alpha \left( t\right) }\right) = {\alpha }^{\prime }\left( t\right) \) for all \( t \), i.e. \( v \) extends \( {\alpha }^{\prime } \). For simplicity of notation, shrinking \( Z \) if necessary to some open subset, we suppose \( v \) is defined on all of \( Z \). Let \( y \in {Y}_{\alpha \left( a\right) } \). By the fundamental theorem on differential equations, there exists a unique curve \( A : \left\lbrack {a, b}\right\rbrack \rightarrow X \) such that \( {A}^{\prime }\left( t\right) = {v}_{X}\left( {A\left( t\right) }\right) \) for all \( t \), with \( A\left( a\right) = y \). Since \( {v}_{X}\left( {A\left( t\right) }\right) \in {E}_{A\left( t\right) } \), the curve \( A \) is horizontal. We claim that \( A \) lifts \( \alpha \), that is \( A\left( t\right) \in {Y}_{\alpha \left( t\right) } \) (the fiber above \( \alpha \left( t\right) \) ). Indeed, since \( {v}_{X} \) lifts \( v \),

\[ {\left( \pi \circ A\right) }^{\prime }\left( t\right) = {T\pi }\left( {A\left( t\right) }\right) {A}^{\prime }\left( t\right) = {T\pi }\left( {A\left( t\right) }\right) {v}_{X}\left( {A\left( t\right) }\right) = v\left( {\pi \left( {A\left( t\right) }\right) }\right) . \]

Let \( \beta = \pi \circ A \). Then \( \beta \) satisfies the differential equation \( {\beta }^{\prime }\left( t\right) = v\left( {\beta \left( t\right) }\right) \), with the same initial condition as \( \alpha \), so \( \beta = \alpha \), and thus \( A \) lifts \( \alpha \). As for uniqueness, suppose \( {v}_{1},{v}_{2} \) are two extensions of \( {\alpha }^{\prime } \) to local vector fields on \( Z \). Let \( {A}_{1},{A}_{2} \) be the liftings of \( \alpha \) corresponding to these two vector fields.
Then they satisfy \( {A}_{1}^{\prime }\left( t\right) = {A}_{2}^{\prime }\left( t\right) \) for all \( t \), and so they are equal, thus proving the first part of the proposition.
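The ODE construction of the horizontal lift can be carried out numerically. The following sketch is ours (the submersion and all names are assumptions chosen for illustration): take \( \pi : {\mathbf{R}}^{2} \rightarrow \mathbf{R} \), \( \pi \left( {x, y}\right) = x + {y}^{2} \), with the flat metric on \( {\mathbf{R}}^{2} \) defining the horizontal spaces. The fibers are the parabolas \( x + {y}^{2} = c \) with tangent direction \( \left( {-{2y},1}\right) \), so the horizontal space is spanned by \( \left( {1,{2y}}\right) \), and the lift of the base field \( d/{dt} \) is \( {v}_{X}\left( {x, y}\right) = \left( {1,{2y}}\right) /\left( {1 + 4{y}^{2}}\right) \), normalized so that \( {T\pi }\,{v}_{X} = 1 \).

```python
# Horizontal lift of the base curve alpha(t) = pi(A(0)) + t by integrating
# A'(t) = v_X(A(t)) with the explicit Euler method.
def pi(p):
    return p[0] + p[1] ** 2

def horizontal_lift_field(p):
    # Horizontal vector at p with T(pi) v_X = 1 (lift of d/dt on the base)
    y = p[1]
    n = 1.0 + 4.0 * y * y
    return (1.0 / n, 2.0 * y / n)

p, h = (0.0, 1.0), 1e-4       # start on the fiber over pi(0, 1) = 1
for _ in range(10000):        # integrate from t = 0 to t = 1
    vx, vy = horizontal_lift_field(p)
    p = (p[0] + h * vx, p[1] + h * vy)

# The lift stays over the base curve: pi(A(1)) should be 1 + 1 = 2
assert abs(pi(p) - 2.0) < 1e-3
```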
Proposition 3.6. Let \( \xi \) be a vertical field. Then

\[ \left\langle {{D}_{\xi }{\mu }_{X},{v}_{X}}\right\rangle = - \frac{1}{2}\left\langle {{\left\lbrack {\mu }_{X},{v}_{X}\right\rbrack }^{V},\xi }\right\rangle . \]
Proof. By the metric derivative formula (3) and Proposition 3.4, we obtain

\[ \left\langle {{D}_{\xi }{\mu }_{X},{v}_{X}}\right\rangle = \left\langle {{D}_{{\mu }_{X}}\xi ,{v}_{X}}\right\rangle + \left\langle {\left\lbrack {\xi ,{\mu }_{X}}\right\rbrack ,{v}_{X}}\right\rangle \]

\[ = - \left\langle {{D}_{{\mu }_{X}}{v}_{X},\xi }\right\rangle \]

\[ = - \frac{1}{2}\left\langle {\left\lbrack {{\mu }_{X},{v}_{X}}\right\rbrack ,\xi }\right\rangle \]

\[ = - \frac{1}{2}\left\langle {{\left\lbrack {\mu }_{X},{v}_{X}\right\rbrack }^{V},\xi }\right\rangle , \]

thereby proving the proposition.
Proposition 4.1. Let \( \xi ,\eta \) be vertical fields on \( X \) . Then for every function \( f \) on \( Z \), we have\n\n\[ \left( {{D}_{X}^{2}{\pi }^{ * }f}\right) \left( {\xi ,\eta }\right) = - {h}_{12}\left( {\xi ,\eta }\right) \cdot {\pi }^{ * }f \]\n\n\[ = - {\pi }_{ * }{h}_{12}\left( {\xi ,\eta }\right) \cdot f \]
Proof. We have\n\n\[ {D}^{2}{\pi }^{ * }f\left( {\xi ,\eta }\right) = \xi \cdot \left( {\eta \cdot {\pi }^{ * }f}\right) - \left( {{D}_{\xi }\eta }\right) \cdot {\pi }^{ * }f \]\n\n\[ = - \left( {{D}_{\xi }\eta }\right) \cdot {\pi }^{ * }f \]\n\nbecause \( \eta \cdot {\pi }^{ * }f = 0 \) since \( {\pi }^{ * }f \) is constant on the fibers and \( \eta \cdot {\pi }^{ * }f \) can be computed along a curve contained in the fiber \( {Y}_{\pi \left( x\right) } \) . Furthermore, the constancy of \( f \) on a fiber also yields\n\n\[ \left( {{D}_{\xi }\eta }\right) \cdot {\pi }^{ * }f = {\operatorname{pr}}_{E}\left( {{D}_{\xi }\eta }\right) \cdot {\pi }^{ * }f. \]\n\nThen Lemma 3.1 and Proposition 2.1 conclude the proof.
Proposition 4.3. Let \( \mu, v \) be vector fields on \( Z \), with horizontal liftings \( {\mu }_{X},{v}_{X} \) . Then\n\n\[ \n{D}_{X}^{2}{\pi }^{ * }f\left( {{\mu }_{X},{v}_{X}}\right) = {D}_{Z}^{2}f\left( {\mu, v}\right) .\n\]
Proof. We have\n\n\[ \n{D}_{Z}^{2}f\left( {\mu, v}\right) = \mu \cdot v \cdot f - \left( {{D}_{\mu }v}\right) \cdot f,\n\]\n\nand the similar expression on \( X \) with subscript \( X \) . The vertical component of \( {D}_{{\mu }_{X}}{v}_{X} \) annihilates \( {\pi }^{ * }f \) because \( {\pi }^{ * }f \) is constant on fibers. For the horizontal component, Proposition 3.2 shows that the last terms on the right on \( X \) and on \( Z \) give the same value. As to the first term on the right, Lemma 3.1 shows that\n\n\[ \n{v}_{X} \cdot {\pi }^{ * }f = {\pi }^{ * }\left( {v \cdot f}\right)\n\]\n\nso doing the same thing with \( {\mu }_{X} \) shows that the first terms on the right of the equation on \( X \) and \( Z \) give the same value. This concludes the proof.
Theorem 4.4. Assume that \( X \), and hence \( Z \), are finite dimensional. Then for all functions \( f \) on \( Z \) we have \[ {\Delta }_{X}{\pi }^{ * }f = {\pi }^{ * }{\Delta }_{Z}f + \left( {\operatorname{tr}{h}_{12}}\right) \cdot {\pi }^{ * }f. \]
Proof. Let \( \left\{ {{\xi }_{1},\ldots ,{\xi }_{p}}\right\} \) be an orthonormal frame of local sections of the vertical bundle \( F \), and let \( \left\{ {{\mu }_{1},\ldots ,{\mu }_{q}}\right\} \) be an orthonormal frame of sections on \( Z \) . Let \( \left\{ {{\mu }_{1X},\ldots ,{\mu }_{qX}}\right\} \) be their lifts to the horizontal bundle. Then \[ \left\{ {{\xi }_{1},\ldots ,{\xi }_{p},{\mu }_{1X},\ldots ,{\mu }_{qX}}\right\} \] is a local orthonormal frame on \( X \) . We get: \[ {\Delta }_{X}{\pi }^{ * }f = - \mathop{\sum }\limits_{i}{D}_{X}^{2}{\pi }^{ * }f\left( {{\xi }_{i},{\xi }_{i}}\right) - \mathop{\sum }\limits_{j}{D}_{X}^{2}{\pi }^{ * }f\left( {{\mu }_{jX},{\mu }_{jX}}\right) \] \[ = \left( {\operatorname{tr}{h}_{12}}\right) \cdot {\pi }^{ * }f - \mathop{\sum }\limits_{j}{D}_{Z}^{2}f\left( {{\mu }_{j},{\mu }_{j}}\right) \] by Propositions 4.2 and 4.3 respectively. This proves the theorem.
Theorem 5.1 (Gauss Equation). For \( {v}_{i}\left( {i = 1,2,3,4}\right) \) in \( {T}_{x}Y \), we have

\[ {R}_{X}\left( {{v}_{1},{v}_{2},{v}_{3},{v}_{4}}\right) = {R}_{Y}\left( {{v}_{1},{v}_{2},{v}_{3},{v}_{4}}\right) \]

\[ + \left\langle {{h}_{12}\left( {{v}_{2},{v}_{3}}\right) ,{h}_{12}\left( {{v}_{1},{v}_{4}}\right) }\right\rangle - \left\langle {{h}_{12}\left( {{v}_{2},{v}_{4}}\right) ,{h}_{12}\left( {{v}_{1},{v}_{3}}\right) }\right\rangle . \]

Or, if \( \xi ,\eta ,\zeta ,\tau \) are vector fields on \( Y \),

\[ {R}_{X}\left( {\xi ,\eta ,\zeta ,\tau }\right) = {R}_{Y}\left( {\xi ,\eta ,\zeta ,\tau }\right) \]

\[ + \left\langle {{h}_{12}\left( {\eta ,\zeta }\right) ,{h}_{12}\left( {\xi ,\tau }\right) }\right\rangle - \left\langle {{h}_{12}\left( {\eta ,\tau }\right) ,{h}_{12}\left( {\xi ,\zeta }\right) }\right\rangle . \]
Proof. The proof is routine, and forced. We have by Theorem 1.1, or SFF 2 in \( §1 \), on \( Y \) :\n\n\[ \n{D}_{\eta }^{Y}\zeta = {D}_{\eta }^{X}{\zeta }_{X} + {h}_{12}\left( {\eta ,\zeta }\right) \n\]\n\nso iterating,\n\n\[ \n{D}_{\xi }^{Y}{D}_{\eta }^{Y}\zeta = {\operatorname{pr}}_{TY}\left( {{D}_{\xi }^{X}{D}_{\eta }^{X}{\zeta }_{X} + {D}_{\xi }^{X}\left( {{h}_{12}{\left( \eta ,\zeta \right) }_{X}}\right) }\right) . \n\]\n\nWe interchange \( \xi \) and \( \eta \) and subtract. We also note that\n\n\[ \n\left\lbrack {\xi ,\eta }\right\rbrack \cdot \zeta = {\operatorname{pr}}_{TY}\left\lbrack {{\xi }_{X},{\eta }_{X}}\right\rbrack \cdot {\zeta }_{X}\;\text{ on }Y. \n\]\n\nHence by the definition of the Riemann tensor, for all vector fields \( \tau \) on \( Y \) ,\n\n\[ \n\left\langle {{R}_{Y}\left( {\xi ,\eta }\right) \zeta ,\tau }\right\rangle = \left\langle {{R}_{X}\left( {\xi ,\eta }\right) \zeta ,\tau }\right\rangle \n\]\n\n\[ \n- \left\langle {{D}_{\eta }^{X}\left( {{h}_{12}{\left( \xi ,\zeta \right) }_{X}}\right) ,\tau }\right\rangle + \left\langle {{D}_{\xi }^{X}\left( {{h}_{12}{\left( \eta ,\zeta \right) }_{X}}\right) ,\tau }\right\rangle . \n\]\n\nApplying Theorem 1.4 concludes the proof.
Theorem 5.2 (Codazzi Equation). For vector fields \( \xi ,\eta ,\zeta \) on \( Y \) , \[ {\operatorname{pr}}_{NY}{R}_{X}\left( {\xi ,\eta ,\zeta }\right) = \left( {{\nabla }_{\xi }{h}_{12}}\right) \left( {\eta ,\zeta }\right) - \left( {{\nabla }_{\eta }{h}_{12}}\right) \left( {\xi ,\zeta }\right) . \]
Proof. We start again with

\[ {D}_{\eta }^{X}{\zeta }_{X} = {D}_{\eta }^{Y}\zeta + {h}_{12}\left( {\eta ,\zeta }\right) , \]

so

\[ {D}_{\xi }^{X}{D}_{\eta }^{X}{\zeta }_{X} = {D}_{\xi }^{X}\left( {\left( {D}_{\eta }^{Y}\zeta \right) }_{X}\right) + {D}_{\xi }^{X}\left( {\left( {h}_{12}\left( {\eta ,\zeta }\right) \right) }_{X}\right) \]

\[ = {D}_{\xi }^{Y}{D}_{\eta }^{Y}\zeta + {h}_{12}\left( {\xi ,{D}_{\eta }^{Y}\zeta }\right) - {}^{t}{H}_{\xi }\left( {{h}_{12}\left( {\eta ,\zeta }\right) }\right) + {\nabla }_{\xi }\left( {{h}_{12}\left( {\eta ,\zeta }\right) }\right) . \]

Since \( {}^{t}{H}_{\xi } \) is \( {TY} \)-valued, it is killed by \( {\operatorname{pr}}_{NY} \), and we obtain

\[ {\operatorname{pr}}_{NY}{D}_{\xi }^{X}{D}_{\eta }^{X}{\zeta }_{X} = {h}_{12}\left( {\xi ,{D}_{\eta }^{Y}\zeta }\right) + {\nabla }_{\xi }\left( {{h}_{12}\left( {\eta ,\zeta }\right) }\right) . \]

We interchange \( \xi \) and \( \eta \) and subtract. We use the definition of \( {R}_{X} \) to get:

\[ {\operatorname{pr}}_{NY}R\left( {\xi ,\eta }\right) \zeta = {\nabla }_{\xi }\left( {{h}_{12}\left( {\eta ,\zeta }\right) }\right) - {\nabla }_{\eta }\left( {{h}_{12}\left( {\xi ,\zeta }\right) }\right) \]

\[ + {h}_{12}\left( {\xi ,{D}_{\eta }^{Y}\zeta }\right) - {h}_{12}\left( {\eta ,{D}_{\xi }^{Y}\zeta }\right) - {\operatorname{pr}}_{NY}{D}_{\left\lbrack \xi ,\eta \right\rbrack }^{X}{\zeta }_{X}. \]

But \( {\operatorname{pr}}_{NY}{D}_{\left\lbrack \xi ,\eta \right\rbrack }^{X}{\zeta }_{X} = {h}_{12}\left( {\left\lbrack {\xi ,\eta }\right\rbrack ,\zeta }\right) \). We use the defining equation of \( {\nabla }_{\xi }{h}_{12} \), and similarly with \( \xi ,\eta \) interchanged, which we subtract. Note that

\[ {h}_{12}\left( {{D}_{\xi }\eta ,\zeta }\right) - {h}_{12}\left( {{D}_{\eta }\xi ,\zeta }\right) = {h}_{12}\left( {\left\lbrack {\xi ,\eta }\right\rbrack ,\zeta }\right) . \]

Then we get cancellations, from which the Codazzi equation follows, thus proving the theorem.
Theorem 5.3 (Ricci Equation). We have\n\n\[ \n{R}_{X}\left( {\xi ,\eta ,\mu, v}\right) = {R}_{NY}\left( {\xi ,\eta ,\mu, v}\right) - \left\langle {\left\lbrack {{S}_{\mu },{S}_{v}}\right\rbrack \xi ,\eta }\right\rangle .\n\]
Proof. More of the same type of computation. We use (6) in \( §1 \) twice to get

\[ {R}_{X}\left( {\xi ,\eta }\right) \mu = {D}_{\xi }^{X}{D}_{\eta }^{X}\mu - {D}_{\eta }^{X}{D}_{\xi }^{X}\mu - {D}_{\left\lbrack \xi ,\eta \right\rbrack }^{X}\mu \]

\[ = {R}_{NY}\left( {\xi ,\eta }\right) \mu + {S}_{{\nabla }_{\xi }\mu }\eta + {D}_{\eta }\left( {{S}_{\mu }\xi }\right) + {h}_{12}\left( {{S}_{\mu }\xi ,\eta }\right) \]

\[ - {S}_{{\nabla }_{\eta }\mu }\xi - {D}_{\xi }\left( {{S}_{\mu }\eta }\right) - {h}_{12}\left( {\xi ,{S}_{\mu }\eta }\right) + {S}_{\mu }\left\lbrack {\xi ,\eta }\right\rbrack . \]

We take the scalar product with \( v \), and use formula (3) to find:

\[ \left\langle {{R}_{X}\left( {\xi ,\eta }\right) \mu ,v}\right\rangle = \left\langle {{R}_{NY}\left( {\xi ,\eta }\right) \mu ,v}\right\rangle + \left\langle {{h}_{12}\left( {{S}_{\mu }\xi ,\eta }\right) ,v}\right\rangle - \left\langle {{h}_{12}\left( {\xi ,{S}_{\mu }\eta }\right) ,v}\right\rangle \]

\[ = \left\langle {{R}_{NY}\left( {\xi ,\eta }\right) \mu ,v}\right\rangle - \left\langle {\left( {{S}_{\mu }{S}_{v} - {S}_{v}{S}_{\mu }}\right) \xi ,\eta }\right\rangle \]

\[ = \left\langle {{R}_{NY}\left( {\xi ,\eta }\right) \mu ,v}\right\rangle - \left\langle {\left\lbrack {{S}_{\mu },{S}_{v}}\right\rbrack \xi ,\eta }\right\rangle , \]

which concludes the proof.
Theorem 6.1. Let \( \mu, v,\lambda ,\zeta \) be vector fields on \( Z \) . Then\n\n\[ \n{R}_{X}\left( {{\mu }_{X},{v}_{X},{\lambda }_{X},{\zeta }_{X}}\right) = {R}_{Z}\left( {\mu, v,\lambda ,\zeta }\right) + {V}_{R}\left( {{\mu }_{X},{v}_{X},{\lambda }_{X},{\zeta }_{X}}\right) \n\]\n\nwhere \( {V}_{R} \) denotes the vertical component,\n\n\[ \n{V}_{R}\left( {{\mu }_{X},{v}_{X},{\lambda }_{X},{\zeta }_{X}}\right) = \frac{1}{4}\left\langle {{\left\lbrack {\mu }_{X},{\lambda }_{X}\right\rbrack }^{V},{\left\lbrack {v}_{X},{\zeta }_{X}\right\rbrack }^{V}}\right\rangle - \frac{1}{4}\left\langle {{\left\lbrack {v}_{X},{\lambda }_{X}\right\rbrack }^{V},{\left\lbrack {\mu }_{X},{\zeta }_{X}\right\rbrack }^{V}}\right\rangle \n\]\n\n\[ \n+ \frac{1}{2}\left\langle {{\left\lbrack {\lambda }_{X},{\zeta }_{X}\right\rbrack }^{V},{\left\lbrack {\mu }_{X},{v}_{X}\right\rbrack }^{V}}\right\rangle . \n\]
Proof. The Riemann tensor involves second derivatives, but all the formulas needed to perform the iteration easily have been proved in \( §3 \) . So we forge ahead. First, by Propositions 3.3, 3.5, and 3.6, we find\n\n(1)\n\[ \n\left\langle {{D}_{{\mu }_{X}}{D}_{{v}_{X}}{\lambda }_{X},{\zeta }_{X}}\right\rangle = {\mu }_{X} \cdot \left\langle {{D}_{{v}_{X}}{\lambda }_{X},{\zeta }_{X}}\right\rangle - \left\langle {{D}_{{v}_{X}}{\lambda }_{X},{D}_{{\mu }_{X}}{\zeta }_{X}}\right\rangle \n\]\n\n\[ \n= \mu \cdot \left\langle {{D}_{v}\lambda ,\zeta }\right\rangle - \left\langle {{D}_{v}\lambda ,{D}_{\mu }\zeta }\right\rangle - \frac{1}{4}\left\langle {\left\lbrack {{v}_{X},{\lambda }_{X}}\right\rbrack ,\left\lbrack {{\mu }_{X},{\zeta }_{X}}\right\rbrack }\right\rangle \n\]\n\n\[ \n= \left\langle {{D}_{\mu }{D}_{v}\lambda ,\zeta }\right\rangle - \frac{1}{4}\left\langle {{\left\lbrack {v}_{X},{\lambda }_{X}\right\rbrack }^{V},{\left\lbrack {\mu }_{X},{\zeta }_{X}\right\rbrack }^{V}}\right\rangle . \n\]\n\nDecomposing \( \left\lbrack {{\mu }_{X},{v}_{X}}\right\rbrack \) into horizontal and vertical component and using Proposition 3.6, we get\n\n(2)\n\n\[ \n\left\langle {{D}_{\left\lbrack {\mu }_{X},{v}_{X}\right\rbrack }{\lambda }_{X},{\zeta }_{X}}\right\rangle = \left\langle {{D}_{\left\lbrack \mu, v\right\rbrack }\lambda ,\zeta }\right\rangle - \frac{1}{2}\left\langle {{\left\lbrack {\lambda }_{X},{\zeta }_{X}\right\rbrack }^{V},{\left\lbrack {\mu }_{X},{v}_{X}\right\rbrack }^{V}}\right\rangle . \n\]\n\nBy (1) and (2), and the definition of the Riemann tensor\n\n\[ \nR\left( {\mu, v}\right) = {D}_{\mu }{D}_{v} - {D}_{v}{D}_{\mu } - {D}_{\left\lbrack \mu, v\right\rbrack } \n\]\n\nand similarly with the subscript \( X \), the formula of Theorem 6.1 falls out, and the proof is concluded.
Corollary 6.2. For the tensor \( {R}_{2} \) such that \( {R}_{2}\left( {v, w}\right) = R\left( {v, w, v, w}\right) \), we get

\[ {R}_{2X}\left( {{\mu }_{X},{v}_{X}}\right) = {R}_{2Z}\left( {\mu ,v}\right) + \frac{3}{4}{\begin{Vmatrix}{\left\lbrack {\mu }_{X},{v}_{X}\right\rbrack }^{V}\end{Vmatrix}}^{2}. \]

In particular, the tensor \( {R}_{2} \) decreases under submersions.
Proof. This is immediate from the definition and Theorem 6.1.
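The simplest case in which the vertical correction can be seen explicitly (our illustration, not a case treated above) is a Riemannian product:

```latex
% Riemannian product X = Z \times F, with \pi the first projection.
% Horizontal lifts are \mu_X = (\mu, 0), hence
[\mu_X, \nu_X] = ([\mu, \nu], 0) = [\mu, \nu]_X,
\qquad\text{so}\qquad {[\mu_X, \nu_X]}^{V} = 0.
% Theorem 6.1 and Corollary 6.2 then collapse to
R_X(\mu_X, \nu_X, \lambda_X, \zeta_X) = R_Z(\mu, \nu, \lambda, \zeta),
\qquad
R_{2X}(\mu_X, \nu_X) = R_{2Z}(\mu, \nu).
```

Thus the term \( \frac{3}{4}{\begin{Vmatrix}{\left\lbrack {\mu }_{X},{v}_{X}\right\rbrack }^{V}\end{Vmatrix}}^{2} \) measures the failure of the horizontal subbundle to be integrable.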
Proposition 1.1. Let \( \Omega = {\operatorname{vol}}_{g} \) . Then for all n-tuples of vectors \( \left\{ {{v}_{1},\ldots ,{v}_{n}}\right\} \) and \( \left\{ {{w}_{1},\ldots ,{w}_{n}}\right\} \) in \( V \), we have\n\n\[ \Omega \left( {{v}_{1},\ldots ,{v}_{n}}\right) \Omega \left( {{w}_{1},\ldots ,{w}_{n}}\right) = \det {\left\langle {v}_{i},{w}_{j}\right\rangle }_{g}. \]\n\nIn particular,\n\n\[ \Omega {\left( {v}_{1},\ldots ,{v}_{n}\right) }^{2} = \det {\left\langle {v}_{i},{v}_{j}\right\rangle }_{g} \]
Proof. The determinant on the right side of the first formula is multilinear and alternating in each \( n \) -tuple \( \left\{ {{v}_{1},\ldots ,{v}_{n}}\right\} \) and \( \left\{ {{w}_{1},\ldots ,{w}_{n}}\right\} \).\n\nHence there exists a number \( c \in \mathbf{R} \) such that\n\n\[ \det {\left\langle {v}_{i},{w}_{j}\right\rangle }_{g} = {c\Omega }\left( {{v}_{1},\ldots ,{v}_{n}}\right) \Omega \left( {{w}_{1},\ldots ,{w}_{n}}\right) \]\n\nfor all such \( n \) -tuples. Evaluating on an oriented orthonormal basis shows that \( c = 1 \), thus proving the proposition.
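A quick numerical instance of the Gram-determinant identity (ours, a sketch assuming NumPy), for two vectors in \( {\mathbf{R}}^{2} \) with the standard metric and orientation, where \( \Omega \) is the usual determinant:

```python
import numpy as np

# Two vectors in R^2; Omega(v1, v2) is the 2x2 determinant
v1, v2 = np.array([3.0, 1.0]), np.array([1.0, 2.0])
omega = np.linalg.det(np.column_stack([v1, v2]))

# Gram matrix <v_i, v_j>
gram = np.array([[v1 @ v1, v1 @ v2],
                 [v2 @ v1, v2 @ v2]])

# Omega(v1, v2)^2 = det <v_i, v_j>
assert abs(omega ** 2 - np.linalg.det(gram)) < 1e-9
```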
Proposition 1.3. For functions \( \varphi ,\psi \) we have\n\n\[ \mathbf{\Delta }\left( {\varphi \psi }\right) = \varphi \mathbf{\Delta }\psi + \psi \mathbf{\Delta }\varphi - 2\langle {d\varphi },{d\psi }{\rangle }_{g}. \]
Proof. The routine gives:\n\n\[ \mathbf{\Delta }\left( {\varphi \psi }\right) = {d}^{ * }d\left( {\varphi \psi }\right) = {d}^{ * }\left( {{\psi d\varphi } + {\varphi d\psi }}\right) \]\n\n\[ = - \operatorname{div}\left( {\psi {\xi }_{d\varphi }}\right) - \operatorname{div}\left( {\varphi {\xi }_{d\psi }}\right) \]\n\n\[ = - \psi \operatorname{div}{\xi }_{d\varphi } - \left( {d\psi }\right) {\xi }_{d\varphi } - \varphi \operatorname{div}{\xi }_{d\psi } - \left( {d\varphi }\right) {\xi }_{d\psi } \]\n\n\[ = \psi \mathbf{\Delta }\varphi + \varphi \mathbf{\Delta }\psi - 2\langle {d\varphi },{d\psi }{\rangle }_{g} \]\n\n as was to be shown.
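The product formula can be verified symbolically in the flat case (our check, assuming SymPy), where \( \mathbf{\Delta } = {d}^{ * }d \) is minus the classical Laplacian and \( \langle {d\varphi },{d\psi }{\rangle }_{g} \) is the dot product of the gradients:

```python
import sympy as sp

x, y = sp.symbols('x y')
phi = sp.sin(x) * y
psi = sp.exp(x + y)

# Geometer's Laplacian Delta = d*d: minus the usual Laplacian on flat R^2
lap = lambda h: -(sp.diff(h, x, 2) + sp.diff(h, y, 2))
# <d phi, d psi>_g in flat coordinates
pairing = (sp.diff(phi, x) * sp.diff(psi, x)
           + sp.diff(phi, y) * sp.diff(psi, y))

lhs = lap(phi * psi)
rhs = phi * lap(psi) + psi * lap(phi) - 2 * pairing
assert sp.simplify(lhs - rhs) == 0
```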
Corollary 1.4. Let \( \delta \) be a positive function. Then\n\n\[ \mathbf{\Delta } - \left\lbrack {\operatorname{gr}\log \delta }\right\rbrack = {\delta }^{-1/2}\mathbf{\Delta } \circ {\delta }^{1/2} - {\delta }^{-1/2}\mathbf{\Delta }\left( {\delta }^{1/2}\right) . \]
Proof. For a function \( \psi \), by Proposition 1.3,

\[ \left( {\mathbf{\Delta } \circ {\delta }^{1/2}}\right) \psi = \mathbf{\Delta }\left( {{\delta }^{1/2}\psi }\right) \]

\[ = {\delta }^{1/2}\mathbf{\Delta }\psi + \psi \mathbf{\Delta }\left( {\delta }^{1/2}\right) - 2\left( {\operatorname{gr}{\delta }^{1/2}}\right) \cdot \psi . \]

We apply the right side of the equality to be proved to a function \( \psi \). We use the formula just derived, multiplied by \( {\delta }^{-1/2} \). The term \( {\delta }^{-1/2}\mathbf{\Delta }\left( {\delta }^{1/2}\right) \psi \) cancels, and we obtain

\[ \text{(right side)}\left( \psi \right) = \mathbf{\Delta }\psi - 2{\delta }^{-1/2}\left( {\operatorname{gr}{\delta }^{1/2}}\right) \cdot \psi . \]

We use \( \mathbf{{gr}}\,\mathbf{2} \) to conclude the proof.
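The conjugation identity of Corollary 1.4 can be tested in one flat variable (our sketch, assuming SymPy; the density \( \delta = {e}^{{x}^{2}} \) is an arbitrary choice), where \( \left\lbrack {\operatorname{gr}\log \delta }\right\rbrack \) acts as \( \psi \mapsto \left( {\log \delta }\right) ^{\prime }{\psi }^{\prime } \):

```python
import sympy as sp

x = sp.Symbol('x', real=True)
psi = sp.Function('psi')(x)
delta = sp.exp(x**2)                 # an arbitrary positive density

lap = lambda h: -sp.diff(h, x, 2)    # Delta = d*d on flat R^1
half = sp.Rational(1, 2)

# Left side: (Delta - [gr log delta]) psi
lhs = lap(psi) - sp.diff(sp.log(delta), x) * sp.diff(psi, x)
# Right side: delta^{-1/2} Delta(delta^{1/2} psi) - delta^{-1/2} Delta(delta^{1/2}) psi
rhs = (delta**(-half) * lap(delta**half * psi)
       - delta**(-half) * lap(delta**half) * psi)
assert sp.simplify(lhs - rhs) == 0
```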
Proposition 1.5.

\[ {\operatorname{div}}_{\Omega }\xi = {\delta }^{-1}\sum {\partial }_{i}\left( {\delta {\varphi }_{i}}\right) \]

\[ = \sum {\partial }_{i}{\varphi }_{i} + \sum \left( {{\partial }_{i}\log \delta }\right) {\varphi }_{i}. \]

In matrix form,

\[ {\operatorname{div}}_{\Omega }\xi = {}^{t}{\mathbf{D}}_{\Omega }{\Phi }_{\xi }\;\text{ or also }\;{\operatorname{div}}_{\Omega } = {\delta }^{-1}{}^{t}D \circ \delta . \]
Proof. We have

\[ \left( {\Omega \circ \xi }\right) \left( {{u}_{1},\ldots ,{\widehat{u}}_{i},\ldots ,{u}_{n}}\right) = \Omega \left( {\xi ,{u}_{1},\ldots ,{\widehat{u}}_{i},\ldots ,{u}_{n}}\right) \]

\[ = {\left( -1\right) }^{i - 1}\Omega \left( {{u}_{1},\ldots ,\xi ,\ldots ,{u}_{n}}\right) \]

\[ = {\left( -1\right) }^{i - 1}\delta {\varphi }_{i}. \]

Hence

\[ \Omega \circ \xi = \sum {\left( -1\right) }^{i - 1}\delta {\varphi }_{i}\,d{x}_{1} \land \cdots \land \widehat{d{x}_{i}} \land \cdots \land d{x}_{n}, \]

and since \( {dd}{x}_{j} = 0 \) for all \( j \), we obtain

\[ d\left( {\Omega \circ \xi }\right) = \sum {\left( -1\right) }^{i - 1}{\partial }_{i}\left( {\delta {\varphi }_{i}}\right) d{x}_{i} \land d{x}_{1} \land \cdots \land \widehat{d{x}_{i}} \land \cdots \land d{x}_{n} \]

\[ = \sum {\partial }_{i}\left( {\delta {\varphi }_{i}}\right) d{x}_{1} \land \cdots \land d{x}_{n} \]

\[ = {\delta }^{-1}\sum {\partial }_{i}\left( {\delta {\varphi }_{i}}\right) \Omega . \]

This proves the proposition.
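As a sanity check of the formula \( {\operatorname{div}}_{\Omega }\xi = {\delta }^{-1}\sum {\partial }_{i}\left( {\delta {\varphi }_{i}}\right) \) (our example, assuming SymPy): in polar coordinates on \( {\mathbf{R}}^{2} \) the metric is \( \operatorname{diag}\left( {1,{r}^{2}}\right) \), so \( \delta = r \), and the formula recovers the familiar divergences of the radial and rotation fields.

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
delta = r   # sqrt(det g) for the polar metric diag(1, r^2)

def div_polar(phi_r, phi_th):
    # Proposition 1.5: div_Omega(xi) = delta^{-1} * sum_i d_i(delta * phi_i)
    return sp.simplify((sp.diff(delta * phi_r, r)
                        + sp.diff(delta * phi_th, th)) / delta)

# Radial field d/dr has divergence 1/r; the rotation field d/dtheta has 0
assert div_polar(1, 0) == 1/r
assert div_polar(0, 1) == 0
```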
Proposition 1.6. Let \( \operatorname{gr}\left( \psi \right) = \sum {\varphi }_{i}{u}_{i} \) . Let \( g\left( x\right) \) be the \( n \times n \) matrix representing the metric at a point \( x \) . Then the coordinate vector of \( \operatorname{gr}\left( \psi \right) \) is\n\n\[ \Phi = \left( \begin{matrix} {\varphi }_{1} \\ \vdots \\ {\varphi }_{n} \end{matrix}\right) = g{\left( x\right) }^{-1}\left( \begin{matrix} {\partial }_{1}\psi \\ \vdots \\ {\partial }_{n}\psi \end{matrix}\right) \]\n\nIn other words,\n\n\[ \Phi = {g}^{-1}\partial \psi \]\n\nwhere \( \partial \) is the vector differential operator such that \( {}^{t}\partial = \left( {{\partial }_{1},\ldots ,{\partial }_{n}}\right) \) .
Proof. By definition,\n\n\[ {\left\langle \operatorname{gr}\left( \psi \right) ,{u}_{j}\right\rangle }_{g} = \left( {d\psi }\right) \left( {u}_{j}\right) = {\partial }_{j}\psi \]\n\nThe left side is equal to \( \left\langle {\operatorname{gr}\left( \psi \right), g\left( x\right) {u}_{j}}\right\rangle \) at a point \( x \) . Note that here the scalar product is the usual dot product on \( {\mathbf{R}}^{n} \), without the subscript \( g \) . The formula of the proposition then follows at once.
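The formula \( \Phi = {g}^{-1}\partial \psi \) is easy to check in coordinates (our example, assuming SymPy), again with the polar metric on \( {\mathbf{R}}^{2} \):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
psi = sp.Function('psi')(r, th)

g = sp.Matrix([[1, 0], [0, r**2]])   # polar-coordinate metric on R^2
partial = sp.Matrix([sp.diff(psi, r), sp.diff(psi, th)])

Phi = g.inv() * partial              # coordinate vector of gr(psi)
assert Phi[0] == sp.diff(psi, r)
assert sp.simplify(Phi[1] - sp.diff(psi, th) / r**2) == 0
```

These are the familiar components \( {\partial }_{r}\psi \) and \( {r}^{-2}{\partial }_{\theta }\psi \) of the polar gradient.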
Proposition 1.7. Let \( f \) and \( \psi \) be functions, and let \( \operatorname{gr}\left( \psi \right) = \sum {\varphi }_{j}{u}_{j} \) as in Proposition 1.6. Then

\[ \operatorname{gr}\left( \psi \right) \cdot f = \mathop{\sum }\limits_{{j = 1}}^{n}\left( {{\partial }_{j}f}\right) {\varphi }_{j}. \]
Proof. Since \( {u}_{j} \cdot f = {\partial }_{j}f \), the formula is clear.
Proposition 1.8. On an open set of \( {\mathbf{R}}^{n} \), with metric matrix \( g \), \( \delta = {\left( \det g\right) }^{1/2} \), and Laplacian \( {\mathbf{\Delta }}_{g} \), we have

\[ - {\mathbf{\Delta }}_{g} = {\operatorname{div}}_{g}{\operatorname{gr}}_{g} = {}^{t}{\mathbf{D}}_{g}{g}^{-1}\partial \]

\[ = {\delta }^{-1}{}^{t}\partial \,\delta {g}^{-1}\partial . \]
Here, \( {\mathbf{D}}_{g} \) abbreviates \( {\mathbf{D}}_{{\Omega }_{g}} \), and \( {\operatorname{div}}_{g} \) abbreviates \( {\operatorname{div}}_{{\Omega }_{g}} \). Putting all the indices in, we get\n\n(1)\n\n\[ - {\mathbf{\Delta }}_{g}f = {\delta }^{-1}\mathop{\sum }\limits_{i}{\partial }_{i}\left( {\delta \mathop{\sum }\limits_{j}{g}^{ij}{\partial }_{j}f}\right) \]\n\nwhere in classical notation, \( {g}^{-1}\left( x\right) \) is the matrix \( \left( {{g}^{ij}\left( x\right) }\right) \) for \( x \in {\mathbf{R}}^{n} \). Using the rule for the derivative of a product, we write (1) in the form\n\n(2)\n\n\[ - {\mathbf{\Delta }}_{g}f = \mathop{\sum }\limits_{{i, j = 1}}^{n}{g}^{ij}{\partial }_{i}{\partial }_{j}f + {L}_{g}f \]\n\nwhere \( {L}_{g} \) is a first-order differential operator, that is a linear combination of the partials \( {\partial }_{1},\ldots ,{\partial }_{n} \) with coefficients which are functions, depending on \( g \). From this expression, we see that the matrix \( {g}^{-1} = \left( {g}^{ij}\right) \) is the matrix of the second-order term, quadratic in the partials \( {\partial }_{i},{\partial }_{j} \). Hence we obtain:
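Formula (1) can be checked against the classical polar Laplacian on \( {\mathbf{R}}^{2} \) (our example, assuming SymPy): with \( g = \operatorname{diag}\left( {1,{r}^{2}}\right) \) and \( \delta = r \), formula (1) must reproduce \( {f}_{rr} + {r}^{-1}{f}_{r} + {r}^{-2}{f}_{\theta \theta } \).

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
f = sp.Function('f')(r, th)

g = sp.Matrix([[1, 0], [0, r**2]])   # polar coordinates on R^2
ginv = g.inv()
delta = sp.sqrt(g.det())             # delta = r
coords = (r, th)

# Formula (1):  -Delta_g f = delta^{-1} sum_i d_i( delta sum_j g^{ij} d_j f )
minus_lap = sum(
    sp.diff(delta * sum(ginv[i, j] * sp.diff(f, coords[j]) for j in range(2)),
            coords[i])
    for i in range(2)) / delta

classical = sp.diff(f, r, 2) + sp.diff(f, r) / r + sp.diff(f, th, 2) / r**2
assert sp.simplify(minus_lap - classical) == 0
```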
Theorem 1.9. Let \( X \) be a Riemannian manifold. Then the Laplacian determines the metric, i.e. if two Riemannian metrics have the same Laplacian, they are equal. If \( F : X \rightarrow Y \) is a differential isomorphism of Riemannian manifolds, and \( F \) maps \( {\mathbf{\Delta }}_{X} \) on \( {\mathbf{\Delta }}_{Y} \), that is \( F \) commutes with the Laplacians, then \( F \) is an isometry.
Note that the second statement about the differential isomorphism is just a piece of functorial abstract nonsense, in light of the first statement. Indeed, \( F \) maps the metric \( {g}_{X} \) to a metric \( {F}_{ * }{g}_{X} \) on \( Y \), and similarly for the Laplacian. By assumption, \( {F}_{ * }{\mathbf{\Delta }}_{X} = {\mathbf{\Delta }}_{Y} \). Hence \( {\mathbf{\Delta }}_{Y} \) is the Laplacian of \( {g}_{Y} \) and of \( {F}_{ * }{g}_{X} \), so \( {g}_{Y} = {F}_{ * }{g}_{X} \) by the first statement in the theorem.
Theorem 2.1. Let \( D \) be the metric covariant derivative. Then\n\n\[ \n{D}_{\xi }{\operatorname{vol}}_{g} = 0 \n\]\n\nfor all vector fields \( \xi \) .
Proof. Let \( \Omega = {\operatorname{vol}}_{g} \) be the Riemannian volume form. If \( \left\{ {{\xi }_{1},\ldots ,{\xi }_{n}}\right\} \) is an orthonormal frame, then \( \Omega = \pm {\xi }_{1}^{ \vee } \land \cdots \land {\xi }_{n}^{ \vee } \) and \( \langle \Omega ,\Omega {\rangle }_{g} = 1 \). Differentiating this relation with respect to \( \xi \) yields 0, and hence

\[ 0 = 2{\left\langle {D}_{\xi }\Omega ,\Omega \right\rangle }_{g}. \]

But \( {D}_{\xi }\Omega = {\varphi \Omega } \) for some function \( \varphi \), so \( 0 = {2\varphi }\langle \Omega ,\Omega {\rangle }_{g} \), whence \( \varphi = 0 \), which proves the theorem.
Yes
Theorem 2.2. Let \( {\xi }_{1},\ldots ,{\xi }_{n} \) be an orthonormal frame of vector fields, and let \( \xi \) be a vector field. Then\n\n\[ \n\operatorname{div}\xi = \mathop{\sum }\limits_{{i = 1}}^{n}{\left\langle {D}_{{\xi }_{i}}\xi ,{\xi }_{i}\right\rangle }_{g} = \operatorname{tr}\left( {D\xi }\right)\n\]\n\nIn particular, for \( \lambda \in {\mathcal{A}}^{1}\left( X\right) \) we have\n\n\[ \n\operatorname{div}{\lambda }^{ \vee } = \operatorname{tr}\left( {D\lambda }\right)\n\]
Proof. Let \( \Omega = {\operatorname{vol}}_{g} \) be the volume form. By COVD 6 of Chapter VIII, §1, and Proposition 2.1, we get\n\n\[ \nd\left( {\Omega \circ \xi }\right) \left( {{\xi }_{1},\ldots ,{\xi }_{n}}\right) = \mathop{\sum }\limits_{{i = 1}}^{n}{\left( -1\right) }^{i - 1}{D}_{{\xi }_{i}}\left( {\Omega \circ \xi }\right) \left( {{\xi }_{1},\ldots ,{\widehat{\xi }}_{i},\ldots ,{\xi }_{n}}\right)\n\]\n\n\[ \n= \mathop{\sum }\limits_{{i = 1}}^{n}{\left( -1\right) }^{i - 1}\left( {\Omega \circ {D}_{{\xi }_{i}}\xi }\right) \left( {{\xi }_{1},\ldots ,{\widehat{\xi }}_{i},\ldots ,{\xi }_{n}}\right)\n\]\n\n\[ \n= \mathop{\sum }\limits_{{i = 1}}^{n}\Omega \left( {{\xi }_{1},\ldots ,{D}_{{\xi }_{i}}\xi ,\ldots ,{\xi }_{n}}\right)\n\]\n\nand since \( {D}_{{\xi }_{i}}\xi \) has the Fourier expression \( {D}_{{\xi }_{i}}\xi = \mathop{\sum }\limits_{j}{\left\langle {D}_{{\xi }_{i}}\xi ,{\xi }_{j}\right\rangle }_{g}{\xi }_{j} \),\n\n\[ \n= \mathop{\sum }\limits_{{i = 1}}^{n}{\left\langle {\mathbf{D}}_{{\xi }_{i}}\xi ,{\xi }_{i}\right\rangle }_{g}\mathbf{\Omega }\left( {{\xi }_{1},\ldots ,{\xi }_{n}}\right)\n\]\n\nBut also \( d\left( {\Omega \circ \xi }\right) \left( {{\xi }_{1},\ldots ,{\xi }_{n}}\right) = \left( {\operatorname{div}\xi }\right) {\operatorname{vol}}_{g}\left( {{\xi }_{1},\ldots ,{\xi }_{n}}\right) \). Hence\n\n\[ \n\operatorname{div}\xi = \mathop{\sum }\limits_{{i = 1}}^{n}{\left\langle {D}_{{\xi }_{i}}\xi ,{\xi }_{i}\right\rangle }_{g}\n\]\n\nwhich proves the first formula. The second is a mere rephrasing, applied to the vector field \( {\lambda }^{ \vee } \).
Yes
Corollary 2.4. Let \( {\xi }_{1},\ldots ,{\xi }_{n} \) be an orthonormal frame as in Theorem 2.2. Let \( \varphi \) be a function. Then\n\n\[ \n{\Delta \varphi } = - \operatorname{tr}\left( {Dd\varphi }\right) = - \mathop{\sum }\limits_{{i = 1}}^{n}\left\langle {{D}_{{\xi }_{i}}{d\varphi },{\xi }_{i}}\right\rangle = - \mathop{\sum }\limits_{{i = 1}}^{n}{\left\langle {D}_{{\xi }_{i}}\left( \operatorname{grad}\varphi \right) ,{\xi }_{i}\right\rangle }_{g}. \n\]\n\nIf \( \left\{ {{u}_{1},\ldots ,{u}_{n}}\right\} \) is an orthonormal basis of the tangent space \( {T}_{x}X \) at some point \( x \in X \), and \( {\alpha }_{i} \) is the geodesic with \( {\alpha }_{i}\left( 0\right) = x \) and \( {\alpha }_{i}^{\prime }\left( 0\right) = {u}_{i} \), then\n\n\[ \n\mathbf{\Delta }\varphi \left( x\right) = - \mathop{\sum }\limits_{{i = 1}}^{n}{\left( \varphi \circ {\alpha }_{i}\right) }^{\prime \prime }\left( 0\right) \n\]
Proof. The first assertion comes from applying Theorem 2.2 to \( \lambda = {d\varphi } \) . The second assertion then follows by using Corollary 5.6 of Chapter VIII.
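In the flat case the second formula of Corollary 2.4 can be tested directly: in \( {\mathbf{R}}^{2} \) the geodesics through a point are straight lines, so the formula reduces to minus the sum of second directional derivatives. A small symbolic check (assuming SymPy; the sample function is arbitrary):

```python
import sympy as sp

t, X, Y = sp.symbols('t X Y')
x = (1, 2)                  # base point in R^2 (flat metric, identity chart)
frame = [(1, 0), (0, 1)]    # orthonormal basis u_1, u_2 of the tangent space
phi = X**3 + X*Y**2         # an arbitrary sample C^2 function

# In R^2 the geodesic with alpha_i(0) = x and alpha_i'(0) = u_i is the
# straight line x + t u_i, so Delta(phi)(x) = -sum_i (phi o alpha_i)''(0).
total = 0
for u1, u2 in frame:
    along = phi.subs({X: x[0] + t*u1, Y: x[1] + t*u2})
    total += sp.diff(along, t, 2).subs(t, 0)

# Compare with minus the flat Laplacian -(phi_XX + phi_YY) at x
flat = (sp.diff(phi, X, 2) + sp.diff(phi, Y, 2)).subs({X: x[0], Y: x[1]})
assert -total == -flat
```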
No
Proposition 2.5. Let \( \varphi \) be a \( {C}^{2} \) function depending only on the \( g \) -distance \( r \) from a point \( x \), say \( \varphi \left( y\right) = f\left( {r\left( y\right) }\right) \) . Let \( \alpha = {\alpha }_{1} \) be the unique geodesic from \( x \) to \( y \neq x \), parametrized by arc length, and let \( {e}_{1} = {\alpha }^{\prime }\left( r\right) \in {T}_{y}X \) . Let \( {e}_{2},\ldots ,{e}_{n} \) be unit vectors in \( {T}_{y}X \) such that \( \left\{ {{e}_{1},\ldots ,{e}_{n}}\right\} \) is an orthonormal basis of \( {T}_{y}X \) . Let \( {\eta }_{i}\left( {i = 2,\ldots, n}\right) \) be the Jacobi lift of \( \alpha \) such that \[ {\eta }_{i}\left( 0\right) = 0\;\text{ and }\;{\eta }_{i}\left( r\right) = {e}_{i}. \] Then \[ {\Delta \varphi }\left( y\right) = - {f}^{\prime \prime }\left( r\right) - {f}^{\prime }\left( r\right) \mathop{\sum }\limits_{{i = 2}}^{n}{\left\langle {D}_{{\alpha }^{\prime }}{\eta }_{i}\left( r\right) ,{\eta }_{i}\left( r\right) \right\rangle }_{g}. \]
Proof. Let \( {\beta }_{i}\left( {i = 1,\ldots, n}\right) \) be the geodesic from \( y \) such that \[ {\beta }_{i}\left( 0\right) = y\;\text{ and }\;{\beta }_{i}^{\prime }\left( 0\right) = {e}_{i}. \] Observe that \( {\beta }_{1}\left( t\right) = {\alpha }_{1}\left( {r + t}\right) \) for small \( t \), by the uniqueness of the integral curve of the corresponding differential equation. We apply Corollary 2.4 to the Laplacian at \( y \), and the geodesics \( {\beta }_{i}\left( {i = 1,\ldots, n}\right) \) to get \[ \mathbf{\Delta }\varphi \left( y\right) = - \mathop{\sum }\limits_{{i = 1}}^{n}{\left( \varphi \circ {\beta }_{i}\right) }^{\prime \prime }\left( 0\right) \] Since \( {\beta }_{1}\left( t\right) = {\alpha }_{1}\left( {r + t}\right) \), we can split off the first term, to obtain \[ \mathbf{\Delta }\varphi \left( y\right) = - {f}^{\prime \prime }\left( r\right) - \mathop{\sum }\limits_{{i = 2}}^{n}{\left( \varphi \circ {\beta }_{i}\right) }^{\prime \prime }\left( 0\right) . \] Let \( {\alpha }_{i, t} \) be the unique geodesic from \( x \) to \( {\beta }_{i}\left( t\right) \) (for small \( t \) ), parametrized by arc length. Thus \( {\alpha }_{i, t} \) is what we called the variation of \( \alpha \) at its end point, in the direction of \( {e}_{i} \), for \( i = 2,\ldots, n \) . Then by Proposition 3.3 of Chapter IX, Proposition 1.9 of Chapter XI, and the fact that \[ \left( {\varphi \circ {\beta }_{i}}\right) \left( t\right) = f\left( {L\left( {\alpha }_{i, t}\right) }\right) \] we obtain \[ {\left( \varphi \circ {\beta }_{i}\right) }^{\prime \prime }\left( 0\right) = {f}^{\prime }\left( r\right) {\left\langle {D}_{{\alpha }^{\prime }}{\eta }_{i}\left( r\right) ,{\eta }_{i}\left( r\right) \right\rangle }_{g}, \] which proves our proposition.
Yes
Proposition 2.6. Let \( \delta = {\left( \det g\right) }^{1/2} \), and let \( \xi = \mathop{\sum }\limits_{i}{\varphi }_{i}{u}_{i} \) be a vector field, expressed in the chart \( U \) . For each \( j \) we have\n\n\[ \n{\partial }_{j}\log \delta = - \mathop{\sum }\limits_{k}\left\langle {{B}_{U}\left( {{u}_{j},{u}_{k}}\right) ,{u}_{k}}\right\rangle \n\]\n\nand\n\n\[ \n\operatorname{div}\xi = \operatorname{tr}\left( {D\xi }\right) = \mathop{\sum }\limits_{i}{\partial }_{i}{\varphi }_{i} - \mathop{\sum }\limits_{{i, k}}{\varphi }_{i}\left\langle {{B}_{U}\left( {{u}_{i},{u}_{k}}\right) ,{u}_{k}}\right\rangle \n\]\n\n\[ \n= \mathop{\sum }\limits_{i}{\partial }_{i}{\varphi }_{i} - \mathop{\sum }\limits_{k}\left\langle {{B}_{U}\left( {\xi ,{u}_{k}}\right) ,{u}_{k}}\right\rangle \n\]
Proof. The second formula for the trace comes from the definition of the trace and the definition of \( {D\xi } \) . The first formula then follows componentwise from Proposition 1.4. This concludes the proof.
No
Proposition 3.1. Let \( u \) be a unit vector in \( {T}_{x}X \) and let \( \alpha \) be the geodesic parametrized by arc length such that \( \alpha \left( 0\right) = x \) and \( {\alpha }^{\prime }\left( 0\right) = u \) . Put \( u = {w}_{1} \) and let \( \left\{ {u,{w}_{2},\ldots ,{w}_{n}}\right\} \) be a basis of \( {T}_{x}X \) such that \( {w}_{i} \bot u \) for \( i = 2,\ldots, n \) . Let \( {\eta }_{i}\left( {i = 2,\ldots, n}\right) \) be the Jacobi lift of \( \alpha \) such that\n\n\[ \n{\eta }_{i}\left( 0\right) = 0\;\text{ and }\;{D}_{{\alpha }^{\prime }}{\eta }_{i}\left( 0\right) = {w}_{i}.\n\]\n\nThen\n\n\[ \n{r}^{n - 1}J\left( {r, u}\right) = \frac{\det \left( {{\eta }_{2}\left( r\right) ,\ldots ,{\eta }_{n}\left( r\right) }\right) }{\det \left( {{w}_{2},\ldots ,{w}_{n}}\right) } = \frac{\mathop{\det }\limits^{{1/2}}{\left\langle {\eta }_{i}\left( r\right) ,{\eta }_{j}\left( r\right) \right\rangle }_{g}}{\det \left( {{w}_{2},\ldots ,{w}_{n}}\right) }.\n\]\n\nThe determinant on the right is taken for \( i, j = 2,\ldots, n \) .
Proof. Observe that we may also use \( {\eta }_{1} \), which is such that \( {\eta }_{1}\left( t\right) = t{\alpha }^{\prime }\left( t\right) \) . The equality between the two expressions on the right of the equality sign follows from Proposition 1.1. Let \( f = {\exp }_{x} \) . Then for any vectors \( {w}_{1},\ldots ,{w}_{n} \in {T}_{x}X \) we have\n\n\[ \n\left( {{\exp }_{x}^{ * }{\operatorname{vol}}_{g}}\right) \left( v\right) \left( {{w}_{1},\ldots ,{w}_{n}}\right) = {\operatorname{vol}}_{g}\left( {{Tf}\left( v\right) {w}_{1},\ldots ,{Tf}\left( v\right) {w}_{n}}\right)\n\]\n\n\[ \n= \det \left( {{Tf}\left( v\right) {w}_{1},\ldots ,{Tf}\left( v\right) {w}_{n}}\right)\n\]\n\n\[ \n= J\left( v\right) \det \left( {{w}_{1},\ldots ,{w}_{n}}\right) .\n\]\n\nWe put \( v = r{w}_{1} = {ru} \) . By Theorem 3.1 of Chapter IX we know that\n\n\[ \nT{\exp }_{x}\left( {ru}\right) {w}_{i} = \frac{1}{r}{\eta }_{i}\left( r\right)\n\]\n\nThen for \( i = 1 \), \( {\eta }_{1}\left( r\right) /r = {\alpha }^{\prime }\left( r\right) \), which is a unit vector perpendicular to the others. Thus to compute the volume of the parallelotope in euclidean \( n \) -space, we may disregard this vector, and simply compute the volume of the projection on \( \left( {n - 1}\right) \) -space, and thus we may compute only the \( \left( {n - 1}\right) \times \left( {n - 1}\right) \) determinant of the vectors\n\n\[ \n\det \left( {{\eta }_{2}\left( r\right) /r,\ldots ,{\eta }_{n}\left( r\right) /r}\right) = \frac{1}{{r}^{n - 1}}\det \left( {{\eta }_{2}\left( r\right) ,\ldots ,{\eta }_{n}\left( r\right) }\right) ,\n\]\n\nfrom which the proposition falls out.
Yes
Corollary 3.2. If in Proposition 3.1 all the vectors \( {w}_{i} \) are unit vectors \( {u}_{i} \) such that \( \left\{ {{u}_{1},\ldots ,{u}_{n}}\right\} \) is an orthonormal basis of \( {T}_{x}X \), and \( u = {u}_{1} \), then we have simply\n\n\[ \n{r}^{n - 1}J\left( {r, u}\right) = \mathop{\det }\limits^{{1/2}}{\left\langle {\eta }_{i}\left( r\right) ,{\eta }_{j}\left( r\right) \right\rangle }_{g}.\n\]
Proof. Immediate from Proposition 3.1, since \( {u}_{2},\ldots ,{u}_{n} \) form an orthonormal basis of the orthogonal complement of \( u \) in \( {T}_{x}X \), so that \( \det \left( {{u}_{2},\ldots ,{u}_{n}}\right) = \pm 1 \), and \( = 1 \) for the appropriate orientation.
Yes
Corollary 3.3. Again with an orthonormal basis \( \left\{ {{u}_{1},\ldots ,{u}_{n}}\right\} \) of \( {T}_{x}X \), let \( u = {u}_{1} \) and\n\n\[ \n\operatorname{Ric}\left( {u, u}\right) = \mathop{\sum }\limits_{{i = 2}}^{n}{R}_{2}\left( {u,{u}_{i}}\right) .\n\]\n\nThen\n\n\[ \n{\exp }_{x}^{ * }{\operatorname{vol}}_{g}\left( {ru}\right) = \left\lbrack {1 + \operatorname{Ric}\left( {u, u}\right) \frac{{r}^{2}}{3!} + O\left( {r}^{3}\right) }\right\rbrack {\operatorname{vol}}_{\mathrm{{euc}}}\left( {ru}\right) \;\text{ for }r \rightarrow 0.\n\]
Proof. By Corollary 3.2, \( J\left( {r, u}\right) \) is \( \mathop{\det }\limits^{{1/2}}{\left\langle {\eta }_{i}\left( r\right) /r,{\eta }_{j}\left( r\right) /r\right\rangle }_{g} \) with the determinant taken for \( i, j = 1,\ldots, n \) or \( i, j = 2,\ldots, n \) . Using the asymptotic expansion of Chapter IX, Proposition 5.4 and the orthonormality, one gets that\n\n\[ \nJ\left( {r, u}\right) = \mathop{\prod }\limits_{{i = 2}}^{n}{\left( 1 + 2{R}_{2}\left( u,{u}_{i}\right) \frac{{r}^{2}}{3!}\right) }^{1/2} + O\left( {r}^{3}\right) \;\text{ for }r \rightarrow 0,\n\]\n\nwhich is immediately expanded to yield the corollary.
Yes
Corollary 3.4. Let \( {\exp }_{x} : {\mathbf{B}}_{x} \rightarrow X \) be the normal chart in \( X \) as at the beginning of the section, and \( y = {\exp }_{x}\left( {ru}\right) \) with \( {ru} \in {\mathbf{B}}_{x} \), and some unit vector \( u \) . Let \( \alpha \left( s\right) = {\exp }_{x}\left( {su}\right) \) and let \( {e}_{1} = {\alpha }^{\prime }\left( r\right) \) . Complete \( {e}_{1} \) to an orthonormal basis \( \left\{ {{e}_{1},\ldots ,{e}_{n}}\right\} \) of \( {T}_{y}X \), and let \( {\eta }_{i} \) be the Jacobi lift of \( \alpha \) (depending on \( y \), or \( r \) if \( u \) is viewed as fixed), such that\n\n\[ \n{\eta }_{i}\left( 0\right) = 0\;\text{ and }\;{\eta }_{i}\left( r\right) = {e}_{i}\;\text{ for }i = 2,\ldots, n.\n\]\n\nLet \( {J}^{\prime }\left( {s, u}\right) = {\partial }_{1}J\left( {s, u}\right) \) . Then\n\n\[ \n{J}^{\prime }/J\left( {r, u}\right) + \frac{n - 1}{r} = \mathop{\sum }\limits_{{i = 2}}^{n}{\left\langle {D}_{{\alpha }^{\prime }}{\eta }_{i}\left( r\right) ,{\eta }_{i}\left( r\right) \right\rangle }_{g}.\n\]
Proof. In the present case, \( {D}_{{\alpha }^{\prime }}{\eta }_{i}\left( 0\right) = {w}_{i} \) is whatever it is, but we observe that the determinant \( \det \left( {{w}_{2},\ldots ,{w}_{n}}\right) \) is constant, so disappears in taking the logarithmic derivative of the expression in Proposition 3.1. We also observe that in the present case,\n\n\[ \n{\left\langle {\eta }_{i}\left( r\right) ,{\eta }_{j}\left( r\right) \right\rangle }_{g} = {\delta }_{ij}\n\]\n\nso the matrix formed with these scalar products is the unit matrix. Taking the logarithmic derivative of one side, we obtain\n\n\[ \n{J}^{\prime }/J\left( {r, u}\right) + \left( {n - 1}\right) /r\n\]\n\nLet \( {h}_{ij} = {\left\langle {\eta }_{i},{\eta }_{j}\right\rangle }_{g} \), and let \( H = \left( {h}_{ij}\right) \) . On the other side, we obtain the logarithmic derivative\n\n\[ \n\frac{1}{2}\frac{{\left( \det H\right) }^{\prime }}{\det H}.\n\]\nLet \( {H}_{2},\ldots ,{H}_{n} \) be the columns of \( H \) . By Leibniz’s rule, we know that\n\n\[ \n{\left( \det H\right) }^{\prime } = \mathop{\sum }\limits_{{i = 2}}^{n}\det \left( {{H}_{2},\ldots ,{H}_{i}^{\prime },\ldots ,{H}_{n}}\right)\n\]\n\nObserve that\n\n\[ \n{\left\langle {\eta }_{i},{\eta }_{j}\right\rangle }_{g}^{\prime } = {\left\langle {D}_{{\alpha }^{\prime }}{\eta }_{i},{\eta }_{j}\right\rangle }_{g} + {\left\langle {\eta }_{i},{D}_{{\alpha }^{\prime }}{\eta }_{j}\right\rangle }_{g}.\n\]\n\nand in particular,\n\n\[ \n{\left\langle {\eta }_{i},{\eta }_{i}\right\rangle }_{g}^{\prime } = 2{\left\langle {D}_{{\alpha }^{\prime }}{\eta }_{i},{\eta }_{i}\right\rangle }_{g}\n\]\n\nWhat we want follows from a purely algebraic property of determinants, namely:\n\nLemma 3.5. Let \( A = \left( {{A}^{1},\ldots ,{A}^{m}}\right) \) be a non-singular \( m \times m \) matrix over a field, where \( {A}^{1},\ldots ,{A}^{m} \) are the columns of \( A \) . Let \( B = \left( {{B}^{1},\ldots ,{B}^{m}}\right) \) be any \( m \times m \) matrix over the field. 
Then\n\n\[ \n\mathop{\sum }\limits_{i}\det \left( {{A}^{1},\ldots ,{B}^{i},\ldots ,{A}^{m}}\right) = \left( {\det A}\right) \operatorname{tr}\left( {{A}^{-1}B}\right) .\n\]\n\nProof. Let \( X = \left( {x}_{ij}\right) \) be the matrix such that\n\n\[ \n{x}_{1i}{A}^{1} + \cdots + {x}_{mi}{A}^{m} = {B}^{i}\;\text{ for }\;i = 1,\ldots, m.\n\]\n\nBy Cramer's rule,\n\n\[ \n{x}_{ii}\det \left( A\right) = \det \left( {{A}^{1},\ldots ,{B}^{i},\ldots ,{A}^{m}}\right) .\n\]\n\nBut \( {AX} = B \) so \( X = {A}^{-1}B \), and the lemma follows.\n\nWe apply the lemma to the case when \( A = H\left( r\right) \) is the unit matrix and \( {B}^{j} = {H}_{j}^{\prime }\left( r\right) \) to conclude the proof.
Yes
Lemma 3.5. Let \( A = \left( {{A}^{1},\ldots ,{A}^{m}}\right) \) be a non-singular \( m \times m \) matrix over a field, where \( {A}^{1},\ldots ,{A}^{m} \) are the columns of \( A \) . Let \( B = \left( {{B}^{1},\ldots ,{B}^{m}}\right) \) be any \( m \times m \) matrix over the field. Then\n\n\[ \mathop{\sum }\limits_{i}\det \left( {{A}^{1},\ldots ,{B}^{i},\ldots ,{A}^{m}}\right) = \left( {\det A}\right) \operatorname{tr}\left( {{A}^{-1}B}\right) . \]
Proof. Let \( X = \left( {x}_{ij}\right) \) be the matrix such that\n\n\[ {x}_{1i}{A}^{1} + \cdots + {x}_{mi}{A}^{m} = {B}^{i}\;\text{ for }\;i = 1,\ldots, m. \]\n\nBy Cramer's rule,\n\n\[ {x}_{ii}\det \left( A\right) = \det \left( {{A}^{1},\ldots ,{B}^{i},\ldots ,{A}^{m}}\right) . \]\n\nBut \( {AX} = B \) so \( X = {A}^{-1}B \), and the lemma follows.
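Lemma 3.5 is purely algebraic and easy to test numerically. The sketch below (assuming NumPy; the matrices are random) checks the identity for a \( 4 \times 4 \) example:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4
A = rng.standard_normal((m, m))   # non-singular with probability 1
B = rng.standard_normal((m, m))

# Left side: sum over i of det(A with its i-th column replaced by B's i-th)
lhs = 0.0
for i in range(m):
    Ai = A.copy()
    Ai[:, i] = B[:, i]
    lhs += np.linalg.det(Ai)

# Right side: det(A) * tr(A^{-1} B)
rhs = np.linalg.det(A) * np.trace(np.linalg.inv(A) @ B)
assert abs(lhs - rhs) < 1e-8
```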
Yes
Corollary 3.6. Let \( \varphi \) be a \( {C}^{2} \) function on a normal ball centered at the point \( x \in X \) . Suppose that \( \varphi \) depends only on the \( g \) -distance \( r \) from \( x \), say \( \varphi \left( y\right) = f\left( {r\left( y\right) }\right) \) . Let \( y = \exp \left( {ru}\right) \), with a unit vector \( u \) . Then\n\n\[ \mathbf{\Delta }\varphi \left( y\right) = - {f}^{\prime \prime }\left( r\right) - {f}^{\prime }\left( r\right) \left( {{J}^{\prime }/J\left( {r, u}\right) + \frac{n - 1}{r}}\right) . \]
Proof. Combine Corollary 3.4 with Proposition 2.5.
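Corollary 3.6 can be checked in the euclidean case, where \( {\exp }_{x} \) is the identity, \( J \equiv 1 \), and hence \( {J}^{\prime }/J = 0 \). For \( X = {\mathbf{R}}^{3} \) the formula reads \( \mathbf{\Delta }\varphi = - {f}^{\prime \prime } - \left( {2/r}\right) {f}^{\prime } \). A symbolic check (assuming SymPy, with the sample profile \( f\left( r\right) = {r}^{4} \)):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)
rho = sp.symbols('rho', positive=True)

# Sample radial profile f(rho) = rho^4, so phi = f(r) on R^3
f = rho**4
phi = f.subs(rho, r)

# Left side: geometers' Laplacian of phi in flat R^3
# (minus the sum of second partials)
lhs = -(sp.diff(phi, x, 2) + sp.diff(phi, y, 2) + sp.diff(phi, z, 2))

# Right side of Corollary 3.6 with n = 3 and J'/J = 0: -f''(r) - (2/r) f'(r)
rhs = (-sp.diff(f, rho, 2) - 2*sp.diff(f, rho)/rho).subs(rho, r)
assert sp.simplify(lhs - rhs) == 0
```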
No
Lemma 3.7. Let \( u \) be a unit vector in \( {T}_{x}X \) . Let \( \varphi \) be a \( {C}^{2} \) function on a normal ball centered at \( x \), and define the function \( {f}_{u} \) by\n\n\[ \n{f}_{u}\left( r\right) = \varphi \left( {{\exp }_{x}\left( {ru}\right) }\right) .\n\]\n\nThen\n\n\[ \n{f}_{u}^{\prime }\left( r\right) = \left( {{D}_{\mathbf{n}}\varphi }\right) \left( {{\exp }_{x}\left( {ru}\right) }\right) \n\]\n\nand\n\n\[ \n{f}_{u}^{\prime \prime }\left( r\right) = \left( {{D}_{\mathbf{n}}^{2}\varphi }\right) \left( {{\exp }_{x}\left( {ru}\right) }\right) .\n\]
Proof. Let \( y = {\exp }_{x}\left( {ru}\right) \) with some unit vector \( u \in {T}_{x}X \) . Let \( \alpha \) be the geodesic defined by \( \alpha \left( t\right) = {\exp }_{x}\left( {tu}\right) \) . Then\n\n\[ \n{f}_{u}^{\prime }\left( r\right) = \left( {T\varphi }\right) \left( y\right) T{\exp }_{x}\left( {ru}\right) u = \left( {T\varphi }\right) \left( y\right) {\alpha }^{\prime }\left( r\right) .\n\]\n\nBy the global Gauss lemma of Chapter IX, Proposition 3.2, \( {\alpha }^{\prime }\left( r\right) \) is precisely the unit normal vector \( \mathbf{n}\left( y\right) \) . Hence the right side of the above equation is the Lie derivative of \( \varphi \) in the direction of this unit normal vector, which is none other than \( \left( {{D}_{\mathbf{n}}\varphi }\right) \left( y\right) \) . This proves the first formula. The second comes by iterating the first, thereby completing the proof.
Yes
Theorem 3.8. Let \( \varphi \) be a \( {C}^{2} \) function on a normal ball centered at the point \( x \in X \) . Let \( {S}_{r}\left( x\right) \) for \( r > 0 \) be the Riemannian sphere of radius \( r \) centered at \( x \), and contained in the ball. Let \( {\mathbf{\Delta }}_{S} \) denote the Laplacian on \( S = {S}_{r}\left( x\right) \) . Let \( \mathbf{n} \) be the unit radial field from \( x \), and let \( u \) be a unit vector in \( {T}_{x}X \) . Then for \( y = {\exp }_{x}\left( {ru}\right) \) we have\n\n\[ \n{\mathbf{\Delta }}_{X}\varphi \left( y\right) = \left( {{\mathbf{\Delta }}_{S}{\varphi }_{S}}\right) \left( y\right) - \left( {{D}_{\mathbf{n}}^{2}\varphi }\right) \left( y\right) - \left( {{J}^{-1}{D}_{\mathbf{n}}J\left( {r, u}\right) + \frac{n - 1}{r}}\right) \left( {{D}_{\mathbf{n}}\varphi }\right) \left( y\right) .\n\]
Proof. We apply Proposition 2.5 of Chapter XIV, which decomposes the Laplacian into a tangential part relative to a submanifold, which we now take to be the sphere \( Y = S \) ; and a transversal part. The tangential part gives precisely the term \( {\mathbf{\Delta }}_{S}{\varphi }_{S} \) at \( y \) . For the transversal part, we apply Proposition 2.6 of Chapter XIV, which tells us that the value depends only on the restriction of \( \varphi \) to the normal manifold. But then, we can apply Lemma 3.7 and the formula which we found in Corollary 3.6 to conclude the proof.
No
For \( \varphi \in {L}_{a}^{r}\left( T\right) \), and \( v,{v}_{1},\ldots ,{v}_{n - r + 1} \in T \), we have\n\n\[ \left( {\varphi \circ v}\right) \land {v}_{1}^{ \vee } \land \cdots \land {v}_{n - r + 1}^{ \vee } \]\n\n\[ = \mathop{\sum }\limits_{{i = 1}}^{{n - r + 1}}{\left( -1\right) }^{r + i}\left\langle {{v}^{ \vee },{v}_{i}}\right\rangle \left( {\varphi \land {v}_{1}^{ \vee } \land \cdots \land \widehat{{v}_{i}^{ \vee }} \land \cdots \land {v}_{n - r + 1}^{ \vee }}\right) . \]
Proof. The basic formalism of forms tells us that the contraction with a vector is an anti-derivation on the algebra of forms (Chapter V, §5, CON 3). Since \( \varphi \land {v}_{1}^{ \vee } \land \cdots \land {v}_{n - r + 1}^{ \vee } \) has degree \( n + 1 \) and so is equal to 0 , we find\n\n\[ 0 = \left( {\varphi \land {v}_{1}^{ \vee } \land \cdots \land {v}_{n - r + 1}^{ \vee }}\right) \circ v \]\n\n\[ = \left( {\varphi \circ v}\right) \land {v}_{1}^{ \vee } \land \cdots \land {v}_{n - r + 1}^{ \vee } \]\n\n\[ + \mathop{\sum }\limits_{{i = 1}}^{{n - r + 1}}{\left( -1\right) }^{r + i - 1}\varphi \land {v}_{1}^{ \vee } \land \cdots \land \left( {{v}_{i}^{ \vee } \circ v}\right) \land \cdots \land {v}_{n - r + 1}^{ \vee }. \]\n\nWe observe that\n\n\[ {v}_{i}^{ \vee } \circ v = {\left\langle {v}_{i}, v\right\rangle }_{g} = {\left\langle v,{v}_{i}\right\rangle }_{g} = \left\langle {{v}^{ \vee },{v}_{i}}\right\rangle \]\nto conclude the proof of the lemma.
Yes
Proposition 4.2. Let \( \left\{ {{v}_{1},\ldots ,{v}_{n}}\right\} \) be an orthonormal basis of \( T \) . Let \( {\omega }_{1},\ldots ,{\omega }_{n} \) be the dual basis of 1-forms. Let \( I = \left( {{i}_{1},\ldots ,{i}_{r}}\right) \) with \( {i}_{1} < \cdots < {i}_{r} \) and let \( J = \left( {{j}_{1},\ldots ,{j}_{n - r}}\right) \) with \( {j}_{1} < \cdots < {j}_{n - r} \) be the complementary set such that \( \{ 1,\ldots, n\} \) is a permutation of \( \left( {I, J}\right) \) . Let \( \epsilon \left( {I, J}\right) \) be the sign of the permutation. Assume \( {v}_{1},\ldots ,{v}_{n} \) oriented. Let \( {\omega }_{I} = {\omega }_{{i}_{1}} \land \cdots \land {\omega }_{{i}_{r}} \), and similarly for \( {\omega }_{J} \) . Then\n\n\[ \n* {\omega }_{I} = \epsilon \left( {I, J}\right) {\omega }_{J}.\n\]
Proof. Directly from the definition of \( \Omega = {\Omega }_{g} \) we have that\n\n\[ \n{\Omega }_{g} = {\omega }_{1} \land \cdots \land {\omega }_{n} = {v}_{1}^{ \vee } \land \cdots \land {v}_{n}^{ \vee }.\n\]\nAt first, let \( J \) be an arbitrary sequence of \( n - r \) indices among \( \left( {1,\ldots, n}\right) \) . Then by S 3,\n\n\[ \n\left( {*{\omega }_{I}}\right) \left( {{v}_{{j}_{1}},\ldots ,{v}_{{j}_{n - r}}}\right) = * \left( {{\omega }_{I} \land {\omega }_{J}}\right)\n\]\n\nwhich is \( \neq 0 \) if and only if \( J \) is the complementary set, i.e. \( \left( {I, J}\right) \) is a permutation of \( \left( {1,\ldots, n}\right) \) . In this case, the right side of the above expression is simply \( \epsilon \left( {I, J}\right) * \Omega = \epsilon \left( {I, J}\right) \) . Alternatively, one may write\n\n\[ \n* {\omega }_{I} = \epsilon \left( {I, J}\right) {\omega }_{J}\n\]\n\nif \( \left( {I, J}\right) \) is a permutation of \( \left( {1,\ldots, n}\right) \), from which Proposition 4.2 follows.
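The sign \( \epsilon \left( {I, J}\right) \) is computable by counting inversions of the concatenated sequence \( \left( {I, J}\right) \). A minimal sketch (plain Python; the helper names are ours, not from the text):

```python
def perm_sign(p):
    # Sign of a permutation given as a tuple of distinct integers,
    # computed by counting inversions.
    inv = sum(1 for a in range(len(p))
              for b in range(a + 1, len(p)) if p[a] > p[b])
    return -1 if inv % 2 else 1

def hodge_index(I, n):
    # Complementary increasing multi-index J and the sign eps(I, J),
    # so that *omega_I = eps(I, J) omega_J for an oriented orthonormal basis.
    J = tuple(k for k in range(1, n + 1) if k not in I)
    return J, perm_sign(I + J)

# Examples in dimension 4:
assert hodge_index((1, 3), 4) == ((2, 4), -1)   # *(w1^w3) = -(w2^w4)
assert hodge_index((1, 2), 4) == ((3, 4), 1)    # *(w1^w2) = +(w3^w4)
```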
Yes
Proposition 4.3. The star operation commutes with every \( {D}_{\xi } \), i.e. for any vector field \( \xi \) and \( \varphi \in {\mathcal{A}}^{r}\left( X\right) \), we have\n\n\[ \n* {D}_{\xi }\varphi = {D}_{\xi } * \varphi .\n\]
Proof. For 0-forms (functions) and \( n \) -forms (functions times the volume form) the assertion is immediate by using Proposition 2.1, to the effect that \( {D}_{\xi }{\operatorname{vol}}_{g} = 0 \) . Now let \( \varphi \in \Gamma {L}_{a}^{r}\left( {TX}\right) \) . Then:\n\n\[ \n\left( {{D}_{\xi } * \varphi }\right) \left( {{\xi }_{1},\ldots ,{\xi }_{n - r}}\right) + \mathop{\sum }\limits_{{i = 1}}^{{n - r}}\left( {*\varphi }\right) \left( {{\xi }_{1},\ldots ,{D}_{\xi }{\xi }_{i},\ldots ,{\xi }_{n - r}}\right) \n\]\n\n\[ \n= {D}_{\xi }\left( {\left( {*\varphi }\right) \left( {{\xi }_{1},\ldots ,{\xi }_{n - r}}\right) }\right) \;\text{ [because }{D}_{\xi }\text{ is a derivation] } \n\]\n\n\[ \n= {D}_{\xi } * \left( {\varphi \land {\xi }_{1}^{ \vee } \land \cdots \land {\xi }_{n - r}^{ \vee }}\right) \;\text{ [by }\mathbf{{S3}}\text{ ] } \n\]\n\n\[ \n= \left( {*{D}_{\xi }}\right) \left( {\varphi \land {\xi }_{1}^{ \vee } \land \cdots \land {\xi }_{n - r}^{ \vee }}\right) \;\text{ [by the proposition for }n\text{-forms] } \n\]\n\n\[ \n= * \left( {{D}_{\xi }\varphi \land {\xi }_{1}^{ \vee } \land \cdots \land {\xi }_{n - r}^{ \vee }}\right) + \mathop{\sum }\limits_{{i = 1}}^{{n - r}} * \left( {\varphi \land {\xi }_{1}^{ \vee } \land \cdots \land {D}_{\xi }{\xi }_{i}^{ \vee } \land \cdots \land {\xi }_{n - r}^{ \vee }}\right) \n\]\n\n\[ \n= \left( {*{D}_{\xi }\varphi }\right) \left( {{\xi }_{1},\ldots ,{\xi }_{n - r}}\right) + \mathop{\sum }\limits_{{i = 1}}^{{n - r}}\left( {*\varphi }\right) \left( {{\xi }_{1},\ldots ,{D}_{\xi }{\xi }_{i},\ldots ,{\xi }_{n - r}}\right) , \n\]\n\nwhich proves the proposition.
Yes
Proposition 4.4. For \( \varphi ,\psi \in {\mathcal{A}}^{r}\left( X\right) \) we have\n\n\[ \n{d\varphi } \land * \psi = \varphi \land \left( {*{d}^{ * }\psi }\right) + d\left( {\varphi \land * \psi }\right) .\n\]
Proof. Immediate from the definition of \( {d}^{ * },\mathbf{S}\mathbf{6} \), and the basic formula for \( d \) of a wedge product (a graded derivation).
No
Proposition 4.5. Let \( {\xi }_{1},\ldots ,{\xi }_{n} \) be a frame of vector fields, and let \( {\xi }_{1}^{\prime },\ldots ,{\xi }_{n}^{\prime } \) be the dual frame, that is \( {\left\langle {\xi }_{i}^{\prime },{\xi }_{j}\right\rangle }_{g} = {\delta }_{ij} \) . Then for any form \( \varphi \in {\mathcal{A}}^{r}\left( X\right) \) we have\n\n\[ \n{d}^{ * }\varphi = - \mathop{\sum }\limits_{{i = 1}}^{n}\left( {{D}_{{\xi }_{i}}\varphi }\right) \circ {\xi }_{i}^{\prime }. \n\]
Proof. Proposition 1.1 of Chapter VIII gives us an expression for \( d\left( {*\varphi }\right) \) in terms of the frame. The dual frame is such that \( {\lambda }_{i}^{ \vee } = {\xi }_{i}^{\prime } \) . Then the formula of the proposition is an immediate consequence of \( \mathbf{S} \) 5.
No
Proposition 4.6. Let \( {\xi }_{1},\ldots ,{\xi }_{n} \) be an orthonormal frame. As an operator on 1 -forms, \( \Delta : {\mathcal{A}}^{1}\left( X\right) \rightarrow {\mathcal{A}}^{1}\left( X\right) \) is given by\n\n\[ \Delta = - \sum {D}_{{\xi }_{i}}^{2} - \text{ Ric. } \]\n\nWritten in terms of the variables, this means\n\n\[ \langle \mathbf{\Delta }\lambda ,\xi \rangle = - \mathop{\sum }\limits_{i}\left\langle {{D}_{{\xi }_{i}}{D}_{{\xi }_{i}}\lambda ,\xi }\right\rangle - \mathop{\sum }\limits_{i}\left\langle {\left( {{D}_{\xi }{D}_{{\xi }_{i}} - {D}_{{\xi }_{i}}{D}_{\xi }}\right) \lambda ,{\xi }_{i}}\right\rangle . \]
Proof. By Proposition 4.5, we have\n\n\[ {d}^{ * }\lambda = - \sum \left( {{D}_{{\xi }_{i}}\lambda }\right) \left( {\xi }_{i}\right) \]\n\nand so by a general formula on covariant derivatives we get a value for \( d{d}^{ * }\lambda \), namely\n\n\[ \left\langle {d{d}^{ * }\lambda ,\xi }\right\rangle = - \mathop{\sum }\limits_{i}\left\langle {{D}_{\xi }{D}_{{\xi }_{i}}\lambda ,{\xi }_{i}}\right\rangle \]\n\nOn the other hand, to get \( {d}^{ * }{d\lambda } \), we first note that by COVD 6 of Chapter VIII, §1,\n\n\[ \left( {d\lambda }\right) \left( {\xi ,\eta }\right) = \left\langle {{D}_{\xi }\lambda ,\eta }\right\rangle - \left\langle {{D}_{\eta }\lambda ,\xi }\right\rangle \]\n\nAgain by Proposition 4.5,\n\n\[ \left\langle {{d}^{ * }{d\lambda },\xi }\right\rangle = \sum \left\langle {{D}_{{\xi }_{i}}{D}_{\xi }\lambda ,{\xi }_{i}}\right\rangle - \sum \left\langle {{D}_{{\xi }_{i}}{D}_{{\xi }_{i}}\lambda ,\xi }\right\rangle \]\n\nAdding the two expressions yields the formula of the proposition.
Yes
Theorem 5.1. Under the above two Hodge conditions, we have\n\n\[ \n{\mathbf{H}}^{ \bot } = {DA} + {D}^{ * }A \n\]\n\nand an orthogonal decomposition\n\n\[ \nA = \mathbf{H} \bot \mathbf{\Delta }A = \mathbf{H} \bot {DA} \bot {D}^{ * }A. \n\]\n\nThe restriction of \( \mathbf{\Delta } \) to \( {\mathbf{H}}^{ \bot } \) is invertible, and\n\n\[ \n\operatorname{Ker}D = \mathbf{H} + {DA} \n\]
Proof. By orthogonalization and \( \mathbf{H}\mathbf{2} \), given \( u \in A \) we have\n\n\[ \nu = \mathbf{H}u + \mathbf{\Delta }v = \mathbf{H}u + D{D}^{ * }v + {D}^{ * }{Dv} \]\n\nwith some \( v \in A \). Hence \( A \) is contained in \( \mathbf{H} + {DA} + {D}^{ * }A \), so we get equality. Furthermore\n\n\[ \langle \mathbf{{\Delta u}}, u\rangle = \parallel {Du}{\parallel }^{2} + {\begin{Vmatrix}{D}^{ * }u\end{Vmatrix}}^{2}. \]\n\nHence \( {\Delta u} = 0 \) if and only if \( {Du} = {D}^{ * }u = 0 \). (Each implication is immediate.) The adjointness relation then shows that \( {DA},{D}^{ * }A \) are orthogonal to \( \mathbf{H} \), and \( {D}^{2} = 0 \) implies that \( {DA} \) is orthogonal to \( {D}^{ * }A \), so we get the orthogonal decomposition\n\n\[ \nA = \mathbf{H} \bot {DA} \bot {D}^{ * }A, \]\n\nand \( {\Delta A} = {DA} + {D}^{ * }A \) by \( \mathbf{H}\mathbf{2} \). Since \( \Delta \mathbf{H} = 0 \) it follows that\n\n\[ \n\mathbf{\Delta } : {DA} + {D}^{ * }A \rightarrow {DA} + {D}^{ * }A \]\n\nis surjective, and so is an isomorphism, and thus \( \mathbf{\Delta } \) is invertible on \( {\mathbf{H}}^{ \bot } \). Finally \( \mathbf{H} + {DA} \) is contained in the kernel of \( D \), and \( D \) is injective on \( {D}^{ * }A \) because\n\n\[ \nD{D}^{ * }u = 0 \Rightarrow \left\langle {D{D}^{ * }u, u}\right\rangle = 0 \Rightarrow {\begin{Vmatrix}{D}^{ * }u\end{Vmatrix}}^{2} = 0. \]\n\nThis proves the theorem.
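Theorem 5.1 can be illustrated in finite dimensions, where both Hodge conditions hold automatically. The sketch below (assuming NumPy) takes the simplicial boundary operator of a solid triangle to play the role of \( D \); it satisfies \( {D}^{2} = 0 \), and the dimension counts of the orthogonal decomposition can be checked directly:

```python
import numpy as np

# Boundary operator of a solid triangle, assembled as one nilpotent map on
# A = C_0 + C_1 + C_2 with basis (v0, v1, v2, e01, e02, e12, f).
D = np.zeros((7, 7))
D[0:3, 3:6] = [[-1, -1, 0], [1, 0, -1], [0, 1, 1]]   # boundary of the edges
D[3:6, 6] = [1, -1, 1]                               # boundary of f
assert np.allclose(D @ D, 0)                         # D^2 = 0

L = D @ D.T + D.T @ D                                # Delta = DD* + D*D
# dim H = dim ker(Delta); the triangle is contractible, so this is 1
harm_dim = 7 - np.linalg.matrix_rank(L)
assert harm_dim == 1

# Orthogonal decomposition A = H + DA + D*A: the dimensions must add up
rank_D = np.linalg.matrix_rank(D)
assert harm_dim + rank_D + np.linalg.matrix_rank(D.T) == 7

# Ker D = H + DA: dim ker D = dim H + dim DA
assert (7 - rank_D) == harm_dim + rank_D
```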
Yes
Proposition 5.3. Under these assumptions, \( D = S{D}^{ * }S \) and \( \mathbf{H},\mathbf{\Delta }, G \) commute with \( S \) .
Proof. We give the proof when \( n \) is even for simplicity. For \( u \in {A}^{p} \), we have:\n\n\[ S{D}^{ * }{Su} = - {S}^{2}D{S}^{2}u = - {S}^{2}D{\left( -1\right) }^{p}u \]\n\n\[ = - {\left( -1\right) }^{p}{\left( -1\right) }^{p + 1}{Du} \]\n\n\[ = {Du} \]\n\nso \( D = S{D}^{ * }S \) .\n\nFor the commutation of \( S \) with \( \mathbf{\Delta } \), we write, using the above,\n\n\[ S\mathbf{\Delta } = - {SDSDS} - {SSDSD} \]\n\n\[ {\Delta S} = - {DSDSS} - {SDSDS}. \]\n\nOn \( {A}^{p},{SS} = {\left( -1\right) }^{p} \), so it is immediate that \( {SS} \) commutes with \( {DSD} \), thus showing that \( S \) commutes with \( \mathbf{\Delta } \) .\n\nSince \( S \) commutes with \( \mathbf{\Delta } \), it follows that\n\n\[ S : \mathbf{H} \rightarrow \mathbf{H} \]\n\ninduces an isomorphism of \( \mathbf{H} \) with itself. For \( u \in A \) we have:\n\n\( {Su} - \mathbf{H}{Su} \bot \mathbf{H} \) by definition of the orthogonal projection; and\n\n\( {Su} - S\mathbf{H}u = S\mathbf{\Delta }{Gu} = \mathbf{\Delta }{SGu} \) since \( \mathbf{\Delta } \) commutes with \( S \) .\n\nThen\n\n\( {Su} - S\mathbf{H}u \bot \mathbf{H} \) since it lies in \( \mathbf{\Delta }A \) .\n\nSubtracting shows that \( \mathbf{H}{Su} - S\mathbf{H}u \) is both orthogonal to \( \mathbf{H} \), and also lies in \( \mathbf{H} \), so must be 0, whence \( \mathbf{H} \) commutes with \( S \) . Since \( G = {\mathbf{\Delta }}^{-1} \) on \( {\mathbf{H}}^{ \bot } \) it follows that \( G \) also commutes with \( S \), thus proving the proposition.
Yes
Lemma 6.1. There is a canonical isomorphism\n\n\[ \land ^{p}{T}_{y}^{ \vee } \otimes \land ^{q}{T}_{z}^{ \vee } \rightarrow \land ^{n}{T}_{x}^{ \vee } \]\n\ndefined as follows. For \( \omega \in \mathop{\bigwedge }\limits^{q}{T}_{z}^{ \vee } \) and \( \eta \in \mathop{\bigwedge }\limits^{p}{T}_{y}^{ \vee } \), let \( \widetilde{\eta } \in \mathop{\bigwedge }\limits^{p}{T}_{x}^{ \vee } \) map on \( \eta \) in sequence (3). The map\n\n\[ \left( {\eta ,\omega }\right) \mapsto \widetilde{\eta } \land \omega \]\n\nis independent of the choice of \( \widetilde{\eta } \), and defines the isomorphism.
Proof. Routine algebraic verification. The above lemma is sometimes stated in the form\n\n\[ \det \left( {T}_{x}^{ \vee }\right) = \det \left( {T}_{y}^{ \vee }\right) \otimes \det \left( {T}_{z}^{ \vee }\right) . \]
No
Lemma 6.2. Let \( \pi : X \rightarrow Z \) be a submersion. Suppose \( X \) is orientable. Then every fiber \( {Y}_{z} \) is orientable. If \( \Omega \) and \( \omega \) are volume forms on \( X, Z \) respectively, then there exists a p-form \( \widetilde{\eta } \) on \( X \) whose restriction to each fiber \( {Y}_{\pi \left( y\right) } \) as above is the form \( {\eta }_{y} \) such that \( {\Omega }_{y} = {\eta }_{y} \otimes {\omega }_{\pi \left( y\right) } \) . For any such \( \widetilde{\eta } \), we have\n\n\[ \Omega = \widetilde{\eta } \land \omega . \]
Proof. The orientability comes from the existence of the family of forms \( \left\{ {\eta }_{y}\right\} \), which is verified to be \( {C}^{\infty } \) in terms of coordinates. The local existence of \( \widetilde{\eta } \) is immediate. The global existence follows by using a partition of unity.
No
Lemma 6.3. Under the above assumptions, let \( {\Omega }_{x} \) and \( {\Omega }_{z} \) be metric volume forms on \( {T}_{x} \) and \( {T}_{z} \) (so they determine an orientation). Then one of the possible (up to sign) metric volume forms \( {\Omega }_{y} \) on \( {T}_{y} \) satisfies the relation\n\n\[ \n{\Omega }_{x} = {\Omega }_{y} \otimes {\Omega }_{z} \n\]
Proof. Let \( \left\{ {{e}_{1},\ldots ,{e}_{p}}\right\} \) be an orthonormal basis for \( {T}_{y} \), and \( \left\{ {{e}_{p + 1},\ldots ,{e}_{p + q}}\right\} \) an orthonormal basis for \( {T}_{y}^{ \bot } \) . Together they form an orthonormal basis for \( {T}_{x} \) . The metric dual bases \( \left\{ {{e}_{1}^{ \vee },\ldots ,{e}_{p}^{ \vee }}\right\} \) and \( \left\{ {{e}_{p + 1}^{ \vee },\ldots ,{e}_{p + q}^{ \vee }}\right\} \) form an orthonormal basis of the dual space, and with the appropriate orientation of \( \left\{ {{e}_{1},\ldots ,{e}_{p}}\right\} \),\n\n\[ \n{\Omega }_{x} = {e}_{1}^{ \vee } \land \cdots \land {e}_{p}^{ \vee } \land {e}_{p + 1}^{ \vee } \land \cdots \land {e}_{p + q}^{ \vee }.\n\]\n\nNote that \( {e}_{p + 1}^{ \vee },\ldots ,{e}_{p + q}^{ \vee } \) are the images of an orthonormal basis of \( {T}_{z}^{ \vee } \) under the natural injection\n\n\[ \n0 \rightarrow {T}_{z}^{ \vee } \rightarrow {T}_{x}^{ \vee }\n\]\n\nThen the lemma is an immediate consequence of the definitions.
Yes
Proposition 6.4. Let \( \pi : X \rightarrow Z \) be a Riemannian submersion. Suppose \( X, Z \) oriented, so \( {Y}_{z} \) is oriented for each \( z \) . Let \( {\Omega }_{X},{\Omega }_{Z} \) be the Riemannian volume forms on \( X, Z \) respectively. Then for each \( z \in Z \), the Riemannian volume form \( {\Omega }_{{Y}_{z}} \) (with the determined orientation of \( {Y}_{z} \) ) satisfies\n\n\[{\Omega }_{X}\left( y\right) = {\Omega }_{{Y}_{z}}\left( y\right) \otimes {\Omega }_{Z}\left( z\right)\]
The relation of Proposition 6.4 is pointwise. However, the individual volume forms on the fibers are locally the restriction of a form on an open set of \( X \) itself. Indeed, if \( \left\{ {{\xi }_{1},\ldots ,{\xi }_{p}}\right\} \) is an orthonormal frame of vertical vector fields on \( X \), suitably oriented, then\n\n\[{\Omega }_{Y} = {\xi }_{1}^{ \vee } \land \cdots \land {\xi }_{p}^{ \vee }\]\n\nThen \( {\Omega }_{Y} \) restricted to each fiber \( {Y}_{z} \) is the Riemannian volume form on \( {Y}_{z} \) . We call \( {\Omega }_{Y} \) the vertical metric volume form, which is independent of the choice of vertical orthonormal frame, with the orientation determined by that of \( X \) and \( Z \) . In general, by a vertical volume form we mean a form equal to a positive function times \( {\Omega }_{Y} \), or equivalently, a form which can be expressed locally as a wedge product \( {\xi }_{1}^{\prime } \land \cdots \land {\xi }_{p}^{\prime } \), where \( \left\{ {{\xi }_{1},\ldots ,{\xi }_{p}}\right\} \) is a suitably oriented orthogonal frame of vertical vector fields, and \( \left\{ {{\xi }_{1}^{\prime },\ldots ,{\xi }_{p}^{\prime }}\right\} \) is the dual frame (in the sense of dual basis of vector spaces) vanishing on horizontal fields. Any two such forms differ by a function nowhere 0 . Note that if \( \left\{ {{\xi }_{1},\ldots ,{\xi }_{p}}\right\} \) is a vertical orthonormal frame, then \( {\xi }_{i}^{ \vee } = {\xi }_{i}^{\prime } \) for \( i = 1,\ldots, p.\)
Yes
Proposition 6.5. Let \( \pi : X \rightarrow Z \) be a Riemannian submersion. Let \( {\Omega }_{X} \) and \( {\Omega }_{Z} \) be Riemannian volume forms on \( X, Z \) respectively. Let \( v \) be a vector field on \( Z \), and \( {v}_{X} \) its horizontal lift to \( X \) . Abbreviate \( {\operatorname{div}}_{X} \) for \( {\operatorname{div}}_{{\Omega }_{X}} \), and similarly for \( Z \) . Let \( {\Omega }_{Y} \) be the vertical metric volume form, and let \( \varphi \) be the function such that\n\n\[ \left( {{\mathcal{L}}_{{v}_{X}}{\Omega }_{Y}}\right) \land {\Omega }_{Z} = \varphi {\Omega }_{X} \]\n\nThen\n\n\[ {\operatorname{div}}_{X}\left( {v}_{X}\right) = {\pi }^{ * }{\operatorname{div}}_{Z}\left( v\right) + \varphi . \]
Proof. The first formula comes from definition DIV 2 of the divergence, and the fact that the Lie derivative is a derivation for the wedge product, by Chapter V, Proposition 5.3, LIE 2, namely\n\n\[ {\mathcal{L}}_{{v}_{X}}\left( {\Omega }_{X}\right) = \left( {{\mathcal{L}}_{{v}_{X}}{\Omega }_{Y}}\right) \land {\Omega }_{Z} + {\Omega }_{Y} \land {\mathcal{L}}_{{v}_{X}}{\Omega }_{Z} \]\n\n\[ = \varphi {\Omega }_{X} + {\pi }^{ * }{\operatorname{div}}_{Z}\left( v\right) {\Omega }_{Y} \land {\Omega }_{Z}. \]\n\nSince \( {\Omega }_{Y} \land {\Omega }_{Z} = {\Omega }_{X} \) and \( {\mathcal{L}}_{{v}_{X}}\left( {\Omega }_{X}\right) = {\operatorname{div}}_{X}\left( {v}_{X}\right) {\Omega }_{X} \), the formula of the proposition follows. This concludes the proof.
No
Proposition 7.1. The exponential commutes with conjugation, namely for \( v \in {T}_{e}G \), we have\n\n\[ \exp {\mathbf{c}}_{\text{Lie }}\left( x\right) v = {\mathbf{c}}_{x}\left( {\exp v}\right) = x\exp \left( v\right) {x}^{-1}. \]\n
Proof. This is actually a special case of the general fact that if \( f : G \rightarrow {G}^{\prime } \) is a Lie group homomorphism, and \( v \in {T}_{e}G \), then\n\n\[ f\left( {\exp v}\right) = \exp \left( {{Tf}\left( e\right) v}\right) . \]\n\nWe apply this formula to \( f = {\mathbf{c}}_{x} \). As to the general formula, one notes that \( \alpha \left( t\right) = f\left( {\exp \left( {tv}\right) }\right) \) defines a 1-parameter subgroup \( \alpha \) of \( {G}^{\prime } \), and that \( {\alpha }^{\prime }\left( 0\right) = {Tf}\left( e\right) v \) by the chain rule, so \( \alpha \left( t\right) = \exp \left( {{Tf}\left( e\right) {tv}}\right) \) for all \( t \), concluding the proof.
Yes
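For a matrix group, where \( {\mathbf{c}}_{\mathrm{Lie}}\left( x\right) v = {xv}{x}^{-1} \) under the usual matrix identification (assumed here), the formula of Proposition 7.1 can also be checked term by term from the exponential series:

```latex
x\exp(v)x^{-1}
  = x\Bigl(\sum_{k\ge 0}\frac{v^{k}}{k!}\Bigr)x^{-1}
  = \sum_{k\ge 0}\frac{(xvx^{-1})^{k}}{k!}
  = \exp(xvx^{-1})
  = \exp\bigl(\mathbf{c}_{\mathrm{Lie}}(x)v\bigr),
```

since \( {\left( {xv}{x}^{-1}\right) }^{k} = x{v}^{k}{x}^{-1} \) for every \( k \geqq 0 \).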
Proposition 7.2. We have \( \chi \left( a\right) = \det {\mathbf{c}}_{\text{Lie }}\left( a\right) \) for \( a \in G \) .
Proof. We use \( {\mathbf{c}}_{a} = {L}_{a} \circ {R}_{a}^{-1} \), and abbreviate \( {\mathbf{c}}_{a}V = \det {\mathbf{c}}_{\text{Lie }}\left( a\right) V \) . Then for \( V \neq 0 \) ,\n\n\[ \n\Omega \left( V\right) = {\left( {\mathbf{c}}_{a}\Omega \right) }_{e}\left( {{\mathbf{c}}_{a}V}\right) = {\left( {R}_{{a}^{-1}}\Omega \right) }_{e}\left( {\det {\mathbf{c}}_{\mathrm{{Lie}}}\left( a\right) V}\right) \n\]\n\n\[ \n= \det {\mathbf{c}}_{\mathrm{{Lie}}}\left( a\right) {\left( {R}_{{a}^{-1}}\Omega \right) }_{e}\left( V\right) = \det {\mathbf{c}}_{\mathrm{{Lie}}}\left( a\right) \chi {\left( a\right) }^{-1}\Omega \left( V\right) .\n\]\n\nCancelling \( \Omega \left( V\right) \) concludes the proof of the proposition.
Yes
Proposition 7.3. Let \( \Omega \) be a left invariant volume form on \( G \) . Then \( {\chi \Omega } \) is right invariant, i.e. is a right Haar form.
Proof. We have\n\n\[ \n{R}_{a}\left( {\chi \Omega }\right) = {R}_{a}\left( \chi \right) {R}_{a}\left( \Omega \right) = \left( {\chi \left( {a}^{-1}\right) \chi }\right) \left( {\chi \left( a\right) \Omega }\right) = \chi \left( {a}^{-1}\right) \chi \left( a\right) {\chi \Omega } = {\chi \Omega }, \n\] \n\nthus proving the proposition.
Yes
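As an illustration of Propositions 7.2 and 7.3 (an example we supply, not in the text): let \( G \) be the affine group of the line, realized as matrices \( \begin{pmatrix} a & b \\ 0 & 1 \end{pmatrix} \) with \( a > 0 \), so the product is \( \left( {a, b}\right) \left( {{a}^{\prime },{b}^{\prime }}\right) = \left( {a{a}^{\prime },{ab}^{\prime } + b}\right) \). In the Lie algebra basis \( {X}_{1} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \), \( {X}_{2} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \) one computes:

```latex
% Conjugation on the Lie algebra and its determinant:
\mathbf{c}_{\mathrm{Lie}}(a,b) = \begin{pmatrix} 1 & 0 \\ -b & a \end{pmatrix},
\qquad
\chi(a,b) = \det \mathbf{c}_{\mathrm{Lie}}(a,b) = a .
% A left invariant volume form and the resulting right Haar form:
\Omega = \frac{da \wedge db}{a^{2}},
\qquad
\chi\,\Omega = \frac{da \wedge db}{a}.
```

The substitution \( u = a{a}_{0} \), \( v = a{b}_{0} + b \) verifies directly that \( {da} \land {db}/a \) is right invariant, while \( \Omega \) is left invariant but not right invariant, so this \( G \) is not unimodular.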
Proposition 7.4. For \( h \in H \), we have\n\n\[ \text{det}T{\mathbf{c}}_{h}\left( {e}_{G}\right) = \det T{\mathbf{c}}_{h}\left( {e}_{G/H}\right) \cdot \det T{\mathbf{c}}_{h}\left( {e}_{H}\right) \text{.} \]
Proof. Let \( {T}_{y} = {T}_{y}Y,{T}_{x} = {T}_{y}X \) and \( {T}_{z} = {T}_{\pi \left( y\right) }Z \), so we have the exact\n\nsequence\n\n\[ 0 \rightarrow {T}_{y} \rightarrow {T}_{x} \rightarrow {T}_{z} \rightarrow 0. \]\n\nThe map \( f \) induces tangent linear maps on each of those spaces, and we denote these by \( {L}_{y},{L}_{x},{L}_{z} \), so\n\n\[ {L}_{x} = T{f}_{X}\left( y\right) ,\;{L}_{y} = T{f}_{Y}\left( y\right) \;\text{ and }\;{L}_{z} = T{f}_{Z}\left( z\right) . \]\n\nIf \( V \) is a finite dimensional vector space of dimension \( p \), we let det \( V = \) \( \mathop{\bigwedge }\limits^{p}V \) be its maximal exterior product with itself. Similarly to Lemma 6.1, we have a natural isomorphism\n\n\[ \det {T}_{x} = \left( {\det {T}_{y}}\right) \otimes \left( {\det {T}_{z}}\right) \]\n\nConcretely, if \( \left\{ {{v}_{1},\ldots ,{v}_{p}}\right\} \) is a basis of \( {T}_{y} \) and \( \left\{ {{w}_{1},\ldots ,{w}_{q}}\right\} \) is a basis of \( {T}_{z} \) , with representatives \( \left\{ {{\widetilde{w}}_{1},\ldots ,{\widetilde{w}}_{q}}\right\} \) in \( {T}_{x} \), then\n\n\[ {v}_{1} \land \cdots \land {v}_{p} \land {\widetilde{w}}_{1} \land \cdots \land {\widetilde{w}}_{q} \]\n\nis a basis of det \( {T}_{x} = \mathop{\bigwedge }\limits^{{p + q}}{T}_{x} \) . The scaling effect of det \( {L}_{x} \) is then equal to the product of the scaling effect on each factor, \( \left( {\det {L}_{y}}\right) \left( {\det {L}_{z}}\right) \), which proves the general formula. The special case first stated in Proposition 7.4 occurs with \( f = {\mathbf{c}}_{h}\left( {h \in H}\right) \). This concludes the proof.
Yes
Proposition 7.5. Let \( X \) be a homogeneous space for \( G \) . If \( X \) is strictly unimodular, then there exists a left G-invariant volume form on \( X \), unique up to a constant multiple.
Proof. We want to define the invariant form on \( G/H \) by translating a given volume form \( {\omega }_{e\left( {G/H}\right) } \) on \( {T}_{e}\left( {G/H}\right) \) . On \( G/H \), the left translation \( {L}_{h} \) is induced by conjugation \( {\mathbf{c}}_{h} \) on \( G \) . By Proposition 7.4 and the hypothesis, we have\n\n\[ \text{det}T{L}_{h}\left( {e}_{G/H}\right) = \det T{\mathbf{c}}_{h}\left( {e}_{G/H}\right) = 1\text{.} \]\n\nHence \( {L}_{h}{\omega }_{e\left( {G/H}\right) } = {\omega }_{e\left( {G/H}\right) } \), that is \( {\omega }_{e\left( {G/H}\right) } \) is invariant under translations by elements of \( H \) . Then for any \( g \in G \) we define\n\n\[ {\omega }_{gH} = {L}_{g}{\omega }_{e\left( {G/H}\right) } \]\n\nThe value on the right is independent of the coset representative \( g \), and it is then clear that translation yields the desired \( G \) -invariant volume form on \( G/H \) . The uniqueness up to a constant factor follows because the invariant forms are determined linearly from their values at the origin, and the forms at the origin constitute a 1-dimensional space. This concludes the proof.
Yes
Proposition 8.1. Suppose there is a section \( \sigma : Z \rightarrow X \) of a homogeneously fibered submersion. Define\n\n\[ \gamma : H \times Z \rightarrow X\;\text{ by }\;\gamma \left( {h, z}\right) = {h\sigma }\left( z\right) . \]\n\nThen \( \gamma \) is a submersion.
Proof. The tangent map \( {T\gamma }\left( {h, z}\right) \) is a surjective linear map between the tangent spaces at each point. In fact, if we let \( {\gamma }_{h}\left( {\sigma \left( z\right) }\right) = h\left( {\sigma z}\right) = \gamma \left( {h, z}\right) \) , then \( T{\gamma }_{h}\left( {\sigma \left( z\right) }\right) \) gives a linear isomorphism of the tangent spaces to the fiber. On the other hand, \( {T\sigma } \) gives a linear isomorphism of the tangent space \( {T}_{z}Z \) to a subspace of \( {T}_{\sigma \left( z\right) }X \), and we have the direct sum decomposition at the point \( x = \sigma \left( z\right) \),\n\n\[ {T}_{\sigma \left( z\right) }X = {T}_{x}\left( {Hx}\right) \oplus {\sigma }_{ * }{T}_{z}Z \]\n\nThis concludes the proof.
Yes
Theorem 8.2 (Wu). Let \( \pi : X \rightarrow Z \) be a metrically homogeneously fibered submersion. For any two points \( x, y \in X \), the isotropy groups \( {H}_{x} \) , \( {H}_{y} \) are conjugate in \( H \) . In fact, let \( x, y \) be points of \( X \) which can be joined by the horizontal lift of a curve in \( Z \) . Then \( {H}_{x} = {H}_{y} \), and the flow of the horizontal lift induces an H-homogeneous space isomorphism between the fibers at \( x \) and at \( y \) .
Proof. We recall that the horizontal lift was defined in Chapter XIV, §3. Suppose first that \( x, y \) can be joined by a horizontal lift \( A \) . Let \( h \in {H}_{x} \) . Since \( H \) acts isometrically on \( X, h \circ A \) is the unique horizontal lift from \( {hx} = x \) to \( {hy} \) . But \( h \circ A \) has the same initial conditions as \( A \), and so coincides with \( A \) by the uniqueness of solutions of differential equations. Hence \( {hy} = y \), and \( h \in {H}_{y} \) . The reverse inclusion \( {H}_{y} \subset {H}_{x} \) follows by symmetry, so \( {H}_{x} = {H}_{y} \) . Next, for arbitrary points \( x, y \in X \), consider any curve in \( Z \) between \( \pi \left( x\right) \) and \( \pi \left( y\right) \) . Then the horizontal lift of this curve in \( X \) joins \( x \) to a point \( {y}^{\prime } \) in the same fiber as \( y \), and the isotropy groups of \( y \) and \( {y}^{\prime } \) are conjugate. Finally, let \( F \) be the flow of horizontal lifts, that is \( {F}_{t}\left( x\right) = {A}_{x}\left( t\right) \), where \( {A}_{x} \) is the horizontal lift of a curve \( {\alpha }_{\pi \left( x\right) } \) with initial condition \( \pi \left( x\right) \) on \( Z \) . Then \( t \mapsto {F}_{t}\left( {hx}\right) \) and \( t \mapsto h{A}_{x}\left( t\right) \) are horizontal lifts with the same initial conditions, and so are equal. This concludes the proof.
Yes
Theorem 8.3. Let \( \pi : X \rightarrow Z \) be a metrically homogeneously fibered strictly unimodular submersion. Let \( v \) be a vector field on \( Z \). Then the Haar form \( \Psi \) is \( {v}_{X} \) -constant over the fibers. If \( \delta \) is the Riemannian Haar density, then\n\n\[{\operatorname{div}}_{X}\left( {v}_{X}\right) = {\pi }^{ * }{\operatorname{div}}_{Z}\left( v\right) + {\pi }^{ * }\left( {v \cdot \log \delta }\right)\]
Proof. Let \( \alpha \) be an integral curve of \( v \) in \( Z \) and let \( A \) be its horizontal lift, so \( {v}_{X} \) restricts to \( {A}^{\prime } \) on the curve. By Theorem 8.2, the flow \( {F}_{t} \) gives a homogeneous space isomorphism \( {Y}_{\alpha \left( 0\right) } \rightarrow {Y}_{\alpha \left( t\right) } \) of the fibers. Let \( {\Psi }_{\alpha \left( t\right) } \) be the Haar form restricted to the fiber. By the unimodularity condition, \( {F}_{t}^{ * }{\Psi }_{\alpha \left( t\right) } = {\Psi }_{\alpha \left( 0\right) } \), which is constant. We now use frames as in the remarks preceding the theorem. In taking \( {F}_{t}^{ * }\left( \Psi \right) \), we note that each term \( {F}_{t}^{ * }\left( {\xi }_{i}^{\prime }\right) \) may have a horizontal component, so that in a neighborhood (in \( X \) ) of a point of the fiber \( {Y}_{\alpha \left( 0\right) } \) ,\n\n\[{F}_{t}^{ * }\left( \Psi \right) = \Psi + {\Phi }_{t}\]\n\nwhere \( {\Phi }_{t} \) contains a horizontal factor. The restriction of \( {\Phi }_{t} \) to the fiber \( {Y}_{\alpha \left( 0\right) } \) is 0, so the restriction of \( {\mathcal{L}}_{{v}_{X}}\Psi \) to the fiber \( {Y}_{\alpha \left( 0\right) } \) is 0 . Hence \( \Psi \) is \( {v}_{X} \) - constant over the fibers. We can then apply Proposition 6.5 to conclude the proof.
Yes
Theorem 8.4 (Helgason). Let \( \pi : X \rightarrow Z \) be a Riemannian submersion metrically homogeneously fibered, and unimodular. Let \( \delta \) be the Riemannian Haar density. Let \( {\mathbf{\Delta }}_{X},{\mathbf{\Delta }}_{Z} \) be the Laplacians. Then for a function \( \psi \) on \( Z \), we have\n\n\[{\mathbf{\Delta }}_{X}\left( {{\pi }^{ * }\psi }\right) = {\pi }^{ * }\left( {\left( {{\mathbf{\Delta }}_{Z}\psi }\right) - \left( {{\operatorname{grad}}_{Z}\log \delta }\right) \cdot \psi }\right) .\]
Proof. All the work has been done, and the statement merely puts together Proposition 6.5 via Theorem 8.3, and the definition of the Laplacian as minus the divergence of the gradient.
No
We cannot define \[ {\pi }_{ * } : \mathrm{{DO}}\left( X\right) \rightarrow \mathrm{{DO}}\left( Z\right) \] in general, but we can define \( {\pi }_{ * } \) in a natural way on a subset of \( \operatorname{DO}\left( X\right) \).
Indeed, an element of the group \( H \) acting on \( X \) also acts on any object functorially associated with \( X \), especially on \( \operatorname{DO}\left( X\right) \). By definition, given \( h \in H \), let \( \left\lbrack h\right\rbrack D \) for \( D \in \mathrm{{DO}}\left( X\right) \) be defined by \[ \left( {\left( {\left\lbrack h\right\rbrack D}\right) f}\right) = \left( {D\left( {f \circ {L}_{h}}\right) }\right) \circ {L}_{h}^{-1} \] where \( {L}_{h} \) is left translation by \( h \), so that for \( x \in X \), \[ \left( {\left( {\left\lbrack h\right\rbrack D}\right) f}\right) \left( x\right) = D\left( {f \circ {L}_{h}}\right) \left( {{h}^{-1}x}\right) . \] We say that \( D \) is \( \mathbf{H} \) -invariant if \( \left\lbrack h\right\rbrack D = D \) for all \( h \in H \). The set of \( H \) - invariant differential operators is a subalgebra of \( \operatorname{DO}\left( X\right) \), which we denote by \( \operatorname{DO}{\left( X\right) }^{H} \). We can then define \[ {\pi }_{ * } : \mathrm{{DO}}{\left( X\right) }^{H} \rightarrow \mathrm{{DO}}\left( Z\right) \] as follows. For a function \( f \) on \( Z \), we let \[ \left( {{\pi }_{ * }D}\right) f = D{\left( f \circ \pi \right) }_{Z} \] This means that \( D\left( {f \circ \pi }\right) \) is constant on the fibers of \( \pi \), that is \( D\left( {f \circ \pi }\right) \) is an \( H \) -invariant function, which therefore factors through a function on \( Z \). We denote this function by inserting the subscript \( Z \). To verify that \( D\left( {f \circ \pi }\right) \) is constant on fibers, put \( F = f \circ \pi \), so that \( F \) is a function on \( X \), constant on fibers. For \( h \in H \), let \( \left\lbrack h\right\rbrack F = F \circ {L}_{h}^{-1} \). 
Then \[ \left\lbrack h\right\rbrack \left( {DF}\right) = \left( {\left\lbrack h\right\rbrack D}\right) \left( {\left\lbrack h\right\rbrack F}\right) = \left( {\left\lbrack h\right\rbrack D}\right) \left( {F \circ {L}_{h}^{-1}}\right) \] \[ = {DF} \] because we assumed \( D \in \mathrm{{DO}}{\left( X\right) }^{H} \). Thus \( D\left( {f \circ \pi }\right) \) is constant on orbits of \( H \). Hence \( \left( {{\pi }_{ * }D}\right) f = D{\left( f \circ \pi \right) }_{Z} \) defines a linear map \( \operatorname{DO}{\left( X\right) }^{H} \rightarrow \operatorname{DO}\left( Z\right) \). This map is a differential operator. One can see this either as a special case of the general discussion, using the section of Proposition 8.1, or one can simply rewrite the local formula for the differential operator on the submersion, and use the \( H \) -invariance to see that the coefficient functions \( {\varphi }_{\left( j\right) }\left( {w, x}\right) \) are \( H \) -invariant, that is \( {\varphi }_{\left( j\right) }\left( {{hw}, x}\right) = {\varphi }_{\left( j\right) }\left( {w, x}\right) \) for \( h \in H \) and \( w \in W \) as before.
Yes
Lemma 1.1. Let \( A \) have measure 0 in \( {\mathbf{R}}^{n} \) and let \( f : A \rightarrow {\mathbf{R}}^{n} \) satisfy a Lipschitz condition. Then \( f\left( A\right) \) has measure 0 .
Proof. Let \( C \) be a Lipschitz constant for \( f \) . Let \( \left\{ {R}_{j}\right\} \) be a sequence of cubes covering \( A \) such that \( \sum \mu \left( {R}_{j}\right) < \epsilon \) . Let \( {r}_{j} \) be the length of the side of \( {R}_{j} \) . Then for each \( j \) we see that \( f\left( {A \cap {R}_{j}}\right) \) is contained in a cube \( {R}_{j}^{\prime } \) whose sides have length \( \leqq {2C}{r}_{j} \) . Hence\n\n\[ \mu \left( {R}_{j}^{\prime }\right) \leqq {2}^{n}{C}^{n}{r}_{j}^{n} = {2}^{n}{C}^{n}\mu \left( {R}_{j}\right) \]\n\nSumming over \( j \) shows that \( f\left( A\right) \) is covered by cubes of total measure \( \leqq {2}^{n}{C}^{n}\epsilon \) . Our lemma follows.
Yes
Lemma 1.2. Let \( U \) be open in \( {\mathbf{R}}^{n} \) and let \( f : U \rightarrow {\mathbf{R}}^{n} \) be a \( {C}^{1} \) map. Let\n\n\( Z \) be a set of measure 0 in \( U \) . Then \( f\left( Z\right) \) has measure 0 .
Proof. For each \( x \in U \) there exists a rectangle \( {R}_{x} \) contained in \( U \) such that the family \( \left\{ {R}_{x}^{0}\right\} \) of interiors covers \( Z \) . Since \( U \) is separable, there exists a denumerable subfamily covering \( Z \), say \( \left\{ {R}_{j}\right\} \) . It suffices to prove that \( f\left( {Z \cap {R}_{j}}\right) \) has measure 0 for each \( j \) . But \( f \) satisfies a Lipschitz condition on \( {R}_{j} \) since \( {R}_{j} \) is compact and \( {f}^{\prime } \) is bounded on \( {R}_{j} \), being continuous. Our lemma follows from Lemma 1.1.
Yes
Lemma 1.3. Let \( A \) be a subset of \( {\mathbf{R}}^{m} \) . Assume that \( m < n \) . Let\n\n\[ f : A \rightarrow {\mathbf{R}}^{n} \]\n\nsatisfy a Lipschitz condition. Then \( f\left( A\right) \) has measure 0 .
Proof. We view \( {\mathbf{R}}^{m} \) as embedded in \( {\mathbf{R}}^{n} \) on the space of the first \( m \) coordinates. Then \( {\mathbf{R}}^{m} \) has measure 0 in \( {\mathbf{R}}^{n} \), so that \( A \) also has \( n \) - dimensional measure 0 . Lemma 1.3 is therefore a consequence of Lemma 1.1.
No
Corollary 2.2. Let \( S \) be the unit cube spanned by the unit vectors in \( {\mathbf{R}}^{n} \). Let \( \lambda : {\mathbf{R}}^{n} \rightarrow {\mathbf{R}}^{n} \) be a linear map. Then \[ \operatorname{Vol}\lambda \left( S\right) = \left| {\operatorname{Det}\left( \lambda \right) }\right| \]
Proof. If \( {v}_{1},\ldots ,{v}_{n} \) are the images of \( {e}_{1},\ldots ,{e}_{n} \) under \( \lambda \), then \( \lambda \left( S\right) \) is the block spanned by \( {v}_{1},\ldots ,{v}_{n} \). If we represent \( \lambda \) by the matrix \( A = \left( {a}_{ij}\right) \), then \[ {v}_{i} = {a}_{1i}{e}_{1} + \cdots + {a}_{ni}{e}_{n} \] and hence \( \operatorname{Det}\left( {{v}_{1},\ldots ,{v}_{n}}\right) = \operatorname{Det}\left( A\right) = \operatorname{Det}\left( \lambda \right) \). This proves the corollary.
Yes
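For instance, in \( {\mathbf{R}}^{2} \), if \( \lambda {e}_{1} = \left( {2,1}\right) \) and \( \lambda {e}_{2} = \left( {0,3}\right) \), then \( \lambda \left( S\right) \) is the parallelogram spanned by these two vectors, and its area is

```latex
\operatorname{Vol}\lambda(S)
  = \left|\det\begin{pmatrix} 2 & 0 \\ 1 & 3 \end{pmatrix}\right|
  = \lvert 2\cdot 3 - 0\cdot 1\rvert
  = 6 .
```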
Corollary 2.3. If \( R \) is any rectangle in \( {\mathbf{R}}^{n} \) and \( \lambda : {\mathbf{R}}^{n} \rightarrow {\mathbf{R}}^{n} \) is a linear map, then\n\n\[ \operatorname{Vol}\lambda \left( R\right) = \left| {\operatorname{Det}\left( \lambda \right) }\right| \operatorname{Vol}\left( R\right) \]
Proof. After a translation, we can assume that the rectangle is a block. If \( R = {\lambda }_{1}\left( S\right) \) where \( S \) is the unit cube, then\n\n\[ \lambda \left( R\right) = \lambda \circ {\lambda }_{1}\left( S\right) \]\n\nwhence by Corollary 2.2,\n\n\[ \operatorname{Vol}\lambda \left( R\right) = \left| {\operatorname{Det}\left( {\lambda \circ {\lambda }_{1}}\right) }\right| = \left| {\operatorname{Det}\left( \lambda \right) \operatorname{Det}\left( {\lambda }_{1}\right) }\right| = \left| {\operatorname{Det}\left( \lambda \right) }\right| \operatorname{Vol}\left( R\right) . \]
Yes
Corollary 2.5. If \( g \) is continuous on \( f\left( R\right) \), then\n\n\[{\int }_{f\left( R\right) }{gd\mu } = {\int }_{R}\left( {g \circ f}\right) \left| {\Delta }_{f}\right| {d\mu }.\]
Proof. The functions \( g \) and \( \left( {g \circ f}\right) \left| {\Delta }_{f}\right| \) are uniformly continuous on \( f\left( R\right) \) and \( R \) respectively. Let us take a partition of \( R \) and let \( \left\{ {S}_{j}\right\} \) be the subrectangles of this partition. If \( \delta \) is the maximum length of the sides of the subrectangles of the partition, then \( f\left( {S}_{j}\right) \) is contained in a rectangle whose sides have length \( \leqq {C\delta } \) for some constant \( C \) . We have\n\n\[{\int }_{f\left( R\right) }{gd\mu } = \mathop{\sum }\limits_{j}{\int }_{f\left( {S}_{j}\right) }{gd\mu }\]\n\nThe sup and inf of \( g \) on \( f\left( {S}_{j}\right) \) differ only by \( \epsilon \) if \( \delta \) is taken sufficiently small. Using the theorem, applied to each \( {S}_{j} \), and replacing \( g \) by its minimum \( {m}_{j} \) and maximum \( {M}_{j} \) on \( {S}_{j} \), we see that the corollary follows at once.
Yes
Corollary 2.7. Let \( U \) be open in \( {\mathbf{R}}^{n} \) and let \( f : U \rightarrow {\mathbf{R}}^{n} \) be a \( {C}^{1} \) map. Let \( A \) be a measurable subset of \( U \) such that the boundary of \( A \) has measure 0, and such that \( f \) is \( {C}^{1} \) invertible on the interior of \( A \) . Let \( g \) be in \( {\mathcal{L}}^{1}\left( {f\left( A\right) }\right) \) . Then \( \left( {g \circ f}\right) \left| {\Delta }_{f}\right| \) is in \( {\mathcal{L}}^{1}\left( A\right) \) and\n\n\[{\int }_{f\left( A\right) }{gd\mu } = {\int }_{A}\left( {g \circ f}\right) \left| {\Delta }_{f}\right| {d\mu }.\]
Proof. Let \( {U}_{0} \) be the interior of \( A \) . The sets \( f\left( A\right) \) and \( f\left( {U}_{0}\right) \) differ only by a set of measure 0, namely \( f\left( {\partial A}\right) \) . Also the sets \( A,{U}_{0} \) differ only by a set of measure 0 . Consequently we can replace the domains of integration \( f\left( A\right) \) and \( A \) by \( f\left( {U}_{0}\right) \) and \( {U}_{0} \), respectively. The theorem applies to conclude the proof of the corollary.
Yes
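The polar coordinate formula is a typical application of Corollary 2.7 (supplied here as an example). Take \( f\left( {r,\theta }\right) = \left( {r\cos \theta, r\sin \theta }\right) \) and \( A = \left\lbrack {0,1}\right\rbrack \times \left\lbrack {0,{2\pi }}\right\rbrack \) : the boundary of \( A \) has measure 0, \( f \) is \( {C}^{1} \) invertible on the interior of \( A \), and \( {\Delta }_{f} = r \). Then for \( g \in {\mathcal{L}}^{1}\left( {f\left( A\right) }\right) \), where \( f\left( A\right) \) is the closed unit disc,

```latex
\int_{f(A)} g \, d\mu
  = \int_{0}^{2\pi} \!\! \int_{0}^{1} g(r\cos\theta,\, r\sin\theta)\, r \, dr \, d\theta .
```

With \( g = 1 \), both sides give the area \( \pi \) of the unit disc.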
Lemma 4.1. Let \( \lambda : {C}_{c}\left( X\right) \rightarrow \mathbf{C} \) be a positive linear map. Then \( \lambda \) is bounded on \( {C}_{K}\left( X\right) \) for any compact \( K \) .
Proof. By the corollary of Urysohn's lemma, there exists a continuous real function \( g \geqq 0 \) on \( X \) which is 1 on \( K \) and has compact support. If \( f \in {C}_{K}\left( X\right) \), let \( b = \parallel f\parallel \) . Say \( f \) is real. Then \( {bg} \pm f \geqq 0 \), whence \[ \lambda \left( {bg}\right) \pm {\lambda f} \geqq 0 \] and \( \left| {\lambda f}\right| \leqq {b\lambda }\left( g\right) \) . The complex case follows by applying this to the real and imaginary parts. Thus \( \lambda \left( g\right) \) is our desired bound.
Yes
Lemma 4.2. Let \( \left\{ {W}_{\alpha }\right\} \) be an open covering of \( X \) . For each index \( \alpha \), let \( {\lambda }_{\alpha } \) be a functional on \( {C}_{c}\left( {W}_{\alpha }\right) \) . Assume that for each pair of indices \( \alpha ,\beta \) the functionals \( {\lambda }_{\alpha } \) and \( {\lambda }_{\beta } \) are equal on \( {C}_{c}\left( {{W}_{\alpha } \cap {W}_{\beta }}\right) \) . Then there exists a unique functional \( \lambda \) on \( X \) whose restriction to each \( {C}_{c}\left( {W}_{\alpha }\right) \) is equal to \( {\lambda }_{\alpha } \) . If each \( {\lambda }_{\alpha } \) is positive, then so is \( \lambda \) .
Proof. Let \( f \in {C}_{c}\left( X\right) \) and let \( K \) be the support of \( f \) . Let \( \left\{ {h}_{i}\right\} \) be a partition of unity over \( K \) subordinated to a covering of \( K \) by a finite number of the open sets \( {W}_{\alpha } \) . Then each \( {h}_{i}f \) has support in some \( {W}_{\alpha \left( i\right) } \) and we define\n\n\[ \n{\lambda f} = \mathop{\sum }\limits_{i}{\lambda }_{\alpha \left( i\right) }\left( {{h}_{i}f}\right) \n\]\n\nWe contend that this sum is independent of the choice of \( \alpha \left( i\right) \), and also of the choice of partition of unity. Once this is proved, it is then obvious that \( \lambda \) is a functional which satisfies our requirements. We now prove this independence. First note that if \( {W}_{{\alpha }^{\prime }\left( i\right) } \) is another one of the open sets \( {W}_{\alpha } \) in which the support of \( {h}_{i}f \) is contained, then \( {h}_{i}f \) has support in the intersection \( {W}_{\alpha \left( i\right) } \cap {W}_{{\alpha }^{\prime }\left( i\right) } \), and our assumption concerning our functionals \( {\lambda }_{\alpha } \) shows that the corresponding term in the sum does not depend on the choice of index \( \alpha \left( i\right) \) . Next, let \( \left\{ {g}_{k}\right\} \) be another partition of unity over \( K \) subordinated to some covering of \( K \) by a finite number of the open sets \( {W}_{\alpha } \) . Then for each \( i \) ,\n\n\[ \n{h}_{i}f = \mathop{\sum }\limits_{k}{g}_{k}{h}_{i}f \n\]\n\nwhence\n\n\[ \n\mathop{\sum }\limits_{i}{\lambda }_{\alpha \left( i\right) }\left( {{h}_{i}f}\right) = \mathop{\sum }\limits_{i}\mathop{\sum }\limits_{k}{\lambda }_{\alpha \left( i\right) }\left( {{g}_{k}{h}_{i}f}\right) \n\]\n\nIf the support of \( {g}_{k}{h}_{i}f \) is in some \( {W}_{\alpha } \), then the value \( {\lambda }_{\alpha }\left( {{g}_{k}{h}_{i}f}\right) \) is independent of the choice of index \( \alpha \) . 
The expression on the right is then symmetric with respect to our two partitions of unity, whence our theorem follows.
Yes
Theorem 4.3. Let \( \dim X = n \) and let \( \omega \) be an \( n \) -form on \( X \) of class \( {C}^{0} \), that is continuous. Then there exists a unique positive functional \( \lambda \) on \( {C}_{c}\left( X\right) \) having the following property. If \( \left( {U,\varphi }\right) \) is a chart and\n\n\[ \n\omega \left( x\right) = f\left( x\right) d{x}_{1} \land \cdots \land d{x}_{n}\n\]\n\nis the local representation of \( \omega \) in this chart, then for any \( g \in {C}_{c}\left( X\right) \) with support in \( U \), we have\n\n(1)\n\n\[ \n{\lambda g} = {\int }_{\varphi U}{g}_{\varphi }\left( x\right) \left| {f\left( x\right) }\right| {dx}\n\]\n\nwhere \( {g}_{\varphi } \) represents \( g \) in the chart [i.e. \( {g}_{\varphi }\left( x\right) = g\left( {{\varphi }^{-1}\left( x\right) }\right) \) ], and \( {dx} \) is Lebesgue measure.
Proof. The integral in (1) defines a positive functional on \( {C}_{c}\left( U\right) \) . The change of variables formula shows that if \( \left( {U,\varphi }\right) \) and \( \left( {V,\psi }\right) \) are two charts, and if \( g \) has support in \( U \cap V \), then the value of the functional is independent of the choice of charts. Thus we get a positive functional by the general localization lemma for functionals.
Yes
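To see why the absolute value in (1) is needed, take \( X = \mathbf{R} \) with \( \omega = {dx} \), and the two charts \( \varphi = \mathrm{id} \) and \( \psi \left( x\right) = - x \). In the chart \( \psi \) the local representation has \( f = - 1 \), and formula (1) gives the same value of \( {\lambda g} \) in either chart:

```latex
\int_{\psi(\mathbf{R})} g_{\psi}(y)\,\lvert -1 \rvert \, dy
  = \int_{\mathbf{R}} g(-y)\, dy
  = \int_{\mathbf{R}} g(x)\, dx
  = \int_{\varphi(\mathbf{R})} g_{\varphi}(x)\,\lvert 1 \rvert \, dx .
```

Without the absolute value the two charts would give values of opposite sign, which is why Theorem 4.4 requires an orientation.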
Theorem 4.4. Let \( \dim X = n \) and assume that \( X \) is oriented. Let \( \omega \) be an \( n \) -form on \( X \) of class \( {C}^{0} \) . Then there exists a unique functional \( \lambda \) on \( {C}_{c}\left( X\right) \) having the following property. If \( \left( {U,\varphi }\right) \) is an oriented chart and\n\n\[ \n\omega \left( x\right) = f\left( x\right) d{x}_{1} \land \cdots \land d{x}_{n} \n\]\n\nis the local representation of \( \omega \) in this chart, then for any \( g \in {C}_{c}\left( X\right) \) with support in \( U \), we have\n\n\[ \n{\lambda g} = {\int }_{\varphi U}{g}_{\varphi }\left( x\right) f\left( x\right) {dx} \n\]\n\nwhere \( {g}_{\varphi } \) represents \( g \) in the chart, and \( {dx} \) is Lebesgue measure.
Proof. Since the Jacobian determinant of transition maps belonging to oriented charts is positive, we see that Theorem 4.4 follows like Theorem 4.3 from the change of variables formula (in which the absolute value sign now becomes unnecessary) and the existence of partitions of unity.
No