| Algorithm | Objective function | Iteration complexity | Remark |
| --- | --- | --- | --- |
| Algorithm 1 (Liu et al., 2017) | g-strongly convex | $O\left(\sqrt{\frac{L}{\mu}}\log \frac{L}{\epsilon}\right)$ | computationally intractable |
| Algorithm 2 (Liu et al., 2017) | g-convex | $O\left(\sqrt{\frac{L}{\epsilon}}\right)$ | computationally intractable |
| RAGD (Zhang & Sra, 2018) | g-strongly convex | $O\left(\frac{10}{9}\sqrt{\frac{L}{\mu}}\log \frac{L}{\epsilon}\right)$ | nonstandard assumption |
| Algorithm 1 (Ahn & Sra, 2020) | g-strongly convex | $O^{*}\left(\frac{L}{\mu} + \sqrt{\frac{L}{\mu}}\log \frac{\mu}{\epsilon}\right)$ | eventually accelerated |
| RAGDsDR (Alimisis et al., 2021) | g-convex | $O\left(\sqrt{\frac{\zeta L}{\epsilon}}\right)$ | only in early stages |
| (Martínez-Rubio, 2022) | g-convex | $O\left(\sqrt{\frac{L}{\epsilon}}\right)$ | only for constant curvature |
| (Martínez-Rubio, 2022) | g-strongly convex | $O^{*}\left(\sqrt{\frac{L}{\mu}}\log \frac{\mu}{\epsilon}\right)$ | only for constant curvature |
| RNAG-C (ours) | g-convex | $O\left(\xi\sqrt{\frac{L}{\epsilon}}\right)$ | |
| RNAG-SC (ours) | g-strongly convex | $O\left(\xi\sqrt{\frac{L}{\mu}}\log \frac{L}{\epsilon}\right)$ | |
Here, $\zeta$ and $\xi$ are constants defined using a lower bound $K_{\mathrm{min}}$ of the sectional curvature and an upper bound $D$ of $\mathrm{diam}(N)$. For completeness, we provide a potential-function analysis in Appendix D to show that RGD with a fixed stepsize has the same iteration complexity as GD.
However, it is still unclear whether a reasonable generalization of NAG to the Riemannian setting is possible with strong theoretical guarantees. When studying the global complexity of Riemannian optimization algorithms, it is common to assume that the sectional curvature of $M$ is bounded below by $K_{\mathrm{min}}$ and above by $K_{\mathrm{max}}$, which prevents the manifold from being overly curved. Unfortunately, (Criscitiello & Boumal, 2021; Hamilton & Moitra, 2021) show that even when the sectional curvature is bounded, achieving global acceleration is impossible in general. Thus, one might need another common assumption: an upper bound $D$ of $\mathrm{diam}(N)$. This motivates our central question:
Can we design computationally tractable accelerated first-order methods on Riemannian manifolds when the sectional curvature and the diameter of the domain are bounded?
In the literature, there are some partial answers but no full answer to this question (see Table 1 and Section 2). In this paper, we provide a complete answer via new first-order algorithms, which we call the Riemannian Nesterov accelerated gradient (RNAG) method. We show that acceleration is possible on Riemannian manifolds for both geodesically convex (g-convex) and geodesically strongly convex (g-strongly convex) cases whenever the bounds $K_{\mathrm{min}}$, $K_{\mathrm{max}}$, and $D$ are available. The main contributions of this work can be summarized as follows:
- Generalizing Nesterov's scheme, we propose RNAG, a first-order method for Riemannian optimization. We provide two specific algorithms: RNAG-C (Algorithm 1) for minimizing g-convex functions and RNAG-SC (Algorithm 2) for minimizing g-strongly convex functions. Both algorithms call one gradient oracle per iteration. Our algorithms are computationally tractable in the sense that they only involve exponential maps, logarithm maps, parallel transport, and operations in tangent spaces. In particular, RNAG-C can be interpreted as a variant of NAG-C with high friction in (Su et al., 2014, Section 4.1) (see Appendix B).
- Given the bounds $K_{\mathrm{min}}$ , $K_{\mathrm{max}}$ , and $D$ , we prove that RNAG-C has an $O\left(\sqrt{\frac{L}{\epsilon}}\right)$ iteration complexity (Corollary 5.5), and that RNAG-SC has an $O\left(\sqrt{\frac{L}{\mu}}\log \frac{L}{\epsilon}\right)$ iteration complexity (Corollary 5.7). The crucial steps of the proofs are constructing potential functions as (4) and handling metric distortion using Lemma 5.2 and Lemma 5.3. To the best of our knowledge, this is the first proof for full acceleration in the g-convex case.
- We identify a connection between our algorithms and the ODEs for modeling Riemannian acceleration in (Alimisis et al., 2020) by letting the stepsize tend to zero. This analysis confirms the accelerated convergence of
our algorithms through the lens of continuous-time flows.
# 2. Related Work
Given a bound $D$ for $\mathrm{diam}(N)$, (Liu et al., 2017) proposed accelerated methods for both g-convex and g-strongly convex cases. Their algorithms have the same iteration complexities as NAG but require a solution to a nonlinear equation at every iteration, which could be as difficult as solving the original problem in general. Given $K_{\min}$, $K_{\max}$, and $d(x_0, x^*)$, (Zhang & Sra, 2018) proposed a computationally tractable algorithm for the g-strongly convex case and showed that their algorithm achieves the iteration complexity $O\left(\frac{10}{9}\sqrt{\frac{L}{\mu}}\log \frac{L}{\epsilon}\right)$ when $d(x_0, x^*) \leq \frac{1}{20\sqrt{\max\{K_{\max}, - K_{\min}\}}}\left(\frac{\mu}{L}\right)^{\frac{3}{4}}$. Given only $K_{\min}$ and $K_{\max}$, (Ahn & Sra, 2020) considered the g-strongly convex case. Although full acceleration is not guaranteed, the authors proved that their algorithm eventually achieves acceleration in later stages. Given $K_{\min}$, $K_{\max}$, and $D$, (Alimisis et al., 2021) proposed a momentum method for the g-convex case. They showed that their algorithm achieves acceleration in early stages. Although this result is not as strong as full acceleration, their theoretical guarantee is meaningful in practical situations. (Martínez-Rubio, 2022) focused on manifolds with constant sectional curvature, namely subsets of the hyperbolic space or the sphere. Their algorithm is accelerated, but it is not straightforward to generalize their argument to arbitrary manifolds. Beyond the g-convex setting, (Criscitiello & Boumal, 2020) studied accelerated methods for nonconvex problems. (Lezcano-Casado, 2020) studied adaptive and momentum-based methods using the trivialization framework in (Lezcano-Casado, 2019). Further works on accelerated Riemannian optimization can be found in (Criscitiello & Boumal, 2021, Section 1.6).
Another line of research takes the perspective of continuous-time dynamics as in the Euclidean counterpart (Su et al., 2014; Wibisono et al., 2016; Wilson et al., 2021). For both g-convex and g-strongly convex cases, (Alimisis et al., 2020) proposed ODEs that can model accelerated methods on Riemannian manifolds given $K_{\mathrm{min}}$ and $D$ . (Duruisseaux & Leok, 2021b) extended this result and developed a variational framework. Time-discretization methods for such ODEs on Riemannian manifolds have recently been of considerable interest as well (Duruisseaux & Leok, 2021a; Franca et al., 2021; Duruisseaux & Leok, 2022).
While many positive results have been obtained for accelerated Riemannian optimization, there are also a few negative results (Hamilton & Moitra, 2021; Criscitiello & Boumal, 2021) showing that achieving full acceleration for Riemannian optimization is impossible in general. Because their results involve a growing diameter of the domain, and most of the positive results assume that the diameter of the domain is bounded by a constant $D$, the negative results are not contradictory but complementary to the positive results. This indicates that the assumption of a domain bounded by a constant is necessary for achieving full acceleration. See Section 8 for a detailed discussion.
# 3. Preliminaries
# 3.1. Background
A Riemannian manifold $(M,g)$ is a real smooth manifold equipped with a Riemannian metric $g$ which assigns to each $p\in M$ a positive-definite inner product $g_{p}(v,w) = \langle v,w\rangle_{p} = \langle v,w\rangle$ on the tangent space $T_{p}M$ . The inner product $g_{p}$ induces the norm $\| v\| _p = \| v\|$ defined as $\sqrt{\langle v,v\rangle_p}$ on $T_{p}M$ . The tangent bundle $TM$ of $M$ is defined as $TM = \sqcup_{p\in M}T_{p}M$ . For $p,q\in M$ , the geodesic distance $d(p,q)$ between $p$ and $q$ is the infimum of the lengths of all piecewise continuously differentiable curves from $p$ to $q$ . For a nonempty set $N\subseteq M$ , the diameter $\mathrm{diam}(N)$ of $N$ is defined as $\mathrm{diam}(N) = \sup_{p,q\in N}d(p,q)$ .
For a smooth function $f: M \to \mathbb{R}$ , the Riemannian gradient $\operatorname{grad} f(x)$ of $f$ at $x$ is defined as the tangent vector in $T_xM$ satisfying
$$
\langle \operatorname{grad} f(x), v \rangle = df(x)[v],
$$
where $df(x): T_xM \to \mathbb{R}$ is the differential of $f$ at $x$ . Let $I \coloneqq [0,1]$ . A geodesic $\gamma: I \to M$ is a smooth curve of locally minimum length with zero acceleration. In particular, straight lines in $\mathbb{R}^n$ are geodesics. The exponential map at $p$ is defined as, for $v \in T_pM$ ,
$$
\exp_{p}(v) = \gamma_{v}(1),
$$
where $\gamma_v: I \to M$ is the geodesic satisfying $\gamma_v(0) = p$ and $\gamma_v'(0) = v$ . In general, $\exp_p$ is only defined on a neighborhood of $0$ in $T_pM$ . It is known that $\exp_p$ is a diffeomorphism on some neighborhood $U$ of $0$ . Thus, its inverse is well defined and is called the logarithm map $\log_p: \exp_p(U) \to T_pM$ . For a smooth curve $\gamma: I \to M$ and $t_0, t_1 \in I$ , the parallel transport $\Gamma(\gamma)_{t_0}^{t_1}: T_{\gamma(t_0)}M \to T_{\gamma(t_1)}M$ is a way of transporting vectors from $T_{\gamma(t_0)}M$ to $T_{\gamma(t_1)}M$ along $\gamma$ . When $\gamma$ is a geodesic, we let $\Gamma_p^q: T_pM \to T_qM$ denote the parallel transport from $T_pM$ to $T_qM$ .
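As a concrete instance, on the unit sphere $S^{n-1} \subset \mathbb{R}^n$ all three maps have well-known closed forms. The following sketch is our own illustration using those standard formulas (it is not code from the paper):

```python
import numpy as np

def exp_map(p, v):
    """Exponential map on the unit sphere: follow the geodesic from p
    with initial velocity v in T_p S^{n-1} for unit time."""
    t = np.linalg.norm(v)
    if t < 1e-12:
        return p.copy()
    return np.cos(t) * p + np.sin(t) * (v / t)

def log_map(p, q):
    """Logarithm map on the unit sphere: the tangent vector at p whose
    geodesic reaches q at time 1 (assumes q is not antipodal to p)."""
    theta = np.arccos(np.clip(p @ q, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros_like(p)
    w = q - np.cos(theta) * p          # component of q orthogonal to p
    return theta * (w / np.linalg.norm(w))

def transport(p, q, w):
    """Parallel transport of w in T_p S^{n-1} to T_q S^{n-1} along the
    minimizing geodesic from p to q."""
    v = log_map(p, q)
    theta = np.linalg.norm(v)
    if theta < 1e-12:
        return w.copy()
    u = v / theta                      # unit tangent direction at p
    a = u @ w                          # component of w along the geodesic
    # The along-geodesic component rotates; the orthogonal part is unchanged.
    return w - a * u + a * (np.cos(theta) * u - np.sin(theta) * p)
```

For example, with $p = e_1$ and $q = e_2$, one can check that $\exp_p(\log_p(q)) = q$ and that parallel transport preserves norms, as it must for an isometry between tangent spaces.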
A subset $N$ of $M$ is said to be geodesically uniquely convex if for every $x, y \in N$ , there exists a unique geodesic $\gamma : [0,1] \to M$ such that $\gamma(0) = x$ , $\gamma(1) = y$ , and $\gamma(t) \in N$ for all $t \in [0,1]$ . Let $N$ be a geodesically uniquely
convex subset of $M$ . A function $f: N \to \mathbb{R}$ is said to be geodesically convex if $f \circ \gamma: [0,1] \to \mathbb{R}$ is convex for each geodesic $\gamma: [0,1] \to M$ whose image is in $N$ . When $f$ is geodesically convex, we have
$$
f(y) \geq f(x) + \langle \operatorname{grad} f(x), \log_{x}(y) \rangle.
$$
Let $N$ be an open geodesically uniquely convex subset of $M$ , and $f: N \to \mathbb{R}$ be a continuously differentiable function. We say that $f$ is geodesically $\mu$ -strongly convex for $\mu > 0$ if
$$
f(y) \geq f(x) + \langle \operatorname{grad} f(x), \log_{x}(y) \rangle + \frac{\mu}{2} \left\| \log_{x}(y) \right\|^{2}
$$
for all $x, y \in N$ . We say that $f$ is geodesically $L$ -smooth if
$$
f(y) \leq f(x) + \langle \operatorname{grad} f(x), \log_{x}(y) \rangle + \frac{L}{2} \left\| \log_{x}(y) \right\|^{2}
$$
for all $x, y \in N$ . For additional notions from Riemannian geometry that are used in our analysis, we refer the reader to Appendix A as well as the textbooks (Lee, 2018; Petersen, 2016; Boumal, 2020).
# 3.2. Assumptions
In this subsection, we present the assumptions that are imposed throughout the paper.
Assumption 3.1. The domain $N$ is an open geodesically uniquely convex subset of $M$ . The diameter of the domain is bounded as $\mathrm{diam}(N) \leq D < \infty$ . The sectional curvature inside $N$ is bounded below by $K_{\min}$ and bounded above by $K_{\max}$ . If $K_{\max} > 0$ , we further assume that $D < \frac{\pi}{\sqrt{K_{\max}}}$ .
Assumption 3.1 implies that the exponential map $\exp_x$ is a diffeomorphism for any $x\in N$ (Alimisis et al., 2021).
Assumption 3.2. The objective function $f: N \to \mathbb{R}$ is continuously differentiable and geodesically $L$ -smooth. Moreover, $f$ is bounded below, and has minimizers, all of which lie in $N$ . A global minimizer is denoted by $x^{*}$ .
Assumption 3.3. All the iterates $x_{k}$ and $y_{k}$ are well defined on the manifold $M$ and remain in $N$ .
Although Assumption 3.3 is common in the literature (Zhang & Sra, 2018; Ahn & Sra, 2020; Alimisis et al., 2021), it is desirable to relax or remove it. We leave the extension as a future research topic.
To implement our algorithms, we also assume that we can compute (or approximate) exponential maps, logarithm maps, and parallel transport. For many manifolds arising in practical applications, these maps are implemented in libraries such as (Townsend et al., 2016).

Figure 1. Illustration of the maps $v_A \mapsto \Gamma_{p_A}^{p_B} \left( v_A - \log_{p_A}(p_B) \right)$ and $v_A \mapsto \log_{p_B} \left( \exp_{p_A}(v_A) \right)$ .
We define the constants $\zeta \geq 1$ and $\delta \leq 1$ as
$$
\zeta = \begin{cases} \sqrt{-K_{\min}} D \coth\left(\sqrt{-K_{\min}} D\right), & \text{if } K_{\min} < 0 \\ 1, & \text{if } K_{\min} \geq 0 \end{cases}
$$
$$
\delta = \begin{cases} 1, & \text{if } K_{\max} \leq 0 \\ \sqrt{K_{\max}} D \cot\left(\sqrt{K_{\max}} D\right), & \text{if } K_{\max} > 0. \end{cases}
$$
These constants naturally arise from the Rauch comparison theorem (Lee, 2018, Theorem 11.7) (Petersen, 2016, Theorem 6.4.3), and many known methods on Riemannian manifolds have a convergence rate depending on some of these constants (Alimisis et al., 2020; 2021; Zhang & Sra, 2016). Note that we can set $\zeta = \delta = 1$ when $M = \mathbb{R}^n$ .
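A small helper (ours, not from the paper) evaluates these constants directly from the bounds $K_{\min}$, $K_{\max}$, and $D$:

```python
import numpy as np

def zeta(K_min, D):
    """Distortion constant zeta >= 1 (equals 1 when K_min >= 0)."""
    if K_min < 0:
        s = np.sqrt(-K_min) * D
        return s / np.tanh(s)          # s * coth(s)
    return 1.0

def delta(K_max, D):
    """Distortion constant delta <= 1 (equals 1 when K_max <= 0);
    requires D < pi / sqrt(K_max) when K_max > 0, as in Assumption 3.1."""
    if K_max > 0:
        s = np.sqrt(K_max) * D
        return s / np.tan(s)           # s * cot(s)
    return 1.0
```

In the Euclidean case ($K_{\min} = K_{\max} = 0$) both functions return $1$, recovering $\zeta = \delta = 1$ for $M = \mathbb{R}^n$.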
# 4. Algorithms
In this section, we first generalize Nesterov's scheme to the Riemannian setting and then design specific algorithms for both g-convex and g-strongly convex cases. In (Ahn & Sra, 2020; Zhang & Sra, 2018), NAG is generalized to a three-step algorithm on a Riemannian manifold as
$$
y_{k} = \exp_{x_{k}}\left(\tau_{k} \log_{x_{k}}(z_{k})\right)
$$
$$
x_{k+1} = \exp_{y_{k}}\left(-\alpha_{k} \operatorname{grad} f(y_{k})\right) \tag{2}
$$
$$
z_{k+1} = \exp_{y_{k}}\left(\beta_{k} \log_{y_{k}}(z_{k}) - \gamma_{k} \operatorname{grad} f(y_{k})\right).
$$
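To see what this template computes, here is a sketch (our own illustration) on $M = \mathbb{R}^n$, where $\exp_x(v) = x + v$ and $\log_x(y) = y - x$, so the scheme reduces to Euclidean NAG. The constant parameter choices below are one standard Euclidean setting for $\mu$-strongly convex objectives; they are not the parameters used by RNAG:

```python
import numpy as np

def nag_sc(grad_f, x0, L, mu, num_iters):
    """Three-step scheme (2) on M = R^n. With exp_x(v) = x + v and
    log_x(y) = y - x it reduces to Euclidean NAG; the constant
    parameters are a common Euclidean choice for mu-strongly convex,
    L-smooth objectives (illustrative, not RNAG's parameters)."""
    q = np.sqrt(mu / L)
    tau, alpha, beta, gamma = q / (1 + q), 1.0 / L, 1 - q, q / mu
    x = x0.astype(float)
    z = x0.astype(float)
    for _ in range(num_iters):
        y = x + tau * (z - x)               # y_k = exp_{x_k}(tau_k log_{x_k}(z_k))
        g = grad_f(y)
        x = y - alpha * g                   # x_{k+1} = exp_{y_k}(-alpha_k grad f(y_k))
        z = y + beta * (z - y) - gamma * g  # z_{k+1} = exp_{y_k}(beta_k log_{y_k}(z_k) - gamma_k grad f(y_k))
    return x

# Sanity check on a strongly convex quadratic with mu = 1, L = 100.
A = np.diag([1.0, 10.0, 100.0])
x_final = nag_sc(lambda v: A @ v, np.array([1.0, 1.0, 1.0]),
                 L=100.0, mu=1.0, num_iters=300)
```

Running this on $f(x) = \frac{1}{2}x^{\top} A x$ drives the iterates to the minimizer $x^{*} = 0$.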
However, it is more natural to define the iterates $z_{k}$ in the tangent bundle $TM$ , instead of in $M$ .