# Accelerated Gradient Methods for Geodesically Convex Optimization: Tractable Algorithms and Convergence Analysis

Jungbin Kim<sup>1</sup> Insoon Yang<sup>1</sup>

# Abstract

We propose computationally tractable accelerated first-order methods for Riemannian optimization, extending the Nesterov accelerated gradient (NAG) method. For both geodesically convex and geodesically strongly convex objective functions, our algorithms are shown to have the same iteration complexities as those for the NAG method on Euclidean spaces, under only standard assumptions. To the best of our knowledge, the proposed scheme is the first fully accelerated method for geodesically convex optimization problems. Our convergence analysis makes use of novel metric distortion lemmas as well as carefully designed potential functions. A connection with the continuous-time dynamics for modeling Riemannian acceleration in (Alimisis et al., 2020) is also identified by letting the stepsize tend to zero. We validate our theoretical results through numerical experiments.

# 1. Introduction

We consider Riemannian optimization problems of the form

$$
\min  _ {x \in N \subseteq M} f (x), \tag {1}
$$

where  $M$  is a Riemannian manifold,  $N$  is an open geodesically uniquely convex subset of  $M$ , and  $f: N \to \mathbb{R}$  is a continuously differentiable geodesically convex function. Geodesically convex optimization is the Riemannian version of convex optimization and has salient features such as every local minimum being a global minimum. More interestingly, some (constrained) nonconvex optimization problems defined in the Euclidean space can be considered geodesically convex optimization problems on appropriate

$^{1}$ Department of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea. Correspondence to: Insoon Yang <insoonyang@snu.ac.kr>.

Proceedings of the  $39^{th}$  International Conference on Machine Learning, Baltimore, Maryland, USA, PMLR 162, 2022. Copyright 2022 by the author(s).

Riemannian manifolds (Vishnoi, 2018, Section 1). Geodesically convex optimization has a wide range of applications, including covariance estimation (Wiesel, 2012), Gaussian mixture models (Hosseini & Sra, 2015; 2020), matrix square root computation (Sra, 2015), metric learning (Zadeh et al., 2016), and optimistic likelihood calculation (Nguyen et al., 2019). See (Zhang & Sra, 2016, Section 1.1) for more examples.

The iteration complexity theory for first-order algorithms is well known when  $M = \mathbb{R}^n$ . Given an initial point  $x_0$ , gradient descent (GD) updates the iterates as

$$
x _ {k + 1} = x _ {k} - \gamma_ {k} \operatorname {g r a d} f (x _ {k}). \tag {GD}
$$

For a convex and  $L$ -smooth objective function  $f$ , GD with  $\gamma_k = \frac{1}{L}$  finds an  $\epsilon$ -approximate solution, i.e.,  $f(x_k) - f(x^*) \leq \epsilon$ , in  $O\left(\frac{L}{\epsilon}\right)$  iterations. For a  $\mu$ -strongly convex and  $L$ -smooth objective function  $f$ , GD with  $\gamma_k = \frac{1}{L}$  finds an  $\epsilon$ -approximate solution in  $O\left(\frac{L}{\mu} \log \frac{L}{\epsilon}\right)$  iterations. A major breakthrough in first-order algorithms is the Nesterov accelerated gradient (NAG) method that achieves a faster convergence rate than GD (Nesterov, 1983). Given an initial point  $x_0 = z_0$ , the NAG scheme updates the iterates as

$$
y _ {k} = x _ {k} + \tau_ {k} \left(z _ {k} - x _ {k}\right)
$$

$$
x _ {k + 1} = y _ {k} - \alpha_ {k} \operatorname {g r a d} f \left(y _ {k}\right) \tag {NAG}
$$

$$
z _ {k + 1} = y _ {k} + \beta_ {k} (z _ {k} - y _ {k}) - \gamma_ {k} \operatorname {g r a d} f (y _ {k}).
$$

For a convex and  $L$ -smooth function  $f$ , NAG with  $\tau_{k} = \frac{2}{k + 2}$ ,  $\alpha_{k} = \frac{1}{L}$ ,  $\beta_{k} = 1$ ,  $\gamma_{k} = \frac{k + 2}{2L}$  (NAG-C) finds an  $\epsilon$ -approximate solution in  $O\left(\sqrt{\frac{L}{\epsilon}}\right)$  iterations (Tseng, 2008). For a  $\mu$ -strongly convex and  $L$ -smooth objective function  $f$ , NAG with  $\tau_{k} = \frac{\sqrt{\mu / L}}{1 + \sqrt{\mu / L}}$ ,  $\alpha_{k} = \frac{1}{L}$ ,  $\beta_{k} = 1 - \sqrt{\frac{\mu}{L}}$ ,  $\gamma_{k} = \sqrt{\frac{\mu}{L}}\frac{1}{\mu}$  (NAG-SC) finds an  $\epsilon$ -approximate solution in  $O\left(\sqrt{\frac{L}{\mu}}\log \frac{L}{\epsilon}\right)$  iterations (Nesterov, 2018).
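The three-step NAG recursion above, with the NAG-C parameter choice  $\tau_{k} = \frac{2}{k+2}$ ,  $\alpha_{k} = \frac{1}{L}$ ,  $\beta_{k} = 1$ ,  $\gamma_{k} = \frac{k+2}{2L}$ , fits in a few lines. The following is a minimal NumPy sketch; the ill-conditioned quadratic test problem is our own illustration, not from the paper:

```python
import numpy as np

def nag_c(grad_f, x0, L, num_iters):
    """Three-step NAG with the NAG-C parameters:
    tau_k = 2/(k+2), alpha_k = 1/L, beta_k = 1, gamma_k = (k+2)/(2L)."""
    x, z = x0.astype(float), x0.astype(float)
    for k in range(num_iters):
        tau, alpha, gamma = 2.0 / (k + 2), 1.0 / L, (k + 2) / (2.0 * L)
        y = x + tau * (z - x)   # momentum (coupling) step
        g = grad_f(y)
        x = y - alpha * g       # gradient step
        z = z - gamma * g       # with beta_k = 1: z_{k+1} = z_k - gamma_k grad f(y_k)
    return x

# Illustrative ill-conditioned quadratic f(x) = 0.5 x^T A x (minimum value 0).
A = np.diag([1.0, 100.0])       # L = 100
grad_f = lambda x: A @ x
x = nag_c(grad_f, np.array([1.0, 1.0]), L=100.0, num_iters=300)
```

With  $\beta_k = 1$ , the  $z$ -update collapses to  $z_{k+1} = z_k - \gamma_k \operatorname{grad} f(y_k)$ , which is why the sketch never forms  $y_k + \beta_k(z_k - y_k)$  explicitly.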

Considering the problem (1) for any Riemannian manifold  $M$ , (Zhang & Sra, 2016) successfully generalizes the complexity analysis of GD to Riemannian gradient descent (RGD),

$$
x _ {k + 1} = \exp_ {x _ {k}} \left(- \gamma_ {k} \operatorname {g r a d} f (x _ {k})\right), \tag {RGD}
$$

Table 1. Iteration complexities (required number of iterations to obtain an  $\epsilon$ -approximate solution) for various accelerated methods on Riemannian manifolds. The notation  $\tilde{O}(\cdot)$  and  $O^{*}(\cdot)$  omits  $\log(L/\epsilon)$  and  $\log(L/\mu)$  factors, respectively (Martínez-Rubio, 2022). For our algorithms, the constant  $\xi$  is defined as  $\xi = \zeta + 3(\zeta - \delta)$ , where  $\zeta$  and  $\delta$  are defined in Section 3.2. For the iteration complexity of RAGD (Zhang & Sra, 2018),  $\frac{10}{9}$  is not regarded as a constant because this constant arises from their nonstandard assumption  $d(x_{0}, x^{*}) \leq \frac{1}{20\sqrt{\max\{K_{\max}, -K_{\min}\}}}\left(\frac{\mu}{L}\right)^{\frac{3}{4}}$ .  

<table><tr><td>Algorithm</td><td>Objective function</td><td>Iteration complexity</td><td>Remark</td></tr><tr><td>Algorithm 1 (Liu et al., 2017)</td><td>g-strongly convex</td><td>O(√L/μ log (L/ε))</td><td>computationally intractable</td></tr><tr><td>Algorithm 2 (Liu et al., 2017)</td><td>g-convex</td><td>O(√L/ε)</td><td>computationally intractable</td></tr><tr><td>RAGD (Zhang &amp; Sra, 2018)</td><td>g-strongly convex</td><td>O((10/9)√L/μ log (L/ε))</td><td>nonstandard assumption</td></tr><tr><td>Algorithm 1 (Ahn &amp; Sra, 2020)</td><td>g-strongly convex</td><td>O*(L/μ + √L/μ log (μ/ε))</td><td>eventually accelerated</td></tr><tr><td>RAGDsDR (Alimisis et al., 2021)</td><td>g-convex</td><td>O(√ζL/ε)</td><td>only in early stages</td></tr><tr><td>(Martínez-Rubio, 2022)</td><td>g-convex</td><td>O(√L/ε)</td><td>only for constant curvature</td></tr><tr><td>(Martínez-Rubio, 2022)</td><td>g-strongly convex</td><td>O*(√L/μ log (μ/ε))</td><td>only for constant curvature</td></tr><tr><td>RNAG-C (ours)</td><td>g-convex</td><td>O(ξ√L/ε)</td><td></td></tr><tr><td>RNAG-SC (ours)</td><td>g-strongly convex</td><td>O(ξ√L/μ log (L/ε))</td><td></td></tr></table>

using a lower bound  $K_{\mathrm{min}}$  of the sectional curvature and an upper bound  $D$  of  $\mathrm{diam}(N)$ . For completeness, we provide a potential-function analysis in Appendix D to show that RGD with a fixed stepsize has the same iteration complexity as GD.
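As a concrete instance, RGD admits a closed-form implementation on the unit sphere, where the exponential map follows great circles. The sketch below uses a hypothetical objective  $f(x) = -\langle a, x\rangle$  (g-convex on the hemisphere around  $a$ , with minimizer  $a$ ), chosen only for illustration:

```python
import numpy as np

def sphere_exp(p, v):
    """Exponential map on the unit sphere: follow the great circle from p
    with initial velocity v in T_pM for unit time."""
    t = np.linalg.norm(v)
    return p if t < 1e-12 else np.cos(t) * p + np.sin(t) * v / t

def rgd(riem_grad, x0, step, num_iters):
    """RGD: x_{k+1} = exp_{x_k}(-step * grad f(x_k))."""
    x = x0
    for _ in range(num_iters):
        x = sphere_exp(x, -step * riem_grad(x))
    return x

# Hypothetical objective f(x) = -<a, x>; its Riemannian gradient is the
# projection of the Euclidean gradient -a onto the tangent space T_xM.
a = np.array([0.0, 0.0, 1.0])
riem_grad = lambda x: -(a - np.dot(a, x) * x)
x = rgd(riem_grad, np.array([1.0, 0.0, 0.0]), step=0.5, num_iters=100)
```

Because the Riemannian gradient is tangent to the sphere, each update stays exactly on the manifold (up to floating-point error), with no retraction or re-normalization needed.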

However, it is still unclear whether a reasonable generalization of NAG to the Riemannian setting is possible with strong theoretical guarantees. When studying the global complexity of Riemannian optimization algorithms, it is common to assume that the sectional curvature of  $M$  is bounded below by  $K_{\mathrm{min}}$  and bounded above by  $K_{\mathrm{max}}$  to prevent the manifold from being overly curved. Unfortunately, (Criscitiello & Boumal, 2021; Hamilton & Moitra, 2021) show that even when sectional curvature is bounded, achieving global acceleration is impossible in general. Thus, one might need another common assumption, an upper bound  $D$  of  $\mathrm{diam}(N)$ . This motivates our central question:

Can we design computationally tractable accelerated first-order methods on Riemannian manifolds when the sectional curvature and the diameter of the domain are bounded?

In the literature, there are some partial answers but no full answer to this question (see Table 1 and Section 2). In this paper, we provide a complete answer via new first-order algorithms, which we call the Riemannian Nesterov accelerated gradient (RNAG) method. We show that acceleration is possible on Riemannian manifolds for both geodesically convex (g-convex) and geodesically strongly convex (g-strongly convex) cases whenever the bounds  $K_{\mathrm{min}}$ ,  $K_{\mathrm{max}}$ , and  $D$  are available. The main contributions of this work can be summarized as follows:

- Generalizing Nesterov's scheme, we propose RNAG, a first-order method for Riemannian optimization. We provide two specific algorithms: RNAG-C (Algorithm 1) for minimizing g-convex functions and RNAG-SC (Algorithm 2) for minimizing g-strongly convex functions. Both algorithms call one gradient oracle per iteration. Our algorithms are computationally tractable in the sense that they only involve exponential maps, logarithm maps, parallel transport, and operations in tangent spaces. In particular, RNAG-C can be interpreted as a variant of NAG-C with high friction in (Su et al., 2014, Section 4.1) (see Appendix B).  
- Given the bounds  $K_{\mathrm{min}}$ ,  $K_{\mathrm{max}}$ , and  $D$ , we prove that RNAG-C has an  $O\left(\sqrt{\frac{L}{\epsilon}}\right)$  iteration complexity (Corollary 5.5), and that RNAG-SC has an  $O\left(\sqrt{\frac{L}{\mu}}\log \frac{L}{\epsilon}\right)$  iteration complexity (Corollary 5.7). The crucial steps of the proofs are constructing potential functions as (4) and handling metric distortion using Lemma 5.2 and Lemma 5.3. To the best of our knowledge, this is the first proof for full acceleration in the g-convex case.  
- We identify a connection between our algorithms and the ODEs for modeling Riemannian acceleration in (Alimisis et al., 2020) by letting the stepsize tend to zero. This analysis confirms the accelerated convergence of our algorithms through the lens of continuous-time flows.

# 2. Related Work

Given a bound  $D$  for  $\mathrm{diam}(N)$ , (Liu et al., 2017) proposed accelerated methods for both g-convex and g-strongly convex cases. Their algorithms have the same iteration complexities as NAG but require a solution to a nonlinear equation at every iteration, which could be as difficult as solving the original problem in general. Given  $K_{\min}$ ,  $K_{\max}$ , and  $d(x_0, x^*)$ , (Zhang & Sra, 2018) proposed a computationally tractable algorithm for the g-strongly convex case and showed that their algorithm achieves the iteration complexity  $O\left(\frac{10}{9}\sqrt{\frac{L}{\mu}}\log \frac{L}{\epsilon}\right)$  when  $d(x_0, x^*) \leq \frac{1}{20\sqrt{\max\{K_{\max}, - K_{\min}\}}}\left(\frac{\mu}{L}\right)^{\frac{3}{4}}$ . Given only  $K_{\min}$  and  $K_{\max}$ , (Ahn & Sra, 2020) considered the g-strongly convex case. Although full acceleration is not guaranteed, the authors proved that their algorithm eventually achieves acceleration in later stages. Given  $K_{\min}$ ,  $K_{\max}$ , and  $D$ , (Alimisis et al., 2021) proposed a momentum method for the g-convex case. They showed that their algorithm achieves acceleration in early stages. Although this result is not as strong as full acceleration, their theoretical guarantee is meaningful in practical situations. (Martínez-Rubio, 2022) focused on manifolds with constant sectional curvatures, namely a subset of the hyperbolic space or sphere. Their algorithm is accelerated, but it is not straightforward to generalize their argument to any manifolds. Beyond the g-convex setting, (Criscitiello & Boumal, 2020) studied accelerated methods for nonconvex problems. (Lezcano-Casado, 2020) studied adaptive and momentum-based methods using the trivialization framework in (Lezcano-Casado, 2019). Further works on accelerated Riemannian optimization can be found in (Criscitiello & Boumal, 2021, Section 1.6).

Another line of research takes the perspective of continuous-time dynamics as in the Euclidean counterpart (Su et al., 2014; Wibisono et al., 2016; Wilson et al., 2021). For both g-convex and g-strongly convex cases, (Alimisis et al., 2020) proposed ODEs that can model accelerated methods on Riemannian manifolds given  $K_{\mathrm{min}}$  and  $D$ . (Duruisseaux & Leok, 2021b) extended this result and developed a variational framework. Time-discretization methods for such ODEs on Riemannian manifolds have recently been of considerable interest as well (Duruisseaux & Leok, 2021a; Franca et al., 2021; Duruisseaux & Leok, 2022).

While many positive results have been obtained for accelerated Riemannian optimization, there are also negative results (Hamilton & Moitra, 2021; Criscitiello & Boumal, 2021) showing that achieving full acceleration for Riemannian optimization is impossible in general. Because these negative results involve a growing domain diameter, while most of the positive results assume that the diameter of the domain is bounded by a constant  $D$ , the negative results are not contradictory but complementary to the positive ones. This indicates that the assumption of a domain bounded by a constant is necessary for achieving full acceleration. See Section 8 for a detailed discussion.

# 3. Preliminaries

# 3.1. Background

A Riemannian manifold  $(M,g)$  is a real smooth manifold equipped with a Riemannian metric  $g$  that assigns to each  $p\in M$  a positive-definite inner product  $g_{p}(v,w) = \langle v,w\rangle_{p} = \langle v,w\rangle$  on the tangent space  $T_{p}M$ . The inner product  $g_{p}$  induces the norm  $\| v\| _p = \| v\|$  defined as  $\sqrt{\langle v,v\rangle_p}$  on  $T_{p}M$ . The tangent bundle  $TM$  of  $M$  is defined as  $TM = \sqcup_{p\in M}T_{p}M$ . For  $p,q\in M$ , the geodesic distance  $d(p,q)$  between  $p$  and  $q$  is the infimum of the lengths of all piecewise continuously differentiable curves from  $p$  to  $q$ . For a nonempty set  $N\subseteq M$ , the diameter  $\mathrm{diam}(N)$  of  $N$  is defined as  $\mathrm{diam}(N) = \sup_{p,q\in N}d(p,q)$ .

For a smooth function  $f: M \to \mathbb{R}$ , the Riemannian gradient  $\operatorname{grad} f(x)$  of  $f$  at  $x$  is defined as the tangent vector in  $T_xM$  satisfying

$$
\langle \operatorname {g r a d} f (x), v \rangle = d f (x) [ v ],
$$

where  $df(x): T_xM \to \mathbb{R}$  is the differential of  $f$  at  $x$ . Let  $I \coloneqq [0,1]$ . A geodesic  $\gamma: I \to M$  is a smooth curve of locally minimum length with zero acceleration. In particular, straight lines in  $\mathbb{R}^n$  are geodesics. The exponential map at  $p$  is defined as, for  $v \in T_pM$ ,

$$
\exp_ {p} (v) = \gamma_ {v} (1),
$$

where  $\gamma_v: I \to M$  is the geodesic satisfying  $\gamma_v(0) = p$  and  $\gamma_v'(0) = v$ . In general,  $\exp_p$  is only defined on a neighborhood of 0 in  $T_pM$ . It is known that  $\exp_p$  is a diffeomorphism in some neighborhood  $U$  of 0. Thus, its inverse is well defined and is called the logarithm map  $\log_p: \exp_p(U) \to T_pM$ . For a smooth curve  $\gamma: I \to M$  and  $t_0, t_1 \in I$ , the parallel transport  $\Gamma(\gamma)_{t_0}^{t_1}: T_{\gamma(t_0)}M \to T_{\gamma(t_1)}M$  is a way of transporting vectors from  $T_{\gamma(t_0)}M$  to  $T_{\gamma(t_1)}M$  along  $\gamma$ . When  $\gamma$  is a geodesic, we let  $\Gamma_p^q: T_pM \to T_qM$  denote the parallel transport from  $T_pM$  to  $T_qM$ .
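On the unit sphere, all three primitives (exponential map, logarithm map, parallel transport along a geodesic) have closed forms, which makes the definitions above concrete. A NumPy sketch follows; the round-trip check at the end is our own illustration:

```python
import numpy as np

def sphere_exp(p, v):
    """exp_p(v): endpoint of the geodesic from p with initial velocity v."""
    t = np.linalg.norm(v)
    return p if t < 1e-12 else np.cos(t) * p + np.sin(t) * v / t

def sphere_log(p, q):
    """log_p(q): inverse of exp_p (well defined for q not antipodal to p)."""
    w = q - np.dot(p, q) * p                       # project q onto T_pM
    nw = np.linalg.norm(w)
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    return np.zeros_like(p) if nw < 1e-12 else theta * w / nw

def sphere_transport(p, q, w):
    """Parallel transport of w in T_pM to T_qM along the minimizing geodesic:
    the component along the geodesic rotates with it, the orthogonal part
    is carried over unchanged."""
    u = sphere_log(p, q)
    theta = np.linalg.norm(u)
    if theta < 1e-12:
        return w
    e = u / theta
    return w + np.dot(w, e) * (-np.sin(theta) * p + (np.cos(theta) - 1.0) * e)

# Round trip: log_p(exp_p(v)) recovers v; transport preserves norm and tangency.
p = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 0.3, 0.0])
q = sphere_exp(p, v)
v_back = sphere_log(p, q)
w = sphere_transport(p, q, np.array([0.0, 1.0, 0.0]))
```

Transport being an isometry ( $\|\Gamma_p^q w\| = \|w\|$ ) is exactly what lets the later analysis compare squared norms across tangent spaces.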

A subset  $N$  of  $M$  is said to be geodesically uniquely convex if for every  $x, y \in N$ , there exists a unique geodesic  $\gamma : [0,1] \to M$  such that  $\gamma(0) = x$ ,  $\gamma(1) = y$ , and  $\gamma(t) \in N$  for all  $t \in [0,1]$ . Let  $N$  be a geodesically uniquely convex subset of  $M$ . A function  $f: N \to \mathbb{R}$  is said to be geodesically convex if  $f \circ \gamma: [0,1] \to \mathbb{R}$  is convex for each geodesic  $\gamma: [0,1] \to M$  whose image is in  $N$ . When  $f$  is geodesically convex, we have

$$
f (y) \geq f (x) + \langle \operatorname {g r a d} f (x), \log_ {x} (y) \rangle .
$$

Let  $N$  be an open geodesically uniquely convex subset of  $M$ , and  $f: N \to \mathbb{R}$  be a continuously differentiable function. We say that  $f$  is geodesically  $\mu$ -strongly convex for  $\mu > 0$  if

$$
f (y) \geq f (x) + \langle \operatorname {g r a d} f (x), \log_ {x} (y) \rangle + \frac {\mu}{2} \| \log_ {x} (y) \| ^ {2}
$$

for all  $x, y \in N$ . We say that  $f$  is geodesically  $L$ -smooth if

$$
f (y) \leq f (x) + \langle \operatorname {g r a d} f (x), \log_ {x} (y) \rangle + \frac {L}{2} \left\| \log_ {x} (y) \right\| ^ {2}
$$

for all  $x, y \in N$ . For additional notions from Riemannian geometry that are used in our analysis, we refer the reader to Appendix A as well as the textbooks (Lee, 2018; Petersen, 2016; Boumal, 2020).

# 3.2. Assumptions

In this subsection, we present the assumptions that are imposed throughout the paper.

Assumption 3.1. The domain  $N$  is an open geodesically uniquely convex subset of  $M$ . The diameter of the domain is bounded as  $\mathrm{diam}(N) \leq D < \infty$ . The sectional curvature inside  $N$  is bounded below by  $K_{\min}$  and bounded above by  $K_{\max}$ . If  $K_{\max} > 0$ , we further assume that  $D < \frac{\pi}{\sqrt{K_{\max}}}$ .

Assumption 3.1 implies that the exponential map  $\exp_x$  is a diffeomorphism for any  $x\in N$  (Alimisis et al., 2021).

Assumption 3.2. The objective function  $f: N \to \mathbb{R}$  is continuously differentiable and geodesically  $L$ -smooth. Moreover,  $f$  is bounded below, and has minimizers, all of which lie in  $N$ . A global minimizer is denoted by  $x^{*}$ .

Assumption 3.3. All the iterates  $x_{k}$  and  $y_{k}$  are well-defined on the manifold  $M$  and remain in  $N$ .

Although Assumption 3.3 is common in the literature (Zhang & Sra, 2018; Ahn & Sra, 2020; Alimisis et al., 2021), it is desirable to relax or remove it. We leave the extension as a future research topic.

To implement our algorithms, we also assume that we can compute (or approximate) exponential maps, logarithmic maps, and parallel transport. For many manifolds in practical applications, these maps are implemented in libraries such as (Townsend et al., 2016).

![](images/dad0465d08d04b1bf3d029d744e59c94dd954f68e2bdf0bbc5ff43e23d8e58b9.jpg)  
Figure 1. Illustration of the maps  $v_A \mapsto \Gamma_{p_A}^{p_B} \left( v_A - \log_{p_A}(p_B) \right)$  and  $v_A \mapsto \log_{p_B} \left( \exp_{p_A}(v_A) \right)$ .

We define the constants  $\zeta \geq 1$  and  $\delta \leq 1$  as

$$
\zeta = \begin{cases} \sqrt{-K_{\min}}\, D \coth \left(\sqrt{-K_{\min}}\, D\right), & \text{if } K_{\min} < 0 \\ 1, & \text{if } K_{\min} \geq 0 \end{cases}
$$

$$
\delta = \begin{cases} 1, & \text{if } K_{\max} \leq 0 \\ \sqrt{K_{\max}}\, D \cot \left(\sqrt{K_{\max}}\, D\right), & \text{if } K_{\max} > 0. \end{cases}
$$

These constants naturally arise from the Rauch comparison theorem (Lee, 2018, Theorem 11.7) (Petersen, 2016, Theorem 6.4.3), and many known methods on Riemannian manifolds have a convergence rate depending on some of these constants (Alimisis et al., 2020; 2021; Zhang & Sra, 2016). Note that we can set  $\zeta = \delta = 1$  when  $M = \mathbb{R}^n$ .
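The two definitions translate directly into code. A small sketch follows; the helper names `zeta`, `delta`, and `xi` are ours, with  $\xi = \zeta + 3(\zeta - \delta)$  taken from Table 1:

```python
import math

def zeta(K_min, D):
    """Distortion constant from the curvature lower bound; always >= 1."""
    if K_min >= 0:
        return 1.0
    c = math.sqrt(-K_min) * D
    return c / math.tanh(c)            # c * coth(c)

def delta(K_max, D):
    """Distortion constant from the curvature upper bound; always <= 1.
    Requires D < pi / sqrt(K_max) when K_max > 0 (Assumption 3.1)."""
    if K_max <= 0:
        return 1.0
    c = math.sqrt(K_max) * D
    return c / math.tan(c)             # c * cot(c)

def xi(K_min, K_max, D):
    """The constant xi = zeta + 3*(zeta - delta) appearing in Table 1."""
    z, d = zeta(K_min, D), delta(K_max, D)
    return z + 3.0 * (z - d)
```

For  $M = \mathbb{R}^n$  (zero curvature) this gives  $\zeta = \delta = \xi = 1$ , consistent with recovering the Euclidean NAG rates.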

# 4. Algorithms

In this section, we first generalize Nesterov's scheme to the Riemannian setting and then design specific algorithms for both g-convex and g-strongly convex cases. In (Ahn & Sra, 2020; Zhang & Sra, 2018) NAG is generalized to a three-step algorithm on a Riemannian manifold as

$$
\begin{array}{l}
y_k = \exp_{x_k}\left(\tau_k \log_{x_k}(z_k)\right) \\
x_{k+1} = \exp_{y_k}\left(-\alpha_k \operatorname{grad} f(y_k)\right) \\
z_{k+1} = \exp_{y_k}\left(\beta_k \log_{y_k}(z_k) - \gamma_k \operatorname{grad} f(y_k)\right).
\end{array} \tag{2}
$$

However, it is more natural to define the iterates  $z_{k}$  in the tangent bundle  $TM$ , instead of in  $M$ .<sup>3</sup> Thus, we propose another scheme that involves iterates in  $TM$  without using  $z_{k}$ . To associate tangent vectors in different tangent spaces, we use parallel transport, which is a way to transport vectors from one tangent space to another.

# Algorithm 1 RNAG-C

Input: initial point  $x_0$ , parameters  $\xi$  and  $T > 0$ , step size  $s \leq \frac{1}{L}$

Initialize  $\bar{v}_0 = 0\in T_{x_0}M$

Set  $\lambda_k = \frac{k + 2\xi + T}{2}$

for  $k = 0$  to  $K - 1$  do

$$
y_k = \exp_{x_k}\left(\frac{\xi}{\lambda_k + (\xi - 1)} \bar{v}_k\right)
$$

$$
x_{k+1} = \exp_{y_k}\left(-s \operatorname{grad} f(y_k)\right)
$$

$$
v_k = \Gamma_{x_k}^{y_k}\left(\bar{v}_k - \log_{x_k}(y_k)\right)
$$

$$
\bar{\bar{v}}_{k+1} = v_k - \frac{s\lambda_k}{\xi} \operatorname{grad} f(y_k)
$$

$$
\bar{v}_{k+1} = \Gamma_{y_k}^{x_{k+1}}\left(\bar{\bar{v}}_{k+1} - \log_{y_k}(x_{k+1})\right)
$$

end for

Output:  $x_{K}$

Given  $z_{k} \in M$  in (2), we define the iterates  $v_{k} = \log_{y_{k}}(z_{k})$ ,  $\bar{v}_{k} = \log_{x_{k}}(z_{k})$ , and  $\bar{\bar{v}}_k = \log_{y_{k - 1}}(z_k)$  in the tangent bundle  $TM$ . It is straightforward to check that the following scheme is equivalent to (2):

$$
\begin{array}{l}
y_k = \exp_{x_k}(\tau_k \bar{v}_k) \\
x_{k+1} = \exp_{y_k}(-\alpha_k \operatorname{grad} f(y_k)) \\
v_k = \log_{y_k}\left(\exp_{x_k}(\bar{v}_k)\right) \\
\bar{\bar{v}}_{k+1} = \beta_k v_k - \gamma_k \operatorname{grad} f(y_k) \\
\bar{v}_{k+1} = \log_{x_{k+1}}\left(\exp_{y_k}(\bar{\bar{v}}_{k+1})\right).
\end{array} \tag{3}
$$

In (3), the third and last steps associate tangent vectors in different tangent spaces using the map  $T_{p_A}M \to T_{p_B}M$ ;  $v_A \mapsto \log_{p_B}\left(\exp_{p_A}(v_A)\right)$ . We change these steps by using the map  $v_A \mapsto \Gamma_{p_A}^{p_B}\left(v_A - \log_{p_A}(p_B)\right)$  instead. Technically, this modification allows us to use Lemma 5.3 when handling metric distortion in our convergence analysis. With the change, we obtain the following scheme, which we call RNAG:

$$
\begin{array}{l}
y_k = \exp_{x_k}(\tau_k \bar{v}_k) \\
x_{k+1} = \exp_{y_k}(-\alpha_k \operatorname{grad} f(y_k)) \\
v_k = \Gamma_{x_k}^{y_k}\left(\bar{v}_k - \log_{x_k}(y_k)\right) \\
\bar{\bar{v}}_{k+1} = \beta_k v_k - \gamma_k \operatorname{grad} f(y_k) \\
\bar{v}_{k+1} = \Gamma_{y_k}^{x_{k+1}}\left(\bar{\bar{v}}_{k+1} - \log_{y_k}(x_{k+1})\right).
\end{array} \tag{RNAG}
$$

Because RNAG only involves exponential maps, logarithm maps, parallel transport, and operations in tangent spaces, this scheme is computationally tractable, unlike the scheme in (Liu et al., 2017), which involves a nonlinear operator. Note that RNAG is different from the scheme (2) because the maps  $v_{A} \mapsto \log_{p_{B}}\left(\exp_{p_{A}}(v_{A})\right)$  and  $v_{A} \mapsto \Gamma_{p_{A}}^{p_{B}}\left(v_{A} - \log_{p_{A}}(p_{B})\right)$  are not equivalent in general (see Figure 1).

By carefully choosing the parameters  $\tau_{k},\alpha_{k},\beta_{k}$  and  $\gamma_{k}$ , we finally obtain two algorithms, RNAG-C (Algorithm 1) for

# Algorithm 2 RNAG-SC

Input: initial point  $x_0$ , parameter  $\xi$ , step size  $s \leq \frac{1}{L}$

Initialize  $\bar{v}_0 = 0\in T_{x_0}M$

Set  $q = \mu s$

for  $k = 0$  to  $K - 1$  do

$$
\begin{array}{l} y _ {k} = \exp_ {x _ {k}} \left(\frac {\sqrt {\xi q}}{1 + \sqrt {\xi q}} \bar {v} _ {k}\right) \\ x _ {k + 1} = \exp_ {y _ {k}} (- s \operatorname {g r a d} f (y _ {k})) \\ v _ {k} = \Gamma_ {x _ {k}} ^ {y _ {k}} \left(\bar {v} _ {k} - \log_ {x _ {k}} (y _ {k})\right) \\ \bar {\bar {v}} _ {k + 1} = \left(1 - \sqrt {\frac {q}{\xi}}\right) v _ {k} + \sqrt {\frac {q}{\xi}} \left(- \frac {1}{\mu} \operatorname {g r a d} f (y _ {k})\right) \\ \bar {v} _ {k + 1} = \Gamma_ {y _ {k}} ^ {x _ {k + 1}} \left(\bar {\bar {v}} _ {k + 1} - \log_ {y _ {k}} \left(x _ {k + 1}\right)\right) \\ \end{array}
$$

end for

Output:  $x_{K}$

![](images/1079441413407bebed574101614b080ec85366de496235a2d95a1ff04ce12bb9.jpg)  
Figure 2. Illustration of RNAG-SC.

the g-convex case, and RNAG-SC (Algorithm 2) for the g-strongly convex case. In particular, we can interpret RNAG-C as a slight variation of NAG-C with high friction (Su et al., 2014, Section 4.1) with the friction parameter  $r = 1 + 2\xi$ . See Appendix B for a detailed interpretation. Note that we recover NAG-C and NAG-SC from these algorithms when  $M = \mathbb{R}^n$  and  $\xi = 1$ . Figure 2 is an illustration of some steps of RNAG-SC, where the curve  $\gamma$  is a geodesic with  $\gamma(0) = y_k$  and  $\gamma'(0) = \operatorname{grad} f(y_k)$ .
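Combining the closed-form sphere geometry with the five steps of Algorithm 1 gives a short, runnable sketch of RNAG-C. Everything below the geometry helpers follows the algorithm verbatim, while the test objective  $f(x) = -\langle a, x\rangle$  and the parameter values for  $\xi$  and  $T$  are illustrative assumptions:

```python
import numpy as np

# Closed-form geometry of the unit sphere: exp, log, parallel transport.
def sph_exp(p, v):
    t = np.linalg.norm(v)
    return p if t < 1e-12 else np.cos(t) * p + np.sin(t) * v / t

def sph_log(p, q):
    w = q - np.dot(p, q) * p
    nw = np.linalg.norm(w)
    theta = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))
    return np.zeros_like(p) if nw < 1e-12 else theta * w / nw

def sph_pt(p, q, w):
    u = sph_log(p, q)
    theta = np.linalg.norm(u)
    if theta < 1e-12:
        return w
    e = u / theta
    return w + np.dot(w, e) * (-np.sin(theta) * p + (np.cos(theta) - 1.0) * e)

def rnag_c(riem_grad, x0, L, xi, T, num_iters):
    """RNAG-C (Algorithm 1) with s = 1/L and lambda_k = (k + 2*xi + T)/2."""
    s = 1.0 / L
    x = x0
    v_bar = np.zeros_like(x0)                    # \bar{v}_0 = 0 in T_{x_0}M
    for k in range(num_iters):
        lam = (k + 2.0 * xi + T) / 2.0
        y = sph_exp(x, xi / (lam + (xi - 1.0)) * v_bar)
        g = riem_grad(y)
        x_next = sph_exp(y, -s * g)
        v = sph_pt(x, y, v_bar - sph_log(x, y))  # move \bar{v}_k to T_{y_k}M
        vv = v - (s * lam / xi) * g              # \bar{\bar{v}}_{k+1}
        v_bar = sph_pt(y, x_next, vv - sph_log(y, x_next))
        x = x_next
    return x

# Illustrative problem: f(x) = -<a, x>, g-convex on the hemisphere around a.
a = np.array([0.0, 0.0, 1.0])
riem_grad = lambda x: -(a - np.dot(a, x) * x)
x0 = np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0)
x = rnag_c(riem_grad, x0, L=1.0, xi=1.5, T=1.0, num_iters=200)
```

Note how the two transport steps implement the map  $v_A \mapsto \Gamma_{p_A}^{p_B}(v_A - \log_{p_A}(p_B))$  from Figure 1, rather than  $v_A \mapsto \log_{p_B}(\exp_{p_A}(v_A))$ .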

# 5. Convergence Analysis

# 5.1. Metric distortion lemma

To handle a potential function involving squared norms in tangent spaces, we need to compare distances in different tangent spaces.

Proposition 5.1. (Alimisis et al., 2020, Lemma 2) Let  $\gamma$  be a smooth curve whose image is in  $N$ . Then, we have

$$
\delta \left\| \gamma^ {\prime} (t) \right\| ^ {2} \leq \left\langle D _ {t} \log_ {\gamma (t)} (x), - \gamma^ {\prime} (t) \right\rangle \leq \zeta \left\| \gamma^ {\prime} (t) \right\| ^ {2}.
$$

In the proposition above,  $D_{t}$  is a covariant derivative along the curve (see Appendix A). Using this proposition, we obtain the following lemma.

Lemma 5.2. Let  $p_A, p_B, x \in N$  and  $v_A \in T_{p_A}M$ . If there is  $r \in [0,1]$  such that  $\log_{p_A}(p_B) = rv_A$ , then we have

$$
\begin{array}{l} \left\| v _ {B} - \log_ {p _ {B}} (x) \right\| _ {p _ {B}} ^ {2} + (\zeta - 1) \| v _ {B} \| _ {p _ {B}} ^ {2} \\ \leq \left\| v _ {A} - \log_ {p _ {A}} (x) \right\| _ {p _ {A}} ^ {2} + (\zeta - 1) \left\| v _ {A} \right\| _ {p _ {A}} ^ {2}, \\ \end{array}
$$

where  $v_{B} = \Gamma_{p_{A}}^{p_{B}}\left(v_{A} - \log_{p_{A}}(p_{B})\right) \in T_{p_{B}}M$ .

In particular, when  $r = 1$ , Lemma 5.2 recovers a weaker version of (Zhang & Sra, 2016, Lemma 5). We can further generalize this lemma as follows:

Lemma 5.3. Let  $p_A, p_B, x \in N$  and  $v_A \in T_{p_A}M$ . Define  $v_B = \Gamma_{p_A}^{p_B} \left( v_A - \log_{p_A}(p_B) \right) \in T_{p_B}M$ . If there are  $a, b \in T_{p_A}M$ , and  $r \in (0,1)$  such that  $v_A = a + b$  and  $\log_{p_A}(p_B) = rb$ , then we have

$$
\begin{array}{l} \left\| v _ {B} - \log_ {p _ {B}} (x) \right\| _ {p _ {B}} ^ {2} + (\xi - 1) \| v _ {B} \| _ {p _ {B}} ^ {2} \\ \leq \left\| v _ {A} - \log_ {p _ {A}} (x) \right\| _ {p _ {A}} ^ {2} + (\xi - 1) \left\| v _ {A} \right\| _ {p _ {A}} ^ {2} \\ + \frac {\xi - \delta}{2} \left(\frac {1}{1 - r} - 1\right) \| a \| _ {p _ {A}} ^ {2} \\ \end{array}
$$

for  $\xi \geq \zeta$ .

As  $\exp_{p_A}(v_A) \neq \exp_{p_B}(v_B)$  (see Figure 1), our lemma does not compare the projected distance between points on the manifold, unlike (Zhang & Sra, 2018, Theorem 10) and (Ahn & Sra, 2020, Lemma 4.1). The proofs of Lemma 5.2 and Lemma 5.3 can be found in Appendix C.
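In the Euclidean case ( $\zeta = \delta = 1$ ) the distortion terms vanish and Lemma 5.2 in fact holds with equality: with  $\log_p(x) = x - p$  and identity transport,  $v_B - \log_{p_B}(x) = v_A - r v_A - (x - p_B) = v_A - \log_{p_A}(x)$ . A quick numerical check of this (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
p_A = rng.normal(size=3)
x = rng.normal(size=3)
v_A = rng.normal(size=3)
r = 0.3
p_B = p_A + r * v_A                    # so that log_{p_A}(p_B) = r v_A
v_B = v_A - (p_B - p_A)                # Euclidean parallel transport is the identity

lhs = np.sum((v_B - (x - p_B)) ** 2)   # ||v_B - log_{p_B}(x)||^2
rhs = np.sum((v_A - (x - p_A)) ** 2)   # ||v_A - log_{p_A}(x)||^2
```

On a curved manifold the inequality is strict in general, and the  $(\zeta - 1)\|\cdot\|^2$  terms absorb the metric distortion.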

# 5.2. Main results

We now prove the iteration complexities of RNAG-C and RNAG-SC using potential functions of the form

$$
\begin{array}{l} \phi_ {k} = A _ {k} \left(f \left(x _ {k}\right) - f \left(x ^ {*}\right)\right) \\ + B _ {k} \left(\left\| \bar {v} _ {k} - \log_ {x _ {k}} \left(x ^ {*}\right) \right\| _ {x _ {k}} ^ {2} + (\xi - 1) \| \bar {v} _ {k} \| _ {x _ {k}} ^ {2}\right). \tag {4} \\ \end{array}
$$

The term  $(\xi - 1) \| \bar{v}_k \|_{x_k}^2$  is novel compared with the potential function in (Ahn & Sra, 2020), and it measures the kinetic energy (Wibisono et al., 2016). Intuitively, this potential makes sense because a large  $\xi$  means high friction (see Appendix B and Section 6). This term is useful when handling metric distortion.

# 5.2.1. THE GEODESICALLY CONVEX CASE

For the g-convex case, we use a potential function defined as

$$
\begin{array}{l} \phi_ {k} = s \lambda_ {k - 1} ^ {2} \left(f (x _ {k}) - f (x ^ {*})\right) \\ + \frac {\xi}{2} \left\| \bar {v} _ {k} - \log_ {x _ {k}} \left(x ^ {*}\right) \right\| ^ {2} + \frac {\xi (\xi - 1)}{2} \| \bar {v} _ {k} \| ^ {2}. \tag {5} \\ \end{array}
$$

The following theorem shows that this potential function is decreasing when the parameters  $\xi$  and  $T$  are chosen appropriately.

Theorem 5.4. Let  $f$  be a  $g$ -convex and geodesically  $L$ -smooth function. If the parameters  $\xi$  and  $T$  of RNAG-C satisfy  $\xi \geq \zeta$  and

$$
\begin{array}{l} \frac {\xi - \delta}{2} \left(\frac {1}{1 - \xi / \lambda_ {k}} - 1\right) \\ \leq (\xi - \zeta) \left(\frac {1}{(1 - \xi / (\lambda_ {k} + \xi - 1)) ^ {2}} - 1\right) \\ \end{array}
$$

for all  $k \geq 0$ , then the iterates of RNAG-C satisfy  $\phi_{k + 1} \leq \phi_k$  for all  $k \geq 0$ , where  $\phi_k$  is defined in (5).

In particular, we can show that the parameters  $\xi = \zeta + 3(\zeta - \delta)$  and  $T = 4\xi$  satisfy the condition in Theorem 5.4. In this case, the monotonicity of the potential function yields

$$
f \left(x _ {k}\right) - f \left(x ^ {*}\right) \leq \frac {1}{s \lambda_ {k - 1} ^ {2}} \phi_ {k} \leq \frac {1}{s \lambda_ {k - 1} ^ {2}} \phi_ {0}.
$$

Thus, RNAG-C achieves acceleration. The result is summarized in the following corollary.

Corollary 5.5. Let  $f$  be a  $g$ -convex and geodesically  $L$ -smooth function. Then, RNAG-C with parameters  $\xi = \zeta + 3(\zeta - \delta)$ ,  $T = 4\xi$  and step size  $s = \frac{1}{L}$  finds an  $\epsilon$ -approximate solution in  $O\left(\xi \sqrt{\frac{L}{\epsilon}}\right)$  iterations.

This result implies that the iteration complexity of RNAG-C is the same as that of NAG-C because  $\xi$  is a constant. The proofs of Theorem 5.4 and Corollary 5.5 are contained in Appendix E.

# 5.2.2. THE GEODESICALLY STRONGLY CONVEX CASE

For the g-strongly convex case, we consider a potential function defined as

$$
\begin{array}{l} \phi_k = \left(1 - \sqrt{\frac{q}{\xi}}\right)^{-k} \left(f(x_k) - f(x^*) \right. \tag{6} \\ \left. + \frac{\mu}{2} \left\| v_k - \log_{y_k}(x^*) \right\|^2 + \frac{\mu(\xi - 1)}{2} \|v_k\|^2\right). \\ \end{array}
$$

This potential function is also shown to be decreasing under appropriate conditions on  $\xi$  and  $s$ .

Theorem 5.6. Let  $f$  be a geodesically  $\mu$ -strongly convex and geodesically  $L$ -smooth function. If the step size  $s$  and the parameter  $\xi$  of RNAG-SC satisfy  $\xi \geq \zeta$ ,  $\sqrt{\xi q} < 1$ , and

$$
\begin{array}{l} \frac {\xi - \delta}{2} \left(\frac {1}{1 - \sqrt {\xi q}} - 1\right) \left(1 - \sqrt {\frac {q}{\xi}}\right) ^ {2} - \sqrt {\xi q} \left(1 - \sqrt {\frac {q}{\xi}}\right) \\ \leq (\xi - \zeta) \left(\frac {1}{\left(1 - \sqrt {\xi q} / \left(1 + \sqrt {\xi q}\right)\right) ^ {2}} - 1\right), \\ \end{array}
$$

then the iterates of RNAG-SC satisfy  $\phi_{k + 1}\leq \phi_k$  for all  $k\geq 0$ , where  $\phi_{k}$  is defined in (6).

In particular, the parameters  $\xi = \zeta + 3(\zeta - \delta)$  and  $s = \frac{1}{9\xi L}$  satisfy the condition in Theorem 5.6. In this case, by monotonicity of the potential function, we have

$$
f \left(x _ {k}\right) - f \left(x ^ {*}\right) \leq \left(1 - \sqrt {\frac {q}{\xi}}\right) ^ {k} \phi_ {k} \leq \left(1 - \sqrt {\frac {q}{\xi}}\right) ^ {k} \phi_ {0},
$$

which implies that RNAG-SC achieves acceleration. The following corollary summarizes the result.

Corollary 5.7. Let  $f$  be a geodesically  $\mu$ -strongly convex and geodesically  $L$ -smooth function. Then, RNAG-SC with parameter  $\xi = \zeta + 3(\zeta - \delta)$  and step size  $s = \frac{1}{9\xi L}$  finds an  $\epsilon$ -approximate solution in  $O\left(\xi \sqrt{\frac{L}{\mu}} \log \left(\frac{L}{\epsilon}\right)\right)$  iterations.

Because  $\xi$  is a constant, the iteration complexity of RNAG-SC is the same as that of NAG-SC. The proofs of Theorem 5.6 and Corollary 5.7 can be found in Appendix F.
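The claim that  $\xi = \zeta + 3(\zeta - \delta)$  and  $s = \frac{1}{9\xi L}$  satisfy the condition of Theorem 5.6 can also be checked numerically. The sketch below (illustrative; `kappa` denotes the condition number  $L/\mu$ ) evaluates both sides of the inequality, using  $q = \mu s = \frac{1}{9\xi\kappa}$  so that  $\sqrt{\xi q} = \frac{1}{3\sqrt{\kappa}} < 1$ .

```python
import math

def theorem_5_6_condition(zeta, delta, kappa):
    """Evaluate the condition of Theorem 5.6 at xi = zeta + 3(zeta - delta)
    and s = 1/(9 xi L), i.e. q = mu * s = 1/(9 xi kappa), kappa = L/mu."""
    xi = zeta + 3.0 * (zeta - delta)
    q = 1.0 / (9.0 * xi * kappa)
    sq = math.sqrt(xi * q)               # sqrt(xi q) = 1/(3 sqrt(kappa))
    sr = math.sqrt(q / xi)               # sqrt(q / xi)
    lhs = (xi - delta) / 2.0 * (1.0 / (1.0 - sq) - 1.0) * (1.0 - sr) ** 2 \
        - sq * (1.0 - sr)
    rhs = (xi - zeta) * (1.0 / (1.0 - sq / (1.0 + sq)) ** 2 - 1.0)
    return xi >= zeta and sq < 1.0 and lhs <= rhs
```

Sampling  $\zeta \geq 1 \geq \delta$  and  $\kappa \geq 1$  over a grid confirms the condition in every case, including the Euclidean edge case  $\zeta = \delta = 1$ , where both sides reduce to nonpositive and zero, respectively.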

# 6. Continuous-Time Interpretation

In this section, we identify a connection to the ODEs for modeling Riemannian acceleration in (Alimisis et al., 2020, Equations 2 and 4). Specifically, following the informal arguments in (Su et al., 2014, Section 2) and (d'Aspremont et al., 2021, Section 4.8), we obtain ODEs by taking the limit  $s \to 0$  in our schemes. The detailed analysis is contained in Appendix G. For sufficiently small  $s$ , Euclidean geometry is locally valid because each step involves only a small neighborhood of  $M$ . Thus, we informally assume  $M = \mathbb{R}^n$  for simplicity. We can show that the iterations of RNAG-C satisfy

$$
\begin{array}{l} \frac{y_{k+1} - y_k}{\sqrt{s}} \\ = \frac{\lambda_k - 1}{\lambda_{k+1} + (\xi - 1)} \frac{y_k - y_{k-1}}{\sqrt{s}} \\ - \frac{\lambda_{k+1}}{\lambda_{k+1} + (\xi - 1)} \sqrt{s}\, \operatorname{grad} f(y_k) \\ + \frac{\lambda_k - 1}{\lambda_{k+1} + (\xi - 1)} \sqrt{s} \left(\operatorname{grad} f(y_{k-1}) - \operatorname{grad} f(y_k)\right). \\ \end{array}
$$

We introduce a smooth curve  $y(t)$  that is approximated by the iterates of RNAG-C as  $y(t) \approx y_{t/\sqrt{s}} = y_k$  with  $k = \frac{t}{\sqrt{s}}$ . Using the Taylor expansion, we have

$$
\begin{array}{l} \frac{y_{k+1} - y_k}{\sqrt{s}} = \dot{y}(t) + \frac{\sqrt{s}}{2} \ddot{y}(t) + o\left(\sqrt{s}\right), \\ \frac{y_k - y_{k-1}}{\sqrt{s}} = \dot{y}(t) - \frac{\sqrt{s}}{2} \ddot{y}(t) + o\left(\sqrt{s}\right), \\ \sqrt{s}\, \operatorname{grad} f(y_{k-1}) = \sqrt{s}\, \operatorname{grad} f(y_k) + o\left(\sqrt{s}\right). \\ \end{array}
$$

Letting  $s \to 0$  yields the ODE<sup>5</sup>

$$
\nabla_{\dot{y}} \dot{y} + \frac{1 + 2\xi}{t} \dot{y} + \operatorname{grad} f(y) = 0, \tag{7}
$$

where the covariant derivative  $\nabla_{\dot{y}}\dot{y} = D_t\dot{y}$  is a natural extension of the second derivative  $\ddot{y}$  (see Appendix A).

In the g-strongly convex case, we can show that the iterations of RNAG-SC satisfy

$$
\begin{array}{l} \frac{y_{k+1} - y_k}{\sqrt{s}} \\ = \frac{1 - \sqrt{q/\xi}}{1 + \sqrt{\xi q}} \frac{y_k - y_{k-1}}{\sqrt{s}} - \frac{1 + \sqrt{q/\xi}}{1 + \sqrt{\xi q}} \sqrt{s}\, \operatorname{grad} f(y_k) \\ + \frac{1 - \sqrt{q/\xi}}{1 + \sqrt{\xi q}} \sqrt{s} \left(\operatorname{grad} f(y_{k-1}) - \operatorname{grad} f(y_k)\right). \\ \end{array}
$$

Through a similar limiting process, we obtain the following ODE:

$$
\nabla_{\dot{y}} \dot{y} + \left(\frac{1}{\sqrt{\xi}} + \sqrt{\xi}\right) \sqrt{\mu}\, \dot{y} + \operatorname{grad} f(y) = 0. \tag{8}
$$

Replacing the parameter  $\xi$  in the coefficients of our ODEs with  $\zeta$ , we recover (Alimisis et al., 2020, Equations 2 and 4). Because  $\xi \geq \zeta$ , the continuous-time acceleration results (Alimisis et al., 2020, Theorems 5 and 7) are valid for our ODEs as well. Thus, this analysis confirms the accelerated convergence of our algorithms through the lens of continuous-time flows.

In both ODEs, the parameter  $\xi \geq \zeta$  appears in the coefficient of the friction term  $\dot{y}$ , increasing with  $\xi$ . Intuitively, this makes sense because  $\zeta$  is large for an ill-conditioned domain, where  $-K_{\mathrm{min}}$  and  $D$  are large and thus metric distortion is more severe (where one might want to decrease the effect of momentum).
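As an illustration of ODE (8), one can integrate it numerically in the Euclidean case (where  $\nabla_{\dot{y}}\dot{y} = \ddot{y}$ ). The sketch below (a minimal example, not from the paper) uses semi-implicit Euler on a quadratic with  $\mu = 1$  and  $\xi = 2$ ; the trajectory decays to the minimizer at an exponential rate, as the continuous-time results predict.

```python
import numpy as np

def integrate_ode8(grad_f, y0, mu, xi, T, dt=1e-3):
    """Semi-implicit Euler for y'' + (1/sqrt(xi) + sqrt(xi)) sqrt(mu) y' + grad f(y) = 0."""
    c = (1.0 / np.sqrt(xi) + np.sqrt(xi)) * np.sqrt(mu)  # friction coefficient
    y = y0.astype(float).copy()
    v = np.zeros_like(y)                                 # start at rest
    for _ in range(int(T / dt)):
        v += dt * (-c * v - grad_f(y))                   # update velocity first
        y += dt * v                                      # then position
    return y
```

Note that the friction coefficient  $\frac{1}{\sqrt{\xi}} + \sqrt{\xi}$  is minimized at  $\xi = 1$  (the Euclidean case) and grows with  $\xi$ , matching the discussion above.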

# 7. Experiments

In this section, we examine the performance of our algorithms on the Rayleigh quotient maximization problem and the Karcher mean problem. To implement the geometry of manifolds, we used the Python libraries Pymanopt (Townsend et al., 2016) and Geomstats (Miolane et al., 2020). For comparison, we use the known accelerated algorithms RAGD (Zhang & Sra, 2018) for the g-strongly convex case and RAGDsDR with no line search (Alimisis et al., 2021) for the g-convex case. The source code of our RNAG implementation is available online. $^{6}$

![](images/19ca2eac8996b88a4483e47507ec55830de1924209345e39f81722ca8997bed2.jpg)  
(a) Rayleigh quotient maximization

![](images/9cca97eb248700e905de36943f516c245ce4d6fb1952674fd5d5dfe78c1076a0.jpg)  
Figure 3. Performances of various Riemannian optimization algorithms on the Rayleigh quotient maximization problem and the Karcher mean problem.

![](images/174951bcce192d8965b7c4dbe3c6f945d838b65352a35604f550e814c4458ef7.jpg)  
(b) Karcher mean of SPD matrices  
(c) Karcher mean on hyperbolic space

We set the input parameters as  $\zeta = 1$  for implementing RAGDsDR, and  $\xi = 1$  for implementing our algorithms. The step size was chosen as  $s = \frac{1}{L}$  in our algorithms.

Rayleigh quotient maximization. Given a real  $d \times d$  symmetric matrix  $A$ , we consider the problem

$$
\min_{x \in \mathbb{S}^{d-1}} f(x) = -\frac{1}{2} x^{\top} A x
$$

on the unit  $(d - 1)$ -sphere  $\mathbb{S}^{d - 1}$ . For this manifold, we set  $K_{\min} = K_{\max} = 1$ . We let  $d = 1000$  and  $A = \frac{1}{2}\left(B + B^{\top}\right)$ , where the entries of  $B \in \mathbb{R}^{d \times d}$  were randomly generated by the Gaussian distribution  $N(0,1 / d)$ . We have the smoothness parameter  $L = \lambda_{\max} - \lambda_{\min}$  by the following proposition.

Proposition 7.1. The function  $f$  is geodesically  $(\lambda_{\max} - \lambda_{\min})$ -smooth, where  $\lambda_{\max}$  and  $\lambda_{\min}$  are the largest and smallest eigenvalues of  $A$ , respectively.

The proof can be found in Appendix H. The result is shown in Figure 3(a). We observe that RNAG-C outperforms RGD and is comparable to RAGDsDR, a known accelerated method for the g-convex case.
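The Riemannian gradient descent baseline for this problem is easy to sketch: the Riemannian gradient is the tangent projection of the Euclidean gradient,  $\operatorname{grad} f(x) = -(Ax - (x^{\top}Ax)x)$ , and the exponential map on the sphere is  $\exp_x(v) = \cos(\|v\|)x + \sin(\|v\|)\frac{v}{\|v\|}$ . The snippet below is a minimal illustration (not the experiment code), using the step size  $s = 1/L$  from Proposition 7.1.

```python
import numpy as np

def rgd_rayleigh(A, x0, K):
    """Riemannian gradient descent for f(x) = -x^T A x / 2 on the unit sphere."""
    lam = np.linalg.eigvalsh(A)
    s = 1.0 / (lam[-1] - lam[0])          # s = 1/L, L = lmax - lmin (Proposition 7.1)
    x = x0 / np.linalg.norm(x0)
    for _ in range(K):
        g = -(A @ x - (x @ A @ x) * x)    # grad f(x): tangent projection of -Ax
        v = -s * g
        n = np.linalg.norm(v)
        if n < 1e-16:
            break
        x = np.cos(n) * x + np.sin(n) * (v / n)   # exp_x(v) on the sphere
    return x
```

Running this on a matrix with a known spectrum drives  $x$  toward the top eigenvector, i.e., the maximizer of the Rayleigh quotient.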

Karcher mean of SPD matrices. When  $K_{\mathrm{max}} \leq 0$ , the Karcher mean (Karcher, 1977) of the points  $p_i \in M$  for  $i = 1, \ldots, n$ , is defined as the solution of

$$
\min  _ {x \in M} f (x) = \frac {1}{2 n} \sum_ {i = 1} ^ {n} d \left(x, p _ {i}\right) ^ {2}. \tag {9}
$$

The following proposition shows that one can set the strong convexity parameter as  $\mu = 1$ .

Proposition 7.2. The function  $f$  is geodesically 1-strongly convex.

The proof can be found in Appendix H. We consider this problem on the manifold  $\mathcal{P}(d) \subseteq \mathbb{R}^{d \times d}$  of symmetric positive definite matrices endowed with the Riemannian metric

$\langle X,Y\rangle_P = \operatorname {Tr}\left(P^{-1}XP^{-1}Y\right)$ . It is known that one can set  $K_{\mathrm{min}} = -\frac{1}{2}$  and  $K_{\mathrm{max}} = 0$  (Criscitiello & Boumal, 2020, Appendix I). We set the dimension and the number of matrices as  $d = 100$  and  $n = 50$ . The matrices  $p_i$  were randomly generated using Matrix Mean Toolbox (Bini & Iannazzo, 2013) with condition number  $10^{6}$ . We set the smoothness parameter as  $L = 10$ . The result is shown in Figure 3(b). We observe that RNAG-SC and RAGD (Zhang & Sra, 2018) perform significantly better than RGD. The performances of RNAG-C and RAGDsDR are only slightly better than that of RGD in early stages. This result makes sense because  $f$  is g-strongly convex and well-conditioned.

Karcher mean on hyperbolic space. We consider the problem (9) on the hyperbolic space  $\mathbb{H}^d$  with the hyperboloid model  $\mathbb{H}^d = \left\{x \in \mathbb{R}^{d+1} : -x_{d+1}^2 + \sum_{k=1}^d x_k^2 = -1\right\}$ . For this manifold, we can set  $K_{\min} = K_{\max} = -1$ . We set the dimension and the number of points as  $d = 1000$  and  $n = 10$ . The first  $d$  entries of each point  $p_i$  were randomly generated by the Gaussian distribution  $N(0,1/d)$ . We set the smoothness parameter as  $L = 10$ . The result is similar to that of the previous example, and is shown in Figure 3(c).
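For the hyperboloid model, the Riemannian operations have closed forms in terms of the Minkowski form  $\langle u, v\rangle_L = \sum_{k=1}^d u_k v_k - u_{d+1}v_{d+1}$ :  $d(x,y) = \operatorname{arccosh}(-\langle x,y\rangle_L)$ ,  $\exp_x(v) = \cosh(\|v\|)x + \sinh(\|v\|)\frac{v}{\|v\|}$ , and  $\log_x(y) = d(x,y)\frac{y - \cosh(d(x,y))x}{\sinh(d(x,y))}$ . The sketch below (illustrative only; it uses plain Riemannian gradient descent rather than RNAG) solves problem (9) for two symmetric points, whose Karcher mean is  $(0,\dots,0,1)$  by symmetry.

```python
import numpy as np

def mink(u, v):                  # Minkowski form <u, v>_L
    return u[:-1] @ v[:-1] - u[-1] * v[-1]

def exp_map(x, v):               # exp_x(v) on the hyperboloid
    n = np.sqrt(max(mink(v, v), 0.0))
    if n < 1e-15:
        return x
    return np.cosh(n) * x + np.sinh(n) * (v / n)

def log_map(x, y):               # log_x(y) on the hyperboloid
    c = max(-mink(x, y), 1.0)    # cosh of the distance d(x, y)
    u = y - c * x                # sinh(d) times the unit initial velocity
    n = np.sqrt(max(mink(u, u), 0.0))
    return np.zeros_like(x) if n < 1e-15 else np.arccosh(c) * u / n

def karcher_mean(points, x0, step=0.5, K=300):
    """Gradient descent on f(x) = (1/2n) sum d(x, p_i)^2,
    using grad f(x) = -mean of log_x(p_i)."""
    x = x0.copy()
    for _ in range(K):
        x = exp_map(x, step * np.mean([log_map(x, p) for p in points], axis=0))
        x = x / np.sqrt(-mink(x, x))   # re-project against floating-point drift
    return x
```

At the minimizer the gradient condition  $\sum_i \log_x(p_i) = 0$  holds, which is what the fixed point of this iteration enforces.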

# 8. Discussion

In this paper, we have proposed novel computationally tractable first-order methods that achieve Riemannian acceleration for both g-convex and g-strongly convex objective functions whenever the constants  $K_{\mathrm{min}}$ ,  $K_{\mathrm{max}}$ , and  $D$  are available. The iteration complexities of RNAG-C and RNAG-SC match those of their Euclidean counterparts. The continuous-time analysis of our algorithms provides an intuitive interpretation of the parameter  $\xi$  as a measure of friction, which is higher when the domain manifold is more ill-conditioned. In fact, the iteration complexities of our algorithms depend on the parameter  $\xi \geq \zeta$ , which is affected by the values of the constants  $K_{\mathrm{min}}$ ,  $K_{\mathrm{max}}$ , and  $D$ . When  $\zeta$  is large (i.e.,  $-K_{\mathrm{min}}$  and  $D$  are large), we have a worse guarantee. A possible future direction is to tightly characterize the effect of the constants  $K_{\mathrm{min}}$ ,  $K_{\mathrm{max}}$ , and  $D$  on the complexities of Riemannian optimization algorithms.

Comparison with (Liu et al., 2017). The algorithms in (Liu et al., 2017) achieve acceleration under only the standard assumptions. However, to implement the operator  $\mathbb{S}:(y_{k - 1},x_k,x_{k - 1})\mapsto y_k$  in (Liu et al., 2017, Algorithm 1), one needs to solve the following nonlinear equation at each iteration:

$$
\begin{array}{l} (1 - \sqrt{\mu/L})\, \Gamma_{y_k}^{y_{k-1}} \log_{y_k}(x_k) - \beta\, \Gamma_{y_k}^{y_{k-1}} \operatorname{grad} f(y_k) \\ = \left(1 - \sqrt{\mu/L}\right)^{3/2} \log_{y_{k-1}}(x_{k-1}). \\ \end{array}
$$

It is unclear whether this equation can be solved in a tractable way, or is even feasible, as noted in (Ahn & Sra, 2020). In contrast, our algorithms involve only operations in tangent spaces together with the exponential map, the logarithm map, and parallel transport. Thus, our algorithms are computationally tractable on the many manifolds used in practice for which these operations are implementable.

Comparison with (Criscitiello & Boumal, 2021). It is natural to ask how our positive result is not contradictory to the negative result in (Criscitiello & Boumal, 2021). To clarify this, we provide the following two reasons:

(i) We assume that the diameter  $\mathrm{diam}(N)$  of the domain  $N$  is bounded, which is a more restrictive condition than their assumption that the distance  $d(x_0,x^*)$  is bounded.  
(ii) We assume that the diameter  $\mathrm{diam}(N)$  is bounded by a fixed constant  $D$ . Thus, in Corollary 5.5 and Corollary 5.7,  $\xi$  does not depend on other parameters such as  $\mu$  and  $L$ . In contrast, (Criscitiello & Boumal, 2021, Theorem 1.3) bounds  $d(x_0, x^*)$  by  $\frac{3}{4} r$ , where  $r$  is the solution of  $\kappa = 12r\sqrt{-K_{\min}} + 9$ , so  $r\sqrt{-K_{\min}}$  grows with  $\kappa = L / \mu$ . A similar discussion can be found in (Martínez-Rubio, 2022, Remark 29).

We believe that the second point is the main reason our positive results can coexist with their negative results. As mentioned in Section 2, their result is not contradictory but complementary to ours.

# Acknowledgements

We thank the anonymous reviewers for their insightful suggestions and Dr. Antonio Orvieto and the co-authors of (Alimisis et al., 2021) for allowing us to use a part of their code. This work was supported in part by Samsung Electronics, the National Research Foundation of Korea funded by MSIT(2020R1C1C1009766), and the Information and Communications Technology Planning and Evaluation (IITP) grant funded by MSIT(2022-0-00124, 2022-0-00480).

# References

Ahn, K. and Sra, S. From Nesterov's estimate sequence to Riemannian acceleration. In Proceedings of Thirty Third Conference on Learning Theory, pp. 84-118, 2020.  
Alimisis, F., Orvieto, A., Becigneul, G., and Lucchi, A. A continuous-time perspective for modeling acceleration in Riemannian optimization. In Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, pp. 1297-1307, 2020.  
Alimisis, F., Orvieto, A., Becigneul, G., and Lucchi, A. Momentum improves optimization on Riemannian manifolds. In Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, pp. 1351-1359, 2021.  
Bini, D. A. and Iannazzo, B. Computing the Karcher mean of symmetric positive definite matrices. Linear Algebra and its Applications, 438(4):1700-1710, 2013.
Boumal, N. An introduction to optimization on smooth manifolds. Available online, Aug 2020. URL http://www.nicolasboumal.net/book.  
Criscitiello, C. and Boumal, N. An accelerated first-order method for non-convex optimization on manifolds. arXiv preprint arXiv:2008.02252, 2020.  
Criscitiello, C. and Boumal, N. Negative curvature ob-structs acceleration for geodesically convex optimization, even with exact first-order oracles. arXiv preprint arXiv:2111.13263, 2021.  
d'Aspremont, A., Scieur, D., and Taylor, A. Acceleration methods. arXiv preprint arXiv:2101.09545, 2021.  
Duruisseaux, V. and Leok, M. Accelerated optimization on riemannian manifolds via discrete constrained variational integrators. arXiv preprint arXiv:2104.07176, 2021a.  
Duruisseaux, V. and Leok, M. A variational formulation of accelerated optimization on riemannian manifolds. arXiv preprint arXiv:2101.06552, 2021b.  
Duruisseaux, V. and Leok, M. Accelerated optimization on riemannian manifolds via projected variational integrators. arXiv preprint arXiv:2201.02904, 2022.  
Franca, G., Barp, A., Girolami, M., and Jordan, M. I. Optimization on manifolds: A symplectic approach. arXiv preprint arXiv:2107.11231, 2021.  
Hamilton, L. and Moitra, A. No-go theorem for acceleration in the hyperbolic plane. arXiv preprint arXiv:2101.05657, 2021.

Hosseini, R. and Sra, S. Matrix manifold optimization for Gaussian mixtures. In Advances in Neural Information Processing Systems, pp. 910-918, 2015.  
Hosseini, R. and Sra, S. An alternative to EM for Gaussian mixture models: batch and stochastic Riemannian optimization. Mathematical Programming, 181(1):187-223, 2020.
Karcher, H. Riemannian center of mass and mollifier smoothing. Communications on Pure and Applied Mathematics, 30(5):509-541, 1977.  
Lee, J. M. Introduction to Riemannian Manifolds, volume 176. Springer, 2018.  
Lezcano-Casado, M. Trivializations for gradient-based optimization on manifolds. In Advances in Neural Information Processing Systems, pp. 9157-9168, 2019.  
Lezcano-Casado, M. Adaptive and momentum methods on manifolds through trivializations. arXiv preprint arXiv:2010.04617, 2020.  
Liu, Y., Shang, F., Cheng, J., Cheng, H., and Jiao, L. Accelerated first-order methods for geodesically convex optimization on Riemannian manifolds. In Advances in Neural Information Processing Systems, pp. 4868-4877, 2017.  
Martínez-Rubio, D. Global Riemannian acceleration in hyperbolic and spherical spaces. In International Conference on Algorithmic Learning Theory, pp. 768-826, 2022.  
Miolane, N., Guigui, N., Brigant, A. L., Mathe, J., Hou, B., Thanwerdas, Y., Heyder, S., Peltre, O., Koep, N., Zaatiti, H., Hajri, H., Cabanes, Y., Gerald, T., Chauchat, P., Shewmake, C., Brooks, D., Kainz, B., Donnat, C., Holmes, S., and Pennec, X. Geomstats: A Python package for Riemannian geometry in machine learning. Journal of Machine Learning Research, 21(223):1-9, 2020.
Nesterov, Y. Lectures on Convex Optimization, volume 137. Springer, 2018.  
Nesterov, Y. E. A method for solving the convex programming problem with convergence rate  $O(1 / k^2)$ . In Dokl. Akad. Nauk SSSR, volume 269, pp. 543-547, 1983.  
Nguyen, V. A., Shafieezadeh-Abadeh, S., Yue, M.-C., Kuhn, D., and Wiesemann, W. Calculating optimistic likelihoods using (geodesically) convex optimization. arXiv preprint arXiv:1910.07817, 2019.  
Petersen, P. Riemannian Geometry, volume 1 of 171. Springer, Cham, 3 edition, 2016. ISBN 978-3-319-26652-7.

Sra, S. On the matrix square root via geometric optimization. arXiv preprint arXiv:1507.08366, 2015.  
Su, W., Boyd, S., and Candes, E. A differential equation for modeling Nesterov's accelerated gradient method: Theory and insights. In Advances in Neural Information Processing Systems, pp. 2510-2518, 2014.  
Townsend, J., Koep, N., and Weichwald, S. Pymanopt: A Python toolbox for optimization on manifolds using automatic differentiation. The Journal of Machine Learning Research, 17(1):4755-4759, 2016.
Tseng, P. On accelerated proximal gradient methods for convex-concave optimization. submitted to SIAM Journal on Optimization, 2(3), 2008.  
Vishnoi, N. K. Geodesic convex optimization: Differentiation on manifolds, geodesics, and convexity. arXiv preprint arXiv:1806.06373, 2018.  
Wibisono, A., Wilson, A. C., and Jordan, M. I. A variational perspective on accelerated methods in optimization. Proceedings of the National Academy of Sciences, 113(47): E7351-E7358, 2016.  
Wiesel, A. Geodesic convexity and covariance estimation. IEEE Transactions on Signal Processing, 60(12):6182-6189, 2012.  
Wilson, A. C., Recht, B., and Jordan, M. I. A Lyapunov analysis of accelerated methods in optimization. Journal of Machine Learning Research, 22(113):1-34, 2021.  
Zadeh, P., Hosseini, R., and Sra, S. Geometric mean metric learning. In International Conference on Machine Learning, pp. 2464-2471, 2016.  
Zhang, H. and Sra, S. First-order methods for geodesically convex optimization. In Conference on Learning Theory, pp. 1617-1638, 2016.  
Zhang, H. and Sra, S. An estimate sequence for geodesically convex optimization. In Proceedings of the 31st Conference on Learning Theory, pp. 1703-1723, 2018.

# A. Background

Definition A.1. A smooth vector field  $V$  is a smooth map from  $M$  to  $TM$  such that  $p \circ V$  is the identity map, where  $p: TM \to M$  is the projection. The collection of all smooth vector fields on  $M$  is denoted by  $\mathfrak{X}(M)$ .

Definition A.2. Let  $\gamma : I \to M$  be a smooth curve. A smooth vector field  $V$  along  $\gamma$  is a smooth map from  $I$  to  $TM$  such that  $V(t) \in T_{\gamma(t)}M$  for all  $t \in I$ . The collection of all smooth vector fields along  $\gamma$  is denoted by  $\mathfrak{X}(\gamma)$ .

Proposition A.3 (Fundamental theorem of Riemannian geometry). There exists a unique operator

$$
\nabla : \mathfrak {X} (M) \times \mathfrak {X} (M) \rightarrow \mathfrak {X} (M): (U, V) \mapsto \nabla_ {U} V
$$

satisfying the following properties for any  $U, V, W \in \mathfrak{X}(M)$ , smooth functions  $f, g$  on  $M$ , and  $a, b \in \mathbb{R}$ :

1.  $\nabla_{fU + gW}V = f\nabla_U V + g\nabla_W V$  
2.  $\nabla_U(aV + bW) = a\nabla_U V + b\nabla_U W$  
3.  $\nabla_U(fV) = (Uf)V + f\nabla_U V$  
4.  $[U,V] = \nabla_U V - \nabla_V U$  
5.  $U\langle V,W\rangle = \langle \nabla_U V,W\rangle +\langle V,\nabla_U W\rangle$

where  $[\cdot, \cdot]$  denotes the Lie bracket. The operator  $\nabla$  is called the Levi-Civita connection or the Riemannian connection. The field  $\nabla_U V$  is called the covariant derivative of  $V$  along  $U$ .

From now on, we always assume that  $M$  is equipped with the Riemannian connection  $\nabla$ .

Proposition A.4. (Boumal, 2020, Section 8.11) For any smooth vector fields  $U, V$  on  $M$ , the vector field  $\nabla_U V$  at  $x$  depends on  $U$  only through  $U(x)$ . Thus, we can write  $\nabla_u V$  to mean  $(\nabla_U V)(x)$  for any  $U \in \mathfrak{X}(M)$  such that  $U(x) = u$ , without ambiguity.

For a smooth function  $f:M\to \mathbb{R}$ ,  $\operatorname{grad} f$  is a smooth vector field.

Definition A.5. (Boumal, 2020, Section 8.11) The Riemannian Hessian of a smooth function  $f$  on  $M$  at  $x \in M$  is a self-adjoint linear operator  $\operatorname{Hess} f(x): T_xM \to T_xM$  defined as

$$
\operatorname{Hess} f(x)[u] = \nabla_u \operatorname{grad} f.
$$

Proposition A.6. (Boumal, 2020, Section 8.12) Let  $c: I \to M$  be a smooth curve. There exists a unique operator  $D_t: \mathfrak{X}(c) \to \mathfrak{X}(c)$  satisfying the following properties for all  $Y, Z \in \mathfrak{X}(c), U \in \mathfrak{X}(M)$ , a smooth function  $g$  on  $I$ , and  $a, b \in \mathbb{R}$ :

1.  $D_{t}(aY + bZ) = aD_{t}Y + bD_{t}Z$  
2.  $D_{t}(gZ) = g^{\prime}Z + gD_{t}Z$  
3.  $(D_{t}(U\circ c))(t) = \nabla_{c^{\prime}(t)}U$  for all  $t\in I$  
4.  $\frac{d}{dt}\langle Y,Z\rangle = \langle D_tY,Z\rangle +\langle Y,D_tZ\rangle$

This operator is called the (induced) covariant derivative along the curve  $c$ .

We define the acceleration of a smooth curve  $\gamma$  as the vector field  $D_{t}\gamma^{\prime}$  along  $\gamma$ . Now, we can define the parallel transport using covariant derivatives.

Definition A.7. (Boumal, 2020, Section 10.3) A vector field  $Z \in \mathfrak{X}(c)$  is parallel if  $D_tZ = 0$ .

Proposition A.8. (Boumal, 2020, Section 10.3) For any smooth curve  $c: I \to M$ ,  $t_0 \in I$  and  $u \in T_{c(t_0)}M$ , there exists a unique parallel vector field  $Z \in \mathfrak{X}(c)$  such that  $Z(t_0) = u$ .

Definition A.9. (Boumal, 2020, Section 10.3) Given a smooth curve  $c$  on  $M$ , the parallel transport of tangent vectors at  $c(t_0)$  to the tangent space at  $c(t_1)$  along  $c$ ,

$$
\Gamma (c) _ {t _ {0}} ^ {t _ {1}}: T _ {c (t _ {0})} M \to T _ {c (t _ {1})} M,
$$

is defined by  $\Gamma(c)_{t_0}^{t_1}(u) = Z(t_1)$ , where  $Z\in \mathfrak{X}(c)$  is the unique parallel vector field such that  $Z(t_{0}) = u$ .

Proposition A.10. (Boumal, 2020, Section 10.3) The parallel transport operator  $\Gamma(c)_{t_0}^{t_1}$  is linear. Also,  $\Gamma(c)_{t_1}^{t_2} \circ \Gamma(c)_{t_0}^{t_1} = \Gamma(c)_{t_0}^{t_2}$  and  $\Gamma(c)_{t}^{t}$  is the identity. In particular, the inverse of  $\Gamma(c)_{t_0}^{t_1}$  is  $\Gamma(c)_{t_1}^{t_0}$ . The parallel transport is an isometry, that is,

$$
\left\langle u, v \right\rangle_ {c (t _ {0})} = \left\langle \Gamma (c) _ {t _ {0}} ^ {t _ {1}} (u), \Gamma (c) _ {t _ {0}} ^ {t _ {1}} (v) \right\rangle_ {c (t _ {1})}.
$$

Proposition A.11. (Boumal, 2020, Section 10.3) Consider a smooth curve  $c: I \to M$ . Given a vector field  $Z \in \mathfrak{X}(c)$ , we have

$$
D _ {t} Z (t) = \lim  _ {h \rightarrow 0} \frac {\Gamma (c) _ {t + h} ^ {t} Z (t + h) - Z (t)}{h}.
$$

# B. Comparison between RNAG-C and High-Friction NAG-C

In this section, we review high-friction NAG-C in (Su et al., 2014, Section 4.1), and compare it to RNAG-C. For  $r \geq 3$ , they designed the generalized NAG-C with high friction as

$$
x_k = y_{k-1} - s \operatorname{grad} f(y_{k-1})
$$

$$
y _ {k} = x _ {k} + \frac {k - 1}{k + r - 1} \left(x _ {k} - x _ {k - 1}\right).
$$

Introducing the third sequence as  $z_{k} = y_{k} + \frac{k}{r - 1}\left(y_{k} - x_{k}\right)$ , we can rewrite this method as

$$
y _ {k} = x _ {k} + \frac {r - 1}{k + r - 1} (z _ {k} - x _ {k})
$$

$$
x_{k+1} = y_k - s \operatorname{grad} f(y_k) \quad (\text{NAG-C-HF})
$$

$$
z_{k+1} = z_k - \frac{k + r - 1}{r - 1}\, s \operatorname{grad} f(y_k).
$$

Note that we can recover NAG-C by letting  $r = 3$ . The iterates of NAG-C-HF satisfy

$$
f (x _ {k}) - f (x ^ {*}) \leq \frac {(r - 1) ^ {2} \| x _ {0} - x ^ {*} \| ^ {2}}{2 s (k + r - 2) ^ {2}} \leq \frac {(r - 1) ^ {2} \| x _ {0} - x ^ {*} \| ^ {2}}{2 s (k - 2) ^ {2}}
$$

for  $s \leq \frac{1}{L}$  (Su et al., 2014, Theorem 6). Thus, we have  $f(x_k) - f(x^*) \leq \epsilon$  whenever

$$
(k - 2) ^ {2} \geq \frac {(r - 1) ^ {2} \left\| x _ {0} - x ^ {*} \right\| ^ {2}}{2 s \epsilon}.
$$

In particular, when  $s = \frac{1}{L}$  and  $r = 1 + 2\xi$ , we have the iteration complexity  $O\left(\xi \sqrt{\frac{L}{\epsilon}}\right)$ .

For comparison, we write RNAG-C in Euclidean space as

$$
y _ {k} = x _ {k} + \frac {2 \xi}{k + 2 \xi + (T + 2 \xi - 2)} \left(z _ {k} - x _ {k}\right)
$$

$$
x_{k+1} = y_k - s \operatorname{grad} f(y_k) \tag{10}
$$

$$
z_{k+1} = z_k - \frac{k + 2\xi + T}{2\xi}\, s \operatorname{grad} f(y_k).
$$

One can see that the algorithm (10) is similar to that of NAG-C-HF with  $r = 1 + 2\xi$ , where the only difference occurs in constants that can be ignored as  $k$  grows. Note that both algorithms have the same iteration complexity  $O\left(\xi \sqrt{\frac{L}{\epsilon}}\right)$  even when we do not ignore the effect of  $\xi$ , and lead to the same ODE (Su et al., 2014, Section 4.1)

$$
\ddot{y} + \frac{1 + 2\xi}{t} \dot{y} + \operatorname{grad} f(y) = 0.
$$
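As a quick illustration (not the paper's code), the following sketch implements NAG-C-HF as displayed above and checks the  $O(1/k^2)$  guarantee of (Su et al., 2014, Theorem 6) on a convex quadratic.

```python
import numpy as np

def nag_c_hf(grad_f, x0, s, r, K):
    """NAG-C-HF: x_k = y_{k-1} - s grad f(y_{k-1}),
    y_k = x_k + (k-1)/(k+r-1) (x_k - x_{k-1}); r = 3 recovers NAG-C."""
    x_prev = x0.astype(float).copy()
    y = x_prev.copy()                              # y_0 = x_0
    for k in range(1, K + 1):
        x = y - s * grad_f(y)                      # gradient step from y_{k-1}
        y = x + (k - 1.0) / (k + r - 1.0) * (x - x_prev)  # momentum step
        x_prev = x
    return x_prev                                  # x_K
```

The final suboptimality gap can then be compared against the bound  $\frac{(r-1)^2 \|x_0 - x^*\|^2}{2s(k+r-2)^2}$ , which holds for  $s \leq \frac{1}{L}$ .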

# C. Proofs of Lemma 5.2 and Lemma 5.3

Proposition C.1. (Alimisis et al., 2020, Lemma 12) Let  $\gamma$  be a smooth curve whose image is in  $N$ . Then,

$$
\frac {d}{d t} \left\| \log_ {\gamma (t)} (x) \right\| ^ {2} = 2 \left\langle D _ {t} \log_ {\gamma (t)} (x), \log_ {\gamma (t)} (x) \right\rangle = 2 \left\langle \log_ {\gamma (t)} (x), - \gamma^ {\prime} (t) \right\rangle .
$$

Lemma C.2. Let  $p_A, p_B, x \in N$  and  $v_A \in T_{p_A}M$ . If there is  $r \in [0,1]$  such that  $\log_{p_A}(p_B) = rv_A$ , then we have

$$
\begin{array}{l} \left\| v _ {B} - \log_ {p _ {B}} (x) \right\| _ {p _ {B}} ^ {2} + (\zeta - 1) \| v _ {B} \| _ {p _ {B}} ^ {2} \\ \leq \left\| v _ {A} - \log_ {p _ {A}} (x) \right\| _ {p _ {A}} ^ {2} + (\zeta - 1) \| v _ {A} \| _ {p _ {A}} ^ {2}, \\ \end{array}
$$

where  $v_{B} = \Gamma_{p_{A}}^{p_{B}}\left(v_{A} - \log_{p_{A}}(p_{B})\right) \in T_{p_{B}}M$

Proof. Because  $N$  is uniquely geodesically convex, there is a unique geodesic  $\gamma$  such that  $\gamma(0) = p_A$  and  $\gamma(r) = p_B$  whose image lies in  $N$ . We can check that  $\gamma'(0) = v_A$ . Define the vector field  $V(t)$  along  $\gamma$  as  $V(t) = \Gamma(\gamma)_0^t (v_A - t\gamma'(0))$ . Then, we can check that  $V(t) = (1 - t)\gamma'(t)$  and  $V'(t) = -\gamma'(t)$ . Define the function  $w: [0,r] \to \mathbb{R}$  as  $w(t) = \left\| \log_{\gamma(t)}(x) - V(t) \right\|^2$ . It follows from Proposition 5.1 and Proposition C.1 that

$$
\begin{array}{l} \frac {d}{d t} w (t) = 2 \left\langle D _ {t} \left(\log_ {\gamma (t)} (x) - V (t)\right), \log_ {\gamma (t)} (x) - V (t) \right\rangle \\ = 2 \left\langle D _ {t} \log_ {\gamma (t)} (x), \log_ {\gamma (t)} (x) \right\rangle - 2 \left\langle D _ {t} \log_ {\gamma (t)} (x), V (t) \right\rangle - 2 \left\langle D _ {t} V (t), \log_ {\gamma (t)} (x) \right\rangle + 2 \left\langle D _ {t} V (t), V (t) \right\rangle \\ = 2 \left\langle D _ {t} \log_ {\gamma (t)} (x), \log_ {\gamma (t)} (x) \right\rangle - 2 (1 - t) \left\langle D _ {t} \log_ {\gamma (t)} (x), \gamma^ {\prime} (t) \right\rangle + 2 \left\langle \gamma^ {\prime} (t), \log_ {\gamma (t)} (x) \right\rangle + 2 \left\langle D _ {t} V (t), V (t) \right\rangle \\ = 2 (1 - t) \left\langle D _ {t} \log_ {\gamma (t)} (x), - \gamma^ {\prime} (t) \right\rangle + 2 \left\langle D _ {t} V (t), V (t) \right\rangle \\ \leq 2 (1 - t) \zeta \| \gamma^ {\prime} (t) \| ^ {2} + 2 \langle D _ {t} V (t), V (t) \rangle \\ = - 2 \zeta \left\langle - \gamma^ {\prime} (t), (1 - t) \gamma^ {\prime} (t) \right\rangle + 2 \left\langle D _ {t} V (t), V (t) \right\rangle \\ = - 2 (\zeta - 1) \langle D _ {t} V (t), V (t) \rangle \\ = - (\zeta - 1) \left(\frac {d}{d t} \| V (t) \| ^ {2}\right). \\ \end{array}
$$

Integrating both sides from 0 to  $r$  gives

$$
w (r) - w (0) \leq \int_ {0} ^ {r} - (\zeta - 1) \left(\frac {d}{d t} \| V (t) \| ^ {2}\right) d t = - (\zeta - 1) \left(\| V (r) \| ^ {2} - \| V (0) \| ^ {2}\right).
$$

This completes the proof.
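
As a quick sanity check (illustrative, not part of the proof), in Euclidean space we have $\zeta = 1$, parallel transport is the identity, and $\log_p(x) = x - p$, so the inequality of Lemma C.2 should hold with equality. The snippet below confirms this on random data.

```python
import random

random.seed(0)
diffs = []
for _ in range(100):
    # Random points p_A, x and tangent vector v_A in R^4.
    p_a = [random.gauss(0, 1) for _ in range(4)]
    x = [random.gauss(0, 1) for _ in range(4)]
    v_a = [random.gauss(0, 1) for _ in range(4)]
    r = random.random()
    # log_{p_A}(p_B) = r * v_A, i.e. p_B = p_A + r * v_A in flat space.
    p_b = [p + r * v for p, v in zip(p_a, v_a)]
    # v_B = v_A - log_{p_A}(p_B); parallel transport is the identity.
    v_b = [v - (q - p) for v, q, p in zip(v_a, p_b, p_a)]
    lhs = sum((v - (xv - q)) ** 2 for v, xv, q in zip(v_b, x, p_b))
    rhs = sum((v - (xv - p)) ** 2 for v, xv, p in zip(v_a, x, p_a))
    diffs.append(abs(lhs - rhs))

# With zeta = 1 both sides coincide up to floating-point error.
assert max(diffs) < 1e-8
```
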

Lemma C.3. Let  $p_A, p_B, x \in N$  and  $v_A \in T_{p_A}M$ . Define  $v_B = \Gamma_{p_A}^{p_B} \left( v_A - \log_{p_A}(p_B) \right) \in T_{p_B}M$ . If there are  $a, b \in T_{p_A}M$ , and  $r \in (0,1)$  such that  $v_A = a + b$  and  $\log_{p_A}(p_B) = rb$ , then we have

$$
\begin{array}{l} \left\| v _ {B} - \log_ {p _ {B}} (x) \right\| _ {p _ {B}} ^ {2} + (\xi - 1) \| v _ {B} \| _ {p _ {B}} ^ {2} \\ \leq \left\| v _ {A} - \log_ {p _ {A}} (x) \right\| _ {p _ {A}} ^ {2} + (\xi - 1) \left\| v _ {A} \right\| _ {p _ {A}} ^ {2} \\ + \frac {\xi - \delta}{2} \left(\frac {1}{1 - r} - 1\right) \| a \| _ {p _ {A}} ^ {2} \\ \end{array}
$$

for  $\xi \geq \zeta$ .

Proof. Define  $\gamma, V, w$  as in the proof of Lemma 5.2. As in the proof of Lemma 5.2, we can check that  $\gamma'(0) = b$  and  $V'(t) = -\gamma'(t)$ , and that we have

$$
\frac {d}{d t} w (t) = - 2 \left\langle D _ {t} \log_ {\gamma (t)} (x), V (t) \right\rangle + 2 \left\langle D _ {t} V (t), V (t) \right\rangle .
$$

Consider the smooth function  $f_0: p \mapsto \frac{1}{2} \left\| \log_p(x) \right\|^2$ . Because  $\operatorname{grad} f_0(p) = -\log_p(x)$ , we have  $\operatorname{Hess} f_0(\gamma(t))[w] = \nabla_w X$ , where  $X: p \mapsto -\log_p(x)$  (Alimisis et al., 2020, Section 4). By Proposition 5.1, we have  $\delta \| w \|^2 \leq \langle \operatorname{Hess} f_0(\gamma(t))[w], w \rangle \leq \zeta \| w \|^2 \leq \xi \| w \|^2$  (Alimisis et al., 2021, Appendix D). Thus,

$$
- \frac {\xi - \delta}{2} \| w \| ^ {2} = \delta \| w \| ^ {2} - \frac {\xi + \delta}{2} \| w \| ^ {2} \leq \left\langle \operatorname {H e s s} f _ {0} (\gamma (t)) [ w ] - \frac {\xi + \delta}{2} w, w \right\rangle \leq \xi \| w \| ^ {2} - \frac {\xi + \delta}{2} \| w \| ^ {2} = \frac {\xi - \delta}{2} \| w \| ^ {2}.
$$

Because  $\mathrm{Hess}f_0(\gamma (t))$  is self-adjoint, it is diagonalizable. Thus, the norm of the operator  $\mathrm{Hess}f_0(\gamma (t)) - \frac{\xi + \delta}{2} I$  on  $T_{\gamma (t)}M$  can be bounded as

$$
\left\| \operatorname {H e s s} f _ {0} (\gamma (t)) - \frac {\xi + \delta}{2} I \right\| \leq \frac {\xi - \delta}{2}.
$$

Now, we have

$$
\begin{array}{l} - 2 \left\langle D _ {t} \log_ {\gamma (t)} (x), V (t) \right\rangle = 2 \left\langle \nabla_ {\gamma^ {\prime} (t)} X, V (t) \right\rangle \\ = 2 \left\langle \mathrm {H e s s} f _ {0} (\gamma (t)) [ \gamma^ {\prime} (t) ], V (t) \right\rangle \\ = 2 \left\langle \left(\operatorname {H e s s} f _ {0} (\gamma (t)) - \frac {\xi + \delta}{2} I\right) \left(\gamma^ {\prime} (t)\right), V (t) \right\rangle + 2 \left\langle \frac {\xi + \delta}{2} \gamma^ {\prime} (t), V (t) \right\rangle \\ \leq 2 \left\| \left(\operatorname {H e s s} f _ {0} (\gamma (t)) - \frac {\xi + \delta}{2} I\right) \left(\gamma^ {\prime} (t)\right) \right\| \| V (t) \| + 2 \left\langle \frac {\xi + \delta}{2} \gamma^ {\prime} (t), V (t) \right\rangle \\ \leq 2 \left\| \operatorname {H e s s} f _ {0} (\gamma (t)) - \frac {\xi + \delta}{2} I \right\| \| \gamma^ {\prime} (t) \| \| V (t) \| + 2 \left\langle \frac {\xi + \delta}{2} \gamma^ {\prime} (t), V (t) \right\rangle \\ \leq 2 \frac {\xi - \delta}{2} \| \gamma^ {\prime} (t) \| \| V (t) \| + 2 \left\langle \frac {\xi + \delta}{2} \gamma^ {\prime} (t), V (t) \right\rangle . \\ \end{array}
$$

Because the parallel transport preserves inner product and norm, we obtain

$$
\begin{array}{l} - 2 \left\langle D _ {t} \log_ {\gamma (t)} (x), V (t) \right\rangle \leq 2 \frac {\xi - \delta}{2} \| b \| \| a + (1 - t) b \| + (\xi + \delta) \langle b, a + (1 - t) b \rangle \\ = \frac {\xi - \delta}{2} \frac {1}{1 - t} 2 \| (1 - t) b \| \| a + (1 - t) b \| + (\xi + \delta) \langle b, a + (1 - t) b \rangle \\ \leq \frac {\xi - \delta}{2} \frac {1}{1 - t} \left(\| (1 - t) b \| ^ {2} + \| a + (1 - t) b \| ^ {2}\right) + (\xi + \delta) \langle b, a + (1 - t) b \rangle \\ = \frac {\xi - \delta}{2} \frac {1}{1 - t} \| a \| ^ {2} - 2 \xi \langle - b, a + (1 - t) b \rangle \\ = \frac {\xi - \delta}{2} \frac {1}{1 - t} \| a \| ^ {2} - 2 \xi \langle D _ {t} V (t), V (t) \rangle . \\ \end{array}
$$

Thus, for  $t\in (0,r)$ ,

$$
\frac {d}{d t} w (t) \leq \frac {\xi - \delta}{2} \frac {1}{1 - r} \| a \| ^ {2} - 2 (\xi - 1) \langle D _ {t} V (t), V (t) \rangle .
$$

Integrating both sides from 0 to  $r$ , the result follows.

# D. Convergence Analysis for RGD

In this section, we review the iteration complexity of RGD with the fixed step size  $\gamma_{k} = s$  under the assumptions in Section 3.2. The results in this section correspond to (Zhang & Sra, 2016, Theorems 13 and 15).

# D.1. Geodesically convex case

We define the potential function as

$$
\phi_ {k} = s (k + \zeta - 1) \left(f \left(x _ {k}\right) - f \left(x ^ {*}\right)\right) + \frac {1}{2} \left\| \log_ {x _ {k}} \left(x ^ {*}\right) \right\| ^ {2}.
$$

The following theorem says that  $\phi_k$  is non-increasing.

Theorem D.1. Let  $f$  be a geodesically convex and geodesically  $L$ -smooth function. If  $s \leq \frac{1}{L}$ , then the iterates of RGD satisfy

$$
s (k + \zeta) \left(f (x _ {k + 1}) - f (x ^ {*})\right) + \frac {1}{2} \left\| \log_ {x _ {k + 1}} (x ^ {*}) \right\| ^ {2} \leq s (k + \zeta - 1) \left(f (x _ {k}) - f (x ^ {*})\right) + \frac {1}{2} \left\| \log_ {x _ {k}} (x ^ {*}) \right\| ^ {2}
$$

for all  $k\geq 0$ .

Proof. (Step 1). In this step,  $\langle \cdot ,\cdot \rangle$  and  $\| \cdot \|$  always denote the inner product and the norm on  $T_{x_k}M$ . It follows from the geodesic convexity of  $f$  that

$$
\begin{array}{l} f \left(x ^ {*}\right) \geq f \left(x _ {k}\right) + \left\langle \operatorname {g r a d} f \left(x _ {k}\right), \log_ {x _ {k}} \left(x ^ {*}\right) \right\rangle \\ = f \left(x _ {k}\right) - \frac {1}{s} \left\langle \log_ {x _ {k}} \left(x _ {k + 1}\right), \log_ {x _ {k}} \left(x ^ {*}\right) \right\rangle . \\ \end{array}
$$

By the geodesic  $\frac{1}{s}$ -smoothness of  $f$ , we have

$$
\begin{array}{l} f \left(x _ {k + 1}\right) \leq f \left(x _ {k}\right) + \left\langle \operatorname {g r a d} f \left(x _ {k}\right), \log_ {x _ {k}} \left(x _ {k + 1}\right) \right\rangle + \frac {1}{2 s} \left\| \log_ {x _ {k}} \left(x _ {k + 1}\right) \right\| ^ {2} \\ = f \left(x _ {k}\right) - \frac {1}{2 s} \left\| \log_ {x _ {k}} \left(x _ {k + 1}\right) \right\| ^ {2}. \\ \end{array}
$$

Taking a weighted sum of these inequalities yields

$$
\begin{array}{l} 0 \geq \left[ f (x _ {k}) - f (x ^ {*}) - \frac {1}{s} \left\langle \log_ {x _ {k}} (x _ {k + 1}), \log_ {x _ {k}} (x ^ {*}) \right\rangle \right] \\ + (k + \zeta) \left[ f (x _ {k + 1}) - f (x _ {k}) + \frac {1}{2 s} \left\| \log_ {x _ {k}} (x _ {k + 1}) \right\| ^ {2} \right] \\ = (k + \zeta) (f (x _ {k + 1}) - f (x ^ {*})) - (k + \zeta - 1) (f (x _ {k}) - f (x ^ {*})) \\ - \frac {1}{s} \left\langle \log_ {x _ {k}} (x _ {k + 1}), \log_ {x _ {k}} (x ^ {*}) \right\rangle + \frac {k + \zeta}{2 s} \left\| \log_ {x _ {k}} (x _ {k + 1}) \right\| ^ {2} \\ \geq (k + \zeta) (f (x _ {k + 1}) - f (x ^ {*})) - (k + \zeta - 1) (f (x _ {k}) - f (x ^ {*})) \\ - \frac {1}{s} \left\langle \log_ {x _ {k}} (x _ {k + 1}), \log_ {x _ {k}} (x ^ {*}) \right\rangle + \frac {\zeta}{2 s} \left\| \log_ {x _ {k}} (x _ {k + 1}) \right\| ^ {2} \\ = (k + \zeta) \left(f \left(x _ {k + 1}\right) - f \left(x ^ {*}\right)\right) - (k + \zeta - 1) \left(f \left(x _ {k}\right) - f \left(x ^ {*}\right)\right) \\ + \frac {1}{2 s} \left(\zeta \left\| \log_ {x _ {k}} \left(x _ {k + 1}\right) \right\| ^ {2} - 2 \left\langle \log_ {x _ {k}} \left(x _ {k + 1}\right), \log_ {x _ {k}} \left(x ^ {*}\right) \right\rangle\right) \\ = (k + \zeta) (f (x _ {k + 1}) - f (x ^ {*})) - (k + \zeta - 1) (f (x _ {k}) - f (x ^ {*})) \\ + \frac {1}{2 s} \left(\left\| \log_ {x _ {k}} \left(x _ {k + 1}\right) - \log_ {x _ {k}} \left(x ^ {*}\right) \right\| ^ {2} + (\zeta - 1) \left\| \log_ {x _ {k}} \left(x _ {k + 1}\right) \right\| ^ {2} - \left\| \log_ {x _ {k}} \left(x ^ {*}\right) \right\| ^ {2}\right). \\ \end{array}
$$

(Step 2: Handling metric distortion). By Lemma 5.2 with  $p_A = x_k$ ,  $p_B = x_{k+1}$ ,  $x = x^*$ ,  $v_A = \log_{x_k}(x_{k+1})$ ,  $v_B = 0$ ,  $r = 1$ , we have

$$
\left\| \log_ {x _ {k + 1}} \left(x ^ {*}\right) \right\| _ {x _ {k + 1}} ^ {2} \leq \left\| \log_ {x _ {k}} \left(x _ {k + 1}\right) - \log_ {x _ {k}} \left(x ^ {*}\right) \right\| _ {x _ {k}} ^ {2} + (\zeta - 1) \left\| \log_ {x _ {k}} \left(x _ {k + 1}\right) \right\| _ {x _ {k}} ^ {2}.
$$

Combining this inequality with the result in Step 1 gives

$$
\begin{array}{l} 0 \geq (k + \zeta) (f (x _ {k + 1}) - f (x ^ {*})) - (k + \zeta - 1) (f (x _ {k}) - f (x ^ {*})) \\ + \frac {1}{2 s} \left(\left\| \log_ {x _ {k}} \left(x _ {k + 1}\right) - \log_ {x _ {k}} \left(x ^ {*}\right) \right\| _ {x _ {k}} ^ {2} + (\zeta - 1) \left\| \log_ {x _ {k}} \left(x _ {k + 1}\right) \right\| _ {x _ {k}} ^ {2} - \left\| \log_ {x _ {k}} \left(x ^ {*}\right) \right\| _ {x _ {k}} ^ {2}\right) \\ + \frac {1}{2 s} \left(\left\| \log_ {x _ {k + 1}} \left(x ^ {*}\right) \right\| _ {x _ {k + 1}} ^ {2} - \left\| \log_ {x _ {k}} \left(x _ {k + 1}\right) - \log_ {x _ {k}} \left(x ^ {*}\right) \right\| _ {x _ {k}} ^ {2} - (\zeta - 1) \left\| \log_ {x _ {k}} \left(x _ {k + 1}\right) \right\| _ {x _ {k}} ^ {2}\right) \\ = (k + \zeta) (f (x _ {k + 1}) - f (x ^ {*})) - (k + \zeta - 1) (f (x _ {k}) - f (x ^ {*})) + \frac {1}{2 s} \left\| \log_ {x _ {k + 1}} (x ^ {*}) \right\| _ {x _ {k + 1}} ^ {2} - \frac {1}{2 s} \left\| \log_ {x _ {k}} (x ^ {*}) \right\| _ {x _ {k}} ^ {2} \\ = \frac {\phi_ {k + 1} - \phi_ {k}}{s}. \\ \end{array}
$$

Corollary D.2. Let  $f$  be a geodesically convex and geodesically  $L$ -smooth function. Then, RGD with the step size  $s = \frac{1}{L}$  finds an  $\epsilon$ -approximate solution in  $O\left(\frac{\zeta L}{\epsilon}\right)$  iterations.

Proof. It follows from Theorem D.1 that

$$
f (x _ {k}) - f (x ^ {*}) \leq \frac {\phi_ {k}}{s (k + \zeta - 1)} \leq \frac {\phi_ {0}}{s (k + \zeta - 1)} = \frac {1}{s (k + \zeta - 1)} \left(s (\zeta - 1) (f (x _ {0}) - f (x ^ {*})) + \frac {1}{2} \left\| \log_ {x _ {0}} (x ^ {*}) \right\| ^ {2}\right).
$$

By geodesic  $\frac{1}{s}$ -smoothness of  $f$ , we have

$$
f (x _ {k}) - f (x ^ {*}) \leq \frac {1}{s (k + \zeta - 1)} \left(s (\zeta - 1) \frac {1}{2 s} \left\| \log_ {x _ {0}} (x ^ {*}) \right\| ^ {2} + \frac {1}{2} \left\| \log_ {x _ {0}} (x ^ {*}) \right\| ^ {2}\right) = \frac {\zeta L}{2 (k + \zeta - 1)} \left\| \log_ {x _ {0}} (x ^ {*}) \right\| ^ {2}.
$$

Thus, we have  $f(x_{k}) - f(x^{*}) \leq \epsilon$  whenever  $k \geq \frac{\zeta L}{2\epsilon} \left\| \log_{x_0}(x^*) \right\|^2 - (\zeta - 1)$ . Hence, we obtain an  $O\left(\frac{\zeta L}{\epsilon}\right)$  iteration complexity.

This result implies that the iteration complexity of RGD for the geodesically convex case is the same as that of GD, since  $\zeta$  is a constant.
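
For intuition, the following illustrative snippet verifies the Euclidean specialization ($\zeta = 1$) of Theorem D.1: the potential $\phi_k = sk\left(f(x_k) - f(x^*)\right) + \frac{1}{2}\|x_k - x^*\|^2$ is non-increasing along gradient descent with $s = 1/L$ on a quadratic. This is a sanity check under assumed data, not part of the analysis.

```python
# f(x) = (1/2) sum_i a_i x_i^2 with minimizer x* = 0; L = 100, s = 1/L.
a = [1.0, 10.0, 100.0]
s = 1.0 / 100.0

def f(x):
    return 0.5 * sum(ai * v * v for ai, v in zip(a, x))

x = [1.0, 1.0, 1.0]
phis = []
for k in range(50):
    # Euclidean potential with zeta = 1: phi_k = s*k*gap + (1/2)||x_k||^2.
    phis.append(s * k * f(x) + 0.5 * sum(v * v for v in x))
    x = [v - s * ai * v for ai, v in zip(a, x)]   # gradient step

# Theorem D.1 (zeta = 1) predicts a non-increasing potential.
assert all(phis[i + 1] <= phis[i] + 1e-9 for i in range(len(phis) - 1))
```
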

# D.2. Geodesically strongly convex case

We define the potential function as

$$
\phi_ {k} = (1 - \mu s) ^ {- k} \left(f \left(x _ {k}\right) - f \left(x ^ {*}\right) + \frac {\mu}{2} \left\| \log_ {x _ {k}} \left(x ^ {*}\right) \right\| ^ {2}\right).
$$

The following theorem states that  $\phi_{k}$  is non-increasing.

Theorem D.3. Let  $f$  be a geodesically  $\mu$ -strongly convex and geodesically  $L$ -smooth function. If  $s \leq \min \left\{\frac{1}{L}, \frac{1}{\zeta \mu}\right\}$ , then the iterates of RGD satisfy

$$
\left(1 - \mu s\right) ^ {- (k + 1)} \left(f \left(x _ {k + 1}\right) - f \left(x ^ {*}\right) + \frac {\mu}{2} \left\| \log_ {x _ {k + 1}} \left(x ^ {*}\right) \right\| ^ {2}\right) \leq \left(1 - \mu s\right) ^ {- k} \left(f \left(x _ {k}\right) - f \left(x ^ {*}\right) + \frac {\mu}{2} \left\| \log_ {x _ {k}} \left(x ^ {*}\right) \right\| ^ {2}\right)
$$

for all  $k\geq 0$ .

Proof. (Step 1). In this step,  $\langle \cdot ,\cdot \rangle$  and  $\| \cdot \|$  always denote the inner product and the norm on  $T_{x_k}M$ . Set  $q = \mu s$ . By geodesic  $\mu$ -strong convexity of  $f$ , we have

$$
\begin{array}{l} f \left(x ^ {*}\right) \geq f \left(x _ {k}\right) + \left\langle \operatorname {g r a d} f \left(x _ {k}\right), \log_ {x _ {k}} \left(x ^ {*}\right) \right\rangle + \frac {\mu}{2} \left\| \log_ {x _ {k}} \left(x ^ {*}\right) \right\| ^ {2} \\ = f \left(x _ {k}\right) - \frac {1}{s} \left\langle \log_ {x _ {k}} \left(x _ {k + 1}\right), \log_ {x _ {k}} \left(x ^ {*}\right) \right\rangle + \frac {q}{2 s} \left\| \log_ {x _ {k}} \left(x ^ {*}\right) \right\| ^ {2}. \\ \end{array}
$$

By geodesic  $\frac{1}{s}$ -smoothness of  $f$ , we have

$$
\begin{array}{l} f \left(x _ {k + 1}\right) \leq f \left(x _ {k}\right) + \left\langle \operatorname {g r a d} f \left(x _ {k}\right), \log_ {x _ {k}} \left(x _ {k + 1}\right) \right\rangle + \frac {1}{2 s} \left\| \log_ {x _ {k}} \left(x _ {k + 1}\right) \right\| ^ {2} \\ = f \left(x _ {k}\right) - \frac {1}{2 s} \left\| \log_ {x _ {k}} \left(x _ {k + 1}\right) \right\| ^ {2}. \\ \end{array}
$$

Note that  $\zeta q \leq 1$ . Taking a weighted sum of these inequalities, we arrive at the inequality

$$
\begin{array}{l} 0 \geq q \left[ f (x _ {k}) - f (x ^ {*}) - \frac {1}{s} \left\langle \log_ {x _ {k}} (x _ {k + 1}), \log_ {x _ {k}} (x ^ {*}) \right\rangle + \frac {q}{2 s} \left\| \log_ {x _ {k}} (x ^ {*}) \right\| ^ {2} \right] \\ + \left[ f (x _ {k + 1}) - f (x _ {k}) + \frac {1}{2 s} \left\| \log_ {x _ {k}} (x _ {k + 1}) \right\| ^ {2} \right] \\ = f \left(x _ {k + 1}\right) - f \left(x ^ {*}\right) - (1 - q) \left(f \left(x _ {k}\right) - f \left(x ^ {*}\right)\right) \\ - \frac {q}{s} \left\langle \log_ {x _ {k}} \left(x _ {k + 1}\right), \log_ {x _ {k}} \left(x ^ {*}\right) \right\rangle + \frac {q ^ {2}}{2 s} \left\| \log_ {x _ {k}} \left(x ^ {*}\right) \right\| ^ {2} + \frac {1}{2 s} \left\| \log_ {x _ {k}} \left(x _ {k + 1}\right) \right\| ^ {2} \\ \geq f \left(x _ {k + 1}\right) - f \left(x ^ {*}\right) - (1 - q) \left(f \left(x _ {k}\right) - f \left(x ^ {*}\right)\right) \\ + \frac {q}{2 s} \left(- 2 \left\langle \log_ {x _ {k}} \left(x _ {k + 1}\right), \log_ {x _ {k}} \left(x ^ {*}\right) \right\rangle + q \left\| \log_ {x _ {k}} \left(x ^ {*}\right) \right\| ^ {2} + \zeta \left\| \log_ {x _ {k}} \left(x _ {k + 1}\right) \right\| ^ {2}\right) \\ = f \left(x _ {k + 1}\right) - f \left(x ^ {*}\right) - (1 - q) \left(f \left(x _ {k}\right) - f \left(x ^ {*}\right)\right) \\ + \frac {q}{2 s} \left(\left\| \log_ {x _ {k}} \left(x _ {k + 1}\right) - \log_ {x _ {k}} \left(x ^ {*}\right) \right\| ^ {2} + (\zeta - 1) \left\| \log_ {x _ {k}} \left(x _ {k + 1}\right) \right\| ^ {2} - (1 - q) \left\| \log_ {x _ {k}} \left(x ^ {*}\right) \right\| ^ {2}\right). \\ \end{array}
$$

(Step 2: Handling metric distortion). By Lemma 5.2 with  $p_A = x_k, p_B = x_{k+1}, x = x^*, v_A = \log_{x_k}(x_{k+1}), v_B = 0, r = 1$ , we have

$$
\left\| \log_ {x _ {k + 1}} \left(x ^ {*}\right) \right\| _ {x _ {k + 1}} ^ {2} \leq \left\| \log_ {x _ {k}} \left(x _ {k + 1}\right) - \log_ {x _ {k}} \left(x ^ {*}\right) \right\| _ {x _ {k}} ^ {2} + (\zeta - 1) \left\| \log_ {x _ {k}} \left(x _ {k + 1}\right) \right\| _ {x _ {k}} ^ {2}.
$$

Combining this inequality with the result in Step 1 gives

$$
\begin{array}{l} 0 \geq f \left(x _ {k + 1}\right) - f \left(x ^ {*}\right) - (1 - q) \left(f \left(x _ {k}\right) - f \left(x ^ {*}\right)\right) \\ + \frac {q}{2 s} \left(\left\| \log_ {x _ {k}} \left(x _ {k + 1}\right) - \log_ {x _ {k}} \left(x ^ {*}\right) \right\| _ {x _ {k}} ^ {2} + (\zeta - 1) \left\| \log_ {x _ {k}} \left(x _ {k + 1}\right) \right\| _ {x _ {k}} ^ {2} - (1 - q) \left\| \log_ {x _ {k}} \left(x ^ {*}\right) \right\| _ {x _ {k}} ^ {2}\right) \\ + \frac {q}{2 s} \left(\left\| \log_ {x _ {k + 1}} \left(x ^ {*}\right) \right\| _ {x _ {k + 1}} ^ {2} - \left\| \log_ {x _ {k}} \left(x _ {k + 1}\right) - \log_ {x _ {k}} \left(x ^ {*}\right) \right\| _ {x _ {k}} ^ {2} - (\zeta - 1) \left\| \log_ {x _ {k}} \left(x _ {k + 1}\right) \right\| _ {x _ {k}} ^ {2}\right) \\ = f \left(x _ {k + 1}\right) - f \left(x ^ {*}\right) - (1 - q) \left(f \left(x _ {k}\right) - f \left(x ^ {*}\right)\right) + \frac {q}{2 s} \left\| \log_ {x _ {k + 1}} \left(x ^ {*}\right) \right\| _ {x _ {k + 1}} ^ {2} - \frac {q}{2 s} (1 - q) \left\| \log_ {x _ {k}} \left(x ^ {*}\right) \right\| _ {x _ {k}} ^ {2} \\ = \left(f (x _ {k + 1}) - f (x ^ {*}) + \frac {\mu}{2} \left\| \log_ {x _ {k + 1}} (x ^ {*}) \right\| _ {x _ {k + 1}} ^ {2}\right) - (1 - q) \left(f (x _ {k}) - f (x ^ {*}) + \frac {\mu}{2} \left\| \log_ {x _ {k}} (x ^ {*}) \right\| _ {x _ {k}} ^ {2}\right) \\ = (1 - q) ^ {k + 1} \left(\phi_ {k + 1} - \phi_ {k}\right). \\ \end{array}
$$

Corollary D.4. Let  $f$  be a geodesically  $\mu$ -strongly convex and geodesically  $L$ -smooth function. Then, RGD with step size  $s = \frac{1}{\zeta L}$  finds an  $\epsilon$ -approximate solution in  $O\left(\frac{\zeta L}{\mu}\log \frac{L}{\epsilon}\right)$  iterations.

Proof. By Theorem D.3, we have

$$
f \left(x _ {k}\right) - f \left(x ^ {*}\right) \leq \left(1 - \mu s\right) ^ {k} \phi_ {k} \leq \left(1 - \mu s\right) ^ {k} \phi_ {0} = \left(1 - \mu s\right) ^ {k} \left(f \left(x _ {0}\right) - f \left(x ^ {*}\right) + \frac {\mu}{2} \left\| \log_ {x _ {0}} \left(x ^ {*}\right) \right\| ^ {2}\right).
$$

It follows from the geodesic  $L$ -smoothness of  $f$  and the inequality  $\left(1 - \frac{\mu}{\zeta L}\right)^k \leq e^{-\frac{\mu}{\zeta L} k}$  that

$$
f \left(x _ {k}\right) - f \left(x ^ {*}\right) \leq \left(1 - \frac {\mu}{\zeta L}\right) ^ {k} \left(\frac {L}{2} \left\| \log_ {x _ {0}} \left(x ^ {*}\right) \right\| ^ {2} + \frac {\mu}{2} \left\| \log_ {x _ {0}} \left(x ^ {*}\right) \right\| ^ {2}\right) \leq e ^ {- \frac {\mu}{\zeta L} k} L \left\| \log_ {x _ {0}} \left(x ^ {*}\right) \right\| ^ {2}.
$$

Thus, we have  $f(x_{k}) - f(x^{*})\leq \epsilon$  whenever  $k\geq \frac{\zeta L}{\mu}\log \left(\frac{L}{\epsilon}\left\| \log_{x_0}(x^*)\right\| ^2\right)$ . Accordingly, we obtain an  $O\left(\frac{\zeta L}{\mu}\log \frac{L}{\epsilon}\right)$  iteration complexity.

This result implies that the iteration complexity of RGD for the geodesically strongly convex case is the same as that of GD, since  $\zeta$  is a constant. Another proof of the iteration complexity of RGD for g-strongly convex functions can be found in (Criscitiello & Boumal, 2021, Proposition 1.8).
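
As with the convex case, the Euclidean specialization ($\zeta = 1$) of Theorem D.3 can be checked numerically; the illustrative snippet below verifies that the potential $\phi_k = (1-\mu s)^{-k}\left(f(x_k) - f(x^*) + \frac{\mu}{2}\|x_k - x^*\|^2\right)$ is non-increasing along gradient descent on an assumed quadratic instance.

```python
# f(x) = (1/2) sum_i a_i x_i^2 with x* = 0; mu = 1, L = 100,
# so s = 1/L satisfies s <= min(1/L, 1/(zeta*mu)) with zeta = 1.
a = [1.0, 10.0, 100.0]
mu, s = 1.0, 1.0 / 100.0

def f(x):
    return 0.5 * sum(ai * v * v for ai, v in zip(a, x))

x = [1.0, 1.0, 1.0]
phis = []
for k in range(50):
    # phi_k = (1 - mu*s)^(-k) * (gap + (mu/2)||x_k||^2).
    phis.append((1 - mu * s) ** (-k) * (f(x) + 0.5 * mu * sum(v * v for v in x)))
    x = [v - s * ai * v for ai, v in zip(a, x)]   # gradient step

# Theorem D.3 (zeta = 1) predicts a non-increasing potential.
assert all(phis[i + 1] <= phis[i] + 1e-9 for i in range(len(phis) - 1))
```
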

# E. Convergence Analysis for RNAG-C

Theorem 5.4. Let  $f$  be a  $g$ -convex and geodesically  $L$ -smooth function. If the parameters  $\xi$  and  $T$  of RNAG-  $C$  satisfy  $\xi \geq \zeta$  and

$$
\begin{array}{l} \frac {\xi - \delta}{2} \left(\frac {1}{1 - \xi / \lambda_ {k}} - 1\right) \\ \leq (\xi - \zeta) \left(\frac {1}{(1 - \xi / (\lambda_ {k} + \xi - 1)) ^ {2}} - 1\right) \\ \end{array}
$$

for all  $k \geq 0$ , then the iterates of RNAG-  $C$  satisfy  $\phi_{k + 1} \leq \phi_k$  for all  $k \geq 0$ , where  $\phi_k$  is defined as (5).

Proof. (Step 1). In this step,  $\langle \cdot ,\cdot \rangle$  and  $\| \cdot \|$  always denote the inner product and the norm on  $T_{y_k}M$ . It is easy to check that  $\operatorname{grad}f(y_k) = -\frac{\xi}{s\lambda_k} (\bar{\bar{v}}_{k + 1} - v_k)$ ,  $\log_{y_k}(x_k) = -\frac{\xi}{\lambda_k - 1} v_k$ , and  $\lambda_k^2 -\lambda_k\leq \lambda_{k - 1}^2$ . By the geodesic convexity of  $f$ , we have

$$
\begin{array}{l} f \left(x ^ {*}\right) \geq f \left(y _ {k}\right) + \left\langle \operatorname {g r a d} f \left(y _ {k}\right), \log_ {y _ {k}} \left(x ^ {*}\right) \right\rangle \\ = f \left(y _ {k}\right) - \frac {\xi}{s \lambda_ {k}} \left\langle \bar {\bar {v}} _ {k + 1} - v _ {k}, \log_ {y _ {k}} \left(x ^ {*}\right) \right\rangle , \\ \end{array}
$$

$$
\begin{array}{l} f \left(x _ {k}\right) \geq f \left(y _ {k}\right) + \left\langle \operatorname {g r a d} f \left(y _ {k}\right), \log_ {y _ {k}} \left(x _ {k}\right) \right\rangle \\ = f \left(y _ {k}\right) + \frac {\xi^ {2}}{s \left(\lambda_ {k} ^ {2} - \lambda_ {k}\right)} \langle \bar {\bar {v}} _ {k + 1} - v _ {k}, v _ {k} \rangle . \\ \end{array}
$$

It follows from the geodesic  $\frac{1}{s}$ -smoothness of  $f$  that

$$
\begin{array}{l} f \left(x _ {k + 1}\right) \leq f \left(y _ {k}\right) + \left\langle \operatorname {g r a d} f \left(y _ {k}\right), \log_ {y _ {k}} \left(x _ {k + 1}\right) \right\rangle + \frac {1}{2 s} \left\| \log_ {y _ {k}} \left(x _ {k + 1}\right) \right\| ^ {2} \\ = f (y _ {k}) - \frac {s}{2} \| \operatorname {g r a d} f (y _ {k}) \| ^ {2} \\ = f \left(y _ {k}\right) - \frac {\xi^ {2}}{2 s \lambda_ {k} ^ {2}} \left\| \bar {\bar {v}} _ {k + 1} - v _ {k} \right\| ^ {2}. \\ \end{array}
$$

Taking a weighted sum of these inequalities yields

$$
\begin{array}{l} 0 \geq \lambda_ {k} \left[ f (y _ {k}) - f (x ^ {*}) - \frac {\xi}{s \lambda_ {k}} \left\langle \bar {\bar {v}} _ {k + 1} - v _ {k}, \log_ {y _ {k}} (x ^ {*}) \right\rangle \right] \\ + \left(\lambda_ {k} ^ {2} - \lambda_ {k}\right) \left[ f \left(y _ {k}\right) - f \left(x _ {k}\right) + \frac {\xi^ {2}}{s \left(\lambda_ {k} ^ {2} - \lambda_ {k}\right)} \left\langle \bar {\bar {v}} _ {k + 1} - v _ {k}, v _ {k} \right\rangle \right] \\ + \lambda_ {k} ^ {2} \left[ f (x _ {k + 1}) - f (y _ {k}) + \frac {\xi^ {2}}{2 s \lambda_ {k} ^ {2}} \| \bar {\bar {v}} _ {k + 1} - v _ {k} \| ^ {2} \right] \\ = \lambda_ {k} ^ {2} (f (x _ {k + 1}) - f (x ^ {*})) - (\lambda_ {k} ^ {2} - \lambda_ {k}) (f (x _ {k}) - f (x ^ {*})) \\ - \frac {\xi}{s} \left\langle \bar {\bar {v}} _ {k + 1} - v _ {k}, \log_ {y _ {k}} \left(x ^ {*}\right) \right\rangle + \frac {\xi^ {2}}{s} \left\langle \bar {\bar {v}} _ {k + 1} - v _ {k}, v _ {k} \right\rangle + \frac {\xi^ {2}}{2 s} \| \bar {\bar {v}} _ {k + 1} - v _ {k} \| ^ {2} \\ \geq \lambda_ {k} ^ {2} (f (x _ {k + 1}) - f (x ^ {*})) - \lambda_ {k - 1} ^ {2} (f (x _ {k}) - f (x ^ {*})) \\ + \frac {\xi}{2 s} \left(- 2 \left\langle \bar {\bar {v}} _ {k + 1} - v _ {k}, \log_ {y _ {k}} \left(x ^ {*}\right) \right\rangle + 2 \xi \left\langle \bar {\bar {v}} _ {k + 1} - v _ {k}, v _ {k} \right\rangle + \xi \left\| \bar {\bar {v}} _ {k + 1} - v _ {k} \right\| ^ {2}\right) \\ = \lambda_ {k} ^ {2} \left(f \left(x _ {k + 1}\right) - f \left(x ^ {*}\right)\right) - \lambda_ {k - 1} ^ {2} \left(f \left(x _ {k}\right) - f \left(x ^ {*}\right)\right) \\ + \frac {\xi}{2 s} \left(\| \bar {\bar {v}} _ {k + 1} - v _ {k} \| ^ {2} - 2 \left\langle \bar {\bar {v}} _ {k + 1} - v _ {k}, \log_ {y _ {k}} \left(x ^ {*}\right) - v _ {k} \right\rangle + 2 (\xi - 1) \left\langle \bar {\bar {v}} _ {k + 1} - v _ {k}, v _ {k} \right\rangle + (\xi - 1) \| \bar {\bar {v}} _ {k + 1} - v _ {k} \| ^ {2}\right) \\ = \lambda_ {k} ^ {2} (f (x _ {k + 1}) - f (x ^ {*})) - \lambda_ {k - 1} ^ {2} (f (x _ {k}) - f (x ^ {*})) \\ + \frac {\xi}{2 s} \left(\| \bar {\bar {v}} _ {k + 1} - v _ {k} \| ^ {2} - 2 \left\langle \bar {\bar {v}} _ {k + 1} - v _ {k}, \log_ {y _ {k}} \left(x ^ {*}\right) - v _ {k} \right\rangle + (\xi - 1) \| \bar {\bar {v}} _ {k + 1} \| ^ {2} - (\xi - 1) \| v _ {k} \| ^ {2}\right). \\ \end{array}
$$

Note that

$$
\left\| \bar {\bar {v}} _ {k + 1} - \log_ {y _ {k}} \left(x ^ {*}\right) \right\| ^ {2} - \left\| v _ {k} - \log_ {y _ {k}} \left(x ^ {*}\right) \right\| ^ {2} = \| \bar {\bar {v}} _ {k + 1} - v _ {k} \| ^ {2} - 2 \left\langle \bar {\bar {v}} _ {k + 1} - v _ {k}, \log_ {y _ {k}} \left(x ^ {*}\right) - v _ {k} \right\rangle .
$$

Thus, we obtain

$$
\begin{array}{l} 0 \geq \lambda_ {k} ^ {2} (f (x _ {k + 1}) - f (x ^ {*})) - \lambda_ {k - 1} ^ {2} (f (x _ {k}) - f (x ^ {*})) \\ + \frac {\xi}{2 s} \left(\left\| \bar {\bar {v}} _ {k + 1} - \log_ {y _ {k}} \left(x ^ {*}\right) \right\| ^ {2} - \left\| v _ {k} - \log_ {y _ {k}} \left(x ^ {*}\right) \right\| ^ {2} + (\xi - 1) \left\| \bar {\bar {v}} _ {k + 1} \right\| ^ {2} - (\xi - 1) \| v _ {k} \| ^ {2}\right). \\ \end{array}
$$

(Step 2: Handling metric distortion). By Lemma 5.3 with  $p_A = y_k$ ,  $p_B = x_{k+1}$ ,  $x = x^*$ ,  $v_A = \bar{\bar{v}}_{k+1}$ ,  $v_B = \bar{v}_{k+1}$ ,  $a = v_k$ ,  $b = -\gamma_k \operatorname{grad} f(y_k) = -\frac{s\lambda_k}{\xi} \operatorname{grad} f(y_k)$ ,  $r = \frac{s}{\gamma_k} = \frac{\xi}{\lambda_k} \in (0,1)$ , we have

$$
\begin{array}{l} \left\| \log_ {x _ {k + 1}} \left(x ^ {*}\right) - \bar {v} _ {k + 1} \right\| _ {x _ {k + 1}} ^ {2} + (\xi - 1) \| \bar {v} _ {k + 1} \| _ {x _ {k + 1}} ^ {2} \\ \leq \left\| \log_ {y _ {k}} \left(x ^ {*}\right) - \bar {\bar {v}} _ {k + 1} \right\| _ {y _ {k}} ^ {2} + (\xi - 1) \| \bar {\bar {v}} _ {k + 1} \| _ {y _ {k}} ^ {2} + \frac {\xi - \delta}{2} \left(\frac {1}{1 - \xi / \lambda_ {k}} - 1\right) \| v _ {k} \| _ {y _ {k}} ^ {2}. \\ \end{array}
$$

It follows from Lemma 5.2 with  $p_A = x_k, p_B = y_k, x = x^*, v_A = \bar{v}_k, v_B = v_k, r = \tau_k = \frac{\xi}{\lambda_k + \xi - 1}$  that

$$
\begin{array}{l} \left\| \log_ {x _ {k}} \left(x ^ {*}\right) - \bar {v} _ {k} \right\| _ {x _ {k}} ^ {2} + (\xi - 1) \| \bar {v} _ {k} \| _ {x _ {k}} ^ {2} = \left(\left\| \log_ {x _ {k}} \left(x ^ {*}\right) - \bar {v} _ {k} \right\| _ {x _ {k}} ^ {2} + (\zeta - 1) \| \bar {v} _ {k} \| _ {x _ {k}} ^ {2}\right) + (\xi - \zeta) \| \bar {v} _ {k} \| _ {x _ {k}} ^ {2} \\ \geq \left(\left\| \log_ {y _ {k}} \left(x ^ {*}\right) - v _ {k} \right\| _ {y _ {k}} ^ {2} + (\zeta - 1) \left\| v _ {k} \right\| _ {y _ {k}} ^ {2}\right) + (\xi - \zeta) \left\| \bar {v} _ {k} \right\| _ {x _ {k}} ^ {2} \\ = \left\| \log_ {y _ {k}} \left(x ^ {*}\right) - v _ {k} \right\| _ {y _ {k}} ^ {2} + (\zeta - 1) \| v _ {k} \| _ {y _ {k}} ^ {2} + (\xi - \zeta) \frac {1}{(1 - \tau_ {k}) ^ {2}} \| v _ {k} \| _ {y _ {k}} ^ {2} \\ = \left\| \log_ {y _ {k}} \left(x ^ {*}\right) - v _ {k} \right\| _ {y _ {k}} ^ {2} + (\xi - 1) \| v _ {k} \| _ {y _ {k}} ^ {2} + (\xi - \zeta) \left(\frac {1}{\left(1 - \tau_ {k}\right) ^ {2}} - 1\right) \| v _ {k} \| _ {y _ {k}} ^ {2}. \\ \end{array}
$$

Combining these inequalities with the result in Step 1 gives

$$
\begin{array}{l} 0 \geq s \lambda_ {k} ^ {2} (f (x _ {k + 1}) - f (x ^ {*})) - s \lambda_ {k - 1} ^ {2} (f (x _ {k}) - f (x ^ {*})) \\ + \frac {\xi}{2} \left(\left\| \bar {\bar {v}} _ {k + 1} - \log_ {y _ {k}} \left(x ^ {*}\right) \right\| ^ {2} + (\xi - 1) \left\| \bar {\bar {v}} _ {k + 1} \right\| ^ {2} - \left\| v _ {k} - \log_ {y _ {k}} \left(x ^ {*}\right) \right\| ^ {2} - (\xi - 1) \left\| v _ {k} \right\| ^ {2}\right) \\ + \frac {\xi}{2} \left[ \left\| \log_ {x _ {k + 1}} \left(x ^ {*}\right) - \bar {v} _ {k + 1} \right\| _ {x _ {k + 1}} ^ {2} + (\xi - 1) \| \bar {v} _ {k + 1} \| _ {x _ {k + 1}} ^ {2} \right. \\ \left. - \left\| \log_ {y _ {k}} \left(x ^ {*}\right) - \bar {\bar {v}} _ {k + 1} \right\| _ {y _ {k}} ^ {2} - (\xi - 1) \| \bar {\bar {v}} _ {k + 1} \| _ {y _ {k}} ^ {2} - \frac {\xi - \delta}{2} \left(\frac {1}{1 - \xi / \lambda_ {k}} - 1\right) \| v _ {k} \| _ {y _ {k}} ^ {2} \right] \\ + \frac {\xi}{2} \left[ \left\| \log_ {y _ {k}} \left(x ^ {*}\right) - v _ {k} \right\| _ {y _ {k}} ^ {2} + (\xi - 1) \| v _ {k} \| _ {y _ {k}} ^ {2} + (\xi - \zeta) \left(\frac {1}{\left(1 - \tau_ {k}\right) ^ {2}} - 1\right) \| v _ {k} \| _ {y _ {k}} ^ {2} \right. \\ \left. - \left\| \log_ {x _ {k}} \left(x ^ {*}\right) - \bar {v} _ {k} \right\| _ {x _ {k}} ^ {2} - (\xi - 1) \| \bar {v} _ {k} \| _ {x _ {k}} ^ {2} \right] \\ = \phi_ {k + 1} - \phi_ {k} + \frac {\xi}{2} \left((\xi - \zeta) \left(\frac {1}{(1 - \tau_ {k}) ^ {2}} - 1\right) - \frac {\xi - \delta}{2} \left(\frac {1}{1 - \xi / \lambda_ {k}} - 1\right)\right) \| v _ {k} \| _ {y _ {k}} ^ {2} \\ \geq \phi_ {k + 1} - \phi_ {k}. \\ \end{array}
$$

Corollary E.1. Let  $f$  be a  $g$ -convex and geodesically  $L$ -smooth function. Then, RNAG-C with parameters  $\xi = \zeta + 3(\zeta - \delta)$ ,  $T = 4\xi$  and step size  $s = \frac{1}{L}$  finds an  $\epsilon$ -approximate solution in  $O\left(\xi \sqrt{\frac{L}{\epsilon}}\right)$  iterations.

Proof. (Step 1: Checking the condition for Theorem 5.4). A straightforward calculation shows that

$$
2 \left(\frac {1}{1 - t} - 1\right) \leq 3 \left(\frac {1}{1 - (3 / 4) t} - 1\right)
$$

for all  $t \in (0,1/3]$ . For convenience, let  $r = \frac{s}{\gamma_k} = \frac{\xi}{\lambda_k} = \frac{2\xi}{k + 6\xi} \in (0,1/3]$ . Then,  $\tau_k = \frac{\xi}{\lambda_k + (\xi - 1)} = \frac{2\xi}{k + 6\xi + 2(\xi - 1)} \geq \frac{2\xi}{k + 8\xi} \geq \frac{2\xi}{\frac{4}{3}(k + 6\xi)} = \frac{3}{4} r$ . Now, we have

$$
\begin{array}{l} (\xi - \zeta) \left(\frac {1}{(1 - \tau_ {k}) ^ {2}} - 1\right) \geq (\xi - \zeta) \left(\frac {1}{1 - \tau_ {k}} - 1\right) \\ \geq (\xi - \zeta) \left(\frac {1}{1 - \frac {3}{4} r} - 1\right) \\ \geq \frac {\xi - \delta}{2} \left(\frac {1}{1 - r} - 1\right). \\ \end{array}
$$
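
The scalar inequality used above can be spot-checked numerically; the snippet below (illustrative only) evaluates both sides on a grid over $(0, 1/3]$ and confirms equality at $t = 1/3$, where both sides equal $1$.

```python
# Check 2*(1/(1 - t) - 1) <= 3*(1/(1 - (3/4)*t) - 1) for t in (0, 1/3].
n = 10_000
ts = [(i + 1) / (3 * n) for i in range(n)]   # grid over (0, 1/3]
lhs = [2 * (1 / (1 - t) - 1) for t in ts]
rhs = [3 * (1 / (1 - 0.75 * t) - 1) for t in ts]

assert all(l <= r + 1e-12 for l, r in zip(lhs, rhs))
# Equality at the right endpoint t = 1/3.
assert abs(lhs[-1] - 1.0) < 1e-9 and abs(rhs[-1] - 1.0) < 1e-9
```
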

(Step 2: Computing iteration complexity). By Theorem 5.4, we have

$$
f (x _ {k}) - f (x ^ {*}) \leq \frac {\phi_ {k}}{s \lambda_ {k - 1} ^ {2}} \leq \frac {\phi_ {0}}{s \lambda_ {k - 1} ^ {2}} = \frac {1}{s \lambda_ {k - 1} ^ {2}} \left(s \lambda_ {- 1} ^ {2} (f (x _ {0}) - f (x ^ {*})) + \frac {\xi}{2} \left\| \log_ {x _ {0}} (x ^ {*}) \right\| ^ {2}\right).
$$

It follows from the geodesic  $\frac{1}{s}$ -smoothness of  $f$  that

$$
\begin{array}{l} f \left(x _ {k}\right) - f \left(x ^ {*}\right) \leq \frac {1}{s \lambda_ {k - 1} ^ {2}} \left(s \lambda_ {- 1} ^ {2} \frac {1}{2 s} \left\| \log_ {x _ {0}} \left(x ^ {*}\right) \right\| ^ {2} + \frac {\xi}{2} \left\| \log_ {x _ {0}} \left(x ^ {*}\right) \right\| ^ {2}\right) \\ = \frac {1}{s \lambda_ {k - 1} ^ {2}} \left(\frac {\lambda_ {- 1} ^ {2}}{2} + \frac {\xi}{2}\right) \left\| \log_ {x _ {0}} \left(x ^ {*}\right) \right\| ^ {2} \\ = \frac {4 L}{(k - 1 + 6 \xi) ^ {2}} \left(\frac {(6 \xi - 1) ^ {2}}{8} + \frac {\xi}{2}\right) \left\| \log_ {x _ {0}} \left(x ^ {*}\right) \right\| ^ {2} \\ \leq \frac {4 L}{(k - 1) ^ {2}} \left(\frac {(6 \xi - 1) ^ {2}}{8} + \frac {\xi}{2}\right) \left\| \log_ {x _ {0}} \left(x ^ {*}\right) \right\| ^ {2}. \\ \end{array}
$$


Thus, we have  $f(x_{k}) - f(x^{*})\leq \epsilon$  whenever

$$
(k - 1) ^ {2} \geq \frac {4 L}{\epsilon} \left(\frac {(6 \xi - 1) ^ {2}}{8} + \frac {\xi}{2}\right) \left\| \log_ {x _ {0}} (x ^ {*}) \right\| ^ {2}.
$$

This implies that RNAG-C has an  $O\left(\xi \sqrt{\frac{L}{\epsilon}}\right)$  iteration complexity.
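For concreteness, the iteration bound can be instantiated numerically; the values of $\xi$, $L$, $\epsilon$, and the initial distance below are arbitrary illustrative choices, not values from the paper:

```python
import math

# Hypothetical problem constants (illustrative only).
xi, L, eps, dist0 = 2.0, 10.0, 1e-3, 1.0

# Smallest k with (k-1)^2 >= (4L/eps)*((6*xi-1)^2/8 + xi/2)*dist0^2.
bound = (4 * L / eps) * ((6 * xi - 1) ** 2 / 8 + xi / 2) * dist0 ** 2
k = math.ceil(1 + math.sqrt(bound))
print(k)  # grows like xi * sqrt(L/eps), matching the stated complexity
```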

# F. Convergence Analysis for RNAG-SC

Theorem 5.6. Let  $f$  be a geodesically  $\mu$ -strongly convex and geodesically  $L$ -smooth function, and write  $q = \mu s$ . If the step size  $s$  and the parameter  $\xi$  of RNAG-SC satisfy  $\xi \geq \zeta$ ,  $\sqrt{\xi q} < 1$ , and

$$
\begin{array}{l} \frac {\xi - \delta}{2} \left(\frac {1}{1 - \sqrt {\xi q}} - 1\right) \left(1 - \sqrt {\frac {q}{\xi}}\right) ^ {2} - \sqrt {\xi q} \left(1 - \sqrt {\frac {q}{\xi}}\right) \\ \leq (\xi - \zeta) \left(\frac {1}{\left(1 - \sqrt {\xi q} / \left(1 + \sqrt {\xi q}\right)\right) ^ {2}} - 1\right), \\ \end{array}
$$

then the iterates of RNAG-SC satisfy  $\phi_{k + 1}\leq \phi_k$  for all  $k\geq 0$ , where  $\phi_{k}$  is defined as (6).

Proof. (Step 1). In this step,  $\langle \cdot ,\cdot \rangle$  and  $\| \cdot \|$  always denote the inner product and the norm on  $T_{y_k}M$ . Set  $q = \mu s$ . It is straightforward to check that  $\operatorname{grad}f(y_k) = \mu \frac{1 - \sqrt{q / \xi}}{\sqrt{q / \xi}} v_k - \mu \frac{1}{\sqrt{q / \xi}}\bar{\bar{v}}_{k + 1}$  and  $\log_{y_k}(x_k) = -\sqrt{\xi q} v_k$ . By geodesic  $\mu$ -strong convexity of  $f$ , we have

$$
\begin{array}{l} f \left(x ^ {*}\right) \geq f \left(y _ {k}\right) + \left\langle \operatorname {grad} f \left(y _ {k}\right), \log_ {y _ {k}} \left(x ^ {*}\right) \right\rangle + \frac {\mu}{2} \left\| \log_ {y _ {k}} \left(x ^ {*}\right) \right\| ^ {2} \\ = f (y _ {k}) + \mu \frac {1 - \sqrt {q / \xi}}{\sqrt {q / \xi}} \left\langle v _ {k}, \log_ {y _ {k}} (x ^ {*}) \right\rangle - \mu \frac {1}{\sqrt {q / \xi}} \left\langle \bar {\bar {v}} _ {k + 1}, \log_ {y _ {k}} (x ^ {*}) \right\rangle + \frac {\mu}{2} \left\| \log_ {y _ {k}} (x ^ {*}) \right\| ^ {2}. \\ \end{array}
$$
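The identity for $\operatorname{grad} f(y_k)$ used above is obtained by inverting the momentum update $\bar{\bar{v}}_{k+1} = \left(1 - \sqrt{q/\xi}\right) v_k + \sqrt{q/\xi}\left(-\frac{1}{\mu}\operatorname{grad} f(y_k)\right)$. A quick check of this algebra in a flat tangent space, with arbitrary sample values:

```python
import numpy as np

# Verify: grad = mu*(1-a)/a * v_k - mu/a * vbb,  a = sqrt(q/xi),
# given the update vbb = (1-a)*v_k + a*(-grad/mu).  Values are arbitrary.
rng = np.random.default_rng(0)
mu, xi, s = 0.5, 2.0, 0.01
q = mu * s
a = np.sqrt(q / xi)
v_k = rng.standard_normal(3)
grad = rng.standard_normal(3)
vbb = (1 - a) * v_k + a * (-grad / mu)          # momentum update
recovered = mu * (1 - a) / a * v_k - mu / a * vbb
assert np.allclose(recovered, grad)
```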

It follows from the geodesic convexity of  $f$  that

$$
\begin{array}{l} f \left(x _ {k}\right) \geq f \left(y _ {k}\right) + \left\langle \operatorname {g r a d} f \left(y _ {k}\right), \log_ {y _ {k}} \left(x _ {k}\right) \right\rangle \\ = f \left(y _ {k}\right) - \xi \mu \left(1 - \sqrt {\frac {q}{\xi}}\right) \| v _ {k} \| ^ {2} + \xi \mu \langle v _ {k}, \bar {\bar {v}} _ {k + 1} \rangle . \\ \end{array}
$$

By the geodesic  $\frac{1}{s}$ -smoothness of  $f$ , we have

$$
\begin{array}{l} f \left(x _ {k + 1}\right) \leq f \left(y _ {k}\right) + \left\langle \operatorname {g r a d} f \left(y _ {k}\right), \log_ {y _ {k}} \left(x _ {k + 1}\right) \right\rangle + \frac {1}{2 s} \left\| \log_ {y _ {k}} \left(x _ {k + 1}\right) \right\| ^ {2} \\ = f (y _ {k}) - \frac {s}{2} \left\| \operatorname {g r a d} f (y _ {k}) \right\| ^ {2} \\ = f \left(y _ {k}\right) - \frac {s}{2} \left\| \mu \frac {1 - \sqrt {q / \xi}}{\sqrt {q / \xi}} v _ {k} - \mu \frac {1}{\sqrt {q / \xi}} \bar {\bar {v}} _ {k + 1} \right\| ^ {2} \\ = f \left(y _ {k}\right) - \frac {\xi \mu}{2} \left(1 - \sqrt {\frac {q}{\xi}}\right) ^ {2} \left\| v _ {k} \right\| ^ {2} + \xi \mu \left(1 - \sqrt {\frac {q}{\xi}}\right) \left\langle v _ {k}, \bar {\bar {v}} _ {k + 1} \right\rangle - \frac {\xi \mu}{2} \left\| \bar {\bar {v}} _ {k + 1} \right\| ^ {2}. \\ \end{array}
$$

Taking a weighted sum of these inequalities yields

$$
\begin{array}{l} 0 \geq \sqrt {\frac {q}{\xi}} \left[ f (y _ {k}) - f (x ^ {*}) + \mu \frac {1 - \sqrt {q / \xi}}{\sqrt {q / \xi}} \left\langle v _ {k}, \log_ {y _ {k}} (x ^ {*}) \right\rangle - \mu \frac {1}{\sqrt {q / \xi}} \left\langle \bar {\bar {v}} _ {k + 1}, \log_ {y _ {k}} (x ^ {*}) \right\rangle + \frac {\mu}{2} \left\| \log_ {y _ {k}} (x ^ {*}) \right\| ^ {2} \right] \\ + \left(1 - \sqrt {\frac {q}{\xi}}\right) \left[ f (y _ {k}) - f (x _ {k}) - \xi \mu \left(1 - \sqrt {\frac {q}{\xi}}\right) \| v _ {k} \| ^ {2} + \xi \mu \langle v _ {k}, \bar {\bar {v}} _ {k + 1} \rangle \right] \\ + \left[ f (x _ {k + 1}) - f (y _ {k}) + \frac {\xi \mu}{2} \left(1 - \sqrt {\frac {q}{\xi}}\right) ^ {2} \| v _ {k} \| ^ {2} - \xi \mu \left(1 - \sqrt {\frac {q}{\xi}}\right) \langle v _ {k}, \bar {\bar {v}} _ {k + 1} \rangle + \frac {\xi \mu}{2} \| \bar {\bar {v}} _ {k + 1} \| ^ {2} \right] \\ = \left(f \left(x _ {k + 1}\right) - f \left(x ^ {*}\right)\right) - \left(1 - \sqrt {\frac {q}{\xi}}\right) \left(f \left(x _ {k}\right) - f \left(x ^ {*}\right)\right) \\ + \mu \left(1 - \sqrt {\frac {q}{\xi}}\right) \left\langle v _ {k}, \log_ {y _ {k}} \left(x ^ {*}\right) \right\rangle - \mu \left\langle \bar {\bar {v}} _ {k + 1}, \log_ {y _ {k}} \left(x ^ {*}\right) \right\rangle + \frac {\mu}{2} \sqrt {\frac {q}{\xi}} \left\| \log_ {y _ {k}} \left(x ^ {*}\right) \right\| ^ {2} \\ - \frac {\xi \mu}{2} \left(1 - \sqrt {\frac {q}{\xi}}\right) ^ {2} \| v _ {k} \| ^ {2} + \frac {\xi \mu}{2} \| \bar {\bar {v}} _ {k + 1} \| ^ {2}. \\ \end{array}
$$

We further notice that

$$
\begin{array}{l} \left\| \bar {\bar {v}} _ {k + 1} - \log_ {y _ {k}} \left(x ^ {*}\right) \right\| ^ {2} - \left(1 - \sqrt {\frac {q}{\xi}}\right) \left\| v _ {k} - \log_ {y _ {k}} \left(x ^ {*}\right) \right\| ^ {2} \\ = \| \bar {\bar {v}} _ {k + 1} \| ^ {2} - 2 \left\langle \bar {\bar {v}} _ {k + 1}, \log_ {y _ {k}} (x ^ {*}) \right\rangle - \left(1 - \sqrt {\frac {q}{\xi}}\right) \| v _ {k} \| ^ {2} + 2 \left(1 - \sqrt {\frac {q}{\xi}}\right) \left\langle v _ {k}, \log_ {y _ {k}} (x ^ {*}) \right\rangle + \sqrt {\frac {q}{\xi}} \left\| \log_ {y _ {k}} (x ^ {*}) \right\| ^ {2}. \\ \end{array}
$$

Therefore, we obtain

$$
\begin{array}{l} 0 \geq \left(f \left(x _ {k + 1}\right) - f \left(x ^ {*}\right) + \frac {\mu}{2} \left\| \bar {\bar {v}} _ {k + 1} - \log_ {y _ {k}} \left(x ^ {*}\right) \right\| ^ {2}\right) - \left(1 - \sqrt {\frac {q}{\xi}}\right) \left(f \left(x _ {k}\right) - f \left(x ^ {*}\right) + \frac {\mu}{2} \left\| v _ {k} - \log_ {y _ {k}} \left(x ^ {*}\right) \right\| ^ {2}\right) \\ + (\xi - 1) \frac {\mu}{2} \| \bar {\bar {v}} _ {k + 1} \| ^ {2} - \frac {\xi \mu}{2} \left(1 - \sqrt {\frac {q}{\xi}}\right) ^ {2} \| v _ {k} \| ^ {2} + \frac {\mu}{2} \left(1 - \sqrt {\frac {q}{\xi}}\right) \| v _ {k} \| ^ {2} \\ = \left(f \left(x _ {k + 1}\right) - f \left(x ^ {*}\right) + \frac {\mu}{2} \left\| \bar {\bar {v}} _ {k + 1} - \log_ {y _ {k}} \left(x ^ {*}\right) \right\| ^ {2}\right) - \left(1 - \sqrt {\frac {q}{\xi}}\right) \left(f \left(x _ {k}\right) - f \left(x ^ {*}\right) + \frac {\mu}{2} \left\| v _ {k} - \log_ {y _ {k}} \left(x ^ {*}\right) \right\| ^ {2}\right) \\ + (\xi - 1) \frac {\mu}{2} \| \bar {\bar {v}} _ {k + 1} \| ^ {2} - (\xi - 1) \frac {\mu}{2} \left(1 - \sqrt {\frac {q}{\xi}}\right) \| v _ {k} \| ^ {2} + \frac {\xi \mu}{2} \sqrt {\frac {q}{\xi}} \left(1 - \sqrt {\frac {q}{\xi}}\right) \| v _ {k} \| ^ {2} \\ = \left(f \left(x _ {k + 1}\right) - f \left(x ^ {*}\right) + \frac {\mu}{2} \left\| \bar {\bar {v}} _ {k + 1} - \log_ {y _ {k}} \left(x ^ {*}\right) \right\| ^ {2} + (\xi - 1) \frac {\mu}{2} \| \bar {\bar {v}} _ {k + 1} \| ^ {2}\right) \\ - \left(1 - \sqrt {\frac {q}{\xi}}\right) \left(f (x _ {k}) - f (x ^ {*}) + \frac {\mu}{2} \left\| v _ {k} - \log_ {y _ {k}} (x ^ {*}) \right\| ^ {2} + (\xi - 1) \frac {\mu}{2} \| v _ {k} \| ^ {2}\right) \\ + \frac {\xi \mu}{2} \sqrt {\frac {q}{\xi}} \left(1 - \sqrt {\frac {q}{\xi}}\right) \| v _ {k} \| ^ {2}. \\ \end{array}
$$

(Step 2: Handle metric distortion). It follows from Lemma 5.3 with  $p_A = y_k, p_B = x_{k+1}, x = x^*, v_A = \bar{\bar{v}}_{k+1}, v_B = \bar{v}_{k+1}$ ,  $a = \left(1 - \sqrt{\frac{q}{\xi}}\right) v_k, b = \sqrt{\frac{q}{\xi}} \left(-\frac{1}{\mu}\right) \operatorname{grad} f(y_k), r = \sqrt{\xi q}$  that

$$
\begin{array}{l} \left\| \log_ {x _ {k + 1}} \left(x ^ {*}\right) - \bar {v} _ {k + 1} \right\| _ {x _ {k + 1}} ^ {2} + (\xi - 1) \| \bar {v} _ {k + 1} \| _ {x _ {k + 1}} ^ {2} \\ \leq \left\| \log_ {y _ {k}} \left(x ^ {*}\right) - \bar {\bar {v}} _ {k + 1} \right\| _ {y _ {k}} ^ {2} + (\xi - 1) \| \bar {\bar {v}} _ {k + 1} \| _ {y _ {k}} ^ {2} + \frac {\xi - \delta}{2} \left(\frac {1}{1 - \sqrt {\xi q}} - 1\right) \left\| \left(1 - \sqrt {\frac {q}{\xi}}\right) v _ {k} \right\| _ {y _ {k}} ^ {2}. \\ \end{array}
$$

Applying Lemma 5.2 with  $p_A = x_k, p_B = y_k, x = x^*, v_A = \bar{v}_k, v_B = v_k, r = \frac{\sqrt{\xi q}}{1 + \sqrt{\xi q}}$  gives

$$
\begin{array}{l} \left\| \log_ {x _ {k}} \left(x ^ {*}\right) - \bar {v} _ {k} \right\| _ {x _ {k}} ^ {2} + (\xi - 1) \| \bar {v} _ {k} \| _ {x _ {k}} ^ {2} \\ = \left(\left\| \log_ {x _ {k}} \left(x ^ {*}\right) - \bar {v} _ {k} \right\| _ {x _ {k}} ^ {2} + (\zeta - 1) \| \bar {v} _ {k} \| _ {x _ {k}} ^ {2}\right) + (\xi - \zeta) \| \bar {v} _ {k} \| _ {x _ {k}} ^ {2} \\ \geq \left(\left\| \log_ {y _ {k}} \left(x ^ {*}\right) - v _ {k} \right\| _ {y _ {k}} ^ {2} + (\zeta - 1) \left\| v _ {k} \right\| _ {y _ {k}} ^ {2}\right) + (\xi - \zeta) \left\| \bar {v} _ {k} \right\| _ {x _ {k}} ^ {2} \\ = \left\| \log_ {y _ {k}} \left(x ^ {*}\right) - v _ {k} \right\| _ {y _ {k}} ^ {2} + (\zeta - 1) \| v _ {k} \| _ {y _ {k}} ^ {2} + (\xi - \zeta) \frac {1}{\left(1 - \frac {\sqrt {\xi q}}{1 + \sqrt {\xi q}}\right) ^ {2}} \| v _ {k} \| _ {y _ {k}} ^ {2} \\ = \left\| \log_ {y _ {k}} \left(x ^ {*}\right) - v _ {k} \right\| _ {y _ {k}} ^ {2} + (\xi - 1) \| v _ {k} \| _ {y _ {k}} ^ {2} + (\xi - \zeta) \left(\frac {1}{\left(1 - \frac {\sqrt {\xi q}}{1 + \sqrt {\xi q}}\right) ^ {2}} - 1\right) \| v _ {k} \| _ {y _ {k}} ^ {2} \\ \end{array}
$$

Combining these inequalities with the result in Step 1 gives

$$
\begin{array}{l} 0 \geq \left(f \left(x _ {k + 1}\right) - f \left(x ^ {*}\right) + \frac {\mu}{2} \left\| \bar {\bar {v}} _ {k + 1} - \log_ {y _ {k}} \left(x ^ {*}\right) \right\| _ {y _ {k}} ^ {2} + (\xi - 1) \frac {\mu}{2} \left\| \bar {\bar {v}} _ {k + 1} \right\| _ {y _ {k}} ^ {2}\right) \\ - \left(1 - \sqrt {\frac {q}{\xi}}\right) \left(f \left(x _ {k}\right) - f \left(x ^ {*}\right) + \frac {\mu}{2} \left\| v _ {k} - \log_ {y _ {k}} \left(x ^ {*}\right) \right\| _ {y _ {k}} ^ {2} + (\xi - 1) \frac {\mu}{2} \| v _ {k} \| _ {y _ {k}} ^ {2}\right) \\ + \frac {\mu}{2} \sqrt {\xi q} \left(1 - \sqrt {\frac {q}{\xi}}\right) \| v _ {k} \| _ {y _ {k}} ^ {2} \\ + \frac {\mu}{2} \left[ \left\| \log_ {x _ {k + 1}} \left(x ^ {*}\right) - \bar {v} _ {k + 1} \right\| _ {x _ {k + 1}} ^ {2} + (\xi - 1) \| \bar {v} _ {k + 1} \| _ {x _ {k + 1}} ^ {2} \right. \\ \left. - \left\| \log_ {y _ {k}} \left(x ^ {*}\right) - \bar {\bar {v}} _ {k + 1} \right\| _ {y _ {k}} ^ {2} - (\xi - 1) \| \bar {\bar {v}} _ {k + 1} \| _ {y _ {k}} ^ {2} - \frac {\xi - \delta}{2} \left(\frac {1}{1 - \sqrt {\xi q}} - 1\right) \left\| \left(1 - \sqrt {\frac {q}{\xi}}\right) v _ {k} \right\| _ {y _ {k}} ^ {2} \right] \\ + \frac {\mu}{2} \left[ \left\| \log_ {y _ {k}} \left(x ^ {*}\right) - v _ {k} \right\| _ {y _ {k}} ^ {2} + (\xi - 1) \| v _ {k} \| _ {y _ {k}} ^ {2} + (\xi - \zeta) \left(\frac {1}{\left(1 - \frac {\sqrt {\xi q}}{1 + \sqrt {\xi q}}\right) ^ {2}} - 1\right) \| v _ {k} \| _ {y _ {k}} ^ {2} \right. \\ \left. 
- \left\| \log_ {x _ {k}} \left(x ^ {*}\right) - \bar {v} _ {k} \right\| _ {x _ {k}} ^ {2} - (\xi - 1) \| \bar {v} _ {k} \| _ {x _ {k}} ^ {2} \right] \\ = \left(1 - \sqrt {\frac {q}{\xi}}\right) ^ {k + 1} \left(\phi_ {k + 1} - \phi_ {k}\right) \\ + \frac {\mu}{2} \left(\sqrt {\xi q} \left(1 - \sqrt {\frac {q}{\xi}}\right) - \frac {\xi - \delta}{2} \left(\frac {1}{1 - \sqrt {\xi q}} - 1\right) \left(1 - \sqrt {\frac {q}{\xi}}\right) ^ {2} + (\xi - \zeta) \left(\frac {1}{\left(1 - \frac {\sqrt {\xi q}}{1 + \sqrt {\xi q}}\right) ^ {2}} - 1\right)\right) \| v _ {k} \| ^ {2} \\ \geq \left(1 - \sqrt {\frac {q}{\xi}}\right) ^ {k + 1} \left(\phi_ {k + 1} - \phi_ {k}\right). \\ \end{array}
$$

The last inequality holds because the condition of the theorem makes the coefficient of  $\|v_k\|^2$  nonnegative. Since  $\left(1 - \sqrt{q/\xi}\right)^{k+1} > 0$ , we conclude  $\phi_{k+1} \leq \phi_k$ .

Corollary F.1. Let  $f$  be a geodesically  $\mu$ -strongly convex and geodesically  $L$ -smooth function. Then, RNAG-SC with parameter  $\xi = \zeta + 3(\zeta - \delta)$  and step size  $s = \frac{1}{9\xi L}$  finds an  $\epsilon$ -approximate solution in  $O\left(\xi \sqrt{\frac{L}{\mu}} \log \left(\frac{L}{\epsilon}\right)\right)$  iterations.

Proof. (Step 1: Checking the condition for Theorem 5.6). It is straightforward to check that

$$
\frac {\xi - \delta}{2} \left(\frac {1}{1 - t} - 1\right) \leq (\xi - \zeta) \left(\frac {1}{1 - \frac {t}{1 + t}} - 1\right)
$$

for all  $t \in (0,1/3]$ . Because  $\sqrt{\xi q} = \sqrt{\xi \mu \frac{1}{9\xi L}} = \frac{1}{3} \sqrt{\mu/L} \in (0,1/3]$ , we have

$$
\begin{array}{l} (\xi - \zeta) \left(\frac {1}{\left(1 - \frac {\sqrt {\xi q}}{1 + \sqrt {\xi q}}\right) ^ {2}} - 1\right) \geq (\xi - \zeta) \left(\frac {1}{\left(1 - \frac {\sqrt {\xi q}}{1 + \sqrt {\xi q}}\right)} - 1\right) \\ \geq \frac {\xi - \delta}{2} \left(\frac {1}{1 - \sqrt {\xi q}} - 1\right). \\ \end{array}
$$

Because  $\sqrt{\frac{q}{\xi}} \in (0,1)$ , we have

$$
\begin{array}{l} \frac {\xi - \delta}{2} \left(\frac {1}{1 - \sqrt {\xi q}} - 1\right) \left(1 - \sqrt {\frac {q}{\xi}}\right) ^ {2} - \sqrt {\xi q} \left(1 - \sqrt {\frac {q}{\xi}}\right) \leq \frac {\xi - \delta}{2} \left(\frac {1}{1 - \sqrt {\xi q}} - 1\right) \left(1 - \sqrt {\frac {q}{\xi}}\right) ^ {2} \\ \leq \frac {\xi - \delta}{2} \left(\frac {1}{1 - \sqrt {\xi q}} - 1\right). \\ \end{array}
$$

Combining these inequalities gives the desired condition.
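Both scalar inequalities in this step can be spot-checked numerically. The distortion rates $\delta \leq \zeta$ below are illustrative choices (not values from the paper), with $\xi = \zeta + 3(\zeta - \delta)$ as in the corollary; note $\frac{1}{1 - t/(1+t)} - 1 = t$:

```python
import numpy as np

# Check (xi - delta)/2 * (1/(1-t) - 1) <= (xi - zeta) * t on (0, 1/3].
delta, zeta = 0.8, 1.5                 # illustrative distortion rates
xi = zeta + 3 * (zeta - delta)         # choice made in Corollary F.1
t = np.linspace(1e-6, 1 / 3, 10_000)
lhs = (xi - delta) / 2 * (1 / (1 - t) - 1)
rhs = (xi - zeta) * (1 / (1 - t / (1 + t)) - 1)   # equals (xi - zeta)*t
assert np.all(lhs <= rhs + 1e-12)      # equality (up to rounding) at t = 1/3
```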

(Step 2: Computing iteration complexity). It follows from Theorem 5.6 that

$$
f \left(x _ {k}\right) - f \left(x ^ {*}\right) \leq \left(1 - \sqrt {\frac {q}{\xi}}\right) ^ {k} \phi_ {k} \leq \left(1 - \sqrt {\frac {q}{\xi}}\right) ^ {k} \phi_ {0} = \left(1 - \sqrt {\frac {q}{\xi}}\right) ^ {k} \left(f \left(x _ {0}\right) - f \left(x ^ {*}\right) + \frac {\mu}{2} \left\| \log_ {x _ {0}} \left(x ^ {*}\right) \right\| ^ {2}\right).
$$

By the geodesic  $L$ -smoothness of  $f$ , we have

$$
\begin{array}{l} f \left(x _ {k}\right) - f \left(x ^ {*}\right) \leq \left(1 - \sqrt {\frac {q}{\xi}}\right) ^ {k} \left(\frac {L}{2} \left\| \log_ {x _ {0}} \left(x ^ {*}\right) \right\| ^ {2} + \frac {\mu}{2} \left\| \log_ {x _ {0}} \left(x ^ {*}\right) \right\| ^ {2}\right) \\ \leq \left(1 - \sqrt {\frac {q}{\xi}}\right) ^ {k} L \left\| \log_ {x _ {0}} \left(x ^ {*}\right) \right\| ^ {2} \\ = \left(1 - \sqrt {\frac {\mu}{9 \xi^ {2} L}}\right) ^ {k} L \left\| \log_ {x _ {0}} \left(x ^ {*}\right) \right\| ^ {2} \\ \leq e ^ {- \sqrt {\frac {\mu}{9 \xi^ {2} L}} k} L \left\| \log_ {x _ {0}} \left(x ^ {*}\right) \right\| ^ {2}. \\ \end{array}
$$

Thus, we have  $f(x_{k}) - f(x^{*})\leq \epsilon$  whenever

$$
k \geq \sqrt {\frac {9 \xi^ {2} L}{\mu}} \log \left(\frac {L}{\epsilon} \left\| \log_ {x _ {0}} \left(x ^ {*}\right) \right\| ^ {2}\right),
$$

which implies the  $O\left(\xi \sqrt{\frac{L}{\mu}}\log \frac{L}{\epsilon}\right)$  iteration complexity of RNAG-SC.
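The final bound can be instantiated numerically; $\mu$, $L$, $\xi$, $\epsilon$, and the initial distance below are arbitrary illustrative values:

```python
import math

# Hypothetical problem constants (illustrative only).
mu, L, xi, eps, dist0 = 0.1, 10.0, 2.0, 1e-6, 1.0

# Smallest k with k >= sqrt(9*xi^2*L/mu) * log(L*dist0^2/eps).
k = math.ceil(math.sqrt(9 * xi**2 * L / mu) * math.log(L * dist0**2 / eps))
rate = math.sqrt(mu / (9 * xi**2 * L))
# (1 - rate)^k <= exp(-rate*k), so the error bound is below eps at this k:
assert (1 - rate) ** k * L * dist0**2 <= math.exp(-rate * k) * L * dist0**2 <= eps
```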


# G. Continuous-Time Interpretation

# G.1. The g-convex case

Because we approximate the curve  $y(t)$  by the iterates  $y_{k}$ , we first rewrite RNAG-C in a form that uses only the iterates  $y_{k}$ :

$$
\begin{array}{l} y _ {k + 1} - y _ {k} = x _ {k + 1} - y _ {k} + \frac {\xi}{\lambda_ {k + 1} + \xi - 1} \bar {v} _ {k + 1} \\ = - s \operatorname {grad} f \left(y _ {k}\right) + \frac {\xi}{\lambda_ {k + 1} + \xi - 1} \left(\bar {v} _ {k + 1} + s \operatorname {grad} f \left(y _ {k}\right)\right) \\ = - s \operatorname {grad} f \left(y _ {k}\right) + \frac {\xi}{\lambda_ {k + 1} + \xi - 1} \left(v _ {k} - \frac {s \lambda_ {k}}{\xi} \operatorname {grad} f \left(y _ {k}\right) + s \operatorname {grad} f \left(y _ {k}\right)\right) \\ = \left(- 1 + \frac {\xi - \lambda_ {k}}{\lambda_ {k + 1} + \xi - 1}\right) s \operatorname {grad} f \left(y _ {k}\right) + \frac {\xi}{\lambda_ {k + 1} + (\xi - 1)} \frac {\lambda_ {k} - 1}{\xi} \left(y _ {k} - x _ {k}\right) \\ = \frac {1 - \lambda_ {k} - \lambda_ {k + 1}}{\lambda_ {k + 1} + (\xi - 1)} s \operatorname {grad} f \left(y _ {k}\right) + \frac {\lambda_ {k} - 1}{\lambda_ {k + 1} + (\xi - 1)} \left(y _ {k} - x _ {k}\right) \\ = \frac {1 - \lambda_ {k} - \lambda_ {k + 1}}{\lambda_ {k + 1} + (\xi - 1)} s \operatorname {grad} f \left(y _ {k}\right) + \frac {\lambda_ {k} - 1}{\lambda_ {k + 1} + (\xi - 1)} \left(y _ {k} - y _ {k - 1} + s \operatorname {grad} f \left(y _ {k - 1}\right)\right) \\ = \frac {\lambda_ {k} - 1}{\lambda_ {k + 1} + (\xi - 1)} (y _ {k} - y _ {k - 1}) - \frac {\lambda_ {k + 1}}{\lambda_ {k + 1} + (\xi - 1)} s \operatorname {grad} f (y _ {k}) \\ + \frac {\lambda_ {k} - 1}{\lambda_ {k + 1} + (\xi - 1)} s \left(\operatorname {grad} f \left(y _ {k - 1}\right) - \operatorname {grad} f \left(y _ {k}\right)\right). \\ \end{array}
$$

We introduce a smooth curve  $y(t)$  as mentioned in Section 6. Now, dividing both sides of the above equality by  $\sqrt{s}$  and substituting

$$
\begin{array}{l} \frac {y _ {k + 1} - y _ {k}}{\sqrt {s}} = \dot {y} + \frac {\sqrt {s}}{2} \ddot {y} + o (\sqrt {s}) \\ \frac {y _ {k} - y _ {k - 1}}{\sqrt {s}} = \dot {y} - \frac {\sqrt {s}}{2} \ddot {y} + o (\sqrt {s}) \\ \sqrt {s} \operatorname {g r a d} f (y _ {k - 1}) = \sqrt {s} \operatorname {g r a d} f (y _ {k}) + o (\sqrt {s}), \\ \end{array}
$$

we obtain

$$
\dot {y} + \frac {\sqrt {s}}{2} \ddot {y} + o (\sqrt {s}) = \frac {\lambda_ {k} - 1}{\lambda_ {k + 1} + (\xi - 1)} \left(\dot {y} - \frac {\sqrt {s}}{2} \ddot {y} + o (\sqrt {s})\right) - \frac {\lambda_ {k + 1}}{\lambda_ {k + 1} + (\xi - 1)} \sqrt {s} \operatorname {g r a d} f (y).
$$

Dividing both sides by  $\sqrt{s}$  and rearranging terms, we have

$$
\frac {1}{2} \left(1 + \frac {\lambda_ {k} - 1}{\lambda_ {k + 1} + (\xi - 1)}\right) \ddot {y} + \frac {1}{\sqrt {s}} \left(1 - \frac {\lambda_ {k} - 1}{\lambda_ {k + 1} + (\xi - 1)}\right) \dot {y} + \frac {\lambda_ {k + 1}}{\lambda_ {k + 1} + (\xi - 1)} \operatorname {g r a d} f (y) + \frac {o (\sqrt {s})}{\sqrt {s}} = 0.
$$

Substituting  $k = \frac{t}{\sqrt{s}}$ , we can check that  $\frac{\lambda_k - 1}{\lambda_{k + 1} + (\xi - 1)} \to 1$ ,  $\frac{\lambda_{k + 1}}{\lambda_{k + 1} + (\xi - 1)} \to 1$ , and  $\frac{1}{\sqrt{s}} \left(1 - \frac{\lambda_k - 1}{\lambda_{k + 1} + (\xi - 1)}\right) = \frac{1}{\sqrt{s}} \frac{\lambda_{k + 1} - \lambda_k + \xi}{\lambda_{k + 1} + (\xi - 1)} = \frac{1}{\sqrt{s}} \frac{1 + 2\xi}{k + 8\xi - 1} = \frac{1 + 2\xi}{t + (8\xi - 1)\sqrt{s}} \to \frac{1 + 2\xi}{t}$  as  $s \to 0$ . Therefore, we obtain

$$
\ddot {y} + \frac {1 + 2 \xi}{t} \dot {y} + \operatorname {g r a d} f (y) = 0.
$$
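As an illustrative sketch (in the flat Euclidean case, with a toy quadratic objective standing in for a genuinely Riemannian one, and arbitrary starting data), the limiting ODE can be integrated numerically to observe the expected $O(1/t^2)$-type decay of the function values:

```python
import numpy as np

# Integrate y'' + ((1+2*xi)/t) y' + grad f(y) = 0 with explicit Euler,
# for the toy quadratic f(y) = 0.5*||y||^2 (so grad f(y) = y).
xi = 2.0
f = lambda y: 0.5 * np.dot(y, y)
y, v = np.array([1.0, -0.5]), np.zeros(2)   # arbitrary start, zero velocity
t, dt, T = 1.0, 1e-3, 30.0                  # start at t0 = 1 to avoid t = 0
f0 = f(y)
while t < T:
    a = -(1 + 2 * xi) / t * v - y           # acceleration from the ODE
    y, v, t = y + dt * v, v + dt * a, t + dt
assert f(y) < 0.05 * f0                     # sublinear decay is visible
```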

# G.2. The g-strongly convex case

As we approximate the curve  $y(t)$  by the iterates  $y_{k}$ , we first rewrite RNAG-SC in a form that uses only the iterates  $y_{k}$ :

$$
\begin{array}{l} y _ {k + 1} - y _ {k} = x _ {k + 1} - y _ {k} + \frac {\sqrt {\xi \mu s}}{1 + \sqrt {\xi \mu s}} \bar {v} _ {k + 1} \\ = - s \operatorname {g r a d} f (y _ {k}) + \frac {\sqrt {\xi \mu s}}{1 + \sqrt {\xi \mu s}} (\bar {\bar {v}} _ {k + 1} + s \operatorname {g r a d} f (y _ {k})) \\ = - \frac {s}{1 + \sqrt {\xi \mu s}} \operatorname {g r a d} f \left(y _ {k}\right) + \frac {\sqrt {\xi \mu s}}{1 + \sqrt {\xi \mu s}} \left(\left(1 - \sqrt {\frac {\mu s}{\xi}}\right) v _ {k} + \sqrt {\frac {\mu s}{\xi}} \left(- \frac {\operatorname {g r a d} f \left(y _ {k}\right)}{\mu}\right)\right) \\ = - \frac {2 s}{1 + \sqrt {\xi \mu s}} \operatorname {g r a d} f (y _ {k}) + \frac {\sqrt {\xi \mu s}}{1 + \sqrt {\xi \mu s}} \left(1 - \sqrt {\frac {\mu s}{\xi}}\right) \frac {1}{\sqrt {\xi \mu s}} (y _ {k} - x _ {k}) \\ = - \frac {2 s}{1 + \sqrt {\xi \mu s}} \operatorname {g r a d} f \left(y _ {k}\right) + \frac {1 - \sqrt {\mu s / \xi}}{1 + \sqrt {\xi \mu s}} \left(y _ {k} - y _ {k - 1} + s \operatorname {g r a d} f \left(y _ {k - 1}\right)\right) \\ = \frac {1 - \sqrt {\mu s / \xi}}{1 + \sqrt {\xi \mu s}} \left(y _ {k} - y _ {k - 1}\right) - \frac {1 + \sqrt {\mu s / \xi}}{1 + \sqrt {\xi \mu s}} s \operatorname {g r a d} f \left(y _ {k}\right) + \frac {1 - \sqrt {\mu s / \xi}}{1 + \sqrt {\xi \mu s}} s \left(\operatorname {g r a d} f \left(y _ {k - 1}\right) - \operatorname {g r a d} f \left(y _ {k}\right)\right) \\ \end{array}
$$

Dividing both sides by  $\sqrt{s}$  and substituting

$$
\begin{array}{l} \frac {y _ {k + 1} - y _ {k}}{\sqrt {s}} = \dot {y} + \frac {\sqrt {s}}{2} \ddot {y} + o \left(\sqrt {s}\right) \\ \frac {y _ {k} - y _ {k - 1}}{\sqrt {s}} = \dot {y} - \frac {\sqrt {s}}{2} \ddot {y} + o \left(\sqrt {s}\right) \\ \sqrt {s} \operatorname {g r a d} f \left(y _ {k - 1}\right) = \sqrt {s} \operatorname {g r a d} f \left(y _ {k}\right) + o (\sqrt {s}) \\ \end{array}
$$

yields

$$
\dot {y} + \frac {\sqrt {s}}{2} \ddot {y} + o (\sqrt {s}) = \frac {1 - \sqrt {\mu s / \xi}}{1 + \sqrt {\xi \mu s}} \left(\dot {y} - \frac {\sqrt {s}}{2} \ddot {y} + o (\sqrt {s})\right) - \frac {1 + \sqrt {\mu s / \xi}}{1 + \sqrt {\xi \mu s}} \sqrt {s} \operatorname {grad} f (y).
$$

Dividing both sides by  $\sqrt{s}$  and rearranging terms, we obtain

$$
\frac {1}{2} \left(1 + \frac {1 - \sqrt {\mu s / \xi}}{1 + \sqrt {\xi \mu s}}\right) \ddot {y} + \frac {\left(\sqrt {1 / \xi} + \sqrt {\xi}\right) \sqrt {\mu}}{1 + \sqrt {\xi \mu s}} \dot {y} + \frac {1 + \sqrt {\mu s / \xi}}{1 + \sqrt {\xi \mu s}} \operatorname {grad} f (y) + \frac {o (\sqrt {s})}{\sqrt {s}} = 0.
$$

Taking the limit  $s \to 0$  gives

$$
\ddot {y} + \left(\frac {1}{\sqrt {\xi}} + \sqrt {\xi}\right) \sqrt {\mu} \dot {y} + \operatorname {g r a d} f (y) = 0
$$

as desired.
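A companion sketch for this strongly convex limiting ODE, again in the flat case with the toy quadratic $f(y) = \frac{1}{2}\|y\|^2$ (which is 1-strongly convex, so $\mu = 1$); the horizon, step size, and starting point are arbitrary choices:

```python
import numpy as np

# Integrate y'' + (1/sqrt(xi) + sqrt(xi)) * sqrt(mu) * y' + grad f(y) = 0
# with explicit Euler; constant damping yields exponential decay.
xi, mu = 2.0, 1.0
c = (1 / np.sqrt(xi) + np.sqrt(xi)) * np.sqrt(mu)   # damping coefficient
f = lambda y: 0.5 * np.dot(y, y)                     # grad f(y) = y
y, v = np.array([1.0, -0.5]), np.zeros(2)
f0 = f(y)
dt = 1e-3
for _ in range(15_000):                              # integrate to T = 15
    y, v = y + dt * v, v + dt * (-c * v - y)
assert f(y) < 1e-6 * f0                              # linear convergence
```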

# G.3. Experiments

In this section, we empirically show that the iterates of our methods converge to the solutions of the corresponding ODEs as the step size  $s \to 0$ . We use the Rayleigh quotient maximization problem in Section 7 with  $d = 10$  and  $\xi = 2$ . For RNAG-SC, we set  $\mu = 0.1$  (note that the limiting argument above does not use the geodesic  $\mu$ -strong convexity of  $f$ ). To compute the solutions of ODEs (7) and (8), we implement SIRNAG (Option I) (Alimisis et al., 2020) with a very small integration step size. The results are shown in Figure 4 and Figure 5.

# H. Proofs for Section 7

Proposition H.1. The function  $f$  is geodesically  $(\lambda_{\max} - \lambda_{\min})$ -smooth, where  $\lambda_{\max}$  and  $\lambda_{\min}$  are the largest and smallest eigenvalues of  $A$ , respectively.

![](images/062ca8e16e330a3ce9c5e0a0aefdfa369e0334dbe0eec62767a1267bab05decf.jpg)  
(a) Error vs. # of iterations

![](images/118c6fa626a15215a3c46ee6078bbfcf360113e5e76b8ae18bfebd6948ec648b.jpg)  
(b) Distance vs. # of iterations

Figure 4. Convergence of RNAG-C to the solution of ODE (7).

![](images/b0c338c64f1a11fa4211f8d4fab228b4f4cbbae7d8beb53984d536592eb405e1.jpg)  
(a) Error vs. # of iterations

![](images/5c3667bc28776e4a1f1772152644a86f2ec20eed4e0a457483d917c546a78c60.jpg)  
(b) Distance vs. # of iterations

Figure 5. Convergence of RNAG-SC to the solution of ODE (8).

Proof. For  $x \in \mathbb{S}^{d - 1} \subseteq \mathbb{R}^d$  and a unit tangent vector  $v \in T_xM$ , we have

$$
\exp_ {x} (t v) = \frac {x + \tan (t) v}{\| x + \tan (t) v \|} = \frac {x + \tan (t) v}{\sec (t)}
$$

for  $t \in I$ , where  $I$  is a small interval containing 0. We consider the function  $h: I \to \mathbb{R}$  defined as

$$
\begin{array}{l} h (t) = f \left(\exp_ {x} (t v)\right) \\ = - \frac {1}{2} \cos^ {2} (t) (x + \tan (t) v) ^ {\top} A (x + \tan (t) v) \\ = - \frac {1}{2} h _ {1} (t) h _ {2} (t), \\ \end{array}
$$

where  $h_1(t) = \cos^2(t)$  and  $h_2(t) = (x + \tan(t)v)^\top A(x + \tan(t)v)$ . Note that  $h_1(0) = 1$ ,  $h_1'(0) = 0$ ,  $h_1''(0) = -2$ ,  $h_2(0) = x^\top Ax$ ,  $h_2'(0) = 2v^\top Ax$ , and  $h_2''(0) = 2v^\top Av$ . Now, by the product rule, we have

$$
h ^ {\prime \prime} (0) = - \frac {1}{2} h _ {1} ^ {\prime \prime} (0) h _ {2} (0) - h _ {1} ^ {\prime} (0) h _ {2} ^ {\prime} (0) - \frac {1}{2} h _ {1} (0) h _ {2} ^ {\prime \prime} (0) = x ^ {\top} A x - v ^ {\top} A v.
$$

Because the Rayleigh quotient always lies in  $[\lambda_{\min}, \lambda_{\max}]$ , we have  $|h''(0)| \leq \lambda_{\max} - \lambda_{\min}$ . Since  $x$  and  $v$  were arbitrary, this shows that  $f$  is geodesically  $(\lambda_{\max} - \lambda_{\min})$ -smooth.
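The computation of $h''(0)$ can be verified by a finite-difference check; the symmetric matrix and the base point below are random illustrative choices, and we use the equivalent form $\exp_x(tv) = \cos(t)\,x + \sin(t)\,v$:

```python
import numpy as np

# Finite-difference check of h''(0) = x^T A x - v^T A v for
# f(x) = -0.5 * x^T A x on the sphere, along exp_x(t v).
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = B + B.T                                   # symmetric test matrix
x = rng.standard_normal(4); x /= np.linalg.norm(x)
v = rng.standard_normal(4); v -= (v @ x) * x; v /= np.linalg.norm(v)

def h(t):
    p = np.cos(t) * x + np.sin(t) * v         # exp_x(t v) on the sphere
    return -0.5 * p @ A @ p

eps = 1e-4
second = (h(eps) - 2 * h(0) + h(-eps)) / eps**2
assert abs(second - (x @ A @ x - v @ A @ v)) < 1e-5
```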

Proposition H.2. The function  $f$  is geodesically 1-strongly convex.

Proof. It is enough to show that the function  $x \mapsto \frac{1}{2} d(x, p_i)^2$  is geodesically 1-strongly convex. When  $K_{\max} \leq 0$ , we have  $\delta = 1$ . Let  $\gamma : I \to M$  be a unit-speed geodesic whose image is in  $N$ . It follows from Proposition C.1 that

$$
\begin{array}{l} \frac {d ^ {2}}{d t ^ {2}} \frac {1}{2} d (\gamma (t), p _ {i}) ^ {2} = \frac {d}{d t} \left\langle \log_ {\gamma (t)} (p _ {i}), - \gamma^ {\prime} (t) \right\rangle \\ = \left\langle D _ {t} \log_ {\gamma (t)} \left(p _ {i}\right), - \gamma^ {\prime} (t) \right\rangle + \left\langle \log_ {\gamma (t)} \left(p _ {i}\right), - \gamma^ {\prime \prime} (t) \right\rangle . \\ \end{array}
$$

Note that  $\gamma''(t) = 0$  because  $\gamma$  is a geodesic. Now, Proposition 5.1 gives  $\frac{d^2}{dt^2} \frac{1}{2} d\left(\gamma(t), p_i\right)^2 \geq 1$ .
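For a concrete instance with $K_{\max} \leq 0$, the bound on the second derivative of $t \mapsto \frac{1}{2} d(\gamma(t), p)^2$ can be checked numerically in the hyperboloid model of $\mathbb{H}^2$ (curvature $-1$); the sample points and the tangent direction below are arbitrary:

```python
import numpy as np

# Finite-difference check that g(t) = 0.5*d(gamma(t), p)^2 satisfies
# g''(t) >= 1 along a unit-speed geodesic in the hyperboloid model.
J = np.diag([1.0, 1.0, -1.0])                        # Lorentz inner product
mink = lambda a, b: a @ J @ b

def point(a, b):                                     # lift (a, b) to H^2
    return np.array([a, b, np.sqrt(1 + a * a + b * b)])

x, p = point(0.3, -0.2), point(-0.5, 0.8)
u = np.array([1.0, 0.4, 0.0])
u = u + mink(u, x) * x                               # project onto T_x H^2
u = u / np.sqrt(mink(u, u))                          # unit speed

def g(t):
    gam = np.cosh(t) * x + np.sinh(t) * u            # geodesic from x
    return 0.5 * np.arccosh(-mink(gam, p)) ** 2

eps = 1e-4
second = (g(eps) - 2 * g(0) + g(-eps)) / eps**2
assert second >= 1 - 1e-6                            # 1-strong convexity
```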