Title: An accelerated, stochastic, second-order method with optimal adaptation to inexactness

URL Source: https://arxiv.org/html/2309.01570

Markdown Content:

Advancing the lower bounds: An accelerated, stochastic, second-order method with optimal adaptation to inexactness

Artem Agafonov 1,2, Dmitry Kamzolov 1, Alexander Gasnikov 2,4,5, Ali Kavis 3, Kimon Antonakopoulos 3, Volkan Cevher 3, Martin Takáč 1

1 Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, UAE; 2 Moscow Institute of Physics and Technology, Dolgoprudny, Russia; 3 Laboratory for Information and Inference Systems, IEM STI EPFL, Lausanne, Switzerland; 4 Innopolis University, Kazan, Russia; 5 Skoltech, Moscow, Russia

Abstract

We present a new accelerated stochastic second-order method that is robust to both gradient and Hessian inexactness, which occurs typically in machine learning. We establish theoretical lower bounds and prove that our algorithm achieves optimal convergence in both gradient and Hessian inexactness in this key setting. We further introduce a tensor generalization for stochastic higher-order derivatives. When the oracles are non-stochastic, the proposed tensor algorithm matches the global convergence of Nesterov Accelerated Tensor method. Both algorithms allow for approximate solutions of their auxiliary subproblems with verifiable conditions on the accuracy of the solution.

1 Introduction

In this paper, we consider the following general convex optimization problem:

min_{𝑥 ∈ ℝ^𝑑} 𝑓(𝑥),   (1)

where 𝑓(𝑥) is a convex and sufficiently smooth function. We assume that a solution 𝑥∗ ∈ ℝ^𝑑 exists, and we denote 𝑓∗ := 𝑓(𝑥∗). We define 𝑅 := ‖𝑥₀ − 𝑥∗‖ as the distance from the starting point to the solution.

Assumption 1.1.

The function 𝑓 ∈ 𝐶² has an 𝐿₂-Lipschitz-continuous Hessian, i.e., for any 𝑥, 𝑦 ∈ ℝ^𝑑

‖∇²𝑓(𝑥) − ∇²𝑓(𝑦)‖ ≤ 𝐿₂ ‖𝑥 − 𝑦‖.

Since the exact gradient is very expensive or even impossible to compute in many applications across machine learning, statistics, and signal processing, efficient methods that can work with inexact stochastic gradients are of great interest.

Assumption 1.2.

For all 𝑥 ∈ ℝ^𝑑, we assume that stochastic gradients 𝑔(𝑥, 𝜉) ∈ ℝ^𝑑 satisfy

𝔼[𝑔(𝑥, 𝜉) ∣ 𝑥] = ∇𝑓(𝑥),   𝔼[‖𝑔(𝑥, 𝜉) − ∇𝑓(𝑥)‖² ∣ 𝑥] ≤ 𝜎₁².   (2)
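For concreteness, Assumption 1.2 can be checked numerically for a simple synthetic oracle. The sketch below (all names hypothetical, on a toy quadratic) builds an unbiased stochastic gradient whose noise has second moment exactly 𝜎₁²:

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma1 = 5, 0.1
B = rng.standard_normal((d, d))
A = B.T @ B / d  # toy convex quadratic f(x) = 0.5 x^T A x

def grad(x):
    return A @ x

def stochastic_grad(x):
    # unbiased estimator: exact gradient plus zero-mean Gaussian noise,
    # scaled per coordinate so that E||g(x, xi) - grad f(x)||^2 = sigma1^2
    noise = rng.standard_normal(d) * (sigma1 / np.sqrt(d))
    return grad(x) + noise

x = rng.standard_normal(d)
samples = np.stack([stochastic_grad(x) for _ in range(50000)])
bias = np.linalg.norm(samples.mean(axis=0) - grad(x))               # ≈ 0 (unbiasedness)
second_moment = np.mean(np.sum((samples - grad(x)) ** 2, axis=1))   # ≈ sigma1^2
```

Averaging over many draws, the empirical bias vanishes and the empirical second moment of the noise matches the bound in (2).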

Extensive research has been conducted on first-order methods, from both a theoretical and a practical perspective. For 𝐿₁-smooth functions with stochastic gradients of variance 𝜎₁², the lower bound Ω(𝜎₁𝑅/√𝑇 + 𝐿₁𝑅²/𝑇²) was established by Nemirovski & Yudin (1983). For 𝑓(𝑥) = 𝔼[𝐹(𝑥, 𝜉)], stochastic approximation (SA) was developed, starting from the pioneering paper by Robbins & Monro (1951). Important improvements of SA were made by Polyak (1990); Polyak & Juditsky (1992); Nemirovski et al. (2009), where longer stepsizes with iterate averaging and proper step-size modifications were proposed, obtaining the rate 𝑂(𝜎₁𝑅/√𝑇 + 𝐿₁𝑅²/𝑇). The optimal method matching the lower bound was developed by Lan (2012), with convergence rate 𝑂(𝜎₁𝑅/√𝑇 + 𝐿₁𝑅²/𝑇²). In contrast, the literature on provable, globally convergent stochastic second-order methods for convex minimization is significantly more limited.

Second-order methods. Although second-order methods have been studied for centuries (Newton, 1687; Raphson, 1697; Simpson, 1740; Kantorovich, 1949; Moré, 1977; Griewank, 1981), most results concern local quadratic convergence. Significant breakthroughs regarding global convergence have been achieved only recently, starting from the paper on the Cubic Regularized Newton (CRN) method by Nesterov & Polyak (2006), the first second-order method with a global convergence rate 𝑂(𝐿₂𝑅³/𝑇²). Following this work, Nesterov (2008) proposed an acceleration mechanism on top of CRN and achieved the convergence rate 𝑂(𝐿₂𝑅³/𝑇³), going beyond the Ω(1/𝑇²) lower bound for first-order methods. Another cornerstone in the field is the work of Monteiro & Svaiter (2013), which, for the first time in the literature, matched the lower complexity bound Ω(𝐿₂𝑅³/𝑇^{7/2}) (Agarwal & Hazan, 2018; Arjevani et al., 2019) up to a logarithmic factor. The gap between the upper and lower bounds was closed only in 2022, in the subsequent works of Kovalev & Gasnikov (2022); Carmon et al. (2022). One of the main limitations of second-order methods is their high per-iteration cost, as they require computation of the exact Hessian.
Therefore, it is natural to use approximations of derivatives instead of their exact values. In (Ghadimi et al., 2017), a CRN method with 𝛿₂-inexact Hessian information and its accelerated version were proposed, achieving the convergence rate 𝑂(𝛿₂𝑅²/𝑇² + 𝐿₂𝑅³/𝑇³). This algorithm was later extended by Agafonov et al. (2023) to handle 𝛿₁-inexact gradients (and higher-order derivatives), with a resulting convergence rate of 𝑂(𝛿₁𝑅 + 𝛿₂𝑅²/𝑇² + 𝐿₂𝑅³/𝑇³). A recent paper by Antonakopoulos et al. (2022) proposes a stochastic adaptive second-order method, based on the extragradient method, without line-search and with convergence rate 𝑂(𝜎₁𝑅/√𝑇 + 𝜎₂𝑅²/𝑇^{3/2} + 𝐿₂𝑅³/𝑇³) when gradients and Hessians are noisy with variances 𝜎₁² and 𝜎₂². In light of these results, we identify several shortcomings and open questions:

What are the lower bounds for inexact second-order methods? What is the optimal trade-off between inexactness in the gradients and the Hessian?

In this work, we attempt to answer these questions in a systematic manner. Detailed descriptions of other relevant studies can be found in Appendix A.

Table 1: Comparison of existing results for second-order methods under inexact feedback. 𝑇 denotes the number of iterations, and 𝐿₂ represents the Lipschitz constant of the Hessian.

| Algorithm | Inexactness | Gradient convergence | Hessian convergence | Exact convergence |
| --- | --- | --- | --- | --- |
| Accelerated Inexact Cubic Newton (Ghadimi et al., 2017) | exact gradient, 𝛿₂-inexact Hessian | ✗ | 𝑂(𝛿₂𝑅²/𝑇²) | 𝑂(𝐿₂𝑅³/𝑇³) |
| Accelerated Inexact Tensor Method (Agafonov et al., 2023) | 𝛿₁-inexact gradient, 𝛿₂-inexact Hessian | 𝑂(𝛿₁𝑅) | 𝑂(𝛿₂𝑅²/𝑇²) | 𝑂(𝐿₂𝑅³/𝑇³) |
| Extra-Newton (Antonakopoulos et al., 2022) | stochastic gradient (2), unbiased stochastic Hessian | 𝑂(𝜎₁𝑅/√𝑇) | 𝑂(𝜎₂𝑅²/𝑇^{3/2}) | 𝑂(𝐿₂𝑅³/𝑇³) |
| Accelerated Stochastic Second-order method [This Paper] | stochastic gradient (2), stochastic Hessian (4) | 𝑂(𝜎₁𝑅/√𝑇) | 𝑂(𝜎₂𝑅²/𝑇²) | 𝑂(𝐿₂𝑅³/𝑇³) |
| Lower bound [This Paper] | stochastic gradient (2), stochastic Hessian (4) | Ω(𝜎₁𝑅/√𝑇) | Ω(𝜎₂𝑅²/𝑇²) | Ω(𝐿₂𝑅³/𝑇^{7/2}) |

Contributions. We summarize our contributions as follows:

1.

We propose an accelerated second-order algorithm that achieves the convergence rate 𝑂(𝜎₁𝑅/√𝑇 + 𝜎₂𝑅²/𝑇² + 𝐿₂𝑅³/𝑇³) for a stochastic Hessian with variance 𝜎₂², and 𝑂(𝜎₁𝑅/√𝑇 + 𝛿₂𝑅²/𝑇² + 𝐿₂𝑅³/𝑇³) for a 𝛿₂-inexact Hessian, improving the existing results (Agafonov et al., 2023; Antonakopoulos et al., 2022) (see Table 1).

2.

We prove that the above bounds are tight with respect to the variance of the gradient and the Hessian by developing a matching theoretical complexity lower bound (see Table 1).

3.

Our algorithm involves solving a cubic subproblem that arises in several globally convergent second-order methods (Nesterov & Polyak, 2006; Nesterov, 2008). To address this, we propose a criterion based on the accuracy of the subproblem’s gradient, along with a dynamic strategy for selecting the appropriate level of inexactness. This ensures an efficient solution of the subproblems without sacrificing the fast convergence of the initial method.

4.

We extend our method to higher-order minimization with stochastic/inexact oracles. We achieve the rate 𝑂(𝜎₁𝑅/√𝑇 + ∑_{𝑖=2}^{𝑝} 𝛿ᵢ𝑅ⁱ/𝑇ⁱ + 𝐿ₚ𝑅^{𝑝+1}/𝑇^{𝑝+1}) with a 𝛿ᵢ-inexact 𝑖-th derivative.

5.

We propose a restarted version of our algorithm for strongly convex minimization, which exhibits a linear rate. Via a mini-batch strategy, we demonstrate that the Hessian batch size stays constant as the desired accuracy 𝜀 decreases, so the total number of Hessian computations grows only logarithmically in 1/𝜀.

2 Problem statement and preliminaries

Taylor approximation and oracle feedback. Our starting point for constructing a second-order method is the second-order Taylor approximation of the function 𝑓(𝑥):

Φ_𝑥(𝑦) := 𝑓(𝑥) + ⟨∇𝑓(𝑥), 𝑦 − 𝑥⟩ + ½⟨𝑦 − 𝑥, ∇²𝑓(𝑥)(𝑦 − 𝑥)⟩,  𝑦 ∈ ℝ^𝑑.

In particular, since the exact computation of Hessians can be quite costly, we employ more tractable inexact estimators 𝑔(𝑥) and 𝐻(𝑥) for the gradient and the Hessian. These estimators are the main building blocks for the construction of the "inexact" second-order Taylor approximation. Formally, this is given by:

𝜙_𝑥(𝑦) := 𝑓(𝑥) + ⟨𝑔(𝑥), 𝑦 − 𝑥⟩ + ½⟨𝑦 − 𝑥, 𝐻(𝑥)(𝑦 − 𝑥)⟩,  𝑦 ∈ ℝ^𝑑.   (3)

Therefore, by combining Assumption 1.1 with the aforementioned estimators, we readily get the following estimation:

Lemma 2.1 ((Agafonov et al., 2023, Lemma 2)).

Let Assumption 1.1 hold. Then, for any 𝑥, 𝑦 ∈ ℝ^𝑑, we have

|𝑓(𝑦) − 𝜙_𝑥(𝑦)| ≤ (‖𝑔(𝑥) − ∇𝑓(𝑥)‖ + ½‖(𝐻(𝑥) − ∇²𝑓(𝑥))(𝑦 − 𝑥)‖) ‖𝑦 − 𝑥‖ + (𝐿₂/6)‖𝑦 − 𝑥‖³,

‖∇𝑓(𝑦) − ∇𝜙_𝑥(𝑦)‖ ≤ ‖𝑔(𝑥) − ∇𝑓(𝑥)‖ + ‖(𝐻(𝑥) − ∇²𝑓(𝑥))(𝑦 − 𝑥)‖ + (𝐿₂/3)‖𝑦 − 𝑥‖².
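As a sanity check, the first inequality of Lemma 2.1 can be verified numerically on a toy function with an 𝐿₂-Lipschitz Hessian, 𝑓(𝑥) = (𝐿₂/6)‖𝑥‖³ + ½‖𝑥‖² (all names in this sketch are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
d, L2 = 4, 2.0

# f(x) = (L2/6)||x||^3 + 0.5||x||^2 has an L2-Lipschitz-continuous Hessian
def f(x):
    r = np.linalg.norm(x)
    return L2 / 6 * r ** 3 + 0.5 * r ** 2

def grad(x):
    return (L2 / 2 * np.linalg.norm(x) + 1.0) * x

def hess(x):
    r = np.linalg.norm(x)
    if r == 0.0:
        return np.eye(d)
    return (L2 / 2 * r + 1.0) * np.eye(d) + (L2 / 2) * np.outer(x, x) / r

x, y = rng.standard_normal(d), rng.standard_normal(d)
g_err = 0.05 * rng.standard_normal(d)        # gradient inexactness g(x) - grad f(x)
H_err = 0.05 * rng.standard_normal((d, d))   # Hessian inexactness H(x) - hess f(x)
s = y - x

# inexact second-order model phi_x(y) from (3)
phi = f(x) + (grad(x) + g_err) @ s + 0.5 * s @ (hess(x) + H_err) @ s
lhs = abs(f(y) - phi)
rhs = (np.linalg.norm(g_err) + 0.5 * np.linalg.norm(H_err @ s)) * np.linalg.norm(s) \
      + L2 / 6 * np.linalg.norm(s) ** 3
```

For any draw of the perturbations, `lhs <= rhs`, as the lemma guarantees.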

Having established the main toolkit concerning the approximation of 𝑓, in the rest of this section we introduce the blanket assumptions regarding the inexact gradients and Hessians (for a complete overview, we refer to Table 1). In particular, we assume that our estimators satisfy the following statistical conditions.

Assumption 2.2 (Unbiased stochastic gradient with bounded variance and stochastic Hessian with bounded variance).

For all 𝑥 ∈ ℝ^𝑑, the stochastic gradient 𝑔(𝑥, 𝜉) satisfies (2), and the stochastic Hessian 𝐻(𝑥, 𝜉) satisfies

𝔼[‖𝐻(𝑥, 𝜉) − ∇²𝑓(𝑥)‖² ∣ 𝑥] ≤ 𝜎₂².   (4)

Assumption 2.3 (Unbiased stochastic gradient with bounded variance and inexact Hessian).

For all 𝑥 ∈ ℝ^𝑑, the stochastic gradient 𝑔(𝑥, 𝜉) satisfies (2). For given 𝑥, 𝑦 ∈ ℝ^𝑑, the inexact Hessian 𝐻(𝑥) satisfies

‖(𝐻(𝑥) − ∇²𝑓(𝑥))[𝑦 − 𝑥]‖ ≤ 𝛿₂^{𝑥,𝑦} ‖𝑦 − 𝑥‖.   (5)

Assumptions 2.2 and 2.3 differ from Condition 1 in (Agafonov et al., 2023) by the unbiasedness of the gradient. An unbiased gradient allows us to attain optimal convergence in the corresponding term, 𝑂(1/√𝑇), while an inexact (biased) gradient slows the convergence down to 𝑂(1), since a constant error can misalign the gradient. Note that we do not assume unbiasedness of the Hessian in any of the assumptions. Finally, note that Assumption 2.3 does not require (5) to hold for all 𝑥, 𝑦 ∈ ℝ^𝑑. Instead, we only consider inexactness along the direction 𝑦 − 𝑥, which may be significantly smaller than the norm of the difference between the Hessian and its approximation 𝐻(𝑥).

Auxiliary problem. Most second-order methods with global convergence require solving an auxiliary subproblem at each iteration. However, to the best of our knowledge, existing works on convex second-order methods under inexact derivatives do not account for inexactness in the solution of the subproblem. To address this gap, we propose incorporating a gradient criterion for the subproblem solution, given by

min_{𝑦 ∈ ℝ^𝑑} 𝜔_𝑥(𝑦)  such that  ‖∇𝜔_𝑥(𝑦)‖ ≤ 𝜏,   (6)

where 𝜔_𝑥(𝑦) is the objective of the subproblem and 𝜏 ≥ 0 is a tolerance parameter. We highlight that this criterion is verifiable at each step of the algorithm, which facilitates determining when to stop. By setting a constant tolerance parameter 𝜏, we get the following relationship between the absolute accuracy 𝜖 required for the initial problem and 𝜏: 𝜏 = 𝑂(𝜖^{5/6}). In practice, it may not be necessary to use a very small accuracy in the beginning. Later, we will discuss strategies for choosing the sequence 𝜏ₜ based on the iteration counter 𝑡.

3 The method

In this section, we present our proposed method, dubbed the Accelerated Stochastic Cubic Regularized Newton method. In particular, extending recent accelerated second-order algorithms (Nesterov, 2021b; Ghadimi et al., 2017; Agafonov et al., 2023), we propose a new variant of the accelerated cubic regularization method with stochastic gradients that achieves optimal convergence in the terms corresponding to gradient and Hessian inexactness. Moreover, the proposed scheme allows for an approximate solution of the auxiliary subproblem, enabling a precise determination of the required level of subproblem accuracy. We begin the algorithm description by introducing the main step. Given constants 𝛿̄ ≥ 0 and 𝑀 ≥ 𝐿₂, we define a model of the objective:

𝜔_𝑥^{𝑀,𝛿̄}(𝑦) := 𝜙_𝑥(𝑦) + (𝛿̄/2)‖𝑥 − 𝑦‖² + (𝑀/6)‖𝑥 − 𝑦‖³.

At each step of the algorithm, we aim to find 𝑢 ∈ argmin_{𝑦 ∈ ℝ^𝑑} 𝜔_𝑥^{𝑀,𝛿̄}(𝑦). However, finding the respective minimizer is a separate challenge. Instead of computing the exact minimum, we aim to find a point 𝑠 ∈ ℝ^𝑑 with a small gradient norm.

Definition 3.1.

Denote by 𝑠_{𝑀,𝛿̄,𝜏}(𝑥) a 𝜏-inexact solution of the subproblem, i.e., a point 𝑠 := 𝑠_{𝑀,𝛿̄,𝜏}(𝑥) such that

‖∇𝜔_𝑥^{𝑀,𝛿̄}(𝑠)‖ ≤ 𝜏.
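A minimal sketch of an inexact subproblem solver matching Definition 3.1: plain gradient descent with backtracking on 𝜔_𝑥^{𝑀,𝛿̄}, written in the shifted variable 𝑠 = 𝑦 − 𝑥, stopped as soon as the verifiable criterion ‖∇𝜔‖ ≤ 𝜏 holds. All names are hypothetical; practical implementations would use specialized cubic-subproblem solvers rather than plain gradient descent.

```python
import numpy as np

def solve_cubic_subproblem(g, H, M, delta_bar, tau, max_iter=20000):
    """Find s with ||grad omega(s)|| <= tau for the model
    omega(s) = <g, s> + 0.5 s^T H s + (delta_bar/2)||s||^2 + (M/6)||s||^3."""
    def omega(s):
        r = np.linalg.norm(s)
        return g @ s + 0.5 * s @ H @ s + delta_bar / 2 * r ** 2 + M / 6 * r ** 3

    s = np.zeros_like(g)
    for _ in range(max_iter):
        r = np.linalg.norm(s)
        grad = g + H @ s + delta_bar * s + 0.5 * M * r * s
        gn = np.linalg.norm(grad)
        if gn <= tau:  # verifiable stopping criterion (6)
            break
        # conservative local smoothness estimate, refined by Armijo backtracking
        step = 1.0 / (np.linalg.norm(H, 2) + delta_bar + M * (r + 1.0))
        while omega(s - step * grad) > omega(s) - 0.5 * step * gn ** 2:
            step *= 0.5
        s = s - step * grad
    return s
```

For a positive semidefinite 𝐻 the model is convex, so this loop reaches any 𝜏 > 0 in finitely many iterations.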

Next, we employ the technique of estimating sequences to propose the Accelerated Stochastic Cubic Newton method. The acceleration is based on aggregating stochastic linear models, given by

𝑙(𝑥, 𝑦) := 𝑓(𝑦) + ⟨𝑔(𝑦, 𝜉), 𝑥 − 𝑦⟩,

into the function 𝜓ₜ(𝑥) (see (8), (9)). The method is presented in detail in Algorithm 1.

Algorithm 1 Accelerated Stochastic Cubic Newton

1: Input: starting point 𝑦₀ = 𝑥₀; constant 𝑀 ≥ 2𝐿₂; non-negative non-decreasing sequences {𝛿̄ₜ}ₜ≥₀, {𝜆ₜ}ₜ≥₀, {𝜅̄₂^𝑡}ₜ≥₀, {𝜅̄₃^𝑡}ₜ≥₀, and

𝛼ₜ = 3/(𝑡 + 3),  𝐴ₜ = ∏_{𝑗=1}^{𝑡} (1 − 𝛼ⱼ),  𝐴₀ = 1,   (7)

𝜓₀(𝑥) := ((𝜅̄₂⁰ + 𝜆₀)/2)‖𝑥 − 𝑥₀‖² + (𝜅̄₃⁰/3)‖𝑥 − 𝑥₀‖³.   (8)

2: for 𝑡 ≥ 0 do

3:   𝑣ₜ = (1 − 𝛼ₜ)𝑥ₜ + 𝛼ₜ𝑦ₜ,  𝑥ₜ₊₁ = 𝑠_{𝑀,𝛿̄ₜ,𝜏}(𝑣ₜ)

4:   Compute

𝑦ₜ₊₁ = argmin_{𝑥 ∈ ℝⁿ} { 𝜓ₜ₊₁(𝑥) := 𝜓ₜ(𝑥) + ((𝜆ₜ₊₁ − 𝜆ₜ)/2)‖𝑥 − 𝑥₀‖² + ∑_{𝑖=2}^{3} ((𝜅̄ᵢ^{𝑡+1} − 𝜅̄ᵢ^𝑡)/𝑖)‖𝑥 − 𝑥₀‖ⁱ + (𝛼ₜ/𝐴ₜ) 𝑙(𝑥, 𝑥ₜ₊₁) }.   (9)

Theorem 3.2.

Let Assumption 1.1 hold and 𝑀 ≥ 2𝐿₂.

• Let Assumption 2.2 hold. After 𝑇 ≥ 1 iterations with parameters

𝜅̄₂^{𝑡+1} = 2𝛿̄ₜ𝛼ₜ²/𝐴ₜ,  𝜅̄₃^{𝑡+1} = (8𝑀/3)(𝛼ₜ₊₁³/𝐴ₜ₊₁),  𝜆ₜ = (𝜎₁/𝑅)(𝑡 + 3)^{5/2},  𝛿̄ₜ = (2𝜎₂ + (𝜎₁ + 𝜏)/𝑅)(𝑡 + 3)^{3/2},   (10)

we get the following bound:

𝔼[𝑓(𝑥_𝑇) − 𝑓(𝑥∗)] ≤ 𝑂(𝜏𝑅/√𝑇 + 𝜎₁𝑅/√𝑇 + 𝜎₂𝑅²/𝑇² + 𝑀𝑅³/𝑇³).   (11)

• Let Assumption 2.3 hold. After 𝑇 ≥ 1 iterations with parameters defined in (10) and 𝜎₂ = 𝛿₂ = max_{𝑡=1,…,𝑇} 𝛿₂^{𝑣ₜ₋₁,𝑥ₜ}, we get the following bound:

𝔼[𝑓(𝑥_𝑇) − 𝑓(𝑥∗)] ≤ 𝑂(𝜏𝑅/√𝑇 + 𝜎₁𝑅/√𝑇 + 𝛿₂𝑅²/𝑇² + 𝑀𝑅³/𝑇³).   (12)

This result provides an upper bound on the objective residual after 𝑇 iterations of Algorithm 1. The last term on the RHS of (11) and (12) corresponds to the exact Accelerated Cubic Newton method (Nesterov, 2008). The remaining terms reveal how the convergence rate is affected by the imprecise calculation of each derivative and by the inexact solution of the subproblem. We provide sufficient conditions on the inexactness of the derivatives to ensure that the method can still reach an objective residual smaller than 𝜀. Specifically, this result addresses the following question: given that the errors are controllable and can be made arbitrarily small, how small should each derivative's error be to achieve an 𝜀-solution?

Corollary 3.3.

Let the assumptions of Theorem 3.2 hold and let 𝜀 > 0 be the desired solution accuracy.

• Let the levels of inexactness in Assumption 2.2 be

𝜏 = 𝑂(𝜀^{5/6}(𝑀/𝑅³)^{1/6}),  𝜎₁ = 𝑂(𝜀^{5/6}(𝑀/𝑅³)^{1/6}),  𝜎₂ = 𝑂(𝜀^{1/3}𝑀^{2/3}).

• Let the levels of inexactness in Assumption 2.3 be

𝜏 = 𝑂(𝜀^{5/6}(𝑀/𝑅³)^{1/6}),  𝜎₁ = 𝑂(𝜀^{5/6}(𝑀/𝑅³)^{1/6}),  𝛿₂ = 𝑂(𝜀^{1/3}𝑀^{2/3}).

And let the number of iterations of Algorithm 1 satisfy 𝑇 = 𝑂((𝑀𝑅³/𝜀)^{1/3}). Then 𝑥_𝑇 is an 𝜀-solution of problem (1), i.e., 𝑓(𝑥_𝑇) − 𝑓(𝑥∗) ≤ 𝜀.
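The accuracy requirements of Corollary 3.3 are simple arithmetic; the hypothetical helper below computes the sufficient oracle accuracies and iteration budget for a target 𝜀, with absolute constants dropped:

```python
import math

def required_accuracies(eps, M, R):
    """Sufficient accuracy levels from Corollary 3.3 (constants omitted):
    tau, sigma1 = O(eps^{5/6} (M/R^3)^{1/6}), sigma2 = O(eps^{1/3} M^{2/3}),
    and iteration budget T = O((M R^3 / eps)^{1/3})."""
    tau = eps ** (5 / 6) * (M / R ** 3) ** (1 / 6)
    sigma1 = tau
    sigma2 = eps ** (1 / 3) * M ** (2 / 3)
    T = math.ceil((M * R ** 3 / eps) ** (1 / 3))
    return tau, sigma1, sigma2, T
```

Note the mild dependence on 𝜀 of the Hessian accuracy: halving 𝜀 multiplies 𝜎₂ by only 2^{−1/3} ≈ 0.79, while 𝜏 and 𝜎₁ shrink by 2^{−5/6}.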

In practice, achieving an excessively accurate solution of the subproblem in the initial iterations is not essential. Instead, a dynamic strategy can be employed to determine the level of accuracy required for the subproblem. Specifically, we can choose a dynamic precision level 𝜏ₜ = 𝑐/𝑡^{5/2}, where 𝑐 > 0. As a result, the convergence rate term associated with the inexactness of the subproblem becomes 𝑂(𝑐/𝑇³), which matches the convergence rate of the Accelerated Cubic Newton method.
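The dynamic tolerance schedule is a one-liner (hypothetical helper; 𝑐 is the free constant from the text):

```python
def tau_schedule(t, c=1e-3):
    """Dynamic subproblem tolerance tau_t = c / t^{5/2}, for t >= 1."""
    return c / t ** 2.5
```

The tolerance is loose in the first iteration (𝜏₁ = 𝑐) and tightens quickly as the method approaches the solution.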

4 Theoretical complexity lower bound

In this section, we present a novel theoretical complexity lower bound for inexact second-order methods with a stochastic gradient and an inexact (stochastic) Hessian. The proof technique draws inspiration from (Devolder et al., 2014; Nesterov, 2021b; 2018). For this section, we assume that the function 𝑓(𝑥) is convex and has an 𝐿₁-Lipschitz-continuous gradient and an 𝐿₂-Lipschitz-continuous Hessian. To begin, we describe the information and structure of stochastic second-order methods. At each point 𝑥ₜ, the oracle provides us with an unbiased stochastic gradient 𝑔ₜ = 𝑔(𝑥ₜ, 𝜉) and an inexact (stochastic) Hessian 𝐻ₜ = 𝐻(𝑥ₜ, 𝜉). The method can compute the minimum of the following models:

ℎₜ₊₁ = argmin_ℎ { 𝜙_{𝑥ₜ}(ℎ) = 𝑎₁⟨𝑔ₜ, ℎ⟩ + 𝑎₂⟨𝐻ₜℎ, ℎ⟩ + 𝑏₁‖ℎ‖² + 𝑏₂‖ℎ‖³ }.

Now, we formulate the main assumption regarding the method’s ability to generate new points.

Assumption 4.1.

The method generates a recursive sequence of test points 𝑥ₜ that satisfies the following condition:

𝑥ₜ₊₁ ∈ 𝑥₀ + Span{ℎ₁, …, ℎₜ₊₁}.

Most first-order and second-order methods, including accelerated versions, satisfy this assumption. However, we highlight that randomized methods are not covered by this lower bound; a randomized lower bound, even for exact high-order methods, is still an open problem. More details on randomized lower bounds for first-order methods are presented in (Woodworth & Srebro, 2017; Nemirovski & Yudin, 1983). Finally, we present the main theoretical complexity lower bound for stochastic second-order methods.

Theorem 4.2.

Let a second-order method ℳ with exactly solved subproblems satisfy Assumption 4.1 and have access only to an unbiased stochastic gradient and an inexact Hessian satisfying Assumption 2.2 or Assumption 2.3 with 𝜎₂ = 𝛿₂ = max_{𝑡=1,…,𝑇} 𝛿₂^{𝑥ₜ₋₁,𝑥ₜ}. Assume the method ℳ ensures, for any function 𝑓 with an 𝐿₁-Lipschitz-continuous gradient and an 𝐿₂-Lipschitz-continuous Hessian, the following convergence rate:

min_{0≤𝑡≤𝑇} 𝔼[𝑓(𝑥ₜ) − 𝑓(𝑥∗)] ≤ 𝑂(1) max{ 𝜎₁𝑅/Ξ₁(𝑇);  𝜎₂𝑅²/Ξ₂(𝑇);  𝐿₂𝑅³/Ξ₃(𝑇) }.   (13)

Then for all 𝑇 ≥ 1 we have

Ξ₁(𝑇) ≤ √𝑇,  Ξ₂(𝑇) ≤ 𝑇²,  Ξ₃(𝑇) ≤ 𝑇^{7/2}.   (14)

Proof.

We prove this theorem by contradiction. Assume that there exists a method ℳ that satisfies the conditions of Theorem 4.2 and is faster in one of the bounds from (14).

First case: Ξ₁(𝑇) > √𝑇 or Ξ₂(𝑇) > 𝑇². Let us apply this method to the first-order lower bound function. It is well known that for first-order methods the lower bound is Ω(𝜎₁𝑅/√𝑇 + 𝐿₁𝑅²/𝑇²) (Nemirovski & Yudin, 1983). The first-order lower bound function has a 0-Lipschitz-continuous Hessian, so the method ℳ can be applied to it. We fix the stochastic Hessian oracle as 𝐻(𝑥, 𝜉) = 2𝐿₁𝐼, which gives 𝜎₂ ≤ 2𝐿₁ for this inexact Hessian. With such a matrix 𝐻(𝑥, 𝜉) = 2𝐿₁𝐼, the method ℳ receives only first-order information and lies in the class of first-order methods. Hence, applying ℳ to the first-order lower bound function yields the rate min_{0≤𝑡≤𝑇} 𝔼[𝑓(𝑥ₜ) − 𝑓(𝑥∗)] ≤ 𝑂(1) max{𝜎₁𝑅/Ξ₁(𝑇); 𝜎₂𝑅²/Ξ₂(𝑇)}, where Ξ₁(𝑇) > √𝑇 or Ξ₂(𝑇) > 𝑇². This would be a method faster than the lower bound, a contradiction; hence Ξ₁(𝑇) ≤ √𝑇 and Ξ₂(𝑇) ≤ 𝑇².

Second case: Ξ₃(𝑇) > 𝑇^{7/2}. It is well known that the deterministic second-order lower bound is Ω(𝐿₂𝑅³/𝑇^{7/2}). Let us apply the method ℳ to the second-order lower bound function, where the oracle gives us exact gradients and exact Hessians; then 𝜎₁ = 0, 𝜎₂ = 0, and the method ℳ lies in the class of exact second-order methods but converges faster than the lower bound. This is a contradiction; hence Ξ₃(𝑇) ≤ 𝑇^{7/2}. ∎

5 Tensor generalization

In this section, we propose a tensor generalization of Algorithm 1. We start by introducing the standard assumption on the objective 𝑓 for tensor methods.

Assumption 5.1.

Function 𝑓 is convex, 𝑝 times differentiable on ℝ^𝑑, and its 𝑝-th derivative is Lipschitz continuous, i.e., for all 𝑥, 𝑦 ∈ ℝ^𝑑

‖∇ᵖ𝑓(𝑥) − ∇ᵖ𝑓(𝑦)‖ ≤ 𝐿ₚ ‖𝑥 − 𝑦‖.

We denote the 𝑖-th directional derivative of a function 𝑓 at 𝑥 along directions 𝑠₁, …, 𝑠ᵢ ∈ ℝⁿ as ∇ⁱ𝑓(𝑥)[𝑠₁, …, 𝑠ᵢ]. If all directions are the same, we write ∇ⁱ𝑓(𝑥)[𝑠]ⁱ. For a 𝑝-th order tensor 𝑈, we denote by ‖𝑈‖ the tensor norm recursively induced (Cartis et al., 2017) by the Euclidean norm on the space of 𝑝-th order tensors:

‖𝑈‖ = max_{‖𝑠₁‖ = … = ‖𝑠ₚ‖ = 1} { |𝑈[𝑠₁, …, 𝑠ₚ]| },

where ‖·‖ is the standard Euclidean norm. We construct tensor methods based on the 𝑝-th order Taylor approximation of the function 𝑓(𝑥), which can be written as follows:

Φ_{𝑥,𝑝}(𝑦) := 𝑓(𝑥) + ∑_{𝑖=1}^{𝑝} (1/𝑖!) ∇ⁱ𝑓(𝑥)[𝑦 − 𝑥]ⁱ,  𝑦 ∈ ℝ^𝑑.

Using approximations 𝐺ᵢ(𝑥) of the derivatives ∇ⁱ𝑓(𝑥), we create an inexact 𝑝-th order Taylor expansion of the objective:

𝜙_{𝑥,𝑝}(𝑦) := 𝑓(𝑥) + ∑_{𝑖=1}^{𝑝} (1/𝑖!) 𝐺ᵢ(𝑥)[𝑦 − 𝑥]ⁱ.

Next, we introduce a counterpart of Lemma 2.1 for high-order methods.

Lemma 5.2 ((Agafonov et al., 2023, Lemma 2)).

Let Assumption 5.1 hold. Then, for any 𝑥, 𝑦 ∈ ℝ^𝑑, we have

|𝑓(𝑦) − 𝜙_{𝑥,𝑝}(𝑦)| ≤ ∑_{𝑖=1}^{𝑝} (1/𝑖!) ‖(𝐺ᵢ(𝑥) − ∇ⁱ𝑓(𝑥))[𝑦 − 𝑥]^{𝑖−1}‖ ‖𝑦 − 𝑥‖ + (𝐿ₚ/(𝑝+1)!) ‖𝑦 − 𝑥‖^{𝑝+1},

‖∇𝑓(𝑦) − ∇𝜙_{𝑥,𝑝}(𝑦)‖ ≤ ∑_{𝑖=1}^{𝑝} (1/(𝑖−1)!) ‖(𝐺ᵢ(𝑥) − ∇ⁱ𝑓(𝑥))[𝑦 − 𝑥]^{𝑖−1}‖ + (𝐿ₚ/𝑝!) ‖𝑦 − 𝑥‖^{𝑝},

where we use the standard convention 0! = 1.

Following the assumptions for the second-order method, we introduce analogous assumptions for the high-order method.

Assumption 5.3 (Unbiased stochastic gradient with bounded variance and stochastic high-order derivatives with bounded variance).

For any 𝑥 ∈ ℝ^𝑑, the stochastic gradient 𝐺₁(𝑥, 𝜉) and the stochastic high-order derivatives 𝐺ᵢ(𝑥, 𝜉), 𝑖 = 2, …, 𝑝, satisfy

𝔼[𝐺₁(𝑥, 𝜉) ∣ 𝑥] = ∇𝑓(𝑥),  𝔼[‖𝐺₁(𝑥, 𝜉) − ∇𝑓(𝑥)‖² ∣ 𝑥] ≤ 𝜎₁²,   (15)

𝔼[‖𝐺ᵢ(𝑥, 𝜉) − ∇ⁱ𝑓(𝑥)‖² ∣ 𝑥] ≤ 𝜎ᵢ²,  𝑖 = 2, …, 𝑝.

Assumption 5.4 (Unbiased stochastic gradient with bounded variance and inexact high-order derivatives).

For any 𝑥 ∈ ℝ^𝑑, the stochastic gradient 𝐺₁(𝑥, 𝜉) satisfies (15). For given 𝑥, 𝑦 ∈ ℝ^𝑑, the inexact high-order derivatives 𝐺ᵢ(𝑥), 𝑖 = 2, …, 𝑝, satisfy

‖(𝐺ᵢ(𝑥) − ∇ⁱ𝑓(𝑥))[𝑦 − 𝑥]^{𝑖−1}‖ ≤ 𝛿ᵢ^{𝑥,𝑦} ‖𝑦 − 𝑥‖^{𝑖−1}.

To extend Algorithm 1 to tensor methods, we introduce a 𝑝-th order model of the function:

𝜔_{𝑥,𝑝}^{𝑀,𝛿̄}(𝑦) := 𝜙_{𝑥,𝑝}(𝑦) + (𝛿̄/2)‖𝑥 − 𝑦‖² + ∑_{𝑖=3}^{𝑝} (𝜂ᵢ𝛿ᵢ/𝑖!)‖𝑥 − 𝑦‖ⁱ + (𝑝𝑀/(𝑝+1)!)‖𝑥 − 𝑦‖^{𝑝+1},

where 𝜂ᵢ > 0, 3 ≤ 𝑖 ≤ 𝑝. Next, we modify Definition 3.1 for the high-order case.

Definition 5.5.

Denote by 𝑆ₚ^{𝑀,𝛿̄,𝜏}(𝑥) a 𝜏-inexact solution of the subproblem, i.e., a point 𝑆 := 𝑆ₚ^{𝑀,𝛿̄,𝜏}(𝑥) such that ‖∇𝜔_{𝑥,𝑝}^{𝑀,𝛿̄}(𝑆)‖ ≤ 𝜏.

Now, we are prepared to introduce the method and state the convergence theorem.

Algorithm 2 Accelerated Stochastic Tensor Method

1: Input: starting point 𝑦₀ = 𝑥₀; constants 𝑀 ≥ 2𝑝𝐿ₚ; 𝜂ᵢ ≥ 4, 3 ≤ 𝑖 ≤ 𝑝; starting inexactness 𝛿̄₀ ≥ 0; non-negative non-decreasing sequences {𝜅̄ᵢ^𝑡}ₜ≥₀ for 𝑖 = 2, …, 𝑝 + 1, and

𝛼ₜ = (𝑝 + 1)/(𝑡 + 𝑝 + 1),  𝐴ₜ = ∏_{𝑗=1}^{𝑡} (1 − 𝛼ⱼ),  𝐴₀ = 1.   (16)

𝜓₀(𝑥) := ((𝜅̄₂⁰ + 𝜆₀)/2)‖𝑥 − 𝑥₀‖² + ∑_{𝑖=3}^{𝑝} (𝜅̄ᵢ⁰/𝑖!)‖𝑥 − 𝑥₀‖ⁱ.

2: for 𝑡 ≥ 0 do

3:   𝑣ₜ = (1 − 𝛼ₜ)𝑥ₜ + 𝛼ₜ𝑦ₜ,  𝑥ₜ₊₁ = 𝑆ₚ^{𝑀,𝛿̄ₜ,𝜏}(𝑣ₜ)

4:   Compute

𝑦ₜ₊₁ = argmin_{𝑥 ∈ ℝⁿ} { 𝜓ₜ₊₁(𝑥) := 𝜓ₜ(𝑥) + ((𝜆ₜ₊₁ − 𝜆ₜ)/2)‖𝑥 − 𝑥₀‖² + ∑_{𝑖=2}^{𝑝} ((𝜅̄ᵢ^{𝑡+1} − 𝜅̄ᵢ^𝑡)/𝑖!)‖𝑥 − 𝑥₀‖ⁱ + (𝛼ₜ/𝐴ₜ) 𝑙(𝑥, 𝑥ₜ₊₁) }.

Theorem 5.6.

Let Assumption 5.1 hold and 𝑀 ≥ 2𝑝𝐿ₚ.

• Let Assumption 5.3 hold. After 𝑇 ≥ 1 iterations with parameters

𝜅̄₂^𝑡 = 𝑂(𝛿̄ₜ𝛼ₜ²/𝐴ₜ),  𝜅̄ᵢ^{𝑡+1} = 𝑂(𝛼ₜ₊₁ⁱ𝛿ᵢ/𝐴ₜ₊₁),  𝜅̄_{𝑝+1}^{𝑡+1} = 𝑂(𝛼ₜ₊₁^{𝑝+1}𝑀/𝐴ₜ₊₁),

𝜆ₜ = 𝑂((𝜎₁/𝑅) 𝑡^{𝑝+1/2}),  𝛿̄ₜ = 𝑂((𝜎₂ + (𝜎₁ + 𝜏)/𝑅) 𝑡^{3/2}),   (17)

we get the following bound:

𝔼[𝑓(𝑥_𝑇) − 𝑓(𝑥∗)] ≤ 𝑂(𝜏𝑅/√𝑇 + 𝜎₁𝑅/√𝑇 + ∑_{𝑖=2}^{𝑝} 𝜎ᵢ𝑅ⁱ/𝑇ⁱ + 𝑀𝑅^{𝑝+1}/𝑇^{𝑝+1}).

• Let Assumption 5.4 hold. After 𝑇 ≥ 1 iterations with parameters defined in (17) and 𝜎ᵢ = 𝛿ᵢ = max_{𝑡=1,…,𝑇} 𝛿ᵢ^{𝑣ₜ₋₁,𝑥ₜ}, we get the following bound:

𝔼[𝑓(𝑥_𝑇) − 𝑓(𝑥∗)] ≤ 𝑂(𝜏𝑅/√𝑇 + 𝜎₁𝑅/√𝑇 + ∑_{𝑖=2}^{𝑝} 𝛿ᵢ𝑅ⁱ/𝑇ⁱ + 𝑀𝑅^{𝑝+1}/𝑇^{𝑝+1}).

6 Strongly convex case

Assumption 6.1.

Function 𝑓 is 𝜇-strongly convex, 𝑝 times differentiable on ℝ^𝑑, and its 𝑝-th derivative is Lipschitz continuous, i.e., for all 𝑥, 𝑦 ∈ ℝ^𝑑

‖∇ᵖ𝑓(𝑥) − ∇ᵖ𝑓(𝑦)‖ ≤ 𝐿ₚ ‖𝑥 − 𝑦‖.

To exploit the strong convexity of the objective function and attain a linear convergence rate, we introduce a restarted version of the Accelerated Stochastic Tensor Method (Algorithm 2). In each iteration of the Restarted Accelerated Stochastic Tensor Method (Algorithm 3), we execute Algorithm 2 for a predetermined number of iterations, as specified in (18). The output of this run is then used as the initial point for the next restart of Algorithm 2, with the parameters reset, and this process repeats.

Algorithm 3 Restarted Accelerated Stochastic Tensor Method

Input: 𝑧₀ ∈ ℝ^𝑑, strong convexity parameter 𝜇 > 0, 𝑀 ≥ 𝐿ₚ, and 𝑅₀ > 0 such that ‖𝑧₀ − 𝑥∗‖ ≤ 𝑅₀. For 𝑠 = 1, 2, …:

1. Set 𝑥₀ = 𝑧ₛ₋₁, 𝑟ₛ₋₁ = 𝑅₀/2^{𝑠−1}, and 𝑅ₛ₋₁ = ‖𝑧ₛ₋₁ − 𝑥∗‖.

2. Run Algorithm 2 for 𝑡ₛ iterations, where

𝑡ₛ = 𝑂(1) max{ 1, (𝜏/(𝜇𝑟ₛ₋₁))², (𝜎₁/(𝜇𝑟ₛ₋₁))², max_{𝑖=2,…,𝑝} (𝛿ᵢ𝑅ₛ₋₁^{𝑖−2}/𝜇)^{1/𝑖}, (𝐿ₚ𝑅ₛ₋₁^{𝑝−1}/𝜇)^{1/(𝑝+1)} }.   (18)

3. Set 𝑧ₛ = 𝑥_{𝑡ₛ}.
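The restart schedule can be sketched for 𝑝 = 2 and an exact subproblem solution (𝜏 = 0). Below, `inner_iterations` implements (18) with the absolute constant set to 1 and the distance bound 𝑟ₛ₋₁ used in place of the unknown 𝑅ₛ₋₁, and `restarted_run` wires it to a caller-supplied inner solver standing in for Algorithm 2. All names are hypothetical.

```python
import math

def inner_iterations(mu, r_prev, sigma1, sigma2, L2):
    """Iteration count (18) for p = 2, tau = 0 (absolute constant set to 1);
    r_prev is the bound R0 / 2^{s-1} >= ||z_{s-1} - x*||."""
    return max(1,
               math.ceil((sigma1 / (mu * r_prev)) ** 2),
               math.ceil((sigma2 / mu) ** 0.5),           # (delta_2 / mu)^{1/2} term
               math.ceil((L2 * r_prev / mu) ** (1 / 3)))  # (L_2 R / mu)^{1/3} term

def restarted_run(run_inner, z0, mu, R0, sigma1, sigma2, L2, n_restarts):
    """Algorithm 3 (sketch): run the inner method, halve the distance bound, repeat.
    `run_inner(x0, t)` is assumed to run Algorithm 2 for t iterations from x0."""
    z = z0
    for s in range(1, n_restarts + 1):
        r_prev = R0 / 2 ** (s - 1)
        z = run_inner(z, inner_iterations(mu, r_prev, sigma1, sigma2, L2))
    return z
```

Note that as 𝑟ₛ₋₁ shrinks, the gradient-noise term (𝜎₁/(𝜇𝑟ₛ₋₁))² eventually dominates the schedule, matching the 𝜎₁²/(𝜇𝜀) term in Theorem 6.2.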

Theorem 6.2.

Let Assumption 6.1 hold and let the parameters of Algorithm 2 be chosen as in (17). Let {𝑧ₛ}ₛ≥₀ be generated by Algorithm 3 and 𝑅 > 0 be such that ‖𝑧₀ − 𝑥∗‖ ≤ 𝑅. Then for any 𝑠 ≥ 0 we have

𝔼‖𝑧ₛ − 𝑥∗‖² ≤ 4^{−𝑠}𝑅²,  𝔼[𝑓(𝑧ₛ)] − 𝑓(𝑥∗) ≤ 2^{−2𝑠−1}𝜇𝑅².   (19)

Moreover, the total number of iterations to reach the desired accuracy 𝜀, i.e., 𝑓(𝑧ₛ) − 𝑓(𝑥∗) ≤ 𝜀 in expectation, is

𝑂( (𝜏 + 𝜎₁)²/(𝜇𝜀) + (𝜎₂/𝜇 + 1) log((𝑓(𝑧₀) − 𝑓(𝑥∗))/𝜀) + ∑_{𝑖=3}^{𝑝} (𝜎ᵢ𝑅^{𝑖−2}/𝜇)^{1/𝑖} + (𝐿ₚ𝑅^{𝑝−1}/𝜇)^{1/(𝑝+1)} ).

Now, let us make a few observations regarding the results of Theorem 6.2. For simplicity, let the solution of the subproblem be exact and 𝑝 = 2, i.e., we restart the Accelerated Stochastic Cubic Newton method, so the total number of iterations is

𝑂( 𝜎₁²/(𝜇𝜀) + (𝜎₂/𝜇 + 1) log((𝑓(𝑧₀) − 𝑓(𝑥∗))/𝜀) + (𝐿₂𝑅/𝜇)^{1/3} ).   (20)

Next, consider solving the stochastic optimization problem min_{𝑥∈ℝ^𝑑} 𝐹(𝑥) = 𝔼[𝑓(𝑥, 𝜉)] using the mini-batch Restarted Accelerated Stochastic Cubic Newton method (Algorithm 3 with 𝑝 = 2). In this approach, the mini-batched stochastic gradient is computed as (1/𝑟₁)∑_{𝑖=1}^{𝑟₁} ∇𝑓(𝑥, 𝜉ᵢ) and the mini-batched stochastic Hessian as (1/𝑟₂)∑_{𝑖=1}^{𝑟₂} ∇²𝑓(𝑥, 𝜉ᵢ), where 𝑟₁ and 𝑟₂ are the batch sizes for gradients and Hessians, respectively. From the convergence estimates in (19) and (20), we can determine the required sample sizes for computing the batched gradients and Hessians: 𝑟₁ = 𝑂̃(𝜎₁²/(𝜀𝜇^{2/3})) and 𝑟₂ = 𝑂(𝜎₂/𝜇^{1/3}). Consequently, the overall number of stochastic gradient computations is 𝑂(𝜎₁²/(𝜀𝜇^{2/3})), which is similar to accelerated SGD (Ghadimi & Lan, 2013). Interestingly, the total number of stochastic Hessian computations is only 𝑂((𝜎₂/𝜇^{1/3}) log(1/𝜀)). This result highlights the practical importance of second-order methods: since the batch size for the Hessian is constant, there is no need to adjust it as the desired accuracy increases. This is particularly useful in distributed optimization problems under the similarity assumption (Zhang & Lin, 2015). In methods with this assumption (Zhang & Lin, 2015; Daneshmand et al., 2021; Agafonov et al., 2023), the server stores a Hessian sample that provides a "good" approximation of the exact Hessian of the objective function, and algorithms use this approximation instead of exchanging curvature information with the workers. The constant batch size allows one to accurately determine the sample size necessary to achieve fast convergence to any desired accuracy.
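The batch-size rules above are one-liners; the hypothetical helper below computes them with constants and log factors dropped:

```python
import math

def batch_sizes(eps, mu, sigma1, sigma2):
    """Mini-batch sizes for p = 2 (constants and log factors omitted):
    r1 = O~(sigma1^2 / (eps * mu^{2/3})), r2 = O(sigma2 / mu^{1/3})."""
    r1 = math.ceil(sigma1 ** 2 / (eps * mu ** (2 / 3)))
    r2 = math.ceil(sigma2 / mu ** (1 / 3))
    return r1, r2
```

Observe that `r2` does not depend on `eps` at all: the Hessian batch size stays fixed while the gradient batch size grows as the target accuracy tightens.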

7 Experiments

In this section, we present numerical experiments demonstrating the efficiency of our proposed methods. We consider logistic regression problems of the form:

𝑓(𝑥) = 𝔼[ log(1 + exp(−𝑏_𝜉 · 𝑎_𝜉ᵀ𝑥)) ],

where (𝑎_𝜉, 𝑏_𝜉) are training samples described by features 𝑎_𝜉 ∈ ℝ^𝑑 and class labels 𝑏_𝜉 ∈ {−1, 1}.
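A sketch of the stochastic oracles used in these experiments (hypothetical numpy implementation; `A` holds rows 𝑎ᵢᵀ, `b` the ±1 labels; the actual experiments use PyTorch optimizers):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_oracles(x, A, b, rng, r1=1500, r2=150):
    """Mini-batch stochastic gradient (batch r1) and Hessian (batch r2) of
    f(x) = E[log(1 + exp(-b_xi * a_xi^T x))]."""
    i1 = rng.choice(len(b), size=r1, replace=False)
    i2 = rng.choice(len(b), size=r2, replace=False)
    # per-sample gradient: -b * sigmoid(-b a^T x) * a
    m = -b[i1] * sigmoid(-b[i1] * (A[i1] @ x))
    g = (A[i1] * m[:, None]).mean(axis=0)
    # per-sample Hessian: p(1-p) a a^T with p = sigmoid(a^T x); b^2 = 1 drops out
    p = sigmoid(A[i2] @ x)
    w = p * (1.0 - p)
    H = (A[i2] * w[:, None]).T @ A[i2] / r2
    return g, H
```

Because the Hessian batch `r2` can be much smaller than the gradient batch `r1`, this oracle directly supports the third experimental regime below.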

Setup. We present results on the a9a dataset (𝑑 = 123) from LibSVM (Chang & Lin, 2011). We demonstrate the performance of Accelerated Stochastic Cubic Newton in three regimes: deterministic oracles (Figure 1), stochastic oracles with the same batch size for gradients and Hessians (Figures 2(a), 2(b)), and stochastic oracles with a smaller batch size for Hessians (Figures 2(c), 2(d)). The final regime is especially intriguing because the convergence component of Algorithm 1 associated with gradient noise decreases as 1/√𝑡, while the component related to Hessian noise decreases as 1/𝑡². This enables the use of smaller Hessian batch sizes (see Corollary 3.3).

Figure 1:Logistic regression on a9a with deterministic oracles

For the stochastic experiments, we randomly split the dataset into a training set (30000 samples) and a test set (2561 samples). The methods randomly sample data from the training set and do not have access to the test data. In this case, the training loss reflects finite-sum minimization, while the test loss reflects expectation minimization. We compare the performance of SGD, Extra-Newton (EN), and Accelerated Stochastic Cubic Newton (ASCN). We present experiments with fine-tuned hyperparameters in Figures 1 and 2. For SGD, we fine-tuned one parameter, the learning rate 𝑙𝑟. For EN, we fine-tuned two parameters, 𝛾 and 𝛽_0. For ASCN, we fine-tuned two parameters, 𝑀 and the combined quantity 𝜎_1/𝑅 (the latter only in the stochastic case); we set 𝜏 = 0 and 𝜎_2 = 0, as they are dominated by 𝜎_1/𝑅. To demonstrate the globalization properties of the methods, we consider a starting point 𝑥_0 far from the solution, specifically 𝑥_0 = 3·𝑒, where 𝑒 is the all-ones vector. All methods are implemented as PyTorch 2.0 optimizers. Additional details and experiments are provided in Appendix C.
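To illustrate the far-initialization setup, the following self-contained sketch of ours (synthetic data and an untuned, conservative step size of our choosing, not the paper's configuration) runs plain gradient descent on a small logistic problem started at 𝑥_0 = 3·𝑒:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.normal(size=(n, d))
b = np.sign(A @ rng.normal(size=d) + 0.1 * rng.normal(size=n))  # synthetic labels

def loss(x):
    # mean of log(1 + exp(-b * a^T x)), computed stably
    return float(np.mean(np.logaddexp(0.0, -b * (A @ x))))

def grad(x):
    s = 1.0 / (1.0 + np.exp(b * (A @ x)))   # sigma(-b * a^T x)
    return -(A * (b * s)[:, None]).mean(axis=0)

x = 3.0 * np.ones(d)                         # x0 = 3*e, far from the optimum
f0 = loss(x)
for _ in range(300):
    x -= 0.5 * grad(x)                       # untuned, conservative step size
```

Even from a distant start, the convex logistic loss decreases steadily; the experiments in the paper probe the same globalization property for the second-order methods.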

Figure 2: Logistic regression on a9a with stochastic oracles. (a) Train loss, gradient and Hessian batch sizes 1500; (b) test loss, gradient and Hessian batch sizes 1500; (c) train loss, gradient batch size 10000, Hessian batch size 150; (d) test loss, gradient batch size 10000, Hessian batch size 150.

Results. The proposed ASCN method consistently outperforms Extra-Newton and SGD across all experimental scenarios. In the deterministic setting, ASCN exhibits a slight superiority over Extra-Newton; in the stochastic experiments, we observe a notable improvement as well. However, in the stochastic regime all methods tend to converge to the same point as they approach the solution. This convergence pattern is primarily driven by the stochastic gradient noise term 𝜎_1𝑅/𝑇, which dominates the rates close to the solution. Furthermore, the experiments with different batch sizes for gradients and Hessians support the theory, confirming that the Hessian inexactness term of ASCN, 𝜎_2𝑅^2/𝑇^2, decays faster than the corresponding term of Extra-Newton, 𝜎_2𝑅^2/𝑇^{3/2}. To conclude, the experiments show that second-order information can significantly accelerate convergence. Moreover, the methods need significantly fewer stochastic Hessians than stochastic gradients.

8 Conclusion

In summary, our contribution includes a novel stochastic accelerated second-order algorithm for convex and strongly convex optimization. We establish a lower bound for stochastic second-order optimization and prove that our algorithm achieves optimal convergence in both gradient and Hessian inexactness. Additionally, we introduce a tensor generalization of second-order methods for stochastic high-order derivatives. Nevertheless, it is essential to acknowledge certain limitations. Like other globally convergent second-order methods, our algorithm involves a subproblem that necessitates an additional subroutine to find its solution. To mitigate this challenge, we offer theoretical insight into the required accuracy of the subproblem's solution. Future research could involve enhancing the adaptiveness of the algorithm. Additionally, there is potential for constructing optimal stochastic second-order and tensor methods by incorporating stochastic elements into existing exact methods. These efforts could further improve both the practical and theoretical aspects of stochastic second-order and high-order optimization.

Acknowledgments. This work was supported by a grant for research centers in the field of artificial intelligence, provided by the Analytical Center for the Government of the Russian Federation in accordance with the subsidy agreement (agreement identifier 000000D730321P5Q0002) and the agreement with the Moscow Institute of Physics and Technology dated November 1, 2021 No. 70-2021-00138. This work was supported by Hasler Foundation Program: Hasler Responsible AI (project number 21043). This work was supported by the Swiss National Science Foundation (SNSF) under grant number 200021_205011. Research was sponsored by the Army Research Office and was accomplished under Grant Number W911NF-24-1-0048.

Ethics Statement. The authors acknowledge that they have read and adhere to the ICLR Code of Ethics.

Reproducibility Statement. The experimental details are provided in Section 7 and Appendix C. The PyTorch code for the methods is available on https://github.com/OPTAMI/OPTAMI.

References

Artem Agafonov, Pavel Dvurechensky, Gesualdo Scutari, Alexander Gasnikov, Dmitry Kamzolov, Aleksandr Lukashevich, and Amir Daneshmand. An accelerated second-order method for distributed stochastic optimization. In 2021 60th IEEE Conference on Decision and Control (CDC), pp. 2407–2413, 2021. doi: 10.1109/CDC45484.2021.9683400.
Artem Agafonov, Dmitry Kamzolov, Pavel Dvurechensky, Alexander Gasnikov, and Martin Takáč. Inexact tensor methods and their application to stochastic convex optimization. Optimization Methods and Software, 0(0):1–42, 2023. doi: 10.1080/10556788.2023.2261604.
Naman Agarwal and Elad Hazan. Lower bounds for higher-order convex optimization. In Proceedings of the 31st Conference On Learning Theory, volume 75 of Proceedings of Machine Learning Research, pp. 774–792. PMLR, 2018. URL https://proceedings.mlr.press/v75/agarwal18a.html.
Kimon Antonakopoulos, Ali Kavis, and Volkan Cevher. Extra-Newton: A first approach to noise-adaptive accelerated second-order methods. In Advances in Neural Information Processing Systems, volume 35, pp. 29859–29872. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/c10804702be5a0cca89331315413f1a2-Paper-Conference.pdf.
Yossi Arjevani, Ohad Shamir, and Ron Shiff. Oracle complexity of second-order methods for smooth convex optimization. Mathematical Programming, 178:327–360, 2019. doi: 10.1007/s10107-018-1293-1.
Michel Baes. Estimate sequence methods: extensions and approximations. Institute for Operations Research, ETH, Zürich, Switzerland, 2(1), 2009.
S. Bellavia, G. Gurioli, B. Morini, and Ph. L. Toint. Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives. Journal of Complexity, 68:101591, 2022. doi: 10.1016/j.jco.2021.101591.
Stefania Bellavia and Gianmarco Gurioli. Stochastic analysis of an adaptive cubic regularization method under inexact gradient evaluations and dynamic Hessian accuracy. Optimization, 71:227–261, 2022. doi: 10.1080/02331934.2021.1892104.
Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, Yuanzhi Li, and Aaron Sidford. Near-optimal method for highly smooth convex optimization. In Proceedings of the Thirty-Second Conference on Learning Theory, volume 99, pp. 492–507. PMLR, 2019. URL https://proceedings.mlr.press/v99/bubeck19a.html.
Yair Carmon, Danielle Hausler, Arun Jambulapati, Yujia Jin, and Aaron Sidford. Optimal and adaptive Monteiro–Svaiter acceleration. In Advances in Neural Information Processing Systems, volume 35, pp. 20338–20350. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/7ff97417474268e6b5a38bcbfae04944-Paper-Conference.pdf.
Coralia Cartis and Katya Scheinberg. Global convergence rate analysis of unconstrained optimization methods based on probabilistic models. Mathematical Programming, 169:337–375, 2018. doi: 10.1007/s10107-017-1137-4.
Coralia Cartis, Nicholas I. M. Gould, and Philippe L. Toint. Adaptive cubic regularisation methods for unconstrained optimization. Part I: motivation, convergence and numerical results. Mathematical Programming, 127(2):245–295, 2011a.
Coralia Cartis, Nicholas I. M. Gould, and Philippe L. Toint. Adaptive cubic regularisation methods for unconstrained optimization. Part II: worst-case function- and derivative-evaluation complexity. Mathematical Programming, 130(2):295–319, 2011b.
Coralia Cartis, Nicholas I. M. Gould, and Philippe L. Toint. Improved second-order evaluation complexity for unconstrained nonlinear optimization using high-order regularized models. arXiv preprint arXiv:1708.04044, 2017.
Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):1–27, 2011.
Amir Daneshmand, Gesualdo Scutari, Pavel Dvurechensky, and Alexander Gasnikov. Newton method over networks is fast up to the statistical precision. In International Conference on Machine Learning, pp. 2398–2409. PMLR, 2021.
Olivier Devolder, François Glineur, and Yurii Nesterov. First-order methods of smooth convex optimization with inexact oracle. Mathematical Programming, 146:37–75, 2014. doi: 10.1007/s10107-013-0677-5.
Nikita Doikov and Yurii Nesterov. Gradient regularization of Newton method with Bregman distances. Mathematical Programming, 2023. doi: 10.1007/s10107-023-01943-7.
Nikita Doikov and Yurii E. Nesterov. Inexact tensor methods with dynamic accuracies. In ICML, pp. 2577–2586, 2020.
Nikita Doikov, El Mahdi Chayti, and Martin Jaggi. Second-order optimization with lazy Hessians. In Proceedings of the 40th International Conference on Machine Learning, volume 202, pp. 8138–8161. PMLR, 2023. URL https://proceedings.mlr.press/v202/doikov23a.html.
Nikita Doikov, Konstantin Mishchenko, and Yurii Nesterov. Super-universal regularized Newton method. SIAM Journal on Optimization, 34:27–56, 2024. doi: 10.1137/22M1519444.
Pavel Dvurechensky, Dmitry Kamzolov, Aleksandr Lukashevich, Soomin Lee, Erik Ordentlich, César A. Uribe, and Alexander Gasnikov. Hyperfast second-order local solvers for efficient statistically preconditioned distributed optimization. EURO Journal on Computational Optimization, 10:100045, 2022. doi: 10.1016/j.ejco.2022.100045.
Alexander Gasnikov, Pavel Dvurechensky, Eduard Gorbunov, Evgeniya Vorontsova, Daniil Selikhanovych, and César A. Uribe. Optimal tensor methods in smooth convex and uniformly convex optimization. In Proceedings of the Thirty-Second Conference on Learning Theory, volume 99, pp. 1374–1391. PMLR, 2019a. URL https://proceedings.mlr.press/v99/gasnikov19a.html.
Alexander Gasnikov, Pavel Dvurechensky, Eduard Gorbunov, Evgeniya Vorontsova, Daniil Selikhanovych, César A. Uribe, Bo Jiang, Haoyue Wang, Shuzhong Zhang, Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, Yuanzhi Li, and Aaron Sidford. Near optimal methods for minimizing convex functions with Lipschitz p-th derivatives. In Proceedings of the Thirty-Second Conference on Learning Theory, volume 99, pp. 1392–1393. PMLR, 2019b. URL https://proceedings.mlr.press/v99/gasnikov19b.html.
Saeed Ghadimi and Guanghui Lan. Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization, II: shrinking procedures and optimal algorithms. SIAM Journal on Optimization, 23:2061–2089, 2013. doi: 10.1137/110848876.
Saeed Ghadimi, Han Liu, and Tong Zhang. Second-order methods with cubic regularization under inexact information. arXiv preprint arXiv:1710.05782, 2017.
Hiva Ghanbari and Katya Scheinberg. Proximal quasi-Newton methods for regularized convex optimization with linear and accelerated sublinear convergence rates. Computational Optimization and Applications, 69:597–627, 2018. doi: 10.1007/s10589-017-9964-z.
Geovani Nunes Grapiglia and Yu. Nesterov. Tensor methods for minimizing convex functions with Hölder continuous higher-order derivatives. SIAM Journal on Optimization, 30(4):2750–2779, 2020.
Geovani Nunes Grapiglia and Yurii Nesterov. On inexact solution of auxiliary problems in tensor methods for convex optimization. Optimization Methods and Software, 36:145–170, 2021. doi: 10.1080/10556788.2020.1731749.
Andreas Griewank. The modification of Newton's method for unconstrained optimization by bounding cubic terms. Technical report NA/12, 1981.
Slavomír Hanzely, Dmitry Kamzolov, Dmitry Pasechnyuk, Alexander Gasnikov, Peter Richtárik, and Martin Takáč. A damped Newton method achieves global 𝒪(1/𝑘^2) and local quadratic convergence rate. In Advances in Neural Information Processing Systems, volume 35, pp. 25320–25334. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/a1f0c0cd6caaa4863af5f12608edf63e-Paper-Conference.pdf.
K. H. Hoffmann and H. J. Kornstaedt. Higher-order necessary conditions in abstract mathematical programming. Journal of Optimization Theory and Applications, 26:533–568, 1978. doi: 10.1007/BF00933151.
Ruichen Jiang and Aryan Mokhtari. Accelerated quasi-Newton proximal extragradient: Faster rate for smooth convex optimization. arXiv preprint arXiv:2306.02212, 2023.
Ruichen Jiang, Qiujiang Jin, and Aryan Mokhtari. Online learning guided curvature approximation: A quasi-Newton method with global non-asymptotic superlinear convergence. CoRR, abs/2302.08580, 2023. doi: 10.48550/arXiv.2302.08580.
Dmitry Kamzolov. Near-optimal hyperfast second-order method for convex optimization. In Mathematical Optimization Theory and Operations Research, pp. 167–178. Springer International Publishing, 2020.
Dmitry Kamzolov, Alexander Gasnikov, Pavel Dvurechensky, Artem Agafonov, and Martin Takáč. Exploiting higher-order derivatives in convex optimization methods. arXiv preprint arXiv:2208.13190, 2022.
Dmitry Kamzolov, Klea Ziu, Artem Agafonov, and Martin Takáč. Accelerated adaptive cubic regularized quasi-Newton methods. arXiv preprint arXiv:2302.04987, 2023.
Leonid Vitalyevich Kantorovich. On Newton's method. Trudy Matematicheskogo Instituta imeni V. A. Steklova, 28:104–144, 1949. (In Russian).
Jonas Moritz Kohler and Aurelien Lucchi. Sub-sampled cubic regularization for non-convex optimization. In Proceedings of the 34th International Conference on Machine Learning, volume 70, pp. 1895–1904. PMLR, 2017. URL https://proceedings.mlr.press/v70/kohler17a.html.
Dmitry Kovalev and Alexander Gasnikov. The first optimal acceleration of high-order methods in smooth convex optimization. In Advances in Neural Information Processing Systems, volume 35, pp. 35339–35351. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/e56f394bbd4f0ec81393d767caa5a31b-Paper-Conference.pdf.
Guanghui Lan. An optimal method for stochastic composite optimization. Mathematical Programming, 133:365–397, 2012. doi: 10.1007/s10107-010-0434-y.
Aurelien Lucchi and Jonas Kohler. A sub-sampled tensor method for nonconvex optimization. IMA Journal of Numerical Analysis, 43:2856–2891, 2023. doi: 10.1093/imanum/drac057.
Konstantin Mishchenko. Regularized Newton method with global 𝒪(1/𝑘^2) convergence. SIAM Journal on Optimization, 33:1440–1462, 2023. doi: 10.1137/22M1488752.
Renato D. C. Monteiro and B. F. Svaiter. An accelerated hybrid proximal extragradient method for convex optimization and its implications to second-order methods. SIAM Journal on Optimization, 23:1092–1125, 2013. doi: 10.1137/110833786.
Jorge J. Moré. The Levenberg–Marquardt algorithm: implementation and theory. In Conference on Numerical Analysis, University of Dundee, Scotland, 1977. URL https://www.osti.gov/biblio/7256021.
A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19:1574–1609, 2009. doi: 10.1137/070704277.
Arkadi Semenovich Nemirovski and David Borisovich Yudin. Problem Complexity and Method Efficiency in Optimization. Wiley, 1983.
Yurii Nesterov. Accelerating the cubic regularization of Newton's method on convex problems. Mathematical Programming, 112:159–181, 2008. doi: 10.1007/s10107-006-0089-x.
Yurii Nesterov. Lectures on Convex Optimization. Springer Cham, 2nd edition, 2018. doi: 10.1007/978-3-319-91578-4.
Yurii Nesterov. Inexact high-order proximal-point methods with auxiliary search procedure. SIAM Journal on Optimization, 31:2807–2828, 2021a. doi: 10.1137/20M134705X.
Yurii Nesterov. Implementable tensor methods in unconstrained convex optimization. Mathematical Programming, 186:157–183, 2021b. doi: 10.1007/s10107-019-01449-1.
Yurii Nesterov. Superfast second-order methods for unconstrained convex optimization. Journal of Optimization Theory and Applications, 191:1–30, 2021c. doi: 10.1007/s10957-021-01930-y.
Yurii Nesterov and Boris T. Polyak. Cubic regularization of Newton method and its global performance. Mathematical Programming, 108:177–205, 2006. doi: 10.1007/s10107-006-0706-8.
Isaac Newton. Philosophiae naturalis principia mathematica. Edmond Halley, 1687.
Boris T. Polyak and Anatoli B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30:838–855, 1992. doi: 10.1137/0330046.
Boris Teodorovich Polyak. A new method of stochastic approximation type. Avtomatika i Telemekhanika, 51:98–107, 1990.
Roman Polyak. Complexity of the regularized Newton method. arXiv preprint arXiv:1706.08483, 2017.
Roman A. Polyak. Regularized Newton method for unconstrained convex optimization. Mathematical Programming, 120:125–145, 2009. doi: 10.1007/s10107-007-0143-3.
Joseph Raphson. Analysis Aequationum Universalis Seu Ad Aequationes Algebraicas Resolvendas Methodus Generalis & Expedita, Ex Nova Infinitarum Serierum Methodo, Deducta Ac Demonstrata. Th. Braddyll, 1697.
Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, 22:400–407, 1951. URL http://www.jstor.org/stable/2236626.
Katya Scheinberg and Xiaocheng Tang. Practical inexact proximal quasi-Newton method with global complexity analysis. Mathematical Programming, 160:495–529, 2016. doi: 10.1007/s10107-016-0997-3.
Damien Scieur. Adaptive quasi-Newton and Anderson acceleration framework with explicit global (accelerated) convergence rates. arXiv preprint arXiv:2305.19179, 2023.
Thomas Simpson. Essays on several curious and useful subjects, in speculative and mix'd mathematicks. Illustrated by a variety of examples. H. Woodfall, 1740.
Nilesh Tripuraneni, Mitchell Stern, Chi Jin, Jeffrey Regier, and Michael I. Jordan. Stochastic cubic regularization for fast nonconvex optimization. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018. URL https://proceedings.neurips.cc/paper_files/paper/2018/file/db1915052d15f7815c8b88e879465a1e-Paper.pdf.
Blake Woodworth and Nathan Srebro. Lower bound for randomized first order convex optimization. arXiv preprint arXiv:1709.03594, 2017.
Peng Xu, Fred Roosta, and Michael W. Mahoney. Newton-type methods for non-convex optimization under inexact Hessian information. Mathematical Programming, 184:35–70, 2020. doi: 10.1007/s10107-019-01405-z.
Yuchen Zhang and Xiao Lin. DiSCO: Distributed optimization for self-concordant empirical loss. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pp. 362–370. PMLR, 2015. URL https://proceedings.mlr.press/v37/zhangb15.html.

Appendix A Related work

The idea of using high-order derivatives in optimization has been known for a long time (Hoffmann & Kornstaedt, 1978). In 2009, M. Baes extended the cubic regularization approach with second-order derivatives (𝑝 = 2) of Nesterov & Polyak (2006) to high-order derivatives (𝑝 > 2) in Baes (2009). However, the subproblem of these methods was non-convex, making them impractical. In 2018, Yu. Nesterov proposed the implementable (Accelerated) Tensor Method (Nesterov, 2021b), wherein the convexity of the subproblem was achieved by increasing the regularization parameter. Hence, the convex subproblem could be efficiently solved by appropriate subsolvers, making the algorithm practically applicable. The same work established a lower complexity bound for tensor methods under a higher-order smoothness assumption. Shortly after, near-optimal (Gasnikov et al., 2019b;a; Bubeck et al., 2019) and optimal (Kovalev & Gasnikov, 2022; Carmon et al., 2022) high-order methods were introduced. Furthermore, under higher smoothness assumptions, second-order methods (Nesterov, 2021c;a; Kamzolov, 2020; Doikov et al., 2024) can surpass the corresponding lower complexity bound for functions with Lipschitz-continuous Hessians. For more comprehensive information on high-order methods, one can refer to the review by Kamzolov et al. (2022).

In general, second- and higher-order methods are known for their faster convergence compared to first-order methods. However, their computational cost per iteration is significantly higher due to the computation of high-order derivatives. To alleviate this computational burden, it is common to employ approximations of derivatives instead of exact values. While there is a wide range of second-order and tensor methods for the non-convex case under stochastic or inexact derivatives (Cartis et al., 2011a;b; Cartis & Scheinberg, 2018; Kohler & Lucchi, 2017; Xu et al., 2020; Tripuraneni et al., 2018; Lucchi & Kohler, 2023; Bellavia & Gurioli, 2022; Bellavia et al., 2022; Doikov et al., 2023), the same cannot be said for the convex case. In the context of convex problems, there have been studies on second-order methods with inexact Hessian information (Ghadimi et al., 2017), tensor methods with inexact and stochastic derivatives (Agafonov et al., 2023), and the Extra-Newton algorithm with stochastic gradients and Hessians (Antonakopoulos et al., 2022).

As a possible application of methods with inexact Hessians, we highlight quasi-Newton (QN) methods. Such methods approximate second-order derivatives using the history of gradient feedback. Quasi-Newton methods are known for their impressive practical performance and local superlinear convergence. However, for a long time, the main drawback of such methods was their slow theoretical global convergence, slower than gradient descent. The first steps to improve the global convergence of such methods were taken in Scheinberg & Tang (2016); Ghanbari & Scheinberg (2018), but the resulting methods could still be slower than gradient descent. The first globally convergent quasi-Newton methods that provably match gradient descent were obtained via cubic regularization in two consecutive papers, Kamzolov et al. (2023) and Jiang et al. (2023). This also opened the possibility of accelerated QN methods (Kamzolov et al., 2023) that theoretically match the fast gradient method and the first-order lower bounds. The direction was further explored for different methods in Scieur (2023); Jiang & Mokhtari (2023). Another possible application of high-order methods with inexact or stochastic derivatives is distributed optimization (Zhang & Lin, 2015; Daneshmand et al., 2021; Agafonov et al., 2021; Dvurechensky et al., 2022).

One of the main challenges of tensor and regularized second-order methods is solving the auxiliary subproblem to compute the iterate update. In both the second-order and higher-order cases, this usually requires running a subsolver algorithm. The impact of the accuracy to which the auxiliary problem is solved on the convergence of the algorithm has been studied in several works (Grapiglia & Nesterov, 2021; 2020; Doikov & Nesterov, 2020). Another actively developing direction relies on constructions of cubic regularized Newton (CRN) methods with an explicit step (Polyak, 2009; 2017; Mishchenko, 2023; Doikov & Nesterov, 2023; Doikov et al., 2024; Hanzely et al., 2022).

Appendix B On the intuition behind the algorithm

Model. For the second-order case, the model 𝜔_{𝑥,𝑀}^{𝛿̄}(𝑦) comprises three key components: an inexact Taylor approximation 𝜙_𝑥(𝑦), cubic regularization (𝑀/6)‖𝑥−𝑦‖^3, and an additional quadratic regularization (𝛿̄/2)‖𝑥−𝑦‖^2. The combination of the Taylor polynomial and cubic regularization is the standard model for exact second-order methods, as together they form a model that is both convex and an upper bound on the objective (Nesterov, 2008) (see (Nesterov, 2021b) for high-order optimization). However, introducing inexactness into the Taylor approximation makes additional regularization necessary (Agafonov et al., 2023).
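For intuition, the minimizer of such a regularized second-order model can be computed via an eigendecomposition plus a one-dimensional root search. The sketch below is our own (not the paper's subsolver): it takes gradient and Hessian estimates 𝑔, 𝐻, assumes 𝐻 + 𝛿̄𝐼 is positive definite (the convex case), and exploits the stationarity condition (𝐻 + 𝛿̄𝐼 + (𝑀/2)‖𝑠‖𝐼)𝑠 = −𝑔.

```python
import numpy as np

def cubic_newton_step(g, H, M, delta_bar):
    """Minimize <g,s> + 0.5 s^T H s + (delta_bar/2)||s||^2 + (M/6)||s||^3.

    Assumes H + delta_bar*I is positive definite. The stationarity condition
    (H + delta_bar*I + (M/2)||s|| I) s = -g is solved by bisection on
    r = ||s|| in the eigenbasis of H.
    """
    lam, Q = np.linalg.eigh(H)
    gt = Q.T @ g

    def norm_s(r):  # ||s(r)|| for s(r) = -(H + delta_bar*I + (M/2) r I)^{-1} g
        return np.linalg.norm(gt / (lam + delta_bar + 0.5 * M * r))

    lo, hi = 0.0, 1.0
    while norm_s(hi) > hi:       # bracket the unique fixed point r = ||s(r)||
        hi *= 2.0
    for _ in range(100):         # bisection on the scalar equation
        mid = 0.5 * (lo + hi)
        if norm_s(mid) > mid:
            lo = mid
        else:
            hi = mid
    r = 0.5 * (lo + hi)
    return -(Q @ (gt / (lam + delta_bar + 0.5 * M * r)))
```

Since norm_s(r) is decreasing while r is increasing, the fixed point is unique, so the bisection converges to the model minimizer.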

The first reason to add quadratic regularization is to ensure that the Hessian of the function is majorized by the Hessian of the model:

0 ⪯ ∇^2𝑓(𝑦) ⪯ ∇^2𝜙_𝑥(𝑦) + 𝛿_2𝐼 + 𝐿_2‖𝑦−𝑥‖𝐼 ⪯ ∇^2𝜙_𝑥(𝑦) + (𝛿̄/2)𝐼 + (𝑀/2)‖𝑦−𝑥‖𝐼 ⪯ ∇^2𝜔_{𝑥,𝑀}^{𝛿̄}(𝑦).

Moreover, this regularization is essential for handling stochastic gradients correctly. Note that we add the quadratic regularizer with the constant 𝛿̄_𝑡 = 2𝛿_2 + (𝜎_1/𝑅)(𝑡+1)^{3/2}. Here, 2𝛿_2 accounts for Hessian majorization, while (𝜎_1/𝑅)(𝑡+1)^{3/2} is crucial for achieving optimal convergence in gradient inexactness. From our perspective, this regularization can be viewed as damping the size of the stochastic Cubic Newton step, as stochastic gradients may lead to undesirable directions.

For further clarification, please refer to Lemma E.3. This lemma serves as a bound on the progress of the step. Take a look at the right-hand side of equation (29). Without proper quadratic regularization, we would not capture the correct term related to Hessian inexactness, 𝛿_2; consequently, the desired convergence term 𝛿_2𝑅^2/𝑇^2 cannot be achieved. Moving on to the left-hand side of (29), we encounter the term (2/𝛿̄)‖𝑔(𝑥) − ∇𝑓(𝑥)‖. Here, choosing the appropriate 𝛿̄ is crucial to compensate for stochastic gradient errors and achieve optimal convergence in the gradient inexactness term.

Estimating sequences. Estimating sequences are a standard optimization technique for achieving acceleration (Nesterov, 2018, Section 2.2.1). To the best of our knowledge, the application of estimating sequences to second-order methods was first introduced in (Nesterov, 2008). The concept involves adapting acceleration techniques traditionally applied to first-order methods to the realm of second-order methods. In this work, we make slight modifications to the estimating sequences derived from (Nesterov, 2008) to preserve the customary relationships inherent in accelerated methods:

𝑓(𝑥_𝑡)/𝐴_{𝑡−1} − err_{low} ≤ 𝜓_𝑡^∗ ≤ 𝑓(𝑥^∗)/𝐴_{𝑡−1} + 𝑐_2‖𝑥^∗ − 𝑥_0‖^2 + 𝑐_3‖𝑥^∗ − 𝑥_0‖^3 + err_{up},

where 𝐴_𝑡 and 𝛼_𝑡 are scaling factors common for acceleration (Nesterov, 2018; 2008).

For simplicity, let err_{low} = 0, err_{up} = 0, and 𝑐_2 = 0; this is the case for exact derivatives and exact subproblem solutions. Then, with a specific choice of the scaling factors, one gets the convergence

𝑓(𝑥_𝑡) − 𝑓(𝑥^∗) ≤ 𝐴_{𝑡−1} 𝑐_3 ‖𝑥^∗ − 𝑥_0‖^3.

In our case, the errors and 𝑐_2 are nonzero and account for gradient, Hessian, and subproblem inexactness. By applying the estimating sequence technique, we obtain the rates (11), (12).
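The telescoping property of the scaling factors that drives these derivations (the recursion 𝐴_𝑗 = 𝐴_{𝑗−1}(1 − 𝛼_𝑗) used in the proof of Lemma E.1) can be checked numerically; this is a quick sanity check of ours, not part of the paper:

```python
import random

# If A_j = A_{j-1} * (1 - alpha_j), then alpha_j / A_j = 1/A_j - 1/A_{j-1},
# so the weights alpha_j / A_j telescope when summed over j.
random.seed(0)
A_prev, total, max_gap = 1.0, 0.0, 0.0
for _ in range(20):
    alpha = random.uniform(0.05, 0.5)
    A = A_prev * (1.0 - alpha)
    gap = abs(alpha / A - (1.0 / A - 1.0 / A_prev)) / (alpha / A)
    max_gap = max(max_gap, gap)
    total += alpha / A
    A_prev = A
```

The telescoping sum of the weights equals 1/𝐴_{final} − 1/𝐴_{initial}, which is exactly how the per-step linearizations are aggregated into a global rate.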

The choice of parameters. The cubic regularization parameter 𝑀 ≥ 2𝐿_2 is the standard choice for second-order methods (Nesterov & Polyak, 2006). The quadratic regularization parameter 𝛿̄_𝑡 = 2𝛿_2 + ((𝜎_1 + 𝜏)/𝑅)(𝑡+1)^{3/2} consists of 2𝛿_2, which compensates for Hessian errors, and ((𝜎_1 + 𝜏)/𝑅)(𝑡+1)^{3/2}, which compensates for stochastic gradient and subproblem solution errors. The regularization parameters 𝜅̄_2^𝑡, 𝜅̄_3^𝑡, 𝜆_𝑡 are utilized for the second step of the method. The 𝜅̄'s are chosen in (40), (42) to uphold the inequality (34) needed for acceleration, while 𝜆_𝑡 = (𝜎/𝑅)(𝑡+1)^{5/2} serves to compensate for stochastic gradient errors in the estimating functions 𝜓_𝑡. The specific choices of 𝛿̄ and 𝜆 are made in the proof of Theorem E.7 to achieve the optimal convergence rate.
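Under our reading of the extracted formulas (the fraction placements were reconstructed from the garbled source, so the exact constants are our assumption), these schedules are simple functions of the iteration counter; the point is their polynomial growth in 𝑡:

```python
def delta_bar(t, delta2, sigma1, tau, R):
    # Quadratic regularization schedule: 2*delta2 compensates Hessian error,
    # while the growing term damps stochastic-gradient and subproblem errors.
    return 2.0 * delta2 + (sigma1 + tau) / R * (t + 1) ** 1.5

def lam(t, sigma, R):
    # Regularizer for the estimating functions psi_t; grows one power of
    # (t+1) faster than delta_bar.
    return sigma / R * (t + 1) ** 2.5
```

Both schedules increase monotonically in 𝑡, so the damping strengthens over the run, which is how the noise terms are kept at the optimal order.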

Appendix C Additional Experiments

After tuning, we obtained the following hyperparameters.

For deterministic oracles: 𝑙𝑟 = 20 for GD; 𝛾 = 5 and 𝛽_0 = 0.5 for Extra-Newton; 𝑀 = 0.01 for ASCN.

For stochastic oracles: 𝑙𝑟 = 20 for SGD; 𝛾 = 5 and 𝛽_0 = 0.05 for Extra-Newton; 𝑀 = 0.01 and 𝜎 = 1e−7 for ASCN.

For stochastic oracles with different batch sizes: 𝑙𝑟 = 20 for SGD; 𝛾 = 5 and 𝛽_0 = 1.0 for Extra-Newton; 𝑀 = 0.01 and 𝜎 = 1e−7 for ASCN.

In Figures 3 and 4, we present additional experiments with different batch sizes for gradients and Hessians, and in Figures 5 and 6 we present Figure 2 at an increased size. Moving on to Figures 7 and 8, a comparison of running time is illustrated for two distinct setups: the gradient batch size is set to 10000, and the Hessian batch sizes are 150 and 450, respectively. One iteration of SGD consumes 0.16 seconds, while the execution times of EN and ASCN are approximately 0.33 seconds in both scenarios.

Figure 3: Logistic regression on a9a. Gradient batch size 10000, Hessian batch size 450. (a) Train loss; (b) test loss.
Figure 4: Logistic regression on a9a. Gradient batch size 10000, Hessian batch size 900. (a) Train loss; (b) test loss.
Figure 5: Logistic regression on a9a. Gradient and Hessian batch sizes 1500. (a) Train loss; (b) test loss.
Figure 6: Logistic regression on a9a. Gradient batch size 10000, Hessian batch size 150. (a) Train loss; (b) test loss.
Figure 7: Logistic regression on a9a. Gradient batch size 10000, Hessian batch size 150. (a) Train loss; (b) test loss.
Figure 8: Logistic regression on a9a. Gradient batch size 10000, Hessian batch size 450. (a) Train loss; (b) test loss.

Appendix D Proof of Lemmas 2.1 and 5.2

Lemma 5.2.

Let Assumption 5.1 hold. Then, for any 𝑥 , 𝑦 ∈ ℝ 𝑑 , we have

|f(y) − φ_{x,p}(y)| ≤ ∑_{i=1}^{p} (1/i!) ‖(G_i(x) − ∇^i f(x))[y − x]^{i−1}‖ ‖y − x‖ + (L_p/(p+1)!) ‖y − x‖^{p+1},

‖∇f(y) − ∇φ_{x,p}(y)‖ ≤ ∑_{i=1}^{p} (1/(i−1)!) ‖(G_i(x) − ∇^i f(x))[y − x]^{i−1}‖ + (L_p/p!) ‖y − x‖^{p}.

The proof of this lemma is the same as the proofs of Lemmas 1 and 2 from Agafonov et al. (2023). Lemma 2.1 is the special case of Lemma 5.2 for p = 2.

Appendix E Proof of Theorem 3.2

The full proof is organized as follows:

Lemma E.1 provides an upper bound for the estimating sequence ψ_t(x);

Lemmas E.2 and E.3 establish the efficiency of the inexact Cubic Newton step x_{t+1} = S_{M,δ̄_t}(v_t);

Lemma E.6 provides a lower bound on ψ_t(x), based on the results of the technical Lemmas E.4 and E.5;

everything is combined in Theorem E.7 in order to prove convergence and obtain the convergence rate.

The following lemma shows that the sequence of functions 𝜓 ¯ 𝑡 ⁢ ( 𝑥 ) can be upper bounded by the properly regularized objective function.

Lemma E.1.

For convex function 𝑓 ⁢ ( 𝑥 ) and 𝜓 𝑡 ⁢ ( 𝑥 ) , we have

𝜓 𝑡 ⁢ ( 𝑥 ∗ ) ≤ 𝑓 ⁢ ( 𝑥 ∗ ) 𝐴 𝑡 − 1 + 𝜅 ¯ 2 𝑡 + 𝜆 𝑡 2 ⁢ ‖ 𝑥 ∗ − 𝑥 0 ‖ 2 + 𝜅 ¯ 3 𝑡 6 ⁢ ‖ 𝑥 ∗ − 𝑥 0 ‖ 3 + 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 𝑢 ⁢ 𝑝 ,

(21)

where

err_t^{up} = ∑_{j=0}^{t−1} (α_j/A_j) ⟨g(x_{j+1}) − ∇f(x_{j+1}), x* − x_{j+1}⟩.

(22)

Proof.

For t = 0, let us define A_{−1} such that 1/A_{−1} = 0; then f(x*)/A_{−1} = 0 and

𝜓 0 ⁢ ( 𝑥 ∗ ) ≤ 𝜅 ¯ 2 0 + 𝜆 0 2 ⁢ ‖ 𝑥 ∗ − 𝑥 0 ‖ 2 + 𝜅 ¯ 3 0 3 ⁢ ‖ 𝑥 ∗ − 𝑥 0 ‖ 3

From

ψ_{t+1}(x) := ψ_t(x) + ((λ_{t+1} − λ_t)/2) ‖x − x_0‖² + ∑_{i=2}^{3} ((κ̄_i^{t+1} − κ̄_i^t)/i) ‖x − x_0‖^i + (α_t/A_t) l(x, x_{t+1}),

(23)

we have

ψ_t(x*) = ((κ̄_2^t + λ_t)/2) ‖x* − x_0‖² + (κ̄_3^t/3) ‖x* − x_0‖³ + ∑_{j=0}^{t−1} (α_j/A_j) l(x*, x_{j+1}).

(24)

From (7), we have that, for all j ≥ 1, A_j = A_{j−1}(1 − α_j), which leads to α_j/A_j = 1/A_j − 1/A_{j−1}. Hence, we have ∑_{j=0}^{t−1} α_j/A_j = 1/A_{t−1} − 1/A_{−1} = 1/A_{t−1} and, using the convexity of the objective f, we get

∑_{j=0}^{t−1} (α_j/A_j) l(x*, x_{j+1}) ≤ ∑_{j=0}^{t−1} (α_j/A_j) l̄(x*, x_{j+1}) + ∑_{j=0}^{t−1} (α_j/A_j) ⟨g(x_{j+1}) − ∇f(x_{j+1}), x* − x_{j+1}⟩

(25)

≤ f(x*) ∑_{j=0}^{t−1} α_j/A_j + err_t^{up} = f(x*)/A_{t−1} + err_t^{up},

(26)

where l̄(x, y) := f(y) + ⟨∇f(y), x − y⟩. Finally, combining all the inequalities from above, we obtain

𝜓 𝑡 ⁢ ( 𝑥 ∗ ) ≤ ( ⁢ 24 ⁢ ) , ( ⁢ 26 ⁢ ) 𝑓 ⁢ ( 𝑥 ∗ ) 𝐴 𝑡 − 1 + 𝜅 ¯ 2 𝑡 + 𝜆 𝑡 2 ⁢ ‖ 𝑥 ∗ − 𝑥 0 ‖ 2 + 𝜅 ¯ 3 𝑡 6 ⁢ ‖ 𝑥 ∗ − 𝑥 0 ‖ 3 + 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 𝑢 ⁢ 𝑝 .

(27)
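As a quick sanity check of the telescoping identity used in this proof (a numerical sketch; the specific choice α_t = 3/(t+3) anticipates (44), and α_0 = 1 matches the convention 1/A_{−1} = 0):

```python
# Check: with A_j = A_{j-1}(1 - alpha_j) and 1/A_{-1} = 0, we have
# alpha_j/A_j = 1/A_j - 1/A_{j-1}, so sum_{j=0}^{t-1} alpha_j/A_j = 1/A_{t-1}.
def telescoping_gap(t):
    alpha = lambda j: 3.0 / (j + 3)   # alpha_0 = 1, consistent with 1/A_{-1} = 0
    A = [1.0]                         # A_0 = 1 (empty product)
    for j in range(1, t):
        A.append(A[-1] * (1.0 - alpha(j)))
    total = sum(alpha(j) / A[j] for j in range(t))
    return abs(total - 1.0 / A[t - 1])

assert telescoping_gap(50) < 1e-9
```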

Lemma E.2.

For a function f(x) with L₂-Lipschitz-continuous Hessian and H(x_t) a δ₂ᵗ-inexact Hessian, for any v_t, x_{t+1} ∈ ℝ^d we have

‖ ∇ 𝜙 𝑣 𝑡 ⁢ ( 𝑥 𝑡 + 1 ) − ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ ≤ 𝛿 2 𝑡 ⁢ ‖ 𝑥 𝑡 + 1 − 𝑣 𝑡 ‖ + 𝐿 2 2 ⁢ ‖ 𝑥 𝑡 + 1 − 𝑣 𝑡 ‖ 2 + ‖ 𝑔 ⁢ ( 𝑣 𝑡 ) − ∇ 𝑓 ⁢ ( 𝑣 𝑡 ) ‖ ,

(28)

where we denote δ₂ᵗ = δ_t^{v_t, x_{t+1}} to simplify the notation.

Proof.
‖∇φ_{v_t}(x_{t+1}) − ∇f(x_{t+1})‖ = ‖∇φ_{v_t}(x_{t+1}) − Φ_{v_t}(x_{t+1}) + Φ_{v_t}(x_{t+1}) − ∇f(x_{t+1})‖

≤ ‖∇φ_{v_t}(x_{t+1}) − Φ_{v_t}(x_{t+1})‖ + ‖Φ_{v_t}(x_{t+1}) − ∇f(x_{t+1})‖

≤ ‖(∇²f(v_t) − B_{v_t})(x_{t+1} − v_t)‖ + ‖Φ_{v_t}(x_{t+1}) − ∇f(x_{t+1})‖ + ‖g(v_t) − ∇f(v_t)‖

≤ δ₂ᵗ ‖x_{t+1} − v_t‖ + (L₂/2) ‖x_{t+1} − v_t‖² + ‖g(v_t) − ∇f(v_t)‖

The next Lemma characterizes the progress of the inexact cubic step 𝑆 𝑀 , 𝛿 𝑡 ⁢ ( 𝑣 𝑡 ) in Algorithm 1.

Lemma E.3.

Let {x_t, v_t}_{t≥1} be generated by Algorithm 1. Then, for any δ̄_t ≥ 4δ₂ᵗ, M ≥ 4L₂ and x_{t+1} = S_{M,δ̄_t}(v_t), the following holds:

τ²/δ̄_t + (2/δ̄_t) ‖g(v_t) − ∇f(v_t)‖² + ⟨∇f(x_{t+1}), v_t − x_{t+1}⟩

≥ min{ ‖∇f(x_{t+1})‖² / (4δ̄_t), ‖∇f(x_{t+1})‖^{3/2} (1/(3M))^{1/2} }.

(29) Proof.

For simplicity, we denote r_{t+1} = ‖x_{t+1} − v_t‖ and

ζ_{t+1} = δ̄_t + (M/2) ‖x_{t+1} − v_t‖.

(30)

By Definition 3.1 for x_{t+1} = S_{M,δ̄_t,τ}(v_t):

τ ≥ ‖∇φ_{v_t}(x_{t+1}) + δ̄_t (x_{t+1} − v_t) + (M/2) ‖x_{t+1} − v_t‖ (x_{t+1} − v_t)‖

=^{(30)} ‖∇φ_{v_t}(x_{t+1}) + ζ_{t+1} (x_{t+1} − v_t)‖.

(31)

We start with getting an upper bound for ‖ ∇ 𝜙 𝑣 𝑡 ⁢ ( 𝑥 𝑡 + 1 ) − ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ .

‖ ∇ 𝜙 𝑣 𝑡 ⁢ ( 𝑥 𝑡 + 1 ) − ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ ≤ ( ⁢ 28 ⁢ ) 𝛿 2 𝑡 ⁢ 𝑟 𝑡 + 1 + 𝐿 2 2 ⁢ 𝑟 𝑡 + 1 2 + ‖ 𝑔 ⁢ ( 𝑣 𝑡 ) − ∇ 𝑓 ⁢ ( 𝑣 𝑡 ) ‖ .

(32)

From the inexact solution of the subproblem, we get

‖∇f(x_{t+1}) + ζ_{t+1}(x_{t+1} − v_t)‖²

= ‖(∇φ_{v_t}(x_{t+1}) + ζ_{t+1}(x_{t+1} − v_t)) − (∇φ_{v_t}(x_{t+1}) − ∇f(x_{t+1}))‖²

≤ 2 ‖∇φ_{v_t}(x_{t+1}) − ∇f(x_{t+1})‖² + 2 ‖∇φ_{v_t}(x_{t+1}) + ζ_{t+1}(x_{t+1} − v_t)‖²

≤^{Def. 3.1} 2 ‖∇φ_{v_t}(x_{t+1}) − ∇f(x_{t+1})‖² + 2τ²

(33)

Next, from the previous inequality and (32), we get

4 ‖g(v_t) − ∇f(v_t)‖² + 4 (δ₂ᵗ + (L₂/2) r_{t+1})² r_{t+1}² ≥^{(32)} 2 ‖∇φ_{v_t}(x_{t+1}) − ∇f(x_{t+1})‖²

≥^{(33)} ‖∇f(x_{t+1}) + ζ_{t+1}(x_{t+1} − v_t)‖² − 2τ²

= 2 ⟨∇f(x_{t+1}), x_{t+1} − v_t⟩ ζ_{t+1} + ‖∇f(x_{t+1})‖² + ζ_{t+1}² ‖x_{t+1} − v_t‖² − 2τ².

Hence,

𝜏 2 𝜁 𝑡 + 1 + 2 𝜁 𝑡 + 1 ⁢ ‖ 𝑔 ⁢ ( 𝑣 𝑡 ) − ∇ 𝑓 ⁢ ( 𝑣 𝑡 ) ‖ 2 + ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑣 𝑡 − 𝑥 𝑡 + 1 ⟩ ≥ 1 2 ⁢ 𝜁 𝑡 + 1 ⁢ ‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 2 + 1 2 ⁢ 𝜁 𝑡 + 1 ⁢ ( 𝜁 𝑡 + 1 2 − 4 ⁢ ( 𝛿 2 𝑡 + 𝐿 2 2 ⁢ 𝑟 𝑡 + 1 ) 2 ) ⁢ 𝑟 𝑡 + 1 2 ,

and finally, by using the definition of ζ_{t+1}, we get

𝜏 2 𝛿 ¯ 𝑡 + 2 𝛿 ¯ 𝑡 ⁢ ‖ 𝑔 ⁢ ( 𝑣 𝑡 ) − ∇ 𝑓 ⁢ ( 𝑣 𝑡 ) ‖ 2 + ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑣 𝑡 − 𝑥 𝑡 + 1 ⟩ ≥ 1 2 ⁢ 𝜁 𝑡 + 1 ⁢ ‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 2 + 1 2 ⁢ 𝜁 𝑡 + 1 ⁢ ( 𝜁 𝑡 + 1 2 − 4 ⁢ ( 𝛿 2 𝑡 + 𝐿 2 2 ⁢ 𝑟 𝑡 + 1 ) 2 ) ⁢ 𝑟 𝑡 + 1 2

Next, we consider two cases, depending on which term dominates in ζ_{t+1}.

If 𝛿 ¯ 𝑡 ≥ 𝑀 2 ⁢ ‖ 𝑥 𝑡 + 1 − 𝑣 𝑡 ‖ , then we get the following bound

𝜏 2 𝛿 ¯ 𝑡 + 2 𝛿 ¯ 𝑡 ⁢ ‖ 𝑔 ⁢ ( 𝑣 𝑡 ) − ∇ 𝑓 ⁢ ( 𝑣 𝑡 ) ‖ 2 + ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑣 𝑡 − 𝑥 𝑡 + 1 ⟩ ≥ 1 2 ⁢ 𝜁 𝑡 + 1 ⁢ ‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 2 + 1 2 ⁢ 𝜁 𝑡 + 1 ⁢ ( 𝜁 𝑡 + 1 2 − 4 ⁢ ( 𝛿 2 𝑡 + 𝐿 2 2 ⁢ 𝑟 𝑡 + 1 ) 2 ) ⁢ 𝑟 𝑡 + 1 2

≥ 1 4 ⁢ 𝛿 ¯ 𝑡 ⁢ ‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 2

If 𝛿 ¯ 𝑡 < 𝑀 2 ⁢ ‖ 𝑥 𝑡 + 1 − 𝑣 𝑡 ‖ , then similarly to previous case, we get

𝜏 2 𝛿 ¯ 𝑡 + 2 𝛿 ¯ 𝑡 ⁢ ‖ 𝑔 ⁢ ( 𝑣 𝑡 ) − ∇ 𝑓 ⁢ ( 𝑣 𝑡 ) ‖ 2 + ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑣 𝑡 − 𝑥 𝑡 + 1 ⟩

≥ ‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 2 2 ⁢ 𝜁 𝑡 + 1 + ( ( 𝛿 ¯ 𝑡 + 𝑀 2 ⁢ 𝑟 𝑡 + 1 ) 2 − 4 ⁢ ( 𝛿 2 𝑡 + 𝐿 2 2 ⁢ 𝑟 𝑡 + 1 ) 2 ) ⁢ 𝑟 𝑡 + 1 2 2 ⁢ 𝜁 𝑡 + 1

‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 2 2 ⁢ 𝜁 𝑡 + 1 + ( ( 𝛿 ¯ 𝑡 − 2 ⁢ 𝛿 2 𝑡 + 𝑀 − 2 ⁢ 𝐿 2 2 ⁢ 𝑟 𝑡 + 1 ) ⁢ ( 𝛿 𝑡 ¯ + 2 ⁢ 𝛿 2 𝑡 + 2 ⁢ 𝐿 2 + 𝑀 2 ⁢ 𝑟 𝑡 + 1 ) ) ⁢ 𝑟 𝑡 + 1 2 2 ⁢ 𝜁 𝑡 + 1

≥ ‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 2 2 ⁢ 𝑀 ⁢ 𝑟 𝑡 + 1 + 𝑀 2 − 4 ⁢ 𝐿 2 2 4 ⁢ 𝑟 𝑡 + 1 3 2 ⁢ 𝑀 ≥ ‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 2 2 ⁢ 𝑀 ⁢ 𝑟 𝑡 + 1 + 2 ⁢ 𝑀 32 ⁢ 𝑟 𝑡 + 1 3 ≥ ( 1 3 ⁢ 𝑀 ) 1 2 ⁢ ‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 3 2 ,

where for the last inequality, we use α/r + βr³/3 ≥ (4/3) β^{1/4} α^{3/4}.
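The inequality α/r + βr³/3 ≥ (4/3) β^{1/4} α^{3/4} (with equality at r = (α/β)^{1/4}) is easy to verify numerically; a small sketch:

```python
import random

# Grid check of a/r + b*r**3/3 >= (4/3) * b**0.25 * a**0.75 for a, b, r > 0.
random.seed(0)
for _ in range(10_000):
    a, b, r = (random.uniform(1e-2, 10.0) for _ in range(3))
    lhs = a / r + b * r ** 3 / 3.0
    rhs = (4.0 / 3.0) * b ** 0.25 * a ** 0.75
    assert lhs >= rhs - 1e-12
```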

We use the following lemma (Agafonov et al., 2023, Lemma 7).

Lemma E.4.

Let h(x) be a convex function, x_0 ∈ ℝⁿ, θ_i ≥ 0 for i = 2,…,p+1, and

x̄ = argmin_{x∈ℝⁿ} { h̄(x) := h(x) + ∑_{i=2}^{p+1} θ_i d_i(x − x_0) },

where d_i(x) := (1/i) ‖x‖^i is a power-prox function. Then, for all x ∈ ℝⁿ,

h̄(x) ≥ h̄(x̄) + ∑_{i=2}^{p+1} (1/2)^{i−2} θ_i d_i(x − x̄).

We will also use the next technical lemma (Nesterov, 2008; Ghadimi et al., 2017) on the Fenchel conjugate of the p-th power of the norm.

Lemma E.5.

Let g(z) = (θ/p) ‖z‖^p for p ≥ 2 and let g* be its conjugate function, i.e., g*(v) = sup_z {⟨v, z⟩ − g(z)}. Then, we have

g*(v) = ((p−1)/p) (‖v‖^p / θ)^{1/(p−1)}.

Moreover, for any v, z ∈ ℝⁿ, we have g(z) + g*(v) − ⟨z, v⟩ ≥ 0.
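The second claim is the Fenchel–Young inequality for this pair; both claims can be checked numerically (a sketch with 2-dimensional vectors):

```python
import math
import random

def g(z, theta, p):
    """g(z) = theta/p * ||z||^p."""
    return theta / p * math.hypot(*z) ** p

def g_star(v, theta, p):
    """Conjugate from Lemma E.5: (p-1)/p * (||v||^p / theta)^(1/(p-1))."""
    return (p - 1) / p * (math.hypot(*v) ** p / theta) ** (1.0 / (p - 1))

random.seed(1)
for p in (2, 3, 4):
    for _ in range(2000):
        z = [random.gauss(0, 1) for _ in range(2)]
        v = [random.gauss(0, 1) for _ in range(2)]
        theta = random.uniform(0.1, 5.0)
        inner = sum(a * b for a, b in zip(z, v))
        # Fenchel-Young: g(z) + g*(v) - <z, v> >= 0
        assert g(z, theta, p) + g_star(v, theta, p) - inner >= -1e-10
```

For p = 2 and θ = 1 the conjugate reduces to the familiar g*(v) = ‖v‖²/2.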

Finally, the last step is the next lemma, which proves that f(x_t)/A_{t−1} ≤ min_x ψ̄_t(x) = ψ̄_t*.

Lemma E.6.

Let { 𝑥 𝑡 , 𝑦 𝑡 } 𝑡 ≥ 1 be generated by Algorithm 1. Then

ψ_t* := min_x ψ_t(x) ≥ f(x_t)/A_{t−1} − err_t^v − err_t^x − err_t^τ,

(34)

where

err_t^v = ∑_{j=0}^{t−1} (2/(A_j δ̄_j)) ‖g(v_j) − ∇f(v_j)‖²,

(35)

err_t^τ = ∑_{j=0}^{t−1} τ²/(A_j δ̄_j),

(36)

and

err_t^x = ∑_{j=0}^{t−1} (α_j²/(2 A_j² λ_j)) ‖g(x_{j+1}) − ∇f(x_{j+1})‖² + ∑_{j=0}^{t−1} (α_j/A_j) ⟨g(x_{j+1}) − ∇f(x_{j+1}), y_j − x_{j+1}⟩.

(37)

Proof.

We prove the lemma by induction. For t = 0, we define A_{−1} such that 1/A_{−1} = 0. Then f(x_0)/A_{−1} = 0 and ψ_0* = 0, hence ψ_0* ≥ f(x_0)/A_{−1}. Let us assume that (34) is true for t and show that (34) holds for t + 1. By definition,

ψ_t(x) = ((λ_t + κ̄_2^t)/2) ‖x − x_0‖² + (κ̄_3^t/3) ‖x − x_0‖³ + ∑_{j=0}^{t−1} (α_j/A_j) l(x, x_{j+1}).

Next, we apply Lemma E.4 with the following choice of parameters: h(x) = ∑_{j=0}^{t−1} (α_j/A_j) l(x, x_{j+1}), θ_2 = λ_t + κ̄_2^t, and θ_3 = κ̄_3^t. By (23), y_t = argmin_{x∈ℝ^d} h̄(x), and we have

ψ_t(y_{t+1}) ≥ ψ_t* + ((κ̄_2^t + λ_t)/2) ‖y_{t+1} − y_t‖² + (κ̄_3^t/6) ‖y_{t+1} − y_t‖³

≥^{(34)} f(x_t)/A_{t−1} + ((κ̄_2^t + λ_t)/2) ‖y_{t+1} − y_t‖² + (κ̄_3^t/6) ‖y_{t+1} − y_t‖³ − err_t^v − err_t^x − err_t^τ,

where the last inequality follows from the assumption of the lemma.

By the definition of 𝜓 𝑡 + 1 ⁢ ( 𝑥 ) , the above inequality, and convexity of 𝑓 , we obtain

𝜓 𝑡 + 1 ⁢ ( 𝑦 𝑡 + 1 )

𝜓 𝑡 ⁢ ( 𝑦 𝑡 + 1 ) + 𝜆 𝑡 + 1 − 𝜆 𝑡 2 ⁢ ‖ 𝑦 𝑡 + 1 − 𝑥 0 ‖ 2 + 𝜅 ¯ 2 𝑡 + 1 − 𝜅 ¯ 2 𝑡 2 ⁢ ‖ 𝑦 𝑡 + 1 − 𝑥 0 ‖ 2

+ 𝜅 ¯ 3 𝑡 + 1 − 𝜅 ¯ 3 𝑡 3 ⁢ ‖ 𝑦 𝑡 + 1 − 𝑥 0 ‖ 3 + 𝛼 𝑡 𝐴 𝑡 ⁢ 𝑙 ⁢ ( 𝑦 𝑡 + 1 , 𝑥 𝑡 + 1 )

≥ 𝑓 ⁢ ( 𝑥 𝑡 ) 𝐴 𝑡 − 1 + 𝜅 ¯ 2 𝑡 + 𝜆 𝑡 2 ⁢ ‖ 𝑦 𝑡 + 1 − 𝑦 𝑡 ‖ 2 + 𝜅 ¯ 3 𝑡 6 ⁢ ‖ 𝑦 𝑡 + 1 − 𝑦 𝑡 ‖ 3 + 𝛼 𝑡 𝐴 𝑡 ⁢ 𝑙 ⁢ ( 𝑦 𝑡 + 1 , 𝑥 𝑡 + 1 ) − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 𝑣 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 𝑥 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 𝜏

𝑓 ⁢ ( 𝑥 𝑡 ) 𝐴 𝑡 − 1 + 𝜅 ¯ 2 𝑡 + 𝜆 𝑡 2 ⁢ ‖ 𝑦 𝑡 + 1 − 𝑦 𝑡 ‖ 2 + 𝜅 ¯ 3 𝑡 6 ⁢ ‖ 𝑦 𝑡 + 1 − 𝑦 𝑡 ‖ 3 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 𝑣 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 𝑥 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 𝜏

𝛼 𝑡 𝐴 𝑡 ⁢ ( 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) + ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑦 𝑡 + 1 − 𝑥 𝑡 + 1 ⟩ + ⟨ 𝑔 ⁢ ( 𝑥 𝑡 + 1 ) − ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑦 𝑡 + 1 − 𝑦 𝑡 ⟩ + ⟨ 𝑔 ⁢ ( 𝑥 𝑡 + 1 ) − ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑦 𝑡 − 𝑥 𝑡 + 1 ⟩ )

≥ 𝑓 ⁢ ( 𝑥 𝑡 ) 𝐴 𝑡 − 1 + 𝜅 ¯ 2 𝑡 2 ⁢ ‖ 𝑦 𝑡 + 1 − 𝑦 𝑡 ‖ 2 + 𝜅 ¯ 3 𝑡 6 ⁢ ‖ 𝑦 𝑡 + 1 − 𝑦 𝑡 ‖ 3 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 𝑣 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 𝑥 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 𝜏

+ 𝛼 𝑡 𝐴 𝑡 ⁢ ( 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) + ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑦 𝑡 + 1 − 𝑥 𝑡 + 1 ⟩ ) − 𝛼 𝑡 2 2 ⁢ 𝐴 𝑡 2 ⁢ 𝜆 𝑡 ⁢ ‖ 𝑔 ⁢ ( 𝑥 𝑡 + 1 ) − ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 2 + 𝛼 𝑡 𝐴 𝑡 ⁢ ⟨ 𝑔 ⁢ ( 𝑥 𝑡 + 1 ) − ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑦 𝑡 − 𝑥 𝑡 + 1 ⟩

𝑓 ⁢ ( 𝑥 𝑡 ) 𝐴 𝑡 − 1 + 𝜅 ¯ 2 𝑡 2 ⁢ ‖ 𝑦 𝑡 + 1 − 𝑦 𝑡 ‖ 2 + 𝜅 ¯ 3 𝑡 6 ⁢ ‖ 𝑦 𝑡 + 1 − 𝑦 𝑡 ‖ 3 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 𝑣 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 + 1 𝑥 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 𝜏 + 𝛼 𝑡 𝐴 𝑡 ⁢ ( 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) + ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑦 𝑡 + 1 − 𝑥 𝑡 + 1 ⟩ )

≥ 𝜅 ¯ 2 𝑡 2 ⁢ ‖ 𝑦 𝑡 + 1 − 𝑦 𝑡 ‖ 2 + 𝜅 ¯ 3 𝑡 6 ⁢ ‖ 𝑦 𝑡 + 1 − 𝑦 𝑡 ‖ 3 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 𝑣 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 + 1 𝑥 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 𝜏

+ (1/A_{t−1}) (f(x_{t+1}) + ⟨∇f(x_{t+1}), x_t − x_{t+1}⟩) + (α_t/A_t) (f(x_{t+1}) + ⟨∇f(x_{t+1}), y_{t+1} − x_{t+1}⟩).

Next, we consider the sum of two linear models from the last inequality:

1 𝐴 𝑡 − 1 ⁢ ( 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) + ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑥 𝑡 − 𝑥 𝑡 + 1 ⟩ ) + 𝛼 𝑡 𝐴 𝑡 ⁢ ( 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) + ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑦 𝑡 + 1 − 𝑥 𝑡 + 1 ⟩ )

( ⁢ 7 ⁢ ) 1 − 𝛼 𝑡 𝐴 𝑡 ⁢ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) + 1 − 𝛼 𝑡 𝐴 𝑡 ⁢ ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑥 𝑡 − 𝑥 𝑡 + 1 ⟩ + 𝛼 𝑡 𝐴 𝑡 ⁢ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) + 𝛼 𝑡 𝐴 𝑡 ⁢ ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑦 𝑡 + 1 − 𝑥 𝑡 + 1 ⟩

𝑓 ⁢ ( 𝑥 𝑡 + 1 ) 𝐴 𝑡 + 1 − 𝛼 𝑡 𝐴 𝑡 ⁢ ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑣 𝑡 − 𝛼 𝑡 ⁢ 𝑦 𝑡 1 − 𝛼 𝑡 − 𝑥 𝑡 + 1 ⟩ + 𝛼 𝑡 𝐴 𝑡 ⁢ ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑦 𝑡 + 1 − 𝑥 𝑡 + 1 ⟩

𝑓 ⁢ ( 𝑥 𝑡 + 1 ) 𝐴 𝑡 + 1 𝐴 𝑡 ⁢ ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑣 𝑡 − 𝑥 𝑡 + 1 ⟩ + 𝛼 𝑡 𝐴 𝑡 ⁢ ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑦 𝑡 + 1 − 𝑦 𝑡 ⟩ .

As a result, by (23), we get

ψ_{t+1}* = ψ_{t+1}(y_{t+1}) ≥ f(x_{t+1})/A_t + (1/A_t) ⟨∇f(x_{t+1}), v_t − x_{t+1}⟩ + (κ̄_2^t/2) ‖y_{t+1} − y_t‖²

+ (κ̄_3^t/6) ‖y_{t+1} − y_t‖³ + (α_t/A_t) ⟨∇f(x_{t+1}), y_{t+1} − y_t⟩ − err_t^v − err_{t+1}^x − err_t^τ.

(38)

To complete the induction step, we show that the sum of all terms on the RHS, except f(x_{t+1})/A_t and the error terms, is non-negative.

Lemma E.3 provides the lower bound for ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑣 𝑡 − 𝑥 𝑡 + 1 ⟩ . Let us consider the case when the minimum in the RHS of (29) is attained at the first term. By Lemma E.5 with the following choice of the parameters

z = y_t − y_{t+1}, v = (α_t/A_t) ∇f(x_{t+1}), θ = κ̄_i^t,

we have

𝜅 ¯ 2 𝑡 2 ⁢ ‖ 𝑦 𝑡 − 𝑦 𝑡 + 1 ‖ 2 + 𝛼 𝑡 𝐴 𝑡 ⁢ ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑦 𝑡 + 1 − 𝑦 𝑡 ⟩ ≥ − 1 2 ⁢ ( ‖ 𝛼 𝑡 𝐴 𝑡 ⁢ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 2 𝜅 ¯ 2 𝑡 ) .

(39)

Hence,

1 𝐴 𝑡 ⁢ ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑣 𝑡 − 𝑥 𝑡 + 1 ⟩ + 𝜅 ¯ 2 𝑡 2 ⁢ ‖ 𝑦 𝑡 + 1 − 𝑦 𝑡 ‖ 2 + 𝛼 𝑡 𝐴 𝑡 ⁢ ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑦 𝑡 + 1 − 𝑦 𝑡 ⟩

≥ ( ⁢ 39 ⁢ ) 1 𝐴 𝑡 ⁢ ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑣 𝑡 − 𝑥 𝑡 + 1 ⟩ − ‖ 𝛼 𝑡 𝐴 𝑡 ⁢ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 𝑖 2 ⁢ 𝜅 ¯ 2 𝑡

≥ ( ⁢ 29 ⁢ ) 1 𝐴 𝑡 ⁢ ‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 2 ⁢ ( 1 4 ⁢ 𝛿 ¯ 𝑡 ) − ‖ 𝛼 𝑡 𝐴 𝑡 ⁢ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 2 2 ⁢ 𝜅 ¯ 2 𝑡 + 1 − 2 𝐴 𝑡 ⁢ 𝛿 ¯ 𝑡 ⁢ ‖ 𝑔 ⁢ ( 𝑣 𝑡 ) − ∇ 𝑓 ⁢ ( 𝑣 𝑡 ) ‖ 2 − 𝜏 2 𝐴 𝑡 ⁢ 𝛿 ¯ 𝑡

≥ − 2 𝐴 𝑡 ⁢ 𝛿 ¯ 𝑡 ⁢ ‖ 𝑔 ⁢ ( 𝑣 𝑡 ) − ∇ 𝑓 ⁢ ( 𝑣 𝑡 ) ‖ 2 − 𝜏 2 𝐴 𝑡 ⁢ 𝛿 ¯ 𝑡 ,

where the last inequality holds by our choice of the parameters

𝜅 ¯ 2 𝑡 + 1 ≥ 2 ⁢ 𝛿 ¯ 𝑡 ⁢ 𝛼 𝑡 2 𝐴 𝑡 .

(40)

Next, we consider the case when the minimum in the RHS of (29) is achieved at the second term. Again, by Lemma E.5 with the same choice of z, v and with θ = κ̄_3^t/2, we have

𝜅 ¯ 3 𝑡 6 ⁢ ‖ 𝑦 𝑡 − 𝑦 𝑡 + 1 ‖ 3 + 𝛼 𝑡 𝐴 𝑡 ⁢ ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑦 𝑡 + 1 − 𝑦 𝑡 ⟩ ≥ − 2 3 ⁢ ( 2 ⁢ ‖ 𝛼 𝑡 𝐴 𝑡 ⁢ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 3 𝜅 ¯ 3 𝑡 ) 1 2 .

(41)

Hence, we get

1 𝐴 𝑡 ⁢ ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑣 𝑡 − 𝑥 𝑡 + 1 ⟩ + 𝜅 ¯ 3 𝑡 6 ⁢ ‖ 𝑦 𝑡 + 1 − 𝑦 𝑡 ‖ 3 + 𝛼 𝑡 𝐴 𝑡 ⁢ ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑦 𝑡 + 1 − 𝑦 𝑡 ⟩

≥ ( ⁢ 41 ⁢ ) 1 𝐴 𝑡 ⁢ ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑣 𝑡 − 𝑥 𝑡 + 1 ⟩ − 2 3 ⁢ ( 2 ⁢ ‖ 𝛼 𝑡 𝐴 𝑡 ⁢ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 3 𝜅 ¯ 3 𝑡 ) 1 2

≥ ( ⁢ 29 ⁢ ) 1 𝐴 𝑡 ⁢ ‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 3 2 ⁢ ( 1 3 ⁢ 𝑀 ) 1 2 − 2 3 ⁢ ( 2 ⁢ ‖ 𝛼 𝑡 𝐴 𝑡 ⁢ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 3 𝜅 ¯ 3 𝑡 ) 1 2 − 2 𝐴 𝑡 ⁢ 𝛿 ¯ 𝑡 ⁢ ‖ 𝑔 ⁢ ( 𝑣 𝑡 ) − ∇ 𝑓 ⁢ ( 𝑣 𝑡 ) ‖ 2 − 𝜏 2 𝐴 𝑡 ⁢ 𝛿 ¯ 𝑡

≥ − 2 𝐴 𝑡 ⁢ 𝛿 ¯ 𝑡 ⁢ ‖ 𝑔 ⁢ ( 𝑣 𝑡 ) − ∇ 𝑓 ⁢ ( 𝑣 𝑡 ) ‖ 2 − 𝜏 2 𝐴 𝑡 ⁢ 𝛿 ¯ 𝑡 ,

where the last inequality holds by our choice of 𝜅 ¯ 3 𝑡 :

𝜅 ¯ 3 𝑡 ≥ 8 ⁢ 𝑀 3 ⁢ 𝛼 𝑡 3 𝐴 𝑡 .

(42)

As a result, we unite both cases and get

𝜓 𝑡 + 1 ∗ ≥ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) 𝐴 𝑡 + 1 𝐴 𝑡 ⁢ ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑣 𝑡 − 𝑥 𝑡 + 1 ⟩ + 𝜅 ¯ 2 𝑡 2 ⁢ ‖ 𝑦 𝑡 + 1 − 𝑦 𝑡 ‖ 2

+ 𝜅 ¯ 3 𝑡 6 ⁢ ‖ 𝑦 𝑡 + 1 − 𝑦 𝑡 ‖ 3 + 𝛼 𝑡 𝐴 𝑡 ⁢ ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑦 𝑡 + 1 − 𝑦 𝑡 ⟩ − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 𝑣 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 + 1 𝑥 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 𝜏

≥ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) 𝐴 𝑡 − 2 𝐴 𝑡 ⁢ 𝛿 ¯ 𝑡 ⁢ ‖ 𝑔 ⁢ ( 𝑣 𝑡 ) − ∇ 𝑓 ⁢ ( 𝑣 𝑡 ) ‖ 2 − 𝜏 2 𝐴 𝑡 ⁢ 𝛿 ¯ 𝑡 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 𝑣 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 + 1 𝑥

𝑓 ⁢ ( 𝑥 𝑡 + 1 ) 𝐴 𝑡 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 + 1 𝑣 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 + 1 𝑥 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 + 1 𝜏

To sum up, by our choice of the parameters κ̄_i^t, i = 2, 3, we prove the induction step.

Finally, we are in a position to prove the convergence rate theorem. The proof uses the following technical assumption. Let R be such that

‖ 𝑥 0 − 𝑥 ∗ ‖ ≤ 𝑅 .

(43) Theorem E.7.

Let Assumption 1.1 hold and M ≥ 4L₂. Let Assumption 2.3 hold. After T ≥ 1 iterations with parameters defined in (10) and σ₂ = δ₂ = max_{t=1,…,T} δ_t^{v_{t−1}, x_t}, we get the following bound for the objective residual:

𝔼 ⁢ [ 𝑓 ⁢ ( 𝑥 𝑇 ) − 𝑓 ⁢ ( 𝑥 ∗ ) ]

≤ 10 ⁢ 𝜏 ⁢ 𝑅 𝑇 + 2 + 19 ⁢ 𝜎 1 ⁢ 𝑅 𝑇 + 1 + 18 ⁢ 𝛿 2 ⁢ 𝑅 2 ( 𝑇 + 3 ) 2 + 20 ⁢ 𝐿 2 ⁢ 𝑅 3 ( 𝑇 + 1 ) 3 .

Proof.

First of all, let us bound 𝐴 𝑇 .

α_t = 3/(t+3), t ≥ 1.

(44)

Then, we have

A_T = ∏_{t=1}^{T} (1 − α_t) = ∏_{t=1}^{T} t/(t+3) = T! · 3! / (T+3)! = 6 / ((T+1)(T+2)(T+3)).

(45)

And from Agafonov et al. (2023) we get

∑_{t=0}^{T} A_T α_t^i / A_t ≤ 3^i / (T+3)^{i−1}.

(46)
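Both (45) and (46) are easy to verify numerically (a sketch; here α_0 = 1, so that the t = 0 term matches the convention 1/A_{−1} = 0, and we test a few representative exponents i):

```python
def A(T):
    """A_T = prod_{t=1}^T t/(t+3)."""
    out = 1.0
    for t in range(1, T + 1):
        out *= t / (t + 3)
    return out

alpha = lambda t: 3.0 / (t + 3)  # equals 1 at t = 0

for T in range(60):
    # (45): closed form of the product
    assert abs(A(T) - 6.0 / ((T + 1) * (T + 2) * (T + 3))) < 1e-12
    # (46): sum_{t=0}^T A_T alpha_t^i / A_t <= 3^i / (T+3)^(i-1)
    for i in (1.5, 2.0, 3.0):
        s = sum(A(T) * alpha(t) ** i / A(t) for t in range(T + 1))
        assert s <= 3.0 ** i / (T + 3) ** (i - 1) + 1e-12
```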

From Lemmas E.6 and E.1, we obtain that, for all 𝑡 ≥ 1 ,

𝑓 ⁢ ( 𝑥 𝑡 + 1 ) 𝐴 𝑡 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 + 1 𝑣 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 + 1 𝑥 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 + 1 𝜏 ≤ ( ⁢ 34 ⁢ ) 𝜓 𝑡 + 1 ∗ ≤ 𝜓 𝑡 + 1 ⁢ ( 𝑥 ∗ )

≤ ( ⁢ 21 ⁢ ) 𝑓 ⁢ ( 𝑥 ∗ ) 𝐴 𝑡 + 𝜅 ¯ 2 𝑡 + 𝜆 𝑡 2 ⁢ ‖ 𝑥 ∗ − 𝑥 0 ‖ 2 + 𝜅 ¯ 3 𝑡 6 ⁢ ‖ 𝑥 ∗ − 𝑥 0 ‖ 3 + 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 + 1 𝑢 ⁢ 𝑝 .

Next, we take expectations:

𝔼 ⁢ [ 𝑓 ⁢ ( 𝑥 𝑇 + 1 ) − 𝑓 ⁢ ( 𝑥 ∗ ) ] ≤ 𝐴 𝑇 ⁢ 𝔼 ⁢ [ 𝜅 ¯ 2 𝑇 + 𝜆 𝑇 2 ⁢ 𝑅 2 + 𝜅 ¯ 3 𝑇 6 ⁢ 𝑅 3 + 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑇 + 1 𝑢 ⁢ 𝑝 + 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑇 + 1 𝑣 + 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑇 + 1 𝑥 + 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑇 + 1 𝜏 ] .

(47)

Let us choose

δ̄_t = δ₂ + (τ + σ)(t+3)^{3/2} / R,

(48)

λ_t = σ (t+3)^{5/2} / R.

(49)

Then, we bound terms in (47) step by step. We start from deterministic terms.

𝐴 𝑇 ⁢ 𝔼 ⁢ [ 𝜅 ¯ 2 𝑇 + 𝜆 𝑇 2 ⁢ 𝑅 2 + 𝜅 ¯ 3 𝑇 6 ⁢ 𝑅 3 ]

( ⁢ 10 ⁢ ) 18 ⁢ 𝛿 ¯ 𝑇 ⁢ 𝑅 2 ( 𝑇 + 3 ) 2 + 6 ⁢ 𝜆 𝑇 ⁢ 𝑅 2 ( 𝑇 + 3 ) 3 + 72 ⁢ 𝑀 ⁢ 𝑅 3 ( 𝑇 + 3 ) 3

( ⁢ 48 ⁢ ) , ( ⁢ 49 ⁢ ) 18 ⁢ 𝜏 ⁢ 𝑅 ( 𝑇 + 3 ) 1 / 2 + 18 ⁢ 𝜎 ⁢ 𝑅 ( 𝑇 + 3 ) 1 / 2 + 18 ⁢ 𝛿 2 ⁢ 𝑅 2 ( 𝑇 + 3 ) 2 + 6 ⁢ 𝜎 ⁢ 𝑅 ( 𝑇 + 3 ) 1 / 2 + 72 ⁢ 𝑀 ⁢ 𝑅 3 ( 𝑇 + 3 ) 3 .

Now, we bound expectation of all error terms. Firstly, we consider 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑇 + 1 𝑢 ⁢ 𝑝

𝐴 𝑇 ⁢ 𝔼 ⁢ [ 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑇 + 1 𝑢 ⁢ 𝑝 ]

( ⁢ 22 ⁢ ) 𝐴 𝑇 ⁢ 𝔼 ⁢ [ ∑ 𝑗

0 𝑇 𝛼 𝑗 𝐴 𝑗 ⁢ ⟨ 𝑔 ⁢ ( 𝑥 𝑗 + 1 ) − ∇ 𝑓 ⁢ ( 𝑥 𝑗 + 1 ) , 𝑥 ∗ − 𝑥 𝑗 + 1 ⟩ ]

0 .

Next, we bound 𝐴 𝑇 ⁢ 𝔼 ⁢ [ 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑇 + 1 𝑣 ]

𝐴 𝑇 ⁢ 𝔼 ⁢ [ 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑇 + 1 𝑣 ]

( ⁢ 35 ⁢ ) 𝐴 𝑇 ⁢ 𝔼 ⁢ [ ∑ 𝑗

0 𝑇 2 𝐴 𝑗 ⁢ 𝛿 ¯ 𝑗 ⁢ ‖ 𝑔 ⁢ ( 𝑣 𝑗 ) − ∇ 𝑓 ⁢ ( 𝑣 𝑗 ) ‖ 2 ] ≤ 2 ⁢ 𝜎 2 ⁢ ∑ 𝑗

0 𝑇 𝐴 𝑇 𝐴 𝑗 ⁢ 𝛿 ¯ 𝑗

( ⁢ 44 ⁢ ) , ( ⁢ 48 ⁢ ) 2 ⁢ 𝜎 ⁢ 𝑅 3 3 / 2 ⁢ ∑ 𝑗

0 𝑇 𝐴 𝑇 ⁢ 𝛼 𝑗 3 / 2 𝐴 𝑗 ≤ ( ⁢ 46 ⁢ ) 2 ⁢ 𝜎 ⁢ 𝑅 ( 𝑇 + 3 ) 1 / 2

Now we calculate 𝐴 𝑇 ⁢ 𝔼 ⁢ [ 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑇 + 1 𝑥 ]

𝐴 𝑇 ⁢ 𝔼 ⁢ [ 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑇 + 1 𝑥 ]

( ⁢ 37 ⁢ ) 𝐴 𝑇 ⁢ 𝔼 ⁢ [ ∑ 𝑗

0 𝑇 𝛼 𝑗 2 2 ⁢ 𝐴 𝑗 2 ⁢ 𝜆 𝑗 ⁢ ‖ 𝑔 ⁢ ( 𝑥 𝑗 + 1 ) − ∇ 𝑓 ⁢ ( 𝑥 𝑗 + 1 ) ‖ 2 + ∑ 𝑗

0 𝑇 𝛼 𝑗 𝐴 𝑗 ⁢ ⟨ 𝑔 ⁢ ( 𝑥 𝑗 + 1 ) − ∇ 𝑓 ⁢ ( 𝑥 𝑗 + 1 ) , 𝑦 𝑗 − 𝑥 𝑗 + 1 ⟩ ]

( ⁢ 49 ⁢ ) 𝜎 ⁢ 𝑅 2 ⁢ ∑ 𝑗

0 𝑇 𝐴 𝑇 ⁢ 𝛼 𝑗 2 𝐴 𝑗 2 ⁢ ( 𝑗 + 3 ) 5 / 2

( ⁢ 44 ⁢ ) 𝜎 ⁢ 𝑅 3 5 / 2 ⁢ 2 ⁢ ∑ 𝑗

0 𝑇 𝐴 𝑇 ⁢ 𝛼 𝑗 9 / 2 𝐴 𝑗 2 ≤ 𝜎 ⁢ 𝑅 3 5 / 2 ⁢ 2 ⁢ 𝐴 𝑇 ⁢ ∑ 𝑗

0 𝑇 𝐴 𝑇 ⁢ 𝛼 𝑗 9 / 2 𝐴 𝑗

≤ ( ⁢ 46 ⁢ ) 3 ⁢ 𝜎 ⁢ 𝑅 2 ⁢ 𝐴 𝑇 ⁢ ( 𝑇 + 3 ) 7 / 2 ≤ ( ⁢ 45 ⁢ ) 𝜎 ⁢ 𝑅 4 ⁢ ( 𝑇 + 3 ) 1 / 2 .

Finally, we consider 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑇 + 1 𝜏

𝐴 𝑇 ⁢ 𝔼 ⁢ [ 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑇 + 1 𝜏 ]

( ⁢ 60 ⁢ ) ∑ 𝑗

0 𝑇 𝐴 𝑇 ⁢ 𝜏 2 𝐴 𝑗 ⁢ 𝛿 ¯ 𝑗

( ⁢ 48 ⁢ ) 𝜏 ⁢ 𝑅 3 3 / 2 ⁢ ∑ 𝐴 𝑇 ⁢ 𝛼 𝑡 3 / 2 𝐴 𝑗 ≤ ( ⁢ 46 ⁢ ) 𝜏 ⁢ 𝑅 ( 𝑇 + 3 ) 1 / 2 .

Combining all the bounds from above, we obtain the convergence rate

𝔼 ⁢ [ 𝑓 ⁢ ( 𝑥 𝑇 + 1 ) − 𝑓 ⁢ ( 𝑥 ∗ ) ] ≤ 19 ⁢ 𝜏 ⁢ 𝑅 ( 𝑇 + 3 ) 1 / 2 + 27 ⁢ 𝜎 ⁢ 𝑅 ( 𝑇 + 3 ) 1 / 2 + 18 ⁢ 𝛿 2 ⁢ 𝑅 2 ( 𝑇 + 3 ) 2 + 72 ⁢ 𝑀 ⁢ 𝑅 3 ( 𝑇 + 3 ) 3 .

The case of stochastic Hessian (Theorem 3.2 under Assumption 2.3) can be obtained in the same way by taking expectation in Lemma E.3.

Appendix F Proof of Theorem 5.6

Algorithm 2 is the tensor generalization of Algorithm 1, so we follow the same steps as in Appendix E. First, we provide the tensor counterparts of Lemmas E.1, E.2, and E.3. Their proofs directly follow the proofs of the corresponding lemmas for the second-order case.

Lemma F.1.

For convex function 𝑓 ⁢ ( 𝑥 ) and 𝜓 𝑡 ⁢ ( 𝑥 ) , we have

ψ_t(x*) ≤ f(x*)/A_{t−1} + ((κ̄_2^t + λ_t)/2) ‖x* − x_0‖² + ∑_{i=3}^{p+1} (κ̄_i^t/i!) ‖x* − x_0‖^i + err_t^{up},

(50)

where

err_t^{up} = ∑_{j=0}^{t−1} (α_j/A_j) ⟨g(x_{j+1}) − ∇f(x_{j+1}), x* − x_{j+1}⟩.

(51)

Lemma F.2.

For a function f(x) with L_p-Lipschitz-continuous p-th derivative and G_i(x_t) a δ_i^t-inexact i-th order derivative, for v_t, x_{t+1} ∈ ℝ^d we have

‖∇φ_{v_t,p}(x_{t+1}) − ∇f(x_{t+1})‖ ≤ ∑_{i=2}^{p} (δ_i^t/(i−1)!) ‖x_{t+1} − v_t‖^{i−1} + (L_p/p!) ‖x_{t+1} − v_t‖^{p} + ‖g(v_t) − ∇f(v_t)‖,

(52)

where we denote δ_2^t = δ_t^{v_t, x_{t+1}} to simplify the notation.

Lemma F.3.

Let {x_t, v_t}_{t≥1} be generated by Algorithm 2. Then, for any δ̄_t ≥ 4δ_2^t, η_i ≥ 4, M ≥ 2pL_p and x_{t+1} = S_{M,δ̄_t}(v_t), the following holds:

τ²/δ̄_t + (2/δ̄_t) ‖g(v_t) − ∇f(v_t)‖² + ⟨∇f(x_{t+1}), v_t − x_{t+1}⟩

≥ min{ (2/p) ‖∇f(x_{t+1})‖^{(p+1)/p} ((p−1)!/M)^{1/p}; (1/(4δ̄_t)) ‖∇f(x_{t+1})‖²; min_{i=3,…,p} (2/p) ‖∇f(x_{t+1})‖^{i/(i−1)} ((i−1)!/(η_i δ_i^t))^{1/(i−1)} }.

(53)

Proof.

For simplicity, we denote r_{t+1} = ‖x_{t+1} − v_t‖ and

ζ_{t+1} = δ̄_t + ∑_{i=3}^{p} (η_i δ_i^t/(i−1)!) ‖x_{t+1} − v_t‖^{i−2} + (M/(p−1)!) ‖x_{t+1} − v_t‖^{p−1}.

(54)

By Definition 5.5 for x_{t+1} = S_p^{M,δ̄_t,τ}(v_t):

τ² ≥ ‖∇φ_{v_t,p}(x_{t+1}) + δ̄_t (x_{t+1} − v_t) + ∑_{i=3}^{p} (η_i δ_i^t/(i−1)!) ‖x_{t+1} − v_t‖^{i−2} (x_{t+1} − v_t) + (M/(p−1)!) ‖x_{t+1} − v_t‖^{p−1} (x_{t+1} − v_t)‖²

=^{(54)} ‖∇φ_{v_t,p}(x_{t+1}) + ζ_{t+1} (x_{t+1} − v_t)‖².

(55)

From the inexact solution of the subproblem, we get

‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) + 𝜁 𝑡 + 1 ⁢ ( 𝑥 𝑡 + 1 − 𝑣 𝑡 ) ‖ 2

≤ ‖ ∇ 𝜙 𝑣 𝑡 , 𝑝 ⁢ ( 𝑥 𝑡 + 1 ) − ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) − ∇ 𝜙 𝑣 𝑡 , 𝑝 ⁢ ( 𝑥 𝑡 + 1 ) − 𝜁 𝑡 + 1 ⁢ ( 𝑥 𝑡 + 1 − 𝑣 𝑡 ) ‖ 2

≤ 2 ⁢ ‖ ∇ 𝜙 𝑣 𝑡 , 𝑝 ⁢ ( 𝑥 𝑡 + 1 ) − ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 2 + 2 ⁢ ‖ ∇ 𝜙 𝑣 𝑡 , 𝑝 ⁢ ( 𝑥 𝑡 + 1 ) + 𝜁 𝑡 + 1 ⁢ ( 𝑥 𝑡 + 1 − 𝑣 𝑡 ) ‖ 2

≤ 2 ⁢ ‖ ∇ 𝜙 𝑣 𝑡 , 𝑝 ⁢ ( 𝑥 𝑡 + 1 ) − ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 2 + 2 ⁢ 𝜏 2 .

(56)

Next, from the previous inequality and Lemma F.2, we get

4 ⁢ ‖ 𝑔 ⁢ ( 𝑣 𝑡 ) − ∇ 𝑓 ⁢ ( 𝑣 𝑡 ) ‖ 2 + 4 ⁢ ( ∑ 𝑖

2 𝑝 𝛿 𝑖 𝑡 ( 𝑖 − 1 ) ! ⁢ 𝑟 𝑡 + 1 𝑖 + 𝐿 𝑝 𝑝 ! ⁢ 𝑟 𝑡 + 1 𝑝 + 1 ) 2 + 2 ⁢ 𝜏 2

≥ 2 ⁢ ‖ ∇ 𝜙 𝑣 𝑡 , 𝑝 ⁢ ( 𝑥 𝑡 + 1 − 𝑣 𝑡 ) ‖ 2 + 2 ⁢ 𝜏 2

≥ ‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) + 𝜁 𝑡 + 1 ⁢ ( 𝑥 𝑡 + 1 − 𝑣 𝑡 ) ‖ 2

≥ ‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 2 + 2 ⁢ 𝜁 𝑡 + 1 ⁢ ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑥 𝑡 + 1 − 𝑣 𝑡 ⟩ + 𝜁 𝑡 + 1 2 ⁢ ‖ 𝑥 𝑡 + 1 − 𝑣 𝑡 ‖ 2 .

(57)

Hence,

𝜏 2 𝜁 𝑡 + 1 + 2 𝜁 𝑡 + 1 ⁢ ‖ 𝑔 ⁢ ( 𝑣 𝑡 ) − ∇ 𝑓 ⁢ ( 𝑣 𝑡 ) ‖ 2 + ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑣 𝑡 − 𝑥 𝑡 + 1 ⟩ ≥

1 2 ⁢ 𝜁 𝑡 + 1 ⁢ ‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 2 + 1 2 ⁢ 𝜁 𝑡 + 1 ⁢ ( 𝜁 𝑡 + 1 2 − 4 ⁢ ( ∑ 𝑖

2 𝑝 𝛿 𝑖 𝑡 ( 𝑖 − 1 ) ! ⁢ 𝑟 𝑡 + 1 𝑖 − 2 + 𝐿 𝑝 𝑝 ! ⁢ 𝑟 𝑡 + 1 𝑝 − 1 ) 2 ) ⁢ 𝑟 𝑡 + 1 2 ,

and finally by using definition of 𝜁 𝑡 + 1 , we get

𝜏 2 𝛿 ¯ 𝑡 + 2 𝛿 ¯ 𝑡 ⁢ ‖ 𝑔 ⁢ ( 𝑣 𝑡 ) − ∇ 𝑓 ⁢ ( 𝑣 𝑡 ) ‖ 2 + ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑣 𝑡 − 𝑥 𝑡 + 1 ⟩ ≥

1 2 ⁢ 𝜁 𝑡 + 1 ⁢ ‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 2 + 1 2 ⁢ 𝜁 𝑡 + 1 ⁢ ( 𝜁 𝑡 + 1 2 − 4 ⁢ ( ∑ 𝑖

2 𝑝 𝛿 𝑖 𝑡 ( 𝑖 − 1 ) ! ⁢ 𝑟 𝑡 + 1 𝑖 − 2 + 𝐿 𝑝 𝑝 ! ⁢ 𝑟 𝑡 + 1 𝑝 − 1 ) 2 ) ⁢ 𝑟 𝑡 + 1 2 .

Next, we consider p cases, depending on which term dominates in ζ_{t+1}.

If δ̄_t ≥ ∑_{i=3}^{p} (η_i δ_i^t/(i−1)!) r_{t+1}^{i−2} + (M/(p−1)!) r_{t+1}^{p−1}, then we get the following bound

𝜏 2 𝛿 ¯ 𝑡 + 2 𝛿 ¯ 𝑡 ⁢ ‖ 𝑔 ⁢ ( 𝑣 𝑡 ) − ∇ 𝑓 ⁢ ( 𝑣 𝑡 ) ‖ 2 + ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑣 𝑡 − 𝑥 𝑡 + 1 ⟩

≥ 1 2 ⁢ 𝜁 𝑡 + 1 ⁢ ‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 2 + 1 2 ⁢ 𝜁 𝑡 + 1 ⁢ ( 𝜁 𝑡 + 1 2 − 4 ⁢ ( ∑ 𝑖

2 𝑝 𝛿 𝑖 𝑡 ( 𝑖 − 1 ) ! ⁢ 𝑟 𝑡 + 1 𝑖 − 2 + 𝐿 𝑝 𝑝 ! ⁢ 𝑟 𝑡 + 1 𝑝 − 1 ) 2 ) ⁢ 𝑟 𝑡 + 1 2

≥ 1 2 ⁢ 𝑝 ⁢ 𝛿 ¯ 𝑡 ⁢ ‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 2 .

If (η_i δ_i^t/(i−1)!) r_{t+1}^{i−2} ≥ δ̄_t + ∑_{j=3, j≠i}^{p} (η_j δ_j^t/(j−1)!) r_{t+1}^{j−2} + (M/(p−1)!) r_{t+1}^{p−1} for some i, 3 ≤ i ≤ p, we get

𝜏 2 𝛿 ¯ 𝑡 + 2 𝛿 ¯ 𝑡 ⁢ ‖ 𝑔 ⁢ ( 𝑣 𝑡 ) − ∇ 𝑓 ⁢ ( 𝑣 𝑡 ) ‖ 2 + ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑣 𝑡 − 𝑥 𝑡 + 1 ⟩

≥ 1 2 ⁢ 𝜁 𝑡 + 1 ⁢ ‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 2 + 1 2 ⁢ 𝜁 𝑡 + 1 ⁢ ( 𝜁 𝑡 + 1 2 − 4 ⁢ ( ∑ 𝑖

2 𝑝 𝛿 𝑖 𝑡 ( 𝑖 − 1 ) ! ⁢ 𝑟 𝑡 + 1 𝑖 + 𝐿 𝑝 𝑝 ! ⁢ 𝑟 𝑡 + 1 𝑝 + 1 ) 2 ) ⁢ 𝑟 𝑡 + 1 2

≥ ( 𝑖 − 1 ) ! ⁢ ‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 2 𝑝 ⁢ 𝜂 𝑖 ⁢ 𝛿 𝑖 𝑡 ⁢ 𝑟 𝑡 + 1 𝑖 − 1 + ( 𝛿 ¯ 𝑡 − 2 ⁢ 𝛿 2 𝑡 + ∑ 𝑖

3 𝑝 𝜂 𝑖 ⁢ 𝛿 𝑖 𝑡 − 2 ⁢ 𝛿 𝑖 𝑡 ( 𝑖 − 1 ) ! ⁢ 𝑟 𝑡 + 1 𝑖 − 2 + 𝑝 ⁢ 𝑀 − 2 ⁢ 𝐿 𝑝 𝑝 ! ⁢ 𝑟 𝑡 + 1 𝑝 − 1 )

× ( 𝛿 ¯ 𝑡 + 2 ⁢ 𝛿 2 𝑡 + ∑ 𝑖

3 𝑝 𝜂 𝑖 ⁢ 𝛿 𝑖 𝑡 + 2 ⁢ 𝛿 𝑖 𝑡 ( 𝑖 − 1 ) ! ⁢ 𝑟 𝑡 + 1 𝑖 − 2 + 𝑝 ⁢ 𝑀 + 2 ⁢ 𝐿 𝑝 𝑝 ! ⁢ 𝑟 𝑡 + 1 𝑝 − 1 ) ⁢ 𝑟 𝑡 + 1 2 𝜁 𝑡 + 1

≥ ( 𝑖 − 1 ) ! ⁢ ‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 2 𝑝 ⁢ 𝜂 𝑖 ⁢ 𝛿 𝑖 𝑡 ⁢ 𝑟 𝑡 + 1 𝑖 − 2 + 𝜂 𝑖 ⁢ 𝛿 𝑖 𝑡 − 2 ⁢ 𝛿 𝑖 𝑡 ( 𝑖 − 1 ) ! ⁢ 𝜂 𝑖 ⁢ 𝛿 𝑖 𝑡 + 2 ⁢ 𝛿 𝑖 𝑡 ( 𝑖 − 1 ) ! ⁢ 𝑟 𝑡 + 1 𝑖 ⁢ ( 𝑖 − 1 ) ! 𝑝 ⁢ 𝜂 𝑖 ⁢ 𝛿 𝑖 𝑡

≥ ( 𝑖 − 1 ) ! ⁢ ‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 2 𝑝 ⁢ 𝜂 𝑖 ⁢ 𝛿 𝑖 𝑡 ⁢ 𝑟 𝑡 + 1 𝑖 − 2 + 𝜂 𝑖 ⁢ 𝛿 𝑖 𝑡 ( 𝑖 − 1 ) ! ⁢ 𝑝 ⁢ 𝑟 𝑡 + 1 𝑖 .

≥ 2 𝑖 ⁢ ( ( 𝑖 − 2 ) ⁢ ( 𝑖 − 1 ) ! ⁢ ‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 2 𝑝 ⁢ 𝜂 𝑖 ⁢ 𝛿 𝑖 𝑡 ) 𝑖 2 ⁢ ( 𝑖 − 1 ) ⁢ ( 𝑖 ⁢ 𝜂 𝑖 ⁢ 𝛿 𝑖 𝑡 ( 𝑖 − 1 ) ! ⁢ 𝑝 ) 𝑖 − 2 2 ⁢ ( 𝑖 − 1 )

≥ 2 𝑝 ⁢ ‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 𝑖 𝑖 − 1 ⁢ ( ( 𝑖 − 1 ) ! 𝜂 𝑖 ⁢ 𝛿 𝑖 𝑡 ) 1 𝑖 − 1 ,

where we used the inequality α/r^{i−2} + β r^i/i ≥ (2/i) α^{i/(2(i−1))} β^{(i−2)/(2(i−1))}.
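The Young-type inequality α/r^{i−2} + βr^i/i ≥ (2/i) α^{i/(2(i−1))} β^{(i−2)/(2(i−1))}, for α, β, r > 0, can be checked numerically (a sketch; note the exponents on the RHS sum to one, so both sides scale identically in (α, β) after r is scaled out):

```python
import random

# Grid check of a/r**(i-2) + b*r**i/i >= (2/i) * a**(i/(2(i-1))) * b**((i-2)/(2(i-1))).
random.seed(2)
for i in (3, 4, 5, 8):
    for _ in range(5000):
        a, b, r = (random.uniform(1e-2, 10.0) for _ in range(3))
        lhs = a / r ** (i - 2) + b * r ** i / i
        rhs = (2.0 / i) * a ** (i / (2 * (i - 1))) * b ** ((i - 2) / (2 * (i - 1)))
        assert lhs >= rhs - 1e-12
```

For i = 3 this is a weaker form of the bound α/r + βr³/3 ≥ (4/3) β^{1/4} α^{3/4} used in the second-order case.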

If (M/(p−1)!) r_{t+1}^{p−1} ≥ δ̄_t + ∑_{i=3}^{p} (η_i δ_i^t/(i−1)!) r_{t+1}^{i−2}, then similarly to the previous case, we get

𝜏 2 𝛿 ¯ 𝑡 + 2 𝛿 ¯ 𝑡 ⁢ ‖ 𝑔 ⁢ ( 𝑣 𝑡 ) − ∇ 𝑓 ⁢ ( 𝑣 𝑡 ) ‖ 2 + ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑣 𝑡 − 𝑥 𝑡 + 1 ⟩

≥ 1 2 ⁢ 𝜁 𝑡 + 1 ⁢ ‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 2 + 1 2 ⁢ 𝜁 𝑡 + 1 ⁢ ( 𝜁 𝑡 + 1 2 − 4 ⁢ ( ∑ 𝑖

2 𝑝 𝛿 𝑖 𝑡 ( 𝑖 − 1 ) ! ⁢ 𝑟 𝑡 + 1 𝑖 + 𝐿 𝑝 𝑝 ! ⁢ 𝑟 𝑡 + 1 𝑝 + 1 ) 2 ) ⁢ 𝑟 𝑡 + 1 2

( 𝑝 − 1 ) ! 𝑝 ⁢ 𝑀 ⁢ 𝑟 𝑡 + 1 𝑝 − 1 ⁢ ‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 2 + 𝑝 ⁢ 𝑀 2 ⁢ 𝑝 ⁢ ( 𝑝 ! ) ⁢ 𝑟 𝑡 + 1 𝑝 + 1

≥ 2 𝑝 ⁢ ‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 𝑝 + 1 𝑝 ⁢ ( ( 𝑝 − 1 ) ! 𝑀 ) 1 𝑝 .

Next, we will need the technical Lemmas E.4 and E.5.

Lemma F.4.

Let { 𝑥 𝑡 , 𝑦 𝑡 } 𝑡 ≥ 1 be generated by Algorithm 2. Then

ψ_t* := min_x ψ_t(x) ≥ f(x_t)/A_{t−1} − err_t^v − err_t^x − err_t^τ,

(58)

where

err_t^v = ∑_{j=0}^{t−1} (2/(A_j δ̄_j)) ‖g(v_j) − ∇f(v_j)‖²,

(59)

err_t^τ = ∑_{j=0}^{t−1} τ²/(A_j δ̄_j),

(60)

and

err_t^x = ∑_{j=0}^{t−1} (α_j²/(2 A_j² λ_j)) ‖g(x_{j+1}) − ∇f(x_{j+1})‖² + ∑_{j=0}^{t−1} (α_j/A_j) ⟨g(x_{j+1}) − ∇f(x_{j+1}), y_j − x_{j+1}⟩.

(61)

Proof.

We prove the lemma by induction. For t = 0, we define A_{−1} such that 1/A_{−1} = 0. Then f(x_0)/A_{−1} = 0 and ψ_0* = 0, hence ψ_0* ≥ f(x_0)/A_{−1}. Let us assume that (58) is true for t and show that (58) holds for t + 1.

Following the steps of Lemma E.6 for 𝑝 -th order case, we get

ψ_{t+1}* = ψ_{t+1}(y_{t+1}) ≥ f(x_{t+1})/A_t + (1/A_t) ⟨∇f(x_{t+1}), v_t − x_{t+1}⟩

+ ∑_{i=2}^{p+1} ((1/2)^{i−2} κ̄_i^t/(i−1)!) d_i(y_{t+1} − y_t) + (α_t/A_t) ⟨∇f(x_{t+1}), y_{t+1} − y_t⟩ − err_t^v − err_{t+1}^x − err_t^τ.

(62)

To complete the induction step, we show that the sum of all terms on the RHS, except f(x_{t+1})/A_t and the error terms, is non-negative.

Lemma F.3 provides the lower bound for ⟨∇f(x_{t+1}), v_t − x_{t+1}⟩. Let us consider the case when the minimum in the RHS of (53) is attained at the term with a particular i = 3,…,p. By Lemma E.5 with the following choice of parameters:

z = y_t − y_{t+1}, v = (α_t/A_t) ∇f(x_{t+1}), θ = (1/2)^{i−2} κ̄_i^t/(i−1)!,

we have

1 𝑖 ⁢ ( 1 2 ) 𝑖 − 2 ⁢ 𝜅 ¯ 𝑖 𝑡 ( 𝑖 − 1 ) ! ⁢ ‖ 𝑦 𝑡 − 𝑦 𝑡 + 1 ‖ 𝑖 + 𝛼 𝑡 𝐴 𝑡 ⁢ ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑦 𝑡 + 1 − 𝑦 𝑡 ⟩ ≥ − 𝑖 − 1 𝑖 ⁢ ( ‖ 𝛼 𝑡 𝐴 𝑡 ⁢ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 𝑖 ( 1 2 ) 𝑖 − 2 ⁢ 𝜅 ¯ 𝑖 𝑡 ( 𝑖 − 1 ) ! ) 1 𝑖 − 1 .

(63)

Hence,

𝑓 ⁢ ( 𝑥 𝑡 + 1 ) 𝐴 𝑡 + 1 𝐴 𝑡 ⁢ ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑣 𝑡 − 𝑥 𝑡 + 1 ⟩ + ( 1 2 ) 𝑖 − 2 ⁢ 𝜅 ¯ 𝑖 𝑡 ( 𝑖 − 1 ) ! ⁢ 𝑑 𝑖 ⁢ ( 𝑦 𝑡 + 1 − 𝑦 𝑡 ) + 𝛼 𝑡 𝐴 𝑡 ⁢ ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑦 𝑡 + 1 − 𝑦 𝑡 ⟩

≥^{(63)} f(x_{t+1})/A_t + (1/A_t) ⟨∇f(x_{t+1}), v_t − x_{t+1}⟩ − ((i−1)/i) (‖(α_t/A_t) ∇f(x_{t+1})‖^i / ((1/2)^{i−2} κ̄_i^t/(i−1)!))^{1/(i−1)}

≥ ( ⁢ 53 ⁢ ) 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) 𝐴 𝑡 + 2 𝑝 ⁢ ‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 𝑖 𝑖 − 1 ⁢ ( ( 𝑖 − 1 ) ! 𝜂 𝑖 ⁢ 𝛿 𝑖 𝑡 ) 1 𝑖 − 1 − 𝑖 − 1 𝑖 ⁢ ( ‖ 𝛼 𝑡 𝐴 𝑡 ⁢ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 𝑖 ( 1 2 ) 𝑖 − 2 ⁢ 𝜅 ¯ 𝑖 𝑡 ( 𝑖 − 1 ) ! ) 1 𝑖 − 1

≥ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) 𝐴 𝑡 ,

where the last inequality holds by our choice of the parameters

𝜅 ¯ 𝑖 𝑡 ≥ 𝑝 𝑖 − 1 2 ⁢ 𝛼 𝑡 𝑖 𝐴 𝑡 ⁢ 𝜂 𝑖 ⁢ 𝛿 𝑖 𝑡 .

(64)

Let us consider the case when the minimum in the RHS of (53) is achieved on the second term. Following similar steps, we get

𝜅 ¯ 2 𝑡 ≥ 2 ⁢ 𝑝 ⁢ 𝛼 𝑡 2 𝐴 𝑡 ⁢ 𝛿 ¯ 𝑡 .

(65)

Next, we consider the case when the minimum in the RHS of (53) is achieved on the first term. Again, by Lemma E.5 with the same choice of 𝑧 , 𝑣 and with 𝜃

( 1 2 ) 𝑝 − 1 ⁢ 𝜅 ¯ 𝑝 + 1 𝑡 − 1 ( 𝑝 − 1 ) ! , we have

1 𝑝 ⁢ ( 1 2 ) 𝑝 − 1 ⁢ 𝜅 ¯ 𝑝 + 1 𝑡 − 1 𝑝 ! ⁢ ‖ 𝑦 𝑡 − 𝑦 𝑡 + 1 ‖ 𝑝 + 1 + 𝛼 𝑡 𝐴 𝑡 ⁢ ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑦 𝑡 + 1 − 𝑦 𝑡 ⟩ ≥ − 𝑝 𝑝 + 1 ⁢ ( ‖ 𝛼 𝑡 𝐴 𝑡 ⁢ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 𝑝 + 1 ( 1 2 ) 𝑝 − 1 ⁢ 𝜅 ¯ 𝑝 + 1 𝑡 ( 𝑝 − 1 ) ! ) 1 𝑝 .

(66)

Hence, we get

𝑓 ⁢ ( 𝑥 𝑡 + 1 ) 𝐴 𝑡 + 1 𝐴 𝑡 ⁢ ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑣 𝑡 − 𝑥 𝑡 + 1 ⟩ + 1 2 𝑝 − 1 ⁢ 𝜅 ¯ 𝑝 + 1 𝑡 − 1 𝑝 ! ⁢ 𝑑 𝑝 + 1 ⁢ ( 𝑦 𝑡 + 1 − 𝑦 𝑡 ) + 𝛼 𝑡 𝐴 𝑡 ⁢ ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑦 𝑡 + 1 − 𝑦 𝑡 ⟩

≥ ( ⁢ 66 ⁢ ) 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) 𝐴 𝑡 + 1 𝐴 𝑡 ⁢ ⟨ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) , 𝑣 𝑡 − 𝑥 𝑡 + 1 ⟩ − 𝑝 𝑝 + 1 ⁢ ( ‖ 𝛼 𝑡 𝐴 𝑡 ⁢ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 𝑝 + 1 ( 1 2 ) 𝑝 − 1 ⁢ 𝜅 ¯ 𝑝 + 1 𝑡 𝑝 ! ) 1 𝑝

≥ ( ⁢ 53 ⁢ ) 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) 𝐴 𝑡 + 1 𝐴 𝑡 ⁢ 2 𝑝 ⁢ ‖ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 𝑝 + 1 𝑝 ⁢ ( ( 𝑝 − 1 ) ! 𝑀 ) 1 𝑝 − 𝑝 𝑝 + 1 ⁢ ( ‖ 𝛼 𝑡 𝐴 𝑡 ⁢ ∇ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) ‖ 𝑝 + 1 ( 1 2 ) 𝑝 − 1 ⁢ 𝜅 ¯ 𝑝 + 1 𝑡 𝑝 ! ) 1 𝑝

≥ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) 𝐴 𝑡 ,

where the last inequality holds by our choice of 𝜅 ¯ 𝑝 + 1 𝑡 :

𝜅 ¯ 𝑝 + 1 𝑡 ≥ ( 𝑝 + 1 ) 𝑝 + 1 2 ⁢ 𝛼 𝑡 𝑝 + 1 𝐴 𝑡 ⁢ 𝑀 .

(67)

To sum up, by our choice of the parameters κ̄_i^t, i = 2,…,p+1, we obtain

𝜓 𝑡 + 1 ∗ ≥ 𝑓 ⁢ ( 𝑥 𝑡 + 1 ) 𝐴 𝑡 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 + 1 𝑣 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 + 1 𝑥 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 + 1 𝜏 .

Theorem F.5.

Let Assumption 5.1 hold and M ≥ 2pL_p. Let Assumption 5.4 hold. After T ≥ 1 iterations with parameters defined in (17) and σ₂ = δ₂ = max_{t=1,…,T} δ_t^{v_{t−1}, x_t}, we get the following bound for the objective residual:

E[f(x_T) − f(x*)] ≤ 2(p+1)³ τR/(T+p+1)^{1/2} + 3(p+1)³ σR/(T+p+1)^{1/2} + ∑_{i=3}^{p} 2(p+1)^{2i−1} δ_i R^i / (i! (T+p+1)^i) + ((p+1)^{2(p+1)}/(p+1)!) M R^{p+1}/(T+p+1)^{p+1}.

Proof.

First of all, let us bound 𝐴 𝑇 .

α_t = (p+1)/(t+p+1), t ≥ 1.

(68)

Then, we have

1 ( 𝑇 + 𝑝 + 1 ) 𝑝 + 1 ≤ 𝐴 𝑇 ≤ ( 𝑝 + 1 ) ! ( 𝑇 + 1 ) 𝑝 + 1 .

(69)
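The two-sided bound (69) follows from A_T = ∏_{t=1}^T t/(t+p+1) = T!(p+1)!/(T+p+1)!; a numerical sketch:

```python
import math

def A(T, p):
    """A_T = prod_{t=1}^T t/(t+p+1), with alpha_t = (p+1)/(t+p+1)."""
    out = 1.0
    for t in range(1, T + 1):
        out *= t / (t + p + 1)
    return out

for p in (2, 3, 4):
    for T in range(40):
        a = A(T, p)
        # factorial closed form
        assert abs(a - math.factorial(T) * math.factorial(p + 1)
                   / math.factorial(T + p + 1)) < 1e-12
        # (69): 1/(T+p+1)^(p+1) <= A_T <= (p+1)!/(T+1)^(p+1)
        assert 1.0 / (T + p + 1) ** (p + 1) <= a + 1e-15
        assert a <= math.factorial(p + 1) / (T + 1) ** (p + 1) + 1e-15
```

Note that for p = 2 this recovers the closed form (45) of the second-order case.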

And from Agafonov et al. (2023) we get

∑_{t=0}^{T} A_T α_t^i / A_t ≤ (p+1)^i / (T+p+1)^{i−1}.

(70)

From Lemmas F.4 and F.1, we obtain that, for all 𝑡 ≥ 1 ,

𝑓 ⁢ ( 𝑥 𝑡 + 1 ) 𝐴 𝑡 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 + 1 𝑣 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 + 1 𝑥 − 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 + 1 𝜏 ≤ ( ⁢ 58 ⁢ ) 𝜓 𝑡 + 1 ∗ ≤ 𝜓 𝑡 + 1 ⁢ ( 𝑥 ∗ )

≤ ( ⁢ 50 ⁢ ) 𝑓 ⁢ ( 𝑥 ∗ ) 𝐴 𝑡 + 𝜅 ¯ 2 𝑡 + 𝜆 𝑡 2 ⁢ ‖ 𝑥 ∗ − 𝑥 0 ‖ 2 + 𝜅 ¯ 3 𝑡 6 ⁢ ‖ 𝑥 ∗ − 𝑥 0 ‖ 3 + 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑡 + 1 𝑢 ⁢ 𝑝 .

Next, we apply expectation

𝔼 ⁢ [ 𝑓 ⁢ ( 𝑥 𝑇 + 1 ) − 𝑓 ⁢ ( 𝑥 ∗ ) ] ≤ 𝐴 𝑇 ⁢ 𝔼 ⁢ [ 𝜅 ¯ 2 𝑇 + 𝜆 𝑇 2 ⁢ 𝑅 2 + ∑ 𝑖

3 𝑝 + 1 𝜅 ¯ 𝑖 𝑇 𝑖 ! ⁢ 𝑅 𝑖 + 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑇 + 1 𝑢 ⁢ 𝑝 + 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑇 + 1 𝑣 + 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑇 + 1 𝑥 + 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑇 + 1 𝜏 ] .

(71)

Let us choose

δ̄_t = δ₂ + (τ + σ)(t+p+1)^{3/2} / R,

(72)

λ_t = σ (t+p+1)^{p+1/2} / R.

(73)

Then, we bound terms in (71) step by step.

We start from deterministic terms.

𝐴 𝑇 ⁢ 𝔼 ⁢ [ 𝜅 ¯ 2 𝑇 + 𝜆 𝑇 2 ⁢ 𝑅 2 + ∑ 𝑖

3 𝑝 𝜅 ¯ 𝑖 𝑇 𝑖 ! ⁢ 𝑅 𝑖 + 𝜅 ¯ 𝑝 + 1 𝑇 ( 𝑝 + 1 ) ! ⁢ 𝑅 𝑝 + 1 ]

( ⁢ 17 ⁢ ) 𝑝 ⁢ 𝛼 𝑇 2 ⁢ 𝛿 ¯ 𝑇 ⁢ 𝑅 2 + ∑ 𝑖

3 𝑝 4 ⁢ 𝑝 𝑖 − 1 𝑖 ! ⁢ 2 ⁢ 𝛼 𝑇 𝑖 ⁢ 𝛿 𝑖 ⁢ 𝑅 𝑖 + ( 𝑝 + 1 ) 𝑝 + 1 ( 𝑝 + 1 ) ! ⁢ 𝛼 𝑇 𝑝 + 1 ⁢ 𝑀 ⁢ 𝑅 𝑝 + 1

( ⁢ 72 ⁢ ) , ( ⁢ 73 ⁢ ) ( 𝑝 + 1 ) 3 ⁢ ( 𝜏 + 𝜎 ) ⁢ 𝑅 ( 𝑇 + 𝑝 + 1 ) 1 / 2 + ( 𝑝 + 1 ) 3 ⁢ 𝛿 2 ⁢ 𝑅 2 ( 𝑇 + 𝑝 + 1 ) 2 + ∑ 𝑖

3 𝑝 2 ⁢ ( 𝑝 + 1 ) 2 ⁢ 𝑖 − 1 ⁢ 𝛿 𝑖 ⁢ 𝑅 𝑖 𝑖 ! ⁢ ( 𝑇 + 𝑝 + 1 ) 𝑖 + ( 𝑝 + 1 ) 2 ⁢ ( 𝑝 + 1 ) ( 𝑝 + 1 ) ! ⁢ 𝑀 ⁢ 𝑅 𝑝 + 1 ( 𝑇 + 𝑝 + 1 ) 𝑝 + 1 .

Now, we bound expectation of all error terms. Firstly, we consider 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑇 + 1 𝑢 ⁢ 𝑝

𝐴 𝑇 ⁢ 𝔼 ⁢ [ 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑇 + 1 𝑢 ⁢ 𝑝 ]

𝐴 𝑇 ⁢ 𝔼 ⁢ [ ∑ 𝑗

0 𝑇 𝛼 𝑗 𝐴 𝑗 ⁢ ⟨ 𝑔 ⁢ ( 𝑥 𝑗 + 1 ) − ∇ 𝑓 ⁢ ( 𝑥 𝑗 + 1 ) , 𝑥 ∗ − 𝑥 𝑗 + 1 ⟩ ]

0 .

Next, we bound 𝐴 𝑇 ⁢ 𝔼 ⁢ [ 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑇 + 1 𝑣 ]

𝐴 𝑇 ⁢ 𝔼 ⁢ [ 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑇 + 1 𝑣 ]

𝐴 𝑇 ⁢ 𝔼 ⁢ [ ∑ 𝑗

0 𝑇 2 𝐴 𝑗 ⁢ 𝛿 ¯ 𝑗 ⁢ ‖ 𝑔 ⁢ ( 𝑣 𝑗 ) − ∇ 𝑓 ⁢ ( 𝑣 𝑗 ) ‖ 2 ] ≤ 2 ⁢ 𝜎 2 ⁢ ∑ 𝑗

0 𝑇 𝐴 𝑇 𝐴 𝑗 ⁢ 𝛿 ¯ 𝑗

( ⁢ 68 ⁢ ) , ( ⁢ 72 ⁢ ) 2 ⁢ 𝜎 ⁢ 𝑅 ( 𝑝 + 1 ) 3 / 2 ⁢ ∑ 𝑗

0 𝑇 𝐴 𝑇 ⁢ 𝛼 𝑗 3 / 2 𝐴 𝑗 ≤ ( ⁢ 70 ⁢ ) 2 ⁢ 𝜎 ⁢ 𝑅 ( 𝑇 + 𝑝 + 1 ) 1 / 2

Now we calculate 𝐴 𝑇 ⁢ 𝔼 ⁢ [ 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑇 + 1 𝑥 ]

𝐴 𝑇 ⁢ 𝔼 ⁢ [ 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑇 + 1 𝑥 ]

( ⁢ 37 ⁢ ) 𝐴 𝑇 ⁢ 𝔼 ⁢ [ ∑ 𝑗

0 𝑇 𝛼 𝑗 2 2 ⁢ 𝐴 𝑗 2 ⁢ 𝜆 𝑗 ⁢ ‖ 𝑔 ⁢ ( 𝑥 𝑗 + 1 ) − ∇ 𝑓 ⁢ ( 𝑥 𝑗 + 1 ) ‖ 2 + ∑ 𝑗

0 𝑇 𝛼 𝑗 𝐴 𝑗 ⁢ ⟨ 𝑔 ⁢ ( 𝑥 𝑗 + 1 ) − ∇ 𝑓 ⁢ ( 𝑥 𝑗 + 1 ) , 𝑦 𝑗 − 𝑥 𝑗 + 1 ⟩ ]

( ⁢ 73 ⁢ ) 𝜎 ⁢ 𝑅 2 ⁢ ∑ 𝑗

0 𝑇 𝐴 𝑇 ⁢ 𝛼 𝑗 2 𝐴 𝑗 2 ⁢ ( 𝑗 + 3 ) 𝑝 + 1 / 2

( ⁢ 44 ⁢ ) 𝜎 ⁢ 𝑅 ( 𝑝 + 1 ) 5 / 2 ⁢ 2 ⁢ ∑ 𝑗

0 𝑇 𝐴 𝑇 ⁢ 𝛼 𝑗 𝑝 + 5 / 2 𝐴 𝑗 2 ≤ 𝜎 ⁢ 𝑅 ( 𝑝 + 1 ) 5 / 2 ⁢ 2 ⁢ 𝐴 𝑇 ⁢ ∑ 𝑗

0 𝑇 𝐴 𝑇 ⁢ 𝛼 𝑗 𝑝 + 5 / 2 𝐴 𝑗

≤ ( ⁢ 46 ⁢ ) ( 𝑝 + 1 ) 2 ⁢ 𝜎 ⁢ 𝑅 2 ⁢ 𝐴 𝑇 ⁢ ( 𝑇 + 𝑝 + 1 ) 𝑝 + 3 / 2 ≤ ( ⁢ 45 ⁢ ) ( 𝑝 + 1 ) 2 ⁢ 𝜎 ⁢ 𝑅 2 ⁢ ( 𝑇 + 𝑝 + 1 ) 1 / 2 .

Finally, we consider 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑇 + 1 𝜏

𝐴 𝑇 ⁢ 𝔼 ⁢ [ 𝑒 ⁢ 𝑟 ⁢ 𝑟 𝑇 + 1 𝜏 ]

∑ 𝑗

0 𝑇 𝐴 𝑇 ⁢ 𝜏 2 𝐴 𝑗 ⁢ 𝛿 ¯ 𝑗

( ⁢ 72 ⁢ ) 𝜏 ⁢ 𝑅 ( 𝑝 + 1 ) 3 / 2 ⁢ ∑ 𝐴 𝑇 ⁢ 𝛼 𝑡 3 / 2 𝐴 𝑗 ≤ ( ⁢ 46 ⁢ ) 𝜏 ⁢ 𝑅 ( 𝑇 + 𝑝 + 1 ) 1 / 2 .

Combining all the bounds from above, we obtain the convergence rate

𝔼 ⁢ [ 𝑓 ⁢ ( 𝑥 𝑇 + 1 ) − 𝑓 ⁢ ( 𝑥 ∗ ) ] ≤ 2 ⁢ ( 𝑝 + 1 ) 3 ⁢ 𝜏 ⁢ 𝑅 ( 𝑇 + 𝑝 + 1 ) 1 / 2 + 3 ⁢ ( 𝑝 + 1 ) 3 ⁢ 𝜎 ⁢ 𝑅 ( 𝑇 + 𝑝 + 1 ) 1 / 2 + ∑ 𝑖

3 𝑝 2 ⁢ ( 𝑝 + 1 ) 2 ⁢ 𝑖 − 1 ⁢ 𝛿 𝑖 ⁢ 𝑅 𝑖 𝑖 ! ⁢ ( 𝑇 + 𝑝 + 1 ) 𝑖 + ( 𝑝 + 1 ) 2 ⁢ ( 𝑝 + 1 ) ( 𝑝 + 1 ) ! ⁢ 𝑀 ⁢ 𝑅 𝑝 + 1 ( 𝑇 + 𝑝 + 1 ) 𝑝 + 1 .

Again, the case of a stochastic Hessian (Theorem 5.6 under Assumption 5.3) is obtained in the same way by taking the expectation in Lemma F.3.

Appendix G Restarts for strongly convex functions

Theorem 6.2.

Let Assumption 6.1 hold and let the parameters of Algorithm 1 be chosen as in (17). Let $\{z_s\}_{s\ge 0}$ be generated by Algorithm 3 and let $R_0$ be such that $\|z_0 - x^*\| \le R_0$. Then for any $s \ge 0$ we have

$$\mathbb{E}\,\|z_s - x^*\|^{2} \le 4^{-s} R_0^{2}, \qquad (74)$$

$$\mathbb{E}\,f(z_s) - f(x^*) \le 2^{-2s-1}\,\mu R_0^{2}. \qquad (75)$$

Moreover, the total number of iterations to reach the desired accuracy $\varepsilon$, i.e. $f(z_s) - f(x^*) \le \varepsilon$ in expectation, is

$$O\Bigg(\frac{(\tau+\sigma_1)^{2}}{\mu\varepsilon} + \bigg(\Big(\frac{\sigma_2}{\mu}\Big)^{1/2}+1\bigg)\log\frac{f(z_0)-f(x^*)}{\varepsilon} + \sum_{i=3}^{p}\Big(\frac{\sigma_i R^{i-2}}{\mu}\Big)^{\frac{1}{i}} + \Big(\frac{L_p R^{p-1}}{\mu}\Big)^{\frac{1}{p+1}}\Bigg). \qquad (76)$$

Proof.

We prove by induction that $\mathbb{E}\,\|z_s - x^*\|^{2} \le 4^{-s}\|z_0 - x^*\|^{2} = 4^{-s} R_0^{2}$. For $s = 0$ this obviously holds. By strong convexity and the convergence of Algorithm 2,

$$\mathbb{E}\big[f(x_T) - f(x^*)\big] \le \frac{C_\tau\,\tau R}{T^{1/2}} + \frac{C_1\,\sigma_1 R}{T^{1/2}} + \sum_{i=2}^{p}\frac{C_i\,\sigma_i R^{i}}{T^{i}} + \frac{C_{p+1}\,L_p R^{p+1}}{T^{p+1}},$$

we get

$$\mathbb{E}\big[\|z_{s+1}-x^*\|^{2} \,\big|\, z_s,\dots,z_0\big] \le \frac{2}{\mu}\,\mathbb{E}\big[f(x_{t_{s+1}})-f(x^*) \,\big|\, z_s,\dots,z_0\big]$$

$$\le \frac{2}{\mu}\Bigg(\frac{C_\tau\,\tau\,\|z_s-x^*\|}{t_{s+1}^{1/2}} + \frac{C_1\,\sigma_1\,\|z_s-x^*\|}{t_{s+1}^{1/2}} + \sum_{i=2}^{p}\frac{C_i\,\delta_i\,\|z_s-x^*\|^{i}}{t_{s+1}^{i}} + \frac{C_{p+1}\,L_p\,\|z_s-x^*\|^{p+1}}{t_{s+1}^{p+1}}\Bigg)$$

$$\le \frac{2}{\mu}\Bigg(\frac{\mu\,r_s\,\|z_s-x^*\|}{8(p+2)} + \frac{\mu\,r_s\,\|z_s-x^*\|}{8(p+2)} + \sum_{i=2}^{p}\frac{\mu\,\|z_s-x^*\|^{2}}{8(p+2)} + \frac{\mu\,\|z_s-x^*\|^{2}}{8(p+2)}\Bigg)$$

$$\le \frac{p}{4(p+2)}\,\|z_s-x^*\|^{2} + \frac{2}{4(p+2)}\,r_s\,\|z_s-x^*\|,$$

where the second inequality applies the convergence bound with $R = \|z_s - x^*\|$ and $T = t_{s+1}$, and the third uses the choice of the per-stage iteration number $t_{s+1}$.

Then, taking the full expectation, we obtain

$$\mathbb{E}\,\|z_{s+1}-x^*\|^{2} \le \frac{p}{4(p+2)}\,\mathbb{E}\,\|z_s-x^*\|^{2} + \frac{2}{4(p+2)}\,r_s\,\mathbb{E}\,\|z_s-x^*\| \le \frac{1}{4^{s+1}}\,R_0^{2}.$$

Thus, by induction, we obtain that (74), (75) hold.
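For completeness, the induction step can be checked explicitly: by Jensen's inequality, $\mathbb{E}\,\|z_s - x^*\| \le (\mathbb{E}\,\|z_s - x^*\|^{2})^{1/2} \le 2^{-s} R_0$, and assuming the radius schedule sets $r_s = 2^{-s-1} R_0$ (our reading of the parameter choice (17)),

```latex
\mathbb{E}\,\|z_{s+1}-x^*\|^{2}
  \le \frac{p}{4(p+2)}\,4^{-s}R_0^{2}
    + \frac{1}{2(p+2)}\cdot 2^{-s-1}R_0 \cdot 2^{-s}R_0
  = \frac{p+1}{4(p+2)}\,4^{-s}R_0^{2}
  \le 4^{-(s+1)}R_0^{2},
```

so the contraction factor $(p+1)/(4(p+2)) \le 1/4$ closes the induction.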

Next, we provide the corresponding complexity bounds. From the induction bounds above, we obtain that after $S$ restarts the total number of iterations of Algorithm 2 satisfies

$$\mathbb{E}\,T = \mathbb{E}\sum_{s=1}^{S} t_s \le \sum_{s=1}^{S}\max\Bigg\{1,\ \Big(\frac{8(p+2)C_\tau\tau}{\mu\,r_{s-1}}\Big)^{2},\ \Big(\frac{8(p+2)C_1\sigma_1}{\mu\,r_{s-1}}\Big)^{2},\ \max_{i=2,\dots,p}\Big(\frac{8(p+2)C_i\delta_i R_{s-1}^{i-2}}{\mu}\Big)^{\frac{1}{i}},\ \Big(\frac{8(p+2)C_{p+1}L_p R_{s-1}^{p-1}}{\mu}\Big)^{\frac{1}{p+1}}\Bigg\}$$

$$\le \sum_{s=1}^{S}\Bigg(1 + \Big(\frac{8(p+2)C_\tau\tau}{\mu\,r_{s-1}}\Big)^{2} + \Big(\frac{8(p+2)C_1\sigma_1}{\mu\,r_{s-1}}\Big)^{2} + \sum_{i=2}^{p}\Big(\frac{8(p+2)C_i\delta_i R_{s-1}^{i-2}}{\mu}\Big)^{\frac{1}{i}} + \Big(\frac{8(p+2)C_{p+1}L_p R_{s-1}^{p-1}}{\mu}\Big)^{\frac{1}{p+1}}\Bigg)$$

$$\le S + \Big(\frac{8(p+2)C_\tau\tau}{\mu R_0}\Big)^{2} 4^{S} + \Big(\frac{8(p+2)C_1\sigma_1}{\mu R_0}\Big)^{2} 4^{S} + \Big(\frac{8(p+2)C_2\delta_2}{\mu}\Big)^{\frac{1}{2}} S + \sum_{i=3}^{p}\Big(\frac{8(p+2)C_i\delta_i R_0^{i-2}}{\mu}\Big)^{\frac{1}{i}} + \Big(\frac{8(p+2)C_{p+1}L_p R_0^{p-1}}{\mu}\Big)^{\frac{1}{p+1}}$$

$$\le \log\frac{f(z_0)-f(x^*)}{\varepsilon} + \Big(\frac{8(p+2)C_\tau\tau}{\mu R_0}\Big)^{2}\frac{f(z_0)-f(x^*)}{\varepsilon} + \Big(\frac{8(p+2)C_1\sigma_1}{\mu R_0}\Big)^{2}\frac{f(z_0)-f(x^*)}{\varepsilon} + \Big(\frac{8(p+2)C_2\delta_2}{\mu}\Big)^{\frac{1}{2}}\log\frac{f(z_0)-f(x^*)}{\varepsilon} + \sum_{i=3}^{p}\Big(\frac{8(p+2)C_i\delta_i R_0^{i-2}}{\mu}\Big)^{\frac{1}{i}} + \Big(\frac{8(p+2)C_{p+1}L_p R_0^{p-1}}{\mu}\Big)^{\frac{1}{p+1}}.$$

Therefore, the total oracle complexities are given by (76). ∎

For the case of stochastic higher-order derivatives, the proof remains the same with $\delta_i$ replaced by $\sigma_i$.
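The restart mechanism analyzed above can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the inner solver here is plain gradient descent on a toy quadratic, standing in for Algorithm 2, and the per-stage budgets are fixed rather than chosen by the max-formula in the proof.

```python
import numpy as np

def inner_gd(grad, z, budget, step):
    """Stand-in for Algorithm 2: plain gradient descent for `budget` steps."""
    for _ in range(budget):
        z = z - step * grad(z)
    return z

def restarted_method(grad, z0, budgets, step):
    """Restart loop in the spirit of Algorithm 3: each stage restarts the
    inner method from the previous output; with suitable budgets t_s the
    squared distance to x* is quartered per stage, matching
    E||z_s - x*||^2 <= 4^{-s} R0^2."""
    z = z0
    for t_s in budgets:  # budgets play the role of t_1, ..., t_S
        z = inner_gd(grad, z, t_s, step)
    return z

# Toy problem: f(z) = 0.5 * ||z||^2, minimizer x* = 0, mu = 1.
grad = lambda z: z
z_S = restarted_method(grad, np.array([4.0, -3.0]), budgets=[5, 5, 5], step=0.5)
```

In the actual scheme each budget $t_s$ would be the maximum over the terms displayed in the proof, so that every restart contracts the distance to $x^*$ by a factor of two.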

Appendix H On the solution of subproblem (9) in Algorithm 1

The subproblem (9) admits a closed-form solution, which we derive in the following steps. First, note that problem (9) is convex, so it suffices to solve the first-order optimality condition

$$0 = \nabla\psi_t(y_t) = \big(\lambda_t + \bar\kappa_2^t + \bar\kappa_3^t\,\|y_t - x_0\|\big)(y_t - x_0) + \sum_{j=0}^{t-1}\frac{\alpha_j}{A_j}\,g(x_{j+1}).$$

Thus, taking norms,

$$\big(\lambda_t + \bar\kappa_2^t + \bar\kappa_3^t\,\|y_t - x_0\|\big)\,\|y_t - x_0\| = \Bigg\|\sum_{j=0}^{t-1}\frac{\alpha_j}{A_j}\,g(x_{j+1})\Bigg\|.$$

Let us denote $r_t := \|y_t - x_0\|$ and $S_t := \big\|\sum_{j=0}^{t-1}\frac{\alpha_j}{A_j}\,g(x_{j+1})\big\|$. Then we get

$$\bar\kappa_3^t\,r_t^{2} + \big(\lambda_t + \bar\kappa_2^t\big)\,r_t - S_t = 0.$$

Taking the nonnegative root of this quadratic equation gives

$$r_t = \frac{\sqrt{\big(\lambda_t + \bar\kappa_2^t\big)^{2} + 4\,\bar\kappa_3^t\,S_t} - \big(\lambda_t + \bar\kappa_2^t\big)}{2\,\bar\kappa_3^t}.$$

Finally, we obtain the explicit solution

$$y_t = x_0 - \frac{\sum_{j=0}^{t-1}\frac{\alpha_j}{A_j}\,g(x_{j+1})}{\lambda_t + \bar\kappa_2^t + \bar\kappa_3^t\,r_t}.$$

We use this explicit solution in our implementation of Algorithm 1.
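The derivation above translates directly into a few lines of code. The following is a minimal numerical sketch (the function and variable names are ours, not the paper's): `weighted_grad_sum` stands for $\sum_{j<t}\frac{\alpha_j}{A_j}g(x_{j+1})$, and `lam`, `kappa2`, `kappa3` stand for $\lambda_t$, $\bar\kappa_2^t$, $\bar\kappa_3^t$, with $\bar\kappa_3^t > 0$ assumed so the quadratic formula applies.

```python
import numpy as np

def solve_subproblem(x0, weighted_grad_sum, lam, kappa2, kappa3):
    """Closed-form minimizer of psi_t, following the steps above."""
    S = np.linalg.norm(weighted_grad_sum)  # S_t
    b = lam + kappa2
    # Nonnegative root of kappa3 * r^2 + (lam + kappa2) * r - S = 0.
    r = (np.sqrt(b * b + 4.0 * kappa3 * S) - b) / (2.0 * kappa3)
    # y_t = x0 - weighted_grad_sum / (lam + kappa2 + kappa3 * r_t)
    return x0 - weighted_grad_sum / (b + kappa3 * r)

# Sanity check: the optimality condition grad psi_t(y_t) = 0 holds.
x0 = np.zeros(3)
g_sum = np.array([3.0, 0.0, 4.0])
y = solve_subproblem(x0, g_sum, lam=1.0, kappa2=0.5, kappa3=2.0)
residual = (1.0 + 0.5 + 2.0 * np.linalg.norm(y - x0)) * (y - x0) + g_sum
```

Note that $\|y_t - x_0\|$ produced this way coincides with the root $r_t$, which is exactly the consistency condition that makes the closed form valid.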
