We study a class of algorithms for solving bilevel optimization problems in both stochastic and deterministic settings when the inner-level objective is strongly convex. Specifically, we consider algorithms based on inexact implicit differentiation and we exploit a warm-start strategy to amortize the estimation of the exact gradient. We then introduce a unified theoretical framework inspired by the study of singularly perturbed systems (Habets, 1974) to analyze such amortized algorithms. By using this framework, our analysis shows these algorithms to match the computational complexity of oracle methods that have access to an unbiased estimate of the gradient, thus outperforming many existing results for bilevel optimization. We illustrate these findings on synthetic experiments and demonstrate the efficiency of these algorithms on hyper-parameter optimization experiments involving several thousands of variables.
1 INTRODUCTION
Bilevel optimization refers to a class of algorithms for solving problems with a hierarchical structure involving two levels: an inner and an outer level. The inner-level problem seeks a solution $y^{\star}(x)$ minimizing a cost $g(x, y)$ over a set $\mathcal{Y}$ given a fixed outer variable $x$ in a set $\mathcal{X}$ . The outer-level problem minimizes an objective of the form $\mathcal{L}(x) = f(x, y^{\star}(x))$ over $\mathcal{X}$ for some upper-level cost $f$ . When the solution $y^{\star}(x)$ is unique, the bilevel optimization problem takes the following form:
$$\min_{x \in \mathcal{X}} \mathcal{L}(x) \coloneqq f(x, y^{\star}(x)), \quad \text{such that} \quad y^{\star}(x) = \operatorname*{argmin}_{y \in \mathcal{Y}} g(x, y). \tag{1}$$
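As a concrete instance of (1), hyper-parameter selection for ridge regression fits this template: the inner level fits a model on training data for a fixed regularization strength, and the outer level evaluates the fitted model on validation data. The NumPy sketch below is a minimal illustration; the data, dimensions, scalar log-penalty parametrization and function names (`y_star`, `outer_loss`) are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative data: train/validation split for ridge regression.
A_tr, b_tr = rng.standard_normal((50, 10)), rng.standard_normal(50)
A_va, b_va = rng.standard_normal((30, 10)), rng.standard_normal(30)

def y_star(x):
    """Inner solution y*(x): ridge regression with penalty exp(x), i.e.
    g(x, y) = 0.5*||A_tr y - b_tr||^2 + 0.5*exp(x)*||y||^2 (strongly convex in y)."""
    H = A_tr.T @ A_tr + np.exp(x) * np.eye(10)
    return np.linalg.solve(H, A_tr.T @ b_tr)

def outer_loss(x):
    """L(x) = f(x, y*(x)) = 0.5*||A_va y*(x) - b_va||^2 (validation loss)."""
    r = A_va @ y_star(x) - b_va
    return 0.5 * r @ r

# The outer problem min_x L(x) is one-dimensional here, so a grid scan suffices.
xs = np.linspace(-5.0, 5.0, 101)
best = xs[int(np.argmin([outer_loss(x) for x in xs]))]
print("best log-penalty:", best)
```

In higher dimensions the grid scan is hopeless, which is precisely why the gradient-based methods studied in this paper are needed.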
First introduced in the field of economic game theory by Stackelberg (1934) and long studied in optimization (Ye and Zhu, 1995; Ye and Ye, 1997; Ye et al., 1997), this problem has recently received increasing attention in the machine learning community (Domke, 2012; Gould et al., 2016; Liao et al., 2018; Blondel et al., 2021; Liu et al., 2021; Shaban et al., 2019; Ablin et al., 2020). Indeed, many machine learning applications can be reduced to (1) including hyper-parameter optimization (Feurer and Hutter, 2019), meta-learning (Bertinetto et al., 2018), reinforcement learning (Hong et al., 2020b; Liu et al., 2021) or dictionary learning (Mairal et al., 2011; Lecouat et al., 2020a;b).
The hierarchical nature of (1) introduces additional challenges compared to standard optimization problems, such as finding a suitable trade-off between the computational budget for approximating the inner and outer level problems (Ghadimi and Wang, 2018; Dempe and Zemkoho, 2020). These considerations are exacerbated in machine learning applications, where the costs $f$ and $g$ often come as an average of functions over a large or infinite number of data points (Franceschi et al., 2018). All these challenges highlight the need for methods that are able to control the computational costs inherent to (1) while dealing with the large-scale setting encountered in machine learning.
Gradient-based bilevel optimization methods appear to be viable approaches for solving (1) in large-scale settings (Lorraine et al., 2020). They can be divided into two categories: Iterative differentiation (ITD) and Approximate implicit differentiation (AID). ITD approaches approximate the map $y^{\star}(x)$ by a differentiable optimization algorithm $\mathcal{A}(x)$ viewed as a function of $x$ . The resulting surrogate loss $\tilde{\mathcal{L}}(x) = f(x, \mathcal{A}(x))$ is optimized instead of $\mathcal{L}(x)$ using reverse-mode automatic differentiation (see Baydin et al., 2018). AID approaches (Pedregosa, 2016) rely on an expression of the gradient $\nabla \mathcal{L}$ resulting from the implicit function theorem (Lang, 2012, Theorem 5.9). Unlike ITD, AID avoids differentiating the algorithm approximating $y^{\star}(x)$ and, instead, approximately
solves a linear system using only Hessian and Jacobian-vector products to estimate the gradient $\nabla \mathcal{L}$ (Rajeswaran et al., 2019). These methods can also rely on stochastic approximation to increase scalability (Franceschi et al., 2018; Grazzi et al., 2020; 2021).
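When the inner problem is quadratic, the AID gradient can be written and checked in a few lines. The sketch below (with illustrative quadratic costs, not taken from the paper) assembles $\nabla\mathcal{L}$ from a single linear solve in the inner Hessian and compares it against central finite differences.

```python
import numpy as np

rng = np.random.default_rng(1)
d_x, d_y = 4, 6
# Strongly convex quadratic inner problem: g(x, y) = 0.5 y'Hy - y'Cx.
M = rng.standard_normal((d_y, d_y))
H = M @ M.T + d_y * np.eye(d_y)          # SPD Hessian of g in y
C = rng.standard_normal((d_y, d_x))
y_t = rng.standard_normal(d_y)

f = lambda x, y: 0.5 * np.sum((y - y_t) ** 2) + 0.5 * np.sum(x ** 2)
y_star = lambda x: np.linalg.solve(H, C @ x)   # closed-form inner solution

def grad_L_aid(x):
    """AID: solve the linear system H z = -d_y f once, then assemble
    grad L = d_x f + (d_xy g)' z, where d_xy g = -C for this g."""
    y = y_star(x)
    z = np.linalg.solve(H, -(y - y_t))   # z*(x, y*(x))
    return x + (-C).T @ z

def grad_L_fd(x, eps=1e-6):
    """Central finite-difference check of grad L."""
    L = lambda x: f(x, y_star(x))
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (L(x + e) - L(x - e)) / (2 * eps)
    return g

x0 = rng.standard_normal(d_x)
print(np.max(np.abs(grad_L_aid(x0) - grad_L_fd(x0))))  # agreement up to finite-difference error
```

In practice the solve `np.linalg.solve(H, ...)` is replaced by a few iterations of a Hessian-vector-product method (e.g. conjugate gradient), which is the step the warm-start strategy of this paper amortizes.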
In the context of machine learning, Ghadimi and Wang (2018) provided one of the first comprehensive studies of the computational complexity of a class of bilevel algorithms based on AID approaches. Subsequently, Hong et al. (2020b); Ji et al. (2021); Ji and Liang (2021); Yang et al. (2021) proposed different algorithms for solving (1) and obtained improved overall complexity by achieving a better trade-off between the costs of the inner and outer level problems. Still, the question of whether these complexities can be improved by better exploiting the structure of (1) through heuristics such as warm-start remains open (Grazzi et al., 2020). Moreover, these studies proposed separate analyses of their algorithms depending on the convexity of the loss $\mathcal{L}$ and on whether a stochastic or deterministic setting is considered. This points to the lack of a unified and systematic theoretical framework for analyzing bilevel problems, which is what the present work addresses.
We consider the Amortized Implicit Gradient Optimization (AmIGO) algorithm, a bilevel optimization algorithm based on Approximate Implicit Differentiation (AID) that exploits a warm-start strategy when estimating the gradient of $\mathcal{L}$. We then propose a unified theoretical framework for analyzing the convergence of AmIGO when the inner-level problem is strongly convex, in both stochastic and deterministic settings. The proposed framework is inspired by the early work of Habets (1974) on singularly perturbed systems and analyzes the effect of warm-start by viewing the iterates of AmIGO as a dynamical system. The evolution of this system is described by a total energy function which allows us to recover the convergence rates of oracle methods that have access to an unbiased estimate of $\nabla \mathcal{L}$ (c.f. Table 1). To the best of our knowledge, this is the first time a bilevel optimization algorithm based on a warm-start strategy provably recovers the rates of unbiased oracle methods across a wide range of settings, including stochastic ones.
2 RELATED WORK
Singularly perturbed systems (SPS) are continuous-time deterministic dynamical systems of coupled variables $(x(t), y(t))$ with two time-scales where $y(t)$ evolves much faster than $x(t)$ . As such, they exhibit a hierarchical structure similar to (1). The early work of Habets (1974); Saberi and Khalil (1984) provided convergence rates for SPS towards equilibria by studying the evolution of a single scalar energy function summarizing these systems. The present work takes inspiration from these works to analyze the convergence of AmIGO which involves three time-scales.
Two time-scale Stochastic Approximation (TTSA) can be viewed as a discrete-time stochastic version of SPS. Kaledin et al. (2020) showed that TTSA achieves a finite-time complexity of $O\left(\epsilon^{-1}\right)$ for linear systems, while Doan (2020) obtained a complexity of $O\left(\epsilon^{-3/2}\right)$ for general non-linear systems by extending the analysis for SPS. Hong et al. (2020b) further adapted the non-linear TTSA for solving (1). In the present work, we obtain faster rates by taking into account the dynamics of a third variable $z_k$ appearing in AmIGO, thus resulting in a three time-scale dynamics.
Warm-start in bilevel optimization. Ji et al. (2021); Ji and Liang (2021) used a warm-start for the inner-level algorithm to obtain an improved computational complexity over algorithms without warm-start. In the deterministic non-convex setting, Ji et al. (2021) used a warm-start strategy when solving the linear system appearing in AID approaches to obtain improved convergence rates. However, it remained open whether using a warm-start when solving both the inner-level problem and the linear system arising in AID approaches can yield faster algorithms in the more challenging stochastic setting (Grazzi et al., 2020). In the present work, we provide a positive answer to this question.
3 AMORTIZED IMPLICIT GRADIENT OPTIMIZATION
3.1 GENERAL SETTING AND MAIN ASSUMPTIONS
Notations. In all that follows, $\mathcal{X}$ and $\mathcal{Y}$ are Euclidean spaces. For a differentiable function $h(x,y)\colon \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$ , we denote by $\nabla h$ its gradient w.r.t. $(x,y)$ , by $\partial_x h$ and $\partial_y h$ its partial derivatives w.r.t. $x$ and $y$ , and by $\partial_{xy} h$ and $\partial_{yy} h$ the partial derivatives of $\partial_y h$ w.r.t. $x$ and $y$ , respectively.
| Geometries | Setting | Algorithms | Complexity |
| --- | --- | --- | --- |
| (SC) | (D) | BA (Ghadimi and Wang, 2018) | $O\left(\kappa_L^2 \vee \kappa_g^2 \log^2 \epsilon^{-1}\right)$ |
| | | AccBio (Ji and Liang, 2021) | $O\left(\kappa_L^{1/2} \kappa_g^{1/2} \log^2 \epsilon^{-1}\right)$ |
| | | AmIGO (Corollary 1) | $O\left(\kappa_L \kappa_g \log \epsilon^{-1}\right)$ |
| | (S) | BSA (Ghadimi and Wang, 2018) | $O\left(\kappa_L^4 \epsilon^{-2}\right)$ |
| | | TTSA (Hong et al., 2020b) | $O\left(\kappa_L^{0.5}(\kappa_g^{8.5} + \kappa_g^3)\, \epsilon^{-3/2} \log \epsilon^{-1}\right)$ |
| | | AmIGO (Corollary 2) | $O\left(\kappa_L^2 \kappa_g^3 \epsilon^{-1} \log \epsilon^{-1}\right)$ |
| (NC) | (D) | BA (Ghadimi and Wang, 2018) | $O\left(\kappa_g^5 \epsilon^{-5/4}\right)$ |
| | | AID-BiO (Ji et al., 2021) | $O\left(\kappa_g^4 \epsilon^{-1}\right)$ |
| | | AmIGO (Corollary 3) | $O\left(\kappa_g^4 \epsilon^{-1}\right)$ |
| | (S) | BSA (Ghadimi and Wang, 2018) | $O\left(\kappa_g^9 \epsilon^{-3} + \kappa_g^6 \epsilon^{-2}\right)$ |
| | | TTSA (Hong et al., 2020b) | $O\left(\kappa_g^{16} \epsilon^{-5/2} \log \epsilon^{-1}\right)$ |
| | | stocBiO (Ji et al., 2021) | $O\left(\kappa_g^9 \epsilon^{-2} + \kappa_g^6 \epsilon^{-2} \log \epsilon^{-1}\right)$ |
| | | MRBO/VRBO* (Yang et al., 2021) | $O\left(\mathrm{poly}(\kappa_g)\, \epsilon^{-3/2} \log \epsilon^{-1}\right)$ |
| | | AmIGO (Corollary 4) | $O\left(\kappa_g^9 \epsilon^{-2}\right)$ |
Table 1: Cost of finding an $\epsilon$ -accurate solution as measured by $\mathbb{E}[\mathcal{L}(x_k) - \mathcal{L}^\star] \wedge 2^{-1}\mu\, \mathbb{E}\left[\| x_k - x^\star\|^2\right]$ when $\mathcal{L}$ is $\mu$ -strongly-convex (SC) and by $\frac{1}{k}\sum_{i=1}^{k}\mathbb{E}\left[\|\nabla\mathcal{L}(x_i)\|^2\right]$ when $\mathcal{L}$ is non-convex (NC). The settings (D) and (S) stand for the deterministic and stochastic settings. The cost corresponds to the total number of gradients, Jacobian and Hessian-vector products used by the algorithm. $\kappa_L$ and $\kappa_g$ are the condition numbers of $\mathcal{L}$ and $g$ whenever applicable. The dependence on $\kappa_L$ and $\kappa_g$ for TTSA and AccBio is derived in Proposition 11 of Appendix A.4. The rate of MRBO/VRBO is obtained under the additional mean-squared smoothness assumption (Arjevani et al., 2019).
To ensure that (1) is well-defined, we consider the setting where the inner-level problem is strongly convex so that the solution $y^{\star}(x)$ is unique as stated by the following assumption:
Assumption 1. For any $x \in \mathcal{X}$ , the function $y \mapsto g(x, y)$ is $L_g$ -smooth and $\mu_g$ -strongly convex.
Assumption 1 holds in the context of hyper-parameter selection when the inner-level is a kernel regression problem (Franceschi et al., 2018), or when the variable $y$ represents the last linear layer of a neural network as in many meta-learning tasks (Ji et al., 2021). Under Assumption 1 and additional smoothness assumptions on $f$ and $g$ , the next proposition shows that $\mathcal{L}$ is differentiable:
Proposition 1. Let $g$ be a twice differentiable function satisfying Assumption 1. Assume that $f$ is differentiable and consider the quadratic problem:

$$\min_{z \in \mathcal{Y}} Q(x, y, z) \coloneqq \frac{1}{2} z^{\top} \partial_{yy} g(x, y)\, z + \partial_y f(x, y)^{\top} z. \tag{2}$$
Then, (2) admits a unique minimizer $z^{\star}(x,y)$ for any $(x,y)$ in $\mathcal{X}\times \mathcal{Y}$ . Moreover, $y^{\star}(x)$ is unique and well-defined for any $x$ in $\mathcal{X}$ and $\mathcal{L}$ is differentiable with gradient given by:

$$\nabla \mathcal{L}(x) = \partial_x f\big(x, y^{\star}(x)\big) + \partial_{xy} g\big(x, y^{\star}(x)\big)\, z^{\star}\big(x, y^{\star}(x)\big). \tag{3}$$
Proposition 1 follows by application of the implicit function theorem (Lang, 2012, Theorem 5.9) and provides an expression for $\nabla \mathcal{L}$ solely in terms of partial derivatives of $f$ and $g$ evaluated at $(x,y^{\star}(x))$ . Following Ghadimi and Wang (2018), we further make two smoothness assumptions on $f$ and $g$ :
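To see where the gradient expression comes from, one can differentiate the inner-level optimality condition; the following two-line derivation is a sketch, up to the transposition convention used for $\partial_{xy} g$:

```latex
% Optimality of the inner level: \partial_y g(x, y^\star(x)) = 0 for all x.
% Differentiate in x, then apply the chain rule to L(x) = f(x, y^\star(x)):
\begin{aligned}
0 &= \partial_{xy} g + \partial_{yy} g \,\mathrm{D}y^\star(x)
  \quad\Longrightarrow\quad
  \mathrm{D}y^\star(x) = -\big[\partial_{yy} g\big]^{-1} \partial_{xy} g, \\
\nabla \mathcal{L}(x)
  &= \partial_x f + \mathrm{D}y^\star(x)^\top \partial_y f
   = \partial_x f - \partial_{xy} g^\top \big[\partial_{yy} g\big]^{-1} \partial_y f
   = \partial_x f + \partial_{xy} g^\top z^\star,
\end{aligned}
```

where all derivatives of $f$ and $g$ are evaluated at $(x, y^{\star}(x))$ and $z^{\star} = -[\partial_{yy} g]^{-1}\partial_y f$ is recognized as the unique minimizer of the strongly convex quadratic (2). Solving (2) approximately thus avoids ever forming or inverting $\partial_{yy} g$.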
Assumption 2. There exist positive constants $L_{f}$ and $B$ such that for all $x, x' \in \mathcal{X}$ and $y, y' \in \mathcal{Y}$ :
Assumption 3. There exist positive constants $L_{g}^{\prime}, M_{g}$ such that for any $x, x^{\prime} \in \mathcal{X}$ and $y, y^{\prime} \in \mathcal{Y}$ :
Assumptions 1 to 3 allow us to control the variations of $y^{\star}$ and $z^{\star}$ and ensure $\mathcal{L}$ is $L$ -smooth for some positive constant $L$ as shown in Proposition 6 of Appendix B.2. As an $L$ -smooth function, $\mathcal{L}$ is necessarily weakly convex (Davis et al., 2018), meaning that $\mathcal{L}$ satisfies the inequality $\mathcal{L}(x) - \mathcal{L}(y) \leq \nabla \mathcal{L}(x)^{\top}(x - y) - \frac{\mu}{2}\| x - y\|^{2}$ for some fixed $\mu \in \mathbb{R}$ with $|\mu| \leq L$ . In particular, $\mathcal{L}$ is convex when $\mu \geq 0$ , strongly convex when $\mu > 0$ and generally non-convex when $\mu < 0$ . We thus consider two cases for $\mathcal{L}$ , the strongly convex case $(\mu > 0)$ and the non-convex case $(\mu < 0)$ . When $\mathcal{L}$ is convex, we denote by $\mathcal{L}^{\star}$ its minimum value achieved at a point $x^{\star}$ and define $\kappa_{\mathcal{L}} = L / \mu$ when $\mu > 0$ .
Stochastic/deterministic settings. We consider the general setting where $f(x,y)$ and $g(x,y)$ are expressed as an expectation of stochastic functions $\hat{f}(x,y,\xi)$ and $\hat{g}(x,y,\xi)$ over a noise variable $\xi$ . We recover the deterministic setting as a particular case when the variable $\xi$ has zero variance, thus allowing us to treat both stochastic (S) and deterministic (D) settings in a unified framework. As is common in machine learning, we assume we can always draw a new batch $\mathcal{D}$ of i.i.d. samples of the noise variable $\xi$ with size $|\mathcal{D}| \geq 1$ and use it to compute stochastic approximations of $f$ and $g$ defined by abuse of notation as $\hat{f}(x,y,\mathcal{D}) \coloneqq \frac{1}{|\mathcal{D}|}\sum_{\xi \in \mathcal{D}}\hat{f}(x,y,\xi)$ and $\hat{g}(x,y,\mathcal{D}) \coloneqq \frac{1}{|\mathcal{D}|}\sum_{\xi \in \mathcal{D}}\hat{g}(x,y,\xi)$ . We make the following noise assumptions which are implied by those in Ghadimi and Wang (2018):
Assumption 4. For any batch $\mathcal{D}$ , $\nabla \hat{f}(x, y, \mathcal{D})$ and $\partial_y \hat{g}(x, y, \mathcal{D})$ are unbiased estimators of $\nabla f(x, y)$ and $\partial_y g(x, y)$ with a uniformly bounded variance, i.e. for all $(x, y) \in \mathcal{X} \times \mathcal{Y}$ :
Assumption 5. For any batch $\mathcal{D}$ , the matrices $F_{1}(x,y,\mathcal{D})\coloneqq \partial_{xy}\hat{g} (x,y,\mathcal{D}) - \partial_{xy}g(x,y)$ and $F_{2}(x,y,\mathcal{D})\coloneqq \partial_{yy}\hat{g} (x,y,\mathcal{D}) - \partial_{yy}g(x,y)$ have zero mean and satisfy for all $(x,y)\in \mathcal{X}\times \mathcal{Y}$ :
For conciseness, we will use the notations $\sigma_f^2 \coloneqq \tilde{\sigma}_f^2 |\mathcal{D}|^{-1}$ , $\sigma_g^2 \coloneqq \tilde{\sigma}_g^2 |\mathcal{D}|^{-1}$ , $\sigma_{g_{xy}}^2 \coloneqq \tilde{\sigma}_{g_{xy}}^2 |\mathcal{D}|^{-1}$ and $\sigma_{g_{yy}}^2 \coloneqq \tilde{\sigma}_{g_{yy}}^2 |\mathcal{D}|^{-1}$ , without explicit reference to the batch $\mathcal{D}$ . Next, we describe the algorithm.
3.2 ALGORITHMS
Amortized Implicit Gradient Optimization (AmIGO) is an iterative algorithm for solving (1). It constructs iterates $x_{k}$ , $y_{k}$ and $z_{k}$ such that $x_{k}$ approaches a stationary point of $\mathcal{L}$ while $y_{k}$ and $z_{k}$ track the quantities $y^{\star}(x_k)$ and $z^{\star}(x_k,y_k)$ . AmIGO computes the iterate $x_{k + 1}$ using an update equation $x_{k + 1} = x_{k} - \gamma_{k}\hat{\psi}_{k}$ for some given step-size $\gamma_{k}$ and a stochastic estimate $\hat{\psi}_k$ of $\nabla \mathcal{L}(x_k)$ based on (3) and defined according to (4) below for some new batches of samples $\mathcal{D}_f$ and $\mathcal{D}_{g_{xy}}$ .
AmIGO computes $\hat{\psi}_k$ in four steps given iterates $x_k$ , $y_{k-1}$ and $z_{k-1}$ . A first step computes an approximation $y_k$ to $y^{\star}(x_k)$ using a stochastic algorithm $\mathcal{A}_k$ initialized at $y_{k-1}$ . A second step computes unbiased estimates $u_k = \partial_x \hat{f}(x_k, y_k, \mathcal{D}_f)$ and $v_k = \partial_y \hat{f}(x_k, y_k, \mathcal{D}_f)$ of the partial derivatives of $f$ w.r.t. $x$ and $y$ . A third step computes an approximation $z_k$ to $z^{\star}(x_k, y_k)$ using a second stochastic algorithm $\mathcal{B}_k$ for solving (2) initialized at $z_{k-1}$ . To increase efficiency, algorithm $\mathcal{B}_k$ uses the pre-computed vector $v_k$ to approximate the partial derivative $\partial_y f$ in (2). Finally, the stochastic estimate $\hat{\psi}_k$ is computed using (4) by summing the pre-computed vector $u_k$ with the Jacobian-vector product $w_k = \partial_{xy} \hat{g}(x_k, y_k, \mathcal{D}_{g_{xy}})\, z_k$ . AmIGO is summarized in Algorithm 1.
Algorithms $\mathcal{A}_k$ and $\mathcal{B}_k$ . While various choices for $\mathcal{A}_k$ and $\mathcal{B}_k$ are possible, such as adaptive algorithms (Kingma and Ba, 2015), or accelerated stochastic algorithms (Ghadimi and Lan, 2012), we
focus on simple stochastic gradient descent algorithms with a pre-defined number of iterations $T$ and $N$ . These algorithms compute intermediate iterates $y^{t}$ and $z^n$ optimizing the functions $y \mapsto g(x_k, y)$ and $z \mapsto Q(x_k, y_k, z)$ starting from some initial values $y^0$ and $z^0$ and returning the last iterates $y^T$ and $z^N$ as described in Algorithms 2 and 3. Algorithm $\mathcal{A}_k$ updates the current iterate $y^{t-1}$ using a stochastic gradient $\partial_y \hat{g}(x_k, y^{t-1}, \mathcal{D}_g)$ for some new batch of samples $\mathcal{D}_g$ and a fixed step-size $\alpha_k$ . Algorithm $\mathcal{B}_k$ updates the current iterate $z^{n-1}$ using a stochastic estimate of $\partial_z Q(x_k, y_k, z^{n-1})$ with step-size $\beta_k$ . This stochastic gradient is computed by evaluating the Hessian-vector product $\partial_{yy} \hat{g}(x_k, y_k, \mathcal{D}_{g_{yy}})\, z^{n-1}$ for some new batch of samples $\mathcal{D}_{g_{yy}}$ and summing it with a vector $v_k$ approximating $\partial_y f(x_k, y_k)$ provided as input to algorithm $\mathcal{B}_k$ .
Warm-start for $y^0$ and $z^0$ . Following the intuition that $y^\star(x_k)$ remains close to $y^\star(x_{k-1})$ when $x_k \simeq x_{k-1}$ , and assuming that $y_{k-1}$ is an accurate approximation of $y^\star(x_{k-1})$ , it is natural to initialize $\mathcal{A}_k$ with the iterate $y_{k-1}$ . The same intuition applies when initializing $\mathcal{B}_k$ with $z_{k-1}$ . Next, we introduce a framework for analyzing the effect of warm-start on the convergence speed of AmIGO.
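Putting the pieces together, the AmIGO loop with warm-started inner solvers can be sketched in a few lines. The sketch below uses deterministic gradients in place of the stochastic estimates, and the quadratic costs (matrices `H`, `C` and target `y_t`) are made up for the demo rather than taken from the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(2)
d_x, d_y = 4, 6
M = rng.standard_normal((d_y, d_y))
H = M @ M.T + d_y * np.eye(d_y)              # Hessian of g in y (SPD)
C = rng.standard_normal((d_y, d_x))
y_t = rng.standard_normal(d_y)

# Illustrative costs: g(x,y) = 0.5 y'Hy - y'Cx, f(x,y) = 0.5||y-y_t||^2 + 0.5||x||^2.
d_y_g  = lambda x, y: H @ y - C @ x          # partial_y g
d_xy_g = lambda z: -C.T @ z                  # (partial_xy g)' z, a Jacobian-vector product
d_x_f  = lambda x, y: x
d_y_f  = lambda x, y: y - y_t

L_g = np.linalg.eigvalsh(H).max()
alpha = beta = 1.0 / L_g
gamma, T, N, K = 0.05, 10, 10, 200

x = rng.standard_normal(d_x)
y = np.zeros(d_y)                            # warm-started across outer iterations
z = np.zeros(d_y)

for k in range(K):
    for _ in range(T):                       # A_k: T gradient steps on g(x_k, .)
        y -= alpha * d_y_g(x, y)
    u, v = d_x_f(x, y), d_y_f(x, y)          # partial derivatives of f
    for _ in range(N):                       # B_k: N steps on Q(x_k, y_k, .)
        z -= beta * (H @ z + v)              # grad_z Q = (partial_yy g) z + partial_y f
    x -= gamma * (u + d_xy_g(z))             # psi_k = u_k + w_k

print("||partial_y g(x, y)||:", np.linalg.norm(d_y_g(x, y)))
```

Replacing the loop over `z` by conjugate gradient (as in AmIGO-CG later in the paper) only changes the $\mathcal{B}_k$ block; the warm-start consists precisely in reusing `y` and `z` across outer iterations instead of resetting them.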
4 ANALYSIS OF AMORTIZED IMPLICIT GRADIENT OPTIMIZATION
4.1 GENERAL APPROACH AND MAIN RESULT
The proposed approach consists in three main steps: (1) Analysis of the outer-level problem, (2) Analysis of the inner-level problem and (3) Analysis of the joint dynamics of both levels.
Outer-level problem. We consider a quantity $E_{k}^{x}$ describing the evolution of $x_{k}$ defined as follows:
where $u \in \{0,1\}$ is set to 1 in the stochastic setting and to 0 in the deterministic one, and $\delta_{k}$ is a positive sequence that determines the convergence rate of the outer-level problem and is defined by:
with $\eta_0$ such that $\gamma_0^{-1} \geq \eta_0 \geq \mu$ if $\mu \geq 0$ and $\eta_0 = L$ if $\mu < 0$ , and where we choose the step-size $\gamma_k$ to be a non-increasing sequence with $\gamma_0 \leq \frac{1}{L}$ . With this choice for $\delta_k$ and by setting $u = 1$ in (5), $E_k^x$ recovers the quantity considered in the stochastic estimate sequences framework of Kulunchakov and Mairal (2020) to analyze the convergence of stochastic optimization algorithms when $\mathcal{L}$ is convex. When $\mathcal{L}$ is non-convex, $E_k^x$ recovers a standard measure of stationarity (Davis and Drusvyatskiy, 2018). In Section 4.3, we control $E_k^x$ using the bias and variance errors $E_{k - 1}^{\psi}$ and $V_{k - 1}^{\psi}$ of $\hat{\psi}_k$ given by (6) below, where $\mathbb{E}_k$ denotes the expectation conditioned on $(x_k, y_k, z_{k - 1})$ .
Inner-level problems. We consider the mean-squared errors $E_{k}^{y}$ and $E_{k}^{z}$ between initializations $(y^{0} = y_{k - 1}$ and $z^0 = z_{k - 1}$ ) and stationary values $(y^{\star}(x_k)$ and $z^{\star}(x_k,y_k))$ of algorithms $\mathcal{A}_k$ and $\mathcal{B}_k$ :
In Section 4.3, we show that the warm-start strategy allows us to control $E_{k}^{y}$ and $E_{k}^{z}$ in terms of the previous iterates $E_{k-1}^{y}$ and $E_{k-1}^{z}$ as well as the bias and variance errors in (6). We further prove that these bias and variance errors are, in turn, controlled by $E_{k}^{y}$ and $E_{k}^{z}$ .
Joint dynamics. Following Habets (1974), we consider an aggregate error $E_{k}^{tot}$ defined as a linear combination of $E_{k}^{x}$ , $E_{k}^{y}$ and $E_{k}^{z}$ with carefully selected coefficients $a_{k}$ and $b_{k}$ :
$$E_k^{tot} = E_k^x + a_k E_k^y + b_k E_k^z. \tag{7}$$
As such $E_k^{tot}$ represents the dynamics of the whole system. The following theorem provides an error bound for $E_k^{tot}$ in both convex and non-convex settings for a suitable choice of the coefficients $a_k$ and $b_k$ provided that $T$ and $N$ are large enough:
Theorem 1. Choose a batch-size $\left|\mathcal{D}_{g_{yy}}\right| \geq 1 \vee \frac{\tilde{\sigma}_{g_{yy}}^2}{\mu_g L_g}$ and the step-sizes $\alpha_k = L_g^{-1}$ , $\beta_k = (2L_g)^{-1}$ , $\gamma_k = L^{-1}$ . Set the coefficients $a_k$ and $b_k$ to be $a_k \coloneqq \delta_0(1 - \alpha_k\mu_g)^{1/2}$ and $b_k \coloneqq \delta_0\big(1 - \frac{1}{2}\beta_k\mu_g\big)^{1/2}$ and set the number of iterations $T$ and $N$ of Algorithms 2 and 3 to be of order $T = O(\kappa_g)$ and $N = O(\kappa_g)$ up to a logarithmic dependence on $\kappa_g$ . Let $\hat{x}_k = u(1 - \delta_k)\hat{x}_{k-1} + (1 - u(1 - \delta_k))x_k$ , with $\hat{x}_0 = x_0$ . Then, under Assumptions 1 to 5, $E_k^{tot}$ satisfies:
where $\mathcal{W}^2$ , defined in (20) of Appendix A.2, is the effective variance of the problem with $\mathcal{W}^2 = 0$ in the deterministic setting and, in the stochastic setting, $\mathcal{W}^2 > 0$ is of the following order:
We describe the strategy of the proof in Section 4.3 and provide a proof outline in Appendix A.1 with exact expressions for all variables, including the expressions of $T$ , $N$ and $\mathcal{W}^2$ . The full proof is provided in Appendix A.2. The choice of $a_{k}$ and $b_{k}$ ensures that $E_k^y$ and $E_k^z$ contribute less to $E_k^{tot}$ as the algorithms $\mathcal{A}_k$ and $\mathcal{B}_k$ become more accurate. The effective variance $\mathcal{W}^2$ accounts for interactions between both levels in the presence of noise and becomes proportional to the outer-level variance $\sigma_f^2$ when the inner-level problem is solved exactly. In the deterministic setting, all variances $\tilde{\sigma}_f^2$ , $\tilde{\sigma}_g^2$ , $\tilde{\sigma}_{g_{xy}}^2$ and $\tilde{\sigma}_{g_{yy}}^2$ vanish so that $\mathcal{W}^2 = 0$ . Hence, we characterize the deterministic setting by $\mathcal{W}^2 = 0$ and the stochastic one by $\mathcal{W}^2 > 0$ . Next, we apply Theorem 1 to obtain the complexity of AmIGO.
4.2 COMPLEXITY ANALYSIS
We define the complexity $\mathcal{C}(\epsilon)$ of a bilevel algorithm to be the total number of queries to the gradients of $f$ and $g$ and Jacobian/Hessian-vector products needed by the algorithm to achieve an error $\epsilon$ according to some pre-defined criterion. Let the number of iterations $k$ , $T$ and $N$ and the batch sizes $|\mathcal{D}_g|$ , $|\mathcal{D}_f|$ , $|\mathcal{D}_{g_{xy}}|$ and $|\mathcal{D}_{g_{yy}}|$ be such that AmIGO achieves a precision $\epsilon$ . Then $\mathcal{C}(\epsilon)$ is given by:
Corollary 1 outperforms the complexities in Table 1 in terms of the dependence on $\epsilon$ . It is possible to improve the dependence on $\kappa_{g}$ to $\kappa_{g}^{1/2}$ using acceleration in $\mathcal{A}_{k}$ and $\mathcal{B}_{k}$ as discussed in Appendix A.5.1, or using generic acceleration methods such as Catalyst (Lin et al., 2018).
Corollary 2 improves over the results in Table 1 in the stochastic strongly-convex setting and recovers the dependence on $\epsilon$ of stochastic gradient descent for smooth and strongly convex functions up to a logarithmic factor.
Corollary 3 recovers the complexity of AID-BiO (Ji et al., 2021) in the deterministic non-convex setting. This is expected since AID-BiO also exploits warm-start for both $\mathcal{A}_k$ and $\mathcal{B}_k$ .
Corollary 4 recovers the optimal dependence on $\epsilon$ of $O\left(\frac{1}{\epsilon^2}\right)$ achieved by stochastic gradient descent in the smooth non-convex case (Arjevani et al., 2019, Theorem 1). It also improves over the results in (Ji et al., 2021), which involve an additional logarithmic factor $\log (\epsilon^{-1})$ since $N$ is required to be $O(\kappa_g\log (\epsilon^{-1}))$ . In our case, $N$ remains constant since $\mathcal{B}_k$ benefits from warm-start. The faster rates of MRBO/VRBO* (Yang et al., 2021) are obtained under the additional mean-squared smoothness assumption (Arjevani et al., 2019), which we do not investigate in the present work. This assumption allows them to achieve the improved complexity of $O(\epsilon^{-3 / 2}\log (\epsilon^{-1}))$ . However, these algorithms still require $N = O(\log (\epsilon^{-1}))$ , indicating that the use of warm-start in $\mathcal{B}_k$ could further reduce the complexity to $O(\epsilon^{-3 / 2})$ , which would be an interesting direction for future work.
4.3 OUTLINE OF THE PROOF
The proof of Theorem 1 proceeds by deriving a recursion for both outer-level error $E_{k}^{x}$ and inner-level errors $E_{k}^{y}$ and $E_{k}^{z}$ and then combining those to obtain an error bound on the total error $E_{k}^{tot}$ .
Outer-level recursion. To allow a unified analysis of the behavior of $E_k^x$ in both convex and nonconvex settings, we define $F_k$ as follows:
The following proposition, with a proof in Appendix C.1, provides a recursive inequality on $E_k^x$ involving the errors in (6) due to the inexact gradient $\hat{\psi}_k$ :
Proposition 2. Let $\rho_{k}$ be a non-increasing sequence with $0 < \rho_{k} < 2$ . Assumptions 1 to 3 ensure that:
with $s_k$ defined as $s_k \coloneqq \frac{1}{2}\delta_k + \left(\frac{u}{2}\delta_k + (1 - u)\right)\mathbb{1}_{\mu > 0}$ .
In the ideal case where $y_{k} = y^{\star}(x_{k})$ and $z_{k} = z^{\star}(x_{k},y_{k})$ , the bias $E_k^\psi$ vanishes and (10) simplifies to the recursion of Kulunchakov and Mairal (2019, Proposition 1), which recovers the convergence rates for stochastic gradient methods in the convex case. However, $y_{k}$ and $z_{k}$ are generally inexact solutions and introduce a positive bias $E_k^\psi$ . Therefore, controlling the inner-level iterates is required to control the bias $E_k^\psi$ which, in turn, impacts the convergence of the outer-level as we discuss next.
Controlling the inner-level iterates $y_{k}$ and $z_{k}$ . Proposition 3 below controls the expected mean squared errors between iterates $y_{k}$ and $z_{k}$ and their limiting values $y^{\star}(x_k)$ and $z^{\star}(x_k, y_k)$ :
Proposition 3. Let the step-sizes $\alpha_{k}$ and $\beta_{k}$ be such that $\alpha_{k} \leq L_{g}^{-1}$ and $\beta_{k} \leq \frac{1}{2L_{g}} \wedge \frac{\mu_{g}}{\mu_{g}^{2} + \sigma_{g_{yy}}^{2}}$ . Let $\Lambda_{k} := (1 - \alpha_{k}\mu_{g})^{T}$ and $\Pi_{k} := \left(1 - \frac{\beta_{k}\mu_{g}}{2}\right)^{N}$ . Under Assumptions 1, 4 and 5, it holds that:
where $R_{k}^{y} = O\left(\kappa_{g}\sigma_{g}^{2}\right)$ and $R_{k}^{z} = O\left(\kappa_{g}^{3}\sigma_{g_{yy}}^{2} + \kappa_{g}^{2}\sigma_{f}^{2}\right)$ are defined in (14) of Appendix A.2.
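To make the contraction factor $\Lambda_k$ concrete: with $\alpha_k = L_g^{-1}$ , deterministic gradient descent on a $\mu_g$ -strongly convex quadratic contracts the distance to the minimizer at least as fast as $(1 - \alpha_k\mu_g)^T$ after $T$ steps. The toy check below verifies this numerically; the quadratic and its constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 20
Q = np.linalg.qr(rng.standard_normal((d, d)))[0]
mu_g, L_g = 1.0, 50.0                       # illustrative strong convexity / smoothness
H = Q @ np.diag(np.linspace(mu_g, L_g, d)) @ Q.T
y_opt = rng.standard_normal(d)
grad = lambda y: H @ (y - y_opt)            # gradient of a mu_g-strongly convex quadratic

alpha, kappa = 1.0 / L_g, L_g / mu_g
T = int(np.ceil(kappa * np.log(100)))       # aim for a 1/100 contraction of the distance
y = y_opt + rng.standard_normal(d)
e0 = np.linalg.norm(y - y_opt)
for _ in range(T):
    y -= alpha * grad(y)
ratio = np.linalg.norm(y - y_opt) / e0
print(ratio, (1 - alpha * mu_g) ** T)       # observed vs. predicted contraction
```

This is why $T = O(\kappa_g)$ (up to logarithmic factors) suffices in Theorem 1: each inner pass shrinks the warm-start error by a constant factor independent of $\epsilon$ .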
While Proposition 3 is specific to the choice of the algorithms $\mathcal{A}_k$ and $\mathcal{B}_k$ in Algorithms 2 and 3, our analysis directly extends to other algorithms satisfying inequalities similar to (11), such as the accelerated or variance reduced algorithms discussed in Appendices A.5.1 and A.5.2. Proposition 4 below controls the bias and variance terms $V_k^\psi$ and $E_k^\psi$ in terms of the warm-start errors $E_k^y$ and $E_k^z$ .
Proposition 4. Under Assumptions 1 to 5, the following inequalities hold:
where $w_{x}^{2} = O\left(\kappa_{g}^{2}\left(\sigma_{f}^{2} + \sigma_{g_{xy}}^{2}\right) + \kappa_{g}^{3}\sigma_{g_{yy}}^{2}\right)$ , $\sigma_{x}^{2} = O\left(\sigma_{g_{xy}}^{2} + \kappa_{g}^{2}\sigma_{g_{yy}}^{2}\right)$ and $L_{\psi} = O(\kappa_g^2)$ are positive constants defined in (13) and (16) of Appendix A.2 with $L_{\psi}$ controlling the variations of $\mathbb{E}_k[\hat{\psi}_k]$ .
Proposition 4 highlights the dependence of $E_{k}^{\psi}$ and $V_{k}^{\psi}$ on the inner-level errors. It suggests analyzing the evolution of $E_{k}^{y}$ and $E_{k}^{z}$ to quantify how large the bias and variances can get:
Proposition 5. Let $\zeta_k > 0$ , a $2 \times 2$ matrix $\pmb{P}_k$ , two vectors $\pmb{U}_k$ and $\pmb{V}_k$ in $\mathbb{R}^2$ all independent of $x_k$ , $y_k$ and $z_k$ be as defined in Proposition 8 of Appendix A.2. Under Assumptions 1 to 5, it holds that:
Proposition 5 describes the evolution of the inner-level errors as the number of iterations $k$ increases. The matrix $P_{k}$ and vectors $U_{k}$ and $V_{k}$ arise from discretization errors and depend on the step-sizes and constants of the problem. The second term of (12) represents interactions with the outer-level through $E_{k-1}^{x}, V_{k-1}^{\psi}$ and $E_{k-1}^{\psi}$ . Propositions 2, 4 and 5 describe the joint dynamics of $(E_{k}^{x}, E_{k}^{y}, E_{k}^{z})$ from which the evolution of $E_{k}^{tot}$ can be deduced as shown in Appendices A.1 and A.2.
5 EXPERIMENTS
We run three sets of experiments described in Sections 5.1 to 5.3. In all cases, we consider AmIGO with either gradient descent (AmIGO-GD) or conjugate gradient (AmIGO-CG) for algorithm $\mathcal{B}_k$ . We compare AmIGO with AID methods without warm-start for $\mathcal{B}_k$ , which we refer to as (AID-GD) and (AID-CG), and with (AID-CG-WS), which uses warm-start for $\mathcal{B}_k$ but not for $\mathcal{A}_k$ . We also consider other variants using either a fixed-point algorithm (AID-FP) (Grazzi et al., 2020) or a Neumann series expansion (AID-N) (Lorraine et al., 2020) for $\mathcal{B}_k$ . Finally, we consider two algorithms based on iterative differentiation, which we refer to as (ITD) (Grazzi et al., 2020) and (Reverse) (Franceschi et al., 2017). For all methods except (AID-CG-WS), we use warm-start in algorithm $\mathcal{A}_k$ ; however, only AmIGO-GD, AmIGO-CG and AID-CG-WS exploit warm-start in $\mathcal{B}_k$ , the other AID-based methods initializing $\mathcal{B}_k$ with $z^0 = 0$ . In Sections 5.2 and 5.3, we also compare with the BSA algorithm (Ghadimi and Wang, 2018), the TTSA algorithm (Hong et al., 2020a) and stocBiO (Ji et al., 2021). An implementation of AmIGO is available at https://github.com/MichaelArbel/AmIGO.
5.1 SYNTHETIC PROBLEM
To study the behavior of AmIGO in a controlled setting, we consider a synthetic problem where both inner and outer level losses are quadratic functions with thousands of variables, as described in detail in Appendix F.1. Figure 1(a) shows the complexity $\mathcal{C}(\epsilon)$ needed to reach a $10^{-6}$ relative error, for the best choice of $T$ and $M$ over a grid, as the condition number $\kappa_{g}$ increases. AmIGO-CG achieves the lowest time and is followed by AID-CG, thus showing a favorable effect of warm-start for $\mathcal{B}_k$ . The same conclusion holds for AmIGO-GD compared to AID-GD. Note that AID-CG is still faster than AmIGO-GD for larger values of $\kappa_{g}$ , highlighting the advantage of using algorithms $\mathcal{B}_k$ with $O(\sqrt{\kappa_g})$ complexity such as (CG) instead of non-accelerated ones with $O(\kappa_g)$ complexity such as (GD). Figure 1(b) shows the relative error after 10s and supports the same conclusions. For moderate values of $\kappa_{g}$ , only AmIGO and AID-CG reach an error of $10^{-20}$ , as shown in Figure 1(c). We refer to Figures 2 and 3 of Appendix F for additional results on the effect of the choice of $T$ and $M$ , showing that AmIGO consistently performs well for a wide range of values of $T$ and $M$ .
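The gap between $O(\kappa_g)$ and $O(\sqrt{\kappa_g})$ inner solvers can be reproduced in isolation. The sketch below (illustrative dimensions and spectra, not the paper's actual benchmark) counts the iterations gradient descent and conjugate gradient need to solve a linear system like the one arising in $\mathcal{B}_k$ as the condition number grows.

```python
import numpy as np

def gd_iters(H, b, tol=1e-6, max_it=100000):
    """Gradient descent on 0.5 z'Hz - b'z; returns iterations to reach tol."""
    L = np.linalg.eigvalsh(H).max()
    z = np.zeros_like(b)
    for k in range(1, max_it + 1):
        r = H @ z - b
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return k
        z -= r / L
    return max_it

def cg_iters(H, b, tol=1e-6, max_it=100000):
    """Conjugate gradient on Hz = b; returns iterations to reach tol."""
    z = np.zeros_like(b); r = b.copy(); p = r.copy(); rs = r @ r
    for k in range(1, max_it + 1):
        Hp = H @ p
        a = rs / (p @ Hp)
        z += a * p; r -= a * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(b):
            return k
        p = r + (rs_new / rs) * p; rs = rs_new
    return max_it

rng = np.random.default_rng(4)
d = 200
for kappa in (10, 100, 1000):
    # Quadratic with condition number kappa (spectrum in [1, kappa]).
    Q = np.linalg.qr(rng.standard_normal((d, d)))[0]
    H = Q @ np.diag(np.linspace(1.0, kappa, d)) @ Q.T
    b = rng.standard_normal(d)
    print(kappa, gd_iters(H, b), cg_iters(H, b))  # GD grows roughly with kappa, CG much slower
```

The iteration counts printed for GD grow roughly linearly in the condition number, while those of CG grow much more slowly, mirroring the (GD) vs. (CG) comparison above.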
5.2 HYPER-PARAMETER OPTIMIZATION
We consider a classification task on the 20Newsgroup dataset using a logistic loss and a linear model. Each dimension of the linear model is regularized using a different hyper-parameter. The
Figure 1: Top row: performance on the synthetic task. The relative error is defined as a ratio between current and initial errors $(\mathcal{L}(x_k) - \mathcal{L}^\star) / (\mathcal{L}(x_0) - \mathcal{L}^\star)$ . The complexity $\mathcal{C}(\epsilon)$ as defined in (8). Bottom row: performance on the hyper-parameter optimization task.
collection of those hyper-parameters forms a vector $x$ of dimension $d = 101631$ , optimized using an unregularized regression loss over the validation set while the model is learned using the training set. We consider two evaluation settings: a default setting based on Grazzi et al. (2020); Ji et al. (2021) and a grid-search setting near the default values of $\beta_{k}$ , $T$ and $N$ , as detailed in Appendix F.2. We also vary the batch-size in $10^{3} \times \{0.1, 1, 2, 4\}$ and report the best performing choice for each method. Figures 1(d,e,f) show AmIGO-CG to be the fastest, achieving the lowest error and the highest validation and test accuracies. The test accuracy of AmIGO-CG decreases after exceeding $80\%$ , indicating potential overfitting, as also observed in Franceschi et al. (2018). Similarly, AmIGO-GD outperformed all other methods that use an algorithm $\mathcal{B}_k$ with $O(\kappa_g)$ complexity. Moreover, all remaining methods achieved comparable performance matching those reported in Ji et al. (2021), thus indicating that the warm-start in $\mathcal{B}_k$ and the acceleration in $\mathcal{B}_k$ were the determining factors for the improved performance. Additionally, Figure 4 of Appendix F reports similar results for each choice of the batch-size, indicating robustness to this choice.
5.3 DATASET DISTILLATION
Dataset distillation (Wang et al., 2018) consists in learning a synthetic dataset so that a model trained on this dataset achieves a small error on the training set. Figure 5 of Appendix F.3 shows the training loss (outer loss) and the training and test accuracies of a model trained on MNIST by dataset distillation. Similarly to Figure 1, AmIGO-CG achieves the best performance, followed by AID-CG. AmIGO obtains the best performance by far among methods without acceleration for $\mathcal{B}_k$ , while all the remaining ones fail to improve. This finding is indicative of an ill-conditioned inner-level problem, as confirmed when computing the conditioning number of the Hessian $\partial_{yy}g(x,y)$ , which we found to be of order $7\times 10^{4}$ . Indeed, when compared to the synthetic example for $\kappa_g = 10^4$ as shown in Figure 2, we also observe that only AmIGO-CG, AmIGO and AID-CG could successfully optimize the loss. Hence, these results confirm the importance of warm-start for an improved performance.
6 CONCLUSION
We studied AmIGO, an algorithm for bilevel optimization based on amortized implicit differentiation, and introduced a unified framework for analyzing its convergence. Our analysis showed that AmIGO achieves the same complexity as unbiased oracle methods, thus achieving improved rates compared to methods without warm-start in various settings. We then illustrated such improved convergence empirically in both synthetic and hyper-parameter optimization experiments. A future research direction consists in extending the proposed framework to non-smooth objectives and analyzing acceleration in both inner and outer level problems, as well as variance reduction techniques.
7 ACKNOWLEDGMENTS AND FUNDING
This project was supported by the ERC grant number 714381 (SOLARIS project) and by ANR 3IA MIAI@Grenoble Alpes, (ANR19-P3IA-0003).
Riccardo Grazzi, Luca Franceschi, Massimiliano Pontil, and Saverio Salzo. On the iteration complexity of hypergradient computation. In International Conference on Machine Learning, pages 3748-3758. PMLR, 2020.
Riccardo Grazzi, Massimiliano Pontil, and Saverio Salzo. Convergence properties of stochastic hypergradients. In International Conference on Artificial Intelligence and Statistics, pages 3826-3834. PMLR, 2021.
P. Habets. Stabilité asymptotique pour des problèmes de perturbations singulières. In Stability Problems, pages 2-18. Springer, 1974.
Mingyi Hong, Hoi-To Wai, Zhaoran Wang, and Zhuoran Yang. A two-timescale framework for bilevel optimization: Complexity analysis and application to actor-critic. arXiv preprint arXiv:2007.05170, 2020a.
Mingyi Hong, Hoi-To Wai, Zhaoran Wang, and Zhuoran Yang. A two-timescale framework for bilevel optimization: Complexity analysis and application to actor-critic. arXiv preprint arXiv:2007.05170, 2020b.
Kaiyi Ji and Yingbin Liang. Lower bounds and accelerated algorithms for bilevel optimization. arXiv preprint arXiv:2102.03926, 2021.
Kaiyi Ji, Junjie Yang, and Yingbin Liang. Bilevel optimization: Convergence analysis and enhanced design. In International Conference on Machine Learning, pages 4882-4892. PMLR, 2021.
Maxim Kaledin, Eric Moulines, Alexey Naumov, Vladislav Tadic, and Hoi-To Wai. Finite time analysis of linear two-timescale stochastic approximation with Markovian noise. In Conference on Learning Theory, pages 2144-2203. PMLR, 2020.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015.
Andrei Kulunchakov and Julien Mairal. Estimate sequences for variance-reduced stochastic composite optimization. In International Conference on Machine Learning, pages 3541-3550. PMLR, 2019.
Andrei Kulunchakov and Julien Mairal. Estimate sequences for stochastic composite optimization: Variance reduction, acceleration, and robustness to noise. 2020.
Serge Lang. Fundamentals of differential geometry, volume 191. Springer Science & Business Media, 2012.
Bruno Lecouat, Jean Ponce, and Julien Mairal. Designing and learning trainable priors with non-cooperative games. arXiv preprint arXiv:2006.14859, 2020a.
Bruno Lecouat, Jean Ponce, and Julien Mairal. A flexible framework for designing trainable priors with adaptive smoothing and game encoding. In Conference on Neural Information Processing Systems (NeurIPS), 2020b.
Renjie Liao, Yuwen Xiong, Ethan Fetaya, Lisa Zhang, KiJung Yoon, Xaq Pitkow, Raquel Urtasun, and Richard Zemel. Reviving and improving recurrent back-propagation. In International Conference on Machine Learning, pages 3082-3091. PMLR, 2018.
Hongzhou Lin, Julien Mairal, and Zaid Harchaoui. Catalyst acceleration for first-order convex optimization: from theory to practice. Journal of Machine Learning Research, 18(1):7854-7907, 2018.
Risheng Liu, Jiaxin Gao, Jin Zhang, Deyu Meng, and Zhouchen Lin. Investigating bi-level optimization for learning and vision from a unified perspective: A survey and beyond. arXiv preprint arXiv:2101.11517, 2021.
Jonathan Lorraine, Paul Vicol, and David Duvenaud. Optimizing millions of hyperparameters by implicit differentiation. In International Conference on Artificial Intelligence and Statistics, pages 1540-1552. PMLR, 2020.
In this section, we provide a proof of Theorem 1 as well as its Corollaries 1 to 4. In Appendix A.1, we provide an outline of the proof of Theorem 1 that states the main intermediary results needed for the proof and provides explicit expressions for the quantities needed throughout the rest of the paper. Appendices A.2 and A.3 provide the proofs of Theorem 1 and Corollaries 1 to 4. The proofs of the intermediary results are deferred to Appendices B and C.
A.1 PROOF OUTLINE OF THEOREM 1
The proof of Theorem 1 proceeds in 8 steps as discussed below.
Step 1: Smoothness properties. This step consists in characterizing the smoothness of $\nabla \mathcal{L}$ , $y^{\star}$ , $z^{\star}$ as well as the conditional expectation $\mathbb{E}[\hat{\psi}_k|x_k,y_k,z_k]$ knowing $x_{k}$ , $y_{k}$ and $z_{k}$ . For this purpose, we consider the function $\Psi :\mathcal{X}\times \mathcal{Y}\times \mathcal{Y}\to \mathcal{X}$ defined as follows:
$\Psi (x,y,z) := \partial_x f(x,y) + \partial_{xy}g(x,y)\, z.$
Hence, by definition of $\hat{\psi}_k$ , it is easy to see that $\mathbb{E}[\hat{\psi}_k|x_k,y_k,z_k] = \Psi (x_k,y_k,z_k)$ . The following proposition controls the smoothness of $\nabla \mathcal{L}$ , $y^{\star}$ , $z^{\star}$ and $\Psi$ and is adapted from (Ghadimi and Wang, 2018, Lemma 2.2). We provide a proof in Appendix B.2 for completeness.
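In practice, $\Psi$ can be evaluated without forming the cross-derivative $\partial_{xy}g$ explicitly, since $\partial_{xy}g(x,y)z$ is the $x$-gradient of $h(x) = \langle \partial_y g(x,y), z\rangle$, i.e. a single vector-Jacobian product. A minimal sketch of this identity, on a hypothetical quadratic $g$ (the matrices `A`, `B` are illustrative only), checked against finite differences:

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 3, 4
A = np.diag(rng.uniform(1.0, 5.0, size=m))
B = rng.standard_normal((m, d))

def grad_y_g(x, y):
    return A @ y - B @ x        # d_y g for the toy g(x, y) = 0.5 y'Ay - y'Bx

x, y, z = rng.standard_normal(d), rng.standard_normal(m), rng.standard_normal(m)

cross_term = -B.T @ z           # closed form: d_xy g(x, y) z = -B^T z here

# same quantity as the x-gradient of h(x) = <d_y g(x, y), z>, by central differences
eps = 1e-6
fd = np.array([(grad_y_g(x + eps * e, y) @ z - grad_y_g(x - eps * e, y) @ z) / (2 * eps)
               for e in np.eye(d)])
assert np.allclose(fd, cross_term, atol=1e-4)
```

In an autodiff framework, `fd` would be replaced by a single reverse-mode pass through $\partial_y g$, which is what makes the estimator $\hat{\psi}_k$ cheap to compute.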
Proposition 6. Under Assumptions 1 to 3, $\mathcal{L}$ , $\Psi$ , $y^{\star}$ and $z^{\star}$ satisfy:
The expressions of $L_y$ , $L_z$ , $L_\psi$ and $L$ suggest the following dependence on the conditioning $\kappa_g$ of the inner-level problem, which will be useful for the complexity analysis: $L_y = O(\kappa_g)$ , $L_\psi = O(\kappa_g^2)$ , $L_z = O(\kappa_g^2)$ and $L = O(\kappa_g^3)$ .
Step 2: Convergence of the inner-level iterates. In this step, we control the mean squared errors $\mathbb{E}\left[| y_k - y^\star (x_k)|^2\right]$ and $\mathbb{E}\left[| z_k - z^\star (x_k,y_k)|^2\right]$ as stated in Proposition 3. In fact we prove a slightly stronger version stated below:
Proposition 7. Let $\alpha_{k}$ and $\beta_{k}$ be two positive sequences with $\alpha_{k} \leq L_{g}^{-1}$ and $\beta_{k} \leq \frac{1}{2L_{g}}\min \left(1, \frac{2L_{g}}{\mu_{g}(1 + \mu_{g}^{-2}\sigma_{gy}^{2})}\right)$ and define $\Lambda_{k} := (1 - \alpha_{k}\mu_{g})^{T}$ and $\Pi_{k} := \left(1 - \frac{\beta_{k}\mu_{g}}{2}\right)^{N}$ . Denote by $\bar{z}_{k}$ the conditional expectation of $z_{k}$ knowing $x_{k}, y_{k}$ and $z_{k}^{0}$ . Let $R_{k}^{y}$ and $R_{k}^{z}$ be defined as:
It is easy to see from the above expressions that $R_{k}^{y} = O\big(\kappa_{g}\sigma_{g}^{2}\big)$ while $R_{k}^{z} = O\Big(\kappa_{g}^{3}\sigma_{g_{yy}}^{2} + \kappa_{g}^{2}\sigma_{f}^{2}\Big)$ , as stated in Proposition 3. Controlling $\mathbb{E}\left[| y_k - y^\star (x_k)|^2\right]$ follows from standard results on SGD (Kulunchakov and Mairal, 2020, Corollary 31) since the iterates of Algorithm 2 use i.i.d. samples. The error term $\mathbb{E}\Big[| z_k - z^\star (x_k,y_k)| ^2\Big]$ is more delicate since Algorithm 3 uses the same sample $\partial_y\hat{f} (x_k,y_k)$ for updating the iterates, therefore introducing additional correlations between them. We defer the proof of Proposition 7 to Appendix B.3; it relies on a general result for stochastic linear systems with correlated noise provided in Appendix E.
Step 3: Controlling the bias and variance errors $V_{k}^{\psi}$ and $E_{k}^{\psi}$ . This is achieved by Proposition 4. The bias $E_{k}^{\psi}$ is controlled simply by using the smoothness of the potential $\Psi$ near the point $(x_{k},y^{\star}(x_{k}),z^{\star}(x_{k},y^{\star}(x_{k})))$ , as shown in Proposition 6. The variance term $V_{k}^{\psi}$ is more delicate to control due to the multiplicative noise resulting from the Jacobian-vector product $\partial_{xy}\hat{g} (x_k,y_k,\mathcal{D}_{g_{xy}})z_k$ . We defer the proof to Appendix B.4 and provide below explicit expressions for the constants $\sigma_x^2$ and $w_{x}^{2}$ :
Note that $w_{x}^{2} = O\left(\kappa_{g}^{2}\left(\sigma_{f}^{2} + \sigma_{g_{xy}}^{2}\right) + \kappa_{g}^{3}\sigma_{g_{yy}}^{2}\right)$ and $\sigma_{x}^{2} = O\left(\sigma_{g_{xy}}^{2} + \kappa_{g}^{2}\sigma_{g_{yy}}^{2}\right)$ , as stated in Proposition 4.
Step 4: Outer-level error bound. This step consists in obtaining the inequality in Proposition 2, which extends the result of (Kulunchakov and Mairal, 2020, Proposition 1) to biased gradients and to the non-convex case. We defer the proof of this result to Appendix C.1.
Step 5: Inner-level error bound. This step consists in proving Proposition 5. For clarity, we provide a second statement with explicit expressions for the quantities of interest:
Proposition 8. Let $r_k$ and $\theta_k$ be two positive non-increasing sequences no greater than 1. For any $0 \leq v \leq 1$ , denote by $\phi_k$ and $\tilde{R}_k$ the following non-negative scalars:
We defer the proof of the above result to Appendix C.2.
Step 6: General error bound. By combining the inequalities in Propositions 2 and 8 resulting from the analysis of both outer and inner levels, we obtain a general error bound on $E_k^{tot}$ in Proposition 9 with a proof in Appendix C.4.
Proposition 9. Choose the step-sizes $\alpha_{k}$ and $\beta_{k}$ such that they are non-increasing in $k$ and choose $r_k$ and $\theta_{k}$ such that $\delta_{k}r_{k}^{-1}$ and $\delta_{k}\theta_{k}^{-1}$ are non-increasing sequences. Choose the coefficients $a_{k}$ and $b_{k}$ defining $E_{k}^{tot}$ in (7) to be of the form $a_{k} = \delta_{k}r_{k}^{-1}\Lambda_{k}^{s}$ and $b_{k} = \delta_{k}\theta_{k}^{-1}\Pi_{k}^{s}$ for some $0 < s < 1$ , and fix a non-increasing sequence $0 < \rho_{k} < 1$ . Then, under Assumptions 1 to 5, $E_{k}^{tot}$ satisfies:
where we introduced $u_{k}^{I} = a_{k}U_{k}^{(1)} + b_{k}U_{k}^{(2)}$ for conciseness with $U_{k}^{(1)}$ and $U_{k}^{(2)}$ being the components of the vector $\mathbf{U}_{\mathbf{k}}$ defined in Proposition 8.
Proposition 9 holds without conditions on the errors made by Algorithms 2 and 3. The general form of $a_{k}$ and $b_{k}$ makes it possible to account for potentially decreasing step-sizes $\gamma_{k}$ , $\alpha_{k}$ and $\beta_{k}$ . However, in the present work, we restrict ourselves to constant step-sizes for ease of presentation, as we discuss next.
Step 7: Controlling the precision of the inner-level algorithms. In this step, we provide conditions on $T$ and $N$ in Proposition 10 below so that $| A_k|_\infty \leq 1 - (1 - \rho_k)\delta_k$ in the constant step-size case. These conditions are expressed in terms of the following constants:
with $C_1, C_2, C_3, C_1', C_2'$ and $C_3'$ defined in (19a) to (19f). Then, $|A_k|_{\infty} \leq 1 - (1 - \rho_k)\delta_k$ and $V_k^{tot} \leq \gamma \delta_0 \mathcal{W}^2$ , with $\mathcal{W}^2$ given by:
We provide a proof of Proposition 10 in Appendix D. It is easy to see from Proposition 10 that $T = O(\kappa_g)$ and $N = O(\kappa_g)$ when $\alpha = \frac{1}{L_g}$ and $\beta = \frac{1}{2L_g}$ , where the big- $O$ notation hides a logarithmic dependence on $\kappa_g$ coming from the constants $\{C_i, C_i' \mid i \in \{1, 2, 3\}\}$ .
Step 8: Proving the main inequalities. The final step combines Propositions 9 and 10 to get the desired inequality. We provide a full proof in Appendix A.2 assuming Propositions 9 and 10 hold.
A.2 PROOF OF THEOREM 1
In order to prove Theorem 1 in the convex case, we need the following averaging lemma, a generalization of (Kulunchakov and Mairal, 2020, Lemma 30):
Lemma 1. Let $\mathcal{L}$ be a convex function on $\mathcal{X}$ . Let $x_{k}$ be a (potentially stochastic) sequence of iterates in $\mathcal{X}$ . Let $(E_k)_{k\geq 0}$ , $(V_{k})_{k\geq 0}$ and $(\delta_k)_{k\geq 0}$ be non-negative sequences such that $\delta_{k}\in (0,1)$ . Fix some non-negative number $u\in [0,1]$ and define the averaged iterates $\hat{x}_k$ recursively by $\hat{x}_k = u(1 - \delta_k)\hat{x}_{k - 1} + (1 - (1 - \delta_k)u)x_k$ , starting from any initial point $\hat{x}_0$ . Assume the iterates $(x_{k})_{k\geq 1}$ satisfy the following relation for all $k\geq 1$ :
Proof. For simplicity, we write $F_{k} = \mathbb{E}[\mathcal{L}(x_{k}) - \mathcal{L}^{\star}]$ and $\hat{F}_k = \mathbb{E}[\mathcal{L}(\hat{x}_k) - \mathcal{L}^\star ]$ . We first multiply (21) by $\Gamma_k^{-1}$ and sum the resulting inequalities for all $1\leq k\leq K$ to get:
Consider now the quantity $\hat{F}_k$ . Recalling that $\mathcal{L}$ is convex and by definition of the iterates $\hat{x}_k$ we apply Jensen's inequality to write:
Since $\mu > 0$ , $\mathcal{L}$ is a convex function and we can apply Lemma 1 with $V_{k} = \gamma \delta_{0}\mathcal{W}^{2}$ and $E_{k} = \frac{\delta_{k}}{2\gamma_{k}} | x_{k} - x^{\star} |^{2} + a_{k}E_{k}^{y} + b_{k}E_{k}^{z}$ . The result follows by noting that $\Gamma_{k}\sum_{t=1}^{k}\Gamma_{t}^{-1} \leq \delta_{0}^{-1}$ .
Case $\mu < 0$ . In this case, we recall that $F_{k}$ and $E_k^x$ are given by:
Using that $\mathbb{E}[\mathcal{L}(x_k) - \mathcal{L}^\star ] + E_k^{tot} - E_k^x$ is non-negative since $E_{k}^{tot} - E_{k}^{x} = a_{k}E_{k}^{y} + b_{k}E_{k}^{z}$ , we get:
Finally, since $\rho_t = \frac{1}{2}$ , $\delta_t = L\gamma$ , the result follows after dividing both sides by $\frac{1}{2} kL\gamma$ .
A.3 PROOF OF COROLLARIES 1 TO 4
Proof of Corollary 1. Choosing $u = 0$ implies that $E_k^x = \frac{\mu}{2} | x_k - x^\star |^2 + \mathcal{L}(x_k) - \mathcal{L}^\star \leq E_k^{tot}$ . We can then apply Theorem 1 for $\mu > 0$ which yields the following:
In the deterministic setting, all variances vanish: $\sigma_f^2 = \sigma_g^2 = \sigma_{g_{yy}}^2 = \sigma_{g_{xy}}^2 = 0$ . Hence, $\mathcal{W}^2 = 0$ by definition of $\mathcal{W}^2$ . Therefore, to achieve an error $\mathcal{L}(x_k) - \mathcal{L}^\star \leq \epsilon$ for some $\epsilon > 0$ , (24) suggests choosing $k = O\left(\kappa_{\mathcal{L}}\log \left(\frac{E_0^{tot}}{\epsilon}\right)\right)$ . Additionally, Theorem 1 requires $T = \Theta (\kappa_g)$ and $N = \Theta (\kappa_g)$ ; since $\sigma_{g_{yy}}^2 = 0$ , the latter reduces to $N = O(\kappa_g)$ . Using batches of size 1 yields the desired complexity.
Proof of Corollary 2. Here we choose $u = 1$ and apply Theorem 1 for $\mu > 0$ which yields:
Hence, to achieve an error $\mathbb{E}[\mathcal{L}(\hat{x}_k) - \mathcal{L}^\star ]\leq \epsilon$ , we need $k = O\Big(\kappa_{\mathcal{L}}\log \left(\frac{E_0^{tot} + \mathbb{E}[\mathcal{L}(x_0) - \mathcal{L}^\star]}{\epsilon}\right)\Big)$ to guarantee that the first term in the r.h.s. of the above inequality is $O(\epsilon)$ . Moreover, we recall that $L^{-1} = O(\kappa_g^{-3})$ from Proposition 6 and that Theorem 1 ensures the variance $\mathcal{W}$ satisfies:
Recall that $T = \Theta(\kappa_g)$ and $N = \Theta(\kappa_g)$ as required by Theorem 1, thus yielding the desired result.
Proof of Corollary 3. In the non-convex deterministic case, recall that $E_{k}^{x} = \frac{1}{L}| \nabla \mathcal{L}(x_{k})|^{2}\leq E_{k}^{tot}$ . We thus apply Theorem 1 for $\mu < 0$ and multiply by $L$ to get:
The setting being deterministic, it holds that $\mathcal{W}^2 = 0$ . Moreover, recall that $L = O(\kappa_g^3)$ from Proposition 6. Hence, to achieve an error of order $\min_{1\leq t\leq k}| \nabla \mathcal{L}(x_t)|^2\leq \epsilon$ , it suffices to choose $k = O\left(\frac{\kappa_g^3}{\epsilon} (\mathcal{L}(x_0) - \mathcal{L}^\star +(E_0^y +E_0^z))\right)$ , using batches of size 1 and $T$ and $N$ of order $\kappa_{g}$ .
Proof of Corollary 4. In the non-convex stochastic case, $E_{k}^{x} = \frac{1}{L}\mathbb{E}\Big[| \nabla \mathcal{L}(x_{k})|^{2}\Big]\leq E_{k}^{tot}$ . We thus apply Theorem 1 for $\mu < 0$ and multiply by $L$ to get:
To achieve an error of order $\epsilon$ , we need to ensure that each term in the r.h.s. of the above inequality is of order $\epsilon$ . For the first term, similarly to the deterministic setting of Corollary 3, we simply need $k = O\left(\frac{\kappa_g^3}{\epsilon} (\mathcal{L}(x_0) - \mathcal{L}^\star + (E_0^y + E_0^z))\right)$ . For the second term, we need $\mathcal{W}^2 = O(\epsilon)$ , which is achieved using the following choice for the batch sizes:
Finally, as required by Theorem 1, we set $T = \Theta(\kappa_g)$ and $N = \Theta(\kappa_g)$ thus yielding the desired complexity.
A.4 COMPARISONS WITH OTHER METHODS
In this subsection, we derive and discuss the complexities of the methods presented in Table 1.
A.4.1 COMPARISON WITH TTSA (HONG ET AL., 2020B)
Proposition 11. Strongly-convex case $\mu >0$ . The complexity of the TTSA algorithm in Hong et al. (2020b) to achieve an error $\frac{\mu}{2}\mathbb{E}\left[| x_k - x^\star | ^2\right]\leq \epsilon$ is given by:
Non-convex case $\mu < 0$ . The complexity of the TTSA algorithm in Hong et al. (2020b) to achieve an error $\frac{1}{k}\sum_{1\leq i\leq k}\mathbb{E}\Big[| \nabla \mathcal{L}(x_i)| ^2\Big]\leq \epsilon$ is given by:
By a simple calculation, it is easy to see that $\prod_{i=0}^{k}\left(1 - \frac{8}{3(i + k_{\alpha})}\right) \leq \frac{(k_{\alpha} - 1)^2}{(k - 1 + k_{\alpha})^2}$ . Moreover, using that $L_{\psi} = O(\mu_g^{-2})$ and $L_y = O(\mu_g^{-1})$ , we get that
Using that $L = O(\mu_g^{-3})$ , we get $\mu_g^{-6}\mu^{-\frac{1}{3}}(1 + \mu^{-2}) = O\left(\mu_g^{-5}\kappa_{\mathcal{L}}^{\frac{1}{3}} + \mu_g\kappa_{\mathcal{L}}^{\frac{7}{3}}\right)$ . Hence, to reach an error $\epsilon$ , we need to control both terms in the above inequality. This suggests the following condition on $k$ to control the second term which dominates the error:
Moreover, the result in (Hong et al., 2020b, Theorem 1) requires $N = \Theta \left(\kappa_{g}\log \frac{1}{\epsilon}\right)$ , where $N$ is the number of terms in the Neumann series used to approximate the hessian inverse $(\partial_{yy}g(x,y))^{-1}$ in the expression of the gradient $\nabla \mathcal{L}$ . Hence, the total complexity is given by the following expression:
Smooth Non-convex case $\mu < 0$ . Following Hong et al. (2020b), consider the proximal map of $\mathcal{L}$ for a fixed $\rho > 0$ :
$\hat{x}(z) := \arg\min_{x\in \mathcal{X}}\left\{\mathcal{L}(x) + \frac{\rho}{2}\| x - z\|^2\right\}$
and define the quantity $\tilde{\Delta}_x^k \coloneqq \mathbb{E}\left[| \hat{x}(x_k) - x_k|^2\right]$ , where $x_k$ are the iterates produced by the TTSA algorithm. Let $K$ be a random variable uniformly distributed on $\{0, \dots, K-1\}$ and independent of the remaining random variables used in the TTSA algorithm. The result in (Hong et al., 2020b, Theorem 2) provides the following error bound on $\tilde{\Delta}_x^k$
where $\rho$ is set to $2L$ and $\Delta^0\leq \max \left(\mathbb{E}[\mathcal{L}(x_0) - \mathcal{L}^\star ],\mathbb{E}\Big[||y_1 - y^\star (x_0)||^2\Big]\right)$ . Now, recall that by definition of the proximal map, we have the following identity:
Moreover, controlling the bias in the estimation of the gradient requires $N = O\left(\kappa_{g}\log \frac{1}{\epsilon}\right)$ terms in the Neumann series approximating the hessian. Hence, the total complexity of the algorithm is:
A.4.2 COMPARISON WITH ACCBIO (JI AND LIANG, 2021)
Complexity of AccBio. The bilevel algorithm AccBio introduced in Ji and Liang (2021) uses acceleration for both the inner and outer loops. This yields the following conditions on $k$ , $T$ and $N$ to achieve an $\epsilon$ accuracy:
Note that, since AccBio does not use a warm-start strategy when solving the linear system, $N$ is required to grow as $\log \frac{1}{\epsilon}$ in order to achieve an $\epsilon$ accuracy. This contributes an additional logarithmic factor to the total complexity, so that $\mathcal{C}(\epsilon) = O\left(\kappa_{\mathcal{L}}^{\frac{1}{2}}\kappa_{g}^{\frac{1}{2}}\left(\log \frac{1}{\epsilon}\right)^{2}\right)$ . This is in contrast with AmIGO, which exploits warm-start when solving the linear system and thus only needs a constant number of iterations $N = O(\kappa_g)$ , although the dependence on $\kappa_{g}$ is worse compared to AccBio. However, it is possible to improve this dependence by using acceleration in the inner-level algorithms $\mathcal{A}_k$ and $\mathcal{B}_k$ , as we discuss in Appendix A.5.1.
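The benefit of warm-starting the linear-system solver can be illustrated on a toy sequence of slowly drifting systems $Az = b_k$, mimicking the slowly moving right-hand side $-\partial_y f(x_k, y_k)$ along the outer iterations. The setup below (matrix, drift magnitude, tolerance) is hypothetical; it only illustrates that warm-starting reduces the iteration count of conjugate gradient at fixed accuracy:

```python
import numpy as np

def cg(A, rhs, z0, tol=1e-6, max_iter=500):
    """Standard conjugate gradient; returns the solution and the iteration count."""
    z = z0.copy()
    r = rhs - A @ z
    p = r.copy()
    rs = r @ r
    for it in range(max_iter):
        if np.sqrt(rs) < tol:
            return z, it
        Ap = A @ p
        a = rs / (p @ Ap)
        z += a * p
        r -= a * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return z, max_iter

rng = np.random.default_rng(2)
n = 200
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = Q @ np.diag(np.linspace(1.0, 100.0, n)) @ Q.T   # fixed SPD matrix, kappa = 100

b = rng.standard_normal(n)
z_warm = np.zeros(n)
cold_iters = warm_iters = 0
for k in range(20):
    b = b + 0.01 * rng.standard_normal(n)   # slowly drifting right-hand side
    _, it_cold = cg(A, b, np.zeros(n))      # cold start from z0 = 0
    z_warm, it_warm = cg(A, b, z_warm)      # warm start from the previous solution
    cold_iters += it_cold
    warm_iters += it_warm

assert warm_iters < cold_iters              # warm-starting saves CG iterations
```

The warm start shrinks the initial residual from $\|b_k\|$ to roughly $\|b_k - b_{k-1}\|$, which is what allows AmIGO to keep $N$ constant instead of growing as $\log\frac{1}{\epsilon}$.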
Complexity of AccBio as a function of $\mu$ and $\mu_{g}$ . The authors choose to report the complexity as a function of $\mu$ and $\mu_{g}$ instead of the conditioning numbers $\kappa_{\mathcal{L}}$ and $\kappa_{g}$ . To achieve this, they observe that, under the additional assumption that the hessian $\partial_{yy}g(x,y)$ is constant w.r.t. $y$ , the Lipschitz constant $L$ has an improved dependence on $\mu_{g}$ : $L = O(\mu_g^{-2})$ instead of $L = O(\mu_g^{-3})$ in the general case where $\partial_{yy}g(x,y)$ is only Lipschitz in $y$ . This allows them to express $\kappa_{\mathcal{L}} = \frac{L}{\mu} = O(\mu_g^{-2}\mu^{-1})$ and to report the following complexities in terms of $\mu$ and $\mu_{g}$ :
$\mathcal{C}(\epsilon) = O\left(\mu^{-\frac{1}{2}}\mu_g^{-\frac{3}{2}}\left(\log \frac{1}{\epsilon}\right)^2\right).$
Note that, in the general case where $L = O(\mu_g^{-3})$ , the complexity as a function of $\mu$ and $\mu_g$ becomes $O\left(\mu^{-\frac{1}{2}}\mu_g^{-2}\left(\log \frac{1}{\epsilon}\right)^2\right)$ , while still maintaining the same expression in terms of $\kappa_{\mathcal{L}}$ and $\kappa_g$ . Hence, using the expression in terms of conditioning allows a more general expression for the complexity that is less sensitive to the specific assumptions on $g$ and is therefore more suitable for comparison with other results in the literature.
A.5 CHOICE OF THE INNER-LEVEL ALGORITHMS $\mathcal{A}_k$ AND $\mathcal{B}_k$ .
The choice of $\mathcal{A}_k$ and $\mathcal{B}_k$ has an impact on the total complexity of the algorithm. We discuss two choices for $\mathcal{A}_k$ and $\mathcal{B}_k$ which improve the total complexity of AmIGO: Accelerated algorithms (in Appendix A.5.1) and variance reduced algorithms (in Appendix A.5.2).
A.5.1 ACCELERATION OF THE INNER-LEVEL FOR AMIGO
AmIGO could benefit from acceleration in the inner loop by using standard acceleration schemes (Nesterov, 2003) for $\mathcal{A}_k$ and $\mathcal{B}_k$ . As a consequence, and using the analysis of accelerated algorithms (Nesterov, 2003) in the deterministic setting, the error of the inner-level iterates would satisfy:
where $\tilde{\Lambda}_k$ and $\tilde{\Pi}_k$ are accelerated rates of the form $\tilde{\Lambda}_k = O((1 - \kappa_g^{-1/2})^T)$ and $\tilde{\Pi}_k = O((1 - \kappa_g^{-1/2})^N)$ . The rest of the proofs is similar provided that $\Lambda_k$ and $\Pi_k$ are replaced by their accelerated counterparts $\tilde{\Lambda}_k$ and $\tilde{\Pi}_k$ . This implies that $T$ and $N$ need only be of order $T = O(\sqrt{\kappa_g})$ and $N = O(\sqrt{\kappa_g})$ , so that the final complexity becomes:
$\mathcal{C}(\epsilon) := O\left(\kappa_{\mathcal{L}}\kappa_g^{1/2}\log \frac{1}{\epsilon}\right).$
Note that using conjugate gradient for $\mathcal{B}_k$ also enjoys an accelerated convergence rate (Shewchuk et al., 1994). This is confirmed in our experiments of Figure 1, where AmIGO-CG enjoys the fastest convergence.
In order to further improve the dependence on $\kappa_{\mathcal{L}}$ to $\kappa_{\mathcal{L}}^{1/2}$ , one would need to use an accelerated scheme when updating the iterates $x_{k}$ . The analysis of such schemes along with warm-start would be an interesting direction for future work.
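The accelerated inner-level rate can be observed numerically. The sketch below compares plain gradient descent with Nesterov's constant-momentum scheme for strongly convex problems on a hypothetical quadratic with $\kappa_g = 100$; all data is illustrative:

```python
import numpy as np

n = 100
H = np.diag(np.linspace(1.0, 100.0, n))   # mu = 1, L = 100, so kappa_g = 100
b = np.ones(n)
y_star = np.linalg.solve(H, b)
mu, L, T = 1.0, 100.0, 300

# plain gradient descent, step 1/L: contraction (1 - 1/kappa_g) per step
y = np.zeros(n)
for _ in range(T):
    y = y - (1.0 / L) * (H @ y - b)
err_gd = np.linalg.norm(y - y_star)

# Nesterov's scheme, momentum (sqrt(kappa)-1)/(sqrt(kappa)+1): rate (1 - 1/sqrt(kappa_g))
q = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
y, y_prev = np.zeros(n), np.zeros(n)
for _ in range(T):
    v = y + q * (y - y_prev)              # momentum extrapolation
    y_prev = y
    y = v - (1.0 / L) * (H @ v - b)
err_acc = np.linalg.norm(y - y_star)

assert err_acc < err_gd                   # acceleration reaches a much smaller error
```

After the same budget of $T$ iterations, the accelerated scheme is several orders of magnitude more accurate, consistent with replacing $\Lambda_k$ by $\tilde{\Lambda}_k$ in the analysis.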
A.5.2 VARIANCE REDUCED ALGORITHMS FOR $\mathcal{A}_k$ AND $\mathcal{B}_k$
When the inner-level cost function $g$ is a finite average of functions, $g(x,y) = \frac{1}{n}\sum_{1\leq i\leq n}g_i(x,y)$ , it is possible to use variance-reduced algorithms such as SAG (Schmidt et al., 2017). If every function $g_{i}$ is $L_{g}$ -smooth, then by (Schmidt et al., 2017, Proposition 1), the inner-level error becomes:
with $\tilde{\Lambda}_k = \left(1 - \frac{1}{8n\kappa_g}\right)^T$ . This has the advantage that the error due to the variance decays exponentially with the number of iterations $T$ . As a consequence, the dependence of the effective variance $\mathcal{W}^2$ on the conditioning numbers $\kappa_{\mathcal{L}}$ and $\kappa_g$ can be improved to:
This can be achieved by taking $T = O(n\kappa_g)$ up to a logarithmic dependence on the condition numbers. As a consequence, the complexity in the strongly convex stochastic setting becomes:
The downside of this approach is the dependence on the number $n$ of functions $g_{i}$ in the total complexity.
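A minimal SAG sketch for a finite-sum, strongly convex inner problem $g(y) = \frac{1}{n}\sum_i g_i(y)$ is given below; the problem data and constants are synthetic and illustrative, and the gradient-table scheme and $1/(16L)$ step-size follow Schmidt et al. (2017):

```python
import numpy as np

# Synthetic finite sum: g_i(y) = 0.5 (a_i'y - b_i)^2 + 0.5 lam ||y||^2
rng = np.random.default_rng(4)
n, m, lam = 50, 10, 1.0
A = rng.standard_normal((n, m))
b = rng.standard_normal(n)

def grad_i(i, y):
    return A[i] * (A[i] @ y - b[i]) + lam * y

L_max = np.max(np.sum(A * A, axis=1)) + lam    # smoothness constant of the g_i
step = 1.0 / (16.0 * L_max)

y = np.zeros(m)
table = np.zeros((n, m))        # memory of the last gradient seen for each g_i
avg = np.zeros(m)
for t in range(20000):
    i = rng.integers(n)
    gi = grad_i(i, y)
    avg += (gi - table[i]) / n  # maintain the running average of stored gradients
    table[i] = gi
    y = y - step * avg          # SAG step: the gradient-estimate variance decays to zero
```

Unlike plain SGD with a constant step-size, the iterates converge to the exact minimizer because the variance of the averaged gradient estimate vanishes, which is what removes the variance floor from $\mathcal{W}^2$.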
B PRELIMINARY RESULTS
B.1 EXPRESSION OF THE GRADIENT
We provide a proof of Proposition 1 which shows that $\mathcal{L}$ is differentiable and provides an expression of its gradient.
Proof. Assumption 1 ensures that $y \mapsto g(x, y)$ admits a unique minimizer $y^{\star}(x)$ defined as the unique solution to the implicit equation $\partial_y g(x, y^{\star}(x)) = 0$ . Moreover, since $g$ is twice continuously differentiable and strongly convex, it follows that $\partial_{yy}g(x,y^{\star}(x))$ is invertible for any $x \in \mathcal{X}$ . Therefore the implicit function theorem (Lang, 2012, Theorem 5.9), ensures that $x \mapsto y^{\star}(x)$ is continuously differentiable with Jacobian given by $\nabla y^{\star}(x) = -\partial_{xy}g(x,y^{\star}(x))\partial_{yy}g(x,y^{\star}(x))^{-1}$ . Hence, by composition of differentiable functions, $\mathcal{L}$ is also differentiable with gradient given by:
We can thus define $z^{\star}(x,y) = -\partial_{yy}g(x,y)^{-1}\partial_{y}f(x,y)$ to get the desired expression for $\nabla \mathcal{L}(x)$ and note that $z^{\star}$ is the solution to (2).
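This expression can be verified numerically. The sketch below instantiates a hypothetical quadratic inner problem (so that $y^\star$ is available in closed form) with a non-quadratic outer cost, and compares the implicit gradient of Proposition 1 with finite differences of $x \mapsto f(x, y^\star(x))$:

```python
import numpy as np

rng = np.random.default_rng(5)
d, m = 3, 4
A = np.diag(rng.uniform(1.0, 5.0, size=m))      # d_yy g, strongly convex inner level
B = rng.standard_normal((m, d))

# g(x, y) = 0.5 y'Ay - y'Bx  =>  y*(x) = A^{-1} B x ; f(x, y) = sum(cos(y)) + 0.5 ||x||^2
y_star = lambda x: np.linalg.solve(A, B @ x)
L_fun = lambda x: np.sum(np.cos(y_star(x))) + 0.5 * x @ x

x = rng.standard_normal(d)
ys = y_star(x)
z = np.linalg.solve(A, np.sin(ys))              # z* = -(d_yy g)^{-1} d_y f, with d_y f = -sin(y)
grad_L = x - B.T @ z                            # d_x f + d_xy g z*, with d_xy g = -B^T

# finite-difference check of grad L(x)
eps = 1e-6
fd = np.array([(L_fun(x + eps * e) - L_fun(x - eps * e)) / (2 * eps) for e in np.eye(d)])
assert np.allclose(grad_L, fd, atol=1e-5)
```

Note that the linear system defining $z^\star$ is solved only once per gradient evaluation, which is the quantity that AmIGO approximates and warm-starts.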
B.2 SMOOTHNESS PROPERTIES OF $\mathcal{L}$ , $y^{\star}$ , $z^{\star}$ AND $\Psi$
Proof of Proposition 6. Lipschitz continuity of $x \mapsto y^{\star}(x)$ . By Assumptions 1 and 3, the implicit function theorem (Lang, 2012, Theorem 5.9) ensures $y^{\star}(x)$ is differentiable with Jacobian given by:
Moreover, by Assumption 3, we know that $\partial_y g(x,y)$ is $L_{g}^{\prime}$ -Lipschitz in $x$ for any $y\in \mathcal{Y}$ ; hence, $| \partial_{xy}g(x,y^{\star}(x))|_{op}$ is upper-bounded by $L_{g}^{\prime}$ . Moreover, by Assumption 1, $g$ is $\mu_g$ -strongly convex in $y$ uniformly on $\mathcal{X}$ . Therefore, it holds that $\left| \partial_{yy}g(x,y^{\star}(x))^{-1}\right|_{op}\leq \mu_g^{-1}$ . This allows us to deduce that $| \nabla y^{\star}(x)|_{op}\leq \mu_g^{-1}L_g'$ , and by application of the fundamental theorem of calculus that:
This shows that $y^{\star}$ is $L_{y}$ -Lipschitz continuous with $L_{y} := \mu_{g}^{-1} L_{g}'$ .
Lipschitz continuity of $x \mapsto z^{\star}(x, y)$ . Let $(x, y)$ and $(x', y')$ be two points in $\mathcal{X} \times \mathcal{Y}$ . Recalling the definition of $z^{\star}(x, y)$ in Proposition 1, it is easy to see that $z^{\star}(x, y)$ admits the following expression:
$z^{\star}(x,y) = -\partial_{yy}g(x,y)^{-1}\partial_{y}f(x,y). \qquad (25)$
Recalling the expression of $z^{\star}(x,y)$ , the following holds:
where we introduced $H_{1} = \partial_{yy}g(x,y)$ and $H_{2} = \partial_{yy}g(x^{\prime},y^{\prime})$ for conciseness. Using Assumption 1, we can upper-bound $\left| H_1^{-1}\right|_{op}$ and $\left| H_2^{-1}\right|_{op}$ by $\mu_g^{-1}$ . By Assumption 3, we know that $| H_1 - H_2|_{op}\leq M_g| (x,y) - (x',y')|$ . Finally, by Assumption 2, we also have that $| \partial_yf(x',y') - \partial_yf(x,y)| \leq L_f| (x,y) - (x',y')|$ and that $| \partial_yf(x',y')| \leq B$ , ensuring that:
Hence, we conclude that $z^{\star}$ is $L_{z}$ -Lipschitz continuous with $L_{z}$ defined as in (13).
Boundedness of $z^{\star}(x,y)$ . Recalling the expression of $z^{\star}$ in (25), it is easy to see that $| z^{\star}(x,y)|$ is upper-bounded by $\mu_g^{-1}B$ since $g(x,y)$ is $\mu_g$ -strongly convex in $y$ by Assumption 1, so that $| \partial_{yy}g(x,y)^{-1}|_{op}\leq \mu_g^{-1}$ , and $\partial_yf(x,y)$ is bounded by $B$ by Assumption 2.
To get the first term of the last inequality above, we used that $\partial_x f$ is $L_{f}$ -Lipschitz by Assumption 2. To get the second term, we used that $\partial_{xy}g(x,y)$ is bounded since $\partial_y g(x,y)$ is $L_{g}^{\prime}$ -Lipschitz by Assumption 3. Finally, for the last term, we used that $\partial_{xy}g(x,y)$ is $M_g$ -Lipschitz by Assumption 3.
By choosing $x' = x$ , $y' = y^{\star}(x)$ and $z' = z^{\star}(x, y^{\star}(x))$ , it is easy to see from Proposition 1 that $\Psi(x, y^{\star}(x), z^{\star}(x, y^{\star}(x))) = \nabla \mathcal{L}(x)$ . Hence, applying the above inequality yields:
As shown earlier, $| z^{\star}(x,y^{\star}(x))|$ is upper-bounded by $\mu_g^{-1}B$ , while $| z^{\star}(x,y) - z^{\star}(x,y^{\star}(x))|$ is bounded by $L_{z}| y - y^{\star}(x)|$ . This allows us to conclude that $| \Psi (x,y,z) - \nabla \mathcal{L}(x)| \leq L_{\psi}$ with $L_{\psi}$ defined in (13).
Lipschitz continuity of $x \mapsto \nabla \mathcal{L}(x)$ . We apply (26) with $(y,z) = (y^{\star}(x),z^{\star}(x,y^{\star}(x)))$ and $(y',z') = (y^{\star}(x'),z^{\star}(x',y^{\star}(x')))$ , which yields:
where we used that $| z^{\star}(x',y^{\star}(x'))|$ is upper-bounded by $\mu_g^{-1}B$ , that $z^{\star}$ is $L_{z}$ -Lipschitz and that $y^{\star}$ is $L_{y}$ -Lipschitz. Hence, $\nabla \mathcal{L}$ is $L$ -Lipschitz continuous, with $L$ as given by (13).
B.3 CONVERGENCE OF THE ITERATES OF ALGORITHMS $\mathcal{A}_k$ AND $\mathcal{B}_k$
Proof. Controlling the iterates $y^{t}$ of $\mathcal{A}_k$ .
Consider a new batch $\mathcal{D}_g$ of samples $\xi$ . We have by definition of the update equation of $y^t$ that:
The first line uses that $\partial_y\hat{g} (x_k,y^{t - 1},\mathcal{D}_g)$ is an unbiased estimator of $\partial_yg(x_k,y^{t - 1})$ . For the second line, we use Assumption 4, which allows us to upper-bound the variance of $\partial_y\hat{g}$ by $\sigma_g^2$ . Moreover, since $g$ is convex and $L_{g}$ -smooth and since $\alpha_{k}\leq L_{g}^{-1}$ , the last term in the above inequality is non-positive and can thus be upper-bounded by 0. By unrolling the resulting inequality recursively for $1 \leq t\leq T$ , we obtain the desired result.
Controlling the iterates $z^n$ of $\mathcal{B}_k$ . The proof follows by direct application of Proposition 15 with $\beta = \beta_k$ and the following choices for $A_n, A, \hat{b}, b$ :
First, we have that $\tilde{\Pi}_k\leq \Pi_k$ . Moreover, by Proposition 6, we have that $| z^{\star}(x_k,y_k)| \leq B\mu_g^{-1}$ , hence $\tilde{R}_k^z\leq R_k^z$ , thus yielding the desired inequalities. Finally, (15a) also follows similarly using (45) from Proposition 15.
B.4 CONTROLLING THE BIAS AND VARIANCE $E_k^\psi$ AND $V_k^\psi$
Proof of Proposition 4. Recall that the expressions of $E_k^\psi$ and $V_k^\psi$ in (6) involve the conditional expectation $\mathbb{E}_k[\hat{\psi}_k]$ knowing $x_k, y_k$ and $z_{k-1}$ . This can also be expressed using $\Psi$ as follows:
where we used the tower property of conditional expectations in the first line, then the fact that the expectation of $\hat{\psi}_{k}$ conditionally on $x_{k},y_{k}$ and $z_{k}$ is simply $\Psi (x_k,y_k,z_k)$ . Finally, for the last line, we use the independence of the noise and the linearity of $\Psi$ w.r.t. its last variable. In what follows, we write $\bar{z}_k = \mathbb{E}[z_k|x_k,y_k,z_{k - 1}]$ , which is the same object as defined in Proposition 7. We then treat $E_{k}^{\psi}$ and $V_{k}^{\psi}$ separately.
Bounding $E_k^{\psi}$ . Using Propositions 6 and 7 we directly get the desired inequality:
where we used that $\tilde{\xi}_{N + 1,k}$ is independent from $z_{k}$ and $\mathcal{D}_f$ to get the last term. Hence, using Assumption 4 to bound the first term of the above relation, we get $V_{k}^{\psi} \leq \sigma_{f}^{2} + W_{k}^{\prime} + 2W_{k}^{\prime \prime}$ . Thus, it remains to control each of $W_{k}^{\prime}$ and $W_{k}^{\prime \prime}$ .
Bound on $W_{k}^{\prime \prime}$ . Using that $\mathcal{D}_f$ is independent from $\tilde{\xi}_{n,k}$ , we can apply Proposition 14 to write:
where we used the simplifying notation $(\hat{f} - f)(x_k, y_k, \mathcal{D}_f) = \hat{f}(x_k, y_k, \mathcal{D}_f) - f(x_k, y_k)$ . Using Assumption 3 to bound $| \partial_{xy} g(x_k, y_k) |_{op}$ by $L_g'$ and Assumption 1 to upper-bound $\left| \sum_{t=1}^{N} (I - \beta_k A)^{N-t} \right|_{op}$ by $\sum_{t=1}^{N} (1 - \beta_k \mu_g)^{N-t}$ , we get
where we used that $\sum_{t=0}^{N-1}(1 - \beta_k\mu_g)^t \leq \frac{1}{\beta_k\mu_g}$ for the second line, the Cauchy-Schwarz inequality to get the third line and Assumption 4 to get the last line.
Bound on $W_{k}^{\prime}$ . Using that $\tilde{\xi}_{N + 1,k}$ is independent from $z_{k}$ , we write:
(i) follows from Assumptions 3 and 5, (ii) uses that $| z_k|^2 \leq 2\left(| z_k - z|^2 + | z|^2\right)$ , (iii) uses that $| z^{\star}(x_k,y_k)| \leq B\mu_g^{-1}$ by Proposition 6. Finally (iv) follows by application of Proposition 7. We further have by definition of $R_k^z$ that:
Combining the inequalities on $W_{k}^{\prime}, W_{k}^{\prime \prime}$ and (29), we get that $V_{k}^{\psi} \leq w_{x}^{2} + \sigma_{x}^{2}\Pi_{k}E_{k}^{z}$ , with $w_{x}^{2}$ and $\sigma_x^2$ given by (16).
C GENERAL ANALYSIS OF AMIGO
C.1 ANALYSIS OF THE OUTER-LOOP
Proof of Proposition 2. We treat both cases $\mu \geq 0$ and $\mu < 0$ separately. For simplicity, we denote by $\mathbb{E}_k$ the conditional expectation knowing the iterates $x_{k},y_{k}$ and $z_{k - 1}$ and write $\psi_{k} = \mathbb{E}_{k}\Big[\hat{\psi}_{k}\Big]$ .
Case $\mu \geq 0$ . Recall that $E_k^x$ is given by:
For simplicity define $\epsilon_{k} = u\delta_{k} + (1 - u)$ , $e_k = (1 - u)(\mathcal{L}(x_k) - \mathcal{L}^\star) + \frac{\eta_k}{2}| x_k - x^\star |^2$ and $e_k' = u\delta_k(\mathcal{L}(x_k) - \mathcal{L}^\star)$ . It is then easy to see that $\mathbb{E}[e_k]$ is equal to the l.h.s. of (10), i.e. $\mathbb{E}[e_k] = E_k^x$ . We
will start by bounding the difference between two successive iterates of $e_k$ :
(i) follows from the update expression $x_{k} = x_{k - 1} - \gamma_{k}\hat{\psi}_{k - 1}$ , (ii) follows from the convexity of $\mathcal{L}$ and (iii) follows by $L$ -smoothness of $\mathcal{L}$ . Taking the expectation conditionally on the randomness at iteration $k - 1$ and using that $\mathbb{E}_{k - 1}\left[\hat{\psi}_{k - 1}\right] = \psi_{k - 1}$ , we therefore get
where (i) follows from $\delta_k \leq \epsilon_k$ , since by construction $\delta_k \leq 1$ . Taking the expectation w.r.t. all the randomness and applying the Cauchy-Schwarz inequality to the last term yields the following inequality:
Since $\mathcal{L}$ is convex, we have the inequality: $| x_{k - 1} - \gamma_k\nabla \mathcal{L}(x_{k - 1}) - x^\star | ^2\leq | x_{k - 1} - x^\star | ^2$ . Hence, we can deduce that:
Case $\mu < 0$ . Recall that for $\mu < 0$ , we set $E_k^x = \frac{1}{L} \mathbb{E}\left[|\nabla \mathcal{L}(x_k)|^2\right]$ . Using that $\mathcal{L}$ is $L$ -smooth, we have that:
where we used that $1 - \frac{L\gamma_k}{2} \geq \frac{1}{2}$ and $0 \leq 1 - L\gamma_k \leq 1$ to get the last inequality. Using the definition of $F_{k}$ yields an inequality of the form:
Hence, in both cases $\mu \geq 0$ and $\mu < 0$ , we get an inequality of the same form but with different expressions for $F_{k}$ and $s_k$ . We get the desired result using Young's inequality to upper-bound the last term on the r.h.s. of the above inequality. More precisely, we use that for any $0 < \rho_{k} < 1$ :
In this section we prove Proposition 5 which controls the evolutions of the warm-start errors $E_{k}^{y}$ and $E_{k}^{z}$ . As a first step, in Proposition 12, we provide a result controlling the mean squared error between two successive iterates $x_{k-1}, x_{k}$ and $y_{k-1}, y_{k}$ which will be used in the proof of Proposition 5.
Proposition 12 (Control of the increments of $x_{k}$ and $y_{k}$ ). Consider $\zeta_{k}, \phi_{k}$ and $\tilde{R}_{k}^{y}$ as defined in Proposition 8 for some fixed $0 \leq v \leq 1$ . Then, the following holds:
Proof of Proposition 12. We prove each inequality separately.
Increments of $x_{k}$ . By the update equation, we have that $x_{k} = x_{k - 1} - \gamma_{k}\hat{\psi}_{k - 1}$ , hence we only need to control $\mathbb{E}\left[\left| \hat{\psi}_{k - 1}\right| ^2\right]$ . We have the following:
In the case $(\mu < 0)$ , we have $E_{k - 1}^{x} = \frac{1}{2L}\mathbb{E}\Big[| \nabla \mathcal{L}(x_{k - 1})|^{2}\Big]$ , hence by setting $\zeta_k\coloneqq 2L$ , we get the desired inequality. In the convex case $(\mu \geq 0)$ , since $\mathcal{L}$ is $L$ -smooth, we have that:
provided that $u < 1$ . We also have that $(\mathcal{L}(x_{k - 1}) - \mathcal{L}^{\star}) \leq \frac{L}{2}| x_{k - 1} - x^{\star}|^{2} \leq L\eta_{k - 1}^{-1}E_{k - 1}^{x}$ which yields $|\nabla \mathcal{L}(x_{k - 1})|^2 \leq 2L^2\eta_{k - 1}^{-1}E_{k - 1}^{x}$ . Hence, we can set $\zeta_k = 2L\min \left((1 - u)^{-1}, L\eta_{k - 1}^{-1}\right)$ .
Increments of $y_{k}$ . Denoting by $\mathcal{D}_g^t$ the batch of samples at iteration $t$ of algorithm $\mathcal{A}_k$ and using the update equation of $y^{t}$ , we get the following inequality by application of the triangle inequality:
where we applied Proposition 7 for every $0 < t \leq T - 1$ to get the second line with $\Lambda_{t,k} := (1 - \alpha_k\mu_g)^t$ . This directly implies the following bound:
(i) follows by Young's inequality, (ii) uses Proposition 7 to bound the first term and that $(1 + r_k^{-1})\leq$ $2r_{k}^{-1}$ for the second term, (iii) uses that $y^{\star}$ is $L_{y}$ -Lipschitz by Proposition 6 and (iv) uses the update equation $x_{k} = x_{k - 1} - \gamma_{k}\hat{\psi}_{k - 1}$ .
Upper-bound on $E_k^z$ . Similarly, for a non-increasing sequence $0 < \theta_k \leq 1$ , we have that:
(i) follows by Young's inequality, (ii) uses Proposition 7 to bound the first term and that $(1 + \theta_k^{-1})\leq 2\theta_k^{-1}$ for the second term, (iii) uses that $z^{\star}(x,y)$ is $L_{z}$ -Lipschitz in $x$ and $y$ by Proposition 6 and, finally, (iv) uses the update equation $x_{k} = x_{k - 1} - \gamma_{k}\hat{\psi}_{k - 1}$ for the term $\mathbb{E}\Big[| x_k - x_{k - 1}| ^2\Big]$ and Proposition 12 to control the increments $\mathbb{E}\big[| y_k - y_{k - 1}| ^2\big]$ .
In order to express the upper-bound on $E_k^z$ in terms of $E_{k-1}^y$ instead of $E_k^y$ , we substitute $E_k^y$ in (34a) by its upper-bound in (33a) and use that $(1 + r_k) \leq 2$ to write:
where $P_{k}$ is a $2\times 2$ matrix and $\pmb{U}_{k}$ and $\pmb{V}_{k}$ are 2-dimensional vectors given by (17). The desired result follows directly by substituting $\mathbb{E}\left[\left| \hat{\psi}_{k - 1}\right| ^2\right]$ by its upper-bound from Proposition 12 in the above inequality.
C.4 GENERAL ERROR BOUND
Proof of Proposition 9. First note that, by assumption, we have that $\delta_k r_k^{-1} \leq \delta_{k-1} r_{k-1}^{-1}$ and $\delta_k \theta_k^{-1} \leq \delta_{k-1} \theta_{k-1}^{-1}$ . Moreover, since $\alpha_k$ and $\beta_k$ are non-increasing, we also have that $\Lambda_{k-1} \leq \Lambda_k$ and $\Pi_{k-1} \leq \Pi_k$ . This implies the following inequalities which will be used in the rest of the proof:
where we used (36) a second time to replace $\Lambda_{k - 1}(a_{k - 1})^{-1}$ and $\Pi_{k - 1}(b_{k - 1})^{-1}$ by $\Lambda_k(a_k)^{-1}$ and $\Pi_k(b_k)^{-1}$ . By summing both inequalities (37) and (40) and substituting all terms $E_{k - 1}^{\psi}$ and $V_{k - 1}^{\psi}$ by their upper-bounds we obtain an inequality of the form:
where $A_{k}^{x}, A_{k}^{y}, A_{k}^{z}$ are the components of the vector $A_{k}$ defined in (18) and $V_{k}^{tot}$ is the variance term also defined in (18). The desired inequality follows by upper-bounding $A_{k}^{x}, A_{k}^{y}, A_{k}^{z}$ by their maximum value $| A_{k}|_{\infty}$ .
D CONTROLLING THE PRECISION OF THE INNER-LEVEL ALGORITHMS.
In this section, we prove Proposition 10. To achieve this, we first provide general conditions on $\Lambda_{k}$ and $\Pi_{k}$ for controlling the rate $| A_k|_{\infty}$ and which hold regardless of the choice of step-sizes. This is achieved in Proposition 13 of Appendix D.1. Then we prove Proposition 10 in Appendix D.2.
A direct calculation shows $u_{k}^{+} \leq 1$ whenever (42a) holds. Moreover, recall that $u_{k}^{I} = a_{k}U_{k}^{(1)} + b_{k}U_{k}^{(2)}$ with $U_{k}^{(1)}$ and $U_{k}^{(2)}$ being the components of the vector $U_{k}$ defined in (17). Thus by direct substitution, we get the following expression for $u_{k}^{I}$ :
where we use $16L_{z}^{2}\phi_{k}\theta_{k}^{-2}\Pi_{k}^{s}\leq 1$ and $u_{k}^{I}\leq 1$ for the first line and $u_{k}^{I}\leq u_{k}^{+}$ for the last line.
D.2 CONTROLLING THE NUMBER OF INNER-LEVEL ITERATIONS
We provide now a proof of Proposition 10 which is a consequence of Proposition 13.
Proof of Proposition 10. We first provide conditions on the number of iterations $T$ and $N$ of algorithms $\mathcal{A}_k$ and $\mathcal{B}_k$ to control the rate $| A_k|_{\infty}$ and then provide an upper-bound on $V_k^{tot}$ .
Conditions on $T$ and $N$ . We consider the setting with constant step-size $\gamma_{k} = \gamma$ , $\alpha_{k} = \alpha$ and $\beta_{k} = \beta$ and choose $r_k = \theta_k = 1$ and $\delta_k = \delta_0$ for some $0 < \delta_0 < 1$ . We also take $v = 1$ so that $\phi_k = 2$ and $\tilde{R}_k^y = R_k^y$ . By direct substitution of the parameters $r_k$ , $\theta_k$ , $\phi_k$ , $\gamma_k$ , $\delta_k$ , $\rho_k$ and $\zeta_k$ , in the expressions of $D_k^{(1)}$ , $D_k^{(2)}$ , $D_k^{(3)}$ , $D_k^{(4)}$ , $D_k^{(5)}$ and $D_k^{(6)}$ defined in (41a) to (41f), we verify that:
Hence, for such a choice, we are guaranteed by Proposition 13 that $| A_k|_{\infty}\leq 1 - (1 - \rho_k)\delta_k$ .
Bound on the variance $V_{k}^{tot}$ . By choosing $T$ and $N$ as in (43), we know that $\Lambda_{k}$ and $\Pi_{k}$ satisfy (42), so that the variance term $V_{k}^{tot}$ is upper-bounded by $\tilde{V}_{k}^{tot}$ . Moreover, by direct substitution of the sequences appearing in the expression of $\tilde{V}_{k}^{tot}$ by their values, we get:
Moreover, recall that $\tilde{R}_k^y = R_k^y$ since we chose $v = 1$ . Thus $\tilde{R}_k^y \leq 2\mu_g^{-1}\alpha \sigma_g^2$ . This implies that:
$$V_k^{tot}\delta_0^{-1}\gamma^{-1} \leq \left(\delta_0^{-1}\frac{1 - u}{2} + \left(1 + 4L_y^2\Lambda_k^s\gamma\right)\right)w_x^2 + 2\Pi_k^s\gamma^{-1}\mu_g^{-3}\left(B^2\beta\sigma_{g_{yy}}^2 + 3\mu_g\sigma_f^2\right) + \left(4\Lambda_k^s + 80L_z^2\Pi_k^s + 20L_\psi^2\eta_0^{-1}\right)\mu_g^{-1}\alpha\gamma^{-1}\sigma_g^2 \qquad (44)$$
By choosing $T$ and $N$ as in (43), the following conditions hold:
where we used that $2\mu_{g}^{-3}\Bigl (\sigma_{g_{xy}}^{2} + (L_{g}^{\prime})^{2}\Bigr)\Bigl (B^{2}\beta \sigma_{g_{yy}}^{2} + 3\mu_{g}\sigma_{f}^{2}\Bigr)\leq w_{x}^{2}$ by definition of $w_{x}^{2}$ in (16a). Therefore, we have shown that $V_{k}^{tot}\leq \gamma \delta_{0}\mathcal{W}^{2}$ , with $\mathcal{W}^2$ given by (20).
E STOCHASTIC LINEAR DYNAMICAL SYSTEM WITH CORRELATED NOISE
Let $A$ be a positive definite matrix in $\mathbb{R}^{d\times d}$ satisfying $0 < \mu_{g}\leq \sigma_{i}(A)\leq L_{g}$ and $b$ a vector in $\mathbb{R}^d$ . We denote $z^{\star} = -A^{-1}b$ . Let $A_{n}$ be a sequence of i.i.d. symmetric positive matrices in $\mathbb{R}^{d\times d}$ such that $\mathbb{E}[A_n] = A$ , and $\hat{b}$ a random vector in $\mathbb{R}^d$ such that $\mathbb{E}\big[\hat{b}\big] = b$ , with $A_{n}$ and $\hat{b}$ being mutually independent. Define $\Sigma_A = \mathbb{E}\Big[(A_n - A)^\top (A_n - A)\Big]$ and denote by $\sigma_{A}$ and $L_{A}$ the largest singular values of $\Sigma_{A}$ and $A^{-1}\Sigma_{A}A^{-1}$ , respectively. Let $\beta$ be such that $\beta \leq \frac{1}{L_g}$ . Finally, let $\sigma_c^2$ be an upper-bound on $\mathbb{E}\left[\left| \hat{b} -b\right| ^2\right]$ . Let $z$ and $z^{\prime}$ be two vectors in $\mathbb{R}^d$ and define the iterates $z^n$ and $\bar{z}^n$ such that $z^0 = z$ and $\bar{z}^0 = z'$ , using the recursion:
where $\Sigma_A = \mathbb{E}\left[(A_n - A)^\top (A_n - A)\right]$ and $D_{n} = \sum_{t = 0}^{n - 1}(I - \beta A)^{t}$ . By simple calculation we can upper-bound the last term by:
Moreover, provided that $\beta \leq \frac{1}{L_g(1 + L_A)}$ , where $L_{A}$ is the highest eigenvalue of $A^{-1}\Sigma_{A}A^{-1}$ , then we have the following:
Moreover, by Lemma 3 we know that $n\beta^2\mu_g^2 (1 - \beta \mu_g)^{n - 1}\leq (1 - \frac{\beta\mu_g}{2})^{n - 1}$ and since $\beta \mu_{g}\leq 1$ , we have that $(1 - \frac{\beta\mu_g}{2})^{-1}\leq 2$ so that $(1 - \frac{\beta\mu_g}{2})^{n - 1}\leq 2(1 - \frac{\beta\mu_g}{2})^n$ . Hence, we can write:
Lemma 2. Let $A$ and $\Sigma_A$ be symmetric positive matrices in $\mathbb{R}^{d\times d}$ , with $\sigma_A^2$ the largest singular value of $\Sigma_{A}$ and $0 < \mu_{g}\leq \sigma_{i}(A)\leq L_{g}$ . Let $\beta$ be a positive number such that:
Proof. First note that $\beta \leq \frac{1}{L_g}$ , so that $I - \beta A$ is positive. Now, we observe that $\left| (I - \beta A)^2 + \beta^2 \Sigma_A \right|_{op} \leq (1 - \beta \mu_g)^2 + \beta^2 \sigma_A^2$ which holds since $I - \beta A$ is positive. And since $\beta \leq \frac{\mu_g}{\mu_g^2 + \sigma_A^2}$ , we further have $(1 - \beta \mu_g)^2 + \beta^2 \sigma_A^2 \leq 1 - \beta \mu_g$ , which yields the desired result.
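The scalar inequality at the heart of this proof, $(1 - \beta\mu_g)^2 + \beta^2\sigma_A^2 \leq 1 - \beta\mu_g$ whenever $\beta \leq \frac{\mu_g}{\mu_g^2 + \sigma_A^2}$, can also be sanity-checked numerically; the sketch below samples arbitrary (illustrative) ranges for $\mu_g$ and $\sigma_A^2$:

```python
import numpy as np

def lemma2_scalar_holds(trials=1000, seed=1):
    """Check that beta <= mu / (mu^2 + sigma_A^2) implies
    (1 - beta*mu)^2 + beta^2*sigma_A^2 <= 1 - beta*mu."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        mu = rng.uniform(0.1, 1.0)
        sigma2 = rng.uniform(0.0, 5.0)                 # plays the role of sigma_A^2
        beta = rng.uniform(0.0, mu / (mu**2 + sigma2))
        if (1 - beta * mu)**2 + beta**2 * sigma2 > 1 - beta * mu + 1e-12:
            return False
    return True
```

The check passes since $(1-\beta\mu)^2 + \beta^2\sigma^2 - (1-\beta\mu) = \beta\big(\beta(\mu^2+\sigma^2) - \mu\big) \leq 0$ under the step-size condition.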
Lemma 3. Let $0 \leq b < 1$ and $n \geq 1$ , then the following inequality holds:
$$n b^{2}(1 - b)^{n - 1} \leq \left(1 - \frac{b}{2}\right)^{n - 1}.$$
Proof. We consider the function $h(n, b)$ defined by:
$$h(n,b) := (n - 1)\log\left(\frac{1 - \frac{b}{2}}{1 - b}\right) - \log\left(n b^{2}\right).$$
We need to show that $h(n, b)$ is non-negative for any $n \geq 1$ and $0 \leq b < 1$ . For this purpose, we fix $b$ and consider the variations of $h(n, b)$ in $n$ :
$$\partial_{n} h(n,b) = \log\left(\frac{1 - \frac{b}{2}}{1 - b}\right) - \frac{1}{n}.$$
$\partial_{n}h(n,b)$ is non-negative for $n \geq n^{\star} = \log \left(\frac{1 - \frac{b}{2}}{1 - b}\right)^{-1}$ and non-positive for all $n \leq n^{\star}$ . Hence, $h(n,b)$ achieves its minimum value at $n^{\star}$ over $(0, +\infty)$ . We distinguish two cases depending on whether $n^{\star}$ is greater or smaller than 1.
Case $n^{\star} \leq 1$ . In this case $n \mapsto h(n, b)$ is increasing on the interval $[1, +\infty)$ since $\partial_n h(n, b) \geq 0$ for $n \geq n^{\star}$ . Hence, $h(n, b) \geq h(1, b)$ for all $n \geq 1$ . Moreover, since $h(1, b) = -\log (b^{2}) \geq 0$ the result follows directly.
Case $n^{\star} > 1$ . In this case we still have $h(n,b) \geq h(n^{\star},b)$ for all $n \geq 1$ , since $n^{\star}$ achieves the minimum value of $h$ . Thus we only need to show that $h(n^{\star},b) \geq 0$ . Using the expression of $n^{\star}$ , we have:
Since $n^{\star} > 1$ , the first term $1 - \frac{1}{n^{\star}}$ is non-negative, thus we only need to show that $n^{\star}b^{2} \leq 1$ so that the last term is also non-negative. It is easy to see that $n^{\star}b^{2} \leq 1$ is equivalent to having $\tilde{h}(b) \geq 0$ , where we define the function $\tilde{h}(b)$ as:
$$\tilde{h}(b) = \log\left(\frac{1 - \frac{b}{2}}{1 - b}\right) - b^{2}.$$
We can analyze the variations of $\tilde{h}$ by computing its derivative, which is given by:
$$\partial_{b}\tilde{h}(b) = \frac{1}{(1 - b)(2 - b)} - 2b.$$
Hence, we have the following equivalence:
$$\partial_{b}\tilde{h}(b)\geq 0 \iff 2b(1 - b)(2 - b)\leq 1.$$
This is always true for $0 \leq b < 1$ since $b(1 - b) \leq \frac{1}{4}$ so that $2b(1 - b)(2 - b) \leq \frac{2 - b}{2} \leq 1$ . Thus we have shown that $\tilde{h}$ is increasing over $[0,1)$ so that $\tilde{h}(b) \geq \tilde{h}(0) = 0$ . As discussed above, this is equivalent to having $n^{\star}b^{2} \leq 1$ , so that $h(n^{\star}, b) \geq 0$ which concludes the proof.
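As a sanity check complementing the proof above, the inequality of Lemma 3 can also be verified numerically on a grid (the grid ranges below are arbitrary):

```python
import numpy as np

def lemma3_holds(n_max=200):
    """Check n*b^2*(1-b)^(n-1) <= (1 - b/2)^(n-1) on a grid of
    b in (0, 1) and n in {1, ..., n_max}."""
    for b in np.linspace(0.01, 0.99, 50):
        for n in range(1, n_max + 1):
            if n * b**2 * (1 - b)**(n - 1) > (1 - b / 2)**(n - 1) + 1e-15:
                return False
    return True
```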
F EXPERIMENTS
F.1 DETAILS OF THE SYNTHETIC EXAMPLE
We choose the functions $f$ and $g$ to be of the form $f(x,y) := \frac{1}{2} x^{\top}A_{f}x + y^{\top}C_{f}$ and $g(x,y) := \frac{1}{2} y^{\top}A_{g}y + y^{\top}B_{g}x$ , where $A_{f}$ and $A_{g}$ are symmetric positive definite matrices of sizes $d_x \times d_x$ and $d_y \times d_y$ , $B_{g}$ is a $d_y \times d_x$ matrix and $C_{f}$ is a $d_y$ -dimensional vector, with $d_x = 2000$ and $d_y = 1000$ .
We generate the parameters of the problem so that the smoothness constants $L$ and $L_{g}$ are fixed to $1$ , $\kappa_{\mathcal{L}} = 10$ and $\kappa_{g}$ takes values in $\{10^i, i \in \{0,\dots,7\}\}$ . We then solve each problem using different methods and perform a grid-search on the number of iterations $T$ and $M$ of algorithms $\mathcal{A}_k$ and $\mathcal{B}_k$ .
We fix the step-sizes to $\gamma_{k} = 1 / L$ and $\alpha_{k} = \beta_{k} = 1 / L_{g}$ and perform a grid-search on the number of iterations $T$ and $M$ of algorithms $\mathcal{A}_k$ and $\mathcal{B}_k$ over $\{10^i, i \in \{0, 1, 2, 3\}\}$ . For AID methods without warm-start in $\mathcal{B}_k$ , we consider an additional setting where $M$ increases logarithmically with $k$ , as suggested in Ji et al. (2021), with $M = \lfloor 10^{3} \log(k) \rfloor$ . Similarly, for (ITD) and (Reverse), we additionally use an increasing $T$ of the same form.
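A sketch of how such a synthetic instance can be generated is given below. The exact sampling procedure is not specified in the text, so the construction, which prescribes the spectrum of each matrix to fix the smoothness constant to $1$ and control the condition number, is only one possible choice; dimensions are scaled down from $d_x = 2000, d_y = 1000$:

```python
import numpy as np

def make_spd(d, kappa, rng):
    """Random symmetric matrix with eigenvalues spread in [1/kappa, 1]:
    smoothness constant 1, condition number kappa (one hypothetical choice)."""
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    eigs = np.logspace(-np.log10(kappa), 0, d)
    return (Q * eigs) @ Q.T

rng = np.random.default_rng(0)
dx, dy = 40, 20                      # small stand-ins for d_x, d_y
A_g = make_spd(dy, 1e3, rng)         # inner Hessian with kappa_g = 10^3
A_f = make_spd(dx, 10, rng)          # outer curvature with kappa_L = 10
B_g = rng.standard_normal((dy, dx)) / np.sqrt(dx * dy)
C_f = rng.standard_normal(dy)

f = lambda x, y: 0.5 * x @ A_f @ x + y @ C_f
g = lambda x, y: 0.5 * y @ A_g @ y + y @ (B_g @ x)
```

Since $y^{\star}(x) = -A_g^{-1}B_g x$ is linear, the Hessian of $\mathcal{L}$ is $A_f$, so the outer condition number is indeed controlled by `make_spd`.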
F.2 EXPERIMENTAL DETAILS FOR LOGISTIC REGRESSION
The inner-level and outer-level cost functions for this task take the following form:
For the default setting, we use the well-chosen parameters reported in Grazzi et al. (2020); Ji et al. (2021), where $\alpha_{k} = \gamma_{k} = 100$ , $\beta_{k} = 0.5$ , and $T = N = 10$ . For the grid-search setting, we select the best-performing parameters $T$ , $M$ and $\beta_{k}$ from a grid $\{10,20\} \times \{5,10\} \times \{0.5,10\}$ , while the batch-size (chosen to be the same for all steps of the algorithms) varies in $10^3 \times \{0.1,1,2,4\}$ . We also compared with VRBO (Yang et al., 2021) using the implementation available online and noticed instabilities for large values of $T$ and $N$ , as reported by the authors, but also a drop in performance compared to stocBiO for smaller $T$ and $N$ due to inexact estimates of the gradient.
Figure 2: Evolution of the relative error vs. time in seconds for different AID-based methods on the synthetic example. Each column corresponds to a method (AID-CG, AmIGO-CG, AmIGO-GD, AID-N, AID-FP) and each row corresponds to a choice of the condition number $\kappa_{g}$ . For each method we consider $T$ and $N$ from a grid $\{1, 10, 10^{2}, 10^{3}\} \times \{1, 10, 10^{2}, 10^{3}\}$ . Lightest colors correspond to smaller values of $N$ , while nuances within each color correspond to increasing values of $T$ .
Figure 3: Evolution of the relative error vs. time in seconds for different ITD-based methods on the synthetic example. From left to right, the first two columns correspond to the Reverse and ITD methods with small condition numbers $\kappa_{g} \in \{1, 10, 10^{3}\}$ ; the last two columns are for higher condition numbers $\kappa_{g} \in \{10^{4}, 10^{5}, 10^{7}\}$ . For each method we consider $T \in \{1, 10, 10^{2}, 10^{3}\}$ . Lightest colors correspond to smaller values of $T$ .
F.3 DATASET DISTILLATION
Dataset distillation (Wang et al., 2018; Lorraine et al., 2020) consists in learning a small synthetic dataset such that a model trained on this dataset achieves a small error on the training set. Specifically, we consider a classification problem with $C$ classes using a linear model and a training dataset $\mathcal{D}_{tr}$ , where each training point $\xi \in \mathcal{D}_{tr}$ is a $d$ -dimensional vector with a class $c_{\xi} \in \{1,\dots,C\}$ . The linear model is represented by a matrix $y \in \mathbb{R}^{C\times d}$ mapping a data point $\xi$ to the logits $y\xi$ of each class. Dataset distillation can be cast as a bilevel problem of the form:
where $\lambda \in \mathbb{R}^d$ is a vector of hyper-parameters regularizing the inner-level problem, which we found beneficial to add.
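A minimal sketch of the two levels of this distillation objective is given below. The outer variable collects the synthetic points `x_syn` with fixed labels `c_syn`; the function names and the exact form of the per-feature ridge term are our assumptions, not the paper's implementation:

```python
import numpy as np

def softmax_xent(logits, labels):
    """Mean cross-entropy of logits of shape (n, C) vs. integer labels (n,)."""
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def inner_loss(y, x_syn, c_syn, lam):
    """g(x, y): fit the linear model y (shape (C, d)) on the synthetic
    points, plus a per-feature ridge penalty weighted by lam in R^d."""
    return softmax_xent(x_syn @ y.T, c_syn) + 0.5 * np.sum(lam * y**2)

def outer_loss(y, X_tr, c_tr):
    """f(x, y): error of the distilled-trained model on the training set."""
    return softmax_xent(X_tr @ y.T, c_tr)
```

With $\lambda > 0$ the inner problem is strongly convex in $y$, matching the setting of the analysis.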
Experimental setup. We perform the distillation task on the MNIST dataset. We set the step-sizes $\alpha_{k} = \beta_{k} = 0.1$ and $T = N = 10$ . We perform a grid-search on the outer-level step-size $\gamma_{k}\in \{0.01,0.001,0.0001\}$ and run the algorithms for $k = 10000$ iterations.
Figure 4: Evolution of the validation loss (left column), validation accuracy (middle column) and test accuracy (right column) over time for different methods on the logistic regression task. Each row corresponds to a different batch size $|\mathcal{D}| \in \{100, 1000, 2000, 4000\}$ , chosen to be the same for all gradient, Hessian-vector and Jacobian-vector product evaluations. Time is reported in seconds.
Figure 5: Performance of various bilevel algorithms on the dataset distillation task on the MNIST dataset.