
ADAPTIVE STOCHASTIC GRADIENT ALGORITHM FOR BLACK-BOX MULTI-OBJECTIVE LEARNING

Feiyang Ye $^{1,2,}$ , Yueming Lyu $^{3,4,}$ , Xuehao Wang $^{1}$ , Yu Zhang $^{1,5,\dagger}$ , Ivor W. Tsang $^{2,3,4,6}$

1Department of Computer Science and Engineering, Southern University of Science and Technology
$^{2}$ Australian Artificial Intelligence Institute, University of Technology Sydney
3Centre for Frontier AI Research, Agency for Science, Technology and Research, Singapore
$^{4}$ Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore
$^{5}$ Shanghai Artificial Intelligence Laboratory
$^{6}$ School of Computer Science and Engineering, Nanyang Technological University

{feiyang.ye.uts,xuehaowangfi,yu.zhang.ust}@gmail.com

{Lyu_Yueming, Ivor_Tsang}@cfar.a-star.edu.sg

ABSTRACT

Multi-objective optimization (MOO) has become an influential framework for various machine learning problems, including reinforcement learning and multi-task learning. In this paper, we study the black-box multi-objective optimization problem, where we aim to optimize multiple potentially conflicting objectives with function queries only. To address this challenging problem and find a Pareto optimal or a Pareto stationary solution, we propose a novel adaptive stochastic gradient algorithm for black-box MOO, called ASMG. Specifically, we use the stochastic gradient approximation method to obtain the gradient for the distribution parameters of the Gaussian smoothed MOO with function queries only. Subsequently, an adaptive weight is employed to aggregate all stochastic gradients to optimize all objective functions effectively. Theoretically, we explicitly provide the connection between the original MOO problem and the corresponding Gaussian smoothed MOO problem and prove the convergence rate for the proposed ASMG algorithm in both convex and non-convex scenarios. Empirically, the proposed ASMG method achieves competitive performance on multiple numerical benchmark problems. Additionally, the state-of-the-art performance on the black-box multi-task learning problem demonstrates the effectiveness of the proposed ASMG method.

1 INTRODUCTION

Multi-objective optimization (MOO) involves optimizing multiple potentially conflicting objectives simultaneously (Deb et al., 2016; Fliege & Svaiter, 2000). In recent years, MOO has drawn intensive attention in a wide range of applications, including meta-learning (Ye et al., 2021; Yu et al., 2022), reinforcement learning (Thomas et al., 2021; Prabhakar et al., 2022), learning-to-rank (LTR) problems Mahapatra et al. (2023a;b), and multi-task learning (Momma et al., 2022; Fernando et al., 2022; Zhou et al., 2022b; Lin et al., 2022; 2023; Ye et al., 2024). A typical MOO problem is formulated as

min⁑x∈XF(x):=(F1(x),F2(x),…,Fm(x)),(1) \min _ {\boldsymbol {x} \in \mathcal {X}} F (\boldsymbol {x}) := \left(F _ {1} (\boldsymbol {x}), F _ {2} (\boldsymbol {x}), \dots , F _ {m} (\boldsymbol {x})\right), \tag {1}

where $m \geq 2$ denotes the number of objectives, $\mathcal{X} \subseteq \mathbb{R}^d$ is the feasible set, and $d$ represents the parameter dimension. The objective function $F_{i}:\mathbb{R}^{d}\to \mathbb{R}$ satisfies $F_{i}(\pmb {x}) > -\infty$ for $i = 1,\ldots ,m$.

Solving the MOO problem is challenging because in most cases it cannot find a common parameter that minimizes all objective functions simultaneously. Therefore, a widely adopted strategy is to find a Pareto optimal solution or a Pareto stationary solution. To achieve this goal, a typical gradient-based method is the multiple gradient descent algorithm (MGDA) (DΓ©sideri, 2012). The basic idea of

MGDA is to iteratively update the variable $x$ via a common descent direction for all the objectives through a convex combination of gradients from individual objectives. Various MGDA-based MOO algorithms (Yu et al., 2020; Liu et al., 2021; Fernando et al., 2022; Zhou et al., 2022b) have been proposed to adjust the multiple gradients to seek a common descent direction that simultaneously decreases all the objectives. Those gradient-based MOO algorithms have been successfully applied in a wide range of applications, especially for multi-task learning (MTL) (Sener & Koltun, 2018; Zhang & Yang, 2022).

However, in many MOO-based learning problems, the gradient of the objective $F(\pmb{x})$ w.r.t. the variable $\pmb{x}$ cannot be explicitly calculated, making problem (1) a black-box MOO problem (Wang & Shan, 2004; Ε½ilinskas, 2014). For instance, many large models such as large language models (LLMs) (Devlin et al., 2018; Raffel et al., 2020; Yu et al., 2023) are released as services and are accessible only through APIs (Brown et al., 2020). In such scenarios, users can only query the large models without accessing gradients to accomplish tasks of interest (Sun et al., 2022b;a), and gradient-based MOO methods are no longer applicable since they all rely on the availability of true gradients or stochastic gradients w.r.t. the variable $\pmb{x}$. Several kinds of approaches have been widely studied for black-box MOO, such as Bayesian optimization (BO) (Konakovic Lukovic et al., 2020; Zhang & Golovin, 2020) and genetic algorithms (GA) (Laumanns et al., 2002; Wang & Shan, 2004; Chen et al., 2012; Arrieta et al., 2018). Among those methods, BO methods are good at dealing with low-dimensional expensive black-box MOO problems, while GA methods aim to explore the entire Pareto optimal set, which is computationally expensive for machine learning problems and usually lacks convergence analysis. Those limitations motivate us to design an algorithm for black-box MOO that can effectively reach a Pareto optimal solution or a Pareto stationary solution for relatively high-dimensional learning problems with affordable evaluations and convergence guarantees.

To achieve that, in this paper, we propose a novel Adaptive Stochastic Multi-objective Gradient (ASMG) algorithm for black-box MOO by taking advantage of gradient-based MOO methods. Specifically, the ASMG method first smoothes each objective to their expectation over a Gaussian distribution, leading to Gaussian smoothed objectives. Then it iteratively updates the parameterized distribution via a common search direction aggregated by the approximated stochastic gradients for all smoothed objectives. We explore the connections between the MOO and the corresponding Gaussian smoothed MOO and provide a convergence analysis for the proposed ASMG algorithm under both convex and non-convex scenarios. Moreover, experiments on various numerical benchmark problems and a black-box multi-task learning problem demonstrate the effectiveness of the proposed ASMG method.

The main contributions of this work are three-fold: (i) We propose a novel ASMG algorithm for black-box multi-objective optimization. To the best of our knowledge, we are the first to design a stochastic gradient algorithm for black-box MOO with a theoretical convergence guarantee. (ii) Theoretically, we explicitly provide the connection of the Pareto optimal and stationary conditions between the original MOO and the corresponding Gaussian-smoothed MOO. Moreover, we prove the convergence rate for the proposed ASMG algorithm in both convex and non-convex cases. (iii) Empirically, the proposed ASMG algorithm achieves competitive performance on multiple numerical benchmark problems. Moreover, the state-of-the-art performance on black-box multi-task learning problems demonstrates the effectiveness of the proposed ASMG method.

Notation and Symbols. $\|\cdot\|_1$, $\|\cdot\|_2$, and $\|\cdot\|_{\infty}$ denote the $l_{1}$ norm, $l_{2}$ norm, and $l_{\infty}$ norm for vectors, respectively. $\|\cdot\|_F$ denotes the Frobenius norm for matrices. $\Delta^m$ denotes an $m$-dimensional simplex. $S^{+}$ denotes the set of positive semi-definite matrices. $\frac{X}{Y}$ denotes the elementwise division operation when $X$ and $Y$ are vectors, and the elementwise division operation for diagonal elements in $X$ and $Y$ when they are diagonal matrices. For a square matrix $\mathbf{X}$, $\mathrm{diag}(\mathbf{X})$ is a vector with the diagonal entries of $\mathbf{X}$, and if $\mathbf{x}$ is a vector, $\mathrm{diag}(\mathbf{x})$ is a diagonal matrix with $\mathbf{x}$ as its diagonal entries. Define $\|X\|_Y \coloneqq \sqrt{\langle X,YX\rangle}$ for a matrix $Y\in S^{+}$ or a non-negative vector $Y$, where $\langle \cdot ,\cdot \rangle$ denotes the inner product under the $l_{2}$ norm for vectors and the inner product under the Frobenius norm for matrices.

2 BACKGROUND

In this section, we introduce useful concepts of MOO and stochastic gradient approximation strategies for black-box optimization.

2.1 MULTI-OBJECTIVE OPTIMIZATION

In MOO (Deb et al., 2016), we are interested in finding solutions that cannot be improved simultaneously for all the objectives, leading to the notion of Pareto optimality, which is defined as follows for problem (1).

Definition 2.1. For any two points $\pmb{x}_1, \pmb{x}_2 \in \mathcal{X}$ , we say that $\pmb{x}_1$ dominates $\pmb{x}_2$ if $F_i(\pmb{x}_1) \leq F_i(\pmb{x}_2)$ holds for $i = 1, \dots, m$ , and $F_i(\pmb{x}_1) \neq F_i(\pmb{x}_2)$ holds for some $i$ . A point $\pmb{x}^* \in \mathcal{X}$ is called Pareto optimal if it is not dominated by any other point in $\mathcal{X}$ . The set of all Pareto optimal solutions forms the Pareto set. The set of objective values $F(\pmb{x}^*)$ over all Pareto optimal solutions is called the Pareto front.
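Dominance between two candidate solutions can be checked directly from their objective vectors; a minimal sketch of Definition 2.1 (the helper name is ours):

```python
def dominates(f1, f2):
    """Return True if objective vector f1 dominates f2 (Definition 2.1):
    f1 is no worse in every objective and differs (hence is strictly
    better) in at least one."""
    return (all(u <= v for u, v in zip(f1, f2))
            and any(u < v for u, v in zip(f1, f2)))
```

Note that, combined with the componentwise "no worse" condition, requiring $F_i(\pmb{x}_1) \neq F_i(\pmb{x}_2)$ for some $i$ is equivalent to strict improvement in some objective, which is what the second clause tests.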

We then present a sufficient condition for Pareto optimality.

Proposition 2.2. For MOO problem (1), if all objectives $F_{i}(\pmb{x})$ for $i = 1, \dots, m$ are convex functions and there exists $\pmb{\lambda} \in \Delta^{m-1}$ such that $\pmb{x}^{*} = \arg \min_{\pmb{x}} \pmb{\lambda}^{\top} F(\pmb{x})$ , then $\pmb{x}^{*}$ is Pareto optimal.

The above proposition implies that the minimizer of any linearization is Pareto optimal (Zhou et al., 2022b). In the general nonconvex cases, MOO aims to find the Pareto stationary solution (Fliege et al., 2019). If a point $\hat{\pmb{x}}$ is a Pareto stationary solution, then there is no common descent direction for all $F_{i}(\pmb {x})$ 's $(i = 1,\dots ,m)$ at $\hat{\pmb{x}}$ . For the Pareto stationary condition, we have the following proposition according to Proposition 1 in Zhou et al. (2022b).

Proposition 2.3. For MOO problem (1), (i) we say $\pmb{x}^{*}\in \mathcal{X}$ is a Pareto stationary solution if there exists $\pmb{\lambda} \in \Delta^{m - 1}$ such that $\| \sum_{i = 1}^{m}\lambda_{i}\nabla F_{i}(\pmb{x}^{*})\| = 0$ ; (ii) we say $\pmb{x}^{*}\in \mathcal{X}$ is an $\epsilon$ -accurate Pareto stationary solution if $\min_{\pmb{\lambda}\in \Delta^{m - 1}}\| \sum_{i = 1}^{m}\lambda_{i}\nabla F_{i}(\pmb{x}^{*})\|^{2}\leq \epsilon$ .
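For $m = 2$, the minimization in Proposition 2.3(ii) admits a well-known closed form (the same one used by MGDA-style methods, cf. Sener & Koltun, 2018); a small numpy sketch with names of our choosing:

```python
import numpy as np

def stationarity_gap(g1, g2):
    """For m = 2 objectives with gradients g1, g2, solve
    min_{lam in [0,1]} ||lam*g1 + (1-lam)*g2||^2 in closed form:
    lam* = clip(<g2 - g1, g2> / ||g1 - g2||^2, 0, 1).
    Returns the optimal weight and the squared norm of the composite
    gradient, i.e., the quantity bounded by epsilon in Prop. 2.3(ii)."""
    g1, g2 = np.asarray(g1, float), np.asarray(g2, float)
    denom = (g1 - g2) @ (g1 - g2)
    lam = 0.5 if denom == 0.0 else float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    d = lam * g1 + (1.0 - lam) * g2      # min-norm composite gradient
    return lam, float(d @ d)

# Exactly opposed gradients cancel: the point is Pareto stationary (gap 0).
lam, gap = stationarity_gap([1.0, 0.0], [-1.0, 0.0])
```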

2.2 STOCHASTIC GRADIENT APPROXIMATION STRATEGIES

Inspired by evolution strategies, the stochastic gradient approximation method (Wierstra et al., 2014; Lyu & Tsang, 2021) for black-box optimization, instead of maintaining a population of search points, iteratively updates a search distribution by stochastic gradient approximation.

The stochastic gradient approximation strategies employed in black-box optimization typically follow a general procedure. Firstly, a parameterized search distribution is utilized to generate a batch of sample points. Then the sample points allow the algorithm to capture the local structure of the fitness function and appropriately estimate the stochastic gradient to update the distribution. Specifically, when $\theta$ denotes the parameters of the search distribution $p_{\theta}(\boldsymbol{x})$ and $f(\boldsymbol{x})$ denotes a single objective function for sample $\boldsymbol{x}$ , the expected fitness under the search distribution can be defined as $J(\theta) = \mathbb{E}_{p_{\theta}(\boldsymbol{x})}[f(\boldsymbol{x})]$ . Based on this definition, we can obtain the Monte Carlo estimate of the search gradient as

βˆ‡ΞΈJ(ΞΈ)=1Nβˆ‘j=1Nf(xj)βˆ‡ΞΈlog⁑pΞΈ(xj),(2) \nabla_ {\boldsymbol {\theta}} J (\boldsymbol {\theta}) = \frac {1}{N} \sum_ {j = 1} ^ {N} f \left(\boldsymbol {x} _ {j}\right) \nabla_ {\boldsymbol {\theta}} \log p _ {\boldsymbol {\theta}} \left(\boldsymbol {x} _ {j}\right), \tag {2}

where $N$ denotes the number of samples, and $\pmb{x}_j$ denotes the $j$ -th sample. Therefore, the stochastic gradient $\nabla_{\pmb{\theta}}J(\pmb{\theta})$ provides a search direction in the space of search distributions.
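The estimator in Eq. (2) can be sketched in a few lines of numpy for the mean of an isotropic Gaussian, where $\nabla_{\mu}\log p_{\theta}(\pmb{x}) = (\pmb{x}-\pmb{\mu})/\sigma^2$ (the function and its names are our illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def search_gradient(f, mu, sigma2, n=4096):
    """Monte Carlo estimate of the search gradient in Eq. (2) w.r.t. the
    mean of an isotropic Gaussian N(mu, sigma2 * I), using function
    queries only."""
    x = mu + np.sqrt(sigma2) * rng.standard_normal((n, mu.size))
    fx = np.array([f(xi) for xi in x])   # black-box function queries
    score = (x - mu) / sigma2            # grad_mu log p_theta(x_j)
    return (fx[:, None] * score).mean(axis=0)

# For a linear f(x) = a.x, the smoothed gradient equals a exactly,
# so the Monte Carlo estimate should be close to a.
a = np.array([2.0, -1.0, 0.5])
g = search_gradient(lambda x: float(a @ x), np.zeros(3), sigma2=1.0)
```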

3 METHODOLOGY

In this section, we introduce the proposed ASMG algorithm. Firstly, we formulate the black-box MOO as a min-max optimization problem and solve it in Section 3.1. Then in Section 3.2, we derive the update formula of parameters in the search distribution under the Gaussian sampling.

3.1 BLACK-BOX MULTI-OBJECTIVE OPTIMIZATION

We aim to minimize the MOO problem (1) with only function queries. Due to the lack of gradient information in black-box optimization, we use the stochastic gradient approximation method. Specifically, each objective of the original MOO is smoothed to the expectation of $F_{i}(\pmb{x})$ under a parametric search distribution $p_{\pmb{\theta}}(\pmb{x})$ with parameter $\pmb{\theta}$ , i.e., $J_{i}(\pmb{\theta}) = \mathbb{E}_{p_{\pmb{\theta}}(\pmb{x})}[F_{i}(\pmb{x})]$ for $i = 1, \dots, m$ . Then the optimal parameter $\pmb{\theta}$ is found by minimizing the following smoothed MOO problem as

min⁑θJ(ΞΈ):=(J1(ΞΈ),J2(ΞΈ),…,Jm(ΞΈ)).(3) \min _ {\boldsymbol {\theta}} J (\boldsymbol {\theta}) := \left(J _ {1} (\boldsymbol {\theta}), J _ {2} (\boldsymbol {\theta}), \dots , J _ {m} (\boldsymbol {\theta})\right). \tag {3}

By following Wierstra et al. (2014); Lyu & Tsang (2021), the search distribution is assumed to be a Gaussian distribution, i.e., $p_{\theta}(\boldsymbol{x}) = \mathcal{N}(\boldsymbol{x} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma})$ where $\boldsymbol{\mu}$ denotes the mean and $\boldsymbol{\Sigma}$ denotes the covariance matrix, and correspondingly $\boldsymbol{\theta}$ includes $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$ , i.e., $\boldsymbol{\theta} = \{\boldsymbol{\mu}, \boldsymbol{\Sigma}\}$ . We denote $J_{i}(\boldsymbol{\theta}) = J_{i}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ for $i = 1, \dots, m$ . This Gaussian-smoothed MOO method can effectively estimate stochastic gradients and enable a more accurate search direction for the distribution to address high-dimensional black-box MOO problems. The connection between this Gaussian smoothed MOO problem (3) and problem (1) is shown in Section 4.

Here we aim to derive an update formulation for $\theta$ . To optimize all the Gaussian smoothed objective functions effectively, inspired by MGDA, we can find a parameter $\theta$ to maximize the minimum decrease across all smoothed objectives in each iteration as

max⁑θmin⁑i∈[m](Ji(ΞΈt)βˆ’Ji(ΞΈ))β‰ˆmax⁑θmin⁑i∈[m]βŸ¨βˆ‡ΞΈJi(ΞΈt),ΞΈtβˆ’ΞΈβŸ©,(4) \max _ {\boldsymbol {\theta}} \min _ {i \in [ m ]} \left(J _ {i} \left(\boldsymbol {\theta} _ {t}\right) - J _ {i} (\boldsymbol {\theta})\right) \approx \max _ {\boldsymbol {\theta}} \min _ {i \in [ m ]} \left\langle \nabla_ {\boldsymbol {\theta}} J _ {i} \left(\boldsymbol {\theta} _ {t}\right), \boldsymbol {\theta} _ {t} - \boldsymbol {\theta} \right\rangle , \tag {4}

where $\nabla_{\pmb{\theta}}J_{i}(\pmb{\theta}_{t}) = \nabla_{\pmb{\theta}}\mathbb{E}_{p_{\pmb{\theta}_{t}}}[F_{i}(\pmb{x})]$ denotes the derivative of $J_{i}(\pmb{\theta})$ w.r.t. $\pmb{\theta} = \{\pmb{\mu},\pmb{\Sigma}\}$ at $\pmb{\theta}_t = \{\pmb{\mu}_t,\pmb{\Sigma}_t\}$ . In Eq. (4), the first-order Taylor expansion is used to derive the approximation under the assumption that the variable $\pmb{\theta}$ is close to $\pmb{\theta}_t$ . To further make this assumption hold, we add a regularization term $-\frac{1}{\beta_t}\mathrm{KL}(p_{\pmb{\theta}}\| p_{\pmb{\theta}_t})$ to the objective being maximized in Eq. (4), and then the objective function to update $\pmb{\theta}$ is formulated as

min⁑θmax⁑λtβˆˆΞ”mβˆ’1βŸ¨βˆ‘i=1mΞ»itβˆ‡ΞΈJi(ΞΈt),ΞΈβˆ’ΞΈt⟩+1Ξ²tKL(pΞΈβˆ₯pΞΈt).(5) \min _ {\boldsymbol {\theta}} \max _ {\boldsymbol {\lambda} ^ {t} \in \Delta^ {m - 1}} \left\langle \sum_ {i = 1} ^ {m} \lambda_ {i} ^ {t} \nabla_ {\boldsymbol {\theta}} J _ {i} (\boldsymbol {\theta} _ {t}), \boldsymbol {\theta} - \boldsymbol {\theta} _ {t} \right\rangle + \frac {1}{\beta_ {t}} \mathrm {K L} (p _ {\boldsymbol {\theta}} \| p _ {\boldsymbol {\theta} _ {t}}). \tag {5}

Note that problem (5) is convex w.r.t. $\theta$ and concave w.r.t. $\lambda$ for a Gaussian distribution $p_{\theta}$ . Then, using the Von Neumann-Fan minimax theorem (Borwein, 2016), we can switch the order of the min and max operators, leading to an equivalent problem as

max⁑λtβˆˆΞ”mβˆ’1minβ‘ΞΈβŸ¨βˆ‘i=1mΞ»itβˆ‡ΞΈJi(ΞΈt),ΞΈβˆ’ΞΈt⟩+1Ξ²tKL(pΞΈβˆ₯pΞΈt).(6) \max _ {\boldsymbol {\lambda} ^ {t} \in \Delta^ {m - 1}} \min _ {\boldsymbol {\theta}} \left\langle \sum_ {i = 1} ^ {m} \lambda_ {i} ^ {t} \nabla_ {\boldsymbol {\theta}} J _ {i} (\boldsymbol {\theta} _ {t}), \boldsymbol {\theta} - \boldsymbol {\theta} _ {t} \right\rangle + \frac {1}{\beta_ {t}} \mathrm {K L} (p _ {\boldsymbol {\theta}} \| p _ {\boldsymbol {\theta} _ {t}}). \tag {6}

By solving the inner problem of problem (6), we can obtain the update formulations for $\mu$ and $\Sigma$ in the $t$ -th iteration as

ΞΌt+1=ΞΌtβˆ’Ξ²tΞ£tβˆ‘i=1mΞ»itβˆ‡ΞΌJi(ΞΈt),Ξ£t+1βˆ’1=Ξ£tβˆ’1+2Ξ²tβˆ‘i=1mΞ»itβˆ‡Ξ£Ji(ΞΈt),(7) \boldsymbol {\mu} _ {t + 1} = \boldsymbol {\mu} _ {t} - \beta_ {t} \boldsymbol {\Sigma} _ {t} \sum_ {i = 1} ^ {m} \lambda_ {i} ^ {t} \nabla_ {\boldsymbol {\mu}} J _ {i} (\boldsymbol {\theta} _ {t}), \quad \boldsymbol {\Sigma} _ {t + 1} ^ {- 1} = \boldsymbol {\Sigma} _ {t} ^ {- 1} + 2 \beta_ {t} \sum_ {i = 1} ^ {m} \lambda_ {i} ^ {t} \nabla_ {\boldsymbol {\Sigma}} J _ {i} (\boldsymbol {\theta} _ {t}), \tag {7}

where $\nabla_{\pmb{\mu}}J_{i}(\pmb{\theta}_{t})$ and $\nabla_{\pmb{\Sigma}}J_{i}(\pmb{\theta}_{t})$ denote the derivatives of $J_{i}(\pmb{\theta})$ w.r.t. $\pmb{\mu}$ and $\pmb{\Sigma}$ at $\pmb{\mu} = \pmb{\mu}_t$ and $\pmb{\Sigma} = \pmb{\Sigma}_t$ , respectively. In the following theorem, we show that those two gradients can be obtained with function queries only.

Theorem 3.1. (Wierstra et al., 2014) The gradient of the expectation of an integrable function $F_{i}(\pmb{x})$ under a Gaussian distribution $p_{\theta} \coloneqq \mathcal{N}(\pmb{\mu}, \pmb{\Sigma})$ with respect to the mean $\pmb{\mu}$ and the covariance $\pmb{\Sigma}$ can be expressed as

βˆ‡ΞΌEpΞΈ[Fi(x)]=EpΞΈ[Ξ£βˆ’1(xβˆ’ΞΌ)Fi(x)],(8) \nabla_ {\boldsymbol {\mu}} \mathbb {E} _ {p _ {\boldsymbol {\theta}}} [ F _ {i} (\boldsymbol {x}) ] = \mathbb {E} _ {p _ {\boldsymbol {\theta}}} [ \boldsymbol {\Sigma} ^ {- 1} (\boldsymbol {x} - \boldsymbol {\mu}) F _ {i} (\boldsymbol {x}) ], \tag {8}

βˆ‡Ξ£EpΞΈ[Fi(x)]=12EpΞΈ[(Ξ£βˆ’1(xβˆ’ΞΌ)(xβˆ’ΞΌ)βŠ€Ξ£βˆ’1βˆ’Ξ£βˆ’1)Fi(x)].(9) \nabla_ {\boldsymbol {\Sigma}} \mathbb {E} _ {p _ {\boldsymbol {\theta}}} [ F _ {i} (\boldsymbol {x}) ] = \frac {1}{2} \mathbb {E} _ {p _ {\boldsymbol {\theta}}} \left[ \left(\boldsymbol {\Sigma} ^ {- 1} (\boldsymbol {x} - \boldsymbol {\mu}) (\boldsymbol {x} - \boldsymbol {\mu}) ^ {\top} \boldsymbol {\Sigma} ^ {- 1} - \boldsymbol {\Sigma} ^ {- 1}\right) F _ {i} (\boldsymbol {x}) \right]. \tag {9}

According to Theorem 3.1, to calculate the gradients, we need to calculate the inverse covariance matrix, which is computationally expensive in high dimensions, and hence to reduce the computational cost, we assume that the covariance matrix $\Sigma$ is a diagonal matrix.

Then substituting Eq. (7) into problem (6), it can be approximated by the following quadratic programming (QP) problem as

min⁑λtβˆˆΞ”mβˆ’1βˆ₯βˆ‘i=1mΞ»itpitβˆ₯2+2βˆ₯βˆ‘i=1mΞ»ithitβˆ₯2,(10) \min _ {\boldsymbol {\lambda} ^ {t} \in \Delta^ {m - 1}} \left\| \sum_ {i = 1} ^ {m} \lambda_ {i} ^ {t} \boldsymbol {p} _ {i} ^ {t} \right\| ^ {2} + 2 \left\| \sum_ {i = 1} ^ {m} \lambda_ {i} ^ {t} \boldsymbol {h} _ {i} ^ {t} \right\| ^ {2}, \tag {10}

where $\pmb{p}_i^t = \pmb{\Sigma}_t^{\frac{1}{2}}\nabla_{\pmb{\mu}} J_i(\pmb{\theta}_t)$ and $\pmb{h}_i^t = \mathrm{diag}(\pmb{\Sigma}_t\nabla_{\pmb{\Sigma}}J_i(\pmb{\theta}_t))$ . The detailed derivation is put in Appendix A. Problem (10) is obviously convex and its objective function can be simplified to $\pmb{\lambda}^{t\top}\Lambda^{\top}\Lambda \pmb{\lambda}^{t}$ , where $\pmb{\lambda}^{t} = (\lambda_{1}^{t},\dots,\lambda_{m}^{t})^{\top}$ and $\Lambda = \big(((\pmb{p}_1^t)^\top, \sqrt{2} (\pmb{h}_1^t)^\top)^\top, \dots, ((\pmb{p}_m^t)^\top, \sqrt{2} (\pmb{h}_m^t)^\top)^\top\big)$ . The matrix $\Lambda^{\top}\Lambda$ is of size $m \times m$ , which is independent of the dimension of $\pmb{\mu}$ . Therefore, the computational cost of solving problem (10) is negligible since $m$ is usually very small. Here we use the open-source CVXPY library (Diamond & Boyd, 2016) to solve it.
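The paper solves problem (10) with CVXPY; as a dependency-free illustration, the same simplex-constrained QP $\min_{\lambda\in\Delta^{m-1}} \lambda^\top \Lambda^\top\Lambda\, \lambda$ can also be handled by projected gradient descent (simplex projection via the sorting algorithm of Duchi et al., 2008; helper names are ours):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    (sorting-based algorithm of Duchi et al., 2008)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, v.size + 1)
    rho = np.nonzero(u - css / idx > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1), 0.0)

def solve_weights(P, H, steps=500):
    """Approximately solve problem (10), min_{lam in simplex} of
    ||P lam||^2 + 2 ||H lam||^2, by projected gradient descent.
    Columns of P and H hold the vectors p_i^t and h_i^t."""
    m = P.shape[1]
    Q = P.T @ P + 2.0 * H.T @ H          # objective is lam^T Q lam
    lr = 1.0 / (2.0 * np.linalg.norm(Q, 2) + 1e-12)
    lam = np.full(m, 1.0 / m)
    for _ in range(steps):
        lam = project_simplex(lam - lr * 2.0 * Q @ lam)
    return lam

# Toy instance (ours): p_1 = (2, 0), p_2 = (-1, 0), no h-terms;
# minimizing (2*l1 - l2)^2 over the simplex gives l1 = 1/3.
P = np.array([[2.0, -1.0], [0.0, 0.0]])
H = np.zeros((2, 2))
lam = solve_weights(P, H)
```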

3.2 UPDATE FORMULATIONS FOR GAUSSIAN SAMPLING

Both $\pmb{p}_i^t$ and $\pmb{h}_i^t$ in problem (10) and the update formulations of $\pmb{\mu}$ and $\pmb{\Sigma}$ in Eq. (7) require expectations of the black-box functions. However, those expectations do not have analytical forms, so we estimate them by Monte Carlo sampling.

Specifically, according to Theorem 3.1, the stochastic approximations of $\pmb{p}_i^t$ and $\pmb{h}_i^t$ using Monte Carlo sampling are given as

p^it=1Nβˆ‘j=1NΞ£tβˆ’12(xjβˆ’ΞΌt)(Fi(xj)βˆ’Fi(ΞΌt)),(11) \hat {\boldsymbol {p}} _ {i} ^ {t} = \frac {1}{N} \sum_ {j = 1} ^ {N} \boldsymbol {\Sigma} _ {t} ^ {- \frac {1}{2}} \left(\boldsymbol {x} _ {j} - \boldsymbol {\mu} _ {t}\right) \left(F _ {i} \left(\boldsymbol {x} _ {j}\right) - F _ {i} \left(\boldsymbol {\mu} _ {t}\right)\right), \tag {11}

h^it=12Nβˆ‘j=1N[diag⁑((xjβˆ’ΞΌt)(xjβˆ’ΞΌt)⊀Σtβˆ’1βˆ’I)(Fi(xj)βˆ’Fi(ΞΌt))],(12) \hat {\boldsymbol {h}} _ {i} ^ {t} = \frac {1}{2 N} \sum_ {j = 1} ^ {N} \left[ \operatorname {d i a g} \left(\left(\boldsymbol {x} _ {j} - \boldsymbol {\mu} _ {t}\right) \left(\boldsymbol {x} _ {j} - \boldsymbol {\mu} _ {t}\right) ^ {\top} \boldsymbol {\Sigma} _ {t} ^ {- 1} - \boldsymbol {I}\right) \left(F _ {i} \left(\boldsymbol {x} _ {j}\right) - F _ {i} \left(\boldsymbol {\mu} _ {t}\right)\right) \right], \tag {12}

where $\pmb{x}_{j}$ denotes the $j$ -th sample and, inspired by Lyu & Tsang (2021), subtracting $F_{i}(\pmb{\mu}_{t})$ improves computational stability while keeping the estimators unbiased. By incorporating Theorem 3.1 into Eq. (7), the update formulations for $\pmb{\mu}$ and $\pmb{\Sigma}$ using Monte Carlo sampling are rewritten as

ΞΌt+1=ΞΌtβˆ’Ξ²tβˆ‘i=1mΞ»itΞ£tg^it,Ξ£t+1βˆ’1=Ξ£tβˆ’1+2Ξ²tβˆ‘i=1mΞ»itG^it,(13) \boldsymbol {\mu} _ {t + 1} = \boldsymbol {\mu} _ {t} - \beta_ {t} \sum_ {i = 1} ^ {m} \lambda_ {i} ^ {t} \boldsymbol {\Sigma} _ {t} \hat {\boldsymbol {g}} _ {i} ^ {t}, \quad \boldsymbol {\Sigma} _ {t + 1} ^ {- 1} = \boldsymbol {\Sigma} _ {t} ^ {- 1} + 2 \beta_ {t} \sum_ {i = 1} ^ {m} \lambda_ {i} ^ {t} \hat {G} _ {i} ^ {t}, \tag {13}

where the stochastic gradients $\hat{\pmb{g}}_i^t$ and $\hat{G}_i^t$ are formulated as

g^it=1Nβˆ‘j=1NΞ£tβˆ’1(xjβˆ’ΞΌt)(Fi(xj)βˆ’Fi(ΞΌt)),(14) \hat {\boldsymbol {g}} _ {i} ^ {t} = \frac {1}{N} \sum_ {j = 1} ^ {N} \boldsymbol {\Sigma} _ {t} ^ {- 1} \left(\boldsymbol {x} _ {j} - \boldsymbol {\mu} _ {t}\right) \left(F _ {i} \left(\boldsymbol {x} _ {j}\right) - F _ {i} \left(\boldsymbol {\mu} _ {t}\right)\right), \tag {14}

G^it=12Nβˆ‘j=1Ndiag⁑[Ξ£tβˆ’1[diag⁑((xjβˆ’ΞΌt)(xjβˆ’ΞΌt)⊀Σtβˆ’1βˆ’I)(Fi(xj)βˆ’Fi(ΞΌt))]].(15) \hat {G} _ {i} ^ {t} = \frac {1}{2 N} \sum_ {j = 1} ^ {N} \operatorname {d i a g} \left[ \boldsymbol {\Sigma} _ {t} ^ {- 1} \left[ \operatorname {d i a g} \left((\boldsymbol {x} _ {j} - \boldsymbol {\mu} _ {t}) (\boldsymbol {x} _ {j} - \boldsymbol {\mu} _ {t}) ^ {\top} \boldsymbol {\Sigma} _ {t} ^ {- 1} - \boldsymbol {I}\right) \left(F _ {i} (\boldsymbol {x} _ {j}) - F _ {i} (\boldsymbol {\mu} _ {t})\right) \right] \right]. \tag {15}

Note that $\hat{\pmb{g}}_i^t$ is an unbiased estimator of the gradient $\nabla_{\pmb{\mu}}J_{i}(\pmb{\theta}_{t})$ , as proved in Lemma B.4. To avoid the scaling problem, in practice we can apply a monotonic transformation to the aggregated objective; more details can be found in Appendix E.
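Under the diagonal-covariance assumption, Eqs. (14)-(15) reduce to elementwise operations; a numpy sketch for a single objective (storing the diagonal of $\pmb{\Sigma}$ as a vector and the function names are our conventions):

```python
import numpy as np

rng = np.random.default_rng(1)

def asmg_grad_estimates(F, mu, sigma2, z):
    """Monte Carlo estimates of Eqs. (14)-(15) for a single objective F,
    with a diagonal covariance whose diagonal is the vector sigma2.
    z holds N standard-normal rows; samples are x_j = mu + sqrt(sigma2)*z_j."""
    x = mu + np.sqrt(sigma2) * z
    df = np.array([F(xj) for xj in x]) - F(mu)   # baseline F(mu) for stability
    g_hat = ((x - mu) / sigma2 * df[:, None]).mean(axis=0)               # Eq. (14)
    G_hat = (((x - mu) ** 2 / sigma2 - 1.0) / sigma2
             * df[:, None]).mean(axis=0) / 2.0        # Eq. (15), diagonal entries
    return g_hat, G_hat

# Sanity check on the linear objective F(x) = a.x, for which
# grad_mu J = a and the covariance gradient is zero.
a = np.array([1.5, -0.5])
mu, sigma2 = np.zeros(2), np.ones(2)
z = rng.standard_normal((8192, 2))
g_hat, G_hat = asmg_grad_estimates(lambda x: float(a @ x), mu, sigma2, z)
```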

To ensure convergence, the sequence of weight vectors $\{\pmb{\lambda}^t\}_{t=0}^{T-1}$ is usually required to be a convergent sequence (Zhou et al., 2022b; Fernando et al., 2022; Liu & Vicente, 2021). However, directly solving problem (10) in each iteration cannot guarantee that. Moreover, since the composite weights $\pmb{\lambda}^t$ depend on $\hat{\pmb{p}}_i^t$ and $\hat{\pmb{h}}_i^t$ , which are related to the stochastic gradients $\hat{\pmb{g}}_i^t$ and $\hat{G}_i^t$ , the estimation of the composite stochastic gradient may contain some bias, i.e., $\mathbb{E}[\sum_{i=1}^{m} \lambda_i^t \pmb{\Sigma}_t \hat{\pmb{g}}_i^t] \neq \sum_{i=1}^{m} \mathbb{E}[\lambda_i^t] \mathbb{E}[\pmb{\Sigma}_t \hat{\pmb{g}}_i^t]$ . To generate a stable sequence of composite weights and reduce the bias caused by the correlation between the weights and the stochastic gradients, we apply a momentum strategy (Zhou et al., 2022b) to $\pmb{\lambda}$ . Specifically, in the $t$ -th iteration, we first solve problem (10) to obtain $\tilde{\pmb{\lambda}}^t$ , and then update the weights by $\pmb{\lambda}^t = (1 - \gamma_t) \pmb{\lambda}^{t-1} + \gamma_t \tilde{\pmb{\lambda}}^t$ , where $\gamma_t \in (0,1]$ is a coefficient. To preserve the advantage of maximizing the minimum decrease across all the Gaussian smoothed objectives, the coefficient $\gamma_t$ is set to 1 at the beginning and then decays to 0 as $t \to +\infty$ . In Lemma B.5, we show that the bias caused by solving for $\pmb{\lambda}^t$ decreases to zero as $\gamma \to 0$ .

The complete algorithm is shown in Algorithm 1. Since the computational cost associated with solving problem (10) in each iteration is negligible, the computational cost for the ASMG method per iteration is on the order of $\mathcal{O}(mNd)$ .
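Putting the pieces together, the following self-contained sketch runs the full loop of Algorithm 1 on a toy bi-objective problem with a diagonal covariance. The problem instance, the closed-form solution of QP (10) for $m=2$, and the clipping safeguard keeping $\pmb{\Sigma}^{-1}$ positive are our additions for illustration, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy black-box bi-objective problem (ours): two shifted quadratics whose
# Pareto set is the segment between a and b.
a, b = np.array([1.0, 1.0]), np.array([-1.0, -1.0])
F = [lambda x: float(np.sum((x - a) ** 2)),
     lambda x: float(np.sum((x - b) ** 2))]

d, N, T, beta = 2, 64, 300, 0.05
mu = np.array([3.0, -2.0])
inv_sigma2 = np.ones(d)                  # diagonal Sigma^{-1} stored as a vector
lam = np.array([0.5, 0.5])
f_start = max(f(mu) for f in F)

for t in range(T):
    sigma2 = 1.0 / inv_sigma2
    z = rng.standard_normal((N, d))
    x = mu + np.sqrt(sigma2) * z         # Gaussian sampling (Algorithm 1)
    g, G, v = [], [], []
    for f in F:
        df = np.array([f(xj) for xj in x]) - f(mu)   # baseline-subtracted queries
        gi = ((x - mu) / sigma2 * df[:, None]).mean(0)                            # Eq. (14)
        Gi = (((x - mu) ** 2 / sigma2 - 1.0) / sigma2 * df[:, None]).mean(0) / 2  # Eq. (15)
        g.append(gi)
        G.append(Gi)
        # Stack (p_i^t, sqrt(2) h_i^t) used by the QP objective in (10).
        v.append(np.concatenate([np.sqrt(sigma2) * gi, np.sqrt(2.0) * sigma2 * Gi]))
    # Closed-form solution of QP (10) for m = 2 objectives.
    diff = v[0] - v[1]
    lt = float(np.clip((v[1] - v[0]) @ v[1] / max(diff @ diff, 1e-12), 0.0, 1.0))
    gamma = 1.0 if t == 0 else 1.0 / (t + 1)          # gamma_0 = 1, decaying to 0
    lam = (1.0 - gamma) * lam + gamma * np.array([lt, 1.0 - lt])   # momentum on lambda
    # Update rules of Eq. (13), elementwise for the diagonal covariance;
    # clipping the composite G at zero is a practical safeguard (ours).
    mu = mu - beta * sigma2 * (lam[0] * g[0] + lam[1] * g[1])
    inv_sigma2 = inv_sigma2 + 2.0 * beta * np.maximum(lam[0] * G[0] + lam[1] * G[1], 0.0)

f_end = max(f(mu) for f in F)
```

On this instance the mean is driven toward the Pareto segment while the covariance shrinks, so the worse of the two objectives decreases substantially from its initial value.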

4 CONVERGENCE ANALYSIS

In this section, we provide a comprehensive convergence analysis for the proposed ASMG method. All the proofs are put in Appendix C. Firstly, we make a standard assumption for problem (3).

Assumption 4.1. The functions $J_{1}(\pmb{\theta}),\dots ,J_{m}(\pmb{\theta})$ are $H$ -Lipschitz and $L$ -smooth w.r.t. $\pmb{\theta} = \{\pmb{\mu},\pmb{\Sigma}\} \in \Theta$ , where $\Theta \coloneqq \{(\pmb{\mu},\pmb{\Sigma})\mid \pmb{\mu}\in \mathbb{R}^d,\pmb{\Sigma}\in \mathcal{S}^+\}$ .

The smoothness assumption in Assumption 4.1 is widely adopted in MOO (Zhou et al., 2022b; Fernando et al., 2022). Then, we provide a boundedness result for the covariance matrix $\Sigma$ .

Theorem 4.2. Suppose that the gradients $\hat{G}_i$ are positive semi-definite matrices, i.e., $\hat{G}_i \succeq \xi I$ for $i = 1, \ldots, m$ , where $\xi \geq 0$ , and that the covariance matrix is a diagonal matrix. Then for Algorithm 1, we have $\boldsymbol{\Sigma}_T \preceq \frac{\boldsymbol{I}}{\xi \sum_{t=1}^{T} \beta_t + \boldsymbol{\Sigma}_0^{-1}}$ .

Algorithm 1 The ASMG Method
Require: number of iterations $T$ , step size $\beta$ , number of samples $N$
1: Initialize $\pmb{\theta}_0 = (\pmb{\mu}_0,\pmb{\Sigma}_0)$ and $\gamma_0 = 1$ .
2: for $t = 0$ to $T - 1$ do
3: Take i.i.d. samples $\pmb{z}_j\sim \mathcal{N}(0,\pmb{I})$ for $j\in \{1,\dots ,N\}$ .
4: Set $\pmb{x}_j = \pmb{\mu}_t + \pmb{\Sigma}_t^{\frac{1}{2}}\pmb{z}_j$ for $j\in \{1,\ldots ,N\}$ .
5: Query the batch observations $\{F_{1}(\pmb{x}_{1}),\dots ,F_{1}(\pmb{x}_{N}),\dots ,F_{m}(\pmb{x}_{1}),\dots ,F_{m}(\pmb{x}_{N})\}$ .
6: Query the batch observations $\{F_{1}(\pmb{\mu}_{t}),\dots ,F_{m}(\pmb{\mu}_{t})\}$ .
7: Compute $\hat{\pmb{p}}_i^t = \frac{1}{N}\sum_{j = 1}^N\pmb{\Sigma}_t^{-\frac{1}{2}}(\pmb{x}_j - \pmb{\mu}_t)\big(F_i(\pmb{x}_j) - F_i(\pmb{\mu}_t)\big)$ ;
8: Compute $\hat{\pmb{h}}_i^t = \frac{1}{2N}\sum_{j = 1}^N\big[\mathrm{diag}\big((\pmb{x}_j - \pmb{\mu}_t)(\pmb{x}_j - \pmb{\mu}_t)^{\top}\pmb{\Sigma}_t^{-1} - \pmb{I}\big)(F_i(\pmb{x}_j) - F_i(\pmb{\mu}_t))\big]$ ;
9: Compute $\tilde{\pmb{\lambda}}^t$ by solving the QP problem (10);
10: Update the weights for the gradient composition: $\pmb{\lambda}^t = (1 - \gamma_t)\pmb{\lambda}^{t - 1} + \gamma_t\tilde{\pmb{\lambda}}^t$ ;
11: Compute the stochastic gradients $\hat{\pmb{g}}_i^t$ and $\hat{G}_i^t$ according to Eqs. (14) and (15), respectively;
12: Set $\pmb{\mu}_{t + 1} = \pmb{\mu}_t - \beta_t\sum_{i = 1}^m\lambda_i^t\pmb{\Sigma}_t\hat{\pmb{g}}_i^t$ and $\pmb{\Sigma}_{t + 1}^{-1} = \pmb{\Sigma}_{t}^{-1} + 2\beta_{t}\sum_{i = 1}^{m}\lambda_{i}^{t}\hat{G}_{i}^{t}$ ;
13: end for
14: return $\pmb{\theta}_T = (\pmb{\mu}_T,\pmb{\Sigma}_T)$

Theorem 4.2 establishes the upper bound for $\Sigma$ throughout the optimization process and is useful to analyze the convergence properties in the non-convex scenario as shown in Section 4.2.

4.1 CONVEX CASES

In this section, we assume that each objective in problem (1), i.e., $F_{i}(\mathbf{x})$ ( $i = 1, \dots, m$ ), is convex w.r.t. $\mathbf{x}$ . Note that the proposed ASMG algorithm approximates the gradients of the objectives of the Gaussian smoothed MOO problem, i.e., problem (3). It is necessary to study the relation between the optimal solutions of the original MOO problem (1) and the corresponding Gaussian-smoothed MOO problem (3), and we put the results in the following proposition.

Proposition 4.3. Suppose $p_{\pmb{\theta}}(\pmb{x})$ is a Gaussian distribution with $\pmb{\theta} = \{\pmb{\mu}, \pmb{\Sigma}\}$ and the functions $F_{i}(\pmb{x})$ , $i = 1, \dots, m$ , are all convex. Let $J_{i}(\pmb{\theta}) = \mathbb{E}_{p_{\pmb{\theta}}}[F_{i}(\pmb{x})]$ . Then for any $\pmb{\lambda} \in \Delta^{m-1}$ and $\pmb{\mu}^{*} \in \mathcal{X}$ , we have $\sum_{i=1}^{m} \lambda_{i}(F_{i}(\pmb{\mu}) - F_{i}(\pmb{\mu}^{*})) \leq \sum_{i=1}^{m} \lambda_{i}(J_{i}(\pmb{\mu}, \pmb{\Sigma}) - J_{i}(\pmb{\mu}^{*}, \mathbf{0}))$ , where $\mathbf{0}$ denotes a zero matrix of appropriate size and $J_{i}(\pmb{\mu}^{*}, \mathbf{0}) = F_{i}(\pmb{\mu}^{*})$ .

When $\mu^{*}$ is a Pareto-optimal solution of problem (1), Proposition 4.3 implies that the distance to the Pareto-optimal objective values of the original MOO problem is upper-bounded by that of the Gaussian smoothed MOO problem. Then the following theorem captures the convergence of $\mu$ for convex objective functions.

Theorem 4.4. Suppose that $F_{i}(\pmb{x})$ ( $i = 1, \dots, m$ ) is a convex function, $J_{i}(\pmb{\theta})$ is $c$ -strongly convex w.r.t. $\pmb{\mu}$ , $\hat{G}_{i}$ is a positive semi-definite matrix such that $\xi \pmb{I} \preceq \hat{G}_{i} \preceq \frac{c\pmb{I}}{4}$ with $\xi \geq 0$ , $\pmb{\Sigma}_{0} \in S^{+}$ , and $\pmb{\Sigma}_{0} \preceq R\pmb{I}$ where $R > 0$ . If $\beta \leq \frac{1}{L}$ and the sequence $\{\pmb{\mu}_t\}$ generated by Algorithm 1 stays within a bounded distance of the Pareto set, i.e., $\| \pmb{\mu}_t - \pmb{\mu}^* \| \leq D$ where $\pmb{\mu}^*$ denotes a Pareto optimal solution of problem (1), then under Assumption 4.1, we have

1Tβˆ‘t=0Tβˆ’1Ez[βˆ‘i=1mΞ»it(Ji(ΞΌt+1,Ξ£t)βˆ’Ji(ΞΌβˆ—,0))]=O(1Ξ²T+log⁑TT+Ξ³).(16) \frac {1}{T} \sum_ {t = 0} ^ {T - 1} \mathbb {E} _ {\boldsymbol {z}} \left[ \sum_ {i = 1} ^ {m} \lambda_ {i} ^ {t} \left(J _ {i} \left(\boldsymbol {\mu} _ {t + 1}, \boldsymbol {\Sigma} _ {t}\right) - J _ {i} \left(\boldsymbol {\mu} ^ {*}, 0\right)\right) \right] = \mathcal {O} \left(\frac {1}{\beta T} + \frac {\log T}{T} + \gamma\right). \tag {16}

Based on Theorem 4.4 and Proposition 4.3, when $\beta = \mathcal{O}(1)$ and $\gamma = \mathcal{O}(T^{-1})$ , we have

1Tβˆ‘t=0Tβˆ’1Ez[βˆ‘i=1mΞ»it(Fi(ΞΌt+1)βˆ’Fi(ΞΌβˆ—))]=O(log⁑TT).(17) \frac {1}{T} \sum_ {t = 0} ^ {T - 1} \mathbb {E} _ {\boldsymbol {z}} \left[ \sum_ {i = 1} ^ {m} \lambda_ {i} ^ {t} \left(F _ {i} \left(\boldsymbol {\mu} _ {t + 1}\right) - F _ {i} \left(\boldsymbol {\mu} ^ {*}\right)\right) \right] = \mathcal {O} (\frac {\log T}{T}). \tag {17}

Therefore, the proposed ASMG algorithm possesses a convergence rate $\mathcal{O}\left(\frac{\log T}{T}\right)$ in convex cases. Note that Theorem 4.4 does not require each objective function $F_{i}(\pmb{x})$ to be differentiable. Hence, Theorem 4.4 holds for non-smooth convex functions ${F_{i}(\pmb{x})}$ . If $F_{i}(\pmb{x})$ is $c$ -strongly convex, then $J_{i}(\pmb{\theta})$ is also $c$ -strongly convex (Domke, 2020) and Theorem 4.4 holds.

4.2 NON-CONVEX CASES

In many practical problems, the objective functions of problem (1) are non-convex, and we aim to find a Pareto stationary solution. Similar to Proposition 4.3, we have the following result to reveal the relation between Pareto stationary solutions of problems (1) and (3).

Proposition 4.5. Suppose $p_{\pmb{\theta}}(\pmb{x})$ is a Gaussian distribution with $\pmb{\theta} = \{\pmb{\mu}, \pmb{\Sigma}\}$ and $F_{i}(\pmb{x})$ ( $i = 1, \dots, m$ ) is an $L_{F}$-Lipschitz smooth function. Let $J_{i}(\pmb{\theta}) = \mathbb{E}_{p_{\pmb{\theta}}}[F_{i}(\pmb{x})]$ and let $\pmb{\Sigma}$ be a diagonal matrix. If $\pmb{\mu}^{*}$ is a Pareto stationary solution of problem (3) and there exists $\lambda \in \Delta^{m-1}$ such that $\| \sum_{i=1}^{m} \lambda_{i} \nabla_{\pmb{\mu}} J_{i}(\pmb{\mu}^{*}) \| = 0$ , then we have $\| \sum_{i=1}^{m} \lambda_{i} \nabla F_{i}(\pmb{\mu}^{*}) \|^{2} \leq L_{F}^{2} \| \mathrm{diag}(\pmb{\Sigma}) \|_{1}$ , which implies that $\pmb{\mu}^{*}$ is an $\epsilon$-accurate Pareto stationary solution of problem (1) with $\epsilon = L_{F}^{2} \| \mathrm{diag}(\pmb{\Sigma}) \|_{1}$ .

According to Proposition 4.5, a Pareto stationary solution of problem (3) is an $\epsilon$-accurate Pareto stationary solution of problem (1). The following theorem establishes the convergence of the proposed ASMG method in the non-convex case.

Theorem 4.6. Suppose that $J_{i}(\pmb{\theta})$ ( $i = 1, \dots, m$ ) is bounded, i.e., $|J_{i}(\pmb{\theta})| \leq B$ , $\hat{G}_{i}$ is a positive semi-definite matrix such that $\xi \mathbf{I} \preceq \hat{G}_{i} \preceq b\mathbf{I}$ with $b \geq \xi > 0$ , $\Sigma_{0} \in S^{+}$ , and $\Sigma_{0} \preceq R\mathbf{I}$ with $R > 0$ . If $\beta \leq \frac{1}{RL\sqrt{d}}$ , then under Assumption 4.1 we have

\frac{1}{T} \sum_{t = 0}^{T - 1} \mathbb{E}_{\boldsymbol{z}} \left[ \left\| \sum_{i = 1}^{m} \lambda_{i}^{t} \nabla_{\boldsymbol{\mu}} J_{i}(\boldsymbol{\theta}_{t}) \right\|^{2} \right] = \mathcal{O} \left(\frac{\gamma}{\beta} + \frac{1}{\beta T} + \gamma + \beta\right). \tag{18}

According to Theorem 4.6, if $\beta = \mathcal{O}(T^{-\frac{1}{2}})$ and $\gamma = \mathcal{O}(T^{-1})$ , the proposed ASMG method achieves an $\mathcal{O}(T^{-\frac{1}{2}})$ convergence rate to a Pareto stationary solution of problem (3), which is an $\epsilon$-accurate Pareto stationary solution of problem (1). Moreover, by Theorem 4.2, when $\beta = \mathcal{O}(T^{-\frac{1}{2}})$ the diagonal entries of $\Sigma_T$ converge to zero as $T \to \infty$ , hence $\epsilon \to 0$ , yielding a Pareto stationary solution of problem (1).

5 RELATED WORKS

Several kinds of approaches have been studied for black-box optimization, such as Bayesian optimization (BO) (Srinivas et al., 2009; Lyu et al., 2019), evolution strategies (ES) (Back, 1991; Hansen, 2006), and genetic algorithms (GA) (Srinivas & Patnaik, 1994). Although BO achieves good query efficiency on low-dimensional problems, it often fails to handle high-dimensional problems with large sample budgets (Eriksson et al., 2019): Gaussian process (GP) inference with a large number of samples is expensive, and the internal optimization of the acquisition function is itself challenging. GA methods, in turn, lack convergence analysis. ES-based methods scale better to relatively high-dimensional problems and have been applied in many areas, such as reinforcement learning (Liu et al., 2019) and prompt learning (Sun et al., 2022b;a).

Among ES-based methods, CMA-ES (Hansen, 2006) is a representative one. It uses second-order information to search candidate solutions by updating the mean and covariance matrix of the likelihood of candidate distributions. The CMA-ES method is widely adopted in many learning tasks (Won et al., 2017; Sun et al., 2022b;a; Han et al., 2023). Though it is designed for single-objective black-box optimization, it is also applied to black-box multi-task learning (Sun et al., 2023), where all objectives are aggregated with equal weights. Therefore, we consider CMA-ES as an important baseline method in our experiments.

6 EMPIRICAL STUDY

In this section, we empirically evaluate the proposed ASMG method on different problems. The experiments are conducted on a single NVIDIA GeForce RTX 3090 GPU.

6.1 SYNTHETIC PROBLEMS

We compare the proposed ASMG method with the CMA-ES (Hansen, 2006), ES (Salimans et al., 2017), BES (Gao & Sener, 2022), and MMES (He et al., 2020) methods on the following three $d$-dimensional synthetic benchmark test problems:

F(\boldsymbol{x}) = \left(\sum_{i = 1}^{d} 10^{\frac{2(i - 1)}{d - 1}} |\boldsymbol{x}_{i} - 0.01|,\ \sum_{i = 1}^{d} 10^{\frac{2(i - 1)}{d - 1}} |\boldsymbol{x}_{i} + 0.01|\right), \tag{19}

F(\boldsymbol{x}) = \left(\sum_{i = 1}^{d} |\boldsymbol{x}_{i} - 0.1|^{\frac{1}{2}},\ \sum_{i = 1}^{d} |\boldsymbol{x}_{i} + 0.1|^{\frac{1}{2}}\right), \tag{20}

F(\boldsymbol{x}) = \left(\sum_{i = 1}^{d} 10^{\frac{2(i - 1)}{d - 1}} |\boldsymbol{x}_{i}|^{\frac{1}{2}},\ 10d + \sum_{i = 1}^{d} \left(\left(10^{\frac{i - 1}{d - 1}} \boldsymbol{x}_{i}\right)^{2} - 10\cos\left(2\pi 10^{\frac{i - 1}{d - 1}} \boldsymbol{x}_{i}\right)\right)\right). \tag{21}

Test problems (19)-(21) are called the shift $l_{1}$ -ellipsoid, shift $l_{\frac{1}{2}}$ -ellipsoid, and mixed ellipsoid-Rastrigin 10, respectively.
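For reference, the objective vectors (19)-(21) can be transcribed into a few lines of NumPy; this is our own sketch of the formulas (requiring $d \geq 2$), not the official benchmark implementation:

```python
import numpy as np

def weights(d, power=2.0):
    # Conditioning weights 10^{power * (i-1)/(d-1)} for i = 1, ..., d (requires d >= 2).
    return 10.0 ** (power * np.arange(d) / (d - 1))

def shift_l1_ellipsoid(x):        # problem (19)
    w = weights(len(x))
    return np.sum(w * np.abs(x - 0.01)), np.sum(w * np.abs(x + 0.01))

def shift_l_half_ellipsoid(x):    # problem (20)
    return np.sum(np.abs(x - 0.1) ** 0.5), np.sum(np.abs(x + 0.1) ** 0.5)

def mixed_ellipsoid_rastrigin(x): # problem (21)
    d = len(x)
    s = weights(d, power=1.0) * x  # scaled coordinates 10^{(i-1)/(d-1)} x_i
    f1 = np.sum(weights(d) * np.abs(x) ** 0.5)
    f2 = 10 * d + np.sum(s ** 2 - 10 * np.cos(2 * np.pi * s))
    return f1, f2
```

Each function returns the two objective values of $F(\pmb{x})$ as a tuple, which an optimizer can query without any gradient information.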

For the baseline methods, following Sun et al. (2023), we aggregate the multiple objectives with equal weights into a single objective. The results are evaluated by the Euclidean distance between the solution $\pmb{x}$ and the optimal solution set $\mathcal{P}$ , i.e., $\mathcal{E} = \mathrm{dist}(\pmb{x},\mathcal{P})$ . Due to the page limit, the details of the evaluation metric and implementation are given in Appendix F.1.
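The error metric $\mathcal{E} = \mathrm{dist}(\pmb{x},\mathcal{P})$ can be computed against a finite sample of the Pareto set; a minimal sketch, assuming $\mathcal{P}$ is represented by an array of Pareto-optimal points:

```python
import numpy as np

def dist_to_pareto_set(x, pareto_points):
    """Euclidean distance from a solution x to the nearest point of a sampled Pareto set."""
    diffs = np.atleast_2d(pareto_points) - np.asarray(x)
    return float(np.min(np.linalg.norm(diffs, axis=1)))
```

When the Pareto set has a known closed form (as for the box-shaped sets of the shifted ellipsoid problems), the nearest point can instead be computed analytically.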


Figure 1: Results on the synthetic problems with 50 samples (i.e., $N = 50$ ): (a) shift $l_{1}$ -ellipsoid; (b) shift $l_{\frac{1}{2}}$ -ellipsoid; (c) mixed ellipsoid-Rastrigin 10.

Results. Figure 1 shows the results on the three $d$-dimensional synthetic problems with 50 samples (i.e., $N = 50$ ) and $d = 100$ . The proposed ASMG method achieves an approximately linear convergence rate on the logarithmic scale and reaches high-precision solutions (i.e., $10^{-4}$ ) in all three cases. The CMA-ES method converges with high precision on problem (19) but only reaches $10^{-1}$ precision on problem (20), and the MMES method likewise fails to reach high precision on these problems. Moreover, CMA-ES and MMES fail on problem (21), while ES and BES fail on all three problems. These results suggest that it is challenging for ES and BES to optimize non-smooth or non-convex test functions without adaptively updating the mean and covariance, and they consistently demonstrate the effectiveness of the proposed ASMG method.

6.2 BLACK-BOX MULTI-TASK LEARNING

In this section, we apply the proposed ASMG method to black-box multi-task learning. Multi-task learning (MTL) (Caruana, 1997; Zhang & Yang, 2022) is a widely adopted paradigm that trains a single model to handle multiple tasks simultaneously. Given $m$ tasks, task $i$ has a training dataset $\mathcal{D}_i$ . Let $\mathcal{L}_i(\mathcal{D}_i; \mathcal{M}_{\Phi})$ denote the average loss on $\mathcal{D}_i$ for task $i$ using the model $\mathcal{M}$ with parameter $\Phi$ . Then MTL can be formulated as an MOO problem with $m$ objectives as $\min_{\Phi} (\mathcal{L}_1, \dots, \mathcal{L}_m)$ . In the conventional MTL setting, the model $\mathcal{M}$ is available for backward propagation, so the optimization problem can be solved using the gradients with respect to the model parameters, i.e., $\nabla_{\Phi} \mathcal{L}_i$ . However, in many practical scenarios, such as multi-task prompt tuning for extremely large pre-trained models (Sun et al., 2023), part of the model $\mathcal{M}$ is fixed in the service and is only accessible through an inference API. As a result, the gradients of the objectives $\mathcal{L}_i$ with respect to the local parameters $\phi \subset \Phi$ are unavailable. We refer to the case where the gradients of the task losses with respect to the learned parameter $\phi$ cannot be explicitly computed as black-box MTL.

Problem Formulation. We consider a specific black-box MTL problem, where the goal is to learn a shared prompt for all tasks using pre-trained vision-language models (Sun et al., 2023; Wang et al., 2023; Liu et al., 2023). Following the setup in Liu et al. (2023), we employ the CLIP model (Radford et al., 2021) as the base model. In this context, our model $\mathcal{M}$ can be expressed as $\mathcal{M} = \{\mathcal{M}_c, \pmb{p}\}$ , where $\mathcal{M}_c$ represents the fixed CLIP model and $\pmb{p} \in \mathbb{R}^D$ is the token embedding of the prompt. Note

that the CLIP model is treated as a black-box model, making it impossible to compute the gradient of the token embedding $\pmb{p}$ in the text encoder via backward propagation. Inspired by Sun et al. (2022b;a), instead of directly optimizing the prompt $\pmb{p}$ , we optimize $\pmb{v} \in \mathbb{R}^d$ and employ a fixed, randomly initialized matrix $\pmb{A} \in \mathbb{R}^{D \times d}$ to project $\pmb{v}$ onto the token embedding space. Consequently, the corresponding black-box MTL problem can be formulated as

\min_{\boldsymbol{v} \in \mathbb{R}^{d}} \left(\mathcal{L}_{1}\left(\mathcal{D}_{1}; \left\{\mathcal{M}_{c}, \boldsymbol{A}\boldsymbol{v}\right\}\right), \dots, \mathcal{L}_{m}\left(\mathcal{D}_{m}; \left\{\mathcal{M}_{c}, \boldsymbol{A}\boldsymbol{v}\right\}\right)\right). \tag{22}

The details of the CLIP model with token embedding and the loss function $\mathcal{L}_i$ are put in Appendix F.2.
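The reparameterization $\pmb{p} = \pmb{A}\pmb{v}$ in problem (22) can be sketched as follows; the dimensions and the Gaussian initialization of $\pmb{A}$ are illustrative assumptions, not the exact settings of Appendix F.2:

```python
import numpy as np

D, d = 4096, 256                 # token-embedding and intrinsic dimensions (illustrative values)
rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0 / np.sqrt(d), size=(D, d))  # fixed random projection, drawn once and frozen
v = np.zeros(d)                  # the low-dimensional variable that is actually optimized

def prompt_embedding(v):
    # Map the optimized variable into the token-embedding space of the prompt: p = A v.
    return A @ v

p = prompt_embedding(v)
```

Only $\pmb{v}$ is updated during optimization; the projection $\pmb{A}$ stays fixed, so the black-box model always receives a prompt embedding of the correct dimension $D$.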

Baselines. The proposed ASMG method is compared with (i) the zero-shot setting, which evaluates the model without prompt tuning on downstream datasets not seen during the training phase (Radford et al., 2021); (ii) four ES-based black-box optimization methods, i.e., ES (Salimans et al., 2017), BES (Gao & Sener, 2022), MMES (He et al., 2020), and CMA-ES (Hansen, 2006), where we transform the multiple objectives into a single objective with equal weights; and (iii) the ASMG-EW method, where the weight vector is fixed as $\lambda_i^t = \frac{1}{m}$ for ASMG during optimization. The implementation details are given in Appendix F.2.
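The equal-weight aggregation used by the single-objective baselines can be sketched as a simple wrapper (the function name is our own, not from the paper's code):

```python
def equal_weight_scalarize(objectives):
    # Collapse a list of black-box objectives into one scalar objective with weights 1/m.
    m = len(objectives)
    def scalarized(x):
        return sum(obj(x) for obj in objectives) / m
    return scalarized
```

The resulting scalarized function is then passed unchanged to the single-objective optimizer (ES, BES, MMES, or CMA-ES).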

Table 1: Results on the Office-31 and Office-home datasets. Each experiment is repeated over 3 random seeds and the mean classification accuracy (%) is reported. The best result across all groups is in bold and the best result in each comparison group is underlined.

Method | A | D | W | Avg (Office-31) | Ar | Cl | Pr | Rw | Avg (Office-home)
Zero-shot | 73.68 | 79.51 | 66.67 | 73.28 | 73.06 | 51.68 | 83.79 | 81.41 | 72.48
Dimension d = 256
ES | 75.10 | 80.05 | 71.67 | 75.61±1.18 | 71.16 | 47.96 | 80.33 | 80.90 | 70.09±0.51
BES | 72.65 | 80.60 | 73.52 | 75.59±0.90 | 68.94 | 45.97 | 81.53 | 79.42 | 68.97±1.40
MMES | 75.90 | 83.33 | 76.67 | 78.63±0.59 | 71.85 | 49.26 | 82.10 | 81.37 | 71.14±0.69
CMA-ES | 76.24 | 87.98 | 75.93 | 80.05±1.34 | 69.26 | 50.09 | 85.73 | 82.13 | 71.80±0.22
ASMG-EW | 76.52 | 83.88 | 77.22 | 79.21±1.20 | 70.02 | 47.13 | 80.23 | 79.50 | 69.22±1.39
ASMG | 77.83 | 86.61 | 80.56 | 81.67±0.64 | 74.26 | 53.52 | 86.23 | 83.03 | 74.26±1.06
Dimension d = 512
ES | 75.95 | 81.69 | 75.37 | 77.67±0.91 | 70.78 | 48.39 | 82.06 | 81.15 | 70.60±0.36
BES | 75.73 | 82.51 | 74.81 | 77.68±1.88 | 69.39 | 48.18 | 82.94 | 80.29 | 70.20±0.51
MMES | 76.01 | 84.70 | 77.22 | 79.31±0.34 | 70.40 | 50.45 | 85.10 | 82.56 | 72.13±1.19
CMA-ES | 76.75 | 87.16 | 77.22 | 80.38±0.48 | 70.46 | 50.02 | 86.26 | 82.02 | 72.19±0.27
ASMG-EW | 78.01 | 84.70 | 76.67 | 79.79±1.45 | 69.20 | 46.91 | 80.51 | 80.68 | 69.33±0.52
ASMG | 78.63 | 87.43 | 78.33 | 81.47±0.37 | 73.50 | 52.84 | 85.88 | 83.78 | 74.00±0.81
Dimension d = 1024
ES | 72.59 | 78.14 | 74.81 | 75.18±1.91 | 70.34 | 47.38 | 82.59 | 80.54 | 70.21±0.08
BES | 72.14 | 79.51 | 71.67 | 74.44±0.84 | 70.27 | 48.25 | 79.94 | 80.00 | 69.62±0.81
MMES | 77.09 | 81.42 | 75.74 | 78.09±0.95 | 71.03 | 49.19 | 84.29 | 81.95 | 71.61±0.41
CMA-ES | 76.87 | 87.16 | 77.59 | 80.54±0.41 | 71.28 | 50.92 | 85.73 | 82.49 | 72.61±0.39
ASMG-EW | 77.15 | 82.51 | 77.78 | 79.15±1.48 | 69.20 | 47.09 | 81.36 | 80.86 | 69.63±0.88
ASMG | 76.30 | 87.70 | 80.19 | 81.40±0.49 | 73.18 | 51.82 | 85.84 | 83.21 | 73.51±0.07

Results. Table 1 presents the experimental results on the Office-31 and Office-home datasets for three different dimensions $d$ . The ASMG method consistently outperforms all baselines in terms of average classification accuracy across different settings, highlighting its effectiveness. The comparison between ASMG and ASMG-EW demonstrates the benefit of the adaptive stochastic gradient. Notably, even in the high-dimensional setting (i.e., $d = 1024$ ), our method maintains good performance. Remarkably, ASMG achieves the highest average classification accuracy when $d = 256$ , surpassing zero-shot by $8.4\%$ on Office-31 and $1.8\%$ on Office-home, which further validates the effectiveness of the proposed ASMG method.

7 CONCLUSION

In this paper, we propose ASMG, a novel and effective adaptive stochastic gradient-based method for solving the black-box MOO problem. Specifically, we smooth the black-box MOO problem into a Gaussian smoothed MOO problem and propose a novel adaptive stochastic gradient approximation approach to solve it. Theoretically, we explore the connection between the original MOO problem and the corresponding Gaussian smoothed MOO problem, and we provide convergence guarantees for ASMG in both convex and non-convex scenarios. Moreover, empirical studies on synthetic problems and black-box MTL demonstrate the effectiveness of the proposed ASMG method.

ACKNOWLEDGEMENTS

This work is supported by National Key R&D Program of China 2022ZD0160300, NSFC key grant under grant no. 62136005, NSFC general grant under grant no. 62076118, and Shenzhen fundamental research program JCYJ20210324105000003.

REFERENCES

Aitor Arrieta, Shuai Wang, Ainhoa Arruabarrenna, Urtzi Markiegi, Goiuria Sagardui, and Leire Etxeberria. Multi-objective black-box test case selection for cost-effectively testing simulation models. In Proceedings of the genetic and evolutionary computation conference, pp. 1411-1418, 2018.
Thomas Back. A survey of evolution strategies. In Proc. of Fourth Internal. Conf. on Genetic Algorithms, 1991.
Jonathan M Borwein. A very complicated proof of the minimax theorem. Minimax Theory and its Applications, 1(1):21-27, 2016.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901, 2020.
Rich Caruana. Multitask learning. Machine learning, 28:41-75, 1997.
Guodong Chen, Xu Han, Guiping Liu, Chao Jiang, and Ziheng Zhao. An efficient multi-objective optimization method for black-box functions using sequential approximate technique. Applied Soft Computing, 12(1):14-27, 2012.
Kalyanmoy Deb, Karthik Sindhya, and Jussi Hakanen. Multi-objective optimization. In Decision sciences, pp. 161-200. CRC Press, 2016.
Jean-Antoine DΓ©sideri. Multiple-gradient descent algorithm (mgda) for multiobjective optimization. Comptes Rendus Mathematique, 350(5-6):313-318, 2012.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Steven Diamond and Stephen Boyd. CVXPY: A Python-embedded modeling language for convex optimization. Journal of Machine Learning Research, 17(83):1-5, 2016.
Justin Domke. Provable smoothness guarantees for black-box variational inference. In International Conference on Machine Learning, pp. 2587-2596. PMLR, 2020.
David Eriksson, Michael Pearce, Jacob Gardner, Ryan D Turner, and Matthias Poloczek. Scalable global optimization via local bayesian optimization. Advances in neural information processing systems, 32, 2019.
Heshan Devaka Fernando, Han Shen, Miao Liu, Subhajit Chaudhury, Keerthiram Murugesan, and Tianyi Chen. Mitigating gradient bias in multi-objective learning: A provably convergent approach. In The Eleventh International Conference on Learning Representations, 2022.
JΓΆrg Fliege and Benar Fux Svaiter. Steepest descent methods for multicriteria optimization. Mathematical methods of operations research, 51:479-494, 2000.
JΓΆrg Fliege, A Ismael F Vaz, and LuΓ­s Nunes Vicente. Complexity of gradient descent for multiobjective optimization. Optimization Methods and Software, 34(5):949-959, 2019.
Katelyn Gao and Ozan Sener. Generalizing gaussian smoothing for random search. In International Conference on Machine Learning, pp. 7077-7101. PMLR, 2022.
Chengcheng Han, Liqing Cui, Renyu Zhu, Jianing Wang, Nuo Chen, Qiushi Sun, Xiang Li, and Ming Gao. When gradient descent meets derivative-free optimization: A match made in black-box scenario. arXiv preprint arXiv:2305.10013, 2023.

Nikolaus Hansen. The cma evolution strategy: a comparing review. Towards a new evolutionary computation: Advances in the estimation of distribution algorithms, pp. 75-102, 2006.
Xiaoyu He, Zibin Zheng, and Yuren Zhou. Mmes: Mixture model-based evolution strategy for large-scale optimization. IEEE Transactions on Evolutionary Computation, 25(2):320-333, 2020.
Mina Konakovic Lukovic, Yunsheng Tian, and Wojciech Matusik. Diversity-guided multi-objective bayesian optimization with batch evaluations. Advances in Neural Information Processing Systems, 33:17708-17720, 2020.
Marco Laumanns, Lothar Thiele, Eckart Zitzler, Emo Welzl, and Kalyanmoy Deb. Running time analysis of multi-objective evolutionary algorithms on a simple discrete optimization problem. In Parallel Problem Solving from Nature - PPSN VII: 7th International Conference, Granada, Spain, September 7-11, 2002, Proceedings 7, pp. 44-53. Springer, 2002.
Baijiong Lin and Yu Zhang. Libmtl: A python library for deep multi-task learning. Journal of Machine Learning Research, 24(1-7):18, 2023.
Baijiong Lin, Feiyang Ye, Yu Zhang, and Ivor Tsang. Reasonable effectiveness of random weighting: A litmus test for multi-task learning. Transactions on Machine Learning Research, 2022.
Baijiong Lin, Weisen Jiang, Feiyang Ye, Yu Zhang, Pengguang Chen, Ying-Cong Chen, Shu Liu, and James Kwok. Dual-balancing for multi-task learning. arXiv preprint arXiv:2308.12029, 2023.
Bo Liu, Xingchao Liu, Xiaojie Jin, Peter Stone, and Qiang Liu. Conflict-averse gradient descent for multi-task learning. Advances in Neural Information Processing Systems, 34:18878-18890, 2021.
Guoqing Liu, Li Zhao, Feidiao Yang, Jiang Bian, Tao Qin, Nenghai Yu, and Tie-Yan Liu. Trust region evolution strategies. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 4352-4359, 2019.
Suyun Liu and Luis Nunes Vicente. The stochastic multi-gradient algorithm for multi-objective optimization and its application to supervised machine learning. Annals of Operations Research, pp. 1-30, 2021.
Yajing Liu, Yunig Lu, Hao Liu, Yaozu An, Zhuoran Xu, Zhuokun Yao, Baofeng Zhang, Zhiwei Xiong, and Chenguang Gui. Hierarchical prompt learning for multi-task learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10888-10898, 2023.
Yueming Lyu and Ivor W Tsang. Black-box optimizer with stochastic implicit natural gradient. In Machine Learning and Knowledge Discovery in Databases. Research Track: European Conference, ECML PKDD 2021, Bilbao, Spain, September 13-17, 2021, Proceedings, Part III 21, pp. 217-232. Springer, 2021.
Yueming Lyu, Yuan Yuan, and Ivor W Tsang. Efficient batch black-box optimization with deterministic regret bounds. arXiv preprint arXiv:1905.10041, 2019.
Debabrata Mahapatra, Chaosheng Dong, Yetian Chen, and Michinari Momma. Multi-label learning to rank through multi-objective optimization. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2023, Long Beach, CA, USA, August 6-10, 2023, pp. 4605-4616, 2023a.
Debabrata Mahapatra, Chaosheng Dong, and Michinari Momma. Querywise fair learning to rank through multi-objective optimization. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2023, Long Beach, CA, USA, August 6-10, 2023, pp. 1653-1664, 2023b.
Michinari Momma, Chaosheng Dong, and Jia Liu. A multi-objective/multi-task learning framework induced by pareto stationarity. In International Conference on Machine Learning, pp. 15895-15907. PMLR, 2022.

Prakruthi Prabhakar, Yiping Yuan, Guangyu Yang, Wensheng Sun, and Ajith Muralidharan. Multi-objective optimization of notifications using offline reinforcement learning. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 3752-3760, 2022.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pp. 8748-8763. PMLR, 2021.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research, 21(1):5485-5551, 2020.
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International conference on machine learning, pp. 1278-1286. PMLR, 2014.
Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. Adapting visual category models to new domains. In European conference on computer vision, pp. 213-226. Springer, 2010.
Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, and Ilya Sutskever. Evolution strategies as a scalable alternative to reinforcement learning. arXiv preprint arXiv:1703.03864, 2017.
Ozan Sener and Vladlen Koltun. Multi-task learning as multi-objective optimization. Advances in neural information processing systems, 31, 2018.
Mandavilli Srinivas and Lalit M Patnaik. Genetic algorithms: A survey. computer, 27(6):17-26, 1994.
Niranjan Srinivas, Andreas Krause, Sham M Kakade, and Matthias Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. arXiv preprint arXiv:0912.3995, 2009.
Tianxiang Sun, Zhengfu He, Hong Qian, Yunhua Zhou, Xuan-Jing Huang, and Xipeng Qiu. Bbtv2: towards a gradient-free future with large language models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 3916-3930, 2022a.
Tianxiang Sun, Yunfan Shao, Hong Qian, Xuanjing Huang, and Xipeng Qiu. Black-box tuning for language-model-as-a-service. In Proceedings of ICML, 2022b.
Tianxiang Sun, Zhengfu He, Qin Zhu, Xipeng Qiu, and Xuan-Jing Huang. Multitask pre-training of modular prompt for chinese few-shot learning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 11156-11172, 2023.
Philip S Thomas, Joelle Pineau, Romain Laroche, et al. Multi-objective spibb: Seldonian offline policy improvement with safety constraints in finite mdps. Advances in Neural Information Processing Systems, 34:2004-2017, 2021.
Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan. Deep hashing network for unsupervised domain adaptation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5018-5027, 2017.
G Gary Wang and Songqing Shan. An efficient pareto set identification approach for multi-objective optimization on black-box functions. In International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, volume 46946, pp. 279-291, 2004.
Zhen Wang, Rameswar Panda, Leonid Karlinsky, Rogerio Feris, Huan Sun, and Yoon Kim. Multitask prompt tuning enables parameter-efficient transfer learning. arXiv preprint arXiv:2303.02861, 2023.
Daan Wierstra, Tom Schaul, Tobias Glasmachers, Yi Sun, Jan Peters, and JΓΌrgen Schmidhuber. Natural evolution strategies. The Journal of Machine Learning Research, 15(1):949-980, 2014.

Jungdam Won, Jongho Park, Kwanyu Kim, and Jehee Lee. How to train your dragon: example-guided control of flapping flight. ACM Transactions on Graphics (TOG), 36(6):1-13, 2017.
Feiyang Ye, Baijiong Lin, Zhixiong Yue, Pengxin Guo, Qiao Xiao, and Yu Zhang. Multi-objective meta learning. Advances in Neural Information Processing Systems, 34:21338-21351, 2021.
Feiyang Ye, Baijiong Lin, Xiaofeng Cao, Yu Zhang, and Ivor Tsang. A first-order multi-gradient algorithm for multi-objective bi-level optimization. arXiv preprint arXiv:2401.09257, 2024.
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T Kwok, Zhenguo Li, Adrian Weller, and Weiyang Liu. Metamath: Bootstrap your own mathematical questions for large language models. arXiv preprint arXiv:2309.12284, 2023.
Runsheng Yu, Weiyu Chen, Xinrun Wang, and James Kwok. Enhancing meta learning via multi-objective soft improvement functions. In The Eleventh International Conference on Learning Representations, 2022.
Tianhe Yu, Saurabh Kumar, Abhishek Gupta, Sergey Levine, Karol Hausman, and Chelsea Finn. Gradient surgery for multi-task learning. Advances in Neural Information Processing Systems, 33: 5824-5836, 2020.
Richard Zhang and Daniel Golovin. Random hypervolume scalarizations for provable multi-objective black box optimization. In International Conference on Machine Learning, pp. 11096-11105. PMLR, 2020.
Yu Zhang and Qiang Yang. A survey on multi-task learning. IEEE Transactions on Knowledge and Data Engineering, 34(12):5586-5609, 2022.
Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Conditional prompt learning for vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16816-16825, 2022a.
Shiji Zhou, Wenpeng Zhang, Jiyan Jiang, Wenliang Zhong, Jinjie Gu, and Wenwu Zhu. On the convergence of stochastic multi-objective gradient manipulation and beyond. Advances in Neural Information Processing Systems, 35:38103-38115, 2022b.
Antanas Ε½ilinskas. A statistical model-based algorithm for 'black-box' multi-objective optimisation. International Journal of Systems Science, 45(1):82-93, 2014.

APPENDIX

A PROOF OF THE RESULT IN SECTION 3.1

A.1 PROOF OF THE UPDATE RULE

The objective of the inner minimization of problem (6) can be rewritten as

\begin{array}{l} \left\langle \sum_{i = 1}^{m} \lambda_{i} \nabla_{\boldsymbol{\theta}} J_{i}(\boldsymbol{\theta}_{t}), \boldsymbol{\theta} \right\rangle + \frac{1}{\beta_{t}} \mathrm{KL}(p_{\boldsymbol{\theta}} \| p_{\boldsymbol{\theta}_{t}}) = \boldsymbol{\mu}^{\top} \left(\sum_{i = 1}^{m} \lambda_{i} \nabla_{\boldsymbol{\mu}} J_{i}(\boldsymbol{\theta}_{t})\right) + \sum_{i = 1}^{m} \lambda_{i} \operatorname{tr}\left(\boldsymbol{\Sigma} \nabla_{\boldsymbol{\Sigma}} J_{i}(\boldsymbol{\theta}_{t})\right) \\ + \frac{1}{2\beta_{t}} \left[ \operatorname{tr}\left(\boldsymbol{\Sigma}_{t}^{-1} \boldsymbol{\Sigma}\right) + \left(\boldsymbol{\mu} - \boldsymbol{\mu}_{t}\right)^{\top} \boldsymbol{\Sigma}_{t}^{-1} \left(\boldsymbol{\mu} - \boldsymbol{\mu}_{t}\right) + \log \frac{|\boldsymbol{\Sigma}_{t}|}{|\boldsymbol{\Sigma}|} - d \right], \tag{23} \end{array}

where $\nabla_{\pmb{\mu}}J_{i}(\pmb{\theta}_{t})$ and $\nabla_{\pmb{\Sigma}}J_{i}(\pmb{\theta}_{t})$ denote the derivatives w.r.t. $\pmb{\mu}$ and $\pmb{\Sigma}$ evaluated at $\pmb{\mu} = \pmb{\mu}_t$ and $\pmb{\Sigma} = \pmb{\Sigma}_t$ , respectively. The above problem is convex with respect to $\pmb{\mu}$ and $\pmb{\Sigma}$ . Taking the derivatives w.r.t. $\pmb{\mu}$ and $\pmb{\Sigma}$ and setting them to zero, we obtain

\sum_{i = 1}^{m} \lambda_{i} \nabla_{\boldsymbol{\mu}} J_{i}(\boldsymbol{\theta}_{t}) + \frac{1}{\beta_{t}} \boldsymbol{\Sigma}_{t}^{-1} (\boldsymbol{\mu} - \boldsymbol{\mu}_{t}) = 0, \tag{24}

\sum_{i = 1}^{m} \lambda_{i} \nabla_{\boldsymbol{\Sigma}} J_{i}(\boldsymbol{\theta}_{t}) + \frac{1}{2\beta_{t}} \left[ \boldsymbol{\Sigma}_{t}^{-1} - \boldsymbol{\Sigma}^{-1} \right] = 0. \tag{25}
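Solving Eqs. (24)-(25) gives the closed-form updates $\pmb{\mu} = \pmb{\mu}_t - \beta_t \pmb{\Sigma}_t \sum_{i=1}^{m}\lambda_i \nabla_{\pmb{\mu}} J_i(\pmb{\theta}_t)$ and $\pmb{\Sigma}^{-1} = \pmb{\Sigma}_t^{-1} + 2\beta_t \sum_{i=1}^{m}\lambda_i \nabla_{\pmb{\Sigma}} J_i(\pmb{\theta}_t)$. As a quick numerical sanity check of the stationarity condition (24), with randomly generated quantities standing in for the aggregated gradient (a sketch, not part of the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
d, beta = 5, 0.1
mu_t = rng.normal(size=d)
Sigma_t = np.diag(rng.uniform(0.5, 1.5, size=d))  # diagonal, positive definite
g_mu = rng.normal(size=d)  # plays the role of sum_i lambda_i grad_mu J_i(theta_t)

# Closed-form minimizer w.r.t. mu implied by Eq. (24).
mu_next = mu_t - beta * Sigma_t @ g_mu

# The first-order condition of Eq. (24) should vanish at mu_next.
residual = g_mu + np.linalg.inv(Sigma_t) @ (mu_next - mu_t) / beta
```

The residual is exactly zero up to floating-point error, confirming that the closed-form mean update satisfies Eq. (24).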

Substituting the above equalities into the regularization term of the objective of the outer optimization problem we have

\begin{array}{l} \frac{1}{\beta_{t}} \mathrm{KL}\left(p_{\boldsymbol{\theta}} \| p_{\boldsymbol{\theta}_{t}}\right) = \frac{1}{2\beta_{t}} \left[ \operatorname{tr}\left(\boldsymbol{\Sigma}_{t}^{-1} \boldsymbol{\Sigma}\right) + \left(\boldsymbol{\mu} - \boldsymbol{\mu}_{t}\right)^{\top} \boldsymbol{\Sigma}_{t}^{-1} \left(\boldsymbol{\mu} - \boldsymbol{\mu}_{t}\right) + \log \frac{\left|\boldsymbol{\Sigma}_{t}\right|}{\left|\boldsymbol{\Sigma}\right|} - d \right] \quad (26) \\ = \frac{1}{2\beta_{t}} \operatorname{tr}\left(\boldsymbol{I} - 2\beta_{t} \sum_{i = 1}^{m} \lambda_{i} \nabla_{\boldsymbol{\Sigma}} J_{i}(\boldsymbol{\theta}_{t}) \boldsymbol{\Sigma}\right) - \frac{1}{2} \left(\boldsymbol{\mu} - \boldsymbol{\mu}_{t}\right)^{\top} \sum_{i = 1}^{m} \lambda_{i} \nabla_{\boldsymbol{\mu}} J_{i}(\boldsymbol{\theta}_{t}) + \frac{1}{2\beta_{t}} \log\left(\left|\boldsymbol{\Sigma}_{t} \left(\boldsymbol{\Sigma}_{t}^{-1} + 2\beta_{t} \sum_{i = 1}^{m} \lambda_{i} \nabla_{\boldsymbol{\Sigma}} J_{i}(\boldsymbol{\theta}_{t})\right)\right|\right) - \frac{d}{2\beta_{t}} \quad (27) \\ = \frac{d}{2\beta_{t}} - \left\langle \sum_{i = 1}^{m} \lambda_{i} \nabla_{\boldsymbol{\Sigma}} J_{i}(\boldsymbol{\theta}_{t}), \boldsymbol{\Sigma} \right\rangle - \frac{1}{2} \left(\boldsymbol{\mu} - \boldsymbol{\mu}_{t}\right)^{\top} \sum_{i = 1}^{m} \lambda_{i} \nabla_{\boldsymbol{\mu}} J_{i}(\boldsymbol{\theta}_{t}) + \frac{1}{2\beta_{t}} \log\left(\left|\boldsymbol{I} + 2\beta_{t} \boldsymbol{\Sigma}_{t} \sum_{i = 1}^{m} \lambda_{i} \nabla_{\boldsymbol{\Sigma}} J_{i}(\boldsymbol{\theta}_{t})\right|\right) - \frac{d}{2\beta_{t}} \quad (28) \\ = -\left\langle \sum_{i = 1}^{m} \lambda_{i} \nabla_{\boldsymbol{\Sigma}} J_{i}(\boldsymbol{\theta}_{t}), \boldsymbol{\Sigma} - \boldsymbol{\Sigma}_{t} \right\rangle - \frac{1}{2} \left(\boldsymbol{\mu} - \boldsymbol{\mu}_{t}\right)^{\top} \sum_{i = 1}^{m} \lambda_{i} \nabla_{\boldsymbol{\mu}} J_{i}(\boldsymbol{\theta}_{t}) + \frac{Q_{t}}{2\beta_{t}} \quad (29) \end{array}

where $Q_{t}$ is given as below.

\begin{array}{l} Q_{t} = \log\left(\left|\boldsymbol{I} + 2\beta_{t} \boldsymbol{\Sigma}_{t} \sum_{i = 1}^{m} \lambda_{i} \nabla_{\boldsymbol{\Sigma}} J_{i}(\boldsymbol{\theta}_{t})\right|\right) - 2\beta_{t} \left\langle \sum_{i = 1}^{m} \lambda_{i} \nabla_{\boldsymbol{\Sigma}} J_{i}(\boldsymbol{\theta}_{t}), \boldsymbol{\Sigma}_{t} \right\rangle \quad (30) \\ = \log\left(\left|\boldsymbol{I} + 2\beta_{t} \boldsymbol{\Sigma}_{t} \sum_{i = 1}^{m} \lambda_{i} \nabla_{\boldsymbol{\Sigma}} J_{i}(\boldsymbol{\theta}_{t})\right|\right) - \operatorname{tr}\left(2\beta_{t} \boldsymbol{\Sigma}_{t} \sum_{i = 1}^{m} \lambda_{i} \nabla_{\boldsymbol{\Sigma}} J_{i}(\boldsymbol{\theta}_{t})\right). \quad (31) \end{array}

Since $\boldsymbol{\Sigma}_t$ and $\nabla_{\boldsymbol{\Sigma}}J_i(\boldsymbol{\theta}_t)$ are both diagonal matrices, we denote $\mathrm{diag}\left(\boldsymbol{\Sigma}_t\sum_{i=1}^{m}\lambda_i\nabla_{\boldsymbol{\Sigma}}J_i(\boldsymbol{\theta}_t)\right) = (v_t^1,\dots,v_t^d)$ at the $t$-th iteration. Then we have

Q_t=\log\left(\prod_{i=1}^{d}\left(1+2\beta_t v_t^i\right)\right)-\sum_{i=1}^{d}2\beta_t v_t^i=\sum_{i=1}^{d}\left(\log\left(1+2\beta_t v_t^i\right)-2\beta_t v_t^i\right)=\sum_{i=1}^{d}\left(-2\beta_t^2\left(v_t^i\right)^2+\mathcal{O}\left(\beta_t^3\left(v_t^i\right)^3\right)\right), \tag{32}

where the last equality is due to a Taylor expansion of $\log(1+2\beta_t v_t^i)$. Note that the $\mathcal{O}(\beta_t^3 (v_t^i)^3)$ term decreases to zero as $\beta_t\rightarrow 0$, so we can approximate $Q_t$ by $\sum_{i=1}^{d} -2\beta_t^2(v_t^i)^2$. Then, substituting Eqs. (24), (29), and $Q_t$ into the outer optimization problem of problem (6), we have

\begin{array}{l}
\left\langle\sum_{i=1}^{m}\lambda_i\nabla_{\boldsymbol{\theta}}J_i(\boldsymbol{\theta}_t),\boldsymbol{\theta}-\boldsymbol{\theta}_t\right\rangle+\frac{1}{\beta_t}\mathrm{KL}\left(p_{\boldsymbol{\theta}}\|p_{\boldsymbol{\theta}_t}\right) \quad (33)\\
=\left\langle\sum_{i=1}^{m}\lambda_i\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t),\boldsymbol{\mu}-\boldsymbol{\mu}_t\right\rangle+\left\langle\sum_{i=1}^{m}\lambda_i\nabla_{\boldsymbol{\Sigma}}J_i(\boldsymbol{\theta}_t),\boldsymbol{\Sigma}-\boldsymbol{\Sigma}_t\right\rangle+\frac{1}{\beta_t}\mathrm{KL}\left(p_{\boldsymbol{\theta}}\|p_{\boldsymbol{\theta}_t}\right) \quad (34)\\
=\frac{1}{2}\left\langle\sum_{i=1}^{m}\lambda_i\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t),\boldsymbol{\mu}-\boldsymbol{\mu}_t\right\rangle+\frac{Q_t}{2\beta_t} \quad (35)\\
=-\frac{\beta_t}{2}\left\langle\sum_{i=1}^{m}\lambda_i\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t),\boldsymbol{\Sigma}_t\sum_{i=1}^{m}\lambda_i\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t)\right\rangle-\beta_t\left\langle\boldsymbol{\Sigma}_t\sum_{i=1}^{m}\lambda_i\nabla_{\boldsymbol{\Sigma}}J_i(\boldsymbol{\theta}_t),\boldsymbol{\Sigma}_t\sum_{i=1}^{m}\lambda_i\nabla_{\boldsymbol{\Sigma}}J_i(\boldsymbol{\theta}_t)\right\rangle. \quad (36)
\end{array}
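The second-order approximation of $Q_t$ used in the step from Eq. (35) to Eq. (36) can be checked numerically. A minimal sketch, using arbitrary illustrative values for the diagonal entries $v_t^i$, confirms that the approximation error decays at the cubic rate $\mathcal{O}(\beta_t^3)$ claimed in Eq. (32):

```python
import math

def q_exact(vs, beta):
    # exact Q_t = sum_i [log(1 + 2*beta*v_i) - 2*beta*v_i], as in Eq. (32)
    return sum(math.log(1.0 + 2.0 * beta * v) - 2.0 * beta * v for v in vs)

def q_approx(vs, beta):
    # second-order Taylor approximation: sum_i -2*beta^2*v_i^2
    return sum(-2.0 * beta**2 * v**2 for v in vs)

vs = [0.3, -0.1, 0.5]  # illustrative values of v_t^i
errs = [abs(q_exact(vs, b) - q_approx(vs, b)) for b in (1e-2, 1e-3)]
# shrinking beta by 10x shrinks the error by roughly 1000x (cubic decay)
```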

Therefore, the outer optimization problem is equivalent to the following problem

min⁑λtβˆˆΞ”mβˆ’1βˆ₯Ξ£t12βˆ‘i=1mΞ»iβˆ‡ΞΈJi(ΞΈt)βˆ₯2+2βˆ₯diag⁑(Ξ£tβˆ‘i=1mΞ»iβˆ‡Ξ£Ji(ΞΈt))βˆ₯2,(37) \min _ {\boldsymbol {\lambda} ^ {t} \in \Delta^ {m - 1}} \left\| \boldsymbol {\Sigma} _ {t} ^ {\frac {1}{2}} \sum_ {i = 1} ^ {m} \lambda_ {i} \nabla_ {\boldsymbol {\theta}} J _ {i} (\boldsymbol {\theta} _ {t}) \right\| ^ {2} + 2 \left\| \operatorname {d i a g} \left(\boldsymbol {\Sigma} _ {t} \sum_ {i = 1} ^ {m} \lambda_ {i} \nabla_ {\boldsymbol {\Sigma}} J _ {i} (\boldsymbol {\theta} _ {t})\right) \right\| ^ {2}, \tag {37}

where we reach the result in Eq. (10).
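Problem (37) is a quadratic program over the probability simplex. For $m = 2$ objectives, and keeping only the first term (with $\boldsymbol{\Sigma}_t^{\frac{1}{2}}$ absorbed into the gradients), it reduces to a one-dimensional problem with a closed-form solution. The following sketch is a minimal illustration under those simplifications, not the full solver:

```python
import numpy as np

def min_norm_weight(g1, g2):
    """Solve min_{l in [0,1]} ||l*g1 + (1-l)*g2||^2 in closed form."""
    diff = g1 - g2
    denom = float(diff @ diff)
    if denom == 0.0:  # identical gradients: any weight is optimal
        return 0.5
    return float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))

g1 = np.array([1.0, 0.0])
g2 = np.array([0.0, 1.0])
lam = min_norm_weight(g1, g2)          # 0.5 for these orthogonal gradients
combined = lam * g1 + (1.0 - lam) * g2  # norm no larger than either gradient
```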

A.2 PROOF OF THEOREM 3.1

We now derive the gradients of $\mathbb{E}_{p_{\boldsymbol{\theta}}}[F_i(\boldsymbol{x})]$ w.r.t. $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$.

\begin{array}{l}
\nabla_{\boldsymbol{\mu}}\mathbb{E}_{p_{\boldsymbol{\theta}}}[F_i(\boldsymbol{x})]=\mathbb{E}_{p_{\boldsymbol{\theta}}}\left[F_i(\boldsymbol{x})\nabla_{\boldsymbol{\mu}}\log(p(\boldsymbol{x};\boldsymbol{\mu},\boldsymbol{\Sigma}))\right] \quad (38)\\
=\mathbb{E}_{p_{\boldsymbol{\theta}}}\left[F_i(\boldsymbol{x})\nabla_{\boldsymbol{\mu}}\left(-\frac{1}{2}(\boldsymbol{x}-\boldsymbol{\mu})^{\top}\boldsymbol{\Sigma}^{-1}(\boldsymbol{x}-\boldsymbol{\mu})\right)\right] \quad (39)\\
=\mathbb{E}_{p_{\boldsymbol{\theta}}}\left[\boldsymbol{\Sigma}^{-1}(\boldsymbol{x}-\boldsymbol{\mu})F_i(\boldsymbol{x})\right]. \quad (40)
\end{array}

We further have

\begin{array}{l}
\nabla_{\boldsymbol{\Sigma}}\mathbb{E}_{p_{\boldsymbol{\theta}}}[F_i(\boldsymbol{x})]=\mathbb{E}_{p_{\boldsymbol{\theta}}}\left[F_i(\boldsymbol{x})\nabla_{\boldsymbol{\Sigma}}\log(p(\boldsymbol{x};\boldsymbol{\mu},\boldsymbol{\Sigma}))\right] \quad (41)\\
=\mathbb{E}_{p_{\boldsymbol{\theta}}}\left[F_i(\boldsymbol{x})\nabla_{\boldsymbol{\Sigma}}\left(-\frac{1}{2}(\boldsymbol{x}-\boldsymbol{\mu})^{\top}\boldsymbol{\Sigma}^{-1}(\boldsymbol{x}-\boldsymbol{\mu})-\frac{1}{2}\log\det(\boldsymbol{\Sigma})\right)\right] \quad (42)\\
=\frac{1}{2}\mathbb{E}_{p_{\boldsymbol{\theta}}}\left[\left(\boldsymbol{\Sigma}^{-1}(\boldsymbol{x}-\boldsymbol{\mu})(\boldsymbol{x}-\boldsymbol{\mu})^{\top}\boldsymbol{\Sigma}^{-1}-\boldsymbol{\Sigma}^{-1}\right)F_i(\boldsymbol{x})\right], \quad (43)
\end{array}

where we reach the conclusion.
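The identity in Eq. (40) can be verified by Monte Carlo simulation. A minimal sketch, using the illustrative test function $F(\boldsymbol{x}) = \|\boldsymbol{x}\|^2$ (for which $\nabla_{\boldsymbol{\mu}}\mathbb{E}_{p_{\boldsymbol{\theta}}}[F(\boldsymbol{x})] = 2\boldsymbol{\mu}$ in closed form) and a diagonal $\boldsymbol{\Sigma}$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 2, 400_000
mu = np.array([0.5, -0.3])
sigma = np.array([0.2, 0.4])        # diagonal of Sigma

F = lambda x: (x ** 2).sum(axis=1)  # F(x) = ||x||^2

x = mu + np.sqrt(sigma) * rng.standard_normal((n, d))
# score-function estimator of Eq. (40): E[ Sigma^{-1} (x - mu) F(x) ]
grad_est = ((x - mu) / sigma * F(x)[:, None]).mean(axis=0)
# grad_est should be close to the exact gradient 2*mu
```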

B TECHNICAL LEMMAS

In this section, we introduce several technical lemmas used in the analysis. The proofs of all technical lemmas are deferred to Appendix D.

Lemma B.1. Suppose $\boldsymbol{\Sigma}$ and $\hat{\boldsymbol{\Sigma}}$ are two $d$-dimensional diagonal matrices and $\boldsymbol{z}$ is a $d$-dimensional vector. Then we have $\|\boldsymbol{\Sigma}\boldsymbol{z}\| \leq \|\boldsymbol{\Sigma}\|_F\|\boldsymbol{z}\|$ and $\|\boldsymbol{\Sigma}\hat{\boldsymbol{\Sigma}}\|_F \leq \|\boldsymbol{\Sigma}\|_F\|\hat{\boldsymbol{\Sigma}}\|_F$.

Lemma B.2. Given a convex function $f(\boldsymbol{x})$, for a Gaussian distribution with parameters $\boldsymbol{\theta} \coloneqq \{\boldsymbol{\mu}, \boldsymbol{\Sigma}^{\frac{1}{2}}\}$, let $\bar{J}(\boldsymbol{\theta}) \coloneqq \mathbb{E}_{p(\boldsymbol{x};\boldsymbol{\theta})}[f(\boldsymbol{x})]$. Then $\bar{J}(\boldsymbol{\theta})$ is a convex function with respect to $\boldsymbol{\theta}$.

Lemma B.3. Suppose that the gradients $\hat{G}_i$ are positive semi-definite matrices satisfying $\xi I \preceq \hat{G}_i \preceq bI$. Then for Algorithm 1, we have the following results.

(a) The (diagonal) covariance matrix $\Sigma_T$ satisfies

\left(2b\sum_{t=1}^{T}\beta_t\boldsymbol{I}+\boldsymbol{\Sigma}_0^{-1}\right)^{-1}\preceq\boldsymbol{\Sigma}_T\preceq\left(2\xi\sum_{t=1}^{T}\beta_t\boldsymbol{I}+\boldsymbol{\Sigma}_0^{-1}\right)^{-1}.

(b) $\|\boldsymbol{\Sigma}_t\|_F\leq \frac{\sqrt{d}}{2\xi\sum_{k=1}^{t}\beta_k}$.
(c) $\|\boldsymbol{\Sigma}_{t+1} - \boldsymbol{\Sigma}_t\|_F\leq \frac{b\beta_t d^{\frac{3}{2}}}{2\xi^2(\sum_{k=1}^{t}\beta_k)^2}$.
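Part (a) reflects the fact that the update rule accumulates precision additively, i.e., assuming the precision update $\boldsymbol{\Sigma}_{t+1}^{-1} = \boldsymbol{\Sigma}_t^{-1} + 2\beta_t\sum_i\lambda_i^t\hat{G}_i^t$ implied by the derivation above, bounding each aggregated $\hat{G}$ between $\xi I$ and $bI$ sandwiches $\boldsymbol{\Sigma}_T$. A scalar sketch (one diagonal entry, illustrative constants) of this sandwich:

```python
import random

random.seed(1)
xi, b, beta = 0.5, 2.0, 0.1    # illustrative bounds xi <= g <= b, constant step
prec0 = 1.0                    # one diagonal entry of Sigma_0^{-1}
prec, T = prec0, 50
for _ in range(T):
    g = random.uniform(xi, b)  # aggregated curvature estimate, within [xi, b]
    prec += 2.0 * beta * g     # precision grows at every iteration
sigma_T = 1.0 / prec
lower = 1.0 / (2.0 * b * beta * T + prec0)   # lower bound from part (a)
upper = 1.0 / (2.0 * xi * beta * T + prec0)  # upper bound from part (a)
```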

Lemma B.4. Define the gradient estimator $\hat{\boldsymbol{g}}_i^t$ for the $i$-th objective at the $t$-th iteration as

\hat{\boldsymbol{g}}_i^t = \boldsymbol{\Sigma}_t^{-\frac{1}{2}}\boldsymbol{z}\left(F_i\left(\boldsymbol{\mu}_t+\boldsymbol{\Sigma}_t^{\frac{1}{2}}\boldsymbol{z}\right)-F_i(\boldsymbol{\mu}_t)\right),

where $\boldsymbol{z} \sim \mathcal{N}(0, \boldsymbol{I})$. Suppose Assumption 4.1 holds, the gradients $\hat{G}_i$ are positive semi-definite matrices satisfying $\xi\boldsymbol{I} \preceq \hat{G}_i \preceq b\boldsymbol{I}$, and $\boldsymbol{\Sigma}_0 \preceq R\boldsymbol{I}$, where $\xi, b, R \geq 0$. Then we have

(a) $\hat{\boldsymbol{g}}_i^t$ is an unbiased estimator of the gradient $\nabla_{\boldsymbol{\mu}}\mathbb{E}_{p_{\boldsymbol{\theta}_t}}[F_i(\boldsymbol{x})]$.
(b) $\mathbb{E}_{\boldsymbol{z}}[\|\boldsymbol{\Sigma}_t\hat{\boldsymbol{g}}_i^t\|^2] \leq \frac{H^2(d+4)^2}{4\xi^2(\sum_{k=1}^{t}\beta_k)^2}$.
(c) $\mathbb{V}_{\boldsymbol{z}}[\hat{\boldsymbol{g}}_i^t] = \mathbb{E}_{\boldsymbol{z}}[\|\hat{\boldsymbol{g}}_i^t - \nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t)\|^2] \leq \frac{H^2C(d+4)^2}{N}$, where $C = \max\left(\frac{b}{\xi}, \|\boldsymbol{\Sigma}_0^{-1}\|_{\infty}\right)$.
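For a linear test function $F(\boldsymbol{x}) = \boldsymbol{a}^{\top}\boldsymbol{x}$ the estimator above satisfies $\mathbb{E}_{\boldsymbol{z}}[\hat{\boldsymbol{g}}_i^t] = \boldsymbol{a} = \nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t)$ exactly, which makes the unbiasedness in part (a) easy to check by simulation. A minimal sketch with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(42)
d, n = 3, 400_000
mu = np.array([0.2, -0.1, 0.4])
sig_sqrt = np.array([0.3, 0.5, 0.2])  # diagonal of Sigma^{1/2}
a = np.array([1.0, 2.0, -1.0])
F = lambda x: x @ a                   # linear objective, so grad_mu J = a

z = rng.standard_normal((n, d))
# g_hat = Sigma^{-1/2} z (F(mu + Sigma^{1/2} z) - F(mu)); F(mu) acts as a baseline
vals = F(mu + sig_sqrt * z) - F(mu)
g_hat = (z / sig_sqrt * vals[:, None]).mean(axis=0)  # Monte Carlo average
```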

Lemma B.5. Suppose $\boldsymbol{\lambda}^t = (1-\gamma_t)\boldsymbol{\lambda}^{t-1} + \gamma_t\tilde{\boldsymbol{\lambda}}^t$. Then we have $\mathbb{V}_{\boldsymbol{z}}[\boldsymbol{\lambda}^t] = \mathbb{E}_{\boldsymbol{z}}[\|\boldsymbol{\lambda}^t - \mathbb{E}_{\boldsymbol{z}}[\boldsymbol{\lambda}^t]\|^2] \leq 2\gamma_t^2$.

Lemma B.6. Suppose Assumption 4.1 holds and $\hat{\boldsymbol{g}}_1^t, \dots, \hat{\boldsymbol{g}}_m^t$ are unbiased estimates of $\nabla_{\boldsymbol{\mu}}J_1(\boldsymbol{\theta}_t), \dots, \nabla_{\boldsymbol{\mu}}J_m(\boldsymbol{\theta}_t)$. Further suppose that each gradient variance is bounded as $\mathbb{V}_{\boldsymbol{z}}[\hat{\boldsymbol{g}}_i^t] = \mathbb{E}_{\boldsymbol{z}}[\|\hat{\boldsymbol{g}}_i^t - \nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t)\|^2] \leq \delta$ for $i = 1,\dots,m$, and let $\mathbb{V}_{\boldsymbol{z}}[\boldsymbol{\lambda}^t] = \mathbb{E}_{\boldsymbol{z}}[\|\boldsymbol{\lambda}^t - \mathbb{E}_{\boldsymbol{z}}[\boldsymbol{\lambda}^t]\|^2]$. Then for any gradient descent algorithm updated with the composite gradient $\boldsymbol{q}_t = -\sum_{i=1}^{m}\lambda_i^t\hat{\boldsymbol{g}}_i^t$ with $\boldsymbol{\lambda}^t \in \Delta^{m-1}$, we have the following inequalities at the $t$-th iteration.

(a) $\left\|\mathbb{E}_{\boldsymbol{z}}[-\boldsymbol{q}_t] - \mathbb{E}_{\boldsymbol{z}}\left[\sum_{i=1}^{m}\lambda_i^t\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t)\right]\right\|^2 \leq \mathbb{V}_{\boldsymbol{z}}[\boldsymbol{\lambda}^t]\sum_{i=1}^{m}\mathbb{V}_{\boldsymbol{z}}[\hat{\boldsymbol{g}}_i^t]$.
(b) $\mathbb{E}_{\boldsymbol{z}}\left[\left(\sum_{i=1}^{m}\lambda_i^t\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t)\right)^{\top}\boldsymbol{q}_t\right] \leq 2H\sqrt{\mathbb{V}_{\boldsymbol{z}}[\boldsymbol{\lambda}^t]\sum_{i=1}^{m}\mathbb{V}_{\boldsymbol{z}}[\hat{\boldsymbol{g}}_i^t]} - \mathbb{E}_{\boldsymbol{z}}\left[\left\|\sum_{i=1}^{m}\lambda_i^t\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t)\right\|^2\right]$.
(c) $\mathbb{E}_{\boldsymbol{z}}\left[\|\boldsymbol{q}_t\|^2 - \left\|\sum_{i=1}^{m}\lambda_i^t\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t)\right\|^2\right] \leq \sum_{i=1}^{m}\mathbb{V}_{\boldsymbol{z}}[\hat{\boldsymbol{g}}_i^t] + 4H\sqrt{\mathbb{V}_{\boldsymbol{z}}[\boldsymbol{\lambda}^t]\sum_{i=1}^{m}\mathbb{V}_{\boldsymbol{z}}[\hat{\boldsymbol{g}}_i^t]}$.

C PROOF OF THE RESULT IN SECTION 4

In this section, we provide the proofs of the results in Section 4.

Theorem 4.2 follows directly from Lemma B.3 (a).

C.1 PROOF OF PROPOSITION 4.3

From the definition of $J_i(\boldsymbol{\mu}, \boldsymbol{\Sigma})$, we know that $F_i(\boldsymbol{\mu}^*) = J_i(\boldsymbol{\mu}^*, \mathbf{0})$. Since $F_i(\boldsymbol{x})$ is a convex function, by Jensen's inequality we have

F_i(\boldsymbol{\mu}) = F_i\left(\mathbb{E}_{\boldsymbol{x}\sim\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\Sigma})}[\boldsymbol{x}]\right) \leq \mathbb{E}_{\boldsymbol{x}\sim\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\Sigma})}\left[F_i(\boldsymbol{x})\right] = J_i(\boldsymbol{\mu},\boldsymbol{\Sigma}). \tag{44}

It follows that

F_i(\boldsymbol{\mu}) - F_i(\boldsymbol{\mu}^*) \leq J_i(\boldsymbol{\mu},\boldsymbol{\Sigma}) - J_i(\boldsymbol{\mu}^*,\mathbf{0}). \tag{45}

Then we have

\sum_{i=1}^{m}\lambda_i\left(F_i(\boldsymbol{\mu}) - F_i(\boldsymbol{\mu}^*)\right) \leq \sum_{i=1}^{m}\lambda_i\left(J_i(\boldsymbol{\mu},\boldsymbol{\Sigma}) - J_i(\boldsymbol{\mu}^*,\mathbf{0})\right), \tag{46}

where we reach the conclusion.
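The inequality in Eq. (44) is Jensen's inequality. For the illustrative convex function $F(\boldsymbol{x}) = \|\boldsymbol{x}\|^2$ both sides are available in closed form, since $\mathbb{E}_{\boldsymbol{x}\sim\mathcal{N}(\boldsymbol{\mu},\boldsymbol{\Sigma})}[\|\boldsymbol{x}\|^2] = \|\boldsymbol{\mu}\|^2 + \operatorname{tr}(\boldsymbol{\Sigma})$, which makes the gap explicit. A minimal sketch with illustrative values:

```python
import numpy as np

mu = np.array([0.5, -1.0])
sig = np.array([0.3, 0.2])     # diagonal of Sigma

F_mu = float((mu ** 2).sum())  # F(mu) = ||mu||^2
J = F_mu + float(sig.sum())    # closed form of E_{N(mu, Sigma)}[||x||^2]
gap = J - F_mu                 # Jensen gap equals tr(Sigma)
```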

C.2 PROOF OF PROPOSITION 4.5

Since $\left\|\sum_{i=1}^{m}\lambda_i\nabla_{\boldsymbol{\mu}^*}J_i(\boldsymbol{\mu}^*)\right\| = 0$, we have

\begin{array}{l}
\left\|\sum_{i=1}^{m}\lambda_i\nabla F_i(\boldsymbol{\mu}^*)\right\|^2=\left\|\sum_{i=1}^{m}\lambda_i\nabla F_i(\boldsymbol{\mu}^*)-\sum_{i=1}^{m}\lambda_i\nabla_{\boldsymbol{\mu}^*}J_i(\boldsymbol{\mu}^*)+\sum_{i=1}^{m}\lambda_i\nabla_{\boldsymbol{\mu}^*}J_i(\boldsymbol{\mu}^*)\right\|^2 \quad (47)\\
=\left\|\sum_{i=1}^{m}\lambda_i\nabla F_i(\boldsymbol{\mu}^*)-\sum_{i=1}^{m}\lambda_i\nabla_{\boldsymbol{\mu}^*}J_i(\boldsymbol{\mu}^*)\right\|^2. \quad (48)
\end{array}

It follows that

\begin{array}{l}
\left\|\sum_{i=1}^{m}\lambda_i\nabla F_i(\boldsymbol{\mu}^*)\right\|_2^2 \leq \sum_{i=1}^{m}\lambda_i\left\|\nabla F_i(\boldsymbol{\mu}^*)-\nabla_{\boldsymbol{\mu}^*}J_i(\boldsymbol{\mu}^*)\right\|^2 \quad (49)\\
=\sum_{i=1}^{m}\lambda_i\left\|\nabla F_i(\boldsymbol{\mu}^*)-\mathbb{E}_{\boldsymbol{x}\sim\mathcal{N}(\boldsymbol{\mu}^*,\boldsymbol{\Sigma})}\nabla F_i(\boldsymbol{x})\right\|^2 \quad (50)\\
\leq \sum_{i=1}^{m}\lambda_i\mathbb{E}_{\boldsymbol{x}\sim\mathcal{N}(\boldsymbol{\mu}^*,\boldsymbol{\Sigma})}\left\|\nabla F_i(\boldsymbol{x})-\nabla F_i(\boldsymbol{\mu}^*)\right\|^2 \quad (51)\\
\leq L_F^2\,\mathbb{E}_{\boldsymbol{x}\sim\mathcal{N}(\boldsymbol{\mu}^*,\boldsymbol{\Sigma})}\left\|\boldsymbol{x}-\boldsymbol{\mu}^*\right\|^2 \quad (52)\\
=L_F^2\left\|\operatorname{diag}(\boldsymbol{\Sigma})\right\|_1, \quad (53)
\end{array}

where the equality in Eq. (50) is due to $\nabla_{\boldsymbol{\mu}^*}J_i(\boldsymbol{\mu}^*) = \mathbb{E}_{\boldsymbol{x}\sim\mathcal{N}(\boldsymbol{\mu}^*,\boldsymbol{\Sigma})}\nabla F_i(\boldsymbol{x})$ in Rezende et al. (2014).

C.3 PROOF OF THEOREM 4.4

We denote $\boldsymbol{q}_t = -\sum_{i=1}^{m}\lambda_i^t\hat{\boldsymbol{g}}_i^t$; then the update rule of $\boldsymbol{\mu}$ can be written as $\boldsymbol{\mu}_{t+1} = \boldsymbol{\mu}_t + \beta_t\boldsymbol{\Sigma}_t\boldsymbol{q}_t$.

According to Assumption 4.1, the function $J_i(\boldsymbol{\theta})$ is $L$-smooth w.r.t. $\{\boldsymbol{\mu}, \boldsymbol{\Sigma}\}$, so we have

\lambda_i^t J_i(\boldsymbol{\mu}_{t+1},\boldsymbol{\Sigma}_t) \leq \lambda_i^t\left(J_i(\boldsymbol{\mu}_t,\boldsymbol{\Sigma}_t)+\beta_t\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\mu}_t,\boldsymbol{\Sigma}_t)^{\top}\boldsymbol{\Sigma}_t\boldsymbol{q}_t+\frac{L\beta_t^2}{2}\|\boldsymbol{\Sigma}_t\boldsymbol{q}_t\|^2\right). \tag{54}

Since $F_i(\boldsymbol{x})$ is a convex function, $J_i(\boldsymbol{\theta})$ is convex w.r.t. $\boldsymbol{\theta} = \{\boldsymbol{\mu}, \boldsymbol{\Sigma}^{\frac{1}{2}}\}$ by Lemma B.2; together with the $c$-strong convexity of $J_i(\boldsymbol{\theta})$ w.r.t. $\boldsymbol{\mu}$, we obtain

J_i(\boldsymbol{\theta}_t) \leq J_i(\boldsymbol{\mu}^*,\mathbf{0})+\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t)^{\top}(\boldsymbol{\mu}_t-\boldsymbol{\mu}^*)+\nabla_{\boldsymbol{\Sigma}^{\frac{1}{2}}}J_i(\boldsymbol{\theta}_t)^{\top}\boldsymbol{\Sigma}_t^{\frac{1}{2}}-\frac{c}{2}\|\boldsymbol{\mu}_t-\boldsymbol{\mu}^*\|^2. \tag{55}

Note that $\nabla_{\boldsymbol{\Sigma}^{\frac{1}{2}}}J(\boldsymbol{\theta}_t) = \boldsymbol{\Sigma}_t^{\frac{1}{2}}\nabla_{\boldsymbol{\Sigma}}J(\boldsymbol{\theta}_t) + \nabla_{\boldsymbol{\Sigma}}J(\boldsymbol{\theta}_t)\boldsymbol{\Sigma}_t^{\frac{1}{2}}$, so we have

J_i(\boldsymbol{\theta}_t) \leq J_i(\boldsymbol{\mu}^*,\mathbf{0})+\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t)^{\top}(\boldsymbol{\mu}_t-\boldsymbol{\mu}^*)+2\nabla_{\boldsymbol{\Sigma}}J_i(\boldsymbol{\theta}_t)\boldsymbol{\Sigma}_t-\frac{c}{2}\|\boldsymbol{\mu}_t-\boldsymbol{\mu}^*\|^2. \tag{56}

Substituting Eq. (56) into Eq. (54), we have

\begin{array}{l}
\lambda_i^t J_i(\boldsymbol{\mu}_{t+1},\boldsymbol{\Sigma}_t) \leq \lambda_i^t J_i(\boldsymbol{\mu}^*,\mathbf{0})+\lambda_i^t\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t)^{\top}(\boldsymbol{\mu}_t-\boldsymbol{\mu}^*)+2\lambda_i^t\nabla_{\boldsymbol{\Sigma}}J_i(\boldsymbol{\theta}_t)^{\top}\boldsymbol{\Sigma}_t\\
\quad+\beta_t\lambda_i^t\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\mu}_t,\boldsymbol{\Sigma}_t)^{\top}\boldsymbol{\Sigma}_t\boldsymbol{q}_t+\frac{L\lambda_i^t\beta_t^2}{2}\|\boldsymbol{\Sigma}_t\boldsymbol{q}_t\|^2-\frac{c\lambda_i^t}{2}\|\boldsymbol{\mu}_t-\boldsymbol{\mu}^*\|^2. \tag{57}
\end{array}

Letting $A_t = \sum_{i=1}^{m}\lambda_i^t\big(J_i(\boldsymbol{\mu}_{t+1},\boldsymbol{\Sigma}_t) - J_i(\boldsymbol{\mu}^*,\mathbf{0})\big)$ and taking $\beta_t\leq\frac{1}{L}$, we have

\begin{array}{l}
\mathbb{E}_{\boldsymbol{z}}[A_t] \leq \mathbb{E}_{\boldsymbol{z}}\left[\sum_{i=1}^{m}\lambda_i^t\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t)^{\top}(\boldsymbol{\mu}_t-\boldsymbol{\mu}^*)\right]+\beta_t\mathbb{E}_{\boldsymbol{z}}\left[\sum_{i=1}^{m}\lambda_i^t\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\mu}_t,\boldsymbol{\Sigma}_t)^{\top}\boldsymbol{\Sigma}_t\boldsymbol{q}_t\right]\\
\quad+\frac{\beta_t}{2}\mathbb{E}_{\boldsymbol{z}}\left[\|\boldsymbol{\Sigma}_t\boldsymbol{q}_t\|^2\right]+\mathbb{E}_{\boldsymbol{z}}\left[\sum_{i=1}^{m}\lambda_i^t\nabla_{\boldsymbol{\Sigma}}J_i(\boldsymbol{\theta}_t)^{\top}\boldsymbol{\Sigma}_t\right]-\frac{c}{2}\|\boldsymbol{\mu}_t-\boldsymbol{\mu}^*\|^2. \tag{58}
\end{array}

Note that

\begin{array}{l}
\|\boldsymbol{\mu}_t-\boldsymbol{\mu}^*\|_{\boldsymbol{\Sigma}_t^{-1}}^2-\|\boldsymbol{\mu}_{t+1}-\boldsymbol{\mu}^*\|_{\boldsymbol{\Sigma}_t^{-1}}^2 \quad (59)\\
=\|\boldsymbol{\mu}_t-\boldsymbol{\mu}^*\|_{\boldsymbol{\Sigma}_t^{-1}}^2-\|\boldsymbol{\mu}_t+\beta_t\boldsymbol{\Sigma}_t\boldsymbol{q}_t-\boldsymbol{\mu}^*\|_{\boldsymbol{\Sigma}_t^{-1}}^2 \quad (60)\\
=\|\boldsymbol{\mu}_t-\boldsymbol{\mu}^*\|_{\boldsymbol{\Sigma}_t^{-1}}^2-\left(\|\boldsymbol{\mu}_t-\boldsymbol{\mu}^*\|_{\boldsymbol{\Sigma}_t^{-1}}^2+2\beta_t\langle\boldsymbol{\mu}_t-\boldsymbol{\mu}^*,\boldsymbol{q}_t\rangle+\beta_t^2\langle\boldsymbol{\Sigma}_t\boldsymbol{q}_t,\boldsymbol{q}_t\rangle\right) \quad (61)\\
=-2\beta_t\boldsymbol{q}_t^{\top}(\boldsymbol{\mu}_t-\boldsymbol{\mu}^*)-\beta_t^2(\boldsymbol{\Sigma}_t\boldsymbol{q}_t)^{\top}\boldsymbol{q}_t. \quad (62)
\end{array}

Therefore we have

\begin{array}{l}
-\boldsymbol{q}_t^{\top}(\boldsymbol{\mu}_t-\boldsymbol{\mu}^*)=\frac{1}{2\beta_t}\left(\|\boldsymbol{\mu}_t-\boldsymbol{\mu}^*\|_{\boldsymbol{\Sigma}_t^{-1}}^2-\|\boldsymbol{\mu}_{t+1}-\boldsymbol{\mu}^*\|_{\boldsymbol{\Sigma}_t^{-1}}^2\right)+\frac{\beta_t}{2}(\boldsymbol{\Sigma}_t\boldsymbol{q}_t)^{\top}\boldsymbol{q}_t \quad (63)\\
\leq \frac{1}{2\beta_t}\left(\|\boldsymbol{\mu}_t-\boldsymbol{\mu}^*\|_{\boldsymbol{\Sigma}_t^{-1}}^2-\|\boldsymbol{\mu}_{t+1}-\boldsymbol{\mu}^*\|_{\boldsymbol{\Sigma}_t^{-1}}^2\right)+\beta_t\|\boldsymbol{\Sigma}_t\|_F\|\boldsymbol{q}_t\|^2, \quad (64)
\end{array}

where the inequality is due to $\beta_t \geq 0$ and Lemma B.1. Note that we have

\begin{array}{l}
\mathbb{E}_{\boldsymbol{z}}\left[\left(\sum_{i=1}^{m}\lambda_i^t\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t)+\boldsymbol{q}_t\right)^{\top}(\boldsymbol{\mu}_t-\boldsymbol{\mu}^*)\right] \leq \|\boldsymbol{\mu}_t-\boldsymbol{\mu}^*\|\sqrt{\mathbb{E}_{\boldsymbol{z}}\left[\left\|\sum_{i=1}^{m}\lambda_i^t\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t)+\boldsymbol{q}_t\right\|^2\right]} \quad (65)\\
\leq D\sqrt{\mathbb{E}_{\boldsymbol{z}}\left[\left\|\sum_{i=1}^{m}\lambda_i^t\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t)+\boldsymbol{q}_t\right\|^2\right]} \quad (66)\\
\leq D\sqrt{\mathbb{V}_{\boldsymbol{z}}[\boldsymbol{\lambda}^t]\sum_{i=1}^{m}\mathbb{V}_{\boldsymbol{z}}[\hat{\boldsymbol{g}}_i^t]}, \quad (67)
\end{array}

where the first inequality is due to the Cauchy-Schwarz inequality, the second inequality is due to $\|\boldsymbol{\mu}_t - \boldsymbol{\mu}^*\| \leq D$, and the last inequality is due to Lemma B.6 (a). Then we have

\begin{array}{l}
\mathbb{E}_{\boldsymbol{z}}\left[\sum_{i=1}^{m}\lambda_i^t\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t)^{\top}(\boldsymbol{\mu}_t-\boldsymbol{\mu}^*)\right] \quad (68)\\
=\mathbb{E}_{\boldsymbol{z}}\left[-\boldsymbol{q}_t^{\top}(\boldsymbol{\mu}_t-\boldsymbol{\mu}^*)+\left(\sum_{i=1}^{m}\lambda_i^t\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t)+\boldsymbol{q}_t\right)^{\top}(\boldsymbol{\mu}_t-\boldsymbol{\mu}^*)\right] \quad (69)\\
\leq \frac{1}{2\beta_t}\mathbb{E}_{\boldsymbol{z}}\left[\|\boldsymbol{\mu}_t-\boldsymbol{\mu}^*\|_{\boldsymbol{\Sigma}_t^{-1}}^2-\|\boldsymbol{\mu}_{t+1}-\boldsymbol{\mu}^*\|_{\boldsymbol{\Sigma}_t^{-1}}^2\right]+\beta_t\|\boldsymbol{\Sigma}_t\|_F\mathbb{E}_{\boldsymbol{z}}\left[\|\boldsymbol{q}_t\|^2\right]+D\sqrt{\mathbb{V}_{\boldsymbol{z}}[\boldsymbol{\lambda}^t]\sum_{i=1}^{m}\mathbb{V}_{\boldsymbol{z}}[\hat{\boldsymbol{g}}_i^t]}, \quad (70)
\end{array}

where the inequality is due to Eq. (64) and Eq. (67). Note that

\mathbb{E}_{\boldsymbol{z}}\left[\sum_{i=1}^{m}\lambda_i^t\nabla_{\boldsymbol{\Sigma}}J_i(\boldsymbol{\theta}_t)^{\top}\boldsymbol{\Sigma}_t\right] \leq \mathbb{E}_{\boldsymbol{z}}\left[\left\|\sum_{i=1}^{m}\lambda_i^t\nabla_{\boldsymbol{\Sigma}}J_i(\boldsymbol{\theta}_t)\right\|\right]\|\boldsymbol{\Sigma}_t\|_F \leq H\|\boldsymbol{\Sigma}_t\|_F, \tag{71}

where the first inequality is due to Lemma B.1 and the second inequality is due to the Lipschitz continuity assumption on the function $J_i(\boldsymbol{\theta})$. By using Lemma B.1 and Lemma B.6 (b), we further have

\mathbb{E}_{\boldsymbol{z}}\left[\sum_{i=1}^{m}\lambda_i^t\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\mu}_t,\boldsymbol{\Sigma}_t)^{\top}\boldsymbol{\Sigma}_t\boldsymbol{q}_t\right] \leq \|\boldsymbol{\Sigma}_t\|_F\left(2H\sqrt{\mathbb{V}_{\boldsymbol{z}}[\boldsymbol{\lambda}^t]\sum_{i=1}^{m}\mathbb{V}_{\boldsymbol{z}}[\hat{\boldsymbol{g}}_i^t]}-\mathbb{E}_{\boldsymbol{z}}\left[\left\|\sum_{i=1}^{m}\lambda_i^t\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t)\right\|^2\right]\right). \tag{72}

Then, substituting Eqs. (70), (71), and (72) into Eq. (58) and multiplying both sides of the inequality by $\beta_t$, we have

\begin{array}{l}
\beta_t\mathbb{E}_{\boldsymbol{z}}[A_t] \leq \frac{1}{2}\mathbb{E}_{\boldsymbol{z}}\left[\|\boldsymbol{\mu}_t-\boldsymbol{\mu}^*\|_{\boldsymbol{\Sigma}_t^{-1}}^2-\|\boldsymbol{\mu}_{t+1}-\boldsymbol{\mu}^*\|_{\boldsymbol{\Sigma}_t^{-1}}^2\right]-\frac{c\beta_t}{2}\|\boldsymbol{\mu}_t-\boldsymbol{\mu}^*\|^2+\beta_t^2 H\|\boldsymbol{\Sigma}_t\|_F\\
\quad+\left(2H\beta_t^2\|\boldsymbol{\Sigma}_t\|_F+\beta_t D\right)\sqrt{\mathbb{V}_{\boldsymbol{z}}[\boldsymbol{\lambda}^t]\sum_{i=1}^{m}\mathbb{V}_{\boldsymbol{z}}[\hat{\boldsymbol{g}}_i^t]}+\frac{\beta_t^2}{2}\mathbb{E}_{\boldsymbol{z}}\left[\|\boldsymbol{\Sigma}_t\boldsymbol{q}_t\|^2\right]\\
\quad+\beta_t^2\|\boldsymbol{\Sigma}_t\|_F\mathbb{E}_{\boldsymbol{z}}\left[\|\boldsymbol{q}_t\|^2\right]-\beta_t^2\|\boldsymbol{\Sigma}_t\|_F\mathbb{E}_{\boldsymbol{z}}\left[\left\|\sum_{i=1}^{m}\lambda_i^t\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t)\right\|^2\right] \quad (73)\\
\leq \frac{1}{2}\mathbb{E}_{\boldsymbol{z}}\left[\|\boldsymbol{\mu}_t-\boldsymbol{\mu}^*\|_{\boldsymbol{\Sigma}_t^{-1}}^2-\|\boldsymbol{\mu}_{t+1}-\boldsymbol{\mu}^*\|_{\boldsymbol{\Sigma}_t^{-1}}^2\right]-\frac{c\beta_t}{2}\|\boldsymbol{\mu}_t-\boldsymbol{\mu}^*\|^2+\beta_t^2 H\|\boldsymbol{\Sigma}_t\|_F\\
\quad+\left(2H\beta_t^2\|\boldsymbol{\Sigma}_t\|_F+\beta_t D\right)\sqrt{\mathbb{V}_{\boldsymbol{z}}[\boldsymbol{\lambda}^t]\sum_{i=1}^{m}\mathbb{V}_{\boldsymbol{z}}[\hat{\boldsymbol{g}}_i^t]}+\frac{\beta_t^2 H^2(d+4)^2}{8\xi^2(\sum_{k=1}^{t}\beta_k)^2}\\
\quad+\beta_t^2\|\boldsymbol{\Sigma}_t\|_F\left(\sum_{i=1}^{m}\mathbb{V}_{\boldsymbol{z}}[\hat{\boldsymbol{g}}_i^t]+4H\sqrt{\mathbb{V}_{\boldsymbol{z}}[\boldsymbol{\lambda}^t]\sum_{i=1}^{m}\mathbb{V}_{\boldsymbol{z}}[\hat{\boldsymbol{g}}_i^t]}\right), \quad (74)
\end{array}

where the second inequality is due to Lemma B.4 (b) and Lemma B.6 (c). We further obtain that

\begin{array}{l}
\sum_{t=0}^{T-1}\left[\frac{1}{2}\mathbb{E}_{\boldsymbol{z}}\left[\|\boldsymbol{\mu}_t-\boldsymbol{\mu}^*\|_{\boldsymbol{\Sigma}_t^{-1}}^2-\|\boldsymbol{\mu}_{t+1}-\boldsymbol{\mu}^*\|_{\boldsymbol{\Sigma}_t^{-1}}^2\right]-\frac{c\beta_t}{2}\|\boldsymbol{\mu}_t-\boldsymbol{\mu}^*\|^2\right] \quad (75)\\
\leq \frac{1}{2}\sum_{t=1}^{T-1}\left[\|\boldsymbol{\mu}_t-\boldsymbol{\mu}^*\|_{\boldsymbol{\Sigma}_t^{-1}}^2-\|\boldsymbol{\mu}_t-\boldsymbol{\mu}^*\|_{\boldsymbol{\Sigma}_{t-1}^{-1}}^2-\frac{c\beta_t}{2}\|\boldsymbol{\mu}_t-\boldsymbol{\mu}^*\|^2\right]\\
\quad+\frac{1}{2}\left[\|\boldsymbol{\mu}_0-\boldsymbol{\mu}^*\|_{\boldsymbol{\Sigma}_0^{-1}}^2-\|\boldsymbol{\mu}_T-\boldsymbol{\mu}^*\|_{\boldsymbol{\Sigma}_{T-1}^{-1}}^2\right] \quad (76)\\
\leq \frac{1}{2}\sum_{t=1}^{T-1}\left[\|\boldsymbol{\mu}_t-\boldsymbol{\mu}^*\|_{2\beta_t\sum_{i=1}^{m}\lambda_i^t\hat{G}_i^t}^2-\frac{c\beta_t}{2}\|\boldsymbol{\mu}_t-\boldsymbol{\mu}^*\|^2\right]+\|\boldsymbol{\Sigma}_0^{-1}\|_F D^2 \quad (77)\\
\leq \frac{1}{2}\sum_{t=1}^{T-1}\left[\frac{c\beta_t}{2}\|\boldsymbol{\mu}_t-\boldsymbol{\mu}^*\|^2-\frac{c\beta_t}{2}\|\boldsymbol{\mu}_t-\boldsymbol{\mu}^*\|^2\right]+\|\boldsymbol{\Sigma}_0^{-1}\|_F D^2 \quad (78)\\
=\|\boldsymbol{\Sigma}_0^{-1}\|_F D^2, \quad (79)
\end{array}

where the second inequality is due to the update rule of $\pmb{\Sigma}_t$ and $\|\pmb{\mu}_t - \pmb{\mu}^*\| \leq D$, and the third inequality is due to the Cauchy-Schwarz inequality and $\hat{G}_i^t \preceq \frac{c}{4}\pmb{I}$. Letting $C = \max(\frac{c}{4\xi}, \|\pmb{\Sigma}_0^{-1}\|_\infty)$, we have

$$
\begin{array}{l}
\sum_{t=0}^{T-1}\beta_t\mathbb{E}_{\boldsymbol{z}}[A_t] \leq \|\boldsymbol{\Sigma}_0^{-1}\|_F D^2+\sum_{t=0}^{T-1}\bigg(\frac{\beta_t^2 H\sqrt{d}}{2\xi\sum_{k=1}^t\beta_k}+\frac{\beta_t^2 H^2(d+4)^2}{8\xi^2(\sum_{k=1}^t\beta_k)^2}\\
\quad+\left(6H\beta_t^2\|\boldsymbol{\Sigma}_t\|_F+\beta_t D\right)\sqrt{\mathbb{V}_{\boldsymbol{z}}[\boldsymbol{\lambda}^t]\sum_{i=1}^m\mathbb{V}_{\boldsymbol{z}}[\hat{\boldsymbol{g}}_i^t]}+\beta_t^2\|\boldsymbol{\Sigma}_t\|_F\sum_{i=1}^m\mathbb{V}_{\boldsymbol{z}}[\hat{\boldsymbol{g}}_i^t]\bigg) \quad(80)\\
\leq \|\boldsymbol{\Sigma}_0^{-1}\|_F D^2+\sum_{t=0}^{T-1}\bigg(\frac{\beta_t^2 H\sqrt{d}}{2\xi\sum_{k=1}^t\beta_k}+\frac{\beta_t^2 H^2(d+4)^2}{8\xi^2(\sum_{k=1}^t\beta_k)^2}\\
\quad+\left(6H\beta_t^2\sqrt{d}R+\beta_t D\right)\gamma_t\sqrt{2\sum_{i=1}^m\mathbb{V}_{\boldsymbol{z}}[\hat{\boldsymbol{g}}_i^t]}+\frac{\beta_t^2\sqrt{d}\sum_{i=1}^m\mathbb{V}_{\boldsymbol{z}}[\hat{\boldsymbol{g}}_i^t]}{2\xi\sum_{k=1}^t\beta_k}\bigg) \quad(81)\\
\leq \|\boldsymbol{\Sigma}_0^{-1}\|_F D^2+\sum_{t=0}^{T-1}\bigg(\frac{\beta_t^2 H\sqrt{d}}{2\xi\sum_{k=1}^t\beta_k}+\frac{\beta_t^2 H^2(d+4)^2}{8\xi^2(\sum_{k=1}^t\beta_k)^2}\\
\quad+H\gamma_t\beta_t(d+4)\left(6H\beta_t\sqrt{d}R+D\right)\sqrt{\frac{2Cm}{N}}+\frac{\beta_t^2\sqrt{d}H^2C(d+4)^2m}{2N\xi\sum_{k=1}^t\beta_k}\bigg), \quad(82)
\end{array}
$$

where the first inequality is due to Eq. (79) and Lemma B.3 (a), the second inequality is due to Lemma B.3 (b) and Lemma B.5, and the third inequality is due to Lemma B.4 (c).

Therefore, we have

$$
\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}_{\boldsymbol{z}}[A_t] \leq \frac{\|\boldsymbol{\Sigma}_0^{-1}\|_F D^2}{T\beta_t}+\frac{1}{T}\sum_{t=0}^{T-1}\bigg(\frac{\beta_t H\sqrt{d}}{2\xi\sum_{k=1}^t\beta_k}+\frac{\beta_t H^2(d+4)^2}{8\xi^2(\sum_{k=1}^t\beta_k)^2}+H\gamma_t(d+4)\left(6H\beta_t\sqrt{d}R+D\right)\sqrt{\frac{2Cm}{N}}+\frac{\beta_t\sqrt{d}H^2C(d+4)^2m}{2N\xi\sum_{k=1}^t\beta_k}\bigg). \tag{83}
$$

Let $\beta_{t} = \beta$ and $\gamma_t = \gamma$ . Since we have $\sum_{t=1}^{T} \frac{1}{t} \leq 1 + \log(T)$ , we obtain

$$
\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}_{\boldsymbol{z}}\left[\sum_{i=1}^m\lambda_i^t\left(J_i(\boldsymbol{\mu}_{t+1},\boldsymbol{\Sigma}_t)-J_i(\boldsymbol{\mu}^*,0)\right)\right]=\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}_{\boldsymbol{z}}[A_t]=\mathcal{O}\left(\frac{1}{\beta T}+\frac{\log T}{T}+\gamma\right), \tag{84}
$$

where we reach the conclusion.

C.4 PROOF OF THEOREM 4.6

We denote $\pmb{q}_t = -\sum_{i=1}^m \lambda_i^t \hat{\pmb{g}}_i^t$; then the update rule of $\pmb{\mu}$ can be represented as $\pmb{\mu}_{t+1} = \pmb{\mu}_t + \beta_t \pmb{\Sigma}_t \pmb{q}_t$. According to Assumption 4.1, the function $J_i(\pmb{\theta})$ is $L$-smooth w.r.t. $(\pmb{\mu}, \pmb{\Sigma})$, so we have

$$
\lambda_i^t J_i(\boldsymbol{\mu}_{t+1},\boldsymbol{\Sigma}_t) \leq \lambda_i^t\left(J_i(\boldsymbol{\mu}_t,\boldsymbol{\Sigma}_t)+\beta_t\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\mu}_t,\boldsymbol{\Sigma}_t)^\top\boldsymbol{\Sigma}_t\boldsymbol{q}_t+\frac{L\beta_t^2}{2}\|\boldsymbol{\Sigma}_t\boldsymbol{q}_t\|^2\right). \tag{85}
$$

Let $B_t = \mathbb{E}_{\boldsymbol{z}}[\sum_{i=1}^m \lambda_i^t (J_i(\boldsymbol{\mu}_{t+1},\boldsymbol{\Sigma}_t) - J_i(\boldsymbol{\mu}_t,\boldsymbol{\Sigma}_t))]$; then we have

$$
\begin{array}{l}
B_t \leq \beta_t\|\boldsymbol{\Sigma}_t\|_F\mathbb{E}_{\boldsymbol{z}}\left[\left(\sum_{i=1}^m\lambda_i^t\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t)\right)^\top\boldsymbol{q}_t\right]+\frac{L\beta_t^2\|\boldsymbol{\Sigma}_t\|_F^2}{2}\mathbb{E}_{\boldsymbol{z}}\left[\|\boldsymbol{q}_t\|^2\right] \quad(86)\\
\leq 2H\beta_t\sqrt{d}R\sqrt{\mathbb{V}_{\boldsymbol{z}}[\boldsymbol{\lambda}^t]\sum_{i=1}^m\mathbb{V}_{\boldsymbol{z}}[\hat{\boldsymbol{g}}_i^t]}-\beta_t\sqrt{d}R\,\mathbb{E}_{\boldsymbol{z}}\Big[\|\sum_{i=1}^m\lambda_i^t\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t)\|^2\Big]+\frac{L\beta_t^2 dR^2}{2}\mathbb{E}_{\boldsymbol{z}}\left[\|\boldsymbol{q}_t\|^2\right] \quad(87)\\
\leq \left(2H\beta_t\sqrt{d}R+2HL\beta_t^2 dR^2\right)\sqrt{\mathbb{V}_{\boldsymbol{z}}[\boldsymbol{\lambda}^t]\sum_{i=1}^m\mathbb{V}_{\boldsymbol{z}}[\hat{\boldsymbol{g}}_i^t]}+\frac{L\beta_t^2 dR^2}{2}\sum_{i=1}^m\mathbb{V}_{\boldsymbol{z}}[\hat{\boldsymbol{g}}_i^t]+\frac{L\beta_t^2 dR^2-2\beta_t\sqrt{d}R}{2}\mathbb{E}_{\boldsymbol{z}}\Big[\|\sum_{i=1}^m\lambda_i^t\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t)\|^2\Big], \quad(88)
\end{array}
$$

where the first inequality is due to Lemma B.1, the second inequality is due to $\|\pmb{\Sigma}_t\|_F \leq \|\pmb{\Sigma}_0\|_F \leq \sqrt{d}R$ and Lemma B.6 (b), and the last inequality is due to Lemma B.6 (c). Let $\beta_t \leq \frac{1}{LR\sqrt{d}}$; rearranging Eq. (88), we obtain

$$
\frac{\beta_t\sqrt{d}R}{2}\mathbb{E}_{\boldsymbol{z}}\Big[\|\sum_{i=1}^m\lambda_i^t\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t)\|^2\Big] \leq B_t+4HR\sqrt{d}\beta_t\sqrt{\mathbb{V}_{\boldsymbol{z}}[\boldsymbol{\lambda}^t]\sum_{i=1}^m\mathbb{V}_{\boldsymbol{z}}[\hat{\boldsymbol{g}}_i^t]}+\frac{L\beta_t^2 dR^2}{2}\sum_{i=1}^m\mathbb{V}_{\boldsymbol{z}}[\hat{\boldsymbol{g}}_i^t]. \tag{89}
$$

So we have

$$
\beta_t\mathbb{E}_{\boldsymbol{z}}\Big[\|\sum_{i=1}^m\lambda_i^t\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t)\|^2\Big] \leq \frac{2B_t}{\sqrt{d}R}+8H\beta_t\sqrt{\mathbb{V}_{\boldsymbol{z}}[\boldsymbol{\lambda}^t]\sum_{i=1}^m\mathbb{V}_{\boldsymbol{z}}[\hat{\boldsymbol{g}}_i^t]}+\frac{L\beta_t^2\sqrt{d}R}{2}\sum_{i=1}^m\mathbb{V}_{\boldsymbol{z}}[\hat{\boldsymbol{g}}_i^t]. \tag{90}
$$

Note that we have

$$
\begin{array}{l}
\sum_{t=0}^{T-1}\sum_{i=1}^m\lambda_i^t\left(J_i(\boldsymbol{\theta}_{t+1})-J_i(\boldsymbol{\theta}_t)\right)=\sum_{t=0}^{T-1}\sum_{i=1}^m\left(\lambda_i^t-\lambda_i^{t+1}\right)J_i(\boldsymbol{\theta}_{t+1})+\sum_{i=1}^m\left(\lambda_i^{T-1}J_i(\boldsymbol{\theta}_T)-\lambda_i^0 J_i(\boldsymbol{\theta}_0)\right) \quad(91)\\
\leq \sum_{t=0}^{T-1}\sum_{i=1}^m\left|\lambda_i^t-(1-\gamma_t)\lambda_i^t-\gamma_t\tilde{\lambda}_i^{t+1}\right|B+2B \quad(92)\\
\leq \sum_{t=0}^{T-1}\gamma_t\sum_{i=1}^m\left|\lambda_i^t-\tilde{\lambda}_i^{t+1}\right|B+2B, \quad(93)
\end{array}
$$

where the first inequality is due to the update rule of $\lambda^t$ and $|J_i(\pmb {\theta})|\leq B$ . Then we have

$$
\begin{array}{l}
\sum_{t=0}^{T-1}B_t=\sum_{t=0}^{T-1}\mathbb{E}_{\boldsymbol{z}}\left[\sum_{i=1}^m\lambda_i^t\left(J_i(\boldsymbol{\theta}_{t+1})-J_i(\boldsymbol{\theta}_t)\right)\right]+\sum_{t=0}^{T-1}\mathbb{E}_{\boldsymbol{z}}\left[\sum_{i=1}^m\lambda_i^t\left(J_i(\boldsymbol{\mu}_{t+1},\boldsymbol{\Sigma}_t)-J_i(\boldsymbol{\mu}_{t+1},\boldsymbol{\Sigma}_{t+1})\right)\right] \quad(94)\\
\leq \sum_{t=0}^{T-1}\gamma_t\sum_{i=1}^m\left|\lambda_i^t-\tilde{\lambda}_i^{t+1}\right|B+2B+\sum_{t=0}^{T-1}\mathbb{E}_{\boldsymbol{z}}\left[\sum_{i=1}^m\lambda_i^t H\|\boldsymbol{\Sigma}_{t+1}-\boldsymbol{\Sigma}_t\|_F\right] \quad(95)\\
\leq 2mB\sum_{t=0}^{T-1}\gamma_t+2B+\sum_{t=0}^{T-1}H\|\boldsymbol{\Sigma}_{t+1}-\boldsymbol{\Sigma}_t\|_F, \quad(96)
\end{array}
$$

where the first inequality is due to Eq. (93) and the Lipschitz continuity assumption on the function $J_i(\pmb{\theta})$. Substituting Eq. (96) into Eq. (90), we have

$$
\begin{array}{l}
\frac{1}{T}\sum_{t=0}^{T-1}\beta_t\mathbb{E}_{\boldsymbol{z}}\Big[\|\sum_{i=1}^m\lambda_i^t\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t)\|^2\Big] \leq \frac{2mB\sum_{t=0}^{T-1}\gamma_t+2B+\sum_{t=0}^{T-1}H\|\boldsymbol{\Sigma}_{t+1}-\boldsymbol{\Sigma}_t\|_F}{\sqrt{d}RT}\\
\quad+\frac{1}{T}\sum_{t=0}^{T-1}\left(8H\beta_t\sqrt{\mathbb{V}_{\boldsymbol{z}}[\boldsymbol{\lambda}^t]\sum_{i=1}^m\mathbb{V}_{\boldsymbol{z}}[\hat{\boldsymbol{g}}_i^t]}+\frac{L\beta_t^2\sqrt{d}R}{2}\sum_{i=1}^m\mathbb{V}_{\boldsymbol{z}}[\hat{\boldsymbol{g}}_i^t]\right). \tag{97}
\end{array}
$$

According to Lemmas B.3 (c), B.4 (c), and B.5, we know that $\|\pmb{\Sigma}_{t+1} - \pmb{\Sigma}_t\|_F \leq \frac{b\beta_t d^{\frac{3}{2}}}{2\xi^2 (\sum_{k=1}^t \beta_k)^2}$ and $\mathbb{V}_{\pmb{z}}[\hat{\pmb{g}}_i^t] \leq \frac{H^2 C(d+4)^2}{N}$, where $C = \max\left(\frac{b}{\xi}, \|\pmb{\Sigma}_0^{-1}\|_{\infty}\right)$, and $\mathbb{V}_{\pmb{z}}[\pmb{\lambda}^t] \leq 2\gamma_t^2$. Then we have

$$
\begin{array}{l}
\frac{1}{T}\sum_{t=0}^{T-1}\beta_t\mathbb{E}_{\boldsymbol{z}}\Big[\|\sum_{i=1}^m\lambda_i^t\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t)\|^2\Big] \leq \frac{2mB\sum_{t=0}^{T-1}\gamma_t+2B+\sum_{t=0}^{T-1}\frac{Hb\beta_t d^{\frac{3}{2}}}{2\xi^2(\sum_{k=1}^t\beta_k)^2}}{\sqrt{d}RT}\\
\quad+\frac{1}{T}\sum_{t=0}^{T-1}\left(8H^2C(d+4)\gamma_t\beta_t\sqrt{\frac{2m}{N}}+\frac{\beta_t^2 H^2C(d+4)^2 L\sqrt{d}Rm}{2N}\right). \tag{98}
\end{array}
$$

Letting $\beta_{t} = \beta$ and $\gamma_t = \gamma$, we obtain

$$
\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}_{\boldsymbol{z}}\Big[\|\sum_{i=1}^m\lambda_i^t\nabla_{\boldsymbol{\mu}}J_i(\boldsymbol{\theta}_t)\|^2\Big]=\mathcal{O}\left(\frac{\gamma}{\beta}+\frac{1}{\beta T}+\gamma+\beta\right), \tag{99}
$$

where we reach the conclusion.

D PROOF OF TECHNICAL LEMMAS

In this section, we provide the proof of lemmas in Appendix B.

D.1 PROOF OF LEMMA B.1

Since $\pmb{\Sigma}$ and $\hat{\pmb{\Sigma}}$ are both diagonal matrices, denote $\pmb{\sigma} = \mathrm{diag}(\pmb{\Sigma})$ and $\hat{\pmb{\sigma}} = \mathrm{diag}(\hat{\pmb{\Sigma}})$. Then we have

$$
\|\boldsymbol{\Sigma}\boldsymbol{z}\|^2=\sum_{i=1}^d(\sigma_i z_i)^2 \leq \sum_{i=1}^d(\sigma_i)^2\sum_{i=1}^d(z_i)^2=\|\boldsymbol{\sigma}\|^2\|\boldsymbol{z}\|^2=\|\boldsymbol{\Sigma}\|_F^2\|\boldsymbol{z}\|^2. \tag{100}
$$

We further have

$$
\|\boldsymbol{\Sigma}\hat{\boldsymbol{\Sigma}}\|_F^2=\sum_{i=1}^d(\sigma_i\hat{\sigma}_i)^2 \leq \sum_{i=1}^d(\sigma_i)^2\sum_{i=1}^d(\hat{\sigma}_i)^2=\|\boldsymbol{\sigma}\|^2\|\hat{\boldsymbol{\sigma}}\|^2=\|\boldsymbol{\Sigma}\|_F^2\|\hat{\boldsymbol{\Sigma}}\|_F^2. \tag{101}
$$

Then we reach the conclusion.

D.2 PROOF OF LEMMA B.2

For $\lambda \in [0,1]$ , we have

$$
\begin{array}{l}
\lambda\bar{J}(\boldsymbol{\theta}_1)+(1-\lambda)\bar{J}(\boldsymbol{\theta}_2)=\lambda\mathbb{E}_{\boldsymbol{z}\sim\mathcal{N}(\boldsymbol{0},\boldsymbol{I})}[f(\boldsymbol{\mu}_1+\boldsymbol{\Sigma}_1^{\frac{1}{2}}\boldsymbol{z})]+(1-\lambda)\mathbb{E}_{\boldsymbol{z}\sim\mathcal{N}(\boldsymbol{0},\boldsymbol{I})}[f(\boldsymbol{\mu}_2+\boldsymbol{\Sigma}_2^{\frac{1}{2}}\boldsymbol{z})] \quad(102)\\
=\mathbb{E}\left[\lambda f(\boldsymbol{\mu}_1+\boldsymbol{\Sigma}_1^{\frac{1}{2}}\boldsymbol{z})+(1-\lambda)f(\boldsymbol{\mu}_2+\boldsymbol{\Sigma}_2^{\frac{1}{2}}\boldsymbol{z})\right] \quad(103)\\
\geq \mathbb{E}\left[f\left(\lambda\boldsymbol{\mu}_1+(1-\lambda)\boldsymbol{\mu}_2+(\lambda\boldsymbol{\Sigma}_1^{\frac{1}{2}}+(1-\lambda)\boldsymbol{\Sigma}_2^{\frac{1}{2}})\boldsymbol{z}\right)\right] \quad(104)\\
=\bar{J}(\lambda\boldsymbol{\theta}_1+(1-\lambda)\boldsymbol{\theta}_2), \quad(105)
\end{array}
$$

where we reach the conclusion.
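Because the convexity inequality in the proof holds pointwise in $\pmb{z}$, it also holds exactly for Monte Carlo averages computed with common random numbers. The following sketch illustrates this numerically; the convex test function and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.abs(x).sum()        # a convex test function (assumption)

def J_bar(mu, s, z):
    """MC estimate of the smoothed objective E[f(mu + s * z)],
    where s is the diagonal of Sigma^{1/2}."""
    return np.mean([f(mu + s * zj) for zj in z])

z = rng.standard_normal((5000, 2))   # common random numbers for all three estimates
mu1, s1 = np.array([1.0, -1.0]), np.array([0.5, 2.0])
mu2, s2 = np.array([-2.0, 0.5]), np.array([1.0, 0.1])
lam = 0.3

lhs = lam * J_bar(mu1, s1, z) + (1 - lam) * J_bar(mu2, s2, z)
rhs = J_bar(lam * mu1 + (1 - lam) * mu2, lam * s1 + (1 - lam) * s2, z)
print(lhs >= rhs)  # True: the inequality holds sample-wise by convexity of f
```

With shared samples the inequality is inherited from the pointwise one, so no statistical tolerance is needed.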

D.3 PROOF OF LEMMA B.3

(a): Since $\pmb{\Sigma}_{t+1}^{-1} = \pmb{\Sigma}_t^{-1} + 2\beta_t\sum_{i=1}^m\lambda_i^t\hat{G}_i^t$ and $\pmb{\lambda}^t\in \Delta^{m-1}$, we obtain

$$
\boldsymbol{\Sigma}_t^{-1}+2b\beta_t\boldsymbol{I} \succeq \boldsymbol{\Sigma}_{t+1}^{-1} \succeq \boldsymbol{\Sigma}_t^{-1}+2\xi\beta_t\boldsymbol{I}. \tag{106}
$$

Summing it up over $t = 0,\ldots,T-1$, we have

$$
\boldsymbol{\Sigma}_0^{-1}+2b\sum_{t=1}^T\beta_t\boldsymbol{I} \succeq \boldsymbol{\Sigma}_T^{-1} \succeq \boldsymbol{\Sigma}_0^{-1}+2\xi\sum_{t=1}^T\beta_t\boldsymbol{I}. \tag{107}
$$

Therefore, we have

$$
\left(2b\sum_{t=1}^T\beta_t\boldsymbol{I}+\boldsymbol{\Sigma}_0^{-1}\right)^{-1} \preceq \boldsymbol{\Sigma}_T \preceq \left(2\xi\sum_{t=1}^T\beta_t\boldsymbol{I}+\boldsymbol{\Sigma}_0^{-1}\right)^{-1}. \tag{108}
$$

(b): We have

$$
\|\boldsymbol{\Sigma}_t\|_F \leq \left\|\left(2\xi\sum_{k=1}^t\beta_k\boldsymbol{I}+\boldsymbol{\Sigma}_0^{-1}\right)^{-1}\right\|_F \leq \left\|\left(2\xi\sum_{k=1}^t\beta_k\boldsymbol{I}\right)^{-1}\right\|_F=\frac{\sqrt{d}}{2\xi\sum_{k=1}^t\beta_k}. \tag{109}
$$

(c): We have

$$
\begin{array}{l}
\|\boldsymbol{\Sigma}_{t+1}-\boldsymbol{\Sigma}_t\|_F=\left\|\left(\boldsymbol{\Sigma}_t^{-1}+2\beta_t\sum_{i=1}^m\lambda_i^t\hat{G}_i^t\right)^{-1}-\boldsymbol{\Sigma}_t\right\|_F \leq \left\|\frac{-2\beta_t\boldsymbol{\Sigma}_t\boldsymbol{\Sigma}_t\sum_{i=1}^m\lambda_i^t\hat{G}_i^t}{\boldsymbol{I}+2\beta_t\boldsymbol{\Sigma}_t\sum_{i=1}^m\lambda_i^t\hat{G}_i^t}\right\|_F \quad(110)\\
\leq 2\beta_t\|\boldsymbol{\Sigma}_t\|_F^2\,\|\sum_{i=1}^m\lambda_i^t\hat{G}_i^t\|_F. \quad(111)
\end{array}
$$

Since $\|\pmb{\Sigma}_t\|_F \leq \frac{\sqrt{d}}{2\xi\sum_{k=1}^t\beta_k}$ and $\|\sum_{i=1}^m\lambda_i^t\hat{G}_i^t\|_F \leq b\sqrt{d}$, we then have

$$
\|\boldsymbol{\Sigma}_{t+1}-\boldsymbol{\Sigma}_t\|_F \leq \frac{b\beta_t d^{\frac{3}{2}}}{2\xi^2\left(\sum_{k=1}^t\beta_k\right)^2}. \tag{112}
$$

D.4 PROOF OF LEMMA B.4

(a): We first show that $\hat{\pmb{g}}_i^t$ is an unbiased estimator of $\nabla_{\pmb{\mu}}\mathbb{E}_{p_{\theta_t}}[F_i(\pmb{x})]$:

$$
\begin{array}{l}
\mathbb{E}_{\boldsymbol{z}}\left[\hat{\boldsymbol{g}}_i^t\right]=\mathbb{E}_{\boldsymbol{z}}\left[\boldsymbol{\Sigma}_t^{-\frac{1}{2}}\boldsymbol{z}F_i(\boldsymbol{\mu}_t+\boldsymbol{\Sigma}_t^{\frac{1}{2}}\boldsymbol{z})\right]-\mathbb{E}_{\boldsymbol{z}}\left[\boldsymbol{\Sigma}_t^{-\frac{1}{2}}\boldsymbol{z}F_i(\boldsymbol{\mu}_t)\right] \quad(113)\\
=\mathbb{E}_{\boldsymbol{z}}\left[\boldsymbol{\Sigma}_t^{-\frac{1}{2}}\boldsymbol{z}F_i(\boldsymbol{\mu}_t+\boldsymbol{\Sigma}_t^{\frac{1}{2}}\boldsymbol{z})\right] \quad(114)\\
=\mathbb{E}_{\boldsymbol{x}\sim\mathcal{N}(\boldsymbol{\mu}_t,\boldsymbol{\Sigma}_t)}\left[\boldsymbol{\Sigma}_t^{-1}(\boldsymbol{x}-\boldsymbol{\mu}_t)F_i(\boldsymbol{x})\right] \quad(115)\\
=\nabla_{\boldsymbol{\mu}}\mathbb{E}_{p_{\theta_t}}[F_i(\boldsymbol{x})]. \quad(116)
\end{array}
$$
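As a quick numerical sanity check of this unbiasedness (outside the proof), one can compare the baseline-corrected estimator against the closed-form gradient for a quadratic objective; the test function, constants, and helper name below are illustrative assumptions.

```python
import numpy as np

def grad_estimate(F, mu, sigma, N, rng):
    """Baseline-corrected estimator
    (1/N) sum_j Sigma^{-1/2} z_j (F(mu + Sigma^{1/2} z_j) - F(mu)),
    with a diagonal covariance stored as the vector `sigma`."""
    z = rng.standard_normal((N, mu.size))
    fvals = np.array([F(mu + np.sqrt(sigma) * zj) for zj in z])
    return ((z / np.sqrt(sigma)) * (fvals - F(mu))[:, None]).mean(axis=0)

# For F(x) = ||x||^2 the smoothed gradient w.r.t. mu is exactly 2 * mu.
F = lambda x: float(x @ x)
mu = np.array([1.0, -2.0, 0.5])
sigma = np.full(3, 0.1)
g = grad_estimate(F, mu, sigma, N=200_000, rng=np.random.default_rng(0))
print(g)  # close to 2 * mu = [2, -4, 1]
```

Subtracting the baseline $F_i(\pmb{\mu}_t)$ leaves the mean unchanged (Eq. (114)) but reduces the variance of each sample.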

(b): Since $\pmb{\sigma}_t$ consists of the diagonal elements of $\pmb{\Sigma}_t$, we have

$$
\begin{array}{l}
\|\boldsymbol{\Sigma}_t\hat{\boldsymbol{g}}_{ij}^t\|^2=\left\|\boldsymbol{\sigma}_t\odot\boldsymbol{\sigma}_t^{-\frac{1}{2}}\odot\boldsymbol{z}_j\left(F_i(\boldsymbol{\mu}_t+\boldsymbol{\sigma}_t^{\frac{1}{2}}\odot\boldsymbol{z}_j)-F_i(\boldsymbol{\mu}_t)\right)\right\|^2 \quad(117)\\
=\left\|\boldsymbol{\sigma}_t^{\frac{1}{2}}\odot\boldsymbol{z}_j\right\|^2\left(F_i(\boldsymbol{\mu}_t+\boldsymbol{\sigma}_t^{\frac{1}{2}}\odot\boldsymbol{z}_j)-F_i(\boldsymbol{\mu}_t)\right)^2 \quad(118)\\
\leq \left\|\boldsymbol{\sigma}_t^{\frac{1}{2}}\odot\boldsymbol{z}_j\right\|^2 H^2\left\|\boldsymbol{\sigma}_t^{\frac{1}{2}}\odot\boldsymbol{z}_j\right\|^2 \quad(119)\\
\leq H^2\left\|\boldsymbol{\sigma}_t^{\frac{1}{2}}\right\|_\infty^2\times\left\|\boldsymbol{\sigma}_t^{\frac{1}{2}}\right\|_\infty^2\times\|\boldsymbol{z}_j\|^4=H^2\|\boldsymbol{\sigma}_t\|_\infty^2\|\boldsymbol{z}_j\|^4. \quad(120)
\end{array}
$$

It follows that

$$
\|\boldsymbol{\Sigma}_t\hat{\boldsymbol{g}}_i^t\|_2^2=\left\|\frac{1}{N}\sum_{j=1}^N\boldsymbol{\Sigma}_t\hat{\boldsymbol{g}}_{ij}^t\right\|^2 \leq \frac{1}{N}\sum_{j=1}^N\|\boldsymbol{\Sigma}_t\hat{\boldsymbol{g}}_{ij}^t\|^2 \leq H^2\|\boldsymbol{\sigma}_t\|_\infty^2\frac{1}{N}\sum_{j=1}^N\|\boldsymbol{z}_j\|^4. \tag{121}
$$

Note that $\mathbb{E}_{\boldsymbol{z}}[\|\pmb{z}\|^4] \leq (d+4)^2$ and

$$
\|\boldsymbol{\sigma}_t\|_\infty \leq \frac{1}{\|\boldsymbol{\sigma}_0^{-1}\|_{\min}+2\left(\sum_{k=1}^t\beta_k\right)\xi}, \tag{122}
$$

where $\|\cdot\|_{\min}$ denotes the minimum element of the input. Then we have

$$
\mathbb{E}\left[\|\boldsymbol{\Sigma}_t\hat{\boldsymbol{g}}_i^t\|_2^2\right] \leq H^2\|\boldsymbol{\sigma}_t\|_\infty^2(d+4)^2 \leq \frac{H^2(d+4)^2}{4\xi^2\left(\sum_{k=1}^t\beta_k\right)^2}, \tag{123}
$$

where we reach the conclusion.

(c): We have

$$
\begin{array}{l}
\|\hat{\boldsymbol{g}}_{ij}^t\|^2=\left\|\boldsymbol{\sigma}_t^{-\frac{1}{2}}\odot\boldsymbol{z}_j\left(F_i(\boldsymbol{\mu}_t+\boldsymbol{\sigma}_t^{\frac{1}{2}}\odot\boldsymbol{z}_j)-F_i(\boldsymbol{\mu}_t)\right)\right\|^2 \quad(124)\\
=\left\|\boldsymbol{\sigma}_t^{-\frac{1}{2}}\odot\boldsymbol{z}_j\right\|^2\left(F_i(\boldsymbol{\mu}_t+\boldsymbol{\sigma}_t^{\frac{1}{2}}\odot\boldsymbol{z}_j)-F_i(\boldsymbol{\mu}_t)\right)^2 \quad(125)\\
\leq \left\|\boldsymbol{\sigma}_t^{-\frac{1}{2}}\odot\boldsymbol{z}_j\right\|^2 H^2\left\|\boldsymbol{\sigma}_t^{\frac{1}{2}}\odot\boldsymbol{z}_j\right\|^2 \quad(126)\\
\leq H^2\left\|\boldsymbol{\sigma}_t^{-\frac{1}{2}}\right\|_\infty^2\times\left\|\boldsymbol{\sigma}_t^{\frac{1}{2}}\right\|_\infty^2\times\|\boldsymbol{z}_j\|^4. \quad(127)
\end{array}
$$

Then we obtain

$$
\begin{array}{l}
\mathbb{E}_{\boldsymbol{z}}\left[\|\hat{\boldsymbol{g}}_{ij}^t\|^2\right] \leq H^2\left\|\boldsymbol{\sigma}_t^{-\frac{1}{2}}\right\|_\infty^2\times\left\|\boldsymbol{\sigma}_t^{\frac{1}{2}}\right\|_\infty^2\times\mathbb{E}[\|\boldsymbol{z}_j\|^4] \quad(128)\\
\leq H^2\left\|\boldsymbol{\sigma}_t^{-\frac{1}{2}}\right\|_\infty^2\times\left\|\boldsymbol{\sigma}_t^{\frac{1}{2}}\right\|_\infty^2(d+4)^2. \quad(129)
\end{array}
$$

Note that for $N$ i.i.d. samples $\pmb{z}_j$, we have

$$
\begin{array}{l}
\mathbb{V}_{\boldsymbol{z}}\left[\frac{1}{N}\sum_{j=1}^N\hat{\boldsymbol{g}}_{ij}^t\right]=\frac{1}{N}\mathbb{V}_{\boldsymbol{z}}\left[\hat{\boldsymbol{g}}_{ij}^t\right] \leq \frac{1}{N}\mathbb{E}_{\boldsymbol{z}}\left[\|\hat{\boldsymbol{g}}_{ij}^t\|_2^2\right] \quad(130)\\
\leq \frac{H^2(d+4)^2\|\boldsymbol{\sigma}_t^{-\frac{1}{2}}\|_\infty^2\|\boldsymbol{\sigma}_t^{\frac{1}{2}}\|_\infty^2}{N}. \quad(131)
\end{array}
$$

Note that we have

$$
\boldsymbol{\sigma}_0^{-1}+2b\sum_{k=1}^t\beta_k\boldsymbol{1} \geq \boldsymbol{\sigma}_t^{-1} \geq \boldsymbol{\sigma}_0^{-1}+2\xi\sum_{k=1}^t\beta_k\boldsymbol{1}. \tag{132}
$$

Then, we have

$$
\left\|\boldsymbol{\sigma}_t^{-\frac{1}{2}}\right\|_\infty^2=\left\|\boldsymbol{\sigma}_t^{-1}\right\|_\infty \leq \left\|\boldsymbol{\sigma}_0^{-1}\right\|_\infty+2\left(\sum_{k=1}^t\beta_k\right)b. \tag{133}
$$

Similarly, we have

$$
\left\|\boldsymbol{\sigma}_t^{\frac{1}{2}}\right\|_\infty^2=\left\|\boldsymbol{\sigma}_t\right\|_\infty \leq \frac{1}{\left\|\boldsymbol{\sigma}_0^{-1}\right\|_{\min}+2\left(\sum_{k=1}^t\beta_k\right)\xi}, \tag{134}
$$

where $| \cdot |_{min}$ denotes the minimum element in the input.

We then have

$$
\begin{array}{l}
\left\|\boldsymbol{\sigma}_t^{\frac{1}{2}}\right\|_\infty^2\left\|\boldsymbol{\sigma}_t^{-\frac{1}{2}}\right\|_\infty^2 \leq \frac{\|\boldsymbol{\sigma}_0^{-1}\|_\infty+2\left(\sum_{k=1}^t\beta_k\right)b}{\|\boldsymbol{\sigma}_0^{-1}\|_{\min}+2\left(\sum_{k=1}^t\beta_k\right)\xi} \quad(135)\\
=\frac{b}{\xi}+\frac{\|\boldsymbol{\sigma}_0^{-1}\|_\infty-\frac{b}{\xi}\|\boldsymbol{\sigma}_0^{-1}\|_{\min}}{\|\boldsymbol{\sigma}_0^{-1}\|_{\min}+2\left(\sum_{k=1}^t\beta_k\right)\xi}. \quad(136)
\end{array}
$$

If $\|\pmb{\sigma}_0^{-1}\|_{\infty} - \frac{b}{\xi}\|\pmb{\sigma}_0^{-1}\|_{\min} \geq 0$, we have

$$
\left\|\boldsymbol{\sigma}_t^{\frac{1}{2}}\right\|_\infty^2\left\|\boldsymbol{\sigma}_t^{-\frac{1}{2}}\right\|_\infty^2 \leq \frac{b}{\xi}+\frac{\|\boldsymbol{\sigma}_0^{-1}\|_\infty-\frac{b}{\xi}\|\boldsymbol{\sigma}_0^{-1}\|_{\min}}{\|\boldsymbol{\sigma}_0^{-1}\|_{\min}} \leq \left\|\boldsymbol{\sigma}_0^{-1}\right\|_\infty. \tag{137}
$$

If $\|\pmb{\sigma}_0^{-1}\|_{\infty} - \frac{b}{\xi}\|\pmb{\sigma}_0^{-1}\|_{\min} < 0$, we have

$$
\left\|\boldsymbol{\sigma}_t^{\frac{1}{2}}\right\|_\infty^2\left\|\boldsymbol{\sigma}_t^{-\frac{1}{2}}\right\|_\infty^2 \leq \frac{b}{\xi}. \tag{138}
$$

Therefore, letting $C = \max(\frac{b}{\xi}, \|\pmb{\sigma}_0^{-1}\|_\infty)$, we have

$$
\mathbb{V}_{\boldsymbol{z}}\left[\frac{1}{N}\sum_{j=1}^N\hat{\boldsymbol{g}}_{ij}^t\right] \leq \frac{H^2(d+4)^2 C}{N}, \tag{139}
$$

where we reach the conclusion.

D.5 PROOF OF LEMMA B.5

We have

$$
\begin{array}{l}
\mathbb{V}_{\boldsymbol{z}}\left[\boldsymbol{\lambda}^t\right]=\mathbb{E}_{\boldsymbol{z}}\left[\|\boldsymbol{\lambda}^t-\mathbb{E}_{\boldsymbol{z}}[\boldsymbol{\lambda}^t]\|^2\right] \leq \mathbb{E}_{\boldsymbol{z}}\left[\|\boldsymbol{\lambda}^t-\boldsymbol{\lambda}^{t-1}\|^2\right] \quad(140)\\
=\mathbb{E}_{\boldsymbol{z}}\left[\|\gamma_t(\tilde{\boldsymbol{\lambda}}^t-\boldsymbol{\lambda}^{t-1})\|^2\right]=\gamma_t^2\,\mathbb{E}_{\boldsymbol{z}}\left[\|\tilde{\boldsymbol{\lambda}}^t-\boldsymbol{\lambda}^{t-1}\|^2\right] \leq 2\gamma_t^2, \quad(141)
\end{array}
$$

where we reach the conclusion.

D.6 PROOF OF LEMMA B.6

According to Lemma B.4 (a) and (c), we know that $\hat{\pmb{g}}_i^t$ is an unbiased estimator of the gradient $\nabla_{\pmb{\mu}}J_i(\pmb{\theta}_t)$ and that the variance of $\hat{\pmb{g}}_i^t$ is bounded. Therefore, letting $\pmb{q}_t = -\sum_{i=1}^{m}\lambda_i^t\hat{\pmb{g}}_i^t$, the results in Lemma B.6 can be directly obtained from Lemmas 1, 7, and 8 in Zhou et al. (2022b).

E UPDATE RULE UNDER TRANSFORMATION

To avoid the scaling problem, we can apply a monotonic transformation to the aggregated objective, i.e., $h(\boldsymbol{\lambda}^{\top} F(\boldsymbol{x}_j)) = \frac{\boldsymbol{\lambda}^{\top} F(\boldsymbol{x}_j) - \hat{\boldsymbol{\mu}}}{\hat{\boldsymbol{\sigma}}}$, where $\hat{\boldsymbol{\mu}}$ and $\hat{\boldsymbol{\sigma}}$ denote the mean and standard deviation of the aggregated function values $\boldsymbol{\lambda}^{\top} F(\boldsymbol{x}_j) = \sum_{i=1}^{m} \lambda_i F_i(\boldsymbol{x}_j)$, $j = 1, \dots, N$. By applying this rescaling strategy, the update rules for $\boldsymbol{\mu}_t$ and $\boldsymbol{\Sigma}_t$ at the $t$-th iteration can be written as

$$\begin{aligned} \boldsymbol{\mu}_{t+1} &= \boldsymbol{\mu}_{t} - \frac{\beta_{t}}{N} \sum_{j=1}^{N} \left( \boldsymbol{x}_{j} - \boldsymbol{\mu}_{t} \right) \frac{\sum_{i=1}^{m} \lambda_{i}^{t} F_{i}(\boldsymbol{x}_{j}) - \hat{\boldsymbol{\mu}}^{t}}{\hat{\boldsymbol{\sigma}}^{t}}, \tag{142} \\ \boldsymbol{\Sigma}_{t+1}^{-1} &= \boldsymbol{\Sigma}_{t}^{-1} + \frac{\beta_{t}}{N} \sum_{j=1}^{N} \operatorname{diag}\left[ \boldsymbol{\Sigma}_{t}^{-1} \left[ \operatorname{diag}\left( \left( \boldsymbol{x}_{j} - \boldsymbol{\mu}_{t} \right) \left( \boldsymbol{x}_{j} - \boldsymbol{\mu}_{t} \right)^{\top} \boldsymbol{\Sigma}_{t}^{-1} \right) \frac{\sum_{i=1}^{m} \lambda_{i}^{t} F_{i}(\boldsymbol{x}_{j}) - \hat{\boldsymbol{\mu}}^{t}}{\hat{\boldsymbol{\sigma}}^{t}} \right] \right]. \tag{143} \end{aligned}$$
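For concreteness, the rescaled update can be sketched for the diagonal-covariance case as follows (a minimal NumPy sketch; the function name, array shapes, and the small constant guarding against a zero standard deviation are our own choices, not part of the method's specification):

```python
import numpy as np

def asmg_rescaled_update(mu, sigma2_inv, X, F_vals, lam, beta):
    """One rescaled ASMG update (Eqs. 142-143), diagonal covariance.

    mu:         (d,)   current mean mu_t
    sigma2_inv: (d,)   diagonal of the inverse covariance Sigma_t^{-1}
    X:          (N, d) sampled solutions x_j
    F_vals:     (N, m) objective values F_i(x_j)
    lam:        (m,)   aggregation weights lambda^t
    beta:       step size beta_t
    """
    N = X.shape[0]
    agg = F_vals @ lam                              # lambda^T F(x_j), shape (N,)
    # z-score rescaling h(.) = (agg - mean) / std; 1e-12 guards a zero std
    h = (agg - agg.mean()) / (agg.std() + 1e-12)
    diff = X - mu                                   # (N, d)
    # Eq. (142): mean update
    mu_new = mu - (beta / N) * (diff * h[:, None]).sum(axis=0)
    # Eq. (143): inverse-covariance update, reduced to the diagonal case
    sigma2_inv_new = sigma2_inv + (beta / N) * (
        sigma2_inv * (diff ** 2 * sigma2_inv) * h[:, None]
    ).sum(axis=0)
    return mu_new, sigma2_inv_new
```

Note that the z-score makes the update invariant to a constant shift of the aggregated objective, which is the point of the transformation.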

F ADDITIONAL MATERIALS FOR SECTION 6

F.1 SYNTHETIC PROBLEMS

Evaluation Metrics. The Pareto optimal set of problem (19) is $\mathcal{P}_1 = \{\pmb{x} \mid x_i \in [-0.01, 0.01]\}$. Therefore, the result is evaluated by the Euclidean distance between the solution $\pmb{x}$ and the set $\mathcal{P}_1$, denoted by $\mathcal{E} = \mathrm{dist}(\pmb{x}, \mathcal{P}_1)$. We denote the Pareto optimal set of problem (20) as $\mathcal{P}_2$. Since the Pareto front of problem (20) is concave, the solution of the ASMG method converges to the boundary of its Pareto optimal set, i.e., $\hat{\mathcal{P}}_2 = \{\pmb{x} \mid x_i \in \{-0.1, 0.1\}\} \subset \mathcal{P}_2$. Moreover, for the ES and CMA-ES methods, since their optimization objective is $F_1(\pmb{x}) + F_2(\pmb{x})$, the corresponding solution set is also $\hat{\mathcal{P}}_2$. Therefore, the results of these three methods on problem (20) are evaluated by $\mathcal{E} = \mathrm{dist}(\pmb{x}, \hat{\mathcal{P}}_2)$. The Pareto optimal set of problem (21) is $\mathcal{P}_3 = \{\pmb{x} \mid \pmb{x} = \pmb{0}\}$. Therefore, the result is evaluated by $\mathcal{E} = \mathrm{dist}(\pmb{x}, \mathcal{P}_3)$.
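The distances above have simple closed forms; a minimal sketch, assuming the box set of problem (19) and the corner set $\hat{\mathcal{P}}_2$ of problem (20) (function names are ours):

```python
import numpy as np

def dist_to_box(x, r=0.01):
    # Euclidean distance from x to the box {x : x_i in [-r, r]},
    # i.e., the metric E = dist(x, P_1) for problem (19): each
    # coordinate contributes max(|x_i| - r, 0).
    return float(np.linalg.norm(np.maximum(np.abs(x) - r, 0.0)))

def dist_to_corner_set(x, r=0.1):
    # Distance to the finite set {x : x_i in {-r, r}} (the set P_hat_2
    # for problem (20)): each coordinate snaps to the nearer of -r and r.
    nearest = np.where(x >= 0, r, -r)
    return float(np.linalg.norm(x - nearest))
```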

Implementation Details. For all the methods, we initialize $\mu_0$ from the uniform distribution $\mathrm{Uni}[0,1]$ and set $\Sigma_0 = I$. The ASMG method uses a fixed step size of $\beta = 0.1$ and $\gamma_t = 1/(t+1)$. For the ES method, we employ the default step size from Salimans et al. (2017), i.e., $\beta = 0.01$. For the BES method, we adopt the default step size from Gao & Sener (2022), i.e., $\beta = 0.01$. We employ the default hyperparameter setting from He et al. (2020) for the MMES method. We then assess these methods using varying sample sizes, i.e., $N \in \{10, 50, 100\}$. The mean value of $\mathcal{E}$ over 3 independent runs is reported.

Result. Figures 2 and 3 show the results on three 100-dimensional synthetic problems with sample sizes $N = 10$ and $N = 100$, respectively. Combining these results with the result from Figure 1, we observe that the proposed ASMG method performs consistently, achieving high precision (i.e., $10^{-4}$) in all three cases, $N = 10, 50, 100$. The CMA-ES method converges with high precision on the Shift $l_{1}$-Ellipsoid problem when $N = 50$ and $100$. However, it fails to converge when the sample size is very small, i.e., $N = 10$. A similar pattern occurs on the Shift $l_{\frac{1}{2}}$-Ellipsoid problem for the CMA-ES method: it achieves $10^{-1}$ precision when $N = 50$ and $100$, but only $10^{1}$ precision when $N = 10$. It also fails on the Mixed Ellipsoid-Rastrigin10 problem even when $N = 50$ and $100$. The MMES method likewise cannot reach high precision on these problems. The ES and BES methods do not converge in any of the settings, indicating that it could be challenging for these methods to optimize these non-smooth or non-convex problems. These results demonstrate the effectiveness of the proposed ASMG method.

Figure 4 presents the results for the Shift $l_{\frac{1}{2}}$-Ellipsoid problem with a sample size of $N = 100$ across various problem dimensions, i.e., $d \in \{200, 500, 1000\}$. The CMA-ES method can still converge when $d = 200$, but it does not converge when $d = 1000$. In contrast, the ASMG method consistently achieves high precision across all three settings, demonstrating its effectiveness in handling high-dimensional problems.


(a) Shift $l_{1}$ -Ellipsoid.


(b) Shift $l_{\frac{1}{2}}$-Ellipsoid.


(c) Mixed Ellipsoid-Rastrigin 10.
Figure 2: Results on the synthetic problems with 10 samples (i.e., $N = 10$ ).


(a) Shift $l_{1}$ -Ellipsoid.


(b) Shift $l_{\frac{1}{2}}$-Ellipsoid.


(c) Mixed Ellipsoid-Rastrigin 10.


Figure 3: Results on the synthetic problems with 100 samples (i.e., $N = 100$ ).
(a) $d = 200$.

(b) $d = 500$.

(c) $d = 1000$.

Figure 4: Results on the shift $l_{1}$-ellipsoid problem with $N = 100$ and different problem dimensions $d$.

F.2 BLACK-BOX MULTI-TASK LEARNING

Details of CLIP. CLIP is a widely adopted vision-language model that jointly trains an image encoder $h_{\mathrm{image}}(\cdot)$ and a text encoder $h_{\mathrm{text}}(\cdot)$ by aligning the embedding spaces of images and text. Given an image $\pmb{x}$ and a set of class names $\{y_i\}_{i=1}^K$, CLIP obtains image features $h_{\mathrm{image}}(\pmb{x})$ and a set of text features $\{h_{\mathrm{text}}(\pmb{p}; \pmb{y}_i)\}_{i=1}^K$, where $\pmb{p} \in \mathbb{R}^D$ represents the token embedding of the shared prompt. The image $\pmb{x}$ is classified into the class $y_i$ that corresponds to the highest similarity score $h_{\mathrm{image}}(\pmb{x}) \cdot h_{\mathrm{text}}(\pmb{p}; \pmb{y}_i)$ among the cosine similarities between the image features and all the text features. In the zero-shot setup, the shared token embedding $\pmb{p}$ is transformed from the prompt "a photo of a", while in the prompt tuning setup, the token embedding of the shared prompt is optimized directly to enhance performance.
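The classification rule described above amounts to an argmax over cosine similarities; a minimal sketch, assuming the image and text features have already been computed by the (frozen) encoders:

```python
import numpy as np

def clip_classify(image_feat, text_feats):
    """Return the index of the class whose text feature has the highest
    cosine similarity with the image feature (the zero-shot CLIP rule).

    image_feat: (e,)   image embedding h_image(x)
    text_feats: (K, e) text embeddings h_text(p; y_i), one row per class
    """
    img = image_feat / np.linalg.norm(image_feat)
    txt = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    sims = txt @ img          # cosine similarities, shape (K,)
    return int(np.argmax(sims))
```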

Loss function $\mathcal{L}_i$. In the context of multi-task learning, we consider a scenario involving $m$ tasks, each with its own dedicated training dataset. For task $i$, we have a dataset $\mathcal{D}_i = \{(\pmb{x}_k, \bar{\pmb{y}}_k)\}$. In each training epoch, we sample a mini-batch $\mathcal{B}_i$ from $\mathcal{D}_i$, and the function $\mathcal{L}_i$ in Eq. (22) can be formulated as

$$\mathcal{L}_{i}\left(\mathcal{B}_{i}; \left\{\mathcal{M}_{c}, \boldsymbol{A}\boldsymbol{v}\right\}\right) = \sum_{(\boldsymbol{x}, \bar{\boldsymbol{y}}) \in \mathcal{B}_{i}} \ell\left(\mathcal{M}_{c}(\boldsymbol{A}\boldsymbol{v}; \boldsymbol{x}), \bar{\boldsymbol{y}}\right),$$

where $\ell$ can be the cross-entropy function for classification problems and $\mathcal{M}_c$ denotes the CLIP model in our setting.
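As an illustration, the mini-batch loss $\mathcal{L}_i$ with a cross-entropy $\ell$ can be sketched as follows (a NumPy sketch on raw logits; the shapes and the numerical-stability constant are our own assumptions, not the paper's specification):

```python
import numpy as np

def task_loss(logits, labels):
    """Mini-batch loss L_i: sum of cross-entropy over the batch.

    logits: (B, K) raw class scores from the model M_c(Av; x)
    labels: (B,)   integer class labels y_bar
    """
    # numerically stable softmax over classes
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    # sum of -log p(correct class) over the batch; 1e-12 avoids log(0)
    return float(-np.log(probs[np.arange(len(labels)), labels] + 1e-12).sum())
```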

Datasets. We conduct experiments on two MTL benchmark datasets (Lin & Zhang, 2023), i.e., Office-31 (Saenko et al., 2010) and Office-home (Venkateswara et al., 2017). The Office-31 dataset includes images from three different sources: Amazon (A), digital SLR cameras (D), and Webcam (W). It contains 31 categories for each source and a total of 4652 labeled images. The Office-home dataset includes images from four sources: artistic images (Ar), clip art (Cl), product images (Pr), and real-world images (Rw). It contains 65 categories for each source and a total of 15,500 labeled images. For these two datasets, we treat the multi-class classification problem on each source as a separate task.

Implementation Details. Following the setup of Zhou et al. (2022a), we conduct experiments based on the CLIP model with ResNet-50 as the image encoder and use a prompt with 4 tokens for the text encoder where both the image and text encoders are kept frozen during the experiments. For zero-shot, we apply the default prompt "a photo of a {class}".

For all methods, we set $\mu_0 = 0$ and $\Sigma_0 = I$ as the initialization. For all baseline methods except the zero-shot setting, we optimize the prompt with a batch size of 64 for 200 epochs. The population size $N$ is set to 20 for Office-31 and 40 for Office-home, while $A$ is sampled from the normal distribution described in Sun et al. (2022a), i.e., $\mathcal{N}(0, \frac{\sigma_e}{\sqrt{d}})$, where $\sigma_e$ is the standard deviation of the word embeddings in CLIP. For the ASMG and ASMG-EW methods, the step size is fixed at $\beta = 0.5$. The coefficient $\gamma$ in the ASMG method is set as $\gamma_t = 1/(t+1)$. For the ES method, we employ the default step size of ES in Salimans et al. (2017), i.e., $\beta$ is chosen from $\{0.5, 0.1, 0.01\}$. For the BES method, we perform a grid search on the step size, i.e., $\beta$ is chosen from $\{0.5, 0.1, 0.01\}$. For the MMES method, we employ the default hyperparameter setting from He et al. (2020). Additionally, we evaluate the performance of the method on different dimensions of $z$, specifically $d \in \{256, 512, 1024\}$. The CMA-ES method is implemented using its official implementation, while we implement the ES and BES methods ourselves.
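The random projection $A$ can be sampled as follows (a sketch; we treat $\frac{\sigma_e}{\sqrt{d}}$ as the standard deviation, since the notation $\mathcal{N}(0, \frac{\sigma_e}{\sqrt{d}})$ leaves the std-versus-variance convention implicit, and the function name is ours):

```python
import numpy as np

def sample_projection(D, d, sigma_e, rng=None):
    """Random projection A in R^{D x d} with i.i.d. entries drawn from
    N(0, sigma_e / sqrt(d)); the prompt embedding is then p = A v for a
    low-dimensional v in R^d."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.normal(0.0, sigma_e / np.sqrt(d), size=(D, d))
```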

G RELATIONSHIP TO GRADIENT-BASED MOO

Among previous MOO methods, the most relevant to our approach are gradient-based MOO methods (Yu et al., 2020; Liu et al., 2021; Fernando et al., 2022; Zhou et al., 2022b), as they also aim to find a Pareto optimal or Pareto stationary solution. A typical gradient-based method is MGDA (Désidéri, 2012), which also solves a max-min optimization problem to obtain the weights. However, the proposed ASMG method is not a typical MGDA-type method. The max-min optimization problem in MGDA-type methods involves the true gradient of the parameters, and these methods add a regularization term $\|d\|^2$ to control the norm of the aggregated gradient. In our case, the update is conducted on a Gaussian distribution, and we jointly update the mean and covariance matrix. We use the Kullback-Leibler divergence to regularize the distance between two distributions, i.e., those parameterized by $\pmb{\theta}$ and $\pmb{\theta}_t$. Therefore, the form of the proposed max-min optimization problem, i.e., Eq. (4), differs from that of MGDA-type methods, and the solution process is also different. Nevertheless, our max-min problem also reduces to a simple quadratic programming problem for computing the aggregation weights.
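To illustrate the kind of quadratic programming subproblem mentioned above, the classical two-objective min-norm case admits a closed form (this is the MGDA-style subproblem, shown only for intuition; ASMG's own problem, Eq. (4), additionally carries the KL regularizer and differs in form):

```python
import numpy as np

def min_norm_weight_2obj(g1, g2):
    """Closed-form solution of the two-objective min-norm problem
    min_{l in [0, 1]} || l * g1 + (1 - l) * g2 ||^2,
    the MGDA-style QP for aggregation weights with m = 2 objectives."""
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:
        return 0.5            # gradients coincide; any weight is optimal
    # unconstrained minimizer, then clipped to the simplex [0, 1]
    l = (g2 - g1) @ g2 / denom
    return float(np.clip(l, 0.0, 1.0))
```

Setting the derivative of $\|g_2 + l(g_1 - g_2)\|^2$ to zero gives the unconstrained minimizer, and clipping enforces the simplex constraint.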