{
"File Number": "1072",
"Title": "Black-Box Generalization: Stability of Zeroth-Order Learning",
"Limitation": "Due to space limitation, we refer the reader to Appendix C for the detailed stability analysis of ZoSS with mini-batch. Specifically, we prove a growth recursion lemma for the mini-batch ZoSS updates (see Appendix C.1 for proof). Lemma 7 (Mini-Batch ZoSS Growth Recursion) Consider the sequences of updates {G̃Jt}Tt=1 and {G̃′Jt} T t=1 and µ ≤ cLΓdK/(nβ(3 + d)3/2). Let w0 = w′0 be the starting point, wt+1 = G̃Jt(wt) and w′t+1 = G̃ ′ Jt (w′t) for any t ∈ {1, . . . , T}. Then for any wt, w′t ∈ Rd and t ≥ 0 the following recursion holds\nE[∥G̃Jt(wt)− G̃′Jt(w ′ t)∥] ≤\n{( 1 + βαtΓ d K ) δt + cLαt n Γ d K if G̃Jt(·) = G̃′Jt(·)(\n1 + m−1m βαtΓ d K ) δt + 2Lαt m Γ d K + cLαt n Γ d K if G̃Jt(·) ̸= G̃′Jt(·). Although the iterate stability error (at time t) in the growth recursion depends on the batch size m under the event {G̃Jt(·) ̸= G̃′Jt(·)}, the stability bound on the final iterates is independent of m, and coincides with the single example updates (m = 1, Lemma 2). Herein, we provide an informal statement of the result. Lemma 8 (Mini-Batch ZoSS Stability | Nonconvex Loss) Consider the mini-batch ZoSS with any batch size m ≤ n, and iterates Wt+1 = Wt −αt∆fK,µWt,Jt , W ′ t+1 = W ′ t −αt∆f K,µ W ′t ,J ′ t , for all t ≤ T , with respect to the sequences S, S′. Then the stability error δT satisfies the inequality of Lemma 2. We refer the reader to Appendix Section C.1, Theorem 14 for the formal statement of the result.3 Through the Lipschitz condition of the loss and Lemma 8, we show that the mini-batch ZoSS enjoys the same generalization error bounds as in the case of single-query ZoSS (m = 1). As a consequence, the batch size does not affect the generalization error. Theorem 9 (Mini-batch ZoSS | Generalization Error) Let the loss function f(·, z) be L-Lipschitz and β-smooth (possibly nonconvex, possibly unbounded) for all z ∈ Z . Then the bounds of Theorem 5 and Theorem 6 hold for the mini-batch ZoSS with iterate Wt+1 = Wt − αt∆fK,µWt,Jt , for all t ≤ T and any batch size m ≤ n. 3As in the single-query (m = 1) ZoSS, under the assumption of convex loss, the stability error of mini-batch ZoSS satisfies the inequality (46), Appendix B, Lemma 11. By letting K → ∞ and c → 0, the generalization error bounds of mini-batch ZoSS reduce to those of mini-batch SGD, extending results of the single-query (m = 1) SGD that appeared in prior work [1]. Additionally, once K → ∞, c → 0 and m = n we obtain generalization guarantees for full-batch GD. For the sake of clarity and completeness we provide dedicated stability and generalization analysis of full-batch GD in Appendix D, Corollary 15.",
"Reviewer Comment": "Reviewer_3: Overall the paper is relatively well written and easy to follow. My main concern is in the relevance and the novelty of the analysis. Nesterov and Spokoiniy [14] (see Theorem 1) show that the gradient estimator deployed by the authors of this submission is an unbiased estimator of a smoothed version of\nf\n(\n⋅\n,\nz\n)\n. As long as the original function is Lipschitz (which is the case here), the smoothed version is a good approximation of the original one and the quality of this approximation is controlled by the parameter\nμ\n. Thus, the algorithm analyzed by the authors of this submission is an SGD performed on a slightly different objective, which has identical regularity properties. Consequently, using the result of Hardt et al [1] in the context of SGD + the triangle inequality coupled with Theorem 1 from Nesterov and Spokoiniy [14], we immediately get the stability result for the zero-order variant considered in this submission (and it allows us to pick\nμ\nproperly). Thus, I wonder if the authors could actually compact all their analysis into a couple of lines outsourcing most of the analysis from [1] and [14]?\nWhat is actually pretty interesting is that if the original functions are NOT\nβ\n-smooth, then the smoothed function has a Lipschitz gradient with the constant that depends on\nμ\n. Hence, an interesting direction could be the exploration of the case of non-\nβ\n-smooth loss functions. Unfortunately, this case is not considered by the authors.\nQuestions:\nI addressed my main concerns in the previous part. I will update this part after the discussion phase and the authors' responses.\nLimitations:\nThe authors addressed the limitations of their work. I will update this part after the discussion phase and the authors' responses.\nEthics Flag: No\nSoundness: 2 fair\nPresentation: 3 good\nContribution: 2 fair\n\nReviewer_4: Pros.\nThe generalization bounds for zeroth-order gradient update are new, and it may be potential for black-box learning tasks.\nThe bounds are independent of the dimension\nd\nand the number of function evaluations per step\nK\n.\nCons.\nIn the proof of Lemma 2 (Line 194), it seems that\nP\n(\nE\nt\n)\n=\n1\nn\ninstead of\nP\n(\nE\nt\n)\n=\n1\n−\n1\nn\n.\nIf so, how does this influence the following proof and main theorem?\nThe\nβ\n-smooth assumption of the loss function is a little bit strong. It requires the second-order derivative to be bounded. It narrows down the potential loss functions because many practical loss functions may not have second-order derivatives, which makes the bounds less appealing.\n===============================\nThanks for the authors' detailed response. My main concern has been clarified.\nQuestions:\nCould the authors explain more about the proof of Lemma 2 to clarify my above concern? if\nP\n(\nE\nt\n)\n=\n1\nn\ninstead of\nP\n(\nE\nt\n)\n=\n1\n−\n1\nn\n, how does this influence the generalization bounds?\nLimitations:\nNA\nEthics Flag: No\nSoundness: 2 fair\nPresentation: 3 good\nContribution: 3 good\n\nReviewer_5: Strengths\nThe bounds are a natural generalization of the bounds by Hardt et al. (2016) to the case of ZoSS. It is noteworthy that the order of the convergence rate of the ZoSS is the same as that of the SGD.\nThe proofs of the theorems are quite accessible: the reader only needs a college-level knowledge of Calculus, Linear Algebra, and Probability.\nWeaknesses\nThe requirement that the loss function be smooth calls into question the motivation for this work. 
After all, if we have smoothness and we know the loss function, then we can use the SGD. If we don't know the loss function, then how can we check its smoothness?\nThe work lacks empirical validation of the theory on real data. It would also be great if the authors show a specific natural example where the ZoSS (under the paper's assumptions) is applicable and the SGD is not.\nQuestions:\nHow one shall choose the values of\nK\nand\nμ\nin practice?\nLimitations:\nI think that the requirement of Lipschitz and smoothness is the main limitation of this work.\nEthics Flag: No\nSoundness: 3 good\nPresentation: 3 good\nContribution: 2 fair\n\nReviewer_6: The method is similar to the analysis of SGD (HRS16) in spirit, and the achieved bound is not very surprising. Still, it's good to see similar generalization bounds can be obtained for zeroth-order optimization (though it shares the same drawbacks of HRS16 as well, such as the linear dependence on\nT\n).\nThe recursion lemma is interesting.\nThe writing is good and the paper is easy-to-follow in general.\nThe title seems a bit unclear and misleading to me. I suggest change it to be more specific to avoid confusion (like 'generalization bounds for zeroth-order optimization via algorithmic stability'?). The analysis also applies only to ZoSS algorithms and doesn't seem to be able to cover all zeroth-order algorithms.\nTo conclude, the results of this paper are solid and meaningful in that it provides the first generalization analysis for zeroth-order optimization. But it's felt that there isn't enough technical depth (incremental over HRS16) and the paper is a bit over-selling.\nQuestions:\nWhat's 'high-probability analysis' (line 43)? Is it a typo?\nIs it possible to achieve similar results for one-point zeroth-order algorithms? For example, the 'gradient descent without a gradient' method of FKM04.\nLimitations:\nThe authors addressed some limitations\nEthics Flag: No\nSoundness: 3 good\nPresentation: 3 good\nContribution: 3 good",
"abstractText": "We provide the first generalization error analysis for black-box learning through derivative-free optimization. Under the assumption of a Lipschitz and smooth unknown loss, we consider the Zeroth-order Stochastic Search (ZoSS) algorithm, that updates a d-dimensional model by replacing stochastic gradient directions with stochastic differences of K + 1 perturbed loss evaluations per dataset (example) query. For both unbounded and bounded possibly nonconvex losses, we present the first generalization bounds for the ZoSS algorithm. These bounds coincide with those for SGD, and they are independent of d, K and the batch size m, under appropriate choices of a slightly decreased learning rate. For bounded nonconvex losses and a batch size m = 1, we additionally show that both generalization error and learning rate are independent of d and K, and remain essentially the same as for the SGD, even for two function evaluations. Our results extensively extend and consistently recover established results for SGD in prior work, on both generalization bounds and corresponding learning rates. If additionally m = n, where n is the dataset size, we recover generalization guarantees for full-batch GD as well.",
"1 Introduction": "Learning methods often rely on empirical risk minimization objectives that highly depend on a limited training data-set. Known gradient-based approaches such as SGD train and generalize effectively in reasonable time [1]. In contrast, emerging applications such as convex bandits [2– 4], black-box learning [5], federated learning [6], reinforcement learning [7, 8], learning linear quadratic regulators [9, 10], and hyper-parameter tuning [11] stand in need of gradient-free learning algorithms [11–14] due to an unknown loss/model or impossible gradient evaluation.\nGiven two or more function evaluations, zeroth-order algorithms (see, e.g., [14, 15]) aim to estimate the true gradient for evaluating and updating model parameters (say, of dimension d). In particular, Zeroth-order Stochastic Search (ZoSS) [13, Corollary 2], [16, Algorithm 1] uses K + 1 function evaluations (K ≥ 1), while deterministic zeroth-order approaches [5, Section 3.3] require at least K ≥ d + 1 queries. The optimization error of the ZoSS algorithm is optimal as shown in prior work for convex problems [13], and suffers at most a factor of √ d/K in the convergence rate as compared with SGD. In addition to the optimization error, the importance of generalization error raises the question of how well zeroth-order algorithms generalize to unseen examples. In this paper, we show that the generalization error of ZoSS essentially coincides with that of SGD, under the choice of a slightly decreased learning rate. Assuming a Lipschitz and smooth loss function, we\n36th Conference on Neural Information Processing Systems (NeurIPS 2022).\nestablish generalization guarantees for ZoSS, by extending stability-based analysis for SGD [1], to the gradient-free setting. In particular, we rely on the celebrated result that uniform algorithmic stability implies generalization [1, 17, 18].\nEarly works [17, 19–22] first introduced the notion of stability, and the connection between (uniform) stability and generalization. Recently, alternative notions of stability and generalization gain attention such as locally elastic stability [23], VC-dimension/flatness measures [24], distributional stability [25– 27], information theoretic bounds [16, 28–33] mainly based on assuming a sub-Gaussian loss, as well as connections between differential privacy and generalization [34–37].\nIn close relation to our paper, Hardt et al. [1] first showed uniform stability final-iterate bounds for vanilla SGD. More recent works develop alternative generalization error bounds based on high probability guarantees [38–41] and data-dependent variants [42], or under different assumptions than those of prior works such as as strongly quasi-convex [43], non-smooth convex [44–47], and pairwise losses [48, 49]. In the nonconvex case, [50] provide bounds that involve on-average variance of the stochastic gradients. Generalization performance of other algorithmic variants lately gain further attention, including SGD with early momentum [51], randomized coordinate descent [52], look-ahead approaches [53], noise injection methods [54], and stochastic gradient Langevin dynamics [55–62].\nRecently, stability and generalization of full-bath GD has also been studied; see, e.g., [63–67]. In particular, Charles and Papailiopoulos. [64] showed instability of GD for nonconvex losses. Still, such instability does not imply a lower bound on the generalization error of GD (in expectation). In fact, Hoffer et al. 
[63] showed empirically that the generalization of GD is not affected by the batch-size, and for large enough number of iterations GD generalizes comparably to SGD. Our analysis agree with the empirical results of Hoffer et al. [63], as we show that (for smooth losses) the generalization of ZoSS (and thus of SGD) is independent of the batch size.\nNotation. We denote the training data-set S of size n as {zi}ni=1, where zi are i.i.d. observations of a random variable Z with unknown distribution D. The parameters of the model are vectors of dimension d, denoted by W ∈ Rd, and Wt is the output at time t of a (randomized) algorithm AS . The (combined) loss function f(·, z) : Rd → R+ is uniformly Lipschitz and smooth for all z ∈ Z . We denote the Lipschitz constant as L and the smoothness parameter by β. The number of function (i.e., loss) evaluations (required at each iteration of the ZoSS algorithm) is represented by K +1 ∈ N. We denote by ∆f the smoothed approximation of the loss gradient, associated with parameter µ. The parameter ΓdK ≜ √ (3d− 1)/K + 1 prominently appears in our results. We denote the gradient of the loss function with respect to model parameters W , by ∇f(W, z) ≡ ∇wf(w, z)|w=W . We denote the mini batch at t by Jt, and m ≜ |Jt|.",
"1.1 Contributions": "Under the assumption of Lipschitz and smooth loss functions, we provide generalization guarantees for black-box learning, extending the analysis of prior work by Hardt et al. [1] to the gradient free setting. In particular, we establish uniform stability and generalization error bounds for the final iterate of the ZoSS algorithm; see Table 1 for a summary of the results. In more detail, the contributions of this work are as follows:\n• For unbounded and bounded losses, we show generalization error bounds identical to SGD, with a slightly decreased learning rate. Specifically, the generalization error bounds are independent of the dimension d, the number of evaluations K and the batch-size m. Further, a large enough number of evaluations (K) provide fast generalization even in the high dimensional regime. • For bounded nonconvex losses and single (example) query updates (m = 1), we show that both the ZoSS generalization error and learning rate are independent of d and K, similar to that of SGD [1, Theorem 3.8]. This property guarantees efficient generalization even with two function evaluations. • In the full information regime (i.e., when the number of function evaluations K grow to ∞), the ZoSS generalization bounds also provide guarantees for SGD by recovering the results in prior work [1]. Further, we derive novel SGD bounds for unbounded nonconvex losses, as well as mini-batch SGD for any batch size. Our results subsume generalization guarantees for full-batch ZoSS and GD algorithms.",
"2 Problem Statement": "Given a data S ≜ {zi}ni=1 of i.i.d samples zi from an unknown distribution D, our goal is to find the parameters w∗ of a learning model such that w∗ ∈ argminw R(w), where R(w) ≜ EZ∼D[f(w,Z)]. Since the distribution D is not known, we consider the empirical risk\nRS(w) ≜ 1\nn n∑ i=1 f(w, zi), (1)\nand the corresponding empirical risk minimization (ERM) problem to find w∗s ∈ argminw RS(w). For a (randomized) algorithm AS with input S and output W = A(S), the excess risk ϵexcess is bounded by the sum of the generalization error ϵgen and the optimization error ϵopt,\nϵexcess ≜ ES,A[R(W )]−R(w∗) = ES,A[R(W )−RS(W )]︸ ︷︷ ︸ ϵgen +(ES,A[RS(W )]−R(w∗)︸ ︷︷ ︸ ϵopt ). (2)\nTo analyze and control ϵgen, we prove uniform stability bounds which imply generalization [1, Theorem 2.2]. Specifically, if for all i.i.d. sequences S, S′ ∈ Zn that differ in one entry, we have supz EA[f(A(S), z)− f(A(S′), z)] ≤ ϵstab, for some ϵstab > 0, then ϵgen ≤ ϵstab. Because the loss is L-Lipschitz, ϵstab may then be chosen as L supS,S′ EA∥A(S)−A(S′)∥. Our primary goal in this work is to develop uniform stability bounds for a gradient-free algorithm AS of the form wt+1 = wt − αt∆fwt,z , where ∆fwt,z only depends on loss function evaluations. To achieve this without introducing unnecessary assumptions, we consider a novel algorithmic stability error decomposition approach. In fact, the stability error introduced at time t by AS breaks down into the stability error of SGD and an approximation error due to missing gradient information. Let Gt(·)\nand G′t(·) be the following SGD update rules\nGt(w) ≜ w − αt∇f(w, zit), G′t(w) ≜ w − αt∇f(w, z′it), (3)\nunder inputs S, S′ respectively, and let it ∈ {1, 2, . . . , n} be a random index chosen uniformly and independently by the random selection rule of the algorithm, for all t ≤ T . Similarly we use the notation G̃(·) and G̃′(·) to denote the iteration mappings of AS , i.e.,\nG̃t(w) ≜ w − αt∆fw,zit , G̃ ′ t(w) ≜ w − αt∆fw,z′it . (4)\nThen, as we also discuss later on (Lemma 1), the iterate stability error G̃t(w)− G̃′t(w′) of AS , for any w,w′ ∈ Rd and for all at t ≤ T , may be decomposed as\nG̃t(w)− G̃′t(w′) ∝ Gt(w)−G′t(w′)︸ ︷︷ ︸ ϵGBstab\n+ [ ∇f(w, zit)−∆fw,zit ] + [ ∇f(w′, z′it)−∆fw′,z′it ]︸ ︷︷ ︸ ϵest , (5)\nwhere ϵGBstab denotes the gradient-based stability error (associated with SGD), and ϵest denotes the gradient approximation error. We now proceed by formally introducing ZoSS.",
"3 Zeroth-Order Stochastic Search (ZoSS)": "As a gradient-free alternative of the classical SGD algorithm, we consider the ZoSS scheme, with iterates generated according to the following (single-example update) rule\nWt+1 = Wt − αt 1\nK K∑ k=1 f(Wt + µU t k, zit)− f(Wt, zit) µ U tk, U t k ∼ N (0, Id), µ ∈ R+, (6)\nwhere αt ≥ 0 is the corresponding learning rate (for the mini-batch update rule we refer the reader to Section 5). At every iteration t, ZoSS generates K i.i.d. standard normal random vectors U tk, k = 1, . . . ,K, and obtains K + 1 loss evaluations on perturbed model inputs. Then ZoSS evaluates a smoothed approximation of the gradient for some µ > 0. In light of the discussion in Section 2, we define the ZoSS smoothed gradient step at time t as\n∆fK,µw,zit ≡ ∆f K,µ,Ut w,zit ≜\n1\nK K∑ k=1 f(w + µU tk, zit)− f(w, zit) µ U tk. (7)",
"3.1 ZoSS Stability Error Decomposition": "To show stability bounds for ZoSS, we decompose its error into two parts through the stability error decomposition discussed in Section 2. Under the ZoSS update rule, Eq. (5) holds by considering the directions ∆fw,zit and ∆fw′,z′it according to ZoSS smoothed approximations (7). Then for any w,w′ ∈ Rd, the iterate stability error G̃t(w)− G̃′t(w′) of ZoSS at t, breaks down into the gradient based error ϵGBstab and approximation error ϵest.\nThe error term ϵGBstab expresses the stability error of the gradient based mappings [1, Lemma 2.4] and inherits properties related to the SGD update rule. The error ϵest captures the approximation error of the ZoSS smoothed approximation and depends on K and µ. The consistency of the smoothed approximation with respect to SGD follows from limK↑∞,µ↓0 ∆fK,µw,z = ∇f(w, z) for all w ∈ R and z ∈ Z . Further, the stability error is also consistent since limK↑∞,µ↓0 |ϵest| = 0. Later on, we use the ZoSS error decomposition in Eq. (5) together with a variance reduction lemma (Lemma 10), to derive exact expressions on the iterate stability error G̃t(w)− G̃′t(w′) for fixed K and µ > 0 (see Lemma 1). Although in this paper we derive stability bounds and bounds on the ϵgen, the excess risk ϵexcess depends on both errors ϵgen and ϵopt. In the following section, we briefly discuss known results on the ϵopt of zeroth-order methods, including convex and nonconvex losses.",
"3.2 Optimization Error in Zeroth-Order Stochastic Approximation": "Convergence rates of the ZoSS optimization error and related zeroth-order variants have been extensively studied in prior works; see e.g., [14, 15, 68]. For the convex loss setting, when K + 1\nfunction evaluations are available and no other information regarding the loss is given, the ZoSS algorithm achieves optimal rates with respect to the optimization error ϵopt. Specifically, under the assumption of a closed and convex loss, Duchi et al. [13] provided a lower bound for the minimax convergence rate and showed that ϵopt = Ω( √ d/K), for any algorithm that approximates the gradient given K + 1 evaluations. In the nonconvex setting Ghadimi et al. [69, 70] established sample complexity guarantees for the zeroth-order approach to reach an approximate stationary point.",
"4 Main Results": "For our analysis, we introduce the same assumptions on the loss function (Lipschitz and smooth) as appears in prior work [1]. Additionally, we exploit the η-expansive and σ-bounded properties of the SGD mappings Gt(·) and G′t(·) in Eq. (3).1 The mappings Gt(·) and G′t(·) are introduced for analysis purposes due to the stability error decomposition given in Eq. (5) and no further assumptions or properties are required for the zeroth-order update rules G̃t(·) and G̃′t(·) given in Eq. (4). The η-expansivity of Gt(·) holds for η = 1+ βαt if the loss is nonconvex, and η = 1 if the loss is convex and αt ≤ 2/β [1, Lemma 3.6]. Note that Gt(·) is always σ-bounded (σ = Lαt) [1, Lemma 3.3.].",
"4.1 Stability Analysis": "We derive generalization error bounds through uniform stability. To study the stability of ZoSS, we apply a variance reduction lemma that we provide in Appendix A. Exploiting the variance reduction lemma, we show a growth recursion lemma for the iterates of the ZoSS.\nLemma 1 (ZoSS Growth Recursion) Consider the sequences of updates {G̃t}Tt=1 and {G̃′t} T t=1. Let w0 = w′0 be the starting point, wt+1 = G̃t(wt) and w ′ t+1 = G̃ ′ t(w ′ t) for any t ∈ {1, . . . , T}. Then for any wt, w′t ∈ Rd and t ≥ 0 the following recursion holds\nE[∥G̃t(wt)− G̃′t(w′t)∥] ≤\n{( η + αt √ 3d−1 K β ) ∥wt − w′t∥+ µβαt(3 + d)3/2, if G̃t(·) = G̃′t(·),\n∥wt − w′t∥+ 2αtLΓdK + µβαt(3 + d)3/2, if G̃t(·) ̸= G̃′t(·).\nThe growth recursion of ZoSS characterizes the stability error that it is introduced by the ZoSS update and according to the outcome of the randomized selection rule at each iteration. Lemma 1 extends growth recursion results for SGD in prior work [1, Lemma 2.5] to the setting of the ZoSS algorithm. If K → ∞ and µ → 0 (while the rest of the parameters are fixed), then ΓdK → 1, and the statement recovers that of the SGD [1, Lemma 2.5].\nProof of Lemma 1. Let S and S′ be two samples of size n differing in only a single example, and let G̃t(·), G̃′t(·) be the update rules of the ZoSS for each of the sequences S, S′ respectively. First under the event Et ≜ {G̃t(·) ≡ G̃′t(·)} (see Eq. (4)), by applying the Taylor expansion there exist vectors W ∗k,t and W † k,t with j th coordinates in the intervals ( w (j) t , w (j) t +µU (j) k,t ) ∪ ( w (j) t +µU (j) k,t , w (j) t ) and(\nw ′(j) t , w ′(j) t + µU (j) k,t\n) ∪ ( w ′(j) t + µU (j) k,t , w ′(j) t ) , respectively, such we find that for any wt, w′t ∈ Rd\nit is true that\nG̃t(wt)− G̃′t(w′t) = G̃t(wt)− G̃t(w′t)\n= wt − w′t − αt K K∑ k=1 ⟨∇f(wt, zit)−∇f(w′t, zit), U tk⟩U tk (8)\n− αt K K∑ k=1 (µ 2 UTk∇2wf(w, zit)|w=W∗k,tU t k ) U tk + αt K K∑ k=1 (µ 2 UTk∇2wf(w, zit)|w=W †k,tU t k ) U tk\n= wt − αt∇f(wt, zit)︸ ︷︷ ︸ G(wt) − (w′t − αt∇f(w′t, zit))︸ ︷︷ ︸ G′(w′t)≡G(w′t)\n1 [1, Definition 2.3]: An update rule G(·) is η-expansive if ∥G(w) − G(w′)∥ ≤ η∥w − w′∥ for all w,w′ ∈ Rd. If ∥w −G(w)∥ ≤ σ then it is σ-bounded.\n− αt K K∑ k=1 (µ 2 UTk∇2wf(w, zit)|w=W∗k,tU t k ) U tk + αt K K∑ k=1 (µ 2 UTk∇2wf(w, zit)|w=W †k,tU t k ) U tk\n− αt ( 1\nK K∑ k=1 ⟨∇f(wt, zit)−∇f(w′t, zit), U tk⟩U tk − (∇f(wt, zit)−∇f(w′t, zit)) ) . (9)\nWe find (9) by adding and subtracting αt∇f(wt, zit) and αt∇f(w′t, zit) in Eq. (8). Recall that U tk are independent for all k ≤ K, t ≤ T and that the mappings G(·) and G′(·) defined in Eq. (9), are η-expansive. The last display and the triangle inequality give\nE[∥G̃t(wt)− G̃t(w′t)∥]\n≤ ∥G(wt)−G(w′t)∥+ 2αt K K∑ k=1 µβ 2 E [ ∥U tk∥3 ] +αt √ 3d− 1 K E[∥∇f(wt, zit)−∇f(w′t, zit)∥] (10)\n≤ η∥wt − w′t∥+ 2αt K K∑ k=1 µβ 2 E [ ∥U tk∥3 ] + αt √ 3d− 1 K β∥wt − w′t∥ (11)\n≤ ( η + αt √ 3d− 1 K β ) ∥wt − w′t∥+ µβαt(3 + d)3/2, (12)\nwhere (10) follows from (9) and Lemma 10, and for (11) we applied the η-expansive property of G(·) (see [1, Lemma 2.4 and Lemma 3.6]) and the β-smoothness of the loss function.2 Finally (12) holds since the random vectors U tk ∼ N (0, Id) are identically distributed for all k ∈ {1, 2, . . . ,K} and E∥U tk∥3 ≤ (3 + d)3/2. Eq. 
(12) gives the first part of the recursion.\nSimilar to (9), under the event Ect ≜ {G̃t(·) ̸= G̃′t(·)}, we find\nG̃t(wt)− G̃′t(w′t) = wt − αt∇f(wt, zit)︸ ︷︷ ︸\nG(wt)\n− ( w′t − αt∇f(w′t, z′it) )︸ ︷︷ ︸ G′(w′t)\n− αt K K∑ k=1 (µ 2 UTk∇2wf(w, zit)|w=W̃∗k,tU t k ) U tk + αt K K∑ k=1 (µ 2 UTk∇2wf(w, z′it)|w=W̃ †k,tU t k ) U tk\n− αt ( 1\nK K∑ k=1 ⟨∇f(wt, zit)−∇f(w′t, z′it), U t k⟩U tk − (∇f(wt, zit)−∇f(w′t, z′it)) ) . (13)\nBy using the last display, triangle inequality, Lemma 10 and β-smoothness, we find\nE[∥G̃t(wt)− G̃t(w′t)∥]\n≤ ∥G(wt)−G′(w′t)∥+ 2αt K K∑ k=1 µβ 2 E[∥U tk∥3] + αt √ 3d− 1 K E[∥∇f(wt, zit)−∇f(w′t, z′it)∥]\n≤ min{η, 1}δt + 2σt + 2αt K K∑ k=1 µβ 2 E[∥U tk∥3] + 2Lαt √ 3d− 1 K\n(14)\n≤ δt + 2αtLΓdK + µβαt(3 + d)3/2, (15) where (14) follows from the triangle inequality and L−Lipschitz condition, while the upper bound on ∥G(wt)−G′(w′t)∥ comes from [1, Lemma 2.4]. Finally, (15) holds since η ≥ 1 for both convex and nonconvex losses, σt = Lαt and E∥U tk∥3 ≤ (3 + d)3/2 for all k ∈ {1, . . . ,K}. This shows the second part of recursion. □\nFor sake of brevity, let I be an adapted stopping time that corresponds to the first iteration index that the single distinct instance of the two data-sets S, S′ is sampled by ZoSS. For any t0 ∈ {0, 1, . . . , n} we define the event Eδt0 ≜ {I > t0} ≡ {δt0 = 0}. The next result provides the stability bound.\n2For all z ∈ Z and W ∈ Rd it is true that ∥∇2wf(w, z)|w=W ∥ ≤ β.\nLemma 2 (ZoSS Stability | Nonconvex Loss) Assume that the loss function f(·, z) is L-Lipschitz and β-smooth for all z ∈ Z . Consider the ZoSS algorithm (6) with final-iterate estimates WT and W ′T , corresponding to the data-sets S, S\n′, respectively (that differ in exactly one entry). Then the discrepancy δT ≜ ∥WT −W ′T ∥, under the event Eδt0 , satisfies the inequality\nE[δT |Eδt0 ] ≤ ( 2L n ΓdK + µβ(3 + d) 3/2 ) T∑ t=t0+1 αt T∏ j=t+1 ( 1 + βαjΓ d K ( 1− 1 n )) . (16)\nThe corresponding bound of Lemma 2 for convex losses is slightly tighter than the bound in (16). Since the two bounds differ only by a constant, the consequent results of Lemma 2 are essentially identical for convex losses as well. We provide the equivalent version of Lemma 2 for convex losses in Appendix B.\nProof of Lemma 2. Consider the events Et ≜ {G̃t(·) ≡ G̃′t(·)} and Ect ≜ {G̃t(·) ̸= G̃′t(·)} (see Eq. (4)). Recall that P(Et) = 1 − 1/n and P(Ect ) = 1/n for all t ≤ T . For any t0 ≥ 0, a direct application of Lemma 1 gives\nE[δt+1|Eδt0 ] = P(Et)E[δt+1|Et, Eδt0 ] + P(E c t )E[δt+1|Ect , Eδt0 ]\n= ( 1− 1\nn\n) E[δt+1|Et, Eδt0 ] + 1 n E[δt+1|Ect , Eδt0 ]\n≤ ( η + αtβ √ 3d− 1 K + 1 n ( 1− η − αtβ √ 3d− 1 K )) E[δt|Eδt0 ]\n+ 2αtL\nn ΓdK + µβαt(3 + d) 3/2. (17)\nWith Rt ≜ (η + αtβ(ΓdK − 1) + (1− η − αtβ(ΓdK − 1))/n) solving the recursion in (17) gives\nE[δT |Eδt0 ] ≤ ( 2L n ΓdK + µβ(3 + d) 3/2 ) T∑ t=t0+1 αt T∏ j=t+1 Rj . (18)\nWe consider the last inequality for nonconvex loss functions with η = 1 + βαt and convex loss functions with η = 1 to derive Lemma 2 and Lemma 11 respectively (Appendix B). □",
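To make the role of Γ_K^d concrete, the illustrative snippet below (our construction; all parameter values are arbitrary) evaluates the right-hand side of (16) for the step size α_t = C/(tΓ_K^d) used in the theorems that follow. Since the step size absorbs a factor 1/Γ_K^d, the printed bound comes out the same for every K.

```python
import numpy as np

def zoss_stability_bound(n, T, d, K, L, beta, C, c, t0=0):
    """Numerically evaluate the RHS of Eq. (16) with alpha_t = C / (t * Gamma)
    and the admissible mu = c * L * Gamma / (n * beta * (3 + d)^1.5)."""
    gamma = np.sqrt((3 * d - 1) / K) + 1          # Gamma_K^d from the Notation
    mu = c * L * gamma / (n * beta * (3 + d) ** 1.5)
    alpha = C / (np.arange(1, T + 1) * gamma)     # alpha[t-1] is alpha_t
    rate = 1 + beta * alpha * gamma * (1 - 1 / n) # per-step growth factor
    prefix = (2 * L / n) * gamma + mu * beta * (3 + d) ** 1.5
    total = sum(alpha[t - 1] * np.prod(rate[t:])  # prod over j = t+1, ..., T
                for t in range(t0 + 1, T + 1))
    return prefix * total

for K in (1, 10, 100):
    print(K, zoss_stability_bound(n=1000, T=500, d=100, K=K,
                                  L=1.0, beta=1.0, C=0.1, c=0.1))
```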
"4.2 Generalization Error Bounds": "For the first generalization error bound, we evaluate the right part of the inequality (16) for decreasing step size and bounded nonconvex loss. Then the Lipschitz condition provides a uniform stability condition for the loss and yields the next theorem.\nTheorem 3 (Nonconvex Bounded Loss | Decreasing Stepsize) Assume that the loss f(·, z) ∈ [0, 1] is L-Lipschitz and β-smooth for all z ∈ Z . Consider the ZoSS update rule (6) with T the total number of iterates, αt ≤ C/tΓdK for some (fixed) C > 0 and for all t ≤ T , and fixed µ ≤ cLΓdK/nβ(3 + d)3/2 for some c > 0. Then the generalization error of ZoSS is bounded by\n|ϵgen| ≤ ( (2 + c)CL2 ) 1 Cβ+1 (eT ) Cβ Cβ+1\nn max\n{ 1, 1 + (Cβ)−1− e βC\nβC 1 Cβ+1\n( (2 + c)L2\neT\n) Cβ Cβ+1 } (19)\n≤ ( 1 + (Cβ)−1 ) ( (2 + c)CL2 ) 1 Cβ+1\nn (eT )\nCβ Cβ+1 . (20)\nInequality (19), as a tighter version of (20), provides a meaningful bound in marginal cases, i.e.,\nlim β↓0\nE [|f(WT , z)− f(W ′T , z)|] ≤ (2 + c)CL2\nn max\n{ log ( eT\n(2 + c)CL2\n) , 1 } . (21)\nBy neglecting the negative term in (19) we find (20), that is the ZoSS equivalent of SGD [1, Theorem 3.8]. When K → ∞ and c → 0, then ΓdK → 1, and the inequalities (19), (20) reduce to a\ngeneralization bound for SGD. Inequality (20) matches that of [1, Theorem 3.8], and (19) provides a tighter generalization bound for SGD as well. We show Theorem 3 in Appendix A.\nNext, we provide a bound on the generalization error for nonconvex losses that comes directly from Theorem 3. In contrast to Theorem 3, the next result provides learning rate and a generalization error bounds, both of which are independent of the dimension and the number of function evaluations.\nCorollary 4 Assume that the loss function f(·, z) ∈ [0, 1] is L-Lipschitz and β-smooth for all z ∈ Z . Consider the ZoSS update rule (6) with µ ≤ cLΓdK/(nβ(3 + d)3/2), T the total number of iterates, and αt ≤ C/t for some (fixed) C > 0 and for all t ≤ T . Then the generalization error of ZoSS is bounded by\n|ϵgen| ≤ ( 1 + (βC)−1 )2 ( 1 + (2 + c)CL2 ) 3Te 2n . (22)\nAs a consequence, even in the high dimensional regime d → ∞, two function evaluations (i.e., K = 1) are sufficient for the ZoSS to achieve ϵgen = O(T/n), with the learning rate being no smaller than that of SGD. We continue by providing the proof of Theorem 3. For the proof of Corollary 4, see Appendix A.3.\nIn light of Theorem 3 and Corollary 4, we observe that the over-fitting phenomenon occurs in the gradient-free approach similarly to gradient-based algorithms. For general nonconvex (and convex) losses under standard step-size choices, the generalization error increases with respect to T . Further, the effect of β affects both the stability (similarly to SGD in prior work) of the algorithm and the error approximation of the ZoSS. If β is large, then the expected approximation error (due to limited function evaluations) is also large [15] and the dependence on smoothness is unavoidable in blackbox learning. In our results, this is expressed through the Growth Recursion of ZoSS (Lemma 2), that involves both the stability and approximation error per iteration. However, a smaller step-size (αt = 1/2tβΓdK) mitigates the effect of β on the bound. We refer the reader to Appendix E for a unified analysis of the excess risk, that captures the over-fitting and under-fitting trade-off.\nAdditionally, the number of iterations T is considered to be fixed and known (as in prior works including on average and high probability results on generalization). 
This is reasonable and quite standard because given the theoretical results, we know beforehand the appropriate choices of T that provide a good trade-off between generalization and optimization. A classical setting is that of a fixed step-size αt = 1/T with T = √ n, which provides the well known generalization error bound for\nSGD with order O(1/ √ n), as appears in very recent and timely prior works [32, Section 3.1], [45].\nIn the unbounded loss case, we apply Lemma 2 by setting t0 = 0 (recall that t0 is a free parameter, while the algorithm depends on the random variable I). The next result provides a generalization error bound for the ZoSS algorithm with constant step size. In the first case of the theorem, we also consider the convex loss as a representative result, as we show the same bound holds for an appropriate choice of greater learning rate than the learning rate of the nonconvex case. The convex case for the rest of the results of this work can be similarly derived.\nTheorem 5 (Unbounded Loss | Constant Step Size) Assume that the loss f(·, z) is L-Lipschitz, βsmooth for all z ∈ Z . Consider the ZoSS update rule (6) with µ ≤ cLΓdK/(nβ(3 + d)3/2) for some c > 0. Let T be the total number of iterates and for any t ≤ T , • if f(·, z) is convex for all z ∈ Z and αt ≤ min{log ( 1+Cβ(1− 1/ΓdK) ) /Tβ(ΓdK − 1), 2/β}, or\nif f(·, z) is nonconvex and αt ≤ log ( 1 + Cβ ) /TβΓdK , for C > 0 then\n|ϵgen| ≤ (2 + c)CL2\nn , (23)\n• if f(·, z) is nonconvex and αt ≤ C/TΓdK , for some C > 0, then\n|ϵgen| ≤ L2 (2 + c) (eCβ − 1)\nnβ . (24)\nFor the proof of Theorem 5 see Appendix A.4. In the following, we present the generalization error of ZoSS for an unbounded loss with a decreasing step size. Recall that the results for unbounded nonconvex loss also hold for the case of a convex loss with similar bounds on the generalization error and learning rate (see the first case of Theorem 5).\nTheorem 6 (Unbounded Loss | Decreasing Step Size) Assume that the loss f(·, z) is L-Lipschitz, β-smooth for all z ∈ Z . Consider ZoSS with update rule (6), T the total number of iterates, αt ≤ C/tΓdK for all t ≤ T and for some C > 0, and µ ≤ cLΓdK/(nβ(3 + d)3/2) for some c > 0. Then the generalization error of ZoSS is bounded by\n|ϵgen| ≤ (2 + c)L2(eT )Cβ\nn min\n{ C + β−1, C log(eT ) } . (25)\nFor the proof of Theorem 6 see Appendix A.5. Note that the constant C is free and controls the learning rate. Furthermore, it quantifies the trade-off between the speed of training and the generalization of the algorithm. In the next section, we consider the ZoSS algorithm with a minibatch of size m for which we provide generalization error bounds. These results hold under the assumption of unbounded loss and for any batch size m including the case m = 1.",
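For intuition on the scaling of these bounds, the small illustrative evaluation below computes the right-hand side of (20) for a few horizons T (parameter values are arbitrary choices of ours). Note that neither d nor K appears, reflecting the dimension- and query-independence of the bound.

```python
import numpy as np

def theorem3_bound(n, T, L, beta, C, c):
    """RHS of Eq. (20): (1 + 1/(C*beta)) * ((2+c)*C*L^2)^(1/(C*beta+1))
    * (e*T)^(C*beta/(C*beta+1)) / n."""
    q = C * beta / (C * beta + 1)          # exponent of the (e*T) factor
    return ((1 + 1 / (C * beta))
            * ((2 + c) * C * L**2) ** (1 - q)
            * (np.e * T) ** q / n)

for T in (10**3, 10**4, 10**5):
    b = theorem3_bound(n=50_000, T=T, L=1.0, beta=1.0, C=1.0, c=0.1)
    print(f"T = {T:>6}: |eps_gen| <= {b:.4f}")
```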
"5 Generalization of Mini-Batch ZoSS": "For the mini-batch version of ZoSS, at each iteration t, the randomized selection rule (uniformly) samples a batch Jt of size m and evaluates the new direction of the update by averaging the smoothed approximation ∆fK,µw,z over the samples z ∈ Jt as\n∆fK,µw,Jt ≡ ∆f K,µ,Ut w,Jt ≜\n1\nmK m∑ i=1 K∑ k=1 f(w + µU tk,i, zJt,i)− f(w, zJt,i) µ U tk,i, (26)\nwhere U tk,i ∼ N (0, Id) are i.i.d. (standard normal), and µ ∈ R+. The update rule of the minibatch ZoSS is Wt+1 = Wt − αt∆fK,µWt,Jt for all t ≤ T , and we define G̃Jt(w) ≜ w − αt∆f K,µ w,Jt\n, G̃′J′t (w) ≜ w − αt∆fK,µw,J′t for Jt ⊂ S and J ′ t ⊂ S′ respectively. Due to space limitation, we refer the reader to Appendix C for the detailed stability analysis of ZoSS with mini-batch. Specifically, we prove a growth recursion lemma for the mini-batch ZoSS updates (see Appendix C.1 for proof).\nLemma 7 (Mini-Batch ZoSS Growth Recursion) Consider the sequences of updates {G̃Jt}Tt=1 and {G̃′Jt} T t=1 and µ ≤ cLΓdK/(nβ(3 + d)3/2). Let w0 = w′0 be the starting point, wt+1 = G̃Jt(wt) and w′t+1 = G̃ ′ Jt (w′t) for any t ∈ {1, . . . , T}. Then for any wt, w′t ∈ Rd and t ≥ 0 the following recursion holds\nE[∥G̃Jt(wt)− G̃′Jt(w ′ t)∥] ≤\n{( 1 + βαtΓ d K ) δt + cLαt n Γ d K if G̃Jt(·) = G̃′Jt(·)(\n1 + m−1m βαtΓ d K ) δt + 2Lαt m Γ d K + cLαt n Γ d K if G̃Jt(·) ̸= G̃′Jt(·).\nAlthough the iterate stability error (at time t) in the growth recursion depends on the batch size m under the event {G̃Jt(·) ̸= G̃′Jt(·)}, the stability bound on the final iterates is independent of m, and coincides with the single example updates (m = 1, Lemma 2). Herein, we provide an informal statement of the result.\nLemma 8 (Mini-Batch ZoSS Stability | Nonconvex Loss) Consider the mini-batch ZoSS with any batch size m ≤ n, and iterates Wt+1 = Wt −αt∆fK,µWt,Jt , W ′ t+1 = W ′ t −αt∆f K,µ W ′t ,J ′ t , for all t ≤ T , with respect to the sequences S, S′. Then the stability error δT satisfies the inequality of Lemma 2.\nWe refer the reader to Appendix Section C.1, Theorem 14 for the formal statement of the result.3 Through the Lipschitz condition of the loss and Lemma 8, we show that the mini-batch ZoSS enjoys the same generalization error bounds as in the case of single-query ZoSS (m = 1). As a consequence, the batch size does not affect the generalization error.\nTheorem 9 (Mini-batch ZoSS | Generalization Error) Let the loss function f(·, z) be L-Lipschitz and β-smooth (possibly nonconvex, possibly unbounded) for all z ∈ Z . Then the bounds of Theorem 5 and Theorem 6 hold for the mini-batch ZoSS with iterate Wt+1 = Wt − αt∆fK,µWt,Jt , for all t ≤ T and any batch size m ≤ n.\n3As in the single-query (m = 1) ZoSS, under the assumption of convex loss, the stability error of mini-batch ZoSS satisfies the inequality (46), Appendix B, Lemma 11.\nBy letting K → ∞ and c → 0, the generalization error bounds of mini-batch ZoSS reduce to those of mini-batch SGD, extending results of the single-query (m = 1) SGD that appeared in prior work [1]. Additionally, once K → ∞, c → 0 and m = n we obtain generalization guarantees for full-batch GD. For the sake of clarity and completeness we provide dedicated stability and generalization analysis of full-batch GD in Appendix D, Corollary 15.",
"6 Discussion: Black-box Adversarial Attack Design and Future Work": "A standard, well-cited example of ZoSS application is adversarial learning as considered in [5], when the gradient is not known for the adversary (for additional applications for instance federated/reinforcement learning, linear quadratic regulators; see also Section 1 for additional references). Notice that the algorithm in [5] is restrictive in the high dimensional regime since it requires 2d function evaluations per iteration. In contrast, ZoSS can be considered with any K ≥ 2 functions evaluations (the trade-off is between accuracy and resource allocation, which is also controlled through K). If K = d + 1 evaluations are available we recover guarantees for the deterministic zeroth-order approaches (similar to [5]).\nRetrieving a large number of function evaluations often is not possible in practice. When a limited amount of function evaluations is available, the adversary obtains the solution (optimal attack) with an optimization error that scales by a factor of √ d/K, and the generalization error of the attack is of the order √ T/n under appropriate choices of the step-size, the smoothing parameter µ and K. Fine tuning of the these parameters might be useful in practice, but in general K should be chosen as large as possible. In contrast, µ should be small and satisfy the inequality µ ≤ cLΓdK/nβ(3 + d)3/2 (Theorem 6). For instance, in practice µ is often chosen between 10−10 and 10−8 (or even lower) and the ZoSS algorithm remains (numerically) stable.\nFor neural networks with smooth activation functions [71–73], the ZoSS algorithm does not require the smoothness parameter β to be necessarily known, however if β is large then the guarantees of the estimated model would be pessimistic. To ensure that the learning procedure is successful, the adversary can approximate β (since the loss is not known) by estimating the (largest eigenvalue of the) Hessian through the available function evaluations [74, Section 4.1].\nAlthough the non-smooth (convex) loss setting lies out of the scope of this work, it is expected to inherit properties and rates of the SGD for non-smooth losses (at least for sufficiently small smoothing parameter µ). In fact, [45, page 3, Table 1] developed upper and lower bounds for the SGD in the non-smooth case, and they showed that standard step-size choices provide vacuous stability bound. Due to these inherent issues of non-smooth (and often convex only cases), the generalization error analysis of ZoSS for non-smooth losses remains open. Finally, information-theoretic generalization error bounds of ZoSS can potentially provide further insight into the problem, due to the noisy updates of the algorithm, and consist part of future work.",
"7 Conclusion": "In this paper, we characterized the generalization ability of black-box learning models. Specifically, we considered the Zeroth-order Stochastic Search (ZoSS) algorithm, which evaluates smoothed approximations of the unknown gradient of the loss by only relying on K +1 loss evaluations. Under the assumptions of a Lipschitz and smooth (unknown) loss, we showed that the ZoSS algorithm achieves the same generalization error bounds as that of SGD, while the learning rate is slightly decreased compared to that of SGD. The efficient generalization ability of ZoSS, together with strong optimality results related to the optimization error by Duchi et al. [13], makes it a robust and powerful algorithm for a variety of black-box learning applications and problems.",
"Reviewer Summary": "Reviewer_3: The paper analyses the uniform stability of zero-order optimisation algorithms. Using the Gaussian randomisation for the gradient estimator that was put forward and extensively analysed by Nesterov and Spokoiniy, the authors establish the stability result in the spirit of Hardt et al [1] (who were only dealing with the pure SGD). This work focuses only on the generalisation bounds and ignores the optimisation error, referring to the work of Duchi et al [13].\n\nReviewer_4: In this paper, the authors provide generalization error bounds for derivative-free updates under\nL\n-Lipschitz and\nβ\n-smooth assumption of the loss fucntion. The main technique extends (Hardt et al. 2016) by bounding an additional gradient approximation error term (\nϵ\ne\ns\nt\n).\n\nReviewer_5: The paper establishes generalization error bounds of the gradient-free analogue of the SGD---zero-order stochastic search (ZoSS)---in which the gradient of the loss function at a point is approximated by\nK\n+\n1\nvalues of the function in the vicinity of this point. The cases of nonconvex and bounded, convex and unbounded, nonconvex and unbounded loss functions are considered. However, in all cases, the authors require that the loss function be Lipschitz and smooth.\n\nReviewer_6: This paper provides generalization bounds for the ZoSS algorithm, based on the analysis of algorithmic stability. The bounds match the previous work of HRS16 on SGD. The main technical ingredient is a recursion lemma which controls how fast the expected distance between two ZoSS instances grow, when they differ on only one example. This result is then extended to a few different scenarios."
}