Adapting to Function Difficulty and Growth Conditions in Private Optimization
Hilal Asi* Daniel Levy* John C. Duchi {asi,danilevy,jduchi}@stanford.edu
Abstract
We develop algorithms for private stochastic convex optimization that adapt to the hardness of the specific function we wish to optimize. While previous work provides worst-case bounds for arbitrary convex functions, the function at hand often belongs to a smaller class that enjoys faster rates. Concretely, we show that for functions exhibiting $\kappa$-growth around the optimum, i.e., $f(x) \geq f(x^{\star}) + \lambda \kappa^{-1} \| x - x^{\star} \|_2^\kappa$ for $\kappa > 1$, our algorithms improve upon the standard $\sqrt{d}/n\varepsilon$ privacy rate to the faster $(\sqrt{d}/n\varepsilon)^{\frac{\kappa}{\kappa-1}}$. Crucially, they achieve these rates without knowledge of the growth constant $\kappa$ of the function. Our algorithms build upon the inverse sensitivity mechanism, which adapts to instance difficulty [2], and recent localization techniques in private optimization [25]. We complement our algorithms with matching lower bounds for these function classes and demonstrate that our adaptive algorithm is simultaneously (minimax) optimal over all $\kappa \geq 1 + c$ whenever $c = \Theta(1)$.
1 Introduction
Stochastic convex optimization (SCO) is a central problem in machine learning and statistics, where for a sample space $\mathbb{S}$, parameter space $\mathcal{X} \subset \mathbb{R}^d$, and a collection of convex losses $\{F(\cdot; s) : s \in \mathbb{S}\}$, one wishes to solve
$$\underset{x \in \mathcal{X}}{\mathrm{minimize}} \;\; f(x) := \mathbb{E}_{S \sim P}\left[ F(x; S) \right] \tag{1}$$
using an observed dataset $S = S_{1}^{n} \stackrel{\mathrm{iid}}{\sim} P$ . While as formulated, the problem is by now fairly well-understood [12, 38, 29, 10, 37], it is becoming clear that, because of considerations beyond pure statistical accuracy—memory or communication costs [45, 26, 13], fairness [23, 28], personalization or distributed learning [35]—problem (1) is simply insufficient to address modern learning problems. To that end, researchers have revisited SCO under the additional constraint that the solution preserves the privacy of the provided sample [22, 21, 1, 16, 19]. A waypoint is Bassily et al. [7], who provide a private method with optimal convergence rates for the related empirical risk minimization problem, with recent papers focusing on SCO and providing (worst-case) optimal rates in various settings: smooth convex functions [8, 25], non-smooth functions [9], non-Euclidean geometry [5, 4] and under more stringent privacy constraints [34].
Yet these works ground their analyses in worst-case scenarios and provide guarantees for the hardest instance of the class of problems they consider. Correspondingly, they argue that their algorithms are optimal in a minimax sense: for any algorithm, there exists a hard instance on which the error achieved by the algorithm matches the upper bound. While valuable, these results are pessimistic—the exhibited hard instances are typically pathological—and fail to reflect achievable performance on typical instances.
In this work, we consider the problem of adaptivity when solving (1) under privacy constraints. Importantly, we wish to provide private algorithms that adapt to the hardness of the objective $f$ . A loss function $f$ may belong to multiple problem classes, each exhibiting different achievable rates, so a natural desideratum is to attain the error rate of the easiest sub-class. As a simple vignette, given an arbitrary 1-Lipschitz convex loss function $f$ , the worst-case guarantee of any $\varepsilon$ -DP algorithm is $\Theta(1 / \sqrt{n} + d / (n\varepsilon))$ . However, if one learns that $f$ exhibits some growth property—say $f$ is 1-strongly convex—the excess-loss guarantee improves to the faster $\Theta(1 / n + (d / (n\varepsilon))^2)$ rate with the appropriate algorithm. It is thus important to provide algorithms that achieve the rates of the "easiest" class to which the function belongs [32, 46, 18].
To that end, consider the nested classes of functions $\mathcal{F}^{\kappa}$ for $\kappa \in [1,\infty ]$ such that, if $f\in \mathcal{F}^{\kappa}$ then there exists $\lambda >0$ such that for all $x\in \mathcal{X}$
$$f(x) \geq f(x^{\star}) + \frac{\lambda}{\kappa} \| x - x^{\star} \|_2^{\kappa}, \quad \text{where } x^{\star} = \operatorname*{argmin}_{x \in \mathcal{X}} f(x).$$
For example, strong convexity implies growth with parameter $\kappa = 2$ . This growth assumption closely relates to uniform convexity [32] and the Polyak-Kurdyka-Lojasiewicz inequality [11], and we make these connections precise in Section 2. Intuitively, smaller $\kappa$ makes the function much easier to optimize: the error grows quickly around the optimal point. Objectives with growth are widespread in machine learning applications: among others, the $\ell_1$ -regularized hinge loss exhibits sharp growth (i.e. $\kappa = 1$ ) while $\ell_1$ - or $\ell_{\infty}$ -constrained $\kappa$ -norm regression—i.e. $s = (a,b) \in \mathbb{R}^d \times \mathbb{R}$ and $F(x;s) = |b - \langle a,x\rangle|^{\kappa}$ —has $\kappa$ -growth for any integer $\kappa \geq 2$ [43]. In this work, we provide private algorithms that adapt to the actual growth of the function at hand.
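To make the growth condition concrete in the scalar case, one can check it numerically on a grid of test points; a minimal sketch (the helper `has_growth`, the test points, and the numerical tolerance are our own illustrative choices, not part of the paper):

```python
# Scalar sanity check of (lambda, kappa)-growth: f(x) >= f(x*) + (lam/kappa)|x - x*|^kappa.
def has_growth(f, x_star, lam, kappa, xs, tol=1e-12):
    """Return True if the growth inequality holds at every test point in xs."""
    return all(
        f(x) >= f(x_star) + (lam / kappa) * abs(x - x_star) ** kappa - tol
        for x in xs
    )

xs = [i / 100 - 1.0 for i in range(201)]          # grid on [-1, 1]
# f(t) = |t|^3 / 3 has (1, 3)-growth around t* = 0 (with equality)
assert has_growth(lambda t: abs(t) ** 3 / 3, 0.0, 1.0, 3.0, xs)
# f(t) = |t| has sharp (1, 1)-growth, as for hinge-type losses
assert has_growth(lambda t: abs(t), 0.0, 1.0, 1.0, xs)
# a constant function fails 2-growth away from its minimizer
assert not has_growth(lambda t: 0.0, 0.0, 1.0, 2.0, xs)
```

Note that growth is a local-to-global lower bound on the function, so a numerical check of this kind can only refute growth on the tested points, not certify it everywhere.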
We begin our analysis by examining Asi and Duchi's inverse sensitivity mechanism [2] on ERM as a motivation. While not a practical algorithm, it achieves instance-optimal rates for any one-dimensional function under mild assumptions, quantifying the best bound one could hope to achieve with an adaptive algorithm and showing (in principle) that adaptive private algorithms can exist. We first show that for any function with $\kappa$ -growth, the inverse sensitivity mechanism achieves privacy cost $(d / (n\varepsilon))^{\kappa / (\kappa - 1)}$ ; importantly, without knowledge of the function class $\mathcal{F}^{\kappa}$ that $f$ belongs to. This grounds and motivates our work in three ways: (i) it validates our choice of sub-classes $\mathcal{F}^{\kappa}$ , as the privacy rate is effectively controlled by the value of $\kappa$ ; (ii) it exhibits the rate we wish to achieve with efficient algorithms on $\mathcal{F}^{\kappa}$ ; and (iii) it showcases that for easier functions, privacy costs shrink significantly—to illustrate, for $\kappa = 5/4$ the privacy rate becomes $(d / (n\varepsilon))^5$ .
We continue our treatment of problem (1) under growth in Section 4 and develop practical algorithms that achieve the rates of the inverse sensitivity mechanism. Moreover, for approximate $(\varepsilon, \delta)$ -differential privacy, our algorithms improve the rates, achieving roughly $(\sqrt{d} / (n\varepsilon))^{\kappa / (\kappa - 1)}$ . Our algorithms hinge on a reduction to SCO: we show that by solving a sequence of increasingly constrained SCO problems, one achieves the right rate whenever the function exhibits growth at the optimum. Importantly, our algorithm only requires a lower bound $\underline{\kappa} \leq \kappa$ (where $\kappa$ is the actual growth of $f$ ).
We provide optimality guarantees for our algorithms in Section 5 and show that both the inverse sensitivity and the efficient algorithms of Section 4 are simultaneously minimax optimal over all classes $\mathcal{F}^{\kappa}$ whenever $\kappa = 1 + \Theta(1)$ and $d = 1$ for $\varepsilon$ -DP algorithms. Finally, we prove that in arbitrary dimension, for both pure- and approximate-DP constraints, our algorithms are also simultaneously optimal for all classes $\mathcal{F}^{\kappa}$ with $\kappa \geq 2$ .
On the way, we provide results that may be of independent interest to the community. First, we develop optimal algorithms for SCO under pure differential privacy constraints, which, to the best of our knowledge, do not exist in the literature. Secondly, our algorithms and analysis provide high-probability bounds on the loss, whereas existing results only provide (weaker) bounds on the expected loss. Finally, we complete the results of Ramdas and Singh [40] on (non-private) optimization lower bounds for functions with $\kappa$ -growth by providing information-theoretic lower bounds (in contrast to oracle-based lower bounds that rely on observing only gradient information) and capturing the optimal dependence on all problem parameters (namely $d$ , $L$ and $\lambda$ ).
1.1 Related work
Convex optimization is one of the best studied problems in private data analysis [16, 19, 41, 7]. The first papers in this line of work mainly study minimizing the empirical loss, and readily establish that
the (minimax) optimal privacy rates are $d / n\varepsilon$ for pure $\varepsilon$ -DP and $\sqrt{d\log(1 / \delta)} /n\varepsilon$ for $(\varepsilon ,\delta)$ -DP [16, 7]. More recently, several works instead consider the harder problem of privately minimizing the population loss [8, 25]. These papers introduce new algorithmic techniques to obtain the worst-case optimal rates of $1 / \sqrt{n} +\sqrt{d\log(1 / \delta)} /n\varepsilon$ for $(\varepsilon ,\delta)$ -DP. They also show how to improve this rate to the faster $1 / n + d\log (1 / \delta) / (n\varepsilon)^2$ in the case of 1-strongly convex functions. Our work subsumes both of these results as they correspond to $\kappa = \infty$ and $\kappa = 2$ respectively. To the best of our knowledge, there has been no work in private optimization that investigates the rates under general $\kappa$ -growth assumptions or adaptivity to such conditions.
In contrast, the optimization community has extensively studied growth assumptions [40, 32, 15] and shown that on these problems, carefully crafted algorithms improve upon the standard $1 / \sqrt{n}$ rate for convex functions to the faster $(1 / \sqrt{n})^{\kappa /(\kappa -1)}$ . [32] derives worst-case optimal (in the first-order oracle model) gradient algorithms in the uniformly convex case (i.e. $\kappa \geq 2$ ) and provides techniques to adapt to the growth $\kappa$ , while [40], drawing connections between growth conditions and active learning, provides upper and lower bounds in the first-order stochastic oracle model. We complete the results of the latter and provide information-theoretic lower bounds that have optimal dependence on $d$ , $\lambda$ and $n$ —their lower bound only holds for $\lambda$ inversely proportional to $d^{1 / 2 - 1 / \kappa}$ , when $\kappa \geq 2$ . Closest to our work is [15], who study instance-optimality via local minimax complexity [14]. For one-dimensional functions, they develop a bisection-based instance-optimal algorithm and show that on individual functions of the form $t\mapsto \kappa^{-1}|t|^{\kappa}$ , the local minimax rate is $(1 / \sqrt{n})^{\kappa /(\kappa -1)}$ .
2 Preliminaries
We first provide notation that we use throughout this paper, define useful assumptions and present key definitions in convex analysis and differential privacy.
Notation. $n$ typically denotes the sample size and $d$ the dimension. Throughout this work, $x$ refers to the optimization variable, $\mathcal{X} \subset \mathbb{R}^d$ to the constraint set and $s$ to elements ( $S$ when random) of the sample space $\mathbb{S}$ . We usually denote by $F: \mathcal{X} \times \mathbb{S} \to \mathbb{R}$ the (convex) loss function and for a dataset $S = (s_1, \ldots, s_n) \subset \mathbb{S}$ , we define the empirical and population losses
We omit the dependence on $P$ as it is often clear from context. We reserve $\varepsilon, \delta \geq 0$ for the privacy parameters of Definition 2.1. We always take gradients with respect to the optimization variable $x$. In the case that $F(\cdot; s)$ is not differentiable at $x$, we override notation and define $\nabla F(x; s) = \operatorname*{argmin}_{g \in \partial F(x; s)} \| g \|_2$, where $\partial F(x; s)$ is the subdifferential of $F(\cdot; s)$ at $x$. We use $\mathsf{A}$ for (potentially random) mechanisms and $S_1^n$ as a shorthand for $(S_1, \ldots, S_n)$. For $p \geq 1$, $\| \cdot \|_p$ is the standard $\ell_p$-norm, $\mathbb{B}_p^d(R)$ is the corresponding $d$-dimensional $p$-ball of radius $R$ and $p^\star$ is the dual of $p$, i.e., such that $1/p^\star + 1/p = 1$. Finally, we define the Hamming distance between datasets $d_{\mathrm{Ham}}(S, S') := \inf_{\sigma \in \mathfrak{S}_n} \sum_{i=1}^{n} \mathbf{1}\{s_i \neq s'_{\sigma(i)}\}$, where $\mathfrak{S}_n$ is the set of permutations over sets of size $n$.
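Since the infimum over permutations depends only on multiset overlap, $d_{\mathrm{Ham}}$ can be computed without enumerating permutations: it equals $n$ minus the size of the largest multiset matching between the two datasets. A small sketch (the helper `d_ham` is our own):

```python
from collections import Counter

def d_ham(S, Sp):
    """Permutation-invariant Hamming distance between equal-size datasets:
    the minimal number of examples one must change to turn S into Sp."""
    assert len(S) == len(Sp)
    overlap = sum((Counter(S) & Counter(Sp)).values())  # largest multiset matching
    return len(S) - overlap

assert d_ham([1, 2, 3], [3, 2, 1]) == 0   # order is irrelevant
assert d_ham([1, 1, 2], [1, 2, 2]) == 1   # change one copy of 1 into a 2
assert d_ham([1, 2, 3], [4, 5, 6]) == 3   # disjoint datasets
```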
Assumptions. We first state standard assumptions for solving (1). We assume that $\mathcal{X}$ is a closed, convex domain such that $\mathrm{diam}_2(\mathcal{X}) = \sup_{x,y\in \mathcal{X}}\| x - y\| _2\leq D < \infty$. Furthermore, we assume that for any $s\in \mathbb{S}$, $F(\cdot ;s)$ is convex and $L$-Lipschitz with respect to $\| \cdot \| _2$. Central to our work, we define the following $\kappa$-growth assumption.
Assumption 1 ($\kappa$-growth). Let $x^{\star} = \operatorname*{argmin}_{x \in \mathcal{X}} f(x)$. For a loss $F$ and distribution $P$, we say that $(F, P)$ has $(\lambda, \kappa)$-growth for $\kappa \in [1, \infty]$ and $\lambda > 0$, if the population function satisfies
$$f(x) \geq f(x^{\star}) + \frac{\lambda}{\kappa} \| x - x^{\star} \|_2^{\kappa} \quad \text{for all } x \in \mathcal{X}.$$
In the case where $\widehat{P}$ is the empirical distribution on a finite dataset $S$ , we refer to $(\lambda, \kappa)$ -growth of $(F, \widehat{P})$ as $\kappa$ -growth of the empirical function $f_{S}$ .
Uniform convexity and Kurdyka-Lojasiewicz inequality. Assumption 1 is closely related to two fundamental notions in convex analysis: uniform convexity and the Kurdyka-Lojasiewicz inequality.
Following [39], we say that $h: \mathcal{Z} \subset \mathbb{R}^d \to \mathbb{R}$ is $(\sigma, \kappa)$-uniformly convex with $\sigma > 0$ and $\kappa \geq 2$ if for all $x, y \in \mathcal{Z}$ and $g \in \partial h(x)$,
$$h(y) \geq h(x) + \langle g, y - x \rangle + \frac{\sigma}{\kappa} \| y - x \|_2^{\kappa}.$$
This immediately implies that (i) sums (and expectations) preserve uniform convexity and (ii) if $f$ is $(\lambda, \kappa)$-uniformly convex, then it has $(\lambda, \kappa)$-growth. This will be useful when constructing hard instances, as it suffices to consider $(\lambda, \kappa)$-uniformly convex functions, which are generally more convenient to manipulate. Finally, we point out that, in the general case $\kappa \geq 1$, the literature refers to Assumption 1 as the Kurdyka-Lojasiewicz inequality [11] with, in their notation, $\varphi(s) = (\kappa / \lambda)^{1 / \kappa} s^{1 / \kappa}$. Theorem 5-(ii) in [11] says that, under mild conditions, Assumption 1 implies the following inequality between the error and the gradient norm for all $x \in \mathcal{X}$:
$$f(x) - f(x^{\star}) \leq \left(\frac{\kappa}{\lambda}\right)^{\frac{1}{\kappa - 1}} \|\nabla f(x)\|_2^{\frac{\kappa}{\kappa - 1}}.$$
This is a key result in our analysis of the inverse sensitivity mechanism of Section 3.
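When $f$ is convex, an inequality of this type follows directly from Assumption 1; a short self-contained derivation (ours, with the constant $(\kappa/\lambda)^{1/(\kappa-1)}$, which may differ from the exact constant in [11]):

```latex
\begin{align*}
f(x) - f(x^\star)
  &\leq \langle \nabla f(x),\, x - x^\star \rangle
   \leq \|\nabla f(x)\|_2 \, \|x - x^\star\|_2
   && \text{(convexity, Cauchy--Schwarz)} \\
  &\leq \|\nabla f(x)\|_2 \left( \frac{\kappa}{\lambda}\bigl(f(x) - f(x^\star)\bigr) \right)^{1/\kappa}
   && \text{(Assumption 1).}
\end{align*}
```

Rearranging gives $(f(x) - f(x^\star))^{1 - 1/\kappa} \leq (\kappa/\lambda)^{1/\kappa} \|\nabla f(x)\|_2$, and raising both sides to the power $\kappa/(\kappa - 1)$ yields $f(x) - f(x^\star) \leq (\kappa/\lambda)^{\frac{1}{\kappa-1}} \|\nabla f(x)\|_2^{\frac{\kappa}{\kappa-1}}$.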
Differential privacy. We begin by recalling the definition of $(\varepsilon, \delta)$ -differential privacy.
Definition 2.1 ([22, 21]). A randomized algorithm $\mathsf{A}$ is $(\varepsilon, \delta)$-differentially private ($(\varepsilon, \delta)$-DP) if for all datasets $S, S' \in \mathbb{S}^n$ that differ in a single data element and for all events $\mathcal{O}$ in the output space of $\mathsf{A}$, we have
$$\mathbb{P}\left(\mathsf{A}(S) \in \mathcal{O}\right) \leq e^{\varepsilon}\, \mathbb{P}\left(\mathsf{A}(S') \in \mathcal{O}\right) + \delta.$$
We use the following standard results in differential privacy.
Lemma 2.1 (Composition [20, Thm. 3.16]). If $\mathsf{A}_1,\ldots ,\mathsf{A}_k$ are randomized algorithms, each of which is $(\varepsilon ,\delta)$-DP, then their composition $(\mathsf{A}_1(S),\dots ,\mathsf{A}_k(S))$ is $(k\varepsilon ,k\delta)$-DP.
Next, we consider the Laplace mechanism. We will let $Z \sim \mathsf{Lap}_d(\sigma)$ denote a $d$ -dimensional vector $Z \in \mathbb{R}^d$ such that $Z_i \stackrel{\mathrm{iid}}{\sim} \mathsf{Lap}(\sigma)$ for $1 \leq i \leq d$ .
Lemma 2.2 (Laplace mechanism [20, Thm. 3.6]). Let $h: \mathbb{S}^n \to \mathbb{R}^d$ have $\ell_1$-sensitivity $\Delta$, that is, $\sup_{\mathcal{S}, \mathcal{S}' \in \mathbb{S}^n: d_{\mathrm{Ham}}(\mathcal{S}, \mathcal{S}') \leq 1} \| h(\mathcal{S}) - h(\mathcal{S}') \|_1 \leq \Delta$. Then the Laplace mechanism $\mathsf{A}(\mathcal{S}) = h(\mathcal{S}) + \mathsf{Lap}_d(\sigma)$ with $\sigma = \Delta / \varepsilon$ is $\varepsilon$-DP.
Finally, we need the Gaussian mechanism for $(\varepsilon ,\delta)$ -DP.
Lemma 2.3 (Gaussian mechanism [20, Thm. A.1]). Let $h: \mathbb{S}^n \to \mathbb{R}^d$ have $\ell_2$-sensitivity $\Delta$, that is, $\sup_{\mathcal{S}, \mathcal{S}' \in \mathbb{S}^n: d_{\mathrm{Ham}}(\mathcal{S}, \mathcal{S}') \leq 1} \| h(\mathcal{S}) - h(\mathcal{S}') \|_2 \leq \Delta$. Then the Gaussian mechanism $\mathsf{A}(\mathcal{S}) = h(\mathcal{S}) + \mathsf{N}(0, \sigma^2 I_d)$ with $\sigma = 2\Delta \sqrt{\log(2/\delta)} / \varepsilon$ is $(\varepsilon, \delta)$-DP.
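Both mechanisms are one-liners in code; a hedged sketch (the helper names and the clipped-mean example are ours, and we calibrate the Gaussian noise with the common $\sqrt{2\log(1.25/\delta)}$ factor, a standard variant of the constant in Lemma 2.3):

```python
import numpy as np

def laplace_mechanism(value, l1_sensitivity, eps, rng):
    """eps-DP release of a statistic with the given l1-sensitivity (Lemma 2.2)."""
    return value + rng.laplace(scale=l1_sensitivity / eps, size=value.shape)

def gaussian_mechanism(value, l2_sensitivity, eps, delta, rng):
    """(eps, delta)-DP release; sigma uses the common sqrt(2 log(1.25/delta))
    calibration, a standard variant of the constant in Lemma 2.3."""
    sigma = l2_sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / eps
    return value + rng.normal(scale=sigma, size=value.shape)

# Example: privatize the mean of n points in [-1, 1]^d; replacing one point
# moves the mean by at most 2d/n in l1-norm (and 2*sqrt(d)/n in l2-norm).
rng = np.random.default_rng(0)
n, d = 1000, 2
mean = np.array([0.2, -0.1])
noisy = laplace_mechanism(mean, l1_sensitivity=2 * d / n, eps=1.0, rng=rng)
assert noisy.shape == (2,)
```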
Inverse sensitivity mechanism. Our goal is to design private optimization algorithms that adapt to the difficulty of the underlying function. As a reference point, we turn to the inverse sensitivity mechanism of [2], as it enjoys general instance-optimality guarantees. For a given function $h: \mathbb{S}^n \to \mathcal{T} \subset \mathbb{R}^d$ that we wish to estimate privately, define the inverse sensitivity at $y \in \mathcal{T}$:
$$\mathrm{len}_h(S; y) := \inf_{S' \in \mathbb{S}^n} \left\{ d_{\mathrm{Ham}}(S, S') : h(S') = y \right\},$$
that is, the inverse sensitivity of a target parameter $y \in \mathcal{T}$ at instance $S$ is the minimal number of samples one needs to change to reach a new instance $S'$ such that $h(S') = y$. Given this quantity, the inverse sensitivity mechanism samples an output from the probability density
$$\pi_S(y) \propto \exp\left( -\frac{\varepsilon \, \mathrm{len}_h(S; y)}{2} \right).$$
The inverse sensitivity mechanism preserves $\varepsilon$ -DP and enjoys instance-optimality guarantees in general settings [2]. In contrast to (worst-case) minimax optimality guarantees which measure the performance of the algorithm on the hardest instance, these notions of instance-optimality provide stronger per-instance optimality guarantees.
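As a concrete discrete instance, consider privately releasing the median of an integer dataset: the inverse sensitivity of a target value has a closed form, and one can sample from a finite grid with weights $\exp(-\varepsilon \cdot \mathrm{len}/2)$. A minimal sketch (the median target, the grid, and the helper names are our own illustrative choices):

```python
import math
import random

def len_to_target(S, y):
    """Minimal number of samples to change so the median of S equals y (odd |S|)."""
    k = (len(S) + 1) // 2                      # median = k-th order statistic
    le = sum(1 for s in S if s <= y)
    ge = sum(1 for s in S if s >= y)
    return max(0, k - le, k - ge)              # a changed point can be set to y

def inverse_sensitivity_median(S, grid, eps, rng):
    """Sample a private median from the grid with weights exp(-eps * len / 2)."""
    weights = [math.exp(-eps * len_to_target(S, y) / 2) for y in grid]
    return rng.choices(grid, weights=weights, k=1)[0]

S = [1, 2, 3, 4, 100]                          # true median is 3
assert len_to_target(S, 3) == 0
assert len_to_target(S, 4) == 1                # one change makes 4 the median
out = inverse_sensitivity_median(S, list(range(101)), eps=2.0, rng=random.Random(0))
assert 0 <= out <= 100
```

Note how outputs far from the true median require many sample changes and hence receive exponentially small probability, which is the source of the mechanism's instance-adaptivity.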
3 Adaptive rates through inverse sensitivity for $\varepsilon$ -DP
To understand the achievable rates when privately optimizing functions with growth, we begin our theoretical investigation by examining the inverse sensitivity mechanism in our setting. We show that, for instances that exhibit $\kappa$ -growth of the empirical function, the inverse sensitivity mechanism privately solves ERM with excess loss roughly $(d / n\varepsilon)^{\frac{\kappa}{\kappa - 1}}$ .
In our setting, we use a gradient-based approximation of the inverse sensitivity mechanism to simplify the analysis, while attaining similar rates. Following [3] with our function of interest $h(S) \coloneqq \operatorname*{argmin}_{x \in \mathcal{X}} f_S(x)$, we can lower bound the inverse sensitivity by $\mathrm{len}_h(S; x) \geq n\|\nabla f_S(x)\|_2 / (2L)$ under natural assumptions. We define a $\rho$-smoothed version of this quantity, which is more suitable for continuous domains:
$$\mathrm{len}_{\rho}(S; x) := \frac{n}{2L} \inf_{y : \|y - x\|_2 \leq \rho} \|\nabla f_S(y)\|_2,$$
and define the $\rho$-smooth gradient-based inverse sensitivity mechanism $\mathsf{A}_{\mathrm{gr-inv}}(S)$ that samples from the density
$$\pi_{\mathsf{A}_{\mathrm{gr-inv}}(S)}(x) \propto \exp\left( -\frac{\varepsilon \, \mathrm{len}_{\rho}(S; x)}{2} \right).$$
Note that while exactly sampling from the un-normalized density $\pi_{\mathsf{A}_{\mathrm{gr-inv}}(S)}$ is computationally intractable, analyzing its performance is an important step towards understanding the optimal rates for the family of functions with growth that we study in this work. The following theorem demonstrates the adaptivity of the inverse sensitivity mechanism to the growth of the underlying instance. We defer the proof to Appendix A.
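To make $\mathsf{A}_{\mathrm{gr-inv}}$ concrete in one dimension, take the absolute loss $f_S(x) = \frac{1}{n}\sum_i |x - s_i|$ (so $L = 1$) and sample from a discrete grid with weights $\exp(-\varepsilon\,\mathrm{len}_\rho/2)$, where $\mathrm{len}_\rho$ is the smoothed gradient-based quantity evaluated over the $\rho$-ball. The grid discretization, the choice of loss, and the parameter values below are our own simplifications, not the paper's implementation:

```python
import math
import random

def grad_abs_loss(x, S):
    """Minimum-norm subgradient of f_S(x) = (1/n) * sum_i |x - s_i| (L = 1)."""
    return sum(1 if x > s else -1 if x < s else 0 for s in S) / len(S)

def smoothed_len(x, S, grid, rho, L=1.0):
    """rho-smoothed gradient-based lower bound on the inverse sensitivity:
    (n / 2L) * min over the rho-ball of the subgradient magnitude."""
    near = [y for y in grid if abs(y - x) <= rho]
    return len(S) * min(abs(grad_abs_loss(y, S)) for y in near) / (2 * L)

def gr_inv_sample(S, grid, eps, rho, rng):
    """Sample from the grid with weights exp(-eps * len_rho / 2)."""
    w = [math.exp(-eps * smoothed_len(x, S, grid, rho) / 2) for x in grid]
    return rng.choices(grid, weights=w, k=1)[0]

S = [0.1, 0.4, 0.5, 0.6, 0.9]                 # empirical minimizer: median 0.5
grid = [i / 100 for i in range(101)]
assert smoothed_len(0.5, S, grid, rho=0.01) == 0.0   # zero at the optimum
assert smoothed_len(0.9, S, grid, rho=0.01) > 0.0    # positive far away
x_hat = gr_inv_sample(S, grid, eps=5.0, rho=0.02, rng=random.Random(1))
assert 0.0 <= x_hat <= 1.0
```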
Theorem 1. Let $\mathcal{S} = (s_1, \ldots, s_n) \in \mathbb{S}^n$ and let $F(\cdot; s)$ be convex and $L$-Lipschitz for all $s \in \mathbb{S}$. Let $x^\star = \operatorname*{argmin}_{x \in \mathcal{X}} f_{\mathcal{S}}(x)$ and assume $x^\star$ is in the interior of $\mathcal{X}$. Assume that $f_{\mathcal{S}}$ has $\kappa$-growth (Assumption 1) with $\kappa \geq \underline{\kappa} > 1$. For $\rho > 0$, the $\rho$-smooth inverse sensitivity mechanism $\mathsf{A}_{\mathrm{gr-inv}}(\mathcal{S})$ is $\varepsilon$-DP, and with probability at least $1 - \beta$ the output $\hat{x} = \mathsf{A}_{\mathrm{gr-inv}}(\mathcal{S})$ has
Moreover, setting $\rho = (L / \lambda)^{\frac{1}{\kappa - 1}}(d / n\varepsilon)^{\frac{\kappa}{\kappa - 1}}$ , we have
The rates of the inverse sensitivity mechanism in Theorem 1 provide two main insights into the landscape of the problem under growth conditions. First, these conditions allow improving the worst-case rate $d / n\varepsilon$ to $(d / n\varepsilon)^{\frac{\kappa}{\kappa - 1}}$ for pure $\varepsilon$ -DP, and therefore suggest that a better rate $(\sqrt{d\log(1 / \delta)} /n\varepsilon)^{\frac{\kappa}{\kappa - 1}}$ is possible for approximate $(\varepsilon ,\delta)$ -DP. Moreover, the general instance-optimality guarantees of this mechanism [2] hint that these are the optimal rates for our class of functions. In the sections to come, we validate these predictions by developing efficient algorithms that achieve these rates (for pure and approximate privacy), and prove matching lower bounds which demonstrate the optimality of these algorithms.
4 Efficient algorithms with optimal rates
While the previous section demonstrates that there exist algorithms that improve the rates for functions with growth, we pointed out that $\mathsf{A}_{\mathrm{gr-inv}}$ is computationally intractable in the general case. In this section, we develop efficient algorithms—i.e., algorithms implementable with gradient-based methods—that achieve the same convergence rates. Our algorithms build on the recent localization techniques that Feldman et al. [25] used to obtain optimal rates for DP-SCO with general convex functions. In Section 4.1, we use these techniques to develop private algorithms that achieve the optimal rates for (pure) DP-SCO with high probability, in contrast to existing results which bound the expected excess loss. These results are of independent interest.
In Section 4.2, we translate these results into convergence guarantees on privately optimizing convex functions with growth by solving a sequence of increasingly constrained SCO problems—the high-probability guarantees of Section 4.1 being crucial to our convergence analysis of these algorithms.
4.1 High-probability guarantees for convex DP-SCO
We first describe our algorithm (Algorithm 1), then analyze its performance under pure-DP (Proposition 1) and approximate-DP constraints (Proposition 2). Our analysis builds on tight high-probability generalization bounds for uniformly stable algorithms [24]. We defer the proofs to Appendix B.
Algorithm 1 Localization-based Algorithm
Require: Dataset $\mathcal{S} = (s_1, \dots, s_n) \in \mathbb{S}^n$ , constraint set $\mathcal{X}$ , step size $\eta$ , initial point $x_0$ , privacy parameters $(\varepsilon, \delta)$ ;
1: Set $k = \lceil \log n \rceil$ and $n_0 = n / k$
2: for $i = 1$ to $k$ do
3: Set $\eta_{i} = 2^{-4i}\eta$
4: Solve the following regularized ERM over $\mathcal{X}_i = \{x\in \mathcal{X} : \|x - x_{i-1}\|_2 \leq 2L\eta_i n_0\}$, where the regularizer makes the objective $\lambda_i$-strongly convex with $\lambda_i = 1/(\eta_i n_0)$:
$$F_i(x) := \frac{1}{n_0} \sum_{j = (i-1)n_0 + 1}^{i n_0} F(x; s_j) + \frac{1}{2\eta_i n_0} \|x - x_{i-1}\|_2^2$$
5: Let $\hat{x}_i$ be the output of the optimization algorithm
6: if $\delta = 0$ then
7: Set $\zeta_i \sim \mathsf{Lap}_d(\sigma_i)$ where $\sigma_i = 4L\eta_i\sqrt{d} / \varepsilon$
8: else if $\delta > 0$ then
9: Set $\zeta_i \sim \mathsf{N}(0, \sigma_i^2 I_d)$ where $\sigma_i = 4L\eta_i\sqrt{\log(1 / \delta)} / \varepsilon$
10: Set $x_i = \hat{x}_i + \zeta_i$
11: return the final iterate $x_{k}$
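A runnable sketch of Algorithm 1 on the toy loss $F(x; s) = \|x - s\|_2$ (so $L = 1$): each phase solves a localized ERM on a disjoint batch with a regularizer matching the strong-convexity parameter $\lambda_i = 1/(\eta_i n_0)$ used in the analysis, then adds Laplace or Gaussian noise with the scales from the listing. The choice of loss, the tail-averaged subgradient solver, and the iteration budget are our own illustrative choices:

```python
import numpy as np

def project_ball(x, center, radius):
    """Euclidean projection onto the ball {z : ||z - center|| <= radius}."""
    diff = x - center
    nrm = np.linalg.norm(diff)
    return x if nrm <= radius else center + radius * diff / nrm

def solve_reg_erm(batch, x_prev, radius, reg, iters=2000):
    """Projected subgradient descent on the regularized empirical objective
    mean_s ||x - s||_2 + (reg/2)||x - x_prev||^2 over the ball around x_prev;
    returns the tail-averaged iterate (standard for strongly convex SGD)."""
    x = x_prev.copy()
    avg, cnt = np.zeros_like(x), 0
    for t in range(1, iters + 1):
        diffs = x - batch
        nrms = np.maximum(np.linalg.norm(diffs, axis=1, keepdims=True), 1e-12)
        g = np.mean(diffs / nrms, axis=0) + reg * (x - x_prev)
        x = project_ball(x - g / (reg * t), x_prev, radius)  # step size 1/(reg*t)
        if t > iters // 2:
            avg, cnt = avg + x, cnt + 1
    return avg / cnt

def localization_sgd(S, x0, eta, eps, delta, L=1.0, rng=None):
    """Sketch of Algorithm 1: k phases on disjoint batches, each solving a
    localized regularized ERM and privatizing the solution with noise."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n, d = S.shape
    k = max(1, int(np.ceil(np.log(n))))
    n0 = n // k
    x = np.asarray(x0, dtype=float)
    for i in range(1, k + 1):
        eta_i = eta * 2.0 ** (-4 * i)
        batch = S[(i - 1) * n0 : i * n0]
        x_hat = solve_reg_erm(batch, x, radius=2 * L * eta_i * n0,
                              reg=1.0 / (eta_i * n0))
        if delta == 0:
            x = x_hat + rng.laplace(scale=4 * L * eta_i * np.sqrt(d) / eps, size=d)
        else:
            scale = 4 * L * eta_i * np.sqrt(np.log(1 / delta)) / eps
            x = x_hat + rng.normal(scale=scale, size=d)
    return x

rng = np.random.default_rng(0)
center = np.array([3.0, -2.0])
data = center + 0.05 * rng.standard_normal((1000, 2))
x_priv = localization_sgd(data, np.zeros(2), eta=1.0, eps=2.0, delta=0.1)
assert x_priv.shape == (2,)
```

Observe the localization at work: the phase radius $2L\eta_i n_0$ and the noise scale both shrink geometrically, so late phases refine the solution with very little added noise.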
Proposition 1. Let $\beta \leq 1 / (n + d)$ , $\mathrm{diam}_2(\mathcal{X}) \leq D$ and $F(x;s)$ be convex, $L$ -Lipschitz for all $s \in \mathbb{S}$ . Setting
then for $\delta = 0$ , Algorithm 1 is $\varepsilon$ -DP and has with probability $1 - \beta$
Similarly, by using a different choice for the parameters and noise distribution, we have the following guarantees for approximate $(\varepsilon, \delta)$ -DP.
Proposition 2. Let $\beta \leq 1 / (n + d)$ , $\mathrm{diam}_2(\mathcal{X}) \leq D$ and $F(x;s)$ be convex, $L$ -Lipschitz for all $s \in \mathbb{S}$ . Setting
then for $\delta > 0$ , Algorithm 1 is $(\varepsilon, \delta)$ -DP and has with probability $1 - \beta$
Before presenting our algorithms for functions with growth, we remark that the exact calculation of the ERM solution in step 5 of Algorithm 1 is not necessary; we chose it to clarify the main algorithmic ideas. However, as long as the returned solution $\hat{x}_i$ is sufficiently accurate, say $F_i(\hat{x}_i) - \min_{x \in \mathcal{X}_i} F_i(x) \leq \Delta$, we have that $\|\hat{x}_i - x_i^\star\|_2 \leq \sqrt{2\Delta / \lambda_i}$ where $\lambda_i = 1/(\eta_i n_0)$ and $x_i^\star$ minimizes $F_i$ over $\mathcal{X}_i$. This implies that as long as $\Delta \leq \frac{L}{n}$, the sensitivity of $\hat{x}_i$ is at most twice the sensitivity of the exact ERM solution $x_i^\star$, and hence multiplying the noise $\sigma_i$ by a factor of 2 is sufficient to guarantee privacy. As the set $\mathcal{X}_i$ is convex, finding a sufficiently accurate $\hat{x}_i$ can be done efficiently using standard methods for minimizing convex functions over convex domains.
4.2 Algorithms for DP-SCO with growth
Building on the algorithms of the previous section, we design algorithms that recover the rates of the inverse sensitivity mechanism for functions with growth, importantly without knowledge of the value of $\kappa$. Inspired by epoch-based algorithms from the optimization literature [31, 29], our algorithm iteratively applies the private procedures from the previous section. Crucially, the growth assumption allows us to reduce the diameter of the domain after each run, hence improving the overall excess loss by carefully choosing the hyper-parameters. We provide full details in Algorithm 2.
Algorithm 2 Epoch-based algorithms for $\kappa$ -growth
Require: Dataset $S = (s_1,\dots ,s_n)\in \mathbb{S}^n$, convex set $\mathcal{X}$, initial point $x_0$, number of iterations $T$, privacy parameters $(\varepsilon ,\delta)$.
1: Set $n_0 = n / T$ and $D_0 = \mathrm{diam}_2(\mathcal{X})$
2: if $\delta = 0$ then
3: Set $\eta_0 = \frac{D_0}{2L}\min \left(\frac{1}{\sqrt{n_0\log(n_0)\log(1 / \beta)}},\frac{\varepsilon}{d\log(1 / \beta)}\right)$
4: else if $\delta >0$ then
5: $\eta_0 = \frac{D_0}{2L}\min \left(\frac{1}{\sqrt{n_0\log(n_0)\log(1 / \beta)}},\frac{\varepsilon}{\sqrt{d\log(1 / \delta)}\log(1 / \beta)}\right)$
6: for $i = 0$ to $T - 1$ do
7: Let $S_i = (s_{1 + i n_0},\ldots ,s_{(i+1) n_0})$
8: Set $D_{i} = 2^{-i}D_{0}$ and $\eta_{i} = 2^{-i}\eta_{0}$
9: Set $\mathcal{X}_i = \{x\in \mathcal{X} : \|x - x_i\|_2 \leq D_i\}$
10: Run Algorithm 1 on dataset $S_i$ with starting point $x_i$, privacy parameters $(\varepsilon ,\delta)$, domain $\mathcal{X}_i$ (with diameter $D_i$), step size $\eta_i$
11: Let $x_{i + 1}$ be the output of the private procedure
12: return $x_{T}$
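The outer loop of Algorithm 2 can be sketched compactly, here with a toy one-dimensional inner solver standing in for Algorithm 1 (the absolute loss, the exact-median inner solver, and its noise calibration are our own illustrative choices, not the paper's):

```python
import numpy as np

def epoch_solver(S, x0, D0, T, inner, rng):
    """Outer loop of Algorithm 2: T epochs on disjoint batches, each solved
    over a ball whose radius halves, D_i = 2^{-i} * D_0."""
    n0 = len(S) // T
    x, D = x0, D0
    for i in range(T):
        batch = S[i * n0 : (i + 1) * n0]
        x = inner(batch, center=x, radius=D, rng=rng)
        D /= 2
    return x

def noisy_erm_1d(batch, center, radius, rng, eps=1.0):
    """Toy stand-in for Algorithm 1: exact ERM of the absolute loss over
    [center - radius, center + radius], plus Laplace noise whose scale
    shrinks with the radius (mimicking the localized sensitivity)."""
    x_hat = float(np.clip(np.median(batch), center - radius, center + radius))
    return x_hat + rng.laplace(scale=radius / (eps * max(1, len(batch))))

rng = np.random.default_rng(0)
theta = 0.7
S = theta + 0.1 * rng.standard_normal(4000)   # absolute loss: minimizer near theta
x_priv = epoch_solver(S, x0=0.0, D0=4.0, T=8, inner=noisy_erm_1d, rng=rng)
assert abs(x_priv - theta) < 0.2
```

The key design point mirrors the analysis: because the constraint ball halves each epoch, the sensitivity (and hence the injected noise) halves as well, so early epochs locate the optimum coarsely and late epochs refine it almost for free.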
The following theorem summarizes our main upper bound for DP-SCO with growth in the pure privacy model, recovering the rates of the inverse sensitivity mechanism in Section 3. We defer the proof to Appendix B.3.
Theorem 2. Let $\beta \leq 1 / (n + d)$ , $\operatorname{diam}_2(\mathcal{X}) \leq D$ and $F(x; s)$ be convex, $L$ -Lipschitz for all $s \in \mathbb{S}$ . Assume that $f$ has $\kappa$ -growth (Assumption 1) with $\kappa \geq \underline{\kappa} > 1$ . Setting $T = \left\lceil \frac{2\log n}{\underline{\kappa} - 1} \right\rceil$ , Algorithm 2 is $\varepsilon$ -DP and has with probability $1 - \beta$
where $\tilde{O}$ hides logarithmic factors depending on $n$ and $d$ .
Sketch of the proof. The main challenge is showing that the iterate achieves good risk without knowledge of $\kappa$. Let us denote by $D \cdot \rho$ the error guarantee of Proposition 1 (or Proposition 2 for approximate DP). At each stage $i$, as long as $x^{\star} = \operatorname*{argmin}_{x \in \mathcal{X}} f(x)$ belongs to $\mathcal{X}_i$, the excess loss is of order $D_i \cdot \rho$ and thus decreases exponentially fast with $i$. The difficulty is that, without knowledge of $\kappa$, we do not know the index $i_0$ (roughly $\frac{\log_2 n}{\kappa - 1}$) after which $x^{\star} \notin \mathcal{X}_j$ for $j \geq i_0$ and the guarantees become meaningless with respect to the original problem. However, in the stages after $i_0$, as the constraint set becomes very small, we upper bound the variations in function values $f(x_{j+1}) - f(x_j)$ and show that the sub-optimality cannot increase (overall) by more than $O(D_{i_0} \cdot \rho)$, thus achieving the optimal rate of stage $i_0$.
Moreover, we can improve the dependence on the dimension for approximate $(\varepsilon, \delta)$ -DP, resulting in the following bounds.
Theorem 3. Let $\beta \leq 1 / (n + d)$, $\mathrm{diam}_2(\mathcal{X}) \leq D$ and $F(x; s)$ be convex, $L$-Lipschitz for all $s \in \mathbb{S}$. Assume that $f$ has $\kappa$-growth (Assumption 1) with $\kappa \geq \underline{\kappa} > 1$. Setting $T = \left\lceil \frac{2\log n}{\underline{\kappa} - 1} \right\rceil$ and $\delta > 0$,
Algorithm 2 is $(\varepsilon, \delta)$ -DP and has with probability $1 - \beta$
where $\widetilde{O}$ hides logarithmic factors depending on $n$ and $d$ .
5 Lower bounds
In this section, we develop (minimax) lower bounds for the problem of SCO with $\kappa$-growth under privacy constraints. Note that taking $\varepsilon \to \infty$ yields lower bounds for the unconstrained minimax risk. For a sample space $\mathbb{S}$ and collection of distributions $\mathcal{P}$ over $\mathbb{S}$, we define the function class $\mathcal{F}^{\kappa}(\mathcal{P})$ as the set of convex functions from $\mathbb{R}^d$ to $\mathbb{R}$ that are $L$-Lipschitz and have $\kappa$-growth (Assumption 1). We define the constrained minimax risk [6]
$$\mathfrak{M}_n(\mathcal{X}, \mathcal{P}, \mathcal{F}^{\kappa}, \varepsilon, \delta) := \inf_{\mathsf{A} \in \mathcal{A}^{\varepsilon, \delta}} \; \sup_{P \in \mathcal{P},\, F \in \mathcal{F}^{\kappa}(\mathcal{P})} \mathbb{E}\left[ f(\mathsf{A}(S_1^n)) - \inf_{x \in \mathcal{X}} f(x) \right],$$
where $\mathcal{A}^{\varepsilon, \delta}$ is the collection of $(\varepsilon, \delta)$-DP mechanisms from $\mathbb{S}^n$ to $\mathcal{X}$. When clear from context, we omit the dependency on $\mathcal{P}$ of the function class and simply write $\mathcal{F}^\kappa$. We also forego the dependence on $\delta$ when referring to pure-DP constraints, i.e. $\mathfrak{M}_n(\mathcal{X}, \mathcal{P}, \mathcal{F}^\kappa, \varepsilon, \delta = 0) \eqqcolon \mathfrak{M}_n(\mathcal{X}, \mathcal{P}, \mathcal{F}^\kappa, \varepsilon)$. We now proceed to prove tight lower bounds for $\varepsilon$-DP in Section 5.1 and $(\varepsilon, \delta)$-DP in Section 5.2.
5.1 Lower bounds for pure $\varepsilon$ -DP
Although in Section 4 we show that the same algorithm achieves the optimal upper bounds for all values of $\kappa > 1$ , the landscape of the problem is more subtle for the lower bounds and we need to delineate two different cases to obtain tight lower bounds. We begin with $\kappa \geq 2$ , which corresponds to uniform convexity and enjoys properties that make the problem easier (e.g., closure under summation or addition of linear terms). The second case, $1 < \kappa < 2$ , corresponds to sharper growth and requires a different hard instance to satisfy the growth condition.
$\kappa$-growth with $\kappa \geq 2$. We begin by developing lower bounds under pure DP for $\kappa \geq 2$.
Theorem 4 (Lower bound for $\varepsilon$-DP, $\kappa \geq 2$). Let $d \geq 1$, $\mathcal{X} = \mathbb{B}_2^d(R)$, $\mathbb{S} = \{\pm e_j\}_{j \leq d}$, $\kappa \geq 2$ and $n \in \mathbb{N}$. Let $\mathcal{P}$ be the set of distributions on $\mathbb{S}$. Assume that $L \geq \lambda 2^{\kappa} R^{\kappa - 1}$.
The following lower bound holds
First of all, note that $L \geq \lambda 2^{\kappa} R^{\kappa - 1}$ is not an overly restrictive assumption. Indeed, for an arbitrary $(\lambda, \kappa)$-uniformly convex and $L$-Lipschitz function, it always holds that $L \geq \frac{\lambda}{2} R^{\kappa - 1}$; the assumption is thus equivalent to assuming $\kappa = \Theta(1)$. Note that when $\kappa \gg 1$, the standard $n^{-1/2} + d / (n\varepsilon)$ lower bound holds. We present the proof in Appendix C.1.1 and preview the main ideas here.
Sketch of the proof. Our lower bound hinges on the collection of functions $F(x; s) \coloneqq a\kappa^{-1}\|x\|_2^\kappa + b\langle x, s \rangle$ for $a, b \geq 0$ to be chosen later. These functions are $\kappa$-uniformly convex for any $s \in \mathbb{S}$ [39, Lemma 4], and in turn, so is the population function $f$. We proceed as follows: we first prove an information-theoretic (non-private) lower bound (Theorem 8 in Appendix C.1.1), which provides the statistical term in (7). With the same family of functions, we exhibit a collection of datasets and prove by contradiction that if an estimator were to optimize below a certain error, it would violate $\varepsilon$-DP—this yields a lower bound on ERM for our function class (Theorem 9 in Appendix C.1.1). We conclude by proving a reduction from SCO to ERM in Proposition 4.
$\kappa$-growth with $\kappa \in (1,2]$. As the construction of the hard instance is more intricate for $\kappa < 2$, we provide a one-dimensional lower bound and leave the high-dimensional case to future work. In this case we directly obtain the result via a private version of Le Cam's method [44, 42, 6], albeit with a different family of functions.
The issue with the construction of the previous section is that the function does not exhibit sharp growth for $\kappa < 2$. Indeed, the added linear term shifts the minimum away from 0, where the function is differentiable; as a result, it locally behaves as a quadratic and only achieves growth $\kappa = 2$. To establish the lower bound, we instead consider a sample function $F$ that has growth exactly 1 on one side of its minimum and $\kappa$ on the other. This yields the following result.
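To illustrate the obstruction, consider a one-dimensional instance of the previous construction (the constants $a, b$ here are illustrative):

```latex
% Take f(x) = (a/\kappa)\,|x|^{\kappa} + b\,x with a > 0 and b < 0 small.
% The first-order condition a\,x^{\kappa-1} + b = 0 places the minimizer at
x^\star = \left(-\tfrac{b}{a}\right)^{\frac{1}{\kappa-1}} > 0.
% On (0, \infty) the function is twice differentiable, with
f''(x^\star) = a\,(\kappa - 1)\,(x^\star)^{\kappa - 2} \in (0, \infty),
% so a Taylor expansion around x^\star gives
f(x) - f(x^\star) = \tfrac{1}{2}\,f''(x^\star)\,(x - x^\star)^2 + o\!\left((x - x^\star)^2\right).
% The local growth is therefore quadratic (exponent 2) near x^\star,
% regardless of the exponent \kappa < 2 appearing in the definition of f.
```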
Theorem 5 (Lower bound for $\varepsilon$-DP, $\kappa \in (1,2]$). Let $d = 1$, $\mathbb{S} = \{-1, +1\}$, $\kappa \in (1,2]$, $\lambda = 1$, $L = 2$, and $n \in \mathbb{N}$. There exists a collection of distributions $\mathcal{P}$ such that, whenever $n\varepsilon \geq 1/\sqrt{3}$, it holds that
5.2 Lower bounds under approximate privacy constraints
We conclude by providing lower bounds under approximate privacy constraints, demonstrating the optimality of the risk bound of Theorem 3. We prove the result via a reduction: we show that solving ERM with $\kappa$-growth to error $\Delta$ implies solving arbitrary convex ERM to error $\phi(\Delta)$. Given that a lower bound of $\Omega(\sqrt{d} / (n\varepsilon))$ holds for convex ERM, a lower bound of $\phi^{-1}(\sqrt{d} / (n\varepsilon))$ then holds for ERM with $\kappa$-growth. For this reduction to hold, however, we require $\kappa \geq 2$. Furthermore, we take $\kappa$ to be roughly a constant; when $\kappa$ is too large, standard lower bounds for general convex functions apply.
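Concretely, with $\phi(\Delta) = \Delta^{(\kappa-1)/\kappa}$ (the exponent that the reduction of Proposition 3 produces), inverting the reduction recovers the rate of Theorem 3; a sketch of the arithmetic, constants omitted:

```latex
% If ERM with \kappa-growth were solvable to error \Delta, then general
% convex ERM would be solvable to error \phi(\Delta) = \Delta^{(\kappa-1)/\kappa}.
% The known lower bound for general convex ERM forces
\Delta^{\frac{\kappa - 1}{\kappa}} \;\gtrsim\; \frac{\sqrt{d}}{n\varepsilon}
\quad\Longrightarrow\quad
\Delta \;\gtrsim\; \left(\frac{\sqrt{d}}{n\varepsilon}\right)^{\frac{\kappa}{\kappa - 1}},
% matching the upper bound (\sqrt{d}/(n\varepsilon))^{\kappa/(\kappa-1)}.
```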
Theorem 6 (Private lower bound for $(\varepsilon, \delta)$-DP). Let $\kappa \geq 2$ be such that $\kappa = \Theta(1)$, and let $\mathcal{X} = \mathbb{B}_2^d(D)$. Let $d \geq 1$ and $\mathbb{S} = \{\pm 1/\sqrt{d}\}^d$. Assume that $n\varepsilon = \Omega(\sqrt{d})$; then for any $(\varepsilon, \delta)$-DP mechanism $\mathsf{A}$, there exist $\lambda > 0$, $F$, and $S \subset \mathbb{S}$ such that
Theorem 6 implies that the same lower bound (up to logarithmic factors) applies to SCO via the reduction of [8, Appendix C]. Before proving the theorem, let us state (and prove in Appendix C.2) the following reduction: if an $(\varepsilon, \delta)$-DP algorithm achieves excess error (roughly) $\Delta$ on ERM for any function with $\kappa$-growth, then there exists an $(\varepsilon, \delta)$-DP algorithm that achieves error $\Delta^{(\kappa - 1) / \kappa}$ for any convex function. We construct the latter by iteratively solving ERM problems with geometrically increasing $| \cdot |_2^\kappa$-regularization toward the previous iterate, which ensures each objective has $\kappa$-growth.
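A schematic of this iterative construction follows; the regularization weights $\lambda_t$ and the uniform split of the privacy budget are illustrative here, and the exact schedule appears in Appendix C.2:

```latex
% Round t = 1, \dots, k, starting from an arbitrary x_0 \in \mathcal{X}:
g_t(x) \;\coloneqq\; \frac{1}{n} \sum_{s \in S} G(x; s) \;+\; \lambda_t\,\|x - x_{t-1}\|_2^{\kappa},
\qquad
x_t \;\coloneqq\; \mathsf{A}\!\left(g_t,\; \varepsilon / k,\; \delta / k\right).
% Since \kappa \geq 2, the regularizer is uniformly convex, so each g_t
% exhibits \kappa-growth with a constant proportional to \lambda_t, and
% \mathsf{A} applies. The weights \lambda_t change geometrically across
% rounds so that the regularization bias shrinks while the growth
% assumption is preserved; privacy of the composed mechanism \mathsf{A}'
% outputting x_k follows from basic composition over the k rounds.
```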
Proposition 3 (Solving ERM with $\kappa$-growth implies solving any convex ERM). Let $\kappa \geq 2$. Assume there exists an $(\varepsilon, \delta)$-DP mechanism $\mathsf{A}$ such that for any $L$-Lipschitz loss $G$ on $\mathcal{V}$ and dataset $S$ such that $g_{S}(x) \coloneqq \frac{1}{n} \sum_{s \in S} G(x; s)$ exhibits $(\lambda, \kappa)$-growth, the mechanism achieves excess loss
Then, we can construct an $(\varepsilon, \delta)$ -DP mechanism $\mathsf{A}'$ such that for any $L$ -Lipschitz loss $f$ , the mechanism achieves excess loss
where $k$ is the smallest integer such that $k \geq \log \left[\frac{\kappa^{\frac{1}{\kappa - 1}} L^{\frac{\kappa}{\kappa - 1}}}{2^{2\kappa - 3}\,\Delta(n, L, \varepsilon / k, \delta / k)}\right]$.
With this proposition in hand, the proof of Theorem 6 follows directly, as Bassily et al. [7] prove a lower bound of $\Omega(\sqrt{d} / (n\varepsilon))$ for ERM under $(\varepsilon, \delta)$-DP.
Discussion
In this work, we develop private algorithms that adapt to the growth of the function at hand, achieving the convergence rate corresponding to the "easiest" sub-class the function belongs to. However, the picture is not yet complete. First, there remain gaps in our theoretical understanding, the most interesting being $\kappa = 1$: on such functions, appropriate (non-private) optimization algorithms achieve linear convergence [43], raising the question of whether we can achieve exponentially small privacy cost in our setting. Finally, while our optimality guarantees are more fine-grained than the usual minimax results over convex functions, they remain contingent on a predetermined choice of sub-classes. Studying more general notions of adaptivity is an important future direction in private optimization.
Acknowledgments
The authors would like to thank Karan Chadha and Gary Cheng for comments on an early version of the draft. HA, DL, and JCD were supported by the NSF under CAREER Award CCF-1553086 and HDR 1934578 (Stanford Data Science Collaboratory), Office of Naval Research YIP Award N00014-19-2288, and the Stanford DAWN Consortium.
Competing Interests
JCD has a consulting relationship with Apple. HA interned at Apple during the summers of 2019, 2020, and 2021. DL interned at Google during the summer of 2020.
References
[1] M. Abadi, A. Chu, I. Goodfellow, B. McMahan, I. Mironov, K. Talwar, and L. Zhang. Deep learning with differential privacy. In 23rd ACM Conference on Computer and Communications Security (ACM CCS), pages 308-318, 2016.
[2] H. Asi and J. Duchi. Near instance-optimality in differential privacy. arXiv:2005.10630 [cs.CR], 2020.
[3] H. Asi and J. C. Duchi. Instance-optimality in differential privacy via approximate inverse sensitivity mechanisms. In Advances in Neural Information Processing Systems 33, 2020.
[4] H. Asi, J. Duchi, A. Fallah, O. Javidbakht, and K. Talwar. Private adaptive gradient methods for convex optimization. In Proceedings of the 38th International Conference on Machine Learning, pages 383-392, 2021.
[5] H. Asi, V. Feldman, T. Koren, and K. Talwar. Private stochastic convex optimization: Optimal rates in $\ell_1$ geometry. In Proceedings of the 38th International Conference on Machine Learning, 2021.
[6] R. F. Barber and J. C. Duchi. Privacy and statistical risk: Formalisms and minimax bounds. arXiv:1412.4451 [math.ST], 2014.
[7] R. Bassily, A. Smith, and A. Thakurta. Private empirical risk minimization: Efficient algorithms and tight error bounds. In 55th Annual Symposium on Foundations of Computer Science, pages 464-473, 2014.
[8] R. Bassily, V. Feldman, K. Talwar, and A. Thakurta. Private stochastic convex optimization with optimal rates. In Advances in Neural Information Processing Systems 32, 2019.
[9] R. Bassily, V. Feldman, C. Guzmán, and K. Talwar. Stability of stochastic gradient descent on nonsmooth convex losses. In Advances in Neural Information Processing Systems 33, 2020.
[10] A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31:167-175, 2003.
[11] J. Bolte, T. P. Nguyen, J. Peypouquet, and B. Suter. From error bounds to the complexity of first-order descent methods for convex functions. Mathematical Programming, 165:471-507, 2017.
[12] L. Bottou, F. Curtis, and J. Nocedal. Optimization methods for large-scale learning. SIAM Review, 60(2):223-311, 2018.
[13] M. Braverman, A. Garg, T. Ma, H. L. Nguyen, and D. P. Woodruff. Communication lower bounds for statistical estimation problems via a distributed data processing inequality. In Proceedings of the Forty-Eighth Annual ACM Symposium on the Theory of Computing, 2016. URL https://arxiv.org/abs/1506.07216.
[14] T. Cai and M. Low. A framework for estimating convex functions. Statistica Sinica, 25: 423-456, 2015.
[15] S. Chatterjee, J. Duchi, J. Lafferty, and Y. Zhu. Local minimax complexity of stochastic convex optimization. In Advances in Neural Information Processing Systems 29, 2016.
[16] K. Chaudhuri, C. Monteleoni, and A. D. Sarwate. Differentially private empirical risk minimization. Journal of Machine Learning Research, 12:1069-1109, 2011.
[17] J. C. Duchi. Information theory and statistics. Lecture Notes for Statistics 311/EE 377, Stanford University, 2019. URL http://web.stanford.edu/class/stats311/lecture-notes.pdf. Accessed May 2019.
[18] J. C. Duchi and F. Ruan. Asymptotic optimality in stochastic optimization. Annals of Statistics, 49(1):21-48, 2021.
[19] J. C. Duchi, M. I. Jordan, and M. J. Wainwright. Local privacy and statistical minimax rates. In 54th Annual Symposium on Foundations of Computer Science, pages 429-438, 2013.
[20] C. Dwork and A. Roth. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3 & 4):211-407, 2014.
[21] C. Dwork, K. Kenthapadi, F. McSherry, I. Mironov, and M. Naor. Our data, ourselves: Privacy via distributed noise generation. In Advances in Cryptology (EUROCRYPT 2006), 2006.
[22] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In Proceedings of the Third Theory of Cryptography Conference, pages 265-284, 2006.
[23] C. Dwork, M. Hardt, T. Pitassi, O. Reingold, and R. Zemel. Fairness through awareness. In Innovations in Theoretical Computer Science (ITCS), pages 214–226, 2012.
[24] V. Feldman and J. Vondrak. High probability generalization bounds for uniformly stable algorithms with nearly optimal rate. In Proceedings of the Thirty Second Annual Conference on Computational Learning Theory, pages 1270-1279, 2019.
[25] V. Feldman, T. Koren, and K. Talwar. Private stochastic convex optimization: Optimal rates in linear time. In Proceedings of the Fifty-Second Annual ACM Symposium on the Theory of Computing, 2020.
[26] A. Garg, T. Ma, and H. L. Nguyen. On communication cost of distributed statistical estimation and dimensionality. In Advances in Neural Information Processing Systems 27, 2014.
[27] M. Hardt and K. Talwar. On the geometry of differential privacy. In Proceedings of the Forty-Second Annual ACM Symposium on the Theory of Computing, pages 705-714, 2010. URL http://arxiv.org/abs/0907.3754.
[28] T. Hashimoto, M. Srivastava, H. Namkoong, and P. Liang. Fairness without demographics in repeated loss minimization. In Proceedings of the 35th International Conference on Machine Learning, 2018.
[29] E. Hazan and S. Kale. An optimal algorithm for stochastic strongly convex optimization. In Proceedings of the Twenty Fourth Annual Conference on Computational Learning Theory, 2011. URL http://arxiv.org/abs/1006.2425.
[30] C. Jin, P. Netrapalli, R. Ge, S. M. Kakade, and M. I. Jordan. A short note on concentration inequalities for random vectors with subgaussian norm. arXiv:1902.03736 [math.PR], 2019.
[31] A. Juditsky and Y. Nesterov. Primal-dual subgradient methods for minimizing uniformly convex functions. URL http://hal.archives-ouvertes.fr/docs/00/50/89/33/PDF/Strong-hal.pdf, 2010.
[32] A. Juditsky and Y. Nesterov. Deterministic and stochastic primal-dual subgradient algorithms for uniformly convex minimization. Stochastic Systems, 4(1):44-80, 2014.
[33] D. Levy and J. C. Duchi. Necessary and sufficient geometries for gradient methods. In Advances in Neural Information Processing Systems 32, 2019.
[34] D. Levy, Z. Sun, K. Amin, S. Kale, A. Kulesza, M. Mohri, and A. T. Suresh. Learning with user-level privacy. Advances in Neural Information Processing Systems 34, 2021. URL https://arxiv.org/abs/2102.11845.
[35] H. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, 2017.
[36] M. Mitzenmacher and E. Upfal. Probability and computing: Randomized algorithms and probabilistic analysis. Cambridge University Press, 2005.
[37] A. Nemirovski and D. Yudin. Problem Complexity and Method Efficiency in Optimization. Wiley, 1983.
[38] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574-1609, 2009.
[39] Y. Nesterov. Accelerating the cubic regularization of Newton's method on convex problems. Mathematical Programming, 112(1):159-181, 2008.
[40] A. Ramdas and A. Singh. Optimal rates for stochastic convex optimization under tsybakov noise condition. In Proceedings of the 30th International Conference on Machine Learning, pages 365-373, 2013.
[41] A. Smith and A. Thakurta. Differentially private feature selection via stability arguments, and the robustness of the Lasso. In Proceedings of the Twenty Sixth Annual Conference on Computational Learning Theory, pages 819-850, 2013. URL http://proceedings.mlr.press/v30/Guha13.html.
[42] M. J. Wainwright. High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge University Press, 2019.
[43] Y. Xu, Q. Lin, and T. Yang. Stochastic convex optimization: Faster local growth implies faster global convergence. In Proceedings of the 34th International Conference on Machine Learning, pages 3821-3830, 2017.
[44] B. Yu. Assouad, Fano, and Le Cam. In Festschrift for Lucien Le Cam, pages 423-435. Springer-Verlag, 1997.
[45] Y. Zhang, J. C. Duchi, M. I. Jordan, and M. J. Wainwright. Information-theoretic lower bounds for distributed estimation with communication constraints. In Advances in Neural Information Processing Systems 26, 2013.
[46] Y. Zhu, S. Chatterjee, J. Duchi, and J. Lafferty. Local minimax complexity of stochastic convex optimization. In Advances in Neural Information Processing Systems 29, 2016.
Checklist
- For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes]
(b) Did you describe the limitations of your work? [Yes]
(c) Did you discuss any potential negative societal impacts of your work? [Yes]
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
- If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [Yes]
(b) Did you include complete proofs of all theoretical results? [Yes]
- If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [N/A]
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [N/A]
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [N/A]
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [N/A]
- If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [N/A]
(b) Did you mention the license of the assets? [N/A]
(c) Did you include any new assets either in the supplemental material or as a URL? [N/A]
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating? [N/A]
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]
- If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]
